path: root/drivers/mfd/tps65912-spi.c
Commit message | Author | Date | Files | Lines

* mfd: tps65912: Fix variable name for SPI remove | Andrew F. Davis | 2017-04-27 | 1 | -2/+2

  The SPI interface is mostly a copy/paste from the I2C interface with some
  minor renaming for consistency. "Client" is an I2C-specific term that was
  left in the SPI remove path; rename this here.

  Signed-off-by: Andrew F. Davis <afd@ti.com>
  Signed-off-by: Lee Jones <lee.jones@linaro.org>

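  For reference, the rename described here is of the shape sketched below; apart
  from the "client" to "spi" naming, the function body and the struct tps65912
  type (taken to come from the driver's core header) are assumptions, not code
  quoted from the driver:

      #include <linux/spi/spi.h>

      /* sketch only -- the real remove path also tears down the MFD core */
      static int tps65912_spi_remove(struct spi_device *spi)   /* parameter was "client" */
      {
              struct tps65912 *tps = spi_get_drvdata(spi);      /* argument was "client" */

              /* ... core cleanup elided in this sketch ... */
              return 0;
      }
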
* mfd: tps65912: Add driver for the TPS65912 PMIC | Andrew F. Davis | 2016-02-11 | 1 | -0/+78

  This patch adds support for TPS65912 PMIC MFD core. It provides
  communication through the I2C and SPI interfaces. It contains the
  following components:

   - Regulators
   - GPIO controller

  Signed-off-by: Andrew F. Davis <afd@ti.com>
  Signed-off-by: Lee Jones <lee.jones@linaro.org>

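  As a rough illustration of what the SPI half of such an I2C/SPI MFD split
  usually looks like (regmap-based; the tps65912_* names and struct fields
  below are assumptions, not code quoted from this driver):

      #include <linux/err.h>
      #include <linux/module.h>
      #include <linux/regmap.h>
      #include <linux/spi/spi.h>
      /* the core header is assumed to declare struct tps65912,
       * tps65912_regmap_config and tps65912_device_init() */

      static int tps65912_spi_probe(struct spi_device *spi)
      {
              struct tps65912 *tps;

              tps = devm_kzalloc(&spi->dev, sizeof(*tps), GFP_KERNEL);
              if (!tps)
                      return -ENOMEM;

              spi_set_drvdata(spi, tps);
              tps->dev = &spi->dev;
              tps->irq = spi->irq;

              /* register access goes through regmap, so the core stays bus-agnostic */
              tps->regmap = devm_regmap_init_spi(spi, &tps65912_regmap_config);
              if (IS_ERR(tps->regmap))
                      return PTR_ERR(tps->regmap);

              /* hand off to the shared core, which registers the regulator/GPIO cells */
              return tps65912_device_init(tps);
      }
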
* mfd: tps65912: Remove old driver in preparation for new driver | Andrew F. Davis | 2016-02-11 | 1 | -140/+0

  The old tps65912 driver is being replaced; delete the old driver.

  Signed-off-by: Andrew F. Davis <afd@ti.com>
  Acked-by: Mark Brown <broonie@kernel.org>
  Acked-by: Linus Walleij <linus.walleij@linaro.org>
  Signed-off-by: Lee Jones <lee.jones@linaro.org>

* spi: Drop owner assignment from spi_drivers | Andrew F. Davis | 2015-10-28 | 1 | -1/+0

  An spi_driver does not need to set an owner; it will be populated by the
  driver core.

  Signed-off-by: Andrew F. Davis <afd@ti.com>
  Acked-by: Jonathan Cameron <jic23@kernel.org>
  Signed-off-by: Mark Brown <broonie@kernel.org>

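  The change is purely mechanical: one designated initializer is deleted and
  the driver core fills the field in at registration time (the driver name and
  callbacks below are assumptions, not quotes from the file):

      static struct spi_driver tps65912_spi_driver = {
              .driver = {
                      .name   = "tps65912",
                      /* .owner = THIS_MODULE,  -- dropped; set by the driver
                       * core when the spi_driver is registered */
              },
              .probe  = tps65912_spi_probe,
              .remove = tps65912_spi_remove,
      };
      module_spi_driver(tps65912_spi_driver);
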
* mfd: tps65912-spi: Remove unused variable | Sachin Kamat | 2014-07-28 | 1 | -2/+1

  ‘rx_buf’ is not used in this function.

  Signed-off-by: Sachin Kamat <sachin.kamat@samsung.com>
  Signed-off-by: Lee Jones <lee.jones@linaro.org>

* mfd: tps65912: Convert to managed resources for allocating memory | Lee Jones | 2013-06-13 | 1 | -1/+2

  Saves on code and simplifies the driver, as these resources are now
  tracked and freed automatically when the driver is released.

  Signed-off-by: Lee Jones <lee.jones@linaro.org>
  Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>

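  Concretely, a conversion like this swaps an explicit kzalloc()/kfree() pair
  for a device-managed allocation that is released together with the device
  (variable and field names below are assumptions):

      /* before: allocation must be freed in every error path and in remove() */
      tps65912 = kzalloc(sizeof(struct tps65912), GFP_KERNEL);

      /* after: freed automatically when the driver is unbound */
      tps65912 = devm_kzalloc(&spi->dev, sizeof(struct tps65912), GFP_KERNEL);
      if (!tps65912)
              return -ENOMEM;
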
* mfd: remove use of __devexit | Bill Pemberton | 2012-11-28 | 1 | -1/+1

  CONFIG_HOTPLUG is going away as an option so __devexit is no longer
  needed.

  Signed-off-by: Bill Pemberton <wfp5p@virginia.edu>
  Cc: Srinidhi Kasagar <srinidhi.kasagar@stericsson.com>
  Cc: Peter Tyser <ptyser@xes-inc.com>
  Cc: Daniel Walker <dwalker@fifo99.com>
  Cc: Bryan Huntsman <bryanh@codeaurora.org>
  Acked-by: David Brown <davidb@codeaurora.org>
  Acked-by: Linus Walleij <linus.walleij@linaro.org>
  Acked-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

* mfd: remove use of __devinit | Bill Pemberton | 2012-11-28 | 1 | -1/+1

  CONFIG_HOTPLUG is going away as an option so __devinit is no longer
  needed.

  Signed-off-by: Bill Pemberton <wfp5p@virginia.edu>
  Cc: Srinidhi Kasagar <srinidhi.kasagar@stericsson.com>
  Cc: Peter Tyser <ptyser@xes-inc.com>
  Cc: Daniel Walker <dwalker@fifo99.com>
  Cc: Bryan Huntsman <bryanh@codeaurora.org>
  Acked-by: David Brown <davidb@codeaurora.org>
  Acked-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

* mfd: remove use of __devexit_p | Bill Pemberton | 2012-11-28 | 1 | -1/+1

  CONFIG_HOTPLUG is going away as an option so __devexit_p is no longer
  needed.

  Signed-off-by: Bill Pemberton <wfp5p@virginia.edu>
  Cc: Srinidhi Kasagar <srinidhi.kasagar@stericsson.com>
  Cc: Peter Tyser <ptyser@xes-inc.com>
  Cc: Daniel Walker <dwalker@fifo99.com>
  Cc: Bryan Huntsman <bryanh@codeaurora.org>
  Acked-by: David Brown <davidb@codeaurora.org>
  Acked-by: Linus Walleij <linus.walleij@linaro.org>
  Acked-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
  Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

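  Taken together, these three annotation-removal commits simply strip the
  now-meaningless section markers; for a file like this the pattern is as
  sketched below (the function names are assumptions):

      /* before */
      static int __devinit tps65912_spi_probe(struct spi_device *spi);
      static int __devexit tps65912_spi_remove(struct spi_device *spi);
      /* ... */ .remove = __devexit_p(tps65912_spi_remove),

      /* after: plain functions and a plain function pointer */
      static int tps65912_spi_probe(struct spi_device *spi);
      static int tps65912_spi_remove(struct spi_device *spi);
      /* ... */ .remove = tps65912_spi_remove,
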
* mfd: Remove redundant spi driver bus initialization | Lars-Peter Clausen | 2012-01-09 | 1 | -1/+0

  In ancient times it was necessary to manually initialize the bus field of
  an spi_driver to spi_bus_type. These days this is done in
  spi_register_driver(), so we can drop the manual assignment.

  The patch was generated using the following coccinelle semantic patch:

  // <smpl>
  @@
  identifier _driver;
  @@
  struct spi_driver _driver = {
  	.driver = {
  -		.bus = &spi_bus_type,
  	},
  };
  // </smpl>

  Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
  Acked-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
  Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>

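  Applied to a driver like this one, the semantic patch leaves a definition
  that relies on the SPI core to point .driver.bus at spi_bus_type when the
  driver is registered (a minimal sketch; the name is assumed):

      static struct spi_driver tps65912_spi_driver = {
              .driver = {
                      .name = "tps65912",
                      /* no .bus member: the SPI core sets it on registration */
              },
      };
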
* mfd: tps65912: Add new mfd device | Margarita Olaya | 2011-07-31 | 1 | -0/+142

  The tps65912 chip is a power management IC. It contains the following
  components:

   - Regulators
   - GPIO controller

  The core driver is registered as a platform driver; it provides
  communication through I2C and SPI interfaces.

  Signed-off-by: Margarita Olaya Cabrera <magi@slimlogic.co.uk>
  Acked-by: Samuel Ortiz <sameo@linux.intel.com>
  Acked-by: Liam Girdwood <lrg@ti.com>
  Signed-off-by: Samuel Ortiz <sameo@linux.intel.com>

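  The SPI glue in a split like this typically just implements the core
  driver's register accessors on top of the SPI bus. Purely as an
  illustration (this is not the original driver's wire format), a single-byte
  register read could be built on spi_write_then_read():

      #include <linux/spi/spi.h>

      /* illustrative only: one address byte out, one data byte back */
      static int example_reg_read(struct spi_device *spi, u8 reg, u8 *val)
      {
              return spi_write_then_read(spi, &reg, 1, val, 1);
      }
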
-rw-r--r--arch/arm/mach-shmobile/pm-rcar-gen2.c2
-rw-r--r--arch/arm/mach-shmobile/pm-rcar.c164
-rw-r--r--arch/arm/mach-shmobile/smp-r8a7779.c2
-rw-r--r--arch/arm/mach-shmobile/smp-r8a7790.c2
-rw-r--r--arch/arm/mach-shmobile/timer.c52
-rw-r--r--arch/arm/mach-socfpga/core.h2
-rw-r--r--arch/arm/mach-socfpga/l2_cache.c49
-rw-r--r--arch/arm/mach-socfpga/ocram.c133
-rw-r--r--arch/arm/mach-socfpga/socfpga.c12
-rw-r--r--arch/arm/mach-sti/Kconfig9
-rw-r--r--arch/arm/mach-sunxi/sunxi.c9
-rw-r--r--arch/arm/mach-tegra/board-paz00.c6
-rw-r--r--arch/arm/mach-tegra/platsmp.c16
-rw-r--r--arch/arm/mach-uniphier/platsmp.c4
-rw-r--r--arch/arm/mach-versatile/versatile_dt.c47
-rw-r--r--arch/arm/mach-vexpress/Makefile4
-rw-r--r--arch/arm/mach-vexpress/Makefile.boot3
-rw-r--r--arch/arm/mach-vexpress/v2m-mps2.c21
-rw-r--r--arch/arm/mach-zynq/common.c2
-rw-r--r--arch/arm/mm/Kconfig3
-rw-r--r--arch/arm/mm/cache-l2x0.c26
-rw-r--r--arch/arm/mm/cache-uniphier.c26
-rw-r--r--arch/arm/mm/dma-mapping.c22
-rw-r--r--arch/arm/mm/idmap.c2
-rw-r--r--arch/arm/mm/ioremap.c16
-rw-r--r--arch/arm/mm/nommu.c9
-rw-r--r--arch/arm/plat-samsung/include/plat/map-s5p.h1
-rw-r--r--arch/arm/tools/Makefile11
-rw-r--r--arch/arm/vdso/Makefile2
-rw-r--r--arch/arm/vfp/vfpmodule.c4
-rw-r--r--arch/arm64/Kconfig47
-rw-r--r--arch/arm64/Kconfig.debug2
-rw-r--r--arch/arm64/Kconfig.platforms11
-rw-r--r--arch/arm64/boot/dts/Makefile1
-rw-r--r--arch/arm64/boot/dts/amlogic/Makefile3
-rw-r--r--arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts69
-rw-r--r--arch/arm64/boot/dts/amlogic/meson-gxbb-p200.dts52
-rw-r--r--arch/arm64/boot/dts/amlogic/meson-gxbb-p201.dts52
-rw-r--r--arch/arm64/boot/dts/amlogic/meson-gxbb-p20x.dtsi65
-rw-r--r--arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95-meta.dts2
-rw-r--r--arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95-pro.dts2
-rw-r--r--arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95-telos.dts2
-rw-r--r--arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi4
-rw-r--r--arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi5
-rw-r--r--arch/arm64/boot/dts/apm/apm-shadowcat.dtsi7
-rw-r--r--arch/arm64/boot/dts/apm/apm-storm.dtsi1
-rw-r--r--arch/arm64/boot/dts/arm/juno-base.dtsi10
-rw-r--r--arch/arm64/boot/dts/broadcom/ns2-clock.dtsi105
-rw-r--r--arch/arm64/boot/dts/broadcom/ns2-svk.dts45
-rw-r--r--arch/arm64/boot/dts/broadcom/ns2.dtsi155
-rw-r--r--arch/arm64/boot/dts/broadcom/vulcan.dtsi2
-rw-r--r--arch/arm64/boot/dts/exynos/exynos7-tmu-sensor-conf.dtsi25
-rw-r--r--arch/arm64/boot/dts/exynos/exynos7-trip-points.dtsi54
-rw-r--r--arch/arm64/boot/dts/exynos/exynos7.dtsi49
-rw-r--r--arch/arm64/boot/dts/freescale/Makefile3
-rw-r--r--arch/arm64/boot/dts/freescale/fsl-ls1043a-qds.dts181
-rw-r--r--arch/arm64/boot/dts/freescale/fsl-ls1043a-rdb.dts13
-rw-r--r--arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi22
-rw-r--r--arch/arm64/boot/dts/freescale/fsl-ls2080a-qds.dts9
-rw-r--r--arch/arm64/boot/dts/freescale/fsl-ls2080a.dtsi110
-rw-r--r--arch/arm64/boot/dts/hisilicon/Makefile4
-rw-r--r--arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts200
-rw-r--r--arch/arm64/boot/dts/hisilicon/hi6220.dtsi623
-rw-r--r--arch/arm64/boot/dts/hisilicon/hikey-pinctrl.dtsi705
-rw-r--r--arch/arm64/boot/dts/hisilicon/hip05-d02.dts34
-rw-r--r--arch/arm64/boot/dts/hisilicon/hip05.dtsi10
-rw-r--r--arch/arm64/boot/dts/hisilicon/hip05_hns.dtsi74
-rw-r--r--arch/arm64/boot/dts/hisilicon/hip06-d03.dts34
-rw-r--r--arch/arm64/boot/dts/hisilicon/hip06.dtsi307
-rw-r--r--arch/arm64/boot/dts/lg/Makefile5
-rw-r--r--arch/arm64/boot/dts/lg/lg1312-ref.dts36
-rw-r--r--arch/arm64/boot/dts/lg/lg1312.dtsi351
-rw-r--r--arch/arm64/boot/dts/marvell/armada-3720-db.dts32
-rw-r--r--arch/arm64/boot/dts/marvell/armada-372x.dtsi1
-rw-r--r--arch/arm64/boot/dts/marvell/armada-37xx.dtsi20
-rw-r--r--arch/arm64/boot/dts/marvell/armada-7020.dtsi1
-rw-r--r--arch/arm64/boot/dts/marvell/armada-7040-db.dts108
-rw-r--r--arch/arm64/boot/dts/marvell/armada-7040.dtsi1
-rw-r--r--arch/arm64/boot/dts/marvell/armada-8020.dtsi1
-rw-r--r--arch/arm64/boot/dts/marvell/armada-8040.dtsi1
-rw-r--r--arch/arm64/boot/dts/marvell/armada-ap806-dual.dtsi1
-rw-r--r--arch/arm64/boot/dts/marvell/armada-ap806-quad.dtsi2
-rw-r--r--arch/arm64/boot/dts/marvell/armada-ap806.dtsi56
-rw-r--r--arch/arm64/boot/dts/marvell/armada-cp110-master.dtsi228
-rw-r--r--arch/arm64/boot/dts/mediatek/mt8173.dtsi60
-rw-r--r--arch/arm64/boot/dts/nvidia/Makefile1
-rw-r--r--arch/arm64/boot/dts/nvidia/tegra132-norrin.dts55
-rw-r--r--arch/arm64/boot/dts/nvidia/tegra132.dtsi120
-rw-r--r--arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi8
-rw-r--r--arch/arm64/boot/dts/nvidia/tegra210-p2530.dtsi14
-rw-r--r--arch/arm64/boot/dts/nvidia/tegra210-p2571.dts2
-rw-r--r--arch/arm64/boot/dts/nvidia/tegra210-p2595.dtsi2
-rw-r--r--arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi30
-rw-r--r--arch/arm64/boot/dts/nvidia/tegra210-smaug.dts1424
-rw-r--r--arch/arm64/boot/dts/nvidia/tegra210.dtsi130
-rw-r--r--arch/arm64/boot/dts/renesas/r8a7795-salvator-x.dts63
-rw-r--r--arch/arm64/boot/dts/renesas/r8a7795.dtsi212
-rw-r--r--arch/arm64/boot/dts/rockchip/Makefile2
-rw-r--r--arch/arm64/boot/dts/rockchip/rk3368-evb.dtsi8
-rw-r--r--arch/arm64/boot/dts/rockchip/rk3368-geekbox.dts319
-rw-r--r--arch/arm64/boot/dts/rockchip/rk3368-r88.dts8
-rw-r--r--arch/arm64/boot/dts/rockchip/rk3368.dtsi82
-rw-r--r--arch/arm64/boot/dts/rockchip/rk3399-evb.dts (renamed from arch/arm64/boot/dts/rockchip/rk3368-thermal.dtsi)124
-rw-r--r--arch/arm64/boot/dts/rockchip/rk3399.dtsi1013
-rw-r--r--arch/arm64/boot/dts/socionext/uniphier-ph1-ld20-ref.dts1
-rw-r--r--arch/arm64/boot/dts/socionext/uniphier-ph1-ld20.dtsi6
l---------arch/arm64/boot/dts/socionext/uniphier-ref-daughter.dtsi1
-rw-r--r--arch/arm64/configs/defconfig50
-rw-r--r--arch/arm64/include/asm/assembler.h133
-rw-r--r--arch/arm64/include/asm/cpufeature.h27
-rw-r--r--arch/arm64/include/asm/dma-mapping.h2
-rw-r--r--arch/arm64/include/asm/efi.h37
-rw-r--r--arch/arm64/include/asm/elf.h3
-rw-r--r--arch/arm64/include/asm/kernel-pgtable.h21
-rw-r--r--arch/arm64/include/asm/kvm_arm.h96
-rw-r--r--arch/arm64/include/asm/kvm_asm.h3
-rw-r--r--arch/arm64/include/asm/kvm_host.h22
-rw-r--r--arch/arm64/include/asm/kvm_mmio.h3
-rw-r--r--arch/arm64/include/asm/kvm_mmu.h112
-rw-r--r--arch/arm64/include/asm/memory.h33
-rw-r--r--arch/arm64/include/asm/mmu.h1
-rw-r--r--arch/arm64/include/asm/mmzone.h12
-rw-r--r--arch/arm64/include/asm/numa.h45
-rw-r--r--arch/arm64/include/asm/page.h2
-rw-r--r--arch/arm64/include/asm/pgtable-hwdef.h81
-rw-r--r--arch/arm64/include/asm/pgtable-types.h32
-rw-r--r--arch/arm64/include/asm/pgtable.h82
-rw-r--r--arch/arm64/include/asm/smp.h11
-rw-r--r--arch/arm64/include/asm/stage2_pgtable-nopmd.h42
-rw-r--r--arch/arm64/include/asm/stage2_pgtable-nopud.h39
-rw-r--r--arch/arm64/include/asm/stage2_pgtable.h142
-rw-r--r--arch/arm64/include/asm/suspend.h32
-rw-r--r--arch/arm64/include/asm/sysreg.h24
-rw-r--r--arch/arm64/include/asm/topology.h10
-rw-r--r--arch/arm64/include/asm/virt.h22
-rw-r--r--arch/arm64/include/uapi/asm/unistd.h3
-rw-r--r--arch/arm64/kernel/Makefile1
-rw-r--r--arch/arm64/kernel/acpi.c33
-rw-r--r--arch/arm64/kernel/asm-offsets.c10
-rw-r--r--arch/arm64/kernel/cpu_errata.c24
-rw-r--r--arch/arm64/kernel/cpufeature.c343
-rw-r--r--arch/arm64/kernel/cpuidle.c9
-rw-r--r--arch/arm64/kernel/cpuinfo.c40
-rw-r--r--arch/arm64/kernel/debug-monitors.c3
-rw-r--r--arch/arm64/kernel/efi-entry.S2
-rw-r--r--arch/arm64/kernel/efi.c57
-rw-r--r--arch/arm64/kernel/head.S165
-rw-r--r--arch/arm64/kernel/hibernate-asm.S176
-rw-r--r--arch/arm64/kernel/hibernate.c487
-rw-r--r--arch/arm64/kernel/hw_breakpoint.c12
-rw-r--r--arch/arm64/kernel/hyp-stub.S45
-rw-r--r--arch/arm64/kernel/image.h3
-rw-r--r--arch/arm64/kernel/insn.c2
-rw-r--r--arch/arm64/kernel/kaslr.c6
-rw-r--r--arch/arm64/kernel/pci.c10
-rw-r--r--arch/arm64/kernel/perf_callchain.c14
-rw-r--r--arch/arm64/kernel/perf_event.c553
-rw-r--r--arch/arm64/kernel/process.c25
-rw-r--r--arch/arm64/kernel/setup.c81
-rw-r--r--arch/arm64/kernel/sleep.S157
-rw-r--r--arch/arm64/kernel/smp.c78
-rw-r--r--arch/arm64/kernel/suspend.c102
-rw-r--r--arch/arm64/kernel/sys.c10
-rw-r--r--arch/arm64/kernel/vdso.c10
-rw-r--r--arch/arm64/kernel/vmlinux.lds.S48
-rw-r--r--arch/arm64/kvm/Kconfig8
-rw-r--r--arch/arm64/kvm/Makefile12
-rw-r--r--arch/arm64/kvm/handle_exit.c7
-rw-r--r--arch/arm64/kvm/hyp-init.S48
-rw-r--r--arch/arm64/kvm/hyp.S11
-rw-r--r--arch/arm64/kvm/hyp/entry.S19
-rw-r--r--arch/arm64/kvm/hyp/hyp-entry.S10
-rw-r--r--arch/arm64/kvm/hyp/s2-setup.c8
-rw-r--r--arch/arm64/kvm/inject_fault.c2
-rw-r--r--arch/arm64/kvm/reset.c30
-rw-r--r--arch/arm64/mm/Makefile1
-rw-r--r--arch/arm64/mm/cache.S2
-rw-r--r--arch/arm64/mm/context.c3
-rw-r--r--arch/arm64/mm/dma-mapping.c64
-rw-r--r--arch/arm64/mm/dump.c52
-rw-r--r--arch/arm64/mm/fault.c87
-rw-r--r--arch/arm64/mm/hugetlbpage.c1
-rw-r--r--arch/arm64/mm/init.c112
-rw-r--r--arch/arm64/mm/mm.h1
-rw-r--r--arch/arm64/mm/mmap.c2
-rw-r--r--arch/arm64/mm/mmu.c24
-rw-r--r--arch/arm64/mm/numa.c396
-rw-r--r--arch/arm64/mm/proc-macros.S98
-rw-r--r--arch/arm64/mm/proc.S56
-rw-r--r--arch/arm64/net/bpf_jit_comp.c91
-rw-r--r--arch/avr32/Kconfig4
-rw-r--r--arch/avr32/include/asm/addrspace.h2
-rw-r--r--arch/avr32/kernel/process.c4
-rw-r--r--arch/avr32/mach-at32ap/at32ap700x.c16
-rw-r--r--arch/blackfin/Kconfig1
-rw-r--r--arch/blackfin/include/asm/processor.h7
-rw-r--r--arch/blackfin/lib/udivsi3.S2
-rw-r--r--arch/blackfin/mach-bf609/include/mach/defBF60x_base.h2
-rw-r--r--arch/c6x/include/asm/clock.h2
-rw-r--r--arch/c6x/include/uapi/asm/unistd.h1
-rw-r--r--arch/c6x/kernel/process.c4
-rw-r--r--arch/c6x/platforms/cache.c2
-rw-r--r--arch/cris/Kconfig4
-rw-r--r--arch/cris/arch-v10/drivers/axisflashmap.c2
-rw-r--r--arch/cris/arch-v10/kernel/process.c9
-rw-r--r--arch/cris/arch-v32/drivers/axisflashmap.c2
-rw-r--r--arch/cris/arch-v32/drivers/cryptocop.c2
-rw-r--r--arch/cris/arch-v32/drivers/mach-a3/nandflash.c1
-rw-r--r--arch/cris/arch-v32/drivers/mach-fs/nandflash.c1
-rw-r--r--arch/cris/arch-v32/kernel/process.c4
-rw-r--r--arch/cris/arch-v32/mach-a3/dram_init.S2
-rw-r--r--arch/cris/arch-v32/mach-fs/dram_init.S2
-rw-r--r--arch/frv/include/asm/processor.h7
-rw-r--r--arch/h8300/Kconfig2
-rw-r--r--arch/h8300/boot/compressed/Makefile1
-rw-r--r--arch/h8300/include/asm/hash.h53
-rw-r--r--arch/h8300/include/asm/processor.h7
-rw-r--r--arch/h8300/include/uapi/asm/unistd.h2
-rw-r--r--arch/hexagon/include/asm/hexagon_vm.h2
-rw-r--r--arch/hexagon/include/asm/vm_mmu.h2
-rw-r--r--arch/hexagon/include/uapi/asm/unistd.h1
-rw-r--r--arch/hexagon/kernel/kgdb.c4
-rw-r--r--arch/hexagon/kernel/process.c7
-rw-r--r--arch/hexagon/kernel/vdso.c3
-rw-r--r--arch/hexagon/kernel/vm_ops.S2
-rw-r--r--arch/hexagon/lib/memcpy.S2
-rw-r--r--arch/ia64/Kconfig1
-rw-r--r--arch/ia64/Makefile6
-rw-r--r--arch/ia64/hp/sim/simserial.c2
-rw-r--r--arch/ia64/include/asm/iommu.h1
-rw-r--r--arch/ia64/include/asm/rwsem.h22
-rw-r--r--arch/ia64/include/asm/sn/ioc3.h2
-rw-r--r--arch/ia64/include/asm/sn/shubio.h4
-rw-r--r--arch/ia64/kernel/efi.c4
-rw-r--r--arch/ia64/kernel/mca.c5
-rw-r--r--arch/ia64/kernel/perfmon.c4
-rw-r--r--arch/ia64/kernel/process.c14
-rw-r--r--arch/ia64/kernel/traps.c1
-rw-r--r--arch/ia64/kernel/unaligned.c1
-rw-r--r--arch/ia64/lib/idiv32.S2
-rw-r--r--arch/ia64/lib/idiv64.S2
-rw-r--r--arch/ia64/sn/kernel/io_acpi_init.c1
-rw-r--r--arch/ia64/sn/kernel/io_init.c4
-rw-r--r--arch/ia64/sn/kernel/sn2/sn2_smp.c35
-rw-r--r--arch/m32r/Kconfig1
-rw-r--r--arch/m32r/boot/compressed/Makefile1
-rw-r--r--arch/m32r/kernel/process.c9
-rw-r--r--arch/m32r/kernel/smp.c1
-rw-r--r--arch/m68k/Kconfig.cpu14
-rw-r--r--arch/m68k/bvme6000/rtc.c2
-rw-r--r--arch/m68k/include/asm/hash.h59
-rw-r--r--arch/m68k/include/asm/processor.h7
-rw-r--r--arch/m68k/mvme16x/rtc.c2
-rw-r--r--arch/metag/Kconfig2
-rw-r--r--arch/metag/Kconfig.soc1
-rw-r--r--arch/metag/include/asm/atomic_lnkget.h2
-rw-r--r--arch/metag/include/asm/metag_regs.h2
-rw-r--r--arch/metag/include/asm/processor.h2
-rw-r--r--arch/metag/include/asm/tbx.h6
-rw-r--r--arch/metag/include/uapi/asm/unistd.h2
-rw-r--r--arch/metag/kernel/ftrace.c1
-rw-r--r--arch/metag/kernel/perf/perf_event.c3
-rw-r--r--arch/metag/kernel/perf_callchain.c10
-rw-r--r--arch/metag/kernel/process.c6
-rw-r--r--arch/metag/mm/hugetlbpage.c1
-rw-r--r--arch/metag/tbx/tbipcx.S2
-rw-r--r--arch/metag/tbx/tbisoft.S6
-rw-r--r--arch/microblaze/Kconfig2
-rw-r--r--arch/microblaze/include/asm/hash.h81
-rw-r--r--arch/microblaze/include/asm/processor.h10
-rw-r--r--arch/microblaze/include/asm/unistd.h2
-rw-r--r--arch/microblaze/include/uapi/asm/unistd.h3
-rw-r--r--arch/microblaze/kernel/syscall_table.S3
-rw-r--r--arch/microblaze/pci/pci-common.c2
-rw-r--r--arch/mips/Kconfig176
-rw-r--r--arch/mips/Makefile22
-rw-r--r--arch/mips/alchemy/Kconfig2
-rw-r--r--arch/mips/alchemy/common/clock.c3
-rw-r--r--arch/mips/ath79/Kconfig12
-rw-r--r--arch/mips/ath79/clock.c213
-rw-r--r--arch/mips/ath79/common.c24
-rw-r--r--arch/mips/ath79/early_printk.c6
-rw-r--r--arch/mips/ath79/setup.c61
-rw-r--r--arch/mips/bcm47xx/Makefile2
-rw-r--r--arch/mips/bcm47xx/bcm47xx_private.h3
-rw-r--r--arch/mips/bcm47xx/setup.c2
-rw-r--r--arch/mips/bmips/Kconfig4
-rw-r--r--arch/mips/bmips/setup.c12
-rw-r--r--arch/mips/boot/compressed/Makefile5
-rw-r--r--arch/mips/boot/dts/brcm/Makefile2
-rw-r--r--arch/mips/boot/dts/brcm/bcm6328.dtsi50
-rw-r--r--arch/mips/boot/dts/brcm/bcm6358.dtsi130
-rw-r--r--arch/mips/boot/dts/brcm/bcm6368.dtsi32
-rw-r--r--arch/mips/boot/dts/brcm/bcm7125.dtsi69
-rw-r--r--arch/mips/boot/dts/brcm/bcm7346.dtsi6
-rw-r--r--arch/mips/boot/dts/brcm/bcm7358.dtsi2
-rw-r--r--arch/mips/boot/dts/brcm/bcm7360.dtsi42
-rw-r--r--arch/mips/boot/dts/brcm/bcm7362.dtsi6
-rw-r--r--arch/mips/boot/dts/brcm/bcm7420.dtsi77
-rw-r--r--arch/mips/boot/dts/brcm/bcm7425.dtsi100
-rw-r--r--arch/mips/boot/dts/brcm/bcm7435.dtsi141
-rw-r--r--arch/mips/boot/dts/brcm/bcm96358nb4ser.dts46
-rw-r--r--arch/mips/boot/dts/brcm/bcm96368mvwg.dts4
-rw-r--r--arch/mips/boot/dts/brcm/bcm97125cbmb.dts24
-rw-r--r--arch/mips/boot/dts/brcm/bcm97360svmb.dts8
-rw-r--r--arch/mips/boot/dts/brcm/bcm97420c.dts28
-rw-r--r--arch/mips/boot/dts/brcm/bcm97425svmb.dts28
-rw-r--r--arch/mips/boot/dts/brcm/bcm97435svmb.dts38
-rw-r--r--arch/mips/boot/dts/cavium-octeon/dlink_dsr-1000n.dts78
-rw-r--r--arch/mips/boot/dts/cavium-octeon/octeon_3xxx.dts205
-rw-r--r--arch/mips/boot/dts/cavium-octeon/octeon_3xxx.dtsi231
-rw-r--r--arch/mips/boot/dts/cavium-octeon/ubnt_e100.dts59
-rw-r--r--arch/mips/boot/dts/ingenic/jz4740.dtsi16
-rw-r--r--arch/mips/boot/dts/lantiq/easy50712.dts2
-rw-r--r--arch/mips/boot/dts/pic32/pic32mzda-clk.dtsi236
-rw-r--r--arch/mips/boot/dts/pic32/pic32mzda.dtsi63
-rw-r--r--arch/mips/boot/dts/pic32/pic32mzda_sk.dts5
-rw-r--r--arch/mips/boot/dts/qca/Makefile7
-rw-r--r--arch/mips/boot/dts/qca/ar9132.dtsi17
-rw-r--r--arch/mips/boot/dts/qca/ar9132_tl_wr1043nd_v1.dts98
-rw-r--r--arch/mips/boot/dts/qca/ar9331.dtsi155
-rw-r--r--arch/mips/boot/dts/qca/ar9331_dpt_module.dts78
-rw-r--r--arch/mips/boot/dts/qca/ar9331_dragino_ms14.dts102
-rw-r--r--arch/mips/boot/dts/qca/ar9331_omega.dts78
-rw-r--r--arch/mips/boot/dts/qca/ar9331_tl_mr3020.dts118
-rw-r--r--arch/mips/boot/dts/ralink/mt7620a.dtsi2
-rw-r--r--arch/mips/boot/dts/ralink/rt2880.dtsi2
-rw-r--r--arch/mips/boot/dts/ralink/rt3050.dtsi2
-rw-r--r--arch/mips/boot/dts/ralink/rt3883.dtsi2
-rw-r--r--arch/mips/boot/dts/xilfpga/nexys4ddr.dts2
-rw-r--r--arch/mips/boot/tools/.gitignore1
-rw-r--r--arch/mips/boot/tools/Makefile8
-rw-r--r--arch/mips/boot/tools/relocs.c680
-rw-r--r--arch/mips/boot/tools/relocs.h45
-rw-r--r--arch/mips/boot/tools/relocs_32.c17
-rw-r--r--arch/mips/boot/tools/relocs_64.c27
-rw-r--r--arch/mips/boot/tools/relocs_main.c84
-rw-r--r--arch/mips/cavium-octeon/csrc-octeon.c13
-rw-r--r--arch/mips/cavium-octeon/executive/cvmx-helper.c43
-rw-r--r--arch/mips/cavium-octeon/executive/cvmx-sysinfo.c72
-rw-r--r--arch/mips/cavium-octeon/executive/octeon-model.c82
-rw-r--r--arch/mips/cavium-octeon/octeon-irq.c679
-rw-r--r--arch/mips/cavium-octeon/octeon-platform.c94
-rw-r--r--arch/mips/cavium-octeon/setup.c86
-rw-r--r--arch/mips/cavium-octeon/smp.c157
-rw-r--r--arch/mips/configs/bcm47xx_defconfig1
-rw-r--r--arch/mips/configs/bcm63xx_defconfig1
-rw-r--r--arch/mips/configs/bigsur_defconfig1
-rw-r--r--arch/mips/configs/bmips_be_defconfig1
-rw-r--r--arch/mips/configs/cavium_octeon_defconfig15
-rw-r--r--arch/mips/configs/db1xxx_defconfig1
-rw-r--r--arch/mips/configs/decstation_defconfig1
-rw-r--r--arch/mips/configs/ip22_defconfig1
-rw-r--r--arch/mips/configs/ip27_defconfig1
-rw-r--r--arch/mips/configs/jazz_defconfig1
-rw-r--r--arch/mips/configs/lemote2f_defconfig1
-rw-r--r--arch/mips/configs/loongson1b_defconfig (renamed from arch/mips/configs/ls1b_defconfig)35
-rw-r--r--arch/mips/configs/loongson3_defconfig1
-rw-r--r--arch/mips/configs/mtx1_defconfig1
-rw-r--r--arch/mips/configs/nlm_xlp_defconfig1
-rw-r--r--arch/mips/configs/nlm_xlr_defconfig1
-rw-r--r--arch/mips/configs/rm200_defconfig1
-rw-r--r--arch/mips/dec/setup.c1
-rw-r--r--arch/mips/include/asm/Kbuild1
-rw-r--r--arch/mips/include/asm/asmmacro.h282
-rw-r--r--arch/mips/include/asm/bitops.h17
-rw-r--r--arch/mips/include/asm/bitrev.h30
-rw-r--r--arch/mips/include/asm/bmips.h1
-rw-r--r--arch/mips/include/asm/bootinfo.h18
-rw-r--r--arch/mips/include/asm/cacheflush.h7
-rw-r--r--arch/mips/include/asm/cacheops.h6
-rw-r--r--arch/mips/include/asm/clkdev.h27
-rw-r--r--arch/mips/include/asm/cpu-features.h147
-rw-r--r--arch/mips/include/asm/cpu-info.h43
-rw-r--r--arch/mips/include/asm/cpu-type.h5
-rw-r--r--arch/mips/include/asm/cpu.h113
-rw-r--r--arch/mips/include/asm/elf.h149
-rw-r--r--arch/mips/include/asm/hazards.h15
-rw-r--r--arch/mips/include/asm/highmem.h4
-rw-r--r--arch/mips/include/asm/io.h10
-rw-r--r--arch/mips/include/asm/irq_regs.h10
-rw-r--r--arch/mips/include/asm/irqflags.h5
-rw-r--r--arch/mips/include/asm/kvm_host.h9
-rw-r--r--arch/mips/include/asm/llsc.h28
-rw-r--r--arch/mips/include/asm/mach-au1x00/au1xxx_dbdma.h2
-rw-r--r--arch/mips/include/asm/mach-au1x00/gpio-au1300.h2
-rw-r--r--arch/mips/include/asm/mach-bcm63xx/bcm63xx_dev_enet.h2
-rw-r--r--arch/mips/include/asm/mach-bmips/cpu-feature-overrides.h14
-rw-r--r--arch/mips/include/asm/mach-bmips/ioremap.h33
-rw-r--r--arch/mips/include/asm/mach-ip27/dma-coherence.h2
-rw-r--r--arch/mips/include/asm/mach-ip32/dma-coherence.h2
-rw-r--r--arch/mips/include/asm/mach-jz4740/jz4740_nand.h2
-rw-r--r--arch/mips/include/asm/mach-jz4740/platform.h1
-rw-r--r--arch/mips/include/asm/mach-lantiq/falcon/lantiq_soc.h4
-rw-r--r--arch/mips/include/asm/mach-lantiq/lantiq.h2
-rw-r--r--arch/mips/include/asm/mach-lantiq/lantiq_platform.h2
-rw-r--r--arch/mips/include/asm/mach-lantiq/xway/irq.h2
-rw-r--r--arch/mips/include/asm/mach-lantiq/xway/lantiq_irq.h2
-rw-r--r--arch/mips/include/asm/mach-lantiq/xway/lantiq_soc.h4
-rw-r--r--arch/mips/include/asm/mach-lantiq/xway/xway_dma.h2
-rw-r--r--arch/mips/include/asm/mach-loongson32/cpufreq.h1
-rw-r--r--arch/mips/include/asm/mach-loongson32/dma.h25
-rw-r--r--arch/mips/include/asm/mach-loongson32/irq.h1
-rw-r--r--arch/mips/include/asm/mach-loongson32/loongson1.h4
-rw-r--r--arch/mips/include/asm/mach-loongson32/nand.h30
-rw-r--r--arch/mips/include/asm/mach-loongson32/platform.h14
-rw-r--r--arch/mips/include/asm/mach-loongson32/regs-clk.h24
-rw-r--r--arch/mips/include/asm/mach-loongson32/regs-mux.h84
-rw-r--r--arch/mips/include/asm/mach-loongson32/regs-pwm.h12
-rw-r--r--arch/mips/include/asm/mach-loongson64/cpu-feature-overrides.h18
-rw-r--r--arch/mips/include/asm/mach-loongson64/kernel-entry-init.h18
-rw-r--r--arch/mips/include/asm/mach-loongson64/loongson_hwmon.h2
-rw-r--r--arch/mips/include/asm/mach-malta/kernel-entry-init.h6
-rw-r--r--arch/mips/include/asm/mach-ralink/mt7620.h3
-rw-r--r--arch/mips/include/asm/mach-ralink/mt7621.h2
-rw-r--r--arch/mips/include/asm/mach-ralink/pinmux.h2
-rw-r--r--arch/mips/include/asm/mach-ralink/ralink_regs.h2
-rw-r--r--arch/mips/include/asm/mach-ralink/rt288x.h2
-rw-r--r--arch/mips/include/asm/mach-ralink/rt305x.h2
-rw-r--r--arch/mips/include/asm/mips-cm.h12
-rw-r--r--arch/mips/include/asm/mips-cpc.h3
-rw-r--r--arch/mips/include/asm/mips_mt.h2
-rw-r--r--arch/mips/include/asm/mipsregs.h802
-rw-r--r--arch/mips/include/asm/mmu_context.h41
-rw-r--r--arch/mips/include/asm/msa.h34
-rw-r--r--arch/mips/include/asm/octeon/cvmx-bootinfo.h16
-rw-r--r--arch/mips/include/asm/octeon/cvmx-ciu3-defs.h353
-rw-r--r--arch/mips/include/asm/octeon/cvmx-cmd-queue.h2
-rw-r--r--arch/mips/include/asm/octeon/cvmx-coremask.h89
-rw-r--r--arch/mips/include/asm/octeon/cvmx-fpa-defs.h1
-rw-r--r--arch/mips/include/asm/octeon/cvmx-helper-board.h2
-rw-r--r--arch/mips/include/asm/octeon/cvmx-ipd.h2
-rw-r--r--arch/mips/include/asm/octeon/cvmx-mio-defs.h410
-rw-r--r--arch/mips/include/asm/octeon/cvmx-pow.h2
-rw-r--r--arch/mips/include/asm/octeon/cvmx-sysinfo.h37
-rw-r--r--arch/mips/include/asm/octeon/cvmx.h27
-rw-r--r--arch/mips/include/asm/octeon/octeon-feature.h19
-rw-r--r--arch/mips/include/asm/octeon/octeon-model.h5
-rw-r--r--arch/mips/include/asm/octeon/octeon.h25
-rw-r--r--arch/mips/include/asm/pci.h10
-rw-r--r--arch/mips/include/asm/pgtable-32.h33
-rw-r--r--arch/mips/include/asm/pgtable-64.h18
-rw-r--r--arch/mips/include/asm/pgtable-bits.h213
-rw-r--r--arch/mips/include/asm/pgtable.h157
-rw-r--r--arch/mips/include/asm/processor.h10
-rw-r--r--arch/mips/include/asm/seccomp.h47
-rw-r--r--arch/mips/include/asm/sgi/hpc3.h2
-rw-r--r--arch/mips/include/asm/sibyte/bcm1480_regs.h4
-rw-r--r--arch/mips/include/asm/signal.h12
-rw-r--r--arch/mips/include/asm/smp-cps.h2
-rw-r--r--arch/mips/include/asm/switch_to.h2
-rw-r--r--arch/mips/include/asm/uasm.h3
-rw-r--r--arch/mips/include/asm/watch.h10
-rw-r--r--arch/mips/include/uapi/asm/inst.h11
-rw-r--r--arch/mips/include/uapi/asm/siginfo.h22
-rw-r--r--arch/mips/jz4740/board-qi_lb60.c139
-rw-r--r--arch/mips/jz4740/platform.c25
-rw-r--r--arch/mips/kernel/Makefile4
-rw-r--r--arch/mips/kernel/asm-offsets.c10
-rw-r--r--arch/mips/kernel/binfmt_elfn32.c16
-rw-r--r--arch/mips/kernel/binfmt_elfo32.c32
-rw-r--r--arch/mips/kernel/bmips_5xxx_init.S753
-rw-r--r--arch/mips/kernel/bmips_vec.S50
-rw-r--r--arch/mips/kernel/branch.c22
-rw-r--r--arch/mips/kernel/cevt-r4k.c82
-rw-r--r--arch/mips/kernel/cps-vec.S311
-rw-r--r--arch/mips/kernel/cpu-probe.c384
-rw-r--r--arch/mips/kernel/crash.c16
-rw-r--r--arch/mips/kernel/elf.c2
-rw-r--r--arch/mips/kernel/genex.S58
-rw-r--r--arch/mips/kernel/head.S21
-rw-r--r--arch/mips/kernel/idle.c5
-rw-r--r--arch/mips/kernel/irq.c3
-rw-r--r--arch/mips/kernel/mips-r2-to-r6-emul.c107
-rw-r--r--arch/mips/kernel/module-rela.c58
-rw-r--r--arch/mips/kernel/module.c60
-rw-r--r--arch/mips/kernel/perf_event.c12
-rw-r--r--arch/mips/kernel/perf_event_mipsxx.c64
-rw-r--r--arch/mips/kernel/pm-cps.c15
-rw-r--r--arch/mips/kernel/pm.c2
-rw-r--r--arch/mips/kernel/proc.c1
-rw-r--r--arch/mips/kernel/process.c56
-rw-r--r--arch/mips/kernel/ptrace.c34
-rw-r--r--arch/mips/kernel/r4k_fpu.S10
-rw-r--r--arch/mips/kernel/r4k_switch.S1
-rw-r--r--arch/mips/kernel/relocate.c386
-rw-r--r--arch/mips/kernel/scall32-o32.S11
-rw-r--r--arch/mips/kernel/scall64-64.S3
-rw-r--r--arch/mips/kernel/scall64-n32.S14
-rw-r--r--arch/mips/kernel/scall64-o32.S14
-rw-r--r--arch/mips/kernel/setup.c79
-rw-r--r--arch/mips/kernel/signal.c17
-rw-r--r--arch/mips/kernel/signal32.c6
-rw-r--r--arch/mips/kernel/smp-bmips.c88
-rw-r--r--arch/mips/kernel/smp-cps.c72
-rw-r--r--arch/mips/kernel/smp.c24
-rw-r--r--arch/mips/kernel/spram.c1
-rw-r--r--arch/mips/kernel/traps.c32
-rw-r--r--arch/mips/kernel/unaligned.c1
-rw-r--r--arch/mips/kernel/vdso.c3
-rw-r--r--arch/mips/kernel/vmlinux.lds.S21
-rw-r--r--arch/mips/kernel/watch.c79
-rw-r--r--arch/mips/kvm/emulate.c114
-rw-r--r--arch/mips/kvm/locore.S94
-rw-r--r--arch/mips/kvm/mips.c9
-rw-r--r--arch/mips/kvm/tlb.c59
-rw-r--r--arch/mips/kvm/trap_emul.c5
-rw-r--r--arch/mips/lantiq/Kconfig12
-rw-r--r--arch/mips/lantiq/Makefile2
-rw-r--r--arch/mips/lantiq/clk.c2
-rw-r--r--arch/mips/lantiq/clk.h2
-rw-r--r--arch/mips/lantiq/early_printk.c2
-rw-r--r--arch/mips/lantiq/falcon/prom.c2
-rw-r--r--arch/mips/lantiq/falcon/reset.c2
-rw-r--r--arch/mips/lantiq/falcon/sysctrl.c2
-rw-r--r--arch/mips/lantiq/irq.c2
-rw-r--r--arch/mips/lantiq/prom.c15
-rw-r--r--arch/mips/lantiq/prom.h2
-rw-r--r--arch/mips/lantiq/xway/clk.c2
-rw-r--r--arch/mips/lantiq/xway/dcdc.c2
-rw-r--r--arch/mips/lantiq/xway/dma.c2
-rw-r--r--arch/mips/lantiq/xway/gptu.c2
-rw-r--r--arch/mips/lantiq/xway/prom.c2
-rw-r--r--arch/mips/lantiq/xway/reset.c4
-rw-r--r--arch/mips/lantiq/xway/sysctrl.c2
-rw-r--r--arch/mips/lantiq/xway/vmmc.c2
-rw-r--r--arch/mips/lantiq/xway/xrx200_phy_fw.c4
-rw-r--r--arch/mips/lasat/picvue_proc.c4
-rw-r--r--arch/mips/lib/ashldi3.c2
-rw-r--r--arch/mips/lib/ashrdi3.c2
-rw-r--r--arch/mips/lib/bswapdi.c2
-rw-r--r--arch/mips/lib/bswapsi.c2
-rw-r--r--arch/mips/lib/cmpdi2.c2
-rw-r--r--arch/mips/lib/dump_tlb.c27
-rw-r--r--arch/mips/lib/lshrdi3.c2
-rw-r--r--arch/mips/lib/memcpy.S2
-rw-r--r--arch/mips/lib/memset.S2
-rw-r--r--arch/mips/lib/r3k_dump_tlb.c9
-rw-r--r--arch/mips/lib/ucmpdi2.c2
-rw-r--r--arch/mips/loongson32/common/platform.c105
-rw-r--r--arch/mips/loongson32/common/reset.c13
-rw-r--r--arch/mips/loongson32/common/time.c27
-rw-r--r--arch/mips/loongson32/ls1b/board.c67
-rw-r--r--arch/mips/loongson64/Platform2
-rw-r--r--arch/mips/loongson64/common/env.c7
-rw-r--r--arch/mips/loongson64/loongson-3/Makefile2
-rw-r--r--arch/mips/loongson64/loongson-3/acpi_init.c (renamed from drivers/platform/mips/acpi_init.c)0
-rw-r--r--arch/mips/loongson64/loongson-3/hpet.c2
-rw-r--r--arch/mips/loongson64/loongson-3/irq.c10
-rw-r--r--arch/mips/loongson64/loongson-3/numa.c6
-rw-r--r--arch/mips/loongson64/loongson-3/smp.c107
-rw-r--r--arch/mips/math-emu/Makefile4
-rw-r--r--arch/mips/math-emu/cp1emu.c45
-rw-r--r--arch/mips/math-emu/dp_maddf.c35
-rw-r--r--arch/mips/math-emu/dp_msubf.c269
-rw-r--r--arch/mips/math-emu/dp_mul.c4
-rw-r--r--arch/mips/math-emu/dsemul.c6
-rw-r--r--arch/mips/math-emu/ieee754dp.c9
-rw-r--r--arch/mips/math-emu/ieee754dp.h1
-rw-r--r--arch/mips/math-emu/ieee754int.h10
-rw-r--r--arch/mips/math-emu/ieee754sp.c12
-rw-r--r--arch/mips/math-emu/ieee754sp.h17
-rw-r--r--arch/mips/math-emu/sp_add.c6
-rw-r--r--arch/mips/math-emu/sp_maddf.c43
-rw-r--r--arch/mips/math-emu/sp_msubf.c258
-rw-r--r--arch/mips/math-emu/sp_sub.c6
-rw-r--r--arch/mips/mm/c-r4k.c72
-rw-r--r--arch/mips/mm/cache.c41
-rw-r--r--arch/mips/mm/dma-default.c7
-rw-r--r--arch/mips/mm/init.c14
-rw-r--r--arch/mips/mm/page.c9
-rw-r--r--arch/mips/mm/sc-mips.c1
-rw-r--r--arch/mips/mm/tlb-r3k.c24
-rw-r--r--arch/mips/mm/tlb-r4k.c56
-rw-r--r--arch/mips/mm/tlb-r8k.c2
-rw-r--r--arch/mips/mm/tlbex.c260
-rw-r--r--arch/mips/mm/uasm-mips.c2
-rw-r--r--arch/mips/mm/uasm.c3
-rw-r--r--arch/mips/mti-malta/malta-setup.c7
-rw-r--r--arch/mips/mti-malta/malta-time.c40
-rw-r--r--arch/mips/mti-sead3/sead3-setup.c5
-rw-r--r--arch/mips/netlogic/common/reset.S11
-rw-r--r--arch/mips/netlogic/common/smpboot.S4
-rw-r--r--arch/mips/netlogic/xlp/nlm_hal.c2
-rw-r--r--arch/mips/netlogic/xlr/setup.c2
-rw-r--r--arch/mips/oprofile/common.c2
-rw-r--r--arch/mips/oprofile/op_impl.h2
-rw-r--r--arch/mips/oprofile/op_model_mipsxx.c4
-rw-r--r--arch/mips/pci/fixup-lantiq.c2
-rw-r--r--arch/mips/pci/ops-bridge.c4
-rw-r--r--arch/mips/pci/ops-lantiq.c2
-rw-r--r--arch/mips/pci/pci-alchemy.c2
-rw-r--r--arch/mips/pci/pci-ip32.c1
-rw-r--r--arch/mips/pci/pci-lantiq.c2
-rw-r--r--arch/mips/pci/pci-lantiq.h2
-rw-r--r--arch/mips/pci/pci-mt7620.c2
-rw-r--r--arch/mips/pci/pci-rt2880.c2
-rw-r--r--arch/mips/pci/pci.c3
-rw-r--r--arch/mips/pic32/Kconfig2
-rw-r--r--arch/mips/pic32/pic32mzda/time.c13
-rw-r--r--arch/mips/pistachio/init.c43
-rw-r--r--arch/mips/pmcs-msp71xx/msp_setup.c2
-rw-r--r--arch/mips/pnx833x/common/setup.c3
-rw-r--r--arch/mips/ralink/Makefile2
-rw-r--r--arch/mips/ralink/bootrom.c2
-rw-r--r--arch/mips/ralink/cevt-rt3352.c2
-rw-r--r--arch/mips/ralink/clk.c2
-rw-r--r--arch/mips/ralink/common.h2
-rw-r--r--arch/mips/ralink/ill_acc.c2
-rw-r--r--arch/mips/ralink/irq-gic.c2
-rw-r--r--arch/mips/ralink/irq.c2
-rw-r--r--arch/mips/ralink/mt7620.c121
-rw-r--r--arch/mips/ralink/mt7621.c2
-rw-r--r--arch/mips/ralink/of.c2
-rw-r--r--arch/mips/ralink/prom.c2
-rw-r--r--arch/mips/ralink/reset.c4
-rw-r--r--arch/mips/ralink/rt288x.c2
-rw-r--r--arch/mips/ralink/rt305x.c2
-rw-r--r--arch/mips/ralink/rt3883.c2
-rw-r--r--arch/mips/ralink/timer-gic.c2
-rw-r--r--arch/mips/ralink/timer.c4
-rw-r--r--arch/mips/sgi-ip27/ip27-hubio.c2
-rw-r--r--arch/mips/sgi-ip27/ip27-nmi.c2
-rw-r--r--arch/mips/sgi-ip27/ip27-xtalk.c2
-rw-r--r--arch/mips/sibyte/Kconfig3
-rw-r--r--arch/mips/sni/rm200.c2
-rw-r--r--arch/mips/vdso/Makefile44
-rw-r--r--arch/mips/vr41xx/common/cmu.c2
-rw-r--r--arch/mips/vr41xx/common/pmu.c2
-rw-r--r--arch/mn10300/Kconfig1
-rw-r--r--arch/mn10300/boot/compressed/Makefile1
-rw-r--r--arch/mn10300/configs/asb2364_defconfig1
-rw-r--r--arch/mn10300/include/asm/fpu.h6
-rw-r--r--arch/mn10300/kernel/process.c4
-rw-r--r--arch/nios2/Kconfig2
-rw-r--r--arch/nios2/Makefile4
-rw-r--r--arch/nios2/boot/compressed/Makefile1
-rw-r--r--arch/nios2/include/asm/io.h1
-rw-r--r--arch/nios2/include/asm/page.h2
-rw-r--r--arch/nios2/include/asm/pgtable.h2
-rw-r--r--arch/nios2/include/asm/processor.h5
-rw-r--r--arch/nios2/include/uapi/asm/unistd.h2
-rw-r--r--arch/openrisc/Kconfig1
-rw-r--r--arch/openrisc/include/asm/page.h2
-rw-r--r--arch/openrisc/include/asm/processor.h9
-rw-r--r--arch/openrisc/include/uapi/asm/unistd.h1
-rw-r--r--arch/parisc/Kconfig4
-rw-r--r--arch/parisc/include/asm/cmpxchg.h9
-rw-r--r--arch/parisc/include/asm/eisa_eeprom.h4
-rw-r--r--arch/parisc/include/asm/ftrace.h2
-rw-r--r--arch/parisc/include/asm/futex.h70
-rw-r--r--arch/parisc/include/asm/ldcw.h2
-rw-r--r--arch/parisc/include/asm/syscall.h9
-rw-r--r--arch/parisc/include/asm/thread_info.h4
-rw-r--r--arch/parisc/include/asm/uaccess.h91
-rw-r--r--arch/parisc/include/uapi/asm/pdc.h2
-rw-r--r--arch/parisc/include/uapi/asm/ptrace.h48
-rw-r--r--arch/parisc/include/uapi/asm/unistd.h34
-rw-r--r--arch/parisc/kernel/entry.S13
-rw-r--r--arch/parisc/kernel/ftrace.c7
-rw-r--r--arch/parisc/kernel/process.c7
-rw-r--r--arch/parisc/kernel/ptrace.c368
-rw-r--r--arch/parisc/kernel/syscall.S1
-rw-r--r--arch/parisc/kernel/time.c63
-rw-r--r--arch/parisc/lib/bitops.c6
-rw-r--r--arch/parisc/math-emu/fpudispatch.c2
-rw-r--r--arch/powerpc/Kconfig13
-rw-r--r--arch/powerpc/Kconfig.debug8
-rw-r--r--arch/powerpc/boot/Makefile6
-rw-r--r--arch/powerpc/boot/dts/canyonlands.dts15
-rw-r--r--arch/powerpc/boot/dts/fsl/gef_ppc9a.dts4
-rw-r--r--arch/powerpc/boot/dts/fsl/gef_sbc310.dts22
-rw-r--r--arch/powerpc/boot/dts/fsl/gef_sbc610.dts4
-rw-r--r--arch/powerpc/boot/dts/fsl/mpc8641_hpcn.dts24
-rw-r--r--arch/powerpc/boot/dts/fsl/mpc8641_hpcn_36b.dts24
-rw-r--r--arch/powerpc/boot/dts/fsl/mpc8641si-post.dtsi41
-rw-r--r--arch/powerpc/boot/dts/fsl/mpc8641si-pre.dtsi1
-rw-r--r--arch/powerpc/boot/dts/fsl/sbc8641d.dts23
-rw-r--r--arch/powerpc/boot/dts/fsl/t1023si-post.dtsi2
-rw-r--r--arch/powerpc/boot/dts/fsl/t1040si-post.dtsi2
-rw-r--r--arch/powerpc/boot/dts/fsl/t104xrdb.dtsi2
-rw-r--r--arch/powerpc/boot/dts/fsl/t208xrdb.dtsi2
-rw-r--r--arch/powerpc/include/asm/book3s/32/hash.h3
-rw-r--r--arch/powerpc/include/asm/book3s/32/mmu-hash.h6
-rw-r--r--arch/powerpc/include/asm/book3s/32/pgalloc.h109
-rw-r--r--arch/powerpc/include/asm/book3s/64/hash-4k.h136
-rw-r--r--arch/powerpc/include/asm/book3s/64/hash-64k.h215
-rw-r--r--arch/powerpc/include/asm/book3s/64/hash.h485
-rw-r--r--arch/powerpc/include/asm/book3s/64/hugetlb-radix.h14
-rw-r--r--arch/powerpc/include/asm/book3s/64/mmu-hash.h79
-rw-r--r--arch/powerpc/include/asm/book3s/64/mmu.h137
-rw-r--r--arch/powerpc/include/asm/book3s/64/pgalloc.h207
-rw-r--r--arch/powerpc/include/asm/book3s/64/pgtable-4k.h53
-rw-r--r--arch/powerpc/include/asm/book3s/64/pgtable-64k.h64
-rw-r--r--arch/powerpc/include/asm/book3s/64/pgtable.h817
-rw-r--r--arch/powerpc/include/asm/book3s/64/radix-4k.h12
-rw-r--r--arch/powerpc/include/asm/book3s/64/radix-64k.h12
-rw-r--r--arch/powerpc/include/asm/book3s/64/radix.h232
-rw-r--r--arch/powerpc/include/asm/book3s/64/tlbflush-hash.h41
-rw-r--r--arch/powerpc/include/asm/book3s/64/tlbflush-radix.h33
-rw-r--r--arch/powerpc/include/asm/book3s/64/tlbflush.h76
-rw-r--r--arch/powerpc/include/asm/book3s/pgalloc.h19
-rw-r--r--arch/powerpc/include/asm/hugetlb.h14
-rw-r--r--arch/powerpc/include/asm/kvm_book3s_64.h36
-rw-r--r--arch/powerpc/include/asm/kvm_host.h5
-rw-r--r--arch/powerpc/include/asm/livepatch.h62
-rw-r--r--arch/powerpc/include/asm/machdep.h1
-rw-r--r--arch/powerpc/include/asm/mmu.h51
-rw-r--r--arch/powerpc/include/asm/mmu_context.h29
-rw-r--r--arch/powerpc/include/asm/nohash/32/pgalloc.h (renamed from arch/powerpc/include/asm/pgalloc-32.h)0
-rw-r--r--arch/powerpc/include/asm/nohash/64/pgalloc.h (renamed from arch/powerpc/include/asm/pgalloc-64.h)92
-rw-r--r--arch/powerpc/include/asm/nohash/64/pgtable.h10
-rw-r--r--arch/powerpc/include/asm/nohash/pgalloc.h23
-rw-r--r--arch/powerpc/include/asm/opal-api.h16
-rw-r--r--arch/powerpc/include/asm/page.h14
-rw-r--r--arch/powerpc/include/asm/page_64.h5
-rw-r--r--arch/powerpc/include/asm/pci-bridge.h41
-rw-r--r--arch/powerpc/include/asm/pgalloc.h19
-rw-r--r--arch/powerpc/include/asm/pgtable-be-types.h92
-rw-r--r--arch/powerpc/include/asm/pgtable-types.h58
-rw-r--r--arch/powerpc/include/asm/pgtable.h1
-rw-r--r--arch/powerpc/include/asm/ppc-opcode.h2
-rw-r--r--arch/powerpc/include/asm/ppc-pci.h6
-rw-r--r--arch/powerpc/include/asm/ppc_asm.h3
-rw-r--r--arch/powerpc/include/asm/pte-common.h26
-rw-r--r--arch/powerpc/include/asm/reg.h3
-rw-r--r--arch/powerpc/include/asm/thread_info.h4
-rw-r--r--arch/powerpc/include/asm/tlbflush.h3
-rw-r--r--arch/powerpc/include/uapi/asm/perf_regs.h50
-rw-r--r--arch/powerpc/kernel/asm-offsets.c8
-rw-r--r--arch/powerpc/kernel/btext.c2
-rw-r--r--arch/powerpc/kernel/cputable.c2
-rw-r--r--arch/powerpc/kernel/eeh.c9
-rw-r--r--arch/powerpc/kernel/eeh_driver.c46
-rw-r--r--arch/powerpc/kernel/eeh_event.c2
-rw-r--r--arch/powerpc/kernel/eeh_pe.c2
-rw-r--r--arch/powerpc/kernel/entry_64.S113
-rw-r--r--arch/powerpc/kernel/exceptions-64s.S163
-rw-r--r--arch/powerpc/kernel/ftrace.c10
-rw-r--r--arch/powerpc/kernel/head_64.S13
-rw-r--r--arch/powerpc/kernel/ibmebus.c2
-rw-r--r--arch/powerpc/kernel/irq.c3
-rw-r--r--arch/powerpc/kernel/isa-bridge.c4
-rw-r--r--arch/powerpc/kernel/machine_kexec.c18
-rw-r--r--arch/powerpc/kernel/machine_kexec_64.c15
-rw-r--r--arch/powerpc/kernel/mce.c4
-rw-r--r--arch/powerpc/kernel/mce_power.c13
-rw-r--r--arch/powerpc/kernel/misc_32.S6
-rw-r--r--arch/powerpc/kernel/nvram_64.c12
-rw-r--r--arch/powerpc/kernel/pci-hotplug.c47
-rw-r--r--arch/powerpc/kernel/pci_64.c5
-rw-r--r--arch/powerpc/kernel/pci_dn.c66
-rw-r--r--arch/powerpc/kernel/process.c27
-rw-r--r--arch/powerpc/kernel/prom.c2
-rw-r--r--arch/powerpc/kernel/rtasd.c2
-rw-r--r--arch/powerpc/kernel/setup-common.c6
-rw-r--r--arch/powerpc/kernel/setup_64.c17
-rw-r--r--arch/powerpc/kernel/smp.c2
-rw-r--r--arch/powerpc/kernel/swsusp.c2
-rw-r--r--arch/powerpc/kernel/time.c1
-rw-r--r--arch/powerpc/kernel/vdso.c3
-rw-r--r--arch/powerpc/kernel/vio.c4
-rw-r--r--arch/powerpc/kvm/book3s.c1
-rw-r--r--arch/powerpc/kvm/book3s_64_mmu_hv.c11
-rw-r--r--arch/powerpc/kvm/book3s_hv.c7
-rw-r--r--arch/powerpc/kvm/book3s_hv_rm_mmu.c12
-rw-r--r--arch/powerpc/kvm/book3s_pr.c38
-rw-r--r--arch/powerpc/kvm/book3s_xics.c29
-rw-r--r--arch/powerpc/kvm/book3s_xics.h1
-rw-r--r--arch/powerpc/kvm/booke.c1
-rw-r--r--arch/powerpc/kvm/powerpc.c22
-rw-r--r--arch/powerpc/lib/copy_32.S2
-rw-r--r--arch/powerpc/lib/sstep.c5
-rw-r--r--arch/powerpc/lib/xor_vmx.c10
-rw-r--r--arch/powerpc/mm/Makefile10
-rw-r--r--arch/powerpc/mm/fsl_booke_mmu.c2
-rw-r--r--arch/powerpc/mm/hash64_4k.c29
-rw-r--r--arch/powerpc/mm/hash64_64k.c71
-rw-r--r--arch/powerpc/mm/hash_native_64.c11
-rw-r--r--arch/powerpc/mm/hash_utils_64.c190
-rw-r--r--arch/powerpc/mm/hugepage-hash64.c22
-rw-r--r--arch/powerpc/mm/hugetlbpage-hash64.c29
-rw-r--r--arch/powerpc/mm/hugetlbpage-radix.c87
-rw-r--r--arch/powerpc/mm/hugetlbpage.c26
-rw-r--r--arch/powerpc/mm/init_64.c73
-rw-r--r--arch/powerpc/mm/mem.c40
-rw-r--r--arch/powerpc/mm/mmap.c110
-rw-r--r--arch/powerpc/mm/mmu_context_book3s64.c (renamed from arch/powerpc/mm/mmu_context_hash64.c)52
-rw-r--r--arch/powerpc/mm/mmu_context_nohash.c6
-rw-r--r--arch/powerpc/mm/mmu_decl.h5
-rw-r--r--arch/powerpc/mm/pgtable-book3e.c122
-rw-r--r--arch/powerpc/mm/pgtable-book3s64.c118
-rw-r--r--arch/powerpc/mm/pgtable-hash64.c342
-rw-r--r--arch/powerpc/mm/pgtable-radix.c526
-rw-r--r--arch/powerpc/mm/pgtable.c24
-rw-r--r--arch/powerpc/mm/pgtable_64.c557
-rw-r--r--arch/powerpc/mm/slb.c1
-rw-r--r--arch/powerpc/mm/slb_low.S54
-rw-r--r--arch/powerpc/mm/slice.c20
-rw-r--r--arch/powerpc/mm/tlb-radix.c251
-rw-r--r--arch/powerpc/mm/tlb_hash64.c6
-rw-r--r--arch/powerpc/perf/Makefile2
-rw-r--r--arch/powerpc/perf/callchain.c22
-rw-r--r--arch/powerpc/perf/perf_regs.c104
-rw-r--r--arch/powerpc/perf/power8-events-list.h40
-rw-r--r--arch/powerpc/perf/power8-pmu.c41
-rw-r--r--arch/powerpc/platforms/52xx/mpc52xx_gpt.c15
-rw-r--r--arch/powerpc/platforms/83xx/mcu_mpc8349emitx.c6
-rw-r--r--arch/powerpc/platforms/Kconfig.cputype11
-rw-r--r--arch/powerpc/platforms/cell/spu_base.c9
-rw-r--r--arch/powerpc/platforms/cell/spufs/coredump.c5
-rw-r--r--arch/powerpc/platforms/cell/spufs/fault.c4
-rw-r--r--arch/powerpc/platforms/cell/spufs/inode.c2
-rw-r--r--arch/powerpc/platforms/powernv/eeh-powernv.c69
-rw-r--r--arch/powerpc/platforms/powernv/npu-dma.c283
-rw-r--r--arch/powerpc/platforms/powernv/opal-hmi.c8
-rw-r--r--arch/powerpc/platforms/powernv/pci-ioda.c953
-rw-r--r--arch/powerpc/platforms/powernv/pci.c19
-rw-r--r--arch/powerpc/platforms/powernv/pci.h72
-rw-r--r--arch/powerpc/platforms/powernv/setup.c5
-rw-r--r--arch/powerpc/platforms/ps3/htab.c2
-rw-r--r--arch/powerpc/platforms/ps3/spu.c4
-rw-r--r--arch/powerpc/platforms/pseries/hotplug-memory.c225
-rw-r--r--arch/powerpc/platforms/pseries/iommu.c24
-rw-r--r--arch/powerpc/platforms/pseries/lpar.c20
-rw-r--r--arch/powerpc/platforms/pseries/lparcfg.c3
-rw-r--r--arch/powerpc/platforms/pseries/mobility.c4
-rw-r--r--arch/powerpc/platforms/pseries/msi.c4
-rw-r--r--arch/powerpc/platforms/pseries/pci_dlpar.c32
-rw-r--r--arch/powerpc/platforms/pseries/reconfig.c5
-rw-r--r--arch/powerpc/platforms/pseries/setup.c4
-rw-r--r--arch/powerpc/sysdev/axonram.c2
-rw-r--r--arch/powerpc/sysdev/cpm1.c36
-rw-r--r--arch/powerpc/sysdev/cpm_common.c18
-rw-r--r--arch/powerpc/sysdev/fsl_pci.c24
-rw-r--r--arch/powerpc/sysdev/mpic.c9
-rw-r--r--arch/powerpc/sysdev/ppc4xx_gpio.c17
-rw-r--r--arch/powerpc/sysdev/simple_gpio.c13
-rw-r--r--arch/powerpc/xmon/Makefile2
-rw-r--r--arch/powerpc/xmon/spr_access.S45
-rw-r--r--arch/powerpc/xmon/xmon.c138
-rw-r--r--arch/s390/Kconfig8
-rw-r--r--arch/s390/boot/compressed/Makefile1
-rw-r--r--arch/s390/crypto/aes_s390.c117
-rw-r--r--arch/s390/crypto/crypt_s390.h493
-rw-r--r--arch/s390/crypto/des_s390.c72
-rw-r--r--arch/s390/crypto/ghash_s390.c16
-rw-r--r--arch/s390/crypto/prng.c60
-rw-r--r--arch/s390/crypto/sha1_s390.c10
-rw-r--r--arch/s390/crypto/sha256_s390.c14
-rw-r--r--arch/s390/crypto/sha512_s390.c14
-rw-r--r--arch/s390/crypto/sha_common.c10
-rw-r--r--arch/s390/include/asm/cpacf.h410
-rw-r--r--arch/s390/include/asm/fpu/types.h10
-rw-r--r--arch/s390/include/asm/ftrace.h4
-rw-r--r--arch/s390/include/asm/kvm_host.h11
-rw-r--r--arch/s390/include/asm/livepatch.h7
-rw-r--r--arch/s390/include/asm/pci.h35
-rw-r--r--arch/s390/include/asm/pgtable.h1
-rw-r--r--arch/s390/include/asm/processor.h9
-rw-r--r--arch/s390/include/asm/rwsem.h18
-rw-r--r--arch/s390/include/asm/sclp.h14
-rw-r--r--arch/s390/include/asm/sigp.h1
-rw-r--r--arch/s390/include/asm/thread_info.h1
-rw-r--r--arch/s390/include/uapi/asm/dasd.h32
-rw-r--r--arch/s390/include/uapi/asm/kvm.h1
-rw-r--r--arch/s390/include/uapi/asm/sie.h7
-rw-r--r--arch/s390/kernel/cache.c2
-rw-r--r--arch/s390/kernel/crash_dump.c2
-rw-r--r--arch/s390/kernel/dumpstack.c24
-rw-r--r--arch/s390/kernel/entry.h4
-rw-r--r--arch/s390/kernel/machine_kexec.c28
-rw-r--r--arch/s390/kernel/module.c6
-rw-r--r--arch/s390/kernel/perf_cpum_cf.c9
-rw-r--r--arch/s390/kernel/perf_cpum_sf.c9
-rw-r--r--arch/s390/kernel/perf_event.c4
-rw-r--r--arch/s390/kernel/process.c29
-rw-r--r--arch/s390/kernel/processor.c16
-rw-r--r--arch/s390/kernel/setup.c17
-rw-r--r--arch/s390/kernel/smp.c2
-rw-r--r--arch/s390/kernel/vdso.c3
-rw-r--r--arch/s390/kernel/vtime.c2
-rw-r--r--arch/s390/kvm/Kconfig1
-rw-r--r--arch/s390/kvm/interrupt.c47
-rw-r--r--arch/s390/kvm/kvm-s390.c61
-rw-r--r--arch/s390/kvm/priv.c21
-rw-r--r--arch/s390/kvm/sigp.c6
-rw-r--r--arch/s390/mm/fault.c41
-rw-r--r--arch/s390/mm/mmap.c1
-rw-r--r--arch/s390/mm/vmem.c8
-rw-r--r--arch/s390/net/bpf_jit_comp.c77
-rw-r--r--arch/s390/pci/pci_debug.c61
-rw-r--r--arch/s390/pci/pci_sysfs.c23
-rw-r--r--arch/score/Kconfig1
-rw-r--r--arch/score/include/uapi/asm/unistd.h1
-rw-r--r--arch/score/kernel/process.c2
-rw-r--r--arch/sh/Kconfig3
-rw-r--r--arch/sh/boards/board-sh7757lcr.c1
-rw-r--r--arch/sh/boards/mach-ap325rxa/setup.c1
-rw-r--r--arch/sh/boards/mach-ecovec24/setup.c1
-rw-r--r--arch/sh/boards/mach-kfr2r09/setup.c1
-rw-r--r--arch/sh/boards/mach-migor/setup.c1
-rw-r--r--arch/sh/boards/mach-sdk7786/gpio.c4
-rw-r--r--arch/sh/boards/mach-se/7724/setup.c1
-rw-r--r--arch/sh/boards/mach-x3proto/gpio.c4
-rw-r--r--arch/sh/boot/compressed/Makefile1
-rw-r--r--arch/sh/boot/romimage/Makefile1
-rw-r--r--arch/sh/configs/apsh4ad0a_defconfig1
-rw-r--r--arch/sh/configs/sdk7786_defconfig1
-rw-r--r--arch/sh/configs/se7206_defconfig1
-rw-r--r--arch/sh/configs/shx3_defconfig1
-rw-r--r--arch/sh/configs/urquell_defconfig1
-rw-r--r--arch/sh/include/asm/Kbuild1
-rw-r--r--arch/sh/include/asm/rwsem.h132
-rw-r--r--arch/sh/kernel/perf_callchain.c4
-rw-r--r--arch/sh/kernel/process_32.c7
-rw-r--r--arch/sh/kernel/process_64.c5
-rw-r--r--arch/sh/kernel/vsyscall/vsyscall.c4
-rw-r--r--arch/sparc/Kconfig6
-rw-r--r--arch/sparc/include/asm/Kbuild1
-rw-r--r--arch/sparc/include/asm/head_32.h8
-rw-r--r--arch/sparc/include/asm/kgdb.h2
-rw-r--r--arch/sparc/include/asm/page_32.h2
-rw-r--r--arch/sparc/include/asm/pgalloc_32.h4
-rw-r--r--arch/sparc/include/asm/pgtable_32.h2
-rw-r--r--arch/sparc/include/asm/pgtable_64.h45
-rw-r--r--arch/sparc/include/asm/rwsem.h124
-rw-r--r--arch/sparc/include/asm/tlbflush_64.h3
-rw-r--r--arch/sparc/kernel/entry.S10
-rw-r--r--arch/sparc/kernel/kernel.h1
-rw-r--r--arch/sparc/kernel/kgdb_32.c11
-rw-r--r--arch/sparc/kernel/perf_event.c14
-rw-r--r--arch/sparc/kernel/process_32.c12
-rw-r--r--arch/sparc/kernel/process_64.c4
-rw-r--r--arch/sparc/kernel/setup_32.c4
-rw-r--r--arch/sparc/mm/hugetlbpage.c33
-rw-r--r--arch/sparc/mm/init_64.c12
-rw-r--r--arch/sparc/mm/io-unit.c4
-rw-r--r--arch/sparc/mm/srmmu.c19
-rw-r--r--arch/sparc/mm/tlb.c25
-rw-r--r--arch/sparc/mm/tsb.c32
-rw-r--r--arch/tile/Kconfig69
-rw-r--r--arch/tile/configs/tilegx_defconfig5
-rw-r--r--arch/tile/configs/tilepro_defconfig5
-rw-r--r--arch/tile/gxio/mpipe.c2
-rw-r--r--arch/tile/include/asm/atomic_64.h17
-rw-r--r--arch/tile/include/asm/pgtable.h1
-rw-r--r--arch/tile/include/uapi/asm/unistd.h1
-rw-r--r--arch/tile/kernel/pci_gx.c4
-rw-r--r--arch/tile/kernel/perf_event.c6
-rw-r--r--arch/tile/kernel/process.c4
-rw-r--r--arch/tile/kernel/setup.c4
-rw-r--r--arch/tile/kernel/unaligned.c4
-rw-r--r--arch/tile/mm/hugetlbpage.c7
-rw-r--r--arch/tile/mm/init.c2
-rw-r--r--arch/um/configs/i386_defconfig1
-rw-r--r--arch/um/configs/x86_64_defconfig1
-rw-r--r--arch/um/drivers/net_kern.c4
-rw-r--r--arch/um/drivers/ubd_kern.c2
-rw-r--r--arch/um/include/shared/registers.h2
-rw-r--r--arch/um/kernel/process.c6
-rw-r--r--arch/um/os-Linux/signal.c28
-rw-r--r--arch/unicore32/boot/Makefile2
-rw-r--r--arch/unicore32/boot/compressed/Makefile1
-rw-r--r--arch/unicore32/include/uapi/asm/unistd.h2
-rw-r--r--arch/unicore32/kernel/gpio.c4
-rw-r--r--arch/unicore32/kernel/process.c7
-rw-r--r--arch/x86/Kconfig91
-rw-r--r--arch/x86/Makefile3
-rw-r--r--arch/x86/boot/Makefile13
-rw-r--r--arch/x86/boot/compressed/Makefile24
-rw-r--r--arch/x86/boot/compressed/aslr.c339
-rw-r--r--arch/x86/boot/compressed/cmdline.c4
-rw-r--r--arch/x86/boot/compressed/eboot.c308
-rw-r--r--arch/x86/boot/compressed/eboot.h74
-rw-r--r--arch/x86/boot/compressed/error.c22
-rw-r--r--arch/x86/boot/compressed/error.h7
-rw-r--r--arch/x86/boot/compressed/head_32.S22
-rw-r--r--arch/x86/boot/compressed/head_64.S19
-rw-r--r--arch/x86/boot/compressed/kaslr.c510
-rw-r--r--arch/x86/boot/compressed/misc.c188
-rw-r--r--arch/x86/boot/compressed/misc.h27
-rw-r--r--arch/x86/boot/compressed/mkpiggy.c34
-rw-r--r--arch/x86/boot/compressed/pagetable.c129
-rw-r--r--arch/x86/boot/compressed/string.c37
-rw-r--r--arch/x86/boot/compressed/vmlinux.lds.S1
-rw-r--r--arch/x86/boot/early_serial_console.c4
-rw-r--r--arch/x86/boot/header.S109
-rw-r--r--arch/x86/configs/i386_defconfig1
-rw-r--r--arch/x86/configs/kvm_guest.config3
-rw-r--r--arch/x86/configs/x86_64_defconfig2
-rw-r--r--arch/x86/crypto/aesni-intel_glue.c2
-rw-r--r--arch/x86/crypto/camellia_aesni_avx2_glue.c5
-rw-r--r--arch/x86/crypto/camellia_aesni_avx_glue.c4
-rw-r--r--arch/x86/crypto/chacha20_glue.c3
-rw-r--r--arch/x86/crypto/poly1305_glue.c5
-rw-r--r--arch/x86/crypto/serpent_avx2_glue.c2
-rw-r--r--arch/x86/crypto/serpent_sse2_glue.c2
-rw-r--r--arch/x86/crypto/sha-mb/sha1_mb.c4
-rw-r--r--arch/x86/crypto/sha-mb/sha1_x8_avx2.S13
-rw-r--r--arch/x86/crypto/sha1_ssse3_glue.c2
-rw-r--r--arch/x86/crypto/sha256_ssse3_glue.c2
-rw-r--r--arch/x86/crypto/sha512_ssse3_glue.c2
-rw-r--r--arch/x86/entry/common.c2
-rw-r--r--arch/x86/entry/entry_32.S7
-rw-r--r--arch/x86/entry/entry_64.S21
-rw-r--r--arch/x86/entry/entry_64_compat.S45
-rw-r--r--arch/x86/entry/syscalls/syscall_32.tbl4
-rw-r--r--arch/x86/entry/syscalls/syscall_64.tbl2
-rw-r--r--arch/x86/entry/thunk_64.S11
-rw-r--r--arch/x86/entry/vdso/Makefile4
-rw-r--r--arch/x86/entry/vdso/vclock_gettime.c15
-rw-r--r--arch/x86/entry/vdso/vdso-layout.lds.S5
-rw-r--r--arch/x86/entry/vdso/vma.c14
-rw-r--r--arch/x86/events/Kconfig36
-rw-r--r--arch/x86/events/Makefile9
-rw-r--r--arch/x86/events/amd/uncore.c2
-rw-r--r--arch/x86/events/core.c24
-rw-r--r--arch/x86/events/intel/Makefile9
-rw-r--r--arch/x86/events/intel/bts.c105
-rw-r--r--arch/x86/events/intel/core.c162
-rw-r--r--arch/x86/events/intel/cstate.c547
-rw-r--r--arch/x86/events/intel/ds.c6
-rw-r--r--arch/x86/events/intel/lbr.c31
-rw-r--r--arch/x86/events/intel/p4.c2
-rw-r--r--arch/x86/events/intel/pt.c320
-rw-r--r--arch/x86/events/intel/pt.h68
-rw-r--r--arch/x86/events/intel/rapl.c183
-rw-r--r--arch/x86/events/intel/uncore.c220
-rw-r--r--arch/x86/events/intel/uncore_snbep.c7
-rw-r--r--arch/x86/events/msr.c38
-rw-r--r--arch/x86/events/perf_event.h5
-rw-r--r--arch/x86/ia32/ia32_aout.c29
-rw-r--r--arch/x86/ia32/ia32_signal.c2
-rw-r--r--arch/x86/include/asm/alternative.h35
-rw-r--r--arch/x86/include/asm/apic.h4
-rw-r--r--arch/x86/include/asm/bios_ebda.h21
-rw-r--r--arch/x86/include/asm/boot.h39
-rw-r--r--arch/x86/include/asm/bugs.h8
-rw-r--r--arch/x86/include/asm/clocksource.h9
-rw-r--r--arch/x86/include/asm/compat.h4
-rw-r--r--arch/x86/include/asm/cpufeature.h38
-rw-r--r--arch/x86/include/asm/cpufeatures.h12
-rw-r--r--arch/x86/include/asm/disabled-features.h6
-rw-r--r--arch/x86/include/asm/efi.h52
-rw-r--r--arch/x86/include/asm/elf.h6
-rw-r--r--arch/x86/include/asm/hugetlb.h2
-rw-r--r--arch/x86/include/asm/intel_telemetry.h2
-rw-r--r--arch/x86/include/asm/irq_work.h2
-rw-r--r--arch/x86/include/asm/kgdb.h2
-rw-r--r--arch/x86/include/asm/kvm_host.h32
-rw-r--r--arch/x86/include/asm/linkage.h34
-rw-r--r--arch/x86/include/asm/livepatch.h2
-rw-r--r--arch/x86/include/asm/mce.h19
-rw-r--r--arch/x86/include/asm/mmu_context.h101
-rw-r--r--arch/x86/include/asm/msr-index.h35
-rw-r--r--arch/x86/include/asm/msr.h20
-rw-r--r--arch/x86/include/asm/mtrr.h6
-rw-r--r--arch/x86/include/asm/page.h5
-rw-r--r--arch/x86/include/asm/page_64_types.h8
-rw-r--r--arch/x86/include/asm/paravirt.h56
-rw-r--r--arch/x86/include/asm/paravirt_types.h20
-rw-r--r--arch/x86/include/asm/pat.h2
-rw-r--r--arch/x86/include/asm/pgtable.h3
-rw-r--r--arch/x86/include/asm/pmc_core.h27
-rw-r--r--arch/x86/include/asm/processor.h13
-rw-r--r--arch/x86/include/asm/rwsem.h42
-rw-r--r--arch/x86/include/asm/segment.h49
-rw-r--r--arch/x86/include/asm/setup.h1
-rw-r--r--arch/x86/include/asm/svm.h12
-rw-r--r--arch/x86/include/asm/switch_to.h4
-rw-r--r--arch/x86/include/asm/text-patching.h40
-rw-r--r--arch/x86/include/asm/thread_info.h2
-rw-r--r--arch/x86/include/asm/tlbflush.h2
-rw-r--r--arch/x86/include/asm/tsc.h2
-rw-r--r--arch/x86/include/asm/uaccess.h15
-rw-r--r--arch/x86/include/asm/uaccess_32.h62
-rw-r--r--arch/x86/include/asm/uaccess_64.h7
-rw-r--r--arch/x86/include/asm/uv/bios.h59
-rw-r--r--arch/x86/include/asm/uv/uv_bau.h2
-rw-r--r--arch/x86/include/asm/uv/uv_hub.h409
-rw-r--r--arch/x86/include/asm/uv/uv_mmrs.h2207
-rw-r--r--arch/x86/include/asm/x86_init.h50
-rw-r--r--arch/x86/include/asm/xor_32.h2
-rw-r--r--arch/x86/include/asm/xor_avx.h4
-rw-r--r--arch/x86/include/uapi/asm/bootparam.h41
-rw-r--r--arch/x86/include/uapi/asm/kvm.h6
-rw-r--r--arch/x86/include/uapi/asm/svm.h49
-rw-r--r--arch/x86/kernel/Makefile7
-rw-r--r--arch/x86/kernel/acpi/boot.c18
-rw-r--r--arch/x86/kernel/alternative.c1
-rw-r--r--arch/x86/kernel/apic/apic.c32
-rw-r--r--arch/x86/kernel/apic/apic_noop.c4
-rw-r--r--arch/x86/kernel/apic/hw_nmi.c1
-rw-r--r--arch/x86/kernel/apic/io_apic.c2
-rw-r--r--arch/x86/kernel/apic/ipi.c2
-rw-r--r--arch/x86/kernel/apic/vector.c2
-rw-r--r--arch/x86/kernel/apic/x2apic_uv_x.c850
-rw-r--r--arch/x86/kernel/apm_32.c2
-rw-r--r--arch/x86/kernel/asm-offsets.c1
-rw-r--r--arch/x86/kernel/cpu/amd.c20
-rw-r--r--arch/x86/kernel/cpu/common.c99
-rw-r--r--arch/x86/kernel/cpu/cyrix.c2
-rw-r--r--arch/x86/kernel/cpu/intel.c53
-rw-r--r--arch/x86/kernel/cpu/mcheck/mce-genpool.c46
-rw-r--r--arch/x86/kernel/cpu/mcheck/mce-internal.h15
-rw-r--r--arch/x86/kernel/cpu/mcheck/mce-severity.c30
-rw-r--r--drivers/media/platform/vsp1/vsp1_rpf.c275
-rw-r--r--drivers/media/platform/vsp1/vsp1_rwpf.c171
-rw-r--r--drivers/media/platform/vsp1/vsp1_rwpf.h64
-rw-r--r--drivers/media/platform/vsp1/vsp1_sru.c214
-rw-r--r--drivers/media/platform/vsp1/vsp1_sru.h2
-rw-r--r--drivers/media/platform/vsp1/vsp1_uds.c223
-rw-r--r--drivers/media/platform/vsp1/vsp1_uds.h3
-rw-r--r--drivers/media/platform/vsp1/vsp1_video.c493
-rw-r--r--drivers/media/platform/vsp1/vsp1_video.h2
-rw-r--r--drivers/media/platform/vsp1/vsp1_wpf.c279
-rw-r--r--drivers/media/platform/xilinx/xilinx-vipp.c8
-rw-r--r--drivers/media/rc/ati_remote.c11
-rw-r--r--drivers/media/rc/mceusb.c6
-rw-r--r--drivers/media/rc/rc-main.c9
-rw-r--r--drivers/media/tuners/qm1d1c0042.c38
-rw-r--r--drivers/media/tuners/si2157.c19
-rw-r--r--drivers/media/tuners/si2157_priv.h1
-rw-r--r--drivers/media/usb/au0828/au0828-core.c38
-rw-r--r--drivers/media/usb/au0828/au0828-video.c4
-rw-r--r--drivers/media/usb/au0828/au0828.h1
-rw-r--r--drivers/media/usb/cx231xx/cx231xx-417.c31
-rw-r--r--drivers/media/usb/cx231xx/cx231xx-core.c9
-rw-r--r--drivers/media/usb/cx231xx/cx231xx-i2c.c47
-rw-r--r--drivers/media/usb/cx231xx/cx231xx.h4
-rw-r--r--drivers/media/usb/dvb-usb-v2/af9015.c2
-rw-r--r--drivers/media/usb/dvb-usb-v2/af9035.h24
-rw-r--r--drivers/media/usb/dvb-usb-v2/rtl28xxu.c5
-rw-r--r--drivers/media/usb/dvb-usb/az6027.c7
-rw-r--r--drivers/media/usb/dvb-usb/dib0700_core.c2
-rw-r--r--drivers/media/usb/dvb-usb/dib0700_devices.c4
-rw-r--r--drivers/media/usb/dvb-usb/dibusb-common.c4
-rw-r--r--drivers/media/usb/dvb-usb/dw2102.c63
-rw-r--r--drivers/media/usb/dvb-usb/pctv452e.c4
-rw-r--r--drivers/media/usb/em28xx/Kconfig2
-rw-r--r--drivers/media/usb/em28xx/em28xx-cards.c88
-rw-r--r--drivers/media/usb/em28xx/em28xx-dvb.c185
-rw-r--r--drivers/media/usb/em28xx/em28xx-reg.h13
-rw-r--r--drivers/media/usb/em28xx/em28xx.h3
-rw-r--r--drivers/media/usb/go7007/go7007-v4l2.c2
-rw-r--r--drivers/media/usb/pvrusb2/pvrusb2-hdw.c9
-rw-r--r--drivers/media/v4l2-core/v4l2-compat-ioctl32.c3
-rw-r--r--drivers/media/v4l2-core/v4l2-dev.c1
-rw-r--r--drivers/media/v4l2-core/v4l2-ioctl.c73
-rw-r--r--drivers/media/v4l2-core/v4l2-mc.c2
-rw-r--r--drivers/media/v4l2-core/v4l2-subdev.c44
-rw-r--r--drivers/media/v4l2-core/videobuf2-v4l2.c6
-rw-r--r--drivers/memory/Kconfig2
-rw-r--r--drivers/memory/Makefile1
-rw-r--r--drivers/memory/fsl_ifc.c36
-rw-r--r--drivers/memory/of_memory.c2
-rw-r--r--drivers/memory/omap-gpmc.c657
-rw-r--r--drivers/memory/samsung/Kconfig13
-rw-r--r--drivers/memory/samsung/Makefile1
-rw-r--r--drivers/memory/samsung/exynos-srom.c231
-rw-r--r--drivers/memory/samsung/exynos-srom.h51
-rw-r--r--drivers/memstick/core/ms_block.c16
-rw-r--r--drivers/memstick/core/ms_block.h2
-rw-r--r--drivers/memstick/core/mspro_block.c3
-rw-r--r--drivers/memstick/host/rtsx_usb_ms.c2
-rw-r--r--drivers/message/fusion/mptlan.c2
-rw-r--r--drivers/message/fusion/mptsas.c4
-rw-r--r--drivers/message/fusion/mptspi.c2
-rw-r--r--drivers/mfd/Kconfig33
-rw-r--r--drivers/mfd/Makefile2
-rw-r--r--drivers/mfd/ab8500-debugfs.c2
-rw-r--r--drivers/mfd/act8945a.c13
-rw-r--r--drivers/mfd/arizona-core.c10
-rw-r--r--drivers/mfd/arizona-irq.c3
-rw-r--r--drivers/mfd/as3711.c13
-rw-r--r--drivers/mfd/as3722.c31
-rw-r--r--drivers/mfd/asic3.c10
-rw-r--r--drivers/mfd/atmel-hlcdc.c14
-rw-r--r--drivers/mfd/axp20x-rsb.c1
-rw-r--r--drivers/mfd/axp20x.c90
-rw-r--r--drivers/mfd/bcm590xx.c11
-rw-r--r--drivers/mfd/da9063-irq.c8
-rw-r--r--drivers/mfd/dm355evm_msp.c10
-rw-r--r--drivers/mfd/hi6421-pmic-core.c12
-rw-r--r--drivers/mfd/hi655x-pmic.c162
-rw-r--r--drivers/mfd/htc-egpio.c10
-rw-r--r--drivers/mfd/htc-i2cpld.c15
-rw-r--r--drivers/mfd/intel-lpss-acpi.c12
-rw-r--r--drivers/mfd/intel-lpss-pci.c20
-rw-r--r--drivers/mfd/intel-lpss.c25
-rw-r--r--drivers/mfd/intel-lpss.h4
-rw-r--r--drivers/mfd/intel_quark_i2c_gpio.c50
-rw-r--r--drivers/mfd/intel_soc_pmic_core.c1
-rw-r--r--drivers/mfd/lp3943.c14
-rw-r--r--drivers/mfd/lp8788-irq.c2
-rw-r--r--drivers/mfd/max77620.c590
-rw-r--r--drivers/mfd/max77686.c46
-rw-r--r--drivers/mfd/max77693.c16
-rw-r--r--drivers/mfd/menf21bmc.c11
-rw-r--r--drivers/mfd/mfd-core.c44
-rw-r--r--drivers/mfd/mt6397-core.c40
-rw-r--r--drivers/mfd/omap-usb-tll.c13
-rw-r--r--drivers/mfd/rc5t583-irq.c11
-rw-r--r--drivers/mfd/rc5t583.c24
-rw-r--r--drivers/mfd/rdc321x-southbridge.c13
-rw-r--r--drivers/mfd/rk808.c7
-rw-r--r--drivers/mfd/rn5t618.c5
-rw-r--r--drivers/mfd/rt5033.c14
-rw-r--r--drivers/mfd/sec-core.c20
-rw-r--r--drivers/mfd/sec-irq.c14
-rw-r--r--drivers/mfd/sky81452.c10
-rw-r--r--drivers/mfd/sm501.c15
-rw-r--r--drivers/mfd/smsc-ece1099.c10
-rw-r--r--drivers/mfd/stw481x.c11
-rw-r--r--drivers/mfd/tc6393xb.c14
-rw-r--r--drivers/mfd/tps6105x.c1
-rw-r--r--drivers/mfd/tps65010.c8
-rw-r--r--drivers/mfd/tps6507x.c13
-rw-r--r--drivers/mfd/tps65217.c14
-rw-r--r--drivers/mfd/tps65910.c34
-rw-r--r--drivers/mfd/twl4030-irq.c2
-rw-r--r--drivers/mfd/twl4030-power.c1
-rw-r--r--drivers/mfd/twl6040.c8
-rw-r--r--drivers/mfd/ucb1x00-core.c14
-rw-r--r--drivers/mfd/vexpress-sysreg.c2
-rw-r--r--drivers/mfd/wl1273-core.c14
-rw-r--r--drivers/mfd/wm5110-tables.c1
-rw-r--r--drivers/mfd/wm8400-core.c52
-rw-r--r--drivers/misc/cxl/api.c28
-rw-r--r--drivers/misc/cxl/context.c3
-rw-r--r--drivers/misc/cxl/cxl.h15
-rw-r--r--drivers/misc/cxl/fault.c10
-rw-r--r--drivers/misc/cxl/guest.c78
-rw-r--r--drivers/misc/cxl/native.c29
-rw-r--r--drivers/misc/cxl/pci.c64
-rw-r--r--drivers/misc/cxl/sysfs.c10
-rw-r--r--drivers/misc/eeprom/Kconfig2
-rw-r--r--drivers/misc/eeprom/at24.c112
-rw-r--r--drivers/misc/eeprom/at25.c91
-rw-r--r--drivers/misc/eeprom/eeprom_93xx46.c92
-rw-r--r--drivers/misc/mei/amthif.c4
-rw-r--r--drivers/misc/mei/bus.c42
-rw-r--r--drivers/misc/mei/client.c30
-rw-r--r--drivers/misc/mei/hbm.c26
-rw-r--r--drivers/misc/mei/interrupt.c6
-rw-r--r--drivers/misc/mei/mei_dev.h4
-rw-r--r--drivers/misc/mic/Kconfig1
-rw-r--r--drivers/misc/mic/host/mic_boot.c6
-rw-r--r--drivers/misc/mic/scif/scif_fence.c3
-rw-r--r--drivers/misc/qcom-coincell.c3
-rw-r--r--drivers/misc/sgi-gru/grukservices.c38
-rw-r--r--drivers/misc/sram.c4
-rw-r--r--drivers/misc/ti-st/st_kim.c1
-rw-r--r--drivers/mmc/card/block.c71
-rw-r--r--drivers/mmc/card/sdio_uart.c2
-rw-r--r--drivers/mmc/core/Kconfig21
-rw-r--r--drivers/mmc/core/Makefile4
-rw-r--r--drivers/mmc/core/core.c16
-rw-r--r--drivers/mmc/core/host.c46
-rw-r--r--drivers/mmc/core/mmc.c68
-rw-r--r--drivers/mmc/core/pwrseq.c108
-rw-r--r--drivers/mmc/core/pwrseq.h19
-rw-r--r--drivers/mmc/core/pwrseq_emmc.c81
-rw-r--r--drivers/mmc/core/pwrseq_simple.c91
-rw-r--r--drivers/mmc/core/sdio_cis.c7
-rw-r--r--drivers/mmc/host/Kconfig4
-rw-r--r--drivers/mmc/host/atmel-mci.c9
-rw-r--r--drivers/mmc/host/davinci_mmc.c151
-rw-r--r--drivers/mmc/host/dw_mmc-exynos.c23
-rw-r--r--drivers/mmc/host/dw_mmc-rockchip.c107
-rw-r--r--drivers/mmc/host/dw_mmc.c23
-rw-r--r--drivers/mmc/host/dw_mmc.h2
-rw-r--r--drivers/mmc/host/mmci.c20
-rw-r--r--drivers/mmc/host/mtk-sd.c19
-rw-r--r--drivers/mmc/host/omap.c48
-rw-r--r--drivers/mmc/host/omap_hsmmc.c90
-rw-r--r--drivers/mmc/host/sdhci-acpi.c22
-rw-r--r--drivers/mmc/host/sdhci-esdhc-imx.c2
-rw-r--r--drivers/mmc/host/sdhci-of-arasan.c26
-rw-r--r--drivers/mmc/host/sdhci-of-at91.c76
-rw-r--r--drivers/mmc/host/sdhci-pci-core.c14
-rw-r--r--drivers/mmc/host/sdhci-pic32.c1
-rw-r--r--drivers/mmc/host/sdhci-pltfm.c42
-rw-r--r--drivers/mmc/host/sdhci.c324
-rw-r--r--drivers/mmc/host/sdhci.h10
-rw-r--r--drivers/mmc/host/sh_mmcif.c74
-rw-r--r--drivers/mmc/host/sh_mobile_sdhi.c194
-rw-r--r--drivers/mmc/host/tmio_mmc.h75
-rw-r--r--drivers/mmc/host/tmio_mmc_dma.c1
-rw-r--r--drivers/mmc/host/tmio_mmc_pio.c196
-rw-r--r--drivers/mmc/host/toshsd.c1
-rw-r--r--drivers/mmc/host/usdhi6rol0.c60
-rw-r--r--drivers/mtd/chips/Kconfig1
-rw-r--r--drivers/mtd/devices/bcm47xxsflash.c29
-rw-r--r--drivers/mtd/devices/bcm47xxsflash.h3
-rw-r--r--drivers/mtd/devices/docg3.c46
-rw-r--r--drivers/mtd/devices/m25p80.c22
-rw-r--r--drivers/mtd/devices/pmc551.c2
-rw-r--r--drivers/mtd/maps/Kconfig10
-rw-r--r--drivers/mtd/maps/Makefile3
-rw-r--r--drivers/mtd/maps/ck804xrom.c4
-rw-r--r--drivers/mtd/maps/esb2rom.c4
-rw-r--r--drivers/mtd/maps/ichxrom.c4
-rw-r--r--drivers/mtd/maps/physmap_of.c6
-rw-r--r--drivers/mtd/maps/physmap_of_versatile.c255
-rw-r--r--drivers/mtd/maps/physmap_of_versatile.h16
-rw-r--r--drivers/mtd/maps/pxa2xx-flash.c6
-rw-r--r--drivers/mtd/maps/uclinux.c27
-rw-r--r--drivers/mtd/mtd_blkdevs.c2
-rw-r--r--drivers/mtd/mtdchar.c123
-rw-r--r--drivers/mtd/mtdconcat.c2
-rw-r--r--drivers/mtd/mtdcore.c379
-rw-r--r--drivers/mtd/mtdpart.c23
-rw-r--r--drivers/mtd/nand/ams-delta.c1
-rw-r--r--drivers/mtd/nand/atmel_nand.c350
-rw-r--r--drivers/mtd/nand/atmel_nand_nfc.h3
-rw-r--r--drivers/mtd/nand/au1550nd.c1
-rw-r--r--drivers/mtd/nand/bf5xx_nand.c52
-rw-r--r--drivers/mtd/nand/brcmnand/brcmnand.c290
-rw-r--r--drivers/mtd/nand/cafe_nand.c44
-rw-r--r--drivers/mtd/nand/cmx270_nand.c1
-rw-r--r--drivers/mtd/nand/davinci_nand.c210
-rw-r--r--drivers/mtd/nand/denali.c50
-rw-r--r--drivers/mtd/nand/diskonchip.c60
-rw-r--r--drivers/mtd/nand/docg4.c33
-rw-r--r--drivers/mtd/nand/fsl_elbc_nand.c84
-rw-r--r--drivers/mtd/nand/fsl_ifc_nand.c317
-rw-r--r--drivers/mtd/nand/fsl_upm.c1
-rw-r--r--drivers/mtd/nand/fsmc_nand.c332
-rw-r--r--drivers/mtd/nand/gpio.c1
-rw-r--r--drivers/mtd/nand/gpmi-nand/gpmi-nand.c161
-rw-r--r--drivers/mtd/nand/hisi504_nand.c40
-rw-r--r--drivers/mtd/nand/jz4740_nand.c3
-rw-r--r--drivers/mtd/nand/jz4780_bch.c1
-rw-r--r--drivers/mtd/nand/jz4780_nand.c21
-rw-r--r--drivers/mtd/nand/lpc32xx_mlc.c51
-rw-r--r--drivers/mtd/nand/lpc32xx_slc.c83
-rw-r--r--drivers/mtd/nand/mpc5121_nfc.c1
-rw-r--r--drivers/mtd/nand/mxc_nand.c257
-rw-r--r--drivers/mtd/nand/nand_base.c664
-rw-r--r--drivers/mtd/nand/nand_bch.c48
-rw-r--r--drivers/mtd/nand/nandsim.c10
-rw-r--r--drivers/mtd/nand/nuc900_nand.c1
-rw-r--r--drivers/mtd/nand/omap2.c448
-rw-r--r--drivers/mtd/nand/orion_nand.c1
-rw-r--r--drivers/mtd/nand/pasemi_nand.c16
-rw-r--r--drivers/mtd/nand/plat_nand.c1
-rw-r--r--drivers/mtd/nand/pxa3xx_nand.c132
-rw-r--r--drivers/mtd/nand/qcom_nandc.c88
-rw-r--r--drivers/mtd/nand/s3c2410.c36
-rw-r--r--drivers/mtd/nand/sh_flctl.c115
-rw-r--r--drivers/mtd/nand/sharpsl.c2
-rw-r--r--drivers/mtd/nand/sm_common.c93
-rw-r--r--drivers/mtd/nand/socrates_nand.c1
-rw-r--r--drivers/mtd/nand/sunxi_nand.c600
-rw-r--r--drivers/mtd/nand/vf610_nfc.c35
-rw-r--r--drivers/mtd/onenand/onenand_base.c235
-rw-r--r--drivers/mtd/sm_ftl.c2
-rw-r--r--drivers/mtd/spi-nor/spi-nor.c1
-rw-r--r--drivers/mtd/ubi/build.c18
-rw-r--r--drivers/mtd/ubi/debug.c3
-rw-r--r--drivers/mtd/ubi/eba.c47
-rw-r--r--drivers/mtd/ubi/fastmap.c1
-rw-r--r--drivers/mtd/ubi/kapi.c21
-rw-r--r--drivers/mtd/ubi/ubi.h2
-rw-r--r--drivers/mtd/ubi/vmt.c2
-rw-r--r--drivers/mtd/ubi/wl.c2
-rw-r--r--drivers/net/Kconfig17
-rw-r--r--drivers/net/Makefile1
-rw-r--r--drivers/net/Space.c21
-rw-r--r--drivers/net/appletalk/cops.c2
-rw-r--r--drivers/net/arcnet/com90xx.c2
-rw-r--r--drivers/net/can/dev.c56
-rw-r--r--drivers/net/can/ifi_canfd/ifi_canfd.c187
-rw-r--r--drivers/net/can/janz-ican3.c104
-rw-r--r--drivers/net/can/m_can/m_can.c2
-rw-r--r--drivers/net/can/mscan/mscan.c4
-rw-r--r--drivers/net/can/sja1000/plx_pci.c64
-rw-r--r--drivers/net/can/sja1000/sja1000.c6
-rw-r--r--drivers/net/can/spi/mcp251x.c3
-rw-r--r--drivers/net/can/usb/ems_usb.c4
-rw-r--r--drivers/net/can/usb/esd_usb2.c4
-rw-r--r--drivers/net/can/usb/gs_usb.c3
-rw-r--r--drivers/net/can/usb/peak_usb/pcan_usb_core.c4
-rw-r--r--drivers/net/cris/eth_v10.c2
-rw-r--r--drivers/net/dsa/Kconfig45
-rw-r--r--drivers/net/dsa/Makefile15
-rw-r--r--drivers/net/dsa/bcm_sf2.c62
-rw-r--r--drivers/net/dsa/mv88e6060.c47
-rw-r--r--drivers/net/dsa/mv88e6060.h11
-rw-r--r--drivers/net/dsa/mv88e6123.c124
-rw-r--r--drivers/net/dsa/mv88e6131.c177
-rw-r--r--drivers/net/dsa/mv88e6171.c123
-rw-r--r--drivers/net/dsa/mv88e6352.c345
-rw-r--r--drivers/net/dsa/mv88e6xxx.c2280
-rw-r--r--drivers/net/dsa/mv88e6xxx.h381
-rw-r--r--drivers/net/ethernet/3com/3c509.c2
-rw-r--r--drivers/net/ethernet/3com/3c515.c2
-rw-r--r--drivers/net/ethernet/3com/3c574_cs.c2
-rw-r--r--drivers/net/ethernet/3com/3c589_cs.c2
-rw-r--r--drivers/net/ethernet/3com/3c59x.c2
-rw-r--r--drivers/net/ethernet/8390/axnet_cs.c6
-rw-r--r--drivers/net/ethernet/8390/lib8390.c4
-rw-r--r--drivers/net/ethernet/adaptec/starfire.c2
-rw-r--r--drivers/net/ethernet/adi/bfin_mac.c2
-rw-r--r--drivers/net/ethernet/aeroflex/greth.c2
-rw-r--r--drivers/net/ethernet/agere/et131x.c4
-rw-r--r--drivers/net/ethernet/allwinner/sun4i-emac.c6
-rw-r--r--drivers/net/ethernet/amd/7990.c12
-rw-r--r--drivers/net/ethernet/amd/a2065.c9
-rw-r--r--drivers/net/ethernet/amd/atarilance.c2
-rw-r--r--drivers/net/ethernet/amd/au1000_eth.c4
-rw-r--r--drivers/net/ethernet/amd/declance.c2
-rw-r--r--drivers/net/ethernet/amd/lance.c2
-rw-r--r--drivers/net/ethernet/amd/ni65.c4
-rw-r--r--drivers/net/ethernet/amd/nmclan_cs.c2
-rw-r--r--drivers/net/ethernet/amd/pcnet32.c4
-rw-r--r--drivers/net/ethernet/amd/sunlance.c2
-rw-r--r--drivers/net/ethernet/apm/xgene/xgene_enet_cle.c13
-rw-r--r--drivers/net/ethernet/apm/xgene/xgene_enet_cle.h4
-rw-r--r--drivers/net/ethernet/apm/xgene/xgene_enet_hw.c21
-rw-r--r--drivers/net/ethernet/apm/xgene/xgene_enet_hw.h8
-rw-r--r--drivers/net/ethernet/apm/xgene/xgene_enet_main.c97
-rw-r--r--drivers/net/ethernet/apm/xgene/xgene_enet_main.h20
-rw-r--r--drivers/net/ethernet/apm/xgene/xgene_enet_sgmac.h2
-rw-r--r--drivers/net/ethernet/atheros/alx/main.c2
-rw-r--r--drivers/net/ethernet/atheros/atl1c/atl1c.h3
-rw-r--r--drivers/net/ethernet/atheros/atl1c/atl1c_main.c11
-rw-r--r--drivers/net/ethernet/atheros/atl1e/atl1e.h1
-rw-r--r--drivers/net/ethernet/atheros/atl1e/atl1e_main.c12
-rw-r--r--drivers/net/ethernet/broadcom/bcmsysport.c8
-rw-r--r--drivers/net/ethernet/broadcom/bgmac.c2
-rw-r--r--drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c5
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt.c434
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt.h37
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c300
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.h4
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_fw_hdr.h2
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h467
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_nvm_defs.h2
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c44
-rw-r--r--drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h3
-rw-r--r--drivers/net/ethernet/broadcom/cnic.c5
-rw-r--r--drivers/net/ethernet/broadcom/genet/bcmgenet.c54
-rw-r--r--drivers/net/ethernet/broadcom/sb1250-mac.c2
-rw-r--r--drivers/net/ethernet/broadcom/tg3.c2
-rw-r--r--drivers/net/ethernet/cadence/macb.c152
-rw-r--r--drivers/net/ethernet/cavium/liquidio/lio_main.c4
-rw-r--r--drivers/net/ethernet/cavium/liquidio/octeon_device.c4
-rw-r--r--drivers/net/ethernet/cavium/octeon/octeon_mgmt.c2
-rw-r--r--drivers/net/ethernet/cavium/thunder/nicvf_main.c2
-rw-r--r--drivers/net/ethernet/cavium/thunder/nicvf_queues.c6
-rw-r--r--drivers/net/ethernet/cavium/thunder/thunder_bgx.c4
-rw-r--r--drivers/net/ethernet/chelsio/cxgb/sge.c3
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/cxgb4.h40
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/cxgb4_dcb.c2
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c100
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c147
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/sge.c2
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/t4_hw.c330
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/t4_hw.h7
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/t4_msg.h4
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h5
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4vf/adapter.h4
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c115
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4vf/sge.c2
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4vf/t4vf_common.h29
-rw-r--r--drivers/net/ethernet/chelsio/cxgb4vf/t4vf_hw.c72
-rw-r--r--drivers/net/ethernet/cisco/enic/enic_main.c1
-rw-r--r--drivers/net/ethernet/davicom/dm9000.c15
-rw-r--r--drivers/net/ethernet/dec/tulip/de4x5.c11
-rw-r--r--drivers/net/ethernet/dec/tulip/dmfe.c45
-rw-r--r--drivers/net/ethernet/dec/tulip/pnic.c6
-rw-r--r--drivers/net/ethernet/dec/tulip/tulip_core.c2
-rw-r--r--drivers/net/ethernet/dec/tulip/uli526x.c4
-rw-r--r--drivers/net/ethernet/dec/tulip/winbond-840.c2
-rw-r--r--drivers/net/ethernet/dlink/dl2k.c2
-rw-r--r--drivers/net/ethernet/dlink/sundance.c2
-rw-r--r--drivers/net/ethernet/emulex/benet/be_main.c10
-rw-r--r--drivers/net/ethernet/ezchip/nps_enet.c30
-rw-r--r--drivers/net/ethernet/ezchip/nps_enet.h2
-rw-r--r--drivers/net/ethernet/faraday/ftgmac100.c36
-rw-r--r--drivers/net/ethernet/fealnx.c2
-rw-r--r--drivers/net/ethernet/freescale/fec.h1
-rw-r--r--drivers/net/ethernet/freescale/fec_main.c81
-rw-r--r--drivers/net/ethernet/freescale/fec_mpc52xx.c57
-rw-r--r--drivers/net/ethernet/freescale/fman/fman.c4
-rw-r--r--drivers/net/ethernet/freescale/fman/fman_muram.c4
-rw-r--r--drivers/net/ethernet/freescale/fman/fman_muram.h4
-rw-r--r--drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c45
-rw-r--r--drivers/net/ethernet/freescale/fs_enet/fs_enet.h1
-rw-r--r--drivers/net/ethernet/freescale/fs_enet/mac-fcc.c4
-rw-r--r--drivers/net/ethernet/freescale/fs_enet/mac-fec.c6
-rw-r--r--drivers/net/ethernet/freescale/fs_enet/mac-scc.c2
-rw-r--r--drivers/net/ethernet/freescale/gianfar.c44
-rw-r--r--drivers/net/ethernet/freescale/gianfar.h1
-rw-r--r--drivers/net/ethernet/freescale/gianfar_ethtool.c52
-rw-r--r--drivers/net/ethernet/freescale/ucc_geth_ethtool.c17
-rw-r--r--drivers/net/ethernet/fujitsu/fmvj18x_cs.c2
-rw-r--r--drivers/net/ethernet/hisilicon/hix5hd2_gmac.c2
-rw-r--r--drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c93
-rw-r--r--drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c260
-rw-r--r--drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h12
-rw-r--r--drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c209
-rw-r--r--drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h21
-rw-r--r--drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c171
-rw-r--r--drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c67
-rw-r--r--drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h1
-rw-r--r--drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c187
-rw-r--r--drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h2
-rw-r--r--drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h44
-rw-r--r--drivers/net/ethernet/hisilicon/hns/hns_enet.c23
-rw-r--r--drivers/net/ethernet/hisilicon/hns/hns_enet.h3
-rw-r--r--drivers/net/ethernet/hp/hp100.c2
-rw-r--r--drivers/net/ethernet/i825xx/82596.c2
-rw-r--r--drivers/net/ethernet/i825xx/lib82596.c2
-rw-r--r--drivers/net/ethernet/i825xx/sun3_82586.c4
-rw-r--r--drivers/net/ethernet/ibm/ehea/ehea_main.c9
-rw-r--r--drivers/net/ethernet/ibm/emac/core.c4
-rw-r--r--drivers/net/ethernet/ibm/emac/phy.c26
-rw-r--r--drivers/net/ethernet/ibm/ibmvnic.c251
-rw-r--r--drivers/net/ethernet/ibm/ibmvnic.h4
-rw-r--r--drivers/net/ethernet/intel/Kconfig80
-rw-r--r--drivers/net/ethernet/intel/e1000/e1000.h2
-rw-r--r--drivers/net/ethernet/intel/e1000/e1000_ethtool.c4
-rw-r--r--drivers/net/ethernet/intel/e1000/e1000_main.c8
-rw-r--r--drivers/net/ethernet/intel/e1000e/80003es2lan.c12
-rw-r--r--drivers/net/ethernet/intel/e1000e/82571.c30
-rw-r--r--drivers/net/ethernet/intel/e1000e/e1000.h111
-rw-r--r--drivers/net/ethernet/intel/e1000e/ethtool.c61
-rw-r--r--drivers/net/ethernet/intel/e1000e/ich8lan.c44
-rw-r--r--drivers/net/ethernet/intel/e1000e/ich8lan.h8
-rw-r--r--drivers/net/ethernet/intel/e1000e/mac.c2
-rw-r--r--drivers/net/ethernet/intel/e1000e/netdev.c163
-rw-r--r--drivers/net/ethernet/intel/e1000e/nvm.c2
-rw-r--r--drivers/net/ethernet/intel/e1000e/phy.c4
-rw-r--r--drivers/net/ethernet/intel/e1000e/phy.h10
-rw-r--r--drivers/net/ethernet/intel/e1000e/ptp.c2
-rw-r--r--drivers/net/ethernet/intel/fm10k/Makefile7
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k.h51
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_common.c4
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_common.h4
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_dcbnl.c4
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_debugfs.c4
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_ethtool.c352
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_iov.c8
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_main.c150
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_mbx.c4
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_mbx.h4
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_netdev.c50
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_pci.c226
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_pf.c141
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_pf.h21
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_ptp.c462
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_tlv.c44
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_tlv.h4
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_type.h28
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_vf.c63
-rw-r--r--drivers/net/ethernet/intel/fm10k/fm10k_vf.h14
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e.h33
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_adminq.c37
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_adminq.h1
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h82
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_client.h2
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_common.c168
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_debugfs.c36
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_devids.h2
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_ethtool.c386
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_fcoe.c14
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_hmc.c2
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_main.c284
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_nvm.c86
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_prototype.h20
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_ptp.c7
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_txrx.c1065
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_txrx.h114
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_type.h42
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_virtchnl.h45
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c509
-rw-r--r--drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h11
-rw-r--r--drivers/net/ethernet/intel/i40evf/i40e_adminq.h1
-rw-r--r--drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h70
-rw-r--r--drivers/net/ethernet/intel/i40evf/i40e_common.c2
-rw-r--r--drivers/net/ethernet/intel/i40evf/i40e_devids.h2
-rw-r--r--drivers/net/ethernet/intel/i40evf/i40e_txrx.c1021
-rw-r--r--drivers/net/ethernet/intel/i40evf/i40e_txrx.h114
-rw-r--r--drivers/net/ethernet/intel/i40evf/i40e_type.h51
-rw-r--r--drivers/net/ethernet/intel/i40evf/i40e_virtchnl.h45
-rw-r--r--drivers/net/ethernet/intel/i40evf/i40evf.h49
-rw-r--r--drivers/net/ethernet/intel/i40evf/i40evf_ethtool.c329
-rw-r--r--drivers/net/ethernet/intel/i40evf/i40evf_main.c503
-rw-r--r--drivers/net/ethernet/intel/i40evf/i40evf_virtchnl.c145
-rw-r--r--drivers/net/ethernet/intel/igb/e1000_82575.c8
-rw-r--r--drivers/net/ethernet/intel/igb/e1000_82575.h30
-rw-r--r--drivers/net/ethernet/intel/igb/e1000_defines.h108
-rw-r--r--drivers/net/ethernet/intel/igb/e1000_mac.c10
-rw-r--r--drivers/net/ethernet/intel/igb/e1000_mbx.c4
-rw-r--r--drivers/net/ethernet/intel/igb/e1000_nvm.c2
-rw-r--r--drivers/net/ethernet/intel/igb/e1000_phy.h6
-rw-r--r--drivers/net/ethernet/intel/igb/igb.h40
-rw-r--r--drivers/net/ethernet/intel/igb/igb_ethtool.c21
-rw-r--r--drivers/net/ethernet/intel/igb/igb_main.c221
-rw-r--r--drivers/net/ethernet/intel/igb/igb_ptp.c42
-rw-r--r--drivers/net/ethernet/intel/igbvf/defines.h2
-rw-r--r--drivers/net/ethernet/intel/igbvf/ethtool.c3
-rw-r--r--drivers/net/ethernet/intel/igbvf/igbvf.h4
-rw-r--r--drivers/net/ethernet/intel/igbvf/netdev.c196
-rw-r--r--drivers/net/ethernet/intel/igbvf/vf.c2
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe.h99
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c18
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c29
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_common.c161
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_common.h9
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.c10
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_82598.c2
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_82599.c2
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c6
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c50
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c3
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_main.c1052
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c46
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h4
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_model.h14
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h8
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c10
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c117
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.h1
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_type.h302
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c35
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_x540.h1
-rw-r--r--drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c693
-rw-r--r--drivers/net/ethernet/intel/ixgbevf/defines.h29
-rw-r--r--drivers/net/ethernet/intel/ixgbevf/ethtool.c230
-rw-r--r--drivers/net/ethernet/intel/ixgbevf/ixgbevf.h37
-rw-r--r--drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c298
-rw-r--r--drivers/net/ethernet/intel/ixgbevf/mbx.c11
-rw-r--r--drivers/net/ethernet/intel/ixgbevf/vf.c224
-rw-r--r--drivers/net/ethernet/intel/ixgbevf/vf.h6
-rw-r--r--drivers/net/ethernet/jme.c2
-rw-r--r--drivers/net/ethernet/korina.c8
-rw-r--r--drivers/net/ethernet/lantiq_etop.c4
-rw-r--r--drivers/net/ethernet/marvell/Kconfig2
-rw-r--r--drivers/net/ethernet/marvell/pxa168_eth.c16
-rw-r--r--drivers/net/ethernet/marvell/sky2.c2
-rw-r--r--drivers/net/ethernet/mediatek/mtk_eth_soc.c106
-rw-r--r--drivers/net/ethernet/mediatek/mtk_eth_soc.h4
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/alloc.c93
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/en_cq.c9
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/en_netdev.c42
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/en_resources.c31
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/en_rx.c13
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/en_tx.c29
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/mcg.c4
-rw-r--r--drivers/net/ethernet/mellanox/mlx4/mlx4_en.h2
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/Kconfig1
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/Makefile4
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/cmd.c309
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/cq.c59
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en.h578
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c752
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_clock.c4
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c6
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c341
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_fs.c1060
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_main.c1236
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_rx.c736
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_stats.h367
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_tc.c113
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_tc.h5
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_tx.c6
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c59
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/eq.c12
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/eswitch.c978
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/eswitch.h43
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c135
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.h7
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/fs_core.c289
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/fs_core.h35
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c226
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/health.c4
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/main.c35
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h4
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/port.c138
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/qp.c68
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/sriov.c2
-rw-r--r--drivers/net/ethernet/mellanox/mlx5/core/vxlan.h3
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/Kconfig8
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/Makefile1
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/core.c736
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/core.h82
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/reg.h667
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum.c396
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum.h134
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c1001
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c480
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c12
-rw-r--r--drivers/net/ethernet/mellanox/mlxsw/switchx2.c45
-rw-r--r--drivers/net/ethernet/micrel/ks8695net.c7
-rw-r--r--drivers/net/ethernet/micrel/ksz884x.c4
-rw-r--r--drivers/net/ethernet/microchip/enc28j60.c34
-rw-r--r--drivers/net/ethernet/microchip/encx24j600.c4
-rw-r--r--drivers/net/ethernet/moxa/moxart_ether.c2
-rw-r--r--drivers/net/ethernet/natsemi/natsemi.c2
-rw-r--r--drivers/net/ethernet/natsemi/sonic.c2
-rw-r--r--drivers/net/ethernet/neterion/s2io.c9
-rw-r--r--drivers/net/ethernet/netronome/nfp/nfp_net.h22
-rw-r--r--drivers/net/ethernet/netronome/nfp/nfp_net_common.c1101
-rw-r--r--drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h10
-rw-r--r--drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c24
-rw-r--r--drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c30
-rw-r--r--drivers/net/ethernet/netx-eth.c12
-rw-r--r--drivers/net/ethernet/nuvoton/w90p910_ether.c4
-rw-r--r--drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe.h2
-rw-r--r--drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c12
-rw-r--r--drivers/net/ethernet/packetengines/hamachi.c2
-rw-r--r--drivers/net/ethernet/packetengines/yellowfin.c2
-rw-r--r--drivers/net/ethernet/qlogic/Kconfig31
-rw-r--r--drivers/net/ethernet/qlogic/netxen/netxen_nic_hw.c14
-rw-r--r--drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c5
-rw-r--r--drivers/net/ethernet/qlogic/qed/Makefile4
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed.h84
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_cxt.c186
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_cxt.h13
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_dcbx.c562
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_dcbx.h80
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_dev.c740
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_dev_api.h33
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_hsi.h147
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_hw.c67
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_hw.h10
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_init_fw_funcs.c167
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_init_ops.c4
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_int.c160
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_int.h36
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_l2.c717
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_l2.h239
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_main.c282
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_mcp.c356
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_mcp.h80
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_reg_addr.h53
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_selftest.c76
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_selftest.h40
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_sp.h36
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_sp_commands.c322
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_spq.c19
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_sriov.c3613
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_sriov.h386
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_vf.c1102
-rw-r--r--drivers/net/ethernet/qlogic/qed/qed_vf.h990
-rw-r--r--drivers/net/ethernet/qlogic/qede/qede.h45
-rw-r--r--drivers/net/ethernet/qlogic/qede/qede_ethtool.c625
-rw-r--r--drivers/net/ethernet/qlogic/qede/qede_main.c522
-rw-r--r--drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c10
-rw-r--r--drivers/net/ethernet/qlogic/qlcnic/qlcnic_minidump.c8
-rw-r--r--drivers/net/ethernet/qlogic/qlge/qlge_main.c2
-rw-r--r--drivers/net/ethernet/qualcomm/qca_spi.c4
-rw-r--r--drivers/net/ethernet/realtek/atp.c2
-rw-r--r--drivers/net/ethernet/realtek/r8169.c44
-rw-r--r--drivers/net/ethernet/renesas/ravb.h206
-rw-r--r--drivers/net/ethernet/renesas/ravb_main.c289
-rw-r--r--drivers/net/ethernet/renesas/ravb_ptp.c26
-rw-r--r--drivers/net/ethernet/renesas/sh_eth.c39
-rw-r--r--drivers/net/ethernet/renesas/sh_eth.h2
-rw-r--r--drivers/net/ethernet/rocker/rocker_ofdpa.c4
-rw-r--r--drivers/net/ethernet/seeq/sgiseeq.c4
-rw-r--r--drivers/net/ethernet/sgi/meth.c4
-rw-r--r--drivers/net/ethernet/sis/sis900.c2
-rw-r--r--drivers/net/ethernet/smsc/epic100.c2
-rw-r--r--drivers/net/ethernet/smsc/smc911x.c6
-rw-r--r--drivers/net/ethernet/smsc/smc9194.c4
-rw-r--r--drivers/net/ethernet/smsc/smc91c92_cs.c4
-rw-r--r--drivers/net/ethernet/smsc/smc91x.c4
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/Makefile3
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/common.h64
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c113
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c7
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c35
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac100_core.c5
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac4.h255
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c407
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c389
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.h129
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c354
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.h202
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c225
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/enh_desc.c21
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/mmc.h4
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/mmc_core.c349
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/norm_desc.c21
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac.h13
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c7
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_main.c653
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c102
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c24
-rw-r--r--drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c13
-rw-r--r--drivers/net/ethernet/sun/niu.c2
-rw-r--r--drivers/net/ethernet/sun/sungem.c2
-rw-r--r--drivers/net/ethernet/synopsys/dwc_eth_qos.c4
-rw-r--r--drivers/net/ethernet/tehuti/tehuti.c10
-rw-r--r--drivers/net/ethernet/ti/cpsw.c54
-rw-r--r--drivers/net/ethernet/ti/netcp_core.c4
-rw-r--r--drivers/net/ethernet/ti/tlan.c2
-rw-r--r--drivers/net/ethernet/tile/tilepro.c4
-rw-r--r--drivers/net/ethernet/toshiba/ps3_gelic_wireless.c4
-rw-r--r--drivers/net/ethernet/toshiba/spider_net.c2
-rw-r--r--drivers/net/ethernet/tundra/tsi108_eth.c3
-rw-r--r--drivers/net/ethernet/via/via-rhine.c2
-rw-r--r--drivers/net/ethernet/wiznet/Kconfig14
-rw-r--r--drivers/net/ethernet/wiznet/Makefile1
-rw-r--r--drivers/net/ethernet/wiznet/w5100-spi.c466
-rw-r--r--drivers/net/ethernet/wiznet/w5100.c1076
-rw-r--r--drivers/net/ethernet/wiznet/w5100.h37
-rw-r--r--drivers/net/ethernet/wiznet/w5300.c2
-rw-r--r--drivers/net/ethernet/xilinx/ll_temac_main.c2
-rw-r--r--drivers/net/ethernet/xilinx/xilinx_axienet_main.c2
-rw-r--r--drivers/net/ethernet/xilinx/xilinx_emaclite.c4
-rw-r--r--drivers/net/ethernet/xircom/xirc2ps_cs.c2
-rw-r--r--drivers/net/fjes/fjes_hw.c30
-rw-r--r--drivers/net/fjes/fjes_hw.h9
-rw-r--r--drivers/net/fjes/fjes_main.c137
-rw-r--r--drivers/net/geneve.c106
-rw-r--r--drivers/net/gtp.c1375
-rw-r--r--drivers/net/hamradio/baycom_epp.c14
-rw-r--r--drivers/net/hamradio/hdlcdrv.c6
-rw-r--r--drivers/net/hamradio/mkiss.c4
-rw-r--r--drivers/net/hamradio/scc.c2
-rw-r--r--drivers/net/hamradio/yam.c2
-rw-r--r--drivers/net/hyperv/hyperv_net.h29
-rw-r--r--drivers/net/hyperv/netvsc.c161
-rw-r--r--drivers/net/hyperv/netvsc_drv.c428
-rw-r--r--drivers/net/hyperv/rndis_filter.c88
-rw-r--r--drivers/net/ieee802154/adf7242.c3
-rw-r--r--drivers/net/ieee802154/at86rf230.c6
-rw-r--r--drivers/net/ieee802154/atusb.c91
-rw-r--r--drivers/net/ieee802154/mrf24j40.c14
-rw-r--r--drivers/net/ifb.c3
-rw-r--r--drivers/net/ipvlan/ipvlan_main.c19
-rw-r--r--drivers/net/irda/Kconfig7
-rw-r--r--drivers/net/irda/Makefile1
-rw-r--r--drivers/net/irda/ali-ircc.c8
-rw-r--r--drivers/net/irda/bfin_sir.c2
-rw-r--r--drivers/net/irda/irda-usb.c4
-rw-r--r--drivers/net/irda/nsc-ircc.c11
-rw-r--r--drivers/net/irda/sh_irda.c875
-rw-r--r--drivers/net/irda/smsc-ircc2.c2
-rw-r--r--drivers/net/irda/stir4200.c2
-rw-r--r--drivers/net/irda/via-ircc.c8
-rw-r--r--drivers/net/macsec.c136
-rw-r--r--drivers/net/macvlan.c10
-rw-r--r--drivers/net/macvtap.c41
-rw-r--r--drivers/net/phy/fixed_phy.c8
-rw-r--r--drivers/net/phy/lxt.c22
-rw-r--r--drivers/net/phy/mdio-mux.c10
-rw-r--r--drivers/net/phy/mdio_bus.c10
-rw-r--r--drivers/net/phy/micrel.c34
-rw-r--r--drivers/net/phy/phy.c113
-rw-r--r--drivers/net/phy/phy_device.c5
-rw-r--r--drivers/net/ppp/ppp_generic.c315
-rw-r--r--drivers/net/rionet.c6
-rw-r--r--drivers/net/slip/slip.c2
-rw-r--r--drivers/net/tun.c124
-rw-r--r--drivers/net/usb/asix_common.c2
-rw-r--r--drivers/net/usb/catc.c4
-rw-r--r--drivers/net/usb/cdc_ncm.c6
-rw-r--r--drivers/net/usb/ch9200.c3
-rw-r--r--drivers/net/usb/hso.c2
-rw-r--r--drivers/net/usb/kaweth.c2
-rw-r--r--drivers/net/usb/lan78xx.c4
-rw-r--r--drivers/net/usb/pegasus.c2
-rw-r--r--drivers/net/usb/r8152.c3
-rw-r--r--drivers/net/usb/rtl8150.c4
-rw-r--r--drivers/net/usb/smsc75xx.c4
-rw-r--r--drivers/net/usb/smsc95xx.c4
-rw-r--r--drivers/net/usb/usbnet.c5
-rw-r--r--drivers/net/veth.c7
-rw-r--r--drivers/net/vrf.c272
-rw-r--r--drivers/net/vxlan.c306
-rw-r--r--drivers/net/wan/cosa.c2
-rw-r--r--drivers/net/wan/farsync.c6
-rw-r--r--drivers/net/wan/lmc/lmc_main.c2
-rw-r--r--drivers/net/wan/sbni.c8
-rw-r--r--drivers/net/wimax/i2400m/netdev.c2
-rw-r--r--drivers/net/wireless/admtek/adm8211.c4
-rw-r--r--drivers/net/wireless/ath/ar5523/ar5523.c4
-rw-r--r--drivers/net/wireless/ath/ath.h2
-rw-r--r--drivers/net/wireless/ath/ath10k/ce.c50
-rw-r--r--drivers/net/wireless/ath/ath10k/ce.h17
-rw-r--r--drivers/net/wireless/ath/ath10k/core.c490
-rw-r--r--drivers/net/wireless/ath/ath10k/core.h118
-rw-r--r--drivers/net/wireless/ath/ath10k/debug.c136
-rw-r--r--drivers/net/wireless/ath/ath10k/debug.h2
-rw-r--r--drivers/net/wireless/ath/ath10k/htc.h4
-rw-r--r--drivers/net/wireless/ath/ath10k/htt.c6
-rw-r--r--drivers/net/wireless/ath/ath10k/htt.h60
-rw-r--r--drivers/net/wireless/ath/ath10k/htt_rx.c718
-rw-r--r--drivers/net/wireless/ath/ath10k/htt_tx.c300
-rw-r--r--drivers/net/wireless/ath/ath10k/hw.h18
-rw-r--r--drivers/net/wireless/ath/ath10k/mac.c742
-rw-r--r--drivers/net/wireless/ath/ath10k/mac.h7
-rw-r--r--drivers/net/wireless/ath/ath10k/pci.c271
-rw-r--r--drivers/net/wireless/ath/ath10k/pci.h17
-rw-r--r--drivers/net/wireless/ath/ath10k/swap.c44
-rw-r--r--drivers/net/wireless/ath/ath10k/swap.h9
-rw-r--r--drivers/net/wireless/ath/ath10k/targaddrs.h2
-rw-r--r--drivers/net/wireless/ath/ath10k/testmode.c197
-rw-r--r--drivers/net/wireless/ath/ath10k/thermal.h2
-rw-r--r--drivers/net/wireless/ath/ath10k/txrx.c55
-rw-r--r--drivers/net/wireless/ath/ath10k/txrx.h4
-rw-r--r--drivers/net/wireless/ath/ath10k/wmi-ops.h44
-rw-r--r--drivers/net/wireless/ath/ath10k/wmi-tlv.c1
-rw-r--r--drivers/net/wireless/ath/ath10k/wmi-tlv.h4
-rw-r--r--drivers/net/wireless/ath/ath10k/wmi.c250
-rw-r--r--drivers/net/wireless/ath/ath10k/wmi.h128
-rw-r--r--drivers/net/wireless/ath/ath10k/wow.c7
-rw-r--r--drivers/net/wireless/ath/ath5k/ani.c2
-rw-r--r--drivers/net/wireless/ath/ath5k/ath5k.h10
-rw-r--r--drivers/net/wireless/ath/ath5k/attach.c8
-rw-r--r--drivers/net/wireless/ath/ath5k/base.c32
-rw-r--r--drivers/net/wireless/ath/ath5k/debug.c6
-rw-r--r--drivers/net/wireless/ath/ath5k/led.c2
-rw-r--r--drivers/net/wireless/ath/ath5k/pcu.c6
-rw-r--r--drivers/net/wireless/ath/ath5k/phy.c32
-rw-r--r--drivers/net/wireless/ath/ath5k/qcu.c8
-rw-r--r--drivers/net/wireless/ath/ath5k/reset.c10
-rw-r--r--drivers/net/wireless/ath/ath6kl/cfg80211.c22
-rw-r--r--drivers/net/wireless/ath/ath6kl/core.c3
-rw-r--r--drivers/net/wireless/ath/ath6kl/core.h3
-rw-r--r--drivers/net/wireless/ath/ath6kl/init.c9
-rw-r--r--drivers/net/wireless/ath/ath6kl/wmi.c27
-rw-r--r--drivers/net/wireless/ath/ath6kl/wmi.h2
-rw-r--r--drivers/net/wireless/ath/ath9k/Kconfig40
-rw-r--r--drivers/net/wireless/ath/ath9k/ar9003_2p2_initvals.h4
-rw-r--r--drivers/net/wireless/ath/ath9k/ar9003_calib.c44
-rw-r--r--drivers/net/wireless/ath/ath9k/ar9003_eeprom.c12
-rw-r--r--drivers/net/wireless/ath/ath9k/ar9003_eeprom.h1
-rw-r--r--drivers/net/wireless/ath/ath9k/ar9003_mci.c39
-rw-r--r--drivers/net/wireless/ath/ath9k/ar9003_phy.c82
-rw-r--r--drivers/net/wireless/ath/ath9k/ar9330_1p1_initvals.h4
-rw-r--r--drivers/net/wireless/ath/ath9k/ar9330_1p2_initvals.h4
-rw-r--r--drivers/net/wireless/ath/ath9k/ar9340_initvals.h4
-rw-r--r--drivers/net/wireless/ath/ath9k/ar9462_2p0_initvals.h4
-rw-r--r--drivers/net/wireless/ath/ath9k/ar9462_2p1_initvals.h4
-rw-r--r--drivers/net/wireless/ath/ath9k/ar9485_initvals.h4
-rw-r--r--drivers/net/wireless/ath/ath9k/ar953x_initvals.h4
-rw-r--r--drivers/net/wireless/ath/ath9k/ar955x_1p0_initvals.h2
-rw-r--r--drivers/net/wireless/ath/ath9k/ar9565_1p0_initvals.h2
-rw-r--r--drivers/net/wireless/ath/ath9k/ar956x_initvals.h2
-rw-r--r--drivers/net/wireless/ath/ath9k/ar9580_1p0_initvals.h4
-rw-r--r--drivers/net/wireless/ath/ath9k/ath9k.h4
-rw-r--r--drivers/net/wireless/ath/ath9k/btcoex.c138
-rw-r--r--drivers/net/wireless/ath/ath9k/btcoex.h2
-rw-r--r--drivers/net/wireless/ath/ath9k/calib.c6
-rw-r--r--drivers/net/wireless/ath/ath9k/channel.c8
-rw-r--r--drivers/net/wireless/ath/ath9k/common-init.c28
-rw-r--r--drivers/net/wireless/ath/ath9k/common.c4
-rw-r--r--drivers/net/wireless/ath/ath9k/debug.c24
-rw-r--r--drivers/net/wireless/ath/ath9k/debug_sta.c6
-rw-r--r--drivers/net/wireless/ath/ath9k/dynack.c2
-rw-r--r--drivers/net/wireless/ath/ath9k/gpio.c69
-rw-r--r--drivers/net/wireless/ath/ath9k/hif_usb.c2
-rw-r--r--drivers/net/wireless/ath/ath9k/htc_drv_gpio.c8
-rw-r--r--drivers/net/wireless/ath/ath9k/htc_drv_init.c22
-rw-r--r--drivers/net/wireless/ath/ath9k/htc_drv_main.c19
-rw-r--r--drivers/net/wireless/ath/ath9k/htc_drv_txrx.c2
-rw-r--r--drivers/net/wireless/ath/ath9k/hw.c277
-rw-r--r--drivers/net/wireless/ath/ath9k/hw.h11
-rw-r--r--drivers/net/wireless/ath/ath9k/init.c22
-rw-r--r--drivers/net/wireless/ath/ath9k/main.c13
-rw-r--r--drivers/net/wireless/ath/ath9k/pci.c10
-rw-r--r--drivers/net/wireless/ath/ath9k/reg.h90
-rw-r--r--drivers/net/wireless/ath/ath9k/rng.c20
-rw-r--r--drivers/net/wireless/ath/ath9k/xmit.c4
-rw-r--r--drivers/net/wireless/ath/carl9170/mac.c12
-rw-r--r--drivers/net/wireless/ath/carl9170/main.c6
-rw-r--r--drivers/net/wireless/ath/carl9170/phy.c18
-rw-r--r--drivers/net/wireless/ath/carl9170/rx.c2
-rw-r--r--drivers/net/wireless/ath/carl9170/tx.c8
-rw-r--r--drivers/net/wireless/ath/regd.c16
-rw-r--r--drivers/net/wireless/ath/regd.h2
-rw-r--r--drivers/net/wireless/ath/wcn36xx/debug.c12
-rw-r--r--drivers/net/wireless/ath/wcn36xx/hal.h55
-rw-r--r--drivers/net/wireless/ath/wcn36xx/main.c145
-rw-r--r--drivers/net/wireless/ath/wcn36xx/pmc.c4
-rw-r--r--drivers/net/wireless/ath/wcn36xx/smd.c228
-rw-r--r--drivers/net/wireless/ath/wcn36xx/smd.h14
-rw-r--r--drivers/net/wireless/ath/wcn36xx/txrx.c12
-rw-r--r--drivers/net/wireless/ath/wcn36xx/wcn36xx.h20
-rw-r--r--drivers/net/wireless/ath/wil6210/Makefile1
-rw-r--r--drivers/net/wireless/ath/wil6210/cfg80211.c337
-rw-r--r--drivers/net/wireless/ath/wil6210/debug.c22
-rw-r--r--drivers/net/wireless/ath/wil6210/debugfs.c196
-rw-r--r--drivers/net/wireless/ath/wil6210/interrupt.c99
-rw-r--r--drivers/net/wireless/ath/wil6210/ioctl.c11
-rw-r--r--drivers/net/wireless/ath/wil6210/main.c170
-rw-r--r--drivers/net/wireless/ath/wil6210/netdev.c9
-rw-r--r--drivers/net/wireless/ath/wil6210/p2p.c259
-rw-r--r--drivers/net/wireless/ath/wil6210/pcie_bus.c1
-rw-r--r--drivers/net/wireless/ath/wil6210/rx_reorder.c204
-rw-r--r--drivers/net/wireless/ath/wil6210/trace.h19
-rw-r--r--drivers/net/wireless/ath/wil6210/txrx.c69
-rw-r--r--drivers/net/wireless/ath/wil6210/txrx.h12
-rw-r--r--drivers/net/wireless/ath/wil6210/wil6210.h177
-rw-r--r--drivers/net/wireless/ath/wil6210/wil_platform.h8
-rw-r--r--drivers/net/wireless/ath/wil6210/wmi.c233
-rw-r--r--drivers/net/wireless/ath/wil6210/wmi.h1325
-rw-r--r--drivers/net/wireless/atmel/at76c50x-usb.c6
-rw-r--r--drivers/net/wireless/atmel/atmel.c4
-rw-r--r--drivers/net/wireless/broadcom/b43/b43.h4
-rw-r--r--drivers/net/wireless/broadcom/b43/main.c34
-rw-r--r--drivers/net/wireless/broadcom/b43/phy_ac.c2
-rw-r--r--drivers/net/wireless/broadcom/b43/phy_common.c2
-rw-r--r--drivers/net/wireless/broadcom/b43/phy_ht.c16
-rw-r--r--drivers/net/wireless/broadcom/b43/phy_lcn.c10
-rw-r--r--drivers/net/wireless/broadcom/b43/phy_lp.c30
-rw-r--r--drivers/net/wireless/broadcom/b43/phy_n.c176
-rw-r--r--drivers/net/wireless/broadcom/b43/tables_lpphy.c14
-rw-r--r--drivers/net/wireless/broadcom/b43/tables_nphy.c16
-rw-r--r--drivers/net/wireless/broadcom/b43/tables_phy_lcn.c2
-rw-r--r--drivers/net/wireless/broadcom/b43/xmit.c8
-rw-r--r--drivers/net/wireless/broadcom/b43legacy/main.c12
-rw-r--r--drivers/net/wireless/broadcom/b43legacy/xmit.c2
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcdc.c10
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/bcmsdh.c3
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/bus.h4
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/cfg80211.c141
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/chip.c1
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/common.c38
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.c247
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/core.h5
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/firmware.c30
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/fweh.c1
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwil.h1
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.c209
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/fwsignal.h1
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/msgbuf.c46
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/p2p.c10
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/proto.h16
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/sdio.c41
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmfmac/usb.c6
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmsmac/channel.c10
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmsmac/mac80211_if.c16
-rw-r--r--drivers/net/wireless/broadcom/brcm80211/brcmsmac/main.c4
-rw-r--r--drivers/net/wireless/cisco/airo.c14
-rw-r--r--drivers/net/wireless/intel/ipw2x00/ipw2100.c10
-rw-r--r--drivers/net/wireless/intel/ipw2x00/ipw2200.c18
-rw-r--r--drivers/net/wireless/intel/iwlegacy/3945-mac.c30
-rw-r--r--drivers/net/wireless/intel/iwlegacy/3945-rs.c22
-rw-r--r--drivers/net/wireless/intel/iwlegacy/3945.c20
-rw-r--r--drivers/net/wireless/intel/iwlegacy/4965-mac.c41
-rw-r--r--drivers/net/wireless/intel/iwlegacy/4965-rs.c22
-rw-r--r--drivers/net/wireless/intel/iwlegacy/4965.c6
-rw-r--r--drivers/net/wireless/intel/iwlegacy/4965.h2
-rw-r--r--drivers/net/wireless/intel/iwlegacy/common.c92
-rw-r--r--drivers/net/wireless/intel/iwlegacy/common.h30
-rw-r--r--drivers/net/wireless/intel/iwlegacy/debug.c4
-rw-r--r--drivers/net/wireless/intel/iwlwifi/Kconfig16
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/agn.h8
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/debugfs.c4
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/dev.h6
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/devices.c4
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/lib.c6
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/mac80211.c12
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/main.c6
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/rs.c22
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/rs.h2
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/rx.c4
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/rxon.c10
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/scan.c38
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/sta.c2
-rw-r--r--drivers/net/wireless/intel/iwlwifi/dvm/tx.c4
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-1000.c14
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-2000.c22
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-5000.c15
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-6000.c25
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-7000.c31
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-8000.c30
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-9000.c45
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-config.h127
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-csr.h15
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-drv.c121
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.c38
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-eeprom-parse.h7
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-fh.h3
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-fw-error-dump.h1
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-fw-file.h41
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-fw.h2
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-modparams.h10
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-nvm-parse.c42
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-phy-db.c68
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-phy-db.h4
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-prph.h12
-rw-r--r--drivers/net/wireless/intel/iwlwifi/iwl-trans.h6
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/Makefile2
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/coex.c52
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/coex_legacy.c1315
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/constants.h2
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/d3.c10
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/debugfs-vif.c100
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/debugfs.c171
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/fw-api-d3.h2
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/fw-api-rx.h24
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/fw-api-sta.h10
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/fw-api-tx.h35
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/fw-api.h113
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/fw-dbg.c198
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/fw.c58
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/mac-ctxt.c79
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/mac80211.c165
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/mvm.h149
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/ops.c55
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/phy-ctxt.c2
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/power.c2
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/rs.c28
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/rs.h4
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/rx.c28
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/rxmq.c369
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/scan.c58
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/sf.c8
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/sta.c664
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/sta.h100
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/tdls.c2
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/tt.c23
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/tx.c332
-rw-r--r--drivers/net/wireless/intel/iwlwifi/mvm/utils.c210
-rw-r--r--drivers/net/wireless/intel/iwlwifi/pcie/drv.c29
-rw-r--r--drivers/net/wireless/intel/iwlwifi/pcie/internal.h9
-rw-r--r--drivers/net/wireless/intel/iwlwifi/pcie/rx.c123
-rw-r--r--drivers/net/wireless/intel/iwlwifi/pcie/trans.c64
-rw-r--r--drivers/net/wireless/intel/iwlwifi/pcie/tx.c97
-rw-r--r--drivers/net/wireless/intersil/hostap/hostap_hw.c2
-rw-r--r--drivers/net/wireless/intersil/orinoco/cfg.c6
-rw-r--r--drivers/net/wireless/intersil/orinoco/hw.c2
-rw-r--r--drivers/net/wireless/intersil/orinoco/main.c2
-rw-r--r--drivers/net/wireless/intersil/orinoco/orinoco_usb.c2
-rw-r--r--drivers/net/wireless/intersil/orinoco/scan.c4
-rw-r--r--drivers/net/wireless/intersil/p54/eeprom.c32
-rw-r--r--drivers/net/wireless/intersil/p54/main.c4
-rw-r--r--drivers/net/wireless/intersil/p54/p54.h2
-rw-r--r--drivers/net/wireless/intersil/p54/txrx.c4
-rw-r--r--drivers/net/wireless/intersil/prism54/isl_38xx.c35
-rw-r--r--drivers/net/wireless/mac80211_hwsim.c21
-rw-r--r--drivers/net/wireless/mac80211_hwsim.h1
-rw-r--r--drivers/net/wireless/marvell/libertas/cfg.c10
-rw-r--r--drivers/net/wireless/marvell/libertas/cmd.c4
-rw-r--r--drivers/net/wireless/marvell/libertas_tf/main.c4
-rw-r--r--drivers/net/wireless/marvell/mwifiex/11n_rxreorder.c5
-rw-r--r--drivers/net/wireless/marvell/mwifiex/cfg80211.c74
-rw-r--r--drivers/net/wireless/marvell/mwifiex/cfp.c12
-rw-r--r--drivers/net/wireless/marvell/mwifiex/cmdevt.c127
-rw-r--r--drivers/net/wireless/marvell/mwifiex/fw.h11
-rw-r--r--drivers/net/wireless/marvell/mwifiex/init.c2
-rw-r--r--drivers/net/wireless/marvell/mwifiex/main.c24
-rw-r--r--drivers/net/wireless/marvell/mwifiex/main.h20
-rw-r--r--drivers/net/wireless/marvell/mwifiex/pcie.c115
-rw-r--r--drivers/net/wireless/marvell/mwifiex/pcie.h24
-rw-r--r--drivers/net/wireless/marvell/mwifiex/scan.c202
-rw-r--r--drivers/net/wireless/marvell/mwifiex/sdio.c88
-rw-r--r--drivers/net/wireless/marvell/mwifiex/sdio.h7
-rw-r--r--drivers/net/wireless/marvell/mwifiex/sta_cmd.c42
-rw-r--r--drivers/net/wireless/marvell/mwifiex/sta_cmdresp.c15
-rw-r--r--drivers/net/wireless/marvell/mwifiex/sta_event.c3
-rw-r--r--drivers/net/wireless/marvell/mwifiex/sta_ioctl.c47
-rw-r--r--drivers/net/wireless/marvell/mwifiex/tdls.c2
-rw-r--r--drivers/net/wireless/marvell/mwifiex/txrx.c13
-rw-r--r--drivers/net/wireless/marvell/mwifiex/uap_cmd.c4
-rw-r--r--drivers/net/wireless/marvell/mwifiex/uap_txrx.c96
-rw-r--r--drivers/net/wireless/marvell/mwifiex/usb.c11
-rw-r--r--drivers/net/wireless/marvell/mwl8k.c88
-rw-r--r--drivers/net/wireless/mediatek/mt7601u/init.c4
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2800lib.c34
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2x00.h7
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2x00dev.c43
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt2x00usb.c21
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt61pci.c22
-rw-r--r--drivers/net/wireless/ralink/rt2x00/rt73usb.c22
-rw-r--r--drivers/net/wireless/realtek/rtl818x/rtl8180/dev.c16
-rw-r--r--drivers/net/wireless/realtek/rtl818x/rtl8187/dev.c4
-rw-r--r--drivers/net/wireless/realtek/rtl818x/rtl8187/rtl8187.h99
-rw-r--r--drivers/net/wireless/realtek/rtl818x/rtl8187/rtl8225.c93
-rw-r--r--drivers/net/wireless/realtek/rtl8xxxu/Makefile3
-rw-r--r--drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.h301
-rw-r--r--drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192c.c586
-rw-r--r--drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8192e.c1525
-rw-r--r--drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8723a.c397
-rw-r--r--drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_8723b.c1682
-rw-r--r--drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_core.c (renamed from drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu.c)3601
-rw-r--r--drivers/net/wireless/realtek/rtl8xxxu/rtl8xxxu_regs.h46
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/base.c48
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtc8192e2ant.c847
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtc8723b1ant.c611
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtc8723b2ant.c865
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtc8821a1ant.c652
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtc8821a2ant.c851
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.c31
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/btcoexist/halbtcoutsrc.h19
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/btcoexist/rtl_btc.c5
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/pci.c43
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/pci.h2
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/ps.c12
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/regd.c16
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8188ee/dm.c2
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8188ee/phy.c3
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192c/dm_common.c2
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192ee/trx.c2
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192se/phy.c2
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8192se/rf.c3
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8723ae/hal_btc.c6
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8723be/hw.c5
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8723be/phy.c10
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8723be/rf.c4
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8723be/sw.c3
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8821ae/dm.c6
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8821ae/hw.c4
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/rtl8821ae/phy.c8
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/usb.c2
-rw-r--r--drivers/net/wireless/realtek/rtlwifi/wifi.h9
-rw-r--r--drivers/net/wireless/rndis_wlan.c4
-rw-r--r--drivers/net/wireless/rsi/rsi_91x_mac80211.c100
-rw-r--r--drivers/net/wireless/rsi/rsi_91x_mgmt.c8
-rw-r--r--drivers/net/wireless/rsi/rsi_91x_pkt.c24
-rw-r--r--drivers/net/wireless/rsi/rsi_main.h2
-rw-r--r--drivers/net/wireless/st/cw1200/main.c10
-rw-r--r--drivers/net/wireless/st/cw1200/scan.c2
-rw-r--r--drivers/net/wireless/st/cw1200/sta.c6
-rw-r--r--drivers/net/wireless/st/cw1200/txrx.c2
-rw-r--r--drivers/net/wireless/st/cw1200/wsm.c4
-rw-r--r--drivers/net/wireless/ti/wl1251/main.c2
-rw-r--r--drivers/net/wireless/ti/wl1251/ps.c2
-rw-r--r--drivers/net/wireless/ti/wl1251/rx.c2
-rw-r--r--drivers/net/wireless/ti/wl12xx/main.c12
-rw-r--r--drivers/net/wireless/ti/wl12xx/scan.c22
-rw-r--r--drivers/net/wireless/ti/wl18xx/cmd.c6
-rw-r--r--drivers/net/wireless/ti/wl18xx/event.c6
-rw-r--r--drivers/net/wireless/ti/wl18xx/main.c22
-rw-r--r--drivers/net/wireless/ti/wl18xx/scan.c8
-rw-r--r--drivers/net/wireless/ti/wl18xx/tx.c2
-rw-r--r--drivers/net/wireless/ti/wlcore/cmd.c36
-rw-r--r--drivers/net/wireless/ti/wlcore/cmd.h6
-rw-r--r--drivers/net/wireless/ti/wlcore/io.c17
-rw-r--r--drivers/net/wireless/ti/wlcore/io.h3
-rw-r--r--drivers/net/wireless/ti/wlcore/main.c42
-rw-r--r--drivers/net/wireless/ti/wlcore/ps.c6
-rw-r--r--drivers/net/wireless/ti/wlcore/rx.c4
-rw-r--r--drivers/net/wireless/ti/wlcore/rx.h2
-rw-r--r--drivers/net/wireless/ti/wlcore/scan.c16
-rw-r--r--drivers/net/wireless/ti/wlcore/spi.c4
-rw-r--r--drivers/net/wireless/ti/wlcore/tx.c2
-rw-r--r--drivers/net/wireless/ti/wlcore/tx.h4
-rw-r--r--drivers/net/wireless/ti/wlcore/wlcore.h4
-rw-r--r--drivers/net/wireless/ti/wlcore/wlcore_i.h2
-rw-r--r--drivers/net/wireless/wl3501_cs.c4
-rw-r--r--drivers/net/wireless/zydas/zd1201.c2
-rw-r--r--drivers/net/wireless/zydas/zd1211rw/zd_mac.c4
-rw-r--r--drivers/net/xen-netback/Makefile2
-rw-r--r--drivers/net/xen-netback/common.h74
-rw-r--r--drivers/net/xen-netback/hash.c384
-rw-r--r--drivers/net/xen-netback/interface.c133
-rw-r--r--drivers/net/xen-netback/netback.c250
-rw-r--r--drivers/net/xen-netback/xenbus.c79
-rw-r--r--drivers/nfc/Kconfig11
-rw-r--r--drivers/nfc/Makefile2
-rw-r--r--drivers/nfc/fdp/fdp.c3
-rw-r--r--drivers/nfc/nxp-nci/i2c.c1
-rw-r--r--drivers/nfc/pn533/Kconfig27
-rw-r--r--drivers/nfc/pn533/Makefile9
-rw-r--r--drivers/nfc/pn533/i2c.c281
-rw-r--r--drivers/nfc/pn533/pn533.c (renamed from drivers/nfc/pn533.c)1220
-rw-r--r--drivers/nfc/pn533/pn533.h238
-rw-r--r--drivers/nfc/pn533/usb.c597
-rw-r--r--drivers/nfc/pn544/i2c.c1
-rw-r--r--drivers/nfc/st-nci/i2c.c33
-rw-r--r--drivers/nfc/st-nci/se.c28
-rw-r--r--drivers/nfc/st-nci/spi.c32
-rw-r--r--drivers/nfc/st-nci/st-nci.h14
-rw-r--r--drivers/nfc/st-nci/vendor_cmds.c62
-rw-r--r--drivers/nfc/st21nfca/core.c13
-rw-r--r--drivers/nfc/st21nfca/i2c.c39
-rw-r--r--drivers/nfc/st21nfca/se.c2
-rw-r--r--drivers/nvdimm/Kconfig13
-rw-r--r--drivers/nvdimm/Makefile1
-rw-r--r--drivers/nvdimm/blk.c208
-rw-r--r--drivers/nvdimm/btt.c26
-rw-r--r--drivers/nvdimm/btt_devs.c24
-rw-r--r--drivers/nvdimm/bus.c63
-rw-r--r--drivers/nvdimm/claim.c86
-rw-r--r--drivers/nvdimm/core.c5
-rw-r--r--drivers/nvdimm/dax_devs.c134
-rw-r--r--drivers/nvdimm/dimm_devs.c23
-rw-r--r--drivers/nvdimm/namespace_devs.c38
-rw-r--r--drivers/nvdimm/nd-core.h6
-rw-r--r--drivers/nvdimm/nd.h83
-rw-r--r--drivers/nvdimm/pfn.h5
-rw-r--r--drivers/nvdimm/pfn_devs.c315
-rw-r--r--drivers/nvdimm/pmem.c503
-rw-r--r--drivers/nvdimm/region.c2
-rw-r--r--drivers/nvdimm/region_devs.c34
-rw-r--r--drivers/nvme/host/Kconfig2
-rw-r--r--drivers/nvme/host/core.c309
-rw-r--r--drivers/nvme/host/lightnvm.c82
-rw-r--r--drivers/nvme/host/nvme.h92
-rw-r--r--drivers/nvme/host/pci.c272
-rw-r--r--drivers/nvmem/Kconfig5
-rw-r--r--drivers/nvmem/core.c89
-rw-r--r--drivers/nvmem/imx-ocotp.c55
-rw-r--r--drivers/nvmem/lpc18xx_eeprom.c94
-rw-r--r--drivers/nvmem/qfprom.c56
-rw-r--r--drivers/nvmem/rockchip-efuse.c49
-rw-r--r--drivers/nvmem/sunxi_sid.c54
-rw-r--r--drivers/nvmem/vf610-ocotp.c44
-rw-r--r--drivers/of/Kconfig3
-rw-r--r--drivers/of/Makefile2
-rw-r--r--drivers/of/address.c116
-rw-r--r--drivers/of/base.c212
-rw-r--r--drivers/of/device.c2
-rw-r--r--drivers/of/dynamic.c6
-rw-r--r--drivers/of/fdt.c382
-rw-r--r--drivers/of/of_mdio.c27
-rw-r--r--drivers/of/of_mtd.c119
-rw-r--r--drivers/of/of_numa.c211
-rw-r--r--drivers/of/platform.c28
-rw-r--r--drivers/of/unittest.c40
-rw-r--r--drivers/parport/procfs.c2
-rw-r--r--drivers/pci/Kconfig3
-rw-r--r--drivers/pci/Makefile2
-rw-r--r--drivers/pci/bus.c6
-rw-r--r--drivers/pci/ecam.c164
-rw-r--r--drivers/pci/ecam.h67
-rw-r--r--drivers/pci/host/Kconfig18
-rw-r--r--drivers/pci/host/Makefile3
-rw-r--r--drivers/pci/host/pci-dra7xx.c4
-rw-r--r--drivers/pci/host/pci-host-common.c114
-rw-r--r--drivers/pci/host/pci-host-common.h47
-rw-r--r--drivers/pci/host/pci-host-generic.c52
-rw-r--r--drivers/pci/host/pci-hyperv.c38
-rw-r--r--drivers/pci/host/pci-imx6.c213
-rw-r--r--drivers/pci/host/pci-keystone-dw.c38
-rw-r--r--drivers/pci/host/pci-keystone.c52
-rw-r--r--drivers/pci/host/pci-keystone.h6
-rw-r--r--drivers/pci/host/pci-mvebu.c7
-rw-r--r--drivers/pci/host/pci-tegra.c244
-rw-r--r--drivers/pci/host/pci-thunder-ecam.c39
-rw-r--r--drivers/pci/host/pci-thunder-pem.c134
-rw-r--r--drivers/pci/host/pcie-armada8k.c262
-rw-r--r--drivers/pci/host/pcie-designware.c47
-rw-r--r--drivers/pci/host/pcie-xilinx-nwl.c2
-rw-r--r--drivers/pci/hotplug/acpiphp_ibm.c2
-rw-r--r--drivers/pci/hotplug/rpadlpar_core.c8
-rw-r--r--drivers/pci/hotplug/rpaphp_core.c4
-rw-r--r--drivers/pci/hotplug/rpaphp_pci.c4
-rw-r--r--drivers/pci/pci-sysfs.c7
-rw-r--r--drivers/pci/pci.c160
-rw-r--r--drivers/pci/pcie/Kconfig14
-rw-r--r--drivers/pci/pcie/Makefile2
-rw-r--r--drivers/pci/pcie/pcie-dpc.c163
-rw-r--r--drivers/pci/pcie/portdrv.h15
-rw-r--r--drivers/pci/pcie/portdrv_acpi.c12
-rw-r--r--drivers/pci/pcie/portdrv_core.c36
-rw-r--r--drivers/pci/probe.c7
-rw-r--r--drivers/pci/quirks.c196
-rw-r--r--drivers/pci/search.c14
-rw-r--r--drivers/pcmcia/electra_cf.c2
-rw-r--r--drivers/perf/arm_pmu.c8
-rw-r--r--drivers/phy/Kconfig35
-rw-r--r--drivers/phy/Makefile5
-rw-r--r--drivers/phy/phy-bcm-ns-usb2.c137
-rw-r--r--drivers/phy/phy-brcm-sata.c412
-rw-r--r--drivers/phy/phy-brcmstb-sata.c250
-rw-r--r--drivers/phy/phy-core.c50
-rw-r--r--drivers/phy/phy-exynos-mipi-video.c321
-rw-r--r--drivers/phy/phy-mt65xx-usb3.c77
-rw-r--r--drivers/phy/phy-rcar-gen2.c1
-rw-r--r--drivers/phy/phy-rcar-gen3-usb2.c88
-rw-r--r--drivers/phy/phy-rockchip-usb.c2
-rw-r--r--drivers/phy/tegra/Kconfig8
-rw-r--r--drivers/phy/tegra/Makefile6
-rw-r--r--drivers/phy/tegra/xusb-tegra124.c1752
-rw-r--r--drivers/phy/tegra/xusb-tegra210.c2045
-rw-r--r--drivers/phy/tegra/xusb.c1021
-rw-r--r--drivers/phy/tegra/xusb.h421
-rw-r--r--drivers/pinctrl/bcm/Kconfig13
-rw-r--r--drivers/pinctrl/bcm/Makefile1
-rw-r--r--drivers/pinctrl/bcm/pinctrl-bcm281xx.c6
-rw-r--r--drivers/pinctrl/bcm/pinctrl-bcm2835.c16
-rw-r--r--drivers/pinctrl/bcm/pinctrl-cygnus-mux.c4
-rw-r--r--drivers/pinctrl/bcm/pinctrl-iproc-gpio.c14
-rw-r--r--drivers/pinctrl/bcm/pinctrl-ns2-mux.c1117
-rw-r--r--drivers/pinctrl/bcm/pinctrl-nsp-gpio.c4
-rw-r--r--drivers/pinctrl/berlin/berlin.c5
-rw-r--r--drivers/pinctrl/core.c63
-rw-r--r--drivers/pinctrl/freescale/pinctrl-imx.c11
-rw-r--r--drivers/pinctrl/freescale/pinctrl-imx.h1
-rw-r--r--drivers/pinctrl/freescale/pinctrl-imx1-core.c11
-rw-r--r--drivers/pinctrl/freescale/pinctrl-imx1.c1
-rw-r--r--drivers/pinctrl/freescale/pinctrl-imx1.h1
-rw-r--r--drivers/pinctrl/freescale/pinctrl-imx21.c1
-rw-r--r--drivers/pinctrl/freescale/pinctrl-imx25.c1
-rw-r--r--drivers/pinctrl/freescale/pinctrl-imx27.c1
-rw-r--r--drivers/pinctrl/freescale/pinctrl-imx35.c1
-rw-r--r--drivers/pinctrl/freescale/pinctrl-imx50.c1
-rw-r--r--drivers/pinctrl/freescale/pinctrl-imx51.c1
-rw-r--r--drivers/pinctrl/freescale/pinctrl-imx53.c1
-rw-r--r--drivers/pinctrl/freescale/pinctrl-imx6dl.c1
-rw-r--r--drivers/pinctrl/freescale/pinctrl-imx6q.c1
-rw-r--r--drivers/pinctrl/freescale/pinctrl-imx6sl.c1
-rw-r--r--drivers/pinctrl/freescale/pinctrl-imx6sx.c1
-rw-r--r--drivers/pinctrl/freescale/pinctrl-imx6ul.c1
-rw-r--r--drivers/pinctrl/freescale/pinctrl-imx7d.c1
-rw-r--r--drivers/pinctrl/freescale/pinctrl-vf610.c1
-rw-r--r--drivers/pinctrl/intel/Kconfig3
-rw-r--r--drivers/pinctrl/intel/pinctrl-baytrail.c1697
-rw-r--r--drivers/pinctrl/intel/pinctrl-cherryview.c8
-rw-r--r--drivers/pinctrl/intel/pinctrl-intel.c8
-rw-r--r--drivers/pinctrl/mediatek/pinctrl-mtk-common.c21
-rw-r--r--drivers/pinctrl/meson/Makefile2
-rw-r--r--drivers/pinctrl/meson/pinctrl-meson-gxbb.c432
-rw-r--r--drivers/pinctrl/meson/pinctrl-meson.c12
-rw-r--r--drivers/pinctrl/meson/pinctrl-meson.h2
-rw-r--r--drivers/pinctrl/meson/pinctrl-meson8b.c2
-rw-r--r--drivers/pinctrl/mvebu/pinctrl-armada-370.c6
-rw-r--r--drivers/pinctrl/mvebu/pinctrl-armada-375.c6
-rw-r--r--drivers/pinctrl/mvebu/pinctrl-armada-38x.c6
-rw-r--r--drivers/pinctrl/mvebu/pinctrl-armada-39x.c6
-rw-r--r--drivers/pinctrl/mvebu/pinctrl-armada-xp.c6
-rw-r--r--drivers/pinctrl/mvebu/pinctrl-dove.c5
-rw-r--r--drivers/pinctrl/mvebu/pinctrl-kirkwood.c6
-rw-r--r--drivers/pinctrl/mvebu/pinctrl-mvebu.c9
-rw-r--r--drivers/pinctrl/mvebu/pinctrl-mvebu.h1
-rw-r--r--drivers/pinctrl/mvebu/pinctrl-orion.c6
-rw-r--r--drivers/pinctrl/nomadik/pinctrl-abx500.c7
-rw-r--r--drivers/pinctrl/nomadik/pinctrl-nomadik.c168
-rw-r--r--drivers/pinctrl/pinconf-generic.c2
-rw-r--r--drivers/pinctrl/pinctrl-adi2.c13
-rw-r--r--drivers/pinctrl/pinctrl-amd.c12
-rw-r--r--drivers/pinctrl/pinctrl-as3722.c11
-rw-r--r--drivers/pinctrl/pinctrl-at91-pio4.c32
-rw-r--r--drivers/pinctrl/pinctrl-at91.c28
-rw-r--r--drivers/pinctrl/pinctrl-digicolor.c15
-rw-r--r--drivers/pinctrl/pinctrl-lantiq.c2
-rw-r--r--drivers/pinctrl/pinctrl-lpc18xx.c5
-rw-r--r--drivers/pinctrl/pinctrl-palmas.c14
-rw-r--r--drivers/pinctrl/pinctrl-pic32.c5
-rw-r--r--drivers/pinctrl/pinctrl-pistachio.c6
-rw-r--r--drivers/pinctrl/pinctrl-rockchip.c192
-rw-r--r--drivers/pinctrl/pinctrl-st.c2
-rw-r--r--drivers/pinctrl/pinctrl-tb10x.c5
-rw-r--r--drivers/pinctrl/pinctrl-tz1090-pdc.c13
-rw-r--r--drivers/pinctrl/pinctrl-tz1090.c13
-rw-r--r--drivers/pinctrl/pinctrl-u300.c12
-rw-r--r--drivers/pinctrl/pinctrl-utils.c4
-rw-r--r--drivers/pinctrl/pinctrl-utils.h2
-rw-r--r--drivers/pinctrl/pinctrl-zynq.c14
-rw-r--r--drivers/pinctrl/pxa/Kconfig10
-rw-r--r--drivers/pinctrl/pxa/Makefile1
-rw-r--r--drivers/pinctrl/pxa/pinctrl-pxa25x.c274
-rw-r--r--drivers/pinctrl/pxa/pinctrl-pxa2xx.c4
-rw-r--r--drivers/pinctrl/qcom/pinctrl-msm.c10
-rw-r--r--drivers/pinctrl/qcom/pinctrl-spmi-gpio.c9
-rw-r--r--drivers/pinctrl/qcom/pinctrl-spmi-mpp.c9
-rw-r--r--drivers/pinctrl/qcom/pinctrl-ssbi-gpio.c11
-rw-r--r--drivers/pinctrl/qcom/pinctrl-ssbi-mpp.c11
-rw-r--r--drivers/pinctrl/samsung/pinctrl-exynos5440.c17
-rw-r--r--drivers/pinctrl/samsung/pinctrl-samsung.c3
-rw-r--r--drivers/pinctrl/sh-pfc/core.c20
-rw-r--r--drivers/pinctrl/sh-pfc/core.h5
-rw-r--r--drivers/pinctrl/sh-pfc/gpio.c2
-rw-r--r--drivers/pinctrl/sh-pfc/pfc-r8a7790.c54
-rw-r--r--drivers/pinctrl/sh-pfc/pfc-r8a7794.c217
-rw-r--r--drivers/pinctrl/sh-pfc/pfc-r8a7795.c218
-rw-r--r--drivers/pinctrl/sh-pfc/pinctrl.c124
-rw-r--r--drivers/pinctrl/sh-pfc/sh_pfc.h19
-rw-r--r--drivers/pinctrl/sirf/pinctrl-atlas7.c2
-rw-r--r--drivers/pinctrl/spear/pinctrl-spear.c11
-rw-r--r--drivers/pinctrl/spear/pinctrl-spear.h1
-rw-r--r--drivers/pinctrl/spear/pinctrl-spear1310.c6
-rw-r--r--drivers/pinctrl/spear/pinctrl-spear1340.c6
-rw-r--r--drivers/pinctrl/spear/pinctrl-spear300.c6
-rw-r--r--drivers/pinctrl/spear/pinctrl-spear310.c6
-rw-r--r--drivers/pinctrl/spear/pinctrl-spear320.c6
-rw-r--r--drivers/pinctrl/stm32/pinctrl-stm32.c185
-rw-r--r--drivers/pinctrl/sunxi/pinctrl-sunxi.c13
-rw-r--r--drivers/pinctrl/tegra/pinctrl-tegra-xusb.c35
-rw-r--r--drivers/pinctrl/tegra/pinctrl-tegra.c36
-rw-r--r--drivers/pinctrl/tegra/pinctrl-tegra.h8
-rw-r--r--drivers/pinctrl/tegra/pinctrl-tegra114.c3
-rw-r--r--drivers/pinctrl/tegra/pinctrl-tegra124.c3
-rw-r--r--drivers/pinctrl/tegra/pinctrl-tegra20.c4
-rw-r--r--drivers/pinctrl/tegra/pinctrl-tegra210.c5
-rw-r--r--drivers/pinctrl/tegra/pinctrl-tegra30.c3
-rw-r--r--drivers/pinctrl/uniphier/pinctrl-uniphier-core.c14
-rw-r--r--drivers/pinctrl/uniphier/pinctrl-uniphier-ld4.c1
-rw-r--r--drivers/pinctrl/uniphier/pinctrl-uniphier-ld6b.c1
-rw-r--r--drivers/pinctrl/uniphier/pinctrl-uniphier-pro4.c1
-rw-r--r--drivers/pinctrl/uniphier/pinctrl-uniphier-pro5.c1
-rw-r--r--drivers/pinctrl/uniphier/pinctrl-uniphier-pxs2.c1
-rw-r--r--drivers/pinctrl/uniphier/pinctrl-uniphier-sld8.c1
-rw-r--r--drivers/pinctrl/uniphier/pinctrl-uniphier.h2
-rw-r--r--drivers/pinctrl/vt8500/pinctrl-wmt.c7
-rw-r--r--drivers/platform/chrome/Kconfig10
-rw-r--r--drivers/platform/chrome/Makefile15
-rw-r--r--drivers/platform/chrome/chromeos_laptop.c22
-rw-r--r--drivers/platform/chrome/chromeos_pstore.c55
-rw-r--r--drivers/platform/chrome/cros_ec_dev.c7
-rw-r--r--drivers/platform/chrome/cros_ec_lightbar.c10
-rw-r--r--drivers/platform/chrome/cros_ec_proto.c4
-rw-r--r--drivers/platform/chrome/cros_kbd_led_backlight.c122
-rw-r--r--drivers/platform/mips/Kconfig4
-rw-r--r--drivers/platform/mips/Makefile1
-rw-r--r--drivers/platform/mips/cpu_hwmon.c10
-rw-r--r--drivers/platform/x86/Kconfig12
-rw-r--r--drivers/platform/x86/Makefile1
-rw-r--r--drivers/platform/x86/acer-wmi.c16
-rw-r--r--drivers/platform/x86/asus-laptop.c15
-rw-r--r--drivers/platform/x86/asus-wmi.c5
-rw-r--r--drivers/platform/x86/dell-rbtn.c56
-rw-r--r--drivers/platform/x86/eeepc-wmi.c24
-rw-r--r--drivers/platform/x86/fujitsu-laptop.c63
-rw-r--r--drivers/platform/x86/ideapad-laptop.c19
-rw-r--r--drivers/platform/x86/intel_menlow.c49
-rw-r--r--drivers/platform/x86/intel_pmc_core.c200
-rw-r--r--drivers/platform/x86/intel_pmc_core.h51
-rw-r--r--drivers/platform/x86/intel_pmic_gpio.c6
-rw-r--r--drivers/platform/x86/intel_telemetry_core.c6
-rw-r--r--drivers/platform/x86/intel_telemetry_pltdrv.c2
-rw-r--r--drivers/platform/x86/sony-laptop.c3
-rw-r--r--drivers/platform/x86/surfacepro3_button.c9
-rw-r--r--drivers/platform/x86/thinkpad_acpi.c43
-rw-r--r--drivers/platform/x86/wmi.c104
-rw-r--r--drivers/pnp/pnpbios/Kconfig2
-rw-r--r--drivers/pnp/pnpbios/core.c3
-rw-r--r--drivers/power/avs/rockchip-io-domain.c10
-rw-r--r--drivers/power/ipaq_micro_battery.c2
-rw-r--r--drivers/power/max8925_power.c10
-rw-r--r--drivers/power/reset/Kconfig8
-rw-r--r--drivers/power/reset/Makefile1
-rw-r--r--drivers/power/reset/at91-sama5d2_shdwc.c282
-rw-r--r--drivers/power/sbs-battery.c4
-rw-r--r--drivers/powercap/intel_rapl.c71
-rw-r--r--drivers/pwm/core.c223
-rw-r--r--drivers/pwm/pwm-clps711x.c2
-rw-r--r--drivers/pwm/pwm-crc.c2
-rw-r--r--drivers/pwm/pwm-lpc18xx-sct.c2
-rw-r--r--drivers/pwm/pwm-omap-dmtimer.c2
-rw-r--r--drivers/pwm/pwm-pxa.c2
-rw-r--r--drivers/pwm/pwm-rcar.c2
-rw-r--r--drivers/pwm/pwm-sun4i.c3
-rw-r--r--drivers/pwm/sysfs.c70
-rw-r--r--drivers/regulator/Kconfig17
-rw-r--r--drivers/regulator/Makefile8
-rw-r--r--drivers/regulator/act8865-regulator.c113
-rw-r--r--drivers/regulator/as3722-regulator.c65
-rw-r--r--drivers/regulator/axp20x-regulator.c12
-rw-r--r--drivers/regulator/core.c279
-rw-r--r--drivers/regulator/da9063-regulator.c2
-rw-r--r--drivers/regulator/fan53555.c24
-rw-r--r--drivers/regulator/gpio-regulator.c2
-rw-r--r--drivers/regulator/helpers.c2
-rw-r--r--drivers/regulator/lp3971.c2
-rw-r--r--drivers/regulator/lp3972.c2
-rw-r--r--drivers/regulator/lp873x-regulator.c241
-rw-r--r--drivers/regulator/max14577-regulator.c (renamed from drivers/regulator/max14577.c)0
-rw-r--r--drivers/regulator/max77620-regulator.c83
-rw-r--r--drivers/regulator/max77686-regulator.c17
-rw-r--r--drivers/regulator/max77693-regulator.c (renamed from drivers/regulator/max77693.c)0
-rw-r--r--drivers/regulator/max77802-regulator.c2
-rw-r--r--drivers/regulator/max8973-regulator.c97
-rw-r--r--drivers/regulator/max8997-regulator.c (renamed from drivers/regulator/max8997.c)2
-rw-r--r--drivers/regulator/of_regulator.c6
-rw-r--r--drivers/regulator/palmas-regulator.c67
-rw-r--r--drivers/regulator/pv88080-regulator.c419
-rw-r--r--drivers/regulator/pv88080-regulator.h92
-rw-r--r--drivers/regulator/pwm-regulator.c78
-rw-r--r--drivers/regulator/qcom_spmi-regulator.c275
-rw-r--r--drivers/regulator/rk808-regulator.c310
-rw-r--r--drivers/regulator/s2mps11.c41
-rw-r--r--drivers/regulator/tps6524x-regulator.c2
-rw-r--r--drivers/regulator/twl-regulator.c106
-rw-r--r--drivers/remoteproc/remoteproc_core.c39
-rw-r--r--drivers/remoteproc/remoteproc_internal.h1
-rw-r--r--drivers/remoteproc/remoteproc_virtio.c2
-rw-r--r--drivers/reset/Kconfig3
-rw-r--r--drivers/reset/Makefile1
-rw-r--r--drivers/reset/core.c217
-rw-r--r--drivers/reset/reset-lpc18xx.c22
-rw-r--r--drivers/reset/reset-oxnas.c136
-rw-r--r--drivers/rpmsg/virtio_rpmsg_bus.c8
-rw-r--r--drivers/rtc/Kconfig52
-rw-r--r--drivers/rtc/rtc-at91sam9.c2
-rw-r--r--drivers/rtc/rtc-cmos.c2
-rw-r--r--drivers/rtc/rtc-da9052.c13
-rw-r--r--drivers/rtc/rtc-ds1216.c3
-rw-r--r--drivers/rtc/rtc-ds1286.c3
-rw-r--r--drivers/rtc/rtc-ds1302.c348
-rw-r--r--drivers/rtc/rtc-ds1307.c23
-rw-r--r--drivers/rtc/rtc-ds1343.c2
-rw-r--r--drivers/rtc/rtc-ds1511.c3
-rw-r--r--drivers/rtc/rtc-ds1553.c3
-rw-r--r--drivers/rtc/rtc-ds1672.c5
-rw-r--r--drivers/rtc/rtc-ds1685.c4
-rw-r--r--drivers/rtc/rtc-ds1742.c3
-rw-r--r--drivers/rtc/rtc-ds3232.c9
-rw-r--r--drivers/rtc/rtc-ep93xx.c3
-rw-r--r--drivers/rtc/rtc-gemini.c1
-rw-r--r--drivers/rtc/rtc-hym8563.c2
-rw-r--r--drivers/rtc/rtc-isl12022.c5
-rw-r--r--drivers/rtc/rtc-isl1208.c6
-rw-r--r--drivers/rtc/rtc-m41t80.c447
-rw-r--r--drivers/rtc/rtc-m48t35.c3
-rw-r--r--drivers/rtc/rtc-m48t86.c4
-rw-r--r--drivers/rtc/rtc-max6900.c5
-rw-r--r--drivers/rtc/rtc-mc13xxx.c19
-rw-r--r--drivers/rtc/rtc-mrst.c2
-rw-r--r--drivers/rtc/rtc-mxc.c3
-rw-r--r--drivers/rtc/rtc-pcf2123.c4
-rw-r--r--drivers/rtc/rtc-pcf8563.c7
-rw-r--r--drivers/rtc/rtc-rs5c313.c2
-rw-r--r--drivers/rtc/rtc-rs5c348.c4
-rw-r--r--drivers/rtc/rtc-rs5c372.c18
-rw-r--r--drivers/rtc/rtc-rv3029c2.c596
-rw-r--r--drivers/rtc/rtc-rx8581.c5
-rw-r--r--drivers/rtc/rtc-sh.c2
-rw-r--r--drivers/rtc/rtc-snvs.c2
-rw-r--r--drivers/rtc/rtc-stk17ta8.c3
-rw-r--r--drivers/rtc/rtc-stmp3xxx.c7
-rw-r--r--drivers/rtc/rtc-tps6586x.c2
-rw-r--r--drivers/rtc/rtc-x1205.c5
-rw-r--r--drivers/rtc/rtc-zynqmp.c74
-rw-r--r--drivers/s390/block/dasd.c92
-rw-r--r--drivers/s390/block/dasd_3990_erp.c20
-rw-r--r--drivers/s390/block/dasd_devmap.c27
-rw-r--r--drivers/s390/block/dasd_eckd.c699
-rw-r--r--drivers/s390/block/dasd_eckd.h34
-rw-r--r--drivers/s390/block/dasd_int.h17
-rw-r--r--drivers/s390/block/dasd_ioctl.c61
-rw-r--r--drivers/s390/block/dcssblk.c4
-rw-r--r--drivers/s390/char/Makefile2
-rw-r--r--drivers/s390/char/con3215.c22
-rw-r--r--drivers/s390/char/con3270.c3
-rw-r--r--drivers/s390/char/fs3270.c3
-rw-r--r--drivers/s390/char/raw3270.c131
-rw-r--r--drivers/s390/char/raw3270.h8
-rw-r--r--drivers/s390/char/sclp.h38
-rw-r--r--drivers/s390/char/sclp_cmd.c61
-rw-r--r--drivers/s390/char/sclp_cpi_sys.c2
-rw-r--r--drivers/s390/char/sclp_early.c6
-rw-r--r--drivers/s390/char/sclp_pci.c193
-rw-r--r--drivers/s390/char/tty3270.c90
-rw-r--r--drivers/s390/crypto/ap_bus.c2
-rw-r--r--drivers/s390/net/ctcm_main.c6
-rw-r--r--drivers/s390/net/ctcm_mpc.c2
-rw-r--r--drivers/s390/net/netiucv.c2
-rw-r--r--drivers/s390/net/qeth_core_main.c2
-rw-r--r--drivers/s390/scsi/zfcp_unit.c3
-rw-r--r--drivers/sbus/char/openprom.c40
-rw-r--r--drivers/scsi/Kconfig16
-rw-r--r--drivers/scsi/NCR5380.c657
-rw-r--r--drivers/scsi/NCR5380.h143
-rw-r--r--drivers/scsi/aacraid/aachba.c22
-rw-r--r--drivers/scsi/aacraid/aacraid.h9
-rw-r--r--drivers/scsi/aacraid/comminit.c43
-rw-r--r--drivers/scsi/aacraid/commsup.c39
-rw-r--r--drivers/scsi/aacraid/dpcsup.c7
-rw-r--r--drivers/scsi/aacraid/linit.c4
-rw-r--r--drivers/scsi/aacraid/src.c3
-rw-r--r--drivers/scsi/aic94xx/aic94xx_hwi.c2
-rw-r--r--drivers/scsi/aic94xx/aic94xx_seq.c2
-rw-r--r--drivers/scsi/arm/cumana_1.c25
-rw-r--r--drivers/scsi/arm/cumana_2.c2
-rw-r--r--drivers/scsi/arm/eesox.c2
-rw-r--r--drivers/scsi/arm/oak.c22
-rw-r--r--drivers/scsi/arm/powertec.c2
-rw-r--r--drivers/scsi/atari_NCR5380.c2676
-rw-r--r--drivers/scsi/atari_scsi.c144
-rw-r--r--drivers/scsi/bfa/bfa_fcs.h4
-rw-r--r--drivers/scsi/bfa/bfa_fcs_fcpim.c5
-rw-r--r--drivers/scsi/bfa/bfad_im.c5
-rw-r--r--drivers/scsi/bfa/bfi.h2
-rw-r--r--drivers/scsi/bnx2fc/bnx2fc.h3
-rw-r--r--drivers/scsi/bnx2fc/bnx2fc_fcoe.c100
-rw-r--r--drivers/scsi/bnx2fc/bnx2fc_io.c14
-rw-r--r--drivers/scsi/bnx2i/bnx2i_iscsi.c4
-rw-r--r--drivers/scsi/constants.c859
-rw-r--r--drivers/scsi/cxlflash/superpipe.c15
-rw-r--r--drivers/scsi/device_handler/scsi_dh_alua.c34
-rw-r--r--drivers/scsi/dmx3191d.c10
-rw-r--r--drivers/scsi/dtc.c27
-rw-r--r--drivers/scsi/dtc.h7
-rw-r--r--drivers/scsi/eata_pio.c1
-rw-r--r--drivers/scsi/esas2r/esas2r_main.c4
-rw-r--r--drivers/scsi/fnic/fnic.h2
-rw-r--r--drivers/scsi/fnic/fnic_scsi.c91
-rw-r--r--drivers/scsi/g_NCR5380.c141
-rw-r--r--drivers/scsi/g_NCR5380.h26
-rw-r--r--drivers/scsi/hisi_sas/hisi_sas.h7
-rw-r--r--drivers/scsi/hisi_sas/hisi_sas_main.c11
-rw-r--r--drivers/scsi/hisi_sas/hisi_sas_v2_hw.c93
-rw-r--r--drivers/scsi/hpsa.c187
-rw-r--r--drivers/scsi/hpsa.h1
-rw-r--r--drivers/scsi/isci/port.c2
-rw-r--r--drivers/scsi/isci/request.c5
-rw-r--r--drivers/scsi/iscsi_boot_sysfs.c62
-rw-r--r--drivers/scsi/iscsi_tcp.c12
-rw-r--r--drivers/scsi/libiscsi.c9
-rw-r--r--drivers/scsi/libsas/sas_ata.c7
-rw-r--r--drivers/scsi/lpfc/lpfc.h4
-rw-r--r--drivers/scsi/lpfc/lpfc_attr.c26
-rw-r--r--drivers/scsi/lpfc/lpfc_ct.c6
-rw-r--r--drivers/scsi/lpfc/lpfc_els.c176
-rw-r--r--drivers/scsi/lpfc/lpfc_hbadisc.c5
-rw-r--r--drivers/scsi/lpfc/lpfc_hw.h75
-rw-r--r--drivers/scsi/lpfc/lpfc_hw4.h29
-rw-r--r--drivers/scsi/lpfc/lpfc_init.c27
-rw-r--r--drivers/scsi/lpfc/lpfc_mbox.c12
-rw-r--r--drivers/scsi/lpfc/lpfc_mem.c6
-rw-r--r--drivers/scsi/lpfc/lpfc_nportdisc.c4
-rw-r--r--drivers/scsi/lpfc/lpfc_sli.c140
-rw-r--r--drivers/scsi/lpfc/lpfc_version.h6
-rw-r--r--drivers/scsi/lpfc/lpfc_vport.c5
-rw-r--r--drivers/scsi/mac_scsi.c241
-rw-r--r--drivers/scsi/megaraid/megaraid_sas.h6
-rw-r--r--drivers/scsi/megaraid/megaraid_sas_base.c117
-rw-r--r--drivers/scsi/megaraid/megaraid_sas_fusion.c7
-rw-r--r--drivers/scsi/mpt3sas/mpi/mpi2.h7
-rw-r--r--drivers/scsi/mpt3sas/mpi/mpi2_cnfg.h18
-rw-r--r--drivers/scsi/mpt3sas/mpi/mpi2_init.h15
-rw-r--r--drivers/scsi/mpt3sas/mpi/mpi2_ioc.h40
-rw-r--r--drivers/scsi/mpt3sas/mpt3sas_base.c32
-rw-r--r--drivers/scsi/mpt3sas/mpt3sas_base.h11
-rw-r--r--drivers/scsi/mpt3sas/mpt3sas_scsih.c37
-rw-r--r--drivers/scsi/mvsas/mv_init.c19
-rw-r--r--drivers/scsi/mvsas/mv_sas.c5
-rw-r--r--drivers/scsi/pas16.c27
-rw-r--r--drivers/scsi/pas16.h5
-rw-r--r--drivers/scsi/pm8001/pm8001_init.c2
-rw-r--r--drivers/scsi/pm8001/pm8001_sas.c5
-rw-r--r--drivers/scsi/qla1280.c2
-rw-r--r--drivers/scsi/qla2xxx/Kconfig9
-rw-r--r--drivers/scsi/qla2xxx/qla_mr.c5
-rw-r--r--drivers/scsi/qla2xxx/qla_nx.c2
-rw-r--r--drivers/scsi/qla2xxx/qla_sup.c2
-rw-r--r--drivers/scsi/qla2xxx/qla_target.c56
-rw-r--r--drivers/scsi/qla2xxx/qla_target.h4
-rw-r--r--drivers/scsi/qla2xxx/tcm_qla2xxx.c59
-rw-r--r--drivers/scsi/qla2xxx/tcm_qla2xxx.h1
-rw-r--r--drivers/scsi/scsi_common.c53
-rw-r--r--drivers/scsi/scsi_debug.c2801
-rw-r--r--drivers/scsi/scsi_error.c3
-rw-r--r--drivers/scsi/scsi_lib.c188
-rw-r--r--drivers/scsi/scsi_priv.h2
-rw-r--r--drivers/scsi/scsi_proc.c3
-rw-r--r--drivers/scsi/scsi_scan.c45
-rw-r--r--drivers/scsi/scsi_sysfs.c9
-rw-r--r--drivers/scsi/scsi_trace.c161
-rw-r--r--drivers/scsi/scsi_transport_fc.c9
-rw-r--r--drivers/scsi/scsi_transport_iscsi.c19
-rw-r--r--drivers/scsi/scsi_transport_sas.c7
-rw-r--r--drivers/scsi/sd.c10
-rw-r--r--drivers/scsi/sense_codes.h826
-rw-r--r--drivers/scsi/snic/snic.h5
-rw-r--r--drivers/scsi/snic/snic_ctl.c8
-rw-r--r--drivers/scsi/snic/snic_debugfs.c20
-rw-r--r--drivers/scsi/snic/snic_disc.c19
-rw-r--r--drivers/scsi/snic/snic_fwint.h4
-rw-r--r--drivers/scsi/snic/snic_io.c62
-rw-r--r--drivers/scsi/snic/snic_isr.c6
-rw-r--r--drivers/scsi/snic/snic_main.c44
-rw-r--r--drivers/scsi/snic/snic_scsi.c56
-rw-r--r--drivers/scsi/snic/snic_stats.h12
-rw-r--r--drivers/scsi/snic/vnic_dev.c44
-rw-r--r--drivers/scsi/st.c9
-rw-r--r--drivers/scsi/sun3_scsi.c47
-rw-r--r--drivers/scsi/t128.c19
-rw-r--r--drivers/scsi/t128.h7
-rw-r--r--drivers/soc/Makefile3
-rw-r--r--drivers/soc/brcmstb/Kconfig1
-rw-r--r--drivers/soc/brcmstb/common.c66
-rw-r--r--drivers/soc/fsl/qe/gpio.c20
-rw-r--r--drivers/soc/mediatek/mtk-pmic-wrap.c544
-rw-r--r--drivers/soc/qcom/smd-rpm.c9
-rw-r--r--drivers/soc/qcom/smd.c247
-rw-r--r--drivers/soc/qcom/smem.c3
-rw-r--r--drivers/soc/qcom/spm.c10
-rw-r--r--drivers/soc/qcom/wcnss_ctrl.c8
-rw-r--r--drivers/soc/renesas/Makefile7
-rw-r--r--drivers/soc/renesas/r8a7779-sysc.c34
-rw-r--r--drivers/soc/renesas/r8a7790-sysc.c48
-rw-r--r--drivers/soc/renesas/r8a7791-sysc.c33
-rw-r--r--drivers/soc/renesas/r8a7794-sysc.c33
-rw-r--r--drivers/soc/renesas/r8a7795-sysc.c56
-rw-r--r--drivers/soc/renesas/rcar-sysc.c401
-rw-r--r--drivers/soc/renesas/rcar-sysc.h58
-rw-r--r--drivers/soc/rockchip/pm_domains.c247
-rw-r--r--drivers/soc/tegra/Kconfig2
-rw-r--r--drivers/soc/tegra/pmc.c613
-rw-r--r--drivers/soc/versatile/soc-realview.c19
-rw-r--r--drivers/spi/Kconfig21
-rw-r--r--drivers/spi/Makefile2
-rw-r--r--drivers/spi/spi-axi-spi-engine.c1
-rw-r--r--drivers/spi/spi-bcm53xx.c78
-rw-r--r--drivers/spi/spi-cadence.c244
-rw-r--r--drivers/spi/spi-davinci.c76
-rw-r--r--drivers/spi/spi-dln2.c2
-rw-r--r--drivers/spi/spi-dw-pci.c2
-rw-r--r--drivers/spi/spi-ep93xx.c2
-rw-r--r--drivers/spi/spi-fsl-dspi.c11
-rw-r--r--drivers/spi/spi-fsl-espi.c30
-rw-r--r--drivers/spi/spi-octeon.c17
-rw-r--r--drivers/spi/spi-omap2-mcspi.c145
-rw-r--r--drivers/spi/spi-pic32-sqi.c727
-rw-r--r--drivers/spi/spi-pic32.c878
-rw-r--r--drivers/spi/spi-pxa2xx-dma.c28
-rw-r--r--drivers/spi/spi-pxa2xx-pci.c12
-rw-r--r--drivers/spi/spi-pxa2xx.c17
-rw-r--r--drivers/spi/spi-pxa2xx.h3
-rw-r--r--drivers/spi/spi-qup.c15
-rw-r--r--drivers/spi/spi-rockchip.c9
-rw-r--r--drivers/spi/spi-st-ssc4.c8
-rw-r--r--drivers/spi/spi-ti-qspi.c45
-rw-r--r--drivers/spi/spi-zynqmp-gqspi.c3
-rw-r--r--drivers/spi/spi.c7
-rw-r--r--drivers/spmi/spmi.c12
-rw-r--r--drivers/ssb/driver_gpio.c33
-rw-r--r--drivers/staging/Kconfig2
-rw-r--r--drivers/staging/Makefile1
-rw-r--r--drivers/staging/android/Kconfig17
-rw-r--r--drivers/staging/android/Makefile2
-rw-r--r--drivers/staging/android/ion/ion.c16
-rw-r--r--drivers/staging/android/ion/ion_chunk_heap.c4
-rw-r--r--drivers/staging/android/ion/ion_dummy_driver.c2
-rw-r--r--drivers/staging/android/ion/ion_test.c2
-rw-r--r--drivers/staging/android/lowmemorykiller.c9
-rw-r--r--drivers/staging/android/sync.c356
-rw-r--r--drivers/staging/android/sync.h91
-rw-r--r--drivers/staging/android/sync_debug.c8
-rw-r--r--drivers/staging/android/timed_gpio.c166
-rw-r--r--drivers/staging/android/timed_gpio.h33
-rw-r--r--drivers/staging/android/timed_output.c110
-rw-r--r--drivers/staging/android/timed_output.h37
-rw-r--r--drivers/staging/board/armadillo800eva.c8
-rw-r--r--drivers/staging/comedi/comedi_buf.c10
-rw-r--r--drivers/staging/comedi/comedi_fops.c54
-rw-r--r--drivers/staging/comedi/comedidev.h4
-rw-r--r--drivers/staging/comedi/drivers.c40
-rw-r--r--drivers/staging/comedi/drivers/amcc_s5933.h24
-rw-r--r--drivers/staging/comedi/drivers/amplc_dio200_common.c12
-rw-r--r--drivers/staging/comedi/drivers/amplc_pc263.c104
-rw-r--r--drivers/staging/comedi/drivers/amplc_pci224.c71
-rw-r--r--drivers/staging/comedi/drivers/amplc_pci230.c189
-rw-r--r--drivers/staging/comedi/drivers/amplc_pci263.c86
-rw-r--r--drivers/staging/comedi/drivers/c6xdigio.c4
-rw-r--r--drivers/staging/comedi/drivers/comedi_8254.h14
-rw-r--r--drivers/staging/comedi/drivers/daqboard2000.c2
-rw-r--r--drivers/staging/comedi/drivers/das1800.c1385
-rw-r--r--drivers/staging/comedi/drivers/dt282x.c119
-rw-r--r--drivers/staging/comedi/drivers/mite.c1113
-rw-r--r--drivers/staging/comedi/drivers/mite.h329
-rw-r--r--drivers/staging/comedi/drivers/ni_660x.c1174
-rw-r--r--drivers/staging/comedi/drivers/ni_labpc.h33
-rw-r--r--drivers/staging/comedi/drivers/ni_labpc_common.c65
-rw-r--r--drivers/staging/comedi/drivers/ni_labpc_cs.c95
-rw-r--r--drivers/staging/comedi/drivers/ni_labpc_pci.c4
-rw-r--r--drivers/staging/comedi/drivers/ni_labpc_regs.h82
-rw-r--r--drivers/staging/comedi/drivers/ni_mio_c_common.c0
-rw-r--r--drivers/staging/comedi/drivers/ni_mio_common.c981
-rw-r--r--drivers/staging/comedi/drivers/ni_pcidio.c37
-rw-r--r--drivers/staging/comedi/drivers/ni_pcimio.c36
-rw-r--r--drivers/staging/comedi/drivers/ni_stc.h56
-rw-r--r--drivers/staging/comedi/drivers/ni_tio.c807
-rw-r--r--drivers/staging/comedi/drivers/ni_tio.h66
-rw-r--r--drivers/staging/comedi/drivers/ni_tio_internal.h322
-rw-r--r--drivers/staging/comedi/drivers/ni_tiocmd.c127
-rw-r--r--drivers/staging/comedi/drivers/plx9052.h122
-rw-r--r--drivers/staging/comedi/drivers/plx9080.h2
-rw-r--r--drivers/staging/comedi/drivers/z8536.h89
-rw-r--r--drivers/staging/dgnc/dgnc_cls.c2
-rw-r--r--drivers/staging/dgnc/dgnc_driver.c52
-rw-r--r--drivers/staging/dgnc/dgnc_driver.h23
-rw-r--r--drivers/staging/dgnc/dgnc_mgmt.c28
-rw-r--r--drivers/staging/dgnc/dgnc_neo.c131
-rw-r--r--drivers/staging/dgnc/dgnc_sysfs.c22
-rw-r--r--drivers/staging/dgnc/dgnc_tty.c279
-rw-r--r--drivers/staging/dgnc/digi.h4
-rw-r--r--drivers/staging/emxx_udc/emxx_udc.c24
-rw-r--r--drivers/staging/emxx_udc/emxx_udc.h40
-rw-r--r--drivers/staging/fbtft/fb_agm1264k-fl.c2
-rw-r--r--drivers/staging/fbtft/fbtft-io.c8
-rw-r--r--drivers/staging/fbtft/fbtft_device.c6
-rw-r--r--drivers/staging/fsl-mc/README.txt138
-rw-r--r--drivers/staging/fsl-mc/TODO13
-rw-r--r--drivers/staging/fsl-mc/bus/dpbp.c77
-rw-r--r--drivers/staging/fsl-mc/bus/dpmcp-cmd.h7
-rw-r--r--drivers/staging/fsl-mc/bus/dpmcp.c35
-rw-r--r--drivers/staging/fsl-mc/bus/dpmcp.h10
-rw-r--r--drivers/staging/fsl-mc/bus/dprc-cmd.h6
-rw-r--r--drivers/staging/fsl-mc/bus/dprc-driver.c33
-rw-r--r--drivers/staging/fsl-mc/bus/dprc.c26
-rw-r--r--drivers/staging/fsl-mc/bus/mc-allocator.c79
-rw-r--r--drivers/staging/fsl-mc/bus/mc-bus.c90
-rw-r--r--drivers/staging/fsl-mc/bus/mc-msi.c14
-rw-r--r--drivers/staging/fsl-mc/include/dpbp-cmd.h4
-rw-r--r--drivers/staging/fsl-mc/include/dpbp.h51
-rw-r--r--drivers/staging/fsl-mc/include/dprc.h19
-rw-r--r--drivers/staging/fsl-mc/include/mc-private.h2
-rw-r--r--drivers/staging/fwserial/dma_fifo.c8
-rw-r--r--drivers/staging/fwserial/dma_fifo.h16
-rw-r--r--drivers/staging/fwserial/fwserial.c44
-rw-r--r--drivers/staging/fwserial/fwserial.h42
-rw-r--r--drivers/staging/gdm724x/gdm_mux.c5
-rw-r--r--drivers/staging/gdm724x/gdm_usb.c6
-rw-r--r--drivers/staging/gdm724x/hci_packet.h2
-rw-r--r--drivers/staging/gdm724x/netlink_k.c3
-rw-r--r--drivers/staging/gs_fpgaboot/gs_fpgaboot.c8
-rw-r--r--drivers/staging/gs_fpgaboot/gs_fpgaboot.h2
-rw-r--r--drivers/staging/gs_fpgaboot/io.c1
-rw-r--r--drivers/staging/i4l/act2000/act2000_isa.c24
-rw-r--r--drivers/staging/i4l/pcbit/capi.h2
-rw-r--r--drivers/staging/i4l/pcbit/drv.c8
-rw-r--r--drivers/staging/i4l/pcbit/edss1.c2
-rw-r--r--drivers/staging/i4l/pcbit/layer2.h2
-rw-r--r--drivers/staging/iio/accel/Kconfig23
-rw-r--r--drivers/staging/iio/accel/Makefile6
-rw-r--r--drivers/staging/iio/accel/adis16201.h156
-rw-r--r--drivers/staging/iio/accel/adis16201_core.c1
-rw-r--r--drivers/staging/iio/accel/adis16203.h132
-rw-r--r--drivers/staging/iio/accel/adis16203_core.c1
-rw-r--r--drivers/staging/iio/accel/adis16204.h68
-rw-r--r--drivers/staging/iio/accel/adis16204_core.c253
-rw-r--r--drivers/staging/iio/accel/adis16209.h39
-rw-r--r--drivers/staging/iio/accel/adis16209_core.c1
-rw-r--r--drivers/staging/iio/accel/adis16220.h140
-rw-r--r--drivers/staging/iio/accel/adis16220_core.c494
-rw-r--r--drivers/staging/iio/accel/adis16240.h50
-rw-r--r--drivers/staging/iio/accel/adis16240_core.c5
-rw-r--r--drivers/staging/iio/adc/ad7192.c50
-rw-r--r--drivers/staging/iio/adc/ad7280a.c40
-rw-r--r--drivers/staging/iio/adc/ad7280a.h8
-rw-r--r--drivers/staging/iio/adc/ad7606.h28
-rw-r--r--drivers/staging/iio/adc/ad7606_core.c18
-rw-r--r--drivers/staging/iio/adc/ad7606_spi.c5
-rw-r--r--drivers/staging/iio/adc/ad7780.c2
-rw-r--r--drivers/staging/iio/frequency/ad9832.c2
-rw-r--r--drivers/staging/iio/impedance-analyzer/ad5933.c45
-rw-r--r--drivers/staging/iio/impedance-analyzer/ad5933.h28
-rw-r--r--drivers/staging/iio/light/isl29028.c55
-rw-r--r--drivers/staging/iio/light/tsl2x7x_core.c211
-rw-r--r--drivers/staging/iio/meter/ade7753.c4
-rw-r--r--drivers/staging/iio/meter/ade7754.c4
-rw-r--r--drivers/staging/iio/meter/ade7758.h16
-rw-r--r--drivers/staging/iio/meter/ade7758_core.c77
-rw-r--r--drivers/staging/iio/meter/ade7758_ring.c4
-rw-r--r--drivers/staging/iio/meter/ade7759.c4
-rw-r--r--drivers/staging/iio/meter/ade7854.c3
-rw-r--r--drivers/staging/iio/resolver/ad2s1210.h8
-rw-r--r--drivers/staging/iio/trigger/iio-trig-bfin-timer.c15
-rw-r--r--drivers/staging/lustre/include/linux/libcfs/libcfs.h51
-rw-r--r--drivers/staging/lustre/include/linux/libcfs/libcfs_cpu.h79
-rw-r--r--drivers/staging/lustre/include/linux/libcfs/libcfs_crypto.h136
-rw-r--r--drivers/staging/lustre/include/linux/libcfs/libcfs_debug.h18
-rw-r--r--drivers/staging/lustre/include/linux/libcfs/libcfs_fail.h15
-rw-r--r--drivers/staging/lustre/include/linux/libcfs/libcfs_hash.h4
-rw-r--r--drivers/staging/lustre/include/linux/libcfs/libcfs_ioctl.h161
-rw-r--r--drivers/staging/lustre/include/linux/libcfs/libcfs_prim.h31
-rw-r--r--drivers/staging/lustre/include/linux/libcfs/libcfs_private.h75
-rw-r--r--drivers/staging/lustre/include/linux/libcfs/libcfs_workitem.h12
-rw-r--r--drivers/staging/lustre/include/linux/libcfs/linux/libcfs.h2
-rw-r--r--drivers/staging/lustre/include/linux/libcfs/linux/linux-cpu.h2
-rw-r--r--drivers/staging/lustre/include/linux/libcfs/linux/linux-mem.h80
-rw-r--r--drivers/staging/lustre/include/linux/libcfs/linux/linux-time.h4
-rw-r--r--drivers/staging/lustre/include/linux/lnet/lib-dlc.h29
-rw-r--r--drivers/staging/lustre/include/linux/lnet/lib-lnet.h9
-rw-r--r--drivers/staging/lustre/include/linux/lnet/lib-types.h2
-rw-r--r--drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.c405
-rw-r--r--drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd.h134
-rw-r--r--drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_cb.c98
-rw-r--r--drivers/staging/lustre/lnet/klnds/o2iblnd/o2iblnd_modparams.c139
-rw-r--r--drivers/staging/lustre/lnet/klnds/socklnd/socklnd.c1
-rw-r--r--drivers/staging/lustre/lnet/klnds/socklnd/socklnd_lib.c3
-rw-r--r--drivers/staging/lustre/lnet/libcfs/debug.c126
-rw-r--r--drivers/staging/lustre/lnet/libcfs/fail.c3
-rw-r--r--drivers/staging/lustre/lnet/libcfs/hash.c6
-rw-r--r--drivers/staging/lustre/lnet/libcfs/libcfs_lock.c54
-rw-r--r--drivers/staging/lustre/lnet/libcfs/libcfs_mem.c28
-rw-r--r--drivers/staging/lustre/lnet/libcfs/linux/linux-cpu.c9
-rw-r--r--drivers/staging/lustre/lnet/libcfs/linux/linux-crypto.c283
-rw-r--r--drivers/staging/lustre/lnet/libcfs/linux/linux-module.c154
-rw-r--r--drivers/staging/lustre/lnet/libcfs/linux/linux-prim.c31
-rw-r--r--drivers/staging/lustre/lnet/libcfs/module.c132
-rw-r--r--drivers/staging/lustre/lnet/libcfs/tracefile.c17
-rw-r--r--drivers/staging/lustre/lnet/libcfs/workitem.c12
-rw-r--r--drivers/staging/lustre/lnet/lnet/api-ni.c143
-rw-r--r--drivers/staging/lustre/lnet/lnet/config.c3
-rw-r--r--drivers/staging/lustre/lnet/lnet/lib-move.c10
-rw-r--r--drivers/staging/lustre/lnet/lnet/module.c7
-rw-r--r--drivers/staging/lustre/lnet/selftest/brw_test.c82
-rw-r--r--drivers/staging/lustre/lnet/selftest/conctl.c52
-rw-r--r--drivers/staging/lustre/lnet/selftest/conrpc.c215
-rw-r--r--drivers/staging/lustre/lnet/selftest/conrpc.h40
-rw-r--r--drivers/staging/lustre/lnet/selftest/console.c282
-rw-r--r--drivers/staging/lustre/lnet/selftest/console.h47
-rw-r--r--drivers/staging/lustre/lnet/selftest/framework.c270
-rw-r--r--drivers/staging/lustre/lnet/selftest/ping_test.c44
-rw-r--r--drivers/staging/lustre/lnet/selftest/rpc.c133
-rw-r--r--drivers/staging/lustre/lnet/selftest/rpc.h156
-rw-r--r--drivers/staging/lustre/lnet/selftest/selftest.h204
-rw-r--r--drivers/staging/lustre/lnet/selftest/timer.c12
-rw-r--r--drivers/staging/lustre/lustre/fid/fid_request.c12
-rw-r--r--drivers/staging/lustre/lustre/fld/fld_cache.c3
-rw-r--r--drivers/staging/lustre/lustre/fld/fld_internal.h9
-rw-r--r--drivers/staging/lustre/lustre/fld/fld_request.c94
-rw-r--r--drivers/staging/lustre/lustre/include/cl_object.h978
-rw-r--r--drivers/staging/lustre/lustre/include/lclient.h408
-rw-r--r--drivers/staging/lustre/lustre/include/linux/obd.h125
-rw-r--r--drivers/staging/lustre/lustre/include/lu_object.h75
-rw-r--r--drivers/staging/lustre/lustre/include/lustre/lustre_idl.h112
-rw-r--r--drivers/staging/lustre/lustre/include/lustre/lustre_user.h54
-rw-r--r--drivers/staging/lustre/lustre/include/lustre_cfg.h2
-rw-r--r--drivers/staging/lustre/lustre/include/lustre_disk.h2
-rw-r--r--drivers/staging/lustre/lustre/include/lustre_dlm.h14
-rw-r--r--drivers/staging/lustre/lustre/include/lustre_dlm_flags.h120
-rw-r--r--drivers/staging/lustre/lustre/include/lustre_fid.h22
-rw-r--r--drivers/staging/lustre/lustre/include/lustre_import.h2
-rw-r--r--drivers/staging/lustre/lustre/include/lustre_lib.h60
-rw-r--r--drivers/staging/lustre/lustre/include/lustre_mdc.h18
-rw-r--r--drivers/staging/lustre/lustre/include/lustre_net.h4
-rw-r--r--drivers/staging/lustre/lustre/include/lustre_param.h1
-rw-r--r--drivers/staging/lustre/lustre/include/lustre_req_layout.h3
-rw-r--r--drivers/staging/lustre/lustre/include/obd.h77
-rw-r--r--drivers/staging/lustre/lustre/include/obd_cksum.h1
-rw-r--r--drivers/staging/lustre/lustre/include/obd_class.h5
-rw-r--r--drivers/staging/lustre/lustre/include/obd_support.h4
-rw-r--r--drivers/staging/lustre/lustre/lclient/lcommon_cl.c1203
-rw-r--r--drivers/staging/lustre/lustre/ldlm/l_lock.c4
-rw-r--r--drivers/staging/lustre/lustre/ldlm/ldlm_extent.c4
-rw-r--r--drivers/staging/lustre/lustre/ldlm/ldlm_flock.c30
-rw-r--r--drivers/staging/lustre/lustre/ldlm/ldlm_internal.h19
-rw-r--r--drivers/staging/lustre/lustre/ldlm/ldlm_lib.c14
-rw-r--r--drivers/staging/lustre/lustre/ldlm/ldlm_lock.c115
-rw-r--r--drivers/staging/lustre/lustre/ldlm/ldlm_lockd.c28
-rw-r--r--drivers/staging/lustre/lustre/ldlm/ldlm_request.c163
-rw-r--r--drivers/staging/lustre/lustre/ldlm/ldlm_resource.c19
-rw-r--r--drivers/staging/lustre/lustre/llite/Makefile5
-rw-r--r--drivers/staging/lustre/lustre/llite/dcache.c15
-rw-r--r--drivers/staging/lustre/lustre/llite/dir.c99
-rw-r--r--drivers/staging/lustre/lustre/llite/file.c277
-rw-r--r--drivers/staging/lustre/lustre/llite/glimpse.c (renamed from drivers/staging/lustre/lustre/lclient/glimpse.c)87
-rw-r--r--drivers/staging/lustre/lustre/llite/lcommon_cl.c327
-rw-r--r--drivers/staging/lustre/lustre/llite/lcommon_misc.c (renamed from drivers/staging/lustre/lustre/lclient/lcommon_misc.c)45
-rw-r--r--drivers/staging/lustre/lustre/llite/llite_close.c71
-rw-r--r--drivers/staging/lustre/lustre/llite/llite_internal.h274
-rw-r--r--drivers/staging/lustre/lustre/llite/llite_lib.c176
-rw-r--r--drivers/staging/lustre/lustre/llite/llite_mmap.c48
-rw-r--r--drivers/staging/lustre/lustre/llite/llite_nfs.c29
-rw-r--r--drivers/staging/lustre/lustre/llite/lloop.c3
-rw-r--r--drivers/staging/lustre/lustre/llite/lproc_llite.c33
-rw-r--r--drivers/staging/lustre/lustre/llite/namei.c143
-rw-r--r--drivers/staging/lustre/lustre/llite/rw.c367
-rw-r--r--drivers/staging/lustre/lustre/llite/rw26.c318
-rw-r--r--drivers/staging/lustre/lustre/llite/statahead.c17
-rw-r--r--drivers/staging/lustre/lustre/llite/super25.c14
-rw-r--r--drivers/staging/lustre/lustre/llite/symlink.c10
-rw-r--r--drivers/staging/lustre/lustre/llite/vvp_dev.c270
-rw-r--r--drivers/staging/lustre/lustre/llite/vvp_internal.h332
-rw-r--r--drivers/staging/lustre/lustre/llite/vvp_io.c928
-rw-r--r--drivers/staging/lustre/lustre/llite/vvp_lock.c53
-rw-r--r--drivers/staging/lustre/lustre/llite/vvp_object.c141
-rw-r--r--drivers/staging/lustre/lustre/llite/vvp_page.c211
-rw-r--r--drivers/staging/lustre/lustre/llite/vvp_req.c121
-rw-r--r--drivers/staging/lustre/lustre/llite/xattr.c45
-rw-r--r--drivers/staging/lustre/lustre/llite/xattr_cache.c1
-rw-r--r--drivers/staging/lustre/lustre/lmv/lmv_internal.h3
-rw-r--r--drivers/staging/lustre/lustre/lmv/lmv_obd.c182
-rw-r--r--drivers/staging/lustre/lustre/lov/lov_cl_internal.h105
-rw-r--r--drivers/staging/lustre/lustre/lov/lov_dev.c15
-rw-r--r--drivers/staging/lustre/lustre/lov/lov_ea.c5
-rw-r--r--drivers/staging/lustre/lustre/lov/lov_internal.h34
-rw-r--r--drivers/staging/lustre/lustre/lov/lov_io.c246
-rw-r--r--drivers/staging/lustre/lustre/lov/lov_lock.c996
-rw-r--r--drivers/staging/lustre/lustre/lov/lov_merge.c11
-rw-r--r--drivers/staging/lustre/lustre/lov/lov_obd.c26
-rw-r--r--drivers/staging/lustre/lustre/lov/lov_object.c54
-rw-r--r--drivers/staging/lustre/lustre/lov/lov_offset.c12
-rw-r--r--drivers/staging/lustre/lustre/lov/lov_pack.c8
-rw-r--r--drivers/staging/lustre/lustre/lov/lov_page.c183
-rw-r--r--drivers/staging/lustre/lustre/lov/lov_pool.c62
-rw-r--r--drivers/staging/lustre/lustre/lov/lov_request.c11
-rw-r--r--drivers/staging/lustre/lustre/lov/lovsub_dev.c9
-rw-r--r--drivers/staging/lustre/lustre/lov/lovsub_lock.c386
-rw-r--r--drivers/staging/lustre/lustre/lov/lovsub_object.c7
-rw-r--r--drivers/staging/lustre/lustre/lov/lovsub_page.c4
-rw-r--r--drivers/staging/lustre/lustre/mdc/lproc_mdc.c8
-rw-r--r--drivers/staging/lustre/lustre/mdc/mdc_lib.c24
-rw-r--r--drivers/staging/lustre/lustre/mdc/mdc_locks.c5
-rw-r--r--drivers/staging/lustre/lustre/mdc/mdc_request.c26
-rw-r--r--drivers/staging/lustre/lustre/mgc/mgc_request.c12
-rw-r--r--drivers/staging/lustre/lustre/obdclass/cl_io.c430
-rw-r--r--drivers/staging/lustre/lustre/obdclass/cl_lock.c2086
-rw-r--r--drivers/staging/lustre/lustre/obdclass/cl_object.c303
-rw-r--r--drivers/staging/lustre/lustre/obdclass/cl_page.c659
-rw-r--r--drivers/staging/lustre/lustre/obdclass/class_obd.c5
-rw-r--r--drivers/staging/lustre/lustre/obdclass/debug.c4
-rw-r--r--drivers/staging/lustre/lustre/obdclass/genops.c1
-rw-r--r--drivers/staging/lustre/lustre/obdclass/linux/linux-module.c4
-rw-r--r--drivers/staging/lustre/lustre/obdclass/llog.c1
-rw-r--r--drivers/staging/lustre/lustre/obdclass/lprocfs_status.c72
-rw-r--r--drivers/staging/lustre/lustre/obdclass/lu_object.c9
-rw-r--r--drivers/staging/lustre/lustre/obdclass/lustre_peer.c3
-rw-r--r--drivers/staging/lustre/lustre/obdclass/obd_config.c26
-rw-r--r--drivers/staging/lustre/lustre/obdclass/obd_mount.c15
-rw-r--r--drivers/staging/lustre/lustre/obdclass/obdo.c3
-rw-r--r--drivers/staging/lustre/lustre/obdecho/echo_client.c173
-rw-r--r--drivers/staging/lustre/lustre/osc/lproc_osc.c68
-rw-r--r--drivers/staging/lustre/lustre/osc/osc_cache.c531
-rw-r--r--drivers/staging/lustre/lustre/osc/osc_cl_internal.h159
-rw-r--r--drivers/staging/lustre/lustre/osc/osc_internal.h27
-rw-r--r--drivers/staging/lustre/lustre/osc/osc_io.c283
-rw-r--r--drivers/staging/lustre/lustre/osc/osc_lock.c1698
-rw-r--r--drivers/staging/lustre/lustre/osc/osc_object.c38
-rw-r--r--drivers/staging/lustre/lustre/osc/osc_page.c544
-rw-r--r--drivers/staging/lustre/lustre/osc/osc_request.c423
-rw-r--r--drivers/staging/lustre/lustre/ptlrpc/client.c11
-rw-r--r--drivers/staging/lustre/lustre/ptlrpc/events.c1
-rw-r--r--drivers/staging/lustre/lustre/ptlrpc/import.c12
-rw-r--r--drivers/staging/lustre/lustre/ptlrpc/layout.c31
-rw-r--r--drivers/staging/lustre/lustre/ptlrpc/lproc_ptlrpc.c11
-rw-r--r--drivers/staging/lustre/lustre/ptlrpc/nrs.c7
-rw-r--r--drivers/staging/lustre/lustre/ptlrpc/pack_generic.c3
-rw-r--r--drivers/staging/lustre/lustre/ptlrpc/ptlrpcd.c21
-rw-r--r--drivers/staging/lustre/lustre/ptlrpc/sec_bulk.c14
-rw-r--r--drivers/staging/lustre/lustre/ptlrpc/sec_plain.c2
-rw-r--r--drivers/staging/lustre/lustre/ptlrpc/service.c52
-rw-r--r--drivers/staging/lustre/lustre/ptlrpc/wiretest.c12
-rw-r--r--drivers/staging/media/Kconfig2
-rw-r--r--drivers/staging/media/Makefile1
-rw-r--r--drivers/staging/media/bcm2048/radio-bcm2048.c2
-rw-r--r--drivers/staging/media/davinci_vpfe/vpfe_video.c2
-rw-r--r--drivers/staging/media/omap1/omap1_camera.c68
-rw-r--r--drivers/staging/media/omap4iss/iss.c2
-rw-r--r--drivers/staging/media/omap4iss/iss_video.c2
-rw-r--r--drivers/staging/media/tw686x-kh/Kconfig17
-rw-r--r--drivers/staging/media/tw686x-kh/Makefile3
-rw-r--r--drivers/staging/media/tw686x-kh/TODO6
-rw-r--r--drivers/staging/media/tw686x-kh/tw686x-kh-core.c140
-rw-r--r--drivers/staging/media/tw686x-kh/tw686x-kh-regs.h103
-rw-r--r--drivers/staging/media/tw686x-kh/tw686x-kh-video.c821
-rw-r--r--drivers/staging/media/tw686x-kh/tw686x-kh.h118
-rw-r--r--drivers/staging/most/hdm-dim2/dim2_errors.h8
-rw-r--r--drivers/staging/most/hdm-dim2/dim2_hal.h14
-rw-r--r--drivers/staging/most/hdm-dim2/dim2_reg.h8
-rw-r--r--drivers/staging/mt29f_spinand/mt29f_spinand.c49
-rw-r--r--drivers/staging/netlogic/xlr_net.c2
-rw-r--r--drivers/staging/nvec/nvec.c11
-rw-r--r--drivers/staging/nvec/nvec_power.c4
-rw-r--r--drivers/staging/octeon/ethernet-rx.c7
-rw-r--r--drivers/staging/octeon/ethernet-rx.h2
-rw-r--r--drivers/staging/octeon/ethernet-tx.c15
-rw-r--r--drivers/staging/octeon/ethernet.c4
-rw-r--r--drivers/staging/rdma/Kconfig27
-rw-r--r--drivers/staging/rdma/Makefile2
-rw-r--r--drivers/staging/rdma/hfi1/TODO6
-rw-r--r--drivers/staging/rdma/hfi1/diag.c1924
-rw-r--r--drivers/staging/rdma/hfi1/eprom.c471
-rw-r--r--drivers/staging/rtl8188eu/core/rtw_ap.c5
-rw-r--r--drivers/staging/rtl8188eu/core/rtw_cmd.c49
-rw-r--r--drivers/staging/rtl8188eu/core/rtw_debug.c5
-rw-r--r--drivers/staging/rtl8188eu/core/rtw_efuse.c7
-rw-r--r--drivers/staging/rtl8188eu/core/rtw_ieee80211.c5
-rw-r--r--drivers/staging/rtl8188eu/core/rtw_ioctl_set.c5
-rw-r--r--drivers/staging/rtl8188eu/core/rtw_mlme.c13
-rw-r--r--drivers/staging/rtl8188eu/core/rtw_mlme_ext.c49
-rw-r--r--drivers/staging/rtl8188eu/core/rtw_pwrctrl.c5
-rw-r--r--drivers/staging/rtl8188eu/core/rtw_recv.c5
-rw-r--r--drivers/staging/rtl8188eu/core/rtw_rf.c5
-rw-r--r--drivers/staging/rtl8188eu/core/rtw_security.c5
-rw-r--r--drivers/staging/rtl8188eu/core/rtw_sreset.c5
-rw-r--r--drivers/staging/rtl8188eu/core/rtw_sta_mgt.c5
-rw-r--r--drivers/staging/rtl8188eu/core/rtw_wlan_util.c5
-rw-r--r--drivers/staging/rtl8188eu/core/rtw_xmit.c5
-rw-r--r--drivers/staging/rtl8188eu/hal/Hal8188ERateAdaptive.c2
-rw-r--r--drivers/staging/rtl8188eu/hal/bb_cfg.c5
-rw-r--r--drivers/staging/rtl8188eu/hal/fw.c4
-rw-r--r--drivers/staging/rtl8188eu/hal/hal_com.c5
-rw-r--r--drivers/staging/rtl8188eu/hal/hal_intf.c7
-rw-r--r--drivers/staging/rtl8188eu/hal/mac_cfg.c5
-rw-r--r--drivers/staging/rtl8188eu/hal/odm.c5
-rw-r--r--drivers/staging/rtl8188eu/hal/odm_HWConfig.c5
-rw-r--r--drivers/staging/rtl8188eu/hal/odm_RTL8188E.c5
-rw-r--r--drivers/staging/rtl8188eu/hal/phy.c5
-rw-r--r--drivers/staging/rtl8188eu/hal/pwrseq.c5
-rw-r--r--drivers/staging/rtl8188eu/hal/pwrseqcmd.c4
-rw-r--r--drivers/staging/rtl8188eu/hal/rf.c4
-rw-r--r--drivers/staging/rtl8188eu/hal/rf_cfg.c5
-rw-r--r--drivers/staging/rtl8188eu/hal/rtl8188e_cmd.c5
-rw-r--r--drivers/staging/rtl8188eu/hal/rtl8188e_dm.c5
-rw-r--r--drivers/staging/rtl8188eu/hal/rtl8188e_hal_init.c5
-rw-r--r--drivers/staging/rtl8188eu/hal/rtl8188e_rxdesc.c9
-rw-r--r--drivers/staging/rtl8188eu/hal/rtl8188e_xmit.c5
-rw-r--r--drivers/staging/rtl8188eu/hal/rtl8188eu_led.c5
-rw-r--r--drivers/staging/rtl8188eu/hal/rtl8188eu_recv.c5
-rw-r--r--drivers/staging/rtl8188eu/hal/rtl8188eu_xmit.c5
-rw-r--r--drivers/staging/rtl8188eu/hal/usb_halinit.c76
-rw-r--r--drivers/staging/rtl8188eu/include/Hal8188EPhyCfg.h5
-rw-r--r--drivers/staging/rtl8188eu/include/Hal8188EPhyReg.h5
-rw-r--r--drivers/staging/rtl8188eu/include/HalHWImg8188E_FW.h5
-rw-r--r--drivers/staging/rtl8188eu/include/HalVerDef.h5
-rw-r--r--drivers/staging/rtl8188eu/include/basic_types.h5
-rw-r--r--drivers/staging/rtl8188eu/include/drv_types.h5
-rw-r--r--drivers/staging/rtl8188eu/include/fw.h4
-rw-r--r--drivers/staging/rtl8188eu/include/hal_com.h5
-rw-r--r--drivers/staging/rtl8188eu/include/hal_intf.h5
-rw-r--r--drivers/staging/rtl8188eu/include/ieee80211.h5
-rw-r--r--drivers/staging/rtl8188eu/include/mlme_osdep.h5
-rw-r--r--drivers/staging/rtl8188eu/include/mp_custom_oid.h5
-rw-r--r--drivers/staging/rtl8188eu/include/odm.h5
-rw-r--r--drivers/staging/rtl8188eu/include/odm_HWConfig.h4
-rw-r--r--drivers/staging/rtl8188eu/include/odm_RTL8188E.h5
-rw-r--r--drivers/staging/rtl8188eu/include/odm_RegDefine11N.h5
-rw-r--r--drivers/staging/rtl8188eu/include/odm_debug.h5
-rw-r--r--drivers/staging/rtl8188eu/include/odm_precomp.h5
-rw-r--r--drivers/staging/rtl8188eu/include/odm_reg.h5
-rw-r--r--drivers/staging/rtl8188eu/include/odm_types.h5
-rw-r--r--drivers/staging/rtl8188eu/include/osdep_intf.h5
-rw-r--r--drivers/staging/rtl8188eu/include/osdep_service.h5
-rw-r--r--drivers/staging/rtl8188eu/include/pwrseq.h5
-rw-r--r--drivers/staging/rtl8188eu/include/pwrseqcmd.h5
-rw-r--r--drivers/staging/rtl8188eu/include/recv_osdep.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtl8188e_cmd.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtl8188e_dm.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtl8188e_hal.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtl8188e_led.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtl8188e_recv.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtl8188e_spec.h4
-rw-r--r--drivers/staging/rtl8188eu/include/rtl8188e_xmit.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_android.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_ap.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_cmd.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_debug.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_eeprom.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_efuse.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_event.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_ht.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_ioctl.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_ioctl_rtl.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_ioctl_set.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_iol.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_mlme.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_mlme_ext.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_mp_phy_regdef.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_pwrctrl.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_qos.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_recv.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_rf.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_security.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_sreset.h5
-rw-r--r--drivers/staging/rtl8188eu/include/rtw_xmit.h5
-rw-r--r--drivers/staging/rtl8188eu/include/sta_info.h5
-rw-r--r--drivers/staging/rtl8188eu/include/usb_hal.h5
-rw-r--r--drivers/staging/rtl8188eu/include/usb_ops_linux.h5
-rw-r--r--drivers/staging/rtl8188eu/include/wifi.h5
-rw-r--r--drivers/staging/rtl8188eu/include/wlan_bssdef.h5
-rw-r--r--drivers/staging/rtl8188eu/include/xmit_osdep.h5
-rw-r--r--drivers/staging/rtl8188eu/os_dep/ioctl_linux.c13
-rw-r--r--drivers/staging/rtl8188eu/os_dep/mlme_linux.c5
-rw-r--r--drivers/staging/rtl8188eu/os_dep/mon.c2
-rw-r--r--drivers/staging/rtl8188eu/os_dep/os_intfs.c5
-rw-r--r--drivers/staging/rtl8188eu/os_dep/osdep_service.c5
-rw-r--r--drivers/staging/rtl8188eu/os_dep/recv_linux.c5
-rw-r--r--drivers/staging/rtl8188eu/os_dep/rtw_android.c5
-rw-r--r--drivers/staging/rtl8188eu/os_dep/usb_intf.c7
-rw-r--r--drivers/staging/rtl8188eu/os_dep/usb_ops_linux.c4
-rw-r--r--drivers/staging/rtl8188eu/os_dep/xmit_linux.c5
-rw-r--r--drivers/staging/rtl8192e/rtl8192e/rtl_core.c2
-rw-r--r--drivers/staging/rtl8192e/rtllib_softmac.c2
-rw-r--r--drivers/staging/rtl8192u/ieee80211/ieee80211_rx.c2
-rw-r--r--drivers/staging/rtl8192u/ieee80211/ieee80211_softmac.c8
-rw-r--r--drivers/staging/rtl8192u/ieee80211/rtl819x_TSProc.c2
-rw-r--r--drivers/staging/rtl8192u/r8190_rtl8256.c2
-rw-r--r--drivers/staging/rtl8192u/r8192U_core.c77
-rw-r--r--drivers/staging/rtl8192u/r8192U_wx.c22
-rw-r--r--drivers/staging/rtl8712/basic_types.h4
-rw-r--r--drivers/staging/rtl8712/drv_types.h4
-rw-r--r--drivers/staging/rtl8712/ethernet.h4
-rw-r--r--drivers/staging/rtl8712/hal_init.c25
-rw-r--r--drivers/staging/rtl8712/ieee80211.c4
-rw-r--r--drivers/staging/rtl8712/mlme_linux.c2
-rw-r--r--drivers/staging/rtl8712/os_intfs.c4
-rw-r--r--drivers/staging/rtl8712/osdep_service.h3
-rw-r--r--drivers/staging/rtl8712/rtl8712_cmd.c18
-rw-r--r--drivers/staging/rtl8712/rtl8712_recv.c10
-rw-r--r--drivers/staging/rtl8712/rtl8712_xmit.c8
-rw-r--r--drivers/staging/rtl8712/rtl871x_cmd.c80
-rw-r--r--drivers/staging/rtl8712/rtl871x_ioctl_linux.c16
-rw-r--r--drivers/staging/rtl8712/rtl871x_ioctl_set.c6
-rw-r--r--drivers/staging/rtl8712/rtl871x_mlme.c16
-rw-r--r--drivers/staging/rtl8712/rtl871x_recv.c2
-rw-r--r--drivers/staging/rtl8712/rtl871x_sta_mgt.c6
-rw-r--r--drivers/staging/rtl8712/rtl871x_xmit.c2
-rw-r--r--drivers/staging/rtl8712/usb_ops_linux.c2
-rw-r--r--drivers/staging/rtl8723au/Kconfig7
-rw-r--r--drivers/staging/rtl8723au/core/rtw_ap.c3
-rw-r--r--drivers/staging/rtl8723au/core/rtw_mlme_ext.c4
-rw-r--r--drivers/staging/rtl8723au/core/rtw_recv.c25
-rw-r--r--drivers/staging/rtl8723au/core/rtw_wlan_util.c10
-rw-r--r--drivers/staging/rtl8723au/hal/rtl8723a_hal_init.c2
-rw-r--r--drivers/staging/rtl8723au/hal/rtl8723a_rf6052.c2
-rw-r--r--drivers/staging/rtl8723au/include/ieee80211.h2
-rw-r--r--drivers/staging/rtl8723au/include/rtw_mlme_ext.h2
-rw-r--r--drivers/staging/rtl8723au/include/rtw_recv.h2
-rw-r--r--drivers/staging/rtl8723au/os_dep/ioctl_cfg80211.c54
-rw-r--r--drivers/staging/rtl8723au/os_dep/usb_intf.c5
-rw-r--r--drivers/staging/rts5208/ms.c16
-rw-r--r--drivers/staging/rts5208/rtsx_card.c21
-rw-r--r--drivers/staging/rts5208/rtsx_card.h2
-rw-r--r--drivers/staging/rts5208/rtsx_chip.c35
-rw-r--r--drivers/staging/rts5208/rtsx_chip.h3
-rw-r--r--drivers/staging/rts5208/sd.c16
-rw-r--r--drivers/staging/skein/skein_api.c3
-rw-r--r--drivers/staging/skein/skein_base.c90
-rw-r--r--drivers/staging/skein/skein_base.h45
-rw-r--r--drivers/staging/skein/skein_block.c92
-rw-r--r--drivers/staging/skein/skein_generic.c6
-rw-r--r--drivers/staging/skein/threefish_api.h2
-rw-r--r--drivers/staging/skein/threefish_block.c2144
-rw-r--r--drivers/staging/slicoss/slicoss.c8
-rw-r--r--drivers/staging/sm750fb/ddk750_chip.c2
-rw-r--r--drivers/staging/speakup/main.c6
-rw-r--r--drivers/staging/speakup/selection.c2
-rw-r--r--drivers/staging/speakup/serialio.h3
-rw-r--r--drivers/staging/unisys/Documentation/ABI/sysfs-platform-visorchipset14
-rw-r--r--drivers/staging/unisys/Documentation/overview.txt19
-rw-r--r--drivers/staging/unisys/Documentation/proc-entries.txt93
-rw-r--r--drivers/staging/unisys/MAINTAINERS1
-rw-r--r--drivers/staging/unisys/include/channel.h10
-rw-r--r--drivers/staging/unisys/include/iochannel.h42
-rw-r--r--drivers/staging/unisys/include/visorbus.h127
-rw-r--r--drivers/staging/unisys/visorbus/visorbus_main.c394
-rw-r--r--drivers/staging/unisys/visorbus/visorchannel.c5
-rw-r--r--drivers/staging/unisys/visorbus/visorchipset.c444
-rw-r--r--drivers/staging/unisys/visorhba/visorhba_main.c114
-rw-r--r--drivers/staging/unisys/visorinput/visorinput.c24
-rw-r--r--drivers/staging/unisys/visornic/visornic_main.c223
-rw-r--r--drivers/staging/vme/devices/vme_pio2_gpio.c22
-rw-r--r--drivers/staging/vt6655/baseband.c24
-rw-r--r--drivers/staging/vt6655/baseband.h6
-rw-r--r--drivers/staging/vt6655/card.c95
-rw-r--r--drivers/staging/vt6655/card.h9
-rw-r--r--drivers/staging/vt6655/channel.c4
-rw-r--r--drivers/staging/vt6655/desc.h3
-rw-r--r--drivers/staging/vt6655/device_main.c4
-rw-r--r--drivers/staging/vt6655/mac.c15
-rw-r--r--drivers/staging/vt6655/rxtx.c2
-rw-r--r--drivers/staging/vt6655/srom.c9
-rw-r--r--drivers/staging/vt6656/baseband.c26
-rw-r--r--drivers/staging/vt6656/channel.c4
-rw-r--r--drivers/staging/vt6656/int.c2
-rw-r--r--drivers/staging/vt6656/main_usb.c8
-rw-r--r--drivers/staging/vt6656/rxtx.c2
-rw-r--r--drivers/staging/vt6656/wcmd.c8
-rw-r--r--drivers/staging/wilc1000/Kconfig1
-rw-r--r--drivers/staging/wilc1000/host_interface.c438
-rw-r--r--drivers/staging/wilc1000/host_interface.h8
-rw-r--r--drivers/staging/wilc1000/linux_mon.c24
-rw-r--r--drivers/staging/wilc1000/linux_wlan.c98
-rw-r--r--drivers/staging/wilc1000/wilc_spi.c3
-rw-r--r--drivers/staging/wilc1000/wilc_wfi_cfgoperations.c81
-rw-r--r--drivers/staging/wilc1000/wilc_wfi_netdevice.h15
-rw-r--r--drivers/staging/wilc1000/wilc_wlan.c53
-rw-r--r--drivers/staging/wilc1000/wilc_wlan.h6
-rw-r--r--drivers/staging/wilc1000/wilc_wlan_cfg.c7
-rw-r--r--drivers/staging/wilc1000/wilc_wlan_if.h21
-rw-r--r--drivers/staging/wlan-ng/cfg80211.c10
-rw-r--r--drivers/staging/wlan-ng/hfa384x_usb.c8
-rw-r--r--drivers/staging/wlan-ng/p80211conv.c5
-rw-r--r--drivers/staging/wlan-ng/p80211netdev.c6
-rw-r--r--drivers/staging/wlan-ng/p80211netdev.h1
-rw-r--r--drivers/staging/wlan-ng/prism2fw.c28
-rw-r--r--drivers/staging/wlan-ng/prism2usb.c2
-rw-r--r--drivers/staging/xgifb/XGI_main_26.c5
-rw-r--r--drivers/staging/xgifb/vb_init.c16
-rw-r--r--drivers/staging/xgifb/vb_setmode.c22
-rw-r--r--drivers/staging/xgifb/vb_table.h135
-rw-r--r--drivers/staging/xgifb/vb_util.h8
-rw-r--r--drivers/target/iscsi/Kconfig2
-rw-r--r--drivers/target/iscsi/Makefile1
-rw-r--r--drivers/target/iscsi/cxgbit/Kconfig7
-rw-r--r--drivers/target/iscsi/cxgbit/Makefile6
-rw-r--r--drivers/target/iscsi/cxgbit/cxgbit.h353
-rw-r--r--drivers/target/iscsi/cxgbit/cxgbit_cm.c2086
-rw-r--r--drivers/target/iscsi/cxgbit/cxgbit_ddp.c325
-rw-r--r--drivers/target/iscsi/cxgbit/cxgbit_lro.h72
-rw-r--r--drivers/target/iscsi/cxgbit/cxgbit_main.c702
-rw-r--r--drivers/target/iscsi/cxgbit/cxgbit_target.c1561
-rw-r--r--drivers/target/iscsi/iscsi_target.c701
-rw-r--r--drivers/target/iscsi/iscsi_target_auth.c2
-rw-r--r--drivers/target/iscsi/iscsi_target_configfs.c158
-rw-r--r--drivers/target/iscsi/iscsi_target_datain_values.c1
-rw-r--r--drivers/target/iscsi/iscsi_target_erl0.c2
-rw-r--r--drivers/target/iscsi/iscsi_target_login.c17
-rw-r--r--drivers/target/iscsi/iscsi_target_nego.c19
-rw-r--r--drivers/target/iscsi/iscsi_target_parameters.c1
-rw-r--r--drivers/target/iscsi/iscsi_target_util.c10
-rw-r--r--drivers/target/loopback/tcm_loop.c12
-rw-r--r--drivers/target/sbp/sbp_target.c12
-rw-r--r--drivers/target/target_core_alua.c6
-rw-r--r--drivers/target/target_core_configfs.c70
-rw-r--r--drivers/target/target_core_iblock.c6
-rw-r--r--drivers/target/target_core_internal.h6
-rw-r--r--drivers/target/target_core_pr.c2
-rw-r--r--drivers/target/target_core_rd.c4
-rw-r--r--drivers/target/target_core_tpg.c83
-rw-r--r--drivers/target/target_core_transport.c58
-rw-r--r--drivers/target/target_core_xcopy.c2
-rw-r--r--drivers/target/tcm_fc/tcm_fc.h1
-rw-r--r--drivers/target/tcm_fc/tfc_conf.c1
-rw-r--r--drivers/target/tcm_fc/tfc_sess.c12
-rw-r--r--drivers/thermal/Kconfig59
-rw-r--r--drivers/thermal/Makefile4
-rw-r--r--drivers/thermal/gov_bang_bang.c8
-rw-r--r--drivers/thermal/hisi_thermal.c45
-rw-r--r--drivers/thermal/int340x_thermal/Kconfig42
-rw-r--r--drivers/thermal/int340x_thermal/Makefile1
-rw-r--r--drivers/thermal/int340x_thermal/int3406_thermal.c236
-rw-r--r--drivers/thermal/int340x_thermal/processor_thermal_device.c108
-rw-r--r--drivers/thermal/intel_powerclamp.c47
-rw-r--r--drivers/thermal/mtk_thermal.c12
-rw-r--r--drivers/thermal/of-thermal.c10
-rw-r--r--drivers/thermal/qcom-spmi-temp-alarm.c3
-rw-r--r--drivers/thermal/rcar_thermal.c2
-rw-r--r--drivers/thermal/rockchip_thermal.c280
-rw-r--r--drivers/thermal/tango_thermal.c109
-rw-r--r--drivers/thermal/tegra/Kconfig13
-rw-r--r--drivers/thermal/tegra/Makefile6
-rw-r--r--drivers/thermal/tegra/soctherm-fuse.c169
-rw-r--r--drivers/thermal/tegra/soctherm.c685
-rw-r--r--drivers/thermal/tegra/soctherm.h127
-rw-r--r--drivers/thermal/tegra/tegra124-soctherm.c196
-rw-r--r--drivers/thermal/tegra/tegra132-soctherm.c196
-rw-r--r--drivers/thermal/tegra/tegra210-soctherm.c197
-rw-r--r--drivers/thermal/tegra_soctherm.c476
-rw-r--r--drivers/thermal/thermal-generic-adc.c182
-rw-r--r--drivers/thermal/ti-soc-thermal/ti-thermal-common.c5
-rw-r--r--drivers/thermal/x86_pkg_temp_thermal.c2
-rw-r--r--drivers/thunderbolt/ctl.c2
-rw-r--r--drivers/thunderbolt/eeprom.c8
-rw-r--r--drivers/thunderbolt/nhi.c19
-rw-r--r--drivers/thunderbolt/switch.c20
-rw-r--r--drivers/thunderbolt/tb.c2
-rw-r--r--drivers/thunderbolt/tb.h2
-rw-r--r--drivers/thunderbolt/tb_regs.h2
-rw-r--r--drivers/tty/amiserial.c39
-rw-r--r--drivers/tty/cyclades.c38
-rw-r--r--drivers/tty/hvc/hvc_console.c4
-rw-r--r--drivers/tty/hvc/hvcs.c2
-rw-r--r--drivers/tty/hvc/hvsi.c2
-rw-r--r--drivers/tty/ipwireless/hardware.c5
-rw-r--r--drivers/tty/isicom.c19
-rw-r--r--drivers/tty/moxa.c12
-rw-r--r--drivers/tty/mxser.c35
-rw-r--r--drivers/tty/n_gsm.c14
-rw-r--r--drivers/tty/n_hdlc.c4
-rw-r--r--drivers/tty/n_tty.c70
-rw-r--r--drivers/tty/nozomi.c2
-rw-r--r--drivers/tty/pty.c6
-rw-r--r--drivers/tty/rocket.c16
-rw-r--r--drivers/tty/serial/8250/8250.h15
-rw-r--r--drivers/tty/serial/8250/8250_core.c3
-rw-r--r--drivers/tty/serial/8250/8250_dma.c68
-rw-r--r--drivers/tty/serial/8250/8250_dw.c8
-rw-r--r--drivers/tty/serial/8250/8250_fintek.c118
-rw-r--r--drivers/tty/serial/8250/8250_mid.c44
-rw-r--r--drivers/tty/serial/8250/8250_of.c2
-rw-r--r--drivers/tty/serial/8250/8250_omap.c93
-rw-r--r--drivers/tty/serial/8250/8250_pci.c23
-rw-r--r--drivers/tty/serial/8250/8250_port.c38
-rw-r--r--drivers/tty/serial/8250/8250_uniphier.c2
-rw-r--r--drivers/tty/serial/8250/Kconfig24
-rw-r--r--drivers/tty/serial/8250/Makefile2
-rw-r--r--drivers/tty/serial/Kconfig41
-rw-r--r--drivers/tty/serial/Makefile2
-rw-r--r--drivers/tty/serial/amba-pl011.c3
-rw-r--r--drivers/tty/serial/atmel_serial.c14
-rw-r--r--drivers/tty/serial/crisv10.c30
-rw-r--r--drivers/tty/serial/ifx6x60.c2
-rw-r--r--drivers/tty/serial/imx.c174
-rw-r--r--drivers/tty/serial/max310x.c12
-rw-r--r--drivers/tty/serial/meson_uart.c42
-rw-r--r--drivers/tty/serial/mps2-uart.c625
-rw-r--r--drivers/tty/serial/msm_serial.c101
-rw-r--r--drivers/tty/serial/mvebu-uart.c29
-rw-r--r--drivers/tty/serial/mxs-auart.c644
-rw-r--r--drivers/tty/serial/pic32_uart.c960
-rw-r--r--drivers/tty/serial/pic32_uart.h126
-rw-r--r--drivers/tty/serial/samsung.c4
-rw-r--r--drivers/tty/serial/sc16is7xx.c29
-rw-r--r--drivers/tty/serial/serial-tegra.c2
-rw-r--r--drivers/tty/serial/serial_core.c430
-rw-r--r--drivers/tty/serial/serial_mctrl_gpio.c7
-rw-r--r--drivers/tty/serial/serial_mctrl_gpio.h6
-rw-r--r--drivers/tty/serial/sirfsoc_uart.c39
-rw-r--r--drivers/tty/serial/sprd_serial.c2
-rw-r--r--drivers/tty/serial/uartlite.c12
-rw-r--r--drivers/tty/serial/ucc_uart.c3
-rw-r--r--drivers/tty/synclink.c78
-rw-r--r--drivers/tty/synclink_gt.c45
-rw-r--r--drivers/tty/synclinkmp.c45
-rw-r--r--drivers/tty/tty_buffer.c34
-rw-r--r--drivers/tty/tty_io.c9
-rw-r--r--drivers/tty/tty_ioctl.c4
-rw-r--r--drivers/tty/tty_port.c27
-rw-r--r--drivers/tty/vt/selection.c2
-rw-r--r--drivers/tty/vt/vt.c115
-rw-r--r--drivers/uio/uio.c16
-rw-r--r--drivers/usb/Kconfig3
-rw-r--r--drivers/usb/atm/ueagle-atm.c10
-rw-r--r--drivers/usb/chipidea/ci_hdrc_imx.c4
-rw-r--r--drivers/usb/class/cdc-acm.c4
-rw-r--r--drivers/usb/common/usb-otg-fsm.c10
-rw-r--r--drivers/usb/core/buffer.c3
-rw-r--r--drivers/usb/core/devio.c11
-rw-r--r--drivers/usb/core/driver.c40
-rw-r--r--drivers/usb/core/hcd.c16
-rw-r--r--drivers/usb/core/hub.c10
-rw-r--r--drivers/usb/core/message.c48
-rw-r--r--drivers/usb/core/usb.c1
-rw-r--r--drivers/usb/dwc2/gadget.c8
-rw-r--r--drivers/usb/dwc2/hcd.c1
-rw-r--r--drivers/usb/dwc2/hcd.h1
-rw-r--r--drivers/usb/dwc2/hcd_queue.c3
-rw-r--r--drivers/usb/dwc2/platform.c2
-rw-r--r--drivers/usb/dwc3/core.c118
-rw-r--r--drivers/usb/dwc3/core.h84
-rw-r--r--drivers/usb/dwc3/debug.h6
-rw-r--r--drivers/usb/dwc3/debugfs.c358
-rw-r--r--drivers/usb/dwc3/dwc3-omap.c9
-rw-r--r--drivers/usb/dwc3/dwc3-pci.c18
-rw-r--r--drivers/usb/dwc3/ep0.c43
-rw-r--r--drivers/usb/dwc3/gadget.c427
-rw-r--r--drivers/usb/dwc3/gadget.h6
-rw-r--r--drivers/usb/dwc3/platform_data.h2
-rw-r--r--drivers/usb/gadget/Kconfig1
-rw-r--r--drivers/usb/gadget/composite.c22
-rw-r--r--drivers/usb/gadget/function/f_fs.c2
-rw-r--r--drivers/usb/gadget/function/f_mass_storage.c36
-rw-r--r--drivers/usb/gadget/function/f_mass_storage.h2
-rw-r--r--drivers/usb/gadget/function/f_tcm.c11
-rw-r--r--drivers/usb/gadget/function/u_ether.c2
-rw-r--r--drivers/usb/gadget/function/u_serial.c4
-rw-r--r--drivers/usb/gadget/legacy/acm_ms.c4
-rw-r--r--drivers/usb/gadget/legacy/mass_storage.c4
-rw-r--r--drivers/usb/gadget/legacy/multi.c12
-rw-r--r--drivers/usb/gadget/legacy/nokia.c7
-rw-r--r--drivers/usb/gadget/udc/at91_udc.c5
-rw-r--r--drivers/usb/gadget/udc/pch_udc.c175
-rw-r--r--drivers/usb/gadget/udc/r8a66597-udc.c2
-rw-r--r--drivers/usb/gadget/udc/udc-core.c26
-rw-r--r--drivers/usb/host/Kconfig12
-rw-r--r--drivers/usb/host/Makefile1
-rw-r--r--drivers/usb/host/bcma-hcd.c6
-rw-r--r--drivers/usb/host/ehci-dbg.c86
-rw-r--r--drivers/usb/host/ehci-exynos.c2
-rw-r--r--drivers/usb/host/ehci-msm.c2
-rw-r--r--drivers/usb/host/ehci-omap.c2
-rw-r--r--drivers/usb/host/ehci-spear.c2
-rw-r--r--drivers/usb/host/fhci-sched.c2
-rw-r--r--drivers/usb/host/fotg210-hcd.c8
-rw-r--r--drivers/usb/host/ohci-hcd.c5
-rw-r--r--drivers/usb/host/ohci-jz4740.c245
-rw-r--r--drivers/usb/host/whci/hcd.c7
-rw-r--r--drivers/usb/host/whci/qset.c8
-rw-r--r--drivers/usb/host/xhci-mvebu.c7
-rw-r--r--drivers/usb/host/xhci-mvebu.h7
-rw-r--r--drivers/usb/host/xhci-plat.c60
-rw-r--r--drivers/usb/host/xhci-plat.h20
-rw-r--r--drivers/usb/host/xhci-rcar.c34
-rw-r--r--drivers/usb/host/xhci-ring.c478
-rw-r--r--drivers/usb/host/xhci-tegra.c1331
-rw-r--r--drivers/usb/host/xhci.c41
-rw-r--r--drivers/usb/host/xhci.h14
-rw-r--r--drivers/usb/isp1760/isp1760-if.c2
-rw-r--r--drivers/usb/misc/Kconfig26
-rw-r--r--drivers/usb/misc/Makefile1
-rw-r--r--drivers/usb/misc/sisusbvga/sisusb.c4
-rw-r--r--drivers/usb/misc/sisusbvga/sisusb_con.c2
-rw-r--r--drivers/usb/misc/ucsi.c478
-rw-r--r--drivers/usb/misc/ucsi.h215
-rw-r--r--drivers/usb/misc/usbtest.c7
-rw-r--r--drivers/usb/phy/phy-qcom-8x16-usb.c5
-rw-r--r--drivers/usb/phy/phy-twl6030-usb.c10
-rw-r--r--drivers/usb/renesas_usbhs/fifo.c16
-rw-r--r--drivers/usb/renesas_usbhs/mod_gadget.c9
-rw-r--r--drivers/usb/renesas_usbhs/mod_host.c3
-rw-r--r--drivers/usb/renesas_usbhs/pipe.c30
-rw-r--r--drivers/usb/renesas_usbhs/pipe.h6
-rw-r--r--drivers/usb/serial/console.c4
-rw-r--r--drivers/usb/serial/cp210x.c102
-rw-r--r--drivers/usb/serial/digi_acceleport.c3
-rw-r--r--drivers/usb/serial/ftdi_sio.c16
-rw-r--r--drivers/usb/serial/generic.c6
-rw-r--r--drivers/usb/serial/io_edgeport.c56
-rw-r--r--drivers/usb/serial/keyspan.c72
-rw-r--r--drivers/usb/serial/mxuport.c16
-rw-r--r--drivers/usb/serial/option.c155
-rw-r--r--drivers/usb/serial/quatech2.c1
-rw-r--r--drivers/usb/serial/sierra.c4
-rw-r--r--drivers/usb/serial/ti_usb_3410_5052.c55
-rw-r--r--drivers/usb/serial/ti_usb_3410_5052.h8
-rw-r--r--drivers/usb/serial/usb-serial.c7
-rw-r--r--drivers/usb/serial/usb_wwan.c4
-rw-r--r--drivers/usb/storage/alauda.c22
-rw-r--r--drivers/usb/storage/cypress_atacb.c34
-rw-r--r--drivers/usb/storage/datafab.c22
-rw-r--r--drivers/usb/storage/debug.c3
-rw-r--r--drivers/usb/storage/debug.h3
-rw-r--r--drivers/usb/storage/ene_ub6250.c25
-rw-r--r--drivers/usb/storage/freecom.c75
-rw-r--r--drivers/usb/storage/initializers.c15
-rw-r--r--drivers/usb/storage/initializers.h15
-rw-r--r--drivers/usb/storage/isd200.c51
-rw-r--r--drivers/usb/storage/jumpshot.c22
-rw-r--r--drivers/usb/storage/karma.c3
-rw-r--r--drivers/usb/storage/option_ms.c6
-rw-r--r--drivers/usb/storage/protocol.c12
-rw-r--r--drivers/usb/storage/protocol.h3
-rw-r--r--drivers/usb/storage/realtek_cr.c12
-rw-r--r--drivers/usb/storage/scsiglue.c171
-rw-r--r--drivers/usb/storage/scsiglue.h3
-rw-r--r--drivers/usb/storage/sddr09.c82
-rw-r--r--drivers/usb/storage/sddr55.c45
-rw-r--r--drivers/usb/storage/shuttle_usbat.c16
-rw-r--r--drivers/usb/storage/sierra_ms.c3
-rw-r--r--drivers/usb/storage/transport.c165
-rw-r--r--drivers/usb/storage/transport.h3
-rw-r--r--drivers/usb/storage/uas.c3
-rw-r--r--drivers/usb/storage/unusual_alauda.h3
-rw-r--r--drivers/usb/storage/unusual_cypress.h3
-rw-r--r--drivers/usb/storage/unusual_datafab.h6
-rw-r--r--drivers/usb/storage/unusual_devs.h334
-rw-r--r--drivers/usb/storage/unusual_freecom.h3
-rw-r--r--drivers/usb/storage/unusual_isd200.h3
-rw-r--r--drivers/usb/storage/unusual_jumpshot.h3
-rw-r--r--drivers/usb/storage/unusual_karma.h3
-rw-r--r--drivers/usb/storage/unusual_onetouch.h6
-rw-r--r--drivers/usb/storage/unusual_realtek.h3
-rw-r--r--drivers/usb/storage/unusual_sddr09.h3
-rw-r--r--drivers/usb/storage/unusual_sddr55.h3
-rw-r--r--drivers/usb/storage/unusual_uas.h3
-rw-r--r--drivers/usb/storage/unusual_usbat.h3
-rw-r--r--drivers/usb/storage/usb.c98
-rw-r--r--drivers/usb/storage/usb.h14
-rw-r--r--drivers/usb/storage/usual-tables.c3
-rw-r--r--drivers/usb/usbip/Kconfig17
-rw-r--r--drivers/usb/usbip/Makefile3
-rw-r--r--drivers/usb/usbip/stub.h1
-rw-r--r--drivers/usb/usbip/stub_dev.c7
-rw-r--r--drivers/usb/usbip/stub_rx.c19
-rw-r--r--drivers/usb/usbip/stub_tx.c11
-rw-r--r--drivers/usb/usbip/usbip_common.c17
-rw-r--r--drivers/usb/usbip/usbip_common.h14
-rw-r--r--drivers/usb/usbip/usbip_event.c168
-rw-r--r--drivers/usb/usbip/vudc.h190
-rw-r--r--drivers/usb/usbip/vudc_dev.c661
-rw-r--r--drivers/usb/usbip/vudc_main.c113
-rw-r--r--drivers/usb/usbip/vudc_rx.c234
-rw-r--r--drivers/usb/usbip/vudc_sysfs.c229
-rw-r--r--drivers/usb/usbip/vudc_transfer.c506
-rw-r--r--drivers/usb/usbip/vudc_tx.c289
-rw-r--r--drivers/usb/wusbcore/crypto.c6
-rw-r--r--drivers/usb/wusbcore/devconnect.c1
-rw-r--r--drivers/vfio/pci/vfio_pci.c55
-rw-r--r--drivers/vfio/pci/vfio_pci_config.c46
-rw-r--r--drivers/vfio/pci/vfio_pci_private.h1
-rw-r--r--drivers/vfio/vfio_iommu_spapr_tce.c5
-rw-r--r--drivers/vfio/vfio_iommu_type1.c2
-rw-r--r--drivers/vhost/scsi.c12
-rw-r--r--drivers/video/Kconfig4
-rw-r--r--drivers/video/backlight/backlight.c39
-rw-r--r--drivers/video/backlight/lm3630a_bl.c9
-rw-r--r--drivers/video/backlight/lp855x_bl.c6
-rw-r--r--drivers/video/backlight/lp8788_bl.c6
-rw-r--r--drivers/video/backlight/pwm_bl.c14
-rw-r--r--drivers/video/console/fbcon.c4
-rw-r--r--drivers/video/console/mdacon.c2
-rw-r--r--drivers/video/console/newport_con.c2
-rw-r--r--drivers/video/console/sticon.c2
-rw-r--r--drivers/video/console/vgacon.c5
-rw-r--r--drivers/video/fbdev/Kconfig3
-rw-r--r--drivers/video/fbdev/Makefile1
-rw-r--r--drivers/video/fbdev/amba-clcd.c2
-rw-r--r--drivers/video/fbdev/core/fbmem.c20
-rw-r--r--drivers/video/fbdev/da8xx-fb.c4
-rw-r--r--drivers/video/fbdev/efifb.c27
-rw-r--r--drivers/video/fbdev/hyperv_fb.c4
-rw-r--r--drivers/video/fbdev/imxfb.c44
-rw-r--r--drivers/video/fbdev/omap2/omapfb/dss/dsi.c12
-rw-r--r--drivers/video/fbdev/omap2/omapfb/dss/hdmi4.c12
-rw-r--r--drivers/video/fbdev/omap2/omapfb/dss/hdmi5.c12
-rw-r--r--drivers/video/fbdev/sh_mipi_dsi.c587
-rw-r--r--drivers/video/fbdev/ssd1307fb.c13
-rw-r--r--drivers/video/fbdev/via/accel.c2
-rw-r--r--drivers/video/fbdev/via/via-core.c4
-rw-r--r--drivers/virtio/virtio_balloon.c20
-rw-r--r--drivers/vme/bridges/vme_ca91cx42.c9
-rw-r--r--drivers/vme/bridges/vme_tsi148.c9
-rw-r--r--drivers/vme/vme.c26
-rw-r--r--drivers/vme/vme_bridge.h1
-rw-r--r--drivers/w1/masters/ds2482.c18
-rw-r--r--drivers/w1/slaves/w1_therm.c218
-rw-r--r--drivers/w1/w1.c2
-rw-r--r--drivers/w1/w1.h2
-rw-r--r--drivers/w1/w1_io.c2
-rw-r--r--drivers/watchdog/Kconfig36
-rw-r--r--drivers/watchdog/Makefile3
-rw-r--r--drivers/watchdog/cpwd.c4
-rw-r--r--drivers/watchdog/ebc-c384_wdt.c43
-rw-r--r--drivers/watchdog/f71808e_wdt.c30
-rw-r--r--drivers/watchdog/imx2_wdt.c19
-rw-r--r--drivers/watchdog/jz4740_wdt.c4
-rw-r--r--drivers/watchdog/octeon-wdt-main.c2
-rw-r--r--drivers/watchdog/pic32-dmt.c257
-rw-r--r--drivers/watchdog/pic32-wdt.c263
-rw-r--r--drivers/watchdog/qcom-wdt.c7
-rw-r--r--drivers/watchdog/renesas_wdt.c213
-rw-r--r--drivers/watchdog/shwdt.c4
-rw-r--r--drivers/watchdog/sp5100_tco.c15
-rw-r--r--drivers/watchdog/watchdog_core.c4
-rw-r--r--drivers/watchdog/watchdog_dev.c1
-rw-r--r--drivers/xen/Makefile1
-rw-r--r--drivers/xen/efi.c1
-rw-r--r--drivers/xen/events/events_base.c6
-rw-r--r--drivers/xen/gntdev.c2
-rw-r--r--drivers/xen/xen-scsiback.c11
-rw-r--r--fs/9p/acl.c14
-rw-r--r--fs/9p/vfs_addr.c3
-rw-r--r--fs/9p/vfs_dir.c4
-rw-r--r--fs/9p/vfs_inode.c2
-rw-r--r--fs/9p/xattr.c9
-rw-r--r--fs/Kconfig1
-rw-r--r--fs/Kconfig.binfmt8
-rw-r--r--fs/affs/dir.c2
-rw-r--r--fs/affs/file.c5
-rw-r--r--fs/affs/super.c5
-rw-r--r--fs/afs/dir.c16
-rw-r--r--fs/afs/rxrpc.c30
-rw-r--r--fs/afs/write.c4
-rw-r--r--fs/aio.c9
-rw-r--r--fs/autofs4/root.c4
-rw-r--r--fs/bad_inode.c8
-rw-r--r--fs/befs/befs.h4
-rw-r--r--fs/befs/btree.c16
-rw-r--r--fs/befs/btree.h4
-rw-r--r--fs/befs/datastream.c34
-rw-r--r--fs/befs/datastream.h11
-rw-r--r--fs/befs/io.c4
-rw-r--r--fs/befs/linuxvfs.c14
-rw-r--r--fs/bfs/dir.c2
-rw-r--r--fs/binfmt_aout.c26
-rw-r--r--fs/binfmt_elf.c18
-rw-r--r--fs/binfmt_elf_fdpic.c2
-rw-r--r--fs/binfmt_flat.c6
-rw-r--r--fs/block_dev.c170
-rw-r--r--fs/btrfs/acl.c3
-rw-r--r--fs/btrfs/backref.c4
-rw-r--r--fs/btrfs/btrfs_inode.h12
-rw-r--r--fs/btrfs/check-integrity.c2
-rw-r--r--fs/btrfs/compression.c85
-rw-r--r--fs/btrfs/ctree.c20
-rw-r--r--fs/btrfs/ctree.h1129
-rw-r--r--fs/btrfs/delayed-inode.c2
-rw-r--r--fs/btrfs/delayed-ref.h2
-rw-r--r--fs/btrfs/dev-replace.c103
-rw-r--r--fs/btrfs/dev-replace.h4
-rw-r--r--fs/btrfs/disk-io.c144
-rw-r--r--fs/btrfs/extent-tree.c215
-rw-r--r--fs/btrfs/extent_io.c176
-rw-r--r--fs/btrfs/extent_io.h35
-rw-r--r--fs/btrfs/extent_map.c2
-rw-r--r--fs/btrfs/file-item.c2
-rw-r--r--fs/btrfs/file.c53
-rw-r--r--fs/btrfs/free-space-cache.c2
-rw-r--r--fs/btrfs/free-space-cache.h2
-rw-r--r--fs/btrfs/inode-item.c2
-rw-r--r--fs/btrfs/inode.c515
-rw-r--r--fs/btrfs/ioctl.c214
-rw-r--r--fs/btrfs/ordered-data.c26
-rw-r--r--fs/btrfs/ordered-data.h8
-rw-r--r--fs/btrfs/qgroup.c24
-rw-r--r--fs/btrfs/raid56.c6
-rw-r--r--fs/btrfs/relocation.c32
-rw-r--r--fs/btrfs/root-tree.c8
-rw-r--r--fs/btrfs/scrub.c36
-rw-r--r--fs/btrfs/send.c68
-rw-r--r--fs/btrfs/struct-funcs.c2
-rw-r--r--fs/btrfs/super.c68
-rw-r--r--fs/btrfs/sysfs.c14
-rw-r--r--fs/btrfs/tests/extent-io-tests.c10
-rw-r--r--fs/btrfs/tests/free-space-tests.c7
-rw-r--r--fs/btrfs/tests/inode-tests.c2
-rw-r--r--fs/btrfs/tests/qgroup-tests.c2
-rw-r--r--fs/btrfs/transaction.c140
-rw-r--r--fs/btrfs/transaction.h2
-rw-r--r--fs/btrfs/tree-log.c90
-rw-r--r--fs/btrfs/ulist.c2
-rw-r--r--fs/btrfs/volumes.c468
-rw-r--r--fs/btrfs/volumes.h57
-rw-r--r--fs/btrfs/xattr.c40
-rw-r--r--fs/btrfs/xattr.h3
-rw-r--r--fs/buffer.c10
-rw-r--r--fs/ceph/acl.c16
-rw-r--r--fs/ceph/addr.c217
-rw-r--r--fs/ceph/cache.c2
-rw-r--r--fs/ceph/caps.c51
-rw-r--r--fs/ceph/debugfs.c2
-rw-r--r--fs/ceph/dir.c383
-rw-r--r--fs/ceph/file.c100
-rw-r--r--fs/ceph/inode.c188
-rw-r--r--fs/ceph/ioctl.c14
-rw-r--r--fs/ceph/mds_client.c140
-rw-r--r--fs/ceph/mds_client.h17
-rw-r--r--fs/ceph/mdsmap.c43
-rw-r--r--fs/ceph/super.c47
-rw-r--r--fs/ceph/super.h20
-rw-r--r--fs/ceph/xattr.c218
-rw-r--r--fs/char_dev.c4
-rw-r--r--fs/cifs/Makefile3
-rw-r--r--fs/cifs/cifs_dfs_ref.c8
-rw-r--r--fs/cifs/cifs_spnego.c67
-rw-r--r--fs/cifs/cifsacl.c2
-rw-r--r--fs/cifs/cifsencrypt.c97
-rw-r--r--fs/cifs/cifsfs.c39
-rw-r--r--fs/cifs/cifsfs.h14
-rw-r--r--fs/cifs/cifsglob.h2
-rw-r--r--fs/cifs/cifsproto.h14
-rw-r--r--fs/cifs/cifssmb.c15
-rw-r--r--fs/cifs/connect.c139
-rw-r--r--fs/cifs/file.c66
-rw-r--r--fs/cifs/inode.c3
-rw-r--r--fs/cifs/readdir.c61
-rw-r--r--fs/cifs/sess.c139
-rw-r--r--fs/cifs/smb2glob.h1
-rw-r--r--fs/cifs/smb2inode.c8
-rw-r--r--fs/cifs/smb2pdu.c16
-rw-r--r--fs/cifs/smb2proto.h2
-rw-r--r--fs/cifs/smb2transport.c107
-rw-r--r--fs/cifs/transport.c141
-rw-r--r--fs/cifs/xattr.c387
-rw-r--r--fs/coda/dir.c18
-rw-r--r--fs/compat.c16
-rw-r--r--fs/configfs/dir.c37
-rw-r--r--fs/configfs/inode.c2
-rw-r--r--fs/coredump.c9
-rw-r--r--fs/cramfs/inode.c2
-rw-r--r--fs/crypto/keyinfo.c120
-rw-r--r--fs/dax.c846
-rw-r--r--fs/dcache.c283
-rw-r--r--fs/debugfs/file.c436
-rw-r--r--fs/debugfs/inode.c108
-rw-r--r--fs/debugfs/internal.h26
-rw-r--r--fs/direct-io.c48
-rw-r--r--fs/ecryptfs/crypto.c46
-rw-r--r--fs/ecryptfs/ecryptfs_kernel.h11
-rw-r--r--fs/ecryptfs/file.c73
-rw-r--r--fs/ecryptfs/inode.c122
-rw-r--r--fs/ecryptfs/mmap.c6
-rw-r--r--fs/ecryptfs/super.c5
-rw-r--r--fs/efivarfs/file.c2
-rw-r--r--fs/efivarfs/inode.c40
-rw-r--r--fs/efivarfs/super.c3
-rw-r--r--fs/efs/dir.c3
-rw-r--r--fs/efs/namei.c2
-rw-r--r--fs/efs/super.c4
-rw-r--r--fs/eventpoll.c12
-rw-r--r--fs/exec.c53
-rw-r--r--fs/exofs/dir.c16
-rw-r--r--fs/exofs/inode.c3
-rw-r--r--fs/exofs/super.c4
-rw-r--r--fs/exportfs/expfs.c12
-rw-r--r--fs/ext2/acl.c3
-rw-r--r--fs/ext2/dir.c16
-rw-r--r--fs/ext2/file.c4
-rw-r--r--fs/ext2/inode.c20
-rw-r--r--fs/ext2/namei.c2
-rw-r--r--fs/ext2/super.c11
-rw-r--r--fs/ext2/xattr_security.c13
-rw-r--r--fs/ext2/xattr_trusted.c13
-rw-r--r--fs/ext2/xattr_user.c17
-rw-r--r--fs/ext4/acl.c3
-rw-r--r--fs/ext4/balloc.c3
-rw-r--r--fs/ext4/dir.c9
-rw-r--r--fs/ext4/ext4.h21
-rw-r--r--fs/ext4/ext4_jbd2.h15
-rw-r--r--fs/ext4/extents.c20
-rw-r--r--fs/ext4/extents_status.c2
-rw-r--r--fs/ext4/file.c15
-rw-r--r--fs/ext4/ialloc.c59
-rw-r--r--fs/ext4/indirect.c127
-rw-r--r--fs/ext4/inline.c2
-rw-r--r--fs/ext4/inode.c333
-rw-r--r--fs/ext4/ioctl.c4
-rw-r--r--fs/ext4/mballoc.c12
-rw-r--r--fs/ext4/mmp.c4
-rw-r--r--fs/ext4/move_extent.c2
-rw-r--r--fs/ext4/namei.c13
-rw-r--r--fs/ext4/page-io.c2
-rw-r--r--fs/ext4/resize.c2
-rw-r--r--fs/ext4/super.c15
-rw-r--r--fs/ext4/xattr_security.c13
-rw-r--r--fs/ext4/xattr_trusted.c13
-rw-r--r--fs/ext4/xattr_user.c17
-rw-r--r--fs/f2fs/Kconfig8
-rw-r--r--fs/f2fs/acl.c7
-rw-r--r--fs/f2fs/checkpoint.c67
-rw-r--r--fs/f2fs/data.c203
-rw-r--r--fs/f2fs/debug.c25
-rw-r--r--fs/f2fs/dir.c130
-rw-r--r--fs/f2fs/extent_cache.c3
-rw-r--r--fs/f2fs/f2fs.h197
-rw-r--r--fs/f2fs/file.c320
-rw-r--r--fs/f2fs/gc.c27
-rw-r--r--fs/f2fs/inline.c111
-rw-r--r--fs/f2fs/inode.c66
-rw-r--r--fs/f2fs/namei.c2
-rw-r--r--fs/f2fs/node.c316
-rw-r--r--fs/f2fs/recovery.c149
-rw-r--r--fs/f2fs/segment.c8
-rw-r--r--fs/f2fs/segment.h9
-rw-r--r--fs/f2fs/super.c288
-rw-r--r--fs/f2fs/xattr.c29
-rw-r--r--fs/fat/dir.c6
-rw-r--r--fs/fat/inode.c6
-rw-r--r--fs/file.c5
-rw-r--r--fs/freevxfs/vxfs_lookup.c2
-rw-r--r--fs/fs-writeback.c3
-rw-r--r--fs/fuse/dir.c105
-rw-r--r--fs/fuse/file.c5
-rw-r--r--fs/gfs2/acl.c58
-rw-r--r--fs/gfs2/acl.h1
-rw-r--r--fs/gfs2/aops.c11
-rw-r--r--fs/gfs2/dir.c15
-rw-r--r--fs/gfs2/file.c40
-rw-r--r--fs/gfs2/glock.c15
-rw-r--r--fs/gfs2/glops.c7
-rw-r--r--fs/gfs2/inode.c89
-rw-r--r--fs/gfs2/meta_io.c7
-rw-r--r--fs/gfs2/meta_io.h8
-rw-r--r--fs/gfs2/ops_fstype.c4
-rw-r--r--fs/gfs2/rgrp.c16
-rw-r--r--fs/gfs2/super.c2
-rw-r--r--fs/gfs2/util.c1
-rw-r--r--fs/gfs2/xattr.c52
-rw-r--r--fs/hfs/attr.c11
-rw-r--r--fs/hfs/catalog.c3
-rw-r--r--fs/hfs/dir.c12
-rw-r--r--fs/hfs/hfs_fs.h7
-rw-r--r--fs/hfs/inode.c9
-rw-r--r--fs/hfsplus/catalog.c3
-rw-r--r--fs/hfsplus/dir.c12
-rw-r--r--fs/hfsplus/hfsplus_fs.h1
-rw-r--r--fs/hfsplus/inode.c8
-rw-r--r--fs/hfsplus/posix_acl.c3
-rw-r--r--fs/hfsplus/super.c1
-rw-r--r--fs/hfsplus/xattr.c22
-rw-r--r--fs/hfsplus/xattr.h4
-rw-r--r--fs/hfsplus/xattr_security.c13
-rw-r--r--fs/hfsplus/xattr_trusted.c13
-rw-r--r--fs/hfsplus/xattr_user.c13
-rw-r--r--fs/hostfs/hostfs_kern.c2
-rw-r--r--fs/hpfs/dir.c12
-rw-r--r--fs/hpfs/dnode.c8
-rw-r--r--fs/hpfs/hpfs_fn.h2
-rw-r--r--fs/hpfs/super.c42
-rw-r--r--fs/inode.c17
-rw-r--r--fs/isofs/dir.c4
-rw-r--r--fs/isofs/rock.c13
-rw-r--r--fs/jbd2/commit.c4
-rw-r--r--fs/jbd2/journal.c3
-rw-r--r--fs/jbd2/recovery.c2
-rw-r--r--fs/jbd2/transaction.c28
-rw-r--r--fs/jffs2/acl.c2
-rw-r--r--fs/jffs2/dir.c4
-rw-r--r--fs/jffs2/security.c13
-rw-r--r--fs/jffs2/super.c2
-rw-r--r--fs/jffs2/xattr_trusted.c13
-rw-r--r--fs/jffs2/xattr_user.c13
-rw-r--r--fs/jfs/acl.c6
-rw-r--r--fs/jfs/file.c6
-rw-r--r--fs/jfs/inode.c11
-rw-r--r--fs/jfs/jfs_discard.c6
-rw-r--r--fs/jfs/jfs_dtree.c10
-rw-r--r--fs/jfs/jfs_imap.c3
-rw-r--r--fs/jfs/jfs_inode.c2
-rw-r--r--fs/jfs/jfs_logmgr.c14
-rw-r--r--fs/jfs/jfs_txnmgr.c21
-rw-r--r--fs/jfs/jfs_xattr.h6
-rw-r--r--fs/jfs/namei.c12
-rw-r--r--fs/jfs/super.c4
-rw-r--r--fs/jfs/symlink.c12
-rw-r--r--fs/jfs/xattr.c222
-rw-r--r--fs/kernfs/dir.c31
-rw-r--r--fs/kernfs/file.c51
-rw-r--r--fs/kernfs/inode.c32
-rw-r--r--fs/kernfs/kernfs-internal.h7
-rw-r--r--fs/kernfs/mount.c20
-rw-r--r--fs/libfs.c16
-rw-r--r--fs/logfs/dir.c4
-rw-r--r--fs/minix/dir.c2
-rw-r--r--fs/namei.c598
-rw-r--r--fs/nfs/callback_proc.c9
-rw-r--r--fs/nfs/callback_xdr.c17
-rw-r--r--fs/nfs/delegation.c9
-rw-r--r--fs/nfs/delegation.h2
-rw-r--r--fs/nfs/dir.c80
-rw-r--r--fs/nfs/direct.c40
-rw-r--r--fs/nfs/file.c2
-rw-r--r--fs/nfs/filelayout/filelayout.c6
-rw-r--r--fs/nfs/flexfilelayout/flexfilelayout.c200
-rw-r--r--fs/nfs/flexfilelayout/flexfilelayout.h17
-rw-r--r--fs/nfs/flexfilelayout/flexfilelayoutdev.c119
-rw-r--r--fs/nfs/inode.c4
-rw-r--r--fs/nfs/internal.h1
-rw-r--r--fs/nfs/nfs3acl.c43
-rw-r--r--fs/nfs/nfs42.h1
-rw-r--r--fs/nfs/nfs42proc.c107
-rw-r--r--fs/nfs/nfs42xdr.c146
-rw-r--r--fs/nfs/nfs4_fs.h12
-rw-r--r--fs/nfs/nfs4file.c23
-rw-r--r--fs/nfs/nfs4idmap.c2
-rw-r--r--fs/nfs/nfs4proc.c218
-rw-r--r--fs/nfs/nfs4state.c18
-rw-r--r--fs/nfs/nfs4trace.h10
-rw-r--r--fs/nfs/nfs4xdr.c43
-rw-r--r--fs/nfs/nfstrace.h2
-rw-r--r--fs/nfs/pagelist.c6
-rw-r--r--fs/nfs/pnfs.c349
-rw-r--r--fs/nfs/pnfs.h17
-rw-r--r--fs/nfs/pnfs_nfs.c60
-rw-r--r--fs/nfs/super.c9
-rw-r--r--fs/nfs/unlink.c192
-rw-r--r--fs/nfs/write.c64
-rw-r--r--fs/nfsd/nfs3proc.c4
-rw-r--r--fs/nfsd/nfs3xdr.c4
-rw-r--r--fs/nfsd/nfs4layouts.c2
-rw-r--r--fs/nfsd/nfs4state.c8
-rw-r--r--fs/nfsd/nfsfh.c2
-rw-r--r--fs/nfsd/state.h5
-rw-r--r--fs/nfsd/vfs.c18
-rw-r--r--fs/nilfs2/alloc.c39
-rw-r--r--fs/nilfs2/alloc.h11
-rw-r--r--fs/nilfs2/bmap.c10
-rw-r--r--fs/nilfs2/bmap.h22
-rw-r--r--fs/nilfs2/btnode.c9
-rw-r--r--fs/nilfs2/btnode.h8
-rw-r--r--fs/nilfs2/btree.c21
-rw-r--r--fs/nilfs2/btree.h6
-rw-r--r--fs/nilfs2/cpfile.c22
-rw-r--r--fs/nilfs2/cpfile.h10
-rw-r--r--fs/nilfs2/dat.c8
-rw-r--r--fs/nilfs2/dat.h8
-rw-r--r--fs/nilfs2/dir.c75
-rw-r--r--fs/nilfs2/direct.c17
-rw-r--r--fs/nilfs2/direct.h6
-rw-r--r--fs/nilfs2/export.h2
-rw-r--r--fs/nilfs2/file.c7
-rw-r--r--fs/nilfs2/gcinode.c9
-rw-r--r--fs/nilfs2/ifile.c14
-rw-r--r--fs/nilfs2/ifile.h9
-rw-r--r--fs/nilfs2/inode.c109
-rw-r--r--fs/nilfs2/ioctl.c17
-rw-r--r--fs/nilfs2/mdt.c63
-rw-r--r--fs/nilfs2/mdt.h21
-rw-r--r--fs/nilfs2/namei.c14
-rw-r--r--fs/nilfs2/nilfs.h33
-rw-r--r--fs/nilfs2/page.c15
-rw-r--r--fs/nilfs2/page.h10
-rw-r--r--fs/nilfs2/recovery.c25
-rw-r--r--fs/nilfs2/segbuf.c10
-rw-r--r--fs/nilfs2/segbuf.h11
-rw-r--r--fs/nilfs2/segment.c125
-rw-r--r--fs/nilfs2/segment.h44
-rw-r--r--fs/nilfs2/sufile.c14
-rw-r--r--fs/nilfs2/sufile.h10
-rw-r--r--fs/nilfs2/super.c21
-rw-r--r--fs/nilfs2/sysfs.c10
-rw-r--r--fs/nilfs2/sysfs.h6
-rw-r--r--fs/nilfs2/the_nilfs.c16
-rw-r--r--fs/nilfs2/the_nilfs.h27
-rw-r--r--fs/notify/fsnotify.h7
-rw-r--r--fs/notify/group.c17
-rw-r--r--fs/notify/mark.c78
-rw-r--r--fs/ntfs/file.c7
-rw-r--r--fs/ocfs2/acl.c87
-rw-r--r--fs/ocfs2/acl.h5
-rw-r--r--fs/ocfs2/alloc.c3
-rw-r--r--fs/ocfs2/aops.c13
-rw-r--r--fs/ocfs2/cluster/heartbeat.c185
-rw-r--r--fs/ocfs2/cluster/tcp.c17
-rw-r--r--fs/ocfs2/cluster/tcp_internal.h5
-rw-r--r--fs/ocfs2/dlmglue.c3
-rw-r--r--fs/ocfs2/file.c6
-rw-r--r--fs/ocfs2/inode.c9
-rw-r--r--fs/ocfs2/journal.h2
-rw-r--r--fs/ocfs2/namei.c23
-rw-r--r--fs/ocfs2/ocfs2_fs.h2
-rw-r--r--fs/ocfs2/refcounttree.c17
-rw-r--r--fs/ocfs2/slot_map.c6
-rw-r--r--fs/ocfs2/xattr.c57
-rw-r--r--fs/ocfs2/xattr.h4
-rw-r--r--fs/omfs/dir.c2
-rw-r--r--fs/open.c20
-rw-r--r--fs/openpromfs/inode.c2
-rw-r--r--fs/orangefs/file.c4
-rw-r--r--fs/orangefs/orangefs-kernel.h4
-rw-r--r--fs/orangefs/xattr.c20
-rw-r--r--fs/overlayfs/copy_up.c26
-rw-r--r--fs/overlayfs/dir.c67
-rw-r--r--fs/overlayfs/inode.c9
-rw-r--r--fs/overlayfs/overlayfs.h10
-rw-r--r--fs/overlayfs/readdir.c16
-rw-r--r--fs/overlayfs/super.c43
-rw-r--r--fs/posix_acl.c122
-rw-r--r--fs/proc/array.c20
-rw-r--r--fs/proc/base.c79
-rw-r--r--fs/proc/fd.c8
-rw-r--r--fs/proc/generic.c2
-rw-r--r--fs/proc/namespaces.c3
-rw-r--r--fs/proc/page.c2
-rw-r--r--fs/proc/proc_net.c2
-rw-r--r--fs/proc/proc_sysctl.c17
-rw-r--r--fs/proc/root.c4
-rw-r--r--fs/proc/task_mmu.c11
-rw-r--r--fs/proc/vmcore.c2
-rw-r--r--fs/qnx4/dir.c2
-rw-r--r--fs/qnx6/dir.c2
-rw-r--r--fs/quota/netlink.c12
-rw-r--r--fs/ramfs/file-nommu.c8
-rw-r--r--fs/read_write.c51
-rw-r--r--fs/readdir.c41
-rw-r--r--fs/reiserfs/dir.c2
-rw-r--r--fs/reiserfs/file.c6
-rw-r--r--fs/reiserfs/inode.c7
-rw-r--r--fs/reiserfs/ioctl.c6
-rw-r--r--fs/reiserfs/namei.c18
-rw-r--r--fs/reiserfs/objectid.c2
-rw-r--r--fs/reiserfs/xattr.c54
-rw-r--r--fs/reiserfs/xattr.h9
-rw-r--r--fs/reiserfs/xattr_acl.c8
-rw-r--r--fs/reiserfs/xattr_security.c26
-rw-r--r--fs/reiserfs/xattr_trusted.c26
-rw-r--r--fs/reiserfs/xattr_user.c26
-rw-r--r--fs/romfs/super.c4
-rw-r--r--fs/select.c67
-rw-r--r--fs/splice.c3
-rw-r--r--fs/squashfs/dir.c4
-rw-r--r--fs/squashfs/xattr.c6
-rw-r--r--fs/super.c2
-rw-r--r--fs/sysv/dir.c2
-rw-r--r--fs/ubifs/debug.c2
-rw-r--r--fs/ubifs/dir.c8
-rw-r--r--fs/ubifs/file.c12
-rw-r--r--fs/ubifs/sb.c2
-rw-r--r--fs/ubifs/super.c1
-rw-r--r--fs/ubifs/ubifs.h7
-rw-r--r--fs/ubifs/xattr.c144
-rw-r--r--fs/udf/dir.c2
-rw-r--r--fs/udf/file.c7
-rw-r--r--fs/udf/inode.c7
-rw-r--r--fs/udf/namei.c2
-rw-r--r--fs/udf/super.c67
-rw-r--r--fs/udf/udf_sb.h4
-rw-r--r--fs/ufs/dir.c16
-rw-r--r--fs/ufs/super.c2
-rw-r--r--fs/userfaultfd.c41
-rw-r--r--fs/xattr.c25
-rw-r--r--fs/xfs/kmem.c26
-rw-r--r--fs/xfs/kmem.h2
-rw-r--r--fs/xfs/libxfs/xfs_attr.c58
-rw-r--r--fs/xfs/libxfs/xfs_bmap.c22
-rw-r--r--fs/xfs/libxfs/xfs_dir2_sf.c9
-rw-r--r--fs/xfs/libxfs/xfs_inode_fork.c99
-rw-r--r--fs/xfs/libxfs/xfs_inode_fork.h1
-rw-r--r--fs/xfs/libxfs/xfs_log_format.h5
-rw-r--r--fs/xfs/libxfs/xfs_sb.c8
-rw-r--r--fs/xfs/libxfs/xfs_shared.h102
-rw-r--r--fs/xfs/xfs_acl.c20
-rw-r--r--fs/xfs/xfs_aops.c360
-rw-r--r--fs/xfs/xfs_aops.h15
-rw-r--r--fs/xfs/xfs_attr.h4
-rw-r--r--fs/xfs/xfs_attr_inactive.c16
-rw-r--r--fs/xfs/xfs_attr_list.c85
-rw-r--r--fs/xfs/xfs_bmap_util.c60
-rw-r--r--fs/xfs/xfs_buf.c12
-rw-r--r--fs/xfs/xfs_buf.h20
-rw-r--r--fs/xfs/xfs_buf_item.c121
-rw-r--r--fs/xfs/xfs_dir2_readdir.c23
-rw-r--r--fs/xfs/xfs_dquot.c9
-rw-r--r--fs/xfs/xfs_file.c40
-rw-r--r--fs/xfs/xfs_fsops.c14
-rw-r--r--fs/xfs/xfs_icache.c290
-rw-r--r--fs/xfs/xfs_inode.c167
-rw-r--r--fs/xfs/xfs_inode.h5
-rw-r--r--fs/xfs/xfs_inode_item.c6
-rw-r--r--fs/xfs/xfs_ioctl.c31
-rw-r--r--fs/xfs/xfs_iomap.c53
-rw-r--r--fs/xfs/xfs_iops.c117
-rw-r--r--fs/xfs/xfs_log.c62
-rw-r--r--fs/xfs/xfs_log.h3
-rw-r--r--fs/xfs/xfs_log_cil.c1
-rw-r--r--fs/xfs/xfs_log_priv.h1
-rw-r--r--fs/xfs/xfs_log_recover.c12
-rw-r--r--fs/xfs/xfs_mount.c23
-rw-r--r--fs/xfs/xfs_mount.h34
-rw-r--r--fs/xfs/xfs_pnfs.c7
-rw-r--r--fs/xfs/xfs_qm.c9
-rw-r--r--fs/xfs/xfs_qm_syscalls.c26
-rw-r--r--fs/xfs/xfs_rtalloc.c21
-rw-r--r--fs/xfs/xfs_super.c77
-rw-r--r--fs/xfs/xfs_symlink.c37
-rw-r--r--fs/xfs/xfs_sysfs.c291
-rw-r--r--fs/xfs/xfs_sysfs.h3
-rw-r--r--fs/xfs/xfs_trace.h16
-rw-r--r--fs/xfs/xfs_trans.c88
-rw-r--r--fs/xfs/xfs_trans.h8
-rw-r--r--fs/xfs/xfs_xattr.c32
-rw-r--r--include/acpi/acpi_bus.h8
-rw-r--r--include/acpi/acpi_drivers.h1
-rw-r--r--include/acpi/acpiosxf.h10
-rw-r--r--include/acpi/acpixf.h25
-rw-r--r--include/acpi/acrestyp.h1
-rw-r--r--include/acpi/actbl.h4
-rw-r--r--include/acpi/actbl1.h74
-rw-r--r--include/acpi/actbl2.h39
-rw-r--r--include/acpi/actbl3.h66
-rw-r--r--include/acpi/actypes.h48
-rw-r--r--include/acpi/platform/acenv.h44
-rw-r--r--include/acpi/platform/acmsvcex.h54
-rw-r--r--include/acpi/platform/acwinex.h49
-rw-r--r--include/acpi/video.h20
-rw-r--r--include/asm-generic/io.h4
-rw-r--r--include/asm-generic/pgtable.h8
-rw-r--r--include/asm-generic/preempt.h4
-rw-r--r--include/asm-generic/qspinlock.h27
-rw-r--r--include/asm-generic/rwsem.h13
-rw-r--r--include/asm-generic/seccomp.h14
-rw-r--r--include/asm-generic/siginfo.h15
-rw-r--r--include/asm-generic/vmlinux.lds.h4
-rw-r--r--include/clocksource/arm_arch_timer.h12
-rw-r--r--include/crypto/aead.h3
-rw-r--r--include/crypto/hash.h3
-rw-r--r--include/crypto/pkcs7.h6
-rw-r--r--include/crypto/public_key.h33
-rw-r--r--include/crypto/skcipher.h3
-rw-r--r--include/drm/drm_dp_dual_mode_helper.h92
-rw-r--r--include/dt-bindings/clock/ath79-clk.h19
-rw-r--r--include/dt-bindings/clock/axis,artpec6-clkctrl.h38
-rw-r--r--include/dt-bindings/clock/bcm2835.h20
-rw-r--r--include/dt-bindings/clock/exynos3250.h11
-rw-r--r--include/dt-bindings/clock/exynos5420.h24
-rw-r--r--include/dt-bindings/clock/hi3519-clock.h40
-rw-r--r--include/dt-bindings/clock/imx7d-clock.h3
-rw-r--r--include/dt-bindings/clock/microchip,pic32-clock.h42
-rw-r--r--include/dt-bindings/clock/r8a7790-clock.h1
-rw-r--r--include/dt-bindings/clock/r8a7794-clock.h5
-rw-r--r--include/dt-bindings/clock/rk3399-cru.h755
-rw-r--r--include/dt-bindings/clock/tegra210-car.h2
-rw-r--r--include/dt-bindings/clock/vf610-clock.h8
-rw-r--r--include/dt-bindings/gpio/meson-gxbb-gpio.h154
-rw-r--r--include/dt-bindings/gpio/tegra-gpio.h68
-rw-r--r--include/dt-bindings/gpio/tegra186-gpio.h56
-rw-r--r--include/dt-bindings/iio/adi,ad5592r.h16
-rw-r--r--include/dt-bindings/mfd/arizona.h5
-rw-r--r--include/dt-bindings/mfd/max77620.h39
-rw-r--r--include/dt-bindings/pinctrl/hisi.h59
-rw-r--r--include/dt-bindings/power/r8a7779-sysc.h27
-rw-r--r--include/dt-bindings/power/r8a7790-sysc.h34
-rw-r--r--include/dt-bindings/power/r8a7791-sysc.h26
-rw-r--r--include/dt-bindings/power/r8a7793-sysc.h28
-rw-r--r--include/dt-bindings/power/r8a7794-sysc.h26
-rw-r--r--include/dt-bindings/power/r8a7795-sysc.h42
-rw-r--r--include/dt-bindings/power/rk3399-power.h53
-rw-r--r--include/dt-bindings/thermal/tegra124-soctherm.h1
-rw-r--r--include/keys/asymmetric-subtype.h2
-rw-r--r--include/keys/asymmetric-type.h13
-rw-r--r--include/keys/system_keyring.h41
-rw-r--r--include/kvm/arm_arch_timer.h11
-rw-r--r--include/kvm/arm_vgic.h27
-rw-r--r--include/kvm/vgic/vgic.h246
-rw-r--r--include/linux/acpi.h13
-rw-r--r--include/linux/amba/pl08x.h2
-rw-r--r--include/linux/apple-gmux.h2
-rw-r--r--include/linux/ata.h75
-rw-r--r--include/linux/ath9k_platform.h4
-rw-r--r--include/linux/atomic.h4
-rw-r--r--include/linux/audit.h24
-rw-r--r--include/linux/backlight.h3
-rw-r--r--include/linux/bcm47xx_sprom.h24
-rw-r--r--include/linux/bcma/bcma.h1
-rw-r--r--include/linux/bcma/bcma_driver_arm_c9.h15
-rw-r--r--include/linux/bcma/bcma_driver_chipcommon.h1
-rw-r--r--include/linux/bio.h11
-rw-r--r--include/linux/bitops.h16
-rw-r--r--include/linux/blk-mq.h4
-rw-r--r--include/linux/blk_types.h2
-rw-r--r--include/linux/blkdev.h28
-rw-r--r--include/linux/blktrace_api.h9
-rw-r--r--include/linux/bootmem.h16
-rw-r--r--include/linux/bpf.h9
-rw-r--r--include/linux/can/dev.h22
-rw-r--r--include/linux/ccp.h36
-rw-r--r--include/linux/ceph/ceph_frag.h4
-rw-r--r--include/linux/ceph/ceph_fs.h20
-rw-r--r--include/linux/ceph/decode.h2
-rw-r--r--include/linux/ceph/libceph.h57
-rw-r--r--include/linux/ceph/mon_client.h23
-rw-r--r--include/linux/ceph/osd_client.h231
-rw-r--r--include/linux/ceph/osdmap.h158
-rw-r--r--include/linux/ceph/rados.h34
-rw-r--r--include/linux/clk-provider.h103
-rw-r--r--include/linux/clk/renesas.h16
-rw-r--r--include/linux/clk/tegra.h5
-rw-r--r--include/linux/clk/ti.h2
-rw-r--r--include/linux/clkdev.h7
-rw-r--r--include/linux/clocksource.h1
-rw-r--r--include/linux/compaction.h151
-rw-r--r--include/linux/compiler-gcc.h3
-rw-r--r--include/linux/compiler.h4
-rw-r--r--include/linux/console.h2
-rw-r--r--include/linux/coresight-stm.h6
-rw-r--r--include/linux/cpu.h18
-rw-r--r--include/linux/cpufreq-dt.h22
-rw-r--r--include/linux/cpufreq.h54
-rw-r--r--include/linux/cpuhotplug.h2
-rw-r--r--include/linux/cpumask.h6
-rw-r--r--include/linux/cpuset.h42
-rw-r--r--include/linux/crash_dump.h8
-rw-r--r--include/linux/crypto.h3
-rw-r--r--include/linux/dax.h43
-rw-r--r--include/linux/dcache.h65
-rw-r--r--include/linux/debugfs.h49
-rw-r--r--include/linux/debugobjects.h17
-rw-r--r--include/linux/devcoredump.h86
-rw-r--r--include/linux/devfreq.h99
-rw-r--r--include/linux/device.h24
-rw-r--r--include/linux/dma-iommu.h4
-rw-r--r--include/linux/dma-mapping.h2
-rw-r--r--include/linux/dma/dw.h5
-rw-r--r--include/linux/dma/xilinx_dma.h14
-rw-r--r--include/linux/dmaengine.h20
-rw-r--r--include/linux/efi.h181
-rw-r--r--include/linux/err.h2
-rw-r--r--include/linux/errno.h1
-rw-r--r--include/linux/ethtool.h7
-rw-r--r--include/linux/export.h33
-rw-r--r--include/linux/f2fs_fs.h2
-rw-r--r--include/linux/file.h13
-rw-r--r--include/linux/filter.h72
-rw-r--r--include/linux/fs.h139
-rw-r--r--include/linux/fscrypto.h1
-rw-r--r--include/linux/fsl_ifc.h45
-rw-r--r--include/linux/fsnotify_backend.h2
-rw-r--r--include/linux/ftrace.h1
-rw-r--r--include/linux/genhd.h23
-rw-r--r--include/linux/genl_magic_struct.h7
-rw-r--r--include/linux/gpio/driver.h25
-rw-r--r--include/linux/hardirq.h2
-rw-r--r--include/linux/hash.h108
-rw-r--r--include/linux/huge_mm.h4
-rw-r--r--include/linux/hugetlb.h11
-rw-r--r--include/linux/hugetlb_cgroup.h4
-rw-r--r--include/linux/hugetlb_inline.h6
-rw-r--r--include/linux/hyperv.h170
-rw-r--r--include/linux/i2c-mux.h55
-rw-r--r--include/linux/i2c.h50
-rw-r--r--include/linux/i2c/sx150x.h82
-rw-r--r--include/linux/ieee80211.h24
-rw-r--r--include/linux/ieee802154.h45
-rw-r--r--include/linux/iio/buffer.h2
-rw-r--r--include/linux/iio/common/st_sensors.h9
-rw-r--r--include/linux/iio/consumer.h53
-rw-r--r--include/linux/iio/iio.h33
-rw-r--r--include/linux/iio/imu/adis.h1
-rw-r--r--include/linux/iio/magnetometer/ak8975.h16
-rw-r--r--include/linux/ima.h6
-rw-r--r--include/linux/io-64-nonatomic-hi-lo.h25
-rw-r--r--include/linux/io-64-nonatomic-lo-hi.h25
-rw-r--r--include/linux/iommu.h6
-rw-r--r--include/linux/ioport.h4
-rw-r--r--include/linux/iova.h23
-rw-r--r--include/linux/ipv6.h20
-rw-r--r--include/linux/irq.h4
-rw-r--r--include/linux/irqbypass.h4
-rw-r--r--include/linux/irqchip/arm-gic-common.h34
-rw-r--r--include/linux/irqchip/arm-gic-v3.h8
-rw-r--r--include/linux/irqchip/arm-gic.h2
-rw-r--r--include/linux/irqchip/irq-partition-percpu.h59
-rw-r--r--include/linux/irqchip/mips-gic.h17
-rw-r--r--include/linux/irqdesc.h1
-rw-r--r--include/linux/irqdomain.h20
-rw-r--r--include/linux/isa.h32
-rw-r--r--include/linux/iscsi_boot_sysfs.h13
-rw-r--r--include/linux/jbd2.h16
-rw-r--r--include/linux/kasan-checks.h12
-rw-r--r--include/linux/kasan.h13
-rw-r--r--include/linux/kernel.h11
-rw-r--r--include/linux/kernfs.h3
-rw-r--r--include/linux/kexec.h4
-rw-r--r--include/linux/key-type.h1
-rw-r--r--include/linux/key.h44
-rw-r--r--include/linux/kvm_host.h46
-rw-r--r--include/linux/leds.h8
-rw-r--r--include/linux/libata.h35
-rw-r--r--include/linux/libnvdimm.h7
-rw-r--r--include/linux/lightnvm.h48
-rw-r--r--include/linux/livepatch.h26
-rw-r--r--include/linux/lockdep.h38
-rw-r--r--include/linux/lsm_hooks.h39
-rw-r--r--include/linux/mcb.h14
-rw-r--r--include/linux/mdio.h11
-rw-r--r--include/linux/memcontrol.h31
-rw-r--r--include/linux/memory_hotplug.h8
-rw-r--r--include/linux/mempolicy.h16
-rw-r--r--include/linux/mempool.h3
-rw-r--r--include/linux/mfd/as3722.h1
-rw-r--r--include/linux/mfd/axp20x.h59
-rw-r--r--include/linux/mfd/core.h8
-rw-r--r--include/linux/mfd/cros_ec.h6
-rw-r--r--include/linux/mfd/hi655x-pmic.h55
-rw-r--r--include/linux/mfd/max77620.h346
-rw-r--r--include/linux/mfd/samsung/core.h3
-rw-r--r--include/linux/mfd/samsung/s2mps11.h2
-rw-r--r--include/linux/mfd/syscon.h1
-rw-r--r--include/linux/mfd/syscon/exynos5-pmu.h3
-rw-r--r--include/linux/mfd/syscon/imx6q-iomuxc-gpr.h7
-rw-r--r--include/linux/mfd/tmio.h4
-rw-r--r--include/linux/mfd/twl6040.h1
-rw-r--r--include/linux/mfd/wm8400-private.h1
-rw-r--r--include/linux/mlx4/device.h4
-rw-r--r--include/linux/mlx5/cq.h5
-rw-r--r--include/linux/mlx5/device.h104
-rw-r--r--include/linux/mlx5/driver.h38
-rw-r--r--include/linux/mlx5/fs.h23
-rw-r--r--include/linux/mlx5/mlx5_ifc.h354
-rw-r--r--include/linux/mlx5/port.h25
-rw-r--r--include/linux/mlx5/qp.h6
-rw-r--r--include/linux/mm.h68
-rw-r--r--include/linux/mm_inline.h24
-rw-r--r--include/linux/mm_types.h18
-rw-r--r--include/linux/mmc/dw_mmc.h12
-rw-r--r--include/linux/mmc/host.h35
-rw-r--r--include/linux/mmc/sdio_ids.h1
-rw-r--r--include/linux/mmc/sh_mobile_sdhi.h10
-rw-r--r--include/linux/mmc/tmio.h71
-rw-r--r--include/linux/mmu_context.h7
-rw-r--r--include/linux/mmzone.h51
-rw-r--r--include/linux/module.h25
-rw-r--r--include/linux/mtd/fsmc.h18
-rw-r--r--include/linux/mtd/map.h19
-rw-r--r--include/linux/mtd/mtd.h75
-rw-r--r--include/linux/mtd/nand.h28
-rw-r--r--include/linux/mtd/onenand.h2
-rw-r--r--include/linux/mtd/sharpsl.h2
-rw-r--r--include/linux/mtd/spi-nor.h1
-rw-r--r--include/linux/namei.h2
-rw-r--r--include/linux/nd.h11
-rw-r--r--include/linux/net.h3
-rw-r--r--include/linux/netdev_features.h29
-rw-r--r--include/linux/netdevice.h88
-rw-r--r--include/linux/netfilter/ipset/ip_set.h9
-rw-r--r--include/linux/netfilter/x_tables.h18
-rw-r--r--include/linux/nfs4.h28
-rw-r--r--include/linux/nfs_fs.h16
-rw-r--r--include/linux/nfs_fs_sb.h1
-rw-r--r--include/linux/nfs_xdr.h34
-rw-r--r--include/linux/nilfs2_fs.h79
-rw-r--r--include/linux/nl802154.h2
-rw-r--r--include/linux/nodemask.h11
-rw-r--r--include/linux/nvme.h4
-rw-r--r--include/linux/nvmem-provider.h10
-rw-r--r--include/linux/of.h65
-rw-r--r--include/linux/of_address.h9
-rw-r--r--include/linux/of_fdt.h5
-rw-r--r--include/linux/of_graph.h1
-rw-r--r--include/linux/of_iommu.h8
-rw-r--r--include/linux/of_mtd.h50
-rw-r--r--include/linux/omap-gpmc.h172
-rw-r--r--include/linux/omap-mailbox.h2
-rw-r--r--include/linux/oom.h32
-rw-r--r--include/linux/padata.h5
-rw-r--r--include/linux/page-flags.h25
-rw-r--r--include/linux/page_ref.h26
-rw-r--r--include/linux/pagemap.h32
-rw-r--r--include/linux/pci.h18
-rw-r--r--include/linux/pci_ids.h18
-rw-r--r--include/linux/pcieport_if.h2
-rw-r--r--include/linux/percpu.h3
-rw-r--r--include/linux/perf/arm_pmu.h2
-rw-r--r--include/linux/perf_event.h175
-rw-r--r--include/linux/phy.h8
-rw-r--r--include/linux/phy/phy.h31
-rw-r--r--include/linux/phy/tegra/xusb.h30
-rw-r--r--include/linux/pinctrl/pinctrl.h6
-rw-r--r--include/linux/platform_data/at24.h2
-rw-r--r--include/linux/platform_data/dma-dw.h12
-rw-r--r--include/linux/platform_data/gpio-dwapb.h3
-rw-r--r--include/linux/platform_data/gpmc-omap.h172
-rw-r--r--include/linux/platform_data/invensense_mpu6050.h5
-rw-r--r--include/linux/platform_data/mailbox-omap.h58
-rw-r--r--include/linux/platform_data/media/ir-rx51.h1
-rw-r--r--include/linux/platform_data/mtd-nand-omap2.h12
-rw-r--r--include/linux/platform_data/pwm_omap_dmtimer.h21
-rw-r--r--include/linux/platform_data/st_sensors_pdata.h2
-rw-r--r--include/linux/platform_device.h6
-rw-r--r--include/linux/pm.h2
-rw-r--r--include/linux/pm_domain.h6
-rw-r--r--include/linux/pm_opp.h62
-rw-r--r--include/linux/pm_runtime.h6
-rw-r--r--include/linux/pnp.h2
-rw-r--r--include/linux/poll.h11
-rw-r--r--include/linux/posix_acl.h1
-rw-r--r--include/linux/printk.h14
-rw-r--r--include/linux/property.h15
-rw-r--r--include/linux/proportions.h137
-rw-r--r--include/linux/psci.h2
-rw-r--r--include/linux/pwm.h338
-rw-r--r--include/linux/qed/common_hsi.h62
-rw-r--r--include/linux/qed/qed_eth_if.h22
-rw-r--r--include/linux/qed/qed_if.h117
-rw-r--r--include/linux/qed/qed_iov_if.h34
-rw-r--r--include/linux/radix-tree.h180
-rw-r--r--include/linux/random.h1
-rw-r--r--include/linux/rculist.h36
-rw-r--r--include/linux/rcupdate.h30
-rw-r--r--include/linux/rcutiny.h16
-rw-r--r--include/linux/rcutree.h2
-rw-r--r--include/linux/regulator/act8865.h2
-rw-r--r--include/linux/regulator/driver.h7
-rw-r--r--include/linux/regulator/machine.h1
-rw-r--r--include/linux/regulator/max8973-regulator.h5
-rw-r--r--include/linux/remoteproc.h4
-rw-r--r--include/linux/reset-controller.h2
-rw-r--r--include/linux/reset.h194
-rw-r--r--include/linux/rhashtable.h3
-rw-r--r--include/linux/rpmsg.h18
-rw-r--r--include/linux/rwsem-spinlock.h2
-rw-r--r--include/linux/rwsem.h5
-rw-r--r--include/linux/scatterlist.h25
-rw-r--r--include/linux/sched.h186
-rw-r--r--include/linux/sctp.h67
-rw-r--r--include/linux/security.h78
-rw-r--r--include/linux/selection.h8
-rw-r--r--include/linux/seqlock.h4
-rw-r--r--include/linux/serial_8250.h2
-rw-r--r--include/linux/serial_core.h3
-rw-r--r--include/linux/signal.h35
-rw-r--r--include/linux/skbuff.h56
-rw-r--r--include/linux/slab.h16
-rw-r--r--include/linux/slab_def.h4
-rw-r--r--include/linux/slub_def.h16
-rw-r--r--include/linux/soc/qcom/smd.h61
-rw-r--r--include/linux/soc/qcom/smem_state.h35
-rw-r--r--include/linux/soc/renesas/rcar-sysc.h (renamed from arch/arm/mach-shmobile/pm-rcar.h)9
-rw-r--r--include/linux/socket.h4
-rw-r--r--include/linux/spi/spi.h6
-rw-r--r--include/linux/stm.h3
-rw-r--r--include/linux/stmmac.h2
-rw-r--r--include/linux/string.h2
-rw-r--r--include/linux/string_helpers.h6
-rw-r--r--include/linux/stringhash.h76
-rw-r--r--include/linux/sunrpc/auth.h26
-rw-r--r--include/linux/sunrpc/clnt.h1
-rw-r--r--include/linux/sunrpc/msg_prot.h4
-rw-r--r--include/linux/sunrpc/svc_rdma.h2
-rw-r--r--include/linux/sunrpc/svcauth.h40
-rw-r--r--include/linux/sunrpc/xprt.h1
-rw-r--r--include/linux/sunrpc/xprtrdma.h4
-rw-r--r--include/linux/swap.h7
-rw-r--r--include/linux/sync_file.h57
-rw-r--r--include/linux/syscalls.h8
-rw-r--r--include/linux/thermal.h1
-rw-r--r--include/linux/time64.h17
-rw-r--r--include/linux/timekeeping.h17
-rw-r--r--include/linux/trace_events.h134
-rw-r--r--include/linux/tty.h93
-rw-r--r--include/linux/types.h1
-rw-r--r--include/linux/u64_stats_sync.h14
-rw-r--r--include/linux/udp.h16
-rw-r--r--include/linux/uio.h1
-rw-r--r--include/linux/usb.h11
-rw-r--r--include/linux/usb/gadget.h4
-rw-r--r--include/linux/usb/hcd.h1
-rw-r--r--include/linux/usb/otg-fsm.h91
-rw-r--r--include/linux/uuid.h21
-rw-r--r--include/linux/verification.h49
-rw-r--r--include/linux/verify_pefile.h22
-rw-r--r--include/linux/vmalloc.h3
-rw-r--r--include/linux/vmstat.h6
-rw-r--r--include/linux/xattr.h12
-rw-r--r--include/linux/zsmalloc.h4
-rw-r--r--include/media/media-device.h13
-rw-r--r--include/media/media-entity.h81
-rw-r--r--include/media/rc-core.h18
-rw-r--r--include/media/v4l2-dev.h3
-rw-r--r--include/media/v4l2-device.h55
-rw-r--r--include/media/v4l2-rect.h173
-rw-r--r--include/media/v4l2-subdev.h8
-rw-r--r--include/media/v4l2-tpg-colors.h (renamed from drivers/media/platform/vivid/vivid-tpg-colors.h)6
-rw-r--r--include/media/v4l2-tpg.h (renamed from drivers/media/platform/vivid/vivid-tpg.h)9
-rw-r--r--include/media/vsp1.h23
-rw-r--r--include/misc/cxl.h8
-rw-r--r--include/net/6lowpan.h37
-rw-r--r--include/net/act_api.h14
-rw-r--r--include/net/af_rxrpc.h4
-rw-r--r--include/net/bluetooth/hci.h2
-rw-r--r--include/net/cfg80211.h158
-rw-r--r--include/net/codel.h217
-rw-r--r--include/net/codel_impl.h255
-rw-r--r--include/net/codel_qdisc.h73
-rw-r--r--include/net/devlink.h61
-rw-r--r--include/net/dsa.h48
-rw-r--r--include/net/dst.h5
-rw-r--r--include/net/fou.h10
-rw-r--r--include/net/fq.h95
-rw-r--r--include/net/fq_impl.h277
-rw-r--r--include/net/gen_stats.h6
-rw-r--r--include/net/geneve.h6
-rw-r--r--include/net/gre.h104
-rw-r--r--include/net/gtp.h34
-rw-r--r--include/net/icmp.h4
-rw-r--r--include/net/inet6_hashtables.h12
-rw-r--r--include/net/inet_common.h5
-rw-r--r--include/net/inet_hashtables.h47
-rw-r--r--include/net/ip.h16
-rw-r--r--include/net/ip6_tunnel.h69
-rw-r--r--include/net/ip_tunnels.h97
-rw-r--r--include/net/ip_vs.h17
-rw-r--r--include/net/ipv6.h68
-rw-r--r--include/net/l3mdev.h68
-rw-r--r--include/net/mac80211.h94
-rw-r--r--include/net/mac802154.h10
-rw-r--r--include/net/netfilter/nf_conntrack.h2
-rw-r--r--include/net/netfilter/nf_conntrack_core.h1
-rw-r--r--include/net/netfilter/nf_conntrack_ecache.h108
-rw-r--r--include/net/netfilter/nf_conntrack_expect.h1
-rw-r--r--include/net/netfilter/nf_conntrack_l4proto.h3
-rw-r--r--include/net/netfilter/nf_conntrack_labels.h5
-rw-r--r--include/net/netfilter/nf_tables.h2
-rw-r--r--include/net/netlink.h121
-rw-r--r--include/net/netns/conntrack.h10
-rw-r--r--include/net/netns/ipv4.h3
-rw-r--r--include/net/netns/xfrm.h1
-rw-r--r--include/net/nfc/nci_core.h17
-rw-r--r--include/net/nl802154.h6
-rw-r--r--include/net/pkt_cls.h21
-rw-r--r--include/net/protocol.h3
-rw-r--r--include/net/request_sock.h31
-rw-r--r--include/net/route.h7
-rw-r--r--include/net/rtnetlink.h7
-rw-r--r--include/net/sch_generic.h20
-rw-r--r--include/net/sctp/sctp.h38
-rw-r--r--include/net/sctp/structs.h13
-rw-r--r--include/net/snmp.h40
-rw-r--r--include/net/sock.h170
-rw-r--r--include/net/switchdev.h2
-rw-r--r--include/net/tc_act/tc_mirred.h15
-rw-r--r--include/net/tcp.h81
-rw-r--r--include/net/transp_v6.h4
-rw-r--r--include/net/udp.h56
-rw-r--r--include/net/udp_tunnel.h19
-rw-r--r--include/net/vxlan.h77
-rw-r--r--include/net/xfrm.h4
-rw-r--r--include/rdma/ib_mad.h60
-rw-r--r--include/rdma/ib_pack.h5
-rw-r--r--include/rdma/ib_sa.h12
-rw-r--r--include/rdma/ib_verbs.h187
-rw-r--r--include/rdma/mr_pool.h25
-rw-r--r--include/rdma/rdma_vt.h14
-rw-r--r--include/rdma/rdmavt_qp.h10
-rw-r--r--include/rdma/rw.h88
-rw-r--r--include/rxrpc/packet.h2
-rw-r--r--include/scsi/scsi.h19
-rw-r--r--include/scsi/scsi_common.h1
-rw-r--r--include/scsi/scsi_device.h14
-rw-r--r--include/scsi/scsi_eh.h1
-rw-r--r--include/scsi/scsi_host.h2
-rw-r--r--include/scsi/scsi_proto.h9
-rw-r--r--include/soc/at91/atmel-sfr.h18
-rw-r--r--include/soc/nps/common.h166
-rw-r--r--include/soc/tegra/fuse.h1
-rw-r--r--include/soc/tegra/pmc.h36
-rw-r--r--include/sound/dmaengine_pcm.h12
-rw-r--r--include/sound/hda_chmap.h2
-rw-r--r--include/sound/hda_i915.h10
-rw-r--r--include/sound/hdaudio_ext.h13
-rw-r--r--include/sound/hdmi-codec.h100
-rw-r--r--include/sound/pcm_iec958.h2
-rw-r--r--include/sound/soc-dapm.h3
-rw-r--r--include/sound/soc.h5
-rw-r--r--include/target/iscsi/iscsi_target_core.h27
-rw-r--r--include/target/iscsi/iscsi_transport.h41
-rw-r--r--include/target/target_core_backend.h1
-rw-r--r--include/target/target_core_base.h2
-rw-r--r--include/target/target_core_fabric.h10
-rw-r--r--include/trace/events/compaction.h3
-rw-r--r--include/trace/events/ext4.h6
-rw-r--r--include/trace/events/f2fs.h24
-rw-r--r--include/trace/events/kvm.h17
-rw-r--r--include/trace/events/libata.h10
-rw-r--r--include/trace/events/mmc.h182
-rw-r--r--include/trace/events/rcu.h79
-rw-r--r--include/trace/events/scsi.h6
-rw-r--r--include/trace/perf.h16
-rw-r--r--include/trace/trace_events.h3
-rw-r--r--include/uapi/asm-generic/unistd.h3
-rw-r--r--include/uapi/linux/Kbuild1
-rw-r--r--include/uapi/linux/bpf.h7
-rw-r--r--include/uapi/linux/btrfs.h188
-rw-r--r--include/uapi/linux/btrfs_tree.h966
-rw-r--r--include/uapi/linux/coresight-stm.h21
-rw-r--r--include/uapi/linux/devlink.h63
-rw-r--r--include/uapi/linux/elf.h10
-rw-r--r--include/uapi/linux/fib_rules.h1
-rw-r--r--include/uapi/linux/fs.h3
-rw-r--r--include/uapi/linux/gen_stats.h1
-rw-r--r--include/uapi/linux/gtp.h33
-rw-r--r--include/uapi/linux/i2c.h13
-rw-r--r--include/uapi/linux/if.h28
-rw-r--r--include/uapi/linux/if_bridge.h18
-rw-r--r--include/uapi/linux/if_ether.h1
-rw-r--r--include/uapi/linux/if_link.h62
-rw-r--r--include/uapi/linux/if_macsec.h10
-rw-r--r--include/uapi/linux/iio/types.h2
-rw-r--r--include/uapi/linux/ila.h8
-rw-r--r--include/uapi/linux/inet_diag.h6
-rw-r--r--include/uapi/linux/ip_vs.h1
-rw-r--r--include/uapi/linux/keyctl.h10
-rw-r--r--include/uapi/linux/kvm.h1
-rw-r--r--include/uapi/linux/l2tp.h2
-rw-r--r--include/uapi/linux/libc-compat.h44
-rw-r--r--include/uapi/linux/lwtunnel.h2
-rw-r--r--include/uapi/linux/magic.h2
-rw-r--r--include/uapi/linux/ndctl.h80
-rw-r--r--include/uapi/linux/neighbour.h2
-rw-r--r--include/uapi/linux/net_tstamp.h10
-rw-r--r--include/uapi/linux/netfilter/ipset/ip_set.h1
-rw-r--r--include/uapi/linux/netfilter/nf_tables.h9
-rw-r--r--include/uapi/linux/netfilter/nfnetlink_acct.h1
-rw-r--r--include/uapi/linux/netfilter/nfnetlink_conntrack.h3
-rw-r--r--include/uapi/linux/netfilter/nfnetlink_queue.h10
-rw-r--r--include/uapi/linux/nl80211.h110
-rw-r--r--include/uapi/linux/nvme_ioctl.h1
-rw-r--r--include/uapi/linux/openvswitch.h4
-rw-r--r--include/uapi/linux/pci_regs.h20
-rw-r--r--include/uapi/linux/perf_event.h5
-rw-r--r--include/uapi/linux/pkt_cls.h6
-rw-r--r--include/uapi/linux/pkt_sched.h7
-rw-r--r--include/uapi/linux/qrtr.h12
-rw-r--r--include/uapi/linux/quota.h1
-rw-r--r--include/uapi/linux/rtnetlink.h7
-rw-r--r--include/uapi/linux/serial_core.h6
-rw-r--r--include/uapi/linux/signal.h5
-rw-r--r--include/uapi/linux/sock_diag.h1
-rw-r--r--include/uapi/linux/sync_file.h (renamed from drivers/staging/android/uapi/sync.h)44
-rw-r--r--include/uapi/linux/tc_act/Kbuild1
-rw-r--r--include/uapi/linux/tc_act/tc_bpf.h1
-rw-r--r--include/uapi/linux/tc_act/tc_connmark.h1
-rw-r--r--include/uapi/linux/tc_act/tc_csum.h1
-rw-r--r--include/uapi/linux/tc_act/tc_defact.h1
-rw-r--r--include/uapi/linux/tc_act/tc_gact.h1
-rw-r--r--include/uapi/linux/tc_act/tc_ife.h1
-rw-r--r--include/uapi/linux/tc_act/tc_ipt.h1
-rw-r--r--include/uapi/linux/tc_act/tc_mirred.h1
-rw-r--r--include/uapi/linux/tc_act/tc_nat.h1
-rw-r--r--include/uapi/linux/tc_act/tc_pedit.h1
-rw-r--r--include/uapi/linux/tc_act/tc_skbedit.h1
-rw-r--r--include/uapi/linux/tc_act/tc_vlan.h1
-rw-r--r--include/uapi/linux/tcp_metrics.h1
-rw-r--r--include/uapi/linux/tty_flags.h13
-rw-r--r--include/uapi/linux/udp.h3
-rw-r--r--include/uapi/linux/usb/ch9.h121
-rw-r--r--include/uapi/linux/uuid.h4
-rw-r--r--include/uapi/linux/videodev2.h38
-rw-r--r--include/uapi/linux/vt.h1
-rw-r--r--include/uapi/linux/xfrm.h1
-rw-r--r--include/uapi/mtd/mtd-abi.h2
-rw-r--r--include/uapi/rdma/hfi/hfi1_user.h80
-rw-r--r--include/uapi/rdma/ib_user_verbs.h1
-rw-r--r--include/uapi/rdma/rdma_netlink.h10
-rw-r--r--include/uapi/sound/asoc.h44
-rw-r--r--include/uapi/sound/asound.h2
-rw-r--r--include/video/imx-ipu-v3.h2
-rw-r--r--include/video/sh_mipi_dsi.h59
-rw-r--r--init/Kconfig65
-rw-r--r--init/main.c10
-rw-r--r--ipc/shm.c9
-rw-r--r--kernel/Makefile4
-rw-r--r--kernel/audit.c30
-rw-r--r--kernel/audit_tree.c12
-rw-r--r--kernel/audit_watch.c2
-rw-r--r--kernel/auditsc.c8
-rw-r--r--kernel/bpf/core.c308
-rw-r--r--kernel/bpf/helpers.c17
-rw-r--r--kernel/bpf/inode.c39
-rw-r--r--kernel/bpf/stackmap.c13
-rw-r--r--kernel/bpf/syscall.c2
-rw-r--r--kernel/bpf/verifier.c741
-rw-r--r--kernel/cgroup.c63
-rw-r--r--kernel/cpu.c32
-rw-r--r--kernel/cpuset.c22
-rw-r--r--kernel/events/callchain.c65
-rw-r--r--kernel/events/core.c958
-rw-r--r--kernel/events/internal.h10
-rw-r--r--kernel/events/ring_buffer.c128
-rw-r--r--kernel/events/uprobes.c10
-rw-r--r--kernel/exit.c34
-rw-r--r--kernel/fork.c79
-rw-r--r--kernel/futex.c2
-rw-r--r--kernel/gcov/Kconfig1
-rw-r--r--kernel/irq/ipi.c45
-rw-r--r--kernel/irq/irqdesc.c26
-rw-r--r--kernel/irq/irqdomain.c27
-rw-r--r--kernel/irq/manage.c2
-rw-r--r--kernel/kexec.c109
-rw-r--r--kernel/kexec_core.c12
-rw-r--r--kernel/kexec_file.c8
-rw-r--r--kernel/livepatch/core.c191
-rw-r--r--kernel/locking/lockdep.c73
-rw-r--r--kernel/locking/locktorture.c25
-rw-r--r--kernel/locking/percpu-rwsem.c1
-rw-r--r--kernel/locking/qspinlock_stat.h24
-rw-r--r--kernel/locking/rwsem-spinlock.c19
-rw-r--r--kernel/locking/rwsem-xadd.c38
-rw-r--r--kernel/locking/rwsem.c35
-rw-r--r--kernel/memremap.c11
-rw-r--r--kernel/module.c125
-rw-r--r--kernel/module_signing.c7
-rw-r--r--kernel/padata.c138
-rw-r--r--kernel/panic.c6
-rw-r--r--kernel/pid.c2
-rw-r--r--kernel/power/swap.c18
-rw-r--r--kernel/printk/Makefile1
-rw-r--r--kernel/printk/internal.h57
-rw-r--r--kernel/printk/nmi.c260
-rw-r--r--kernel/printk/printk.c31
-rw-r--r--kernel/rcu/Makefile1
-rw-r--r--kernel/rcu/rcuperf.c655
-rw-r--r--kernel/rcu/rcutorture.c29
-rw-r--r--kernel/rcu/tree.c302
-rw-r--r--kernel/rcu/tree.h20
-rw-r--r--kernel/rcu/tree_plugin.h37
-rw-r--r--kernel/rcu/tree_trace.c13
-rw-r--r--kernel/rcu/update.c30
-rw-r--r--kernel/sched/Makefile1
-rw-r--r--kernel/sched/clock.c48
-rw-r--r--kernel/sched/core.c757
-rw-r--r--kernel/sched/cpuacct.c147
-rw-r--r--kernel/sched/cpudeadline.c4
-rw-r--r--kernel/sched/cpufreq.c48
-rw-r--r--kernel/sched/cpufreq_schedutil.c532
-rw-r--r--kernel/sched/cpupri.c4
-rw-r--r--kernel/sched/deadline.c56
-rw-r--r--kernel/sched/debug.c10
-rw-r--r--kernel/sched/fair.c506
-rw-r--r--kernel/sched/idle_task.c2
-rw-r--r--kernel/sched/loadavg.c11
-rw-r--r--kernel/sched/rt.c39
-rw-r--r--kernel/sched/sched.h148
-rw-r--r--kernel/sched/stop_task.c2
-rw-r--r--kernel/seccomp.c15
-rw-r--r--kernel/signal.c39
-rw-r--r--kernel/sys.c3
-rw-r--r--kernel/sysctl.c28
-rw-r--r--kernel/sysctl_binary.c23
-rw-r--r--kernel/taskstats.c37
-rw-r--r--kernel/time/hrtimer.c23
-rw-r--r--kernel/time/tick-sched.c13
-rw-r--r--kernel/time/time.c29
-rw-r--r--kernel/time/timer.c63
-rw-r--r--kernel/torture.c4
-rw-r--r--kernel/trace/Kconfig26
-rw-r--r--kernel/trace/Makefile2
-rw-r--r--kernel/trace/blktrace.c2
-rw-r--r--kernel/trace/bpf_trace.c129
-rw-r--r--kernel/trace/ftrace.c31
-rw-r--r--kernel/trace/power-traces.c1
-rw-r--r--kernel/trace/ring_buffer.c35
-rw-r--r--kernel/trace/trace.c275
-rw-r--r--kernel/trace/trace.h190
-rw-r--r--kernel/trace/trace_event_perf.c43
-rw-r--r--kernel/trace/trace_events.c347
-rw-r--r--kernel/trace/trace_events_filter.c77
-rw-r--r--kernel/trace/trace_events_hist.c1755
-rw-r--r--kernel/trace/trace_events_trigger.c215
-rw-r--r--kernel/trace/trace_kprobe.c10
-rw-r--r--kernel/trace/trace_syscalls.c13
-rw-r--r--kernel/trace/trace_uprobe.c5
-rw-r--r--kernel/trace/tracing_map.c1062
-rw-r--r--kernel/trace/tracing_map.h283
-rw-r--r--kernel/workqueue.c63
-rw-r--r--lib/Kconfig10
-rw-r--r--lib/Kconfig.debug45
-rw-r--r--lib/Kconfig.kgdb2
-rw-r--r--lib/Makefile6
-rw-r--r--lib/asn1_decoder.c19
-rw-r--r--lib/debugobjects.c92
-rw-r--r--lib/dma-debug.c2
-rw-r--r--lib/gcd.c77
-rw-r--r--lib/iov_iter.c123
-rw-r--r--lib/mpi/mpicoder.c122
-rw-r--r--lib/nlattr.c103
-rw-r--r--lib/nmi_backtrace.c89
-rw-r--r--lib/nodemask.c30
-rw-r--r--lib/percpu_counter.c6
-rw-r--r--lib/proportions.c407
-rw-r--r--lib/radix-tree.c933
-rw-r--r--lib/rhashtable.c6
-rw-r--r--lib/sg_pool.c172
-rw-r--r--lib/string_helpers.c92
-rw-r--r--lib/strncpy_from_user.c2
-rw-r--r--lib/test_bpf.c5
-rw-r--r--lib/test_hash.c250
-rw-r--r--lib/test_kasan.c69
-rw-r--r--lib/test_rhashtable.c2
-rw-r--r--lib/uuid.c91
-rw-r--r--lib/vsprintf.c21
-rw-r--r--mm/Kconfig37
-rw-r--r--mm/Makefile1
-rw-r--r--mm/backing-dev.c20
-rw-r--r--mm/cma.c7
-rw-r--r--mm/compaction.c262
-rw-r--r--mm/filemap.c95
-rw-r--r--mm/highmem.c12
-rw-r--r--mm/huge_memory.c113
-rw-r--r--mm/hugetlb.c38
-rw-r--r--mm/hugetlb_cgroup.c35
-rw-r--r--mm/internal.h12
-rw-r--r--mm/kasan/Makefile1
-rw-r--r--mm/kasan/kasan.c133
-rw-r--r--mm/kasan/kasan.h22
-rw-r--r--mm/kasan/quarantine.c291
-rw-r--r--mm/kasan/report.c1
-rw-r--r--mm/ksm.c15
-rw-r--r--mm/maccess.c3
-rw-r--r--mm/madvise.c8
-rw-r--r--mm/memblock.c30
-rw-r--r--mm/memcontrol.c84
-rw-r--r--mm/memory-failure.c72
-rw-r--r--mm/memory.c98
-rw-r--r--mm/memory_hotplug.c25
-rw-r--r--mm/mempolicy.c63
-rw-r--r--mm/mempool.c2
-rw-r--r--mm/migrate.c7
-rw-r--r--mm/mlock.c16
-rw-r--r--mm/mmap.c57
-rw-r--r--mm/mmu_context.c2
-rw-r--r--mm/mmzone.c2
-rw-r--r--mm/mprotect.c3
-rw-r--r--mm/mremap.c50
-rw-r--r--mm/nommu.c2
-rw-r--r--mm/oom_kill.c172
-rw-r--r--mm/page-writeback.c12
-rw-r--r--mm/page_alloc.c1263
-rw-r--r--mm/page_ext.c4
-rw-r--r--mm/page_io.c2
-rw-r--r--mm/page_isolation.c10
-rw-r--r--mm/page_owner.c5
-rw-r--r--mm/page_poison.c8
-rw-r--r--mm/rmap.c6
-rw-r--r--mm/shmem.c146
-rw-r--r--mm/slab.c785
-rw-r--r--mm/slab.h2
-rw-r--r--mm/slab_common.c2
-rw-r--r--mm/slub.c21
-rw-r--r--mm/swap.c5
-rw-r--r--mm/swap_state.c3
-rw-r--r--mm/swapfile.c13
-rw-r--r--mm/truncate.c62
-rw-r--r--mm/util.c26
-rw-r--r--mm/vmalloc.c39
-rw-r--r--mm/vmscan.c164
-rw-r--r--mm/vmstat.c163
-rw-r--r--mm/z3fold.c792
-rw-r--r--mm/zsmalloc.c215
-rw-r--r--mm/zswap.c12
-rw-r--r--net/6lowpan/6lowpan_i.h9
-rw-r--r--net/6lowpan/core.c8
-rw-r--r--net/6lowpan/debugfs.c22
-rw-r--r--net/6lowpan/iphc.c122
-rw-r--r--net/6lowpan/nhc_udp.c2
-rw-r--r--net/9p/client.c8
-rw-r--r--net/Kconfig22
-rw-r--r--net/Makefile1
-rw-r--r--net/atm/lec.c4
-rw-r--r--net/batman-adv/bat_iv_ogm.c178
-rw-r--r--net/batman-adv/bat_v.c75
-rw-r--r--net/batman-adv/bat_v_elp.c31
-rw-r--r--net/batman-adv/bat_v_elp.h2
-rw-r--r--net/batman-adv/bat_v_ogm.c219
-rw-r--r--net/batman-adv/bitarray.c16
-rw-r--r--net/batman-adv/bitarray.h15
-rw-r--r--net/batman-adv/bridge_loop_avoidance.c333
-rw-r--r--net/batman-adv/bridge_loop_avoidance.h43
-rw-r--r--net/batman-adv/debugfs.c21
-rw-r--r--net/batman-adv/distributed-arp-table.c14
-rw-r--r--net/batman-adv/fragmentation.c12
-rw-r--r--net/batman-adv/gateway_client.c12
-rw-r--r--net/batman-adv/hard-interface.c34
-rw-r--r--net/batman-adv/hard-interface.h3
-rw-r--r--net/batman-adv/hash.h6
-rw-r--r--net/batman-adv/icmp_socket.c24
-rw-r--r--net/batman-adv/main.c26
-rw-r--r--net/batman-adv/main.h11
-rw-r--r--net/batman-adv/multicast.c11
-rw-r--r--net/batman-adv/network-coding.c45
-rw-r--r--net/batman-adv/originator.c49
-rw-r--r--net/batman-adv/originator.h2
-rw-r--r--net/batman-adv/packet.h3
-rw-r--r--net/batman-adv/routing.c54
-rw-r--r--net/batman-adv/routing.h6
-rw-r--r--net/batman-adv/send.c10
-rw-r--r--net/batman-adv/soft-interface.c52
-rw-r--r--net/batman-adv/soft-interface.h10
-rw-r--r--net/batman-adv/sysfs.c9
-rw-r--r--net/batman-adv/translation-table.c46
-rw-r--r--net/batman-adv/types.h8
-rw-r--r--net/bluetooth/6lowpan.c93
-rw-r--r--net/bluetooth/af_bluetooth.c2
-rw-r--r--net/bluetooth/bnep/netdev.c2
-rw-r--r--net/bluetooth/hci_core.c4
-rw-r--r--net/bluetooth/hci_event.c13
-rw-r--r--net/bluetooth/hci_request.c6
-rw-r--r--net/bluetooth/l2cap_sock.c2
-rw-r--r--net/bridge/br_ioctl.c45
-rw-r--r--net/bridge/br_multicast.c12
-rw-r--r--net/bridge/br_netfilter_hooks.c6
-rw-r--r--net/bridge/br_netfilter_ipv6.c10
-rw-r--r--net/bridge/br_netlink.c140
-rw-r--r--net/bridge/br_private.h18
-rw-r--r--net/bridge/br_sysfs_br.c101
-rw-r--r--net/bridge/br_sysfs_if.c5
-rw-r--r--net/bridge/br_vlan.c135
-rw-r--r--net/bridge/netfilter/nf_tables_bridge.c47
-rw-r--r--net/can/raw.c2
-rw-r--r--net/ceph/ceph_common.c2
-rw-r--r--net/ceph/ceph_strings.c16
-rw-r--r--net/ceph/debugfs.c147
-rw-r--r--net/ceph/mon_client.c393
-rw-r--r--net/ceph/osd_client.c4032
-rw-r--r--net/ceph/osdmap.c651
-rw-r--r--net/core/datagram.c9
-rw-r--r--net/core/dev.c112
-rw-r--r--net/core/devlink.c1050
-rw-r--r--net/core/ethtool.c29
-rw-r--r--net/core/fib_rules.c4
-rw-r--r--net/core/filter.c185
-rw-r--r--net/core/flow.c14
-rw-r--r--net/core/gen_stats.c35
-rw-r--r--net/core/neighbour.c22
-rw-r--r--net/core/net-procfs.c3
-rw-r--r--net/core/pktgen.c1
-rw-r--r--net/core/rtnetlink.c282
-rw-r--r--net/core/skbuff.c272
-rw-r--r--net/core/sock.c150
-rw-r--r--net/core/sock_diag.c3
-rw-r--r--net/core/sysctl_net_core.c9
-rw-r--r--net/dccp/dccp.h6
-rw-r--r--net/dccp/input.c2
-rw-r--r--net/dccp/ipv4.c33
-rw-r--r--net/dccp/ipv6.c33
-rw-r--r--net/dccp/minisocks.c2
-rw-r--r--net/dccp/options.c2
-rw-r--r--net/dccp/timer.c8
-rw-r--r--net/dns_resolver/dns_key.c2
-rw-r--r--net/dsa/dsa.c62
-rw-r--r--net/dsa/dsa_priv.h5
-rw-r--r--net/dsa/slave.c168
-rw-r--r--net/hsr/Kconfig5
-rw-r--r--net/hsr/hsr_device.c91
-rw-r--r--net/hsr/hsr_device.h2
-rw-r--r--net/hsr/hsr_forward.c43
-rw-r--r--net/hsr/hsr_framereg.c30
-rw-r--r--net/hsr/hsr_main.h13
-rw-r--r--net/hsr/hsr_netlink.c17
-rw-r--r--net/hsr/hsr_slave.c4
-rw-r--r--net/ieee802154/6lowpan/6lowpan_i.h14
-rw-r--r--net/ieee802154/6lowpan/core.c6
-rw-r--r--net/ieee802154/6lowpan/tx.c14
-rw-r--r--net/ieee802154/nl-mac.c17
-rw-r--r--net/ieee802154/nl802154.c26
-rw-r--r--net/ipv4/af_inet.c101
-rw-r--r--net/ipv4/arp.c2
-rw-r--r--net/ipv4/cipso_ipv4.c3
-rw-r--r--net/ipv4/fib_frontend.c1
-rw-r--r--net/ipv4/fib_semantics.c36
-rw-r--r--net/ipv4/fou.c202
-rw-r--r--net/ipv4/gre_demux.c61
-rw-r--r--net/ipv4/gre_offload.c49
-rw-r--r--net/ipv4/icmp.c18
-rw-r--r--net/ipv4/inet_connection_sock.c9
-rw-r--r--net/ipv4/inet_diag.c98
-rw-r--r--net/ipv4/inet_hashtables.c86
-rw-r--r--net/ipv4/inet_timewait_sock.c10
-rw-r--r--net/ipv4/ip_forward.c6
-rw-r--r--net/ipv4/ip_fragment.c14
-rw-r--r--net/ipv4/ip_gre.c266
-rw-r--r--net/ipv4/ip_input.c42
-rw-r--r--net/ipv4/ip_sockglue.c34
-rw-r--r--net/ipv4/ip_tunnel.c45
-rw-r--r--net/ipv4/ip_tunnel_core.c50
-rw-r--r--net/ipv4/ip_vti.c18
-rw-r--r--net/ipv4/ipip.c7
-rw-r--r--net/ipv4/ipmr.c4
-rw-r--r--net/ipv4/netfilter/arp_tables.c522
-rw-r--r--net/ipv4/netfilter/ip_tables.c573
-rw-r--r--net/ipv4/netfilter/nf_conntrack_l3proto_ipv4.c2
-rw-r--r--net/ipv4/netfilter/nf_conntrack_l3proto_ipv4_compat.c47
-rw-r--r--net/ipv4/ping.c7
-rw-r--r--net/ipv4/raw.c13
-rw-r--r--net/ipv4/route.c10
-rw-r--r--net/ipv4/syncookies.c4
-rw-r--r--net/ipv4/sysctl_net_ipv4.c11
-rw-r--r--net/ipv4/tcp.c65
-rw-r--r--net/ipv4/tcp_bic.c6
-rw-r--r--net/ipv4/tcp_cdg.c34
-rw-r--r--net/ipv4/tcp_cubic.c26
-rw-r--r--net/ipv4/tcp_fastopen.c14
-rw-r--r--net/ipv4/tcp_htcp.c10
-rw-r--r--net/ipv4/tcp_illinois.c21
-rw-r--r--net/ipv4/tcp_input.c249
-rw-r--r--net/ipv4/tcp_ipv4.c144
-rw-r--r--net/ipv4/tcp_lp.c6
-rw-r--r--net/ipv4/tcp_metrics.c6
-rw-r--r--net/ipv4/tcp_minisocks.c19
-rw-r--r--net/ipv4/tcp_offload.c43
-rw-r--r--net/ipv4/tcp_output.c171
-rw-r--r--net/ipv4/tcp_recovery.c4
-rw-r--r--net/ipv4/tcp_timer.c28
-rw-r--r--net/ipv4/tcp_vegas.c6
-rw-r--r--net/ipv4/tcp_vegas.h2
-rw-r--r--net/ipv4/tcp_veno.c7
-rw-r--r--net/ipv4/tcp_westwood.c7
-rw-r--r--net/ipv4/tcp_yeah.c7
-rw-r--r--net/ipv4/udp.c430
-rw-r--r--net/ipv4/udp_diag.c18
-rw-r--r--net/ipv4/udp_offload.c158
-rw-r--r--net/ipv4/udp_tunnel.c2
-rw-r--r--net/ipv6/Kconfig1
-rw-r--r--net/ipv6/Makefile6
-rw-r--r--net/ipv6/addrconf.c667
-rw-r--r--net/ipv6/af_inet6.c9
-rw-r--r--net/ipv6/datagram.c29
-rw-r--r--net/ipv6/exthdrs.c66
-rw-r--r--net/ipv6/fou6.c140
-rw-r--r--net/ipv6/icmp.c47
-rw-r--r--net/ipv6/ila/ila.h79
-rw-r--r--net/ipv6/ila/ila_common.c81
-rw-r--r--net/ipv6/ila/ila_lwt.c54
-rw-r--r--net/ipv6/ila/ila_xlat.c167
-rw-r--r--net/ipv6/inet6_hashtables.c64
-rw-r--r--net/ipv6/ip6_fib.c1
-rw-r--r--net/ipv6/ip6_flowlabel.c7
-rw-r--r--net/ipv6/ip6_gre.c525
-rw-r--r--net/ipv6/ip6_input.c72
-rw-r--r--net/ipv6/ip6_offload.c94
-rw-r--r--net/ipv6/ip6_offload.h3
-rw-r--r--net/ipv6/ip6_output.c79
-rw-r--r--net/ipv6/ip6_tunnel.c452
-rw-r--r--net/ipv6/ip6mr.c12
-rw-r--r--net/ipv6/ipv6_sockglue.c13
-rw-r--r--net/ipv6/netfilter/ip6_tables.c551
-rw-r--r--net/ipv6/netfilter/ip6t_SYNPROXY.c58
-rw-r--r--net/ipv6/netfilter/nf_reject_ipv6.c2
-rw-r--r--net/ipv6/ping.c13
-rw-r--r--net/ipv6/raw.c34
-rw-r--r--net/ipv6/reassembly.c32
-rw-r--r--net/ipv6/route.c44
-rw-r--r--net/ipv6/sit.c14
-rw-r--r--net/ipv6/syncookies.c4
-rw-r--r--net/ipv6/tcp_ipv6.c85
-rw-r--r--net/ipv6/udp.c360
-rw-r--r--net/ipv6/udp_offload.c24
-rw-r--r--net/irda/ircomm/ircomm_tty.c31
-rw-r--r--net/irda/ircomm/ircomm_tty_attach.c2
-rw-r--r--net/irda/ircomm/ircomm_tty_ioctl.c18
-rw-r--r--net/irda/irlan/irlan_eth.c2
-rw-r--r--net/kcm/kcmsock.c2
-rw-r--r--net/l2tp/l2tp_ip6.c31
-rw-r--r--net/l2tp/l2tp_netlink.c83
-rw-r--r--net/l3mdev/l3mdev.c63
-rw-r--r--net/llc/af_llc.c1
-rw-r--r--net/llc/llc_proc.c2
-rw-r--r--net/mac80211/agg-tx.c5
-rw-r--r--net/mac80211/cfg.c32
-rw-r--r--net/mac80211/debugfs.c3
-rw-r--r--net/mac80211/debugfs_netdev.c12
-rw-r--r--net/mac80211/debugfs_sta.c134
-rw-r--r--net/mac80211/driver-ops.h15
-rw-r--r--net/mac80211/ibss.c27
-rw-r--r--net/mac80211/ieee80211_i.h45
-rw-r--r--net/mac80211/iface.c4
-rw-r--r--net/mac80211/key.c1
-rw-r--r--net/mac80211/main.c10
-rw-r--r--net/mac80211/mesh.c48
-rw-r--r--net/mac80211/mesh.h71
-rw-r--r--net/mac80211/mesh_hwmp.c4
-rw-r--r--net/mac80211/mesh_pathtbl.c965
-rw-r--r--net/mac80211/mesh_plink.c23
-rw-r--r--net/mac80211/mlme.c35
-rw-r--r--net/mac80211/ocb.c2
-rw-r--r--net/mac80211/rate.c2
-rw-r--r--net/mac80211/rate.h4
-rw-r--r--net/mac80211/rc80211_minstrel.c6
-rw-r--r--net/mac80211/rc80211_minstrel_ht.c81
-rw-r--r--net/mac80211/rx.c555
-rw-r--r--net/mac80211/scan.c20
-rw-r--r--net/mac80211/spectmgmt.c4
-rw-r--r--net/mac80211/sta_info.c289
-rw-r--r--net/mac80211/sta_info.h130
-rw-r--r--net/mac80211/status.c4
-rw-r--r--net/mac80211/tdls.c18
-rw-r--r--net/mac80211/trace.h18
-rw-r--r--net/mac80211/tx.c211
-rw-r--r--net/mac80211/util.c29
-rw-r--r--net/mac80211/vht.c4
-rw-r--r--net/mac80211/wpa.c42
-rw-r--r--net/mpls/mpls_gso.c10
-rw-r--r--net/netfilter/ipvs/ip_vs_conn.c51
-rw-r--r--net/netfilter/ipvs/ip_vs_core.c162
-rw-r--r--net/netfilter/ipvs/ip_vs_ctl.c82
-rw-r--r--net/netfilter/ipvs/ip_vs_nfct.c4
-rw-r--r--net/netfilter/ipvs/ip_vs_pe_sip.c15
-rw-r--r--net/netfilter/ipvs/ip_vs_xmit.c23
-rw-r--r--net/netfilter/nf_conntrack_core.c430
-rw-r--r--net/netfilter/nf_conntrack_ecache.c84
-rw-r--r--net/netfilter/nf_conntrack_expect.c83
-rw-r--r--net/netfilter/nf_conntrack_helper.c12
-rw-r--r--net/netfilter/nf_conntrack_labels.c44
-rw-r--r--net/netfilter/nf_conntrack_netlink.c166
-rw-r--r--net/netfilter/nf_conntrack_proto_dccp.c4
-rw-r--r--net/netfilter/nf_conntrack_proto_sctp.c8
-rw-r--r--net/netfilter/nf_conntrack_proto_tcp.c8
-rw-r--r--net/netfilter/nf_conntrack_proto_udp.c2
-rw-r--r--net/netfilter/nf_conntrack_proto_udplite.c2
-rw-r--r--net/netfilter/nf_conntrack_standalone.c13
-rw-r--r--net/netfilter/nf_nat_core.c39
-rw-r--r--net/netfilter/nf_tables_api.c102
-rw-r--r--net/netfilter/nf_tables_trace.c5
-rw-r--r--net/netfilter/nfnetlink_acct.c11
-rw-r--r--net/netfilter/nfnetlink_cttimeout.c6
-rw-r--r--net/netfilter/nfnetlink_queue.c105
-rw-r--r--net/netfilter/nft_counter.c6
-rw-r--r--net/netfilter/nft_ct.c32
-rw-r--r--net/netfilter/nft_dynset.c3
-rw-r--r--net/netfilter/nft_hash.c4
-rw-r--r--net/netfilter/nft_limit.c6
-rw-r--r--net/netfilter/nft_rbtree.c49
-rw-r--r--net/netfilter/x_tables.c245
-rw-r--r--net/netfilter/xt_IDLETIMER.c1
-rw-r--r--net/netfilter/xt_connlabel.c14
-rw-r--r--net/netfilter/xt_socket.c6
-rw-r--r--net/netlabel/netlabel_kapi.c2
-rw-r--r--net/netlink/af_netlink.c10
-rw-r--r--net/nfc/nci/core.c117
-rw-r--r--net/nfc/nci/ntf.c2
-rw-r--r--net/nfc/nci/rsp.c23
-rw-r--r--net/openvswitch/conntrack.c23
-rw-r--r--net/openvswitch/datapath.c30
-rw-r--r--net/openvswitch/flow_netlink.c5
-rw-r--r--net/openvswitch/vport-internal_dev.c3
-rw-r--r--net/packet/af_packet.c44
-rw-r--r--net/qrtr/Kconfig24
-rw-r--r--net/qrtr/Makefile4
-rw-r--r--net/qrtr/qrtr.c1007
-rw-r--r--net/qrtr/qrtr.h31
-rw-r--r--net/qrtr/smd.c120
-rw-r--r--net/rds/ib_frmr.c2
-rw-r--r--net/rds/tcp_connect.c4
-rw-r--r--net/rds/tcp_listen.c17
-rw-r--r--net/rds/tcp_recv.c18
-rw-r--r--net/rds/tcp_send.c4
-rw-r--r--net/rfkill/core.c36
-rw-r--r--net/rxrpc/Kconfig2
-rw-r--r--net/rxrpc/Makefile7
-rw-r--r--net/rxrpc/af_rxrpc.c9
-rw-r--r--net/rxrpc/ar-accept.c4
-rw-r--r--net/rxrpc/ar-ack.c81
-rw-r--r--net/rxrpc/ar-call.c11
-rw-r--r--net/rxrpc/ar-connection.c14
-rw-r--r--net/rxrpc/ar-connevent.c27
-rw-r--r--net/rxrpc/ar-input.c22
-rw-r--r--net/rxrpc/ar-internal.h70
-rw-r--r--net/rxrpc/ar-key.c4
-rw-r--r--net/rxrpc/ar-output.c6
-rw-r--r--net/rxrpc/ar-proc.c2
-rw-r--r--net/rxrpc/ar-recvmsg.c18
-rw-r--r--net/rxrpc/ar-security.c166
-rw-r--r--net/rxrpc/insecure.c83
-rw-r--r--net/rxrpc/misc.c89
-rw-r--r--net/rxrpc/rxkad.c61
-rw-r--r--net/sched/act_api.c7
-rw-r--r--net/sched/act_bpf.c5
-rw-r--r--net/sched/act_connmark.c3
-rw-r--r--net/sched/act_csum.c2
-rw-r--r--net/sched/act_gact.c17
-rw-r--r--net/sched/act_ife.c16
-rw-r--r--net/sched/act_ipt.c21
-rw-r--r--net/sched/act_mirred.c26
-rw-r--r--net/sched/act_nat.c2
-rw-r--r--net/sched/act_pedit.c2
-rw-r--r--net/sched/act_simple.c20
-rw-r--r--net/sched/act_skbedit.c20
-rw-r--r--net/sched/act_vlan.c24
-rw-r--r--net/sched/cls_bpf.c2
-rw-r--r--net/sched/cls_flower.c21
-rw-r--r--net/sched/cls_u32.c52
-rw-r--r--net/sched/em_meta.c8
-rw-r--r--net/sched/sch_api.c6
-rw-r--r--net/sched/sch_codel.c22
-rw-r--r--net/sched/sch_fq_codel.c111
-rw-r--r--net/sched/sch_generic.c44
-rw-r--r--net/sched/sch_htb.c6
-rw-r--r--net/sched/sch_netem.c3
-rw-r--r--net/sched/sch_tbf.c6
-rw-r--r--net/sctp/Kconfig4
-rw-r--r--net/sctp/Makefile1
-rw-r--r--net/sctp/chunk.c2
-rw-r--r--net/sctp/input.c16
-rw-r--r--net/sctp/inqueue.c5
-rw-r--r--net/sctp/ipv6.c2
-rw-r--r--net/sctp/proc.c104
-rw-r--r--net/sctp/sctp_diag.c500
-rw-r--r--net/sctp/sm_sideeffect.c6
-rw-r--r--net/sctp/socket.c223
-rw-r--r--net/sctp/ulpqueue.c26
-rw-r--r--net/socket.c46
-rw-r--r--net/sunrpc/auth.c9
-rw-r--r--net/sunrpc/auth_generic.c13
-rw-r--r--net/sunrpc/auth_gss/auth_gss.c6
-rw-r--r--net/sunrpc/auth_gss/svcauth_gss.c9
-rw-r--r--net/sunrpc/auth_unix.c6
-rw-r--r--net/sunrpc/clnt.c17
-rw-r--r--net/sunrpc/socklib.c2
-rw-r--r--net/sunrpc/svc_xprt.c23
-rw-r--r--net/sunrpc/svcsock.c8
-rw-r--r--net/sunrpc/xdr.c2
-rw-r--r--net/sunrpc/xprtrdma/backchannel.c16
-rw-r--r--net/sunrpc/xprtrdma/fmr_ops.c134
-rw-r--r--net/sunrpc/xprtrdma/frwr_ops.c214
-rw-r--r--net/sunrpc/xprtrdma/physical_ops.c39
-rw-r--r--net/sunrpc/xprtrdma/rpc_rdma.c517
-rw-r--r--net/sunrpc/xprtrdma/svc_rdma_marshal.c32
-rw-r--r--net/sunrpc/xprtrdma/svc_rdma_recvfrom.c36
-rw-r--r--net/sunrpc/xprtrdma/svc_rdma_sendto.c28
-rw-r--r--net/sunrpc/xprtrdma/svc_rdma_transport.c17
-rw-r--r--net/sunrpc/xprtrdma/transport.c16
-rw-r--r--net/sunrpc/xprtrdma/verbs.c78
-rw-r--r--net/sunrpc/xprtrdma/xprt_rdma.h47
-rw-r--r--net/sunrpc/xprtsock.c21
-rw-r--r--net/switchdev/Kconfig2
-rw-r--r--net/switchdev/switchdev.c6
-rw-r--r--net/tipc/bearer.c101
-rw-r--r--net/tipc/bearer.h15
-rw-r--r--net/tipc/core.c8
-rw-r--r--net/tipc/discover.c7
-rw-r--r--net/tipc/discover.h2
-rw-r--r--net/tipc/link.c52
-rw-r--r--net/tipc/link.h2
-rw-r--r--net/tipc/msg.h29
-rw-r--r--net/tipc/netlink_compat.c2
-rw-r--r--net/tipc/node.c33
-rw-r--r--net/tipc/node.h6
-rw-r--r--net/tipc/server.c27
-rw-r--r--net/tipc/server.h4
-rw-r--r--net/tipc/socket.c149
-rw-r--r--net/tipc/socket.h17
-rw-r--r--net/tipc/subscr.c7
-rw-r--r--net/unix/af_unix.c2
-rw-r--r--net/vmw_vsock/af_vsock.c21
-rw-r--r--net/vmw_vsock/vmci_transport.c2
-rw-r--r--net/wireless/chan.c4
-rw-r--r--net/wireless/core.c32
-rw-r--r--net/wireless/core.h6
-rw-r--r--net/wireless/debugfs.c4
-rw-r--r--net/wireless/ibss.c6
-rw-r--r--net/wireless/mesh.c4
-rw-r--r--net/wireless/mlme.c2
-rw-r--r--net/wireless/nl80211.c342
-rw-r--r--net/wireless/rdev-ops.h2
-rw-r--r--net/wireless/reg.c32
-rw-r--r--net/wireless/reg.h2
-rw-r--r--net/wireless/scan.c24
-rw-r--r--net/wireless/sme.c54
-rw-r--r--net/wireless/sysfs.c2
-rw-r--r--net/wireless/trace.h26
-rw-r--r--net/wireless/util.c48
-rw-r--r--net/wireless/wext-compat.c45
-rw-r--r--net/wireless/wext-core.c5
-rw-r--r--net/x25/x25_facilities.c1
-rw-r--r--net/xfrm/xfrm_output.c3
-rw-r--r--net/xfrm/xfrm_user.c10
-rw-r--r--samples/Kconfig9
-rw-r--r--samples/Makefile2
-rw-r--r--samples/bpf/Makefile44
-rw-r--r--samples/bpf/README.rst66
-rw-r--r--samples/bpf/bpf_load.c26
-rw-r--r--samples/bpf/offwaketime_kern.c36
-rw-r--r--samples/bpf/parse_ldabs.c41
-rw-r--r--samples/bpf/parse_simple.c48
-rw-r--r--samples/bpf/parse_varlen.c153
-rwxr-xr-xsamples/bpf/test_cls_bpf.sh37
-rw-r--r--samples/bpf/test_overhead_kprobe_kern.c41
-rw-r--r--samples/bpf/test_overhead_tp_kern.c36
-rw-r--r--samples/bpf/test_overhead_user.c162
-rw-r--r--samples/bpf/test_verifier.c348
-rw-r--r--samples/bpf/tracex1_kern.c4
-rw-r--r--samples/bpf/tracex2_kern.c4
-rw-r--r--samples/bpf/tracex5_kern.c6
-rw-r--r--samples/connector/.gitignore (renamed from Documentation/connector/.gitignore)0
-rw-r--r--samples/connector/Makefile (renamed from Documentation/connector/Makefile)6
-rw-r--r--samples/connector/cn_test.c (renamed from Documentation/connector/cn_test.c)0
-rw-r--r--samples/connector/ucon.c (renamed from Documentation/connector/ucon.c)0
-rw-r--r--samples/kprobes/jprobe_example.c2
-rw-r--r--samples/kprobes/kprobe_example.c38
-rw-r--r--samples/livepatch/livepatch-sample.c1
-rw-r--r--samples/rpmsg/rpmsg_client_sample.c14
-rw-r--r--samples/v4l/Makefile (renamed from Documentation/video4linux/Makefile)0
-rw-r--r--samples/v4l/v4l2-pci-skeleton.c (renamed from Documentation/video4linux/v4l2-pci-skeleton.c)5
-rw-r--r--scripts/Kbuild.include45
-rw-r--r--scripts/Makefile.build44
-rw-r--r--scripts/Makefile.extrawarn1
-rw-r--r--scripts/Makefile.lib13
-rwxr-xr-xscripts/adjust_autoksyms.sh101
-rw-r--r--scripts/basic/fixdep.c61
-rwxr-xr-xscripts/bloat-o-meter6
-rwxr-xr-xscripts/checkkconfigsymbols.py2
-rwxr-xr-xscripts/checkpatch.pl142
-rwxr-xr-xscripts/coccicheck2
-rw-r--r--scripts/coccinelle/api/debugfs/debugfs_simple_attr.cocci67
-rw-r--r--scripts/coccinelle/api/setup_timer.cocci4
-rw-r--r--scripts/coccinelle/misc/compare_const_fl.cocci171
-rwxr-xr-xscripts/decode_stacktrace.sh55
-rw-r--r--scripts/docproc.c221
-rw-r--r--scripts/dtc/checks.c26
-rw-r--r--scripts/dtc/flattree.c4
-rw-r--r--scripts/dtc/libfdt/fdt_ro.c6
-rw-r--r--scripts/dtc/version_gen.h2
-rw-r--r--scripts/gdb/linux/Makefile12
-rw-r--r--scripts/gdb/linux/constants.py.in59
-rw-r--r--scripts/gdb/linux/cpus.py38
-rw-r--r--scripts/gdb/linux/dmesg.py11
-rw-r--r--scripts/gdb/linux/lists.py21
-rw-r--r--scripts/gdb/linux/modules.py24
-rw-r--r--scripts/gdb/linux/proc.py156
-rw-r--r--scripts/gdb/linux/radixtree.py97
-rw-r--r--scripts/gdb/linux/tasks.py19
-rw-r--r--scripts/gdb/linux/utils.py32
-rw-r--r--scripts/gdb/vmlinux-gdb.py2
-rw-r--r--scripts/genksyms/genksyms.c3
-rwxr-xr-xscripts/headers_check.pl4
-rw-r--r--scripts/kallsyms.c10
-rw-r--r--scripts/kconfig/confdata.c4
-rwxr-xr-xscripts/kconfig/streamline_config.pl44
-rw-r--r--scripts/kconfig/symbol.c14
-rwxr-xr-xscripts/kernel-doc315
-rwxr-xr-xscripts/ld-version.sh2
-rwxr-xr-xscripts/link-vmlinux.sh4
-rw-r--r--scripts/package/Makefile4
-rwxr-xr-xscripts/package/builddeb5
-rwxr-xr-xscripts/package/mkspec5
-rw-r--r--scripts/spelling.txt1
-rw-r--r--security/Kconfig1
-rw-r--r--security/Makefile2
-rw-r--r--security/apparmor/file.c4
-rw-r--r--security/apparmor/include/file.h4
-rw-r--r--security/apparmor/include/path.h2
-rw-r--r--security/apparmor/lsm.c83
-rw-r--r--security/apparmor/path.c8
-rw-r--r--security/commoncap.c8
-rw-r--r--security/integrity/Kconfig1
-rw-r--r--security/integrity/digsig.c15
-rw-r--r--security/integrity/evm/evm_main.c6
-rw-r--r--security/integrity/ima/Kconfig36
-rw-r--r--security/integrity/ima/Makefile2
-rw-r--r--security/integrity/ima/ima.h2
-rw-r--r--security/integrity/ima/ima_api.c2
-rw-r--r--security/integrity/ima/ima_appraise.c7
-rw-r--r--security/integrity/ima/ima_main.c25
-rw-r--r--security/integrity/ima/ima_mok.c23
-rw-r--r--security/integrity/ima/ima_policy.c14
-rw-r--r--security/integrity/integrity.h1
-rw-r--r--security/keys/Kconfig15
-rw-r--r--security/keys/Makefile1
-rw-r--r--security/keys/big_key.c198
-rw-r--r--security/keys/compat.c4
-rw-r--r--security/keys/dh.c160
-rw-r--r--security/keys/internal.h12
-rw-r--r--security/keys/key.c42
-rw-r--r--security/keys/keyctl.c5
-rw-r--r--security/keys/keyring.c46
-rw-r--r--security/keys/persistent.c4
-rw-r--r--security/keys/process_keys.c16
-rw-r--r--security/keys/request_key.c4
-rw-r--r--security/keys/request_key_auth.c2
-rw-r--r--security/keys/user_defined.c42
-rw-r--r--security/loadpin/Kconfig19
-rw-r--r--security/loadpin/Makefile1
-rw-r--r--security/loadpin/loadpin.c190
-rw-r--r--security/security.c32
-rw-r--r--security/selinux/hooks.c157
-rw-r--r--security/selinux/include/classmap.h30
-rw-r--r--security/selinux/include/conditional.h2
-rw-r--r--security/selinux/include/objsec.h5
-rw-r--r--security/selinux/nlmsgtab.c4
-rw-r--r--security/selinux/ss/services.c6
-rw-r--r--security/smack/smack_lsm.c8
-rw-r--r--security/tomoyo/common.h12
-rw-r--r--security/tomoyo/file.c10
-rw-r--r--security/tomoyo/mount.c4
-rw-r--r--security/tomoyo/tomoyo.c28
-rw-r--r--security/yama/yama_lsm.c84
-rw-r--r--sound/core/Kconfig29
-rw-r--r--sound/core/Makefile1
-rw-r--r--sound/core/compress_offload.c25
-rw-r--r--sound/core/hrtimer.c56
-rw-r--r--sound/core/pcm_dmaengine.c11
-rw-r--r--sound/core/pcm_iec958.c65
-rw-r--r--sound/core/pcm_lib.c4
-rw-r--r--sound/core/pcm_native.c4
-rw-r--r--sound/core/rtctimer.c187
-rw-r--r--sound/core/seq/seq.c2
-rw-r--r--sound/core/timer.c5
-rw-r--r--sound/firewire/Kconfig1
-rw-r--r--sound/firewire/Makefile3
-rw-r--r--sound/firewire/amdtp-stream-trace.h110
-rw-r--r--sound/firewire/amdtp-stream.c210
-rw-r--r--sound/firewire/amdtp-stream.h37
-rw-r--r--sound/firewire/bebob/bebob.c217
-rw-r--r--sound/firewire/bebob/bebob.h6
-rw-r--r--sound/firewire/bebob/bebob_stream.c101
-rw-r--r--sound/firewire/dice/dice.c41
-rw-r--r--sound/firewire/digi00x/amdtp-dot.c2
-rw-r--r--sound/firewire/digi00x/digi00x-transaction.c7
-rw-r--r--sound/firewire/digi00x/digi00x.c107
-rw-r--r--sound/firewire/digi00x/digi00x.h3
-rw-r--r--sound/firewire/fireworks/fireworks.c168
-rw-r--r--sound/firewire/fireworks/fireworks.h4
-rw-r--r--sound/firewire/fireworks/fireworks_stream.c84
-rw-r--r--sound/firewire/lib.c32
-rw-r--r--sound/firewire/lib.h3
-rw-r--r--sound/firewire/oxfw/oxfw-stream.c3
-rw-r--r--sound/firewire/oxfw/oxfw.c151
-rw-r--r--sound/firewire/oxfw/oxfw.h4
-rw-r--r--sound/firewire/tascam/tascam-stream.c26
-rw-r--r--sound/firewire/tascam/tascam.c118
-rw-r--r--sound/firewire/tascam/tascam.h2
-rw-r--r--sound/hda/ext/hdac_ext_bus.c4
-rw-r--r--sound/hda/ext/hdac_ext_controller.c66
-rw-r--r--sound/hda/hdac_controller.c17
-rw-r--r--sound/hda/hdac_i915.c47
-rw-r--r--sound/hda/hdmi_chmap.c44
-rw-r--r--sound/hda/local.h10
-rw-r--r--sound/isa/wavefront/wavefront_synth.c9
-rw-r--r--sound/oss/waveartist.c8
-rw-r--r--sound/pci/au88x0/au88x0_core.c14
-rw-r--r--sound/pci/au88x0/au88x0_pcm.c5
-rw-r--r--sound/pci/ctxfi/cttimer.c6
-rw-r--r--sound/pci/ens1370.c2
-rw-r--r--sound/pci/hda/Kconfig10
-rw-r--r--sound/pci/hda/hda_generic.c2
-rw-r--r--sound/pci/hda/hda_sysfs.c8
-rw-r--r--sound/pci/hda/patch_hdmi.c393
-rw-r--r--sound/pci/hda/patch_realtek.c44
-rw-r--r--sound/pci/hda/thinkpad_helper.c2
-rw-r--r--sound/pci/intel8x0.c20
-rw-r--r--sound/pci/lx6464es/lx_core.c2
-rw-r--r--sound/soc/atmel/atmel_ssc_dai.c4
-rw-r--r--sound/soc/au1x/dbdma2.c4
-rw-r--r--sound/soc/bcm/bcm2835-i2s.c27
-rw-r--r--sound/soc/codecs/Kconfig48
-rw-r--r--sound/soc/codecs/Makefile15
-rw-r--r--sound/soc/codecs/ak4642.c7
-rw-r--r--sound/soc/codecs/arizona.c32
-rw-r--r--sound/soc/codecs/cs42l56.c2
-rw-r--r--sound/soc/codecs/cs47l24.c46
-rw-r--r--sound/soc/codecs/da7213.c99
-rw-r--r--sound/soc/codecs/da7213.h35
-rw-r--r--sound/soc/codecs/da7218.c32
-rw-r--r--sound/soc/codecs/da7218.h21
-rw-r--r--sound/soc/codecs/da7219.c38
-rw-r--r--sound/soc/codecs/da7219.h20
-rw-r--r--sound/soc/codecs/es8328.c194
-rw-r--r--sound/soc/codecs/es8328.h23
-rw-r--r--sound/soc/codecs/hdac_hdmi.c194
-rw-r--r--sound/soc/codecs/hdmi-codec.c432
-rw-r--r--sound/soc/codecs/max98371.c441
-rw-r--r--sound/soc/codecs/max98371.h67
-rw-r--r--sound/soc/codecs/pcm5102a.c69
-rw-r--r--sound/soc/codecs/rt298.c70
-rw-r--r--sound/soc/codecs/rt298.h2
-rw-r--r--sound/soc/codecs/rt5645.c16
-rw-r--r--sound/soc/codecs/rt5677.c41
-rw-r--r--sound/soc/codecs/tas571x.c141
-rw-r--r--sound/soc/codecs/tas571x.h22
-rw-r--r--sound/soc/codecs/tas5720.c620
-rw-r--r--sound/soc/codecs/tas5720.h90
-rw-r--r--sound/soc/codecs/tlv320aic31xx.c10
-rw-r--r--sound/soc/codecs/tlv320aic32x4-i2c.c74
-rw-r--r--sound/soc/codecs/tlv320aic32x4-spi.c76
-rw-r--r--sound/soc/codecs/tlv320aic32x4.c279
-rw-r--r--sound/soc/codecs/tlv320aic32x4.h7
-rw-r--r--sound/soc/codecs/twl6040.c16
-rw-r--r--sound/soc/codecs/wm5100.c16
-rw-r--r--sound/soc/codecs/wm5102.c4
-rw-r--r--sound/soc/codecs/wm5110.c6
-rw-r--r--sound/soc/codecs/wm8903.c17
-rw-r--r--sound/soc/codecs/wm8962.c24
-rw-r--r--sound/soc/codecs/wm8962.h6
-rw-r--r--sound/soc/codecs/wm8996.c16
-rw-r--r--sound/soc/codecs/wm_adsp.c292
-rw-r--r--sound/soc/codecs/wm_adsp.h1
-rw-r--r--sound/soc/davinci/Kconfig6
-rw-r--r--sound/soc/davinci/davinci-i2s.c80
-rw-r--r--sound/soc/davinci/davinci-mcasp.c92
-rw-r--r--sound/soc/davinci/davinci-mcasp.h5
-rw-r--r--sound/soc/dwc/designware_i2s.c14
-rw-r--r--sound/soc/fsl/fsl_sai.c24
-rw-r--r--sound/soc/fsl/fsl_ssi.c81
-rw-r--r--sound/soc/fsl/imx-pcm-fiq.c2
-rw-r--r--sound/soc/generic/simple-card.c1
-rw-r--r--sound/soc/intel/Kconfig16
-rw-r--r--sound/soc/intel/atom/sst-atom-controls.c2
-rw-r--r--sound/soc/intel/boards/Makefile2
-rw-r--r--sound/soc/intel/boards/broadwell.c2
-rw-r--r--sound/soc/intel/boards/bxt_rt298.c353
-rw-r--r--sound/soc/intel/boards/bytcr_rt5640.c2
-rw-r--r--sound/soc/intel/boards/bytcr_rt5651.c2
-rw-r--r--sound/soc/intel/boards/cht_bsw_max98090_ti.c4
-rw-r--r--sound/soc/intel/boards/cht_bsw_rt5645.c4
-rw-r--r--sound/soc/intel/boards/cht_bsw_rt5672.c2
-rw-r--r--sound/soc/intel/boards/haswell.c2
-rw-r--r--sound/soc/intel/boards/skl_nau88l25_max98357a.c87
-rw-r--r--sound/soc/intel/boards/skl_nau88l25_ssm4567.c86
-rw-r--r--sound/soc/intel/boards/skl_rt286.c59
-rw-r--r--sound/soc/intel/common/sst-acpi.h9
-rw-r--r--sound/soc/intel/common/sst-firmware.c2
-rw-r--r--sound/soc/intel/haswell/sst-haswell-pcm.c2
-rw-r--r--sound/soc/intel/skylake/Makefile2
-rw-r--r--sound/soc/intel/skylake/bxt-sst.c328
-rw-r--r--sound/soc/intel/skylake/skl-messages.c122
-rw-r--r--sound/soc/intel/skylake/skl-nhlt.c15
-rw-r--r--sound/soc/intel/skylake/skl-pcm.c119
-rw-r--r--sound/soc/intel/skylake/skl-sst-dsp.c2
-rw-r--r--sound/soc/intel/skylake/skl-sst-dsp.h15
-rw-r--r--sound/soc/intel/skylake/skl-sst.c13
-rw-r--r--sound/soc/intel/skylake/skl-topology.c27
-rw-r--r--sound/soc/intel/skylake/skl-topology.h2
-rw-r--r--sound/soc/intel/skylake/skl-tplg-interface.h2
-rw-r--r--sound/soc/intel/skylake/skl.c39
-rw-r--r--sound/soc/intel/skylake/skl.h6
-rw-r--r--sound/soc/kirkwood/Kconfig1
-rw-r--r--sound/soc/mediatek/Kconfig1
-rw-r--r--sound/soc/mediatek/mt8173-rt5650-rt5676.c27
-rw-r--r--sound/soc/mediatek/mt8173-rt5650.c50
-rw-r--r--sound/soc/mediatek/mtk-afe-pcm.c2
-rw-r--r--sound/soc/omap/mcbsp.c8
-rw-r--r--sound/soc/omap/omap-pcm.c2
-rw-r--r--sound/soc/pxa/brownstone.c1
-rw-r--r--sound/soc/pxa/mioa701_wm9713.c1
-rw-r--r--sound/soc/pxa/mmp-pcm.c1
-rw-r--r--sound/soc/pxa/mmp-sspa.c1
-rw-r--r--sound/soc/pxa/palm27x.c1
-rw-r--r--sound/soc/pxa/pxa-ssp.c1
-rw-r--r--sound/soc/pxa/pxa2xx-ac97.c1
-rw-r--r--sound/soc/pxa/pxa2xx-pcm.c1
-rw-r--r--sound/soc/qcom/lpass-platform.c8
-rw-r--r--sound/soc/rockchip/rockchip_i2s.c87
-rw-r--r--sound/soc/sh/rcar/adg.c8
-rw-r--r--sound/soc/sh/rcar/dma.c12
-rw-r--r--sound/soc/sh/rcar/rsnd.h13
-rw-r--r--sound/soc/sh/rcar/src.c4
-rw-r--r--sound/soc/soc-ac97.c8
-rw-r--r--sound/soc/soc-core.c14
-rw-r--r--sound/soc/soc-generic-dmaengine-pcm.c57
-rw-r--r--sound/soc/soc-topology.c48
-rw-r--r--sound/soc/sti/sti_uniperif.c144
-rw-r--r--sound/soc/sti/uniperif.h220
-rw-r--r--sound/soc/sti/uniperif_player.c182
-rw-r--r--sound/soc/sti/uniperif_reader.c229
-rw-r--r--sound/usb/card.c4
-rw-r--r--sound/usb/clock.c4
-rw-r--r--sound/usb/helper.c1
-rw-r--r--sound/usb/midi.c1
-rw-r--r--sound/usb/mixer.c78
-rw-r--r--sound/usb/quirks.c3
-rw-r--r--sound/usb/usbaudio.h1
-rw-r--r--tools/Makefile9
-rw-r--r--tools/build/Makefile.build8
-rw-r--r--tools/build/Makefile.feature10
-rw-r--r--tools/build/feature/Makefile27
-rw-r--r--tools/build/feature/test-all.c5
-rw-r--r--tools/build/feature/test-bpf.c3
-rw-r--r--tools/build/feature/test-dwarf_getlocations.c12
-rw-r--r--tools/build/feature/test-libunwind-aarch64.c26
-rw-r--r--tools/build/feature/test-libunwind-arm.c27
-rw-r--r--tools/build/feature/test-libunwind-debug-frame-aarch64.c16
-rw-r--r--tools/build/feature/test-libunwind-debug-frame-arm.c16
-rw-r--r--tools/build/feature/test-libunwind-x86.c27
-rw-r--r--tools/build/feature/test-libunwind-x86_64.c27
-rw-r--r--tools/gpio/Makefile2
-rw-r--r--tools/gpio/lsgpio.c2
-rw-r--r--tools/hv/lsvmbus1
-rw-r--r--tools/iio/generic_buffer.c116
-rw-r--r--tools/iio/iio_event_monitor.c18
-rw-r--r--tools/iio/iio_utils.h7
-rw-r--r--tools/kvm/kvm_stat/Makefile41
-rwxr-xr-xtools/kvm/kvm_stat/kvm_stat1127
-rw-r--r--tools/kvm/kvm_stat/kvm_stat.txt63
-rw-r--r--tools/lguest/lguest.c10
-rw-r--r--tools/lib/api/fs/fs.c13
-rw-r--r--tools/lib/api/fs/fs.h2
-rw-r--r--tools/lib/traceevent/parse-filter.c4
-rw-r--r--tools/net/bpf_jit_disasm.c3
-rw-r--r--tools/objtool/Makefile4
-rw-r--r--tools/objtool/elf.h5
-rw-r--r--tools/perf/Documentation/intel-pt.txt7
-rw-r--r--tools/perf/Documentation/itrace.txt8
-rw-r--r--tools/perf/Documentation/perf-annotate.txt2
-rw-r--r--tools/perf/Documentation/perf-diff.txt2
-rw-r--r--tools/perf/Documentation/perf-list.txt107
-rw-r--r--tools/perf/Documentation/perf-mem.txt8
-rw-r--r--tools/perf/Documentation/perf-record.txt13
-rw-r--r--tools/perf/Documentation/perf-report.txt5
-rw-r--r--tools/perf/Documentation/perf-sched.txt16
-rw-r--r--tools/perf/Documentation/perf-script.txt14
-rw-r--r--tools/perf/Documentation/perf-top.txt2
-rw-r--r--tools/perf/Documentation/perf-trace.txt33
-rw-r--r--tools/perf/Makefile.perf15
-rw-r--r--tools/perf/arch/powerpc/Makefile1
-rw-r--r--tools/perf/arch/powerpc/include/perf_regs.h69
-rw-r--r--tools/perf/arch/powerpc/util/Build2
-rw-r--r--tools/perf/arch/powerpc/util/dwarf-regs.c40
-rw-r--r--tools/perf/arch/powerpc/util/perf_regs.c49
-rw-r--r--tools/perf/arch/powerpc/util/sym-handling.c43
-rw-r--r--tools/perf/arch/powerpc/util/unwind-libunwind.c96
-rw-r--r--tools/perf/arch/x86/Makefile23
-rw-r--r--tools/perf/arch/x86/entry/syscalls/syscall_64.tbl376
-rwxr-xr-xtools/perf/arch/x86/entry/syscalls/syscalltbl.sh39
-rw-r--r--tools/perf/arch/x86/tests/perf-time-to-tsc.c2
-rw-r--r--tools/perf/arch/x86/util/dwarf-regs.c8
-rw-r--r--tools/perf/arch/x86/util/intel-bts.c5
-rw-r--r--tools/perf/arch/x86/util/intel-pt.c5
-rw-r--r--tools/perf/arch/x86/util/tsc.c32
-rw-r--r--tools/perf/arch/x86/util/tsc.h17
-rw-r--r--tools/perf/bench/futex-lock-pi.c2
-rw-r--r--tools/perf/bench/futex.h6
-rw-r--r--tools/perf/bench/mem-functions.c22
-rw-r--r--tools/perf/builtin-annotate.c5
-rw-r--r--tools/perf/builtin-buildid-cache.c8
-rw-r--r--tools/perf/builtin-config.c39
-rw-r--r--tools/perf/builtin-diff.c9
-rw-r--r--tools/perf/builtin-help.c18
-rw-r--r--tools/perf/builtin-inject.c1
-rw-r--r--tools/perf/builtin-kmem.c2
-rw-r--r--tools/perf/builtin-kvm.c2
-rw-r--r--tools/perf/builtin-mem.c11
-rw-r--r--tools/perf/builtin-record.c348
-rw-r--r--tools/perf/builtin-report.c22
-rw-r--r--tools/perf/builtin-sched.c198
-rw-r--r--tools/perf/builtin-script.c127
-rw-r--r--tools/perf/builtin-stat.c37
-rw-r--r--tools/perf/builtin-timechart.c5
-rw-r--r--tools/perf/builtin-top.c45
-rw-r--r--tools/perf/builtin-trace.c1355
-rw-r--r--tools/perf/config/Makefile17
-rw-r--r--tools/perf/jvmti/jvmti_agent.c43
-rw-r--r--tools/perf/perf.c19
-rw-r--r--tools/perf/perf.h1
-rw-r--r--tools/perf/scripts/python/export-to-postgresql.py52
-rw-r--r--tools/perf/tests/Build2
-rw-r--r--tools/perf/tests/backward-ring-buffer.c151
-rw-r--r--tools/perf/tests/bpf.c2
-rw-r--r--tools/perf/tests/builtin-test.c8
-rw-r--r--tools/perf/tests/code-reading.c2
-rw-r--r--tools/perf/tests/dso-data.c2
-rw-r--r--tools/perf/tests/event-times.c236
-rw-r--r--tools/perf/tests/event_update.c2
-rw-r--r--tools/perf/tests/hists_common.c2
-rw-r--r--tools/perf/tests/hists_cumulate.c4
-rw-r--r--tools/perf/tests/hists_filter.c2
-rw-r--r--tools/perf/tests/hists_link.c4
-rw-r--r--tools/perf/tests/hists_output.c4
-rw-r--r--tools/perf/tests/keep-tracking.c2
-rw-r--r--tools/perf/tests/openat-syscall-all-cpus.c2
-rw-r--r--tools/perf/tests/openat-syscall-tp-fields.c2
-rw-r--r--tools/perf/tests/perf-record.c2
-rw-r--r--tools/perf/tests/switch-tracking.c2
-rw-r--r--tools/perf/tests/tests.h2
-rw-r--r--tools/perf/tests/vmlinux-kallsyms.c11
-rw-r--r--tools/perf/trace/beauty/eventfd.c38
-rw-r--r--tools/perf/trace/beauty/flock.c31
-rw-r--r--tools/perf/trace/beauty/futex_op.c44
-rw-r--r--tools/perf/trace/beauty/mmap.c158
-rw-r--r--tools/perf/trace/beauty/mode_t.c68
-rw-r--r--tools/perf/trace/beauty/msg_flags.c62
-rw-r--r--tools/perf/trace/beauty/open_flags.c56
-rw-r--r--tools/perf/trace/beauty/perf_event_open.c43
-rw-r--r--tools/perf/trace/beauty/pid.c21
-rw-r--r--tools/perf/trace/beauty/sched_policy.c44
-rw-r--r--tools/perf/trace/beauty/seccomp.c52
-rw-r--r--tools/perf/trace/beauty/signum.c53
-rw-r--r--tools/perf/trace/beauty/socket_type.c60
-rw-r--r--tools/perf/trace/beauty/waitid_options.c26
-rw-r--r--tools/perf/ui/browsers/hists.c40
-rw-r--r--tools/perf/ui/gtk/hists.c2
-rw-r--r--tools/perf/ui/hist.c2
-rw-r--r--tools/perf/ui/stdio/hist.c3
-rw-r--r--tools/perf/util/Build8
-rw-r--r--tools/perf/util/annotate.c36
-rw-r--r--tools/perf/util/auxtrace.c7
-rw-r--r--tools/perf/util/auxtrace.h2
-rw-r--r--tools/perf/util/bpf-loader.c143
-rw-r--r--tools/perf/util/bpf-loader.h19
-rw-r--r--tools/perf/util/build-id.c38
-rw-r--r--tools/perf/util/cache.h19
-rw-r--r--tools/perf/util/call-path.c122
-rw-r--r--tools/perf/util/call-path.h77
-rw-r--r--tools/perf/util/callchain.c9
-rw-r--r--tools/perf/util/callchain.h9
-rw-r--r--tools/perf/util/config.c222
-rw-r--r--tools/perf/util/config.h26
-rw-r--r--tools/perf/util/cpumap.c12
-rw-r--r--tools/perf/util/cpumap.h2
-rw-r--r--tools/perf/util/data.c41
-rw-r--r--tools/perf/util/data.h11
-rw-r--r--tools/perf/util/db-export.c88
-rw-r--r--tools/perf/util/db-export.h3
-rw-r--r--tools/perf/util/dso.c11
-rw-r--r--tools/perf/util/dwarf-aux.c61
-rw-r--r--tools/perf/util/event.c13
-rw-r--r--tools/perf/util/event.h9
-rw-r--r--tools/perf/util/evlist.c208
-rw-r--r--tools/perf/util/evlist.h26
-rw-r--r--tools/perf/util/evsel.c177
-rw-r--r--tools/perf/util/evsel.h29
-rw-r--r--tools/perf/util/evsel_fprintf.c212
-rw-r--r--tools/perf/util/header.c33
-rw-r--r--tools/perf/util/help-unknown-cmd.c30
-rw-r--r--tools/perf/util/hist.c32
-rw-r--r--tools/perf/util/hist.h14
-rw-r--r--tools/perf/util/intel-bts.c5
-rw-r--r--tools/perf/util/intel-pt-decoder/intel-pt-decoder.c2
-rw-r--r--tools/perf/util/intel-pt.c22
-rw-r--r--tools/perf/util/jitdump.c41
-rw-r--r--tools/perf/util/jitdump.h3
-rw-r--r--tools/perf/util/machine.c136
-rw-r--r--tools/perf/util/machine.h8
-rw-r--r--tools/perf/util/map.c16
-rw-r--r--tools/perf/util/ordered-events.c9
-rw-r--r--tools/perf/util/ordered-events.h1
-rw-r--r--tools/perf/util/parse-events.c62
-rw-r--r--tools/perf/util/perf_regs.c8
-rw-r--r--tools/perf/util/pmu.c23
-rw-r--r--tools/perf/util/probe-event.c406
-rw-r--r--tools/perf/util/probe-event.h5
-rw-r--r--tools/perf/util/probe-file.c3
-rw-r--r--tools/perf/util/probe-finder.c36
-rw-r--r--tools/perf/util/python-ext-sources1
-rw-r--r--tools/perf/util/quote.c36
-rw-r--r--tools/perf/util/quote.h2
-rw-r--r--tools/perf/util/rb_resort.h149
-rw-r--r--tools/perf/util/record.c5
-rw-r--r--tools/perf/util/scripting-engines/trace-event-perl.c125
-rw-r--r--tools/perf/util/scripting-engines/trace-event-python.c47
-rw-r--r--tools/perf/util/session.c117
-rw-r--r--tools/perf/util/session.h12
-rw-r--r--tools/perf/util/sort.c122
-rw-r--r--tools/perf/util/sort.h9
-rw-r--r--tools/perf/util/stat-shadow.c8
-rw-r--r--tools/perf/util/stat.c4
-rw-r--r--tools/perf/util/strbuf.c93
-rw-r--r--tools/perf/util/strbuf.h25
-rw-r--r--tools/perf/util/symbol-elf.c20
-rw-r--r--tools/perf/util/symbol.c141
-rw-r--r--tools/perf/util/symbol.h26
-rw-r--r--tools/perf/util/symbol_fprintf.c71
-rw-r--r--tools/perf/util/syscalltbl.c134
-rw-r--r--tools/perf/util/syscalltbl.h20
-rw-r--r--tools/perf/util/thread-stack.c139
-rw-r--r--tools/perf/util/thread-stack.h31
-rw-r--r--tools/perf/util/thread.c21
-rw-r--r--tools/perf/util/thread.h8
-rw-r--r--tools/perf/util/thread_map.c22
-rw-r--r--tools/perf/util/thread_map.h3
-rw-r--r--tools/perf/util/tool.h1
-rw-r--r--tools/perf/util/top.h1
-rw-r--r--tools/perf/util/trigger.h94
-rw-r--r--tools/perf/util/tsc.h21
-rw-r--r--tools/perf/util/unwind-libunwind.c25
-rw-r--r--tools/perf/util/util.c39
-rw-r--r--tools/perf/util/util.h16
-rw-r--r--tools/perf/util/wrapper.c29
-rw-r--r--tools/power/acpi/os_specific/service_layers/oslinuxtbl.c47
-rw-r--r--tools/power/acpi/os_specific/service_layers/osunixmap.c2
-rw-r--r--tools/power/acpi/os_specific/service_layers/osunixxf.c24
-rw-r--r--tools/power/acpi/tools/acpidbg/acpidbg.c4
-rw-r--r--tools/power/acpi/tools/acpidump/Makefile1
-rw-r--r--tools/power/acpi/tools/acpidump/apdump.c13
-rw-r--r--tools/power/acpi/tools/acpidump/apmain.c3
-rw-r--r--tools/power/cpupower/Makefile12
-rw-r--r--tools/power/cpupower/bench/Makefile2
-rw-r--r--tools/power/cpupower/bench/README-BENCH2
-rw-r--r--tools/power/cpupower/bench/benchmark.c4
-rw-r--r--tools/power/cpupower/bench/parse.c20
-rw-r--r--tools/power/cpupower/bench/system.c3
-rw-r--r--tools/power/cpupower/lib/cpufreq.c550
-rw-r--r--tools/power/cpupower/lib/cpufreq.h59
-rw-r--r--tools/power/cpupower/lib/cpuidle.c380
-rw-r--r--tools/power/cpupower/lib/cpuidle.h23
-rw-r--r--tools/power/cpupower/lib/cpupower.c192
-rw-r--r--tools/power/cpupower/lib/cpupower.h35
-rw-r--r--tools/power/cpupower/lib/cpupower_intern.h5
-rw-r--r--tools/power/cpupower/lib/sysfs.c672
-rw-r--r--tools/power/cpupower/lib/sysfs.h31
-rw-r--r--tools/power/cpupower/man/cpupower-frequency-info.12
-rw-r--r--tools/power/cpupower/man/cpupower-frequency-set.12
-rw-r--r--tools/power/cpupower/man/cpupower-idle-info.12
-rw-r--r--tools/power/cpupower/man/cpupower-idle-set.12
-rw-r--r--tools/power/cpupower/utils/cpufreq-set.c8
-rw-r--r--tools/power/cpupower/utils/cpuidle-info.c32
-rw-r--r--tools/power/cpupower/utils/cpuidle-set.c26
-rw-r--r--tools/power/cpupower/utils/helpers/helpers.h26
-rw-r--r--tools/power/cpupower/utils/helpers/topology.c107
-rw-r--r--tools/power/cpupower/utils/idle_monitor/cpuidle_sysfs.c12
-rw-r--r--tools/testing/nvdimm/Kbuild11
-rw-r--r--tools/testing/nvdimm/config_check.c2
-rw-r--r--tools/testing/nvdimm/test/iomap.c27
-rw-r--r--tools/testing/nvdimm/test/nfit.c90
-rw-r--r--tools/testing/radix-tree/Makefile4
-rw-r--r--tools/testing/radix-tree/generated/autoconf.h3
-rw-r--r--tools/testing/radix-tree/linux/init.h1
-rw-r--r--tools/testing/radix-tree/linux/kernel.h15
-rw-r--r--tools/testing/radix-tree/linux/slab.h1
-rw-r--r--tools/testing/radix-tree/linux/types.h7
-rw-r--r--tools/testing/radix-tree/main.c84
-rw-r--r--tools/testing/radix-tree/multiorder.c337
-rw-r--r--tools/testing/radix-tree/regression2.c7
-rw-r--r--tools/testing/radix-tree/tag_check.c10
-rw-r--r--tools/testing/radix-tree/test.c49
-rw-r--r--tools/testing/radix-tree/test.h11
-rw-r--r--tools/testing/selftests/Makefile1
-rwxr-xr-xtools/testing/selftests/ftrace/ftracetest9
-rw-r--r--tools/testing/selftests/ftrace/test.d/event/event-pid.tc72
-rw-r--r--tools/testing/selftests/ftrace/test.d/functions9
-rw-r--r--tools/testing/selftests/ftrace/test.d/instances/instance-event.tc138
-rw-r--r--tools/testing/selftests/ftrace/test.d/trigger/trigger-eventonoff.tc64
-rw-r--r--tools/testing/selftests/ftrace/test.d/trigger/trigger-filter.tc59
-rw-r--r--tools/testing/selftests/ftrace/test.d/trigger/trigger-hist-mod.tc75
-rw-r--r--tools/testing/selftests/ftrace/test.d/trigger/trigger-hist.tc83
-rw-r--r--tools/testing/selftests/ftrace/test.d/trigger/trigger-multihist.tc73
-rw-r--r--tools/testing/selftests/ftrace/test.d/trigger/trigger-snapshot.tc56
-rw-r--r--tools/testing/selftests/ftrace/test.d/trigger/trigger-stacktrace.tc53
-rw-r--r--tools/testing/selftests/ftrace/test.d/trigger/trigger-traceonoff.tc58
-rwxr-xr-xtools/testing/selftests/intel_pstate/run.sh2
-rw-r--r--tools/testing/selftests/powerpc/Makefile1
-rw-r--r--tools/testing/selftests/powerpc/context_switch/.gitignore1
-rw-r--r--tools/testing/selftests/powerpc/context_switch/Makefile10
-rw-r--r--tools/testing/selftests/powerpc/context_switch/cp_abort.c110
-rw-r--r--tools/testing/selftests/powerpc/mm/subpage_prot.c18
-rw-r--r--tools/testing/selftests/powerpc/pmu/ebb/ebb.c1
-rw-r--r--tools/testing/selftests/powerpc/pmu/ebb/reg_access_test.c1
-rw-r--r--tools/testing/selftests/powerpc/reg.h (renamed from tools/testing/selftests/powerpc/pmu/ebb/reg.h)18
-rw-r--r--tools/testing/selftests/powerpc/tm/.gitignore3
-rw-r--r--tools/testing/selftests/powerpc/tm/Makefile3
-rw-r--r--tools/testing/selftests/powerpc/tm/tm-fork.c42
-rw-r--r--tools/testing/selftests/powerpc/tm/tm-resched-dscr.c16
-rw-r--r--tools/testing/selftests/powerpc/tm/tm-signal-stack.c4
-rw-r--r--tools/testing/selftests/powerpc/tm/tm-tar.c90
-rw-r--r--tools/testing/selftests/powerpc/tm/tm-tmspr.c143
-rw-r--r--tools/testing/selftests/powerpc/utils.h8
-rwxr-xr-xtools/testing/selftests/rcutorture/bin/jitter.sh90
-rwxr-xr-xtools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf-ftrace.sh121
-rwxr-xr-xtools/testing/selftests/rcutorture/bin/kvm-recheck-rcuperf.sh96
-rwxr-xr-xtools/testing/selftests/rcutorture/bin/kvm-recheck.sh5
-rwxr-xr-xtools/testing/selftests/rcutorture/bin/kvm-test-1-run.sh59
-rwxr-xr-xtools/testing/selftests/rcutorture/bin/kvm.sh24
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcu/TREE042
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcu/TREE04.boot2
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcuperf/CFLIST1
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcuperf/CFcommon2
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcuperf/TREE20
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcuperf/TREE5423
-rw-r--r--tools/testing/selftests/rcutorture/configs/rcuperf/ver_functions.sh52
-rw-r--r--tools/testing/selftests/seccomp/seccomp_bpf.c38
-rw-r--r--tools/testing/selftests/sigaltstack/Makefile8
-rw-r--r--tools/testing/selftests/sigaltstack/sas.c176
-rw-r--r--tools/testing/selftests/vm/thuge-gen.c2
-rw-r--r--tools/testing/selftests/x86/Makefile1
-rw-r--r--tools/testing/selftests/x86/fsgsbase.c398
-rw-r--r--tools/testing/selftests/x86/ldt_gdt.c250
-rw-r--r--tools/usb/usbip/libsrc/Makefile.am4
-rw-r--r--tools/usb/usbip/libsrc/usbip_common.h3
-rw-r--r--tools/usb/usbip/libsrc/usbip_device_driver.c163
-rw-r--r--tools/usb/usbip/libsrc/usbip_device_driver.h34
-rw-r--r--tools/usb/usbip/libsrc/usbip_host_common.c273
-rw-r--r--tools/usb/usbip/libsrc/usbip_host_common.h104
-rw-r--r--tools/usb/usbip/libsrc/usbip_host_driver.c269
-rw-r--r--tools/usb/usbip/libsrc/usbip_host_driver.h27
-rw-r--r--tools/usb/usbip/src/usbip_attach.c10
-rw-r--r--tools/usb/usbip/src/usbip_list.c96
-rw-r--r--tools/usb/usbip/src/usbip_port.c13
-rw-r--r--tools/usb/usbip/src/usbipd.c42
-rw-r--r--tools/virtio/ringtest/Makefile5
-rw-r--r--tools/virtio/ringtest/main.c2
-rw-r--r--tools/virtio/ringtest/virtio_ring_0_9.c49
-rw-r--r--tools/virtio/ringtest/virtio_ring_inorder.c2
-rw-r--r--virt/kvm/Kconfig3
-rw-r--r--virt/kvm/arm/arch_timer.c108
-rw-r--r--virt/kvm/arm/hyp/timer-sr.c5
-rw-r--r--virt/kvm/arm/hyp/vgic-v2-sr.c17
-rw-r--r--virt/kvm/arm/pmu.c25
-rw-r--r--virt/kvm/arm/vgic-v2.c65
-rw-r--r--virt/kvm/arm/vgic-v3.c55
-rw-r--r--virt/kvm/arm/vgic.c136
-rw-r--r--virt/kvm/arm/vgic/vgic-init.c452
-rw-r--r--virt/kvm/arm/vgic/vgic-irqfd.c52
-rw-r--r--virt/kvm/arm/vgic/vgic-kvm-device.c431
-rw-r--r--virt/kvm/arm/vgic/vgic-mmio-v2.c446
-rw-r--r--virt/kvm/arm/vgic/vgic-mmio-v3.c455
-rw-r--r--virt/kvm/arm/vgic/vgic-mmio.c526
-rw-r--r--virt/kvm/arm/vgic/vgic-mmio.h150
-rw-r--r--virt/kvm/arm/vgic/vgic-v2.c352
-rw-r--r--virt/kvm/arm/vgic/vgic-v3.c330
-rw-r--r--virt/kvm/arm/vgic/vgic.c619
-rw-r--r--virt/kvm/arm/vgic/vgic.h131
-rw-r--r--virt/kvm/eventfd.c18
-rw-r--r--virt/kvm/kvm_main.c219
-rw-r--r--virt/lib/irqbypass.c12
8481 files changed, 377986 insertions, 167260 deletions
diff --git a/.gitignore b/.gitignore
index fd3a35592543..0c320bf02586 100644
--- a/.gitignore
+++ b/.gitignore
@@ -62,7 +62,7 @@ Module.symvers
/tar-install/
#
-# git files that we don't want to ignore even it they are dot-files
+# git files that we don't want to ignore even if they are dot-files
#
!.gitignore
!.mailmap
diff --git a/CREDITS b/CREDITS
index 4312cd076b5b..0f0bf22afe0c 100644
--- a/CREDITS
+++ b/CREDITS
@@ -768,6 +768,7 @@ D: Z85230 driver
D: Former security contact point (please use vendor-sec@lst.de)
D: ex 2.2 maintainer
D: 2.1.x modular sound
+D: Assigned major/minor numbers maintainer at lanana.org
S: c/o Red Hat UK Ltd
S: Alexandra House
S: Alexandra Terrace
diff --git a/Documentation/ABI/obsolete/sysfs-driver-hid-roccat-savu b/Documentation/ABI/obsolete/sysfs-driver-hid-roccat-savu
index f1e02a98bd9d..99fda67fce18 100644
--- a/Documentation/ABI/obsolete/sysfs-driver-hid-roccat-savu
+++ b/Documentation/ABI/obsolete/sysfs-driver-hid-roccat-savu
@@ -3,9 +3,10 @@ Date: May 2012
Contact: Stefan Achatz <erazor_de@users.sourceforge.net>
Description: The mouse can store 5 profiles which can be switched by the
press of a button. A profile is split into general settings and
- button settings. buttons holds informations about button layout.
- When written, this file lets one write the respective profile
- buttons to the mouse. The data has to be 47 bytes long.
+ button settings. The buttons variable holds information about
+ button layout. When written, this file lets one write the
+ respective profile buttons to the mouse. The data has to be
+ 47 bytes long.
The mouse will reject invalid data.
Which profile to write is determined by the profile number
contained in the data.
@@ -26,8 +27,8 @@ Date: May 2012
Contact: Stefan Achatz <erazor_de@users.sourceforge.net>
Description: The mouse can store 5 profiles which can be switched by the
press of a button. A profile is split into general settings and
- button settings. profile holds informations like resolution, sensitivity
- and light effects.
+ button settings. A profile holds information like resolution,
+ sensitivity and light effects.
When written, this file lets one write the respective profile
settings back to the mouse. The data has to be 43 bytes long.
The mouse will reject invalid data.
diff --git a/Documentation/ABI/stable/sysfs-class-ubi b/Documentation/ABI/stable/sysfs-class-ubi
index 18d471d9faea..a6b324014692 100644
--- a/Documentation/ABI/stable/sysfs-class-ubi
+++ b/Documentation/ABI/stable/sysfs-class-ubi
@@ -107,6 +107,15 @@ Contact: Artem Bityutskiy <dedekind@infradead.org>
Description:
Number of physical eraseblocks reserved for bad block handling.
+What: /sys/class/ubi/ubiX/ro_mode
+Date: April 2016
+KernelVersion: 4.7
+Contact: linux-mtd@lists.infradead.org
+Description:
+ Contains ASCII "1\n" if the read-only flag is set on this
+ device, and "0\n" if it is cleared. UBI devices mark themselves
+ as read-only when they detect an unrecoverable error.
+
What: /sys/class/ubi/ubiX/total_eraseblocks
Date: July 2006
KernelVersion: 2.6.22
diff --git a/Documentation/ABI/testing/sysfs-block-zram b/Documentation/ABI/testing/sysfs-block-zram
index 2e69e83bf510..4518d30b8c2e 100644
--- a/Documentation/ABI/testing/sysfs-block-zram
+++ b/Documentation/ABI/testing/sysfs-block-zram
@@ -166,3 +166,12 @@ Description:
The mm_stat file is read-only and represents device's mm
statistics (orig_data_size, compr_data_size, etc.) in a format
similar to block layer statistics file format.
+
+What: /sys/block/zram<id>/debug_stat
+Date: July 2016
+Contact: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
+Description:
+	The debug_stat file is read-only and represents the device's
+	various debugging statistics, which are useful to kernel
+	developers. Its format is intentionally undocumented and may
+	change at any time without notice.
diff --git a/Documentation/ABI/testing/sysfs-bus-coresight-devices-etb10 b/Documentation/ABI/testing/sysfs-bus-coresight-devices-etb10
index 4b8d6ec92e2b..b5f526081711 100644
--- a/Documentation/ABI/testing/sysfs-bus-coresight-devices-etb10
+++ b/Documentation/ABI/testing/sysfs-bus-coresight-devices-etb10
@@ -6,13 +6,6 @@ Description: (RW) Add/remove a sink from a trace path. There can be multiple
sources for a single sink.
ex: echo 1 > /sys/bus/coresight/devices/20010000.etb/enable_sink
-What: /sys/bus/coresight/devices/<memory_map>.etb/status
-Date: November 2014
-KernelVersion: 3.19
-Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
-Description: (R) List various control and status registers. The specific
- layout and content is driver specific.
-
What: /sys/bus/coresight/devices/<memory_map>.etb/trigger_cntr
Date: November 2014
KernelVersion: 3.19
@@ -22,3 +15,65 @@ Description: (RW) Disables write access to the Trace RAM by stopping the
following the trigger event. The number of 32-bit words written
into the Trace RAM following the trigger event is equal to the
value stored in this register+1 (from ARM ETB-TRM).
+
+What: /sys/bus/coresight/devices/<memory_map>.etb/mgmt/rdp
+Date: March 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (R) Defines the depth, in words, of the trace RAM in powers of
+ 2. The value is read directly from HW register RDP, 0x004.
+
+What: /sys/bus/coresight/devices/<memory_map>.etb/mgmt/sts
+Date: March 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (R) Shows the value held by the ETB status register. The value
+ is read directly from HW register STS, 0x00C.
+
+What: /sys/bus/coresight/devices/<memory_map>.etb/mgmt/rrp
+Date: March 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (R) Shows the value held by the ETB RAM Read Pointer register
+ that is used to read entries from the Trace RAM over the APB
+ interface. The value is read directly from HW register RRP,
+ 0x014.
+
+What: /sys/bus/coresight/devices/<memory_map>.etb/mgmt/rwp
+Date: March 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (R) Shows the value held by the ETB RAM Write Pointer register
+	that is used to set the write pointer for writing entries from
+ the CoreSight bus into the Trace RAM. The value is read directly
+ from HW register RWP, 0x018.
+
+What: /sys/bus/coresight/devices/<memory_map>.etb/mgmt/trg
+Date: March 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (R) Similar to "trigger_cntr" above except that this value is
+ read directly from HW register TRG, 0x01C.
+
+What: /sys/bus/coresight/devices/<memory_map>.etb/mgmt/ctl
+Date: March 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (R) Shows the value held by the ETB Control register. The value
+ is read directly from HW register CTL, 0x020.
+
+What: /sys/bus/coresight/devices/<memory_map>.etb/mgmt/ffsr
+Date: March 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (R) Shows the value held by the ETB Formatter and Flush Status
+ register. The value is read directly from HW register FFSR,
+ 0x300.
+
+What: /sys/bus/coresight/devices/<memory_map>.etb/mgmt/ffcr
+Date: March 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (R) Shows the value held by the ETB Formatter and Flush Control
+ register. The value is read directly from HW register FFCR,
+ 0x304.
diff --git a/Documentation/ABI/testing/sysfs-bus-coresight-devices-etm4x b/Documentation/ABI/testing/sysfs-bus-coresight-devices-etm4x
index 2355ed8ae31f..36258bc1b473 100644
--- a/Documentation/ABI/testing/sysfs-bus-coresight-devices-etm4x
+++ b/Documentation/ABI/testing/sysfs-bus-coresight-devices-etm4x
@@ -359,6 +359,19 @@ Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
Description: (R) Print the content of the Peripheral ID3 Register
(0xFEC). The value is taken directly from the HW.
+What: /sys/bus/coresight/devices/<memory_map>.etm/mgmt/trcconfig
+Date: February 2016
+KernelVersion: 4.07
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (R) Print the content of the trace configuration register
+ (0x010) as currently set by SW.
+
+What: /sys/bus/coresight/devices/<memory_map>.etm/mgmt/trctraceid
+Date: February 2016
+KernelVersion: 4.07
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (R) Print the content of the trace ID register (0x040).
+
What: /sys/bus/coresight/devices/<memory_map>.etm/trcidr/trcidr0
Date: April 2015
KernelVersion: 4.01
diff --git a/Documentation/ABI/testing/sysfs-bus-coresight-devices-stm b/Documentation/ABI/testing/sysfs-bus-coresight-devices-stm
new file mode 100644
index 000000000000..1dffabe7f48d
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-bus-coresight-devices-stm
@@ -0,0 +1,53 @@
+What: /sys/bus/coresight/devices/<memory_map>.stm/enable_source
+Date: April 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (RW) Enable/disable tracing on this specific trace macrocell.
+ Enabling the trace macrocell implies it has been configured
+ properly and a sink has been identified for it. The path
+ of coresight components linking the source to the sink is
+ configured and managed automatically by the coresight framework.
+
+What: /sys/bus/coresight/devices/<memory_map>.stm/hwevent_enable
+Date: April 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (RW) Provides access to the HW event enable register, used in
+		conjunction with the HW event bank select register.
+
+What: /sys/bus/coresight/devices/<memory_map>.stm/hwevent_select
+Date: April 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (RW) Gives access to the HW event block select register
+ (STMHEBSR) in order to configure up to 256 channels. Used in
+ conjunction with "hwevent_enable" register as described above.
+
+What: /sys/bus/coresight/devices/<memory_map>.stm/port_enable
+Date: April 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (RW) Provides access to the stimulus port enable register
+ (STMSPER). Used in conjunction with "port_select" described
+ below.
+
+What: /sys/bus/coresight/devices/<memory_map>.stm/port_select
+Date: April 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(RW) Used to determine which bank of stimulus ports the bits
+		in register STMSPER (see above) apply to.
+
+What: /sys/bus/coresight/devices/<memory_map>.stm/status
+Date: April 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (R) List various control and status registers. The specific
+ layout and content is driver specific.
+
+What: /sys/bus/coresight/devices/<memory_map>.stm/traceid
+Date: April 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (RW) Holds the trace ID that will appear in the trace stream
+ coming from this trace entity.
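+
+The "port_select" and "port_enable" attributes above work as a pair: the first
+picks a bank of stimulus ports, the second holds the enable mask applied to
+that bank. A hedged user-space sketch in C; the device name, the mask value
+and the write_attr() helper are illustrative assumptions, not taken from the
+driver:
+
+	#include <stdio.h>
+
+	/* Write one value to a sysfs attribute; returns 0 on success. */
+	static int write_attr(const char *path, const char *val)
+	{
+		FILE *f = fopen(path, "w");
+
+		if (!f)
+			return -1;
+		fputs(val, f);
+		return fclose(f);
+	}
+
+	int main(void)
+	{
+		const char *base = "/sys/bus/coresight/devices/20100000.stm";
+		char path[128];
+
+		/* Select bank 0, then enable every stimulus port in it. */
+		snprintf(path, sizeof(path), "%s/port_select", base);
+		if (write_attr(path, "0"))
+			return 1;
+		snprintf(path, sizeof(path), "%s/port_enable", base);
+		return write_attr(path, "0xffffffff") ? 1 : 0;
+	}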
diff --git a/Documentation/ABI/testing/sysfs-bus-coresight-devices-tmc b/Documentation/ABI/testing/sysfs-bus-coresight-devices-tmc
index f38cded5fa22..4fe677ed1305 100644
--- a/Documentation/ABI/testing/sysfs-bus-coresight-devices-tmc
+++ b/Documentation/ABI/testing/sysfs-bus-coresight-devices-tmc
@@ -6,3 +6,80 @@ Description: (RW) Disables write access to the Trace RAM by stopping the
formatter after a defined number of words have been stored
following the trigger event. Additional interface for this
driver are expected to be added as it matures.
+
+What: /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/rsz
+Date: March 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (R) Defines the size, in 32-bit words, of the local RAM buffer.
+ The value is read directly from HW register RSZ, 0x004.
+
+What: /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/sts
+Date: March 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (R) Shows the value held by the TMC status register. The value
+ is read directly from HW register STS, 0x00C.
+
+What: /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/rrp
+Date: March 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (R) Shows the value held by the TMC RAM Read Pointer register
+ that is used to read entries from the Trace RAM over the APB
+ interface. The value is read directly from HW register RRP,
+ 0x014.
+
+What: /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/rwp
+Date: March 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (R) Shows the value held by the TMC RAM Write Pointer register
+		that is used to set the write pointer for writing entries from
+ the CoreSight bus into the Trace RAM. The value is read directly
+ from HW register RWP, 0x018.
+
+What: /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/trg
+Date: March 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (R) Similar to "trigger_cntr" above except that this value is
+ read directly from HW register TRG, 0x01C.
+
+What: /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/ctl
+Date: March 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (R) Shows the value held by the TMC Control register. The value
+ is read directly from HW register CTL, 0x020.
+
+What: /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/ffsr
+Date: March 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (R) Shows the value held by the TMC Formatter and Flush Status
+ register. The value is read directly from HW register FFSR,
+ 0x300.
+
+What: /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/ffcr
+Date: March 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (R) Shows the value held by the TMC Formatter and Flush Control
+ register. The value is read directly from HW register FFCR,
+ 0x304.
+
+What: /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/mode
+Date: March 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description:	(R) Shows the value held by the TMC Mode register, which
+		indicates the mode the device has been configured to enact.
+		The value is read directly from the MODE register, 0x028.
+
+What: /sys/bus/coresight/devices/<memory_map>.tmc/mgmt/devid
+Date: March 2016
+KernelVersion: 4.7
+Contact: Mathieu Poirier <mathieu.poirier@linaro.org>
+Description: (R) Indicates the capabilities of the Coresight TMC.
+		The value is read directly from the DEVID register, 0xFC8.
diff --git a/Documentation/ABI/testing/sysfs-bus-event_source-devices-hv_24x7 b/Documentation/ABI/testing/sysfs-bus-event_source-devices-hv_24x7
index f893337570c1..ec27c6c9e737 100644
--- a/Documentation/ABI/testing/sysfs-bus-event_source-devices-hv_24x7
+++ b/Documentation/ABI/testing/sysfs-bus-event_source-devices-hv_24x7
@@ -4,7 +4,7 @@ Contact: Linux on PowerPC Developer List <linuxppc-dev@lists.ozlabs.org>
Description:
Provides access to the binary "24x7 catalog" provided by the
hypervisor on POWER7 and 8 systems. This catalog lists events
- avaliable from the powerpc "hv_24x7" pmu. Its format is
+ available from the powerpc "hv_24x7" pmu. Its format is
documented here:
https://raw.githubusercontent.com/jmesmon/catalog-24x7/master/hv-24x7-catalog.h
diff --git a/Documentation/ABI/testing/sysfs-bus-iio b/Documentation/ABI/testing/sysfs-bus-iio
index 3c6624881375..df44998e7506 100644
--- a/Documentation/ABI/testing/sysfs-bus-iio
+++ b/Documentation/ABI/testing/sysfs-bus-iio
@@ -1233,7 +1233,7 @@ KernelVersion: 3.4
Contact: linux-iio@vger.kernel.org
Description:
Proximity measurement indicating that some
- object is near the sensor, usually be observing
+ object is near the sensor, usually by observing
reflectivity of infrared or ultrasound emitted.
Often these sensors are unit less and as such conversion
to SI units is not possible. Higher proximity measurements
@@ -1255,12 +1255,23 @@ Description:
What: /sys/.../iio:deviceX/in_intensityY_raw
What: /sys/.../iio:deviceX/in_intensityY_ir_raw
What: /sys/.../iio:deviceX/in_intensityY_both_raw
+What: /sys/.../iio:deviceX/in_intensityY_uv_raw
KernelVersion: 3.4
Contact: linux-iio@vger.kernel.org
Description:
Unit-less light intensity. Modifiers both and ir indicate
that measurements contains visible and infrared light
- components or just infrared light, respectively.
+ components or just infrared light, respectively. Modifier uv indicates
+ that measurements contain ultraviolet light components.
+
+What: /sys/.../iio:deviceX/in_uvindex_input
+KernelVersion: 4.6
+Contact: linux-iio@vger.kernel.org
+Description:
+ UV light intensity index measuring the human skin's response to
+		different wavelengths of sunlight weighted according to the
+ standardised CIE Erythemal Action Spectrum. UV index values range
+ from 0 (low) to >=11 (extreme).
What: /sys/.../iio:deviceX/in_intensity_red_integration_time
What: /sys/.../iio:deviceX/in_intensity_green_integration_time
@@ -1501,3 +1512,56 @@ Contact: linux-iio@vger.kernel.org
Description:
Raw (unscaled no offset etc.) pH reading of a substance as a negative
base-10 logarithm of hydrodium ions in a litre of water.
+
+What: /sys/bus/iio/devices/iio:deviceX/mount_matrix
+What: /sys/bus/iio/devices/iio:deviceX/in_mount_matrix
+What: /sys/bus/iio/devices/iio:deviceX/out_mount_matrix
+What: /sys/bus/iio/devices/iio:deviceX/in_anglvel_mount_matrix
+What: /sys/bus/iio/devices/iio:deviceX/in_accel_mount_matrix
+KernelVersion: 4.6
+Contact: linux-iio@vger.kernel.org
+Description:
+ Mounting matrix for IIO sensors. This is a rotation matrix which
+		informs userspace about the sensor chip's placement relative to
+		the
+ main hardware it is mounted on.
+ Main hardware placement is defined according to the local
+ reference frame related to the physical quantity the sensor
+ measures.
+ Given that the rotation matrix is defined in a board specific
+ way (platform data and / or device-tree), the main hardware
+ reference frame definition is left to the implementor's choice
+ (see below for a magnetometer example).
+		Applications should apply this rotation matrix to samples so
+		that, when the main hardware reference frame is aligned with
+		the local reference frame, the sensor chip reference frame is
+		also perfectly aligned with it.
+ Matrix is a 3x3 unitary matrix and typically looks like
+ [0, 1, 0; 1, 0, 0; 0, 0, -1]. Identity matrix
+ [1, 0, 0; 0, 1, 0; 0, 0, 1] means sensor chip and main hardware
+ are perfectly aligned with each other.
+
+ For example, a mounting matrix for a magnetometer sensor informs
+		userspace about the sensor chip's ORIENTATION relative to the main
+ hardware.
+ More specifically, main hardware orientation is defined with
+ respect to the LOCAL EARTH GEOMAGNETIC REFERENCE FRAME where :
+ * Y is in the ground plane and positive towards magnetic North ;
+ * X is in the ground plane, perpendicular to the North axis and
+ positive towards the East ;
+ * Z is perpendicular to the ground plane and positive upwards.
+
+ An implementor might consider that for a hand-held device, a
+ 'natural' orientation would be 'front facing camera at the top'.
+ The main hardware reference frame could then be described as :
+ * Y is in the plane of the screen and is positive towards the
+ top of the screen ;
+ * X is in the plane of the screen, perpendicular to Y axis, and
+ positive towards the right hand side of the screen ;
+ * Z is perpendicular to the screen plane and positive out of the
+ screen.
+ Another example for a quadrotor UAV might be :
+ * Y is in the plane of the propellers and positive towards the
+ front-view camera;
+ * X is in the plane of the propellers, perpendicular to Y axis,
+ and positive towards the starboard side of the UAV ;
+ * Z is perpendicular to propellers plane and positive upwards.
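+
+	Applying the matrix is a plain 3x3 rotation of each sample vector. A
+	short illustrative sketch in C, using the example matrix from the text
+	above; the apply_mount_matrix() helper and the sample values are made
+	up for the example, and parsing the mount_matrix string from sysfs is
+	omitted:
+
+	#include <stdio.h>
+
+	/* Rotate a raw sensor sample into the main hardware reference frame. */
+	static void apply_mount_matrix(const double m[3][3], const double in[3],
+				       double out[3])
+	{
+		int i;
+
+		for (i = 0; i < 3; i++)
+			out[i] = m[i][0] * in[0] + m[i][1] * in[1] +
+				 m[i][2] * in[2];
+	}
+
+	int main(void)
+	{
+		/* Example matrix [0, 1, 0; 1, 0, 0; 0, 0, -1] from the text. */
+		const double m[3][3] = { { 0, 1, 0 }, { 1, 0, 0 }, { 0, 0, -1 } };
+		const double raw[3] = { 0.12, 9.81, -0.05 };	/* x, y, z */
+		double fixed[3];
+
+		apply_mount_matrix(m, raw, fixed);
+		printf("corrected: %f %f %f\n", fixed[0], fixed[1], fixed[2]);
+		return 0;
+	}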
diff --git a/Documentation/ABI/testing/sysfs-bus-mcb b/Documentation/ABI/testing/sysfs-bus-mcb
new file mode 100644
index 000000000000..77947c509796
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-bus-mcb
@@ -0,0 +1,29 @@
+What: /sys/bus/mcb/devices/mcb:X
+Date: March 2016
+KernelVersion: 4.7
+Contact: Johannes Thumshirn <jth@kernel.org>
+Description: Hardware chip or device hosting the MEN chameleon bus
+
+What: /sys/bus/mcb/devices/mcb:X/revision
+Date: March 2016
+KernelVersion: 4.7
+Contact: Johannes Thumshirn <jth@kernel.org>
+Description: The FPGA's revision number
+
+What: /sys/bus/mcb/devices/mcb:X/minor
+Date: March 2016
+KernelVersion: 4.7
+Contact: Johannes Thumshirn <jth@kernel.org>
+Description: The FPGA's minor number
+
+What: /sys/bus/mcb/devices/mcb:X/model
+Date: March 2016
+KernelVersion: 4.7
+Contact: Johannes Thumshirn <jth@kernel.org>
+Description: The FPGA's model number
+
+What: /sys/bus/mcb/devices/mcb:X/name
+Date: March 2016
+KernelVersion: 4.7
+Contact: Johannes Thumshirn <jth@kernel.org>
+Description: The FPGA's name
diff --git a/Documentation/ABI/testing/sysfs-class-cxl b/Documentation/ABI/testing/sysfs-class-cxl
index 7fd737eed38a..4ba0a2a61926 100644
--- a/Documentation/ABI/testing/sysfs-class-cxl
+++ b/Documentation/ABI/testing/sysfs-class-cxl
@@ -233,3 +233,11 @@ Description: read/write
0 = don't trust, the image may be different (default)
1 = trust that the image will not change.
Users: https://github.com/ibm-capi/libcxl
+
+What: /sys/class/cxl/<card>/psl_timebase_synced
+Date: March 2016
+Contact: linuxppc-dev@lists.ozlabs.org
+Description: read only
+ Returns 1 if the psl timebase register is synchronized
+ with the core timebase register, 0 otherwise.
+Users: https://github.com/ibm-capi/libcxl
diff --git a/Documentation/ABI/testing/sysfs-class-stm b/Documentation/ABI/testing/sysfs-class-stm
index c9aa4f3fc9a7..77ed3da0f68e 100644
--- a/Documentation/ABI/testing/sysfs-class-stm
+++ b/Documentation/ABI/testing/sysfs-class-stm
@@ -12,3 +12,13 @@ KernelVersion: 4.3
Contact: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Description:
Shows the number of channels per master on this STM device.
+
+What: /sys/class/stm/<stm>/hw_override
+Date: March 2016
+KernelVersion: 4.7
+Contact: Alexander Shishkin <alexander.shishkin@linux.intel.com>
+Description:
+ Reads as 0 if master numbers in the STP stream produced by
+ this stm device will match the master numbers assigned by
+ the software or 1 if the stm hardware overrides software
+ assigned masters.
diff --git a/Documentation/ABI/testing/sysfs-driver-hid-picolcd b/Documentation/ABI/testing/sysfs-driver-hid-picolcd
index 08579e7e1e89..98fd81ad76a1 100644
--- a/Documentation/ABI/testing/sysfs-driver-hid-picolcd
+++ b/Documentation/ABI/testing/sysfs-driver-hid-picolcd
@@ -39,5 +39,5 @@ Description: Make it possible to adjust defio refresh rate.
Note: As device can barely do 2 complete refreshes a second
it only makes sense to adjust this value if only one or two
tiles get changed and it's not appropriate to expect the application
- to flush it's tiny changes explicitely at higher than default rate.
+ to flush its tiny changes explicitly at higher than default rate.
diff --git a/Documentation/ABI/testing/sysfs-firmware-acpi b/Documentation/ABI/testing/sysfs-firmware-acpi
index b4436cca97a8..c7fc72d4495c 100644
--- a/Documentation/ABI/testing/sysfs-firmware-acpi
+++ b/Documentation/ABI/testing/sysfs-firmware-acpi
@@ -169,7 +169,7 @@ Description:
to enable/disable/clear ACPI interrupts in user space, which can be
used to debug some ACPI interrupt storm issues.
- Note that only writting to VALID GPE/Fixed Event is allowed,
+ Note that only writing to VALID GPE/Fixed Event is allowed,
i.e. user can only change the status of runtime GPE and
Fixed Event with event handler installed.
diff --git a/Documentation/ABI/testing/sysfs-ibft b/Documentation/ABI/testing/sysfs-ibft
index cac3930bdb04..7d6725fe6143 100644
--- a/Documentation/ABI/testing/sysfs-ibft
+++ b/Documentation/ABI/testing/sysfs-ibft
@@ -21,3 +21,13 @@ Contact: Konrad Rzeszutek <ketuzsezr@darnok.org>
Description: The /sys/firmware/ibft/ethernetX directory will contain
files that expose the iSCSI Boot Firmware Table NIC data.
Usually this contains the IP address, MAC, and gateway of the NIC.
+
+What: /sys/firmware/ibft/acpi_header
+Date: March 2016
+Contact: David Bond <dbond@suse.com>
+Description: The /sys/firmware/ibft/acpi_header directory will contain files
+ that expose the SIGNATURE, OEM_ID, and OEM_TABLE_ID fields of the
+		ACPI table header of the iBFT structure. This will allow for
+ identification of the creator of the table which is useful in
+ determining quirks associated with some adapters when used in
+		hardware vs. software iSCSI initiator mode.
diff --git a/Documentation/ABI/testing/sysfs-platform-hidma b/Documentation/ABI/testing/sysfs-platform-hidma
new file mode 100644
index 000000000000..d36441538660
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-platform-hidma
@@ -0,0 +1,9 @@
+What: /sys/devices/platform/hidma-*/chid
+ /sys/devices/platform/QCOM8061:*/chid
+Date: Dec 2015
+KernelVersion: 4.4
+Contact:	"Sinan Kaya <okaya@codeaurora.org>"
+Description:
+ Contains the ID of the channel within the HIDMA instance.
+ It is used to associate a given HIDMA channel with the
+ priority and weight calls in the management interface.
diff --git a/Documentation/ABI/testing/sysfs-platform-usbip-vudc b/Documentation/ABI/testing/sysfs-platform-usbip-vudc
new file mode 100644
index 000000000000..81fcfb454913
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-platform-usbip-vudc
@@ -0,0 +1,35 @@
+What: /sys/devices/platform/usbip-vudc.%d/dev_desc
+Date: April 2016
+KernelVersion: 4.6
+Contact: Krzysztof Opasiak <k.opasiak@samsung.com>
+Description:
+		This file allows the user to read the device descriptor of
+		the gadget driver which is currently bound to this
+		controller. It is possible to read this file
+		only if a gadget driver is bound, otherwise an
+		error is returned.
+
+What: /sys/devices/platform/usbip-vudc.%d/usbip_status
+Date: April 2016
+KernelVersion: 4.6
+Contact: Krzysztof Opasiak <k.opasiak@samsung.com>
+Description:
+ Current status of the device.
+ Allowed values:
+ 1 - Device is available and can be exported
+ 2 - Device is currently exported
+ 3 - Fatal error occurred during communication
+ with peer
+
+What: /sys/devices/platform/usbip-vudc.%d/usbip_sockfd
+Date: April 2016
+KernelVersion: 4.6
+Contact: Krzysztof Opasiak <k.opasiak@samsung.com>
+Description:
+		This file allows the user to export a USB device to a
+		connection peer. It is done by writing to this
+		file the socket fd (as a string, for example "8")
+		associated with a connection to the remote peer that
+		would like to use this device. It is possible to
+		close the connection by writing -1 instead of a
+		socket fd.
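+
+		Since the file only needs the descriptor number as text, the
+		write itself is a one-liner from the process that owns the
+		connected socket. A minimal sketch in C, assuming the instance
+		name usbip-vudc.0, a sockfd that already refers to a socket
+		connected to the peer, and an export_gadget() helper invented
+		for the example:
+
+	#include <stdio.h>
+
+	/* Hand the connected socket over to the vudc instance; write "-1"
+	 * to the same file later to close the connection. */
+	static int export_gadget(int sockfd)
+	{
+		FILE *f = fopen(
+			"/sys/devices/platform/usbip-vudc.0/usbip_sockfd", "w");
+
+		if (!f)
+			return -1;
+		fprintf(f, "%d", sockfd);
+		return fclose(f);
+	}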
diff --git a/Documentation/DocBook/80211.tmpl b/Documentation/DocBook/80211.tmpl
index f9b9ad7894f5..5f7c55999c77 100644
--- a/Documentation/DocBook/80211.tmpl
+++ b/Documentation/DocBook/80211.tmpl
@@ -75,7 +75,6 @@
<chapter>
<title>Device registration</title>
!Pinclude/net/cfg80211.h Device registration
-!Finclude/net/cfg80211.h ieee80211_band
!Finclude/net/cfg80211.h ieee80211_channel_flags
!Finclude/net/cfg80211.h ieee80211_channel
!Finclude/net/cfg80211.h ieee80211_rate_flags
@@ -136,6 +135,7 @@
!Finclude/net/cfg80211.h cfg80211_tx_mlme_mgmt
!Finclude/net/cfg80211.h cfg80211_ibss_joined
!Finclude/net/cfg80211.h cfg80211_connect_result
+!Finclude/net/cfg80211.h cfg80211_connect_bss
!Finclude/net/cfg80211.h cfg80211_roamed
!Finclude/net/cfg80211.h cfg80211_disconnected
!Finclude/net/cfg80211.h cfg80211_ready_on_channel
diff --git a/Documentation/DocBook/crypto-API.tmpl b/Documentation/DocBook/crypto-API.tmpl
index 348619fcafb8..d55dc5a39bad 100644
--- a/Documentation/DocBook/crypto-API.tmpl
+++ b/Documentation/DocBook/crypto-API.tmpl
@@ -1936,9 +1936,9 @@ static int test_skcipher(void)
}
req = skcipher_request_alloc(skcipher, GFP_KERNEL);
- if (IS_ERR(req)) {
- pr_info("could not allocate request queue\n");
- ret = PTR_ERR(req);
+ if (!req) {
+ pr_info("could not allocate skcipher request\n");
+ ret = -ENOMEM;
goto out;
}
diff --git a/Documentation/DocBook/debugobjects.tmpl b/Documentation/DocBook/debugobjects.tmpl
index 24979f691e3e..7e4f34fde697 100644
--- a/Documentation/DocBook/debugobjects.tmpl
+++ b/Documentation/DocBook/debugobjects.tmpl
@@ -316,8 +316,8 @@
</itemizedlist>
</para>
<para>
- The function returns 1 when the fixup was successful,
- otherwise 0. The return value is used to update the
+ The function returns true when the fixup was successful,
+ otherwise false. The return value is used to update the
statistics.
</para>
<para>
@@ -341,8 +341,8 @@
</itemizedlist>
</para>
<para>
- The function returns 1 when the fixup was successful,
- otherwise 0. The return value is used to update the
+ The function returns true when the fixup was successful,
+ otherwise false. The return value is used to update the
statistics.
</para>
<para>
@@ -359,7 +359,8 @@
statically initialized object or not. In case it is it calls
debug_object_init() and debug_object_activate() to make the
object known to the tracker and marked active. In this case
- the function should return 0 because this is not a real fixup.
+ the function should return false because this is not a real
+ fixup.
</para>
</sect1>
@@ -376,8 +377,8 @@
</itemizedlist>
</para>
<para>
- The function returns 1 when the fixup was successful,
- otherwise 0. The return value is used to update the
+ The function returns true when the fixup was successful,
+ otherwise false. The return value is used to update the
statistics.
</para>
</sect1>
@@ -397,8 +398,8 @@
</itemizedlist>
</para>
<para>
- The function returns 1 when the fixup was successful,
- otherwise 0. The return value is used to update the
+ The function returns true when the fixup was successful,
+ otherwise false. The return value is used to update the
statistics.
</para>
</sect1>
@@ -414,8 +415,8 @@
debug bucket.
</para>
<para>
- The function returns 1 when the fixup was successful,
- otherwise 0. The return value is used to update the
+ The function returns true when the fixup was successful,
+ otherwise false. The return value is used to update the
statistics.
</para>
<para>
@@ -427,7 +428,8 @@
case. The fixup function should check if this is a legitimate
case of a statically initialized object or not. In this case only
debug_object_init() should be called to make the object known to
- the tracker. Then the function should return 0 because this is not
+ the tracker. Then the function should return false because this
+ is not
a real fixup.
</para>
</sect1>
diff --git a/Documentation/DocBook/device-drivers.tmpl b/Documentation/DocBook/device-drivers.tmpl
index 184f3c7b5145..de79efdad46c 100644
--- a/Documentation/DocBook/device-drivers.tmpl
+++ b/Documentation/DocBook/device-drivers.tmpl
@@ -136,6 +136,8 @@ X!Edrivers/base/interface.c
!Iinclude/linux/seqno-fence.h
!Edrivers/dma-buf/reservation.c
!Iinclude/linux/reservation.h
+!Edrivers/dma-buf/sync_file.c
+!Iinclude/linux/sync_file.h
!Edrivers/base/dma-coherent.c
!Edrivers/base/dma-mapping.c
</sect1>
@@ -233,6 +235,7 @@ X!Isound/sound_firmware.c
!Iinclude/media/v4l2-mediabus.h
!Iinclude/media/v4l2-mem2mem.h
!Iinclude/media/v4l2-of.h
+!Iinclude/media/v4l2-rect.h
!Iinclude/media/v4l2-subdev.h
!Iinclude/media/videobuf2-core.h
!Iinclude/media/videobuf2-v4l2.h
diff --git a/Documentation/DocBook/gpu.tmpl b/Documentation/DocBook/gpu.tmpl
index 221a4929ca59..b3a04b521b22 100644
--- a/Documentation/DocBook/gpu.tmpl
+++ b/Documentation/DocBook/gpu.tmpl
@@ -1612,6 +1612,12 @@ void intel_crt_init(struct drm_device *dev)
!Edrivers/gpu/drm/drm_dp_helper.c
</sect2>
<sect2>
+ <title>Display Port Dual Mode Adaptor Helper Functions Reference</title>
+!Pdrivers/gpu/drm/drm_dp_dual_mode_helper.c dp dual mode helpers
+!Iinclude/drm/drm_dp_dual_mode_helper.h
+!Edrivers/gpu/drm/drm_dp_dual_mode_helper.c
+ </sect2>
+ <sect2>
<title>Display Port MST Helper Functions Reference</title>
!Pdrivers/gpu/drm/drm_dp_mst_topology.c dp mst helper
!Iinclude/drm/drm_dp_mst_helper.h
diff --git a/Documentation/DocBook/media/dvb/net.xml b/Documentation/DocBook/media/dvb/net.xml
index d2e44b7e07df..da095ed0b75c 100644
--- a/Documentation/DocBook/media/dvb/net.xml
+++ b/Documentation/DocBook/media/dvb/net.xml
@@ -15,7 +15,7 @@
that are present on the transport stream. This is done through
<constant>/dev/dvb/adapter?/net?</constant> device node.
The data will be available via virtual <constant>dvb?_?</constant>
- network interfaces, and will be controled/routed via the standard
+ network interfaces, and will be controlled/routed via the standard
ip tools (like ip, route, netstat, ifconfig, etc).</para>
<para> Data types and and ioctl definitions are defined via
<constant>linux/dvb/net.h</constant> header.</para>
diff --git a/Documentation/DocBook/media/v4l/compat.xml b/Documentation/DocBook/media/v4l/compat.xml
index 5399e8904715..82fa328abd58 100644
--- a/Documentation/DocBook/media/v4l/compat.xml
+++ b/Documentation/DocBook/media/v4l/compat.xml
@@ -2686,50 +2686,12 @@ and may change in the future.</para>
<itemizedlist>
<listitem>
- <para>Video Output Overlay (OSD) Interface, <xref
- linkend="osd" />.</para>
- </listitem>
- <listitem>
<para>&VIDIOC-DBG-G-REGISTER; and &VIDIOC-DBG-S-REGISTER;
ioctls.</para>
</listitem>
<listitem>
<para>&VIDIOC-DBG-G-CHIP-INFO; ioctl.</para>
</listitem>
- <listitem>
- <para>&VIDIOC-ENUM-DV-TIMINGS;, &VIDIOC-QUERY-DV-TIMINGS; and
- &VIDIOC-DV-TIMINGS-CAP; ioctls.</para>
- </listitem>
- <listitem>
- <para>Flash API. <xref linkend="flash-controls" /></para>
- </listitem>
- <listitem>
- <para>&VIDIOC-CREATE-BUFS; and &VIDIOC-PREPARE-BUF; ioctls.</para>
- </listitem>
- <listitem>
- <para>Selection API. <xref linkend="selection-api" /></para>
- </listitem>
- <listitem>
- <para>Sub-device selection API: &VIDIOC-SUBDEV-G-SELECTION;
- and &VIDIOC-SUBDEV-S-SELECTION; ioctls.</para>
- </listitem>
- <listitem>
- <para>Support for frequency band enumeration: &VIDIOC-ENUM-FREQ-BANDS; ioctl.</para>
- </listitem>
- <listitem>
- <para>Vendor and device specific media bus pixel formats.
- <xref linkend="v4l2-mbus-vendor-spec-fmts" />.</para>
- </listitem>
- <listitem>
- <para>Importing DMABUF file descriptors as a new IO method described
- in <xref linkend="dmabuf" />.</para>
- </listitem>
- <listitem>
- <para>Exporting DMABUF files using &VIDIOC-EXPBUF; ioctl.</para>
- </listitem>
- <listitem>
- <para>Software Defined Radio (SDR) Interface, <xref linkend="sdr" />.</para>
- </listitem>
</itemizedlist>
</section>
diff --git a/Documentation/DocBook/media/v4l/controls.xml b/Documentation/DocBook/media/v4l/controls.xml
index 361040e6b0f4..e2e5484d2d9b 100644
--- a/Documentation/DocBook/media/v4l/controls.xml
+++ b/Documentation/DocBook/media/v4l/controls.xml
@@ -2841,7 +2841,7 @@ for a GOP and keep it below or equal the set bitrate target. Otherwise the rate
overall average bitrate for the stream and keeps it below or equal to the set bitrate. In the first case
the average bitrate for the whole stream will be smaller then the set bitrate. This is caused because the
average is calculated for smaller number of frames, on the other hand enabling this setting will ensure that
-the stream will meet tight bandwidth contraints. Applicable to encoders.
+the stream will meet tight bandwidth constraints. Applicable to encoders.
</entry>
</row>
<row><entry></entry></row>
@@ -4272,13 +4272,6 @@ manually or automatically if set to zero. Unit, range and step are driver-specif
<section id="flash-controls">
<title>Flash Control Reference</title>
- <note>
- <title>Experimental</title>
-
- <para>This is an <link linkend="experimental">experimental</link>
-interface and may change in the future.</para>
- </note>
-
<para>
The V4L2 flash controls are intended to provide generic access
to flash controller devices. Flash controller devices are
@@ -4743,14 +4736,6 @@ interface and may change in the future.</para>
<section id="image-source-controls">
<title>Image Source Control Reference</title>
- <note>
- <title>Experimental</title>
-
- <para>This is an <link
- linkend="experimental">experimental</link> interface and may
- change in the future.</para>
- </note>
-
<para>
The Image Source control class is intended for low-level
control of image source devices such as image sensors. The
@@ -4862,14 +4847,6 @@ interface and may change in the future.</para>
<section id="image-process-controls">
<title>Image Process Control Reference</title>
- <note>
- <title>Experimental</title>
-
- <para>This is an <link
- linkend="experimental">experimental</link> interface and may
- change in the future.</para>
- </note>
-
<para>
The Image Process control class is intended for low-level control of
image processing functions. Unlike
@@ -4955,14 +4932,6 @@ interface and may change in the future.</para>
<section id="dv-controls">
<title>Digital Video Control Reference</title>
- <note>
- <title>Experimental</title>
-
- <para>This is an <link
- linkend="experimental">experimental</link> interface and may
- change in the future.</para>
- </note>
-
<para>
The Digital Video control class is intended to control receivers
and transmitters for <ulink url="http://en.wikipedia.org/wiki/Vga">VGA</ulink>,
diff --git a/Documentation/DocBook/media/v4l/dev-raw-vbi.xml b/Documentation/DocBook/media/v4l/dev-raw-vbi.xml
index f4b61b6ce3c2..78599bbd58f7 100644
--- a/Documentation/DocBook/media/v4l/dev-raw-vbi.xml
+++ b/Documentation/DocBook/media/v4l/dev-raw-vbi.xml
@@ -85,7 +85,7 @@ initialize all fields of the &v4l2-vbi-format;
results of <constant>VIDIOC_G_FMT</constant>, and call the
&VIDIOC-S-FMT; ioctl with a pointer to this structure. Drivers return
an &EINVAL; only when the given parameters are ambiguous, otherwise
-they modify the parameters according to the hardware capabilites and
+they modify the parameters according to the hardware capabilities and
return the actual parameters. When the driver allocates resources at
this point, it may return an &EBUSY; to indicate the returned
parameters are valid but the required resources are currently not
diff --git a/Documentation/DocBook/media/v4l/dev-sdr.xml b/Documentation/DocBook/media/v4l/dev-sdr.xml
index a659771f7b7c..6da1157fb5bd 100644
--- a/Documentation/DocBook/media/v4l/dev-sdr.xml
+++ b/Documentation/DocBook/media/v4l/dev-sdr.xml
@@ -1,11 +1,5 @@
<title>Software Defined Radio Interface (SDR)</title>
- <note>
- <title>Experimental</title>
- <para>This is an <link linkend="experimental"> experimental </link>
- interface and may change in the future.</para>
- </note>
-
<para>
SDR is an abbreviation of Software Defined Radio, the radio device
which uses application software for modulation or demodulation. This interface
diff --git a/Documentation/DocBook/media/v4l/dev-subdev.xml b/Documentation/DocBook/media/v4l/dev-subdev.xml
index 4f0ba58c9bd9..f4bc27af83eb 100644
--- a/Documentation/DocBook/media/v4l/dev-subdev.xml
+++ b/Documentation/DocBook/media/v4l/dev-subdev.xml
@@ -1,11 +1,5 @@
<title>Sub-device Interface</title>
- <note>
- <title>Experimental</title>
- <para>This is an <link linkend="experimental">experimental</link>
- interface and may change in the future.</para>
- </note>
-
<para>The complex nature of V4L2 devices, where hardware is often made of
several integrated circuits that need to interact with each other in a
controlled way, leads to complex V4L2 drivers. The drivers usually reflect
diff --git a/Documentation/DocBook/media/v4l/io.xml b/Documentation/DocBook/media/v4l/io.xml
index 144158b3a5ac..e09025db92bd 100644
--- a/Documentation/DocBook/media/v4l/io.xml
+++ b/Documentation/DocBook/media/v4l/io.xml
@@ -475,12 +475,6 @@ rest should be evident.</para>
<section id="dmabuf">
<title>Streaming I/O (DMA buffer importing)</title>
- <note>
- <title>Experimental</title>
- <para>This is an <link linkend="experimental">experimental</link>
- interface and may change in the future.</para>
- </note>
-
<para>The DMABUF framework provides a generic method for sharing buffers
between multiple devices. Device drivers that support DMABUF can export a DMA
buffer to userspace as a file descriptor (known as the exporter role), import a
diff --git a/Documentation/DocBook/media/v4l/selection-api.xml b/Documentation/DocBook/media/v4l/selection-api.xml
index 28cbded766c9..b764cba150d1 100644
--- a/Documentation/DocBook/media/v4l/selection-api.xml
+++ b/Documentation/DocBook/media/v4l/selection-api.xml
@@ -1,13 +1,6 @@
<section id="selection-api">
- <title>Experimental API for cropping, composing and scaling</title>
-
- <note>
- <title>Experimental</title>
-
- <para>This is an <link linkend="experimental">experimental</link>
-interface and may change in the future.</para>
- </note>
+ <title>API for cropping, composing and scaling</title>
<section>
<title>Introduction</title>
diff --git a/Documentation/DocBook/media/v4l/subdev-formats.xml b/Documentation/DocBook/media/v4l/subdev-formats.xml
index 4e73345e3eab..199c84e3aede 100644
--- a/Documentation/DocBook/media/v4l/subdev-formats.xml
+++ b/Documentation/DocBook/media/v4l/subdev-formats.xml
@@ -4002,12 +4002,6 @@ see <xref linkend="colorspaces" />.</entry>
<section id="v4l2-mbus-vendor-spec-fmts">
<title>Vendor and Device Specific Formats</title>
- <note>
- <title>Experimental</title>
- <para>This is an <link linkend="experimental">experimental</link>
-interface and may change in the future.</para>
- </note>
-
<para>This section lists complex data formats that are either vendor or
device specific.
</para>
diff --git a/Documentation/DocBook/media/v4l/vidioc-create-bufs.xml b/Documentation/DocBook/media/v4l/vidioc-create-bufs.xml
index d81fa0d4016b..6528e97b8990 100644
--- a/Documentation/DocBook/media/v4l/vidioc-create-bufs.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-create-bufs.xml
@@ -49,12 +49,6 @@
<refsect1>
<title>Description</title>
- <note>
- <title>Experimental</title>
- <para>This is an <link linkend="experimental"> experimental </link>
- interface and may change in the future.</para>
- </note>
-
<para>This ioctl is used to create buffers for <link linkend="mmap">memory
mapped</link> or <link linkend="userp">user pointer</link> or <link
linkend="dmabuf">DMA buffer</link> I/O. It can be used as an alternative or in
diff --git a/Documentation/DocBook/media/v4l/vidioc-dv-timings-cap.xml b/Documentation/DocBook/media/v4l/vidioc-dv-timings-cap.xml
index a2017bfcaed2..ca9ffce9b4c1 100644
--- a/Documentation/DocBook/media/v4l/vidioc-dv-timings-cap.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-dv-timings-cap.xml
@@ -49,14 +49,9 @@
<refsect1>
<title>Description</title>
- <note>
- <title>Experimental</title>
- <para>This is an <link linkend="experimental"> experimental </link>
- interface and may change in the future.</para>
- </note>
-
- <para>To query the capabilities of the DV receiver/transmitter applications
-can call the <constant>VIDIOC_DV_TIMINGS_CAP</constant> ioctl on a video node
+  <para>To query the capabilities of the DV receiver/transmitter, applications initialize the
+<structfield>pad</structfield> field to 0, zero the reserved array of &v4l2-dv-timings-cap;
+and call the <constant>VIDIOC_DV_TIMINGS_CAP</constant> ioctl on a video node
and the driver will fill in the structure. Note that drivers may return
different values after switching the video input or output.</para>
@@ -65,8 +60,8 @@ queried by calling the <constant>VIDIOC_SUBDEV_DV_TIMINGS_CAP</constant> ioctl
directly on a subdevice node. The capabilities are specific to inputs (for DV
receivers) or outputs (for DV transmitters), applications must specify the
desired pad number in the &v4l2-dv-timings-cap; <structfield>pad</structfield>
-field. Attempts to query capabilities on a pad that doesn't support them will
-return an &EINVAL;.</para>
+field and zero the <structfield>reserved</structfield> array. Attempts to query
+capabilities on a pad that doesn't support them will return an &EINVAL;.</para>
<table pgwide="1" frame="none" id="v4l2-bt-timings-cap">
<title>struct <structname>v4l2_bt_timings_cap</structname></title>
@@ -145,7 +140,8 @@ return an &EINVAL;.</para>
<row>
<entry>__u32</entry>
<entry><structfield>reserved</structfield>[2]</entry>
- <entry>Reserved for future extensions. Drivers must set the array to zero.</entry>
+ <entry>Reserved for future extensions. Drivers and applications must
+ set the array to zero.</entry>
</row>
<row>
<entry>union</entry>
diff --git a/Documentation/DocBook/media/v4l/vidioc-enum-dv-timings.xml b/Documentation/DocBook/media/v4l/vidioc-enum-dv-timings.xml
index 6e3cadd4e1f9..9b3d42018b69 100644
--- a/Documentation/DocBook/media/v4l/vidioc-enum-dv-timings.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-enum-dv-timings.xml
@@ -49,20 +49,15 @@
<refsect1>
<title>Description</title>
- <note>
- <title>Experimental</title>
- <para>This is an <link linkend="experimental"> experimental </link>
- interface and may change in the future.</para>
- </note>
-
<para>While some DV receivers or transmitters support a wide range of timings, others
support only a limited number of timings. With this ioctl applications can enumerate a list
of known supported timings. Call &VIDIOC-DV-TIMINGS-CAP; to check if it also supports other
standards or even custom timings that are not in this list.</para>
<para>To query the available timings, applications initialize the
-<structfield>index</structfield> field and zero the reserved array of &v4l2-enum-dv-timings;
-and call the <constant>VIDIOC_ENUM_DV_TIMINGS</constant> ioctl on a video node with a
+<structfield>index</structfield> field, set the <structfield>pad</structfield> field to 0,
+zero the reserved array of &v4l2-enum-dv-timings; and call the
+<constant>VIDIOC_ENUM_DV_TIMINGS</constant> ioctl on a video node with a
pointer to this structure. Drivers fill the rest of the structure or return an
&EINVAL; when the index is out of bounds. To enumerate all supported DV timings,
applications shall begin at index zero, incrementing by one until the
diff --git a/Documentation/DocBook/media/v4l/vidioc-enum-freq-bands.xml b/Documentation/DocBook/media/v4l/vidioc-enum-freq-bands.xml
index 4e8ea65f7282..a0608abc1ab8 100644
--- a/Documentation/DocBook/media/v4l/vidioc-enum-freq-bands.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-enum-freq-bands.xml
@@ -49,12 +49,6 @@
<refsect1>
<title>Description</title>
- <note>
- <title>Experimental</title>
- <para>This is an <link linkend="experimental"> experimental </link>
- interface and may change in the future.</para>
- </note>
-
<para>Enumerates the frequency bands that a tuner or modulator supports.
To do this applications initialize the <structfield>tuner</structfield>,
<structfield>type</structfield> and <structfield>index</structfield> fields,
diff --git a/Documentation/DocBook/media/v4l/vidioc-expbuf.xml b/Documentation/DocBook/media/v4l/vidioc-expbuf.xml
index 0ae0b6a915d0..a6558a676ef3 100644
--- a/Documentation/DocBook/media/v4l/vidioc-expbuf.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-expbuf.xml
@@ -49,12 +49,6 @@
<refsect1>
<title>Description</title>
- <note>
- <title>Experimental</title>
- <para>This is an <link linkend="experimental"> experimental </link>
- interface and may change in the future.</para>
- </note>
-
<para>This ioctl is an extension to the <link linkend="mmap">memory
mapping</link> I/O method, therefore it is available only for
<constant>V4L2_MEMORY_MMAP</constant> buffers. It can be used to export a
diff --git a/Documentation/DocBook/media/v4l/vidioc-g-edid.xml b/Documentation/DocBook/media/v4l/vidioc-g-edid.xml
index 2702536bbc7c..b7602d30f596 100644
--- a/Documentation/DocBook/media/v4l/vidioc-g-edid.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-g-edid.xml
@@ -1,6 +1,6 @@
<refentry id="vidioc-g-edid">
<refmeta>
- <refentrytitle>ioctl VIDIOC_G_EDID, VIDIOC_S_EDID</refentrytitle>
+ <refentrytitle>ioctl VIDIOC_G_EDID, VIDIOC_S_EDID, VIDIOC_SUBDEV_G_EDID, VIDIOC_SUBDEV_S_EDID</refentrytitle>
&manvol;
</refmeta>
@@ -71,7 +71,8 @@
<para>To get the EDID data the application has to fill in the <structfield>pad</structfield>,
<structfield>start_block</structfield>, <structfield>blocks</structfield> and <structfield>edid</structfield>
- fields and call <constant>VIDIOC_G_EDID</constant>. The current EDID from block
+ fields, zero the <structfield>reserved</structfield> array and call
+ <constant>VIDIOC_G_EDID</constant>. The current EDID from block
<structfield>start_block</structfield> and of size <structfield>blocks</structfield>
will be placed in the memory <structfield>edid</structfield> points to. The <structfield>edid</structfield>
pointer must point to memory at least <structfield>blocks</structfield>&nbsp;*&nbsp;128 bytes
@@ -92,8 +93,9 @@
the driver will set <structfield>blocks</structfield> to 0 and it returns 0.</para>
<para>To set the EDID blocks of a receiver the application has to fill in the <structfield>pad</structfield>,
- <structfield>blocks</structfield> and <structfield>edid</structfield> fields and set
- <structfield>start_block</structfield> to 0. It is not possible to set part of an EDID,
+ <structfield>blocks</structfield> and <structfield>edid</structfield> fields, set
+ <structfield>start_block</structfield> to 0 and zero the <structfield>reserved</structfield> array.
+ It is not possible to set part of an EDID,
it is always all or nothing. Setting the EDID data is only valid for receivers as it makes
no sense for a transmitter.</para>
diff --git a/Documentation/DocBook/media/v4l/vidioc-g-selection.xml b/Documentation/DocBook/media/v4l/vidioc-g-selection.xml
index 7865351688da..997f4e96f297 100644
--- a/Documentation/DocBook/media/v4l/vidioc-g-selection.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-g-selection.xml
@@ -50,12 +50,6 @@
<refsect1>
<title>Description</title>
- <note>
- <title>Experimental</title>
- <para>This is an <link linkend="experimental"> experimental </link>
- interface and may change in the future.</para>
- </note>
-
<para>The ioctls are used to query and configure selection rectangles.</para>
<para>To query the cropping (composing) rectangle set &v4l2-selection;
@@ -222,7 +216,7 @@ or the <structfield>flags</structfield> argument is not valid.</para>
<term><errorcode>ERANGE</errorcode></term>
<listitem>
<para>It is not possible to adjust &v4l2-rect; <structfield>
-r</structfield> rectangle to satisfy all contraints given in the
+r</structfield> rectangle to satisfy all constraints given in the
<structfield>flags</structfield> argument.</para>
</listitem>
</varlistentry>
diff --git a/Documentation/DocBook/media/v4l/vidioc-prepare-buf.xml b/Documentation/DocBook/media/v4l/vidioc-prepare-buf.xml
index fa7ad7e33228..7bde698760e4 100644
--- a/Documentation/DocBook/media/v4l/vidioc-prepare-buf.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-prepare-buf.xml
@@ -48,12 +48,6 @@
<refsect1>
<title>Description</title>
- <note>
- <title>Experimental</title>
- <para>This is an <link linkend="experimental"> experimental </link>
- interface and may change in the future.</para>
- </note>
-
<para>Applications can optionally call the
<constant>VIDIOC_PREPARE_BUF</constant> ioctl to pass ownership of the buffer
to the driver before actually enqueuing it, using the
diff --git a/Documentation/DocBook/media/v4l/vidioc-query-dv-timings.xml b/Documentation/DocBook/media/v4l/vidioc-query-dv-timings.xml
index 0c93677d16b4..d41bf47ee5a2 100644
--- a/Documentation/DocBook/media/v4l/vidioc-query-dv-timings.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-query-dv-timings.xml
@@ -50,12 +50,6 @@ input</refpurpose>
<refsect1>
<title>Description</title>
- <note>
- <title>Experimental</title>
- <para>This is an <link linkend="experimental"> experimental </link>
- interface and may change in the future.</para>
- </note>
-
<para>The hardware may be able to detect the current DV timings
automatically, similar to sensing the video standard. To do so, applications
call <constant>VIDIOC_QUERY_DV_TIMINGS</constant> with a pointer to a
diff --git a/Documentation/DocBook/media/v4l/vidioc-streamon.xml b/Documentation/DocBook/media/v4l/vidioc-streamon.xml
index df2c63d07bac..89fd7ce964f9 100644
--- a/Documentation/DocBook/media/v4l/vidioc-streamon.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-streamon.xml
@@ -123,6 +123,14 @@ synchronize with other events.</para>
</para>
</listitem>
</varlistentry>
+ <varlistentry>
+ <term><errorcode>ENOLINK</errorcode></term>
+ <listitem>
+	  <para>The driver implements the Media Controller interface and
+ the pipeline link configuration is invalid.
+ </para>
+ </listitem>
+ </varlistentry>
</variablelist>
</refsect1>
</refentry>
diff --git a/Documentation/DocBook/media/v4l/vidioc-subdev-enum-frame-interval.xml b/Documentation/DocBook/media/v4l/vidioc-subdev-enum-frame-interval.xml
index cff59f5cbf04..9d0251a27e5f 100644
--- a/Documentation/DocBook/media/v4l/vidioc-subdev-enum-frame-interval.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-subdev-enum-frame-interval.xml
@@ -49,12 +49,6 @@
<refsect1>
<title>Description</title>
- <note>
- <title>Experimental</title>
- <para>This is an <link linkend="experimental">experimental</link>
- interface and may change in the future.</para>
- </note>
-
<para>This ioctl lets applications enumerate available frame intervals on a
given sub-device pad. Frame intervals only makes sense for sub-devices that
can control the frame period on their own. This includes, for instance,
diff --git a/Documentation/DocBook/media/v4l/vidioc-subdev-enum-frame-size.xml b/Documentation/DocBook/media/v4l/vidioc-subdev-enum-frame-size.xml
index abd545ede67a..9b91b8332ba9 100644
--- a/Documentation/DocBook/media/v4l/vidioc-subdev-enum-frame-size.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-subdev-enum-frame-size.xml
@@ -49,12 +49,6 @@
<refsect1>
<title>Description</title>
- <note>
- <title>Experimental</title>
- <para>This is an <link linkend="experimental">experimental</link>
- interface and may change in the future.</para>
- </note>
-
<para>This ioctl allows applications to enumerate all frame sizes
supported by a sub-device on the given pad for the given media bus format.
Supported formats can be retrieved with the &VIDIOC-SUBDEV-ENUM-MBUS-CODE;
diff --git a/Documentation/DocBook/media/v4l/vidioc-subdev-enum-mbus-code.xml b/Documentation/DocBook/media/v4l/vidioc-subdev-enum-mbus-code.xml
index 0bcb278fd062..c67256ada87a 100644
--- a/Documentation/DocBook/media/v4l/vidioc-subdev-enum-mbus-code.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-subdev-enum-mbus-code.xml
@@ -49,12 +49,6 @@
<refsect1>
<title>Description</title>
- <note>
- <title>Experimental</title>
- <para>This is an <link linkend="experimental">experimental</link>
- interface and may change in the future.</para>
- </note>
-
<para>To enumerate media bus formats available at a given sub-device pad
applications initialize the <structfield>pad</structfield>, <structfield>which</structfield>
and <structfield>index</structfield> fields of &v4l2-subdev-mbus-code-enum; and
diff --git a/Documentation/DocBook/media/v4l/vidioc-subdev-g-fmt.xml b/Documentation/DocBook/media/v4l/vidioc-subdev-g-fmt.xml
index a67cde6f8c54..781089cba453 100644
--- a/Documentation/DocBook/media/v4l/vidioc-subdev-g-fmt.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-subdev-g-fmt.xml
@@ -50,12 +50,6 @@
<refsect1>
<title>Description</title>
- <note>
- <title>Experimental</title>
- <para>This is an <link linkend="experimental">experimental</link>
- interface and may change in the future.</para>
- </note>
-
<para>These ioctls are used to negotiate the frame format at specific
subdev pads in the image pipeline.</para>
diff --git a/Documentation/DocBook/media/v4l/vidioc-subdev-g-frame-interval.xml b/Documentation/DocBook/media/v4l/vidioc-subdev-g-frame-interval.xml
index 0bc3ea22d31f..848ec789ddaa 100644
--- a/Documentation/DocBook/media/v4l/vidioc-subdev-g-frame-interval.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-subdev-g-frame-interval.xml
@@ -50,12 +50,6 @@
<refsect1>
<title>Description</title>
- <note>
- <title>Experimental</title>
- <para>This is an <link linkend="experimental">experimental</link>
- interface and may change in the future.</para>
- </note>
-
<para>These ioctls are used to get and set the frame interval at specific
subdev pads in the image pipeline. The frame interval only makes sense for
sub-devices that can control the frame period on their own. This includes,
diff --git a/Documentation/DocBook/media/v4l/vidioc-subdev-g-selection.xml b/Documentation/DocBook/media/v4l/vidioc-subdev-g-selection.xml
index c62a7360719b..8346b2e4a703 100644
--- a/Documentation/DocBook/media/v4l/vidioc-subdev-g-selection.xml
+++ b/Documentation/DocBook/media/v4l/vidioc-subdev-g-selection.xml
@@ -49,12 +49,6 @@
<refsect1>
<title>Description</title>
- <note>
- <title>Experimental</title>
- <para>This is an <link linkend="experimental">experimental</link>
- interface and may change in the future.</para>
- </note>
-
<para>The selections are used to configure various image
processing functionality performed by the subdevs which affect the
image size. This currently includes cropping, scaling and
diff --git a/Documentation/IRQ-domain.txt b/Documentation/IRQ-domain.txt
index 8d990bde8693..82001a25a14b 100644
--- a/Documentation/IRQ-domain.txt
+++ b/Documentation/IRQ-domain.txt
@@ -70,6 +70,7 @@ of the reverse map types are described below:
==== Linear ====
irq_domain_add_linear()
+irq_domain_create_linear()
The linear reverse map maintains a fixed size table indexed by the
hwirq number. When a hwirq is mapped, an irq_desc is allocated for
@@ -81,10 +82,16 @@ map are fixed time lookup for IRQ numbers, and irq_descs are only
allocated for in-use IRQs. The disadvantage is that the table must be
as large as the largest possible hwirq number.
+irq_domain_add_linear() and irq_domain_create_linear() are functionally
+equivalent, except that the first argument is different - the former
+accepts an Open Firmware specific 'struct device_node', while the latter
+accepts a more general abstraction 'struct fwnode_handle'.
+
The majority of drivers should use the linear map.
==== Tree ====
irq_domain_add_tree()
+irq_domain_create_tree()
The irq_domain maintains a radix tree map from hwirq numbers to Linux
IRQs. When an hwirq is mapped, an irq_desc is allocated and the
@@ -95,6 +102,11 @@ since it doesn't need to allocate a table as large as the largest
hwirq number. The disadvantage is that hwirq to IRQ number lookup is
dependent on how many entries are in the table.
+irq_domain_add_tree() and irq_domain_create_tree() are functionally
+equivalent, except that the first argument is different - the former
+accepts an Open Firmware specific 'struct device_node', while the latter
+accepts a more general abstraction 'struct fwnode_handle'.
+
Very few drivers should need this mapping.
==== No Map ===-
diff --git a/Documentation/Makefile b/Documentation/Makefile
index 1207d7907650..de955e151af8 100644
--- a/Documentation/Makefile
+++ b/Documentation/Makefile
@@ -1,4 +1,3 @@
-subdir-y := accounting auxdisplay blackfin connector \
+subdir-y := accounting auxdisplay blackfin \
filesystems filesystems ia64 laptops mic misc-devices \
- networking pcmcia prctl ptp timers vDSO video4linux \
- watchdog
+ networking pcmcia prctl ptp timers vDSO watchdog
diff --git a/Documentation/RCU/Design/Data-Structures/BigTreeClassicRCU.svg b/Documentation/RCU/Design/Data-Structures/BigTreeClassicRCU.svg
new file mode 100644
index 000000000000..727e270b11e4
--- /dev/null
+++ b/Documentation/RCU/Design/Data-Structures/BigTreeClassicRCU.svg
@@ -0,0 +1,474 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Creator: fig2dev Version 3.2 Patchlevel 5e -->
+
+<!-- CreationDate: Wed Dec 9 17:28:20 2015 -->
+
+<!-- Magnification: 3.000 -->
+
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="9.1in"
+ height="8.9in"
+ viewBox="-66 -66 10932 10707"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.4 r9939"
+ sodipodi:docname="BigTreeClassicRCU.fig">
+ <metadata
+ id="metadata106">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title></dc:title>
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <defs
+ id="defs104">
+ <marker
+ inkscape:stockid="Arrow1Mend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow1Mend"
+ style="overflow:visible;">
+ <path
+ id="path3864"
+ d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;"
+ transform="scale(0.4) rotate(180) translate(10,0)" />
+ </marker>
+ </defs>
+ <sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="973"
+ inkscape:window-height="1137"
+ id="namedview102"
+ showgrid="false"
+ inkscape:zoom="0.9743589"
+ inkscape:cx="409.50003"
+ inkscape:cy="400.49997"
+ inkscape:window-x="915"
+ inkscape:window-y="24"
+ inkscape:window-maximized="0"
+ inkscape:current-layer="g4" />
+ <g
+ style="stroke-width:.025in; fill:none"
+ id="g4">
+ <!-- Line: box -->
+ <rect
+ x="0"
+ y="0"
+ width="10800"
+ height="5625"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffff00; "
+ id="rect6" />
+ <!-- Line: box -->
+ <rect
+ x="1125"
+ y="3600"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect8" />
+ <!-- Line: box -->
+ <rect
+ x="3825"
+ y="900"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect10" />
+ <!-- Line: box -->
+ <rect
+ x="6525"
+ y="3600"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect12" />
+ <!-- Line -->
+ <polyline
+ points="3375,6525 3375,5046 "
+ style="stroke:#00d1d1;stroke-width:44.9934641;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline14" />
+ <!-- Arrowhead on XXXpoint 3375 6525 - 3375 4860-->
+ <!-- Circle -->
+ <circle
+ cx="7425"
+ cy="6075"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle18" />
+ <!-- Circle -->
+ <circle
+ cx="7875"
+ cy="6075"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle20" />
+ <!-- Circle -->
+ <circle
+ cx="8325"
+ cy="6075"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle22" />
+ <!-- Circle -->
+ <circle
+ cx="2025"
+ cy="6075"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle24" />
+ <!-- Circle -->
+ <circle
+ cx="2475"
+ cy="6075"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle26" />
+ <!-- Circle -->
+ <circle
+ cx="2925"
+ cy="6075"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle28" />
+ <!-- Circle -->
+ <circle
+ cx="4725"
+ cy="4275"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle30" />
+ <!-- Circle -->
+ <circle
+ cx="5175"
+ cy="4275"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle32" />
+ <!-- Circle -->
+ <circle
+ cx="5625"
+ cy="4275"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle34" />
+ <!-- Line: box -->
+ <rect
+ x="2025"
+ y="6525"
+ width="2700"
+ height="1800"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect36" />
+ <!-- Line -->
+ <polyline
+ points="2475,3600 3975,2310 "
+ style="stroke:#00d1d1;stroke-width:44.9934641;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline38" />
+ <!-- Arrowhead on XXXpoint 2475 3600 - 4116 2190-->
+ <!-- Line -->
+ <polyline
+ points="7875,3600 6372,2310 "
+ style="stroke:#00d1d1;stroke-width:44.9934641;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline42" />
+ <!-- Arrowhead on XXXpoint 7875 3600 - 6231 2190-->
+ <!-- Line -->
+ <polyline
+ points="6975,8775 6975,5046 "
+ style="stroke:#00d1d1;stroke-width:44.9934641;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline46" />
+ <!-- Arrowhead on XXXpoint 6975 8775 - 6975 4860-->
+ <!-- Line -->
+ <polyline
+ points="1575,8775 1575,5046 "
+ style="stroke:#00d1d1;stroke-width:44.9934641;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline50" />
+ <!-- Arrowhead on XXXpoint 1575 8775 - 1575 4860-->
+ <!-- Line -->
+ <polyline
+ points="8775,6525 8775,5046 "
+ style="stroke:#00d1d1;stroke-width:44.9934641;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline54" />
+ <!-- Arrowhead on XXXpoint 8775 6525 - 8775 4860-->
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1575"
+ y="9225"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text58">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1575"
+ y="9675"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text60">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1575"
+ y="10350"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text62">CPU 0</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="3375"
+ y="6975"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text64">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="3375"
+ y="7425"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text66">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="3375"
+ y="8100"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text68">CPU 15</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="6975"
+ y="9225"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text70">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="6975"
+ y="9675"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text72">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="6975"
+ y="10350"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text74">CPU 1007</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="8730"
+ y="6930"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text76">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="8730"
+ y="7380"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text78">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="8730"
+ y="8055"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text80">CPU 1023</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="225"
+ y="450"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="start"
+ id="text82">struct rcu_state</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2475"
+ y="4050"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text84">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2475"
+ y="4500"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text86">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="7875"
+ y="4500"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text88">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="7875"
+ y="4050"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text90">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5175"
+ y="1350"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text92">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5175"
+ y="1800"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text94">rcu_node</text>
+ <!-- Line: box -->
+ <rect
+ x="225"
+ y="8775"
+ width="2700"
+ height="1800"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect96" />
+ <!-- Line: box -->
+ <rect
+ x="5625"
+ y="8775"
+ width="2700"
+ height="1800"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect98" />
+ <!-- Line: box -->
+ <rect
+ x="7380"
+ y="6480"
+ width="2700"
+ height="1800"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect100" />
+ </g>
+</svg>
diff --git a/Documentation/RCU/Design/Data-Structures/BigTreeClassicRCUBH.svg b/Documentation/RCU/Design/Data-Structures/BigTreeClassicRCUBH.svg
new file mode 100644
index 000000000000..9bbb1944f962
--- /dev/null
+++ b/Documentation/RCU/Design/Data-Structures/BigTreeClassicRCUBH.svg
@@ -0,0 +1,499 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Creator: fig2dev Version 3.2 Patchlevel 5e -->
+
+<!-- CreationDate: Wed Dec 9 17:26:09 2015 -->
+
+<!-- Magnification: 2.000 -->
+
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="5.7in"
+ height="6.6in"
+ viewBox="-44 -44 6838 7888"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.4 r9939"
+ sodipodi:docname="BigTreeClassicRCUBH.fig">
+ <metadata
+ id="metadata110">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title></dc:title>
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <defs
+ id="defs108">
+ <marker
+ inkscape:stockid="Arrow1Mend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow1Mend"
+ style="overflow:visible;">
+ <path
+ id="path3868"
+ d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;"
+ transform="scale(0.4) rotate(180) translate(10,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Mend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow2Mend"
+ style="overflow:visible;">
+ <path
+ id="path3886"
+ style="fill-rule:evenodd;stroke-width:0.62500000;stroke-linejoin:round;"
+ d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
+ transform="scale(0.6) rotate(180) translate(0,0)" />
+ </marker>
+ </defs>
+ <sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="878"
+ inkscape:window-height="1148"
+ id="namedview106"
+ showgrid="false"
+ inkscape:zoom="1.3547758"
+ inkscape:cx="256.5"
+ inkscape:cy="297"
+ inkscape:window-x="45"
+ inkscape:window-y="24"
+ inkscape:window-maximized="0"
+ inkscape:current-layer="g4" />
+ <g
+ style="stroke-width:.025in; fill:none"
+ id="g4">
+ <!-- Line: box -->
+ <rect
+ x="450"
+ y="0"
+ width="6300"
+ height="7350"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffffff; "
+ id="rect6" />
+ <!-- Line: box -->
+ <rect
+ x="4950"
+ y="4950"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect8" />
+ <!-- Line: box -->
+ <rect
+ x="750"
+ y="600"
+ width="5700"
+ height="3750"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffff00; "
+ id="rect10" />
+ <!-- Line: box -->
+ <rect
+ x="0"
+ y="450"
+ width="6300"
+ height="7350"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffffff; "
+ id="rect12" />
+ <!-- Line: box -->
+ <rect
+ x="300"
+ y="1050"
+ width="5700"
+ height="3750"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffff00; "
+ id="rect14" />
+ <!-- Circle -->
+ <circle
+ cx="2850"
+ cy="3900"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle16" />
+ <!-- Circle -->
+ <circle
+ cx="3150"
+ cy="3900"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle18" />
+ <!-- Circle -->
+ <circle
+ cx="3450"
+ cy="3900"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle20" />
+ <!-- Circle -->
+ <circle
+ cx="1350"
+ cy="5100"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle22" />
+ <!-- Circle -->
+ <circle
+ cx="1650"
+ cy="5100"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle24" />
+ <!-- Circle -->
+ <circle
+ cx="1950"
+ cy="5100"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle26" />
+ <!-- Circle -->
+ <circle
+ cx="4350"
+ cy="5100"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle28" />
+ <!-- Circle -->
+ <circle
+ cx="4650"
+ cy="5100"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle30" />
+ <!-- Circle -->
+ <circle
+ cx="4950"
+ cy="5100"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle32" />
+ <!-- Line -->
+ <polyline
+ points="1350,3450 2350,2590 "
+ style="stroke:#00d1d1;stroke-width:30.0045575;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline34" />
+ <!-- Arrowhead on XXXpoint 1350 3450 - 2444 2510-->
+ <!-- Line -->
+ <polyline
+ points="4950,3450 3948,2590 "
+ style="stroke:#00d1d1;stroke-width:30.0045575;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline38" />
+ <!-- Arrowhead on XXXpoint 4950 3450 - 3854 2510-->
+ <!-- Line: box -->
+ <rect
+ x="750"
+ y="3450"
+ width="1800"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect42" />
+ <!-- Line -->
+ <polyline
+ points="2250,5400 2250,4414 "
+ style="stroke:#00d1d1;stroke-width:30.0045575;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline44" />
+ <!-- Arrowhead on XXXpoint 2250 5400 - 2250 4290-->
+ <!-- Line: box -->
+ <rect
+ x="1500"
+ y="5400"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect48" />
+ <!-- Line: box -->
+ <rect
+ x="300"
+ y="6600"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect50" />
+ <!-- Line: box -->
+ <rect
+ x="3750"
+ y="3450"
+ width="1800"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect52" />
+ <!-- Line: box -->
+ <rect
+ x="4500"
+ y="5400"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect54" />
+ <!-- Line: box -->
+ <rect
+ x="3300"
+ y="6600"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect56" />
+ <!-- Line: box -->
+ <rect
+ x="2250"
+ y="1650"
+ width="1800"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect58" />
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="6450"
+ y="300"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="192"
+ text-anchor="end"
+ id="text60">rcu_bh</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="3150"
+ y="1950"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text62">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="3150"
+ y="2250"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text64">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1650"
+ y="3750"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text66">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1650"
+ y="4050"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text68">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4650"
+ y="4050"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text70">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4650"
+ y="3750"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text72">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2250"
+ y="5700"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text74">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2250"
+ y="6000"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text76">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1050"
+ y="6900"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text78">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1050"
+ y="7200"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text80">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5250"
+ y="5700"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text82">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5250"
+ y="6000"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text84">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4050"
+ y="6900"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text86">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4050"
+ y="7200"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text88">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="450"
+ y="1350"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="start"
+ id="text90">struct rcu_state</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="6000"
+ y="750"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="192"
+ text-anchor="end"
+ id="text92">rcu_sched</text>
+ <!-- Line -->
+ <polyline
+ points="5250,5400 5250,4414 "
+ style="stroke:#00d1d1;stroke-width:30.0045575;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline94" />
+ <!-- Arrowhead on XXXpoint 5250 5400 - 5250 4290-->
+ <!-- Line -->
+ <polyline
+ points="4050,6600 4050,4414 "
+ style="stroke:#00d1d1;stroke-width:30.0045575;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline98" />
+ <!-- Arrowhead on XXXpoint 4050 6600 - 4050 4290-->
+ <!-- Line -->
+ <polyline
+ points="1050,6600 1050,4414 "
+ style="stroke:#00d1d1;stroke-width:30.0045575;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline102" />
+ <!-- Arrowhead on XXXpoint 1050 6600 - 1050 4290-->
+ </g>
+</svg>
diff --git a/Documentation/RCU/Design/Data-Structures/BigTreeClassicRCUBHdyntick.svg b/Documentation/RCU/Design/Data-Structures/BigTreeClassicRCUBHdyntick.svg
new file mode 100644
index 000000000000..21ba7823479d
--- /dev/null
+++ b/Documentation/RCU/Design/Data-Structures/BigTreeClassicRCUBHdyntick.svg
@@ -0,0 +1,695 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Creator: fig2dev Version 3.2 Patchlevel 5e -->
+
+<!-- CreationDate: Wed Dec 9 17:20:02 2015 -->
+
+<!-- Magnification: 2.000 -->
+
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="5.7in"
+ height="8.6in"
+ viewBox="-44 -44 6838 10288"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.4 r9939"
+ sodipodi:docname="BigTreeClassicRCUBHdyntick.fig">
+ <metadata
+ id="metadata166">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title></dc:title>
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <defs
+ id="defs164">
+ <marker
+ inkscape:stockid="Arrow1Mend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow1Mend"
+ style="overflow:visible;">
+ <path
+ id="path3924"
+ d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;"
+ transform="scale(0.4) rotate(180) translate(10,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Lend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow2Lend"
+ style="overflow:visible;">
+ <path
+ id="path3936"
+ style="fill-rule:evenodd;stroke-width:0.62500000;stroke-linejoin:round;"
+ d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
+ transform="scale(1.1) rotate(180) translate(1,0)" />
+ </marker>
+ </defs>
+ <sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="845"
+ inkscape:window-height="988"
+ id="namedview162"
+ showgrid="false"
+ inkscape:zoom="1.0452196"
+ inkscape:cx="256.5"
+ inkscape:cy="387.00003"
+ inkscape:window-x="356"
+ inkscape:window-y="61"
+ inkscape:window-maximized="0"
+ inkscape:current-layer="g4" />
+ <g
+ style="stroke-width:.025in; fill:none"
+ id="g4">
+ <!-- Line: box -->
+ <rect
+ x="450"
+ y="0"
+ width="6300"
+ height="7350"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffffff; "
+ id="rect6" />
+ <!-- Line: box -->
+ <rect
+ x="4950"
+ y="4950"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect8" />
+ <!-- Line: box -->
+ <rect
+ x="750"
+ y="600"
+ width="5700"
+ height="3750"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffff00; "
+ id="rect10" />
+ <!-- Line -->
+ <polyline
+ points="5250,8100 5688,5912 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline12" />
+ <!-- Arrowhead on XXXpoint 5250 8100 - 5710 5790-->
+ <polyline
+ points="5714 6068 5704 5822 5598 6044 "
+ style="stroke:#00ff00;stroke-width:14;stroke-miterlimit:8; "
+ id="polyline14" />
+ <!-- Line -->
+ <polyline
+ points="4050,9300 4486,7262 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline16" />
+ <!-- Arrowhead on XXXpoint 4050 9300 - 4512 7140-->
+ <polyline
+ points="4514 7418 4506 7172 4396 7394 "
+ style="stroke:#00ff00;stroke-width:14;stroke-miterlimit:8; "
+ id="polyline18" />
+ <!-- Line -->
+ <polyline
+ points="1040,9300 1476,7262 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline20" />
+ <!-- Arrowhead on XXXpoint 1040 9300 - 1502 7140-->
+ <polyline
+ points="1504 7418 1496 7172 1386 7394 "
+ style="stroke:#00ff00;stroke-width:14;stroke-miterlimit:8; "
+ id="polyline22" />
+ <!-- Line -->
+ <polyline
+ points="2240,8100 2676,6062 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline24" />
+ <!-- Arrowhead on XXXpoint 2240 8100 - 2702 5940-->
+ <polyline
+ points="2704 6218 2696 5972 2586 6194 "
+ style="stroke:#00ff00;stroke-width:14;stroke-miterlimit:8; "
+ id="polyline26" />
+ <!-- Line: box -->
+ <rect
+ x="0"
+ y="450"
+ width="6300"
+ height="7350"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffffff; "
+ id="rect28" />
+ <!-- Line: box -->
+ <rect
+ x="300"
+ y="1050"
+ width="5700"
+ height="3750"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffff00; "
+ id="rect30" />
+ <!-- Line -->
+ <polyline
+ points="1350,3450 2350,2590 "
+ style="stroke:#00d1d1;stroke-width:30.0045575;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline32" />
+ <!-- Arrowhead on XXXpoint 1350 3450 - 2444 2510-->
+ <!-- Line -->
+ <polyline
+ points="4950,3450 3948,2590 "
+ style="stroke:#00d1d1;stroke-width:30.0045575;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline36" />
+ <!-- Arrowhead on XXXpoint 4950 3450 - 3854 2510-->
+ <!-- Line -->
+ <polyline
+ points="4050,6600 4050,4414 "
+ style="stroke:#00d1d1;stroke-width:30.00455750000000066;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline40" />
+ <!-- Arrowhead on XXXpoint 4050 6600 - 4050 4290-->
+ <!-- Line -->
+ <polyline
+ points="1050,6600 1050,4414 "
+ style="stroke:#00d1d1;stroke-width:30.00455750000000066;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline44" />
+ <!-- Arrowhead on XXXpoint 1050 6600 - 1050 4290-->
+ <!-- Line -->
+ <polyline
+ points="2250,5400 2250,4414 "
+ style="stroke:#00d1d1;stroke-width:30.00455750000000066;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline48" />
+ <!-- Arrowhead on XXXpoint 2250 5400 - 2250 4290-->
+ <!-- Line -->
+ <polyline
+ points="2250,8100 2250,6364 "
+ style="stroke:#00ff00;stroke-width:30;stroke-linejoin:miter;stroke-linecap:butt;marker-end:url(#Arrow1Mend)"
+ id="polyline52" />
+ <!-- Arrowhead on XXXpoint 2250 8100 - 2250 6240-->
+ <!-- Line -->
+ <polyline
+ points="1050,9300 1050,7564 "
+ style="stroke:#00ff00;stroke-width:30;stroke-linejoin:miter;stroke-linecap:butt;marker-end:url(#Arrow1Mend)"
+ id="polyline56" />
+ <!-- Arrowhead on XXXpoint 1050 9300 - 1050 7440-->
+ <!-- Line -->
+ <polyline
+ points="4050,9300 4050,7564 "
+ style="stroke:#00ff00;stroke-width:30;stroke-linejoin:miter;stroke-linecap:butt;marker-end:url(#Arrow1Mend)"
+ id="polyline60" />
+ <!-- Arrowhead on XXXpoint 4050 9300 - 4050 7440-->
+ <!-- Line -->
+ <polyline
+ points="5250,8100 5250,6364 "
+ style="stroke:#00ff00;stroke-width:30;stroke-linejoin:miter;stroke-linecap:butt;marker-end:url(#Arrow1Mend)"
+ id="polyline64" />
+ <!-- Arrowhead on XXXpoint 5250 8100 - 5250 6240-->
+ <!-- Circle -->
+ <circle
+ cx="2850"
+ cy="3900"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle68" />
+ <!-- Circle -->
+ <circle
+ cx="3150"
+ cy="3900"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle70" />
+ <!-- Circle -->
+ <circle
+ cx="3450"
+ cy="3900"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle72" />
+ <!-- Circle -->
+ <circle
+ cx="1350"
+ cy="5100"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle74" />
+ <!-- Circle -->
+ <circle
+ cx="1650"
+ cy="5100"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle76" />
+ <!-- Circle -->
+ <circle
+ cx="1950"
+ cy="5100"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle78" />
+ <!-- Circle -->
+ <circle
+ cx="4350"
+ cy="5100"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle80" />
+ <!-- Circle -->
+ <circle
+ cx="4650"
+ cy="5100"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle82" />
+ <!-- Circle -->
+ <circle
+ cx="4950"
+ cy="5100"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle84" />
+ <!-- Line: box -->
+ <rect
+ x="750"
+ y="3450"
+ width="1800"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect86" />
+ <!-- Line: box -->
+ <rect
+ x="300"
+ y="6600"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect88" />
+ <!-- Line: box -->
+ <rect
+ x="3750"
+ y="3450"
+ width="1800"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect90" />
+ <!-- Line: box -->
+ <rect
+ x="4500"
+ y="5400"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect92" />
+ <!-- Line: box -->
+ <rect
+ x="3300"
+ y="6600"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect94" />
+ <!-- Line: box -->
+ <rect
+ x="2250"
+ y="1650"
+ width="1800"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect96" />
+ <!-- Line: box -->
+ <rect
+ x="0"
+ y="9300"
+ width="2100"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#00ff00; "
+ id="rect98" />
+ <!-- Line: box -->
+ <rect
+ x="1350"
+ y="8100"
+ width="2100"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#00ff00; "
+ id="rect100" />
+ <!-- Line: box -->
+ <rect
+ x="3000"
+ y="9300"
+ width="2100"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#00ff00; "
+ id="rect102" />
+ <!-- Line: box -->
+ <rect
+ x="4350"
+ y="8100"
+ width="2100"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#00ff00; "
+ id="rect104" />
+ <!-- Line: box -->
+ <rect
+ x="1500"
+ y="5400"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect106" />
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="6450"
+ y="300"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="192"
+ text-anchor="end"
+ id="text108">rcu_bh</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="3150"
+ y="1950"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text110">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="3150"
+ y="2250"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text112">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1650"
+ y="3750"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text114">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1650"
+ y="4050"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text116">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4650"
+ y="4050"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text118">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4650"
+ y="3750"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text120">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2250"
+ y="5700"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text122">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2250"
+ y="6000"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text124">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1050"
+ y="6900"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text126">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1050"
+ y="7200"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text128">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5250"
+ y="5700"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text130">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5250"
+ y="6000"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text132">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4050"
+ y="6900"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text134">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4050"
+ y="7200"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text136">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="450"
+ y="1350"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="start"
+ id="text138">struct rcu_state</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1050"
+ y="9600"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text140">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1050"
+ y="9900"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text142">rcu_dynticks</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4050"
+ y="9600"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text144">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4050"
+ y="9900"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text146">rcu_dynticks</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2400"
+ y="8400"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text148">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2400"
+ y="8700"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text150">rcu_dynticks</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5400"
+ y="8400"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text152">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5400"
+ y="8700"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text154">rcu_dynticks</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="6000"
+ y="750"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="192"
+ text-anchor="end"
+ id="text156">rcu_sched</text>
+ <!-- Line -->
+ <polyline
+ points="5250,5400 5250,4414 "
+ style="stroke:#00d1d1;stroke-width:30.00455750000000066;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline158" />
+ <!-- Arrowhead on XXXpoint 5250 5400 - 5250 4290-->
+ </g>
+</svg>
diff --git a/Documentation/RCU/Design/Data-Structures/BigTreePreemptRCUBHdyntick.svg b/Documentation/RCU/Design/Data-Structures/BigTreePreemptRCUBHdyntick.svg
new file mode 100644
index 000000000000..15adcac036c7
--- /dev/null
+++ b/Documentation/RCU/Design/Data-Structures/BigTreePreemptRCUBHdyntick.svg
@@ -0,0 +1,741 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Creator: fig2dev Version 3.2 Patchlevel 5e -->
+
+<!-- CreationDate: Wed Dec 9 17:32:59 2015 -->
+
+<!-- Magnification: 2.000 -->
+
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="6.1in"
+ height="8.9in"
+ viewBox="-44 -44 7288 10738"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.4 r9939"
+ sodipodi:docname="BigTreePreemptRCUBHdyntick.fig">
+ <metadata
+ id="metadata182">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title></dc:title>
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <defs
+ id="defs180">
+ <marker
+ inkscape:stockid="Arrow1Mend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow1Mend"
+ style="overflow:visible;">
+ <path
+ id="path3940"
+ d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;"
+ transform="scale(0.4) rotate(180) translate(10,0)" />
+ </marker>
+ </defs>
+ <sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="874"
+ inkscape:window-height="1148"
+ id="namedview178"
+ showgrid="false"
+ inkscape:zoom="1.2097379"
+ inkscape:cx="274.5"
+ inkscape:cy="400.49997"
+ inkscape:window-x="946"
+ inkscape:window-y="24"
+ inkscape:window-maximized="0"
+ inkscape:current-layer="g4" />
+ <g
+ style="stroke-width:.025in; fill:none"
+ id="g4">
+ <!-- Line: box -->
+ <rect
+ x="900"
+ y="0"
+ width="6300"
+ height="7350"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffffff; "
+ id="rect6" />
+ <!-- Line: box -->
+ <rect
+ x="1200"
+ y="600"
+ width="5700"
+ height="3750"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffff00; "
+ id="rect8" />
+ <!-- Line: box -->
+ <rect
+ x="5400"
+ y="4950"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect10" />
+ <!-- Line: box -->
+ <rect
+ x="450"
+ y="450"
+ width="6300"
+ height="7350"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffffff; "
+ id="rect12" />
+ <!-- Line: box -->
+ <rect
+ x="750"
+ y="1050"
+ width="5700"
+ height="3750"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffff00; "
+ id="rect14" />
+ <!-- Line: box -->
+ <rect
+ x="4950"
+ y="5400"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect16" />
+ <!-- Line -->
+ <polyline
+ points="5250,8550 5688,6362 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline18" />
+ <!-- Arrowhead on XXXpoint 5250 8550 - 5710 6240-->
+ <polyline
+ points="5714 6518 5704 6272 5598 6494 "
+ style="stroke:#00ff00;stroke-width:14;stroke-miterlimit:8; "
+ id="polyline20" />
+ <!-- Line -->
+ <polyline
+ points="4050,9750 4486,7712 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline22" />
+ <!-- Arrowhead on XXXpoint 4050 9750 - 4512 7590-->
+ <polyline
+ points="4514 7868 4506 7622 4396 7844 "
+ style="stroke:#00ff00;stroke-width:14;stroke-miterlimit:8; "
+ id="polyline24" />
+ <!-- Line -->
+ <polyline
+ points="1040,9750 1476,7712 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline26" />
+ <!-- Arrowhead on XXXpoint 1040 9750 - 1502 7590-->
+ <polyline
+ points="1504 7868 1496 7622 1386 7844 "
+ style="stroke:#00ff00;stroke-width:14;stroke-miterlimit:8; "
+ id="polyline28" />
+ <!-- Line -->
+ <polyline
+ points="2240,8550 2676,6512 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline30" />
+ <!-- Arrowhead on XXXpoint 2240 8550 - 2702 6390-->
+ <polyline
+ points="2704 6668 2696 6422 2586 6644 "
+ style="stroke:#00ff00;stroke-width:14;stroke-miterlimit:8; "
+ id="polyline32" />
+ <!-- Line -->
+ <polyline
+ points="4050,9750 5682,6360 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline34" />
+ <!-- Arrowhead on XXXpoint 4050 9750 - 5736 6246-->
+ <polyline
+ points="5672 6518 5722 6276 5562 6466 "
+ style="stroke:#00ff00;stroke-width:14;stroke-miterlimit:8; "
+ id="polyline36" />
+ <!-- Line -->
+ <polyline
+ points="1010,9750 2642,6360 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline38" />
+ <!-- Arrowhead on XXXpoint 1010 9750 - 2696 6246-->
+ <polyline
+ points="2632 6518 2682 6276 2522 6466 "
+ style="stroke:#00ff00;stroke-width:14;stroke-miterlimit:8; "
+ id="polyline40" />
+ <!-- Line: box -->
+ <rect
+ x="0"
+ y="900"
+ width="6300"
+ height="7350"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffffff; "
+ id="rect42" />
+ <!-- Line: box -->
+ <rect
+ x="300"
+ y="1500"
+ width="5700"
+ height="3750"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffff00; "
+ id="rect44" />
+ <!-- Line -->
+ <polyline
+ points="1350,3900 2350,3040 "
+ style="stroke:#00d1d1;stroke-width:30.00205472;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline46" />
+ <!-- Arrowhead on XXXpoint 1350 3900 - 2444 2960-->
+ <!-- Line -->
+ <polyline
+ points="4950,3900 3948,3040 "
+ style="stroke:#00d1d1;stroke-width:30.00205472;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline50" />
+ <!-- Arrowhead on XXXpoint 4950 3900 - 3854 2960-->
+ <!-- Line -->
+ <polyline
+ points="4050,7050 4050,4864 "
+ style="stroke:#00d1d1;stroke-width:30.00205472;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline54" />
+ <!-- Arrowhead on XXXpoint 4050 7050 - 4050 4740-->
+ <!-- Line -->
+ <polyline
+ points="1050,7050 1050,4864 "
+ style="stroke:#00d1d1;stroke-width:30.00205472;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline58" />
+ <!-- Arrowhead on XXXpoint 1050 7050 - 1050 4740-->
+ <!-- Line -->
+ <polyline
+ points="2250,5850 2250,4864 "
+ style="stroke:#00d1d1;stroke-width:30.00205472;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline62" />
+ <!-- Arrowhead on XXXpoint 2250 5850 - 2250 4740-->
+ <!-- Line -->
+ <polyline
+ points="2250,8550 2250,6814 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline66" />
+ <!-- Arrowhead on XXXpoint 2250 8550 - 2250 6690-->
+ <!-- Line -->
+ <polyline
+ points="1050,9750 1050,8014 "
+ style="stroke:#00ff00;stroke-width:30.00205472;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline70" />
+ <!-- Arrowhead on XXXpoint 1050 9750 - 1050 7890-->
+ <!-- Line -->
+ <polyline
+ points="4050,9750 4050,8014 "
+ style="stroke:#00ff00;stroke-width:30.00205472;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline74" />
+ <!-- Arrowhead on XXXpoint 4050 9750 - 4050 7890-->
+ <!-- Line -->
+ <polyline
+ points="5250,8550 5250,6814 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline78" />
+ <!-- Arrowhead on XXXpoint 5250 8550 - 5250 6690-->
+ <!-- Circle -->
+ <circle
+ cx="2850"
+ cy="4350"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle82" />
+ <!-- Circle -->
+ <circle
+ cx="3150"
+ cy="4350"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle84" />
+ <!-- Circle -->
+ <circle
+ cx="3450"
+ cy="4350"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle86" />
+ <!-- Circle -->
+ <circle
+ cx="1350"
+ cy="5550"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle88" />
+ <!-- Circle -->
+ <circle
+ cx="1650"
+ cy="5550"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle90" />
+ <!-- Circle -->
+ <circle
+ cx="1950"
+ cy="5550"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle92" />
+ <!-- Circle -->
+ <circle
+ cx="4350"
+ cy="5550"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle94" />
+ <!-- Circle -->
+ <circle
+ cx="4650"
+ cy="5550"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle96" />
+ <!-- Circle -->
+ <circle
+ cx="4950"
+ cy="5550"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle98" />
+ <!-- Line: box -->
+ <rect
+ x="750"
+ y="3900"
+ width="1800"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect100" />
+ <!-- Line: box -->
+ <rect
+ x="300"
+ y="7050"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect102" />
+ <!-- Line: box -->
+ <rect
+ x="3750"
+ y="3900"
+ width="1800"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect104" />
+ <!-- Line: box -->
+ <rect
+ x="4500"
+ y="5850"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect106" />
+ <!-- Line: box -->
+ <rect
+ x="3300"
+ y="7050"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect108" />
+ <!-- Line: box -->
+ <rect
+ x="2250"
+ y="2100"
+ width="1800"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect110" />
+ <!-- Line: box -->
+ <rect
+ x="0"
+ y="9750"
+ width="2100"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#00ff00; "
+ id="rect112" />
+ <!-- Line: box -->
+ <rect
+ x="1350"
+ y="8550"
+ width="2100"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#00ff00; "
+ id="rect114" />
+ <!-- Line: box -->
+ <rect
+ x="3000"
+ y="9750"
+ width="2100"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#00ff00; "
+ id="rect116" />
+ <!-- Line: box -->
+ <rect
+ x="4350"
+ y="8550"
+ width="2100"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#00ff00; "
+ id="rect118" />
+ <!-- Line: box -->
+ <rect
+ x="1500"
+ y="5850"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect120" />
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="6450"
+ y="750"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="192"
+ text-anchor="end"
+ id="text122">rcu_bh</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="3150"
+ y="2400"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text124">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="3150"
+ y="2700"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text126">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1650"
+ y="4200"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text128">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1650"
+ y="4500"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text130">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4650"
+ y="4500"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text132">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4650"
+ y="4200"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text134">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2250"
+ y="6150"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text136">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2250"
+ y="6450"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text138">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1050"
+ y="7350"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text140">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1050"
+ y="7650"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text142">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5250"
+ y="6150"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text144">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5250"
+ y="6450"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text146">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4050"
+ y="7350"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text148">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4050"
+ y="7650"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text150">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="450"
+ y="1800"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="start"
+ id="text152">struct rcu_state</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1050"
+ y="10050"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text154">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1050"
+ y="10350"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text156">rcu_dynticks</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4050"
+ y="10050"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text158">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4050"
+ y="10350"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text160">rcu_dynticks</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2400"
+ y="8850"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text162">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2400"
+ y="9150"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text164">rcu_dynticks</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5400"
+ y="8850"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text166">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5400"
+ y="9150"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text168">rcu_dynticks</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="6900"
+ y="300"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="192"
+ text-anchor="end"
+ id="text170">rcu_preempt</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="6000"
+ y="1200"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="192"
+ text-anchor="end"
+ id="text172">rcu_sched</text>
+ <!-- Line -->
+ <polyline
+ points="5250,5850 5250,4864 "
+ style="stroke:#00d1d1;stroke-width:30.00205472;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline174" />
+ <!-- Arrowhead on XXXpoint 5250 5850 - 5250 4740-->
+ </g>
+</svg>
diff --git a/Documentation/RCU/Design/Data-Structures/BigTreePreemptRCUBHdyntickCB.svg b/Documentation/RCU/Design/Data-Structures/BigTreePreemptRCUBHdyntickCB.svg
new file mode 100644
index 000000000000..bbc3801470d0
--- /dev/null
+++ b/Documentation/RCU/Design/Data-Structures/BigTreePreemptRCUBHdyntickCB.svg
@@ -0,0 +1,858 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Creator: fig2dev Version 3.2 Patchlevel 5e -->
+
+<!-- CreationDate: Wed Dec 9 17:29:48 2015 -->
+
+<!-- Magnification: 2.000 -->
+
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="7.4in"
+ height="9.9in"
+ viewBox="-44 -44 8938 11938"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.4 r9939"
+ sodipodi:docname="BigTreePreemptRCUBHdyntickCB.svg">
+ <metadata
+ id="metadata212">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title></dc:title>
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <defs
+ id="defs210">
+ <marker
+ inkscape:stockid="Arrow1Mend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow1Mend"
+ style="overflow:visible;">
+ <path
+ id="path3970"
+ d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;"
+ transform="scale(0.4) rotate(180) translate(10,0)" />
+ </marker>
+ </defs>
+ <sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="881"
+ inkscape:window-height="1128"
+ id="namedview208"
+ showgrid="false"
+ inkscape:zoom="1.0195195"
+ inkscape:cx="333"
+ inkscape:cy="445.49997"
+ inkscape:window-x="936"
+ inkscape:window-y="24"
+ inkscape:window-maximized="0"
+ inkscape:current-layer="g4" />
+ <g
+ style="stroke-width:.025in; fill:none"
+ id="g4">
+ <!-- Line: box -->
+ <rect
+ x="900"
+ y="0"
+ width="6300"
+ height="7350"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffffff; "
+ id="rect6" />
+ <!-- Line: box -->
+ <rect
+ x="1200"
+ y="600"
+ width="5700"
+ height="3750"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffff00; "
+ id="rect8" />
+ <!-- Line: box -->
+ <rect
+ x="5400"
+ y="4950"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect10" />
+ <!-- Line: box -->
+ <rect
+ x="450"
+ y="450"
+ width="6300"
+ height="7350"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffffff; "
+ id="rect12" />
+ <!-- Line: box -->
+ <rect
+ x="750"
+ y="1050"
+ width="5700"
+ height="3750"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffff00; "
+ id="rect14" />
+ <!-- Line: box -->
+ <rect
+ x="4950"
+ y="5400"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect16" />
+ <!-- Line -->
+ <polyline
+ points="5250,8550 5688,6362 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline18" />
+ <!-- Arrowhead on XXXpoint 5250 8550 - 5710 6240-->
+ <polyline
+ points="5714 6518 5704 6272 5598 6494 "
+ style="stroke:#00ff00;stroke-width:14;stroke-miterlimit:8; "
+ id="polyline20" />
+ <!-- Line -->
+ <polyline
+ points="4050,9750 4486,7712 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline22" />
+ <!-- Arrowhead on XXXpoint 4050 9750 - 4512 7590-->
+ <polyline
+ points="4514 7868 4506 7622 4396 7844 "
+ style="stroke:#00ff00;stroke-width:14;stroke-miterlimit:8; "
+ id="polyline24" />
+ <!-- Line -->
+ <polyline
+ points="1040,9750 1476,7712 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline26" />
+ <!-- Arrowhead on XXXpoint 1040 9750 - 1502 7590-->
+ <polyline
+ points="1504 7868 1496 7622 1386 7844 "
+ style="stroke:#00ff00;stroke-width:14;stroke-miterlimit:8; "
+ id="polyline28" />
+ <!-- Line -->
+ <polyline
+ points="2240,8550 2676,6512 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline30" />
+ <!-- Arrowhead on XXXpoint 2240 8550 - 2702 6390-->
+ <polyline
+ points="2704 6668 2696 6422 2586 6644 "
+ style="stroke:#00ff00;stroke-width:14;stroke-miterlimit:8; "
+ id="polyline32" />
+ <!-- Line -->
+ <polyline
+ points="4050,9600 5692,6062 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline34" />
+ <!-- Arrowhead on XXXpoint 4050 9600 - 5744 5948-->
+ <polyline
+ points="5682 6220 5730 5978 5574 6170 "
+ style="stroke:#00ff00;stroke-width:14;stroke-miterlimit:8; "
+ id="polyline36" />
+ <!-- Line -->
+ <polyline
+ points="1086,9600 2728,6062 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline38" />
+ <!-- Arrowhead on XXXpoint 1086 9600 - 2780 5948-->
+ <polyline
+ points="2718 6220 2766 5978 2610 6170 "
+ style="stroke:#00ff00;stroke-width:14;stroke-miterlimit:8; "
+ id="polyline40" />
+ <!-- Line: box -->
+ <rect
+ x="0"
+ y="900"
+ width="6300"
+ height="7350"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffffff; "
+ id="rect42" />
+ <!-- Line: box -->
+ <rect
+ x="300"
+ y="1500"
+ width="5700"
+ height="3750"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffff00; "
+ id="rect44" />
+ <!-- Line -->
+ <polyline
+ points="1350,3900 2350,3040 "
+ style="stroke:#00d1d1;stroke-width:29.99463964;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline46" />
+ <!-- Arrowhead on XXXpoint 1350 3900 - 2444 2960-->
+ <!-- Line -->
+ <polyline
+ points="4950,3900 3948,3040 "
+ style="stroke:#00d1d1;stroke-width:29.99463964;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline50" />
+ <!-- Arrowhead on XXXpoint 4950 3900 - 3854 2960-->
+ <!-- Line -->
+ <polyline
+ points="4050,7050 4050,4864 "
+ style="stroke:#00d1d1;stroke-width:29.99463964;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline54" />
+ <!-- Arrowhead on XXXpoint 4050 7050 - 4050 4740-->
+ <!-- Line -->
+ <polyline
+ points="1050,7050 1050,4864 "
+ style="stroke:#00d1d1;stroke-width:29.99463964;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline58" />
+ <!-- Arrowhead on XXXpoint 1050 7050 - 1050 4740-->
+ <!-- Line -->
+ <polyline
+ points="2250,5850 2250,4864 "
+ style="stroke:#00d1d1;stroke-width:29.99463964;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline62" />
+ <!-- Arrowhead on XXXpoint 2250 5850 - 2250 4740-->
+ <!-- Line -->
+ <polyline
+ points="2250,8550 2250,6814 "
+ style="stroke:#00ff00;stroke-width:29.99463964;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline66" />
+ <!-- Arrowhead on XXXpoint 2250 8550 - 2250 6690-->
+ <!-- Line -->
+ <polyline
+ points="1050,9750 1050,8014 "
+ style="stroke:#00ff00;stroke-width:29.99463964;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline70" />
+ <!-- Arrowhead on XXXpoint 1050 9750 - 1050 7890-->
+ <!-- Line -->
+ <polyline
+ points="4050,9750 4050,8014 "
+ style="stroke:#00ff00;stroke-width:29.99463964;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline74" />
+ <!-- Arrowhead on XXXpoint 4050 9750 - 4050 7890-->
+ <!-- Line -->
+ <polyline
+ points="5250,8550 5250,6814 "
+ style="stroke:#00ff00;stroke-width:29.99463964;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline78" />
+ <!-- Arrowhead on XXXpoint 5250 8550 - 5250 6690-->
+ <!-- Line -->
+ <polyline
+ points="6000,6300 8048,7910 "
+ style="stroke:#87cfff;stroke-width:30;stroke-linejoin:miter;stroke-linecap:butt;marker-end:url(#Arrow1Mend)"
+ id="polyline82" />
+ <!-- Arrowhead on XXXpoint 6000 6300 - 8146 7986-->
+ <!-- Circle -->
+ <circle
+ cx="2850"
+ cy="4350"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle86" />
+ <!-- Circle -->
+ <circle
+ cx="3150"
+ cy="4350"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle88" />
+ <!-- Circle -->
+ <circle
+ cx="3450"
+ cy="4350"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle90" />
+ <!-- Circle -->
+ <circle
+ cx="1350"
+ cy="5550"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle92" />
+ <!-- Circle -->
+ <circle
+ cx="1650"
+ cy="5550"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle94" />
+ <!-- Circle -->
+ <circle
+ cx="1950"
+ cy="5550"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle96" />
+ <!-- Circle -->
+ <circle
+ cx="4350"
+ cy="5550"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle98" />
+ <!-- Circle -->
+ <circle
+ cx="4650"
+ cy="5550"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle100" />
+ <!-- Circle -->
+ <circle
+ cx="4950"
+ cy="5550"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle102" />
+ <!-- Line: box -->
+ <rect
+ x="7350"
+ y="7950"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect104" />
+ <!-- Line: box -->
+ <rect
+ x="7350"
+ y="9450"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect106" />
+ <!-- Line -->
+ <polyline
+ points="8100,8850 8100,9384 "
+ style="stroke:#000000;stroke-width:30;stroke-linejoin:miter;stroke-linecap:butt;marker-end:url(#Arrow1Mend)"
+ id="polyline108" />
+ <!-- Arrowhead on XXXpoint 8100 8850 - 8100 9510-->
+ <!-- Line: box -->
+ <rect
+ x="7350"
+ y="10950"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect112" />
+ <!-- Line -->
+ <polyline
+ points="8100,10350 8100,10884 "
+ style="stroke:#000000;stroke-width:30;stroke-linejoin:miter;stroke-linecap:butt;marker-end:url(#Arrow1Mend)"
+ id="polyline114" />
+ <!-- Arrowhead on XXXpoint 8100 10350 - 8100 11010-->
+ <!-- Line: box -->
+ <rect
+ x="750"
+ y="3900"
+ width="1800"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect118" />
+ <!-- Line: box -->
+ <rect
+ x="300"
+ y="7050"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect120" />
+ <!-- Line: box -->
+ <rect
+ x="3750"
+ y="3900"
+ width="1800"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect122" />
+ <!-- Line: box -->
+ <rect
+ x="4500"
+ y="5850"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect124" />
+ <!-- Line: box -->
+ <rect
+ x="3300"
+ y="7050"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect126" />
+ <!-- Line: box -->
+ <rect
+ x="2250"
+ y="2100"
+ width="1800"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect128" />
+ <!-- Line: box -->
+ <rect
+ x="0"
+ y="9750"
+ width="2100"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#00ff00; "
+ id="rect130" />
+ <!-- Line: box -->
+ <rect
+ x="1350"
+ y="8550"
+ width="2100"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#00ff00; "
+ id="rect132" />
+ <!-- Line: box -->
+ <rect
+ x="3000"
+ y="9750"
+ width="2100"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#00ff00; "
+ id="rect134" />
+ <!-- Line: box -->
+ <rect
+ x="4350"
+ y="8550"
+ width="2100"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#00ff00; "
+ id="rect136" />
+ <!-- Line: box -->
+ <rect
+ x="1500"
+ y="5850"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect138" />
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="8100"
+ y="8250"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text140">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="8100"
+ y="8550"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text142">rcu_head</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="8100"
+ y="9750"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text144">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="8100"
+ y="10050"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text146">rcu_head</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="8100"
+ y="11250"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text148">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="8100"
+ y="11550"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text150">rcu_head</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="6000"
+ y="1200"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="192"
+ text-anchor="end"
+ id="text152">rcu_sched</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="6450"
+ y="750"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="192"
+ text-anchor="end"
+ id="text154">rcu_bh</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="3150"
+ y="2400"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text156">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="3150"
+ y="2700"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text158">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1650"
+ y="4200"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text160">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1650"
+ y="4500"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text162">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4650"
+ y="4500"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text164">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4650"
+ y="4200"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text166">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2250"
+ y="6150"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text168">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2250"
+ y="6450"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text170">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1050"
+ y="7350"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text172">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1050"
+ y="7650"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text174">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5250"
+ y="6150"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text176">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5250"
+ y="6450"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text178">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4050"
+ y="7350"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text180">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4050"
+ y="7650"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text182">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="450"
+ y="1800"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="start"
+ id="text184">struct rcu_state</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1050"
+ y="10050"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text186">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1050"
+ y="10350"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text188">rcu_dynticks</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4050"
+ y="10050"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text190">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4050"
+ y="10350"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text192">rcu_dynticks</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2400"
+ y="8850"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text194">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2400"
+ y="9150"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text196">rcu_dynticks</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5400"
+ y="8850"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text198">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5400"
+ y="9150"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text200">rcu_dynticks</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="6900"
+ y="300"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="192"
+ text-anchor="end"
+ id="text202">rcu_preempt</text>
+ <!-- Line -->
+ <polyline
+ points="5250,5850 5250,4864 "
+ style="stroke:#00d1d1;stroke-width:29.99463964;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline204" />
+ <!-- Arrowhead on XXXpoint 5250 5850 - 5250 4740-->
+ </g>
+</svg>
diff --git a/Documentation/RCU/Design/Data-Structures/Data-Structures.html b/Documentation/RCU/Design/Data-Structures/Data-Structures.html
new file mode 100644
index 000000000000..7eb47ac25ad7
--- /dev/null
+++ b/Documentation/RCU/Design/Data-Structures/Data-Structures.html
@@ -0,0 +1,1333 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
+ "http://www.w3.org/TR/html4/loose.dtd">
+ <html>
+ <head><title>A Tour Through TREE_RCU's Data Structures [LWN.net]</title>
+ <meta HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
+
+ <p>January 27, 2016</p>
+ <p>This article was contributed by Paul E.&nbsp;McKenney</p>
+
+<h3>Introduction</h3>
+
+This document describes RCU's major data structures and their relationship
+to each other.
+
+<ol>
+<li> <a href="#Data-Structure Relationships">
+ Data-Structure Relationships</a>
+<li> <a href="#The rcu_state Structure">
+ The <tt>rcu_state</tt> Structure</a>
+<li> <a href="#The rcu_node Structure">
+ The <tt>rcu_node</tt> Structure</a>
+<li> <a href="#The rcu_data Structure">
+ The <tt>rcu_data</tt> Structure</a>
+<li> <a href="#The rcu_dynticks Structure">
+ The <tt>rcu_dynticks</tt> Structure</a>
+<li> <a href="#The rcu_head Structure">
+ The <tt>rcu_head</tt> Structure</a>
+<li> <a href="#RCU-Specific Fields in the task_struct Structure">
+ RCU-Specific Fields in the <tt>task_struct</tt> Structure</a>
+<li> <a href="#Accessor Functions">
+ Accessor Functions</a>
+</ol>
+
+At the end we have the
+<a href="#Answers to Quick Quizzes">answers to the quick quizzes</a>.
+
+<h3><a name="Data-Structure Relationships">Data-Structure Relationships</a></h3>
+
+<p>RCU is for all intents and purposes a large state machine, and its
+data structures maintain the state in such a way as to allow RCU readers
+to execute extremely quickly, while also processing the RCU grace periods
+requested by updaters in an efficient and extremely scalable fashion.
+The efficiency and scalability of RCU updaters is provided primarily
+by a combining tree, as shown below:
+
+</p><p><img src="BigTreeClassicRCU.svg" alt="BigTreeClassicRCU.svg" width="30%">
+
+</p><p>This diagram shows an enclosing <tt>rcu_state</tt> structure
+containing a tree of <tt>rcu_node</tt> structures.
+Each leaf node of the <tt>rcu_node</tt> tree has up to 16
+<tt>rcu_data</tt> structures associated with it, so that there
+are <tt>NR_CPUS</tt> <tt>rcu_data</tt> structures,
+one for each possible CPU.
+This structure is adjusted at boot time, if needed, to handle the
+common case where <tt>nr_cpu_ids</tt> is much less than
+<tt>NR_CPUS</tt>.
+For example, a number of Linux distributions set <tt>NR_CPUS=4096</tt>,
+which results in a three-level <tt>rcu_node</tt> tree.
+If the actual hardware has only 16 CPUs, RCU will adjust itself
+at boot time, resulting in an <tt>rcu_node</tt> tree with only a single node.
+
+</p><p>The purpose of this combining tree is to allow per-CPU events
+such as quiescent states, dyntick-idle transitions,
+and CPU hotplug operations to be processed efficiently
+and scalably.
+Quiescent states are recorded by the per-CPU <tt>rcu_data</tt> structures,
+and other events are recorded by the leaf-level <tt>rcu_node</tt>
+structures.
+All of these events are combined at each level of the tree until finally
+grace periods are completed at the tree's root <tt>rcu_node</tt>
+structure.
+A grace period can be completed at the root once every CPU
+(or, in the case of <tt>CONFIG_PREEMPT_RCU</tt>, task)
+has passed through a quiescent state.
+Once a grace period has completed, record of that fact is propagated
+back down the tree.
+
+</p><p>As can be seen from the diagram, on a 64-bit system
+a two-level tree with 64 leaves can accommodate 1,024 CPUs, with a fanout
+of 64 at the root and a fanout of 16 at the leaves.
+
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ Why isn't the fanout at the leaves also 64?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ Because there are more types of events that affect the leaf-level
+ <tt>rcu_node</tt> structures than further up the tree.
+ Therefore, if the leaf <tt>rcu_node</tt> structures have fanout of
+	64, the contention on these structures' <tt>-&gt;lock</tt> fields
+	becomes excessive.
+ Experimentation on a wide variety of systems has shown that a fanout
+ of 16 works well for the leaves of the <tt>rcu_node</tt> tree.
+ </font>
+
+ <p><font color="ffffff">Of course, further experience with
+ systems having hundreds or thousands of CPUs may demonstrate
+ that the fanout for the non-leaf <tt>rcu_node</tt> structures
+ must also be reduced.
+ Such reduction can be easily carried out when and if it proves
+ necessary.
+ In the meantime, if you are using such a system and running into
+ contention problems on the non-leaf <tt>rcu_node</tt> structures,
+ you may use the <tt>CONFIG_RCU_FANOUT</tt> kernel configuration
+ parameter to reduce the non-leaf fanout as needed.
+ </font>
+
+ <p><font color="ffffff">Kernels built for systems with
+ strong NUMA characteristics might also need to adjust
+ <tt>CONFIG_RCU_FANOUT</tt> so that the domains of the
+ <tt>rcu_node</tt> structures align with hardware boundaries.
+ However, there has thus far been no need for this.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
+
+<p>If your system has more than 1,024 CPUs (or more than 512 CPUs on
+a 32-bit system), then RCU will automatically add more levels to the
+tree.
+For example, if you are crazy enough to build a 64-bit system with 65,536
+CPUs, RCU would configure the <tt>rcu_node</tt> tree as follows:
+
+</p><p><img src="HugeTreeClassicRCU.svg" alt="HugeTreeClassicRCU.svg" width="50%">
+
+</p><p>RCU currently permits up to a four-level tree, which on a 64-bit system
+accommodates up to 4,194,304 CPUs, though only a mere 524,288 CPUs for
+32-bit systems.
+On the other hand, you can set <tt>CONFIG_RCU_FANOUT</tt> to be
+as small as 2 if you wish, which would permit only 16 CPUs, which
+is useful for testing.
+
+</p><p>This multi-level combining tree allows us to get most of the
+performance and scalability
+benefits of partitioning, even though RCU grace-period detection is
+inherently a global operation.
+The trick here is that only the last CPU to report a quiescent state
+into a given <tt>rcu_node</tt> structure need advance to the <tt>rcu_node</tt>
+structure at the next level up the tree.
+This means that at the leaf-level <tt>rcu_node</tt> structure, only
+one access out of sixteen will progress up the tree.
+For the internal <tt>rcu_node</tt> structures, the situation is even
+more extreme: Only one access out of sixty-four will progress up
+the tree.
+Because the vast majority of the CPUs do not progress up the tree,
+the lock contention remains roughly constant up the tree.
+No matter how many CPUs there are in the system, at most 64 quiescent-state
+reports per grace period will progress all the way to the root
+<tt>rcu_node</tt> structure, thus ensuring that the lock contention
+on that root <tt>rcu_node</tt> structure remains acceptably low.
+
+</p><p>In effect, the combining tree acts like a big shock absorber,
+keeping lock contention under control at all tree levels regardless
+of the level of loading on the system.
+
+</p><p>The Linux kernel actually supports multiple flavors of RCU
+running concurrently, so RCU builds separate data structures for each
+flavor.
+For example, for <tt>CONFIG_TREE_RCU=y</tt> kernels, RCU provides
+rcu_sched and rcu_bh, as shown below:
+
+</p><p><img src="BigTreeClassicRCUBH.svg" alt="BigTreeClassicRCUBH.svg" width="33%">
+
+</p><p>Energy efficiency is increasingly important, and for that
+reason the Linux kernel provides <tt>CONFIG_NO_HZ_IDLE</tt>, which
+turns off the scheduling-clock interrupts on idle CPUs, which in
+turn allows those CPUs to attain deeper sleep states and to consume
+less energy.
+CPUs whose scheduling-clock interrupts have been turned off are
+said to be in <i>dyntick-idle mode</i>.
+RCU must handle dyntick-idle CPUs specially
+because RCU would otherwise wake up each CPU on every grace period,
+which would defeat the whole purpose of <tt>CONFIG_NO_HZ_IDLE</tt>.
+RCU uses the <tt>rcu_dynticks</tt> structure to track
+which CPUs are in dyntick idle mode, as shown below:
+
+</p><p><img src="BigTreeClassicRCUBHdyntick.svg" alt="BigTreeClassicRCUBHdyntick.svg" width="33%">
+
+</p><p>However, if a CPU is in dyntick-idle mode, it is in that mode
+for all flavors of RCU.
+Therefore, a single <tt>rcu_dynticks</tt> structure is allocated per
+CPU, and all of a given CPU's <tt>rcu_data</tt> structures share
+that <tt>rcu_dynticks</tt>, as shown in the figure.
+
+</p><p>Kernels built with <tt>CONFIG_PREEMPT_RCU</tt> support
+rcu_preempt in addition to rcu_sched and rcu_bh, as shown below:
+
+</p><p><img src="BigTreePreemptRCUBHdyntick.svg" alt="BigTreePreemptRCUBHdyntick.svg" width="35%">
+
+</p><p>RCU updaters wait for normal grace periods by registering
+RCU callbacks, either directly via <tt>call_rcu()</tt> and
+friends (namely <tt>call_rcu_bh()</tt> and <tt>call_rcu_sched()</tt>,
+there being a separate interface per flavor of RCU)
+or indirectly via <tt>synchronize_rcu()</tt> and friends.
+RCU callbacks are represented by <tt>rcu_head</tt> structures,
+which are queued on <tt>rcu_data</tt> structures while they are
+waiting for a grace period to elapse, as shown in the following figure:
+
+</p><p><img src="BigTreePreemptRCUBHdyntickCB.svg" alt="BigTreePreemptRCUBHdyntickCB.svg" width="40%">
+
+</p><p>This figure shows how <tt>TREE_RCU</tt>'s and
+<tt>PREEMPT_RCU</tt>'s major data structures are related.
+Lesser data structures will be introduced with the algorithms that
+make use of them.
+
+</p><p>Note that each of the data structures in the above figure has
+its own synchronization:
+
+<p><ol>
+<li>	Each <tt>rcu_state</tt> structure has a lock and a mutex,
+ and some fields are protected by the corresponding root
+ <tt>rcu_node</tt> structure's lock.
+<li> Each <tt>rcu_node</tt> structure has a spinlock.
+<li> The fields in <tt>rcu_data</tt> are private to the corresponding
+ CPU, although a few can be read and written by other CPUs.
+<li> Similarly, the fields in <tt>rcu_dynticks</tt> are private
+ to the corresponding CPU, although a few can be read by
+ other CPUs.
+</ol>
+
+<p>It is important to note that different data structures can have
+very different ideas about the state of RCU at any given time.
+For but one example, awareness of the start or end of a given RCU
+grace period propagates slowly through the data structures.
+This slow propagation is absolutely necessary for RCU to have good
+read-side performance.
+If this balkanized implementation seems foreign to you, one useful
+trick is to consider each instance of these data structures to be
+a different person, each having the usual slightly different
+view of reality.
+
+</p><p>The general role of each of these data structures is as
+follows:
+
+</p><ol>
+<li> <tt>rcu_state</tt>:
+ This structure forms the interconnection between the
+ <tt>rcu_node</tt> and <tt>rcu_data</tt> structures,
+ tracks grace periods, serves as short-term repository
+ for callbacks orphaned by CPU-hotplug events,
+ maintains <tt>rcu_barrier()</tt> state,
+ tracks expedited grace-period state,
+ and maintains state used to force quiescent states when
+	grace periods extend too long.
+<li> <tt>rcu_node</tt>: This structure forms the combining
+ tree that propagates quiescent-state
+ information from the leaves to the root, and also propagates
+ grace-period information from the root to the leaves.
+ It provides local copies of the grace-period state in order
+ to allow this information to be accessed in a synchronized
+ manner without suffering the scalability limitations that
+ would otherwise be imposed by global locking.
+ In <tt>CONFIG_PREEMPT_RCU</tt> kernels, it manages the lists
+ of tasks that have blocked while in their current
+ RCU read-side critical section.
+ In <tt>CONFIG_PREEMPT_RCU</tt> with
+ <tt>CONFIG_RCU_BOOST</tt>, it manages the
+ per-<tt>rcu_node</tt> priority-boosting
+ kernel threads (kthreads) and state.
+ Finally, it records CPU-hotplug state in order to determine
+ which CPUs should be ignored during a given grace period.
+<li> <tt>rcu_data</tt>: This per-CPU structure is the
+ focus of quiescent-state detection and RCU callback queuing.
+ It also tracks its relationship to the corresponding leaf
+ <tt>rcu_node</tt> structure to allow more-efficient
+ propagation of quiescent states up the <tt>rcu_node</tt>
+ combining tree.
+ Like the <tt>rcu_node</tt> structure, it provides a local
+	copy of the grace-period information to allow the corresponding
+	CPU to access this information without incurring
+	synchronization overhead.
+ Finally, this structure records past dyntick-idle state
+ for the corresponding CPU and also tracks statistics.
+<li> <tt>rcu_dynticks</tt>:
+ This per-CPU structure tracks the current dyntick-idle
+ state for the corresponding CPU.
+ Unlike the other three structures, the <tt>rcu_dynticks</tt>
+ structure is not replicated per RCU flavor.
+<li> <tt>rcu_head</tt>:
+ This structure represents RCU callbacks, and is the
+ only structure allocated and managed by RCU users.
+ The <tt>rcu_head</tt> structure is normally embedded
+ within the RCU-protected data structure.
+</ol>
+
+<p>If all you wanted from this article was a general notion of how
+RCU's data structures are related, you are done.
+Otherwise, each of the following sections gives more details on
+the <tt>rcu_state</tt>, <tt>rcu_node</tt>, <tt>rcu_data</tt>,
+and <tt>rcu_dynticks</tt> data structures.
+
+<h3><a name="The rcu_state Structure">
+The <tt>rcu_state</tt> Structure</a></h3>
+
+<p>The <tt>rcu_state</tt> structure is the base structure that
+represents a flavor of RCU.
+This structure forms the interconnection between the
+<tt>rcu_node</tt> and <tt>rcu_data</tt> structures,
+tracks grace periods, contains the lock used to
+synchronize with CPU-hotplug events,
+and maintains state used to force quiescent states when
+grace periods extend too long.
+
+</p><p>A few of the <tt>rcu_state</tt> structure's fields are discussed,
+singly and in groups, in the following sections.
+The more specialized fields are covered in the discussion of their
+use.
+
+<h5>Relationship to rcu_node and rcu_data Structures</h5>
+
+This portion of the <tt>rcu_state</tt> structure is declared
+as follows:
+
+<pre>
+ 1 struct rcu_node node[NUM_RCU_NODES];
+ 2 struct rcu_node *level[NUM_RCU_LVLS + 1];
+ 3 struct rcu_data __percpu *rda;
+</pre>
+
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ Wait a minute!
+ You said that the <tt>rcu_node</tt> structures formed a tree,
+ but they are declared as a flat array!
+ What gives?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ The tree is laid out in the array.
+	The first node in the array is the head, the next set of nodes in the
+ array are children of the head node, and so on until the last set of
+ nodes in the array are the leaves.
+ </font>
+
+	<p><font color="ffffff">The following diagrams show how
+	this works.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
+
+<p>The <tt>rcu_node</tt> tree is embedded into the
+<tt>-&gt;node[]</tt> array as shown in the following figure:
+
+</p><p><img src="TreeMapping.svg" alt="TreeMapping.svg" width="40%">
+
+</p><p>One interesting consequence of this mapping is that a
+breadth-first traversal of the tree is implemented as a simple
+linear scan of the array, which is in fact what the
+<tt>rcu_for_each_node_breadth_first()</tt> macro does.
+This macro is used at the beginnings and ends of grace periods.
+
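+<p>Because the tree is laid out breadth-first in the
+<tt>-&gt;node[]</tt> array, such a macro can be implemented as a
+plain loop over that array.
+The following is a minimal sketch of one possible implementation,
+not necessarily the kernel's exact definition:
+
+<pre>
+/* Sketch: breadth-first traversal is just a linear scan of -&gt;node[],
+ * visiting the root first, then its children, then the leaves. */
+#define rcu_for_each_node_breadth_first(rsp, rnp) \
+	for ((rnp) = &amp;(rsp)-&gt;node[0]; \
+	     (rnp) &lt; &amp;(rsp)-&gt;node[NUM_RCU_NODES]; (rnp)++)
+</pre>
+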
+</p><p>Each entry of the <tt>-&gt;level</tt> array references
+the first <tt>rcu_node</tt> structure on the corresponding level
+of the tree, for example, as shown below:
+
+</p><p><img src="TreeMappingLevel.svg" alt="TreeMappingLevel.svg" width="40%">
+
+</p><p>The zero<sup>th</sup> element of the array references the root
+<tt>rcu_node</tt> structure, the first element references the
+first child of the root <tt>rcu_node</tt>, and finally the second
+element references the first leaf <tt>rcu_node</tt> structure.
+
+</p><p>For whatever it is worth, if you draw the tree to be tree-shaped
+rather than array-shaped, it is easy to draw a planar representation:
+
+</p><p><img src="TreeLevel.svg" alt="TreeLevel.svg" width="60%">
+
+</p><p>Finally, the <tt>-&gt;rda</tt> field references a per-CPU
+pointer to the corresponding CPU's <tt>rcu_data</tt> structure.
+
+</p><p>All of these fields are constant once initialization is complete,
+and therefore need no protection.
+
+<h5>Grace-Period Tracking</h5>
+
+<p>This portion of the <tt>rcu_state</tt> structure is declared
+as follows:
+
+<pre>
+ 1 unsigned long gpnum;
+ 2 unsigned long completed;
+</pre>
+
+<p>RCU grace periods are numbered, and
+the <tt>-&gt;gpnum</tt> field contains the number of the grace
+period that started most recently.
+The <tt>-&gt;completed</tt> field contains the number of the
+grace period that completed most recently.
+If the two fields are equal, the RCU grace period that most recently
+started has already completed, and therefore the corresponding
+flavor of RCU is idle.
+If <tt>-&gt;gpnum</tt> is one greater than <tt>-&gt;completed</tt>,
+then <tt>-&gt;gpnum</tt> gives the number of the current RCU
+grace period, which has not yet completed.
+Any other combination of values indicates that something is broken.
+These two fields are protected by the root <tt>rcu_node</tt>'s
+<tt>-&gt;lock</tt> field.
+
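+<p>For example, the idleness test implied by the above description
+can be written as follows.
+This is a minimal sketch rather than the kernel's actual helper, and
+it assumes that the caller holds the root <tt>rcu_node</tt>
+structure's <tt>-&gt;lock</tt>:
+
+<pre>
+/* Sketch: a flavor of RCU is idle when its most recently started
+ * grace period has already completed. */
+static bool rcu_flavor_is_idle(struct rcu_state *rsp)
+{
+	return rsp-&gt;completed == rsp-&gt;gpnum;
+}
+</pre>
+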
+</p><p>There are <tt>-&gt;gpnum</tt> and <tt>-&gt;completed</tt> fields
+in the <tt>rcu_node</tt> and <tt>rcu_data</tt> structures
+as well.
+The fields in the <tt>rcu_state</tt> structure represent the
+most current values, and those of the other structures are compared
+in order to detect the start of a new grace period in a distributed
+fashion.
+The values flow from <tt>rcu_state</tt> to <tt>rcu_node</tt>
+(down the tree from the root to the leaves) to <tt>rcu_data</tt>.
+
+<h5>Miscellaneous</h5>
+
+<p>This portion of the <tt>rcu_state</tt> structure is declared
+as follows:
+
+<pre>
+ 1 unsigned long gp_max;
+ 2 char abbr;
+ 3 char *name;
+</pre>
+
+<p>The <tt>-&gt;gp_max</tt> field tracks the duration of the longest
+grace period in jiffies.
+It is protected by the root <tt>rcu_node</tt>'s <tt>-&gt;lock</tt>.
+
+<p>The <tt>-&gt;name</tt> field points to the name of the RCU flavor
+(for example, &ldquo;rcu_sched&rdquo;), and is constant.
+The <tt>-&gt;abbr</tt> field contains a one-character abbreviation,
+for example, &ldquo;s&rdquo; for RCU-sched.
+
+<h3><a name="The rcu_node Structure">
+The <tt>rcu_node</tt> Structure</a></h3>
+
+<p>The <tt>rcu_node</tt> structures form the combining
+tree that propagates quiescent-state
+information from the leaves to the root and also propagates
+grace-period information from the root down to the leaves.
+They provide local copies of the grace-period state in order
+to allow this information to be accessed in a synchronized
+manner without suffering the scalability limitations that
+would otherwise be imposed by global locking.
+In <tt>CONFIG_PREEMPT_RCU</tt> kernels, they manage the lists
+of tasks that have blocked while in their current
+RCU read-side critical section.
+In <tt>CONFIG_PREEMPT_RCU</tt> with
+<tt>CONFIG_RCU_BOOST</tt>, they manage the
+per-<tt>rcu_node</tt> priority-boosting
+kernel threads (kthreads) and state.
+Finally, they record CPU-hotplug state in order to determine
+which CPUs should be ignored during a given grace period.
+
+</p><p>The <tt>rcu_node</tt> structure's fields are discussed,
+singly and in groups, in the following sections.
+
+<h5>Connection to Combining Tree</h5>
+
+<p>This portion of the <tt>rcu_node</tt> structure is declared
+as follows:
+
+<pre>
+ 1 struct rcu_node *parent;
+ 2 u8 level;
+ 3 u8 grpnum;
+ 4 unsigned long grpmask;
+ 5 int grplo;
+ 6 int grphi;
+</pre>
+
+<p>The <tt>-&gt;parent</tt> pointer references the <tt>rcu_node</tt>
+one level up in the tree, and is <tt>NULL</tt> for the root
+<tt>rcu_node</tt>.
+The RCU implementation makes heavy use of this field to push quiescent
+states up the tree.
+The <tt>-&gt;level</tt> field gives the level in the tree, with
+the root being at level zero, its children at level one, and so on.
+The <tt>-&gt;grpnum</tt> field gives this node's position within
+the children of its parent, so this number can range between 0 and 31
+on 32-bit systems and between 0 and 63 on 64-bit systems.
+The <tt>-&gt;level</tt> and <tt>-&gt;grpnum</tt> fields are
+used only during initialization and for tracing.
+The <tt>-&gt;grpmask</tt> field is the bitmask counterpart of
+<tt>-&gt;grpnum</tt>, and therefore always has exactly one bit set.
+This mask is used to clear the bit corresponding to this <tt>rcu_node</tt>
+structure in its parent's bitmasks, which are described later.
+Finally, the <tt>-&gt;grplo</tt> and <tt>-&gt;grphi</tt> fields
+contain the lowest and highest numbered CPU served by this
+<tt>rcu_node</tt> structure, respectively.
+
+</p><p>All of these fields are constant, and thus do not require any
+synchronization.
+
+<h5>Synchronization</h5>
+
+<p>This field of the <tt>rcu_node</tt> structure is declared
+as follows:
+
+<pre>
+ 1 raw_spinlock_t lock;
+</pre>
+
+<p>This field is used to protect the remaining fields in this structure,
+unless otherwise stated.
+That said, all of the fields in this structure can be accessed without
+locking for tracing purposes.
+Yes, this can result in confusing traces, but better some tracing confusion
+than to be heisenbugged out of existence.
+
+<h5>Grace-Period Tracking</h5>
+
+<p>This portion of the <tt>rcu_node</tt> structure is declared
+as follows:
+
+<pre>
+ 1 unsigned long gpnum;
+ 2 unsigned long completed;
+</pre>
+
+<p>These fields are the counterparts of the fields of the same name in
+the <tt>rcu_state</tt> structure.
+They each may lag up to one behind their <tt>rcu_state</tt>
+counterparts.
+If a given <tt>rcu_node</tt> structure's <tt>-&gt;gpnum</tt> and
+<tt>-&gt;completed</tt> fields are equal, then this <tt>rcu_node</tt>
+structure believes that RCU is idle.
+Otherwise, as with the <tt>rcu_state</tt> structure,
+the <tt>-&gt;gpnum</tt> field will be one greater than the
+<tt>-&gt;completed</tt> field, with <tt>-&gt;gpnum</tt>
+indicating which grace period this <tt>rcu_node</tt> believes
+is still being waited for.
+
+</p><p>The <tt>-&gt;gpnum</tt> field of each <tt>rcu_node</tt>
+structure is updated at the beginning
+of each grace period, and the <tt>-&gt;completed</tt> fields are
+updated at the end of each grace period.
+
+<h5>Quiescent-State Tracking</h5>
+
+<p>These fields manage the propagation of quiescent states up the
+combining tree.
+
+</p><p>This portion of the <tt>rcu_node</tt> structure has fields
+as follows:
+
+<pre>
+ 1 unsigned long qsmask;
+ 2 unsigned long expmask;
+ 3 unsigned long qsmaskinit;
+ 4 unsigned long expmaskinit;
+</pre>
+
+<p>The <tt>-&gt;qsmask</tt> field tracks which of this
+<tt>rcu_node</tt> structure's children still need to report
+quiescent states for the current normal grace period.
+Such children will have a value of 1 in their corresponding bit.
+Note that the leaf <tt>rcu_node</tt> structures should be
+thought of as having <tt>rcu_data</tt> structures as their
+children.
+Similarly, the <tt>-&gt;expmask</tt> field tracks which
+of this <tt>rcu_node</tt> structure's children still need to report
+quiescent states for the current expedited grace period.
+An expedited grace period has
+the same conceptual properties as a normal grace period, but the
+expedited implementation accepts extreme CPU overhead to obtain
+much lower grace-period latency, for example, consuming a few
+tens of microseconds worth of CPU time to reduce grace-period
+duration from milliseconds to tens of microseconds.
+The <tt>-&gt;qsmaskinit</tt> field tracks which of this
+<tt>rcu_node</tt> structure's children cover at least
+one online CPU.
+This mask is used to initialize <tt>-&gt;qsmask</tt>,
+and <tt>-&gt;expmaskinit</tt> is used to initialize
+<tt>-&gt;expmask</tt> at the beginning of the
+normal and expedited grace periods, respectively.
+
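+<p>To make the use of these masks more concrete, the following is a
+greatly simplified sketch of quiescent-state propagation up the
+combining tree.
+It is not the kernel's actual code: grace-period-number checks,
+expedited handling, and the work done when the root's mask empties
+are all omitted.
+
+<pre>
+/* Greatly simplified sketch: report a quiescent state from one child
+ * (identified by its -&gt;grpmask bit) up the combining tree. */
+static void report_qs_sketch(struct rcu_node *rnp, unsigned long grpmask)
+{
+	for (;;) {
+		raw_spin_lock(&amp;rnp-&gt;lock);
+		rnp-&gt;qsmask &amp;= ~grpmask;    /* This child is now done. */
+		if (rnp-&gt;qsmask != 0 || rnp-&gt;parent == NULL) {
+			/* Still waiting on others, or reached the root,
+			 * in which case the grace period can end. */
+			raw_spin_unlock(&amp;rnp-&gt;lock);
+			return;
+		}
+		grpmask = rnp-&gt;grpmask;     /* Propagate to the parent. */
+		raw_spin_unlock(&amp;rnp-&gt;lock);
+		rnp = rnp-&gt;parent;
+	}
+}
+</pre>
+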
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ Why are these bitmasks protected by locking?
+ Come on, haven't you heard of atomic instructions???
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ Lockless grace-period computation! Such a tantalizing possibility!
+ </font>
+
+ <p><font color="ffffff">But consider the following sequence of events:
+ </font>
+
+ <ol>
+ <li> <font color="ffffff">CPU&nbsp;0 has been in dyntick-idle
+ mode for quite some time.
+ When it wakes up, it notices that the current RCU
+ grace period needs it to report in, so it sets a
+ flag where the scheduling clock interrupt will find it.
+ </font><p>
+ <li> <font color="ffffff">Meanwhile, CPU&nbsp;1 is running
+ <tt>force_quiescent_state()</tt>,
+ and notices that CPU&nbsp;0 has been in dyntick idle mode,
+ which qualifies as an extended quiescent state.
+ </font><p>
+ <li> <font color="ffffff">CPU&nbsp;0's scheduling clock
+ interrupt fires in the
+ middle of an RCU read-side critical section, and notices
+ that the RCU core needs something, so commences RCU softirq
+ processing.
+ </font>
+ <p>
+ <li> <font color="ffffff">CPU&nbsp;0's softirq handler
+ executes and is just about ready
+ to report its quiescent state up the <tt>rcu_node</tt>
+ tree.
+ </font><p>
+ <li> <font color="ffffff">But CPU&nbsp;1 beats it to the punch,
+ completing the current
+ grace period and starting a new one.
+ </font><p>
+ <li> <font color="ffffff">CPU&nbsp;0 now reports its quiescent
+ state for the wrong
+ grace period.
+ That grace period might now end before the RCU read-side
+ critical section.
+ If that happens, disaster will ensue.
+ </font>
+ </ol>
+
+ <p><font color="ffffff">So the locking is absolutely required in
+ order to coordinate
+ clearing of the bits with the grace-period numbers in
+ <tt>-&gt;gpnum</tt> and <tt>-&gt;completed</tt>.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
+
+<h5>Blocked-Task Management</h5>
+
+<p><tt>PREEMPT_RCU</tt> allows tasks to be preempted in the
+midst of their RCU read-side critical sections, and these tasks
+must be tracked explicitly.
+The details of exactly why and how they are tracked will be covered
+in a separate article on RCU read-side processing.
+For now, it is enough to know that the <tt>rcu_node</tt>
+structure tracks them.
+
+<pre>
+ 1 struct list_head blkd_tasks;
+ 2 struct list_head *gp_tasks;
+ 3 struct list_head *exp_tasks;
+ 4 bool wait_blkd_tasks;
+</pre>
+
+<p>The <tt>-&gt;blkd_tasks</tt> field is a list header for
+the list of blocked and preempted tasks.
+As tasks undergo context switches within RCU read-side critical
+sections, their <tt>task_struct</tt> structures are enqueued
+(via the <tt>task_struct</tt>'s <tt>-&gt;rcu_node_entry</tt>
+field) onto the head of the <tt>-&gt;blkd_tasks</tt> list for the
+leaf <tt>rcu_node</tt> structure corresponding to the CPU
+on which the outgoing context switch executed.
+As these tasks later exit their RCU read-side critical sections,
+they remove themselves from the list.
+This list is therefore in reverse time order, so that if one of the tasks
+is blocking the current grace period, all subsequent tasks must
+also be blocking that same grace period.
+Therefore, a single pointer into this list suffices to track
+all tasks blocking a given grace period.
+That pointer is stored in <tt>-&gt;gp_tasks</tt> for normal
+grace periods and in <tt>-&gt;exp_tasks</tt> for expedited
+grace periods.
+These last two fields are <tt>NULL</tt> if either there is
+no grace period in flight or if there are no blocked tasks
+preventing that grace period from completing.
+If either of these two pointers is referencing a task that
+removes itself from the <tt>-&gt;blkd_tasks</tt> list,
+then that task must advance the pointer to the next task on
+the list, or set the pointer to <tt>NULL</tt> if there
+are no subsequent tasks on the list.
+
+</p><p>For example, suppose that tasks&nbsp;T1, T2, and&nbsp;T3 are
+all hard-affinitied to the largest-numbered CPU in the system.
+Then if task&nbsp;T1 blocked in an RCU read-side
+critical section, then an expedited grace period started,
+then task&nbsp;T2 blocked in an RCU read-side critical section,
+then a normal grace period started, and finally task&nbsp;T3 blocked
+in an RCU read-side critical section, then the state of the
+last leaf <tt>rcu_node</tt> structure's blocked-task list
+would be as shown below:
+
+</p><p><img src="blkd_task.svg" alt="blkd_task.svg" width="60%">
+
+</p><p>Task&nbsp;T1 is blocking both grace periods, task&nbsp;T2 is
+blocking only the normal grace period, and task&nbsp;T3 is blocking
+neither grace period.
+Note that these tasks will not remove themselves from this list
+immediately upon resuming execution.
+They will instead remain on the list until they execute the outermost
+<tt>rcu_read_unlock()</tt> that ends their RCU read-side critical
+section.
+
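+<p>The pointer-advancing rule described above can be sketched as
+follows.
+This is illustrative only (the kernel's actual code must also work
+out whether this removal allows a grace period to end), and it
+assumes that the caller holds the leaf <tt>rcu_node</tt> structure's
+<tt>-&gt;lock</tt>:
+
+<pre>
+/* Sketch: remove task t from its leaf rcu_node's -&gt;blkd_tasks list,
+ * advancing -&gt;gp_tasks and -&gt;exp_tasks if they reference t. */
+static void remove_blkd_task_sketch(struct rcu_node *rnp,
+				    struct task_struct *t)
+{
+	struct list_head *np = t-&gt;rcu_node_entry.next;
+
+	if (np == &amp;rnp-&gt;blkd_tasks)
+		np = NULL;		/* No subsequent tasks on the list. */
+	if (rnp-&gt;gp_tasks == &amp;t-&gt;rcu_node_entry)
+		rnp-&gt;gp_tasks = np;
+	if (rnp-&gt;exp_tasks == &amp;t-&gt;rcu_node_entry)
+		rnp-&gt;exp_tasks = np;
+	list_del_init(&amp;t-&gt;rcu_node_entry);
+}
+</pre>
+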
+<p>
+The <tt>-&gt;wait_blkd_tasks</tt> field indicates whether or not
+the current grace period is waiting on a blocked task.
+
+<h5>Sizing the <tt>rcu_node</tt> Array</h5>
+
+<p>The <tt>rcu_node</tt> array is sized via a series of
+C-preprocessor expressions as follows:
+
+<pre>
+ 1 #ifdef CONFIG_RCU_FANOUT
+ 2 #define RCU_FANOUT CONFIG_RCU_FANOUT
+ 3 #else
+ 4 # ifdef CONFIG_64BIT
+ 5 # define RCU_FANOUT 64
+ 6 # else
+ 7 # define RCU_FANOUT 32
+ 8 # endif
+ 9 #endif
+10
+11 #ifdef CONFIG_RCU_FANOUT_LEAF
+12 #define RCU_FANOUT_LEAF CONFIG_RCU_FANOUT_LEAF
+13 #else
+14 # ifdef CONFIG_64BIT
+15 # define RCU_FANOUT_LEAF 64
+16 # else
+17 # define RCU_FANOUT_LEAF 32
+18 # endif
+19 #endif
+20
+21 #define RCU_FANOUT_1 (RCU_FANOUT_LEAF)
+22 #define RCU_FANOUT_2 (RCU_FANOUT_1 * RCU_FANOUT)
+23 #define RCU_FANOUT_3 (RCU_FANOUT_2 * RCU_FANOUT)
+24 #define RCU_FANOUT_4 (RCU_FANOUT_3 * RCU_FANOUT)
+25
+26 #if NR_CPUS &lt;= RCU_FANOUT_1
+27 # define RCU_NUM_LVLS 1
+28 # define NUM_RCU_LVL_0 1
+29 # define NUM_RCU_NODES NUM_RCU_LVL_0
+30 # define NUM_RCU_LVL_INIT { NUM_RCU_LVL_0 }
+31 # define RCU_NODE_NAME_INIT { "rcu_node_0" }
+32 # define RCU_FQS_NAME_INIT { "rcu_node_fqs_0" }
+33 # define RCU_EXP_NAME_INIT { "rcu_node_exp_0" }
+34 #elif NR_CPUS &lt;= RCU_FANOUT_2
+35 # define RCU_NUM_LVLS 2
+36 # define NUM_RCU_LVL_0 1
+37 # define NUM_RCU_LVL_1 DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_1)
+38 # define NUM_RCU_NODES (NUM_RCU_LVL_0 + NUM_RCU_LVL_1)
+39 # define NUM_RCU_LVL_INIT { NUM_RCU_LVL_0, NUM_RCU_LVL_1 }
+40 # define RCU_NODE_NAME_INIT { "rcu_node_0", "rcu_node_1" }
+41 # define RCU_FQS_NAME_INIT { "rcu_node_fqs_0", "rcu_node_fqs_1" }
+42 # define RCU_EXP_NAME_INIT { "rcu_node_exp_0", "rcu_node_exp_1" }
+43 #elif NR_CPUS &lt;= RCU_FANOUT_3
+44 # define RCU_NUM_LVLS 3
+45 # define NUM_RCU_LVL_0 1
+46 # define NUM_RCU_LVL_1 DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_2)
+47 # define NUM_RCU_LVL_2 DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_1)
+48 # define NUM_RCU_NODES (NUM_RCU_LVL_0 + NUM_RCU_LVL_1 + NUM_RCU_LVL_2)
+49 # define NUM_RCU_LVL_INIT { NUM_RCU_LVL_0, NUM_RCU_LVL_1, NUM_RCU_LVL_2 }
+50 # define RCU_NODE_NAME_INIT { "rcu_node_0", "rcu_node_1", "rcu_node_2" }
+51 # define RCU_FQS_NAME_INIT { "rcu_node_fqs_0", "rcu_node_fqs_1", "rcu_node_fqs_2" }
+52 # define RCU_EXP_NAME_INIT { "rcu_node_exp_0", "rcu_node_exp_1", "rcu_node_exp_2" }
+53 #elif NR_CPUS &lt;= RCU_FANOUT_4
+54 # define RCU_NUM_LVLS 4
+55 # define NUM_RCU_LVL_0 1
+56 # define NUM_RCU_LVL_1 DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_3)
+57 # define NUM_RCU_LVL_2 DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_2)
+58 # define NUM_RCU_LVL_3 DIV_ROUND_UP(NR_CPUS, RCU_FANOUT_1)
+59 # define NUM_RCU_NODES (NUM_RCU_LVL_0 + NUM_RCU_LVL_1 + NUM_RCU_LVL_2 + NUM_RCU_LVL_3)
+60 # define NUM_RCU_LVL_INIT { NUM_RCU_LVL_0, NUM_RCU_LVL_1, NUM_RCU_LVL_2, NUM_RCU_LVL_3 }
+61 # define RCU_NODE_NAME_INIT { "rcu_node_0", "rcu_node_1", "rcu_node_2", "rcu_node_3" }
+62 # define RCU_FQS_NAME_INIT { "rcu_node_fqs_0", "rcu_node_fqs_1", "rcu_node_fqs_2", "rcu_node_fqs_3" }
+63 # define RCU_EXP_NAME_INIT { "rcu_node_exp_0", "rcu_node_exp_1", "rcu_node_exp_2", "rcu_node_exp_3" }
+64 #else
+65 # error "CONFIG_RCU_FANOUT insufficient for NR_CPUS"
+66 #endif
+</pre>
+
+<p>The maximum number of levels in the <tt>rcu_node</tt> structure
+is currently limited to four, as specified by lines&nbsp;21-24
+and the structure of the subsequent &ldquo;if&rdquo; statement.
+For 32-bit systems, this allows 16*32*32*32=524,288 CPUs, which
+should be sufficient for the next few years at least.
+For 64-bit systems, 16*64*64*64=4,194,304 CPUs is allowed, which
+should see us through the next decade or so.
+This four-level tree also allows kernels built with
+<tt>CONFIG_RCU_FANOUT=8</tt> to support up to 4096 CPUs,
+which might be useful in very large systems having eight CPUs per
+socket (but please note that no one has yet shown any measurable
+performance degradation due to misaligned socket and <tt>rcu_node</tt>
+boundaries).
+In addition, building kernels with a full four levels of <tt>rcu_node</tt>
+tree permits better testing of RCU's combining-tree code.
+
+</p><p>The <tt>RCU_FANOUT</tt> symbol controls how many children
+are permitted at each non-leaf level of the <tt>rcu_node</tt> tree.
+If the <tt>CONFIG_RCU_FANOUT</tt> Kconfig option is not specified,
+it is set based on the word size of the system, which is also
+the Kconfig default.
+
+</p><p>The <tt>RCU_FANOUT_LEAF</tt> symbol controls how many CPUs are
+handled by each leaf <tt>rcu_node</tt> structure.
+Experience has shown that allowing a given leaf <tt>rcu_node</tt>
+structure to handle 64 CPUs, as permitted by the number of bits in
+the <tt>-&gt;qsmask</tt> field on a 64-bit system, results in
+excessive contention for the leaf <tt>rcu_node</tt> structures'
+<tt>-&gt;lock</tt> fields.
+The number of CPUs per leaf <tt>rcu_node</tt> structure is therefore
+limited to 16 given the default value of <tt>CONFIG_RCU_FANOUT_LEAF</tt>.
+If <tt>CONFIG_RCU_FANOUT_LEAF</tt> is unspecified, the value
+selected is based on the word size of the system, just as for
+<tt>CONFIG_RCU_FANOUT</tt>.
+Lines&nbsp;11-19 perform this computation.
+
+</p><p>Lines&nbsp;21-24 compute the maximum number of CPUs supported by
+a single-level (which contains a single <tt>rcu_node</tt> structure),
+two-level, three-level, and four-level <tt>rcu_node</tt> tree,
+respectively, given the fanout specified by <tt>RCU_FANOUT</tt>
+and <tt>RCU_FANOUT_LEAF</tt>.
+These numbers of CPUs are retained in the
+<tt>RCU_FANOUT_1</tt>,
+<tt>RCU_FANOUT_2</tt>,
+<tt>RCU_FANOUT_3</tt>, and
+<tt>RCU_FANOUT_4</tt>
+C-preprocessor variables, respectively.
+
+</p><p>These variables are used to control the C-preprocessor <tt>#if</tt>
+statement spanning lines&nbsp;26-66 that computes the number of
+<tt>rcu_node</tt> structures required for each level of the tree,
+as well as the number of levels required.
+The number of levels is placed in the <tt>RCU_NUM_LVLS</tt>
+C-preprocessor variable by lines&nbsp;27, 35, 44, and&nbsp;54.
+The number of <tt>rcu_node</tt> structures for the topmost level
+of the tree is always exactly one, and this value is unconditionally
+placed into <tt>NUM_RCU_LVL_0</tt> by lines&nbsp;28, 36, 45, and&nbsp;55.
+The rest of the levels (if any) of the <tt>rcu_node</tt> tree
+are computed by dividing the maximum number of CPUs by the
+fanout supported by the number of levels from the current level down,
+rounding up. This computation is performed by lines&nbsp;37,
+46-47, and&nbsp;56-58.
+Lines&nbsp;31-33, 40-42, 50-52, and&nbsp;61-63 create initializers
+for lockdep lock-class names.
+Finally, lines&nbsp;64-66 produce an error if the maximum number of
+CPUs is too large for the specified fanout.
+
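+<p>To see how these expressions work out in practice, consider a
+64-bit kernel built with <tt>CONFIG_RCU_FANOUT_LEAF</tt> at its
+default value of 16, <tt>CONFIG_RCU_FANOUT</tt> at its 64-bit default
+of 64, and <tt>NR_CPUS=4096</tt>, the distribution configuration
+mentioned earlier:
+
+<pre>
+RCU_FANOUT_1 = 16                  (CPUs covered by one leaf)
+RCU_FANOUT_2 = 16 * 64   = 1024    (CPUs covered by a two-level tree)
+RCU_FANOUT_3 = 1024 * 64 = 65536   (CPUs covered by a three-level tree)
+
+NR_CPUS = 4096 exceeds 1024 but not 65536, so the three-level
+branch is taken:
+
+RCU_NUM_LVLS  = 3
+NUM_RCU_LVL_0 = 1
+NUM_RCU_LVL_1 = DIV_ROUND_UP(4096, 1024) =   4
+NUM_RCU_LVL_2 = DIV_ROUND_UP(4096,   16) = 256
+NUM_RCU_NODES = 1 + 4 + 256              = 261
+</pre>
+
+<p>This is consistent with the three-level tree for
+<tt>NR_CPUS=4096</tt> mentioned at the beginning of this document.
+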
+<h3><a name="The rcu_data Structure">
+The <tt>rcu_data</tt> Structure</a></h3>
+
+<p>The <tt>rcu_data</tt> structure maintains the per-CPU state for the
+corresponding flavor of RCU.
+The fields in this structure may be accessed only from the corresponding
+CPU (and from tracing) unless otherwise stated.
+This structure is the
+focus of quiescent-state detection and RCU callback queuing.
+It also tracks its relationship to the corresponding leaf
+<tt>rcu_node</tt> structure to allow more-efficient
+propagation of quiescent states up the <tt>rcu_node</tt>
+combining tree.
+Like the <tt>rcu_node</tt> structure, it provides a local
+copy of the grace-period information to allow the corresponding
+CPU to access this information without incurring
+synchronization overhead.
+Finally, this structure records past dyntick-idle state
+for the corresponding CPU and also tracks statistics.
+
+</p><p>The <tt>rcu_data</tt> structure's fields are discussed,
+singly and in groups, in the following sections.
+
+<h5>Connection to Other Data Structures</h5>
+
+<p>This portion of the <tt>rcu_data</tt> structure is declared
+as follows:
+
+<pre>
+ 1 int cpu;
+ 2 struct rcu_state *rsp;
+ 3 struct rcu_node *mynode;
+ 4 struct rcu_dynticks *dynticks;
+ 5 unsigned long grpmask;
+ 6 bool beenonline;
+</pre>
+
+<p>The <tt>-&gt;cpu</tt> field contains the number of the
+corresponding CPU, the <tt>-&gt;rsp</tt> pointer references
+the corresponding <tt>rcu_state</tt> structure (and is most frequently
+used to locate the name of the corresponding flavor of RCU for tracing),
+and the <tt>-&gt;mynode</tt> field references the corresponding
+<tt>rcu_node</tt> structure.
+The <tt>-&gt;mynode</tt> field is used to propagate quiescent states
+up the combining tree.
+<p>The <tt>-&gt;dynticks</tt> pointer references the
+<tt>rcu_dynticks</tt> structure corresponding to this
+CPU.
+Recall that a single per-CPU instance of the <tt>rcu_dynticks</tt>
+structure is shared among all flavors of RCU.
+These first four fields are constant and therefore require no
+synchronization.
+
+</p><p>The <tt>-&gt;grpmask</tt> field indicates the bit in
+the <tt>-&gt;mynode-&gt;qsmask</tt> corresponding to this
+<tt>rcu_data</tt> structure, and is also used when propagating
+quiescent states.
+The <tt>-&gt;beenonline</tt> flag is set whenever the corresponding
+CPU comes online, which means that the debugfs tracing need not dump
+out any <tt>rcu_data</tt> structure for which this flag is not set.
+
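+<p>As a small illustration of how these fields are used together, the
+following sketch (not the kernel's actual code, and ignoring the
+locking that real accesses require) checks whether the current grace
+period still needs a quiescent state from this CPU:
+
+<pre>
+/* Sketch: is this CPU's bit still set in its leaf rcu_node's mask of
+ * children that have not yet reported a quiescent state? */
+static bool cpu_still_owes_qs_sketch(struct rcu_data *rdp)
+{
+	struct rcu_node *rnp = rdp-&gt;mynode;
+
+	return (rnp-&gt;qsmask &amp; rdp-&gt;grpmask) != 0;
+}
+</pre>
+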
+<h5>Quiescent-State and Grace-Period Tracking</h5>
+
+<p>This portion of the <tt>rcu_data</tt> structure is declared
+as follows:
+
+<pre>
+ 1 unsigned long completed;
+ 2 unsigned long gpnum;
+ 3 bool cpu_no_qs;
+ 4 bool core_needs_qs;
+ 5 bool gpwrap;
+ 6 unsigned long rcu_qs_ctr_snap;
+</pre>
+
+<p>The <tt>completed</tt> and <tt>gpnum</tt>
+fields are the counterparts of the fields of the same name
+in the <tt>rcu_state</tt> and <tt>rcu_node</tt> structures.
+They may each lag up to one behind their <tt>rcu_node</tt>
+counterparts, but in <tt>CONFIG_NO_HZ_IDLE</tt> and
+<tt>CONFIG_NO_HZ_FULL</tt> kernels can lag
+arbitrarily far behind for CPUs in dyntick-idle mode (but these counters
+will catch up upon exit from dyntick-idle mode).
+If a given <tt>rcu_data</tt> structure's <tt>-&gt;gpnum</tt> and
+<tt>-&gt;completed</tt> fields are equal, then this <tt>rcu_data</tt>
+structure believes that RCU is idle.
+Otherwise, as with the <tt>rcu_state</tt> and <tt>rcu_node</tt>
+structures,
+the <tt>-&gt;gpnum</tt> field will be one greater than the
+<tt>-&gt;completed</tt> field, with <tt>-&gt;gpnum</tt>
+indicating which grace period this <tt>rcu_data</tt> believes
+is still being waited for.
+
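+<p>For example, a CPU can notice that a new grace period has started
+or that an old one has ended by comparing its own copies of these
+counters against those of its leaf <tt>rcu_node</tt> structure, along
+the lines of the following sketch (the kernel's actual code must also
+handle the <tt>-&gt;gpwrap</tt> case described below):
+
+<pre>
+/* Sketch: has this CPU's leaf rcu_node structure seen a grace-period
+ * start or end that this rcu_data structure has not yet processed? */
+static bool gp_state_changed_sketch(struct rcu_data *rdp,
+				    struct rcu_node *rnp)
+{
+	return rdp-&gt;gpnum != rnp-&gt;gpnum ||
+	       rdp-&gt;completed != rnp-&gt;completed;
+}
+</pre>
+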
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ All this replication of the grace period numbers can only cause
+ massive confusion.
+ Why not just keep a global pair of counters and be done with it???
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ Because if there was only a single global pair of grace-period
+ numbers, there would need to be a single global lock to allow
+ safely accessing and updating them.
+ And if we are not going to have a single global lock, we need
+ to carefully manage the numbers on a per-node basis.
+ Recall from the answer to a previous Quick Quiz that the consequences
+ of applying a previously sampled quiescent state to the wrong
+ grace period are quite severe.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
+
+<p>The <tt>-&gt;cpu_no_qs</tt> flag indicates that the
+CPU has not yet passed through a quiescent state,
+while the <tt>-&gt;core_needs_qs</tt> flag indicates that the
+RCU core needs a quiescent state from the corresponding CPU.
+The <tt>-&gt;gpwrap</tt> field indicates that the corresponding
+CPU has remained idle for so long that the <tt>completed</tt>
+and <tt>gpnum</tt> counters are in danger of overflow, which
+will cause the CPU to disregard the values of its counters on
+its next exit from idle.
+Finally, the <tt>rcu_qs_ctr_snap</tt> field is used to detect
+cases where a given operation has resulted in a quiescent state
+for all flavors of RCU, for example, <tt>cond_resched_rcu_qs()</tt>.
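+
+</p><p>For example, the check that notices a newly started grace period
+can be sketched as follows.
+This is a simplified, hypothetical helper rather than the exact kernel
+code, which must also handle counter wrap, locking, and memory ordering:
+
+<pre>
+static bool sketch_note_new_gp(struct rcu_data *rdp, struct rcu_node *rnp)
+{
+  if (rdp-&gt;gpnum == rnp-&gt;gpnum)
+    return false;               /* Already aware of this grace period. */
+  rdp-&gt;gpnum = rnp-&gt;gpnum;      /* Record the newly started grace period. */
+  rdp-&gt;cpu_no_qs = true;        /* No quiescent state supplied yet... */
+  rdp-&gt;core_needs_qs = true;    /* ...and RCU needs one from this CPU. */
+  return true;
+}
+</pre>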
+
+<h5>RCU Callback Handling</h5>
+
+<p>In the absence of CPU-hotplug events, RCU callbacks are invoked by
+the same CPU that registered them.
+This is strictly a cache-locality optimization: callbacks can and
+do get invoked on CPUs other than the one that registered them.
+After all, if the CPU that registered a given callback has gone
+offline before the callback can be invoked, there really is no other
+choice.
+
+</p><p>This portion of the <tt>rcu_data</tt> structure is declared
+as follows:
+
+<pre>
+ 1 struct rcu_head *nxtlist;
+ 2 struct rcu_head **nxttail[RCU_NEXT_SIZE];
+ 3 unsigned long nxtcompleted[RCU_NEXT_SIZE];
+ 4 long qlen_lazy;
+ 5 long qlen;
+ 6 long qlen_last_fqs_check;
+ 7 unsigned long n_force_qs_snap;
+ 8 unsigned long n_cbs_invoked;
+ 9 unsigned long n_cbs_orphaned;
+10 unsigned long n_cbs_adopted;
+11 long blimit;
+</pre>
+
+<p>The <tt>-&gt;nxtlist</tt> pointer and the
+<tt>-&gt;nxttail[]</tt> array form a four-segment list with
+older callbacks near the head and newer ones near the tail.
+Each segment contains callbacks with the corresponding relationship
+to the current grace period.
+The pointer out of the end of each of the four segments is referenced
+by the element of the <tt>-&gt;nxttail[]</tt> array indexed by
+<tt>RCU_DONE_TAIL</tt> (for callbacks handled by a prior grace period),
+<tt>RCU_WAIT_TAIL</tt> (for callbacks waiting on the current grace period),
+<tt>RCU_NEXT_READY_TAIL</tt> (for callbacks that will wait on the next
+grace period), and
+<tt>RCU_NEXT_TAIL</tt> (for callbacks that are not yet associated
+with a specific grace period)
+respectively, as shown in the following figure.
+
+</p><p><img src="nxtlist.svg" alt="nxtlist.svg" width="40%">
+
+</p><p>In this figure, the <tt>-&gt;nxtlist</tt> pointer references the
+first
+RCU callback in the list.
+The <tt>-&gt;nxttail[RCU_DONE_TAIL]</tt> array element references
+the <tt>-&gt;nxtlist</tt> pointer itself, indicating that none
+of the callbacks is ready to invoke.
+The <tt>-&gt;nxttail[RCU_WAIT_TAIL]</tt> array element references callback
+CB&nbsp;2's <tt>-&gt;next</tt> pointer, which indicates that
+CB&nbsp;1 and CB&nbsp;2 are both waiting on the current grace period.
+The <tt>-&gt;nxttail[RCU_NEXT_READY_TAIL]</tt> array element
+references the same RCU callback that <tt>-&gt;nxttail[RCU_WAIT_TAIL]</tt>
+does, which indicates that there are no callbacks waiting on the next
+RCU grace period.
+The <tt>-&gt;nxttail[RCU_NEXT_TAIL]</tt> array element references
+CB&nbsp;4's <tt>-&gt;next</tt> pointer, indicating that all the
+remaining RCU callbacks have not yet been assigned to an RCU grace
+period.
+Note that the <tt>-&gt;nxttail[RCU_NEXT_TAIL]</tt> array element
+always references the last RCU callback's <tt>-&gt;next</tt> pointer
+unless the callback list is empty, in which case it references
+the <tt>-&gt;nxtlist</tt> pointer.
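+
+</p><p>Given these invariants, enqueuing a new callback onto the
+<tt>RCU_NEXT_TAIL</tt> segment can be sketched as follows.
+This is a simplified illustration that omits the interrupt disabling
+and other housekeeping carried out by the real <tt>call_rcu()</tt>
+code path:
+
+<pre>
+  head-&gt;func = func;
+  head-&gt;next = NULL;
+  *rdp-&gt;nxttail[RCU_NEXT_TAIL] = head;        /* Link callback at the end. */
+  rdp-&gt;nxttail[RCU_NEXT_TAIL] = &amp;head-&gt;next;  /* Tail pointer now follows it. */
+  rdp-&gt;qlen++;
+</pre>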
+
+</p><p>CPUs advance their callbacks from the
+<tt>RCU_NEXT_TAIL</tt> to the <tt>RCU_NEXT_READY_TAIL</tt> to the
+<tt>RCU_WAIT_TAIL</tt> to the <tt>RCU_DONE_TAIL</tt> list segments
+as grace periods advance.
+The CPU advances the callbacks in its <tt>rcu_data</tt> structure
+whenever it notices that another RCU grace period has completed.
+The CPU detects the completion of an RCU grace period by noticing
+that the value of its <tt>rcu_data</tt> structure's
+<tt>-&gt;completed</tt> field differs from that of its leaf
+<tt>rcu_node</tt> structure.
+Recall that each <tt>rcu_node</tt> structure's
+<tt>-&gt;completed</tt> field is updated at the end of each
+grace period.
+
+</p><p>The <tt>-&gt;nxtcompleted[]</tt> array records grace-period
+numbers corresponding to the list segments.
+This allows CPUs that go idle for extended periods to determine
+which of their callbacks are ready to be invoked after reawakening.
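+
+</p><p>A simplified sketch of how these recorded grace-period numbers
+are used to advance callbacks into the <tt>RCU_DONE_TAIL</tt> segment
+(ignoring the tail-pointer and <tt>-&gt;nxtcompleted[]</tt> adjustments
+carried out by the real code) compares each segment's number against
+the leaf <tt>rcu_node</tt> structure's <tt>-&gt;completed</tt> field:
+
+<pre>
+  for (i = RCU_WAIT_TAIL; i &lt; RCU_NEXT_TAIL; i++) {
+    if (ULONG_CMP_LT(rnp-&gt;completed, rdp-&gt;nxtcompleted[i]))
+      break;                            /* This segment must still wait. */
+    rdp-&gt;nxttail[RCU_DONE_TAIL] = rdp-&gt;nxttail[i];  /* Segment is now done. */
+  }
+</pre>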
+
+</p><p>The <tt>-&gt;qlen</tt> counter contains the number of
+callbacks in <tt>-&gt;nxtlist</tt>, and the
+<tt>-&gt;qlen_lazy</tt> counter contains the number of those callbacks that
+are known only to free memory, and whose invocation can therefore
+be safely deferred.
+The <tt>-&gt;qlen_last_fqs_check</tt> and
+<tt>-&gt;n_force_qs_snap</tt> fields coordinate the forcing of quiescent
+states from <tt>call_rcu()</tt> and friends when callback
+lists grow excessively long.
+
+</p><p>The <tt>-&gt;n_cbs_invoked</tt>,
+<tt>-&gt;n_cbs_orphaned</tt>, and <tt>-&gt;n_cbs_adopted</tt>
+fields count the number of callbacks invoked,
+sent to other CPUs when this CPU goes offline,
+and received from other CPUs when those other CPUs go offline.
+Finally, the <tt>-&gt;blimit</tt> counter is the maximum number of
+RCU callbacks that may be invoked at a given time.
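+
+</p><p>A simplified sketch of the callback-invocation loop shows how
+<tt>-&gt;blimit</tt> bounds the work done in a single pass; the
+<tt>next_done_callback()</tt> helper is hypothetical, standing in for
+the list manipulation performed by the real code:
+
+<pre>
+  count = 0;
+  while ((head = next_done_callback(rdp)) != NULL) {
+    head-&gt;func(head);                /* Invoke the callback. */
+    rdp-&gt;n_cbs_invoked++;
+    if (++count &gt;= rdp-&gt;blimit)
+      break;                         /* Defer the rest to a later pass. */
+  }
+</pre>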
+
+<h5>Dyntick-Idle Handling</h5>
+
+<p>This portion of the <tt>rcu_data</tt> structure is declared
+as follows:
+
+<pre>
+ 1 int dynticks_snap;
+ 2 unsigned long dynticks_fqs;
+</pre>
+
+The <tt>-&gt;dynticks_snap</tt> field is used to take a snapshot
+of the corresponding CPU's dyntick-idle state when forcing
+quiescent states, and is therefore accessed from other CPUs.
+Finally, the <tt>-&gt;dynticks_fqs</tt> field is used to
+count the number of times this CPU is determined to be in
+dyntick-idle state, and is used for tracing and debugging purposes.
+
+<h3><a name="The rcu_dynticks Structure">
+The <tt>rcu_dynticks</tt> Structure</a></h3>
+
+<p>The <tt>rcu_dynticks</tt> structure maintains the per-CPU dyntick-idle state
+for the corresponding CPU.
+Unlike the other structures, <tt>rcu_dynticks</tt> is not
+replicated over the different flavors of RCU.
+The fields in this structure may be accessed only from the corresponding
+CPU (and from tracing) unless otherwise stated.
+Its fields are as follows:
+
+<pre>
+ 1 int dynticks_nesting;
+ 2 int dynticks_nmi_nesting;
+ 3 atomic_t dynticks;
+</pre>
+
+<p>The <tt>-&gt;dynticks_nesting</tt> field counts the
+nesting depth of normal interrupts.
+In addition, this counter is incremented when exiting dyntick-idle
+mode and decremented when entering it.
+This counter can therefore be thought of as counting the number
+of reasons why this CPU cannot be permitted to enter dyntick-idle
+mode, aside from non-maskable interrupts (NMIs).
+NMIs are counted by the <tt>-&gt;dynticks_nmi_nesting</tt>
+field, except that NMIs that interrupt non-dyntick-idle execution
+are not counted.
+
+</p><p>Finally, the <tt>-&gt;dynticks</tt> field counts the corresponding
+CPU's transitions to and from dyntick-idle mode, so that this counter
+has an even value when the CPU is in dyntick-idle mode and an odd
+value otherwise.
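+
+</p><p>For example, the grace-period kthread's check of whether some
+other CPU is currently in dyntick-idle mode can be sketched as follows.
+The helper name is hypothetical, and the real code is also careful
+about memory ordering:
+
+<pre>
+static int sketch_cpu_is_dyntick_idle(struct rcu_data *rdp)
+{
+  struct rcu_dynticks *rdtp = rdp-&gt;dynticks;
+
+  rdp-&gt;dynticks_snap = atomic_add_return(0, &amp;rdtp-&gt;dynticks);
+  return (rdp-&gt;dynticks_snap &amp; 0x1) == 0;   /* Even value: dyntick idle. */
+}
+</pre>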
+
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ Why not just count all NMIs?
+ Wouldn't that be simpler and less error prone?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ It seems simpler only until you think hard about how to go about
+ updating the <tt>rcu_dynticks</tt> structure's
+ <tt>-&gt;dynticks</tt> field.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
+
+<p>Additional fields are present for some special-purpose
+builds, and are discussed separately.
+
+<h3><a name="The rcu_head Structure">
+The <tt>rcu_head</tt> Structure</a></h3>
+
+<p>Each <tt>rcu_head</tt> structure represents an RCU callback.
+These structures are normally embedded within RCU-protected data
+structures whose algorithms use asynchronous grace periods.
+In contrast, when using algorithms that block waiting for RCU grace periods,
+RCU users need not provide <tt>rcu_head</tt> structures.
+
+</p><p>The <tt>rcu_head</tt> structure has fields as follows:
+
+<pre>
+ 1 struct rcu_head *next;
+ 2 void (*func)(struct rcu_head *head);
+</pre>
+
+<p>The <tt>-&gt;next</tt> field is used
+to link the <tt>rcu_head</tt> structures together in the
+lists within the <tt>rcu_data</tt> structures.
+The <tt>-&gt;func</tt> field is a pointer to the function
+to be called when the callback is ready to be invoked, and
+this function is passed a pointer to the <tt>rcu_head</tt>
+structure.
+However, <tt>kfree_rcu()</tt> uses the <tt>-&gt;func</tt>
+field to record the offset of the <tt>rcu_head</tt>
+structure within the enclosing RCU-protected data structure.
+
+</p><p>Both of these fields are used internally by RCU.
+From the viewpoint of RCU users, this structure is an
+opaque &ldquo;cookie&rdquo;.
+
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ Given that the callback function <tt>-&gt;func</tt>
+ is passed a pointer to the <tt>rcu_head</tt> structure,
+ how is that function supposed to find the beginning of the
+ enclosing RCU-protected data structure?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ In actual practice, there is a separate callback function per
+ type of RCU-protected data structure.
+ The callback function can therefore use the <tt>container_of()</tt>
+ macro in the Linux kernel (or other pointer-manipulation facilities
+ in other software environments) to find the beginning of the
+ enclosing structure.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
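+
+<p>For example, an RCU-protected structure and its callback might look
+as follows.
+This is an illustrative sketch rather than code taken from any
+particular subsystem:
+
+<pre>
+struct foo {
+  int a;
+  struct rcu_head rh;
+};
+
+static void foo_reclaim(struct rcu_head *rhp)
+{
+  struct foo *fp = container_of(rhp, struct foo, rh);
+
+  kfree(fp);
+}
+
+/* After making fp unreachable to new readers: */
+call_rcu(&amp;fp-&gt;rh, foo_reclaim);
+</pre>
+
+<p>Because this callback does nothing but free memory, the example could
+equally well use <tt>kfree_rcu(fp, rh)</tt>, which dispenses with the
+separate callback function entirely.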
+
+<h3><a name="RCU-Specific Fields in the task_struct Structure">
+RCU-Specific Fields in the <tt>task_struct</tt> Structure</a></h3>
+
+<p>The <tt>CONFIG_PREEMPT_RCU</tt> implementation uses some
+additional fields in the <tt>task_struct</tt> structure:
+
+<pre>
+ 1 #ifdef CONFIG_PREEMPT_RCU
+ 2 int rcu_read_lock_nesting;
+ 3 union rcu_special rcu_read_unlock_special;
+ 4 struct list_head rcu_node_entry;
+ 5 struct rcu_node *rcu_blocked_node;
+ 6 #endif /* #ifdef CONFIG_PREEMPT_RCU */
+ 7 #ifdef CONFIG_TASKS_RCU
+ 8 unsigned long rcu_tasks_nvcsw;
+ 9 bool rcu_tasks_holdout;
+10 struct list_head rcu_tasks_holdout_list;
+11 int rcu_tasks_idle_cpu;
+12 #endif /* #ifdef CONFIG_TASKS_RCU */
+</pre>
+
+<p>The <tt>-&gt;rcu_read_lock_nesting</tt> field records the
+nesting level for RCU read-side critical sections, and
+the <tt>-&gt;rcu_read_unlock_special</tt> field is a bitmask
+that records special conditions that require <tt>rcu_read_unlock()</tt>
+to do additional work.
+The <tt>-&gt;rcu_node_entry</tt> field is used to form lists of
+tasks that have blocked within preemptible-RCU read-side critical
+sections and the <tt>-&gt;rcu_blocked_node</tt> field references
+the <tt>rcu_node</tt> structure whose list this task is a member of,
+or <tt>NULL</tt> if it is not blocked within a preemptible-RCU
+read-side critical section.
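+
+</p><p>For example, the preemptible-RCU read-side primitives manipulate
+<tt>-&gt;rcu_read_lock_nesting</tt> roughly as follows.
+This is a simplified sketch that omits the careful ordering and the
+negative-nesting trick used by the real implementation:
+
+<pre>
+  current-&gt;rcu_read_lock_nesting++;           /* rcu_read_lock() */
+   ...
+  if (--current-&gt;rcu_read_lock_nesting == 0 &amp;&amp;
+      READ_ONCE(current-&gt;rcu_read_unlock_special.s))
+    rcu_read_unlock_special(current);         /* rcu_read_unlock() slow path */
+</pre>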
+
+<p>The <tt>-&gt;rcu_tasks_nvcsw</tt> field tracks the number of
+voluntary context switches that this task had undergone at the
+beginning of the current tasks-RCU grace period,
+<tt>-&gt;rcu_tasks_holdout</tt> is set if the current tasks-RCU
+grace period is waiting on this task, <tt>-&gt;rcu_tasks_holdout_list</tt>
+is a list element enqueuing this task on the holdout list,
+and <tt>-&gt;rcu_tasks_idle_cpu</tt> tracks which CPU this
+idle task is running on, but only if the task is currently running,
+that is, if the CPU is currently idle.
+
+<h3><a name="Accessor Functions">
+Accessor Functions</a></h3>
+
+<p>The following listing shows the
+<tt>rcu_get_root()</tt>, <tt>rcu_for_each_node_breadth_first()</tt>,
+<tt>rcu_for_each_nonleaf_node_breadth_first()</tt>, and
+<tt>rcu_for_each_leaf_node()</tt> function and macros:
+
+<pre>
+ 1 static struct rcu_node *rcu_get_root(struct rcu_state *rsp)
+ 2 {
+ 3 return &amp;rsp-&gt;node[0];
+ 4 }
+ 5
+ 6 #define rcu_for_each_node_breadth_first(rsp, rnp) \
+ 7 for ((rnp) = &amp;(rsp)-&gt;node[0]; \
+ 8 (rnp) &lt; &amp;(rsp)-&gt;node[NUM_RCU_NODES]; (rnp)++)
+ 9
+ 10 #define rcu_for_each_nonleaf_node_breadth_first(rsp, rnp) \
+ 11 for ((rnp) = &amp;(rsp)-&gt;node[0]; \
+ 12 (rnp) &lt; (rsp)-&gt;level[NUM_RCU_LVLS - 1]; (rnp)++)
+ 13
+ 14 #define rcu_for_each_leaf_node(rsp, rnp) \
+ 15 for ((rnp) = (rsp)-&gt;level[NUM_RCU_LVLS - 1]; \
+ 16 (rnp) &lt; &amp;(rsp)-&gt;node[NUM_RCU_NODES]; (rnp)++)
+</pre>
+
+<p>The <tt>rcu_get_root()</tt> function simply returns a pointer to the
+first element of the specified <tt>rcu_state</tt> structure's
+<tt>-&gt;node[]</tt> array, which is the root <tt>rcu_node</tt>
+structure.
+
+</p><p>As noted earlier, the <tt>rcu_for_each_node_breadth_first()</tt>
+macro takes advantage of the layout of the <tt>rcu_node</tt>
+structures in the <tt>rcu_state</tt> structure's
+<tt>-&gt;node[]</tt> array, performing a breadth-first traversal by
+simply traversing the array in order.
+The <tt>rcu_for_each_nonleaf_node_breadth_first()</tt> macro operates
+similarly, but traverses only the first part of the array, thus excluding
+the leaf <tt>rcu_node</tt> structures.
+Finally, the <tt>rcu_for_each_leaf_node()</tt> macro traverses only
+the last part of the array, thus traversing only the leaf
+<tt>rcu_node</tt> structures.
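+
+</p><p>As a usage illustration (a hypothetical fragment, not taken from
+the kernel source, and ignoring the locking a real caller would need),
+scanning the leaf-level quiescent-state masks could be written as:
+
+<pre>
+  struct rcu_node *rnp;
+
+  rcu_for_each_leaf_node(rsp, rnp)
+    pr_info("leaf %p qsmask %#lx\n", rnp, rnp-&gt;qsmask);
+</pre>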
+
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ What do <tt>rcu_for_each_nonleaf_node_breadth_first()</tt> and
+ <tt>rcu_for_each_leaf_node()</tt> do if the <tt>rcu_node</tt> tree
+ contains only a single node?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ In the single-node case,
+ <tt>rcu_for_each_nonleaf_node_breadth_first()</tt> is a no-op
+ and <tt>rcu_for_each_leaf_node()</tt> traverses the single node.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
+
+<h3><a name="Summary">
+Summary</a></h3>
+
+<p>So each flavor of RCU is represented by an <tt>rcu_state</tt> structure,
+which contains a combining tree of <tt>rcu_node</tt> and
+<tt>rcu_data</tt> structures.
+Finally, in <tt>CONFIG_NO_HZ_IDLE</tt> kernels, each CPU's dyntick-idle
+state is tracked by an <tt>rcu_dynticks</tt> structure.
+
+</p><p>If you made it this far, you are well prepared to read the code
+walkthroughs in the other articles in this series.
+
+<h3><a name="Acknowledgments">
+Acknowledgments</a></h3>
+
+<p>I owe thanks to Cyrill Gorcunov, Mathieu Desnoyers, Dhaval Giani, Paul
+Turner, Abhishek Srivastava, Matt Kowalczyk, and Serge Hallyn
+for helping me get this document into a more human-readable state.
+
+<h3><a name="Legal Statement">
+Legal Statement</a></h3>
+
+<p>This work represents the view of the author and does not necessarily
+represent the view of IBM.
+
+</p><p>Linux is a registered trademark of Linus Torvalds.
+
+</p><p>Other company, product, and service names may be trademarks or
+service marks of others.
+
+</body></html>
diff --git a/Documentation/RCU/Design/Data-Structures/HugeTreeClassicRCU.svg b/Documentation/RCU/Design/Data-Structures/HugeTreeClassicRCU.svg
new file mode 100644
index 000000000000..2bf12b468206
--- /dev/null
+++ b/Documentation/RCU/Design/Data-Structures/HugeTreeClassicRCU.svg
@@ -0,0 +1,939 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Creator: fig2dev Version 3.2 Patchlevel 5e -->
+
+<!-- CreationDate: Wed Dec 9 17:37:22 2015 -->
+
+<!-- Magnification: 3.000 -->
+
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="15.1in"
+ height="11.2in"
+ viewBox="-66 -66 18087 13407"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.4 r9939"
+ sodipodi:docname="HugeTreeClassicRCU.fig">
+ <metadata
+ id="metadata224">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title></dc:title>
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <defs
+ id="defs222">
+ <marker
+ inkscape:stockid="Arrow1Mend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow1Mend"
+ style="overflow:visible;">
+ <path
+ id="path3982"
+ d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;"
+ transform="scale(0.4) rotate(180) translate(10,0)" />
+ </marker>
+ </defs>
+ <sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="1134"
+ inkscape:window-height="789"
+ id="namedview220"
+ showgrid="false"
+ inkscape:zoom="0.60515873"
+ inkscape:cx="679.5"
+ inkscape:cy="504"
+ inkscape:window-x="786"
+ inkscape:window-y="24"
+ inkscape:window-maximized="0"
+ inkscape:current-layer="g4" />
+ <g
+ style="stroke-width:.025in; fill:none"
+ id="g4">
+ <!-- Line: box -->
+ <rect
+ x="450"
+ y="0"
+ width="17100"
+ height="8325"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffff00; "
+ id="rect6" />
+ <!-- Line: box -->
+ <rect
+ x="11025"
+ y="3600"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect8" />
+ <!-- Line: box -->
+ <rect
+ x="4275"
+ y="3600"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect10" />
+ <!-- Line: box -->
+ <rect
+ x="5400"
+ y="6300"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect12" />
+ <!-- Line: box -->
+ <rect
+ x="9900"
+ y="6300"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect14" />
+ <!-- Line: box -->
+ <rect
+ x="14400"
+ y="6300"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect16" />
+ <!-- Line: box -->
+ <rect
+ x="900"
+ y="6300"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect18" />
+ <!-- Line: box -->
+ <rect
+ x="7650"
+ y="900"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect20" />
+ <!-- Line -->
+ <polyline
+ points="3150,9225 3150,7746 "
+ style="stroke:#00d1d1;stroke-width:44.99790066;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline22" />
+ <!-- Arrowhead on XXXpoint 3150 9225 - 3150 7560-->
+ <!-- Circle -->
+ <circle
+ cx="8550"
+ cy="4275"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle26" />
+ <!-- Circle -->
+ <circle
+ cx="9000"
+ cy="4275"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle28" />
+ <!-- Circle -->
+ <circle
+ cx="9450"
+ cy="4275"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle30" />
+ <!-- Line -->
+ <polyline
+ points="6750,6300 8250,5010 "
+ style="stroke:#00d1d1;stroke-width:44.99790066;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline32" />
+ <!-- Arrowhead on XXXpoint 6750 6300 - 8391 4890-->
+ <!-- Line -->
+ <polyline
+ points="11250,6300 9747,5010 "
+ style="stroke:#00d1d1;stroke-width:44.99790066;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline36" />
+ <!-- Arrowhead on XXXpoint 11250 6300 - 9606 4890-->
+ <!-- Circle -->
+ <circle
+ cx="13950"
+ cy="6975"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle40" />
+ <!-- Circle -->
+ <circle
+ cx="13500"
+ cy="6975"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle42" />
+ <!-- Circle -->
+ <circle
+ cx="13050"
+ cy="6975"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle44" />
+ <!-- Circle -->
+ <circle
+ cx="9450"
+ cy="6975"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle46" />
+ <!-- Circle -->
+ <circle
+ cx="9000"
+ cy="6975"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle48" />
+ <!-- Circle -->
+ <circle
+ cx="8550"
+ cy="6975"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle50" />
+ <!-- Circle -->
+ <circle
+ cx="4950"
+ cy="6975"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle52" />
+ <!-- Circle -->
+ <circle
+ cx="4500"
+ cy="6975"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle54" />
+ <!-- Circle -->
+ <circle
+ cx="4050"
+ cy="6975"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle56" />
+ <!-- Circle -->
+ <circle
+ cx="1800"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle58" />
+ <!-- Circle -->
+ <circle
+ cx="2250"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle60" />
+ <!-- Circle -->
+ <circle
+ cx="2700"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle62" />
+ <!-- Circle -->
+ <circle
+ cx="15300"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle64" />
+ <!-- Circle -->
+ <circle
+ cx="15750"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle66" />
+ <!-- Circle -->
+ <circle
+ cx="16200"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle68" />
+ <!-- Circle -->
+ <circle
+ cx="10800"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle70" />
+ <!-- Circle -->
+ <circle
+ cx="11250"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle72" />
+ <!-- Circle -->
+ <circle
+ cx="11700"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle74" />
+ <!-- Circle -->
+ <circle
+ cx="6300"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle76" />
+ <!-- Circle -->
+ <circle
+ cx="6750"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle78" />
+ <!-- Circle -->
+ <circle
+ cx="7200"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle80" />
+ <!-- Line: box -->
+ <rect
+ x="0"
+ y="11475"
+ width="2700"
+ height="1800"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect82" />
+ <!-- Line: box -->
+ <rect
+ x="1800"
+ y="9225"
+ width="2700"
+ height="1800"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect84" />
+ <!-- Line: box -->
+ <rect
+ x="4500"
+ y="11475"
+ width="2700"
+ height="1800"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect86" />
+ <!-- Line: box -->
+ <rect
+ x="6300"
+ y="9270"
+ width="2700"
+ height="1800"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect88" />
+ <!-- Line: box -->
+ <rect
+ x="8955"
+ y="11475"
+ width="2700"
+ height="1800"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect90" />
+ <!-- Line: box -->
+ <rect
+ x="10755"
+ y="9270"
+ width="2700"
+ height="1800"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect92" />
+ <!-- Line: box -->
+ <rect
+ x="13455"
+ y="11475"
+ width="2700"
+ height="1800"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect94" />
+ <!-- Line: box -->
+ <rect
+ x="15255"
+ y="9270"
+ width="2700"
+ height="1800"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect96" />
+ <!-- Line -->
+ <polyline
+ points="11700,3600 10197,2310 "
+ style="stroke:#00d1d1;stroke-width:44.99790066;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline98" />
+ <!-- Arrowhead on XXXpoint 11700 3600 - 10056 2190-->
+ <!-- Line -->
+ <polyline
+ points="6300,3600 7800,2310 "
+ style="stroke:#00d1d1;stroke-width:44.99790066;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline102" />
+ <!-- Arrowhead on XXXpoint 6300 3600 - 7941 2190-->
+ <!-- Line -->
+ <polyline
+ points="3150,6300 4650,5010 "
+ style="stroke:#00d1d1;stroke-width:44.99790066;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline106" />
+ <!-- Arrowhead on XXXpoint 3150 6300 - 4791 4890-->
+ <!-- Line -->
+ <polyline
+ points="14850,6300 13347,5010 "
+ style="stroke:#00d1d1;stroke-width:44.99790066;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline110" />
+ <!-- Arrowhead on XXXpoint 14850 6300 - 13206 4890-->
+ <!-- Line -->
+ <polyline
+ points="1350,11475 1350,7746 "
+ style="stroke:#00d1d1;stroke-width:44.99790066;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline114" />
+ <!-- Arrowhead on XXXpoint 1350 11475 - 1350 7560-->
+ <!-- Line -->
+ <polyline
+ points="16650,9225 16650,7746 "
+ style="stroke:#00d1d1;stroke-width:44.99790066;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline118" />
+ <!-- Arrowhead on XXXpoint 16650 9225 - 16650 7560-->
+ <!-- Line -->
+ <polyline
+ points="14850,11475 14850,7746 "
+ style="stroke:#00d1d1;stroke-width:44.99790066;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline122" />
+ <!-- Arrowhead on XXXpoint 14850 11475 - 14850 7560-->
+ <!-- Line -->
+ <polyline
+ points="12150,9225 12150,7746 "
+ style="stroke:#00d1d1;stroke-width:44.99790066;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline126" />
+ <!-- Arrowhead on XXXpoint 12150 9225 - 12150 7560-->
+ <!-- Line -->
+ <polyline
+ points="10350,11475 10350,7746 "
+ style="stroke:#00d1d1;stroke-width:44.99790066;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline130" />
+ <!-- Arrowhead on XXXpoint 10350 11475 - 10350 7560-->
+ <!-- Line -->
+ <polyline
+ points="7650,9225 7650,7746 "
+ style="stroke:#00d1d1;stroke-width:44.99790066;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline134" />
+ <!-- Arrowhead on XXXpoint 7650 9225 - 7650 7560-->
+ <!-- Line -->
+ <polyline
+ points="5850,11475 5850,7746 "
+ style="stroke:#00d1d1;stroke-width:44.99790066;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline138" />
+ <!-- Arrowhead on XXXpoint 5850 11475 - 5850 7560-->
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="12375"
+ y="4500"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text142">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="12375"
+ y="4050"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text144">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5625"
+ y="4050"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text146">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5625"
+ y="4500"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text148">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="6750"
+ y="6750"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text150">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="6750"
+ y="7200"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text152">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="11250"
+ y="7200"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text154">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="11250"
+ y="6750"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text156">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="15750"
+ y="7200"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text158">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="15750"
+ y="6750"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text160">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2250"
+ y="6750"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text162">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2250"
+ y="7200"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text164">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1350"
+ y="13050"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text166">CPU 0</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1350"
+ y="11925"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text168">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1350"
+ y="12375"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text170">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="3150"
+ y="10800"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text172">CPU 15</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="3150"
+ y="9675"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text174">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="3150"
+ y="10125"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text176">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5850"
+ y="11925"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text178">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5850"
+ y="12375"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text180">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5850"
+ y="13050"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text182">CPU 21823</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="7650"
+ y="10845"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text184">CPU 21839</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="7650"
+ y="10170"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text186">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="7650"
+ y="9720"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text188">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="10305"
+ y="11925"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text190">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="10305"
+ y="12375"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text192">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="10305"
+ y="13050"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text194">CPU 43679</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="12105"
+ y="10845"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text196">CPU 43695</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="12105"
+ y="10170"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text198">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="12105"
+ y="9720"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text200">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="14805"
+ y="11925"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text202">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="14805"
+ y="12375"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text204">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="14805"
+ y="13050"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text206">CPU 65519</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="16605"
+ y="10845"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text208">CPU 65535</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="16605"
+ y="10170"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text210">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="16605"
+ y="9720"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text212">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="675"
+ y="450"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="start"
+ id="text214">struct rcu_state</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="9000"
+ y="1350"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text216">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="9000"
+ y="1800"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text218">rcu_node</text>
+ </g>
+</svg>
diff --git a/Documentation/RCU/Design/Data-Structures/TreeLevel.svg b/Documentation/RCU/Design/Data-Structures/TreeLevel.svg
new file mode 100644
index 000000000000..7a7eb3bac95c
--- /dev/null
+++ b/Documentation/RCU/Design/Data-Structures/TreeLevel.svg
@@ -0,0 +1,828 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Creator: fig2dev Version 3.2 Patchlevel 5e -->
+
+<!-- CreationDate: Wed Dec 9 17:41:29 2015 -->
+
+<!-- Magnification: 3.000 -->
+
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="17.7in"
+ height="10.4in"
+ viewBox="-66 -66 21237 12507"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.4 r9939"
+ sodipodi:docname="TreeLevel.fig">
+ <metadata
+ id="metadata216">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title></dc:title>
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <defs
+ id="defs214">
+ <marker
+ inkscape:stockid="Arrow1Mend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow1Mend"
+ style="overflow:visible;">
+ <path
+ id="path3974"
+ d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;"
+ transform="scale(0.4) rotate(180) translate(10,0)" />
+ </marker>
+ </defs>
+ <sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="1023"
+ inkscape:window-height="1148"
+ id="namedview212"
+ showgrid="false"
+ inkscape:zoom="0.55869424"
+ inkscape:cx="796.50006"
+ inkscape:cy="467.99997"
+ inkscape:window-x="897"
+ inkscape:window-y="24"
+ inkscape:window-maximized="0"
+ inkscape:current-layer="g4" />
+ <g
+ style="stroke-width:.025in; fill:none"
+ id="g4">
+ <!-- Line: box -->
+ <rect
+ x="0"
+ y="0"
+ width="20655"
+ height="8325"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffff00; "
+ id="rect6" />
+ <!-- Line: box -->
+ <rect
+ x="14130"
+ y="3600"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect8" />
+ <!-- Line: box -->
+ <rect
+ x="7380"
+ y="3600"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect10" />
+ <!-- Line: box -->
+ <rect
+ x="8505"
+ y="6300"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect12" />
+ <!-- Line: box -->
+ <rect
+ x="13005"
+ y="6300"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect14" />
+ <!-- Line: box -->
+ <rect
+ x="17505"
+ y="6300"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect16" />
+ <!-- Line: box -->
+ <rect
+ x="4005"
+ y="6300"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect18" />
+ <!-- Line: box -->
+ <rect
+ x="10755"
+ y="900"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect20" />
+ <!-- Line -->
+ <polyline
+ points="6255,9225 6255,7746 "
+ style="stroke:#00d1d1;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline22" />
+ <!-- Arrowhead on XXXpoint 6255 9225 - 6255 7560-->
+ <!-- Circle -->
+ <circle
+ cx="11655"
+ cy="4275"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle26" />
+ <!-- Circle -->
+ <circle
+ cx="12105"
+ cy="4275"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle28" />
+ <!-- Circle -->
+ <circle
+ cx="12555"
+ cy="4275"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle30" />
+ <!-- Line -->
+ <polyline
+ points="9855,6300 11355,5010 "
+ style="stroke:#00d1d1;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline32" />
+ <!-- Arrowhead on XXXpoint 9855 6300 - 11496 4890-->
+ <!-- Line -->
+ <polyline
+ points="14355,6300 12852,5010 "
+ style="stroke:#00d1d1;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline36" />
+ <!-- Arrowhead on XXXpoint 14355 6300 - 12711 4890-->
+ <!-- Circle -->
+ <circle
+ cx="17055"
+ cy="6975"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle40" />
+ <!-- Circle -->
+ <circle
+ cx="16605"
+ cy="6975"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle42" />
+ <!-- Circle -->
+ <circle
+ cx="16155"
+ cy="6975"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle44" />
+ <!-- Circle -->
+ <circle
+ cx="12555"
+ cy="6975"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle46" />
+ <!-- Circle -->
+ <circle
+ cx="12105"
+ cy="6975"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle48" />
+ <!-- Circle -->
+ <circle
+ cx="11655"
+ cy="6975"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle50" />
+ <!-- Circle -->
+ <circle
+ cx="8055"
+ cy="6975"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle52" />
+ <!-- Circle -->
+ <circle
+ cx="7605"
+ cy="6975"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle54" />
+ <!-- Circle -->
+ <circle
+ cx="7155"
+ cy="6975"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle56" />
+ <!-- Circle -->
+ <circle
+ cx="4905"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle58" />
+ <!-- Circle -->
+ <circle
+ cx="5355"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle60" />
+ <!-- Circle -->
+ <circle
+ cx="5805"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle62" />
+ <!-- Circle -->
+ <circle
+ cx="18405"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle64" />
+ <!-- Circle -->
+ <circle
+ cx="18855"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle66" />
+ <!-- Circle -->
+ <circle
+ cx="19305"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle68" />
+ <!-- Circle -->
+ <circle
+ cx="13905"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle70" />
+ <!-- Circle -->
+ <circle
+ cx="14355"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle72" />
+ <!-- Circle -->
+ <circle
+ cx="14805"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle74" />
+ <!-- Circle -->
+ <circle
+ cx="9405"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle76" />
+ <!-- Circle -->
+ <circle
+ cx="9855"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle78" />
+ <!-- Circle -->
+ <circle
+ cx="10305"
+ cy="8775"
+ r="114"
+ style="fill:#000000;stroke:#000000;stroke-width:21;"
+ id="circle80" />
+ <!-- Line: box -->
+ <rect
+ x="225"
+ y="1125"
+ width="3150"
+ height="1125"
+ rx="0"
+ style="stroke:#000000;stroke-width:21; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffffff; "
+ id="rect82" />
+ <!-- Line: box -->
+ <rect
+ x="225"
+ y="2250"
+ width="3150"
+ height="1125"
+ rx="0"
+ style="stroke:#000000;stroke-width:21; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffffff; "
+ id="rect84" />
+ <!-- Line: box -->
+ <rect
+ x="225"
+ y="3375"
+ width="3150"
+ height="1125"
+ rx="0"
+ style="stroke:#000000;stroke-width:21; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffffff; "
+ id="rect86" />
+ <!-- Line -->
+ <polyline
+ points="14805,3600 13302,2310 "
+ style="stroke:#00d1d1;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline88" />
+ <!-- Arrowhead on XXXpoint 14805 3600 - 13161 2190-->
+ <!-- Line -->
+ <polyline
+ points="9405,3600 10905,2310 "
+ style="stroke:#00d1d1;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline92" />
+ <!-- Arrowhead on XXXpoint 9405 3600 - 11046 2190-->
+ <!-- Line -->
+ <polyline
+ points="6255,6300 7755,5010 "
+ style="stroke:#00d1d1;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline96" />
+ <!-- Arrowhead on XXXpoint 6255 6300 - 7896 4890-->
+ <!-- Line -->
+ <polyline
+ points="17955,6300 16452,5010 "
+ style="stroke:#00d1d1;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline100" />
+ <!-- Arrowhead on XXXpoint 17955 6300 - 16311 4890-->
+ <!-- Line -->
+ <polyline
+ points="4455,11025 4455,7746 "
+ style="stroke:#00d1d1;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline104" />
+ <!-- Arrowhead on XXXpoint 4455 11025 - 4455 7560-->
+ <!-- Line -->
+ <polyline
+ points="19755,9225 19755,7746 "
+ style="stroke:#00d1d1;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline108" />
+ <!-- Arrowhead on XXXpoint 19755 9225 - 19755 7560-->
+ <!-- Line -->
+ <polyline
+ points="17955,11025 17955,7746 "
+ style="stroke:#00d1d1;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline112" />
+ <!-- Arrowhead on XXXpoint 17955 11025 - 17955 7560-->
+ <!-- Line -->
+ <polyline
+ points="15255,9225 15255,7746 "
+ style="stroke:#00d1d1;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline116" />
+ <!-- Arrowhead on XXXpoint 15255 9225 - 15255 7560-->
+ <!-- Line -->
+ <polyline
+ points="13455,11025 13455,7746 "
+ style="stroke:#00d1d1;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline120" />
+ <!-- Arrowhead on XXXpoint 13455 11025 - 13455 7560-->
+ <!-- Line -->
+ <polyline
+ points="10755,9225 10755,7746 "
+ style="stroke:#00d1d1;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline124" />
+ <!-- Arrowhead on XXXpoint 10755 9225 - 10755 7560-->
+ <!-- Line -->
+ <polyline
+ points="8955,11025 8955,7746 "
+ style="stroke:#00d1d1;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline128" />
+ <!-- Arrowhead on XXXpoint 8955 11025 - 8955 7560-->
+ <!-- Line: box -->
+ <rect
+ x="12105"
+ y="11025"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect132" />
+ <!-- Line: box -->
+ <rect
+ x="13905"
+ y="9225"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect134" />
+ <!-- Line: box -->
+ <rect
+ x="16605"
+ y="11025"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect136" />
+ <!-- Line: box -->
+ <rect
+ x="18405"
+ y="9225"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect138" />
+ <!-- Line: box -->
+ <rect
+ x="9405"
+ y="9225"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect140" />
+ <!-- Line: box -->
+ <rect
+ x="7605"
+ y="11025"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect142" />
+ <!-- Line: box -->
+ <rect
+ x="4905"
+ y="9225"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect144" />
+ <!-- Line: box -->
+ <rect
+ x="3105"
+ y="11025"
+ width="2700"
+ height="1350"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect146" />
+ <!-- Line -->
+ <polyline
+ points="3375,1575 10701,1575 "
+ style="stroke:#000000;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline148" />
+ <!-- Arrowhead on XXXpoint 3375 1575 - 10890 1575-->
+ <!-- Line -->
+ <polyline
+ points="3375,3825 4050,3825 4050,5400 2700,5400 2700,6975 3951,6975 "
+ style="stroke:#000000;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline152" />
+ <!-- Arrowhead on XXXpoint 2700 6975 - 4140 6975-->
+ <!-- Line -->
+ <polyline
+ points="3375,2700 5175,2700 5175,4275 7326,4275 "
+ style="stroke:#000000;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline156" />
+ <!-- Arrowhead on XXXpoint 5175 4275 - 7515 4275-->
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="15480"
+ y="4500"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text160">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="15480"
+ y="4050"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text162">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="8730"
+ y="4050"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text164">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="8730"
+ y="4500"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text166">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="9855"
+ y="6750"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text168">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="9855"
+ y="7200"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text170">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="14355"
+ y="7200"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text172">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="14355"
+ y="6750"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text174">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="18855"
+ y="7200"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text176">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="18855"
+ y="6750"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text178">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5355"
+ y="6750"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text180">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5355"
+ y="7200"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text182">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="450"
+ y="1800"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="324"
+ text-anchor="start"
+ id="text184">-&gt;level[0]</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="450"
+ y="2925"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="324"
+ text-anchor="start"
+ id="text186">-&gt;level[1]</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="450"
+ y="4050"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="324"
+ text-anchor="start"
+ id="text188">-&gt;level[2]</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="12105"
+ y="1350"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text190">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="12105"
+ y="1800"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="middle"
+ id="text192">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="6255"
+ y="10125"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text194">CPU 15</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4455"
+ y="11925"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text196">CPU 0</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="19755"
+ y="10125"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text198">CPU 65535</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="17955"
+ y="11925"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text200">CPU 65519</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="15255"
+ y="10125"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text202">CPU 43695</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="13455"
+ y="11925"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text204">CPU 43679</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="10755"
+ y="10125"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text206">CPU 21839</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="8955"
+ y="11925"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text208">CPU 21823</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="225"
+ y="450"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="288"
+ text-anchor="start"
+ id="text210">struct rcu_state</text>
+ </g>
+</svg>
diff --git a/Documentation/RCU/Design/Data-Structures/TreeMapping.svg b/Documentation/RCU/Design/Data-Structures/TreeMapping.svg
new file mode 100644
index 000000000000..729cfa9e6cdb
--- /dev/null
+++ b/Documentation/RCU/Design/Data-Structures/TreeMapping.svg
@@ -0,0 +1,305 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Creator: fig2dev Version 3.2 Patchlevel 5e -->
+
+<!-- CreationDate: Wed Dec 9 17:43:22 2015 -->
+
+<!-- Magnification: 1.000 -->
+
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="3.1in"
+ height="0.9in"
+ viewBox="-12 -12 3699 1074"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.4 r9939"
+ sodipodi:docname="TreeMapping.fig">
+ <metadata
+ id="metadata66">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title></dc:title>
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <defs
+ id="defs64">
+ <marker
+ inkscape:stockid="Arrow2Lend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow2Lend"
+ style="overflow:visible;">
+ <path
+ id="path3836"
+ style="fill-rule:evenodd;stroke-width:0.62500000;stroke-linejoin:round;"
+ d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
+ transform="scale(1.1) rotate(180) translate(1,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow2Mend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow2Mend"
+ style="overflow:visible;">
+ <path
+ id="path3842"
+ style="fill-rule:evenodd;stroke-width:0.62500000;stroke-linejoin:round;"
+ d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
+ transform="scale(0.6) rotate(180) translate(0,0)" />
+ </marker>
+ <marker
+ inkscape:stockid="Arrow1Mend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow1Mend"
+ style="overflow:visible;">
+ <path
+ id="path3824"
+ d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;"
+ transform="scale(0.4) rotate(180) translate(10,0)" />
+ </marker>
+ </defs>
+ <sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="991"
+ inkscape:window-height="606"
+ id="namedview62"
+ showgrid="false"
+ inkscape:zoom="3.0752688"
+ inkscape:cx="139.5"
+ inkscape:cy="40.5"
+ inkscape:window-x="891"
+ inkscape:window-y="177"
+ inkscape:window-maximized="0"
+ inkscape:current-layer="g4" />
+ <g
+ style="stroke-width:.025in; fill:none"
+ id="g4">
+ <!-- Line: box -->
+ <rect
+ x="0"
+ y="0"
+ width="3675"
+ height="1050"
+ rx="0"
+ style="stroke:#000000;stroke-width:7; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffff00; "
+ id="rect6" />
+ <!-- Line: box -->
+ <rect
+ x="75"
+ y="375"
+ width="375"
+ height="300"
+ rx="0"
+ style="stroke:#000000;stroke-width:7; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect8" />
+ <!-- Line: box -->
+ <rect
+ x="600"
+ y="375"
+ width="375"
+ height="300"
+ rx="0"
+ style="stroke:#000000;stroke-width:7; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect10" />
+ <!-- Line: box -->
+ <rect
+ x="1125"
+ y="375"
+ width="375"
+ height="300"
+ rx="0"
+ style="stroke:#000000;stroke-width:7; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect12" />
+ <!-- Line: box -->
+ <rect
+ x="1650"
+ y="375"
+ width="375"
+ height="300"
+ rx="0"
+ style="stroke:#000000;stroke-width:7; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect14" />
+ <!-- Line: box -->
+ <rect
+ x="2175"
+ y="375"
+ width="375"
+ height="300"
+ rx="0"
+ style="stroke:#000000;stroke-width:7; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect16" />
+ <!-- Line: box -->
+ <rect
+ x="3225"
+ y="375"
+ width="375"
+ height="300"
+ rx="0"
+ style="stroke:#000000;stroke-width:7; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect18" />
+ <!-- Line -->
+ <polyline
+ points="675,375 675,150 300,150 300,358 "
+ style="stroke:#000000;stroke-width:7.00088889;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow2Lend)"
+ id="polyline20" />
+ <!-- Arrowhead on XXXpoint 300 150 - 300 390-->
+ <!-- Line -->
+ <polyline
+ points="1200,675 1200,900 300,900 300,691 "
+ style="stroke:#000000;stroke-width:7.00088889;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow2Lend)"
+ id="polyline24" />
+ <!-- Arrowhead on XXXpoint 300 900 - 300 660-->
+ <!-- Line -->
+ <polyline
+ points="1725,375 1725,150 900,150 900,358 "
+ style="stroke:#000000;stroke-width:7.00088889;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow2Lend)"
+ id="polyline28" />
+ <!-- Arrowhead on XXXpoint 900 150 - 900 390-->
+ <!-- Line -->
+ <polyline
+ points="2250,375 2250,75 825,75 825,358 "
+ style="stroke:#000000;stroke-width:7.00088889;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow2Lend)"
+ id="polyline32" />
+ <!-- Arrowhead on XXXpoint 825 75 - 825 390-->
+ <!-- Line -->
+ <polyline
+ points="2775,675 2775,900 1425,900 1425,691 "
+ style="stroke:#000000;stroke-width:7.00088889;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow2Lend)"
+ id="polyline36" />
+ <!-- Arrowhead on XXXpoint 1425 900 - 1425 660-->
+ <!-- Line -->
+ <polyline
+ points="3300,675 3300,975 1350,975 1350,691 "
+ style="stroke:#000000;stroke-width:7.00088889;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow2Lend)"
+ id="polyline40" />
+ <!-- Arrowhead on XXXpoint 1350 975 - 1350 660-->
+ <!-- Line: box -->
+ <rect
+ x="2700"
+ y="375"
+ width="375"
+ height="300"
+ rx="0"
+ style="stroke:#000000;stroke-width:7; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect44" />
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="300"
+ y="525"
+ fill="#000000"
+ font-family="Times"
+ font-style="normal"
+ font-weight="normal"
+ font-size="96"
+ text-anchor="middle"
+ id="text46">0:7 </text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1350"
+ y="525"
+ fill="#000000"
+ font-family="Times"
+ font-style="normal"
+ font-weight="normal"
+ font-size="96"
+ text-anchor="middle"
+ id="text48">4:7 </text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1875"
+ y="525"
+ fill="#000000"
+ font-family="Times"
+ font-style="normal"
+ font-weight="normal"
+ font-size="96"
+ text-anchor="middle"
+ id="text50">0:1 </text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2400"
+ y="525"
+ fill="#000000"
+ font-family="Times"
+ font-style="normal"
+ font-weight="normal"
+ font-size="96"
+ text-anchor="middle"
+ id="text52">2:3 </text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2925"
+ y="525"
+ fill="#000000"
+ font-family="Times"
+ font-style="normal"
+ font-weight="normal"
+ font-size="96"
+ text-anchor="middle"
+ id="text54">4:5 </text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="3450"
+ y="525"
+ fill="#000000"
+ font-family="Times"
+ font-style="normal"
+ font-weight="normal"
+ font-size="96"
+ text-anchor="middle"
+ id="text56">6:7 </text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="825"
+ y="525"
+ fill="#000000"
+ font-family="Times"
+ font-style="normal"
+ font-weight="normal"
+ font-size="96"
+ text-anchor="middle"
+ id="text58">0:3 </text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="3600"
+ y="150"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="normal"
+ font-size="96"
+ text-anchor="end"
+ id="text60">struct rcu_state</text>
+ </g>
+</svg>
diff --git a/Documentation/RCU/Design/Data-Structures/TreeMappingLevel.svg b/Documentation/RCU/Design/Data-Structures/TreeMappingLevel.svg
new file mode 100644
index 000000000000..5b416a4b8453
--- /dev/null
+++ b/Documentation/RCU/Design/Data-Structures/TreeMappingLevel.svg
@@ -0,0 +1,380 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Creator: fig2dev Version 3.2 Patchlevel 5e -->
+
+<!-- CreationDate: Wed Dec 9 17:45:19 2015 -->
+
+<!-- Magnification: 1.000 -->
+
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="3.1in"
+ height="1.8in"
+ viewBox="-12 -12 3699 2124"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.4 r9939"
+ sodipodi:docname="TreeMappingLevel.svg">
+ <metadata
+ id="metadata98">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title />
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <defs
+ id="defs96">
+ <marker
+ inkscape:stockid="Arrow2Lend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow2Lend"
+ style="overflow:visible;">
+ <path
+ id="path3868"
+ style="fill-rule:evenodd;stroke-width:0.62500000;stroke-linejoin:round;"
+ d="M 8.7185878,4.0337352 L -2.2072895,0.016013256 L 8.7185884,-4.0017078 C 6.9730900,-1.6296469 6.9831476,1.6157441 8.7185878,4.0337352 z "
+ transform="scale(1.1) rotate(180) translate(1,0)" />
+ </marker>
+ </defs>
+ <sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="1598"
+ inkscape:window-height="1211"
+ id="namedview94"
+ showgrid="false"
+ inkscape:zoom="5.2508961"
+ inkscape:cx="139.5"
+ inkscape:cy="81"
+ inkscape:window-x="840"
+ inkscape:window-y="122"
+ inkscape:window-maximized="0"
+ inkscape:current-layer="g4" />
+ <g
+ style="stroke-width:.025in; fill:none"
+ id="g4">
+ <!-- Line: box -->
+ <rect
+ x="0"
+ y="0"
+ width="3675"
+ height="2100"
+ rx="0"
+ style="stroke:#000000;stroke-width:7; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffff00; "
+ id="rect6" />
+ <!-- Line: box -->
+ <rect
+ x="75"
+ y="1350"
+ width="750"
+ height="225"
+ rx="0"
+ style="stroke:#000000;stroke-width:7; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect8" />
+ <!-- Line: box -->
+ <rect
+ x="75"
+ y="1575"
+ width="750"
+ height="225"
+ rx="0"
+ style="stroke:#000000;stroke-width:7; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect10" />
+ <!-- Line: box -->
+ <rect
+ x="75"
+ y="1800"
+ width="750"
+ height="225"
+ rx="0"
+ style="stroke:#000000;stroke-width:7; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect12" />
+ <!-- Arc -->
+ <path
+ style="stroke:#000000;stroke-width:7;stroke-linecap:butt;"
+ d="M 1800,900 A 118 118 0 0 0 1800 1125 "
+ id="path14" />
+ <!-- Arc -->
+ <path
+ style="stroke:#000000;stroke-width:7;stroke-linecap:butt;"
+ d="M 750,900 A 75 75 0 0 0 750 1050 "
+ id="path16" />
+ <!-- Line -->
+ <polyline
+ points="750,900 750,691 "
+ style="stroke:#000000;stroke-width:7.00025806;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow2Lend)"
+ id="polyline18" />
+ <!-- Arrowhead on XXXpoint 750 900 - 750 660-->
+ <!-- Line: box -->
+ <rect
+ x="75"
+ y="375"
+ width="375"
+ height="300"
+ rx="0"
+ style="stroke:#000000;stroke-width:7; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect22" />
+ <!-- Line: box -->
+ <rect
+ x="600"
+ y="375"
+ width="375"
+ height="300"
+ rx="0"
+ style="stroke:#000000;stroke-width:7; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect24" />
+ <!-- Line: box -->
+ <rect
+ x="1650"
+ y="375"
+ width="375"
+ height="300"
+ rx="0"
+ style="stroke:#000000;stroke-width:7; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect26" />
+ <!-- Line: box -->
+ <rect
+ x="2175"
+ y="375"
+ width="375"
+ height="300"
+ rx="0"
+ style="stroke:#000000;stroke-width:7; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect28" />
+ <!-- Line: box -->
+ <rect
+ x="3225"
+ y="375"
+ width="375"
+ height="300"
+ rx="0"
+ style="stroke:#000000;stroke-width:7; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect30" />
+ <!-- Line -->
+ <polyline
+ points="675,375 675,150 300,150 300,358 "
+ style="stroke:#000000;stroke-width:7.00025806;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow2Lend)"
+ id="polyline32" />
+ <!-- Arrowhead on XXXpoint 300 150 - 300 390-->
+ <!-- Line -->
+ <polyline
+ points="1725,375 1725,150 900,150 900,358 "
+ style="stroke:#000000;stroke-width:7.00025806;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow2Lend)"
+ id="polyline36" />
+ <!-- Arrowhead on XXXpoint 900 150 - 900 390-->
+ <!-- Line -->
+ <polyline
+ points="2250,375 2250,75 825,75 825,358 "
+ style="stroke:#000000;stroke-width:7.00025806;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow2Lend)"
+ id="polyline40" />
+ <!-- Arrowhead on XXXpoint 825 75 - 825 390-->
+ <!-- Line -->
+ <polyline
+ points="2775,675 2775,975 1425,975 1425,691 "
+ style="stroke:#000000;stroke-width:7.00025806;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow2Lend)"
+ id="polyline44" />
+ <!-- Arrowhead on XXXpoint 1425 975 - 1425 660-->
+ <!-- Line: box -->
+ <rect
+ x="2700"
+ y="375"
+ width="375"
+ height="300"
+ rx="0"
+ style="stroke:#000000;stroke-width:7; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect48" />
+ <!-- Line: box -->
+ <rect
+ x="1125"
+ y="375"
+ width="375"
+ height="300"
+ rx="0"
+ style="stroke:#000000;stroke-width:7; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect50" />
+ <!-- Line -->
+ <polyline
+ points="3300,675 3300,1050 1350,1050 1350,691 "
+ style="stroke:#000000;stroke-width:7.00025806;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow2Lend)"
+ id="polyline52" />
+ <!-- Arrowhead on XXXpoint 1350 1050 - 1350 660-->
+ <!-- Line -->
+ <polyline
+ points="825,1425 975,1425 975,1200 225,1200 225,691 "
+ style="stroke:#000000;stroke-width:7.00025806;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow2Lend)"
+ id="polyline56" />
+ <!-- Arrowhead on XXXpoint 225 1200 - 225 660-->
+ <!-- Line -->
+ <polyline
+ points="1200,675 1200,975 300,975 300,691 "
+ style="stroke:#000000;stroke-width:7.00025806;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow2Lend)"
+ id="polyline60" />
+ <!-- Arrowhead on XXXpoint 300 975 - 300 660-->
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="150"
+ y="1500"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="108"
+ text-anchor="start"
+ id="text64">-&gt;level[0]</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="150"
+ y="1725"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="108"
+ text-anchor="start"
+ id="text66">-&gt;level[1]</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="150"
+ y="1950"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="108"
+ text-anchor="start"
+ id="text68">-&gt;level[2]</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="300"
+ y="525"
+ fill="#000000"
+ font-family="Times"
+ font-style="normal"
+ font-weight="normal"
+ font-size="96"
+ text-anchor="middle"
+ id="text70">0:7 </text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1350"
+ y="525"
+ fill="#000000"
+ font-family="Times"
+ font-style="normal"
+ font-weight="normal"
+ font-size="96"
+ text-anchor="middle"
+ id="text72">4:7 </text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1875"
+ y="525"
+ fill="#000000"
+ font-family="Times"
+ font-style="normal"
+ font-weight="normal"
+ font-size="96"
+ text-anchor="middle"
+ id="text74">0:1 </text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2400"
+ y="525"
+ fill="#000000"
+ font-family="Times"
+ font-style="normal"
+ font-weight="normal"
+ font-size="96"
+ text-anchor="middle"
+ id="text76">2:3 </text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2925"
+ y="525"
+ fill="#000000"
+ font-family="Times"
+ font-style="normal"
+ font-weight="normal"
+ font-size="96"
+ text-anchor="middle"
+ id="text78">4:5 </text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="3450"
+ y="525"
+ fill="#000000"
+ font-family="Times"
+ font-style="normal"
+ font-weight="normal"
+ font-size="96"
+ text-anchor="middle"
+ id="text80">6:7 </text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="825"
+ y="525"
+ fill="#000000"
+ font-family="Times"
+ font-style="normal"
+ font-weight="normal"
+ font-size="96"
+ text-anchor="middle"
+ id="text82">0:3 </text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="3600"
+ y="150"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="normal"
+ font-size="96"
+ text-anchor="end"
+ id="text84">struct rcu_state</text>
+ <!-- Line -->
+ <polyline
+ points="825,1875 1800,1875 1800,1125 "
+ style="stroke:#000000;stroke-width:7.00025806;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:none"
+ id="polyline86" />
+ <!-- Line -->
+ <polyline
+ points="1800,900 1800,691 "
+ style="stroke:#000000;stroke-width:7.00025806;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow2Lend)"
+ id="polyline88" />
+ <!-- Arrowhead on XXXpoint 1800 900 - 1800 660-->
+ <!-- Line -->
+ <polyline
+ points="825,1650 1200,1650 1200,1125 750,1125 750,1050 "
+ style="stroke:#000000;stroke-width:7; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline92" />
+ </g>
+</svg>
diff --git a/Documentation/RCU/Design/Data-Structures/blkd_task.svg b/Documentation/RCU/Design/Data-Structures/blkd_task.svg
new file mode 100644
index 000000000000..00e810bb8419
--- /dev/null
+++ b/Documentation/RCU/Design/Data-Structures/blkd_task.svg
@@ -0,0 +1,843 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Creator: fig2dev Version 3.2 Patchlevel 5e -->
+
+<!-- CreationDate: Wed Dec 9 17:35:03 2015 -->
+
+<!-- Magnification: 2.000 -->
+
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="10.1in"
+ height="8.6in"
+ viewBox="-44 -44 12088 10288"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.4 r9939"
+ sodipodi:docname="blkd_task.fig">
+ <metadata
+ id="metadata212">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title></dc:title>
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <defs
+ id="defs210">
+ <marker
+ inkscape:stockid="Arrow1Mend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow1Mend"
+ style="overflow:visible;">
+ <path
+ id="path3970"
+ d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;"
+ transform="scale(0.4) rotate(180) translate(10,0)" />
+ </marker>
+ </defs>
+ <sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="1087"
+ inkscape:window-height="1144"
+ id="namedview208"
+ showgrid="false"
+ inkscape:zoom="1.0495049"
+ inkscape:cx="454.50003"
+ inkscape:cy="387.00003"
+ inkscape:window-x="833"
+ inkscape:window-y="28"
+ inkscape:window-maximized="0"
+ inkscape:current-layer="g4" />
+ <g
+ style="stroke-width:.025in; fill:none"
+ id="g4">
+ <!-- Line: box -->
+ <rect
+ x="450"
+ y="0"
+ width="6300"
+ height="7350"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffffff; "
+ id="rect6" />
+ <!-- Line: box -->
+ <rect
+ x="4950"
+ y="4950"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect8" />
+ <!-- Line: box -->
+ <rect
+ x="750"
+ y="600"
+ width="5700"
+ height="3750"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffff00; "
+ id="rect10" />
+ <!-- Line -->
+ <polyline
+ points="5250,8100 5688,5912 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline12" />
+ <!-- Arrowhead on XXXpoint 5250 8100 - 5710 5790-->
+ <polyline
+ points="5714 6068 5704 5822 5598 6044 "
+ style="stroke:#00ff00;stroke-width:14;stroke-miterlimit:8; "
+ id="polyline14" />
+ <!-- Line -->
+ <polyline
+ points="4050,9300 4486,7262 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline16" />
+ <!-- Arrowhead on XXXpoint 4050 9300 - 4512 7140-->
+ <polyline
+ points="4514 7418 4506 7172 4396 7394 "
+ style="stroke:#00ff00;stroke-width:14;stroke-miterlimit:8; "
+ id="polyline18" />
+ <!-- Line -->
+ <polyline
+ points="1040,9300 1476,7262 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline20" />
+ <!-- Arrowhead on XXXpoint 1040 9300 - 1502 7140-->
+ <polyline
+ points="1504 7418 1496 7172 1386 7394 "
+ style="stroke:#00ff00;stroke-width:14;stroke-miterlimit:8; "
+ id="polyline22" />
+ <!-- Line -->
+ <polyline
+ points="2240,8100 2676,6062 "
+ style="stroke:#00ff00;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="polyline24" />
+ <!-- Arrowhead on XXXpoint 2240 8100 - 2702 5940-->
+ <polyline
+ points="2704 6218 2696 5972 2586 6194 "
+ style="stroke:#00ff00;stroke-width:14;stroke-miterlimit:8; "
+ id="polyline26" />
+ <!-- Line: box -->
+ <rect
+ x="0"
+ y="450"
+ width="6300"
+ height="7350"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffffff; "
+ id="rect28" />
+ <!-- Line: box -->
+ <rect
+ x="300"
+ y="1050"
+ width="5700"
+ height="3750"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffff00; "
+ id="rect30" />
+ <!-- Line -->
+ <polyline
+ points="1350,3450 2350,2590 "
+ style="stroke:#00d1d1;stroke-width:30.00057884;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline32" />
+ <!-- Arrowhead on XXXpoint 1350 3450 - 2444 2510-->
+ <!-- Line -->
+ <polyline
+ points="4950,3450 3948,2590 "
+ style="stroke:#00d1d1;stroke-width:30.00057884;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline36" />
+ <!-- Arrowhead on XXXpoint 4950 3450 - 3854 2510-->
+ <!-- Line -->
+ <polyline
+ points="4050,6600 4050,4414 "
+ style="stroke:#00d1d1;stroke-width:30.00057884;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline40" />
+ <!-- Arrowhead on XXXpoint 4050 6600 - 4050 4290-->
+ <!-- Line -->
+ <polyline
+ points="1050,6600 1050,4414 "
+ style="stroke:#00d1d1;stroke-width:30.00057884;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline44" />
+ <!-- Arrowhead on XXXpoint 1050 6600 - 1050 4290-->
+ <!-- Line -->
+ <polyline
+ points="2250,5400 2250,4414 "
+ style="stroke:#00d1d1;stroke-width:30.00057884;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline48" />
+ <!-- Arrowhead on XXXpoint 2250 5400 - 2250 4290-->
+ <!-- Line -->
+ <polyline
+ points="2250,8100 2250,6364 "
+ style="stroke:#00ff00;stroke-width:30.00057884;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline52" />
+ <!-- Arrowhead on XXXpoint 2250 8100 - 2250 6240-->
+ <!-- Line -->
+ <polyline
+ points="1050,9300 1050,7564 "
+ style="stroke:#00ff00;stroke-width:30.00057884;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline56" />
+ <!-- Arrowhead on XXXpoint 1050 9300 - 1050 7440-->
+ <!-- Line -->
+ <polyline
+ points="4050,9300 4050,7564 "
+ style="stroke:#00ff00;stroke-width:30.00057884;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline60" />
+ <!-- Arrowhead on XXXpoint 4050 9300 - 4050 7440-->
+ <!-- Line -->
+ <polyline
+ points="5250,8100 5250,6364 "
+ style="stroke:#00ff00;stroke-width:30.00057884;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline64" />
+ <!-- Arrowhead on XXXpoint 5250 8100 - 5250 6240-->
+ <!-- Circle -->
+ <circle
+ cx="2850"
+ cy="3900"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle68" />
+ <!-- Circle -->
+ <circle
+ cx="3150"
+ cy="3900"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle70" />
+ <!-- Circle -->
+ <circle
+ cx="3450"
+ cy="3900"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle72" />
+ <!-- Circle -->
+ <circle
+ cx="1350"
+ cy="5100"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle74" />
+ <!-- Circle -->
+ <circle
+ cx="1650"
+ cy="5100"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle76" />
+ <!-- Circle -->
+ <circle
+ cx="1950"
+ cy="5100"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle78" />
+ <!-- Circle -->
+ <circle
+ cx="4350"
+ cy="5100"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle80" />
+ <!-- Circle -->
+ <circle
+ cx="4650"
+ cy="5100"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle82" />
+ <!-- Circle -->
+ <circle
+ cx="4950"
+ cy="5100"
+ r="76"
+ style="fill:#000000;stroke:#000000;stroke-width:14;"
+ id="circle84" />
+ <!-- Line: box -->
+ <rect
+ x="750"
+ y="3450"
+ width="1800"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect86" />
+ <!-- Line: box -->
+ <rect
+ x="300"
+ y="6600"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect88" />
+ <!-- Line: box -->
+ <rect
+ x="4500"
+ y="5400"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect90" />
+ <!-- Line: box -->
+ <rect
+ x="3300"
+ y="6600"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect92" />
+ <!-- Line: box -->
+ <rect
+ x="2250"
+ y="1650"
+ width="1800"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect94" />
+ <!-- Line: box -->
+ <rect
+ x="0"
+ y="9300"
+ width="2100"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#00ff00; "
+ id="rect96" />
+ <!-- Line: box -->
+ <rect
+ x="1350"
+ y="8100"
+ width="2100"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#00ff00; "
+ id="rect98" />
+ <!-- Line: box -->
+ <rect
+ x="3000"
+ y="9300"
+ width="2100"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#00ff00; "
+ id="rect100" />
+ <!-- Line: box -->
+ <rect
+ x="4350"
+ y="8100"
+ width="2100"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#00ff00; "
+ id="rect102" />
+ <!-- Line: box -->
+ <rect
+ x="1500"
+ y="5400"
+ width="1500"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect104" />
+ <!-- Line -->
+ <polygon
+ points="5550,3450 7350,2850 7350,5100 5550,4350 5550,3450 "
+ style="stroke:#000000;stroke-width:14; stroke-linejoin:miter; stroke-linecap:butt; stroke-dasharray:120 120;fill:#ffbfbf; "
+ id="polygon106" />
+ <!-- Line -->
+ <polyline
+ points="9300,3150 10734,3150 "
+ style="stroke:#000000;stroke-width:30.00057884;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline108" />
+ <!-- Arrowhead on XXXpoint 9300 3150 - 10860 3150-->
+ <!-- Line: box -->
+ <rect
+ x="10800"
+ y="2850"
+ width="1200"
+ height="750"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect112" />
+ <!-- Line -->
+ <polyline
+ points="11400,3600 11400,4284 "
+ style="stroke:#000000;stroke-width:30.00057884;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline114" />
+ <!-- Arrowhead on XXXpoint 11400 3600 - 11400 4410-->
+ <!-- Line: box -->
+ <rect
+ x="10800"
+ y="4350"
+ width="1200"
+ height="750"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect118" />
+ <!-- Line -->
+ <polyline
+ points="11400,5100 11400,5784 "
+ style="stroke:#000000;stroke-width:30.00057884;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline120" />
+ <!-- Arrowhead on XXXpoint 11400 5100 - 11400 5910-->
+ <!-- Line: box -->
+ <rect
+ x="10800"
+ y="5850"
+ width="1200"
+ height="750"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect124" />
+ <!-- Line -->
+ <polyline
+ points="9300,3900 9900,3900 9900,4650 10734,4650 "
+ style="stroke:#000000;stroke-width:30.00057884;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline126" />
+ <!-- Arrowhead on XXXpoint 9900 4650 - 10860 4650-->
+ <!-- Line -->
+ <polyline
+ points="9300,4650 9600,4650 9600,6150 10734,6150 "
+ style="stroke:#000000;stroke-width:30.00057884;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline130" />
+ <!-- Arrowhead on XXXpoint 9600 6150 - 10860 6150-->
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="6450"
+ y="300"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="192"
+ text-anchor="end"
+ id="text134">rcu_bh</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="3150"
+ y="1950"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text136">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="3150"
+ y="2250"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text138">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1650"
+ y="3750"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text140">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1650"
+ y="4050"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text142">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2250"
+ y="5700"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text144">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2250"
+ y="6000"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text146">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1050"
+ y="6900"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text148">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1050"
+ y="7200"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text150">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5250"
+ y="5700"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text152">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5250"
+ y="6000"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text154">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4050"
+ y="6900"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text156">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4050"
+ y="7200"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text158">rcu_data</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="450"
+ y="1350"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="start"
+ id="text160">struct rcu_state</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1050"
+ y="9600"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text162">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="1050"
+ y="9900"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text164">rcu_dynticks</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4050"
+ y="9600"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text166">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4050"
+ y="9900"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text168">rcu_dynticks</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2400"
+ y="8400"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text170">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="2400"
+ y="8700"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text172">rcu_dynticks</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5400"
+ y="8400"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text174">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="5400"
+ y="8700"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text176">rcu_dynticks</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="6000"
+ y="750"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="192"
+ text-anchor="end"
+ id="text178">rcu_sched</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="11400"
+ y="3300"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="216"
+ text-anchor="middle"
+ id="text180">T3</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="11400"
+ y="4800"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="216"
+ text-anchor="middle"
+ id="text182">T2</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="11400"
+ y="6300"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="216"
+ text-anchor="middle"
+ id="text184">T1</text>
+ <!-- Line -->
+ <polyline
+ points="5250,5400 5250,4414 "
+ style="stroke:#00d1d1;stroke-width:30.00057884;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline186" />
+ <!-- Arrowhead on XXXpoint 5250 5400 - 5250 4290-->
+ <!-- Line: box -->
+ <rect
+ x="3750"
+ y="3450"
+ width="1800"
+ height="900"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect190" />
+ <!-- Line: box -->
+ <rect
+ x="7350"
+ y="2850"
+ width="1950"
+ height="750"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect192" />
+ <!-- Line: box -->
+ <rect
+ x="7350"
+ y="3600"
+ width="1950"
+ height="750"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect194" />
+ <!-- Line: box -->
+ <rect
+ x="7350"
+ y="4350"
+ width="1950"
+ height="750"
+ rx="0"
+ style="stroke:#000000;stroke-width:30; stroke-linejoin:miter; stroke-linecap:butt; fill:#ffbfbf; "
+ id="rect196" />
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4650"
+ y="4050"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text198">rcu_node</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="4650"
+ y="3750"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="middle"
+ id="text200">struct</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="7500"
+ y="3300"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="start"
+ id="text202">blkd_tasks</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="7500"
+ y="4050"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="start"
+ id="text204">gp_tasks</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="7500"
+ y="4800"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="192"
+ text-anchor="start"
+ id="text206">exp_tasks</text>
+ </g>
+</svg>
diff --git a/Documentation/RCU/Design/Data-Structures/nxtlist.svg b/Documentation/RCU/Design/Data-Structures/nxtlist.svg
new file mode 100644
index 000000000000..abc4cc73a097
--- /dev/null
+++ b/Documentation/RCU/Design/Data-Structures/nxtlist.svg
@@ -0,0 +1,396 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<!-- Creator: fig2dev Version 3.2 Patchlevel 5e -->
+
+<!-- CreationDate: Wed Dec 9 17:39:46 2015 -->
+
+<!-- Magnification: 3.000 -->
+
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="10.4in"
+ height="10.4in"
+ viewBox="-66 -66 12507 12507"
+ id="svg2"
+ version="1.1"
+ inkscape:version="0.48.4 r9939"
+ sodipodi:docname="nxtlist.fig">
+ <metadata
+ id="metadata94">
+ <rdf:RDF>
+ <cc:Work
+ rdf:about="">
+ <dc:format>image/svg+xml</dc:format>
+ <dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
+ <dc:title></dc:title>
+ </cc:Work>
+ </rdf:RDF>
+ </metadata>
+ <defs
+ id="defs92">
+ <marker
+ inkscape:stockid="Arrow1Mend"
+ orient="auto"
+ refY="0.0"
+ refX="0.0"
+ id="Arrow1Mend"
+ style="overflow:visible;">
+ <path
+ id="path3852"
+ d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
+ style="fill-rule:evenodd;stroke:#000000;stroke-width:1.0pt;"
+ transform="scale(0.4) rotate(180) translate(10,0)" />
+ </marker>
+ </defs>
+ <sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="925"
+ inkscape:window-height="928"
+ id="namedview90"
+ showgrid="false"
+ inkscape:zoom="0.80021373"
+ inkscape:cx="467.99997"
+ inkscape:cy="467.99997"
+ inkscape:window-x="948"
+ inkscape:window-y="73"
+ inkscape:window-maximized="0"
+ inkscape:current-layer="g4" />
+ <g
+ style="stroke-width:.025in; fill:none"
+ id="g4">
+ <!-- Line: box -->
+ <rect
+ x="0"
+ y="0"
+ width="7875"
+ height="1125"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect6" />
+ <!-- Line: box -->
+ <rect
+ x="0"
+ y="1125"
+ width="7875"
+ height="1125"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect8" />
+ <!-- Line: box -->
+ <rect
+ x="0"
+ y="2250"
+ width="7875"
+ height="1125"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect10" />
+ <!-- Line: box -->
+ <rect
+ x="0"
+ y="3375"
+ width="7875"
+ height="1125"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect12" />
+ <!-- Line: box -->
+ <rect
+ x="0"
+ y="4500"
+ width="7875"
+ height="1125"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; fill:#87cfff; "
+ id="rect14" />
+ <!-- Line: box -->
+ <rect
+ x="10575"
+ y="0"
+ width="1800"
+ height="1125"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect16" />
+ <!-- Line: box -->
+ <rect
+ x="10575"
+ y="1125"
+ width="1800"
+ height="1125"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect18" />
+ <!-- Line -->
+ <polyline
+ points="11475,2250 11475,3276 "
+ style="stroke:#000000;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline20" />
+ <!-- Arrowhead on XXXpoint 11475 2250 - 11475 3465-->
+ <!-- Line: box -->
+ <rect
+ x="10575"
+ y="6750"
+ width="1800"
+ height="1125"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect24" />
+ <!-- Line: box -->
+ <rect
+ x="10575"
+ y="7875"
+ width="1800"
+ height="1125"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect26" />
+ <!-- Line: box -->
+ <rect
+ x="10575"
+ y="10125"
+ width="1800"
+ height="1125"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect28" />
+ <!-- Line: box -->
+ <rect
+ x="10575"
+ y="11250"
+ width="1800"
+ height="1125"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect30" />
+ <!-- Line: box -->
+ <rect
+ x="10575"
+ y="3375"
+ width="1800"
+ height="1125"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect32" />
+ <!-- Line -->
+ <polyline
+ points="11475,5625 11475,6651 "
+ style="stroke:#000000;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline34" />
+ <!-- Arrowhead on XXXpoint 11475 5625 - 11475 6840-->
+ <!-- Line -->
+ <polyline
+ points="7875,225 10476,225 "
+ style="stroke:#000000;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline38" />
+ <!-- Arrowhead on XXXpoint 7875 225 - 10665 225-->
+ <!-- Line -->
+ <polyline
+ points="7875,1350 9675,1350 9675,675 7971,675 "
+ style="stroke:#000000;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline42" />
+ <!-- Arrowhead on XXXpoint 9675 675 - 7785 675-->
+ <!-- Line -->
+ <polyline
+ points="7875,2475 9675,2475 9675,4725 10476,4725 "
+ style="stroke:#000000;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline46" />
+ <!-- Arrowhead on XXXpoint 9675 4725 - 10665 4725-->
+ <!-- Line -->
+ <polyline
+ points="7875,3600 9225,3600 9225,5175 10476,5175 "
+ style="stroke:#000000;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline50" />
+ <!-- Arrowhead on XXXpoint 9225 5175 - 10665 5175-->
+ <!-- Line -->
+ <polyline
+ points="7875,4725 8775,4725 8775,11475 10476,11475 "
+ style="stroke:#000000;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline54" />
+ <!-- Arrowhead on XXXpoint 8775 11475 - 10665 11475-->
+ <!-- Line: box -->
+ <rect
+ x="10575"
+ y="4500"
+ width="1800"
+ height="1125"
+ rx="0"
+ style="stroke:#000000;stroke-width:45; stroke-linejoin:miter; stroke-linecap:butt; "
+ id="rect58" />
+ <!-- Line -->
+ <polyline
+ points="11475,9000 11475,10026 "
+ style="stroke:#000000;stroke-width:45.00382345;stroke-linejoin:miter;stroke-linecap:butt;stroke-miterlimit:4;stroke-dasharray:none;marker-end:url(#Arrow1Mend)"
+ id="polyline60" />
+ <!-- Arrowhead on XXXpoint 11475 9000 - 11475 10215-->
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="225"
+ y="675"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="324"
+ text-anchor="start"
+ id="text64">nxtlist</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="225"
+ y="1800"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="324"
+ text-anchor="start"
+ id="text66">nxttail[RCU_DONE_TAIL]</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="225"
+ y="2925"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="324"
+ text-anchor="start"
+ id="text68">nxttail[RCU_WAIT_TAIL]</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="225"
+ y="4050"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="324"
+ text-anchor="start"
+ id="text70">nxttail[RCU_NEXT_READY_TAIL]</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="225"
+ y="5175"
+ fill="#000000"
+ font-family="Courier"
+ font-style="normal"
+ font-weight="bold"
+ font-size="324"
+ text-anchor="start"
+ id="text72">nxttail[RCU_NEXT_TAIL]</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="11475"
+ y="675"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text74">CB 1</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="11475"
+ y="1800"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text76">next</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="11475"
+ y="7425"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text78">CB 3</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="11475"
+ y="8550"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text80">next</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="11475"
+ y="10800"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text82">CB 4</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="11475"
+ y="11925"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text84">next</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="11475"
+ y="4050"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text86">CB 2</text>
+ <!-- Text -->
+ <text
+ xml:space="preserve"
+ x="11475"
+ y="5175"
+ fill="#000000"
+ font-family="Helvetica"
+ font-style="normal"
+ font-weight="normal"
+ font-size="324"
+ text-anchor="middle"
+ id="text88">next</text>
+ </g>
+</svg>
diff --git a/Documentation/RCU/Design/Requirements/2013-08-is-it-dead.png b/Documentation/RCU/Design/Requirements/2013-08-is-it-dead.png
deleted file mode 100644
index 7496a55e4e7b..000000000000
--- a/Documentation/RCU/Design/Requirements/2013-08-is-it-dead.png
+++ /dev/null
Binary files differ
diff --git a/Documentation/RCU/Design/Requirements/RCUApplicability.svg b/Documentation/RCU/Design/Requirements/RCUApplicability.svg
deleted file mode 100644
index ebcbeee391ed..000000000000
--- a/Documentation/RCU/Design/Requirements/RCUApplicability.svg
+++ /dev/null
@@ -1,237 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Creator: fig2dev Version 3.2 Patchlevel 5d -->
-
-<!-- CreationDate: Tue Mar 4 18:34:25 2014 -->
-
-<!-- Magnification: 3.000 -->
-
-<svg
- xmlns:dc="http://purl.org/dc/elements/1.1/"
- xmlns:cc="http://creativecommons.org/ns#"
- xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
- xmlns:svg="http://www.w3.org/2000/svg"
- xmlns="http://www.w3.org/2000/svg"
- xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
- xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
- width="1089.1382"
- height="668.21368"
- viewBox="-2121 -36 14554.634 8876.4061"
- id="svg2"
- version="1.1"
- inkscape:version="0.48.3.1 r9886"
- sodipodi:docname="RCUApplicability.svg">
- <metadata
- id="metadata40">
- <rdf:RDF>
- <cc:Work
- rdf:about="">
- <dc:format>image/svg+xml</dc:format>
- <dc:type
- rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
- <dc:title />
- </cc:Work>
- </rdf:RDF>
- </metadata>
- <defs
- id="defs38" />
- <sodipodi:namedview
- pagecolor="#ffffff"
- bordercolor="#666666"
- borderopacity="1"
- objecttolerance="10"
- gridtolerance="10"
- guidetolerance="10"
- inkscape:pageopacity="0"
- inkscape:pageshadow="2"
- inkscape:window-width="849"
- inkscape:window-height="639"
- id="namedview36"
- showgrid="false"
- inkscape:zoom="0.51326165"
- inkscape:cx="544.56912"
- inkscape:cy="334.10686"
- inkscape:window-x="149"
- inkscape:window-y="448"
- inkscape:window-maximized="0"
- inkscape:current-layer="g4"
- fit-margin-top="5"
- fit-margin-left="5"
- fit-margin-right="5"
- fit-margin-bottom="5" />
- <g
- style="fill:none;stroke-width:0.025in"
- id="g4"
- transform="translate(-2043.6828,14.791398)">
- <!-- Line: box -->
- <rect
- x="0"
- y="0"
- width="14400"
- height="8775"
- rx="0"
- style="fill:#ffa1a1;stroke:#000000;stroke-width:21;stroke-linecap:butt;stroke-linejoin:miter"
- id="rect6" />
- <!-- Line: box -->
- <rect
- x="1350"
- y="0"
- width="11700"
- height="6075"
- rx="0"
- style="fill:#ffff00;stroke:#000000;stroke-width:21;stroke-linecap:butt;stroke-linejoin:miter"
- id="rect8" />
- <!-- Line: box -->
- <rect
- x="2700"
- y="0"
- width="9000"
- height="4275"
- rx="0"
- style="fill:#00ff00;stroke:#000000;stroke-width:21;stroke-linecap:butt;stroke-linejoin:miter"
- id="rect10" />
- <!-- Line: box -->
- <rect
- x="4050"
- y="0"
- width="6300"
- height="2475"
- rx="0"
- style="fill:#87cfff;stroke:#000000;stroke-width:21;stroke-linecap:butt;stroke-linejoin:miter"
- id="rect12" />
- <!-- Text -->
- <text
- xml:space="preserve"
- x="7200"
- y="900"
- font-style="normal"
- font-weight="normal"
- font-size="324"
- id="text14"
- sodipodi:linespacing="125%"
- style="font-size:427.63009644px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#000000;font-family:Nimbus Sans L;-inkscape-font-specification:Nimbus Sans L"><tspan
- style="font-size:427.63009644px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;font-family:Nimbus Sans L;-inkscape-font-specification:Nimbus Sans L"
- id="tspan3017">Read-Mostly, Stale &amp;</tspan></text>
- <!-- Text -->
- <text
- xml:space="preserve"
- x="7200"
- y="1350"
- font-style="normal"
- font-weight="normal"
- font-size="324"
- id="text16"
- sodipodi:linespacing="125%"
- style="font-size:427.63009644px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#000000;font-family:Nimbus Sans L;-inkscape-font-specification:Nimbus Sans L"><tspan
- style="font-size:427.63009644px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;font-family:Nimbus Sans L;-inkscape-font-specification:Nimbus Sans L"
- id="tspan3019">Inconsistent Data OK</tspan></text>
- <!-- Text -->
- <text
- xml:space="preserve"
- x="7200"
- y="1800"
- font-style="normal"
- font-weight="normal"
- font-size="324"
- id="text18"
- sodipodi:linespacing="125%"
- style="font-size:427.63009644px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#000000;font-family:Nimbus Sans L;-inkscape-font-specification:Nimbus Sans L"><tspan
- style="font-size:427.63009644px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;font-family:Nimbus Sans L;-inkscape-font-specification:Nimbus Sans L"
- id="tspan3021">(RCU Works Great!!!)</tspan></text>
- <!-- Text -->
- <text
- xml:space="preserve"
- x="7200"
- y="3825"
- font-style="normal"
- font-weight="normal"
- font-size="324"
- id="text20"
- sodipodi:linespacing="125%"
- style="font-size:427.63009644px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#000000;font-family:Nimbus Sans L;-inkscape-font-specification:Nimbus Sans L"><tspan
- style="font-size:427.63009644px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;font-family:Nimbus Sans L;-inkscape-font-specification:Nimbus Sans L"
- id="tspan3023">(RCU Works Well)</tspan></text>
- <!-- Text -->
- <text
- xml:space="preserve"
- x="7200"
- y="3375"
- font-style="normal"
- font-weight="normal"
- font-size="324"
- id="text22"
- sodipodi:linespacing="125%"
- style="font-size:427.63009644px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#000000;font-family:Nimbus Sans L;-inkscape-font-specification:Nimbus Sans L"><tspan
- style="font-size:427.63009644px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;font-family:Nimbus Sans L;-inkscape-font-specification:Nimbus Sans L"
- id="tspan3025">Read-Mostly, Need Consistent Data</tspan></text>
- <!-- Text -->
- <text
- xml:space="preserve"
- x="7200"
- y="5175"
- font-style="normal"
- font-weight="normal"
- font-size="324"
- id="text24"
- sodipodi:linespacing="125%"
- style="font-size:427.63009644px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#000000;font-family:Nimbus Sans L;-inkscape-font-specification:Nimbus Sans L"><tspan
- style="font-size:427.63009644px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;font-family:Nimbus Sans L;-inkscape-font-specification:Nimbus Sans L"
- id="tspan3027">Read-Write, Need Consistent Data</tspan></text>
- <!-- Text -->
- <text
- xml:space="preserve"
- x="7200"
- y="6975"
- font-style="normal"
- font-weight="normal"
- font-size="324"
- id="text26"
- style="font-size:427.63009644px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#000000;font-family:Nimbus Sans L;-inkscape-font-specification:Nimbus Sans L"
- sodipodi:linespacing="125%">Update-Mostly, Need Consistent Data</text>
- <!-- Text -->
- <text
- xml:space="preserve"
- x="7200"
- y="5625"
- font-style="normal"
- font-weight="normal"
- font-size="324"
- id="text28"
- sodipodi:linespacing="125%"
- style="font-size:427.63009644px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#000000;font-family:Nimbus Sans L;-inkscape-font-specification:Nimbus Sans L"><tspan
- style="font-size:427.63009644px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;font-family:Nimbus Sans L;-inkscape-font-specification:Nimbus Sans L"
- id="tspan3029">(RCU Might Be OK...)</tspan></text>
- <!-- Text -->
- <text
- xml:space="preserve"
- x="7200"
- y="7875"
- font-style="normal"
- font-weight="normal"
- font-size="324"
- id="text30"
- style="font-size:427.63009644px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#000000;font-family:Nimbus Sans L;-inkscape-font-specification:Nimbus Sans L"
- sodipodi:linespacing="125%">(1) Provide Existence Guarantees For Update-Friendly Mechanisms</text>
- <!-- Text -->
- <text
- xml:space="preserve"
- x="7200"
- y="8325"
- font-style="normal"
- font-weight="normal"
- font-size="324"
- id="text32"
- style="font-size:427.63009644px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#000000;font-family:Nimbus Sans L;-inkscape-font-specification:Nimbus Sans L"
- sodipodi:linespacing="125%">(2) Provide Wait-Free Read-Side Primitives for Real-Time Use)</text>
- <!-- Text -->
- <text
- xml:space="preserve"
- x="7200"
- y="7425"
- font-style="normal"
- font-weight="normal"
- font-size="324"
- id="text34"
- style="font-size:427.63009644px;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;text-align:center;line-height:125%;writing-mode:lr-tb;text-anchor:middle;fill:#000000;font-family:Nimbus Sans L;-inkscape-font-specification:Nimbus Sans L"
- sodipodi:linespacing="125%">(RCU is Very Unlikely to be the Right Tool For The Job, But it Can:</text>
- </g>
-</svg>
diff --git a/Documentation/RCU/Design/Requirements/Requirements.html b/Documentation/RCU/Design/Requirements/Requirements.html
index a725f9900ec8..e7e24b3e86e2 100644
--- a/Documentation/RCU/Design/Requirements/Requirements.html
+++ b/Documentation/RCU/Design/Requirements/Requirements.html
@@ -1,5 +1,3 @@
-<!-- DO NOT HAND EDIT. -->
-<!-- Instead, edit Documentation/RCU/Design/Requirements/Requirements.htmlx and run 'sh htmlqqz.sh Documentation/RCU/Design/Requirements/Requirements' -->
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html>
@@ -65,8 +63,8 @@ All that aside, here are the categories of currently known RCU requirements:
<p>
This is followed by a <a href="#Summary">summary</a>,
-which is in turn followed by the inevitable
-<a href="#Answers to Quick Quizzes">answers to the quick quizzes</a>.
+but note that the answer to each quick quiz immediately follows the quiz.
+Select the big white space with your mouse to see the answer.
<h2><a name="Fundamental Requirements">Fundamental Requirements</a></h2>
@@ -153,13 +151,27 @@ Therefore, the outcome:
</blockquote>
cannot happen.
-<p><a name="Quick Quiz 1"><b>Quick Quiz 1</b>:</a>
-Wait a minute!
-You said that updaters can make useful forward progress concurrently
-with readers, but pre-existing readers will block
-<tt>synchronize_rcu()</tt>!!!
-Just who are you trying to fool???
-<br><a href="#qq1answer">Answer</a>
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ Wait a minute!
+ You said that updaters can make useful forward progress concurrently
+ with readers, but pre-existing readers will block
+ <tt>synchronize_rcu()</tt>!!!
+ Just who are you trying to fool???
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ First, if updaters do not wish to be blocked by readers, they can use
+ <tt>call_rcu()</tt> or <tt>kfree_rcu()</tt>, which will
+ be discussed later.
+ Second, even when using <tt>synchronize_rcu()</tt>, the other
+ update-side code does run concurrently with readers, whether
+ pre-existing or not.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
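
To make the first point of the answer above concrete, here is a minimal, hypothetical sketch of an updater that never blocks on readers because it defers reclamation to call_rcu() (or kfree_rcu()). The element type, list, and lock names are invented for the sketch and do not appear in the patch.

#include <linux/list.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* Hypothetical element type; not taken from the patch. */
struct my_elem {
	struct list_head list;
	int a;
	struct rcu_head rh;
};

static LIST_HEAD(my_list);
static DEFINE_SPINLOCK(my_lock);

static void my_elem_reclaim(struct rcu_head *rh)
{
	kfree(container_of(rh, struct my_elem, rh));
}

static void my_elem_remove(struct my_elem *p)
{
	spin_lock(&my_lock);
	list_del_rcu(&p->list);		/* Unlink; pre-existing readers may still hold p. */
	spin_unlock(&my_lock);
	call_rcu(&p->rh, my_elem_reclaim);	/* Returns immediately. */
	/* For a plain kfree(), kfree_rcu(p, rh) avoids writing the callback at all. */
}

Either way, the updater can proceed to its next update immediately; only the reclamation is deferred until after a grace period.
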
<p>
This scenario resembles one of the first uses of RCU in
@@ -210,9 +222,20 @@ to guarantee that <tt>do_something()</tt> never runs concurrently
with <tt>recovery()</tt>, but with little or no synchronization
overhead in <tt>do_something_dlm()</tt>.
-<p><a name="Quick Quiz 2"><b>Quick Quiz 2</b>:</a>
-Why is the <tt>synchronize_rcu()</tt> on line&nbsp;28 needed?
-<br><a href="#qq2answer">Answer</a>
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ Why is the <tt>synchronize_rcu()</tt> on line&nbsp;28 needed?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ Without that extra grace period, memory reordering could result in
+ <tt>do_something_dlm()</tt> executing <tt>do_something()</tt>
+ concurrently with the last bits of <tt>recovery()</tt>.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
<p>
In order to avoid fatal problems such as deadlocks,
@@ -332,12 +355,27 @@ It also prevents any number of &ldquo;interesting&rdquo; compiler
optimizations, for example, the use of <tt>gp</tt> as a scratch
location immediately preceding the assignment.
-<p><a name="Quick Quiz 3"><b>Quick Quiz 3</b>:</a>
-But <tt>rcu_assign_pointer()</tt> does nothing to prevent the
-two assignments to <tt>p-&gt;a</tt> and <tt>p-&gt;b</tt>
-from being reordered.
-Can't that also cause problems?
-<br><a href="#qq3answer">Answer</a>
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ But <tt>rcu_assign_pointer()</tt> does nothing to prevent the
+ two assignments to <tt>p-&gt;a</tt> and <tt>p-&gt;b</tt>
+ from being reordered.
+ Can't that also cause problems?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ No, it cannot.
+ The readers cannot see either of these two fields until
+ the assignment to <tt>gp</tt>, by which time both fields are
+ fully initialized.
+ So reordering the assignments
+ to <tt>p-&gt;a</tt> and <tt>p-&gt;b</tt> cannot possibly
+ cause any problems.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
<p>
It is tempting to assume that the reader need not do anything special
@@ -494,11 +532,42 @@ The <tt>rcu_access_pointer()</tt> on line&nbsp;6 is similar to
code protected by the corresponding update-side lock.
</ol>
-<p><a name="Quick Quiz 4"><b>Quick Quiz 4</b>:</a>
-Without the <tt>rcu_dereference()</tt> or the
-<tt>rcu_access_pointer()</tt>, what destructive optimizations
-might the compiler make use of?
-<br><a href="#qq4answer">Answer</a>
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ Without the <tt>rcu_dereference()</tt> or the
+ <tt>rcu_access_pointer()</tt>, what destructive optimizations
+ might the compiler make use of?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ Let's start with what happens to <tt>do_something_gp()</tt>
+ if it fails to use <tt>rcu_dereference()</tt>.
+ It could reuse a value formerly fetched from this same pointer.
+ It could also fetch the pointer from <tt>gp</tt> in a byte-at-a-time
+	manner, resulting in <i>load tearing</i>, in turn resulting in a
+	bytewise mash-up of two distinct pointer values.
+ It might even use value-speculation optimizations, where it makes
+ a wrong guess, but by the time it gets around to checking the
+ value, an update has changed the pointer to match the wrong guess.
+ Too bad about any dereferences that returned pre-initialization garbage
+ in the meantime!
+ </font>
+
+ <p><font color="ffffff">
+ For <tt>remove_gp_synchronous()</tt>, as long as all modifications
+ to <tt>gp</tt> are carried out while holding <tt>gp_lock</tt>,
+ the above optimizations are harmless.
+ However,
+ with <tt>CONFIG_SPARSE_RCU_POINTER=y</tt>,
+ <tt>sparse</tt> will complain if you
+ define <tt>gp</tt> with <tt>__rcu</tt> and then
+ access it without using
+ either <tt>rcu_access_pointer()</tt> or <tt>rcu_dereference()</tt>.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
<p>
In short, RCU's publish-subscribe guarantee is provided by the combination
@@ -571,17 +640,156 @@ systems with more than one CPU:
<tt>synchronize_rcu()</tt> migrates in the meantime.
</ol>
-<p><a name="Quick Quiz 5"><b>Quick Quiz 5</b>:</a>
-Given that multiple CPUs can start RCU read-side critical sections
-at any time without any ordering whatsoever, how can RCU possibly tell whether
-or not a given RCU read-side critical section starts before a
-given instance of <tt>synchronize_rcu()</tt>?
-<br><a href="#qq5answer">Answer</a>
-
-<p><a name="Quick Quiz 6"><b>Quick Quiz 6</b>:</a>
-The first and second guarantees require unbelievably strict ordering!
-Are all these memory barriers <i> really</i> required?
-<br><a href="#qq6answer">Answer</a>
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ Given that multiple CPUs can start RCU read-side critical sections
+ at any time without any ordering whatsoever, how can RCU possibly
+ tell whether or not a given RCU read-side critical section starts
+ before a given instance of <tt>synchronize_rcu()</tt>?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ If RCU cannot tell whether or not a given
+ RCU read-side critical section starts before a
+ given instance of <tt>synchronize_rcu()</tt>,
+ then it must assume that the RCU read-side critical section
+ started first.
+ In other words, a given instance of <tt>synchronize_rcu()</tt>
+ can avoid waiting on a given RCU read-side critical section only
+ if it can prove that <tt>synchronize_rcu()</tt> started first.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
+
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ The first and second guarantees require unbelievably strict ordering!
+ Are all these memory barriers <i> really</i> required?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ Yes, they really are required.
+ To see why the first guarantee is required, consider the following
+ sequence of events:
+ </font>
+
+ <ol>
+ <li> <font color="ffffff">
+ CPU 1: <tt>rcu_read_lock()</tt>
+ </font>
+ <li> <font color="ffffff">
+ CPU 1: <tt>q = rcu_dereference(gp);
+ /* Very likely to return p. */</tt>
+ </font>
+ <li> <font color="ffffff">
+ CPU 0: <tt>list_del_rcu(p);</tt>
+ </font>
+ <li> <font color="ffffff">
+ CPU 0: <tt>synchronize_rcu()</tt> starts.
+ </font>
+ <li> <font color="ffffff">
+ CPU 1: <tt>do_something_with(q-&gt;a);
+ /* No smp_mb(), so might happen after kfree(). */</tt>
+ </font>
+ <li> <font color="ffffff">
+ CPU 1: <tt>rcu_read_unlock()</tt>
+ </font>
+ <li> <font color="ffffff">
+ CPU 0: <tt>synchronize_rcu()</tt> returns.
+ </font>
+ <li> <font color="ffffff">
+ CPU 0: <tt>kfree(p);</tt>
+ </font>
+ </ol>
+
+ <p><font color="ffffff">
+ Therefore, there absolutely must be a full memory barrier between the
+ end of the RCU read-side critical section and the end of the
+ grace period.
+ </font>
+
+ <p><font color="ffffff">
+ The sequence of events demonstrating the necessity of the second rule
+ is roughly similar:
+ </font>
+
+ <ol>
+ <li> <font color="ffffff">CPU 0: <tt>list_del_rcu(p);</tt>
+ </font>
+ <li> <font color="ffffff">CPU 0: <tt>synchronize_rcu()</tt> starts.
+ </font>
+ <li> <font color="ffffff">CPU 1: <tt>rcu_read_lock()</tt>
+ </font>
+ <li> <font color="ffffff">CPU 1: <tt>q = rcu_dereference(gp);
+ /* Might return p if no memory barrier. */</tt>
+ </font>
+ <li> <font color="ffffff">CPU 0: <tt>synchronize_rcu()</tt> returns.
+ </font>
+ <li> <font color="ffffff">CPU 0: <tt>kfree(p);</tt>
+ </font>
+ <li> <font color="ffffff">
+ CPU 1: <tt>do_something_with(q-&gt;a); /* Boom!!! */</tt>
+ </font>
+ <li> <font color="ffffff">CPU 1: <tt>rcu_read_unlock()</tt>
+ </font>
+ </ol>
+
+ <p><font color="ffffff">
+ And similarly, without a memory barrier between the beginning of the
+ grace period and the beginning of the RCU read-side critical section,
+ CPU&nbsp;1 might end up accessing the freelist.
+ </font>
+
+ <p><font color="ffffff">
+ The &ldquo;as if&rdquo; rule of course applies, so that any
+ implementation that acts as if the appropriate memory barriers
+ were in place is a correct implementation.
+ That said, it is much easier to fool yourself into believing
+ that you have adhered to the as-if rule than it is to actually
+ adhere to it!
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
+
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ You claim that <tt>rcu_read_lock()</tt> and <tt>rcu_read_unlock()</tt>
+ generate absolutely no code in some kernel builds.
+ This means that the compiler might arbitrarily rearrange consecutive
+ RCU read-side critical sections.
+ Given such rearrangement, if a given RCU read-side critical section
+ is done, how can you be sure that all prior RCU read-side critical
+ sections are done?
+ Won't the compiler rearrangements make that impossible to determine?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ In cases where <tt>rcu_read_lock()</tt> and <tt>rcu_read_unlock()</tt>
+ generate absolutely no code, RCU infers quiescent states only at
+ special locations, for example, within the scheduler.
+ Because calls to <tt>schedule()</tt> had better prevent calling-code
+ accesses to shared variables from being rearranged across the call to
+ <tt>schedule()</tt>, if RCU detects the end of a given RCU read-side
+ critical section, it will necessarily detect the end of all prior
+ RCU read-side critical sections, no matter how aggressively the
+ compiler scrambles the code.
+ </font>
+
+ <p><font color="ffffff">
+ Again, this all assumes that the compiler cannot scramble code across
+ calls to the scheduler, out of interrupt handlers, into the idle loop,
+ into user-mode code, and so on.
+ But if your kernel build allows that sort of scrambling, you have broken
+ far more than just RCU!
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
<p>
Note that these memory-barrier requirements do not replace the fundamental
@@ -626,9 +834,19 @@ inconvenience can be avoided through use of the
<tt>call_rcu()</tt> and <tt>kfree_rcu()</tt> API members
described later in this document.
-<p><a name="Quick Quiz 7"><b>Quick Quiz 7</b>:</a>
-But how does the upgrade-to-write operation exclude other readers?
-<br><a href="#qq7answer">Answer</a>
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ But how does the upgrade-to-write operation exclude other readers?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ It doesn't, just like normal RCU updates, which also do not exclude
+ RCU readers.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
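
For readers wondering what such an upgrade looks like in practice, the following hedged sketch builds on the document's gp and gp_lock example. The struct foo layout and the re-check of gp under the lock are assumptions of the sketch rather than anything the patch specifies; the point is simply that the upgrade serializes against other updaters, never against readers.

#include <linux/rcupdate.h>
#include <linux/spinlock.h>

struct foo {			/* Assumed layout; the document only shows
				 * accesses to ->a and ->b. */
	int a;
	int b;
};

static DEFINE_SPINLOCK(gp_lock);
static struct foo __rcu *gp;

static bool upgrade_and_set_b(int new_b)
{
	struct foo *p;
	bool ret = false;

	rcu_read_lock();
	p = rcu_dereference(gp);
	if (p) {
		spin_lock(&gp_lock);			/* Upgrade to write. */
		if (p == rcu_access_pointer(gp)) {	/* Still the current element? */
			p->b = new_b;
			ret = true;
		}
		spin_unlock(&gp_lock);
	}
	rcu_read_unlock();
	return ret;	/* Concurrent RCU readers were never excluded. */
}
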
<p>
This guarantee allows lookup code to be shared between read-side
@@ -714,9 +932,20 @@ to do significant reordering.
This is by design: Any significant ordering constraints would slow down
these fast-path APIs.
-<p><a name="Quick Quiz 8"><b>Quick Quiz 8</b>:</a>
-Can't the compiler also reorder this code?
-<br><a href="#qq8answer">Answer</a>
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ Can't the compiler also reorder this code?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ No, the volatile casts in <tt>READ_ONCE()</tt> and
+ <tt>WRITE_ONCE()</tt> prevent the compiler from reordering in
+ this particular case.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
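
To illustrate the volatile casts the answer refers to, the sketch below uses MY_-prefixed stand-ins for the kernel's READ_ONCE() and WRITE_ONCE() (the real macros handle multiple access sizes and additional checking). The compiler may not reorder two volatile accesses with respect to each other, although the CPU still may, which is consistent with the fast-path ordering discussion above.

/* Simplified models of the volatile casts; not the kernel's actual
 * definitions. */
#define MY_READ_ONCE(x)		(*(volatile typeof(x) *)&(x))
#define MY_WRITE_ONCE(x, v)	(*(volatile typeof(x) *)&(x) = (v))

static int x, y;

static void updater(void)
{
	MY_WRITE_ONCE(x, 1);	/* The compiler must emit this store... */
	MY_WRITE_ONCE(y, 1);	/* ...before this one, in program order.
				 * The CPU may still reorder the stores,
				 * which is why these fast-path APIs make
				 * no stronger ordering promise. */
}
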
<h3><a name="Readers Do Not Exclude Updaters">Readers Do Not Exclude Updaters</a></h3>
@@ -769,10 +998,28 @@ new readers can start immediately after <tt>synchronize_rcu()</tt>
starts, and <tt>synchronize_rcu()</tt> is under no
obligation to wait for these new readers.
-<p><a name="Quick Quiz 9"><b>Quick Quiz 9</b>:</a>
-Suppose that synchronize_rcu() did wait until all readers had completed.
-Would the updater be able to rely on this?
-<br><a href="#qq9answer">Answer</a>
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ Suppose that synchronize_rcu() did wait until <i>all</i>
+ readers had completed instead of waiting only on
+ pre-existing readers.
+ For how long would the updater be able to rely on there
+ being no readers?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ For no time at all.
+ Even if <tt>synchronize_rcu()</tt> were to wait until
+ all readers had completed, a new reader might start immediately after
+ <tt>synchronize_rcu()</tt> completed.
+ Therefore, the code following
+ <tt>synchronize_rcu()</tt> can <i>never</i> rely on there being
+ no readers.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
<h3><a name="Grace Periods Don't Partition Read-Side Critical Sections">
Grace Periods Don't Partition Read-Side Critical Sections</a></h3>
@@ -969,11 +1216,24 @@ grace period.
As a result, an RCU read-side critical section cannot partition a pair
of RCU grace periods.
-<p><a name="Quick Quiz 10"><b>Quick Quiz 10</b>:</a>
-How long a sequence of grace periods, each separated by an RCU read-side
-critical section, would be required to partition the RCU read-side
-critical sections at the beginning and end of the chain?
-<br><a href="#qq10answer">Answer</a>
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ How long a sequence of grace periods, each separated by an RCU
+ read-side critical section, would be required to partition the RCU
+ read-side critical sections at the beginning and end of the chain?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ In theory, an infinite number.
+ In practice, an unknown number that is sensitive to both implementation
+ details and timing considerations.
+ Therefore, even in practice, RCU users must abide by the
+ theoretical rather than the practical answer.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
<h3><a name="Disabling Preemption Does Not Block Grace Periods">
Disabling Preemption Does Not Block Grace Periods</a></h3>
@@ -1109,12 +1369,27 @@ These classes is covered in the following sections.
<h3><a name="Specialization">Specialization</a></h3>
<p>
-RCU is and always has been intended primarily for read-mostly situations, as
-illustrated by the following figure.
-This means that RCU's read-side primitives are optimized, often at the
+RCU is and always has been intended primarily for read-mostly situations,
+which means that RCU's read-side primitives are optimized, often at the
expense of its update-side primitives.
+Experience thus far is captured by the following list of situations:
-<p><img src="RCUApplicability.svg" alt="RCUApplicability.svg" width="70%"></p>
+<ol>
+<li> Read-mostly data, where stale and inconsistent data is not
+ a problem: RCU works great!
+<li> Read-mostly data, where data must be consistent:
+ RCU works well.
+<li> Read-write data, where data must be consistent:
+ RCU <i>might</i> work OK.
+ Or not.
+<li> Write-mostly data, where data must be consistent:
+ RCU is very unlikely to be the right tool for the job,
+ with the following exceptions, where RCU can provide:
+ <ol type=a>
+ <li> Existence guarantees for update-friendly mechanisms.
+ <li> Wait-free read-side primitives for real-time use.
+ </ol>
+</ol>
<p>
This focus on read-mostly situations means that RCU must interoperate
@@ -1127,9 +1402,43 @@ synchronization primitives be legal within RCU read-side critical sections,
including spinlocks, sequence locks, atomic operations, reference
counters, and memory barriers.
-<p><a name="Quick Quiz 11"><b>Quick Quiz 11</b>:</a>
-What about sleeping locks?
-<br><a href="#qq11answer">Answer</a>
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ What about sleeping locks?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ These are forbidden within Linux-kernel RCU read-side critical
+ sections because it is not legal to place a quiescent state
+ (in this case, voluntary context switch) within an RCU read-side
+ critical section.
+ However, sleeping locks may be used within userspace RCU read-side
+ critical sections, and also within Linux-kernel sleepable RCU
+ <a href="#Sleepable RCU"><font color="ffffff">(SRCU)</font></a>
+ read-side critical sections.
+	In addition, the -rt patchset turns spinlocks into
+	sleeping locks so that the corresponding critical sections
+	can be preempted, which also means that these sleeplockified
+	spinlocks (but not other sleeping locks!) may be acquired within
+	-rt-Linux-kernel RCU read-side critical sections.
+ </font>
+
+ <p><font color="ffffff">
+ Note that it <i>is</i> legal for a normal RCU read-side
+	critical section to conditionally acquire a sleeping lock
+ (as in <tt>mutex_trylock()</tt>), but only as long as it does
+ not loop indefinitely attempting to conditionally acquire that
+	sleeping lock.
+ The key point is that things like <tt>mutex_trylock()</tt>
+ either return with the mutex held, or return an error indication if
+ the mutex was not immediately available.
+ Either way, <tt>mutex_trylock()</tt> returns immediately without
+ sleeping.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
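
A small, hypothetical sketch of the conditional-acquisition pattern described in the answer follows; the mutex name and the updated field are invented for illustration.

#include <linux/mutex.h>
#include <linux/rcupdate.h>

static DEFINE_MUTEX(my_mutex);		/* Hypothetical sleeping lock. */

static bool try_update_under_rcu(int *field, int new_val)
{
	bool ret = false;

	rcu_read_lock();
	if (mutex_trylock(&my_mutex)) {	/* Never sleeps: either acquires the
					 * mutex or fails immediately. */
		*field = new_val;
		mutex_unlock(&my_mutex);
		ret = true;
	}
	/* Do not call mutex_lock() or retry mutex_trylock() indefinitely
	 * here: sleeping or looping forever inside the reader is exactly
	 * what the answer above rules out. */
	rcu_read_unlock();
	return ret;
}
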
<p>
It often comes as a surprise that many algorithms do not require a
@@ -1160,10 +1469,7 @@ some period of time, so the exact wait period is a judgment call.
One of our pair of veternarians might wait 30 seconds before pronouncing
the cat dead, while the other might insist on waiting a full minute.
The two veternarians would then disagree on the state of the cat during
-the final 30 seconds of the minute following the last heartbeat, as
-fancifully illustrated below:
-
-<p><img src="2013-08-is-it-dead.png" alt="2013-08-is-it-dead.png" width="431"></p>
+the final 30 seconds of the minute following the last heartbeat.
<p>
Interestingly enough, this same situation applies to hardware.
@@ -1343,7 +1649,8 @@ situations where neither <tt>synchronize_rcu()</tt> nor
<tt>synchronize_rcu_expedited()</tt> would be legal,
including within preempt-disable code, <tt>local_bh_disable()</tt> code,
interrupt-disable code, and interrupt handlers.
-However, even <tt>call_rcu()</tt> is illegal within NMI handlers.
+However, even <tt>call_rcu()</tt> is illegal within NMI handlers
+and from idle and offline CPUs.
The callback function (<tt>remove_gp_cb()</tt> in this case) will be
executed within softirq (software interrupt) environment within the
Linux kernel,
@@ -1354,12 +1661,27 @@ write an RCU callback function that takes too long.
Long-running operations should be relegated to separate threads or
(in the Linux kernel) workqueues.
-<p><a name="Quick Quiz 12"><b>Quick Quiz 12</b>:</a>
-Why does line&nbsp;19 use <tt>rcu_access_pointer()</tt>?
-After all, <tt>call_rcu()</tt> on line&nbsp;25 stores into the
-structure, which would interact badly with concurrent insertions.
-Doesn't this mean that <tt>rcu_dereference()</tt> is required?
-<br><a href="#qq12answer">Answer</a>
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ Why does line&nbsp;19 use <tt>rcu_access_pointer()</tt>?
+ After all, <tt>call_rcu()</tt> on line&nbsp;25 stores into the
+ structure, which would interact badly with concurrent insertions.
+ Doesn't this mean that <tt>rcu_dereference()</tt> is required?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ Presumably the <tt>-&gt;gp_lock</tt> acquired on line&nbsp;18 excludes
+ any changes, including any insertions that <tt>rcu_dereference()</tt>
+ would protect against.
+ Therefore, any insertions will be delayed until after
+ <tt>-&gt;gp_lock</tt>
+ is released on line&nbsp;25, which in turn means that
+ <tt>rcu_access_pointer()</tt> suffices.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
<p>
However, all that <tt>remove_gp_cb()</tt> is doing is
@@ -1406,14 +1728,31 @@ This was due to the fact that RCU was not heavily used within DYNIX/ptx,
so the very few places that needed something like
<tt>synchronize_rcu()</tt> simply open-coded it.
-<p><a name="Quick Quiz 13"><b>Quick Quiz 13</b>:</a>
-Earlier it was claimed that <tt>call_rcu()</tt> and
-<tt>kfree_rcu()</tt> allowed updaters to avoid being blocked
-by readers.
-But how can that be correct, given that the invocation of the callback
-and the freeing of the memory (respectively) must still wait for
-a grace period to elapse?
-<br><a href="#qq13answer">Answer</a>
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ Earlier it was claimed that <tt>call_rcu()</tt> and
+ <tt>kfree_rcu()</tt> allowed updaters to avoid being blocked
+ by readers.
+ But how can that be correct, given that the invocation of the callback
+ and the freeing of the memory (respectively) must still wait for
+ a grace period to elapse?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ We could define things this way, but keep in mind that this sort of
+ definition would say that updates in garbage-collected languages
+ cannot complete until the next time the garbage collector runs,
+ which does not seem at all reasonable.
+ The key point is that in most cases, an updater using either
+ <tt>call_rcu()</tt> or <tt>kfree_rcu()</tt> can proceed to the
+ next update as soon as it has invoked <tt>call_rcu()</tt> or
+ <tt>kfree_rcu()</tt>, without having to wait for a subsequent
+ grace period.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
<p>
But what if the updater must wait for the completion of code to be
@@ -1838,11 +2177,26 @@ kthreads to be spawned.
Therefore, invoking <tt>synchronize_rcu()</tt> during scheduler
initialization can result in deadlock.
-<p><a name="Quick Quiz 14"><b>Quick Quiz 14</b>:</a>
-So what happens with <tt>synchronize_rcu()</tt> during
-scheduler initialization for <tt>CONFIG_PREEMPT=n</tt>
-kernels?
-<br><a href="#qq14answer">Answer</a>
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ So what happens with <tt>synchronize_rcu()</tt> during
+ scheduler initialization for <tt>CONFIG_PREEMPT=n</tt>
+ kernels?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+	In <tt>CONFIG_PREEMPT=n</tt> kernels, <tt>synchronize_rcu()</tt>
+ maps directly to <tt>synchronize_sched()</tt>.
+ Therefore, <tt>synchronize_rcu()</tt> works normally throughout
+ boot in <tt>CONFIG_PREEMPT=n</tt> kernels.
+ However, your code must also work in <tt>CONFIG_PREEMPT=y</tt> kernels,
+ so it is still necessary to avoid invoking <tt>synchronize_rcu()</tt>
+ during scheduler initialization.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
<p>
I learned of these boot-time requirements as a result of a series of
@@ -2171,6 +2525,14 @@ This real-time requirement motivated the grace-period kthread, which
also simplified handling of a number of race conditions.
<p>
+RCU must avoid degrading real-time response for CPU-bound threads, whether
+executing in usermode (which is one use case for
+<tt>CONFIG_NO_HZ_FULL=y</tt>) or in the kernel.
+That said, CPU-bound loops in the kernel must execute
+<tt>cond_resched_rcu_qs()</tt> at least once per few tens of milliseconds
+in order to avoid receiving an IPI from RCU.
+
+<p>
Finally, RCU's status as a synchronization primitive means that
any RCU failure can result in arbitrary memory corruption that can be
extremely difficult to debug.
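
The cond_resched_rcu_qs() requirement added in the hunk above might look like the following hypothetical sketch, in which a CPU-bound kernel loop periodically reports a quiescent state. The item type, the processing function, and the batch size of 1024 iterations are all assumptions chosen for illustration.

#include <linux/rcupdate.h>

struct item {
	unsigned long payload;		/* Hypothetical work item. */
};

static void process_item(struct item *it)
{
	/* Stand-in for CPU-bound work on *it. */
}

static void crunch_items(struct item *items, unsigned long n)
{
	unsigned long i;

	for (i = 0; i < n; i++) {
		process_item(&items[i]);
		if ((i & 1023) == 1023)
			cond_resched_rcu_qs();	/* Report a quiescent state
						 * (and offer to reschedule)
						 * every so often so that RCU
						 * need not IPI this CPU. */
	}
}
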
@@ -2223,6 +2585,8 @@ described in a separate section.
<li> <a href="#Sched Flavor">Sched Flavor</a>
<li> <a href="#Sleepable RCU">Sleepable RCU</a>
<li> <a href="#Tasks RCU">Tasks RCU</a>
+<li> <a href="#Waiting for Multiple Grace Periods">
+ Waiting for Multiple Grace Periods</a>
</ol>
<h3><a name="Bottom-Half Flavor">Bottom-Half Flavor</a></h3>
@@ -2472,6 +2836,94 @@ The tasks-RCU API is quite compact, consisting only of
<tt>synchronize_rcu_tasks()</tt>, and
<tt>rcu_barrier_tasks()</tt>.
+<h3><a name="Waiting for Multiple Grace Periods">
+Waiting for Multiple Grace Periods</a></h3>
+
+<p>
+Perhaps you have an RCU protected data structure that is accessed from
+RCU read-side critical sections, from softirq handlers, and from
+hardware interrupt handlers.
+That is three flavors of RCU, the normal flavor, the bottom-half flavor,
+and the sched flavor.
+How to wait for a compound grace period?
+
+<p>
+The best approach is usually to &ldquo;just say no!&rdquo; and
+insert <tt>rcu_read_lock()</tt> and <tt>rcu_read_unlock()</tt>
+around each RCU read-side critical section, regardless of what
+environment it happens to be in.
+But suppose that some of the RCU read-side critical sections are
+on extremely hot code paths, and that use of <tt>CONFIG_PREEMPT=n</tt>
+is not a viable option, so that <tt>rcu_read_lock()</tt> and
+<tt>rcu_read_unlock()</tt> are not free.
+What then?
+
+<p>
+You <i>could</i> wait on all three grace periods in succession, as follows:
+
+<blockquote>
+<pre>
+ 1 synchronize_rcu();
+ 2 synchronize_rcu_bh();
+ 3 synchronize_sched();
+</pre>
+</blockquote>
+
+<p>
+This works, but triples the update-side latency penalty.
+In cases where this is not acceptable, <tt>synchronize_rcu_mult()</tt>
+may be used to wait on all three flavors of grace period concurrently:
+
+<blockquote>
+<pre>
+ 1 synchronize_rcu_mult(call_rcu, call_rcu_bh, call_rcu_sched);
+</pre>
+</blockquote>
+
+<p>
+But what if it is necessary to also wait on SRCU?
+This can be done as follows:
+
+<blockquote>
+<pre>
+ 1 static void call_my_srcu(struct rcu_head *head,
+ 2 void (*func)(struct rcu_head *head))
+ 3 {
+ 4 call_srcu(&amp;my_srcu, head, func);
+ 5 }
+ 6
+ 7 synchronize_rcu_mult(call_rcu, call_rcu_bh, call_rcu_sched, call_my_srcu);
+</pre>
+</blockquote>
+
+<p>
+If you needed to wait on multiple different flavors of SRCU
+(but why???), you would need to create a wrapper function resembling
+<tt>call_my_srcu()</tt> for each SRCU flavor.
+
+<table>
+<tr><th>&nbsp;</th></tr>
+<tr><th align="left">Quick Quiz:</th></tr>
+<tr><td>
+ But what if I need to wait for multiple RCU flavors, but I also need
+ the grace periods to be expedited?
+</td></tr>
+<tr><th align="left">Answer:</th></tr>
+<tr><td bgcolor="#ffffff"><font color="ffffff">
+ If you are using expedited grace periods, there should be less penalty
+ for waiting on them in succession.
+ But if that is nevertheless a problem, you can use workqueues
+ or multiple kthreads to wait on the various expedited grace
+ periods concurrently.
+</font></td></tr>
+<tr><td>&nbsp;</td></tr>
+</table>
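
As a rough illustration of the workqueue suggestion in the answer above, the following hypothetical sketch waits for two expedited grace-period flavors concurrently by blocking in a pair of work items. The function names are invented, and the caller must be in a context that is allowed to block.

#include <linux/rcupdate.h>
#include <linux/workqueue.h>

static void exp_rcu_wait(struct work_struct *unused)
{
	synchronize_rcu_expedited();
}

static void exp_sched_wait(struct work_struct *unused)
{
	synchronize_sched_expedited();
}

static void wait_two_expedited_flavors(void)
{
	struct work_struct rcu_w, sched_w;

	INIT_WORK_ONSTACK(&rcu_w, exp_rcu_wait);
	INIT_WORK_ONSTACK(&sched_w, exp_sched_wait);
	schedule_work(&rcu_w);
	schedule_work(&sched_w);
	flush_work(&rcu_w);		/* Both expedited waits proceed in parallel. */
	flush_work(&sched_w);
	destroy_work_on_stack(&rcu_w);
	destroy_work_on_stack(&sched_w);
}

Flushing the two work items bounds the total wait to roughly the longer of the two expedited grace periods rather than their sum.
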
+
+<p>
+Again, it is usually better to adjust the RCU read-side critical sections
+to use a single flavor of RCU, but when this is not feasible, you can use
+<tt>synchronize_rcu_mult()</tt>.
+
<h2><a name="Possible Future Changes">Possible Future Changes</a></h2>
<p>
@@ -2569,329 +3021,4 @@ and is provided
under the terms of the Creative Commons Attribution-Share Alike 3.0
United States license.
-<h3><a name="Answers to Quick Quizzes">
-Answers to Quick Quizzes</a></h3>
-
-<a name="qq1answer"></a>
-<p><b>Quick Quiz 1</b>:
-Wait a minute!
-You said that updaters can make useful forward progress concurrently
-with readers, but pre-existing readers will block
-<tt>synchronize_rcu()</tt>!!!
-Just who are you trying to fool???
-
-
-</p><p><b>Answer</b>:
-First, if updaters do not wish to be blocked by readers, they can use
-<tt>call_rcu()</tt> or <tt>kfree_rcu()</tt>, which will
-be discussed later.
-Second, even when using <tt>synchronize_rcu()</tt>, the other
-update-side code does run concurrently with readers, whether pre-existing
-or not.
-
-
-</p><p><a href="#Quick%20Quiz%201"><b>Back to Quick Quiz 1</b>.</a>
-
-<a name="qq2answer"></a>
-<p><b>Quick Quiz 2</b>:
-Why is the <tt>synchronize_rcu()</tt> on line&nbsp;28 needed?
-
-
-</p><p><b>Answer</b>:
-Without that extra grace period, memory reordering could result in
-<tt>do_something_dlm()</tt> executing <tt>do_something()</tt>
-concurrently with the last bits of <tt>recovery()</tt>.
-
-
-</p><p><a href="#Quick%20Quiz%202"><b>Back to Quick Quiz 2</b>.</a>
-
-<a name="qq3answer"></a>
-<p><b>Quick Quiz 3</b>:
-But <tt>rcu_assign_pointer()</tt> does nothing to prevent the
-two assignments to <tt>p-&gt;a</tt> and <tt>p-&gt;b</tt>
-from being reordered.
-Can't that also cause problems?
-
-
-</p><p><b>Answer</b>:
-No, it cannot.
-The readers cannot see either of these two fields until
-the assignment to <tt>gp</tt>, by which time both fields are
-fully initialized.
-So reordering the assignments
-to <tt>p-&gt;a</tt> and <tt>p-&gt;b</tt> cannot possibly
-cause any problems.
-
-
-</p><p><a href="#Quick%20Quiz%203"><b>Back to Quick Quiz 3</b>.</a>
-
-<a name="qq4answer"></a>
-<p><b>Quick Quiz 4</b>:
-Without the <tt>rcu_dereference()</tt> or the
-<tt>rcu_access_pointer()</tt>, what destructive optimizations
-might the compiler make use of?
-
-
-</p><p><b>Answer</b>:
-Let's start with what happens to <tt>do_something_gp()</tt>
-if it fails to use <tt>rcu_dereference()</tt>.
-It could reuse a value formerly fetched from this same pointer.
-It could also fetch the pointer from <tt>gp</tt> in a byte-at-a-time
-manner, resulting in <i>load tearing</i>, in turn resulting a bytewise
-mash-up of two distince pointer values.
-It might even use value-speculation optimizations, where it makes a wrong
-guess, but by the time it gets around to checking the value, an update
-has changed the pointer to match the wrong guess.
-Too bad about any dereferences that returned pre-initialization garbage
-in the meantime!
-
-<p>
-For <tt>remove_gp_synchronous()</tt>, as long as all modifications
-to <tt>gp</tt> are carried out while holding <tt>gp_lock</tt>,
-the above optimizations are harmless.
-However,
-with <tt>CONFIG_SPARSE_RCU_POINTER=y</tt>,
-<tt>sparse</tt> will complain if you
-define <tt>gp</tt> with <tt>__rcu</tt> and then
-access it without using
-either <tt>rcu_access_pointer()</tt> or <tt>rcu_dereference()</tt>.
-
-
-</p><p><a href="#Quick%20Quiz%204"><b>Back to Quick Quiz 4</b>.</a>
-
-<a name="qq5answer"></a>
-<p><b>Quick Quiz 5</b>:
-Given that multiple CPUs can start RCU read-side critical sections
-at any time without any ordering whatsoever, how can RCU possibly tell whether
-or not a given RCU read-side critical section starts before a
-given instance of <tt>synchronize_rcu()</tt>?
-
-
-</p><p><b>Answer</b>:
-If RCU cannot tell whether or not a given
-RCU read-side critical section starts before a
-given instance of <tt>synchronize_rcu()</tt>,
-then it must assume that the RCU read-side critical section
-started first.
-In other words, a given instance of <tt>synchronize_rcu()</tt>
-can avoid waiting on a given RCU read-side critical section only
-if it can prove that <tt>synchronize_rcu()</tt> started first.
-
-
-</p><p><a href="#Quick%20Quiz%205"><b>Back to Quick Quiz 5</b>.</a>
-
-<a name="qq6answer"></a>
-<p><b>Quick Quiz 6</b>:
-The first and second guarantees require unbelievably strict ordering!
-Are all these memory barriers <i> really</i> required?
-
-
-</p><p><b>Answer</b>:
-Yes, they really are required.
-To see why the first guarantee is required, consider the following
-sequence of events:
-
-<ol>
-<li> CPU 1: <tt>rcu_read_lock()</tt>
-<li> CPU 1: <tt>q = rcu_dereference(gp);
- /* Very likely to return p. */</tt>
-<li> CPU 0: <tt>list_del_rcu(p);</tt>
-<li> CPU 0: <tt>synchronize_rcu()</tt> starts.
-<li> CPU 1: <tt>do_something_with(q-&gt;a);
- /* No smp_mb(), so might happen after kfree(). */</tt>
-<li> CPU 1: <tt>rcu_read_unlock()</tt>
-<li> CPU 0: <tt>synchronize_rcu()</tt> returns.
-<li> CPU 0: <tt>kfree(p);</tt>
-</ol>
-
-<p>
-Therefore, there absolutely must be a full memory barrier between the
-end of the RCU read-side critical section and the end of the
-grace period.
-
-<p>
-The sequence of events demonstrating the necessity of the second rule
-is roughly similar:
-
-<ol>
-<li> CPU 0: <tt>list_del_rcu(p);</tt>
-<li> CPU 0: <tt>synchronize_rcu()</tt> starts.
-<li> CPU 1: <tt>rcu_read_lock()</tt>
-<li> CPU 1: <tt>q = rcu_dereference(gp);
- /* Might return p if no memory barrier. */</tt>
-<li> CPU 0: <tt>synchronize_rcu()</tt> returns.
-<li> CPU 0: <tt>kfree(p);</tt>
-<li> CPU 1: <tt>do_something_with(q-&gt;a); /* Boom!!! */</tt>
-<li> CPU 1: <tt>rcu_read_unlock()</tt>
-</ol>
-
-<p>
-And similarly, without a memory barrier between the beginning of the
-grace period and the beginning of the RCU read-side critical section,
-CPU&nbsp;1 might end up accessing the freelist.
-
-<p>
-The &ldquo;as if&rdquo; rule of course applies, so that any implementation
-that acts as if the appropriate memory barriers were in place is a
-correct implementation.
-That said, it is much easier to fool yourself into believing that you have
-adhered to the as-if rule than it is to actually adhere to it!
-
-
-</p><p><a href="#Quick%20Quiz%206"><b>Back to Quick Quiz 6</b>.</a>
-
-<a name="qq7answer"></a>
-<p><b>Quick Quiz 7</b>:
-But how does the upgrade-to-write operation exclude other readers?
-
-
-</p><p><b>Answer</b>:
-It doesn't, just like normal RCU updates, which also do not exclude
-RCU readers.
-
-
-</p><p><a href="#Quick%20Quiz%207"><b>Back to Quick Quiz 7</b>.</a>
-
-<a name="qq8answer"></a>
-<p><b>Quick Quiz 8</b>:
-Can't the compiler also reorder this code?
-
-
-</p><p><b>Answer</b>:
-No, the volatile casts in <tt>READ_ONCE()</tt> and
-<tt>WRITE_ONCE()</tt> prevent the compiler from reordering in
-this particular case.
-
-
-</p><p><a href="#Quick%20Quiz%208"><b>Back to Quick Quiz 8</b>.</a>
-
-<a name="qq9answer"></a>
-<p><b>Quick Quiz 9</b>:
-Suppose that synchronize_rcu() did wait until all readers had completed.
-Would the updater be able to rely on this?
-
-
-</p><p><b>Answer</b>:
-No.
-Even if <tt>synchronize_rcu()</tt> were to wait until
-all readers had completed, a new reader might start immediately after
-<tt>synchronize_rcu()</tt> completed.
-Therefore, the code following
-<tt>synchronize_rcu()</tt> cannot rely on there being no readers
-in any case.
-
-
-</p><p><a href="#Quick%20Quiz%209"><b>Back to Quick Quiz 9</b>.</a>
-
-<a name="qq10answer"></a>
-<p><b>Quick Quiz 10</b>:
-How long a sequence of grace periods, each separated by an RCU read-side
-critical section, would be required to partition the RCU read-side
-critical sections at the beginning and end of the chain?
-
-
-</p><p><b>Answer</b>:
-In theory, an infinite number.
-In practice, an unknown number that is sensitive to both implementation
-details and timing considerations.
-Therefore, even in practice, RCU users must abide by the theoretical rather
-than the practical answer.
-
-
-</p><p><a href="#Quick%20Quiz%2010"><b>Back to Quick Quiz 10</b>.</a>
-
-<a name="qq11answer"></a>
-<p><b>Quick Quiz 11</b>:
-What about sleeping locks?
-
-
-</p><p><b>Answer</b>:
-These are forbidden within Linux-kernel RCU read-side critical sections
-because it is not legal to place a quiescent state (in this case,
-voluntary context switch) within an RCU read-side critical section.
-However, sleeping locks may be used within userspace RCU read-side critical
-sections, and also within Linux-kernel sleepable RCU
-<a href="#Sleepable RCU">(SRCU)</a>
-read-side critical sections.
-In addition, the -rt patchset turns spinlocks into a sleeping locks so
-that the corresponding critical sections can be preempted, which
-also means that these sleeplockified spinlocks (but not other sleeping locks!)
-may be acquire within -rt-Linux-kernel RCU read-side critical sections.
-
-<p>
-Note that it <i>is</i> legal for a normal RCU read-side critical section
-to conditionally acquire a sleeping locks (as in <tt>mutex_trylock()</tt>),
-but only as long as it does not loop indefinitely attempting to
-conditionally acquire that sleeping locks.
-The key point is that things like <tt>mutex_trylock()</tt>
-either return with the mutex held, or return an error indication if
-the mutex was not immediately available.
-Either way, <tt>mutex_trylock()</tt> returns immediately without sleeping.
-
-
-</p><p><a href="#Quick%20Quiz%2011"><b>Back to Quick Quiz 11</b>.</a>
-
-<a name="qq12answer"></a>
-<p><b>Quick Quiz 12</b>:
-Why does line&nbsp;19 use <tt>rcu_access_pointer()</tt>?
-After all, <tt>call_rcu()</tt> on line&nbsp;25 stores into the
-structure, which would interact badly with concurrent insertions.
-Doesn't this mean that <tt>rcu_dereference()</tt> is required?
-
-
-</p><p><b>Answer</b>:
-Presumably the <tt>-&gt;gp_lock</tt> acquired on line&nbsp;18 excludes
-any changes, including any insertions that <tt>rcu_dereference()</tt>
-would protect against.
-Therefore, any insertions will be delayed until after <tt>-&gt;gp_lock</tt>
-is released on line&nbsp;25, which in turn means that
-<tt>rcu_access_pointer()</tt> suffices.
-
-
-</p><p><a href="#Quick%20Quiz%2012"><b>Back to Quick Quiz 12</b>.</a>
-
-<a name="qq13answer"></a>
-<p><b>Quick Quiz 13</b>:
-Earlier it was claimed that <tt>call_rcu()</tt> and
-<tt>kfree_rcu()</tt> allowed updaters to avoid being blocked
-by readers.
-But how can that be correct, given that the invocation of the callback
-and the freeing of the memory (respectively) must still wait for
-a grace period to elapse?
-
-
-</p><p><b>Answer</b>:
-We could define things this way, but keep in mind that this sort of
-definition would say that updates in garbage-collected languages
-cannot complete until the next time the garbage collector runs,
-which does not seem at all reasonable.
-The key point is that in most cases, an updater using either
-<tt>call_rcu()</tt> or <tt>kfree_rcu()</tt> can proceed to the
-next update as soon as it has invoked <tt>call_rcu()</tt> or
-<tt>kfree_rcu()</tt>, without having to wait for a subsequent
-grace period.
-
-
-</p><p><a href="#Quick%20Quiz%2013"><b>Back to Quick Quiz 13</b>.</a>
-
-<a name="qq14answer"></a>
-<p><b>Quick Quiz 14</b>:
-So what happens with <tt>synchronize_rcu()</tt> during
-scheduler initialization for <tt>CONFIG_PREEMPT=n</tt>
-kernels?
-
-
-</p><p><b>Answer</b>:
-In <tt>CONFIG_PREEMPT=n</tt> kernel, <tt>synchronize_rcu()</tt>
-maps directly to <tt>synchronize_sched()</tt>.
-Therefore, <tt>synchronize_rcu()</tt> works normally throughout
-boot in <tt>CONFIG_PREEMPT=n</tt> kernels.
-However, your code must also work in <tt>CONFIG_PREEMPT=y</tt> kernels,
-so it is still necessary to avoid invoking <tt>synchronize_rcu()</tt>
-during scheduler initialization.
-
-
-</p><p><a href="#Quick%20Quiz%2014"><b>Back to Quick Quiz 14</b>.</a>
-
-
</body></html>
diff --git a/Documentation/RCU/Design/Requirements/Requirements.htmlx b/Documentation/RCU/Design/Requirements/Requirements.htmlx
deleted file mode 100644
index 3a97ba490c42..000000000000
--- a/Documentation/RCU/Design/Requirements/Requirements.htmlx
+++ /dev/null
@@ -1,2741 +0,0 @@
-<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
- "http://www.w3.org/TR/html4/loose.dtd">
- <html>
- <head><title>A Tour Through RCU's Requirements [LWN.net]</title>
- <meta HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=utf-8">
-
-<h1>A Tour Through RCU's Requirements</h1>
-
-<p>Copyright IBM Corporation, 2015</p>
-<p>Author: Paul E.&nbsp;McKenney</p>
-<p><i>The initial version of this document appeared in the
-<a href="https://lwn.net/">LWN</a> articles
-<a href="https://lwn.net/Articles/652156/">here</a>,
-<a href="https://lwn.net/Articles/652677/">here</a>, and
-<a href="https://lwn.net/Articles/653326/">here</a>.</i></p>
-
-<h2>Introduction</h2>
-
-<p>
-Read-copy update (RCU) is a synchronization mechanism that is often
-used as a replacement for reader-writer locking.
-RCU is unusual in that updaters do not block readers,
-which means that RCU's read-side primitives can be exceedingly fast
-and scalable.
-In addition, updaters can make useful forward progress concurrently
-with readers.
-However, all this concurrency between RCU readers and updaters does raise
-the question of exactly what RCU readers are doing, which in turn
-raises the question of exactly what RCU's requirements are.
-
-<p>
-This document therefore summarizes RCU's requirements, and can be thought
-of as an informal, high-level specification for RCU.
-It is important to understand that RCU's specification is primarily
-empirical in nature;
-in fact, I learned about many of these requirements the hard way.
-This situation might cause some consternation, however, not only
-has this learning process been a lot of fun, but it has also been
-a great privilege to work with so many people willing to apply
-technologies in interesting new ways.
-
-<p>
-All that aside, here are the categories of currently known RCU requirements:
-</p>
-
-<ol>
-<li> <a href="#Fundamental Requirements">
- Fundamental Requirements</a>
-<li> <a href="#Fundamental Non-Requirements">Fundamental Non-Requirements</a>
-<li> <a href="#Parallelism Facts of Life">
- Parallelism Facts of Life</a>
-<li> <a href="#Quality-of-Implementation Requirements">
- Quality-of-Implementation Requirements</a>
-<li> <a href="#Linux Kernel Complications">
- Linux Kernel Complications</a>
-<li> <a href="#Software-Engineering Requirements">
- Software-Engineering Requirements</a>
-<li> <a href="#Other RCU Flavors">
- Other RCU Flavors</a>
-<li> <a href="#Possible Future Changes">
- Possible Future Changes</a>
-</ol>
-
-<p>
-This is followed by a <a href="#Summary">summary</a>,
-which is in turn followed by the inevitable
-<a href="#Answers to Quick Quizzes">answers to the quick quizzes</a>.
-
-<h2><a name="Fundamental Requirements">Fundamental Requirements</a></h2>
-
-<p>
-RCU's fundamental requirements are the closest thing RCU has to hard
-mathematical requirements.
-These are:
-
-<ol>
-<li> <a href="#Grace-Period Guarantee">
- Grace-Period Guarantee</a>
-<li> <a href="#Publish-Subscribe Guarantee">
- Publish-Subscribe Guarantee</a>
-<li> <a href="#Memory-Barrier Guarantees">
- Memory-Barrier Guarantees</a>
-<li> <a href="#RCU Primitives Guaranteed to Execute Unconditionally">
- RCU Primitives Guaranteed to Execute Unconditionally</a>
-<li> <a href="#Guaranteed Read-to-Write Upgrade">
- Guaranteed Read-to-Write Upgrade</a>
-</ol>
-
-<h3><a name="Grace-Period Guarantee">Grace-Period Guarantee</a></h3>
-
-<p>
-RCU's grace-period guarantee is unusual in being premeditated:
-Jack Slingwine and I had this guarantee firmly in mind when we started
-work on RCU (then called &ldquo;rclock&rdquo;) in the early 1990s.
-That said, the past two decades of experience with RCU have produced
-a much more detailed understanding of this guarantee.
-
-<p>
-RCU's grace-period guarantee allows updaters to wait for the completion
-of all pre-existing RCU read-side critical sections.
-An RCU read-side critical section
-begins with the marker <tt>rcu_read_lock()</tt> and ends with
-the marker <tt>rcu_read_unlock()</tt>.
-These markers may be nested, and RCU treats a nested set as one
-big RCU read-side critical section.
-Production-quality implementations of <tt>rcu_read_lock()</tt> and
-<tt>rcu_read_unlock()</tt> are extremely lightweight, and in
-fact have exactly zero overhead in Linux kernels built for production
-use with <tt>CONFIG_PREEMPT=n</tt>.
-
-<p>
-This guarantee allows ordering to be enforced with extremely low
-overhead to readers, for example:
-
-<blockquote>
-<pre>
- 1 int x, y;
- 2
- 3 void thread0(void)
- 4 {
- 5 rcu_read_lock();
- 6 r1 = READ_ONCE(x);
- 7 r2 = READ_ONCE(y);
- 8 rcu_read_unlock();
- 9 }
-10
-11 void thread1(void)
-12 {
-13 WRITE_ONCE(x, 1);
-14 synchronize_rcu();
-15 WRITE_ONCE(y, 1);
-16 }
-</pre>
-</blockquote>
-
-<p>
-Because the <tt>synchronize_rcu()</tt> on line&nbsp;14 waits for
-all pre-existing readers, any instance of <tt>thread0()</tt> that
-loads a value of zero from <tt>x</tt> must complete before
-<tt>thread1()</tt> stores to <tt>y</tt>, so that instance must
-also load a value of zero from <tt>y</tt>.
-Similarly, any instance of <tt>thread0()</tt> that loads a value of
-one from <tt>y</tt> must have started after the
-<tt>synchronize_rcu()</tt> started, and must therefore also load
-a value of one from <tt>x</tt>.
-Therefore, the outcome:
-<blockquote>
-<pre>
-(r1 == 0 &amp;&amp; r2 == 1)
-</pre>
-</blockquote>
-cannot happen.
-
-<p>@@QQ@@
-Wait a minute!
-You said that updaters can make useful forward progress concurrently
-with readers, but pre-existing readers will block
-<tt>synchronize_rcu()</tt>!!!
-Just who are you trying to fool???
-<p>@@QQA@@
-First, if updaters do not wish to be blocked by readers, they can use
-<tt>call_rcu()</tt> or <tt>kfree_rcu()</tt>, which will
-be discussed later.
-Second, even when using <tt>synchronize_rcu()</tt>, the other
-update-side code does run concurrently with readers, whether pre-existing
-or not.
-<p>@@QQE@@
-
-<p>
-This scenario resembles one of the first uses of RCU in
-<a href="https://en.wikipedia.org/wiki/DYNIX">DYNIX/ptx</a>,
-which managed a distributed lock manager's transition into
-a state suitable for handling recovery from node failure,
-more or less as follows:
-
-<blockquote>
-<pre>
- 1 #define STATE_NORMAL 0
- 2 #define STATE_WANT_RECOVERY 1
- 3 #define STATE_RECOVERING 2
- 4 #define STATE_WANT_NORMAL 3
- 5
- 6 int state = STATE_NORMAL;
- 7
- 8 void do_something_dlm(void)
- 9 {
-10 int state_snap;
-11
-12 rcu_read_lock();
-13 state_snap = READ_ONCE(state);
-14 if (state_snap == STATE_NORMAL)
-15 do_something();
-16 else
-17 do_something_carefully();
-18 rcu_read_unlock();
-19 }
-20
-21 void start_recovery(void)
-22 {
-23 WRITE_ONCE(state, STATE_WANT_RECOVERY);
-24 synchronize_rcu();
-25 WRITE_ONCE(state, STATE_RECOVERING);
-26 recovery();
-27 WRITE_ONCE(state, STATE_WANT_NORMAL);
-28 synchronize_rcu();
-29 WRITE_ONCE(state, STATE_NORMAL);
-30 }
-</pre>
-</blockquote>
-
-<p>
-The RCU read-side critical section in <tt>do_something_dlm()</tt>
-works with the <tt>synchronize_rcu()</tt> in <tt>start_recovery()</tt>
-to guarantee that <tt>do_something()</tt> never runs concurrently
-with <tt>recovery()</tt>, but with little or no synchronization
-overhead in <tt>do_something_dlm()</tt>.
-
-<p>@@QQ@@
-Why is the <tt>synchronize_rcu()</tt> on line&nbsp;28 needed?
-<p>@@QQA@@
-Without that extra grace period, memory reordering could result in
-<tt>do_something_dlm()</tt> executing <tt>do_something()</tt>
-concurrently with the last bits of <tt>recovery()</tt>.
-<p>@@QQE@@
-
-<p>
-In order to avoid fatal problems such as deadlocks,
-an RCU read-side critical section must not contain calls to
-<tt>synchronize_rcu()</tt>.
-Similarly, an RCU read-side critical section must not
-contain anything that waits, directly or indirectly, on completion of
-an invocation of <tt>synchronize_rcu()</tt>.
-
-<p>
-Although RCU's grace-period guarantee is useful in and of itself, with
-<a href="https://lwn.net/Articles/573497/">quite a few use cases</a>,
-it would be good to be able to use RCU to coordinate read-side
-access to linked data structures.
-For this, the grace-period guarantee is not sufficient, as can
-be seen in function <tt>add_gp_buggy()</tt> below.
-We will look at the reader's code later, but in the meantime, just think of
-the reader as locklessly picking up the <tt>gp</tt> pointer,
-and, if the value loaded is non-<tt>NULL</tt>, locklessly accessing the
-<tt>-&gt;a</tt> and <tt>-&gt;b</tt> fields.
-
-<blockquote>
-<pre>
- 1 bool add_gp_buggy(int a, int b)
- 2 {
- 3 p = kmalloc(sizeof(*p), GFP_KERNEL);
- 4 if (!p)
- 5 return -ENOMEM;
- 6 spin_lock(&amp;gp_lock);
- 7 if (rcu_access_pointer(gp)) {
- 8 spin_unlock(&amp;gp_lock);
- 9 return false;
-10 }
-11 p-&gt;a = a;
-12 p-&gt;b = a;
-13 gp = p; /* ORDERING BUG */
-14 spin_unlock(&amp;gp_lock);
-15 return true;
-16 }
-</pre>
-</blockquote>
-
-<p>
-The problem is that both the compiler and weakly ordered CPUs are within
-their rights to reorder this code as follows:
-
-<blockquote>
-<pre>
- 1 bool add_gp_buggy_optimized(int a, int b)
- 2 {
- 3 p = kmalloc(sizeof(*p), GFP_KERNEL);
- 4 if (!p)
- 5 return -ENOMEM;
- 6 spin_lock(&amp;gp_lock);
- 7 if (rcu_access_pointer(gp)) {
- 8 spin_unlock(&amp;gp_lock);
- 9 return false;
-10 }
-<b>11 gp = p; /* ORDERING BUG */
-12 p-&gt;a = a;
-13 p-&gt;b = a;</b>
-14 spin_unlock(&amp;gp_lock);
-15 return true;
-16 }
-</pre>
-</blockquote>
-
-<p>
-If an RCU reader fetches <tt>gp</tt> just after
-<tt>add_gp_buggy_optimized</tt> executes line&nbsp;11,
-it will see garbage in the <tt>-&gt;a</tt> and <tt>-&gt;b</tt>
-fields.
-And this is but one of many ways in which compiler and hardware optimizations
-could cause trouble.
-Therefore, we clearly need some way to prevent the compiler and the CPU from
-reordering in this manner, which brings us to the publish-subscribe
-guarantee discussed in the next section.
-
-<h3><a name="Publish-Subscribe Guarantee">Publish/Subscribe Guarantee</a></h3>
-
-<p>
-RCU's publish-subscribe guarantee allows data to be inserted
-into a linked data structure without disrupting RCU readers.
-The updater uses <tt>rcu_assign_pointer()</tt> to insert the
-new data, and readers use <tt>rcu_dereference()</tt> to
-access data, whether new or old.
-The following shows an example of insertion:
-
-<blockquote>
-<pre>
- 1 bool add_gp(int a, int b)
- 2 {
- 3 p = kmalloc(sizeof(*p), GFP_KERNEL);
- 4 if (!p)
- 5 return -ENOMEM;
- 6 spin_lock(&amp;gp_lock);
- 7 if (rcu_access_pointer(gp)) {
- 8 spin_unlock(&amp;gp_lock);
- 9 return false;
-10 }
-11 p-&gt;a = a;
-12 p-&gt;b = a;
-13 rcu_assign_pointer(gp, p);
-14 spin_unlock(&amp;gp_lock);
-15 return true;
-16 }
-</pre>
-</blockquote>
-
-<p>
-The <tt>rcu_assign_pointer()</tt> on line&nbsp;13 is conceptually
-equivalent to a simple assignment statement, but also guarantees
-that its assignment will
-happen after the two assignments in lines&nbsp;11 and&nbsp;12,
-similar to the C11 <tt>memory_order_release</tt> store operation.
-It also prevents any number of &ldquo;interesting&rdquo; compiler
-optimizations, for example, the use of <tt>gp</tt> as a scratch
-location immediately preceding the assignment.
-
-<p>@@QQ@@
-But <tt>rcu_assign_pointer()</tt> does nothing to prevent the
-two assignments to <tt>p-&gt;a</tt> and <tt>p-&gt;b</tt>
-from being reordered.
-Can't that also cause problems?
-<p>@@QQA@@
-No, it cannot.
-The readers cannot see either of these two fields until
-the assignment to <tt>gp</tt>, by which time both fields are
-fully initialized.
-So reordering the assignments
-to <tt>p-&gt;a</tt> and <tt>p-&gt;b</tt> cannot possibly
-cause any problems.
-<p>@@QQE@@
-
-<p>
-It is tempting to assume that the reader need not do anything special
-to control its accesses to the RCU-protected data,
-as shown in <tt>do_something_gp_buggy()</tt> below:
-
-<blockquote>
-<pre>
- 1 bool do_something_gp_buggy(void)
- 2 {
- 3 rcu_read_lock();
- 4 p = gp; /* OPTIMIZATIONS GALORE!!! */
- 5 if (p) {
- 6 do_something(p-&gt;a, p-&gt;b);
- 7 rcu_read_unlock();
- 8 return true;
- 9 }
-10 rcu_read_unlock();
-11 return false;
-12 }
-</pre>
-</blockquote>
-
-<p>
-However, this temptation must be resisted because there are a
-surprisingly large number of ways that the compiler
-(to say nothing of
-<a href="https://h71000.www7.hp.com/wizard/wiz_2637.html">DEC Alpha CPUs</a>)
-can trip this code up.
-For but one example, if the compiler were short of registers, it
-might choose to refetch from <tt>gp</tt> rather than keeping
-a separate copy in <tt>p</tt> as follows:
-
-<blockquote>
-<pre>
- 1 bool do_something_gp_buggy_optimized(void)
- 2 {
- 3 rcu_read_lock();
- 4 if (gp) { /* OPTIMIZATIONS GALORE!!! */
-<b> 5 do_something(gp-&gt;a, gp-&gt;b);</b>
- 6 rcu_read_unlock();
- 7 return true;
- 8 }
- 9 rcu_read_unlock();
-10 return false;
-11 }
-</pre>
-</blockquote>
-
-<p>
-If this function ran concurrently with a series of updates that
-replaced the current structure with a new one,
-the fetches of <tt>gp-&gt;a</tt>
-and <tt>gp-&gt;b</tt> might well come from two different structures,
-which could cause serious confusion.
-To prevent this (and much else besides), <tt>do_something_gp()</tt> uses
-<tt>rcu_dereference()</tt> to fetch from <tt>gp</tt>:
-
-<blockquote>
-<pre>
- 1 bool do_something_gp(void)
- 2 {
- 3 rcu_read_lock();
- 4 p = rcu_dereference(gp);
- 5 if (p) {
- 6 do_something(p-&gt;a, p-&gt;b);
- 7 rcu_read_unlock();
- 8 return true;
- 9 }
-10 rcu_read_unlock();
-11 return false;
-12 }
-</pre>
-</blockquote>
-
-<p>
-The <tt>rcu_dereference()</tt> uses volatile casts and (for DEC Alpha)
-memory barriers in the Linux kernel.
-Should a
-<a href="http://www.rdrop.com/users/paulmck/RCU/consume.2015.07.13a.pdf">high-quality implementation of C11 <tt>memory_order_consume</tt> [PDF]</a>
-ever appear, then <tt>rcu_dereference()</tt> could be implemented
-as a <tt>memory_order_consume</tt> load.
-Regardless of the exact implementation, a pointer fetched by
-<tt>rcu_dereference()</tt> may not be used outside of the
-outermost RCU read-side critical section containing that
-<tt>rcu_dereference()</tt>, unless protection of
-the corresponding data element has been passed from RCU to some
-other synchronization mechanism, most commonly locking or
-<a href="https://www.kernel.org/doc/Documentation/RCU/rcuref.txt">reference counting</a>.
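-
-<p>
-For example, one common way of handing protection off from RCU to
-reference counting is sketched below.
-This is only a rough sketch: The <tt>-&gt;refcnt</tt> field and the
-<tt>foo_put()</tt> helper are hypothetical, and the (unshown) removal
-and freeing paths are assumed to cooperate, for example, by freeing
-only via <tt>call_rcu()</tt> once the reference count has dropped to zero:
-
-<blockquote>
-<pre>
- 1 rcu_read_lock();
- 2 p = rcu_dereference(gp);
- 3 if (p &amp;&amp; !atomic_inc_not_zero(&amp;p-&gt;refcnt))
- 4   p = NULL; /* Element is going away, so act as if we never saw it. */
- 5 rcu_read_unlock();
- 6 if (p) {
- 7   do_something(p-&gt;a, p-&gt;b); /* Legal: protection comes from the reference. */
- 8   foo_put(p); /* Hypothetical helper that drops the reference. */
- 9 }
-</pre>
-</blockquote>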
-
-<p>
-In short, updaters use <tt>rcu_assign_pointer()</tt> and readers
-use <tt>rcu_dereference()</tt>, and these two RCU API elements
-work together to ensure that readers have a consistent view of
-newly added data elements.
-
-<p>
-Of course, it is also necessary to remove elements from RCU-protected
-data structures, for example, using the following process:
-
-<ol>
-<li> Remove the data element from the enclosing structure.
-<li> Wait for all pre-existing RCU read-side critical sections
- to complete (because only pre-existing readers can possibly have
- a reference to the newly removed data element).
-<li> At this point, only the updater has a reference to the
- newly removed data element, so it can safely reclaim
- the data element, for example, by passing it to <tt>kfree()</tt>.
-</ol>
-
-This process is implemented by <tt>remove_gp_synchronous()</tt>:
-
-<blockquote>
-<pre>
- 1 bool remove_gp_synchronous(void)
- 2 {
- 3 struct foo *p;
- 4
- 5 spin_lock(&amp;gp_lock);
- 6 p = rcu_access_pointer(gp);
- 7 if (!p) {
- 8 spin_unlock(&amp;gp_lock);
- 9 return false;
-10 }
-11 rcu_assign_pointer(gp, NULL);
-12 spin_unlock(&amp;gp_lock);
-13 synchronize_rcu();
-14 kfree(p);
-15 return true;
-16 }
-</pre>
-</blockquote>
-
-<p>
-This function is straightforward, with line&nbsp;13 waiting for a grace
-period before line&nbsp;14 frees the old data element.
-This waiting ensures that readers will reach line&nbsp;7 of
-<tt>do_something_gp()</tt> before the data element referenced by
-<tt>p</tt> is freed.
-The <tt>rcu_access_pointer()</tt> on line&nbsp;6 is similar to
-<tt>rcu_dereference()</tt>, except that:
-
-<ol>
-<li> The value returned by <tt>rcu_access_pointer()</tt>
- cannot be dereferenced.
- If you want to access the value pointed to as well as
- the pointer itself, use <tt>rcu_dereference()</tt>
- instead of <tt>rcu_access_pointer()</tt>.
-<li> The call to <tt>rcu_access_pointer()</tt> need not be
- protected.
- In contrast, <tt>rcu_dereference()</tt> must either be
- within an RCU read-side critical section or in a code
- segment where the pointer cannot change, for example, in
- code protected by the corresponding update-side lock.
-</ol>
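-
-<p>
-Both of these points are illustrated by the following sketch, in which
-the <tt>create_foo()</tt> allocator is hypothetical:
-
-<blockquote>
-<pre>
- 1 /* Reader: dereferences the pointer, so rcu_dereference() is required. */
- 2 rcu_read_lock();
- 3 p = rcu_dereference(gp);
- 4 if (p)
- 5   do_something(p-&gt;a, p-&gt;b);
- 6 rcu_read_unlock();
- 7
- 8 /* Updater: only tests for NULL, so rcu_access_pointer() suffices. */
- 9 spin_lock(&amp;gp_lock);
-10 if (!rcu_access_pointer(gp))
-11   rcu_assign_pointer(gp, create_foo()); /* Hypothetical allocator. */
-12 spin_unlock(&amp;gp_lock);
-</pre>
-</blockquote>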
-
-<p>@@QQ@@
-Without the <tt>rcu_dereference()</tt> or the
-<tt>rcu_access_pointer()</tt>, what destructive optimizations
-might the compiler make use of?
-<p>@@QQA@@
-Let's start with what happens to <tt>do_something_gp()</tt>
-if it fails to use <tt>rcu_dereference()</tt>.
-It could reuse a value formerly fetched from this same pointer.
-It could also fetch the pointer from <tt>gp</tt> in a byte-at-a-time
-manner, resulting in <i>load tearing</i>, in turn resulting in a bytewise
-mash-up of two distinct pointer values.
-It might even use value-speculation optimizations, where it makes a wrong
-guess, but by the time it gets around to checking the value, an update
-has changed the pointer to match the wrong guess.
-Too bad about any dereferences that returned pre-initialization garbage
-in the meantime!
-
-<p>
-For <tt>remove_gp_synchronous()</tt>, as long as all modifications
-to <tt>gp</tt> are carried out while holding <tt>gp_lock</tt>,
-the above optimizations are harmless.
-However,
-with <tt>CONFIG_SPARSE_RCU_POINTER=y</tt>,
-<tt>sparse</tt> will complain if you
-define <tt>gp</tt> with <tt>__rcu</tt> and then
-access it without using
-either <tt>rcu_access_pointer()</tt> or <tt>rcu_dereference()</tt>.
-<p>@@QQE@@
-
-<p>
-In short, RCU's publish-subscribe guarantee is provided by the combination
-of <tt>rcu_assign_pointer()</tt> and <tt>rcu_dereference()</tt>.
-This guarantee allows data elements to be safely added to RCU-protected
-linked data structures without disrupting RCU readers.
-This guarantee can be used in combination with the grace-period
-guarantee to also allow data elements to be removed from RCU-protected
-linked data structures, again without disrupting RCU readers.
-
-<p>
-This guarantee was only partially premeditated.
-DYNIX/ptx used an explicit memory barrier for publication, but had nothing
-resembling <tt>rcu_dereference()</tt> for subscription, nor did it
-have anything resembling the <tt>smp_read_barrier_depends()</tt>
-that was later subsumed into <tt>rcu_dereference()</tt>.
-The need for these operations made itself known quite suddenly at a
-late-1990s meeting with the DEC Alpha architects, back in the days when
-DEC was still a free-standing company.
-It took the Alpha architects a good hour to convince me that any sort
-of barrier would ever be needed, and it then took me a good <i>two</i> hours
-to convince them that their documentation did not make this point clear.
-More recent work with the C and C++ standards committees has provided
-much education on tricks and traps from the compiler.
-In short, compilers were much less tricky in the early 1990s, but in
-2015, don't even think about omitting <tt>rcu_dereference()</tt>!
-
-<h3><a name="Memory-Barrier Guarantees">Memory-Barrier Guarantees</a></h3>
-
-<p>
-The previous section's simple linked-data-structure scenario clearly
-demonstrates the need for RCU's stringent memory-ordering guarantees on
-systems with more than one CPU:
-
-<ol>
-<li> Each CPU that has an RCU read-side critical section that
- begins before <tt>synchronize_rcu()</tt> starts is
- guaranteed to execute a full memory barrier between the time
- that the RCU read-side critical section ends and the time that
- <tt>synchronize_rcu()</tt> returns.
- Without this guarantee, a pre-existing RCU read-side critical section
- might hold a reference to the newly removed <tt>struct foo</tt>
- after the <tt>kfree()</tt> on line&nbsp;14 of
- <tt>remove_gp_synchronous()</tt>.
-<li> Each CPU that has an RCU read-side critical section that ends
- after <tt>synchronize_rcu()</tt> returns is guaranteed
- to execute a full memory barrier between the time that
- <tt>synchronize_rcu()</tt> begins and the time that the RCU
- read-side critical section begins.
- Without this guarantee, a later RCU read-side critical section
- running after the <tt>kfree()</tt> on line&nbsp;14 of
- <tt>remove_gp_synchronous()</tt> might
- later run <tt>do_something_gp()</tt> and find the
- newly deleted <tt>struct foo</tt>.
-<li> If the task invoking <tt>synchronize_rcu()</tt> remains
- on a given CPU, then that CPU is guaranteed to execute a full
- memory barrier sometime during the execution of
- <tt>synchronize_rcu()</tt>.
- This guarantee ensures that the <tt>kfree()</tt> on
- line&nbsp;14 of <tt>remove_gp_synchronous()</tt> really does
- execute after the removal on line&nbsp;11.
-<li> If the task invoking <tt>synchronize_rcu()</tt> migrates
- among a group of CPUs during that invocation, then each of the
- CPUs in that group is guaranteed to execute a full memory barrier
- sometime during the execution of <tt>synchronize_rcu()</tt>.
- This guarantee also ensures that the <tt>kfree()</tt> on
- line&nbsp;14 of <tt>remove_gp_synchronous()</tt> really does
- execute after the removal on
- line&nbsp;11, but also in the case where the thread executing the
- <tt>synchronize_rcu()</tt> migrates in the meantime.
-</ol>
-
-<p>@@QQ@@
-Given that multiple CPUs can start RCU read-side critical sections
-at any time without any ordering whatsoever, how can RCU possibly tell whether
-or not a given RCU read-side critical section starts before a
-given instance of <tt>synchronize_rcu()</tt>?
-<p>@@QQA@@
-If RCU cannot tell whether or not a given
-RCU read-side critical section starts before a
-given instance of <tt>synchronize_rcu()</tt>,
-then it must assume that the RCU read-side critical section
-started first.
-In other words, a given instance of <tt>synchronize_rcu()</tt>
-can avoid waiting on a given RCU read-side critical section only
-if it can prove that <tt>synchronize_rcu()</tt> started first.
-<p>@@QQE@@
-
-<p>@@QQ@@
-The first and second guarantees require unbelievably strict ordering!
-Are all these memory barriers <i>really</i> required?
-<p>@@QQA@@
-Yes, they really are required.
-To see why the first guarantee is required, consider the following
-sequence of events:
-
-<ol>
-<li> CPU 1: <tt>rcu_read_lock()</tt>
-<li> CPU 1: <tt>q = rcu_dereference(gp);
- /* Very likely to return p. */</tt>
-<li> CPU 0: <tt>list_del_rcu(p);</tt>
-<li> CPU 0: <tt>synchronize_rcu()</tt> starts.
-<li> CPU 1: <tt>do_something_with(q-&gt;a);
- /* No smp_mb(), so might happen after kfree(). */</tt>
-<li> CPU 1: <tt>rcu_read_unlock()</tt>
-<li> CPU 0: <tt>synchronize_rcu()</tt> returns.
-<li> CPU 0: <tt>kfree(p);</tt>
-</ol>
-
-<p>
-Therefore, there absolutely must be a full memory barrier between the
-end of the RCU read-side critical section and the end of the
-grace period.
-
-<p>
-The sequence of events demonstrating the necessity of the second rule
-is roughly similar:
-
-<ol>
-<li> CPU 0: <tt>list_del_rcu(p);</tt>
-<li> CPU 0: <tt>synchronize_rcu()</tt> starts.
-<li> CPU 1: <tt>rcu_read_lock()</tt>
-<li> CPU 1: <tt>q = rcu_dereference(gp);
- /* Might return p if no memory barrier. */</tt>
-<li> CPU 0: <tt>synchronize_rcu()</tt> returns.
-<li> CPU 0: <tt>kfree(p);</tt>
-<li> CPU 1: <tt>do_something_with(q-&gt;a); /* Boom!!! */</tt>
-<li> CPU 1: <tt>rcu_read_unlock()</tt>
-</ol>
-
-<p>
-And similarly, without a memory barrier between the beginning of the
-grace period and the beginning of the RCU read-side critical section,
-CPU&nbsp;1 might end up accessing the freelist.
-
-<p>
-The &ldquo;as if&rdquo; rule of course applies, so that any implementation
-that acts as if the appropriate memory barriers were in place is a
-correct implementation.
-That said, it is much easier to fool yourself into believing that you have
-adhered to the as-if rule than it is to actually adhere to it!
-<p>@@QQE@@
-
-<p>
-Note that these memory-barrier requirements do not replace the fundamental
-RCU requirement that a grace period wait for all pre-existing readers.
-On the contrary, the memory barriers called out in this section must operate in
-such a way as to <i>enforce</i> this fundamental requirement.
-Of course, different implementations enforce this requirement in different
-ways, but enforce it they must.
-
-<h3><a name="RCU Primitives Guaranteed to Execute Unconditionally">RCU Primitives Guaranteed to Execute Unconditionally</a></h3>
-
-<p>
-The common-case RCU primitives are unconditional.
-They are invoked, they do their job, and they return, with no possibility
-of error, and no need to retry.
-This is a key RCU design philosophy.
-
-<p>
-However, this philosophy is pragmatic rather than pigheaded.
-If someone comes up with a good justification for a particular conditional
-RCU primitive, it might well be implemented and added.
-After all, this guarantee was reverse-engineered, not premeditated.
-The unconditional nature of the RCU primitives was initially an
-accident of implementation, and later experience with synchronization
-primitives offering conditional operations caused me to elevate this
-accident to a guarantee.
-Therefore, the justification for adding a conditional primitive to
-RCU would need to be based on detailed and compelling use cases.
-
-<h3><a name="Guaranteed Read-to-Write Upgrade">Guaranteed Read-to-Write Upgrade</a></h3>
-
-<p>
-As far as RCU is concerned, it is always possible to carry out an
-update within an RCU read-side critical section.
-For example, that RCU read-side critical section might search for
-a given data element, and then might acquire the update-side
-spinlock in order to update that element, all while remaining
-in that RCU read-side critical section.
-Of course, it is necessary to exit the RCU read-side critical section
-before invoking <tt>synchronize_rcu()</tt>, however, this
-inconvenience can be avoided through use of the
-<tt>call_rcu()</tt> and <tt>kfree_rcu()</tt> API members
-described later in this document.
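-
-<p>
-For example, something like the following sketch, which reuses
-<tt>gp</tt>, <tt>gp_lock</tt>, and <tt>struct foo</tt> from the earlier
-examples, and in which the <tt>update_a()</tt> name and the recheck of
-<tt>gp</tt> are purely illustrative:
-
-<blockquote>
-<pre>
- 1 bool update_a(int new_a)
- 2 {
- 3   struct foo *p;
- 4   bool ret = false;
- 5
- 6   rcu_read_lock();
- 7   p = rcu_dereference(gp);
- 8   if (p) {
- 9     spin_lock(&amp;gp_lock); /* Read-to-write upgrade. */
-10     if (p == rcu_access_pointer(gp)) { /* Still the current element? */
-11       WRITE_ONCE(p-&gt;a, new_a); /* Readers may access -&gt;a concurrently. */
-12       ret = true;
-13     }
-14     spin_unlock(&amp;gp_lock);
-15   }
-16   rcu_read_unlock();
-17   return ret;
-18 }
-</pre>
-</blockquote>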
-
-<p>@@QQ@@
-But how does the upgrade-to-write operation exclude other readers?
-<p>@@QQA@@
-It doesn't, just like normal RCU updates, which also do not exclude
-RCU readers.
-<p>@@QQE@@
-
-<p>
-This guarantee allows lookup code to be shared between read-side
-and update-side code, and was premeditated, appearing in the earliest
-DYNIX/ptx RCU documentation.
-
-<h2><a name="Fundamental Non-Requirements">Fundamental Non-Requirements</a></h2>
-
-<p>
-RCU provides extremely lightweight readers, and its read-side guarantees,
-though quite useful, are correspondingly lightweight.
-It is therefore all too easy to assume that RCU is guaranteeing more
-than it really is.
-Of course, the list of things that RCU does not guarantee is infinitely
-long, however, the following sections list a few non-guarantees that
-have caused confusion.
-Except where otherwise noted, these non-guarantees were premeditated.
-
-<ol>
-<li> <a href="#Readers Impose Minimal Ordering">
- Readers Impose Minimal Ordering</a>
-<li> <a href="#Readers Do Not Exclude Updaters">
- Readers Do Not Exclude Updaters</a>
-<li> <a href="#Updaters Only Wait For Old Readers">
- Updaters Only Wait For Old Readers</a>
-<li> <a href="#Grace Periods Don't Partition Read-Side Critical Sections">
- Grace Periods Don't Partition Read-Side Critical Sections</a>
-<li> <a href="#Read-Side Critical Sections Don't Partition Grace Periods">
- Read-Side Critical Sections Don't Partition Grace Periods</a>
-<li> <a href="#Disabling Preemption Does Not Block Grace Periods">
- Disabling Preemption Does Not Block Grace Periods</a>
-</ol>
-
-<h3><a name="Readers Impose Minimal Ordering">Readers Impose Minimal Ordering</a></h3>
-
-<p>
-Reader-side markers such as <tt>rcu_read_lock()</tt> and
-<tt>rcu_read_unlock()</tt> provide absolutely no ordering guarantees
-except through their interaction with the grace-period APIs such as
-<tt>synchronize_rcu()</tt>.
-To see this, consider the following pair of threads:
-
-<blockquote>
-<pre>
- 1 void thread0(void)
- 2 {
- 3 rcu_read_lock();
- 4 WRITE_ONCE(x, 1);
- 5 rcu_read_unlock();
- 6 rcu_read_lock();
- 7 WRITE_ONCE(y, 1);
- 8 rcu_read_unlock();
- 9 }
-10
-11 void thread1(void)
-12 {
-13 rcu_read_lock();
-14 r1 = READ_ONCE(y);
-15 rcu_read_unlock();
-16 rcu_read_lock();
-17 r2 = READ_ONCE(x);
-18 rcu_read_unlock();
-19 }
-</pre>
-</blockquote>
-
-<p>
-After <tt>thread0()</tt> and <tt>thread1()</tt> execute
-concurrently, it is quite possible to have
-
-<blockquote>
-<pre>
-(r1 == 1 &amp;&amp; r2 == 0)
-</pre>
-</blockquote>
-
-(that is, <tt>y</tt> appears to have been assigned before <tt>x</tt>),
-which would not be possible if <tt>rcu_read_lock()</tt> and
-<tt>rcu_read_unlock()</tt> had much in the way of ordering
-properties.
-But they do not, so the CPU is within its rights
-to do significant reordering.
-This is by design: Any significant ordering constraints would slow down
-these fast-path APIs.
-
-<p>@@QQ@@
-Can't the compiler also reorder this code?
-<p>@@QQA@@
-No, the volatile casts in <tt>READ_ONCE()</tt> and
-<tt>WRITE_ONCE()</tt> prevent the compiler from reordering in
-this particular case.
-<p>@@QQE@@
-
-<h3><a name="Readers Do Not Exclude Updaters">Readers Do Not Exclude Updaters</a></h3>
-
-<p>
-Neither <tt>rcu_read_lock()</tt> nor <tt>rcu_read_unlock()</tt>
-exclude updates.
-All they do is to prevent grace periods from ending.
-The following example illustrates this:
-
-<blockquote>
-<pre>
- 1 void thread0(void)
- 2 {
- 3 rcu_read_lock();
- 4 r1 = READ_ONCE(y);
- 5 if (r1) {
- 6 do_something_with_nonzero_x();
- 7 r2 = READ_ONCE(x);
- 8 WARN_ON(!r2); /* BUG!!! */
- 9 }
-10 rcu_read_unlock();
-11 }
-12
-13 void thread1(void)
-14 {
-15 spin_lock(&amp;my_lock);
-16 WRITE_ONCE(x, 1);
-17 WRITE_ONCE(y, 1);
-18 spin_unlock(&amp;my_lock);
-19 }
-</pre>
-</blockquote>
-
-<p>
-If the <tt>thread0()</tt> function's <tt>rcu_read_lock()</tt>
-excluded the <tt>thread1()</tt> function's update,
-the <tt>WARN_ON()</tt> could never fire.
-But the fact is that <tt>rcu_read_lock()</tt> does not exclude
-much of anything aside from subsequent grace periods, of which
-<tt>thread1()</tt> has none, so the
-<tt>WARN_ON()</tt> can and does fire.
-
-<h3><a name="Updaters Only Wait For Old Readers">Updaters Only Wait For Old Readers</a></h3>
-
-<p>
-It might be tempting to assume that after <tt>synchronize_rcu()</tt>
-completes, there are no readers executing.
-This temptation must be avoided because
-new readers can start immediately after <tt>synchronize_rcu()</tt>
-starts, and <tt>synchronize_rcu()</tt> is under no
-obligation to wait for these new readers.
-
-<p>@@QQ@@
-Suppose that synchronize_rcu() did wait until all readers had completed.
-Would the updater be able to rely on this?
-<p>@@QQA@@
-No.
-Even if <tt>synchronize_rcu()</tt> were to wait until
-all readers had completed, a new reader might start immediately after
-<tt>synchronize_rcu()</tt> completed.
-Therefore, the code following
-<tt>synchronize_rcu()</tt> cannot rely on there being no readers
-in any case.
-<p>@@QQE@@
-
-<h3><a name="Grace Periods Don't Partition Read-Side Critical Sections">
-Grace Periods Don't Partition Read-Side Critical Sections</a></h3>
-
-<p>
-It is tempting to assume that if any part of one RCU read-side critical
-section precedes a given grace period, and if any part of another RCU
-read-side critical section follows that same grace period, then all of
-the first RCU read-side critical section must precede all of the second.
-However, this just isn't the case: A single grace period does not
-partition the set of RCU read-side critical sections.
-An example of this situation can be illustrated as follows, where
-<tt>a</tt>, <tt>b</tt>, and <tt>c</tt> are initially all zero:
-
-<blockquote>
-<pre>
- 1 void thread0(void)
- 2 {
- 3 rcu_read_lock();
- 4 WRITE_ONCE(a, 1);
- 5 WRITE_ONCE(b, 1);
- 6 rcu_read_unlock();
- 7 }
- 8
- 9 void thread1(void)
-10 {
-11 r1 = READ_ONCE(a);
-12 synchronize_rcu();
-13 WRITE_ONCE(c, 1);
-14 }
-15
-16 void thread2(void)
-17 {
-18 rcu_read_lock();
-19 r2 = READ_ONCE(b);
-20 r3 = READ_ONCE(c);
-21 rcu_read_unlock();
-22 }
-</pre>
-</blockquote>
-
-<p>
-It turns out that the outcome:
-
-<blockquote>
-<pre>
-(r1 == 1 &amp;&amp; r2 == 0 &amp;&amp; r3 == 1)
-</pre>
-</blockquote>
-
-is entirely possible.
-The following figure shows how this can happen, with each circled
-<tt>QS</tt> indicating the point at which RCU recorded a
-<i>quiescent state</i> for each thread, that is, a state in which
-RCU knows that the thread cannot be in the midst of an RCU read-side
-critical section that started before the current grace period:
-
-<p><img src="GPpartitionReaders1.svg" alt="GPpartitionReaders1.svg" width="60%"></p>
-
-<p>
-If it is necessary to partition RCU read-side critical sections in this
-manner, it is necessary to use two grace periods, where the first
-grace period is known to end before the second grace period starts:
-
-<blockquote>
-<pre>
- 1 void thread0(void)
- 2 {
- 3 rcu_read_lock();
- 4 WRITE_ONCE(a, 1);
- 5 WRITE_ONCE(b, 1);
- 6 rcu_read_unlock();
- 7 }
- 8
- 9 void thread1(void)
-10 {
-11 r1 = READ_ONCE(a);
-12 synchronize_rcu();
-13 WRITE_ONCE(c, 1);
-14 }
-15
-16 void thread2(void)
-17 {
-18 r2 = READ_ONCE(c);
-19 synchronize_rcu();
-20 WRITE_ONCE(d, 1);
-21 }
-22
-23 void thread3(void)
-24 {
-25 rcu_read_lock();
-26 r3 = READ_ONCE(b);
-27 r4 = READ_ONCE(d);
-28 rcu_read_unlock();
-29 }
-</pre>
-</blockquote>
-
-<p>
-Here, if <tt>(r1 == 1)</tt>, then
-<tt>thread0()</tt>'s write to <tt>b</tt> must happen
-before the end of <tt>thread1()</tt>'s grace period.
-If in addition <tt>(r4 == 1)</tt>, then
-<tt>thread3()</tt>'s read from <tt>b</tt> must happen
-after the beginning of <tt>thread2()</tt>'s grace period.
-If it is also the case that <tt>(r2 == 1)</tt>, then the
-end of <tt>thread1()</tt>'s grace period must precede the
-beginning of <tt>thread2()</tt>'s grace period.
-This means that the two RCU read-side critical sections cannot overlap,
-guaranteeing that <tt>(r3 == 1)</tt>.
-As a result, the outcome:
-
-<blockquote>
-<pre>
-(r1 == 1 &amp;&amp; r2 == 1 &amp;&amp; r3 == 0 &amp;&amp; r4 == 1)
-</pre>
-</blockquote>
-
-cannot happen.
-
-<p>
-This non-requirement was also non-premeditated, but became apparent
-when studying RCU's interaction with memory ordering.
-
-<h3><a name="Read-Side Critical Sections Don't Partition Grace Periods">
-Read-Side Critical Sections Don't Partition Grace Periods</a></h3>
-
-<p>
-It is also tempting to assume that if an RCU read-side critical section
-happens between a pair of grace periods, then those grace periods cannot
-overlap.
-However, this temptation leads nowhere good, as can be illustrated by
-the following, with all variables initially zero:
-
-<blockquote>
-<pre>
- 1 void thread0(void)
- 2 {
- 3 rcu_read_lock();
- 4 WRITE_ONCE(a, 1);
- 5 WRITE_ONCE(b, 1);
- 6 rcu_read_unlock();
- 7 }
- 8
- 9 void thread1(void)
-10 {
-11 r1 = READ_ONCE(a);
-12 synchronize_rcu();
-13 WRITE_ONCE(c, 1);
-14 }
-15
-16 void thread2(void)
-17 {
-18 rcu_read_lock();
-19 WRITE_ONCE(d, 1);
-20 r2 = READ_ONCE(c);
-21 rcu_read_unlock();
-22 }
-23
-24 void thread3(void)
-25 {
-26 r3 = READ_ONCE(d);
-27 synchronize_rcu();
-28 WRITE_ONCE(e, 1);
-29 }
-30
-31 void thread4(void)
-32 {
-33 rcu_read_lock();
-34 r4 = READ_ONCE(b);
-35 r5 = READ_ONCE(e);
-36 rcu_read_unlock();
-37 }
-</pre>
-</blockquote>
-
-<p>
-In this case, the outcome:
-
-<blockquote>
-<pre>
-(r1 == 1 &amp;&amp; r2 == 1 &amp;&amp; r3 == 1 &amp;&amp; r4 == 0 &amp;&amp; r5 == 1)
-</pre>
-</blockquote>
-
-is entirely possible, as illustrated below:
-
-<p><img src="ReadersPartitionGP1.svg" alt="ReadersPartitionGP1.svg" width="100%"></p>
-
-<p>
-Again, an RCU read-side critical section can overlap almost all of a
-given grace period, just so long as it does not overlap the entire
-grace period.
-As a result, an RCU read-side critical section cannot partition a pair
-of RCU grace periods.
-
-<p>@@QQ@@
-How long would a sequence of grace periods, each separated by an RCU
-read-side critical section, need to be in order to partition the RCU
-read-side critical sections at the beginning and end of the chain?
-<p>@@QQA@@
-In theory, an infinite number.
-In practice, an unknown number that is sensitive to both implementation
-details and timing considerations.
-Therefore, even in practice, RCU users must abide by the theoretical rather
-than the practical answer.
-<p>@@QQE@@
-
-<h3><a name="Disabling Preemption Does Not Block Grace Periods">
-Disabling Preemption Does Not Block Grace Periods</a></h3>
-
-<p>
-There was a time when disabling preemption on any given CPU would block
-subsequent grace periods.
-However, this was an accident of implementation and is not a requirement.
-And in the current Linux-kernel implementation, disabling preemption
-on a given CPU in fact does not block grace periods, as Oleg Nesterov
-<a href="https://lkml.kernel.org/g/20150614193825.GA19582@redhat.com">demonstrated</a>.
-
-<p>
-If you need a preempt-disable region to block grace periods, you need to add
-<tt>rcu_read_lock()</tt> and <tt>rcu_read_unlock()</tt>, for example
-as follows:
-
-<blockquote>
-<pre>
- 1 preempt_disable();
- 2 rcu_read_lock();
- 3 do_something();
- 4 rcu_read_unlock();
- 5 preempt_enable();
- 6
- 7 /* Spinlocks implicitly disable preemption. */
- 8 spin_lock(&amp;mylock);
- 9 rcu_read_lock();
-10 do_something();
-11 rcu_read_unlock();
-12 spin_unlock(&amp;mylock);
-</pre>
-</blockquote>
-
-<p>
-In theory, you could enter the RCU read-side critical section first,
-but it is more efficient to keep the entire RCU read-side critical
-section contained in the preempt-disable region as shown above.
-Of course, RCU read-side critical sections that extend outside of
-preempt-disable regions will work correctly, but such critical sections
-can be preempted, which forces <tt>rcu_read_unlock()</tt> to do
-more work.
-And no, this is <i>not</i> an invitation to enclose all of your RCU
-read-side critical sections within preempt-disable regions, because
-doing so would degrade real-time response.
-
-<p>
-This non-requirement appeared with preemptible RCU.
-If you need a grace period that waits on non-preemptible code regions, use
-<a href="#Sched Flavor">RCU-sched</a>.
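-
-<p>
-For example, an updater might use something like the following sketch,
-in which the reader relies solely on <tt>preempt_disable()</tt> and the
-updater therefore uses <tt>synchronize_sched()</tt>.
-The variables are those of the earlier examples, and the details of
-RCU-sched are covered later in this document:
-
-<blockquote>
-<pre>
- 1 /* Reader relying solely on disabled preemption. */
- 2 preempt_disable();
- 3 p = rcu_dereference_sched(gp);
- 4 if (p)
- 5   do_something(p-&gt;a, p-&gt;b);
- 6 preempt_enable();
- 7
- 8 /* Updater using RCU-sched to wait for such readers. */
- 9 spin_lock(&amp;gp_lock);
-10 old_p = rcu_dereference_protected(gp, lockdep_is_held(&amp;gp_lock));
-11 rcu_assign_pointer(gp, NULL);
-12 spin_unlock(&amp;gp_lock);
-13 synchronize_sched(); /* Waits for pre-existing preempt-disable regions. */
-14 kfree(old_p);
-</pre>
-</blockquote>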
-
-<h2><a name="Parallelism Facts of Life">Parallelism Facts of Life</a></h2>
-
-<p>
-These parallelism facts of life are by no means specific to RCU, but
-the RCU implementation must abide by them.
-They therefore bear repeating:
-
-<ol>
-<li> Any CPU or task may be delayed at any time,
- and any attempts to avoid these delays by disabling
- preemption, interrupts, or whatever are completely futile.
- This is most obvious in preemptible user-level
- environments and in virtualized environments (where
- a given guest OS's VCPUs can be preempted at any time by
- the underlying hypervisor), but can also happen in bare-metal
- environments due to ECC errors, NMIs, and other hardware
- events.
- Although a delay of more than about 20 seconds can result
- in splats, the RCU implementation is obligated to use
- algorithms that can tolerate extremely long delays, but where
- &ldquo;extremely long&rdquo; is not long enough to allow
- wrap-around when incrementing a 64-bit counter.
-<li> Both the compiler and the CPU can reorder memory accesses.
- Where it matters, RCU must use compiler directives and
- memory-barrier instructions to preserve ordering.
-<li> Conflicting writes to memory locations in any given cache line
- will result in expensive cache misses.
- Greater numbers of concurrent writes and more-frequent
- concurrent writes will result in more dramatic slowdowns.
- RCU is therefore obligated to use algorithms that have
- sufficient locality to avoid significant performance and
- scalability problems.
-<li> As a rough rule of thumb, only one CPU's worth of processing
- may be carried out under the protection of any given exclusive
- lock.
- RCU must therefore use scalable locking designs.
-<li> Counters are finite, especially on 32-bit systems.
- RCU's use of counters must therefore tolerate counter wrap,
- or be designed such that counter wrap would take way more
- time than a single system is likely to run.
- An uptime of ten years is quite possible, a runtime
- of a century much less so.
- As an example of the latter, RCU's dyntick-idle nesting counter
- allows 54 bits for interrupt nesting level (this counter
- is 64 bits even on a 32-bit system).
- Overflowing this counter requires 2<sup>54</sup>
- half-interrupts on a given CPU without that CPU ever going idle.
- If a half-interrupt happened every microsecond, it would take
- 570 years of runtime to overflow this counter, which is currently
- believed to be an acceptably long time.
-<li> Linux systems can have thousands of CPUs running a single
- Linux kernel in a single shared-memory environment.
- RCU must therefore pay close attention to high-end scalability.
-</ol>
-
-<p>
-This last parallelism fact of life means that RCU must pay special
-attention to the preceding facts of life.
-The idea that Linux might scale to systems with thousands of CPUs would
-have been met with some skepticism in the 1990s, but these requirements
-would otherwise have been unsurprising, even in the early 1990s.
-
-<h2><a name="Quality-of-Implementation Requirements">Quality-of-Implementation Requirements</a></h2>
-
-<p>
-These sections list quality-of-implementation requirements.
-Although an RCU implementation that ignores these requirements could
-still be used, it would likely be subject to limitations that would
-make it inappropriate for industrial-strength production use.
-Classes of quality-of-implementation requirements are as follows:
-
-<ol>
-<li> <a href="#Specialization">Specialization</a>
-<li> <a href="#Performance and Scalability">Performance and Scalability</a>
-<li> <a href="#Composability">Composability</a>
-<li> <a href="#Corner Cases">Corner Cases</a>
-</ol>
-
-<p>
-These classes are covered in the following sections.
-
-<h3><a name="Specialization">Specialization</a></h3>
-
-<p>
-RCU is and always has been intended primarily for read-mostly situations, as
-illustrated by the following figure.
-This means that RCU's read-side primitives are optimized, often at the
-expense of its update-side primitives.
-
-<p><img src="RCUApplicability.svg" alt="RCUApplicability.svg" width="70%"></p>
-
-<p>
-This focus on read-mostly situations means that RCU must interoperate
-with other synchronization primitives.
-For example, the <tt>add_gp()</tt> and <tt>remove_gp_synchronous()</tt>
-examples discussed earlier use RCU to protect readers and locking to
-coordinate updaters.
-However, the need extends much farther, requiring that a variety of
-synchronization primitives be legal within RCU read-side critical sections,
-including spinlocks, sequence locks, atomic operations, reference
-counters, and memory barriers.
-
-<p>@@QQ@@
-What about sleeping locks?
-<p>@@QQA@@
-These are forbidden within Linux-kernel RCU read-side critical sections
-because it is not legal to place a quiescent state (in this case,
-voluntary context switch) within an RCU read-side critical section.
-However, sleeping locks may be used within userspace RCU read-side critical
-sections, and also within Linux-kernel sleepable RCU
-<a href="#Sleepable RCU">(SRCU)</a>
-read-side critical sections.
-In addition, the -rt patchset turns spinlocks into sleeping locks so
-that the corresponding critical sections can be preempted, which
-also means that these sleeplockified spinlocks (but not other sleeping locks!)
-may be acquired within -rt-Linux-kernel RCU read-side critical sections.
-
-<p>
-Note that it <i>is</i> legal for a normal RCU read-side critical section
-to conditionally acquire a sleeping lock (as in <tt>mutex_trylock()</tt>),
-but only as long as it does not loop indefinitely attempting to
-conditionally acquire that sleeping lock.
-The key point is that things like <tt>mutex_trylock()</tt>
-either return with the mutex held, or return an error indication if
-the mutex was not immediately available.
-Either way, <tt>mutex_trylock()</tt> returns immediately without sleeping.
-<p>@@QQE@@
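-
-<p>
-For example, the conditional acquisition called out in the above
-Quick Quiz might look as follows, with the <tt>-&gt;mtx</tt> mutex
-field being hypothetical:
-
-<blockquote>
-<pre>
- 1 rcu_read_lock();
- 2 p = rcu_dereference(gp);
- 3 if (p &amp;&amp; mutex_trylock(&amp;p-&gt;mtx)) { /* Returns immediately, never sleeps. */
- 4   do_something(p-&gt;a, p-&gt;b);
- 5   mutex_unlock(&amp;p-&gt;mtx);
- 6 }
- 7 rcu_read_unlock();
-</pre>
-</blockquote>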
-
-<p>
-It often comes as a surprise that many algorithms do not require a
-consistent view of data, and can function quite well without one,
-with network routing being the poster child.
-Internet routing algorithms take significant time to propagate
-updates, so that by the time an update arrives at a given system,
-that system has been sending network traffic the wrong way for
-a considerable length of time.
-Having a few threads continue to send traffic the wrong way for a
-few more milliseconds is clearly not a problem: In the worst case,
-TCP retransmissions will eventually get the data where it needs to go.
-In general, when tracking the state of the universe outside of the
-computer, some level of inconsistency must be tolerated due to
-speed-of-light delays if nothing else.
-
-<p>
-Furthermore, uncertainty about external state is inherent in many cases.
-For example, a pair of veterinarians might use heartbeat to determine
-whether or not a given cat was alive.
-But how long should they wait after the last heartbeat to decide that
-the cat is in fact dead?
-Waiting less than 400 milliseconds makes no sense because this would
-mean that a relaxed cat would be considered to cycle between death
-and life more than 100 times per minute.
-Moreover, just as with human beings, a cat's heart might stop for
-some period of time, so the exact wait period is a judgment call.
-One of our pair of veterinarians might wait 30 seconds before pronouncing
-the cat dead, while the other might insist on waiting a full minute.
-The two veterinarians would then disagree on the state of the cat during
-the final 30 seconds of the minute following the last heartbeat, as
-fancifully illustrated below:
-
-<p><img src="2013-08-is-it-dead.png" alt="2013-08-is-it-dead.png" width="431"></p>
-
-<p>
-Interestingly enough, this same situation applies to hardware.
-When push comes to shove, how do we tell whether or not some
-external server has failed?
-We send messages to it periodically, and declare it failed if we
-don't receive a response within a given period of time.
-Policy decisions can usually tolerate short
-periods of inconsistency.
-The policy was decided some time ago, and is only now being put into
-effect, so a few milliseconds of delay is normally inconsequential.
-
-<p>
-However, there are algorithms that absolutely must see consistent data.
-For example, the translation between a user-level SystemV semaphore
-ID to the corresponding in-kernel data structure is protected by RCU,
-but it is absolutely forbidden to update a semaphore that has just been
-removed.
-In the Linux kernel, this need for consistency is accommodated by acquiring
-spinlocks located in the in-kernel data structure from within
-the RCU read-side critical section, and this is indicated by the
-green box in the figure above.
-Many other techniques may be used, and are in fact used within the
-Linux kernel.
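-
-<p>
-A sketch of this technique follows, in which the <tt>-&gt;lock</tt> and
-<tt>-&gt;deleted</tt> fields and the <tt>update_element()</tt> function
-are hypothetical, and in which the removal path is assumed to set
-<tt>-&gt;deleted</tt> while holding <tt>-&gt;lock</tt>:
-
-<blockquote>
-<pre>
- 1 rcu_read_lock();
- 2 p = rcu_dereference(gp);
- 3 if (p) {
- 4   spin_lock(&amp;p-&gt;lock); /* Per-element lock orders against removal. */
- 5   if (!p-&gt;deleted) /* Removal path sets -&gt;deleted under -&gt;lock. */
- 6     update_element(p); /* Hypothetical consistent update. */
- 7   spin_unlock(&amp;p-&gt;lock);
- 8 }
- 9 rcu_read_unlock();
-</pre>
-</blockquote>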
-
-<p>
-In short, RCU is not required to maintain consistency, and other
-mechanisms may be used in concert with RCU when consistency is required.
-RCU's specialization allows it to do its job extremely well, and its
-ability to interoperate with other synchronization mechanisms allows
-the right mix of synchronization tools to be used for a given job.
-
-<h3><a name="Performance and Scalability">Performance and Scalability</a></h3>
-
-<p>
-Energy efficiency is a critical component of performance today,
-and Linux-kernel RCU implementations must therefore avoid unnecessarily
-awakening idle CPUs.
-I cannot claim that this requirement was premeditated.
-In fact, I learned of it during a telephone conversation in which I
-was given &ldquo;frank and open&rdquo; feedback on the importance
-of energy efficiency in battery-powered systems and on specific
-energy-efficiency shortcomings of the Linux-kernel RCU implementation.
-In my experience, the battery-powered embedded community will consider
-any unnecessary wakeups to be extremely unfriendly acts.
-So much so that mere Linux-kernel-mailing-list posts are
-insufficient to vent their ire.
-
-<p>
-Memory consumption is not particularly important in most
-situations, and has become decreasingly
-so as memory sizes have expanded and memory
-costs have plummeted.
-However, as I learned from Matt Mackall's
-<a href="http://elinux.org/Linux_Tiny-FAQ">bloatwatch</a>
-efforts, memory footprint is critically important on single-CPU systems with
-non-preemptible (<tt>CONFIG_PREEMPT=n</tt>) kernels, and thus
-<a href="https://lkml.kernel.org/g/20090113221724.GA15307@linux.vnet.ibm.com">tiny RCU</a>
-was born.
-Josh Triplett has since taken over the small-memory banner with his
-<a href="https://tiny.wiki.kernel.org/">Linux kernel tinification</a>
-project, which resulted in
-<a href="#Sleepable RCU">SRCU</a>
-becoming optional for those kernels not needing it.
-
-<p>
-The remaining performance requirements are, for the most part,
-unsurprising.
-For example, in keeping with RCU's read-side specialization,
-<tt>rcu_dereference()</tt> should have negligible overhead (for
-example, suppression of a few minor compiler optimizations).
-Similarly, in non-preemptible environments, <tt>rcu_read_lock()</tt> and
-<tt>rcu_read_unlock()</tt> should have exactly zero overhead.
-
-<p>
-In preemptible environments, in the case where the RCU read-side
-critical section was not preempted (as will be the case for the
-highest-priority real-time process), <tt>rcu_read_lock()</tt> and
-<tt>rcu_read_unlock()</tt> should have minimal overhead.
-In particular, they should not contain atomic read-modify-write
-operations, memory-barrier instructions, preemption disabling,
-interrupt disabling, or backwards branches.
-However, in the case where the RCU read-side critical section was preempted,
-<tt>rcu_read_unlock()</tt> may acquire spinlocks and disable interrupts.
-This is why it is better to nest an RCU read-side critical section
-within a preempt-disable region than vice versa, at least in cases
-where that critical section is short enough to avoid unduly degrading
-real-time latencies.
-
-<p>
-The <tt>synchronize_rcu()</tt> grace-period-wait primitive is
-optimized for throughput.
-It may therefore incur several milliseconds of latency in addition to
-the duration of the longest RCU read-side critical section.
-On the other hand, multiple concurrent invocations of
-<tt>synchronize_rcu()</tt> are required to use batching optimizations
-so that they can be satisfied by a single underlying grace-period-wait
-operation.
-For example, in the Linux kernel, it is not unusual for a single
-grace-period-wait operation to serve more than
-<a href="https://www.usenix.org/conference/2004-usenix-annual-technical-conference/making-rcu-safe-deep-sub-millisecond-response">1,000 separate invocations</a>
-of <tt>synchronize_rcu()</tt>, thus amortizing the per-invocation
-overhead down to nearly zero.
-However, the grace-period optimization is also required to avoid
-measurable degradation of real-time scheduling and interrupt latencies.
-
-<p>
-In some cases, the multi-millisecond <tt>synchronize_rcu()</tt>
-latencies are unacceptable.
-In these cases, <tt>synchronize_rcu_expedited()</tt> may be used
-instead, reducing the grace-period latency down to a few tens of
-microseconds on small systems, at least in cases where the RCU read-side
-critical sections are short.
-There are currently no special latency requirements for
-<tt>synchronize_rcu_expedited()</tt> on large systems, but,
-consistent with the empirical nature of the RCU specification,
-that is subject to change.
-However, there most definitely are scalability requirements:
-A storm of <tt>synchronize_rcu_expedited()</tt> invocations on 4096
-CPUs should at least make reasonable forward progress.
-In return for its shorter latencies, <tt>synchronize_rcu_expedited()</tt>
-is permitted to impose modest degradation of real-time latency
-on non-idle online CPUs.
-That said, it will likely be necessary to take further steps to reduce this
-degradation, hopefully to roughly that of a scheduling-clock interrupt.
-
-<p>
-There are a number of situations where even
-<tt>synchronize_rcu_expedited()</tt>'s reduced grace-period
-latency is unacceptable.
-In these situations, the asynchronous <tt>call_rcu()</tt> can be
-used in place of <tt>synchronize_rcu()</tt> as follows:
-
-<blockquote>
-<pre>
- 1 struct foo {
- 2 int a;
- 3 int b;
- 4 struct rcu_head rh;
- 5 };
- 6
- 7 static void remove_gp_cb(struct rcu_head *rhp)
- 8 {
- 9 struct foo *p = container_of(rhp, struct foo, rh);
-10
-11 kfree(p);
-12 }
-13
-14 bool remove_gp_asynchronous(void)
-15 {
-16 struct foo *p;
-17
-18 spin_lock(&amp;gp_lock);
-19     p = rcu_access_pointer(gp);
-20 if (!p) {
-21 spin_unlock(&amp;gp_lock);
-22 return false;
-23 }
-24 rcu_assign_pointer(gp, NULL);
-25 call_rcu(&amp;p-&gt;rh, remove_gp_cb);
-26 spin_unlock(&amp;gp_lock);
-27 return true;
-28 }
-</pre>
-</blockquote>
-
-<p>
-A definition of <tt>struct foo</tt> is finally needed, and appears
-on lines&nbsp;1-5.
-The function <tt>remove_gp_cb()</tt> is passed to <tt>call_rcu()</tt>
-on line&nbsp;25, and will be invoked after the end of a subsequent
-grace period.
-This gets the same effect as <tt>remove_gp_synchronous()</tt>,
-but without forcing the updater to wait for a grace period to elapse.
-The <tt>call_rcu()</tt> function may be used in a number of
-situations where neither <tt>synchronize_rcu()</tt> nor
-<tt>synchronize_rcu_expedited()</tt> would be legal,
-including within preempt-disable code, <tt>local_bh_disable()</tt> code,
-interrupt-disable code, and interrupt handlers.
-However, even <tt>call_rcu()</tt> is illegal within NMI handlers.
-The callback function (<tt>remove_gp_cb()</tt> in this case) will be
-executed within softirq (software interrupt) environment within the
-Linux kernel,
-either within a real softirq handler or under the protection
-of <tt>local_bh_disable()</tt>.
-In both the Linux kernel and in userspace, it is bad practice to
-write an RCU callback function that takes too long.
-Long-running operations should be relegated to separate threads or
-(in the Linux kernel) workqueues.
-
-<p>@@QQ@@
-Why does line&nbsp;19 use <tt>rcu_access_pointer()</tt>?
-After all, <tt>call_rcu()</tt> on line&nbsp;25 stores into the
-structure, which would interact badly with concurrent insertions.
-Doesn't this mean that <tt>rcu_dereference()</tt> is required?
-<p>@@QQA@@
-Presumably the <tt>gp_lock</tt> acquired on line&nbsp;18 excludes
-any changes, including any insertions that <tt>rcu_dereference()</tt>
-would protect against.
-Therefore, any insertions will be delayed until after <tt>gp_lock</tt>
-is released on line&nbsp;26, which in turn means that
-<tt>rcu_access_pointer()</tt> suffices.
-<p>@@QQE@@
-
-<p>
-However, all that <tt>remove_gp_cb()</tt> is doing is
-invoking <tt>kfree()</tt> on the data element.
-This is a common idiom, and is supported by <tt>kfree_rcu()</tt>,
-which allows &ldquo;fire and forget&rdquo; operation as shown below:
-
-<blockquote>
-<pre>
- 1 struct foo {
- 2 int a;
- 3 int b;
- 4 struct rcu_head rh;
- 5 };
- 6
- 7 bool remove_gp_faf(void)
- 8 {
- 9 struct foo *p;
-10
-11 spin_lock(&amp;gp_lock);
-12 p = rcu_dereference(gp);
-13 if (!p) {
-14 spin_unlock(&amp;gp_lock);
-15 return false;
-16 }
-17 rcu_assign_pointer(gp, NULL);
-18 kfree_rcu(p, rh);
-19 spin_unlock(&amp;gp_lock);
-20 return true;
-21 }
-</pre>
-</blockquote>
-
-<p>
-Note that <tt>remove_gp_faf()</tt> simply invokes
-<tt>kfree_rcu()</tt> and proceeds, without any need to pay any
-further attention to the subsequent grace period and <tt>kfree()</tt>.
-It is permissible to invoke <tt>kfree_rcu()</tt> from the same
-environments as for <tt>call_rcu()</tt>.
-Interestingly enough, DYNIX/ptx had the equivalents of
-<tt>call_rcu()</tt> and <tt>kfree_rcu()</tt>, but not
-<tt>synchronize_rcu()</tt>.
-This was due to the fact that RCU was not heavily used within DYNIX/ptx,
-so the very few places that needed something like
-<tt>synchronize_rcu()</tt> simply open-coded it.
-
-<p>@@QQ@@
-Earlier it was claimed that <tt>call_rcu()</tt> and
-<tt>kfree_rcu()</tt> allowed updaters to avoid being blocked
-by readers.
-But how can that be correct, given that the invocation of the callback
-and the freeing of the memory (respectively) must still wait for
-a grace period to elapse?
-<p>@@QQA@@
-We could define things this way, but keep in mind that this sort of
-definition would say that updates in garbage-collected languages
-cannot complete until the next time the garbage collector runs,
-which does not seem at all reasonable.
-The key point is that in most cases, an updater using either
-<tt>call_rcu()</tt> or <tt>kfree_rcu()</tt> can proceed to the
-next update as soon as it has invoked <tt>call_rcu()</tt> or
-<tt>kfree_rcu()</tt>, without having to wait for a subsequent
-grace period.
-<p>@@QQE@@
-
-<p>
-But what if the updater must wait for the completion of code to be
-executed after the end of the grace period, but has other tasks
-that can be carried out in the meantime?
-The polling-style <tt>get_state_synchronize_rcu()</tt> and
-<tt>cond_synchronize_rcu()</tt> functions may be used for this
-purpose, as shown below:
-
-<blockquote>
-<pre>
- 1 bool remove_gp_poll(void)
- 2 {
- 3 struct foo *p;
- 4 unsigned long s;
- 5
- 6 spin_lock(&amp;gp_lock);
- 7 p = rcu_access_pointer(gp);
- 8 if (!p) {
- 9 spin_unlock(&amp;gp_lock);
-10 return false;
-11 }
-12 rcu_assign_pointer(gp, NULL);
-13 spin_unlock(&amp;gp_lock);
-14 s = get_state_synchronize_rcu();
-15 do_something_while_waiting();
-16 cond_synchronize_rcu(s);
-17 kfree(p);
-18 return true;
-19 }
-</pre>
-</blockquote>
-
-<p>
-On line&nbsp;14, <tt>get_state_synchronize_rcu()</tt> obtains a
-&ldquo;cookie&rdquo; from RCU,
-then line&nbsp;15 carries out other tasks,
-and finally, line&nbsp;16 returns immediately if a grace period has
-elapsed in the meantime, but otherwise waits as required.
-The need for <tt>get_state_synchronize_rcu()</tt> and
-<tt>cond_synchronize_rcu()</tt> has appeared quite recently,
-so it is too early to tell whether they will stand the test of time.
-
-<p>
-RCU thus provides a range of tools to allow updaters to strike the
-required tradeoff between latency, flexibility and CPU overhead.
-
-<h3><a name="Composability">Composability</a></h3>
-
-<p>
-Composability has received much attention in recent years, perhaps in part
-due to the collision of multicore hardware with object-oriented techniques
-designed in single-threaded environments for single-threaded use.
-And in theory, RCU read-side critical sections may be composed, and in
-fact may be nested arbitrarily deeply.
-In practice, as with all real-world implementations of composable
-constructs, there are limitations.
-
-<p>
-Implementations of RCU for which <tt>rcu_read_lock()</tt>
-and <tt>rcu_read_unlock()</tt> generate no code, such as
-Linux-kernel RCU when <tt>CONFIG_PREEMPT=n</tt>, can be
-nested arbitrarily deeply.
-After all, there is no overhead.
-Except that if all these instances of <tt>rcu_read_lock()</tt>
-and <tt>rcu_read_unlock()</tt> are visible to the compiler,
-compilation will eventually fail due to exhausting memory,
-mass storage, or user patience, whichever comes first.
-If the nesting is not visible to the compiler, as is the case with
-mutually recursive functions each in its own translation unit,
-stack overflow will result.
-If the nesting takes the form of loops, either the control variable
-will overflow or (in the Linux kernel) you will get an RCU CPU stall warning.
-Nevertheless, this class of RCU implementations is one
-of the most composable constructs in existence.
-
-<p>
-RCU implementations that explicitly track nesting depth
-are limited by the nesting-depth counter.
-For example, the Linux kernel's preemptible RCU limits nesting to
-<tt>INT_MAX</tt>.
-This should suffice for almost all practical purposes.
-That said, a consecutive pair of RCU read-side critical sections
-between which there is an operation that waits for a grace period
-cannot be enclosed in another RCU read-side critical section.
-This is because it is not legal to wait for a grace period within
-an RCU read-side critical section: To do so would result either
-in deadlock or
-in RCU implicitly splitting the enclosing RCU read-side critical
-section, neither of which is conducive to a long-lived and prosperous
-kernel.
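-
-<p>
-In other words, the following pattern is forbidden, with
-<tt>do_something_1()</tt> and <tt>do_something_2()</tt> being
-placeholders:
-
-<blockquote>
-<pre>
- 1 rcu_read_lock(); /* Enclosing RCU read-side critical section. */
- 2 rcu_read_lock();
- 3 do_something_1();
- 4 rcu_read_unlock();
- 5 synchronize_rcu(); /* BUG: grace-period wait within a reader. */
- 6 rcu_read_lock();
- 7 do_something_2();
- 8 rcu_read_unlock();
- 9 rcu_read_unlock(); /* End of enclosing critical section. */
-</pre>
-</blockquote>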
-
-<p>
-It is worth noting that RCU is not alone in limiting composability.
-For example, many transactional-memory implementations prohibit
-composing a pair of transactions separated by an irrevocable
-operation (for example, a network receive operation).
-For another example, lock-based critical sections can be composed
-surprisingly freely, but only if deadlock is avoided.
-
-<p>
-In short, although RCU read-side critical sections are highly composable,
-care is required in some situations, just as is the case for any other
-composable synchronization mechanism.
-
-<h3><a name="Corner Cases">Corner Cases</a></h3>
-
-<p>
-A given RCU workload might have an endless and intense stream of
-RCU read-side critical sections, perhaps even so intense that there
-was never a point in time during which there was not at least one
-RCU read-side critical section in flight.
-RCU cannot allow this situation to block grace periods: As long as
-all the RCU read-side critical sections are finite, grace periods
-must also be finite.
-
-<p>
-That said, preemptible RCU implementations could potentially result
-in RCU read-side critical sections being preempted for long durations,
-which has the effect of creating a long-duration RCU read-side
-critical section.
-This situation can arise only in heavily loaded systems, but systems using
-real-time priorities are of course more vulnerable.
-Therefore, RCU priority boosting is provided to help deal with this
-case.
-That said, the exact requirements on RCU priority boosting will likely
-evolve as more experience accumulates.
-
-<p>
-Other workloads might have very high update rates.
-Although one can argue that such workloads should instead use
-something other than RCU, the fact remains that RCU must
-handle such workloads gracefully.
-This requirement is another factor driving batching of grace periods,
-but it is also the driving force behind the checks for large numbers
-of queued RCU callbacks in the <tt>call_rcu()</tt> code path.
-Finally, high update rates should not delay RCU read-side critical
-sections, although some read-side delays can occur when using
-<tt>synchronize_rcu_expedited()</tt>, courtesy of this function's use
-of <tt>try_stop_cpus()</tt>.
-(In the future, <tt>synchronize_rcu_expedited()</tt> will be
-converted to use lighter-weight inter-processor interrupts (IPIs),
-but this will still disturb readers, though to a much smaller degree.)
-
-<p>
-Although all three of these corner cases were understood in the early
-1990s, a simple user-level test consisting of <tt>close(open(path))</tt>
-in a tight loop
-in the early 2000s suddenly provided a much deeper appreciation of the
-high-update-rate corner case.
-This test also motivated addition of some RCU code to react to high update
-rates, for example, if a given CPU finds itself with more than 10,000
-RCU callbacks queued, it will cause RCU to take evasive action by
-more aggressively starting grace periods and more aggressively forcing
-completion of grace-period processing.
-This evasive action causes the grace period to complete more quickly,
-but at the cost of restricting RCU's batching optimizations, thus
-increasing the CPU overhead incurred by that grace period.
-
-<h2><a name="Software-Engineering Requirements">
-Software-Engineering Requirements</a></h2>
-
-<p>
-Between Murphy's Law and &ldquo;To err is human&rdquo;, it is necessary to
-guard against mishaps and misuse:
-
-<ol>
-<li> It is all too easy to forget to use <tt>rcu_read_lock()</tt>
- everywhere that it is needed, so kernels built with
-	<tt>CONFIG_PROVE_RCU=y</tt> will splat if
-	<tt>rcu_dereference()</tt> is used outside of an
-	RCU read-side critical section.
-	Update-side code can use <tt>rcu_dereference_protected()</tt>,
-	which takes a
-	<a href="https://lwn.net/Articles/371986/">lockdep expression</a>
-	to indicate what is providing the protection.
-	If the indicated protection is not provided, a lockdep splat
-	is emitted.
-	(See the example following this list.)
-
- <p>
- Code shared between readers and updaters can use
- <tt>rcu_dereference_check()</tt>, which also takes a
- lockdep expression, and emits a lockdep splat if neither
- <tt>rcu_read_lock()</tt> nor the indicated protection
- is in place.
- In addition, <tt>rcu_dereference_raw()</tt> is used in those
- (hopefully rare) cases where the required protection cannot
- be easily described.
- Finally, <tt>rcu_read_lock_held()</tt> is provided to
- allow a function to verify that it has been invoked within
- an RCU read-side critical section.
- I was made aware of this set of requirements shortly after Thomas
- Gleixner audited a number of RCU uses.
-<li> A given function might wish to check for RCU-related preconditions
- upon entry, before using any other RCU API.
- The <tt>rcu_lockdep_assert()</tt> does this job,
- asserting the expression in kernels having lockdep enabled
- and doing nothing otherwise.
-<li> It is also easy to forget to use <tt>rcu_assign_pointer()</tt>
- and <tt>rcu_dereference()</tt>, perhaps (incorrectly)
- substituting a simple assignment.
- To catch this sort of error, a given RCU-protected pointer may be
- tagged with <tt>__rcu</tt>, after which running sparse
- with <tt>CONFIG_SPARSE_RCU_POINTER=y</tt> will complain
- about simple-assignment accesses to that pointer.
- Arnd Bergmann made me aware of this requirement, and also
- supplied the needed
- <a href="https://lwn.net/Articles/376011/">patch series</a>.
-<li> Kernels built with <tt>CONFIG_DEBUG_OBJECTS_RCU_HEAD=y</tt>
- will splat if a data element is passed to <tt>call_rcu()</tt>
- twice in a row, without a grace period in between.
- (This error is similar to a double free.)
- The corresponding <tt>rcu_head</tt> structures that are
- dynamically allocated are automatically tracked, but
- <tt>rcu_head</tt> structures allocated on the stack
- must be initialized with <tt>init_rcu_head_on_stack()</tt>
- and cleaned up with <tt>destroy_rcu_head_on_stack()</tt>.
- Similarly, statically allocated non-stack <tt>rcu_head</tt>
- structures must be initialized with <tt>init_rcu_head()</tt>
- and cleaned up with <tt>destroy_rcu_head()</tt>.
- Mathieu Desnoyers made me aware of this requirement, and also
- supplied the needed
- <a href="https://lkml.kernel.org/g/20100319013024.GA28456@Krystal">patch</a>.
-<li> An infinite loop in an RCU read-side critical section will
- eventually trigger an RCU CPU stall warning splat, with
- the duration of &ldquo;eventually&rdquo; being controlled by the
- <tt>RCU_CPU_STALL_TIMEOUT</tt> <tt>Kconfig</tt> option, or,
- alternatively, by the
- <tt>rcupdate.rcu_cpu_stall_timeout</tt> boot/sysfs
- parameter.
- However, RCU is not obligated to produce this splat
- unless there is a grace period waiting on that particular
- RCU read-side critical section.
- <p>
- Some extreme workloads might intentionally delay
- RCU grace periods, and systems running those workloads can
- be booted with <tt>rcupdate.rcu_cpu_stall_suppress</tt>
- to suppress the splats.
- This kernel parameter may also be set via <tt>sysfs</tt>.
- Furthermore, RCU CPU stall warnings are counter-productive
- during sysrq dumps and during panics.
- RCU therefore supplies the <tt>rcu_sysrq_start()</tt> and
- <tt>rcu_sysrq_end()</tt> API members to be called before
- and after long sysrq dumps.
- RCU also supplies the <tt>rcu_panic()</tt> notifier that is
- automatically invoked at the beginning of a panic to suppress
- further RCU CPU stall warnings.
-
- <p>
- This requirement made itself known in the early 1990s, pretty
- much the first time that it was necessary to debug a CPU stall.
- That said, the initial implementation in DYNIX/ptx was quite
- generic in comparison with that of Linux.
-<li> Although it would be very good to detect pointers leaking out
- of RCU read-side critical sections, there is currently no
- good way of doing this.
- One complication is the need to distinguish between pointers
- leaking and pointers that have been handed off from RCU to
- some other synchronization mechanism, for example, reference
- counting.
-<li> In kernels built with <tt>CONFIG_RCU_TRACE=y</tt>, RCU-related
- information is provided via both debugfs and event tracing.
-<li> Open-coded use of <tt>rcu_assign_pointer()</tt> and
- <tt>rcu_dereference()</tt> to create typical linked
- data structures can be surprisingly error-prone.
- Therefore, RCU-protected
- <a href="https://lwn.net/Articles/609973/#RCU List APIs">linked lists</a>
- and, more recently, RCU-protected
- <a href="https://lwn.net/Articles/612100/">hash tables</a>
- are available.
- Many other special-purpose RCU-protected data structures are
- available in the Linux kernel and the userspace RCU library.
-<li> Some linked structures are created at compile time, but still
- require <tt>__rcu</tt> checking.
- The <tt>RCU_POINTER_INITIALIZER()</tt> macro serves this
- purpose.
-<li> It is not necessary to use <tt>rcu_assign_pointer()</tt>
- when creating linked structures that are to be published via
- a single external pointer.
- The <tt>RCU_INIT_POINTER()</tt> macro is provided for
- this task and also for assigning <tt>NULL</tt> pointers
- at runtime.
-</ol>
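-
-<p>
-To make the <tt>__rcu</tt>, <tt>rcu_assign_pointer()</tt>, and
-<tt>RCU_INIT_POINTER()</tt> items above more concrete, the following
-minimal sketch shows a tagged pointer being published, read, and
-retracted.
-The <tt>foo</tt> structure, the <tt>gp</tt> pointer, and the function
-names are hypothetical, chosen purely for illustration, and update-side
-locking is omitted.
-
-<blockquote>
-<pre>
-struct foo {
-        int a;
-};
-
-static struct foo __rcu *gp;  /* The __rcu tag enables sparse checking. */
-
-/* Publish: rcu_assign_pointer() orders initialization before publication. */
-void publish_foo(struct foo *p)
-{
-        p-&gt;a = 42;
-        rcu_assign_pointer(gp, p);
-}
-
-/* Read: rcu_dereference() marks the RCU-protected access. */
-int read_foo(void)
-{
-        struct foo *p;
-        int a = -1;
-
-        rcu_read_lock();
-        p = rcu_dereference(gp);
-        if (p)
-                a = p-&gt;a;
-        rcu_read_unlock();
-        return a;
-}
-
-/* A NULL assignment needs no ordering, so RCU_INIT_POINTER() suffices. */
-void unpublish_foo(void)
-{
-        RCU_INIT_POINTER(gp, NULL);
-}
-</pre>
-</blockquote>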
-
-<p>
-This is not a hard-and-fast list: RCU's diagnostic capabilities will
-continue to be guided by the number and type of usage bugs found
-in real-world RCU usage.
-
-<h2><a name="Linux Kernel Complications">Linux Kernel Complications</a></h2>
-
-<p>
-The Linux kernel provides an interesting environment for all kinds of
-software, including RCU.
-Some of the relevant points of interest are as follows:
-
-<ol>
-<li> <a href="#Configuration">Configuration</a>.
-<li> <a href="#Firmware Interface">Firmware Interface</a>.
-<li> <a href="#Early Boot">Early Boot</a>.
-<li> <a href="#Interrupts and NMIs">
- Interrupts and non-maskable interrupts (NMIs)</a>.
-<li> <a href="#Loadable Modules">Loadable Modules</a>.
-<li> <a href="#Hotplug CPU">Hotplug CPU</a>.
-<li> <a href="#Scheduler and RCU">Scheduler and RCU</a>.
-<li> <a href="#Tracing and RCU">Tracing and RCU</a>.
-<li> <a href="#Energy Efficiency">Energy Efficiency</a>.
-<li> <a href="#Memory Efficiency">Memory Efficiency</a>.
-<li> <a href="#Performance, Scalability, Response Time, and Reliability">
- Performance, Scalability, Response Time, and Reliability</a>.
-</ol>
-
-<p>
-This list is probably incomplete, but it does give a feel for the
-most notable Linux-kernel complications.
-Each of the following sections covers one of the above topics.
-
-<h3><a name="Configuration">Configuration</a></h3>
-
-<p>
-RCU's goal is automatic configuration, so that almost nobody
-needs to worry about RCU's <tt>Kconfig</tt> options.
-And for almost all users, RCU does in fact work well
-&ldquo;out of the box.&rdquo;
-
-<p>
-However, there are specialized use cases that are handled by
-kernel boot parameters and <tt>Kconfig</tt> options.
-Unfortunately, the <tt>Kconfig</tt> system will explicitly ask users
-about new <tt>Kconfig</tt> options, which requires almost all of them
-be hidden behind a <tt>CONFIG_RCU_EXPERT</tt> <tt>Kconfig</tt> option.
-
-<p>
-This all should be quite obvious, but the fact remains that
-Linus Torvalds recently had to
-<a href="https://lkml.kernel.org/g/CA+55aFy4wcCwaL4okTs8wXhGZ5h-ibecy_Meg9C4MNQrUnwMcg@mail.gmail.com">remind</a>
-me of this requirement.
-
-<h3><a name="Firmware Interface">Firmware Interface</a></h3>
-
-<p>
-In many cases, the kernel obtains information about the system from the
-firmware, and sometimes things are lost in translation.
-Or the translation is accurate, but the original message is bogus.
-
-<p>
-For example, some systems' firmware overreports the number of CPUs,
-sometimes by a large factor.
-If RCU naively believed the firmware, as it used to do,
-it would create too many per-CPU kthreads.
-Although the resulting system will still run correctly, the extra
-kthreads needlessly consume memory and can cause confusion
-when they show up in <tt>ps</tt> listings.
-
-<p>
-RCU must therefore wait for a given CPU to actually come online before
-it can allow itself to believe that the CPU actually exists.
-The resulting &ldquo;ghost CPUs&rdquo; (which are never going to
-come online) cause a number of
-<a href="https://paulmck.livejournal.com/37494.html">interesting complications</a>.
-
-<h3><a name="Early Boot">Early Boot</a></h3>
-
-<p>
-The Linux kernel's boot sequence is an interesting process,
-and RCU is used early, even before <tt>rcu_init()</tt>
-is invoked.
-In fact, a number of RCU's primitives can be used as soon as the
-initial task's <tt>task_struct</tt> is available and the
-boot CPU's per-CPU variables are set up.
-The read-side primitives (<tt>rcu_read_lock()</tt>,
-<tt>rcu_read_unlock()</tt>, <tt>rcu_dereference()</tt>,
-and <tt>rcu_access_pointer()</tt>) will operate normally very early on,
-as will <tt>rcu_assign_pointer()</tt>.
-
-<p>
-Although <tt>call_rcu()</tt> may be invoked at any
-time during boot, callbacks are not guaranteed to be invoked until after
-the scheduler is fully up and running.
-This delay in callback invocation is due to the fact that RCU does not
-invoke callbacks until it is fully initialized, and this full initialization
-cannot occur until after the scheduler has initialized itself to the
-point where RCU can spawn and run its kthreads.
-In theory, it would be possible to invoke callbacks earlier;
-however, this is not a panacea because there would be severe restrictions
-on what operations those callbacks could invoke.
-
-<p>
-Perhaps surprisingly, <tt>synchronize_rcu()</tt>,
-<a href="#Bottom-Half Flavor"><tt>synchronize_rcu_bh()</tt></a>
-(<a href="#Bottom-Half Flavor">discussed below</a>),
-and
-<a href="#Sched Flavor"><tt>synchronize_sched()</tt></a>
-will all operate normally
-during very early boot, the reason being that there is only one CPU
-and preemption is disabled.
-This means that a call to <tt>synchronize_rcu()</tt> (or one of its
-friends) is itself a quiescent state and thus a grace period, so the
-early-boot implementation can be a no-op.
-
-<p>
-Both <tt>synchronize_rcu_bh()</tt> and <tt>synchronize_sched()</tt>
-continue to operate normally through the remainder of boot, courtesy
-of the fact that preemption is disabled across their RCU read-side
-critical sections and also courtesy of the fact that there is still
-only one CPU.
-However, once the scheduler starts initializing, preemption is enabled.
-There is still only a single CPU, but the fact that preemption is enabled
-means that the no-op implementation of <tt>synchronize_rcu()</tt> no
-longer works in <tt>CONFIG_PREEMPT=y</tt> kernels.
-Therefore, as soon as the scheduler starts initializing, the early-boot
-fastpath is disabled.
-This means that <tt>synchronize_rcu()</tt> switches to its runtime
-mode of operation where it posts callbacks, which in turn means that
-any call to <tt>synchronize_rcu()</tt> will block until the corresponding
-callback is invoked.
-Unfortunately, the callback cannot be invoked until RCU's runtime
-grace-period machinery is up and running, which cannot happen until
-the scheduler has initialized itself sufficiently to allow RCU's
-kthreads to be spawned.
-Therefore, invoking <tt>synchronize_rcu()</tt> during scheduler
-initialization can result in deadlock.
-
-<p>@@QQ@@
-So what happens with <tt>synchronize_rcu()</tt> during
-scheduler initialization for <tt>CONFIG_PREEMPT=n</tt>
-kernels?
-<p>@@QQA@@
-In <tt>CONFIG_PREEMPT=n</tt> kernels, <tt>synchronize_rcu()</tt>
-maps directly to <tt>synchronize_sched()</tt>.
-Therefore, <tt>synchronize_rcu()</tt> works normally throughout
-boot in <tt>CONFIG_PREEMPT=n</tt> kernels.
-However, your code must also work in <tt>CONFIG_PREEMPT=y</tt> kernels,
-so it is still necessary to avoid invoking <tt>synchronize_rcu()</tt>
-during scheduler initialization.
-<p>@@QQE@@
-
-<p>
-I learned of these boot-time requirements as a result of a series of
-system hangs.
-
-<h3><a name="Interrupts and NMIs">Interrupts and NMIs</a></h3>
-
-<p>
-The Linux kernel has interrupts, and RCU read-side critical sections are
-legal within interrupt handlers and within interrupt-disabled regions
-of code, as are invocations of <tt>call_rcu()</tt>.
-
-<p>
-Some Linux-kernel architectures can enter an interrupt handler from
-non-idle process context, and then just never leave it, instead stealthily
-transitioning back to process context.
-This trick is sometimes used to invoke system calls from inside the kernel.
-These &ldquo;half-interrupts&rdquo; mean that RCU has to be very careful
-about how it counts interrupt nesting levels.
-I learned of this requirement the hard way during a rewrite
-of RCU's dyntick-idle code.
-
-<p>
-The Linux kernel has non-maskable interrupts (NMIs), and
-RCU read-side critical sections are legal within NMI handlers.
-Thankfully, RCU update-side primitives, including
-<tt>call_rcu()</tt>, are prohibited within NMI handlers.
-
-<p>
-The name notwithstanding, some Linux-kernel architectures
-can have nested NMIs, which RCU must handle correctly.
-Andy Lutomirski
-<a href="https://lkml.kernel.org/g/CALCETrXLq1y7e_dKFPgou-FKHB6Pu-r8+t-6Ds+8=va7anBWDA@mail.gmail.com">surprised me</a>
-with this requirement;
-he also kindly surprised me with
-<a href="https://lkml.kernel.org/g/CALCETrXSY9JpW3uE6H8WYk81sg56qasA2aqmjMPsq5dOtzso=g@mail.gmail.com">an algorithm</a>
-that meets this requirement.
-
-<h3><a name="Loadable Modules">Loadable Modules</a></h3>
-
-<p>
-The Linux kernel has loadable modules, and these modules can
-also be unloaded.
-After a given module has been unloaded, any attempt to call
-one of its functions results in a segmentation fault.
-The module-unload functions must therefore cancel any
-delayed calls to loadable-module functions, for example,
-any outstanding <tt>mod_timer()</tt> must be dealt with
-via <tt>del_timer_sync()</tt> or similar.
-
-<p>
-Unfortunately, there is no way to cancel an RCU callback;
-once you invoke <tt>call_rcu()</tt>, the callback function is
-going to eventually be invoked, unless the system goes down first.
-Because it is normally considered socially irresponsible to crash the system
-in response to a module unload request, we need some other way
-to deal with in-flight RCU callbacks.
-
-<p>
-RCU therefore provides
-<tt><a href="https://lwn.net/Articles/217484/">rcu_barrier()</a></tt>,
-which waits until all in-flight RCU callbacks have been invoked.
-If a module uses <tt>call_rcu()</tt>, its exit function should therefore
-prevent any future invocation of <tt>call_rcu()</tt>, then invoke
-<tt>rcu_barrier()</tt>.
-In theory, the underlying module-unload code could invoke
-<tt>rcu_barrier()</tt> unconditionally, but in practice this would
-incur unacceptable latencies.
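-
-<p>
-For example, a module's exit function might follow the sketch below,
-where <tt>foo_unregister()</tt> and <tt>foo_free_everything()</tt> are
-hypothetical stand-ins for the module's real teardown code:
-
-<blockquote>
-<pre>
-static void __exit foo_exit(void)
-{
-        foo_unregister();       /* Prevent further call_rcu() invocations. */
-        rcu_barrier();          /* Wait for in-flight callbacks to finish. */
-        foo_free_everything();  /* Only now is freeing callback data safe. */
-}
-module_exit(foo_exit);
-</pre>
-</blockquote>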
-
-<p>
-Nikita Danilov noted this requirement for an analogous filesystem-unmount
-situation, and Dipankar Sarma incorporated <tt>rcu_barrier()</tt> into RCU.
-The need for <tt>rcu_barrier()</tt> for module unloading became
-apparent later.
-
-<h3><a name="Hotplug CPU">Hotplug CPU</a></h3>
-
-<p>
-The Linux kernel supports CPU hotplug, which means that CPUs
-can come and go.
-It is of course illegal to use any RCU API member from an offline CPU.
-This requirement was present from day one in DYNIX/ptx, but
-on the other hand, the Linux kernel's CPU-hotplug implementation
-is &ldquo;interesting.&rdquo;
-
-<p>
-The Linux-kernel CPU-hotplug implementation has notifiers that
-are used to allow the various kernel subsystems (including RCU)
-to respond appropriately to a given CPU-hotplug operation.
-Most RCU operations may be invoked from CPU-hotplug notifiers,
-including even normal synchronous grace-period operations
-such as <tt>synchronize_rcu()</tt>.
-However, expedited grace-period operations such as
-<tt>synchronize_rcu_expedited()</tt> are not supported,
-due to the fact that current implementations block CPU-hotplug
-operations, which could result in deadlock.
-
-<p>
-In addition, all-callback-wait operations such as
-<tt>rcu_barrier()</tt> are also not supported, due to the
-fact that there are phases of CPU-hotplug operations where
-the outgoing CPU's callbacks will not be invoked until after
-the CPU-hotplug operation ends, which could also result in deadlock.
-
-<h3><a name="Scheduler and RCU">Scheduler and RCU</a></h3>
-
-<p>
-RCU depends on the scheduler, and the scheduler uses RCU to
-protect some of its data structures.
-This means the scheduler is forbidden from acquiring
-the runqueue locks and the priority-inheritance locks
-in the middle of an outermost RCU read-side critical section unless either
-(1)&nbsp;it releases them before exiting that same
-RCU read-side critical section, or
-(2)&nbsp;interrupts are disabled across
-that entire RCU read-side critical section.
-This same prohibition also applies (recursively!) to any lock that is acquired
-while holding any lock to which this prohibition applies.
-Adhering to this rule prevents preemptible RCU from invoking
-<tt>rcu_read_unlock_special()</tt> while either runqueue or
-priority-inheritance locks are held, thus avoiding deadlock.
-
-<p>
-Prior to v4.4, it was only necessary to disable preemption across
-RCU read-side critical sections that acquired scheduler locks.
-In v4.4, expedited grace periods started using IPIs, and these
-IPIs could force a <tt>rcu_read_unlock()</tt> to take the slowpath.
-Therefore, this expedited-grace-period change required disabling of
-interrupts, not just preemption.
-
-<p>
-For RCU's part, the preemptible-RCU <tt>rcu_read_unlock()</tt>
-implementation must be written carefully to avoid similar deadlocks.
-In particular, <tt>rcu_read_unlock()</tt> must tolerate an
-interrupt where the interrupt handler invokes both
-<tt>rcu_read_lock()</tt> and <tt>rcu_read_unlock()</tt>.
-This possibility requires <tt>rcu_read_unlock()</tt> to use
-negative nesting levels to avoid destructive recursion via
-the interrupt handler's use of RCU.
-
-<p>
-This pair of mutual scheduler-RCU requirements came as a
-<a href="https://lwn.net/Articles/453002/">complete surprise</a>.
-
-<p>
-As noted above, RCU makes use of kthreads, and it is necessary to
-avoid excessive CPU-time accumulation by these kthreads.
-This requirement was no surprise, but RCU's violation of it
-when running context-switch-heavy workloads when built with
-<tt>CONFIG_NO_HZ_FULL=y</tt>
-<a href="http://www.rdrop.com/users/paulmck/scalability/paper/BareMetal.2015.01.15b.pdf">did come as a surprise [PDF]</a>.
-RCU has made good progress towards meeting this requirement, even
-for context-switch-heavy <tt>CONFIG_NO_HZ_FULL=y</tt> workloads,
-but there is room for further improvement.
-
-<h3><a name="Tracing and RCU">Tracing and RCU</a></h3>
-
-<p>
-It is possible to use tracing on RCU code, but tracing itself
-uses RCU.
-For this reason, <tt>rcu_dereference_raw_notrace()</tt>
-is provided for use by tracing, which avoids the destructive
-recursion that could otherwise ensue.
-This API is also used by virtualization in some architectures,
-where RCU readers execute in environments in which tracing
-cannot be used.
-The tracing folks both located the requirement and provided the
-needed fix, so this surprise requirement was relatively painless.
-
-<h3><a name="Energy Efficiency">Energy Efficiency</a></h3>
-
-<p>
-Interrupting idle CPUs is considered socially unacceptable,
-especially by people with battery-powered embedded systems.
-RCU therefore conserves energy by detecting which CPUs are
-idle, including tracking CPUs that have been interrupted from idle.
-This is a large part of the energy-efficiency requirement,
-so I learned of this via an irate phone call.
-
-<p>
-Because RCU avoids interrupting idle CPUs, it is illegal to
-execute an RCU read-side critical section on an idle CPU.
-(Kernels built with <tt>CONFIG_PROVE_RCU=y</tt> will splat
-if you try it.)
-The <tt>RCU_NONIDLE()</tt> macro and <tt>_rcuidle</tt>
-event tracing are provided to work around this restriction.
-In addition, <tt>rcu_is_watching()</tt> may be used to
-test whether or not it is currently legal to run RCU read-side
-critical sections on this CPU.
-I learned of the need for diagnostics on the one hand
-and <tt>RCU_NONIDLE()</tt> on the other while inspecting
-idle-loop code.
-Steven Rostedt supplied <tt>_rcuidle</tt> event tracing,
-which is used quite heavily in the idle loop.
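-
-<p>
-For illustration, idle-loop code needing to execute a single
-RCU-protected statement might use the following sketch, in which
-<tt>do_something_with_rcu()</tt> is a hypothetical stand-in for the
-actual work:
-
-<blockquote>
-<pre>
-/* In the idle loop, where RCU is not watching this CPU: */
-if (!rcu_is_watching())
-        pr_debug("RCU read-side critical sections are illegal here\n");
-
-/* Tell RCU to watch for just long enough to execute one statement. */
-RCU_NONIDLE(do_something_with_rcu());
-</pre>
-</blockquote>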
-
-<p>
-It is similarly socially unacceptable to interrupt an
-<tt>nohz_full</tt> CPU running in userspace.
-RCU must therefore track <tt>nohz_full</tt> userspace
-execution.
-And in
-<a href="https://lwn.net/Articles/558284/"><tt>CONFIG_NO_HZ_FULL_SYSIDLE=y</tt></a>
-kernels, RCU must separately track idle CPUs on the one hand and
-CPUs that are either idle or executing in userspace on the other.
-In both cases, RCU must be able to sample state at two points in
-time, and be able to determine whether or not some other CPU spent
-any time idle and/or executing in userspace.
-
-<p>
-These energy-efficiency requirements have proven quite difficult to
-understand and to meet; for example, there have been more than five
-clean-sheet rewrites of RCU's energy-efficiency code, the last of
-which was finally able to demonstrate
-<a href="http://www.rdrop.com/users/paulmck/realtime/paper/AMPenergy.2013.04.19a.pdf">real energy savings running on real hardware [PDF]</a>.
-As noted earlier,
-I learned of many of these requirements via angry phone calls:
-Flaming me on the Linux-kernel mailing list was apparently not
-sufficient to fully vent their ire at RCU's energy-efficiency bugs!
-
-<h3><a name="Memory Efficiency">Memory Efficiency</a></h3>
-
-<p>
-Although small-memory non-realtime systems can simply use Tiny RCU,
-code size is only one aspect of memory efficiency.
-Another aspect is the size of the <tt>rcu_head</tt> structure
-used by <tt>call_rcu()</tt> and <tt>kfree_rcu()</tt>.
-Although this structure contains nothing more than a pair of pointers,
-it does appear in many RCU-protected data structures, including
-some that are size critical.
-The <tt>page</tt> structure is a case in point, as evidenced by
-the many occurrences of the <tt>union</tt> keyword within that structure.
-
-<p>
-This need for memory efficiency is one reason that RCU uses hand-crafted
-singly linked lists to track the <tt>rcu_head</tt> structures that
-are waiting for a grace period to elapse.
-It is also the reason why <tt>rcu_head</tt> structures do not contain
-debug information, such as fields tracking the file and line of the
-<tt>call_rcu()</tt> or <tt>kfree_rcu()</tt> that posted them.
-Although this information might appear in debug-only kernel builds at some
-point, in the meantime, the <tt>-&gt;func</tt> field will often provide
-the needed debug information.
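-
-<p>
-For reference, the usual pattern is simply to embed the
-<tt>rcu_head</tt> structure and use <tt>kfree_rcu()</tt>, as in the
-following sketch, in which the <tt>foo</tt> structure is hypothetical:
-
-<blockquote>
-<pre>
-struct foo {
-        int a;
-        struct rcu_head rh;  /* Only two pointers of per-object overhead. */
-};
-
-/* Assumes all globally visible pointers to p have already been removed. */
-void retire_foo(struct foo *p)
-{
-        kfree_rcu(p, rh);    /* Frees p after a subsequent grace period. */
-}
-</pre>
-</blockquote>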
-
-<p>
-However, in some cases, the need for memory efficiency leads to even
-more extreme measures.
-Returning to the <tt>page</tt> structure, the <tt>rcu_head</tt> field
-shares storage with a great many other structures that are used at
-various points in the corresponding page's lifetime.
-In order to correctly resolve certain
-<a href="https://lkml.kernel.org/g/1439976106-137226-1-git-send-email-kirill.shutemov@linux.intel.com">race conditions</a>,
-the Linux kernel's memory-management subsystem needs a particular bit
-to remain zero during all phases of grace-period processing,
-and that bit happens to map to the bottom bit of the
-<tt>rcu_head</tt> structure's <tt>-&gt;next</tt> field.
-RCU makes this guarantee as long as <tt>call_rcu()</tt>
-is used to post the callback, as opposed to <tt>kfree_rcu()</tt>
-or some future &ldquo;lazy&rdquo;
-variant of <tt>call_rcu()</tt> that might one day be created for
-energy-efficiency purposes.
-
-<h3><a name="Performance, Scalability, Response Time, and Reliability">
-Performance, Scalability, Response Time, and Reliability</a></h3>
-
-<p>
-Expanding on the
-<a href="#Performance and Scalability">earlier discussion</a>,
-RCU is used heavily by hot code paths in performance-critical
-portions of the Linux kernel's networking, security, virtualization,
-and scheduling code paths.
-RCU must therefore use efficient implementations, especially in its
-read-side primitives.
-To that end, it would be good if preemptible RCU's implementation
-of <tt>rcu_read_lock()</tt> could be inlined; however, doing
-this requires resolving <tt>#include</tt> issues with the
-<tt>task_struct</tt> structure.
-
-<p>
-The Linux kernel supports hardware configurations with up to
-4096 CPUs, which means that RCU must be extremely scalable.
-Algorithms that involve frequent acquisitions of global locks or
-frequent atomic operations on global variables simply cannot be
-tolerated within the RCU implementation.
-RCU therefore makes heavy use of a combining tree based on the
-<tt>rcu_node</tt> structure.
-RCU is required to tolerate all CPUs continuously invoking any
-combination of RCU's runtime primitives with minimal per-operation
-overhead.
-In fact, in many cases, increasing load must <i>decrease</i> the
-per-operation overhead, witness the batching optimizations for
-<tt>synchronize_rcu()</tt>, <tt>call_rcu()</tt>,
-<tt>synchronize_rcu_expedited()</tt>, and <tt>rcu_barrier()</tt>.
-As a general rule, RCU must cheerfully accept whatever the
-rest of the Linux kernel decides to throw at it.
-
-<p>
-The Linux kernel is used for real-time workloads, especially
-in conjunction with the
-<a href="https://rt.wiki.kernel.org/index.php/Main_Page">-rt patchset</a>.
-The real-time-latency response requirements are such that the
-traditional approach of disabling preemption across RCU
-read-side critical sections is inappropriate.
-Kernels built with <tt>CONFIG_PREEMPT=y</tt> therefore
-use an RCU implementation that allows RCU read-side critical
-sections to be preempted.
-This requirement made its presence known after users made it
-clear that an earlier
-<a href="https://lwn.net/Articles/107930/">real-time patch</a>
-did not meet their needs, in conjunction with some
-<a href="https://lkml.kernel.org/g/20050318002026.GA2693@us.ibm.com">RCU issues</a>
-encountered by a very early version of the -rt patchset.
-
-<p>
-In addition, RCU must make do with a sub-100-microsecond real-time latency
-budget.
-In fact, on smaller systems with the -rt patchset, the Linux kernel
-provides sub-20-microsecond real-time latencies for the whole kernel,
-including RCU.
-RCU's scalability and latency must therefore be sufficient for
-these sorts of configurations.
-To my surprise, the sub-100-microsecond real-time latency budget
-<a href="http://www.rdrop.com/users/paulmck/realtime/paper/bigrt.2013.01.31a.LCA.pdf">
-applies to even the largest systems [PDF]</a>,
-up to and including systems with 4096 CPUs.
-This real-time requirement motivated the grace-period kthread, which
-also simplified handling of a number of race conditions.
-
-<p>
-Finally, RCU's status as a synchronization primitive means that
-any RCU failure can result in arbitrary memory corruption that can be
-extremely difficult to debug.
-This means that RCU must be extremely reliable, which in
-practice also means that RCU must have an aggressive stress-test
-suite.
-This stress-test suite is called <tt>rcutorture</tt>.
-
-<p>
-Although the need for <tt>rcutorture</tt> was no surprise,
-the current immense popularity of the Linux kernel is posing
-interesting&mdash;and perhaps unprecedented&mdash;validation
-challenges.
-To see this, keep in mind that there are well over one billion
-instances of the Linux kernel running today, given Android
-smartphones, Linux-powered televisions, and servers.
-This number can be expected to increase sharply with the advent of
-the celebrated Internet of Things.
-
-<p>
-Suppose that RCU contains a race condition that manifests on average
-once per million years of runtime.
-This bug will be occurring about three times per <i>day</i> across
-the installed base.
-RCU could simply hide behind hardware error rates, given that no one
-should really expect their smartphone to last for a million years.
-However, anyone taking too much comfort from this thought should
-consider the fact that in most jurisdictions, a successful multi-year
-test of a given mechanism, which might include a Linux kernel,
-suffices for a number of types of safety-critical certifications.
-In fact, rumor has it that the Linux kernel is already being used
-in production for safety-critical applications.
-I don't know about you, but I would feel quite bad if a bug in RCU
-killed someone.
-Which might explain my recent focus on validation and verification.
-
-<h2><a name="Other RCU Flavors">Other RCU Flavors</a></h2>
-
-<p>
-One of the more surprising things about RCU is that there are now
-no fewer than five <i>flavors</i>, or API families.
-In addition, the primary flavor that has been the sole focus up to
-this point has two different implementations, non-preemptible and
-preemptible.
-The other four flavors are listed below, with requirements for each
-described in a separate section.
-
-<ol>
-<li> <a href="#Bottom-Half Flavor">Bottom-Half Flavor</a>
-<li> <a href="#Sched Flavor">Sched Flavor</a>
-<li> <a href="#Sleepable RCU">Sleepable RCU</a>
-<li> <a href="#Tasks RCU">Tasks RCU</a>
-</ol>
-
-<h3><a name="Bottom-Half Flavor">Bottom-Half Flavor</a></h3>
-
-<p>
-The softirq-disable (AKA &ldquo;bottom-half&rdquo;,
-hence the &ldquo;_bh&rdquo; abbreviations)
-flavor of RCU, or <i>RCU-bh</i>, was developed by
-Dipankar Sarma to provide a flavor of RCU that could withstand the
-network-based denial-of-service attacks researched by Robert
-Olsson.
-These attacks placed so much networking load on the system
-that some of the CPUs never exited softirq execution,
-which in turn prevented those CPUs from ever executing a context switch,
-which, in the RCU implementation of that time, prevented grace periods
-from ever ending.
-The result was an out-of-memory condition and a system hang.
-
-<p>
-The solution was the creation of RCU-bh, which does
-<tt>local_bh_disable()</tt>
-across its read-side critical sections, and which uses the transition
-from one type of softirq processing to another as a quiescent state
-in addition to context switch, idle, user mode, and offline.
-This means that RCU-bh grace periods can complete even when some of
-the CPUs execute in softirq indefinitely, thus allowing algorithms
-based on RCU-bh to withstand network-based denial-of-service attacks.
-
-<p>
-Because
-<tt>rcu_read_lock_bh()</tt> and <tt>rcu_read_unlock_bh()</tt>
-disable and re-enable softirq handlers, any attempt to start a softirq
-handler during the
-RCU-bh read-side critical section will be deferred.
-In this case, <tt>rcu_read_unlock_bh()</tt>
-will invoke softirq processing, which can take considerable time.
-One can of course argue that this softirq overhead should be associated
-with the code following the RCU-bh read-side critical section rather
-than <tt>rcu_read_unlock_bh()</tt>, but the fact
-is that most profiling tools cannot be expected to make this sort
-of fine distinction.
-For example, suppose that a three-millisecond-long RCU-bh read-side
-critical section executes during a time of heavy networking load.
-There will very likely be an attempt to invoke at least one softirq
-handler during that three milliseconds, but any such invocation will
-be delayed until the time of the <tt>rcu_read_unlock_bh()</tt>.
-This can of course make it appear at first glance as if
-<tt>rcu_read_unlock_bh()</tt> was executing very slowly.
-
-<p>
-The
-<a href="https://lwn.net/Articles/609973/#RCU Per-Flavor API Table">RCU-bh API</a>
-includes
-<tt>rcu_read_lock_bh()</tt>,
-<tt>rcu_read_unlock_bh()</tt>,
-<tt>rcu_dereference_bh()</tt>,
-<tt>rcu_dereference_bh_check()</tt>,
-<tt>synchronize_rcu_bh()</tt>,
-<tt>synchronize_rcu_bh_expedited()</tt>,
-<tt>call_rcu_bh()</tt>,
-<tt>rcu_barrier_bh()</tt>, and
-<tt>rcu_read_lock_bh_held()</tt>.
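-
-<p>
-A minimal RCU-bh reader might therefore look as follows, where
-<tt>gp</tt> and <tt>do_something_with()</tt> are hypothetical; an
-updater would pair this with <tt>rcu_assign_pointer()</tt> followed by
-<tt>synchronize_rcu_bh()</tt> or <tt>call_rcu_bh()</tt>:
-
-<blockquote>
-<pre>
-/* RCU-bh reader; softirq handlers are deferred throughout. */
-rcu_read_lock_bh();
-p = rcu_dereference_bh(gp);
-if (p)
-        do_something_with(p);
-rcu_read_unlock_bh();
-</pre>
-</blockquote>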
-
-<h3><a name="Sched Flavor">Sched Flavor</a></h3>
-
-<p>
-Before preemptible RCU, waiting for an RCU grace period had the
-side effect of also waiting for all pre-existing interrupt
-and NMI handlers.
-However, there are legitimate preemptible-RCU implementations that
-do not have this property, given that any point in the code outside
-of an RCU read-side critical section can be a quiescent state.
-Therefore, <i>RCU-sched</i> was created, which follows &ldquo;classic&rdquo;
-RCU in that an RCU-sched grace period waits for pre-existing
-interrupt and NMI handlers.
-In kernels built with <tt>CONFIG_PREEMPT=n</tt>, the RCU and RCU-sched
-APIs have identical implementations, while kernels built with
-<tt>CONFIG_PREEMPT=y</tt> provide a separate implementation for each.
-
-<p>
-Note well that in <tt>CONFIG_PREEMPT=y</tt> kernels,
-<tt>rcu_read_lock_sched()</tt> and <tt>rcu_read_unlock_sched()</tt>
-disable and re-enable preemption, respectively.
-This means that if there was a preemption attempt during the
-RCU-sched read-side critical section, <tt>rcu_read_unlock_sched()</tt>
-will enter the scheduler, with all the latency and overhead entailed.
-Just as with <tt>rcu_read_unlock_bh()</tt>, this can make it look
-as if <tt>rcu_read_unlock_sched()</tt> was executing very slowly.
-However, the highest-priority task won't be preempted, so that task
-will enjoy low-overhead <tt>rcu_read_unlock_sched()</tt> invocations.
-
-<p>
-The
-<a href="https://lwn.net/Articles/609973/#RCU Per-Flavor API Table">RCU-sched API</a>
-includes
-<tt>rcu_read_lock_sched()</tt>,
-<tt>rcu_read_unlock_sched()</tt>,
-<tt>rcu_read_lock_sched_notrace()</tt>,
-<tt>rcu_read_unlock_sched_notrace()</tt>,
-<tt>rcu_dereference_sched()</tt>,
-<tt>rcu_dereference_sched_check()</tt>,
-<tt>synchronize_sched()</tt>,
-<tt>synchronize_rcu_sched_expedited()</tt>,
-<tt>call_rcu_sched()</tt>,
-<tt>rcu_barrier_sched()</tt>, and
-<tt>rcu_read_lock_sched_held()</tt>.
-However, anything that disables preemption also marks an RCU-sched
-read-side critical section, including
-<tt>preempt_disable()</tt> and <tt>preempt_enable()</tt>,
-<tt>local_irq_save()</tt> and <tt>local_irq_restore()</tt>,
-and so on.
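-
-<p>
-For example, the following fragment sketches an implicit RCU-sched
-reader and the corresponding update-side wait; the <tt>gp</tt> pointer,
-the <tt>gp_lock</tt> mutex, and <tt>do_something_with()</tt> are all
-hypothetical:
-
-<blockquote>
-<pre>
-/* Reader: any preemption-disabled region is an RCU-sched reader. */
-preempt_disable();
-p = rcu_dereference_sched(gp);
-if (p)
-        do_something_with(p);
-preempt_enable();
-
-/* Updater: synchronize_sched() waits for such regions as well as for
-   pre-existing interrupt and NMI handlers. */
-old = rcu_dereference_protected(gp, lockdep_is_held(&amp;gp_lock));
-rcu_assign_pointer(gp, NULL);
-synchronize_sched();
-kfree(old);
-</pre>
-</blockquote>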
-
-<h3><a name="Sleepable RCU">Sleepable RCU</a></h3>
-
-<p>
-For well over a decade, someone saying &ldquo;I need to block within
-an RCU read-side critical section&rdquo; was a reliable indication
-that this someone did not understand RCU.
-After all, if you are always blocking in an RCU read-side critical
-section, you can probably afford to use a higher-overhead synchronization
-mechanism.
-However, that changed with the advent of the Linux kernel's notifiers,
-whose RCU read-side critical
-sections almost never sleep, but sometimes need to.
-This resulted in the introduction of
-<a href="https://lwn.net/Articles/202847/">sleepable RCU</a>,
-or <i>SRCU</i>.
-
-<p>
-SRCU allows different domains to be defined, with each such domain
-defined by an instance of an <tt>srcu_struct</tt> structure.
-A pointer to this structure must be passed in to each SRCU function,
-for example, <tt>synchronize_srcu(&amp;ss)</tt>, where
-<tt>ss</tt> is the <tt>srcu_struct</tt> structure.
-The key benefit of these domains is that a slow SRCU reader in one
-domain does not delay an SRCU grace period in some other domain.
-That said, one consequence of these domains is that read-side code
-must pass a &ldquo;cookie&rdquo; from <tt>srcu_read_lock()</tt>
-to <tt>srcu_read_unlock()</tt>, for example, as follows:
-
-<blockquote>
-<pre>
- 1 int idx;
- 2
- 3 idx = srcu_read_lock(&amp;ss);
- 4 do_something();
- 5 srcu_read_unlock(&amp;ss, idx);
-</pre>
-</blockquote>
-
-<p>
-As noted above, it is legal to block within SRCU read-side critical sections;
-however, with great power comes great responsibility.
-If you block forever in one of a given domain's SRCU read-side critical
-sections, then that domain's grace periods will also be blocked forever.
-Of course, one good way to block forever is to deadlock, which can
-happen if any operation in a given domain's SRCU read-side critical
-section can block waiting, either directly or indirectly, for that domain's
-grace period to elapse.
-For example, this results in a self-deadlock:
-
-<blockquote>
-<pre>
- 1 int idx;
- 2
- 3 idx = srcu_read_lock(&amp;ss);
- 4 do_something();
- 5 synchronize_srcu(&amp;ss);
- 6 srcu_read_unlock(&amp;ss, idx);
-</pre>
-</blockquote>
-
-<p>
-However, if line&nbsp;5 acquired a mutex that was held across
-a <tt>synchronize_srcu()</tt> for domain <tt>ss</tt>,
-deadlock would still be possible.
-Furthermore, if line&nbsp;5 acquired a mutex that was held across
-a <tt>synchronize_srcu()</tt> for some other domain <tt>ss1</tt>,
-and if an <tt>ss1</tt>-domain SRCU read-side critical section
-acquired another mutex that was held across an <tt>ss</tt>-domain
-<tt>synchronize_srcu()</tt>,
-deadlock would again be possible.
-Such a deadlock cycle could extend across an arbitrarily large number
-of different SRCU domains.
-Again, with great power comes great responsibility.
-
-<p>
-Unlike the other RCU flavors, SRCU read-side critical sections can
-run on idle and even offline CPUs.
-This ability requires that <tt>srcu_read_lock()</tt> and
-<tt>srcu_read_unlock()</tt> contain memory barriers, which means
-that SRCU readers will run a bit slower than would RCU readers.
-It also motivates the <tt>smp_mb__after_srcu_read_unlock()</tt>
-API, which, in combination with <tt>srcu_read_unlock()</tt>,
-guarantees a full memory barrier.
-
-<p>
-The
-<a href="https://lwn.net/Articles/609973/#RCU Per-Flavor API Table">SRCU API</a>
-includes
-<tt>srcu_read_lock()</tt>,
-<tt>srcu_read_unlock()</tt>,
-<tt>srcu_dereference()</tt>,
-<tt>srcu_dereference_check()</tt>,
-<tt>synchronize_srcu()</tt>,
-<tt>synchronize_srcu_expedited()</tt>,
-<tt>call_srcu()</tt>,
-<tt>srcu_barrier()</tt>, and
-<tt>srcu_read_lock_held()</tt>.
-It also includes
-<tt>DEFINE_SRCU()</tt>,
-<tt>DEFINE_STATIC_SRCU()</tt>, and
-<tt>init_srcu_struct()</tt>
-APIs for defining and initializing <tt>srcu_struct</tt> structures.
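-
-<p>
-Putting a few of these together, a minimal SRCU usage sketch might look
-as follows; the <tt>foo</tt> structure and both function names are
-hypothetical, and update-side mutual exclusion is omitted:
-
-<blockquote>
-<pre>
-struct foo { int a; };
-
-DEFINE_STATIC_SRCU(foo_srcu);
-static struct foo __rcu *foo_gp;
-
-int foo_read_a(void)
-{
-        struct foo *p;
-        int idx, a = -1;
-
-        idx = srcu_read_lock(&amp;foo_srcu);
-        p = srcu_dereference(foo_gp, &amp;foo_srcu);
-        if (p)
-                a = p-&gt;a;  /* Sleeping here is legal, within reason. */
-        srcu_read_unlock(&amp;foo_srcu, idx);
-        return a;
-}
-
-void foo_replace(struct foo *newp, struct foo *oldp)
-{
-        rcu_assign_pointer(foo_gp, newp);
-        synchronize_srcu(&amp;foo_srcu);  /* Waits only for foo_srcu readers. */
-        kfree(oldp);
-}
-</pre>
-</blockquote>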
-
-<h3><a name="Tasks RCU">Tasks RCU</a></h3>
-
-<p>
-Some forms of tracing use &ldquo;trampolines&rdquo; to handle the
-binary rewriting required to install different types of probes.
-It would be good to be able to free old trampolines, which sounds
-like a job for some form of RCU.
-However, because it is necessary to be able to install a trace
-anywhere in the code, it is not possible to use read-side markers
-such as <tt>rcu_read_lock()</tt> and <tt>rcu_read_unlock()</tt>.
-In addition, it does not work to have these markers in the trampoline
-itself, because there would need to be instructions following
-<tt>rcu_read_unlock()</tt>.
-Although <tt>synchronize_rcu()</tt> would guarantee that execution
-reached the <tt>rcu_read_unlock()</tt>, it would not be able to
-guarantee that execution had completely left the trampoline.
-
-<p>
-The solution, in the form of
-<a href="https://lwn.net/Articles/607117/"><i>Tasks RCU</i></a>,
-is to have implicit
-read-side critical sections that are delimited by voluntary context
-switches, that is, calls to <tt>schedule()</tt>,
-<tt>cond_resched_rcu_qs()</tt>, and
-<tt>synchronize_rcu_tasks()</tt>.
-In addition, transitions to and from userspace execution also delimit
-tasks-RCU read-side critical sections.
-
-<p>
-The tasks-RCU API is quite compact, consisting only of
-<tt>call_rcu_tasks()</tt>,
-<tt>synchronize_rcu_tasks()</tt>, and
-<tt>rcu_barrier_tasks()</tt>.
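-
-<p>
-For instance, a tracing subsystem might retire an old trampoline along
-the lines of the following sketch, in which the <tt>trampoline</tt>
-structure and the <tt>unlink_trampoline()</tt> and
-<tt>free_trampoline_insns()</tt> helpers are hypothetical:
-
-<blockquote>
-<pre>
-struct trampoline {
-        void *insns;            /* The trampoline's executable instructions. */
-        struct rcu_head rh;
-};
-
-static void trampoline_free_cb(struct rcu_head *rhp)
-{
-        struct trampoline *tp = container_of(rhp, struct trampoline, rh);
-
-        free_trampoline_insns(tp-&gt;insns);
-        kfree(tp);
-}
-
-void retire_trampoline(struct trampoline *tp)
-{
-        unlink_trampoline(tp);  /* Ensure no new tasks can enter it. */
-        call_rcu_tasks(&amp;tp-&gt;rh, trampoline_free_cb);
-}
-</pre>
-</blockquote>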
-
-<h2><a name="Possible Future Changes">Possible Future Changes</a></h2>
-
-<p>
-One of the tricks that RCU uses to attain update-side scalability is
-to increase grace-period latency with increasing numbers of CPUs.
-If this becomes a serious problem, it will be necessary to rework the
-grace-period state machine so as to avoid the need for the additional
-latency.
-
-<p>
-Expedited grace periods scan the CPUs, so their latency and overhead
-increase with increasing numbers of CPUs.
-If this becomes a serious problem on large systems, it will be necessary
-to do some redesign to avoid this scalability problem.
-
-<p>
-RCU disables CPU hotplug in a few places, perhaps most notably in the
-expedited grace-period and <tt>rcu_barrier()</tt> operations.
-If there is a strong reason to use expedited grace periods in CPU-hotplug
-notifiers, it will be necessary to avoid disabling CPU hotplug.
-This would introduce some complexity, so there had better be a <i>very</i>
-good reason.
-
-<p>
-The tradeoff between grace-period latency on the one hand and interruptions
-of other CPUs on the other hand may need to be re-examined.
-The desire is of course for zero grace-period latency as well as zero
-interprocessor interrupts undertaken during an expedited grace period
-operation.
-While this ideal is unlikely to be achievable, it is quite possible that
-further improvements can be made.
-
-<p>
-The multiprocessor implementations of RCU use a combining tree that
-groups CPUs so as to reduce lock contention and increase cache locality.
-However, this combining tree does not spread its memory across NUMA
-nodes nor does it align the CPU groups with hardware features such
-as sockets or cores.
-Such spreading and alignment is currently believed to be unnecessary
-because the hotpath read-side primitives do not access the combining
-tree, nor does <tt>call_rcu()</tt> in the common case.
-If you believe that your architecture needs such spreading and alignment,
-then your architecture should also benefit from the
-<tt>rcutree.rcu_fanout_leaf</tt> boot parameter, which can be set
-to the number of CPUs in a socket, NUMA node, or whatever.
-If the number of CPUs is too large, use a fraction of the number of
-CPUs.
-If the number of CPUs is a large prime number, well, that certainly
-is an &ldquo;interesting&rdquo; architectural choice!
-More flexible arrangements might be considered, but only if
-<tt>rcutree.rcu_fanout_leaf</tt> has proven inadequate, and only
-if the inadequacy has been demonstrated by a carefully run and
-realistic system-level workload.
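-
-<p>
-For example, on a hypothetical system built from 16-CPU sockets, one
-might add the following to the kernel boot command line:
-
-<blockquote>
-<pre>
-rcutree.rcu_fanout_leaf=16
-</pre>
-</blockquote>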
-
-<p>
-Please note that arrangements that require RCU to remap CPU numbers will
-require extremely good demonstration of need and full exploration of
-alternatives.
-
-<p>
-There is an embarrassingly large number of flavors of RCU, and this
-number has been increasing over time.
-Perhaps it will be possible to combine some at some future date.
-
-<p>
-RCU's various kthreads are reasonably recent additions.
-It is quite likely that adjustments will be required to more gracefully
-handle extreme loads.
-It might also be necessary to be able to relate CPU utilization by
-RCU's kthreads and softirq handlers to the code that instigated this
-CPU utilization.
-For example, RCU callback overhead might be charged back to the
-originating <tt>call_rcu()</tt> instance, though probably not
-in production kernels.
-
-<h2><a name="Summary">Summary</a></h2>
-
-<p>
-This document has presented more than two decades' worth of RCU
-requirements.
-Given that the requirements keep changing, this will not be the last
-word on this subject, but at least it serves to get an important
-subset of the requirements set forth.
-
-<h2><a name="Acknowledgments">Acknowledgments</a></h2>
-
-I am grateful to Steven Rostedt, Lai Jiangshan, Ingo Molnar,
-Oleg Nesterov, Borislav Petkov, Peter Zijlstra, Boqun Feng, and
-Andy Lutomirski for their help in rendering
-this article human readable, and to Michelle Rankin for her support
-of this effort.
-Other contributions are acknowledged in the Linux kernel's git archive.
-The cartoon is copyright (c) 2013 by Melissa Broussard,
-and is provided
-under the terms of the Creative Commons Attribution-Share Alike 3.0
-United States license.
-
-<p>@@QQAL@@
-
-</body></html>
diff --git a/Documentation/RCU/Design/htmlqqz.sh b/Documentation/RCU/Design/htmlqqz.sh
deleted file mode 100755
index d354f069559b..000000000000
--- a/Documentation/RCU/Design/htmlqqz.sh
+++ /dev/null
@@ -1,108 +0,0 @@
-#!/bin/sh
-#
-# Usage: sh htmlqqz.sh file
-#
-# Extracts and converts quick quizzes in a proto-HTML document file.htmlx.
-# Commands, all of which must be on a line by themselves:
-#
-# "<p>@@QQ@@": Start of a quick quiz.
-# "<p>@@QQA@@": Start of a quick-quiz answer.
-# "<p>@@QQE@@": End of a quick-quiz answer, and thus of the quick quiz.
-# "<p>@@QQAL@@": Place to put quick-quiz answer list.
-#
-# Places the result in file.html.
-#
-# This program is free software; you can redistribute it and/or modify
-# it under the terms of the GNU General Public License as published by
-# the Free Software Foundation; either version 2 of the License, or
-# (at your option) any later version.
-#
-# This program is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
-# GNU General Public License for more details.
-#
-# You should have received a copy of the GNU General Public License
-# along with this program; if not, you can access it online at
-# http://www.gnu.org/licenses/gpl-2.0.html.
-#
-# Copyright (c) 2013 Paul E. McKenney, IBM Corporation.
-
-fn=$1
-if test ! -r $fn.htmlx
-then
- echo "Error: $fn.htmlx unreadable."
- exit 1
-fi
-
-echo "<!-- DO NOT HAND EDIT. -->" > $fn.html
-echo "<!-- Instead, edit $fn.htmlx and run 'sh htmlqqz.sh $fn' -->" >> $fn.html
-awk < $fn.htmlx >> $fn.html '
-
-state == "" && $1 != "<p>@@QQ@@" && $1 != "<p>@@QQAL@@" {
- print $0;
- if ($0 ~ /^<p>@@QQ/)
- print "Bad Quick Quiz command: " NR " (expected <p>@@QQ@@ or <p>@@QQAL@@)." > "/dev/stderr"
- next;
-}
-
-state == "" && $1 == "<p>@@QQ@@" {
- qqn++;
- qqlineno = NR;
- haveqq = 1;
- state = "qq";
- print "<p><a name=\"Quick Quiz " qqn "\"><b>Quick Quiz " qqn "</b>:</a>"
- next;
-}
-
-state == "qq" && $1 != "<p>@@QQA@@" {
- qq[qqn] = qq[qqn] $0 "\n";
- print $0
- if ($0 ~ /^<p>@@QQ/)
- print "Bad Quick Quiz command: " NR ". (expected <p>@@QQA@@)" > "/dev/stderr"
- next;
-}
-
-state == "qq" && $1 == "<p>@@QQA@@" {
- state = "qqa";
- print "<br><a href=\"#qq" qqn "answer\">Answer</a>"
- next;
-}
-
-state == "qqa" && $1 != "<p>@@QQE@@" {
- qqa[qqn] = qqa[qqn] $0 "\n";
- if ($0 ~ /^<p>@@QQ/)
- print "Bad Quick Quiz command: " NR " (expected <p>@@QQE@@)." > "/dev/stderr"
- next;
-}
-
-state == "qqa" && $1 == "<p>@@QQE@@" {
- state = "";
- next;
-}
-
-state == "" && $1 == "<p>@@QQAL@@" {
- haveqq = "";
- print "<h3><a name=\"Answers to Quick Quizzes\">"
- print "Answers to Quick Quizzes</a></h3>"
- print "";
- for (i = 1; i <= qqn; i++) {
- print "<a name=\"qq" i "answer\"></a>"
- print "<p><b>Quick Quiz " i "</b>:"
- print qq[i];
- print "";
- print "</p><p><b>Answer</b>:"
- print qqa[i];
- print "";
- print "</p><p><a href=\"#Quick%20Quiz%20" i "\"><b>Back to Quick Quiz " i "</b>.</a>"
- print "";
- }
- next;
-}
-
-END {
- if (state != "")
- print "Unterminated Quick Quiz: " qqlineno "." > "/dev/stderr"
- else if (haveqq)
- print "Missing \"<p>@@QQAL@@\", no Quick Quiz." > "/dev/stderr"
-}'
diff --git a/Documentation/RCU/RTFP.txt b/Documentation/RCU/RTFP.txt
index 370ca006db7a..9bccf16736f7 100644
--- a/Documentation/RCU/RTFP.txt
+++ b/Documentation/RCU/RTFP.txt
@@ -176,13 +176,13 @@ a history of how Linux changed RCU more than RCU changed Linux
which Mathieu Desnoyers is now maintaining [MathieuDesnoyers2009URCU]
[MathieuDesnoyersPhD]. TINY_RCU [PaulEMcKenney2009BloatWatchRCU] made
its appearance, as did expedited RCU [PaulEMcKenney2009expeditedRCU].
-The problem of resizeable RCU-protected hash tables may now be on a path
+The problem of resizable RCU-protected hash tables may now be on a path
to a solution [JoshTriplett2009RPHash]. A few academic researchers are now
using RCU to solve their parallel problems [HariKannan2009DynamicAnalysisRCU].
2010 produced a simpler preemptible-RCU implementation
based on TREE_RCU [PaulEMcKenney2010SimpleOptRCU], lockdep-RCU
-[PaulEMcKenney2010LockdepRCU], another resizeable RCU-protected hash
+[PaulEMcKenney2010LockdepRCU], another resizable RCU-protected hash
table [HerbertXu2010RCUResizeHash] (this one consuming more memory,
but allowing arbitrary changes in hash function, as required for DoS
avoidance in the networking code), realization of the 2009 RCU-protected
@@ -193,7 +193,7 @@ the RCU API [PaulEMcKenney2010RCUAPI].
[LinusTorvalds2011Linux2:6:38:rc1:NPigginVFS], an RCU-protected red-black
tree using software transactional memory to protect concurrent updates
(strange, but true!) [PhilHoward2011RCUTMRBTree], yet another variant of
-RCU-protected resizeable hash tables [Triplett:2011:RPHash], the 3.0 RCU
+RCU-protected resizable hash tables [Triplett:2011:RPHash], the 3.0 RCU
trainwreck [PaulEMcKenney2011RCU3.0trainwreck], and Neil Brown's "Meet the
Lockers" LWN article [NeilBrown2011MeetTheLockers]. Some academic
work looked at debugging uses of RCU [Seyster:2011:RFA:2075416.2075425].
diff --git a/Documentation/RCU/trace.txt b/Documentation/RCU/trace.txt
index ec6998b1b6d0..00a3a38b375a 100644
--- a/Documentation/RCU/trace.txt
+++ b/Documentation/RCU/trace.txt
@@ -237,17 +237,17 @@ o "ktl" is the low-order 16 bits (in hexadecimal) of the count of
The output of "cat rcu/rcu_preempt/rcuexp" looks as follows:
-s=21872 wd0=0 wd1=0 wd2=0 wd3=5 n=0 enq=0 sc=21872
+s=21872 wd1=0 wd2=0 wd3=5 n=0 enq=0 sc=21872
These fields are as follows:
o "s" is the sequence number, with an odd number indicating that
an expedited grace period is in progress.
-o "wd0", "wd1", "wd2", and "wd3" are the number of times that an
- attempt to start an expedited grace period found that someone
- else had completed an expedited grace period that satisfies the
- attempted request. "Our work is done."
+o "wd1", "wd2", and "wd3" are the number of times that an attempt
+ to start an expedited grace period found that someone else had
+ completed an expedited grace period that satisfies the attempted
+ request. "Our work is done."
o "n" is number of times that a concurrent CPU-hotplug operation
forced a fallback to a normal grace period.
diff --git a/Documentation/RCU/whatisRCU.txt b/Documentation/RCU/whatisRCU.txt
index dc49c6712b17..111770ffa10e 100644
--- a/Documentation/RCU/whatisRCU.txt
+++ b/Documentation/RCU/whatisRCU.txt
@@ -681,22 +681,30 @@ Although RCU can be used in many different ways, a very common use of
RCU is analogous to reader-writer locking. The following unified
diff shows how closely related RCU and reader-writer locking can be.
+ @@ -5,5 +5,5 @@ struct el {
+ int data;
+ /* Other data fields */
+ };
+ -rwlock_t listmutex;
+ +spinlock_t listmutex;
+ struct el head;
+
@@ -13,15 +14,15 @@
struct list_head *lp;
struct el *p;
- - read_lock();
+ - read_lock(&listmutex);
- list_for_each_entry(p, head, lp) {
+ rcu_read_lock();
+ list_for_each_entry_rcu(p, head, lp) {
if (p->key == key) {
*result = p->data;
- - read_unlock();
+ - read_unlock(&listmutex);
+ rcu_read_unlock();
return 1;
}
}
- - read_unlock();
+ - read_unlock(&listmutex);
+ rcu_read_unlock();
return 0;
}
@@ -732,7 +740,7 @@ Or, for those who prefer a side-by-side listing:
5 int data; 5 int data;
6 /* Other data fields */ 6 /* Other data fields */
7 }; 7 };
- 8 spinlock_t listmutex; 8 spinlock_t listmutex;
+ 8 rwlock_t listmutex; 8 spinlock_t listmutex;
9 struct el head; 9 struct el head;
1 int search(long key, int *result) 1 int search(long key, int *result)
@@ -740,15 +748,15 @@ Or, for those who prefer a side-by-side listing:
3 struct list_head *lp; 3 struct list_head *lp;
4 struct el *p; 4 struct el *p;
5 5
- 6 read_lock(); 6 rcu_read_lock();
+ 6 read_lock(&listmutex); 6 rcu_read_lock();
7 list_for_each_entry(p, head, lp) { 7 list_for_each_entry_rcu(p, head, lp) {
8 if (p->key == key) { 8 if (p->key == key) {
9 *result = p->data; 9 *result = p->data;
-10 read_unlock(); 10 rcu_read_unlock();
+10 read_unlock(&listmutex); 10 rcu_read_unlock();
11 return 1; 11 return 1;
12 } 12 }
13 } 13 }
-14 read_unlock(); 14 rcu_read_unlock();
+14 read_unlock(&listmutex); 14 rcu_read_unlock();
15 return 0; 15 return 0;
16 } 16 }
diff --git a/Documentation/accounting/getdelays.c b/Documentation/accounting/getdelays.c
index 7785fb5eb93f..b5ca536e56a8 100644
--- a/Documentation/accounting/getdelays.c
+++ b/Documentation/accounting/getdelays.c
@@ -505,6 +505,8 @@ int main(int argc, char *argv[])
if (!loop)
goto done;
break;
+ case TASKSTATS_TYPE_NULL:
+ break;
default:
fprintf(stderr, "Unknown nested"
" nla_type %d\n",
@@ -512,7 +514,8 @@ int main(int argc, char *argv[])
break;
}
len2 += NLA_ALIGN(na->nla_len);
- na = (struct nlattr *) ((char *) na + len2);
+ na = (struct nlattr *)((char *)na +
+ NLA_ALIGN(na->nla_len));
}
break;
diff --git a/Documentation/acpi/initrd_table_override.txt b/Documentation/acpi/initrd_table_override.txt
index 35c3f5415476..eb651a6aa285 100644
--- a/Documentation/acpi/initrd_table_override.txt
+++ b/Documentation/acpi/initrd_table_override.txt
@@ -1,5 +1,5 @@
-Overriding ACPI tables via initrd
-=================================
+Upgrading ACPI tables via initrd
+================================
1) Introduction (What is this about)
2) What is this for
@@ -9,12 +9,14 @@ Overriding ACPI tables via initrd
1) What is this about
---------------------
-If the ACPI_INITRD_TABLE_OVERRIDE compile option is true, it is possible to
-override nearly any ACPI table provided by the BIOS with an instrumented,
-modified one.
+If the ACPI_TABLE_UPGRADE compile option is true, it is possible to
+upgrade the ACPI execution environment that is defined by the ACPI tables
+via upgrading the ACPI tables provided by the BIOS with an instrumented,
+modified, more recent version one, or installing brand new ACPI tables.
-For a full list of ACPI tables that can be overridden, take a look at
-the char *table_sigs[MAX_ACPI_SIGNATURE]; definition in drivers/acpi/osl.c
+For a full list of ACPI tables that can be upgraded/installed, take a look
+at the char *table_sigs[MAX_ACPI_SIGNATURE]; definition in
+drivers/acpi/tables.c.
All ACPI tables iasl (Intel's ACPI compiler and disassembler) knows should
be overridable, except:
- ACPI_SIG_RSDP (has a signature of 6 bytes)
@@ -25,17 +27,20 @@ Both could get implemented as well.
2) What is this for
-------------------
-Please keep in mind that this is a debug option.
-ACPI tables should not get overridden for productive use.
-If BIOS ACPI tables are overridden the kernel will get tainted with the
-TAINT_OVERRIDDEN_ACPI_TABLE flag.
-Complain to your platform/BIOS vendor if you find a bug which is so sever
-that a workaround is not accepted in the Linux kernel.
+Complain to your platform/BIOS vendor if you find a bug which is so severe
+that a workaround is not accepted in the Linux kernel. And this facility
+allows you to upgrade the buggy tables before your platform/BIOS vendor
+releases an upgraded BIOS binary.
-Still, it can and should be enabled in any kernel, because:
- - There is no functional change with not instrumented initrds
- - It provides a powerful feature to easily debug and test ACPI BIOS table
- compatibility with the Linux kernel.
+This facility can be used by platform/BIOS vendors to provide a Linux
+compatible environment without modifying the underlying platform firmware.
+
+This facility also provides a powerful feature to easily debug and test
+ACPI BIOS table compatibility with the Linux kernel by modifying old
+platform provided ACPI tables or inserting new ACPI tables.
+
+It can and should be enabled in any kernel because there is no functional
+change with not instrumented initrds.
3) How does it work
@@ -50,23 +55,31 @@ iasl -d *.dat
# For example add this statement into a _PRT (PCI Routing Table) function
# of the DSDT:
Store("HELLO WORLD", debug)
+# And increase the OEM Revision. For example, before modification:
+DefinitionBlock ("DSDT.aml", "DSDT", 2, "INTEL ", "TEMPLATE", 0x00000000)
+# After modification:
+DefinitionBlock ("DSDT.aml", "DSDT", 2, "INTEL ", "TEMPLATE", 0x00000001)
iasl -sa dsdt.dsl
# Add the raw ACPI tables to an uncompressed cpio archive.
-# They must be put into a /kernel/firmware/acpi directory inside the
-# cpio archive.
-# The uncompressed cpio archive must be the first.
-# Other, typically compressed cpio archives, must be
-# concatenated on top of the uncompressed one.
+# They must be put into a /kernel/firmware/acpi directory inside the cpio
+# archive. Note that if the table put here matches a platform table
+# (similar Table Signature, and similar OEMID, and similar OEM Table ID)
+# with a more recent OEM Revision, the platform table will be upgraded by
+# this table. If the table put here doesn't match a platform table
+# (dissimilar Table Signature, or dissimilar OEMID, or dissimilar OEM Table
+# ID), this table will be appended.
mkdir -p kernel/firmware/acpi
cp dsdt.aml kernel/firmware/acpi
-# A maximum of: #define ACPI_OVERRIDE_TABLES 10
-# tables are currently allowed (see osl.c):
+# A maximum of "NR_ACPI_INITRD_TABLES (64)" tables are currently allowed
+# (see osl.c):
iasl -sa facp.dsl
iasl -sa ssdt1.dsl
cp facp.aml kernel/firmware/acpi
cp ssdt1.aml kernel/firmware/acpi
-# Create the uncompressed cpio archive and concatenate the original initrd
-# on top:
+# The uncompressed cpio archive must be the first. Other, typically
+# compressed cpio archives, must be concatenated on top of the uncompressed
+# one. Following command creates the uncompressed cpio archive and
+# concatenates the original initrd on top:
find kernel | cpio -H newc --create > /boot/instrumented_initrd
cat /boot/initrd >>/boot/instrumented_initrd
# reboot with increased acpi debug level, e.g. boot params:
diff --git a/Documentation/adding-syscalls.txt b/Documentation/adding-syscalls.txt
index cc2d4ac4f404..bbb31e091b28 100644
--- a/Documentation/adding-syscalls.txt
+++ b/Documentation/adding-syscalls.txt
@@ -136,7 +136,7 @@ an fxyzzy(3) operation for free:
- xyzzyat(fd, "", ..., AT_EMPTY_PATH) is equivalent to fxyzzy(fd, ...)
(For more details on the rationale of the *at() calls, see the openat(2) man
-page; for an example of AT_EMPTY_PATH, see the statat(2) man page.)
+page; for an example of AT_EMPTY_PATH, see the fstatat(2) man page.)
If your new xyzzy(2) system call involves a parameter describing an offset
within a file, make its type loff_t so that 64-bit offsets can be supported
diff --git a/Documentation/arm/SA1100/Assabet b/Documentation/arm/SA1100/Assabet
index 08b885d35674..e08a6739e72c 100644
--- a/Documentation/arm/SA1100/Assabet
+++ b/Documentation/arm/SA1100/Assabet
@@ -214,7 +214,7 @@ RedBoot scripting
-----------------
All the commands above aren't so useful if they have to be typed in every
-time the Assabet is rebooted. Therefore it's possible to automatize the boot
+time the Assabet is rebooted. Therefore it's possible to automate the boot
process using RedBoot's scripting capability.
For example, I use this to boot Linux with both the kernel and the ramdisk
diff --git a/Documentation/arm64/booting.txt b/Documentation/arm64/booting.txt
index 56d6d8b796db..8d0df62c3fe0 100644
--- a/Documentation/arm64/booting.txt
+++ b/Documentation/arm64/booting.txt
@@ -132,6 +132,10 @@ NOTE: versions prior to v4.6 cannot make use of memory below the
physical offset of the Image so it is recommended that the Image be
placed as close as possible to the start of system RAM.
+If an initrd/initramfs is passed to the kernel at boot, it must reside
+entirely within a 1 GB aligned physical memory window of up to 32 GB in
+size that fully covers the kernel Image as well.
+
Any memory described to the kernel (even that below the start of the
image) which is not marked as reserved from the kernel (e.g., with a
memreserve region in the device tree) will be considered as available to
diff --git a/Documentation/arm64/silicon-errata.txt b/Documentation/arm64/silicon-errata.txt
index ba4b6acfc545..c6938e50e71f 100644
--- a/Documentation/arm64/silicon-errata.txt
+++ b/Documentation/arm64/silicon-errata.txt
@@ -53,7 +53,9 @@ stable kernels.
| ARM | Cortex-A57 | #832075 | ARM64_ERRATUM_832075 |
| ARM | Cortex-A57 | #852523 | N/A |
| ARM | Cortex-A57 | #834220 | ARM64_ERRATUM_834220 |
+| ARM | MMU-500 | #841119,#826419 | N/A |
| | | | |
| Cavium | ThunderX ITS | #22375, #24313 | CAVIUM_ERRATUM_22375 |
| Cavium | ThunderX GICv3 | #23154 | CAVIUM_ERRATUM_23154 |
| Cavium | ThunderX Core | #27456 | CAVIUM_ERRATUM_27456 |
+| Cavium | ThunderX SMMUv2 | #27704 | N/A |
diff --git a/Documentation/block/00-INDEX b/Documentation/block/00-INDEX
index e840b47613f7..e55103ace382 100644
--- a/Documentation/block/00-INDEX
+++ b/Documentation/block/00-INDEX
@@ -2,6 +2,8 @@
- This file
biodoc.txt
- Notes on the Generic Block Layer Rewrite in Linux 2.5
+biovecs.txt
+ - Immutable biovecs and biovec iterators
capability.txt
- Generic Block Device Capability (/sys/block/<device>/capability)
cfq-iosched.txt
@@ -14,6 +16,8 @@ deadline-iosched.txt
- Deadline IO scheduler tunables
ioprio.txt
- Block io priorities (in CFQ scheduler)
+pr.txt
+ - Block layer support for Persistent Reservations
null_blk.txt
- Null block for block-layer benchmarking.
queue-sysfs.txt
diff --git a/Documentation/block/queue-sysfs.txt b/Documentation/block/queue-sysfs.txt
index e5d914845be6..dce25d848d92 100644
--- a/Documentation/block/queue-sysfs.txt
+++ b/Documentation/block/queue-sysfs.txt
@@ -141,6 +141,15 @@ control of this block device to that new IO scheduler. Note that writing
an IO scheduler name to this file will attempt to load that IO scheduler
module, if it isn't already present in the system.
+write_cache (RW)
+----------------
+When read, this file will display whether the device has write back
+caching enabled or not. It will return "write back" for the former
+case, and "write through" for the latter. Writing to this file can
+change the kernel's view of the device, but it doesn't alter the
+device state. This means that it might not be safe to toggle the
+setting from "write back" to "write through", since that will also
+eliminate cache flushes issued by the kernel.
Jens Axboe <jens.axboe@oracle.com>, February 2009
diff --git a/Documentation/block/writeback_cache_control.txt b/Documentation/block/writeback_cache_control.txt
index 83407d36630a..59e0516cbf6b 100644
--- a/Documentation/block/writeback_cache_control.txt
+++ b/Documentation/block/writeback_cache_control.txt
@@ -71,7 +71,7 @@ requests that have a payload. For devices with volatile write caches the
driver needs to tell the block layer that it supports flushing caches by
doing:
- blk_queue_flush(sdkp->disk->queue, REQ_FLUSH);
+ blk_queue_write_cache(sdkp->disk->queue, true, false);
and handle empty REQ_FLUSH requests in its prep_fn/request_fn. Note that
REQ_FLUSH requests with a payload are automatically turned into a sequence
@@ -79,7 +79,7 @@ of an empty REQ_FLUSH request followed by the actual write by the block
layer. For devices that also support the FUA bit the block layer needs
to be told to pass through the REQ_FUA bit using:
- blk_queue_flush(sdkp->disk->queue, REQ_FLUSH | REQ_FUA);
+ blk_queue_write_cache(sdkp->disk->queue, true, true);
and the driver must handle write requests that have the REQ_FUA bit set
in prep_fn/request_fn. If the FUA bit is not natively supported the block
diff --git a/Documentation/blockdev/zram.txt b/Documentation/blockdev/zram.txt
index 5bda5031c83d..13100fb3c26d 100644
--- a/Documentation/blockdev/zram.txt
+++ b/Documentation/blockdev/zram.txt
@@ -59,27 +59,16 @@ num_devices parameter is optional and tells zram how many devices should be
pre-created. Default: 1.
2) Set max number of compression streams
- Compression backend may use up to max_comp_streams compression streams,
- thus allowing up to max_comp_streams concurrent compression operations.
- By default, compression backend uses single compression stream.
-
- Examples:
- #show max compression streams number
+ Regardless of the value passed to this attribute, ZRAM will always
+ allocate multiple compression streams - one per online CPU - thus
+ allowing several concurrent compression operations. The number of
+ allocated compression streams goes down when some of the CPUs
+ become offline. There is no single-compression-stream mode anymore,
+ unless you are running a UP system or have only 1 CPU online.
+
+ To find out how many streams are currently available:
cat /sys/block/zram0/max_comp_streams
- #set max compression streams number to 3
- echo 3 > /sys/block/zram0/max_comp_streams
-
-Note:
-In order to enable compression backend's multi stream support max_comp_streams
-must be initially set to desired concurrency level before ZRAM device
-initialisation. Once the device initialised as a single stream compression
-backend (max_comp_streams equals to 1), you will see error if you try to change
-the value of max_comp_streams because single stream compression backend
-implemented as a special case by lock overhead issue and does not support
-dynamic max_comp_streams. Only multi stream backend supports dynamic
-max_comp_streams adjustment.
-
3) Select compression algorithm
Using comp_algorithm device attribute one can see available and
currently selected (shown in square brackets) compression algorithms,
@@ -183,6 +172,7 @@ mem_limit RW the maximum amount of memory ZRAM can use to store
pages_compacted RO the number of pages freed during compaction
(available only via zram<id>/mm_stat node)
compact WO trigger memory compaction
+debug_stat RO this file is used for zram debugging purposes
WARNING
=======
diff --git a/Documentation/cgroup-v1/memory.txt b/Documentation/cgroup-v1/memory.txt
index ff71e16cc752..b14abf217239 100644
--- a/Documentation/cgroup-v1/memory.txt
+++ b/Documentation/cgroup-v1/memory.txt
@@ -280,17 +280,9 @@ the amount of kernel memory used by the system. Kernel memory is fundamentally
different than user memory, since it can't be swapped out, which makes it
possible to DoS the system by consuming too much of this precious resource.
-Kernel memory won't be accounted at all until limit on a group is set. This
-allows for existing setups to continue working without disruption. The limit
-cannot be set if the cgroup have children, or if there are already tasks in the
-cgroup. Attempting to set the limit under those conditions will return -EBUSY.
-When use_hierarchy == 1 and a group is accounted, its children will
-automatically be accounted regardless of their limit value.
-
-After a group is first limited, it will be kept being accounted until it
-is removed. The memory limitation itself, can of course be removed by writing
--1 to memory.kmem.limit_in_bytes. In this case, kmem will be accounted, but not
-limited.
+Kernel memory accounting is enabled for all memory cgroups by default. But
+it can be disabled system-wide by passing cgroup.memory=nokmem to the kernel
+at boot time. In this case, kernel memory will not be accounted at all.
Kernel memory limits are not imposed for the root cgroup. Usage for the root
cgroup may or may not be accounted. The memory used is accumulated into
diff --git a/Documentation/connector/connector.txt b/Documentation/connector/connector.txt
index f6215f95149b..ab7ca897fab7 100644
--- a/Documentation/connector/connector.txt
+++ b/Documentation/connector/connector.txt
@@ -186,3 +186,11 @@ only cn_test.c test module used it.
Some work in netlink area is still being done, so things can be changed in
2.6.15 timeframe, if it will happen, documentation will be updated for that
kernel.
+
+/*****************************************/
+Code samples
+/*****************************************/
+
+Sample code for a connector test module and user space can be found
+in samples/connector/. To build this code, enable CONFIG_CONNECTOR
+and CONFIG_SAMPLES.
diff --git a/Documentation/device-mapper/cache-policies.txt b/Documentation/device-mapper/cache-policies.txt
index e5062ad18717..d3ca8af21a31 100644
--- a/Documentation/device-mapper/cache-policies.txt
+++ b/Documentation/device-mapper/cache-policies.txt
@@ -11,7 +11,7 @@ Every bio that is mapped by the target is referred to the policy.
The policy can return a simple HIT or MISS or issue a migration.
Currently there's no way for the policy to issue background work,
-e.g. to start writing back dirty blocks that are going to be evicte
+e.g. to start writing back dirty blocks that are going to be evicted
soon.
Because we map bios, rather than requests it's easy for the policy
@@ -48,7 +48,7 @@ with the multiqueue (mq) policy.
The smq policy (vs mq) offers the promise of less memory utilization,
improved performance and increased adaptability in the face of changing
-workloads. SMQ also does not have any cumbersome tuning knobs.
+workloads. smq also does not have any cumbersome tuning knobs.
Users may switch from "mq" to "smq" simply by appropriately reloading a
DM table that is using the cache target. Doing so will cause all of the
@@ -57,47 +57,45 @@ degrade slightly until smq recalculates the origin device's hotspots
that should be cached.
Memory usage:
-The mq policy uses a lot of memory; 88 bytes per cache block on a 64
+The mq policy used a lot of memory; 88 bytes per cache block on a 64
bit machine.
-SMQ uses 28bit indexes to implement it's data structures rather than
+smq uses 28-bit indexes to implement its data structures rather than
pointers. It avoids storing an explicit hit count for each block. It
-has a 'hotspot' queue rather than a pre cache which uses a quarter of
+has a 'hotspot' queue, rather than a pre-cache, which uses a quarter of
the entries (each hotspot block covers a larger area than a single
cache block).
-All these mean smq uses ~25bytes per cache block. Still a lot of
+All this means smq uses ~25 bytes per cache block. Still a lot of
memory, but a substantial improvement nonetheless.
Level balancing:
-MQ places entries in different levels of the multiqueue structures
-based on their hit count (~ln(hit count)). This means the bottom
-levels generally have the most entries, and the top ones have very
-few. Having unbalanced levels like this reduces the efficacy of the
+mq placed entries in different levels of the multiqueue structures
+based on their hit count (~ln(hit count)). This meant the bottom
+levels generally had the most entries, and the top ones had very
+few. Having unbalanced levels like this reduced the efficacy of the
multiqueue.
-SMQ does not maintain a hit count, instead it swaps hit entries with
-the least recently used entry from the level above. The over all
+smq does not maintain a hit count, instead it swaps hit entries with
+the least recently used entry from the level above. The overall
ordering being a side effect of this stochastic process. With this
scheme we can decide how many entries occupy each multiqueue level,
resulting in better promotion/demotion decisions.
Adaptability:
-The MQ policy maintains a hit count for each cache block. For a
+The mq policy maintained a hit count for each cache block. For a
different block to get promoted to the cache its hit count has to
-exceed the lowest currently in the cache. This means it can take a
+exceed the lowest currently in the cache. This meant it could take a
long time for the cache to adapt between varying IO patterns.
-Periodically degrading the hit counts could help with this, but I
-haven't found a nice general solution.
-SMQ doesn't maintain hit counts, so a lot of this problem just goes
+smq doesn't maintain hit counts, so a lot of this problem just goes
away. In addition it tracks performance of the hotspot queue, which
is used to decide which blocks to promote. If the hotspot queue is
performing badly then it starts moving entries more quickly between
levels. This lets it adapt to new IO patterns very quickly.
Performance:
-Testing SMQ shows substantially better performance than MQ.
+Testing smq shows substantially better performance than mq.
cleaner
-------
diff --git a/Documentation/device-mapper/statistics.txt b/Documentation/device-mapper/statistics.txt
index 6f5ef944ca4c..170ac02a1f50 100644
--- a/Documentation/device-mapper/statistics.txt
+++ b/Documentation/device-mapper/statistics.txt
@@ -205,7 +205,7 @@ statistics on them:
dmsetup message vol 0 @stats_create - /100
-Set the auxillary data string to "foo bar baz" (the escape for each
+Set the auxiliary data string to "foo bar baz" (the escape for each
space must also be escaped, otherwise the shell will consume them):
dmsetup message vol 0 @stats_set_aux 0 foo\\ bar\\ baz
diff --git a/Documentation/devices.txt b/Documentation/devices.txt
index 87b4c5e82d39..4035eca87144 100644
--- a/Documentation/devices.txt
+++ b/Documentation/devices.txt
@@ -1,20 +1,17 @@
- LINUX ALLOCATED DEVICES (2.6+ version)
-
- Maintained by Alan Cox <device@lanana.org>
-
- Last revised: 6th April 2009
+ LINUX ALLOCATED DEVICES (4.x+ version)
This list is the Linux Device List, the official registry of allocated
device numbers and /dev directory nodes for the Linux operating
system.
-The latest version of this list is available from
-http://www.lanana.org/docs/device-list/ or
-ftp://ftp.kernel.org/pub/linux/docs/device-list/. This version may be
-newer than the one distributed with the Linux kernel.
-
-The LaTeX version of this document is no longer maintained.
+The LaTeX version of this document is no longer maintained, nor is
+the document that used to reside at lanana.org. This version in the
+mainline Linux kernel is the master document. Updates shall be sent
+as patches to the kernel maintainers (see the SubmittingPatches document).
+Specifically explore the sections titled "CHAR and MISC DRIVERS", and
+"BLOCK LAYER" in the MAINTAINERS file to find the right maintainers
+to involve for character and block devices.
This document is included by reference into the Filesystem Hierarchy
Standard (FHS). The FHS is available from http://www.pathname.com/fhs/.
@@ -23,60 +20,33 @@ Allocations marked (68k/Amiga) apply to Linux/68k on the Amiga
platform only. Allocations marked (68k/Atari) apply to Linux/68k on
the Atari platform only.
-The symbol {2.6} means the allocation is obsolete and scheduled for
-removal once kernel version 2.6 (or equivalent) is released. Some of these
-allocations have already been removed.
-
-This document is in the public domain. The author requests, however,
+This document is in the public domain. The authors request, however,
that semantically altered versions are not distributed without
-permission of the author, assuming the author can be contacted without
+permission of the authors, assuming the authors can be contacted without
an unreasonable effort.
-In particular, please don't sent patches for this list to Linus, at
-least not without contacting me first.
-
-I do not have any information about these devices beyond what appears
-on this list. Any such information requests will be deleted without
-reply.
-
**** DEVICE DRIVERS AUTHORS PLEASE READ THIS ****
-To have a major number allocated, or a minor number in situations
-where that applies (e.g. busmice), please contact me with the
-appropriate device information. Also, if you have additional
-information regarding any of the devices listed below, or if I have
-made a mistake, I would greatly appreciate a note.
-
-I do, however, make a few requests about the nature of your report.
-This is necessary for me to be able to keep this list up to date and
-correct in a timely manner. First of all, *please* send it to the
-correct address... <device@lanana.org>. I receive hundreds of email
-messages a day, so mail sent to other addresses may very well get lost
-in the avalanche. Please put in a descriptive subject, so I can find
-your mail again should I need to. Too many people send me email
-saying just "device number request" in the subject.
-
-Second, please include a description of the device *in the same format
-as this list*. The reason for this is that it is the only way I have
-found to ensure I have all the requisite information to publish your
-device and avoid conflicts.
+Linux now has extensive support for dynamic allocation of device numbering
+and can use sysfs and udev (systemd) to handle the naming needs. There are
+still some exceptions in the serial and boot device area. Before asking
+for a device number make sure you actually need one.
-Third, please don't assume that the distributed version of the list is
-up to date. Due to the number of registrations I have to maintain it
-in "batch mode", so there is likely additional registrations that
-haven't been listed yet.
+To have a major number allocated, or a minor number in situations
+where that applies (e.g. busmice), please submit a patch and send it to
+the authors as indicated above.
-Fourth, remember that Linux now has extensive support for dynamic allocation
-of device numbering and can use sysfs and udev to handle the naming needs.
-There are still some exceptions in the serial and boot device area. Before
-asking for a device number make sure you actually need one.
+Keep the description of the device *in the same format
+as this list*. The reason for this is that it is the only way we have
+found to ensure we have all the requisite information to publish your
+device and avoid conflicts.
-Finally, sometimes I have to play "namespace police." Please don't be
-offended. I often get submissions for /dev names that would be bound
-to cause conflicts down the road. I am trying to avoid getting in a
+Finally, sometimes we have to play "namespace police." Please don't be
+offended. We often get submissions for /dev names that would be bound
+to cause conflicts down the road. We are trying to avoid getting in a
situation where we would have to suffer an incompatible forward
-change. Therefore, please consult with me *before* you make your
+change. Therefore, please consult with us *before* you make your
device names and numbers in any way public, at least to the point
where it would be at all difficult to get them changed.
@@ -3099,9 +3069,9 @@ Your cooperation is appreciated.
129 = /dev/ipath_sma Device used by Subnet Management Agent
130 = /dev/ipath_diag Device used by diagnostics programs
-234-239 UNASSIGNED
-
-240-254 char LOCAL/EXPERIMENTAL USE
+234-254 char RESERVED FOR DYNAMIC ASSIGNMENT
+ Character devices that request a dynamic allocation of major number will
+ take numbers starting from 254 and downward.
240-254 block LOCAL/EXPERIMENTAL USE
Allocated for local/experimental use. For devices not
diff --git a/Documentation/devicetree/bindings/arc/eznps.txt b/Documentation/devicetree/bindings/arc/eznps.txt
new file mode 100644
index 000000000000..1aa50c640678
--- /dev/null
+++ b/Documentation/devicetree/bindings/arc/eznps.txt
@@ -0,0 +1,7 @@
+EZchip NPS Network Processor Platforms Device Tree Bindings
+---------------------------------------------------------------------------
+
+Appliance main board with NPS400 ASIC.
+
+Required root node properties:
+ - compatible = "ezchip,arc-nps";
diff --git a/Documentation/devicetree/bindings/arm/altera/socfpga-eccmgr.txt b/Documentation/devicetree/bindings/arm/altera/socfpga-eccmgr.txt
index 885f93d14ef9..5a6b16070a33 100644
--- a/Documentation/devicetree/bindings/arm/altera/socfpga-eccmgr.txt
+++ b/Documentation/devicetree/bindings/arm/altera/socfpga-eccmgr.txt
@@ -3,6 +3,7 @@ This driver uses the EDAC framework to implement the SOCFPGA ECC Manager.
The ECC Manager counts and corrects single bit errors and counts/handles
double bit errors which are uncorrectable.
+Cyclone5 and Arria5 ECC Manager
Required Properties:
- compatible : Should be "altr,socfpga-ecc-manager"
- #address-cells: must be 1
@@ -47,3 +48,52 @@ Example:
interrupts = <0 178 1>, <0 179 1>;
};
};
+
+Arria10 SoCFPGA ECC Manager
+The Arria10 SoC ECC Manager handles the IRQs for each peripheral
+in a shared register instead of individual IRQs like the Cyclone5
+and Arria5. Therefore the device tree is different as well.
+
+Required Properties:
+- compatible : Should be "altr,socfpga-a10-ecc-manager"
+- altr,sysmgr-syscon : phandle to Arria10 System Manager Block
+ containing the ECC manager registers.
+- #address-cells: must be 1
+- #size-cells: must be 1
+- interrupts : Should be single bit error interrupt, then double bit error
+ interrupt. Note the rising edge type.
+- ranges : standard definition, should translate from local addresses
+
+Subcomponents:
+
+L2 Cache ECC
+Required Properties:
+- compatible : Should be "altr,socfpga-a10-l2-ecc"
+- reg : Address and size for ECC error interrupt clear registers.
+
+On-Chip RAM ECC
+Required Properties:
+- compatible : Should be "altr,socfpga-a10-ocram-ecc"
+- reg : Address and size for ECC block registers.
+
+Example:
+
+ eccmgr: eccmgr@ffd06000 {
+ compatible = "altr,socfpga-a10-ecc-manager";
+ altr,sysmgr-syscon = <&sysmgr>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ interrupts = <0 2 IRQ_TYPE_LEVEL_HIGH>,
+ <0 0 IRQ_TYPE_LEVEL_HIGH>;
+ ranges;
+
+ l2-ecc@ffd06010 {
+ compatible = "altr,socfpga-a10-l2-ecc";
+ reg = <0xffd06010 0x4>;
+ };
+
+ ocram-ecc@ff8c3000 {
+ compatible = "altr,socfpga-a10-ocram-ecc";
+ reg = <0xff8c3000 0x90>;
+ };
+ };
diff --git a/Documentation/devicetree/bindings/arm/amlogic.txt b/Documentation/devicetree/bindings/arm/amlogic.txt
index 8a5122ab19b0..fcc6f6c10803 100644
--- a/Documentation/devicetree/bindings/arm/amlogic.txt
+++ b/Documentation/devicetree/bindings/arm/amlogic.txt
@@ -25,3 +25,6 @@ Board compatible values:
- "tronsmart,vega-s95-pro", "tronsmart,vega-s95" (Meson gxbb)
- "tronsmart,vega-s95-meta", "tronsmart,vega-s95" (Meson gxbb)
- "tronsmart,vega-s95-telos", "tronsmart,vega-s95" (Meson gxbb)
+ - "hardkernel,odroid-c2" (Meson gxbb)
+ - "amlogic,p200" (Meson gxbb)
+ - "amlogic,p201" (Meson gxbb)
diff --git a/Documentation/devicetree/bindings/arm/arm-boards b/Documentation/devicetree/bindings/arm/arm-boards
index 0226bc2cc1f6..ab318a56fca2 100644
--- a/Documentation/devicetree/bindings/arm/arm-boards
+++ b/Documentation/devicetree/bindings/arm/arm-boards
@@ -93,6 +93,14 @@ Required nodes:
a core-module with regs and the compatible strings
"arm,core-module-versatile", "syscon"
+Optional nodes:
+
+- arm,versatile-ib2-syscon : if the Versatile has an IB2 interface
+ board mounted, this has a separate system controller that is
+ defined in this node.
+ Required properties:
+ compatible = "arm,versatile-ib2-syscon", "syscon"
+
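For orientation, a node following the optional binding above might look like
the sketch below; the label, unit address and register size are illustrative
assumptions and are not taken from this patch:

	ib2: syscon@27000000 {
		compatible = "arm,versatile-ib2-syscon", "syscon";
		reg = <0x27000000 0x1000>;
	};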
ARM RealView Boards
-------------------
The RealView boards cover tailored evaluation boards that are used to explore
diff --git a/Documentation/devicetree/bindings/arm/atmel-at91.txt b/Documentation/devicetree/bindings/arm/atmel-at91.txt
index 7fd64ec9ee1d..e1f5ad855f14 100644
--- a/Documentation/devicetree/bindings/arm/atmel-at91.txt
+++ b/Documentation/devicetree/bindings/arm/atmel-at91.txt
@@ -41,6 +41,10 @@ compatible: must be one of:
- "atmel,sama5d43"
- "atmel,sama5d44"
+Chipid required properties:
+- compatible: Should be "atmel,sama5d2-chipid"
+- reg : Should contain registers location and length
+
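A minimal chipid node using these two properties could look as follows; the
unit address and register size are assumptions for illustration, not values
taken from the patch:

	chipid@fc069000 {
		compatible = "atmel,sama5d2-chipid";
		reg = <0xfc069000 0x8>;
	};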
PIT Timer required properties:
- compatible: Should be "atmel,at91sam9260-pit"
- reg: Should contain registers location and length
@@ -147,6 +151,65 @@ Example:
clocks = <&clk32k>;
};
+SHDWC SAMA5D2-Compatible Shutdown Controller
+
+1) shdwc node
+
+required properties:
+- compatible: should be "atmel,sama5d2-shdwc".
+- reg: should contain registers location and length
+- clocks: phandle to input clock.
+- #address-cells: should be one. The cell is the wake-up input index.
+- #size-cells: should be zero.
+
+optional properties:
+
+- debounce-delay-us: minimum wake-up inputs debouncer period in
+ microseconds. It's usually a board-related property.
+- atmel,wakeup-rtc-timer: boolean to enable Real-Time Clock wake-up.
+
+The node contains child nodes for each wake-up input that the platform uses.
+
+2) input nodes
+
+Wake-up input nodes are usually described in the "board" part of the Device
+Tree. Note also that input 0 is linked to the wake-up pin and is frequently
+used.
+
+Required properties:
+- reg: should contain the wake-up input index [0 - 15].
+
+Optional properties:
+- atmel,wakeup-active-high: boolean; if present, the wake-up input described
+ by this child node wakes up the core power supply on a high level.
+ The default is to be active low.
+
+Example:
+
+On the SoC side:
+ shdwc@f8048010 {
+ compatible = "atmel,sama5d2-shdwc";
+ reg = <0xf8048010 0x10>;
+ clocks = <&clk32k>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ atmel,wakeup-rtc-timer;
+ };
+
+On the board side:
+ shdwc@f8048010 {
+ debounce-delay-us = <976>;
+
+ input@0 {
+ reg = <0>;
+ };
+
+ input@1 {
+ reg = <1>;
+ atmel,wakeup-active-high;
+ };
+ };
+
Special Function Registers (SFR)
Special Function Registers (SFR) manage specific aspects of the integrated
@@ -155,7 +218,7 @@ elsewhere.
required properties:
- compatible: Should be "atmel,<chip>-sfr", "syscon".
- <chip> can be "sama5d3" or "sama5d4".
+ <chip> can be "sama5d3", "sama5d4" or "sama5d2".
- reg: Should contain registers location and length
sfr@f0038000 {
diff --git a/Documentation/devicetree/bindings/arm/cci.txt b/Documentation/devicetree/bindings/arm/cci.txt
index a1a5a7ecc2fb..0f2153e8fa7e 100644
--- a/Documentation/devicetree/bindings/arm/cci.txt
+++ b/Documentation/devicetree/bindings/arm/cci.txt
@@ -100,7 +100,7 @@ specific to ARM.
"arm,cci-400-pmu,r0"
"arm,cci-400-pmu,r1"
"arm,cci-400-pmu" - DEPRECATED, permitted only where OS has
- secure acces to CCI registers
+ secure access to CCI registers
"arm,cci-500-pmu,r0"
"arm,cci-550-pmu,r0"
- reg:
diff --git a/Documentation/devicetree/bindings/arm/coresight.txt b/Documentation/devicetree/bindings/arm/coresight.txt
index 62938eb9697f..93147c0c8a0e 100644
--- a/Documentation/devicetree/bindings/arm/coresight.txt
+++ b/Documentation/devicetree/bindings/arm/coresight.txt
@@ -19,6 +19,7 @@ its hardware characteristcs.
- "arm,coresight-etm3x", "arm,primecell";
- "arm,coresight-etm4x", "arm,primecell";
- "qcom,coresight-replicator1x", "arm,primecell";
+ - "arm,coresight-stm", "arm,primecell"; [1]
* reg: physical base address and length of the register
set(s) of the component.
@@ -36,6 +37,14 @@ its hardware characteristcs.
layout using the generic DT graph presentation found in
"bindings/graph.txt".
+* Additional required properties for System Trace Macrocells (STM):
+ * reg: along with the physical base address and length of the register
+ set as described above, another entry is required to describe the
+ mapping of the extended stimulus port area.
+
+ * reg-names: the only acceptable values are "stm-base" and
+ "stm-stimulus-base", each corresponding to the areas defined in "reg".
+
* Required properties for devices that don't show up on the AMBA bus, such as
non-configurable replicators:
@@ -202,3 +211,22 @@ Example:
};
};
};
+
+4. STM
+ stm@20100000 {
+ compatible = "arm,coresight-stm", "arm,primecell";
+ reg = <0 0x20100000 0 0x1000>,
+ <0 0x28000000 0 0x180000>;
+ reg-names = "stm-base", "stm-stimulus-base";
+
+ clocks = <&soc_smc50mhz>;
+ clock-names = "apb_pclk";
+ port {
+ stm_out_port: endpoint {
+ remote-endpoint = <&main_funnel_in_port2>;
+ };
+ };
+ };
+
+[1]. There are currently two versions of STM: STM32 and STM500. Both
+have the same HW interface and as such don't need an explicit binding name.
diff --git a/Documentation/devicetree/bindings/arm/fsl.txt b/Documentation/devicetree/bindings/arm/fsl.txt
index 752a685d926f..dbbc0952021c 100644
--- a/Documentation/devicetree/bindings/arm/fsl.txt
+++ b/Documentation/devicetree/bindings/arm/fsl.txt
@@ -135,6 +135,10 @@ LS1043A ARMv8 based RDB Board
Required root node properties:
- compatible = "fsl,ls1043a-rdb", "fsl,ls1043a";
+LS1043A ARMv8 based QDS Board
+Required root node properties:
+ - compatible = "fsl,ls1043a-qds", "fsl,ls1043a";
+
LS2080A ARMv8 based Simulator model
Required root node properties:
- compatible = "fsl,ls2080a-simu", "fsl,ls2080a";
diff --git a/Documentation/devicetree/bindings/arm/hisilicon/hisilicon.txt b/Documentation/devicetree/bindings/arm/hisilicon/hisilicon.txt
index e3ccab114006..83fe816ae050 100644
--- a/Documentation/devicetree/bindings/arm/hisilicon/hisilicon.txt
+++ b/Documentation/devicetree/bindings/arm/hisilicon/hisilicon.txt
@@ -1,29 +1,33 @@
Hisilicon Platforms Device Tree Bindings
----------------------------------------------------
-Hi6220 SoC
-Required root node properties:
- - compatible = "hisilicon,hi6220";
-
Hi4511 Board
Required root node properties:
- compatible = "hisilicon,hi3620-hi4511";
-HiP04 D01 Board
+Hi6220 SoC
Required root node properties:
- - compatible = "hisilicon,hip04-d01";
+ - compatible = "hisilicon,hi6220";
+
+HiKey Board
+Required root node properties:
+ - compatible = "hisilicon,hi6220-hikey", "hisilicon,hi6220";
HiP01 ca9x2 Board
Required root node properties:
- compatible = "hisilicon,hip01-ca9x2";
-HiKey Board
+HiP04 D01 Board
Required root node properties:
- - compatible = "hisilicon,hi6220-hikey", "hisilicon,hi6220";
+ - compatible = "hisilicon,hip04-d01";
HiP05 D02 Board
Required root node properties:
- compatible = "hisilicon,hip05-d02";
+HiP06 D03 Board
+Required root node properties:
+ - compatible = "hisilicon,hip06-d03";
+
Hisilicon system controller
Required properties:
diff --git a/Documentation/devicetree/bindings/arm/l2c2x0.txt b/Documentation/devicetree/bindings/arm/l2c2x0.txt
index fe0398c5c77b..c453ab5553cd 100644
--- a/Documentation/devicetree/bindings/arm/l2c2x0.txt
+++ b/Documentation/devicetree/bindings/arm/l2c2x0.txt
@@ -84,6 +84,12 @@ Optional properties:
- prefetch-instr : Instruction prefetch. Value: <0> (forcibly disable),
<1> (forcibly enable), property absent (retain settings set by
firmware)
+- arm,dynamic-clock-gating : L2 dynamic clock gating. Value: <0> (forcibly
+ disable), <1> (forcibly enable), property absent (OS specific behavior,
+ preferably retain firmware settings)
+- arm,standby-mode: L2 standby mode enable. Value <0> (forcibly disable),
+ <1> (forcibly enable), property absent (OS specific behavior,
+ preferably retain firmware settings)
Example:
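The file's existing example is elided by this hunk; a sketch of a cache
controller node using the two new properties, with the compatible, address
and cache geometry chosen purely for illustration:

	L2: cache-controller@fff12000 {
		compatible = "arm,pl310-cache";
		reg = <0xfff12000 0x1000>;
		cache-unified;
		cache-level = <2>;
		arm,dynamic-clock-gating = <1>;
		arm,standby-mode = <1>;
	};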
diff --git a/Documentation/devicetree/bindings/arm/marvell/ap806-system-controller.txt b/Documentation/devicetree/bindings/arm/marvell/ap806-system-controller.txt
new file mode 100644
index 000000000000..8968371d84e2
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/marvell/ap806-system-controller.txt
@@ -0,0 +1,35 @@
+Marvell Armada AP806 System Controller
+======================================
+
+The AP806 is one of the two core HW blocks of the Marvell Armada 7K/8K
+SoCs. It contains a system controller, which provides a number of
+registers giving access to numerous features: clocks, pin-muxing and
+many other SoC configuration items. This DT binding describes this
+system controller.
+
+The Device Tree node representing the AP806 system controller provides
+a number of clocks:
+
+ - 0: clock of CPU cluster 0
+ - 1: clock of CPU cluster 1
+ - 2: fixed PLL at 1200 MHz
+ - 3: MSS clock, derived from the fixed PLL
+
+Required properties:
+
+ - compatible: must be:
+ "marvell,ap806-system-controller", "syscon"
+ - reg: register area of the AP806 system controller
+ - #clock-cells: must be set to 1
+ - clock-output-names: must be defined to:
+ "ap-cpu-cluster-0", "ap-cpu-cluster-1", "ap-fixed", "ap-mss"
+
+Example:
+
+ syscon: system-controller@6f4000 {
+ compatible = "marvell,ap806-system-controller", "syscon";
+ #clock-cells = <1>;
+ clock-output-names = "ap-cpu-cluster-0", "ap-cpu-cluster-1",
+ "ap-fixed", "ap-mss";
+ reg = <0x6f4000 0x1000>;
+ };
diff --git a/Documentation/devicetree/bindings/arm/marvell/cp110-system-controller0.txt b/Documentation/devicetree/bindings/arm/marvell/cp110-system-controller0.txt
new file mode 100644
index 000000000000..30c546900b60
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/marvell/cp110-system-controller0.txt
@@ -0,0 +1,83 @@
+Marvell Armada CP110 System Controller 0
+========================================
+
+The CP110 is one of the two core HW blocks of the Marvell Armada 7K/8K
+SoCs. It contains two sets of system control registers, System
+Controller 0 and System Controller 1. This Device Tree binding allows
+to describe the first system controller, which provides registers to
+configure various aspects of the SoC.
+
+The Device Tree node representing this System Controller 0 provides a
+number of clocks:
+
+ - a set of core clocks
+ - a set of gatable clocks
+
+Those clocks can be referenced by other Device Tree nodes using two
+cells:
+ - The first cell must be 0 or 1. 0 for the core clocks and 1 for the
+ gatable clocks.
+ - The second cell identifies the particular core clock or gatable
+ clocks.
+
+The following clocks are available:
+ - Core clocks
+ - 0 0 APLL
+ - 0 1 PPv2 core
+ - 0 2 EIP
+ - 0 3 Core
+ - 0 4 NAND core
+ - Gatable clocks
+ - 1 0 Audio
+ - 1 1 Comm Unit
+ - 1 2 NAND
+ - 1 3 PPv2
+ - 1 4 SDIO
+ - 1 5 MG Domain
+ - 1 6 MG Core
+ - 1 7 XOR1
+ - 1 8 XOR0
+ - 1 9 GOP DP
+ - 1 11 PCIe x1 0
+ - 1 12 PCIe x1 1
+ - 1 13 PCIe x4
+ - 1 14 PCIe / XOR
+ - 1 15 SATA
+ - 1 16 SATA USB
+ - 1 17 Main
+ - 1 18 SD/MMC
+ - 1 21 Slow IO (SPI, NOR, BootROM, I2C, UART)
+ - 1 22 USB3H0
+ - 1 23 USB3H1
+ - 1 24 USB3 Device
+ - 1 25 EIP150
+ - 1 26 EIP197
+
+Required properties:
+
+ - compatible: must be:
+ "marvell,cp110-system-controller0", "syscon";
+ - reg: register area of the CP110 system controller 0
+ - #clock-cells: must be set to 2
+ - core-clock-output-names must be set to:
+ "cpm-apll", "cpm-ppv2-core", "cpm-eip", "cpm-core", "cpm-nand-core"
+ - gate-clock-output-names must be set to:
+ "cpm-audio", "cpm-communit", "cpm-nand", "cpm-ppv2", "cpm-sdio",
+ "cpm-mg-domain", "cpm-mg-core", "cpm-xor1", "cpm-xor0", "cpm-gop-dp", "none",
+ "cpm-pcie_x10", "cpm-pcie_x11", "cpm-pcie_x4", "cpm-pcie-xor", "cpm-sata",
+ "cpm-sata-usb", "cpm-main", "cpm-sd-mmc", "none", "none", "cpm-slow-io",
+ "cpm-usb3h0", "cpm-usb3h1", "cpm-usb3dev", "cpm-eip150", "cpm-eip197";
+
+Example:
+
+ cpm_syscon0: system-controller@440000 {
+ compatible = "marvell,cp110-system-controller0", "syscon";
+ reg = <0x440000 0x1000>;
+ #clock-cells = <2>;
+ core-clock-output-names = "cpm-apll", "cpm-ppv2-core", "cpm-eip", "cpm-core", "cpm-nand-core";
+ gate-clock-output-names = "cpm-audio", "cpm-communit", "cpm-nand", "cpm-ppv2", "cpm-sdio",
+ "cpm-mg-domain", "cpm-mg-core", "cpm-xor1", "cpm-xor0", "cpm-gop-dp", "none",
+ "cpm-pcie_x10", "cpm-pcie_x11", "cpm-pcie_x4", "cpm-pcie-xor", "cpm-sata",
+ "cpm-sata-usb", "cpm-main", "cpm-sd-mmc", "none", "none", "cpm-slow-io",
+ "cpm-usb3h0", "cpm-usb3h1", "cpm-usb3dev", "cpm-eip150", "cpm-eip197";
+ };
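To illustrate the two-cell specifier described above, a consumer of, say, the
SDIO gatable clock (cells <1 4>) would reference the controller roughly as in
the sketch below; the consumer node and its compatible are placeholders, only
the clock cells follow the scheme documented here:

	sdhci@780000 {
		compatible = "vendor,example-sdhci";	/* placeholder */
		reg = <0x780000 0x300>;
		clocks = <&cpm_syscon0 1 4>;	/* gatable clock 4: SDIO */
	};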
diff --git a/Documentation/devicetree/bindings/arm/omap/crossbar.txt b/Documentation/devicetree/bindings/arm/omap/crossbar.txt
index a9b28d74d902..bb5727ae004a 100644
--- a/Documentation/devicetree/bindings/arm/omap/crossbar.txt
+++ b/Documentation/devicetree/bindings/arm/omap/crossbar.txt
@@ -42,7 +42,8 @@ Examples:
Consumer:
========
See Documentation/devicetree/bindings/interrupt-controller/interrupts.txt and
-Documentation/devicetree/bindings/arm/gic.txt for further details.
+Documentation/devicetree/bindings/interrupt-controller/arm,gic.txt for
+further details.
An interrupt consumer on an SoC using crossbar will use:
interrupts = <GIC_SPI request_number interrupt_level>
diff --git a/Documentation/devicetree/bindings/arm/omap/omap.txt b/Documentation/devicetree/bindings/arm/omap/omap.txt
index 21e71a5e866e..94b57f247615 100644
--- a/Documentation/devicetree/bindings/arm/omap/omap.txt
+++ b/Documentation/devicetree/bindings/arm/omap/omap.txt
@@ -133,6 +133,9 @@ Boards:
- AM335X Bone : Low cost community board
compatible = "ti,am335x-bone", "ti,am33xx", "ti,omap3"
+- AM3359 ICEv2 : Low cost Industrial Communication Engine EVM.
+ compatible = "ti,am3359-icev2", "ti,am33xx", "ti,omap3"
+
- AM335X OrionLXm : Substation Automation Platform
compatible = "novatech,am335x-lxm", "ti,am33xx"
@@ -169,6 +172,9 @@ Boards:
- AM57XX SBC-AM57x
compatible = "compulab,sbc-am57x", "compulab,cl-som-am57x", "ti,am5728", "ti,dra742", "ti,dra74", "ti,dra7"
+- AM5728 IDK
+ compatible = "ti,am5728-idk", "ti,am5728", "ti,dra742", "ti,dra74", "ti,dra7"
+
- DRA742 EVM: Software Development Board for DRA742
compatible = "ti,dra7-evm", "ti,dra742", "ti,dra74", "ti,dra7"
diff --git a/Documentation/devicetree/bindings/arm/oxnas.txt b/Documentation/devicetree/bindings/arm/oxnas.txt
new file mode 100644
index 000000000000..b9e49711ba05
--- /dev/null
+++ b/Documentation/devicetree/bindings/arm/oxnas.txt
@@ -0,0 +1,9 @@
+Oxford Semiconductor OXNAS SoCs Family device tree bindings
+-------------------------------------------
+
+Boards with the OX810SE SoC shall have the following properties:
+ Required root node property:
+ compatible: "oxsemi,ox810se"
+
+Board compatible values:
+ - "wd,mbwe" (OX810SE)
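Putting the two requirements above together, the root node of the listed
board would carry both compatibles, e.g.:

	/ {
		compatible = "wd,mbwe", "oxsemi,ox810se";
	};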
diff --git a/Documentation/devicetree/bindings/arm/pmu.txt b/Documentation/devicetree/bindings/arm/pmu.txt
index 6eb73be9433e..74d5417d0410 100644
--- a/Documentation/devicetree/bindings/arm/pmu.txt
+++ b/Documentation/devicetree/bindings/arm/pmu.txt
@@ -22,10 +22,11 @@ Required properties:
"arm,arm11mpcore-pmu"
"arm,arm1176-pmu"
"arm,arm1136-pmu"
+ "brcm,vulcan-pmu"
+ "cavium,thunder-pmu"
"qcom,scorpion-pmu"
"qcom,scorpion-mp-pmu"
"qcom,krait-pmu"
- "cavium,thunder-pmu"
- interrupts : 1 combined interrupt or 1 per core. If the interrupt is a per-cpu
interrupt (PPI) then 1 interrupt should be specified.
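As a reference for the newly listed compatibles, a PMU node usually consists
of just the compatible string and its interrupt(s); the interrupt specifier
below is illustrative only:

	pmu {
		compatible = "brcm,vulcan-pmu";
		interrupts = <1 7 4>;	/* PPI 7, flags illustrative */
	};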
diff --git a/Documentation/devicetree/bindings/arm/rockchip.txt b/Documentation/devicetree/bindings/arm/rockchip.txt
index 078c14fcdaaa..715d960d5eea 100644
--- a/Documentation/devicetree/bindings/arm/rockchip.txt
+++ b/Documentation/devicetree/bindings/arm/rockchip.txt
@@ -39,6 +39,10 @@ Rockchip platforms device tree bindings
Required root node properties:
- compatible = "netxeon,r89", "rockchip,rk3288";
+- GeekBuying GeekBox:
+ Required root node properties:
+ - compatible = "geekbuying,geekbox", "rockchip,rk3368";
+
- Google Brain (dev-board):
Required root node properties:
- compatible = "google,veyron-brain-rev0", "google,veyron-brain",
@@ -87,6 +91,10 @@ Rockchip platforms device tree bindings
"google,veyron-speedy-rev3", "google,veyron-speedy-rev2",
"google,veyron-speedy", "google,veyron", "rockchip,rk3288";
+- mqmaker MiQi:
+ Required root node properties:
+ - compatible = "mqmaker,miqi", "rockchip,rk3288";
+
- Rockchip RK3368 evb:
Required root node properties:
- compatible = "rockchip,rk3368-evb-act8846", "rockchip,rk3368";
@@ -97,4 +105,8 @@ Rockchip platforms device tree bindings
- Rockchip RK3228 Evaluation board:
Required root node properties:
- - compatible = "rockchip,rk3228-evb", "rockchip,rk3228";
+ - compatible = "rockchip,rk3228-evb", "rockchip,rk3228";
+
+- Rockchip RK3399 evb:
+ Required root node properties:
+ - compatible = "rockchip,rk3399-evb", "rockchip,rk3399";
diff --git a/Documentation/devicetree/bindings/arm/samsung/samsung-boards.txt b/Documentation/devicetree/bindings/arm/samsung/samsung-boards.txt
index 12129c011c8f..f5deace2b380 100644
--- a/Documentation/devicetree/bindings/arm/samsung/samsung-boards.txt
+++ b/Documentation/devicetree/bindings/arm/samsung/samsung-boards.txt
@@ -2,6 +2,8 @@
Required root node properties:
- compatible = should be one or more of the following.
+ - "samsung,artik5" - for Exynos3250-based Samsung ARTIK5 module.
+ - "samsung,artik5-eval" - for Exynos3250-based Samsung ARTIK5 eval board.
- "samsung,monk" - for Exynos3250-based Samsung Simband board.
- "samsung,rinato" - for Exynos3250-based Samsung Gear2 board.
- "samsung,smdkv310" - for Exynos4210-based Samsung SMDKV310 eval board.
diff --git a/Documentation/devicetree/bindings/arm/spear-misc.txt b/Documentation/devicetree/bindings/arm/spear-misc.txt
index cf649827ffcd..e404e2556b4a 100644
--- a/Documentation/devicetree/bindings/arm/spear-misc.txt
+++ b/Documentation/devicetree/bindings/arm/spear-misc.txt
@@ -6,4 +6,4 @@ few properties of different peripheral controllers.
misc node required properties:
- compatible Should be "st,spear1340-misc", "syscon".
-- reg: Address range of misc space upto 8K
+- reg: Address range of misc space up to 8K
diff --git a/Documentation/devicetree/bindings/arm/tegra/nvidia,tegra20-pmc.txt b/Documentation/devicetree/bindings/arm/tegra/nvidia,tegra20-pmc.txt
index 02c27004d4a8..a74b37b07e5c 100644
--- a/Documentation/devicetree/bindings/arm/tegra/nvidia,tegra20-pmc.txt
+++ b/Documentation/devicetree/bindings/arm/tegra/nvidia,tegra20-pmc.txt
@@ -1,16 +1,20 @@
NVIDIA Tegra Power Management Controller (PMC)
+== Power Management Controller Node ==
+
The PMC block interacts with an external Power Management Unit. The PMC
mostly controls the entry and exit of the system from different sleep
modes. It provides power-gating controllers for SoC and CPU power-islands.
Required properties:
- name : Should be pmc
-- compatible : For Tegra20, must contain "nvidia,tegra20-pmc". For Tegra30,
- must contain "nvidia,tegra30-pmc". For Tegra114, must contain
- "nvidia,tegra114-pmc". For Tegra124, must contain "nvidia,tegra124-pmc".
- Otherwise, must contain "nvidia,<chip>-pmc", plus at least one of the
- above, where <chip> is tegra132.
+- compatible : Should contain one of the following:
+ For Tegra20 must contain "nvidia,tegra20-pmc".
+ For Tegra30 must contain "nvidia,tegra30-pmc".
+ For Tegra114 must contain "nvidia,tegra114-pmc".
+ For Tegra124 must contain "nvidia,tegra124-pmc".
+ For Tegra132 must contain "nvidia,tegra124-pmc".
+ For Tegra210 must contain "nvidia,tegra210-pmc".
- reg : Offset and length of the register set for the device
- clocks : Must contain an entry for each entry in clock-names.
See ../clocks/clock-bindings.txt for details.
@@ -68,6 +72,11 @@ Optional properties for hardware-triggered thermal reset (inside 'i2c-thermtrip'
Defaults to 0. Valid values are described in section 12.5.2
"Pinmux Support" of the Tegra4 Technical Reference Manual.
+Optional nodes:
+- powergates : This node contains a hierarchy of power domain nodes, which
+ should match the powergates on the Tegra SoC. See "Powergate
+ Nodes" below.
+
Example:
/ SoC dts including file
@@ -113,3 +122,76 @@ pmc@7000f400 {
};
...
};
+
+
+== Powergate Nodes ==
+
+Each of the powergate nodes represents a power-domain on the Tegra SoC
+that can be power-gated by the Tegra PMC. The name of the powergate node
+should be one of the below. Note that not every powergate is applicable
+to all Tegra devices and the following list shows which powergates are
+applicable to which devices. Please refer to the Tegra TRM for more
+details on the various powergates.
+
+ Name Description Devices Applicable
+ 3d 3D Graphics Tegra20/114/124/210
+ 3d0 3D Graphics 0 Tegra30
+ 3d1 3D Graphics 1 Tegra30
+ aud Audio Tegra210
+ dfd Debug Tegra210
+ dis Display A Tegra114/124/210
+ disb Display B Tegra114/124/210
+ heg 2D Graphics Tegra30/114/124/210
+ iram Internal RAM Tegra124/210
+ mpe MPEG Encode All
+ nvdec NVIDIA Video Decode Engine Tegra210
+ nvjpg NVIDIA JPEG Engine Tegra210
+ pcie PCIE Tegra20/30/124/210
+ sata SATA Tegra30/124/210
+ sor Display interfaces Tegra124/210
+ ve2 Video Encode Engine 2 Tegra210
+ venc Video Encode Engine All
+ vdec Video Decode Engine Tegra20/30/114/124
+ vic Video Imaging Compositor Tegra124/210
+ xusba USB Partition A Tegra114/124/210
+ xusbb USB Partition B Tegra114/124/210
+ xusbc USB Partition C Tegra114/124/210
+
+Required properties:
+ - clocks: Must contain an entry for each clock required by the PMC for
+ controlling a power-gate. See ../clocks/clock-bindings.txt for details.
+ - resets: Must contain an entry for each reset required by the PMC for
+ controlling a power-gate. See ../reset/reset.txt for details.
+ - #power-domain-cells: Must be 0.
+
+Example:
+
+ pmc: pmc@7000e400 {
+ compatible = "nvidia,tegra210-pmc";
+ reg = <0x0 0x7000e400 0x0 0x400>;
+ clocks = <&tegra_car TEGRA210_CLK_PCLK>, <&clk32k_in>;
+ clock-names = "pclk", "clk32k_in";
+
+ powergates {
+ pd_audio: aud {
+ clocks = <&tegra_car TEGRA210_CLK_APE>,
+ <&tegra_car TEGRA210_CLK_APB2APE>;
+ resets = <&tegra_car 198>;
+ #power-domain-cells = <0>;
+ };
+ };
+ };
+
+
+== Powergate Clients ==
+
+Hardware blocks belonging to a power domain should contain a "power-domains"
+property that is a phandle pointing to the corresponding powergate node.
+
+Example:
+
+ adma: adma@702e2000 {
+ ...
+ power-domains = <&pd_audio>;
+ ...
+ };
diff --git a/Documentation/devicetree/bindings/arm/ux500/boards.txt b/Documentation/devicetree/bindings/arm/ux500/boards.txt
index b8737a8de718..7334c24625fc 100644
--- a/Documentation/devicetree/bindings/arm/ux500/boards.txt
+++ b/Documentation/devicetree/bindings/arm/ux500/boards.txt
@@ -23,7 +23,7 @@ scu:
see binding for arm/scu.txt
interrupt-controller:
- see binding for arm/gic.txt
+ see binding for interrupt-controller/arm,gic.txt
timer:
see binding for arm/twd.txt
diff --git a/Documentation/devicetree/bindings/ata/tegra-sata.txt b/Documentation/devicetree/bindings/ata/nvidia,tegra124-ahci.txt
index 66c83c3e8915..66c83c3e8915 100644
--- a/Documentation/devicetree/bindings/ata/tegra-sata.txt
+++ b/Documentation/devicetree/bindings/ata/nvidia,tegra124-ahci.txt
diff --git a/Documentation/devicetree/bindings/btmrvl.txt b/Documentation/devicetree/bindings/btmrvl.txt
deleted file mode 100644
index 58f964bb0a52..000000000000
--- a/Documentation/devicetree/bindings/btmrvl.txt
+++ /dev/null
@@ -1,29 +0,0 @@
-btmrvl
-------
-
-Required properties:
-
- - compatible : must be "btmrvl,cfgdata"
-
-Optional properties:
-
- - btmrvl,cal-data : Calibration data downloaded to the device during
- initialization. This is an array of 28 values(u8).
-
- - btmrvl,gpio-gap : gpio and gap (in msecs) combination to be
- configured.
-
-Example:
-
-GPIO pin 13 is configured as a wakeup source and GAP is set to 100 msecs
-in below example.
-
-btmrvl {
- compatible = "btmrvl,cfgdata";
-
- btmrvl,cal-data = /bits/ 8 <
- 0x37 0x01 0x1c 0x00 0xff 0xff 0xff 0xff 0x01 0x7f 0x04 0x02
- 0x00 0x00 0xba 0xce 0xc0 0xc6 0x2d 0x00 0x00 0x00 0x00 0x00
- 0x00 0x00 0xf0 0x00>;
- btmrvl,gpio-gap = <0x0d64>;
-};
diff --git a/Documentation/devicetree/bindings/clock/artpec6.txt b/Documentation/devicetree/bindings/clock/artpec6.txt
new file mode 100644
index 000000000000..dff9cdf0009c
--- /dev/null
+++ b/Documentation/devicetree/bindings/clock/artpec6.txt
@@ -0,0 +1,41 @@
+* Clock bindings for Axis ARTPEC-6 chip
+
+The bindings are based on the clock provider binding in
+Documentation/devicetree/bindings/clock/clock-bindings.txt
+
+External clocks:
+----------------
+
+There are two external inputs to the main clock controller which should be
+provided using the common clock bindings.
+- "sys_refclk": External 50 Mhz oscillator (required)
+- "i2s_refclk": Alternate audio reference clock (optional).
+
+Main clock controller
+---------------------
+
+Required properties:
+- #clock-cells: Should be <1>
+ See dt-bindings/clock/axis,artpec6-clkctrl.h for the list of valid identifiers.
+- compatible: Should be "axis,artpec6-clkctrl"
+- reg: Must contain the base address and length of the system controller
+- clocks: Must contain a phandle entry for each clock in clock-names
+- clock-names: Must include the external oscillator ("sys_refclk"). Optional
+ ones are the audio reference clock ("i2s_refclk") and the audio fractional
+ dividers ("frac_clk0" and "frac_clk1").
+
+Examples:
+
+ext_clk: ext_clk {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <50000000>;
+};
+
+clkctrl: clkctrl@f8000000 {
+ #clock-cells = <1>;
+ compatible = "axis,artpec6-clkctrl";
+ reg = <0xf8000000 0x48>;
+ clocks = <&ext_clk>;
+ clock-names = "sys_refclk";
+};
diff --git a/Documentation/devicetree/bindings/clock/axs10x-i2s-pll-clock.txt b/Documentation/devicetree/bindings/clock/axs10x-i2s-pll-clock.txt
new file mode 100644
index 000000000000..5ffc8df7e6da
--- /dev/null
+++ b/Documentation/devicetree/bindings/clock/axs10x-i2s-pll-clock.txt
@@ -0,0 +1,25 @@
+Binding for the AXS10X I2S PLL clock
+
+This binding uses the common clock binding[1].
+
+[1] Documentation/devicetree/bindings/clock/clock-bindings.txt
+
+Required properties:
+- compatible: shall be "snps,axs10x-i2s-pll-clock"
+- reg : address and length of the I2S PLL register set.
+- clocks: shall be the input parent clock phandle for the PLL.
+- #clock-cells: from common clock binding; Should always be set to 0.
+
+Example:
+ pll_clock: pll_clock {
+ compatible = "fixed-clock";
+ clock-frequency = <27000000>;
+ #clock-cells = <0>;
+ };
+
+ i2s_clock@100a0 {
+ compatible = "snps,axs10x-i2s-pll-clock";
+ reg = <0x100a0 0x10>;
+ clocks = <&pll_clock>;
+ #clock-cells = <0>;
+ };
diff --git a/Documentation/devicetree/bindings/clock/hi3519-crg.txt b/Documentation/devicetree/bindings/clock/hi3519-crg.txt
new file mode 100644
index 000000000000..acd1f235d548
--- /dev/null
+++ b/Documentation/devicetree/bindings/clock/hi3519-crg.txt
@@ -0,0 +1,46 @@
+* Hisilicon Hi3519 Clock and Reset Generator(CRG)
+
+The Hi3519 CRG module provides clock and reset signals to various
+controllers within the SoC.
+
+This binding uses the following bindings:
+ Documentation/devicetree/bindings/clock/clock-bindings.txt
+ Documentation/devicetree/bindings/reset/reset.txt
+
+Required Properties:
+
+- compatible: should be one of the following.
+ - "hisilicon,hi3519-crg" - controller compatible with Hi3519 SoC.
+
+- reg: physical base address of the controller and length of memory mapped
+ region.
+
+- #clock-cells: should be 1.
+
+Each clock is assigned an identifier and client nodes use this identifier
+to specify the clock which they consume.
+
+All these identifiers can be found in <dt-bindings/clock/hi3519-clock.h>.
+
+- #reset-cells: should be 2.
+
+A reset signal can be controlled by writing to a bit in a register of the CRG
+module.
+The reset specifier consists of two cells. The first cell represents the
+register offset relative to the base address. The second cell represents the
+bit index in the register.
+
+Example: CRG nodes
+CRG: clock-reset-controller@12010000 {
+ compatible = "hisilicon,hi3519-crg";
+ reg = <0x12010000 0x10000>;
+ #clock-cells = <1>;
+ #reset-cells = <2>;
+};
+
+Example: consumer nodes
+i2c0: i2c@12110000 {
+ compatible = "hisilicon,hi3519-i2c";
+ reg = <0x12110000 0x1000>;
+ clocks = <&CRG HI3519_I2C0_RST>;
+ resets = <&CRG 0xe4 0>;
+};
diff --git a/Documentation/devicetree/bindings/clock/imx35-clock.txt b/Documentation/devicetree/bindings/clock/imx35-clock.txt
index a70356452a82..f49783213c56 100644
--- a/Documentation/devicetree/bindings/clock/imx35-clock.txt
+++ b/Documentation/devicetree/bindings/clock/imx35-clock.txt
@@ -94,6 +94,7 @@ clocks and IDs.
csi_sel 79
iim_gate 80
gpu2d_gate 81
+ ckli_gate 82
Examples:
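The file's existing examples are elided by this hunk; a consumer of the new
identifier would reference it by index, along the lines of the sketch below,
where the node and its other properties are placeholders and only the clock
index comes from the table above:

	somedevice@53f00000 {
		compatible = "vendor,example-device";	/* placeholder */
		reg = <0x53f00000 0x4000>;
		clocks = <&clks 82>;	/* ckli_gate */
	};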
diff --git a/Documentation/devicetree/bindings/clock/microchip,pic32.txt b/Documentation/devicetree/bindings/clock/microchip,pic32.txt
new file mode 100644
index 000000000000..c93d88fdd858
--- /dev/null
+++ b/Documentation/devicetree/bindings/clock/microchip,pic32.txt
@@ -0,0 +1,39 @@
+Microchip PIC32 Clock Controller Binding
+----------------------------------------
+The Microchip clock controller consists of a few oscillators, PLLs,
+multiplexers and divider modules.
+
+This binding uses common clock bindings.
+[1] Documentation/devicetree/bindings/clock/clock-bindings.txt
+
+Required properties:
+- compatible: shall be "microchip,pic32mzda-clk".
+- reg: shall contain base address and length of clock registers.
+- #clock-cells: shall be 1.
+
+Optional properties:
+- microchip,pic32mzda-sosc: shall be added only if platform has
+ secondary oscillator connected.
+
+Example:
+ rootclk: clock-controller@1f801200 {
+ compatible = "microchip,pic32mzda-clk";
+ reg = <0x1f801200 0x200>;
+ #clock-cells = <1>;
+ /* optional */
+ microchip,pic32mzda-sosc;
+ };
+
+
+The clock consumer shall specify the desired clock-output of the clock
+controller (as defined in [2]) by specifying output-id in its "clock"
+phandle cell.
+[2] include/dt-bindings/clock/microchip,pic32-clock.h
+
+For example for UART2:
+uart2: serial@2 {
+ compatible = "microchip,pic32mzda-uart";
+ reg = <>;
+ interrupts = <>;
+ clocks = <&rootclk PB2CLK>;
+};
diff --git a/Documentation/devicetree/bindings/clock/nvidia,tegra124-dfll.txt b/Documentation/devicetree/bindings/clock/nvidia,tegra124-dfll.txt
index ee7e5fd4a50b..63f9d8277d48 100644
--- a/Documentation/devicetree/bindings/clock/nvidia,tegra124-dfll.txt
+++ b/Documentation/devicetree/bindings/clock/nvidia,tegra124-dfll.txt
@@ -50,7 +50,7 @@ Required properties for I2C mode:
Example:
-clock@0,70110000 {
+clock@70110000 {
compatible = "nvidia,tegra124-dfll";
reg = <0 0x70110000 0 0x100>, /* DFLL control */
<0 0x70110000 0 0x100>, /* I2C output control */
diff --git a/Documentation/devicetree/bindings/clock/oxnas,stdclk.txt b/Documentation/devicetree/bindings/clock/oxnas,stdclk.txt
new file mode 100644
index 000000000000..208cca6ac4ec
--- /dev/null
+++ b/Documentation/devicetree/bindings/clock/oxnas,stdclk.txt
@@ -0,0 +1,35 @@
+Oxford Semiconductor OXNAS SoC Family Standard Clocks
+================================================
+
+Please also refer to clock-bindings.txt in this directory for common clock
+bindings usage.
+
+Required properties:
+- compatible: Should be "oxsemi,ox810se-stdclk"
+- #clock-cells: 1, see below
+
+Parent node should have the following properties :
+- compatible: Should be "oxsemi,ox810se-sys-ctrl", "syscon", "simple-mfd"
+
+For OX810SE, the clock indices are :
+ - 0: LEON
+ - 1: DMA_SGDMA
+ - 2: CIPHER
+ - 3: SATA
+ - 4: AUDIO
+ - 5: USBMPH
+ - 6: ETHA
+ - 7: PCIA
+ - 8: NAND
+
+example:
+
+sys: sys-ctrl@000000 {
+ compatible = "oxsemi,ox810se-sys-ctrl", "syscon", "simple-mfd";
+ reg = <0x000000 0x100000>;
+
+ stdclk: stdclk {
+ compatible = "oxsemi,ox810se-stdclk";
+ #clock-cells = <1>;
+ };
+};
diff --git a/Documentation/devicetree/bindings/clock/rockchip,rk3188-cru.txt b/Documentation/devicetree/bindings/clock/rockchip,rk3188-cru.txt
index 0c2bf5eba43e..7f368530a2e4 100644
--- a/Documentation/devicetree/bindings/clock/rockchip,rk3188-cru.txt
+++ b/Documentation/devicetree/bindings/clock/rockchip,rk3188-cru.txt
@@ -16,7 +16,7 @@ Required Properties:
Optional Properties:
- rockchip,grf: phandle to the syscon managing the "general register files"
- If missing pll rates are not changable, due to the missing pll lock status.
+ If missing pll rates are not changeable, due to the missing pll lock status.
Each clock is assigned an identifier and client nodes can use this identifier
to specify the clock which they consume. All available clocks are defined as
diff --git a/Documentation/devicetree/bindings/clock/rockchip,rk3288-cru.txt b/Documentation/devicetree/bindings/clock/rockchip,rk3288-cru.txt
index c9fbb76573e1..8cb47c39ba53 100644
--- a/Documentation/devicetree/bindings/clock/rockchip,rk3288-cru.txt
+++ b/Documentation/devicetree/bindings/clock/rockchip,rk3288-cru.txt
@@ -15,7 +15,7 @@ Required Properties:
Optional Properties:
- rockchip,grf: phandle to the syscon managing the "general register files"
- If missing pll rates are not changable, due to the missing pll lock status.
+ If missing pll rates are not changeable, due to the missing pll lock status.
Each clock is assigned an identifier and client nodes can use this identifier
to specify the clock which they consume. All available clocks are defined as
diff --git a/Documentation/devicetree/bindings/clock/rockchip,rk3399-cru.txt b/Documentation/devicetree/bindings/clock/rockchip,rk3399-cru.txt
new file mode 100644
index 000000000000..3888dd33fcbd
--- /dev/null
+++ b/Documentation/devicetree/bindings/clock/rockchip,rk3399-cru.txt
@@ -0,0 +1,62 @@
+* Rockchip RK3399 Clock and Reset Unit
+
+The RK3399 clock controller generates and supplies clock to various
+controllers within the SoC and also implements a reset controller for SoC
+peripherals.
+
+Required Properties:
+
+- compatible: PMU for CRU should be "rockchip,rk3399-pmucru"
+- compatible: CRU should be "rockchip,rk3399-cru"
+- reg: physical base address of the controller and length of memory mapped
+ region.
+- #clock-cells: should be 1.
+- #reset-cells: should be 1.
+
+Each clock is assigned an identifier and client nodes can use this identifier
+to specify the clock which they consume. All available clocks are defined as
+preprocessor macros in the dt-bindings/clock/rk3399-cru.h headers and can be
+used in device tree sources. Similar macros exist for the reset sources in
+these files.
+
+External clocks:
+
+There are several clocks that are generated outside the SoC. It is expected
+that they are defined using standard clock bindings with the following
+clock-output-names:
+ - "xin24m" - crystal input - required,
+ - "xin32k" - rtc clock - optional,
+ - "clkin_gmac" - external GMAC clock - optional,
+ - "clkin_i2s" - external I2S clock - optional,
+ - "pclkin_cif" - external ISP clock - optional,
+ - "clk_usbphy0_480m" - output clock of the pll in the usbphy0
+ - "clk_usbphy1_480m" - output clock of the pll in the usbphy1
+
+Example: Clock controller node:
+
+ pmucru: pmu-clock-controller@ff750000 {
+ compatible = "rockchip,rk3399-pmucru";
+ reg = <0x0 0xff750000 0x0 0x1000>;
+ #clock-cells = <1>;
+ #reset-cells = <1>;
+ };
+
+ cru: clock-controller@ff760000 {
+ compatible = "rockchip,rk3399-cru";
+ reg = <0x0 0xff760000 0x0 0x1000>;
+ #clock-cells = <1>;
+ #reset-cells = <1>;
+ };
+
+Example: UART controller node that consumes the clock generated by the clock
+ controller:
+
+ uart0: serial@ff180000 {
+ compatible = "rockchip,rk3399-uart", "snps,dw-apb-uart";
+ reg = <0x0 0xff180000 0x0 0x100>;
+ clocks = <&cru SCLK_UART0>, <&cru PCLK_UART0>;
+ clock-names = "baudclk", "apb_pclk";
+ interrupts = <GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>;
+ reg-shift = <2>;
+ reg-io-width = <4>;
+ };
diff --git a/Documentation/devicetree/bindings/clock/st/st,clkgen.txt b/Documentation/devicetree/bindings/clock/st/st,clkgen.txt
index 78978f1f5158..b18bf86f926f 100644
--- a/Documentation/devicetree/bindings/clock/st/st,clkgen.txt
+++ b/Documentation/devicetree/bindings/clock/st/st,clkgen.txt
@@ -40,7 +40,7 @@ address is common of all subnode.
};
This binding uses the common clock binding[1].
-Each subnode should use the binding discribe in [2]..[7]
+Each subnode should use the binding described in [2]..[7]
[1] Documentation/devicetree/bindings/clock/clock-bindings.txt
[2] Documentation/devicetree/bindings/clock/st,clkgen-divmux.txt
diff --git a/Documentation/devicetree/bindings/clock/sunxi.txt b/Documentation/devicetree/bindings/clock/sunxi.txt
index 834436fbe83d..8f7619d8c8d8 100644
--- a/Documentation/devicetree/bindings/clock/sunxi.txt
+++ b/Documentation/devicetree/bindings/clock/sunxi.txt
@@ -10,6 +10,7 @@ Required properties:
"allwinner,sun4i-a10-pll1-clk" - for the main PLL clock and PLL4
"allwinner,sun6i-a31-pll1-clk" - for the main PLL clock on A31
"allwinner,sun8i-a23-pll1-clk" - for the main PLL clock on A23
+ "allwinner,sun4i-a10-pll3-clk" - for the video PLL clock on A10
"allwinner,sun9i-a80-pll4-clk" - for the peripheral PLLs on A80
"allwinner,sun4i-a10-pll5-clk" - for the PLL5 clock
"allwinner,sun4i-a10-pll6-clk" - for the PLL6 clock
@@ -63,7 +64,9 @@ Required properties:
"allwinner,sun8i-a83t-bus-gates-clk" - for the bus gates on A83T
"allwinner,sun8i-h3-bus-gates-clk" - for the bus gates on H3
"allwinner,sun9i-a80-apbs-gates-clk" - for the APBS gates on A80
+ "allwinner,sun4i-a10-display-clk" - for the display clocks on the A10
"allwinner,sun4i-a10-dram-gates-clk" - for the DRAM gates on A10
+ "allwinner,sun5i-a13-dram-gates-clk" - for the DRAM gates on A13
"allwinner,sun5i-a13-mbus-clk" - for the MBUS clock on A13
"allwinner,sun4i-a10-mmc-clk" - for the MMC clock
"allwinner,sun9i-a80-mmc-clk" - for mmc module clocks on A80
@@ -73,6 +76,8 @@ Required properties:
"allwinner,sun8i-a23-mbus-clk" - for the MBUS clock on A23
"allwinner,sun7i-a20-out-clk" - for the external output clocks
"allwinner,sun7i-a20-gmac-clk" - for the GMAC clock module on A20/A31
+ "allwinner,sun4i-a10-tcon-ch0-clk" - for the TCON channel 0 clock on the A10
+ "allwinner,sun4i-a10-tcon-ch1-clk" - for the TCON channel 1 clock on the A10
"allwinner,sun4i-a10-usb-clk" - for usb gates + resets on A10 / A20
"allwinner,sun5i-a13-usb-clk" - for usb gates + resets on A13
"allwinner,sun6i-a31-usb-clk" - for usb gates + resets on A31
@@ -81,6 +86,7 @@ Required properties:
"allwinner,sun9i-a80-usb-mod-clk" - for usb gates + resets on A80
"allwinner,sun9i-a80-usb-phy-clk" - for usb phy gates + resets on A80
"allwinner,sun4i-a10-ve-clk" - for the Video Engine clock
+ "allwinner,sun6i-a31-display-clk" - for the display clocks
Required properties for all clocks:
- reg : shall be the control register address for the clock.
diff --git a/Documentation/devicetree/bindings/cpufreq/tegra124-cpufreq.txt b/Documentation/devicetree/bindings/cpufreq/nvidia,tegra124-cpufreq.txt
index b1669fbfb740..b1669fbfb740 100644
--- a/Documentation/devicetree/bindings/cpufreq/tegra124-cpufreq.txt
+++ b/Documentation/devicetree/bindings/cpufreq/nvidia,tegra124-cpufreq.txt
diff --git a/Documentation/devicetree/bindings/crypto/fsl-imx-scc.txt b/Documentation/devicetree/bindings/crypto/fsl-imx-scc.txt
new file mode 100644
index 000000000000..7aad448e8a36
--- /dev/null
+++ b/Documentation/devicetree/bindings/crypto/fsl-imx-scc.txt
@@ -0,0 +1,21 @@
+Freescale Security Controller (SCC)
+
+Required properties:
+- compatible : Should be "fsl,imx25-scc".
+- reg : Should contain register location and length.
+- interrupts : Should contain interrupt numbers for SCM IRQ and SMN IRQ.
+- interrupt-names : Should specify the names "scm" and "smn" for the
+ SCM IRQ and SMN IRQ.
+- clocks: Should contain the clock driving the SCC core.
+- clock-names: Should be set to "ipg".
+
+Example:
+
+ scc: crypto@53fac000 {
+ compatible = "fsl,imx25-scc";
+ reg = <0x53fac000 0x4000>;
+ clocks = <&clks 111>;
+ clock-names = "ipg";
+ interrupts = <49>, <50>;
+ interrupt-names = "scm", "smn";
+ };
diff --git a/Documentation/devicetree/bindings/crypto/samsung-sss.txt b/Documentation/devicetree/bindings/crypto/samsung-sss.txt
index a6dafa83c6df..7a5ca56683cc 100644
--- a/Documentation/devicetree/bindings/crypto/samsung-sss.txt
+++ b/Documentation/devicetree/bindings/crypto/samsung-sss.txt
@@ -23,10 +23,8 @@ Required properties:
- "samsung,exynos4210-secss" for Exynos4210, Exynos4212, Exynos4412, Exynos5250,
Exynos5260 and Exynos5420 SoCs.
- reg : Offset and length of the register set for the module
-- interrupts : interrupt specifiers of SSS module interrupts, should contain
- following entries:
- - first : feed control interrupt (required for all variants),
- - second : hash interrupt (required only for samsung,s5pv210-secss).
+- interrupts : interrupt specifiers of SSS module interrupts (one feed
+ control interrupt).
- clocks : list of clock phandle and specifier pairs for all clocks listed in
clock-names property.
diff --git a/Documentation/devicetree/bindings/devfreq/event/exynos-nocp.txt b/Documentation/devicetree/bindings/devfreq/event/exynos-nocp.txt
new file mode 100644
index 000000000000..fd459f00aa5a
--- /dev/null
+++ b/Documentation/devicetree/bindings/devfreq/event/exynos-nocp.txt
@@ -0,0 +1,26 @@
+
+* Samsung Exynos NoC (Network on Chip) Probe device
+
+The Samsung Exynos542x SoC has a NoC (Network on Chip) Probe for the NoC bus.
+NoC provides the primitive values used to get the performance data. The packets
+that the Network on Chip (NoC) probes detect are transported over
+the network infrastructure to observer units. You can configure probes to
+capture packets with header or data on the data request/response network,
+or as traffic debug or statistic collectors. The Exynos542x bus has multiple
+NoC probes that provide bandwidth information about the behavior of the SoC,
+which you can use while analyzing system performance.
+
+Required properties:
+- compatible: Should be "samsung,exynos5420-nocp"
+- reg: physical base address of each NoC Probe and length of memory mapped region.
+
+Optional properties:
+- clock-names : the name of clock used by the NoC Probe, "nocp"
+- clocks : phandles for clock specified in "clock-names" property
+
+Example: NoC Probe nodes in Device Tree are listed below.
+
+ nocp_mem0_0: nocp@10CA1000 {
+ compatible = "samsung,exynos5420-nocp";
+ reg = <0x10CA1000 0x200>;
+ };
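+
+A NoC Probe node that also uses the optional clock properties might look as
+follows (the probe address and the clock phandle/index are illustrative):
+
+	nocp_mem0_1: nocp@10CA1400 {
+		compatible = "samsung,exynos5420-nocp";
+		reg = <0x10CA1400 0x200>;
+		clock-names = "nocp";
+		clocks = <&clock 100>;
+	};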
diff --git a/Documentation/devicetree/bindings/devfreq/exynos-bus.txt b/Documentation/devicetree/bindings/devfreq/exynos-bus.txt
new file mode 100644
index 000000000000..d3ec8e676b6b
--- /dev/null
+++ b/Documentation/devicetree/bindings/devfreq/exynos-bus.txt
@@ -0,0 +1,409 @@
+* Generic Exynos Bus frequency device
+
+The Samsung Exynos SoC has many buses for data transfer between DRAM
+and the sub-blocks in the SoC. Most Exynos SoCs share a common bus
+architecture. Generally, each bus of an Exynos SoC includes a source clock
+and a power line, which make it possible to change the clock frequency
+of the bus at runtime. To monitor the usage of each bus at runtime,
+the driver uses the PPMU (Platform Performance Monitoring Unit), which
+is able to measure the current load of the sub-blocks.
+
+The Exynos SoC includes various sub-blocks, each of which has its own AXI bus.
+Each AXI bus has its own source clock but not necessarily its own power
+line; a power line might be shared among several sub-blocks. So, the bus
+devices can be divided into two types according to the role of each
+sub-block. There are two types of bus device, as follows:
+- parent bus device
+- passive bus device
+
+Basically, the parent and passive bus devices share the same power line.
+Only the parent bus device can change the voltage of the shared power line;
+the rest of the bus devices (passive bus devices) follow the decision of
+the parent bus device. If there are three blocks which share the VDD_xxx
+power line, only one block should be the parent device and the rest of the
+blocks should depend on the parent device as passive devices.
+
+ VDD_xxx |--- A block (parent)
+ |--- B block (passive)
+ |--- C block (passive)
+
+The composition differs slightly among Exynos SoCs because each Exynos
+SoC has different sub-blocks. Therefore, such differences should be specified
+in the devicetree file instead of in each device driver. As a result, this
+driver is able to support the bus frequency scaling of all Exynos SoCs.
+
+Required properties for all bus devices:
+- compatible: Should be "samsung,exynos-bus".
+- clock-names : the name of clock used by the bus, "bus".
+- clocks : phandles for clock specified in "clock-names" property.
+- operating-points-v2: the OPP table including frequency/voltage information
+ to support DVFS (Dynamic Voltage/Frequency Scaling) feature.
+
+Required properties only for parent bus device:
+- vdd-supply: the regulator to provide the buses with the voltage.
+- devfreq-events: the devfreq-event device to monitor the current utilization
+ of buses.
+
+Required properties only for passive bus device:
+- devfreq: the parent bus device.
+
+Optional properties only for parent bus device:
+- exynos,saturation-ratio: the percentage value which is used to calibrate
+ the performance count against total cycle count.
+- exynos,voltage-tolerance: the percentage value for bus voltage tolerance
+ which is used to calculate the max voltage.
+
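+For instance, a parent bus node might set these tuning properties as follows
+(the values are only illustrative; see the complete examples below):
+
+	&bus_dmc {
+		exynos,saturation-ratio = <40>;
+		exynos,voltage-tolerance = <2>;
+	};
+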
+Detailed correlation between sub-blocks and power line according to Exynos SoC:
+- In case of Exynos3250, there are two power lines, as follows:
+ VDD_MIF |--- DMC
+
+ VDD_INT |--- LEFTBUS (parent device)
+ |--- PERIL
+ |--- MFC
+ |--- G3D
+ |--- RIGHTBUS
+ |--- PERIR
+ |--- FSYS
+ |--- LCD0
+ |--- PERIR
+ |--- ISP
+ |--- CAM
+
+- In case of Exynos4210, there is one power line, as follows:
+ VDD_INT |--- DMC (parent device)
+ |--- LEFTBUS
+ |--- PERIL
+ |--- MFC(L)
+ |--- G3D
+ |--- TV
+ |--- LCD0
+ |--- RIGHTBUS
+ |--- PERIR
+ |--- MFC(R)
+ |--- CAM
+ |--- FSYS
+ |--- GPS
+ |--- LCD0
+ |--- LCD1
+
+- In case of Exynos4x12, there are two power lines, as follows:
+ VDD_MIF |--- DMC
+
+ VDD_INT |--- LEFTBUS (parent device)
+ |--- PERIL
+ |--- MFC(L)
+ |--- G3D
+ |--- TV
+ |--- IMAGE
+ |--- RIGHTBUS
+ |--- PERIR
+ |--- MFC(R)
+ |--- CAM
+ |--- FSYS
+ |--- GPS
+ |--- LCD0
+ |--- ISP
+
+- In case of Exynos5422, there are two power lines, as follows:
+ VDD_MIF |--- DREX 0 (parent device, DRAM EXpress controller)
+ |--- DREX 1
+
+ VDD_INT |--- NoC_Core (parent device)
+ |--- G2D
+ |--- G3D
+ |--- DISP1
+ |--- NoC_WCORE
+ |--- GSCL
+ |--- MSCL
+ |--- ISP
+ |--- MFC
+ |--- GEN
+ |--- PERIS
+ |--- PERIC
+ |--- FSYS
+ |--- FSYS2
+
+Example1:
+	The AXI buses of the Exynos3250 SoC. Exynos3250 divides the buses by
+	power line (regulator). The MIF (Memory Interface) AXI bus is used to
+	transfer data between DRAM and the CPU and uses the VDD_MIF regulator.
+
+ - MIF (Memory Interface) block
+ : VDD_MIF |--- DMC (Dynamic Memory Controller)
+
+ - INT (Internal) block
+ : VDD_INT |--- LEFTBUS (parent device)
+ |--- PERIL
+ |--- MFC
+ |--- G3D
+ |--- RIGHTBUS
+ |--- FSYS
+ |--- LCD0
+ |--- PERIR
+ |--- ISP
+ |--- CAM
+
+ - MIF bus's frequency/voltage table
+ -----------------------
+ |Lv| Freq | Voltage |
+ -----------------------
+ |L1| 50000 |800000 |
+ |L2| 100000 |800000 |
+ |L3| 134000 |800000 |
+ |L4| 200000 |825000 |
+ |L5| 400000 |875000 |
+ -----------------------
+
+ - INT bus's frequency/voltage table
+ ----------------------------------------------------------
+ |Block|LEFTBUS|RIGHTBUS|MCUISP |ISP |PERIL ||VDD_INT |
+ | name| |LCD0 | | | || |
+ | | |FSYS | | | || |
+ | | |MFC | | | || |
+ ----------------------------------------------------------
+ |Mode |*parent|passive |passive|passive|passive|| |
+ ----------------------------------------------------------
+ |Lv |Frequency ||Voltage |
+ ----------------------------------------------------------
+ |L1 |50000 |50000 |50000 |50000 |50000 ||900000 |
+ |L2 |80000 |80000 |80000 |80000 |80000 ||900000 |
+ |L3 |100000 |100000 |100000 |100000 |100000 ||1000000 |
+ |L4 |134000 |134000 |200000 |200000 | ||1000000 |
+ |L5 |200000 |200000 |400000 |300000 | ||1000000 |
+ ----------------------------------------------------------
+
+Example2:
+	The bus of the DMC (Dynamic Memory Controller) block in exynos3250.dtsi
+	is listed below:
+
+ bus_dmc: bus_dmc {
+ compatible = "samsung,exynos-bus";
+ clocks = <&cmu_dmc CLK_DIV_DMC>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_dmc_opp_table>;
+ status = "disabled";
+ };
+
+ bus_dmc_opp_table: opp_table1 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@50000000 {
+ opp-hz = /bits/ 64 <50000000>;
+ opp-microvolt = <800000>;
+ };
+ opp@100000000 {
+ opp-hz = /bits/ 64 <100000000>;
+ opp-microvolt = <800000>;
+ };
+ opp@134000000 {
+ opp-hz = /bits/ 64 <134000000>;
+ opp-microvolt = <800000>;
+ };
+ opp@200000000 {
+ opp-hz = /bits/ 64 <200000000>;
+ opp-microvolt = <825000>;
+ };
+ opp@400000000 {
+ opp-hz = /bits/ 64 <400000000>;
+ opp-microvolt = <875000>;
+ };
+ };
+
+ bus_leftbus: bus_leftbus {
+ compatible = "samsung,exynos-bus";
+ clocks = <&cmu CLK_DIV_GDL>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_leftbus_opp_table>;
+ status = "disabled";
+ };
+
+ bus_rightbus: bus_rightbus {
+ compatible = "samsung,exynos-bus";
+ clocks = <&cmu CLK_DIV_GDR>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_leftbus_opp_table>;
+ status = "disabled";
+ };
+
+ bus_lcd0: bus_lcd0 {
+ compatible = "samsung,exynos-bus";
+ clocks = <&cmu CLK_DIV_ACLK_160>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_leftbus_opp_table>;
+ status = "disabled";
+ };
+
+ bus_fsys: bus_fsys {
+ compatible = "samsung,exynos-bus";
+ clocks = <&cmu CLK_DIV_ACLK_200>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_leftbus_opp_table>;
+ status = "disabled";
+ };
+
+ bus_mcuisp: bus_mcuisp {
+ compatible = "samsung,exynos-bus";
+ clocks = <&cmu CLK_DIV_ACLK_400_MCUISP>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_mcuisp_opp_table>;
+ status = "disabled";
+ };
+
+ bus_isp: bus_isp {
+ compatible = "samsung,exynos-bus";
+ clocks = <&cmu CLK_DIV_ACLK_266>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_isp_opp_table>;
+ status = "disabled";
+ };
+
+ bus_peril: bus_peril {
+ compatible = "samsung,exynos-bus";
+ clocks = <&cmu CLK_DIV_ACLK_100>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_peril_opp_table>;
+ status = "disabled";
+ };
+
+ bus_mfc: bus_mfc {
+ compatible = "samsung,exynos-bus";
+ clocks = <&cmu CLK_SCLK_MFC>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_leftbus_opp_table>;
+ status = "disabled";
+ };
+
+	bus_leftbus_opp_table: opp_table2 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@50000000 {
+ opp-hz = /bits/ 64 <50000000>;
+ opp-microvolt = <900000>;
+ };
+ opp@80000000 {
+ opp-hz = /bits/ 64 <80000000>;
+ opp-microvolt = <900000>;
+ };
+ opp@100000000 {
+ opp-hz = /bits/ 64 <100000000>;
+ opp-microvolt = <1000000>;
+ };
+ opp@134000000 {
+ opp-hz = /bits/ 64 <134000000>;
+ opp-microvolt = <1000000>;
+ };
+ opp@200000000 {
+ opp-hz = /bits/ 64 <200000000>;
+ opp-microvolt = <1000000>;
+ };
+ };
+
+	bus_mcuisp_opp_table: opp_table3 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@50000000 {
+ opp-hz = /bits/ 64 <50000000>;
+ };
+ opp@80000000 {
+ opp-hz = /bits/ 64 <80000000>;
+ };
+ opp@100000000 {
+ opp-hz = /bits/ 64 <100000000>;
+ };
+ opp@200000000 {
+ opp-hz = /bits/ 64 <200000000>;
+ };
+ opp@400000000 {
+ opp-hz = /bits/ 64 <400000000>;
+ };
+ };
+
+	bus_isp_opp_table: opp_table4 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@50000000 {
+ opp-hz = /bits/ 64 <50000000>;
+ };
+ opp@80000000 {
+ opp-hz = /bits/ 64 <80000000>;
+ };
+ opp@100000000 {
+ opp-hz = /bits/ 64 <100000000>;
+ };
+ opp@200000000 {
+ opp-hz = /bits/ 64 <200000000>;
+ };
+ opp@300000000 {
+ opp-hz = /bits/ 64 <300000000>;
+ };
+ };
+
+	bus_peril_opp_table: opp_table5 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@50000000 {
+ opp-hz = /bits/ 64 <50000000>;
+ };
+ opp@80000000 {
+ opp-hz = /bits/ 64 <80000000>;
+ };
+ opp@100000000 {
+ opp-hz = /bits/ 64 <100000000>;
+ };
+ };
+
+
+	A usage example that handles the frequency and voltage of the buses
+	at runtime in exynos3250-rinato.dts is listed below:
+
+ &bus_dmc {
+ devfreq-events = <&ppmu_dmc0_3>, <&ppmu_dmc1_3>;
+ vdd-supply = <&buck1_reg>; /* VDD_MIF */
+ status = "okay";
+ };
+
+ &bus_leftbus {
+ devfreq-events = <&ppmu_leftbus_3>, <&ppmu_rightbus_3>;
+ vdd-supply = <&buck3_reg>;
+ status = "okay";
+ };
+
+ &bus_rightbus {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+ };
+
+ &bus_lcd0 {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+ };
+
+ &bus_fsys {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+ };
+
+ &bus_mcuisp {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+ };
+
+ &bus_isp {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+ };
+
+ &bus_peril {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+ };
+
+ &bus_mfc {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+ };
diff --git a/Documentation/devicetree/bindings/display/exynos/exynos_dsim.txt b/Documentation/devicetree/bindings/display/exynos/exynos_dsim.txt
index 22756b3dede2..a78265993665 100644
--- a/Documentation/devicetree/bindings/display/exynos/exynos_dsim.txt
+++ b/Documentation/devicetree/bindings/display/exynos/exynos_dsim.txt
@@ -41,7 +41,7 @@ Video interfaces:
endpoint node connected from mic node (reg = 0):
- remote-endpoint: specifies the endpoint in mic node. This node is required
for Exynos5433 mipi dsi. So mic can access to panel node
- thoughout this dsi node.
+		       through this dsi node.
endpoint node connected to panel node (reg = 1):
- remote-endpoint: specifies the endpoint in panel node. This node is
required in all kinds of exynos mipi dsi to represent
diff --git a/Documentation/devicetree/bindings/dma/brcm,bcm2835-dma.txt b/Documentation/devicetree/bindings/dma/brcm,bcm2835-dma.txt
index 1396078d15ac..baf9b34d20bf 100644
--- a/Documentation/devicetree/bindings/dma/brcm,bcm2835-dma.txt
+++ b/Documentation/devicetree/bindings/dma/brcm,bcm2835-dma.txt
@@ -12,6 +12,10 @@ Required properties:
- reg: Should contain DMA registers location and length.
- interrupts: Should contain the DMA interrupts associated
to the DMA channels in ascending order.
+- interrupt-names: Should contain the names of the interrupts
+ in the form "dmaXX".
+ Use "dma-shared-all" for the common interrupt line
+ that is shared by all dma channels.
- #dma-cells: Must be <1>, the cell in the dmas property of the
client device represents the DREQ number.
- brcm,dma-channel-mask: Bit mask representing the channels
@@ -34,13 +38,35 @@ dma: dma@7e007000 {
<1 24>,
<1 25>,
<1 26>,
+ /* dma channel 11-14 share one irq */
<1 27>,
+ <1 27>,
+ <1 27>,
+ <1 27>,
+ /* unused shared irq for all channels */
<1 28>;
+ interrupt-names = "dma0",
+ "dma1",
+ "dma2",
+ "dma3",
+ "dma4",
+ "dma5",
+ "dma6",
+ "dma7",
+ "dma8",
+ "dma9",
+ "dma10",
+ "dma11",
+ "dma12",
+ "dma13",
+ "dma14",
+ "dma-shared-all";
#dma-cells = <1>;
brcm,dma-channel-mask = <0x7f35>;
};
+
DMA clients connected to the BCM2835 DMA controller must use the format
described in the dma.txt file, using a two-cell specifier for each channel.
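+
+For instance, a client node might look like this (the I2S node and the DREQ
+numbers 2 and 3 used here are illustrative):
+
+i2s@7e203000 {
+	/* ... */
+	dmas = <&dma 2>,
+	       <&dma 3>;
+	dma-names = "tx", "rx";
+};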
diff --git a/Documentation/devicetree/bindings/dma/fsl-imx-sdma.txt b/Documentation/devicetree/bindings/dma/fsl-imx-sdma.txt
index dc8d3aac1aa9..175f0e44ed85 100644
--- a/Documentation/devicetree/bindings/dma/fsl-imx-sdma.txt
+++ b/Documentation/devicetree/bindings/dma/fsl-imx-sdma.txt
@@ -58,6 +58,15 @@ The third cell specifies the transfer priority as below.
1 Medium
2 Low
+Optional properties:
+
+- gpr : The phandle to the General Purpose Register (GPR) node.
+- fsl,sdma-event-remap : Register bits of sdma event remap, the format is
+ <reg shift val>.
+ reg is the GPR register offset.
+ shift is the bit position inside the GPR register.
+ val is the value of the bit (0 or 1).
+
Examples:
sdma@83fb0000 {
@@ -83,3 +92,21 @@ ssi2: ssi@70014000 {
dma-names = "rx", "tx";
fsl,fifo-depth = <15>;
};
+
+Using the fsl,sdma-event-remap property:
+
+If we want to use SDMA on the SAI1 port on a MX6SX:
+
+&sdma {
+ gpr = <&gpr>;
+ /* SDMA events remap for SAI1_RX and SAI1_TX */
+ fsl,sdma-event-remap = <0 15 1>, <0 16 1>;
+};
+
+The fsl,sdma-event-remap property in this case has two values:
+- <0 15 1> means that the offset is 0, so GPR0 is the register of the
+SDMA remap. Bit 15 of GPR0 selects between UART4_RX and SAI1_RX.
+Setting bit 15 to 1 selects SAI1_RX.
+- <0 16 1> means that the offset is 0, so GPR0 is the register of the
+SDMA remap. Bit 16 of GPR0 selects between UART4_TX and SAI1_TX.
+Setting bit 16 to 1 selects SAI1_TX.
diff --git a/Documentation/devicetree/bindings/dma/mv-xor.txt b/Documentation/devicetree/bindings/dma/mv-xor.txt
index 276ef815ef32..c075f5988135 100644
--- a/Documentation/devicetree/bindings/dma/mv-xor.txt
+++ b/Documentation/devicetree/bindings/dma/mv-xor.txt
@@ -1,7 +1,10 @@
* Marvell XOR engines
Required properties:
-- compatible: Should be "marvell,orion-xor" or "marvell,armada-380-xor"
+- compatible: Should be one of the following:
+ - "marvell,orion-xor"
+ - "marvell,armada-380-xor"
+ - "marvell,armada-3700-xor".
- reg: Should contain registers location and length (two sets)
the first set is the low registers, the second set the high
registers for the XOR engine.
diff --git a/Documentation/devicetree/bindings/dma/tegra20-apbdma.txt b/Documentation/devicetree/bindings/dma/nvidia,tegra20-apbdma.txt
index c6908e7c42cc..c6908e7c42cc 100644
--- a/Documentation/devicetree/bindings/dma/tegra20-apbdma.txt
+++ b/Documentation/devicetree/bindings/dma/nvidia,tegra20-apbdma.txt
diff --git a/Documentation/devicetree/bindings/dma/nvidia,tegra210-adma.txt b/Documentation/devicetree/bindings/dma/nvidia,tegra210-adma.txt
new file mode 100644
index 000000000000..1e1dc8f972e4
--- /dev/null
+++ b/Documentation/devicetree/bindings/dma/nvidia,tegra210-adma.txt
@@ -0,0 +1,55 @@
+* NVIDIA Tegra Audio DMA (ADMA) controller
+
+The Tegra Audio DMA controller is used for transferring data
+between system memory and the Audio Processing Engine (APE).
+
+Required properties:
+- compatible: Must be "nvidia,tegra210-adma".
+- reg: Should contain DMA registers location and length. This should be
+ a single entry that includes all of the per-channel registers in one
+ contiguous bank.
+- interrupt-parent: Phandle to the interrupt parent controller.
+- interrupts: Should contain all of the per-channel DMA interrupts in
+ ascending order with respect to the DMA channel index.
+- clocks: Must contain one entry for the ADMA module clock
+ (TEGRA210_CLK_D_AUDIO).
+- clock-names: Must contain the name "d_audio" for the corresponding
+ 'clocks' entry.
+- #dma-cells : Must be 1. The first cell denotes the receive/transmit
+ request number and should be between 1 and the maximum number of
+ requests supported. This value corresponds to the RX/TX_REQUEST_SELECT
+ fields in the ADMA_CHn_CTRL register.
+
+
+Example:
+
+adma: dma@702e2000 {
+ compatible = "nvidia,tegra210-adma";
+ reg = <0x0 0x702e2000 0x0 0x2000>;
+ interrupt-parent = <&tegra_agic>;
+ interrupts = <GIC_SPI 24 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 25 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 27 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 35 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 42 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 43 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 44 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 45 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&tegra_car TEGRA210_CLK_D_AUDIO>;
+ clock-names = "d_audio";
+ #dma-cells = <1>;
+};
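+
+A client on the APE (for example an ADMAIF or I2S controller) would then
+reference the controller with a single-cell request selector, e.g. (the
+selector value 1 used here is illustrative):
+
+	dmas = <&adma 1>, <&adma 1>;
+	dma-names = "rx", "tx";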
diff --git a/Documentation/devicetree/bindings/dma/qcom_bam_dma.txt b/Documentation/devicetree/bindings/dma/qcom_bam_dma.txt
index 1c9d48ea4914..9cbf5d9df8fd 100644
--- a/Documentation/devicetree/bindings/dma/qcom_bam_dma.txt
+++ b/Documentation/devicetree/bindings/dma/qcom_bam_dma.txt
@@ -13,6 +13,8 @@ Required properties:
- clock-names: must contain "bam_clk" entry
- qcom,ee : indicates the active Execution Environment identifier (0-7) used in
the secure world.
+- qcom,controlled-remotely : optional, indicates that the bam is controlled by
+  a remote processor, i.e. the execution environment.
Example:
diff --git a/Documentation/devicetree/bindings/dma/snps-dma.txt b/Documentation/devicetree/bindings/dma/snps-dma.txt
index c261598164a7..0f5583293c9c 100644
--- a/Documentation/devicetree/bindings/dma/snps-dma.txt
+++ b/Documentation/devicetree/bindings/dma/snps-dma.txt
@@ -13,6 +13,11 @@ Required properties:
- chan_priority: priority of channels. 0 (default): increase from chan 0->n, 1:
increase from chan n->0
- block_size: Maximum block size supported by the controller
+- data-width: Maximum data width supported by hardware per AHB master
+ (in bytes, power of 2)
+
+
+Deprecated properties:
- data_width: Maximum data width supported by hardware per AHB master
(0 - 8bits, 1 - 16bits, ..., 5 - 256bits)
@@ -38,7 +43,7 @@ Example:
chan_allocation_order = <1>;
chan_priority = <1>;
block_size = <0xfff>;
- data_width = <3 3>;
+ data-width = <8 8>;
};
DMA clients connected to the Designware DMA controller must use the format
@@ -47,8 +52,8 @@ The four cells in order are:
1. A phandle pointing to the DMA controller
2. The DMA request line number
-3. Source master for transfers on allocated channel
-4. Destination master for transfers on allocated channel
+3. Memory master for transfers on allocated channel
+4. Peripheral master for transfers on allocated channel
Example:
diff --git a/Documentation/devicetree/bindings/dma/xilinx/xilinx_dma.txt b/Documentation/devicetree/bindings/dma/xilinx/xilinx_dma.txt
index 2291c4098730..3cf0072d3141 100644
--- a/Documentation/devicetree/bindings/dma/xilinx/xilinx_dma.txt
+++ b/Documentation/devicetree/bindings/dma/xilinx/xilinx_dma.txt
@@ -7,7 +7,7 @@ Required properties:
- compatible: Should be "xlnx,axi-dma-1.00.a"
- #dma-cells: Should be <1>, see "dmas" property below
- reg: Should contain DMA registers location and length.
-- dma-channel child node: Should have atleast one channel and can have upto
+- dma-channel child node: Should have at least one channel and can have up to
two channels per device. This node specifies the properties of each
DMA channel (see child node properties below).
diff --git a/Documentation/devicetree/bindings/dma/xilinx/xilinx_vdma.txt b/Documentation/devicetree/bindings/dma/xilinx/xilinx_vdma.txt
index e4c4d47f8137..a1f2683c49bf 100644
--- a/Documentation/devicetree/bindings/dma/xilinx/xilinx_vdma.txt
+++ b/Documentation/devicetree/bindings/dma/xilinx/xilinx_vdma.txt
@@ -3,18 +3,44 @@ It can be configured to have one channel or two channels. If configured
as two channels, one is to transmit to the video device and another is
to receive from the video device.
+The Xilinx AXI DMA engine does transfers between memory and AXI4-Stream
+target devices. It can be configured to have one channel or two channels.
+If configured as two channels, one is to transmit to the device and another
+is to receive from the device.
+
+The Xilinx AXI CDMA engine does transfers between a memory-mapped source
+address and a memory-mapped destination address.
+
Required properties:
-- compatible: Should be "xlnx,axi-vdma-1.00.a"
+- compatible: Should be "xlnx,axi-vdma-1.00.a" or "xlnx,axi-dma-1.00.a" or
+	      "xlnx,axi-cdma-1.00.a"
- #dma-cells: Should be <1>, see "dmas" property below
- reg: Should contain VDMA registers location and length.
-- xlnx,num-fstores: Should be the number of framebuffers as configured in h/w.
+- xlnx,addrwidth: Should be the vdma addressing size in bits (e.g. 32 bits).
+- dma-ranges: Should be as the following <dma_addr cpu_addr max_len>.
- dma-channel child node: Should have at least one channel and can have up to
two channels per device. This node specifies the properties of each
DMA channel (see child node properties below).
+- clocks: Input clock specifier. Refer to common clock bindings.
+- clock-names: List of input clocks
+ For VDMA:
+ Required elements: "s_axi_lite_aclk"
+ Optional elements: "m_axi_mm2s_aclk" "m_axi_s2mm_aclk",
+ "m_axis_mm2s_aclk", "s_axis_s2mm_aclk"
+ For CDMA:
+ Required elements: "s_axi_lite_aclk", "m_axi_aclk"
+	For AXIDMA:
+ Required elements: "s_axi_lite_aclk"
+ Optional elements: "m_axi_mm2s_aclk", "m_axi_s2mm_aclk",
+ "m_axi_sg_aclk"
+
+Required properties for VDMA:
+- xlnx,num-fstores: Should be the number of framebuffers as configured in h/w.
Optional properties:
- xlnx,include-sg: Tells configured for Scatter-mode in
the hardware.
+Optional properties for VDMA:
- xlnx,flush-fsync: Tells which channel to Flush on Frame sync.
It takes following values:
{1}, flush both channels
@@ -31,6 +57,7 @@ Required child node properties:
Optional child node properties:
- xlnx,include-dre: Tells hardware is configured for Data
Realignment Engine.
+Optional child node properties for VDMA:
- xlnx,genlock-mode: Tells Genlock synchronization is
enabled/disabled in hardware.
@@ -41,8 +68,13 @@ axi_vdma_0: axivdma@40030000 {
compatible = "xlnx,axi-vdma-1.00.a";
#dma_cells = <1>;
reg = < 0x40030000 0x10000 >;
+ dma-ranges = <0x00000000 0x00000000 0x40000000>;
xlnx,num-fstores = <0x8>;
xlnx,flush-fsync = <0x1>;
+ xlnx,addrwidth = <0x20>;
+ clocks = <&clk 0>, <&clk 1>, <&clk 2>, <&clk 3>, <&clk 4>;
+ clock-names = "s_axi_lite_aclk", "m_axi_mm2s_aclk", "m_axi_s2mm_aclk",
+ "m_axis_mm2s_aclk", "s_axis_s2mm_aclk";
dma-channel@40030000 {
compatible = "xlnx,axi-vdma-mm2s-channel";
interrupts = < 0 54 4 >;
diff --git a/Documentation/devicetree/bindings/gpio/gpio-74x164.txt b/Documentation/devicetree/bindings/gpio/gpio-74x164.txt
index cc2608021f26..ce1b2231bf5d 100644
--- a/Documentation/devicetree/bindings/gpio/gpio-74x164.txt
+++ b/Documentation/devicetree/bindings/gpio/gpio-74x164.txt
@@ -1,7 +1,9 @@
* Generic 8-bits shift register GPIO driver
Required properties:
-- compatible : Should be "fairchild,74hc595"
+- compatible: Should contain one of the following:
+ "fairchild,74hc595"
+ "nxp,74lvc594"
- reg : chip select number
- gpio-controller : Marks the device node as a gpio controller.
- #gpio-cells : Should be two. The first cell is the pin number and
diff --git a/Documentation/devicetree/bindings/gpio/gpio-mpc8xxx.txt b/Documentation/devicetree/bindings/gpio/gpio-mpc8xxx.txt
index 120bc4971cf3..4b6cc632ca5c 100644
--- a/Documentation/devicetree/bindings/gpio/gpio-mpc8xxx.txt
+++ b/Documentation/devicetree/bindings/gpio/gpio-mpc8xxx.txt
@@ -1,9 +1,10 @@
-* Freescale MPC512x/MPC8xxx/Layerscape GPIO controller
+* Freescale MPC512x/MPC8xxx/QorIQ/Layerscape GPIO controller
Required properties:
- compatible : Should be "fsl,<soc>-gpio"
The following <soc>s are known to be supported:
- mpc5121, mpc5125, mpc8349, mpc8572, mpc8610, pq3, qoriq.
+ mpc5121, mpc5125, mpc8349, mpc8572, mpc8610, pq3, qoriq,
+ ls1021a, ls1043a, ls2080a.
- reg : Address and length of the register set for the device
- interrupts : Should be the port interrupt shared by all 32 pins.
- #gpio-cells : Should be two. The first cell is the pin number and
@@ -15,7 +16,7 @@ Optional properties:
- little-endian : GPIO registers are used as little endian. If not
present registers are used as big endian by default.
-Example:
+Example of gpio-controller node for a mpc5125 SoC:
gpio0: gpio@1100 {
compatible = "fsl,mpc5125-gpio";
@@ -24,3 +25,16 @@ gpio0: gpio@1100 {
interrupts = <78 0x8>;
status = "okay";
};
+
+Example of gpio-controller node for a ls2080a SoC:
+
+gpio0: gpio@2300000 {
+ compatible = "fsl,ls2080a-gpio", "fsl,qoriq-gpio";
+ reg = <0x0 0x2300000 0x0 0x10000>;
+ interrupts = <0 36 0x4>; /* Level high type */
+ gpio-controller;
+ little-endian;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+};
diff --git a/Documentation/devicetree/bindings/gpio/gpio-xlp.txt b/Documentation/devicetree/bindings/gpio/gpio-xlp.txt
index 262ee4ddf2cb..28662d83a43e 100644
--- a/Documentation/devicetree/bindings/gpio/gpio-xlp.txt
+++ b/Documentation/devicetree/bindings/gpio/gpio-xlp.txt
@@ -3,6 +3,8 @@ Netlogic XLP Family GPIO
This GPIO driver is used for following Netlogic XLP SoCs:
XLP832, XLP316, XLP208, XLP980, XLP532
+This GPIO driver is also compatible with the GPIO controller found on
+Broadcom Vulcan ARM64 SoCs.
Required properties:
-------------------
@@ -13,6 +15,7 @@ Required properties:
- "netlogic,xlp208-gpio": For Netlogic XLP208
- "netlogic,xlp980-gpio": For Netlogic XLP980
- "netlogic,xlp532-gpio": For Netlogic XLP532
+ - "brcm,vulcan-gpio": For Broadcom Vulcan ARM64
- reg: Physical base address and length of the controller's registers.
- #gpio-cells: Should be two. The first cell is the pin number and the second
cell is used to specify optional parameters (currently unused).
diff --git a/Documentation/devicetree/bindings/gpio/gpio.txt b/Documentation/devicetree/bindings/gpio/gpio.txt
index 069cdf6f9dac..68d28f62a6f4 100644
--- a/Documentation/devicetree/bindings/gpio/gpio.txt
+++ b/Documentation/devicetree/bindings/gpio/gpio.txt
@@ -131,6 +131,13 @@ Every GPIO controller node must contain both an empty "gpio-controller"
property, and a #gpio-cells integer property, which indicates the number of
cells in a gpio-specifier.
+Some system-on-chips (SoCs) use the concept of GPIO banks. A GPIO bank is an
+instance of a hardware IP core on a silicon die, usually exposed to the
+programmer as a coherent range of I/O addresses. Usually each such bank is
+exposed in the device tree as an individual gpio-controller node, reflecting
+the fact that the hardware was synthesized by reusing the same IP block a
+few times over.
+
Optionally, a GPIO controller may have a "ngpios" property. This property
indicates the number of in-use slots of available slots for GPIOs. The
typical example is something like this: the hardware register is 32 bits
@@ -145,6 +152,21 @@ additional bitmask is needed to specify which GPIOs are actually in use,
and which are dummies. The bindings for this case has not yet been
specified, but should be specified if/when such hardware appears.
+Optionally, a GPIO controller may have a "gpio-line-names" property. This is
+an array of strings defining the names of the GPIO lines going out of the
+GPIO controller. This name should be the most meaningful producer name
+for the system, such as a rail name indicating the usage. Package names
+such as pin name are discouraged: such lines have opaque names (since they
+are by definition generic purpose) and such names are usually not very
+helpful. For example "MMC-CD", "Red LED Vdd" and "ethernet reset" are
+reasonable line names as they describe what the line is used for. "GPIO0"
+is not a good name to give to a GPIO line. Placeholders are discouraged:
+rather use the "" (blank string) if the use of the GPIO line is undefined
+in your design. The names are assigned starting from line offset 0 from
+left to right from the passed array. An incomplete array (where the number
+of passed names is less than ngpios) will still be used up until the last
+provided valid line index.
+
Example:
gpio-controller@00000000 {
@@ -153,6 +175,10 @@ gpio-controller@00000000 {
gpio-controller;
#gpio-cells = <2>;
ngpios = <18>;
+ gpio-line-names = "MMC-CD", "MMC-WP", "VDD eth", "RST eth", "LED R",
+ "LED G", "LED B", "Col A", "Col B", "Col C", "Col D",
+ "Row A", "Row B", "Row C", "Row D", "NMI button",
+ "poweroff", "reset";
}
The GPIO chip may contain GPIO hog definitions. GPIO hogging is a mechanism
diff --git a/Documentation/devicetree/bindings/gpio/ibm,ppc4xx-gpio.txt b/Documentation/devicetree/bindings/gpio/ibm,ppc4xx-gpio.txt
new file mode 100644
index 000000000000..d58b3958f3ea
--- /dev/null
+++ b/Documentation/devicetree/bindings/gpio/ibm,ppc4xx-gpio.txt
@@ -0,0 +1,24 @@
+* IBM/AMCC/APM GPIO Controller for PowerPC 4XX series and compatible SoCs
+
+All GPIOs are pin-shared with other functions. DCRs control whether a
+particular pin that has GPIO capabilities acts as a GPIO or is used for
+another purpose. GPIO outputs are separately programmable to emulate
+an open-drain driver.
+
+Required properties:
+ - compatible: must be "ibm,ppc4xx-gpio"
+ - reg: address and length of the register set for the device
+ - #gpio-cells: must be set to 2. The first cell is the pin number
+ and the second cell is used to specify the gpio polarity:
+ 0 = active high
+ 1 = active low
+ - gpio-controller: marks the device node as a gpio controller.
+
+Example:
+
+GPIO0: gpio@ef600b00 {
+ compatible = "ibm,ppc4xx-gpio";
+ reg = <0xef600b00 0x00000048>;
+ #gpio-cells = <2>;
+ gpio-controller;
+};
diff --git a/Documentation/devicetree/bindings/gpio/microchip,pic32-gpio.txt b/Documentation/devicetree/bindings/gpio/microchip,pic32-gpio.txt
index ef3752889496..dd031fc93b55 100644
--- a/Documentation/devicetree/bindings/gpio/microchip,pic32-gpio.txt
+++ b/Documentation/devicetree/bindings/gpio/microchip,pic32-gpio.txt
@@ -33,7 +33,7 @@ gpio0: gpio0@1f860000 {
gpio-controller;
interrupt-controller;
#interrupt-cells = <2>;
- clocks = <&PBCLK4>;
+ clocks = <&rootclk PB4CLK>;
microchip,gpio-bank = <0>;
gpio-ranges = <&pic32_pinctrl 0 0 16>;
};
diff --git a/Documentation/devicetree/bindings/gpio/nvidia,tegra186-gpio.txt b/Documentation/devicetree/bindings/gpio/nvidia,tegra186-gpio.txt
new file mode 100644
index 000000000000..c82a2e221bc1
--- /dev/null
+++ b/Documentation/devicetree/bindings/gpio/nvidia,tegra186-gpio.txt
@@ -0,0 +1,161 @@
+NVIDIA Tegra186 GPIO controllers
+
+Tegra186 contains two GPIO controllers: a main controller and an "AON"
+controller. This binding document applies to both controllers. The register
+layouts for the controllers share many similarities, but also some significant
+differences. Hence, this document describes closely related but different
+bindings and compatible values.
+
+The Tegra186 GPIO controller allows software to set the IO direction of, and
+read/write the value of, numerous GPIO signals. Routing of GPIO signals to
+package balls is under the control of a separate pin controller HW block. Two
+major sets of registers exist:
+
+a) Security registers, which allow configuration of allowed access to the GPIO
+register set. These registers exist in a single contiguous block of physical
+address space. The size of this block, and the security features available,
+vary between the different GPIO controllers.
+
+Access to this set of registers is not necessary in all circumstances. Code
+that wishes to configure access to the GPIO registers needs access to these
+registers to do so. Code which simply wishes to read or write GPIO data does not
+need access to these registers.
+
+b) GPIO registers, which allow manipulation of the GPIO signals. In some GPIO
+controllers, these registers are exposed via multiple "physical aliases" in
+address space, each of which accesses the same underlying state. See the hardware
+documentation for rationale. Any particular GPIO client is expected to access
+just one of these physical aliases.
+
+Tegra HW documentation describes a unified naming convention for all GPIOs
+implemented by the SoC. Each GPIO is assigned to a port, and a port may control
+a number of GPIOs. Thus, each GPIO is named according to an alphabetical port
+name and an integer GPIO name within the port. For example, GPIO_PA0, GPIO_PN6,
+or GPIO_PCC3.
+
+The number of ports implemented by each GPIO controller varies. The number of
+implemented GPIOs within each port varies. GPIO registers within a controller
+are grouped and laid out according to the port they affect.
+
+The mapping from port name to the GPIO controller that implements that port, and
+the mapping from port name to register offset within a controller, are both
+extremely non-linear. The header file <dt-bindings/gpio/tegra186-gpio.h>
+describes the port-level mapping. In that file, the naming convention for ports
+matches the HW documentation. The values chosen for the names are alphabetically
+sorted within a particular controller. Drivers need to map between the DT GPIO
+IDs and HW register offsets using a lookup table.
+
+Each GPIO controller can generate a number of interrupt signals. Each signal
+represents the aggregate status for all GPIOs within a set of ports. Thus, the
+number of interrupt signals generated by a controller varies as a rough function
+of the number of ports it implements. Note that the HW documentation refers to
+both the overall controller HW module and the sets-of-ports as "controllers".
+
+Each GPIO controller in fact generates multiple interrupt signals for each set
+of ports. Each GPIO may be configured to feed into a specific one of the
+interrupt signals generated by a set-of-ports. The intent is for each generated
+signal to be routed to a different CPU, thus allowing different CPUs to each
+handle subsets of the interrupts within a port. The status of each of these
+per-port-set signals is reported via a separate register. Thus, a driver needs
+to know which status register to observe. This binding currently defines no
+configuration mechanism for this. By default, drivers should use register
+GPIO_${port}_INTERRUPT_STATUS_G1_0. Future revisions to the binding could
+define a property to configure this.
+
+Required properties:
+- compatible
+ Array of strings.
+ One of:
+ - "nvidia,tegra186-gpio".
+ - "nvidia,tegra186-gpio-aon".
+- reg-names
+ Array of strings.
+ Contains a list of names for the register spaces described by the reg
+ property. May contain the following entries, in any order:
+ - "gpio": Mandatory. GPIO control registers. This may cover either:
+ a) The single physical alias that this OS should use.
+ b) All physical aliases that exist in the controller. This is
+ appropriate when the OS is responsible for managing assignment of
+ the physical aliases.
+ - "security": Optional. Security configuration registers.
+ Users of this binding MUST look up entries in the reg property by name,
+ using this reg-names property to do so.
+- reg
+ Array of (physical base address, length) tuples.
+ Must contain one entry per entry in the reg-names property, in a matching
+ order.
+- interrupts
+ Array of interrupt specifiers.
+ The interrupt outputs from the HW block, one per set of ports, in the
+ order the HW manual describes them. The number of entries required varies
+ depending on compatible value:
+ - "nvidia,tegra186-gpio": 6 entries.
+ - "nvidia,tegra186-gpio-aon": 1 entry.
+- gpio-controller
+ Boolean.
+ Marks the device node as a GPIO controller/provider.
+- #gpio-cells
+ Single-cell integer.
+ Must be <2>.
+ Indicates how many cells are used in a consumer's GPIO specifier.
+ In the specifier:
+ - The first cell is the pin number.
+ See <dt-bindings/gpio/tegra186-gpio.h>.
+ - The second cell contains flags:
+ - Bit 0 specifies polarity
+ - 0: Active-high (normal).
+ - 1: Active-low (inverted).
+- interrupt-controller
+ Boolean.
+ Marks the device node as an interrupt controller/provider.
+- #interrupt-cells
+ Single-cell integer.
+ Must be <2>.
+ Indicates how many cells are used in a consumer's interrupt specifier.
+ In the specifier:
+ - The first cell is the GPIO number.
+ See <dt-bindings/gpio/tegra186-gpio.h>.
+    - The second cell contains flags:
+ - Bits [3:0] indicate trigger type and level:
+ - 1: Low-to-high edge triggered.
+ - 2: High-to-low edge triggered.
+ - 4: Active high level-sensitive.
+ - 8: Active low level-sensitive.
+ Valid combinations are 1, 2, 3, 4, 8.
+
+Example:
+
+#include <dt-bindings/interrupt-controller/irq.h>
+
+gpio@2200000 {
+ compatible = "nvidia,tegra186-gpio";
+ reg-names = "security", "gpio";
+ reg =
+ <0x0 0x2200000 0x0 0x10000>,
+ <0x0 0x2210000 0x0 0x10000>;
+ interrupts =
+ <0 47 IRQ_TYPE_LEVEL_HIGH>,
+ <0 50 IRQ_TYPE_LEVEL_HIGH>,
+ <0 53 IRQ_TYPE_LEVEL_HIGH>,
+ <0 56 IRQ_TYPE_LEVEL_HIGH>,
+ <0 59 IRQ_TYPE_LEVEL_HIGH>,
+ <0 180 IRQ_TYPE_LEVEL_HIGH>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+};
+
+gpio@c2f0000 {
+ compatible = "nvidia,tegra186-gpio-aon";
+ reg-names = "security", "gpio";
+ reg =
+ <0x0 0xc2f0000 0x0 0x1000>,
+ <0x0 0xc2f1000 0x0 0x1000>;
+ interrupts =
+ <0 60 IRQ_TYPE_LEVEL_HIGH>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+};
diff --git a/Documentation/devicetree/bindings/gpio/wd,mbl-gpio.txt b/Documentation/devicetree/bindings/gpio/wd,mbl-gpio.txt
new file mode 100644
index 000000000000..038c3a6a1f4d
--- /dev/null
+++ b/Documentation/devicetree/bindings/gpio/wd,mbl-gpio.txt
@@ -0,0 +1,38 @@
+Bindings for Western Digital's MyBook Live memory-mapped GPIO controllers.
+
+The Western Digital MyBook Live has two memory-mapped GPIO controllers.
+Both GPIO controllers have only a single 8-bit data register, where the GPIO
+state can be read and/or written.
+
+Required properties:
+ - compatible: should be "wd,mbl-gpio"
+ - reg-names: must contain
+ "dat" - data register
+ - reg: address + size pairs describing the GPIO register sets;
+ order must correspond with the order of entries in reg-names
+ - #gpio-cells: must be set to 2. The first cell is the pin number and
+ the second cell is used to specify the gpio polarity:
+ 0 = active high
+ 1 = active low
+ - gpio-controller: Marks the device node as a gpio controller.
+
+Optional properties:
+ - no-output: GPIOs are read-only.
+
+Examples:
+ gpio0: gpio0@e0000000 {
+ compatible = "wd,mbl-gpio";
+ reg-names = "dat";
+ reg = <0xe0000000 0x1>;
+ #gpio-cells = <2>;
+ gpio-controller;
+ };
+
+ gpio1: gpio1@e0100000 {
+ compatible = "wd,mbl-gpio";
+ reg-names = "dat";
+ reg = <0xe0100000 0x1>;
+ #gpio-cells = <2>;
+ gpio-controller;
+ no-output;
+ };
diff --git a/Documentation/devicetree/bindings/gpu/nvidia,gk20a.txt b/Documentation/devicetree/bindings/gpu/nvidia,gk20a.txt
index 23bfe8e1f7cc..ff3db65e50de 100644
--- a/Documentation/devicetree/bindings/gpu/nvidia,gk20a.txt
+++ b/Documentation/devicetree/bindings/gpu/nvidia,gk20a.txt
@@ -1,9 +1,10 @@
-NVIDIA GK20A Graphics Processing Unit
+NVIDIA Tegra Graphics Processing Units
Required properties:
-- compatible: "nvidia,<chip>-<gpu>"
+- compatible: "nvidia,<gpu>"
Currently recognized values:
- - nvidia,tegra124-gk20a
+ - nvidia,gk20a
+ - nvidia,gm20b
- reg: Physical base address and length of the controller's registers.
Must contain two entries:
- first entry for bar0
@@ -19,14 +20,20 @@ Required properties:
- clock-names: Must include the following entries:
- gpu
- pwr
+If the compatible string is "nvidia,gm20b", then the following clock
+is also required:
+ - ref
- resets: Must contain an entry for each entry in reset-names.
See ../reset/reset.txt for details.
- reset-names: Must include the following entries:
- gpu
-Example:
+Optional properties:
+- iommus: A reference to the IOMMU. See ../iommu/iommu.txt for details.
- gpu@0,57000000 {
+Example for GK20A:
+
+ gpu@57000000 {
compatible = "nvidia,gk20a";
reg = <0x0 0x57000000 0x0 0x01000000>,
<0x0 0x58000000 0x0 0x01000000>;
@@ -39,5 +46,25 @@ Example:
clock-names = "gpu", "pwr";
resets = <&tegra_car 184>;
reset-names = "gpu";
+ iommus = <&mc TEGRA_SWGROUP_GPU>;
+ status = "disabled";
+ };
+
+Example for GM20B:
+
+ gpu@57000000 {
+ compatible = "nvidia,gm20b";
+ reg = <0x0 0x57000000 0x0 0x01000000>,
+ <0x0 0x58000000 0x0 0x01000000>;
+ interrupts = <GIC_SPI 157 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 158 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "stall", "nonstall";
+ clocks = <&tegra_car TEGRA210_CLK_GPU>,
+ <&tegra_car TEGRA210_CLK_PLL_P_OUT5>,
+ <&tegra_car TEGRA210_CLK_PLL_G_REF>;
+ clock-names = "gpu", "pwr", "ref";
+ resets = <&tegra_car 184>;
+ reset-names = "gpu";
+ iommus = <&mc TEGRA_SWGROUP_GPU>;
status = "disabled";
};
diff --git a/Documentation/devicetree/bindings/hwmon/ltc2978.txt b/Documentation/devicetree/bindings/hwmon/ltc2978.txt
index a7afbf60bb9c..bf2a47bbdc58 100644
--- a/Documentation/devicetree/bindings/hwmon/ltc2978.txt
+++ b/Documentation/devicetree/bindings/hwmon/ltc2978.txt
@@ -13,6 +13,7 @@ Required properties:
* "lltc,ltc3886"
* "lltc,ltc3887"
* "lltc,ltm2987"
+ * "lltc,ltm4675"
* "lltc,ltm4676"
- reg: I2C slave address
diff --git a/Documentation/devicetree/bindings/i2c/i2c-octeon.txt b/Documentation/devicetree/bindings/i2c/i2c-octeon.txt
index dced82ebe31d..872d485dffab 100644
--- a/Documentation/devicetree/bindings/i2c/i2c-octeon.txt
+++ b/Documentation/devicetree/bindings/i2c/i2c-octeon.txt
@@ -4,6 +4,12 @@
Compatibility with all cn3XXX, cn5XXX and cn6XXX SOCs.
+ or
+
+ compatible: "cavium,octeon-7890-twsi"
+
+ Compatibility with cn78XX SOCs.
+
- reg: The base address of the TWSI/I2C bus controller register bank.
- #address-cells: Must be <1>.
diff --git a/Documentation/devicetree/bindings/i2c/i2c-rcar.txt b/Documentation/devicetree/bindings/i2c/i2c-rcar.txt
index cf8bfc956cdc..5f0cb502b1db 100644
--- a/Documentation/devicetree/bindings/i2c/i2c-rcar.txt
+++ b/Documentation/devicetree/bindings/i2c/i2c-rcar.txt
@@ -19,6 +19,9 @@ Optional properties:
- clock-frequency: desired I2C bus clock frequency in Hz. The absence of this
property indicates the default frequency 100 kHz.
- clocks: clock specifier.
+- dmas: Must contain a list of two references to DMA specifiers, one for
+ transmission, and one for reception.
+- dma-names: Must contain a list of two DMA names, "tx" and "rx".
- i2c-scl-falling-time-ns: see i2c.txt
- i2c-scl-internal-delay-ns: see i2c.txt
diff --git a/Documentation/devicetree/bindings/iio/accel/mma8452.txt b/Documentation/devicetree/bindings/iio/accel/mma8452.txt
index 165937e1ac1c..45f5c5c5929c 100644
--- a/Documentation/devicetree/bindings/iio/accel/mma8452.txt
+++ b/Documentation/devicetree/bindings/iio/accel/mma8452.txt
@@ -1,4 +1,4 @@
-Freescale MMA8451Q, MMA8452Q, MMA8453Q, MMA8652FC or MMA8653FC
+Freescale MMA8451Q, MMA8452Q, MMA8453Q, MMA8652FC, MMA8653FC or FXLS8471Q
triaxial accelerometer
Required properties:
@@ -9,6 +9,7 @@ Required properties:
* "fsl,mma8453"
* "fsl,mma8652"
* "fsl,mma8653"
+ * "fsl,fxls8471"
- reg: the I2C address of the chip
diff --git a/Documentation/devicetree/bindings/iio/adc/lpc1850-adc.txt b/Documentation/devicetree/bindings/iio/adc/lpc1850-adc.txt
new file mode 100644
index 000000000000..0bcae5140bc5
--- /dev/null
+++ b/Documentation/devicetree/bindings/iio/adc/lpc1850-adc.txt
@@ -0,0 +1,21 @@
+NXP LPC1850 ADC bindings
+
+Required properties:
+- compatible: Should be "nxp,lpc1850-adc"
+- reg: Offset and length of the register set for the ADC device
+- interrupts: The interrupt number for the ADC device
+- clocks: The root clock of the ADC controller
+- vref-supply: The regulator supplying the ADC reference voltage
+- resets: phandle to reset controller and line specifier
+
+Example:
+
+adc0: adc@400e3000 {
+ compatible = "nxp,lpc1850-adc";
+ reg = <0x400e3000 0x1000>;
+ interrupts = <17>;
+ clocks = <&ccu1 CLK_APB3_ADC0>;
+ vref-supply = <&reg_vdda>;
+ resets = <&rgu 40>;
+ status = "disabled";
+};
diff --git a/Documentation/devicetree/bindings/staging/iio/adc/mxs-lradc.txt b/Documentation/devicetree/bindings/iio/adc/mxs-lradc.txt
index 555fb117d4fa..555fb117d4fa 100644
--- a/Documentation/devicetree/bindings/staging/iio/adc/mxs-lradc.txt
+++ b/Documentation/devicetree/bindings/iio/adc/mxs-lradc.txt
diff --git a/Documentation/devicetree/bindings/iio/adc/rockchip-saradc.txt b/Documentation/devicetree/bindings/iio/adc/rockchip-saradc.txt
index a9a5fe19ff2a..bf99e2f24788 100644
--- a/Documentation/devicetree/bindings/iio/adc/rockchip-saradc.txt
+++ b/Documentation/devicetree/bindings/iio/adc/rockchip-saradc.txt
@@ -1,7 +1,11 @@
Rockchip Successive Approximation Register (SAR) A/D Converter bindings
Required properties:
-- compatible: Should be "rockchip,saradc" or "rockchip,rk3066-tsadc"
+- compatible: should be "rockchip,<name>-saradc" or "rockchip,rk3066-tsadc"
+ - "rockchip,saradc": for rk3188, rk3288
+ - "rockchip,rk3066-tsadc": for rk3036
+ - "rockchip,rk3399-saradc": for rk3399
+
- reg: physical base address of the controller and length of memory mapped
region.
- interrupts: The interrupt number to the cpu. The interrupt specifier format
diff --git a/Documentation/devicetree/bindings/iio/dac/ad5592r.txt b/Documentation/devicetree/bindings/iio/dac/ad5592r.txt
new file mode 100644
index 000000000000..989f96f31c66
--- /dev/null
+++ b/Documentation/devicetree/bindings/iio/dac/ad5592r.txt
@@ -0,0 +1,155 @@
+Analog Devices AD5592R/AD5593R DAC/ADC device driver
+
+Required properties for the AD5592R:
+ - compatible: Must be "adi,ad5592r"
+ - reg: SPI chip select number for the device
+ - spi-max-frequency: Max SPI frequency to use (< 30000000)
+ - spi-cpol: The AD5592R requires inverse clock polarity (CPOL) mode
+
+Required properties for the AD5593R:
+ - compatible: Must be "adi,ad5593r"
+ - reg: I2C address of the device
+
+Required properties for all supported chips:
+ - #address-cells: Should be 1.
+ - #size-cells: Should be 0.
+ - channel nodes:
+ Each child node represents one channel and has the following
+ Required properties:
+		* reg: Pin to which this channel is connected.
+ * adi,mode: Mode or function of this channel.
+ Macros specifying the valid values
+ can be found in <dt-bindings/iio/adi,ad5592r.h>.
+
+ The following values are currently supported:
+ * CH_MODE_UNUSED (the pin is unused)
+ * CH_MODE_ADC (the pin is ADC input)
+ * CH_MODE_DAC (the pin is DAC output)
+ * CH_MODE_DAC_AND_ADC (the pin is DAC output
+ but can be monitored by an ADC, since
+					there is no disadvantage,
+ this should be considered as the
+ preferred DAC mode)
+ * CH_MODE_GPIO (the pin is registered
+ with GPIOLIB)
+ Optional properties:
+ * adi,off-state: State of this channel when unused or the
+ device gets removed. Macros specifying the
+ valid values can be found in
+ <dt-bindings/iio/adi,ad5592r.h>.
+
+ * CH_OFFSTATE_PULLDOWN (the pin is pulled down)
+ * CH_OFFSTATE_OUT_LOW (the pin is output low)
+ * CH_OFFSTATE_OUT_HIGH (the pin is output high)
+ * CH_OFFSTATE_OUT_TRISTATE (the pin is
+ tristated output)
+
+
+Optional properties:
+ - vref-supply: Phandle to the external reference voltage supply. This should
+ only be set if there is an external reference voltage connected to the VREF
+ pin. If the property is not set the internal 2.5V reference is used.
+ - reset-gpios : GPIO spec for the RESET pin. If specified, it will be
+ asserted during driver probe.
+ - gpio-controller: Marks the device node as a GPIO controller.
+ - #gpio-cells: Should be 2. The first cell is the GPIO number and the second
+ cell specifies GPIO flags, as defined in <dt-bindings/gpio/gpio.h>.
+
+AD5592R Example:
+
+ #include <dt-bindings/iio/adi,ad5592r.h>
+
+ vref: regulator-vref {
+ compatible = "regulator-fixed";
+ regulator-name = "vref-ad559x";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ };
+
+ ad5592r@0 {
+ #size-cells = <0>;
+ #address-cells = <1>;
+ #gpio-cells = <2>;
+ compatible = "adi,ad5592r";
+ reg = <0>;
+
+ spi-max-frequency = <1000000>;
+ spi-cpol;
+
+ vref-supply = <&vref>; /* optional */
+ reset-gpios = <&gpio0 86 0>; /* optional */
+ gpio-controller;
+
+ channel@0 {
+ reg = <0>;
+ adi,mode = <CH_MODE_DAC>;
+ };
+ channel@1 {
+ reg = <1>;
+ adi,mode = <CH_MODE_ADC>;
+ };
+ channel@2 {
+ reg = <2>;
+ adi,mode = <CH_MODE_DAC_AND_ADC>;
+ };
+ channel@3 {
+ reg = <3>;
+ adi,mode = <CH_MODE_DAC_AND_ADC>;
+ adi,off-state = <CH_OFFSTATE_PULLDOWN>;
+ };
+ channel@4 {
+ reg = <4>;
+ adi,mode = <CH_MODE_UNUSED>;
+ adi,off-state = <CH_OFFSTATE_PULLDOWN>;
+ };
+ channel@5 {
+ reg = <5>;
+ adi,mode = <CH_MODE_GPIO>;
+ adi,off-state = <CH_OFFSTATE_PULLDOWN>;
+ };
+ channel@6 {
+ reg = <6>;
+ adi,mode = <CH_MODE_GPIO>;
+ adi,off-state = <CH_OFFSTATE_PULLDOWN>;
+ };
+ channel@7 {
+ reg = <7>;
+ adi,mode = <CH_MODE_GPIO>;
+ adi,off-state = <CH_OFFSTATE_PULLDOWN>;
+ };
+ };
+
+AD5593R Example:
+
+ #include <dt-bindings/iio/adi,ad5592r.h>
+
+ ad5593r@10 {
+ #size-cells = <0>;
+ #address-cells = <1>;
+ #gpio-cells = <2>;
+ compatible = "adi,ad5593r";
+ reg = <0x10>;
+ gpio-controller;
+
+ channel@0 {
+ reg = <0>;
+ adi,mode = <CH_MODE_DAC>;
+ adi,off-state = <CH_OFFSTATE_PULLDOWN>;
+ };
+ channel@1 {
+ reg = <1>;
+ adi,mode = <CH_MODE_ADC>;
+ adi,off-state = <CH_OFFSTATE_PULLDOWN>;
+ };
+ channel@2 {
+ reg = <2>;
+ adi,mode = <CH_MODE_DAC_AND_ADC>;
+ adi,off-state = <CH_OFFSTATE_PULLDOWN>;
+ };
+ channel@6 {
+ reg = <6>;
+ adi,mode = <CH_MODE_GPIO>;
+ adi,off-state = <CH_OFFSTATE_PULLDOWN>;
+ };
+ };
diff --git a/Documentation/devicetree/bindings/iio/dac/lpc1850-dac.txt b/Documentation/devicetree/bindings/iio/dac/lpc1850-dac.txt
new file mode 100644
index 000000000000..7d6647d4af5e
--- /dev/null
+++ b/Documentation/devicetree/bindings/iio/dac/lpc1850-dac.txt
@@ -0,0 +1,20 @@
+NXP LPC1850 DAC bindings
+
+Required properties:
+- compatible: Should be "nxp,lpc1850-dac"
+- reg: Offset and length of the register set for the DAC device
+- interrupts: The interrupt number for the DAC device
+- clocks: The root clock of the DAC controller
+- vref-supply: The regulator supplying the DAC reference voltage
+- resets: phandle to reset controller and line specifier
+
+Example:
+dac: dac@400e1000 {
+ compatible = "nxp,lpc1850-dac";
+ reg = <0x400e1000 0x1000>;
+ interrupts = <0>;
+ clocks = <&ccu1 CLK_APB3_DAC>;
+ vref-supply = <&reg_vdda>;
+ resets = <&rgu 42>;
+ status = "disabled";
+};
diff --git a/Documentation/devicetree/bindings/iio/imu/inv_mpu6050.txt b/Documentation/devicetree/bindings/iio/imu/inv_mpu6050.txt
index e4d8f1c52f4a..a9fc11e43b45 100644
--- a/Documentation/devicetree/bindings/iio/imu/inv_mpu6050.txt
+++ b/Documentation/devicetree/bindings/iio/imu/inv_mpu6050.txt
@@ -8,10 +8,23 @@ Required properties:
- interrupt-parent : should be the phandle for the interrupt controller
- interrupts : interrupt mapping for GPIO IRQ
+Optional properties:
+ - mount-matrix: an optional 3x3 mounting rotation matrix
+
+
Example:
mpu6050@68 {
compatible = "invensense,mpu6050";
reg = <0x68>;
interrupt-parent = <&gpio1>;
interrupts = <18 1>;
+ mount-matrix = "-0.984807753012208", /* x0 */
+ "0", /* y0 */
+ "-0.173648177666930", /* z0 */
+ "0", /* x1 */
+ "-1", /* y1 */
+ "0", /* z1 */
+ "-0.173648177666930", /* x2 */
+ "0", /* y2 */
+ "0.984807753012208"; /* z2 */
};
diff --git a/Documentation/devicetree/bindings/iio/magnetometer/ak8975.txt b/Documentation/devicetree/bindings/iio/magnetometer/ak8975.txt
index 011679f1a425..e1e7dd3259f6 100644
--- a/Documentation/devicetree/bindings/iio/magnetometer/ak8975.txt
+++ b/Documentation/devicetree/bindings/iio/magnetometer/ak8975.txt
@@ -8,6 +8,8 @@ Required properties:
Optional properties:
- gpios : should be device tree identifier of the magnetometer DRDY pin
+ - vdd-supply: an optional regulator that needs to be on to provide VDD
+ - mount-matrix: an optional 3x3 mounting rotation matrix
Example:
@@ -15,4 +17,14 @@ ak8975@0c {
compatible = "asahi-kasei,ak8975";
reg = <0x0c>;
gpios = <&gpj0 7 0>;
+ vdd-supply = <&ldo_3v3_gnss>;
+ mount-matrix = "-0.984807753012208", /* x0 */
+ "0", /* y0 */
+ "-0.173648177666930", /* z0 */
+ "0", /* x1 */
+ "-1", /* y1 */
+ "0", /* z1 */
+ "-0.173648177666930", /* x2 */
+ "0", /* y2 */
+ "0.984807753012208"; /* z2 */
};
diff --git a/Documentation/devicetree/bindings/iio/potentiometer/ds1803.txt b/Documentation/devicetree/bindings/iio/potentiometer/ds1803.txt
new file mode 100644
index 000000000000..df77bf552656
--- /dev/null
+++ b/Documentation/devicetree/bindings/iio/potentiometer/ds1803.txt
@@ -0,0 +1,21 @@
+* Maxim Integrated DS1803 digital potentiometer driver
+
+The node for this driver must be a child node of an I2C controller, hence
+all mandatory properties for your controller must be specified. See directory:
+
+ Documentation/devicetree/bindings/i2c
+
+for more details.
+
+Required properties:
+ - compatible: Must be one of the following, depending on the
+ model:
+ "maxim,ds1803-010",
+ "maxim,ds1803-050",
+ "maxim,ds1803-100"
+
+Example:
+ds1803: ds1803@28 {
+ reg = <0x28>;
+ compatible = "maxim,ds1803-010";
+};
diff --git a/Documentation/devicetree/bindings/iio/potentiometer/mcp4131.txt b/Documentation/devicetree/bindings/iio/potentiometer/mcp4131.txt
new file mode 100644
index 000000000000..3ccba16f7035
--- /dev/null
+++ b/Documentation/devicetree/bindings/iio/potentiometer/mcp4131.txt
@@ -0,0 +1,84 @@
+* Microchip MCP413X/414X/415X/416X/423X/424X/425X/426X Digital Potentiometer
+ driver
+
+The node for this driver must be a child node of a SPI controller, hence
+all mandatory properties described in
+
+ Documentation/devicetree/bindings/spi/spi-bus.txt
+
+must be specified.
+
+Required properties:
+ - compatible: Must be one of the following, depending on the
+ model:
+ "microchip,mcp4131-502"
+ "microchip,mcp4131-103"
+ "microchip,mcp4131-503"
+ "microchip,mcp4131-104"
+ "microchip,mcp4132-502"
+ "microchip,mcp4132-103"
+ "microchip,mcp4132-503"
+ "microchip,mcp4132-104"
+ "microchip,mcp4141-502"
+ "microchip,mcp4141-103"
+ "microchip,mcp4141-503"
+ "microchip,mcp4141-104"
+ "microchip,mcp4142-502"
+ "microchip,mcp4142-103"
+ "microchip,mcp4142-503"
+ "microchip,mcp4142-104"
+ "microchip,mcp4151-502"
+ "microchip,mcp4151-103"
+ "microchip,mcp4151-503"
+ "microchip,mcp4151-104"
+ "microchip,mcp4152-502"
+ "microchip,mcp4152-103"
+ "microchip,mcp4152-503"
+ "microchip,mcp4152-104"
+ "microchip,mcp4161-502"
+ "microchip,mcp4161-103"
+ "microchip,mcp4161-503"
+ "microchip,mcp4161-104"
+ "microchip,mcp4162-502"
+ "microchip,mcp4162-103"
+ "microchip,mcp4162-503"
+ "microchip,mcp4162-104"
+ "microchip,mcp4231-502"
+ "microchip,mcp4231-103"
+ "microchip,mcp4231-503"
+ "microchip,mcp4231-104"
+ "microchip,mcp4232-502"
+ "microchip,mcp4232-103"
+ "microchip,mcp4232-503"
+ "microchip,mcp4232-104"
+ "microchip,mcp4241-502"
+ "microchip,mcp4241-103"
+ "microchip,mcp4241-503"
+ "microchip,mcp4241-104"
+ "microchip,mcp4242-502"
+ "microchip,mcp4242-103"
+ "microchip,mcp4242-503"
+ "microchip,mcp4242-104"
+ "microchip,mcp4251-502"
+ "microchip,mcp4251-103"
+ "microchip,mcp4251-503"
+ "microchip,mcp4251-104"
+ "microchip,mcp4252-502"
+ "microchip,mcp4252-103"
+ "microchip,mcp4252-503"
+ "microchip,mcp4252-104"
+ "microchip,mcp4261-502"
+ "microchip,mcp4261-103"
+ "microchip,mcp4261-503"
+ "microchip,mcp4261-104"
+ "microchip,mcp4262-502"
+ "microchip,mcp4262-103"
+ "microchip,mcp4262-503"
+ "microchip,mcp4262-104"
+
+Example:
+mcp4131: mcp4131@0 {
+ compatible = "microchip,mcp4131-502";
+ reg = <0>;
+ spi-max-frequency = <500000>;
+};
diff --git a/Documentation/devicetree/bindings/iio/pressure/hp03.txt b/Documentation/devicetree/bindings/iio/pressure/hp03.txt
new file mode 100644
index 000000000000..54e7e70bcea5
--- /dev/null
+++ b/Documentation/devicetree/bindings/iio/pressure/hp03.txt
@@ -0,0 +1,17 @@
+HopeRF HP03 digital pressure/temperature sensors
+
+Required properties:
+- compatible: must be "hoperf,hp03"
+- xclr-gpio: must be the device tree identifier of the XCLR pin.
+ The XCLR pin is a reset of the ADC in the chip;
+ it must be pulled HI before the conversion and
+ readout of the value from the ADC registers and
+ pulled LO afterward.
+
+Example:
+
+hp03@77 {
+ compatible = "hoperf,hp03";
+ reg = <0x77>;
+ xclr-gpio = <&portc 0 0x0>;
+};
diff --git a/Documentation/devicetree/bindings/iio/pressure/ms5611.txt b/Documentation/devicetree/bindings/iio/pressure/ms5611.txt
new file mode 100644
index 000000000000..17bca866c084
--- /dev/null
+++ b/Documentation/devicetree/bindings/iio/pressure/ms5611.txt
@@ -0,0 +1,19 @@
+MEAS ms5611 family pressure sensors
+
+Pressure sensors from MEAS Switzerland with SPI and I2C bus interfaces.
+
+Required properties:
+- compatible: "meas,ms5611" or "meas,ms5607"
+- reg: the I2C address or SPI chip select the device will respond to
+
+Optional properties:
+- vdd-supply: an optional regulator that needs to be on to provide VDD
+ power to the sensor.
+
+Example:
+
+ms5607@77 {
+ compatible = "meas,ms5607";
+ reg = <0x77>;
+ vdd-supply = <&ldo_3v3_gnss>;
+};
diff --git a/Documentation/devicetree/bindings/iio/st-sensors.txt b/Documentation/devicetree/bindings/iio/st-sensors.txt
index d4b87cc1e446..5844cf72862d 100644
--- a/Documentation/devicetree/bindings/iio/st-sensors.txt
+++ b/Documentation/devicetree/bindings/iio/st-sensors.txt
@@ -16,6 +16,10 @@ Optional properties:
- st,drdy-int-pin: the pin on the package that will be used to signal
"data ready" (valid values: 1 or 2). This property is not configurable
on all sensors.
+- drive-open-drain: the interrupt/data ready line will be configured
+ as open drain, which is useful if several sensors share the same
+ interrupt line. (This binding is taken from pinctrl/pinctrl-bindings.txt)
+ This is a boolean property.
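+
+ A possible consumer node using this property (the compatible is taken from
+ the accelerometer list below; the I2C address, interrupt parent and line
+ are only illustrative):
+
+ lis2dh12@19 {
+ compatible = "st,lis2dh12-accel";
+ reg = <0x19>;
+ interrupt-parent = <&gpio1>;
+ interrupts = <5 IRQ_TYPE_EDGE_FALLING>;
+ st,drdy-int-pin = <1>;
+ drive-open-drain;
+ };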
Sensors may also have applicable pin control settings, those use the
standard bindings from pinctrl/pinctrl-bindings.txt.
@@ -37,6 +41,7 @@ Accelerometers:
- st,lsm330-accel
- st,lsm303agr-accel
- st,lis2dh12-accel
+- st,h3lis331dl-accel
Gyroscopes:
- st,l3g4200d-gyro
@@ -46,6 +51,7 @@ Gyroscopes:
- st,l3gd20-gyro
- st,l3g4is-gyro
- st,lsm330-gyro
+- st,lsm9ds0-gyro
Magnetometers:
- st,lsm303agr-magn
diff --git a/Documentation/devicetree/bindings/input/ads7846.txt b/Documentation/devicetree/bindings/input/ads7846.txt
index c6cfe2e3ed41..9fc47b006fd1 100644
--- a/Documentation/devicetree/bindings/input/ads7846.txt
+++ b/Documentation/devicetree/bindings/input/ads7846.txt
@@ -29,7 +29,7 @@ Optional properties:
ti,vref-delay-usecs vref supply delay in usecs, 0 for
external vref (u16).
ti,vref-mv The VREF voltage, in millivolts (u16).
- Set to 0 to use internal refernce
+ Set to 0 to use the internal reference
(ADS7846).
ti,keep-vref-on set to keep vref on for differential
measurements as well
diff --git a/Documentation/devicetree/bindings/input/gpio-keys.txt b/Documentation/devicetree/bindings/input/gpio-keys.txt
index 21641236c095..a94940481e55 100644
--- a/Documentation/devicetree/bindings/input/gpio-keys.txt
+++ b/Documentation/devicetree/bindings/input/gpio-keys.txt
@@ -32,17 +32,17 @@ Optional subnode-properties:
Example nodes:
- gpio_keys {
+ gpio-keys {
compatible = "gpio-keys";
- #address-cells = <1>;
- #size-cells = <0>;
autorepeat;
- button@21 {
+
+ up {
label = "GPIO Key UP";
linux,code = <103>;
gpios = <&gpio1 0 1>;
};
- button@22 {
+
+ down {
label = "GPIO Key DOWN";
linux,code = <108>;
interrupts = <1 IRQ_TYPE_LEVEL_HIGH 7>;
diff --git a/Documentation/devicetree/bindings/input/touchscreen/brcm,iproc-touchscreen.txt b/Documentation/devicetree/bindings/input/touchscreen/brcm,iproc-touchscreen.txt
index 34e3382a0659..ac5dff412e25 100644
--- a/Documentation/devicetree/bindings/input/touchscreen/brcm,iproc-touchscreen.txt
+++ b/Documentation/devicetree/bindings/input/touchscreen/brcm,iproc-touchscreen.txt
@@ -2,11 +2,17 @@
Required properties:
- compatible: must be "brcm,iproc-touchscreen"
-- reg: physical base address of the controller and length of memory mapped
- region.
+- ts_syscon: phandle of the syscon node defining the physical base
+ address of the controller and the length of the memory mapped region.
+ If this property is used, please make sure the MFD_SYSCON config
+ option is enabled in the defconfig file.
- clocks: The clock provided by the SOC to drive the tsc
-- clock-name: name for the clock
+- clock-names: name for the clock
- interrupts: The touchscreen controller's interrupt
+- #address-cells: Specify the number of u32 entries needed in child nodes.
+ Should be set to 1.
+- #size-cells: Specify the number of u32 entries needed to specify child node size
+ in the reg property. Should be set to 1.
Optional properties:
- scanning_period: Time between scans. Each step is 1024 us. Valid 1-256.
@@ -53,13 +59,18 @@ Optional properties:
- touchscreen-inverted-x: X axis is inverted (boolean)
- touchscreen-inverted-y: Y axis is inverted (boolean)
-Example:
+Example: An example of touchscreen node
- touchscreen: tsc@0x180A6000 {
+ ts_adc_syscon: ts_adc_syscon@180a6000 {
+ compatible = "brcm,iproc-ts-adc-syscon","syscon";
+ reg = <0x180a6000 0xc30>;
+ };
+
+ touchscreen: touchscreen@180A6000 {
compatible = "brcm,iproc-touchscreen";
#address-cells = <1>;
#size-cells = <1>;
- reg = <0x180A6000 0x40>;
+ ts_syscon = <&ts_adc_syscon>;
clocks = <&adc_clk>;
clock-names = "tsc_clk";
interrupts = <GIC_SPI 164 IRQ_TYPE_LEVEL_HIGH>;
diff --git a/Documentation/devicetree/bindings/input/touchscreen/fsl-mx25-tcq.txt b/Documentation/devicetree/bindings/input/touchscreen/fsl-mx25-tcq.txt
index cdf05f9b2329..abfcab3edc66 100644
--- a/Documentation/devicetree/bindings/input/touchscreen/fsl-mx25-tcq.txt
+++ b/Documentation/devicetree/bindings/input/touchscreen/fsl-mx25-tcq.txt
@@ -15,7 +15,7 @@ Optional properties:
- fsl,pen-debounce-ns: Pen debounce time in nanoseconds.
- fsl,pen-threshold: Pen-down threshold for the touchscreen. This is a value
between 1 and 4096. It is the ratio between the internal reference voltage
- and the measured voltage after the plate was precharged. Resistence between
+ and the measured voltage after the plate was precharged. Resistance between
plates and therefore the voltage decreases with pressure so that a smaller
value is equivalent to a higher pressure.
- fsl,settling-time-ns: Settling time in nanoseconds. The settling time is before
diff --git a/Documentation/devicetree/bindings/interrupt-controller/arm,gic-v3.txt b/Documentation/devicetree/bindings/interrupt-controller/arm,gic-v3.txt
index 007a5b46256a..4c29cdab0ea5 100644
--- a/Documentation/devicetree/bindings/interrupt-controller/arm,gic-v3.txt
+++ b/Documentation/devicetree/bindings/interrupt-controller/arm,gic-v3.txt
@@ -11,6 +11,8 @@ Main node required properties:
- interrupt-controller : Identifies the node as an interrupt controller
- #interrupt-cells : Specifies the number of cells needed to encode an
interrupt source. Must be a single cell with a value of at least 3.
+ If the system requires describing PPI affinity, then the value must
+ be at least 4.
The 1st cell is the interrupt type; 0 for SPI interrupts, 1 for PPI
interrupts. Other values are reserved for future use.
@@ -24,7 +26,14 @@ Main node required properties:
1 = edge triggered
4 = level triggered
- Cells 4 and beyond are reserved for future use and must have a value
+ The 4th cell is a phandle to a node describing a set of CPUs this
+ interrupt is affine to. The interrupt must be a PPI, and the node
+ pointed must be a subnode of the "ppi-partitions" subnode. For
+ interrupt types other than PPI or PPIs that are not partitioned,
+ this cell must be zero. See the "ppi-partitions" node description
+ below.
+
+ Cells 5 and beyond are reserved for future use and must have a value
of 0 if present.
- reg : Specifies base physical address(s) and size of the GIC
@@ -50,6 +59,11 @@ Optional
Sub-nodes:
+PPI affinity can be expressed as a single "ppi-partitions" node,
+containing a set of sub-nodes, each with the following property:
+- affinity: Should be a list of phandles to CPU nodes (as described in
+Documentation/devicetree/bindings/arm/cpus.txt).
+
GICv3 has one or more Interrupt Translation Services (ITS) that are
used to route Message Signalled Interrupts (MSI) to the CPUs.
@@ -91,7 +105,7 @@ Examples:
gic: interrupt-controller@2c010000 {
compatible = "arm,gic-v3";
- #interrupt-cells = <3>;
+ #interrupt-cells = <4>;
#address-cells = <2>;
#size-cells = <2>;
ranges;
@@ -119,4 +133,20 @@ Examples:
#msi-cells = <1>;
reg = <0x0 0x2c400000 0 0x200000>;
};
+
+ ppi-partitions {
+ part0: interrupt-partition-0 {
+ affinity = <&cpu0 &cpu2>;
+ };
+
+ part1: interrupt-partition-1 {
+ affinity = <&cpu1 &cpu3>;
+ };
+ };
+ };
+
+
+ device@0 {
+ reg = <0 0 0 4>;
+ interrupts = <1 1 4 &part0>;
};
diff --git a/Documentation/devicetree/bindings/interrupt-controller/arm,versatile-fpga-irq.txt b/Documentation/devicetree/bindings/interrupt-controller/arm,versatile-fpga-irq.txt
index c9cf605bb995..2a1d16bdf834 100644
--- a/Documentation/devicetree/bindings/interrupt-controller/arm,versatile-fpga-irq.txt
+++ b/Documentation/devicetree/bindings/interrupt-controller/arm,versatile-fpga-irq.txt
@@ -6,7 +6,7 @@ controllers are OR:ed together and fed to the CPU tile's IRQ input. Each
instance can handle up to 32 interrupts.
Required properties:
-- compatible: "arm,versatile-fpga-irq"
+- compatible: "arm,versatile-fpga-irq" or "oxsemi,ox810se-rps-irq"
- interrupt-controller: Identifies the node as an interrupt controller
- #interrupt-cells: The number of cells to define the interrupts. Must be 1
as the FPGA IRQ controller has no configuration options for interrupt
diff --git a/Documentation/devicetree/bindings/interrupt-controller/brcm,bcm2835-armctrl-ic.txt b/Documentation/devicetree/bindings/interrupt-controller/brcm,bcm2835-armctrl-ic.txt
index 2d6c8bb4d827..6428a6ba9f4a 100644
--- a/Documentation/devicetree/bindings/interrupt-controller/brcm,bcm2835-armctrl-ic.txt
+++ b/Documentation/devicetree/bindings/interrupt-controller/brcm,bcm2835-armctrl-ic.txt
@@ -71,8 +71,8 @@ Bank 1:
24: DMA8
25: DMA9
26: DMA10
-27: DMA11
-28: DMA12
+27: DMA11-14 - shared interrupt for DMA 11 to 14
+28: DMAALL - triggers on all DMA interrupts (including channel 15)
29: AUX
30: ARM
31: VPUDMA
diff --git a/Documentation/devicetree/bindings/interrupt-controller/brcm,bcm6345-l1-intc.txt b/Documentation/devicetree/bindings/interrupt-controller/brcm,bcm6345-l1-intc.txt
new file mode 100644
index 000000000000..4040905388d9
--- /dev/null
+++ b/Documentation/devicetree/bindings/interrupt-controller/brcm,bcm6345-l1-intc.txt
@@ -0,0 +1,57 @@
+Broadcom BCM6345-style Level 1 interrupt controller
+
+This block is a first level interrupt controller that is typically connected
+directly to one of the HW INT lines on each CPU.
+
+Key elements of the hardware design include:
+
+- 32, 64 or 128 incoming level IRQ lines
+
+- Most onchip peripherals are wired directly to an L1 input
+
+- A separate instance of the register set for each CPU, allowing individual
+ peripheral IRQs to be routed to any CPU
+
+- Contains one or more enable/status word pairs per CPU
+
+- No atomic set/clear operations
+
+- No polarity/level/edge settings
+
+- No FIFO or priority encoder logic; software is expected to read all
+ 2-4 status words to determine which IRQs are pending
+
+Required properties:
+
+- compatible: should be "brcm,bcm<soc>-l1-intc", "brcm,bcm6345-l1-intc"
+- reg: specifies the base physical address and size of the registers;
+ the number of supported IRQs is inferred from the size argument
+- interrupt-controller: identifies the node as an interrupt controller
+- #interrupt-cells: specifies the number of cells needed to encode an interrupt
+ source, should be 1.
+- interrupt-parent: specifies the phandle to the parent interrupt controller(s)
+ this one is cascaded from
+- interrupts: specifies the interrupt line(s) in the interrupt-parent controller
+ node; valid values depend on the type of parent interrupt controller
+
+If multiple reg ranges and interrupt-parent entries are present on an SMP
+system, the driver will allow IRQ SMP affinity to be set up through the
+/proc/irq/ interface. In the simplest possible configuration, only one
+reg range and one interrupt-parent is needed.
+
+The driver operates in native CPU endianness by default; there is no support
+for specifying an alternative endianness.
+
+Example:
+
+periph_intc: interrupt-controller@10000000 {
+ compatible = "brcm,bcm63168-l1-intc", "brcm,bcm6345-l1-intc";
+ reg = <0x10000020 0x20>,
+ <0x10000040 0x20>;
+
+ interrupt-controller;
+ #interrupt-cells = <1>;
+
+ interrupt-parent = <&cpu_intc>;
+ interrupts = <2>, <3>;
+};
diff --git a/Documentation/devicetree/bindings/interrupt-controller/ezchip,nps400-ic.txt b/Documentation/devicetree/bindings/interrupt-controller/ezchip,nps400-ic.txt
new file mode 100644
index 000000000000..888b2b9f7064
--- /dev/null
+++ b/Documentation/devicetree/bindings/interrupt-controller/ezchip,nps400-ic.txt
@@ -0,0 +1,17 @@
+EZchip NPS Interrupt Controller
+
+Required properties:
+
+- compatible : should be "ezchip,nps400-ic"
+- interrupt-controller : Identifies the node as an interrupt controller
+- #interrupt-cells : Specifies the number of cells needed to encode an
+ interrupt source. The value shall be 1.
+
+
+Example:
+
+intc: interrupt-controller {
+ compatible = "ezchip,nps400-ic";
+ interrupt-controller;
+ #interrupt-cells = <1>;
+};
diff --git a/Documentation/devicetree/bindings/interrupt-controller/fsl,ls-scfg-msi.txt b/Documentation/devicetree/bindings/interrupt-controller/fsl,ls-scfg-msi.txt
new file mode 100644
index 000000000000..9e389493203f
--- /dev/null
+++ b/Documentation/devicetree/bindings/interrupt-controller/fsl,ls-scfg-msi.txt
@@ -0,0 +1,30 @@
+* Freescale Layerscape SCFG PCIe MSI controller
+
+Required properties:
+
+- compatible: should be "fsl,<soc-name>-msi" to identify
+ Layerscape PCIe MSI controller block such as:
+ "fsl,ls1021a-msi"
+ "fsl,ls1043a-msi"
+- msi-controller: indicates that this is a PCIe MSI controller node
+- reg: physical base address of the controller and length of memory mapped.
+- interrupts: an interrupt to the parent interrupt controller.
+
+Optional properties:
+- interrupt-parent: the phandle to the parent interrupt controller.
+
+This interrupt controller hardware is a second level interrupt controller that
+is hooked to a parent interrupt controller, e.g. the ARM GIC on ARM-based
+platforms. If interrupt-parent is not provided, the default parent interrupt
+controller will be used.
+Each PCIe node needs to have an msi-parent property that points to the
+MSI controller node.
+
+Examples:
+
+ msi1: msi-controller@1571000 {
+ compatible = "fsl,ls1043a-msi";
+ reg = <0x0 0x1571000 0x0 0x8>;
+ msi-controller;
+ interrupts = <0 116 0x4>;
+ };
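+
+ A PCIe controller node then references the MSI controller through its
+ msi-parent property; the sketch below omits all other PCIe properties
+ and uses an illustrative node name and address:
+
+ pcie@3400000 {
+ ...
+ msi-parent = <&msi1>;
+ };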
diff --git a/Documentation/devicetree/bindings/interrupt-controller/mediatek,sysirq.txt b/Documentation/devicetree/bindings/interrupt-controller/mediatek,sysirq.txt
index b8e1674c7837..8cf564d083d2 100644
--- a/Documentation/devicetree/bindings/interrupt-controller/mediatek,sysirq.txt
+++ b/Documentation/devicetree/bindings/interrupt-controller/mediatek,sysirq.txt
@@ -16,8 +16,7 @@ Required properties:
"mediatek,mt6577-sysirq"
"mediatek,mt2701-sysirq"
- interrupt-controller : Identifies the node as an interrupt controller
-- #interrupt-cells : Use the same format as specified by GIC in
- Documentation/devicetree/bindings/arm/gic.txt
+- #interrupt-cells : Use the same format as specified by GIC in arm,gic.txt.
- interrupt-parent: phandle of irq parent for sysirq. The parent must
use the same interrupt-cells format as GIC.
- reg: Physical base address of the intpol registers and length of memory
diff --git a/Documentation/devicetree/bindings/interrupt-controller/nvidia,tegra-ictlr.txt b/Documentation/devicetree/bindings/interrupt-controller/nvidia,tegra20-ictlr.txt
index 1099fe0788fa..1099fe0788fa 100644
--- a/Documentation/devicetree/bindings/interrupt-controller/nvidia,tegra-ictlr.txt
+++ b/Documentation/devicetree/bindings/interrupt-controller/nvidia,tegra20-ictlr.txt
diff --git a/Documentation/devicetree/bindings/interrupt-controller/nxp,lpc3220-mic.txt b/Documentation/devicetree/bindings/interrupt-controller/nxp,lpc3220-mic.txt
index 539adca19e8f..38211f344dc8 100644
--- a/Documentation/devicetree/bindings/interrupt-controller/nxp,lpc3220-mic.txt
+++ b/Documentation/devicetree/bindings/interrupt-controller/nxp,lpc3220-mic.txt
@@ -1,38 +1,60 @@
-* NXP LPC32xx Main Interrupt Controller
- (MIC, including SIC1 and SIC2 secondary controllers)
+* NXP LPC32xx MIC, SIC1 and SIC2 Interrupt Controllers
Required properties:
-- compatible: Should be "nxp,lpc3220-mic"
-- interrupt-controller: Identifies the node as an interrupt controller.
-- interrupt-parent: Empty for the interrupt controller itself
-- #interrupt-cells: The number of cells to define the interrupts. Should be 2.
- The first cell is the IRQ number
- The second cell is used to specify mode:
- 1 = low-to-high edge triggered
- 2 = high-to-low edge triggered
- 4 = active high level-sensitive
- 8 = active low level-sensitive
- Default for internal sources should be set to 4 (active high).
-- reg: Should contain MIC registers location and length
+- compatible: "nxp,lpc3220-mic" or "nxp,lpc3220-sic".
+- reg: should contain IC registers location and length.
+- interrupt-controller: identifies the node as an interrupt controller.
+- #interrupt-cells: the number of cells to define an interrupt, should be 2.
+ The first cell is the IRQ number, the second cell is used to specify
+ one of the supported IRQ types:
+ IRQ_TYPE_EDGE_RISING = low-to-high edge triggered,
+ IRQ_TYPE_EDGE_FALLING = high-to-low edge triggered,
+ IRQ_TYPE_LEVEL_HIGH = active high level-sensitive,
+ IRQ_TYPE_LEVEL_LOW = active low level-sensitive.
+ Reset value is IRQ_TYPE_LEVEL_LOW.
+
+Optional properties:
+- interrupt-parent: empty for MIC interrupt controller, link to parent
+ MIC interrupt controller for SIC1 and SIC2
+- interrupts: empty for MIC interrupt controller, cascaded MIC
+ hardware interrupts for SIC1 and SIC2
Examples:
- /*
- * MIC
- */
+
+ /* LPC32xx MIC, SIC1 and SIC2 interrupt controllers */
mic: interrupt-controller@40008000 {
compatible = "nxp,lpc3220-mic";
+ reg = <0x40008000 0x4000>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
+ sic1: interrupt-controller@4000c000 {
+ compatible = "nxp,lpc3220-sic";
+ reg = <0x4000c000 0x4000>;
interrupt-controller;
- interrupt-parent;
#interrupt-cells = <2>;
- reg = <0x40008000 0xC000>;
+
+ interrupt-parent = <&mic>;
+ interrupts = <0 IRQ_TYPE_LEVEL_LOW>,
+ <30 IRQ_TYPE_LEVEL_LOW>;
};
- /*
- * ADC
- */
+ sic2: interrupt-controller@40010000 {
+ compatible = "nxp,lpc3220-sic";
+ reg = <0x40010000 0x4000>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+
+ interrupt-parent = <&mic>;
+ interrupts = <1 IRQ_TYPE_LEVEL_LOW>,
+ <31 IRQ_TYPE_LEVEL_LOW>;
+ };
+
+ /* ADC */
adc@40048000 {
compatible = "nxp,lpc3220-adc";
reg = <0x40048000 0x1000>;
- interrupt-parent = <&mic>;
- interrupts = <39 4>;
+ interrupt-parent = <&sic1>;
+ interrupts = <7 IRQ_TYPE_LEVEL_HIGH>;
};
diff --git a/Documentation/devicetree/bindings/interrupt-controller/ti,omap4-wugen-mpu b/Documentation/devicetree/bindings/interrupt-controller/ti,omap4-wugen-mpu
index 43effa0a4fe7..18d4f407bf0e 100644
--- a/Documentation/devicetree/bindings/interrupt-controller/ti,omap4-wugen-mpu
+++ b/Documentation/devicetree/bindings/interrupt-controller/ti,omap4-wugen-mpu
@@ -4,7 +4,7 @@ All TI OMAP4/5 (and their derivatives) an interrupt controller that
routes interrupts to the GIC, and also serves as a wakeup source. It
is also referred to as "WUGEN-MPU", hence the name of the binding.
-Reguired properties:
+Required properties:
- compatible : should contain at least "ti,omap4-wugen-mpu" or
"ti,omap5-wugen-mpu"
@@ -20,7 +20,7 @@ Notes:
- Because this HW ultimately routes interrupts to the GIC, the
interrupt specifier must be that of the GIC.
- Only SPIs can use the WUGEN as an interrupt parent. SGIs and PPIs
- are explicitly forbiden.
+ are explicitly forbidden.
Example:
diff --git a/Documentation/devicetree/bindings/iommu/arm,smmu.txt b/Documentation/devicetree/bindings/iommu/arm,smmu.txt
index 718074501fcb..19fe6f2c83f6 100644
--- a/Documentation/devicetree/bindings/iommu/arm,smmu.txt
+++ b/Documentation/devicetree/bindings/iommu/arm,smmu.txt
@@ -16,6 +16,7 @@ conditions.
"arm,mmu-400"
"arm,mmu-401"
"arm,mmu-500"
+ "cavium,smmu-v2"
depending on the particular implementation and/or the
version of the architecture implemented.
diff --git a/Documentation/devicetree/bindings/leds/common.txt b/Documentation/devicetree/bindings/leds/common.txt
index 68419843e32f..af10678ea2f6 100644
--- a/Documentation/devicetree/bindings/leds/common.txt
+++ b/Documentation/devicetree/bindings/leds/common.txt
@@ -37,6 +37,9 @@ Optional properties for child nodes:
property is mandatory for the LEDs in the non-flash modes
(e.g. torch or indicator).
+- panic-indicator : This property specifies that the LED should be used,
+ if at all possible, as a panic indicator.
+
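+A gpio-leds child node using this property could look as follows (a sketch
+only; the node name, label and GPIO are illustrative):
+
+ panic {
+ label = "pc:red:panic";
+ gpios = <&gpio0 3 GPIO_ACTIVE_HIGH>;
+ panic-indicator;
+ };
+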
Required properties for flash LED child nodes:
- flash-max-microamp : Maximum flash LED supply current in microamperes.
- flash-max-timeout-us : Maximum timeout in microseconds after which the flash
diff --git a/Documentation/devicetree/bindings/leds/leds-gpio.txt b/Documentation/devicetree/bindings/leds/leds-gpio.txt
index fea1ebfe24a9..cbbeb1850910 100644
--- a/Documentation/devicetree/bindings/leds/leds-gpio.txt
+++ b/Documentation/devicetree/bindings/leds/leds-gpio.txt
@@ -23,6 +23,8 @@ LED sub-node properties:
property is not present.
- retain-state-suspended: (optional) The suspend state can be retained.Such
as charge-led gpio.
+- panic-indicator : (optional)
+ see Documentation/devicetree/bindings/leds/common.txt
Examples:
diff --git a/Documentation/devicetree/bindings/media/i2c/adv7180.txt b/Documentation/devicetree/bindings/media/i2c/adv7180.txt
new file mode 100644
index 000000000000..0d501154dfb2
--- /dev/null
+++ b/Documentation/devicetree/bindings/media/i2c/adv7180.txt
@@ -0,0 +1,29 @@
+* Analog Devices ADV7180 analog video decoder family
+
+The adv7180 family devices are used to capture analog video to different
+digital interfaces like MIPI CSI-2 or parallel video.
+
+Required Properties :
+- compatible : value must be one of
+ "adi,adv7180"
+ "adi,adv7182"
+ "adi,adv7280"
+ "adi,adv7280-m"
+ "adi,adv7281"
+ "adi,adv7281-m"
+ "adi,adv7281-ma"
+ "adi,adv7282"
+ "adi,adv7282-m"
+
+Example:
+
+ i2c0@1c22000 {
+ ...
+ ...
+ adv7180@21 {
+ compatible = "adi,adv7180";
+ reg = <0x21>;
+ };
+ ...
+ };
+
diff --git a/Documentation/devicetree/bindings/media/rcar_vin.txt b/Documentation/devicetree/bindings/media/rcar_vin.txt
index 619193ccf7ff..6a4e61cbe011 100644
--- a/Documentation/devicetree/bindings/media/rcar_vin.txt
+++ b/Documentation/devicetree/bindings/media/rcar_vin.txt
@@ -5,14 +5,22 @@ The rcar_vin device provides video input capabilities for the Renesas R-Car
family of devices. The current blocks are always slaves and support one input
channel which can be either RGB, YUYV or BT656.
- - compatible: Must be one of the following
+ - compatible: Must be one or more of the following
- "renesas,vin-r8a7795" for the R8A7795 device
- "renesas,vin-r8a7794" for the R8A7794 device
- "renesas,vin-r8a7793" for the R8A7793 device
+ - "renesas,vin-r8a7792" for the R8A7792 device
- "renesas,vin-r8a7791" for the R8A7791 device
- "renesas,vin-r8a7790" for the R8A7790 device
- "renesas,vin-r8a7779" for the R8A7779 device
- "renesas,vin-r8a7778" for the R8A7778 device
+ - "renesas,rcar-gen2-vin" for a generic R-Car Gen2 compatible device.
+ - "renesas,rcar-gen3-vin" for a generic R-Car Gen3 compatible device.
+
+ When compatible with the generic version, nodes must list the
+ SoC-specific version corresponding to the platform first,
+ followed by the generic version.
+
- reg: the register base and size for the device registers
- interrupts: the interrupt for the device
- clocks: Reference to the parent clock
@@ -37,7 +45,7 @@ Device node example
};
vin0: vin@0xe6ef0000 {
- compatible = "renesas,vin-r8a7790";
+ compatible = "renesas,vin-r8a7790", "renesas,rcar-gen2-vin";
clocks = <&mstp8_clks R8A7790_CLK_VIN0>;
reg = <0 0xe6ef0000 0 0x1000>;
interrupts = <0 188 IRQ_TYPE_LEVEL_HIGH>;
diff --git a/Documentation/devicetree/bindings/media/xilinx/video.txt b/Documentation/devicetree/bindings/media/xilinx/video.txt
index cbd46fa0988f..68ac210e688e 100644
--- a/Documentation/devicetree/bindings/media/xilinx/video.txt
+++ b/Documentation/devicetree/bindings/media/xilinx/video.txt
@@ -20,7 +20,7 @@ The following properties are common to all Xilinx video IP cores.
- xlnx,video-format: This property represents a video format transmitted on an
AXI bus between video IP cores, using its VF code as defined in "AXI4-Stream
Video IP and System Design Guide" [UG934]. How the format relates to the IP
- core is decribed in the IP core bindings documentation.
+ core is described in the IP core bindings documentation.
- xlnx,video-width: This property qualifies the video format with the sample
width expressed as a number of bits per pixel component. All components must
diff --git a/Documentation/devicetree/bindings/memory-controllers/exynos-srom.txt b/Documentation/devicetree/bindings/memory-controllers/exynos-srom.txt
new file mode 100644
index 000000000000..f633b5d0f8ca
--- /dev/null
+++ b/Documentation/devicetree/bindings/memory-controllers/exynos-srom.txt
@@ -0,0 +1,79 @@
+SAMSUNG Exynos SoCs SROM Controller driver.
+
+Required properties:
+- compatible : Should contain "samsung,exynos4210-srom".
+
+- reg: offset and length of the register set
+
+Optional properties:
+The SROM controller can be used to attach external peripherals. In this case
+extra properties, describing the bus behind it, should be specified as below:
+
+- #address-cells: Must be set to 2 to allow device address translation.
+ Address is specified as (bank#, offset).
+
+- #size-cells: Must be set to 1 to allow device size passing
+
+- ranges: Must be set up to reflect the memory layout with four integer values
+ per bank:
+ <bank-number> 0 <parent address of bank> <size>
+
+Sub-nodes:
+The actual device nodes should be added as subnodes to the SROMc node. These
+subnodes, in addition to regular device specification, should contain the following
+properties, describing configuration of the relevant SROM bank:
+
+Required properties:
+- reg: bank number, base address (relative to start of the bank) and size of
+ the memory mapped for the device. Note that base address will be
+ typically 0 as this is the start of the bank.
+
+- samsung,srom-timing : array of 6 integers, specifying bank timings in the
+ following order: Tacp, Tcah, Tcoh, Tacc, Tcos, Tacs.
+ Each value is specified in cycles and has the following
+ meaning and valid range:
+ Tacp : Page mode access cycle at Page mode (0 - 15)
+ Tcah : Address holding time after CSn (0 - 15)
+ Tcoh : Chip selection hold on OEn (0 - 15)
+ Tacc : Access cycle (0 - 31, the actual time is N + 1)
+ Tcos : Chip selection set-up before OEn (0 - 15)
+ Tacs : Address set-up before CSn (0 - 15)
+
+Optional properties:
+- reg-io-width : data width in bytes (1 or 2). If omitted, default of 1 is used.
+
+- samsung,srom-page-mode : if this property is present, 4-data page mode will be
+ configured, else normal (1-data) page mode will be set.
+
+Example: basic definition, no banks are configured
+ memory-controller@12570000 {
+ compatible = "samsung,exynos4210-srom";
+ reg = <0x12570000 0x14>;
+ };
+
+Example: SROMc with SMSC911x ethernet chip on bank 3
+ memory-controller@12570000 {
+ #address-cells = <2>;
+ #size-cells = <1>;
+ ranges = <0 0 0x04000000 0x20000 // Bank0
+ 1 0 0x05000000 0x20000 // Bank1
+ 2 0 0x06000000 0x20000 // Bank2
+ 3 0 0x07000000 0x20000>; // Bank3
+
+ compatible = "samsung,exynos4210-srom";
+ reg = <0x12570000 0x14>;
+
+ ethernet@3,0 {
+ compatible = "smsc,lan9115";
+ reg = <3 0 0x10000>; // Bank 3, offset = 0
+ phy-mode = "mii";
+ interrupt-parent = <&gpx0>;
+ interrupts = <5 8>;
+ reg-io-width = <2>;
+ smsc,irq-push-pull;
+ smsc,force-internal-phy;
+
+ samsung,srom-page-mode;
+ samsung,srom-timing = <9 12 1 9 1 1>;
+ };
+ };
diff --git a/Documentation/devicetree/bindings/memory-controllers/tegra-emc.txt b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra124-emc.txt
index b59c625d6336..ba0bc3f12419 100644
--- a/Documentation/devicetree/bindings/memory-controllers/tegra-emc.txt
+++ b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra124-emc.txt
@@ -190,7 +190,7 @@ be specified, according to the board documentation:
Example SoC include file:
/ {
- emc@0,7001b000 {
+ emc@7001b000 {
compatible = "nvidia,tegra124-emc";
reg = <0x0 0x7001b000 0x0 0x1000>;
@@ -201,7 +201,7 @@ Example SoC include file:
Example board file:
/ {
- emc@0,7001b000 {
+ emc@7001b000 {
emc-timings-3 {
nvidia,ram-code = <3>;
diff --git a/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra-mc.txt b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra30-mc.txt
index 3338a2834ad7..8dbe47013c2b 100644
--- a/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra-mc.txt
+++ b/Documentation/devicetree/bindings/memory-controllers/nvidia,tegra30-mc.txt
@@ -61,7 +61,7 @@ specified, according to the board documentation:
Example SoC include file:
/ {
- mc: memory-controller@0,70019000 {
+ mc: memory-controller@70019000 {
compatible = "nvidia,tegra124-mc";
reg = <0x0 0x70019000 0x0 0x1000>;
clocks = <&tegra_car TEGRA124_CLK_MC>;
@@ -72,7 +72,7 @@ Example SoC include file:
#iommu-cells = <1>;
};
- sdhci@0,700b0000 {
+ sdhci@700b0000 {
compatible = "nvidia,tegra124-sdhci";
...
iommus = <&mc TEGRA_SWGROUP_SDMMC1A>;
@@ -82,7 +82,7 @@ Example SoC include file:
Example board file:
/ {
- memory-controller@0,70019000 {
+ memory-controller@70019000 {
emc-timings-3 {
nvidia,ram-code = <3>;
diff --git a/Documentation/devicetree/bindings/bus/ti-gpmc.txt b/Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt
index 01683707060b..21055e210234 100644
--- a/Documentation/devicetree/bindings/bus/ti-gpmc.txt
+++ b/Documentation/devicetree/bindings/memory-controllers/omap-gpmc.txt
@@ -32,6 +32,19 @@ Required properties:
bootloader) are used for the physical address decoding.
As this will change in the future, filling correct
values here is a requirement.
+ - interrupt-controller: The GPMC driver implements an interrupt controller for
+ the NAND events "fifoevent" and "termcount" plus the
+ rising/falling edges on the GPMC_WAIT pins.
+ The interrupt number mapping is as follows:
+ 0 - NAND_fifoevent
+ 1 - NAND_termcount
+ 2 - GPMC_WAIT0 pin edge
+ 3 - GPMC_WAIT1 pin edge, and so on.
+ - #interrupt-cells: Must be set to 2
+ - gpio-controller: The GPMC driver implements a GPIO controller for the
+ GPMC WAIT pins that can be used as general purpose inputs.
+ 0 maps to GPMC_WAIT0 pin.
+ - #gpio-cells: Must be set to 2
Timing properties for child nodes. All are optional and default to 0.
@@ -130,6 +143,10 @@ Example for an AM33xx board:
#address-cells = <2>;
#size-cells = <1>;
ranges = <0 0 0x08000000 0x10000000>; /* CS0 @addr 0x8000000, size 0x10000000 */
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ gpio-controller;
+ #gpio-cells = <2>;
/* child nodes go here */
};
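+
+ Assuming the GPMC node is labelled "gpmc", a NAND child node could then
+ consume the new interrupts, for instance (a reduced sketch, only the
+ interrupt related lines are shown):
+
+ nand@0,0 {
+ ...
+ interrupt-parent = <&gpmc>;
+ interrupts = <0 IRQ_TYPE_NONE>, /* fifoevent */
+ <1 IRQ_TYPE_NONE>; /* termcount */
+ };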
diff --git a/Documentation/devicetree/bindings/mfd/arizona.txt b/Documentation/devicetree/bindings/mfd/arizona.txt
index 9b30011ecabe..a6e2ea41160c 100644
--- a/Documentation/devicetree/bindings/mfd/arizona.txt
+++ b/Documentation/devicetree/bindings/mfd/arizona.txt
@@ -1,6 +1,6 @@
Cirrus Logic/Wolfson Microelectronics Arizona class audio SoCs
-These devices are audio SoCs with extensive digital capabilites and a range
+These devices are audio SoCs with extensive digital capabilities and a range
of analogue I/O.
Required properties:
diff --git a/Documentation/devicetree/bindings/mfd/axp20x.txt b/Documentation/devicetree/bindings/mfd/axp20x.txt
index fd39fa54571b..d20b1034e967 100644
--- a/Documentation/devicetree/bindings/mfd/axp20x.txt
+++ b/Documentation/devicetree/bindings/mfd/axp20x.txt
@@ -6,10 +6,11 @@ axp202 (X-Powers)
axp209 (X-Powers)
axp221 (X-Powers)
axp223 (X-Powers)
+axp809 (X-Powers)
Required properties:
- compatible: "x-powers,axp152", "x-powers,axp202", "x-powers,axp209",
- "x-powers,axp221", "x-powers,axp223"
+ "x-powers,axp221", "x-powers,axp223", "x-powers,axp809"
- reg: The I2C slave address or RSB hardware address for the AXP chip
- interrupt-parent: The parent interrupt controller
- interrupts: SoC NMI / GPIO interrupt connected to the PMIC's IRQ pin
@@ -18,7 +19,9 @@ Required properties:
Optional properties:
- x-powers,dcdc-freq: defines the work frequency of DC-DC in KHz
- (range: 750-1875). Default: 1.5MHz
+ AXP152/20X: range: 750-1875, Default: 1.5 MHz
+ AXP22X/80X: range: 1800-4050, Default: 3 MHz
+
- <input>-supply: a phandle to the regulator supply node. May be omitted if
inputs are unregulated, such as using the IPSOUT output
from the PMIC.
@@ -77,6 +80,30 @@ LDO_IO0 : LDO : ips-supply : GPIO 0
LDO_IO1 : LDO : ips-supply : GPIO 1
RTC_LDO : LDO : ips-supply : always on
+AXP809 regulators, type, and corresponding input supply names:
+
+Regulator Type Supply Name Notes
+--------- ---- ----------- -----
+DCDC1 : DC-DC buck : vin1-supply
+DCDC2 : DC-DC buck : vin2-supply
+DCDC3 : DC-DC buck : vin3-supply
+DCDC4 : DC-DC buck : vin4-supply
+DCDC5 : DC-DC buck : vin5-supply
+DC1SW : On/Off Switch : : DCDC1 secondary output
+DC5LDO : LDO : : input from DCDC5
+ALDO1 : LDO : aldoin-supply : shared supply
+ALDO2 : LDO : aldoin-supply : shared supply
+ALDO3 : LDO : aldoin-supply : shared supply
+DLDO1 : LDO : dldoin-supply : shared supply
+DLDO2 : LDO : dldoin-supply : shared supply
+ELDO1 : LDO : eldoin-supply : shared supply
+ELDO2 : LDO : eldoin-supply : shared supply
+ELDO3 : LDO : eldoin-supply : shared supply
+LDO_IO0 : LDO : ips-supply : GPIO 0
+LDO_IO1 : LDO : ips-supply : GPIO 1
+RTC_LDO : LDO : ips-supply : always on
+SW : On/Off Switch : swin-supply
+
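+A possible AXP809 node, following the same pattern as the AXP209 example
+below (the RSB parent, device address and NMI interrupt are board-specific
+and only illustrative here):
+
+axp809: pmic@3a3 {
+ compatible = "x-powers,axp809";
+ reg = <0x3a3>;
+ interrupt-parent = <&nmi_intc>;
+ interrupts = <0 IRQ_TYPE_LEVEL_LOW>;
+ interrupt-controller;
+ #interrupt-cells = <1>;
+};
+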
Example:
axp209: pmic@34 {
diff --git a/Documentation/devicetree/bindings/mfd/hisilicon,hi655x.txt b/Documentation/devicetree/bindings/mfd/hisilicon,hi655x.txt
new file mode 100644
index 000000000000..05485699d70e
--- /dev/null
+++ b/Documentation/devicetree/bindings/mfd/hisilicon,hi655x.txt
@@ -0,0 +1,27 @@
+Hisilicon Hi655x Power Management Integrated Circuit (PMIC)
+
+The hardware layout for accessing the PMIC Hi655x from the AP SoC Hi6220:
+between the Hi655x and the Hi6220, the physical signal channel is SSI,
+and memory-mapped I/O is used to communicate.
+
++----------------+ +-------------+
+| | | |
+| Hi6220 | SSI bus | Hi655x |
+| |-------------| |
+| |(REGMAP_MMIO)| |
++----------------+ +-------------+
+
+Required properties:
+- compatible: Should be "hisilicon,hi655x-pmic".
+- reg: Base address of PMIC on Hi6220 SoC.
+- interrupt-controller: Hi655x has internal IRQs (it has its own IRQ domain).
+- pmic-gpios: The GPIO used by PMIC IRQ.
+
+Example:
+ pmic: pmic@f8000000 {
+ compatible = "hisilicon,hi655x-pmic";
+ reg = <0x0 0xf8000000 0x0 0x1000>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ pmic-gpios = <&gpio1 2 GPIO_ACTIVE_HIGH>;
+ };
diff --git a/Documentation/devicetree/bindings/mfd/max77620.txt b/Documentation/devicetree/bindings/mfd/max77620.txt
new file mode 100644
index 000000000000..2ad44f7e4880
--- /dev/null
+++ b/Documentation/devicetree/bindings/mfd/max77620.txt
@@ -0,0 +1,143 @@
+MAX77620 Power management IC from Maxim Semiconductor.
+
+Required properties:
+-------------------
+- compatible: Must be one of
+ "maxim,max77620"
+ "maxim,max20024".
+- reg: I2C device address.
+
+Optional properties:
+-------------------
+- interrupts: The interrupt on the parent interrupt controller this
+ device is connected to.
+- interrupt-controller: Marks the device node as an interrupt controller.
+- #interrupt-cells: is <2> and its usage is compliant with the 2-cell
+ variant of <../interrupt-controller/interrupts.txt>.
+ IRQ numbers for the different interrupt sources of the MAX77620
+ are defined in dt-bindings/mfd/max77620.h.
+
+Optional subnodes and their properties:
+=======================================
+
+Flexible power sequence configurations:
+--------------------------------------
+The Flexible Power Sequencer (FPS) allows each regulator to power up under
+hardware or software control. Additionally, each regulator can power on
+independently or among a group of other regulators with adjustable power-up
+and power-down delays (sequencing). GPIO1, GPIO2, and GPIO3 can be programmed
+to be part of a sequence allowing external regulators to be sequenced along
+with internal regulators. The 32KHz clock can be programmed to be part of a
+sequence.
+
+The flexible sequencing structure consists of two hardware enable inputs
+(EN0, EN1), and 3 master sequencing timers called FPS0, FPS1 and FPS2.
+Each master sequencing timer is programmable through its configuration
+register to have a hardware enable source (EN0 or EN1) or a software enable
+source (SW). When enabled/disabled, the master sequencing timer generates
+eight sequencing events on different time periods called slots. The time
+period between each event is programmable within the configuration register.
+Each regulator, GPIO1, GPIO2, GPIO3, and 32KHz clock has a flexible power
+sequence slave register which allows its enable source to be specified as
+a flexible power sequencer timer or a software bit. When a FPS source of
+regulators, GPIOs and clocks specifies the enable source to be a flexible
+power sequencer, the power up and power down delays can be specified in
+the regulators, GPIOs and clocks flexible power sequencer configuration
+registers.
+
+When the FPS event is cleared (set to LOW), regulators, GPIOs and the 32KHz
+clock are set into the following state at the sequencing event that
+corresponds to their flexible sequencer configuration register.
+ Sleep state: In this state, regulators, GPIOs
+ and 32KHz clock get disabled at
+ the sequencing event.
+ Global Low Power Mode (GLPM): In this state, regulators are set in
+ low power mode at the sequencing event.
+
+The FPS configuration parameters are provided through the sub-node "fps"
+and its children, one per FPS. The child node names are "fps0",
+"fps1", and "fps2" for FPS0, FPS1 and FPS2 respectively.
+
+The FPS configurations like FPS source, power up and power down slots for
+regulators, GPIOs and 32kHz clocks are provided in their respective
+configuration nodes, which are explained in the respective sub-system DT
+binding documents.
+
+Different FPS configuration parameters may be needed based on system
+state, e.g. when the system state changes from active to suspend or from
+active to power off (shutdown).
+
+Optional properties:
+-------------------
+-maxim,fps-event-source: u32, FPS event source like external
+ hardware input to PMIC i.e. EN0, EN1 or
+ software (SW).
+ The macros are defined on
+ dt-bindings/mfd/max77620.h
+ for different control source.
+ - MAX77620_FPS_EVENT_SRC_EN0
+ for hardware input pin EN0.
+ - MAX77620_FPS_EVENT_SRC_EN1
+ for hardware input pin EN1.
+ - MAX77620_FPS_EVENT_SRC_SW
+ for software control.
+
+-maxim,shutdown-fps-time-period-us: u32, FPS time period in microseconds
+ when system enters in to shutdown
+ state.
+
+-maxim,suspend-fps-time-period-us: u32, FPS time period in microseconds
+ when system enters in to suspend state.
+
+-maxim,device-state-on-disabled-event: u32, describes the PMIC state when the
+ FPS event is cleared (set to LOW), whether it
+ should go to sleep state or low-power
+ state. Following are valid values:
+ - MAX77620_FPS_INACTIVE_STATE_SLEEP
+ to set the PMIC state to sleep.
+ - MAX77620_FPS_INACTIVE_STATE_LOW_POWER
+ to set the PMIC state to low
+ power.
+ Absence of this property, or any other value,
+ will not change the device state when the FPS
+ event gets cleared.
+
+Here supported time periods by device in microseconds are as follows:
+MAX77620 supports 40, 80, 160, 320, 640, 1280, 2560 and 5120 microseconds.
+MAX20024 supports 20, 40, 80, 160, 320, 640, 1280 and 2540 microseconds.
+
+For DT binding details of the different sub-modules like GPIO, pinctrl,
+regulator and power, please refer to the respective device-tree binding
+documents under their sub-system directories.
+
+Example:
+--------
+#include <dt-bindings/mfd/max77620.h>
+
+max77620@3c {
+ compatible = "maxim,max77620";
+ reg = <0x3c>;
+
+ interrupt-parent = <&intc>;
+ interrupts = <0 86 IRQ_TYPE_NONE>;
+
+ interrupt-controller;
+ #interrupt-cells = <2>;
+
+ fps {
+ fps0 {
+ maxim,shutdown-fps-time-period-us = <1280>;
+ maxim,fps-event-source = <MAX77620_FPS_EVENT_SRC_EN1>;
+ };
+
+ fps1 {
+ maxim,shutdown-fps-time-period-us = <1280>;
+ maxim,fps-event-source = <MAX77620_FPS_EVENT_SRC_EN0>;
+ };
+
+ fps2 {
+ maxim,shutdown-fps-time-period-us = <1280>;
+ maxim,fps-event-source = <MAX77620_FPS_EVENT_SRC_SW>;
+ };
+ };
+};
diff --git a/Documentation/devicetree/bindings/mfd/qcom-rpm.txt b/Documentation/devicetree/bindings/mfd/qcom-rpm.txt
index 5e97a9593ad7..b98b291a31ba 100644
--- a/Documentation/devicetree/bindings/mfd/qcom-rpm.txt
+++ b/Documentation/devicetree/bindings/mfd/qcom-rpm.txt
@@ -178,7 +178,7 @@ see regulator.txt - with additional custom properties described below:
- qcom,force-mode:
Usage: optional (default if no other qcom,force-mode is specified)
Value type: <u32>
- Defintion: indicates that the regulator should be forced to a
+ Definition: indicates that the regulator should be forced to a
particular mode, valid values are:
QCOM_RPM_FORCE_MODE_NONE - do not force any mode
QCOM_RPM_FORCE_MODE_LPM - force into low power mode
@@ -204,7 +204,7 @@ see regulator.txt - with additional custom properties described below:
- qcom,force-mode:
Usage: optional
Value type: <u32>
- Defintion: indicates that the regulator should not be forced to any
+ Definition: indicates that the regulator should not be forced to any
particular mode, valid values are:
QCOM_RPM_FORCE_MODE_NONE - do not force any mode
QCOM_RPM_FORCE_MODE_LPM - force into low power mode
diff --git a/Documentation/devicetree/bindings/mips/brcm/soc.txt b/Documentation/devicetree/bindings/mips/brcm/soc.txt
index 7bab90cc4a7b..4a7e030e4f9b 100644
--- a/Documentation/devicetree/bindings/mips/brcm/soc.txt
+++ b/Documentation/devicetree/bindings/mips/brcm/soc.txt
@@ -4,7 +4,8 @@ Required properties:
- compatible: "brcm,bcm3384", "brcm,bcm33843"
"brcm,bcm3384-viper", "brcm,bcm33843-viper"
- "brcm,bcm6328", "brcm,bcm6368",
+ "brcm,bcm6328", "brcm,bcm6358", "brcm,bcm6368",
+ "brcm,bcm63168", "brcm,bcm63268",
"brcm,bcm7125", "brcm,bcm7346", "brcm,bcm7358", "brcm,bcm7360",
"brcm,bcm7362", "brcm,bcm7420", "brcm,bcm7425"
diff --git a/Documentation/devicetree/bindings/mips/cavium/ciu3.txt b/Documentation/devicetree/bindings/mips/cavium/ciu3.txt
new file mode 100644
index 000000000000..616862ad2b71
--- /dev/null
+++ b/Documentation/devicetree/bindings/mips/cavium/ciu3.txt
@@ -0,0 +1,27 @@
+* Central Interrupt Unit v3
+
+Properties:
+- compatible: "cavium,octeon-7890-ciu3"
+
+ Compatibility with 78XX and 73XX SOCs.
+
+- interrupt-controller: This is an interrupt controller.
+
+- reg: The base address of the CIU's register bank.
+
+- #interrupt-cells: Must be <2>. The first cell is source number.
+ The second cell indicates the triggering semantics, and may have a
+ value of either 4 for level semantics, or 1 for edge semantics.
+
+Example:
+ interrupt-controller@1010000000000 {
+ compatible = "cavium,octeon-7890-ciu3";
+ interrupt-controller;
+ /* Interrupts are specified by two parts:
+ * 1) Source number (20 significant bits)
+ * 2) Trigger type: (4 == level, 1 == edge)
+ */
+ #address-cells = <0>;
+ #interrupt-cells = <2>;
+ reg = <0x10100 0x00000000 0x0 0xb0000000>;
+ };
diff --git a/Documentation/devicetree/bindings/mips/cpu_irq.txt b/Documentation/devicetree/bindings/mips/cpu_irq.txt
index fc149f326dae..f080f06da6d8 100644
--- a/Documentation/devicetree/bindings/mips/cpu_irq.txt
+++ b/Documentation/devicetree/bindings/mips/cpu_irq.txt
@@ -13,7 +13,7 @@ Required properties:
- compatible : Should be "mti,cpu-interrupt-controller"
Example devicetree:
- cpu-irq: cpu-irq@0 {
+ cpu-irq: cpu-irq {
#address-cells = <0>;
interrupt-controller;
diff --git a/Documentation/devicetree/bindings/misc/fsl,qoriq-mc.txt b/Documentation/devicetree/bindings/misc/fsl,qoriq-mc.txt
index c7a26ca8da12..6611a7c2053a 100644
--- a/Documentation/devicetree/bindings/misc/fsl,qoriq-mc.txt
+++ b/Documentation/devicetree/bindings/misc/fsl,qoriq-mc.txt
@@ -30,11 +30,90 @@ Required properties:
region may not be present in some scenarios, such
as in the device tree presented to a virtual machine.
+ - msi-parent
+ Value type: <phandle>
+ Definition: Must be present and point to the MSI controller node
+ handling message interrupts for the MC.
+
+ - ranges
+ Value type: <prop-encoded-array>
+ Definition: A standard property. Defines the mapping between the child
+ MC address space and the parent system address space.
+
+ The MC address space is defined by 3 components:
+ <region type> <offset hi> <offset lo>
+
+ Valid values for region type are
+ 0x0 - MC portals
+ 0x1 - QBMAN portals
+
+ - #address-cells
+ Value type: <u32>
+ Definition: Must be 3. (see definition in 'ranges' property)
+
+ - #size-cells
+ Value type: <u32>
+ Definition: Must be 1.
+
+Sub-nodes:
+
+ The fsl-mc node may optionally have dpmac sub-nodes that describe
+ the relationship between the Ethernet MACs which belong to the MC
+ and the Ethernet PHYs on the system board.
+
+ The dpmac nodes must be under a node named "dpmacs" which contains
+ the following properties:
+
+ - #address-cells
+ Value type: <u32>
+ Definition: Must be present if dpmac sub-nodes are defined and must
+ have a value of 1.
+
+ - #size-cells
+ Value type: <u32>
+ Definition: Must be present if dpmac sub-nodes are defined and must
+ have a value of 0.
+
+ These nodes must have the following properties:
+
+ - compatible
+ Value type: <string>
+ Definition: Must be "fsl,qoriq-mc-dpmac".
+
+ - reg
+ Value type: <prop-encoded-array>
+ Definition: Specifies the id of the dpmac.
+
+ - phy-handle
+ Value type: <phandle>
+ Definition: Specifies the phandle to the PHY device node associated
+ with this dpmac.
+
Example:
fsl_mc: fsl-mc@80c000000 {
compatible = "fsl,qoriq-mc";
reg = <0x00000008 0x0c000000 0 0x40>, /* MC portal base */
<0x00000000 0x08340000 0 0x40000>; /* MC control reg */
- };
+ msi-parent = <&its>;
+ #address-cells = <3>;
+ #size-cells = <1>;
+
+ /*
+ * Region type 0x0 - MC portals
+ * Region type 0x1 - QBMAN portals
+ */
+ ranges = <0x0 0x0 0x0 0x8 0x0c000000 0x4000000
+ 0x1 0x0 0x0 0x8 0x18000000 0x8000000>;
+ dpmacs {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ dpmac@1 {
+ compatible = "fsl,qoriq-mc-dpmac";
+ reg = <1>;
+ phy-handle = <&mdio0_phy0>;
+ };
+ };
+ };
diff --git a/Documentation/devicetree/bindings/mmc/microchip,sdhci-pic32.txt b/Documentation/devicetree/bindings/mmc/microchip,sdhci-pic32.txt
index 71ad57e050b1..3149297b3933 100644
--- a/Documentation/devicetree/bindings/mmc/microchip,sdhci-pic32.txt
+++ b/Documentation/devicetree/bindings/mmc/microchip,sdhci-pic32.txt
@@ -20,7 +20,7 @@ Example:
compatible = "microchip,pic32mzda-sdhci";
reg = <0x1f8ec000 0x100>;
interrupts = <191 IRQ_TYPE_LEVEL_HIGH>;
- clocks = <&REFCLKO4>, <&PBCLK5>;
+ clocks = <&rootclk REF4CLK>, <&rootclk PB5CLK>;
clock-names = "base_clk", "sys_clk";
bus-width = <4>;
cap-sd-highspeed;
diff --git a/Documentation/devicetree/bindings/mmc/mmc-pwrseq-emmc.txt b/Documentation/devicetree/bindings/mmc/mmc-pwrseq-emmc.txt
index 0cb827bf9435..3d965d57e00b 100644
--- a/Documentation/devicetree/bindings/mmc/mmc-pwrseq-emmc.txt
+++ b/Documentation/devicetree/bindings/mmc/mmc-pwrseq-emmc.txt
@@ -1,7 +1,7 @@
* The simple eMMC hardware reset provider
The purpose of this driver is to perform standard eMMC hw reset
-procedure, as descibed by Jedec 4.4 specification. This procedure is
+procedure, as described by Jedec 4.4 specification. This procedure is
performed just after MMC core enabled power to the given mmc host (to
fix possible issues if bootloader has left eMMC card in initialized or
unknown state), and before performing complete system reboot (also in
diff --git a/Documentation/devicetree/bindings/mmc/rockchip-dw-mshc.txt b/Documentation/devicetree/bindings/mmc/rockchip-dw-mshc.txt
index ea5614b6f613..07184e8f894e 100644
--- a/Documentation/devicetree/bindings/mmc/rockchip-dw-mshc.txt
+++ b/Documentation/devicetree/bindings/mmc/rockchip-dw-mshc.txt
@@ -15,6 +15,7 @@ Required Properties:
- "rockchip,rk3288-dw-mshc": for Rockchip RK3288
- "rockchip,rk3036-dw-mshc", "rockchip,rk3288-dw-mshc": for Rockchip RK3036
- "rockchip,rk3368-dw-mshc", "rockchip,rk3288-dw-mshc": for Rockchip RK3368
+ - "rockchip,rk3399-dw-mshc", "rockchip,rk3288-dw-mshc": for Rockchip RK3399
Optional Properties:
* clocks: from common clock binding: if ciu_drive and ciu_sample are
diff --git a/Documentation/devicetree/bindings/mmc/sdhci-st.txt b/Documentation/devicetree/bindings/mmc/sdhci-st.txt
index 18d950df2749..88faa91125bf 100644
--- a/Documentation/devicetree/bindings/mmc/sdhci-st.txt
+++ b/Documentation/devicetree/bindings/mmc/sdhci-st.txt
@@ -38,7 +38,7 @@ Optional properties:
- bus-width: Number of data lines.
See: Documentation/devicetree/bindings/mmc/mmc.txt.
-- max-frequency: Can be 200MHz, 100Mz or 50MHz (default) and used for
+- max-frequency: Can be 200MHz, 100MHz or 50MHz (default) and used for
configuring the CCONFIG3 in the mmcss.
See: Documentation/devicetree/bindings/mmc/mmc.txt.
@@ -48,7 +48,7 @@ Optional properties:
- vqmmc-supply: Phandle to the regulator dt node, mentioned as the vcc/vdd
supply in eMMC/SD specs.
-- sd-uhs--sdr50: To enable the SDR50 in the mmcss.
+- sd-uhs-sdr50: To enable the SDR50 in the mmcss.
See: Documentation/devicetree/bindings/mmc/mmc.txt.
- sd-uhs-sdr104: To enable the SDR104 in the mmcss.
diff --git a/Documentation/devicetree/bindings/mmc/tmio_mmc.txt b/Documentation/devicetree/bindings/mmc/tmio_mmc.txt
index 7fb746dd1a68..0f610d4b5b00 100644
--- a/Documentation/devicetree/bindings/mmc/tmio_mmc.txt
+++ b/Documentation/devicetree/bindings/mmc/tmio_mmc.txt
@@ -26,3 +26,6 @@ Required properties:
Optional properties:
- toshiba,mmc-wrprotect-disable: write-protect detection is unavailable
+- pinctrl-names: should be "default", "state_uhs"
+- pinctrl-0: should contain default/high speed pin ctrl
+- pinctrl-1: should contain uhs mode pin ctrl
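+
+For instance (a sketch only; the SDHI node and pin control phandle names
+are board-specific and illustrative):
+
+ sdhi0: sd@ee100000 {
+ ...
+ pinctrl-names = "default", "state_uhs";
+ pinctrl-0 = <&sdhi0_pins>;
+ pinctrl-1 = <&sdhi0_pins_uhs>;
+ };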
diff --git a/Documentation/devicetree/bindings/mmc/usdhi6rol0.txt b/Documentation/devicetree/bindings/mmc/usdhi6rol0.txt
index 8babdaa8623b..6d1b7971d078 100644
--- a/Documentation/devicetree/bindings/mmc/usdhi6rol0.txt
+++ b/Documentation/devicetree/bindings/mmc/usdhi6rol0.txt
@@ -12,6 +12,12 @@ Optional properties:
- vmmc-supply: a phandle of a regulator, supplying Vcc to the card
- vqmmc-supply: a phandle of a regulator, supplying VccQ to the card
+- pinctrl-names: Can contain a "default" entry and a "state_uhs"
+ entry. The state_uhs entry is used together with the default
+ entry when the board requires distinct settings for UHS speeds.
+
+- pinctrl-N: One property for each name listed in pinctrl-names, see
+ ../pinctrl/pinctrl-bindings.txt.
Additionally any standard mmc bindings from mmc.txt can be used.
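+
+A minimal sketch of a node carrying these pinctrl properties (pin group
+labels, interrupt numbers and the regulator phandle are illustrative
+assumptions):
+
+	sd0: sd@ab000000 {
+		compatible = "renesas,usdhi6rol0";
+		reg = <0xab000000 0x200>;
+		interrupts = <0 23 0x4>;
+		pinctrl-names = "default", "state_uhs";
+		pinctrl-0 = <&sd0_pins>;
+		pinctrl-1 = <&sd0_pins_uhs>;
+		vmmc-supply = <&vcc_sd0>;
+	};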
diff --git a/Documentation/devicetree/bindings/mtd/arm-versatile.txt b/Documentation/devicetree/bindings/mtd/arm-versatile.txt
index beace4b89daa..4ec28796a3c0 100644
--- a/Documentation/devicetree/bindings/mtd/arm-versatile.txt
+++ b/Documentation/devicetree/bindings/mtd/arm-versatile.txt
@@ -1,8 +1,26 @@
Flash device on ARM Versatile board
+These flash chips are found in the ARM reference designs like Integrator,
+Versatile, RealView, Versatile Express etc.
+
+They are regular CFI compatible (Intel or AMD extended) flash chips with
+some special write protect/VPP bits that can be controlled by the machine's
+system controller.
+
Required properties:
-- compatible : must be "arm,versatile-flash";
+- compatible : must be "arm,versatile-flash", "cfi-flash";
+- reg : memory address for the flash chip
- bank-width : width in bytes of flash interface.
+For the rest of the properties, see mtd-physmap.txt.
+
The device tree may optionally contain sub-nodes describing partitions of the
address space. See partition.txt for more detail.
+
+Example:
+
+flash@34000000 {
+ compatible = "arm,versatile-flash", "cfi-flash";
+ reg = <0x34000000 0x4000000>;
+ bank-width = <4>;
+};
diff --git a/Documentation/devicetree/bindings/mtd/atmel-nand.txt b/Documentation/devicetree/bindings/mtd/atmel-nand.txt
index d53aba98fbc9..3e7ee99d3949 100644
--- a/Documentation/devicetree/bindings/mtd/atmel-nand.txt
+++ b/Documentation/devicetree/bindings/mtd/atmel-nand.txt
@@ -39,7 +39,7 @@ Optional properties:
Nand Flash Controller(NFC) is an optional sub-node
Required properties:
-- compatible : "atmel,sama5d3-nfc" or "atmel,sama5d4-nfc".
+- compatible : "atmel,sama5d3-nfc".
- reg : should specify the address and size used for NFC command registers,
NFC registers and NFC SRAM. NFC SRAM address and size can be absent
if don't want to use it.
diff --git a/Documentation/devicetree/bindings/mtd/brcm,brcmnand.txt b/Documentation/devicetree/bindings/mtd/brcm,brcmnand.txt
index c2546ced9c02..7066597c9a81 100644
--- a/Documentation/devicetree/bindings/mtd/brcm,brcmnand.txt
+++ b/Documentation/devicetree/bindings/mtd/brcm,brcmnand.txt
@@ -24,6 +24,7 @@ Required properties:
brcm,brcmnand-v5.0
brcm,brcmnand-v6.0
brcm,brcmnand-v6.1
+ brcm,brcmnand-v6.2
brcm,brcmnand-v7.0
brcm,brcmnand-v7.1
brcm,brcmnand
@@ -52,7 +53,7 @@ Optional properties:
v7.0. Use this property to describe the rare
earlier versions of this core that include WP
- -- Additonal SoC-specific NAND controller properties --
+ -- Additional SoC-specific NAND controller properties --
The NAND controller is integrated differently on the variety of SoCs on which it
is found. Part of this integration involves providing status and enable bits
diff --git a/Documentation/devicetree/bindings/mtd/fsl-quadspi.txt b/Documentation/devicetree/bindings/mtd/fsl-quadspi.txt
index 0333ec87dc49..c34aa6f8a424 100644
--- a/Documentation/devicetree/bindings/mtd/fsl-quadspi.txt
+++ b/Documentation/devicetree/bindings/mtd/fsl-quadspi.txt
@@ -5,7 +5,8 @@ Required properties:
"fsl,imx7d-qspi", "fsl,imx6ul-qspi",
"fsl,ls1021a-qspi"
or
- "fsl,ls2080a-qspi" followed by "fsl,ls1021a-qspi"
+ "fsl,ls2080a-qspi" followed by "fsl,ls1021a-qspi",
+ "fsl,ls1043a-qspi" followed by "fsl,ls1021a-qspi"
- reg : the first contains the register location and length,
the second contains the memory mapping address and length
- reg-names: Should contain the reg names "QuadSPI" and "QuadSPI-memory"
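+
+A minimal sketch of a controller node using the chained LS1043A compatible
+described above (addresses and sizes are illustrative assumptions; clock
+properties and flash child nodes are omitted for brevity):
+
+	qspi: quadspi@1550000 {
+		compatible = "fsl,ls1043a-qspi", "fsl,ls1021a-qspi";
+		reg = <0x1550000 0x10000>, <0x40000000 0x4000000>;
+		reg-names = "QuadSPI", "QuadSPI-memory";
+		#address-cells = <1>;
+		#size-cells = <0>;
+	};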
diff --git a/Documentation/devicetree/bindings/mtd/gpmc-nand.txt b/Documentation/devicetree/bindings/mtd/gpmc-nand.txt
index fb733c4e1c11..3ee7e202657c 100644
--- a/Documentation/devicetree/bindings/mtd/gpmc-nand.txt
+++ b/Documentation/devicetree/bindings/mtd/gpmc-nand.txt
@@ -13,7 +13,11 @@ Documentation/devicetree/bindings/mtd/nand.txt
Required properties:
- - reg: The CS line the peripheral is connected to
+ - compatible: "ti,omap2-nand"
+ - reg: range id (CS number), base offset and length of the
+ NAND I/O space
+ - interrupt-parent: must point to gpmc node
+ - interrupts: Two interrupt specifiers, one for fifoevent, one for termcount.
Optional properties:
@@ -44,6 +48,7 @@ Optional properties:
locating ECC errors for BCHx algorithms. SoC devices which have
ELM hardware engines should specify this device node in .dtsi
Using ELM for ECC error correction frees some CPU cycles.
+ - rb-gpios: GPIO specifier for the ready/busy# pin.
For inline partition table parsing (optional):
@@ -55,20 +60,26 @@ Example for an AM33xx board:
gpmc: gpmc@50000000 {
compatible = "ti,am3352-gpmc";
ti,hwmods = "gpmc";
- reg = <0x50000000 0x1000000>;
+ reg = <0x50000000 0x36c>;
interrupts = <100>;
gpmc,num-cs = <8>;
gpmc,num-waitpins = <2>;
#address-cells = <2>;
#size-cells = <1>;
- ranges = <0 0 0x08000000 0x2000>; /* CS0: NAND */
+ ranges = <0 0 0x08000000 0x1000000>; /* CS0 space, 16MB */
elm_id = <&elm>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
nand@0,0 {
- reg = <0 0 0>; /* CS0, offset 0 */
+ compatible = "ti,omap2-nand";
+ reg = <0 0 4>; /* CS0, offset 0, NAND I/O window 4 */
+ interrupt-parent = <&gpmc>;
+ interrupts = <0 IRQ_TYPE_NONE>, <1 IRQ_TYPE NONE>;
nand-bus-width = <16>;
ti,nand-ecc-opt = "bch8";
ti,nand-xfer-type = "polled";
+ rb-gpios = <&gpmc 0 GPIO_ACTIVE_HIGH>; /* gpmc_wait0 */
gpmc,sync-clk-ps = <0>;
gpmc,cs-on-ns = <0>;
diff --git a/Documentation/devicetree/bindings/mtd/nand.txt b/Documentation/devicetree/bindings/mtd/nand.txt
index b53f92e252d4..3733300de8dd 100644
--- a/Documentation/devicetree/bindings/mtd/nand.txt
+++ b/Documentation/devicetree/bindings/mtd/nand.txt
@@ -1,8 +1,31 @@
-* MTD generic binding
+* NAND chip and NAND controller generic binding
+
+NAND controller/NAND chip representation:
+
+The NAND controller should be represented with its own DT node, and all
+NAND chips attached to this controller should be defined as children nodes
+of the NAND controller. This representation should be enforced even for
+simple controllers supporting only one chip.
+
+Mandatory NAND controller properties:
+- #address-cells: depends on your controller. Should at least be 1 to
+ encode the CS line id.
+- #size-cells: depends on your controller. Put zero unless you need a
+ mapping between CS lines and dedicated memory regions
+
+Optional NAND controller properties
+- ranges: only needed if you need to define a mapping between CS lines and
+ memory regions
+
+Optional NAND chip properties:
- nand-ecc-mode : String, operation mode of the NAND ecc mode.
- Supported values are: "none", "soft", "hw", "hw_syndrome", "hw_oob_first",
- "soft_bch".
+ Supported values are: "none", "soft", "hw", "hw_syndrome",
+ "hw_oob_first".
+ Deprecated values:
+ "soft_bch": use "soft" and nand-ecc-algo instead
+- nand-ecc-algo: string, algorithm of NAND ECC.
+ Supported values are: "hamming", "bch".
- nand-bus-width : 8 or 16 bus width if not present 8
- nand-on-flash-bbt: boolean to enable on flash bbt option if not present false
@@ -19,3 +42,20 @@ errors per {size} bytes".
The interpretation of these parameters is implementation-defined, so not all
implementations must support all possible combinations. However, implementations
are encouraged to further specify the value(s) they support.
+
+Example:
+
+ nand-controller {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ /* controller specific properties */
+
+ nand@0 {
+ reg = <0>;
+ nand-ecc-mode = "soft";
+ nand-ecc-algo = "bch";
+
+ /* controller specific properties */
+ };
+ };
diff --git a/Documentation/devicetree/bindings/net/apm-xgene-enet.txt b/Documentation/devicetree/bindings/net/apm-xgene-enet.txt
index 078060a97f95..05f705e32a4a 100644
--- a/Documentation/devicetree/bindings/net/apm-xgene-enet.txt
+++ b/Documentation/devicetree/bindings/net/apm-xgene-enet.txt
@@ -18,6 +18,8 @@ Required properties for all the ethernet interfaces:
- First is the Rx interrupt. This irq is mandatory.
- Second is the Tx completion interrupt.
This is supported only on SGMII based 1GbE and 10GbE interfaces.
+- channel: Ethernet to CPU, start channel (prefetch buffer) number
+ - Must map to the first irq and irqs must be sequential
- port-id: Port number (0 or 1)
- clocks: Reference to the clock entry.
- local-mac-address: MAC address assigned to this device
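+
+A minimal sketch of how the new "channel" property sits next to the interrupt
+list (other required properties are omitted and all numbers are illustrative
+assumptions):
+
+	ethernet@1f210000 {
+		compatible = "apm,xgene1-sgenet";
+		interrupts = <0 96 4>, <0 97 4>;	/* Rx, Tx completion */
+		channel = <0>;	/* prefetch buffer mapped to the first (Rx) irq */
+		port-id = <0>;
+		local-mac-address = [00 01 73 00 00 01];
+	};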
diff --git a/Documentation/devicetree/bindings/net/dsa/dsa.txt b/Documentation/devicetree/bindings/net/dsa/dsa.txt
index 5fdbbcdf8c4b..9f4807f90c31 100644
--- a/Documentation/devicetree/bindings/net/dsa/dsa.txt
+++ b/Documentation/devicetree/bindings/net/dsa/dsa.txt
@@ -31,8 +31,6 @@ A switch child node has the following optional property:
switch. Must be set if the switch can not detect
the presence and/or size of a connected EEPROM,
otherwise optional.
-- reset-gpios : phandle and specifier to a gpio line connected to
- reset pin of the switch chip.
A switch may have multiple "port" children nodes
diff --git a/Documentation/devicetree/bindings/net/dsa/marvell.txt b/Documentation/devicetree/bindings/net/dsa/marvell.txt
new file mode 100644
index 000000000000..7629189398aa
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/dsa/marvell.txt
@@ -0,0 +1,35 @@
+Marvell DSA Switch Device Tree Bindings
+---------------------------------------
+
+WARNING: This binding is currently unstable. Do not program it into a
+FLASH never to be changed again. Once this binding is stable, this
+warning will be removed.
+
+If you need a stable binding, use the old dsa.txt binding.
+
+Marvell Switches are MDIO devices. The following properties should be
+placed as a child node of an mdio device.
+
+The properties described here are those specific to Marvell devices.
+Additional required and optional properties can be found in dsa.txt.
+
+Required properties:
+- compatible : Should be one of "marvell,mv88e6085"
+- reg : Address on the MII bus for the switch.
+
+Optional properties:
+
+- reset-gpios : Should be a gpio specifier for a reset line
+
+Example:
+
+ mdio {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ switch0: switch@0 {
+ compatible = "marvell,mv88e6085";
+ reg = <0>;
+ reset-gpios = <&gpio5 1 GPIO_ACTIVE_LOW>;
+ };
+ };
diff --git a/Documentation/devicetree/bindings/net/hisilicon-hns-dsaf.txt b/Documentation/devicetree/bindings/net/hisilicon-hns-dsaf.txt
index ecacfa44b1eb..d4b7f2e49984 100644
--- a/Documentation/devicetree/bindings/net/hisilicon-hns-dsaf.txt
+++ b/Documentation/devicetree/bindings/net/hisilicon-hns-dsaf.txt
@@ -7,19 +7,45 @@ Required properties:
- mode: dsa fabric mode string. only support one of dsaf modes like these:
"2port-64vf",
"6port-16rss",
- "6port-16vf".
+ "6port-16vf",
+ "single-port".
- interrupt-parent: the interrupt parent of this device.
- interrupts: should contain the DSA Fabric and rcb interrupt.
- reg: specifies base physical address(es) and size of the device registers.
- The first region is external interface control register base and size.
- The second region is SerDes base register and size.
+ The first region is the external interface control register base and size (optional,
+ only used when subctrl-syscon does not exist). It is recommended to use
+ subctrl-syscon rather than this address.
+ The second region is the SerDes base register and size (optional, only used when
+ serdes-syscon in the port node does not exist). It is recommended to use
+ serdes-syscon rather than this address.
The third region is the PPE register base and size.
- The fourth region is dsa fabric base register and size.
- The fifth region is cpld base register and size, it is not required if do not use cpld.
-- phy-handle: phy handle of physicl port, 0 if not any phy device. see ethernet.txt [1].
+ The fourth region is dsa fabric base register and size. It is not required for
+ single-port mode.
+- reg-names: may be ppe-base and/or dsaf-base. It is used to find the
+ index of the corresponding reg entry.
+
+- phy-handle: phy handle of the physical port, 0 if there is no phy device. It is an
+ optional attribute. If port nodes exist, the phy-handle in each port node will be
+ used. See ethernet.txt [1].
+- subctrl-syscon: is syscon handle for external interface control register.
+- reset-field-offset: is offset of reset field. Its value depends on the hardware
+ user manual.
- buf-size: rx buffer size, should be 16-1024.
- desc-num: number of description in TX and RX queue, should be 512, 1024, 2048 or 4096.
+- port: subnodes of dsaf. A dsaf node may contain several port nodes (depending
+ on the dsaf mode). Each port node contains the attributes listed below:
+- reg: is the physical port index in one dsaf.
+- phy-handle: phy handle of the physical port. It is not required if there is no
+ phy device. See ethernet.txt [1].
+- serdes-syscon: is syscon handle for SerDes register.
+- cpld-syscon: is a syscon handle + register offset pair for the cpld register. It is
+ not required if there is no cpld device.
+- port-rst-offset: is offset of reset field for each port in dsaf. Its value
+ depends on the hardware user manual.
+- port-mode-offset: is offset of port mode field for each port in dsaf. Its
+ value depends on the hardware user manual.
+
[1] Documentation/devicetree/bindings/net/phy.txt
Example:
@@ -28,11 +54,11 @@ dsaf0: dsa@c7000000 {
compatible = "hisilicon,hns-dsaf-v1";
mode = "6port-16rss";
interrupt-parent = <&mbigen_dsa>;
- reg = <0x0 0xC0000000 0x0 0x420000
- 0x0 0xC2000000 0x0 0x300000
- 0x0 0xc5000000 0x0 0x890000
+ reg = <0x0 0xc5000000 0x0 0x890000
0x0 0xc7000000 0x0 0x60000>;
- phy-handle = <0 0 0 0 &soc0_phy4 &soc0_phy5 0 0>;
+ reg-names = "ppe-base", "dsaf-base";
+ subctrl-syscon = <&subctrl>;
+ reset-field-offset = <0>;
interrupts = <131 4>,<132 4>, <133 4>,<134 4>,
<135 4>,<136 4>, <137 4>,<138 4>,
<139 4>,<140 4>, <141 4>,<142 4>,
@@ -43,4 +69,15 @@ dsaf0: dsa@c7000000 {
buf-size = <4096>;
desc-num = <1024>;
dma-coherent;
+
+ port@0 {
+ reg = <0>;
+ phy-handle = <&phy0>;
+ serdes-syscon = <&serdes>;
+ };
+
+ port@1 {
+ reg = <1>;
+ serdes-syscon = <&serdes>;
+ };
};
diff --git a/Documentation/devicetree/bindings/net/hisilicon-hns-nic.txt b/Documentation/devicetree/bindings/net/hisilicon-hns-nic.txt
index e6a9d1c30878..f0421ee3c714 100644
--- a/Documentation/devicetree/bindings/net/hisilicon-hns-nic.txt
+++ b/Documentation/devicetree/bindings/net/hisilicon-hns-nic.txt
@@ -8,7 +8,7 @@ Required properties:
specifies a reference to the associating hardware driver node.
see Documentation/devicetree/bindings/net/hisilicon-hns-dsaf.txt
- port-id: is the index of port provided by DSAF (the accelerator). DSAF can
- connect to 8 PHYs. Port 0 to 1 are both used for adminstration purpose. They
+ connect to 8 PHYs. Port 0 to 1 are both used for administration purpose. They
are called debug ports.
The remaining 6 PHYs are taken according to the mode of DSAF.
@@ -36,6 +36,34 @@ Required properties:
| | | | | |
external port
+ This attribute is retained for compatibility purposes. It is not recommended to
+ use it in new code.
+
+- port-idx-in-ae: is the index of port provided by AE.
+ In NIC mode of DSAF, all 6 PHYs of service DSAF are taken as ethernet ports
+ to the CPU. The port-idx-in-ae can be 0 to 5. Here is the diagram:
+ +-----+---------------+
+ | CPU |
+ +-+-+-+---+-+-+-+-+-+-+
+ | | | | | | | |
+ debug debug service
+ port port port
+ (0) (0) (0-5)
+
+ In Switch mode of DSAF, all 6 PHYs of service DSAF are taken as physical
+ ports connected to a LAN switch, while the CPU side assumes it has one
+ single NIC connected to this switch. In this case, the port-idx-in-ae
+ can only be 0.
+ +-----+-----+------+------+
+ | CPU |
+ +-+-+-+-+-+-+-+-+-+-+-+-+-+
+ | | service| port(0)
+ debug debug +------------+
+ port port | switch |
+ (0) (0) +-+-+-+-+-+-++
+ | | | | | |
+ external port
+
- local-mac-address: mac addr of the ethernet interface
Example:
@@ -43,6 +71,6 @@ Example:
ethernet@0{
compatible = "hisilicon,hns-nic-v1";
ae-handle = <&dsaf0>;
- port-id = <0>;
+ port-idx-in-ae = <0>;
local-mac-address = [a2 14 e4 4b 56 76];
};
diff --git a/Documentation/devicetree/bindings/net/marvell-bt-sd8xxx.txt b/Documentation/devicetree/bindings/net/marvell-bt-sd8xxx.txt
new file mode 100644
index 000000000000..14aa6cf58201
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/marvell-bt-sd8xxx.txt
@@ -0,0 +1,56 @@
+Marvell 8897/8997 (sd8897/sd8997) bluetooth SDIO devices
+------
+
+Required properties:
+
+ - compatible : should be one of the following:
+ * "marvell,sd8897-bt"
+ * "marvell,sd8997-bt"
+
+Optional properties:
+
+ - marvell,cal-data: Calibration data downloaded to the device during
+ initialization. This is an array of 28 values (u8).
+
+ - marvell,wakeup-pin: It represents the wakeup pin number of the bluetooth chip.
+ Firmware will use this pin to wake up the host system.
+ - marvell,wakeup-gap-ms: The wakeup gap represents the wakeup latency of the host
+ platform. The value will be configured in firmware. This
+ is needed for the chip's sleep feature to work as expected.
+ - interrupt-parent: phandle of the parent interrupt controller
+ - interrupts : interrupt pin number to the cpu. The driver will request an irq based
+ on this interrupt number. During system suspend, the irq will be
+ enabled so that the bluetooth chip can wake up the host platform under
+ certain conditions. During system resume, the irq will be disabled
+ to make sure unnecessary interrupts are not received.
+
+Example:
+
+IRQ pin 119 is used as the system wakeup source interrupt.
+Wakeup pin 13 and a gap of 100ms are configured so that the firmware can wake up
+the host using this device-side pin and wakeup latency.
+Calibration data is also shown in the example below.
+
+&mmc3 {
+ status = "okay";
+ vmmc-supply = <&wlan_en_reg>;
+ bus-width = <4>;
+ cap-power-off-card;
+ keep-power-in-suspend;
+
+ #address-cells = <1>;
+ #size-cells = <0>;
+ btmrvl: bluetooth@2 {
+ compatible = "marvell,sd8897-bt";
+ reg = <2>;
+ interrupt-parent = <&pio>;
+ interrupts = <119 IRQ_TYPE_LEVEL_LOW>;
+
+ marvell,cal-data = /bits/ 8 <
+ 0x37 0x01 0x1c 0x00 0xff 0xff 0xff 0xff 0x01 0x7f 0x04 0x02
+ 0x00 0x00 0xba 0xce 0xc0 0xc6 0x2d 0x00 0x00 0x00 0x00 0x00
+ 0x00 0x00 0xf0 0x00>;
+ marvell,wakeup-pin = <0x0d>;
+ marvell,wakeup-gap-ms = <0x64>;
+ };
+};
diff --git a/Documentation/devicetree/bindings/net/microchip,enc28j60.txt b/Documentation/devicetree/bindings/net/microchip,enc28j60.txt
new file mode 100644
index 000000000000..1dc3bc75539d
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/microchip,enc28j60.txt
@@ -0,0 +1,59 @@
+* Microchip ENC28J60
+
+This is a standalone 10 Mbit Ethernet controller with an SPI interface.
+
+For each device connected to an SPI bus, define a child node within
+the SPI master node.
+
+Required properties:
+- compatible: Should be "microchip,enc28j60"
+- reg: Specify the SPI chip select the ENC28J60 is wired to
+- interrupt-parent: Specify the phandle of the source interrupt, see interrupt
+ binding documentation for details. Usually this is the GPIO bank
+ the interrupt line is wired to.
+- interrupts: Specify the interrupt index within the interrupt controller (referred
+ to above in interrupt-parent) and interrupt type. The ENC28J60 natively
+ generates falling edge interrupts, however, additional board logic
+ might invert the signal.
+- pinctrl-names: List of assigned state names, see pinctrl binding documentation.
+- pinctrl-0: List of phandles to configure the GPIO pin used as interrupt line,
+ see also generic and your platform specific pinctrl binding
+ documentation.
+
+Optional properties:
+- spi-max-frequency: Maximum frequency of the SPI bus when accessing the ENC28J60.
+ According to the ENC28J60 datasheet, the chip allows a maximum of 20 MHz, however,
+ board designs may need to limit this value.
+- local-mac-address: See ethernet.txt in the same directory.
+
+
+Example (for NXP i.MX28 with pin control stuff for GPIO irq):
+
+ ssp2: ssp@80014000 {
+ compatible = "fsl,imx28-spi";
+ pinctrl-names = "default";
+ pinctrl-0 = <&spi2_pins_b &spi2_sck_cfg>;
+ status = "okay";
+
+ enc28j60: ethernet@0 {
+ compatible = "microchip,enc28j60";
+ pinctrl-names = "default";
+ pinctrl-0 = <&enc28j60_pins>;
+ reg = <0>;
+ interrupt-parent = <&gpio3>;
+ interrupts = <3 IRQ_TYPE_EDGE_FALLING>;
+ spi-max-frequency = <12000000>;
+ };
+ };
+
+ pinctrl@80018000 {
+ enc28j60_pins: enc28j60_pins@0 {
+ reg = <0>;
+ fsl,pinmux-ids = <
+ MX28_PAD_AUART0_RTS__GPIO_3_3 /* Interrupt */
+ >;
+ fsl,drive-strength = <MXS_DRIVE_4mA>;
+ fsl,voltage = <MXS_VOLTAGE_HIGH>;
+ fsl,pull-up = <MXS_PULL_DISABLE>;
+ };
+ };
diff --git a/Documentation/devicetree/bindings/net/nfc/pn533-i2c.txt b/Documentation/devicetree/bindings/net/nfc/pn533-i2c.txt
new file mode 100644
index 000000000000..1aea822d4530
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/nfc/pn533-i2c.txt
@@ -0,0 +1,31 @@
+* NXP Semiconductors PN532 NFC Controller
+
+Required properties:
+- compatible: Should be "nxp,pn532-i2c" or "nxp,pn533-i2c".
+- clock-frequency: I²C work frequency.
+- reg: address on the bus
+- interrupt-parent: phandle for the interrupt gpio controller
+- interrupts: GPIO interrupt to which the chip is connected
+
+Optional SoC Specific Properties:
+- pinctrl-names: Contains only one value - "default".
+- pinctrl-0: Specifies the pin control groups used for this controller.
+
+Example (for ARM-based BeagleBone with PN532 on I2C2):
+
+&i2c2 {
+
+ status = "okay";
+
+ pn532: pn532@24 {
+
+ compatible = "nxp,pn532-i2c";
+
+ reg = <0x24>;
+ clock-frequency = <400000>;
+
+ interrupt-parent = <&gpio1>;
+ interrupts = <17 IRQ_TYPE_EDGE_FALLING>;
+
+ };
+};
diff --git a/Documentation/devicetree/bindings/net/stmmac.txt b/Documentation/devicetree/bindings/net/stmmac.txt
index 6605d19601c2..95816c5fc589 100644
--- a/Documentation/devicetree/bindings/net/stmmac.txt
+++ b/Documentation/devicetree/bindings/net/stmmac.txt
@@ -51,14 +51,16 @@ Optional properties:
AXI register inside the DMA module:
- snps,lpi_en: enable Low Power Interface
- snps,xit_frm: unlock on WoL
- - snps,wr_osr_lmt: max write oustanding req. limit
- - snps,rd_osr_lmt: max read oustanding req. limit
+ - snps,wr_osr_lmt: max write outstanding req. limit
+ - snps,rd_osr_lmt: max read outstanding req. limit
- snps,kbbe: do not cross 1KiB boundary.
- snps,axi_all: align address
- snps,blen: this is a vector of supported burst length.
- snps,fb: fixed-burst
- snps,mb: mixed-burst
- snps,rb: rebuild INCRx Burst
+ - snps,tso: this enables the TSO feature; otherwise it will be managed by
+ the MAC HW capability register.
- mdio: with compatible = "snps,dwmac-mdio", create and register mdio bus.
Examples:
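+
+A minimal sketch of an AXI configuration sub-node carrying the properties
+listed above (the node label and all values are illustrative assumptions):
+
+	stmmac_axi_setup: stmmac-axi-config {
+		snps,wr_osr_lmt = <0xf>;
+		snps,rd_osr_lmt = <0xf>;
+		snps,blen = <256 128 64 32 0 0 0>;
+		snps,tso;
+	};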
diff --git a/Documentation/devicetree/bindings/net/ti,dp83867.txt b/Documentation/devicetree/bindings/net/ti,dp83867.txt
index 58d935b58598..5d21141a68b5 100644
--- a/Documentation/devicetree/bindings/net/ti,dp83867.txt
+++ b/Documentation/devicetree/bindings/net/ti,dp83867.txt
@@ -2,7 +2,7 @@
Required properties:
- reg - The ID number for the phy, usually a small integer
- - ti,rx-internal-delay - RGMII Recieve Clock Delay - see dt-bindings/net/ti-dp83867.h
+ - ti,rx-internal-delay - RGMII Receive Clock Delay - see dt-bindings/net/ti-dp83867.h
for applicable values
- ti,tx-internal-delay - RGMII Transmit Clock Delay - see dt-bindings/net/ti-dp83867.h
for applicable values
diff --git a/Documentation/devicetree/bindings/net/wireless/marvell-sd8xxx.txt b/Documentation/devicetree/bindings/net/wireless/marvell-sd8xxx.txt
new file mode 100644
index 000000000000..c421aba0a5bc
--- /dev/null
+++ b/Documentation/devicetree/bindings/net/wireless/marvell-sd8xxx.txt
@@ -0,0 +1,63 @@
+Marvell 8897/8997 (sd8897/sd8997) SDIO devices
+------
+
+This node provides properties for controlling the Marvell SDIO wireless device.
+The node is expected to be specified as a child node to the SDIO controller that
+connects the device to the system.
+
+Required properties:
+
+ - compatible : should be one of the following:
+ * "marvell,sd8897"
+ * "marvell,sd8997"
+
+Optional properties:
+
+ - marvell,caldata* : A series of properties with the marvell,caldata prefix
+ represents calibration data downloaded to the device during
+ initialization. This is an array of unsigned 8-bit values.
+ The properties should follow the property names and
+ corresponding array lengths below:
+ "marvell,caldata-txpwrlimit-2g" (length = 566).
+ "marvell,caldata-txpwrlimit-5g-sub0" (length = 502).
+ "marvell,caldata-txpwrlimit-5g-sub1" (length = 688).
+ "marvell,caldata-txpwrlimit-5g-sub2" (length = 750).
+ "marvell,caldata-txpwrlimit-5g-sub3" (length = 502).
+ - marvell,wakeup-pin : the wakeup pin number of the wifi chip, which will be configured
+ in firmware. The firmware will wake up the host using this pin
+ during suspend/resume.
+ - interrupt-parent: phandle of the parent interrupt controller
+ - interrupts : interrupt pin number to the cpu. The driver will request an irq based on
+ this interrupt number. During system suspend, the irq will be enabled
+ so that the wifi chip can wake up the host platform under certain conditions.
+ During system resume, the irq will be disabled to make sure
+ unnecessary interrupts are not received.
+
+Example:
+
+Tx power limit calibration data is configured in the example below.
+The calibration data is an array of unsigned values; the length
+can vary between hw versions.
+IRQ pin 38 is used as the system wakeup source interrupt. Wakeup pin 3 is configured
+so that the firmware can wake up the host using this device-side pin.
+
+&mmc3 {
+ status = "okay";
+ vmmc-supply = <&wlan_en_reg>;
+ bus-width = <4>;
+ cap-power-off-card;
+ keep-power-in-suspend;
+
+ #address-cells = <1>;
+ #size-cells = <0>;
+ mwifiex: wifi@1 {
+ compatible = "marvell,sd8897";
+ reg = <1>;
+ interrupt-parent = <&pio>;
+ interrupts = <38 IRQ_TYPE_LEVEL_LOW>;
+
+ marvell,caldata_00_txpwrlimit_2g_cfg_set = /bits/ 8 <
+ 0x01 0x00 0x06 0x00 0x08 0x02 0x89 0x01>;
+ marvell,wakeup-pin = <3>;
+ };
+};
diff --git a/Documentation/devicetree/bindings/net/wireless/qcom,ath10k.txt b/Documentation/devicetree/bindings/net/wireless/qcom,ath10k.txt
index 96aae6b4f736..74d7f0af209c 100644
--- a/Documentation/devicetree/bindings/net/wireless/qcom,ath10k.txt
+++ b/Documentation/devicetree/bindings/net/wireless/qcom,ath10k.txt
@@ -5,12 +5,18 @@ Required properties:
* "qcom,ath10k"
* "qcom,ipq4019-wifi"
-PCI based devices uses compatible string "qcom,ath10k" and takes only
-calibration data via "qcom,ath10k-calibration-data". Rest of the properties
-are not applicable for PCI based devices.
+PCI based devices use the compatible string "qcom,ath10k" and take calibration
+data along with board specific data via "qcom,ath10k-calibration-data".
+The rest of the properties are not applicable to PCI based devices.
AHB based devices (i.e. ipq4019) uses compatible string "qcom,ipq4019-wifi"
-and also uses most of the properties defined in this doc.
+and also uses most of the properties defined in this doc (except
+"qcom,ath10k-calibration-data"). It uses "qcom,ath10k-pre-calibration-data"
+to carry pre calibration data.
+
+In general, the entries "qcom,ath10k-pre-calibration-data" and
+"qcom,ath10k-calibration-data" conflict with each other and only one
+can be provided per device.
Optional properties:
- reg: Address and length of the register set for the device.
@@ -35,8 +41,11 @@ Optional properties:
- qcom,msi_addr: MSI interrupt address.
- qcom,msi_base: Base value to add before writing MSI data into
MSI address register.
-- qcom,ath10k-calibration-data : calibration data as an array, the
- length can vary between hw versions
+- qcom,ath10k-calibration-data : calibration data + board specific data
+ as an array, the length can vary between
+ hw versions.
+- qcom,ath10k-pre-calibration-data : pre calibration data as an array,
+ the length can vary between hw versions.
Example (to supply the calibration data alone):
@@ -105,5 +114,5 @@ wifi0: wifi@a000000 {
"legacy";
qcom,msi_addr = <0x0b006040>;
qcom,msi_base = <0x40>;
- qcom,ath10k-calibration-data = [ 01 02 03 ... ];
+ qcom,ath10k-pre-calibration-data = [ 01 02 03 ... ];
};
diff --git a/Documentation/devicetree/bindings/numa.txt b/Documentation/devicetree/bindings/numa.txt
new file mode 100644
index 000000000000..21b35053ca5a
--- /dev/null
+++ b/Documentation/devicetree/bindings/numa.txt
@@ -0,0 +1,275 @@
+==============================================================================
+NUMA binding description.
+==============================================================================
+
+==============================================================================
+1 - Introduction
+==============================================================================
+
+Systems employing a Non Uniform Memory Access (NUMA) architecture contain
+collections of hardware resources including processors, memory, and I/O buses
+that comprise what is commonly known as a NUMA node.
+Processor accesses to memory within the local NUMA node are generally faster
+than processor accesses to memory outside of the local NUMA node.
+DT defines interfaces that allow the platform to convey NUMA node
+topology information to the OS.
+
+==============================================================================
+2 - numa-node-id
+==============================================================================
+
+For the purpose of identification, each NUMA node is associated with a unique
+token known as a node id. For the purpose of this binding
+a node id is a 32-bit integer.
+
+A device node is associated with a NUMA node by the presence of a
+numa-node-id property which contains the node id of the device.
+
+Example:
+ /* numa node 0 */
+ numa-node-id = <0>;
+
+ /* numa node 1 */
+ numa-node-id = <1>;
+
+==============================================================================
+3 - distance-map
+==============================================================================
+
+The optional device tree node distance-map describes the relative
+distance (memory latency) between all numa nodes.
+
+- compatible : Should at least contain "numa-distance-map-v1".
+
+- distance-matrix
+ This property defines a matrix to describe the relative distances
+ between all numa nodes.
+ It is represented as a list of node pairs and their relative distance.
+
+ Note:
+ 1. Each entry represents distance from first node to second node.
+ The distances are equal in either direction.
+ 2. The distance from a node to self (local distance) is represented
+ with value 10 and all internode distance should be represented with
+ a value greater than 10.
+ 3. distance-matrix should have entries in lexicographical ascending
+ order of nodes.
+ 4. There must be only one device node distance-map which must
+ reside in the root node.
+ 5. If the distance-map node is not present, a default
+ distance-matrix is used.
+
+Example:
+ 4 nodes connected in mesh/ring topology as below,
+
+ 0_______20______1
+ | |
+ | |
+ 20 20
+ | |
+ | |
+ |_______________|
+ 3 20 2
+
+ if relative distance for each hop is 20,
+ then internode distance would be,
+ 0 -> 1 = 20
+ 1 -> 2 = 20
+ 2 -> 3 = 20
+ 3 -> 0 = 20
+ 0 -> 2 = 40
+ 1 -> 3 = 40
+
+ and dt presentation for this distance matrix is,
+
+ distance-map {
+ compatible = "numa-distance-map-v1";
+ distance-matrix = <0 0 10>,
+ <0 1 20>,
+ <0 2 40>,
+ <0 3 20>,
+ <1 0 20>,
+ <1 1 10>,
+ <1 2 20>,
+ <1 3 40>,
+ <2 0 40>,
+ <2 1 20>,
+ <2 2 10>,
+ <2 3 20>,
+ <3 0 20>,
+ <3 1 40>,
+ <3 2 20>,
+ <3 3 10>;
+ };
+
+==============================================================================
+4 - Example dts
+==============================================================================
+
+Dual socket system consists of 2 boards connected through ccn bus and
+each board having one socket/soc of 8 cpus, memory and pci bus.
+
+ memory@c00000 {
+ device_type = "memory";
+ reg = <0x0 0xc00000 0x0 0x80000000>;
+ /* node 0 */
+ numa-node-id = <0>;
+ };
+
+ memory@10000000000 {
+ device_type = "memory";
+ reg = <0x100 0x0 0x0 0x80000000>;
+ /* node 1 */
+ numa-node-id = <1>;
+ };
+
+ cpus {
+ #address-cells = <2>;
+ #size-cells = <0>;
+
+ cpu@0 {
+ device_type = "cpu";
+ compatible = "arm,armv8";
+ reg = <0x0 0x0>;
+ enable-method = "psci";
+ /* node 0 */
+ numa-node-id = <0>;
+ };
+ cpu@1 {
+ device_type = "cpu";
+ compatible = "arm,armv8";
+ reg = <0x0 0x1>;
+ enable-method = "psci";
+ numa-node-id = <0>;
+ };
+ cpu@2 {
+ device_type = "cpu";
+ compatible = "arm,armv8";
+ reg = <0x0 0x2>;
+ enable-method = "psci";
+ numa-node-id = <0>;
+ };
+ cpu@3 {
+ device_type = "cpu";
+ compatible = "arm,armv8";
+ reg = <0x0 0x3>;
+ enable-method = "psci";
+ numa-node-id = <0>;
+ };
+ cpu@4 {
+ device_type = "cpu";
+ compatible = "arm,armv8";
+ reg = <0x0 0x4>;
+ enable-method = "psci";
+ numa-node-id = <0>;
+ };
+ cpu@5 {
+ device_type = "cpu";
+ compatible = "arm,armv8";
+ reg = <0x0 0x5>;
+ enable-method = "psci";
+ numa-node-id = <0>;
+ };
+ cpu@6 {
+ device_type = "cpu";
+ compatible = "arm,armv8";
+ reg = <0x0 0x6>;
+ enable-method = "psci";
+ numa-node-id = <0>;
+ };
+ cpu@7 {
+ device_type = "cpu";
+ compatible = "arm,armv8";
+ reg = <0x0 0x7>;
+ enable-method = "psci";
+ numa-node-id = <0>;
+ };
+ cpu@8 {
+ device_type = "cpu";
+ compatible = "arm,armv8";
+ reg = <0x0 0x8>;
+ enable-method = "psci";
+ /* node 1 */
+ numa-node-id = <1>;
+ };
+ cpu@9 {
+ device_type = "cpu";
+ compatible = "arm,armv8";
+ reg = <0x0 0x9>;
+ enable-method = "psci";
+ numa-node-id = <1>;
+ };
+ cpu@a {
+ device_type = "cpu";
+ compatible = "arm,armv8";
+ reg = <0x0 0xa>;
+ enable-method = "psci";
+ numa-node-id = <1>;
+ };
+ cpu@b {
+ device_type = "cpu";
+ compatible = "arm,armv8";
+ reg = <0x0 0xb>;
+ enable-method = "psci";
+ numa-node-id = <1>;
+ };
+ cpu@c {
+ device_type = "cpu";
+ compatible = "arm,armv8";
+ reg = <0x0 0xc>;
+ enable-method = "psci";
+ numa-node-id = <1>;
+ };
+ cpu@d {
+ device_type = "cpu";
+ compatible = "arm,armv8";
+ reg = <0x0 0xd>;
+ enable-method = "psci";
+ numa-node-id = <1>;
+ };
+ cpu@e {
+ device_type = "cpu";
+ compatible = "arm,armv8";
+ reg = <0x0 0xe>;
+ enable-method = "psci";
+ numa-node-id = <1>;
+ };
+ cpu@f {
+ device_type = "cpu";
+ compatible = "arm,armv8";
+ reg = <0x0 0xf>;
+ enable-method = "psci";
+ numa-node-id = <1>;
+ };
+ };
+
+ pcie0: pcie0@848000000000 {
+ compatible = "arm,armv8";
+ device_type = "pci";
+ bus-range = <0 255>;
+ #size-cells = <2>;
+ #address-cells = <3>;
+ reg = <0x8480 0x00000000 0 0x10000000>; /* Configuration space */
+ ranges = <0x03000000 0x8010 0x00000000 0x8010 0x00000000 0x70 0x00000000>;
+ /* node 0 */
+ numa-node-id = <0>;
+ };
+
+ pcie1: pcie1@948000000000 {
+ compatible = "arm,armv8";
+ device_type = "pci";
+ bus-range = <0 255>;
+ #size-cells = <2>;
+ #address-cells = <3>;
+ reg = <0x9480 0x00000000 0 0x10000000>; /* Configuration space */
+ ranges = <0x03000000 0x9010 0x00000000 0x9010 0x00000000 0x70 0x00000000>;
+ /* node 1 */
+ numa-node-id = <1>;
+ };
+
+ distance-map {
+ compatible = "numa-distance-map-v1";
+ distance-matrix = <0 0 10>,
+ <0 1 20>,
+ <1 1 10>;
+ };
diff --git a/Documentation/devicetree/bindings/opp/opp.txt b/Documentation/devicetree/bindings/opp/opp.txt
index 601256fe8c0d..ee91cbdd95ee 100644
--- a/Documentation/devicetree/bindings/opp/opp.txt
+++ b/Documentation/devicetree/bindings/opp/opp.txt
@@ -45,7 +45,7 @@ Devices supporting OPPs must set their "operating-points-v2" property with
phandle to a OPP table in their DT node. The OPP core will use this phandle to
find the operating points for the device.
-If required, this can be extended for SoC vendor specfic bindings. Such bindings
+If required, this can be extended for SoC vendor specific bindings. Such bindings
should be documented as Documentation/devicetree/bindings/power/<vendor>-opp.txt
and should have a compatible description like: "operating-points-v2-<vendor>".
diff --git a/Documentation/devicetree/bindings/pci/designware-pcie.txt b/Documentation/devicetree/bindings/pci/designware-pcie.txt
index 64f2fff12128..6c5322c55411 100644
--- a/Documentation/devicetree/bindings/pci/designware-pcie.txt
+++ b/Documentation/devicetree/bindings/pci/designware-pcie.txt
@@ -31,7 +31,7 @@ Optional properties:
Example configuration:
- pcie: pcie@0xdffff000 {
+ pcie: pcie@dffff000 {
compatible = "snps,dw-pcie";
reg = <0xdffff000 0x1000>, /* Controller registers */
<0xd0000000 0x2000>; /* PCI config space */
diff --git a/Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.txt b/Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.txt
index 3be80c68941a..83aeb1f5a645 100644
--- a/Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.txt
+++ b/Documentation/devicetree/bindings/pci/fsl,imx6q-pcie.txt
@@ -4,8 +4,8 @@ This PCIe host controller is based on the Synopsis Designware PCIe IP
and thus inherits all the common properties defined in designware-pcie.txt.
Required properties:
-- compatible: "fsl,imx6q-pcie"
-- reg: base addresse and length of the pcie controller
+- compatible: "fsl,imx6q-pcie", "fsl,imx6sx-pcie", "fsl,imx6qp-pcie"
+- reg: base address and length of the PCIe controller
- interrupts: A list of interrupt outputs of the controller. Must contain an
entry for each entry in the interrupt-names property.
- interrupt-names: Must include the following entries:
@@ -19,6 +19,20 @@ Optional properties:
- fsl,tx-deemph-gen2-6db: Gen2 (6db) De-emphasis value. Default: 20
- fsl,tx-swing-full: Gen2 TX SWING FULL value. Default: 127
- fsl,tx-swing-low: TX launch amplitude swing_low value. Default: 127
+- fsl,max-link-speed: Specify PCI gen for link capability. Must be '2' for
+ gen2, otherwise will default to gen1. Note that the IMX6 LVDS clock outputs
+ do not meet gen2 jitter requirements and thus for gen2 capability a gen2
+ compliant clock generator should be used and configured.
+- reset-gpio: Should specify the GPIO for controlling the PCI bus device reset
+ signal. It's not polarity aware and defaults to active-low reset sequence
+ (L=reset state, H=operation state).
+- reset-gpio-active-high: If present then the reset sequence using the GPIO
+ specified in the "reset-gpio" property is reversed (H=reset state,
+ L=operation state).
+
+Additional required properties for imx6sx-pcie:
+- clock-names: Must include the following additional entries:
+ - "pcie_inbound_axi"
Example:
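+
+A minimal board-level sketch of the new optional properties (the &pcie label,
+GPIO choice and link-speed value are illustrative assumptions for a
+hypothetical board):
+
+	&pcie {
+		fsl,max-link-speed = <2>;	/* needs a gen2-compliant reference clock */
+		reset-gpio = <&gpio7 12 GPIO_ACTIVE_LOW>;
+		/* add "reset-gpio-active-high;" if the board inverts the reset line */
+		status = "okay";
+	};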
diff --git a/Documentation/devicetree/bindings/pci/hisilicon-pcie.txt b/Documentation/devicetree/bindings/pci/hisilicon-pcie.txt
index b721beacfe4d..59c2f47aa303 100644
--- a/Documentation/devicetree/bindings/pci/hisilicon-pcie.txt
+++ b/Documentation/devicetree/bindings/pci/hisilicon-pcie.txt
@@ -34,11 +34,11 @@ Hip05 Example (note that Hip06 is the same except compatible):
ranges = <0x82000000 0 0x00000000 0x220 0x00000000 0 0x10000000>;
num-lanes = <8>;
port-id = <1>;
- #interrupts-cells = <1>;
- interrupts-map-mask = <0xf800 0 0 7>;
- interrupts-map = <0x0 0 0 1 &mbigen_pcie 1 10
- 0x0 0 0 2 &mbigen_pcie 2 11
- 0x0 0 0 3 &mbigen_pcie 3 12
- 0x0 0 0 4 &mbigen_pcie 4 13>;
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0xf800 0 0 7>;
+ interrupt-map = <0x0 0 0 1 &mbigen_pcie 1 10
+ 0x0 0 0 2 &mbigen_pcie 2 11
+ 0x0 0 0 3 &mbigen_pcie 3 12
+ 0x0 0 0 4 &mbigen_pcie 4 13>;
status = "ok";
};
diff --git a/Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt b/Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
index 75321ae23c08..b8cc395fffea 100644
--- a/Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
+++ b/Documentation/devicetree/bindings/pci/nvidia,tegra20-pcie.txt
@@ -60,11 +60,14 @@ Required properties:
- afi
- pcie_x
-Required properties on Tegra124 and later:
+Required properties on Tegra124 and later (deprecated):
- phys: Must contain an entry for each entry in phy-names.
- phy-names: Must include the following entries:
- pcie
+These properties are deprecated in favour of the per-lane PHYs defined in each of
+the root ports (see below).
+
Power supplies for Tegra20:
- avdd-pex-supply: Power supply for analog PCIe logic. Must supply 1.05 V.
- vdd-pex-supply: Power supply for digital PCIe I/O. Must supply 1.05 V.
@@ -122,11 +125,22 @@ Required properties:
- Root port 0 uses 4 lanes, root port 1 is unused.
- Both root ports use 2 lanes.
-Example:
+Required properties for Tegra124 and later:
+- phys: Must contain a phandle to a PHY for each entry in phy-names.
+- phy-names: Must include an entry for each active lane. Note that the number
+ of entries does not have to (though usually will) be equal to the specified
+ number of lanes in the nvidia,num-lanes property. Entries are of the form
+ "pcie-N", where N ranges from 0 to the value specified in nvidia,num-lanes.
+
+Examples:
+=========
+
+Tegra20:
+--------
SoC DTSI:
- pcie-controller {
+ pcie-controller@80003000 {
compatible = "nvidia,tegra20-pcie";
device_type = "pci";
reg = <0x80003000 0x00000800 /* PADS registers */
@@ -186,10 +200,9 @@ SoC DTSI:
};
};
-
Board DTS:
- pcie-controller {
+ pcie-controller@80003000 {
status = "okay";
vdd-supply = <&pci_vdd_reg>;
@@ -222,3 +235,204 @@ if a device on the PCI bus provides a non-probeable bus such as I2C or SPI,
device nodes need to be added in order to allow the bus' children to be
instantiated at the proper location in the operating system's device tree (as
illustrated by the optional nodes in the example above).
+
+Tegra30:
+--------
+
+SoC DTSI:
+
+ pcie-controller@00003000 {
+ compatible = "nvidia,tegra30-pcie";
+ device_type = "pci";
+ reg = <0x00003000 0x00000800 /* PADS registers */
+ 0x00003800 0x00000200 /* AFI registers */
+ 0x10000000 0x10000000>; /* configuration space */
+ reg-names = "pads", "afi", "cs";
+ interrupts = <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH /* controller interrupt */
+ GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>; /* MSI interrupt */
+ interrupt-names = "intr", "msi";
+
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &intc GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>;
+
+ bus-range = <0x00 0xff>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+
+ ranges = <0x82000000 0 0x00000000 0x00000000 0 0x00001000 /* port 0 configuration space */
+ 0x82000000 0 0x00001000 0x00001000 0 0x00001000 /* port 1 configuration space */
+ 0x82000000 0 0x00004000 0x00004000 0 0x00001000 /* port 2 configuration space */
+ 0x81000000 0 0 0x02000000 0 0x00010000 /* downstream I/O */
+ 0x82000000 0 0x20000000 0x20000000 0 0x08000000 /* non-prefetchable memory */
+ 0xc2000000 0 0x28000000 0x28000000 0 0x18000000>; /* prefetchable memory */
+
+ clocks = <&tegra_car TEGRA30_CLK_PCIE>,
+ <&tegra_car TEGRA30_CLK_AFI>,
+ <&tegra_car TEGRA30_CLK_PLL_E>,
+ <&tegra_car TEGRA30_CLK_CML0>;
+ clock-names = "pex", "afi", "pll_e", "cml";
+ resets = <&tegra_car 70>,
+ <&tegra_car 72>,
+ <&tegra_car 74>;
+ reset-names = "pex", "afi", "pcie_x";
+ status = "disabled";
+
+ pci@1,0 {
+ device_type = "pci";
+ assigned-addresses = <0x82000800 0 0x00000000 0 0x1000>;
+ reg = <0x000800 0 0 0 0>;
+ status = "disabled";
+
+ #address-cells = <3>;
+ #size-cells = <2>;
+ ranges;
+
+ nvidia,num-lanes = <2>;
+ };
+
+ pci@2,0 {
+ device_type = "pci";
+ assigned-addresses = <0x82001000 0 0x00001000 0 0x1000>;
+ reg = <0x001000 0 0 0 0>;
+ status = "disabled";
+
+ #address-cells = <3>;
+ #size-cells = <2>;
+ ranges;
+
+ nvidia,num-lanes = <2>;
+ };
+
+ pci@3,0 {
+ device_type = "pci";
+ assigned-addresses = <0x82001800 0 0x00004000 0 0x1000>;
+ reg = <0x001800 0 0 0 0>;
+ status = "disabled";
+
+ #address-cells = <3>;
+ #size-cells = <2>;
+ ranges;
+
+ nvidia,num-lanes = <2>;
+ };
+ };
+
+Board DTS:
+
+ pcie-controller@00003000 {
+ status = "okay";
+
+ avdd-pexa-supply = <&ldo1_reg>;
+ vdd-pexa-supply = <&ldo1_reg>;
+ avdd-pexb-supply = <&ldo1_reg>;
+ vdd-pexb-supply = <&ldo1_reg>;
+ avdd-pex-pll-supply = <&ldo1_reg>;
+ avdd-plle-supply = <&ldo1_reg>;
+ vddio-pex-ctl-supply = <&sys_3v3_reg>;
+ hvdd-pex-supply = <&sys_3v3_pexs_reg>;
+
+ pci@1,0 {
+ status = "okay";
+ };
+
+ pci@3,0 {
+ status = "okay";
+ };
+ };
+
+Tegra124:
+---------
+
+SoC DTSI:
+
+ pcie-controller@01003000 {
+ compatible = "nvidia,tegra124-pcie";
+ device_type = "pci";
+ reg = <0x0 0x01003000 0x0 0x00000800 /* PADS registers */
+ 0x0 0x01003800 0x0 0x00000800 /* AFI registers */
+ 0x0 0x02000000 0x0 0x10000000>; /* configuration space */
+ reg-names = "pads", "afi", "cs";
+ interrupts = <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>, /* controller interrupt */
+ <GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>; /* MSI interrupt */
+ interrupt-names = "intr", "msi";
+
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &gic GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>;
+
+ bus-range = <0x00 0xff>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+
+ ranges = <0x82000000 0 0x01000000 0x0 0x01000000 0 0x00001000 /* port 0 configuration space */
+ 0x82000000 0 0x01001000 0x0 0x01001000 0 0x00001000 /* port 1 configuration space */
+ 0x81000000 0 0x0 0x0 0x12000000 0 0x00010000 /* downstream I/O (64 KiB) */
+ 0x82000000 0 0x13000000 0x0 0x13000000 0 0x0d000000 /* non-prefetchable memory (208 MiB) */
+ 0xc2000000 0 0x20000000 0x0 0x20000000 0 0x20000000>; /* prefetchable memory (512 MiB) */
+
+ clocks = <&tegra_car TEGRA124_CLK_PCIE>,
+ <&tegra_car TEGRA124_CLK_AFI>,
+ <&tegra_car TEGRA124_CLK_PLL_E>,
+ <&tegra_car TEGRA124_CLK_CML0>;
+ clock-names = "pex", "afi", "pll_e", "cml";
+ resets = <&tegra_car 70>,
+ <&tegra_car 72>,
+ <&tegra_car 74>;
+ reset-names = "pex", "afi", "pcie_x";
+ status = "disabled";
+
+ pci@1,0 {
+ device_type = "pci";
+ assigned-addresses = <0x82000800 0 0x01000000 0 0x1000>;
+ reg = <0x000800 0 0 0 0>;
+ status = "disabled";
+
+ #address-cells = <3>;
+ #size-cells = <2>;
+ ranges;
+
+ nvidia,num-lanes = <2>;
+ };
+
+ pci@2,0 {
+ device_type = "pci";
+ assigned-addresses = <0x82001000 0 0x01001000 0 0x1000>;
+ reg = <0x001000 0 0 0 0>;
+ status = "disabled";
+
+ #address-cells = <3>;
+ #size-cells = <2>;
+ ranges;
+
+ nvidia,num-lanes = <1>;
+ };
+ };
+
+Board DTS:
+
+ pcie-controller@01003000 {
+ status = "okay";
+
+ avddio-pex-supply = <&vdd_1v05_run>;
+ dvddio-pex-supply = <&vdd_1v05_run>;
+ avdd-pex-pll-supply = <&vdd_1v05_run>;
+ hvdd-pex-supply = <&vdd_3v3_lp0>;
+ hvdd-pex-pll-e-supply = <&vdd_3v3_lp0>;
+ vddio-pex-ctl-supply = <&vdd_3v3_lp0>;
+ avdd-pll-erefe-supply = <&avdd_1v05_run>;
+
+ /* Mini PCIe */
+ pci@1,0 {
+ phys = <&{/padctl@7009f000/pads/pcie/lanes/pcie-4}>;
+ phy-names = "pcie-0";
+ status = "okay";
+ };
+
+ /* Gigabit Ethernet */
+ pci@2,0 {
+ phys = <&{/padctl@7009f000/pads/pcie/lanes/pcie-2}>;
+ phy-names = "pcie-0";
+ status = "okay";
+ };
+ };
diff --git a/Documentation/devicetree/bindings/pci/pci-armada8k.txt b/Documentation/devicetree/bindings/pci/pci-armada8k.txt
new file mode 100644
index 000000000000..598533a57d79
--- /dev/null
+++ b/Documentation/devicetree/bindings/pci/pci-armada8k.txt
@@ -0,0 +1,38 @@
+* Marvell Armada 7K/8K PCIe interface
+
+This PCIe host controller is based on the Synopsys DesignWare PCIe IP
+and thus inherits all the common properties defined in designware-pcie.txt.
+
+Required properties:
+- compatible: "marvell,armada8k-pcie"
+- reg: must contain two register regions
+ - the control register region
+ - the config space region
+- reg-names:
+ - "ctrl" for the control register region
+ - "config" for the config space region
+- interrupts: Interrupt specifier for the PCIe controller
+- clocks: reference to the PCIe controller clock
+
+Example:
+
+ pcie@f2600000 {
+ compatible = "marvell,armada8k-pcie", "snps,dw-pcie";
+ reg = <0 0xf2600000 0 0x10000>, <0 0xf6f00000 0 0x80000>;
+ reg-names = "ctrl", "config";
+ #address-cells = <3>;
+ #size-cells = <2>;
+ #interrupt-cells = <1>;
+ device_type = "pci";
+ dma-coherent;
+
+ bus-range = <0 0xff>;
+ ranges = <0x81000000 0 0xf9000000 0 0xf9000000 0 0x10000 /* downstream I/O */
+ 0x82000000 0 0xf6000000 0 0xf6000000 0 0xf00000>; /* non-prefetchable memory */
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>;
+ num-lanes = <1>;
+ clocks = <&cpm_syscon0 1 13>;
+ status = "disabled";
+ };
diff --git a/Documentation/devicetree/bindings/pci/pci-keystone.txt b/Documentation/devicetree/bindings/pci/pci-keystone.txt
index 54eae2938174..d08a4d51108f 100644
--- a/Documentation/devicetree/bindings/pci/pci-keystone.txt
+++ b/Documentation/devicetree/bindings/pci/pci-keystone.txt
@@ -56,6 +56,7 @@ Optional properties:-
phy-names: name of the Generic Keystine SerDes phy for PCI
- If boot loader already does PCI link establishment, then phys and
phy-names shouldn't be present.
+ interrupts: platform interrupt for error interrupts.
Designware DT Properties not applicable for Keystone PCI
diff --git a/Documentation/devicetree/bindings/phy/bcm-ns-usb2-phy.txt b/Documentation/devicetree/bindings/phy/bcm-ns-usb2-phy.txt
new file mode 100644
index 000000000000..a7aee9ea8926
--- /dev/null
+++ b/Documentation/devicetree/bindings/phy/bcm-ns-usb2-phy.txt
@@ -0,0 +1,21 @@
+Driver for Broadcom Northstar USB 2.0 PHY
+
+Required properties:
+- compatible: brcm,ns-usb2-phy
+- reg: iomem address range of DMU (Device Management Unit)
+- reg-names: "dmu", the only needed & supported reg right now
+- clocks: USB PHY reference clock
+- clock-names: "phy-ref-clk", the only needed & supported clock right now
+
+To initialize the USB 2.0 PHY, the driver needs to set up the PLL correctly. To do
+this it requires a phandle to the USB PHY reference clock.
+
+Example:
+ usb2-phy {
+ compatible = "brcm,ns-usb2-phy";
+ reg = <0x1800c000 0x1000>;
+ reg-names = "dmu";
+ #phy-cells = <0>;
+ clocks = <&genpll BCM_NSP_GENPLL_USB_PHY_REF_CLK>;
+ clock-names = "phy-ref-clk";
+ };
diff --git a/Documentation/devicetree/bindings/phy/brcm,brcmstb-sata-phy.txt b/Documentation/devicetree/bindings/phy/brcm-sata-phy.txt
index d87ab7c127b8..d0231209d846 100644
--- a/Documentation/devicetree/bindings/phy/brcm,brcmstb-sata-phy.txt
+++ b/Documentation/devicetree/bindings/phy/brcm-sata-phy.txt
@@ -1,14 +1,17 @@
-* Broadcom SATA3 PHY for STB
+* Broadcom SATA3 PHY
Required properties:
- compatible: should be one or more of
"brcm,bcm7425-sata-phy"
"brcm,bcm7445-sata-phy"
+ "brcm,iproc-ns2-sata-phy"
"brcm,phy-sata3"
- address-cells: should be 1
- size-cells: should be 0
-- reg: register range for the PHY PCB interface
-- reg-names: should be "phy"
+- reg: register ranges for the PHY PCB interface
+- reg-names: should be "phy" and "phy-ctrl"
+ The "phy-ctrl" registers are only required for
+ "brcm,iproc-ns2-sata-phy".
Sub-nodes:
Each port's PHY should be represented as a sub-node.
@@ -16,12 +19,12 @@ Sub-nodes:
Sub-nodes required properties:
- reg: the PHY number
- phy-cells: generic PHY binding; must be 0
-Optional:
-- brcm,enable-ssc: use spread spectrum clocking (SSC) on this port
+Sub-nodes optional properties:
+- brcm,enable-ssc: use spread spectrum clocking (SSC) on this port
+ This property is not applicable for "brcm,iproc-ns2-sata-phy".
Example:
-
sata-phy@f0458100 {
compatible = "brcm,bcm7445-sata-phy", "brcm,phy-sata3";
reg = <0xf0458100 0x1e00>, <0xf045804c 0x10>;
diff --git a/Documentation/devicetree/bindings/phy/nvidia,tegra124-xusb-padctl.txt b/Documentation/devicetree/bindings/phy/nvidia,tegra124-xusb-padctl.txt
new file mode 100644
index 000000000000..0bf1ae243552
--- /dev/null
+++ b/Documentation/devicetree/bindings/phy/nvidia,tegra124-xusb-padctl.txt
@@ -0,0 +1,733 @@
+Device tree binding for NVIDIA Tegra XUSB pad controller
+========================================================
+
+The Tegra XUSB pad controller manages a set of I/O lanes (with differential
+signals) which connect directly to pins/pads on the SoC package. Each lane
+is controlled by a HW block referred to as a "pad" in the Tegra hardware
+documentation. Each such "pad" may control either one or multiple lanes,
+and thus contains any logic common to all its lanes. Each lane can be
+separately configured and powered up.
+
+Some of the lanes are high-speed lanes, which can be used for PCIe, SATA or
+super-speed USB. Other lanes are for various types of low-speed, full-speed
+or high-speed USB (such as UTMI, ULPI and HSIC). The XUSB pad controller
+contains a software-configurable mux that sits between the I/O controller
+ports (e.g. PCIe) and the lanes.
+
+In addition to per-lane configuration, USB 3.0 ports may require additional
+settings on a per-board basis.
+
+Pads will be represented as children of the top-level XUSB pad controller
+device tree node. Each lane exposed by the pad will be represented by its
+own subnode and can be referenced by users of the lane using the standard
+PHY bindings, as described by the phy-bindings.txt file in this directory.
+
+The Tegra hardware documentation refers to the connection between the XUSB
+pad controller and the XUSB controller as "ports". This is confusing since
+"port" is typically used to denote the physical USB receptacle. The device
+tree binding in this document uses the term "port" to refer to the logical
+abstraction of the signals that are routed to a USB receptacle (i.e. a PHY
+for the USB signal, the VBUS power supply, the USB 2.0 companion port for
+USB 3.0 receptacles, ...).
+
+Required properties:
+--------------------
+- compatible: Must be:
+ - Tegra124: "nvidia,tegra124-xusb-padctl"
+ - Tegra132: "nvidia,tegra132-xusb-padctl", "nvidia,tegra124-xusb-padctl"
+ - Tegra210: "nvidia,tegra210-xusb-padctl"
+- reg: Physical base address and length of the controller's registers.
+- resets: Must contain an entry for each entry in reset-names.
+- reset-names: Must include the following entries:
+ - "padctl"
+
+
+Pad nodes:
+==========
+
+A required child node named "pads" contains a list of subnodes, one for each
+of the pads exposed by the XUSB pad controller. Each pad may need additional
+resources that can be referenced in its pad node.
+
+The "status" property is used to enable or disable the use of a pad. If set
+to "disabled", the pad will not be used on the given board. In order to use
+the pad and any of its lanes, this property must be set to "okay".
+
+For Tegra124 and Tegra132, the following pads exist: usb2, ulpi, hsic, pcie
+and sata. No extra resources are required for operation of these pads.
+
+For Tegra210, the following pads exist: usb2, hsic, pcie and sata. Below is
+a description of the properties of each pad.
+
+UTMI pad:
+---------
+
+Required properties:
+- clocks: Must contain an entry for each entry in clock-names.
+- clock-names: Must contain the following entries:
+ - "trk": phandle and specifier referring to the USB2 tracking clock
+
+HSIC pad:
+---------
+
+Required properties:
+- clocks: Must contain an entry for each entry in clock-names.
+- clock-names: Must contain the following entries:
+ - "trk": phandle and specifier referring to the HSIC tracking clock
+
+PCIe pad:
+---------
+
+Required properties:
+- clocks: Must contain an entry for each entry in clock-names.
+- clock-names: Must contain the following entries:
+ - "pll": phandle and specifier referring to the PLLE
+- resets: Must contain an entry for each entry in reset-names.
+- reset-names: Must contain the following entries:
+ - "phy": reset for the PCIe UPHY block
+
+SATA pad:
+---------
+
+Required properties:
+- resets: Must contain an entry for each entry in reset-names.
+- reset-names: Must contain the following entries:
+ - "phy": reset for the SATA UPHY block
+
+
+PHY nodes:
+==========
+
+Each pad node has a child named "lanes" that contains one or more children of
+its own, each representing one of the lanes controlled by the pad.
+
+Required properties:
+--------------------
+- status: Defines the operation status of the PHY. Valid values are:
+ - "disabled": the PHY is disabled
+ - "okay": the PHY is enabled
+- #phy-cells: Should be 0. Since each lane represents a single PHY, there is
+ no need for an additional specifier.
+- nvidia,function: The output function of the PHY. See below for a list of
+ valid functions per SoC generation.
+
+For Tegra124 and Tegra132, the list of valid PHY nodes is given below:
+- usb2: usb2-0, usb2-1, usb2-2
+ - functions: "snps", "xusb", "uart"
+- ulpi: ulpi-0
+ - functions: "snps", "xusb"
+- hsic: hsic-0, hsic-1
+ - functions: "snps", "xusb"
+- pcie: pcie-0, pcie-1, pcie-2, pcie-3, pcie-4
+ - functions: "pcie", "usb3-ss"
+- sata: sata-0
+ - functions: "usb3-ss", "sata"
+
+For Tegra210, the list of valid PHY nodes is given below:
+- utmi: utmi-0, utmi-1, utmi-2, utmi-3
+ - functions: "snps", "xusb", "uart"
+- hsic: hsic-0, hsic-1
+ - functions: "snps", "xusb"
+- pcie: pcie-0, pcie-1, pcie-2, pcie-3, pcie-4, pcie-5, pcie-6
+ - functions: "pcie-x1", "usb3-ss", "pcie-x4"
+- sata: sata-0
+ - functions: "usb3-ss", "sata"
+
+
+Port nodes:
+===========
+
+A required child node named "ports" contains a list of all the ports exposed
+by the XUSB pad controller. Per-port configuration is only required for USB.
+
+USB2 ports:
+-----------
+
+Required properties:
+- status: Defines the operation status of the port. Valid values are:
+ - "disabled": the port is disabled
+ - "okay": the port is enabled
+- mode: A string that determines the mode in which to run the port. Valid
+ values are:
+ - "host": for USB host mode
+ - "device": for USB device mode
+ - "otg": for USB OTG mode
+
+Optional properties:
+- nvidia,internal: A boolean property whose presence determines that a port
+ is internal. In the absence of this property the port is considered to be
+ external.
+- vbus-supply: phandle to a regulator supplying the VBUS voltage.
+
+ULPI ports:
+-----------
+
+Optional properties:
+- status: Defines the operation status of the port. Valid values are:
+ - "disabled": the port is disabled
+ - "okay": the port is enabled
+- nvidia,internal: A boolean property whose presence determines that a port
+ is internal. In the absence of this property the port is considered to be
+ external.
+- vbus-supply: phandle to a regulator supplying the VBUS voltage.
+
+HSIC ports:
+-----------
+
+Required properties:
+- status: Defines the operation status of the port. Valid values are:
+ - "disabled": the port is disabled
+ - "okay": the port is enabled
+
+Optional properties:
+- vbus-supply: phandle to a regulator supplying the VBUS voltage.
+
+Super-speed USB ports:
+----------------------
+
+Required properties:
+- status: Defines the operation status of the port. Valid values are:
+ - "disabled": the port is disabled
+ - "okay": the port is enabled
+- nvidia,usb2-companion: A single cell that specifies the physical port number
+ to map this super-speed USB port to. The range of valid port numbers varies
+ with the SoC generation:
+ - 0-2: for Tegra124 and Tegra132
+ - 0-3: for Tegra210
+
+Optional properties:
+- nvidia,internal: A boolean property whose presence determines that a port
+ is internal. In the absence of this property the port is considered to be
+ external.
+
+For Tegra124 and Tegra132, the XUSB pad controller exposes the following
+ports:
+- 3x USB2: usb2-0, usb2-1, usb2-2
+- 1x ULPI: ulpi-0
+- 2x HSIC: hsic-0, hsic-1
+- 2x super-speed USB: usb3-0, usb3-1
+
+For Tegra210, the XUSB pad controller exposes the following ports:
+- 4x USB2: usb2-0, usb2-1, usb2-2, usb2-3
+- 2x HSIC: hsic-0, hsic-1
+- 4x super-speed USB: usb3-0, usb3-1, usb3-2, usb3-3
+
+
+Examples:
+=========
+
+Tegra124 and Tegra132:
+----------------------
+
+SoC include:
+
+ padctl@7009f000 {
+ /* for Tegra124 */
+ compatible = "nvidia,tegra124-xusb-padctl";
+ /* for Tegra132 */
+ compatible = "nvidia,tegra132-xusb-padctl",
+ "nvidia,tegra124-xusb-padctl";
+ reg = <0x0 0x7009f000 0x0 0x1000>;
+ resets = <&tegra_car 142>;
+ reset-names = "padctl";
+
+ pads {
+ usb2 {
+ status = "disabled";
+
+ lanes {
+ usb2-0 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ usb2-1 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ usb2-2 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+ };
+ };
+
+ ulpi {
+ status = "disabled";
+
+ lanes {
+ ulpi-0 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+ };
+ };
+
+ hsic {
+ status = "disabled";
+
+ lanes {
+ hsic-0 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ hsic-1 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+ };
+ };
+
+ pcie {
+ status = "disabled";
+
+ lanes {
+ pcie-0 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ pcie-1 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ pcie-2 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ pcie-3 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ pcie-4 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+ };
+ };
+
+ sata {
+ status = "disabled";
+
+ lanes {
+ sata-0 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+ };
+ };
+ };
+
+ ports {
+ usb2-0 {
+ status = "disabled";
+ };
+
+ usb2-1 {
+ status = "disabled";
+ };
+
+ usb2-2 {
+ status = "disabled";
+ };
+
+ ulpi-0 {
+ status = "disabled";
+ };
+
+ hsic-0 {
+ status = "disabled";
+ };
+
+ hsic-1 {
+ status = "disabled";
+ };
+
+ usb3-0 {
+ status = "disabled";
+ };
+
+ usb3-1 {
+ status = "disabled";
+ };
+ };
+ };
+
+Board file:
+
+ padctl@7009f000 {
+ status = "okay";
+
+ pads {
+ usb2 {
+ status = "okay";
+
+ lanes {
+ usb2-0 {
+ nvidia,function = "xusb";
+ status = "okay";
+ };
+
+ usb2-1 {
+ nvidia,function = "xusb";
+ status = "okay";
+ };
+
+ usb2-2 {
+ nvidia,function = "xusb";
+ status = "okay";
+ };
+ };
+ };
+
+ pcie {
+ status = "okay";
+
+ lanes {
+ pcie-0 {
+ nvidia,function = "usb3-ss";
+ status = "okay";
+ };
+
+ pcie-2 {
+ nvidia,function = "pcie";
+ status = "okay";
+ };
+
+ pcie-4 {
+ nvidia,function = "pcie";
+ status = "okay";
+ };
+ };
+ };
+
+ sata {
+ status = "okay";
+
+ lanes {
+ sata-0 {
+ nvidia,function = "sata";
+ status = "okay";
+ };
+ };
+ };
+ };
+
+ ports {
+ /* Micro A/B */
+ usb2-0 {
+ status = "okay";
+ mode = "otg";
+ };
+
+ /* Mini PCIe */
+ usb2-1 {
+ status = "okay";
+ mode = "host";
+ };
+
+ /* USB3 */
+ usb2-2 {
+ status = "okay";
+ mode = "host";
+
+ vbus-supply = <&vdd_usb3_vbus>;
+ };
+
+ usb3-0 {
+ nvidia,usb2-companion = <2>;
+ status = "okay";
+ };
+ };
+ };
+
+Tegra210:
+---------
+
+SoC include:
+
+ padctl@7009f000 {
+ compatible = "nvidia,tegra210-xusb-padctl";
+ reg = <0x0 0x7009f000 0x0 0x1000>;
+ resets = <&tegra_car 142>;
+ reset-names = "padctl";
+
+ status = "disabled";
+
+ pads {
+ usb2 {
+ clocks = <&tegra_car TEGRA210_CLK_USB2_TRK>;
+ clock-names = "trk";
+ status = "disabled";
+
+ lanes {
+ usb2-0 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ usb2-1 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ usb2-2 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ usb2-3 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+ };
+ };
+
+ hsic {
+ clocks = <&tegra_car TEGRA210_CLK_HSIC_TRK>;
+ clock-names = "trk";
+ status = "disabled";
+
+ lanes {
+ hsic-0 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ hsic-1 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+ };
+ };
+
+ pcie {
+ clocks = <&tegra_car TEGRA210_CLK_PLL_E>;
+ clock-names = "pll";
+ resets = <&tegra_car 205>;
+ reset-names = "phy";
+ status = "disabled";
+
+ lanes {
+ pcie-0 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ pcie-1 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ pcie-2 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ pcie-3 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ pcie-4 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ pcie-5 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ pcie-6 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+ };
+ };
+
+ sata {
+ clocks = <&tegra_car TEGRA210_CLK_PLL_E>;
+ clock-names = "pll";
+ resets = <&tegra_car 204>;
+ reset-names = "phy";
+ status = "disabled";
+
+ lanes {
+ sata-0 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+ };
+ };
+ };
+
+ ports {
+ usb2-0 {
+ status = "disabled";
+ };
+
+ usb2-1 {
+ status = "disabled";
+ };
+
+ usb2-2 {
+ status = "disabled";
+ };
+
+ usb2-3 {
+ status = "disabled";
+ };
+
+ hsic-0 {
+ status = "disabled";
+ };
+
+ hsic-1 {
+ status = "disabled";
+ };
+
+ usb3-0 {
+ status = "disabled";
+ };
+
+ usb3-1 {
+ status = "disabled";
+ };
+
+ usb3-2 {
+ status = "disabled";
+ };
+
+ usb3-3 {
+ status = "disabled";
+ };
+ };
+ };
+
+Board file:
+
+ padctl@7009f000 {
+ status = "okay";
+
+ pads {
+ usb2 {
+ status = "okay";
+
+ lanes {
+ usb2-0 {
+ nvidia,function = "xusb";
+ status = "okay";
+ };
+
+ usb2-1 {
+ nvidia,function = "xusb";
+ status = "okay";
+ };
+
+ usb2-2 {
+ nvidia,function = "xusb";
+ status = "okay";
+ };
+
+ usb2-3 {
+ nvidia,function = "xusb";
+ status = "okay";
+ };
+ };
+ };
+
+ pcie {
+ status = "okay";
+
+ lanes {
+ pcie-0 {
+ nvidia,function = "pcie-x1";
+ status = "okay";
+ };
+
+ pcie-1 {
+ nvidia,function = "pcie-x4";
+ status = "okay";
+ };
+
+ pcie-2 {
+ nvidia,function = "pcie-x4";
+ status = "okay";
+ };
+
+ pcie-3 {
+ nvidia,function = "pcie-x4";
+ status = "okay";
+ };
+
+ pcie-4 {
+ nvidia,function = "pcie-x4";
+ status = "okay";
+ };
+
+ pcie-5 {
+ nvidia,function = "usb3-ss";
+ status = "okay";
+ };
+
+ pcie-6 {
+ nvidia,function = "usb3-ss";
+ status = "okay";
+ };
+ };
+ };
+
+ sata {
+ status = "okay";
+
+ lanes {
+ sata-0 {
+ nvidia,function = "sata";
+ status = "okay";
+ };
+ };
+ };
+ };
+
+ ports {
+ usb2-0 {
+ status = "okay";
+ mode = "otg";
+ };
+
+ usb2-1 {
+ status = "okay";
+ vbus-supply = <&vdd_5v0_rtl>;
+ mode = "host";
+ };
+
+ usb2-2 {
+ status = "okay";
+ vbus-supply = <&vdd_usb_vbus>;
+ mode = "host";
+ };
+
+ usb2-3 {
+ status = "okay";
+ mode = "host";
+ };
+
+ usb3-0 {
+ status = "okay";
+ nvidia,lanes = "pcie-6";
+ nvidia,usb2-companion = <1>;
+ };
+
+ usb3-1 {
+ status = "okay";
+ nvidia,lanes = "pcie-5";
+ nvidia,usb2-companion = <2>;
+ };
+ };
+ };
diff --git a/Documentation/devicetree/bindings/phy/phy-lpc18xx-usb-otg.txt b/Documentation/devicetree/bindings/phy/phy-lpc18xx-usb-otg.txt
index bd61b467e30a..3bb821cd6a7f 100644
--- a/Documentation/devicetree/bindings/phy/phy-lpc18xx-usb-otg.txt
+++ b/Documentation/devicetree/bindings/phy/phy-lpc18xx-usb-otg.txt
@@ -18,7 +18,7 @@ creg: syscon@40043000 {
compatible = "nxp,lpc1850-creg", "syscon", "simple-mfd";
reg = <0x40043000 0x1000>;
- usb0_otg_phy: phy@004 {
+ usb0_otg_phy: phy {
compatible = "nxp,lpc1850-usb-otg-phy";
clocks = <&ccu1 CLK_USB0>;
#phy-cells = <0>;
diff --git a/Documentation/devicetree/bindings/phy/phy-mt65xx-usb.txt b/Documentation/devicetree/bindings/phy/phy-mt65xx-usb.txt
index 00100cf3e037..33a2b1ee3f3e 100644
--- a/Documentation/devicetree/bindings/phy/phy-mt65xx-usb.txt
+++ b/Documentation/devicetree/bindings/phy/phy-mt65xx-usb.txt
@@ -4,7 +4,9 @@ mt65xx USB3.0 PHY binding
This binding describes a usb3.0 phy for mt65xx platforms of Mediatek SoC.
Required properties (controller (parent) node):
- - compatible : should be "mediatek,mt8173-u3phy"
+ - compatible : should be one of
+ "mediatek,mt2701-u3phy"
+ "mediatek,mt8173-u3phy"
- reg : offset and length of register for phy, exclude port's
register.
- clocks : a list of phandle + clock-specifier pairs, one for each
diff --git a/Documentation/devicetree/bindings/phy/phy-stih41x-usb.txt b/Documentation/devicetree/bindings/phy/phy-stih41x-usb.txt
index 00944a05ee6b..744b4809542e 100644
--- a/Documentation/devicetree/bindings/phy/phy-stih41x-usb.txt
+++ b/Documentation/devicetree/bindings/phy/phy-stih41x-usb.txt
@@ -17,7 +17,7 @@ Example:
usb2_phy: usb2phy@0 {
compatible = "st,stih416-usb-phy";
- #phy-cell = <0>;
+ #phy-cells = <0>;
st,syscfg = <&syscfg_rear>;
clocks = <&clk_sysin>;
clock-names = "osc_phy";
diff --git a/Documentation/devicetree/bindings/phy/rcar-gen2-phy.txt b/Documentation/devicetree/bindings/phy/rcar-gen2-phy.txt
index d564ba4f1cf6..91da947ae9b6 100644
--- a/Documentation/devicetree/bindings/phy/rcar-gen2-phy.txt
+++ b/Documentation/devicetree/bindings/phy/rcar-gen2-phy.txt
@@ -7,6 +7,12 @@ Required properties:
- compatible: "renesas,usb-phy-r8a7790" if the device is a part of R8A7790 SoC.
"renesas,usb-phy-r8a7791" if the device is a part of R8A7791 SoC.
"renesas,usb-phy-r8a7794" if the device is a part of R8A7794 SoC.
+ "renesas,rcar-gen2-usb-phy" for a generic R-Car Gen2 compatible device.
+
+ When compatible with the generic version, nodes must list the
+ SoC-specific version corresponding to the platform first
+ followed by the generic version.
+
- reg: offset and length of the register block.
- #address-cells: number of address cells for the USB channel subnodes, must
be <1>.
@@ -34,7 +40,7 @@ the USB channel; see the selector meanings below:
Example (Lager board):
usb-phy@e6590100 {
- compatible = "renesas,usb-phy-r8a7790";
+ compatible = "renesas,usb-phy-r8a7790", "renesas,rcar-gen2-usb-phy";
reg = <0 0xe6590100 0 0x100>;
#address-cells = <1>;
#size-cells = <0>;
diff --git a/Documentation/devicetree/bindings/phy/rcar-gen3-phy-usb2.txt b/Documentation/devicetree/bindings/phy/rcar-gen3-phy-usb2.txt
index eaf7e9b7ce6b..2281d6cdecb1 100644
--- a/Documentation/devicetree/bindings/phy/rcar-gen3-phy-usb2.txt
+++ b/Documentation/devicetree/bindings/phy/rcar-gen3-phy-usb2.txt
@@ -6,6 +6,12 @@ This file provides information on what the device node for the R-Car generation
Required properties:
- compatible: "renesas,usb2-phy-r8a7795" if the device is a part of an R8A7795
SoC.
+ "renesas,rcar-gen3-usb2-phy" for a generic R-Car Gen3 compatible device.
+
+ When compatible with the generic version, nodes must list the
+ SoC-specific version corresponding to the platform first
+ followed by the generic version.
+
- reg: offset and length of the partial USB 2.0 Host register block.
- clocks: clock phandle and specifier pair(s).
- #phy-cells: see phy-bindings.txt in the same directory, must be <0>.
@@ -15,18 +21,20 @@ To use a USB channel where USB 2.0 Host and HSUSB (USB 2.0 Peripheral) are
combined, the device tree node should set interrupt properties to use the
channel as USB OTG:
- interrupts: interrupt specifier for the PHY.
+- vbus-supply: Phandle to a regulator that provides power to the VBUS. This
+ regulator will be managed during the PHY power on/off sequence.
Example (R-Car H3):
usb-phy@ee080200 {
- compatible = "renesas,usb2-phy-r8a7795";
+ compatible = "renesas,usb2-phy-r8a7795", "renesas,rcar-gen3-usb2-phy";
reg = <0 0xee080200 0 0x700>;
interrupts = <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp7_clks R8A7795_CLK_EHCI0>;
};
usb-phy@ee0a0200 {
- compatible = "renesas,usb2-phy-r8a7795";
+ compatible = "renesas,usb2-phy-r8a7795", "renesas,rcar-gen3-usb2-phy";
reg = <0 0xee0a0200 0 0x700>;
clocks = <&mstp7_clks R8A7795_CLK_EHCI0>;
};
diff --git a/Documentation/devicetree/bindings/phy/samsung-phy.txt b/Documentation/devicetree/bindings/phy/samsung-phy.txt
index 0289d3b07853..9872ba8546bd 100644
--- a/Documentation/devicetree/bindings/phy/samsung-phy.txt
+++ b/Documentation/devicetree/bindings/phy/samsung-phy.txt
@@ -2,9 +2,20 @@ Samsung S5P/EXYNOS SoC series MIPI CSIS/DSIM DPHY
-------------------------------------------------
Required properties:
-- compatible : should be "samsung,s5pv210-mipi-video-phy";
+- compatible : should be one of the listed compatibles:
+ - "samsung,s5pv210-mipi-video-phy"
+ - "samsung,exynos5420-mipi-video-phy"
+ - "samsung,exynos5433-mipi-video-phy"
- #phy-cells : from the generic phy bindings, must be 1;
-- syscon - phandle to the PMU system controller;
+
+In case of s5pv210 and exynos5420 compatible PHYs:
+- syscon - phandle to the PMU system controller
+
+In case of exynos5433 compatible PHY:
+ - samsung,pmu-syscon - phandle to the PMU system controller
+ - samsung,disp-sysreg - phandle to the DISP system registers controller
+ - samsung,cam0-sysreg - phandle to the CAM0 system registers controller
+ - samsung,cam1-sysreg - phandle to the CAM1 system registers controller
For "samsung,s5pv210-mipi-video-phy" compatible PHYs the second cell in
the PHY specifier identifies the PHY and its meaning is as follows:
@@ -12,6 +23,9 @@ the PHY specifier identifies the PHY and its meaning is as follows:
1 - MIPI DSIM 0,
2 - MIPI CSIS 1,
3 - MIPI DSIM 1.
+"samsung,exynos5420-mipi-video-phy" and "samsung,exynos5433-mipi-video-phy"
+support an additional fifth PHY:
+ 4 - MIPI CSIS 2.
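+
+Below is an illustrative sketch of an exynos5433 node using the properties
+above; the syscon phandle labels are placeholders and are not defined by this
+binding:
+
+ mipi_phy: video-phy {
+  compatible = "samsung,exynos5433-mipi-video-phy";
+  #phy-cells = <1>;
+  samsung,pmu-syscon = <&pmu_system_controller>;
+  samsung,disp-sysreg = <&syscon_disp>;
+  samsung,cam0-sysreg = <&syscon_cam0>;
+  samsung,cam1-sysreg = <&syscon_cam1>;
+ };
+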
Samsung EXYNOS SoC series Display Port PHY
-------------------------------------------------
diff --git a/Documentation/devicetree/bindings/pinctrl/microchip,pic32-pinctrl.txt b/Documentation/devicetree/bindings/pinctrl/microchip,pic32-pinctrl.txt
index 4b5efa51bec7..29b72e303ebf 100644
--- a/Documentation/devicetree/bindings/pinctrl/microchip,pic32-pinctrl.txt
+++ b/Documentation/devicetree/bindings/pinctrl/microchip,pic32-pinctrl.txt
@@ -34,7 +34,7 @@ pic32_pinctrl: pinctrl@1f801400{
#size-cells = <1>;
compatible = "microchip,pic32mzda-pinctrl";
reg = <0x1f801400 0x400>;
- clocks = <&PBCLK1>;
+ clocks = <&rootclk PB1CLK>;
pinctrl_uart2: pinctrl_uart2 {
uart2-tx {
diff --git a/Documentation/devicetree/bindings/pinctrl/nvidia,tegra124-xusb-padctl.txt b/Documentation/devicetree/bindings/pinctrl/nvidia,tegra124-xusb-padctl.txt
index 30676ded85bb..4048f43a9d29 100644
--- a/Documentation/devicetree/bindings/pinctrl/nvidia,tegra124-xusb-padctl.txt
+++ b/Documentation/devicetree/bindings/pinctrl/nvidia,tegra124-xusb-padctl.txt
@@ -1,6 +1,12 @@
Device tree binding for NVIDIA Tegra XUSB pad controller
========================================================
+NOTE: It turns out that this binding isn't an accurate description of the XUSB
+pad controller. While the description is good enough for the functional subset
+required for PCIe and SATA, it lacks the flexibility to represent the features
+needed for USB. For the new binding, see ../phy/nvidia,tegra-xusb-padctl.txt.
+The binding described in this file is deprecated and should not be used.
+
The Tegra XUSB pad controller manages a set of lanes, each of which can be
assigned to one out of a set of different pads. Some of these pads have an
associated PHY that must be powered up before the pad can be used.
@@ -79,7 +85,7 @@ Example:
SoC file extract:
-----------------
- padctl@0,7009f000 {
+ padctl@7009f000 {
compatible = "nvidia,tegra124-xusb-padctl";
reg = <0x0 0x7009f000 0x0 0x1000>;
resets = <&tegra_car 142>;
@@ -91,7 +97,7 @@ SoC file extract:
Board file extract:
-------------------
- pcie-controller@0,01003000 {
+ pcie-controller@01003000 {
...
phys = <&padctl 0>;
@@ -102,7 +108,7 @@ Board file extract:
...
- padctl: padctl@0,7009f000 {
+ padctl: padctl@7009f000 {
pinctrl-0 = <&padctl_default>;
pinctrl-names = "default";
diff --git a/Documentation/devicetree/bindings/pinctrl/qcom,pmic-gpio.txt b/Documentation/devicetree/bindings/pinctrl/qcom,pmic-gpio.txt
index a90c812ad642..a54c39ebbf8b 100644
--- a/Documentation/devicetree/bindings/pinctrl/qcom,pmic-gpio.txt
+++ b/Documentation/devicetree/bindings/pinctrl/qcom,pmic-gpio.txt
@@ -122,7 +122,7 @@ to specify in a pin configuration subnode:
2: 1.5uA (PMIC_GPIO_PULL_UP_1P5)
3: 31.5uA (PMIC_GPIO_PULL_UP_31P5)
4: 1.5uA + 30uA boost (PMIC_GPIO_PULL_UP_1P5_30)
- If this property is ommited 30uA strength will be used if
+ If this property is omitted 30uA strength will be used if
pull up is selected
- bias-high-impedance:
diff --git a/Documentation/devicetree/bindings/pinctrl/renesas,pfc-pinctrl.txt b/Documentation/devicetree/bindings/pinctrl/renesas,pfc-pinctrl.txt
index ffadb7a371f6..74e6ec0339d6 100644
--- a/Documentation/devicetree/bindings/pinctrl/renesas,pfc-pinctrl.txt
+++ b/Documentation/devicetree/bindings/pinctrl/renesas,pfc-pinctrl.txt
@@ -72,8 +72,8 @@ Pin Configuration Node Properties:
The pin configuration parameters use the generic pinconf bindings defined in
pinctrl-bindings.txt in this directory. The supported parameters are
-bias-disable, bias-pull-up, bias-pull-down and power-source. For pins that
-have a configurable I/O voltage, the power-source value should be the
+bias-disable, bias-pull-up, bias-pull-down, drive-strength and power-source. For
+pins that have a configurable I/O voltage, the power-source value should be the
nominal I/O voltage in millivolts.
diff --git a/Documentation/devicetree/bindings/power/qcom,coincell-charger.txt b/Documentation/devicetree/bindings/power/qcom,coincell-charger.txt
index 0e6d8754e7ec..747899223262 100644
--- a/Documentation/devicetree/bindings/power/qcom,coincell-charger.txt
+++ b/Documentation/devicetree/bindings/power/qcom,coincell-charger.txt
@@ -29,7 +29,7 @@ IC (PMIC)
- qcom,charger-disable:
Usage: optional
Value type: <boolean>
- Definition: definining this property disables charging
+ Definition: defining this property disables charging
This charger is a sub-node of one of the 8941 PMIC blocks, and is specified
as a child node in DTS of that node. See ../mfd/qcom,spmi-pmic.txt and
diff --git a/Documentation/devicetree/bindings/power/renesas,rcar-sysc.txt b/Documentation/devicetree/bindings/power/renesas,rcar-sysc.txt
new file mode 100644
index 000000000000..b74e4d4785ab
--- /dev/null
+++ b/Documentation/devicetree/bindings/power/renesas,rcar-sysc.txt
@@ -0,0 +1,48 @@
+DT bindings for the Renesas R-Car System Controller
+
+== System Controller Node ==
+
+The R-Car System Controller provides power management for the CPU cores and
+various coprocessors.
+
+Required properties:
+ - compatible: Must contain exactly one of the following:
+ - "renesas,r8a7779-sysc" (R-Car H1)
+ - "renesas,r8a7790-sysc" (R-Car H2)
+ - "renesas,r8a7791-sysc" (R-Car M2-W)
+ - "renesas,r8a7792-sysc" (R-Car V2H)
+ - "renesas,r8a7793-sysc" (R-Car M2-N)
+ - "renesas,r8a7794-sysc" (R-Car E2)
+ - "renesas,r8a7795-sysc" (R-Car H3)
+ - reg: Address start and address range for the device.
+ - #power-domain-cells: Must be 1.
+
+
+Example:
+
+ sysc: system-controller@e6180000 {
+ compatible = "renesas,r8a7791-sysc";
+ reg = <0 0xe6180000 0 0x0200>;
+ #power-domain-cells = <1>;
+ };
+
+
+== PM Domain Consumers ==
+
+Devices residing in a power area must refer to that power area, as documented
+by the generic PM domain bindings in
+Documentation/devicetree/bindings/power/power_domain.txt.
+
+Required properties:
+ - power-domains: A phandle and symbolic PM domain specifier, as defined in
+ <dt-bindings/power/r8a77*-sysc.h>.
+
+
+Example:
+
+ L2_CA15: cache-controller@0 {
+ compatible = "cache";
+ power-domains = <&sysc R8A7791_PD_CA15_SCU>;
+ cache-unified;
+ cache-level = <2>;
+ };
diff --git a/Documentation/devicetree/bindings/gpio/gpio-poweroff.txt b/Documentation/devicetree/bindings/power/reset/gpio-poweroff.txt
index d4eab9227ea4..d4eab9227ea4 100644
--- a/Documentation/devicetree/bindings/gpio/gpio-poweroff.txt
+++ b/Documentation/devicetree/bindings/power/reset/gpio-poweroff.txt
diff --git a/Documentation/devicetree/bindings/gpio/gpio-restart.txt b/Documentation/devicetree/bindings/power/reset/gpio-restart.txt
index af3701bc15c4..af3701bc15c4 100644
--- a/Documentation/devicetree/bindings/gpio/gpio-restart.txt
+++ b/Documentation/devicetree/bindings/power/reset/gpio-restart.txt
diff --git a/Documentation/devicetree/bindings/power/rockchip-io-domain.txt b/Documentation/devicetree/bindings/power/rockchip-io-domain.txt
index c84fb47265eb..d23dc002a87e 100644
--- a/Documentation/devicetree/bindings/power/rockchip-io-domain.txt
+++ b/Documentation/devicetree/bindings/power/rockchip-io-domain.txt
@@ -37,8 +37,10 @@ Required properties:
- "rockchip,rk3368-pmu-io-voltage-domain" for rk3368 pmu-domains
- "rockchip,rk3399-io-voltage-domain" for rk3399
- "rockchip,rk3399-pmu-io-voltage-domain" for rk3399 pmu-domains
-- rockchip,grf: phandle to the syscon managing the "general register files"
+Deprecated properties:
+- rockchip,grf: phandle to the syscon managing the "general register files"
+ Systems should move the io-domains to a sub-node of the grf simple-mfd.
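+
+As an illustrative sketch (the compatible strings follow the list above, while
+the supply name and regulator phandle are placeholders), an io-domains node
+placed under the GRF simple-mfd would look like:
+
+ grf: syscon@ff770000 {
+  compatible = "rockchip,rk3399-grf", "syscon", "simple-mfd";
+  reg = <0x0 0xff770000 0x0 0x10000>;
+
+  io-domains {
+   compatible = "rockchip,rk3399-io-voltage-domain";
+   bt656-supply = <&vcc1v8_dvp>;
+  };
+ };
+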
You specify supplies using the standard regulator bindings by including
a phandle the relevant regulator. All specified supplies must be able
diff --git a/Documentation/devicetree/bindings/regmap/regmap.txt b/Documentation/devicetree/bindings/regmap/regmap.txt
index e98a9652ccc8..0127be360fe8 100644
--- a/Documentation/devicetree/bindings/regmap/regmap.txt
+++ b/Documentation/devicetree/bindings/regmap/regmap.txt
@@ -1,50 +1,29 @@
-Device-Tree binding for regmap
-
-The endianness mode of CPU & Device scenarios:
-Index Device Endianness properties
----------------------------------------------------
-1 BE 'big-endian'
-2 LE 'little-endian'
-3 Native 'native-endian'
-
-For one device driver, which will run in different scenarios above
-on different SoCs using the devicetree, we need one way to simplify
-this.
+Devicetree binding for regmap
Optional properties:
-- {big,little,native}-endian: these are boolean properties, if absent
- then the implementation will choose a default based on the device
- being controlled. These properties are for register values and all
- the buffers only. Native endian means that the CPU and device have
- the same endianness.
-Examples:
-Scenario 1 : CPU in LE mode & device in LE mode.
-dev: dev@40031000 {
- compatible = "name";
- reg = <0x40031000 0x1000>;
- ...
-};
+ little-endian,
+ big-endian,
+ native-endian: See common-properties.txt for a definition
-Scenario 2 : CPU in LE mode & device in BE mode.
-dev: dev@40031000 {
- compatible = "name";
- reg = <0x40031000 0x1000>;
- ...
- big-endian;
-};
+Note:
+Regmap defaults to little-endian register access on MMIO-based
+devices; this is by far the most common setting. On CPU
+architectures that typically run big-endian operating systems
+(e.g. PowerPC), registers can be defined as big-endian and must
+be marked that way in the devicetree.
-Scenario 3 : CPU in BE mode & device in BE mode.
-dev: dev@40031000 {
- compatible = "name";
- reg = <0x40031000 0x1000>;
- ...
-};
+On SoCs that can be operated in both big-endian and little-endian
+modes, with a single hardware switch controlling both the endianness
+of the CPU and a byteswap for MMIO registers (e.g. many Broadcom MIPS
+chips), "native-endian" is used to allow using the same device tree
+blob in both cases.
-Scenario 4 : CPU in BE mode & device in LE mode.
+Examples:
+Scenario 1 : a register set in big-endian mode.
dev: dev@40031000 {
- compatible = "name";
+ compatible = "syscon";
reg = <0x40031000 0x1000>;
+ big-endian;
...
- little-endian;
};
diff --git a/Documentation/devicetree/bindings/regulator/max8973-regulator.txt b/Documentation/devicetree/bindings/regulator/max8973-regulator.txt
index f80ea2fe27e6..c2c68fcc1b41 100644
--- a/Documentation/devicetree/bindings/regulator/max8973-regulator.txt
+++ b/Documentation/devicetree/bindings/regulator/max8973-regulator.txt
@@ -32,6 +32,13 @@ Optional properties:
Enhanced transient response (ETR) will affect the configuration of CKADV.
+-junction-warn-millicelsius: u32, junction warning temperature threshold
+ in millicelsius. If the die temperature crosses this level then the
+ device generates warning interrupts.
+
+Please note that thermal functionality is only supported on MAX77621. The
+supported threshold warning temperatures for MAX77621 are 120 degC and 140 degC.
+
Example:
max8973@1b {
diff --git a/Documentation/devicetree/bindings/regulator/palmas-pmic.txt b/Documentation/devicetree/bindings/regulator/palmas-pmic.txt
index 725393c8a7f2..99872819604f 100644
--- a/Documentation/devicetree/bindings/regulator/palmas-pmic.txt
+++ b/Documentation/devicetree/bindings/regulator/palmas-pmic.txt
@@ -1,5 +1,12 @@
* palmas regulator IP block devicetree bindings
+The tps659038 for the AM57x class has OTP spins that
+have different part numbers but the same functionality. There
+is no need to add the OTP spins to the palmas driver. The
+spin devices should use tps659038 as its compatible value.
+This is the list of those devices:
+tps659037
+
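+A minimal sketch of what this means for such a device (assuming the usual
+convention of listing the specific part before the generic series name):
+
+ pmic {
+  compatible = "ti,tps659038-pmic", "ti,palmas-pmic";
+  ...
+ };
+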
Required properties:
- compatible : Should be from the list
ti,twl6035-pmic
@@ -8,6 +15,7 @@ Required properties:
ti,tps65913-pmic
ti,tps65914-pmic
ti,tps65917-pmic
+ ti,tps659038-pmic
and also the generic series names
ti,palmas-pmic
- interrupt-parent : The parent interrupt controller which is palmas.
diff --git a/Documentation/devicetree/bindings/regulator/pv88080.txt b/Documentation/devicetree/bindings/regulator/pv88080.txt
new file mode 100644
index 000000000000..38a614210dcb
--- /dev/null
+++ b/Documentation/devicetree/bindings/regulator/pv88080.txt
@@ -0,0 +1,49 @@
+* Powerventure Semiconductor PV88080 Voltage Regulator
+
+Required properties:
+- compatible: "pvs,pv88080".
+- reg: I2C slave address, usually 0x49.
+- interrupts: the interrupt outputs of the controller
+- regulators: A node that houses a sub-node for each regulator within the
+ device. Each sub-node is identified using the node's name, with valid
+ values listed below. The content of each sub-node is defined by the
+ standard binding for regulators; see regulator.txt.
+ Valid sub-node names are BUCK1, BUCK2 and BUCK3.
+
+Optional properties:
+- Any optional property defined in regulator.txt
+
+Example
+
+ pmic: pv88080@49 {
+ compatible = "pvs,pv88080";
+ reg = <0x49>;
+ interrupt-parent = <&gpio>;
+ interrupts = <24 24>;
+
+ regulators {
+ BUCK1 {
+ regulator-name = "buck1";
+ regulator-min-microvolt = < 600000>;
+ regulator-max-microvolt = <1393750>;
+ regulator-min-microamp = < 220000>;
+ regulator-max-microamp = <7040000>;
+ };
+
+ BUCK2 {
+ regulator-name = "buck2";
+ regulator-min-microvolt = < 600000>;
+ regulator-max-microvolt = <1393750>;
+ regulator-min-microamp = <1496000>;
+ regulator-max-microamp = <4189000>;
+ };
+
+ BUCK3 {
+ regulator-name = "buck3";
+ regulator-min-microvolt = <1400000>;
+ regulator-max-microvolt = <2193750>;
+ regulator-min-microamp = <1496000>;
+ regulator-max-microamp = <4189000>;
+ };
+ };
+ };
diff --git a/Documentation/devicetree/bindings/regulator/qcom,spmi-regulator.txt b/Documentation/devicetree/bindings/regulator/qcom,spmi-regulator.txt
index d00bfd8624a5..46c6f3ed1a1c 100644
--- a/Documentation/devicetree/bindings/regulator/qcom,spmi-regulator.txt
+++ b/Documentation/devicetree/bindings/regulator/qcom,spmi-regulator.txt
@@ -7,6 +7,7 @@ Qualcomm SPMI Regulators
"qcom,pm8841-regulators"
"qcom,pm8916-regulators"
"qcom,pm8941-regulators"
+ "qcom,pm8994-regulators"
- interrupts:
Usage: optional
@@ -68,6 +69,37 @@ Qualcomm SPMI Regulators
Definition: Reference to regulator supplying the input pin, as
described in the data sheet.
+- vdd_s1-supply:
+- vdd_s2-supply:
+- vdd_s3-supply:
+- vdd_s4-supply:
+- vdd_s5-supply:
+- vdd_s6-supply:
+- vdd_s7-supply:
+- vdd_s8-supply:
+- vdd_s9-supply:
+- vdd_s10-supply:
+- vdd_s11-supply:
+- vdd_s12-supply:
+- vdd_l1-supply:
+- vdd_l2_l26_l28-supply:
+- vdd_l3_l11-supply:
+- vdd_l4_l27_l31-supply:
+- vdd_l5_l7-supply:
+- vdd_l6_l12_l32-supply:
+- vdd_l8_l16_l30-supply:
+- vdd_l9_l10_l18_l22-supply:
+- vdd_l13_l19_l23_l24-supply:
+- vdd_l14_l15-supply:
+- vdd_l17_l29-supply:
+- vdd_l20_l21-supply:
+- vdd_l25-supply:
+- vdd_lvs_1_2-supply:
+ Usage: optional (pm8994 only)
+ Value type: <phandle>
+ Definition: Reference to regulator supplying the input pin, as
+ described in the data sheet.
+
The regulator node houses sub-nodes for each regulator within the device. Each
sub-node is identified using the node's name, with valid values listed for each
@@ -85,6 +117,11 @@ pm8941:
l15, l16, l17, l18, l19, l20, l21, l22, l23, l24, lvs1, lvs2, lvs3,
mvs1, mvs2
+pm8994:
+ s1, s2, s3, s4, s5, s6, s7, s8, s9, s10, s11, s12, l1, l2, l3, l4, l5,
+ l6, l7, l8, l9, l10, l11, l12, l13, l14, l15, l16, l17, l18, l19, l20,
+ l21, l22, l23, l24, l25, l26, l27, l28, l29, l30, l31, l32, lvs1, lvs2
+
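+An abbreviated, illustrative sketch of how the pm8994 supplies might be wired
+up (the regulator phandles are placeholders, not requirements of this binding):
+
+ regulators {
+  compatible = "qcom,pm8994-regulators";
+  vdd_s3-supply = <&vph_pwr>;
+  vdd_l1-supply = <&pm8994_s3>;
+ };
+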
The content of each sub-node is defined by the standard binding for regulators -
see regulator.txt - with additional custom properties described below:
diff --git a/Documentation/devicetree/bindings/regulator/regulator-max77620.txt b/Documentation/devicetree/bindings/regulator/regulator-max77620.txt
index b3c8ca672024..1c4bfe786736 100644
--- a/Documentation/devicetree/bindings/regulator/regulator-max77620.txt
+++ b/Documentation/devicetree/bindings/regulator/regulator-max77620.txt
@@ -94,6 +94,28 @@ Following are additional properties:
This is applicable if suspend state
FPS source is selected as FPS0, FPS1 or
FPS2.
+- maxim,ramp-rate-setting: integer, ramp rate (uV/us) setting to be
+ configured to the device.
+ The platform may have a different ramp
+ rate than the advertised ramp rate if it
+ has a design variation from Maxim's
+ recommendation. In this case, the
+ platform-specific ramp rate is used for
+ ramp time calculation and this property
+ is used for the device register
+ configuration.
+ The measured ramp rate of the platform is
+ provided by regulator-ramp-delay
+ as described in <devicetree/bindings/
+ regulator/regulator.txt>.
+ The Maxim MAX77620 supports the following
+ ramp delays:
+ SD: 13.75mV/us, 27.5mV/us, 55mV/us
+ LDOs: 5mV/us, 100mV/us
+
+Note: If the measured ramp delay is the same as the advertised ramp delay, it is
+not required to provide the ramp delay with the "maxim,ramp-rate-setting"
+property. The ramp rate can be provided by regulator-ramp-delay, which will be
+used for ramp time calculation for voltage changes as well as for the device
+configuration.
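+
+For instance, an illustrative sketch for one rail (the node name and both
+numeric values are placeholders: 20000 stands for a measured platform ramp rate
+and 27500 for the 27.5mV/us SD setting above, both expressed in uV/us):
+
+ sd0 {
+  regulator-ramp-delay = <20000>;
+  maxim,ramp-rate-setting = <27500>;
+ };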
Example:
--------
diff --git a/Documentation/devicetree/bindings/regulator/ti-abb-regulator.txt b/Documentation/devicetree/bindings/regulator/ti-abb-regulator.txt
index c58db75f959e..c3f6546ebac7 100644
--- a/Documentation/devicetree/bindings/regulator/ti-abb-regulator.txt
+++ b/Documentation/devicetree/bindings/regulator/ti-abb-regulator.txt
@@ -14,8 +14,8 @@ Required Properties:
- "setup-address" - contains setup register address of ABB module (ti,abb-v3)
- "int-address" - contains address of interrupt register for ABB module
(also see Optional properties)
-- #address-cell: should be 0
-- #size-cell: should be 0
+- #address-cells: should be 0
+- #size-cells: should be 0
- clocks: should point to the clock node used by ABB module
- ti,settling-time: Settling time in uSecs from SoC documentation for ABB module
to settle down(target time for SR2_WTCNT_VALUE).
@@ -69,7 +69,7 @@ Example #1: Simplest configuration (no efuse data, hard coded ABB table):
abb_x: regulator-abb-x {
compatible = "ti,abb-v1";
regulator-name = "abb_x";
- #address-cell = <0>;
+ #address-cells = <0>;
#size-cells = <0>;
reg = <0x483072f0 0x8>, <0x48306818 0x4>;
reg-names = "base-address", "int-address";
@@ -89,7 +89,7 @@ Example #2: Efuse bits contain ABB mode setting (no LDO override capability)
abb_y: regulator-abb-y {
compatible = "ti,abb-v2";
regulator-name = "abb_y";
- #address-cell = <0>;
+ #address-cells = <0>;
#size-cells = <0>;
reg = <0x4a307bd0 0x8>, <0x4a306014 0x4>, <0x4A002268 0x8>;
reg-names = "base-address", "int-address", "efuse-address";
@@ -110,7 +110,7 @@ Example #3: Efuse bits contain ABB mode setting and LDO override capability
abb_z: regulator-abb-z {
compatible = "ti,abb-v2";
regulator-name = "abb_z";
- #address-cell = <0>;
+ #address-cells = <0>;
#size-cells = <0>;
reg = <0x4ae07ce4 0x8>, <0x4ae06010 0x4>,
<0x4a002194 0x8>, <0x4ae0C314 0x4>;
diff --git a/Documentation/devicetree/bindings/regulator/twl-regulator.txt b/Documentation/devicetree/bindings/regulator/twl-regulator.txt
index 75b0c1669504..74a91c4f8530 100644
--- a/Documentation/devicetree/bindings/regulator/twl-regulator.txt
+++ b/Documentation/devicetree/bindings/regulator/twl-regulator.txt
@@ -57,6 +57,12 @@ For twl4030 regulators/LDOs
Optional properties:
- Any optional property defined in bindings/regulator/regulator.txt
+For twl4030 regulators/LDOs:
+ - regulator-initial-mode:
+ - 0x08 - Sleep mode, the nominal output voltage is maintained with low power
+ consumption and low load-current capability.
+ - 0x0e - Active mode, the regulator can deliver its nominal output voltage
+ with full-load current capability.
Example:
diff --git a/Documentation/devicetree/bindings/reset/oxnas,reset.txt b/Documentation/devicetree/bindings/reset/oxnas,reset.txt
new file mode 100644
index 000000000000..6f06db930030
--- /dev/null
+++ b/Documentation/devicetree/bindings/reset/oxnas,reset.txt
@@ -0,0 +1,58 @@
+Oxford Semiconductor OXNAS SoC Family RESET Controller
+================================================
+
+Please also refer to reset.txt in this directory for common reset
+controller binding usage.
+
+Required properties:
+- compatible: Should be "oxsemi,ox810se-reset"
+- #reset-cells: 1, see below
+
+The parent node should have the following properties:
+- compatible: Should be "oxsemi,ox810se-sys-ctrl", "syscon", "simple-mfd"
+
+For OX810SE, the indices are :
+ - 0 : ARM
+ - 1 : COPRO
+ - 2 : Reserved
+ - 3 : Reserved
+ - 4 : USBHS
+ - 5 : USBHSPHY
+ - 6 : MAC
+ - 7 : PCI
+ - 8 : DMA
+ - 9 : DPE
+ - 10 : DDR
+ - 11 : SATA
+ - 12 : SATA_LINK
+ - 13 : SATA_PHY
+ - 14 : Reserved
+ - 15 : NAND
+ - 16 : GPIO
+ - 17 : UART1
+ - 18 : UART2
+ - 19 : MISC
+ - 20 : I2S
+ - 21 : AHB_MON
+ - 22 : UART3
+ - 23 : UART4
+ - 24 : SGDMA
+ - 25 : Reserved
+ - 26 : Reserved
+ - 27 : Reserved
+ - 28 : Reserved
+ - 29 : Reserved
+ - 30 : Reserved
+ - 31 : BUS
+
+example:
+
+sys: sys-ctrl@000000 {
+ compatible = "oxsemi,ox810se-sys-ctrl", "syscon", "simple-mfd";
+ reg = <0x000000 0x100000>;
+
+ reset: reset-controller {
+ compatible = "oxsemi,ox810se-reset";
+ #reset-cells = <1>;
+ };
+};
diff --git a/Documentation/devicetree/bindings/rng/hisi-rng.txt b/Documentation/devicetree/bindings/rng/hisi-rng.txt
new file mode 100644
index 000000000000..d04d55a6c2f5
--- /dev/null
+++ b/Documentation/devicetree/bindings/rng/hisi-rng.txt
@@ -0,0 +1,12 @@
+Hisilicon Random Number Generator
+
+Required properties:
+- compatible : Should be "hisilicon,hip04-rng" or "hisilicon,hip05-rng"
+- reg : Offset and length of the register set of this block
+
+Example:
+
+rng@d1010000 {
+ compatible = "hisilicon,hip05-rng";
+ reg = <0xd1010000 0x100>;
+};
diff --git a/Documentation/devicetree/bindings/rtc/maxim-ds1302.txt b/Documentation/devicetree/bindings/rtc/maxim-ds1302.txt
new file mode 100644
index 000000000000..ba470c56cdec
--- /dev/null
+++ b/Documentation/devicetree/bindings/rtc/maxim-ds1302.txt
@@ -0,0 +1,46 @@
+* Maxim/Dallas Semiconductor DS-1302 RTC
+
+Simple device which could be used to store date/time between reboots.
+
+The device uses the standard MicroWire half-duplex transfer timing.
+Master output is set on low clock and sensed by the RTC on the rising
+edge. Master input is set by the RTC on the trailing edge and is sensed
+by the master on low clock.
+
+Required properties:
+
+- compatible : Should be "maxim,ds1302"
+
+Required SPI properties:
+
+- reg : Should be address of the device chip select within
+ the controller.
+
+- spi-max-frequency : The DS-1302 supports a maximum clock of 500 kHz
+ when powered at 2.2V, and 2 MHz when powered at 5V.
+
+- spi-3wire : The device has a shared signal IN/OUT line.
+
+- spi-lsb-first : DS-1302 requires least significant bit first
+ transfers.
+
+- spi-cs-high: The DS-1302 has an active-high chip select line. This is
+ required unless inverted in hardware.
+
+Example:
+
+spi@901c {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ compatible = "icpdas,lp8841-spi-rtc";
+ reg = <0x901c 0x1>;
+
+ rtc@0 {
+ compatible = "maxim,ds1302";
+ reg = <0>;
+ spi-max-frequency = <500000>;
+ spi-3wire;
+ spi-lsb-first;
+ spi-cs-high;
+ };
+};
diff --git a/Documentation/devicetree/bindings/rtc/rtc-palmas.txt b/Documentation/devicetree/bindings/rtc/rtc-palmas.txt
index adbccc0a51e1..eb1c7fdeb413 100644
--- a/Documentation/devicetree/bindings/rtc/rtc-palmas.txt
+++ b/Documentation/devicetree/bindings/rtc/rtc-palmas.txt
@@ -15,9 +15,9 @@ Optional properties:
battery is chargeable or not. If charging battery then driver can
enable the charging.
- ti,backup-battery-charge-high-current: Enable high current charging in
- backup battery. Device supports the < 100mA and > 100mA charging.
- The high current will be > 100mA. Absence of this property will
- charge battery to lower current i.e. < 100mA.
+ backup battery. Device supports the < 100uA and > 100uA charging.
+ The high current will be > 100uA. Absence of this property will
+ charge battery to lower current i.e. < 100uA.
Example:
palmas: tps65913@58 {
diff --git a/Documentation/devicetree/bindings/rtc/sa1100-rtc.txt b/Documentation/devicetree/bindings/rtc/sa1100-rtc.txt
index 0cda19ad4859..968ac820254b 100644
--- a/Documentation/devicetree/bindings/rtc/sa1100-rtc.txt
+++ b/Documentation/devicetree/bindings/rtc/sa1100-rtc.txt
@@ -13,5 +13,5 @@ Example:
compatible = "mrvl,mmp-rtc";
reg = <0xd4010000 0x1000>;
interrupts = <5>, <6>;
- interrupt-name = "rtc 1Hz", "rtc alarm";
+ interrupt-names = "rtc 1Hz", "rtc alarm";
};
diff --git a/Documentation/devicetree/bindings/serial/arm,mps2-uart.txt b/Documentation/devicetree/bindings/serial/arm,mps2-uart.txt
new file mode 100644
index 000000000000..128cc6aed001
--- /dev/null
+++ b/Documentation/devicetree/bindings/serial/arm,mps2-uart.txt
@@ -0,0 +1,19 @@
+ARM MPS2 UART
+
+Required properties:
+- compatible : Should be "arm,mps2-uart"
+- reg : Address and length of the register set
+- interrupts : Reference to the UART RX, TX and overrun interrupts
+
+Required clocking property:
+- clocks : The input clock of the UART
+
+
+Examples:
+
+uart0: serial@40004000 {
+ compatible = "arm,mps2-uart";
+ reg = <0x40004000 0x1000>;
+ interrupts = <0 1 12>;
+ clocks = <&sysclk>;
+};
diff --git a/Documentation/devicetree/bindings/serial/fsl-imx-uart.txt b/Documentation/devicetree/bindings/serial/fsl-imx-uart.txt
index ed94c217c98d..1e82802d8e32 100644
--- a/Documentation/devicetree/bindings/serial/fsl-imx-uart.txt
+++ b/Documentation/devicetree/bindings/serial/fsl-imx-uart.txt
@@ -6,7 +6,7 @@ Required properties:
- interrupts : Should contain uart interrupt
Optional properties:
-- fsl,uart-has-rtscts : Indicate the uart has rts and cts
+- uart-has-rtscts : Indicate the uart has rts and cts
- fsl,irda-mode : Indicate the uart supports irda mode
- fsl,dte-mode : Indicate the uart works in DTE mode. The uart works
in DCE mode by default.
@@ -24,6 +24,6 @@ uart1: serial@73fbc000 {
compatible = "fsl,imx51-uart", "fsl,imx21-uart";
reg = <0x73fbc000 0x4000>;
interrupts = <31>;
- fsl,uart-has-rtscts;
+ uart-has-rtscts;
fsl,dte-mode;
};
diff --git a/Documentation/devicetree/bindings/serial/fsl-mxs-auart.txt b/Documentation/devicetree/bindings/serial/fsl-mxs-auart.txt
index 7c408c87e613..5c96d41899f1 100644
--- a/Documentation/devicetree/bindings/serial/fsl-mxs-auart.txt
+++ b/Documentation/devicetree/bindings/serial/fsl-mxs-auart.txt
@@ -1,8 +1,10 @@
* Freescale MXS Application UART (AUART)
-Required properties:
-- compatible : Should be "fsl,<soc>-auart". The supported SoCs include
- imx23 and imx28.
+Required properties for all SoCs:
+- compatible : Should be one of the following variants:
+ "fsl,imx23-auart" - Freescale i.MX23
+ "fsl,imx28-auart" - Freescale i.MX28
+ "alphascale,asm9260-auart" - Alphascale ASM9260
- reg : Address and length of the register set for the device
- interrupts : Should contain the auart interrupt numbers
- dmas: DMA specifier, consisting of a phandle to DMA controller node
@@ -10,8 +12,14 @@ Required properties:
Refer to dma.txt and fsl-mxs-dma.txt for details.
- dma-names: "rx" for RX channel, "tx" for TX channel.
+Required properties for "alphascale,asm9260-auart":
+- clocks : the clocks feeding the AUART. See clock-bindings.txt
+- clock-names : should be set to
+ "mod" - source for tick counter.
+ "ahb" - ahb gate.
+
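+An illustrative sketch for the Alphascale variant, focusing on the clock
+properties above (the register address, interrupt number and clock specifiers
+are placeholders, and the DMA properties shown elsewhere in this binding are
+omitted here):
+
+auart0: serial@80000000 {
+ compatible = "alphascale,asm9260-auart";
+ reg = <0x80000000 0x4000>;
+ interrupts = <14>;
+ clocks = <&acc CLKID_SYS_UART0>, <&acc CLKID_AHB_UART0>;
+ clock-names = "mod", "ahb";
+};
+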
Optional properties:
-- fsl,uart-has-rtscts : Indicate the UART has RTS and CTS lines
+- uart-has-rtscts : Indicate the UART has RTS and CTS lines
for hardware flow control,
it also means you enable the DMA support for this UART.
- {rts,cts,dtr,dsr,rng,dcd}-gpios: specify a GPIO for RTS/CTS/DTR/DSR/RI/DCD
diff --git a/Documentation/devicetree/bindings/serial/microchip,pic32-uart.txt b/Documentation/devicetree/bindings/serial/microchip,pic32-uart.txt
new file mode 100644
index 000000000000..7a34345d0ca3
--- /dev/null
+++ b/Documentation/devicetree/bindings/serial/microchip,pic32-uart.txt
@@ -0,0 +1,29 @@
+* Microchip Universal Asynchronous Receiver Transmitter (UART)
+
+Required properties:
+- compatible: Should be "microchip,pic32mzda-uart"
+- reg: Should contain registers location and length
+- interrupts: Should contain interrupt
+- clocks: Phandle to the clock.
+ See: Documentation/devicetree/bindings/clock/clock-bindings.txt
+- pinctrl-names: A pinctrl state named "default" must be defined.
+- pinctrl-0: Phandle referencing pin configuration of the UART peripheral.
+ See: Documentation/devicetree/bindings/pinctrl/pinctrl-binding.txt
+
+Optional properties:
+- cts-gpios: CTS pin for UART
+
+Example:
+ uart1: serial@1f822000 {
+ compatible = "microchip,pic32mzda-uart";
+ reg = <0x1f822000 0x50>;
+ interrupts = <112 IRQ_TYPE_LEVEL_HIGH>,
+ <113 IRQ_TYPE_LEVEL_HIGH>,
+ <114 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&rootclk PB2CLK>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart1
+ &pinctrl_uart1_cts
+ &pinctrl_uart1_rts>;
+ cts-gpios = <&gpio1 15 0>;
+ };
diff --git a/Documentation/devicetree/bindings/tty/serial/mvebu-uart.txt b/Documentation/devicetree/bindings/serial/mvebu-uart.txt
index 6087defd9f93..6087defd9f93 100644
--- a/Documentation/devicetree/bindings/tty/serial/mvebu-uart.txt
+++ b/Documentation/devicetree/bindings/serial/mvebu-uart.txt
diff --git a/Documentation/devicetree/bindings/serial/serial.txt b/Documentation/devicetree/bindings/serial/serial.txt
new file mode 100644
index 000000000000..fd970f76a7b8
--- /dev/null
+++ b/Documentation/devicetree/bindings/serial/serial.txt
@@ -0,0 +1,57 @@
+Generic Serial DT Bindings
+
+This document lists a set of generic properties for describing UARTs in a
+device tree. Whether these properties apply to a particular device depends on
+the DT bindings for the actual device.
+
+Optional properties:
+ - cts-gpios: Must contain a GPIO specifier, referring to the GPIO pin to be
+ used as the UART's CTS line.
+ - dcd-gpios: Must contain a GPIO specifier, referring to the GPIO pin to be
+ used as the UART's DCD line.
+ - dsr-gpios: Must contain a GPIO specifier, referring to the GPIO pin to be
+ used as the UART's DSR line.
+ - dtr-gpios: Must contain a GPIO specifier, referring to the GPIO pin to be
+ used as the UART's DTR line.
+ - rng-gpios: Must contain a GPIO specifier, referring to the GPIO pin to be
+ used as the UART's RNG line.
+ - rts-gpios: Must contain a GPIO specifier, referring to the GPIO pin to be
+ used as the UART's RTS line.
+
+ - uart-has-rtscts: The presence of this property indicates that the
+ UART has dedicated lines for RTS/CTS hardware flow control, and that
+ they are available for use (wired and enabled by pinmux configuration).
+ This depends on both the UART hardware and the board wiring.
+ Note that this property is mutually-exclusive with "cts-gpios" and
+ "rts-gpios" above.
+
+
+Examples:
+
+ uart1: serial@48022000 {
+ compatible = "ti,am3352-uart", "ti,omap3-uart";
+ ti,hwmods = "uart2";
+ clock-frequency = <48000000>;
+ reg = <0x48022000 0x2000>;
+ interrupts = <73>;
+ dmas = <&edma 28 0>, <&edma 29 0>;
+ dma-names = "tx", "rx";
+ dtr-gpios = <&gpio2 22 GPIO_ACTIVE_LOW>;
+ dsr-gpios = <&gpio2 23 GPIO_ACTIVE_LOW>;
+ dcd-gpios = <&gpio2 24 GPIO_ACTIVE_LOW>;
+ rng-gpios = <&gpio2 25 GPIO_ACTIVE_LOW>;
+ cts-gpios = <&gpio0 12 GPIO_ACTIVE_LOW>;
+ rts-gpios = <&gpio0 13 GPIO_ACTIVE_LOW>;
+ status = "okay";
+ };
+
+ scifa4: serial@e6c80000 {
+ compatible = "renesas,scifa-sh73a0", "renesas,scifa";
+ reg = <0xe6c80000 0x100>;
+ interrupts = <GIC_SPI 78 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&mstp2_clks SH73A0_CLK_SCIFA4>;
+ clock-names = "fck";
+ power-domains = <&pd_a3sp>;
+ uart-has-rtscts;
+ status = "okay";
+ };
diff --git a/Documentation/devicetree/bindings/serial/sirf-uart.txt b/Documentation/devicetree/bindings/serial/sirf-uart.txt
index 67e2a0aeb042..1e48bbbeecc6 100644
--- a/Documentation/devicetree/bindings/serial/sirf-uart.txt
+++ b/Documentation/devicetree/bindings/serial/sirf-uart.txt
@@ -9,9 +9,9 @@ Required properties:
- clocks : Should contain uart clock number
Optional properties:
-- sirf,uart-has-rtscts: we have hardware flow controller pins in hardware
-- rts-gpios: RTS pin for USP-based UART if sirf,uart-has-rtscts is true
-- cts-gpios: CTS pin for USP-based UART if sirf,uart-has-rtscts is true
+- uart-has-rtscts: we have hardware flow controller pins in hardware
+- rts-gpios: RTS pin for USP-based UART if uart-has-rtscts is true
+- cts-gpios: CTS pin for USP-based UART if uart-has-rtscts is true
Example:
@@ -28,7 +28,7 @@ On the board-specific dts, we can put rts-gpios and cts-gpios like
usp@b0090000 {
compatible = "sirf,prima2-usp-uart";
- sirf,uart-has-rtscts;
+ uart-has-rtscts;
rts-gpios = <&gpio 15 0>;
cts-gpios = <&gpio 46 0>;
};
diff --git a/Documentation/devicetree/bindings/soc/mediatek/auxadc.txt b/Documentation/devicetree/bindings/soc/mediatek/auxadc.txt
new file mode 100644
index 000000000000..bdb782918a72
--- /dev/null
+++ b/Documentation/devicetree/bindings/soc/mediatek/auxadc.txt
@@ -0,0 +1,21 @@
+MediaTek AUXADC
+===============
+
+The Auxiliary Analog/Digital Converter (AUXADC) is an ADC found
+in some MediaTek SoCs which, among other things, measures the temperatures
+in the SoC. It can be used directly with register accesses, but it is also
+used by the thermal controller, which reads the temperatures from the AUXADC
+directly via its own bus interface. See
+Documentation/devicetree/bindings/thermal/mediatek-thermal.txt
+for the Thermal Controller which holds a phandle to the AUXADC.
+
+Required properties:
+- compatible: Must be "mediatek,mt8173-auxadc"
+- reg: Address range of the AUXADC unit
+
+Example:
+
+auxadc: auxadc@11001000 {
+ compatible = "mediatek,mt8173-auxadc";
+ reg = <0 0x11001000 0 0x1000>;
+};
diff --git a/Documentation/devicetree/bindings/soc/mediatek/pwrap.txt b/Documentation/devicetree/bindings/soc/mediatek/pwrap.txt
index ddeb5b6a53c1..107700d00df4 100644
--- a/Documentation/devicetree/bindings/soc/mediatek/pwrap.txt
+++ b/Documentation/devicetree/bindings/soc/mediatek/pwrap.txt
@@ -18,6 +18,7 @@ IP Pairing
Required properties in pwrap device node.
- compatible:
+ "mediatek,mt2701-pwrap" for MT2701/7623 SoCs
"mediatek,mt8135-pwrap" for MT8135 SoCs
"mediatek,mt8173-pwrap" for MT8173 SoCs
- interrupts: IRQ for pwrap in SOC
diff --git a/Documentation/devicetree/bindings/soc/rockchip/grf.txt b/Documentation/devicetree/bindings/soc/rockchip/grf.txt
new file mode 100644
index 000000000000..013e71a2cdc7
--- /dev/null
+++ b/Documentation/devicetree/bindings/soc/rockchip/grf.txt
@@ -0,0 +1,35 @@
+* Rockchip General Register Files (GRF)
+
+The general register file is used for static configuration by software and
+is composed of many registers for system control.
+
+Starting with the RK3368 SoCs, the GRF is divided into two sections,
+- GRF, used for the general non-secure system,
+- PMUGRF, used for the always-on system
+
+Required Properties:
+
+- compatible: GRF should be one of the following
+ - "rockchip,rk3066-grf", "syscon": for rk3066
+ - "rockchip,rk3188-grf", "syscon": for rk3188
+ - "rockchip,rk3228-grf", "syscon": for rk3228
+ - "rockchip,rk3288-grf", "syscon": for rk3288
+ - "rockchip,rk3368-grf", "syscon": for rk3368
+ - "rockchip,rk3399-grf", "syscon": for rk3399
+- compatible: PMUGRF should be one of the following
+ - "rockchip,rk3368-pmugrf", "syscon": for rk3368
+ - "rockchip,rk3399-pmugrf", "syscon": for rk3399
+- reg: physical base address of the controller and length of memory mapped
+ region.
+
+Example: GRF and PMUGRF of RK3399 SoCs
+
+ pmugrf: syscon@ff320000 {
+ compatible = "rockchip,rk3399-pmugrf", "syscon";
+ reg = <0x0 0xff320000 0x0 0x1000>;
+ };
+
+ grf: syscon@ff770000 {
+ compatible = "rockchip,rk3399-grf", "syscon";
+ reg = <0x0 0xff770000 0x0 0x10000>;
+ };
diff --git a/Documentation/devicetree/bindings/soc/rockchip/power_domain.txt b/Documentation/devicetree/bindings/soc/rockchip/power_domain.txt
index 13dc6a3fdb4a..f909ce06afc4 100644
--- a/Documentation/devicetree/bindings/soc/rockchip/power_domain.txt
+++ b/Documentation/devicetree/bindings/soc/rockchip/power_domain.txt
@@ -7,6 +7,7 @@ Required properties for power domain controller:
- compatible: Should be one of the following.
"rockchip,rk3288-power-controller" - for RK3288 SoCs.
"rockchip,rk3368-power-controller" - for RK3368 SoCs.
+ "rockchip,rk3399-power-controller" - for RK3399 SoCs.
- #power-domain-cells: Number of cells in a power-domain specifier.
Should be 1 for multiple PM domains.
- #address-cells: Should be 1.
@@ -16,8 +17,18 @@ Required properties for power domain sub nodes:
- reg: index of the power domain, should use macros in:
"include/dt-bindings/power/rk3288-power.h" - for RK3288 type power domain.
"include/dt-bindings/power/rk3368-power.h" - for RK3368 type power domain.
+ "include/dt-bindings/power/rk3399-power.h" - for RK3399 type power domain.
- clocks (optional): phandles to clocks which need to be enabled while power domain
switches state.
+- pm_qos (optional): phandles to qos blocks which need to be saved and restored
+ while power domain switches state.
+
+Qos Example:
+
+ qos_gpu: qos_gpu@ffaf0000 {
+ compatible = "syscon";
+ reg = <0x0 0xffaf0000 0x0 0x20>;
+ };
Example:
@@ -30,6 +41,7 @@ Example:
pd_gpu {
reg = <RK3288_PD_GPU>;
clocks = <&cru ACLK_GPU>;
+ pm_qos = <&qos_gpu>;
};
};
@@ -45,12 +57,41 @@ Example:
};
};
+Example 2:
+ power: power-controller {
+ compatible = "rockchip,rk3399-power-controller";
+ #power-domain-cells = <1>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ pd_vio {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <RK3399_PD_VIO>;
+
+ pd_vo {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <RK3399_PD_VO>;
+
+ pd_vopb {
+ reg = <RK3399_PD_VOPB>;
+ };
+
+ pd_vopl {
+ reg = <RK3399_PD_VOPL>;
+ };
+ };
+ };
+ };
+
Node of a device using power domains must have a power-domains property,
containing a phandle to the power device node and an index specifying which
power domain to use.
The index should use macros in:
"include/dt-bindings/power/rk3288-power.h" - for rk3288 type power domain.
"include/dt-bindings/power/rk3368-power.h" - for rk3368 type power domain.
+ "include/dt-bindings/power/rk3399-power.h" - for rk3399 type power domain.
Example of the node using power domain:
@@ -65,3 +106,9 @@ Example of the node using power domain:
power-domains = <&power RK3368_PD_GPU_1>;
/* ... */
};
+
+ node {
+ /* ... */
+ power-domains = <&power RK3399_PD_VOPB>;
+ /* ... */
+ };
diff --git a/Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt b/Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt
index d1ce21a4904d..64c66a5644e7 100644
--- a/Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt
+++ b/Documentation/devicetree/bindings/soc/ti/keystone-navigator-qmss.txt
@@ -42,7 +42,7 @@ Required properties:
- queue-pools : child node classifying the queue ranges into pools.
Queue ranges are grouped into 3 type of pools:
- qpend : pool of qpend(interruptible) queues
- - general-purpose : pool of general queues, primarly used
+ - general-purpose : pool of general queues, primarily used
as free descriptor queues or the
transmit DMA queues.
- accumulator : pool of queues on PDSP accumulator channel
@@ -50,7 +50,7 @@ Required properties:
-- qrange : number of queues to use per queue range, specified as
<"base queue #" "# of queues">.
-- interrupts : Optional property to specify the interrupt mapping
- for interruptible queues. The driver additionaly sets
+ for interruptible queues. The driver additionally sets
the interrupt affinity hint based on the cpu mask.
-- qalloc-by-id : Optional property to specify that the queues in this
range can only be allocated by queue id.
@@ -80,7 +80,7 @@ Required properties:
latency : time to delay the interrupt, specified
in microseconds.
-- multi-queue : Optional property to specify that the channel has to
- monitor upto 32 queues starting at the base queue #.
+ monitor up to 32 queues starting at the base queue #.
- descriptor-regions : child node describing the memory regions for keystone
navigator packet DMA descriptors. The memory for
descriptors will be allocated by the driver.
diff --git a/Documentation/devicetree/bindings/sound/davinci-mcbsp.txt b/Documentation/devicetree/bindings/sound/davinci-mcbsp.txt
new file mode 100644
index 000000000000..55b53e1fd72c
--- /dev/null
+++ b/Documentation/devicetree/bindings/sound/davinci-mcbsp.txt
@@ -0,0 +1,51 @@
+Texas Instruments DaVinci McBSP module
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This binding describes the "Multi-channel Buffered Serial Port" (McBSP)
+audio interface found in some TI DaVinci processors like the OMAP-L138 or AM180x.
+
+
+Required properties:
+~~~~~~~~~~~~~~~~~~~~
+- compatible :
+ "ti,da850-mcbsp" : for DA850, AM180x and OPAM-L138 platforms
+
+- reg : physical base address and length of the controller memory mapped
+ region(s).
+- reg-names : Should contain:
+ * "mpu" for the main registers (required).
+ * "dat" for the data FIFO (optional).
+
+- dmas: three element list of DMA controller phandles, DMA request line and
+ TC channel ordered triplets.
+- dma-names: identifier string for each DMA request line in the dmas property.
+ These strings correspond 1:1 with the ordered pairs in dmas. The dma
+ identifiers must be "rx" and "tx".
+
+Optional properties:
+~~~~~~~~~~~~~~~~~~~~
+- interrupts : Interrupt numbers for McBSP
+- interrupt-names : Known interrupt names are "rx" and "tx"
+
+- pinctrl-0: Should specify pin control group used for this controller.
+- pinctrl-names: Should contain only one value - "default", for more details
+ please refer to pinctrl-bindings.txt
+
+Example (AM1808):
+~~~~~~~~~~~~~~~~~
+
+mcbsp0: mcbsp@1d10000 {
+ compatible = "ti,da850-mcbsp";
+ pinctrl-names = "default";
+ pinctrl-0 = <&mcbsp0_pins>;
+
+ reg = <0x00110000 0x1000>,
+ <0x00310000 0x1000>;
+ reg-names = "mpu", "dat";
+ interrupts = <97 98>;
+ interrupt-names = "rx", "tx";
+ dmas = <&edma0 3 1
+ &edma0 2 1>;
+ dma-names = "tx", "rx";
+ status = "okay";
+};
diff --git a/Documentation/devicetree/bindings/sound/fsl-sai.txt b/Documentation/devicetree/bindings/sound/fsl-sai.txt
index 044e5d76e2dd..740b467adf7d 100644
--- a/Documentation/devicetree/bindings/sound/fsl-sai.txt
+++ b/Documentation/devicetree/bindings/sound/fsl-sai.txt
@@ -7,8 +7,8 @@ codec/DSP interfaces.
Required properties:
- - compatible : Compatible list, contains "fsl,vf610-sai" or
- "fsl,imx6sx-sai".
+ - compatible : Compatible list, contains "fsl,vf610-sai",
+ "fsl,imx6sx-sai" or "fsl,imx6ul-sai"
- reg : Offset and length of the register set for the device.
@@ -48,6 +48,11 @@ Required properties:
receive data by following their own bit clocks and
frame sync clocks separately.
+Optional properties (for mx6ul):
+
+ - fsl,sai-mclk-direction-output: This is a boolean property. If present,
+ indicates that SAI will output the SAI MCLK clock.
+
Note:
- If both fsl,sai-asynchronous and fsl,sai-synchronous-rx are absent, the
default synchronous mode (sync Rx with Tx) will be used, which means both
diff --git a/Documentation/devicetree/bindings/sound/max98371.txt b/Documentation/devicetree/bindings/sound/max98371.txt
new file mode 100644
index 000000000000..6c285235e64b
--- /dev/null
+++ b/Documentation/devicetree/bindings/sound/max98371.txt
@@ -0,0 +1,17 @@
+max98371 codec
+
+This device supports I2C mode only.
+
+Required properties:
+
+- compatible : "maxim,max98371"
+- reg : The I2C address of the device
+
+Example:
+
+&i2c {
+ max98371: max98371@0x31 {
+ compatible = "maxim,max98371";
+ reg = <0x31>;
+ };
+};
diff --git a/Documentation/devicetree/bindings/sound/mt8173-rt5650-rt5676.txt b/Documentation/devicetree/bindings/sound/mt8173-rt5650-rt5676.txt
index f205ce9e31dd..ac28cdb4910e 100644
--- a/Documentation/devicetree/bindings/sound/mt8173-rt5650-rt5676.txt
+++ b/Documentation/devicetree/bindings/sound/mt8173-rt5650-rt5676.txt
@@ -1,15 +1,16 @@
-MT8173 with RT5650 RT5676 CODECS
+MT8173 with RT5650 RT5676 CODECS and HDMI via I2S
Required properties:
- compatible : "mediatek,mt8173-rt5650-rt5676"
- mediatek,audio-codec: the phandles of rt5650 and rt5676 codecs
+ and of the hdmi encoder node
- mediatek,platform: the phandle of MT8173 ASoC platform
Example:
sound {
compatible = "mediatek,mt8173-rt5650-rt5676";
- mediatek,audio-codec = <&rt5650 &rt5676>;
+ mediatek,audio-codec = <&rt5650 &rt5676 &hdmi0>;
mediatek,platform = <&afe>;
};
diff --git a/Documentation/devicetree/bindings/sound/mt8173-rt5650.txt b/Documentation/devicetree/bindings/sound/mt8173-rt5650.txt
index fe5a5ef1714d..5bfa6b60530b 100644
--- a/Documentation/devicetree/bindings/sound/mt8173-rt5650.txt
+++ b/Documentation/devicetree/bindings/sound/mt8173-rt5650.txt
@@ -5,11 +5,21 @@ Required properties:
- mediatek,audio-codec: the phandles of rt5650 codecs
- mediatek,platform: the phandle of MT8173 ASoC platform
+Optional subnodes:
+- codec-capture : the subnode of rt5650 codec capture
+Required codec-capture subnode properties:
+- sound-dai: audio codec dai name on capture path
+ <&rt5650 0> : Default setting. Connect rt5650 I2S1 for capture. (dai_name = rt5645-aif1)
+ <&rt5650 1> : Connect rt5650 I2S2 for capture. (dai_name = rt5645-aif2)
+
Example:
sound {
compatible = "mediatek,mt8173-rt5650";
mediatek,audio-codec = <&rt5650>;
mediatek,platform = <&afe>;
+ codec-capture {
+ sound-dai = <&rt5650 1>;
+ };
};
diff --git a/Documentation/devicetree/bindings/sound/nvidia,tegra30-hda.txt b/Documentation/devicetree/bindings/sound/nvidia,tegra30-hda.txt
index 275c6ea356f6..44d27456e8a4 100644
--- a/Documentation/devicetree/bindings/sound/nvidia,tegra30-hda.txt
+++ b/Documentation/devicetree/bindings/sound/nvidia,tegra30-hda.txt
@@ -15,7 +15,7 @@ Required properties:
Example:
-hda@0,70030000 {
+hda@70030000 {
compatible = "nvidia,tegra124-hda", "nvidia,tegra30-hda";
reg = <0x0 0x70030000 0x0 0x10000>;
interrupts = <GIC_SPI 81 IRQ_TYPE_LEVEL_HIGH>;
diff --git a/Documentation/devicetree/bindings/sound/pcm5102a.txt b/Documentation/devicetree/bindings/sound/pcm5102a.txt
new file mode 100644
index 000000000000..c63ab0b6ee19
--- /dev/null
+++ b/Documentation/devicetree/bindings/sound/pcm5102a.txt
@@ -0,0 +1,13 @@
+PCM5102a audio CODECs
+
+These devices do not use I2C or SPI.
+
+Required properties:
+
+ - compatible : set as "ti,pcm5102a"
+
+Examples:
+
+ pcm5102a: pcm5102a {
+ compatible = "ti,pcm5102a";
+ };
diff --git a/Documentation/devicetree/bindings/sound/st,sti-asoc-card.txt b/Documentation/devicetree/bindings/sound/st,sti-asoc-card.txt
index 028fa1c82f50..4d9a83d9a017 100644
--- a/Documentation/devicetree/bindings/sound/st,sti-asoc-card.txt
+++ b/Documentation/devicetree/bindings/sound/st,sti-asoc-card.txt
@@ -37,17 +37,18 @@ Required properties:
- dai-name: DAI name that describes the IP.
+ - IP mode: IP working mode depending on associated codec.
+ "HDMI" connected to HDMI codec and support IEC HDMI formats (player only).
+ "SPDIF" connected to SPDIF codec and support SPDIF formats (player only).
+ "PCM" PCM standard mode for I2S or TDM bus.
+ "TDM" TDM mode for TDM bus.
+
Required properties ("st,sti-uni-player" compatibility only):
- clocks: CPU_DAI IP clock source, listed in the same order than the
CPU_DAI properties.
- uniperiph-id: internal SOC IP instance ID.
- - IP mode: IP working mode depending on associated codec.
- "HDMI" connected to HDMI codec IP and IEC HDMI formats.
- "SPDIF"connected to SPDIF codec and support SPDIF formats.
- "PCM" PCM standard mode for I2S or TDM bus.
-
Optional properties:
- pinctrl-0: defined for CPU_DAI@1 and CPU_DAI@4 to describe I2S PIOs for
external codecs connection.
@@ -56,6 +57,22 @@ Optional properties:
Example:
+ sti_uni_player1: sti-uni-player@1 {
+ compatible = "st,sti-uni-player";
+ status = "okay";
+ #sound-dai-cells = <0>;
+ st,syscfg = <&syscfg_core>;
+ clocks = <&clk_s_d0_flexgen CLK_PCM_1>;
+ reg = <0x8D81000 0x158>;
+ interrupts = <GIC_SPI 85 IRQ_TYPE_NONE>;
+ dmas = <&fdma0 3 0 1>;
+ st,dai-name = "Uni Player #1 (I2S)";
+ dma-names = "tx";
+ st,uniperiph-id = <1>;
+ st,version = <5>;
+ st,mode = "TDM";
+ };
+
sti_uni_player2: sti-uni-player@2 {
compatible = "st,sti-uni-player";
status = "okay";
@@ -65,7 +82,7 @@ Example:
reg = <0x8D82000 0x158>;
interrupts = <GIC_SPI 86 IRQ_TYPE_NONE>;
dmas = <&fdma0 4 0 1>;
- dai-name = "Uni Player #1 (DAC)";
+ dai-name = "Uni Player #2 (DAC)";
dma-names = "tx";
uniperiph-id = <2>;
version = <5>;
@@ -82,7 +99,7 @@ Example:
interrupts = <GIC_SPI 89 IRQ_TYPE_NONE>;
dmas = <&fdma0 7 0 1>;
dma-names = "tx";
- dai-name = "Uni Player #1 (PIO)";
+ dai-name = "Uni Player #3 (SPDIF)";
uniperiph-id = <3>;
version = <5>;
mode = "SPDIF";
@@ -99,6 +116,7 @@ Example:
dma-names = "rx";
dai-name = "Uni Reader #1 (HDMI RX)";
version = <3>;
+ st,mode = "PCM";
};
2) sti-sas-codec: internal audio codec IPs driver
@@ -152,4 +170,20 @@ Example of audio card declaration:
sound-dai = <&sti_sasg_codec 0>;
};
};
+ simple-audio-card,dai-link@2 {
+ /* TDM playback */
+ format = "left_j";
+ frame-inversion = <1>;
+ cpu {
+ sound-dai = <&sti_uni_player1>;
+ dai-tdm-slot-num = <16>;
+ dai-tdm-slot-width = <16>;
+ dai-tdm-slot-tx-mask =
+ <1 1 1 1 0 0 0 0 0 0 1 1 0 0 1 1>;
+ };
+
+ codec {
+ sound-dai = <&sti_sasg_codec 3>;
+ };
+ };
};
diff --git a/Documentation/devicetree/bindings/sound/tas571x.txt b/Documentation/devicetree/bindings/sound/tas571x.txt
index 0ac31d8d5ac4..b4959f10b74b 100644
--- a/Documentation/devicetree/bindings/sound/tas571x.txt
+++ b/Documentation/devicetree/bindings/sound/tas571x.txt
@@ -1,4 +1,4 @@
-Texas Instruments TAS5711/TAS5717/TAS5719 stereo power amplifiers
+Texas Instruments TAS5711/TAS5717/TAS5719/TAS5721 stereo power amplifiers
The codec is controlled through an I2C interface. It also has two other
signals that can be wired up to GPIOs: reset (strongly recommended), and
@@ -6,7 +6,11 @@ powerdown (optional).
Required properties:
-- compatible: "ti,tas5711", "ti,tas5717", or "ti,tas5719"
+- compatible: should be one of the following:
+ - "ti,tas5711",
+ - "ti,tas5717",
+ - "ti,tas5719",
+ - "ti,tas5721"
- reg: The I2C address of the device
- #sound-dai-cells: must be equal to 0
@@ -25,6 +29,8 @@ Optional properties:
- PVDD_B-supply: regulator phandle for the PVDD_B supply (5711)
- PVDD_C-supply: regulator phandle for the PVDD_C supply (5711)
- PVDD_D-supply: regulator phandle for the PVDD_D supply (5711)
+- DRVDD-supply: regulator phandle for the DRVDD supply (5721)
+- PVDD-supply: regulator phandle for the PVDD supply (5721)
Example:
diff --git a/Documentation/devicetree/bindings/sound/tas5720.txt b/Documentation/devicetree/bindings/sound/tas5720.txt
new file mode 100644
index 000000000000..806ea7381483
--- /dev/null
+++ b/Documentation/devicetree/bindings/sound/tas5720.txt
@@ -0,0 +1,25 @@
+Texas Instruments TAS5720 Mono Audio amplifier
+
+The TAS5720 serial control bus communicates through the I2C protocol only. The
+serial bus is also used for periodic codec fault checking/reporting during
+audio playback. For more product information please see the links below:
+
+http://www.ti.com/product/TAS5720L
+http://www.ti.com/product/TAS5720M
+
+Required properties:
+
+- compatible : "ti,tas5720"
+- reg : I2C slave address
+- dvdd-supply : phandle to a 3.3-V supply for the digital circuitry
+- pvdd-supply : phandle to a supply used for the Class-D amp and the analog circuitry
+
+Example:
+
+tas5720: tas5720@6c {
+ status = "okay";
+ compatible = "ti,tas5720";
+ reg = <0x6c>;
+ dvdd-supply = <&vdd_3v3_reg>;
+ pvdd-supply = <&amp_supply_reg>;
+};
diff --git a/Documentation/devicetree/bindings/spi/microchip,spi-pic32.txt b/Documentation/devicetree/bindings/spi/microchip,spi-pic32.txt
new file mode 100644
index 000000000000..79de379f4dc0
--- /dev/null
+++ b/Documentation/devicetree/bindings/spi/microchip,spi-pic32.txt
@@ -0,0 +1,34 @@
+Microchip PIC32 SPI Master controller
+
+Required properties:
+- compatible: Should be "microchip,pic32mzda-spi".
+- reg: Address and length of register space for the device.
+- interrupts: Should contain all three spi interrupts in sequence
+ of <fault-irq>, <receive-irq>, <transmit-irq>.
+- interrupt-names: Should be "fault", "rx", "tx" in order.
+- clocks: Phandle of the clock generating SPI clock on the bus.
+- clock-names: Should be "mck0".
+- cs-gpios: Specifies the gpio pins to be used for chipselects.
+ See: Documentation/devicetree/bindings/spi/spi-bus.txt
+
+Optional properties:
+- dmas: Two or more DMA channel specifiers following the convention outlined
+ in Documentation/devicetree/bindings/dma/dma.txt
+- dma-names: Names for the DMA channels. There must be at least one channel
+ named "spi-tx" for transmit and one named "spi-rx" for receive.
+
+Example:
+
+spi1: spi@1f821000 {
+ compatible = "microchip,pic32mzda-spi";
+ reg = <0x1f821000 0x200>;
+ interrupts = <109 IRQ_TYPE_LEVEL_HIGH>,
+ <110 IRQ_TYPE_LEVEL_HIGH>,
+ <111 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "fault", "rx", "tx";
+ clocks = <&PBCLK2>;
+ clock-names = "mck0";
+ cs-gpios = <&gpio3 4 GPIO_ACTIVE_LOW>;
+ dmas = <&dma 134>, <&dma 135>;
+ dma-names = "spi-rx", "spi-tx";
+};
diff --git a/Documentation/devicetree/bindings/spi/spi-fsl-dspi.txt b/Documentation/devicetree/bindings/spi/spi-fsl-dspi.txt
index fa77f874e321..ff5893d275a2 100644
--- a/Documentation/devicetree/bindings/spi/spi-fsl-dspi.txt
+++ b/Documentation/devicetree/bindings/spi/spi-fsl-dspi.txt
@@ -1,7 +1,10 @@
ARM Freescale DSPI controller
Required properties:
-- compatible : "fsl,vf610-dspi", "fsl,ls1021a-v1.0-dspi", "fsl,ls2085a-dspi"
+- compatible : "fsl,vf610-dspi", "fsl,ls1021a-v1.0-dspi",
+ "fsl,ls2085a-dspi"
+ or
+ "fsl,ls2080a-dspi" followed by "fsl,ls2085a-dspi"
- reg : Offset and length of the register set for the device
- interrupts : Should contain SPI controller interrupt
- clocks: from common clock binding: handle to dspi clock.
@@ -13,8 +16,7 @@ Required properties:
Optional property:
- big-endian: If present the dspi device's registers are implemented
- in big endian mode, otherwise in native mode(same with CPU), for more
- detail please see: Documentation/devicetree/bindings/regmap/regmap.txt.
+ in big endian mode.
Optional SPI slave node properties:
- fsl,spi-cs-sck-delay: a delay in nanoseconds between activating chip
diff --git a/Documentation/devicetree/bindings/spi/sqi-pic32.txt b/Documentation/devicetree/bindings/spi/sqi-pic32.txt
new file mode 100644
index 000000000000..c82d021bce50
--- /dev/null
+++ b/Documentation/devicetree/bindings/spi/sqi-pic32.txt
@@ -0,0 +1,18 @@
+Microchip PIC32 Quad SPI controller
+-----------------------------------
+Required properties:
+- compatible: Should be "microchip,pic32mzda-sqi".
+- reg: Address and length of SQI controller register space.
+- interrupts: Should contain SQI interrupt.
+- clocks: Should contain phandles of two clocks in sequence, one that drives
+ the clock on the SPI bus and the other that drives the SQI controller.
+- clock-names: Should be "spi_ck" and "reg_ck" in order.
+
+Example:
+ sqi1: spi@1f8e2000 {
+ compatible = "microchip,pic32mzda-sqi";
+ reg = <0x1f8e2000 0x200>;
+ clocks = <&rootclk REF2CLK>, <&rootclk PB5CLK>;
+ clock-names = "spi_ck", "reg_ck";
+ interrupts = <169 IRQ_TYPE_LEVEL_HIGH>;
+ };
diff --git a/Documentation/devicetree/bindings/spi/ti_qspi.txt b/Documentation/devicetree/bindings/spi/ti_qspi.txt
index cc8304aa64ac..50b14f6b53a3 100644
--- a/Documentation/devicetree/bindings/spi/ti_qspi.txt
+++ b/Documentation/devicetree/bindings/spi/ti_qspi.txt
@@ -19,6 +19,13 @@ Optional properties:
- syscon-chipselects: Handle to system control region contains QSPI
chipselect register and offset of that register.
+NOTE: The TI QSPI controller requires different pinmux and IODelay
+parameters for Mode-0 and Mode-3 operations, which need to be set up by
+the bootloader (U-Boot). The default configuration only supports Mode-0
+operation. Hence, the "spi-cpol" and "spi-cpha" DT properties cannot be
+specified in the slave nodes of the TI QSPI controller without appropriate
+modification to the bootloader.
+
Example:
For am4372:
diff --git a/Documentation/devicetree/bindings/sram/sram.txt b/Documentation/devicetree/bindings/sram/sram.txt
index 227e3a341af1..add48f09015e 100644
--- a/Documentation/devicetree/bindings/sram/sram.txt
+++ b/Documentation/devicetree/bindings/sram/sram.txt
@@ -51,7 +51,7 @@ sram: sram@5c000000 {
compatible = "mmio-sram";
reg = <0x5c000000 0x40000>; /* 256 KiB SRAM at address 0x5c000000 */
- #adress-cells = <1>;
+ #address-cells = <1>;
#size-cells = <1>;
ranges = <0 0x5c000000 0x40000>;
diff --git a/Documentation/devicetree/bindings/thermal/tegra-soctherm.txt b/Documentation/devicetree/bindings/thermal/nvidia,tegra124-soctherm.txt
index 6b68cd150405..edebfa0a985e 100644
--- a/Documentation/devicetree/bindings/thermal/tegra-soctherm.txt
+++ b/Documentation/devicetree/bindings/thermal/nvidia,tegra124-soctherm.txt
@@ -26,10 +26,14 @@ Required properties :
of this property. See <dt-bindings/thermal/tegra124-soctherm.h> for a
list of valid values when referring to thermal sensors.
+Note:
+- the "critical" type trip points will be set to SOC_THERM hardware as the
+shut down temperature. Once the temperature of this thermal zone is higher
+than it, the system will be shutdown or reset by hardware.
Example :
- soctherm@0,700e2000 {
+ soctherm@700e2000 {
compatible = "nvidia,tegra124-soctherm";
reg = <0x0 0x700e2000 0x0 0x1000>;
interrupts = <GIC_SPI 48 IRQ_TYPE_LEVEL_HIGH>;
@@ -51,5 +55,13 @@ Example: referring to thermal sensors :
thermal-sensors =
<&soctherm TEGRA124_SOCTHERM_SENSOR_CPU>;
+
+ trips {
+ cpu_shutdown_trip: shutdown-trip {
+ temperature = <102500>;
+ hysteresis = <1000>;
+ type = "critical";
+ };
+ };
};
};
diff --git a/Documentation/devicetree/bindings/thermal/rcar-thermal.txt b/Documentation/devicetree/bindings/thermal/rcar-thermal.txt
index e5ee3f159893..a8e52c8ccfcc 100644
--- a/Documentation/devicetree/bindings/thermal/rcar-thermal.txt
+++ b/Documentation/devicetree/bindings/thermal/rcar-thermal.txt
@@ -11,7 +11,6 @@ Required properties:
- "renesas,thermal-r8a7791" (R-Car M2-W)
- "renesas,thermal-r8a7792" (R-Car V2H)
- "renesas,thermal-r8a7793" (R-Car M2-N)
- - "renesas,thermal-r8a7794" (R-Car E2)
- reg : Address range of the thermal registers.
The 1st reg will be recognized as common register
if it has "interrupts".
diff --git a/Documentation/devicetree/bindings/thermal/tango-thermal.txt b/Documentation/devicetree/bindings/thermal/tango-thermal.txt
new file mode 100644
index 000000000000..212198d4b937
--- /dev/null
+++ b/Documentation/devicetree/bindings/thermal/tango-thermal.txt
@@ -0,0 +1,17 @@
+* Tango Thermal
+
+The SMP8758 SoC includes 3 instances of this temperature sensor
+(in the CPU, video decoder, and PCIe controller).
+
+Required properties:
+- #thermal-sensor-cells: Should be 0 (see thermal.txt)
+- compatible: "sigma,smp8758-thermal"
+- reg: Address range of the thermal registers
+
+Example:
+
+ cpu_temp: thermal@920100 {
+ #thermal-sensor-cells = <0>;
+ compatible = "sigma,smp8758-thermal";
+ reg = <0x920100 12>;
+ };
diff --git a/Documentation/devicetree/bindings/thermal/thermal-generic-adc.txt b/Documentation/devicetree/bindings/thermal/thermal-generic-adc.txt
new file mode 100644
index 000000000000..d72355502b78
--- /dev/null
+++ b/Documentation/devicetree/bindings/thermal/thermal-generic-adc.txt
@@ -0,0 +1,89 @@
+General Purpose Analog To Digital Converter (ADC) based thermal sensor.
+
+On some platforms, a thermal sensor such as a thermistor is connected to
+one of the ADC channels, and the sensor resistance is read via the voltage
+across the sensor. The voltage read across the sensor is mapped to
+temperature using a voltage-temperature lookup table.
+
+Required properties:
+===================
+- compatible: Must be "generic-adc-thermal".
+- temperature-lookup-table: Two-dimensional array of integers; lookup table
+ to map the relation between ADC value and
+ temperature. When the ADC is read, the value is
+ looked up in the table to get the equivalent
+ temperature.
+ The first value of each row is the temperature
+ in millicelsius and the second value of each
+ row is the ADC readout.
+- #thermal-sensor-cells: Should be 1. See ./thermal.txt for a description
+ of this property.
+
+Example :
+#include <dt-bindings/thermal/thermal.h>
+
+i2c@7000c400 {
+ ads1015: ads1015@4a {
+ reg = <0x4a>;
+ compatible = "ads1015";
+ sampling-frequency = <3300>;
+ #io-channel-cells = <1>;
+ };
+};
+
+tboard_thermistor: thermal-sensor {
+ compatible = "generic-adc-thermal";
+ #thermal-sensor-cells = <0>;
+ io-channels = <&ads1015 1>;
+ io-channel-names = "sensor-channel";
+ temperature-lookup-table = < (-40000) 2578
+ (-39000) 2577
+ (-38000) 2576
+ (-37000) 2575
+ (-36000) 2574
+ (-35000) 2573
+ (-34000) 2572
+ (-33000) 2571
+ (-32000) 2569
+ (-31000) 2568
+ (-30000) 2567
+ ::::::::::
+ 118000 254
+ 119000 247
+ 120000 240
+ 121000 233
+ 122000 226
+ 123000 220
+ 124000 214
+ 125000 208>;
+};
+
+dummy_cool_dev: dummy-cool-dev {
+ compatible = "dummy-cooling-dev";
+ #cooling-cells = <2>; /* min followed by max */
+};
+
+thermal-zones {
+ Tboard {
+ polling-delay = <15000>; /* milliseconds */
+ polling-delay-passive = <0>; /* milliseconds */
+ thermal-sensors = <&tboard_thermistor>;
+
+ trips {
+ therm_est_trip: therm_est_trip {
+ temperature = <40000>;
+ type = "active";
+ hysteresis = <1000>;
+ };
+ };
+
+ cooling-maps {
+ map0 {
+ trip = <&therm_est_trip>;
+ cooling-device = <&dummy_cool_dev THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ contribution = <100>;
+ };
+
+ };
+ };
+};
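The temperature-lookup-table above only holds sample points, so the consuming driver interpolates between rows. The C fragment below is a rough sketch purely for illustration; the helper name and table layout are assumptions rather than part of the binding, and the rows are assumed sorted by rising temperature with strictly falling ADC values, as in the example.

    /*
     * Illustrative sketch only: convert a raw ADC reading to millicelsius
     * by linear interpolation over a { millicelsius, adc } table like the
     * one above. Assumes at least two rows and strictly decreasing ADC
     * values from row to row.
     */
    static int gadc_table_to_mc(const int (*tbl)[2], int nrows, int adc)
    {
            int i;

            for (i = 0; i < nrows - 2; i++)
                    if (adc >= tbl[i + 1][1])
                            break;  /* reading lies between rows i and i+1 */

            return tbl[i][0] + (tbl[i][1] - adc) *
                   (tbl[i + 1][0] - tbl[i][0]) / (tbl[i][1] - tbl[i + 1][1]);
    }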
diff --git a/Documentation/devicetree/bindings/timer/arm,mps2-timer.txt b/Documentation/devicetree/bindings/timer/arm,mps2-timer.txt
new file mode 100644
index 000000000000..48f84d74edde
--- /dev/null
+++ b/Documentation/devicetree/bindings/timer/arm,mps2-timer.txt
@@ -0,0 +1,28 @@
+ARM MPS2 timer
+
+The MPS2 platform has simple general-purpose 32-bit timers.
+
+Required properties:
+- compatible : Should be "arm,mps2-timer"
+- reg : Address and length of the register set
+- interrupts : Reference to the timer interrupt
+
+Required clocking property, must be one of:
+- clocks : The input clock of the timer
+- clock-frequency : The input clock rate of the ARM MPS2 timer, in Hz
+
+Examples:
+
+timer1: mps2-timer@40000000 {
+ compatible = "arm,mps2-timer";
+ reg = <0x40000000 0x1000>;
+ interrupts = <8>;
+ clocks = <&sysclk>;
+};
+
+timer2: mps2-timer@40001000 {
+ compatible = "arm,mps2-timer";
+ reg = <0x40001000 0x1000>;
+ interrupts = <9>;
+ clock-frequency = <25000000>;
+};
diff --git a/Documentation/devicetree/bindings/timer/ezchip,nps400-timer.txt b/Documentation/devicetree/bindings/timer/ezchip,nps400-timer.txt
new file mode 100644
index 000000000000..c8c03d700382
--- /dev/null
+++ b/Documentation/devicetree/bindings/timer/ezchip,nps400-timer.txt
@@ -0,0 +1,15 @@
+NPS Network Processor
+
+Required properties:
+
+- compatible : should be "ezchip,nps400-timer"
+
+Clocks required for compatible = "ezchip,nps400-timer":
+- clocks : Must contain a single entry describing the clock input
+
+Example:
+
+timer {
+ compatible = "ezchip,nps400-timer";
+ clocks = <&sysclk>;
+};
diff --git a/Documentation/devicetree/bindings/timer/snps,arc-timer.txt b/Documentation/devicetree/bindings/timer/snps,arc-timer.txt
new file mode 100644
index 000000000000..4ef024630d61
--- /dev/null
+++ b/Documentation/devicetree/bindings/timer/snps,arc-timer.txt
@@ -0,0 +1,31 @@
+Synopsys ARC Local Timer with Interrupt Capabilities
+- Found on all ARC CPUs (ARC700/ARCHS)
+- Can be optionally programmed to interrupt on Limit
+- Two identical copies, TIMER0 and TIMER1, exist in ARC cores; historically
+ TIMER0 is used as the clockevent provider (true for all ARC cores) and
+ TIMER1 as the clocksource (mandatory for ARC700, optional for ARC HS)
+
+Required properties:
+
+- compatible : should be "snps,arc-timer"
+- interrupts : single Interrupt going into parent intc
+ (16 for ARCHS cores, 3 for ARC700 cores)
+- clocks : phandle to the source clock
+
+Optional properties:
+
+- interrupt-parent : phandle to parent intc
+
+Example:
+
+ timer0 {
+ compatible = "snps,arc-timer";
+ interrupts = <3>;
+ interrupt-parent = <&core_intc>;
+ clocks = <&core_clk>;
+ };
+
+ timer1 {
+ compatible = "snps,arc-timer";
+ clocks = <&core_clk>;
+ };
diff --git a/Documentation/devicetree/bindings/timer/snps,archs-gfrc.txt b/Documentation/devicetree/bindings/timer/snps,archs-gfrc.txt
new file mode 100644
index 000000000000..b6cd1b3922de
--- /dev/null
+++ b/Documentation/devicetree/bindings/timer/snps,archs-gfrc.txt
@@ -0,0 +1,14 @@
+Synopsys ARC Free Running 64-bit Global Timer for ARC HS CPUs
+- clocksource provider for SMP SoC
+
+Required properties:
+
+- compatible : should be "snps,archs-gfrc"
+- clocks : phandle to the source clock
+
+Example:
+
+ gfrc {
+ compatible = "snps,archs-gfrc";
+ clocks = <&core_clk>;
+ };
diff --git a/Documentation/devicetree/bindings/timer/snps,archs-rtc.txt b/Documentation/devicetree/bindings/timer/snps,archs-rtc.txt
new file mode 100644
index 000000000000..47bd7a702f3f
--- /dev/null
+++ b/Documentation/devicetree/bindings/timer/snps,archs-rtc.txt
@@ -0,0 +1,14 @@
+Synopsys ARC Free Running 64-bit Local Timer for ARC HS CPUs
+- clocksource provider for UP SoC
+
+Required properties:
+
+- compatible : should be "snps,archs-rtc"
+- clocks : phandle to the source clock
+
+Example:
+
+ rtc {
+ compatible = "snps,arc-rtc";
+ clocks = <&core_clk>;
+ };
diff --git a/Documentation/devicetree/bindings/usb/dwc3.txt b/Documentation/devicetree/bindings/usb/dwc3.txt
index fb2ad0acedbd..7d7ce089b003 100644
--- a/Documentation/devicetree/bindings/usb/dwc3.txt
+++ b/Documentation/devicetree/bindings/usb/dwc3.txt
@@ -14,7 +14,6 @@ Optional properties:
the second element is expected to be a handle to the USB3/SS PHY
- phys: from the *Generic PHY* bindings
- phy-names: from the *Generic PHY* bindings
- - tx-fifo-resize: determines if the FIFO *has* to be reallocated.
- snps,usb3_lpm_capable: determines if platform is USB3 LPM capable
- snps,disable_scramble_quirk: true when SW should disable data scrambling.
Only really useful for FPGA builds.
@@ -38,6 +37,8 @@ Optional properties:
- snps,dis_u2_susphy_quirk: when set core will disable USB2 suspend phy.
- snps,dis_enblslpm_quirk: when set clears the enblslpm in GUSB2PHYCFG,
disabling the suspend signal to the PHY.
+ - snps,dis_rxdet_inp3_quirk: when set core will disable receiver detection
+ in PHY P3 power state.
- snps,is-utmi-l1-suspend: true when DWC3 asserts output signal
utmi_l1_suspend_n, false when asserts utmi_sleep_n
- snps,hird-threshold: HIRD threshold
@@ -47,6 +48,8 @@ Optional properties:
register for post-silicon frame length adjustment when the
fladj_30mhz_sdbnd signal is invalid or incorrect.
+ - <DEPRECATED> tx-fifo-resize: determines if the FIFO *has* to be reallocated.
+
This is usually a subnode to DWC3 glue to which it is connected.
dwc3@4a030000 {
@@ -54,5 +57,4 @@ dwc3@4a030000 {
reg = <0x4a030000 0xcfff>;
interrupts = <0 92 4>
usb-phy = <&usb2_phy>, <&usb3,phy>;
- tx-fifo-resize;
};
diff --git a/Documentation/devicetree/bindings/usb/nvidia,tegra124-xusb.txt b/Documentation/devicetree/bindings/usb/nvidia,tegra124-xusb.txt
new file mode 100644
index 000000000000..d28295a3e55f
--- /dev/null
+++ b/Documentation/devicetree/bindings/usb/nvidia,tegra124-xusb.txt
@@ -0,0 +1,120 @@
+NVIDIA Tegra xHCI controller
+============================
+
+The Tegra xHCI controller supports both USB2 and USB3 interfaces exposed by
+the Tegra XUSB pad controller.
+
+Required properties:
+--------------------
+- compatible: Must be:
+ - Tegra124: "nvidia,tegra124-xusb"
+ - Tegra132: "nvidia,tegra132-xusb", "nvidia,tegra124-xusb"
+ - Tegra210: "nvidia,tegra210-xusb"
+- reg: Must contain the base and length of the xHCI host registers, XUSB FPCI
+ registers and XUSB IPFS registers.
+- reg-names: Must contain the following entries:
+ - "hcd"
+ - "fpci"
+ - "ipfs"
+- interrupts: Must contain the xHCI host interrupt and the mailbox interrupt.
+- clocks: Must contain an entry for each entry in clock-names.
+ See ../clock/clock-bindings.txt for details.
+- clock-names: Must include the following entries:
+ - xusb_host
+ - xusb_host_src
+ - xusb_falcon_src
+ - xusb_ss
+ - xusb_ss_src
+ - xusb_ss_div2
+ - xusb_hs_src
+ - xusb_fs_src
+ - pll_u_480m
+ - clk_m
+ - pll_e
+- resets: Must contain an entry for each entry in reset-names.
+ See ../reset/reset.txt for details.
+- reset-names: Must include the following entries:
+ - xusb_host
+ - xusb_ss
+ - xusb_src
+ Note that xusb_src is the shared reset for xusb_{ss,hs,fs,falcon,host}_src.
+- nvidia,xusb-padctl: phandle to the XUSB pad controller that is used to
+ configure the USB pads used by the XHCI controller
+
+For Tegra124 and Tegra132:
+- avddio-pex-supply: PCIe/USB3 analog logic power supply. Must supply 1.05 V.
+- dvddio-pex-supply: PCIe/USB3 digital logic power supply. Must supply 1.05 V.
+- avdd-usb-supply: USB controller power supply. Must supply 3.3 V.
+- avdd-pll-utmip-supply: UTMI PLL power supply. Must supply 1.8 V.
+- avdd-pll-erefe-supply: PLLE reference PLL power supply. Must supply 1.05 V.
+- avdd-usb-ss-pll-supply: PCIe/USB3 PLL power supply. Must supply 1.05 V.
+- hvdd-usb-ss-supply: High-voltage PCIe/USB3 power supply. Must supply 3.3 V.
+- hvdd-usb-ss-pll-e-supply: High-voltage PLLE power supply. Must supply 3.3 V.
+
+For Tegra210:
+- dvddio-pex-supply: PCIe/USB3 analog logic power supply. Must supply 1.05 V.
+- hvddio-pex-supply: High-voltage PCIe/USB3 power supply. Must supply 1.8 V.
+- avdd-usb-supply: USB controller power supply. Must supply 3.3 V.
+- avdd-pll-utmip-supply: UTMI PLL power supply. Must supply 1.8 V.
+- avdd-pll-uerefe-supply: PLLE reference PLL power supply. Must supply 1.05 V.
+- dvdd-pex-pll-supply: PCIe/USB3 PLL power supply. Must supply 1.05 V.
+- hvdd-pex-pll-e-supply: High-voltage PLLE power supply. Must supply 1.8 V.
+
+Optional properties:
+--------------------
+- phys: Must contain an entry for each entry in phy-names.
+ See ../phy/phy-bindings.txt for details.
+- phy-names: Should include an entry for each PHY used by the controller. The
+ following PHYs are available:
+ - Tegra124: usb2-0, usb2-1, usb2-2, hsic-0, hsic-1, usb3-0, usb3-1
+ - Tegra132: usb2-0, usb2-1, usb2-2, hsic-0, hsic-1, usb3-0, usb3-1
+ - Tegra210: usb2-0, usb2-1, usb2-2, usb2-3, hsic-0, usb3-0, usb3-1, usb3-2,
+ usb3-3
+
+Example:
+--------
+
+ usb@0,70090000 {
+ compatible = "nvidia,tegra124-xusb";
+ reg = <0x0 0x70090000 0x0 0x8000>,
+ <0x0 0x70098000 0x0 0x1000>,
+ <0x0 0x70099000 0x0 0x1000>;
+ reg-names = "hcd", "fpci", "ipfs";
+
+ interrupts = <GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>;
+
+ clocks = <&tegra_car TEGRA124_CLK_XUSB_HOST>,
+ <&tegra_car TEGRA124_CLK_XUSB_HOST_SRC>,
+ <&tegra_car TEGRA124_CLK_XUSB_FALCON_SRC>,
+ <&tegra_car TEGRA124_CLK_XUSB_SS>,
+ <&tegra_car TEGRA124_CLK_XUSB_SS_DIV2>,
+ <&tegra_car TEGRA124_CLK_XUSB_SS_SRC>,
+ <&tegra_car TEGRA124_CLK_XUSB_HS_SRC>,
+ <&tegra_car TEGRA124_CLK_XUSB_FS_SRC>,
+ <&tegra_car TEGRA124_CLK_PLL_U_480M>,
+ <&tegra_car TEGRA124_CLK_CLK_M>,
+ <&tegra_car TEGRA124_CLK_PLL_E>;
+ clock-names = "xusb_host", "xusb_host_src", "xusb_falcon_src",
+ "xusb_ss", "xusb_ss_div2", "xusb_ss_src",
+ "xusb_hs_src", "xusb_fs_src", "pll_u_480m",
+ "clk_m", "pll_e";
+ resets = <&tegra_car 89>, <&tegra_car 156>, <&tegra_car 143>;
+ reset-names = "xusb_host", "xusb_ss", "xusb_src";
+
+ nvidia,xusb-padctl = <&padctl>;
+
+ phys = <&{/padctl@0,7009f000/pads/usb2/usb2-1}>, /* mini-PCIe USB */
+ <&{/padctl@0,7009f000/pads/usb2/usb2-2}>, /* USB A */
+ <&{/padctl@0,7009f000/pads/pcie/pcie-0}>; /* USB A */
+ phy-names = "utmi-1", "utmi-2", "usb3-0";
+
+ avddio-pex-supply = <&vdd_1v05_run>;
+ dvddio-pex-supply = <&vdd_1v05_run>;
+ avdd-usb-supply = <&vdd_3v3_lp0>;
+ avdd-pll-utmip-supply = <&vddio_1v8>;
+ avdd-pll-erefe-supply = <&avdd_1v05_run>;
+ avdd-usb-ss-pll-supply = <&vdd_1v05_run>;
+ hvdd-usb-ss-supply = <&vdd_3v3_lp0>;
+ hvdd-usb-ss-pll-e-supply = <&vdd_3v3_lp0>;
+ };
diff --git a/Documentation/devicetree/bindings/usb/qcom,dwc3.txt b/Documentation/devicetree/bindings/usb/qcom,dwc3.txt
index ca164e71dd50..39acb084bce9 100644
--- a/Documentation/devicetree/bindings/usb/qcom,dwc3.txt
+++ b/Documentation/devicetree/bindings/usb/qcom,dwc3.txt
@@ -59,7 +59,6 @@ Example device nodes:
interrupts = <0 205 0x4>;
phys = <&hs_phy>, <&ss_phy>;
phy-names = "usb2-phy", "usb3-phy";
- tx-fifo-resize;
dr_mode = "host";
};
};
diff --git a/Documentation/devicetree/bindings/usb/usb-xhci.txt b/Documentation/devicetree/bindings/usb/usb-xhci.txt
index 6a17aa85c4d5..966885c636d0 100644
--- a/Documentation/devicetree/bindings/usb/usb-xhci.txt
+++ b/Documentation/devicetree/bindings/usb/usb-xhci.txt
@@ -4,6 +4,7 @@ Required properties:
- compatible: should be one or more of
- "generic-xhci" for generic XHCI device
+ - "marvell,armada3700-xhci" for Armada 37xx SoCs
- "marvell,armada-375-xhci" for Armada 375 SoCs
- "marvell,armada-380-xhci" for Armada 38x SoCs
- "renesas,xhci-r8a7790" for r8a7790 SoC
diff --git a/Documentation/devicetree/bindings/vendor-prefixes.txt b/Documentation/devicetree/bindings/vendor-prefixes.txt
index 42b6688c4a98..a7440bcd67ff 100644
--- a/Documentation/devicetree/bindings/vendor-prefixes.txt
+++ b/Documentation/devicetree/bindings/vendor-prefixes.txt
@@ -16,6 +16,7 @@ al Annapurna Labs
allwinner Allwinner Technology Co., Ltd.
alphascale AlphaScale Integrated Circuits Systems, Inc.
altr Altera Corp.
+amazon Amazon.com, Inc.
amcc Applied Micro Circuits Corporation (APM, formally AMCC)
amd Advanced Micro Devices (AMD), Inc.
amlogic Amlogic, Inc.
@@ -28,8 +29,10 @@ aptina Aptina Imaging
arasan Arasan Chip Systems
arm ARM Ltd.
armadeus ARMadeus Systems SARL
+arrow Arrow Electronics
artesyn Artesyn Embedded Technologies Inc.
asahi-kasei Asahi Kasei Corp.
+aspeed ASPEED Technology Inc.
atlas Atlas Scientific LLC
atmel Atmel Corporation
auo AU Optronics Corporation
@@ -59,6 +62,7 @@ cnxt Conexant Systems, Inc.
compulab CompuLab Ltd.
cortina Cortina Systems, Inc.
cosmic Cosmic Circuits
+creative Creative Technology Ltd
crystalfontz Crystalfontz America, Inc.
cubietech Cubietech, Ltd.
cypress Cypress Semiconductor Corporation
@@ -71,11 +75,14 @@ digilent Diglent, Inc.
dlg Dialog Semiconductor
dlink D-Link Corporation
dmo Data Modul AG
+dptechnics DPTechnics
+dragino Dragino Technology Co., Limited
ea Embedded Artists AB
ebv EBV Elektronik
edt Emerging Display Technologies
eeti eGalax_eMPIA Technology Inc
elan Elan Microelectronic Corp.
+embest Shenzhen Embest Technology Co., Ltd.
emmicro EM Microelectronic
energymicro Silicon Laboratories (formerly Energy Micro AS)
epcos EPCOS AG
@@ -87,11 +94,13 @@ eukrea Eukréa Electromatique
everest Everest Semiconductor Co. Ltd.
everspin Everspin Technologies, Inc.
excito Excito
+ezchip EZchip Semiconductor
fcs Fairchild Semiconductor
firefly Firefly
focaltech FocalTech Systems Co.,Ltd
fsl Freescale Semiconductor
ge General Electric Company
+geekbuying GeekBuying
GEFanuc GE Fanuc Intelligent Platforms Embedded Systems, Inc.
gef GE Fanuc Intelligent Platforms Embedded Systems, Inc.
geniatech Geniatech, Inc.
@@ -119,6 +128,7 @@ idt Integrated Device Technologies, Inc.
ifi Ingenieurburo Fur Ic-Technologie (I/F/I)
iom Iomega Corporation
img Imagination Technologies Ltd.
+inforce Inforce Computing
ingenic Ingenic Semiconductor
innolux Innolux Corporation
intel Intel Corporation
@@ -142,6 +152,7 @@ lsi LSI Corp. (LSI Logic)
lltc Linear Technology Corporation
marvell Marvell Technology Group Ltd.
maxim Maxim Integrated Products
+meas Measurement Specialties
mediatek MediaTek Inc.
melexis Melexis N.V.
merrii Merrii Technology Co., Ltd.
@@ -153,6 +164,7 @@ mitsubishi Mitsubishi Electric Corporation
mosaixtech Mosaix Technologies, Inc.
moxa Moxa
mpl MPL AG
+mqmaker mqmaker Inc.
msi Micro-Star International Co. Ltd.
mti Imagination Technologies Ltd. (formerly MIPS Technologies Inc.)
mundoreader Mundo Reader S.L.
@@ -172,6 +184,7 @@ nvidia NVIDIA
nxp NXP Semiconductors
okaya Okaya Electric America, Inc.
olimex OLIMEX Ltd.
+onion Onion Corporation
onnn ON Semiconductor Corp.
ontat On Tat Industrial Company
opencores OpenCores.org
@@ -179,6 +192,7 @@ option Option NV
ortustech Ortus Technology Co., Ltd.
ovti OmniVision Technologies
ORCL Oracle Corporation
+oxsemi Oxford Semiconductor, Ltd.
panasonic Panasonic Corporation
parade Parade Technologies Inc.
pericom Pericom Technology Inc.
@@ -253,6 +267,7 @@ tpk TPK U.S.A. LLC
tronfy Tronfy
tronsmart Tronsmart
truly Truly Semiconductors Limited
+tyan Tyan Computer Corporation
upisemi uPI Semiconductor Corp.
urt United Radiant Technology Corporation
usi Universal Scientific Industrial Co., Ltd.
@@ -262,6 +277,7 @@ via VIA Technologies, Inc.
virtio Virtual I/O Device Specification, developed by the OASIS consortium
vivante Vivante Corporation
voipac Voipac Technologies s.r.o.
+wd Western Digital Corp.
wexler Wexler
winbond Winbond Electronics corp.
wlf Wolfson Microelectronics
diff --git a/Documentation/devicetree/bindings/watchdog/fsl-imx-wdt.txt b/Documentation/devicetree/bindings/watchdog/fsl-imx-wdt.txt
index 8dab6fd024aa..107280ef0025 100644
--- a/Documentation/devicetree/bindings/watchdog/fsl-imx-wdt.txt
+++ b/Documentation/devicetree/bindings/watchdog/fsl-imx-wdt.txt
@@ -5,10 +5,12 @@ Required properties:
- reg : Should contain WDT registers location and length
- interrupts : Should contain WDT interrupt
-Optional property:
+Optional properties:
- big-endian: If present the watchdog device's registers are implemented
in big endian mode, otherwise in native mode(same with CPU), for more
detail please see: Documentation/devicetree/bindings/regmap/regmap.txt.
+- fsl,ext-reset-output: If present the watchdog device is configured to
+ assert its external reset (WDOG_B) instead of issuing a software reset.
Examples:
diff --git a/Documentation/devicetree/bindings/watchdog/microchip,pic32-dmt.txt b/Documentation/devicetree/bindings/watchdog/microchip,pic32-dmt.txt
new file mode 100644
index 000000000000..49485f831373
--- /dev/null
+++ b/Documentation/devicetree/bindings/watchdog/microchip,pic32-dmt.txt
@@ -0,0 +1,19 @@
+* Microchip PIC32 Deadman Timer
+
+The deadman timer is used to reset the processor in the event of a software
+malfunction. It is a free-running instruction fetch timer, which is clocked
+whenever an instruction fetch occurs until a count match occurs.
+
+Required properties:
+- compatible: must be "microchip,pic32mzda-dmt".
+- reg: physical base address of the controller and length of memory mapped
+ region.
+- clocks: phandle of source clk. Should be <&rootclk PB7CLK>.
+
+Example:
+
+ watchdog@1f800a00 {
+ compatible = "microchip,pic32mzda-dmt";
+ reg = <0x1f800a00 0x80>;
+ clocks = <&rootclk PB7CLK>;
+ };
diff --git a/Documentation/devicetree/bindings/watchdog/microchip,pic32-wdt.txt b/Documentation/devicetree/bindings/watchdog/microchip,pic32-wdt.txt
new file mode 100644
index 000000000000..f03a29a1b323
--- /dev/null
+++ b/Documentation/devicetree/bindings/watchdog/microchip,pic32-wdt.txt
@@ -0,0 +1,18 @@
+* Microchip PIC32 Watchdog Timer
+
+When enabled, the watchdog peripheral can be used to reset the device if the
+WDT is not cleared periodically in software.
+
+Required properties:
+- compatible: must be "microchip,pic32mzda-wdt".
+- reg: physical base address of the controller and length of memory mapped
+ region.
+- clocks: phandle of source clk. Should be <&rootclk LPRCCLK>.
+
+Example:
+
+ watchdog@1f800800 {
+ compatible = "microchip,pic32mzda-wdt";
+ reg = <0x1f800800 0x200>;
+ clocks = <&rootclk LPRCCLK>;
+ };
diff --git a/Documentation/devicetree/bindings/watchdog/renesas-wdt.txt b/Documentation/devicetree/bindings/watchdog/renesas-wdt.txt
new file mode 100644
index 000000000000..b9512f1eb80a
--- /dev/null
+++ b/Documentation/devicetree/bindings/watchdog/renesas-wdt.txt
@@ -0,0 +1,25 @@
+Renesas Watchdog Timer (WDT) Controller
+
+Required properties:
+- compatible : Should be "renesas,r8a7795-wdt", or "renesas,rcar-gen3-wdt"
+
+ When compatible with the generic version, nodes must list the SoC-specific
+ version corresponding to the platform first, followed by the generic
+ version.
+
+- reg : Should contain WDT registers location and length
+- clocks : the clock feeding the watchdog timer.
+
+Optional properties:
+- timeout-sec : Contains the watchdog timeout in seconds
+- power-domains : the power domain the WDT belongs to
+
+Examples:
+
+ wdt0: watchdog@e6020000 {
+ compatible = "renesas,r8a7795-wdt", "renesas,rcar-gen3-wdt";
+ reg = <0 0xe6020000 0 0x0c>;
+ clocks = <&cpg CPG_MOD 402>;
+ power-domains = <&cpg>;
+ timeout-sec = <60>;
+ };
diff --git a/Documentation/driver-model/devres.txt b/Documentation/driver-model/devres.txt
index 73b98dfbcea4..c63eea0c1c8c 100644
--- a/Documentation/driver-model/devres.txt
+++ b/Documentation/driver-model/devres.txt
@@ -236,6 +236,7 @@ certainly invest a bit more effort into libata core layer).
CLOCK
devm_clk_get()
devm_clk_put()
+ devm_clk_hw_register()
DMA
dmam_alloc_coherent()
@@ -267,6 +268,13 @@ IIO
devm_iio_kfifo_free()
devm_iio_trigger_alloc()
devm_iio_trigger_free()
+ devm_iio_channel_get()
+ devm_iio_channel_release()
+ devm_iio_channel_get_all()
+ devm_iio_channel_release_all()
+
+INPUT
+ devm_input_allocate_device()
IO region
devm_release_mem_region()
@@ -317,6 +325,9 @@ MEM
devm_kvasprintf()
devm_kzalloc()
+MFD
+ devm_mfd_add_devices()
+
PCI
pcim_enable_device() : after success, all PCI ops become managed
pcim_pin_device() : keep PCI device enabled after release
@@ -328,6 +339,8 @@ PHY
PINCTRL
devm_pinctrl_get()
devm_pinctrl_put()
+ devm_pinctrl_register()
+ devm_pinctrl_unregister()
PWM
devm_pwm_get()
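Several of the helpers added above, devm_mfd_add_devices() among them, follow the usual devm pattern of tying the resource to the device's lifetime. The fragment below is only a sketch of how a probe routine might use it; the cell names and the driver itself are invented for illustration.

    /* Illustrative only: MFD cells registered through the managed helper. */
    #include <linux/kernel.h>
    #include <linux/mfd/core.h>
    #include <linux/platform_device.h>

    static const struct mfd_cell demo_cells[] = {
            { .name = "demo-regulator" },
            { .name = "demo-gpio" },
    };

    static int demo_probe(struct platform_device *pdev)
    {
            /* The cells are removed automatically when the driver detaches. */
            return devm_mfd_add_devices(&pdev->dev, PLATFORM_DEVID_NONE,
                                        demo_cells, ARRAY_SIZE(demo_cells),
                                        NULL, 0, NULL);
    }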
diff --git a/Documentation/fb/udlfb.txt b/Documentation/fb/udlfb.txt
index 57d2f2908b12..c985cb65dd06 100644
--- a/Documentation/fb/udlfb.txt
+++ b/Documentation/fb/udlfb.txt
@@ -9,7 +9,7 @@ pairing that with a hardware framebuffer (16MB) on the other end of the
USB wire. That hardware framebuffer is able to drive the VGA, DVI, or HDMI
monitor with no CPU involvement until a pixel has to change.
-The CPU or other local resource does all the rendering; optinally compares the
+The CPU or other local resource does all the rendering; optionally compares the
result with a local shadow of the remote hardware framebuffer to identify
the minimal set of pixels that have changed; and compresses and sends those
pixels line-by-line via USB bulk transfers.
@@ -66,10 +66,10 @@ means that from a hardware and fbdev software perspective, everything is good.
At that point, a /dev/fb? interface will be present for user-mode applications
to open and begin writing to the framebuffer of the DisplayLink device using
standard fbdev calls. Note that if mmap() is used, by default the user mode
-application must send down damage notifcations to trigger repaints of the
+application must send down damage notifications to trigger repaints of the
changed regions. Alternatively, udlfb can be recompiled with experimental
defio support enabled, to support a page-fault based detection mechanism
-that can work without explicit notifcation.
+that can work without explicit notification.
The most common client of udlfb is xf86-video-displaylink or a modified
xf86-video-fbdev X server. These servers have no real DisplayLink specific
diff --git a/Documentation/features/perf/perf-regs/arch-support.txt b/Documentation/features/perf/perf-regs/arch-support.txt
index e2b4a78ec543..f179b1fb26ef 100644
--- a/Documentation/features/perf/perf-regs/arch-support.txt
+++ b/Documentation/features/perf/perf-regs/arch-support.txt
@@ -27,7 +27,7 @@
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
- | powerpc: | TODO |
+ | powerpc: | ok |
| s390: | TODO |
| score: | TODO |
| sh: | TODO |
diff --git a/Documentation/features/perf/perf-stackdump/arch-support.txt b/Documentation/features/perf/perf-stackdump/arch-support.txt
index 3dc24b0673c0..85777c5c6353 100644
--- a/Documentation/features/perf/perf-stackdump/arch-support.txt
+++ b/Documentation/features/perf/perf-stackdump/arch-support.txt
@@ -27,7 +27,7 @@
| nios2: | TODO |
| openrisc: | TODO |
| parisc: | TODO |
- | powerpc: | TODO |
+ | powerpc: | ok |
| s390: | TODO |
| score: | TODO |
| sh: | TODO |
diff --git a/Documentation/filesystems/Locking b/Documentation/filesystems/Locking
index 619af9bfdcb3..75eea7ce3d7c 100644
--- a/Documentation/filesystems/Locking
+++ b/Documentation/filesystems/Locking
@@ -194,7 +194,7 @@ prototypes:
void (*invalidatepage) (struct page *, unsigned int, unsigned int);
int (*releasepage) (struct page *, int);
void (*freepage)(struct page *);
- int (*direct_IO)(struct kiocb *, struct iov_iter *iter, loff_t offset);
+ int (*direct_IO)(struct kiocb *, struct iov_iter *iter);
int (*migratepage)(struct address_space *, struct page *, struct page *);
int (*launder_page)(struct page *);
int (*is_partially_uptodate)(struct page *, unsigned long, unsigned long);
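The ->direct_IO() change above drops the explicit loff_t argument. As a hedged illustration (the stub and its debug message are hypothetical), an implementation now takes the starting offset from the kiocb instead:

    /* Illustrative stub: the starting offset now comes from the kiocb. */
    #include <linux/fs.h>
    #include <linux/printk.h>
    #include <linux/uio.h>

    static ssize_t demo_direct_IO(struct kiocb *iocb, struct iov_iter *iter)
    {
            loff_t offset = iocb->ki_pos;   /* formerly the third argument */

            pr_debug("direct I/O of %zu bytes at offset %lld\n",
                     iov_iter_count(iter), (long long)offset);
            return 0;                       /* stub: nothing transferred */
    }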
diff --git a/Documentation/filesystems/cifs/README b/Documentation/filesystems/cifs/README
index 2d5622f60e11..a54788405429 100644
--- a/Documentation/filesystems/cifs/README
+++ b/Documentation/filesystems/cifs/README
@@ -272,7 +272,7 @@ A partial list of the supported mount options follows:
same domain (e.g. running winbind or nss_ldap) and
the server supports the Unix Extensions then the uid
and gid can be retrieved from the server (and uid
- and gid would not have to be specifed on the mount.
+ and gid would not have to be specified on the mount.
For servers which do not support the CIFS Unix
extensions, the default uid (and gid) returned on lookup
of existing files will be the uid (gid) of the person
diff --git a/Documentation/filesystems/dax.txt b/Documentation/filesystems/dax.txt
index 7bde64014a89..ce4587d257d2 100644
--- a/Documentation/filesystems/dax.txt
+++ b/Documentation/filesystems/dax.txt
@@ -79,6 +79,38 @@ These filesystems may be used for inspiration:
- ext4: the fourth extended filesystem, see Documentation/filesystems/ext4.txt
+Handling Media Errors
+---------------------
+
+The libnvdimm subsystem stores a record of known media error locations for
+each pmem block device (in gendisk->badblocks). If we fault at such a location,
+or one with a latent error not yet discovered, the application can expect
+to receive a SIGBUS. Libnvdimm also allows clearing of these errors by simply
+writing the affected sectors (through the pmem driver, and if the underlying
+NVDIMM supports the clear_poison DSM defined by ACPI).
+
+Since DAX IO normally doesn't go through the driver/bio path, applications or
+sysadmins have an option to restore the lost data from a prior backup/inbuilt
+redundancy in the following ways:
+
+1. Delete the affected file, and restore from a backup (sysadmin route):
+ This will free the file system blocks that were being used by the file,
+ and the next time they're allocated, they will be zeroed first, which
+ happens through the driver, and will clear bad sectors.
+
+2. Truncate or hole-punch the part of the file that has a bad-block (at least
+ an entire aligned sector has to be hole-punched, but not necessarily an
+ entire filesystem block).
+
+These are the two basic paths that allow DAX filesystems to continue operating
+in the presence of media errors. More robust error recovery mechanisms can be
+built on top of this in the future, for example, involving redundancy/mirroring
+provided at the block layer through DM, or additionally, at the filesystem
+level. These would have to rely on the above two tenets, that error clearing
+can happen either by sending an IO through the driver, or zeroing (also through
+the driver).
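
As a hedged illustration of option 2 (the file name, offset and length are made up; a real application would use the bad range reported for its file), a sector-aligned hole can be punched from userspace with fallocate(), after which newly allocated blocks are zeroed through the driver, clearing the bad sectors:

    /* Illustrative only: punch a hole over a known-bad, aligned range. */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/falloc.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
            int fd = open("/mnt/pmem/data.bin", O_RDWR); /* hypothetical file */

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            /* Deallocate one 4 KiB block starting at a hypothetical offset. */
            if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                          40960, 4096))
                    perror("fallocate");
            close(fd);
            return 0;
    }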
+
+
Shortcomings
------------
diff --git a/Documentation/filesystems/directory-locking b/Documentation/filesystems/directory-locking
index 09bbf9a54f80..c314badbcfc6 100644
--- a/Documentation/filesystems/directory-locking
+++ b/Documentation/filesystems/directory-locking
@@ -1,30 +1,37 @@
Locking scheme used for directory operations is based on two
-kinds of locks - per-inode (->i_mutex) and per-filesystem
+kinds of locks - per-inode (->i_rwsem) and per-filesystem
(->s_vfs_rename_mutex).
- When taking the i_mutex on multiple non-directory objects, we
+ When taking the i_rwsem on multiple non-directory objects, we
always acquire the locks in order by increasing address. We'll call
that "inode pointer" order in the following.
For our purposes all operations fall in 5 classes:
1) read access. Locking rules: caller locks directory we are accessing.
+The lock is taken shared.
-2) object creation. Locking rules: same as above.
+2) object creation. Locking rules: same as above, but the lock is taken
+exclusive.
3) object removal. Locking rules: caller locks parent, finds victim,
-locks victim and calls the method.
+locks victim and calls the method. Locks are exclusive.
4) rename() that is _not_ cross-directory. Locking rules: caller locks
-the parent and finds source and target. If target already exists, lock
-it. If source is a non-directory, lock it. If that means we need to
-lock both, lock them in inode pointer order.
+the parent and finds source and target. In case of exchange (with
+RENAME_EXCHANGE in rename2() flags argument) lock both. In any case,
+if the target already exists, lock it. If the source is a non-directory,
+lock it. If we need to lock both, lock them in inode pointer order.
+Then call the method. All locks are exclusive.
+NB: we might get away with locking the source (and target in exchange
+case) shared.
5) link creation. Locking rules:
* lock parent
* check that source is not a directory
* lock source
* call the method.
+All locks are exclusive.
6) cross-directory rename. The trickiest in the whole bunch. Locking
rules:
@@ -35,11 +42,12 @@ rules:
fail with -ENOTEMPTY
* if new parent is equal to or is a descendent of source
fail with -ELOOP
- * If target exists, lock it. If source is a non-directory, lock
- it. In case that means we need to lock both source and target,
- do so in inode pointer order.
+ * If it's an exchange, lock both the source and the target.
+ * If the target exists, lock it. If the source is a non-directory,
+ lock it. If we need to lock both, do so in inode pointer order.
* call the method.
-
+All ->i_rwsem are taken exclusive. Again, we might get away with locking
+the source (and target in exchange case) shared.
The rules above obviously guarantee that all directories that are going to be
read, modified or removed by method will be locked by caller.
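
A hedged C sketch of the "inode pointer order" rule used in cases 4 and 6 follows; the helper name is invented and it assumes both victims are non-directories (in-tree code has its own equivalents of this):

    /* Illustrative only: lock two non-directory inodes in pointer order. */
    #include <linux/fs.h>
    #include <linux/kernel.h>

    static void demo_lock_pair(struct inode *a, struct inode *b)
    {
            if (a == b) {
                    inode_lock(a);
                    return;
            }
            if (a > b)
                    swap(a, b);
            inode_lock(a);                          /* lower address first */
            inode_lock_nested(b, I_MUTEX_NONDIR2);  /* then the higher one */
    }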
@@ -73,7 +81,7 @@ objects - A < B iff A is an ancestor of B.
attempt to acquire some lock and already holds at least one lock. Let's
consider the set of contended locks. First of all, filesystem lock is
not contended, since any process blocked on it is not holding any locks.
-Thus all processes are blocked on ->i_mutex.
+Thus all processes are blocked on ->i_rwsem.
By (3), any process holding a non-directory lock can only be
waiting on another non-directory lock with a larger address. Therefore
diff --git a/Documentation/filesystems/nilfs2.txt b/Documentation/filesystems/nilfs2.txt
index 41c3d332acc9..5b21ef76f751 100644
--- a/Documentation/filesystems/nilfs2.txt
+++ b/Documentation/filesystems/nilfs2.txt
@@ -268,3 +268,8 @@ among NILFS2 files can be depicted as follows:
( regular file, directory, or symlink )
For detail on the format of each file, please see include/linux/nilfs2_fs.h.
+
+There are no patents or other intellectual property that we protect
+with regard to the design of NILFS2. It is allowed to replicate the
+design in hopes that other operating systems could share (mount, read,
+write, etc.) data stored in this format.
diff --git a/Documentation/filesystems/overlayfs.txt b/Documentation/filesystems/overlayfs.txt
index 28091457b71a..d6259c786316 100644
--- a/Documentation/filesystems/overlayfs.txt
+++ b/Documentation/filesystems/overlayfs.txt
@@ -194,15 +194,6 @@ If a file with multiple hard links is copied up, then this will
"break" the link. Changes will not be propagated to other names
referring to the same inode.
-Symlinks in /proc/PID/ and /proc/PID/fd which point to a non-directory
-object in overlayfs will not contain valid absolute paths, only
-relative paths leading up to the filesystem's root. This will be
-fixed in the future.
-
-Some operations are not atomic, for example a crash during copy_up or
-rename will leave the filesystem in an inconsistent state. This will
-be addressed in the future.
-
Changes to underlying filesystems
---------------------------------
diff --git a/Documentation/filesystems/pohmelfs/design_notes.txt b/Documentation/filesystems/pohmelfs/design_notes.txt
index 8aef91335701..106d17fbb05f 100644
--- a/Documentation/filesystems/pohmelfs/design_notes.txt
+++ b/Documentation/filesystems/pohmelfs/design_notes.txt
@@ -29,7 +29,7 @@ Main features of this FS include:
* Read request (data read, directory listing, lookup requests) balancing between multiple servers.
* Write requests are replicated to multiple servers and completed only when all of them are acked.
* Ability to add and/or remove servers from the working set at run-time.
- * Strong authentification and possible data encryption in network channel.
+ * Strong authentication and possible data encryption in network channel.
* Extended attributes support.
POHMELFS is based on transactions, which are potentially long-standing objects that live
diff --git a/Documentation/filesystems/porting b/Documentation/filesystems/porting
index f1b87d8aa2da..a5fb89cac615 100644
--- a/Documentation/filesystems/porting
+++ b/Documentation/filesystems/porting
@@ -525,3 +525,63 @@ in your dentry operations instead.
set_delayed_call() where it used to set *cookie.
->put_link() is gone - just give the destructor to set_delayed_call()
in ->get_link().
+--
+[mandatory]
+ ->getxattr() and xattr_handler.get() get dentry and inode passed separately.
+ dentry might be yet to be attached to inode, so do _not_ use its ->d_inode
+ in the instances. Rationale: !@#!@# security_d_instantiate() needs to be
+ called before we attach dentry to inode.
+--
+[mandatory]
+ symlinks are no longer the only inodes that do *not* have i_bdev/i_cdev/
+ i_pipe/i_link union zeroed out at inode eviction. As the result, you can't
+ assume that non-NULL value in ->i_nlink at ->destroy_inode() implies that
+ it's a symlink. Checking ->i_mode is really needed now. In-tree we had
+ to fix shmem_destroy_callback() that used to take that kind of shortcut;
+ watch out, since that shortcut is no longer valid.
+--
+[mandatory]
+ ->i_mutex is replaced with ->i_rwsem now. inode_lock() et.al. work as
+ they used to - they just take it exclusive. However, ->lookup() may be
+ called with parent locked shared. Its instances must not
+ * use d_instantiate() and d_rehash() separately - use d_add() or
+ d_splice_alias() instead.
+ * use d_rehash() alone - call d_add(new_dentry, NULL) instead.
+ * in the unlikely case when (read-only) access to filesystem
+ data structures needs exclusion for some reason, arrange it
+ yourself. None of the in-tree filesystems needed that.
+ * rely on ->d_parent and ->d_name not changing after dentry has
+ been fed to d_add() or d_splice_alias(). Again, none of the
+ in-tree instances relied upon that.
+ We are guaranteed that lookups of the same name in the same directory
+ will not happen in parallel ("same" in the sense of your ->d_compare()).
+ Lookups on different names in the same directory can and do happen in
+ parallel now.
+--
+[recommended]
+ ->iterate_shared() is added; it's a parallel variant of ->iterate().
+ Exclusion on struct file level is still provided (as well as that
+ between it and lseek on the same struct file), but if your directory
+ has been opened several times, you can get these called in parallel.
+ Exclusion between that method and all directory-modifying ones is
+ still provided, of course.
+
+ Often enough ->iterate() can serve as ->iterate_shared() without any
+ changes - it is a read-only operation, after all. If you have any
+ per-inode or per-dentry in-core data structures modified by ->iterate(),
+ you might need something to serialize the access to them. If you
+ do dcache pre-seeding, you'll need to switch to d_alloc_parallel() for
+ that; look for in-tree examples.
+
+ Old method is only used if the new one is absent; eventually it will
+ be removed. Switch while you still can; the old one won't stay.
+--
+[mandatory]
+ ->atomic_open() calls without O_CREAT may happen in parallel.
+--
+[mandatory]
+ ->setxattr() and xattr_handler.set() get dentry and inode passed separately.
+ dentry might be yet to be attached to inode, so do _not_ use its ->d_inode
+ in the instances. Rationale: !@#!@# security_d_instantiate() needs to be
+ called before we attach dentry to inode and !@#!@##!@$!$#!@#$!@$!@$ smack
+ ->d_instantiate() uses not just ->getxattr() but ->setxattr() as well.
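
For orientation only, here is a hedged sketch of what the post-change xattr_handler methods look like; the names and the trivial bodies are invented, the argument lists are the point, and the inode argument is used rather than dentry->d_inode as the notes above require:

    /* Illustrative only: new-style get()/set() with separate dentry/inode. */
    #include <linux/errno.h>
    #include <linux/fs.h>
    #include <linux/xattr.h>

    static int demo_xattr_get(const struct xattr_handler *handler,
                              struct dentry *dentry, struct inode *inode,
                              const char *name, void *buffer, size_t size)
    {
            /* Use the inode argument; the dentry may not be attached yet. */
            return -EOPNOTSUPP;
    }

    static int demo_xattr_set(const struct xattr_handler *handler,
                              struct dentry *dentry, struct inode *inode,
                              const char *name, const void *buffer,
                              size_t size, int flags)
    {
            return -EOPNOTSUPP;
    }

    static const struct xattr_handler demo_xattr_handler = {
            .prefix = "user.",
            .get    = demo_xattr_get,
            .set    = demo_xattr_set,
    };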
diff --git a/Documentation/filesystems/proc.txt b/Documentation/filesystems/proc.txt
index 7f5607a089b4..e8d00759bfa5 100644
--- a/Documentation/filesystems/proc.txt
+++ b/Documentation/filesystems/proc.txt
@@ -225,6 +225,7 @@ Table 1-2: Contents of the status files (as of 4.1)
TracerPid PID of process tracing this process (0 if not)
Uid Real, effective, saved set, and file system UIDs
Gid Real, effective, saved set, and file system GIDs
+ Umask file mode creation mask
FDSize number of file descriptor slots currently allocated
Groups supplementary group list
NStgid descendant namespace thread group ID hierarchy
diff --git a/Documentation/filesystems/qnx6.txt b/Documentation/filesystems/qnx6.txt
index 408679789136..4f3d6a882bdc 100644
--- a/Documentation/filesystems/qnx6.txt
+++ b/Documentation/filesystems/qnx6.txt
@@ -16,7 +16,7 @@ qnx6fs shares many properties with traditional Unix filesystems. It has the
concepts of blocks, inodes and directories.
On QNX it is possible to create little endian and big endian qnx6 filesystems.
This feature makes it possible to create and use a different endianness fs
-for the target (QNX is used on quite a range of embedded systems) plattform
+for the target (QNX is used on quite a range of embedded systems) platform
running on a different endianness.
The Linux driver handles endianness transparently. (LE and BE)
diff --git a/Documentation/filesystems/vfs.txt b/Documentation/filesystems/vfs.txt
index 4164bd6397a2..c61a223ef3ff 100644
--- a/Documentation/filesystems/vfs.txt
+++ b/Documentation/filesystems/vfs.txt
@@ -591,7 +591,7 @@ struct address_space_operations {
void (*invalidatepage) (struct page *, unsigned int, unsigned int);
int (*releasepage) (struct page *, int);
void (*freepage)(struct page *);
- ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter, loff_t offset);
+ ssize_t (*direct_IO)(struct kiocb *, struct iov_iter *iter);
/* migrate the contents of a page to the specified target */
int (*migratepage) (struct page *, struct page *);
int (*launder_page) (struct page *);
diff --git a/Documentation/firmware_class/README b/Documentation/firmware_class/README
index 71f86859d7d8..cafdca8b3b15 100644
--- a/Documentation/firmware_class/README
+++ b/Documentation/firmware_class/README
@@ -20,7 +20,7 @@
1), kernel(driver):
- calls request_firmware(&fw_entry, $FIRMWARE, device)
- - kernel searchs the fimware image with name $FIRMWARE directly
+ - kernel searches the firmware image with name $FIRMWARE directly
in the below search path of root filesystem:
User customized search path by module parameter 'path'[1]
"/lib/firmware/updates/" UTS_RELEASE,
diff --git a/Documentation/gdb-kernel-debugging.txt b/Documentation/gdb-kernel-debugging.txt
index 7050ce8794b9..4ab7d43d0754 100644
--- a/Documentation/gdb-kernel-debugging.txt
+++ b/Documentation/gdb-kernel-debugging.txt
@@ -139,6 +139,27 @@ Examples of using the Linux-provided gdb helpers
start_comm = "swapper/2\000\000\000\000\000\000"
}
+ o Dig into a radix tree data structure, such as the IRQ descriptors:
+ (gdb) print (struct irq_desc)$lx_radix_tree_lookup(irq_desc_tree, 18)
+ $6 = {
+ irq_common_data = {
+ state_use_accessors = 67584,
+ handler_data = 0x0 <__vectors_start>,
+ msi_desc = 0x0 <__vectors_start>,
+ affinity = {{
+ bits = {65535}
+ }}
+ },
+ irq_data = {
+ mask = 0,
+ irq = 18,
+ hwirq = 27,
+ common = 0xee803d80,
+ chip = 0xc0eb0854 <gic_data>,
+ domain = 0xee808000,
+ parent_data = 0x0 <__vectors_start>,
+ chip_data = 0xc0eb0854 <gic_data>
+ } <... trimmed ...>
List of commands and functions
------------------------------
diff --git a/Documentation/gpio/driver.txt b/Documentation/gpio/driver.txt
index bbeec415f406..6cb35a78eff4 100644
--- a/Documentation/gpio/driver.txt
+++ b/Documentation/gpio/driver.txt
@@ -68,6 +68,103 @@ control callbacks) if it is expected to call GPIO APIs from atomic context
on -RT (inside hard IRQ handlers and similar contexts). Normally this should
not be required.
+
+GPIOs with open drain/source support
+------------------------------------
+
+Open drain (CMOS) or open collector (TTL) means the line is not actively driven
+high: instead you provide the drain/collector as output, so when the transistor
+is not open, it will present a high-impedance (tristate) to the external rail.
+
+
+ CMOS CONFIGURATION TTL CONFIGURATION
+
+ ||--- out +--- out
+ in ----|| |/
+ ||--+ in ----|
+ | |\
+ GND GND
+
+This configuration is normally used as a way to achieve one of two things:
+
+- Level-shifting: to reach a logical level higher than that of the silicon
+ where the output resides.
+
+- inverse wire-OR on an I/O line, for example a GPIO line, making it possible
+ for any driving stage on the line to drive it low even if any other output
+ to the same line is simultaneously driving it high. A special case of this
+ is driving the SCL and SDA lines of an I2C bus, which is by definition a
+ wire-OR bus.
+
+Both use cases require that the line be equipped with a pull-up resistor. This
+resistor will make the line tend to high level unless one of the transistors on
+the rail actively pulls it down.
+
+The level on the line will go as high as the VDD on the pull-up resistor, which
+may be higher than the level supported by the transistor, achieving a
+level-shift to the higher VDD.
+
+Integrated electronics often have an output driver stage in the form of a CMOS
+"totem-pole" with one N-MOS and one P-MOS transistor where one of them drives
+the line high and one of them drives the line low. This is called a push-pull
+output. The "totem-pole" looks like so:
+
+ VDD
+ |
+ OD ||--+
+ +--/ ---o|| P-MOS-FET
+ | ||--+
+IN --+ +----- out
+ | ||--+
+ +--/ ----|| N-MOS-FET
+ OS ||--+
+ |
+ GND
+
+The desired output signal (e.g. coming directly from some GPIO output register)
+arrives at IN. The switches named "OD" and "OS" are normally closed, creating
+a push-pull circuit.
+
+Consider the little "switches" named "OD" and "OS" that enable/disable the
+P-MOS or N-MOS transistor right after the split of the input. As you can see,
+either transistor will go totally numb if its switch is open. The totem-pole
+is then halved and gives high impedance instead of actively driving the line
+high or low respectively. That is usually how software-controlled open
+drain/source works.
+
+Some GPIO hardware comes in open drain / open source configuration. Some are
+hard-wired lines that will only support open drain or open source no matter
+what: there is only one transistor there. Some are software-configurable:
+by flipping a bit in a register the output can be configured as open drain
+or open source, in practice by flicking open the switches labeled "OD" and "OS"
+in the drawing above.
+
+By disabling the P-MOS transistor, the output can be driven between GND and
+high impedance (open drain), and by disabling the N-MOS transistor, the output
+can be driven between VDD and high impedance (open source). In the first case,
+a pull-up resistor is needed on the outgoing rail to complete the circuit, and
+in the second case, a pull-down resistor is needed on the rail.
+
+Hardware that supports open drain, open source, or both can implement a
+special callback in the gpio_chip: .set_single_ended() that takes an enum flag
+telling whether to configure the line as open drain, open source or push-pull.
+This will happen in response to the GPIO_OPEN_DRAIN or GPIO_OPEN_SOURCE flag
+set in the machine file, or coming from other hardware descriptions.
+
+If this state can not be configured in hardware, i.e. if the GPIO hardware does
+not support open drain/open source in hardware, the GPIO library will instead
+use a trick: when a line is set as output, if the line is flagged as open
+drain, and the IN output value is low, it will be driven low as usual. But
+if the IN output value is set to high, it will *NOT* be driven high;
+instead it will be switched to input, as input mode is high impedance, thus
+achieving an "open drain emulation" of sorts: electrically the behaviour will
+be identical, with the exception of possible hardware glitches when switching
+the mode of the line.
+
+For open source configuration the same principle is used, just that instead
+of actively driving the line low, it is set to input.
+
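+A hedged sketch of such a callback (the driver name and the register
+helper are hypothetical, and the callback is assumed to receive the chip,
+the line offset and the requested mode):
+
+static int mychip_set_single_ended(struct gpio_chip *chip, unsigned offset,
+				   enum single_ended_mode mode)
+{
+	switch (mode) {
+	case LINE_MODE_OPEN_DRAIN:
+		/* open the "OD" switch: disable the P-MOS half */
+		return mychip_set_od(chip, offset, true);
+	case LINE_MODE_PUSH_PULL:
+		/* close both switches: normal totem-pole output */
+		return mychip_set_od(chip, offset, false);
+	default:
+		/* this (hypothetical) hardware has no open source support */
+		return -ENOTSUPP;
+	}
+}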
+
GPIO drivers providing IRQs
---------------------------
It is custom that GPIO drivers (GPIO chips) are also providing interrupts,
diff --git a/Documentation/hwmon/abituguru b/Documentation/hwmon/abituguru
index 915f32063a26..f1d4fe4c366c 100644
--- a/Documentation/hwmon/abituguru
+++ b/Documentation/hwmon/abituguru
@@ -25,7 +25,7 @@ Supported chips:
1) For revisions 2 and 3 uGuru's the driver can autodetect the
sensortype (Volt or Temp) for bank1 sensors, for revision 1 uGuru's
this doesnot always work. For these uGuru's the autodection can
- be overriden with the bank1_types module param. For all 3 known
+ be overridden with the bank1_types module param. For all 3 known
revison 1 motherboards the correct use of this param is:
bank1_types=1,1,0,0,0,0,0,2,0,0,0,0,2,0,0,1
You may also need to specify the fan_sensors option for these boards
diff --git a/Documentation/hwmon/fam15h_power b/Documentation/hwmon/fam15h_power
index e2b1b69eebea..fb594c281c46 100644
--- a/Documentation/hwmon/fam15h_power
+++ b/Documentation/hwmon/fam15h_power
@@ -10,14 +10,22 @@ Supported chips:
Datasheets:
BIOS and Kernel Developer's Guide (BKDG) For AMD Family 15h Processors
BIOS and Kernel Developer's Guide (BKDG) For AMD Family 16h Processors
+ AMD64 Architecture Programmer's Manual Volume 2: System Programming
Author: Andreas Herrmann <herrmann.der.user@googlemail.com>
Description
-----------
+1) Processor TDP (Thermal design power)
+
+Given a fixed frequency and voltage, the power consumption of a
+processor varies based on the workload being executed. Derated power
+is the power consumed when running a specific application. Thermal
+design power (TDP) is an example of derated power.
+
This driver permits reading of registers providing power information
-of AMD Family 15h and 16h processors.
+of AMD Family 15h and 16h processors via the TDP algorithm.
For AMD Family 15h and 16h processors the following power values can
be calculated using different processor northbridge function
@@ -37,3 +45,58 @@ This driver provides ProcessorPwrWatts and CurrPwrWatts:
On multi-node processors the calculated value is for the entire
package and not for a single node. Thus the driver creates sysfs
attributes only for internal node0 of a multi-node processor.
+
+2) Accumulated Power Mechanism
+
+This driver also introduces an algorithm that should be used to
+calculate the average power consumed by a processor during a
+measurement interval Tm. The accumulated power mechanism feature is
+indicated by CPUID Fn8000_0007_EDX[12].
+
+* Tsample: compute unit power accumulator sample period
+* Tref: the PTSC counter period
+* PTSC: performance timestamp counter
+* N: the ratio of compute unit power accumulator sample period to the
+ PTSC period
+* Jmax: max compute unit accumulated power which is indicated by
+ MaxCpuSwPwrAcc MSR C001007b
+* Jx/Jy: compute unit accumulated power which is indicated by
+ CpuSwPwrAcc MSR C001007a
+* Tx/Ty: the value of performance timestamp counter which is indicated
+ by CU_PTSC MSR C0010280
+* PwrCPUave: CPU average power
+
+i. Determine the ratio of Tsample to Tref by executing CPUID Fn8000_0007.
+ N = value of CPUID Fn8000_0007_ECX[CpuPwrSampleTimeRatio[15:0]].
+
+ii. Read the full range of the cumulative energy value from the new
+MSR MaxCpuSwPwrAcc.
+ Jmax = value returned.
+iii. At time x, SW reads CpuSwPwrAcc MSR and samples the PTSC.
+ Jx = value read from CpuSwPwrAcc and Tx = value read from
+PTSC.
+
+iv. At time y, SW reads CpuSwPwrAcc MSR and samples the PTSC.
+ Jy = value read from CpuSwPwrAcc and Ty = value read from
+PTSC.
+
+v. Calculate the average power consumption for a compute unit over
+time period (y-x). Unit of result is uWatt.
+ if (Jy < Jx) // Rollover has occurred
+ Jdelta = (Jy + Jmax) - Jx
+ else
+ Jdelta = Jy - Jx
+ PwrCPUave = N * Jdelta * 1000 / (Ty - Tx)
+
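+In C, the same computation (including the rollover handling) looks
+roughly like this; a sketch only, the MSR reads themselves are omitted
+and the variable names simply mirror the list above:
+
+   static u64 fam15h_avg_power_uw(u64 jx, u64 jy, u64 tx, u64 ty,
+                                  u64 jmax, u64 n)
+   {
+           u64 jdelta;
+
+           if (jy < jx)            /* the accumulator rolled over */
+                   jdelta = (jy + jmax) - jx;
+           else
+                   jdelta = jy - jx;
+
+           /* result is in microwatts, matching the formula above */
+           return div64_u64(n * jdelta * 1000, ty - tx);
+   }
+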
+This driver provides PwrCPUave and interval (default is 10 milliseconds,
+maximum is 1 second):
+* power1_average (PwrCPUave)
+* power1_average_interval (Interval)
+
+The power1_average_interval can be set in the /etc/sensors3.conf file
+as below:
+
+chip "fam15h_power-*"
+ set power1_average_interval 0.01
+
+Then save it with "sensors -s".
diff --git a/Documentation/hwmon/it87 b/Documentation/hwmon/it87
index 733296d65449..fff6f6bf55bc 100644
--- a/Documentation/hwmon/it87
+++ b/Documentation/hwmon/it87
@@ -9,6 +9,9 @@ Supported chips:
* IT8620E
Prefix: 'it8620'
Addresses scanned: from Super I/O config space (8 I/O ports)
+ * IT8628E
+ Prefix: 'it8628'
+ Addresses scanned: from Super I/O config space (8 I/O ports)
Datasheet: Not publicly available
* IT8705F
Prefix: 'it87'
@@ -114,8 +117,8 @@ motherboard models.
Description
-----------
-This driver implements support for the IT8603E, IT8620E, IT8623E, IT8705F,
-IT8712F, IT8716F, IT8718F, IT8720F, IT8721F, IT8726F, IT8728F, IT8732F,
+This driver implements support for the IT8603E, IT8620E, IT8623E, IT8628E,
+IT8705F, IT8712F, IT8716F, IT8718F, IT8720F, IT8721F, IT8726F, IT8728F, IT8732F,
IT8758E, IT8771E, IT8772E, IT8781F, IT8782F, IT8783E/F, IT8786E, IT8790E, and
SiS950 chips.
@@ -158,8 +161,8 @@ The IT8603E/IT8623E is a custom design, hardware monitoring part is similar to
IT8728F. It only supports 3 fans, 16-bit fan mode, and the full speed mode
of the fan is not supported (value 0 of pwmX_enable).
-The IT8620E is another custom design, hardware monitoring part is similar to
-IT8728F. It only supports 16-bit fan mode.
+The IT8620E and IT8628E are custom designs; the hardware monitoring part is
+similar to the IT8728F. They only support 16-bit fan mode, and both chips
+support up to 6 fans.
The IT8790E supports up to 3 fans. 16-bit fan mode is always enabled.
@@ -187,8 +190,8 @@ of 0.016 volt. IT8603E, IT8721F/IT8758E and IT8728F can measure between 0 and
2.8 volts with a resolution of 0.0109 volt. The battery voltage in8 does not
have limit registers.
-On the IT8603E, IT8721F/IT8758E, IT8732F, IT8781F, IT8782F, and IT8783E/F, some
-voltage inputs are internal and scaled inside the chip:
+On the IT8603E, IT8620E, IT8628E, IT8721F/IT8758E, IT8732F, IT8781F, IT8782F,
+and IT8783E/F, some voltage inputs are internal and scaled inside the chip:
* in3 (optional)
* in7 (optional for IT8781F, IT8782F, and IT8783E/F)
* in8 (always)
diff --git a/Documentation/hwmon/max31722 b/Documentation/hwmon/max31722
new file mode 100644
index 000000000000..090da84538c8
--- /dev/null
+++ b/Documentation/hwmon/max31722
@@ -0,0 +1,34 @@
+Kernel driver max31722
+======================
+
+Supported chips:
+ * Maxim Integrated MAX31722
+ Prefix: 'max31722'
+ ACPI ID: MAX31722
+ Addresses scanned: -
+ Datasheet: https://datasheets.maximintegrated.com/en/ds/MAX31722-MAX31723.pdf
+ * Maxim Integrated MAX31723
+ Prefix: 'max31723'
+ ACPI ID: MAX31723
+ Addresses scanned: -
+ Datasheet: https://datasheets.maximintegrated.com/en/ds/MAX31722-MAX31723.pdf
+
+Author: Tiberiu Breana <tiberiu.a.breana@intel.com>
+
+Description
+-----------
+
+This driver adds support for the Maxim Integrated MAX31722/MAX31723 thermometers
+and thermostats running over an SPI interface.
+
+Usage Notes
+-----------
+
+This driver uses ACPI to auto-detect devices. See ACPI IDs in the above section.
+
+Sysfs entries
+-------------
+
+The following attribute is supported:
+
+temp1_input Measured temperature. Read-only.
diff --git a/Documentation/hwmon/max34440 b/Documentation/hwmon/max34440
index f5b1fcaa9e4e..9ba6587b7657 100644
--- a/Documentation/hwmon/max34440
+++ b/Documentation/hwmon/max34440
@@ -5,17 +5,17 @@ Supported chips:
* Maxim MAX34440
Prefixes: 'max34440'
Addresses scanned: -
- Datasheet: http://datasheets.maxim-ic.com/en/ds/MAX34440.pdf
+ Datasheet: http://datasheets.maximintegrated.com/en/ds/MAX34440.pdf
* Maxim MAX34441
PMBus 5-Channel Power-Supply Manager and Intelligent Fan Controller
Prefixes: 'max34441'
Addresses scanned: -
- Datasheet: http://datasheets.maxim-ic.com/en/ds/MAX34441.pdf
+ Datasheet: http://datasheets.maximintegrated.com/en/ds/MAX34441.pdf
* Maxim MAX34446
PMBus Power-Supply Data Logger
Prefixes: 'max34446'
Addresses scanned: -
- Datasheet: http://datasheets.maxim-ic.com/en/ds/MAX34446.pdf
+ Datasheet: http://datasheets.maximintegrated.com/en/ds/MAX34446.pdf
* Maxim MAX34460
PMBus 12-Channel Voltage Monitor & Sequencer
Prefix: 'max34460'
diff --git a/Documentation/i2c/i2c-topology b/Documentation/i2c/i2c-topology
new file mode 100644
index 000000000000..e0aefeece551
--- /dev/null
+++ b/Documentation/i2c/i2c-topology
@@ -0,0 +1,370 @@
+I2C topology
+============
+
+There are a couple of reasons for building more complex i2c topologies
+than a straightforward i2c bus with one adapter and one or more devices.
+
+1. A mux may be needed on the bus to prevent address collisions.
+
+2. The bus may be accessible from some external bus master, and arbitration
+ may be needed to determine if it is ok to access the bus.
+
+3. A device (particularly RF tuners) may want to avoid the digital noise
+ from the i2c bus, at least most of the time, and therefore sits behind a gate
+ that has to be operated before the device can be accessed.
+
+Etc
+
+These constructs are represented as i2c adapter trees by Linux, where
+each adapter has a parent adapter (except the root adapter) and zero or
+more child adapters. The root adapter is the actual adapter that issues
+i2c transfers, and all adapters with a parent are part of an "i2c-mux"
+object (quoted, since it can also be an arbitrator or a gate).
+
+Depending on the particular mux driver, something happens when there is
+an i2c transfer on one of its child adapters. The mux driver can
+obviously operate a mux, but it can also do arbitration with an external
+bus master or open a gate. The mux driver has two operations for this,
+select and deselect. select is called before the transfer and (the
+optional) deselect is called after the transfer.
+
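+A hedged sketch of these two operations (the mux driver name and the
+mymux_route() helper are hypothetical; the callbacks are assumed to take
+the mux core and the child channel number):
+
+	/* called by the i2c-mux core before a transfer on child 'chan' */
+	static int mymux_select(struct i2c_mux_core *muxc, u32 chan)
+	{
+		return mymux_route(muxc, chan);		/* connect the channel */
+	}
+
+	/* optional; called after the transfer has completed */
+	static int mymux_deselect(struct i2c_mux_core *muxc, u32 chan)
+	{
+		return mymux_route(muxc, MYMUX_NO_CHANNEL);	/* disconnect */
+	}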
+
+Locking
+=======
+
+There are two variants of locking available to i2c muxes: they can be
+mux-locked or parent-locked muxes. As is evident from below, it can be
+useful to know if a mux is mux-locked or if it is parent-locked. The
+following list was correct at the time of writing:
+
+In drivers/i2c/muxes/
+i2c-arb-gpio-challenge Parent-locked
+i2c-mux-gpio Normally parent-locked, mux-locked iff
+ all involved gpio pins are controlled by the
+ same i2c root adapter that they mux.
+i2c-mux-pca9541 Parent-locked
+i2c-mux-pca954x Parent-locked
+i2c-mux-pinctrl Normally parent-locked, mux-locked iff
+ all involved pinctrl devices are controlled
+ by the same i2c root adapter that they mux.
+i2c-mux-reg Parent-locked
+
+In drivers/iio/
+imu/inv_mpu6050/ Mux-locked
+
+In drivers/media/
+dvb-frontends/m88ds3103 Parent-locked
+dvb-frontends/rtl2830 Parent-locked
+dvb-frontends/rtl2832 Mux-locked
+dvb-frontends/si2168 Mux-locked
+usb/cx231xx/ Parent-locked
+
+
+Mux-locked muxes
+----------------
+
+Mux-locked muxes do not lock the entire parent adapter during the
+full select-transfer-deselect transaction; only the muxes on the parent
+adapter are locked. Mux-locked muxes are mostly interesting if the
+select and/or deselect operations must use i2c transfers to complete
+their tasks. Since the parent adapter is not fully locked during the
+full transaction, unrelated i2c transfers may interleave with the different
+stages of the transaction. This has the benefit that the mux driver
+may be easier and cleaner to implement, but it has some caveats.
+
+ML1. If you build a topology with a mux-locked mux being the parent
+ of a parent-locked mux, this might break the expectation from the
+ parent-locked mux that the root adapter is locked during the
+ transaction.
+
+ML2. It is not safe to build arbitrary topologies with two (or more)
+ mux-locked muxes that are not siblings, when there are address
+ collisions between the devices on the child adapters of these
+ non-sibling muxes.
+
+ I.e. the select-transfer-deselect transaction targeting e.g. device
+ address 0x42 behind mux-one may be interleaved with a similar
+ operation targeting device address 0x42 behind mux-two. The
+ intention with such a topology would in this hypothetical example
+ be that mux-one and mux-two should not be selected simultaneously,
+ but mux-locked muxes do not guarantee that in all topologies.
+
+ML3. A mux-locked mux cannot be used by a driver for auto-closing
+ gates/muxes, i.e. something that closes automatically after a given
+ number (one, in most cases) of i2c transfers. Unrelated i2c transfers
+ may creep in and close prematurely.
+
+ML4. If any non-i2c operation in the mux driver changes the i2c mux state,
+ the driver has to lock the root adapter during that operation.
+ Otherwise garbage may appear on the bus as seen from devices
+ behind the mux, when an unrelated i2c transfer is in flight during
+ the non-i2c mux-changing operation.
+
+
+Mux-locked Example
+------------------
+
+ .----------. .--------.
+ .--------. | mux- |-----| dev D1 |
+ | root |--+--| locked | '--------'
+ '--------' | | mux M1 |--. .--------.
+ | '----------' '--| dev D2 |
+ | .--------. '--------'
+ '--| dev D3 |
+ '--------'
+
+When there is an access to D1, this happens:
+
+ 1. Someone issues an i2c-transfer to D1.
+ 2. M1 locks muxes on its parent (the root adapter in this case).
+ 3. M1 calls ->select to ready the mux.
+ 4. M1 (presumably) does some i2c-transfers as part of its select.
+ These transfers are normal i2c-transfers that lock the parent
+ adapter.
+ 5. M1 feeds the i2c-transfer from step 1 to its parent adapter as a
+ normal i2c-transfer that locks the parent adapter.
+ 6. M1 calls ->deselect, if it has one.
+ 7. Same rules as in step 4, but for ->deselect.
+ 8. M1 unlocks muxes on its parent.
+
+This means that accesses to D2 are locked out for the full duration
+of the entire operation. But accesses to D3 are possibly interleaved
+at any point.
+
+
+Parent-locked muxes
+-------------------
+
+Parent-locked muxes lock the parent adapter during the full select-
+transfer-deselect transaction. The implication is that the mux driver
+has to ensure that any and all i2c transfers through that parent
+adapter during the transaction are unlocked i2c transfers (using e.g.
+__i2c_transfer), or a deadlock will follow. There are a couple of
+caveats.
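+
+For instance (a sketch, not taken from any in-tree driver; the client and
+register layout are hypothetical), a select operation that has to write a
+register on a device sitting on the already-locked parent adapter would do:
+
+	static int mymux_write_reg(struct i2c_client *client, u8 reg, u8 val)
+	{
+		u8 buf[2] = { reg, val };
+		struct i2c_msg msg = {
+			.addr = client->addr,
+			.flags = 0,
+			.len = sizeof(buf),
+			.buf = buf,
+		};
+
+		/*
+		 * The parent adapter is already locked at this point, so a
+		 * normal i2c_transfer() would deadlock; use the unlocked
+		 * variant instead.
+		 */
+		return __i2c_transfer(client->adapter, &msg, 1) == 1 ? 0 : -EIO;
+	}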
+
+PL1. If you build a topology with a parent-locked mux being the child
+ of another mux, this might break a possible assumption from the
+ child mux that the root adapter is unused between its select op
+ and the actual transfer (e.g. if the child mux is auto-closing
+ and the parent mux issues i2c-transfers as part of its select).
+ This is especially the case if the parent mux is mux-locked, but
+ it may also happen if the parent mux is parent-locked.
+
+PL2. If select/deselect calls out to other subsystems such as gpio,
+ pinctrl, regmap or iio, it is essential that any i2c transfers
+ caused by these subsystems are unlocked. This can be convoluted to
+ accomplish, and may even be impossible if an acceptably clean solution
+ is sought.
+
+
+Parent-locked Example
+---------------------
+
+ .----------. .--------.
+ .--------. | parent- |-----| dev D1 |
+ | root |--+--| locked | '--------'
+ '--------' | | mux M1 |--. .--------.
+ | '----------' '--| dev D2 |
+ | .--------. '--------'
+ '--| dev D3 |
+ '--------'
+
+When there is an access to D1, this happens:
+
+ 1. Someone issues an i2c-transfer to D1.
+ 2. M1 locks muxes on its parent (the root adapter in this case).
+ 3. M1 locks its parent adapter.
+ 4. M1 calls ->select to ready the mux.
+ 5. If M1 does any i2c-transfers (on this root adapter) as part of
+ its select, those transfers must be unlocked i2c-transfers so
+ that they do not deadlock the root adapter.
+ 6. M1 feeds the i2c-transfer from step 1 to the root adapter as an
+ unlocked i2c-transfer, so that it does not deadlock the parent
+ adapter.
+ 7. M1 calls ->deselect, if it has one.
+ 8. Same rules as in step 5, but for ->deselect.
+ 9. M1 unlocks its parent adapter.
+10. M1 unlocks muxes on its parent.
+
+
+This means that accesses to both D2 and D3 are locked out for the full
+duration of the entire operation.
+
+
+Complex Examples
+================
+
+Parent-locked mux as parent of parent-locked mux
+------------------------------------------------
+
+This is a useful topology, but it can be bad.
+
+ .----------. .----------. .--------.
+ .--------. | parent- |-----| parent- |-----| dev D1 |
+ | root |--+--| locked | | locked | '--------'
+ '--------' | | mux M1 |--. | mux M2 |--. .--------.
+ | '----------' | '----------' '--| dev D2 |
+ | .--------. | .--------. '--------'
+ '--| dev D4 | '--| dev D3 |
+ '--------' '--------'
+
+When any device is accessed, all other devices are locked out for
+the full duration of the operation (both muxes lock their parent,
+and specifically when M2 requests its parent to lock, M1 passes
+the buck to the root adapter).
+
+This topology is bad if M2 is an auto-closing mux and M1->select
+issues any unlocked i2c transfers on the root adapter that may leak
+through and be seen by the M2 adapter, thus closing M2 prematurely.
+
+
+Mux-locked mux as parent of mux-locked mux
+------------------------------------------
+
+This is a good topology.
+
+ .----------. .----------. .--------.
+ .--------. | mux- |-----| mux- |-----| dev D1 |
+ | root |--+--| locked | | locked | '--------'
+ '--------' | | mux M1 |--. | mux M2 |--. .--------.
+ | '----------' | '----------' '--| dev D2 |
+ | .--------. | .--------. '--------'
+ '--| dev D4 | '--| dev D3 |
+ '--------' '--------'
+
+When device D1 is accessed, accesses to D2 are locked out for the
+full duration of the operation (muxes on the top child adapter of M1
+are locked). But accesses to D3 and D4 are possibly interleaved at
+any point. Accesses to D3 lock out D1 and D2, but accesses to D4
+are still possibly interleaved.
+
+
+Mux-locked mux as parent of parent-locked mux
+---------------------------------------------
+
+This is probably a bad topology.
+
+ .----------. .----------. .--------.
+ .--------. | mux- |-----| parent- |-----| dev D1 |
+ | root |--+--| locked | | locked | '--------'
+ '--------' | | mux M1 |--. | mux M2 |--. .--------.
+ | '----------' | '----------' '--| dev D2 |
+ | .--------. | .--------. '--------'
+ '--| dev D4 | '--| dev D3 |
+ '--------' '--------'
+
+When device D1 is accessed, accesses to D2 and D3 are locked out
+for the full duration of the operation (M1 locks child muxes on the
+root adapter). But accesses to D4 are possibly interleaved at any
+point.
+
+This kind of topology is generally not suitable and should probably
+be avoided. The reason is that M2 probably assumes that there will
+be no i2c transfers during its calls to ->select and ->deselect, and
+if there are, any such transfers might appear on the slave side of M2
+as partial i2c transfers, i.e. garbage or worse. This might cause
+device lockups and/or other problems.
+
+The topology is especially troublesome if M2 is an auto-closing
+mux. In that case, any interleaved accesses to D4 might close M2
+prematurely, as might any i2c-transfers part of M1->select.
+
+But if M2 is not making the above stated assumption, and if M2 is not
+auto-closing, the topology is fine.
+
+
+Parent-locked mux as parent of mux-locked mux
+---------------------------------------------
+
+This is a good topology.
+
+ .----------. .----------. .--------.
+ .--------. | parent- |-----| mux- |-----| dev D1 |
+ | root |--+--| locked | | locked | '--------'
+ '--------' | | mux M1 |--. | mux M2 |--. .--------.
+ | '----------' | '----------' '--| dev D2 |
+ | .--------. | .--------. '--------'
+ '--| dev D4 | '--| dev D3 |
+ '--------' '--------'
+
+When D1 is accessed, accesses to D2 are locked out for the full
+duration of the operation (muxes on the top child adapter of M1
+are locked). Accesses to D3 and D4 are possibly interleaved at
+any point, just as is expected for mux-locked muxes.
+
+When D3 or D4 are accessed, everything else is locked out. For D3
+accesses, M1 locks the root adapter. For D4 accesses, the root
+adapter is locked directly.
+
+
+Two mux-locked sibling muxes
+----------------------------
+
+This is a good topology.
+
+ .--------.
+ .----------. .--| dev D1 |
+ | mux- |--' '--------'
+ .--| locked | .--------.
+ | | mux M1 |-----| dev D2 |
+ | '----------' '--------'
+ | .----------. .--------.
+ .--------. | | mux- |-----| dev D3 |
+ | root |--+--| locked | '--------'
+ '--------' | | mux M2 |--. .--------.
+ | '----------' '--| dev D4 |
+ | .--------. '--------'
+ '--| dev D5 |
+ '--------'
+
+When D1 is accessed, accesses to D2, D3 and D4 are locked out. But
+accesses to D5 may be interleaved at any time.
+
+
+Two parent-locked sibling muxes
+-------------------------------
+
+This is a good topology.
+
+ .--------.
+ .----------. .--| dev D1 |
+ | parent- |--' '--------'
+ .--| locked | .--------.
+ | | mux M1 |-----| dev D2 |
+ | '----------' '--------'
+ | .----------. .--------.
+ .--------. | | parent- |-----| dev D3 |
+ | root |--+--| locked | '--------'
+ '--------' | | mux M2 |--. .--------.
+ | '----------' '--| dev D4 |
+ | .--------. '--------'
+ '--| dev D5 |
+ '--------'
+
+When any device is accessed, accesses to all other devices are locked
+out.
+
+
+Mux-locked and parent-locked sibling muxes
+------------------------------------------
+
+This is a good topology.
+
+ .--------.
+ .----------. .--| dev D1 |
+ | mux- |--' '--------'
+ .--| locked | .--------.
+ | | mux M1 |-----| dev D2 |
+ | '----------' '--------'
+ | .----------. .--------.
+ .--------. | | parent- |-----| dev D3 |
+ | root |--+--| locked | '--------'
+ '--------' | | mux M2 |--. .--------.
+ | '----------' '--| dev D4 |
+ | .--------. '--------'
+ '--| dev D5 |
+ '--------'
+
+When D1 or D2 are accessed, accesses to D3 and D4 are locked out while
+accesses to D5 may interleave. When D3 or D4 are accessed, accesses to
+all other devices are locked out.
diff --git a/Documentation/infiniband/ipoib.txt b/Documentation/infiniband/ipoib.txt
index f2cfe265e836..47c1dd9818f2 100644
--- a/Documentation/infiniband/ipoib.txt
+++ b/Documentation/infiniband/ipoib.txt
@@ -25,7 +25,7 @@ Partitions and P_Keys
main interface for a subinterface is in "parent."
Child interface create/delete can also be done using IPoIB's
- rtnl_link_ops, where childs created using either way behave the same.
+ rtnl_link_ops, where children created using either way behave the same.
Datagram vs Connected modes
diff --git a/Documentation/infiniband/sysfs.txt b/Documentation/infiniband/sysfs.txt
index 3ecf0c3a133f..45bcafe6ff8a 100644
--- a/Documentation/infiniband/sysfs.txt
+++ b/Documentation/infiniband/sysfs.txt
@@ -56,6 +56,18 @@ SYSFS FILES
ports/1/pkeys/10 contains the value at index 10 in port 1's P_Key
table.
+ There is an optional "hw_counters" subdirectory that may be under either
+ the parent device or the port subdirectories or both. If present,
+ there is a list of counters provided by the hardware. They may match
+ some of the counters in the counters directory, but they often include
+ many other counters. In addition to the various counters, there will
+ be a file named "lifespan" that configures how frequently the core
+ should update the counters when they are being accessed (counters are
+ not updated if they are not being accessed). The lifespan is in milli-
+ seconds and defaults to 10 unless set to something else by the driver.
+ Users may echo a value between 0 and 10000 to the lifespan file to set
+ the length of time between updates in milliseconds.
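+
+ For example (a sketch; the HCA name "mlx5_0" and the port number are
+ just placeholders for whatever sysfs path exists on the system), a small
+ C fragment setting the lifespan to one second:
+
+	/* needs <fcntl.h> and <unistd.h> */
+	int fd = open("/sys/class/infiniband/mlx5_0/ports/1/hw_counters/lifespan",
+		      O_WRONLY);
+	if (fd >= 0) {
+		write(fd, "1000", 4);	/* refresh at most every 1000 ms */
+		close(fd);
+	}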
+
MTHCA
The Mellanox HCA driver also creates the files:
diff --git a/Documentation/isa.txt b/Documentation/isa.txt
new file mode 100644
index 000000000000..f232c26a40be
--- /dev/null
+++ b/Documentation/isa.txt
@@ -0,0 +1,121 @@
+ISA Drivers
+-----------
+
+The following text is adapted from the commit message of the initial
+commit of the ISA bus driver authored by Rene Herman.
+
+During the recent "isa drivers using platform devices" discussion it was
+pointed out that (ALSA) ISA drivers ran into the problem of not having
+the option to fail driver load (device registration rather) upon not
+finding their hardware due to a probe() error not being passed up
+through the driver model. In the course of that, I suggested a separate
+ISA bus might be best; Russell King agreed and suggested this bus could
+use the .match() method for the actual device discovery.
+
+The attached does this. For this old non (generically) discoverable ISA
+hardware only the driver itself can do discovery so, as a difference from
+the platform_bus, this isa_bus also distributes match() up to the
+driver.
+
+As another difference: these devices only exist in the driver model due
+to the driver creating them because it might want to drive them, meaning
+that all device creation has been made internal as well.
+
+The usage model this provides is nice, and has been acked from the ALSA
+side by Takashi Iwai and Jaroslav Kysela. The ALSA driver module_init's
+now (for oldisa-only drivers) become:
+
+static int __init alsa_card_foo_init(void)
+{
+ return isa_register_driver(&snd_foo_isa_driver, SNDRV_CARDS);
+}
+
+static void __exit alsa_card_foo_exit(void)
+{
+ isa_unregister_driver(&snd_foo_isa_driver);
+}
+
+Quite like the other bus models therefore. This removes a lot of
+duplicated init code from the ALSA ISA drivers.
+
+The passed in isa_driver struct is the regular driver struct embedding a
+struct device_driver, the normal probe/remove/shutdown/suspend/resume
+callbacks, and as indicated that .match callback.
+
+The "SNDRV_CARDS" you see being passed in is a "unsigned int ndev"
+parameter, indicating how many devices to create and call our methods
+with.
+
+The platform_driver callbacks are called with a platform_device param;
+the isa_driver callbacks are being called with a "struct device *dev,
+unsigned int id" pair directly -- with the device creation completely
+internal to the bus it's much cleaner to not leak isa_dev's by passing
+them in at all. The id is the only thing we ever want other than the
+struct device * anyway, and it makes for nicer code in the callbacks as
+well.
+
+With this additional .match() callback ISA drivers have all options. If
+ALSA would want to keep the old non-load behaviour, it could stick all
+of the old .probe in .match, which would only keep them registered after
+everything was found to be present and accounted for. If it wanted the
+behaviour of always loading as it inadvertently did for a bit after the
+changeover to platform devices, it could just not provide a .match() and
+do everything in .probe() as before.
+
+If it, as Takashi Iwai already suggested earlier as a way of following
+the model from saner buses more closely, wants to load when a later bind
+could conceivably succeed, it could use .match() for the prerequisites
+(such as checking the user wants the card enabled and that port/irq/dma
+values have been passed in) and .probe() for everything else. This is
+the nicest model.
+
+To the code...
+
+This exports only two functions; isa_{,un}register_driver().
+
+isa_register_driver() registers the struct device_driver, and then
+loops over the passed in ndev creating devices and registering them.
+This causes the bus match method to be called for them, which is:
+
+int isa_bus_match(struct device *dev, struct device_driver *driver)
+{
+ struct isa_driver *isa_driver = to_isa_driver(driver);
+
+ if (dev->platform_data == isa_driver) {
+ if (!isa_driver->match ||
+ isa_driver->match(dev, to_isa_dev(dev)->id))
+ return 1;
+ dev->platform_data = NULL;
+ }
+ return 0;
+}
+
+The first thing this does is check if this device is in fact one of this
+driver's devices by seeing if the device's platform_data pointer is set
+to this driver. Platform devices compare strings, but we don't need to
+do that with everything being internal, so isa_register_driver() abuses
+dev->platform_data as an isa_driver pointer which we can then check here.
+I believe platform_data is available for this, but if you'd rather not, moving
+the isa_driver pointer to the private struct isa_dev is of course fine as
+well.
+
+Then, if the driver did not provide a .match, it matches. If it did,
+the driver match() method is called to determine a match.
+
+If it did _not_ match, dev->platform_data is reset to indicate this to
+isa_register_driver which can then unregister the device again.
+
+If, during all this, there's any error, or no devices matched at all,
+everything is backed out again and the error, or -ENODEV, is returned.
+
+isa_unregister_driver() just unregisters the matched devices and the
+driver itself.
+
+module_isa_driver is a helper macro for ISA drivers which do not do
+anything special in module init/exit. This eliminates a lot of
+boilerplate code. Each module may only use this macro once, and calling
+it replaces module_init and module_exit.
+
+max_num_isa_dev is a macro to determine the maximum possible number of
+ISA devices which may be registered in the I/O port address space given
+the address extent of the ISA devices.
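+
+For a hypothetical driver, the init/exit pair shown earlier then collapses
+to a single line (the names are made up; the second argument is the number
+of devices the driver may register):
+
+module_isa_driver(snd_foo_isa_driver, SNDRV_CARDS);
+
+and, per the description above, a driver whose devices each decode, say, an
+8-byte I/O extent can size its device tables with max_num_isa_dev(0x8).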
diff --git a/Documentation/ja_JP/HOWTO b/Documentation/ja_JP/HOWTO
index 52ef02b33da9..581c14bdd7be 100644
--- a/Documentation/ja_JP/HOWTO
+++ b/Documentation/ja_JP/HOWTO
@@ -290,12 +290,6 @@ Linux カーネルの開発プロセスは現在幾つかの異なるメイン
- このプロセスはカーネルが 「準備ができた」と考えられるまで継続しま
す。このプロセスはだいたい 6週間継続します。
- - 各リリースでの既知の後戻り問題(regression: このリリースの中で新規
- に作り込まれた問題を指す) はその都度 Linux-kernel メーリングリスト
- に投稿されます。ゴールとしては、カーネルが 「準備ができた」と宣言
- する前にこのリストの長さをゼロに減らすことですが、現実には、数個の
- 後戻り問題がリリース時にたびたび残ってしまいます。
-
Andrew Morton が Linux-kernel メーリングリストにカーネルリリースについ
て書いたことをここで言っておくことは価値があります-
「カーネルがいつリリースされるかは誰も知りません。なぜなら、これは現
diff --git a/Documentation/kbuild/kconfig-language.txt b/Documentation/kbuild/kconfig-language.txt
index c52856da0cad..db101857b2c9 100644
--- a/Documentation/kbuild/kconfig-language.txt
+++ b/Documentation/kbuild/kconfig-language.txt
@@ -241,9 +241,8 @@ comment "module support disabled"
depends on !MODULES
MODVERSIONS directly depends on MODULES, this means it's only visible if
-MODULES is different from 'n'. The comment on the other hand is always
-visible when MODULES is visible (the (empty) dependency of MODULES is
-also part of the comment dependencies).
+MODULES is different from 'n'. The comment on the other hand is only
+visible when MODULES is set to 'n'.
Kconfig syntax
@@ -285,12 +284,17 @@ choices:
"endchoice"
This defines a choice group and accepts any of the above attributes as
-options. A choice can only be of type bool or tristate, while a boolean
-choice only allows a single config entry to be selected, a tristate
-choice also allows any number of config entries to be set to 'm'. This
-can be used if multiple drivers for a single hardware exists and only a
-single driver can be compiled/loaded into the kernel, but all drivers
-can be compiled as modules.
+options. A choice can only be of type bool or tristate. If no type is
+specified for a choice, its type will be determined by the type of
+the first choice element in the group, or remain unknown if none of the
+choice elements have a type specified either.
+
+While a boolean choice only allows a single config entry to be
+selected, a tristate choice also allows any number of config entries
+to be set to 'm'. This can be used if multiple drivers for a single piece
+of hardware exist and only a single driver can be compiled/loaded into
+the kernel, but all drivers can be compiled as modules.
+
A choice accepts another option "optional", which allows to set the
choice to 'n' and no entry needs to be selected.
If no [symbol] is associated with a choice, then you can not have multiple
diff --git a/Documentation/kdump/gdbmacros.txt b/Documentation/kdump/gdbmacros.txt
index 9b9b454b048a..35f6a982a0d5 100644
--- a/Documentation/kdump/gdbmacros.txt
+++ b/Documentation/kdump/gdbmacros.txt
@@ -15,15 +15,16 @@
define bttnobp
set $tasks_off=((size_t)&((struct task_struct *)0)->tasks)
- set $pid_off=((size_t)&((struct task_struct *)0)->pids[1].pid_list.next)
+ set $pid_off=((size_t)&((struct task_struct *)0)->thread_group.next)
set $init_t=&init_task
set $next_t=(((char *)($init_t->tasks).next) - $tasks_off)
+ set var $stacksize = sizeof(union thread_union)
while ($next_t != $init_t)
set $next_t=(struct task_struct *)$next_t
printf "\npid %d; comm %s:\n", $next_t.pid, $next_t.comm
printf "===================\n"
- set var $stackp = $next_t.thread.esp
- set var $stack_top = ($stackp & ~4095) + 4096
+ set var $stackp = $next_t.thread.sp
+ set var $stack_top = ($stackp & ~($stacksize - 1)) + $stacksize
while ($stackp < $stack_top)
if (*($stackp) > _stext && *($stackp) < _sinittext)
@@ -31,13 +32,13 @@ define bttnobp
end
set $stackp += 4
end
- set $next_th=(((char *)$next_t->pids[1].pid_list.next) - $pid_off)
+ set $next_th=(((char *)$next_t->thread_group.next) - $pid_off)
while ($next_th != $next_t)
set $next_th=(struct task_struct *)$next_th
printf "\npid %d; comm %s:\n", $next_t.pid, $next_t.comm
printf "===================\n"
- set var $stackp = $next_t.thread.esp
- set var $stack_top = ($stackp & ~4095) + 4096
+ set var $stackp = $next_t.thread.sp
+ set var $stack_top = ($stackp & ~($stacksize - 1)) + $stacksize
while ($stackp < $stack_top)
if (*($stackp) > _stext && *($stackp) < _sinittext)
@@ -45,7 +46,7 @@ define bttnobp
end
set $stackp += 4
end
- set $next_th=(((char *)$next_th->pids[1].pid_list.next) - $pid_off)
+ set $next_th=(((char *)$next_th->thread_group.next) - $pid_off)
end
set $next_t=(char *)($next_t->tasks.next) - $tasks_off
end
@@ -54,42 +55,44 @@ document bttnobp
dump all thread stack traces on a kernel compiled with !CONFIG_FRAME_POINTER
end
+define btthreadstack
+ set var $pid_task = $arg0
+
+ printf "\npid %d; comm %s:\n", $pid_task.pid, $pid_task.comm
+ printf "task struct: "
+ print $pid_task
+ printf "===================\n"
+ set var $stackp = $pid_task.thread.sp
+ set var $stacksize = sizeof(union thread_union)
+ set var $stack_top = ($stackp & ~($stacksize - 1)) + $stacksize
+ set var $stack_bot = ($stackp & ~($stacksize - 1))
+
+ set $stackp = *((unsigned long *) $stackp)
+ while (($stackp < $stack_top) && ($stackp > $stack_bot))
+ set var $addr = *(((unsigned long *) $stackp) + 1)
+ info symbol $addr
+ set $stackp = *((unsigned long *) $stackp)
+ end
+end
+document btthreadstack
+ dump a thread stack using the given task structure pointer
+end
+
+
define btt
set $tasks_off=((size_t)&((struct task_struct *)0)->tasks)
- set $pid_off=((size_t)&((struct task_struct *)0)->pids[1].pid_list.next)
+ set $pid_off=((size_t)&((struct task_struct *)0)->thread_group.next)
set $init_t=&init_task
set $next_t=(((char *)($init_t->tasks).next) - $tasks_off)
while ($next_t != $init_t)
set $next_t=(struct task_struct *)$next_t
- printf "\npid %d; comm %s:\n", $next_t.pid, $next_t.comm
- printf "===================\n"
- set var $stackp = $next_t.thread.esp
- set var $stack_top = ($stackp & ~4095) + 4096
- set var $stack_bot = ($stackp & ~4095)
-
- set $stackp = *($stackp)
- while (($stackp < $stack_top) && ($stackp > $stack_bot))
- set var $addr = *($stackp + 4)
- info symbol $addr
- set $stackp = *($stackp)
- end
+ btthreadstack $next_t
- set $next_th=(((char *)$next_t->pids[1].pid_list.next) - $pid_off)
+ set $next_th=(((char *)$next_t->thread_group.next) - $pid_off)
while ($next_th != $next_t)
set $next_th=(struct task_struct *)$next_th
- printf "\npid %d; comm %s:\n", $next_t.pid, $next_t.comm
- printf "===================\n"
- set var $stackp = $next_t.thread.esp
- set var $stack_top = ($stackp & ~4095) + 4096
- set var $stack_bot = ($stackp & ~4095)
-
- set $stackp = *($stackp)
- while (($stackp < $stack_top) && ($stackp > $stack_bot))
- set var $addr = *($stackp + 4)
- info symbol $addr
- set $stackp = *($stackp)
- end
- set $next_th=(((char *)$next_th->pids[1].pid_list.next) - $pid_off)
+ btthreadstack $next_th
+ set $next_th=(((char *)$next_th->thread_group.next) - $pid_off)
end
set $next_t=(char *)($next_t->tasks.next) - $tasks_off
end
@@ -101,7 +104,7 @@ end
define btpid
set var $pid = $arg0
set $tasks_off=((size_t)&((struct task_struct *)0)->tasks)
- set $pid_off=((size_t)&((struct task_struct *)0)->pids[1].pid_list.next)
+ set $pid_off=((size_t)&((struct task_struct *)0)->thread_group.next)
set $init_t=&init_task
set $next_t=(((char *)($init_t->tasks).next) - $tasks_off)
set var $pid_task = 0
@@ -113,29 +116,18 @@ define btpid
set $pid_task = $next_t
end
- set $next_th=(((char *)$next_t->pids[1].pid_list.next) - $pid_off)
+ set $next_th=(((char *)$next_t->thread_group.next) - $pid_off)
while ($next_th != $next_t)
set $next_th=(struct task_struct *)$next_th
if ($next_th.pid == $pid)
set $pid_task = $next_th
end
- set $next_th=(((char *)$next_th->pids[1].pid_list.next) - $pid_off)
+ set $next_th=(((char *)$next_th->thread_group.next) - $pid_off)
end
set $next_t=(char *)($next_t->tasks.next) - $tasks_off
end
- printf "\npid %d; comm %s:\n", $pid_task.pid, $pid_task.comm
- printf "===================\n"
- set var $stackp = $pid_task.thread.esp
- set var $stack_top = ($stackp & ~4095) + 4096
- set var $stack_bot = ($stackp & ~4095)
-
- set $stackp = *($stackp)
- while (($stackp < $stack_top) && ($stackp > $stack_bot))
- set var $addr = *($stackp + 4)
- info symbol $addr
- set $stackp = *($stackp)
- end
+ btthreadstack $pid_task
end
document btpid
backtrace of pid
@@ -145,7 +137,7 @@ end
define trapinfo
set var $pid = $arg0
set $tasks_off=((size_t)&((struct task_struct *)0)->tasks)
- set $pid_off=((size_t)&((struct task_struct *)0)->pids[1].pid_list.next)
+ set $pid_off=((size_t)&((struct task_struct *)0)->thread_group.next)
set $init_t=&init_task
set $next_t=(((char *)($init_t->tasks).next) - $tasks_off)
set var $pid_task = 0
@@ -157,13 +149,13 @@ define trapinfo
set $pid_task = $next_t
end
- set $next_th=(((char *)$next_t->pids[1].pid_list.next) - $pid_off)
+ set $next_th=(((char *)$next_t->thread_group.next) - $pid_off)
while ($next_th != $next_t)
set $next_th=(struct task_struct *)$next_th
if ($next_th.pid == $pid)
set $pid_task = $next_th
end
- set $next_th=(((char *)$next_th->pids[1].pid_list.next) - $pid_off)
+ set $next_th=(((char *)$next_th->thread_group.next) - $pid_off)
end
set $next_t=(char *)($next_t->tasks.next) - $tasks_off
end
diff --git a/Documentation/kdump/kdump.txt b/Documentation/kdump/kdump.txt
index bc4bd5a44b88..88ff63d5fde3 100644
--- a/Documentation/kdump/kdump.txt
+++ b/Documentation/kdump/kdump.txt
@@ -263,12 +263,6 @@ The syntax is:
crashkernel=<range1>:<size1>[,<range2>:<size2>,...][@offset]
range=start-[end]
-Please note, on arm, the offset is required.
- crashkernel=<range1>:<size1>[,<range2>:<size2>,...]@offset
- range=start-[end]
-
- 'start' is inclusive and 'end' is exclusive.
-
For example:
crashkernel=512M-2G:64M,2G-:128M
@@ -307,10 +301,9 @@ Boot into System Kernel
on the memory consumption of the kdump system. In general this is not
dependent on the memory size of the production system.
- On arm, use "crashkernel=Y@X". Note that the start address of the kernel
- will be aligned to 128MiB (0x08000000), so if the start address is not then
- any space below the alignment point may be overwritten by the dump-capture kernel,
- which means it is possible that the vmcore is not that precise as expected.
+ On arm, the use of "crashkernel=Y@X" is no longer necessary; the
+ kernel will automatically locate the crash kernel image within the
+ first 512MB of RAM if X is not given.
Load the Dump-capture Kernel
diff --git a/Documentation/kernel-docs.txt b/Documentation/kernel-docs.txt
index fe217c1c2f7f..1dafc52167b0 100644
--- a/Documentation/kernel-docs.txt
+++ b/Documentation/kernel-docs.txt
@@ -194,15 +194,15 @@
simple---most of the complexity (other than talking to the
hardware) involves managing network packets in memory".
- * Title: "Writing Linux Device Drivers"
+ * Title: "Linux Kernel Hackers' Guide"
Author: Michael K. Johnson.
- URL: http://users.evitech.fi/~tk/rtos/writing_linux_device_d.html
- Keywords: files, VFS, file operations, kernel interface, character
- vs block devices, I/O access, hardware interrupts, DMA, access to
- user memory, memory allocation, timers.
- Description: Introductory 50-minutes (sic) tutorial on writing
- device drivers. 12 pages written by the same author of the "Kernel
- Hackers' Guide" which give a very good overview of the topic.
+ URL: http://www.tldp.org/LDP/khg/HyperNews/get/khg.html
+ Keywords: device drivers, files, VFS, kernel interface, character vs
+ block devices, hardware interrupts, scsi, DMA, access to user memory,
+ memory allocation, timers.
+ Description: A guide designed to help you get up to speed on the
+ concepts that are not intuitively obvious, and to document the internal
+ structures of Linux.
* Title: "The Venus kernel interface"
Author: Peter J. Braam.
@@ -250,7 +250,7 @@
* Title: "Analysis of the Ext2fs structure"
Author: Louis-Dominique Dubeau.
- URL: http://www.nondot.org/sabre/os/files/FileSystems/ext2fs/
+ URL: http://teaching.csse.uwa.edu.au/units/CITS2002/fs-ext2/
Keywords: ext2, filesystem, ext2fs.
Description: Description of ext2's blocks, directories, inodes,
bitmaps, invariants...
@@ -266,14 +266,14 @@
* Title: "Kernel API changes from 2.0 to 2.2"
Author: Richard Gooch.
- URL:
- http://www.linuxhq.com/guides/LKMPG/node28.html
+ URL: http://www.safe-mbox.com/~rgooch/linux/docs/porting-to-2.2.html
Keywords: 2.2, changes.
Description: Kernel functions/structures/variables which changed
from 2.0.x to 2.2.x.
* Title: "Kernel API changes from 2.2 to 2.4"
Author: Richard Gooch.
+ URL: http://www.safe-mbox.com/~rgooch/linux/docs/porting-to-2.4.html
Keywords: 2.4, changes.
Description: Kernel functions/structures/variables which changed
from 2.2.x to 2.4.x.
@@ -609,6 +609,13 @@
Pages: 432.
ISBN: 0-201-63338-8
+ * Title: "Linux Kernel Development, 3rd Edition"
+ Author: Robert Love
+ Publisher: Addison-Wesley.
+ Date: July, 2010
+ Pages: 440
+ ISBN: 978-0672329463
+
MISCELLANEOUS:
* Name: linux/Documentation
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 0b3de80ec8f6..82b42c958d1c 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -131,6 +131,7 @@ parameter is applicable:
More X86-64 boot options can be found in
Documentation/x86/x86_64/boot-options.txt .
X86 Either 32-bit or 64-bit x86 (same as X86-32+X86-64)
+ X86_UV SGI UV support is enabled.
XEN Xen support is enabled
In addition, the following text indicates that the option:
@@ -167,16 +168,18 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
acpi= [HW,ACPI,X86,ARM64]
Advanced Configuration and Power Interface
- Format: { force | off | strict | noirq | rsdt |
+ Format: { force | on | off | strict | noirq | rsdt |
copy_dsdt }
force -- enable ACPI if default was off
+ on -- enable ACPI but allow fallback to DT [arm64]
off -- disable ACPI if default was on
noirq -- do not use ACPI for IRQ routing
strict -- Be less tolerant of platforms that are not
strictly ACPI specification compliant.
rsdt -- prefer RSDT over (default) XSDT
copy_dsdt -- copy DSDT to memory
- For ARM64, ONLY "acpi=off" or "acpi=force" are available
+ For ARM64, ONLY "acpi=off", "acpi=on" or "acpi=force"
+ are available
See also Documentation/power/runtime_pm.txt, pci=noacpi
@@ -312,6 +315,8 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
acpi_osi=!* # remove all strings
acpi_osi=! # disable all built-in OS vendor
strings
+ acpi_osi=!! # enable all built-in OS vendor
+ strings
acpi_osi= # disable all strings
'acpi_osi=!' can be used in combination with single or
@@ -542,6 +547,13 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
Format: <int> (must be >=0)
Default: 64
+ bau= [X86_UV] Enable the BAU on SGI UV. The default
+ behavior is to disable the BAU (i.e. bau=0).
+ Format: { "0" | "1" }
+ 0 - Disable the BAU.
+ 1 - Enable the BAU.
+ unset - Disable the BAU.
+
baycom_epp= [HW,AX25]
Format: <io>,<mode>
@@ -826,6 +838,9 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
It will be ignored when crashkernel=X,high is not used
or memory reserved is below 4G.
+ cryptomgr.notests
+ [KNL] Disable crypto self-tests
+
cs89x0_dma= [HW,NET]
Format: <dma>
@@ -1039,6 +1054,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
the driver will use only 32-bit accessors to read/write
the device registers.
+ meson,<addr>
+ Start an early, polled-mode console on a meson serial
+ port at the specified address. The serial port must
+ already be setup and configured. Options are not yet
+ supported.
+
msm_serial,<addr>
Start an early, polled-mode console on an msm serial
port at the specified address. The serial port
@@ -1661,6 +1682,11 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
hwp_only
Only load intel_pstate on systems which support
hardware P state control (HWP) if available.
+ support_acpi_ppc
+ Enforce ACPI _PPC performance limits. If the Fixed ACPI
+ Description Table specifies the preferred power management
+ profile as "Enterprise Server" or "Performance Server",
+ then this feature is turned on by default.
intremap= [X86-64, Intel-IOMMU]
on enable Interrupt Remapping (default)
@@ -1767,6 +1793,13 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
PCI device 00:14.0 write the parameter as:
ivrs_hpet[0]=00:14.0
+ ivrs_acpihid [HW,X86_64]
+ Provide an override to the ACPI-HID:UID<->DEVICE-ID
+ mapping provided in the IVRS ACPI table. For
+ example, to map UART-HID:UID AMD0020:0 to
+ PCI device 00:14.5 write the parameter as:
+ ivrs_acpihid[00:14.5]=AMD0020:0
+
js= [HW,JOY] Analog joystick
See Documentation/input/joystick.txt.
@@ -2141,6 +2174,14 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
[KNL,SH] Allow user to override the default size for
per-device physically contiguous DMA buffers.
+ memhp_default_state=online/offline
+ [KNL] Set the initial state for the memory hotplug
+ onlining policy. If not specified, the default value is
+ set according to the
+ CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE kernel config
+ option.
+ See Documentation/memory-hotplug.txt.
+
memmap=exactmap [KNL,X86] Enable setting of an exact
E820 memory map, as specified by the user.
Such memmap=exactmap lines can be constructed based on
@@ -2538,6 +2579,9 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
nohugeiomap [KNL,x86] Disable kernel huge I/O mappings.
+ nosmt [KNL,S390] Disable symmetric multithreading (SMT).
+ Equivalent to smt=1.
+
noxsave [BUGS=X86] Disables x86 extended register state save
and restore using xsave. The kernel will fallback to
enabling legacy floating-point and sse state.
@@ -2921,11 +2965,6 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
for broken drivers that don't call it.
skip_isa_align [X86] do not align io start addr, so can
handle more pci cards
- firmware [ARM] Do not re-enumerate the bus but instead
- just use the configuration from the
- bootloader. This is currently used on
- IXP2000 systems where the bus has to be
- configured a certain way for adjunct CPUs.
noearly [X86] Don't do any early type 1 scanning.
This might help on some broken boards which
machine check when some devices' config space
@@ -3284,6 +3323,44 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
Lazy RCU callbacks are those which RCU can
prove do nothing more than free memory.
+ rcuperf.gp_exp= [KNL]
+ Measure performance of expedited synchronous
+ grace-period primitives.
+
+ rcuperf.holdoff= [KNL]
+ Set test-start holdoff period. The purpose of
+ this parameter is to delay the start of the
+ test until boot completes in order to avoid
+ interference.
+
+ rcuperf.nreaders= [KNL]
+ Set number of RCU readers. The value -1 selects
+ N, where N is the number of CPUs. A value
+ "n" less than -1 selects N-n+1, where N is again
+ the number of CPUs. For example, -2 selects N
+ (the number of CPUs), -3 selects N+1, and so on.
+ A value of "n" less than or equal to -N selects
+ a single reader.
+
+ rcuperf.nwriters= [KNL]
+ Set number of RCU writers. The values operate
+ the same as for rcuperf.nreaders; the default of -1
+ selects N, where N is the number of CPUs.
+
+ rcuperf.perf_runnable= [BOOT]
+ Start rcuperf running at boot time.
+
+ rcuperf.shutdown= [KNL]
+ Shut the system down after performance tests
+ complete. This is useful for hands-off automated
+ testing.
+
+ rcuperf.perf_type= [KNL]
+ Specify the RCU implementation to test.
+
+ rcuperf.verbose= [KNL]
+ Enable additional printk() statements.
+
rcutorture.cbflood_inter_holdoff= [KNL]
Set holdoff time (jiffies) between successive
callback-flood tests.
@@ -3695,6 +3772,13 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
1: Fast pin select (default)
2: ATC IRMode
+ smt [KNL,S390] Set the maximum number of threads (logical
+ CPUs) to use per physical CPU on systems capable of
+ symmetric multithreading (SMT). Will be capped to the
+ actual hardware limit.
+ Format: <integer>
+ Default: -1 (no limit)
+
softlockup_panic=
[KNL] Should the soft-lockup detector generate panics.
Format: <integer>
diff --git a/Documentation/ko_KR/HOWTO b/Documentation/ko_KR/HOWTO
index 5a81b394b3b5..9a3e65924d54 100644
--- a/Documentation/ko_KR/HOWTO
+++ b/Documentation/ko_KR/HOWTO
@@ -236,9 +236,9 @@ Documentation/DocBook/ 디렉토리 내에서 만들어지며 PDF, Postscript, H
- 새로운 커널이 배포되자마자 2주의 시간이 주어진다. 이 기간동은
메인테이너들은 큰 diff들을 Linus에게 제출할 수 있다. 대개 이 패치들은
몇 주 동안 -next 커널내에 이미 있었던 것들이다. 큰 변경들을 제출하는 데
- 선호되는 방법은 git(커널의 소스 관리 툴, 더 많은 정보들은 http://git.or.cz/
- 에서 참조할 수 있다)를 사용하는 것이지만 순수한 패치파일의 형식으로 보내는
- 것도 무관하다.
+ 선호되는 방법은 git(커널의 소스 관리 툴, 더 많은 정보들은
+ http://git-scm.com/ 에서 참조할 수 있다)를 사용하는 것이지만 순수한
+ 패치파일의 형식으로 보내는 것도 무관하다.
- 2주 후에 -rc1 커널이 배포되며 지금부터는 전체 커널의 안정성에 영향을
미칠수 있는 새로운 기능들을 포함하지 않는 패치들만이 추가될 수 있다.
완전히 새로운 드라이버(혹은 파일시스템)는 -rc1 이후에만 받아들여진다는
@@ -253,8 +253,6 @@ Documentation/DocBook/ 디렉토리 내에서 만들어지며 PDF, Postscript, H
것이다.
- 이러한 프로세스는 커널이 "준비(ready)"되었다고 여겨질때까지 계속된다.
프로세스는 대체로 6주간 지속된다.
- - 각 -rc 배포에 있는 알려진 회귀의 목록들은 다음 URI에 남겨진다.
- http://kernelnewbies.org/known_regressions
커널 배포에 있어서 언급할만한 가치가 있는 리눅스 커널 메일링 리스트의
Andrew Morton의 글이 있다.
diff --git a/Documentation/laptops/toshiba_haps.txt b/Documentation/laptops/toshiba_haps.txt
index 11dbcfdc9e7a..0c1d88dedbde 100644
--- a/Documentation/laptops/toshiba_haps.txt
+++ b/Documentation/laptops/toshiba_haps.txt
@@ -19,7 +19,7 @@ Author: Azael Avalos <coproscefalo@gmail.com>
--------------
This driver provides support for the accelerometer found in various Toshiba
-laptops, being called "Toshiba HDD Protection - Shock Sensor" officialy,
+laptops, being called "Toshiba HDD Protection - Shock Sensor" officially,
and detects laptops automatically with this device.
On Windows, Toshiba provided software monitors this device and provides
automatic HDD protection (head unload) on sudden moves or harsh vibrations,
diff --git a/Documentation/livepatch/livepatch.txt b/Documentation/livepatch/livepatch.txt
new file mode 100644
index 000000000000..6c43f6ebee8d
--- /dev/null
+++ b/Documentation/livepatch/livepatch.txt
@@ -0,0 +1,394 @@
+=========
+Livepatch
+=========
+
+This document outlines basic information about kernel livepatching.
+
+Table of Contents:
+
+1. Motivation
+2. Kprobes, Ftrace, Livepatching
+3. Consistency model
+4. Livepatch module
+ 4.1. New functions
+ 4.2. Metadata
+ 4.3. Livepatch module handling
+5. Livepatch life-cycle
+ 5.1. Registration
+ 5.2. Enabling
+ 5.3. Disabling
+ 5.4. Unregistration
+6. Sysfs
+7. Limitations
+
+
+1. Motivation
+=============
+
+There are many situations where users are reluctant to reboot a system. It may
+be because their system is performing complex scientific computations or under
+heavy load during peak usage. In addition to keeping systems up and running,
+users also want a stable and secure system. Livepatching gives users
+both by allowing function calls to be redirected, so that critical
+functions can be fixed without a system reboot.
+
+
+2. Kprobes, Ftrace, Livepatching
+================================
+
+There are multiple mechanisms in the Linux kernel that are directly related
+to redirection of code execution; namely: kernel probes, function tracing,
+and livepatching:
+
+ + The kernel probes are the most generic. The code can be redirected by
+ replacing almost any instruction with a breakpoint instruction.
+
+ + The function tracer calls the code from a predefined location that is
+ close to the function entry point. This location is generated by the
+ compiler using the '-pg' gcc option.
+
+ + Livepatching typically needs to redirect the code at the very beginning
+ of the function entry before the function parameters or the stack
+ are in any way modified.
+
+All three approaches need to modify the existing code at runtime. Therefore
+they need to be aware of each other and not step on each other's toes.
+Most of these problems are solved by using the dynamic ftrace framework as
+a base. A Kprobe is registered as a ftrace handler when the function entry
+is probed, see CONFIG_KPROBES_ON_FTRACE. Also an alternative function from
+a live patch is called with the help of a custom ftrace handler. But there are
+some limitations, see below.
+
+
+3. Consistency model
+====================
+
+Functions are there for a reason. They take some input parameters, get or
+release locks, read, process, and even write some data in a defined way,
+and produce return values. In other words, each function has defined semantics.
+
+Many fixes do not change the semantics of the modified functions. For
+example, they add a NULL pointer or a boundary check, fix a race by adding
+a missing memory barrier, or add some locking around a critical section.
+Most of these changes are self-contained and the function presents itself
+the same way to the rest of the system. In this case, the functions might
+be updated independently one by one.
+
+But there are more complex fixes. For example, a patch might change
+ordering of locking in multiple functions at the same time. Or a patch
+might exchange meaning of some temporary structures and update
+all the relevant functions. In this case, the affected unit
+(thread, whole kernel) needs to start using all new versions of
+the functions at the same time. Also the switch must happen only
+when it is safe to do so, e.g. when the affected locks are released
+or no data are stored in the modified structures at the moment.
+
+The theory about how to apply functions in a safe way is rather complex.
+The aim is to define a so-called consistency model. It attempts to define
+conditions when the new implementation could be used so that the system
+stays consistent. The theory is not yet finished. See the discussion at
+http://thread.gmane.org/gmane.linux.kernel/1823033/focus=1828189
+
+The current consistency model is very simple. It guarantees that either
+the old or the new function is called. But various functions get redirected
+one by one without any synchronization.
+
+In other words, the current implementation _never_ modifies the behavior
+in the middle of the call. This is because it does _not_ rewrite the entire
+function in memory. Instead, the function gets redirected at the
+very beginning. But this redirection is used immediately even when
+some other functions from the same patch have not been redirected yet.
+
+See also the section "Limitations" below.
+
+
+4. Livepatch module
+===================
+
+Livepatches are distributed using kernel modules, see
+samples/livepatch/livepatch-sample.c.
+
+The module includes a new implementation of functions that we want
+to replace. In addition, it defines some structures describing the
+relation between the original and the new implementation. Then there
+is code that makes the kernel start using the new code when the livepatch
+module is loaded. Also there is code that cleans up before the
+livepatch module is removed. All this is explained in more detail in
+the next sections.
+
+
+4.1. New functions
+------------------
+
+New versions of functions are typically just copied from the original
+sources. A good practice is to add a prefix to the names so that they
+can be distinguished from the original ones, e.g. in a backtrace. Also
+they can be declared as static because they are not called directly
+and do not need global visibility.
+
+The patch contains only functions that are really modified. But they
+might want to access functions or data from the original source file
+that may only be locally accessible. This can be solved by a special
+relocation section in the generated livepatch module, see
+Documentation/livepatch/module-elf-format.txt for more details.
+
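+For illustration only, a minimal sketch of such a new function, modeled on
+samples/livepatch/livepatch-sample.c (the patched function, cmdline_proc_show,
+and the message it prints are the sample's choices, not anything livepatch
+requires):
+
+    #include <linux/seq_file.h>
+
+    static int livepatch_cmdline_proc_show(struct seq_file *m, void *v)
+    {
+        /* Replacement for cmdline_proc_show(); note the "livepatch_" prefix. */
+        seq_printf(m, "%s\n", "this has been live patched");
+        return 0;
+    }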
+
+4.2. Metadata
+-------------
+
+The patch is described by several structures that split the information
+into three levels:
+
+ + struct klp_func is defined for each patched function. It describes
+ the relation between the original and the new implementation of a
+ particular function.
+
+ The structure includes the name, as a string, of the original function.
+ The function address is found via kallsyms at runtime.
+
+ Then it includes the address of the new function. It is defined
+ directly by assigning the function pointer. Note that the new
+ function is typically defined in the same source file.
+
+ As an optional parameter, the symbol position in the kallsyms database can
+ be used to disambiguate functions of the same name. This is not the
+ absolute position in the database, but rather the order in which the symbol
+ has been found within a particular object (vmlinux or a kernel module).
+ Note that kallsyms allows searching for symbols by object name.
+
+ + struct klp_object defines an array of patched functions (struct
+ klp_func) in the same object. Where the object is either vmlinux
+ (NULL) or a module name.
+
+ The structure helps to group and handle functions for each object
+ together. Note that patched modules might be loaded later than
+ the patch itself and the relevant functions might be patched
+ only when they are available.
+
+
+ + struct klp_patch defines an array of patched objects (struct
+ klp_object).
+
+ This structure handles all patched functions consistently and, eventually,
+ synchronously. The whole patch is applied only when all patched
+ symbols are found. The only exceptions are symbols from objects
+ (kernel modules) that have not been loaded yet. Also, if a more complex
+ consistency model is supported, then a selected unit (thread,
+ kernel as a whole) will see the new code from the entire patch
+ only when it is in a safe state.
+
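+For illustration, a minimal sketch of how these three structures fit
+together, again modeled on samples/livepatch/livepatch-sample.c (the names
+are the sample's own; a real patch would list its actual functions and
+objects, and the new function is the one sketched in "New functions" above):
+
+    static struct klp_func funcs[] = {
+        {
+            .old_name = "cmdline_proc_show",
+            .new_func = livepatch_cmdline_proc_show,
+        }, { }
+    };
+
+    static struct klp_object objs[] = {
+        {
+            /* name being NULL means vmlinux */
+            .funcs = funcs,
+        }, { }
+    };
+
+    static struct klp_patch patch = {
+        .mod = THIS_MODULE,
+        .objs = objs,
+    };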
+
+4.3. Livepatch module handling
+------------------------------
+
+The usual behavior is that the new functions will get used when
+the livepatch module is loaded. For this, the module init() function
+has to register the patch (struct klp_patch) and enable it. See the
+section "Livepatch life-cycle" below for more details about these
+two operations.
+
+Module removal is only safe when there are no users of the underlying
+functions. The immediate consistency model is not able to detect this;
+therefore livepatch modules cannot be removed. See "Limitations" below.
+
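+A hedged sketch of this init/exit pairing, following the sample module and
+using the struct klp_patch sketched under "Metadata" above (usual module
+boilerplate such as MODULE_LICENSE() is omitted):
+
+    static int livepatch_init(void)
+    {
+        int ret;
+
+        ret = klp_register_patch(&patch);
+        if (ret)
+            return ret;
+        ret = klp_enable_patch(&patch);
+        if (ret) {
+            WARN_ON(klp_unregister_patch(&patch));
+            return ret;
+        }
+        return 0;
+    }
+
+    static void livepatch_exit(void)
+    {
+        /* Disable first, then unregister; see "Livepatch life-cycle". */
+        WARN_ON(klp_disable_patch(&patch));
+        WARN_ON(klp_unregister_patch(&patch));
+    }
+
+    module_init(livepatch_init);
+    module_exit(livepatch_exit);
+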
+5. Livepatch life-cycle
+=======================
+
+Livepatching defines four basic operations that define the life cycle of each
+live patch: registration, enabling, disabling and unregistration. There are
+several reasons why it is done this way.
+
+First, the patch is applied only when all patched symbols for already
+loaded objects are found. The error handling is much easier if this
+check is done before particular functions get redirected.
+
+Second, the immediate consistency model does not guarantee that nobody is
+sleeping in the new code after the patch is reverted. This means that the new
+code needs to stay around "forever". If the code is there, one could apply it
+again. Therefore it makes sense to separate the operations that might be done
+once and those that need to be repeated when the patch is enabled (applied)
+again.
+
+Third, it might take some time until the entire system is migrated
+when a more complex consistency model is used. The patch revert might
+block the livepatch module removal for too long. Therefore it is useful
+to revert the patch using a separate operation that might be called
+explicitly. But it does not make sense to remove all information
+until the livepatch module is really removed.
+
+
+5.1. Registration
+-----------------
+
+Each patch first has to be registered using klp_register_patch(). This makes
+the patch known to the livepatch framework. Also it performs some
+preliminary computations and checks.
+
+In particular, the patch is added into the list of known patches. The
+addresses of the patched functions are found according to their names.
+The special relocations, mentioned in the section "New functions", are
+applied. The relevant entries are created under
+/sys/kernel/livepatch/<name>. The patch is rejected when any operation
+fails.
+
+
+5.2. Enabling
+-------------
+
+Registered patches might be enabled either by calling klp_enable_patch() or
+by writing '1' to /sys/kernel/livepatch/<name>/enabled. The system will
+start using the new implementation of the patched functions at this stage.
+
+In particular, if an original function is patched for the first time, a
+function-specific struct klp_ops is created and a universal ftrace handler
+is registered.
+
+Functions might be patched multiple times. The ftrace handler is registered
+only once for the given function. Further patches just add an entry to the
+list (see field `func_stack`) of the struct klp_ops. The last added
+entry is chosen by the ftrace handler and becomes the active function
+replacement.
+
+Note that the patches might be enabled in a different order than they were
+registered.
+
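+For reference, struct klp_ops is internal to the livepatch core; at the time
+of writing it is defined in kernel/livepatch/core.c roughly as follows (an
+informal sketch, not a stable API):
+
+    struct klp_ops {
+        struct list_head node;
+
+        /* list of struct klp_func stacked on the same patched function */
+        struct list_head func_stack;
+        struct ftrace_ops fops;
+    };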
+
+5.3. Disabling
+--------------
+
+Enabled patches might get disabled either by calling klp_disable_patch() or
+by writing '0' to /sys/kernel/livepatch/<name>/enabled. At this stage
+either the code from the previously enabled patch or even the original
+code gets used.
+
+Here all the functions (struct klp_func) associated with the to-be-disabled
+patch are removed from the corresponding struct klp_ops. The ftrace handler
+is unregistered and the struct klp_ops is freed when the func_stack list
+becomes empty.
+
+Patches must be disabled in exactly the reverse order in which they were
+enabled. This makes both the problem and the implementation much simpler.
+
+
+5.4. Unregistration
+-------------------
+
+Disabled patches might be unregistered by calling klp_unregister_patch().
+This can be done only when the patch is disabled and the code is no longer
+used. It must be called before the livepatch module gets unloaded.
+
+At this stage, all the relevant sysfs entries are removed and the patch
+is removed from the list of known patches.
+
+
+6. Sysfs
+========
+
+Information about the registered patches can be found under
+/sys/kernel/livepatch. The patches can be enabled and disabled
+by writing there.
+
+See Documentation/ABI/testing/sysfs-kernel-livepatch for more details.
+
+
+7. Limitations
+==============
+
+The current Livepatch implementation has several limitations:
+
+
+ + The patch must not change the semantics of the patched functions.
+
+ The current implementation guarantees only that either the old
+ or the new function is called. The functions are patched one
+ by one. It means that the patch must _not_ change the semantics
+ of the function.
+
+
+ + Data structures can not be patched.
+
+ There is no support for versioning data structures or migrating
+ one structure into another. Also, the simple consistency model does
+ not allow switching multiple functions atomically.
+
+ Once there is a more complex consistency model, it will be possible to
+ use some workarounds. For example, it will be possible to use a hole
+ for a new member because the data structure is aligned. Or it will
+ be possible to use an existing member for something else.
+
+ There are no plans to add more generic support for modified structures
+ at the moment.
+
+
+ + Only functions that can be traced can be patched.
+
+ Livepatch is based on dynamic ftrace. In particular, functions
+ implementing ftrace or the livepatch ftrace handler cannot be
+ patched. Otherwise, the code would end up in an infinite loop. A
+ potential mistake is prevented by marking the problematic functions
+ with "notrace".
+
+
+ + Anything inlined into __schedule() can not be patched.
+
+ The switch_to macro is inlined into __schedule(). It switches the
+ context between two processes in the middle of the macro. It does
+ not save RIP in the x86_64 version (contrary to the 32-bit version). Instead,
+ the currently used __schedule()/switch_to() handles both processes.
+
+ Now, let's have two different tasks. One calls the original
+ __schedule(): its registers are stored in a defined order and it
+ goes to sleep in the switch_to macro, and some other task is restored
+ using the original __schedule(). Then there is a second task which
+ calls patched__schedule(): it goes to sleep there, and the first task
+ is picked by the patched__schedule(). Its RSP is restored and now
+ the registers should be restored as well. But the order is different
+ in the new patched__schedule(), so...
+
+ There is work in progress to remove this limitation.
+
+
+ + Livepatch modules can not be removed.
+
+ The current implementation just redirects the functions at the very
+ beginning. It does not check if the functions are in use. In other
+ words, it knows when the functions get called but it does not
+ know when the functions return. Therefore it can not decide when
+ the livepatch module can be safely removed.
+
+ This will most likely get solved once a more complex consistency model
+ is supported. The idea is that a safe state for patching should also
+ mean a safe state for removing the patch.
+
+ Note that the patch itself might get disabled by writing zero
+ to /sys/kernel/livepatch/<patch>/enabled. This causes the new
+ code to no longer get called. But it does not guarantee
+ that nobody is still sleeping anywhere in the new code.
+
+
+ + Livepatch works reliably only when the dynamic ftrace is located at
+ the very beginning of the function.
+
+ The function needs to be redirected before the stack or the function
+ parameters are modified in any way. For example, livepatch requires
+ using the -mfentry gcc compiler option on x86_64.
+
+ One exception is the PPC port. It uses relative addressing and TOC.
+ Each function has to handle TOC and save LR before it can call
+ the ftrace handler. This operation has to be reverted on return.
+ Fortunately, the generic ftrace code has the same problem and all
+ of this is handled at the ftrace level.
+
+
+ + Kretprobes using the ftrace framework conflict with the patched
+ functions.
+
+ Both kretprobes and livepatches use a ftrace handler that modifies
+ the return address. The first user wins. Either the probe or the patch
+ is rejected when the handler is already in use by the other.
+
+
+ + Kprobes in the original function are ignored when the code is
+ redirected to the new implementation.
+
+ There is work in progress to add warnings about this situation.
diff --git a/Documentation/livepatch/module-elf-format.txt b/Documentation/livepatch/module-elf-format.txt
new file mode 100644
index 000000000000..eedbdcf8ba50
--- /dev/null
+++ b/Documentation/livepatch/module-elf-format.txt
@@ -0,0 +1,311 @@
+===========================
+Livepatch module Elf format
+===========================
+
+This document outlines the Elf format requirements that livepatch modules must follow.
+
+-----------------
+Table of Contents
+-----------------
+0. Background and motivation
+1. Livepatch modinfo field
+2. Livepatch relocation sections
+ 2.1 What are livepatch relocation sections?
+ 2.2 Livepatch relocation section format
+ 2.2.1 Required flags
+ 2.2.2 Required name format
+ 2.2.3 Example livepatch relocation section names
+ 2.2.4 Example `readelf --sections` output
+ 2.2.5 Example `readelf --relocs` output
+3. Livepatch symbols
+ 3.1 What are livepatch symbols?
+ 3.2 A livepatch module's symbol table
+ 3.3 Livepatch symbol format
+ 3.3.1 Required flags
+ 3.3.2 Required name format
+ 3.3.3 Example livepatch symbol names
+ 3.3.4 Example `readelf --symbols` output
+4. Symbol table and Elf section access
+
+----------------------------
+0. Background and motivation
+----------------------------
+
+Formerly, livepatch required separate architecture-specific code to write
+relocations. However, arch-specific code to write relocations already
+exists in the module loader, so this former approach produced redundant
+code. So, instead of duplicating code and re-implementing what the module
+loader can already do, livepatch leverages existing code in the module
+loader to perform all the arch-specific relocation work. Specifically,
+livepatch reuses the apply_relocate_add() function in the module loader to
+write relocations. The patch module Elf format described in this document
+enables livepatch to do this. The hope is that this will make
+livepatch more easily portable to other architectures and reduce the amount
+of arch-specific code required to port livepatch to a particular
+architecture.
+
+Since apply_relocate_add() requires access to a module's section header
+table, symbol table, and relocation section indices, Elf information is
+preserved for livepatch modules (see section 4). Livepatch manages its own
+relocation sections and symbols, which are described in this document. The
+Elf constants used to mark livepatch symbols and relocation sections were
+selected from OS-specific ranges according to the definitions from glibc.
+
+0.1 Why does livepatch need to write its own relocations?
+---------------------------------------------------------
+A typical livepatch module contains patched versions of functions that can
+reference non-exported global symbols and non-included local symbols.
+Relocations referencing these types of symbols cannot be left as-is
+since the kernel module loader cannot resolve them and will therefore
+reject the livepatch module. Furthermore, we cannot apply relocations that
+affect modules not yet loaded at patch module load time (e.g. a patch to a
+driver that is not loaded). Formerly, livepatch solved this problem by
+embedding special "dynrela" (dynamic rela) sections in the resulting patch
+module Elf output. Using these dynrela sections, livepatch could resolve
+symbols while taking into account each symbol's scope and which module it
+belongs to, and then manually apply the dynamic relocations. However, this
+approach required livepatch to supply arch-specific code in order to write
+these relocations. In the new format, livepatch manages its own SHT_RELA
+relocation sections in place of dynrela sections, and the symbols that the
+relas reference are special livepatch symbols (see section 2 and 3). The
+arch-specific livepatch relocation code is replaced by a call to
+apply_relocate_add().
+
+================================
+PATCH MODULE FORMAT REQUIREMENTS
+================================
+
+--------------------------
+1. Livepatch modinfo field
+--------------------------
+
+Livepatch modules are required to have the "livepatch" modinfo attribute.
+See the sample livepatch module in samples/livepatch/ for how this is done.
+
+Livepatch modules can be identified by users by using the 'modinfo' command
+and looking for the presence of the "livepatch" field. This field is also
+used by the kernel module loader to identify livepatch modules.
+
+Example modinfo output:
+-----------------------
+% modinfo livepatch-meminfo.ko
+filename: livepatch-meminfo.ko
+livepatch: Y
+license: GPL
+depends:
+vermagic: 4.3.0+ SMP mod_unload
+
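+For reference, the modinfo field typically comes from a one-line macro in the
+patch module source (this mirrors the sample module; the module loader looks
+for this field when deciding whether a module is a livepatch module):
+
+    MODULE_INFO(livepatch, "Y");
+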
+--------------------------------
+2. Livepatch relocation sections
+--------------------------------
+
+-------------------------------------------
+2.1 What are livepatch relocation sections?
+-------------------------------------------
+A livepatch module manages its own Elf relocation sections to apply
+relocations to modules as well as to the kernel (vmlinux) at the
+appropriate time. For example, if a patch module patches a driver that is
+not currently loaded, livepatch will apply the corresponding livepatch
+relocation section(s) to the driver once it loads.
+
+Each "object" (e.g. vmlinux, or a module) within a patch module may have
+multiple livepatch relocation sections associated with it (e.g. patches to
+multiple functions within the same object). There is a 1-1 correspondence
+between a livepatch relocation section and the target section (usually the
+text section of a function) to which the relocation(s) apply. It is
+also possible for a livepatch module to have no livepatch relocation
+sections, as in the case of the sample livepatch module (see
+samples/livepatch).
+
+Since Elf information is preserved for livepatch modules (see Section 4), a
+livepatch relocation section can be applied simply by passing in the
+appropriate section index to apply_relocate_add(), which then uses it to
+access the relocation section and apply the relocations.
+
+Every symbol referenced by a rela in a livepatch relocation section is a
+livepatch symbol. These must be resolved before livepatch can call
+apply_relocate_add(). See Section 3 for more information.
+
+---------------------------------------
+2.2 Livepatch relocation section format
+---------------------------------------
+
+2.2.1 Required flags
+--------------------
+Livepatch relocation sections must be marked with the SHF_RELA_LIVEPATCH
+section flag. See include/uapi/linux/elf.h for the definition. The module
+loader recognizes this flag and will avoid applying those relocation sections
+at patch module load time. These sections must also be marked with SHF_ALLOC,
+so that the module loader doesn't discard them on module load (i.e. they will
+be copied into memory along with the other SHF_ALLOC sections).
+
+2.2.2 Required name format
+--------------------------
+The name of a livepatch relocation section must conform to the following format:
+
+.klp.rela.objname.section_name
+^        ^^     ^ ^          ^
+|________||_____| |__________|
+   [A]      [B]        [C]
+
+[A] The relocation section name is prefixed with the string ".klp.rela."
+[B] The name of the object (i.e. "vmlinux" or name of module) to
+ which the relocation section belongs follows immediately after the prefix.
+[C] The actual name of the section to which this relocation section applies.
+
+2.2.3 Example livepatch relocation section names:
+-------------------------------------------------
+.klp.rela.ext4.text.ext4_attr_store
+.klp.rela.vmlinux.text.cmdline_proc_show
+
+2.2.4 Example `readelf --sections` output for a patch
+module that patches vmlinux and modules 9p, btrfs, ext4:
+--------------------------------------------------------
+ Section Headers:
+ [Nr] Name Type Address Off Size ES Flg Lk Inf Al
+ [ snip ]
+ [29] .klp.rela.9p.text.caches.show RELA 0000000000000000 002d58 0000c0 18 AIo 64 9 8
+ [30] .klp.rela.btrfs.text.btrfs.feature.attr.show RELA 0000000000000000 002e18 000060 18 AIo 64 11 8
+ [ snip ]
+ [34] .klp.rela.ext4.text.ext4.attr.store RELA 0000000000000000 002fd8 0000d8 18 AIo 64 13 8
+ [35] .klp.rela.ext4.text.ext4.attr.show RELA 0000000000000000 0030b0 000150 18 AIo 64 15 8
+ [36] .klp.rela.vmlinux.text.cmdline.proc.show RELA 0000000000000000 003200 000018 18 AIo 64 17 8
+ [37] .klp.rela.vmlinux.text.meminfo.proc.show RELA 0000000000000000 003218 0000f0 18 AIo 64 19 8
+ [ snip ] ^ ^
+ | |
+ [*] [*]
+[*] Livepatch relocation sections are SHT_RELA sections but with a few special
+characteristics. Notice that they are marked SHF_ALLOC ("A") so that they will
+not be discarded when the module is loaded into memory, as well as with the
+SHF_RELA_LIVEPATCH flag ("o" - for OS-specific).
+
+2.2.5 Example `readelf --relocs` output for a patch module:
+-----------------------------------------------------------
+Relocation section '.klp.rela.btrfs.text.btrfs_feature_attr_show' at offset 0x2ba0 contains 4 entries:
+ Offset Info Type Symbol's Value Symbol's Name + Addend
+000000000000001f 0000005e00000002 R_X86_64_PC32 0000000000000000 .klp.sym.vmlinux.printk,0 - 4
+0000000000000028 0000003d0000000b R_X86_64_32S 0000000000000000 .klp.sym.btrfs.btrfs_ktype,0 + 0
+0000000000000036 0000003b00000002 R_X86_64_PC32 0000000000000000 .klp.sym.btrfs.can_modify_feature.isra.3,0 - 4
+000000000000004c 0000004900000002 R_X86_64_PC32 0000000000000000 .klp.sym.vmlinux.snprintf,0 - 4
+[ snip ] ^
+ |
+ [*]
+[*] Every symbol referenced by a relocation is a livepatch symbol.
+
+--------------------
+3. Livepatch symbols
+--------------------
+
+-------------------------------
+3.1 What are livepatch symbols?
+-------------------------------
+Livepatch symbols are symbols referred to by livepatch relocation sections.
+These are symbols accessed from new versions of functions for patched
+objects, whose addresses cannot be resolved by the module loader (because
+they are local or unexported global syms). Since the module loader only
+resolves exported syms, and not every symbol referenced by the new patched
+functions is exported, livepatch symbols were introduced. They are also used
+in cases where we cannot immediately know the address of a symbol when
+a patch module loads. For example, this is the case when livepatch patches
+a module that is not loaded yet. In this case, the relevant livepatch
+symbols are resolved simply when the target module loads. In any case, for
+any livepatch relocation section, all livepatch symbols referenced by that
+section must be resolved before livepatch can call apply_relocate_add() for
+that reloc section.
+
+Livepatch symbols must be marked with SHN_LIVEPATCH so that the module
+loader can identify and ignore them. Livepatch modules keep these symbols
+in their symbol tables, and the symbol table is made accessible through
+module->symtab.
+
+-------------------------------------
+3.2 A livepatch module's symbol table
+-------------------------------------
+Normally, a stripped down copy of a module's symbol table (containing only
+"core" symbols) is made available through module->symtab (See layout_symtab()
+in kernel/module.c). For livepatch modules, the symbol table copied into memory
+on module load must be exactly the same as the symbol table produced when the
+patch module was compiled. This is because the relocations in each livepatch
+relocation section refer to their respective symbols with their symbol indices,
+and the original symbol indices (and thus the symtab ordering) must be
+preserved in order for apply_relocate_add() to find the right symbol.
+
+For example, take this particular rela from a livepatch module:
+Relocation section '.klp.rela.btrfs.text.btrfs_feature_attr_show' at offset 0x2ba0 contains 4 entries:
+ Offset Info Type Symbol's Value Symbol's Name + Addend
+000000000000001f 0000005e00000002 R_X86_64_PC32 0000000000000000 .klp.sym.vmlinux.printk,0 - 4
+
+This rela refers to the symbol '.klp.sym.vmlinux.printk,0', and the symbol index is encoded
+in 'Info'. Here its symbol index is 0x5e, which is 94 in decimal, i.e. it
+refers to entry 94 in the symbol table.
+And in this patch module's corresponding symbol table, symbol index 94 refers to that very symbol:
+[ snip ]
+94: 0000000000000000 0 NOTYPE GLOBAL DEFAULT OS [0xff20] .klp.sym.vmlinux.printk,0
+[ snip ]
+
+---------------------------
+3.3 Livepatch symbol format
+---------------------------
+
+3.3.1 Required flags
+--------------------
+Livepatch symbols must have their section index marked as SHN_LIVEPATCH, so
+that the module loader can identify them and not attempt to resolve them.
+See include/uapi/linux/elf.h for the actual definitions.
+
+3.3.2 Required name format
+--------------------------
+Livepatch symbol names must conform to the following format:
+
+.klp.sym.objname.symbol_name,sympos
+^       ^^     ^ ^         ^ ^
+|_______||_____| |_________| |
+   [A]     [B]       [C]    [D]
+
+[A] The symbol name is prefixed with the string ".klp.sym."
+[B] The name of the object (i.e. "vmlinux" or name of module) to
+ which the symbol belongs follows immediately after the prefix.
+[C] The actual name of the symbol.
+[D] The position of the symbol in the object (according to kallsyms).
+ This is used to differentiate duplicate symbols within the same
+ object. The symbol position is expressed numerically (0, 1, 2...).
+ The symbol position of a unique symbol is 0.
+
+3.3.3 Example livepatch symbol names:
+-------------------------------------
+.klp.sym.vmlinux.snprintf,0
+.klp.sym.vmlinux.printk,0
+.klp.sym.btrfs.btrfs_ktype,0
+
+3.3.4 Example `readelf --symbols` output for a patch module:
+------------------------------------------------------------
+Symbol table '.symtab' contains 127 entries:
+ Num: Value Size Type Bind Vis Ndx Name
+ [ snip ]
+ 73: 0000000000000000 0 NOTYPE GLOBAL DEFAULT OS [0xff20] .klp.sym.vmlinux.snprintf,0
+ 74: 0000000000000000 0 NOTYPE GLOBAL DEFAULT OS [0xff20] .klp.sym.vmlinux.capable,0
+ 75: 0000000000000000 0 NOTYPE GLOBAL DEFAULT OS [0xff20] .klp.sym.vmlinux.find_next_bit,0
+ 76: 0000000000000000 0 NOTYPE GLOBAL DEFAULT OS [0xff20] .klp.sym.vmlinux.si_swapinfo,0
+ [ snip ] ^
+ |
+ [*]
+[*] Note that the 'Ndx' (Section index) for these symbols is SHN_LIVEPATCH (0xff20).
+ "OS" means OS-specific.
+
+--------------------------------------
+4. Symbol table and Elf section access
+--------------------------------------
+A livepatch module's symbol table is accessible through module->symtab.
+
+Since apply_relocate_add() requires access to a module's section headers,
+symbol table, and relocation section indices, Elf information is preserved for
+livepatch modules and is made accessible by the module loader through
+module->klp_info, which is a klp_modinfo struct. When a livepatch module loads,
+this struct is filled in by the module loader. Its fields are documented below:
+
+struct klp_modinfo {
+ Elf_Ehdr hdr; /* Elf header */
+ Elf_Shdr *sechdrs; /* Section header table */
+ char *secstrings; /* String table for the section headers */
+ unsigned int symndx; /* The symbol table section index */
+};
diff --git a/Documentation/locking/lockdep-design.txt b/Documentation/locking/lockdep-design.txt
index 5001280e9d82..9de1c158d44c 100644
--- a/Documentation/locking/lockdep-design.txt
+++ b/Documentation/locking/lockdep-design.txt
@@ -97,7 +97,7 @@ between any two lock-classes:
<hardirq-safe> -> <hardirq-unsafe>
<softirq-safe> -> <softirq-unsafe>
-The first rule comes from the fact the a hardirq-safe lock could be
+The first rule comes from the fact that a hardirq-safe lock could be
taken by a hardirq context, interrupting a hardirq-unsafe lock - and
thus could result in a lock inversion deadlock. Likewise, a softirq-safe
lock could be taken by an softirq context, interrupting a softirq-unsafe
@@ -220,7 +220,7 @@ calculated, which hash is unique for every lock chain. The hash value,
when the chain is validated for the first time, is then put into a hash
table, which hash-table can be checked in a lockfree manner. If the
locking chain occurs again later on, the hash table tells us that we
-dont have to validate the chain again.
+don't have to validate the chain again.
Troubleshooting:
----------------
diff --git a/Documentation/lzo.txt b/Documentation/lzo.txt
index ea45dd3901e3..285c54f66779 100644
--- a/Documentation/lzo.txt
+++ b/Documentation/lzo.txt
@@ -69,9 +69,9 @@ Description
IMPORTANT NOTE : in the code some length checks are missing because certain
instructions are called under the assumption that a certain number of bytes
- follow because it has already been garanteed before parsing the instructions.
+ follow because it has already been guaranteed before parsing the instructions.
They just have to "refill" this credit if they consume extra bytes. This is
- an implementation design choice independant on the algorithm or encoding.
+ an implementation design choice independent on the algorithm or encoding.
Byte sequences
diff --git a/Documentation/md-cluster.txt b/Documentation/md-cluster.txt
index c100c7163507..38883276d31c 100644
--- a/Documentation/md-cluster.txt
+++ b/Documentation/md-cluster.txt
@@ -316,3 +316,9 @@ The algorithm is:
nodes are using the raid which is achieved by lock all bitmap
locks within the cluster, and also those locks are unlocked
accordingly.
+
+7. Unsupported features
+
+There are some things which are not supported by cluster MD yet.
+
+- updating size and changing array_sectors.
diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index 3729cbe60e41..147ae8ec836f 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -4,8 +4,40 @@
By: David Howells <dhowells@redhat.com>
Paul E. McKenney <paulmck@linux.vnet.ibm.com>
+ Will Deacon <will.deacon@arm.com>
+ Peter Zijlstra <peterz@infradead.org>
-Contents:
+==========
+DISCLAIMER
+==========
+
+This document is not a specification; it is intentionally (for the sake of
+brevity) and unintentionally (due to being human) incomplete. This document is
+meant as a guide to using the various memory barriers provided by Linux, but
+in case of any doubt (and there are many) please ask.
+
+To repeat, this document is not a specification of what Linux expects from
+hardware.
+
+The purpose of this document is twofold:
+
+ (1) to specify the minimum functionality that one can rely on for any
+ particular barrier, and
+
+ (2) to provide a guide as to how to use the barriers that are available.
+
+Note that an architecture can provide more than the minimum requirement
+for any particular barrier, but if the architecture provides less than
+that, that architecture is incorrect.
+
+Note also that it is possible that a barrier may be a no-op for an
+architecture because the way that arch works renders an explicit barrier
+unnecessary in that case.
+
+
+========
+CONTENTS
+========
(*) Abstract memory access model.
@@ -31,15 +63,15 @@ Contents:
(*) Implicit kernel memory barriers.
- - Locking functions.
+ - Lock acquisition functions.
- Interrupt disabling functions.
- Sleep and wake-up functions.
- Miscellaneous functions.
- (*) Inter-CPU locking barrier effects.
+ (*) Inter-CPU acquiring barrier effects.
- - Locks vs memory accesses.
- - Locks vs I/O accesses.
+ - Acquires vs memory accesses.
+ - Acquires vs I/O accesses.
(*) Where are memory barriers needed?
@@ -61,6 +93,7 @@ Contents:
(*) The things CPUs get up to.
- And then there's the Alpha.
+ - Virtual Machine Guests.
(*) Example uses.
@@ -148,7 +181,7 @@ As a further example, consider this sequence of events:
CPU 1 CPU 2
=============== ===============
- { A == 1, B == 2, C = 3, P == &A, Q == &C }
+ { A == 1, B == 2, C == 3, P == &A, Q == &C }
B = 4; Q = P;
P = &B D = *Q;
@@ -430,8 +463,9 @@ And a couple of implicit varieties:
This acts as a one-way permeable barrier. It guarantees that all memory
operations after the ACQUIRE operation will appear to happen after the
ACQUIRE operation with respect to the other components of the system.
- ACQUIRE operations include LOCK operations and smp_load_acquire()
- operations.
+ ACQUIRE operations include LOCK operations and both smp_load_acquire()
+ and smp_cond_acquire() operations. The latter builds the necessary ACQUIRE
+ semantics from relying on a control dependency and smp_rmb().
Memory operations that occur before an ACQUIRE operation may appear to
happen after it completes.
@@ -464,6 +498,11 @@ And a couple of implicit varieties:
This means that ACQUIRE acts as a minimal "acquire" operation and
RELEASE acts as a minimal "release" operation.
+A subset of the atomic operations described in atomic_ops.txt have ACQUIRE
+and RELEASE variants in addition to fully-ordered and relaxed (no barrier
+semantics) definitions. For compound atomics performing both a load and a
+store, ACQUIRE semantics apply only to the load and RELEASE semantics apply
+only to the store portion of the operation.
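+As a hedged illustration (not taken from atomic_ops.txt itself), given an
+atomic_t v, the _acquire/_release variants might be used as follows:
+
+	/* ACQUIRE applies only to the load portion of this cmpxchg */
+	old = atomic_cmpxchg_acquire(&v, 0, 1);
+
+	/* RELEASE applies only to the store portion of this cmpxchg */
+	atomic_cmpxchg_release(&v, 1, 0);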
Memory barriers are only required where there's a possibility of interaction
between two CPUs or between a CPU and a device. If it can be guaranteed that
@@ -517,7 +556,7 @@ following sequence of events:
CPU 1 CPU 2
=============== ===============
- { A == 1, B == 2, C = 3, P == &A, Q == &C }
+ { A == 1, B == 2, C == 3, P == &A, Q == &C }
B = 4;
<write barrier>
WRITE_ONCE(P, &B)
@@ -544,7 +583,7 @@ between the address load and the data load:
CPU 1 CPU 2
=============== ===============
- { A == 1, B == 2, C = 3, P == &A, Q == &C }
+ { A == 1, B == 2, C == 3, P == &A, Q == &C }
B = 4;
<write barrier>
WRITE_ONCE(P, &B);
@@ -813,9 +852,10 @@ In summary:
the same variable, then those stores must be ordered, either by
preceding both of them with smp_mb() or by using smp_store_release()
to carry out the stores. Please note that it is -not- sufficient
- to use barrier() at beginning of each leg of the "if" statement,
- as optimizing compilers do not necessarily respect barrier()
- in this case.
+ to use barrier() at beginning of each leg of the "if" statement
+ because, as shown by the example above, optimizing compilers can
+ destroy the control dependency while respecting the letter of the
+ barrier() law.
(*) Control dependencies require at least one run-time conditional
between the prior load and the subsequent store, and this
@@ -1731,15 +1771,15 @@ The Linux kernel has eight basic CPU memory barriers:
All memory barriers except the data dependency barriers imply a compiler
-barrier. Data dependencies do not impose any additional compiler ordering.
+barrier. Data dependencies do not impose any additional compiler ordering.
Aside: In the case of data dependencies, the compiler would be expected
to issue the loads in the correct order (eg. `a[b]` would have to load
the value of b before loading a[b]), however there is no guarantee in
the C specification that the compiler may not speculate the value of b
(eg. is equal to 1) and load a before b (eg. tmp = a[1]; if (b != 1)
-tmp = a[b]; ). There is also the problem of a compiler reloading b after
-having loaded a[b], thus having a newer copy of b than a[b]. A consensus
+tmp = a[b]; ). There is also the problem of a compiler reloading b after
+having loaded a[b], thus having a newer copy of b than a[b]. A consensus
has not yet been reached about these problems, however the READ_ONCE()
macro is a good place to start looking.
@@ -1794,6 +1834,7 @@ There are some more advanced barrier functions:
(*) lockless_dereference();
+
This can be thought of as a pointer-fetch wrapper around the
smp_read_barrier_depends() data-dependency barrier.
@@ -1858,7 +1899,7 @@ This is a variation on the mandatory write barrier that causes writes to weakly
ordered I/O regions to be partially ordered. Its effects may go beyond the
CPU->Hardware interface and actually affect the hardware at some level.
-See the subsection "Locks vs I/O accesses" for more information.
+See the subsection "Acquires vs I/O accesses" for more information.
===============================
@@ -1873,8 +1914,8 @@ provide more substantial guarantees, but these may not be relied upon outside
of arch specific code.
-ACQUIRING FUNCTIONS
--------------------
+LOCK ACQUISITION FUNCTIONS
+--------------------------
The Linux kernel has a number of locking constructs:
@@ -1895,7 +1936,7 @@ for each construct. These operations all imply certain barriers:
Memory operations issued before the ACQUIRE may be completed after
the ACQUIRE operation has completed. An smp_mb__before_spinlock(),
combined with a following ACQUIRE, orders prior stores against
- subsequent loads and stores. Note that this is weaker than smp_mb()!
+ subsequent loads and stores. Note that this is weaker than smp_mb()!
The smp_mb__before_spinlock() primitive is free on many architectures.
(2) RELEASE operation implication:
@@ -2090,9 +2131,9 @@ or:
event_indicated = 1;
wake_up_process(event_daemon);
-A write memory barrier is implied by wake_up() and co. if and only if they wake
-something up. The barrier occurs before the task state is cleared, and so sits
-between the STORE to indicate the event and the STORE to set TASK_RUNNING:
+A write memory barrier is implied by wake_up() and co. if and only if they
+wake something up. The barrier occurs before the task state is cleared, and so
+sits between the STORE to indicate the event and the STORE to set TASK_RUNNING:
CPU 1 CPU 2
=============================== ===============================
@@ -2206,7 +2247,7 @@ three CPUs; then should the following sequence of events occur:
Then there is no guarantee as to what order CPU 3 will see the accesses to *A
through *H occur in, other than the constraints imposed by the separate locks
-on the separate CPUs. It might, for example, see:
+on the separate CPUs. It might, for example, see:
*E, ACQUIRE M, ACQUIRE Q, *G, *C, *F, *A, *B, RELEASE Q, *D, *H, RELEASE M
@@ -2486,9 +2527,9 @@ The following operations are special locking primitives:
clear_bit_unlock();
__clear_bit_unlock();
-These implement ACQUIRE-class and RELEASE-class operations. These should be used in
-preference to other operations when implementing locking primitives, because
-their implementations can be optimised on many architectures.
+These implement ACQUIRE-class and RELEASE-class operations. These should be
+used in preference to other operations when implementing locking primitives,
+because their implementations can be optimised on many architectures.
[!] Note that special memory barrier primitives are available for these
situations because on some CPUs the atomic instructions used imply full memory
@@ -2568,12 +2609,12 @@ explicit barriers are used.
Normally this won't be a problem because the I/O accesses done inside such
sections will include synchronous load operations on strictly ordered I/O
-registers that form implicit I/O barriers. If this isn't sufficient then an
+registers that form implicit I/O barriers. If this isn't sufficient then an
mmiowb() may need to be used explicitly.
A similar situation may occur between an interrupt routine and two routines
-running on separate CPUs that communicate with each other. If such a case is
+running on separate CPUs that communicate with each other. If such a case is
likely, then interrupt-disabling locks should be used to guarantee ordering.
@@ -2587,8 +2628,8 @@ functions:
(*) inX(), outX():
These are intended to talk to I/O space rather than memory space, but
- that's primarily a CPU-specific concept. The i386 and x86_64 processors do
- indeed have special I/O space access cycles and instructions, but many
+ that's primarily a CPU-specific concept. The i386 and x86_64 processors
+ do indeed have special I/O space access cycles and instructions, but many
CPUs don't have such a concept.
The PCI bus, amongst others, defines an I/O space concept which - on such
@@ -2610,7 +2651,7 @@ functions:
Whether these are guaranteed to be fully ordered and uncombined with
respect to each other on the issuing CPU depends on the characteristics
- defined for the memory window through which they're accessing. On later
+ defined for the memory window through which they're accessing. On later
i386 architecture machines, for example, this is controlled by way of the
MTRR registers.
@@ -2635,10 +2676,10 @@ functions:
(*) readX_relaxed(), writeX_relaxed()
These are similar to readX() and writeX(), but provide weaker memory
- ordering guarantees. Specifically, they do not guarantee ordering with
+ ordering guarantees. Specifically, they do not guarantee ordering with
respect to normal memory accesses (e.g. DMA buffers) nor do they guarantee
- ordering with respect to LOCK or UNLOCK operations. If the latter is
- required, an mmiowb() barrier can be used. Note that relaxed accesses to
+ ordering with respect to LOCK or UNLOCK operations. If the latter is
+ required, an mmiowb() barrier can be used. Note that relaxed accesses to
the same peripheral are guaranteed to be ordered with respect to each
other.
@@ -3040,8 +3081,9 @@ The Alpha defines the Linux kernel's memory barrier model.
See the subsection on "Cache Coherency" above.
+
VIRTUAL MACHINE GUESTS
--------------------
+----------------------
Guests running within virtual machines might be affected by SMP effects even if
the guest itself is compiled without SMP support. This is an artifact of
@@ -3050,7 +3092,7 @@ barriers for this use-case would be possible but is often suboptimal.
To handle this case optimally, low-level virt_mb() etc macros are available.
These have the same effect as smp_mb() etc when SMP is enabled, but generate
-identical code for SMP and non-SMP systems. For example, virtual machine guests
+identical code for SMP and non-SMP systems. For example, virtual machine guests
should use virt_mb() rather than smp_mb() when synchronizing against a
(possibly SMP) host.
@@ -3058,6 +3100,7 @@ These are equivalent to smp_mb() etc counterparts in all other respects,
in particular, they do not control MMIO effects: to control
MMIO effects, use mandatory barriers.
+
============
EXAMPLE USES
============
diff --git a/Documentation/memory-hotplug.txt b/Documentation/memory-hotplug.txt
index 443f4b44ad97..0d7cb955aa01 100644
--- a/Documentation/memory-hotplug.txt
+++ b/Documentation/memory-hotplug.txt
@@ -261,10 +261,11 @@ it according to the policy which can be read from "auto_online_blocks" file:
% cat /sys/devices/system/memory/auto_online_blocks
-The default is "offline" which means the newly added memory is not in a
-ready-to-use state and you have to "online" the newly added memory blocks
-manually. Automatic onlining can be requested by writing "online" to
-"auto_online_blocks" file:
+The default depends on the CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE kernel config
+option. If it is disabled the default is "offline" which means the newly added
+memory is not in a ready-to-use state and you have to "online" the newly added
+memory blocks manually. Automatic onlining can be requested by writing "online"
+to "auto_online_blocks" file:
% echo online > /sys/devices/system/memory/auto_online_blocks
diff --git a/Documentation/mmc/00-INDEX b/Documentation/mmc/00-INDEX
index a9ba6720ffdf..4623bc0aa0bb 100644
--- a/Documentation/mmc/00-INDEX
+++ b/Documentation/mmc/00-INDEX
@@ -6,3 +6,5 @@ mmc-dev-parts.txt
- info on SD and MMC device partitions
mmc-async-req.txt
- info on mmc asynchronous requests
+mmc-tools.txt
+ - info on mmc-utils tools
diff --git a/Documentation/mmc/mmc-tools.txt b/Documentation/mmc/mmc-tools.txt
new file mode 100644
index 000000000000..735509c165d5
--- /dev/null
+++ b/Documentation/mmc/mmc-tools.txt
@@ -0,0 +1,34 @@
+MMC tools introduction
+======================
+
+There is one MMC test tool called mmc-utils, which is maintained by Chris Ball.
+You can find it at the following public git repository:
+http://git.kernel.org/cgit/linux/kernel/git/cjb/mmc-utils.git/
+
+Functions
+=========
+
+The mmc-utils tools can do the following:
+ - Print and parse extcsd data.
+ - Determine the eMMC writeprotect status.
+ - Set the eMMC writeprotect status.
+ - Set the eMMC data sector size to 4KB by disabling emulation.
+ - Create general purpose partition.
+ - Enable the enhanced user area.
+ - Enable write reliability per partition.
+ - Print the response to STATUS_SEND (CMD13).
+ - Enable the boot partition.
+ - Set Boot Bus Conditions.
+ - Enable the eMMC BKOPS feature.
+ - Permanently enable the eMMC H/W Reset feature.
+ - Permanently disable the eMMC H/W Reset feature.
+ - Send Sanitize command.
+ - Program authentication key for the device.
+ - Read the counter value of the rpmb device to stdout.
+ - Read from rpmb device to output.
+ - Write to rpmb device from data file.
+ - Enable the eMMC cache feature.
+ - Disable the eMMC cache feature.
+ - Print and parse CID data.
+ - Print and parse CSD data.
+ - Print and parse SCR data.
diff --git a/Documentation/networking/bonding.txt b/Documentation/networking/bonding.txt
index 334b49ef02d1..57f52cdce32e 100644
--- a/Documentation/networking/bonding.txt
+++ b/Documentation/networking/bonding.txt
@@ -1880,8 +1880,8 @@ or more peers on the local network.
The ARP monitor relies on the device driver itself to verify
that traffic is flowing. In particular, the driver must keep up to
-date the last receive time, dev->last_rx, and transmit start time,
-dev->trans_start. If these are not updated by the driver, then the
+date the last receive time, dev->last_rx. Drivers that use the NETIF_F_LLTX
+flag must also update netdev_queue->trans_start. If they do not, then the
ARP monitor will immediately fail any slaves using that driver, and
those slaves will stay down. If networking monitoring (tcpdump, etc)
shows the ARP requests and replies on the network, then it may be that
diff --git a/Documentation/networking/can.txt b/Documentation/networking/can.txt
index 6ab619fcc517..d58ff8467953 100644
--- a/Documentation/networking/can.txt
+++ b/Documentation/networking/can.txt
@@ -1256,7 +1256,7 @@ solution for a couple of reasons:
7. SocketCAN resources
-----------------------
- The Linux CAN / SocketCAN project ressources (project site / mailing list)
+ The Linux CAN / SocketCAN project resources (project site / mailing list)
are referenced in the MAINTAINERS file in the Linux source tree.
Search for CAN NETWORK [LAYERS|DRIVERS].
diff --git a/Documentation/networking/checksum-offloads.txt b/Documentation/networking/checksum-offloads.txt
index de2a327766a7..56e36861245f 100644
--- a/Documentation/networking/checksum-offloads.txt
+++ b/Documentation/networking/checksum-offloads.txt
@@ -69,18 +69,18 @@ LCO: Local Checksum Offload
LCO is a technique for efficiently computing the outer checksum of an
encapsulated datagram when the inner checksum is due to be offloaded.
The ones-complement sum of a correctly checksummed TCP or UDP packet is
- equal to the sum of the pseudo header, because everything else gets
- 'cancelled out' by the checksum field. This is because the sum was
+ equal to the complement of the sum of the pseudo header, because everything
+ else gets 'cancelled out' by the checksum field. This is because the sum was
complemented before being written to the checksum field.
More generally, this holds in any case where the 'IP-style' ones complement
checksum is used, and thus any checksum that TX Checksum Offload supports.
That is, if we have set up TX Checksum Offload with a start/offset pair, we
- know that _after the device has filled in that checksum_, the ones
+ know that after the device has filled in that checksum, the ones
complement sum from csum_start to the end of the packet will be equal to
- _whatever value we put in the checksum field beforehand_. This allows us
- to compute the outer checksum without looking at the payload: we simply
- stop summing when we get to csum_start, then add the 16-bit word at
- (csum_start + csum_offset).
+ the complement of whatever value we put in the checksum field beforehand.
+ This allows us to compute the outer checksum without looking at the payload:
+ we simply stop summing when we get to csum_start, then add the complement of
+ the 16-bit word at (csum_start + csum_offset).
Then, when the true inner checksum is filled in (either by hardware or by
skb_checksum_help()), the outer checksum will become correct by virtue of
the arithmetic.
diff --git a/Documentation/networking/dsa/bcm_sf2.txt b/Documentation/networking/dsa/bcm_sf2.txt
index d999d0c1c5b8..eba3a2431e91 100644
--- a/Documentation/networking/dsa/bcm_sf2.txt
+++ b/Documentation/networking/dsa/bcm_sf2.txt
@@ -38,7 +38,7 @@ Implementation details
======================
The driver is located in drivers/net/dsa/bcm_sf2.c and is implemented as a DSA
-driver; see Documentation/networking/dsa/dsa.txt for details on the subsytem
+driver; see Documentation/networking/dsa/dsa.txt for details on the subsystem
and what it provides.
The SF2 switch is configured to enable a Broadcom specific 4-bytes switch tag
diff --git a/Documentation/networking/dsa/dsa.txt b/Documentation/networking/dsa/dsa.txt
index 3b196c304b73..631b0f7ae16f 100644
--- a/Documentation/networking/dsa/dsa.txt
+++ b/Documentation/networking/dsa/dsa.txt
@@ -334,7 +334,7 @@ more specifically with its VLAN filtering portion when configuring VLANs on top
of per-port slave network devices. Since DSA primarily deals with
MDIO-connected switches, although not exclusively, SWITCHDEV's
prepare/abort/commit phases are often simplified into a prepare phase which
-checks whether the operation is supporte by the DSA switch driver, and a commit
+checks whether the operation is supported by the DSA switch driver, and a commit
phase which applies the changes.
As of today, the only SWITCHDEV objects supported by DSA are the FDB and VLAN
@@ -533,7 +533,7 @@ Bridge layer
out at the switch hardware for the switch to (re) learn MAC addresses behind
this port.
-- port_stp_update: bridge layer function invoked when a given switch port STP
+- port_stp_state_set: bridge layer function invoked when a given switch port STP
state is computed by the bridge layer and should be propagated to switch
hardware to forward/block/learn traffic. The switch driver is responsible for
computing a STP state change based on current and asked parameters and perform
@@ -542,6 +542,12 @@ Bridge layer
Bridge VLAN filtering
---------------------
+- port_vlan_prepare: bridge layer function invoked when the bridge prepares the
+ configuration of a VLAN on the given port. If the operation is not supported
+ by the hardware, this function should return -EOPNOTSUPP to inform the bridge
+ code to fall back to a software implementation. No hardware setup must be done
+ in this function. See port_vlan_add for this and details.
+
- port_vlan_add: bridge layer function invoked when a VLAN is configured
(tagged or untagged) for the given switch port
@@ -552,6 +558,12 @@ Bridge VLAN filtering
function that the driver has to call for each VLAN the given port is a member
of. A switchdev object is used to carry the VID and bridge flags.
+- port_fdb_prepare: bridge layer function invoked when the bridge prepares the
+ installation of a Forwarding Database entry. If the operation is not
+ supported, this function should return -EOPNOTSUPP to inform the bridge code
+ to fall back to a software implementation. No hardware setup must be done in
+ this function. See port_fdb_add for this and details.
+
- port_fdb_add: bridge layer function invoked when the bridge wants to install a
Forwarding Database entry, the switch hardware should be programmed with the
specified address in the specified VLAN Id in the forwarding database
@@ -565,6 +577,10 @@ of DSA, would be the its port-based VLAN, used by the associated bridge device.
the specified MAC address from the specified VLAN ID if it was mapped into
this port forwarding database
+- port_fdb_dump: bridge layer function invoked with a switchdev callback
+ function that the driver has to call for each MAC address known to be behind
+ the given port. A switchdev object is used to carry the VID and FDB info.
+
TODO
====
diff --git a/Documentation/networking/filter.txt b/Documentation/networking/filter.txt
index 96da119a47e7..683ada5ad81d 100644
--- a/Documentation/networking/filter.txt
+++ b/Documentation/networking/filter.txt
@@ -216,21 +216,21 @@ opcodes as defined in linux/filter.h stand for:
jmp 6 Jump to label
ja 6 Jump to label
- jeq 7, 8 Jump on k == A
- jneq 8 Jump on k != A
- jne 8 Jump on k != A
- jlt 8 Jump on k < A
- jle 8 Jump on k <= A
- jgt 7, 8 Jump on k > A
- jge 7, 8 Jump on k >= A
- jset 7, 8 Jump on k & A
+ jeq 7, 8 Jump on A == k
+ jneq 8 Jump on A != k
+ jne 8 Jump on A != k
+ jlt 8 Jump on A < k
+ jle 8 Jump on A <= k
+ jgt 7, 8 Jump on A > k
+ jge 7, 8 Jump on A >= k
+ jset 7, 8 Jump on A & k
add 0, 4 A + <x>
sub 0, 4 A - <x>
mul 0, 4 A * <x>
div 0, 4 A / <x>
mod 0, 4 A % <x>
- neg 0, 4 !A
+ neg !A
and 0, 4 A & <x>
or 0, 4 A | <x>
xor 0, 4 A ^ <x>
@@ -1095,6 +1095,87 @@ all use cases.
See details of eBPF verifier in kernel/bpf/verifier.c
+Direct packet access
+--------------------
+In cls_bpf and act_bpf programs the verifier allows direct access to the packet
+data via skb->data and skb->data_end pointers.
+Ex:
+1: r4 = *(u32 *)(r1 +80) /* load skb->data_end */
+2: r3 = *(u32 *)(r1 +76) /* load skb->data */
+3: r5 = r3
+4: r5 += 14
+5: if r5 > r4 goto pc+16
+R1=ctx R3=pkt(id=0,off=0,r=14) R4=pkt_end R5=pkt(id=0,off=14,r=14) R10=fp
+6: r0 = *(u16 *)(r3 +12) /* access 12 and 13 bytes of the packet */
+
+this 2-byte load from the packet is safe to do, since the program author
+did check 'if (skb->data + 14 > skb->data_end) goto err' at insn #5 which
+means that in the fall-through case the register R3 (which points to skb->data)
+has at least 14 directly accessible bytes. The verifier marks it
+as R3=pkt(id=0,off=0,r=14).
+id=0 means that no additional variables were added to the register.
+off=0 means that no additional constants were added.
+r=14 is the range of safe access which means that bytes [R3, R3 + 14) are ok.
+Note that R5 is marked as R5=pkt(id=0,off=14,r=14). It also points
+to the packet data, but constant 14 was added to the register, so
+it now points to 'skb->data + 14' and accessible range is [R5, R5 + 14 - 14)
+which is zero bytes.
+
+More complex packet access may look like:
+ R0=imm1 R1=ctx R3=pkt(id=0,off=0,r=14) R4=pkt_end R5=pkt(id=0,off=14,r=14) R10=fp
+ 6: r0 = *(u8 *)(r3 +7) /* load 7th byte from the packet */
+ 7: r4 = *(u8 *)(r3 +12)
+ 8: r4 *= 14
+ 9: r3 = *(u32 *)(r1 +76) /* load skb->data */
+10: r3 += r4
+11: r2 = r1
+12: r2 <<= 48
+13: r2 >>= 48
+14: r3 += r2
+15: r2 = r3
+16: r2 += 8
+17: r1 = *(u32 *)(r1 +80) /* load skb->data_end */
+18: if r2 > r1 goto pc+2
+ R0=inv56 R1=pkt_end R2=pkt(id=2,off=8,r=8) R3=pkt(id=2,off=0,r=8) R4=inv52 R5=pkt(id=0,off=14,r=14) R10=fp
+19: r1 = *(u8 *)(r3 +4)
+The state of the register R3 is R3=pkt(id=2,off=0,r=8)
+id=2 means that two 'r3 += rX' instructions were seen, so r3 points to some
+offset within a packet and since the program author did
+'if (r3 + 8 > r1) goto err' at insn #18, the safe range is [R3, R3 + 8).
+The verifier only allows 'add' operation on packet registers. Any other
+operation will set the register state to 'unknown_value' and it won't be
+available for direct packet access.
+Operation 'r3 += rX' may overflow and become less than original skb->data,
+therefore the verifier has to prevent that. So it tracks the number of
+upper zero bits in all 'unknown_value' registers, so when it sees
+'r3 += rX' instruction and rX is more than 16-bit value, it will error as:
+"cannot add integer value with N upper zero bits to ptr_to_packet"
+Ex. after insn 'r4 = *(u8 *)(r3 +12)' (insn #7 above) the state of r4 is
+R4=inv56 which means that upper 56 bits on the register are guaranteed
+to be zero. After insn 'r4 *= 14' the state becomes R4=inv52, since
+multiplying 8-bit value by constant 14 will keep upper 52 bits as zero.
+Similarly 'r2 >>= 48' will make R2=inv48, since the shift is not sign
+extending. This logic is implemented in evaluate_reg_alu() function.
+
+The end result is that bpf program author can access packet directly
+using normal C code as:
+ void *data = (void *)(long)skb->data;
+ void *data_end = (void *)(long)skb->data_end;
+ struct eth_hdr *eth = data;
+ struct iphdr *iph = data + sizeof(*eth);
+ struct udphdr *udp = data + sizeof(*eth) + sizeof(*iph);
+
+ if (data + sizeof(*eth) + sizeof(*iph) + sizeof(*udp) > data_end)
+ return 0;
+ if (eth->h_proto != htons(ETH_P_IP))
+ return 0;
+ if (iph->protocol != IPPROTO_UDP || iph->ihl != 5)
+ return 0;
+ if (udp->dest == 53 || udp->source == 9)
+ ...;
+which makes such programs easier to write compared to the LD_ABS insn
+and significantly faster.
+
eBPF maps
---------
'maps' is a generic storage of different types for sharing data between kernel
@@ -1293,5 +1374,5 @@ to give potential BPF hackers or security auditors a better overview of
the underlying architecture.
Jay Schulist <jschlst@samba.org>
-Daniel Borkmann <dborkman@redhat.com>
-Alexei Starovoitov <ast@plumgrid.com>
+Daniel Borkmann <daniel@iogearbox.net>
+Alexei Starovoitov <ast@kernel.org>
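As a companion to the corrected jump table above (the conditionals compare the accumulator A against the operand k), here is a small, hedged sketch of a classic BPF filter built with the BPF_STMT/BPF_JUMP macros; it accepts only IPv4 frames and would typically be attached with setsockopt(SO_ATTACH_FILTER) on a packet socket.

#include <linux/filter.h>
#include <linux/if_ether.h>

/* "jeq" taken when A == k: here A holds the EtherType, k is ETH_P_IP. */
static struct sock_filter ipv4_only[] = {
	BPF_STMT(BPF_LD | BPF_H | BPF_ABS, 12),              /* A = ethertype */
	BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, ETH_P_IP, 0, 1), /* A == k ? next : drop */
	BPF_STMT(BPF_RET | BPF_K, 0xffff),                   /* accept packet */
	BPF_STMT(BPF_RET | BPF_K, 0),                        /* drop packet */
};

static struct sock_fprog ipv4_only_prog = {
	.len = sizeof(ipv4_only) / sizeof(ipv4_only[0]),
	.filter = ipv4_only,
};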
diff --git a/Documentation/networking/gen_stats.txt b/Documentation/networking/gen_stats.txt
index 70e6275b757a..ff630a87b511 100644
--- a/Documentation/networking/gen_stats.txt
+++ b/Documentation/networking/gen_stats.txt
@@ -33,7 +33,8 @@ my_dumping_routine(struct sk_buff *skb, ...)
{
struct gnet_dump dump;
- if (gnet_stats_start_copy(skb, TCA_STATS2, &mystruct->lock, &dump) < 0)
+ if (gnet_stats_start_copy(skb, TCA_STATS2, &mystruct->lock, &dump,
+ TCA_PAD) < 0)
goto rtattr_failure;
if (gnet_stats_copy_basic(&dump, &mystruct->bstats) < 0 ||
@@ -56,7 +57,8 @@ existing TLV types.
my_dumping_routine(struct sk_buff *skb, ...)
{
if (gnet_stats_start_copy_compat(skb, TCA_STATS2, TCA_STATS,
- TCA_XSTATS, &mystruct->lock, &dump) < 0)
+ TCA_XSTATS, &mystruct->lock, &dump,
+ TCA_PAD) < 0)
goto rtattr_failure;
...
}
diff --git a/Documentation/networking/ip-sysctl.txt b/Documentation/networking/ip-sysctl.txt
index b183e2b606c8..6c7f365b1515 100644
--- a/Documentation/networking/ip-sysctl.txt
+++ b/Documentation/networking/ip-sysctl.txt
@@ -63,6 +63,16 @@ fwmark_reflect - BOOLEAN
fwmark of the packet they are replying to.
Default: 0
+fib_multipath_use_neigh - BOOLEAN
+ Use status of existing neighbor entry when determining nexthop for
+ multipath routes. If disabled, neighbor information is not used and
+ packets could be directed to a failed nexthop. Only valid for kernels
+ built with CONFIG_IP_ROUTE_MULTIPATH enabled.
+ Default: 0 (disabled)
+ Possible values:
+ 0 - disabled
+ 1 - enabled
+
route/max_size - INTEGER
Maximum number of routes allowed in the kernel. Increase
this when using large numbers of interfaces and/or routes.
diff --git a/Documentation/networking/mac80211-injection.txt b/Documentation/networking/mac80211-injection.txt
index ec8f934c2eb2..d58d78df9ca2 100644
--- a/Documentation/networking/mac80211-injection.txt
+++ b/Documentation/networking/mac80211-injection.txt
@@ -37,14 +37,27 @@ radiotap headers and used to control injection:
HT rate for the transmission (only for devices without own rate control).
Also some flags are parsed
- IEEE80211_TX_RC_SHORT_GI: use short guard interval
- IEEE80211_TX_RC_40_MHZ_WIDTH: send in HT40 mode
+ IEEE80211_RADIOTAP_MCS_SGI: use short guard interval
+ IEEE80211_RADIOTAP_MCS_BW_40: send in HT40 mode
* IEEE80211_RADIOTAP_DATA_RETRIES
number of retries when either IEEE80211_RADIOTAP_RATE or
IEEE80211_RADIOTAP_MCS was used
+ * IEEE80211_RADIOTAP_VHT
+
+ VHT mcs and number of streams used in the transmission (only for devices
+ without own rate control). Also other fields are parsed
+
+ flags field
+ IEEE80211_RADIOTAP_VHT_FLAG_SGI: use short guard interval
+
+ bandwidth field
+ 1: send using 40MHz channel width
+ 4: send using 80MHz channel width
+ 11: send using 160MHz channel width
+
The injection code can also skip all other currently defined radiotap fields
facilitating replay of captured radiotap headers directly.
diff --git a/Documentation/networking/netdev-features.txt b/Documentation/networking/netdev-features.txt
index f310edec8a77..7413eb05223b 100644
--- a/Documentation/networking/netdev-features.txt
+++ b/Documentation/networking/netdev-features.txt
@@ -131,13 +131,11 @@ stack. Driver should not change behaviour based on them.
* LLTX driver (deprecated for hardware drivers)
-NETIF_F_LLTX should be set in drivers that implement their own locking in
-transmit path or don't need locking at all (e.g. software tunnels).
-In ndo_start_xmit, it is recommended to use a try_lock and return
-NETDEV_TX_LOCKED when the spin lock fails. The locking should also properly
-protect against other callbacks (the rules you need to find out).
+NETIF_F_LLTX is meant to be used by drivers that don't need locking at all,
+e.g. software tunnels.
-Don't use it for new drivers.
+This is also used in a few legacy drivers that implement their
+own locking, don't use it for new (hardware) drivers.
* netns-local device
diff --git a/Documentation/networking/netdevices.txt b/Documentation/networking/netdevices.txt
index 0b1cf6b2a592..7fec2061a334 100644
--- a/Documentation/networking/netdevices.txt
+++ b/Documentation/networking/netdevices.txt
@@ -69,10 +69,9 @@ ndo_start_xmit:
When the driver sets NETIF_F_LLTX in dev->features this will be
called without holding netif_tx_lock. In this case the driver
- has to lock by itself when needed. It is recommended to use a try lock
- for this and return NETDEV_TX_LOCKED when the spin lock fails.
- The locking there should also properly protect against
- set_rx_mode. Note that the use of NETIF_F_LLTX is deprecated.
+ has to lock by itself when needed.
+ The locking there should also properly protect against
+ set_rx_mode. WARNING: use of NETIF_F_LLTX is deprecated.
Don't use it for new drivers.
Context: Process with BHs disabled or BH (timer),
@@ -83,8 +82,6 @@ ndo_start_xmit:
o NETDEV_TX_BUSY Cannot transmit packet, try later
Usually a bug, means queue start/stop flow control is broken in
the driver. Note: the driver must NOT put the skb in its DMA ring.
- o NETDEV_TX_LOCKED Locking failed, please retry quickly.
- Only valid when NETIF_F_LLTX is set.
ndo_tx_timeout:
Synchronization: netif_tx_lock spinlock; all TX queues frozen.
diff --git a/Documentation/networking/segmentation-offloads.txt b/Documentation/networking/segmentation-offloads.txt
new file mode 100644
index 000000000000..f200467ade38
--- /dev/null
+++ b/Documentation/networking/segmentation-offloads.txt
@@ -0,0 +1,130 @@
+Segmentation Offloads in the Linux Networking Stack
+
+Introduction
+============
+
+This document describes a set of techniques in the Linux networking stack
+to take advantage of segmentation offload capabilities of various NICs.
+
+The following technologies are described:
+ * TCP Segmentation Offload - TSO
+ * UDP Fragmentation Offload - UFO
+ * IPIP, SIT, GRE, and UDP Tunnel Offloads
+ * Generic Segmentation Offload - GSO
+ * Generic Receive Offload - GRO
+ * Partial Generic Segmentation Offload - GSO_PARTIAL
+
+TCP Segmentation Offload
+========================
+
+TCP segmentation allows a device to segment a single frame into multiple
+frames with a data payload size specified in skb_shinfo()->gso_size.
+When TCP segmentation is requested, the bit for either SKB_GSO_TCP or
+SKB_GSO_TCP6 should be set in skb_shinfo()->gso_type and
+skb_shinfo()->gso_size should be set to a non-zero value.
+
+TCP segmentation is dependent on support for the use of partial checksum
+offload. For this reason TSO is normally disabled if the Tx checksum
+offload for a given device is disabled.
+
+In order to support TCP segmentation offload it is necessary to populate
+the network and transport header offsets of the skbuff so that the device
+drivers will be able to determine the offsets of the IP or IPv6 header and the
+TCP header. In addition as CHECKSUM_PARTIAL is required csum_start should
+also point to the TCP header of the packet.
+
+For IPv4 segmentation we support one of two types in terms of the IP ID.
+The default behavior is to increment the IP ID with every segment. If the
+GSO type SKB_GSO_TCP_FIXEDID is specified then we will not increment the IP
+ID and all segments will use the same IP ID. If a device has
+NETIF_F_TSO_MANGLEID set then the IP ID can be ignored when performing TSO
+and we will either increment the IP ID for all frames, or leave it at a
+static value based on driver preference.
+
+UDP Fragmentation Offload
+=========================
+
+UDP fragmentation offload allows a device to fragment an oversized UDP
+datagram into multiple IPv4 fragments. Many of the requirements for UDP
+fragmentation offload are the same as TSO. However the IPv4 ID for
+fragments should not increment as a single IPv4 datagram is fragmented.
+
+IPIP, SIT, GRE, UDP Tunnel, and Remote Checksum Offloads
+========================================================
+
+In addition to the offloads described above it is possible for a frame to
+contain additional headers such as an outer tunnel. In order to account
+for such instances an additional set of segmentation offload types were
+introduced including SKB_GSO_IPIP, SKB_GSO_SIT, SKB_GSO_GRE, and
+SKB_GSO_UDP_TUNNEL. These extra segmentation types are used to identify
+cases where there are more than just 1 set of headers. For example in the
+case of IPIP and SIT we should have the network and transport headers moved
+from the standard list of headers to "inner" header offsets.
+
+Currently only two levels of headers are supported. The convention is to
+refer to the tunnel headers as the outer headers, while the encapsulated
+data is normally referred to as the inner headers. Below is the list of
+calls to access the given headers:
+
+IPIP/SIT Tunnel:
+            Outer                    Inner
+MAC         skb_mac_header
+Network     skb_network_header       skb_inner_network_header
+Transport   skb_transport_header
+
+UDP/GRE Tunnel:
+            Outer                    Inner
+MAC         skb_mac_header           skb_inner_mac_header
+Network     skb_network_header       skb_inner_network_header
+Transport   skb_transport_header     skb_inner_transport_header
+
+In addition to the above tunnel types there are also SKB_GSO_GRE_CSUM and
+SKB_GSO_UDP_TUNNEL_CSUM. These two additional tunnel types reflect the
+fact that the outer header also requests to have a non-zero checksum
+included in the outer header.
+
+Finally there is SKB_GSO_REMCSUM which indicates that a given tunnel header
+has requested a remote checksum offload. In this case the inner headers
+will be left with a partial checksum and only the outer header checksum
+will be computed.
+
+Generic Segmentation Offload
+============================
+
+Generic segmentation offload is a pure software offload that is meant to
+deal with cases where device drivers cannot perform the offloads described
+above. What occurs in GSO is that a given skbuff will have its data broken
+out over multiple skbuffs that have been resized to match the MSS provided
+via skb_shinfo()->gso_size.
+
+Before enabling any hardware segmentation offload a corresponding software
+offload is required in GSO. Otherwise it becomes possible for a frame to
+be re-routed between devices and end up being unable to be transmitted.
+
+Generic Receive Offload
+=======================
+
+Generic receive offload is the complement to GSO. Ideally any frame
+assembled by GRO should be segmented to create an identical sequence of
+frames using GSO, and any sequence of frames segmented by GSO should be
+able to be reassembled back to the original by GRO. The only exception to
+this is IPv4 ID in the case that the DF bit is set for a given IP header.
+If the value of the IPv4 ID is not sequentially incrementing it will be
+altered so that it is when a frame assembled via GRO is segmented via GSO.
+
+Partial Generic Segmentation Offload
+====================================
+
+Partial generic segmentation offload is a hybrid between TSO and GSO. What
+it effectively does is take advantage of certain traits of TCP and tunnels
+so that instead of having to rewrite the packet headers for each segment
+only the inner-most transport header and possibly the outer-most network
+header need to be updated. This allows devices that do not support tunnel
+offloads or tunnel offloads with checksum to still make use of segmentation.
+
+With the partial offload what occurs is that all headers excluding the
+inner transport header are updated such that they will contain the correct
+values for if the header was simply duplicated. The one exception to this
+is the outer IPv4 ID field. It is up to the device drivers to guarantee
+that the IPv4 ID field is incremented in the case that a given header does
+not have the DF bit set.
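As a rough illustration of how the GSO metadata described above reaches a driver, the sketch below shows a transmit routine consulting skb_shinfo(); foo_tx_queue() and foo_tx_queue_tso() are hypothetical device-specific helpers, not kernel APIs, and real drivers have many more concerns (DMA mapping, queue state, checksum offsets).

#include <linux/netdevice.h>
#include <linux/skbuff.h>

/* Hypothetical device-specific helpers, shown only for structure. */
static netdev_tx_t foo_tx_queue(struct net_device *dev, struct sk_buff *skb);
static netdev_tx_t foo_tx_queue_tso(struct net_device *dev, struct sk_buff *skb,
				    unsigned int mss, bool fixed_id);

static netdev_tx_t foo_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	if (skb_is_gso(skb)) {
		unsigned int mss = skb_shinfo(skb)->gso_size;
		bool fixed_id = skb_shinfo(skb)->gso_type & SKB_GSO_TCP_FIXEDID;

		/* Oversized frame: hand it to the hardware with the MSS and
		 * let the device perform the segmentation (TSO).
		 */
		return foo_tx_queue_tso(dev, skb, mss, fixed_id);
	}

	/* Regular frame: normal transmit path. */
	return foo_tx_queue(dev, skb);
}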
diff --git a/Documentation/networking/stmmac.txt b/Documentation/networking/stmmac.txt
index d64a14714236..671fe3dd56d3 100644
--- a/Documentation/networking/stmmac.txt
+++ b/Documentation/networking/stmmac.txt
@@ -1,6 +1,6 @@
STMicroelectronics 10/100/1000 Synopsys Ethernet driver
-Copyright (C) 2007-2014 STMicroelectronics Ltd
+Copyright (C) 2007-2015 STMicroelectronics Ltd
Author: Giuseppe Cavallaro <peppe.cavallaro@st.com>
This is the driver for the MAC 10/100/1000 on-chip Ethernet controllers
@@ -138,6 +138,8 @@ struct plat_stmmacenet_data {
int (*init)(struct platform_device *pdev, void *priv);
void (*exit)(struct platform_device *pdev, void *priv);
void *bsp_priv;
+ int has_gmac4;
+ bool tso_en;
};
Where:
@@ -181,6 +183,8 @@ Where:
registers. init/exit callbacks should not use or modify
platform data.
o bsp_priv: another private pointer.
+ o has_gmac4: uses GMAC4 core.
+ o tso_en: Enables TSO (TCP Segmentation Offload) feature.
For MDIO bus The we have:
@@ -278,6 +282,13 @@ Please see the following document:
o stmmac_ethtool.c: to implement the ethtool support;
o stmmac.h: private driver structure;
o common.h: common definitions and VFTs;
+ o mmc_core.c/mmc.h: Management MAC Counters;
+ o stmmac_hwtstamp.c: HW timestamp support for PTP;
+ o stmmac_ptp.c: PTP 1588 clock;
+ o dwmac-<XXX>.c: these are for the platform glue-logic file; e.g. dwmac-sti.c
+ for STMicroelectronics SoCs.
+
+- GMAC 3.x
o descs.h: descriptor structure definitions;
o dwmac1000_core.c: dwmac GiGa core functions;
o dwmac1000_dma.c: dma functions for the GMAC chip;
@@ -289,11 +300,32 @@ Please see the following document:
o enh_desc.c: functions for handling enhanced descriptors;
o norm_desc.c: functions for handling normal descriptors;
o chain_mode.c/ring_mode.c:: functions to manage RING/CHAINED modes;
- o mmc_core.c/mmc.h: Management MAC Counters;
- o stmmac_hwtstamp.c: HW timestamp support for PTP;
- o stmmac_ptp.c: PTP 1588 clock;
- o dwmac-<XXX>.c: these are for the platform glue-logic file; e.g. dwmac-sti.c
- for STMicroelectronics SoCs.
+
+- GMAC4.x generation
+ o dwmac4_core.c: dwmac GMAC4.x core functions;
+ o dwmac4_desc.c: functions for handling GMAC4.x descriptors;
+ o dwmac4_descs.h: descriptor definitions;
+ o dwmac4_dma.c: dma functions for the GMAC4.x chip;
+ o dwmac4_dma.h: dma definitions for the GMAC4.x chip;
+ o dwmac4.h: core definitions for the GMAC4.x chip;
+ o dwmac4_lib.c: generic GMAC4.x functions;
+
+4.12) TSO support (GMAC4.x)
+
+The TSO (TCP Segmentation Offload) feature is supported by the GMAC 4.x chip
+family. When a packet is sent through the TCP protocol, the TCP stack ensures
+that the SKB provided to the low level driver (stmmac in our case) matches
+the maximum frame length (IP header + TCP header + payload <= 1500 bytes (for
+MTU set to 1500)). It means that if an application using TCP wants to send a
+packet which will have a length (after adding headers) > 1514, the packet
+will be split into several TCP packets: the data payload is split and headers
+(TCP/IP ...) are added. This is done by software.
+
+When TSO is enabled, the TCP stack doesn't care about the maximum frame
+length and provides the SKB to stmmac as it is. The GMAC IP will then have to
+perform the segmentation by itself to match the maximum frame length.
+
+This feature can be enabled in device tree through "snps,tso" entry.
5) Debug Information
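To show where the two new platform data fields above would typically be set, here is a minimal, hedged sketch of a platform glue callback; foo_glue_setup() is an illustrative name and every other member of struct plat_stmmacenet_data is elided.

#include <linux/stmmac.h>

static void foo_glue_setup(struct plat_stmmacenet_data *plat)
{
	plat->has_gmac4 = 1;	/* the MAC is a GMAC4.x core */
	plat->tso_en = true;	/* allow the driver to use the TSO engine */
}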
diff --git a/Documentation/networking/switchdev.txt b/Documentation/networking/switchdev.txt
index 2f659129694b..31c39115834d 100644
--- a/Documentation/networking/switchdev.txt
+++ b/Documentation/networking/switchdev.txt
@@ -89,6 +89,18 @@ Typically, the management port is not participating in offloaded data plane and
is loaded with a different driver, such as a NIC driver, on the management port
device.
+Switch ID
+^^^^^^^^^
+
+The switchdev driver must implement the switchdev op switchdev_port_attr_get
+for SWITCHDEV_ATTR_ID_PORT_PARENT_ID for each port netdev, returning the same
+physical ID for each port of a switch. The ID must be unique between switches
+on the same system. The ID does not need to be unique between switches on
+different systems.
+
+The switch ID is used to locate ports on a switch and to know if aggregated
+ports belong to the same switch.
+
Port Netdev Naming
^^^^^^^^^^^^^^^^^^
@@ -104,25 +116,13 @@ external configuration. For example, if a physical 40G port is split logically
into 4 10G ports, resulting in 4 port netdevs, the device can give a unique
name for each port using port PHYS name. The udev rule would be:
-SUBSYSTEM=="net", ACTION=="add", DRIVER="<driver>", ATTR{phys_port_name}!="", \
- NAME="$attr{phys_port_name}"
+SUBSYSTEM=="net", ACTION=="add", ATTR{phys_switch_id}=="<phys_switch_id>", \
+ ATTR{phys_port_name}!="", NAME="swX$attr{phys_port_name}"
Suggested naming convention is "swXpYsZ", where X is the switch name or ID, Y
is the port name or ID, and Z is the sub-port name or ID. For example, sw1p1s0
would be sub-port 0 on port 1 on switch 1.
-Switch ID
-^^^^^^^^^
-
-The switchdev driver must implement the switchdev op switchdev_port_attr_get
-for SWITCHDEV_ATTR_ID_PORT_PARENT_ID for each port netdev, returning the same
-physical ID for each port of a switch. The ID must be unique between switches
-on the same system. The ID does not need to be unique between switches on
-different systems.
-
-The switch ID is used to locate ports on a switch and to know if aggregated
-ports belong to the same switch.
-
Port Features
^^^^^^^^^^^^^
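A hedged sketch of the switch ID attribute described above is shown below; struct foo_port and its switch_id member are hypothetical driver-private data, and the switchdev_attr layout follows the kernel generation this documentation change targets.

#include <linux/netdevice.h>
#include <linux/string.h>
#include <net/switchdev.h>

struct foo_port {			/* hypothetical driver private data */
	u8 switch_id[8];		/* identical for every port of one switch */
};

static int foo_port_attr_get(struct net_device *dev, struct switchdev_attr *attr)
{
	struct foo_port *port = netdev_priv(dev);

	switch (attr->id) {
	case SWITCHDEV_ATTR_ID_PORT_PARENT_ID:
		/* Same physical ID for all ports of this switch. */
		attr->u.ppid.id_len = sizeof(port->switch_id);
		memcpy(attr->u.ppid.id, port->switch_id, attr->u.ppid.id_len);
		return 0;
	default:
		return -EOPNOTSUPP;
	}
}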
diff --git a/Documentation/networking/timestamping.txt b/Documentation/networking/timestamping.txt
index a977339fbe0a..671cccf0dcd2 100644
--- a/Documentation/networking/timestamping.txt
+++ b/Documentation/networking/timestamping.txt
@@ -44,11 +44,17 @@ timeval of SO_TIMESTAMP (ms).
Supports multiple types of timestamp requests. As a result, this
socket option takes a bitmap of flags, not a boolean. In
- err = setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, (void *) val, &val);
+ err = setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, (void *) val,
+ sizeof(val));
val is an integer with any of the following bits set. Setting other
bit returns EINVAL and does not change the current state.
+The socket option configures timestamp generation for individual
+sk_buffs (1.3.1), timestamp reporting to the socket's error
+queue (1.3.2) and options (1.3.3). Timestamp generation can also
+be enabled for individual sendmsg calls using cmsg (1.3.4).
+
1.3.1 Timestamp Generation
@@ -71,13 +77,16 @@ SOF_TIMESTAMPING_RX_SOFTWARE:
kernel receive stack.
SOF_TIMESTAMPING_TX_HARDWARE:
- Request tx timestamps generated by the network adapter.
+ Request tx timestamps generated by the network adapter. This flag
+ can be enabled via both socket options and control messages.
SOF_TIMESTAMPING_TX_SOFTWARE:
Request tx timestamps when data leaves the kernel. These timestamps
are generated in the device driver as close as possible, but always
prior to, passing the packet to the network interface. Hence, they
require driver support and may not be available for all devices.
+ This flag can be enabled via both socket options and control messages.
+
SOF_TIMESTAMPING_TX_SCHED:
Request tx timestamps prior to entering the packet scheduler. Kernel
@@ -90,7 +99,8 @@ SOF_TIMESTAMPING_TX_SCHED:
machines with virtual devices where a transmitted packet travels
through multiple devices and, hence, multiple packet schedulers,
a timestamp is generated at each layer. This allows for fine
- grained measurement of queuing delay.
+ grained measurement of queuing delay. This flag can be enabled
+ via both socket options and control messages.
SOF_TIMESTAMPING_TX_ACK:
Request tx timestamps when all data in the send buffer has been
@@ -99,6 +109,7 @@ SOF_TIMESTAMPING_TX_ACK:
over-report measurement, because the timestamp is generated when all
data up to and including the buffer at send() was acknowledged: the
cumulative acknowledgment. The mechanism ignores SACK and FACK.
+ This flag can be enabled via both socket options and control messages.
1.3.2 Timestamp Reporting
@@ -183,6 +194,37 @@ having access to the contents of the original packet, so cannot be
combined with SOF_TIMESTAMPING_OPT_TSONLY.
+1.3.4. Enabling timestamps via control messages
+
+In addition to socket options, timestamp generation can be requested
+per write via cmsg, only for SOF_TIMESTAMPING_TX_* (see Section 1.3.1).
+Using this feature, applications can sample timestamps per sendmsg()
+without paying the overhead of enabling and disabling timestamps via
+setsockopt:
+
+ struct msghdr *msg;
+ ...
+ cmsg = CMSG_FIRSTHDR(msg);
+ cmsg->cmsg_level = SOL_SOCKET;
+ cmsg->cmsg_type = SO_TIMESTAMPING;
+ cmsg->cmsg_len = CMSG_LEN(sizeof(__u32));
+ *((__u32 *) CMSG_DATA(cmsg)) = SOF_TIMESTAMPING_TX_SCHED |
+ SOF_TIMESTAMPING_TX_SOFTWARE |
+ SOF_TIMESTAMPING_TX_ACK;
+ err = sendmsg(fd, msg, 0);
+
+The SOF_TIMESTAMPING_TX_* flags set via cmsg will override
+the SOF_TIMESTAMPING_TX_* flags set via setsockopt.
+
+Moreover, applications must still enable timestamp reporting via
+setsockopt to receive timestamps:
+
+ __u32 val = SOF_TIMESTAMPING_SOFTWARE |
+ SOF_TIMESTAMPING_OPT_ID /* or any other flag */;
+ err = setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, (void *) val,
+ sizeof(val));
+
+
1.4 Bytestream Timestamps
The SO_TIMESTAMPING interface supports timestamping of bytes in a
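Putting the two hunks above together, a hedged userspace sketch might look like the following: reporting is enabled once through the socket option, while the generation flags for one particular write are passed via cmsg. Error handling is trimmed and the exact headers needed can vary slightly between C libraries.

#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>
#include <linux/types.h>
#include <linux/net_tstamp.h>

static ssize_t send_with_tx_timestamp(int fd, const void *buf, size_t len)
{
	__u32 report = SOF_TIMESTAMPING_SOFTWARE | SOF_TIMESTAMPING_OPT_ID;
	char control[CMSG_SPACE(sizeof(__u32))] = {0};
	struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
	struct msghdr msg = {0};
	struct cmsghdr *cmsg;

	/* 1.3.2: reporting still has to be enabled via the socket option. */
	setsockopt(fd, SOL_SOCKET, SO_TIMESTAMPING, &report, sizeof(report));

	msg.msg_iov = &iov;
	msg.msg_iovlen = 1;
	msg.msg_control = control;
	msg.msg_controllen = sizeof(control);

	/* 1.3.4: request TX timestamp generation for this write only. */
	cmsg = CMSG_FIRSTHDR(&msg);
	cmsg->cmsg_level = SOL_SOCKET;
	cmsg->cmsg_type = SO_TIMESTAMPING;
	cmsg->cmsg_len = CMSG_LEN(sizeof(__u32));
	*(__u32 *)CMSG_DATA(cmsg) = SOF_TIMESTAMPING_TX_SOFTWARE |
				    SOF_TIMESTAMPING_TX_SCHED;

	return sendmsg(fd, &msg, 0);
}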
diff --git a/Documentation/phy.txt b/Documentation/phy.txt
index b388c5af9e72..0aa994bd9a91 100644
--- a/Documentation/phy.txt
+++ b/Documentation/phy.txt
@@ -31,16 +31,28 @@ should provide its own implementation of of_xlate. of_xlate is used only for
dt boot case.
#define of_phy_provider_register(dev, xlate) \
- __of_phy_provider_register((dev), THIS_MODULE, (xlate))
+ __of_phy_provider_register((dev), NULL, THIS_MODULE, (xlate))
#define devm_of_phy_provider_register(dev, xlate) \
- __devm_of_phy_provider_register((dev), THIS_MODULE, (xlate))
+ __devm_of_phy_provider_register((dev), NULL, THIS_MODULE, (xlate))
of_phy_provider_register and devm_of_phy_provider_register macros can be used to
register the phy_provider and it takes device and of_xlate as
arguments. For the dt boot case, all PHY providers should use one of the above
2 macros to register the PHY provider.
+Often the device tree nodes associated with a PHY provider will contain a set
+of children that each represent a single PHY. Some bindings may nest the child
+nodes within extra levels for context and extensibility, in which case the low
+level of_phy_provider_register_full() and devm_of_phy_provider_register_full()
+macros can be used to override the node containing the children.
+
+#define of_phy_provider_register_full(dev, children, xlate) \
+ __of_phy_provider_register(dev, children, THIS_MODULE, xlate)
+
+#define devm_of_phy_provider_register_full(dev, children, xlate) \
+ __devm_of_phy_provider_register_full(dev, children, THIS_MODULE, xlate)
+
void devm_of_phy_provider_unregister(struct device *dev,
struct phy_provider *phy_provider);
void of_phy_provider_unregister(struct phy_provider *phy_provider);
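A hedged usage sketch of the new _full() variant described above follows; the "phys" child node name and all foo_* identifiers are illustrative, and reference counting of the child node is elided.

#include <linux/err.h>
#include <linux/of.h>
#include <linux/phy/phy.h>
#include <linux/platform_device.h>

static int foo_phy_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct device_node *children;
	struct phy_provider *provider;

	/* The binding nests the PHY children one level down, under "phys". */
	children = of_get_child_by_name(dev->of_node, "phys");
	if (!children)
		return -ENODEV;

	provider = devm_of_phy_provider_register_full(dev, children,
						      of_phy_simple_xlate);
	return PTR_ERR_OR_ZERO(provider);
}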
diff --git a/Documentation/powerpc/eeh-pci-error-recovery.txt b/Documentation/powerpc/eeh-pci-error-recovery.txt
index 9d4e33df624c..678189280bb4 100644
--- a/Documentation/powerpc/eeh-pci-error-recovery.txt
+++ b/Documentation/powerpc/eeh-pci-error-recovery.txt
@@ -12,7 +12,7 @@ Overview:
The IBM POWER-based pSeries and iSeries computers include PCI bus
controller chips that have extended capabilities for detecting and
reporting a large variety of PCI bus error conditions. These features
-go under the name of "EEH", for "Extended Error Handling". The EEH
+go under the name of "EEH", for "Enhanced Error Handling". The EEH
hardware features allow PCI bus errors to be cleared and a PCI
card to be "rebooted", without also having to reboot the operating
system.
diff --git a/Documentation/pps/pps.txt b/Documentation/pps/pps.txt
index 7cb7264ad598..50022b3c8ebf 100644
--- a/Documentation/pps/pps.txt
+++ b/Documentation/pps/pps.txt
@@ -98,7 +98,7 @@ pps_source_info_s as follows:
};
and then calling the function pps_register_source() in your
-intialization routine as follows:
+initialization routine as follows:
source = pps_register_source(&pps_ktimer_info,
PPS_CAPTUREASSERT | PPS_OFFSETASSERT);
diff --git a/Documentation/pwm.txt b/Documentation/pwm.txt
index ca895fd211e4..789b27c6ec99 100644
--- a/Documentation/pwm.txt
+++ b/Documentation/pwm.txt
@@ -42,9 +42,26 @@ variants of these functions, devm_pwm_get() and devm_pwm_put(), also exist.
After being requested, a PWM has to be configured using:
-int pwm_config(struct pwm_device *pwm, int duty_ns, int period_ns);
+int pwm_apply_state(struct pwm_device *pwm, struct pwm_state *state);
-To start/stop toggling the PWM output use pwm_enable()/pwm_disable().
+This API controls both the PWM period/duty_cycle config and the
+enable/disable state.
+
+The pwm_config(), pwm_enable() and pwm_disable() functions are just wrappers
+around pwm_apply_state() and should not be used if the user wants to change
+several parameter at once. For example, if you see pwm_config() and
+pwm_{enable,disable}() calls in the same function, this probably means you
+should switch to pwm_apply_state().
+
+The PWM user API also allows one to query the PWM state with pwm_get_state().
+
+In addition to the PWM state, the PWM API also exposes PWM arguments, which
+are the reference PWM config one should use on this PWM.
+PWM arguments are usually platform-specific and allow the PWM user to only
+care about the duty cycle relative to the full period (like, duty = 50% of the
+period). struct pwm_args contains 2 fields (period and polarity) and should
+be used to set the initial PWM config (usually done in the probe function
+of the PWM user). PWM arguments are retrieved with pwm_get_args().
Using PWMs with the sysfs interface
-----------------------------------
@@ -105,6 +122,15 @@ goes low for the remainder of the period. Conversely, a signal with inversed
polarity starts low for the duration of the duty cycle and goes high for the
remainder of the period.
+Drivers are encouraged to implement ->apply() instead of the legacy
+->enable(), ->disable() and ->config() methods. Doing that should provide
+atomicity in the PWM config workflow, which is required when the PWM controls
+a critical device (like a regulator).
+
+The implementation of ->get_state() (a method used to retrieve initial PWM
+state) is also encouraged for the same reason: letting the PWM user know
+about the current PWM state would allow him to avoid glitches.
+
Locking
-------
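As a hedged sketch of the atomic flow described above (reference config from pwm_get_args(), whole state applied in one pwm_apply_state() call), a consumer asking for a 50% duty cycle might do something like the following; foo_set_half_duty() is an illustrative name.

#include <linux/pwm.h>

static int foo_set_half_duty(struct pwm_device *pwm)
{
	struct pwm_args args;
	struct pwm_state state;

	pwm_get_args(pwm, &args);	/* platform-provided reference config */
	pwm_get_state(pwm, &state);	/* start from the current state */

	state.period = args.period;
	state.polarity = args.polarity;
	state.duty_cycle = args.period / 2;	/* 50% duty cycle */
	state.enabled = true;

	/* One atomic call instead of pwm_config() + pwm_enable(). */
	return pwm_apply_state(pwm, &state);
}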
diff --git a/Documentation/robust-futexes.txt b/Documentation/robust-futexes.txt
index af6fce23e484..61c22d608759 100644
--- a/Documentation/robust-futexes.txt
+++ b/Documentation/robust-futexes.txt
@@ -126,9 +126,9 @@ vma based method:
- no VM changes are needed - 'struct address_space' is left alone.
- - no registration of individual locks is needed: robust mutexes dont
+ - no registration of individual locks is needed: robust mutexes don't
need any extra per-lock syscalls. Robust mutexes thus become a very
- lightweight primitive - so they dont force the application designer
+ lightweight primitive - so they don't force the application designer
to do a hard choice between performance and robustness - robust
mutexes are just as fast.
@@ -202,7 +202,7 @@ and the remaining bits are for the TID.
Testing, architecture support
-----------------------------
-i've tested the new syscalls on x86 and x86_64, and have made sure the
+I've tested the new syscalls on x86 and x86_64, and have made sure the
parsing of the userspace list is robust [ ;-) ] even if the list is
deliberately corrupted.
diff --git a/Documentation/rpmsg.txt b/Documentation/rpmsg.txt
index f7edc3aa1e92..a95e36a43288 100644
--- a/Documentation/rpmsg.txt
+++ b/Documentation/rpmsg.txt
@@ -249,24 +249,12 @@ MODULE_DEVICE_TABLE(rpmsg, rpmsg_driver_sample_id_table);
static struct rpmsg_driver rpmsg_sample_client = {
.drv.name = KBUILD_MODNAME,
- .drv.owner = THIS_MODULE,
.id_table = rpmsg_driver_sample_id_table,
.probe = rpmsg_sample_probe,
.callback = rpmsg_sample_cb,
.remove = rpmsg_sample_remove,
};
-
-static int __init init(void)
-{
- return register_rpmsg_driver(&rpmsg_sample_client);
-}
-module_init(init);
-
-static void __exit fini(void)
-{
- unregister_rpmsg_driver(&rpmsg_sample_client);
-}
-module_exit(fini);
+module_rpmsg_driver(rpmsg_sample_client);
Note: a similar sample which can be built and loaded can be found
in samples/rpmsg/.
diff --git a/Documentation/scsi/ChangeLog.megaraid_sas b/Documentation/scsi/ChangeLog.megaraid_sas
index 18b570990040..00ffdf187f0b 100644
--- a/Documentation/scsi/ChangeLog.megaraid_sas
+++ b/Documentation/scsi/ChangeLog.megaraid_sas
@@ -63,7 +63,7 @@ Release Date : Sat. Feb 9, 2013 17:00:00 PST 2013 -
Current Version : 06.506.00.00-rc1
Old Version : 06.504.01.00-rc1
1. Add 4k FastPath DIF support.
- 2. Dont load DevHandle unless FastPath enabled.
+ 2. Don't load DevHandle unless FastPath enabled.
3. Version and Changelog update.
-------------------------------------------------------------------------------
Release Date : Mon. Oct 1, 2012 17:00:00 PST 2012 -
@@ -105,7 +105,7 @@ Old Version : 00.00.06.12-rc1
1. Fix reglockFlags for degraded raid5/6 for MR 9360/9380.
2. Mask off flags in ioctl path to prevent memory scribble with older
MegaCLI versions.
- 3. Remove poll_mode_io module paramater, sysfs node, and associated code.
+ 3. Remove poll_mode_io module parameter, sysfs node, and associated code.
-------------------------------------------------------------------------------
Release Date : Wed. Oct 5, 2011 17:00:00 PST 2010 -
(emaild-id:megaraidlinux@lsi.com)
@@ -199,7 +199,7 @@ Old Version : 00.00.04.31-rc1
1. Add the Online Controller Reset (OCR) to the Driver.
OCR is the new feature for megaraid_sas driver which
will allow the fw to do the chip reset which will not
- affact the OS behavious.
+ affect the OS behavior.
To add the OCR support, driver need to do:
a). reset the controller chips -- Xscale and Gen2 which
@@ -233,7 +233,7 @@ Old Version : 00.00.04.31-rc1
failed state. Driver will kill adapter if can't bring back FW after the
this three times reset.
4. Add the input parameter max_sectors to 1MB support to our GEN2 controller.
- customer can use the input paramenter max_sectors to add 1MB support to GEN2
+ customer can use the input parameter max_sectors to add 1MB support to GEN2
controller.
1 Release Date : Thur. Oct 29, 2009 09:12:45 PST 2009 -
@@ -582,11 +582,11 @@ ii. Bug fix : Disable controller interrupt before firing INIT cmd to FW.
1 Release Date : Wed Feb 03 14:31:44 PST 2006 - Sumant Patro <Sumant.Patro@lsil.com>
2 Current Version : 00.00.02.04
-3 Older Version : 00.00.02.04
+3 Older Version : 00.00.02.04
-i. Remove superflous instance_lock
+i. Remove superfluous instance_lock
- gets rid of the otherwise superflous instance_lock and avoids an unsave
+ gets rid of the otherwise superfluous instance_lock and avoids an unsafe
unsynchronized access in the error handler.
- Christoph Hellwig <hch@lst.de>
@@ -594,43 +594,43 @@ i. Remove superflous instance_lock
1 Release Date : Wed Feb 03 14:31:44 PST 2006 - Sumant Patro <Sumant.Patro@lsil.com>
2 Current Version : 00.00.02.04
-3 Older Version : 00.00.02.04
+3 Older Version : 00.00.02.04
i. Support for 1078 type (ppc IOP) controller, device id : 0x60 added.
- During initialization, depending on the device id, the template members
- are initialized with function pointers specific to the ppc or
- xscale controllers.
+ During initialization, depending on the device id, the template members
+ are initialized with function pointers specific to the ppc or
+ xscale controllers.
-Sumant Patro <Sumant.Patro@lsil.com>
-1 Release Date : Fri Feb 03 14:16:25 PST 2006 - Sumant Patro
+1 Release Date : Fri Feb 03 14:16:25 PST 2006 - Sumant Patro
<Sumant.Patro@lsil.com>
2 Current Version : 00.00.02.04
-3 Older Version : 00.00.02.02
-i. Register 16 byte CDB capability with scsi midlayer
+3 Older Version : 00.00.02.02
+i. Register 16 byte CDB capability with scsi midlayer
- "This patch properly registers the 16 byte command length capability of the
- megaraid_sas controlled hardware with the scsi midlayer. All megaraid_sas
+ "This patch properly registers the 16 byte command length capability of the
+ megaraid_sas controlled hardware with the scsi midlayer. All megaraid_sas
hardware supports 16 byte CDB's."
- -Joshua Giles <joshua_giles@dell.com>
+ -Joshua Giles <joshua_giles@dell.com>
1 Release Date : Mon Jan 23 14:09:01 PST 2006 - Sumant Patro <Sumant.Patro@lsil.com>
2 Current Version : 00.00.02.02
-3 Older Version : 00.00.02.01
+3 Older Version : 00.00.02.01
-i. New template defined to represent each family of controllers (identified by processor used).
- The template will have defintions that will be initialised to appropritae values for a specific family of controllers. The template definition has four function pointers. During driver initialisation the function pointers will be set based on the controller family type. This change is done to support new controllers that has different processors and thus different register set.
+i. New template defined to represent each family of controllers (identified by processor used).
+ The template will have definitions that will be initialised to appropriate values for a specific family of controllers. The template definition has four function pointers. During driver initialisation the function pointers will be set based on the controller family type. This change is done to support new controllers that has different processors and thus different register set.
-Sumant Patro <Sumant.Patro@lsil.com>
1 Release Date : Mon Dec 19 14:36:26 PST 2005 - Sumant Patro <Sumant.Patro@lsil.com>
-2 Current Version : 00.00.02.00-rc4
-3 Older Version : 00.00.02.01
+2 Current Version : 00.00.02.00-rc4
+3 Older Version : 00.00.02.01
-i. Code reorganized to remove code duplication in megasas_build_cmd.
+i. Code reorganized to remove code duplication in megasas_build_cmd.
- "There's a lot of duplicate code megasas_build_cmd. Move that out of the different codepathes and merge the reminder of megasas_build_cmd into megasas_queue_command"
+ "There's a lot of duplicate code megasas_build_cmd. Move that out of the different codepaths and merge the reminder of megasas_build_cmd into megasas_queue_command"
- Christoph Hellwig <hch@lst.de>
diff --git a/Documentation/scsi/bfa.txt b/Documentation/scsi/bfa.txt
index f2d6e9d1791e..3cc4d80d6092 100644
--- a/Documentation/scsi/bfa.txt
+++ b/Documentation/scsi/bfa.txt
@@ -50,7 +50,7 @@ be found at:
http://www.brocade.com/services-support/drivers-downloads/adapters/Linux.page
-and then click following respective util pacakge link
+and then click following respective util package link
Version Link
diff --git a/Documentation/scsi/g_NCR5380.txt b/Documentation/scsi/g_NCR5380.txt
index 3b80f567f818..fd880150aeea 100644
--- a/Documentation/scsi/g_NCR5380.txt
+++ b/Documentation/scsi/g_NCR5380.txt
@@ -23,11 +23,10 @@ supported by the driver.
If the default configuration does not work for you, you can use the kernel
command lines (eg using the lilo append command):
- ncr5380=port,irq,dma
- ncr53c400=port,irq
-or
- ncr5380=base,irq,dma
- ncr53c400=base,irq
+ ncr5380=addr,irq
+ ncr53c400=addr,irq
+ ncr53c400a=addr,irq
+ dtc3181e=addr,irq
The driver does not probe for any addresses or ports other than those in
the OVERRIDE or given to the kernel as above.
@@ -36,19 +35,17 @@ This driver provides some information on what it has detected in
/proc/scsi/g_NCR5380/x where x is the scsi card number as detected at boot
time. More info to come in the future.
-When NCR53c400 support is compiled in, BIOS parameters will be returned by
-the driver (the raw 5380 driver does not and I don't plan to fiddle with
-it!).
-
This driver works as a module.
When included as a module, parameters can be passed on the insmod/modprobe
command line:
ncr_irq=xx the interrupt
ncr_addr=xx the port or base address (for port or memory
mapped, resp.)
- ncr_dma=xx the DMA
ncr_5380=1 to set up for a NCR5380 board
ncr_53c400=1 to set up for a NCR53C400 board
+ ncr_53c400a=1 to set up for a NCR53C400A board
+ dtc_3181e=1 to set up for a Domex Technology Corp 3181E board
+ hp_c2502=1 to set up for a Hewlett Packard C2502 board
e.g.
modprobe g_NCR5380 ncr_irq=5 ncr_addr=0x350 ncr_5380=1
for a port mapped NCR5380 board or
diff --git a/Documentation/scsi/scsi-parameters.txt b/Documentation/scsi/scsi-parameters.txt
index 2bfd6f6d2d3d..1241ac11edb1 100644
--- a/Documentation/scsi/scsi-parameters.txt
+++ b/Documentation/scsi/scsi-parameters.txt
@@ -27,13 +27,15 @@ parameters may be changed at runtime by the command
aic79xx= [HW,SCSI]
See Documentation/scsi/aic79xx.txt.
- atascsi= [HW,SCSI] Atari SCSI
+ atascsi= [HW,SCSI]
+ See drivers/scsi/atari_scsi.c.
BusLogic= [HW,SCSI]
See drivers/scsi/BusLogic.c, comment before function
BusLogic_ParseDriverOptions().
dtc3181e= [HW,SCSI]
+ See Documentation/scsi/g_NCR5380.txt.
eata= [HW,SCSI]
@@ -51,8 +53,8 @@ parameters may be changed at runtime by the command
ips= [HW,SCSI] Adaptec / IBM ServeRAID controller
See header of drivers/scsi/ips.c.
- mac5380= [HW,SCSI] Format:
- <can_queue>,<cmd_per_lun>,<sg_tablesize>,<hostid>,<use_tags>
+ mac5380= [HW,SCSI]
+ See drivers/scsi/mac_scsi.c.
max_luns= [SCSI] Maximum number of LUNs to probe.
Should be between 1 and 2^32-1.
@@ -65,10 +67,13 @@ parameters may be changed at runtime by the command
See header of drivers/scsi/NCR_D700.c.
ncr5380= [HW,SCSI]
+ See Documentation/scsi/g_NCR5380.txt.
ncr53c400= [HW,SCSI]
+ See Documentation/scsi/g_NCR5380.txt.
ncr53c400a= [HW,SCSI]
+ See Documentation/scsi/g_NCR5380.txt.
ncr53c406a= [HW,SCSI]
diff --git a/Documentation/scsi/tcm_qla2xxx.txt b/Documentation/scsi/tcm_qla2xxx.txt
new file mode 100644
index 000000000000..c3a670a25e2b
--- /dev/null
+++ b/Documentation/scsi/tcm_qla2xxx.txt
@@ -0,0 +1,22 @@
+tcm_qla2xxx jam_host attribute
+------------------------------
+There is now a new module endpoint attribute called jam_host
+attribute: jam_host: boolean=0/1
+This attribute and accompanying code is only included if the
+Kconfig parameter TCM_QLA2XXX_DEBUG is set to Y
+By default this jammer code and functionality is disabled
+
+Use this attribute to control the discarding of SCSI commands to a
+selected host.
+This may be useful for testing error handling and simulating slow drain
+and other fabric issues.
+
+Setting a boolean of 1 for the jam_host attribute for a particular host
+ will discard the commands for that host.
+Reset back to 0 to stop the jamming.
+
+Enable host 4 to be jammed
+echo 1 > /sys/kernel/config/target/qla2xxx/21:00:00:24:ff:27:8f:ae/tpgt_1/attrib/jam_host
+
+Disable jamming on host 4
+echo 0 > /sys/kernel/config/target/qla2xxx/21:00:00:24:ff:27:8f:ae/tpgt_1/attrib/jam_host
diff --git a/Documentation/security/LoadPin.txt b/Documentation/security/LoadPin.txt
new file mode 100644
index 000000000000..e11877f5d3d4
--- /dev/null
+++ b/Documentation/security/LoadPin.txt
@@ -0,0 +1,17 @@
+LoadPin is a Linux Security Module that ensures all kernel-loaded files
+(modules, firmware, etc) all originate from the same filesystem, with
+the expectation that such a filesystem is backed by a read-only device
+such as dm-verity or CDROM. This allows systems that have a verified
+and/or unchangeable filesystem to enforce module and firmware loading
+restrictions without needing to sign the files individually.
+
+The LSM is selectable at build-time with CONFIG_SECURITY_LOADPIN, and
+can be controlled at boot-time with the kernel command line option
+"loadpin.enabled". By default, it is enabled, but can be disabled at
+boot ("loadpin.enabled=0").
+
+LoadPin starts pinning when it sees the first file loaded. If the
+block device backing the filesystem is not read-only, a sysctl is
+created to toggle pinning: /proc/sys/kernel/loadpin/enabled. (Having
+a mutable filesystem means pinning is mutable too, but having the
+sysctl allows for easy testing on systems with a mutable filesystem.)
diff --git a/Documentation/security/keys.txt b/Documentation/security/keys.txt
index 8c183873b2b7..20d05719bceb 100644
--- a/Documentation/security/keys.txt
+++ b/Documentation/security/keys.txt
@@ -823,6 +823,36 @@ The keyctl syscall functions are:
A process must have search permission on the key for this function to be
successful.
+ (*) Compute a Diffie-Hellman shared secret or public key
+
+ long keyctl(KEYCTL_DH_COMPUTE, struct keyctl_dh_params *params,
+ char *buffer, size_t buflen);
+
+ The params struct contains serial numbers for three keys:
+
+ - The prime, p, known to both parties
+ - The local private key
+ - The base integer, which is either a shared generator or the
+ remote public key
+
+ The value computed is:
+
+ result = base ^ private (mod prime)
+
+ If the base is the shared generator, the result is the local
+ public key. If the base is the remote public key, the result is
+ the shared secret.
+
+ The buffer length must be at least the length of the prime, or zero.
+
+ If the buffer length is nonzero, the length of the result is
+ returned when it is successfully calculated and copied into the
+ buffer. When the buffer length is zero, the minimum required
+ buffer length is returned.
+
+ This function will return error EOPNOTSUPP if the key type is not
+ supported, error ENOKEY if the key could not be found, or error
+ EACCES if the key is not readable by the caller.
===============
KERNEL SERVICES
@@ -999,6 +1029,10 @@ payload contents" for more information.
struct key *keyring_alloc(const char *description, uid_t uid, gid_t gid,
const struct cred *cred,
key_perm_t perm,
+ int (*restrict_link)(struct key *,
+ const struct key_type *,
+ unsigned long,
+ const union key_payload *),
unsigned long flags,
struct key *dest);
@@ -1010,6 +1044,24 @@ payload contents" for more information.
KEY_ALLOC_NOT_IN_QUOTA in flags if the keyring shouldn't be accounted
towards the user's quota). Error ENOMEM can also be returned.
+ If restrict_link is not NULL, it should point to a function that will be
+ called each time an attempt is made to link a key into the new keyring.
+ This function is called to check whether a key may be added into the keyring
+ or not. Callers of key_create_or_update() within the kernel can pass
+ KEY_ALLOC_BYPASS_RESTRICTION to suppress the check. An example of using
+ this is to manage rings of cryptographic keys that are set up when the
+ kernel boots where userspace is also permitted to add keys - provided they
+ can be verified by a key the kernel already has.
+
+ When called, the restriction function will be passed the keyring being
+ added to, the key flags value and the type and payload of the key being
+ added. Note that when a new key is being created, this is called between
+ payload preparsing and actual key creation. The function should return 0
+ to allow the link or an error to reject it.
+
+ A convenience function, restrict_link_reject, exists to always return
+ -EPERM in this case.
+
(*) To check the validity of a key, this function can be called:
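For the KEYCTL_DH_COMPUTE operation described earlier in this hunk, a hedged userspace sketch is shown below. The parameter structure mirrors the three serial numbers listed above (the uapi header provides an equivalent definition on kernels that support the call), and the keyutils library offers a higher-level wrapper; all foo_* names are illustrative.

#include <stddef.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/keyctl.h>

/* Mirrors the documented parameter block: three key serial numbers. */
struct foo_dh_params {
	int private;	/* serial of the local private key */
	int prime;	/* serial of the key holding the prime p */
	int base;	/* serial of the generator or remote public key */
};

static long foo_dh_compute(int priv_key, int prime_key, int base_key,
			   char *buffer, size_t buflen)
{
	struct foo_dh_params params = {
		.private = priv_key,
		.prime	 = prime_key,
		.base	 = base_key,
	};

	/* A zero buflen returns the minimum required buffer length. */
	return syscall(__NR_keyctl, KEYCTL_DH_COMPUTE, &params, buffer,
		       buflen, 0);
}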
diff --git a/Documentation/security/self-protection.txt b/Documentation/security/self-protection.txt
new file mode 100644
index 000000000000..babd6378ec05
--- /dev/null
+++ b/Documentation/security/self-protection.txt
@@ -0,0 +1,261 @@
+# Kernel Self-Protection
+
+Kernel self-protection is the design and implementation of systems and
+structures within the Linux kernel to protect against security flaws in
+the kernel itself. This covers a wide range of issues, including removing
+entire classes of bugs, blocking security flaw exploitation methods,
+and actively detecting attack attempts. Not all topics are explored in
+this document, but it should serve as a reasonable starting point and
+answer any frequently asked questions. (Patches welcome, of course!)
+
+In the worst-case scenario, we assume an unprivileged local attacker
+has arbitrary read and write access to the kernel's memory. In many
+cases, bugs being exploited will not provide this level of access,
+but with systems in place that defend against the worst case we'll
+cover the more limited cases as well. A higher bar, and one that should
+still be kept in mind, is protecting the kernel against a _privileged_
+local attacker, since the root user has access to a vastly increased
+attack surface. (Especially when they have the ability to load arbitrary
+kernel modules.)
+
+The goals for successful self-protection systems would be that they
+are effective, on by default, require no opt-in by developers, have no
+performance impact, do not impede kernel debugging, and have tests. It
+is uncommon that all these goals can be met, but it is worth explicitly
+mentioning them, since these aspects need to be explored, dealt with,
+and/or accepted.
+
+
+## Attack Surface Reduction
+
+The most fundamental defense against security exploits is to reduce the
+areas of the kernel that can be used to redirect execution. This ranges
+from limiting the exposed APIs available to userspace, making in-kernel
+APIs hard to use incorrectly, minimizing the areas of writable kernel
+memory, etc.
+
+### Strict kernel memory permissions
+
+When all of kernel memory is writable, it becomes trivial for attacks
+to redirect execution flow. To reduce the availability of these targets
+the kernel needs to protect its memory with a tight set of permissions.
+
+#### Executable code and read-only data must not be writable
+
+Any areas of the kernel with executable memory must not be writable.
+While this obviously includes the kernel text itself, we must consider
+all additional places too: kernel modules, JIT memory, etc. (There are
+temporary exceptions to this rule to support things like instruction
+alternatives, breakpoints, kprobes, etc. If these must exist in a
+kernel, they are implemented in a way where the memory is temporarily
+made writable during the update, and then returned to the original
+permissions.)
+
+In support of this are (the poorly named) CONFIG_DEBUG_RODATA and
+CONFIG_DEBUG_SET_MODULE_RONX, which seek to make sure that code is not
+writable, data is not executable, and read-only data is neither writable
+nor executable.
+
+#### Function pointers and sensitive variables must not be writable
+
+Vast areas of kernel memory contain function pointers that are looked
+up by the kernel and used to continue execution (e.g. descriptor/vector
+tables, file/network/etc operation structures, etc). The number of these
+variables must be reduced to an absolute minimum.
+
+Many such variables can be made read-only by setting them "const"
+so that they live in the .rodata section instead of the .data section
+of the kernel, gaining the protection of the kernel's strict memory
+permissions as described above.
+
+For variables that are initialized once at __init time, these can
+be marked with the (new and under development) __ro_after_init
+attribute.
+
+What remains are variables that are updated rarely (e.g. GDT). These
+will need another infrastructure (similar to the temporary exceptions
+made to kernel code mentioned above) that allow them to spend the rest
+of their lifetime read-only. (For example, when being updated, only the
+CPU thread performing the update would be given uninterruptible write
+access to the memory.)
+
+#### Segregation of kernel memory from userspace memory
+
+The kernel must never execute userspace memory. The kernel must also never
+access userspace memory without explicit expectation to do so. These
+rules can be enforced either by support of hardware-based restrictions
+(x86's SMEP/SMAP, ARM's PXN/PAN) or via emulation (ARM's Memory Domains).
+By blocking userspace memory in this way, execution and data parsing
+cannot be passed to trivially-controlled userspace memory, forcing
+attacks to operate entirely in kernel memory.
+
+### Reduced access to syscalls
+
+One trivial way to eliminate many syscalls for 64-bit systems is building
+without CONFIG_COMPAT. However, this is rarely a feasible scenario.
+
+The "seccomp" system provides an opt-in feature made available to
+userspace, which provides a way to reduce the number of kernel entry
+points available to a running process. This limits the breadth of kernel
+code that can be reached, possibly reducing the availability of a given
+bug to an attack.
+
+An area of improvement would be creating viable ways to keep access to
+things like compat, user namespaces, BPF creation, and perf limited only
+to trusted processes. This would keep the scope of kernel entry points
+restricted to the more regular set normally available to unprivileged
+userspace.
+
+### Restricting access to kernel modules
+
+The kernel should never allow an unprivileged user the ability to
+load specific kernel modules, since that would provide a facility to
+unexpectedly extend the available attack surface. (The on-demand loading
+of modules via their predefined subsystems, e.g. MODULE_ALIAS_*, is
+considered "expected" here, though additional consideration should be
+given even to these.) For example, loading a filesystem module via an
+unprivileged socket API is nonsense: only the root or physically local
+user should trigger filesystem module loading. (And even this can be up
+for debate in some scenarios.)
+
+To protect against even privileged users, systems may need to either
+disable module loading entirely (e.g. monolithic kernel builds or
+modules_disabled sysctl), or provide signed modules (e.g.
+CONFIG_MODULE_SIG_FORCE, or dm-crypt with LoadPin), to keep from having
+root load arbitrary kernel code via the module loader interface.
+
+
+## Memory integrity
+
+There are many memory structures in the kernel that are regularly abused
+to gain execution control during an attack. By far the most commonly
+understood is that of the stack buffer overflow in which the return
+address stored on the stack is overwritten. Many other examples of this
+kind of attack exist, and protections exist to defend against them.
+
+### Stack buffer overflow
+
+The classic stack buffer overflow involves writing past the expected end
+of a variable stored on the stack, ultimately writing a controlled value
+to the stack frame's stored return address. The most widely used defense
+is the presence of a stack canary between the stack variables and the
+return address (CONFIG_CC_STACKPROTECTOR), which is verified just before
+the function returns. Other defenses include things like shadow stacks.
+
+### Stack depth overflow
+
+A less well understood attack is using a bug that triggers the
+kernel to consume stack memory with deep function calls or large stack
+allocations. With this attack it is possible to write beyond the end of
+the kernel's preallocated stack space and into sensitive structures. Two
+important changes need to be made for better protections: moving the
+sensitive thread_info structure elsewhere, and adding a faulting memory
+hole at the bottom of the stack to catch these overflows.
+
+### Heap memory integrity
+
+The structures used to track heap free lists can be sanity-checked during
+allocation and freeing to make sure they aren't being used to manipulate
+other memory areas.
+
+### Counter integrity
+
+Many places in the kernel use atomic counters to track object references
+or perform similar lifetime management. When these counters can be made
+to wrap (over or under) this traditionally exposes a use-after-free
+flaw. By trapping atomic wrapping, this class of bug vanishes.
+
+### Size calculation overflow detection
+
+Similar to counter overflow, integer overflows (usually size calculations)
+need to be detected at runtime to kill this class of bug, which
+traditionally leads to being able to write past the end of kernel buffers.
+
+
+## Statistical defenses
+
+While many protections can be considered deterministic (e.g. read-only
+memory cannot be written to), some protections provide only statistical
+defense, in that an attack must gather enough information about a
+running system to overcome the defense. While not perfect, these do
+provide meaningful defenses.
+
+### Canaries, blinding, and other secrets
+
+It should be noted that things like the stack canary discussed earlier
+are technically statistical defenses, since they rely on a (leakable)
+secret value.
+
+Blinding literal values for things like JITs, where the executable
+contents may be partially under the control of userspace, need a similar
+secret value.
+
+It is critical that the secret values used must be separate (e.g.
+different canary per stack) and high entropy (e.g. is the RNG actually
+working?) in order to maximize their success.
+
+### Kernel Address Space Layout Randomization (KASLR)
+
+Since the location of kernel memory is almost always instrumental in
+mounting a successful attack, making the location non-deterministic
+raises the difficulty of an exploit. (Note that this in turn makes
+the value of leaks higher, since they may be used to discover desired
+memory locations.)
+
+#### Text and module base
+
+By relocating the physical and virtual base address of the kernel at
+boot-time (CONFIG_RANDOMIZE_BASE), attacks needing kernel code will be
+frustrated. Additionally, offsetting the module loading base address
+means that even systems that load the same set of modules in the same
+order every boot will not share a common base address with the rest of
+the kernel text.
+
+#### Stack base
+
+If the base address of the kernel stack is not the same between processes,
+or even not the same between syscalls, targets on or beyond the stack
+become more difficult to locate.
+
+#### Dynamic memory base
+
+Much of the kernel's dynamic memory (e.g. kmalloc, vmalloc, etc) ends up
+being relatively deterministic in layout due to the order of early-boot
+initializations. If the base address of these areas is not the same
+between boots, targeting them is frustrated, requiring a leak specific
+to the region.
+
+
+## Preventing leaks
+
+Since the locations of sensitive structures are the primary target for
+attacks, it is important to defend against leaks of both kernel memory
+addresses and kernel memory contents (since they may contain kernel
+addresses or other sensitive things like canary values).
+
+### Unique identifiers
+
+Kernel memory addresses must never be used as identifiers exposed to
+userspace. Instead, use an atomic counter, an idr, or similar unique
+identifier.
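+
+A minimal sketch (the object and function names are hypothetical) of handing
+userspace a small integer handle instead of a kernel pointer, using the idr
+API:
+
+  #include <linux/gfp.h>
+  #include <linux/idr.h>
+
+  static DEFINE_IDR(demo_idr);
+
+  static int demo_export_handle(void *object)
+  {
+          /* Returns a unique id >= 1, or a negative errno on failure. */
+          return idr_alloc(&demo_idr, object, 1, 0, GFP_KERNEL);
+  }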
+
+### Memory initialization
+
+Memory copied to userspace must always be fully initialized. If not
+explicitly memset(), this will require changes to the compiler to make
+sure structure holes are cleared.
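+
+A minimal sketch (the structure layout is illustrative) of clearing a stack
+structure, including any compiler-inserted padding, before copying it out:
+
+  #include <linux/errno.h>
+  #include <linux/string.h>
+  #include <linux/types.h>
+  #include <linux/uaccess.h>
+
+  struct demo_reply {
+          u32 id;
+          u64 stamp;      /* the compiler may pad before this member */
+  };
+
+  static int demo_report(void __user *dst, u32 id, u64 stamp)
+  {
+          struct demo_reply r;
+
+          memset(&r, 0, sizeof(r));       /* clears the padding hole too */
+          r.id = id;
+          r.stamp = stamp;
+          return copy_to_user(dst, &r, sizeof(r)) ? -EFAULT : 0;
+  }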
+
+### Memory poisoning
+
+When releasing memory, it is best to poison the contents (clear stack on
+syscall return, wipe heap memory on a free), to avoid reuse attacks that
+rely on the old contents of memory. This frustrates many uninitialized
+variable attacks, stack info leaks, heap info leaks, and use-after-free
+attacks.
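+
+A minimal sketch, assuming a driver-private object holding key material, of
+poisoning the sensitive contents before the memory goes back to the allocator:
+
+  #include <linux/slab.h>
+  #include <linux/string.h>
+  #include <linux/types.h>
+
+  struct demo_secret {
+          u8 key[32];
+  };
+
+  static void demo_secret_free(struct demo_secret *s)
+  {
+          /* memzero_explicit() cannot be optimized away like memset(). */
+          memzero_explicit(s->key, sizeof(s->key));
+          kfree(s);
+  }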
+
+### Destination tracking
+
+To help kill classes of bugs that result in kernel addresses being
+written to userspace, the destination of writes needs to be tracked. If
+the buffer is destined for userspace (e.g. seq_file backed /proc files),
+it should automatically censor sensitive values.
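+
+A minimal sketch (the show routine is hypothetical): when a pointer really
+must be printed through a seq_file-backed /proc entry, %pK lets kptr_restrict
+censor the value for unprivileged readers instead of leaking a kernel address:
+
+  #include <linux/seq_file.h>
+
+  static int demo_show(struct seq_file *m, void *v)
+  {
+          seq_printf(m, "object at %pK\n", m->private);
+          return 0;
+  }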
diff --git a/Documentation/serial/driver b/Documentation/serial/driver
index 379468e12680..da193e092fc3 100644
--- a/Documentation/serial/driver
+++ b/Documentation/serial/driver
@@ -28,7 +28,7 @@ The serial core provides a few helper functions. This includes identifing
the correct port structure (via uart_get_console) and decoding command line
arguments (uart_parse_options).
-There is also a helper function (uart_write_console) which performs a
+There is also a helper function (uart_console_write) which performs a
character by character write, translating newlines to CRLF sequences.
Driver writers are recommended to use this function rather than implementing
their own version.
@@ -41,27 +41,23 @@ It is the responsibility of the low level hardware driver to perform the
necessary locking using port->lock. There are some exceptions (which
are described in the uart_ops listing below.)
-There are three locks. A per-port spinlock, a per-port tmpbuf semaphore,
-and an overall semaphore.
+There are two locks. A per-port spinlock, and an overall semaphore.
From the core driver perspective, the port->lock locks the following
data:
port->mctrl
port->icount
- info->xmit.head (circ->head)
- info->xmit.tail (circ->tail)
+ port->state->xmit.head (circ_buf->head)
+ port->state->xmit.tail (circ_buf->tail)
The low level driver is free to use this lock to provide any additional
locking.
-The core driver uses the info->tmpbuf_sem lock to prevent multi-threaded
-access to the info->tmpbuf bouncebuffer used for port writes.
-
The port_sem semaphore is used to protect against ports being added/
removed or reconfigured at inappropriate times. Since v2.6.27, this
semaphore has been the 'mutex' member of the tty_port struct, and
-commonly referred to as the port mutex (or port->mutex).
+commonly referred to as the port mutex.
uart_ops
@@ -135,6 +131,24 @@ hardware.
Interrupts: locally disabled.
This call must not sleep
+ throttle(port)
+ Notify the serial driver that input buffers for the line discipline are
+ close to full, and it should somehow signal that no more characters
+ should be sent to the serial port.
+ This will be called only if hardware assisted flow control is enabled.
+
+ Locking: serialized with .unthrottle() and termios modification by the
+ tty layer.
+
+ unthrottle(port)
+ Notify the serial driver that characters can now be sent to the serial
+ port without fear of overrunning the input buffers of the line
+ disciplines.
+ This will be called only if hardware assisted flow control is enabled.
+
+ Locking: serialized with .throttle() and termios modification by the
+ tty layer.
+
send_xchar(port,ch)
Transmit a high priority character, even if the port is stopped.
This is used to implement XON/XOFF flow control and tcflow(). If
@@ -172,9 +186,7 @@ hardware.
should be terminated when another call is made with a zero
ctl.
- Locking: none.
- Interrupts: caller dependent.
- This call must not sleep
+ Locking: caller holds tty_port->mutex
startup(port)
Grab any interrupt resources and initialise any low level driver
@@ -192,7 +204,7 @@ hardware.
RTS nor DTR; this will have already been done via a separate
call to set_mctrl.
- Drivers must not access port->info once this call has completed.
+ Drivers must not access port->state once this call has completed.
This method will only be called when there are no more users of
this port.
@@ -204,7 +216,7 @@ hardware.
Flush any write buffers, reset any DMA state and stop any
ongoing DMA transfers.
- This will be called whenever the port->info->xmit circular
+ This will be called whenever the port->state->xmit circular
buffer is cleared.
Locking: port->lock taken.
@@ -250,10 +262,15 @@ hardware.
Other flags may be used (eg, xon/xoff characters) if your
hardware supports hardware "soft" flow control.
- Locking: caller holds port->mutex
+ Locking: caller holds tty_port->mutex
Interrupts: caller dependent.
This call must not sleep
+ set_ldisc(port,termios)
+ Notifier for discipline change. See Documentation/serial/tty.txt.
+
+ Locking: caller holds tty_port->mutex
+
pm(port,state,oldstate)
Perform any power management related activities on the specified
port. State indicates the new state (defined by
@@ -371,7 +388,7 @@ uart_get_baud_rate(port,termios,old,min,max)
Interrupts: n/a
uart_get_divisor(port,baud)
- Return the divsor (baud_base / baud) for the specified baud
+ Return the divisor (baud_base / baud) for the specified baud
rate, appropriately rounded.
If 38400 baud and custom divisor is selected, return the
@@ -449,11 +466,12 @@ mctrl_gpio_init(port, idx):
mctrl_gpio_free(dev, gpios):
This will free the requested gpios in mctrl_gpio_init().
- As devm_* function are used, there's generally no need to call
+ As devm_* functions are used, there's generally no need to call
this function.
mctrl_gpio_to_gpiod(gpios, gidx)
- This returns the gpio structure associated to the modem line index.
+ This returns the gpio_desc structure associated with the modem line
+ index.
mctrl_gpio_set(gpios, mctrl):
This will sets the gpios according to the mctrl state.
diff --git a/Documentation/serial/tty.txt b/Documentation/serial/tty.txt
index 798cba82c762..b48780977a68 100644
--- a/Documentation/serial/tty.txt
+++ b/Documentation/serial/tty.txt
@@ -210,9 +210,6 @@ TTY_IO_ERROR If set, causes all subsequent userspace read/write
TTY_OTHER_CLOSED Device is a pty and the other side has closed.
-TTY_OTHER_DONE Device is a pty and the other side has closed and
- all pending input processing has been completed.
-
TTY_NO_WRITE_SPLIT Prevent driver from splitting up writes into
smaller chunks.
diff --git a/Documentation/sound/alsa/HD-Audio.txt b/Documentation/sound/alsa/HD-Audio.txt
index e7193aac669c..d4510ebf2e8c 100644
--- a/Documentation/sound/alsa/HD-Audio.txt
+++ b/Documentation/sound/alsa/HD-Audio.txt
@@ -655,17 +655,6 @@ development branches in general while the development for the current
and next kernels are found in for-linus and for-next branches,
respectively.
-If you are using the latest Linus tree, it'd be better to pull the
-above GIT tree onto it. If you are using the older kernels, an easy
-way to try the latest ALSA code is to build from the snapshot
-tarball. There are daily tarballs and the latest snapshot tarball.
-All can be built just like normal alsa-driver release packages, that
-is, installed via the usual spells: configure, make and make
-install(-modules). See INSTALL in the package. The snapshot tarballs
-are found at:
-
-- ftp://ftp.suse.com/pub/people/tiwai/snapshot/
-
Sending a Bug Report
~~~~~~~~~~~~~~~~~~~~
@@ -699,7 +688,12 @@ problems.
alsa-info
~~~~~~~~~
The script `alsa-info.sh` is a very useful tool to gather the audio
-device information. You can fetch the latest version from:
+device information. It's included in alsa-utils package. The latest
+version can be found on git repository:
+
+- git://git.alsa-project.org/alsa-utils.git
+
+The script can be fetched directly from the following URL, too:
- http://www.alsa-project.org/alsa-info.sh
@@ -836,15 +830,11 @@ can get a proc-file dump at the current state, get a list of control
(mixer) elements, set/get the control element value, simulate the PCM
operation, the jack plugging simulation, etc.
-The package is found in:
-
-- ftp://ftp.suse.com/pub/people/tiwai/misc/
-
-A git repository is available:
+The program is found in the git repository below:
- git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/hda-emu.git
-See README file in the tarball for more details about hda-emu
+See README file in the repository for more details about hda-emu
program.
diff --git a/Documentation/sound/alsa/compress_offload.txt b/Documentation/sound/alsa/compress_offload.txt
index 630c492c3dc2..8ba556a131c3 100644
--- a/Documentation/sound/alsa/compress_offload.txt
+++ b/Documentation/sound/alsa/compress_offload.txt
@@ -149,7 +149,7 @@ Gapless Playback
================
When playing thru an album, the decoders have the ability to skip the encoder
delay and padding and directly move from one track content to another. The end
-user can perceive this as gapless playback as we dont have silence while
+user can perceive this as gapless playback as we don't have silence while
switching from one track to another
Also, there might be low-intensity noises due to encoding. Perfect gapless is
@@ -184,7 +184,7 @@ Sequence flow for gapless would be:
- Fill data of the first track
- Trigger start
- User-space finished sending all,
-- Indicaite next track data by sending set_next_track
+- Indicate next track data by sending set_next_track
- Set metadata of the next track
- then call partial_drain to flush most of buffer in DSP
- Fill data of the next track
diff --git a/Documentation/sound/alsa/soc/dapm.txt b/Documentation/sound/alsa/soc/dapm.txt
index 6faab4880006..c45bd79f291e 100644
--- a/Documentation/sound/alsa/soc/dapm.txt
+++ b/Documentation/sound/alsa/soc/dapm.txt
@@ -132,7 +132,7 @@ SOC_DAPM_SINGLE("HiFi Playback Switch", WM8731_APANA, 4, 1, 0),
SND_SOC_DAPM_MIXER("Output Mixer", WM8731_PWR, 4, 1, wm8731_output_mixer_controls,
ARRAY_SIZE(wm8731_output_mixer_controls)),
-If you dont want the mixer elements prefixed with the name of the mixer widget,
+If you don't want the mixer elements prefixed with the name of the mixer widget,
you can use SND_SOC_DAPM_MIXER_NAMED_CTL instead. the parameters are the same
as for SND_SOC_DAPM_MIXER.
diff --git a/Documentation/sound/alsa/soc/overview.txt b/Documentation/sound/alsa/soc/overview.txt
index ff88f52eec98..f3f28b7ae242 100644
--- a/Documentation/sound/alsa/soc/overview.txt
+++ b/Documentation/sound/alsa/soc/overview.txt
@@ -63,7 +63,7 @@ multiple re-usable component drivers :-
and any audio DSP drivers for that platform.
* Machine class driver: The machine driver class acts as the glue that
- decribes and binds the other component drivers together to form an ALSA
+ describes and binds the other component drivers together to form an ALSA
"sound card device". It handles any machine specific controls and
machine level audio events (e.g. turning on an amp at start of playback).
diff --git a/Documentation/sound/alsa/timestamping.txt b/Documentation/sound/alsa/timestamping.txt
index 0b191a23f534..1b6473f393a8 100644
--- a/Documentation/sound/alsa/timestamping.txt
+++ b/Documentation/sound/alsa/timestamping.txt
@@ -129,7 +129,7 @@ will be required to issue multiple queries and perform an
interpolation of the results
In some hardware-specific configuration, the system timestamp is
-latched by a low-level audio subsytem, and the information provided
+latched by a low-level audio subsystem, and the information provided
back to the driver. Due to potential delays in the communication with
the hardware, there is a risk of misalignment with the avail and delay
information. To make sure applications are not confused, a
diff --git a/Documentation/sync_file.txt b/Documentation/sync_file.txt
new file mode 100644
index 000000000000..eaf8297dbca2
--- /dev/null
+++ b/Documentation/sync_file.txt
@@ -0,0 +1,69 @@
+ Sync File API Guide
+ ~~~~~~~~~~~~~~~~~~~
+
+ Gustavo Padovan
+ <gustavo at padovan dot org>
+
+This document serves as a guide for device driver writers on what the
+sync_file API is, and how drivers can support it. Sync file is the carrier of
+the fences (struct fence) that need to be synchronized between drivers or
+across process boundaries.
+
+The sync_file API is meant to be used to send and receive fence information
+to/from userspace. It enables userspace to do explicit fencing, where instead
+of attaching a fence to the buffer a producer driver (such as a GPU or V4L
+driver) sends the fence related to the buffer to userspace via a sync_file.
+
+The sync_file can then be sent to the consumer (a DRM driver, for example),
+which will not use the buffer for anything before the fence(s) signal, i.e.,
+before the driver that issued the fence has stopped using/processing the
+buffer and signals that the buffer is ready to use. And vice-versa for the
+consumer -> producer part of the cycle.
+
+Sync files allow userspace to be aware of buffer-sharing synchronization
+between drivers.
+
+Sync files were originally added in the Android kernel, but the current Linux
+desktop can benefit a lot from them.
+
+in-fences and out-fences
+------------------------
+
+Sync files can go either to or from userspace. When a sync_file is sent from
+the driver to userspace we call the fences it contains 'out-fences'. They are
+related to a buffer that the driver is processing or is going to process, so
+the driver can create an out-fence to be able to notify, through fence_signal(),
+when it has finished using (or processing) that buffer. Out-fences are fences
+that the driver creates.
+
+On the other hand if the driver receives fence(s) through a sync_file from
+userspace we call these fence(s) 'in-fences'. Receiving in-fences means that
+we need to wait for the fence(s) to signal before using any buffer related to
+the in-fences.
+
+Creating Sync Files
+-------------------
+
+When a driver needs to send an out-fence to userspace it creates a sync_file.
+
+Interface:
+ struct sync_file *sync_file_create(struct fence *fence);
+
+The caller passes the out-fence and gets back the sync_file. That is just the
+first step; next, it needs to install an fd on sync_file->file. So it gets an
+fd:
+
+ fd = get_unused_fd_flags(O_CLOEXEC);
+
+and installs it on sync_file->file:
+
+ fd_install(fd, sync_file->file);
+
+The sync_file fd now can be sent to userspace.
+
+If the creation process fails, or the sync_file needs to be released for any
+other reason, fput(sync_file->file) should be used.
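+
+A consolidated sketch of the steps above (the wrapper function and its error
+handling are illustrative, not part of the API):
+
+  #include <linux/errno.h>
+  #include <linux/fcntl.h>
+  #include <linux/fence.h>
+  #include <linux/file.h>
+  #include <linux/sync_file.h>
+
+  static int demo_send_out_fence(struct fence *out_fence)
+  {
+          struct sync_file *sync_file;
+          int fd;
+
+          sync_file = sync_file_create(out_fence);
+          if (!sync_file)
+                  return -ENOMEM;
+
+          fd = get_unused_fd_flags(O_CLOEXEC);
+          if (fd < 0) {
+                  fput(sync_file->file);
+                  return fd;
+          }
+
+          fd_install(fd, sync_file->file);
+          return fd;      /* this fd can now be handed to userspace */
+  }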
+
+References:
+[1] struct sync_file in include/linux/sync_file.h
+[2] All interfaces mentioned above defined in include/linux/sync_file.h
diff --git a/Documentation/sysctl/kernel.txt b/Documentation/sysctl/kernel.txt
index 57653a44b128..a3683ce2a2f3 100644
--- a/Documentation/sysctl/kernel.txt
+++ b/Documentation/sysctl/kernel.txt
@@ -60,6 +60,8 @@ show up in /proc/sys/kernel:
- panic_on_warn
- perf_cpu_time_max_percent
- perf_event_paranoid
+- perf_event_max_stack
+- perf_event_max_contexts_per_stack
- pid_max
- powersave-nap [ PPC only ]
- printk
@@ -645,7 +647,7 @@ allowed to execute.
perf_event_paranoid:
Controls use of the performance events system by unprivileged
-users (without CAP_SYS_ADMIN). The default value is 1.
+users (without CAP_SYS_ADMIN). The default value is 2.
-1: Allow use of (almost) all events by all users
>=0: Disallow raw tracepoint access by users without CAP_IOC_LOCK
@@ -654,6 +656,32 @@ users (without CAP_SYS_ADMIN). The default value is 1.
==============================================================
+perf_event_max_stack:
+
+Controls the maximum number of stack frames to copy for (attr.sample_type &
+PERF_SAMPLE_CALLCHAIN) configured events, for instance, when using
+'perf record -g' or 'perf trace --call-graph fp'.
+
+This can only be done when no events are in use that have callchains
+enabled, otherwise writing to this file will return -EBUSY.
+
+The default value is 127.
+
+==============================================================
+
+perf_event_max_contexts_per_stack:
+
+Controls the maximum number of stack frame context entries for
+(attr.sample_type & PERF_SAMPLE_CALLCHAIN) configured events, for
+instance, when using 'perf record -g' or 'perf trace --call-graph fp'.
+
+This can only be done when no events are in use that have callchains
+enabled, otherwise writing to this file will return -EBUSY.
+
+The default value is 8.
+
+==============================================================
+
pid_max:
PID allocation wrap value. When the kernel's next PID value
diff --git a/Documentation/sysctl/net.txt b/Documentation/sysctl/net.txt
index 809ab6efcc74..f0480f7ea740 100644
--- a/Documentation/sysctl/net.txt
+++ b/Documentation/sysctl/net.txt
@@ -43,6 +43,17 @@ Values :
1 - enable the JIT
2 - enable the JIT and ask the compiler to emit traces on kernel log.
+bpf_jit_harden
+--------------
+
+This enables hardening for the Berkeley Packet Filter Just in Time compiler.
+It is supported by the eBPF JIT backends. Enabling hardening trades off
+performance, but can mitigate JIT spraying.
+Values :
+ 0 - disable JIT hardening (default value)
+ 1 - enable JIT hardening for unprivileged users only
+ 2 - enable JIT hardening for all users
+
dev_weight
--------------
diff --git a/Documentation/sysctl/vm.txt b/Documentation/sysctl/vm.txt
index 34a5fece3121..720355cbdf45 100644
--- a/Documentation/sysctl/vm.txt
+++ b/Documentation/sysctl/vm.txt
@@ -57,6 +57,7 @@ Currently, these files are in /proc/sys/vm:
- panic_on_oom
- percpu_pagelist_fraction
- stat_interval
+- stat_refresh
- swappiness
- user_reserve_kbytes
- vfs_cache_pressure
@@ -755,6 +756,19 @@ is 1 second.
==============================================================
+stat_refresh
+
+Any read or write (by root only) flushes all the per-cpu vm statistics
+into their global totals, for more accurate reports when testing
+e.g. cat /proc/sys/vm/stat_refresh /proc/meminfo
+
+As a side-effect, it also checks for negative totals (elsewhere reported
+as 0) and "fails" with EINVAL if any are found, with a warning in dmesg.
+(At time of writing, a few stats are known sometimes to be found negative,
+with no ill effects: errors and warnings on these stats are suppressed.)
+
+==============================================================
+
swappiness
This control is used to define how aggressive the kernel will swap
diff --git a/Documentation/sysrq.txt b/Documentation/sysrq.txt
index 13f5619b2203..3a3b30ac2a75 100644
--- a/Documentation/sysrq.txt
+++ b/Documentation/sysrq.txt
@@ -212,7 +212,7 @@ it is currently registered in that slot. This is in case the slot has been
overwritten since you registered it.
The Magic SysRQ system works by registering key operations against a key op
-lookup table, which is defined in 'drivers/char/sysrq.c'. This key table has
+lookup table, which is defined in 'drivers/tty/sysrq.c'. This key table has
a number of operations registered into it at compile time, but is mutable,
and 2 functions are exported for interface to it:
register_sysrq_key and unregister_sysrq_key.
diff --git a/Documentation/target/tcm_mod_builder.py b/Documentation/target/tcm_mod_builder.py
index 7d370c9b1450..94bf6944bb1e 100755
--- a/Documentation/target/tcm_mod_builder.py
+++ b/Documentation/target/tcm_mod_builder.py
@@ -294,8 +294,6 @@ def tcm_mod_build_configfs(proto_ident, fabric_mod_dir_var, fabric_mod_name):
buf += " .tpg_check_prod_mode_write_protect = " + fabric_mod_name + "_check_false,\n"
buf += " .tpg_get_inst_index = " + fabric_mod_name + "_tpg_get_inst_index,\n"
buf += " .release_cmd = " + fabric_mod_name + "_release_cmd,\n"
- buf += " .shutdown_session = " + fabric_mod_name + "_shutdown_session,\n"
- buf += " .close_session = " + fabric_mod_name + "_close_session,\n"
buf += " .sess_get_index = " + fabric_mod_name + "_sess_get_index,\n"
buf += " .sess_get_initiator_sid = NULL,\n"
buf += " .write_pending = " + fabric_mod_name + "_write_pending,\n"
@@ -467,20 +465,6 @@ def tcm_mod_dump_fabric_ops(proto_ident, fabric_mod_dir_var, fabric_mod_name):
buf += "}\n\n"
bufi += "void " + fabric_mod_name + "_release_cmd(struct se_cmd *);\n"
- if re.search('shutdown_session\)\(', fo):
- buf += "int " + fabric_mod_name + "_shutdown_session(struct se_session *se_sess)\n"
- buf += "{\n"
- buf += " return 0;\n"
- buf += "}\n\n"
- bufi += "int " + fabric_mod_name + "_shutdown_session(struct se_session *);\n"
-
- if re.search('close_session\)\(', fo):
- buf += "void " + fabric_mod_name + "_close_session(struct se_session *se_sess)\n"
- buf += "{\n"
- buf += " return;\n"
- buf += "}\n\n"
- bufi += "void " + fabric_mod_name + "_close_session(struct se_session *);\n"
-
if re.search('sess_get_index\)\(', fo):
buf += "u32 " + fabric_mod_name + "_sess_get_index(struct se_session *se_sess)\n"
buf += "{\n"
diff --git a/Documentation/thermal/sysfs-api.txt b/Documentation/thermal/sysfs-api.txt
index ed419d6c8dec..efc3f3d293c4 100644
--- a/Documentation/thermal/sysfs-api.txt
+++ b/Documentation/thermal/sysfs-api.txt
@@ -69,8 +69,8 @@ temperature) and throttle appropriate devices.
1.1.2 void thermal_zone_device_unregister(struct thermal_zone_device *tz)
This interface function removes the thermal zone device.
- It deletes the corresponding entry form /sys/class/thermal folder and
- unbind all the thermal cooling devices it uses.
+ It deletes the corresponding entry from /sys/class/thermal folder and
+ unbinds all the thermal cooling devices it uses.
1.1.3 struct thermal_zone_device *thermal_zone_of_sensor_register(
struct device *dev, int sensor_id, void *data,
@@ -146,32 +146,32 @@ temperature) and throttle appropriate devices.
This interface function adds a new thermal cooling device (fan/processor/...)
to /sys/class/thermal/ folder as cooling_device[0-*]. It tries to bind itself
- to all the thermal zone devices register at the same time.
+ to all the thermal zone devices registered at the same time.
name: the cooling device name.
devdata: device private data.
ops: thermal cooling devices call-backs.
.get_max_state: get the Maximum throttle state of the cooling device.
- .get_cur_state: get the Current throttle state of the cooling device.
+ .get_cur_state: get the Currently requested throttle state of the cooling device.
.set_cur_state: set the Current throttle state of the cooling device.
1.2.2 void thermal_cooling_device_unregister(struct thermal_cooling_device *cdev)
- This interface function remove the thermal cooling device.
- It deletes the corresponding entry form /sys/class/thermal folder and
- unbind itself from all the thermal zone devices using it.
+ This interface function removes the thermal cooling device.
+ It deletes the corresponding entry from /sys/class/thermal folder and
+ unbinds itself from all the thermal zone devices using it.
1.3 interface for binding a thermal zone device with a thermal cooling device
1.3.1 int thermal_zone_bind_cooling_device(struct thermal_zone_device *tz,
int trip, struct thermal_cooling_device *cdev,
unsigned long upper, unsigned long lower, unsigned int weight);
- This interface function bind a thermal cooling device to the certain trip
+ This interface function binds a thermal cooling device to a particular trip
point of a thermal zone device.
This function is usually called in the thermal zone device .bind callback.
tz: the thermal zone device
cdev: thermal cooling device
- trip: indicates which trip point the cooling devices is associated with
- in this thermal zone.
+ trip: indicates which trip point in this thermal zone the cooling device
+ is associated with.
upper:the Maximum cooling state for this trip point.
THERMAL_NO_LIMIT means no upper limit,
and the cooling device can be in max_state.
@@ -184,13 +184,13 @@ temperature) and throttle appropriate devices.
1.3.2 int thermal_zone_unbind_cooling_device(struct thermal_zone_device *tz,
int trip, struct thermal_cooling_device *cdev);
- This interface function unbind a thermal cooling device from the certain
+ This interface function unbinds a thermal cooling device from a particular
trip point of a thermal zone device. This function is usually called in
the thermal zone device .unbind callback.
tz: the thermal zone device
cdev: thermal cooling device
- trip: indicates which trip point the cooling devices is associated with
- in this thermal zone.
+ trip: indicates which trip point in this thermal zone the cooling device
+ is associated with.
1.4 Thermal Zone Parameters
1.4.1 struct thermal_bind_params
@@ -210,13 +210,13 @@ temperature) and throttle appropriate devices.
this thermal zone and cdev, for a particular trip point.
If nth bit is set, then the cdev and thermal zone are bound
for trip point n.
- .limits: This is an array of cooling state limits. Must have exactly
- 2 * thermal_zone.number_of_trip_points. It is an array consisting
- of tuples <lower-state upper-state> of state limits. Each trip
- will be associated with one state limit tuple when binding.
- A NULL pointer means <THERMAL_NO_LIMITS THERMAL_NO_LIMITS>
- on all trips. These limits are used when binding a cdev to a
- trip point.
+ .binding_limits: This is an array of cooling state limits. Must have
+ exactly 2 * thermal_zone.number_of_trip_points. It is an
+ array consisting of tuples <lower-state upper-state> of
+ state limits. Each trip will be associated with one state
+ limit tuple when binding. A NULL pointer means
+ <THERMAL_NO_LIMITS THERMAL_NO_LIMITS> on all trips.
+ These limits are used when binding a cdev to a trip point.
.match: This call back returns success(0) if the 'tz and cdev' need to
be bound, as per platform data.
1.4.2 struct thermal_zone_params
@@ -351,8 +351,8 @@ cdev[0-*]
RO, Optional
cdev[0-*]_trip_point
- The trip point with which cdev[0-*] is associated in this thermal
- zone; -1 means the cooling device is not associated with any trip
+ The trip point in this thermal zone which cdev[0-*] is associated
+ with; -1 means the cooling device is not associated with any trip
point.
RO, Optional
diff --git a/Documentation/timers/hrtimers.txt b/Documentation/timers/hrtimers.txt
index ce31f65e12e7..588d85724f10 100644
--- a/Documentation/timers/hrtimers.txt
+++ b/Documentation/timers/hrtimers.txt
@@ -28,9 +28,9 @@ several reasons why such integration is hard/impossible:
- the unpredictable [O(N)] overhead of cascading leads to delays which
necessitate a more complex handling of high resolution timers, which
- in turn decreases robustness. Such a design still led to rather large
+ in turn decreases robustness. Such a design still leads to rather large
timing inaccuracies. Cascading is a fundamental property of the timer
- wheel concept, it cannot be 'designed out' without unevitably
+ wheel concept, it cannot be 'designed out' without inevitably
degrading other portions of the timers.c code in an unacceptable way.
- the implementation of the current posix-timer subsystem on top of
@@ -119,7 +119,7 @@ was not really a win, due to the different data structures. Also, the
hrtimer functions now have clearer behavior and clearer names - such as
hrtimer_try_to_cancel() and hrtimer_cancel() [which are roughly
equivalent to del_timer() and del_timer_sync()] - so there's no direct
-1:1 mapping between them on the algorithmical level, and thus no real
+1:1 mapping between them on the algorithmic level, and thus no real
potential for code sharing either.
Basic data types: every time value, absolute or relative, is in a
diff --git a/Documentation/trace/coresight.txt b/Documentation/trace/coresight.txt
index 0a5c3290e732..a33c88cd5d1d 100644
--- a/Documentation/trace/coresight.txt
+++ b/Documentation/trace/coresight.txt
@@ -190,8 +190,8 @@ expected to be accessed and controlled using those entries.
Last but not least, "struct module *owner" is expected to be set to reflect
the information carried in "THIS_MODULE".
-How to use
-----------
+How to use the tracer modules
+-----------------------------
Before trace collection can start, a coresight sink needs to be identify.
There is no limit on the amount of sinks (nor sources) that can be enabled at
@@ -297,3 +297,36 @@ Info Tracing enabled
Instruction 13570831 0x8026B584 E28DD00C false ADD sp,sp,#0xc
Instruction 0 0x8026B588 E8BD8000 true LDM sp!,{pc}
Timestamp Timestamp: 17107041535
+
+How to use the STM module
+-------------------------
+
+Using the System Trace Macrocell module is the same as using the tracers - the
+only difference is that the trace capture is driven by clients writing data
+rather than by the program flow through the code.
+
+As with any other CoreSight component, specifics about the STM tracer can be
+found in sysfs, with more information on each entry available in [1]:
+
+root@genericarmv8:~# ls /sys/bus/coresight/devices/20100000.stm
+enable_source hwevent_select port_enable subsystem uevent
+hwevent_enable mgmt port_select traceid
+root@genericarmv8:~#
+
+Like any other source, a sink needs to be identified and the STM enabled before
+being used:
+
+root@genericarmv8:~# echo 1 > /sys/bus/coresight/devices/20010000.etf/enable_sink
+root@genericarmv8:~# echo 1 > /sys/bus/coresight/devices/20100000.stm/enable_source
+
+From there user space applications can request and use channels using the devfs
+interface provided for that purpose by the generic STM API:
+
+root@genericarmv8:~# ls -l /dev/20100000.stm
+crw------- 1 root root 10, 61 Jan 3 18:11 /dev/20100000.stm
+root@genericarmv8:~#
+
+Details on how to use the generic STM API can be found here [2].
+
+[1]. Documentation/ABI/testing/sysfs-bus-coresight-devices-stm
+[2]. Documentation/trace/stm.txt
diff --git a/Documentation/trace/events.txt b/Documentation/trace/events.txt
index c010be8c85d7..08d74d75150d 100644
--- a/Documentation/trace/events.txt
+++ b/Documentation/trace/events.txt
@@ -512,3 +512,1558 @@ The following commands are supported:
Note that there can be only one traceon or traceoff trigger per
triggering event.
+
+- hist
+
+ This command aggregates event hits into a hash table keyed on one or
+ more trace event format fields (or stacktrace) and a set of running
+ totals derived from one or more trace event format fields and/or
+ event counts (hitcount).
+
+ The format of a hist trigger is as follows:
+
+ hist:keys=<field1[,field2,...]>[:values=<field1[,field2,...]>]
+ [:sort=<field1[,field2,...]>][:size=#entries][:pause][:continue]
+ [:clear][:name=histname1] [if <filter>]
+
+ When a matching event is hit, an entry is added to a hash table
+ using the key(s) and value(s) named. Keys and values correspond to
+ fields in the event's format description. Values must correspond to
+ numeric fields - on an event hit, the value(s) will be added to a
+ sum kept for that field. The special string 'hitcount' can be used
+ in place of an explicit value field - this is simply a count of
+ event hits. If 'values' isn't specified, an implicit 'hitcount'
+ value will be automatically created and used as the only value.
+ Keys can be any field, or the special string 'stacktrace', which
+ will use the event's kernel stacktrace as the key. The keywords
+ 'keys' or 'key' can be used to specify keys, and the keywords
+ 'values', 'vals', or 'val' can be used to specify values. Compound
+ keys consisting of up to two fields can be specified by the 'keys'
+ keyword. Hashing a compound key produces a unique entry in the
+ table for each unique combination of component keys, and can be
+ useful for providing more fine-grained summaries of event data.
+ Additionally, sort keys consisting of up to two fields can be
+ specified by the 'sort' keyword. If more than one field is
+ specified, the result will be a 'sort within a sort': the first key
+ is taken to be the primary sort key and the second the secondary
+ key. If a hist trigger is given a name using the 'name' parameter,
+ its histogram data will be shared with other triggers of the same
+ name, and trigger hits will update this common data. Only triggers
+ with 'compatible' fields can be combined in this way; triggers are
+ 'compatible' if the fields named in the trigger share the same
+ number and type of fields and those fields also have the same names.
+ Note that any two events always share the compatible 'hitcount' and
+ 'stacktrace' fields and can therefore be combined using those
+ fields, however pointless that may be.
+
+ 'hist' triggers add a 'hist' file to each event's subdirectory.
+ Reading the 'hist' file for the event will dump the hash table in
+ its entirety to stdout. If there are multiple hist triggers
+ attached to an event, there will be a table for each trigger in the
+ output. The table displayed for a named trigger will be the same as
+ any other instance having the same name. Each printed hash table
+ entry is a simple list of the keys and values comprising the entry;
+ keys are printed first and are delineated by curly braces, and are
+ followed by the set of value fields for the entry. By default,
+ numeric fields are displayed as base-10 integers. This can be
+ modified by appending any of the following modifiers to the field
+ name:
+
+ .hex display a number as a hex value
+ .sym display an address as a symbol
+ .sym-offset display an address as a symbol and offset
+ .syscall display a syscall id as a system call name
+ .execname display a common_pid as a program name
+
+ Note that in general the semantics of a given field aren't
+ interpreted when applying a modifier to it, but there are some
+ restrictions to be aware of in this regard:
+
+ - only the 'hex' modifier can be used for values (because values
+ are essentially sums, and the other modifiers don't make sense
+ in that context).
+ - the 'execname' modifier can only be used on a 'common_pid'. The
+ reason for this is that the execname is simply the 'comm' value
+ saved for the 'current' process when an event was triggered,
+ which is the same as the common_pid value saved by the event
+ tracing code. Trying to apply that comm value to other pid
+ values wouldn't be correct, and typically events that care save
+ pid-specific comm fields in the event itself.
+
+ A typical usage scenario would be the following to enable a hist
+ trigger, read its current contents, and then turn it off:
+
+ # echo 'hist:keys=skbaddr.hex:vals=len' > \
+ /sys/kernel/debug/tracing/events/net/netif_rx/trigger
+
+ # cat /sys/kernel/debug/tracing/events/net/netif_rx/hist
+
+ # echo '!hist:keys=skbaddr.hex:vals=len' > \
+ /sys/kernel/debug/tracing/events/net/netif_rx/trigger
+
+ The trigger file itself can be read to show the details of the
+ currently attached hist trigger. This information is also displayed
+ at the top of the 'hist' file when read.
+
+ By default, the size of the hash table is 2048 entries. The 'size'
+ parameter can be used to specify more or fewer than that. The units
+ are in terms of hashtable entries - if a run uses more entries than
+ specified, the results will show the number of 'drops', the number
+ of hits that were ignored. The size should be a power of 2 between
+ 128 and 131072 (any non-power-of-2 number specified will be rounded
+ up).
+
+ The 'sort' parameter can be used to specify a value field to sort
+ on. The default if unspecified is 'hitcount' and the default sort
+ order is 'ascending'. To sort in the opposite direction, append
+ '.descending' to the sort key.
+
+ The 'pause' parameter can be used to pause an existing hist trigger
+ or to start a hist trigger but not log any events until told to do
+ so. 'continue' or 'cont' can be used to start or restart a paused
+ hist trigger.
+
+ The 'clear' parameter will clear the contents of a running hist
+ trigger and leave its current paused/active state unchanged.
+
+ Note that the 'pause', 'cont', and 'clear' parameters should be
+ applied using 'append' shell operator ('>>') if applied to an
+ existing trigger, rather than via the '>' operator, which will cause
+ the trigger to be removed through truncation.
+
+- enable_hist/disable_hist
+
+ The enable_hist and disable_hist triggers can be used to have one
+ event conditionally start and stop another event's already-attached
+ hist trigger. Any number of enable_hist and disable_hist triggers
+ can be attached to a given event, allowing that event to kick off
+ and stop aggregations on a host of other events.
+
+ The format is very similar to the enable/disable_event triggers:
+
+ enable_hist:<system>:<event>[:count]
+ disable_hist:<system>:<event>[:count]
+
+ Instead of enabling or disabling the tracing of the target event
+ into the trace buffer as the enable/disable_event triggers do, the
+ enable/disable_hist triggers enable or disable the aggregation of
+ the target event into a hash table.
+
+ A typical usage scenario for the enable_hist/disable_hist triggers
+ would be to first set up a paused hist trigger on some event,
+ followed by an enable_hist/disable_hist pair that turns the hist
+ aggregation on and off when conditions of interest are hit:
+
+ # echo 'hist:keys=skbaddr.hex:vals=len:pause' > \
+ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+
+ # echo 'enable_hist:net:netif_receive_skb if filename==/usr/bin/wget' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
+
+ # echo 'disable_hist:net:netif_receive_skb if comm==wget' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_exit/trigger
+
+ The above sets up an initially paused hist trigger which is unpaused
+ and starts aggregating events when a given program is executed, and
+ which stops aggregating when the process exits and the hist trigger
+ is paused again.
+
+ The examples below provide a more concrete illustration of the
+ concepts and typical usage patterns discussed above.
+
+
+6.2 'hist' trigger examples
+---------------------------
+
+ The first set of examples creates aggregations using the kmalloc
+ event. The fields that can be used for the hist trigger are listed
+ in the kmalloc event's format file:
+
+ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/format
+ name: kmalloc
+ ID: 374
+ format:
+ field:unsigned short common_type; offset:0; size:2; signed:0;
+ field:unsigned char common_flags; offset:2; size:1; signed:0;
+ field:unsigned char common_preempt_count; offset:3; size:1; signed:0;
+ field:int common_pid; offset:4; size:4; signed:1;
+
+ field:unsigned long call_site; offset:8; size:8; signed:0;
+ field:const void * ptr; offset:16; size:8; signed:0;
+ field:size_t bytes_req; offset:24; size:8; signed:0;
+ field:size_t bytes_alloc; offset:32; size:8; signed:0;
+ field:gfp_t gfp_flags; offset:40; size:4; signed:0;
+
+ We'll start by creating a hist trigger that generates a simple table
+ that lists the total number of bytes requested for each function in
+ the kernel that made one or more calls to kmalloc:
+
+ # echo 'hist:key=call_site:val=bytes_req' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ This tells the tracing system to create a 'hist' trigger using the
+ call_site field of the kmalloc event as the key for the table, which
+ just means that each unique call_site address will have an entry
+ created for it in the table. The 'val=bytes_req' parameter tells
+ the hist trigger that for each unique entry (call_site) in the
+ table, it should keep a running total of the number of bytes
+ requested by that call_site.
+
+ We'll let it run for a while and then dump the contents of the 'hist'
+ file in the kmalloc event's subdirectory (for readability, a number
+ of entries have been omitted):
+
+ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+ # trigger info: hist:keys=call_site:vals=bytes_req:sort=hitcount:size=2048 [active]
+
+ { call_site: 18446744072106379007 } hitcount: 1 bytes_req: 176
+ { call_site: 18446744071579557049 } hitcount: 1 bytes_req: 1024
+ { call_site: 18446744071580608289 } hitcount: 1 bytes_req: 16384
+ { call_site: 18446744071581827654 } hitcount: 1 bytes_req: 24
+ { call_site: 18446744071580700980 } hitcount: 1 bytes_req: 8
+ { call_site: 18446744071579359876 } hitcount: 1 bytes_req: 152
+ { call_site: 18446744071580795365 } hitcount: 3 bytes_req: 144
+ { call_site: 18446744071581303129 } hitcount: 3 bytes_req: 144
+ { call_site: 18446744071580713234 } hitcount: 4 bytes_req: 2560
+ { call_site: 18446744071580933750 } hitcount: 4 bytes_req: 736
+ .
+ .
+ .
+ { call_site: 18446744072106047046 } hitcount: 69 bytes_req: 5576
+ { call_site: 18446744071582116407 } hitcount: 73 bytes_req: 2336
+ { call_site: 18446744072106054684 } hitcount: 136 bytes_req: 140504
+ { call_site: 18446744072106224230 } hitcount: 136 bytes_req: 19584
+ { call_site: 18446744072106078074 } hitcount: 153 bytes_req: 2448
+ { call_site: 18446744072106062406 } hitcount: 153 bytes_req: 36720
+ { call_site: 18446744071582507929 } hitcount: 153 bytes_req: 37088
+ { call_site: 18446744072102520590 } hitcount: 273 bytes_req: 10920
+ { call_site: 18446744071582143559 } hitcount: 358 bytes_req: 716
+ { call_site: 18446744072106465852 } hitcount: 417 bytes_req: 56712
+ { call_site: 18446744072102523378 } hitcount: 485 bytes_req: 27160
+ { call_site: 18446744072099568646 } hitcount: 1676 bytes_req: 33520
+
+ Totals:
+ Hits: 4610
+ Entries: 45
+ Dropped: 0
+
+ The output displays a line for each entry, beginning with the key
+ specified in the trigger, followed by the value(s) also specified in
+ the trigger. At the beginning of the output is a line that displays
+ the trigger info, which can also be displayed by reading the
+ 'trigger' file:
+
+ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+ hist:keys=call_site:vals=bytes_req:sort=hitcount:size=2048 [active]
+
+ At the end of the output are a few lines that display the overall
+ totals for the run. The 'Hits' field shows the total number of
+ times the event trigger was hit, the 'Entries' field shows the total
+ number of used entries in the hash table, and the 'Dropped' field
+ shows the number of hits that were dropped because the number of
+ used entries for the run exceeded the maximum number of entries
+ allowed for the table (normally 0; if not, it's a hint that you may
+ want to increase the size of the table using the 'size' parameter).
+
+ Notice in the above output that there's an extra field, 'hitcount',
+ which wasn't specified in the trigger. Also notice that in the
+ trigger info output, there's a parameter, 'sort=hitcount', which
+ wasn't specified in the trigger either. The reason for that is that
+ every trigger implicitly keeps a count of the total number of hits
+ attributed to a given entry, called the 'hitcount'. That hitcount
+ information is explicitly displayed in the output, and in the
+ absence of a user-specified sort parameter, is used as the default
+ sort field.
+
+ The value 'hitcount' can be used in place of an explicit value in
+ the 'values' parameter if you don't really need to have any
+ particular field summed and are mainly interested in hit
+ frequencies.
+
+ To turn the hist trigger off, simply call up the trigger in the
+ command history and re-execute it with a '!' prepended:
+
+ # echo '!hist:key=call_site:val=bytes_req' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ Finally, notice that the call_site as displayed in the output above
+ isn't really very useful. It's an address, but normally addresses
+ are displayed in hex. To have a numeric field displayed as a hex
+ value, simply append '.hex' to the field name in the trigger:
+
+ # echo 'hist:key=call_site.hex:val=bytes_req' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+ # trigger info: hist:keys=call_site.hex:vals=bytes_req:sort=hitcount:size=2048 [active]
+
+ { call_site: ffffffffa026b291 } hitcount: 1 bytes_req: 433
+ { call_site: ffffffffa07186ff } hitcount: 1 bytes_req: 176
+ { call_site: ffffffff811ae721 } hitcount: 1 bytes_req: 16384
+ { call_site: ffffffff811c5134 } hitcount: 1 bytes_req: 8
+ { call_site: ffffffffa04a9ebb } hitcount: 1 bytes_req: 511
+ { call_site: ffffffff8122e0a6 } hitcount: 1 bytes_req: 12
+ { call_site: ffffffff8107da84 } hitcount: 1 bytes_req: 152
+ { call_site: ffffffff812d8246 } hitcount: 1 bytes_req: 24
+ { call_site: ffffffff811dc1e5 } hitcount: 3 bytes_req: 144
+ { call_site: ffffffffa02515e8 } hitcount: 3 bytes_req: 648
+ { call_site: ffffffff81258159 } hitcount: 3 bytes_req: 144
+ { call_site: ffffffff811c80f4 } hitcount: 4 bytes_req: 544
+ .
+ .
+ .
+ { call_site: ffffffffa06c7646 } hitcount: 106 bytes_req: 8024
+ { call_site: ffffffffa06cb246 } hitcount: 132 bytes_req: 31680
+ { call_site: ffffffffa06cef7a } hitcount: 132 bytes_req: 2112
+ { call_site: ffffffff8137e399 } hitcount: 132 bytes_req: 23232
+ { call_site: ffffffffa06c941c } hitcount: 185 bytes_req: 171360
+ { call_site: ffffffffa06f2a66 } hitcount: 185 bytes_req: 26640
+ { call_site: ffffffffa036a70e } hitcount: 265 bytes_req: 10600
+ { call_site: ffffffff81325447 } hitcount: 292 bytes_req: 584
+ { call_site: ffffffffa072da3c } hitcount: 446 bytes_req: 60656
+ { call_site: ffffffffa036b1f2 } hitcount: 526 bytes_req: 29456
+ { call_site: ffffffffa0099c06 } hitcount: 1780 bytes_req: 35600
+
+ Totals:
+ Hits: 4775
+ Entries: 46
+ Dropped: 0
+
+ Even that's only marginally more useful - while hex values do look
+ more like addresses, what users are typically more interested in
+ when looking at text addresses are the corresponding symbols
+ instead. To have an address displayed as a symbolic value,
+ simply append '.sym' or '.sym-offset' to the field name in the
+ trigger:
+
+ # echo 'hist:key=call_site.sym:val=bytes_req' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+ # trigger info: hist:keys=call_site.sym:vals=bytes_req:sort=hitcount:size=2048 [active]
+
+ { call_site: [ffffffff810adcb9] syslog_print_all } hitcount: 1 bytes_req: 1024
+ { call_site: [ffffffff8154bc62] usb_control_msg } hitcount: 1 bytes_req: 8
+ { call_site: [ffffffffa00bf6fe] hidraw_send_report [hid] } hitcount: 1 bytes_req: 7
+ { call_site: [ffffffff8154acbe] usb_alloc_urb } hitcount: 1 bytes_req: 192
+ { call_site: [ffffffffa00bf1ca] hidraw_report_event [hid] } hitcount: 1 bytes_req: 7
+ { call_site: [ffffffff811e3a25] __seq_open_private } hitcount: 1 bytes_req: 40
+ { call_site: [ffffffff8109524a] alloc_fair_sched_group } hitcount: 2 bytes_req: 128
+ { call_site: [ffffffff811febd5] fsnotify_alloc_group } hitcount: 2 bytes_req: 528
+ { call_site: [ffffffff81440f58] __tty_buffer_request_room } hitcount: 2 bytes_req: 2624
+ { call_site: [ffffffff81200ba6] inotify_new_group } hitcount: 2 bytes_req: 96
+ { call_site: [ffffffffa05e19af] ieee80211_start_tx_ba_session [mac80211] } hitcount: 2 bytes_req: 464
+ { call_site: [ffffffff81672406] tcp_get_metrics } hitcount: 2 bytes_req: 304
+ { call_site: [ffffffff81097ec2] alloc_rt_sched_group } hitcount: 2 bytes_req: 128
+ { call_site: [ffffffff81089b05] sched_create_group } hitcount: 2 bytes_req: 1424
+ .
+ .
+ .
+ { call_site: [ffffffffa04a580c] intel_crtc_page_flip [i915] } hitcount: 1185 bytes_req: 123240
+ { call_site: [ffffffffa0287592] drm_mode_page_flip_ioctl [drm] } hitcount: 1185 bytes_req: 104280
+ { call_site: [ffffffffa04c4a3c] intel_plane_duplicate_state [i915] } hitcount: 1402 bytes_req: 190672
+ { call_site: [ffffffff812891ca] ext4_find_extent } hitcount: 1518 bytes_req: 146208
+ { call_site: [ffffffffa029070e] drm_vma_node_allow [drm] } hitcount: 1746 bytes_req: 69840
+ { call_site: [ffffffffa045e7c4] i915_gem_do_execbuffer.isra.23 [i915] } hitcount: 2021 bytes_req: 792312
+ { call_site: [ffffffffa02911f2] drm_modeset_lock_crtc [drm] } hitcount: 2592 bytes_req: 145152
+ { call_site: [ffffffffa0489a66] intel_ring_begin [i915] } hitcount: 2629 bytes_req: 378576
+ { call_site: [ffffffffa046041c] i915_gem_execbuffer2 [i915] } hitcount: 2629 bytes_req: 3783248
+ { call_site: [ffffffff81325607] apparmor_file_alloc_security } hitcount: 5192 bytes_req: 10384
+ { call_site: [ffffffffa00b7c06] hid_report_raw_event [hid] } hitcount: 5529 bytes_req: 110584
+ { call_site: [ffffffff8131ebf7] aa_alloc_task_context } hitcount: 21943 bytes_req: 702176
+ { call_site: [ffffffff8125847d] ext4_htree_store_dirent } hitcount: 55759 bytes_req: 5074265
+
+ Totals:
+ Hits: 109928
+ Entries: 71
+ Dropped: 0
+
+ Because the default sort key above is 'hitcount', the above shows the
+ list of call_sites by increasing hitcount, so that at the bottom we
+ see the functions that made the most kmalloc calls during the run.
+ If instead we wanted to see the top kmalloc callers in
+ terms of the number of bytes requested rather than the number of
+ calls, and we wanted the top caller to appear at the top, we can use
+ the 'sort' parameter, along with the 'descending' modifier:
+
+ # echo 'hist:key=call_site.sym:val=bytes_req:sort=bytes_req.descending' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+ # trigger info: hist:keys=call_site.sym:vals=bytes_req:sort=bytes_req.descending:size=2048 [active]
+
+ { call_site: [ffffffffa046041c] i915_gem_execbuffer2 [i915] } hitcount: 2186 bytes_req: 3397464
+ { call_site: [ffffffffa045e7c4] i915_gem_do_execbuffer.isra.23 [i915] } hitcount: 1790 bytes_req: 712176
+ { call_site: [ffffffff8125847d] ext4_htree_store_dirent } hitcount: 8132 bytes_req: 513135
+ { call_site: [ffffffff811e2a1b] seq_buf_alloc } hitcount: 106 bytes_req: 440128
+ { call_site: [ffffffffa0489a66] intel_ring_begin [i915] } hitcount: 2186 bytes_req: 314784
+ { call_site: [ffffffff812891ca] ext4_find_extent } hitcount: 2174 bytes_req: 208992
+ { call_site: [ffffffff811ae8e1] __kmalloc } hitcount: 8 bytes_req: 131072
+ { call_site: [ffffffffa04c4a3c] intel_plane_duplicate_state [i915] } hitcount: 859 bytes_req: 116824
+ { call_site: [ffffffffa02911f2] drm_modeset_lock_crtc [drm] } hitcount: 1834 bytes_req: 102704
+ { call_site: [ffffffffa04a580c] intel_crtc_page_flip [i915] } hitcount: 972 bytes_req: 101088
+ { call_site: [ffffffffa0287592] drm_mode_page_flip_ioctl [drm] } hitcount: 972 bytes_req: 85536
+ { call_site: [ffffffffa00b7c06] hid_report_raw_event [hid] } hitcount: 3333 bytes_req: 66664
+ { call_site: [ffffffff8137e559] sg_kmalloc } hitcount: 209 bytes_req: 61632
+ .
+ .
+ .
+ { call_site: [ffffffff81095225] alloc_fair_sched_group } hitcount: 2 bytes_req: 128
+ { call_site: [ffffffff81097ec2] alloc_rt_sched_group } hitcount: 2 bytes_req: 128
+ { call_site: [ffffffff812d8406] copy_semundo } hitcount: 2 bytes_req: 48
+ { call_site: [ffffffff81200ba6] inotify_new_group } hitcount: 1 bytes_req: 48
+ { call_site: [ffffffffa027121a] drm_getmagic [drm] } hitcount: 1 bytes_req: 48
+ { call_site: [ffffffff811e3a25] __seq_open_private } hitcount: 1 bytes_req: 40
+ { call_site: [ffffffff811c52f4] bprm_change_interp } hitcount: 2 bytes_req: 16
+ { call_site: [ffffffff8154bc62] usb_control_msg } hitcount: 1 bytes_req: 8
+ { call_site: [ffffffffa00bf1ca] hidraw_report_event [hid] } hitcount: 1 bytes_req: 7
+ { call_site: [ffffffffa00bf6fe] hidraw_send_report [hid] } hitcount: 1 bytes_req: 7
+
+ Totals:
+ Hits: 32133
+ Entries: 81
+ Dropped: 0
+
+ To display the offset and size information in addition to the symbol
+ name, just use 'sym-offset' instead:
+
+ # echo 'hist:key=call_site.sym-offset:val=bytes_req:sort=bytes_req.descending' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+ # trigger info: hist:keys=call_site.sym-offset:vals=bytes_req:sort=bytes_req.descending:size=2048 [active]
+
+ { call_site: [ffffffffa046041c] i915_gem_execbuffer2+0x6c/0x2c0 [i915] } hitcount: 4569 bytes_req: 3163720
+ { call_site: [ffffffffa0489a66] intel_ring_begin+0xc6/0x1f0 [i915] } hitcount: 4569 bytes_req: 657936
+ { call_site: [ffffffffa045e7c4] i915_gem_do_execbuffer.isra.23+0x694/0x1020 [i915] } hitcount: 1519 bytes_req: 472936
+ { call_site: [ffffffffa045e646] i915_gem_do_execbuffer.isra.23+0x516/0x1020 [i915] } hitcount: 3050 bytes_req: 211832
+ { call_site: [ffffffff811e2a1b] seq_buf_alloc+0x1b/0x50 } hitcount: 34 bytes_req: 148384
+ { call_site: [ffffffffa04a580c] intel_crtc_page_flip+0xbc/0x870 [i915] } hitcount: 1385 bytes_req: 144040
+ { call_site: [ffffffff811ae8e1] __kmalloc+0x191/0x1b0 } hitcount: 8 bytes_req: 131072
+ { call_site: [ffffffffa0287592] drm_mode_page_flip_ioctl+0x282/0x360 [drm] } hitcount: 1385 bytes_req: 121880
+ { call_site: [ffffffffa02911f2] drm_modeset_lock_crtc+0x32/0x100 [drm] } hitcount: 1848 bytes_req: 103488
+ { call_site: [ffffffffa04c4a3c] intel_plane_duplicate_state+0x2c/0xa0 [i915] } hitcount: 461 bytes_req: 62696
+ { call_site: [ffffffffa029070e] drm_vma_node_allow+0x2e/0xd0 [drm] } hitcount: 1541 bytes_req: 61640
+ { call_site: [ffffffff815f8d7b] sk_prot_alloc+0xcb/0x1b0 } hitcount: 57 bytes_req: 57456
+ .
+ .
+ .
+ { call_site: [ffffffff8109524a] alloc_fair_sched_group+0x5a/0x1a0 } hitcount: 2 bytes_req: 128
+ { call_site: [ffffffffa027b921] drm_vm_open_locked+0x31/0xa0 [drm] } hitcount: 3 bytes_req: 96
+ { call_site: [ffffffff8122e266] proc_self_follow_link+0x76/0xb0 } hitcount: 8 bytes_req: 96
+ { call_site: [ffffffff81213e80] load_elf_binary+0x240/0x1650 } hitcount: 3 bytes_req: 84
+ { call_site: [ffffffff8154bc62] usb_control_msg+0x42/0x110 } hitcount: 1 bytes_req: 8
+ { call_site: [ffffffffa00bf6fe] hidraw_send_report+0x7e/0x1a0 [hid] } hitcount: 1 bytes_req: 7
+ { call_site: [ffffffffa00bf1ca] hidraw_report_event+0x8a/0x120 [hid] } hitcount: 1 bytes_req: 7
+
+ Totals:
+ Hits: 26098
+ Entries: 64
+ Dropped: 0
+
+ We can also add multiple fields to the 'values' parameter. For
+ example, we might want to see the total number of bytes allocated
+ alongside bytes requested, and display the result sorted by bytes
+ allocated in a descending order:
+
+ # echo 'hist:keys=call_site.sym:values=bytes_req,bytes_alloc:sort=bytes_alloc.descending' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+ # trigger info: hist:keys=call_site.sym:vals=bytes_req,bytes_alloc:sort=bytes_alloc.descending:size=2048 [active]
+
+ { call_site: [ffffffffa046041c] i915_gem_execbuffer2 [i915] } hitcount: 7403 bytes_req: 4084360 bytes_alloc: 5958016
+ { call_site: [ffffffff811e2a1b] seq_buf_alloc } hitcount: 541 bytes_req: 2213968 bytes_alloc: 2228224
+ { call_site: [ffffffffa0489a66] intel_ring_begin [i915] } hitcount: 7404 bytes_req: 1066176 bytes_alloc: 1421568
+ { call_site: [ffffffffa045e7c4] i915_gem_do_execbuffer.isra.23 [i915] } hitcount: 1565 bytes_req: 557368 bytes_alloc: 1037760
+ { call_site: [ffffffff8125847d] ext4_htree_store_dirent } hitcount: 9557 bytes_req: 595778 bytes_alloc: 695744
+ { call_site: [ffffffffa045e646] i915_gem_do_execbuffer.isra.23 [i915] } hitcount: 5839 bytes_req: 430680 bytes_alloc: 470400
+ { call_site: [ffffffffa04c4a3c] intel_plane_duplicate_state [i915] } hitcount: 2388 bytes_req: 324768 bytes_alloc: 458496
+ { call_site: [ffffffffa02911f2] drm_modeset_lock_crtc [drm] } hitcount: 3911 bytes_req: 219016 bytes_alloc: 250304
+ { call_site: [ffffffff815f8d7b] sk_prot_alloc } hitcount: 235 bytes_req: 236880 bytes_alloc: 240640
+ { call_site: [ffffffff8137e559] sg_kmalloc } hitcount: 557 bytes_req: 169024 bytes_alloc: 221760
+ { call_site: [ffffffffa00b7c06] hid_report_raw_event [hid] } hitcount: 9378 bytes_req: 187548 bytes_alloc: 206312
+ { call_site: [ffffffffa04a580c] intel_crtc_page_flip [i915] } hitcount: 1519 bytes_req: 157976 bytes_alloc: 194432
+ .
+ .
+ .
+ { call_site: [ffffffff8109bd3b] sched_autogroup_create_attach } hitcount: 2 bytes_req: 144 bytes_alloc: 192
+ { call_site: [ffffffff81097ee8] alloc_rt_sched_group } hitcount: 2 bytes_req: 128 bytes_alloc: 128
+ { call_site: [ffffffff8109524a] alloc_fair_sched_group } hitcount: 2 bytes_req: 128 bytes_alloc: 128
+ { call_site: [ffffffff81095225] alloc_fair_sched_group } hitcount: 2 bytes_req: 128 bytes_alloc: 128
+ { call_site: [ffffffff81097ec2] alloc_rt_sched_group } hitcount: 2 bytes_req: 128 bytes_alloc: 128
+ { call_site: [ffffffff81213e80] load_elf_binary } hitcount: 3 bytes_req: 84 bytes_alloc: 96
+ { call_site: [ffffffff81079a2e] kthread_create_on_node } hitcount: 1 bytes_req: 56 bytes_alloc: 64
+ { call_site: [ffffffffa00bf6fe] hidraw_send_report [hid] } hitcount: 1 bytes_req: 7 bytes_alloc: 8
+ { call_site: [ffffffff8154bc62] usb_control_msg } hitcount: 1 bytes_req: 8 bytes_alloc: 8
+ { call_site: [ffffffffa00bf1ca] hidraw_report_event [hid] } hitcount: 1 bytes_req: 7 bytes_alloc: 8
+
+ Totals:
+ Hits: 66598
+ Entries: 65
+ Dropped: 0
+
+ Finally, to finish off our kmalloc example, instead of simply having
+ the hist trigger display symbolic call_sites, we can have the hist
+ trigger additionally display the complete set of kernel stack traces
+ that led to each call_site. To do that, we simply use the special
+ value 'stacktrace' for the key parameter:
+
+ # echo 'hist:keys=stacktrace:values=bytes_req,bytes_alloc:sort=bytes_alloc' > \
+ /sys/kernel/debug/tracing/events/kmem/kmalloc/trigger
+
+ The above trigger will use the kernel stack trace in effect when an
+ event is triggered as the key for the hash table. This allows the
+ enumeration of every kernel callpath that led up to a particular
+ event, along with a running total of any of the event fields for
+ that event. Here we tally bytes requested and bytes allocated for
+ every callpath in the system that led up to a kmalloc (in this case
+ every callpath to a kmalloc for a kernel compile):
+
+ # cat /sys/kernel/debug/tracing/events/kmem/kmalloc/hist
+ # trigger info: hist:keys=stacktrace:vals=bytes_req,bytes_alloc:sort=bytes_alloc:size=2048 [active]
+
+ { stacktrace:
+ __kmalloc_track_caller+0x10b/0x1a0
+ kmemdup+0x20/0x50
+ hidraw_report_event+0x8a/0x120 [hid]
+ hid_report_raw_event+0x3ea/0x440 [hid]
+ hid_input_report+0x112/0x190 [hid]
+ hid_irq_in+0xc2/0x260 [usbhid]
+ __usb_hcd_giveback_urb+0x72/0x120
+ usb_giveback_urb_bh+0x9e/0xe0
+ tasklet_hi_action+0xf8/0x100
+ __do_softirq+0x114/0x2c0
+ irq_exit+0xa5/0xb0
+ do_IRQ+0x5a/0xf0
+ ret_from_intr+0x0/0x30
+ cpuidle_enter+0x17/0x20
+ cpu_startup_entry+0x315/0x3e0
+ rest_init+0x7c/0x80
+ } hitcount: 3 bytes_req: 21 bytes_alloc: 24
+ { stacktrace:
+ __kmalloc_track_caller+0x10b/0x1a0
+ kmemdup+0x20/0x50
+ hidraw_report_event+0x8a/0x120 [hid]
+ hid_report_raw_event+0x3ea/0x440 [hid]
+ hid_input_report+0x112/0x190 [hid]
+ hid_irq_in+0xc2/0x260 [usbhid]
+ __usb_hcd_giveback_urb+0x72/0x120
+ usb_giveback_urb_bh+0x9e/0xe0
+ tasklet_hi_action+0xf8/0x100
+ __do_softirq+0x114/0x2c0
+ irq_exit+0xa5/0xb0
+ do_IRQ+0x5a/0xf0
+ ret_from_intr+0x0/0x30
+ } hitcount: 3 bytes_req: 21 bytes_alloc: 24
+ { stacktrace:
+ kmem_cache_alloc_trace+0xeb/0x150
+ aa_alloc_task_context+0x27/0x40
+ apparmor_cred_prepare+0x1f/0x50
+ security_prepare_creds+0x16/0x20
+ prepare_creds+0xdf/0x1a0
+ SyS_capset+0xb5/0x200
+ system_call_fastpath+0x12/0x6a
+ } hitcount: 1 bytes_req: 32 bytes_alloc: 32
+ .
+ .
+ .
+ { stacktrace:
+ __kmalloc+0x11b/0x1b0
+ i915_gem_execbuffer2+0x6c/0x2c0 [i915]
+ drm_ioctl+0x349/0x670 [drm]
+ do_vfs_ioctl+0x2f0/0x4f0
+ SyS_ioctl+0x81/0xa0
+ system_call_fastpath+0x12/0x6a
+ } hitcount: 17726 bytes_req: 13944120 bytes_alloc: 19593808
+ { stacktrace:
+ __kmalloc+0x11b/0x1b0
+ load_elf_phdrs+0x76/0xa0
+ load_elf_binary+0x102/0x1650
+ search_binary_handler+0x97/0x1d0
+ do_execveat_common.isra.34+0x551/0x6e0
+ SyS_execve+0x3a/0x50
+ return_from_execve+0x0/0x23
+ } hitcount: 33348 bytes_req: 17152128 bytes_alloc: 20226048
+ { stacktrace:
+ kmem_cache_alloc_trace+0xeb/0x150
+ apparmor_file_alloc_security+0x27/0x40
+ security_file_alloc+0x16/0x20
+ get_empty_filp+0x93/0x1c0
+ path_openat+0x31/0x5f0
+ do_filp_open+0x3a/0x90
+ do_sys_open+0x128/0x220
+ SyS_open+0x1e/0x20
+ system_call_fastpath+0x12/0x6a
+ } hitcount: 4766422 bytes_req: 9532844 bytes_alloc: 38131376
+ { stacktrace:
+ __kmalloc+0x11b/0x1b0
+ seq_buf_alloc+0x1b/0x50
+ seq_read+0x2cc/0x370
+ proc_reg_read+0x3d/0x80
+ __vfs_read+0x28/0xe0
+ vfs_read+0x86/0x140
+ SyS_read+0x46/0xb0
+ system_call_fastpath+0x12/0x6a
+ } hitcount: 19133 bytes_req: 78368768 bytes_alloc: 78368768
+
+ Totals:
+ Hits: 6085872
+ Entries: 253
+ Dropped: 0
+
+ If you key a hist trigger on common_pid, in order for example to
+ gather and display sorted totals for each process, you can use the
+ special .execname modifier to display the executable names for the
+ processes in the table rather than raw pids. The example below
+ keeps a per-process sum of total bytes read:
+
+ # echo 'hist:key=common_pid.execname:val=count:sort=count.descending' > \
+ /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/trigger
+
+ # cat /sys/kernel/debug/tracing/events/syscalls/sys_enter_read/hist
+ # trigger info: hist:keys=common_pid.execname:vals=count:sort=count.descending:size=2048 [active]
+
+ { common_pid: gnome-terminal [ 3196] } hitcount: 280 count: 1093512
+ { common_pid: Xorg [ 1309] } hitcount: 525 count: 256640
+ { common_pid: compiz [ 2889] } hitcount: 59 count: 254400
+ { common_pid: bash [ 8710] } hitcount: 3 count: 66369
+ { common_pid: dbus-daemon-lau [ 8703] } hitcount: 49 count: 47739
+ { common_pid: irqbalance [ 1252] } hitcount: 27 count: 27648
+ { common_pid: 01ifupdown [ 8705] } hitcount: 3 count: 17216
+ { common_pid: dbus-daemon [ 772] } hitcount: 10 count: 12396
+ { common_pid: Socket Thread [ 8342] } hitcount: 11 count: 11264
+ { common_pid: nm-dhcp-client. [ 8701] } hitcount: 6 count: 7424
+ { common_pid: gmain [ 1315] } hitcount: 18 count: 6336
+ .
+ .
+ .
+ { common_pid: postgres [ 1892] } hitcount: 2 count: 32
+ { common_pid: postgres [ 1891] } hitcount: 2 count: 32
+ { common_pid: gmain [ 8704] } hitcount: 2 count: 32
+ { common_pid: upstart-dbus-br [ 2740] } hitcount: 21 count: 21
+ { common_pid: nm-dispatcher.a [ 8696] } hitcount: 1 count: 16
+ { common_pid: indicator-datet [ 2904] } hitcount: 1 count: 16
+ { common_pid: gdbus [ 2998] } hitcount: 1 count: 16
+ { common_pid: rtkit-daemon [ 2052] } hitcount: 1 count: 8
+ { common_pid: init [ 1] } hitcount: 2 count: 2
+
+ Totals:
+ Hits: 2116
+ Entries: 51
+ Dropped: 0
+
+ Similarly, if you key a hist trigger on syscall id, for example to
+ gather and display a list of systemwide syscall hits, you can use
+ the special .syscall modifier to display the syscall names rather
+ than raw ids. The example below keeps a running total of syscall
+ counts for the system during the run:
+
+ # echo 'hist:key=id.syscall:val=hitcount' > \
+ /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
+
+ # cat /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/hist
+ # trigger info: hist:keys=id.syscall:vals=hitcount:sort=hitcount:size=2048 [active]
+
+ { id: sys_fsync [ 74] } hitcount: 1
+ { id: sys_newuname [ 63] } hitcount: 1
+ { id: sys_prctl [157] } hitcount: 1
+ { id: sys_statfs [137] } hitcount: 1
+ { id: sys_symlink [ 88] } hitcount: 1
+ { id: sys_sendmmsg [307] } hitcount: 1
+ { id: sys_semctl [ 66] } hitcount: 1
+ { id: sys_readlink [ 89] } hitcount: 3
+ { id: sys_bind [ 49] } hitcount: 3
+ { id: sys_getsockname [ 51] } hitcount: 3
+ { id: sys_unlink [ 87] } hitcount: 3
+ { id: sys_rename [ 82] } hitcount: 4
+ { id: unknown_syscall [ 58] } hitcount: 4
+ { id: sys_connect [ 42] } hitcount: 4
+ { id: sys_getpid [ 39] } hitcount: 4
+ .
+ .
+ .
+ { id: sys_rt_sigprocmask [ 14] } hitcount: 952
+ { id: sys_futex [202] } hitcount: 1534
+ { id: sys_write [ 1] } hitcount: 2689
+ { id: sys_setitimer [ 38] } hitcount: 2797
+ { id: sys_read [ 0] } hitcount: 3202
+ { id: sys_select [ 23] } hitcount: 3773
+ { id: sys_writev [ 20] } hitcount: 4531
+ { id: sys_poll [ 7] } hitcount: 8314
+ { id: sys_recvmsg [ 47] } hitcount: 13738
+ { id: sys_ioctl [ 16] } hitcount: 21843
+
+ Totals:
+ Hits: 67612
+ Entries: 72
+ Dropped: 0
+
+ The syscall counts above provide a rough overall picture of system
+ call activity on the system; we can see for example that the most
+ popular system call on this system was the 'sys_ioctl' system call.
+
+ We can use 'compound' keys to refine that number and provide some
+ further insight as to which processes exactly contribute to the
+ overall ioctl count.
+
+ The command below keeps a hitcount for every unique combination of
+ system call id and pid - the end result is essentially a table
+ that keeps a per-pid sum of system call hits. The results are
+ sorted using the system call id as the primary key, and the
+ hitcount sum as the secondary key:
+
+ # echo 'hist:key=id.syscall,common_pid.execname:val=hitcount:sort=id,hitcount' > \
+ /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
+
+ # cat /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/hist
+ # trigger info: hist:keys=id.syscall,common_pid.execname:vals=hitcount:sort=id.syscall,hitcount:size=2048 [active]
+
+ { id: sys_read [ 0], common_pid: rtkit-daemon [ 1877] } hitcount: 1
+ { id: sys_read [ 0], common_pid: gdbus [ 2976] } hitcount: 1
+ { id: sys_read [ 0], common_pid: console-kit-dae [ 3400] } hitcount: 1
+ { id: sys_read [ 0], common_pid: postgres [ 1865] } hitcount: 1
+ { id: sys_read [ 0], common_pid: deja-dup-monito [ 3543] } hitcount: 2
+ { id: sys_read [ 0], common_pid: NetworkManager [ 890] } hitcount: 2
+ { id: sys_read [ 0], common_pid: evolution-calen [ 3048] } hitcount: 2
+ { id: sys_read [ 0], common_pid: postgres [ 1864] } hitcount: 2
+ { id: sys_read [ 0], common_pid: nm-applet [ 3022] } hitcount: 2
+ { id: sys_read [ 0], common_pid: whoopsie [ 1212] } hitcount: 2
+ .
+ .
+ .
+ { id: sys_ioctl [ 16], common_pid: bash [ 8479] } hitcount: 1
+ { id: sys_ioctl [ 16], common_pid: bash [ 3472] } hitcount: 12
+ { id: sys_ioctl [ 16], common_pid: gnome-terminal [ 3199] } hitcount: 16
+ { id: sys_ioctl [ 16], common_pid: Xorg [ 1267] } hitcount: 1808
+ { id: sys_ioctl [ 16], common_pid: compiz [ 2994] } hitcount: 5580
+ .
+ .
+ .
+ { id: sys_waitid [247], common_pid: upstart-dbus-br [ 2690] } hitcount: 3
+ { id: sys_waitid [247], common_pid: upstart-dbus-br [ 2688] } hitcount: 16
+ { id: sys_inotify_add_watch [254], common_pid: gmain [ 975] } hitcount: 2
+ { id: sys_inotify_add_watch [254], common_pid: gmain [ 3204] } hitcount: 4
+ { id: sys_inotify_add_watch [254], common_pid: gmain [ 2888] } hitcount: 4
+ { id: sys_inotify_add_watch [254], common_pid: gmain [ 3003] } hitcount: 4
+ { id: sys_inotify_add_watch [254], common_pid: gmain [ 2873] } hitcount: 4
+ { id: sys_inotify_add_watch [254], common_pid: gmain [ 3196] } hitcount: 6
+ { id: sys_openat [257], common_pid: java [ 2623] } hitcount: 2
+ { id: sys_eventfd2 [290], common_pid: ibus-ui-gtk3 [ 2760] } hitcount: 4
+ { id: sys_eventfd2 [290], common_pid: compiz [ 2994] } hitcount: 6
+
+ Totals:
+ Hits: 31536
+ Entries: 323
+ Dropped: 0
+
+ The above list does give us a breakdown of the ioctl syscall by
+ pid, but it also gives us quite a bit more than that, which we
+ don't really care about at the moment. Since we know the syscall
+ id for sys_ioctl (16, displayed next to the sys_ioctl name), we
+ can use that to filter out all the other syscalls:
+
+ # echo 'hist:key=id.syscall,common_pid.execname:val=hitcount:sort=id,hitcount if id == 16' > \
+ /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/trigger
+
+ # cat /sys/kernel/debug/tracing/events/raw_syscalls/sys_enter/hist
+ # trigger info: hist:keys=id.syscall,common_pid.execname:vals=hitcount:sort=id.syscall,hitcount:size=2048 if id == 16 [active]
+
+ { id: sys_ioctl [ 16], common_pid: gmain [ 2769] } hitcount: 1
+ { id: sys_ioctl [ 16], common_pid: evolution-addre [ 8571] } hitcount: 1
+ { id: sys_ioctl [ 16], common_pid: gmain [ 3003] } hitcount: 1
+ { id: sys_ioctl [ 16], common_pid: gmain [ 2781] } hitcount: 1
+ { id: sys_ioctl [ 16], common_pid: gmain [ 2829] } hitcount: 1
+ { id: sys_ioctl [ 16], common_pid: bash [ 8726] } hitcount: 1
+ { id: sys_ioctl [ 16], common_pid: bash [ 8508] } hitcount: 1
+ { id: sys_ioctl [ 16], common_pid: gmain [ 2970] } hitcount: 1
+ { id: sys_ioctl [ 16], common_pid: gmain [ 2768] } hitcount: 1
+ .
+ .
+ .
+ { id: sys_ioctl [ 16], common_pid: pool [ 8559] } hitcount: 45
+ { id: sys_ioctl [ 16], common_pid: pool [ 8555] } hitcount: 48
+ { id: sys_ioctl [ 16], common_pid: pool [ 8551] } hitcount: 48
+ { id: sys_ioctl [ 16], common_pid: avahi-daemon [ 896] } hitcount: 66
+ { id: sys_ioctl [ 16], common_pid: Xorg [ 1267] } hitcount: 26674
+ { id: sys_ioctl [ 16], common_pid: compiz [ 2994] } hitcount: 73443
+
+ Totals:
+ Hits: 101162
+ Entries: 103
+ Dropped: 0
+
+ The above output shows that 'compiz' and 'Xorg' are far and away
+ the heaviest ioctl callers (which might lead to questions about
+ whether they really need to be making all those calls and to
+ possible avenues for further investigation).
+
+ The compound key examples used a key and a sum value (hitcount) to
+ sort the output, but we can just as easily use two keys instead.
+ Here's an example where we use a compound key composed of the
+ common_pid and size event fields. Sorting with pid as the primary
+ key and 'size' as the secondary key allows us to display an
+ ordered summary of the recvfrom sizes, with counts, received by
+ each process:
+
+ # echo 'hist:key=common_pid.execname,size:val=hitcount:sort=common_pid,size' > \
+ /sys/kernel/debug/tracing/events/syscalls/sys_enter_recvfrom/trigger
+
+ # cat /sys/kernel/debug/tracing/events/syscalls/sys_enter_recvfrom/hist
+ # trigger info: hist:keys=common_pid.execname,size:vals=hitcount:sort=common_pid.execname,size:size=2048 [active]
+
+ { common_pid: smbd [ 784], size: 4 } hitcount: 1
+ { common_pid: dnsmasq [ 1412], size: 4096 } hitcount: 672
+ { common_pid: postgres [ 1796], size: 1000 } hitcount: 6
+ { common_pid: postgres [ 1867], size: 1000 } hitcount: 10
+ { common_pid: bamfdaemon [ 2787], size: 28 } hitcount: 2
+ { common_pid: bamfdaemon [ 2787], size: 14360 } hitcount: 1
+ { common_pid: compiz [ 2994], size: 8 } hitcount: 1
+ { common_pid: compiz [ 2994], size: 20 } hitcount: 11
+ { common_pid: gnome-terminal [ 3199], size: 4 } hitcount: 2
+ { common_pid: firefox [ 8817], size: 4 } hitcount: 1
+ { common_pid: firefox [ 8817], size: 8 } hitcount: 5
+ { common_pid: firefox [ 8817], size: 588 } hitcount: 2
+ { common_pid: firefox [ 8817], size: 628 } hitcount: 1
+ { common_pid: firefox [ 8817], size: 6944 } hitcount: 1
+ { common_pid: firefox [ 8817], size: 408880 } hitcount: 2
+ { common_pid: firefox [ 8822], size: 8 } hitcount: 2
+ { common_pid: firefox [ 8822], size: 160 } hitcount: 2
+ { common_pid: firefox [ 8822], size: 320 } hitcount: 2
+ { common_pid: firefox [ 8822], size: 352 } hitcount: 1
+ .
+ .
+ .
+ { common_pid: pool [ 8923], size: 1960 } hitcount: 10
+ { common_pid: pool [ 8923], size: 2048 } hitcount: 10
+ { common_pid: pool [ 8924], size: 1960 } hitcount: 10
+ { common_pid: pool [ 8924], size: 2048 } hitcount: 10
+ { common_pid: pool [ 8928], size: 1964 } hitcount: 4
+ { common_pid: pool [ 8928], size: 1965 } hitcount: 2
+ { common_pid: pool [ 8928], size: 2048 } hitcount: 6
+ { common_pid: pool [ 8929], size: 1982 } hitcount: 1
+ { common_pid: pool [ 8929], size: 2048 } hitcount: 1
+
+ Totals:
+ Hits: 2016
+ Entries: 224
+ Dropped: 0
+
+ The above example also illustrates the fact that although a compound
+ key is treated as a single entity for hashing purposes, the sub-keys
+ it's composed of can be accessed independently.
+
+ The next example uses a string field as the hash key and
+ demonstrates how you can manually pause and continue a hist trigger.
+ In this example, we'll aggregate fork counts and don't expect a
+ large number of entries in the hash table, so we'll drop it to a
+ much smaller number, say 256:
+
+ # echo 'hist:key=child_comm:val=hitcount:size=256' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
+
+ # cat /sys/kernel/debug/tracing/events/sched/sched_process_fork/hist
+ # trigger info: hist:keys=child_comm:vals=hitcount:sort=hitcount:size=256 [active]
+
+ { child_comm: dconf worker } hitcount: 1
+ { child_comm: ibus-daemon } hitcount: 1
+ { child_comm: whoopsie } hitcount: 1
+ { child_comm: smbd } hitcount: 1
+ { child_comm: gdbus } hitcount: 1
+ { child_comm: kthreadd } hitcount: 1
+ { child_comm: dconf worker } hitcount: 1
+ { child_comm: evolution-alarm } hitcount: 2
+ { child_comm: Socket Thread } hitcount: 2
+ { child_comm: postgres } hitcount: 2
+ { child_comm: bash } hitcount: 3
+ { child_comm: compiz } hitcount: 3
+ { child_comm: evolution-sourc } hitcount: 4
+ { child_comm: dhclient } hitcount: 4
+ { child_comm: pool } hitcount: 5
+ { child_comm: nm-dispatcher.a } hitcount: 8
+ { child_comm: firefox } hitcount: 8
+ { child_comm: dbus-daemon } hitcount: 8
+ { child_comm: glib-pacrunner } hitcount: 10
+ { child_comm: evolution } hitcount: 23
+
+ Totals:
+ Hits: 89
+ Entries: 20
+ Dropped: 0
+
+ If we want to pause the hist trigger, we can simply append :pause to
+ the command that started the trigger. Notice that the trigger info
+ displays as [paused]:
+
+ # echo 'hist:key=child_comm:val=hitcount:size=256:pause' >> \
+ /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
+
+ # cat /sys/kernel/debug/tracing/events/sched/sched_process_fork/hist
+ # trigger info: hist:keys=child_comm:vals=hitcount:sort=hitcount:size=256 [paused]
+
+ { child_comm: dconf worker } hitcount: 1
+ { child_comm: kthreadd } hitcount: 1
+ { child_comm: dconf worker } hitcount: 1
+ { child_comm: gdbus } hitcount: 1
+ { child_comm: ibus-daemon } hitcount: 1
+ { child_comm: Socket Thread } hitcount: 2
+ { child_comm: evolution-alarm } hitcount: 2
+ { child_comm: smbd } hitcount: 2
+ { child_comm: bash } hitcount: 3
+ { child_comm: whoopsie } hitcount: 3
+ { child_comm: compiz } hitcount: 3
+ { child_comm: evolution-sourc } hitcount: 4
+ { child_comm: pool } hitcount: 5
+ { child_comm: postgres } hitcount: 6
+ { child_comm: firefox } hitcount: 8
+ { child_comm: dhclient } hitcount: 10
+ { child_comm: emacs } hitcount: 12
+ { child_comm: dbus-daemon } hitcount: 20
+ { child_comm: nm-dispatcher.a } hitcount: 20
+ { child_comm: evolution } hitcount: 35
+ { child_comm: glib-pacrunner } hitcount: 59
+
+ Totals:
+ Hits: 199
+ Entries: 21
+ Dropped: 0
+
+ To manually continue having the trigger aggregate events, append
+ :cont instead. Notice that the trigger info displays as [active]
+ again, and the data has changed:
+
+ # echo 'hist:key=child_comm:val=hitcount:size=256:cont' >> \
+ /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
+
+ # cat /sys/kernel/debug/tracing/events/sched/sched_process_fork/hist
+ # trigger info: hist:keys=child_comm:vals=hitcount:sort=hitcount:size=256 [active]
+
+ { child_comm: dconf worker } hitcount: 1
+ { child_comm: dconf worker } hitcount: 1
+ { child_comm: kthreadd } hitcount: 1
+ { child_comm: gdbus } hitcount: 1
+ { child_comm: ibus-daemon } hitcount: 1
+ { child_comm: Socket Thread } hitcount: 2
+ { child_comm: evolution-alarm } hitcount: 2
+ { child_comm: smbd } hitcount: 2
+ { child_comm: whoopsie } hitcount: 3
+ { child_comm: compiz } hitcount: 3
+ { child_comm: evolution-sourc } hitcount: 4
+ { child_comm: bash } hitcount: 5
+ { child_comm: pool } hitcount: 5
+ { child_comm: postgres } hitcount: 6
+ { child_comm: firefox } hitcount: 8
+ { child_comm: dhclient } hitcount: 11
+ { child_comm: emacs } hitcount: 12
+ { child_comm: dbus-daemon } hitcount: 22
+ { child_comm: nm-dispatcher.a } hitcount: 22
+ { child_comm: evolution } hitcount: 35
+ { child_comm: glib-pacrunner } hitcount: 59
+
+ Totals:
+ Hits: 206
+ Entries: 21
+ Dropped: 0
+
+ The previous example showed how to start and stop a hist trigger by
+ appending ':pause' and ':cont' to the hist trigger command. A
+ hist trigger can also be started in a paused state by initially
+ starting the trigger with ':pause' appended. This allows you to
+ start the trigger only when you're ready to start collecting data
+ and not before. For example, you could start the trigger in a
+ paused state, then unpause it and do something you want to measure,
+ then pause the trigger again when done.
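+
+ As a minimal sketch of that manual workflow (reusing the fork-count
+ trigger from the earlier example), you could start the trigger paused,
+ unpause it just before the workload you want to measure, and pause it
+ again once the workload has finished:
+
+ # echo 'hist:key=child_comm:val=hitcount:size=256:pause' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
+
+ # echo 'hist:key=child_comm:val=hitcount:size=256:cont' >> \
+ /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
+
+ (run the workload to be measured)
+
+ # echo 'hist:key=child_comm:val=hitcount:size=256:pause' >> \
+ /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger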
+
+ Of course, doing this manually can be difficult and error-prone, but
+ it is possible to automatically start and stop a hist trigger based
+ on some condition, via the enable_hist and disable_hist triggers.
+
+ For example, suppose we wanted to take a look at the relative
+ weights in terms of skb length for each callpath that leads to a
+ netif_receive_skb event when downloading a decent-sized file using
+ wget.
+
+ First we set up an initially paused stacktrace trigger on the
+ netif_receive_skb event:
+
+ # echo 'hist:key=stacktrace:vals=len:pause' > \
+ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+
+ Next, we set up an 'enable_hist' trigger on the sched_process_exec
+ event, with an 'if filename==/usr/bin/wget' filter. The effect of
+ this new trigger is that it will 'unpause' the hist trigger we just
+ set up on netif_receive_skb if and only if it sees a
+ sched_process_exec event with a filename of '/usr/bin/wget'. When
+ that happens, all netif_receive_skb events are aggregated into a
+ hash table keyed on stacktrace:
+
+ # echo 'enable_hist:net:netif_receive_skb if filename==/usr/bin/wget' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
+
+ The aggregation continues until the netif_receive_skb hist trigger is
+ paused again, which is what the following disable_hist trigger does
+ by creating a similar setup on the sched_process_exit event, using
+ the filter 'comm==wget':
+
+ # echo 'disable_hist:net:netif_receive_skb if comm==wget' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_exit/trigger
+
+ Whenever a process exits and the comm field of the disable_hist
+ trigger filter matches 'comm==wget', the netif_receive_skb hist
+ trigger is disabled.
+
+ The overall effect is that netif_receive_skb events are aggregated
+ into the hash table for only the duration of the wget. Executing a
+ wget command and then listing the 'hist' file will display the
+ output generated by the wget command:
+
+ $ wget https://www.kernel.org/pub/linux/kernel/v3.x/patch-3.19.xz
+
+ # cat /sys/kernel/debug/tracing/events/net/netif_receive_skb/hist
+ # trigger info: hist:keys=stacktrace:vals=len:sort=hitcount:size=2048 [paused]
+
+ { stacktrace:
+ __netif_receive_skb_core+0x46d/0x990
+ __netif_receive_skb+0x18/0x60
+ netif_receive_skb_internal+0x23/0x90
+ napi_gro_receive+0xc8/0x100
+ ieee80211_deliver_skb+0xd6/0x270 [mac80211]
+ ieee80211_rx_handlers+0xccf/0x22f0 [mac80211]
+ ieee80211_prepare_and_rx_handle+0x4e7/0xc40 [mac80211]
+ ieee80211_rx+0x31d/0x900 [mac80211]
+ iwlagn_rx_reply_rx+0x3db/0x6f0 [iwldvm]
+ iwl_rx_dispatch+0x8e/0xf0 [iwldvm]
+ iwl_pcie_irq_handler+0xe3c/0x12f0 [iwlwifi]
+ irq_thread_fn+0x20/0x50
+ irq_thread+0x11f/0x150
+ kthread+0xd2/0xf0
+ ret_from_fork+0x42/0x70
+ } hitcount: 85 len: 28884
+ { stacktrace:
+ __netif_receive_skb_core+0x46d/0x990
+ __netif_receive_skb+0x18/0x60
+ netif_receive_skb_internal+0x23/0x90
+ napi_gro_complete+0xa4/0xe0
+ dev_gro_receive+0x23a/0x360
+ napi_gro_receive+0x30/0x100
+ ieee80211_deliver_skb+0xd6/0x270 [mac80211]
+ ieee80211_rx_handlers+0xccf/0x22f0 [mac80211]
+ ieee80211_prepare_and_rx_handle+0x4e7/0xc40 [mac80211]
+ ieee80211_rx+0x31d/0x900 [mac80211]
+ iwlagn_rx_reply_rx+0x3db/0x6f0 [iwldvm]
+ iwl_rx_dispatch+0x8e/0xf0 [iwldvm]
+ iwl_pcie_irq_handler+0xe3c/0x12f0 [iwlwifi]
+ irq_thread_fn+0x20/0x50
+ irq_thread+0x11f/0x150
+ kthread+0xd2/0xf0
+ } hitcount: 98 len: 664329
+ { stacktrace:
+ __netif_receive_skb_core+0x46d/0x990
+ __netif_receive_skb+0x18/0x60
+ process_backlog+0xa8/0x150
+ net_rx_action+0x15d/0x340
+ __do_softirq+0x114/0x2c0
+ do_softirq_own_stack+0x1c/0x30
+ do_softirq+0x65/0x70
+ __local_bh_enable_ip+0xb5/0xc0
+ ip_finish_output+0x1f4/0x840
+ ip_output+0x6b/0xc0
+ ip_local_out_sk+0x31/0x40
+ ip_send_skb+0x1a/0x50
+ udp_send_skb+0x173/0x2a0
+ udp_sendmsg+0x2bf/0x9f0
+ inet_sendmsg+0x64/0xa0
+ sock_sendmsg+0x3d/0x50
+ } hitcount: 115 len: 13030
+ { stacktrace:
+ __netif_receive_skb_core+0x46d/0x990
+ __netif_receive_skb+0x18/0x60
+ netif_receive_skb_internal+0x23/0x90
+ napi_gro_complete+0xa4/0xe0
+ napi_gro_flush+0x6d/0x90
+ iwl_pcie_irq_handler+0x92a/0x12f0 [iwlwifi]
+ irq_thread_fn+0x20/0x50
+ irq_thread+0x11f/0x150
+ kthread+0xd2/0xf0
+ ret_from_fork+0x42/0x70
+ } hitcount: 934 len: 5512212
+
+ Totals:
+ Hits: 1232
+ Entries: 4
+ Dropped: 0
+
+ The above shows all the netif_receive_skb callpaths and their total
+ lengths for the duration of the wget command.
+
+ The 'clear' hist trigger param can be used to clear the hash table.
+ Suppose we wanted to try another run of the previous example but
+ this time also wanted to see the complete list of events that went
+ into the histogram. In order to avoid having to set everything up
+ again, we can just clear the histogram first:
+
+ # echo 'hist:key=stacktrace:vals=len:clear' >> \
+ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+
+ Just to verify that it is in fact cleared, here's what we now see in
+ the hist file:
+
+ # cat /sys/kernel/debug/tracing/events/net/netif_receive_skb/hist
+ # trigger info: hist:keys=stacktrace:vals=len:sort=hitcount:size=2048 [paused]
+
+ Totals:
+ Hits: 0
+ Entries: 0
+ Dropped: 0
+
+ Since we want to see the detailed list of every netif_receive_skb
+ event occurring during the new run, which are in fact the same
+ events being aggregated into the hash table, we add some additional
+ 'enable_event' and 'disable_event' triggers to the triggering
+ sched_process_exec and sched_process_exit events, as follows:
+
+ # echo 'enable_event:net:netif_receive_skb if filename==/usr/bin/wget' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
+
+ # echo 'disable_event:net:netif_receive_skb if comm==wget' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_exit/trigger
+
+ If you read the trigger files for the sched_process_exec and
+ sched_process_exit events, you should see two triggers for each:
+ one enabling/disabling the hist aggregation and the other
+ enabling/disabling the logging of events:
+
+ # cat /sys/kernel/debug/tracing/events/sched/sched_process_exec/trigger
+ enable_event:net:netif_receive_skb:unlimited if filename==/usr/bin/wget
+ enable_hist:net:netif_receive_skb:unlimited if filename==/usr/bin/wget
+
+ # cat /sys/kernel/debug/tracing/events/sched/sched_process_exit/trigger
+ enable_event:net:netif_receive_skb:unlimited if comm==wget
+ disable_hist:net:netif_receive_skb:unlimited if comm==wget
+
+ In other words, whenever either of the sched_process_exec or
+ sched_process_exit events is hit and matches 'wget', it enables or
+ disables both the histogram and the event log, and what you end up
+ with is a hash table and set of events just covering the specified
+ duration. Run the wget command again:
+
+ $ wget https://www.kernel.org/pub/linux/kernel/v3.x/patch-3.19.xz
+
+ Displaying the 'hist' file should show something similar to what you
+ saw in the last run, but this time you should also see the
+ individual events in the trace file:
+
+ # cat /sys/kernel/debug/tracing/trace
+
+ # tracer: nop
+ #
+ # entries-in-buffer/entries-written: 183/1426 #P:4
+ #
+ # _-----=> irqs-off
+ # / _----=> need-resched
+ # | / _---=> hardirq/softirq
+ # || / _--=> preempt-depth
+ # ||| / delay
+ # TASK-PID CPU# |||| TIMESTAMP FUNCTION
+ # | | | |||| | |
+ wget-15108 [000] ..s1 31769.606929: netif_receive_skb: dev=lo skbaddr=ffff88009c353100 len=60
+ wget-15108 [000] ..s1 31769.606999: netif_receive_skb: dev=lo skbaddr=ffff88009c353200 len=60
+ dnsmasq-1382 [000] ..s1 31769.677652: netif_receive_skb: dev=lo skbaddr=ffff88009c352b00 len=130
+ dnsmasq-1382 [000] ..s1 31769.685917: netif_receive_skb: dev=lo skbaddr=ffff88009c352200 len=138
+ ##### CPU 2 buffer started ####
+ irq/29-iwlwifi-559 [002] ..s. 31772.031529: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d433d00 len=2948
+ irq/29-iwlwifi-559 [002] ..s. 31772.031572: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d432200 len=1500
+ irq/29-iwlwifi-559 [002] ..s. 31772.032196: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d433100 len=2948
+ irq/29-iwlwifi-559 [002] ..s. 31772.032761: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d433000 len=2948
+ irq/29-iwlwifi-559 [002] ..s. 31772.033220: netif_receive_skb: dev=wlan0 skbaddr=ffff88009d432e00 len=1500
+ .
+ .
+ .
+
+ The following example demonstrates how multiple hist triggers can be
+ attached to a given event. This capability can be useful for
+ creating a set of different summaries derived from the same set of
+ events, or for comparing the effects of different filters, among
+ other things.
+
+ # echo 'hist:keys=skbaddr.hex:vals=len if len < 0' >> \
+ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+ # echo 'hist:keys=skbaddr.hex:vals=len if len > 4096' >> \
+ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+ # echo 'hist:keys=skbaddr.hex:vals=len if len == 256' >> \
+ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+ # echo 'hist:keys=skbaddr.hex:vals=len' >> \
+ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+ # echo 'hist:keys=len:vals=common_preempt_count' >> \
+ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+
+ The above set of commands creates four triggers differing only in
+ their filters, along with a completely different though fairly
+ nonsensical trigger. Note that in order to append multiple hist
+ triggers to the same file, you should use the '>>' operator to
+ append them ('>' will also add the new hist trigger, but will remove
+ any existing hist triggers beforehand).
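+
+ As an aside (not done in this example), any one of the appended hist
+ triggers can later be removed on its own by echoing the same trigger
+ spec prefixed with '!'; the keys, values and filter should match the
+ trigger being removed. A sketch:
+
+ # echo '!hist:keys=skbaddr.hex:vals=len if len == 256' >> \
+ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger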
+
+ Displaying the contents of the 'hist' file for the event shows the
+ contents of all five histograms:
+
+ # cat /sys/kernel/debug/tracing/events/net/netif_receive_skb/hist
+
+ # event histogram
+ #
+ # trigger info: hist:keys=len:vals=hitcount,common_preempt_count:sort=hitcount:size=2048 [active]
+ #
+
+ { len: 176 } hitcount: 1 common_preempt_count: 0
+ { len: 223 } hitcount: 1 common_preempt_count: 0
+ { len: 4854 } hitcount: 1 common_preempt_count: 0
+ { len: 395 } hitcount: 1 common_preempt_count: 0
+ { len: 177 } hitcount: 1 common_preempt_count: 0
+ { len: 446 } hitcount: 1 common_preempt_count: 0
+ { len: 1601 } hitcount: 1 common_preempt_count: 0
+ .
+ .
+ .
+ { len: 1280 } hitcount: 66 common_preempt_count: 0
+ { len: 116 } hitcount: 81 common_preempt_count: 40
+ { len: 708 } hitcount: 112 common_preempt_count: 0
+ { len: 46 } hitcount: 221 common_preempt_count: 0
+ { len: 1264 } hitcount: 458 common_preempt_count: 0
+
+ Totals:
+ Hits: 1428
+ Entries: 147
+ Dropped: 0
+
+
+ # event histogram
+ #
+ # trigger info: hist:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 [active]
+ #
+
+ { skbaddr: ffff8800baee5e00 } hitcount: 1 len: 130
+ { skbaddr: ffff88005f3d5600 } hitcount: 1 len: 1280
+ { skbaddr: ffff88005f3d4900 } hitcount: 1 len: 1280
+ { skbaddr: ffff88009fed6300 } hitcount: 1 len: 115
+ { skbaddr: ffff88009fe0ad00 } hitcount: 1 len: 115
+ { skbaddr: ffff88008cdb1900 } hitcount: 1 len: 46
+ { skbaddr: ffff880064b5ef00 } hitcount: 1 len: 118
+ { skbaddr: ffff880044e3c700 } hitcount: 1 len: 60
+ { skbaddr: ffff880100065900 } hitcount: 1 len: 46
+ { skbaddr: ffff8800d46bd500 } hitcount: 1 len: 116
+ { skbaddr: ffff88005f3d5f00 } hitcount: 1 len: 1280
+ { skbaddr: ffff880100064700 } hitcount: 1 len: 365
+ { skbaddr: ffff8800badb6f00 } hitcount: 1 len: 60
+ .
+ .
+ .
+ { skbaddr: ffff88009fe0be00 } hitcount: 27 len: 24677
+ { skbaddr: ffff88009fe0a400 } hitcount: 27 len: 23052
+ { skbaddr: ffff88009fe0b700 } hitcount: 31 len: 25589
+ { skbaddr: ffff88009fe0b600 } hitcount: 32 len: 27326
+ { skbaddr: ffff88006a462800 } hitcount: 68 len: 71678
+ { skbaddr: ffff88006a463700 } hitcount: 70 len: 72678
+ { skbaddr: ffff88006a462b00 } hitcount: 71 len: 77589
+ { skbaddr: ffff88006a463600 } hitcount: 73 len: 71307
+ { skbaddr: ffff88006a462200 } hitcount: 81 len: 81032
+
+ Totals:
+ Hits: 1451
+ Entries: 318
+ Dropped: 0
+
+
+ # event histogram
+ #
+ # trigger info: hist:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 if len == 256 [active]
+ #
+
+
+ Totals:
+ Hits: 0
+ Entries: 0
+ Dropped: 0
+
+
+ # event histogram
+ #
+ # trigger info: hist:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 if len > 4096 [active]
+ #
+
+ { skbaddr: ffff88009fd2c300 } hitcount: 1 len: 7212
+ { skbaddr: ffff8800d2bcce00 } hitcount: 1 len: 7212
+ { skbaddr: ffff8800d2bcd700 } hitcount: 1 len: 7212
+ { skbaddr: ffff8800d2bcda00 } hitcount: 1 len: 21492
+ { skbaddr: ffff8800ae2e2d00 } hitcount: 1 len: 7212
+ { skbaddr: ffff8800d2bcdb00 } hitcount: 1 len: 7212
+ { skbaddr: ffff88006a4df500 } hitcount: 1 len: 4854
+ { skbaddr: ffff88008ce47b00 } hitcount: 1 len: 18636
+ { skbaddr: ffff8800ae2e2200 } hitcount: 1 len: 12924
+ { skbaddr: ffff88005f3e1000 } hitcount: 1 len: 4356
+ { skbaddr: ffff8800d2bcdc00 } hitcount: 2 len: 24420
+ { skbaddr: ffff8800d2bcc200 } hitcount: 2 len: 12996
+
+ Totals:
+ Hits: 14
+ Entries: 12
+ Dropped: 0
+
+
+ # event histogram
+ #
+ # trigger info: hist:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 if len < 0 [active]
+ #
+
+
+ Totals:
+ Hits: 0
+ Entries: 0
+ Dropped: 0
+
+ Named triggers can be used to have triggers share a common set of
+ histogram data. This capability is mostly useful for combining the
+ output of events generated by tracepoints contained inside inline
+ functions, but names can be used in a hist trigger on any event.
+ For example, these two triggers when hit will update the same 'len'
+ field in the shared 'foo' histogram data:
+
+ # echo 'hist:name=foo:keys=skbaddr.hex:vals=len' > \
+ /sys/kernel/debug/tracing/events/net/netif_receive_skb/trigger
+ # echo 'hist:name=foo:keys=skbaddr.hex:vals=len' > \
+ /sys/kernel/debug/tracing/events/net/netif_rx/trigger
+
+ You can see that they're updating common histogram data by reading
+ each event's hist files at the same time:
+
+ # cat /sys/kernel/debug/tracing/events/net/netif_receive_skb/hist;
+ cat /sys/kernel/debug/tracing/events/net/netif_rx/hist
+
+ # event histogram
+ #
+ # trigger info: hist:name=foo:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 [active]
+ #
+
+ { skbaddr: ffff88000ad53500 } hitcount: 1 len: 46
+ { skbaddr: ffff8800af5a1500 } hitcount: 1 len: 76
+ { skbaddr: ffff8800d62a1900 } hitcount: 1 len: 46
+ { skbaddr: ffff8800d2bccb00 } hitcount: 1 len: 468
+ { skbaddr: ffff8800d3c69900 } hitcount: 1 len: 46
+ { skbaddr: ffff88009ff09100 } hitcount: 1 len: 52
+ { skbaddr: ffff88010f13ab00 } hitcount: 1 len: 168
+ { skbaddr: ffff88006a54f400 } hitcount: 1 len: 46
+ { skbaddr: ffff8800d2bcc500 } hitcount: 1 len: 260
+ { skbaddr: ffff880064505000 } hitcount: 1 len: 46
+ { skbaddr: ffff8800baf24e00 } hitcount: 1 len: 32
+ { skbaddr: ffff88009fe0ad00 } hitcount: 1 len: 46
+ { skbaddr: ffff8800d3edff00 } hitcount: 1 len: 44
+ { skbaddr: ffff88009fe0b400 } hitcount: 1 len: 168
+ { skbaddr: ffff8800a1c55a00 } hitcount: 1 len: 40
+ { skbaddr: ffff8800d2bcd100 } hitcount: 1 len: 40
+ { skbaddr: ffff880064505f00 } hitcount: 1 len: 174
+ { skbaddr: ffff8800a8bff200 } hitcount: 1 len: 160
+ { skbaddr: ffff880044e3cc00 } hitcount: 1 len: 76
+ { skbaddr: ffff8800a8bfe700 } hitcount: 1 len: 46
+ { skbaddr: ffff8800d2bcdc00 } hitcount: 1 len: 32
+ { skbaddr: ffff8800a1f64800 } hitcount: 1 len: 46
+ { skbaddr: ffff8800d2bcde00 } hitcount: 1 len: 988
+ { skbaddr: ffff88006a5dea00 } hitcount: 1 len: 46
+ { skbaddr: ffff88002e37a200 } hitcount: 1 len: 44
+ { skbaddr: ffff8800a1f32c00 } hitcount: 2 len: 676
+ { skbaddr: ffff88000ad52600 } hitcount: 2 len: 107
+ { skbaddr: ffff8800a1f91e00 } hitcount: 2 len: 92
+ { skbaddr: ffff8800af5a0200 } hitcount: 2 len: 142
+ { skbaddr: ffff8800d2bcc600 } hitcount: 2 len: 220
+ { skbaddr: ffff8800ba36f500 } hitcount: 2 len: 92
+ { skbaddr: ffff8800d021f800 } hitcount: 2 len: 92
+ { skbaddr: ffff8800a1f33600 } hitcount: 2 len: 675
+ { skbaddr: ffff8800a8bfff00 } hitcount: 3 len: 138
+ { skbaddr: ffff8800d62a1300 } hitcount: 3 len: 138
+ { skbaddr: ffff88002e37a100 } hitcount: 4 len: 184
+ { skbaddr: ffff880064504400 } hitcount: 4 len: 184
+ { skbaddr: ffff8800a8bfec00 } hitcount: 4 len: 184
+ { skbaddr: ffff88000ad53700 } hitcount: 5 len: 230
+ { skbaddr: ffff8800d2bcdb00 } hitcount: 5 len: 196
+ { skbaddr: ffff8800a1f90000 } hitcount: 6 len: 276
+ { skbaddr: ffff88006a54f900 } hitcount: 6 len: 276
+
+ Totals:
+ Hits: 81
+ Entries: 42
+ Dropped: 0
+ # event histogram
+ #
+ # trigger info: hist:name=foo:keys=skbaddr.hex:vals=hitcount,len:sort=hitcount:size=2048 [active]
+ #
+
+ { skbaddr: ffff88000ad53500 } hitcount: 1 len: 46
+ { skbaddr: ffff8800af5a1500 } hitcount: 1 len: 76
+ { skbaddr: ffff8800d62a1900 } hitcount: 1 len: 46
+ { skbaddr: ffff8800d2bccb00 } hitcount: 1 len: 468
+ { skbaddr: ffff8800d3c69900 } hitcount: 1 len: 46
+ { skbaddr: ffff88009ff09100 } hitcount: 1 len: 52
+ { skbaddr: ffff88010f13ab00 } hitcount: 1 len: 168
+ { skbaddr: ffff88006a54f400 } hitcount: 1 len: 46
+ { skbaddr: ffff8800d2bcc500 } hitcount: 1 len: 260
+ { skbaddr: ffff880064505000 } hitcount: 1 len: 46
+ { skbaddr: ffff8800baf24e00 } hitcount: 1 len: 32
+ { skbaddr: ffff88009fe0ad00 } hitcount: 1 len: 46
+ { skbaddr: ffff8800d3edff00 } hitcount: 1 len: 44
+ { skbaddr: ffff88009fe0b400 } hitcount: 1 len: 168
+ { skbaddr: ffff8800a1c55a00 } hitcount: 1 len: 40
+ { skbaddr: ffff8800d2bcd100 } hitcount: 1 len: 40
+ { skbaddr: ffff880064505f00 } hitcount: 1 len: 174
+ { skbaddr: ffff8800a8bff200 } hitcount: 1 len: 160
+ { skbaddr: ffff880044e3cc00 } hitcount: 1 len: 76
+ { skbaddr: ffff8800a8bfe700 } hitcount: 1 len: 46
+ { skbaddr: ffff8800d2bcdc00 } hitcount: 1 len: 32
+ { skbaddr: ffff8800a1f64800 } hitcount: 1 len: 46
+ { skbaddr: ffff8800d2bcde00 } hitcount: 1 len: 988
+ { skbaddr: ffff88006a5dea00 } hitcount: 1 len: 46
+ { skbaddr: ffff88002e37a200 } hitcount: 1 len: 44
+ { skbaddr: ffff8800a1f32c00 } hitcount: 2 len: 676
+ { skbaddr: ffff88000ad52600 } hitcount: 2 len: 107
+ { skbaddr: ffff8800a1f91e00 } hitcount: 2 len: 92
+ { skbaddr: ffff8800af5a0200 } hitcount: 2 len: 142
+ { skbaddr: ffff8800d2bcc600 } hitcount: 2 len: 220
+ { skbaddr: ffff8800ba36f500 } hitcount: 2 len: 92
+ { skbaddr: ffff8800d021f800 } hitcount: 2 len: 92
+ { skbaddr: ffff8800a1f33600 } hitcount: 2 len: 675
+ { skbaddr: ffff8800a8bfff00 } hitcount: 3 len: 138
+ { skbaddr: ffff8800d62a1300 } hitcount: 3 len: 138
+ { skbaddr: ffff88002e37a100 } hitcount: 4 len: 184
+ { skbaddr: ffff880064504400 } hitcount: 4 len: 184
+ { skbaddr: ffff8800a8bfec00 } hitcount: 4 len: 184
+ { skbaddr: ffff88000ad53700 } hitcount: 5 len: 230
+ { skbaddr: ffff8800d2bcdb00 } hitcount: 5 len: 196
+ { skbaddr: ffff8800a1f90000 } hitcount: 6 len: 276
+ { skbaddr: ffff88006a54f900 } hitcount: 6 len: 276
+
+ Totals:
+ Hits: 81
+ Entries: 42
+ Dropped: 0
+
+ And here's an example that shows how to combine histogram data from
+ any two events even if they don't share any 'compatible' fields
+ other than 'hitcount' and 'stacktrace'. These commands create a
+ couple of triggers named 'bar' using those fields:
+
+ # echo 'hist:name=bar:key=stacktrace:val=hitcount' > \
+ /sys/kernel/debug/tracing/events/sched/sched_process_fork/trigger
+ # echo 'hist:name=bar:key=stacktrace:val=hitcount' > \
+ /sys/kernel/debug/tracing/events/net/netif_rx/trigger
+
+ And displaying the output of either shows some interesting if
+ somewhat confusing output:
+
+ # cat /sys/kernel/debug/tracing/events/sched/sched_process_fork/hist
+ # cat /sys/kernel/debug/tracing/events/net/netif_rx/hist
+
+ # event histogram
+ #
+ # trigger info: hist:name=bar:keys=stacktrace:vals=hitcount:sort=hitcount:size=2048 [active]
+ #
+
+ { stacktrace:
+ _do_fork+0x18e/0x330
+ kernel_thread+0x29/0x30
+ kthreadd+0x154/0x1b0
+ ret_from_fork+0x3f/0x70
+ } hitcount: 1
+ { stacktrace:
+ netif_rx_internal+0xb2/0xd0
+ netif_rx_ni+0x20/0x70
+ dev_loopback_xmit+0xaa/0xd0
+ ip_mc_output+0x126/0x240
+ ip_local_out_sk+0x31/0x40
+ igmp_send_report+0x1e9/0x230
+ igmp_timer_expire+0xe9/0x120
+ call_timer_fn+0x39/0xf0
+ run_timer_softirq+0x1e1/0x290
+ __do_softirq+0xfd/0x290
+ irq_exit+0x98/0xb0
+ smp_apic_timer_interrupt+0x4a/0x60
+ apic_timer_interrupt+0x6d/0x80
+ cpuidle_enter+0x17/0x20
+ call_cpuidle+0x3b/0x60
+ cpu_startup_entry+0x22d/0x310
+ } hitcount: 1
+ { stacktrace:
+ netif_rx_internal+0xb2/0xd0
+ netif_rx_ni+0x20/0x70
+ dev_loopback_xmit+0xaa/0xd0
+ ip_mc_output+0x17f/0x240
+ ip_local_out_sk+0x31/0x40
+ ip_send_skb+0x1a/0x50
+ udp_send_skb+0x13e/0x270
+ udp_sendmsg+0x2bf/0x980
+ inet_sendmsg+0x67/0xa0
+ sock_sendmsg+0x38/0x50
+ SYSC_sendto+0xef/0x170
+ SyS_sendto+0xe/0x10
+ entry_SYSCALL_64_fastpath+0x12/0x6a
+ } hitcount: 2
+ { stacktrace:
+ netif_rx_internal+0xb2/0xd0
+ netif_rx+0x1c/0x60
+ loopback_xmit+0x6c/0xb0
+ dev_hard_start_xmit+0x219/0x3a0
+ __dev_queue_xmit+0x415/0x4f0
+ dev_queue_xmit_sk+0x13/0x20
+ ip_finish_output2+0x237/0x340
+ ip_finish_output+0x113/0x1d0
+ ip_output+0x66/0xc0
+ ip_local_out_sk+0x31/0x40
+ ip_send_skb+0x1a/0x50
+ udp_send_skb+0x16d/0x270
+ udp_sendmsg+0x2bf/0x980
+ inet_sendmsg+0x67/0xa0
+ sock_sendmsg+0x38/0x50
+ ___sys_sendmsg+0x14e/0x270
+ } hitcount: 76
+ { stacktrace:
+ netif_rx_internal+0xb2/0xd0
+ netif_rx+0x1c/0x60
+ loopback_xmit+0x6c/0xb0
+ dev_hard_start_xmit+0x219/0x3a0
+ __dev_queue_xmit+0x415/0x4f0
+ dev_queue_xmit_sk+0x13/0x20
+ ip_finish_output2+0x237/0x340
+ ip_finish_output+0x113/0x1d0
+ ip_output+0x66/0xc0
+ ip_local_out_sk+0x31/0x40
+ ip_send_skb+0x1a/0x50
+ udp_send_skb+0x16d/0x270
+ udp_sendmsg+0x2bf/0x980
+ inet_sendmsg+0x67/0xa0
+ sock_sendmsg+0x38/0x50
+ ___sys_sendmsg+0x269/0x270
+ } hitcount: 77
+ { stacktrace:
+ netif_rx_internal+0xb2/0xd0
+ netif_rx+0x1c/0x60
+ loopback_xmit+0x6c/0xb0
+ dev_hard_start_xmit+0x219/0x3a0
+ __dev_queue_xmit+0x415/0x4f0
+ dev_queue_xmit_sk+0x13/0x20
+ ip_finish_output2+0x237/0x340
+ ip_finish_output+0x113/0x1d0
+ ip_output+0x66/0xc0
+ ip_local_out_sk+0x31/0x40
+ ip_send_skb+0x1a/0x50
+ udp_send_skb+0x16d/0x270
+ udp_sendmsg+0x2bf/0x980
+ inet_sendmsg+0x67/0xa0
+ sock_sendmsg+0x38/0x50
+ SYSC_sendto+0xef/0x170
+ } hitcount: 88
+ { stacktrace:
+ _do_fork+0x18e/0x330
+ SyS_clone+0x19/0x20
+ entry_SYSCALL_64_fastpath+0x12/0x6a
+ } hitcount: 244
+
+ Totals:
+ Hits: 489
+ Entries: 7
+ Dropped: 0
diff --git a/Documentation/trace/ftrace.txt b/Documentation/trace/ftrace.txt
index f52f297cb406..a6b3705e62a6 100644
--- a/Documentation/trace/ftrace.txt
+++ b/Documentation/trace/ftrace.txt
@@ -210,6 +210,11 @@ of ftrace. Here is a list of some of the key files:
Note, sched_switch and sched_wake_up will also trace events
listed in this file.
+ To have the PIDs of children of tasks with their PID in this file
+ added on fork, enable the "event-fork" option. That option will also
+ cause the PIDs of tasks to be removed from this file when the task
+ exits.
+
set_graph_function:
Set a "trigger" function where tracing should start
@@ -725,16 +730,14 @@ noraw
nohex
nobin
noblock
-nostacktrace
trace_printk
-noftrace_preempt
nobranch
annotate
nouserstacktrace
nosym-userobj
noprintk-msg-only
context-info
-latency-format
+nolatency-format
sleep-time
graph-time
record-cmd
@@ -742,7 +745,10 @@ overwrite
nodisable_on_free
irq-info
markers
+noevent-fork
function-trace
+nodisplay-graph
+nostacktrace
To disable one of the options, echo in the option prepended with
"no".
@@ -796,11 +802,6 @@ Here are the available options:
block - When set, reading trace_pipe will not block when polled.
- stacktrace - This is one of the options that changes the trace
- itself. When a trace is recorded, so is the stack
- of functions. This allows for back traces of
- trace sites.
-
trace_printk - Can disable trace_printk() from writing into the buffer.
branch - Enable branch tracing with the tracer.
@@ -897,6 +898,10 @@ x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
When disabled, the trace_marker will error with EINVAL
on write.
+ event-fork - When set, tasks with PIDs listed in set_event_pid will have
+ the PIDs of their children added to set_event_pid when those
+ tasks fork. Also, when tasks with PIDs in set_event_pid exit,
+ their PIDs will be removed from the file.
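+
+ For example (assuming the usual debugfs mount point), the option
+ can be turned on and off through the trace_options file:
+
+ # echo event-fork > /sys/kernel/debug/tracing/trace_options
+ # echo noevent-fork > /sys/kernel/debug/tracing/trace_options
+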
function-trace - The latency tracers will enable function tracing
if this option is enabled (default it is). When
@@ -904,8 +909,17 @@ x494] <- /root/a.out[+0x4a8] <- /lib/libc-2.7.so[+0x1e1a6]
functions. This keeps the overhead of the tracer down
when performing latency tests.
- Note: Some tracers have their own options. They only appear
- when the tracer is active.
+ display-graph - When set, the latency tracers (irqsoff, wakeup, etc) will
+ use function graph tracing instead of function tracing.
+
+ stacktrace - This is one of the options that changes the trace
+ itself. When a trace is recorded, so is the stack
+ of functions. This allows for back traces of
+ trace sites.
+
+ Note: Some tracers have their own options. They only appear in this
+ file when the tracer is active. They always appear in the
+ options directory.
@@ -1562,12 +1576,12 @@ Doing the same with chrt -r 5 and function-trace set.
<idle>-0 3dN.1 12us : menu_hrtimer_cancel <-tick_nohz_idle_exit
<idle>-0 3dN.1 12us : ktime_get <-tick_nohz_idle_exit
<idle>-0 3dN.1 12us : tick_do_update_jiffies64 <-tick_nohz_idle_exit
- <idle>-0 3dN.1 13us : update_cpu_load_nohz <-tick_nohz_idle_exit
- <idle>-0 3dN.1 13us : _raw_spin_lock <-update_cpu_load_nohz
+ <idle>-0 3dN.1 13us : cpu_load_update_nohz <-tick_nohz_idle_exit
+ <idle>-0 3dN.1 13us : _raw_spin_lock <-cpu_load_update_nohz
<idle>-0 3dN.1 13us : add_preempt_count <-_raw_spin_lock
- <idle>-0 3dN.2 13us : __update_cpu_load <-update_cpu_load_nohz
- <idle>-0 3dN.2 14us : sched_avg_update <-__update_cpu_load
- <idle>-0 3dN.2 14us : _raw_spin_unlock <-update_cpu_load_nohz
+ <idle>-0 3dN.2 13us : __cpu_load_update <-cpu_load_update_nohz
+ <idle>-0 3dN.2 14us : sched_avg_update <-__cpu_load_update
+ <idle>-0 3dN.2 14us : _raw_spin_unlock <-cpu_load_update_nohz
<idle>-0 3dN.2 14us : sub_preempt_count <-_raw_spin_unlock
<idle>-0 3dN.1 15us : calc_load_exit_idle <-tick_nohz_idle_exit
<idle>-0 3dN.1 15us : touch_softlockup_watchdog <-tick_nohz_idle_exit
diff --git a/Documentation/usb/chipidea.txt b/Documentation/usb/chipidea.txt
index 678741b0f213..edf7cdfddc88 100644
--- a/Documentation/usb/chipidea.txt
+++ b/Documentation/usb/chipidea.txt
@@ -3,14 +3,17 @@
To show how to demo OTG HNP and SRP functions via sys input files
with 2 Freescale i.MX6Q sabre SD boards.
-1.1 How to enable OTG FSM in menuconfig
+1.1 How to enable OTG FSM
---------------------------------------
-Select CONFIG_USB_OTG_FSM, rebuild kernel Image and modules.
-If you want to check some internal variables for otg fsm,
-mount debugfs, there are 2 files which can show otg fsm
-variables and some controller registers value:
+1.1.1 Select CONFIG_USB_OTG_FSM in menuconfig, rebuild kernel
+Image and modules. If you want to check some internal
+variables for otg fsm, mount debugfs, there are 2 files
+which can show otg fsm variables and some controller register values:
cat /sys/kernel/debug/ci_hdrc.0/otg
cat /sys/kernel/debug/ci_hdrc.0/registers
+1.1.2 Add below entries in your dts file for your controller node
+ otg-rev = <0x0200>;
+ adp-disable;
1.2 Test operations
-------------------
diff --git a/Documentation/video4linux/CARDLIST.cx23885 b/Documentation/video4linux/CARDLIST.cx23885
index 44a4cfbfdc40..85a8fdcfcdaa 100644
--- a/Documentation/video4linux/CARDLIST.cx23885
+++ b/Documentation/video4linux/CARDLIST.cx23885
@@ -52,3 +52,5 @@
51 -> DVBSky T982 [4254:0982]
52 -> Hauppauge WinTV-HVR5525 [0070:f038]
53 -> Hauppauge WinTV Starburst [0070:c12a]
+ 54 -> ViewCast 260e [1576:0260]
+ 55 -> ViewCast 460e [1576:0460]
diff --git a/Documentation/video4linux/CARDLIST.em28xx b/Documentation/video4linux/CARDLIST.em28xx
index 67209998a439..6784220c6a16 100644
--- a/Documentation/video4linux/CARDLIST.em28xx
+++ b/Documentation/video4linux/CARDLIST.em28xx
@@ -76,9 +76,9 @@
75 -> Dikom DK300 (em2882)
76 -> KWorld PlusTV 340U or UB435-Q (ATSC) (em2870) [1b80:a340]
77 -> EM2874 Leadership ISDBT (em2874)
- 78 -> PCTV nanoStick T2 290e (em28174)
+ 78 -> PCTV nanoStick T2 290e (em28174) [2013:024f]
79 -> Terratec Cinergy H5 (em2884) [eb1a:2885,0ccd:10a2,0ccd:10ad,0ccd:10b6]
- 80 -> PCTV DVB-S2 Stick (460e) (em28174)
+ 80 -> PCTV DVB-S2 Stick (460e) (em28174) [2013:024c]
81 -> Hauppauge WinTV HVR 930C (em2884) [2040:1605]
82 -> Terratec Cinergy HTC Stick (em2884) [0ccd:00b2]
83 -> Honestech Vidbox NW03 (em2860) [eb1a:5006]
@@ -90,9 +90,11 @@
89 -> Delock 61959 (em2874) [1b80:e1cc]
90 -> KWorld USB ATSC TV Stick UB435-Q V2 (em2874) [1b80:e346]
91 -> SpeedLink Vicious And Devine Laplace webcam (em2765) [1ae7:9003,1ae7:9004]
- 92 -> PCTV DVB-S2 Stick (461e) (em28178)
+ 92 -> PCTV DVB-S2 Stick (461e) (em28178) [2013:0258]
93 -> KWorld USB ATSC TV Stick UB435-Q V3 (em2874) [1b80:e34c]
- 94 -> PCTV tripleStick (292e) (em28178)
+ 94 -> PCTV tripleStick (292e) (em28178) [2013:025f,2040:0264]
95 -> Leadtek VC100 (em2861) [0413:6f07]
- 96 -> Terratec Cinergy T2 Stick HD (em28178)
+ 96 -> Terratec Cinergy T2 Stick HD (em28178) [eb1a:8179]
97 -> Elgato EyeTV Hybrid 2008 INT (em2884) [0fd9:0018]
+ 98 -> PLEX PX-BCUD (em28178) [3275:0085]
+ 99 -> Hauppauge WinTV-dualHD DVB (em28174) [2040:0265]
diff --git a/Documentation/video4linux/README.cx88 b/Documentation/video4linux/README.cx88
index 35fae23f883b..b09ce36b921e 100644
--- a/Documentation/video4linux/README.cx88
+++ b/Documentation/video4linux/README.cx88
@@ -50,7 +50,7 @@ the driver. What to do then?
cx88-cards.c. If that worked, mail me your changes as unified
diff ("diff -u").
(3) Or you can mail me the config information. I need at least the
- following informations to add the card:
+ following information to add the card:
* the PCI Subsystem ID ("0070:3400" from the line above,
"lspci -v" output is fine too).
diff --git a/Documentation/video4linux/bttv/Sound-FAQ b/Documentation/video4linux/bttv/Sound-FAQ
index d3f1d7783d1c..646a47de0016 100644
--- a/Documentation/video4linux/bttv/Sound-FAQ
+++ b/Documentation/video4linux/bttv/Sound-FAQ
@@ -55,7 +55,7 @@ receiver chips. Some boards use the i2c bus instead of the gpio pins
to connect the mux chip.
As mentioned above, there is a array which holds the required
-informations for each known board. You basically have to create a new
+information for each known board. You basically have to create a new
line for your board. The important fields are these two:
struct tvcard
diff --git a/Documentation/video4linux/v4l2-framework.txt b/Documentation/video4linux/v4l2-framework.txt
index fa41608ab2b4..cbefc7902f5f 100644
--- a/Documentation/video4linux/v4l2-framework.txt
+++ b/Documentation/video4linux/v4l2-framework.txt
@@ -35,7 +35,7 @@ need and this same framework should make it much easier to refactor
common code into utility functions shared by all drivers.
A good example to look at as a reference is the v4l2-pci-skeleton.c
-source that is available in this directory. It is a skeleton driver for
+source that is available in samples/v4l/. It is a skeleton driver for
a PCI capture card, and demonstrates how to use the V4L2 driver
framework. It can be used as a template for real PCI video capture driver.
diff --git a/Documentation/video4linux/vivid.txt b/Documentation/video4linux/vivid.txt
index e35d376b7f64..8da5d2a576bc 100644
--- a/Documentation/video4linux/vivid.txt
+++ b/Documentation/video4linux/vivid.txt
@@ -294,7 +294,7 @@ the result will be.
These inputs support all combinations of the field setting. Special care has
been taken to faithfully reproduce how fields are handled for the different
-TV standards. This is particularly noticable when generating a horizontally
+TV standards. This is particularly noticeable when generating a horizontally
moving image so the temporal effect of using interlaced formats becomes clearly
visible. For 50 Hz standards the top field is the oldest and the bottom field
is the newest in time. For 60 Hz standards that is reversed: the bottom field
@@ -313,7 +313,7 @@ will be SMPTE-170M.
The pixel aspect ratio will depend on the TV standard. The video aspect ratio
can be selected through the 'Standard Aspect Ratio' Vivid control.
Choices are '4x3', '16x9' which will give letterboxed widescreen video and
-'16x9 Anomorphic' which will give full screen squashed anamorphic widescreen
+'16x9 Anamorphic' which will give full screen squashed anamorphic widescreen
video that will need to be scaled accordingly.
The TV 'tuner' supports a frequency range of 44-958 MHz. Channels are available
@@ -862,7 +862,7 @@ RDS Radio Text:
RDS Stereo:
RDS Artificial Head:
RDS Compressed:
-RDS Dymanic PTY:
+RDS Dynamic PTY:
RDS Traffic Announcement:
RDS Traffic Program:
RDS Music: these are all controls that set the RDS data that is transmitted by
diff --git a/Documentation/virtual/kvm/api.txt b/Documentation/virtual/kvm/api.txt
index 4d0542c5206b..a4482cce4bae 100644
--- a/Documentation/virtual/kvm/api.txt
+++ b/Documentation/virtual/kvm/api.txt
@@ -199,8 +199,8 @@ Type: vm ioctl
Parameters: vcpu id (apic id on x86)
Returns: vcpu fd on success, -1 on error
-This API adds a vcpu to a virtual machine. The vcpu id is a small integer
-in the range [0, max_vcpus).
+This API adds a vcpu to a virtual machine. No more than max_vcpus may be added.
+The vcpu id is an integer in the range [0, max_vcpu_id).
The recommended max_vcpus value can be retrieved using the KVM_CAP_NR_VCPUS of
the KVM_CHECK_EXTENSION ioctl() at run-time.
@@ -212,6 +212,12 @@ cpus max.
If the KVM_CAP_MAX_VCPUS does not exist, you should assume that max_vcpus is
same as the value returned from KVM_CAP_NR_VCPUS.
+The maximum possible value for max_vcpu_id can be retrieved using the
+KVM_CAP_MAX_VCPU_ID of the KVM_CHECK_EXTENSION ioctl() at run-time.
+
+If the KVM_CAP_MAX_VCPU_ID does not exist, you should assume that max_vcpu_id
+is the same as the value returned from KVM_CAP_MAX_VCPUS.
+
On powerpc using book3s_hv mode, the vcpus are mapped onto virtual
threads in one or more virtual CPU cores. (This is because the
hardware requires all the hardware threads in a CPU core to be in the
@@ -3788,6 +3794,14 @@ a KVM_EXIT_IOAPIC_EOI vmexit will be reported to userspace.
Fails if VCPU has already been created, or if the irqchip is already in the
kernel (i.e. KVM_CREATE_IRQCHIP has already been called).
+7.6 KVM_CAP_S390_RI
+
+Architectures: s390
+Parameters: none
+
+Allows use of runtime-instrumentation introduced with zEC12 processor.
+Will return -EINVAL if the machine does not support runtime-instrumentation.
+Will return -EBUSY if a VCPU has already been created.
8. Other capabilities.
----------------------
diff --git a/Documentation/virtual/kvm/devices/s390_flic.txt b/Documentation/virtual/kvm/devices/s390_flic.txt
index e3e314cb83e8..6b0e115301c8 100644
--- a/Documentation/virtual/kvm/devices/s390_flic.txt
+++ b/Documentation/virtual/kvm/devices/s390_flic.txt
@@ -11,6 +11,7 @@ FLIC provides support to
- add interrupts (KVM_DEV_FLIC_ENQUEUE)
- inspect currently pending interrupts (KVM_FLIC_GET_ALL_IRQS)
- purge all pending floating interrupts (KVM_DEV_FLIC_CLEAR_IRQS)
+- purge one pending floating I/O interrupt (KVM_DEV_FLIC_CLEAR_IO_IRQ)
- enable/disable for the guest transparent async page faults
- register and modify adapter interrupt sources (KVM_DEV_FLIC_ADAPTER_*)
@@ -40,6 +41,11 @@ Groups:
Simply deletes all elements from the list of currently pending floating
interrupts. No interrupts are injected into the guest.
+ KVM_DEV_FLIC_CLEAR_IO_IRQ
+ Deletes one (if any) I/O interrupt for a subchannel identified by the
+ subsystem identification word passed via the buffer specified by
+ attr->addr (address) and attr->attr (length).
+
KVM_DEV_FLIC_APF_ENABLE
Enables async page faults for the guest. So in case of a major page fault
the host is allowed to handle this async and continues the guest.
@@ -68,7 +74,7 @@ struct kvm_s390_io_adapter {
KVM_DEV_FLIC_ADAPTER_MODIFY
Modifies attributes of an existing I/O adapter interrupt source. Takes
- a kvm_s390_io_adapter_req specifiying the adapter and the operation:
+ a kvm_s390_io_adapter_req specifying the adapter and the operation:
struct kvm_s390_io_adapter_req {
__u32 id;
@@ -94,3 +100,9 @@ struct kvm_s390_io_adapter_req {
KVM_S390_IO_ADAPTER_UNMAP
release a userspace page for the translated address specified in addr
from the list of mappings
+
+Note: The KVM_SET_DEVICE_ATTR/KVM_GET_DEVICE_ATTR device ioctls executed on
+FLIC with an unknown group or attribute give the error code EINVAL (instead of
+ENXIO, as specified in the API documentation). It is not possible to conclude
+that a FLIC operation is unavailable based on the error code resulting from a
+usage attempt.
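A hedged sketch of driving the new group from userspace; it assumes an s390
guest, a FLIC file descriptor already obtained via KVM_CREATE_DEVICE with
KVM_DEV_TYPE_FLIC, and the group constant provided by the s390 uapi headers:

  /* Clear one pending I/O interrupt for the given subchannel. */
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  static int flic_clear_io_irq(int flic_fd, __u32 schid)
  {
          struct kvm_device_attr attr = {
                  .group = KVM_DEV_FLIC_CLEAR_IO_IRQ,
                  .addr  = (__u64)(unsigned long)&schid, /* subsystem id word */
                  .attr  = sizeof(schid),                /* length */
          };

          /* As noted above, an unsupported group yields EINVAL rather than ENXIO. */
          return ioctl(flic_fd, KVM_SET_DEVICE_ATTR, &attr);
  }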
diff --git a/Documentation/vm/hugetlbpage.txt b/Documentation/vm/hugetlbpage.txt
index 54dd9b9c6c31..59cbc803aad6 100644
--- a/Documentation/vm/hugetlbpage.txt
+++ b/Documentation/vm/hugetlbpage.txt
@@ -220,7 +220,7 @@ resulting effect on persistent huge page allocation is as follows:
node list of "all" with numactl --interleave or --membind [-m] to achieve
interleaving over all nodes in the system or cpuset.
-4) Any task mempolicy specifed--e.g., using numactl--will be constrained by
+4) Any task mempolicy specified--e.g., using numactl--will be constrained by
the resource limits of any cpuset in which the task runs. Thus, there will
be no way for a task with non-default policy running in a cpuset with a
subset of the system nodes to allocate huge pages outside the cpuset
@@ -275,10 +275,10 @@ This command mounts a (pseudo) filesystem of type hugetlbfs on the directory
options sets the owner and group of the root of the file system. By default
the uid and gid of the current process are taken. The mode option sets the
mode of root of file system to value & 01777. This value is given in octal.
-By default the value 0755 is picked. If the paltform supports multiple huge
+By default the value 0755 is picked. If the platform supports multiple huge
page sizes, the pagesize option can be used to specify the huge page size and
associated pool. pagesize is specified in bytes. If pagesize is not specified
-the paltform's default huge page size and associated pool will be used. The
+the platform's default huge page size and associated pool will be used. The
size option sets the maximum value of memory (huge pages) allowed for that
filesystem (/mnt/huge). The size option can be specified in bytes, or as a
percentage of the specified huge page pool (nr_hugepages). The size is
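The mount options above map directly onto mount(2); a sketch with purely
illustrative mount point, ids and sizes (pagesize and size given in bytes, as
described above) looks like this:

  #include <stdio.h>
  #include <sys/mount.h>

  int main(void)
  {
          const char *opts = "uid=1000,gid=1000,mode=0755,"
                             "pagesize=2097152,size=268435456";

          if (mount("none", "/mnt/huge", "hugetlbfs", 0, opts)) {
                  perror("mount hugetlbfs");
                  return 1;
          }
          return 0;
  }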
diff --git a/Documentation/vm/pagemap.txt b/Documentation/vm/pagemap.txt
index 0e1e55588b59..eafcefa15261 100644
--- a/Documentation/vm/pagemap.txt
+++ b/Documentation/vm/pagemap.txt
@@ -62,7 +62,7 @@ There are four components to pagemap:
14. SWAPBACKED
15. COMPOUND_HEAD
16. COMPOUND_TAIL
- 16. HUGE
+ 17. HUGE
18. UNEVICTABLE
19. HWPOISON
20. NOPAGE
diff --git a/Documentation/vm/transhuge.txt b/Documentation/vm/transhuge.txt
index d9cb65cf5cfd..7c871d6beb63 100644
--- a/Documentation/vm/transhuge.txt
+++ b/Documentation/vm/transhuge.txt
@@ -340,7 +340,7 @@ unaffected. libhugetlbfs will also work fine as usual.
== Graceful fallback ==
-Code walking pagetables but unware about huge pmds can simply call
+Code walking pagetables but unaware of huge pmds can simply call
split_huge_pmd(vma, pmd, addr) where the pmd is the one returned by
pmd_offset. It's trivial to make the code transparent hugepage aware
by just grepping for "pmd_offset" and adding split_huge_pmd where
@@ -394,9 +394,9 @@ hugepage natively. Once finished you can drop the page table lock.
Refcounting on THP is mostly consistent with refcounting on other compound
pages:
- - get_page()/put_page() and GUP operate in head page's ->_count.
+ - get_page()/put_page() and GUP operate on the head page's ->_refcount.
- - ->_count in tail pages is always zero: get_page_unless_zero() never
+ - ->_refcount in tail pages is always zero: get_page_unless_zero() never
succeeds on tail pages.
- map/unmap of the pages with PTE entry increment/decrement ->_mapcount
@@ -414,7 +414,7 @@ tracking. The alternative is alter ->_mapcount in all subpages on each
map/unmap of the whole compound page.
We set PG_double_map when a PMD of the page got split for the first time,
-but still have PMD mapping. The addtional references go away with last
+but still have PMD mapping. The additional references go away with last
compound_mapcount.
split_huge_page internally has to distribute the refcounts in the head
@@ -426,16 +426,16 @@ requests to split pinned huge page: it expects page count to be equal to
sum of mapcount of all sub-pages plus one (split_huge_page caller must
have reference for head page).
-split_huge_page uses migration entries to stabilize page->_count and
+split_huge_page uses migration entries to stabilize page->_refcount and
page->_mapcount.
We are safe against physical memory scanners too: the only legitimate way a
scanner can get a reference to a page is get_page_unless_zero().
-All tail pages has zero ->_count until atomic_add(). It prevent scanner
-from geting reference to tail page up to the point. After the atomic_add()
-we don't care about ->_count value. We already known how many references
-with should uncharge from head page.
+All tail pages have zero ->_refcount until atomic_add(). This prevents the
+scanner from getting a reference to the tail page up to that point. After the
+atomic_add() we don't care about the ->_refcount value. We already know how
+many references should be uncharged from the head page.
For head page get_page_unless_zero() will succeed and we don't mind. It's
clear where reference should go after split: it will stay on head page.
diff --git a/Documentation/vm/z3fold.txt b/Documentation/vm/z3fold.txt
new file mode 100644
index 000000000000..38e4dac810b6
--- /dev/null
+++ b/Documentation/vm/z3fold.txt
@@ -0,0 +1,26 @@
+z3fold
+------
+
+z3fold is a special purpose allocator for storing compressed pages.
+It is designed to store up to three compressed pages per physical page.
+It is a zbud derivative which allows for a higher compression
+ratio while keeping the simplicity and determinism of its predecessor.
+
+The main differences between z3fold and zbud are:
+* unlike zbud, z3fold allows for up to PAGE_SIZE allocations
+* z3fold can hold up to 3 compressed pages in its page
+* z3fold doesn't export any API itself and is thus intended to be used
+ via the zpool API.
+
+To keep the determinism and simplicity, z3fold, just like zbud, always
+stores an integral number of compressed pages per page, but it can store
+up to 3 pages, unlike zbud which can store at most 2. Therefore the
+compression ratio goes to around 2.7x while zbud's is around 1.7x.
+
+Unlike zbud (but like zsmalloc for that matter) z3fold_alloc() does not
+return a dereferenceable pointer. Instead, it returns an unsigned long
+handle which encodes the actual location of the allocated object.
+
+While keeping its effective compression ratio close to zsmalloc's, z3fold
+doesn't depend on the MMU being enabled and provides more predictable reclaim
+behavior, which makes it a better fit for small and response-critical systems.
diff --git a/Documentation/w1/slaves/w1_therm b/Documentation/w1/slaves/w1_therm
index 13411fe52f7f..d1f93af36f38 100644
--- a/Documentation/w1/slaves/w1_therm
+++ b/Documentation/w1/slaves/w1_therm
@@ -33,7 +33,15 @@ temperature conversion at a time. If none of the devices are parasite
powered it would be possible to convert all the devices at the same
time and then go back to read individual sensors. That isn't
currently supported. The driver also doesn't support reduced
-precision (which would also reduce the conversion time).
+precision (which would also reduce the conversion time) when reading values.
+
+Writing a value between 9 and 12 to the sysfs w1_slave file will change the
+precision of the sensor for the next readings. This value is in (volatile)
+SRAM, so it is reset when the sensor gets power-cycled.
+
+To store the current precision configuration into EEPROM, the value 0
+has to be written to the sysfs w1_slave file. Since the EEPROM supports only a
+limited number of write cycles (>50k), this command should be used sparingly.
The module parameter strong_pullup can be set to 0 to disable the
strong pullup, 1 to enable autodetection or 2 to force strong pullup.
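A small sketch of the sysfs interaction described above; the device id in the
path is a placeholder and error handling is kept minimal:

  #include <stdio.h>

  int main(void)
  {
          const char *path = "/sys/bus/w1/devices/28-000005e2fdc3/w1_slave";
          FILE *f = fopen(path, "w");

          if (!f) {
                  perror("open w1_slave");
                  return 1;
          }
          fprintf(f, "10\n"); /* 9..12 selects precision; 0 stores it to EEPROM */
          fclose(f);
          return 0;
  }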
diff --git a/Documentation/watchdog/hpwdt.txt b/Documentation/watchdog/hpwdt.txt
index 9488078900e0..a40398cce9d1 100644
--- a/Documentation/watchdog/hpwdt.txt
+++ b/Documentation/watchdog/hpwdt.txt
@@ -1,64 +1,67 @@
-Last reviewed: 06/02/2009
+Last reviewed: 04/04/2016
- HP iLO2 NMI Watchdog Driver
- NMI sourcing for iLO2 based ProLiant Servers
+ HPE iLO NMI Watchdog Driver
+ NMI sourcing for iLO based ProLiant Servers
Documentation and Driver by
- Thomas Mingarelli <thomas.mingarelli@hp.com>
+ Thomas Mingarelli <thomas.mingarelli@hpe.com>
- The HP iLO2 NMI Watchdog driver is a kernel module that provides basic
+ The HPE iLO NMI Watchdog driver is a kernel module that provides basic
watchdog functionality and the added benefit of NMI sourcing. Both the
watchdog functionality and the NMI sourcing capability need to be enabled
by the user. Remember that the two modes are not dependent on one another.
A user can have the NMI sourcing without the watchdog timer and vice-versa.
+ All references to iLO in this document apply equally to iLO2 and all
+ subsequent generations.
Watchdog functionality is enabled like any other common watchdog driver. That
is, an application needs to be started that kicks off the watchdog timer. A
basic application exists in the Documentation/watchdog/src directory called
watchdog-test.c. Simply compile the C file and kick it off. If the system
- gets into a bad state and hangs, the HP ProLiant iLO 2 timer register will
+ gets into a bad state and hangs, the HPE ProLiant iLO timer register will
not be updated in a timely fashion and a hardware system reset (also known as
an Automatic Server Recovery (ASR)) event will occur.
- The hpwdt driver also has four (4) module parameters. They are the following:
+ The hpwdt driver also has three (3) module parameters. They are the following:
- soft_margin - allows the user to set the watchdog timer value
- allow_kdump - allows the user to save off a kernel dump image after an NMI
+ soft_margin - allows the user to set the watchdog timer value.
+ Default value is 30 seconds.
+ allow_kdump - allows the user to save off a kernel dump image after an NMI.
+ Default value is 1/ON
nowayout - basic watchdog parameter that does not allow the timer to
be restarted or an impending ASR to be escaped.
- priority - determines whether or not the hpwdt driver is first on the
- die_notify list to handle NMIs or last. The default value
- for this module parameter is 0 or LAST. If the user wants to
- enable NMI sourcing then reload the hpwdt driver with
- priority=1 (and boot with nmi_watchdog=0).
+ Default value is set when compiling the kernel. If it is set
+ to "Y", then there is no way of disabling the watchdog once
+ it has been started.
NOTE: More information about watchdog drivers in general, including the ioctl
interface to /dev/watchdog can be found in
Documentation/watchdog/watchdog-api.txt and Documentation/IPMI.txt.
- The priority parameter was introduced due to other kernel software that relied
- on handling NMIs (like oprofile). Keeping hpwdt's priority at 0 (or LAST)
- enables the users of NMIs for non critical events to be work as expected.
-
The NMI sourcing capability is disabled by default due to the inability to
distinguish between "NMI Watchdog Ticks" and "HW generated NMI events" in the
Linux kernel. What this means is that the hpwdt nmi handler code is called
each time the NMI signal fires off. This could amount to several thousands of
NMIs in a matter of seconds. If a user sees the Linux kernel's "dazed and
confused" message in the logs or if the system gets into a hung state, then
- the hpwdt driver can be reloaded with the "priority" module parameter set
- (priority=1).
+ the hpwdt driver can be reloaded.
1. If the kernel has not been booted with nmi_watchdog turned off then
- edit /boot/grub/menu.lst and place the nmi_watchdog=0 at the end of the
- currently booting kernel line.
+ edit the boot loader configuration and append nmi_watchdog=0 to the currently
+ booting kernel line. Depending on your Linux distribution and platform setup:
+ For non-UEFI systems
+ /boot/grub/grub.conf or
+ /boot/grub/menu.lst
+ For UEFI systems
+ /boot/efi/EFI/distroname/grub.conf or
+ /boot/efi/efi/distroname/elilo.conf
2. reboot the server
- 3. Once the system comes up perform a rmmod hpwdt
- 4. insmod /lib/modules/`uname -r`/kernel/drivers/char/watchdog/hpwdt.ko priority=1
+ 3. Once the system comes up perform a modprobe -r hpwdt
+ 4. modprobe hpwdt
Now, the hpwdt can successfully receive and source the NMI and provide a log
- message that details the reason for the NMI (as determined by the HP BIOS).
+ message that details the reason for the NMI (as determined by the HPE BIOS).
- Below is a list of NMIs the HP BIOS understands along with the associated
+ Below is a list of NMIs the HPE BIOS understands along with the associated
code (reason):
No source found 00h
@@ -92,4 +95,4 @@ Last reviewed: 06/02/2009
-- Tom Mingarelli
- (thomas.mingarelli@hp.com)
+ (thomas.mingarelli@hpe.com)
diff --git a/Documentation/watchdog/watchdog-parameters.txt b/Documentation/watchdog/watchdog-parameters.txt
index c161399a6b5c..a8d364227a77 100644
--- a/Documentation/watchdog/watchdog-parameters.txt
+++ b/Documentation/watchdog/watchdog-parameters.txt
@@ -86,6 +86,10 @@ nowayout: Watchdog cannot be stopped once started
davinci_wdt:
heartbeat: Watchdog heartbeat period in seconds from 1 to 600, default 60
-------------------------------------------------
+ebc-c384_wdt:
+timeout: Watchdog timeout in seconds. (1<=timeout<=15300, default=60)
+nowayout: Watchdog cannot be stopped once started
+-------------------------------------------------
ep93xx_wdt:
nowayout: Watchdog cannot be stopped once started
timeout: Watchdog timeout in seconds. (1<=timeout<=3600, default=TBD)
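For drivers listed here the timeout can also be adjusted at run time through
the generic watchdog character device; a sketch (device node and value are
examples only) follows:

  #include <fcntl.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>
  #include <linux/watchdog.h>

  int main(void)
  {
          int timeout = 60;       /* seconds, example value */
          int fd = open("/dev/watchdog", O_WRONLY);

          if (fd < 0) {
                  perror("open /dev/watchdog");
                  return 1;
          }
          ioctl(fd, WDIOC_SETTIMEOUT, &timeout);  /* driver may clamp/round it */
          ioctl(fd, WDIOC_GETTIMEOUT, &timeout);
          printf("effective timeout: %d s\n", timeout);
          ioctl(fd, WDIOC_KEEPALIVE, 0);          /* kick the watchdog once */
          write(fd, "V", 1);      /* magic close, honoured unless nowayout is set */
          close(fd);
          return 0;
  }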
diff --git a/Documentation/x86/intel_mpx.txt b/Documentation/x86/intel_mpx.txt
index 818518a3ff01..1a5a12184a35 100644
--- a/Documentation/x86/intel_mpx.txt
+++ b/Documentation/x86/intel_mpx.txt
@@ -136,7 +136,7 @@ A: MPX-enabled application will possibly create a lot of bounds tables in
If we were to preallocate them for the 128TB of user virtual address
space, we would need to reserve 512TB+2GB, which is larger than the
entire virtual address space today. This means they can not be reserved
- ahead of time. Also, a single process's pre-popualated bounds directory
+ ahead of time. Also, a single process's pre-populated bounds directory
consumes 2GB of virtual *AND* physical memory. IOW, it's completely
infeasible to prepopulate bounds directories.
@@ -151,7 +151,7 @@ A: This would work if we could hook the site of each and every memory
these calls.
Q: Could a bounds fault be handed to userspace and the tables allocated
- there in a signal handler intead of in the kernel?
+ there in a signal handler instead of in the kernel?
A: mmap() is not on the list of safe async handler functions and even
if mmap() would work it still requires locking or nasty tricks to
keep track of the allocation state there.
diff --git a/Documentation/x86/pat.txt b/Documentation/x86/pat.txt
index 54944c71b819..2a4ee6302122 100644
--- a/Documentation/x86/pat.txt
+++ b/Documentation/x86/pat.txt
@@ -196,3 +196,35 @@ Another, more verbose way of getting PAT related debug messages is with
"debugpat" boot parameter. With this parameter, various debug messages are
printed to dmesg log.
+PAT Initialization
+------------------
+
+The following table describes how PAT is initialized under various
+configurations. The PAT MSR must be updated by Linux in order to support WC
+and WT attributes. Otherwise, the PAT MSR has the value programmed in it
+by the firmware. Note that Xen enables the WC attribute in the PAT MSR for guests.
+
+ MTRR PAT Call Sequence PAT State PAT MSR
+ =========================================================
+ E E MTRR -> PAT init Enabled OS
+ E D MTRR -> PAT init Disabled -
+ D E MTRR -> PAT disable Disabled BIOS
+ D D MTRR -> PAT disable Disabled -
+ - np/E PAT -> PAT disable Disabled BIOS
+ - np/D PAT -> PAT disable Disabled -
+ E !P/E MTRR -> PAT init Disabled BIOS
+ D !P/E MTRR -> PAT disable Disabled BIOS
+ !M !P/E MTRR stub -> PAT disable Disabled BIOS
+
+ Legend
+ ------------------------------------------------
+ E Feature enabled in CPU
+ D Feature disabled/unsupported in CPU
+ np "nopat" boot option specified
+ !P CONFIG_X86_PAT option unset
+ !M CONFIG_MTRR option unset
+ Enabled PAT state set to enabled
+ Disabled PAT state set to disabled
+ OS PAT initializes PAT MSR with OS setting
+ BIOS PAT keeps PAT MSR with BIOS setting
+
diff --git a/Documentation/xillybus.txt b/Documentation/xillybus.txt
index 81d111b4dc28..1660145b9969 100644
--- a/Documentation/xillybus.txt
+++ b/Documentation/xillybus.txt
@@ -215,7 +215,7 @@ in xillybus_core.c as follows:
choice is a non-zero value, to match standard UNIX behavior.
* synchronous: A non-zero value means that the pipe is synchronous. See
- Syncronization above.
+ Synchronization above.
* bufsize: Each DMA buffer's size. Always a power of two.
diff --git a/Documentation/zh_CN/HOWTO b/Documentation/zh_CN/HOWTO
index 54ea24ff63c7..f0613b92e0be 100644
--- a/Documentation/zh_CN/HOWTO
+++ b/Documentation/zh_CN/HOWTO
@@ -207,7 +207,7 @@ kernel.org网站的pub/linux/kernel/v2.6/目录下找到它。它的开发遵循
- 每当一个新版本的内核被发布,为期两周的集成窗口将被打开。在这段时间里
维护者可以向Linus提交大段的修改,通常这些修改已经被放到-mm内核中几个
星期了。提交大量修改的首选方式是使用git工具(内核的代码版本管理工具
- ,更多的信息可以在http://git.or.cz/获取),不过使用普通补丁也是可以
+ ,更多的信息可以在http://git-scm.com/获取),不过使用普通补丁也是可以
的。
- 两个星期以后-rc1版本内核发布。之后只有不包含可能影响整个内核稳定性的
新功能的补丁才可能被接受。请注意一个全新的驱动程序(或者文件系统)有
@@ -218,8 +218,6 @@ kernel.org网站的pub/linux/kernel/v2.6/目录下找到它。它的开发遵循
时,一个新的-rc版本就会被发布。计划是每周都发布新的-rc版本。
- 这个过程一直持续下去直到内核被认为达到足够稳定的状态,持续时间大概是
6个星期。
- - 以下地址跟踪了在每个-rc发布中发现的退步列表:
- http://kernelnewbies.org/known_regressions
关于内核发布,值得一提的是Andrew Morton在linux-kernel邮件列表中如是说:
“没有人知道新内核何时会被发布,因为发布是根据已知bug的情况来决定
diff --git a/Documentation/zh_CN/arm64/booting.txt b/Documentation/zh_CN/arm64/booting.txt
index 1145bf864082..c1dd968c5ee9 100644
--- a/Documentation/zh_CN/arm64/booting.txt
+++ b/Documentation/zh_CN/arm64/booting.txt
@@ -8,7 +8,7 @@ or if there is a problem with the translation.
M: Will Deacon <will.deacon@arm.com>
zh_CN: Fu Wei <wefu@redhat.com>
-C: 1926e54f115725a9248d0c4c65c22acaf94de4c4
+C: 55f058e7574c3615dea4615573a19bdb258696c6
---------------------------------------------------------------------
Documentation/arm64/booting.txt 的中文翻译
@@ -20,7 +20,7 @@ Documentation/arm64/booting.txt 的中文翻译
中文版维护者: 傅炜 Fu Wei <wefu@redhat.com>
中文版翻译者: 傅炜 Fu Wei <wefu@redhat.com>
中文版校译者: 傅炜 Fu Wei <wefu@redhat.com>
-本文翻译提交时的 Git 检出点为: 1926e54f115725a9248d0c4c65c22acaf94de4c4
+本文翻译提交时的 Git 检出点为: 55f058e7574c3615dea4615573a19bdb258696c6
以下为正文
---------------------------------------------------------------------
@@ -125,18 +125,22 @@ AArch64 内核当前没有提供自解压代码,因此如果使用了压缩内
1 - 4K
2 - 16K
3 - 64K
- 位 3-63: 保留。
+ 位 3: 内核物理位置
+ 0 - 2MB 对齐基址应尽量靠近内存起始处,因为
+ 其基址以下的内存无法通过线性映射访问
+ 1 - 2MB 对齐基址可以在物理内存的任意位置
+ 位 4-63: 保留。
- 当 image_size 为零时,引导装载程序应试图在内核映像末尾之后尽可能
多地保留空闲内存供内核直接使用。对内存空间的需求量因所选定的内核
特性而异, 并无实际限制。
-内核映像必须被放置在靠近可用系统内存起始的 2MB 对齐为基址的
-text_offset 字节处,并从该处被调用。当前,对 Linux 来说在此基址以下的
-内存是无法使用的,因此强烈建议将系统内存的起始作为这个基址。2MB 对齐
-基址和内核映像起始地址之间的区域对于内核来说没有特殊意义,且可能被
-用于其他目的。
+内核映像必须被放置在任意一个可用系统内存 2MB 对齐基址的 text_offset
+字节处,并从该处被调用。2MB 对齐基址和内核映像起始地址之间的区域对于
+内核来说没有特殊意义,且可能被用于其他目的。
从映像起始地址算起,最少必须准备 image_size 字节的空闲内存供内核使用。
+注: v4.6 之前的版本无法使用内核映像物理偏移以下的内存,所以当时建议
+将映像尽量放置在靠近系统内存起始的地方。
任何提供给内核的内存(甚至在映像起始地址之前),若未从内核中标记为保留
(如在设备树(dtb)的 memreserve 区域),都将被认为对内核是可用。
diff --git a/Kbuild b/Kbuild
index f55cefd9bf29..3d0ae152af7c 100644
--- a/Kbuild
+++ b/Kbuild
@@ -5,6 +5,7 @@
# 2) Generate timeconst.h
# 3) Generate asm-offsets.h (may need bounds.h and timeconst.h)
# 4) Check for missing system calls
+# 5) Generate constants.py (may need bounds.h)
# Default sed regexp - multiline due to syntax constraints
define sed-y
@@ -96,5 +97,14 @@ quiet_cmd_syscalls = CALL $<
missing-syscalls: scripts/checksyscalls.sh $(offsets-file) FORCE
$(call cmd,syscalls)
+#####
+# 5) Generate constants for Python GDB integration
+#
+
+extra-$(CONFIG_GDB_SCRIPTS) += build_constants_py
+
+build_constants_py: $(obj)/$(timeconst-file) $(obj)/$(bounds-file)
+ @$(MAKE) $(build)=scripts/gdb/linux $@
+
# Keep these three files during make clean
no-clean-files := $(bounds-file) $(offsets-file) $(timeconst-file)
diff --git a/MAINTAINERS b/MAINTAINERS
index 15ec03b58d84..150d52cb983c 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -175,7 +175,6 @@ F: drivers/net/ethernet/realtek/r8169.c
8250/16?50 (AND CLONE UARTS) SERIAL DRIVER
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: linux-serial@vger.kernel.org
-W: http://serial.sourceforge.net
S: Maintained
T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty.git
F: drivers/tty/serial/8250*
@@ -627,6 +626,7 @@ F: include/linux/altera_jtaguart.h
AMD CRYPTOGRAPHIC COPROCESSOR (CCP) DRIVER
M: Tom Lendacky <thomas.lendacky@amd.com>
+M: Gary Hook <gary.hook@amd.com>
L: linux-crypto@vger.kernel.org
S: Supported
F: drivers/crypto/ccp/
@@ -776,6 +776,15 @@ S: Supported
F: drivers/android/
F: drivers/staging/android/
+ANDROID ION DRIVER
+M: Laura Abbott <labbott@redhat.com>
+M: Sumit Semwal <sumit.semwal@linaro.org>
+L: devel@driverdev.osuosl.org
+S: Supported
+F: drivers/staging/android/ion
+F: drivers/staging/android/uapi/ion.h
+F: drivers/staging/android/uapi/ion_test.h
+
AOA (Apple Onboard Audio) ALSA DRIVER
M: Johannes Berg <johannes@sipsolutions.net>
L: linuxppc-dev@lists.ozlabs.org
@@ -954,12 +963,15 @@ F: drivers/clk/sunxi/
ARM/Amlogic Meson SoC support
M: Carlo Caione <carlo@caione.org>
+M: Kevin Hilman <khilman@baylibre.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-L: linux-meson@googlegroups.com
+L: linux-amlogic@lists.infradead.org
W: http://linux-meson.com/
S: Maintained
F: arch/arm/mach-meson/
F: arch/arm/boot/dts/meson*
+F: arch/arm64/boot/dts/amlogic/
+F: drivers/pinctrl/meson/
N: meson
ARM/Annapurna Labs ALPINE ARCHITECTURE
@@ -979,7 +991,14 @@ S: Maintained
L: linux-arm-kernel@axis.com
F: arch/arm/mach-artpec
F: arch/arm/boot/dts/artpec6*
-F: drivers/clk/clk-artpec6.c
+F: drivers/clk/axis
+
+ARM/ASPEED MACHINE SUPPORT
+M: Joel Stanley <joel@jms.id.au>
+S: Maintained
+F: arch/arm/mach-aspeed/
+F: arch/arm/boot/dts/aspeed-*
+F: drivers/*/*aspeed*
ARM/ATMEL AT91RM9200, AT91SAM9 AND SAMA5 SOC SUPPORT
M: Nicolas Ferre <nicolas.ferre@atmel.com>
@@ -1266,7 +1285,7 @@ M: Santosh Shilimkar <ssantosh@kernel.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-keystone/
-F: arch/arm/boot/dts/k2*
+F: arch/arm/boot/dts/keystone-*
T: git git://git.kernel.org/pub/scm/linux/kernel/git/ssantosh/linux-keystone.git
ARM/TEXAS INSTRUMENT KEYSTONE CLOCK FRAMEWORK
@@ -1294,6 +1313,12 @@ L: linux-kernel@vger.kernel.org
S: Maintained
F: drivers/memory/*emif*
+ARM/LG1K ARCHITECTURE
+M: Chanho Min <chanho.min@lge.com>
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S: Maintained
+F: arch/arm64/boot/dts/lg/
+
ARM/LOGICPD PXA270 MACHINE SUPPORT
M: Lennert Buytenhek <kernel@wantstofly.org>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@@ -1312,11 +1337,25 @@ F: drivers/mtd/spi-nor/nxp-spifi.c
F: drivers/rtc/rtc-lpc24xx.c
N: lpc18xx
+ARM/LPC32XX SOC SUPPORT
+M: Vladimir Zapolskiy <vz@mleia.com>
+M: Sylvain Lemieux <slemieux.tyco@gmail.com>
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+T: git git://github.com/vzapolskiy/linux-lpc32xx.git
+S: Maintained
+F: arch/arm/boot/dts/lpc32*
+F: arch/arm/mach-lpc32xx/
+F: drivers/i2c/busses/i2c-pnx.c
+F: drivers/net/ethernet/nxp/lpc_eth.c
+F: drivers/usb/host/ohci-nxp.c
+F: drivers/watchdog/pnx4008_wdt.c
+N: lpc32xx
+
ARM/MAGICIAN MACHINE SUPPORT
M: Philipp Zabel <philipp.zabel@gmail.com>
S: Maintained
-ARM/Marvell Kirkwood and Armada 370, 375, 38x, XP SOC support
+ARM/Marvell Kirkwood and Armada 370, 375, 38x, 39x, XP, 3700, 7K/8K SOC support
M: Jason Cooper <jason@lakedaemon.net>
M: Andrew Lunn <andrew@lunn.ch>
M: Gregory Clement <gregory.clement@free-electrons.com>
@@ -1328,7 +1367,8 @@ F: drivers/rtc/rtc-armada38x.c
F: arch/arm/boot/dts/armada*
F: arch/arm/boot/dts/kirkwood*
F: arch/arm64/boot/dts/marvell/armada*
-
+F: drivers/cpufreq/mvebu-cpufreq.c
+F: arch/arm/configs/mvebu_*_defconfig
ARM/Marvell Berlin SoC support
M: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
@@ -1361,6 +1401,15 @@ W: http://www.digriz.org.uk/ts78xx/kernel
S: Maintained
F: arch/arm/mach-orion5x/ts78xx-*
+ARM/OXNAS platform support
+M: Neil Armstrong <narmstrong@baylibre.com>
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+S: Maintained
+F: arch/arm/mach-oxnas/
+F: arch/arm/boot/dts/oxnas*
+F: arch/arm/boot/dts/wd-mbwe.dts
+N: oxnas
+
ARM/Mediatek RTC DRIVER
M: Eddie Huang <eddie.huang@mediatek.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
@@ -1476,7 +1525,10 @@ F: arch/arm/boot/dts/qcom-*.dts
F: arch/arm/boot/dts/qcom-*.dtsi
F: arch/arm/mach-qcom/
F: arch/arm64/boot/dts/qcom/*
+F: drivers/i2c/busses/i2c-qup.c
+F: drivers/clk/qcom/
F: drivers/soc/qcom/
+F: drivers/spi/spi-qup.c
F: drivers/tty/serial/msm_serial.h
F: drivers/tty/serial/msm_serial.c
F: drivers/*/pm8???-*
@@ -1497,6 +1549,8 @@ Q: http://patchwork.kernel.org/project/linux-renesas-soc/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/horms/renesas.git next
S: Supported
F: arch/arm64/boot/dts/renesas/
+F: drivers/soc/renesas/
+F: include/linux/soc/renesas/
ARM/RISCPC ARCHITECTURE
M: Russell King <linux@armlinux.org.uk>
@@ -1546,6 +1600,7 @@ F: arch/arm/mach-s5p*/
F: arch/arm/mach-exynos*/
F: drivers/*/*s3c2410*
F: drivers/*/*/*s3c2410*
+F: drivers/memory/samsung/*
F: drivers/soc/samsung/*
F: drivers/spi/spi-s3c*
F: sound/soc/samsung/*
@@ -1610,6 +1665,8 @@ F: arch/arm/configs/shmobile_defconfig
F: arch/arm/include/debug/renesas-scif.S
F: arch/arm/mach-shmobile/
F: drivers/sh/
+F: drivers/soc/renesas/
+F: include/linux/soc/renesas/
ARM/SOCFPGA ARCHITECTURE
M: Dinh Nguyen <dinguyen@opensource.altera.com>
@@ -1644,6 +1701,7 @@ F: arch/arm/boot/dts/sti*
F: drivers/char/hw_random/st-rng.c
F: drivers/clocksource/arm_global_timer.c
F: drivers/clocksource/clksrc_st_lpc.c
+F: drivers/cpufreq/sti-cpufreq.c
F: drivers/i2c/busses/i2c-st.c
F: drivers/media/rc/st_rc.c
F: drivers/media/platform/sti/c8sectpfe/
@@ -1653,6 +1711,7 @@ F: drivers/phy/phy-miphy365x.c
F: drivers/phy/phy-stih407-usb.c
F: drivers/phy/phy-stih41x-usb.c
F: drivers/pinctrl/pinctrl-st.c
+F: drivers/remoteproc/st_remoteproc.c
F: drivers/reset/sti/
F: drivers/rtc/rtc-st-lpc.c
F: drivers/tty/serial/st-asc.c
@@ -1777,6 +1836,7 @@ F: */*/vexpress*
F: */*/*/vexpress*
F: drivers/clk/versatile/clk-vexpress-osc.c
F: drivers/clocksource/versatile.c
+N: mps2
ARM/VFP SUPPORT
M: Russell King <linux@armlinux.org.uk>
@@ -1891,6 +1951,16 @@ L: platform-driver-x86@vger.kernel.org
S: Maintained
F: drivers/platform/x86/asus-wireless.c
+ASYMMETRIC KEYS
+M: David Howells <dhowells@redhat.com>
+L: keyrings@vger.kernel.org
+S: Maintained
+F: Documentation/crypto/asymmetric-keys.txt
+F: include/linux/verification.h
+F: include/crypto/public_key.h
+F: include/crypto/pkcs7.h
+F: crypto/asymmetric_keys/
+
ASYNCHRONOUS TRANSFERS/TRANSFORMS (IOAT) API
R: Dan Williams <dan.j.williams@intel.com>
W: http://sourceforge.net/projects/xscaleiop
@@ -2002,6 +2072,11 @@ M: Nicolas Ferre <nicolas.ferre@atmel.com>
S: Supported
F: drivers/tty/serial/atmel_serial.c
+ATMEL AT91 SAMA5D2-Compatible Shutdown Controller
+M: Nicolas Ferre <nicolas.ferre@atmel.com>
+S: Supported
+F: drivers/power/reset/at91-sama5d2_shdwc.c
+
ATMEL SAMA5D2 ADC DRIVER
M: Ludovic Desroches <ludovic.desroches@atmel.com>
L: linux-iio@vger.kernel.org
@@ -2209,10 +2284,13 @@ BATMAN ADVANCED
M: Marek Lindner <mareklindner@neomailbox.ch>
M: Simon Wunderlich <sw@simonwunderlich.de>
M: Antonio Quartulli <a@unstable.cc>
-L: b.a.t.m.a.n@lists.open-mesh.org
+L: b.a.t.m.a.n@lists.open-mesh.org (moderated for non-subscribers)
W: https://www.open-mesh.org/
Q: https://patchwork.open-mesh.org/project/batman/list/
S: Maintained
+F: Documentation/ABI/testing/sysfs-class-net-batman-adv
+F: Documentation/ABI/testing/sysfs-class-net-mesh
+F: Documentation/networking/batman-adv.txt
F: net/batman-adv/
BAYCOM/HDLCDRV DRIVERS FOR AX.25
@@ -2226,7 +2304,7 @@ BCACHE (BLOCK LAYER CACHE)
M: Kent Overstreet <kent.overstreet@gmail.com>
L: linux-bcache@vger.kernel.org
W: http://bcache.evilpiepirate.org
-S: Maintained
+S: Orphan
F: drivers/md/bcache/
BDISP ST MEDIA DRIVER
@@ -2427,6 +2505,7 @@ M: Hauke Mehrtens <hauke@hauke-m.de>
M: Rafał Miłecki <zajec5@gmail.com>
L: linux-mips@linux-mips.org
S: Maintained
+F: Documentation/devicetree/bindings/mips/brcm/
F: arch/mips/bcm47xx/*
F: arch/mips/include/asm/mach-bcm47xx/*
@@ -3354,6 +3433,7 @@ F: Documentation/powerpc/cxlflash.txt
STMMAC ETHERNET DRIVER
M: Giuseppe Cavallaro <peppe.cavallaro@st.com>
+M: Alexandre Torgue <alexandre.torgue@st.com>
L: netdev@vger.kernel.org
W: http://www.stlinux.com
S: Supported
@@ -3545,6 +3625,15 @@ F: drivers/devfreq/devfreq-event.c
F: include/linux/devfreq-event.h
F: Documentation/devicetree/bindings/devfreq/event/
+BUS FREQUENCY DRIVER FOR SAMSUNG EXYNOS
+M: Chanwoo Choi <cw00.choi@samsung.com>
+L: linux-pm@vger.kernel.org
+L: linux-samsung-soc@vger.kernel.org
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/mzx/devfreq.git
+S: Maintained
+F: drivers/devfreq/exynos-bus.c
+F: Documentation/devicetree/bindings/devfreq/exynos-bus.txt
+
DEVICE NUMBER REGISTRY
M: Torben Mathiasen <device@lanana.org>
W: http://lanana.org/docs/device-list/index.html
@@ -4506,6 +4595,12 @@ S: Maintained
F: drivers/video/fbdev/exynos/exynos_mipi*
F: include/video/exynos_mipi*
+EZchip NPS platform support
+M: Noam Camus <noamc@ezchip.com>
+S: Supported
+F: arch/arc/plat-eznps
+F: arch/arc/boot/dts/eznps.dts
+
F71805F HARDWARE MONITORING DRIVER
M: Jean Delvare <jdelvare@suse.com>
L: linux-hwmon@vger.kernel.org
@@ -4788,6 +4883,7 @@ FREESCALE SOC SOUND DRIVERS
M: Timur Tabi <timur@tabi.org>
M: Nicolin Chen <nicoleotsuka@gmail.com>
M: Xiubo Li <Xiubo.Lee@gmail.com>
+R: Fabio Estevam <fabio.estevam@nxp.com>
L: alsa-devel@alsa-project.org (moderated for non-subscribers)
L: linuxppc-dev@lists.ozlabs.org
S: Maintained
@@ -4797,6 +4893,7 @@ F: sound/soc/fsl/mpc8610_hpcd.c
FREESCALE QORIQ MANAGEMENT COMPLEX DRIVER
M: "J. German Rivera" <German.Rivera@freescale.com>
+M: Stuart Yoder <stuart.yoder@nxp.com>
L: linux-kernel@vger.kernel.org
S: Maintained
F: drivers/staging/fsl-mc/
@@ -4834,7 +4931,7 @@ F: include/linux/fscache*.h
F2FS FILE SYSTEM
M: Jaegeuk Kim <jaegeuk@kernel.org>
M: Changman Lee <cm224.lee@samsung.com>
-R: Chao Yu <chao2.yu@samsung.com>
+R: Chao Yu <yuchao0@huawei.com>
L: linux-f2fs-devel@lists.sourceforge.net
W: http://en.wikipedia.org/wiki/F2FS
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs.git
@@ -4900,6 +4997,7 @@ F: drivers/scsi/gdt*
GDB KERNEL DEBUGGING HELPER SCRIPTS
M: Jan Kiszka <jan.kiszka@siemens.com>
+M: Kieran Bingham <kieran@bingham.xyz>
S: Supported
F: scripts/gdb/
@@ -5011,6 +5109,7 @@ M: Alexandre Courbot <gnurou@gmail.com>
L: linux-gpio@vger.kernel.org
T: git git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-gpio.git
S: Maintained
+F: Documentation/devicetree/bindings/gpio/
F: Documentation/gpio/
F: Documentation/ABI/testing/gpio-cdev
F: Documentation/ABI/obsolete/sysfs-gpio
@@ -5213,6 +5312,13 @@ F: drivers/block/cciss*
F: include/linux/cciss_ioctl.h
F: include/uapi/linux/cciss_ioctl.h
+HFI1 DRIVER
+M: Mike Marciniszyn <mike.marciniszyn@intel.com>
+M: Dennis Dalessandro <dennis.dalessandro@intel.com>
+L: linux-rdma@vger.kernel.org
+S: Supported
+F: drivers/infiniband/hw/hfi1
+
HFS FILESYSTEM
L: linux-fsdevel@vger.kernel.org
S: Orphan
@@ -5402,6 +5508,7 @@ I2C MUXES
M: Peter Rosin <peda@axentia.se>
L: linux-i2c@vger.kernel.org
S: Maintained
+F: Documentation/i2c/i2c-topology
F: Documentation/i2c/muxes/
F: Documentation/devicetree/bindings/i2c/i2c-mux*
F: drivers/i2c/i2c-mux.c
@@ -5666,7 +5773,7 @@ IIO SUBSYSTEM AND DRIVERS
M: Jonathan Cameron <jic23@kernel.org>
R: Hartmut Knaack <knaack.h@gmx.de>
R: Lars-Peter Clausen <lars@metafoo.de>
-R: Peter Meerwald <pmeerw@pmeerw.net>
+R: Peter Meerwald-Stadler <pmeerw@pmeerw.net>
L: linux-iio@vger.kernel.org
S: Maintained
F: drivers/iio/
@@ -5741,7 +5848,6 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/dledford/rdma.git
S: Supported
F: Documentation/infiniband/
F: drivers/infiniband/
-F: drivers/staging/rdma/
F: include/uapi/linux/if_infiniband.h
F: include/uapi/rdma/
F: include/rdma/
@@ -5871,13 +5977,6 @@ F: drivers/char/hw_random/ixp4xx-rng.c
INTEL ETHERNET DRIVERS
M: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
-R: Jesse Brandeburg <jesse.brandeburg@intel.com>
-R: Shannon Nelson <shannon.nelson@intel.com>
-R: Carolyn Wyborny <carolyn.wyborny@intel.com>
-R: Don Skidmore <donald.c.skidmore@intel.com>
-R: Bruce Allan <bruce.w.allan@intel.com>
-R: John Ronciak <john.ronciak@intel.com>
-R: Mitch Williams <mitch.a.williams@intel.com>
L: intel-wired-lan@lists.osuosl.org (moderated for non-subscribers)
W: http://www.intel.com/support/feedback.htm
W: http://e1000.sourceforge.net/
@@ -5957,6 +6056,7 @@ F: drivers/net/wireless/intel/iwlegacy/
INTEL WIRELESS WIFI LINK (iwlwifi)
M: Johannes Berg <johannes.berg@intel.com>
M: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
+M: Luca Coelho <luciano.coelho@intel.com>
M: Intel Linux Wireless <linuxwifi@intel.com>
L: linux-wireless@vger.kernel.org
W: http://intellinuxwireless.org
@@ -6006,6 +6106,14 @@ S: Maintained
F: arch/x86/include/asm/intel_telemetry.h
F: drivers/platform/x86/intel_telemetry*
+INTEL PMC CORE DRIVER
+M: Rajneesh Bhardwaj <rajneesh.bhardwaj@intel.com>
+M: Vishwanath Somayaji <vishwanath.somayaji@intel.com>
+L: platform-driver-x86@vger.kernel.org
+S: Maintained
+F: arch/x86/include/asm/pmc_core.h
+F: drivers/platform/x86/intel_pmc_core*
+
IOC3 ETHERNET DRIVER
M: Ralf Baechle <ralf@linux-mips.org>
L: linux-mips@linux-mips.org
@@ -6122,6 +6230,13 @@ F: include/linux/irqdomain.h
F: kernel/irq/irqdomain.c
F: kernel/irq/msi.c
+ISA
+M: William Breathitt Gray <vilhelm.gray@gmail.com>
+S: Maintained
+F: Documentation/isa.txt
+F: drivers/base/isa.c
+F: include/linux/isa.h
+
ISAPNP
M: Jaroslav Kysela <perex@perex.cz>
S: Maintained
@@ -6302,7 +6417,7 @@ S: Maintained
F: arch/*/include/asm/kasan.h
F: arch/*/mm/kasan_init*
F: Documentation/kasan.txt
-F: include/linux/kasan.h
+F: include/linux/kasan*.h
F: lib/test_kasan.c
F: mm/kasan/
F: scripts/Makefile.kasan
@@ -6316,8 +6431,9 @@ F: Documentation/kbuild/kconfig-language.txt
F: scripts/kconfig/
KDUMP
-M: Vivek Goyal <vgoyal@redhat.com>
-M: Haren Myneni <hbabu@us.ibm.com>
+M: Dave Young <dyoung@redhat.com>
+M: Baoquan He <bhe@redhat.com>
+R: Vivek Goyal <vgoyal@redhat.com>
L: kexec@lists.infradead.org
W: http://lse.sourceforge.net/kdump/
S: Maintained
@@ -6394,6 +6510,7 @@ F: arch/*/include/asm/kvm*
F: include/linux/kvm*
F: include/uapi/linux/kvm*
F: virt/kvm/
+F: tools/kvm/
KERNEL VIRTUAL MACHINE (KVM) FOR AMD-V
M: Joerg Roedel <joro@8bytes.org>
@@ -6462,7 +6579,7 @@ L: kexec@lists.infradead.org
S: Maintained
F: include/linux/kexec.h
F: include/uapi/linux/kexec.h
-F: kernel/kexec.c
+F: kernel/kexec*
KEYS/KEYRINGS:
M: David Howells <dhowells@redhat.com>
@@ -6471,6 +6588,8 @@ S: Maintained
F: Documentation/security/keys.txt
F: include/linux/key.h
F: include/linux/key-type.h
+F: include/linux/keyctl.h
+F: include/uapi/linux/keyctl.h
F: include/keys/
F: security/keys/
@@ -6553,7 +6672,7 @@ F: net/l3mdev
F: include/net/l3mdev.h
LANTIQ MIPS ARCHITECTURE
-M: John Crispin <blogic@openwrt.org>
+M: John Crispin <john@phrozen.org>
L: linux-mips@linux-mips.org
S: Maintained
F: arch/mips/lantiq
@@ -6735,6 +6854,19 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux.git
S: Supported
F: Documentation/powerpc/
F: arch/powerpc/
+F: drivers/char/tpm/tpm_ibmvtpm*
+F: drivers/crypto/nx/
+F: drivers/crypto/vmx/
+F: drivers/net/ethernet/ibm/ibmveth.*
+F: drivers/net/ethernet/ibm/ibmvnic.*
+F: drivers/pci/hotplug/rpa*
+F: drivers/scsi/ibmvscsi/
+N: opal
+N: /pmac
+N: powermac
+N: powernv
+N: [^a-z0-9]ps3
+N: pseries
LINUX FOR POWER MACINTOSH
M: Benjamin Herrenschmidt <benh@kernel.crashing.org>
@@ -6815,6 +6947,7 @@ F: kernel/livepatch/
F: include/linux/livepatch.h
F: arch/x86/include/asm/livepatch.h
F: arch/x86/kernel/livepatch.c
+F: Documentation/livepatch/
F: Documentation/ABI/testing/sysfs-kernel-livepatch
F: samples/livepatch/
L: live-patching@vger.kernel.org
@@ -6903,12 +7036,6 @@ W: logfs.org
S: Maintained
F: fs/logfs/
-LPC32XX MACHINE SUPPORT
-M: Roland Stigge <stigge@antcom.de>
-L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
-S: Maintained
-F: arch/arm/mach-lpc32xx/
-
LSILOGIC MPT FUSION DRIVERS (FC/SAS/SPI)
M: Sathya Prakash <sathya.prakash@broadcom.com>
M: Chaitra P B <chaitra.basappa@broadcom.com>
@@ -7149,9 +7276,9 @@ M: Chanwoo Choi <cw00.choi@samsung.com>
M: Krzysztof Kozlowski <k.kozlowski@samsung.com>
L: linux-kernel@vger.kernel.org
S: Supported
-F: drivers/*/max14577.c
+F: drivers/*/max14577*.c
F: drivers/*/max77686*.c
-F: drivers/*/max77693.c
+F: drivers/*/max77693*.c
F: drivers/extcon/extcon-max14577.c
F: drivers/extcon/extcon-max77693.c
F: drivers/rtc/rtc-max77686.c
@@ -7398,9 +7525,19 @@ W: http://www.linux-mips.org/
T: git git://git.linux-mips.org/pub/scm/ralf/linux.git
Q: http://patchwork.linux-mips.org/project/linux-mips/list/
S: Supported
+F: Documentation/devicetree/bindings/mips/
F: Documentation/mips/
F: arch/mips/
+MIPS/LOONGSON1 ARCHITECTURE
+M: Keguang Zhang <keguang.zhang@gmail.com>
+L: linux-mips@linux-mips.org
+S: Maintained
+F: arch/mips/loongson32/
+F: arch/mips/include/asm/mach-loongson32/
+F: drivers/*/*loongson1*
+F: drivers/*/*/*loongson1*
+
MIROSOUND PCM20 FM RADIO RECEIVER DRIVER
M: Hans Verkuil <hverkuil@xs4all.nl>
L: linux-media@vger.kernel.org
@@ -7668,10 +7805,10 @@ M: Michael Schmitz <schmitzmic@gmail.com>
L: linux-scsi@vger.kernel.org
S: Maintained
F: Documentation/scsi/g_NCR5380.txt
+F: Documentation/scsi/dtc3x80.txt
F: drivers/scsi/NCR5380.*
F: drivers/scsi/arm/cumana_1.c
F: drivers/scsi/arm/oak.c
-F: drivers/scsi/atari_NCR5380.c
F: drivers/scsi/atari_scsi.*
F: drivers/scsi/dmx3191d.c
F: drivers/scsi/dtc.*
@@ -7923,6 +8060,7 @@ NILFS2 FILESYSTEM
M: Ryusuke Konishi <konishi.ryusuke@lab.ntt.co.jp>
L: linux-nilfs@vger.kernel.org
W: http://nilfs.sourceforge.net/
+W: http://nilfs.osdn.jp/
T: git git://github.com/konis/nilfs2.git
S: Supported
F: Documentation/filesystems/nilfs2.txt
@@ -8303,7 +8441,6 @@ F: drivers/of/resolver.c
OPENRISC ARCHITECTURE
M: Jonas Bonn <jonas@southpole.se>
W: http://openrisc.net
-L: linux@lists.openrisc.net (moderated for non-subscribers)
S: Maintained
T: git git://openrisc.net/~jonas/linux
F: arch/openrisc/
@@ -8424,7 +8561,6 @@ F: drivers/platform/x86/panasonic-laptop.c
PANASONIC MN10300/AM33/AM34 PORT
M: David Howells <dhowells@redhat.com>
-M: Koichi Yasutake <yasutake.koichi@jp.panasonic.com>
L: linux-am33-list@redhat.com (moderated for non-subscribers)
W: ftp://ftp.redhat.com/pub/redhat/gnupro/AM33/
S: Maintained
@@ -8766,6 +8902,7 @@ F: arch/*/kernel/*/perf_event*.c
F: arch/*/kernel/*/*/perf_event*.c
F: arch/*/include/asm/perf_event.h
F: arch/*/kernel/perf_callchain.c
+F: arch/*/events/*
F: tools/perf/
PERSONALITY HANDLING
@@ -8858,7 +8995,6 @@ F: drivers/pinctrl/pinctrl-single.c
PIN CONTROLLER - ST SPEAR
M: Viresh Kumar <vireshk@kernel.org>
-L: spear-devel@list.st.com
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
W: http://www.st.com/spear
S: Maintained
@@ -9320,7 +9456,7 @@ S: Maintained
F: drivers/video/fbdev/aty/aty128fb.c
RALINK MIPS ARCHITECTURE
-M: John Crispin <blogic@openwrt.org>
+M: John Crispin <john@phrozen.org>
L: linux-mips@linux-mips.org
S: Maintained
F: arch/mips/ralink
@@ -9618,7 +9754,7 @@ F: drivers/net/wireless/realtek/rtlwifi/rtl8192ce/
RTL8XXXU WIRELESS DRIVER (rtl8xxxu)
M: Jes Sorensen <Jes.Sorensen@redhat.com>
L: linux-wireless@vger.kernel.org
-T: git git://git.kernel.org/pub/scm/linux/kernel/git/jes/linux.git rtl8723au-mac80211
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/jes/linux.git rtl8xxxu-devel
S: Maintained
F: drivers/net/wireless/realtek/rtl8xxxu/
@@ -9883,6 +10019,7 @@ F: drivers/mmc/host/dw_mmc*
SYSTEM TRACE MODULE CLASS
M: Alexander Shishkin <alexander.shishkin@linux.intel.com>
S: Maintained
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/ash/stm.git
F: Documentation/trace/stm.txt
F: drivers/hwtracing/stm/
F: include/linux/stm.h
@@ -10062,7 +10199,6 @@ F: drivers/mmc/host/sdhci-s3c*
SECURE DIGITAL HOST CONTROLLER INTERFACE (SDHCI) ST SPEAR DRIVER
M: Viresh Kumar <vireshk@kernel.org>
-L: spear-devel@list.st.com
L: linux-mmc@vger.kernel.org
S: Maintained
F: drivers/mmc/host/sdhci-spear.c
@@ -10100,6 +10236,12 @@ T: git git://git.kernel.org/pub/scm/linux/kernel/git/jj/apparmor-dev.git
S: Supported
F: security/apparmor/
+LOADPIN SECURITY MODULE
+M: Kees Cook <keescook@chromium.org>
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git lsm/loadpin
+S: Supported
+F: security/loadpin/
+
YAMA SECURITY MODULE
M: Kees Cook <keescook@chromium.org>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git yama/tip
@@ -10290,8 +10432,8 @@ F: arch/arm/mach-s3c24xx/bast-irq.c
TI DAVINCI MACHINE SUPPORT
M: Sekhar Nori <nsekhar@ti.com>
M: Kevin Hilman <khilman@kernel.org>
-T: git git://gitorious.org/linux-davinci/linux-davinci.git
-Q: http://patchwork.kernel.org/project/linux-davinci/list/
+L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+T: git git://git.kernel.org/pub/scm/linux/kernel/git/nsekhar/linux-davinci.git
S: Supported
F: arch/arm/mach-davinci/
F: drivers/i2c/busses/i2c-davinci.c
@@ -10619,7 +10761,6 @@ F: include/linux/compiler.h
SPEAR PLATFORM SUPPORT
M: Viresh Kumar <vireshk@kernel.org>
M: Shiraz Hashim <shiraz.linux.kernel@gmail.com>
-L: spear-devel@list.st.com
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
W: http://www.st.com/spear
S: Maintained
@@ -10628,7 +10769,6 @@ F: arch/arm/mach-spear/
SPEAR CLOCK FRAMEWORK SUPPORT
M: Viresh Kumar <vireshk@kernel.org>
-L: spear-devel@list.st.com
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
W: http://www.st.com/spear
S: Maintained
@@ -10791,12 +10931,6 @@ M: Arnaud Patard <arnaud.patard@rtp-net.org>
S: Odd Fixes
F: drivers/staging/xgifb/
-HFI1 DRIVER
-M: Mike Marciniszyn <infinipath@intel.com>
-L: linux-rdma@vger.kernel.org
-S: Supported
-F: drivers/staging/rdma/hfi1
-
STARFIRE/DURALAN NETWORK DRIVER
M: Ion Badulescu <ionut@badula.org>
S: Odd Fixes
@@ -11071,10 +11205,11 @@ M: Prashant Gaikwad <pgaikwad@nvidia.com>
S: Supported
F: drivers/clk/tegra/
-TEGRA DMA DRIVER
+TEGRA DMA DRIVERS
M: Laxman Dewangan <ldewangan@nvidia.com>
+M: Jon Hunter <jonathanh@nvidia.com>
S: Supported
-F: drivers/dma/tegra20-apb-dma.c
+F: drivers/dma/tegra*
TEGRA I2C DRIVER
M: Laxman Dewangan <ldewangan@nvidia.com>
@@ -11176,6 +11311,7 @@ F: drivers/platform/x86/thinkpad_acpi.c
TI BANDGAP AND THERMAL DRIVER
M: Eduardo Valentin <edubezval@gmail.com>
+M: Keerthy <j-keerthy@ti.com>
L: linux-pm@vger.kernel.org
L: linux-omap@vger.kernel.org
S: Maintained
@@ -11375,14 +11511,13 @@ S: Maintained
F: drivers/media/i2c/tc358743*
F: include/media/i2c/tc358743.h
-TMIO MMC DRIVER
-M: Ian Molton <ian@mnementh.co.uk>
+TMIO/SDHI MMC DRIVER
+M: Wolfram Sang <wsa+renesas@sang-engineering.com>
L: linux-mmc@vger.kernel.org
-S: Maintained
+S: Supported
F: drivers/mmc/host/tmio_mmc*
F: drivers/mmc/host/sh_mobile_sdhi.c
-F: include/linux/mmc/tmio.h
-F: include/linux/mmc/sh_mobile_sdhi.h
+F: include/linux/mfd/tmio.h
TMP401 HARDWARE MONITOR DRIVER
M: Guenter Roeck <linux@roeck-us.net>
@@ -11414,6 +11549,14 @@ W: https://linuxtv.org
S: Odd Fixes
F: drivers/media/pci/tw68/
+TW686X VIDEO4LINUX DRIVER
+M: Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
+L: linux-media@vger.kernel.org
+T: git git://linuxtv.org/media_tree.git
+W: http://linuxtv.org
+S: Maintained
+F: drivers/media/pci/tw686x/
+
TPM DEVICE DRIVER
M: Peter Huewe <peterhuewe@gmx.de>
M: Marcel Selhorst <tpmdd@selhorst.net>
@@ -11447,6 +11590,20 @@ F: include/trace/
F: kernel/trace/
F: tools/testing/selftests/ftrace/
+TRACING MMIO ACCESSES (MMIOTRACE)
+M: Steven Rostedt <rostedt@goodmis.org>
+M: Ingo Molnar <mingo@kernel.org>
+R: Karol Herbst <karolherbst@gmail.com>
+R: Pekka Paalanen <ppaalanen@gmail.com>
+S: Maintained
+L: linux-kernel@vger.kernel.org
+L: nouveau@lists.freedesktop.org
+F: kernel/trace/trace_mmiotrace.c
+F: include/linux/mmiotrace.h
+F: arch/x86/mm/kmmio.c
+F: arch/x86/mm/mmio-mod.c
+F: arch/x86/mm/testmmiotrace.c
+
TRIVIAL PATCHES
M: Jiri Kosina <trivial@kernel.org>
T: git git://git.kernel.org/pub/scm/linux/kernel/git/jikos/trivial.git
@@ -11518,7 +11675,8 @@ F: Documentation/filesystems/ubifs.txt
F: fs/ubifs/
UCLINUX (M68KNOMMU AND COLDFIRE)
-M: Greg Ungerer <gerg@uclinux.org>
+M: Greg Ungerer <gerg@linux-m68k.org>
+W: http://www.linux-m68k.org/
W: http://www.uclinux.org/
L: linux-m68k@lists.linux-m68k.org
L: uclinux-dev@uclinux.org (subscribers-only)
@@ -12126,7 +12284,9 @@ L: linux-kernel@vger.kernel.org
W: http://www.slimlogic.co.uk/?p=48
T: git git://git.kernel.org/pub/scm/linux/kernel/git/broonie/regulator.git
S: Supported
+F: Documentation/devicetree/bindings/regulator/
F: drivers/regulator/
+F: include/dt-bindings/regulator/
F: include/linux/regulator/
VRF
@@ -12202,6 +12362,7 @@ L: linux-watchdog@vger.kernel.org
W: http://www.linux-watchdog.org/
T: git git://www.linux-watchdog.org/linux-watchdog.git
S: Maintained
+F: Documentation/devicetree/bindings/watchdog/
F: Documentation/watchdog/
F: drivers/watchdog/
F: include/linux/watchdog.h
@@ -12306,6 +12467,12 @@ F: include/linux/workqueue.h
F: kernel/workqueue.c
F: Documentation/workqueue.txt
+X-POWERS MULTIFUNCTION PMIC DEVICE DRIVERS
+M: Chen-Yu Tsai <wens@csie.org>
+L: linux-kernel@vger.kernel.org
+S: Maintained
+N: axp[128]
+
X.25 NETWORK LAYER
M: Andrew Hendry <andrew.hendry@gmail.com>
L: linux-x25@vger.kernel.org
diff --git a/Makefile b/Makefile
index acf6155421cc..0f70de63cfdb 100644
--- a/Makefile
+++ b/Makefile
@@ -1,8 +1,8 @@
VERSION = 4
-PATCHLEVEL = 6
+PATCHLEVEL = 7
SUBLEVEL = 0
-EXTRAVERSION = -rc7
-NAME = Charred Weasel
+EXTRAVERSION = -rc1
+NAME = Psychotic Stoned Sheep
# *DOCUMENTATION*
# To see a list of typical targets execute "make help"
@@ -128,6 +128,10 @@ _all:
# Cancel implicit rules on top Makefile
$(CURDIR)/Makefile Makefile: ;
+ifneq ($(words $(subst :, ,$(CURDIR))), 1)
+ $(error main directory cannot contain spaces nor colons)
+endif
+
ifneq ($(KBUILD_OUTPUT),)
# Invoke a second make in the output directory, passing relevant variables
# check that the output directory actually exists
@@ -142,7 +146,7 @@ PHONY += $(MAKECMDGOALS) sub-make
$(filter-out _all sub-make $(CURDIR)/Makefile, $(MAKECMDGOALS)) _all: sub-make
@:
-sub-make: FORCE
+sub-make:
$(Q)$(MAKE) -C $(KBUILD_OUTPUT) KBUILD_SRC=$(CURDIR) \
-f $(CURDIR)/Makefile $(filter-out _all sub-make,$(MAKECMDGOALS))
@@ -364,7 +368,7 @@ AFLAGS_MODULE =
LDFLAGS_MODULE =
CFLAGS_KERNEL =
AFLAGS_KERNEL =
-CFLAGS_GCOV = -fprofile-arcs -ftest-coverage
+CFLAGS_GCOV = -fprofile-arcs -ftest-coverage -fno-tree-loop-im -Wno-maybe-uninitialized
CFLAGS_KCOV = -fsanitize-coverage=trace-pc
@@ -617,7 +621,11 @@ KBUILD_CFLAGS += $(call cc-option,-fno-delete-null-pointer-checks,)
ifdef CONFIG_CC_OPTIMIZE_FOR_SIZE
KBUILD_CFLAGS += -Os $(call cc-disable-warning,maybe-uninitialized,)
else
-KBUILD_CFLAGS += -O2
+ifdef CONFIG_PROFILE_ALL_BRANCHES
+KBUILD_CFLAGS += -O2 $(call cc-disable-warning,maybe-uninitialized,)
+else
+KBUILD_CFLAGS += -O2
+endif
endif
# Tell gcc to never replace conditional load with a non-conditional one
@@ -697,9 +705,10 @@ KBUILD_CFLAGS += $(call cc-option, -mno-global-merge,)
KBUILD_CFLAGS += $(call cc-option, -fcatch-undefined-behavior)
else
-# This warning generated too much noise in a regular build.
-# Use make W=1 to enable this warning (see scripts/Makefile.build)
+# These warnings generated too much noise in a regular build.
+# Use make W=1 to enable them (see scripts/Makefile.build)
KBUILD_CFLAGS += $(call cc-disable-warning, unused-but-set-variable)
+KBUILD_CFLAGS += $(call cc-disable-warning, unused-const-variable)
endif
ifdef CONFIG_FRAME_POINTER
@@ -926,27 +935,41 @@ export KBUILD_ALLDIRS := $(sort $(filter-out arch/%,$(vmlinux-alldirs)) arch Doc
vmlinux-deps := $(KBUILD_LDS) $(KBUILD_VMLINUX_INIT) $(KBUILD_VMLINUX_MAIN)
-# Final link of vmlinux
- cmd_link-vmlinux = $(CONFIG_SHELL) $< $(LD) $(LDFLAGS) $(LDFLAGS_vmlinux)
-quiet_cmd_link-vmlinux = LINK $@
-
-# Include targets which we want to
-# execute if the rest of the kernel build went well.
-vmlinux: scripts/link-vmlinux.sh $(vmlinux-deps) FORCE
+# Include targets which we want to execute sequentially if the rest of the
+# kernel build went well. If CONFIG_TRIM_UNUSED_KSYMS is set, this might be
+# evaluated more than once.
+PHONY += vmlinux_prereq
+vmlinux_prereq: $(vmlinux-deps) FORCE
ifdef CONFIG_HEADERS_CHECK
$(Q)$(MAKE) -f $(srctree)/Makefile headers_check
endif
-ifdef CONFIG_SAMPLES
- $(Q)$(MAKE) $(build)=samples
-endif
ifdef CONFIG_BUILD_DOCSRC
$(Q)$(MAKE) $(build)=Documentation
endif
ifdef CONFIG_GDB_SCRIPTS
$(Q)ln -fsn `cd $(srctree) && /bin/pwd`/scripts/gdb/vmlinux-gdb.py
endif
+ifdef CONFIG_TRIM_UNUSED_KSYMS
+ $(Q)$(CONFIG_SHELL) $(srctree)/scripts/adjust_autoksyms.sh \
+ "$(MAKE) KBUILD_MODULES=1 -f $(srctree)/Makefile vmlinux_prereq"
+endif
+
+# standalone target for easier testing
+include/generated/autoksyms.h: FORCE
+ $(Q)$(CONFIG_SHELL) $(srctree)/scripts/adjust_autoksyms.sh true
+
+# Final link of vmlinux
+ cmd_link-vmlinux = $(CONFIG_SHELL) $< $(LD) $(LDFLAGS) $(LDFLAGS_vmlinux)
+quiet_cmd_link-vmlinux = LINK $@
+
+vmlinux: scripts/link-vmlinux.sh vmlinux_prereq $(vmlinux-deps) FORCE
+$(call if_changed,link-vmlinux)
+# Build samples along with the rest of the kernel
+ifdef CONFIG_SAMPLES
+vmlinux-dirs += samples
+endif
+
# The actual objects are generated when descending,
# make sure no implicit rule kicks in
$(sort $(vmlinux-deps)): $(vmlinux-dirs) ;
@@ -998,10 +1021,12 @@ prepare2: prepare3 outputmakefile asm-generic
prepare1: prepare2 $(version_h) include/generated/utsrelease.h \
include/config/auto.conf
$(cmd_crmodverdir)
+ $(Q)test -e include/generated/autoksyms.h || \
+ touch include/generated/autoksyms.h
archprepare: archheaders archscripts prepare1 scripts_basic
-prepare0: archprepare FORCE
+prepare0: archprepare
$(Q)$(MAKE) $(build)=.
# All the preparing..
@@ -1061,7 +1086,7 @@ INSTALL_FW_PATH=$(INSTALL_MOD_PATH)/lib/firmware
export INSTALL_FW_PATH
PHONY += firmware_install
-firmware_install: FORCE
+firmware_install:
@mkdir -p $(objtree)/firmware
$(Q)$(MAKE) -f $(srctree)/scripts/Makefile.fwinst obj=firmware __fw_install
@@ -1081,7 +1106,7 @@ PHONY += archscripts
archscripts:
PHONY += __headers
-__headers: $(version_h) scripts_basic asm-generic archheaders archscripts FORCE
+__headers: $(version_h) scripts_basic asm-generic archheaders archscripts
$(Q)$(MAKE) $(build)=scripts build_unifdef
PHONY += headers_install_all
@@ -1192,7 +1217,8 @@ else # CONFIG_MODULES
# Modules not configured
# ---------------------------------------------------------------------------
-modules modules_install: FORCE
+PHONY += modules modules_install
+modules modules_install:
@echo >&2
@echo >&2 "The present kernel configuration has modules disabled."
@echo >&2 "Type 'make config' and enable loadable module support."
@@ -1283,6 +1309,7 @@ boards := $(sort $(notdir $(boards)))
board-dirs := $(dir $(wildcard $(srctree)/arch/$(SRCARCH)/configs/*/*_defconfig))
board-dirs := $(sort $(notdir $(board-dirs:/=)))
+PHONY += help
help:
@echo 'Cleaning targets:'
@echo ' clean - Remove most generated files but keep the config and'
@@ -1453,6 +1480,7 @@ $(clean-dirs):
clean: rm-dirs := $(MODVERDIR)
clean: rm-files := $(KBUILD_EXTMOD)/Module.symvers
+PHONY += help
help:
@echo ' Building external modules.'
@echo ' Syntax: make -C path/to/kernel/src M=$$PWD target'
diff --git a/README b/README
index afc4f0d81ee1..e8c8a6dc1c2b 100644
--- a/README
+++ b/README
@@ -2,7 +2,7 @@
These are the release notes for Linux version 4. Read them carefully,
as they tell you what this is all about, explain how to install the
-kernel, and what to do if something goes wrong.
+kernel, and what to do if something goes wrong.
WHAT IS LINUX?
@@ -16,7 +16,7 @@ WHAT IS LINUX?
and multistack networking including IPv4 and IPv6.
It is distributed under the GNU General Public License - see the
- accompanying COPYING file for more details.
+ accompanying COPYING file for more details.
ON WHAT HARDWARE DOES IT RUN?
@@ -44,7 +44,7 @@ DOCUMENTATION:
system: there are much better sources available.
- There are various README files in the Documentation/ subdirectory:
- these typically contain kernel-specific installation notes for some
+ these typically contain kernel-specific installation notes for some
drivers for example. See Documentation/00-INDEX for a list of what
is contained in each file. Please read the Changes file, as it
contains information about the problems, which may result by upgrading
@@ -276,7 +276,7 @@ COMPILING the kernel:
To have the build system also tell the reason for the rebuild of each
target, use "V=2". The default is "V=0".
- - Keep a backup kernel handy in case something goes wrong. This is
+ - Keep a backup kernel handy in case something goes wrong. This is
especially true for the development releases, since each new release
contains new code which has not been debugged. Make sure you keep a
backup of the modules corresponding to that kernel, as well. If you
@@ -290,7 +290,7 @@ COMPILING the kernel:
- In order to boot your new kernel, you'll need to copy the kernel
image (e.g. .../linux/arch/i386/boot/bzImage after compilation)
- to the place where your regular bootable kernel is found.
+ to the place where your regular bootable kernel is found.
- Booting a kernel directly from a floppy without the assistance of a
bootloader such as LILO, is no longer supported.
@@ -303,10 +303,10 @@ COMPILING the kernel:
to update the loading map! If you don't, you won't be able to boot
the new kernel image.
- Reinstalling LILO is usually a matter of running /sbin/lilo.
+ Reinstalling LILO is usually a matter of running /sbin/lilo.
You may wish to edit /etc/lilo.conf to specify an entry for your
old kernel image (say, /vmlinux.old) in case the new one does not
- work. See the LILO docs for more information.
+ work. See the LILO docs for more information.
After reinstalling LILO, you should be all set. Shutdown the system,
reboot, and enjoy!
@@ -314,9 +314,9 @@ COMPILING the kernel:
If you ever need to change the default root device, video mode,
ramdisk size, etc. in the kernel image, use the 'rdev' program (or
alternatively the LILO boot options when appropriate). No need to
- recompile the kernel to change these parameters.
+ recompile the kernel to change these parameters.
- - Reboot with the new kernel and enjoy.
+ - Reboot with the new kernel and enjoy.
IF SOMETHING GOES WRONG:
@@ -383,7 +383,7 @@ IF SOMETHING GOES WRONG:
is followed by a function with a higher address you will find the one
you want. In fact, it may be a good idea to include a bit of
"context" in your problem report, giving a few lines around the
- interesting one.
+ interesting one.
If you for some reason cannot do the above (you have a pre-compiled
kernel image or similar), telling me as much about your setup as
diff --git a/arch/Kconfig b/arch/Kconfig
index 81869a5e7e17..d794384a0404 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -187,7 +187,11 @@ config HAVE_OPTPROBES
config HAVE_KPROBES_ON_FTRACE
bool
+config HAVE_NMI
+ bool
+
config HAVE_NMI_WATCHDOG
+ depends on HAVE_NMI
bool
#
# An arch should select this if it provides all these things:
@@ -517,6 +521,11 @@ config HAVE_ARCH_MMAP_RND_BITS
- ARCH_MMAP_RND_BITS_MIN
- ARCH_MMAP_RND_BITS_MAX
+config HAVE_EXIT_THREAD
+ bool
+ help
+ An architecture implements exit_thread.
+
config ARCH_MMAP_RND_BITS_MIN
int
@@ -589,6 +598,14 @@ config HAVE_STACK_VALIDATION
Architecture supports the 'objtool check' host tool command, which
performs compile-time stack metadata validation.
+config HAVE_ARCH_HASH
+ bool
+ default n
+ help
+ If this is set, the architecture provides an <asm/hash.h>
+ file which provides platform-specific implementations of some
+ functions in <linux/hash.h> or fs/namei.c.
+
#
# ABI hall of shame
#
@@ -638,4 +655,7 @@ config COMPAT_OLD_SIGACTION
config ARCH_NO_COHERENT_DMA_MMAP
bool
+config CPU_NO_EFFICIENT_FFS
+ def_bool n
+
source "kernel/gcov/Kconfig"
diff --git a/arch/alpha/Kconfig b/arch/alpha/Kconfig
index 9d8a85801ed1..7f312d80b43b 100644
--- a/arch/alpha/Kconfig
+++ b/arch/alpha/Kconfig
@@ -13,7 +13,6 @@ config ALPHA
select GENERIC_IRQ_PROBE
select AUTO_IRQ_AFFINITY if SMP
select GENERIC_IRQ_SHOW
- select ARCH_WANT_OPTIONAL_GPIOLIB
select ARCH_WANT_IPC_PARSE_VERSION
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
@@ -27,6 +26,7 @@ config ALPHA
select MODULES_USE_ELF_RELA
select ODD_RT_SIGACTION
select OLD_SIGSUSPEND
+ select CPU_NO_EFFICIENT_FFS if !ALPHA_EV67
help
The Alpha is a 64-bit general-purpose processor designed and
marketed by the Digital Equipment Corporation of blessed memory,
diff --git a/arch/alpha/include/asm/rwsem.h b/arch/alpha/include/asm/rwsem.h
index a83bbea62c67..0131a7058778 100644
--- a/arch/alpha/include/asm/rwsem.h
+++ b/arch/alpha/include/asm/rwsem.h
@@ -63,7 +63,7 @@ static inline int __down_read_trylock(struct rw_semaphore *sem)
return res >= 0 ? 1 : 0;
}
-static inline void __down_write(struct rw_semaphore *sem)
+static inline long ___down_write(struct rw_semaphore *sem)
{
long oldcount;
#ifndef CONFIG_SMP
@@ -83,10 +83,24 @@ static inline void __down_write(struct rw_semaphore *sem)
:"=&r" (oldcount), "=m" (sem->count), "=&r" (temp)
:"Ir" (RWSEM_ACTIVE_WRITE_BIAS), "m" (sem->count) : "memory");
#endif
- if (unlikely(oldcount))
+ return oldcount;
+}
+
+static inline void __down_write(struct rw_semaphore *sem)
+{
+ if (unlikely(___down_write(sem)))
rwsem_down_write_failed(sem);
}
+static inline int __down_write_killable(struct rw_semaphore *sem)
+{
+ if (unlikely(___down_write(sem)))
+ if (IS_ERR(rwsem_down_write_failed_killable(sem)))
+ return -EINTR;
+
+ return 0;
+}
+
/*
* trylock for writing -- returns 1 if successful, 0 if contention
*/
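
The Alpha rwsem change above splits the write-acquire fast path into ___down_write(), which returns the old count, so the plain and the new killable acquire can share it and differ only in how the slow path is handled. A minimal caller-side sketch of what the killable variant enables, assuming the generic down_write_killable() wrapper this pairs with (not shown in this hunk):

	/* Hypothetical caller: bail out with -EINTR instead of sleeping
	 * uninterruptibly when a fatal signal arrives while waiting. */
	static int example_update_mm(struct mm_struct *mm)
	{
		if (down_write_killable(&mm->mmap_sem))
			return -EINTR;
		/* ... modify the address space under the write lock ... */
		up_write(&mm->mmap_sem);
		return 0;
	}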
diff --git a/arch/alpha/kernel/osf_sys.c b/arch/alpha/kernel/osf_sys.c
index 6cc08166ff00..ffb93f499c83 100644
--- a/arch/alpha/kernel/osf_sys.c
+++ b/arch/alpha/kernel/osf_sys.c
@@ -147,7 +147,7 @@ SYSCALL_DEFINE4(osf_getdirentries, unsigned int, fd,
long __user *, basep)
{
int error;
- struct fd arg = fdget(fd);
+ struct fd arg = fdget_pos(fd);
struct osf_dirent_callback buf = {
.ctx.actor = osf_filldir,
.dirent = dirent,
@@ -164,7 +164,7 @@ SYSCALL_DEFINE4(osf_getdirentries, unsigned int, fd,
if (count != buf.count)
error = count - buf.count;
- fdput(arg);
+ fdput_pos(arg);
return error;
}
diff --git a/arch/alpha/kernel/pci-sysfs.c b/arch/alpha/kernel/pci-sysfs.c
index 99e8d4796c96..92c0d460815b 100644
--- a/arch/alpha/kernel/pci-sysfs.c
+++ b/arch/alpha/kernel/pci-sysfs.c
@@ -77,10 +77,10 @@ static int pci_mmap_resource(struct kobject *kobj,
if (i >= PCI_ROM_RESOURCE)
return -ENODEV;
- if (!__pci_mmap_fits(pdev, i, vma, sparse))
+ if (res->flags & IORESOURCE_MEM && iomem_is_exclusive(res->start))
return -EINVAL;
- if (iomem_is_exclusive(res->start))
+ if (!__pci_mmap_fits(pdev, i, vma, sparse))
return -EINVAL;
pcibios_resource_to_bus(pdev->bus, &bar, res);
diff --git a/arch/alpha/kernel/process.c b/arch/alpha/kernel/process.c
index 84d13263ce46..b483156698d5 100644
--- a/arch/alpha/kernel/process.c
+++ b/arch/alpha/kernel/process.c
@@ -210,14 +210,6 @@ start_thread(struct pt_regs * regs, unsigned long pc, unsigned long sp)
}
EXPORT_SYMBOL(start_thread);
-/*
- * Free current thread data structures etc..
- */
-void
-exit_thread(void)
-{
-}
-
void
flush_thread(void)
{
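
Together with the new HAVE_EXIT_THREAD symbol added to arch/Kconfig above, deleting the empty alpha exit_thread() only works if the generic code stops assuming every architecture provides one. A rough sketch of how the generic side is presumably gated (that half is outside this excerpt, so treat the exact placement as an assumption):

	#ifdef CONFIG_HAVE_EXIT_THREAD
	void exit_thread(void);				/* provided by the arch */
	#else
	static inline void exit_thread(void) { }	/* no-op elsewhere */
	#endif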
diff --git a/arch/arc/Kconfig b/arch/arc/Kconfig
index a8767430df7d..0dcbacfdea4b 100644
--- a/arch/arc/Kconfig
+++ b/arch/arc/Kconfig
@@ -10,8 +10,9 @@ config ARC
def_bool y
select ARCH_SUPPORTS_ATOMIC_RMW if ARC_HAS_LLSC
select BUILDTIME_EXTABLE_SORT
- select COMMON_CLK
+ select CLKSRC_OF
select CLONE_BACKWARDS
+ select COMMON_CLK
select GENERIC_ATOMIC64
select GENERIC_CLOCKEVENTS
select GENERIC_FIND_FIRST_BIT
@@ -30,6 +31,7 @@ config ARC
select HAVE_MOD_ARCH_SPECIFIC if ARC_DW2_UNWIND
select HAVE_OPROFILE
select HAVE_PERF_EVENTS
+ select HANDLE_DOMAIN_IRQ
select IRQ_DOMAIN
select MODULES_USE_ELF_RELA
select NO_BOOTMEM
@@ -95,6 +97,7 @@ source "arch/arc/plat-sim/Kconfig"
source "arch/arc/plat-tb10x/Kconfig"
source "arch/arc/plat-axs10x/Kconfig"
#New platform adds here
+source "arch/arc/plat-eznps/Kconfig"
endmenu
@@ -104,6 +107,7 @@ choice
config ISA_ARCOMPACT
bool "ARCompact ISA"
+ select CPU_NO_EFFICIENT_FFS
help
The original ARC ISA of ARC600/700 cores
@@ -490,6 +494,17 @@ config ARCH_DMA_ADDR_T_64BIT
config ARC_PLAT_NEEDS_PHYS_TO_DMA
bool
+config ARC_KVADDR_SIZE
+	int "Kernel Virtual Address Space size (MB)"
+ range 0 512
+ default "256"
+ help
+	  The kernel address space is carved out of 256MB of translated address
+	  space for catering to vmalloc, modules, pkmap, fixmap. This however may
+	  not suffice for the vmalloc requirements of a 4K-CPU EZchip system, so
+	  allow this to be stretched to 512 MB (by extending into the reserved
+	  kernel-user gutter).
+
config ARC_CURR_IN_REG
bool "Dedicate Register r25 for current_task pointer"
default y
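
A quick worked example of the new ARC_KVADDR_SIZE knob, assuming the usual ARC PAGE_OFFSET of 0x8000_0000 and the TASK_SIZE of 0x6000_0000 that the processor.h hunk below uses:

	/*
	 * ARC_KVADDR_SIZE = 256: VMALLOC_START = 0x8000_0000 - (256 << 20)
	 *                                      = 0x7000_0000 (the old fixed value)
	 *                        USER_KERNEL_GUTTER = 0x7000_0000 - 0x6000_0000 = 256M
	 * ARC_KVADDR_SIZE = 512: VMALLOC_START = 0x6000_0000
	 *                        USER_KERNEL_GUTTER = 0 (gutter fully consumed)
	 */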
diff --git a/arch/arc/Makefile b/arch/arc/Makefile
index def69e347b2d..02fabef2891c 100644
--- a/arch/arc/Makefile
+++ b/arch/arc/Makefile
@@ -115,6 +115,11 @@ core-y += arch/arc/boot/dts/
core-$(CONFIG_ARC_PLAT_SIM) += arch/arc/plat-sim/
core-$(CONFIG_ARC_PLAT_TB10X) += arch/arc/plat-tb10x/
core-$(CONFIG_ARC_PLAT_AXS10X) += arch/arc/plat-axs10x/
+core-$(CONFIG_ARC_PLAT_EZNPS) += arch/arc/plat-eznps/
+
+ifdef CONFIG_ARC_PLAT_EZNPS
+KBUILD_CPPFLAGS += -I$(srctree)/arch/arc/plat-eznps/include
+endif
drivers-$(CONFIG_OPROFILE) += arch/arc/oprofile/
diff --git a/arch/arc/boot/dts/abilis_tb10x.dtsi b/arch/arc/boot/dts/abilis_tb10x.dtsi
index cfb5052239a1..de53f5c3251c 100644
--- a/arch/arc/boot/dts/abilis_tb10x.dtsi
+++ b/arch/arc/boot/dts/abilis_tb10x.dtsi
@@ -35,6 +35,20 @@
};
};
+ /* TIMER0 with interrupt for clockevent */
+ timer0 {
+ compatible = "snps,arc-timer";
+ interrupts = <3>;
+ interrupt-parent = <&intc>;
+ clocks = <&cpu_clk>;
+ };
+
+ /* TIMER1 for free running clocksource */
+ timer1 {
+ compatible = "snps,arc-timer";
+ clocks = <&cpu_clk>;
+ };
+
soc100 {
#address-cells = <1>;
#size-cells = <1>;
@@ -112,7 +126,7 @@
chan_allocation_order = <0>;
chan_priority = <1>;
block_size = <0x7ff>;
- data_width = <2>;
+ data-width = <4>;
clocks = <&ahb_clk>;
clock-names = "hclk";
};
diff --git a/arch/arc/boot/dts/axc001.dtsi b/arch/arc/boot/dts/axc001.dtsi
index 262496ac2628..3e02f152edcb 100644
--- a/arch/arc/boot/dts/axc001.dtsi
+++ b/arch/arc/boot/dts/axc001.dtsi
@@ -11,6 +11,8 @@
* Note that this file only supports the 770D CPU
*/
+/include/ "skeleton.dtsi"
+
/ {
compatible = "snps,arc";
clock-frequency = <750000000>; /* 750 MHZ */
@@ -24,7 +26,13 @@
ranges = <0x00000000 0xf0000000 0x10000000>;
- cpu_intc: arc700-intc@cpu {
+ core_clk: core_clk {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <750000000>;
+ };
+
+ core_intc: arc700-intc@cpu {
compatible = "snps,arc700-intc";
interrupt-controller;
#interrupt-cells = <1>;
@@ -48,7 +56,7 @@
reg = <0>;
interrupt-controller;
#interrupt-cells = <2>;
- interrupt-parent = <&cpu_intc>;
+ interrupt-parent = <&core_intc>;
interrupts = <15>;
};
};
@@ -86,7 +94,7 @@
compatible = "snps,dw-apb-ictl";
reg = < 0xe0012000 0x200 >;
interrupt-controller;
- interrupt-parent = <&cpu_intc>;
+ interrupt-parent = <&core_intc>;
interrupts = < 7 >;
};
diff --git a/arch/arc/boot/dts/axc003.dtsi b/arch/arc/boot/dts/axc003.dtsi
index 35ece04d8353..378e455a94c4 100644
--- a/arch/arc/boot/dts/axc003.dtsi
+++ b/arch/arc/boot/dts/axc003.dtsi
@@ -10,6 +10,8 @@
* Device tree for AXC003 CPU card: HS38x UP configuration
*/
+/include/ "skeleton_hs.dtsi"
+
/ {
compatible = "snps,arc";
clock-frequency = <90000000>;
@@ -23,7 +25,13 @@
ranges = <0x00000000 0xf0000000 0x10000000>;
- cpu_intc: archs-intc@cpu {
+ core_clk: core_clk {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <90000000>;
+ };
+
+ core_intc: archs-intc@cpu {
compatible = "snps,archs-intc";
interrupt-controller;
#interrupt-cells = <1>;
@@ -47,7 +55,7 @@
reg = <0>;
interrupt-controller;
#interrupt-cells = <2>;
- interrupt-parent = <&cpu_intc>;
+ interrupt-parent = <&core_intc>;
interrupts = <25>;
};
};
@@ -66,7 +74,7 @@
arcpct0: pct {
compatible = "snps,archs-pct";
#interrupt-cells = <1>;
- interrupt-parent = <&cpu_intc>;
+ interrupt-parent = <&core_intc>;
interrupts = <20>;
};
};
@@ -89,7 +97,7 @@
compatible = "snps,dw-apb-ictl";
reg = < 0xe0012000 0x200 >;
interrupt-controller;
- interrupt-parent = <&cpu_intc>;
+ interrupt-parent = <&core_intc>;
interrupts = < 24 >;
};
diff --git a/arch/arc/boot/dts/axc003_idu.dtsi b/arch/arc/boot/dts/axc003_idu.dtsi
index df9ddb623bfe..64c94b2860ab 100644
--- a/arch/arc/boot/dts/axc003_idu.dtsi
+++ b/arch/arc/boot/dts/axc003_idu.dtsi
@@ -10,6 +10,8 @@
* Device tree for AXC003 CPU card: HS38x2 (Dual Core) with IDU intc
*/
+/include/ "skeleton_hs_idu.dtsi"
+
/ {
compatible = "snps,arc";
clock-frequency = <90000000>;
@@ -23,7 +25,13 @@
ranges = <0x00000000 0xf0000000 0x10000000>;
- cpu_intc: archs-intc@cpu {
+ core_clk: core_clk {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <100000000>;
+ };
+
+ core_intc: archs-intc@cpu {
compatible = "snps,archs-intc";
interrupt-controller;
#interrupt-cells = <1>;
@@ -32,7 +40,7 @@
idu_intc: idu-interrupt-controller {
compatible = "snps,archs-idu-intc";
interrupt-controller;
- interrupt-parent = <&cpu_intc>;
+ interrupt-parent = <&core_intc>;
/*
* <hwirq distribution>
@@ -89,7 +97,7 @@
arcpct0: pct {
compatible = "snps,archs-pct";
#interrupt-cells = <1>;
- interrupt-parent = <&cpu_intc>;
+ interrupt-parent = <&core_intc>;
interrupts = <20>;
};
};
diff --git a/arch/arc/boot/dts/axs10x_mb.dtsi b/arch/arc/boot/dts/axs10x_mb.dtsi
index c3ecb5e6b72d..d6c1bbc98ac3 100644
--- a/arch/arc/boot/dts/axs10x_mb.dtsi
+++ b/arch/arc/boot/dts/axs10x_mb.dtsi
@@ -16,7 +16,20 @@
ranges = <0x00000000 0xe0000000 0x10000000>;
interrupt-parent = <&mb_intc>;
+ i2sclk: i2sclk@100a0 {
+ compatible = "snps,axs10x-i2s-pll-clock";
+ reg = <0x100a0 0x10>;
+ clocks = <&i2spll_clk>;
+ #clock-cells = <0>;
+ };
+
clocks {
+ i2spll_clk: i2spll_clk {
+ compatible = "fixed-clock";
+ clock-frequency = <27000000>;
+ #clock-cells = <0>;
+ };
+
i2cclk: i2cclk {
compatible = "fixed-clock";
clock-frequency = <50000000>;
diff --git a/arch/arc/boot/dts/eznps.dts b/arch/arc/boot/dts/eznps.dts
new file mode 100644
index 000000000000..b89f6c3eb352
--- /dev/null
+++ b/arch/arc/boot/dts/eznps.dts
@@ -0,0 +1,96 @@
+/*
+ * Copyright(c) 2015 EZchip Technologies.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ */
+
+/dts-v1/;
+
+/ {
+ compatible = "ezchip,arc-nps";
+ clock-frequency = <83333333>; /* 83.333333 MHZ */
+ #address-cells = <1>;
+ #size-cells = <1>;
+ interrupt-parent = <&intc>;
+ present-cpus = "0-1,16-17";
+ possible-cpus = "0-4095";
+
+ aliases {
+ ethernet0 = &gmac0;
+ };
+
+ chosen {
+ bootargs = "earlycon=uart8250,mmio32be,0xf7209000,115200n8 console=ttyS0,115200n8";
+ };
+
+ memory {
+ device_type = "memory";
+ reg = <0x80000000 0x20000000>; /* 512M */
+ };
+
+ clocks {
+ sysclk: sysclk {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <83333333>;
+ };
+ };
+
+ soc {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+ /* child and parent address space 1:1 mapped */
+ ranges;
+
+ intc: interrupt-controller {
+ compatible = "ezchip,nps400-ic";
+ interrupt-controller;
+ #interrupt-cells = <1>;
+ };
+
+ timer0: timer_clkevt {
+ compatible = "snps,arc-timer";
+ interrupts = <3>;
+ clocks = <&sysclk>;
+ };
+
+ timer1: timer_clksrc {
+ compatible = "ezchip,nps400-timer";
+ clocks = <&sysclk>;
+ clock-names="sysclk";
+ };
+
+ uart@f7209000 {
+ compatible = "snps,dw-apb-uart";
+ device_type = "serial";
+ reg = <0xf7209000 0x100>;
+ interrupts = <6>;
+ clocks = <&sysclk>;
+ clock-names="baudclk";
+ baud = <115200>;
+ reg-shift = <2>;
+ reg-io-width = <4>;
+ native-endian;
+ };
+
+ gmac0: ethernet@f7470000 {
+ compatible = "ezchip,nps-mgt-enet";
+ reg = <0xf7470000 0x1940>;
+ interrupts = <7>;
+ /* Filled in by U-Boot */
+ mac-address = [ 00 C0 00 F0 04 03 ];
+ };
+ };
+};
diff --git a/arch/arc/boot/dts/nsim_700.dts b/arch/arc/boot/dts/nsim_700.dts
index 105a0017023f..5d5e373e0ebc 100644
--- a/arch/arc/boot/dts/nsim_700.dts
+++ b/arch/arc/boot/dts/nsim_700.dts
@@ -14,7 +14,7 @@
clock-frequency = <80000000>; /* 80 MHZ */
#address-cells = <1>;
#size-cells = <1>;
- interrupt-parent = <&intc>;
+ interrupt-parent = <&core_intc>;
chosen {
bootargs = "earlycon=arc_uart,mmio32,0xc0fc1000,115200n8 console=ttyARC0,115200n8";
@@ -32,7 +32,13 @@
/* child and parent address space 1:1 mapped */
ranges;
- intc: interrupt-controller {
+ core_clk: core_clk {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <80000000>;
+ };
+
+ core_intc: interrupt-controller {
compatible = "snps,arc700-intc";
interrupt-controller;
#interrupt-cells = <1>;
diff --git a/arch/arc/boot/dts/nsim_hs.dts b/arch/arc/boot/dts/nsim_hs.dts
index f46633eeb06b..bf05fe5f67b0 100644
--- a/arch/arc/boot/dts/nsim_hs.dts
+++ b/arch/arc/boot/dts/nsim_hs.dts
@@ -7,7 +7,7 @@
*/
/dts-v1/;
-/include/ "skeleton.dtsi"
+/include/ "skeleton_hs.dtsi"
/ {
compatible = "snps,nsim_hs";
@@ -39,6 +39,12 @@
bus addr, parent bus addr, size */
ranges = <0x80000000 0x0 0x80000000 0x80000000>;
+ core_clk: core_clk {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <80000000>;
+ };
+
core_intc: core-interrupt-controller {
compatible = "snps,archs-intc";
interrupt-controller;
diff --git a/arch/arc/boot/dts/nsim_hs_idu.dts b/arch/arc/boot/dts/nsim_hs_idu.dts
index 46ab31975612..99eabe1a2bf6 100644
--- a/arch/arc/boot/dts/nsim_hs_idu.dts
+++ b/arch/arc/boot/dts/nsim_hs_idu.dts
@@ -7,7 +7,7 @@
*/
/dts-v1/;
-/include/ "skeleton.dtsi"
+/include/ "skeleton_hs_idu.dtsi"
/ {
compatible = "snps,nsim_hs";
@@ -29,6 +29,12 @@
/* child and parent address space 1:1 mapped */
ranges;
+ core_clk: core_clk {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <80000000>;
+ };
+
core_intc: core-interrupt-controller {
compatible = "snps,archs-intc";
interrupt-controller;
diff --git a/arch/arc/boot/dts/nsimosci.dts b/arch/arc/boot/dts/nsimosci.dts
index d94b4ce516ad..b5b060adce8a 100644
--- a/arch/arc/boot/dts/nsimosci.dts
+++ b/arch/arc/boot/dts/nsimosci.dts
@@ -14,7 +14,7 @@
clock-frequency = <20000000>; /* 20 MHZ */
#address-cells = <1>;
#size-cells = <1>;
- interrupt-parent = <&intc>;
+ interrupt-parent = <&core_intc>;
chosen {
/* this is for console on PGU */
@@ -35,7 +35,13 @@
/* child and parent address space 1:1 mapped */
ranges;
- intc: interrupt-controller {
+ core_clk: core_clk {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <20000000>;
+ };
+
+ core_intc: interrupt-controller {
compatible = "snps,arc700-intc";
interrupt-controller;
#interrupt-cells = <1>;
diff --git a/arch/arc/boot/dts/nsimosci_hs.dts b/arch/arc/boot/dts/nsimosci_hs.dts
index 034a3139c1e2..325e73090a18 100644
--- a/arch/arc/boot/dts/nsimosci_hs.dts
+++ b/arch/arc/boot/dts/nsimosci_hs.dts
@@ -7,7 +7,7 @@
*/
/dts-v1/;
-/include/ "skeleton.dtsi"
+/include/ "skeleton_hs.dtsi"
/ {
compatible = "snps,nsimosci_hs";
@@ -35,6 +35,12 @@
/* child and parent address space 1:1 mapped */
ranges;
+ core_clk: core_clk {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <20000000>;
+ };
+
core_intc: core-interrupt-controller {
compatible = "snps,archs-intc";
interrupt-controller;
diff --git a/arch/arc/boot/dts/nsimosci_hs_idu.dts b/arch/arc/boot/dts/nsimosci_hs_idu.dts
index 8a1297e02540..ee03d7126581 100644
--- a/arch/arc/boot/dts/nsimosci_hs_idu.dts
+++ b/arch/arc/boot/dts/nsimosci_hs_idu.dts
@@ -7,7 +7,7 @@
*/
/dts-v1/;
-/include/ "skeleton.dtsi"
+/include/ "skeleton_hs_idu.dtsi"
/ {
compatible = "snps,nsimosci_hs";
@@ -33,6 +33,12 @@
/* child and parent address space 1:1 mapped */
ranges;
+ core_clk: core_clk {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <5000000>;
+ };
+
core_intc: core-interrupt-controller {
compatible = "snps,archs-intc";
interrupt-controller;
diff --git a/arch/arc/boot/dts/skeleton.dtsi b/arch/arc/boot/dts/skeleton.dtsi
index 296d371a335c..3a10cc633e2b 100644
--- a/arch/arc/boot/dts/skeleton.dtsi
+++ b/arch/arc/boot/dts/skeleton.dtsi
@@ -30,6 +30,20 @@
};
};
+ /* TIMER0 with interrupt for clockevent */
+ timer0 {
+ compatible = "snps,arc-timer";
+ interrupts = <3>;
+ interrupt-parent = <&core_intc>;
+ clocks = <&core_clk>;
+ };
+
+ /* TIMER1 for free running clocksource */
+ timer1 {
+ compatible = "snps,arc-timer";
+ clocks = <&core_clk>;
+ };
+
memory {
device_type = "memory";
reg = <0x80000000 0x10000000>; /* 256M */
diff --git a/arch/arc/boot/dts/skeleton_hs.dtsi b/arch/arc/boot/dts/skeleton_hs.dtsi
new file mode 100644
index 000000000000..71fd308a9298
--- /dev/null
+++ b/arch/arc/boot/dts/skeleton_hs.dtsi
@@ -0,0 +1,52 @@
+/*
+ * Copyright (C) 2016 Synopsys, Inc. (www.synopsys.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+/ {
+ compatible = "snps,arc";
+ clock-frequency = <80000000>; /* 80 MHZ */
+ #address-cells = <1>;
+ #size-cells = <1>;
+ chosen { };
+ aliases { };
+
+ cpus {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ cpu@0 {
+ device_type = "cpu";
+ compatible = "snps,archs38";
+ reg = <0>;
+ };
+ };
+
+ /* TIMER0 with interrupt for clockevent */
+ timer0 {
+ compatible = "snps,arc-timer";
+ interrupts = <16>;
+ interrupt-parent = <&core_intc>;
+ clocks = <&core_clk>;
+ };
+
+ /* 64-bit Local RTC: preferred clocksource for UP */
+ rtc {
+ compatible = "snps,archs-timer-rtc";
+ clocks = <&core_clk>;
+ };
+
+ /* TIMER1 for free running clocksource: Fallback if rtc not found */
+ timer1 {
+ compatible = "snps,arc-timer";
+ clocks = <&core_clk>;
+ };
+
+ memory {
+ device_type = "memory";
+ reg = <0x80000000 0x10000000>; /* 256M */
+ };
+};
diff --git a/arch/arc/boot/dts/skeleton_hs_idu.dtsi b/arch/arc/boot/dts/skeleton_hs_idu.dtsi
new file mode 100644
index 000000000000..d1cb25a66989
--- /dev/null
+++ b/arch/arc/boot/dts/skeleton_hs_idu.dtsi
@@ -0,0 +1,46 @@
+/*
+ * Copyright (C) 2016 Synopsys, Inc. (www.synopsys.com)
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+/ {
+ compatible = "snps,arc";
+ clock-frequency = <80000000>; /* 80 MHZ */
+ #address-cells = <1>;
+ #size-cells = <1>;
+ chosen { };
+ aliases { };
+
+ cpus {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ cpu@0 {
+ device_type = "cpu";
+ compatible = "snps,archs38xN";
+ reg = <0>;
+ };
+ };
+
+ /* TIMER0 with interrupt for clockevent */
+ timer0 {
+ compatible = "snps,arc-timer";
+ interrupts = <16>;
+ interrupt-parent = <&core_intc>;
+ clocks = <&core_clk>;
+ };
+
+ /* 64-bit Global Free Running Counter */
+ gfrc {
+ compatible = "snps,archs-timer-gfrc";
+ clocks = <&core_clk>;
+ };
+
+ memory {
+ device_type = "memory";
+ reg = <0x80000000 0x10000000>; /* 256M */
+ };
+};
diff --git a/arch/arc/boot/dts/vdk_axc003.dtsi b/arch/arc/boot/dts/vdk_axc003.dtsi
index 84226bd48baf..ad4ee43bd2ac 100644
--- a/arch/arc/boot/dts/vdk_axc003.dtsi
+++ b/arch/arc/boot/dts/vdk_axc003.dtsi
@@ -10,6 +10,8 @@
* Device tree for AXC003 CPU card: HS38x UP configuration (VDK version)
*/
+/include/ "skeleton_hs.dtsi"
+
/ {
compatible = "snps,arc";
clock-frequency = <50000000>;
@@ -23,7 +25,13 @@
ranges = <0x00000000 0xf0000000 0x10000000>;
- cpu_intc: archs-intc@cpu {
+ core_clk: core_clk {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <50000000>;
+ };
+
+ core_intc: archs-intc@cpu {
compatible = "snps,archs-intc";
interrupt-controller;
#interrupt-cells = <1>;
@@ -33,7 +41,7 @@
compatible = "snps,dw-apb-uart";
reg = <0x5000 0x100>;
clock-frequency = <2403200>;
- interrupt-parent = <&cpu_intc>;
+ interrupt-parent = <&core_intc>;
interrupts = <19>;
baud = <115200>;
reg-shift = <2>;
@@ -47,7 +55,7 @@
compatible = "snps,dw-apb-ictl";
reg = < 0xe0012000 0x200 >;
interrupt-controller;
- interrupt-parent = <&cpu_intc>;
+ interrupt-parent = <&core_intc>;
interrupts = < 18 >;
};
diff --git a/arch/arc/boot/dts/vdk_axc003_idu.dtsi b/arch/arc/boot/dts/vdk_axc003_idu.dtsi
index 31f0fb5fc91d..a3cb6263c581 100644
--- a/arch/arc/boot/dts/vdk_axc003_idu.dtsi
+++ b/arch/arc/boot/dts/vdk_axc003_idu.dtsi
@@ -11,6 +11,8 @@
* HS38x2 (Dual Core) with IDU intc (VDK version)
*/
+/include/ "skeleton_hs_idu.dtsi"
+
/ {
compatible = "snps,arc";
clock-frequency = <50000000>;
@@ -24,7 +26,13 @@
ranges = <0x00000000 0xf0000000 0x10000000>;
- cpu_intc: archs-intc@cpu {
+ core_clk: core_clk {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <50000000>;
+ };
+
+ core_intc: archs-intc@cpu {
compatible = "snps,archs-intc";
interrupt-controller;
#interrupt-cells = <1>;
@@ -33,7 +41,7 @@
idu_intc: idu-interrupt-controller {
compatible = "snps,archs-idu-intc";
interrupt-controller;
- interrupt-parent = <&cpu_intc>;
+ interrupt-parent = <&core_intc>;
/*
* <hwirq distribution>
diff --git a/arch/arc/configs/nps_defconfig b/arch/arc/configs/nps_defconfig
new file mode 100644
index 000000000000..ede625c76216
--- /dev/null
+++ b/arch/arc/configs/nps_defconfig
@@ -0,0 +1,84 @@
+# CONFIG_LOCALVERSION_AUTO is not set
+# CONFIG_SWAP is not set
+CONFIG_SYSVIPC=y
+CONFIG_NO_HZ_IDLE=y
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_IKCONFIG=y
+CONFIG_IKCONFIG_PROC=y
+CONFIG_BLK_DEV_INITRD=y
+CONFIG_SYSCTL_SYSCALL=y
+# CONFIG_EPOLL is not set
+# CONFIG_SIGNALFD is not set
+# CONFIG_TIMERFD is not set
+# CONFIG_EVENTFD is not set
+# CONFIG_AIO is not set
+CONFIG_EMBEDDED=y
+CONFIG_PERF_EVENTS=y
+# CONFIG_COMPAT_BRK is not set
+CONFIG_KPROBES=y
+CONFIG_MODULES=y
+CONFIG_MODULE_FORCE_LOAD=y
+CONFIG_MODULE_UNLOAD=y
+# CONFIG_BLK_DEV_BSG is not set
+# CONFIG_IOSCHED_DEADLINE is not set
+# CONFIG_IOSCHED_CFQ is not set
+CONFIG_ARC_PLAT_EZNPS=y
+CONFIG_SMP=y
+CONFIG_NR_CPUS=4096
+CONFIG_ARC_CACHE_LINE_SHIFT=5
+# CONFIG_ARC_CACHE_PAGES is not set
+# CONFIG_ARC_HAS_LLSC is not set
+CONFIG_ARC_KVADDR_SIZE=402
+CONFIG_ARC_EMUL_UNALIGNED=y
+CONFIG_ARC_UBOOT_SUPPORT=y
+CONFIG_PREEMPT=y
+CONFIG_NET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+CONFIG_IP_PNP=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_LRO is not set
+# CONFIG_INET_DIAG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_PREVENT_FIRMWARE_BUILD is not set
+CONFIG_BLK_DEV_RAM=y
+CONFIG_BLK_DEV_RAM_COUNT=1
+CONFIG_BLK_DEV_RAM_SIZE=2048
+CONFIG_NETDEVICES=y
+CONFIG_NETCONSOLE=y
+# CONFIG_NET_VENDOR_BROADCOM is not set
+# CONFIG_NET_VENDOR_MICREL is not set
+# CONFIG_NET_VENDOR_STMICRO is not set
+# CONFIG_WLAN is not set
+# CONFIG_INPUT_MOUSEDEV is not set
+# CONFIG_INPUT_KEYBOARD is not set
+# CONFIG_INPUT_MOUSE is not set
+# CONFIG_SERIO is not set
+# CONFIG_LEGACY_PTYS is not set
+# CONFIG_DEVKMEM is not set
+CONFIG_SERIAL_8250=y
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_NR_UARTS=1
+CONFIG_SERIAL_8250_RUNTIME_UARTS=1
+CONFIG_SERIAL_8250_DW=y
+CONFIG_SERIAL_OF_PLATFORM=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_HWMON is not set
+# CONFIG_USB_SUPPORT is not set
+# CONFIG_DNOTIFY is not set
+CONFIG_PROC_KCORE=y
+CONFIG_TMPFS=y
+# CONFIG_MISC_FILESYSTEMS is not set
+CONFIG_NFS_FS=y
+CONFIG_ROOT_NFS=y
+CONFIG_DEBUG_INFO=y
+# CONFIG_ENABLE_WARN_DEPRECATED is not set
+# CONFIG_ENABLE_MUST_CHECK is not set
+CONFIG_MAGIC_SYSRQ=y
+CONFIG_DEBUG_MEMORY_INIT=y
+CONFIG_ENABLE_DEFAULT_TRACERS=y
diff --git a/arch/arc/include/asm/atomic.h b/arch/arc/include/asm/atomic.h
index 7730d302cadb..5f3dcbbc0cc9 100644
--- a/arch/arc/include/asm/atomic.h
+++ b/arch/arc/include/asm/atomic.h
@@ -17,6 +17,8 @@
#include <asm/barrier.h>
#include <asm/smp.h>
+#ifndef CONFIG_ARC_PLAT_EZNPS
+
#define atomic_read(v) READ_ONCE((v)->counter)
#ifdef CONFIG_ARC_HAS_LLSC
@@ -180,13 +182,88 @@ ATOMIC_OP(andnot, &= ~, bic)
ATOMIC_OP(or, |=, or)
ATOMIC_OP(xor, ^=, xor)
-#undef ATOMIC_OPS
-#undef ATOMIC_OP_RETURN
-#undef ATOMIC_OP
#undef SCOND_FAIL_RETRY_VAR_DEF
#undef SCOND_FAIL_RETRY_ASM
#undef SCOND_FAIL_RETRY_VARS
+#else /* CONFIG_ARC_PLAT_EZNPS */
+
+static inline int atomic_read(const atomic_t *v)
+{
+ int temp;
+
+ __asm__ __volatile__(
+ " ld.di %0, [%1]"
+ : "=r"(temp)
+ : "r"(&v->counter)
+ : "memory");
+ return temp;
+}
+
+static inline void atomic_set(atomic_t *v, int i)
+{
+ __asm__ __volatile__(
+ " st.di %0,[%1]"
+ :
+ : "r"(i), "r"(&v->counter)
+ : "memory");
+}
+
+#define ATOMIC_OP(op, c_op, asm_op) \
+static inline void atomic_##op(int i, atomic_t *v) \
+{ \
+ __asm__ __volatile__( \
+ " mov r2, %0\n" \
+ " mov r3, %1\n" \
+ " .word %2\n" \
+ : \
+ : "r"(i), "r"(&v->counter), "i"(asm_op) \
+ : "r2", "r3", "memory"); \
+} \
+
+#define ATOMIC_OP_RETURN(op, c_op, asm_op) \
+static inline int atomic_##op##_return(int i, atomic_t *v) \
+{ \
+ unsigned int temp = i; \
+ \
+ /* Explicit full memory barrier needed before/after */ \
+ smp_mb(); \
+ \
+ __asm__ __volatile__( \
+ " mov r2, %0\n" \
+ " mov r3, %1\n" \
+ " .word %2\n" \
+ " mov %0, r2" \
+ : "+r"(temp) \
+ : "r"(&v->counter), "i"(asm_op) \
+ : "r2", "r3", "memory"); \
+ \
+ smp_mb(); \
+ \
+ temp c_op i; \
+ \
+ return temp; \
+}
+
+#define ATOMIC_OPS(op, c_op, asm_op) \
+ ATOMIC_OP(op, c_op, asm_op) \
+ ATOMIC_OP_RETURN(op, c_op, asm_op)
+
+ATOMIC_OPS(add, +=, CTOP_INST_AADD_DI_R2_R2_R3)
+#define atomic_sub(i, v) atomic_add(-(i), (v))
+#define atomic_sub_return(i, v) atomic_add_return(-(i), (v))
+
+ATOMIC_OP(and, &=, CTOP_INST_AAND_DI_R2_R2_R3)
+#define atomic_andnot(mask, v) atomic_and(~(mask), (v))
+ATOMIC_OP(or, |=, CTOP_INST_AOR_DI_R2_R2_R3)
+ATOMIC_OP(xor, ^=, CTOP_INST_AXOR_DI_R2_R2_R3)
+
+#endif /* CONFIG_ARC_PLAT_EZNPS */
+
+#undef ATOMIC_OPS
+#undef ATOMIC_OP_RETURN
+#undef ATOMIC_OP
+
/**
* __atomic_add_unless - add unless the number is a given value
* @v: pointer of type atomic_t
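
For readability, here is roughly what the EZNPS ATOMIC_OP() macro above expands to for the add case, with the macro plumbing stripped (a sketch, not extra patch content). The custom opcode is emitted via .word and its operands are pinned in r2/r3, which is why those registers sit in the clobber list:

	static inline void atomic_add(int i, atomic_t *v)
	{
		__asm__ __volatile__(
		"	mov r2, %0\n"	/* operand 1 -> fixed register r2 */
		"	mov r3, %1\n"	/* operand 2 -> fixed register r3 */
		"	.word %2\n"	/* CTOP_INST_AADD_DI_R2_R2_R3 opcode */
		:
		: "r"(i), "r"(&v->counter), "i"(CTOP_INST_AADD_DI_R2_R2_R3)
		: "r2", "r3", "memory");
	}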
diff --git a/arch/arc/include/asm/barrier.h b/arch/arc/include/asm/barrier.h
index a7209983ee64..b1e327495c7d 100644
--- a/arch/arc/include/asm/barrier.h
+++ b/arch/arc/include/asm/barrier.h
@@ -30,9 +30,7 @@
#define rmb() asm volatile("dmb 1\n" : : : "memory")
#define wmb() asm volatile("dmb 2\n" : : : "memory")
-#endif
-
-#ifdef CONFIG_ISA_ARCOMPACT
+#elif !defined(CONFIG_ARC_PLAT_EZNPS) /* CONFIG_ISA_ARCOMPACT */
/*
* ARCompact based cores (ARC700) only have SYNC instruction which is super
@@ -41,6 +39,14 @@
*/
#define mb() asm volatile("sync\n" : : : "memory")
+
+#else /* CONFIG_ARC_PLAT_EZNPS */
+
+#include <plat/ctop.h>
+
+#define mb() asm volatile (".word %0" : : "i"(CTOP_INST_SCHD_RW) : "memory")
+#define rmb() asm volatile (".word %0" : : "i"(CTOP_INST_SCHD_RD) : "memory")
+
#endif
#include <asm-generic/barrier.h>
diff --git a/arch/arc/include/asm/bitops.h b/arch/arc/include/asm/bitops.h
index 0352fb8d21b9..8da87feec59a 100644
--- a/arch/arc/include/asm/bitops.h
+++ b/arch/arc/include/asm/bitops.h
@@ -22,7 +22,7 @@
#include <asm/smp.h>
#endif
-#if defined(CONFIG_ARC_HAS_LLSC)
+#ifdef CONFIG_ARC_HAS_LLSC
/*
* Hardware assisted Atomic-R-M-W
@@ -88,7 +88,7 @@ static inline int test_and_##op##_bit(unsigned long nr, volatile unsigned long *
return (old & (1 << nr)) != 0; \
}
-#else /* !CONFIG_ARC_HAS_LLSC */
+#elif !defined(CONFIG_ARC_PLAT_EZNPS)
/*
* Non hardware assisted Atomic-R-M-W
@@ -139,7 +139,55 @@ static inline int test_and_##op##_bit(unsigned long nr, volatile unsigned long *
return (old & (1UL << (nr & 0x1f))) != 0; \
}
-#endif /* CONFIG_ARC_HAS_LLSC */
+#else /* CONFIG_ARC_PLAT_EZNPS */
+
+#define BIT_OP(op, c_op, asm_op) \
+static inline void op##_bit(unsigned long nr, volatile unsigned long *m)\
+{ \
+ m += nr >> 5; \
+ \
+ nr = (1UL << (nr & 0x1f)); \
+ if (asm_op == CTOP_INST_AAND_DI_R2_R2_R3) \
+ nr = ~nr; \
+ \
+ __asm__ __volatile__( \
+ " mov r2, %0\n" \
+ " mov r3, %1\n" \
+ " .word %2\n" \
+ : \
+ : "r"(nr), "r"(m), "i"(asm_op) \
+ : "r2", "r3", "memory"); \
+}
+
+#define TEST_N_BIT_OP(op, c_op, asm_op) \
+static inline int test_and_##op##_bit(unsigned long nr, volatile unsigned long *m)\
+{ \
+ unsigned long old; \
+ \
+ m += nr >> 5; \
+ \
+ nr = old = (1UL << (nr & 0x1f)); \
+ if (asm_op == CTOP_INST_AAND_DI_R2_R2_R3) \
+ old = ~old; \
+ \
+ /* Explicit full memory barrier needed before/after */ \
+ smp_mb(); \
+ \
+ __asm__ __volatile__( \
+ " mov r2, %0\n" \
+ " mov r3, %1\n" \
+ " .word %2\n" \
+ " mov %0, r2" \
+ : "+r"(old) \
+ : "r"(m), "i"(asm_op) \
+ : "r2", "r3", "memory"); \
+ \
+ smp_mb(); \
+ \
+ return (old & nr) != 0; \
+}
+
+#endif /* CONFIG_ARC_PLAT_EZNPS */
/***************************************
* Non atomic variants
@@ -181,9 +229,15 @@ static inline int __test_and_##op##_bit(unsigned long nr, volatile unsigned long
/* __test_and_set_bit(), __test_and_clear_bit(), __test_and_change_bit() */\
__TEST_N_BIT_OP(op, c_op, asm_op)
+#ifndef CONFIG_ARC_PLAT_EZNPS
BIT_OPS(set, |, bset)
BIT_OPS(clear, & ~, bclr)
BIT_OPS(change, ^, bxor)
+#else
+BIT_OPS(set, |, CTOP_INST_AOR_DI_R2_R2_R3)
+BIT_OPS(clear, & ~, CTOP_INST_AAND_DI_R2_R2_R3)
+BIT_OPS(change, ^, CTOP_INST_AXOR_DI_R2_R2_R3)
+#endif
/*
* This routine doesn't need to be atomic.
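
All three bitops flavours (LLSC, the spinlock-emulated fallback, and the new EZNPS variant) must keep the same contract: test_and_set_bit() atomically sets the bit, returns its previous value and implies a full memory barrier. A small usage sketch of that contract, with illustrative names not taken from the patch:

	static unsigned long claim_map;		/* bit 0 = resource claimed */

	static bool try_claim_resource(void)
	{
		/* true only for the single caller that flips bit 0 from 0 to 1 */
		return !test_and_set_bit(0, &claim_map);
	}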
diff --git a/arch/arc/include/asm/clk.h b/arch/arc/include/asm/clk.h
deleted file mode 100644
index bf9d29f5bd53..000000000000
--- a/arch/arc/include/asm/clk.h
+++ /dev/null
@@ -1,22 +0,0 @@
-/*
- * Copyright (C) 2012 Synopsys, Inc. (www.synopsys.com)
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#ifndef _ASM_ARC_CLK_H
-#define _ASM_ARC_CLK_H
-
-/* Although we can't really hide core_freq, the accessor is still better way */
-extern unsigned long core_freq;
-
-static inline unsigned long arc_get_core_freq(void)
-{
- return core_freq;
-}
-
-extern int arc_set_core_freq(unsigned long);
-
-#endif
diff --git a/arch/arc/include/asm/cmpxchg.h b/arch/arc/include/asm/cmpxchg.h
index a444be67cd53..d819de1c5d10 100644
--- a/arch/arc/include/asm/cmpxchg.h
+++ b/arch/arc/include/asm/cmpxchg.h
@@ -44,7 +44,7 @@ __cmpxchg(volatile void *ptr, unsigned long expected, unsigned long new)
return prev;
}
-#else
+#elif !defined(CONFIG_ARC_PLAT_EZNPS)
static inline unsigned long
__cmpxchg(volatile void *ptr, unsigned long expected, unsigned long new)
@@ -64,23 +64,48 @@ __cmpxchg(volatile void *ptr, unsigned long expected, unsigned long new)
return prev;
}
+#else /* CONFIG_ARC_PLAT_EZNPS */
+
+static inline unsigned long
+__cmpxchg(volatile void *ptr, unsigned long expected, unsigned long new)
+{
+ /*
+ * Explicit full memory barrier needed before/after
+ */
+ smp_mb();
+
+ write_aux_reg(CTOP_AUX_GPA1, expected);
+
+ __asm__ __volatile__(
+ " mov r2, %0\n"
+ " mov r3, %1\n"
+ " .word %2\n"
+ " mov %0, r2"
+ : "+r"(new)
+ : "r"(ptr), "i"(CTOP_INST_EXC_DI_R2_R2_R3)
+ : "r2", "r3", "memory");
+
+ smp_mb();
+
+ return new;
+}
+
#endif /* CONFIG_ARC_HAS_LLSC */
#define cmpxchg(ptr, o, n) ((typeof(*(ptr)))__cmpxchg((ptr), \
(unsigned long)(o), (unsigned long)(n)))
/*
- * Since not supported natively, ARC cmpxchg() uses atomic_ops_lock (UP/SMP)
- * just to gaurantee semantics.
- * atomic_cmpxchg() needs to use the same locks as it's other atomic siblings
- * which also happens to be atomic_ops_lock.
- *
- * Thus despite semantically being different, implementation of atomic_cmpxchg()
- * is same as cmpxchg().
+ * atomic_cmpxchg is same as cmpxchg
+ * LLSC: only different in data-type, semantics are exactly same
+ * !LLSC: cmpxchg() has to use an external lock atomic_ops_lock to guarantee
+ * semantics, and this lock also happens to be used by atomic_*()
*/
#define atomic_cmpxchg(v, o, n) ((int)cmpxchg(&((v)->counter), (o), (n)))
+#ifndef CONFIG_ARC_PLAT_EZNPS
+
/*
* xchg (reg with memory) based on "Native atomic" EX insn
*/
@@ -143,6 +168,41 @@ static inline unsigned long __xchg(unsigned long val, volatile void *ptr,
#endif
+#else /* CONFIG_ARC_PLAT_EZNPS */
+
+static inline unsigned long __xchg(unsigned long val, volatile void *ptr,
+ int size)
+{
+ extern unsigned long __xchg_bad_pointer(void);
+
+ switch (size) {
+ case 4:
+ /*
+ * Explicit full memory barrier needed before/after
+ */
+ smp_mb();
+
+ __asm__ __volatile__(
+ " mov r2, %0\n"
+ " mov r3, %1\n"
+ " .word %2\n"
+ " mov %0, r2\n"
+ : "+r"(val)
+ : "r"(ptr), "i"(CTOP_INST_XEX_DI_R2_R2_R3)
+ : "r2", "r3", "memory");
+
+ smp_mb();
+
+ return val;
+ }
+ return __xchg_bad_pointer();
+}
+
+#define xchg(ptr, with) ((typeof(*(ptr)))__xchg((unsigned long)(with), (ptr), \
+ sizeof(*(ptr))))
+
+#endif /* CONFIG_ARC_PLAT_EZNPS */
+
/*
* "atomic" variant of xchg()
* REQ: It needs to follow the same serialization rules as other atomic_xxx()
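
Whichever backend is in use (LLSC, the lock-emulated path, or the EZNPS exchange instruction above), __cmpxchg() keeps the usual contract: it returns the value found at the location, which equals the expected value only if the new value was stored. A short usage sketch of that contract (not part of the patch):

	static inline void atomic_inc_sketch(atomic_t *v)
	{
		int old, cur = atomic_read(v);

		/* retry until no other CPU raced with the update */
		while ((old = atomic_cmpxchg(v, cur, cur + 1)) != cur)
			cur = old;
	}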
diff --git a/arch/arc/include/asm/entry-compact.h b/arch/arc/include/asm/entry-compact.h
index 1d8f57cd6057..e0e1faf03c50 100644
--- a/arch/arc/include/asm/entry-compact.h
+++ b/arch/arc/include/asm/entry-compact.h
@@ -36,6 +36,10 @@
#include <asm/irqflags-compact.h>
#include <asm/thread_info.h> /* For THREAD_SIZE */
+#ifdef CONFIG_ARC_PLAT_EZNPS
+#include <plat/ctop.h>
+#endif
+
/*--------------------------------------------------------------
* Switch to Kernel Mode stack if SP points to User Mode stack
*
@@ -296,11 +300,13 @@
bic \reg, sp, (THREAD_SIZE - 1)
.endm
+#ifndef CONFIG_ARC_PLAT_EZNPS
/* Get CPU-ID of this core */
.macro GET_CPU_ID reg
lr \reg, [identity]
lsr \reg, \reg, 8
bmsk \reg, \reg, 7
.endm
+#endif
#endif /* __ASM_ARC_ENTRY_COMPACT_H */
diff --git a/arch/arc/include/asm/hugepage.h b/arch/arc/include/asm/hugepage.h
index 7afe3356b770..317ff773e1ca 100644
--- a/arch/arc/include/asm/hugepage.h
+++ b/arch/arc/include/asm/hugepage.h
@@ -61,8 +61,6 @@ static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
extern void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
pmd_t *pmd);
-#define has_transparent_hugepage() 1
-
/* Generic variants assume pgtable_t is struct page *, hence need for these */
#define __HAVE_ARCH_PGTABLE_DEPOSIT
extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
diff --git a/arch/arc/include/asm/irq.h b/arch/arc/include/asm/irq.h
index 49014f0ef36d..c0fa0d2de400 100644
--- a/arch/arc/include/asm/irq.h
+++ b/arch/arc/include/asm/irq.h
@@ -13,21 +13,14 @@
#define NR_IRQS 128 /* allow some CPU external IRQ handling */
/* Platform Independent IRQs */
-#ifdef CONFIG_ISA_ARCOMPACT
-#define TIMER0_IRQ 3
-#define TIMER1_IRQ 4
-#else
-#define TIMER0_IRQ 16
-#define TIMER1_IRQ 17
+#ifdef CONFIG_ISA_ARCV2
+#define IPI_IRQ 19
+#define SOFTIRQ_IRQ 21
#endif
#include <linux/interrupt.h>
#include <asm-generic/irq.h>
extern void arc_init_IRQ(void);
-void arc_local_timer_setup(void);
-void arc_request_percpu_irq(int irq, int cpu,
- irqreturn_t (*isr)(int irq, void *dev),
- const char *irq_nm, void *percpu_dev);
#endif
diff --git a/arch/arc/include/asm/page.h b/arch/arc/include/asm/page.h
index 0d53854884d0..296c3426a6ad 100644
--- a/arch/arc/include/asm/page.h
+++ b/arch/arc/include/asm/page.h
@@ -31,7 +31,11 @@ void clear_user_page(void *to, unsigned long u_vaddr, struct page *page);
* These are used to make use of C type-checking..
*/
typedef struct {
+#ifdef CONFIG_ARC_HAS_PAE40
+ unsigned long long pte;
+#else
unsigned long pte;
+#endif
} pte_t;
typedef struct {
unsigned long pgd;
diff --git a/arch/arc/include/asm/pgtable.h b/arch/arc/include/asm/pgtable.h
index 10d4b8b8e545..034bbdc0ff61 100644
--- a/arch/arc/include/asm/pgtable.h
+++ b/arch/arc/include/asm/pgtable.h
@@ -217,7 +217,7 @@
#define BITS_FOR_PTE (PGDIR_SHIFT - PAGE_SHIFT)
#define BITS_FOR_PGD (32 - PGDIR_SHIFT)
-#define PGDIR_SIZE (1UL << PGDIR_SHIFT) /* vaddr span, not PDG sz */
+#define PGDIR_SIZE	_BITUL(PGDIR_SHIFT)	/* vaddr span, not PGD sz */
#define PGDIR_MASK (~(PGDIR_SIZE-1))
#define PTRS_PER_PTE _BITUL(BITS_FOR_PTE)
diff --git a/arch/arc/include/asm/processor.h b/arch/arc/include/asm/processor.h
index 1d694c1ef6d6..f9048994b22f 100644
--- a/arch/arc/include/asm/processor.h
+++ b/arch/arc/include/asm/processor.h
@@ -57,9 +57,19 @@ struct task_struct;
* A lot of busy-wait loops in SMP are based off of non-volatile data otherwise
* get optimised away by gcc
*/
-#define cpu_relax() __asm__ __volatile__ ("" : : : "memory")
+#ifndef CONFIG_EZNPS_MTM_EXT
-#define cpu_relax_lowlatency() cpu_relax()
+#define cpu_relax() barrier()
+#define cpu_relax_lowlatency() cpu_relax()
+
+#else
+
+#define cpu_relax() \
+ __asm__ __volatile__ (".word %0" : : "i"(CTOP_INST_SCHD_RW) : "memory")
+
+#define cpu_relax_lowlatency() barrier()
+
+#endif
#define copy_segments(tsk, mm) do { } while (0)
#define release_segments(mm) do { } while (0)
@@ -97,7 +107,7 @@ extern unsigned int get_wchan(struct task_struct *p);
#endif /* !__ASSEMBLY__ */
/*
- * System Memory Map on ARC
+ * Default System Memory Map on ARC
*
* ---------------------------- (lower 2G, Translated) -------------------------
* 0x0000_0000 0x5FFF_FFFF (user vaddr: TASK_SIZE)
@@ -109,20 +119,37 @@ extern unsigned int get_wchan(struct task_struct *p);
* 0xC000_0000 0xFFFF_FFFF (peripheral uncached space)
* -----------------------------------------------------------------------------
*/
-#define VMALLOC_START 0x70000000
-/*
- * 1 PGDIR_SIZE each for fixmap/pkmap, 2 PGDIR_SIZE gutter
- * See asm/highmem.h for details
- */
-#define VMALLOC_SIZE (PAGE_OFFSET - VMALLOC_START - PGDIR_SIZE * 4)
-#define VMALLOC_END (VMALLOC_START + VMALLOC_SIZE)
+#define TASK_SIZE 0x60000000
-#define USER_KERNEL_GUTTER 0x10000000
+#define VMALLOC_START (PAGE_OFFSET - (CONFIG_ARC_KVADDR_SIZE << 20))
-#define TASK_SIZE (VMALLOC_START - USER_KERNEL_GUTTER)
+/* 1 PGDIR_SIZE each for fixmap/pkmap, 2 PGDIR_SIZE gutter (see asm/highmem.h) */
+#define VMALLOC_SIZE ((CONFIG_ARC_KVADDR_SIZE << 20) - PGDIR_SIZE * 4)
+#define VMALLOC_END (VMALLOC_START + VMALLOC_SIZE)
+
+#define USER_KERNEL_GUTTER (VMALLOC_START - TASK_SIZE)
+
+#ifdef CONFIG_ARC_PLAT_EZNPS
+/* The NPS architecture defines a special 129M window in the user address
+ * space for special memory areas; accesses to this window bypass the TLB.
+ * Instead the MMU directs the access to:
+ * 0x57f00000:0x57ffffff -- 1M of closely coupled memory (aka CMEM)
+ * 0x58000000:0x5fffffff -- 16 huge pages, 8M each, with fixed map (aka FMTs)
+ *
+ * CMEM is the fastest memory available and its size is 16K.
+ * FMT is used to map either internal or external memory:
+ * internal memory is the second fastest and its size is 16M;
+ * external memory is the biggest (16G) and also the slowest.
+ *
+ * STACK_TOP needs to be PMD-aligned (21 bits), which is why we use 0x57e00000.
+ */
+#define STACK_TOP 0x57e00000
+#else
#define STACK_TOP TASK_SIZE
+#endif
+
#define STACK_TOP_MAX STACK_TOP
/* This decides where the kernel will search for a free chunk of vm
diff --git a/arch/arc/include/asm/setup.h b/arch/arc/include/asm/setup.h
index 307846691be6..48b37c693db3 100644
--- a/arch/arc/include/asm/setup.h
+++ b/arch/arc/include/asm/setup.h
@@ -12,7 +12,11 @@
#include <linux/types.h>
#include <uapi/asm/setup.h>
+#ifdef CONFIG_ARC_PLAT_EZNPS
+#define COMMAND_LINE_SIZE 2048
+#else
#define COMMAND_LINE_SIZE 256
+#endif
/*
* Data structure to map a ID to string
diff --git a/arch/arc/include/asm/spinlock.h b/arch/arc/include/asm/spinlock.h
index db8c59d1eaeb..800e7c430ca5 100644
--- a/arch/arc/include/asm/spinlock.h
+++ b/arch/arc/include/asm/spinlock.h
@@ -610,7 +610,9 @@ static inline void arch_spin_unlock(arch_spinlock_t *lock)
static inline int arch_read_trylock(arch_rwlock_t *rw)
{
int ret = 0;
+ unsigned long flags;
+ local_irq_save(flags);
arch_spin_lock(&(rw->lock_mutex));
/*
@@ -623,6 +625,7 @@ static inline int arch_read_trylock(arch_rwlock_t *rw)
}
arch_spin_unlock(&(rw->lock_mutex));
+ local_irq_restore(flags);
smp_mb();
return ret;
@@ -632,7 +635,9 @@ static inline int arch_read_trylock(arch_rwlock_t *rw)
static inline int arch_write_trylock(arch_rwlock_t *rw)
{
int ret = 0;
+ unsigned long flags;
+ local_irq_save(flags);
arch_spin_lock(&(rw->lock_mutex));
/*
@@ -646,6 +651,7 @@ static inline int arch_write_trylock(arch_rwlock_t *rw)
ret = 1;
}
arch_spin_unlock(&(rw->lock_mutex));
+ local_irq_restore(flags);
return ret;
}
@@ -664,16 +670,24 @@ static inline void arch_write_lock(arch_rwlock_t *rw)
static inline void arch_read_unlock(arch_rwlock_t *rw)
{
+ unsigned long flags;
+
+ local_irq_save(flags);
arch_spin_lock(&(rw->lock_mutex));
rw->counter++;
arch_spin_unlock(&(rw->lock_mutex));
+ local_irq_restore(flags);
}
static inline void arch_write_unlock(arch_rwlock_t *rw)
{
+ unsigned long flags;
+
+ local_irq_save(flags);
arch_spin_lock(&(rw->lock_mutex));
rw->counter = __ARCH_RW_LOCK_UNLOCKED__;
arch_spin_unlock(&(rw->lock_mutex));
+ local_irq_restore(flags);
}
#endif
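
The flags save/restore added above closes a same-CPU deadlock in the spinlock-emulated rwlocks: if an interrupt handler tries to take the same rwlock while the interrupted context already holds rw->lock_mutex, it spins on that mutex forever. Condensed sketch of the IRQ-safe shape now shared by these helpers:

	static inline void emulated_rwlock_op(arch_rwlock_t *rw)
	{
		unsigned long flags;

		local_irq_save(flags);	/* no same-CPU re-entry from an IRQ */
		arch_spin_lock(&rw->lock_mutex);
		/* ... inspect/update rw->counter ... */
		arch_spin_unlock(&rw->lock_mutex);
		local_irq_restore(flags);
	}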
diff --git a/arch/arc/include/uapi/asm/byteorder.h b/arch/arc/include/uapi/asm/byteorder.h
index 9da71d415c38..ea5ca444c7e3 100644
--- a/arch/arc/include/uapi/asm/byteorder.h
+++ b/arch/arc/include/uapi/asm/byteorder.h
@@ -9,7 +9,7 @@
#ifndef __ASM_ARC_BYTEORDER_H
#define __ASM_ARC_BYTEORDER_H
-#ifdef CONFIG_CPU_BIG_ENDIAN
+#ifdef __BIG_ENDIAN__
#include <linux/byteorder/big_endian.h>
#else
#include <linux/byteorder/little_endian.h>
diff --git a/arch/arc/include/uapi/asm/unistd.h b/arch/arc/include/uapi/asm/unistd.h
index 39e58d1cdf90..41fa2ec9e02c 100644
--- a/arch/arc/include/uapi/asm/unistd.h
+++ b/arch/arc/include/uapi/asm/unistd.h
@@ -15,6 +15,7 @@
#if !defined(_UAPI_ASM_ARC_UNISTD_H) || defined(__SYSCALL)
#define _UAPI_ASM_ARC_UNISTD_H
+#define __ARCH_WANT_RENAMEAT
#define __ARCH_WANT_SYS_EXECVE
#define __ARCH_WANT_SYS_CLONE
#define __ARCH_WANT_SYS_VFORK
diff --git a/arch/arc/kernel/Makefile b/arch/arc/kernel/Makefile
index 1bc2036b19d7..cfcdedf52ff8 100644
--- a/arch/arc/kernel/Makefile
+++ b/arch/arc/kernel/Makefile
@@ -9,7 +9,7 @@
CFLAGS_ptrace.o += -DUTS_MACHINE='"$(UTS_MACHINE)"'
obj-y := arcksyms.o setup.o irq.o time.o reset.o ptrace.o process.o devtree.o
-obj-y += signal.o traps.o sys.o troubleshoot.o stacktrace.o disasm.o clk.o
+obj-y += signal.o traps.o sys.o troubleshoot.o stacktrace.o disasm.o
obj-$(CONFIG_ISA_ARCOMPACT) += entry-compact.o intc-compact.o
obj-$(CONFIG_ISA_ARCV2) += entry-arcv2.o intc-arcv2.o
obj-$(CONFIG_PCI) += pcibios.o
diff --git a/arch/arc/kernel/clk.c b/arch/arc/kernel/clk.c
deleted file mode 100644
index 10c7b0b5a079..000000000000
--- a/arch/arc/kernel/clk.c
+++ /dev/null
@@ -1,21 +0,0 @@
-/*
- * Copyright (C) 2012 Synopsys, Inc. (www.synopsys.com)
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#include <asm/clk.h>
-
-unsigned long core_freq = 80000000;
-
-/*
- * As of now we default to device-tree provided clock
- * In future we can determine this in early boot
- */
-int arc_set_core_freq(unsigned long freq)
-{
- core_freq = freq;
- return 0;
-}
diff --git a/arch/arc/kernel/ctx_sw.c b/arch/arc/kernel/ctx_sw.c
index 5d446df2c413..6f4cb0dab1b9 100644
--- a/arch/arc/kernel/ctx_sw.c
+++ b/arch/arc/kernel/ctx_sw.c
@@ -16,6 +16,9 @@
#include <asm/asm-offsets.h>
#include <linux/sched.h>
+#ifdef CONFIG_ARC_PLAT_EZNPS
+#include <plat/ctop.h>
+#endif
#define KSP_WORD_OFF ((TASK_THREAD + THREAD_KSP) / 4)
@@ -67,9 +70,16 @@ __switch_to(struct task_struct *prev_task, struct task_struct *next_task)
#ifndef CONFIG_SMP
"st %2, [@_current_task] \n\t"
#else
+#ifdef CONFIG_ARC_PLAT_EZNPS
+ "lr r24, [%4] \n\t"
+#ifndef CONFIG_EZNPS_MTM_EXT
+ "lsr r24, r24, 4 \n\t"
+#endif
+#else
"lr r24, [identity] \n\t"
"lsr r24, r24, 8 \n\t"
"bmsk r24, r24, 7 \n\t"
+#endif
"add2 r24, @_current_task, r24 \n\t"
"st %2, [r24] \n\t"
#endif
@@ -107,6 +117,9 @@ __switch_to(struct task_struct *prev_task, struct task_struct *next_task)
: "=r"(tmp)
: "n"(KSP_WORD_OFF), "r"(next), "r"(prev)
+#ifdef CONFIG_ARC_PLAT_EZNPS
+ , "i"(CTOP_AUX_LOGIC_GLOBAL_ID)
+#endif
: "blink"
);
diff --git a/arch/arc/kernel/devtree.c b/arch/arc/kernel/devtree.c
index 7e844fd8213f..f1e07c2344f8 100644
--- a/arch/arc/kernel/devtree.c
+++ b/arch/arc/kernel/devtree.c
@@ -14,7 +14,6 @@
#include <linux/memblock.h>
#include <linux/of.h>
#include <linux/of_fdt.h>
-#include <asm/clk.h>
#include <asm/mach_desc.h>
#ifdef CONFIG_SERIAL_EARLYCON
@@ -28,14 +27,12 @@ unsigned int __init arc_early_base_baud(void)
static void __init arc_set_early_base_baud(unsigned long dt_root)
{
- unsigned int core_clk = arc_get_core_freq();
-
if (of_flat_dt_is_compatible(dt_root, "abilis,arc-tb10x"))
- arc_base_baud = core_clk/3;
+ arc_base_baud = 166666666; /* Fixed 166.6MHz clk (TB10x) */
else if (of_flat_dt_is_compatible(dt_root, "snps,arc-sdp"))
arc_base_baud = 33333333; /* Fixed 33MHz clk (AXS10x) */
else
- arc_base_baud = core_clk;
+ arc_base_baud = 50000000; /* Fixed default 50MHz */
}
#else
#define arc_set_early_base_baud(dt_root)
@@ -65,8 +62,6 @@ const struct machine_desc * __init setup_machine_fdt(void *dt)
{
const struct machine_desc *mdesc;
unsigned long dt_root;
- const void *clk;
- int len;
if (!early_init_dt_scan(dt))
return NULL;
@@ -76,10 +71,6 @@ const struct machine_desc * __init setup_machine_fdt(void *dt)
machine_halt();
dt_root = of_get_flat_dt_root();
- clk = of_get_flat_dt_prop(dt_root, "clock-frequency", &len);
- if (clk)
- arc_set_core_freq(of_read_ulong(clk, len/4));
-
arc_set_early_base_baud(dt_root);
return mdesc;
diff --git a/arch/arc/kernel/intc-arcv2.c b/arch/arc/kernel/intc-arcv2.c
index 942526322ae7..6c24faf48b16 100644
--- a/arch/arc/kernel/intc-arcv2.c
+++ b/arch/arc/kernel/intc-arcv2.c
@@ -137,23 +137,30 @@ static const struct irq_domain_ops arcv2_irq_ops = {
.map = arcv2_irq_map,
};
-static struct irq_domain *root_domain;
static int __init
init_onchip_IRQ(struct device_node *intc, struct device_node *parent)
{
+ struct irq_domain *root_domain;
+
if (parent)
panic("DeviceTree incore intc not a root irq controller\n");
- root_domain = irq_domain_add_legacy(intc, NR_CPU_IRQS, 0, 0,
- &arcv2_irq_ops, NULL);
-
+ root_domain = irq_domain_add_linear(intc, NR_CPU_IRQS, &arcv2_irq_ops, NULL);
if (!root_domain)
panic("root irq domain not avail\n");
- /* with this we don't need to export root_domain */
+ /*
+ * Needed for primary domain lookup to succeed
+ * This is a primary irqchip, and can never have a parent
+ */
irq_set_default_host(root_domain);
+#ifdef CONFIG_SMP
+ irq_create_mapping(root_domain, IPI_IRQ);
+#endif
+ irq_create_mapping(root_domain, SOFTIRQ_IRQ);
+
return 0;
}
diff --git a/arch/arc/kernel/intc-compact.c b/arch/arc/kernel/intc-compact.c
index 224d1c3aa9c4..c5cceca36118 100644
--- a/arch/arc/kernel/intc-compact.c
+++ b/arch/arc/kernel/intc-compact.c
@@ -14,6 +14,8 @@
#include <linux/irqchip.h>
#include <asm/irq.h>
+#define TIMER0_IRQ 3 /* Fixed by ISA */
+
/*
* Early Hardware specific Interrupt setup
* -Platform independent, needed for each CPU (not foldable into init_IRQ)
@@ -79,8 +81,9 @@ static struct irq_chip onchip_intc = {
static int arc_intc_domain_map(struct irq_domain *d, unsigned int irq,
irq_hw_number_t hw)
{
- switch (irq) {
+ switch (hw) {
case TIMER0_IRQ:
+ irq_set_percpu_devid(irq);
irq_set_chip_and_handler(irq, &onchip_intc, handle_percpu_irq);
break;
default:
@@ -94,21 +97,23 @@ static const struct irq_domain_ops arc_intc_domain_ops = {
.map = arc_intc_domain_map,
};
-static struct irq_domain *root_domain;
-
static int __init
init_onchip_IRQ(struct device_node *intc, struct device_node *parent)
{
+ struct irq_domain *root_domain;
+
if (parent)
panic("DeviceTree incore intc not a root irq controller\n");
- root_domain = irq_domain_add_legacy(intc, NR_CPU_IRQS, 0, 0,
+ root_domain = irq_domain_add_linear(intc, NR_CPU_IRQS,
&arc_intc_domain_ops, NULL);
-
if (!root_domain)
panic("root irq domain not avail\n");
- /* with this we don't need to export root_domain */
+ /*
+ * Needed for primary domain lookup to succeed
+ * This is a primary irqchip, and can never have a parent
+ */
irq_set_default_host(root_domain);
return 0;
diff --git a/arch/arc/kernel/irq.c b/arch/arc/kernel/irq.c
index ba17f85285cf..538b36afe89e 100644
--- a/arch/arc/kernel/irq.c
+++ b/arch/arc/kernel/irq.c
@@ -41,53 +41,7 @@ void __init init_IRQ(void)
* "C" Entry point for any ARC ISR, called from low level vector handler
* @irq is the vector number read from ICAUSE reg of on-chip intc
*/
-void arch_do_IRQ(unsigned int irq, struct pt_regs *regs)
+void arch_do_IRQ(unsigned int hwirq, struct pt_regs *regs)
{
- struct pt_regs *old_regs = set_irq_regs(regs);
-
- irq_enter();
- generic_handle_irq(irq);
- irq_exit();
- set_irq_regs(old_regs);
-}
-
-/*
- * API called for requesting percpu interrupts - called by each CPU
- * - For boot CPU, actually request the IRQ with genirq core + enables
- * - For subsequent callers only enable called locally
- *
- * Relies on being called by boot cpu first (i.e. request called ahead) of
- * any enable as expected by genirq. Hence Suitable only for TIMER, IPI
- * which are guaranteed to be setup on boot core first.
- * Late probed peripherals such as perf can't use this as there no guarantee
- * of being called on boot CPU first.
- */
-
-void arc_request_percpu_irq(int irq, int cpu,
- irqreturn_t (*isr)(int irq, void *dev),
- const char *irq_nm,
- void *percpu_dev)
-{
- /* Boot cpu calls request, all call enable */
- if (!cpu) {
- int rc;
-
-#ifdef CONFIG_ISA_ARCOMPACT
- /*
- * A subsequent request_percpu_irq() fails if percpu_devid is
- * not set. That in turns sets NOAUTOEN, meaning each core needs
- * to call enable_percpu_irq()
- *
- * For ARCv2, this is done in irq map function since we know
- * which irqs are strictly per cpu
- */
- irq_set_percpu_devid(irq);
-#endif
-
- rc = request_percpu_irq(irq, isr, irq_nm, percpu_dev);
- if (rc)
- panic("Percpu IRQ request failed for %d\n", irq);
- }
-
- enable_percpu_irq(irq, 0);
+ handle_domain_irq(NULL, hwirq, regs);
}
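
handle_domain_irq(NULL, hwirq, regs) takes over everything the removed open-coded handler did, plus the hwirq-to-Linux-irq translation; passing a NULL domain makes the lookup fall back to the default domain installed via irq_set_default_host() in the intc init above. Roughly, as a sketch of the generic helper's behaviour rather than code from this patch:

	static void arch_do_IRQ_sketch(unsigned int hwirq, struct pt_regs *regs)
	{
		struct pt_regs *old_regs = set_irq_regs(regs);

		irq_enter();
		/* NULL domain => irq_find_mapping() uses the default domain */
		generic_handle_irq(irq_find_mapping(NULL, hwirq));
		irq_exit();
		set_irq_regs(old_regs);
	}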
diff --git a/arch/arc/kernel/mcip.c b/arch/arc/kernel/mcip.c
index c41c364b926c..72f9179b1a24 100644
--- a/arch/arc/kernel/mcip.c
+++ b/arch/arc/kernel/mcip.c
@@ -15,9 +15,6 @@
#include <asm/mcip.h>
#include <asm/setup.h>
-#define IPI_IRQ 19
-#define SOFTIRQ_IRQ 21
-
static char smp_cpuinfo_buf[128];
static int idu_detected;
@@ -116,15 +113,13 @@ static void mcip_probe_n_setup(void)
IS_AVAIL1(mp.dbg, "DEBUG "),
IS_AVAIL1(mp.gfrc, "GFRC"));
+ cpuinfo_arc700[0].extn.gfrc = mp.gfrc;
idu_detected = mp.idu;
if (mp.dbg) {
__mcip_cmd_data(CMD_DEBUG_SET_SELECT, 0, 0xf);
__mcip_cmd_data(CMD_DEBUG_SET_MASK, 0xf, 0xf);
}
-
- if (IS_ENABLED(CONFIG_ARC_HAS_GFRC) && !mp.gfrc)
- panic("kernel trying to use non-existent GFRC\n");
}
struct plat_smp_ops plat_smp_ops = {
diff --git a/arch/arc/kernel/perf_event.c b/arch/arc/kernel/perf_event.c
index 8b134cfe5e1f..6fd48021324b 100644
--- a/arch/arc/kernel/perf_event.c
+++ b/arch/arc/kernel/perf_event.c
@@ -48,7 +48,7 @@ struct arc_callchain_trace {
static int callchain_trace(unsigned int addr, void *data)
{
struct arc_callchain_trace *ctrl = data;
- struct perf_callchain_entry *entry = ctrl->perf_stuff;
+ struct perf_callchain_entry_ctx *entry = ctrl->perf_stuff;
perf_callchain_store(entry, addr);
if (ctrl->depth++ < 3)
@@ -58,7 +58,7 @@ static int callchain_trace(unsigned int addr, void *data)
}
void
-perf_callchain_kernel(struct perf_callchain_entry *entry, struct pt_regs *regs)
+perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
{
struct arc_callchain_trace ctrl = {
.depth = 0,
@@ -69,7 +69,7 @@ perf_callchain_kernel(struct perf_callchain_entry *entry, struct pt_regs *regs)
}
void
-perf_callchain_user(struct perf_callchain_entry *entry, struct pt_regs *regs)
+perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
{
/*
* User stack can't be unwound trivially with kernel dwarf unwinder
diff --git a/arch/arc/kernel/process.c b/arch/arc/kernel/process.c
index a3f750e76b68..b5db9e7fd649 100644
--- a/arch/arc/kernel/process.c
+++ b/arch/arc/kernel/process.c
@@ -183,13 +183,6 @@ void flush_thread(void)
{
}
-/*
- * Free any architecture-specific thread data structures, etc.
- */
-void exit_thread(void)
-{
-}
-
int dump_fpu(struct pt_regs *regs, elf_fpregset_t *fpu)
{
return 0;
diff --git a/arch/arc/kernel/setup.c b/arch/arc/kernel/setup.c
index 151acf0c9383..f63b8bfefb0c 100644
--- a/arch/arc/kernel/setup.c
+++ b/arch/arc/kernel/setup.c
@@ -13,7 +13,6 @@
#include <linux/console.h>
#include <linux/module.h>
#include <linux/cpu.h>
-#include <linux/clk-provider.h>
#include <linux/of_fdt.h>
#include <linux/of_platform.h>
#include <linux/cache.h>
@@ -24,7 +23,6 @@
#include <asm/page.h>
#include <asm/irq.h>
#include <asm/unwind.h>
-#include <asm/clk.h>
#include <asm/mach_desc.h>
#include <asm/smp.h>
@@ -220,10 +218,6 @@ static char *arc_cpu_mumbojumbo(int cpu_id, char *buf, int len)
if (tbl->info.id == 0)
n += scnprintf(buf + n, len - n, "UNKNOWN ARC Processor\n");
- n += scnprintf(buf + n, len - n, "CPU speed\t: %u.%02u Mhz\n",
- (unsigned int)(arc_get_core_freq() / 1000000),
- (unsigned int)(arc_get_core_freq() / 10000) % 100);
-
n += scnprintf(buf + n, len - n, "Timers\t\t: %s%s%s%s\nISA Extn\t: ",
IS_AVAIL1(cpu->extn.timer0, "Timer0 "),
IS_AVAIL1(cpu->extn.timer1, "Timer1 "),
@@ -314,9 +308,6 @@ static void arc_chk_core_config(void)
if (!cpu->extn.timer1)
panic("Timer1 is not present!\n");
- if (IS_ENABLED(CONFIG_ARC_HAS_RTC) && !cpu->extn.rtc)
- panic("RTC is not present\n");
-
#ifdef CONFIG_ARC_HAS_DCCM
/*
* DCCM can be arbit placed in hardware.
@@ -444,7 +435,6 @@ void __init setup_arch(char **cmdline_p)
static int __init customize_machine(void)
{
- of_clk_init(NULL);
/*
* Traverses flattened DeviceTree - registering platform devices
* (if any) complete with their resources
@@ -477,6 +467,8 @@ static int show_cpuinfo(struct seq_file *m, void *v)
{
char *str;
int cpu_id = ptr_to_cpu(v);
+ struct device_node *core_clk = of_find_node_by_name(NULL, "core_clk");
+ u32 freq = 0;
if (!cpu_online(cpu_id)) {
seq_printf(m, "processor [%d]\t: Offline\n", cpu_id);
@@ -489,6 +481,11 @@ static int show_cpuinfo(struct seq_file *m, void *v)
seq_printf(m, arc_cpu_mumbojumbo(cpu_id, str, PAGE_SIZE));
+ of_property_read_u32(core_clk, "clock-frequency", &freq);
+ if (freq)
+ seq_printf(m, "CPU speed\t: %u.%02u Mhz\n",
+ freq / 1000000, (freq / 10000) % 100);
+
seq_printf(m, "Bogo MIPS\t: %lu.%02lu\n",
loops_per_jiffy / (500000 / HZ),
(loops_per_jiffy / (5000 / HZ)) % 100);
diff --git a/arch/arc/kernel/smp.c b/arch/arc/kernel/smp.c
index 4cb3add77c75..f183cc648851 100644
--- a/arch/arc/kernel/smp.c
+++ b/arch/arc/kernel/smp.c
@@ -126,11 +126,6 @@ void start_kernel_secondary(void)
current->active_mm = mm;
cpumask_set_cpu(cpu, mm_cpumask(mm));
- notify_cpu_starting(cpu);
- set_cpu_online(cpu, true);
-
- pr_info("## CPU%u LIVE ##: Executing Code...\n", cpu);
-
/* Some SMP H/w setup - for each cpu */
if (plat_smp_ops.init_per_cpu)
plat_smp_ops.init_per_cpu(cpu);
@@ -138,7 +133,10 @@ void start_kernel_secondary(void)
if (machine_desc->init_per_cpu)
machine_desc->init_per_cpu(cpu);
- arc_local_timer_setup();
+ notify_cpu_starting(cpu);
+ set_cpu_online(cpu, true);
+
+ pr_info("## CPU%u LIVE ##: Executing Code...\n", cpu);
local_irq_enable();
preempt_disable();
@@ -346,6 +344,10 @@ irqreturn_t do_IPI(int irq, void *dev_id)
/*
* API called by platform code to hookup arch-common ISR to their IPI IRQ
+ *
+ * Note: If IPI is provided by platform (vs. say ARC MCIP), their intc setup/map
+ * function needs to call irq_set_percpu_devid() for IPI IRQ, otherwise
+ * request_percpu_irq() below will fail
*/
static DEFINE_PER_CPU(int, ipi_dev);
@@ -353,7 +355,16 @@ int smp_ipi_irq_setup(int cpu, int irq)
{
int *dev = per_cpu_ptr(&ipi_dev, cpu);
- arc_request_percpu_irq(irq, cpu, do_IPI, "IPI Interrupt", dev);
+ /* Boot cpu calls request, all call enable */
+ if (!cpu) {
+ int rc;
+
+ rc = request_percpu_irq(irq, do_IPI, "IPI Interrupt", dev);
+ if (rc)
+ panic("Percpu IRQ request failed for %d\n", irq);
+ }
+
+ enable_percpu_irq(irq, 0);
return 0;
}
diff --git a/arch/arc/kernel/time.c b/arch/arc/kernel/time.c
index 7d9a736fc7e5..4549ab255dd1 100644
--- a/arch/arc/kernel/time.c
+++ b/arch/arc/kernel/time.c
@@ -29,21 +29,16 @@
* which however is currently broken
*/
-#include <linux/spinlock.h>
#include <linux/interrupt.h>
-#include <linux/module.h>
-#include <linux/sched.h>
-#include <linux/kernel.h>
-#include <linux/time.h>
-#include <linux/init.h>
-#include <linux/timex.h>
-#include <linux/profile.h>
+#include <linux/clk.h>
+#include <linux/clk-provider.h>
#include <linux/clocksource.h>
#include <linux/clockchips.h>
+#include <linux/cpu.h>
+#include <linux/of.h>
+#include <linux/of_irq.h>
#include <asm/irq.h>
#include <asm/arcregs.h>
-#include <asm/clk.h>
-#include <asm/mach_desc.h>
#include <asm/mcip.h>
@@ -60,16 +55,35 @@
#define ARC_TIMER_MAX 0xFFFFFFFF
-/********** Clock Source Device *********/
-
-#ifdef CONFIG_ARC_HAS_GFRC
+static unsigned long arc_timer_freq;
-static int arc_counter_setup(void)
+static int noinline arc_get_timer_clk(struct device_node *node)
{
- return 1;
+ struct clk *clk;
+ int ret;
+
+ clk = of_clk_get(node, 0);
+ if (IS_ERR(clk)) {
+ pr_err("timer missing clk");
+ return PTR_ERR(clk);
+ }
+
+ ret = clk_prepare_enable(clk);
+ if (ret) {
+ pr_err("Couldn't enable parent clk\n");
+ return ret;
+ }
+
+ arc_timer_freq = clk_get_rate(clk);
+
+ return 0;
}
-static cycle_t arc_counter_read(struct clocksource *cs)
+/********** Clock Source Device *********/
+
+#ifdef CONFIG_ARC_HAS_GFRC
+
+static cycle_t arc_read_gfrc(struct clocksource *cs)
{
unsigned long flags;
union {
@@ -94,15 +108,31 @@ static cycle_t arc_counter_read(struct clocksource *cs)
return stamp.full;
}
-static struct clocksource arc_counter = {
+static struct clocksource arc_counter_gfrc = {
.name = "ARConnect GFRC",
.rating = 400,
- .read = arc_counter_read,
+ .read = arc_read_gfrc,
.mask = CLOCKSOURCE_MASK(64),
.flags = CLOCK_SOURCE_IS_CONTINUOUS,
};
-#else
+static void __init arc_cs_setup_gfrc(struct device_node *node)
+{
+ int exists = cpuinfo_arc700[0].extn.gfrc;
+ int ret;
+
+ if (WARN(!exists, "Global-64-bit-Ctr clocksource not detected"))
+ return;
+
+ ret = arc_get_timer_clk(node);
+ if (ret)
+ return;
+
+ clocksource_register_hz(&arc_counter_gfrc, arc_timer_freq);
+}
+CLOCKSOURCE_OF_DECLARE(arc_gfrc, "snps,archs-timer-gfrc", arc_cs_setup_gfrc);
+
+#endif
#ifdef CONFIG_ARC_HAS_RTC
@@ -110,15 +140,7 @@ static struct clocksource arc_counter = {
#define AUX_RTC_LOW 0x104
#define AUX_RTC_HIGH 0x105
-int arc_counter_setup(void)
-{
- write_aux_reg(AUX_RTC_CTRL, 1);
-
- /* Not usable in SMP */
- return !IS_ENABLED(CONFIG_SMP);
-}
-
-static cycle_t arc_counter_read(struct clocksource *cs)
+static cycle_t arc_read_rtc(struct clocksource *cs)
{
unsigned long status;
union {
@@ -142,47 +164,78 @@ static cycle_t arc_counter_read(struct clocksource *cs)
return stamp.full;
}
-static struct clocksource arc_counter = {
+static struct clocksource arc_counter_rtc = {
.name = "ARCv2 RTC",
.rating = 350,
- .read = arc_counter_read,
+ .read = arc_read_rtc,
.mask = CLOCKSOURCE_MASK(64),
.flags = CLOCK_SOURCE_IS_CONTINUOUS,
};
-#else /* !CONFIG_ARC_HAS_RTC */
-
-/*
- * set 32bit TIMER1 to keep counting monotonically and wraparound
- */
-int arc_counter_setup(void)
+static void __init arc_cs_setup_rtc(struct device_node *node)
{
- write_aux_reg(ARC_REG_TIMER1_LIMIT, ARC_TIMER_MAX);
- write_aux_reg(ARC_REG_TIMER1_CNT, 0);
- write_aux_reg(ARC_REG_TIMER1_CTRL, TIMER_CTRL_NH);
+ int exists = cpuinfo_arc700[smp_processor_id()].extn.rtc;
+ int ret;
+
+ if (WARN(!exists, "Local-64-bit-Ctr clocksource not detected"))
+ return;
+
+ /* Local to CPU hence not usable in SMP */
+ if (WARN(IS_ENABLED(CONFIG_SMP), "Local-64-bit-Ctr not usable in SMP"))
+ return;
+
+ ret = arc_get_timer_clk(node);
+ if (ret)
+ return;
+
+ write_aux_reg(AUX_RTC_CTRL, 1);
- /* Not usable in SMP */
- return !IS_ENABLED(CONFIG_SMP);
+ clocksource_register_hz(&arc_counter_rtc, arc_timer_freq);
}
+CLOCKSOURCE_OF_DECLARE(arc_rtc, "snps,archs-timer-rtc", arc_cs_setup_rtc);
-static cycle_t arc_counter_read(struct clocksource *cs)
+#endif
+
+/*
+ * 32bit TIMER1 to keep counting monotonically and wraparound
+ */
+
+static cycle_t arc_read_timer1(struct clocksource *cs)
{
return (cycle_t) read_aux_reg(ARC_REG_TIMER1_CNT);
}
-static struct clocksource arc_counter = {
+static struct clocksource arc_counter_timer1 = {
.name = "ARC Timer1",
.rating = 300,
- .read = arc_counter_read,
+ .read = arc_read_timer1,
.mask = CLOCKSOURCE_MASK(32),
.flags = CLOCK_SOURCE_IS_CONTINUOUS,
};
-#endif
-#endif
+static void __init arc_cs_setup_timer1(struct device_node *node)
+{
+ int ret;
+
+ /* Local to CPU hence not usable in SMP */
+ if (IS_ENABLED(CONFIG_SMP))
+ return;
+
+ ret = arc_get_timer_clk(node);
+ if (ret)
+ return;
+
+ write_aux_reg(ARC_REG_TIMER1_LIMIT, ARC_TIMER_MAX);
+ write_aux_reg(ARC_REG_TIMER1_CNT, 0);
+ write_aux_reg(ARC_REG_TIMER1_CTRL, TIMER_CTRL_NH);
+
+ clocksource_register_hz(&arc_counter_timer1, arc_timer_freq);
+}
/********** Clock Event Device *********/
+static int arc_timer_irq;
+
/*
* Arm the timer to interrupt after @cycles
* The distinction for oneshot/periodic is done in arc_event_timer_ack() below
@@ -209,7 +262,7 @@ static int arc_clkevent_set_periodic(struct clock_event_device *dev)
* At X Hz, 1 sec = 1000ms -> X cycles;
* 10ms -> X / 100 cycles
*/
- arc_timer_event_setup(arc_get_core_freq() / HZ);
+ arc_timer_event_setup(arc_timer_freq / HZ);
return 0;
}
@@ -218,7 +271,6 @@ static DEFINE_PER_CPU(struct clock_event_device, arc_clockevent_device) = {
.features = CLOCK_EVT_FEAT_ONESHOT |
CLOCK_EVT_FEAT_PERIODIC,
.rating = 300,
- .irq = TIMER0_IRQ, /* hardwired, no need for resources */
.set_next_event = arc_clkevent_set_next_event,
.set_state_periodic = arc_clkevent_set_periodic,
};
@@ -244,45 +296,81 @@ static irqreturn_t timer_irq_handler(int irq, void *dev_id)
return IRQ_HANDLED;
}
+static int arc_timer_cpu_notify(struct notifier_block *self,
+ unsigned long action, void *hcpu)
+{
+ struct clock_event_device *evt = this_cpu_ptr(&arc_clockevent_device);
+
+ evt->cpumask = cpumask_of(smp_processor_id());
+
+ switch (action & ~CPU_TASKS_FROZEN) {
+ case CPU_STARTING:
+ clockevents_config_and_register(evt, arc_timer_freq,
+ 0, ULONG_MAX);
+ enable_percpu_irq(arc_timer_irq, 0);
+ break;
+ case CPU_DYING:
+ disable_percpu_irq(arc_timer_irq);
+ break;
+ }
+
+ return NOTIFY_OK;
+}
+
+static struct notifier_block arc_timer_cpu_nb = {
+ .notifier_call = arc_timer_cpu_notify,
+};
+
/*
- * Setup the local event timer for @cpu
+ * clockevent setup for boot CPU
*/
-void arc_local_timer_setup()
+static void __init arc_clockevent_setup(struct device_node *node)
{
struct clock_event_device *evt = this_cpu_ptr(&arc_clockevent_device);
- int cpu = smp_processor_id();
+ int ret;
+
+ register_cpu_notifier(&arc_timer_cpu_nb);
- evt->cpumask = cpumask_of(cpu);
- clockevents_config_and_register(evt, arc_get_core_freq(),
+ arc_timer_irq = irq_of_parse_and_map(node, 0);
+ if (arc_timer_irq <= 0)
+ panic("clockevent: missing irq");
+
+ ret = arc_get_timer_clk(node);
+ if (ret)
+ panic("clockevent: missing clk");
+
+ evt->irq = arc_timer_irq;
+ evt->cpumask = cpumask_of(smp_processor_id());
+ clockevents_config_and_register(evt, arc_timer_freq,
0, ARC_TIMER_MAX);
- /* setup the per-cpu timer IRQ handler - for all cpus */
- arc_request_percpu_irq(TIMER0_IRQ, cpu, timer_irq_handler,
- "Timer0 (per-cpu-tick)", evt);
+ /* Needs a priori irq_set_percpu_devid() done in intc map function */
+ ret = request_percpu_irq(arc_timer_irq, timer_irq_handler,
+ "Timer0 (per-cpu-tick)", evt);
+ if (ret)
+ panic("clockevent: unable to request irq\n");
+
+ enable_percpu_irq(arc_timer_irq, 0);
}
+static void __init arc_of_timer_init(struct device_node *np)
+{
+ static int init_count = 0;
+
+ if (!init_count) {
+ init_count = 1;
+ arc_clockevent_setup(np);
+ } else {
+ arc_cs_setup_timer1(np);
+ }
+}
+CLOCKSOURCE_OF_DECLARE(arc_clkevt, "snps,arc-timer", arc_of_timer_init);
+
/*
* Called from start_kernel() - boot CPU only
- *
- * -Sets up h/w timers as applicable on boot cpu
- * -Also sets up any global state needed for timer subsystem:
- * - for "counting" timer, registers a clocksource, usable across CPUs
- * (provided that underlying counter h/w is synchronized across cores)
- * - for "event" timer, sets up TIMER0 IRQ (as that is platform agnostic)
*/
void __init time_init(void)
{
- /*
- * sets up the timekeeping free-flowing counter which also returns
- * whether the counter is usable as clocksource
- */
- if (arc_counter_setup())
- /*
- * CLK upto 4.29 GHz can be safely represented in 32 bits
- * because Max 32 bit number is 4,294,967,295
- */
- clocksource_register_hz(&arc_counter, arc_get_core_freq());
-
- /* sets up the periodic event timer */
- arc_local_timer_setup();
+ of_clk_init(NULL);
+ clocksource_probe();
}
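With the conversion above, time_init() no longer knows about individual timers: of_clk_init(NULL) registers the clocks and clocksource_probe() walks the CLOCKSOURCE_OF_DECLARE() table, calling each init routine for its matching DT node; the core then keeps the highest-rated clocksource (GFRC at 400 beats RTC at 350 and TIMER1 at 300). A minimal sketch of a DT-probed clocksource in the same style, for a fictitious "vendor,demo-counter" binding — the hardware accessor below is a placeholder, not part of this patch.

#include <linux/clk.h>
#include <linux/clocksource.h>
#include <linux/of.h>

/* placeholder for the real hardware counter accessor */
static u32 demo_read_hw_counter(void)
{
	return 0;
}

static cycle_t demo_counter_read(struct clocksource *cs)
{
	return (cycle_t) demo_read_hw_counter();
}

static struct clocksource demo_counter = {
	.name	= "demo-counter",
	.rating	= 250,	/* lower than the GFRC/RTC/TIMER1 ratings above */
	.read	= demo_counter_read,
	.mask	= CLOCKSOURCE_MASK(32),
	.flags	= CLOCK_SOURCE_IS_CONTINUOUS,
};

static void __init demo_counter_setup(struct device_node *node)
{
	struct clk *clk = of_clk_get(node, 0);

	if (IS_ERR(clk) || clk_prepare_enable(clk))
		return;

	clocksource_register_hz(&demo_counter, clk_get_rate(clk));
}
CLOCKSOURCE_OF_DECLARE(demo_counter, "vendor,demo-counter", demo_counter_setup);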
diff --git a/arch/arc/mm/tlb.c b/arch/arc/mm/tlb.c
index 7046c12c58ed..ec868a9081a1 100644
--- a/arch/arc/mm/tlb.c
+++ b/arch/arc/mm/tlb.c
@@ -814,6 +814,17 @@ void arc_mmu_init(void)
printk(arc_mmu_mumbojumbo(0, str, sizeof(str)));
+ /*
+ * Can't be done in processor.h due to header include dependencies
+ */
+ BUILD_BUG_ON(!IS_ALIGNED((CONFIG_ARC_KVADDR_SIZE << 20), PMD_SIZE));
+
+ /*
+ * stack top size sanity check,
+ * Can't be done in processor.h due to header include dependencies
+ */
+ BUILD_BUG_ON(!IS_ALIGNED(STACK_TOP, PMD_SIZE));
+
/* For efficiency sake, kernel is compile time built for a MMU ver
* This must match the hardware it is running on.
* Linux built for MMU V2, if run on MMU V1 will break down because V1
diff --git a/arch/arc/plat-axs10x/Kconfig b/arch/arc/plat-axs10x/Kconfig
index 426ac4b8bb39..c54d1ae57fe0 100644
--- a/arch/arc/plat-axs10x/Kconfig
+++ b/arch/arc/plat-axs10x/Kconfig
@@ -13,7 +13,7 @@ menuconfig ARC_PLAT_AXS10X
select OF_GPIO
select MIGHT_HAVE_PCI
select GENERIC_IRQ_CHIP
- select ARCH_REQUIRE_GPIOLIB
+ select GPIOLIB
help
Support for the ARC AXS10x Software Development Platforms.
diff --git a/arch/arc/plat-axs10x/axs10x.c b/arch/arc/plat-axs10x/axs10x.c
index 1b0f0f458a2b..86548701023c 100644
--- a/arch/arc/plat-axs10x/axs10x.c
+++ b/arch/arc/plat-axs10x/axs10x.c
@@ -14,10 +14,11 @@
*
*/
+#include <linux/of_fdt.h>
#include <linux/of_platform.h>
+#include <linux/libfdt.h>
#include <asm/asm-offsets.h>
-#include <asm/clk.h>
#include <asm/io.h>
#include <asm/mach_desc.h>
#include <asm/mcip.h>
@@ -389,6 +390,13 @@ axs103_set_freq(unsigned int id, unsigned int fd, unsigned int od)
static void __init axs103_early_init(void)
{
+ int offset = fdt_path_offset(initial_boot_params, "/cpu_card/core_clk");
+ const struct fdt_property *prop = fdt_get_property(initial_boot_params,
+ offset,
+ "clock-frequency",
+ NULL);
+ u32 freq = be32_to_cpu(*(u32*)(prop->data)) / 1000000, orig = freq;
+
/*
* AXS103 configurations for SMP/QUAD configurations share device tree
* which defaults to 90 MHz. However recent failures of Quad config
@@ -401,12 +409,10 @@ static void __init axs103_early_init(void)
#ifdef CONFIG_ARC_MCIP
unsigned int num_cores = (read_aux_reg(ARC_REG_MCIP_BCR) >> 16) & 0x3F;
if (num_cores > 2)
- arc_set_core_freq(50 * 1000000);
- else if (num_cores == 2)
- arc_set_core_freq(75 * 1000000);
+ freq = 50;
#endif
- switch (arc_get_core_freq()/1000000) {
+ switch (freq) {
case 33:
axs103_set_freq(1, 1, 1);
break;
@@ -431,11 +437,18 @@ static void __init axs103_early_init(void)
* DT "clock-frequency" might not match with board value.
* Hence update it to match the board value.
*/
- arc_set_core_freq(axs103_get_freq() * 1000000);
+ freq = axs103_get_freq();
break;
}
- pr_info("Freq is %dMHz\n", axs103_get_freq());
+ pr_info("Freq is %dMHz\n", freq);
+
+ /* Patching .dtb in-place with new core clock value */
+ if (freq != orig ) {
+ freq = cpu_to_be32(freq * 1000000);
+ fdt_setprop_inplace(initial_boot_params, offset,
+ "clock-frequency", &freq, sizeof(freq));
+ }
/* Memory maps already config in pre-bootloader */
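The axs103_early_init() change above reads the core_clk "clock-frequency" property straight out of the flattened device tree and, when the chosen rate differs from what the DT advertises, rewrites the property in place so the clk framework later picks up the corrected value. The essential read-modify-write on initial_boot_params, reduced to a sketch (same node path as the hunk; error handling kept minimal, helper name illustrative):

#include <linux/init.h>
#include <linux/libfdt.h>
#include <linux/of_fdt.h>
#include <asm/byteorder.h>

/* Illustrative sketch of the in-place FDT patch done in the hunk above. */
static void __init demo_patch_core_clk(u32 new_hz)
{
	int offset = fdt_path_offset(initial_boot_params, "/cpu_card/core_clk");
	u32 val;

	if (offset < 0)
		return;

	/* FDT cells are big-endian; a same-size value keeps the blob layout */
	val = cpu_to_be32(new_hz);
	fdt_setprop_inplace(initial_boot_params, offset, "clock-frequency",
			    &val, sizeof(val));
}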
diff --git a/arch/arc/plat-eznps/Kconfig b/arch/arc/plat-eznps/Kconfig
new file mode 100644
index 000000000000..1d175cc6ad6d
--- /dev/null
+++ b/arch/arc/plat-eznps/Kconfig
@@ -0,0 +1,35 @@
+#
+# For a description of the syntax of this configuration file,
+# see Documentation/kbuild/kconfig-language.txt.
+#
+
+menuconfig ARC_PLAT_EZNPS
+ bool "\"EZchip\" ARC dev platform"
+ select ARC_HAS_COH_CACHES if SMP
+ select CPU_BIG_ENDIAN
+ select CLKSRC_NPS
+ select EZNPS_GIC
+ select EZCHIP_NPS_MANAGEMENT_ENET if ETHERNET
+ help
+ Support for EZchip development platforms,
+ based on ARC700 cores.
+ We handle a few flavours:
+ - Hardware Emulator (AKA HE), an FPGA-based chassis
+ - Simulator based on MetaWare nSIM
+ - NPS400 chip based on ASIC
+
+config EZNPS_MTM_EXT
+ bool "ARC-EZchip MTM Extensions"
+ select CPUMASK_OFFSTACK
+ depends on ARC_PLAT_EZNPS && SMP
+ default y
+ help
+ Here we add a new hierarchy to the CPU topology.
+ We get:
+ Core
+ Thread
+ At the new thread level each CPU represents one HW thread.
+ At the highest hierarchy each core contains 16 threads,
+ each of which appears as a CPU from Linux's point of view.
+ All threads within the same core share the core's execution
+ unit, and the HW scheduler round-robins between them.
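The EZNPS_MTM_EXT help above describes a fixed topology: every core carries 16 hardware threads and each thread is exposed to Linux as a CPU. Purely as an illustration of that numbering (not code from this series — the real decoding goes through struct global_id in <soc/nps/common.h>), a flat CPU id splits into core and thread like this:

/* Illustration only, assuming the 16-threads-per-core layout from the
 * Kconfig help; the real field packing lives in struct global_id.
 */
#define DEMO_THREADS_PER_CORE	16

static inline unsigned int demo_core_of(unsigned int cpu)
{
	return cpu / DEMO_THREADS_PER_CORE;
}

static inline unsigned int demo_thread_of(unsigned int cpu)
{
	return cpu % DEMO_THREADS_PER_CORE;
}

This reading is consistent with the GET_CPU_ID macro later in this series, which drops the low 4 bits of the id when the MTM extension is disabled.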
diff --git a/arch/arc/plat-eznps/Makefile b/arch/arc/plat-eznps/Makefile
new file mode 100644
index 000000000000..21091b199df0
--- /dev/null
+++ b/arch/arc/plat-eznps/Makefile
@@ -0,0 +1,7 @@
+#
+# Makefile for the linux kernel.
+#
+
+obj-y := entry.o platform.o
+obj-$(CONFIG_SMP) += smp.o
+obj-$(CONFIG_EZNPS_MTM_EXT) += mtm.o
diff --git a/arch/arc/plat-eznps/entry.S b/arch/arc/plat-eznps/entry.S
new file mode 100644
index 000000000000..328261c27cda
--- /dev/null
+++ b/arch/arc/plat-eznps/entry.S
@@ -0,0 +1,70 @@
+/*******************************************************************************
+
+ EZNPS CPU startup Code
+ Copyright(c) 2012 EZchip Technologies.
+
+ This program is free software; you can redistribute it and/or modify it
+ under the terms and conditions of the GNU General Public License,
+ version 2, as published by the Free Software Foundation.
+
+ This program is distributed in the hope it will be useful, but WITHOUT
+ ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ more details.
+
+ The full GNU General Public License is included in this distribution in
+ the file called "COPYING".
+
+*******************************************************************************/
+#include <linux/linkage.h>
+#include <asm/entry.h>
+#include <asm/cache.h>
+#include <plat/ctop.h>
+
+ .cpu A7
+
+ .section .init.text, "ax",@progbits
+ .align 1024 ; HW requirement for restart first PC
+
+ENTRY(res_service)
+#ifdef CONFIG_EZNPS_MTM_EXT
+ ; There is no work for HW thread id != 0
+ lr r3, [CTOP_AUX_THREAD_ID]
+ cmp r3, 0
+ jne stext
+#endif
+
+#ifdef CONFIG_ARC_HAS_DCACHE
+ ; With no cache coherency mechanism, D$ needs to be used very carefully.
+ ; Address space:
+ ; 0G-2G: We disable CONFIG_ARC_CACHE_PAGES.
+ ; 2G-3G: We disable D$ by setting this bit.
+ ; 3G-4G: D$ is disabled by architecture.
+ ; FMT are huge pages for user applications and reside at 0-2G.
+ ; Only FMT is left able to use D$, where each such page has a
+ ; disable/enable bit for cacheability.
+ ; Programmers will use FMT pages for private data, so cache coherency
+ ; would not be a problem.
+ ; First thing we do is invalidate D$
+ sr 1, [ARC_REG_DC_IVDC]
+ sr HW_COMPLY_KRN_NOT_D_CACHED, [CTOP_AUX_HW_COMPLY]
+#endif
+
+#ifdef CONFIG_SMP
+ ; We set the logical cpuid to be used by GET_CPUID.
+ ; We do not use the physical cpuid since we want ids to be continuous
+ ; for cpus on the same quad cluster.
+ ; This is useful for applications that use shared resources of a quad
+ ; cluster such as SRAMs.
+ lr r3, [CTOP_AUX_CORE_ID]
+ sr r3, [CTOP_AUX_LOGIC_CORE_ID]
+ lr r3, [CTOP_AUX_CLUSTER_ID]
+ ; Setting the logical id is achieved by swapping the 2 middle bits of the cluster id (4 bits)
+ ; r3 is used since we use short instruction and we need q-class reg
+ .short CTOP_INST_MOV2B_FLIP_R3_B1_B2_INST
+ .word CTOP_INST_MOV2B_FLIP_R3_B1_B2_LIMM
+ sr r3, [CTOP_AUX_LOGIC_CLUSTER_ID]
+#endif
+
+ j stext
+END(res_service)
diff --git a/arch/arc/plat-eznps/include/plat/ctop.h b/arch/arc/plat-eznps/include/plat/ctop.h
new file mode 100644
index 000000000000..9d6718c1a199
--- /dev/null
+++ b/arch/arc/plat-eznps/include/plat/ctop.h
@@ -0,0 +1,209 @@
+/*
+ * Copyright(c) 2015 EZchip Technologies.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ */
+
+#ifndef _PLAT_EZNPS_CTOP_H
+#define _PLAT_EZNPS_CTOP_H
+
+#ifndef CONFIG_ARC_PLAT_EZNPS
+#error "Incorrect ctop.h include"
+#endif
+
+#include <soc/nps/common.h>
+
+/* core auxiliary registers */
+#ifdef __ASSEMBLY__
+#define CTOP_AUX_BASE (-0x800)
+#else
+#define CTOP_AUX_BASE 0xFFFFF800
+#endif
+
+#define CTOP_AUX_GLOBAL_ID (CTOP_AUX_BASE + 0x000)
+#define CTOP_AUX_CLUSTER_ID (CTOP_AUX_BASE + 0x004)
+#define CTOP_AUX_CORE_ID (CTOP_AUX_BASE + 0x008)
+#define CTOP_AUX_THREAD_ID (CTOP_AUX_BASE + 0x00C)
+#define CTOP_AUX_LOGIC_GLOBAL_ID (CTOP_AUX_BASE + 0x010)
+#define CTOP_AUX_LOGIC_CLUSTER_ID (CTOP_AUX_BASE + 0x014)
+#define CTOP_AUX_LOGIC_CORE_ID (CTOP_AUX_BASE + 0x018)
+#define CTOP_AUX_MT_CTRL (CTOP_AUX_BASE + 0x020)
+#define CTOP_AUX_HW_COMPLY (CTOP_AUX_BASE + 0x024)
+#define CTOP_AUX_LPC (CTOP_AUX_BASE + 0x030)
+#define CTOP_AUX_EFLAGS (CTOP_AUX_BASE + 0x080)
+#define CTOP_AUX_IACK (CTOP_AUX_BASE + 0x088)
+#define CTOP_AUX_GPA1 (CTOP_AUX_BASE + 0x08C)
+#define CTOP_AUX_UDMC (CTOP_AUX_BASE + 0x300)
+
+/* EZchip core instructions */
+#define CTOP_INST_HWSCHD_OFF_R3 0x3B6F00BF
+#define CTOP_INST_HWSCHD_OFF_R4 0x3C6F00BF
+#define CTOP_INST_HWSCHD_RESTORE_R3 0x3E6F70C3
+#define CTOP_INST_HWSCHD_RESTORE_R4 0x3E6F7103
+#define CTOP_INST_SCHD_RW 0x3E6F7004
+#define CTOP_INST_SCHD_RD 0x3E6F7084
+#define CTOP_INST_ASRI_0_R3 0x3B56003E
+#define CTOP_INST_XEX_DI_R2_R2_R3 0x4A664C00
+#define CTOP_INST_EXC_DI_R2_R2_R3 0x4A664C01
+#define CTOP_INST_AADD_DI_R2_R2_R3 0x4A664C02
+#define CTOP_INST_AAND_DI_R2_R2_R3 0x4A664C04
+#define CTOP_INST_AOR_DI_R2_R2_R3 0x4A664C05
+#define CTOP_INST_AXOR_DI_R2_R2_R3 0x4A664C06
+
+/* Do not use D$ for address in 2G-3G */
+#define HW_COMPLY_KRN_NOT_D_CACHED _BITUL(28)
+
+#define NPS_MSU_EN_CFG 0x80
+#define NPS_CRG_BLKID 0x480
+#define NPS_CRG_SYNC_BIT _BITUL(0)
+#define NPS_GIM_BLKID 0x5C0
+
+/* GIM registers and fields */
+#define NPS_GIM_UART_LINE _BITUL(7)
+#define NPS_GIM_DBG_LAN_EAST_TX_DONE_LINE _BITUL(10)
+#define NPS_GIM_DBG_LAN_EAST_RX_RDY_LINE _BITUL(11)
+#define NPS_GIM_DBG_LAN_WEST_TX_DONE_LINE _BITUL(25)
+#define NPS_GIM_DBG_LAN_WEST_RX_RDY_LINE _BITUL(26)
+
+#ifndef __ASSEMBLY__
+/* Functional registers definition */
+struct nps_host_reg_mtm_cfg {
+ union {
+ struct {
+ u32 gen:1, gdis:1, clk_gate_dis:1, asb:1,
+ __reserved:9, nat:3, ten:16;
+ };
+ u32 value;
+ };
+};
+
+struct nps_host_reg_mtm_cpu_cfg {
+ union {
+ struct {
+ u32 csa:22, dmsid:6, __reserved:3, cs:1;
+ };
+ u32 value;
+ };
+};
+
+struct nps_host_reg_thr_init {
+ union {
+ struct {
+ u32 str:1, __reserved:27, thr_id:4;
+ };
+ u32 value;
+ };
+};
+
+struct nps_host_reg_thr_init_sts {
+ union {
+ struct {
+ u32 bsy:1, err:1, __reserved:26, thr_id:4;
+ };
+ u32 value;
+ };
+};
+
+struct nps_host_reg_msu_en_cfg {
+ union {
+ struct {
+ u32 __reserved1:11,
+ rtc_en:1, ipc_en:1, gim_1_en:1,
+ gim_0_en:1, ipi_en:1, buff_e_rls_bmuw:1,
+ buff_e_alc_bmuw:1, buff_i_rls_bmuw:1, buff_i_alc_bmuw:1,
+ buff_e_rls_bmue:1, buff_e_alc_bmue:1, buff_i_rls_bmue:1,
+ buff_i_alc_bmue:1, __reserved2:1, buff_e_pre_en:1,
+ buff_i_pre_en:1, pmuw_ja_en:1, pmue_ja_en:1,
+ pmuw_nj_en:1, pmue_nj_en:1, msu_en:1;
+ };
+ u32 value;
+ };
+};
+
+struct nps_host_reg_gim_p_int_dst {
+ union {
+ struct {
+ u32 int_out_en:1, __reserved1:4,
+ is:1, intm:2, __reserved2:4,
+ nid:4, __reserved3:4, cid:4,
+ __reserved4:4, tid:4;
+ };
+ u32 value;
+ };
+};
+
+/* AUX registers definition */
+struct nps_host_reg_aux_udmc {
+ union {
+ struct {
+ u32 dcp:1, cme:1, __reserved:19, nat:3,
+ __reserved2:5, dcas:3;
+ };
+ u32 value;
+ };
+};
+
+struct nps_host_reg_aux_mt_ctrl {
+ union {
+ struct {
+ u32 mten:1, hsen:1, scd:1, sten:1,
+ st_cnt:8, __reserved:8,
+ hs_cnt:8, __reserved1:4;
+ };
+ u32 value;
+ };
+};
+
+struct nps_host_reg_aux_hw_comply {
+ union {
+ struct {
+ u32 me:1, le:1, te:1, knc:1, __reserved:28;
+ };
+ u32 value;
+ };
+};
+
+struct nps_host_reg_aux_lpc {
+ union {
+ struct {
+ u32 mep:1, __reserved:31;
+ };
+ u32 value;
+ };
+};
+
+/* CRG registers */
+#define REG_GEN_PURP_0 nps_host_reg_non_cl(NPS_CRG_BLKID, 0x1BF)
+
+/* GIM registers */
+#define REG_GIM_P_INT_EN_0 nps_host_reg_non_cl(NPS_GIM_BLKID, 0x100)
+#define REG_GIM_P_INT_POL_0 nps_host_reg_non_cl(NPS_GIM_BLKID, 0x110)
+#define REG_GIM_P_INT_SENS_0 nps_host_reg_non_cl(NPS_GIM_BLKID, 0x114)
+#define REG_GIM_P_INT_BLK_0 nps_host_reg_non_cl(NPS_GIM_BLKID, 0x118)
+#define REG_GIM_P_INT_DST_10 nps_host_reg_non_cl(NPS_GIM_BLKID, 0x13A)
+#define REG_GIM_P_INT_DST_11 nps_host_reg_non_cl(NPS_GIM_BLKID, 0x13B)
+#define REG_GIM_P_INT_DST_25 nps_host_reg_non_cl(NPS_GIM_BLKID, 0x149)
+#define REG_GIM_P_INT_DST_26 nps_host_reg_non_cl(NPS_GIM_BLKID, 0x14A)
+
+#else
+
+.macro GET_CPU_ID reg
+ lr \reg, [CTOP_AUX_LOGIC_GLOBAL_ID]
+#ifndef CONFIG_EZNPS_MTM_EXT
+ lsr \reg, \reg, 4
+#endif
+.endm
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _PLAT_EZNPS_CTOP_H */
diff --git a/arch/arc/plat-eznps/include/plat/mtm.h b/arch/arc/plat-eznps/include/plat/mtm.h
new file mode 100644
index 000000000000..29b91b553bf9
--- /dev/null
+++ b/arch/arc/plat-eznps/include/plat/mtm.h
@@ -0,0 +1,60 @@
+/*
+ * Copyright(c) 2015 EZchip Technologies.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ */
+
+#ifndef _PLAT_EZNPS_MTM_H
+#define _PLAT_EZNPS_MTM_H
+
+#include <plat/ctop.h>
+
+static inline void *nps_mtm_reg_addr(u32 cpu, u32 reg)
+{
+ struct global_id gid;
+ u32 core, blkid;
+
+ gid.value = cpu;
+ core = gid.core;
+ blkid = (((core & 0x0C) << 2) | (core & 0x03));
+
+ return nps_host_reg(cpu, blkid, reg);
+}
+
+#ifdef CONFIG_EZNPS_MTM_EXT
+#define NPS_CPU_TO_THREAD_NUM(cpu) \
+ ({ struct global_id gid; gid.value = cpu; gid.thread; })
+
+/* MTM registers */
+#define MTM_CFG(cpu) nps_mtm_reg_addr(cpu, 0x81)
+#define MTM_THR_INIT(cpu) nps_mtm_reg_addr(cpu, 0x92)
+#define MTM_THR_INIT_STS(cpu) nps_mtm_reg_addr(cpu, 0x93)
+
+#define get_thread(map) map.thread
+#define eznps_max_cpus 4096
+#define eznps_cpus_per_cluster 256
+
+void mtm_enable_core(unsigned int cpu);
+int mtm_enable_thread(int cpu);
+#else /* !CONFIG_EZNPS_MTM_EXT */
+
+#define get_thread(map) 0
+#define eznps_max_cpus 256
+#define eznps_cpus_per_cluster 16
+#define mtm_enable_core(cpu)
+#define mtm_enable_thread(cpu) 1
+#define NPS_CPU_TO_THREAD_NUM(cpu) 0
+
+#endif /* CONFIG_EZNPS_MTM_EXT */
+
+#endif /* _PLAT_EZNPS_MTM_H */
diff --git a/arch/arc/plat-eznps/include/plat/smp.h b/arch/arc/plat-eznps/include/plat/smp.h
new file mode 100644
index 000000000000..06b59bd13a95
--- /dev/null
+++ b/arch/arc/plat-eznps/include/plat/smp.h
@@ -0,0 +1,26 @@
+/*
+ * Copyright(c) 2015 EZchip Technologies.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ */
+
+#ifndef __PLAT_EZNPS_SMP_H
+#define __PLAT_EZNPS_SMP_H
+
+#ifdef CONFIG_SMP
+
+extern void res_service(void);
+
+#endif /* CONFIG_SMP */
+
+#endif
diff --git a/arch/arc/plat-eznps/mtm.c b/arch/arc/plat-eznps/mtm.c
new file mode 100644
index 000000000000..aaaaffd3d940
--- /dev/null
+++ b/arch/arc/plat-eznps/mtm.c
@@ -0,0 +1,133 @@
+/*
+ * Copyright(c) 2015 EZchip Technologies.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ */
+
+#include <linux/smp.h>
+#include <linux/io.h>
+#include <linux/log2.h>
+#include <asm/arcregs.h>
+#include <plat/mtm.h>
+#include <plat/smp.h>
+
+#define MT_CTRL_HS_CNT 0xFF
+#define MT_CTRL_ST_CNT 0xF
+#define NPS_NUM_HW_THREADS 0x10
+
+static void mtm_init_nat(int cpu)
+{
+ struct nps_host_reg_mtm_cfg mtm_cfg;
+ struct nps_host_reg_aux_udmc udmc;
+ int log_nat, nat = 0, i, t;
+
+ /* Iterate core threads and update nat */
+ for (i = 0, t = cpu; i < NPS_NUM_HW_THREADS; i++, t++)
+ nat += test_bit(t, cpumask_bits(cpu_possible_mask));
+
+ log_nat = ilog2(nat);
+
+ udmc.value = read_aux_reg(CTOP_AUX_UDMC);
+ udmc.nat = log_nat;
+ write_aux_reg(CTOP_AUX_UDMC, udmc.value);
+
+ mtm_cfg.value = ioread32be(MTM_CFG(cpu));
+ mtm_cfg.nat = log_nat;
+ iowrite32be(mtm_cfg.value, MTM_CFG(cpu));
+}
+
+static void mtm_init_thread(int cpu)
+{
+ int i, tries = 5;
+ struct nps_host_reg_thr_init thr_init;
+ struct nps_host_reg_thr_init_sts thr_init_sts;
+
+ /* Set thread init register */
+ thr_init.value = 0;
+ iowrite32be(thr_init.value, MTM_THR_INIT(cpu));
+ thr_init.thr_id = NPS_CPU_TO_THREAD_NUM(cpu);
+ thr_init.str = 1;
+ iowrite32be(thr_init.value, MTM_THR_INIT(cpu));
+
+ /* Poll till thread init is done */
+ for (i = 0; i < tries; i++) {
+ thr_init_sts.value = ioread32be(MTM_THR_INIT_STS(cpu));
+ if (thr_init_sts.thr_id == thr_init.thr_id) {
+ if (thr_init_sts.bsy)
+ continue;
+ else if (thr_init_sts.err)
+ pr_warn("Failed to thread init cpu %u\n", cpu);
+ break;
+ }
+
+ pr_warn("Wrong thread id in thread init for cpu %u\n", cpu);
+ break;
+ }
+
+ if (i == tries)
+ pr_warn("Got thread init timeout for cpu %u\n", cpu);
+}
+
+int mtm_enable_thread(int cpu)
+{
+ struct nps_host_reg_mtm_cfg mtm_cfg;
+
+ if (NPS_CPU_TO_THREAD_NUM(cpu) == 0)
+ return 1;
+
+ /* Enable thread in mtm */
+ mtm_cfg.value = ioread32be(MTM_CFG(cpu));
+ mtm_cfg.ten |= (1 << (NPS_CPU_TO_THREAD_NUM(cpu)));
+ iowrite32be(mtm_cfg.value, MTM_CFG(cpu));
+
+ return 0;
+}
+
+void mtm_enable_core(unsigned int cpu)
+{
+ int i;
+ struct nps_host_reg_aux_mt_ctrl mt_ctrl;
+ struct nps_host_reg_mtm_cfg mtm_cfg;
+
+ if (NPS_CPU_TO_THREAD_NUM(cpu) != 0)
+ return;
+
+ /* Initialize Number of Active Threads */
+ mtm_init_nat(cpu);
+
+ /* Initialize mtm_cfg */
+ mtm_cfg.value = ioread32be(MTM_CFG(cpu));
+ mtm_cfg.ten = 1;
+ iowrite32be(mtm_cfg.value, MTM_CFG(cpu));
+
+ /* Initialize all other threads in core */
+ for (i = 1; i < NPS_NUM_HW_THREADS; i++)
+ mtm_init_thread(cpu + i);
+
+
+ /* Enable HW schedule, stall counter, mtm */
+ mt_ctrl.value = 0;
+ mt_ctrl.hsen = 1;
+ mt_ctrl.hs_cnt = MT_CTRL_HS_CNT;
+ mt_ctrl.sten = 1;
+ mt_ctrl.st_cnt = MT_CTRL_ST_CNT;
+ mt_ctrl.mten = 1;
+ write_aux_reg(CTOP_AUX_MT_CTRL, mt_ctrl.value);
+
+ /*
+ * HW scheduling mechanism will start working
+ * Only after call to instruction "schd.rw".
+ * cpu_relax() calls "schd.rw" instruction.
+ */
+ cpu_relax();
+}
diff --git a/arch/arc/plat-eznps/platform.c b/arch/arc/plat-eznps/platform.c
new file mode 100644
index 000000000000..7ad6d2b8f12a
--- /dev/null
+++ b/arch/arc/plat-eznps/platform.c
@@ -0,0 +1,102 @@
+/*
+ * Copyright(c) 2015 EZchip Technologies.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ */
+
+#include <linux/init.h>
+#include <linux/io.h>
+#include <asm/mach_desc.h>
+#include <plat/mtm.h>
+
+static void __init eznps_configure_msu(void)
+{
+ int cpu;
+ struct nps_host_reg_msu_en_cfg msu_en_cfg = {.value = 0};
+
+ msu_en_cfg.msu_en = 1;
+ msu_en_cfg.ipi_en = 1;
+ msu_en_cfg.gim_0_en = 1;
+ msu_en_cfg.gim_1_en = 1;
+
+ /* enable IPI and GIM messages on all clusters */
+ for (cpu = 0 ; cpu < eznps_max_cpus; cpu += eznps_cpus_per_cluster)
+ iowrite32be(msu_en_cfg.value,
+ nps_host_reg(cpu, NPS_MSU_BLKID, NPS_MSU_EN_CFG));
+}
+
+static void __init eznps_configure_gim(void)
+{
+ u32 reg_value;
+ u32 gim_int_lines;
+ struct nps_host_reg_gim_p_int_dst gim_p_int_dst = {.value = 0};
+
+ gim_int_lines = NPS_GIM_UART_LINE;
+ gim_int_lines |= NPS_GIM_DBG_LAN_EAST_TX_DONE_LINE;
+ gim_int_lines |= NPS_GIM_DBG_LAN_EAST_RX_RDY_LINE;
+ gim_int_lines |= NPS_GIM_DBG_LAN_WEST_TX_DONE_LINE;
+ gim_int_lines |= NPS_GIM_DBG_LAN_WEST_RX_RDY_LINE;
+
+ /*
+ * IRQ polarity
+ * low or high level
+ * negative or positive edge
+ */
+ reg_value = ioread32be(REG_GIM_P_INT_POL_0);
+ reg_value &= ~gim_int_lines;
+ iowrite32be(reg_value, REG_GIM_P_INT_POL_0);
+
+ /* IRQ type level or edge */
+ reg_value = ioread32be(REG_GIM_P_INT_SENS_0);
+ reg_value |= NPS_GIM_DBG_LAN_EAST_TX_DONE_LINE;
+ reg_value |= NPS_GIM_DBG_LAN_WEST_TX_DONE_LINE;
+ iowrite32be(reg_value, REG_GIM_P_INT_SENS_0);
+
+ /*
+ * GIM interrupt select type for
+ * dbg_lan TX and RX interrupts
+ * should be type 1
+ * type 0 = IRQ line 6
+ * type 1 = IRQ line 7
+ */
+ gim_p_int_dst.is = 1;
+ iowrite32be(gim_p_int_dst.value, REG_GIM_P_INT_DST_10);
+ iowrite32be(gim_p_int_dst.value, REG_GIM_P_INT_DST_11);
+ iowrite32be(gim_p_int_dst.value, REG_GIM_P_INT_DST_25);
+ iowrite32be(gim_p_int_dst.value, REG_GIM_P_INT_DST_26);
+
+ /*
+ * CTOP IRQ lines should be defined
+ * as blocking in GIM
+ */
+ iowrite32be(gim_int_lines, REG_GIM_P_INT_BLK_0);
+
+ /* enable CTOP IRQ lines in GIM */
+ iowrite32be(gim_int_lines, REG_GIM_P_INT_EN_0);
+}
+
+static void __init eznps_early_init(void)
+{
+ eznps_configure_msu();
+ eznps_configure_gim();
+}
+
+static const char *eznps_compat[] __initconst = {
+ "ezchip,arc-nps",
+ NULL,
+};
+
+MACHINE_START(NPS, "nps")
+ .dt_compat = eznps_compat,
+ .init_early = eznps_early_init,
+MACHINE_END
diff --git a/arch/arc/plat-eznps/smp.c b/arch/arc/plat-eznps/smp.c
new file mode 100644
index 000000000000..5e901f86e4bd
--- /dev/null
+++ b/arch/arc/plat-eznps/smp.c
@@ -0,0 +1,155 @@
+/*
+ * Copyright(c) 2015 EZchip Technologies.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * The full GNU General Public License is included in this distribution in
+ * the file called "COPYING".
+ */
+
+#include <linux/smp.h>
+#include <linux/of_fdt.h>
+#include <linux/io.h>
+#include <linux/irqdomain.h>
+#include <asm/irq.h>
+#include <plat/ctop.h>
+#include <plat/smp.h>
+#include <plat/mtm.h>
+
+#define NPS_DEFAULT_MSID 0x34
+#define NPS_MTM_CPU_CFG 0x90
+
+static char smp_cpuinfo_buf[128] = {"Extn [EZNPS-SMP]\t: On\n"};
+
+/* Get cpu map from device tree */
+static int __init eznps_get_map(const char *name, struct cpumask *cpumask)
+{
+ unsigned long dt_root = of_get_flat_dt_root();
+ const char *buf;
+
+ buf = of_get_flat_dt_prop(dt_root, name, NULL);
+ if (!buf)
+ return 1;
+
+ cpulist_parse(buf, cpumask);
+
+ return 0;
+}
+
+/* Update board cpu maps */
+static void __init eznps_init_cpumasks(void)
+{
+ struct cpumask cpumask;
+
+ if (eznps_get_map("present-cpus", &cpumask)) {
+ pr_err("Failed to get present-cpus from dtb");
+ return;
+ }
+ init_cpu_present(&cpumask);
+
+ if (eznps_get_map("possible-cpus", &cpumask)) {
+ pr_err("Failed to get possible-cpus from dtb");
+ return;
+ }
+ init_cpu_possible(&cpumask);
+}
+
+static void eznps_init_core(unsigned int cpu)
+{
+ u32 sync_value;
+ struct nps_host_reg_aux_hw_comply hw_comply;
+ struct nps_host_reg_aux_lpc lpc;
+
+ if (NPS_CPU_TO_THREAD_NUM(cpu) != 0)
+ return;
+
+ hw_comply.value = read_aux_reg(CTOP_AUX_HW_COMPLY);
+ hw_comply.me = 1;
+ hw_comply.le = 1;
+ hw_comply.te = 1;
+ write_aux_reg(CTOP_AUX_HW_COMPLY, hw_comply.value);
+
+ /* Enable MMU clock */
+ lpc.mep = 1;
+ write_aux_reg(CTOP_AUX_LPC, lpc.value);
+
+ /* Boot CPU only */
+ if (!cpu) {
+ /* Write to general purpose register in CRG */
+ sync_value = ioread32be(REG_GEN_PURP_0);
+ sync_value |= NPS_CRG_SYNC_BIT;
+ iowrite32be(sync_value, REG_GEN_PURP_0);
+ }
+}
+
+/*
+ * Master kick starting another CPU
+ */
+static void __init eznps_smp_wakeup_cpu(int cpu, unsigned long pc)
+{
+ struct nps_host_reg_mtm_cpu_cfg cpu_cfg;
+
+ if (mtm_enable_thread(cpu) == 0)
+ return;
+
+ /* set PC, dmsid, and start CPU */
+ cpu_cfg.value = (u32)res_service;
+ cpu_cfg.dmsid = NPS_DEFAULT_MSID;
+ cpu_cfg.cs = 1;
+ iowrite32be(cpu_cfg.value, nps_mtm_reg_addr(cpu, NPS_MTM_CPU_CFG));
+}
+
+static void eznps_ipi_send(int cpu)
+{
+ struct global_id gid;
+ struct {
+ union {
+ struct {
+ u32 num:8, cluster:8, core:8, thread:8;
+ };
+ u32 value;
+ };
+ } ipi;
+
+ gid.value = cpu;
+ ipi.thread = get_thread(gid);
+ ipi.core = gid.core;
+ ipi.cluster = nps_cluster_logic_to_phys(gid.cluster);
+ ipi.num = NPS_IPI_IRQ;
+
+ __asm__ __volatile__(
+ " mov r3, %0\n"
+ " .word %1\n"
+ :
+ : "r"(ipi.value), "i"(CTOP_INST_ASRI_0_R3)
+ : "r3");
+}
+
+static void eznps_init_per_cpu(int cpu)
+{
+ smp_ipi_irq_setup(cpu, NPS_IPI_IRQ);
+
+ eznps_init_core(cpu);
+ mtm_enable_core(cpu);
+}
+
+static void eznps_ipi_clear(int irq)
+{
+ write_aux_reg(CTOP_AUX_IACK, 1 << irq);
+}
+
+struct plat_smp_ops plat_smp_ops = {
+ .info = smp_cpuinfo_buf,
+ .init_early_smp = eznps_init_cpumasks,
+ .cpu_kick = eznps_smp_wakeup_cpu,
+ .ipi_send = eznps_ipi_send,
+ .init_per_cpu = eznps_init_per_cpu,
+ .ipi_clear = eznps_ipi_clear,
+};
diff --git a/arch/arc/plat-tb10x/Kconfig b/arch/arc/plat-tb10x/Kconfig
index d14b3d3c5dfd..149e0917645d 100644
--- a/arch/arc/plat-tb10x/Kconfig
+++ b/arch/arc/plat-tb10x/Kconfig
@@ -21,7 +21,7 @@ menuconfig ARC_PLAT_TB10X
select PINCTRL
select PINCTRL_TB10X
select PINMUX
- select ARCH_REQUIRE_GPIOLIB
+ select GPIOLIB
select GPIO_TB10X
select TB10X_IRQC
help
diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index cdfa6c2b7626..90542db1220d 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -41,7 +41,7 @@ config ARM
select HAVE_ARCH_SECCOMP_FILTER if (AEABI && !OABI_COMPAT)
select HAVE_ARCH_TRACEHOOK
select HAVE_ARM_SMCCC if CPU_V7
- select HAVE_BPF_JIT
+ select HAVE_CBPF_JIT
select HAVE_CC_STACKPROTECTOR
select HAVE_CONTEXT_TRACKING
select HAVE_C_RECORDMCOUNT
@@ -50,6 +50,7 @@ config ARM
select HAVE_DMA_CONTIGUOUS if MMU
select HAVE_DYNAMIC_FTRACE if (!XIP_KERNEL) && !CPU_ENDIAN_BE32 && MMU
select HAVE_EFFICIENT_UNALIGNED_ACCESS if (CPU_V6 || CPU_V6K || CPU_V7) && MMU
+ select HAVE_EXIT_THREAD
select HAVE_FTRACE_MCOUNT_RECORD if (!XIP_KERNEL)
select HAVE_FUNCTION_GRAPH_TRACER if (!THUMB2_KERNEL)
select HAVE_FUNCTION_TRACER if (!XIP_KERNEL)
@@ -66,6 +67,7 @@ config ARM
select HAVE_KRETPROBES if (HAVE_KPROBES)
select HAVE_MEMBLOCK
select HAVE_MOD_ARCH_SPECIFIC
+ select HAVE_NMI
select HAVE_OPROFILE if (HAVE_PERF_EVENTS)
select HAVE_OPTPROBES if !THUMB2_KERNEL
select HAVE_PERF_EVENTS
@@ -531,6 +533,8 @@ config ARCH_LPC32XX
select COMMON_CLK
select CPU_ARM926T
select GENERIC_CLOCKEVENTS
+ select MULTI_IRQ_HANDLER
+ select SPARSE_IRQ
select USE_OF
help
Support for the NXP LPC32XX family of processors
@@ -775,6 +779,8 @@ source "arch/arm/mach-meson/Kconfig"
source "arch/arm/mach-moxart/Kconfig"
+source "arch/arm/mach-aspeed/Kconfig"
+
source "arch/arm/mach-mv78xx0/Kconfig"
source "arch/arm/mach-imx/Kconfig"
@@ -804,6 +810,8 @@ source "arch/arm/plat-pxa/Kconfig"
source "arch/arm/mach-mmp/Kconfig"
+source "arch/arm/mach-oxnas/Kconfig"
+
source "arch/arm/mach-qcom/Kconfig"
source "arch/arm/mach-realview/Kconfig"
@@ -892,6 +900,18 @@ config MACH_STM32F429
depends on ARCH_STM32
default y
+config ARCH_MPS2
+ bool "ARM MPS2 paltform"
+ depends on ARM_SINGLE_ARMV7M
+ select ARM_AMBA
+ select CLKSRC_MPS2
+ help
+ Support for Cortex-M Prototyping System (or V2M-MPS2) which comes
+ with a range of available cores like Cortex-M3/M4/M7.
+
+ Please note that, depending on which Application Note is used, the memory
+ map for the platform may vary, so adjustment of the RAM base might be needed.
+
# Definitions to make life easier
config ARCH_ACORN
bool
diff --git a/arch/arm/Kconfig.debug b/arch/arm/Kconfig.debug
index 1098e91d6d3f..19a3dcf5eb2e 100644
--- a/arch/arm/Kconfig.debug
+++ b/arch/arm/Kconfig.debug
@@ -268,14 +268,6 @@ choice
Say Y here if you want kernel low-level debugging support
on HI3620 UART.
- config DEBUG_HI3716_UART
- bool "Hisilicon Hi3716 Debug UART"
- depends on ARCH_HI3xxx
- select DEBUG_UART_PL01X
- help
- Say Y here if you want kernel low-level debugging support
- on HI3716 UART.
-
config DEBUG_HIGHBANK_UART
bool "Kernel low-level debugging messages via Highbank UART"
depends on ARCH_HIGHBANK
diff --git a/arch/arm/Makefile b/arch/arm/Makefile
index 8c3ce2ac44c4..274e8a6582f1 100644
--- a/arch/arm/Makefile
+++ b/arch/arm/Makefile
@@ -183,6 +183,7 @@ machine-$(CONFIG_ARCH_LPC18XX) += lpc18xx
machine-$(CONFIG_ARCH_LPC32XX) += lpc32xx
machine-$(CONFIG_ARCH_MESON) += meson
machine-$(CONFIG_ARCH_MMP) += mmp
+machine-$(CONFIG_ARCH_MPS2) += vexpress
machine-$(CONFIG_ARCH_MOXART) += moxart
machine-$(CONFIG_ARCH_MV78XX0) += mv78xx0
machine-$(CONFIG_ARCH_MVEBU) += mvebu
diff --git a/arch/arm/boot/Makefile b/arch/arm/boot/Makefile
index 48fab15cfc02..5be33a2d59a9 100644
--- a/arch/arm/boot/Makefile
+++ b/arch/arm/boot/Makefile
@@ -82,13 +82,12 @@ $(obj)/uImage: $(obj)/zImage FORCE
$(obj)/bootp/bootp: $(obj)/zImage initrd FORCE
$(Q)$(MAKE) $(build)=$(obj)/bootp $@
- @:
$(obj)/bootpImage: $(obj)/bootp/bootp FORCE
$(call if_changed,objcopy)
@$(kecho) ' Kernel: $@ is ready'
-PHONY += initrd
+PHONY += initrd install zinstall uinstall
initrd:
@test "$(INITRD_PHYS)" != "" || \
(echo This machine does not support INITRD; exit -1)
diff --git a/arch/arm/boot/bootp/Makefile b/arch/arm/boot/bootp/Makefile
index 5761f0039133..5e4acd253b30 100644
--- a/arch/arm/boot/bootp/Makefile
+++ b/arch/arm/boot/bootp/Makefile
@@ -17,7 +17,6 @@ targets := bootp init.o kernel.o initrd.o
# Note that bootp.lds picks up kernel.o and initrd.o
$(obj)/bootp: $(src)/bootp.lds $(addprefix $(obj)/,init.o kernel.o initrd.o) FORCE
$(call if_changed,ld)
- @:
# kernel.o and initrd.o includes a binary image using
# .incbin, a dependency which is not tracked automatically
@@ -26,4 +25,4 @@ $(obj)/kernel.o: arch/arm/boot/zImage FORCE
$(obj)/initrd.o: $(INITRD) FORCE
-PHONY += $(INITRD) FORCE
+PHONY += $(INITRD)
diff --git a/arch/arm/boot/dts/Makefile b/arch/arm/boot/dts/Makefile
index 95c1923ce6fa..06b6c2d695bf 100644
--- a/arch/arm/boot/dts/Makefile
+++ b/arch/arm/boot/dts/Makefile
@@ -112,6 +112,7 @@ dtb-$(CONFIG_ARCH_DIGICOLOR) += \
dtb-$(CONFIG_ARCH_EFM32) += \
efm32gg-dk3750.dtb
dtb-$(CONFIG_ARCH_EXYNOS3) += \
+ exynos3250-artik5-eval.dtb \
exynos3250-monk.dtb \
exynos3250-rinato.dtb
dtb-$(CONFIG_ARCH_EXYNOS4) += \
@@ -158,9 +159,9 @@ dtb-$(CONFIG_ARCH_INTEGRATOR) += \
integratorap.dtb \
integratorcp.dtb
dtb-$(CONFIG_ARCH_KEYSTONE) += \
- k2hk-evm.dtb \
- k2l-evm.dtb \
- k2e-evm.dtb \
+ keystone-k2hk-evm.dtb \
+ keystone-k2l-evm.dtb \
+ keystone-k2e-evm.dtb \
keystone-k2g-evm.dtb
dtb-$(CONFIG_MACH_KIRKWOOD) += \
kirkwood-b3.dtb \
@@ -177,6 +178,7 @@ dtb-$(CONFIG_MACH_KIRKWOOD) += \
kirkwood-ds109.dtb \
kirkwood-ds110jv10.dtb \
kirkwood-ds111.dtb \
+ kirkwood-ds112.dtb \
kirkwood-ds209.dtb \
kirkwood-ds210.dtb \
kirkwood-ds212.dtb \
@@ -199,6 +201,7 @@ dtb-$(CONFIG_MACH_KIRKWOOD) += \
kirkwood-linkstation-lswsxl.dtb \
kirkwood-linkstation-lswvl.dtb \
kirkwood-linkstation-lswxl.dtb \
+ kirkwood-linksys-viper.dtb \
kirkwood-lschlv2.dtb \
kirkwood-lsxhl.dtb \
kirkwood-mplcec4.dtb \
@@ -214,6 +217,7 @@ dtb-$(CONFIG_MACH_KIRKWOOD) += \
kirkwood-ns2mini.dtb \
kirkwood-nsa310.dtb \
kirkwood-nsa310a.dtb \
+ kirkwood-nsa320.dtb \
kirkwood-nsa325.dtb \
kirkwood-openblocks_a6.dtb \
kirkwood-openblocks_a7.dtb \
@@ -241,7 +245,8 @@ dtb-$(CONFIG_ARCH_LPC18XX) += \
lpc4350-hitex-eval.dtb \
lpc4357-ea4357-devkit.dtb
dtb-$(CONFIG_ARCH_LPC32XX) += \
- ea3250.dtb phy3250.dtb
+ lpc3250-ea3250.dtb \
+ lpc3250-phy3250.dtb
dtb-$(CONFIG_MACH_MESON6) += \
meson6-atv1200.dtb
dtb-$(CONFIG_MACH_MESON8) += \
@@ -253,6 +258,9 @@ dtb-$(CONFIG_ARCH_MMP) += \
dtb-$(CONFIG_MACH_MESON8B) += \
meson8b-mxq.dtb \
meson8b-odroidc1.dtb
+dtb-$(CONFIG_ARCH_MPS2) += \
+ mps2-an385.dtb \
+ mps2-an399.dtb
dtb-$(CONFIG_ARCH_MOXART) += \
moxart-uc7112lx.dtb
dtb-$(CONFIG_SOC_IMX1) += \
@@ -320,8 +328,12 @@ dtb-$(CONFIG_SOC_IMX6Q) += \
imx6dl-sabrelite.dtb \
imx6dl-sabresd.dtb \
imx6dl-tx6dl-comtft.dtb \
+ imx6dl-tx6s-8034.dtb \
+ imx6dl-tx6s-8035.dtb \
imx6dl-tx6u-801x.dtb \
+ imx6dl-tx6u-8033.dtb \
imx6dl-tx6u-811x.dtb \
+ imx6dl-tx6u-81xx-mb7.dtb \
imx6dl-udoo.dtb \
imx6dl-wandboard.dtb \
imx6dl-wandboard-revb1.dtb \
@@ -346,6 +358,7 @@ dtb-$(CONFIG_SOC_IMX6Q) += \
imx6q-gw552x.dtb \
imx6q-hummingboard.dtb \
imx6q-icore-rqs.dtb \
+ imx6q-marsboard.dtb \
imx6q-nitrogen6x.dtb \
imx6q-nitrogen6_max.dtb \
imx6q-novena.dtb \
@@ -360,23 +373,33 @@ dtb-$(CONFIG_SOC_IMX6Q) += \
imx6q-tx6q-1010-comtft.dtb \
imx6q-tx6q-1020.dtb \
imx6q-tx6q-1020-comtft.dtb \
+ imx6q-tx6q-1036.dtb \
imx6q-tx6q-1110.dtb \
+ imx6q-tx6q-11x0-mb7.dtb \
imx6q-udoo.dtb \
imx6q-wandboard.dtb \
imx6q-wandboard-revb1.dtb \
+ imx6qp-nitrogen6_max.dtb \
imx6qp-sabreauto.dtb \
imx6qp-sabresd.dtb
dtb-$(CONFIG_SOC_IMX6SL) += \
imx6sl-evk.dtb \
imx6sl-warp.dtb
dtb-$(CONFIG_SOC_IMX6SX) += \
+ imx6sx-nitrogen6sx.dtb \
imx6sx-sabreauto.dtb \
imx6sx-sdb-reva.dtb \
+ imx6sx-sdb-sai.dtb \
imx6sx-sdb.dtb
dtb-$(CONFIG_SOC_IMX6UL) += \
- imx6ul-14x14-evk.dtb
+ imx6ul-14x14-evk.dtb \
+ imx6ul-pico-hobbit.dtb \
+ imx6ul-tx6ul-0010.dtb \
+ imx6ul-tx6ul-0011.dtb \
+ imx6ul-tx6ul-mainboard.dtb
dtb-$(CONFIG_SOC_IMX7D) += \
imx7d-cl-som-imx7.dtb \
+ imx7d-nitrogen7.dtb \
imx7d-sbc-imx7.dtb \
imx7d-sdb.dtb
dtb-$(CONFIG_SOC_LS1021A) += \
@@ -388,7 +411,8 @@ dtb-$(CONFIG_SOC_VF610) += \
vf610m4-colibri.dtb \
vf610-cosmic.dtb \
vf610m4-cosmic.dtb \
- vf610-twr.dtb
+ vf610-twr.dtb \
+ vf610-zii-dev-rev-b.dtb
dtb-$(CONFIG_ARCH_MXS) += \
imx23-evk.dtb \
imx23-olinuxino.dtb \
@@ -485,6 +509,8 @@ dtb-$(CONFIG_SOC_TI81XX) += \
dm8168-evm.dtb \
dra62x-j5eco-evm.dtb
dtb-$(CONFIG_SOC_AM33XX) += \
+ am335x-baltos-ir2110.dtb \
+ am335x-baltos-ir3220.dtb \
am335x-baltos-ir5221.dtb \
am335x-base0033.dtb \
am335x-bone.dtb \
@@ -494,6 +520,7 @@ dtb-$(CONFIG_SOC_AM33XX) += \
am335x-cm-t335.dtb \
am335x-evm.dtb \
am335x-evmsk.dtb \
+ am335x-icev2.dtb \
am335x-lxm.dtb \
am335x-nano.dtb \
am335x-pepper.dtb \
@@ -503,6 +530,7 @@ dtb-$(CONFIG_SOC_AM33XX) += \
am335x-wega-rdk.dtb
dtb-$(CONFIG_ARCH_OMAP4) += \
omap4-duovero-parlor.dtb \
+ omap4-kc1.dtb \
omap4-panda.dtb \
omap4-panda-a4.dtb \
omap4-panda-es.dtb \
@@ -526,9 +554,12 @@ dtb-$(CONFIG_SOC_DRA7XX) += \
am57xx-beagle-x15.dtb \
am57xx-cl-som-am57x.dtb \
am57xx-sbc-am57x.dtb \
+ am572x-idk.dtb \
dra7-evm.dtb \
- dra72-evm.dtb
+ dra72-evm.dtb \
+ dra72-evm-revc.dtb
dtb-$(CONFIG_ARCH_ORION5X) += \
+ orion5x-kuroboxpro.dtb \
orion5x-lacie-d2-network.dtb \
orion5x-lacie-ethernet-disk-mini-v2.dtb \
orion5x-linkstation-lsgl.dtb \
@@ -538,7 +569,10 @@ dtb-$(CONFIG_ARCH_ORION5X) += \
orion5x-rd88f5182-nas.dtb
dtb-$(CONFIG_ARCH_PRIMA2) += \
prima2-evb.dtb
+dtb-$(CONFIG_ARCH_OXNAS) += \
+ wd-mbwe.dtb
dtb-$(CONFIG_ARCH_QCOM) += \
+ qcom-apq8064-arrow-db600c.dtb \
qcom-apq8064-cm-qs600.dtb \
qcom-apq8064-ifc6410.dtb \
qcom-apq8064-sony-xperia-yuga.dtb \
@@ -546,13 +580,20 @@ dtb-$(CONFIG_ARCH_QCOM) += \
qcom-apq8074-dragonboard.dtb \
qcom-apq8084-ifc6540.dtb \
qcom-apq8084-mtp.dtb \
+ qcom-ipq4019-ap.dk01.1-c1.dtb \
qcom-ipq8064-ap148.dtb \
qcom-msm8660-surf.dtb \
qcom-msm8960-cdp.dtb \
qcom-msm8974-sony-xperia-honami.dtb
dtb-$(CONFIG_ARCH_REALVIEW) += \
arm-realview-pb1176.dtb \
- arm-realview-pb11mp.dtb
+ arm-realview-pb11mp.dtb \
+ arm-realview-eb.dtb \
+ arm-realview-eb-11mp.dtb \
+ arm-realview-eb-11mp-revb.dtb \
+ arm-realview-eb-a9mp.dtb \
+ arm-realview-pba8.dtb \
+ arm-realview-pbx-a9.dtb
dtb-$(CONFIG_ARCH_ROCKCHIP) += \
rk3036-evb.dtb \
rk3036-kylin.dtb \
@@ -565,6 +606,7 @@ dtb-$(CONFIG_ARCH_ROCKCHIP) += \
rk3288-evb-rk808.dtb \
rk3288-firefly-beta.dtb \
rk3288-firefly.dtb \
+ rk3288-miqi.dtb \
rk3288-popmetal.dtb \
rk3288-r89.dtb \
rk3288-rock2-square.dtb \
@@ -608,6 +650,7 @@ dtb-$(CONFIG_ARCH_SOCFPGA) += \
socfpga_cyclone5_de0_sockit.dtb \
socfpga_cyclone5_sockit.dtb \
socfpga_cyclone5_socrates.dtb \
+ socfpga_cyclone5_vining_fpga.dtb \
socfpga_vt.dtb
dtb-$(CONFIG_ARCH_SPEAR13XX) += \
spear1310-evb.dtb \
@@ -637,6 +680,7 @@ dtb-$(CONFIG_MACH_SUN4I) += \
sun4i-a10-ba10-tvbox.dtb \
sun4i-a10-chuwi-v7-cw0825.dtb \
sun4i-a10-cubieboard.dtb \
+ sun4i-a10-dserve-dsrv9703c.dtb \
sun4i-a10-gemei-g9.dtb \
sun4i-a10-hackberry.dtb \
sun4i-a10-hyundai-a7hd.dtb \
@@ -660,6 +704,7 @@ dtb-$(CONFIG_MACH_SUN5I) += \
sun5i-a10s-olinuxino-micro.dtb \
sun5i-a10s-r7-tv-dongle.dtb \
sun5i-a10s-wobo-i5.dtb \
+ sun5i-a13-difrnce-dit4350.dtb \
sun5i-a13-empire-electronix-d709.dtb \
sun5i-a13-hsg-h702.dtb \
sun5i-a13-inet-98v-rev2.dtb \
@@ -675,6 +720,7 @@ dtb-$(CONFIG_MACH_SUN6I) += \
sun6i-a31-i7.dtb \
sun6i-a31-m9.dtb \
sun6i-a31-mele-a1000g-quad.dtb \
+ sun6i-a31s-colorfly-e708-q1.dtb \
sun6i-a31s-cs908.dtb \
sun6i-a31s-primo81.dtb \
sun6i-a31s-sina31s.dtb \
@@ -707,6 +753,7 @@ dtb-$(CONFIG_MACH_SUN8I) += \
sun8i-a23-gt90h-v4.dtb \
sun8i-a23-ippo-q8h-v5.dtb \
sun8i-a23-ippo-q8h-v1.2.dtb \
+ sun8i-a23-polaroid-mid2809pxe04.dtb \
sun8i-a23-q8-tablet.dtb \
sun8i-a33-et-q8-v1.6.dtb \
sun8i-a33-ga10h-v1.1.dtb \
@@ -715,6 +762,9 @@ dtb-$(CONFIG_MACH_SUN8I) += \
sun8i-a33-sinlinx-sina33.dtb \
sun8i-a83t-allwinner-h8homlet-v2.dtb \
sun8i-a83t-cubietruck-plus.dtb \
+ sun8i-h3-orangepi-2.dtb \
+ sun8i-h3-orangepi-one.dtb \
+ sun8i-h3-orangepi-pc.dtb \
sun8i-h3-orangepi-plus.dtb
dtb-$(CONFIG_MACH_SUN9I) += \
sun9i-a80-optimus.dtb \
@@ -839,6 +889,8 @@ dtb-$(CONFIG_ARCH_MEDIATEK) += \
mt8127-moose.dtb \
mt8135-evbp1.dtb
dtb-$(CONFIG_ARCH_ZX) += zx296702-ad1.dtb
+dtb-$(CONFIG_ARCH_ASPEED) += aspeed-bmc-opp-palmetto.dtb \
+ aspeed-ast2500-evb.dtb
endif
dtstree := $(srctree)/$(src)
diff --git a/arch/arm/boot/dts/am335x-baltos-ir2110.dts b/arch/arm/boot/dts/am335x-baltos-ir2110.dts
new file mode 100644
index 000000000000..a9a97307d66c
--- /dev/null
+++ b/arch/arm/boot/dts/am335x-baltos-ir2110.dts
@@ -0,0 +1,71 @@
+/*
+ * Copyright (C) 2012 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+/*
+ * VScom OnRISC
+ * http://www.vscom.de
+ */
+
+/dts-v1/;
+
+#include "am335x-baltos.dtsi"
+
+/ {
+ model = "OnRISC Baltos iR 2110";
+};
+
+&am33xx_pinmux {
+ uart1_pins: pinmux_uart1_pins {
+ pinctrl-single,pins = <
+ AM33XX_IOPAD(0x980, PIN_INPUT | MUX_MODE0) /* uart1_rxd */
+ AM33XX_IOPAD(0x984, PIN_INPUT | MUX_MODE0) /* uart1_txd */
+ AM33XX_IOPAD(0x978, PIN_INPUT_PULLDOWN | MUX_MODE0) /* uart1_ctsn */
+ AM33XX_IOPAD(0x97c, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* uart1_rtsn */
+ AM33XX_IOPAD(0x8e0, PIN_OUTPUT_PULLDOWN | MUX_MODE7) /* lcd_vsync.gpio2[22] DTR */
+ AM33XX_IOPAD(0x8e4, PIN_INPUT_PULLDOWN | MUX_MODE7) /* lcd_hsync.gpio2[23] DSR */
+ AM33XX_IOPAD(0x8e8, PIN_INPUT_PULLDOWN | MUX_MODE7) /* lcd_pclk.gpio2[24] DCD */
+ AM33XX_IOPAD(0x8ec, PIN_INPUT_PULLDOWN | MUX_MODE7) /* lcd_ac_bias_en.gpio2[25] RI */
+ >;
+ };
+};
+
+&uart1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart1_pins>;
+ dtr-gpios = <&gpio2 22 GPIO_ACTIVE_LOW>;
+ dsr-gpios = <&gpio2 23 GPIO_ACTIVE_LOW>;
+ dcd-gpios = <&gpio2 24 GPIO_ACTIVE_LOW>;
+ rng-gpios = <&gpio2 25 GPIO_ACTIVE_LOW>;
+
+ status = "okay";
+};
+
+&usb0_phy {
+ status = "okay";
+};
+
+&usb0 {
+ status = "okay";
+ dr_mode = "host";
+};
+
+&cpsw_emac0 {
+ phy_id = <&davinci_mdio>, <1>;
+ phy-mode = "rmii";
+ dual_emac_res_vlan = <1>;
+};
+
+&cpsw_emac1 {
+ phy_id = <&davinci_mdio>, <7>;
+ phy-mode = "rgmii-txid";
+ dual_emac_res_vlan = <2>;
+};
+
+&phy_sel {
+ rmii-clock-ext = <1>;
+};
diff --git a/arch/arm/boot/dts/am335x-baltos-ir3220.dts b/arch/arm/boot/dts/am335x-baltos-ir3220.dts
new file mode 100644
index 000000000000..fe002a17c04b
--- /dev/null
+++ b/arch/arm/boot/dts/am335x-baltos-ir3220.dts
@@ -0,0 +1,119 @@
+/*
+ * Copyright (C) 2012 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+/*
+ * VScom OnRISC
+ * http://www.vscom.de
+ */
+
+/dts-v1/;
+
+#include "am335x-baltos.dtsi"
+
+/ {
+ model = "OnRISC Baltos iR 3220";
+};
+
+&am33xx_pinmux {
+ tca6416_pins: pinmux_tca6416_pins {
+ pinctrl-single,pins = <
+ AM33XX_IOPAD(0x9b4, PIN_INPUT_PULLUP | MUX_MODE7) /* xdma_event_intr1.gpio0[20] tca6416 stuff */
+ >;
+ };
+
+ uart1_pins: pinmux_uart1_pins {
+ pinctrl-single,pins = <
+ AM33XX_IOPAD(0x980, PIN_INPUT | MUX_MODE0) /* uart1_rxd */
+ AM33XX_IOPAD(0x984, PIN_INPUT | MUX_MODE0) /* uart1_txd */
+ AM33XX_IOPAD(0x978, PIN_INPUT_PULLDOWN | MUX_MODE0) /* uart1_ctsn */
+ AM33XX_IOPAD(0x97c, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* uart1_rtsn */
+ AM33XX_IOPAD(0x8e0, PIN_OUTPUT_PULLDOWN | MUX_MODE7) /* lcd_vsync.gpio2[22] DTR */
+ AM33XX_IOPAD(0x8e4, PIN_INPUT_PULLDOWN | MUX_MODE7) /* lcd_hsync.gpio2[23] DSR */
+ AM33XX_IOPAD(0x8e8, PIN_INPUT_PULLDOWN | MUX_MODE7) /* lcd_pclk.gpio2[24] DCD */
+ AM33XX_IOPAD(0x8ec, PIN_INPUT_PULLDOWN | MUX_MODE7) /* lcd_ac_bias_en.gpio2[25] RI */
+ >;
+ };
+
+ uart2_pins: pinmux_uart2_pins {
+ pinctrl-single,pins = <
+ AM33XX_IOPAD(0x950, PIN_INPUT | MUX_MODE1) /* spi0_sclk.uart2_rxd_mux3 */
+ AM33XX_IOPAD(0x954, PIN_OUTPUT | MUX_MODE1) /* spi0_d0.uart2_txd_mux3 */
+ AM33XX_IOPAD(0x988, PIN_INPUT_PULLDOWN | MUX_MODE2) /* i2c0_sda.uart2_ctsn_mux0 */
+ AM33XX_IOPAD(0x98c, PIN_OUTPUT_PULLDOWN | MUX_MODE2) /* i2c0_scl.uart2_rtsn_mux0 */
+ AM33XX_IOPAD(0x830, PIN_OUTPUT_PULLDOWN | MUX_MODE7) /* gpmc_ad12.gpio1[12] DTR */
+ AM33XX_IOPAD(0x834, PIN_INPUT_PULLDOWN | MUX_MODE7) /* gpmc_ad13.gpio1[13] DSR */
+ AM33XX_IOPAD(0x838, PIN_INPUT_PULLDOWN | MUX_MODE7) /* gpmc_ad14.gpio1[14] DCD */
+ AM33XX_IOPAD(0x83c, PIN_INPUT_PULLDOWN | MUX_MODE7) /* gpmc_ad15.gpio1[15] RI */
+
+ AM33XX_IOPAD(0x9a0, PIN_INPUT_PULLUP | MUX_MODE7) /* mcasp0_aclkr.gpio3[18], INPUT_PULLDOWN | MODE7 */
+ >;
+ };
+};
+
+&uart1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart1_pins>;
+ dtr-gpios = <&gpio2 22 GPIO_ACTIVE_LOW>;
+ dsr-gpios = <&gpio2 23 GPIO_ACTIVE_LOW>;
+ dcd-gpios = <&gpio2 24 GPIO_ACTIVE_LOW>;
+ rng-gpios = <&gpio2 25 GPIO_ACTIVE_LOW>;
+
+ status = "okay";
+};
+
+&uart2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart2_pins>;
+ dtr-gpios = <&gpio1 12 GPIO_ACTIVE_LOW>;
+ dsr-gpios = <&gpio1 13 GPIO_ACTIVE_LOW>;
+ dcd-gpios = <&gpio1 14 GPIO_ACTIVE_LOW>;
+ rng-gpios = <&gpio1 15 GPIO_ACTIVE_LOW>;
+
+ status = "okay";
+};
+
+&i2c1 {
+ tca6416: gpio@20 {
+ compatible = "ti,tca6416";
+ reg = <0x20>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-parent = <&gpio0>;
+ interrupts = <20 GPIO_ACTIVE_LOW>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&tca6416_pins>;
+ };
+};
+
+&usb0_phy {
+ status = "okay";
+};
+
+&usb0 {
+ status = "okay";
+ dr_mode = "host";
+};
+
+&cpsw_emac0 {
+ phy-mode = "rmii";
+ dual_emac_res_vlan = <1>;
+ fixed-link {
+ speed = <100>;
+ full-duplex;
+ };
+};
+
+&cpsw_emac1 {
+ phy_id = <&davinci_mdio>, <7>;
+ phy-mode = "rgmii-txid";
+ dual_emac_res_vlan = <2>;
+};
+
+&phy_sel {
+ rmii-clock-ext = <1>;
+};
diff --git a/arch/arm/boot/dts/am335x-baltos-ir5221.dts b/arch/arm/boot/dts/am335x-baltos-ir5221.dts
index 4e28d87e9356..d0faa7b8c5da 100644
--- a/arch/arm/boot/dts/am335x-baltos-ir5221.dts
+++ b/arch/arm/boot/dts/am335x-baltos-ir5221.dts
@@ -13,83 +13,19 @@
/dts-v1/;
-#include "am33xx.dtsi"
-#include <dt-bindings/pwm/pwm.h>
-#include <dt-bindings/interrupt-controller/irq.h>
+#include "am335x-baltos.dtsi"
/ {
model = "OnRISC Baltos iR 5221";
- compatible = "vscom,onrisc", "ti,am33xx";
-
- cpus {
- cpu@0 {
- cpu0-supply = <&vdd1_reg>;
- };
- };
-
- memory {
- device_type = "memory";
- reg = <0x80000000 0x10000000>; /* 256 MB */
- };
-
- vbat: fixedregulator@0 {
- compatible = "regulator-fixed";
- regulator-name = "vbat";
- regulator-min-microvolt = <5000000>;
- regulator-max-microvolt = <5000000>;
- regulator-boot-on;
- };
-
- wl12xx_vmmc: fixedregulator@2 {
- pinctrl-names = "default";
- pinctrl-0 = <&wl12xx_gpio>;
- compatible = "regulator-fixed";
- regulator-name = "vwl1271";
- regulator-min-microvolt = <3300000>;
- regulator-max-microvolt = <3300000>;
- gpio = <&gpio3 8 0>;
- startup-delay-us = <70000>;
- enable-active-high;
- };
};
&am33xx_pinmux {
- mmc2_pins: pinmux_mmc2_pins {
- pinctrl-single,pins = <
- AM33XX_IOPAD(0x820, PIN_INPUT_PULLUP | MUX_MODE2) /* gpmc_ad8.mmc1_dat0_mux0 */
- AM33XX_IOPAD(0x824, PIN_INPUT_PULLUP | MUX_MODE2) /* gpmc_ad9.mmc1_dat1_mux0 */
- AM33XX_IOPAD(0x828, PIN_INPUT_PULLUP | MUX_MODE2) /* gpmc_ad10.mmc1_dat2_mux0 */
- AM33XX_IOPAD(0x82c, PIN_INPUT_PULLUP | MUX_MODE2) /* gpmc_ad11.mmc1_dat3_mux0 */
- AM33XX_IOPAD(0x880, PIN_INPUT_PULLUP | MUX_MODE2) /* gpmc_csn1.mmc1_clk_mux0 */
- AM33XX_IOPAD(0x884, PIN_INPUT_PULLUP | MUX_MODE2) /* gpmc_csn2.mmc1_cmd_mux0 */
- AM33XX_IOPAD(0x9e4, PIN_INPUT_PULLUP | MUX_MODE7) /* emu0.gpio3[7] */
- >;
- };
-
- wl12xx_gpio: pinmux_wl12xx_gpio {
- pinctrl-single,pins = <
- AM33XX_IOPAD(0x9e8, PIN_OUTPUT_PULLUP | MUX_MODE7) /* emu1.gpio3[8] */
- >;
- };
-
- tps65910_pins: pinmux_tps65910_pins {
- pinctrl-single,pins = <
- AM33XX_IOPAD(0x878, PIN_INPUT_PULLUP | MUX_MODE7) /* gpmc_ben1.gpio1[28] */
- >;
- };
-
tca6416_pins: pinmux_tca6416_pins {
pinctrl-single,pins = <
AM33XX_IOPAD(0x9b4, PIN_INPUT_PULLUP | MUX_MODE7) /* xdma_event_intr1.gpio0[20] tca6416 stuff */
>;
};
- i2c1_pins: pinmux_i2c1_pins {
- pinctrl-single,pins = <
- AM33XX_IOPAD(0x958, PIN_INPUT | MUX_MODE2) /* spi0_d1.i2c1_sda_mux3 */
- AM33XX_IOPAD(0x95c, PIN_INPUT | MUX_MODE2) /* spi0_cs0.i2c1_scl_mux3 */
- >;
- };
dcan1_pins: pinmux_dcan1_pins {
pinctrl-single,pins = <
@@ -98,19 +34,12 @@
>;
};
- uart0_pins: pinmux_uart0_pins {
- pinctrl-single,pins = <
- AM33XX_IOPAD(0x970, PIN_INPUT_PULLUP | MUX_MODE0) /* uart0_rxd.uart0_rxd */
- AM33XX_IOPAD(0x974, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* uart0_txd.uart0_txd */
- >;
- };
-
uart1_pins: pinmux_uart1_pins {
pinctrl-single,pins = <
AM33XX_IOPAD(0x980, PIN_INPUT | MUX_MODE0) /* uart1_rxd */
AM33XX_IOPAD(0x984, PIN_INPUT | MUX_MODE0) /* uart1_txd */
- AM33XX_IOPAD(0x978, PIN_INPUT_PULLDOWN | MUX_MODE7) /* uart1_ctsn, INPUT | MODE0 */
- AM33XX_IOPAD(0x97c, PIN_OUTPUT_PULLDOWN | MUX_MODE7) /* uart1_rtsn, OUTPUT | MODE0 */
+ AM33XX_IOPAD(0x978, PIN_INPUT_PULLDOWN | MUX_MODE0) /* uart1_ctsn */
+ AM33XX_IOPAD(0x97c, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* uart1_rtsn */
AM33XX_IOPAD(0x8e0, PIN_OUTPUT_PULLDOWN | MUX_MODE7) /* lcd_vsync.gpio2[22] DTR */
AM33XX_IOPAD(0x8e4, PIN_INPUT_PULLDOWN | MUX_MODE7) /* lcd_hsync.gpio2[23] DSR */
AM33XX_IOPAD(0x8e8, PIN_INPUT_PULLDOWN | MUX_MODE7) /* lcd_pclk.gpio2[24] DCD */
@@ -122,8 +51,8 @@
pinctrl-single,pins = <
AM33XX_IOPAD(0x950, PIN_INPUT | MUX_MODE1) /* spi0_sclk.uart2_rxd_mux3 */
AM33XX_IOPAD(0x954, PIN_OUTPUT | MUX_MODE1) /* spi0_d0.uart2_txd_mux3 */
- AM33XX_IOPAD(0x988, PIN_INPUT_PULLDOWN | MUX_MODE7) /* i2c0_sda.uart2_ctsn_mux0 */
- AM33XX_IOPAD(0x98c, PIN_OUTPUT_PULLDOWN | MUX_MODE7) /* i2c0_scl.uart2_rtsn_mux0 */
+ AM33XX_IOPAD(0x988, PIN_INPUT_PULLDOWN | MUX_MODE2) /* i2c0_sda.uart2_ctsn_mux0 */
+ AM33XX_IOPAD(0x98c, PIN_OUTPUT_PULLDOWN | MUX_MODE2) /* i2c0_scl.uart2_rtsn_mux0 */
AM33XX_IOPAD(0x830, PIN_OUTPUT_PULLDOWN | MUX_MODE7) /* gpmc_ad12.gpio1[12] DTR */
AM33XX_IOPAD(0x834, PIN_INPUT_PULLDOWN | MUX_MODE7) /* gpmc_ad13.gpio1[13] DSR */
AM33XX_IOPAD(0x838, PIN_INPUT_PULLDOWN | MUX_MODE7) /* gpmc_ad14.gpio1[14] DCD */
@@ -133,151 +62,6 @@
>;
};
- cpsw_default: cpsw_default {
- pinctrl-single,pins = <
- /* Slave 1 */
- AM33XX_IOPAD(0x90c, PIN_INPUT_PULLDOWN | MUX_MODE1) /* mii1_crs.rmii1_crs_dv */
- AM33XX_IOPAD(0x914, PIN_OUTPUT_PULLDOWN | MUX_MODE1) /* mii1_tx_en.rmii1_txen */
- AM33XX_IOPAD(0x924, PIN_OUTPUT_PULLDOWN | MUX_MODE1) /* mii1_txd1.rmii1_txd1 */
- AM33XX_IOPAD(0x928, PIN_OUTPUT_PULLDOWN | MUX_MODE1) /* mii1_txd0.rmii1_txd0 */
- AM33XX_IOPAD(0x93c, PIN_INPUT_PULLDOWN | MUX_MODE1) /* mii1_rxd1.rmii1_rxd1 */
- AM33XX_IOPAD(0x940, PIN_INPUT_PULLDOWN | MUX_MODE1) /* mii1_rxd0.rmii1_rxd0 */
- AM33XX_IOPAD(0x944, PIN_INPUT_PULLDOWN | MUX_MODE0) /* rmii1_ref_clk.rmii1_refclk */
-
-
- /* Slave 2 */
- AM33XX_IOPAD(0x840, PIN_OUTPUT_PULLDOWN | MUX_MODE2) /* gpmc_a0.rgmii2_tctl */
- AM33XX_IOPAD(0x844, PIN_INPUT_PULLDOWN | MUX_MODE2) /* gpmc_a1.rgmii2_rctl */
- AM33XX_IOPAD(0x848, PIN_OUTPUT_PULLDOWN | MUX_MODE2) /* gpmc_a2.rgmii2_td3 */
- AM33XX_IOPAD(0x84c, PIN_OUTPUT_PULLDOWN | MUX_MODE2) /* gpmc_a3.rgmii2_td2 */
- AM33XX_IOPAD(0x850, PIN_OUTPUT_PULLDOWN | MUX_MODE2) /* gpmc_a4.rgmii2_td1 */
- AM33XX_IOPAD(0x854, PIN_OUTPUT_PULLDOWN | MUX_MODE2) /* gpmc_a5.rgmii2_td0 */
- AM33XX_IOPAD(0x858, PIN_OUTPUT_PULLDOWN | MUX_MODE2) /* gpmc_a6.rgmii2_tclk */
- AM33XX_IOPAD(0x85c, PIN_INPUT_PULLDOWN | MUX_MODE2) /* gpmc_a7.rgmii2_rclk */
- AM33XX_IOPAD(0x860, PIN_INPUT_PULLDOWN | MUX_MODE2) /* gpmc_a8.rgmii2_rd3 */
- AM33XX_IOPAD(0x864, PIN_INPUT_PULLDOWN | MUX_MODE2) /* gpmc_a9.rgmii2_rd2 */
- AM33XX_IOPAD(0x868, PIN_INPUT_PULLDOWN | MUX_MODE2) /* gpmc_a10.rgmii2_rd1 */
- AM33XX_IOPAD(0x86c, PIN_INPUT_PULLDOWN | MUX_MODE2) /* gpmc_a11.rgmii2_rd0 */
- >;
- };
-
- cpsw_sleep: cpsw_sleep {
- pinctrl-single,pins = <
- /* Slave 1 reset value */
- AM33XX_IOPAD(0x90c, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x914, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x924, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x928, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x93c, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x940, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x944, PIN_INPUT_PULLDOWN | MUX_MODE7)
-
- /* Slave 2 reset value*/
- AM33XX_IOPAD(0x840, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x844, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x848, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x84c, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x850, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x854, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x858, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x85c, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x860, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x864, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x868, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x86c, PIN_INPUT_PULLDOWN | MUX_MODE7)
- >;
- };
-
- davinci_mdio_default: davinci_mdio_default {
- pinctrl-single,pins = <
- /* MDIO */
- AM33XX_IOPAD(0x948, PIN_INPUT_PULLUP | SLEWCTRL_FAST | MUX_MODE0) /* mdio_data.mdio_data */
- AM33XX_IOPAD(0x94c, PIN_OUTPUT_PULLUP | MUX_MODE0) /* mdio_clk.mdio_clk */
- >;
- };
-
- davinci_mdio_sleep: davinci_mdio_sleep {
- pinctrl-single,pins = <
- /* MDIO reset value */
- AM33XX_IOPAD(0x948, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x94c, PIN_INPUT_PULLDOWN | MUX_MODE7)
- >;
- };
-
- nandflash_pins_s0: nandflash_pins_s0 {
- pinctrl-single,pins = <
- AM33XX_IOPAD(0x800, PIN_INPUT_PULLUP | MUX_MODE0) /* gpmc_ad0.gpmc_ad0 */
- AM33XX_IOPAD(0x804, PIN_INPUT_PULLUP | MUX_MODE0) /* gpmc_ad1.gpmc_ad1 */
- AM33XX_IOPAD(0x808, PIN_INPUT_PULLUP | MUX_MODE0) /* gpmc_ad2.gpmc_ad2 */
- AM33XX_IOPAD(0x80c, PIN_INPUT_PULLUP | MUX_MODE0) /* gpmc_ad3.gpmc_ad3 */
- AM33XX_IOPAD(0x810, PIN_INPUT_PULLUP | MUX_MODE0) /* gpmc_ad4.gpmc_ad4 */
- AM33XX_IOPAD(0x814, PIN_INPUT_PULLUP | MUX_MODE0) /* gpmc_ad5.gpmc_ad5 */
- AM33XX_IOPAD(0x818, PIN_INPUT_PULLUP | MUX_MODE0) /* gpmc_ad6.gpmc_ad6 */
- AM33XX_IOPAD(0x81c, PIN_INPUT_PULLUP | MUX_MODE0) /* gpmc_ad7.gpmc_ad7 */
- AM33XX_IOPAD(0x870, PIN_INPUT_PULLUP | MUX_MODE0) /* gpmc_wait0.gpmc_wait0 */
- AM33XX_IOPAD(0x874, PIN_INPUT_PULLUP | MUX_MODE7) /* gpmc_wpn.gpio0_30 */
- AM33XX_IOPAD(0x87c, PIN_OUTPUT | MUX_MODE0) /* gpmc_csn0.gpmc_csn0 */
- AM33XX_IOPAD(0x890, PIN_OUTPUT | MUX_MODE0) /* gpmc_advn_ale.gpmc_advn_ale */
- AM33XX_IOPAD(0x894, PIN_OUTPUT | MUX_MODE0) /* gpmc_oen_ren.gpmc_oen_ren */
- AM33XX_IOPAD(0x898, PIN_OUTPUT | MUX_MODE0) /* gpmc_wen.gpmc_wen */
- AM33XX_IOPAD(0x89c, PIN_OUTPUT | MUX_MODE0) /* gpmc_be0n_cle.gpmc_be0n_cle */
- >;
- };
-};
-
-&elm {
- status = "okay";
-};
-
-&gpmc {
- pinctrl-names = "default";
- pinctrl-0 = <&nandflash_pins_s0>;
- ranges = <0 0 0x08000000 0x10000000>; /* CS0: NAND */
- status = "okay";
-
- nand@0,0 {
- compatible = "ti,omap2-nand";
- reg = <0 0 4>; /* CS0, offset 0, IO size 4 */
- interrupt-parent = <&gpmc>;
- interrupts = <0 IRQ_TYPE_NONE>, /* fifoevent */
- <1 IRQ_TYPE_NONE>; /* termcount */
- nand-bus-width = <8>;
- ti,nand-ecc-opt = "bch8";
- ti,nand-xfer-type = "polled";
-
- gpmc,device-nand = "true";
- gpmc,device-width = <1>;
- gpmc,sync-clk-ps = <0>;
- gpmc,cs-on-ns = <0>;
- gpmc,cs-rd-off-ns = <44>;
- gpmc,cs-wr-off-ns = <44>;
- gpmc,adv-on-ns = <6>;
- gpmc,adv-rd-off-ns = <34>;
- gpmc,adv-wr-off-ns = <44>;
- gpmc,we-on-ns = <0>;
- gpmc,we-off-ns = <40>;
- gpmc,oe-on-ns = <0>;
- gpmc,oe-off-ns = <54>;
- gpmc,access-ns = <64>;
- gpmc,rd-cycle-ns = <82>;
- gpmc,wr-cycle-ns = <82>;
- gpmc,bus-turnaround-ns = <0>;
- gpmc,cycle2cycle-delay-ns = <0>;
- gpmc,clk-activation-ns = <0>;
- gpmc,wr-access-ns = <40>;
- gpmc,wr-data-mux-bus-ns = <0>;
-
- #address-cells = <1>;
- #size-cells = <1>;
- elm_id = <&elm>;
- };
-};
-
-&uart0 {
- pinctrl-names = "default";
- pinctrl-0 = <&uart0_pins>;
-
- status = "okay";
};

&uart1 {
@@ -287,8 +71,6 @@
dsr-gpios = <&gpio2 23 GPIO_ACTIVE_LOW>;
dcd-gpios = <&gpio2 24 GPIO_ACTIVE_LOW>;
rng-gpios = <&gpio2 25 GPIO_ACTIVE_LOW>;
- cts-gpios = <&gpio0 12 GPIO_ACTIVE_LOW>;
- rts-gpios = <&gpio0 13 GPIO_ACTIVE_LOW>;
status = "okay";
};
@@ -300,35 +82,11 @@
dsr-gpios = <&gpio1 13 GPIO_ACTIVE_LOW>;
dcd-gpios = <&gpio1 14 GPIO_ACTIVE_LOW>;
rng-gpios = <&gpio1 15 GPIO_ACTIVE_LOW>;
- cts-gpios = <&gpio3 5 GPIO_ACTIVE_LOW>;
- rts-gpios = <&gpio3 6 GPIO_ACTIVE_LOW>;
status = "okay";
};

&i2c1 {
- pinctrl-names = "default";
- pinctrl-0 = <&i2c1_pins>;
-
- status = "okay";
- clock-frequency = <400000>;
-
- tps: tps@2d {
- reg = <0x2d>;
- gpio-controller;
- #gpio-cells = <2>;
- interrupt-parent = <&gpio1>;
- interrupts = <28 GPIO_ACTIVE_LOW>;
- pinctrl-names = "default";
- pinctrl-0 = <&tps65910_pins>;
- };
-
- at24@50 {
- compatible = "at24,24c02";
- pagesize = <8>;
- reg = <0x50>;
- };
-
tca6416: gpio@20 {
compatible = "ti,tca6416";
reg = <0x20>;
@@ -341,14 +99,6 @@
};
};

-&usb {
- status = "okay";
-};
-
-&usb_ctrl_mod {
- status = "okay";
-};
-
&usb0_phy {
status = "okay";
};
@@ -367,108 +117,6 @@
dr_mode = "otg";
};

-&cppi41dma {
- status = "okay";
-};
-
-#include "tps65910.dtsi"
-
-&tps {
- vcc1-supply = <&vbat>;
- vcc2-supply = <&vbat>;
- vcc3-supply = <&vbat>;
- vcc4-supply = <&vbat>;
- vcc5-supply = <&vbat>;
- vcc6-supply = <&vbat>;
- vcc7-supply = <&vbat>;
- vccio-supply = <&vbat>;
-
- ti,en-ck32k-xtal = <1>;
-
- regulators {
- vrtc_reg: regulator@0 {
- regulator-always-on;
- };
-
- vio_reg: regulator@1 {
- regulator-always-on;
- };
-
- vdd1_reg: regulator@2 {
- /* VDD_MPU voltage limits 0.95V - 1.26V with +/-4% tolerance */
- regulator-name = "vdd_mpu";
- regulator-min-microvolt = <912500>;
- regulator-max-microvolt = <1312500>;
- regulator-boot-on;
- regulator-always-on;
- };
-
- vdd2_reg: regulator@3 {
- /* VDD_CORE voltage limits 0.95V - 1.1V with +/-4% tolerance */
- regulator-name = "vdd_core";
- regulator-min-microvolt = <912500>;
- regulator-max-microvolt = <1150000>;
- regulator-boot-on;
- regulator-always-on;
- };
-
- vdd3_reg: regulator@4 {
- regulator-always-on;
- };
-
- vdig1_reg: regulator@5 {
- regulator-always-on;
- };
-
- vdig2_reg: regulator@6 {
- regulator-always-on;
- };
-
- vpll_reg: regulator@7 {
- regulator-always-on;
- };
-
- vdac_reg: regulator@8 {
- regulator-always-on;
- };
-
- vaux1_reg: regulator@9 {
- regulator-always-on;
- };
-
- vaux2_reg: regulator@10 {
- regulator-always-on;
- };
-
- vaux33_reg: regulator@11 {
- regulator-always-on;
- };
-
- vmmc_reg: regulator@12 {
- regulator-min-microvolt = <1800000>;
- regulator-max-microvolt = <3300000>;
- regulator-always-on;
- };
- };
-};
-
-&mac {
- pinctrl-names = "default", "sleep";
- pinctrl-0 = <&cpsw_default>;
- pinctrl-1 = <&cpsw_sleep>;
- dual_emac = <1>;
-
- status = "okay";
-};
-
-&davinci_mdio {
- pinctrl-names = "default", "sleep";
- pinctrl-0 = <&davinci_mdio_default>;
- pinctrl-1 = <&davinci_mdio_sleep>;
-
- status = "okay";
-};
-
&cpsw_emac0 {
phy-mode = "rmii";
dual_emac_res_vlan = <1>;
@@ -488,42 +136,6 @@
rmii-clock-ext = <1>;
};

-&mmc1 {
- vmmc-supply = <&vmmc_reg>;
- status = "okay";
-};
-
-&mmc2 {
- status = "okay";
- vmmc-supply = <&wl12xx_vmmc>;
- ti,non-removable;
- bus-width = <4>;
- cap-power-off-card;
- pinctrl-names = "default";
- pinctrl-0 = <&mmc2_pins>;
-
- #address-cells = <1>;
- #size-cells = <0>;
- wlcore: wlcore@2 {
- compatible = "ti,wl1835";
- reg = <2>;
- interrupt-parent = <&gpio3>;
- interrupts = <7 IRQ_TYPE_LEVEL_HIGH>;
- };
-};
-
-&sham {
- status = "okay";
-};
-
-&aes {
- status = "okay";
-};
-
-&gpio0 {
- ti,no-reset-on-init;
-};
-
&dcan1 {
pinctrl-names = "default";
pinctrl-0 = <&dcan1_pins>;
diff --git a/arch/arm/boot/dts/am335x-baltos.dtsi b/arch/arm/boot/dts/am335x-baltos.dtsi
new file mode 100644
index 000000000000..c8609d8d2c55
--- /dev/null
+++ b/arch/arm/boot/dts/am335x-baltos.dtsi
@@ -0,0 +1,408 @@
+/*
+ * Copyright (C) 2012 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+/*
+ * VScom OnRISC
+ * http://www.vscom.de
+ */
+
+#include "am33xx.dtsi"
+#include <dt-bindings/pwm/pwm.h>
+#include <dt-bindings/interrupt-controller/irq.h>
+
+/ {
+ compatible = "vscom,onrisc", "ti,am33xx";
+
+ cpus {
+ cpu@0 {
+ cpu0-supply = <&vdd1_reg>;
+ };
+ };
+
+ memory {
+ device_type = "memory";
+ reg = <0x80000000 0x10000000>; /* 256 MB */
+ };
+
+ vbat: fixedregulator@0 {
+ compatible = "regulator-fixed";
+ regulator-name = "vbat";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ regulator-boot-on;
+ };
+
+ wl12xx_vmmc: fixedregulator@2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&wl12xx_gpio>;
+ compatible = "regulator-fixed";
+ regulator-name = "vwl1271";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ gpio = <&gpio3 8 0>;
+ startup-delay-us = <70000>;
+ enable-active-high;
+ };
+};
+
+&am33xx_pinmux {
+ mmc2_pins: pinmux_mmc2_pins {
+ pinctrl-single,pins = <
+ AM33XX_IOPAD(0x820, PIN_INPUT_PULLUP | MUX_MODE2) /* gpmc_ad8.mmc1_dat0_mux0 */
+ AM33XX_IOPAD(0x824, PIN_INPUT_PULLUP | MUX_MODE2) /* gpmc_ad9.mmc1_dat1_mux0 */
+ AM33XX_IOPAD(0x828, PIN_INPUT_PULLUP | MUX_MODE2) /* gpmc_ad10.mmc1_dat2_mux0 */
+ AM33XX_IOPAD(0x82c, PIN_INPUT_PULLUP | MUX_MODE2) /* gpmc_ad11.mmc1_dat3_mux0 */
+ AM33XX_IOPAD(0x880, PIN_INPUT_PULLUP | MUX_MODE2) /* gpmc_csn1.mmc1_clk_mux0 */
+ AM33XX_IOPAD(0x884, PIN_INPUT_PULLUP | MUX_MODE2) /* gpmc_csn2.mmc1_cmd_mux0 */
+ AM33XX_IOPAD(0x9e4, PIN_INPUT_PULLUP | MUX_MODE7) /* emu0.gpio3[7] */
+ >;
+ };
+
+ wl12xx_gpio: pinmux_wl12xx_gpio {
+ pinctrl-single,pins = <
+ AM33XX_IOPAD(0x9e8, PIN_OUTPUT_PULLUP | MUX_MODE7) /* emu1.gpio3[8] */
+ >;
+ };
+
+ tps65910_pins: pinmux_tps65910_pins {
+ pinctrl-single,pins = <
+ AM33XX_IOPAD(0x878, PIN_INPUT_PULLUP | MUX_MODE7) /* gpmc_ben1.gpio1[28] */
+ >;
+ };
+
+ i2c1_pins: pinmux_i2c1_pins {
+ pinctrl-single,pins = <
+ AM33XX_IOPAD(0x958, PIN_INPUT | MUX_MODE2) /* spi0_d1.i2c1_sda_mux3 */
+ AM33XX_IOPAD(0x95c, PIN_INPUT | MUX_MODE2) /* spi0_cs0.i2c1_scl_mux3 */
+ >;
+ };
+
+ uart0_pins: pinmux_uart0_pins {
+ pinctrl-single,pins = <
+ AM33XX_IOPAD(0x970, PIN_INPUT_PULLUP | MUX_MODE0) /* uart0_rxd.uart0_rxd */
+ AM33XX_IOPAD(0x974, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* uart0_txd.uart0_txd */
+ >;
+ };
+
+ cpsw_default: cpsw_default {
+ pinctrl-single,pins = <
+ /* Slave 1 */
+ AM33XX_IOPAD(0x90c, PIN_INPUT_PULLDOWN | MUX_MODE1) /* mii1_crs.rmii1_crs_dv */
+ AM33XX_IOPAD(0x914, PIN_OUTPUT_PULLDOWN | MUX_MODE1) /* mii1_tx_en.rmii1_txen */
+ AM33XX_IOPAD(0x924, PIN_OUTPUT_PULLDOWN | MUX_MODE1) /* mii1_txd1.rmii1_txd1 */
+ AM33XX_IOPAD(0x928, PIN_OUTPUT_PULLDOWN | MUX_MODE1) /* mii1_txd0.rmii1_txd0 */
+ AM33XX_IOPAD(0x93c, PIN_INPUT_PULLDOWN | MUX_MODE1) /* mii1_rxd1.rmii1_rxd1 */
+ AM33XX_IOPAD(0x940, PIN_INPUT_PULLDOWN | MUX_MODE1) /* mii1_rxd0.rmii1_rxd0 */
+ AM33XX_IOPAD(0x944, PIN_INPUT_PULLDOWN | MUX_MODE0) /* rmii1_ref_clk.rmii1_refclk */
+
+
+ /* Slave 2 */
+ AM33XX_IOPAD(0x840, PIN_OUTPUT_PULLDOWN | MUX_MODE2) /* gpmc_a0.rgmii2_tctl */
+ AM33XX_IOPAD(0x844, PIN_INPUT_PULLDOWN | MUX_MODE2) /* gpmc_a1.rgmii2_rctl */
+ AM33XX_IOPAD(0x848, PIN_OUTPUT_PULLDOWN | MUX_MODE2) /* gpmc_a2.rgmii2_td3 */
+ AM33XX_IOPAD(0x84c, PIN_OUTPUT_PULLDOWN | MUX_MODE2) /* gpmc_a3.rgmii2_td2 */
+ AM33XX_IOPAD(0x850, PIN_OUTPUT_PULLDOWN | MUX_MODE2) /* gpmc_a4.rgmii2_td1 */
+ AM33XX_IOPAD(0x854, PIN_OUTPUT_PULLDOWN | MUX_MODE2) /* gpmc_a5.rgmii2_td0 */
+ AM33XX_IOPAD(0x858, PIN_OUTPUT_PULLDOWN | MUX_MODE2) /* gpmc_a6.rgmii2_tclk */
+ AM33XX_IOPAD(0x85c, PIN_INPUT_PULLDOWN | MUX_MODE2) /* gpmc_a7.rgmii2_rclk */
+ AM33XX_IOPAD(0x860, PIN_INPUT_PULLDOWN | MUX_MODE2) /* gpmc_a8.rgmii2_rd3 */
+ AM33XX_IOPAD(0x864, PIN_INPUT_PULLDOWN | MUX_MODE2) /* gpmc_a9.rgmii2_rd2 */
+ AM33XX_IOPAD(0x868, PIN_INPUT_PULLDOWN | MUX_MODE2) /* gpmc_a10.rgmii2_rd1 */
+ AM33XX_IOPAD(0x86c, PIN_INPUT_PULLDOWN | MUX_MODE2) /* gpmc_a11.rgmii2_rd0 */
+ >;
+ };
+
+ cpsw_sleep: cpsw_sleep {
+ pinctrl-single,pins = <
+ /* Slave 1 reset value */
+ AM33XX_IOPAD(0x90c, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x914, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x924, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x928, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x93c, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x940, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x944, PIN_INPUT_PULLDOWN | MUX_MODE7)
+
+ /* Slave 2 reset value*/
+ AM33XX_IOPAD(0x840, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x844, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x848, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x84c, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x850, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x854, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x858, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x85c, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x860, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x864, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x868, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x86c, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ >;
+ };
+
+ davinci_mdio_default: davinci_mdio_default {
+ pinctrl-single,pins = <
+ /* MDIO */
+ AM33XX_IOPAD(0x948, PIN_INPUT_PULLUP | SLEWCTRL_FAST | MUX_MODE0) /* mdio_data.mdio_data */
+ AM33XX_IOPAD(0x94c, PIN_OUTPUT_PULLUP | MUX_MODE0) /* mdio_clk.mdio_clk */
+ >;
+ };
+
+ davinci_mdio_sleep: davinci_mdio_sleep {
+ pinctrl-single,pins = <
+ /* MDIO reset value */
+ AM33XX_IOPAD(0x948, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x94c, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ >;
+ };
+
+ nandflash_pins_s0: nandflash_pins_s0 {
+ pinctrl-single,pins = <
+ AM33XX_IOPAD(0x800, PIN_INPUT_PULLUP | MUX_MODE0) /* gpmc_ad0.gpmc_ad0 */
+ AM33XX_IOPAD(0x804, PIN_INPUT_PULLUP | MUX_MODE0) /* gpmc_ad1.gpmc_ad1 */
+ AM33XX_IOPAD(0x808, PIN_INPUT_PULLUP | MUX_MODE0) /* gpmc_ad2.gpmc_ad2 */
+ AM33XX_IOPAD(0x80c, PIN_INPUT_PULLUP | MUX_MODE0) /* gpmc_ad3.gpmc_ad3 */
+ AM33XX_IOPAD(0x810, PIN_INPUT_PULLUP | MUX_MODE0) /* gpmc_ad4.gpmc_ad4 */
+ AM33XX_IOPAD(0x814, PIN_INPUT_PULLUP | MUX_MODE0) /* gpmc_ad5.gpmc_ad5 */
+ AM33XX_IOPAD(0x818, PIN_INPUT_PULLUP | MUX_MODE0) /* gpmc_ad6.gpmc_ad6 */
+ AM33XX_IOPAD(0x81c, PIN_INPUT_PULLUP | MUX_MODE0) /* gpmc_ad7.gpmc_ad7 */
+ AM33XX_IOPAD(0x870, PIN_INPUT_PULLUP | MUX_MODE0) /* gpmc_wait0.gpmc_wait0 */
+ AM33XX_IOPAD(0x874, PIN_INPUT_PULLUP | MUX_MODE7) /* gpmc_wpn.gpio0_30 */
+ AM33XX_IOPAD(0x87c, PIN_OUTPUT | MUX_MODE0) /* gpmc_csn0.gpmc_csn0 */
+ AM33XX_IOPAD(0x890, PIN_OUTPUT | MUX_MODE0) /* gpmc_advn_ale.gpmc_advn_ale */
+ AM33XX_IOPAD(0x894, PIN_OUTPUT | MUX_MODE0) /* gpmc_oen_ren.gpmc_oen_ren */
+ AM33XX_IOPAD(0x898, PIN_OUTPUT | MUX_MODE0) /* gpmc_wen.gpmc_wen */
+ AM33XX_IOPAD(0x89c, PIN_OUTPUT | MUX_MODE0) /* gpmc_be0n_cle.gpmc_be0n_cle */
+ >;
+ };
+};
+
+&elm {
+ status = "okay";
+};
+
+&gpmc {
+ pinctrl-names = "default";
+ pinctrl-0 = <&nandflash_pins_s0>;
+ ranges = <0 0 0x08000000 0x10000000>; /* CS0: NAND */
+ status = "okay";
+
+ nand@0,0 {
+ compatible = "ti,omap2-nand";
+ reg = <0 0 4>; /* CS0, offset 0, IO size 4 */
+ interrupt-parent = <&gpmc>;
+ interrupts = <0 IRQ_TYPE_NONE>, /* fifoevent */
+ <1 IRQ_TYPE_NONE>; /* termcount */
+ rb-gpios = <&gpmc 0 GPIO_ACTIVE_HIGH>; /* gpmc_wait0 */
+ nand-bus-width = <8>;
+ ti,nand-ecc-opt = "bch8";
+ ti,nand-xfer-type = "polled";
+
+ gpmc,device-nand = "true";
+ gpmc,device-width = <1>;
+ gpmc,sync-clk-ps = <0>;
+ gpmc,cs-on-ns = <0>;
+ gpmc,cs-rd-off-ns = <44>;
+ gpmc,cs-wr-off-ns = <44>;
+ gpmc,adv-on-ns = <6>;
+ gpmc,adv-rd-off-ns = <34>;
+ gpmc,adv-wr-off-ns = <44>;
+ gpmc,we-on-ns = <0>;
+ gpmc,we-off-ns = <40>;
+ gpmc,oe-on-ns = <0>;
+ gpmc,oe-off-ns = <54>;
+ gpmc,access-ns = <64>;
+ gpmc,rd-cycle-ns = <82>;
+ gpmc,wr-cycle-ns = <82>;
+ gpmc,bus-turnaround-ns = <0>;
+ gpmc,cycle2cycle-delay-ns = <0>;
+ gpmc,clk-activation-ns = <0>;
+ gpmc,wr-access-ns = <40>;
+ gpmc,wr-data-mux-bus-ns = <0>;
+
+ #address-cells = <1>;
+ #size-cells = <1>;
+ elm_id = <&elm>;
+ };
+};
+
+&uart0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart0_pins>;
+
+ status = "okay";
+};
+
+&i2c1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c1_pins>;
+
+ status = "okay";
+ clock-frequency = <400000>;
+
+ tps: tps@2d {
+ reg = <0x2d>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-parent = <&gpio1>;
+ interrupts = <28 GPIO_ACTIVE_LOW>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&tps65910_pins>;
+ };
+
+ at24@50 {
+ compatible = "at24,24c02";
+ pagesize = <8>;
+ reg = <0x50>;
+ };
+};
+
+&usb {
+ status = "okay";
+};
+
+&usb_ctrl_mod {
+ status = "okay";
+};
+
+&cppi41dma {
+ status = "okay";
+};
+
+#include "tps65910.dtsi"
+
+&tps {
+ vcc1-supply = <&vbat>;
+ vcc2-supply = <&vbat>;
+ vcc3-supply = <&vbat>;
+ vcc4-supply = <&vbat>;
+ vcc5-supply = <&vbat>;
+ vcc6-supply = <&vbat>;
+ vcc7-supply = <&vbat>;
+ vccio-supply = <&vbat>;
+
+ ti,en-ck32k-xtal = <1>;
+
+ regulators {
+ vrtc_reg: regulator@0 {
+ regulator-always-on;
+ };
+
+ vio_reg: regulator@1 {
+ regulator-always-on;
+ };
+
+ vdd1_reg: regulator@2 {
+ /* VDD_MPU voltage limits 0.95V - 1.26V with +/-4% tolerance */
+ regulator-name = "vdd_mpu";
+ regulator-min-microvolt = <912500>;
+ regulator-max-microvolt = <1312500>;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ vdd2_reg: regulator@3 {
+ /* VDD_CORE voltage limits 0.95V - 1.1V with +/-4% tolerance */
+ regulator-name = "vdd_core";
+ regulator-min-microvolt = <912500>;
+ regulator-max-microvolt = <1150000>;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ vdd3_reg: regulator@4 {
+ regulator-always-on;
+ };
+
+ vdig1_reg: regulator@5 {
+ regulator-always-on;
+ };
+
+ vdig2_reg: regulator@6 {
+ regulator-always-on;
+ };
+
+ vpll_reg: regulator@7 {
+ regulator-always-on;
+ };
+
+ vdac_reg: regulator@8 {
+ regulator-always-on;
+ };
+
+ vaux1_reg: regulator@9 {
+ regulator-always-on;
+ };
+
+ vaux2_reg: regulator@10 {
+ regulator-always-on;
+ };
+
+ vaux33_reg: regulator@11 {
+ regulator-always-on;
+ };
+
+ vmmc_reg: regulator@12 {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ };
+ };
+};
+
+&mac {
+ pinctrl-names = "default", "sleep";
+ pinctrl-0 = <&cpsw_default>;
+ pinctrl-1 = <&cpsw_sleep>;
+ dual_emac = <1>;
+
+ status = "okay";
+};
+
+&davinci_mdio {
+ pinctrl-names = "default", "sleep";
+ pinctrl-0 = <&davinci_mdio_default>;
+ pinctrl-1 = <&davinci_mdio_sleep>;
+
+ status = "okay";
+};
+
+&mmc1 {
+ vmmc-supply = <&vmmc_reg>;
+ status = "okay";
+};
+
+&mmc2 {
+ status = "okay";
+ vmmc-supply = <&wl12xx_vmmc>;
+ ti,non-removable;
+ bus-width = <4>;
+ cap-power-off-card;
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc2_pins>;
+
+ #address-cells = <1>;
+ #size-cells = <0>;
+ wlcore: wlcore@2 {
+ compatible = "ti,wl1835";
+ reg = <2>;
+ interrupt-parent = <&gpio3>;
+ interrupts = <7 IRQ_TYPE_LEVEL_HIGH>;
+ };
+};
+
+&sham {
+ status = "okay";
+};
+
+&aes {
+ status = "okay";
+};
+
+&gpio0 {
+ ti,no-reset-on-init;
+};
diff --git a/arch/arm/boot/dts/am335x-chiliboard.dts b/arch/arm/boot/dts/am335x-chiliboard.dts
index 15d47ab28865..2a624b3c9258 100644
--- a/arch/arm/boot/dts/am335x-chiliboard.dts
+++ b/arch/arm/boot/dts/am335x-chiliboard.dts
@@ -35,6 +35,59 @@
};

&am33xx_pinmux {
+ uart0_pins: pinmux_uart0_pins {
+ pinctrl-single,pins = <
+ AM33XX_IOPAD(0x970, PIN_INPUT_PULLUP | MUX_MODE0) /* uart0_rxd.uart0_rxd */
+ AM33XX_IOPAD(0x974, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* uart0_txd.uart0_txd */
+ >;
+ };
+
+ cpsw_default: cpsw_default {
+ pinctrl-single,pins = <
+ /* Slave 1 */
+ AM33XX_IOPAD(0x90c, PIN_INPUT_PULLDOWN | MUX_MODE1) /* mii1_crs.rmii1_crs */
+ AM33XX_IOPAD(0x910, PIN_INPUT_PULLUP | MUX_MODE1) /* mii1_rxerr.rmii1_rxerr */
+ AM33XX_IOPAD(0x914, PIN_OUTPUT_PULLDOWN | MUX_MODE1) /* mii1_txen.rmii1_txen */
+ AM33XX_IOPAD(0x924, PIN_OUTPUT_PULLDOWN | MUX_MODE1) /* mii1_txd1.rmii1_txd1 */
+ AM33XX_IOPAD(0x928, PIN_OUTPUT_PULLDOWN | MUX_MODE1) /* mii1_txd0.rmii1_txd0 */
+ AM33XX_IOPAD(0x93c, PIN_INPUT_PULLUP | MUX_MODE1) /* mii1_rxd1.rmii1_rxd1 */
+ AM33XX_IOPAD(0x940, PIN_INPUT_PULLUP | MUX_MODE1) /* mii1_rxd0.rmii1_rxd0 */
+ AM33XX_IOPAD(0x944, PIN_INPUT_PULLDOWN | MUX_MODE0) /* rmii1_ref_clk.rmii_ref_clk */
+ >;
+ };
+
+ cpsw_sleep: cpsw_sleep {
+ pinctrl-single,pins = <
+ /* Slave 1 reset value */
+ AM33XX_IOPAD(0x90c, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x910, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x914, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x918, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x924, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x928, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x93c, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x940, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x944, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ >;
+ };
+
+ davinci_mdio_default: davinci_mdio_default {
+ pinctrl-single,pins = <
+ /* mdio_data.mdio_data */
+ AM33XX_IOPAD(0x948, PIN_INPUT_PULLUP | SLEWCTRL_FAST | MUX_MODE0)
+ /* mdio_clk.mdio_clk */
+ AM33XX_IOPAD(0x94c, PIN_OUTPUT_PULLUP | MUX_MODE0)
+ >;
+ };
+
+ davinci_mdio_sleep: davinci_mdio_sleep {
+ pinctrl-single,pins = <
+ /* MDIO reset value */
+ AM33XX_IOPAD(0x948, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ AM33XX_IOPAD(0x94c, PIN_INPUT_PULLDOWN | MUX_MODE7)
+ >;
+ };
+
usb1_drvvbus: usb1_drvvbus {
pinctrl-single,pins = <
AM33XX_IOPAD(0xa34, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* usb1_drvvbus.usb1_drvvbus */
@@ -61,12 +114,34 @@
};
};

+&uart0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart0_pins>;
+
+ status = "okay";
+};
+
&ldo4_reg {
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
};

/* Ethernet */
+&mac {
+ slaves = <1>;
+ pinctrl-names = "default", "sleep";
+ pinctrl-0 = <&cpsw_default>;
+ pinctrl-1 = <&cpsw_sleep>;
+ status = "okay";
+};
+
+&davinci_mdio {
+ pinctrl-names = "default", "sleep";
+ pinctrl-0 = <&davinci_mdio_default>;
+ pinctrl-1 = <&davinci_mdio_sleep>;
+ status = "okay";
+};
+
&cpsw_emac0 {
phy_id = <&davinci_mdio>, <0>;
phy-mode = "rmii";
diff --git a/arch/arm/boot/dts/am335x-chilisom.dtsi b/arch/arm/boot/dts/am335x-chilisom.dtsi
index 95461a28bc98..1d647358f1c1 100644
--- a/arch/arm/boot/dts/am335x-chilisom.dtsi
+++ b/arch/arm/boot/dts/am335x-chilisom.dtsi
@@ -35,59 +35,6 @@
>;
};

- uart0_pins: pinmux_uart0_pins {
- pinctrl-single,pins = <
- AM33XX_IOPAD(0x970, PIN_INPUT_PULLUP | MUX_MODE0) /* uart0_rxd.uart0_rxd */
- AM33XX_IOPAD(0x974, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* uart0_txd.uart0_txd */
- >;
- };
-
- cpsw_default: cpsw_default {
- pinctrl-single,pins = <
- /* Slave 1 */
- AM33XX_IOPAD(0x90c, PIN_INPUT_PULLDOWN | MUX_MODE1) /* mii1_crs.rmii1_crs */
- AM33XX_IOPAD(0x910, PIN_INPUT_PULLUP | MUX_MODE1) /* mii1_rxerr.rmii1_rxerr */
- AM33XX_IOPAD(0x914, PIN_OUTPUT_PULLDOWN | MUX_MODE1) /* mii1_txen.rmii1_txen */
- AM33XX_IOPAD(0x924, PIN_OUTPUT_PULLDOWN | MUX_MODE1) /* mii1_txd1.rmii1_txd1 */
- AM33XX_IOPAD(0x928, PIN_OUTPUT_PULLDOWN | MUX_MODE1) /* mii1_txd0.rmii1_txd0 */
- AM33XX_IOPAD(0x93c, PIN_INPUT_PULLUP | MUX_MODE1) /* mii1_rxd1.rmii1_rxd1 */
- AM33XX_IOPAD(0x940, PIN_INPUT_PULLUP | MUX_MODE1) /* mii1_rxd0.rmii1_rxd0 */
- AM33XX_IOPAD(0x944, PIN_INPUT_PULLDOWN | MUX_MODE0) /* rmii1_ref_clk.rmii_ref_clk */
- >;
- };
-
- cpsw_sleep: cpsw_sleep {
- pinctrl-single,pins = <
- /* Slave 1 reset value */
- AM33XX_IOPAD(0x90c, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x910, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x914, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x918, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x924, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x928, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x93c, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x940, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x944, PIN_INPUT_PULLDOWN | MUX_MODE7)
- >;
- };
-
- davinci_mdio_default: davinci_mdio_default {
- pinctrl-single,pins = <
- /* mdio_data.mdio_data */
- AM33XX_IOPAD(0x948, PIN_INPUT_PULLUP | SLEWCTRL_FAST | MUX_MODE0)
- /* mdio_clk.mdio_clk */
- AM33XX_IOPAD(0x94c, PIN_OUTPUT_PULLUP | MUX_MODE0)
- >;
- };
-
- davinci_mdio_sleep: davinci_mdio_sleep {
- pinctrl-single,pins = <
- /* MDIO reset value */
- AM33XX_IOPAD(0x948, PIN_INPUT_PULLDOWN | MUX_MODE7)
- AM33XX_IOPAD(0x94c, PIN_INPUT_PULLDOWN | MUX_MODE7)
- >;
- };
-
nandflash_pins: nandflash_pins {
pinctrl-single,pins = <
AM33XX_IOPAD(0x800, PIN_INPUT_PULLDOWN | MUX_MODE0) /* gpmc_ad0.gpmc_ad0 */
@@ -109,13 +56,6 @@
};
};

-&uart0 {
- pinctrl-names = "default";
- pinctrl-0 = <&uart0_pins>;
-
- status = "okay";
-};
-
&i2c0 {
pinctrl-names = "default";
pinctrl-0 = <&i2c0_pins>;
@@ -182,20 +122,8 @@
};
};

-/* Ethernet MAC */
-&mac {
- slaves = <1>;
- pinctrl-names = "default", "sleep";
- pinctrl-0 = <&cpsw_default>;
- pinctrl-1 = <&cpsw_sleep>;
- status = "okay";
-};
-
-&davinci_mdio {
- pinctrl-names = "default", "sleep";
- pinctrl-0 = <&davinci_mdio_default>;
- pinctrl-1 = <&davinci_mdio_sleep>;
- status = "okay";
+&rtc {
+ system-power-controller;
};

/* NAND Flash */
@@ -214,6 +142,7 @@
interrupt-parent = <&gpmc>;
interrupts = <0 IRQ_TYPE_NONE>, /* fifoevent */
<1 IRQ_TYPE_NONE>; /* termcount */
+ rb-gpios = <&gpmc 0 GPIO_ACTIVE_HIGH>; /* gpmc_wait0 */
ti,nand-ecc-opt = "bch8";
ti,elm-id = <&elm>;
nand-bus-width = <8>;
diff --git a/arch/arm/boot/dts/am335x-cm-t335.dts b/arch/arm/boot/dts/am335x-cm-t335.dts
index e835644c5054..817b1dec0683 100644
--- a/arch/arm/boot/dts/am335x-cm-t335.dts
+++ b/arch/arm/boot/dts/am335x-cm-t335.dts
@@ -411,6 +411,7 @@ status = "okay";
interrupt-parent = <&gpmc>;
interrupts = <0 IRQ_TYPE_NONE>, /* fifoevent */
<1 IRQ_TYPE_NONE>; /* termcount */
+ rb-gpios = <&gpmc 0 GPIO_ACTIVE_HIGH>; /* gpmc_wait0 */
ti,nand-ecc-opt = "bch8";
ti,elm-id = <&elm>;
nand-bus-width = <8>;
diff --git a/arch/arm/boot/dts/am335x-evm.dts b/arch/arm/boot/dts/am335x-evm.dts
index 28b916210271..516673bb023d 100644
--- a/arch/arm/boot/dts/am335x-evm.dts
+++ b/arch/arm/boot/dts/am335x-evm.dts
@@ -524,6 +524,7 @@
interrupt-parent = <&gpmc>;
interrupts = <0 IRQ_TYPE_NONE>, /* fifoevent */
<1 IRQ_TYPE_NONE>; /* termcount */
+ rb-gpios = <&gpmc 0 GPIO_ACTIVE_HIGH>; /* gpmc_wait0 */
ti,nand-ecc-opt = "bch8";
ti,elm-id = <&elm>;
nand-bus-width = <8>;
diff --git a/arch/arm/boot/dts/am335x-icev2.dts b/arch/arm/boot/dts/am335x-icev2.dts
new file mode 100644
index 000000000000..e271013e78a6
--- /dev/null
+++ b/arch/arm/boot/dts/am335x-icev2.dts
@@ -0,0 +1,306 @@
+/*
+ * Copyright (C) 2016 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+/*
+ * AM335x ICE V2 board
+ * http://www.ti.com/tool/tmdsice3359
+ */
+
+/dts-v1/;
+
+#include "am33xx.dtsi"
+
+/ {
+ model = "TI AM3359 ICE-V2";
+ compatible = "ti,am3359-icev2", "ti,am33xx";
+
+ memory {
+ device_type = "memory";
+ reg = <0x80000000 0x10000000>; /* 256 MB */
+ };
+
+ vbat: fixedregulator@0 {
+ compatible = "regulator-fixed";
+ regulator-name = "vbat";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ regulator-boot-on;
+ };
+
+ vtt_fixed: fixedregulator@1 {
+ compatible = "regulator-fixed";
+ regulator-name = "vtt";
+ regulator-min-microvolt = <1500000>;
+ regulator-max-microvolt = <1500000>;
+ gpio = <&gpio0 18 GPIO_ACTIVE_HIGH>;
+ regulator-always-on;
+ regulator-boot-on;
+ enable-active-high;
+ };
+
+ leds@0 {
+ compatible = "gpio-leds";
+
+ led@0 {
+ label = "out0";
+ gpios = <&tpic2810 0 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+
+ led@1 {
+ label = "out1";
+ gpios = <&tpic2810 1 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+
+ led@2 {
+ label = "out2";
+ gpios = <&tpic2810 2 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+
+ led@3 {
+ label = "out3";
+ gpios = <&tpic2810 3 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+
+ led@4 {
+ label = "out4";
+ gpios = <&tpic2810 4 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+
+ led@5 {
+ label = "out5";
+ gpios = <&tpic2810 5 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+
+ led@6 {
+ label = "out6";
+ gpios = <&tpic2810 6 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+
+ led@7 {
+ label = "out7";
+ gpios = <&tpic2810 7 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+ };
+
+ /* Tricolor status LEDs */
+ leds@1 {
+ compatible = "gpio-leds";
+ pinctrl-names = "default";
+ pinctrl-0 = <&user_leds>;
+
+ led@0 {
+ label = "status0:red:cpu0";
+ gpios = <&gpio0 17 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ linux,default-trigger = "cpu0";
+ };
+
+ led@1 {
+ label = "status0:green:usr";
+ gpios = <&gpio0 16 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+
+ led@2 {
+ label = "status0:yellow:usr";
+ gpios = <&gpio3 9 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+
+ led@3 {
+ label = "status1:red:mmc0";
+ gpios = <&gpio1 30 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ linux,default-trigger = "mmc0";
+ };
+
+ led@4 {
+ label = "status1:green:usr";
+ gpios = <&gpio0 20 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+
+ led@5 {
+ label = "status1:yellow:usr";
+ gpios = <&gpio0 19 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+ };
+};
+
+&am33xx_pinmux {
+ user_leds: user_leds {
+ pinctrl-single,pins = <
+ AM33XX_IOPAD(0x91c, PIN_OUTPUT | MUX_MODE7) /* (J18) gmii1_txd3.gpio0[16] */
+ AM33XX_IOPAD(0x920, PIN_OUTPUT | MUX_MODE7) /* (K15) gmii1_txd2.gpio0[17] */
+ AM33XX_IOPAD(0x9b0, PIN_OUTPUT | MUX_MODE7) /* (A15) xdma_event_intr0.gpio0[19] */
+ AM33XX_IOPAD(0x9b4, PIN_OUTPUT | MUX_MODE7) /* (D14) xdma_event_intr1.gpio0[20] */
+ AM33XX_IOPAD(0x880, PIN_OUTPUT | MUX_MODE7) /* (U9) gpmc_csn1.gpio1[30] */
+ AM33XX_IOPAD(0x92c, PIN_OUTPUT | MUX_MODE7) /* (K18) gmii1_txclk.gpio3[9] */
+ >;
+ };
+
+ mmc0_pins_default: mmc0_pins_default {
+ pinctrl-single,pins = <
+ AM33XX_IOPAD(0x8f0, PIN_INPUT_PULLUP | MUX_MODE0) /* (F17) mmc0_dat3.mmc0_dat3 */
+ AM33XX_IOPAD(0x8f4, PIN_INPUT_PULLUP | MUX_MODE0) /* (F18) mmc0_dat2.mmc0_dat2 */
+ AM33XX_IOPAD(0x8f8, PIN_INPUT_PULLUP | MUX_MODE0) /* (G15) mmc0_dat1.mmc0_dat1 */
+ AM33XX_IOPAD(0x8fc, PIN_INPUT_PULLUP | MUX_MODE0) /* (G16) mmc0_dat0.mmc0_dat0 */
+ AM33XX_IOPAD(0x900, PIN_INPUT_PULLUP | MUX_MODE0) /* (G17) mmc0_clk.mmc0_clk */
+ AM33XX_IOPAD(0x904, PIN_INPUT_PULLUP | MUX_MODE0) /* (G18) mmc0_cmd.mmc0_cmd */
+ AM33XX_IOPAD(0x960, PIN_INPUT_PULLUP | MUX_MODE5) /* (C15) spi0_cs1.mmc0_sdcd */
+ >;
+ };
+
+ i2c0_pins_default: i2c0_pins_default {
+ pinctrl-single,pins = <
+ AM33XX_IOPAD(0x988, PIN_INPUT | MUX_MODE0) /* (C17) I2C0_SDA.I2C0_SDA */
+ AM33XX_IOPAD(0x98c, PIN_INPUT | MUX_MODE0) /* (C16) I2C0_SCL.I2C0_SCL */
+ >;
+ };
+
+ spi0_pins_default: spi0_pins_default {
+ pinctrl-single,pins = <
+ AM33XX_IOPAD(0x950, PIN_INPUT_PULLUP | MUX_MODE0) /* (A17) spi0_sclk.spi0_sclk */
+ AM33XX_IOPAD(0x954, PIN_INPUT_PULLUP | MUX_MODE0) /* (B17) spi0_d0.spi0_d0 */
+ AM33XX_IOPAD(0x958, PIN_INPUT_PULLUP | MUX_MODE0) /* (B16) spi0_d1.spi0_d1 */
+ AM33XX_IOPAD(0x95c, PIN_INPUT_PULLUP | MUX_MODE0) /* (A16) spi0_cs0.spi0_cs0 */
+ >;
+ };
+
+ uart3_pins_default: uart3_pins_default {
+ pinctrl-single,pins = <
+ AM33XX_IOPAD(0x934, PIN_INPUT_PULLUP | MUX_MODE1) /* (L17) gmii1_rxd3.uart3_rxd */
+ AM33XX_IOPAD(0x938, PIN_OUTPUT_PULLUP | MUX_MODE1) /* (L16) gmii1_rxd2.uart3_txd */
+ >;
+ };
+};
+
+&i2c0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c0_pins_default>;
+
+ status = "okay";
+ clock-frequency = <400000>;
+
+ tps: power-controller@2d {
+ reg = <0x2d>;
+ };
+
+ tpic2810: gpio@60 {
+ compatible = "ti,tpic2810";
+ reg = <0x60>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ };
+};
+
+#include "tps65910.dtsi"
+
+&tps {
+ vcc1-supply = <&vbat>;
+ vcc2-supply = <&vbat>;
+ vcc3-supply = <&vbat>;
+ vcc4-supply = <&vbat>;
+ vcc5-supply = <&vbat>;
+ vcc6-supply = <&vbat>;
+ vcc7-supply = <&vbat>;
+ vccio-supply = <&vbat>;
+
+ regulators {
+ vrtc_reg: regulator@0 {
+ regulator-always-on;
+ };
+
+ vio_reg: regulator@1 {
+ regulator-always-on;
+ };
+
+ vdd1_reg: regulator@2 {
+ regulator-name = "vdd_mpu";
+ regulator-min-microvolt = <912500>;
+ regulator-max-microvolt = <1326000>;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ vdd2_reg: regulator@3 {
+ regulator-name = "vdd_core";
+ regulator-min-microvolt = <912500>;
+ regulator-max-microvolt = <1144000>;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ vdd3_reg: regulator@4 {
+ regulator-always-on;
+ };
+
+ vdig1_reg: regulator@5 {
+ regulator-always-on;
+ };
+
+ vdig2_reg: regulator@6 {
+ regulator-always-on;
+ };
+
+ vpll_reg: regulator@7 {
+ regulator-always-on;
+ };
+
+ vdac_reg: regulator@8 {
+ regulator-always-on;
+ };
+
+ vaux1_reg: regulator@9 {
+ regulator-always-on;
+ };
+
+ vaux2_reg: regulator@10 {
+ regulator-always-on;
+ };
+
+ vaux33_reg: regulator@11 {
+ regulator-always-on;
+ };
+
+ vmmc_reg: regulator@12 {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ };
+ };
+};
+
+&mmc1 {
+ status = "okay";
+ vmmc-supply = <&vmmc_reg>;
+ bus-width = <4>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc0_pins_default>;
+};
+
+&gpio0 {
+ /* Do not idle the GPIO used for holding the VTT regulator */
+ ti,no-reset-on-init;
+ ti,no-idle-on-init;
+};
+
+&uart3 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart3_pins_default>;
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/am335x-igep0033.dtsi b/arch/arm/boot/dts/am335x-igep0033.dtsi
index 6c3a9bf3638a..df63484ef9b3 100644
--- a/arch/arm/boot/dts/am335x-igep0033.dtsi
+++ b/arch/arm/boot/dts/am335x-igep0033.dtsi
@@ -135,6 +135,7 @@
interrupt-parent = <&gpmc>;
interrupts = <0 IRQ_TYPE_NONE>, /* fifoevent */
<1 IRQ_TYPE_NONE>; /* termcount */
+ rb-gpios = <&gpmc 0 GPIO_ACTIVE_HIGH>; /* gpmc_wait0 */
nand-bus-width = <8>;
ti,nand-ecc-opt = "bch8";
gpmc,device-width = <1>;
diff --git a/arch/arm/boot/dts/am335x-phycore-som.dtsi b/arch/arm/boot/dts/am335x-phycore-som.dtsi
index d4b7f3bd553f..86f773165d5c 100644
--- a/arch/arm/boot/dts/am335x-phycore-som.dtsi
+++ b/arch/arm/boot/dts/am335x-phycore-som.dtsi
@@ -171,6 +171,7 @@
interrupt-parent = <&gpmc>;
interrupts = <0 IRQ_TYPE_NONE>, /* fifoevent */
<1 IRQ_TYPE_NONE>; /* termcount */
+ rb-gpios = <&gpmc 0 GPIO_ACTIVE_HIGH>; /* gpmc_wait0 */
nand-bus-width = <8>;
ti,nand-ecc-opt = "bch8";
gpmc,device-nand = "true";
diff --git a/arch/arm/boot/dts/am335x-shc.dts b/arch/arm/boot/dts/am335x-shc.dts
index 865de8500f1c..837d5b80ea1d 100644
--- a/arch/arm/boot/dts/am335x-shc.dts
+++ b/arch/arm/boot/dts/am335x-shc.dts
@@ -138,7 +138,7 @@
&epwmss1 {
status = "okay";
- ehrpwm1: ehrpwm@48302200 {
+ ehrpwm1: pwm@48302200 {
pinctrl-names = "default";
pinctrl-0 = <&ehrpwm1_pins>;
status = "okay";
diff --git a/arch/arm/boot/dts/am33xx-clocks.dtsi b/arch/arm/boot/dts/am33xx-clocks.dtsi
index afb4b3a7bab4..8d8319590cde 100644
--- a/arch/arm/boot/dts/am33xx-clocks.dtsi
+++ b/arch/arm/boot/dts/am33xx-clocks.dtsi
@@ -8,7 +8,7 @@
* published by the Free Software Foundation.
*/
&scm_clocks {
- sys_clkin_ck: sys_clkin_ck {
+ sys_clkin_ck: sys_clkin_ck@40 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&virt_19200000_ck>, <&virt_24000000_ck>, <&virt_25000000_ck>, <&virt_26000000_ck>;
@@ -163,7 +163,7 @@
clock-frequency = <12000000>;
};
- dpll_core_ck: dpll_core_ck {
+ dpll_core_ck: dpll_core_ck@490 {
#clock-cells = <0>;
compatible = "ti,am3-dpll-core-clock";
clocks = <&sys_clkin_ck>, <&sys_clkin_ck>;
@@ -176,7 +176,7 @@
clocks = <&dpll_core_ck>;
};
- dpll_core_m4_ck: dpll_core_m4_ck {
+ dpll_core_m4_ck: dpll_core_m4_ck@480 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -185,7 +185,7 @@
ti,index-starts-at-one;
};
- dpll_core_m5_ck: dpll_core_m5_ck {
+ dpll_core_m5_ck: dpll_core_m5_ck@484 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -194,7 +194,7 @@
ti,index-starts-at-one;
};
- dpll_core_m6_ck: dpll_core_m6_ck {
+ dpll_core_m6_ck: dpll_core_m6_ck@4d8 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -203,14 +203,14 @@
ti,index-starts-at-one;
};
- dpll_mpu_ck: dpll_mpu_ck {
+ dpll_mpu_ck: dpll_mpu_ck@488 {
#clock-cells = <0>;
compatible = "ti,am3-dpll-clock";
clocks = <&sys_clkin_ck>, <&sys_clkin_ck>;
reg = <0x0488>, <0x0420>, <0x042c>;
};
- dpll_mpu_m2_ck: dpll_mpu_m2_ck {
+ dpll_mpu_m2_ck: dpll_mpu_m2_ck@4a8 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_mpu_ck>;
@@ -219,14 +219,14 @@
ti,index-starts-at-one;
};
- dpll_ddr_ck: dpll_ddr_ck {
+ dpll_ddr_ck: dpll_ddr_ck@494 {
#clock-cells = <0>;
compatible = "ti,am3-dpll-no-gate-clock";
clocks = <&sys_clkin_ck>, <&sys_clkin_ck>;
reg = <0x0494>, <0x0434>, <0x0440>;
};
- dpll_ddr_m2_ck: dpll_ddr_m2_ck {
+ dpll_ddr_m2_ck: dpll_ddr_m2_ck@4a0 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_ddr_ck>;
@@ -243,14 +243,14 @@
clock-div = <2>;
};
- dpll_disp_ck: dpll_disp_ck {
+ dpll_disp_ck: dpll_disp_ck@498 {
#clock-cells = <0>;
compatible = "ti,am3-dpll-no-gate-clock";
clocks = <&sys_clkin_ck>, <&sys_clkin_ck>;
reg = <0x0498>, <0x0448>, <0x0454>;
};
- dpll_disp_m2_ck: dpll_disp_m2_ck {
+ dpll_disp_m2_ck: dpll_disp_m2_ck@4a4 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_disp_ck>;
@@ -260,14 +260,14 @@
ti,set-rate-parent;
};
- dpll_per_ck: dpll_per_ck {
+ dpll_per_ck: dpll_per_ck@48c {
#clock-cells = <0>;
compatible = "ti,am3-dpll-no-gate-j-type-clock";
clocks = <&sys_clkin_ck>, <&sys_clkin_ck>;
reg = <0x048c>, <0x0470>, <0x049c>;
};
- dpll_per_m2_ck: dpll_per_m2_ck {
+ dpll_per_m2_ck: dpll_per_m2_ck@4ac {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_ck>;
@@ -292,7 +292,7 @@
clock-div = <4>;
};
- cefuse_fck: cefuse_fck {
+ cefuse_fck: cefuse_fck@a20 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_clkin_ck>;
@@ -316,7 +316,7 @@
clock-div = <732>;
};
- clkdiv32k_ick: clkdiv32k_ick {
+ clkdiv32k_ick: clkdiv32k_ick@14c {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&clkdiv32k_ck>;
@@ -332,14 +332,14 @@
clock-div = <1>;
};
- pruss_ocp_gclk: pruss_ocp_gclk {
+ pruss_ocp_gclk: pruss_ocp_gclk@530 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&l3_gclk>, <&dpll_disp_m2_ck>;
reg = <0x0530>;
};
- mmu_fck: mmu_fck {
+ mmu_fck: mmu_fck@914 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll_core_m4_ck>;
@@ -347,56 +347,56 @@
reg = <0x0914>;
};
- timer1_fck: timer1_fck {
+ timer1_fck: timer1_fck@528 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin_ck>, <&clkdiv32k_ick>, <&tclkin_ck>, <&clk_rc32k_ck>, <&clk_32768_ck>;
reg = <0x0528>;
};
- timer2_fck: timer2_fck {
+ timer2_fck: timer2_fck@508 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sys_clkin_ck>, <&clkdiv32k_ick>;
reg = <0x0508>;
};
- timer3_fck: timer3_fck {
+ timer3_fck: timer3_fck@50c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sys_clkin_ck>, <&clkdiv32k_ick>;
reg = <0x050c>;
};
- timer4_fck: timer4_fck {
+ timer4_fck: timer4_fck@510 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sys_clkin_ck>, <&clkdiv32k_ick>;
reg = <0x0510>;
};
- timer5_fck: timer5_fck {
+ timer5_fck: timer5_fck@518 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sys_clkin_ck>, <&clkdiv32k_ick>;
reg = <0x0518>;
};
- timer6_fck: timer6_fck {
+ timer6_fck: timer6_fck@51c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sys_clkin_ck>, <&clkdiv32k_ick>;
reg = <0x051c>;
};
- timer7_fck: timer7_fck {
+ timer7_fck: timer7_fck@504 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sys_clkin_ck>, <&clkdiv32k_ick>;
reg = <0x0504>;
};
- usbotg_fck: usbotg_fck {
+ usbotg_fck: usbotg_fck@47c {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll_per_ck>;
@@ -412,7 +412,7 @@
clock-div = <2>;
};
- ieee5000_fck: ieee5000_fck {
+ ieee5000_fck: ieee5000_fck@e4 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll_core_m4_div2_ck>;
@@ -420,7 +420,7 @@
reg = <0x00e4>;
};
- wdt1_fck: wdt1_fck {
+ wdt1_fck: wdt1_fck@538 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&clk_rc32k_ck>, <&clkdiv32k_ick>;
@@ -483,21 +483,21 @@
clock-div = <2>;
};
- cpsw_cpts_rft_clk: cpsw_cpts_rft_clk {
+ cpsw_cpts_rft_clk: cpsw_cpts_rft_clk@520 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&dpll_core_m5_ck>, <&dpll_core_m4_ck>;
reg = <0x0520>;
};
- gpio0_dbclk_mux_ck: gpio0_dbclk_mux_ck {
+ gpio0_dbclk_mux_ck: gpio0_dbclk_mux_ck@53c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&clk_rc32k_ck>, <&clk_32768_ck>, <&clkdiv32k_ick>;
reg = <0x053c>;
};
- gpio0_dbclk: gpio0_dbclk {
+ gpio0_dbclk: gpio0_dbclk@408 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&gpio0_dbclk_mux_ck>;
@@ -505,7 +505,7 @@
reg = <0x0408>;
};
- gpio1_dbclk: gpio1_dbclk {
+ gpio1_dbclk: gpio1_dbclk@ac {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&clkdiv32k_ick>;
@@ -513,7 +513,7 @@
reg = <0x00ac>;
};
- gpio2_dbclk: gpio2_dbclk {
+ gpio2_dbclk: gpio2_dbclk@b0 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&clkdiv32k_ick>;
@@ -521,7 +521,7 @@
reg = <0x00b0>;
};
- gpio3_dbclk: gpio3_dbclk {
+ gpio3_dbclk: gpio3_dbclk@b4 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&clkdiv32k_ick>;
@@ -529,7 +529,7 @@
reg = <0x00b4>;
};
- lcd_gclk: lcd_gclk {
+ lcd_gclk: lcd_gclk@534 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&dpll_disp_m2_ck>, <&dpll_core_m5_ck>, <&dpll_per_m2_ck>;
@@ -545,7 +545,7 @@
clock-div = <2>;
};
- gfx_fclk_clksel_ck: gfx_fclk_clksel_ck {
+ gfx_fclk_clksel_ck: gfx_fclk_clksel_ck@52c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&dpll_core_m4_ck>, <&dpll_per_m2_ck>;
@@ -553,7 +553,7 @@
reg = <0x052c>;
};
- gfx_fck_div_ck: gfx_fck_div_ck {
+ gfx_fck_div_ck: gfx_fck_div_ck@52c {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&gfx_fclk_clksel_ck>;
@@ -561,14 +561,14 @@
ti,max-div = <2>;
};
- sysclkout_pre_ck: sysclkout_pre_ck {
+ sysclkout_pre_ck: sysclkout_pre_ck@700 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&clk_32768_ck>, <&l3_gclk>, <&dpll_ddr_m2_ck>, <&dpll_per_m2_ck>, <&lcd_gclk>;
reg = <0x0700>;
};
- clkout2_div_ck: clkout2_div_ck {
+ clkout2_div_ck: clkout2_div_ck@700 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&sysclkout_pre_ck>;
@@ -577,7 +577,7 @@
reg = <0x0700>;
};
- dbg_sysclk_ck: dbg_sysclk_ck {
+ dbg_sysclk_ck: dbg_sysclk_ck@414 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_clkin_ck>;
@@ -585,7 +585,7 @@
reg = <0x0414>;
};
- dbg_clka_ck: dbg_clka_ck {
+ dbg_clka_ck: dbg_clka_ck@414 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll_core_m4_ck>;
@@ -593,7 +593,7 @@
reg = <0x0414>;
};
- stm_pmd_clock_mux_ck: stm_pmd_clock_mux_ck {
+ stm_pmd_clock_mux_ck: stm_pmd_clock_mux_ck@414 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&dbg_sysclk_ck>, <&dbg_clka_ck>;
@@ -601,7 +601,7 @@
reg = <0x0414>;
};
- trace_pmd_clk_mux_ck: trace_pmd_clk_mux_ck {
+ trace_pmd_clk_mux_ck: trace_pmd_clk_mux_ck@414 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&dbg_sysclk_ck>, <&dbg_clka_ck>;
@@ -609,7 +609,7 @@
reg = <0x0414>;
};
- stm_clk_div_ck: stm_clk_div_ck {
+ stm_clk_div_ck: stm_clk_div_ck@414 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&stm_pmd_clock_mux_ck>;
@@ -619,7 +619,7 @@
ti,index-power-of-two;
};
- trace_clk_div_ck: trace_clk_div_ck {
+ trace_clk_div_ck: trace_clk_div_ck@414 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&trace_pmd_clk_mux_ck>;
@@ -629,7 +629,7 @@
ti,index-power-of-two;
};
- clkout2_ck: clkout2_ck {
+ clkout2_ck: clkout2_ck@700 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&clkout2_div_ck>;
diff --git a/arch/arm/boot/dts/am33xx.dtsi b/arch/arm/boot/dts/am33xx.dtsi
index 0467846b4cc3..52be48bbd2dd 100644
--- a/arch/arm/boot/dts/am33xx.dtsi
+++ b/arch/arm/boot/dts/am33xx.dtsi
@@ -688,7 +688,7 @@
status = "disabled";
};
- ehrpwm0: ehrpwm@48300200 {
+ ehrpwm0: pwm@48300200 {
compatible = "ti,am33xx-ehrpwm";
#pwm-cells = <3>;
reg = <0x48300200 0x80>;
@@ -718,7 +718,7 @@
status = "disabled";
};
- ehrpwm1: ehrpwm@48302200 {
+ ehrpwm1: pwm@48302200 {
compatible = "ti,am33xx-ehrpwm";
#pwm-cells = <3>;
reg = <0x48302200 0x80>;
@@ -748,7 +748,7 @@
status = "disabled";
};
- ehrpwm2: ehrpwm@48304200 {
+ ehrpwm2: pwm@48304200 {
compatible = "ti,am33xx-ehrpwm";
#pwm-cells = <3>;
reg = <0x48304200 0x80>;
@@ -868,6 +868,8 @@
#size-cells = <1>;
interrupt-controller;
#interrupt-cells = <2>;
+ gpio-controller;
+ #gpio-cells = <2>;
status = "disabled";
};
diff --git a/arch/arm/boot/dts/am35xx-clocks.dtsi b/arch/arm/boot/dts/am35xx-clocks.dtsi
index 18cc826e9db5..00dd1f091be5 100644
--- a/arch/arm/boot/dts/am35xx-clocks.dtsi
+++ b/arch/arm/boot/dts/am35xx-clocks.dtsi
@@ -8,7 +8,7 @@
* published by the Free Software Foundation.
*/
&scm_clocks {
- emac_ick: emac_ick {
+ emac_ick: emac_ick@32c {
#clock-cells = <0>;
compatible = "ti,am35xx-gate-clock";
clocks = <&ipss_ick>;
@@ -16,7 +16,7 @@
ti,bit-shift = <1>;
};
- emac_fck: emac_fck {
+ emac_fck: emac_fck@32c {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&rmii_ck>;
@@ -24,7 +24,7 @@
ti,bit-shift = <9>;
};
- vpfe_ick: vpfe_ick {
+ vpfe_ick: vpfe_ick@32c {
#clock-cells = <0>;
compatible = "ti,am35xx-gate-clock";
clocks = <&ipss_ick>;
@@ -32,7 +32,7 @@
ti,bit-shift = <2>;
};
- vpfe_fck: vpfe_fck {
+ vpfe_fck: vpfe_fck@32c {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&pclk_ck>;
@@ -40,7 +40,7 @@
ti,bit-shift = <10>;
};
- hsotgusb_ick_am35xx: hsotgusb_ick_am35xx {
+ hsotgusb_ick_am35xx: hsotgusb_ick_am35xx@32c {
#clock-cells = <0>;
compatible = "ti,am35xx-gate-clock";
clocks = <&ipss_ick>;
@@ -48,7 +48,7 @@
ti,bit-shift = <0>;
};
- hsotgusb_fck_am35xx: hsotgusb_fck_am35xx {
+ hsotgusb_fck_am35xx: hsotgusb_fck_am35xx@32c {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_ck>;
@@ -56,7 +56,7 @@
ti,bit-shift = <8>;
};
- hecc_ck: hecc_ck {
+ hecc_ck: hecc_ck@32c {
#clock-cells = <0>;
compatible = "ti,am35xx-gate-clock";
clocks = <&sys_ck>;
@@ -65,7 +65,7 @@
};
};
&cm_clocks {
- ipss_ick: ipss_ick {
+ ipss_ick: ipss_ick@a10 {
#clock-cells = <0>;
compatible = "ti,am35xx-interface-clock";
clocks = <&core_l3_ick>;
@@ -85,7 +85,7 @@
clock-frequency = <27000000>;
};
- uart4_ick_am35xx: uart4_ick_am35xx {
+ uart4_ick_am35xx: uart4_ick_am35xx@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -93,7 +93,7 @@
ti,bit-shift = <23>;
};
- uart4_fck_am35xx: uart4_fck_am35xx {
+ uart4_fck_am35xx: uart4_fck_am35xx@a00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&core_48m_fck>;
diff --git a/arch/arm/boot/dts/am4372.dtsi b/arch/arm/boot/dts/am4372.dtsi
index ba580a9da390..12fcde4d4d2e 100644
--- a/arch/arm/boot/dts/am4372.dtsi
+++ b/arch/arm/boot/dts/am4372.dtsi
@@ -679,7 +679,7 @@
status = "disabled";
};
- ehrpwm0: ehrpwm@48300200 {
+ ehrpwm0: pwm@48300200 {
compatible = "ti,am4372-ehrpwm","ti,am33xx-ehrpwm";
#pwm-cells = <3>;
reg = <0x48300200 0x80>;
@@ -705,7 +705,7 @@
status = "disabled";
};
- ehrpwm1: ehrpwm@48302200 {
+ ehrpwm1: pwm@48302200 {
compatible = "ti,am4372-ehrpwm","ti,am33xx-ehrpwm";
#pwm-cells = <3>;
reg = <0x48302200 0x80>;
@@ -731,7 +731,7 @@
status = "disabled";
};
- ehrpwm2: ehrpwm@48304200 {
+ ehrpwm2: pwm@48304200 {
compatible = "ti,am4372-ehrpwm","ti,am33xx-ehrpwm";
#pwm-cells = <3>;
reg = <0x48304200 0x80>;
@@ -749,7 +749,7 @@
ti,hwmods = "epwmss3";
status = "disabled";
- ehrpwm3: ehrpwm@48306200 {
+ ehrpwm3: pwm@48306200 {
compatible = "ti,am4372-ehrpwm","ti,am33xx-ehrpwm";
#pwm-cells = <3>;
reg = <0x48306200 0x80>;
@@ -767,7 +767,7 @@
ti,hwmods = "epwmss4";
status = "disabled";
- ehrpwm4: ehrpwm@48308200 {
+ ehrpwm4: pwm@48308200 {
compatible = "ti,am4372-ehrpwm","ti,am33xx-ehrpwm";
#pwm-cells = <3>;
reg = <0x48308200 0x80>;
@@ -785,7 +785,7 @@
ti,hwmods = "epwmss5";
status = "disabled";
- ehrpwm5: ehrpwm@4830a200 {
+ ehrpwm5: pwm@4830a200 {
compatible = "ti,am4372-ehrpwm","ti,am33xx-ehrpwm";
#pwm-cells = <3>;
reg = <0x4830a200 0x80>;
@@ -896,6 +896,8 @@
#size-cells = <1>;
interrupt-controller;
#interrupt-cells = <2>;
+ gpio-controller;
+ #gpio-cells = <2>;
status = "disabled";
};
diff --git a/arch/arm/boot/dts/am437x-gp-evm.dts b/arch/arm/boot/dts/am437x-gp-evm.dts
index 8889be1ca1c3..5bcd3aa025bc 100644
--- a/arch/arm/boot/dts/am437x-gp-evm.dts
+++ b/arch/arm/boot/dts/am437x-gp-evm.dts
@@ -119,7 +119,7 @@
clock-frequency = <32768>;
};
- sound0: sound@0 {
+ sound0: sound0 {
compatible = "simple-audio-card";
simple-audio-card,name = "AM437x-GP-EVM";
simple-audio-card,widgets =
@@ -817,6 +817,7 @@
interrupt-parent = <&gpmc>;
interrupts = <0 IRQ_TYPE_NONE>, /* fifoevent */
<1 IRQ_TYPE_NONE>; /* termcount */
+ rb-gpios = <&gpmc 0 GPIO_ACTIVE_HIGH>; /* gpmc_wait0 */
ti,nand-ecc-opt = "bch16";
ti,elm-id = <&elm>;
nand-bus-width = <8>;
diff --git a/arch/arm/boot/dts/am43x-epos-evm.dts b/arch/arm/boot/dts/am43x-epos-evm.dts
index d5dd72047a7e..3549b8c9ac49 100644
--- a/arch/arm/boot/dts/am43x-epos-evm.dts
+++ b/arch/arm/boot/dts/am43x-epos-evm.dts
@@ -107,7 +107,7 @@
default-brightness-level = <8>;
};
- sound0: sound@0 {
+ sound0: sound0 {
compatible = "simple-audio-card";
simple-audio-card,name = "AM43-EPOS-EVM";
simple-audio-card,widgets =
@@ -568,6 +568,7 @@
interrupt-parent = <&gpmc>;
interrupts = <0 IRQ_TYPE_NONE>, /* fifoevent */
<1 IRQ_TYPE_NONE>; /* termcount */
+ rb-gpios = <&gpmc 0 GPIO_ACTIVE_HIGH>; /* gpmc_wait0 */
ti,nand-ecc-opt = "bch16";
ti,elm-id = <&elm>;
nand-bus-width = <8>;
diff --git a/arch/arm/boot/dts/am43xx-clocks.dtsi b/arch/arm/boot/dts/am43xx-clocks.dtsi
index a38af2bfbfcf..7630ba1d89e4 100644
--- a/arch/arm/boot/dts/am43xx-clocks.dtsi
+++ b/arch/arm/boot/dts/am43xx-clocks.dtsi
@@ -8,7 +8,7 @@
* published by the Free Software Foundation.
*/
&scm_clocks {
- sys_clkin_ck: sys_clkin_ck {
+ sys_clkin_ck: sys_clkin_ck@40 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sysboot_freq_sel_ck>, <&crystal_freq_sel_ck>;
@@ -16,7 +16,7 @@
reg = <0x0040>;
};
- crystal_freq_sel_ck: crystal_freq_sel_ck {
+ crystal_freq_sel_ck: crystal_freq_sel_ck@40 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&virt_19200000_ck>, <&virt_24000000_ck>, <&virt_25000000_ck>, <&virt_26000000_ck>;
@@ -104,7 +104,7 @@
clock-div = <1>;
};
- ehrpwm0_tbclk: ehrpwm0_tbclk {
+ ehrpwm0_tbclk: ehrpwm0_tbclk@664 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l4ls_gclk>;
@@ -112,7 +112,7 @@
reg = <0x0664>;
};
- ehrpwm1_tbclk: ehrpwm1_tbclk {
+ ehrpwm1_tbclk: ehrpwm1_tbclk@664 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l4ls_gclk>;
@@ -120,7 +120,7 @@
reg = <0x0664>;
};
- ehrpwm2_tbclk: ehrpwm2_tbclk {
+ ehrpwm2_tbclk: ehrpwm2_tbclk@664 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l4ls_gclk>;
@@ -128,7 +128,7 @@
reg = <0x0664>;
};
- ehrpwm3_tbclk: ehrpwm3_tbclk {
+ ehrpwm3_tbclk: ehrpwm3_tbclk@664 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l4ls_gclk>;
@@ -136,7 +136,7 @@
reg = <0x0664>;
};
- ehrpwm4_tbclk: ehrpwm4_tbclk {
+ ehrpwm4_tbclk: ehrpwm4_tbclk@664 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l4ls_gclk>;
@@ -144,7 +144,7 @@
reg = <0x0664>;
};
- ehrpwm5_tbclk: ehrpwm5_tbclk {
+ ehrpwm5_tbclk: ehrpwm5_tbclk@664 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l4ls_gclk>;
@@ -195,7 +195,7 @@
clock-frequency = <26000000>;
};
- dpll_core_ck: dpll_core_ck {
+ dpll_core_ck: dpll_core_ck@2d20 {
#clock-cells = <0>;
compatible = "ti,am3-dpll-core-clock";
clocks = <&sys_clkin_ck>, <&sys_clkin_ck>;
@@ -208,7 +208,7 @@
clocks = <&dpll_core_ck>;
};
- dpll_core_m4_ck: dpll_core_m4_ck {
+ dpll_core_m4_ck: dpll_core_m4_ck@2d38 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -219,7 +219,7 @@
ti,invert-autoidle-bit;
};
- dpll_core_m5_ck: dpll_core_m5_ck {
+ dpll_core_m5_ck: dpll_core_m5_ck@2d3c {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -230,7 +230,7 @@
ti,invert-autoidle-bit;
};
- dpll_core_m6_ck: dpll_core_m6_ck {
+ dpll_core_m6_ck: dpll_core_m6_ck@2d40 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -241,14 +241,14 @@
ti,invert-autoidle-bit;
};
- dpll_mpu_ck: dpll_mpu_ck {
+ dpll_mpu_ck: dpll_mpu_ck@2d60 {
#clock-cells = <0>;
compatible = "ti,am3-dpll-clock";
clocks = <&sys_clkin_ck>, <&sys_clkin_ck>;
reg = <0x2d60>, <0x2d64>, <0x2d6c>;
};
- dpll_mpu_m2_ck: dpll_mpu_m2_ck {
+ dpll_mpu_m2_ck: dpll_mpu_m2_ck@2d70 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_mpu_ck>;
@@ -267,14 +267,14 @@
clock-div = <2>;
};
- dpll_ddr_ck: dpll_ddr_ck {
+ dpll_ddr_ck: dpll_ddr_ck@2da0 {
#clock-cells = <0>;
compatible = "ti,am3-dpll-clock";
clocks = <&sys_clkin_ck>, <&sys_clkin_ck>;
reg = <0x2da0>, <0x2da4>, <0x2dac>;
};
- dpll_ddr_m2_ck: dpll_ddr_m2_ck {
+ dpll_ddr_m2_ck: dpll_ddr_m2_ck@2db0 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_ddr_ck>;
@@ -285,14 +285,14 @@
ti,invert-autoidle-bit;
};
- dpll_disp_ck: dpll_disp_ck {
+ dpll_disp_ck: dpll_disp_ck@2e20 {
#clock-cells = <0>;
compatible = "ti,am3-dpll-clock";
clocks = <&sys_clkin_ck>, <&sys_clkin_ck>;
reg = <0x2e20>, <0x2e24>, <0x2e2c>;
};
- dpll_disp_m2_ck: dpll_disp_m2_ck {
+ dpll_disp_m2_ck: dpll_disp_m2_ck@2e30 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_disp_ck>;
@@ -304,14 +304,14 @@
ti,set-rate-parent;
};
- dpll_per_ck: dpll_per_ck {
+ dpll_per_ck: dpll_per_ck@2de0 {
#clock-cells = <0>;
compatible = "ti,am3-dpll-j-type-clock";
clocks = <&sys_clkin_ck>, <&sys_clkin_ck>;
reg = <0x2de0>, <0x2de4>, <0x2dec>;
};
- dpll_per_m2_ck: dpll_per_m2_ck {
+ dpll_per_m2_ck: dpll_per_m2_ck@2df0 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_ck>;
@@ -354,7 +354,7 @@
clock-div = <732>;
};
- clkdiv32k_ick: clkdiv32k_ick {
+ clkdiv32k_ick: clkdiv32k_ick@2a38 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&clkdiv32k_ck>;
@@ -370,7 +370,7 @@
clock-div = <1>;
};
- pruss_ocp_gclk: pruss_ocp_gclk {
+ pruss_ocp_gclk: pruss_ocp_gclk@4248 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sysclk_div>, <&dpll_disp_m2_ck>;
@@ -383,56 +383,56 @@
clock-frequency = <32768>;
};
- timer1_fck: timer1_fck {
+ timer1_fck: timer1_fck@4200 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin_ck>, <&clkdiv32k_ick>, <&tclkin_ck>, <&clk_rc32k_ck>, <&clk_32768_ck>, <&clk_32k_tpm_ck>;
reg = <0x4200>;
};
- timer2_fck: timer2_fck {
+ timer2_fck: timer2_fck@4204 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sys_clkin_ck>, <&clkdiv32k_ick>;
reg = <0x4204>;
};
- timer3_fck: timer3_fck {
+ timer3_fck: timer3_fck@4208 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sys_clkin_ck>, <&clkdiv32k_ick>;
reg = <0x4208>;
};
- timer4_fck: timer4_fck {
+ timer4_fck: timer4_fck@420c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sys_clkin_ck>, <&clkdiv32k_ick>;
reg = <0x420c>;
};
- timer5_fck: timer5_fck {
+ timer5_fck: timer5_fck@4210 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sys_clkin_ck>, <&clkdiv32k_ick>;
reg = <0x4210>;
};
- timer6_fck: timer6_fck {
+ timer6_fck: timer6_fck@4214 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sys_clkin_ck>, <&clkdiv32k_ick>;
reg = <0x4214>;
};
- timer7_fck: timer7_fck {
+ timer7_fck: timer7_fck@4218 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sys_clkin_ck>, <&clkdiv32k_ick>;
reg = <0x4218>;
};
- wdt1_fck: wdt1_fck {
+ wdt1_fck: wdt1_fck@422c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&clk_rc32k_ck>, <&clkdiv32k_ick>;
@@ -487,14 +487,14 @@
clock-div = <2>;
};
- cpsw_cpts_rft_clk: cpsw_cpts_rft_clk {
+ cpsw_cpts_rft_clk: cpsw_cpts_rft_clk@4238 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sysclk_div>, <&dpll_core_m5_ck>, <&dpll_disp_m2_ck>;
reg = <0x4238>;
};
- dpll_clksel_mac_clk: dpll_clksel_mac_clk {
+ dpll_clksel_mac_clk: dpll_clksel_mac_clk@4234 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_m5_ck>;
@@ -509,14 +509,14 @@
clock-frequency = <32768>;
};
- gpio0_dbclk_mux_ck: gpio0_dbclk_mux_ck {
+ gpio0_dbclk_mux_ck: gpio0_dbclk_mux_ck@4240 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&clk_rc32k_ck>, <&clk_32768_ck>, <&clkdiv32k_ick>, <&clk_32k_mosc_ck>, <&clk_32k_tpm_ck>;
reg = <0x4240>;
};
- gpio0_dbclk: gpio0_dbclk {
+ gpio0_dbclk: gpio0_dbclk@2b68 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&gpio0_dbclk_mux_ck>;
@@ -524,7 +524,7 @@
reg = <0x2b68>;
};
- gpio1_dbclk: gpio1_dbclk {
+ gpio1_dbclk: gpio1_dbclk@8c78 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&clkdiv32k_ick>;
@@ -532,7 +532,7 @@
reg = <0x8c78>;
};
- gpio2_dbclk: gpio2_dbclk {
+ gpio2_dbclk: gpio2_dbclk@8c80 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&clkdiv32k_ick>;
@@ -540,7 +540,7 @@
reg = <0x8c80>;
};
- gpio3_dbclk: gpio3_dbclk {
+ gpio3_dbclk: gpio3_dbclk@8c88 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&clkdiv32k_ick>;
@@ -548,7 +548,7 @@
reg = <0x8c88>;
};
- gpio4_dbclk: gpio4_dbclk {
+ gpio4_dbclk: gpio4_dbclk@8c90 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&clkdiv32k_ick>;
@@ -556,7 +556,7 @@
reg = <0x8c90>;
};
- gpio5_dbclk: gpio5_dbclk {
+ gpio5_dbclk: gpio5_dbclk@8c98 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&clkdiv32k_ick>;
@@ -572,7 +572,7 @@
clock-div = <2>;
};
- gfx_fclk_clksel_ck: gfx_fclk_clksel_ck {
+ gfx_fclk_clksel_ck: gfx_fclk_clksel_ck@423c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sysclk_div>, <&dpll_per_m2_ck>;
@@ -580,7 +580,7 @@
reg = <0x423c>;
};
- gfx_fck_div_ck: gfx_fck_div_ck {
+ gfx_fck_div_ck: gfx_fck_div_ck@423c {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&gfx_fclk_clksel_ck>;
@@ -588,7 +588,7 @@
ti,max-div = <2>;
};
- disp_clk: disp_clk {
+ disp_clk: disp_clk@4244 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&dpll_disp_m2_ck>, <&dpll_core_m5_ck>, <&dpll_per_m2_ck>;
@@ -596,14 +596,14 @@
ti,set-rate-parent;
};
- dpll_extdev_ck: dpll_extdev_ck {
+ dpll_extdev_ck: dpll_extdev_ck@2e60 {
#clock-cells = <0>;
compatible = "ti,am3-dpll-clock";
clocks = <&sys_clkin_ck>, <&sys_clkin_ck>;
reg = <0x2e60>, <0x2e64>, <0x2e6c>;
};
- dpll_extdev_m2_ck: dpll_extdev_m2_ck {
+ dpll_extdev_m2_ck: dpll_extdev_m2_ck@2e70 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_extdev_ck>;
@@ -614,14 +614,14 @@
ti,invert-autoidle-bit;
};
- mux_synctimer32k_ck: mux_synctimer32k_ck {
+ mux_synctimer32k_ck: mux_synctimer32k_ck@4230 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&clk_32768_ck>, <&clk_32k_tpm_ck>, <&clkdiv32k_ick>;
reg = <0x4230>;
};
- synctimer_32kclk: synctimer_32kclk {
+ synctimer_32kclk: synctimer_32kclk@2a30 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&mux_synctimer32k_ck>;
@@ -629,28 +629,28 @@
reg = <0x2a30>;
};
- timer8_fck: timer8_fck {
+ timer8_fck: timer8_fck@421c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sys_clkin_ck>, <&clkdiv32k_ick>, <&clk_32k_tpm_ck>;
reg = <0x421c>;
};
- timer9_fck: timer9_fck {
+ timer9_fck: timer9_fck@4220 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sys_clkin_ck>, <&clkdiv32k_ick>, <&clk_32k_tpm_ck>;
reg = <0x4220>;
};
- timer10_fck: timer10_fck {
+ timer10_fck: timer10_fck@4224 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sys_clkin_ck>, <&clkdiv32k_ick>, <&clk_32k_tpm_ck>;
reg = <0x4224>;
};
- timer11_fck: timer11_fck {
+ timer11_fck: timer11_fck@4228 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sys_clkin_ck>, <&clkdiv32k_ick>, <&clk_32k_tpm_ck>;
@@ -679,7 +679,7 @@
clocks = <&dpll_ddr_ck>;
};
- dpll_ddr_m4_ck: dpll_ddr_m4_ck {
+ dpll_ddr_m4_ck: dpll_ddr_m4_ck@2db8 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_ddr_x2_ck>;
@@ -690,7 +690,7 @@
ti,invert-autoidle-bit;
};
- dpll_per_clkdcoldo: dpll_per_clkdcoldo {
+ dpll_per_clkdcoldo: dpll_per_clkdcoldo@2e14 {
#clock-cells = <0>;
compatible = "ti,fixed-factor-clock";
clocks = <&dpll_per_ck>;
@@ -701,7 +701,7 @@
ti,invert-autoidle-bit;
};
- dll_aging_clk_div: dll_aging_clk_div {
+ dll_aging_clk_div: dll_aging_clk_div@4250 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&sys_clkin_ck>;
@@ -733,14 +733,14 @@
clock-div = <2>;
};
- usbphy_32khz_clkmux: usbphy_32khz_clkmux {
+ usbphy_32khz_clkmux: usbphy_32khz_clkmux@4260 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&clk_32768_ck>, <&clk_32k_tpm_ck>;
reg = <0x4260>;
};
- usb_phy0_always_on_clk32k: usb_phy0_always_on_clk32k {
+ usb_phy0_always_on_clk32k: usb_phy0_always_on_clk32k@2a40 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&usbphy_32khz_clkmux>;
@@ -748,7 +748,7 @@
reg = <0x2a40>;
};
- usb_phy1_always_on_clk32k: usb_phy1_always_on_clk32k {
+ usb_phy1_always_on_clk32k: usb_phy1_always_on_clk32k@2a48 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&usbphy_32khz_clkmux>;
@@ -756,7 +756,7 @@
reg = <0x2a48>;
};
- usb_otg_ss0_refclk960m: usb_otg_ss0_refclk960m {
+ usb_otg_ss0_refclk960m: usb_otg_ss0_refclk960m@8a60 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll_per_clkdcoldo>;
@@ -764,11 +764,65 @@
reg = <0x8a60>;
};
- usb_otg_ss1_refclk960m: usb_otg_ss1_refclk960m {
+ usb_otg_ss1_refclk960m: usb_otg_ss1_refclk960m@8a68 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll_per_clkdcoldo>;
ti,bit-shift = <8>;
reg = <0x8a68>;
};
+
+ clkout1_osc_div_ck: clkout1_osc_div_ck {
+ #clock-cells = <0>;
+ compatible = "ti,divider-clock";
+ clocks = <&sys_clkin_ck>;
+ ti,bit-shift = <20>;
+ ti,max-div = <4>;
+ reg = <0x4100>;
+ };
+
+ clkout1_src2_mux_ck: clkout1_src2_mux_ck {
+ #clock-cells = <0>;
+ compatible = "ti,mux-clock";
+ clocks = <&clk_rc32k_ck>, <&sysclk_div>, <&dpll_ddr_m2_ck>,
+ <&dpll_per_m2_ck>, <&dpll_disp_m2_ck>,
+ <&dpll_mpu_m2_ck>;
+ reg = <0x4100>;
+ };
+
+ clkout1_src2_pre_div_ck: clkout1_src2_pre_div_ck {
+ #clock-cells = <0>;
+ compatible = "ti,divider-clock";
+ clocks = <&clkout1_src2_mux_ck>;
+ ti,bit-shift = <4>;
+ ti,max-div = <8>;
+ reg = <0x4100>;
+ };
+
+ clkout1_src2_post_div_ck: clkout1_src2_post_div_ck {
+ #clock-cells = <0>;
+ compatible = "ti,divider-clock";
+ clocks = <&clkout1_src2_pre_div_ck>;
+ ti,bit-shift = <8>;
+ ti,max-div = <32>;
+ ti,index-power-of-two;
+ reg = <0x4100>;
+ };
+
+ clkout1_mux_ck: clkout1_mux_ck {
+ #clock-cells = <0>;
+ compatible = "ti,mux-clock";
+ clocks = <&clkout1_osc_div_ck>, <&clk_rc32k_ck>,
+ <&clkout1_src2_post_div_ck>, <&dpll_extdev_m2_ck>;
+ ti,bit-shift = <16>;
+ reg = <0x4100>;
+ };
+
+ clkout1_ck: clkout1_ck {
+ #clock-cells = <0>;
+ compatible = "ti,gate-clock";
+ clocks = <&clkout1_mux_ck>;
+ ti,bit-shift = <23>;
+ reg = <0x4100>;
+ };
};
diff --git a/arch/arm/boot/dts/am572x-idk.dts b/arch/arm/boot/dts/am572x-idk.dts
new file mode 100644
index 000000000000..e3acb99703e1
--- /dev/null
+++ b/arch/arm/boot/dts/am572x-idk.dts
@@ -0,0 +1,85 @@
+/*
+ * Copyright (C) 2015-2016 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+/dts-v1/;
+
+#include "dra74x.dtsi"
+#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/interrupt-controller/irq.h>
+#include "am57xx-idk-common.dtsi"
+
+/ {
+ model = "TI AM5728 IDK";
+ compatible = "ti,am5728-idk", "ti,am5728", "ti,dra742", "ti,dra74",
+ "ti,dra7";
+
+ memory {
+ device_type = "memory";
+ reg = <0x0 0x80000000 0x0 0x80000000>;
+ };
+
+ extcon_usb2: extcon_usb2 {
+ compatible = "linux,extcon-usb-gpio";
+ id-gpio = <&gpio3 16 GPIO_ACTIVE_HIGH>;
+ };
+
+ status-leds {
+ compatible = "gpio-leds";
+ cpu0-led {
+ label = "status0:red:cpu0";
+ gpios = <&gpio4 0 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ linux,default-trigger = "cpu0";
+ };
+
+ usr0-led {
+ label = "status0:green:usr";
+ gpios = <&gpio3 11 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+
+ heartbeat-led {
+ label = "status0:blue:heartbeat";
+ gpios = <&gpio3 12 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ linux,default-trigger = "heartbeat";
+ };
+
+ cpu1-led {
+ label = "status1:red:cpu1";
+ gpios = <&gpio3 10 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ linux,default-trigger = "cpu1";
+ };
+
+ usr1-led {
+ label = "status1:green:usr";
+ gpios = <&gpio7 23 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+
+ mmc0-led {
+ label = "status1:blue:mmc0";
+ gpios = <&gpio7 22 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ linux,default-trigger = "mmc0";
+ };
+ };
+};
+
+&omap_dwc3_2 {
+ extcon = <&extcon_usb2>;
+};
+
+&mmc1 {
+ status = "okay";
+ vmmc-supply = <&v3_3d>;
+ vmmc_aux-supply = <&ldo1_reg>;
+ bus-width = <4>;
+ cd-gpios = <&gpio6 27 0>; /* gpio 219 */
+};
diff --git a/arch/arm/boot/dts/am57xx-beagle-x15.dts b/arch/arm/boot/dts/am57xx-beagle-x15.dts
index 4168eb9dd369..81d6c3033b51 100644
--- a/arch/arm/boot/dts/am57xx-beagle-x15.dts
+++ b/arch/arm/boot/dts/am57xx-beagle-x15.dts
@@ -8,6 +8,7 @@
/dts-v1/;
#include "dra74x.dtsi"
+#include "am57xx-commercial-grade.dtsi"
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/irq.h>
@@ -144,7 +145,7 @@
};
};
- sound0: sound@0 {
+ sound0: sound0 {
compatible = "simple-audio-card";
simple-audio-card,name = "BeagleBoard-X15";
simple-audio-card,widgets =
@@ -166,8 +167,6 @@
sound0_master: simple-audio-card,codec {
sound-dai = <&tlv320aic3104>;
- assigned-clocks = <&clkoutmux2_clk_mux>;
- assigned-clock-parents = <&sys_clk2_dclk_div>;
clocks = <&clkout2_clk>;
};
};
@@ -427,7 +426,7 @@
/* VDD_DSPEVE, VDD_IVA, VDD_GPU */
regulator-name = "smps45";
regulator-min-microvolt = < 850000>;
- regulator-max-microvolt = <1150000>;
+ regulator-max-microvolt = <1250000>;
regulator-always-on;
regulator-boot-on;
};
@@ -436,7 +435,7 @@
/* VDD_CORE */
regulator-name = "smps6";
regulator-min-microvolt = <850000>;
- regulator-max-microvolt = <1030000>;
+ regulator-max-microvolt = <1150000>;
regulator-always-on;
regulator-boot-on;
};
@@ -571,6 +570,9 @@
pinctrl-names = "default", "sleep";
pinctrl-0 = <&clkout2_pins_default>;
pinctrl-1 = <&clkout2_pins_sleep>;
+ assigned-clocks = <&clkoutmux2_clk_mux>;
+ assigned-clock-parents = <&sys_clk2_dclk_div>;
+
status = "okay";
adc-settle-ms = <40>;
@@ -795,6 +797,8 @@
serial-dir = < /* 0: INACTIVE, 1: TX, 2: RX */
1 2 0 0
>;
+ tx-num-evt = <32>;
+ rx-num-evt = <32>;
};
&mailbox5 {
diff --git a/arch/arm/boot/dts/am57xx-cl-som-am57x.dts b/arch/arm/boot/dts/am57xx-cl-som-am57x.dts
index 14f912a1b4fd..378b142ef88c 100644
--- a/arch/arm/boot/dts/am57xx-cl-som-am57x.dts
+++ b/arch/arm/boot/dts/am57xx-cl-som-am57x.dts
@@ -51,7 +51,7 @@
regulator-max-microvolt = <3300000>;
};
- sound0: sound@0 {
+ sound0: sound0 {
compatible = "simple-audio-card";
simple-audio-card,name = "CL-SOM-AM57x-Sound-Card";
simple-audio-card,format = "i2s";
diff --git a/arch/arm/boot/dts/am57xx-commercial-grade.dtsi b/arch/arm/boot/dts/am57xx-commercial-grade.dtsi
new file mode 100644
index 000000000000..c183654464e9
--- /dev/null
+++ b/arch/arm/boot/dts/am57xx-commercial-grade.dtsi
@@ -0,0 +1,23 @@
+&cpu_alert0 {
+ temperature = <80000>; /* milliCelsius */
+};
+
+&cpu_crit {
+ temperature = <90000>; /* milliCelsius */
+};
+
+&gpu_crit {
+ temperature = <90000>; /* milliCelsius */
+};
+
+&core_crit {
+ temperature = <90000>; /* milliCelsius */
+};
+
+&dspeve_crit {
+ temperature = <90000>; /* milliCelsius */
+};
+
+&iva_crit {
+ temperature = <90000>; /* milliCelsius */
+};
diff --git a/arch/arm/boot/dts/am57xx-idk-common.dtsi b/arch/arm/boot/dts/am57xx-idk-common.dtsi
new file mode 100644
index 000000000000..b01a5948cdd0
--- /dev/null
+++ b/arch/arm/boot/dts/am57xx-idk-common.dtsi
@@ -0,0 +1,304 @@
+/*
+ * Copyright (C) 2015-2016 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include "am57xx-industrial-grade.dtsi"
+
+/ {
+ aliases {
+ rtc0 = &tps659038_rtc;
+ rtc1 = &rtc;
+ };
+
+ vmain: fixedregulator-vmain {
+ compatible = "regulator-fixed";
+ regulator-name = "VMAIN";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ v3_3d: fixedregulator-v3_3d {
+ compatible = "regulator-fixed";
+ regulator-name = "V3_3D";
+ vin-supply = <&smps9_reg>;
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ vtt_fixed: fixedregulator-vtt {
+ /* TPS51200 */
+ compatible = "regulator-fixed";
+ regulator-name = "vtt_fixed";
+ vin-supply = <&v3_3d>;
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+};
+
+&i2c1 {
+ status = "okay";
+ clock-frequency = <400000>;
+
+ tps659038: tps659038@58 {
+ compatible = "ti,tps659038";
+ reg = <0x58>;
+ interrupts-extended = <&gpio6 16 IRQ_TYPE_LEVEL_HIGH
+ &dra7_pmx_core 0x418>;
+ #interrupt-cells = <2>;
+ interrupt-controller;
+ ti,system-power-controller;
+
+ tps659038_pmic {
+ compatible = "ti,tps659038-pmic";
+ regulators {
+ smps12_reg: smps12 {
+ /* VDD_MPU */
+ vin-supply = <&vmain>;
+ regulator-name = "smps12";
+ regulator-min-microvolt = <850000>;
+ regulator-max-microvolt = <1250000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ smps3_reg: smps3 {
+ /* VDD_DDR EMIF1 EMIF2 */
+ vin-supply = <&vmain>;
+ regulator-name = "smps3";
+ regulator-min-microvolt = <1350000>;
+ regulator-max-microvolt = <1350000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ smps45_reg: smps45 {
+ /* VDD_DSPEVE on AM572 */
+ /* VDD_IVA + VDD_DSP on AM571 */
+ vin-supply = <&vmain>;
+ regulator-name = "smps45";
+ regulator-min-microvolt = <850000>;
+ regulator-max-microvolt = <1250000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ smps6_reg: smps6 {
+ /* VDD_GPU */
+ vin-supply = <&vmain>;
+ regulator-name = "smps6";
+ regulator-min-microvolt = <850000>;
+ regulator-max-microvolt = <1250000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ smps7_reg: smps7 {
+ /* VDD_CORE */
+ vin-supply = <&vmain>;
+ regulator-name = "smps7";
+ regulator-min-microvolt = <850000>;
+ regulator-max-microvolt = <1150000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ smps8_reg: smps8 {
+ /* 5728 - VDD_IVAHD */
+ /* 5718 - N.C. test point */
+ vin-supply = <&vmain>;
+ regulator-name = "smps8";
+ };
+
+ smps9_reg: smps9 {
+ /* VDD_3_3D */
+ vin-supply = <&vmain>;
+ regulator-name = "smps9";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ ldo1_reg: ldo1 {
+ /* VDDSHV8 - VSDMMC */
+ /* NOTE: on rev 1.3a, data supply */
+ vin-supply = <&vmain>;
+ regulator-name = "ldo1";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ ldo2_reg: ldo2 {
+ /* VDDSH18V */
+ vin-supply = <&vmain>;
+ regulator-name = "ldo2";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ ldo3_reg: ldo3 {
+ /* R1.3a 572x V1_8PHY_LDO3: USB, SATA */
+ vin-supply = <&vmain>;
+ regulator-name = "ldo3";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ ldo4_reg: ldo4 {
+ /* R1.3a 572x V1_8PHY_LDO4: PCIE, HDMI*/
+ vin-supply = <&vmain>;
+ regulator-name = "ldo4";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ /* LDO5-8 unused */
+
+ ldo9_reg: ldo9 {
+ /* VDD_RTC */
+ vin-supply = <&vmain>;
+ regulator-name = "ldo9";
+ regulator-min-microvolt = <840000>;
+ regulator-max-microvolt = <1160000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ ldoln_reg: ldoln {
+ /* VDDA_1V8_PLL */
+ vin-supply = <&vmain>;
+ regulator-name = "ldoln";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ ldousb_reg: ldousb {
+ /* VDDA_3V_USB: VDDA_USBHS33 */
+ vin-supply = <&vmain>;
+ regulator-name = "ldousb";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ ldortc_reg: ldortc {
+ /* VDDA_RTC */
+ vin-supply = <&vmain>;
+ regulator-name = "ldortc";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ regen1: regen1 {
+ /* VDD_3V3_ON */
+ regulator-name = "regen1";
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ regen2: regen2 {
+ /* Needed for PMIC internal resource */
+ regulator-name = "regen2";
+ regulator-boot-on;
+ regulator-always-on;
+ };
+ };
+ };
+
+ tps659038_rtc: tps659038_rtc {
+ compatible = "ti,palmas-rtc";
+ interrupt-parent = <&tps659038>;
+ interrupts = <8 IRQ_TYPE_EDGE_FALLING>;
+ wakeup-source;
+ };
+
+ tps659038_pwr_button: tps659038_pwr_button {
+ compatible = "ti,palmas-pwrbutton";
+ interrupt-parent = <&tps659038>;
+ interrupts = <1 IRQ_TYPE_EDGE_FALLING>;
+ wakeup-source;
+ ti,palmas-long-press-seconds = <12>;
+ };
+
+ tps659038_gpio: tps659038_gpio {
+ compatible = "ti,palmas-gpio";
+ gpio-controller;
+ #gpio-cells = <2>;
+ };
+ };
+};
+
+&uart3 {
+ status = "okay";
+ interrupts-extended = <&crossbar_mpu GIC_SPI 69 IRQ_TYPE_LEVEL_HIGH
+ &dra7_pmx_core 0x248>;
+};
+
+&rtc {
+ status = "okay";
+ ext-clk-src;
+};
+
+&mac {
+ status = "okay";
+ dual_emac;
+};
+
+&cpsw_emac0 {
+ phy_id = <&davinci_mdio>, <0>;
+ phy-mode = "rgmii";
+ dual_emac_res_vlan = <1>;
+};
+
+&cpsw_emac1 {
+ phy_id = <&davinci_mdio>, <1>;
+ phy-mode = "rgmii";
+ dual_emac_res_vlan = <2>;
+};
+
+&usb2_phy1 {
+ phy-supply = <&ldousb_reg>;
+};
+
+&usb2_phy2 {
+ phy-supply = <&ldousb_reg>;
+};
+
+&usb1 {
+ dr_mode = "host";
+};
+
+&usb2 {
+ dr_mode = "otg";
+};
+
+&mmc2 {
+ status = "okay";
+ vmmc-supply = <&v3_3d>;
+ bus-width = <8>;
+ ti,non-removable;
+ max-frequency = <96000000>;
+};
diff --git a/arch/arm/boot/dts/am57xx-industrial-grade.dtsi b/arch/arm/boot/dts/am57xx-industrial-grade.dtsi
new file mode 100644
index 000000000000..70c8c4ba1933
--- /dev/null
+++ b/arch/arm/boot/dts/am57xx-industrial-grade.dtsi
@@ -0,0 +1,23 @@
+&cpu_alert0 {
+ temperature = <90000>; /* milliCelsius */
+};
+
+&cpu_crit {
+ temperature = <105000>; /* milliCelsius */
+};
+
+&gpu_crit {
+ temperature = <105000>; /* milliCelsius */
+};
+
+&core_crit {
+ temperature = <105000>; /* milliCelsius */
+};
+
+&dspeve_crit {
+ temperature = <105000>; /* milliCelsius */
+};
+
+&iva_crit {
+ temperature = <105000>; /* milliCelsius */
+};
diff --git a/arch/arm/boot/dts/arm-realview-eb-11mp-revb.dts b/arch/arm/boot/dts/arm-realview-eb-11mp-revb.dts
new file mode 100644
index 000000000000..e68527b0d552
--- /dev/null
+++ b/arch/arm/boot/dts/arm-realview-eb-11mp-revb.dts
@@ -0,0 +1,93 @@
+/*
+ * Copyright 2016 Linaro Ltd
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include "arm-realview-eb-11mp.dts"
+
+/ {
+ model = "ARM RealView Emulation Baseboard with ARM11MPCore Rev B";
+};
+
+/*
+ * Revision B has a distinctly different layout of the syscon, so
+ * append a specific compatible-string.
+ */
+&syscon {
+ compatible = "arm,realview-eb11mp-revb-syscon", "arm,realview-eb-syscon", "syscon", "simple-mfd";
+};
+
+&intc {
+ reg = <0x10101000 0x1000>,
+ <0x10100100 0x100>;
+};
+
+&L2 {
+ reg = <0x10102000 0x1000>;
+};
+
+&scu {
+ reg = <0x10100000 0x100>;
+};
+
+&twd_timer {
+ reg = <0x10100600 0x20>;
+};
+
+&twd_wdog {
+ reg = <0x10100620 0x20>;
+};
+
+/*
+ * On revision B, we cannot reach the secondary interrupt
+ * controller; as a result, some peripherals that depend on
+ * their IRQs cannot be reached, so disable them.
+ */
+&intc_second {
+ status = "disabled";
+};
+
+&gpio0 {
+ status = "disabled";
+};
+
+&gpio1 {
+ status = "disabled";
+};
+
+&gpio2 {
+ status = "disabled";
+};
+
+&serial2 {
+ status = "disabled";
+};
+
+&serial3 {
+ status = "disabled";
+};
+
+&ssp {
+ status = "disabled";
+};
+
+&wdog {
+ status = "disabled";
+};
diff --git a/arch/arm/boot/dts/arm-realview-eb-11mp.dts b/arch/arm/boot/dts/arm-realview-eb-11mp.dts
new file mode 100644
index 000000000000..87ff602a2a2d
--- /dev/null
+++ b/arch/arm/boot/dts/arm-realview-eb-11mp.dts
@@ -0,0 +1,74 @@
+/*
+ * Copyright 2016 Linaro Ltd
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "arm-realview-eb-mp.dtsi"
+
+/ {
+ model = "ARM RealView Emulation Baseboard with ARM11MPCore Rev C";
+ arm,hbi = <0x146>;
+
+ /*
+ * This is the ARM11 MPCore tile (HBI-0146) used with the RealView EB.
+ * Reference: ARM DUI 0318F
+ *
+ * To run this machine with QEMU, specify the following:
+ * qemu-system-arm -M realview-eb-mpcore -smp cpus=4
+ */
+ cpus {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ enable-method = "arm,realview-smp";
+
+ MP11_0: cpu@0 {
+ device_type = "cpu";
+ compatible = "arm,arm11mpcore";
+ reg = <0>;
+ next-level-cache = <&L2>;
+ };
+
+ MP11_1: cpu@1 {
+ device_type = "cpu";
+ compatible = "arm,arm11mpcore";
+ reg = <1>;
+ next-level-cache = <&L2>;
+ };
+
+ MP11_2: cpu@2 {
+ device_type = "cpu";
+ compatible = "arm,arm11mpcore";
+ reg = <2>;
+ next-level-cache = <&L2>;
+ };
+
+ MP11_3: cpu@3 {
+ device_type = "cpu";
+ compatible = "arm,arm11mpcore";
+ reg = <3>;
+ next-level-cache = <&L2>;
+ };
+ };
+};
+
+&pmu {
+ interrupt-affinity = <&MP11_0>, <&MP11_1>, <&MP11_2>, <&MP11_3>;
+};
diff --git a/arch/arm/boot/dts/arm-realview-eb-a9mp.dts b/arch/arm/boot/dts/arm-realview-eb-a9mp.dts
new file mode 100644
index 000000000000..967684b3636c
--- /dev/null
+++ b/arch/arm/boot/dts/arm-realview-eb-a9mp.dts
@@ -0,0 +1,70 @@
+/*
+ * Copyright 2016 Linaro Ltd
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "arm-realview-eb-mp.dtsi"
+
+/ {
+ model = "ARM RealView EB Cortex A9 MPCore";
+
+ /*
+ * This is the Cortex A9 MPCore tile used with the
+ * RealView EB.
+ */
+ cpus {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ enable-method = "arm,realview-smp";
+
+ A9_0: cpu@0 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a9";
+ reg = <0>;
+ next-level-cache = <&L2>;
+ };
+
+ A9_1: cpu@1 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a9";
+ reg = <1>;
+ next-level-cache = <&L2>;
+ };
+
+ A9_2: cpu@2 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a9";
+ reg = <2>;
+ next-level-cache = <&L2>;
+ };
+
+ A9_3: cpu@3 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a9";
+ reg = <3>;
+ next-level-cache = <&L2>;
+ };
+ };
+};
+
+&pmu {
+ interrupt-affinity = <&A9_0>, <&A9_1>, <&A9_2>, <&A9_3>;
+};
diff --git a/arch/arm/boot/dts/arm-realview-eb-mp.dtsi b/arch/arm/boot/dts/arm-realview-eb-mp.dtsi
new file mode 100644
index 000000000000..7b8d90b7aeea
--- /dev/null
+++ b/arch/arm/boot/dts/arm-realview-eb-mp.dtsi
@@ -0,0 +1,225 @@
+/*
+ * Copyright 2016 Linaro Ltd
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include <dt-bindings/interrupt-controller/irq.h>
+#include <dt-bindings/gpio/gpio.h>
+#include "arm-realview-eb.dtsi"
+
+/*
+ * This is the common include file for all MPCore variants of the
+ * Evaluation Baseboard, i.e. ARM11MPCore, ARM11MPCore Revision B
+ * and Cortex-A9 MPCore.
+ */
+/ {
+ soc {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "arm,realview-eb-soc", "simple-bus";
+ regmap = <&syscon>;
+ ranges;
+
+ /* Primary interrupt controller in the test chip */
+ intc: interrupt-controller@1f000100 {
+ compatible = "arm,eb11mp-gic";
+ #interrupt-cells = <3>;
+ #address-cells = <1>;
+ interrupt-controller;
+ reg = <0x1f001000 0x1000>,
+ <0x1f000100 0x100>;
+ };
+
+ /* Secondary interrupt controller on the FPGA */
+ intc_second: interrupt-controller@10040000 {
+ compatible = "arm,pl390";
+ #interrupt-cells = <3>;
+ #address-cells = <1>;
+ interrupt-controller;
+ reg = <0x10041000 0x1000>,
+ <0x10040000 0x100>;
+ interrupt-parent = <&intc>;
+ interrupts = <0 10 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
+ L2: l2-cache {
+ compatible = "arm,l220-cache";
+ reg = <0x1f002000 0x1000>;
+ interrupt-parent = <&intc>;
+ interrupts = <0 29 IRQ_TYPE_LEVEL_HIGH>,
+ <0 30 IRQ_TYPE_LEVEL_HIGH>,
+ <0 31 IRQ_TYPE_LEVEL_HIGH>;
+ cache-unified;
+ cache-level = <2>;
+ /*
+ * Override default cache size, sets and
+ * associativity as these may be erroneously set
+ * up by boot loader(s), probably for safety
+            * since the outer sync operation can cause the
+ * cache to hang unless disabled.
+ */
+ cache-size = <1048576>; // 1MB
+ cache-sets = <4096>;
+ cache-line-size = <32>;
+ arm,shared-override;
+ arm,parity-enable;
+ arm,outer-sync-disable;
+ };
+
+ scu: scu@1f000000 {
+ compatible = "arm,arm11mp-scu";
+ reg = <0x1f000000 0x100>;
+ };
+
+ twd_timer: timer@1f000600 {
+ compatible = "arm,arm11mp-twd-timer";
+ reg = <0x1f000600 0x20>;
+ interrupt-parent = <&intc>;
+ interrupts = <1 13 0xf04>;
+ };
+
+ twd_wdog: watchdog@1f000620 {
+ compatible = "arm,arm11mp-twd-wdt";
+ reg = <0x1f000620 0x20>;
+ interrupt-parent = <&intc>;
+ interrupts = <1 14 0xf04>;
+ };
+
+ /* PMU with one IRQ line per core */
+ pmu: pmu@0 {
+ compatible = "arm,arm11mpcore-pmu";
+ interrupt-parent = <&intc>;
+ interrupts = <0 17 IRQ_TYPE_LEVEL_HIGH>,
+ <0 18 IRQ_TYPE_LEVEL_HIGH>,
+ <0 19 IRQ_TYPE_LEVEL_HIGH>,
+ <0 20 IRQ_TYPE_LEVEL_HIGH>;
+ };
+ };
+};
+
+/*
+ * This adapts all the peripherals to the interrupt routing
+ * to the GIC on the core tile.
+ */
+
+&ethernet {
+ interrupt-parent = <&intc>;
+ interrupts = <0 9 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&usb {
+ interrupt-parent = <&intc>;
+ interrupts = <0 3 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&aaci {
+ interrupt-parent = <&intc>;
+ interrupts = <0 0 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&mmc {
+ interrupt-parent = <&intc>;
+ interrupts = <0 14 IRQ_TYPE_LEVEL_HIGH>,
+ <0 15 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&kmi0 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 7 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&kmi1 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 8 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&charlcd {
+ interrupt-parent = <&intc>;
+ interrupts = <0 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&serial0 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 4 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&serial1 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 5 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&timer01 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 1 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&timer23 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 2 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&rtc {
+ interrupt-parent = <&intc>;
+ interrupts = <0 6 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+/*
+ * On revision A, these peripherals do not have their IRQ lines
+ * routed to the core tile, but they can be reached on the secondary
+ * GIC.
+ */
+&gpio0 {
+ interrupt-parent = <&intc_second>;
+ interrupts = <0 6 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&gpio1 {
+ interrupt-parent = <&intc_second>;
+ interrupts = <0 7 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&gpio2 {
+ interrupt-parent = <&intc_second>;
+ interrupts = <0 8 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&serial2 {
+ interrupt-parent = <&intc_second>;
+ interrupts = <0 14 IRQ_TYPE_LEVEL_HIGH>;
+ status = "okay";
+};
+
+&serial3 {
+ interrupt-parent = <&intc_second>;
+ interrupts = <0 15 IRQ_TYPE_LEVEL_HIGH>;
+ status = "okay";
+};
+
+&ssp {
+ interrupt-parent = <&intc_second>;
+ interrupts = <0 11 IRQ_TYPE_LEVEL_HIGH>;
+ status = "okay";
+};
+
+&wdog {
+ interrupt-parent = <&intc_second>;
+ interrupts = <0 0 IRQ_TYPE_LEVEL_HIGH>;
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/arm-realview-eb.dts b/arch/arm/boot/dts/arm-realview-eb.dts
new file mode 100644
index 000000000000..15431077f00c
--- /dev/null
+++ b/arch/arm/boot/dts/arm-realview-eb.dts
@@ -0,0 +1,166 @@
+/*
+ * Copyright 2016 Linaro Ltd
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include <dt-bindings/interrupt-controller/irq.h>
+#include <dt-bindings/gpio/gpio.h>
+#include "arm-realview-eb.dtsi"
+
+/ {
+ model = "ARM RealView Emulation Baseboard";
+ compatible = "arm,realview-eb";
+ arm,hbi = <0x140>;
+
+ /*
+    * This is the core tile with the CPU, GIC etc. for the
+    * ARM926EJ-S, ARM1136 and ARM1176, which do not have an L2
+    * cache or a PMU.
+ *
+ * To run this machine with QEMU, specify the following:
+ * qemu-system-arm -M realview-eb
+ * Unless specified, QEMU will emulate an ARM926EJ-S core tile.
+    * The switches -cpu arm1136 or -cpu arm1176 emulate the other
+    * core tiles.
+ */
+ soc {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "arm,realview-eb-soc", "simple-bus";
+ regmap = <&syscon>;
+ ranges;
+
+ intc: interrupt-controller@10040000 {
+ compatible = "arm,pl390";
+ #interrupt-cells = <3>;
+ #address-cells = <1>;
+ interrupt-controller;
+ reg = <0x10041000 0x1000>,
+ <0x10040000 0x100>;
+ };
+ };
+};
+
+/*
+ * This adapts all the peripherals to the interrupt routing
+ * to the GIC on the core tile.
+ */
+
+&ethernet {
+ interrupt-parent = <&intc>;
+ interrupts = <0 28 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&usb {
+ interrupt-parent = <&intc>;
+ interrupts = <0 29 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&aaci {
+ interrupt-parent = <&intc>;
+ interrupts = <0 19 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&mmc {
+ interrupt-parent = <&intc>;
+ interrupts = <0 17 IRQ_TYPE_LEVEL_HIGH>,
+ <0 18 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&kmi0 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 20 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&kmi1 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 21 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&charlcd {
+ interrupt-parent = <&intc>;
+ interrupts = <0 22 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&serial0 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 12 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&serial1 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 13 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&serial2 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 14 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&serial3 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 15 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&ssp {
+ interrupt-parent = <&intc>;
+ interrupts = <0 11 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&wdog {
+ interrupt-parent = <&intc>;
+ interrupts = <0 0 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&timer01 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 4 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&timer23 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 5 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&gpio0 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 6 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&gpio1 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 7 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&gpio2 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 8 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&rtc {
+ interrupt-parent = <&intc>;
+ interrupts = <0 10 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&clcd {
+ interrupt-parent = <&intc>;
+ interrupts = <0 23 IRQ_TYPE_LEVEL_HIGH>;
+};
diff --git a/arch/arm/boot/dts/arm-realview-eb.dtsi b/arch/arm/boot/dts/arm-realview-eb.dtsi
new file mode 100644
index 000000000000..1c6a040218e3
--- /dev/null
+++ b/arch/arm/boot/dts/arm-realview-eb.dtsi
@@ -0,0 +1,453 @@
+/*
+ * Copyright 2016 Linaro Ltd
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include <dt-bindings/interrupt-controller/irq.h>
+#include <dt-bindings/gpio/gpio.h>
+#include "skeleton.dtsi"
+
+/ {
+ compatible = "arm,realview-eb";
+
+ chosen { };
+
+ aliases {
+ serial0 = &serial0;
+ serial1 = &serial1;
+ serial2 = &serial2;
+ serial3 = &serial3;
+ i2c0 = &i2c;
+ };
+
+ memory {
+ /* 128 MiB memory @ 0x0 */
+ reg = <0x00000000 0x08000000>;
+ };
+
+ /* The voltage to the MMC card is hardwired at 3.3V */
+ vmmc: fixedregulator@0 {
+ compatible = "regulator-fixed";
+ regulator-name = "vmmc";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-boot-on;
+ };
+
+ veth: fixedregulator@0 {
+ compatible = "regulator-fixed";
+ regulator-name = "veth";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-boot-on;
+ };
+
+ xtal24mhz: xtal24mhz@24M {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <24000000>;
+ };
+
+ timclk: timclk@1M {
+ #clock-cells = <0>;
+ compatible = "fixed-factor-clock";
+ clock-div = <24>;
+ clock-mult = <1>;
+ clocks = <&xtal24mhz>;
+ };
+
+ mclk: mclk@24M {
+ #clock-cells = <0>;
+ compatible = "fixed-factor-clock";
+ clock-div = <1>;
+ clock-mult = <1>;
+ clocks = <&xtal24mhz>;
+ };
+
+ kmiclk: kmiclk@24M {
+ #clock-cells = <0>;
+ compatible = "fixed-factor-clock";
+ clock-div = <1>;
+ clock-mult = <1>;
+ clocks = <&xtal24mhz>;
+ };
+
+ sspclk: sspclk@24M {
+ #clock-cells = <0>;
+ compatible = "fixed-factor-clock";
+ clock-div = <1>;
+ clock-mult = <1>;
+ clocks = <&xtal24mhz>;
+ };
+
+ uartclk: uartclk@24M {
+ #clock-cells = <0>;
+ compatible = "fixed-factor-clock";
+ clock-div = <1>;
+ clock-mult = <1>;
+ clocks = <&xtal24mhz>;
+ };
+
+ wdogclk: wdogclk@24M {
+ #clock-cells = <0>;
+ compatible = "fixed-factor-clock";
+ clock-div = <1>;
+ clock-mult = <1>;
+ clocks = <&xtal24mhz>;
+ };
+
+ /* FIXME: this actually hangs off the PLL clocks */
+ pclk: pclk@0 {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <0>;
+ };
+
+ flash0@40000000 {
+ /* 2 * 32MiB NOR Flash memory */
+ compatible = "arm,versatile-flash", "cfi-flash";
+ reg = <0x40000000 0x04000000>;
+ bank-width = <4>;
+ };
+
+ flash1@44000000 {
+ /* 2 * 32MiB NOR Flash memory */
+ compatible = "arm,versatile-flash", "cfi-flash";
+ reg = <0x44000000 0x04000000>;
+ bank-width = <4>;
+ };
+
+ /* SMSC 9118 ethernet with PHY and EEPROM */
+ ethernet: ethernet@4e000000 {
+ compatible = "smsc,lan9118", "smsc,lan9115";
+ reg = <0x4e000000 0x10000>;
+ phy-mode = "mii";
+ reg-io-width = <4>;
+ smsc,irq-active-high;
+ smsc,irq-push-pull;
+ vdd33a-supply = <&veth>;
+ vddvario-supply = <&veth>;
+ };
+
+ usb: usb@4f000000 {
+ compatible = "nxp,usb-isp1761";
+ reg = <0x4f000000 0x20000>;
+ port1-otg;
+ };
+
+ /* These peripherals are inside the FPGA */
+ fpga {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "simple-bus";
+ ranges;
+
+ syscon: syscon@10000000 {
+ compatible = "arm,realview-eb-syscon", "syscon", "simple-mfd";
+ reg = <0x10000000 0x1000>;
+
+ led@08.0 {
+ compatible = "register-bit-led";
+ offset = <0x08>;
+ mask = <0x01>;
+ label = "versatile:0";
+ linux,default-trigger = "heartbeat";
+ default-state = "on";
+ };
+ led@08.1 {
+ compatible = "register-bit-led";
+ offset = <0x08>;
+ mask = <0x02>;
+ label = "versatile:1";
+ linux,default-trigger = "mmc0";
+ default-state = "off";
+ };
+ led@08.2 {
+ compatible = "register-bit-led";
+ offset = <0x08>;
+ mask = <0x04>;
+ label = "versatile:2";
+ linux,default-trigger = "cpu0";
+ default-state = "off";
+ };
+ led@08.3 {
+ compatible = "register-bit-led";
+ offset = <0x08>;
+ mask = <0x08>;
+ label = "versatile:3";
+ default-state = "off";
+ };
+ led@08.4 {
+ compatible = "register-bit-led";
+ offset = <0x08>;
+ mask = <0x10>;
+ label = "versatile:4";
+ default-state = "off";
+ };
+ led@08.5 {
+ compatible = "register-bit-led";
+ offset = <0x08>;
+ mask = <0x20>;
+ label = "versatile:5";
+ default-state = "off";
+ };
+ led@08.6 {
+ compatible = "register-bit-led";
+ offset = <0x08>;
+ mask = <0x40>;
+ label = "versatile:6";
+ default-state = "off";
+ };
+ led@08.7 {
+ compatible = "register-bit-led";
+ offset = <0x08>;
+ mask = <0x80>;
+ label = "versatile:7";
+ default-state = "off";
+ };
+ oscclk0: osc0@0c {
+ compatible = "arm,syscon-icst307";
+ #clock-cells = <0>;
+ lock-offset = <0x20>;
+ vco-offset = <0x0C>;
+ clocks = <&xtal24mhz>;
+ };
+ oscclk1: osc1@10 {
+ compatible = "arm,syscon-icst307";
+ #clock-cells = <0>;
+ lock-offset = <0x20>;
+ vco-offset = <0x10>;
+ clocks = <&xtal24mhz>;
+ };
+ oscclk2: osc2@14 {
+ compatible = "arm,syscon-icst307";
+ #clock-cells = <0>;
+ lock-offset = <0x20>;
+ vco-offset = <0x14>;
+ clocks = <&xtal24mhz>;
+ };
+ oscclk3: osc3@18 {
+ compatible = "arm,syscon-icst307";
+ #clock-cells = <0>;
+ lock-offset = <0x20>;
+ vco-offset = <0x18>;
+ clocks = <&xtal24mhz>;
+ };
+ oscclk4: osc4@1c {
+ compatible = "arm,syscon-icst307";
+ #clock-cells = <0>;
+ lock-offset = <0x20>;
+ vco-offset = <0x1c>;
+ clocks = <&xtal24mhz>;
+ };
+ };
+
+ i2c: i2c@10002000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ compatible = "arm,versatile-i2c";
+ reg = <0x10002000 0x1000>;
+
+ rtc@68 {
+ compatible = "dallas,ds1338";
+ reg = <0x68>;
+ };
+ };
+
+ aaci: aaci@10004000 {
+ compatible = "arm,pl041", "arm,primecell";
+ reg = <0x10004000 0x1000>;
+ clocks = <&pclk>;
+ clock-names = "apb_pclk";
+ };
+
+ mmc: mmcsd@10005000 {
+ compatible = "arm,pl18x", "arm,primecell";
+ reg = <0x10005000 0x1000>;
+
+ /* Due to frequent FIFO overruns, use just 500 kHz */
+ max-frequency = <500000>;
+ bus-width = <4>;
+ cap-sd-highspeed;
+ cap-mmc-highspeed;
+ clocks = <&mclk>, <&pclk>;
+ clock-names = "mclk", "apb_pclk";
+ vmmc-supply = <&vmmc>;
+ cd-gpios = <&gpio1 0 GPIO_ACTIVE_LOW>;
+ wp-gpios = <&gpio1 1 GPIO_ACTIVE_HIGH>;
+ };
+
+ kmi0: kmi@10006000 {
+ compatible = "arm,pl050", "arm,primecell";
+ reg = <0x10006000 0x1000>;
+ clocks = <&kmiclk>, <&pclk>;
+ clock-names = "KMIREFCLK", "apb_pclk";
+ };
+
+ kmi1: kmi@10007000 {
+ compatible = "arm,pl050", "arm,primecell";
+ reg = <0x10007000 0x1000>;
+ clocks = <&kmiclk>, <&pclk>;
+ clock-names = "KMIREFCLK", "apb_pclk";
+ };
+
+ charlcd: fpga_charlcd: charlcd@10008000 {
+ compatible = "arm,versatile-lcd";
+ reg = <0x10008000 0x1000>;
+ clocks = <&pclk>;
+ clock-names = "apb_pclk";
+ };
+
+ serial0: serial@10009000 {
+ compatible = "arm,pl011", "arm,primecell";
+ reg = <0x10009000 0x1000>;
+ clocks = <&uartclk>, <&pclk>;
+ clock-names = "uartclk", "apb_pclk";
+ };
+
+ serial1: serial@1000a000 {
+ compatible = "arm,pl011", "arm,primecell";
+ reg = <0x1000a000 0x1000>;
+ clocks = <&uartclk>, <&pclk>;
+ clock-names = "uartclk", "apb_pclk";
+ };
+
+ serial2: serial@1000b000 {
+ compatible = "arm,pl011", "arm,primecell";
+ reg = <0x1000b000 0x1000>;
+ clocks = <&uartclk>, <&pclk>;
+ clock-names = "uartclk", "apb_pclk";
+ };
+
+ serial3: serial@1000c000 {
+ compatible = "arm,pl011", "arm,primecell";
+ reg = <0x1000c000 0x1000>;
+ clocks = <&uartclk>, <&pclk>;
+ clock-names = "uartclk", "apb_pclk";
+ };
+
+ ssp: ssp@1000d000 {
+ compatible = "arm,pl022", "arm,primecell";
+ reg = <0x1000d000 0x1000>;
+ clocks = <&sspclk>, <&pclk>;
+ clock-names = "SSPCLK", "apb_pclk";
+ };
+
+ wdog: watchdog@10010000 {
+ compatible = "arm,sp805", "arm,primecell";
+ reg = <0x10010000 0x1000>;
+ clocks = <&wdogclk>, <&pclk>;
+ clock-names = "wdogclk", "apb_pclk";
+ status = "disabled";
+ };
+
+ timer01: timer@10011000 {
+ compatible = "arm,sp804", "arm,primecell";
+ reg = <0x10011000 0x1000>;
+ clocks = <&timclk>, <&timclk>, <&pclk>;
+ clock-names = "timer1", "timer2", "apb_pclk";
+ };
+
+ timer23: timer@10012000 {
+ compatible = "arm,sp804", "arm,primecell";
+ reg = <0x10012000 0x1000>;
+ clocks = <&timclk>, <&timclk>, <&pclk>;
+ clock-names = "timer1", "timer2", "apb_pclk";
+ };
+
+ gpio0: gpio@10013000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x10013000 0x1000>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&pclk>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio1: gpio@10014000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x10014000 0x1000>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&pclk>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio2: gpio@10015000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x10015000 0x1000>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&pclk>;
+ clock-names = "apb_pclk";
+ };
+
+ rtc: rtc@10017000 {
+ compatible = "arm,pl031", "arm,primecell";
+ reg = <0x10017000 0x1000>;
+ clocks = <&pclk>;
+ clock-names = "apb_pclk";
+ };
+
+ clcd: clcd@10020000 {
+ compatible = "arm,pl111", "arm,primecell";
+ reg = <0x10020000 0x1000>;
+ interrupt-names = "combined";
+ clocks = <&oscclk0>, <&pclk>;
+ clock-names = "clcdclk", "apb_pclk";
+
+ port {
+ clcd_pads: endpoint {
+ remote-endpoint = <&clcd_panel>;
+ arm,pl11x,tft-r0g0b0-pads = <0 8 16>;
+ };
+ };
+
+ panel {
+ compatible = "panel-dpi";
+
+ port {
+ clcd_panel: endpoint {
+ remote-endpoint = <&clcd_pads>;
+ };
+ };
+
+ /* Standard 640x480 VGA timings */
+ panel-timing {
+ clock-frequency = <25175000>;
+ hactive = <640>;
+ hback-porch = <48>;
+ hfront-porch = <16>;
+ hsync-len = <96>;
+ vactive = <480>;
+ vback-porch = <33>;
+ vfront-porch = <10>;
+ vsync-len = <2>;
+ };
+ };
+ };
+ };
+};
diff --git a/arch/arm/boot/dts/arm-realview-pb1176.dts b/arch/arm/boot/dts/arm-realview-pb1176.dts
index 652d85b28aaa..c789564f2803 100644
--- a/arch/arm/boot/dts/arm-realview-pb1176.dts
+++ b/arch/arm/boot/dts/arm-realview-pb1176.dts
@@ -394,6 +394,46 @@
reg = <0x10200000 0x4000>;
bank-width = <1>;
};
+
+ clcd@10112000 {
+ compatible = "arm,pl111", "arm,primecell";
+ reg = <0x10112000 0x1000>;
+ interrupt-parent = <&intc_dc1176>;
+ interrupt-names = "combined";
+ interrupts = <0 47 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&oscclk0>, <&pclk>;
+ clock-names = "clcdclk", "apb_pclk";
+
+ port {
+ clcd_pads: endpoint {
+ remote-endpoint = <&clcd_panel>;
+ arm,pl11x,tft-r0g0b0-pads = <0 8 16>;
+ };
+ };
+
+ panel {
+ compatible = "panel-dpi";
+
+ port {
+ clcd_panel: endpoint {
+ remote-endpoint = <&clcd_pads>;
+ };
+ };
+
+ /* Standard 640x480 VGA timings */
+ panel-timing {
+ clock-frequency = <25175000>;
+ hactive = <640>;
+ hback-porch = <48>;
+ hfront-porch = <16>;
+ hsync-len = <96>;
+ vactive = <480>;
+ vback-porch = <33>;
+ vfront-porch = <10>;
+ vsync-len = <2>;
+ };
+ };
+ };
};
/* These peripherals are inside the FPGA rather than the DevChip */
diff --git a/arch/arm/boot/dts/arm-realview-pb11mp.dts b/arch/arm/boot/dts/arm-realview-pb11mp.dts
index 63d6a051839f..3944765ac4b0 100644
--- a/arch/arm/boot/dts/arm-realview-pb11mp.dts
+++ b/arch/arm/boot/dts/arm-realview-pb11mp.dts
@@ -627,16 +627,17 @@
};
};
+ /* Standard 640x480 VGA timings */
panel-timing {
- clock-frequency = <63500127>;
- hactive = <1024>;
- hback-porch = <152>;
- hfront-porch = <48>;
- hsync-len = <104>;
- vactive = <768>;
- vback-porch = <23>;
- vfront-porch = <3>;
- vsync-len = <4>;
+ clock-frequency = <25175000>;
+ hactive = <640>;
+ hback-porch = <48>;
+ hfront-porch = <16>;
+ hsync-len = <96>;
+ vactive = <480>;
+ vback-porch = <33>;
+ vfront-porch = <10>;
+ vsync-len = <2>;
};
};
};
diff --git a/arch/arm/boot/dts/arm-realview-pba8.dts b/arch/arm/boot/dts/arm-realview-pba8.dts
new file mode 100644
index 000000000000..d3238c252b59
--- /dev/null
+++ b/arch/arm/boot/dts/arm-realview-pba8.dts
@@ -0,0 +1,178 @@
+/*
+ * Copyright 2016 Linaro Ltd
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "arm-realview-pbx.dtsi"
+
+/ {
+ model = "ARM RealView Platform Baseboard for Cortex-A8";
+ compatible = "arm,realview-pba8";
+ arm,hbi = <0x178>;
+
+ cpus {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ enable-method = "arm,realview-smp";
+
+ cpu0: cpu@0 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a8";
+ reg = <0>;
+ };
+ };
+
+ pmu: pmu@0 {
+ compatible = "arm,cortex-a8-pmu";
+ interrupt-parent = <&intc>;
+ interrupts = <0 47 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-affinity = <&cpu0>;
+ };
+
+ /* Primary GIC PL390 interrupt controller in the test chip */
+ intc: interrupt-controller@1e000000 {
+ compatible = "arm,pl390";
+ #interrupt-cells = <3>;
+ #address-cells = <1>;
+ interrupt-controller;
+ reg = <0x1e001000 0x1000>,
+ <0x1e000000 0x100>;
+ };
+};
+
+&ethernet {
+ interrupt-parent = <&intc>;
+ interrupts = <0 28 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&usb {
+ interrupt-parent = <&intc>;
+ interrupts = <0 29 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&soc {
+ compatible = "arm,realview-pba8-soc", "simple-bus";
+};
+
+&syscon {
+ compatible = "arm,realview-pba8-syscon", "syscon", "simple-mfd";
+};
+
+&serial0 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 12 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&serial1 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 13 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&serial2 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 14 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&serial3 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 15 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&ssp {
+ interrupt-parent = <&intc>;
+ interrupts = <0 11 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&wdog0 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 0 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&wdog1 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 40 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&timer01 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 4 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&timer23 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 5 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&gpio0 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 6 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&gpio1 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 7 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&gpio2 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 8 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&rtc {
+ interrupt-parent = <&intc>;
+ interrupts = <0 10 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&timer45 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 41 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&timer67 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 42 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&aaci {
+ interrupt-parent = <&intc>;
+ interrupts = <0 19 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&mmc {
+ interrupt-parent = <&intc>;
+ interrupts = <0 17 IRQ_TYPE_LEVEL_HIGH>,
+ <0 18 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&kmi0 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 20 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&kmi1 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 21 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&clcd {
+ interrupt-parent = <&intc>;
+ interrupts = <0 23 IRQ_TYPE_LEVEL_HIGH>;
+};
diff --git a/arch/arm/boot/dts/arm-realview-pbx-a9.dts b/arch/arm/boot/dts/arm-realview-pbx-a9.dts
new file mode 100644
index 000000000000..db808f92dd79
--- /dev/null
+++ b/arch/arm/boot/dts/arm-realview-pbx-a9.dts
@@ -0,0 +1,229 @@
+/*
+ * Copyright 2016 Linaro Ltd
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "arm-realview-pbx.dtsi"
+
+/ {
+ /*
+ * This is the RealView Platform Baseboard Explore for Cortex-A9
+ * (HBI0182 + HBI0183) as described in ARM DUI 0440B
+ */
+ model = "ARM RealView Platform Baseboard Explore for Cortex-A9";
+ arm,hbi = <0x182>;
+
+ cpus {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ enable-method = "arm,realview-smp";
+
+ cpu-map {
+ cluster0 {
+ core0 {
+ cpu = <&CPU0>;
+ };
+ core1 {
+ cpu = <&CPU1>;
+ };
+ };
+ };
+ CPU0: cpu@0 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a9";
+ reg = <0x0>;
+ next-level-cache = <&L2>;
+ };
+ CPU1: cpu@1 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a9";
+ reg = <0x1>;
+ next-level-cache = <&L2>;
+ };
+ };
+
+ L2: l2-cache {
+ compatible = "arm,pl310-cache";
+ reg = <0x1f002000 0x1000>;
+ cache-unified;
+ cache-level = <2>;
+ /*
+ * Override default cache size, sets and
+ * associativity as these may be erroneously set
+ * up by boot loader(s).
+ */
+ cache-size = <1048576>; // 1MB
+ cache-sets = <4096>;
+ cache-line-size = <32>;
+ arm,parity-disable;
+ arm,tag-latency = <1>;
+ arm,data-latency = <1 1>;
+ arm,dirty-latency = <1>;
+ };
+
+ scu: scu@1f000000 {
+ compatible = "arm,cortex-a9-scu";
+ reg = <0x1f000000 0x100>;
+ };
+
+ twd_timer: timer@1f000600 {
+ compatible = "arm,cortex-a9-twd-timer";
+ reg = <0x1f000600 0x20>;
+ interrupt-parent = <&intc>;
+ interrupts = <1 13 0xf04>;
+ };
+
+ twd_wdog: watchdog@1f000620 {
+ compatible = "arm,cortex-a9-twd-wdt";
+ reg = <0x1f000620 0x20>;
+ interrupt-parent = <&intc>;
+ interrupts = <1 14 0xf04>;
+ };
+
+ pmu: pmu@0 {
+ compatible = "arm,cortex-a9-pmu";
+ interrupt-parent = <&intc>;
+ interrupts = <0 44 IRQ_TYPE_LEVEL_HIGH>,
+ <0 45 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-affinity = <&CPU0>, <&CPU1>;
+ };
+
+ /* Primary GIC PL390 interrupt controller in the test chip */
+ intc: interrupt-controller@1f000000 {
+ compatible = "arm,cortex-a9-gic";
+ #interrupt-cells = <3>;
+ #address-cells = <1>;
+ interrupt-controller;
+ reg = <0x1f001000 0x1000>,
+ <0x1f000100 0x100>;
+ };
+};
+
+&ethernet {
+ interrupt-parent = <&intc>;
+ interrupts = <0 28 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&usb {
+ interrupt-parent = <&intc>;
+ interrupts = <0 29 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&serial0 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 12 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&serial1 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 13 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&serial2 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 14 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&serial3 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 15 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&ssp {
+ interrupt-parent = <&intc>;
+ interrupts = <0 11 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&wdog0 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 0 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&wdog1 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 40 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&timer01 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 4 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&timer23 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 5 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&gpio0 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 6 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&gpio1 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 7 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&gpio2 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 8 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&rtc {
+ interrupt-parent = <&intc>;
+ interrupts = <0 10 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&timer45 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 41 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&timer67 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 42 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&aaci {
+ interrupt-parent = <&intc>;
+ interrupts = <0 19 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&mmc {
+ interrupt-parent = <&intc>;
+ interrupts = <0 17 IRQ_TYPE_LEVEL_HIGH>,
+ <0 18 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&kmi0 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 20 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&kmi1 {
+ interrupt-parent = <&intc>;
+ interrupts = <0 21 IRQ_TYPE_LEVEL_HIGH>;
+};
+
+&clcd {
+ interrupt-parent = <&intc>;
+ interrupts = <0 23 IRQ_TYPE_LEVEL_HIGH>;
+};
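A note on the three-cell interrupt specifiers used throughout these overrides: with the Cortex-A9 GIC, the first cell selects the interrupt class (0 = SPI, 1 = PPI), the second is the interrupt number within that class, and the third carries the trigger flags; for PPIs the upper byte of the flags is a CPU mask, which is what the 0xf04 values above encode. A minimal sketch, reusing nodes from this file:

&serial0 {
	interrupt-parent = <&intc>;
	/* SPI 12, level triggered, active high (IRQ_TYPE_LEVEL_HIGH = 4) */
	interrupts = <0 12 IRQ_TYPE_LEVEL_HIGH>;
};

twd_timer: timer@1f000600 {
	compatible = "arm,cortex-a9-twd-timer";
	reg = <0x1f000600 0x20>;
	interrupt-parent = <&intc>;
	/* PPI 13; 0xf04 = CPU mask 0xf in bits [15:8] plus level-high (4) */
	interrupts = <1 13 0xf04>;
};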
diff --git a/arch/arm/boot/dts/arm-realview-pbx.dtsi b/arch/arm/boot/dts/arm-realview-pbx.dtsi
new file mode 100644
index 000000000000..aeb49c4bd773
--- /dev/null
+++ b/arch/arm/boot/dts/arm-realview-pbx.dtsi
@@ -0,0 +1,542 @@
+/*
+ * Copyright 2016 Linaro Ltd
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to deal
+ * in the Software without restriction, including without limitation the rights
+ * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ * copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ * THE SOFTWARE.
+ */
+
+#include <dt-bindings/interrupt-controller/irq.h>
+#include <dt-bindings/gpio/gpio.h>
+#include "skeleton.dtsi"
+
+/ {
+ compatible = "arm,realview-pbx";
+
+ chosen { };
+
+ aliases {
+ serial0 = &serial0;
+ serial1 = &serial1;
+ serial2 = &serial2;
+ serial3 = &serial3;
+ i2c0 = &i2c;
+ };
+
+ memory {
+ /* 128 MiB memory @ 0x0 */
+ reg = <0x00000000 0x08000000>;
+ };
+
+ /* The voltage to the MMC card is hardwired at 3.3V */
+ vmmc: fixedregulator@0 {
+ compatible = "regulator-fixed";
+ regulator-name = "vmmc";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-boot-on;
+ };
+
+ veth: fixedregulator@0 {
+ compatible = "regulator-fixed";
+ regulator-name = "veth";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-boot-on;
+ };
+
+ xtal24mhz: xtal24mhz@24M {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <24000000>;
+ };
+
+ refclk32khz: refclk32khz {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <32768>;
+ };
+
+ timclk: timclk@1M {
+ #clock-cells = <0>;
+ compatible = "fixed-factor-clock";
+ clock-div = <24>;
+ clock-mult = <1>;
+ clocks = <&xtal24mhz>;
+ };
+
+ mclk: mclk@24M {
+ #clock-cells = <0>;
+ compatible = "fixed-factor-clock";
+ clock-div = <1>;
+ clock-mult = <1>;
+ clocks = <&xtal24mhz>;
+ };
+
+ kmiclk: kmiclk@24M {
+ #clock-cells = <0>;
+ compatible = "fixed-factor-clock";
+ clock-div = <1>;
+ clock-mult = <1>;
+ clocks = <&xtal24mhz>;
+ };
+
+ sspclk: sspclk@24M {
+ #clock-cells = <0>;
+ compatible = "fixed-factor-clock";
+ clock-div = <1>;
+ clock-mult = <1>;
+ clocks = <&xtal24mhz>;
+ };
+
+ uartclk: uartclk@24M {
+ #clock-cells = <0>;
+ compatible = "fixed-factor-clock";
+ clock-div = <1>;
+ clock-mult = <1>;
+ clocks = <&xtal24mhz>;
+ };
+
+ wdogclk: wdogclk@24M {
+ #clock-cells = <0>;
+ compatible = "fixed-factor-clock";
+ clock-div = <1>;
+ clock-mult = <1>;
+ clocks = <&xtal24mhz>;
+ };
+
+ /* FIXME: this actually hangs off the PLL clocks */
+ pclk: pclk@0 {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <0>;
+ };
+
+ flash0@40000000 {
+ /* 2 * 32MiB NOR Flash memory */
+ compatible = "arm,versatile-flash", "cfi-flash";
+ reg = <0x40000000 0x04000000>;
+ bank-width = <4>;
+ };
+
+ flash1@44000000 {
+ /* 2 * 32MiB NOR Flash memory */
+ compatible = "arm,versatile-flash", "cfi-flash";
+ reg = <0x44000000 0x04000000>;
+ bank-width = <4>;
+ };
+
+ /* SMSC 9118 ethernet with PHY and EEPROM */
+ ethernet: ethernet@4e000000 {
+ compatible = "smsc,lan9118", "smsc,lan9115";
+ reg = <0x4e000000 0x10000>;
+ phy-mode = "mii";
+ reg-io-width = <4>;
+ smsc,irq-active-high;
+ smsc,irq-push-pull;
+ vdd33a-supply = <&veth>;
+ vddvario-supply = <&veth>;
+ };
+
+ usb: usb@4f000000 {
+ compatible = "nxp,usb-isp1761";
+ reg = <0x4f000000 0x20000>;
+ port1-otg;
+ };
+
+ soc: soc@0 {
+ compatible = "arm,realview-pbx-soc", "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ regmap = <&syscon>;
+ ranges;
+
+ syscon: syscon@10000000 {
+ compatible = "arm,realview-pbx-syscon", "syscon", "simple-mfd";
+ reg = <0x10000000 0x1000>;
+
+ led@08.0 {
+ compatible = "register-bit-led";
+ offset = <0x08>;
+ mask = <0x01>;
+ label = "versatile:0";
+ linux,default-trigger = "heartbeat";
+ default-state = "on";
+ };
+ led@08.1 {
+ compatible = "register-bit-led";
+ offset = <0x08>;
+ mask = <0x02>;
+ label = "versatile:1";
+ linux,default-trigger = "mmc0";
+ default-state = "off";
+ };
+ led@08.2 {
+ compatible = "register-bit-led";
+ offset = <0x08>;
+ mask = <0x04>;
+ label = "versatile:2";
+ linux,default-trigger = "cpu0";
+ default-state = "off";
+ };
+ led@08.3 {
+ compatible = "register-bit-led";
+ offset = <0x08>;
+ mask = <0x08>;
+ label = "versatile:3";
+ default-state = "off";
+ };
+ led@08.4 {
+ compatible = "register-bit-led";
+ offset = <0x08>;
+ mask = <0x10>;
+ label = "versatile:4";
+ default-state = "off";
+ };
+ led@08.5 {
+ compatible = "register-bit-led";
+ offset = <0x08>;
+ mask = <0x20>;
+ label = "versatile:5";
+ default-state = "off";
+ };
+ led@08.6 {
+ compatible = "register-bit-led";
+ offset = <0x08>;
+ mask = <0x40>;
+ label = "versatile:6";
+ default-state = "off";
+ };
+ led@08.7 {
+ compatible = "register-bit-led";
+ offset = <0x08>;
+ mask = <0x80>;
+ label = "versatile:7";
+ default-state = "off";
+ };
+ oscclk0: osc0@0c {
+ compatible = "arm,syscon-icst307";
+ #clock-cells = <0>;
+ lock-offset = <0x20>;
+ vco-offset = <0x0C>;
+ clocks = <&xtal24mhz>;
+ };
+ oscclk1: osc1@10 {
+ compatible = "arm,syscon-icst307";
+ #clock-cells = <0>;
+ lock-offset = <0x20>;
+ vco-offset = <0x10>;
+ clocks = <&xtal24mhz>;
+ };
+ oscclk2: osc2@14 {
+ compatible = "arm,syscon-icst307";
+ #clock-cells = <0>;
+ lock-offset = <0x20>;
+ vco-offset = <0x14>;
+ clocks = <&xtal24mhz>;
+ };
+ oscclk3: osc3@18 {
+ compatible = "arm,syscon-icst307";
+ #clock-cells = <0>;
+ lock-offset = <0x20>;
+ vco-offset = <0x18>;
+ clocks = <&xtal24mhz>;
+ };
+ oscclk4: osc4@1c {
+ compatible = "arm,syscon-icst307";
+ #clock-cells = <0>;
+ lock-offset = <0x20>;
+ vco-offset = <0x1c>;
+ clocks = <&xtal24mhz>;
+ };
+ };
+
+ sp810_syscon0: sysctl@10001000 {
+ compatible = "arm,sp810", "arm,primecell";
+ reg = <0x10001000 0x1000>;
+ clocks = <&refclk32khz>, <&timclk>, <&xtal24mhz>;
+ clock-names = "refclk", "timclk", "apb_pclk";
+ #clock-cells = <1>;
+ clock-output-names = "timerclk0",
+ "timerclk1",
+ "timerclk2",
+ "timerclk3";
+ assigned-clocks = <&sp810_syscon0 0>,
+ <&sp810_syscon0 1>,
+ <&sp810_syscon0 2>,
+ <&sp810_syscon0 3>;
+ assigned-clock-parents = <&timclk>,
+ <&timclk>,
+ <&timclk>,
+ <&timclk>;
+ };
+
+ i2c: i2c@10002000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ compatible = "arm,versatile-i2c";
+ reg = <0x10002000 0x1000>;
+
+ rtc@68 {
+ compatible = "dallas,ds1338";
+ reg = <0x68>;
+ };
+ };
+
+ serial0: serial@10009000 {
+ compatible = "arm,pl011", "arm,primecell";
+ reg = <0x10009000 0x1000>;
+ clocks = <&uartclk>, <&pclk>;
+ clock-names = "uartclk", "apb_pclk";
+ };
+
+ serial1: serial@1000a000 {
+ compatible = "arm,pl011", "arm,primecell";
+ reg = <0x1000a000 0x1000>;
+ clocks = <&uartclk>, <&pclk>;
+ clock-names = "uartclk", "apb_pclk";
+ };
+
+ serial2: serial@1000b000 {
+ compatible = "arm,pl011", "arm,primecell";
+ reg = <0x1000b000 0x1000>;
+ clocks = <&uartclk>, <&pclk>;
+ clock-names = "uartclk", "apb_pclk";
+ };
+
+ ssp: ssp@1000d000 {
+ compatible = "arm,pl022", "arm,primecell";
+ reg = <0x1000d000 0x1000>;
+ clocks = <&sspclk>, <&pclk>;
+ clock-names = "SSPCLK", "apb_pclk";
+ };
+
+ wdog0: watchdog@1000f000 {
+ compatible = "arm,sp805", "arm,primecell";
+ reg = <0x1000f000 0x1000>;
+ clocks = <&wdogclk>, <&pclk>;
+ clock-names = "wdogclk", "apb_pclk";
+ status = "disabled";
+ };
+
+ wdog1: watchdog@10010000 {
+ compatible = "arm,sp805", "arm,primecell";
+ reg = <0x10010000 0x1000>;
+ clocks = <&wdogclk>, <&pclk>;
+ clock-names = "wdogclk", "apb_pclk";
+ status = "disabled";
+ };
+
+ timer01: timer@10011000 {
+ compatible = "arm,sp804", "arm,primecell";
+ reg = <0x10011000 0x1000>;
+ clocks = <&sp810_syscon0 0>,
+ <&sp810_syscon0 1>,
+ <&pclk>;
+ clock-names = "timerclk0",
+ "timerclk1",
+ "apb_pclk";
+ };
+
+ timer23: timer@10012000 {
+ compatible = "arm,sp804", "arm,primecell";
+ reg = <0x10012000 0x1000>;
+ clocks = <&sp810_syscon0 2>,
+ <&sp810_syscon0 3>,
+ <&pclk>;
+ clock-names = "timerclk2",
+ "timerclk3",
+ "apb_pclk";
+ };
+
+ gpio0: gpio@10013000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x10013000 0x1000>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&pclk>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio1: gpio@10014000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x10014000 0x1000>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&pclk>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio2: gpio@10015000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x10015000 0x1000>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&pclk>;
+ clock-names = "apb_pclk";
+ };
+
+ /* DVI serial bus control is at 10016000 */
+
+ rtc: rtc@10017000 {
+ compatible = "arm,pl031", "arm,primecell";
+ reg = <0x10017000 0x1000>;
+ clocks = <&pclk>;
+ clock-names = "apb_pclk";
+ };
+
+ timer45: timer@10018000 {
+ compatible = "arm,sp804", "arm,primecell";
+ reg = <0x10018000 0x1000>;
+ clocks = <&timclk>, <&timclk>, <&pclk>;
+ clock-names = "timerclk4", "timerclk5", "apb_pclk";
+ };
+
+ timer67: timer@10019000 {
+ compatible = "arm,sp804", "arm,primecell";
+ reg = <0x10019000 0x1000>;
+ clocks = <&timclk>, <&timclk>, <&pclk>;
+ clock-names = "timerclk6", "timerclk7", "apb_pclk";
+ };
+
+ sp810_syscon1: sysctl@1001a000 {
+ compatible = "arm,sp810", "arm,primecell";
+ reg = <0x1001a000 0x1000>;
+ clocks = <&refclk32khz>, <&timclk>, <&xtal24mhz>;
+ clock-names = "refclk", "timclk", "apb_pclk";
+ #clock-cells = <1>;
+ clock-output-names = "timerclk4",
+ "timerclk5",
+ "timerclk6",
+ "timerclk7";
+ assigned-clocks = <&sp810_syscon1 0>,
+ <&sp810_syscon1 1>,
+ <&sp810_syscon1 2>,
+ <&sp810_syscon1 3>;
+ assigned-clock-parents = <&timclk>,
+ <&timclk>,
+ <&timclk>,
+ <&timclk>;
+ };
+ };
+
+
+ /* These peripherals are inside the FPGA */
+ fpga {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "simple-bus";
+ ranges;
+
+ aaci: aaci@10004000 {
+ compatible = "arm,pl041", "arm,primecell";
+ reg = <0x10004000 0x1000>;
+ clocks = <&pclk>;
+ clock-names = "apb_pclk";
+ };
+
+ mmc: mmcsd@10005000 {
+ compatible = "arm,pl18x", "arm,primecell";
+ reg = <0x10005000 0x1000>;
+
+ /* Due to frequent FIFO overruns, use just 500 kHz */
+ max-frequency = <500000>;
+ bus-width = <4>;
+ cap-sd-highspeed;
+ cap-mmc-highspeed;
+ clocks = <&mclk>, <&pclk>;
+ clock-names = "mclk", "apb_pclk";
+ vmmc-supply = <&vmmc>;
+ cd-gpios = <&gpio2 0 GPIO_ACTIVE_LOW>;
+ wp-gpios = <&gpio2 1 GPIO_ACTIVE_HIGH>;
+ };
+
+ kmi0: kmi@10006000 {
+ compatible = "arm,pl050", "arm,primecell";
+ reg = <0x10006000 0x1000>;
+ clocks = <&kmiclk>, <&pclk>;
+ clock-names = "KMIREFCLK", "apb_pclk";
+ };
+
+ kmi1: kmi@10007000 {
+ compatible = "arm,pl050", "arm,primecell";
+ reg = <0x10007000 0x1000>;
+ clocks = <&kmiclk>, <&pclk>;
+ clock-names = "KMIREFCLK", "apb_pclk";
+ };
+
+ serial3: serial@1000c000 {
+ compatible = "arm,pl011", "arm,primecell";
+ reg = <0x1000c000 0x1000>;
+ clocks = <&uartclk>, <&pclk>;
+ clock-names = "uartclk", "apb_pclk";
+ };
+ };
+
+ /* These peripherals are inside the NEC ISSP */
+ issp {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "simple-bus";
+ ranges;
+
+ clcd: clcd@10020000 {
+ compatible = "arm,pl111", "arm,primecell";
+ reg = <0x10020000 0x1000>;
+ interrupt-names = "combined";
+ clocks = <&oscclk4>, <&pclk>;
+ clock-names = "clcdclk", "apb_pclk";
+
+ port {
+ clcd_pads: endpoint {
+ remote-endpoint = <&clcd_panel>;
+ arm,pl11x,tft-r0g0b0-pads = <0 8 16>;
+ };
+ };
+
+ panel {
+ compatible = "panel-dpi";
+
+ port {
+ clcd_panel: endpoint {
+ remote-endpoint = <&clcd_pads>;
+ };
+ };
+
+ /* Standard 640x480 VGA timings */
+ panel-timing {
+ clock-frequency = <25175000>;
+ hactive = <640>;
+ hback-porch = <48>;
+ hfront-porch = <16>;
+ hsync-len = <96>;
+ vactive = <480>;
+ vback-porch = <33>;
+ vfront-porch = <10>;
+ vsync-len = <2>;
+ };
+ };
+ };
+ };
+};
+
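As a reading aid for the clock tree above: a fixed-factor-clock outputs the parent rate * clock-mult / clock-div, so timclk runs at 24 MHz / 24 = 1 MHz, while the divide-by-one nodes simply re-export the crystal rate under consumer-friendly names. A minimal sketch of the pattern:

xtal24mhz: xtal24mhz@24M {
	#clock-cells = <0>;
	compatible = "fixed-clock";
	clock-frequency = <24000000>;
};

timclk: timclk@1M {
	#clock-cells = <0>;
	compatible = "fixed-factor-clock";
	clocks = <&xtal24mhz>;
	clock-div = <24>;
	clock-mult = <1>;	/* 24 MHz / 24 = 1 MHz */
};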
diff --git a/arch/arm/boot/dts/armada-385-linksys.dtsi b/arch/arm/boot/dts/armada-385-linksys.dtsi
index 85d2c377c332..8450944b28e6 100644
--- a/arch/arm/boot/dts/armada-385-linksys.dtsi
+++ b/arch/arm/boot/dts/armada-385-linksys.dtsi
@@ -245,7 +245,7 @@
button@2 {
label = "Factory Reset Button";
linux,code = <KEY_RESTART>;
- gpios = <&gpio1 15 GPIO_ACTIVE_LOW>;
+ gpios = <&gpio0 29 GPIO_ACTIVE_LOW>;
};
};
@@ -260,7 +260,7 @@
};
sata {
- gpios = <&gpio1 22 GPIO_ACTIVE_HIGH>;
+ gpios = <&gpio1 22 GPIO_ACTIVE_LOW>;
default-state = "off";
};
};
@@ -313,7 +313,7 @@
&pinctrl {
keys_pin: keys-pin {
- marvell,pins = "mpp24", "mpp47";
+ marvell,pins = "mpp24", "mpp29";
marvell,function = "gpio";
};
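The button and pinctrl hunks above change in lockstep because on this SoC the first GPIO bank covers MPP0-31 and the second covers MPP32-59: the reset button moves from mpp47 (gpio1, line 47 - 32 = 15) to mpp29 (gpio0, line 29), so the pinctrl group and the gpios phandle must be updated together. Sketch of the matched pair:

keys_pin: keys-pin {
	marvell,pins = "mpp24", "mpp29";
	marvell,function = "gpio";
};

button@2 {
	label = "Factory Reset Button";
	linux,code = <KEY_RESTART>;
	gpios = <&gpio0 29 GPIO_ACTIVE_LOW>;	/* mpp29 = bank 0, line 29 */
};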
diff --git a/arch/arm/boot/dts/armada-xp-linksys-mamba.dts b/arch/arm/boot/dts/armada-xp-linksys-mamba.dts
index b89e6cf1271a..7a461541ce50 100644
--- a/arch/arm/boot/dts/armada-xp-linksys-mamba.dts
+++ b/arch/arm/boot/dts/armada-xp-linksys-mamba.dts
@@ -304,13 +304,13 @@
button@1 {
label = "WPS";
linux,code = <KEY_WPS_BUTTON>;
- gpios = <&gpio1 0 GPIO_ACTIVE_HIGH>;
+ gpios = <&gpio1 0 GPIO_ACTIVE_LOW>;
};
button@2 {
label = "Factory Reset Button";
linux,code = <KEY_RESTART>;
- gpios = <&gpio1 1 GPIO_ACTIVE_HIGH>;
+ gpios = <&gpio1 1 GPIO_ACTIVE_LOW>;
};
};
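More generally, a GPIO specifier here is <&controller line flags>, and the flag records the board wiring polarity rather than a software preference: with GPIO_ACTIVE_LOW the gpio-keys core reports the key as pressed when the line reads low, which is all these Linksys fixes change. Illustrative sketch:

gpio-keys {
	compatible = "gpio-keys";

	button@1 {
		label = "WPS";
		linux,code = <KEY_WPS_BUTTON>;
		/* the button shorts the line to ground when pressed */
		gpios = <&gpio1 0 GPIO_ACTIVE_LOW>;
	};
};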
diff --git a/arch/arm/boot/dts/armv7-m.dtsi b/arch/arm/boot/dts/armv7-m.dtsi
index b1ad7cf6ac02..16331aa79775 100644
--- a/arch/arm/boot/dts/armv7-m.dtsi
+++ b/arch/arm/boot/dts/armv7-m.dtsi
@@ -1,7 +1,7 @@
#include "skeleton.dtsi"
/ {
- nvic: nv-interrupt-controller {
+ nvic: interrupt-controller@e000e100 {
compatible = "arm,armv7m-nvic";
interrupt-controller;
#interrupt-cells = <1>;
diff --git a/arch/arm/boot/dts/artpec6.dtsi b/arch/arm/boot/dts/artpec6.dtsi
index 30430162b886..3fac4c4d0007 100644
--- a/arch/arm/boot/dts/artpec6.dtsi
+++ b/arch/arm/boot/dts/artpec6.dtsi
@@ -91,96 +91,32 @@
clock-frequency = <50000000>;
};
- /* PLL1 is used by CPU and some peripherals */
- pll1_clk: pll1_clk@f8000000 {
+ eth_phy_ref_clk: eth_phy_ref_clk {
#clock-cells = <0>;
- compatible = "axis,artpec6-pll1-clock";
- reg = <0xf8000000 4>;
- clocks = <&ext_clk>;
- };
-
- cpu_clk: cpu_clk {
- #clock-cells = <0>;
- compatible = "fixed-factor-clock";
- clock-div = <1>;
- clock-mult = <1>;
- clocks = <&pll1_clk>;
- clock-output-names = "cpu_clk";
- };
-
- cpu_clkdiv2: cpu_clkdiv2 {
- #clock-cells = <0>;
- compatible = "fixed-factor-clock";
- clock-div = <2>;
- clock-mult = <1>;
- clocks = <&cpu_clk>;
- };
-
- cpu_clkdiv4: cpu_clkdiv4 {
- #clock-cells = <0>;
- compatible = "fixed-factor-clock";
- clock-div = <4>;
- clock-mult = <1>;
- clocks = <&cpu_clk>;
- };
-
- apb_pclk: apb_pclk {
- #clock-cells = <0>;
- compatible = "fixed-factor-clock";
- clock-div = <8>;
- clock-mult = <1>;
- clocks = <&cpu_clk>;
- clock-output-names = "apb_pclk";
+ compatible = "fixed-clock";
+ clock-frequency = <125000000>;
};
- /* PLL2 is used by a number of peripherals, including UDL */
- pll2: pll2 {
- #clock-cells = <0>;
- compatible = "fixed-factor-clock";
- clock-div = <1>;
- clock-mult = <24>;
+ clkctrl: clkctrl@0xf8000000 {
+ #clock-cells = <1>;
+ compatible = "axis,artpec6-clkctrl";
+ reg = <0xf8000000 0x48>;
clocks = <&ext_clk>;
+ clock-names = "sys_refclk";
};
- /* PLL2DIV2 is used by the Fractional Clock Divider, for i2s */
- pll2div2: pll2div2 {
- #clock-cells = <0>;
- compatible = "fixed-factor-clock";
- clock-div = <2>;
- clock-mult = <1>;
- clocks = <&pll2>;
- };
-
- pll2div12: pll2div12 {
- #clock-cells = <0>;
- compatible = "fixed-factor-clock";
- clock-div = <12>;
- clock-mult = <1>;
- clocks = <&pll2>;
- };
-
- pll2div24: pll2div24 {
- #clock-cells = <0>;
- compatible = "fixed-factor-clock";
- clock-div = <24>;
- clock-mult = <1>;
- clocks = <&pll2>;
- clock-output-names = "uart_clk";
- };
-
-
gtimer@faf00200 {
compatible = "arm,cortex-a9-global-timer";
reg = <0xfaf00200 0x20>;
interrupts = <GIC_PPI 11 0xf01>;
- clocks = <&cpu_clkdiv2>;
+ clocks = <&clkctrl 1>;
};
timer@faf00600 {
compatible = "arm,cortex-a9-twd-timer";
reg = <0xfaf00600 0x20>;
interrupts = <GIC_PPI 13 0xf04>;
- clocks = <&cpu_clkdiv2>;
+ clocks = <&clkctrl 1>;
status = "disabled";
};
@@ -220,7 +156,8 @@
ethernet: ethernet@f8010000 {
clock-names = "phy_ref_clk", "apb_pclk";
- clocks = <&ext_clk>, <&apb_pclk>;
+ clocks = <&eth_phy_ref_clk>,
+ <&clkctrl 4>;
compatible = "snps,dwc-qos-ethernet-4.10";
interrupt-parent = <&intc>;
interrupts = <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>;
@@ -238,7 +175,8 @@
compatible = "arm,pl011", "arm,primecell";
reg = <0xf8036000 0x1000>;
interrupts = <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>;
- clocks = <&pll2div24>, <&apb_pclk>;
+ clocks = <&clkctrl 13>,
+ <&clkctrl 12>;
clock-names = "uart_clk", "apb_pclk";
status = "disabled";
};
@@ -246,7 +184,8 @@
compatible = "arm,pl011", "arm,primecell";
reg = <0xf8037000 0x1000>;
interrupts = <GIC_SPI 113 IRQ_TYPE_LEVEL_HIGH>;
- clocks = <&pll2div24>, <&apb_pclk>;
+ clocks = <&clkctrl 13>,
+ <&clkctrl 12>;
clock-names = "uart_clk", "apb_pclk";
status = "disabled";
};
@@ -254,7 +193,8 @@
compatible = "arm,pl011", "arm,primecell";
reg = <0xf8038000 0x1000>;
interrupts = <GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>;
- clocks = <&pll2div24>, <&apb_pclk>;
+ clocks = <&clkctrl 13>,
+ <&clkctrl 12>;
clock-names = "uart_clk", "apb_pclk";
status = "disabled";
};
@@ -262,7 +202,8 @@
compatible = "arm,pl011", "arm,primecell";
reg = <0xf8039000 0x1000>;
interrupts = <GIC_SPI 129 IRQ_TYPE_LEVEL_HIGH>;
- clocks = <&pll2div24>, <&apb_pclk>;
+ clocks = <&clkctrl 13>,
+ <&clkctrl 12>;
clock-names = "uart_clk", "apb_pclk";
status = "disabled";
};
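The artpec6 change above replaces a set of per-output fixed-factor clocks with one provider that has #clock-cells = <1>, so each consumer now names an output by index instead of by phandle to a dedicated node. A minimal sketch, with the serial node label chosen only for illustration (the indices 13 and 12 are the ones used in the hunks above):

clkctrl: clkctrl@f8000000 {
	#clock-cells = <1>;
	compatible = "axis,artpec6-clkctrl";
	reg = <0xf8000000 0x48>;
	clocks = <&ext_clk>;
	clock-names = "sys_refclk";
};

uart0: serial@f8036000 {
	compatible = "arm,pl011", "arm,primecell";
	reg = <0xf8036000 0x1000>;
	clocks = <&clkctrl 13>, <&clkctrl 12>;	/* uart_clk, apb_pclk */
	clock-names = "uart_clk", "apb_pclk";
};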
diff --git a/arch/arm/boot/dts/aspeed-ast2500-evb.dts b/arch/arm/boot/dts/aspeed-ast2500-evb.dts
new file mode 100644
index 000000000000..1b7a5ff0e533
--- /dev/null
+++ b/arch/arm/boot/dts/aspeed-ast2500-evb.dts
@@ -0,0 +1,25 @@
+/dts-v1/;
+
+#include "aspeed-g5.dtsi"
+
+/ {
+ model = "AST2500 EVB";
+ compatible = "aspeed,ast2500";
+
+ aliases {
+ serial4 = &uart5;
+ };
+
+ chosen {
+ stdout-path = &uart5;
+ bootargs = "console=ttyS4,115200 earlyprintk";
+ };
+
+ memory {
+ reg = <0x80000000 0x20000000>;
+ };
+};
+
+&uart5 {
+ status = "okay";
+};
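The board file above follows the usual split: the SoC .dtsi declares every peripheral but leaves it status = "disabled", and each board enables only what it actually wires up by referencing the node label. Sketch of both halves, taken from the files in this series:

/* in the SoC .dtsi */
uart5: serial@1e784000 {
	compatible = "ns16550a";
	reg = <0x1e784000 0x1000>;
	reg-shift = <2>;	/* 32-bit register stride */
	status = "disabled";
};

/* in the board .dts */
&uart5 {
	status = "okay";
};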
diff --git a/arch/arm/boot/dts/aspeed-bmc-opp-palmetto.dts b/arch/arm/boot/dts/aspeed-bmc-opp-palmetto.dts
new file mode 100644
index 000000000000..cc5fcf2940bf
--- /dev/null
+++ b/arch/arm/boot/dts/aspeed-bmc-opp-palmetto.dts
@@ -0,0 +1,25 @@
+/dts-v1/;
+
+#include "aspeed-g4.dtsi"
+
+/ {
+ model = "Palmetto BMC";
+ compatible = "tyan,palmetto-bmc", "aspeed,ast2400";
+
+ aliases {
+ serial4 = &uart5;
+ };
+
+ chosen {
+ stdout-path = &uart5;
+ bootargs = "console=ttyS4,38400 earlyprintk";
+ };
+
+ memory {
+ reg = <0x40000000 0x10000000>;
+ };
+};
+
+&uart5 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/aspeed-g4.dtsi b/arch/arm/boot/dts/aspeed-g4.dtsi
new file mode 100644
index 000000000000..22dee5937d5c
--- /dev/null
+++ b/arch/arm/boot/dts/aspeed-g4.dtsi
@@ -0,0 +1,161 @@
+#include "skeleton.dtsi"
+
+/ {
+ model = "Aspeed BMC";
+ compatible = "aspeed,ast2400";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ interrupt-parent = <&vic>;
+
+ cpus {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ cpu@0 {
+ compatible = "arm,arm926ej-s";
+ device_type = "cpu";
+ reg = <0>;
+ };
+ };
+
+ clocks {
+ clk_clkin: clk_clkin {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <48000000>;
+ };
+
+ };
+
+ ahb {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+
+ vic: interrupt-controller@1e6c0080 {
+ compatible = "aspeed,ast2400-vic";
+ interrupt-controller;
+ #interrupt-cells = <1>;
+ valid-sources = <0xffffffff 0x0007ffff>;
+ reg = <0x1e6c0080 0x80>;
+ };
+
+ apb {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+
+ clk_hpll: clk_hpll@1e6e2070 {
+ #clock-cells = <0>;
+ compatible = "aspeed,g4-hpll-clock";
+ reg = <0x1e6e2070 0x4>;
+ clocks = <&clk_clkin>;
+ };
+
+ clk_apb: clk_apb@1e6e2008 {
+ #clock-cells = <0>;
+ compatible = "aspeed,g4-apb-clock";
+ reg = <0x1e6e2008 0x4>;
+ clocks = <&clk_hpll>;
+ };
+
+ clk_uart: clk_uart@1e6e2008 {
+ #clock-cells = <0>;
+ compatible = "aspeed,uart-clock";
+ reg = <0x1e6e202c 0x4>;
+ };
+
+ sram@1e720000 {
+ compatible = "mmio-sram";
+ reg = <0x1e720000 0x8000>; // 32K
+ };
+
+ timer: timer@1e782000 {
+ compatible = "aspeed,ast2400-timer";
+ reg = <0x1e782000 0x90>;
+ // The moxart_timer driver registers only one
+ // interrupt and assumes it's for timer 1
+ //interrupts = <16 17 18 35 36 37 38 39>;
+ interrupts = <16>;
+ clocks = <&clk_apb>;
+ };
+
+ wdt1: wdt@1e785000 {
+ compatible = "aspeed,wdt";
+ reg = <0x1e785000 0x1c>;
+ interrupts = <27>;
+ };
+
+ wdt2: wdt@1e785020 {
+ compatible = "aspeed,wdt";
+ reg = <0x1e785020 0x1c>;
+ interrupts = <27>;
+ clocks = <&clk_apb>;
+ status = "disabled";
+ };
+
+ uart1: serial@1e783000 {
+ compatible = "ns16550a";
+ reg = <0x1e783000 0x1000>;
+ reg-shift = <2>;
+ interrupts = <9>;
+ clocks = <&clk_uart>;
+ no-loopback-test;
+ status = "disabled";
+ };
+
+ uart2: serial@1e78d000 {
+ compatible = "ns16550a";
+ reg = <0x1e78d000 0x1000>;
+ reg-shift = <2>;
+ interrupts = <32>;
+ clocks = <&clk_uart>;
+ no-loopback-test;
+ status = "disabled";
+ };
+
+ uart3: serial@1e78e000 {
+ compatible = "ns16550a";
+ reg = <0x1e78e000 0x1000>;
+ reg-shift = <2>;
+ interrupts = <33>;
+ clocks = <&clk_uart>;
+ no-loopback-test;
+ status = "disabled";
+ };
+
+ uart4: serial@1e78f000 {
+ compatible = "ns16550a";
+ reg = <0x1e78f000 0x1000>;
+ reg-shift = <2>;
+ interrupts = <34>;
+ clocks = <&clk_uart>;
+ no-loopback-test;
+ status = "disabled";
+ };
+
+ uart5: serial@1e784000 {
+ compatible = "ns16550a";
+ reg = <0x1e784000 0x1000>;
+ reg-shift = <2>;
+ interrupts = <10>;
+ clocks = <&clk_uart>;
+ current-speed = <38400>;
+ no-loopback-test;
+ status = "disabled";
+ };
+
+ uart6: serial@1e787000 {
+ compatible = "ns16550a";
+ reg = <0x1e787000 0x1000>;
+ reg-shift = <2>;
+ interrupts = <10>;
+ clocks = <&clk_uart>;
+ no-loopback-test;
+ status = "disabled";
+ };
+ };
+ };
+};
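One detail worth calling out in the dtsi above: the bare ranges; on the ahb and apb simple-bus nodes declares an identity mapping, so the reg values underneath are already full CPU physical addresses. Minimal sketch:

ahb {
	compatible = "simple-bus";
	#address-cells = <1>;
	#size-cells = <1>;
	ranges;		/* empty = 1:1 translation to the parent bus */
};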
diff --git a/arch/arm/boot/dts/aspeed-g5.dtsi b/arch/arm/boot/dts/aspeed-g5.dtsi
new file mode 100644
index 000000000000..dd94d9361fda
--- /dev/null
+++ b/arch/arm/boot/dts/aspeed-g5.dtsi
@@ -0,0 +1,170 @@
+#include "skeleton.dtsi"
+
+/ {
+ model = "Aspeed BMC";
+ compatible = "aspeed,ast2500";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ interrupt-parent = <&vic>;
+
+ cpus {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ cpu@0 {
+ compatible = "arm,arm1176jzf-s";
+ device_type = "cpu";
+ reg = <0>;
+ };
+ };
+
+ ahb {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+
+ vic: interrupt-controller@1e6c0080 {
+ compatible = "aspeed,ast2400-vic";
+ interrupt-controller;
+ #interrupt-cells = <1>;
+ valid-sources = <0xfefff7ff 0x0807ffff>;
+ reg = <0x1e6c0080 0x80>;
+ };
+
+ apb {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+
+ clk_clkin: clk_clkin@1e6e2070 {
+ #clock-cells = <0>;
+ compatible = "aspeed,g5-clkin-clock";
+ reg = <0x1e6e2070 0x04>;
+ };
+
+ clk_hpll: clk_hpll@1e6e2024 {
+ #clock-cells = <0>;
+ compatible = "aspeed,g5-hpll-clock";
+ reg = <0x1e6e2024 0x4>;
+ clocks = <&clk_clkin>;
+ };
+
+ clk_ahb: clk_ahb@1e6e2070 {
+ #clock-cells = <0>;
+ compatible = "aspeed,g5-ahb-clock";
+ reg = <0x1e6e2070 0x4>;
+ clocks = <&clk_hpll>;
+ };
+
+ clk_apb: clk_apb@1e6e2008 {
+ #clock-cells = <0>;
+ compatible = "aspeed,g5-apb-clock";
+ reg = <0x1e6e2008 0x4>;
+ clocks = <&clk_hpll>;
+ };
+
+ clk_uart: clk_uart@1e6e2008 {
+ #clock-cells = <0>;
+ compatible = "aspeed,uart-clock";
+ reg = <0x1e6e202c 0x4>;
+ };
+
+ sram@1e720000 {
+ compatible = "mmio-sram";
+ reg = <0x1e720000 0x9000>; // 36K
+ };
+
+ timer: timer@1e782000 {
+ compatible = "aspeed,ast2400-timer";
+ reg = <0x1e782000 0x90>;
+ // The moxart_timer driver registers only one
+ // interrupt and assumes it's for timer 1
+ //interrupts = <16 17 18 35 36 37 38 39>;
+ interrupts = <16>;
+ clocks = <&clk_apb>;
+ };
+
+ wdt1: wdt@1e785000 {
+ compatible = "aspeed,wdt";
+ reg = <0x1e785000 0x1c>;
+ interrupts = <27>;
+ };
+
+ wdt2: wdt@1e785020 {
+ compatible = "aspeed,wdt";
+ reg = <0x1e785020 0x1c>;
+ interrupts = <27>;
+ status = "disabled";
+ };
+
+ wdt3: wdt@1e785040 {
+ compatible = "aspeed,wdt";
+ reg = <0x1e785074 0x1c>;
+ status = "disabled";
+ };
+
+ uart1: serial@1e783000 {
+ compatible = "ns16550a";
+ reg = <0x1e783000 0x1000>;
+ reg-shift = <2>;
+ interrupts = <9>;
+ clocks = <&clk_uart>;
+ no-loopback-test;
+ status = "disabled";
+ };
+
+ uart2: serial@1e78d000 {
+ compatible = "ns16550a";
+ reg = <0x1e78d000 0x1000>;
+ reg-shift = <2>;
+ interrupts = <32>;
+ clocks = <&clk_uart>;
+ no-loopback-test;
+ status = "disabled";
+ };
+
+ uart3: serial@1e78e000 {
+ compatible = "ns16550a";
+ reg = <0x1e78e000 0x1000>;
+ reg-shift = <2>;
+ interrupts = <33>;
+ clocks = <&clk_uart>;
+ no-loopback-test;
+ status = "disabled";
+ };
+
+ uart4: serial@1e78f000 {
+ compatible = "ns16550a";
+ reg = <0x1e78f000 0x1000>;
+ reg-shift = <2>;
+ interrupts = <34>;
+ clocks = <&clk_uart>;
+ no-loopback-test;
+ status = "disabled";
+ };
+
+ uart5: serial@1e784000 {
+ compatible = "ns16550a";
+ reg = <0x1e784000 0x1000>;
+ reg-shift = <2>;
+ interrupts = <10>;
+ clocks = <&clk_uart>;
+ current-speed = <38400>;
+ no-loopback-test;
+ status = "disabled";
+ };
+
+ uart6: serial@1e787000 {
+ compatible = "ns16550a";
+ reg = <0x1e787000 0x1000>;
+ reg-shift = <2>;
+ interrupts = <10>;
+ clocks = <&clk_uart>;
+ no-loopback-test;
+ status = "disabled";
+ };
+ };
+ };
+};
diff --git a/arch/arm/boot/dts/at91-sama5d2_xplained.dts b/arch/arm/boot/dts/at91-sama5d2_xplained.dts
index 21c780fab761..eb4f1ac96271 100644
--- a/arch/arm/boot/dts/at91-sama5d2_xplained.dts
+++ b/arch/arm/boot/dts/at91-sama5d2_xplained.dts
@@ -234,6 +234,15 @@
};
};
+ shdwc@f8048010 {
+ atmel,shdwc-debouncer = <976>;
+
+ input@0 {
+ reg = <0>;
+ atmel,wakeup-type = "low";
+ };
+ };
+
watchdog@f8048040 {
status = "okay";
};
diff --git a/arch/arm/boot/dts/at91-vinco.dts b/arch/arm/boot/dts/at91-vinco.dts
index 79aec55e1ebc..6a366ee952a8 100644
--- a/arch/arm/boot/dts/at91-vinco.dts
+++ b/arch/arm/boot/dts/at91-vinco.dts
@@ -118,7 +118,7 @@
ethernet-phy@1 {
reg = <0x1>;
- reset-gpios = <&pioE 8 GPIO_ACTIVE_HIGH>;
+ reset-gpios = <&pioE 8 GPIO_ACTIVE_LOW>;
interrupt-parent = <&pioB>;
interrupts = <15 IRQ_TYPE_EDGE_FALLING>;
};
@@ -162,7 +162,7 @@
reg = <0x1>;
interrupt-parent = <&pioB>;
interrupts = <31 IRQ_TYPE_EDGE_FALLING>;
- reset-gpios = <&pioE 6 GPIO_ACTIVE_HIGH>;
+ reset-gpios = <&pioE 6 GPIO_ACTIVE_LOW>;
};
};
diff --git a/arch/arm/boot/dts/at91sam9g45.dtsi b/arch/arm/boot/dts/at91sam9g45.dtsi
index af8b708ac312..8837b7e4292c 100644
--- a/arch/arm/boot/dts/at91sam9g45.dtsi
+++ b/arch/arm/boot/dts/at91sam9g45.dtsi
@@ -978,7 +978,7 @@
trng@fffcc000 {
compatible = "atmel,at91sam9g45-trng";
- reg = <0xfffcc000 0x4000>;
+ reg = <0xfffcc000 0x100>;
interrupts = <6 IRQ_TYPE_LEVEL_HIGH 0>;
clocks = <&trng_clk>;
};
diff --git a/arch/arm/boot/dts/at91sam9x5.dtsi b/arch/arm/boot/dts/at91sam9x5.dtsi
index 0827d594b1f0..cd0cd5fd09a3 100644
--- a/arch/arm/boot/dts/at91sam9x5.dtsi
+++ b/arch/arm/boot/dts/at91sam9x5.dtsi
@@ -106,7 +106,7 @@
pmc: pmc@fffffc00 {
compatible = "atmel,at91sam9x5-pmc", "syscon";
- reg = <0xfffffc00 0x100>;
+ reg = <0xfffffc00 0x200>;
interrupts = <1 IRQ_TYPE_LEVEL_HIGH 7>;
interrupt-controller;
#address-cells = <1>;
diff --git a/arch/arm/boot/dts/bcm-cygnus.dtsi b/arch/arm/boot/dts/bcm-cygnus.dtsi
index 3878793364f0..b42fe5596b94 100644
--- a/arch/arm/boot/dts/bcm-cygnus.dtsi
+++ b/arch/arm/boot/dts/bcm-cygnus.dtsi
@@ -351,9 +351,16 @@
<&pinctrl 142 10 1>;
};
- touchscreen: tsc@180a6000 {
+ ts_adc_syscon: ts_adc_syscon@180a6000 {
+ compatible = "brcm,iproc-ts-adc-syscon", "syscon";
+ reg = <0x180a6000 0xc30>;
+ };
+
+ touchscreen: touchscreen@180a6000 {
compatible = "brcm,iproc-touchscreen";
- reg = <0x180a6000 0x40>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ts_syscon = <&ts_adc_syscon>;
clocks = <&asiu_clks BCM_CYGNUS_ASIU_ADC_CLK>;
clock-names = "tsc_clk";
interrupts = <GIC_SPI 164 IRQ_TYPE_LEVEL_HIGH>;
diff --git a/arch/arm/boot/dts/bcm2835-rpi-a-plus.dts b/arch/arm/boot/dts/bcm2835-rpi-a-plus.dts
index 228614ffff44..35ff4e7a4aac 100644
--- a/arch/arm/boot/dts/bcm2835-rpi-a-plus.dts
+++ b/arch/arm/boot/dts/bcm2835-rpi-a-plus.dts
@@ -29,3 +29,7 @@
brcm,function = <BCM2835_FSEL_ALT0>;
};
};
+
+&hdmi {
+ hpd-gpios = <&gpio 46 GPIO_ACTIVE_LOW>;
+};
diff --git a/arch/arm/boot/dts/bcm2835-rpi-a.dts b/arch/arm/boot/dts/bcm2835-rpi-a.dts
index ddbbbbd42dda..306a84ee9898 100644
--- a/arch/arm/boot/dts/bcm2835-rpi-a.dts
+++ b/arch/arm/boot/dts/bcm2835-rpi-a.dts
@@ -22,3 +22,7 @@
brcm,function = <BCM2835_FSEL_ALT2>;
};
};
+
+&hdmi {
+ hpd-gpios = <&gpio 46 GPIO_ACTIVE_HIGH>;
+};
diff --git a/arch/arm/boot/dts/bcm2835-rpi-b-plus.dts b/arch/arm/boot/dts/bcm2835-rpi-b-plus.dts
index ef5405025223..57d313b6afaf 100644
--- a/arch/arm/boot/dts/bcm2835-rpi-b-plus.dts
+++ b/arch/arm/boot/dts/bcm2835-rpi-b-plus.dts
@@ -29,3 +29,7 @@
brcm,function = <BCM2835_FSEL_ALT0>;
};
};
+
+&hdmi {
+ hpd-gpios = <&gpio 46 GPIO_ACTIVE_LOW>;
+};
diff --git a/arch/arm/boot/dts/bcm2835-rpi-b-rev2.dts b/arch/arm/boot/dts/bcm2835-rpi-b-rev2.dts
index 86f1f2f598a7..cf2774ec0834 100644
--- a/arch/arm/boot/dts/bcm2835-rpi-b-rev2.dts
+++ b/arch/arm/boot/dts/bcm2835-rpi-b-rev2.dts
@@ -22,3 +22,7 @@
brcm,function = <BCM2835_FSEL_ALT2>;
};
};
+
+&hdmi {
+ hpd-gpios = <&gpio 46 GPIO_ACTIVE_LOW>;
+};
diff --git a/arch/arm/boot/dts/bcm2835-rpi-b.dts b/arch/arm/boot/dts/bcm2835-rpi-b.dts
index 4859e9d81b23..8b15f9c35643 100644
--- a/arch/arm/boot/dts/bcm2835-rpi-b.dts
+++ b/arch/arm/boot/dts/bcm2835-rpi-b.dts
@@ -16,3 +16,7 @@
&gpio {
pinctrl-0 = <&gpioout &alt0 &alt3>;
};
+
+&hdmi {
+ hpd-gpios = <&gpio 46 GPIO_ACTIVE_HIGH>;
+};
diff --git a/arch/arm/boot/dts/bcm2835-rpi.dtsi b/arch/arm/boot/dts/bcm2835-rpi.dtsi
index 76bdbcafab18..caf2707680c1 100644
--- a/arch/arm/boot/dts/bcm2835-rpi.dtsi
+++ b/arch/arm/boot/dts/bcm2835-rpi.dtsi
@@ -74,3 +74,12 @@
&usb {
power-domains = <&power RPI_POWER_DOMAIN_USB>;
};
+
+&v3d {
+ power-domains = <&power RPI_POWER_DOMAIN_V3D>;
+};
+
+&hdmi {
+ power-domains = <&power RPI_POWER_DOMAIN_HDMI>;
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/bcm2835.dtsi b/arch/arm/boot/dts/bcm2835.dtsi
index b83b32639358..a78759e73710 100644
--- a/arch/arm/boot/dts/bcm2835.dtsi
+++ b/arch/arm/boot/dts/bcm2835.dtsi
@@ -3,6 +3,17 @@
/ {
compatible = "brcm,bcm2835";
+ cpus {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ cpu@0 {
+ device_type = "cpu";
+ compatible = "arm,arm1176jzf-s";
+ reg = <0x0>;
+ };
+ };
+
soc {
ranges = <0x7e000000 0x20000000 0x02000000>;
dma-ranges = <0x40000000 0x00000000 0x20000000>;
diff --git a/arch/arm/boot/dts/bcm2836-rpi-2-b.dts b/arch/arm/boot/dts/bcm2836-rpi-2-b.dts
index ff946661bd13..c4743f42237b 100644
--- a/arch/arm/boot/dts/bcm2836-rpi-2-b.dts
+++ b/arch/arm/boot/dts/bcm2836-rpi-2-b.dts
@@ -33,3 +33,7 @@
brcm,function = <BCM2835_FSEL_ALT0>;
};
};
+
+&hdmi {
+ hpd-gpios = <&gpio 46 GPIO_ACTIVE_LOW>;
+};
diff --git a/arch/arm/boot/dts/bcm283x.dtsi b/arch/arm/boot/dts/bcm283x.dtsi
index 8aaf193711bf..10b27b912bac 100644
--- a/arch/arm/boot/dts/bcm283x.dtsi
+++ b/arch/arm/boot/dts/bcm283x.dtsi
@@ -1,6 +1,7 @@
#include <dt-bindings/pinctrl/bcm2835.h>
#include <dt-bindings/clock/bcm2835.h>
#include <dt-bindings/clock/bcm2835-aux.h>
+#include <dt-bindings/gpio/gpio.h>
#include "skeleton.dtsi"
/* This include file covers the common peripherals and configuration between
@@ -47,9 +48,29 @@
<1 24>,
<1 25>,
<1 26>,
+ /* dma channel 11-14 share one irq */
<1 27>,
+ <1 27>,
+ <1 27>,
+ <1 27>,
+ /* unused shared irq for all channels */
<1 28>;
-
+ interrupt-names = "dma0",
+ "dma1",
+ "dma2",
+ "dma3",
+ "dma4",
+ "dma5",
+ "dma6",
+ "dma7",
+ "dma8",
+ "dma9",
+ "dma10",
+ "dma11",
+ "dma12",
+ "dma13",
+ "dma14",
+ "dma-shared-all";
#dma-cells = <1>;
brcm,dma-channel-mask = <0x7f35>;
};
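For the DMA hunk above, note that interrupt-names pairs positionally with interrupts: entry N of one list describes entry N of the other, which is why channels 11 to 14 repeat the same <1 27> specifier under four different names. A generic sketch (node name and values illustrative only):

dma: dma@7e007000 {
	interrupts = <1 16>, <1 17>, <1 18>;
	interrupt-names = "dma0", "dma1", "dma2";	/* name N maps to entry N */
};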
@@ -153,6 +174,18 @@
status = "disabled";
};
+ pixelvalve@7e206000 {
+ compatible = "brcm,bcm2835-pixelvalve0";
+ reg = <0x7e206000 0x100>;
+ interrupts = <2 13>; /* pwa0 */
+ };
+
+ pixelvalve@7e207000 {
+ compatible = "brcm,bcm2835-pixelvalve1";
+ reg = <0x7e207000 0x100>;
+ interrupts = <2 14>; /* pwa1 */
+ };
+
aux: aux@0x7e215000 {
compatible = "brcm,bcm2835-aux";
#clock-cells = <1>;
@@ -206,6 +239,12 @@
status = "disabled";
};
+ hvs@7e400000 {
+ compatible = "brcm,bcm2835-hvs";
+ reg = <0x7e400000 0x6000>;
+ interrupts = <2 1>;
+ };
+
i2c1: i2c@7e804000 {
compatible = "brcm,bcm2835-i2c";
reg = <0x7e804000 0x1000>;
@@ -226,11 +265,39 @@
status = "disabled";
};
+ pixelvalve@7e807000 {
+ compatible = "brcm,bcm2835-pixelvalve2";
+ reg = <0x7e807000 0x100>;
+ interrupts = <2 10>; /* pixelvalve */
+ };
+
+ hdmi: hdmi@7e902000 {
+ compatible = "brcm,bcm2835-hdmi";
+ reg = <0x7e902000 0x600>,
+ <0x7e808000 0x100>;
+ interrupts = <2 8>, <2 9>;
+ ddc = <&i2c2>;
+ clocks = <&clocks BCM2835_PLLH_PIX>,
+ <&clocks BCM2835_CLOCK_HSM>;
+ clock-names = "pixel", "hdmi";
+ status = "disabled";
+ };
+
usb: usb@7e980000 {
compatible = "brcm,bcm2835-usb";
reg = <0x7e980000 0x10000>;
interrupts = <1 9>;
};
+
+ v3d: v3d@7ec00000 {
+ compatible = "brcm,bcm2835-v3d";
+ reg = <0x7ec00000 0x1000>;
+ interrupts = <1 10>;
+ };
+
+ vc4: gpu {
+ compatible = "brcm,bcm2835-vc4";
+ };
};
clocks {
diff --git a/arch/arm/boot/dts/bcm4708-buffalo-wzr-1750dhp.dts b/arch/arm/boot/dts/bcm4708-buffalo-wzr-1750dhp.dts
index 42dcdfb769b2..5087aa81efb1 100644
--- a/arch/arm/boot/dts/bcm4708-buffalo-wzr-1750dhp.dts
+++ b/arch/arm/boot/dts/bcm4708-buffalo-wzr-1750dhp.dts
@@ -17,7 +17,7 @@
model = "Buffalo WZR-1750DHP (BCM4708)";
chosen {
- bootargs = "console=ttyS0,115200";
+ bootargs = "console=ttyS0,115200 earlycon";
};
memory {
@@ -139,3 +139,11 @@
&uart0 {
status = "okay";
};
+
+&usb2 {
+ vcc-gpio = <&chipcommon 9 GPIO_ACTIVE_HIGH>;
+};
+
+&usb3 {
+ vcc-gpio = <&chipcommon 10 GPIO_ACTIVE_LOW>;
+};
diff --git a/arch/arm/boot/dts/bcm4708-luxul-xwc-1000.dts b/arch/arm/boot/dts/bcm4708-luxul-xwc-1000.dts
index f18e80e0b61d..1c7e53d60aa4 100644
--- a/arch/arm/boot/dts/bcm4708-luxul-xwc-1000.dts
+++ b/arch/arm/boot/dts/bcm4708-luxul-xwc-1000.dts
@@ -17,7 +17,7 @@
model = "Luxul XWC-1000 (BCM4708)";
chosen {
- bootargs = "console=ttyS0,115200";
+ bootargs = "console=ttyS0,115200 earlycon";
};
memory {
@@ -59,3 +59,7 @@
&uart0 {
status = "okay";
};
+
+&spi_nor {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/bcm4708-netgear-r6250.dts b/arch/arm/boot/dts/bcm4708-netgear-r6250.dts
index ca92bba6a8c5..1049ab108b32 100644
--- a/arch/arm/boot/dts/bcm4708-netgear-r6250.dts
+++ b/arch/arm/boot/dts/bcm4708-netgear-r6250.dts
@@ -17,24 +17,13 @@
model = "Netgear R6250 V1 (BCM4708)";
chosen {
- bootargs = "console=ttyS0,115200";
+ bootargs = "console=ttyS0,115200 earlycon";
};
memory {
reg = <0x00000000 0x08000000>;
};
- axi@18000000 {
- usb3@23000 {
- reg = <0x00023000 0x1000>;
-
- #address-cells = <1>;
- #size-cells = <1>;
-
- vcc-gpio = <&chipcommon 0 GPIO_ACTIVE_HIGH>;
- };
- };
-
leds {
compatible = "gpio-leds";
@@ -97,3 +86,7 @@
&uart0 {
status = "okay";
};
+
+&usb3 {
+ vcc-gpio = <&chipcommon 0 GPIO_ACTIVE_HIGH>;
+};
diff --git a/arch/arm/boot/dts/bcm4708-smartrg-sr400ac.dts b/arch/arm/boot/dts/bcm4708-smartrg-sr400ac.dts
index 64a5e8ab65e0..8b0c440b2e71 100644
--- a/arch/arm/boot/dts/bcm4708-smartrg-sr400ac.dts
+++ b/arch/arm/boot/dts/bcm4708-smartrg-sr400ac.dts
@@ -17,7 +17,7 @@
model = "SmartRG SR400ac";
chosen {
- bootargs = "console=ttyS0,115200";
+ bootargs = "console=ttyS0,115200 earlycon";
};
memory {
@@ -122,3 +122,7 @@
&uart0 {
status = "okay";
};
+
+&spi_nor {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/bcm47081-buffalo-wzr-600dhp2.dts b/arch/arm/boot/dts/bcm47081-buffalo-wzr-600dhp2.dts
index 38f0c00d1aca..a9c8defed4d3 100644
--- a/arch/arm/boot/dts/bcm47081-buffalo-wzr-600dhp2.dts
+++ b/arch/arm/boot/dts/bcm47081-buffalo-wzr-600dhp2.dts
@@ -17,7 +17,7 @@
model = "Buffalo WZR-600DHP2 (BCM47081)";
chosen {
- bootargs = "console=ttyS0,115200";
+ bootargs = "console=ttyS0,115200 earlycon";
};
memory {
diff --git a/arch/arm/boot/dts/bcm4709-buffalo-wxr-1900dhp.dts b/arch/arm/boot/dts/bcm4709-buffalo-wxr-1900dhp.dts
index 2a92e8d5ab34..791d7225c733 100644
--- a/arch/arm/boot/dts/bcm4709-buffalo-wxr-1900dhp.dts
+++ b/arch/arm/boot/dts/bcm4709-buffalo-wxr-1900dhp.dts
@@ -126,3 +126,8 @@
};
};
};
+
+
+&usb2 {
+ vcc-gpio = <&chipcommon 13 GPIO_ACTIVE_HIGH>;
+};
diff --git a/arch/arm/boot/dts/bcm4709-netgear-r8000.dts b/arch/arm/boot/dts/bcm4709-netgear-r8000.dts
index b52927c94e35..ca181516c28a 100644
--- a/arch/arm/boot/dts/bcm4709-netgear-r8000.dts
+++ b/arch/arm/boot/dts/bcm4709-netgear-r8000.dts
@@ -106,3 +106,11 @@
};
};
};
+
+&usb2 {
+ vcc-gpio = <&chipcommon 0 GPIO_ACTIVE_HIGH>;
+};
+
+&usb3 {
+ vcc-gpio = <&chipcommon 0 GPIO_ACTIVE_HIGH>;
+};
diff --git a/arch/arm/boot/dts/bcm47094-dlink-dir-885l.dts b/arch/arm/boot/dts/bcm47094-dlink-dir-885l.dts
index 6c83538bc2d7..ace38efd2db3 100644
--- a/arch/arm/boot/dts/bcm47094-dlink-dir-885l.dts
+++ b/arch/arm/boot/dts/bcm47094-dlink-dir-885l.dts
@@ -17,7 +17,7 @@
model = "D-Link DIR-885L";
chosen {
- bootargs = "console=ttyS0,115200";
+ bootargs = "console=ttyS0,115200 earlycon";
};
memory {
@@ -109,3 +109,7 @@
status = "okay";
clock-frequency = <125000000>;
};
+
+&usb3 {
+ vcc-gpio = <&chipcommon 18 GPIO_ACTIVE_HIGH>;
+};
diff --git a/arch/arm/boot/dts/bcm5301x.dtsi b/arch/arm/boot/dts/bcm5301x.dtsi
index 65a1309bd6e2..7d4d29bf0ed3 100644
--- a/arch/arm/boot/dts/bcm5301x.dtsi
+++ b/arch/arm/boot/dts/bcm5301x.dtsi
@@ -18,6 +18,10 @@
/ {
interrupt-parent = <&gic>;
+ chosen {
+ stdout-path = &uart0;
+ };
+
chipcommonA {
compatible = "simple-bus";
ranges = <0x00000000 0x18000000 0x00001000>;
@@ -207,6 +211,34 @@
gpio-controller;
#gpio-cells = <2>;
};
+
+ usb2: usb2@21000 {
+ reg = <0x00021000 0x1000>;
+
+ #address-cells = <1>;
+ #size-cells = <1>;
+ };
+
+ usb3: usb3@23000 {
+ reg = <0x00023000 0x1000>;
+
+ #address-cells = <1>;
+ #size-cells = <1>;
+ };
+
+ spi@29000 {
+ reg = <0x00029000 0x1000>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ spi_nor: spi-nor@0 {
+ compatible = "jedec,spi-nor";
+ reg = <0>;
+ spi-max-frequency = <20000000>;
+ linux,part-probe = "ofpart", "bcm47xxpart";
+ status = "disabled";
+ };
+ };
};
lcpll0: lcpll0@1800c100 {
diff --git a/arch/arm/boot/dts/cros-adc-thermistors.dtsi b/arch/arm/boot/dts/cros-adc-thermistors.dtsi
index acd4fe1833f2..ce7fca76b0d6 100644
--- a/arch/arm/boot/dts/cros-adc-thermistors.dtsi
+++ b/arch/arm/boot/dts/cros-adc-thermistors.dtsi
@@ -13,28 +13,28 @@
*/
&adc {
- ncp15wb473@3 {
+ thermistor3 {
compatible = "murata,ncp15wb473";
pullup-uv = <1800000>;
pullup-ohm = <47000>;
pulldown-ohm = <0>;
io-channels = <&adc 3>;
};
- ncp15wb473@4 {
+ thermistor4 {
compatible = "murata,ncp15wb473";
pullup-uv = <1800000>;
pullup-ohm = <47000>;
pulldown-ohm = <0>;
io-channels = <&adc 4>;
};
- ncp15wb473@5 {
+ thermistor5 {
compatible = "murata,ncp15wb473";
pullup-uv = <1800000>;
pullup-ohm = <47000>;
pulldown-ohm = <0>;
io-channels = <&adc 5>;
};
- ncp15wb473@6 {
+ thermistor6 {
compatible = "murata,ncp15wb473";
pullup-uv = <1800000>;
pullup-ohm = <47000>;
diff --git a/arch/arm/boot/dts/da850-enbw-cmc.dts b/arch/arm/boot/dts/da850-enbw-cmc.dts
index 645549e14237..14dff3e188ed 100644
--- a/arch/arm/boot/dts/da850-enbw-cmc.dts
+++ b/arch/arm/boot/dts/da850-enbw-cmc.dts
@@ -16,14 +16,20 @@
compatible = "enbw,cmc", "ti,da850";
model = "EnBW CMC";
- soc {
- serial0: serial@1c42000 {
+ soc@1c00000 {
+ serial0: serial@42000 {
status = "okay";
};
- serial1: serial@1d0c000 {
+ serial1: serial@10c000 {
status = "okay";
};
- serial2: serial@1d0d000 {
+ serial2: serial@10d000 {
+ status = "okay";
+ };
+ mdio: mdio@224000 {
+ status = "okay";
+ };
+ eth0: ethernet@220000 {
status = "okay";
};
};
diff --git a/arch/arm/boot/dts/da850-evm.dts b/arch/arm/boot/dts/da850-evm.dts
index ef061e9a2315..1a15db8e376b 100644
--- a/arch/arm/boot/dts/da850-evm.dts
+++ b/arch/arm/boot/dts/da850-evm.dts
@@ -14,8 +14,8 @@
compatible = "ti,da850-evm", "ti,da850";
model = "DA850/AM1808/OMAP-L138 EVM";
- soc {
- pmx_core: pinmux@1c14120 {
+ soc@1c00000 {
+ pmx_core: pinmux@14120 {
status = "okay";
mcasp0_pins: pinmux_mcasp0_pins {
@@ -30,19 +30,19 @@
>;
};
};
- serial0: serial@1c42000 {
+ serial0: serial@42000 {
status = "okay";
};
- serial1: serial@1d0c000 {
+ serial1: serial@10c000 {
status = "okay";
};
- serial2: serial@1d0d000 {
+ serial2: serial@10d000 {
status = "okay";
};
- rtc0: rtc@1c23000 {
+ rtc0: rtc@23000 {
status = "okay";
};
- i2c0: i2c@1c22000 {
+ i2c0: i2c@22000 {
status = "okay";
clock-frequency = <100000>;
pinctrl-names = "default";
@@ -66,17 +66,17 @@
};
};
- wdt: wdt@1c21000 {
+ wdt: wdt@21000 {
status = "okay";
};
- mmc0: mmc@1c40000 {
+ mmc0: mmc@40000 {
max-frequency = <50000000>;
bus-width = <4>;
status = "okay";
pinctrl-names = "default";
pinctrl-0 = <&mmc0_pins>;
};
- spi1: spi@1f0e000 {
+ spi1: spi@30e000 {
status = "okay";
pinctrl-names = "default";
pinctrl-0 = <&spi1_pins &spi1_cs0_pin>;
@@ -116,18 +116,18 @@
};
};
};
- mdio: mdio@1e24000 {
+ mdio: mdio@224000 {
status = "okay";
pinctrl-names = "default";
pinctrl-0 = <&mdio_pins>;
bus_freq = <2200000>;
};
- eth0: ethernet@1e20000 {
+ eth0: ethernet@220000 {
status = "okay";
pinctrl-names = "default";
pinctrl-0 = <&mii_pins>;
};
- gpio: gpio@1e26000 {
+ gpio: gpio@226000 {
status = "okay";
};
};
diff --git a/arch/arm/boot/dts/da850.dtsi b/arch/arm/boot/dts/da850.dtsi
index 226cda76e77c..25f0f8e6dde5 100644
--- a/arch/arm/boot/dts/da850.dtsi
+++ b/arch/arm/boot/dts/da850.dtsi
@@ -15,15 +15,15 @@
#address-cells = <1>;
#size-cells = <1>;
ranges;
- intc: interrupt-controller {
+ intc: interrupt-controller@fffee000 {
compatible = "ti,cp-intc";
interrupt-controller;
#interrupt-cells = <1>;
- ti,intc-size = <100>;
+ ti,intc-size = <101>;
reg = <0xfffee000 0x2000>;
};
};
- soc {
+ soc@1c00000 {
compatible = "simple-bus";
model = "da850";
#address-cells = <1>;
@@ -31,7 +31,7 @@
ranges = <0x0 0x01c00000 0x400000>;
interrupt-parent = <&intc>;
- pmx_core: pinmux@1c14120 {
+ pmx_core: pinmux@14120 {
compatible = "pinctrl-single";
reg = <0x14120 0x50>;
#address-cells = <1>;
@@ -63,6 +63,12 @@
0x10 0x00002200 0x0000ff00
>;
};
+ i2c1_pins: pinmux_i2c1_pins {
+ pinctrl-single,bits = <
+ /* I2C1_SDA, I2C1_SCL */
+ 0x10 0x00440000 0x00ff0000
+ >;
+ };
mmc0_pins: pinmux_mmc_pins {
pinctrl-single,bits = <
/* MMCSD0_DAT[3] MMCSD0_DAT[2]
@@ -114,7 +120,19 @@
0x4 0x00000004 0x0000000f
>;
};
- spi1_pins: pinmux_spi_pins {
+ spi0_pins: pinmux_spi0_pins {
+ pinctrl-single,bits = <
+ /* SIMO, SOMI, CLK */
+ 0xc 0x00001101 0x0000ff0f
+ >;
+ };
+ spi0_cs0_pin: pinmux_spi0_cs0 {
+ pinctrl-single,bits = <
+ /* CS0 */
+ 0x10 0x00000010 0x000000f0
+ >;
+ };
+ spi1_pins: pinmux_spi1_pins {
pinctrl-single,bits = <
/* SIMO, SOMI, CLK */
0x14 0x00110100 0x00ff0f00
@@ -150,7 +168,7 @@
};
};
- edma0: edma@01c00000 {
+ edma0: edma@0 {
compatible = "ti,edma3-tpcc";
/* eDMA3 CC0: 0x01c0 0000 - 0x01c0 7fff */
reg = <0x0 0x8000>;
@@ -161,19 +179,19 @@
ti,tptcs = <&edma0_tptc0 7>, <&edma0_tptc1 0>;
};
- edma0_tptc0: tptc@01c08000 {
+ edma0_tptc0: tptc@8000 {
compatible = "ti,edma3-tptc";
reg = <0x8000 0x400>;
interrupts = <13>;
interrupt-names = "edm3_tcerrint";
};
- edma0_tptc1: tptc@01c08400 {
+ edma0_tptc1: tptc@8400 {
compatible = "ti,edma3-tptc";
reg = <0x8400 0x400>;
interrupts = <32>;
interrupt-names = "edm3_tcerrint";
};
- edma1: edma@01e30000 {
+ edma1: edma@230000 {
compatible = "ti,edma3-tpcc";
/* eDMA3 CC1: 0x01e3 0000 - 0x01e3 7fff */
reg = <0x230000 0x8000>;
@@ -184,41 +202,41 @@
ti,tptcs = <&edma1_tptc0 7>;
};
- edma1_tptc0: tptc@01e38000 {
+ edma1_tptc0: tptc@238000 {
compatible = "ti,edma3-tptc";
reg = <0x238000 0x400>;
interrupts = <95>;
interrupt-names = "edm3_tcerrint";
};
- serial0: serial@1c42000 {
+ serial0: serial@42000 {
compatible = "ns16550a";
reg = <0x42000 0x100>;
reg-shift = <2>;
interrupts = <25>;
status = "disabled";
};
- serial1: serial@1d0c000 {
+ serial1: serial@10c000 {
compatible = "ns16550a";
reg = <0x10c000 0x100>;
reg-shift = <2>;
interrupts = <53>;
status = "disabled";
};
- serial2: serial@1d0d000 {
+ serial2: serial@10d000 {
compatible = "ns16550a";
reg = <0x10d000 0x100>;
reg-shift = <2>;
interrupts = <61>;
status = "disabled";
};
- rtc0: rtc@1c23000 {
+ rtc0: rtc@23000 {
compatible = "ti,da830-rtc";
reg = <0x23000 0x1000>;
interrupts = <19
19>;
status = "disabled";
};
- i2c0: i2c@1c22000 {
+ i2c0: i2c@22000 {
compatible = "ti,davinci-i2c";
reg = <0x22000 0x1000>;
interrupts = <15>;
@@ -226,12 +244,20 @@
#size-cells = <0>;
status = "disabled";
};
- wdt: wdt@1c21000 {
+ i2c1: i2c@228000 {
+ compatible = "ti,davinci-i2c";
+ reg = <0x228000 0x1000>;
+ interrupts = <51>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "disabled";
+ };
+ wdt: wdt@21000 {
compatible = "ti,davinci-wdt";
reg = <0x21000 0x1000>;
status = "disabled";
};
- mmc0: mmc@1c40000 {
+ mmc0: mmc@40000 {
compatible = "ti,da830-mmc";
reg = <0x40000 0x1000>;
interrupts = <16>;
@@ -239,7 +265,7 @@
dma-names = "rx", "tx";
status = "disabled";
};
- mmc1: mmc@1e1b000 {
+ mmc1: mmc@21b000 {
compatible = "ti,da830-mmc";
reg = <0x21b000 0x1000>;
interrupts = <72>;
@@ -247,37 +273,47 @@
dma-names = "rx", "tx";
status = "disabled";
};
- ehrpwm0: ehrpwm@01f00000 {
+ ehrpwm0: pwm@300000 {
compatible = "ti,da850-ehrpwm", "ti,am33xx-ehrpwm";
#pwm-cells = <3>;
reg = <0x300000 0x2000>;
status = "disabled";
};
- ehrpwm1: ehrpwm@01f02000 {
+ ehrpwm1: pwm@302000 {
compatible = "ti,da850-ehrpwm", "ti,am33xx-ehrpwm";
#pwm-cells = <3>;
reg = <0x302000 0x2000>;
status = "disabled";
};
- ecap0: ecap@01f06000 {
+ ecap0: ecap@306000 {
compatible = "ti,da850-ecap", "ti,am33xx-ecap";
#pwm-cells = <3>;
reg = <0x306000 0x80>;
status = "disabled";
};
- ecap1: ecap@01f07000 {
+ ecap1: ecap@307000 {
compatible = "ti,da850-ecap", "ti,am33xx-ecap";
#pwm-cells = <3>;
reg = <0x307000 0x80>;
status = "disabled";
};
- ecap2: ecap@01f08000 {
+ ecap2: ecap@308000 {
compatible = "ti,da850-ecap", "ti,am33xx-ecap";
#pwm-cells = <3>;
reg = <0x308000 0x80>;
status = "disabled";
};
- spi1: spi@1f0e000 {
+ spi0: spi@41000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ compatible = "ti,da830-spi";
+ reg = <0x41000 0x1000>;
+ num-cs = <6>;
+ ti,davinci-spi-intr-line = <1>;
+ interrupts = <20>;
+ status = "disabled";
+ };
+ spi1: spi@30e000 {
#address-cells = <1>;
#size-cells = <0>;
compatible = "ti,da830-spi";
@@ -289,13 +325,14 @@
dma-names = "rx", "tx";
status = "disabled";
};
- mdio: mdio@1e24000 {
+ mdio: mdio@224000 {
compatible = "ti,davinci_mdio";
#address-cells = <1>;
#size-cells = <0>;
reg = <0x224000 0x1000>;
+ status = "disabled";
};
- eth0: ethernet@1e20000 {
+ eth0: ethernet@220000 {
compatible = "ti,davinci-dm6467-emac";
reg = <0x220000 0x4000>;
ti,davinci-ctrl-reg-offset = <0x3000>;
@@ -308,10 +345,12 @@
35
36
>;
+ status = "disabled";
};
- gpio: gpio@1e26000 {
+ gpio: gpio@226000 {
compatible = "ti,dm6441-gpio";
gpio-controller;
+ #gpio-cells = <2>;
reg = <0x226000 0x1000>;
interrupts = <42 IRQ_TYPE_EDGE_BOTH
43 IRQ_TYPE_EDGE_BOTH 44 IRQ_TYPE_EDGE_BOTH
@@ -323,7 +362,7 @@
status = "disabled";
};
- mcasp0: mcasp@01d00000 {
+ mcasp0: mcasp@100000 {
compatible = "ti,da830-mcasp-audio";
reg = <0x100000 0x2000>,
<0x102000 0x400000>;
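For the pinmux groups above, each pinctrl-single,bits line is a <register-offset value mask> triple: only the bits set in the mask are rewritten. The new i2c1 group, for example, writes 0x44 into bits [23:16] of the register at offset 0x10, i.e. two 4-bit mux fields:

i2c1_pins: pinmux_i2c1_pins {
	pinctrl-single,bits = <
		/* I2C1_SDA, I2C1_SCL */
		0x10 0x00440000 0x00ff0000
	>;
};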
diff --git a/arch/arm/boot/dts/dm814x-clocks.dtsi b/arch/arm/boot/dts/dm814x-clocks.dtsi
index 792a64ee0df7..c4671af0a28d 100644
--- a/arch/arm/boot/dts/dm814x-clocks.dtsi
+++ b/arch/arm/boot/dts/dm814x-clocks.dtsi
@@ -156,7 +156,7 @@
};
&pllss_clocks {
- timer1_fck: timer1_fck {
+ timer1_fck: timer1_fck@2e0 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sysclk18_ck &aud_clkin0_ck &aud_clkin1_ck
@@ -165,7 +165,7 @@
reg = <0x2e0>;
};
- timer2_fck: timer2_fck {
+ timer2_fck: timer2_fck@2e0 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sysclk18_ck &aud_clkin0_ck &aud_clkin1_ck
@@ -192,7 +192,7 @@
clock-frequency = <125000000>;
};
- sysclk18_ck: sysclk18_ck {
+ sysclk18_ck: sysclk18_ck@2f0 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&rtcosc_ck>, <&rtcdivider_ck>;
@@ -202,7 +202,7 @@
};
&scm_clocks {
- devosc_ck: devosc_ck {
+ devosc_ck: devosc_ck@40 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&virt_20000000_ck>, <&virt_19200000_ck>;
@@ -259,7 +259,7 @@
clock-div = <1>;
};
- mpu_clksrc_ck: mpu_clksrc_ck {
+ mpu_clksrc_ck: mpu_clksrc_ck@40 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&devosc_ck>, <&rtcdivider_ck>;
diff --git a/arch/arm/boot/dts/dm814x.dtsi b/arch/arm/boot/dts/dm814x.dtsi
index 4a6ce8c8bf8f..d4537dc61497 100644
--- a/arch/arm/boot/dts/dm814x.dtsi
+++ b/arch/arm/boot/dts/dm814x.dtsi
@@ -568,6 +568,8 @@
#size-cells = <1>;
interrupt-controller;
#interrupt-cells = <2>;
+ gpio-controller;
+ #gpio-cells = <2>;
};
};
};
diff --git a/arch/arm/boot/dts/dm816x-clocks.dtsi b/arch/arm/boot/dts/dm816x-clocks.dtsi
index 50d9d338fbe9..51865eb84a80 100644
--- a/arch/arm/boot/dts/dm816x-clocks.dtsi
+++ b/arch/arm/boot/dts/dm816x-clocks.dtsi
@@ -86,7 +86,7 @@
/* 0x48180000 */
&prcm_clocks {
- clkout_pre_ck: clkout_pre_ck {
+ clkout_pre_ck: clkout_pre_ck@100 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&main_fapll 5 &ddr_fapll 1 &video_fapll 1
@@ -94,7 +94,7 @@
reg = <0x100>;
};
- clkout_div_ck: clkout_div_ck {
+ clkout_div_ck: clkout_div_ck@100 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&clkout_pre_ck>;
@@ -103,7 +103,7 @@
reg = <0x100>;
};
- clkout_ck: clkout_ck {
+ clkout_ck: clkout_ck@100 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&clkout_div_ck>;
@@ -112,7 +112,7 @@
};
/* CM_DPLL clocks p1795 */
- sysclk1_ck: sysclk1_ck {
+ sysclk1_ck: sysclk1_ck@300 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&main_fapll 1>;
@@ -120,7 +120,7 @@
reg = <0x0300>;
};
- sysclk2_ck: sysclk2_ck {
+ sysclk2_ck: sysclk2_ck@304 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&main_fapll 2>;
@@ -128,7 +128,7 @@
reg = <0x0304>;
};
- sysclk3_ck: sysclk3_ck {
+ sysclk3_ck: sysclk3_ck@308 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&main_fapll 3>;
@@ -136,7 +136,7 @@
reg = <0x0308>;
};
- sysclk4_ck: sysclk4_ck {
+ sysclk4_ck: sysclk4_ck@30c {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&main_fapll 4>;
@@ -144,7 +144,7 @@
reg = <0x030c>;
};
- sysclk5_ck: sysclk5_ck {
+ sysclk5_ck: sysclk5_ck@310 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&sysclk4_ck>;
@@ -152,7 +152,7 @@
reg = <0x0310>;
};
- sysclk6_ck: sysclk6_ck {
+ sysclk6_ck: sysclk6_ck@314 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&main_fapll 4>;
@@ -160,7 +160,7 @@
reg = <0x0314>;
};
- sysclk10_ck: sysclk10_ck {
+ sysclk10_ck: sysclk10_ck@324 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&ddr_fapll 2>;
@@ -168,7 +168,7 @@
reg = <0x0324>;
};
- sysclk24_ck: sysclk24_ck {
+ sysclk24_ck: sysclk24_ck@3b4 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&main_fapll 5>;
@@ -176,7 +176,7 @@
reg = <0x03b4>;
};
- mpu_ck: mpu_ck {
+ mpu_ck: mpu_ck@15dc {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sysclk2_ck>;
@@ -184,7 +184,7 @@
reg = <0x15dc>;
};
- audio_pll_a_ck: audio_pll_a_ck {
+ audio_pll_a_ck: audio_pll_a_ck@35c {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&audio_fapll 1>;
@@ -192,56 +192,56 @@
reg = <0x035c>;
};
- sysclk18_ck: sysclk18_ck {
+ sysclk18_ck: sysclk18_ck@378 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_32k_ck>, <&audio_pll_a_ck>;
reg = <0x0378>;
};
- timer1_fck: timer1_fck {
+ timer1_fck: timer1_fck@390 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sysclk18_ck>, <&sys_clkin_ck>;
reg = <0x0390>;
};
- timer2_fck: timer2_fck {
+ timer2_fck: timer2_fck@394 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sysclk18_ck>, <&sys_clkin_ck>;
reg = <0x0394>;
};
- timer3_fck: timer3_fck {
+ timer3_fck: timer3_fck@398 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sysclk18_ck>, <&sys_clkin_ck>;
reg = <0x0398>;
};
- timer4_fck: timer4_fck {
+ timer4_fck: timer4_fck@39c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sysclk18_ck>, <&sys_clkin_ck>;
reg = <0x039c>;
};
- timer5_fck: timer5_fck {
+ timer5_fck: timer5_fck@3a0 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sysclk18_ck>, <&sys_clkin_ck>;
reg = <0x03a0>;
};
- timer6_fck: timer6_fck {
+ timer6_fck: timer6_fck@3a4 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sysclk18_ck>, <&sys_clkin_ck>;
reg = <0x03a4>;
};
- timer7_fck: timer7_fck {
+ timer7_fck: timer7_fck@3a8 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&tclkin_ck>, <&sysclk18_ck>, <&sys_clkin_ck>;
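The dm81xx clock diffs above are the same mechanical change throughout: nodes that carry a reg property gain a unit address matching that reg, as device tree convention requires. A sketch of the resulting shape, with the reg value inferred from the new @3a8 unit address rather than shown in the hunk:

timer7_fck: timer7_fck@3a8 {
	#clock-cells = <0>;
	compatible = "ti,mux-clock";
	clocks = <&tclkin_ck>, <&sysclk18_ck>, <&sys_clkin_ck>;
	reg = <0x03a8>;		/* inferred; must match the unit address */
};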
diff --git a/arch/arm/boot/dts/dm816x.dtsi b/arch/arm/boot/dts/dm816x.dtsi
index d9309a016117..44e39c743b53 100644
--- a/arch/arm/boot/dts/dm816x.dtsi
+++ b/arch/arm/boot/dts/dm816x.dtsi
@@ -185,6 +185,8 @@
gpmc,num-waitpins = <2>;
interrupt-controller;
#interrupt-cells = <2>;
+ gpio-controller;
+ #gpio-cells = <2>;
};
i2c1: i2c@48028000 {
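
The dm816x.dtsi hunk marks the GPMC as a GPIO controller in addition to an interrupt controller, mirroring the dra7.dtsi change further down; the GPMC driver exposes its WAIT pins as GPIOs, which is what lets a NAND child describe its ready/busy line with rb-gpios (the dra7-evm hunk below adds exactly that consumer). A minimal sketch of such a child node, assuming the same gpmc-as-gpio binding applies on dm816x boards:

	&gpmc {
		nand@0,0 {
			compatible = "ti,omap2-nand";
			reg = <0 0 4>;	/* device I/O registers */
			/* ready/busy routed through GPMC WAIT0, i.e. GPIO 0 of &gpmc */
			rb-gpios = <&gpmc 0 GPIO_ACTIVE_HIGH>;
		};
	};
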
diff --git a/arch/arm/boot/dts/dra7-evm.dts b/arch/arm/boot/dts/dra7-evm.dts
index d9b87236019d..bafcfac067ec 100644
--- a/arch/arm/boot/dts/dra7-evm.dts
+++ b/arch/arm/boot/dts/dra7-evm.dts
@@ -33,6 +33,7 @@
evm_3v3_sw: fixedregulator-evm_3v3_sw {
compatible = "regulator-fixed";
regulator-name = "evm_3v3_sw";
+ vin-supply = <&sysen1>;
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
};
@@ -64,10 +65,11 @@
regulator-always-on;
regulator-boot-on;
enable-active-high;
+ vin-supply = <&sysen2>;
gpio = <&gpio7 11 GPIO_ACTIVE_HIGH>;
};
- sound0: sound@0 {
+ sound0: sound0 {
compatible = "simple-audio-card";
simple-audio-card,name = "DRA7xx-EVM";
simple-audio-card,widgets =
@@ -224,21 +226,6 @@
>;
};
- qspi1_pins: pinmux_qspi1_pins {
- pinctrl-single,pins = <
- DRA7XX_CORE_IOPAD(0x344c, PIN_INPUT | MUX_MODE1) /* gpmc_a3.qspi1_cs2 */
- DRA7XX_CORE_IOPAD(0x3450, PIN_INPUT | MUX_MODE1) /* gpmc_a4.qspi1_cs3 */
- DRA7XX_CORE_IOPAD(0x3474, PIN_INPUT | MUX_MODE1) /* gpmc_a13.qspi1_rtclk */
- DRA7XX_CORE_IOPAD(0x3478, PIN_INPUT | MUX_MODE1) /* gpmc_a14.qspi1_d3 */
- DRA7XX_CORE_IOPAD(0x347c, PIN_INPUT | MUX_MODE1) /* gpmc_a15.qspi1_d2 */
- DRA7XX_CORE_IOPAD(0x3480, PIN_INPUT | MUX_MODE1) /* gpmc_a16.qspi1_d1 */
- DRA7XX_CORE_IOPAD(0x3484, PIN_INPUT | MUX_MODE1) /* gpmc_a17.qspi1_d0 */
- DRA7XX_CORE_IOPAD(0x3488, PIN_INPUT | MUX_MODE1) /* qpmc_a18.qspi1_sclk */
- DRA7XX_CORE_IOPAD(0x34b8, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_cs2.qspi1_cs0 */
- DRA7XX_CORE_IOPAD(0x34bc, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_cs3.qspi1_cs1 */
- >;
- };
-
usb1_pins: pinmux_usb1_pins {
pinctrl-single,pins = <
DRA7XX_CORE_IOPAD(0x3680, PIN_INPUT_SLEW | MUX_MODE0) /* usb1_drvvbus */
@@ -254,8 +241,9 @@
nand_flash_x16: nand_flash_x16 {
/* On DRA7 EVM, GPMC_WPN and NAND_BOOTn comes from DIP switch
* So NAND flash requires following switch settings:
- * SW5.9 (GPMC_WPN) = LOW
- * SW5.1 (NAND_BOOTn) = HIGH */
+ * SW5.1 (NAND_BOOTn) = ON (LOW)
+ * SW5.9 (GPMC_WPN) = OFF (HIGH)
+ */
pinctrl-single,pins = <
DRA7XX_CORE_IOPAD(0x3400, PIN_INPUT | MUX_MODE0) /* gpmc_ad0 */
DRA7XX_CORE_IOPAD(0x3404, PIN_INPUT | MUX_MODE0) /* gpmc_ad1 */
@@ -428,7 +416,7 @@
/* VDD_DSPEVE */
regulator-name = "smps45";
regulator-min-microvolt = < 850000>;
- regulator-max-microvolt = <1150000>;
+ regulator-max-microvolt = <1250000>;
regulator-always-on;
regulator-boot-on;
};
@@ -446,7 +434,7 @@
/* CORE_VDD */
regulator-name = "smps7";
regulator-min-microvolt = <850000>;
- regulator-max-microvolt = <1060000>;
+ regulator-max-microvolt = <1150000>;
regulator-always-on;
regulator-boot-on;
};
@@ -523,12 +511,37 @@
regulator-max-microvolt = <3300000>;
regulator-boot-on;
};
+
+ /* REGEN1 is unused */
+
+ regen2: regen2 {
+ /* Needed for PMIC internal resources */
+ regulator-name = "regen2";
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ /* REGEN3 is unused */
+
+ sysen1: sysen1 {
+ /* PMIC_REGEN_3V3 */
+ regulator-name = "sysen1";
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ sysen2: sysen2 {
+ /* PMIC_REGEN_DDR */
+ regulator-name = "sysen2";
+ regulator-boot-on;
+ regulator-always-on;
+ };
};
};
};
pcf_lcd: gpio@20 {
- compatible = "nxp,pcf8575";
+ compatible = "ti,pcf8575", "nxp,pcf8575";
reg = <0x20>;
gpio-controller;
#gpio-cells = <2>;
@@ -539,7 +552,7 @@
};
pcf_gpio_21: gpio@21 {
- compatible = "ti,pcf8575";
+ compatible = "ti,pcf8575", "nxp,pcf8575";
reg = <0x21>;
lines-initial-states = <0x1408>;
gpio-controller;
@@ -573,7 +586,7 @@
clock-frequency = <400000>;
pcf_hdmi: gpio@26 {
- compatible = "nxp,pcf8575";
+ compatible = "ti,pcf8575", "nxp,pcf8575";
reg = <0x26>;
gpio-controller;
#gpio-cells = <2>;
@@ -650,18 +663,14 @@
&qspi {
status = "okay";
- pinctrl-names = "default";
- pinctrl-0 = <&qspi1_pins>;
- spi-max-frequency = <48000000>;
+ spi-max-frequency = <64000000>;
m25p80@0 {
compatible = "s25fl256s1";
- spi-max-frequency = <48000000>;
+ spi-max-frequency = <64000000>;
reg = <0>;
spi-tx-bus-width = <1>;
spi-rx-bus-width = <4>;
- spi-cpol;
- spi-cpha;
#address-cells = <1>;
#size-cells = <1>;
@@ -748,6 +757,7 @@
interrupt-parent = <&gpmc>;
interrupts = <0 IRQ_TYPE_NONE>, /* fifoevent */
<1 IRQ_TYPE_NONE>; /* termcount */
+ rb-gpios = <&gpmc 0 GPIO_ACTIVE_HIGH>; /* gpmc_wait0 pin */
ti,nand-ecc-opt = "bch8";
ti,elm-id = <&elm>;
nand-bus-width = <16>;
@@ -904,6 +914,8 @@
serial-dir = < /* 0: INACTIVE, 1: TX, 2: RX */
1 2 0 0
>;
+ tx-num-evt = <32>;
+ rx-num-evt = <32>;
};
&mailbox5 {
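
In the dra7-evm.dts changes the board's fixed regulators stop being free-floating: evm_3v3_sw now names sysen1 (PMIC_REGEN_3V3) as its input supply, the GPIO-enabled rail names sysen2 (PMIC_REGEN_DDR), and the SYSEN resources themselves are added under the PMIC later in the same file. With the chain described, the regulator core keeps the PMIC outputs enabled for as long as their children are in use. The resulting node, as it reads after the patch:

	evm_3v3_sw: fixedregulator-evm_3v3_sw {
		compatible = "regulator-fixed";
		regulator-name = "evm_3v3_sw";
		vin-supply = <&sysen1>;	/* PMIC_REGEN_3V3 supplies this rail */
		regulator-min-microvolt = <3300000>;
		regulator-max-microvolt = <3300000>;
	};
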
diff --git a/arch/arm/boot/dts/dra7.dtsi b/arch/arm/boot/dts/dra7.dtsi
index 13ac88279427..e0074014385a 100644
--- a/arch/arm/boot/dts/dra7.dtsi
+++ b/arch/arm/boot/dts/dra7.dtsi
@@ -123,7 +123,7 @@
#size-cells = <1>;
ranges = <0 0x0 0x1400>;
- pbias_regulator: pbias_regulator {
+ pbias_regulator: pbias_regulator@e00 {
compatible = "ti,pbias-dra7", "ti,pbias-omap";
reg = <0xe00 0x4>;
syscon = <&scm_conf>;
@@ -161,6 +161,24 @@
compatible = "syscon";
reg = <0x1c24 0x0024>;
};
+
+ sdma_xbar: dma-router@b78 {
+ compatible = "ti,dra7-dma-crossbar";
+ reg = <0xb78 0xfc>;
+ #dma-cells = <1>;
+ dma-requests = <205>;
+ ti,dma-safe-map = <0>;
+ dma-masters = <&sdma>;
+ };
+
+ edma_xbar: dma-router@c78 {
+ compatible = "ti,dra7-dma-crossbar";
+ reg = <0xc78 0x7c>;
+ #dma-cells = <2>;
+ dma-requests = <204>;
+ ti,dma-safe-map = <0>;
+ dma-masters = <&edma>;
+ };
};
cm_core_aon: cm_core_aon@5000 {
@@ -315,13 +333,43 @@
dma-requests = <127>;
};
- sdma_xbar: dma-router@4a002b78 {
- compatible = "ti,dra7-dma-crossbar";
- reg = <0x4a002b78 0xfc>;
- #dma-cells = <1>;
- dma-requests = <205>;
- ti,dma-safe-map = <0>;
- dma-masters = <&sdma>;
+ edma: edma@43300000 {
+ compatible = "ti,edma3-tpcc";
+ ti,hwmods = "tpcc";
+ reg = <0x43300000 0x100000>;
+ reg-names = "edma3_cc";
+ interrupts = <GIC_SPI 361 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 360 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 359 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "edma3_ccint", "emda3_mperr",
+ "edma3_ccerrint";
+ dma-requests = <64>;
+ #dma-cells = <2>;
+
+ ti,tptcs = <&edma_tptc0 7>, <&edma_tptc1 0>;
+
+ /*
+ * memcpy is disabled, can be enabled with:
+ * ti,edma-memcpy-channels = <20 21>;
+ * for example. Note that these channels need to be
+ * masked in the xbar as well.
+ */
+ };
+
+ edma_tptc0: tptc@43400000 {
+ compatible = "ti,edma3-tptc";
+ ti,hwmods = "tptc0";
+ reg = <0x43400000 0x100000>;
+ interrupts = <GIC_SPI 370 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "edma3_tcerrint";
+ };
+
+ edma_tptc1: tptc@43500000 {
+ compatible = "ti,edma3-tptc";
+ ti,hwmods = "tptc1";
+ reg = <0x43500000 0x100000>;
+ interrupts = <GIC_SPI 371 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "edma3_tcerrint";
};
gpio1: gpio@4ae10000 {
@@ -773,12 +821,20 @@
ti,hwmods = "timer11";
};
+ timer12: timer@4ae20000 {
+ compatible = "ti,omap5430-timer";
+ reg = <0x4ae20000 0x80>;
+ interrupts = <GIC_SPI 90 IRQ_TYPE_LEVEL_HIGH>;
+ ti,hwmods = "timer12";
+ ti,timer-alwon;
+ ti,timer-secure;
+ };
+
timer13: timer@48828000 {
compatible = "ti,omap5430-timer";
reg = <0x48828000 0x80>;
interrupts = <GIC_SPI 339 IRQ_TYPE_LEVEL_HIGH>;
ti,hwmods = "timer13";
- status = "disabled";
};
timer14: timer@4882a000 {
@@ -786,7 +842,6 @@
reg = <0x4882a000 0x80>;
interrupts = <GIC_SPI 340 IRQ_TYPE_LEVEL_HIGH>;
ti,hwmods = "timer14";
- status = "disabled";
};
timer15: timer@4882c000 {
@@ -794,7 +849,6 @@
reg = <0x4882c000 0x80>;
interrupts = <GIC_SPI 341 IRQ_TYPE_LEVEL_HIGH>;
ti,hwmods = "timer15";
- status = "disabled";
};
timer16: timer@4882e000 {
@@ -802,7 +856,6 @@
reg = <0x4882e000 0x80>;
interrupts = <GIC_SPI 342 IRQ_TYPE_LEVEL_HIGH>;
ti,hwmods = "timer16";
- status = "disabled";
};
wdt2: wdt@4ae14000 {
@@ -1404,6 +1457,8 @@
#size-cells = <1>;
interrupt-controller;
#interrupt-cells = <2>;
+ gpio-controller;
+ #gpio-cells = <2>;
status = "disabled";
};
@@ -1418,21 +1473,136 @@
status = "disabled";
};
+ mcasp1: mcasp@48460000 {
+ compatible = "ti,dra7-mcasp-audio";
+ ti,hwmods = "mcasp1";
+ reg = <0x48460000 0x2000>,
+ <0x45800000 0x1000>;
+ reg-names = "mpu","dat";
+ interrupts = <GIC_SPI 104 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 103 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "tx", "rx";
+ dmas = <&edma_xbar 129 1>, <&edma_xbar 128 1>;
+ dma-names = "tx", "rx";
+ clocks = <&mcasp1_aux_gfclk_mux>, <&mcasp1_ahclkx_mux>,
+ <&mcasp1_ahclkr_mux>;
+ clock-names = "fck", "ahclkx", "ahclkr";
+ status = "disabled";
+ };
+
+ mcasp2: mcasp@48464000 {
+ compatible = "ti,dra7-mcasp-audio";
+ ti,hwmods = "mcasp2";
+ reg = <0x48464000 0x2000>,
+ <0x45c00000 0x1000>;
+ reg-names = "mpu","dat";
+ interrupts = <GIC_SPI 149 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "tx", "rx";
+ dmas = <&edma_xbar 131 1>, <&edma_xbar 130 1>;
+ dma-names = "tx", "rx";
+ clocks = <&mcasp2_aux_gfclk_mux>, <&mcasp2_ahclkx_mux>,
+ <&mcasp2_ahclkr_mux>;
+ clock-names = "fck", "ahclkx", "ahclkr";
+ status = "disabled";
+ };
+
mcasp3: mcasp@48468000 {
compatible = "ti,dra7-mcasp-audio";
ti,hwmods = "mcasp3";
- reg = <0x48468000 0x2000>;
- reg-names = "mpu";
+ reg = <0x48468000 0x2000>,
+ <0x46000000 0x1000>;
+ reg-names = "mpu","dat";
interrupts = <GIC_SPI 151 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 150 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "tx", "rx";
- dmas = <&sdma_xbar 133>, <&sdma_xbar 132>;
+ dmas = <&edma_xbar 133 1>, <&edma_xbar 132 1>;
dma-names = "tx", "rx";
clocks = <&mcasp3_aux_gfclk_mux>, <&mcasp3_ahclkx_mux>;
clock-names = "fck", "ahclkx";
status = "disabled";
};
+ mcasp4: mcasp@4846c000 {
+ compatible = "ti,dra7-mcasp-audio";
+ ti,hwmods = "mcasp4";
+ reg = <0x4846c000 0x2000>,
+ <0x48436000 0x1000>;
+ reg-names = "mpu","dat";
+ interrupts = <GIC_SPI 153 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 152 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "tx", "rx";
+ dmas = <&edma_xbar 135 1>, <&edma_xbar 134 1>;
+ dma-names = "tx", "rx";
+ clocks = <&mcasp4_aux_gfclk_mux>, <&mcasp4_ahclkx_mux>;
+ clock-names = "fck", "ahclkx";
+ status = "disabled";
+ };
+
+ mcasp5: mcasp@48470000 {
+ compatible = "ti,dra7-mcasp-audio";
+ ti,hwmods = "mcasp5";
+ reg = <0x48470000 0x2000>,
+ <0x4843a000 0x1000>;
+ reg-names = "mpu","dat";
+ interrupts = <GIC_SPI 155 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 154 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "tx", "rx";
+ dmas = <&edma_xbar 137 1>, <&edma_xbar 136 1>;
+ dma-names = "tx", "rx";
+ clocks = <&mcasp5_aux_gfclk_mux>, <&mcasp5_ahclkx_mux>;
+ clock-names = "fck", "ahclkx";
+ status = "disabled";
+ };
+
+ mcasp6: mcasp@48474000 {
+ compatible = "ti,dra7-mcasp-audio";
+ ti,hwmods = "mcasp6";
+ reg = <0x48474000 0x2000>,
+ <0x4844c000 0x1000>;
+ reg-names = "mpu","dat";
+ interrupts = <GIC_SPI 157 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 156 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "tx", "rx";
+ dmas = <&edma_xbar 139 1>, <&edma_xbar 138 1>;
+ dma-names = "tx", "rx";
+ clocks = <&mcasp6_aux_gfclk_mux>, <&mcasp6_ahclkx_mux>;
+ clock-names = "fck", "ahclkx";
+ status = "disabled";
+ };
+
+ mcasp7: mcasp@48478000 {
+ compatible = "ti,dra7-mcasp-audio";
+ ti,hwmods = "mcasp7";
+ reg = <0x48478000 0x2000>,
+ <0x48450000 0x1000>;
+ reg-names = "mpu","dat";
+ interrupts = <GIC_SPI 159 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 158 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "tx", "rx";
+ dmas = <&edma_xbar 141 1>, <&edma_xbar 140 1>;
+ dma-names = "tx", "rx";
+ clocks = <&mcasp7_aux_gfclk_mux>, <&mcasp7_ahclkx_mux>;
+ clock-names = "fck", "ahclkx";
+ status = "disabled";
+ };
+
+ mcasp8: mcasp@4847c000 {
+ compatible = "ti,dra7-mcasp-audio";
+ ti,hwmods = "mcasp8";
+ reg = <0x4847c000 0x2000>,
+ <0x48454000 0x1000>;
+ reg-names = "mpu","dat";
+ interrupts = <GIC_SPI 161 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 160 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "tx", "rx";
+ dmas = <&edma_xbar 143 1>, <&edma_xbar 142 1>;
+ dma-names = "tx", "rx";
+ clocks = <&mcasp8_aux_gfclk_mux>, <&mcasp8_ahclkx_mux>;
+ clock-names = "fck", "ahclkx";
+ status = "disabled";
+ };
+
crossbar_mpu: crossbar@4a002a48 {
compatible = "ti,irq-crossbar";
reg = <0x4a002a48 0x130>;
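
The dra7.dtsi hunks move the DMA crossbars under the control-module node (offset-based reg instead of absolute addresses), add the EDMA TPCC/TPTC instances, and reroute the McASP ports from the one-cell sDMA crossbar to the two-cell EDMA crossbar (dmas = <&edma_xbar 133 1>, and so on). The comment inside the edma node points out that dmaengine memcpy stays disabled unless channels are reserved for it. A hedged sketch of a board-level override following that hint, assuming channels 20 and 21 are also kept masked in the crossbar as the comment requires:

	&edma {
		/* reserve two channels for dmaengine memcpy, per the note in the node */
		ti,edma-memcpy-channels = <20 21>;
	};
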
diff --git a/arch/arm/boot/dts/dra72-evm-common.dtsi b/arch/arm/boot/dts/dra72-evm-common.dtsi
new file mode 100644
index 000000000000..093538ea5b5f
--- /dev/null
+++ b/arch/arm/boot/dts/dra72-evm-common.dtsi
@@ -0,0 +1,817 @@
+/*
+ * Copyright (C) 2014-2016 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+/dts-v1/;
+
+#include "dra72x.dtsi"
+#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/clk/ti-dra7-atl.h>
+
+/ {
+ compatible = "ti,dra72-evm", "ti,dra722", "ti,dra72", "ti,dra7";
+
+ aliases {
+ display0 = &hdmi0;
+ };
+
+ evm_3v3: fixedregulator-evm_3v3 {
+ compatible = "regulator-fixed";
+ regulator-name = "evm_3v3";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ };
+
+ aic_dvdd: fixedregulator-aic_dvdd {
+ /* TPS77018DBVT */
+ compatible = "regulator-fixed";
+ regulator-name = "aic_dvdd";
+ vin-supply = <&evm_3v3>;
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ };
+
+ evm_3v3_sd: fixedregulator-sd {
+ compatible = "regulator-fixed";
+ regulator-name = "evm_3v3_sd";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ enable-active-high;
+ gpio = <&pcf_gpio_21 5 GPIO_ACTIVE_HIGH>;
+ };
+
+ extcon_usb1: extcon_usb1 {
+ compatible = "linux,extcon-usb-gpio";
+ id-gpio = <&pcf_gpio_21 1 GPIO_ACTIVE_HIGH>;
+ };
+
+ extcon_usb2: extcon_usb2 {
+ compatible = "linux,extcon-usb-gpio";
+ id-gpio = <&pcf_gpio_21 2 GPIO_ACTIVE_HIGH>;
+ };
+
+ hdmi0: connector {
+ compatible = "hdmi-connector";
+ label = "hdmi";
+
+ type = "a";
+
+ port {
+ hdmi_connector_in: endpoint {
+ remote-endpoint = <&tpd12s015_out>;
+ };
+ };
+ };
+
+ tpd12s015: encoder {
+ compatible = "ti,tpd12s015";
+
+ pinctrl-names = "default";
+ pinctrl-0 = <&tpd12s015_pins>;
+
+ gpios = <&pcf_hdmi 4 GPIO_ACTIVE_HIGH>, /* P4, CT CP HPD */
+ <&pcf_hdmi 5 GPIO_ACTIVE_HIGH>, /* P5, LS OE */
+ <&gpio7 12 GPIO_ACTIVE_HIGH>; /* gpio7_12/sp1_cs2, HPD */
+
+ ports {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ port@0 {
+ reg = <0>;
+
+ tpd12s015_in: endpoint {
+ remote-endpoint = <&hdmi_out>;
+ };
+ };
+
+ port@1 {
+ reg = <1>;
+
+ tpd12s015_out: endpoint {
+ remote-endpoint = <&hdmi_connector_in>;
+ };
+ };
+ };
+ };
+
+ sound0: sound0 {
+ compatible = "simple-audio-card";
+ simple-audio-card,name = "DRA7xx-EVM";
+ simple-audio-card,widgets =
+ "Headphone", "Headphone Jack",
+ "Line", "Line Out",
+ "Microphone", "Mic Jack",
+ "Line", "Line In";
+ simple-audio-card,routing =
+ "Headphone Jack", "HPLOUT",
+ "Headphone Jack", "HPROUT",
+ "Line Out", "LLOUT",
+ "Line Out", "RLOUT",
+ "MIC3L", "Mic Jack",
+ "MIC3R", "Mic Jack",
+ "Mic Jack", "Mic Bias",
+ "LINE1L", "Line In",
+ "LINE1R", "Line In";
+ simple-audio-card,format = "dsp_b";
+ simple-audio-card,bitclock-master = <&sound0_master>;
+ simple-audio-card,frame-master = <&sound0_master>;
+ simple-audio-card,bitclock-inversion;
+
+ sound0_master: simple-audio-card,cpu {
+ sound-dai = <&mcasp3>;
+ system-clock-frequency = <5644800>;
+ };
+
+ simple-audio-card,codec {
+ sound-dai = <&tlv320aic3106>;
+ clocks = <&atl_clkin2_ck>;
+ };
+ };
+};
+
+&dra7_pmx_core {
+ i2c1_pins: pinmux_i2c1_pins {
+ pinctrl-single,pins = <
+ DRA7XX_CORE_IOPAD(0x3800, PIN_INPUT | MUX_MODE0) /* i2c1_sda.i2c1_sda */
+ DRA7XX_CORE_IOPAD(0x3804, PIN_INPUT | MUX_MODE0) /* i2c1_scl.i2c1_scl */
+ >;
+ };
+
+ i2c5_pins: pinmux_i2c5_pins {
+ pinctrl-single,pins = <
+ DRA7XX_CORE_IOPAD(0x36b4, PIN_INPUT | MUX_MODE10) /* mcasp1_axr0.i2c5_sda */
+ DRA7XX_CORE_IOPAD(0x36b8, PIN_INPUT | MUX_MODE10) /* mcasp1_axr1.i2c5_scl */
+ >;
+ };
+
+ i2c5_pins: pinmux_i2c5_pins {
+ pinctrl-single,pins = <
+ DRA7XX_CORE_IOPAD(0x36b4, PIN_INPUT | MUX_MODE10) /* mcasp1_axr0.i2c5_sda */
+ DRA7XX_CORE_IOPAD(0x36b8, PIN_INPUT | MUX_MODE10) /* mcasp1_axr1.i2c5_scl */
+ >;
+ };
+
+ nand_default: nand_default {
+ pinctrl-single,pins = <
+ DRA7XX_CORE_IOPAD(0x3400, PIN_INPUT | MUX_MODE0) /* gpmc_ad0 */
+ DRA7XX_CORE_IOPAD(0x3404, PIN_INPUT | MUX_MODE0) /* gpmc_ad1 */
+ DRA7XX_CORE_IOPAD(0x3408, PIN_INPUT | MUX_MODE0) /* gpmc_ad2 */
+ DRA7XX_CORE_IOPAD(0x340c, PIN_INPUT | MUX_MODE0) /* gpmc_ad3 */
+ DRA7XX_CORE_IOPAD(0x3410, PIN_INPUT | MUX_MODE0) /* gpmc_ad4 */
+ DRA7XX_CORE_IOPAD(0x3414, PIN_INPUT | MUX_MODE0) /* gpmc_ad5 */
+ DRA7XX_CORE_IOPAD(0x3418, PIN_INPUT | MUX_MODE0) /* gpmc_ad6 */
+ DRA7XX_CORE_IOPAD(0x341c, PIN_INPUT | MUX_MODE0) /* gpmc_ad7 */
+ DRA7XX_CORE_IOPAD(0x3420, PIN_INPUT | MUX_MODE0) /* gpmc_ad8 */
+ DRA7XX_CORE_IOPAD(0x3424, PIN_INPUT | MUX_MODE0) /* gpmc_ad9 */
+ DRA7XX_CORE_IOPAD(0x3428, PIN_INPUT | MUX_MODE0) /* gpmc_ad10 */
+ DRA7XX_CORE_IOPAD(0x342c, PIN_INPUT | MUX_MODE0) /* gpmc_ad11 */
+ DRA7XX_CORE_IOPAD(0x3430, PIN_INPUT | MUX_MODE0) /* gpmc_ad12 */
+ DRA7XX_CORE_IOPAD(0x3434, PIN_INPUT | MUX_MODE0) /* gpmc_ad13 */
+ DRA7XX_CORE_IOPAD(0x3438, PIN_INPUT | MUX_MODE0) /* gpmc_ad14 */
+ DRA7XX_CORE_IOPAD(0x343c, PIN_INPUT | MUX_MODE0) /* gpmc_ad15 */
+ DRA7XX_CORE_IOPAD(0x34b4, PIN_OUTPUT | MUX_MODE0) /* gpmc_cs0 */
+ DRA7XX_CORE_IOPAD(0x34c4, PIN_OUTPUT | MUX_MODE0) /* gpmc_advn_ale */
+ DRA7XX_CORE_IOPAD(0x34cc, PIN_OUTPUT | MUX_MODE0) /* gpmc_wen */
+ DRA7XX_CORE_IOPAD(0x34c8, PIN_OUTPUT | MUX_MODE0) /* gpmc_oen_ren */
+ DRA7XX_CORE_IOPAD(0x34d0, PIN_OUTPUT | MUX_MODE0) /* gpmc_ben0 */
+ DRA7XX_CORE_IOPAD(0x34d8, PIN_INPUT | MUX_MODE0) /* gpmc_wait0 */
+ >;
+ };
+
+ usb1_pins: pinmux_usb1_pins {
+ pinctrl-single,pins = <
+ DRA7XX_CORE_IOPAD(0x3680, PIN_INPUT_SLEW | MUX_MODE0) /* usb1_drvvbus */
+ >;
+ };
+
+ usb2_pins: pinmux_usb2_pins {
+ pinctrl-single,pins = <
+ DRA7XX_CORE_IOPAD(0x3684, PIN_INPUT_SLEW | MUX_MODE0) /* usb2_drvvbus */
+ >;
+ };
+
+ tps65917_pins_default: tps65917_pins_default {
+ pinctrl-single,pins = <
+ DRA7XX_CORE_IOPAD(0x3824, PIN_INPUT_PULLUP | MUX_MODE1) /* wakeup3.sys_nirq1 */
+ >;
+ };
+
+ mmc1_pins_default: mmc1_pins_default {
+ pinctrl-single,pins = <
+ DRA7XX_CORE_IOPAD(0x376c, PIN_INPUT | MUX_MODE14) /* mmc1sdcd.gpio219 */
+ DRA7XX_CORE_IOPAD(0x3754, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc1_clk.clk */
+ DRA7XX_CORE_IOPAD(0x3758, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc1_cmd.cmd */
+ DRA7XX_CORE_IOPAD(0x375c, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc1_dat0.dat0 */
+ DRA7XX_CORE_IOPAD(0x3760, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc1_dat1.dat1 */
+ DRA7XX_CORE_IOPAD(0x3764, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc1_dat2.dat2 */
+ DRA7XX_CORE_IOPAD(0x3768, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc1_dat3.dat3 */
+ >;
+ };
+
+ mmc2_pins_default: mmc2_pins_default {
+ pinctrl-single,pins = <
+ DRA7XX_CORE_IOPAD(0x349c, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_a23.mmc2_clk */
+ DRA7XX_CORE_IOPAD(0x34b0, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_cs1.mmc2_cmd */
+ DRA7XX_CORE_IOPAD(0x34a0, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_a24.mmc2_dat0 */
+ DRA7XX_CORE_IOPAD(0x34a4, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_a25.mmc2_dat1 */
+ DRA7XX_CORE_IOPAD(0x34a8, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_a26.mmc2_dat2 */
+ DRA7XX_CORE_IOPAD(0x34ac, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_a27.mmc2_dat3 */
+ DRA7XX_CORE_IOPAD(0x348c, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_a19.mmc2_dat4 */
+ DRA7XX_CORE_IOPAD(0x3490, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_a20.mmc2_dat5 */
+ DRA7XX_CORE_IOPAD(0x3494, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_a21.mmc2_dat6 */
+ DRA7XX_CORE_IOPAD(0x3498, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_a22.mmc2_dat7 */
+ >;
+ };
+
+ dcan1_pins_default: dcan1_pins_default {
+ pinctrl-single,pins = <
+ DRA7XX_CORE_IOPAD(0x37d0, PIN_OUTPUT_PULLUP | MUX_MODE0) /* dcan1_tx */
+ DRA7XX_CORE_IOPAD(0x3818, PULL_UP | MUX_MODE1) /* wakeup0.dcan1_rx */
+ >;
+ };
+
+ dcan1_pins_sleep: dcan1_pins_sleep {
+ pinctrl-single,pins = <
+ DRA7XX_CORE_IOPAD(0x37d0, MUX_MODE15 | PULL_UP) /* dcan1_tx.off */
+ DRA7XX_CORE_IOPAD(0x3818, MUX_MODE15 | PULL_UP) /* wakeup0.off */
+ >;
+ };
+
+ hdmi_pins: pinmux_hdmi_pins {
+ pinctrl-single,pins = <
+ DRA7XX_CORE_IOPAD(0x3808, PIN_INPUT | MUX_MODE1) /* i2c2_sda.hdmi1_ddc_scl */
+ DRA7XX_CORE_IOPAD(0x380c, PIN_INPUT | MUX_MODE1) /* i2c2_scl.hdmi1_ddc_sda */
+ >;
+ };
+
+ tpd12s015_pins: pinmux_tpd12s015_pins {
+ pinctrl-single,pins = <
+ DRA7XX_CORE_IOPAD(0x37b8, PIN_INPUT_PULLDOWN | MUX_MODE14) /* gpio7_12 HPD */
+ >;
+ };
+
+ atl_pins: pinmux_atl_pins {
+ pinctrl-single,pins = <
+ DRA7XX_CORE_IOPAD(0x3698, PIN_OUTPUT | MUX_MODE5) /* xref_clk1.atl_clk1 */
+ DRA7XX_CORE_IOPAD(0x369c, PIN_OUTPUT | MUX_MODE5) /* xref_clk2.atl_clk2 */
+ >;
+ };
+
+ mcasp3_pins: pinmux_mcasp3_pins {
+ pinctrl-single,pins = <
+ DRA7XX_CORE_IOPAD(0x3724, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* mcasp3_aclkx */
+ DRA7XX_CORE_IOPAD(0x3728, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* mcasp3_fsx */
+ DRA7XX_CORE_IOPAD(0x372c, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* mcasp3_axr0 */
+ DRA7XX_CORE_IOPAD(0x3730, PIN_INPUT_PULLDOWN | MUX_MODE0) /* mcasp3_axr1 */
+ >;
+ };
+
+ mcasp3_sleep_pins: pinmux_mcasp3_sleep_pins {
+ pinctrl-single,pins = <
+ DRA7XX_CORE_IOPAD(0x3724, PIN_INPUT_PULLDOWN | MUX_MODE15)
+ DRA7XX_CORE_IOPAD(0x3728, PIN_INPUT_PULLDOWN | MUX_MODE15)
+ DRA7XX_CORE_IOPAD(0x372c, PIN_INPUT_PULLDOWN | MUX_MODE15)
+ DRA7XX_CORE_IOPAD(0x3730, PIN_INPUT_PULLDOWN | MUX_MODE15)
+ >;
+ };
+};
+
+&i2c1 {
+ status = "okay";
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c1_pins>;
+ clock-frequency = <400000>;
+
+ tps65917: tps65917@58 {
+ compatible = "ti,tps65917";
+ reg = <0x58>;
+
+ pinctrl-names = "default";
+ pinctrl-0 = <&tps65917_pins_default>;
+
+ interrupts = <GIC_SPI 2 IRQ_TYPE_NONE>; /* IRQ_SYS_1N */
+ interrupt-controller;
+ #interrupt-cells = <2>;
+
+ ti,system-power-controller;
+
+ tps65917_pmic {
+ compatible = "ti,tps65917-pmic";
+
+ tps65917_regulators: regulators {
+ smps1_reg: smps1 {
+ /* VDD_MPU */
+ regulator-name = "smps1";
+ regulator-min-microvolt = <850000>;
+ regulator-max-microvolt = <1250000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ smps2_reg: smps2 {
+ /* VDD_CORE */
+ regulator-name = "smps2";
+ regulator-min-microvolt = <850000>;
+ regulator-max-microvolt = <1150000>;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ smps3_reg: smps3 {
+ /* VDD_GPU IVA DSPEVE */
+ regulator-name = "smps3";
+ regulator-min-microvolt = <850000>;
+ regulator-max-microvolt = <1250000>;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ smps4_reg: smps4 {
+ /* VDDS1V8 */
+ regulator-name = "smps4";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ smps5_reg: smps5 {
+ /* VDD_DDR */
+ regulator-name = "smps5";
+ regulator-min-microvolt = <1350000>;
+ regulator-max-microvolt = <1350000>;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ ldo1_reg: ldo1 {
+ /* LDO1_OUT --> SDIO */
+ regulator-name = "ldo1";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ regulator-boot-on;
+ regulator-allow-bypass;
+ };
+
+ ldo3_reg: ldo3 {
+ /* VDDA_1V8_PHY */
+ regulator-name = "ldo3";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ ldo5_reg: ldo5 {
+ /* VDDA_1V8_PLL */
+ regulator-name = "ldo5";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+
+ ldo4_reg: ldo4 {
+ /* VDDA_3V_USB: VDDA_USBHS33 */
+ regulator-name = "ldo4";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-boot-on;
+ };
+ };
+ };
+
+ tps65917_power_button {
+ compatible = "ti,palmas-pwrbutton";
+ interrupt-parent = <&tps65917>;
+ interrupts = <1 IRQ_TYPE_NONE>;
+ wakeup-source;
+ ti,palmas-long-press-seconds = <6>;
+ };
+ };
+
+ pcf_gpio_21: gpio@21 {
+ compatible = "ti,pcf8575", "nxp,pcf8575";
+ reg = <0x21>;
+ lines-initial-states = <0x1408>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
+ tlv320aic3106: tlv320aic3106@19 {
+ #sound-dai-cells = <0>;
+ compatible = "ti,tlv320aic3106";
+ reg = <0x19>;
+ adc-settle-ms = <40>;
+ ai3x-micbias-vg = <1>; /* 2.0V */
+ status = "okay";
+
+ /* Regulators */
+ AVDD-supply = <&evm_3v3>;
+ IOVDD-supply = <&evm_3v3>;
+ DRVDD-supply = <&evm_3v3>;
+ DVDD-supply = <&aic_dvdd>;
+ };
+};
+
+&i2c5 {
+ status = "okay";
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c5_pins>;
+ clock-frequency = <400000>;
+
+ pcf_hdmi: pcf8575@26 {
+ compatible = "ti,pcf8575", "nxp,pcf8575";
+ reg = <0x26>;
+ gpio-controller;
+ #gpio-cells = <2>;
+		/*
+		 * The initial state keeps the MDIO interface selected on RU89:
+		 * SEL_VIN4_MUX_S0, VIN2_S1 and VIN2_S0 are driven high
+		 * (otherwise Ethernet stops working), while VIN6_SEL_S0 is
+		 * low, selecting McASP3 over VIN6.
+		 */
+ lines-initial-states = <0x0f2b>;
+
+ p1 {
+ /* vin6_sel_s0: high: VIN6, low: audio */
+ gpio-hog;
+ gpios = <1 GPIO_ACTIVE_HIGH>;
+ output-low;
+ line-name = "vin6_sel_s0";
+ };
+ };
+};
+
+&uart1 {
+ status = "okay";
+ interrupts-extended = <&crossbar_mpu GIC_SPI 67 IRQ_TYPE_LEVEL_HIGH>,
+ <&dra7_pmx_core 0x3e0>;
+};
+
+&elm {
+ status = "okay";
+};
+
+&gpmc {
+ status = "okay";
+ pinctrl-names = "default";
+ pinctrl-0 = <&nand_default>;
+ ranges = <0 0 0x08000000 0x01000000>; /* minimum GPMC partition = 16MB */
+ nand@0,0 {
+ /* To use NAND, DIP switch SW5 must be set like so:
+ * SW5.1 (NAND_SELn) = ON (LOW)
+ * SW5.9 (GPMC_WPN) = OFF (HIGH)
+ */
+ compatible = "ti,omap2-nand";
+ reg = <0 0 4>; /* device IO registers */
+ interrupt-parent = <&gpmc>;
+ interrupts = <0 IRQ_TYPE_NONE>, /* fifoevent */
+ <1 IRQ_TYPE_NONE>; /* termcount */
+ rb-gpios = <&gpmc 0 GPIO_ACTIVE_HIGH>; /* gpmc_wait0 pin */
+ ti,nand-ecc-opt = "bch8";
+ ti,elm-id = <&elm>;
+ nand-bus-width = <16>;
+ gpmc,device-width = <2>;
+ gpmc,sync-clk-ps = <0>;
+ gpmc,cs-on-ns = <0>;
+ gpmc,cs-rd-off-ns = <80>;
+ gpmc,cs-wr-off-ns = <80>;
+ gpmc,adv-on-ns = <0>;
+ gpmc,adv-rd-off-ns = <60>;
+ gpmc,adv-wr-off-ns = <60>;
+ gpmc,we-on-ns = <10>;
+ gpmc,we-off-ns = <50>;
+ gpmc,oe-on-ns = <4>;
+ gpmc,oe-off-ns = <40>;
+ gpmc,access-ns = <40>;
+ gpmc,wr-access-ns = <80>;
+ gpmc,rd-cycle-ns = <80>;
+ gpmc,wr-cycle-ns = <80>;
+ gpmc,bus-turnaround-ns = <0>;
+ gpmc,cycle2cycle-delay-ns = <0>;
+ gpmc,clk-activation-ns = <0>;
+ gpmc,wr-data-mux-bus-ns = <0>;
+ /* MTD partition table */
+		/* All SPL-* partitions are sized to the minimum length
+		 * that can be independently programmed. For NAND
+		 * flash this equals the erase-block size. */
+ #address-cells = <1>;
+ #size-cells = <1>;
+ partition@0 {
+ label = "NAND.SPL";
+ reg = <0x00000000 0x000020000>;
+ };
+ partition@1 {
+ label = "NAND.SPL.backup1";
+ reg = <0x00020000 0x00020000>;
+ };
+ partition@2 {
+ label = "NAND.SPL.backup2";
+ reg = <0x00040000 0x00020000>;
+ };
+ partition@3 {
+ label = "NAND.SPL.backup3";
+ reg = <0x00060000 0x00020000>;
+ };
+ partition@4 {
+ label = "NAND.u-boot-spl-os";
+ reg = <0x00080000 0x00040000>;
+ };
+ partition@5 {
+ label = "NAND.u-boot";
+ reg = <0x000c0000 0x00100000>;
+ };
+ partition@6 {
+ label = "NAND.u-boot-env";
+ reg = <0x001c0000 0x00020000>;
+ };
+ partition@7 {
+ label = "NAND.u-boot-env.backup1";
+ reg = <0x001e0000 0x00020000>;
+ };
+ partition@8 {
+ label = "NAND.kernel";
+ reg = <0x00200000 0x00800000>;
+ };
+ partition@9 {
+ label = "NAND.file-system";
+ reg = <0x00a00000 0x0f600000>;
+ };
+ };
+};
+
+&usb2_phy1 {
+ phy-supply = <&ldo4_reg>;
+};
+
+&usb2_phy2 {
+ phy-supply = <&ldo4_reg>;
+};
+
+&omap_dwc3_1 {
+ extcon = <&extcon_usb1>;
+};
+
+&omap_dwc3_2 {
+ extcon = <&extcon_usb2>;
+};
+
+&usb1 {
+ dr_mode = "peripheral";
+ pinctrl-names = "default";
+ pinctrl-0 = <&usb1_pins>;
+};
+
+&usb2 {
+ dr_mode = "host";
+ pinctrl-names = "default";
+ pinctrl-0 = <&usb2_pins>;
+};
+
+&mmc1 {
+ status = "okay";
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc1_pins_default>;
+ vmmc-supply = <&evm_3v3_sd>;
+ vmmc_aux-supply = <&ldo1_reg>;
+ bus-width = <4>;
+ /*
+ * SDCD signal is not being used here - using the fact that GPIO mode
+ * is a viable alternative
+ */
+ cd-gpios = <&gpio6 27 GPIO_ACTIVE_LOW>;
+ max-frequency = <192000000>;
+};
+
+&mmc2 {
+ /* SW5-3 in ON position */
+ status = "okay";
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc2_pins_default>;
+
+ vmmc-supply = <&evm_3v3>;
+ bus-width = <8>;
+ ti,non-removable;
+ max-frequency = <192000000>;
+};
+
+&dra7_pmx_core {
+ cpsw_default: cpsw_default {
+ pinctrl-single,pins = <
+ /* Slave 2 */
+ DRA7XX_CORE_IOPAD(0x3598, PIN_OUTPUT | MUX_MODE3) /* vin2a_d12.rgmii1_txc */
+ DRA7XX_CORE_IOPAD(0x359c, PIN_OUTPUT | MUX_MODE3) /* vin2a_d13.rgmii1_tctl */
+ DRA7XX_CORE_IOPAD(0x35a0, PIN_OUTPUT | MUX_MODE3) /* vin2a_d14.rgmii1_td3 */
+ DRA7XX_CORE_IOPAD(0x35a4, PIN_OUTPUT | MUX_MODE3) /* vin2a_d15.rgmii1_td2 */
+ DRA7XX_CORE_IOPAD(0x35a8, PIN_OUTPUT | MUX_MODE3) /* vin2a_d16.rgmii1_td1 */
+ DRA7XX_CORE_IOPAD(0x35ac, PIN_OUTPUT | MUX_MODE3) /* vin2a_d17.rgmii1_td0 */
+ DRA7XX_CORE_IOPAD(0x35b0, PIN_INPUT | MUX_MODE3) /* vin2a_d18.rgmii1_rclk */
+ DRA7XX_CORE_IOPAD(0x35b4, PIN_INPUT | MUX_MODE3) /* vin2a_d19.rgmii1_rctl */
+ DRA7XX_CORE_IOPAD(0x35b8, PIN_INPUT | MUX_MODE3) /* vin2a_d20.rgmii1_rd3 */
+ DRA7XX_CORE_IOPAD(0x35bc, PIN_INPUT | MUX_MODE3) /* vin2a_d21.rgmii1_rd2 */
+ DRA7XX_CORE_IOPAD(0x35c0, PIN_INPUT | MUX_MODE3) /* vin2a_d22.rgmii1_rd1 */
+ DRA7XX_CORE_IOPAD(0x35c4, PIN_INPUT | MUX_MODE3) /* vin2a_d23.rgmii1_rd0 */
+ >;
+
+ };
+
+ cpsw_sleep: cpsw_sleep {
+ pinctrl-single,pins = <
+ /* Slave 2 */
+ DRA7XX_CORE_IOPAD(0x3598, MUX_MODE15)
+ DRA7XX_CORE_IOPAD(0x359c, MUX_MODE15)
+ DRA7XX_CORE_IOPAD(0x35a0, MUX_MODE15)
+ DRA7XX_CORE_IOPAD(0x35a4, MUX_MODE15)
+ DRA7XX_CORE_IOPAD(0x35a8, MUX_MODE15)
+ DRA7XX_CORE_IOPAD(0x35ac, MUX_MODE15)
+ DRA7XX_CORE_IOPAD(0x35b0, MUX_MODE15)
+ DRA7XX_CORE_IOPAD(0x35b4, MUX_MODE15)
+ DRA7XX_CORE_IOPAD(0x35b8, MUX_MODE15)
+ DRA7XX_CORE_IOPAD(0x35bc, MUX_MODE15)
+ DRA7XX_CORE_IOPAD(0x35c0, MUX_MODE15)
+ DRA7XX_CORE_IOPAD(0x35c4, MUX_MODE15)
+ >;
+ };
+
+ davinci_mdio_default: davinci_mdio_default {
+ pinctrl-single,pins = <
+ /* MDIO */
+ DRA7XX_CORE_IOPAD(0x363c, PIN_OUTPUT_PULLUP | MUX_MODE0) /* mdio_d.mdio_d */
+ DRA7XX_CORE_IOPAD(0x3640, PIN_INPUT_PULLUP | MUX_MODE0) /* mdio_clk.mdio_clk */
+ >;
+ };
+
+ davinci_mdio_sleep: davinci_mdio_sleep {
+ pinctrl-single,pins = <
+ DRA7XX_CORE_IOPAD(0x363c, MUX_MODE15)
+ DRA7XX_CORE_IOPAD(0x3640, MUX_MODE15)
+ >;
+ };
+};
+
+&mac {
+ status = "okay";
+ pinctrl-names = "default", "sleep";
+ pinctrl-0 = <&cpsw_default>;
+ pinctrl-1 = <&cpsw_sleep>;
+};
+
+&davinci_mdio {
+ pinctrl-names = "default", "sleep";
+ pinctrl-0 = <&davinci_mdio_default>;
+ pinctrl-1 = <&davinci_mdio_sleep>;
+};
+
+&dcan1 {
+ status = "ok";
+ pinctrl-names = "default", "sleep", "active";
+ pinctrl-0 = <&dcan1_pins_sleep>;
+ pinctrl-1 = <&dcan1_pins_sleep>;
+ pinctrl-2 = <&dcan1_pins_default>;
+};
+
+&qspi {
+ status = "okay";
+
+ spi-max-frequency = <64000000>;
+ m25p80@0 {
+ compatible = "s25fl256s1";
+ spi-max-frequency = <64000000>;
+ reg = <0>;
+ spi-tx-bus-width = <1>;
+ spi-rx-bus-width = <4>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+		/* MTD partition table.
+		 * The ROM checks the first four physical blocks
+		 * for a valid file to boot, and the flash used
+		 * here has a 64KiB erase-block size.
+		 */
+ partition@0 {
+ label = "QSPI.SPL";
+ reg = <0x00000000 0x000010000>;
+ };
+ partition@1 {
+ label = "QSPI.SPL.backup1";
+ reg = <0x00010000 0x00010000>;
+ };
+ partition@2 {
+ label = "QSPI.SPL.backup2";
+ reg = <0x00020000 0x00010000>;
+ };
+ partition@3 {
+ label = "QSPI.SPL.backup3";
+ reg = <0x00030000 0x00010000>;
+ };
+ partition@4 {
+ label = "QSPI.u-boot";
+ reg = <0x00040000 0x00100000>;
+ };
+ partition@5 {
+ label = "QSPI.u-boot-spl-os";
+ reg = <0x00140000 0x00080000>;
+ };
+ partition@6 {
+ label = "QSPI.u-boot-env";
+ reg = <0x001c0000 0x00010000>;
+ };
+ partition@7 {
+ label = "QSPI.u-boot-env.backup1";
+ reg = <0x001d0000 0x0010000>;
+ };
+ partition@8 {
+ label = "QSPI.kernel";
+ reg = <0x001e0000 0x0800000>;
+ };
+ partition@9 {
+ label = "QSPI.file-system";
+ reg = <0x009e0000 0x01620000>;
+ };
+ };
+};
+
+&dss {
+ status = "ok";
+
+ vdda_video-supply = <&ldo5_reg>;
+};
+
+&hdmi {
+ status = "ok";
+
+ pinctrl-names = "default";
+ pinctrl-0 = <&hdmi_pins>;
+
+ port {
+ hdmi_out: endpoint {
+ remote-endpoint = <&tpd12s015_in>;
+ };
+ };
+};
+
+&atl {
+ pinctrl-names = "default";
+ pinctrl-0 = <&atl_pins>;
+
+ assigned-clocks = <&abe_dpll_sys_clk_mux>,
+ <&atl_gfclk_mux>,
+ <&dpll_abe_ck>,
+ <&dpll_abe_m2x2_ck>,
+ <&atl_clkin2_ck>;
+ assigned-clock-parents = <&sys_clkin2>, <&dpll_abe_m2_ck>;
+ assigned-clock-rates = <0>, <0>, <180633600>, <361267200>, <5644800>;
+
+ status = "okay";
+
+ atl2 {
+ bws = <DRA7_ATL_WS_MCASP2_FSX>;
+ aws = <DRA7_ATL_WS_MCASP3_FSX>;
+ };
+};
+
+&mcasp3 {
+ #sound-dai-cells = <0>;
+ pinctrl-names = "default", "sleep";
+ pinctrl-0 = <&mcasp3_pins>;
+ pinctrl-1 = <&mcasp3_sleep_pins>;
+
+ assigned-clocks = <&mcasp3_ahclkx_mux>;
+ assigned-clock-parents = <&atl_clkin2_ck>;
+
+ status = "okay";
+
+ op-mode = <0>; /* MCASP_IIS_MODE */
+ tdm-slots = <2>;
+ /* 4 serializer */
+ serial-dir = < /* 0: INACTIVE, 1: TX, 2: RX */
+ 1 2 0 0
+ >;
+ tx-num-evt = <32>;
+ rx-num-evt = <32>;
+};
+
+&mailbox5 {
+ status = "okay";
+ mbox_ipu1_ipc3x: mbox_ipu1_ipc3x {
+ status = "okay";
+ };
+ mbox_dsp1_ipc3x: mbox_dsp1_ipc3x {
+ status = "okay";
+ };
+};
+
+&mailbox6 {
+ status = "okay";
+ mbox_ipu2_ipc3x: mbox_ipu2_ipc3x {
+ status = "okay";
+ };
+};
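
One detail worth calling out in the new dra72-evm-common.dtsi: the pcf_hdmi expander pins its VIN6_SEL_S0 line at boot with a gpio-hog child rather than leaving it to a driver. The line is claimed when the expander probes and driven low so that McASP3, not VIN6, is selected. Annotated, the hog from the file above looks like this:

		p1 {
			gpio-hog;			/* claim the line at expander probe time */
			gpios = <1 GPIO_ACTIVE_HIGH>;	/* expander line 1: VIN6_SEL_S0 */
			output-low;			/* low selects McASP3 instead of VIN6 */
			line-name = "vin6_sel_s0";
		};
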
diff --git a/arch/arm/boot/dts/dra72-evm-revc.dts b/arch/arm/boot/dts/dra72-evm-revc.dts
new file mode 100644
index 000000000000..f9cfd3bb4dc2
--- /dev/null
+++ b/arch/arm/boot/dts/dra72-evm-revc.dts
@@ -0,0 +1,73 @@
+/*
+ * Copyright (C) 2016 Texas Instruments Incorporated - http://www.ti.com/
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+#include "dra72-evm-common.dtsi"
+#include <dt-bindings/net/ti-dp83867.h>
+
+/ {
+ model = "TI DRA722 Rev C EVM";
+
+ memory {
+ device_type = "memory";
+ reg = <0x0 0x80000000 0x0 0x80000000>; /* 2GB */
+ };
+};
+
+&tps65917_regulators {
+ ldo2_reg: ldo2 {
+ /* LDO2_OUT --> VDDA_1V8_PHY2 */
+ regulator-name = "ldo2";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+};
+
+&hdmi {
+ vdda-supply = <&ldo2_reg>;
+};
+
+&pcf_gpio_21 {
+ interrupt-parent = <&gpio3>;
+ interrupts = <30 IRQ_TYPE_EDGE_FALLING>;
+};
+
+&mac {
+ mode-gpios = <&pcf_gpio_21 4 GPIO_ACTIVE_LOW>,
+ <&pcf_hdmi 9 GPIO_ACTIVE_LOW>, /* P11 */
+ <&pcf_hdmi 10 GPIO_ACTIVE_LOW>; /* P12 */
+ dual_emac;
+};
+
+&cpsw_emac0 {
+ phy_id = <&davinci_mdio>, <2>;
+ phy-mode = "rgmii-id";
+ dual_emac_res_vlan = <1>;
+};
+
+&cpsw_emac1 {
+ phy_id = <&davinci_mdio>, <3>;
+ phy-mode = "rgmii-id";
+ dual_emac_res_vlan = <2>;
+};
+
+&davinci_mdio {
+ dp83867_0: ethernet-phy@2 {
+ reg = <2>;
+ ti,rx-internal-delay = <DP83867_RGMIIDCTL_2_00_NS>;
+ ti,tx-internal-delay = <DP83867_RGMIIDCTL_1_NS>;
+ ti,fifo-depth = <DP83867_PHYCR_FIFO_DEPTH_8_B_NIB>;
+ };
+
+ dp83867_1: ethernet-phy@3 {
+ reg = <3>;
+ ti,rx-internal-delay = <DP83867_RGMIIDCTL_2_00_NS>;
+ ti,tx-internal-delay = <DP83867_RGMIIDCTL_1_NS>;
+ ti,fifo-depth = <DP83867_PHYCR_FIFO_DEPTH_8_B_NIB>;
+ };
+};
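
dra72-evm-revc.dts shows the intended layering: a board file includes dra72-evm-common.dtsi and overrides only its deltas by label (memory size, LDO2's role, PCF8575 interrupt wiring, and the DP83867 delay settings that go with phy-mode "rgmii-id"). The pre-existing dra72-evm.dts is cut down to the same shape in the next hunk. A minimal sketch of the pattern for a hypothetical additional board revision (file name and model string invented for illustration, values borrowed from the rev A board):

	/* dra72-evm-revb.dts, hypothetical example, not part of this series */
	#include "dra72-evm-common.dtsi"

	/ {
		model = "TI DRA722 Rev B EVM";
	};

	&pcf_gpio_21 {
		/* revisions differ only in how the expander interrupt is wired */
		interrupt-parent = <&gpio6>;
		interrupts = <11 IRQ_TYPE_EDGE_FALLING>;
	};
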
diff --git a/arch/arm/boot/dts/dra72-evm.dts b/arch/arm/boot/dts/dra72-evm.dts
index 6affe2d137da..cc1d32ca4a8a 100644
--- a/arch/arm/boot/dts/dra72-evm.dts
+++ b/arch/arm/boot/dts/dra72-evm.dts
@@ -1,694 +1,40 @@
/*
- * Copyright (C) 2014 Texas Instruments Incorporated - http://www.ti.com/
+ * Copyright (C) 2014-2016 Texas Instruments Incorporated - http://www.ti.com/
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
-/dts-v1/;
-
-#include "dra72x.dtsi"
-#include <dt-bindings/gpio/gpio.h>
-#include <dt-bindings/clk/ti-dra7-atl.h>
-
+#include "dra72-evm-common.dtsi"
/ {
model = "TI DRA722";
- compatible = "ti,dra72-evm", "ti,dra722", "ti,dra72", "ti,dra7";
memory {
device_type = "memory";
reg = <0x0 0x80000000 0x0 0x40000000>; /* 1024 MB */
};
+};
- aliases {
- display0 = &hdmi0;
- };
-
- evm_3v3: fixedregulator-evm_3v3 {
- compatible = "regulator-fixed";
- regulator-name = "evm_3v3";
- regulator-min-microvolt = <3300000>;
- regulator-max-microvolt = <3300000>;
- };
-
- aic_dvdd: fixedregulator-aic_dvdd {
- /* TPS77018DBVT */
- compatible = "regulator-fixed";
- regulator-name = "aic_dvdd";
- vin-supply = <&evm_3v3>;
+&tps65917_regulators {
+ ldo2_reg: ldo2 {
+ /* LDO2_OUT --> TP1017 (UNUSED) */
+ regulator-name = "ldo2";
regulator-min-microvolt = <1800000>;
- regulator-max-microvolt = <1800000>;
- };
-
- evm_3v3_sd: fixedregulator-sd {
- compatible = "regulator-fixed";
- regulator-name = "evm_3v3_sd";
- regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
- enable-active-high;
- gpio = <&pcf_gpio_21 5 GPIO_ACTIVE_HIGH>;
- };
-
- extcon_usb1: extcon_usb1 {
- compatible = "linux,extcon-usb-gpio";
- id-gpio = <&pcf_gpio_21 1 GPIO_ACTIVE_HIGH>;
- };
-
- extcon_usb2: extcon_usb2 {
- compatible = "linux,extcon-usb-gpio";
- id-gpio = <&pcf_gpio_21 2 GPIO_ACTIVE_HIGH>;
- };
-
- hdmi0: connector {
- compatible = "hdmi-connector";
- label = "hdmi";
-
- type = "a";
-
- port {
- hdmi_connector_in: endpoint {
- remote-endpoint = <&tpd12s015_out>;
- };
- };
- };
-
- tpd12s015: encoder {
- compatible = "ti,tpd12s015";
-
- pinctrl-names = "default";
- pinctrl-0 = <&tpd12s015_pins>;
-
- gpios = <&pcf_hdmi 4 GPIO_ACTIVE_HIGH>, /* P4, CT CP HPD */
- <&pcf_hdmi 5 GPIO_ACTIVE_HIGH>, /* P5, LS OE */
- <&gpio7 12 GPIO_ACTIVE_HIGH>; /* gpio7_12/sp1_cs2, HPD */
-
- ports {
- #address-cells = <1>;
- #size-cells = <0>;
-
- port@0 {
- reg = <0>;
-
- tpd12s015_in: endpoint {
- remote-endpoint = <&hdmi_out>;
- };
- };
-
- port@1 {
- reg = <1>;
-
- tpd12s015_out: endpoint {
- remote-endpoint = <&hdmi_connector_in>;
- };
- };
- };
- };
-
- sound0: sound@0 {
- compatible = "simple-audio-card";
- simple-audio-card,name = "DRA7xx-EVM";
- simple-audio-card,widgets =
- "Headphone", "Headphone Jack",
- "Line", "Line Out",
- "Microphone", "Mic Jack",
- "Line", "Line In";
- simple-audio-card,routing =
- "Headphone Jack", "HPLOUT",
- "Headphone Jack", "HPROUT",
- "Line Out", "LLOUT",
- "Line Out", "RLOUT",
- "MIC3L", "Mic Jack",
- "MIC3R", "Mic Jack",
- "Mic Jack", "Mic Bias",
- "LINE1L", "Line In",
- "LINE1R", "Line In";
- simple-audio-card,format = "dsp_b";
- simple-audio-card,bitclock-master = <&sound0_master>;
- simple-audio-card,frame-master = <&sound0_master>;
- simple-audio-card,bitclock-inversion;
-
- sound0_master: simple-audio-card,cpu {
- sound-dai = <&mcasp3>;
- system-clock-frequency = <5644800>;
- };
-
- simple-audio-card,codec {
- sound-dai = <&tlv320aic3106>;
- clocks = <&atl_clkin2_ck>;
- };
- };
-};
-
-&dra7_pmx_core {
- i2c1_pins: pinmux_i2c1_pins {
- pinctrl-single,pins = <
- DRA7XX_CORE_IOPAD(0x3800, PIN_INPUT | MUX_MODE0) /* i2c1_sda.i2c1_sda */
- DRA7XX_CORE_IOPAD(0x3804, PIN_INPUT | MUX_MODE0) /* i2c1_scl.i2c1_scl */
- >;
- };
-
- i2c5_pins: pinmux_i2c5_pins {
- pinctrl-single,pins = <
- DRA7XX_CORE_IOPAD(0x36b4, PIN_INPUT | MUX_MODE10) /* mcasp1_axr0.i2c5_sda */
- DRA7XX_CORE_IOPAD(0x36b8, PIN_INPUT | MUX_MODE10) /* mcasp1_axr1.i2c5_scl */
- >;
- };
-
- i2c5_pins: pinmux_i2c5_pins {
- pinctrl-single,pins = <
- DRA7XX_CORE_IOPAD(0x36b4, PIN_INPUT | MUX_MODE10) /* mcasp1_axr0.i2c5_sda */
- DRA7XX_CORE_IOPAD(0x36b8, PIN_INPUT | MUX_MODE10) /* mcasp1_axr1.i2c5_scl */
- >;
- };
-
- nand_default: nand_default {
- pinctrl-single,pins = <
- DRA7XX_CORE_IOPAD(0x3400, PIN_INPUT | MUX_MODE0) /* gpmc_ad0 */
- DRA7XX_CORE_IOPAD(0x3404, PIN_INPUT | MUX_MODE0) /* gpmc_ad1 */
- DRA7XX_CORE_IOPAD(0x3408, PIN_INPUT | MUX_MODE0) /* gpmc_ad2 */
- DRA7XX_CORE_IOPAD(0x340c, PIN_INPUT | MUX_MODE0) /* gpmc_ad3 */
- DRA7XX_CORE_IOPAD(0x3410, PIN_INPUT | MUX_MODE0) /* gpmc_ad4 */
- DRA7XX_CORE_IOPAD(0x3414, PIN_INPUT | MUX_MODE0) /* gpmc_ad5 */
- DRA7XX_CORE_IOPAD(0x3418, PIN_INPUT | MUX_MODE0) /* gpmc_ad6 */
- DRA7XX_CORE_IOPAD(0x341c, PIN_INPUT | MUX_MODE0) /* gpmc_ad7 */
- DRA7XX_CORE_IOPAD(0x3420, PIN_INPUT | MUX_MODE0) /* gpmc_ad8 */
- DRA7XX_CORE_IOPAD(0x3424, PIN_INPUT | MUX_MODE0) /* gpmc_ad9 */
- DRA7XX_CORE_IOPAD(0x3428, PIN_INPUT | MUX_MODE0) /* gpmc_ad10 */
- DRA7XX_CORE_IOPAD(0x342c, PIN_INPUT | MUX_MODE0) /* gpmc_ad11 */
- DRA7XX_CORE_IOPAD(0x3430, PIN_INPUT | MUX_MODE0) /* gpmc_ad12 */
- DRA7XX_CORE_IOPAD(0x3434, PIN_INPUT | MUX_MODE0) /* gpmc_ad13 */
- DRA7XX_CORE_IOPAD(0x3438, PIN_INPUT | MUX_MODE0) /* gpmc_ad14 */
- DRA7XX_CORE_IOPAD(0x343c, PIN_INPUT | MUX_MODE0) /* gpmc_ad15 */
- DRA7XX_CORE_IOPAD(0x34b4, PIN_OUTPUT | MUX_MODE0) /* gpmc_cs0 */
- DRA7XX_CORE_IOPAD(0x34c4, PIN_OUTPUT | MUX_MODE0) /* gpmc_advn_ale */
- DRA7XX_CORE_IOPAD(0x34cc, PIN_OUTPUT | MUX_MODE0) /* gpmc_wen */
- DRA7XX_CORE_IOPAD(0x34c8, PIN_OUTPUT | MUX_MODE0) /* gpmc_oen_ren */
- DRA7XX_CORE_IOPAD(0x34d0, PIN_OUTPUT | MUX_MODE0) /* gpmc_ben0 */
- DRA7XX_CORE_IOPAD(0x34d8, PIN_INPUT | MUX_MODE0) /* gpmc_wait0 */
- >;
- };
-
- usb1_pins: pinmux_usb1_pins {
- pinctrl-single,pins = <
- DRA7XX_CORE_IOPAD(0x3680, PIN_INPUT_SLEW | MUX_MODE0) /* usb1_drvvbus */
- >;
- };
-
- usb2_pins: pinmux_usb2_pins {
- pinctrl-single,pins = <
- DRA7XX_CORE_IOPAD(0x3684, PIN_INPUT_SLEW | MUX_MODE0) /* usb2_drvvbus */
- >;
- };
-
- tps65917_pins_default: tps65917_pins_default {
- pinctrl-single,pins = <
- DRA7XX_CORE_IOPAD(0x3824, PIN_INPUT_PULLUP | MUX_MODE1) /* wakeup3.sys_nirq1 */
- >;
- };
-
- mmc1_pins_default: mmc1_pins_default {
- pinctrl-single,pins = <
- DRA7XX_CORE_IOPAD(0x376c, PIN_INPUT | MUX_MODE14) /* mmc1sdcd.gpio219 */
- DRA7XX_CORE_IOPAD(0x3754, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc1_clk.clk */
- DRA7XX_CORE_IOPAD(0x3758, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc1_cmd.cmd */
- DRA7XX_CORE_IOPAD(0x375c, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc1_dat0.dat0 */
- DRA7XX_CORE_IOPAD(0x3760, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc1_dat1.dat1 */
- DRA7XX_CORE_IOPAD(0x3764, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc1_dat2.dat2 */
- DRA7XX_CORE_IOPAD(0x3768, PIN_INPUT_PULLUP | MUX_MODE0) /* mmc1_dat3.dat3 */
- >;
- };
-
- mmc2_pins_default: mmc2_pins_default {
- pinctrl-single,pins = <
- DRA7XX_CORE_IOPAD(0x349c, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_a23.mmc2_clk */
- DRA7XX_CORE_IOPAD(0x34b0, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_cs1.mmc2_cmd */
- DRA7XX_CORE_IOPAD(0x34a0, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_a24.mmc2_dat0 */
- DRA7XX_CORE_IOPAD(0x34a4, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_a25.mmc2_dat1 */
- DRA7XX_CORE_IOPAD(0x34a8, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_a26.mmc2_dat2 */
- DRA7XX_CORE_IOPAD(0x34ac, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_a27.mmc2_dat3 */
- DRA7XX_CORE_IOPAD(0x348c, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_a19.mmc2_dat4 */
- DRA7XX_CORE_IOPAD(0x3490, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_a20.mmc2_dat5 */
- DRA7XX_CORE_IOPAD(0x3494, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_a21.mmc2_dat6 */
- DRA7XX_CORE_IOPAD(0x3498, PIN_INPUT_PULLUP | MUX_MODE1) /* gpmc_a22.mmc2_dat7 */
- >;
- };
-
- dcan1_pins_default: dcan1_pins_default {
- pinctrl-single,pins = <
- DRA7XX_CORE_IOPAD(0x37d0, PIN_OUTPUT_PULLUP | MUX_MODE0) /* dcan1_tx */
- DRA7XX_CORE_IOPAD(0x3818, PULL_UP | MUX_MODE1) /* wakeup0.dcan1_rx */
- >;
- };
-
- dcan1_pins_sleep: dcan1_pins_sleep {
- pinctrl-single,pins = <
- DRA7XX_CORE_IOPAD(0x37d0, MUX_MODE15 | PULL_UP) /* dcan1_tx.off */
- DRA7XX_CORE_IOPAD(0x3818, MUX_MODE15 | PULL_UP) /* wakeup0.off */
- >;
- };
-
- qspi1_pins: pinmux_qspi1_pins {
- pinctrl-single,pins = <
- DRA7XX_CORE_IOPAD(0x3474, PIN_OUTPUT | MUX_MODE1) /* gpmc_a13.qspi1_rtclk */
- DRA7XX_CORE_IOPAD(0x3478, PIN_INPUT | MUX_MODE1) /* gpmc_a14.qspi1_d3 */
- DRA7XX_CORE_IOPAD(0x347c, PIN_INPUT | MUX_MODE1) /* gpmc_a15.qspi1_d2 */
- DRA7XX_CORE_IOPAD(0x3480, PIN_INPUT | MUX_MODE1) /* gpmc_a16.qspi1_d1 */
- DRA7XX_CORE_IOPAD(0x3484, PIN_INPUT | MUX_MODE1) /* gpmc_a17.qspi1_d0 */
- DRA7XX_CORE_IOPAD(0x3488, PIN_OUTPUT | MUX_MODE1) /* qpmc_a18.qspi1_sclk */
- DRA7XX_CORE_IOPAD(0x34b8, PIN_OUTPUT | MUX_MODE1) /* gpmc_cs2.qspi1_cs0 */
- >;
- };
-
- hdmi_pins: pinmux_hdmi_pins {
- pinctrl-single,pins = <
- DRA7XX_CORE_IOPAD(0x3808, PIN_INPUT | MUX_MODE1) /* i2c2_sda.hdmi1_ddc_scl */
- DRA7XX_CORE_IOPAD(0x380c, PIN_INPUT | MUX_MODE1) /* i2c2_scl.hdmi1_ddc_sda */
- >;
- };
-
- tpd12s015_pins: pinmux_tpd12s015_pins {
- pinctrl-single,pins = <
- DRA7XX_CORE_IOPAD(0x37b8, PIN_INPUT_PULLDOWN | MUX_MODE14) /* gpio7_12 HPD */
- >;
- };
-
- atl_pins: pinmux_atl_pins {
- pinctrl-single,pins = <
- DRA7XX_CORE_IOPAD(0x3698, PIN_OUTPUT | MUX_MODE5) /* xref_clk1.atl_clk1 */
- DRA7XX_CORE_IOPAD(0x369c, PIN_OUTPUT | MUX_MODE5) /* xref_clk2.atl_clk2 */
- >;
- };
-
- mcasp3_pins: pinmux_mcasp3_pins {
- pinctrl-single,pins = <
- DRA7XX_CORE_IOPAD(0x3724, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* mcasp3_aclkx */
- DRA7XX_CORE_IOPAD(0x3728, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* mcasp3_fsx */
- DRA7XX_CORE_IOPAD(0x372c, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* mcasp3_axr0 */
- DRA7XX_CORE_IOPAD(0x3730, PIN_INPUT_PULLDOWN | MUX_MODE0) /* mcasp3_axr1 */
- >;
- };
-
- mcasp3_sleep_pins: pinmux_mcasp3_sleep_pins {
- pinctrl-single,pins = <
- DRA7XX_CORE_IOPAD(0x3724, PIN_INPUT_PULLDOWN | MUX_MODE15)
- DRA7XX_CORE_IOPAD(0x3728, PIN_INPUT_PULLDOWN | MUX_MODE15)
- DRA7XX_CORE_IOPAD(0x372c, PIN_INPUT_PULLDOWN | MUX_MODE15)
- DRA7XX_CORE_IOPAD(0x3730, PIN_INPUT_PULLDOWN | MUX_MODE15)
- >;
- };
-};
-
-&i2c1 {
- status = "okay";
- pinctrl-names = "default";
- pinctrl-0 = <&i2c1_pins>;
- clock-frequency = <400000>;
-
- tps65917: tps65917@58 {
- compatible = "ti,tps65917";
- reg = <0x58>;
-
- pinctrl-names = "default";
- pinctrl-0 = <&tps65917_pins_default>;
-
- interrupts = <GIC_SPI 2 IRQ_TYPE_NONE>; /* IRQ_SYS_1N */
- interrupt-controller;
- #interrupt-cells = <2>;
-
- ti,system-power-controller;
-
- tps65917_pmic {
- compatible = "ti,tps65917-pmic";
-
- regulators {
- smps1_reg: smps1 {
- /* VDD_MPU */
- regulator-name = "smps1";
- regulator-min-microvolt = <850000>;
- regulator-max-microvolt = <1250000>;
- regulator-always-on;
- regulator-boot-on;
- };
-
- smps2_reg: smps2 {
- /* VDD_CORE */
- regulator-name = "smps2";
- regulator-min-microvolt = <850000>;
- regulator-max-microvolt = <1060000>;
- regulator-boot-on;
- regulator-always-on;
- };
-
- smps3_reg: smps3 {
- /* VDD_GPU IVA DSPEVE */
- regulator-name = "smps3";
- regulator-min-microvolt = <850000>;
- regulator-max-microvolt = <1250000>;
- regulator-boot-on;
- regulator-always-on;
- };
-
- smps4_reg: smps4 {
- /* VDDS1V8 */
- regulator-name = "smps4";
- regulator-min-microvolt = <1800000>;
- regulator-max-microvolt = <1800000>;
- regulator-always-on;
- regulator-boot-on;
- };
-
- smps5_reg: smps5 {
- /* VDD_DDR */
- regulator-name = "smps5";
- regulator-min-microvolt = <1350000>;
- regulator-max-microvolt = <1350000>;
- regulator-boot-on;
- regulator-always-on;
- };
-
- ldo1_reg: ldo1 {
- /* LDO1_OUT --> SDIO */
- regulator-name = "ldo1";
- regulator-min-microvolt = <1800000>;
- regulator-max-microvolt = <3300000>;
- regulator-always-on;
- regulator-boot-on;
- regulator-allow-bypass;
- };
-
- ldo2_reg: ldo2 {
- /* LDO2_OUT --> TP1017 (UNUSED) */
- regulator-name = "ldo2";
- regulator-min-microvolt = <1800000>;
- regulator-max-microvolt = <3300000>;
- regulator-allow-bypass;
- };
-
- ldo3_reg: ldo3 {
- /* VDDA_1V8_PHY */
- regulator-name = "ldo3";
- regulator-min-microvolt = <1800000>;
- regulator-max-microvolt = <1800000>;
- regulator-boot-on;
- regulator-always-on;
- };
-
- ldo5_reg: ldo5 {
- /* VDDA_1V8_PLL */
- regulator-name = "ldo5";
- regulator-min-microvolt = <1800000>;
- regulator-max-microvolt = <1800000>;
- regulator-always-on;
- regulator-boot-on;
- };
-
- ldo4_reg: ldo4 {
- /* VDDA_3V_USB: VDDA_USBHS33 */
- regulator-name = "ldo4";
- regulator-min-microvolt = <3300000>;
- regulator-max-microvolt = <3300000>;
- regulator-boot-on;
- };
- };
- };
-
- tps65917_power_button {
- compatible = "ti,palmas-pwrbutton";
- interrupt-parent = <&tps65917>;
- interrupts = <1 IRQ_TYPE_NONE>;
- wakeup-source;
- ti,palmas-long-press-seconds = <6>;
- };
- };
-
- pcf_gpio_21: gpio@21 {
- compatible = "ti,pcf8575";
- reg = <0x21>;
- lines-initial-states = <0x1408>;
- gpio-controller;
- #gpio-cells = <2>;
- interrupt-parent = <&gpio6>;
- interrupts = <11 IRQ_TYPE_EDGE_FALLING>;
- interrupt-controller;
- #interrupt-cells = <2>;
- };
-
- tlv320aic3106: tlv320aic3106@19 {
- #sound-dai-cells = <0>;
- compatible = "ti,tlv320aic3106";
- reg = <0x19>;
- adc-settle-ms = <40>;
- ai3x-micbias-vg = <1>; /* 2.0V */
- status = "okay";
-
- /* Regulators */
- AVDD-supply = <&evm_3v3>;
- IOVDD-supply = <&evm_3v3>;
- DRVDD-supply = <&evm_3v3>;
- DVDD-supply = <&aic_dvdd>;
+ regulator-allow-bypass;
};
};
-&i2c5 {
- status = "okay";
- pinctrl-names = "default";
- pinctrl-0 = <&i2c5_pins>;
- clock-frequency = <400000>;
-
- pcf_hdmi: pcf8575@26 {
- compatible = "nxp,pcf8575";
- reg = <0x26>;
- gpio-controller;
- #gpio-cells = <2>;
- /*
- * initial state is used here to keep the mdio interface
- * selected on RU89 through SEL_VIN4_MUX_S0, VIN2_S1 and
- * VIN2_S0 driven high otherwise Ethernet stops working
- * VIN6_SEL_S0 is low, thus selecting McASP3 over VIN6
- */
- lines-initial-states = <0x0f2b>;
-
- p1 {
- /* vin6_sel_s0: high: VIN6, low: audio */
- gpio-hog;
- gpios = <1 GPIO_ACTIVE_HIGH>;
- output-low;
- line-name = "vin6_sel_s0";
- };
- };
-};
-
-&uart1 {
- status = "okay";
- interrupts-extended = <&crossbar_mpu GIC_SPI 67 IRQ_TYPE_LEVEL_HIGH>,
- <&dra7_pmx_core 0x3e0>;
-};
-
-&elm {
- status = "okay";
-};
-
-&gpmc {
- status = "okay";
- pinctrl-names = "default";
- pinctrl-0 = <&nand_default>;
- ranges = <0 0 0x08000000 0x01000000>; /* minimum GPMC partition = 16MB */
- nand@0,0 {
- /* To use NAND, DIP switch SW5 must be set like so:
- * SW5.1 (NAND_SELn) = ON (LOW)
- * SW5.9 (GPMC_WPN) = OFF (HIGH)
- */
- compatible = "ti,omap2-nand";
- reg = <0 0 4>; /* device IO registers */
- interrupt-parent = <&gpmc>;
- interrupts = <0 IRQ_TYPE_NONE>, /* fifoevent */
- <1 IRQ_TYPE_NONE>; /* termcount */
- ti,nand-ecc-opt = "bch8";
- ti,elm-id = <&elm>;
- nand-bus-width = <16>;
- gpmc,device-width = <2>;
- gpmc,sync-clk-ps = <0>;
- gpmc,cs-on-ns = <0>;
- gpmc,cs-rd-off-ns = <80>;
- gpmc,cs-wr-off-ns = <80>;
- gpmc,adv-on-ns = <0>;
- gpmc,adv-rd-off-ns = <60>;
- gpmc,adv-wr-off-ns = <60>;
- gpmc,we-on-ns = <10>;
- gpmc,we-off-ns = <50>;
- gpmc,oe-on-ns = <4>;
- gpmc,oe-off-ns = <40>;
- gpmc,access-ns = <40>;
- gpmc,wr-access-ns = <80>;
- gpmc,rd-cycle-ns = <80>;
- gpmc,wr-cycle-ns = <80>;
- gpmc,bus-turnaround-ns = <0>;
- gpmc,cycle2cycle-delay-ns = <0>;
- gpmc,clk-activation-ns = <0>;
- gpmc,wr-data-mux-bus-ns = <0>;
- /* MTD partition table */
- /* All SPL-* partitions are sized to minimal length
- * which can be independently programmable. For
- * NAND flash this is equal to size of erase-block */
- #address-cells = <1>;
- #size-cells = <1>;
- partition@0 {
- label = "NAND.SPL";
- reg = <0x00000000 0x000020000>;
- };
- partition@1 {
- label = "NAND.SPL.backup1";
- reg = <0x00020000 0x00020000>;
- };
- partition@2 {
- label = "NAND.SPL.backup2";
- reg = <0x00040000 0x00020000>;
- };
- partition@3 {
- label = "NAND.SPL.backup3";
- reg = <0x00060000 0x00020000>;
- };
- partition@4 {
- label = "NAND.u-boot-spl-os";
- reg = <0x00080000 0x00040000>;
- };
- partition@5 {
- label = "NAND.u-boot";
- reg = <0x000c0000 0x00100000>;
- };
- partition@6 {
- label = "NAND.u-boot-env";
- reg = <0x001c0000 0x00020000>;
- };
- partition@7 {
- label = "NAND.u-boot-env.backup1";
- reg = <0x001e0000 0x00020000>;
- };
- partition@8 {
- label = "NAND.kernel";
- reg = <0x00200000 0x00800000>;
- };
- partition@9 {
- label = "NAND.file-system";
- reg = <0x00a00000 0x0f600000>;
- };
- };
-};
-
-&usb2_phy1 {
- phy-supply = <&ldo4_reg>;
-};
-
-&usb2_phy2 {
- phy-supply = <&ldo4_reg>;
-};
-
-&omap_dwc3_1 {
- extcon = <&extcon_usb1>;
-};
-
-&omap_dwc3_2 {
- extcon = <&extcon_usb2>;
-};
-
-&usb1 {
- dr_mode = "peripheral";
- pinctrl-names = "default";
- pinctrl-0 = <&usb1_pins>;
-};
-
-&usb2 {
- dr_mode = "host";
- pinctrl-names = "default";
- pinctrl-0 = <&usb2_pins>;
-};
-
-&mmc1 {
- status = "okay";
- pinctrl-names = "default";
- pinctrl-0 = <&mmc1_pins_default>;
- vmmc-supply = <&evm_3v3_sd>;
- vmmc_aux-supply = <&ldo1_reg>;
- bus-width = <4>;
- /*
- * SDCD signal is not being used here - using the fact that GPIO mode
- * is a viable alternative
- */
- cd-gpios = <&gpio6 27 GPIO_ACTIVE_LOW>;
- max-frequency = <192000000>;
-};
-
-&mmc2 {
- /* SW5-3 in ON position */
- status = "okay";
- pinctrl-names = "default";
- pinctrl-0 = <&mmc2_pins_default>;
-
- vmmc-supply = <&evm_3v3>;
- bus-width = <8>;
- ti,non-removable;
- max-frequency = <192000000>;
+&hdmi {
+ vdda-supply = <&ldo3_reg>;
};
-&dra7_pmx_core {
- cpsw_default: cpsw_default {
- pinctrl-single,pins = <
- /* Slave 2 */
- DRA7XX_CORE_IOPAD(0x3598, PIN_OUTPUT | MUX_MODE3) /* vin2a_d12.rgmii1_txc */
- DRA7XX_CORE_IOPAD(0x359c, PIN_OUTPUT | MUX_MODE3) /* vin2a_d13.rgmii1_tctl */
- DRA7XX_CORE_IOPAD(0x35a0, PIN_OUTPUT | MUX_MODE3) /* vin2a_d14.rgmii1_td3 */
- DRA7XX_CORE_IOPAD(0x35a4, PIN_OUTPUT | MUX_MODE3) /* vin2a_d15.rgmii1_td2 */
- DRA7XX_CORE_IOPAD(0x35a8, PIN_OUTPUT | MUX_MODE3) /* vin2a_d16.rgmii1_td1 */
- DRA7XX_CORE_IOPAD(0x35ac, PIN_OUTPUT | MUX_MODE3) /* vin2a_d17.rgmii1_td0 */
- DRA7XX_CORE_IOPAD(0x35b0, PIN_INPUT | MUX_MODE3) /* vin2a_d18.rgmii1_rclk */
- DRA7XX_CORE_IOPAD(0x35b4, PIN_INPUT | MUX_MODE3) /* vin2a_d19.rgmii1_rctl */
- DRA7XX_CORE_IOPAD(0x35b8, PIN_INPUT | MUX_MODE3) /* vin2a_d20.rgmii1_rd3 */
- DRA7XX_CORE_IOPAD(0x35bc, PIN_INPUT | MUX_MODE3) /* vin2a_d21.rgmii1_rd2 */
- DRA7XX_CORE_IOPAD(0x35c0, PIN_INPUT | MUX_MODE3) /* vin2a_d22.rgmii1_rd1 */
- DRA7XX_CORE_IOPAD(0x35c4, PIN_INPUT | MUX_MODE3) /* vin2a_d23.rgmii1_rd0 */
- >;
-
- };
-
- cpsw_sleep: cpsw_sleep {
- pinctrl-single,pins = <
- /* Slave 2 */
- DRA7XX_CORE_IOPAD(0x3598, MUX_MODE15)
- DRA7XX_CORE_IOPAD(0x359c, MUX_MODE15)
- DRA7XX_CORE_IOPAD(0x35a0, MUX_MODE15)
- DRA7XX_CORE_IOPAD(0x35a4, MUX_MODE15)
- DRA7XX_CORE_IOPAD(0x35a8, MUX_MODE15)
- DRA7XX_CORE_IOPAD(0x35ac, MUX_MODE15)
- DRA7XX_CORE_IOPAD(0x35b0, MUX_MODE15)
- DRA7XX_CORE_IOPAD(0x35b4, MUX_MODE15)
- DRA7XX_CORE_IOPAD(0x35b8, MUX_MODE15)
- DRA7XX_CORE_IOPAD(0x35bc, MUX_MODE15)
- DRA7XX_CORE_IOPAD(0x35c0, MUX_MODE15)
- DRA7XX_CORE_IOPAD(0x35c4, MUX_MODE15)
- >;
- };
-
- davinci_mdio_default: davinci_mdio_default {
- pinctrl-single,pins = <
- /* MDIO */
- DRA7XX_CORE_IOPAD(0x363c, PIN_OUTPUT_PULLUP | MUX_MODE0) /* mdio_d.mdio_d */
- DRA7XX_CORE_IOPAD(0x3640, PIN_INPUT_PULLUP | MUX_MODE0) /* mdio_clk.mdio_clk */
- >;
- };
-
- davinci_mdio_sleep: davinci_mdio_sleep {
- pinctrl-single,pins = <
- DRA7XX_CORE_IOPAD(0x363c, MUX_MODE15)
- DRA7XX_CORE_IOPAD(0x3640, MUX_MODE15)
- >;
- };
+&pcf_gpio_21 {
+ interrupt-parent = <&gpio6>;
+ interrupts = <11 IRQ_TYPE_EDGE_FALLING>;
};
&mac {
- status = "okay";
- pinctrl-names = "default", "sleep";
- pinctrl-0 = <&cpsw_default>;
- pinctrl-1 = <&cpsw_sleep>;
slaves = <1>;
mode-gpios = <&pcf_gpio_21 4 GPIO_ACTIVE_HIGH>;
};
@@ -697,158 +43,3 @@
phy_id = <&davinci_mdio>, <3>;
phy-mode = "rgmii";
};
-
-&davinci_mdio {
- pinctrl-names = "default", "sleep";
- pinctrl-0 = <&davinci_mdio_default>;
- pinctrl-1 = <&davinci_mdio_sleep>;
-};
-
-&dcan1 {
- status = "ok";
- pinctrl-names = "default", "sleep", "active";
- pinctrl-0 = <&dcan1_pins_sleep>;
- pinctrl-1 = <&dcan1_pins_sleep>;
- pinctrl-2 = <&dcan1_pins_default>;
-};
-
-&qspi {
- status = "okay";
- pinctrl-names = "default";
- pinctrl-0 = <&qspi1_pins>;
-
- spi-max-frequency = <48000000>;
- m25p80@0 {
- compatible = "s25fl256s1";
- spi-max-frequency = <48000000>;
- reg = <0>;
- spi-tx-bus-width = <1>;
- spi-rx-bus-width = <4>;
- spi-cpol;
- spi-cpha;
- #address-cells = <1>;
- #size-cells = <1>;
-
- /* MTD partition table.
- * The ROM checks the first four physical blocks
- * for a valid file to boot and the flash here is
- * 64KiB block size.
- */
- partition@0 {
- label = "QSPI.SPL";
- reg = <0x00000000 0x000010000>;
- };
- partition@1 {
- label = "QSPI.SPL.backup1";
- reg = <0x00010000 0x00010000>;
- };
- partition@2 {
- label = "QSPI.SPL.backup2";
- reg = <0x00020000 0x00010000>;
- };
- partition@3 {
- label = "QSPI.SPL.backup3";
- reg = <0x00030000 0x00010000>;
- };
- partition@4 {
- label = "QSPI.u-boot";
- reg = <0x00040000 0x00100000>;
- };
- partition@5 {
- label = "QSPI.u-boot-spl-os";
- reg = <0x00140000 0x00080000>;
- };
- partition@6 {
- label = "QSPI.u-boot-env";
- reg = <0x001c0000 0x00010000>;
- };
- partition@7 {
- label = "QSPI.u-boot-env.backup1";
- reg = <0x001d0000 0x0010000>;
- };
- partition@8 {
- label = "QSPI.kernel";
- reg = <0x001e0000 0x0800000>;
- };
- partition@9 {
- label = "QSPI.file-system";
- reg = <0x009e0000 0x01620000>;
- };
- };
-};
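The partition-table comment in the removed &qspi node above states that the boot ROM probes the first four physical blocks for a bootable image and that this flash uses a 64 KiB erase block. A minimal layout sketch of what those numbers imply, with offsets taken from the partition entries above (illustration only, not part of the patch):

    /* Sketch: ROM search window = 4 blocks x 0x10000 (64 KiB) = 0x40000
     *   0x000000  QSPI.SPL            block 0
     *   0x010000  QSPI.SPL.backup1    block 1
     *   0x020000  QSPI.SPL.backup2    block 2
     *   0x030000  QSPI.SPL.backup3    block 3
     *   0x040000  QSPI.u-boot         first offset beyond the ROM window
     */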
-
-&dss {
- status = "ok";
-
- vdda_video-supply = <&ldo5_reg>;
-};
-
-&hdmi {
- status = "ok";
- vdda-supply = <&ldo3_reg>;
-
- pinctrl-names = "default";
- pinctrl-0 = <&hdmi_pins>;
-
- port {
- hdmi_out: endpoint {
- remote-endpoint = <&tpd12s015_in>;
- };
- };
-};
-
-&atl {
- pinctrl-names = "default";
- pinctrl-0 = <&atl_pins>;
-
- assigned-clocks = <&abe_dpll_sys_clk_mux>,
- <&atl_gfclk_mux>,
- <&dpll_abe_ck>,
- <&dpll_abe_m2x2_ck>,
- <&atl_clkin2_ck>;
- assigned-clock-parents = <&sys_clkin2>, <&dpll_abe_m2_ck>;
- assigned-clock-rates = <0>, <0>, <180633600>, <361267200>, <5644800>;
-
- status = "okay";
-
- atl2 {
- bws = <DRA7_ATL_WS_MCASP2_FSX>;
- aws = <DRA7_ATL_WS_MCASP3_FSX>;
- };
-};
-
-&mcasp3 {
- #sound-dai-cells = <0>;
- pinctrl-names = "default", "sleep";
- pinctrl-0 = <&mcasp3_pins>;
- pinctrl-1 = <&mcasp3_sleep_pins>;
-
- assigned-clocks = <&mcasp3_ahclkx_mux>;
- assigned-clock-parents = <&atl_clkin2_ck>;
-
- status = "okay";
-
- op-mode = <0>; /* MCASP_IIS_MODE */
- tdm-slots = <2>;
- /* 4 serializer */
- serial-dir = < /* 0: INACTIVE, 1: TX, 2: RX */
- 1 2 0 0
- >;
-};
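As a reading aid for the serial-dir property in the removed &mcasp3 node above, using the encoding given by its in-line comment (0: INACTIVE, 1: TX, 2: RX); this is an annotation only, not part of the patch:

    /* illustration: serial-dir = <1 2 0 0>
     *   serializer 0: 1 -> TX
     *   serializer 1: 2 -> RX
     *   serializers 2-3: 0 -> inactive
     */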
-
-&mailbox5 {
- status = "okay";
- mbox_ipu1_ipc3x: mbox_ipu1_ipc3x {
- status = "okay";
- };
- mbox_dsp1_ipc3x: mbox_dsp1_ipc3x {
- status = "okay";
- };
-};
-
-&mailbox6 {
- status = "okay";
- mbox_ipu2_ipc3x: mbox_ipu2_ipc3x {
- status = "okay";
- };
-};
diff --git a/arch/arm/boot/dts/dra7xx-clocks.dtsi b/arch/arm/boot/dts/dra7xx-clocks.dtsi
index ef2164a99d0f..8378b44ee567 100644
--- a/arch/arm/boot/dts/dra7xx-clocks.dtsi
+++ b/arch/arm/boot/dts/dra7xx-clocks.dtsi
@@ -196,7 +196,7 @@
clock-frequency = <0>;
};
- dpll_abe_ck: dpll_abe_ck {
+ dpll_abe_ck: dpll_abe_ck@1e0 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-m4xen-clock";
clocks = <&abe_dpll_clk_mux>, <&abe_dpll_bypass_clk_mux>;
@@ -209,7 +209,7 @@
clocks = <&dpll_abe_ck>;
};
- dpll_abe_m2x2_ck: dpll_abe_m2x2_ck {
+ dpll_abe_m2x2_ck: dpll_abe_m2x2_ck@1f0 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_abe_x2_ck>;
@@ -220,7 +220,7 @@
ti,invert-autoidle-bit;
};
- abe_clk: abe_clk {
+ abe_clk: abe_clk@108 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_abe_m2x2_ck>;
@@ -229,7 +229,7 @@
ti,index-power-of-two;
};
- dpll_abe_m2_ck: dpll_abe_m2_ck {
+ dpll_abe_m2_ck: dpll_abe_m2_ck@1f0 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_abe_ck>;
@@ -240,7 +240,7 @@
ti,invert-autoidle-bit;
};
- dpll_abe_m3x2_ck: dpll_abe_m3x2_ck {
+ dpll_abe_m3x2_ck: dpll_abe_m3x2_ck@1f4 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_abe_x2_ck>;
@@ -251,7 +251,7 @@
ti,invert-autoidle-bit;
};
- dpll_core_byp_mux: dpll_core_byp_mux {
+ dpll_core_byp_mux: dpll_core_byp_mux@12c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>;
@@ -259,7 +259,7 @@
reg = <0x012c>;
};
- dpll_core_ck: dpll_core_ck {
+ dpll_core_ck: dpll_core_ck@120 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-core-clock";
clocks = <&sys_clkin1>, <&dpll_core_byp_mux>;
@@ -272,7 +272,7 @@
clocks = <&dpll_core_ck>;
};
- dpll_core_h12x2_ck: dpll_core_h12x2_ck {
+ dpll_core_h12x2_ck: dpll_core_h12x2_ck@13c {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -291,14 +291,14 @@
clock-div = <1>;
};
- dpll_mpu_ck: dpll_mpu_ck {
+ dpll_mpu_ck: dpll_mpu_ck@160 {
#clock-cells = <0>;
compatible = "ti,omap5-mpu-dpll-clock";
clocks = <&sys_clkin1>, <&mpu_dpll_hs_clk_div>;
reg = <0x0160>, <0x0164>, <0x016c>, <0x0168>;
};
- dpll_mpu_m2_ck: dpll_mpu_m2_ck {
+ dpll_mpu_m2_ck: dpll_mpu_m2_ck@170 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_mpu_ck>;
@@ -325,7 +325,7 @@
clock-div = <1>;
};
- dpll_dsp_byp_mux: dpll_dsp_byp_mux {
+ dpll_dsp_byp_mux: dpll_dsp_byp_mux@240 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin1>, <&dsp_dpll_hs_clk_div>;
@@ -333,14 +333,14 @@
reg = <0x0240>;
};
- dpll_dsp_ck: dpll_dsp_ck {
+ dpll_dsp_ck: dpll_dsp_ck@234 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
clocks = <&sys_clkin1>, <&dpll_dsp_byp_mux>;
reg = <0x0234>, <0x0238>, <0x0240>, <0x023c>;
};
- dpll_dsp_m2_ck: dpll_dsp_m2_ck {
+ dpll_dsp_m2_ck: dpll_dsp_m2_ck@244 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_dsp_ck>;
@@ -359,7 +359,7 @@
clock-div = <1>;
};
- dpll_iva_byp_mux: dpll_iva_byp_mux {
+ dpll_iva_byp_mux: dpll_iva_byp_mux@1ac {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin1>, <&iva_dpll_hs_clk_div>;
@@ -367,14 +367,14 @@
reg = <0x01ac>;
};
- dpll_iva_ck: dpll_iva_ck {
+ dpll_iva_ck: dpll_iva_ck@1a0 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
clocks = <&sys_clkin1>, <&dpll_iva_byp_mux>;
reg = <0x01a0>, <0x01a4>, <0x01ac>, <0x01a8>;
};
- dpll_iva_m2_ck: dpll_iva_m2_ck {
+ dpll_iva_m2_ck: dpll_iva_m2_ck@1b0 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_iva_ck>;
@@ -393,7 +393,7 @@
clock-div = <1>;
};
- dpll_gpu_byp_mux: dpll_gpu_byp_mux {
+ dpll_gpu_byp_mux: dpll_gpu_byp_mux@2e4 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>;
@@ -401,14 +401,14 @@
reg = <0x02e4>;
};
- dpll_gpu_ck: dpll_gpu_ck {
+ dpll_gpu_ck: dpll_gpu_ck@2d8 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
clocks = <&sys_clkin1>, <&dpll_gpu_byp_mux>;
reg = <0x02d8>, <0x02dc>, <0x02e4>, <0x02e0>;
};
- dpll_gpu_m2_ck: dpll_gpu_m2_ck {
+ dpll_gpu_m2_ck: dpll_gpu_m2_ck@2e8 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_gpu_ck>;
@@ -419,7 +419,7 @@
ti,invert-autoidle-bit;
};
- dpll_core_m2_ck: dpll_core_m2_ck {
+ dpll_core_m2_ck: dpll_core_m2_ck@130 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_ck>;
@@ -438,7 +438,7 @@
clock-div = <1>;
};
- dpll_ddr_byp_mux: dpll_ddr_byp_mux {
+ dpll_ddr_byp_mux: dpll_ddr_byp_mux@21c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>;
@@ -446,14 +446,14 @@
reg = <0x021c>;
};
- dpll_ddr_ck: dpll_ddr_ck {
+ dpll_ddr_ck: dpll_ddr_ck@210 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
clocks = <&sys_clkin1>, <&dpll_ddr_byp_mux>;
reg = <0x0210>, <0x0214>, <0x021c>, <0x0218>;
};
- dpll_ddr_m2_ck: dpll_ddr_m2_ck {
+ dpll_ddr_m2_ck: dpll_ddr_m2_ck@220 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_ddr_ck>;
@@ -464,7 +464,7 @@
ti,invert-autoidle-bit;
};
- dpll_gmac_byp_mux: dpll_gmac_byp_mux {
+ dpll_gmac_byp_mux: dpll_gmac_byp_mux@2b4 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin1>, <&dpll_abe_m3x2_ck>;
@@ -472,14 +472,14 @@
reg = <0x02b4>;
};
- dpll_gmac_ck: dpll_gmac_ck {
+ dpll_gmac_ck: dpll_gmac_ck@2a8 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
clocks = <&sys_clkin1>, <&dpll_gmac_byp_mux>;
reg = <0x02a8>, <0x02ac>, <0x02b4>, <0x02b0>;
};
- dpll_gmac_m2_ck: dpll_gmac_m2_ck {
+ dpll_gmac_m2_ck: dpll_gmac_m2_ck@2b8 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_gmac_ck>;
@@ -538,7 +538,7 @@
clock-div = <1>;
};
- dpll_eve_byp_mux: dpll_eve_byp_mux {
+ dpll_eve_byp_mux: dpll_eve_byp_mux@290 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin1>, <&eve_dpll_hs_clk_div>;
@@ -546,14 +546,14 @@
reg = <0x0290>;
};
- dpll_eve_ck: dpll_eve_ck {
+ dpll_eve_ck: dpll_eve_ck@284 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
clocks = <&sys_clkin1>, <&dpll_eve_byp_mux>;
reg = <0x0284>, <0x0288>, <0x0290>, <0x028c>;
};
- dpll_eve_m2_ck: dpll_eve_m2_ck {
+ dpll_eve_m2_ck: dpll_eve_m2_ck@294 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_eve_ck>;
@@ -572,7 +572,7 @@
clock-div = <1>;
};
- dpll_core_h13x2_ck: dpll_core_h13x2_ck {
+ dpll_core_h13x2_ck: dpll_core_h13x2_ck@140 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -583,7 +583,7 @@
ti,invert-autoidle-bit;
};
- dpll_core_h14x2_ck: dpll_core_h14x2_ck {
+ dpll_core_h14x2_ck: dpll_core_h14x2_ck@144 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -594,7 +594,7 @@
ti,invert-autoidle-bit;
};
- dpll_core_h22x2_ck: dpll_core_h22x2_ck {
+ dpll_core_h22x2_ck: dpll_core_h22x2_ck@154 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -605,7 +605,7 @@
ti,invert-autoidle-bit;
};
- dpll_core_h23x2_ck: dpll_core_h23x2_ck {
+ dpll_core_h23x2_ck: dpll_core_h23x2_ck@158 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -616,7 +616,7 @@
ti,invert-autoidle-bit;
};
- dpll_core_h24x2_ck: dpll_core_h24x2_ck {
+ dpll_core_h24x2_ck: dpll_core_h24x2_ck@15c {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -633,7 +633,7 @@
clocks = <&dpll_ddr_ck>;
};
- dpll_ddr_h11x2_ck: dpll_ddr_h11x2_ck {
+ dpll_ddr_h11x2_ck: dpll_ddr_h11x2_ck@228 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_ddr_x2_ck>;
@@ -650,7 +650,7 @@
clocks = <&dpll_dsp_ck>;
};
- dpll_dsp_m3x2_ck: dpll_dsp_m3x2_ck {
+ dpll_dsp_m3x2_ck: dpll_dsp_m3x2_ck@248 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_dsp_x2_ck>;
@@ -667,7 +667,7 @@
clocks = <&dpll_gmac_ck>;
};
- dpll_gmac_h11x2_ck: dpll_gmac_h11x2_ck {
+ dpll_gmac_h11x2_ck: dpll_gmac_h11x2_ck@2c0 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_gmac_x2_ck>;
@@ -678,7 +678,7 @@
ti,invert-autoidle-bit;
};
- dpll_gmac_h12x2_ck: dpll_gmac_h12x2_ck {
+ dpll_gmac_h12x2_ck: dpll_gmac_h12x2_ck@2c4 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_gmac_x2_ck>;
@@ -689,7 +689,7 @@
ti,invert-autoidle-bit;
};
- dpll_gmac_h13x2_ck: dpll_gmac_h13x2_ck {
+ dpll_gmac_h13x2_ck: dpll_gmac_h13x2_ck@2c8 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_gmac_x2_ck>;
@@ -700,7 +700,7 @@
ti,invert-autoidle-bit;
};
- dpll_gmac_m3x2_ck: dpll_gmac_m3x2_ck {
+ dpll_gmac_m3x2_ck: dpll_gmac_m3x2_ck@2bc {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_gmac_x2_ck>;
@@ -735,7 +735,7 @@
clock-div = <1>;
};
- l3_iclk_div: l3_iclk_div {
+ l3_iclk_div: l3_iclk_div@100 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
ti,max-div = <2>;
@@ -785,7 +785,7 @@
clock-div = <1>;
};
- ipu1_gfclk_mux: ipu1_gfclk_mux {
+ ipu1_gfclk_mux: ipu1_gfclk_mux@520 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&dpll_abe_m2x2_ck>, <&dpll_core_h22x2_ck>;
@@ -793,7 +793,7 @@
reg = <0x0520>;
};
- mcasp1_ahclkr_mux: mcasp1_ahclkr_mux {
+ mcasp1_ahclkr_mux: mcasp1_ahclkr_mux@550 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_24m_fclk>, <&abe_sys_clk_div>, <&func_24m_clk>, <&atl_clkin3_ck>, <&atl_clkin2_ck>, <&atl_clkin1_ck>, <&atl_clkin0_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&mlb_clk>, <&mlbp_clk>;
@@ -801,7 +801,7 @@
reg = <0x0550>;
};
- mcasp1_ahclkx_mux: mcasp1_ahclkx_mux {
+ mcasp1_ahclkx_mux: mcasp1_ahclkx_mux@550 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_24m_fclk>, <&abe_sys_clk_div>, <&func_24m_clk>, <&atl_clkin3_ck>, <&atl_clkin2_ck>, <&atl_clkin1_ck>, <&atl_clkin0_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&mlb_clk>, <&mlbp_clk>;
@@ -809,7 +809,7 @@
reg = <0x0550>;
};
- mcasp1_aux_gfclk_mux: mcasp1_aux_gfclk_mux {
+ mcasp1_aux_gfclk_mux: mcasp1_aux_gfclk_mux@550 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&per_abe_x1_gfclk2_div>, <&video1_clk2_div>, <&video2_clk2_div>, <&hdmi_clk2_div>;
@@ -817,7 +817,7 @@
reg = <0x0550>;
};
- timer5_gfclk_mux: timer5_gfclk_mux {
+ timer5_gfclk_mux: timer5_gfclk_mux@558 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&timer_sys_clk_div>, <&sys_32k_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&abe_giclk_div>, <&video1_div_clk>, <&video2_div_clk>, <&hdmi_div_clk>, <&clkoutmux0_clk_mux>;
@@ -825,7 +825,7 @@
reg = <0x0558>;
};
- timer6_gfclk_mux: timer6_gfclk_mux {
+ timer6_gfclk_mux: timer6_gfclk_mux@560 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&timer_sys_clk_div>, <&sys_32k_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&abe_giclk_div>, <&video1_div_clk>, <&video2_div_clk>, <&hdmi_div_clk>, <&clkoutmux0_clk_mux>;
@@ -833,7 +833,7 @@
reg = <0x0560>;
};
- timer7_gfclk_mux: timer7_gfclk_mux {
+ timer7_gfclk_mux: timer7_gfclk_mux@568 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&timer_sys_clk_div>, <&sys_32k_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&abe_giclk_div>, <&video1_div_clk>, <&video2_div_clk>, <&hdmi_div_clk>, <&clkoutmux0_clk_mux>;
@@ -841,7 +841,7 @@
reg = <0x0568>;
};
- timer8_gfclk_mux: timer8_gfclk_mux {
+ timer8_gfclk_mux: timer8_gfclk_mux@570 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&timer_sys_clk_div>, <&sys_32k_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&abe_giclk_div>, <&video1_div_clk>, <&video2_div_clk>, <&hdmi_div_clk>, <&clkoutmux0_clk_mux>;
@@ -849,7 +849,7 @@
reg = <0x0570>;
};
- uart6_gfclk_mux: uart6_gfclk_mux {
+ uart6_gfclk_mux: uart6_gfclk_mux@580 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&func_48m_fclk>, <&dpll_per_m2x2_ck>;
@@ -864,7 +864,7 @@
};
};
&prm_clocks {
- sys_clkin1: sys_clkin1 {
+ sys_clkin1: sys_clkin1@110 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&virt_12000000_ck>, <&virt_20000000_ck>, <&virt_16800000_ck>, <&virt_19200000_ck>, <&virt_26000000_ck>, <&virt_27000000_ck>, <&virt_38400000_ck>;
@@ -872,28 +872,28 @@
ti,index-starts-at-one;
};
- abe_dpll_sys_clk_mux: abe_dpll_sys_clk_mux {
+ abe_dpll_sys_clk_mux: abe_dpll_sys_clk_mux@118 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin1>, <&sys_clkin2>;
reg = <0x0118>;
};
- abe_dpll_bypass_clk_mux: abe_dpll_bypass_clk_mux {
+ abe_dpll_bypass_clk_mux: abe_dpll_bypass_clk_mux@114 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_dpll_sys_clk_mux>, <&sys_32k_ck>;
reg = <0x0114>;
};
- abe_dpll_clk_mux: abe_dpll_clk_mux {
+ abe_dpll_clk_mux: abe_dpll_clk_mux@10c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_dpll_sys_clk_mux>, <&sys_32k_ck>;
reg = <0x010c>;
};
- abe_24m_fclk: abe_24m_fclk {
+ abe_24m_fclk: abe_24m_fclk@11c {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_abe_m2x2_ck>;
@@ -901,7 +901,7 @@
ti,dividers = <8>, <16>;
};
- aess_fclk: aess_fclk {
+ aess_fclk: aess_fclk@178 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&abe_clk>;
@@ -909,7 +909,7 @@
ti,max-div = <2>;
};
- abe_giclk_div: abe_giclk_div {
+ abe_giclk_div: abe_giclk_div@174 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&aess_fclk>;
@@ -917,7 +917,7 @@
ti,max-div = <2>;
};
- abe_lp_clk_div: abe_lp_clk_div {
+ abe_lp_clk_div: abe_lp_clk_div@1d8 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_abe_m2x2_ck>;
@@ -925,7 +925,7 @@
ti,dividers = <16>, <32>;
};
- abe_sys_clk_div: abe_sys_clk_div {
+ abe_sys_clk_div: abe_sys_clk_div@120 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&sys_clkin1>;
@@ -933,14 +933,14 @@
ti,max-div = <2>;
};
- adc_gfclk_mux: adc_gfclk_mux {
+ adc_gfclk_mux: adc_gfclk_mux@1dc {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin1>, <&sys_clkin2>, <&sys_32k_ck>;
reg = <0x01dc>;
};
- sys_clk1_dclk_div: sys_clk1_dclk_div {
+ sys_clk1_dclk_div: sys_clk1_dclk_div@1c8 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&sys_clkin1>;
@@ -949,7 +949,7 @@
ti,index-power-of-two;
};
- sys_clk2_dclk_div: sys_clk2_dclk_div {
+ sys_clk2_dclk_div: sys_clk2_dclk_div@1cc {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&sys_clkin2>;
@@ -958,7 +958,7 @@
ti,index-power-of-two;
};
- per_abe_x1_dclk_div: per_abe_x1_dclk_div {
+ per_abe_x1_dclk_div: per_abe_x1_dclk_div@1bc {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_abe_m2_ck>;
@@ -967,7 +967,7 @@
ti,index-power-of-two;
};
- dsp_gclk_div: dsp_gclk_div {
+ dsp_gclk_div: dsp_gclk_div@18c {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_dsp_m2_ck>;
@@ -976,7 +976,7 @@
ti,index-power-of-two;
};
- gpu_dclk: gpu_dclk {
+ gpu_dclk: gpu_dclk@1a0 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_gpu_m2_ck>;
@@ -985,7 +985,7 @@
ti,index-power-of-two;
};
- emif_phy_dclk_div: emif_phy_dclk_div {
+ emif_phy_dclk_div: emif_phy_dclk_div@190 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_ddr_m2_ck>;
@@ -994,7 +994,7 @@
ti,index-power-of-two;
};
- gmac_250m_dclk_div: gmac_250m_dclk_div {
+ gmac_250m_dclk_div: gmac_250m_dclk_div@19c {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_gmac_m2_ck>;
@@ -1003,7 +1003,7 @@
ti,index-power-of-two;
};
- l3init_480m_dclk_div: l3init_480m_dclk_div {
+ l3init_480m_dclk_div: l3init_480m_dclk_div@1ac {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_usb_m2_ck>;
@@ -1012,7 +1012,7 @@
ti,index-power-of-two;
};
- usb_otg_dclk_div: usb_otg_dclk_div {
+ usb_otg_dclk_div: usb_otg_dclk_div@184 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&usb_otg_clkin_ck>;
@@ -1021,7 +1021,7 @@
ti,index-power-of-two;
};
- sata_dclk_div: sata_dclk_div {
+ sata_dclk_div: sata_dclk_div@1c0 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&sys_clkin1>;
@@ -1030,7 +1030,7 @@
ti,index-power-of-two;
};
- pcie2_dclk_div: pcie2_dclk_div {
+ pcie2_dclk_div: pcie2_dclk_div@1b8 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_pcie_ref_m2_ck>;
@@ -1039,7 +1039,7 @@
ti,index-power-of-two;
};
- pcie_dclk_div: pcie_dclk_div {
+ pcie_dclk_div: pcie_dclk_div@1b4 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&apll_pcie_m2_ck>;
@@ -1048,7 +1048,7 @@
ti,index-power-of-two;
};
- emu_dclk_div: emu_dclk_div {
+ emu_dclk_div: emu_dclk_div@194 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&sys_clkin1>;
@@ -1057,7 +1057,7 @@
ti,index-power-of-two;
};
- secure_32k_dclk_div: secure_32k_dclk_div {
+ secure_32k_dclk_div: secure_32k_dclk_div@1c4 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&secure_32k_clk_src_ck>;
@@ -1066,21 +1066,21 @@
ti,index-power-of-two;
};
- clkoutmux0_clk_mux: clkoutmux0_clk_mux {
+ clkoutmux0_clk_mux: clkoutmux0_clk_mux@158 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clk1_dclk_div>, <&sys_clk2_dclk_div>, <&per_abe_x1_dclk_div>, <&mpu_dclk_div>, <&dsp_gclk_div>, <&iva_dclk>, <&gpu_dclk>, <&core_dpll_out_dclk_div>, <&emif_phy_dclk_div>, <&gmac_250m_dclk_div>, <&video2_dclk_div>, <&video1_dclk_div>, <&hdmi_dclk_div>, <&func_96m_aon_dclk_div>, <&l3init_480m_dclk_div>, <&usb_otg_dclk_div>, <&sata_dclk_div>, <&pcie2_dclk_div>, <&pcie_dclk_div>, <&emu_dclk_div>, <&secure_32k_dclk_div>, <&eve_dclk_div>;
reg = <0x0158>;
};
- clkoutmux1_clk_mux: clkoutmux1_clk_mux {
+ clkoutmux1_clk_mux: clkoutmux1_clk_mux@15c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clk1_dclk_div>, <&sys_clk2_dclk_div>, <&per_abe_x1_dclk_div>, <&mpu_dclk_div>, <&dsp_gclk_div>, <&iva_dclk>, <&gpu_dclk>, <&core_dpll_out_dclk_div>, <&emif_phy_dclk_div>, <&gmac_250m_dclk_div>, <&video2_dclk_div>, <&video1_dclk_div>, <&hdmi_dclk_div>, <&func_96m_aon_dclk_div>, <&l3init_480m_dclk_div>, <&usb_otg_dclk_div>, <&sata_dclk_div>, <&pcie2_dclk_div>, <&pcie_dclk_div>, <&emu_dclk_div>, <&secure_32k_dclk_div>, <&eve_dclk_div>;
reg = <0x015c>;
};
- clkoutmux2_clk_mux: clkoutmux2_clk_mux {
+ clkoutmux2_clk_mux: clkoutmux2_clk_mux@160 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clk1_dclk_div>, <&sys_clk2_dclk_div>, <&per_abe_x1_dclk_div>, <&mpu_dclk_div>, <&dsp_gclk_div>, <&iva_dclk>, <&gpu_dclk>, <&core_dpll_out_dclk_div>, <&emif_phy_dclk_div>, <&gmac_250m_dclk_div>, <&video2_dclk_div>, <&video1_dclk_div>, <&hdmi_dclk_div>, <&func_96m_aon_dclk_div>, <&l3init_480m_dclk_div>, <&usb_otg_dclk_div>, <&sata_dclk_div>, <&pcie2_dclk_div>, <&pcie_dclk_div>, <&emu_dclk_div>, <&secure_32k_dclk_div>, <&eve_dclk_div>;
@@ -1095,21 +1095,21 @@
clock-div = <2>;
};
- eve_clk: eve_clk {
+ eve_clk: eve_clk@180 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&dpll_eve_m2_ck>, <&dpll_dsp_m3x2_ck>;
reg = <0x0180>;
};
- hdmi_dpll_clk_mux: hdmi_dpll_clk_mux {
+ hdmi_dpll_clk_mux: hdmi_dpll_clk_mux@164 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin1>, <&sys_clkin2>;
reg = <0x0164>;
};
- mlb_clk: mlb_clk {
+ mlb_clk: mlb_clk@134 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&mlb_clkin_ck>;
@@ -1118,7 +1118,7 @@
ti,index-power-of-two;
};
- mlbp_clk: mlbp_clk {
+ mlbp_clk: mlbp_clk@130 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&mlbp_clkin_ck>;
@@ -1127,7 +1127,7 @@
ti,index-power-of-two;
};
- per_abe_x1_gfclk2_div: per_abe_x1_gfclk2_div {
+ per_abe_x1_gfclk2_div: per_abe_x1_gfclk2_div@138 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_abe_m2_ck>;
@@ -1136,7 +1136,7 @@
ti,index-power-of-two;
};
- timer_sys_clk_div: timer_sys_clk_div {
+ timer_sys_clk_div: timer_sys_clk_div@144 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&sys_clkin1>;
@@ -1144,28 +1144,28 @@
ti,max-div = <2>;
};
- video1_dpll_clk_mux: video1_dpll_clk_mux {
+ video1_dpll_clk_mux: video1_dpll_clk_mux@168 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin1>, <&sys_clkin2>;
reg = <0x0168>;
};
- video2_dpll_clk_mux: video2_dpll_clk_mux {
+ video2_dpll_clk_mux: video2_dpll_clk_mux@16c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin1>, <&sys_clkin2>;
reg = <0x016c>;
};
- wkupaon_iclk_mux: wkupaon_iclk_mux {
+ wkupaon_iclk_mux: wkupaon_iclk_mux@108 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin1>, <&abe_lp_clk_div>;
reg = <0x0108>;
};
- gpio1_dbclk: gpio1_dbclk {
+ gpio1_dbclk: gpio1_dbclk@1838 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1173,7 +1173,7 @@
reg = <0x1838>;
};
- dcan1_sys_clk_mux: dcan1_sys_clk_mux {
+ dcan1_sys_clk_mux: dcan1_sys_clk_mux@1888 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin1>, <&sys_clkin2>;
@@ -1181,7 +1181,7 @@
reg = <0x1888>;
};
- timer1_gfclk_mux: timer1_gfclk_mux {
+ timer1_gfclk_mux: timer1_gfclk_mux@1840 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&timer_sys_clk_div>, <&sys_32k_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&abe_giclk_div>, <&video1_div_clk>, <&video2_div_clk>, <&hdmi_div_clk>;
@@ -1189,7 +1189,7 @@
reg = <0x1840>;
};
- uart10_gfclk_mux: uart10_gfclk_mux {
+ uart10_gfclk_mux: uart10_gfclk_mux@1880 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&func_48m_fclk>, <&dpll_per_m2x2_ck>;
@@ -1198,14 +1198,14 @@
};
};
&cm_core_clocks {
- dpll_pcie_ref_ck: dpll_pcie_ref_ck {
+ dpll_pcie_ref_ck: dpll_pcie_ref_ck@200 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
clocks = <&sys_clkin1>, <&sys_clkin1>;
reg = <0x0200>, <0x0204>, <0x020c>, <0x0208>;
};
- dpll_pcie_ref_m2ldo_ck: dpll_pcie_ref_m2ldo_ck {
+ dpll_pcie_ref_m2ldo_ck: dpll_pcie_ref_m2ldo_ck@210 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_pcie_ref_ck>;
@@ -1224,7 +1224,7 @@
ti,bit-shift = <7>;
};
- apll_pcie_ck: apll_pcie_ck {
+ apll_pcie_ck: apll_pcie_ck@21c {
#clock-cells = <0>;
compatible = "ti,dra7-apll-clock";
clocks = <&apll_pcie_in_clk_mux>, <&dpll_pcie_ref_ck>;
@@ -1313,7 +1313,7 @@
clock-div = <1>;
};
- dpll_per_byp_mux: dpll_per_byp_mux {
+ dpll_per_byp_mux: dpll_per_byp_mux@14c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin1>, <&per_dpll_hs_clk_div>;
@@ -1321,14 +1321,14 @@
reg = <0x014c>;
};
- dpll_per_ck: dpll_per_ck {
+ dpll_per_ck: dpll_per_ck@140 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
clocks = <&sys_clkin1>, <&dpll_per_byp_mux>;
reg = <0x0140>, <0x0144>, <0x014c>, <0x0148>;
};
- dpll_per_m2_ck: dpll_per_m2_ck {
+ dpll_per_m2_ck: dpll_per_m2_ck@150 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_ck>;
@@ -1347,7 +1347,7 @@
clock-div = <1>;
};
- dpll_usb_byp_mux: dpll_usb_byp_mux {
+ dpll_usb_byp_mux: dpll_usb_byp_mux@18c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin1>, <&usb_dpll_hs_clk_div>;
@@ -1355,14 +1355,14 @@
reg = <0x018c>;
};
- dpll_usb_ck: dpll_usb_ck {
+ dpll_usb_ck: dpll_usb_ck@180 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-j-type-clock";
clocks = <&sys_clkin1>, <&dpll_usb_byp_mux>;
reg = <0x0180>, <0x0184>, <0x018c>, <0x0188>;
};
- dpll_usb_m2_ck: dpll_usb_m2_ck {
+ dpll_usb_m2_ck: dpll_usb_m2_ck@190 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_usb_ck>;
@@ -1373,7 +1373,7 @@
ti,invert-autoidle-bit;
};
- dpll_pcie_ref_m2_ck: dpll_pcie_ref_m2_ck {
+ dpll_pcie_ref_m2_ck: dpll_pcie_ref_m2_ck@210 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_pcie_ref_ck>;
@@ -1390,7 +1390,7 @@
clocks = <&dpll_per_ck>;
};
- dpll_per_h11x2_ck: dpll_per_h11x2_ck {
+ dpll_per_h11x2_ck: dpll_per_h11x2_ck@158 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_x2_ck>;
@@ -1401,7 +1401,7 @@
ti,invert-autoidle-bit;
};
- dpll_per_h12x2_ck: dpll_per_h12x2_ck {
+ dpll_per_h12x2_ck: dpll_per_h12x2_ck@15c {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_x2_ck>;
@@ -1412,7 +1412,7 @@
ti,invert-autoidle-bit;
};
- dpll_per_h13x2_ck: dpll_per_h13x2_ck {
+ dpll_per_h13x2_ck: dpll_per_h13x2_ck@160 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_x2_ck>;
@@ -1423,7 +1423,7 @@
ti,invert-autoidle-bit;
};
- dpll_per_h14x2_ck: dpll_per_h14x2_ck {
+ dpll_per_h14x2_ck: dpll_per_h14x2_ck@164 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_x2_ck>;
@@ -1434,7 +1434,7 @@
ti,invert-autoidle-bit;
};
- dpll_per_m2x2_ck: dpll_per_m2x2_ck {
+ dpll_per_m2x2_ck: dpll_per_m2x2_ck@150 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_x2_ck>;
@@ -1493,7 +1493,7 @@
clock-div = <2>;
};
- l3init_60m_fclk: l3init_60m_fclk {
+ l3init_60m_fclk: l3init_60m_fclk@104 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_usb_m2_ck>;
@@ -1501,7 +1501,7 @@
ti,dividers = <1>, <8>;
};
- clkout2_clk: clkout2_clk {
+ clkout2_clk: clkout2_clk@6b0 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&clkoutmux2_clk_mux>;
@@ -1509,7 +1509,7 @@
reg = <0x06b0>;
};
- l3init_960m_gfclk: l3init_960m_gfclk {
+ l3init_960m_gfclk: l3init_960m_gfclk@6c0 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll_usb_clkdcoldo>;
@@ -1517,7 +1517,7 @@
reg = <0x06c0>;
};
- dss_32khz_clk: dss_32khz_clk {
+ dss_32khz_clk: dss_32khz_clk@1120 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1525,7 +1525,7 @@
reg = <0x1120>;
};
- dss_48mhz_clk: dss_48mhz_clk {
+ dss_48mhz_clk: dss_48mhz_clk@1120 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&func_48m_fclk>;
@@ -1533,7 +1533,7 @@
reg = <0x1120>;
};
- dss_dss_clk: dss_dss_clk {
+ dss_dss_clk: dss_dss_clk@1120 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll_per_h12x2_ck>;
@@ -1542,7 +1542,7 @@
ti,set-rate-parent;
};
- dss_hdmi_clk: dss_hdmi_clk {
+ dss_hdmi_clk: dss_hdmi_clk@1120 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&hdmi_dpll_clk_mux>;
@@ -1550,7 +1550,7 @@
reg = <0x1120>;
};
- dss_video1_clk: dss_video1_clk {
+ dss_video1_clk: dss_video1_clk@1120 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&video1_dpll_clk_mux>;
@@ -1558,7 +1558,7 @@
reg = <0x1120>;
};
- dss_video2_clk: dss_video2_clk {
+ dss_video2_clk: dss_video2_clk@1120 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&video2_dpll_clk_mux>;
@@ -1566,7 +1566,7 @@
reg = <0x1120>;
};
- gpio2_dbclk: gpio2_dbclk {
+ gpio2_dbclk: gpio2_dbclk@1760 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1574,7 +1574,7 @@
reg = <0x1760>;
};
- gpio3_dbclk: gpio3_dbclk {
+ gpio3_dbclk: gpio3_dbclk@1768 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1582,7 +1582,7 @@
reg = <0x1768>;
};
- gpio4_dbclk: gpio4_dbclk {
+ gpio4_dbclk: gpio4_dbclk@1770 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1590,7 +1590,7 @@
reg = <0x1770>;
};
- gpio5_dbclk: gpio5_dbclk {
+ gpio5_dbclk: gpio5_dbclk@1778 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1598,7 +1598,7 @@
reg = <0x1778>;
};
- gpio6_dbclk: gpio6_dbclk {
+ gpio6_dbclk: gpio6_dbclk@1780 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1606,7 +1606,7 @@
reg = <0x1780>;
};
- gpio7_dbclk: gpio7_dbclk {
+ gpio7_dbclk: gpio7_dbclk@1810 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1614,7 +1614,7 @@
reg = <0x1810>;
};
- gpio8_dbclk: gpio8_dbclk {
+ gpio8_dbclk: gpio8_dbclk@1818 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1622,7 +1622,7 @@
reg = <0x1818>;
};
- mmc1_clk32k: mmc1_clk32k {
+ mmc1_clk32k: mmc1_clk32k@1328 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1630,7 +1630,7 @@
reg = <0x1328>;
};
- mmc2_clk32k: mmc2_clk32k {
+ mmc2_clk32k: mmc2_clk32k@1330 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1638,7 +1638,7 @@
reg = <0x1330>;
};
- mmc3_clk32k: mmc3_clk32k {
+ mmc3_clk32k: mmc3_clk32k@1820 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1646,7 +1646,7 @@
reg = <0x1820>;
};
- mmc4_clk32k: mmc4_clk32k {
+ mmc4_clk32k: mmc4_clk32k@1828 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1654,7 +1654,7 @@
reg = <0x1828>;
};
- sata_ref_clk: sata_ref_clk {
+ sata_ref_clk: sata_ref_clk@1388 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_clkin1>;
@@ -1662,7 +1662,7 @@
reg = <0x1388>;
};
- usb_otg_ss1_refclk960m: usb_otg_ss1_refclk960m {
+ usb_otg_ss1_refclk960m: usb_otg_ss1_refclk960m@13f0 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l3init_960m_gfclk>;
@@ -1670,7 +1670,7 @@
reg = <0x13f0>;
};
- usb_otg_ss2_refclk960m: usb_otg_ss2_refclk960m {
+ usb_otg_ss2_refclk960m: usb_otg_ss2_refclk960m@1340 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l3init_960m_gfclk>;
@@ -1678,7 +1678,7 @@
reg = <0x1340>;
};
- usb_phy1_always_on_clk32k: usb_phy1_always_on_clk32k {
+ usb_phy1_always_on_clk32k: usb_phy1_always_on_clk32k@640 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1686,7 +1686,7 @@
reg = <0x0640>;
};
- usb_phy2_always_on_clk32k: usb_phy2_always_on_clk32k {
+ usb_phy2_always_on_clk32k: usb_phy2_always_on_clk32k@688 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1694,7 +1694,7 @@
reg = <0x0688>;
};
- usb_phy3_always_on_clk32k: usb_phy3_always_on_clk32k {
+ usb_phy3_always_on_clk32k: usb_phy3_always_on_clk32k@698 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1702,7 +1702,7 @@
reg = <0x0698>;
};
- atl_dpll_clk_mux: atl_dpll_clk_mux {
+ atl_dpll_clk_mux: atl_dpll_clk_mux@c00 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_32k_ck>, <&video1_clkin_ck>, <&video2_clkin_ck>, <&hdmi_clkin_ck>;
@@ -1710,7 +1710,7 @@
reg = <0x0c00>;
};
- atl_gfclk_mux: atl_gfclk_mux {
+ atl_gfclk_mux: atl_gfclk_mux@c00 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&l3_iclk_div>, <&dpll_abe_m2_ck>, <&atl_dpll_clk_mux>;
@@ -1718,7 +1718,7 @@
reg = <0x0c00>;
};
- gmac_gmii_ref_clk_div: gmac_gmii_ref_clk_div {
+ gmac_gmii_ref_clk_div: gmac_gmii_ref_clk_div@13d0 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_gmac_m2_ck>;
@@ -1727,7 +1727,7 @@
ti,dividers = <2>;
};
- gmac_rft_clk_mux: gmac_rft_clk_mux {
+ gmac_rft_clk_mux: gmac_rft_clk_mux@13d0 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&video1_clkin_ck>, <&video2_clkin_ck>, <&dpll_abe_m2_ck>, <&hdmi_clkin_ck>, <&l3_iclk_div>;
@@ -1735,7 +1735,7 @@
reg = <0x13d0>;
};
- gpu_core_gclk_mux: gpu_core_gclk_mux {
+ gpu_core_gclk_mux: gpu_core_gclk_mux@1220 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&dpll_core_h14x2_ck>, <&dpll_per_h14x2_ck>, <&dpll_gpu_m2_ck>;
@@ -1743,7 +1743,7 @@
reg = <0x1220>;
};
- gpu_hyd_gclk_mux: gpu_hyd_gclk_mux {
+ gpu_hyd_gclk_mux: gpu_hyd_gclk_mux@1220 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&dpll_core_h14x2_ck>, <&dpll_per_h14x2_ck>, <&dpll_gpu_m2_ck>;
@@ -1751,7 +1751,7 @@
reg = <0x1220>;
};
- l3instr_ts_gclk_div: l3instr_ts_gclk_div {
+ l3instr_ts_gclk_div: l3instr_ts_gclk_div@e50 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&wkupaon_iclk_mux>;
@@ -1760,7 +1760,7 @@
ti,dividers = <8>, <16>, <32>;
};
- mcasp2_ahclkr_mux: mcasp2_ahclkr_mux {
+ mcasp2_ahclkr_mux: mcasp2_ahclkr_mux@1860 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_24m_fclk>, <&abe_sys_clk_div>, <&func_24m_clk>, <&atl_clkin3_ck>, <&atl_clkin2_ck>, <&atl_clkin1_ck>, <&atl_clkin0_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&mlb_clk>, <&mlbp_clk>;
@@ -1768,7 +1768,7 @@
reg = <0x1860>;
};
- mcasp2_ahclkx_mux: mcasp2_ahclkx_mux {
+ mcasp2_ahclkx_mux: mcasp2_ahclkx_mux@1860 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_24m_fclk>, <&abe_sys_clk_div>, <&func_24m_clk>, <&atl_clkin3_ck>, <&atl_clkin2_ck>, <&atl_clkin1_ck>, <&atl_clkin0_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&mlb_clk>, <&mlbp_clk>;
@@ -1776,7 +1776,7 @@
reg = <0x1860>;
};
- mcasp2_aux_gfclk_mux: mcasp2_aux_gfclk_mux {
+ mcasp2_aux_gfclk_mux: mcasp2_aux_gfclk_mux@1860 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&per_abe_x1_gfclk2_div>, <&video1_clk2_div>, <&video2_clk2_div>, <&hdmi_clk2_div>;
@@ -1784,7 +1784,7 @@
reg = <0x1860>;
};
- mcasp3_ahclkx_mux: mcasp3_ahclkx_mux {
+ mcasp3_ahclkx_mux: mcasp3_ahclkx_mux@1868 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_24m_fclk>, <&abe_sys_clk_div>, <&func_24m_clk>, <&atl_clkin3_ck>, <&atl_clkin2_ck>, <&atl_clkin1_ck>, <&atl_clkin0_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&mlb_clk>, <&mlbp_clk>;
@@ -1792,7 +1792,7 @@
reg = <0x1868>;
};
- mcasp3_aux_gfclk_mux: mcasp3_aux_gfclk_mux {
+ mcasp3_aux_gfclk_mux: mcasp3_aux_gfclk_mux@1868 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&per_abe_x1_gfclk2_div>, <&video1_clk2_div>, <&video2_clk2_div>, <&hdmi_clk2_div>;
@@ -1800,7 +1800,7 @@
reg = <0x1868>;
};
- mcasp4_ahclkx_mux: mcasp4_ahclkx_mux {
+ mcasp4_ahclkx_mux: mcasp4_ahclkx_mux@1898 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_24m_fclk>, <&abe_sys_clk_div>, <&func_24m_clk>, <&atl_clkin3_ck>, <&atl_clkin2_ck>, <&atl_clkin1_ck>, <&atl_clkin0_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&mlb_clk>, <&mlbp_clk>;
@@ -1808,7 +1808,7 @@
reg = <0x1898>;
};
- mcasp4_aux_gfclk_mux: mcasp4_aux_gfclk_mux {
+ mcasp4_aux_gfclk_mux: mcasp4_aux_gfclk_mux@1898 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&per_abe_x1_gfclk2_div>, <&video1_clk2_div>, <&video2_clk2_div>, <&hdmi_clk2_div>;
@@ -1816,7 +1816,7 @@
reg = <0x1898>;
};
- mcasp5_ahclkx_mux: mcasp5_ahclkx_mux {
+ mcasp5_ahclkx_mux: mcasp5_ahclkx_mux@1878 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_24m_fclk>, <&abe_sys_clk_div>, <&func_24m_clk>, <&atl_clkin3_ck>, <&atl_clkin2_ck>, <&atl_clkin1_ck>, <&atl_clkin0_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&mlb_clk>, <&mlbp_clk>;
@@ -1824,7 +1824,7 @@
reg = <0x1878>;
};
- mcasp5_aux_gfclk_mux: mcasp5_aux_gfclk_mux {
+ mcasp5_aux_gfclk_mux: mcasp5_aux_gfclk_mux@1878 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&per_abe_x1_gfclk2_div>, <&video1_clk2_div>, <&video2_clk2_div>, <&hdmi_clk2_div>;
@@ -1832,7 +1832,7 @@
reg = <0x1878>;
};
- mcasp6_ahclkx_mux: mcasp6_ahclkx_mux {
+ mcasp6_ahclkx_mux: mcasp6_ahclkx_mux@1904 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_24m_fclk>, <&abe_sys_clk_div>, <&func_24m_clk>, <&atl_clkin3_ck>, <&atl_clkin2_ck>, <&atl_clkin1_ck>, <&atl_clkin0_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&mlb_clk>, <&mlbp_clk>;
@@ -1840,7 +1840,7 @@
reg = <0x1904>;
};
- mcasp6_aux_gfclk_mux: mcasp6_aux_gfclk_mux {
+ mcasp6_aux_gfclk_mux: mcasp6_aux_gfclk_mux@1904 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&per_abe_x1_gfclk2_div>, <&video1_clk2_div>, <&video2_clk2_div>, <&hdmi_clk2_div>;
@@ -1848,7 +1848,7 @@
reg = <0x1904>;
};
- mcasp7_ahclkx_mux: mcasp7_ahclkx_mux {
+ mcasp7_ahclkx_mux: mcasp7_ahclkx_mux@1908 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_24m_fclk>, <&abe_sys_clk_div>, <&func_24m_clk>, <&atl_clkin3_ck>, <&atl_clkin2_ck>, <&atl_clkin1_ck>, <&atl_clkin0_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&mlb_clk>, <&mlbp_clk>;
@@ -1856,7 +1856,7 @@
reg = <0x1908>;
};
- mcasp7_aux_gfclk_mux: mcasp7_aux_gfclk_mux {
+ mcasp7_aux_gfclk_mux: mcasp7_aux_gfclk_mux@1908 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&per_abe_x1_gfclk2_div>, <&video1_clk2_div>, <&video2_clk2_div>, <&hdmi_clk2_div>;
@@ -1864,7 +1864,7 @@
reg = <0x1908>;
};
- mcasp8_ahclk_mux: mcasp8_ahclk_mux {
+ mcasp8_ahclkx_mux: mcasp8_ahclkx_mux@1890 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_24m_fclk>, <&abe_sys_clk_div>, <&func_24m_clk>, <&atl_clkin3_ck>, <&atl_clkin2_ck>, <&atl_clkin1_ck>, <&atl_clkin0_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&mlb_clk>, <&mlbp_clk>;
@@ -1872,7 +1872,7 @@
reg = <0x1890>;
};
- mcasp8_aux_gfclk_mux: mcasp8_aux_gfclk_mux {
+ mcasp8_aux_gfclk_mux: mcasp8_aux_gfclk_mux@1890 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&per_abe_x1_gfclk2_div>, <&video1_clk2_div>, <&video2_clk2_div>, <&hdmi_clk2_div>;
@@ -1880,7 +1880,7 @@
reg = <0x1890>;
};
- mmc1_fclk_mux: mmc1_fclk_mux {
+ mmc1_fclk_mux: mmc1_fclk_mux@1328 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&func_128m_clk>, <&dpll_per_m2x2_ck>;
@@ -1888,7 +1888,7 @@
reg = <0x1328>;
};
- mmc1_fclk_div: mmc1_fclk_div {
+ mmc1_fclk_div: mmc1_fclk_div@1328 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&mmc1_fclk_mux>;
@@ -1898,7 +1898,7 @@
ti,index-power-of-two;
};
- mmc2_fclk_mux: mmc2_fclk_mux {
+ mmc2_fclk_mux: mmc2_fclk_mux@1330 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&func_128m_clk>, <&dpll_per_m2x2_ck>;
@@ -1906,7 +1906,7 @@
reg = <0x1330>;
};
- mmc2_fclk_div: mmc2_fclk_div {
+ mmc2_fclk_div: mmc2_fclk_div@1330 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&mmc2_fclk_mux>;
@@ -1916,7 +1916,7 @@
ti,index-power-of-two;
};
- mmc3_gfclk_mux: mmc3_gfclk_mux {
+ mmc3_gfclk_mux: mmc3_gfclk_mux@1820 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&func_48m_fclk>, <&dpll_per_m2x2_ck>;
@@ -1924,7 +1924,7 @@
reg = <0x1820>;
};
- mmc3_gfclk_div: mmc3_gfclk_div {
+ mmc3_gfclk_div: mmc3_gfclk_div@1820 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&mmc3_gfclk_mux>;
@@ -1934,7 +1934,7 @@
ti,index-power-of-two;
};
- mmc4_gfclk_mux: mmc4_gfclk_mux {
+ mmc4_gfclk_mux: mmc4_gfclk_mux@1828 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&func_48m_fclk>, <&dpll_per_m2x2_ck>;
@@ -1942,7 +1942,7 @@
reg = <0x1828>;
};
- mmc4_gfclk_div: mmc4_gfclk_div {
+ mmc4_gfclk_div: mmc4_gfclk_div@1828 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&mmc4_gfclk_mux>;
@@ -1952,7 +1952,7 @@
ti,index-power-of-two;
};
- qspi_gfclk_mux: qspi_gfclk_mux {
+ qspi_gfclk_mux: qspi_gfclk_mux@1838 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&func_128m_clk>, <&dpll_per_h13x2_ck>;
@@ -1960,7 +1960,7 @@
reg = <0x1838>;
};
- qspi_gfclk_div: qspi_gfclk_div {
+ qspi_gfclk_div: qspi_gfclk_div@1838 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&qspi_gfclk_mux>;
@@ -1970,7 +1970,7 @@
ti,index-power-of-two;
};
- timer10_gfclk_mux: timer10_gfclk_mux {
+ timer10_gfclk_mux: timer10_gfclk_mux@1728 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&timer_sys_clk_div>, <&sys_32k_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&abe_giclk_div>, <&video1_div_clk>, <&video2_div_clk>, <&hdmi_div_clk>;
@@ -1978,7 +1978,7 @@
reg = <0x1728>;
};
- timer11_gfclk_mux: timer11_gfclk_mux {
+ timer11_gfclk_mux: timer11_gfclk_mux@1730 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&timer_sys_clk_div>, <&sys_32k_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&abe_giclk_div>, <&video1_div_clk>, <&video2_div_clk>, <&hdmi_div_clk>;
@@ -1986,7 +1986,7 @@
reg = <0x1730>;
};
- timer13_gfclk_mux: timer13_gfclk_mux {
+ timer13_gfclk_mux: timer13_gfclk_mux@17c8 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&timer_sys_clk_div>, <&sys_32k_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&abe_giclk_div>, <&video1_div_clk>, <&video2_div_clk>, <&hdmi_div_clk>;
@@ -1994,7 +1994,7 @@
reg = <0x17c8>;
};
- timer14_gfclk_mux: timer14_gfclk_mux {
+ timer14_gfclk_mux: timer14_gfclk_mux@17d0 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&timer_sys_clk_div>, <&sys_32k_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&abe_giclk_div>, <&video1_div_clk>, <&video2_div_clk>, <&hdmi_div_clk>;
@@ -2002,7 +2002,7 @@
reg = <0x17d0>;
};
- timer15_gfclk_mux: timer15_gfclk_mux {
+ timer15_gfclk_mux: timer15_gfclk_mux@17d8 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&timer_sys_clk_div>, <&sys_32k_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&abe_giclk_div>, <&video1_div_clk>, <&video2_div_clk>, <&hdmi_div_clk>;
@@ -2010,7 +2010,7 @@
reg = <0x17d8>;
};
- timer16_gfclk_mux: timer16_gfclk_mux {
+ timer16_gfclk_mux: timer16_gfclk_mux@1830 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&timer_sys_clk_div>, <&sys_32k_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&abe_giclk_div>, <&video1_div_clk>, <&video2_div_clk>, <&hdmi_div_clk>;
@@ -2018,7 +2018,7 @@
reg = <0x1830>;
};
- timer2_gfclk_mux: timer2_gfclk_mux {
+ timer2_gfclk_mux: timer2_gfclk_mux@1738 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&timer_sys_clk_div>, <&sys_32k_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&abe_giclk_div>, <&video1_div_clk>, <&video2_div_clk>, <&hdmi_div_clk>;
@@ -2026,7 +2026,7 @@
reg = <0x1738>;
};
- timer3_gfclk_mux: timer3_gfclk_mux {
+ timer3_gfclk_mux: timer3_gfclk_mux@1740 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&timer_sys_clk_div>, <&sys_32k_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&abe_giclk_div>, <&video1_div_clk>, <&video2_div_clk>, <&hdmi_div_clk>;
@@ -2034,7 +2034,7 @@
reg = <0x1740>;
};
- timer4_gfclk_mux: timer4_gfclk_mux {
+ timer4_gfclk_mux: timer4_gfclk_mux@1748 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&timer_sys_clk_div>, <&sys_32k_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&abe_giclk_div>, <&video1_div_clk>, <&video2_div_clk>, <&hdmi_div_clk>;
@@ -2042,7 +2042,7 @@
reg = <0x1748>;
};
- timer9_gfclk_mux: timer9_gfclk_mux {
+ timer9_gfclk_mux: timer9_gfclk_mux@1750 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&timer_sys_clk_div>, <&sys_32k_ck>, <&sys_clkin2>, <&ref_clkin0_ck>, <&ref_clkin1_ck>, <&ref_clkin2_ck>, <&ref_clkin3_ck>, <&abe_giclk_div>, <&video1_div_clk>, <&video2_div_clk>, <&hdmi_div_clk>;
@@ -2050,7 +2050,7 @@
reg = <0x1750>;
};
- uart1_gfclk_mux: uart1_gfclk_mux {
+ uart1_gfclk_mux: uart1_gfclk_mux@1840 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&func_48m_fclk>, <&dpll_per_m2x2_ck>;
@@ -2058,7 +2058,7 @@
reg = <0x1840>;
};
- uart2_gfclk_mux: uart2_gfclk_mux {
+ uart2_gfclk_mux: uart2_gfclk_mux@1848 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&func_48m_fclk>, <&dpll_per_m2x2_ck>;
@@ -2066,7 +2066,7 @@
reg = <0x1848>;
};
- uart3_gfclk_mux: uart3_gfclk_mux {
+ uart3_gfclk_mux: uart3_gfclk_mux@1850 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&func_48m_fclk>, <&dpll_per_m2x2_ck>;
@@ -2074,7 +2074,7 @@
reg = <0x1850>;
};
- uart4_gfclk_mux: uart4_gfclk_mux {
+ uart4_gfclk_mux: uart4_gfclk_mux@1858 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&func_48m_fclk>, <&dpll_per_m2x2_ck>;
@@ -2082,7 +2082,7 @@
reg = <0x1858>;
};
- uart5_gfclk_mux: uart5_gfclk_mux {
+ uart5_gfclk_mux: uart5_gfclk_mux@1870 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&func_48m_fclk>, <&dpll_per_m2x2_ck>;
@@ -2090,7 +2090,7 @@
reg = <0x1870>;
};
- uart7_gfclk_mux: uart7_gfclk_mux {
+ uart7_gfclk_mux: uart7_gfclk_mux@18d0 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&func_48m_fclk>, <&dpll_per_m2x2_ck>;
@@ -2098,7 +2098,7 @@
reg = <0x18d0>;
};
- uart8_gfclk_mux: uart8_gfclk_mux {
+ uart8_gfclk_mux: uart8_gfclk_mux@18e0 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&func_48m_fclk>, <&dpll_per_m2x2_ck>;
@@ -2106,7 +2106,7 @@
reg = <0x18e0>;
};
- uart9_gfclk_mux: uart9_gfclk_mux {
+ uart9_gfclk_mux: uart9_gfclk_mux@18e8 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&func_48m_fclk>, <&dpll_per_m2x2_ck>;
@@ -2114,7 +2114,7 @@
reg = <0x18e8>;
};
- vip1_gclk_mux: vip1_gclk_mux {
+ vip1_gclk_mux: vip1_gclk_mux@1020 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&l3_iclk_div>, <&dpll_core_h23x2_ck>;
@@ -2122,7 +2122,7 @@
reg = <0x1020>;
};
- vip2_gclk_mux: vip2_gclk_mux {
+ vip2_gclk_mux: vip2_gclk_mux@1028 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&l3_iclk_div>, <&dpll_core_h23x2_ck>;
@@ -2130,7 +2130,7 @@
reg = <0x1028>;
};
- vip3_gclk_mux: vip3_gclk_mux {
+ vip3_gclk_mux: vip3_gclk_mux@1030 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&l3_iclk_div>, <&dpll_core_h23x2_ck>;
@@ -2147,7 +2147,7 @@
};
&scm_conf_clocks {
- dss_deshdcp_clk: dss_deshdcp_clk {
+ dss_deshdcp_clk: dss_deshdcp_clk@558 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l3_iclk_div>;
@@ -2155,7 +2155,7 @@
reg = <0x558>;
};
- ehrpwm0_tbclk: ehrpwm0_tbclk {
+ ehrpwm0_tbclk: ehrpwm0_tbclk@558 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l4_root_clk_div>;
@@ -2163,7 +2163,7 @@
reg = <0x0558>;
};
- ehrpwm1_tbclk: ehrpwm1_tbclk {
+ ehrpwm1_tbclk: ehrpwm1_tbclk@558 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l4_root_clk_div>;
@@ -2171,7 +2171,7 @@
reg = <0x0558>;
};
- ehrpwm2_tbclk: ehrpwm2_tbclk {
+ ehrpwm2_tbclk: ehrpwm2_tbclk@558 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l4_root_clk_div>;
diff --git a/arch/arm/boot/dts/emev2-kzm9d.dts b/arch/arm/boot/dts/emev2-kzm9d.dts
index 8c24975e8f9d..a35b851e1cd7 100644
--- a/arch/arm/boot/dts/emev2-kzm9d.dts
+++ b/arch/arm/boot/dts/emev2-kzm9d.dts
@@ -105,8 +105,8 @@
&pfc {
uart1_pins: serial@e1030000 {
- renesas,groups = "uart1_ctrl", "uart1_data";
- renesas,function = "uart1";
+ groups = "uart1_ctrl", "uart1_data";
+ function = "uart1";
};
};
diff --git a/arch/arm/boot/dts/exynos3250-artik5-eval.dts b/arch/arm/boot/dts/exynos3250-artik5-eval.dts
new file mode 100644
index 000000000000..be4d6aa379f3
--- /dev/null
+++ b/arch/arm/boot/dts/exynos3250-artik5-eval.dts
@@ -0,0 +1,43 @@
+/*
+ * Samsung's Exynos3250 based ARTIK5 evaluation board device tree source
+ *
+ * Copyright (c) 2016 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com
+ *
+ * Device tree source file for Samsung's ARTIK5 evaluation board
+ * which is based on Samsung Exynos3250 SoC.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+/dts-v1/;
+#include "exynos3250-artik5.dtsi"
+
+/ {
+ model = "Samsung ARTIK5 evaluation board";
+ compatible = "samsung,artik5-eval", "samsung,artik5",
+ "samsung,exynos3250", "samsung,exynos3";
+};
+
+&mshc_2 {
+ num-slots = <1>;
+ cap-sd-highspeed;
+ disable-wp;
+ vqmmc-supply = <&ldo3_reg>;
+ card-detect-delay = <200>;
+ clock-frequency = <100000000>;
+ clock-freq-min-max = <400000 100000000>;
+ samsung,dw-mshc-ciu-div = <1>;
+ samsung,dw-mshc-sdr-timing = <0 1>;
+ samsung,dw-mshc-ddr-timing = <1 2>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&sd2_cmd &sd2_clk &sd2_cd &sd2_bus1 &sd2_bus4>;
+ bus-width = <4>;
+ status = "okay";
+};
+
+&serial_2 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/exynos3250-artik5.dtsi b/arch/arm/boot/dts/exynos3250-artik5.dtsi
new file mode 100644
index 000000000000..130e946f1414
--- /dev/null
+++ b/arch/arm/boot/dts/exynos3250-artik5.dtsi
@@ -0,0 +1,334 @@
+/*
+ * Samsung's Exynos3250 based ARTIK5 module device tree source
+ *
+ * Copyright (c) 2016 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com
+ *
+ * Device tree source file for Samsung's ARTIK5 module which is based on
+ * Samsung Exynos3250 SoC.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include "exynos3250.dtsi"
+#include <dt-bindings/clock/samsung,s2mps11.h>
+#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/interrupt-controller/irq.h>
+
+/ {
+ compatible = "samsung,artik5", "samsung,exynos3250", "samsung,exynos3";
+
+ chosen {
+ stdout-path = &serial_2;
+ };
+
+ memory {
+ reg = <0x40000000 0x1ff00000>;
+ };
+
+ firmware@0205f000 {
+ compatible = "samsung,secure-firmware";
+ reg = <0x0205f000 0x1000>;
+ };
+
+ thermal-zones {
+ cpu_thermal: cpu-thermal {
+ cooling-maps {
+ map0 {
+ /* Corresponds to 500MHz */
+ cooling-device = <&cpu0 5 5>;
+ };
+ map1 {
+ /* Corresponds to 200MHz */
+ cooling-device = <&cpu0 8 8>;
+ };
+ };
+ };
+ };
+};
+
+&adc {
+ vdd-supply = <&ldo7_reg>;
+ assigned-clocks = <&cmu CLK_SCLK_TSADC>;
+ assigned-clock-rates = <6000000>;
+};
+
+&cpu0 {
+ cpu0-supply = <&buck2_reg>;
+};
+
+&i2c_0 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ samsung,i2c-sda-delay = <100>;
+ samsung,i2c-slave-addr = <0x10>;
+ samsung,i2c-max-bus-freq = <100000>;
+ status = "okay";
+
+ s2mps14_pmic@66 {
+ compatible = "samsung,s2mps14-pmic";
+ interrupt-parent = <&gpx3>;
+ interrupts = <5 IRQ_TYPE_NONE>;
+ reg = <0x66>;
+
+ s2mps14_osc: clocks {
+ compatible = "samsung,s2mps14-clk";
+ #clock-cells = <1>;
+ clock-output-names = "s2mps14_ap", "unused",
+ "s2mps14_bt";
+ };
+
+ regulators {
+ ldo1_reg: LDO1 {
+ /* VDD_ALIVE15x */
+ regulator-name = "VLDO1_1.0V";
+ regulator-min-microvolt = <1000000>;
+ regulator-max-microvolt = <1000000>;
+ regulator-always-on;
+ };
+
+ ldo2_reg: LDO2 {
+ /* VDDQM176 ~ VDDQM185 */
+ regulator-name = "VLDO2_1.2V";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1200000>;
+ regulator-always-on;
+ };
+
+ ldo3_reg: LDO3 {
+ /*
+ * VDD1_E106 ~ VDD1_E111
+ * DVDD_RTC_AP, DVDD_MMC2_AP
+ */
+ regulator-name = "VLDO3_1.8V";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+ };
+
+ ldo4_reg: LDO4 {
+ /* AVDD_PLL1120 ~ AVDD_PLL11201 */
+ regulator-name = "VLDO4_1.8V";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+ };
+
+ ldo5_reg: LDO5 {
+ /* VDDI_PLL_ISO141 ~ VDDI_PLL_ISO142 */
+ regulator-name = "VLDO5_1.0V";
+ regulator-min-microvolt = <1000000>;
+ regulator-max-microvolt = <1000000>;
+ regulator-always-on;
+ };
+
+ ldo6_reg: LDO6 {
+ /* VDD_USB, VDD10_HSIC */
+ regulator-name = "VLDO6_1.0V";
+ regulator-min-microvolt = <1000000>;
+ regulator-max-microvolt = <1000000>;
+ regulator-always-on;
+ };
+
+ ldo7_reg: LDO7 {
+ /*
+ * VDD18P, AVDD18_TS, AVDD18_HSIC, AVDD_PLL2,
+ * AVDD_ADC, AVDD_ABB_0, M4S_VDD18
+ */
+ regulator-name = "VLDO7_1.8V";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+ };
+
+ ldo8_reg: LDO8 {
+ /* AVDD33_UOTG */
+ regulator-name = "VLDO8_3.0V";
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3000000>;
+ regulator-always-on;
+ };
+
+ ldo9_reg: LDO9 {
+ /* VDDQ_E86 ~ VDDQ_E105*/
+ regulator-name = "VLDO9_1.2V";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1200000>;
+ regulator-always-on;
+ };
+
+ ldo10_reg: LDO10 {
+ regulator-name = "VLDO10_1.0V";
+ regulator-min-microvolt = <1000000>;
+ regulator-max-microvolt = <1000000>;
+ };
+
+ ldo11_reg: LDO11 {
+ /* VDD74 ~ VDD75 */
+ regulator-name = "VLDO11_1.8V";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ samsung,ext-control-gpios = <&gpk0 2 GPIO_ACTIVE_HIGH>;
+ };
+
+ ldo12_reg: LDO12 {
+ /* VDD72 ~ VDD73 */
+ regulator-name = "VLDO12_2.8V";
+ regulator-min-microvolt = <2800000>;
+ regulator-max-microvolt = <2800000>;
+ samsung,ext-control-gpios = <&gpk0 2 GPIO_ACTIVE_HIGH>;
+ };
+
+ ldo13_reg: LDO13 {
+ regulator-name = "VLDO13_2.8V";
+ regulator-min-microvolt = <2800000>;
+ regulator-max-microvolt = <2800000>;
+ };
+
+ ldo14_reg: LDO14 {
+ regulator-name = "VLDO14_2.7V";
+ regulator-min-microvolt = <2700000>;
+ regulator-max-microvolt = <2700000>;
+ };
+
+ ldo15_reg: LDO15 {
+ regulator-name = "VLDO_3.3V";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ };
+
+ ldo16_reg: LDO16 {
+ regulator-name = "VLDO16_3.3V";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ };
+
+ ldo17_reg: LDO17 {
+ regulator-name = "VLDO17_3.0V";
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3000000>;
+ };
+
+ ldo18_reg: LDO18 {
+ /* DVDD_MMC2_AP */
+ regulator-name = "VLDO18_2.8V";
+ regulator-min-microvolt = <2800000>;
+ regulator-max-microvolt = <2800000>;
+ };
+
+ ldo19_reg: LDO19 {
+ regulator-name = "VLDO19_1.8V";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ };
+
+ ldo20_reg: LDO20 {
+ regulator-name = "VLDO20_1.8V";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ };
+
+ ldo21_reg: LDO21 {
+ regulator-name = "VLDO21_1.25V";
+ regulator-min-microvolt = <1250000>;
+ regulator-max-microvolt = <1250000>;
+ };
+
+ ldo22_reg: LDO22 {
+ regulator-name = "VLDO22_1.2V";
+ regulator-min-microvolt = <1200000>;
+ regulator-max-microvolt = <1200000>;
+ };
+
+ ldo23_reg: LDO23 {
+ /* Xi2c3_SDA/SCL, Xi2c7_SDA/SCL, WLAN_SDIO */
+ regulator-name = "VLDO23_1.8V";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ };
+
+ ldo24_reg: LDO24 {
+ regulator-name = "VLDO24_3.0V";
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3000000>;
+ };
+
+ ldo25_reg: LDO25 {
+ regulator-name = "VLDO25_3.0V";
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3000000>;
+ };
+
+ buck1_reg: BUCK1 {
+ /* VDD_MIF */
+ regulator-name = "VBUCK1_1.0V";
+ regulator-min-microvolt = <800000>;
+ regulator-max-microvolt = <1000000>;
+ regulator-always-on;
+ };
+
+ buck2_reg: BUCK2 {
+ /* VDD_CPU */
+ regulator-name = "VBUCK2_1.2V";
+ regulator-min-microvolt = <850000>;
+ regulator-max-microvolt = <1200000>;
+ regulator-always-on;
+ };
+
+ buck3_reg: BUCK3 {
+ /* VDD_G3D */
+ regulator-name = "VBUCK3_1.0V";
+ regulator-min-microvolt = <850000>;
+ regulator-max-microvolt = <1000000>;
+ regulator-always-on;
+ };
+
+ buck4_reg: BUCK4 {
+ regulator-name = "VBUCK4_1.95V";
+ regulator-min-microvolt = <1950000>;
+ regulator-max-microvolt = <1950000>;
+ regulator-always-on;
+ };
+
+ buck5_reg: BUCK5 {
+ regulator-name = "VBUCK5_1.35V";
+ regulator-min-microvolt = <1350000>;
+ regulator-max-microvolt = <1350000>;
+ regulator-always-on;
+ };
+ };
+ };
+};
+
+&mshc_0 {
+ num-slots = <1>;
+ non-removable;
+ cap-mmc-highspeed;
+ card-detect-delay = <200>;
+ vmmc-supply = <&ldo12_reg>;
+ clock-frequency = <100000000>;
+ clock-freq-min-max = <400000 100000000>;
+ samsung,dw-mshc-ciu-div = <1>;
+ samsung,dw-mshc-sdr-timing = <0 1>;
+ samsung,dw-mshc-ddr-timing = <1 2>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&sd0_cmd &sd0_bus1 &sd0_bus4 &sd0_bus8>;
+ bus-width = <8>;
+ status = "okay";
+};
+
+&rtc {
+ clocks = <&cmu CLK_RTC>, <&s2mps14_osc S2MPS11_CLK_AP>;
+ clock-names = "rtc", "rtc_src";
+ status = "okay";
+};
+
+&tmu {
+ status = "okay";
+};
+
+&xusbxti {
+ clock-frequency = <24000000>;
+};
diff --git a/arch/arm/boot/dts/exynos3250-monk.dts b/arch/arm/boot/dts/exynos3250-monk.dts
index 9e2840b59ae8..8c8906266310 100644
--- a/arch/arm/boot/dts/exynos3250-monk.dts
+++ b/arch/arm/boot/dts/exynos3250-monk.dts
@@ -14,6 +14,7 @@
/dts-v1/;
#include "exynos3250.dtsi"
+#include "exynos4412-ppmu-common.dtsi"
#include <dt-bindings/input/input.h>
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/clock/samsung,s2mps11.h>
@@ -156,6 +157,12 @@
};
};
+&bus_dmc {
+ devfreq-events = <&ppmu_dmc0_3>, <&ppmu_dmc1_3>;
+ vdd-supply = <&buck1_reg>;
+ status = "okay";
+};
+
&cpu0 {
cpu0-supply = <&buck2_reg>;
};
@@ -458,46 +465,6 @@
status = "okay";
};
-&ppmu_dmc0 {
- status = "okay";
-
- events {
- ppmu_dmc0_3: ppmu-event3-dmc0 {
- event-name = "ppmu-event3-dmc0";
- };
- };
-};
-
-&ppmu_dmc1 {
- status = "okay";
-
- events {
- ppmu_dmc1_3: ppmu-event3-dmc1 {
- event-name = "ppmu-event3-dmc1";
- };
- };
-};
-
-&ppmu_leftbus {
- status = "okay";
-
- events {
- ppmu_leftbus_3: ppmu-event3-leftbus {
- event-name = "ppmu-event3-leftbus";
- };
- };
-};
-
-&ppmu_rightbus {
- status = "okay";
-
- events {
- ppmu_rightbus_3: ppmu-event3-rightbus {
- event-name = "ppmu-event3-rightbus";
- };
- };
-};
-
&xusbxti {
clock-frequency = <24000000>;
};
@@ -558,7 +525,17 @@
&pinctrl_1 {
pinctrl-names = "default";
- pinctrl-0 = <&sleep1>;
+ pinctrl-0 = <&initial1 &sleep1>;
+
+ initial1: initial-state {
+ PIN_IN(gpk2-0, DOWN, LV1);
+ PIN_IN(gpk2-1, DOWN, LV1);
+ PIN_IN(gpk2-2, DOWN, LV1);
+ PIN_IN(gpk2-3, DOWN, LV1);
+ PIN_IN(gpk2-4, DOWN, LV1);
+ PIN_IN(gpk2-5, DOWN, LV1);
+ PIN_IN(gpk2-6, DOWN, LV1);
+ };
sleep1: sleep-state {
PIN_SLP(gpe0-0, PREV, NONE);
diff --git a/arch/arm/boot/dts/exynos3250-pinctrl.dtsi b/arch/arm/boot/dts/exynos3250-pinctrl.dtsi
index 5ab81c39e2c9..40ea7de44933 100644
--- a/arch/arm/boot/dts/exynos3250-pinctrl.dtsi
+++ b/arch/arm/boot/dts/exynos3250-pinctrl.dtsi
@@ -16,11 +16,49 @@
#define PIN_PULL_DOWN 1
#define PIN_PULL_UP 3
+#define PIN_DRV_LV1 0
+#define PIN_DRV_LV2 2
+#define PIN_DRV_LV3 1
+#define PIN_DRV_LV4 3
+
#define PIN_PDN_OUT0 0
#define PIN_PDN_OUT1 1
#define PIN_PDN_INPUT 2
#define PIN_PDN_PREV 3
+#define PIN_IN(_pin, _pull, _drv) \
+ _pin { \
+ samsung,pins = #_pin; \
+ samsung,pin-function = <0>; \
+ samsung,pin-pud = <PIN_PULL_ ##_pull>; \
+ samsung,pin-drv = <PIN_DRV_ ##_drv>; \
+ }
+
+#define PIN_OUT(_pin, _drv) \
+ _pin { \
+ samsung,pins = #_pin; \
+ samsung,pin-function = <1>; \
+ samsung,pin-pud = <0>; \
+ samsung,pin-drv = <PIN_DRV_ ##_drv>; \
+ }
+
+#define PIN_OUT_SET(_pin, _val, _drv) \
+ _pin { \
+ samsung,pins = #_pin; \
+ samsung,pin-function = <1>; \
+ samsung,pin-pud = <0>; \
+ samsung,pin-drv = <PIN_DRV_ ##_drv>; \
+ samsung,pin-val = <_val>; \
+ }
+
+#define PIN_CFG(_pin, _sel, _pull, _drv) \
+ _pin { \
+ samsung,pins = #_pin; \
+ samsung,pin-function = <_sel>; \
+ samsung,pin-pud = <PIN_PULL_ ##_pull>; \
+ samsung,pin-drv = <PIN_DRV_ ##_drv>; \
+ }
+
#define PIN_SLP(_pin, _mode, _pull) \
_pin { \
samsung,pins = #_pin; \
@@ -120,6 +158,13 @@
samsung,pin-drv = <0>;
};
+ uart2_data: uart2-data {
+ samsung,pins = "gpa1-0", "gpa1-1";
+ samsung,pin-function = <2>;
+ samsung,pin-pud = <0>;
+ samsung,pin-drv = <0>;
+ };
+
i2c3_bus: i2c3-bus {
samsung,pins = "gpa1-2", "gpa1-3";
samsung,pin-function = <3>;
@@ -445,6 +490,41 @@
samsung,pin-drv = <3>;
};
+ sd2_clk: sd2-clk {
+ samsung,pins = "gpk2-0";
+ samsung,pin-function = <2>;
+ samsung,pin-pud = <0>;
+ samsung,pin-drv = <3>;
+ };
+
+ sd2_cmd: sd2-cmd {
+ samsung,pins = "gpk2-1";
+ samsung,pin-function = <2>;
+ samsung,pin-pud = <0>;
+ samsung,pin-drv = <3>;
+ };
+
+ sd2_cd: sd2-cd {
+ samsung,pins = "gpk2-2";
+ samsung,pin-function = <2>;
+ samsung,pin-pud = <3>;
+ samsung,pin-drv = <3>;
+ };
+
+ sd2_bus1: sd2-bus-width1 {
+ samsung,pins = "gpk2-3";
+ samsung,pin-function = <2>;
+ samsung,pin-pud = <3>;
+ samsung,pin-drv = <3>;
+ };
+
+ sd2_bus4: sd2-bus-width4 {
+ samsung,pins = "gpk2-4", "gpk2-5", "gpk2-6";
+ samsung,pin-function = <2>;
+ samsung,pin-pud = <3>;
+ samsung,pin-drv = <3>;
+ };
+
cam_port_b_io: cam-port-b-io {
samsung,pins = "gpm0-0", "gpm0-1", "gpm0-2", "gpm0-3",
"gpm0-4", "gpm0-5", "gpm0-6", "gpm0-7",
diff --git a/arch/arm/boot/dts/exynos3250-rinato.dts b/arch/arm/boot/dts/exynos3250-rinato.dts
index 1f102f3a1ab1..e422819591dc 100644
--- a/arch/arm/boot/dts/exynos3250-rinato.dts
+++ b/arch/arm/boot/dts/exynos3250-rinato.dts
@@ -14,6 +14,7 @@
/dts-v1/;
#include "exynos3250.dtsi"
+#include "exynos4412-ppmu-common.dtsi"
#include <dt-bindings/input/input.h>
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/clock/samsung,s2mps11.h>
@@ -147,6 +148,53 @@
};
};
+&bus_dmc {
+ devfreq-events = <&ppmu_dmc0_3>, <&ppmu_dmc1_3>;
+ vdd-supply = <&buck1_reg>;
+ status = "okay";
+};
+
+&bus_leftbus {
+ devfreq-events = <&ppmu_leftbus_3>, <&ppmu_rightbus_3>;
+ vdd-supply = <&buck3_reg>;
+ status = "okay";
+};
+
+&bus_rightbus {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+};
+
+&bus_lcd0 {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+};
+
+&bus_fsys {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+};
+
+&bus_mcuisp {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+};
+
+&bus_isp {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+};
+
+&bus_peril {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+};
+
+&bus_mfc {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+};
+
&cpu0 {
cpu0-supply = <&buck2_reg>;
};
@@ -635,53 +683,27 @@
status = "okay";
};
-&ppmu_dmc0 {
- status = "okay";
-
- events {
- ppmu_dmc0_3: ppmu-event3-dmc0 {
- event-name = "ppmu-event3-dmc0";
- };
- };
-};
-
-&ppmu_dmc1 {
- status = "okay";
-
- events {
- ppmu_dmc1_3: ppmu-event3-dmc1 {
- event-name = "ppmu-event3-dmc1";
- };
- };
-};
-
-&ppmu_leftbus {
- status = "okay";
-
- events {
- ppmu_leftbus_3: ppmu-event3-leftbus {
- event-name = "ppmu-event3-leftbus";
- };
- };
-};
-
-&ppmu_rightbus {
- status = "okay";
-
- events {
- ppmu_rightbus_3: ppmu-event3-rightbus {
- event-name = "ppmu-event3-rightbus";
- };
- };
-};
-
&xusbxti {
clock-frequency = <24000000>;
};
&pinctrl_0 {
pinctrl-names = "default";
- pinctrl-0 = <&sleep0>;
+ pinctrl-0 = <&initial0 &sleep0>;
+
+ initial0: initial-state {
+ PIN_IN(gpa1-4, DOWN, LV1);
+ PIN_IN(gpa1-5, DOWN, LV1);
+
+ PIN_IN(gpc0-0, DOWN, LV1);
+ PIN_IN(gpc0-1, DOWN, LV1);
+ PIN_IN(gpc0-2, DOWN, LV1);
+ PIN_IN(gpc0-3, DOWN, LV1);
+ PIN_IN(gpc0-4, DOWN, LV1);
+
+ PIN_IN(gpd0-0, DOWN, LV1);
+ PIN_IN(gpd0-1, DOWN, LV1);
+ };
sleep0: sleep-state {
PIN_SLP(gpa0-0, INPUT, DOWN);
@@ -735,7 +757,60 @@
&pinctrl_1 {
pinctrl-names = "default";
- pinctrl-0 = <&sleep1>;
+ pinctrl-0 = <&initial1 &sleep1>;
+
+ initial1: initial-state {
+ PIN_IN(gpe0-6, DOWN, LV1);
+ PIN_IN(gpe0-7, DOWN, LV1);
+
+ PIN_IN(gpe1-0, DOWN, LV1);
+ PIN_IN(gpe1-3, DOWN, LV1);
+ PIN_IN(gpe1-4, DOWN, LV1);
+ PIN_IN(gpe1-5, DOWN, LV1);
+ PIN_IN(gpe1-6, DOWN, LV1);
+
+ PIN_IN(gpk2-0, DOWN, LV1);
+ PIN_IN(gpk2-1, DOWN, LV1);
+ PIN_IN(gpk2-2, DOWN, LV1);
+ PIN_IN(gpk2-3, DOWN, LV1);
+ PIN_IN(gpk2-4, DOWN, LV1);
+ PIN_IN(gpk2-5, DOWN, LV1);
+ PIN_IN(gpk2-6, DOWN, LV1);
+
+ PIN_IN(gpm0-0, DOWN, LV1);
+ PIN_IN(gpm0-1, DOWN, LV1);
+ PIN_IN(gpm0-2, DOWN, LV1);
+ PIN_IN(gpm0-3, DOWN, LV1);
+ PIN_IN(gpm0-4, DOWN, LV1);
+ PIN_IN(gpm0-5, DOWN, LV1);
+ PIN_IN(gpm0-6, DOWN, LV1);
+ PIN_IN(gpm0-7, DOWN, LV1);
+
+ PIN_IN(gpm1-0, DOWN, LV1);
+ PIN_IN(gpm1-1, DOWN, LV1);
+ PIN_IN(gpm1-2, DOWN, LV1);
+ PIN_IN(gpm1-3, DOWN, LV1);
+ PIN_IN(gpm1-4, DOWN, LV1);
+ PIN_IN(gpm1-5, DOWN, LV1);
+ PIN_IN(gpm1-6, DOWN, LV1);
+
+ PIN_IN(gpm2-0, DOWN, LV1);
+ PIN_IN(gpm2-1, DOWN, LV1);
+
+ PIN_IN(gpm3-0, DOWN, LV1);
+ PIN_IN(gpm3-1, DOWN, LV1);
+ PIN_IN(gpm3-2, DOWN, LV1);
+ PIN_IN(gpm3-3, DOWN, LV1);
+ PIN_IN(gpm3-4, DOWN, LV1);
+
+ PIN_IN(gpm4-1, DOWN, LV1);
+ PIN_IN(gpm4-2, DOWN, LV1);
+ PIN_IN(gpm4-3, DOWN, LV1);
+ PIN_IN(gpm4-4, DOWN, LV1);
+ PIN_IN(gpm4-5, DOWN, LV1);
+ PIN_IN(gpm4-6, DOWN, LV1);
+ PIN_IN(gpm4-7, DOWN, LV1);
+ };
sleep1: sleep-state {
PIN_SLP(gpe0-0, PREV, NONE);
diff --git a/arch/arm/boot/dts/exynos3250.dtsi b/arch/arm/boot/dts/exynos3250.dtsi
index 137f9015d4e8..62f3dcd9e046 100644
--- a/arch/arm/boot/dts/exynos3250.dtsi
+++ b/arch/arm/boot/dts/exynos3250.dtsi
@@ -31,6 +31,7 @@
pinctrl1 = &pinctrl_1;
mshc0 = &mshc_0;
mshc1 = &mshc_1;
+ mshc2 = &mshc_2;
spi0 = &spi_0;
spi1 = &spi_1;
i2c0 = &i2c_0;
@@ -43,6 +44,7 @@
i2c7 = &i2c_7;
serial0 = &serial_0;
serial1 = &serial_1;
+ serial2 = &serial_2;
};
cpus {
@@ -153,7 +155,7 @@
interrupt-parent = <&gic>;
};
- mipi_phy: video-phy@10020710 {
+ mipi_phy: video-phy {
compatible = "samsung,s5pv210-mipi-video-phy";
#phy-cells = <1>;
syscon = <&pmu_system_controller>;
@@ -357,6 +359,18 @@
status = "disabled";
};
+ mshc_2: mshc@12530000 {
+ compatible = "samsung,exynos5250-dw-mshc";
+ reg = <0x12530000 0x1000>;
+ interrupts = <0 144 0>;
+ clocks = <&cmu CLK_SDMMC2>, <&cmu CLK_SCLK_MMC2>;
+ clock-names = "biu", "ciu";
+ fifo-depth = <0x80>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "disabled";
+ };
+
exynos_usbphy: exynos-usbphy@125B0000 {
compatible = "samsung,exynos3250-usb2-phy";
reg = <0x125B0000 0x100>;
@@ -452,6 +466,17 @@
status = "disabled";
};
+ serial_2: serial@13820000 {
+ compatible = "samsung,exynos4210-uart";
+ reg = <0x13820000 0x100>;
+ interrupts = <0 111 0>;
+ clocks = <&cmu CLK_UART2>, <&cmu CLK_SCLK_UART2>;
+ clock-names = "uart", "clk_uart_baud0";
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart2_data>;
+ status = "disabled";
+ };
+
i2c_0: i2c@13860000 {
#address-cells = <1>;
#size-cells = <0>;
@@ -688,6 +713,187 @@
clock-names = "ppmu";
status = "disabled";
};
+
+ bus_dmc: bus_dmc {
+ compatible = "samsung,exynos-bus";
+ clocks = <&cmu_dmc CLK_DIV_DMC>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_dmc_opp_table>;
+ status = "disabled";
+ };
+
+ bus_dmc_opp_table: opp_table1 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@50000000 {
+ opp-hz = /bits/ 64 <50000000>;
+ opp-microvolt = <800000>;
+ };
+ opp@100000000 {
+ opp-hz = /bits/ 64 <100000000>;
+ opp-microvolt = <800000>;
+ };
+ opp@134000000 {
+ opp-hz = /bits/ 64 <134000000>;
+ opp-microvolt = <800000>;
+ };
+ opp@200000000 {
+ opp-hz = /bits/ 64 <200000000>;
+ opp-microvolt = <825000>;
+ };
+ opp@400000000 {
+ opp-hz = /bits/ 64 <400000000>;
+ opp-microvolt = <875000>;
+ };
+ };
+
+ bus_leftbus: bus_leftbus {
+ compatible = "samsung,exynos-bus";
+ clocks = <&cmu CLK_DIV_GDL>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_leftbus_opp_table>;
+ status = "disabled";
+ };
+
+ bus_rightbus: bus_rightbus {
+ compatible = "samsung,exynos-bus";
+ clocks = <&cmu CLK_DIV_GDR>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_leftbus_opp_table>;
+ status = "disabled";
+ };
+
+ bus_lcd0: bus_lcd0 {
+ compatible = "samsung,exynos-bus";
+ clocks = <&cmu CLK_DIV_ACLK_160>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_leftbus_opp_table>;
+ status = "disabled";
+ };
+
+ bus_fsys: bus_fsys {
+ compatible = "samsung,exynos-bus";
+ clocks = <&cmu CLK_DIV_ACLK_200>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_leftbus_opp_table>;
+ status = "disabled";
+ };
+
+ bus_mcuisp: bus_mcuisp {
+ compatible = "samsung,exynos-bus";
+ clocks = <&cmu CLK_DIV_ACLK_400_MCUISP>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_mcuisp_opp_table>;
+ status = "disabled";
+ };
+
+ bus_isp: bus_isp {
+ compatible = "samsung,exynos-bus";
+ clocks = <&cmu CLK_DIV_ACLK_266>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_isp_opp_table>;
+ status = "disabled";
+ };
+
+ bus_peril: bus_peril {
+ compatible = "samsung,exynos-bus";
+ clocks = <&cmu CLK_DIV_ACLK_100>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_peril_opp_table>;
+ status = "disabled";
+ };
+
+ bus_mfc: bus_mfc {
+ compatible = "samsung,exynos-bus";
+ clocks = <&cmu CLK_SCLK_MFC>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_leftbus_opp_table>;
+ status = "disabled";
+ };
+
+ bus_leftbus_opp_table: opp_table2 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@50000000 {
+ opp-hz = /bits/ 64 <50000000>;
+ opp-microvolt = <900000>;
+ };
+ opp@80000000 {
+ opp-hz = /bits/ 64 <80000000>;
+ opp-microvolt = <900000>;
+ };
+ opp@100000000 {
+ opp-hz = /bits/ 64 <100000000>;
+ opp-microvolt = <1000000>;
+ };
+ opp@134000000 {
+ opp-hz = /bits/ 64 <134000000>;
+ opp-microvolt = <1000000>;
+ };
+ opp@200000000 {
+ opp-hz = /bits/ 64 <200000000>;
+ opp-microvolt = <1000000>;
+ };
+ };
+
+ bus_mcuisp_opp_table: opp_table3 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@50000000 {
+ opp-hz = /bits/ 64 <50000000>;
+ };
+ opp@80000000 {
+ opp-hz = /bits/ 64 <80000000>;
+ };
+ opp@100000000 {
+ opp-hz = /bits/ 64 <100000000>;
+ };
+ opp@200000000 {
+ opp-hz = /bits/ 64 <200000000>;
+ };
+ opp@400000000 {
+ opp-hz = /bits/ 64 <400000000>;
+ };
+ };
+
+ bus_isp_opp_table: opp_table4 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@50000000 {
+ opp-hz = /bits/ 64 <50000000>;
+ };
+ opp@80000000 {
+ opp-hz = /bits/ 64 <80000000>;
+ };
+ opp@100000000 {
+ opp-hz = /bits/ 64 <100000000>;
+ };
+ opp@200000000 {
+ opp-hz = /bits/ 64 <200000000>;
+ };
+ opp@300000000 {
+ opp-hz = /bits/ 64 <300000000>;
+ };
+ };
+
+ bus_peril_opp_table: opp_table5 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@50000000 {
+ opp-hz = /bits/ 64 <50000000>;
+ };
+ opp@80000000 {
+ opp-hz = /bits/ 64 <80000000>;
+ };
+ opp@100000000 {
+ opp-hz = /bits/ 64 <100000000>;
+ };
+ };
};
};
diff --git a/arch/arm/boot/dts/exynos4.dtsi b/arch/arm/boot/dts/exynos4.dtsi
index c679b3cc3c48..ca8f3e3cf2f3 100644
--- a/arch/arm/boot/dts/exynos4.dtsi
+++ b/arch/arm/boot/dts/exynos4.dtsi
@@ -77,12 +77,12 @@
reg = <0x10000000 0x100>;
};
- sromc@12570000 {
- compatible = "samsung,exynos-srom";
+ memory-controller@12570000 {
+ compatible = "samsung,exynos4210-srom";
reg = <0x12570000 0x14>;
};
- mipi_phy: video-phy@10020710 {
+ mipi_phy: video-phy {
compatible = "samsung,s5pv210-mipi-video-phy";
#phy-cells = <1>;
syscon = <&pmu_system_controller>;
@@ -743,6 +743,18 @@
status = "disabled";
};
+ hdmicec: cec@100B0000 {
+ compatible = "samsung,s5p-cec";
+ reg = <0x100B0000 0x200>;
+ interrupts = <0 114 0>;
+ clocks = <&clock CLK_HDMI_CEC>;
+ clock-names = "hdmicec";
+ samsung,syscon-phandle = <&pmu_system_controller>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&hdmi_cec>;
+ status = "disabled";
+ };
+
mixer: mixer@12C10000 {
compatible = "samsung,exynos4210-mixer";
interrupts = <0 91 0>;
@@ -969,11 +981,18 @@
#iommu-cells = <0>;
};
+ sss: sss@10830000 {
+ compatible = "samsung,exynos4210-secss";
+ reg = <0x10830000 0x300>;
+ interrupts = <0 112 0>;
+ clocks = <&clock CLK_SSS>;
+ clock-names = "secss";
+ };
+
prng: rng@10830400 {
compatible = "samsung,exynos4-rng";
reg = <0x10830400 0x200>;
clocks = <&clock CLK_SSS>;
clock-names = "secss";
- status = "disabled";
};
};
diff --git a/arch/arm/boot/dts/exynos4210-pinctrl.dtsi b/arch/arm/boot/dts/exynos4210-pinctrl.dtsi
index a7c212891674..9331c6252eff 100644
--- a/arch/arm/boot/dts/exynos4210-pinctrl.dtsi
+++ b/arch/arm/boot/dts/exynos4210-pinctrl.dtsi
@@ -820,6 +820,13 @@
samsung,pin-pud = <1>;
samsung,pin-drv = <0>;
};
+
+ hdmi_cec: hdmi-cec {
+ samsung,pins = "gpx3-6";
+ samsung,pin-function = <3>;
+ samsung,pin-pud = <0>;
+ samsung,pin-drv = <0>;
+ };
};
pinctrl@03860000 {
diff --git a/arch/arm/boot/dts/exynos4210-trats.dts b/arch/arm/boot/dts/exynos4210-trats.dts
index 1df2f0bc1d76..79d983036560 100644
--- a/arch/arm/boot/dts/exynos4210-trats.dts
+++ b/arch/arm/boot/dts/exynos4210-trats.dts
@@ -298,6 +298,8 @@
compatible = "maxim,max8997-pmic";
reg = <0x66>;
+ interrupt-parent = <&gpx0>;
+ interrupts = <7 0>;
max8997,pmic-buck1-uses-gpio-dvs;
max8997,pmic-buck2-uses-gpio-dvs;
@@ -359,7 +361,7 @@
};
vusbdac_reg: LDO8 {
- regulator-name = "VUSB/VDAC_3.3V_C210";
+ regulator-name = "VUSB+VDAC_3.3V_C210";
regulator-min-microvolt = <3300000>;
regulator-max-microvolt = <3300000>;
};
diff --git a/arch/arm/boot/dts/exynos4210.dtsi b/arch/arm/boot/dts/exynos4210.dtsi
index c1cb8df6da07..2d9b02967105 100644
--- a/arch/arm/boot/dts/exynos4210.dtsi
+++ b/arch/arm/boot/dts/exynos4210.dtsi
@@ -257,6 +257,165 @@
power-domains = <&pd_lcd1>;
#iommu-cells = <0>;
};
+
+ bus_dmc: bus_dmc {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DIV_DMC>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_dmc_opp_table>;
+ status = "disabled";
+ };
+
+ bus_acp: bus_acp {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DIV_ACP>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_acp_opp_table>;
+ status = "disabled";
+ };
+
+ bus_peri: bus_peri {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_ACLK100>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_peri_opp_table>;
+ status = "disabled";
+ };
+
+ bus_fsys: bus_fsys {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_ACLK133>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_fsys_opp_table>;
+ status = "disabled";
+ };
+
+ bus_display: bus_display {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_ACLK160>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_display_opp_table>;
+ status = "disabled";
+ };
+
+ bus_lcd0: bus_lcd0 {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_ACLK200>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_leftbus_opp_table>;
+ status = "disabled";
+ };
+
+ bus_leftbus: bus_leftbus {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DIV_GDL>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_leftbus_opp_table>;
+ status = "disabled";
+ };
+
+ bus_rightbus: bus_rightbus {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DIV_GDR>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_leftbus_opp_table>;
+ status = "disabled";
+ };
+
+ bus_mfc: bus_mfc {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_SCLK_MFC>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_leftbus_opp_table>;
+ status = "disabled";
+ };
+
+ bus_dmc_opp_table: opp_table1 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@134000000 {
+ opp-hz = /bits/ 64 <134000000>;
+ opp-microvolt = <1025000>;
+ };
+ opp@267000000 {
+ opp-hz = /bits/ 64 <267000000>;
+ opp-microvolt = <1050000>;
+ };
+ opp@400000000 {
+ opp-hz = /bits/ 64 <400000000>;
+ opp-microvolt = <1150000>;
+ };
+ };
+
+ bus_acp_opp_table: opp_table2 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@134000000 {
+ opp-hz = /bits/ 64 <134000000>;
+ };
+ opp@160000000 {
+ opp-hz = /bits/ 64 <160000000>;
+ };
+ opp@200000000 {
+ opp-hz = /bits/ 64 <200000000>;
+ };
+ };
+
+ bus_peri_opp_table: opp_table3 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@5000000 {
+ opp-hz = /bits/ 64 <5000000>;
+ };
+ opp@100000000 {
+ opp-hz = /bits/ 64 <100000000>;
+ };
+ };
+
+ bus_fsys_opp_table: opp_table4 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@10000000 {
+ opp-hz = /bits/ 64 <10000000>;
+ };
+ opp@134000000 {
+ opp-hz = /bits/ 64 <134000000>;
+ };
+ };
+
+ bus_display_opp_table: opp_table5 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@100000000 {
+ opp-hz = /bits/ 64 <100000000>;
+ };
+ opp@134000000 {
+ opp-hz = /bits/ 64 <134000000>;
+ };
+ opp@160000000 {
+ opp-hz = /bits/ 64 <160000000>;
+ };
+ };
+
+ bus_leftbus_opp_table: opp_table6 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@100000000 {
+ opp-hz = /bits/ 64 <100000000>;
+ };
+ opp@160000000 {
+ opp-hz = /bits/ 64 <160000000>;
+ };
+ opp@200000000 {
+ opp-hz = /bits/ 64 <200000000>;
+ };
+ };
};
&gic {
diff --git a/arch/arm/boot/dts/exynos4412-odroid-common.dtsi b/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
index 5e5d3fecb04c..ec7619a384a2 100644
--- a/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
+++ b/arch/arm/boot/dts/exynos4412-odroid-common.dtsi
@@ -11,6 +11,7 @@
#include <dt-bindings/input/input.h>
#include <dt-bindings/clock/maxim,max77686.h>
#include "exynos4412.dtsi"
+#include "exynos4412-ppmu-common.dtsi"
#include <dt-bindings/gpio/gpio.h>
/ {
@@ -108,6 +109,53 @@
};
};
+&bus_dmc {
+ devfreq-events = <&ppmu_dmc0_3>, <&ppmu_dmc1_3>;
+ vdd-supply = <&buck1_reg>;
+ status = "okay";
+};
+
+&bus_acp {
+ devfreq = <&bus_dmc>;
+ status = "okay";
+};
+
+&bus_c2c {
+ devfreq = <&bus_dmc>;
+ status = "okay";
+};
+
+&bus_leftbus {
+ devfreq-events = <&ppmu_leftbus_3>, <&ppmu_rightbus_3>;
+ vdd-supply = <&buck3_reg>;
+ status = "okay";
+};
+
+&bus_rightbus {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+};
+
+&bus_display {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+};
+
+&bus_fsys {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+};
+
+&bus_peri {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+};
+
+&bus_mfc {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+};
+
&cpu0 {
cpu0-supply = <&buck2_reg>;
};
@@ -188,6 +236,10 @@
status = "okay";
};
+&hdmicec {
+ status = "okay";
+};
+
&hsotg {
dr_mode = "peripheral";
status = "okay";
@@ -355,8 +407,8 @@
buck1_reg: BUCK1 {
regulator-name = "vdd_mif";
- regulator-min-microvolt = <1000000>;
- regulator-max-microvolt = <1000000>;
+ regulator-min-microvolt = <900000>;
+ regulator-max-microvolt = <1100000>;
regulator-always-on;
regulator-boot-on;
};
@@ -371,8 +423,8 @@
buck3_reg: BUCK3 {
regulator-name = "vdd_int";
- regulator-min-microvolt = <1000000>;
- regulator-max-microvolt = <1000000>;
+ regulator-min-microvolt = <900000>;
+ regulator-max-microvolt = <1050000>;
regulator-always-on;
regulator-boot-on;
};
diff --git a/arch/arm/boot/dts/exynos4412-ppmu-common.dtsi b/arch/arm/boot/dts/exynos4412-ppmu-common.dtsi
new file mode 100644
index 000000000000..16e4b77d8cb1
--- /dev/null
+++ b/arch/arm/boot/dts/exynos4412-ppmu-common.dtsi
@@ -0,0 +1,50 @@
+/*
+ * Device tree sources for Exynos4412 PPMU common device tree
+ *
+ * Copyright (C) 2015 Samsung Electronics
+ * Author: Chanwoo Choi <cw00.choi@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+&ppmu_dmc0 {
+ status = "okay";
+
+ events {
+ ppmu_dmc0_3: ppmu-event3-dmc0 {
+ event-name = "ppmu-event3-dmc0";
+ };
+ };
+};
+
+&ppmu_dmc1 {
+ status = "okay";
+
+ events {
+ ppmu_dmc1_3: ppmu-event3-dmc1 {
+ event-name = "ppmu-event3-dmc1";
+ };
+ };
+};
+
+&ppmu_leftbus {
+ status = "okay";
+
+ events {
+ ppmu_leftbus_3: ppmu-event3-leftbus {
+ event-name = "ppmu-event3-leftbus";
+ };
+ };
+};
+
+&ppmu_rightbus {
+ status = "okay";
+
+ events {
+ ppmu_rightbus_3: ppmu-event3-rightbus {
+ event-name = "ppmu-event3-rightbus";
+ };
+ };
+};
diff --git a/arch/arm/boot/dts/exynos4412-trats2.dts b/arch/arm/boot/dts/exynos4412-trats2.dts
index ed017cc7b14f..9336fd4824d9 100644
--- a/arch/arm/boot/dts/exynos4412-trats2.dts
+++ b/arch/arm/boot/dts/exynos4412-trats2.dts
@@ -14,6 +14,7 @@
/dts-v1/;
#include "exynos4412.dtsi"
+#include "exynos4412-ppmu-common.dtsi"
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/interrupt-controller/irq.h>
#include <dt-bindings/clock/maxim,max77686.h>
@@ -146,13 +147,13 @@
reg = <0x66>;
regulators {
- esafeout1_reg: ESAFEOUT1@1 {
+ esafeout1_reg: ESAFEOUT1 {
regulator-name = "ESAFEOUT1";
};
- esafeout2_reg: ESAFEOUT2@2 {
+ esafeout2_reg: ESAFEOUT2 {
regulator-name = "ESAFEOUT2";
};
- charger_reg: CHARGER@0 {
+ charger_reg: CHARGER {
regulator-name = "CHARGER";
regulator-min-microamp = <60000>;
regulator-max-microamp = <2580000>;
@@ -251,7 +252,7 @@
"SPK", "SPKOUTRP";
};
- thermistor-ap@0 {
+ thermistor-ap {
compatible = "ntc,ncp15wb473";
pullup-uv = <1800000>; /* VCC_1.8V_AP */
pullup-ohm = <100000>; /* 100K */
@@ -259,7 +260,7 @@
io-channels = <&adc 1>; /* AP temperature */
};
- thermistor-battery@1 {
+ thermistor-battery {
compatible = "ntc,ncp15wb473";
pullup-uv = <1800000>; /* VCC_1.8V_AP */
pullup-ohm = <100000>; /* 100K */
@@ -288,6 +289,53 @@
status = "okay";
};
+&bus_dmc {
+ devfreq-events = <&ppmu_dmc0_3>, <&ppmu_dmc1_3>;
+ vdd-supply = <&buck1_reg>;
+ status = "okay";
+};
+
+&bus_acp {
+ devfreq = <&bus_dmc>;
+ status = "okay";
+};
+
+&bus_c2c {
+ devfreq = <&bus_dmc>;
+ status = "okay";
+};
+
+&bus_leftbus {
+ devfreq-events = <&ppmu_leftbus_3>, <&ppmu_rightbus_3>;
+ vdd-supply = <&buck3_reg>;
+ status = "okay";
+};
+
+&bus_rightbus {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+};
+
+&bus_display {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+};
+
+&bus_fsys {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+};
+
+&bus_peri {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+};
+
+&bus_mfc {
+ devfreq = <&bus_leftbus>;
+ status = "okay";
+};
+
&cpu0 {
cpu0-supply = <&buck2_reg>;
};
@@ -871,46 +919,6 @@
assigned-clock-parents = <&clock CLK_XUSBXTI>;
};
-&ppmu_dmc0 {
- status = "okay";
-
- events {
- ppmu_dmc0_3: ppmu-event3-dmc0 {
- event-name = "ppmu-event3-dmc0";
- };
- };
-};
-
-&ppmu_dmc1 {
- status = "okay";
-
- events {
- ppmu_dmc1_3: ppmu-event3-dmc1 {
- event-name = "ppmu-event3-dmc1";
- };
- };
-};
-
-&ppmu_leftbus {
- status = "okay";
-
- events {
- ppmu_leftbus_3: ppmu-event3-leftbus {
- event-name = "ppmu-event3-leftbus";
- };
- };
-};
-
-&ppmu_rightbus {
- status = "okay";
-
- events {
- ppmu_rightbus_3: ppmu-event3-rightbus {
- event-name = "ppmu-event3-rightbus";
- };
- };
-};
-
&pinctrl_0 {
pinctrl-names = "default";
pinctrl-0 = <&sleep0>;
@@ -1234,10 +1242,6 @@
status = "okay";
};
-&prng {
- status = "okay";
-};
-
&rtc {
status = "okay";
clocks = <&clock CLK_RTC>, <&max77686 MAX77686_CLK_AP>;
@@ -1276,7 +1280,7 @@
cs-gpios = <&gpb 5 GPIO_ACTIVE_HIGH>;
status = "okay";
- s5c73m3_spi: s5c73m3 {
+ s5c73m3_spi: s5c73m3@0 {
compatible = "samsung,s5c73m3";
spi-max-frequency = <50000000>;
reg = <0>;
diff --git a/arch/arm/boot/dts/exynos4x12-pinctrl.dtsi b/arch/arm/boot/dts/exynos4x12-pinctrl.dtsi
index bac25c672789..856b29254374 100644
--- a/arch/arm/boot/dts/exynos4x12-pinctrl.dtsi
+++ b/arch/arm/boot/dts/exynos4x12-pinctrl.dtsi
@@ -885,6 +885,13 @@
samsung,pin-pud = <0>;
samsung,pin-drv = <0>;
};
+
+ hdmi_cec: hdmi-cec {
+ samsung,pins = "gpx3-6";
+ samsung,pin-function = <3>;
+ samsung,pin-pud = <0>;
+ samsung,pin-drv = <0>;
+ };
};
pinctrl_2: pinctrl@03860000 {
diff --git a/arch/arm/boot/dts/exynos4x12.dtsi b/arch/arm/boot/dts/exynos4x12.dtsi
index 84a23f962946..c452499ae8c9 100644
--- a/arch/arm/boot/dts/exynos4x12.dtsi
+++ b/arch/arm/boot/dts/exynos4x12.dtsi
@@ -179,7 +179,7 @@
ranges;
status = "disabled";
- pmu {
+ pmu@10020000 {
reg = <0x10020000 0x3000>;
};
@@ -281,6 +281,180 @@
clocks = <&clock CLK_SMMU_LITE1>, <&clock CLK_FIMC_LITE1>;
#iommu-cells = <0>;
};
+
+ bus_dmc: bus_dmc {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DIV_DMC>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_dmc_opp_table>;
+ status = "disabled";
+ };
+
+ bus_acp: bus_acp {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DIV_ACP>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_acp_opp_table>;
+ status = "disabled";
+ };
+
+ bus_c2c: bus_c2c {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DIV_C2C>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_dmc_opp_table>;
+ status = "disabled";
+ };
+
+ bus_dmc_opp_table: opp_table1 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@100000000 {
+ opp-hz = /bits/ 64 <100000000>;
+ opp-microvolt = <900000>;
+ };
+ opp@134000000 {
+ opp-hz = /bits/ 64 <134000000>;
+ opp-microvolt = <900000>;
+ };
+ opp@160000000 {
+ opp-hz = /bits/ 64 <160000000>;
+ opp-microvolt = <900000>;
+ };
+ opp@267000000 {
+ opp-hz = /bits/ 64 <267000000>;
+ opp-microvolt = <950000>;
+ };
+ opp@400000000 {
+ opp-hz = /bits/ 64 <400000000>;
+ opp-microvolt = <1050000>;
+ };
+ };
+
+ bus_acp_opp_table: opp_table2 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@100000000 {
+ opp-hz = /bits/ 64 <100000000>;
+ };
+ opp@134000000 {
+ opp-hz = /bits/ 64 <134000000>;
+ };
+ opp@160000000 {
+ opp-hz = /bits/ 64 <160000000>;
+ };
+ opp@267000000 {
+ opp-hz = /bits/ 64 <267000000>;
+ };
+ };
+
+ bus_leftbus: bus_leftbus {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DIV_GDL>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_leftbus_opp_table>;
+ status = "disabled";
+ };
+
+ bus_rightbus: bus_rightbus {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DIV_GDR>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_leftbus_opp_table>;
+ status = "disabled";
+ };
+
+ bus_display: bus_display {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_ACLK160>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_display_opp_table>;
+ status = "disabled";
+ };
+
+ bus_fsys: bus_fsys {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_ACLK133>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_fsys_opp_table>;
+ status = "disabled";
+ };
+
+ bus_peri: bus_peri {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_ACLK100>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_peri_opp_table>;
+ status = "disabled";
+ };
+
+ bus_mfc: bus_mfc {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_SCLK_MFC>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_leftbus_opp_table>;
+ status = "disabled";
+ };
+
+ bus_leftbus_opp_table: opp_table3 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@100000000 {
+ opp-hz = /bits/ 64 <100000000>;
+ opp-microvolt = <900000>;
+ };
+ opp@134000000 {
+ opp-hz = /bits/ 64 <134000000>;
+ opp-microvolt = <925000>;
+ };
+ opp@160000000 {
+ opp-hz = /bits/ 64 <160000000>;
+ opp-microvolt = <950000>;
+ };
+ opp@200000000 {
+ opp-hz = /bits/ 64 <200000000>;
+ opp-microvolt = <1000000>;
+ };
+ };
+
+ bus_display_opp_table: opp_table4 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@160000000 {
+ opp-hz = /bits/ 64 <160000000>;
+ };
+ opp@200000000 {
+ opp-hz = /bits/ 64 <200000000>;
+ };
+ };
+
+ bus_fsys_opp_table: opp_table5 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@100000000 {
+ opp-hz = /bits/ 64 <100000000>;
+ };
+ opp@134000000 {
+ opp-hz = /bits/ 64 <134000000>;
+ };
+ };
+
+ bus_peri_opp_table: opp_table6 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp@50000000 {
+ opp-hz = /bits/ 64 <50000000>;
+ };
+ opp@100000000 {
+ opp-hz = /bits/ 64 <100000000>;
+ };
+ };
};
&combiner {
diff --git a/arch/arm/boot/dts/exynos5.dtsi b/arch/arm/boot/dts/exynos5.dtsi
index 92313cac035e..d5c0f18a4223 100644
--- a/arch/arm/boot/dts/exynos5.dtsi
+++ b/arch/arm/boot/dts/exynos5.dtsi
@@ -31,8 +31,8 @@
reg = <0x10000000 0x100>;
};
- sromc@12250000 {
- compatible = "samsung,exynos-srom";
+ memory-controller@12250000 {
+ compatible = "samsung,exynos4210-srom";
reg = <0x12250000 0x14>;
};
diff --git a/arch/arm/boot/dts/exynos5250-arndale.dts b/arch/arm/boot/dts/exynos5250-arndale.dts
index 85d819217d17..85dad29c08dc 100644
--- a/arch/arm/boot/dts/exynos5250-arndale.dts
+++ b/arch/arm/boot/dts/exynos5250-arndale.dts
@@ -131,7 +131,7 @@
display-timings {
native-mode = <&timing0>;
- timing0: timing@0 {
+ timing0: timing {
/* 2560x1600 DP panel */
clock-frequency = <50000>;
hactive = <2560>;
diff --git a/arch/arm/boot/dts/exynos5250-smdk5250.dts b/arch/arm/boot/dts/exynos5250-smdk5250.dts
index f30c2dbba4f5..b7992b13c9de 100644
--- a/arch/arm/boot/dts/exynos5250-smdk5250.dts
+++ b/arch/arm/boot/dts/exynos5250-smdk5250.dts
@@ -29,7 +29,7 @@
bootargs = "root=/dev/ram0 rw ramdisk=8192 initrd=0x41000000,8M console=ttySAC2,115200 init=/linuxrc";
};
- vdd: fixed-regulator@0 {
+ vdd: fixed-regulator-vdd {
compatible = "regulator-fixed";
regulator-name = "vdd-supply";
regulator-min-microvolt = <1800000>;
@@ -37,7 +37,7 @@
regulator-always-on;
};
- dbvdd: fixed-regulator@1 {
+ dbvdd: fixed-regulator-dbvdd {
compatible = "regulator-fixed";
regulator-name = "dbvdd-supply";
regulator-min-microvolt = <3300000>;
@@ -45,7 +45,7 @@
regulator-always-on;
};
- spkvdd: fixed-regulator@2 {
+ spkvdd: fixed-regulator-spkvdd {
compatible = "regulator-fixed";
regulator-name = "spkvdd-supply";
regulator-min-microvolt = <5000000>;
@@ -91,7 +91,7 @@
display-timings {
native-mode = <&timing0>;
- timing0: timing@0 {
+ timing0: timing {
/* 1280x800 */
clock-frequency = <50000>;
hactive = <1280>;
diff --git a/arch/arm/boot/dts/exynos5250-snow-common.dtsi b/arch/arm/boot/dts/exynos5250-snow-common.dtsi
index 746808f401e5..ddfe1f558c10 100644
--- a/arch/arm/boot/dts/exynos5250-snow-common.dtsi
+++ b/arch/arm/boot/dts/exynos5250-snow-common.dtsi
@@ -84,7 +84,7 @@
sbs,poll-retry-count = <1>;
};
- cros_ec: embedded-controller {
+ cros_ec: embedded-controller@1e {
compatible = "google,cros-ec-i2c";
reg = <0x1e>;
interrupts = <6 IRQ_TYPE_NONE>;
@@ -94,7 +94,7 @@
wakeup-source;
};
- power-regulator {
+ power-regulator@48 {
compatible = "ti,tps65090";
reg = <0x48>;
@@ -242,7 +242,7 @@
hpd-gpios = <&gpx0 7 GPIO_ACTIVE_HIGH>;
ports {
- port@0 {
+ port0 {
dp_out: endpoint {
remote-endpoint = <&bridge_in>;
};
@@ -426,7 +426,7 @@
samsung,i2c-sda-delay = <100>;
samsung,i2c-max-bus-freq = <378000>;
- trackpad {
+ trackpad@67 {
reg = <0x67>;
compatible = "cypress,cyapa";
interrupts = <2 IRQ_TYPE_NONE>;
@@ -485,13 +485,13 @@
edid-emulation = <5>;
ports {
- port@0 {
+ port0 {
bridge_out: endpoint {
remote-endpoint = <&panel_in>;
};
};
- port@1 {
+ port1 {
bridge_in: endpoint {
remote-endpoint = <&dp_out>;
};
diff --git a/arch/arm/boot/dts/exynos5250-spring.dts b/arch/arm/boot/dts/exynos5250-spring.dts
index c607bed575d9..ac291f540812 100644
--- a/arch/arm/boot/dts/exynos5250-spring.dts
+++ b/arch/arm/boot/dts/exynos5250-spring.dts
@@ -381,7 +381,7 @@
samsung,i2c-sda-delay = <100>;
samsung,i2c-max-bus-freq = <66000>;
- cros_ec: embedded-controller {
+ cros_ec: embedded-controller@1e {
compatible = "google,cros-ec-i2c";
reg = <0x1e>;
interrupts = <6 IRQ_TYPE_NONE>;
diff --git a/arch/arm/boot/dts/exynos5250.dtsi b/arch/arm/boot/dts/exynos5250.dtsi
index e653ae04015a..c7158b2fb213 100644
--- a/arch/arm/boot/dts/exynos5250.dtsi
+++ b/arch/arm/boot/dts/exynos5250.dtsi
@@ -596,7 +596,7 @@
pinctrl-0 = <&i2s2_bus>;
};
- usb@12000000 {
+ usb_dwc3 {
compatible = "samsung,exynos5250-dwusb3";
clocks = <&clock CLK_USB3>;
clock-names = "usbdrd30";
@@ -604,7 +604,7 @@
#size-cells = <1>;
ranges;
- usbdrd_dwc3: dwc3 {
+ usbdrd_dwc3: dwc3@12000000 {
compatible = "synopsys,dwc3";
reg = <0x12000000 0x10000>;
interrupts = <0 72 0>;
@@ -763,7 +763,7 @@
iommu = <&sysmmu_gsc3>;
};
- hdmi: hdmi {
+ hdmi: hdmi@14530000 {
compatible = "samsung,exynos4212-hdmi";
reg = <0x14530000 0x70000>;
power-domains = <&pd_disp1>;
@@ -776,7 +776,7 @@
samsung,syscon-phandle = <&pmu_system_controller>;
};
- mixer {
+ mixer@14450000 {
compatible = "samsung,exynos5250-mixer";
reg = <0x14450000 0x10000>;
power-domains = <&pd_disp1>;
@@ -787,7 +787,7 @@
iommus = <&sysmmu_tv>;
};
- dp_phy: video-phy@10040720 {
+ dp_phy: video-phy {
compatible = "samsung,exynos5250-dp-video-phy";
samsung,pmu-syscon = <&pmu_system_controller>;
#phy-cells = <0>;
diff --git a/arch/arm/boot/dts/exynos5410-smdk5410.dts b/arch/arm/boot/dts/exynos5410-smdk5410.dts
index a731fbe28ebc..0f6429e1b75c 100644
--- a/arch/arm/boot/dts/exynos5410-smdk5410.dts
+++ b/arch/arm/boot/dts/exynos5410-smdk5410.dts
@@ -97,7 +97,7 @@
smsc,irq-push-pull;
smsc,force-internal-phy;
- samsung,srom-page-mode = <1>;
+ samsung,srom-page-mode;
samsung,srom-timing = <9 12 1 9 1 1>;
};
};
diff --git a/arch/arm/boot/dts/exynos5410.dtsi b/arch/arm/boot/dts/exynos5410.dtsi
index fa558674ac76..7a56aec2c5ba 100644
--- a/arch/arm/boot/dts/exynos5410.dtsi
+++ b/arch/arm/boot/dts/exynos5410.dtsi
@@ -102,8 +102,8 @@
reg = <0x10000000 0x100>;
};
- sromc: sromc@12250000 {
- compatible = "samsung,exynos-srom";
+ sromc: memory-controller@12250000 {
+ compatible = "samsung,exynos4210-srom";
reg = <0x12250000 0x14>;
#address-cells = <2>;
#size-cells = <1>;
diff --git a/arch/arm/boot/dts/exynos5420-arndale-octa.dts b/arch/arm/boot/dts/exynos5420-arndale-octa.dts
index a103ce8c3985..60bc861d0f9d 100644
--- a/arch/arm/boot/dts/exynos5420-arndale-octa.dts
+++ b/arch/arm/boot/dts/exynos5420-arndale-octa.dts
@@ -75,13 +75,6 @@
s2mps11_pmic@66 {
compatible = "samsung,s2mps11-pmic";
reg = <0x66>;
- s2mps11,buck2-ramp-delay = <12>;
- s2mps11,buck34-ramp-delay = <12>;
- s2mps11,buck16-ramp-delay = <12>;
- s2mps11,buck6-ramp-enable = <1>;
- s2mps11,buck2-ramp-enable = <1>;
- s2mps11,buck3-ramp-enable = <1>;
- s2mps11,buck4-ramp-enable = <1>;
interrupt-parent = <&gpx3>;
interrupts = <2 IRQ_TYPE_EDGE_FALLING>;
diff --git a/arch/arm/boot/dts/exynos5420-peach-pit.dts b/arch/arm/boot/dts/exynos5420-peach-pit.dts
index 7ddb6a066b28..f9d2e4f1a0e0 100644
--- a/arch/arm/boot/dts/exynos5420-peach-pit.dts
+++ b/arch/arm/boot/dts/exynos5420-peach-pit.dts
@@ -163,7 +163,7 @@
hpd-gpios = <&gpx2 6 GPIO_ACTIVE_HIGH>;
ports {
- port@0 {
+ port0 {
dp_out: endpoint {
remote-endpoint = <&bridge_in>;
};
@@ -631,13 +631,13 @@
use-external-pwm;
ports {
- port@0 {
+ port0 {
bridge_out: endpoint {
remote-endpoint = <&panel_in>;
};
};
- port@1 {
+ port1 {
bridge_in: endpoint {
remote-endpoint = <&dp_out>;
};
@@ -694,6 +694,11 @@
status = "okay";
};
+&mfc {
+ samsung,mfc-r = <0x43000000 0x800000>;
+ samsung,mfc-l = <0x51000000 0x800000>;
+};
+
&mmc_0 {
status = "okay";
num-slots = <1>;
diff --git a/arch/arm/boot/dts/exynos5420-smdk5420.dts b/arch/arm/boot/dts/exynos5420-smdk5420.dts
index 288817daa16c..2e748d19322f 100644
--- a/arch/arm/boot/dts/exynos5420-smdk5420.dts
+++ b/arch/arm/boot/dts/exynos5420-smdk5420.dts
@@ -109,7 +109,7 @@
display-timings {
native-mode = <&timing0>;
- timing0: timing@0 {
+ timing0: timing {
clock-frequency = <50000>;
hactive = <2560>;
vactive = <1600>;
@@ -140,13 +140,6 @@
s2mps11_pmic@66 {
compatible = "samsung,s2mps11-pmic";
reg = <0x66>;
- s2mps11,buck2-ramp-delay = <12>;
- s2mps11,buck34-ramp-delay = <12>;
- s2mps11,buck16-ramp-delay = <12>;
- s2mps11,buck6-ramp-enable = <1>;
- s2mps11,buck2-ramp-enable = <1>;
- s2mps11,buck3-ramp-enable = <1>;
- s2mps11,buck4-ramp-enable = <1>;
s2mps11_osc: clocks {
#clock-cells = <1>;
diff --git a/arch/arm/boot/dts/exynos5420.dtsi b/arch/arm/boot/dts/exynos5420.dtsi
index 7b99cb58d82d..c6e05eb88937 100644
--- a/arch/arm/boot/dts/exynos5420.dtsi
+++ b/arch/arm/boot/dts/exynos5420.dtsi
@@ -294,6 +294,42 @@
};
};
+ nocp_mem0_0: nocp@10CA1000 {
+ compatible = "samsung,exynos5420-nocp";
+ reg = <0x10CA1000 0x200>;
+ status = "disabled";
+ };
+
+ nocp_mem0_1: nocp@10CA1400 {
+ compatible = "samsung,exynos5420-nocp";
+ reg = <0x10CA1400 0x200>;
+ status = "disabled";
+ };
+
+ nocp_mem1_0: nocp@10CA1800 {
+ compatible = "samsung,exynos5420-nocp";
+ reg = <0x10CA1800 0x200>;
+ status = "disabled";
+ };
+
+ nocp_mem1_1: nocp@10CA1C00 {
+ compatible = "samsung,exynos5420-nocp";
+ reg = <0x10CA1C00 0x200>;
+ status = "disabled";
+ };
+
+ nocp_g3d_0: nocp@11A51000 {
+ compatible = "samsung,exynos5420-nocp";
+ reg = <0x11A51000 0x200>;
+ status = "disabled";
+ };
+
+ nocp_g3d_1: nocp@11A51400 {
+ compatible = "samsung,exynos5420-nocp";
+ reg = <0x11A51400 0x200>;
+ status = "disabled";
+ };
+
gsc_pd: power-domain@10044000 {
compatible = "samsung,exynos4210-pd";
reg = <0x10044000 0x20>;
@@ -551,13 +587,13 @@
clock-names = "timers";
};
- dp_phy: video-phy@10040728 {
+ dp_phy: dp-video-phy {
compatible = "samsung,exynos5420-dp-video-phy";
samsung,pmu-syscon = <&pmu_system_controller>;
#phy-cells = <0>;
};
- mipi_phy: video-phy@10040714 {
+ mipi_phy: mipi-video-phy {
compatible = "samsung,s5pv210-mipi-video-phy";
syscon = <&pmu_system_controller>;
#phy-cells = <1>;
@@ -913,7 +949,7 @@
clock-names = "secss";
};
- usbdrd3_0: usb@12000000 {
+ usbdrd3_0: usb3-0 {
compatible = "samsung,exynos5250-dwusb3";
clocks = <&clock CLK_USBD300>;
clock-names = "usbdrd30";
@@ -921,7 +957,7 @@
#size-cells = <1>;
ranges;
- usbdrd_dwc3_0: dwc3 {
+ usbdrd_dwc3_0: dwc3@12000000 {
compatible = "snps,dwc3";
reg = <0x12000000 0x10000>;
interrupts = <0 72 0>;
@@ -939,7 +975,7 @@
#phy-cells = <1>;
};
- usbdrd3_1: usb@12400000 {
+ usbdrd3_1: usb3-1 {
compatible = "samsung,exynos5250-dwusb3";
clocks = <&clock CLK_USBD301>;
clock-names = "usbdrd30";
@@ -947,7 +983,7 @@
#size-cells = <1>;
ranges;
- usbdrd_dwc3_1: dwc3 {
+ usbdrd_dwc3_1: dwc3@12400000 {
compatible = "snps,dwc3";
reg = <0x12400000 0x10000>;
interrupts = <0 73 0>;
@@ -1188,6 +1224,377 @@
power-domains = <&disp_pd>;
#iommu-cells = <0>;
};
+
+ bus_wcore: bus_wcore {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DOUT_ACLK400_WCORE>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_wcore_opp_table>;
+ status = "disabled";
+ };
+
+ bus_noc: bus_noc {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DOUT_ACLK100_NOC>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_noc_opp_table>;
+ status = "disabled";
+ };
+
+ bus_fsys_apb: bus_fsys_apb {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DOUT_PCLK200_FSYS>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_fsys_apb_opp_table>;
+ status = "disabled";
+ };
+
+ bus_fsys: bus_fsys {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DOUT_ACLK200_FSYS>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_fsys_apb_opp_table>;
+ status = "disabled";
+ };
+
+ bus_fsys2: bus_fsys2 {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DOUT_ACLK200_FSYS2>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_fsys2_opp_table>;
+ status = "disabled";
+ };
+
+ bus_mfc: bus_mfc {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DOUT_ACLK333>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_mfc_opp_table>;
+ status = "disabled";
+ };
+
+ bus_gen: bus_gen {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DOUT_ACLK266>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_gen_opp_table>;
+ status = "disabled";
+ };
+
+ bus_peri: bus_peri {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DOUT_ACLK66>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_peri_opp_table>;
+ status = "disabled";
+ };
+
+ bus_g2d: bus_g2d {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DOUT_ACLK333_G2D>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_g2d_opp_table>;
+ status = "disabled";
+ };
+
+ bus_g2d_acp: bus_g2d_acp {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DOUT_ACLK266_G2D>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_g2d_acp_opp_table>;
+ status = "disabled";
+ };
+
+ bus_jpeg: bus_jpeg {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DOUT_ACLK300_JPEG>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_jpeg_opp_table>;
+ status = "disabled";
+ };
+
+ bus_jpeg_apb: bus_jpeg_apb {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DOUT_ACLK166>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_jpeg_apb_opp_table>;
+ status = "disabled";
+ };
+
+ bus_disp1_fimd: bus_disp1_fimd {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DOUT_ACLK300_DISP1>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_disp1_fimd_opp_table>;
+ status = "disabled";
+ };
+
+ bus_disp1: bus_disp1 {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DOUT_ACLK400_DISP1>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_disp1_opp_table>;
+ status = "disabled";
+ };
+
+ bus_gscl_scaler: bus_gscl_scaler {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DOUT_ACLK300_GSCL>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_gscl_opp_table>;
+ status = "disabled";
+ };
+
+ bus_mscl: bus_mscl {
+ compatible = "samsung,exynos-bus";
+ clocks = <&clock CLK_DOUT_ACLK400_MSCL>;
+ clock-names = "bus";
+ operating-points-v2 = <&bus_mscl_opp_table>;
+ status = "disabled";
+ };
+
+ bus_wcore_opp_table: opp_table2 {
+ compatible = "operating-points-v2";
+
+ opp00 {
+ opp-hz = /bits/ 64 <84000000>;
+ opp-microvolt = <925000>;
+ };
+ opp01 {
+ opp-hz = /bits/ 64 <111000000>;
+ opp-microvolt = <950000>;
+ };
+ opp02 {
+ opp-hz = /bits/ 64 <222000000>;
+ opp-microvolt = <950000>;
+ };
+ opp03 {
+ opp-hz = /bits/ 64 <333000000>;
+ opp-microvolt = <950000>;
+ };
+ opp04 {
+ opp-hz = /bits/ 64 <400000000>;
+ opp-microvolt = <987500>;
+ };
+ };
+
+ bus_noc_opp_table: opp_table3 {
+ compatible = "operating-points-v2";
+
+ opp00 {
+ opp-hz = /bits/ 64 <67000000>;
+ };
+ opp01 {
+ opp-hz = /bits/ 64 <75000000>;
+ };
+ opp02 {
+ opp-hz = /bits/ 64 <86000000>;
+ };
+ opp03 {
+ opp-hz = /bits/ 64 <100000000>;
+ };
+ };
+
+ bus_fsys_apb_opp_table: opp_table4 {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp00 {
+ opp-hz = /bits/ 64 <100000000>;
+ };
+ opp01 {
+ opp-hz = /bits/ 64 <200000000>;
+ };
+ };
+
+ bus_fsys2_opp_table: opp_table5 {
+ compatible = "operating-points-v2";
+
+ opp00 {
+ opp-hz = /bits/ 64 <75000000>;
+ };
+ opp01 {
+ opp-hz = /bits/ 64 <100000000>;
+ };
+ opp02 {
+ opp-hz = /bits/ 64 <150000000>;
+ };
+ };
+
+ bus_mfc_opp_table: opp_table6 {
+ compatible = "operating-points-v2";
+
+ opp00 {
+ opp-hz = /bits/ 64 <96000000>;
+ };
+ opp01 {
+ opp-hz = /bits/ 64 <111000000>;
+ };
+ opp02 {
+ opp-hz = /bits/ 64 <167000000>;
+ };
+ opp03 {
+ opp-hz = /bits/ 64 <222000000>;
+ };
+ opp04 {
+ opp-hz = /bits/ 64 <333000000>;
+ };
+ };
+
+ bus_gen_opp_table: opp_table7 {
+ compatible = "operating-points-v2";
+
+ opp00 {
+ opp-hz = /bits/ 64 <89000000>;
+ };
+ opp01 {
+ opp-hz = /bits/ 64 <133000000>;
+ };
+ opp02 {
+ opp-hz = /bits/ 64 <178000000>;
+ };
+ opp03 {
+ opp-hz = /bits/ 64 <267000000>;
+ };
+ };
+
+ bus_peri_opp_table: opp_table8 {
+ compatible = "operating-points-v2";
+
+ opp00 {
+ opp-hz = /bits/ 64 <67000000>;
+ };
+ };
+
+ bus_g2d_opp_table: opp_table9 {
+ compatible = "operating-points-v2";
+
+ opp00 {
+ opp-hz = /bits/ 64 <84000000>;
+ };
+ opp01 {
+ opp-hz = /bits/ 64 <167000000>;
+ };
+ opp02 {
+ opp-hz = /bits/ 64 <222000000>;
+ };
+ opp03 {
+ opp-hz = /bits/ 64 <300000000>;
+ };
+ opp04 {
+ opp-hz = /bits/ 64 <333000000>;
+ };
+ };
+
+ bus_g2d_acp_opp_table: opp_table10 {
+ compatible = "operating-points-v2";
+
+ opp00 {
+ opp-hz = /bits/ 64 <67000000>;
+ };
+ opp01 {
+ opp-hz = /bits/ 64 <133000000>;
+ };
+ opp02 {
+ opp-hz = /bits/ 64 <178000000>;
+ };
+ opp03 {
+ opp-hz = /bits/ 64 <267000000>;
+ };
+ };
+
+ bus_jpeg_opp_table: opp_table11 {
+ compatible = "operating-points-v2";
+
+ opp00 {
+ opp-hz = /bits/ 64 <75000000>;
+ };
+ opp01 {
+ opp-hz = /bits/ 64 <150000000>;
+ };
+ opp02 {
+ opp-hz = /bits/ 64 <200000000>;
+ };
+ opp03 {
+ opp-hz = /bits/ 64 <300000000>;
+ };
+ };
+
+ bus_jpeg_apb_opp_table: opp_table12 {
+ compatible = "operating-points-v2";
+
+ opp00 {
+ opp-hz = /bits/ 64 <84000000>;
+ };
+ opp01 {
+ opp-hz = /bits/ 64 <111000000>;
+ };
+ opp02 {
+ opp-hz = /bits/ 64 <134000000>;
+ };
+ opp03 {
+ opp-hz = /bits/ 64 <167000000>;
+ };
+ };
+
+ bus_disp1_fimd_opp_table: opp_table13 {
+ compatible = "operating-points-v2";
+
+ opp00 {
+ opp-hz = /bits/ 64 <120000000>;
+ };
+ opp01 {
+ opp-hz = /bits/ 64 <200000000>;
+ };
+ };
+
+ bus_disp1_opp_table: opp_table14 {
+ compatible = "operating-points-v2";
+
+ opp00 {
+ opp-hz = /bits/ 64 <120000000>;
+ };
+ opp01 {
+ opp-hz = /bits/ 64 <200000000>;
+ };
+ opp02 {
+ opp-hz = /bits/ 64 <300000000>;
+ };
+ };
+
+ bus_gscl_opp_table: opp_table15 {
+ compatible = "operating-points-v2";
+
+ opp00 {
+ opp-hz = /bits/ 64 <150000000>;
+ };
+ opp01 {
+ opp-hz = /bits/ 64 <200000000>;
+ };
+ opp02 {
+ opp-hz = /bits/ 64 <300000000>;
+ };
+ };
+
+ bus_mscl_opp_table: opp_table16 {
+ compatible = "operating-points-v2";
+
+ opp00 {
+ opp-hz = /bits/ 64 <84000000>;
+ };
+ opp01 {
+ opp-hz = /bits/ 64 <167000000>;
+ };
+ opp02 {
+ opp-hz = /bits/ 64 <222000000>;
+ };
+ opp03 {
+ opp-hz = /bits/ 64 <333000000>;
+ };
+ opp04 {
+ opp-hz = /bits/ 64 <400000000>;
+ };
+ };
};
&dp {
@@ -1199,6 +1606,7 @@
};
&fimd {
+ compatible = "samsung,exynos5420-fimd";
clocks = <&clock CLK_SCLK_FIMD1>, <&clock CLK_FIMD1>;
clock-names = "sclk_fimd", "fimd";
power-domains = <&disp_pd>;
diff --git a/arch/arm/boot/dts/exynos5422-odroidxu3-common.dtsi b/arch/arm/boot/dts/exynos5422-odroidxu3-common.dtsi
index 1bd507bfa750..2a4e10bc8801 100644
--- a/arch/arm/boot/dts/exynos5422-odroidxu3-common.dtsi
+++ b/arch/arm/boot/dts/exynos5422-odroidxu3-common.dtsi
@@ -56,6 +56,89 @@
};
};
+&bus_wcore {
+ devfreq-events = <&nocp_mem0_0>, <&nocp_mem0_1>,
+ <&nocp_mem1_0>, <&nocp_mem1_1>;
+ vdd-supply = <&buck3_reg>;
+ exynos,saturation-ratio = <100>;
+ status = "okay";
+};
+
+&bus_noc {
+ devfreq = <&bus_wcore>;
+ status = "okay";
+};
+
+&bus_fsys_apb {
+ devfreq = <&bus_wcore>;
+ status = "okay";
+};
+
+&bus_fsys {
+ devfreq = <&bus_wcore>;
+ status = "okay";
+};
+
+&bus_fsys2 {
+ devfreq = <&bus_wcore>;
+ status = "okay";
+};
+
+&bus_mfc {
+ devfreq = <&bus_wcore>;
+ status = "okay";
+};
+
+&bus_gen {
+ devfreq = <&bus_wcore>;
+ status = "okay";
+};
+
+&bus_peri {
+ devfreq = <&bus_wcore>;
+ status = "okay";
+};
+
+&bus_g2d {
+ devfreq = <&bus_wcore>;
+ status = "okay";
+};
+
+&bus_g2d_acp {
+ devfreq = <&bus_wcore>;
+ status = "okay";
+};
+
+&bus_jpeg {
+ devfreq = <&bus_wcore>;
+ status = "okay";
+};
+
+&bus_jpeg_apb {
+ devfreq = <&bus_wcore>;
+ status = "okay";
+};
+
+&bus_disp1_fimd {
+ devfreq = <&bus_wcore>;
+ status = "okay";
+};
+
+&bus_disp1 {
+ devfreq = <&bus_wcore>;
+ status = "okay";
+};
+
+&bus_gscl_scaler {
+ devfreq = <&bus_wcore>;
+ status = "okay";
+};
+
+&bus_mscl {
+ devfreq = <&bus_wcore>;
+ status = "okay";
+};
+
&clock_audss {
assigned-clocks = <&clock_audss EXYNOS_MOUT_AUDSS>,
<&clock_audss EXYNOS_MOUT_I2S>,
@@ -92,13 +175,6 @@
s2mps11_pmic@66 {
compatible = "samsung,s2mps11-pmic";
reg = <0x66>;
- s2mps11,buck2-ramp-delay = <12>;
- s2mps11,buck34-ramp-delay = <12>;
- s2mps11,buck16-ramp-delay = <12>;
- s2mps11,buck6-ramp-enable = <1>;
- s2mps11,buck2-ramp-enable = <1>;
- s2mps11,buck3-ramp-enable = <1>;
- s2mps11,buck4-ramp-enable = <1>;
samsung,s2mps11-acokb-ground;
interrupt-parent = <&gpx0>;
@@ -121,10 +197,9 @@
};
ldo3_reg: LDO3 {
- regulator-name = "vdd_ldo3";
+ regulator-name = "vddq_mmc0";
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <1800000>;
- regulator-always-on;
};
ldo5_reg: LDO5 {
@@ -184,10 +259,9 @@
};
ldo13_reg: LDO13 {
- regulator-name = "vdd_ldo13";
+ regulator-name = "vddq_mmc2";
regulator-min-microvolt = <2800000>;
regulator-max-microvolt = <2800000>;
- regulator-always-on;
};
ldo15_reg: LDO15 {
@@ -211,11 +285,16 @@
regulator-always-on;
};
+ ldo18_reg: LDO18 {
+ regulator-name = "vdd_emmc_1V8";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ };
+
ldo19_reg: LDO19 {
regulator-name = "vdd_sd";
regulator-min-microvolt = <2800000>;
regulator-max-microvolt = <2800000>;
- regulator-always-on;
};
ldo24_reg: LDO24 {
@@ -347,6 +426,8 @@
cap-mmc-highspeed;
mmc-hs200-1_8v;
mmc-hs400-1_8v;
+ vmmc-supply = <&ldo18_reg>;
+ vqmmc-supply = <&ldo3_reg>;
};
&mmc_2 {
@@ -359,6 +440,24 @@
pinctrl-0 = <&sd2_clk &sd2_cmd &sd2_cd &sd2_bus1 &sd2_bus4>;
bus-width = <4>;
cap-sd-highspeed;
+ vmmc-supply = <&ldo19_reg>;
+ vqmmc-supply = <&ldo13_reg>;
+};
+
+&nocp_mem0_0 {
+ status = "okay";
+};
+
+&nocp_mem0_1 {
+ status = "okay";
+};
+
+&nocp_mem1_0 {
+ status = "okay";
+};
+
+&nocp_mem1_1 {
+ status = "okay";
};
&pinctrl_0 {
diff --git a/arch/arm/boot/dts/exynos5440.dtsi b/arch/arm/boot/dts/exynos5440.dtsi
index b9342ec5b9cf..fd176819b4bf 100644
--- a/arch/arm/boot/dts/exynos5440.dtsi
+++ b/arch/arm/boot/dts/exynos5440.dtsi
@@ -132,7 +132,7 @@
clock-names = "spi", "spi_busclk0";
};
- pin_ctrl: pinctrl {
+ pin_ctrl: pinctrl@E0000 {
compatible = "samsung,exynos5440-pinctrl";
reg = <0xE0000 0x1000>;
interrupts = <0 37 0>, <0 38 0>, <0 39 0>, <0 40 0>,
@@ -205,7 +205,7 @@
ranges;
};
- rtc {
+ rtc@130000 {
compatible = "samsung,s3c6410-rtc";
reg = <0x130000 0x1000>;
interrupts = <0 17 0>, <0 16 0>;
diff --git a/arch/arm/boot/dts/exynos5800-peach-pi.dts b/arch/arm/boot/dts/exynos5800-peach-pi.dts
index 6ba9aec15485..62ceb89e073f 100644
--- a/arch/arm/boot/dts/exynos5800-peach-pi.dts
+++ b/arch/arm/boot/dts/exynos5800-peach-pi.dts
@@ -669,6 +669,11 @@
status = "okay";
};
+&mfc {
+ samsung,mfc-r = <0x43000000 0x800000>;
+ samsung,mfc-l = <0x51000000 0x800000>;
+};
+
&mmc_0 {
status = "okay";
num-slots = <1>;
diff --git a/arch/arm/boot/dts/imx25-pinfunc.h b/arch/arm/boot/dts/imx25-pinfunc.h
index 848ffa785b63..f96fa2df8f11 100644
--- a/arch/arm/boot/dts/imx25-pinfunc.h
+++ b/arch/arm/boot/dts/imx25-pinfunc.h
@@ -110,20 +110,20 @@
#define MX25_PAD_CS4__UART5_CTS 0x054 0x264 0x000 0x13 0x000
#define MX25_PAD_CS4__GPIO_3_20 0x054 0x264 0x000 0x15 0x000
-#define MX25_PAD_CS5__CS5 0x058 0x268 0x000 0x10 0x000
+#define MX25_PAD_CS5__CS5 0x058 0x268 0x000 0x00 0x000
#define MX25_PAD_CS5__NF_CE2 0x058 0x268 0x000 0x01 0x000
-#define MX25_PAD_CS5__UART5_RTS 0x058 0x268 0x574 0x13 0x000
-#define MX25_PAD_CS5__GPIO_3_21 0x058 0x268 0x000 0x15 0x000
+#define MX25_PAD_CS5__UART5_RTS 0x058 0x268 0x574 0x03 0x000
+#define MX25_PAD_CS5__GPIO_3_21 0x058 0x268 0x000 0x05 0x000
#define MX25_PAD_NF_CE0__NF_CE0 0x05c 0x26c 0x000 0x10 0x000
#define MX25_PAD_NF_CE0__GPIO_3_22 0x05c 0x26c 0x000 0x15 0x000
#define MX25_PAD_ECB__ECB 0x060 0x270 0x000 0x10 0x000
-#define MX25_PAD_ECB__UART5_TXD_MUX 0x060 0x270 0x000 0x13 0x000
+#define MX25_PAD_ECB__UART5_TXD 0x060 0x270 0x000 0x13 0x000
#define MX25_PAD_ECB__GPIO_3_23 0x060 0x270 0x000 0x15 0x000
#define MX25_PAD_LBA__LBA 0x064 0x274 0x000 0x10 0x000
-#define MX25_PAD_LBA__UART5_RXD_MUX 0x064 0x274 0x578 0x13 0x000
+#define MX25_PAD_LBA__UART5_RXD 0x064 0x274 0x578 0x13 0x000
#define MX25_PAD_LBA__GPIO_3_24 0x064 0x274 0x000 0x15 0x000
#define MX25_PAD_BCLK__BCLK 0x068 0x000 0x000 0x00 0x000
@@ -237,17 +237,21 @@
#define MX25_PAD_LD7__GPIO_1_21 0x0e4 0x2dc 0x000 0x15 0x000
#define MX25_PAD_LD8__LD8 0x0e8 0x2e0 0x000 0x10 0x000
+#define MX25_PAD_LD8__UART4_RXD 0x0e8 0x2e0 0x570 0x12 0x000
#define MX25_PAD_LD8__FEC_TX_ERR 0x0e8 0x2e0 0x000 0x15 0x000
#define MX25_PAD_LD8__SDHC2_CMD 0x0e8 0x2e0 0x4e0 0x06 0x000
#define MX25_PAD_LD9__LD9 0x0ec 0x2e4 0x000 0x10 0x000
+#define MX25_PAD_LD9__UART4_TXD 0x0ec 0x2e4 0x000 0x12 0x000
#define MX25_PAD_LD9__FEC_COL 0x0ec 0x2e4 0x504 0x15 0x001
#define MX25_PAD_LD9__SDHC2_CLK 0x0ec 0x2e4 0x4dc 0x06 0x000
-#define MX25_PAD_LD10__LD10 0x0f0 0x2e8 0x000 0x10 0x000
-#define MX25_PAD_LD10__FEC_RX_ERR 0x0f0 0x2e8 0x518 0x15 0x001
+#define MX25_PAD_LD10__LD10 0x0f0 0x2e8 0x000 0x00 0x000
+#define MX25_PAD_LD10__UART4_RTS 0x0f0 0x2e8 0x56c 0x02 0x000
+#define MX25_PAD_LD10__FEC_RX_ERR 0x0f0 0x2e8 0x518 0x05 0x001
#define MX25_PAD_LD11__LD11 0x0f4 0x2ec 0x000 0x10 0x000
+#define MX25_PAD_LD11__UART4_CTS 0x0f4 0x2ec 0x000 0x12 0x000
#define MX25_PAD_LD11__FEC_RDATA2 0x0f4 0x2ec 0x50c 0x15 0x001
#define MX25_PAD_LD11__SDHC2_DAT1 0x0f4 0x2ec 0x4e8 0x06 0x000
@@ -291,22 +295,22 @@
#define MX25_PAD_PWM__USBH2_OC 0x11c 0x314 0x580 0x16 0x001
#define MX25_PAD_CSI_D2__CSI_D2 0x120 0x318 0x000 0x10 0x000
-#define MX25_PAD_CSI_D2__UART5_RXD_MUX 0x120 0x318 0x578 0x11 0x001
+#define MX25_PAD_CSI_D2__UART5_RXD 0x120 0x318 0x578 0x11 0x001
#define MX25_PAD_CSI_D2__SIM1_CLK0 0x120 0x318 0x000 0x04 0x000
#define MX25_PAD_CSI_D2__GPIO_1_27 0x120 0x318 0x000 0x15 0x000
#define MX25_PAD_CSI_D2__CSPI3_MOSI 0x120 0x318 0x000 0x17 0x000
#define MX25_PAD_CSI_D3__CSI_D3 0x124 0x31c 0x000 0x10 0x000
-#define MX25_PAD_CSI_D3__UART5_TXD_MUX 0x124 0x31c 0x000 0x11 0x000
+#define MX25_PAD_CSI_D3__UART5_TXD 0x124 0x31c 0x000 0x11 0x000
#define MX25_PAD_CSI_D3__SIM1_RST0 0x124 0x31c 0x000 0x04 0x000
#define MX25_PAD_CSI_D3__GPIO_1_28 0x124 0x31c 0x000 0x15 0x000
#define MX25_PAD_CSI_D3__CSPI3_MISO 0x124 0x31c 0x4b4 0x17 0x001
-#define MX25_PAD_CSI_D4__CSI_D4 0x128 0x320 0x000 0x10 0x000
-#define MX25_PAD_CSI_D4__UART5_RTS 0x128 0x320 0x574 0x11 0x001
+#define MX25_PAD_CSI_D4__CSI_D4 0x128 0x320 0x000 0x00 0x000
+#define MX25_PAD_CSI_D4__UART5_RTS 0x128 0x320 0x574 0x01 0x001
#define MX25_PAD_CSI_D4__SIM1_VEN0 0x128 0x320 0x000 0x04 0x000
-#define MX25_PAD_CSI_D4__GPIO_1_29 0x128 0x320 0x000 0x15 0x000
-#define MX25_PAD_CSI_D4__CSPI3_SCLK 0x128 0x320 0x000 0x17 0x000
+#define MX25_PAD_CSI_D4__GPIO_1_29 0x128 0x320 0x000 0x05 0x000
+#define MX25_PAD_CSI_D4__CSPI3_SCLK 0x128 0x320 0x000 0x07 0x000
#define MX25_PAD_CSI_D5__CSI_D5 0x12c 0x324 0x000 0x10 0x000
#define MX25_PAD_CSI_D5__UART5_CTS 0x12c 0x324 0x000 0x11 0x000
@@ -360,7 +364,7 @@
#define MX25_PAD_I2C1_DAT__GPIO_1_13 0x154 0x34c 0x000 0x15 0x000
#define MX25_PAD_CSPI1_MOSI__CSPI1_MOSI 0x158 0x350 0x000 0x10 0x000
-#define MX25_PAD_CSPI1_MOSI__UART3_RXD 0x158 0x350 0x000 0x12 0x000
+#define MX25_PAD_CSPI1_MOSI__UART3_RXD 0x158 0x350 0x568 0x12 0x000
#define MX25_PAD_CSPI1_MOSI__GPIO_1_14 0x158 0x350 0x000 0x15 0x000
#define MX25_PAD_CSPI1_MISO__CSPI1_MISO 0x15c 0x354 0x000 0x10 0x000
@@ -371,10 +375,10 @@
#define MX25_PAD_CSPI1_SS0__PWM2_PWMO 0x160 0x358 0x000 0x12 0x000
#define MX25_PAD_CSPI1_SS0__GPIO_1_16 0x160 0x358 0x000 0x15 0x000
-#define MX25_PAD_CSPI1_SS1__CSPI1_SS1 0x164 0x35c 0x000 0x10 0x000
-#define MX25_PAD_CSPI1_SS1__I2C3_DAT 0x164 0x35C 0x528 0x11 0x001
-#define MX25_PAD_CSPI1_SS1__UART3_RTS 0x164 0x35c 0x000 0x12 0x000
-#define MX25_PAD_CSPI1_SS1__GPIO_1_17 0x164 0x35c 0x000 0x15 0x000
+#define MX25_PAD_CSPI1_SS1__CSPI1_SS1 0x164 0x35c 0x000 0x00 0x000
+#define MX25_PAD_CSPI1_SS1__I2C3_DAT 0x164 0x35C 0x528 0x01 0x001
+#define MX25_PAD_CSPI1_SS1__UART3_RTS 0x164 0x35c 0x000 0x02 0x000
+#define MX25_PAD_CSPI1_SS1__GPIO_1_17 0x164 0x35c 0x000 0x05 0x000
#define MX25_PAD_CSPI1_SCLK__CSPI1_SCLK 0x168 0x360 0x000 0x10 0x000
#define MX25_PAD_CSPI1_SCLK__UART3_CTS 0x168 0x360 0x000 0x12 0x000
@@ -383,20 +387,24 @@
#define MX25_PAD_CSPI1_RDY__CSPI1_RDY 0x16c 0x364 0x000 0x10 0x000
#define MX25_PAD_CSPI1_RDY__GPIO_2_22 0x16c 0x364 0x000 0x15 0x000
-#define MX25_PAD_UART1_RXD__UART1_RXD 0x170 0x368 0x000 0x10 0x000
-#define MX25_PAD_UART1_RXD__GPIO_4_22 0x170 0x368 0x000 0x15 0x000
+#define MX25_PAD_UART1_RXD__UART1_RXD 0x170 0x368 0x000 0x00 0x000
+#define MX25_PAD_UART1_RXD__UART2_DTR 0x170 0x368 0x000 0x03 0x000
+#define MX25_PAD_UART1_RXD__GPIO_4_22 0x170 0x368 0x000 0x05 0x000
-#define MX25_PAD_UART1_TXD__UART1_TXD 0x174 0x36c 0x000 0x10 0x000
-#define MX25_PAD_UART1_TXD__GPIO_4_23 0x174 0x36c 0x000 0x15 0x000
+#define MX25_PAD_UART1_TXD__UART1_TXD 0x174 0x36c 0x000 0x00 0x000
+#define MX25_PAD_UART1_TXD__UART2_DSR 0x174 0x36c 0x000 0x03 0x000
+#define MX25_PAD_UART1_TXD__GPIO_4_23 0x174 0x36c 0x000 0x05 0x000
-#define MX25_PAD_UART1_RTS__UART1_RTS 0x178 0x370 0x000 0x10 0x000
-#define MX25_PAD_UART1_RTS__CSI_D0 0x178 0x370 0x488 0x11 0x001
-#define MX25_PAD_UART1_RTS__CC3 0x178 0x370 0x000 0x12 0x000
-#define MX25_PAD_UART1_RTS__GPIO_4_24 0x178 0x370 0x000 0x15 0x000
+#define MX25_PAD_UART1_RTS__UART1_RTS 0x178 0x370 0x000 0x00 0x000
+#define MX25_PAD_UART1_RTS__CSI_D0 0x178 0x370 0x488 0x01 0x001
+#define MX25_PAD_UART1_RTS__CC3 0x178 0x370 0x000 0x02 0x000
+#define MX25_PAD_UART1_RTS__UART2_DCD 0x178 0x370 0x000 0x03 0x000
+#define MX25_PAD_UART1_RTS__GPIO_4_24 0x178 0x370 0x000 0x05 0x000
-#define MX25_PAD_UART1_CTS__UART1_CTS 0x17c 0x374 0x000 0x10 0x000
-#define MX25_PAD_UART1_CTS__CSI_D1 0x17c 0x374 0x48c 0x11 0x001
-#define MX25_PAD_UART1_CTS__GPIO_4_25 0x17c 0x374 0x000 0x15 0x000
+#define MX25_PAD_UART1_CTS__UART1_CTS 0x17c 0x374 0x000 0x00 0x000
+#define MX25_PAD_UART1_CTS__CSI_D1 0x17c 0x374 0x48c 0x01 0x001
+#define MX25_PAD_UART1_CTS__UART2_RI 0x17c 0x374 0x000 0x03 0x001
+#define MX25_PAD_UART1_CTS__GPIO_4_25 0x17c 0x374 0x000 0x05 0x000
#define MX25_PAD_UART2_RXD__UART2_RXD 0x180 0x378 0x000 0x10 0x000
#define MX25_PAD_UART2_RXD__GPIO_4_26 0x180 0x378 0x000 0x15 0x000
@@ -404,10 +412,10 @@
#define MX25_PAD_UART2_TXD__UART2_TXD 0x184 0x37c 0x000 0x10 0x000
#define MX25_PAD_UART2_TXD__GPIO_4_27 0x184 0x37c 0x000 0x15 0x000
-#define MX25_PAD_UART2_RTS__UART2_RTS 0x188 0x380 0x000 0x10 0x000
-#define MX25_PAD_UART2_RTS__FEC_COL 0x188 0x380 0x504 0x12 0x002
-#define MX25_PAD_UART2_RTS__CC1 0x188 0x380 0x000 0x13 0x000
-#define MX25_PAD_UART2_RTS__GPIO_4_28 0x188 0x380 0x000 0x15 0x000
+#define MX25_PAD_UART2_RTS__UART2_RTS 0x188 0x380 0x000 0x00 0x000
+#define MX25_PAD_UART2_RTS__FEC_COL 0x188 0x380 0x504 0x02 0x002
+#define MX25_PAD_UART2_RTS__CC1 0x188 0x380 0x000 0x03 0x000
+#define MX25_PAD_UART2_RTS__GPIO_4_28 0x188 0x380 0x000 0x05 0x000
#define MX25_PAD_UART2_CTS__UART2_CTS 0x18c 0x384 0x000 0x10 0x000
#define MX25_PAD_UART2_CTS__FEC_RX_ERR 0x18c 0x384 0x518 0x12 0x002
@@ -439,36 +447,42 @@
#define MX25_PAD_SD1_DATA3__FEC_CRS 0x1a4 0x39c 0x508 0x12 0x002
#define MX25_PAD_SD1_DATA3__GPIO_2_28 0x1a4 0x39c 0x000 0x15 0x000
-#define MX25_PAD_KPP_ROW0__KPP_ROW0 0x1a8 0x3a0 0x000 0x10 0x000
-#define MX25_PAD_KPP_ROW0__UART1_DTR 0x1a8 0x3a0 0x000 0x14 0x000
-#define MX25_PAD_KPP_ROW0__GPIO_2_29 0x1a8 0x3a0 0x000 0x15 0x000
+#define MX25_PAD_KPP_ROW0__KPP_ROW0 0x1a8 0x3a0 0x000 0x00 0x000
+#define MX25_PAD_KPP_ROW0__UART3_RXD 0x1a8 0x3a0 0x568 0x01 0x001
+#define MX25_PAD_KPP_ROW0__UART1_DTR 0x1a8 0x3a0 0x000 0x04 0x000
+#define MX25_PAD_KPP_ROW0__GPIO_2_29 0x1a8 0x3a0 0x000 0x05 0x000
-#define MX25_PAD_KPP_ROW1__KPP_ROW1 0x1ac 0x3a4 0x000 0x10 0x000
-#define MX25_PAD_KPP_ROW1__GPIO_2_30 0x1ac 0x3a4 0x000 0x15 0x000
+#define MX25_PAD_KPP_ROW1__KPP_ROW1 0x1ac 0x3a4 0x000 0x00 0x000
+#define MX25_PAD_KPP_ROW1__UART3_TXD 0x1ac 0x3a4 0x000 0x01 0x000
+#define MX25_PAD_KPP_ROW1__UART1_DSR 0x1ac 0x3a4 0x000 0x04 0x000
+#define MX25_PAD_KPP_ROW1__GPIO_2_30 0x1ac 0x3a4 0x000 0x05 0x000
-#define MX25_PAD_KPP_ROW2__KPP_ROW2 0x1b0 0x3a8 0x000 0x10 0x000
-#define MX25_PAD_KPP_ROW2__CSI_D0 0x1b0 0x3a8 0x488 0x13 0x002
-#define MX25_PAD_KPP_ROW2__UART1_DCD 0x1b0 0x3a8 0x000 0x14 0x000
-#define MX25_PAD_KPP_ROW2__GPIO_2_31 0x1b0 0x3a8 0x000 0x15 0x000
+#define MX25_PAD_KPP_ROW2__KPP_ROW2 0x1b0 0x3a8 0x000 0x00 0x000
+#define MX25_PAD_KPP_ROW2__UART3_RTS 0x1b0 0x3a8 0x000 0x01 0x000
+#define MX25_PAD_KPP_ROW2__CSI_D0 0x1b0 0x3a8 0x488 0x03 0x002
+#define MX25_PAD_KPP_ROW2__UART1_DCD 0x1b0 0x3a8 0x000 0x04 0x000
+#define MX25_PAD_KPP_ROW2__GPIO_2_31 0x1b0 0x3a8 0x000 0x05 0x000
-#define MX25_PAD_KPP_ROW3__KPP_ROW3 0x1b4 0x3ac 0x000 0x10 0x000
-#define MX25_PAD_KPP_ROW3__CSI_D1 0x1b4 0x3ac 0x48c 0x13 0x002
-#define MX25_PAD_KPP_ROW3__GPIO_3_0 0x1b4 0x3ac 0x000 0x15 0x000
+#define MX25_PAD_KPP_ROW3__KPP_ROW3 0x1b4 0x3ac 0x000 0x00 0x000
+#define MX25_PAD_KPP_ROW3__UART3_CTS 0x1b4 0x3ac 0x000 0x01 0x000
+#define MX25_PAD_KPP_ROW3__CSI_D1 0x1b4 0x3ac 0x48c 0x03 0x002
+#define MX25_PAD_KPP_ROW3__UART1_RI 0x1b4 0x3ac 0x000 0x04 0x000
+#define MX25_PAD_KPP_ROW3__GPIO_3_0 0x1b4 0x3ac 0x000 0x05 0x000
#define MX25_PAD_KPP_COL0__KPP_COL0 0x1b8 0x3b0 0x000 0x10 0x000
-#define MX25_PAD_KPP_COL0__UART4_RXD_MUX 0x1b8 0x3b0 0x570 0x11 0x001
+#define MX25_PAD_KPP_COL0__UART4_RXD 0x1b8 0x3b0 0x570 0x11 0x001
#define MX25_PAD_KPP_COL0__AUD5_TXD 0x1b8 0x3b0 0x000 0x12 0x000
#define MX25_PAD_KPP_COL0__GPIO_3_1 0x1b8 0x3b0 0x000 0x15 0x000
#define MX25_PAD_KPP_COL1__KPP_COL1 0x1bc 0x3b4 0x000 0x10 0x000
-#define MX25_PAD_KPP_COL1__UART4_TXD_MUX 0x1bc 0x3b4 0x000 0x11 0x000
+#define MX25_PAD_KPP_COL1__UART4_TXD 0x1bc 0x3b4 0x000 0x11 0x000
#define MX25_PAD_KPP_COL1__AUD5_RXD 0x1bc 0x3b4 0x000 0x12 0x000
#define MX25_PAD_KPP_COL1__GPIO_3_2 0x1bc 0x3b4 0x000 0x15 0x000
-#define MX25_PAD_KPP_COL2__KPP_COL2 0x1c0 0x3b8 0x000 0x10 0x000
-#define MX25_PAD_KPP_COL2__UART4_RTS 0x1c0 0x3b8 0x000 0x11 0x000
-#define MX25_PAD_KPP_COL2__AUD5_TXC 0x1c0 0x3b8 0x000 0x12 0x000
-#define MX25_PAD_KPP_COL2__GPIO_3_3 0x1c0 0x3b8 0x000 0x15 0x000
+#define MX25_PAD_KPP_COL2__KPP_COL2 0x1c0 0x3b8 0x000 0x00 0x000
+#define MX25_PAD_KPP_COL2__UART4_RTS 0x1c0 0x3b8 0x56c 0x01 0x001
+#define MX25_PAD_KPP_COL2__AUD5_TXC 0x1c0 0x3b8 0x000 0x02 0x000
+#define MX25_PAD_KPP_COL2__GPIO_3_3 0x1c0 0x3b8 0x000 0x05 0x000
#define MX25_PAD_KPP_COL3__KPP_COL3 0x1c4 0x3bc 0x000 0x10 0x000
#define MX25_PAD_KPP_COL3__UART4_CTS 0x1c4 0x3bc 0x000 0x11 0x000
@@ -557,9 +571,10 @@
#define MX25_PAD_UPLL_BYPCLK__UPLL_BYPCLK 0x210 0x000 0x000 0x10 0x000
#define MX25_PAD_UPLL_BYPCLK__GPIO_3_16 0x210 0x000 0x000 0x15 0x000
-#define MX25_PAD_VSTBY_REQ__VSTBY_REQ 0x214 0x408 0x000 0x10 0x000
-#define MX25_PAD_VSTBY_REQ__AUD7_TXFS 0x214 0x408 0x000 0x14 0x000
-#define MX25_PAD_VSTBY_REQ__GPIO_3_17 0x214 0x408 0x000 0x15 0x000
+#define MX25_PAD_VSTBY_REQ__VSTBY_REQ 0x214 0x408 0x000 0x00 0x000
+#define MX25_PAD_VSTBY_REQ__AUD7_TXFS 0x214 0x408 0x000 0x04 0x000
+#define MX25_PAD_VSTBY_REQ__GPIO_3_17 0x214 0x408 0x000 0x05 0x000
+#define MX25_PAD_VSTBY_REQ__UART4_RTS 0x214 0x408 0x56c 0x06 0x002
#define MX25_PAD_VSTBY_ACK__VSTBY_ACK 0x218 0x40c 0x000 0x10 0x000
#define MX25_PAD_VSTBY_ACK__GPIO_3_18 0x218 0x40c 0x000 0x15 0x000
@@ -567,6 +582,7 @@
#define MX25_PAD_POWER_FAIL__POWER_FAIL 0x21c 0x410 0x000 0x10 0x000
#define MX25_PAD_POWER_FAIL__AUD7_RXD 0x21c 0x410 0x478 0x14 0x001
#define MX25_PAD_POWER_FAIL__GPIO_3_19 0x21c 0x410 0x000 0x15 0x000
+#define MX25_PAD_POWER_FAIL__UART4_CTS 0x21c 0x410 0x000 0x16 0x000
#define MX25_PAD_CLKO__CLKO 0x220 0x414 0x000 0x10 0x000
#define MX25_PAD_CLKO__GPIO_2_21 0x220 0x414 0x000 0x15 0x000
diff --git a/arch/arm/boot/dts/imx25.dtsi b/arch/arm/boot/dts/imx25.dtsi
index 6b1f4bbe6ec6..af6af8741fe5 100644
--- a/arch/arm/boot/dts/imx25.dtsi
+++ b/arch/arm/boot/dts/imx25.dtsi
@@ -420,6 +420,15 @@
interrupts = <41>;
};
+ scc: crypto@53fac000 {
+ compatible = "fsl,imx25-scc";
+ reg = <0x53fac000 0x4000>;
+ clocks = <&clks 111>;
+ clock-names = "ipg";
+ interrupts = <49>, <50>;
+ interrupt-names = "scm", "smn";
+ };
+
esdhc1: esdhc@53fb4000 {
compatible = "fsl,imx25-esdhc";
reg = <0x53fb4000 0x4000>;
diff --git a/arch/arm/boot/dts/imx28-m28.dtsi b/arch/arm/boot/dts/imx28-m28.dtsi
index 759cc56253dd..6cebaa6b8833 100644
--- a/arch/arm/boot/dts/imx28-m28.dtsi
+++ b/arch/arm/boot/dts/imx28-m28.dtsi
@@ -27,32 +27,6 @@
pinctrl-names = "default";
pinctrl-0 = <&gpmi_pins_a &gpmi_status_cfg>;
status = "okay";
-
- partition@0 {
- label = "bootloader";
- reg = <0x00000000 0x00300000>;
- read-only;
- };
-
- partition@1 {
- label = "environment";
- reg = <0x00300000 0x00080000>;
- };
-
- partition@2 {
- label = "redundant-environment";
- reg = <0x00380000 0x00080000>;
- };
-
- partition@3 {
- label = "kernel";
- reg = <0x00400000 0x00400000>;
- };
-
- partition@4 {
- label = "filesystem";
- reg = <0x00800000 0x0f800000>;
- };
};
};
diff --git a/arch/arm/boot/dts/imx28.dtsi b/arch/arm/boot/dts/imx28.dtsi
index f637ec900cc8..74aa151cdb45 100644
--- a/arch/arm/boot/dts/imx28.dtsi
+++ b/arch/arm/boot/dts/imx28.dtsi
@@ -434,6 +434,32 @@
fsl,pull-up = <MXS_PULL_ENABLE>;
};
+ mac0_pins_b: mac0@1 {
+ reg = <1>;
+ fsl,pinmux-ids = <
+ MX28_PAD_ENET0_MDC__ENET0_MDC
+ MX28_PAD_ENET0_MDIO__ENET0_MDIO
+ MX28_PAD_ENET0_RX_EN__ENET0_RX_EN
+ MX28_PAD_ENET0_RXD0__ENET0_RXD0
+ MX28_PAD_ENET0_RXD1__ENET0_RXD1
+ MX28_PAD_ENET0_RXD2__ENET0_RXD2
+ MX28_PAD_ENET0_RXD3__ENET0_RXD3
+ MX28_PAD_ENET0_TX_EN__ENET0_TX_EN
+ MX28_PAD_ENET0_TXD0__ENET0_TXD0
+ MX28_PAD_ENET0_TXD1__ENET0_TXD1
+ MX28_PAD_ENET0_TXD2__ENET0_TXD2
+ MX28_PAD_ENET0_TXD3__ENET0_TXD3
+ MX28_PAD_ENET_CLK__CLKCTRL_ENET
+ MX28_PAD_ENET0_COL__ENET0_COL
+ MX28_PAD_ENET0_CRS__ENET0_CRS
+ MX28_PAD_ENET0_TX_CLK__ENET0_TX_CLK
+ MX28_PAD_ENET0_RX_CLK__ENET0_RX_CLK
+ >;
+ fsl,drive-strength = <MXS_DRIVE_8mA>;
+ fsl,voltage = <MXS_VOLTAGE_HIGH>;
+ fsl,pull-up = <MXS_PULL_ENABLE>;
+ };
+
mac1_pins_a: mac1@0 {
reg = <0>;
fsl,pinmux-ids = <
diff --git a/arch/arm/boot/dts/imx31.dtsi b/arch/arm/boot/dts/imx31.dtsi
index 5fdb222636a7..1ce7ae94e7ad 100644
--- a/arch/arm/boot/dts/imx31.dtsi
+++ b/arch/arm/boot/dts/imx31.dtsi
@@ -69,6 +69,14 @@
status = "disabled";
};
+ kpp: kpp@43fa8000 {
+ compatible = "fsl,imx31-kpp", "fsl,imx21-kpp";
+ reg = <0x43fa8000 0x4000>;
+ interrupts = <24>;
+ clocks = <&clks 46>;
+ status = "disabled";
+ };
+
uart4: serial@43fb0000 {
compatible = "fsl,imx31-uart", "fsl,imx21-uart";
reg = <0x43fb0000 0x4000>;
diff --git a/arch/arm/boot/dts/imx35.dtsi b/arch/arm/boot/dts/imx35.dtsi
index 14e1320d9f84..490b7b44f1e7 100644
--- a/arch/arm/boot/dts/imx35.dtsi
+++ b/arch/arm/boot/dts/imx35.dtsi
@@ -137,6 +137,14 @@
status = "disabled";
};
+ kpp: kpp@43fa8000 {
+ compatible = "fsl,imx35-kpp", "fsl,imx21-kpp";
+ reg = <0x43fa8000 0x4000>;
+ interrupts = <24>;
+ clocks = <&clks 56>;
+ status = "disabled";
+ };
+
iomuxc: iomuxc@43fac000 {
compatible = "fsl,imx35-iomuxc";
reg = <0x43fac000 0x4000>;
diff --git a/arch/arm/boot/dts/imx53-m53evk.dts b/arch/arm/boot/dts/imx53-m53evk.dts
index 53f40885c530..dcee1e0f968f 100644
--- a/arch/arm/boot/dts/imx53-m53evk.dts
+++ b/arch/arm/boot/dts/imx53-m53evk.dts
@@ -84,6 +84,15 @@
regulator-max-microvolt = <5000000>;
gpio = <&gpio1 2 0>;
};
+
+ reg_usb_otg_vbus: regulator@4 {
+ compatible = "regulator-fixed";
+ reg = <4>;
+ regulator-name = "usb_otg_vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ gpio = <&gpio1 4 0>;
+ };
};
sound {
@@ -168,6 +177,12 @@
>;
};
+ pinctrl_usbotg: usbotggrp {
+ fsl,pins = <
+ MX53_PAD_GPIO_4__GPIO1_4 0x000b0
+ >;
+ };
+
led_pin_gpio: led_gpio@0 {
fsl,pins = <
MX53_PAD_PATA_DATA8__GPIO2_8 0x80000000
@@ -351,6 +366,10 @@
};
&usbotg {
- dr_mode = "peripheral";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usbotg>;
+ dr_mode = "otg";
+ vbus-supply = <&reg_usb_otg_vbus>;
+ disable-over-current;
status = "okay";
};
diff --git a/arch/arm/boot/dts/imx6dl-riotboard.dts b/arch/arm/boot/dts/imx6dl-riotboard.dts
index 5111f5170d53..bfbed52ce1bd 100644
--- a/arch/arm/boot/dts/imx6dl-riotboard.dts
+++ b/arch/arm/boot/dts/imx6dl-riotboard.dts
@@ -114,7 +114,7 @@
codec: sgtl5000@0a {
compatible = "fsl,sgtl5000";
reg = <0x0a>;
- clocks = <&clks 201>;
+ clocks = <&clks IMX6QDL_CLK_CKO>;
VDDA-supply = <&reg_2p5v>;
VDDIO-supply = <&reg_3p3v>;
};
diff --git a/arch/arm/boot/dts/imx6dl-tx6dl-comtft.dts b/arch/arm/boot/dts/imx6dl-tx6dl-comtft.dts
index 913bb9a0466a..063fe7510da5 100644
--- a/arch/arm/boot/dts/imx6dl-tx6dl-comtft.dts
+++ b/arch/arm/boot/dts/imx6dl-tx6dl-comtft.dts
@@ -1,12 +1,42 @@
/*
- * Copyright 2014 Lothar Waßmann <LW@KARO-electronics.de>
+ * Copyright 2014-2016 Lothar Waßmann <LW@KARO-electronics.de>
*
- * The code contained herein is licensed under the GNU General Public
- * License. You may obtain a copy of the GNU General Public License
- * Version 2 at the following locations:
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
*
- * http://www.opensource.org/licenses/gpl-license.html
- * http://www.gnu.org/copyleft/gpl.html
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
*/
/dts-v1/;
diff --git a/arch/arm/boot/dts/imx6dl-tx6s-8034.dts b/arch/arm/boot/dts/imx6dl-tx6s-8034.dts
new file mode 100644
index 000000000000..ff8f7b1c4282
--- /dev/null
+++ b/arch/arm/boot/dts/imx6dl-tx6s-8034.dts
@@ -0,0 +1,237 @@
+/*
+ * Copyright 2015-2016 Lothar Waßmann <LW@KARO-electronics.de>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "imx6dl.dtsi"
+#include "imx6qdl-tx6.dtsi"
+
+/ {
+ model = "Ka-Ro electronics TX6S-8034 Module";
+ compatible = "karo,imx6dl-tx6dl", "fsl,imx6dl";
+
+ aliases {
+ display = &display;
+ ipu1 = &ipu1;
+ };
+
+ cpus {
+ /delete-node/ cpu@1;
+ };
+
+ backlight: backlight {
+ compatible = "pwm-backlight";
+ pwms = <&pwm2 0 500000 PWM_POLARITY_INVERTED>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_lcd0_pwr>;
+ enable-gpios = <&gpio3 29 GPIO_ACTIVE_HIGH>;
+ power-supply = <&reg_lcd1_pwr>;
+ /*
+ * a poor man's way to create a 1:1 relationship between
+ * the PWM value and the actual duty cycle
+ */
+ brightness-levels = < 0 1 2 3 4 5 6 7 8 9
+ 10 11 12 13 14 15 16 17 18 19
+ 20 21 22 23 24 25 26 27 28 29
+ 30 31 32 33 34 35 36 37 38 39
+ 40 41 42 43 44 45 46 47 48 49
+ 50 51 52 53 54 55 56 57 58 59
+ 60 61 62 63 64 65 66 67 68 69
+ 70 71 72 73 74 75 76 77 78 79
+ 80 81 82 83 84 85 86 87 88 89
+ 90 91 92 93 94 95 96 97 98 99
+ 100>;
+ default-brightness-level = <50>;
+ };
+
+ display: display@di0 {
+ compatible = "fsl,imx-parallel-display";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_disp0_2>;
+ interface-pix-fmt = "rgb24";
+ status = "okay";
+
+ port {
+ display0_in: endpoint {
+ remote-endpoint = <&ipu1_di0_disp0>;
+ };
+ };
+
+ display-timings {
+ native-mode = <&vga>;
+
+ vga: VGA {
+ clock-frequency = <25200000>;
+ hactive = <640>;
+ vactive = <480>;
+ hback-porch = <48>;
+ hsync-len = <96>;
+ hfront-porch = <16>;
+ vback-porch = <31>;
+ vsync-len = <2>;
+ vfront-porch = <12>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ETV570 {
+ clock-frequency = <25200000>;
+ hactive = <640>;
+ vactive = <480>;
+ hback-porch = <114>;
+ hsync-len = <30>;
+ hfront-porch = <16>;
+ vback-porch = <32>;
+ vsync-len = <3>;
+ vfront-porch = <10>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ET0350 {
+ clock-frequency = <6413760>;
+ hactive = <320>;
+ vactive = <240>;
+ hback-porch = <34>;
+ hsync-len = <34>;
+ hfront-porch = <20>;
+ vback-porch = <15>;
+ vsync-len = <3>;
+ vfront-porch = <4>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ET0430 {
+ clock-frequency = <9009000>;
+ hactive = <480>;
+ vactive = <272>;
+ hback-porch = <2>;
+ hsync-len = <41>;
+ hfront-porch = <2>;
+ vback-porch = <2>;
+ vsync-len = <10>;
+ vfront-porch = <2>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <1>;
+ };
+
+ ET0500 {
+ clock-frequency = <33264000>;
+ hactive = <800>;
+ vactive = <480>;
+ hback-porch = <88>;
+ hsync-len = <128>;
+ hfront-porch = <40>;
+ vback-porch = <33>;
+ vsync-len = <2>;
+ vfront-porch = <10>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ET0700 { /* same as ET0500 */
+ clock-frequency = <33264000>;
+ hactive = <800>;
+ vactive = <480>;
+ hback-porch = <88>;
+ hsync-len = <128>;
+ hfront-porch = <40>;
+ vback-porch = <33>;
+ vsync-len = <2>;
+ vfront-porch = <10>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ETQ570 {
+ clock-frequency = <6596040>;
+ hactive = <320>;
+ vactive = <240>;
+ hback-porch = <38>;
+ hsync-len = <30>;
+ hfront-porch = <30>;
+ vback-porch = <16>;
+ vsync-len = <3>;
+ vfront-porch = <4>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+ };
+ };
+};
+
+&ds1339 {
+ status = "disabled";
+};
+
+&pinctrl_usdhc1 {
+ fsl,pins = <
+ MX6QDL_PAD_SD1_CMD__SD1_CMD 0x070b1
+ MX6QDL_PAD_SD1_CLK__SD1_CLK 0x070b1
+ MX6QDL_PAD_SD1_DAT0__SD1_DATA0 0x070b1
+ MX6QDL_PAD_SD1_DAT1__SD1_DATA1 0x070b1
+ MX6QDL_PAD_SD1_DAT2__SD1_DATA2 0x070b1
+ MX6QDL_PAD_SD1_DAT3__SD1_DATA3 0x070b1
+ MX6QDL_PAD_SD3_CMD__GPIO7_IO02 0x170b0 /* SD1 CD */
+ >;
+};
+
+&ipu1_di0_disp0 {
+ remote-endpoint = <&display0_in>;
+};
+
+&reg_lcd0_pwr {
+ status = "disabled";
+};
diff --git a/arch/arm/boot/dts/imx6dl-tx6s-8035.dts b/arch/arm/boot/dts/imx6dl-tx6s-8035.dts
new file mode 100644
index 000000000000..f988950e9443
--- /dev/null
+++ b/arch/arm/boot/dts/imx6dl-tx6s-8035.dts
@@ -0,0 +1,253 @@
+/*
+ * Copyright 2015-2016 Lothar Waßmann <LW@KARO-electronics.de>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "imx6dl.dtsi"
+#include "imx6qdl-tx6.dtsi"
+
+/ {
+ model = "Ka-Ro electronics TX6S-8035 Module";
+ compatible = "karo,imx6dl-tx6dl", "fsl,imx6dl";
+
+ aliases {
+ display = &display;
+ ipu1 = &ipu1;
+ };
+
+ cpus {
+ /delete-node/ cpu@1;
+ };
+
+ backlight: backlight {
+ compatible = "pwm-backlight";
+ pwms = <&pwm2 0 500000 PWM_POLARITY_INVERTED>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_lcd0_pwr>;
+ enable-gpios = <&gpio3 29 GPIO_ACTIVE_HIGH>;
+ power-supply = <&reg_lcd1_pwr>;
+ /*
+ * a poor man's way to create a 1:1 relationship between
+ * the PWM value and the actual duty cycle
+ */
+ brightness-levels = < 0 1 2 3 4 5 6 7 8 9
+ 10 11 12 13 14 15 16 17 18 19
+ 20 21 22 23 24 25 26 27 28 29
+ 30 31 32 33 34 35 36 37 38 39
+ 40 41 42 43 44 45 46 47 48 49
+ 50 51 52 53 54 55 56 57 58 59
+ 60 61 62 63 64 65 66 67 68 69
+ 70 71 72 73 74 75 76 77 78 79
+ 80 81 82 83 84 85 86 87 88 89
+ 90 91 92 93 94 95 96 97 98 99
+ 100>;
+ default-brightness-level = <50>;
+ };
+
+ display: display@di0 {
+ compatible = "fsl,imx-parallel-display";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_disp0_2>;
+ interface-pix-fmt = "rgb24";
+ status = "okay";
+
+ port {
+ display0_in: endpoint {
+ remote-endpoint = <&ipu1_di0_disp0>;
+ };
+ };
+
+ display-timings {
+ native-mode = <&vga>;
+
+ vga: VGA {
+ clock-frequency = <25200000>;
+ hactive = <640>;
+ vactive = <480>;
+ hback-porch = <48>;
+ hsync-len = <96>;
+ hfront-porch = <16>;
+ vback-porch = <31>;
+ vsync-len = <2>;
+ vfront-porch = <12>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ETV570 {
+ clock-frequency = <25200000>;
+ hactive = <640>;
+ vactive = <480>;
+ hback-porch = <114>;
+ hsync-len = <30>;
+ hfront-porch = <16>;
+ vback-porch = <32>;
+ vsync-len = <3>;
+ vfront-porch = <10>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ET0350 {
+ clock-frequency = <6413760>;
+ hactive = <320>;
+ vactive = <240>;
+ hback-porch = <34>;
+ hsync-len = <34>;
+ hfront-porch = <20>;
+ vback-porch = <15>;
+ vsync-len = <3>;
+ vfront-porch = <4>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ET0430 {
+ clock-frequency = <9009000>;
+ hactive = <480>;
+ vactive = <272>;
+ hback-porch = <2>;
+ hsync-len = <41>;
+ hfront-porch = <2>;
+ vback-porch = <2>;
+ vsync-len = <10>;
+ vfront-porch = <2>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <1>;
+ };
+
+ ET0500 {
+ clock-frequency = <33264000>;
+ hactive = <800>;
+ vactive = <480>;
+ hback-porch = <88>;
+ hsync-len = <128>;
+ hfront-porch = <40>;
+ vback-porch = <33>;
+ vsync-len = <2>;
+ vfront-porch = <10>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ET0700 { /* same as ET0500 */
+ clock-frequency = <33264000>;
+ hactive = <800>;
+ vactive = <480>;
+ hback-porch = <88>;
+ hsync-len = <128>;
+ hfront-porch = <40>;
+ vback-porch = <33>;
+ vsync-len = <2>;
+ vfront-porch = <10>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ETQ570 {
+ clock-frequency = <6596040>;
+ hactive = <320>;
+ vactive = <240>;
+ hback-porch = <38>;
+ hsync-len = <30>;
+ hfront-porch = <30>;
+ vback-porch = <16>;
+ vsync-len = <3>;
+ vfront-porch = <4>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+ };
+ };
+};
+
+&ds1339 {
+ status = "disabled";
+};
+
+&gpmi {
+ status = "disabled";
+};
+
+&ipu1_di0_disp0 {
+ remote-endpoint = <&display0_in>;
+};
+
+&reg_lcd0_pwr {
+ status = "disabled";
+};
+
+&usdhc4 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usdhc4>;
+ bus-width = <4>;
+ non-removable;
+ no-1-8-v;
+ fsl,wp-controller;
+ status = "okay";
+};
+
+&iomuxc {
+ pinctrl_usdhc4: usdhc4grp {
+ fsl,pins = <
+ MX6QDL_PAD_SD4_CMD__SD4_CMD 0x070b1
+ MX6QDL_PAD_SD4_CLK__SD4_CLK 0x070b1
+ MX6QDL_PAD_SD4_DAT0__SD4_DATA0 0x070b1
+ MX6QDL_PAD_SD4_DAT1__SD4_DATA1 0x070b1
+ MX6QDL_PAD_SD4_DAT2__SD4_DATA2 0x070b1
+ MX6QDL_PAD_SD4_DAT3__SD4_DATA3 0x070b1
+ MX6QDL_PAD_NANDF_ALE__SD4_RESET 0x0b0b1
+ >;
+ };
+};
diff --git a/arch/arm/boot/dts/imx6dl-tx6u-801x.dts b/arch/arm/boot/dts/imx6dl-tx6u-801x.dts
index 5fe465c2814e..b7a72840b7f0 100644
--- a/arch/arm/boot/dts/imx6dl-tx6u-801x.dts
+++ b/arch/arm/boot/dts/imx6dl-tx6u-801x.dts
@@ -1,12 +1,42 @@
/*
- * Copyright 2014 Lothar Waßmann <LW@KARO-electronics.de>
+ * Copyright 2014-2016 Lothar Waßmann <LW@KARO-electronics.de>
*
- * The code contained herein is licensed under the GNU General Public
- * License. You may obtain a copy of the GNU General Public License
- * Version 2 at the following locations:
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
*
- * http://www.opensource.org/licenses/gpl-license.html
- * http://www.gnu.org/copyleft/gpl.html
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
*/
/dts-v1/;
diff --git a/arch/arm/boot/dts/imx6dl-tx6u-8033.dts b/arch/arm/boot/dts/imx6dl-tx6u-8033.dts
new file mode 100644
index 000000000000..4d3204a56f46
--- /dev/null
+++ b/arch/arm/boot/dts/imx6dl-tx6u-8033.dts
@@ -0,0 +1,248 @@
+/*
+ * Copyright 2014-2016 Lothar Waßmann <LW@KARO-electronics.de>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "imx6dl.dtsi"
+#include "imx6qdl-tx6.dtsi"
+
+/ {
+ model = "Ka-Ro electronics TX6U-8033 Module";
+ compatible = "karo,imx6dl-tx6dl", "fsl,imx6dl";
+
+ aliases {
+ display = &display;
+ };
+
+ backlight: backlight {
+ compatible = "pwm-backlight";
+ pwms = <&pwm2 0 500000 PWM_POLARITY_INVERTED>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_lcd0_pwr>;
+ enable-gpios = <&gpio3 29 GPIO_ACTIVE_HIGH>;
+ power-supply = <&reg_lcd1_pwr>;
+ /*
+ * a poor man's way to create a 1:1 relationship between
+ * the PWM value and the actual duty cycle
+ */
+ brightness-levels = < 0 1 2 3 4 5 6 7 8 9
+ 10 11 12 13 14 15 16 17 18 19
+ 20 21 22 23 24 25 26 27 28 29
+ 30 31 32 33 34 35 36 37 38 39
+ 40 41 42 43 44 45 46 47 48 49
+ 50 51 52 53 54 55 56 57 58 59
+ 60 61 62 63 64 65 66 67 68 69
+ 70 71 72 73 74 75 76 77 78 79
+ 80 81 82 83 84 85 86 87 88 89
+ 90 91 92 93 94 95 96 97 98 99
+ 100>;
+ default-brightness-level = <50>;
+ };
+
+ display: display@di0 {
+ compatible = "fsl,imx-parallel-display";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_disp0_2>;
+ interface-pix-fmt = "rgb24";
+ status = "okay";
+
+ port {
+ display0_in: endpoint {
+ remote-endpoint = <&ipu1_di0_disp0>;
+ };
+ };
+
+ display-timings {
+ native-mode = <&vga>;
+
+ vga: VGA {
+ clock-frequency = <25200000>;
+ hactive = <640>;
+ vactive = <480>;
+ hback-porch = <48>;
+ hsync-len = <96>;
+ hfront-porch = <16>;
+ vback-porch = <31>;
+ vsync-len = <2>;
+ vfront-porch = <12>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ETV570 {
+ clock-frequency = <25200000>;
+ hactive = <640>;
+ vactive = <480>;
+ hback-porch = <114>;
+ hsync-len = <30>;
+ hfront-porch = <16>;
+ vback-porch = <32>;
+ vsync-len = <3>;
+ vfront-porch = <10>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ET0350 {
+ clock-frequency = <6413760>;
+ hactive = <320>;
+ vactive = <240>;
+ hback-porch = <34>;
+ hsync-len = <34>;
+ hfront-porch = <20>;
+ vback-porch = <15>;
+ vsync-len = <3>;
+ vfront-porch = <4>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ET0430 {
+ clock-frequency = <9009000>;
+ hactive = <480>;
+ vactive = <272>;
+ hback-porch = <2>;
+ hsync-len = <41>;
+ hfront-porch = <2>;
+ vback-porch = <2>;
+ vsync-len = <10>;
+ vfront-porch = <2>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <1>;
+ };
+
+ ET0500 {
+ clock-frequency = <33264000>;
+ hactive = <800>;
+ vactive = <480>;
+ hback-porch = <88>;
+ hsync-len = <128>;
+ hfront-porch = <40>;
+ vback-porch = <33>;
+ vsync-len = <2>;
+ vfront-porch = <10>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ET0700 { /* same as ET0500 */
+ clock-frequency = <33264000>;
+ hactive = <800>;
+ vactive = <480>;
+ hback-porch = <88>;
+ hsync-len = <128>;
+ hfront-porch = <40>;
+ vback-porch = <33>;
+ vsync-len = <2>;
+ vfront-porch = <10>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ETQ570 {
+ clock-frequency = <6596040>;
+ hactive = <320>;
+ vactive = <240>;
+ hback-porch = <38>;
+ hsync-len = <30>;
+ hfront-porch = <30>;
+ vback-porch = <16>;
+ vsync-len = <3>;
+ vfront-porch = <4>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+ };
+ };
+};
+
+&ds1339 {
+ status = "disabled";
+};
+
+&gpmi {
+ status = "disabled";
+};
+
+&ipu1_di0_disp0 {
+ remote-endpoint = <&display0_in>;
+};
+
+&reg_lcd0_pwr {
+ status = "disabled";
+};
+
+&usdhc4 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usdhc4>;
+ bus-width = <4>;
+ non-removable;
+ no-1-8-v;
+ fsl,wp-controller;
+ status = "okay";
+};
+
+&iomuxc {
+ pinctrl_usdhc4: usdhc4grp {
+ fsl,pins = <
+ MX6QDL_PAD_SD4_CMD__SD4_CMD 0x070b1
+ MX6QDL_PAD_SD4_CLK__SD4_CLK 0x070b1
+ MX6QDL_PAD_SD4_DAT0__SD4_DATA0 0x070b1
+ MX6QDL_PAD_SD4_DAT1__SD4_DATA1 0x070b1
+ MX6QDL_PAD_SD4_DAT2__SD4_DATA2 0x070b1
+ MX6QDL_PAD_SD4_DAT3__SD4_DATA3 0x070b1
+ MX6QDL_PAD_NANDF_ALE__SD4_RESET 0x0b0b1
+ >;
+ };
+};
diff --git a/arch/arm/boot/dts/imx6dl-tx6u-811x.dts b/arch/arm/boot/dts/imx6dl-tx6u-811x.dts
index d35a5cdc3229..5e0c6bb49f37 100644
--- a/arch/arm/boot/dts/imx6dl-tx6u-811x.dts
+++ b/arch/arm/boot/dts/imx6dl-tx6u-811x.dts
@@ -1,12 +1,42 @@
/*
- * Copyright 2014 Lothar Waßmann <LW@KARO-electronics.de>
+ * Copyright 2014-2016 Lothar Waßmann <LW@KARO-electronics.de>
*
- * The code contained herein is licensed under the GNU General Public
- * License. You may obtain a copy of the GNU General Public License
- * Version 2 at the following locations:
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
*
- * http://www.opensource.org/licenses/gpl-license.html
- * http://www.gnu.org/copyleft/gpl.html
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
*/
/dts-v1/;
@@ -81,16 +111,6 @@
};
};
-&iomuxc {
- imx6dl-tx6u-811x {
- pinctrl_eeti: eetigrp {
- fsl,pins = <
- MX6QDL_PAD_EIM_D22__GPIO3_IO22 0x1b0b1 /* Interrupt */
- >;
- };
- };
-};
-
&kpp {
status = "disabled"; /* pad conflict with backlight1 PWM */
};
@@ -148,3 +168,11 @@
&pwm1 {
status = "okay";
};
+
+&iomuxc {
+ pinctrl_eeti: eetigrp {
+ fsl,pins = <
+ MX6QDL_PAD_EIM_D22__GPIO3_IO22 0x1b0b1 /* Interrupt */
+ >;
+ };
+};
diff --git a/arch/arm/boot/dts/imx6dl-tx6u-81xx-mb7.dts b/arch/arm/boot/dts/imx6dl-tx6u-81xx-mb7.dts
new file mode 100644
index 000000000000..b9a783f7160e
--- /dev/null
+++ b/arch/arm/boot/dts/imx6dl-tx6u-81xx-mb7.dts
@@ -0,0 +1,255 @@
+/*
+ * Copyright 2016 Lothar Waßmann <LW@KARO-electronics.de>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "imx6dl.dtsi"
+#include "imx6qdl-tx6.dtsi"
+
+/ {
+ model = "Ka-Ro electronics TX6U-81xx Module on MB7 baseboard";
+ compatible = "karo,imx6dl-tx6dl", "fsl,imx6dl";
+
+ aliases {
+ display = &lvds0;
+ lvds0 = &lvds0;
+ lvds1 = &lvds1;
+ };
+
+ backlight0: backlight0 {
+ compatible = "pwm-backlight";
+ pwms = <&pwm2 0 500000 PWM_POLARITY_INVERTED>;
+ power-supply = <&reg_lcd0_pwr>;
+ /*
+ * a poor man's way to create a 1:1 relationship between
+ * the PWM value and the actual duty cycle
+ */
+ brightness-levels = < 0 1 2 3 4 5 6 7 8 9
+ 10 11 12 13 14 15 16 17 18 19
+ 20 21 22 23 24 25 26 27 28 29
+ 30 31 32 33 34 35 36 37 38 39
+ 40 41 42 43 44 45 46 47 48 49
+ 50 51 52 53 54 55 56 57 58 59
+ 60 61 62 63 64 65 66 67 68 69
+ 70 71 72 73 74 75 76 77 78 79
+ 80 81 82 83 84 85 86 87 88 89
+ 90 91 92 93 94 95 96 97 98 99
+ 100>;
+ default-brightness-level = <50>;
+ };
+
+ backlight1: backlight1 {
+ compatible = "pwm-backlight";
+ pwms = <&pwm1 0 500000 PWM_POLARITY_INVERTED>;
+ power-supply = <&reg_lcd1_pwr>;
+ /*
+ * a poor man's way to create a 1:1 relationship between
+ * the PWM value and the actual duty cycle
+ */
+ brightness-levels = < 0 1 2 3 4 5 6 7 8 9
+ 10 11 12 13 14 15 16 17 18 19
+ 20 21 22 23 24 25 26 27 28 29
+ 30 31 32 33 34 35 36 37 38 39
+ 40 41 42 43 44 45 46 47 48 49
+ 50 51 52 53 54 55 56 57 58 59
+ 60 61 62 63 64 65 66 67 68 69
+ 70 71 72 73 74 75 76 77 78 79
+ 80 81 82 83 84 85 86 87 88 89
+ 90 91 92 93 94 95 96 97 98 99
+ 100>;
+ default-brightness-level = <50>;
+ };
+};
+
+&can1 {
+ status = "disabled";
+};
+
+&can2 {
+ xceiver-supply = <&reg_3v3>;
+};
+
+&i2c3 {
+ polytouch1: eeti@04 {
+ compatible = "eeti,egalax_ts";
+ reg = <0x04>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_eeti>;
+ interrupts-extended = <&gpio3 22 IRQ_TYPE_EDGE_FALLING>;
+ wakeup-gpios = <&gpio3 22 GPIO_ACTIVE_HIGH>;
+ wakeup-source;
+ };
+};
+
+&kpp {
+ status = "disabled"; /* pads partially clash with backlight1 PWM */
+};
+
+&ldb {
+ status = "okay";
+
+ lvds0: lvds-channel@0 {
+ fsl,data-mapping = "spwg";
+ fsl,data-width = <18>;
+ status = "okay";
+
+ display-timings {
+ native-mode = <&lvds0_timing1>;
+
+ lvds0_timing0: hsd100pxn1 {
+ clock-frequency = <65000000>;
+ hactive = <1024>;
+ vactive = <768>;
+ hback-porch = <220>;
+ hfront-porch = <40>;
+ vback-porch = <21>;
+ vfront-porch = <7>;
+ hsync-len = <60>;
+ vsync-len = <10>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <1>;
+ };
+
+ lvds0_timing1: VGA {
+ clock-frequency = <25200000>;
+ hactive = <640>;
+ vactive = <480>;
+ hback-porch = <48>;
+ hfront-porch = <16>;
+ vback-porch = <31>;
+ vfront-porch = <12>;
+ hsync-len = <96>;
+ vsync-len = <2>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ lvds0_timing2: nl12880bc20 {
+ clock-frequency = <71000000>;
+ hactive = <1280>;
+ vactive = <800>;
+ hback-porch = <50>;
+ hfront-porch = <50>;
+ vback-porch = <5>;
+ vfront-porch = <5>;
+ hsync-len = <60>;
+ vsync-len = <13>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <1>;
+ };
+ };
+ };
+
+ lvds1: lvds-channel@1 {
+ fsl,data-mapping = "spwg";
+ fsl,data-width = <18>;
+ status = "okay";
+
+ display-timings {
+ native-mode = <&lvds1_timing2>;
+
+ lvds1_timing0: hsd100pxn1 {
+ clock-frequency = <65000000>;
+ hactive = <1024>;
+ vactive = <768>;
+ hback-porch = <220>;
+ hfront-porch = <40>;
+ vback-porch = <21>;
+ vfront-porch = <7>;
+ hsync-len = <60>;
+ vsync-len = <10>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <1>;
+ };
+
+ lvds1_timing1: VGA {
+ clock-frequency = <25200000>;
+ hactive = <640>;
+ vactive = <480>;
+ hback-porch = <48>;
+ hfront-porch = <16>;
+ vback-porch = <31>;
+ vfront-porch = <12>;
+ hsync-len = <96>;
+ vsync-len = <2>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ lvds1_timing2: nl12880bc20 {
+ clock-frequency = <71000000>;
+ hactive = <1280>;
+ vactive = <800>;
+ hback-porch = <50>;
+ hfront-porch = <50>;
+ vback-porch = <5>;
+ vfront-porch = <5>;
+ hsync-len = <60>;
+ vsync-len = <13>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <1>;
+ };
+ };
+ };
+};
+
+&pwm1 {
+ status = "okay";
+};
+
+&iomuxc {
+ pinctrl_eeti: eetigrp {
+ fsl,pins = <
+ MX6QDL_PAD_EIM_D22__GPIO3_IO22 0x1b0b1 /* Interrupt */
+ >;
+ };
+};
diff --git a/arch/arm/boot/dts/imx6dl.dtsi b/arch/arm/boot/dts/imx6dl.dtsi
index c13a73aa55ca..9a4c22c2dade 100644
--- a/arch/arm/boot/dts/imx6dl.dtsi
+++ b/arch/arm/boot/dts/imx6dl.dtsi
@@ -30,7 +30,7 @@
/* kHz uV */
996000 1250000
792000 1175000
- 396000 1075000
+ 396000 1150000
>;
fsl,soc-operating-points = <
/* ARM kHz SOC-PU uV */
diff --git a/arch/arm/boot/dts/imx6q-apalis-ixora.dts b/arch/arm/boot/dts/imx6q-apalis-ixora.dts
index 2cba82d0d859..8e67ca27ad79 100644
--- a/arch/arm/boot/dts/imx6q-apalis-ixora.dts
+++ b/arch/arm/boot/dts/imx6q-apalis-ixora.dts
@@ -80,6 +80,47 @@
};
};
+ lcd_display: display@di0 {
+ compatible = "fsl,imx-parallel-display";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ interface-pix-fmt = "rgb24";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_ipu1_lcdif>;
+ status = "okay";
+
+ port@0 {
+ reg = <0>;
+
+ lcd_display_in: endpoint {
+ remote-endpoint = <&ipu1_di0_disp1>;
+ };
+ };
+
+ port@1 {
+ reg = <1>;
+
+ lcd_display_out: endpoint {
+ remote-endpoint = <&lcd_panel_in>;
+ };
+ };
+ };
+
+ panel: panel {
+ /*
+ * edt,et057090dhu: EDT 5.7" LCD TFT
+ * edt,et070080dh6: EDT 7.0" LCD TFT
+ */
+ compatible = "edt,et057090dhu";
+ backlight = <&backlight>;
+
+ port {
+ lcd_panel_in: endpoint {
+ remote-endpoint = <&lcd_display_out>;
+ };
+ };
+ };
+
leds {
compatible = "gpio-leds";
@@ -169,13 +210,18 @@
};
};
+&ipu1_di0_disp1 {
+ remote-endpoint = <&lcd_display_in>;
+};
+
&ldb {
status = "okay";
};
&pcie {
- /* active-low meaning opposite of regular PERST# active-low polarity */
- reset-gpio = <&gpio1 28 GPIO_ACTIVE_LOW>;
+ /* active-high meaning opposite of regular PERST# active-low polarity */
+ reset-gpio = <&gpio1 28 GPIO_ACTIVE_HIGH>;
+ reset-gpio-active-high;
status = "okay";
};
diff --git a/arch/arm/boot/dts/imx6q-b450v3.dts b/arch/arm/boot/dts/imx6q-b450v3.dts
index 3101be5bafa7..f0a2be5268e3 100644
--- a/arch/arm/boot/dts/imx6q-b450v3.dts
+++ b/arch/arm/boot/dts/imx6q-b450v3.dts
@@ -65,11 +65,14 @@
};
};
-&ldb {
+&clks {
assigned-clocks = <&clks IMX6QDL_CLK_LDB_DI0_SEL>,
<&clks IMX6QDL_CLK_LDB_DI1_SEL>;
assigned-clock-parents = <&clks IMX6QDL_CLK_PLL3_USB_OTG>,
<&clks IMX6QDL_CLK_PLL3_USB_OTG>;
+};
+
+&ldb {
status = "okay";
lvds0: lvds-channel@0 {
diff --git a/arch/arm/boot/dts/imx6q-b650v3.dts b/arch/arm/boot/dts/imx6q-b650v3.dts
index 823f55ccb60f..33cb71acadcc 100644
--- a/arch/arm/boot/dts/imx6q-b650v3.dts
+++ b/arch/arm/boot/dts/imx6q-b650v3.dts
@@ -65,11 +65,14 @@
};
};
-&ldb {
+&clks {
assigned-clocks = <&clks IMX6QDL_CLK_LDB_DI0_SEL>,
<&clks IMX6QDL_CLK_LDB_DI1_SEL>;
assigned-clock-parents = <&clks IMX6QDL_CLK_PLL3_USB_OTG>,
<&clks IMX6QDL_CLK_PLL3_USB_OTG>;
+};
+
+&ldb {
status = "okay";
lvds0: lvds-channel@0 {
diff --git a/arch/arm/boot/dts/imx6q-b850v3.dts b/arch/arm/boot/dts/imx6q-b850v3.dts
index 984d00000403..167f7446722a 100644
--- a/arch/arm/boot/dts/imx6q-b850v3.dts
+++ b/arch/arm/boot/dts/imx6q-b850v3.dts
@@ -51,25 +51,20 @@
chosen {
stdout-path = &uart3;
};
+};
- panel-lvds0 {
- compatible = "auo,b133htn01";
- backlight = <&backlight_lvds>;
- ddc-i2c-bus = <&mux2_i2c2>;
-
- port {
- panel_in_lvds0: endpoint {
- remote-endpoint = <&lvds0_out>;
- };
- };
- };
+&clks {
+ assigned-clocks = <&clks IMX6QDL_CLK_LDB_DI0_SEL>,
+ <&clks IMX6QDL_CLK_LDB_DI1_SEL>,
+ <&clks IMX6QDL_CLK_IPU1_DI0_PRE_SEL>,
+ <&clks IMX6QDL_CLK_IPU1_DI1_PRE_SEL>;
+ assigned-clock-parents = <&clks IMX6QDL_CLK_PLL5_VIDEO_DIV>,
+ <&clks IMX6QDL_CLK_PLL5_VIDEO_DIV>,
+ <&clks IMX6QDL_CLK_PLL2_PFD2_396M>,
+ <&clks IMX6QDL_CLK_PLL2_PFD2_396M>;
};
&ldb {
- assigned-clocks = <&clks IMX6QDL_CLK_LDB_DI0_SEL>,
- <&clks IMX6QDL_CLK_LDB_DI1_SEL>;
- assigned-clock-parents = <&clks IMX6QDL_CLK_PLL3_USB_OTG>,
- <&clks IMX6QDL_CLK_PLL3_USB_OTG>;
fsl,dual-channel;
status = "okay";
@@ -77,14 +72,6 @@
fsl,data-mapping = "spwg";
fsl,data-width = <24>;
status = "okay";
-
- port@4 {
- reg = <4>;
-
- lvds0_out: endpoint {
- remote-endpoint = <&panel_in_lvds0>;
- };
- };
};
};
diff --git a/arch/arm/boot/dts/imx6q-ba16.dtsi b/arch/arm/boot/dts/imx6q-ba16.dtsi
index 8f6e6035f3f7..f7e17e2004ac 100644
--- a/arch/arm/boot/dts/imx6q-ba16.dtsi
+++ b/arch/arm/boot/dts/imx6q-ba16.dtsi
@@ -323,6 +323,8 @@
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_pcie>;
reset-gpio = <&gpio7 12 GPIO_ACTIVE_HIGH>;
+ fsl,tx-swing-full = <103>;
+ fsl,tx-swing-low = <103>;
status = "okay";
};
@@ -335,7 +337,7 @@
&pwm2 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_pwm2>;
- status = "okay";
+ status = "disabled";
};
&sata {
@@ -390,7 +392,6 @@
pinctrl-0 = <&pinctrl_usdhc3 &pinctrl_usdhc3_reset>;
bus-width = <8>;
vmmc-supply = <&vdd_bperi>;
- vqmmc-supply = <&vdd_bio>;
non-removable;
keep-power-in-suspend;
status = "okay";
@@ -399,6 +400,7 @@
&wdog1 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_wdog>;
+ fsl,ext-reset-output;
};
&iomuxc {
diff --git a/arch/arm/boot/dts/imx6q-gw5400-a.dts b/arch/arm/boot/dts/imx6q-gw5400-a.dts
index a51834e1dd27..0511137d1e23 100644
--- a/arch/arm/boot/dts/imx6q-gw5400-a.dts
+++ b/arch/arm/boot/dts/imx6q-gw5400-a.dts
@@ -327,7 +327,7 @@
codec: sgtl5000@0a {
compatible = "fsl,sgtl5000";
reg = <0x0a>;
- clocks = <&clks 201>;
+ clocks = <&clks IMX6QDL_CLK_CKO>;
VDDA-supply = <&sw4_reg>;
VDDIO-supply = <&reg_3p3v>;
};
diff --git a/arch/arm/boot/dts/imx6q-marsboard.dts b/arch/arm/boot/dts/imx6q-marsboard.dts
new file mode 100644
index 000000000000..3f8013c85fb9
--- /dev/null
+++ b/arch/arm/boot/dts/imx6q-marsboard.dts
@@ -0,0 +1,403 @@
+/*
+ * Copyright (C) 2016 Sergio Prado (sergio.prado@e-labworks.com)
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
 + * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
 + * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
 + * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
 + * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "imx6q.dtsi"
+#include <dt-bindings/gpio/gpio.h>
+
+/ {
+ model = "Embest MarS Board i.MX6Dual";
+ compatible = "embest,imx6q-marsboard", "fsl,imx6q";
+
+ memory {
+ reg = <0x10000000 0x40000000>;
+ };
+
+ reg_3p3v: regulator-3p3v {
+ compatible = "regulator-fixed";
+ regulator-name = "3P3V";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ };
+
+ reg_usb_otg_vbus: regulator-usb-otg-vbus {
+ compatible = "regulator-fixed";
+ regulator-name = "usb_otg_vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ gpio = <&gpio3 22 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ };
+
+ leds {
+ compatible = "gpio-leds";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_led>;
+
+ user1 {
+ label = "imx6:green:user1";
+ gpios = <&gpio5 2 GPIO_ACTIVE_LOW>;
+ default-state = "off";
+ linux,default-trigger = "heartbeat";
+ };
+
+ user2 {
+ label = "imx6:green:user2";
+ gpios = <&gpio3 28 GPIO_ACTIVE_LOW>;
+ default-state = "off";
+ };
+ };
+};
+
+&audmux {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_audmux>;
+ status = "okay";
+};
+
+&ecspi1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_ecspi1>;
+ cs-gpios = <&gpio2 30 GPIO_ACTIVE_LOW>;
+ fsl,spi-num-chipselects = <1>;
+ status = "okay";
+
+ m25p80@0 {
+ compatible = "microchip,sst25vf016b";
+ spi-max-frequency = <20000000>;
+ reg = <0>;
+ };
+};
+
+&fec {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_enet>;
+ phy-mode = "rgmii";
+ phy-reset-gpios = <&gpio3 31 GPIO_ACTIVE_LOW>;
+ status = "okay";
+};
+
+&hdmi {
+ ddc-i2c-bus = <&i2c2>;
+ status = "okay";
+};
+
+&i2c1 {
+ clock-frequency = <100000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c1>;
+ status = "okay";
+};
+
+&i2c2 {
+ clock-frequency = <100000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c2>;
+ status = "okay";
+};
+
+&i2c3 {
+ clock-frequency = <100000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c3>;
+ status = "okay";
+};
+
+&pwm1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_pwm1>;
+ status = "okay";
+};
+
+&pwm2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_pwm2>;
+ status = "okay";
+};
+
+&pwm3 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_pwm3>;
+ status = "okay";
+};
+
+&pwm4 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_pwm4>;
+ status = "okay";
+};
+
+&uart1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart1>;
+ status = "okay";
+};
+
+&uart2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart2>;
+ status = "okay";
+};
+
+&uart3 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart3>;
+ status = "okay";
+};
+
+&uart4 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart4>;
+ status = "okay";
+};
+
+&uart5 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart5>;
+ status = "okay";
+};
+
+&usbh1 {
+ dr_mode = "host";
+ disable-over-current;
+ status = "okay";
+};
+
+&usbotg {
+ vbus-supply = <&reg_usb_otg_vbus>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usbotg>;
+ dr_mode = "otg";
+ disable-over-current;
+ status = "okay";
+};
+
+&usdhc2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usdhc2>;
+ vmmc-supply = <&reg_3p3v>;
+ cd-gpios = <&gpio1 4 GPIO_ACTIVE_LOW>;
+ wp-gpios = <&gpio1 2 GPIO_ACTIVE_HIGH>;
+ status = "okay";
+};
+
+&usdhc3 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usdhc3>;
+ vmmc-supply = <&reg_3p3v>;
+ non-removable;
+ status = "okay";
+};
+
+&iomuxc {
+
+ pinctrl_audmux: audmuxgrp {
+ fsl,pins = <
+ MX6QDL_PAD_CSI0_DAT7__AUD3_RXD 0x130b0
+ MX6QDL_PAD_CSI0_DAT4__AUD3_TXC 0x130b0
+ MX6QDL_PAD_CSI0_DAT5__AUD3_TXD 0x110b0
+ MX6QDL_PAD_CSI0_DAT6__AUD3_TXFS 0x130b0
+ MX6QDL_PAD_GPIO_0__CCM_CLKO1 0x130b0 /* CAM_MCLK */
+ >;
+ };
+
+ pinctrl_ecspi1: ecspi1grp {
+ fsl,pins = <
+ MX6QDL_PAD_EIM_D16__ECSPI1_SCLK 0x100b1
+ MX6QDL_PAD_EIM_D17__ECSPI1_MISO 0x100b1
+ MX6QDL_PAD_EIM_D18__ECSPI1_MOSI 0x100b1
+ MX6QDL_PAD_EIM_EB2__GPIO2_IO30 0x000b1 /* CS0 */
+ >;
+ };
+
+ pinctrl_enet: enetgrp {
+ fsl,pins = <
+ MX6QDL_PAD_ENET_MDIO__ENET_MDIO 0x1b0b0
+ MX6QDL_PAD_ENET_MDC__ENET_MDC 0x1b0b0
+ MX6QDL_PAD_RGMII_TXC__RGMII_TXC 0x1b0b0
+ MX6QDL_PAD_RGMII_TD0__RGMII_TD0 0x1b0b0
+ MX6QDL_PAD_RGMII_TD1__RGMII_TD1 0x1b0b0
+ MX6QDL_PAD_RGMII_TD2__RGMII_TD2 0x1b0b0
+ MX6QDL_PAD_RGMII_TD3__RGMII_TD3 0x1b0b0
+ MX6QDL_PAD_RGMII_TX_CTL__RGMII_TX_CTL 0x1b0b0
+ /* AR8035 CLK_25M --> ENET_REF_CLK (V22) */
+ MX6QDL_PAD_ENET_REF_CLK__ENET_TX_CLK 0x0a0b1
+ /* AR8035 pin strapping: IO voltage: pull up */
+ MX6QDL_PAD_RGMII_RXC__RGMII_RXC 0x1b0b0
+ /* AR8035 pin strapping: PHYADDR#0: pull down */
+ MX6QDL_PAD_RGMII_RD0__RGMII_RD0 0x130b0
+ /* AR8035 pin strapping: PHYADDR#1: pull down */
+ MX6QDL_PAD_RGMII_RD1__RGMII_RD1 0x130b0
+ /* AR8035 pin strapping: MODE#1: pull up */
+ MX6QDL_PAD_RGMII_RD2__RGMII_RD2 0x1b0b0
+ /* AR8035 pin strapping: MODE#3: pull up */
+ MX6QDL_PAD_RGMII_RD3__RGMII_RD3 0x1b0b0
+ /* AR8035 pin strapping: MODE#0: pull down */
+ MX6QDL_PAD_RGMII_RX_CTL__RGMII_RX_CTL 0x130b0
+ /* GPIO16 -> AR8035 25MHz */
+ MX6QDL_PAD_GPIO_16__ENET_REF_CLK 0x4001b0a8
+ /* RGMII_nRST */
+ MX6QDL_PAD_EIM_D31__GPIO3_IO31 0x130b0
+ /* AR8035 interrupt */
+ MX6QDL_PAD_ENET_TX_EN__GPIO1_IO28 0x180b0
+ >;
+ };
+
+ pinctrl_i2c1: i2c1grp {
+ fsl,pins = <
+ MX6QDL_PAD_CSI0_DAT8__I2C1_SDA 0x4001b8b1
+ MX6QDL_PAD_CSI0_DAT9__I2C1_SCL 0x4001b8b1
+ >;
+ };
+
+ pinctrl_i2c2: i2c2grp {
+ fsl,pins = <
+ MX6QDL_PAD_KEY_COL3__I2C2_SCL 0x4001b8b1
+ MX6QDL_PAD_KEY_ROW3__I2C2_SDA 0x4001b8b1
+ >;
+ };
+
+ pinctrl_i2c3: i2c3grp {
+ fsl,pins = <
+ MX6QDL_PAD_GPIO_5__I2C3_SCL 0x4001b8b1
+ MX6QDL_PAD_GPIO_6__I2C3_SDA 0x4001b8b1
+ >;
+ };
+
+ pinctrl_led: ledgrp {
+ fsl,pins = <
+ MX6QDL_PAD_EIM_A25__GPIO5_IO02 0x1b0b1 /* LED1 */
+ MX6QDL_PAD_EIM_D28__GPIO3_IO28 0x1b0b1 /* LED2 */
+ >;
+ };
+
+ pinctrl_pwm1: pwm1grp {
+ fsl,pins = <
+ MX6QDL_PAD_DISP0_DAT8__PWM1_OUT 0x1b0b1
+ >;
+ };
+
+ pinctrl_pwm2: pwm2grp {
+ fsl,pins = <
+ MX6QDL_PAD_DISP0_DAT9__PWM2_OUT 0x1b0b1
+ >;
+ };
+
+ pinctrl_pwm3: pwm3grp {
+ fsl,pins = <
+ MX6QDL_PAD_SD1_DAT1__PWM3_OUT 0x1b0b1
+ >;
+ };
+
+ pinctrl_pwm4: pwm4grp {
+ fsl,pins = <
+ MX6QDL_PAD_SD1_CMD__PWM4_OUT 0x1b0b1
+ >;
+ };
+
+ pinctrl_uart1: uart1grp {
+ fsl,pins = <
+ MX6QDL_PAD_CSI0_DAT10__UART1_TX_DATA 0x1b0b1
+ MX6QDL_PAD_CSI0_DAT11__UART1_RX_DATA 0x1b0b1
+ >;
+ };
+
+ pinctrl_uart2: uart2grp {
+ fsl,pins = <
+ MX6QDL_PAD_EIM_D26__UART2_TX_DATA 0x1b0b1
+ MX6QDL_PAD_EIM_D27__UART2_RX_DATA 0x1b0b1
+ >;
+ };
+
+ pinctrl_uart3: uart3grp {
+ fsl,pins = <
+ MX6QDL_PAD_EIM_D24__UART3_TX_DATA 0x1b0b1
+ MX6QDL_PAD_EIM_D25__UART3_RX_DATA 0x1b0b1
+ >;
+ };
+
+ pinctrl_uart4: uart4grp {
+ fsl,pins = <
+ MX6QDL_PAD_KEY_COL0__UART4_TX_DATA 0x1b0b1
+ MX6QDL_PAD_KEY_ROW0__UART4_RX_DATA 0x1b0b1
+ >;
+ };
+
+ pinctrl_uart5: uart5grp {
+ fsl,pins = <
+ MX6QDL_PAD_KEY_COL1__UART5_TX_DATA 0x1b0b1
+ MX6QDL_PAD_KEY_ROW1__UART5_RX_DATA 0x1b0b1
+ >;
+ };
+
+ pinctrl_usbotg: usbotggrp {
+ fsl,pins = <
+ MX6QDL_PAD_ENET_RX_ER__USB_OTG_ID 0x17059
+ MX6QDL_PAD_EIM_D21__USB_OTG_OC 0x1b0b0
+ MX6QDL_PAD_EIM_D22__GPIO3_IO22 0x000b0 /* USB OTG POWER ENABLE */
+ >;
+ };
+
+ pinctrl_usdhc2: usdhc2grp {
+ fsl,pins = <
+ MX6QDL_PAD_SD2_CMD__SD2_CMD 0x17059
+ MX6QDL_PAD_SD2_CLK__SD2_CLK 0x10059
+ MX6QDL_PAD_SD2_DAT0__SD2_DATA0 0x17059
+ MX6QDL_PAD_SD2_DAT1__SD2_DATA1 0x17059
+ MX6QDL_PAD_SD2_DAT2__SD2_DATA2 0x17059
+ MX6QDL_PAD_SD2_DAT3__SD2_DATA3 0x17059
+ MX6QDL_PAD_GPIO_4__GPIO1_IO04 0x1b0b0 /* CD */
+ MX6QDL_PAD_GPIO_2__GPIO1_IO02 0x1f0b0 /* WP */
+ >;
+ };
+
+ pinctrl_usdhc3: usdhc3grp {
+ fsl,pins = <
+ MX6QDL_PAD_SD3_CMD__SD3_CMD 0x17009
+ MX6QDL_PAD_SD3_CLK__SD3_CLK 0x10009
+ MX6QDL_PAD_SD3_DAT0__SD3_DATA0 0x17009
+ MX6QDL_PAD_SD3_DAT1__SD3_DATA1 0x17009
+ MX6QDL_PAD_SD3_DAT2__SD3_DATA2 0x17009
+ MX6QDL_PAD_SD3_DAT3__SD3_DATA3 0x17009
+ MX6QDL_PAD_SD3_RST__SD3_RESET 0x17009
+ >;
+ };
+};
diff --git a/arch/arm/boot/dts/imx6q-tbs2910.dts b/arch/arm/boot/dts/imx6q-tbs2910.dts
index 0da81bc2c68a..1926b1348a62 100644
--- a/arch/arm/boot/dts/imx6q-tbs2910.dts
+++ b/arch/arm/boot/dts/imx6q-tbs2910.dts
@@ -141,7 +141,7 @@
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_enet>;
phy-mode = "rgmii";
- phy-reset-gpios = <&gpio1 25 GPIO_ACTIVE_HIGH>;
+ phy-reset-gpios = <&gpio1 25 GPIO_ACTIVE_LOW>;
status = "okay";
};
@@ -159,7 +159,7 @@
status = "okay";
sgtl5000: sgtl5000@0a {
- clocks = <&clks 201>;
+ clocks = <&clks IMX6QDL_CLK_CKO>;
compatible = "fsl,sgtl5000";
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_sgtl5000>;
diff --git a/arch/arm/boot/dts/imx6q-tx6q-1010-comtft.dts b/arch/arm/boot/dts/imx6q-tx6q-1010-comtft.dts
index b18fae10b2e3..65e95ae7509a 100644
--- a/arch/arm/boot/dts/imx6q-tx6q-1010-comtft.dts
+++ b/arch/arm/boot/dts/imx6q-tx6q-1010-comtft.dts
@@ -1,12 +1,42 @@
/*
- * Copyright 2014 Lothar Waßmann <LW@KARO-electronics.de>
+ * Copyright 2014-2016 Lothar Waßmann <LW@KARO-electronics.de>
*
- * The code contained herein is licensed under the GNU General Public
- * License. You may obtain a copy of the GNU General Public License
- * Version 2 at the following locations:
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
*
- * http://www.opensource.org/licenses/gpl-license.html
- * http://www.gnu.org/copyleft/gpl.html
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
*/
/dts-v1/;
diff --git a/arch/arm/boot/dts/imx6q-tx6q-1010.dts b/arch/arm/boot/dts/imx6q-tx6q-1010.dts
index b58ec9c966c8..20cd0e7b3e21 100644
--- a/arch/arm/boot/dts/imx6q-tx6q-1010.dts
+++ b/arch/arm/boot/dts/imx6q-tx6q-1010.dts
@@ -1,12 +1,42 @@
/*
- * Copyright 2014 Lothar Waßmann <LW@KARO-electronics.de>
+ * Copyright 2014-2016 Lothar Waßmann <LW@KARO-electronics.de>
*
- * The code contained herein is licensed under the GNU General Public
- * License. You may obtain a copy of the GNU General Public License
- * Version 2 at the following locations:
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
*
- * http://www.opensource.org/licenses/gpl-license.html
- * http://www.gnu.org/copyleft/gpl.html
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
*/
/dts-v1/;
diff --git a/arch/arm/boot/dts/imx6q-tx6q-1020-comtft.dts b/arch/arm/boot/dts/imx6q-tx6q-1020-comtft.dts
index 0bb9a9de62a9..9ed243b704ff 100644
--- a/arch/arm/boot/dts/imx6q-tx6q-1020-comtft.dts
+++ b/arch/arm/boot/dts/imx6q-tx6q-1020-comtft.dts
@@ -1,12 +1,42 @@
/*
- * Copyright 2014 Lothar Waßmann <LW@KARO-electronics.de>
+ * Copyright 2014-2016 Lothar Waßmann <LW@KARO-electronics.de>
*
- * The code contained herein is licensed under the GNU General Public
- * License. You may obtain a copy of the GNU General Public License
- * Version 2 at the following locations:
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
*
- * http://www.opensource.org/licenses/gpl-license.html
- * http://www.gnu.org/copyleft/gpl.html
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
*/
/dts-v1/;
@@ -94,22 +124,6 @@
status = "disabled";
};
-&iomuxc {
- imx6qdl-tx6 {
- pinctrl_usdhc4: usdhc4grp {
- fsl,pins = <
- MX6QDL_PAD_SD4_CMD__SD4_CMD 0x070b1
- MX6QDL_PAD_SD4_CLK__SD4_CLK 0x070b1
- MX6QDL_PAD_SD4_DAT0__SD4_DATA0 0x070b1
- MX6QDL_PAD_SD4_DAT1__SD4_DATA1 0x070b1
- MX6QDL_PAD_SD4_DAT2__SD4_DATA2 0x070b1
- MX6QDL_PAD_SD4_DAT3__SD4_DATA3 0x070b1
- MX6QDL_PAD_NANDF_ALE__SD4_RESET 0x0b0b1
- >;
- };
- };
-};
-
&ipu1_di0_disp0 {
remote-endpoint = <&display0_in>;
};
@@ -134,3 +148,17 @@
fsl,wp-controller;
status = "okay";
};
+
+&iomuxc {
+ pinctrl_usdhc4: usdhc4grp {
+ fsl,pins = <
+ MX6QDL_PAD_SD4_CMD__SD4_CMD 0x070b1
+ MX6QDL_PAD_SD4_CLK__SD4_CLK 0x070b1
+ MX6QDL_PAD_SD4_DAT0__SD4_DATA0 0x070b1
+ MX6QDL_PAD_SD4_DAT1__SD4_DATA1 0x070b1
+ MX6QDL_PAD_SD4_DAT2__SD4_DATA2 0x070b1
+ MX6QDL_PAD_SD4_DAT3__SD4_DATA3 0x070b1
+ MX6QDL_PAD_NANDF_ALE__SD4_RESET 0x0b0b1
+ >;
+ };
+};
diff --git a/arch/arm/boot/dts/imx6q-tx6q-1020.dts b/arch/arm/boot/dts/imx6q-tx6q-1020.dts
index b96d80a35d39..347b531d3763 100644
--- a/arch/arm/boot/dts/imx6q-tx6q-1020.dts
+++ b/arch/arm/boot/dts/imx6q-tx6q-1020.dts
@@ -1,12 +1,42 @@
/*
- * Copyright 2014 Lothar Waßmann <LW@KARO-electronics.de>
+ * Copyright 2014-2016 Lothar Waßmann <LW@KARO-electronics.de>
*
- * The code contained herein is licensed under the GNU General Public
- * License. You may obtain a copy of the GNU General Public License
- * Version 2 at the following locations:
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
*
- * http://www.opensource.org/licenses/gpl-license.html
- * http://www.gnu.org/copyleft/gpl.html
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
*/
/dts-v1/;
@@ -180,22 +210,6 @@
status = "disabled";
};
-&iomuxc {
- imx6qdl-tx6 {
- pinctrl_usdhc4: usdhc4grp {
- fsl,pins = <
- MX6QDL_PAD_SD4_CMD__SD4_CMD 0x070b1
- MX6QDL_PAD_SD4_CLK__SD4_CLK 0x070b1
- MX6QDL_PAD_SD4_DAT0__SD4_DATA0 0x070b1
- MX6QDL_PAD_SD4_DAT1__SD4_DATA1 0x070b1
- MX6QDL_PAD_SD4_DAT2__SD4_DATA2 0x070b1
- MX6QDL_PAD_SD4_DAT3__SD4_DATA3 0x070b1
- MX6QDL_PAD_NANDF_ALE__SD4_RESET 0x0b0b1
- >;
- };
- };
-};
-
&ipu1_di0_disp0 {
remote-endpoint = <&display0_in>;
};
@@ -208,3 +222,17 @@
fsl,wp-controller;
status = "okay";
};
+
+&iomuxc {
+ pinctrl_usdhc4: usdhc4grp {
+ fsl,pins = <
+ MX6QDL_PAD_SD4_CMD__SD4_CMD 0x070b1
+ MX6QDL_PAD_SD4_CLK__SD4_CLK 0x070b1
+ MX6QDL_PAD_SD4_DAT0__SD4_DATA0 0x070b1
+ MX6QDL_PAD_SD4_DAT1__SD4_DATA1 0x070b1
+ MX6QDL_PAD_SD4_DAT2__SD4_DATA2 0x070b1
+ MX6QDL_PAD_SD4_DAT3__SD4_DATA3 0x070b1
+ MX6QDL_PAD_NANDF_ALE__SD4_RESET 0x0b0b1
+ >;
+ };
+};
diff --git a/arch/arm/boot/dts/imx6q-tx6q-1036.dts b/arch/arm/boot/dts/imx6q-tx6q-1036.dts
new file mode 100644
index 000000000000..7c152e32758c
--- /dev/null
+++ b/arch/arm/boot/dts/imx6q-tx6q-1036.dts
@@ -0,0 +1,252 @@
+/*
+ * Copyright 2014-2016 Lothar Waßmann <LW@KARO-electronics.de>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "imx6q.dtsi"
+#include "imx6qdl-tx6.dtsi"
+
+/ {
+ model = "Ka-Ro electronics TX6Q-1036 Module";
+ compatible = "karo,imx6q-tx6q", "fsl,imx6q";
+
+ aliases {
+ display = &display;
+ };
+
+ backlight: backlight {
+ compatible = "pwm-backlight";
+ pwms = <&pwm2 0 500000 PWM_POLARITY_INVERTED>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_lcd0_pwr>;
+ enable-gpios = <&gpio3 29 GPIO_ACTIVE_HIGH>;
+ power-supply = <&reg_lcd1_pwr>;
+ /*
+ * a poor man's way to create a 1:1 relationship between
+ * the PWM value and the actual duty cycle
+ */
+ brightness-levels = < 0 1 2 3 4 5 6 7 8 9
+ 10 11 12 13 14 15 16 17 18 19
+ 20 21 22 23 24 25 26 27 28 29
+ 30 31 32 33 34 35 36 37 38 39
+ 40 41 42 43 44 45 46 47 48 49
+ 50 51 52 53 54 55 56 57 58 59
+ 60 61 62 63 64 65 66 67 68 69
+ 70 71 72 73 74 75 76 77 78 79
+ 80 81 82 83 84 85 86 87 88 89
+ 90 91 92 93 94 95 96 97 98 99
+ 100>;
+ default-brightness-level = <50>;
+ };
+
+ display: display@di0 {
+ compatible = "fsl,imx-parallel-display";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_disp0_2>;
+ interface-pix-fmt = "rgb24";
+ status = "okay";
+
+ port {
+ display0_in: endpoint {
+ remote-endpoint = <&ipu1_di0_disp0>;
+ };
+ };
+
+ display-timings {
+ native-mode = <&vga>;
+
+ vga: VGA {
+ clock-frequency = <25200000>;
+ hactive = <640>;
+ vactive = <480>;
+ hback-porch = <48>;
+ hsync-len = <96>;
+ hfront-porch = <16>;
+ vback-porch = <31>;
+ vsync-len = <2>;
+ vfront-porch = <12>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ETV570 {
+ clock-frequency = <25200000>;
+ hactive = <640>;
+ vactive = <480>;
+ hback-porch = <114>;
+ hsync-len = <30>;
+ hfront-porch = <16>;
+ vback-porch = <32>;
+ vsync-len = <3>;
+ vfront-porch = <10>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ET0350 {
+ clock-frequency = <6413760>;
+ hactive = <320>;
+ vactive = <240>;
+ hback-porch = <34>;
+ hsync-len = <34>;
+ hfront-porch = <20>;
+ vback-porch = <15>;
+ vsync-len = <3>;
+ vfront-porch = <4>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ET0430 {
+ clock-frequency = <9009000>;
+ hactive = <480>;
+ vactive = <272>;
+ hback-porch = <2>;
+ hsync-len = <41>;
+ hfront-porch = <2>;
+ vback-porch = <2>;
+ vsync-len = <10>;
+ vfront-porch = <2>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <1>;
+ };
+
+ ET0500 {
+ clock-frequency = <33264000>;
+ hactive = <800>;
+ vactive = <480>;
+ hback-porch = <88>;
+ hsync-len = <128>;
+ hfront-porch = <40>;
+ vback-porch = <33>;
+ vsync-len = <2>;
+ vfront-porch = <10>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ET0700 { /* same as ET0500 */
+ clock-frequency = <33264000>;
+ hactive = <800>;
+ vactive = <480>;
+ hback-porch = <88>;
+ hsync-len = <128>;
+ hfront-porch = <40>;
+ vback-porch = <33>;
+ vsync-len = <2>;
+ vfront-porch = <10>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ETQ570 {
+ clock-frequency = <6596040>;
+ hactive = <320>;
+ vactive = <240>;
+ hback-porch = <38>;
+ hsync-len = <30>;
+ hfront-porch = <30>;
+ vback-porch = <16>;
+ vsync-len = <3>;
+ vfront-porch = <4>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+ };
+ };
+};
+
+&ds1339 {
+ status = "disabled";
+};
+
+&gpmi {
+ status = "disabled";
+};
+
+&ipu1_di0_disp0 {
+ remote-endpoint = <&display0_in>;
+};
+
+&ipu2 {
+ status = "disabled";
+};
+
+&reg_lcd0_pwr {
+ status = "disabled";
+};
+
+&usdhc4 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usdhc4>;
+ bus-width = <4>;
+ non-removable;
+ no-1-8-v;
+ fsl,wp-controller;
+ status = "okay";
+};
+
+&iomuxc {
+ pinctrl_usdhc4: usdhc4grp {
+ fsl,pins = <
+ MX6QDL_PAD_SD4_CMD__SD4_CMD 0x070b1
+ MX6QDL_PAD_SD4_CLK__SD4_CLK 0x070b1
+ MX6QDL_PAD_SD4_DAT0__SD4_DATA0 0x070b1
+ MX6QDL_PAD_SD4_DAT1__SD4_DATA1 0x070b1
+ MX6QDL_PAD_SD4_DAT2__SD4_DATA2 0x070b1
+ MX6QDL_PAD_SD4_DAT3__SD4_DATA3 0x070b1
+ MX6QDL_PAD_NANDF_ALE__SD4_RESET 0x0b0b1
+ >;
+ };
+};
diff --git a/arch/arm/boot/dts/imx6q-tx6q-1110.dts b/arch/arm/boot/dts/imx6q-tx6q-1110.dts
index 2792da93db1f..0433e220a931 100644
--- a/arch/arm/boot/dts/imx6q-tx6q-1110.dts
+++ b/arch/arm/boot/dts/imx6q-tx6q-1110.dts
@@ -1,12 +1,42 @@
/*
- * Copyright 2014 Lothar Waßmann <LW@KARO-electronics.de>
+ * Copyright 2014-2016 Lothar Waßmann <LW@KARO-electronics.de>
*
- * The code contained herein is licensed under the GNU General Public
- * License. You may obtain a copy of the GNU General Public License
- * Version 2 at the following locations:
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
*
- * http://www.opensource.org/licenses/gpl-license.html
- * http://www.gnu.org/copyleft/gpl.html
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
*/
/dts-v1/;
@@ -81,16 +111,6 @@
};
};
-&iomuxc {
- imx6q-tx6q-1110 {
- pinctrl_eeti: eetigrp {
- fsl,pins = <
- MX6QDL_PAD_EIM_D22__GPIO3_IO22 0x1b0b1 /* Interrupt */
- >;
- };
- };
-};
-
&kpp {
status = "disabled"; /* pad conflict with backlight1 PWM */
};
@@ -152,3 +172,11 @@
&sata {
status = "okay";
};
+
+&iomuxc {
+ pinctrl_eeti: eetigrp {
+ fsl,pins = <
+ MX6QDL_PAD_EIM_D22__GPIO3_IO22 0x1b0b1 /* Interrupt */
+ >;
+ };
+};
diff --git a/arch/arm/boot/dts/imx6q-tx6q-11x0-mb7.dts b/arch/arm/boot/dts/imx6q-tx6q-11x0-mb7.dts
new file mode 100644
index 000000000000..d78b129d01ea
--- /dev/null
+++ b/arch/arm/boot/dts/imx6q-tx6q-11x0-mb7.dts
@@ -0,0 +1,264 @@
+/*
+ * Copyright 2016 Lothar Waßmann <LW@KARO-electronics.de>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "imx6q.dtsi"
+#include "imx6qdl-tx6.dtsi"
+
+/ {
+ model = "Ka-Ro electronics TX6Q-1110/-1130 Module on MB7 baseboard";
+ compatible = "karo,imx6q-tx6q", "fsl,imx6q";
+
+ aliases {
+ display = &lvds0;
+ ipu1 = &ipu2;
+ lvds0 = &lvds0;
+ lvds1 = &lvds1;
+ };
+
+ backlight0: backlight0 {
+ compatible = "pwm-backlight";
+ pwms = <&pwm2 0 500000 PWM_POLARITY_INVERTED>;
+ power-supply = <&reg_lcd0_pwr>;
+ /*
+ * a poor man's way to create a 1:1 relationship between
+ * the PWM value and the actual duty cycle
+ */
+ brightness-levels = < 0 1 2 3 4 5 6 7 8 9
+ 10 11 12 13 14 15 16 17 18 19
+ 20 21 22 23 24 25 26 27 28 29
+ 30 31 32 33 34 35 36 37 38 39
+ 40 41 42 43 44 45 46 47 48 49
+ 50 51 52 53 54 55 56 57 58 59
+ 60 61 62 63 64 65 66 67 68 69
+ 70 71 72 73 74 75 76 77 78 79
+ 80 81 82 83 84 85 86 87 88 89
+ 90 91 92 93 94 95 96 97 98 99
+ 100>;
+ default-brightness-level = <50>;
+ };
+
+ backlight1: backlight1 {
+ compatible = "pwm-backlight";
+ pwms = <&pwm1 0 500000 PWM_POLARITY_INVERTED>;
+ power-supply = <&reg_lcd1_pwr>;
+ /*
+ * a poor man's way to create a 1:1 relationship between
+ * the PWM value and the actual duty cycle
+ */
+ brightness-levels = < 0 1 2 3 4 5 6 7 8 9
+ 10 11 12 13 14 15 16 17 18 19
+ 20 21 22 23 24 25 26 27 28 29
+ 30 31 32 33 34 35 36 37 38 39
+ 40 41 42 43 44 45 46 47 48 49
+ 50 51 52 53 54 55 56 57 58 59
+ 60 61 62 63 64 65 66 67 68 69
+ 70 71 72 73 74 75 76 77 78 79
+ 80 81 82 83 84 85 86 87 88 89
+ 90 91 92 93 94 95 96 97 98 99
+ 100>;
+ default-brightness-level = <50>;
+ };
+};
+
+&can1 {
+ status = "disabled";
+};
+
+&can2 {
+ xceiver-supply = <&reg_3v3>;
+};
+
+&i2c3 {
+ polytouch1: eeti@04 {
+ compatible = "eeti,egalax_ts";
+ reg = <0x04>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_eeti>;
+ interrupts-extended = <&gpio3 22 IRQ_TYPE_EDGE_FALLING>;
+ wakeup-gpios = <&gpio3 22 GPIO_ACTIVE_HIGH>;
+ wakeup-source;
+ };
+};
+
+&ipu2 {
+ status = "disabled";
+};
+
+&kpp {
+ status = "disabled"; /* pads partially clash with backlight1 PWM */
+};
+
+&ldb {
+ status = "okay";
+
+ lvds0: lvds-channel@0 {
+ fsl,data-mapping = "spwg";
+ fsl,data-width = <18>;
+ status = "okay";
+
+ display-timings {
+ native-mode = <&lvds0_timing1>;
+
+ lvds0_timing0: hsd100pxn1 {
+ clock-frequency = <65000000>;
+ hactive = <1024>;
+ vactive = <768>;
+ hback-porch = <220>;
+ hfront-porch = <40>;
+ vback-porch = <21>;
+ vfront-porch = <7>;
+ hsync-len = <60>;
+ vsync-len = <10>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <1>;
+ };
+
+ lvds0_timing1: VGA {
+ clock-frequency = <25200000>;
+ hactive = <640>;
+ vactive = <480>;
+ hback-porch = <48>;
+ hfront-porch = <16>;
+ vback-porch = <31>;
+ vfront-porch = <12>;
+ hsync-len = <96>;
+ vsync-len = <2>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ lvds0_timing2: nl12880bc20 {
+ clock-frequency = <71000000>;
+ hactive = <1280>;
+ vactive = <800>;
+ hback-porch = <50>;
+ hfront-porch = <50>;
+ vback-porch = <5>;
+ vfront-porch = <5>;
+ hsync-len = <60>;
+ vsync-len = <13>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <1>;
+ };
+ };
+ };
+
+ lvds1: lvds-channel@1 {
+ fsl,data-mapping = "spwg";
+ fsl,data-width = <18>;
+ status = "okay";
+
+ display-timings {
+ native-mode = <&lvds1_timing2>;
+
+ lvds1_timing0: hsd100pxn1 {
+ clock-frequency = <65000000>;
+ hactive = <1024>;
+ vactive = <768>;
+ hback-porch = <220>;
+ hfront-porch = <40>;
+ vback-porch = <21>;
+ vfront-porch = <7>;
+ hsync-len = <60>;
+ vsync-len = <10>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <1>;
+ };
+
+ lvds1_timing1: VGA {
+ clock-frequency = <25200000>;
+ hactive = <640>;
+ vactive = <480>;
+ hback-porch = <48>;
+ hfront-porch = <16>;
+ vback-porch = <31>;
+ vfront-porch = <12>;
+ hsync-len = <96>;
+ vsync-len = <2>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ lvds1_timing2: nl12880bc20 {
+ clock-frequency = <71000000>;
+ hactive = <1280>;
+ vactive = <800>;
+ hback-porch = <50>;
+ hfront-porch = <50>;
+ vback-porch = <5>;
+ vfront-porch = <5>;
+ hsync-len = <60>;
+ vsync-len = <13>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <1>;
+ };
+ };
+ };
+};
+
+&pwm1 {
+ status = "okay";
+};
+
+&sata {
+ status = "okay";
+};
+
+&iomuxc {
+ pinctrl_eeti: eetigrp {
+ fsl,pins = <
+ MX6QDL_PAD_EIM_D22__GPIO3_IO22 0x1b0b1 /* Interrupt */
+ >;
+ };
+};
diff --git a/arch/arm/boot/dts/imx6q.dtsi b/arch/arm/boot/dts/imx6q.dtsi
index cd10c8de1904..c30c8368cae0 100644
--- a/arch/arm/boot/dts/imx6q.dtsi
+++ b/arch/arm/boot/dts/imx6q.dtsi
@@ -154,22 +154,22 @@
#size-cells = <0>;
reg = <2>;
- ipu2_di0_disp0: endpoint@0 {
+ ipu2_di0_disp0: disp0-endpoint {
};
- ipu2_di0_hdmi: endpoint@1 {
+ ipu2_di0_hdmi: hdmi-endpoint {
remote-endpoint = <&hdmi_mux_2>;
};
- ipu2_di0_mipi: endpoint@2 {
+ ipu2_di0_mipi: mipi-endpoint {
remote-endpoint = <&mipi_mux_2>;
};
- ipu2_di0_lvds0: endpoint@3 {
+ ipu2_di0_lvds0: lvds0-endpoint {
remote-endpoint = <&lvds0_mux_2>;
};
- ipu2_di0_lvds1: endpoint@4 {
+ ipu2_di0_lvds1: lvds1-endpoint {
remote-endpoint = <&lvds1_mux_2>;
};
};
@@ -179,19 +179,19 @@
#size-cells = <0>;
reg = <3>;
- ipu2_di1_hdmi: endpoint@1 {
+ ipu2_di1_hdmi: hdmi-endpoint {
remote-endpoint = <&hdmi_mux_3>;
};
- ipu2_di1_mipi: endpoint@2 {
+ ipu2_di1_mipi: mipi-endpoint {
remote-endpoint = <&mipi_mux_3>;
};
- ipu2_di1_lvds0: endpoint@3 {
+ ipu2_di1_lvds0: lvds0-endpoint {
remote-endpoint = <&lvds0_mux_3>;
};
- ipu2_di1_lvds1: endpoint@4 {
+ ipu2_di1_lvds1: lvds1-endpoint {
remote-endpoint = <&lvds1_mux_3>;
};
};
diff --git a/arch/arm/boot/dts/imx6qdl-apalis.dtsi b/arch/arm/boot/dts/imx6qdl-apalis.dtsi
index b33e5a95a0f0..922b1dd06fda 100644
--- a/arch/arm/boot/dts/imx6qdl-apalis.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-apalis.dtsi
@@ -324,7 +324,7 @@
codec: sgtl5000@0a {
compatible = "fsl,sgtl5000";
reg = <0x0a>;
- clocks = <&clks 201>;
+ clocks = <&clks IMX6QDL_CLK_CKO>;
VDDA-supply = <&reg_2p5v>;
VDDIO-supply = <&reg_3p3v>;
};
diff --git a/arch/arm/boot/dts/imx6qdl-apf6dev.dtsi b/arch/arm/boot/dts/imx6qdl-apf6dev.dtsi
index a8f3500ee522..865c9a264a43 100644
--- a/arch/arm/boot/dts/imx6qdl-apf6dev.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-apf6dev.dtsi
@@ -213,7 +213,7 @@
codec: sgtl5000@0a {
compatible = "fsl,sgtl5000";
reg = <0x0a>;
- clocks = <&clks 201>;
+ clocks = <&clks IMX6QDL_CLK_CKO>;
VDDA-supply = <&reg_3p3v>;
VDDIO-supply = <&reg_3p3v>;
};
diff --git a/arch/arm/boot/dts/imx6qdl-gw52xx.dtsi b/arch/arm/boot/dts/imx6qdl-gw52xx.dtsi
index 8dd74e98ffd6..7191b84770b9 100644
--- a/arch/arm/boot/dts/imx6qdl-gw52xx.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-gw52xx.dtsi
@@ -244,7 +244,7 @@
codec: sgtl5000@0a {
compatible = "fsl,sgtl5000";
reg = <0x0a>;
- clocks = <&clks 201>;
+ clocks = <&clks IMX6QDL_CLK_CKO>;
VDDA-supply = <&reg_1p8v>;
VDDIO-supply = <&reg_3p3v>;
};
diff --git a/arch/arm/boot/dts/imx6qdl-gw53xx.dtsi b/arch/arm/boot/dts/imx6qdl-gw53xx.dtsi
index ec3fe7444e15..40d06b09deba 100644
--- a/arch/arm/boot/dts/imx6qdl-gw53xx.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-gw53xx.dtsi
@@ -237,7 +237,7 @@
codec: sgtl5000@0a {
compatible = "fsl,sgtl5000";
reg = <0x0a>;
- clocks = <&clks 201>;
+ clocks = <&clks IMX6QDL_CLK_CKO>;
VDDA-supply = <&reg_1p8v>;
VDDIO-supply = <&reg_3p3v>;
};
diff --git a/arch/arm/boot/dts/imx6qdl-gw54xx.dtsi b/arch/arm/boot/dts/imx6qdl-gw54xx.dtsi
index 367cc49eea0d..d6dbe2a88ee6 100644
--- a/arch/arm/boot/dts/imx6qdl-gw54xx.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-gw54xx.dtsi
@@ -328,7 +328,7 @@
codec: sgtl5000@0a {
compatible = "fsl,sgtl5000";
reg = <0x0a>;
- clocks = <&clks 201>;
+ clocks = <&clks IMX6QDL_CLK_CKO>;
VDDA-supply = <&sw4_reg>;
VDDIO-supply = <&reg_3p3v>;
};
diff --git a/arch/arm/boot/dts/imx6qdl-nit6xlite.dtsi b/arch/arm/boot/dts/imx6qdl-nit6xlite.dtsi
index 24d7d3f18464..e456b5cc1b03 100644
--- a/arch/arm/boot/dts/imx6qdl-nit6xlite.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-nit6xlite.dtsi
@@ -269,7 +269,7 @@
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_sgtl5000>;
reg = <0x0a>;
- clocks = <&clks 201>;
+ clocks = <&clks IMX6QDL_CLK_CKO>;
VDDA-supply = <&reg_2p5v>;
VDDIO-supply = <&reg_3p3v>;
};
diff --git a/arch/arm/boot/dts/imx6qdl-nitrogen6_max.dtsi b/arch/arm/boot/dts/imx6qdl-nitrogen6_max.dtsi
index dc74aa395ff5..657da6b6ccd2 100644
--- a/arch/arm/boot/dts/imx6qdl-nitrogen6_max.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-nitrogen6_max.dtsi
@@ -402,7 +402,7 @@
codec: sgtl5000@0a {
compatible = "fsl,sgtl5000";
reg = <0x0a>;
- clocks = <&clks 201>;
+ clocks = <&clks IMX6QDL_CLK_CKO>;
VDDA-supply = <&reg_2p5v>;
VDDIO-supply = <&reg_3p3v>;
};
diff --git a/arch/arm/boot/dts/imx6qdl-nitrogen6x.dtsi b/arch/arm/boot/dts/imx6qdl-nitrogen6x.dtsi
index c6c590d1e940..73915db704a0 100644
--- a/arch/arm/boot/dts/imx6qdl-nitrogen6x.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-nitrogen6x.dtsi
@@ -304,7 +304,7 @@
codec: sgtl5000@0a {
compatible = "fsl,sgtl5000";
reg = <0x0a>;
- clocks = <&clks 201>;
+ clocks = <&clks IMX6QDL_CLK_CKO>;
VDDA-supply = <&reg_2p5v>;
VDDIO-supply = <&reg_3p3v>;
};
diff --git a/arch/arm/boot/dts/imx6qdl-rex.dtsi b/arch/arm/boot/dts/imx6qdl-rex.dtsi
index a50356243888..cacf5933707d 100644
--- a/arch/arm/boot/dts/imx6qdl-rex.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-rex.dtsi
@@ -126,7 +126,7 @@
codec: sgtl5000@0a {
compatible = "fsl,sgtl5000";
reg = <0x0a>;
- clocks = <&clks 201>;
+ clocks = <&clks IMX6QDL_CLK_CKO>;
VDDA-supply = <&reg_3p3v>;
VDDIO-supply = <&reg_3p3v>;
};
diff --git a/arch/arm/boot/dts/imx6qdl-sabrelite.dtsi b/arch/arm/boot/dts/imx6qdl-sabrelite.dtsi
index 0f1aca450fe6..c47fe6c79b36 100644
--- a/arch/arm/boot/dts/imx6qdl-sabrelite.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-sabrelite.dtsi
@@ -290,7 +290,7 @@
codec: sgtl5000@0a {
compatible = "fsl,sgtl5000";
reg = <0x0a>;
- clocks = <&clks 201>;
+ clocks = <&clks IMX6QDL_CLK_CKO>;
VDDA-supply = <&reg_2p5v>;
VDDIO-supply = <&reg_3p3v>;
};
diff --git a/arch/arm/boot/dts/imx6qdl-sabresd.dtsi b/arch/arm/boot/dts/imx6qdl-sabresd.dtsi
index 0b5c4de74485..5248e7bd2b06 100644
--- a/arch/arm/boot/dts/imx6qdl-sabresd.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-sabresd.dtsi
@@ -115,7 +115,7 @@
mux-ext-port = <3>;
};
- backlight {
+ backlight_lvds: backlight-lvds {
compatible = "pwm-backlight";
pwms = <&pwm1 0 5000000>;
brightness-levels = <0 4 8 16 32 64 128 255>;
@@ -133,6 +133,17 @@
default-state = "on";
};
};
+
+ panel {
+ compatible = "hannstar,hsd100pxn1";
+ backlight = <&backlight_lvds>;
+
+ port {
+ panel_in: endpoint {
+ remote-endpoint = <&lvds0_out>;
+ };
+ };
+ };
};
&audmux {
@@ -509,18 +520,11 @@
fsl,data-width = <18>;
status = "okay";
- display-timings {
- native-mode = <&timing0>;
- timing0: hsd100pxn1 {
- clock-frequency = <65000000>;
- hactive = <1024>;
- vactive = <768>;
- hback-porch = <220>;
- hfront-porch = <40>;
- vback-porch = <21>;
- vfront-porch = <7>;
- hsync-len = <60>;
- vsync-len = <10>;
+ port@4 {
+ reg = <4>;
+
+ lvds0_out: endpoint {
+ remote-endpoint = <&panel_in>;
};
};
};
diff --git a/arch/arm/boot/dts/imx6qdl-tx6.dtsi b/arch/arm/boot/dts/imx6qdl-tx6.dtsi
index efd06b576f1d..39b85aef93e1 100644
--- a/arch/arm/boot/dts/imx6qdl-tx6.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-tx6.dtsi
@@ -1,12 +1,42 @@
/*
- * Copyright 2014 Lothar Waßmann <LW@KARO-electronics.de>
+ * Copyright 2014-2016 Lothar Waßmann <LW@KARO-electronics.de>
*
- * The code contained herein is licensed under the GNU General Public
- * License. You may obtain a copy of the GNU General Public License
- * Version 2 at the following locations:
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
*
- * http://www.opensource.org/licenses/gpl-license.html
- * http://www.gnu.org/copyleft/gpl.html
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
*/
#include <dt-bindings/gpio/gpio.h>
@@ -37,6 +67,7 @@
clocks {
#address-cells = <1>;
#size-cells = <0>;
+
mclk: clock@0 {
compatible = "fixed-clock";
reg = <0>;
@@ -61,109 +92,95 @@
user_led: user {
label = "Heartbeat";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_user_led>;
gpios = <&gpio2 20 GPIO_ACTIVE_HIGH>;
linux,default-trigger = "heartbeat";
};
};
- regulators {
- compatible = "simple-bus";
- #address-cells = <1>;
- #size-cells = <0>;
-
- reg_3v3_etn: regulator@0 {
- compatible = "regulator-fixed";
- reg = <0>;
- regulator-name = "3V3_ETN";
- regulator-min-microvolt = <3300000>;
- regulator-max-microvolt = <3300000>;
- pinctrl-names = "default";
- pinctrl-0 = <&pinctrl_etnphy_power>;
- gpio = <&gpio3 20 GPIO_ACTIVE_HIGH>;
- enable-active-high;
- };
+ reg_3v3_etn: regulator-3v3-etn {
+ compatible = "regulator-fixed";
+ regulator-name = "3V3_ETN";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_etnphy_power>;
+ gpio = <&gpio3 20 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ };
- reg_2v5: regulator@1 {
- compatible = "regulator-fixed";
- reg = <1>;
- regulator-name = "2V5";
- regulator-min-microvolt = <2500000>;
- regulator-max-microvolt = <2500000>;
- regulator-always-on;
- };
+ reg_2v5: regulator-2v5 {
+ compatible = "regulator-fixed";
+ regulator-name = "2V5";
+ regulator-min-microvolt = <2500000>;
+ regulator-max-microvolt = <2500000>;
+ regulator-always-on;
+ };
- reg_3v3: regulator@2 {
- compatible = "regulator-fixed";
- reg = <2>;
- regulator-name = "3V3";
- regulator-min-microvolt = <3300000>;
- regulator-max-microvolt = <3300000>;
- regulator-always-on;
- };
+ reg_3v3: regulator-3v3 {
+ compatible = "regulator-fixed";
+ regulator-name = "3V3";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ };
- reg_can_xcvr: regulator@3 {
- compatible = "regulator-fixed";
- reg = <3>;
- regulator-name = "CAN XCVR";
- regulator-min-microvolt = <3300000>;
- regulator-max-microvolt = <3300000>;
- pinctrl-names = "default";
- pinctrl-0 = <&pinctrl_flexcan_xcvr>;
- gpio = <&gpio4 21 GPIO_ACTIVE_HIGH>;
- enable-active-low;
- };
+ reg_can_xcvr: regulator-can-xcvr {
+ compatible = "regulator-fixed";
+ regulator-name = "CAN XCVR";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_flexcan_xcvr>;
+ gpio = <&gpio4 21 GPIO_ACTIVE_HIGH>;
+ enable-active-low;
+ };
- reg_lcd0_pwr: regulator@4 {
- compatible = "regulator-fixed";
- reg = <4>;
- regulator-name = "LCD0 POWER";
- regulator-min-microvolt = <3300000>;
- regulator-max-microvolt = <3300000>;
- pinctrl-names = "default";
- pinctrl-0 = <&pinctrl_lcd0_pwr>;
- gpio = <&gpio3 29 GPIO_ACTIVE_HIGH>;
- enable-active-high;
- regulator-boot-on;
- regulator-always-on;
- };
+ reg_lcd0_pwr: regulator-lcd0-pwr {
+ compatible = "regulator-fixed";
+ regulator-name = "LCD0 POWER";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_lcd0_pwr>;
+ gpio = <&gpio3 29 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ regulator-boot-on;
+ };
- reg_lcd1_pwr: regulator@5 {
- compatible = "regulator-fixed";
- reg = <5>;
- regulator-name = "LCD1 POWER";
- regulator-min-microvolt = <3300000>;
- regulator-max-microvolt = <3300000>;
- pinctrl-names = "default";
- pinctrl-0 = <&pinctrl_lcd1_pwr>;
- gpio = <&gpio2 31 GPIO_ACTIVE_HIGH>;
- enable-active-high;
- regulator-boot-on;
- regulator-always-on;
- };
+ reg_lcd1_pwr: regulator-lcd1-pwr {
+ compatible = "regulator-fixed";
+ regulator-name = "LCD1 POWER";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_lcd1_pwr>;
+ gpio = <&gpio2 31 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ regulator-boot-on;
+ };
- reg_usbh1_vbus: regulator@6 {
- compatible = "regulator-fixed";
- reg = <6>;
- regulator-name = "usbh1_vbus";
- regulator-min-microvolt = <5000000>;
- regulator-max-microvolt = <5000000>;
- pinctrl-names = "default";
- pinctrl-0 = <&pinctrl_usbh1_vbus>;
- gpio = <&gpio3 31 GPIO_ACTIVE_HIGH>;
- enable-active-high;
- };
+ reg_usbh1_vbus: regulator-usbh1-vbus {
+ compatible = "regulator-fixed";
+ regulator-name = "usbh1_vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usbh1_vbus>;
+ gpio = <&gpio3 31 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ };
- reg_usbotg_vbus: regulator@7 {
- compatible = "regulator-fixed";
- reg = <7>;
- regulator-name = "usbotg_vbus";
- regulator-min-microvolt = <5000000>;
- regulator-max-microvolt = <5000000>;
- pinctrl-names = "default";
- pinctrl-0 = <&pinctrl_usbotg_vbus>;
- gpio = <&gpio1 7 GPIO_ACTIVE_HIGH>;
- enable-active-high;
- };
+ reg_usbotg_vbus: regulator-usbotg-vbus {
+ compatible = "regulator-fixed";
+ regulator-name = "usbotg_vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usbotg_vbus>;
+ gpio = <&gpio1 7 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
};
sound {
@@ -209,7 +226,7 @@
&gpio2 30 GPIO_ACTIVE_HIGH
&gpio3 19 GPIO_ACTIVE_HIGH
>;
- status = "okay";
+ status = "disabled";
spidev0: spi@0 {
compatible = "spidev";
@@ -234,8 +251,22 @@
clock-names = "ipg", "ahb", "ptp", "enet_out";
phy-mode = "rmii";
phy-reset-gpios = <&gpio7 6 GPIO_ACTIVE_HIGH>;
+ phy-handle = <&etnphy>;
phy-supply = <&reg_3v3_etn>;
status = "okay";
+
+ mdio {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ etnphy: ethernet-phy@0 {
+ compatible = "ethernet-phy-ieee802.3-c22";
+ reg = <0>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_enet_mdio>;
+ interrupts-extended = <&gpio7 1 IRQ_TYPE_EDGE_FALLING>;
+ };
+ };
};
&gpmi {
@@ -301,310 +332,318 @@
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_hog>;
- imx6qdl-tx6 {
- pinctrl_hog: hoggrp {
- fsl,pins = <
- MX6QDL_PAD_EIM_A18__GPIO2_IO20 0x1b0b1 /* LED */
- MX6QDL_PAD_SD3_DAT2__GPIO7_IO06 0x1b0b1 /* ETN PHY RESET */
- MX6QDL_PAD_SD3_DAT4__GPIO7_IO01 0x1b0b1 /* ETN PHY INT */
- MX6QDL_PAD_EIM_A25__GPIO5_IO02 0x1b0b1 /* PWR BTN */
- >;
- };
+ pinctrl_hog: hoggrp {
+ fsl,pins = <
+ MX6QDL_PAD_SD3_DAT2__GPIO7_IO06 0x1b0b1 /* ETN PHY RESET */
+ MX6QDL_PAD_SD3_DAT4__GPIO7_IO01 0x1b0b1 /* ETN PHY INT */
+ MX6QDL_PAD_EIM_A25__GPIO5_IO02 0x1b0b1 /* PWR BTN */
+ >;
+ };
- pinctrl_audmux: audmuxgrp {
- fsl,pins = <
- MX6QDL_PAD_KEY_ROW1__AUD5_RXD 0x130b0 /* SSI1_RXD */
- MX6QDL_PAD_KEY_ROW0__AUD5_TXD 0x110b0 /* SSI1_TXD */
- MX6QDL_PAD_KEY_COL0__AUD5_TXC 0x130b0 /* SSI1_CLK */
- MX6QDL_PAD_KEY_COL1__AUD5_TXFS 0x130b0 /* SSI1_FS */
- >;
- };
+ pinctrl_audmux: audmuxgrp {
+ fsl,pins = <
+ MX6QDL_PAD_KEY_ROW1__AUD5_RXD 0x130b0 /* SSI1_RXD */
+ MX6QDL_PAD_KEY_ROW0__AUD5_TXD 0x110b0 /* SSI1_TXD */
+ MX6QDL_PAD_KEY_COL0__AUD5_TXC 0x130b0 /* SSI1_CLK */
+ MX6QDL_PAD_KEY_COL1__AUD5_TXFS 0x130b0 /* SSI1_FS */
+ >;
+ };
- pinctrl_disp0_1: disp0grp-1 {
- fsl,pins = <
- MX6QDL_PAD_DI0_DISP_CLK__IPU1_DI0_DISP_CLK 0x10
- MX6QDL_PAD_DI0_PIN15__IPU1_DI0_PIN15 0x10
- MX6QDL_PAD_DI0_PIN2__IPU1_DI0_PIN02 0x10
- MX6QDL_PAD_DI0_PIN3__IPU1_DI0_PIN03 0x10
- /* PAD DISP0_DAT0 is used for the Flexcan transceiver control */
- MX6QDL_PAD_DISP0_DAT1__IPU1_DISP0_DATA01 0x10
- MX6QDL_PAD_DISP0_DAT2__IPU1_DISP0_DATA02 0x10
- MX6QDL_PAD_DISP0_DAT3__IPU1_DISP0_DATA03 0x10
- MX6QDL_PAD_DISP0_DAT4__IPU1_DISP0_DATA04 0x10
- MX6QDL_PAD_DISP0_DAT5__IPU1_DISP0_DATA05 0x10
- MX6QDL_PAD_DISP0_DAT6__IPU1_DISP0_DATA06 0x10
- MX6QDL_PAD_DISP0_DAT7__IPU1_DISP0_DATA07 0x10
- MX6QDL_PAD_DISP0_DAT8__IPU1_DISP0_DATA08 0x10
- MX6QDL_PAD_DISP0_DAT9__IPU1_DISP0_DATA09 0x10
- MX6QDL_PAD_DISP0_DAT10__IPU1_DISP0_DATA10 0x10
- MX6QDL_PAD_DISP0_DAT11__IPU1_DISP0_DATA11 0x10
- MX6QDL_PAD_DISP0_DAT12__IPU1_DISP0_DATA12 0x10
- MX6QDL_PAD_DISP0_DAT13__IPU1_DISP0_DATA13 0x10
- MX6QDL_PAD_DISP0_DAT14__IPU1_DISP0_DATA14 0x10
- MX6QDL_PAD_DISP0_DAT15__IPU1_DISP0_DATA15 0x10
- MX6QDL_PAD_DISP0_DAT16__IPU1_DISP0_DATA16 0x10
- MX6QDL_PAD_DISP0_DAT17__IPU1_DISP0_DATA17 0x10
- MX6QDL_PAD_DISP0_DAT18__IPU1_DISP0_DATA18 0x10
- MX6QDL_PAD_DISP0_DAT19__IPU1_DISP0_DATA19 0x10
- MX6QDL_PAD_DISP0_DAT20__IPU1_DISP0_DATA20 0x10
- MX6QDL_PAD_DISP0_DAT21__IPU1_DISP0_DATA21 0x10
- MX6QDL_PAD_DISP0_DAT22__IPU1_DISP0_DATA22 0x10
- MX6QDL_PAD_DISP0_DAT23__IPU1_DISP0_DATA23 0x10
- >;
- };
+ pinctrl_disp0_1: disp0grp-1 {
+ fsl,pins = <
+ MX6QDL_PAD_DI0_DISP_CLK__IPU1_DI0_DISP_CLK 0x10
+ MX6QDL_PAD_DI0_PIN15__IPU1_DI0_PIN15 0x10
+ MX6QDL_PAD_DI0_PIN2__IPU1_DI0_PIN02 0x10
+ MX6QDL_PAD_DI0_PIN3__IPU1_DI0_PIN03 0x10
+ /* PAD DISP0_DAT0 is used for the Flexcan transceiver control */
+ MX6QDL_PAD_DISP0_DAT1__IPU1_DISP0_DATA01 0x10
+ MX6QDL_PAD_DISP0_DAT2__IPU1_DISP0_DATA02 0x10
+ MX6QDL_PAD_DISP0_DAT3__IPU1_DISP0_DATA03 0x10
+ MX6QDL_PAD_DISP0_DAT4__IPU1_DISP0_DATA04 0x10
+ MX6QDL_PAD_DISP0_DAT5__IPU1_DISP0_DATA05 0x10
+ MX6QDL_PAD_DISP0_DAT6__IPU1_DISP0_DATA06 0x10
+ MX6QDL_PAD_DISP0_DAT7__IPU1_DISP0_DATA07 0x10
+ MX6QDL_PAD_DISP0_DAT8__IPU1_DISP0_DATA08 0x10
+ MX6QDL_PAD_DISP0_DAT9__IPU1_DISP0_DATA09 0x10
+ MX6QDL_PAD_DISP0_DAT10__IPU1_DISP0_DATA10 0x10
+ MX6QDL_PAD_DISP0_DAT11__IPU1_DISP0_DATA11 0x10
+ MX6QDL_PAD_DISP0_DAT12__IPU1_DISP0_DATA12 0x10
+ MX6QDL_PAD_DISP0_DAT13__IPU1_DISP0_DATA13 0x10
+ MX6QDL_PAD_DISP0_DAT14__IPU1_DISP0_DATA14 0x10
+ MX6QDL_PAD_DISP0_DAT15__IPU1_DISP0_DATA15 0x10
+ MX6QDL_PAD_DISP0_DAT16__IPU1_DISP0_DATA16 0x10
+ MX6QDL_PAD_DISP0_DAT17__IPU1_DISP0_DATA17 0x10
+ MX6QDL_PAD_DISP0_DAT18__IPU1_DISP0_DATA18 0x10
+ MX6QDL_PAD_DISP0_DAT19__IPU1_DISP0_DATA19 0x10
+ MX6QDL_PAD_DISP0_DAT20__IPU1_DISP0_DATA20 0x10
+ MX6QDL_PAD_DISP0_DAT21__IPU1_DISP0_DATA21 0x10
+ MX6QDL_PAD_DISP0_DAT22__IPU1_DISP0_DATA22 0x10
+ MX6QDL_PAD_DISP0_DAT23__IPU1_DISP0_DATA23 0x10
+ >;
+ };
- pinctrl_disp0_2: disp0grp-2 {
- fsl,pins = <
- MX6QDL_PAD_DI0_DISP_CLK__IPU1_DI0_DISP_CLK 0x10
- MX6QDL_PAD_DI0_PIN15__IPU1_DI0_PIN15 0x10
- MX6QDL_PAD_DI0_PIN2__IPU1_DI0_PIN02 0x10
- MX6QDL_PAD_DI0_PIN3__IPU1_DI0_PIN03 0x10
- MX6QDL_PAD_DISP0_DAT0__IPU1_DISP0_DATA00 0x10
- MX6QDL_PAD_DISP0_DAT1__IPU1_DISP0_DATA01 0x10
- MX6QDL_PAD_DISP0_DAT2__IPU1_DISP0_DATA02 0x10
- MX6QDL_PAD_DISP0_DAT3__IPU1_DISP0_DATA03 0x10
- MX6QDL_PAD_DISP0_DAT4__IPU1_DISP0_DATA04 0x10
- MX6QDL_PAD_DISP0_DAT5__IPU1_DISP0_DATA05 0x10
- MX6QDL_PAD_DISP0_DAT6__IPU1_DISP0_DATA06 0x10
- MX6QDL_PAD_DISP0_DAT7__IPU1_DISP0_DATA07 0x10
- MX6QDL_PAD_DISP0_DAT8__IPU1_DISP0_DATA08 0x10
- MX6QDL_PAD_DISP0_DAT9__IPU1_DISP0_DATA09 0x10
- MX6QDL_PAD_DISP0_DAT10__IPU1_DISP0_DATA10 0x10
- MX6QDL_PAD_DISP0_DAT11__IPU1_DISP0_DATA11 0x10
- MX6QDL_PAD_DISP0_DAT12__IPU1_DISP0_DATA12 0x10
- MX6QDL_PAD_DISP0_DAT13__IPU1_DISP0_DATA13 0x10
- MX6QDL_PAD_DISP0_DAT14__IPU1_DISP0_DATA14 0x10
- MX6QDL_PAD_DISP0_DAT15__IPU1_DISP0_DATA15 0x10
- MX6QDL_PAD_DISP0_DAT16__IPU1_DISP0_DATA16 0x10
- MX6QDL_PAD_DISP0_DAT17__IPU1_DISP0_DATA17 0x10
- MX6QDL_PAD_DISP0_DAT18__IPU1_DISP0_DATA18 0x10
- MX6QDL_PAD_DISP0_DAT19__IPU1_DISP0_DATA19 0x10
- MX6QDL_PAD_DISP0_DAT20__IPU1_DISP0_DATA20 0x10
- MX6QDL_PAD_DISP0_DAT21__IPU1_DISP0_DATA21 0x10
- MX6QDL_PAD_DISP0_DAT22__IPU1_DISP0_DATA22 0x10
- MX6QDL_PAD_DISP0_DAT23__IPU1_DISP0_DATA23 0x10
- >;
- };
+ pinctrl_disp0_2: disp0grp-2 {
+ fsl,pins = <
+ MX6QDL_PAD_DI0_DISP_CLK__IPU1_DI0_DISP_CLK 0x10
+ MX6QDL_PAD_DI0_PIN15__IPU1_DI0_PIN15 0x10
+ MX6QDL_PAD_DI0_PIN2__IPU1_DI0_PIN02 0x10
+ MX6QDL_PAD_DI0_PIN3__IPU1_DI0_PIN03 0x10
+ MX6QDL_PAD_DISP0_DAT0__IPU1_DISP0_DATA00 0x10
+ MX6QDL_PAD_DISP0_DAT1__IPU1_DISP0_DATA01 0x10
+ MX6QDL_PAD_DISP0_DAT2__IPU1_DISP0_DATA02 0x10
+ MX6QDL_PAD_DISP0_DAT3__IPU1_DISP0_DATA03 0x10
+ MX6QDL_PAD_DISP0_DAT4__IPU1_DISP0_DATA04 0x10
+ MX6QDL_PAD_DISP0_DAT5__IPU1_DISP0_DATA05 0x10
+ MX6QDL_PAD_DISP0_DAT6__IPU1_DISP0_DATA06 0x10
+ MX6QDL_PAD_DISP0_DAT7__IPU1_DISP0_DATA07 0x10
+ MX6QDL_PAD_DISP0_DAT8__IPU1_DISP0_DATA08 0x10
+ MX6QDL_PAD_DISP0_DAT9__IPU1_DISP0_DATA09 0x10
+ MX6QDL_PAD_DISP0_DAT10__IPU1_DISP0_DATA10 0x10
+ MX6QDL_PAD_DISP0_DAT11__IPU1_DISP0_DATA11 0x10
+ MX6QDL_PAD_DISP0_DAT12__IPU1_DISP0_DATA12 0x10
+ MX6QDL_PAD_DISP0_DAT13__IPU1_DISP0_DATA13 0x10
+ MX6QDL_PAD_DISP0_DAT14__IPU1_DISP0_DATA14 0x10
+ MX6QDL_PAD_DISP0_DAT15__IPU1_DISP0_DATA15 0x10
+ MX6QDL_PAD_DISP0_DAT16__IPU1_DISP0_DATA16 0x10
+ MX6QDL_PAD_DISP0_DAT17__IPU1_DISP0_DATA17 0x10
+ MX6QDL_PAD_DISP0_DAT18__IPU1_DISP0_DATA18 0x10
+ MX6QDL_PAD_DISP0_DAT19__IPU1_DISP0_DATA19 0x10
+ MX6QDL_PAD_DISP0_DAT20__IPU1_DISP0_DATA20 0x10
+ MX6QDL_PAD_DISP0_DAT21__IPU1_DISP0_DATA21 0x10
+ MX6QDL_PAD_DISP0_DAT22__IPU1_DISP0_DATA22 0x10
+ MX6QDL_PAD_DISP0_DAT23__IPU1_DISP0_DATA23 0x10
+ >;
+ };
- pinctrl_ecspi1: ecspi1grp {
- fsl,pins = <
- MX6QDL_PAD_EIM_D18__ECSPI1_MOSI 0x0b0b0
- MX6QDL_PAD_EIM_D17__ECSPI1_MISO 0x0b0b0
- MX6QDL_PAD_EIM_D16__ECSPI1_SCLK 0x0b0b0
- MX6QDL_PAD_GPIO_19__ECSPI1_RDY 0x0b0b0
- MX6QDL_PAD_EIM_EB2__GPIO2_IO30 0x0b0b0 /* SPI CS0 */
- MX6QDL_PAD_EIM_D19__GPIO3_IO19 0x0b0b0 /* SPI CS1 */
- >;
- };
+ pinctrl_ecspi1: ecspi1grp {
+ fsl,pins = <
+ MX6QDL_PAD_EIM_D18__ECSPI1_MOSI 0x0b0b0
+ MX6QDL_PAD_EIM_D17__ECSPI1_MISO 0x0b0b0
+ MX6QDL_PAD_EIM_D16__ECSPI1_SCLK 0x0b0b0
+ MX6QDL_PAD_GPIO_19__ECSPI1_RDY 0x0b0b0
+ MX6QDL_PAD_EIM_EB2__GPIO2_IO30 0x0b0b0 /* SPI CS0 */
+ MX6QDL_PAD_EIM_D19__GPIO3_IO19 0x0b0b0 /* SPI CS1 */
+ >;
+ };
- pinctrl_edt_ft5x06: edt-ft5x06grp {
- fsl,pins = <
- MX6QDL_PAD_NANDF_CS2__GPIO6_IO15 0x1b0b0 /* Interrupt */
- MX6QDL_PAD_EIM_A16__GPIO2_IO22 0x1b0b0 /* Reset */
- MX6QDL_PAD_EIM_A17__GPIO2_IO21 0x1b0b0 /* Wake */
- >;
- };
+ pinctrl_edt_ft5x06: edt-ft5x06grp {
+ fsl,pins = <
+ MX6QDL_PAD_NANDF_CS2__GPIO6_IO15 0x1b0b0 /* Interrupt */
+ MX6QDL_PAD_EIM_A16__GPIO2_IO22 0x1b0b0 /* Reset */
+ MX6QDL_PAD_EIM_A17__GPIO2_IO21 0x1b0b0 /* Wake */
+ >;
+ };
- pinctrl_enet: enetgrp {
- fsl,pins = <
- MX6QDL_PAD_ENET_MDC__ENET_MDC 0x1b0b0
- MX6QDL_PAD_ENET_MDIO__ENET_MDIO 0x1b0b0
- MX6QDL_PAD_ENET_RXD0__ENET_RX_DATA0 0x1b0b0
- MX6QDL_PAD_ENET_RXD1__ENET_RX_DATA1 0x1b0b0
- MX6QDL_PAD_ENET_RX_ER__ENET_RX_ER 0x1b0b0
- MX6QDL_PAD_ENET_TX_EN__ENET_TX_EN 0x1b0b0
- MX6QDL_PAD_ENET_TXD0__ENET_TX_DATA0 0x1b0b0
- MX6QDL_PAD_ENET_TXD1__ENET_TX_DATA1 0x1b0b0
- MX6QDL_PAD_ENET_CRS_DV__ENET_RX_EN 0x1b0b0
- >;
- };
+ pinctrl_enet: enetgrp {
+ fsl,pins = <
+ MX6QDL_PAD_ENET_RXD0__ENET_RX_DATA0 0x1b0b0
+ MX6QDL_PAD_ENET_RXD1__ENET_RX_DATA1 0x1b0b0
+ MX6QDL_PAD_ENET_RX_ER__ENET_RX_ER 0x1b0b0
+ MX6QDL_PAD_ENET_TX_EN__ENET_TX_EN 0x1b0b0
+ MX6QDL_PAD_ENET_TXD0__ENET_TX_DATA0 0x1b0b0
+ MX6QDL_PAD_ENET_TXD1__ENET_TX_DATA1 0x1b0b0
+ MX6QDL_PAD_ENET_CRS_DV__ENET_RX_EN 0x1b0b0
+ >;
+ };
- pinctrl_etnphy_power: etnphy-pwrgrp {
- fsl,pins = <
- MX6QDL_PAD_EIM_D20__GPIO3_IO20 0x1b0b1 /* ETN PHY POWER */
- >;
- };
+ pinctrl_enet_mdio: enet-mdiogrp {
+ fsl,pins = <
+ MX6QDL_PAD_ENET_MDC__ENET_MDC 0x1b0b0
+ MX6QDL_PAD_ENET_MDIO__ENET_MDIO 0x1b0b0
+ >;
+ };
- pinctrl_flexcan1: flexcan1grp {
- fsl,pins = <
- MX6QDL_PAD_GPIO_7__FLEXCAN1_TX 0x1b0b0
- MX6QDL_PAD_GPIO_8__FLEXCAN1_RX 0x1b0b0
- >;
- };
+ pinctrl_etnphy_power: etnphy-pwrgrp {
+ fsl,pins = <
+ MX6QDL_PAD_EIM_D20__GPIO3_IO20 0x1b0b1 /* ETN PHY POWER */
+ >;
+ };
- pinctrl_flexcan2: flexcan2grp {
- fsl,pins = <
- MX6QDL_PAD_KEY_COL4__FLEXCAN2_TX 0x1b0b0
- MX6QDL_PAD_KEY_ROW4__FLEXCAN2_RX 0x1b0b0
- >;
- };
+ pinctrl_flexcan1: flexcan1grp {
+ fsl,pins = <
+ MX6QDL_PAD_GPIO_7__FLEXCAN1_TX 0x1b0b0
+ MX6QDL_PAD_GPIO_8__FLEXCAN1_RX 0x1b0b0
+ >;
+ };
- pinctrl_flexcan_xcvr: flexcan-xcvrgrp {
- fsl,pins = <
- MX6QDL_PAD_DISP0_DAT0__GPIO4_IO21 0x1b0b0 /* Flexcan XCVR enable */
- >;
- };
+ pinctrl_flexcan2: flexcan2grp {
+ fsl,pins = <
+ MX6QDL_PAD_KEY_COL4__FLEXCAN2_TX 0x1b0b0
+ MX6QDL_PAD_KEY_ROW4__FLEXCAN2_RX 0x1b0b0
+ >;
+ };
- pinctrl_gpmi_nand: gpminandgrp {
- fsl,pins = <
- MX6QDL_PAD_NANDF_CLE__NAND_CLE 0x0b0b1
- MX6QDL_PAD_NANDF_ALE__NAND_ALE 0x0b0b1
- MX6QDL_PAD_NANDF_WP_B__NAND_WP_B 0x0b0b1
- MX6QDL_PAD_NANDF_RB0__NAND_READY_B 0x0b000
- MX6QDL_PAD_NANDF_CS0__NAND_CE0_B 0x0b0b1
- MX6QDL_PAD_SD4_CMD__NAND_RE_B 0x0b0b1
- MX6QDL_PAD_SD4_CLK__NAND_WE_B 0x0b0b1
- MX6QDL_PAD_NANDF_D0__NAND_DATA00 0x0b0b1
- MX6QDL_PAD_NANDF_D1__NAND_DATA01 0x0b0b1
- MX6QDL_PAD_NANDF_D2__NAND_DATA02 0x0b0b1
- MX6QDL_PAD_NANDF_D3__NAND_DATA03 0x0b0b1
- MX6QDL_PAD_NANDF_D4__NAND_DATA04 0x0b0b1
- MX6QDL_PAD_NANDF_D5__NAND_DATA05 0x0b0b1
- MX6QDL_PAD_NANDF_D6__NAND_DATA06 0x0b0b1
- MX6QDL_PAD_NANDF_D7__NAND_DATA07 0x0b0b1
- >;
- };
+ pinctrl_flexcan_xcvr: flexcan-xcvrgrp {
+ fsl,pins = <
+ MX6QDL_PAD_DISP0_DAT0__GPIO4_IO21 0x1b0b0 /* Flexcan XCVR enable */
+ >;
+ };
- pinctrl_i2c1: i2c1grp {
- fsl,pins = <
- MX6QDL_PAD_EIM_D21__I2C1_SCL 0x4001b8b1
- MX6QDL_PAD_EIM_D28__I2C1_SDA 0x4001b8b1
- >;
- };
+ pinctrl_gpmi_nand: gpminandgrp {
+ fsl,pins = <
+ MX6QDL_PAD_NANDF_CLE__NAND_CLE 0x0b0b1
+ MX6QDL_PAD_NANDF_ALE__NAND_ALE 0x0b0b1
+ MX6QDL_PAD_NANDF_WP_B__NAND_WP_B 0x0b0b1
+ MX6QDL_PAD_NANDF_RB0__NAND_READY_B 0x0b000
+ MX6QDL_PAD_NANDF_CS0__NAND_CE0_B 0x0b0b1
+ MX6QDL_PAD_SD4_CMD__NAND_RE_B 0x0b0b1
+ MX6QDL_PAD_SD4_CLK__NAND_WE_B 0x0b0b1
+ MX6QDL_PAD_NANDF_D0__NAND_DATA00 0x0b0b1
+ MX6QDL_PAD_NANDF_D1__NAND_DATA01 0x0b0b1
+ MX6QDL_PAD_NANDF_D2__NAND_DATA02 0x0b0b1
+ MX6QDL_PAD_NANDF_D3__NAND_DATA03 0x0b0b1
+ MX6QDL_PAD_NANDF_D4__NAND_DATA04 0x0b0b1
+ MX6QDL_PAD_NANDF_D5__NAND_DATA05 0x0b0b1
+ MX6QDL_PAD_NANDF_D6__NAND_DATA06 0x0b0b1
+ MX6QDL_PAD_NANDF_D7__NAND_DATA07 0x0b0b1
+ >;
+ };
- pinctrl_i2c3: i2c3grp {
- fsl,pins = <
- MX6QDL_PAD_GPIO_3__I2C3_SCL 0x4001b8b1
- MX6QDL_PAD_GPIO_6__I2C3_SDA 0x4001b8b1
- >;
- };
+ pinctrl_i2c1: i2c1grp {
+ fsl,pins = <
+ MX6QDL_PAD_EIM_D21__I2C1_SCL 0x4001b8b1
+ MX6QDL_PAD_EIM_D28__I2C1_SDA 0x4001b8b1
+ >;
+ };
- pinctrl_kpp: kppgrp {
- fsl,pins = <
- MX6QDL_PAD_GPIO_9__KEY_COL6 0x1b0b1
- MX6QDL_PAD_GPIO_4__KEY_COL7 0x1b0b1
- MX6QDL_PAD_KEY_COL2__KEY_COL2 0x1b0b1
- MX6QDL_PAD_KEY_COL3__KEY_COL3 0x1b0b1
- MX6QDL_PAD_GPIO_2__KEY_ROW6 0x1b0b1
- MX6QDL_PAD_GPIO_5__KEY_ROW7 0x1b0b1
- MX6QDL_PAD_KEY_ROW2__KEY_ROW2 0x1b0b1
- MX6QDL_PAD_KEY_ROW3__KEY_ROW3 0x1b0b1
- >;
- };
+ pinctrl_i2c3: i2c3grp {
+ fsl,pins = <
+ MX6QDL_PAD_GPIO_3__I2C3_SCL 0x4001b8b1
+ MX6QDL_PAD_GPIO_6__I2C3_SDA 0x4001b8b1
+ >;
+ };
- pinctrl_lcd0_pwr: lcd0-pwrgrp {
- fsl,pins = <
- MX6QDL_PAD_EIM_D29__GPIO3_IO29 0x1b0b1 /* LCD Reset */
- >;
- };
+ pinctrl_kpp: kppgrp {
+ fsl,pins = <
+ MX6QDL_PAD_GPIO_9__KEY_COL6 0x1b0b1
+ MX6QDL_PAD_GPIO_4__KEY_COL7 0x1b0b1
+ MX6QDL_PAD_KEY_COL2__KEY_COL2 0x1b0b1
+ MX6QDL_PAD_KEY_COL3__KEY_COL3 0x1b0b1
+ MX6QDL_PAD_GPIO_2__KEY_ROW6 0x1b0b1
+ MX6QDL_PAD_GPIO_5__KEY_ROW7 0x1b0b1
+ MX6QDL_PAD_KEY_ROW2__KEY_ROW2 0x1b0b1
+ MX6QDL_PAD_KEY_ROW3__KEY_ROW3 0x1b0b1
+ >;
+ };
- pinctrl_lcd1_pwr: lcd1-pwrgrp {
- fsl,pins = <
- MX6QDL_PAD_EIM_EB3__GPIO2_IO31 0x1b0b1 /* LCD Power Enable */
- >;
- };
+ pinctrl_lcd0_pwr: lcd0-pwrgrp {
+ fsl,pins = <
+ MX6QDL_PAD_EIM_D29__GPIO3_IO29 0x1b0b1 /* LCD Reset */
+ >;
+ };
- pinctrl_pwm1: pwm1grp {
- fsl,pins = <
- MX6QDL_PAD_GPIO_9__PWM1_OUT 0x1b0b1
- >;
- };
+		pinctrl_lcd1_pwr: lcd1-pwrgrp {
+ fsl,pins = <
+ MX6QDL_PAD_EIM_EB3__GPIO2_IO31 0x1b0b1 /* LCD Power Enable */
+ >;
+ };
- pinctrl_pwm2: pwm2grp {
- fsl,pins = <
- MX6QDL_PAD_GPIO_1__PWM2_OUT 0x1b0b1
- >;
- };
+ pinctrl_pwm1: pwm1grp {
+ fsl,pins = <
+ MX6QDL_PAD_GPIO_9__PWM1_OUT 0x1b0b1
+ >;
+ };
- pinctrl_tsc2007: tsc2007grp {
- fsl,pins = <
- MX6QDL_PAD_EIM_D26__GPIO3_IO26 0x1b0b0 /* Interrupt */
- >;
- };
+ pinctrl_pwm2: pwm2grp {
+ fsl,pins = <
+ MX6QDL_PAD_GPIO_1__PWM2_OUT 0x1b0b1
+ >;
+ };
- pinctrl_uart1: uart1grp {
- fsl,pins = <
- MX6QDL_PAD_SD3_DAT7__UART1_TX_DATA 0x1b0b1
- MX6QDL_PAD_SD3_DAT6__UART1_RX_DATA 0x1b0b1
- >;
- };
+ pinctrl_tsc2007: tsc2007grp {
+ fsl,pins = <
+ MX6QDL_PAD_EIM_D26__GPIO3_IO26 0x1b0b0 /* Interrupt */
+ >;
+ };
- pinctrl_uart1_rtscts: uart1_rtsctsgrp {
- fsl,pins = <
- MX6QDL_PAD_SD3_DAT1__UART1_RTS_B 0x1b0b1
- MX6QDL_PAD_SD3_DAT0__UART1_CTS_B 0x1b0b1
- >;
- };
+ pinctrl_uart1: uart1grp {
+ fsl,pins = <
+ MX6QDL_PAD_SD3_DAT7__UART1_TX_DATA 0x1b0b1
+ MX6QDL_PAD_SD3_DAT6__UART1_RX_DATA 0x1b0b1
+ >;
+ };
- pinctrl_uart2: uart2grp {
- fsl,pins = <
- MX6QDL_PAD_SD4_DAT7__UART2_TX_DATA 0x1b0b1
- MX6QDL_PAD_SD4_DAT4__UART2_RX_DATA 0x1b0b1
- >;
- };
+ pinctrl_uart1_rtscts: uart1_rtsctsgrp {
+ fsl,pins = <
+ MX6QDL_PAD_SD3_DAT1__UART1_RTS_B 0x1b0b1
+ MX6QDL_PAD_SD3_DAT0__UART1_CTS_B 0x1b0b1
+ >;
+ };
- pinctrl_uart2_rtscts: uart2_rtsctsgrp {
- fsl,pins = <
- MX6QDL_PAD_SD4_DAT5__UART2_RTS_B 0x1b0b1
- MX6QDL_PAD_SD4_DAT6__UART2_CTS_B 0x1b0b1
- >;
- };
+ pinctrl_uart2: uart2grp {
+ fsl,pins = <
+ MX6QDL_PAD_SD4_DAT7__UART2_TX_DATA 0x1b0b1
+ MX6QDL_PAD_SD4_DAT4__UART2_RX_DATA 0x1b0b1
+ >;
+ };
- pinctrl_uart3: uart3grp {
- fsl,pins = <
- MX6QDL_PAD_EIM_D24__UART3_TX_DATA 0x1b0b1
- MX6QDL_PAD_EIM_D25__UART3_RX_DATA 0x1b0b1
- >;
- };
+ pinctrl_uart2_rtscts: uart2_rtsctsgrp {
+ fsl,pins = <
+ MX6QDL_PAD_SD4_DAT5__UART2_RTS_B 0x1b0b1
+ MX6QDL_PAD_SD4_DAT6__UART2_CTS_B 0x1b0b1
+ >;
+ };
- pinctrl_uart3_rtscts: uart3_rtsctsgrp {
- fsl,pins = <
- MX6QDL_PAD_SD3_DAT3__UART3_CTS_B 0x1b0b1
- MX6QDL_PAD_SD3_RST__UART3_RTS_B 0x1b0b1
- >;
- };
+ pinctrl_uart3: uart3grp {
+ fsl,pins = <
+ MX6QDL_PAD_EIM_D24__UART3_TX_DATA 0x1b0b1
+ MX6QDL_PAD_EIM_D25__UART3_RX_DATA 0x1b0b1
+ >;
+ };
- pinctrl_usbh1_vbus: usbh1-vbusgrp {
- fsl,pins = <
- MX6QDL_PAD_EIM_D31__GPIO3_IO31 0x1b0b0 /* USBH1_VBUSEN */
- >;
- };
+ pinctrl_uart3_rtscts: uart3_rtsctsgrp {
+ fsl,pins = <
+ MX6QDL_PAD_SD3_DAT3__UART3_CTS_B 0x1b0b1
+ MX6QDL_PAD_SD3_RST__UART3_RTS_B 0x1b0b1
+ >;
+ };
- pinctrl_usbotg: usbotggrp {
- fsl,pins = <
- MX6QDL_PAD_EIM_D23__GPIO3_IO23 0x17059
- >;
- };
+ pinctrl_usbh1_vbus: usbh1-vbusgrp {
+ fsl,pins = <
+ MX6QDL_PAD_EIM_D31__GPIO3_IO31 0x1b0b0 /* USBH1_VBUSEN */
+ >;
+ };
- pinctrl_usbotg_vbus: usbotg-vbusgrp {
- fsl,pins = <
- MX6QDL_PAD_GPIO_7__GPIO1_IO07 0x1b0b0 /* USBOTG_VBUSEN */
- >;
- };
+ pinctrl_usbotg: usbotggrp {
+ fsl,pins = <
+ MX6QDL_PAD_EIM_D23__GPIO3_IO23 0x17059
+ >;
+ };
- pinctrl_usdhc1: usdhc1grp {
- fsl,pins = <
- MX6QDL_PAD_SD1_CMD__SD1_CMD 0x070b1
- MX6QDL_PAD_SD1_CLK__SD1_CLK 0x070b1
- MX6QDL_PAD_SD1_DAT0__SD1_DATA0 0x070b1
- MX6QDL_PAD_SD1_DAT1__SD1_DATA1 0x070b1
- MX6QDL_PAD_SD1_DAT2__SD1_DATA2 0x070b1
- MX6QDL_PAD_SD1_DAT3__SD1_DATA3 0x070b1
- MX6QDL_PAD_SD3_CMD__GPIO7_IO02 0x170b0 /* SD1 CD */
- >;
- };
+ pinctrl_usbotg_vbus: usbotg-vbusgrp {
+ fsl,pins = <
+ MX6QDL_PAD_GPIO_7__GPIO1_IO07 0x1b0b0 /* USBOTG_VBUSEN */
+ >;
+ };
- pinctrl_usdhc2: usdhc2grp {
- fsl,pins = <
- MX6QDL_PAD_SD2_CMD__SD2_CMD 0x070b1
- MX6QDL_PAD_SD2_CLK__SD2_CLK 0x070b1
- MX6QDL_PAD_SD2_DAT0__SD2_DATA0 0x070b1
- MX6QDL_PAD_SD2_DAT1__SD2_DATA1 0x070b1
- MX6QDL_PAD_SD2_DAT2__SD2_DATA2 0x070b1
- MX6QDL_PAD_SD2_DAT3__SD2_DATA3 0x070b1
- MX6QDL_PAD_SD3_CLK__GPIO7_IO03 0x170b0 /* SD2 CD */
- >;
- };
+ pinctrl_usdhc1: usdhc1grp {
+ fsl,pins = <
+ MX6QDL_PAD_SD1_CMD__SD1_CMD 0x070b1
+ MX6QDL_PAD_SD1_CLK__SD1_CLK 0x070b1
+ MX6QDL_PAD_SD1_DAT0__SD1_DATA0 0x070b1
+ MX6QDL_PAD_SD1_DAT1__SD1_DATA1 0x070b1
+ MX6QDL_PAD_SD1_DAT2__SD1_DATA2 0x070b1
+ MX6QDL_PAD_SD1_DAT3__SD1_DATA3 0x070b1
+ MX6QDL_PAD_SD3_CMD__GPIO7_IO02 0x170b0 /* SD1 CD */
+ >;
+ };
+
+ pinctrl_usdhc2: usdhc2grp {
+ fsl,pins = <
+ MX6QDL_PAD_SD2_CMD__SD2_CMD 0x070b1
+ MX6QDL_PAD_SD2_CLK__SD2_CLK 0x070b1
+ MX6QDL_PAD_SD2_DAT0__SD2_DATA0 0x070b1
+ MX6QDL_PAD_SD2_DAT1__SD2_DATA1 0x070b1
+ MX6QDL_PAD_SD2_DAT2__SD2_DATA2 0x070b1
+ MX6QDL_PAD_SD2_DAT3__SD2_DATA3 0x070b1
+ MX6QDL_PAD_SD3_CLK__GPIO7_IO03 0x170b0 /* SD2 CD */
+ >;
+ };
+
+ pinctrl_user_led: user-ledgrp {
+ fsl,pins = <
+ MX6QDL_PAD_EIM_A18__GPIO2_IO20 0x1b0b1 /* LED */
+ >;
};
};
@@ -649,19 +688,22 @@
&uart1 {
pinctrl-names = "default";
- pinctrl-0 = <&pinctrl_uart1>;
+ pinctrl-0 = <&pinctrl_uart1 &pinctrl_uart1_rtscts>;
+ fsl,uart-has-rtscts;
status = "okay";
};
&uart2 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_uart2 &pinctrl_uart2_rtscts>;
+ fsl,uart-has-rtscts;
status = "okay";
};
&uart3 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_uart3 &pinctrl_uart3_rtscts>;
+ fsl,uart-has-rtscts;
status = "okay";
};
diff --git a/arch/arm/boot/dts/imx6qdl-udoo.dtsi b/arch/arm/boot/dts/imx6qdl-udoo.dtsi
index d3e54e40a017..3bee2f910067 100644
--- a/arch/arm/boot/dts/imx6qdl-udoo.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-udoo.dtsi
@@ -10,14 +10,49 @@
*/
/ {
+ aliases {
+ backlight = &backlight;
+ panelchan = &panelchan;
+ panel7 = &panel7;
+ touchscreenp7 = &touchscreenp7;
+ };
+
chosen {
stdout-path = &uart2;
};
+ backlight: backlight {
+ compatible = "gpio-backlight";
+ gpios = <&gpio1 4 0>;
+ default-on;
+ status = "disabled";
+ };
+
memory {
reg = <0x10000000 0x40000000>;
};
+ panel7: panel7 {
+ /*
+ * in reality it is a -20t (parallel) model,
+ * but with LVDS bridge chip attached,
+ * so it is equivalent to -19t model in drive
+ * characteristics
+ */
+ compatible = "urt,umsh-8596md-19t";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_panel>;
+ power-supply = <&reg_panel>;
+ backlight = <&backlight>;
+ status = "disabled";
+
+ port {
+ panel_in: endpoint {
+ remote-endpoint = <&lvds0_out>;
+ };
+ };
+ };
+
regulators {
compatible = "simple-bus";
#address-cells = <1>;
@@ -33,6 +68,14 @@
startup-delay-us = <2>; /* USB2415 requires a POR of 1 us minimum */
gpio = <&gpio7 12 0>;
};
+
+ reg_panel: regulator@1 {
+ compatible = "regulator-fixed";
+ reg = <1>;
+ regulator-name = "lcd_panel";
+ enable-active-high;
+ gpio = <&gpio1 2 0>;
+ };
};
sound {
@@ -67,6 +110,24 @@
status = "okay";
};
+&i2c3 {
+ clock-frequency = <100000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c3>;
+ status = "okay";
+
+ touchscreenp7: touchscreenp7@55 {
+ compatible = "sitronix,st1232";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_touchscreenp7>;
+ reg = <0x55>;
+ interrupt-parent = <&gpio1>;
+ interrupts = <13 8>;
+ gpios = <&gpio1 15 0>;
+ status = "disabled";
+ };
+};
+
&iomuxc {
imx6q-udoo {
pinctrl_enet: enetgrp {
@@ -97,6 +158,27 @@
>;
};
+ pinctrl_i2c3: i2c3grp {
+ fsl,pins = <
+ MX6QDL_PAD_GPIO_5__I2C3_SCL 0x4001f8b1
+ MX6QDL_PAD_GPIO_6__I2C3_SDA 0x4001f8b1
+ >;
+ };
+
+ pinctrl_panel: panelgrp {
+ fsl,pins = <
+ MX6QDL_PAD_GPIO_2__GPIO1_IO02 0x70
+ MX6QDL_PAD_GPIO_4__GPIO1_IO04 0x70
+ >;
+ };
+
+ pinctrl_touchscreenp7: touchscreenp7grp {
+ fsl,pins = <
+ MX6QDL_PAD_SD2_DAT0__GPIO1_IO15 0x70
+ MX6QDL_PAD_SD2_DAT2__GPIO1_IO13 0x1b0b0
+ >;
+ };
+
pinctrl_uart2: uart2grp {
fsl,pins = <
MX6QDL_PAD_EIM_D26__UART2_TX_DATA 0x1b0b1
@@ -154,6 +236,20 @@
};
};
+&ldb {
+ status = "okay";
+
+ panelchan: lvds-channel@0 {
+ port@4 {
+ reg = <4>;
+
+ lvds0_out: endpoint {
+ remote-endpoint = <&panel_in>;
+ };
+ };
+ };
+};
+
&uart2 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_uart2>;
@@ -164,7 +260,7 @@
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_usbh>;
vbus-supply = <&reg_usb_h1_vbus>;
- clocks = <&clks 201>;
+ clocks = <&clks IMX6QDL_CLK_CKO>;
status = "okay";
};
diff --git a/arch/arm/boot/dts/imx6qdl-wandboard.dtsi b/arch/arm/boot/dts/imx6qdl-wandboard.dtsi
index 9e096d811bed..8e7c40e114dd 100644
--- a/arch/arm/boot/dts/imx6qdl-wandboard.dtsi
+++ b/arch/arm/boot/dts/imx6qdl-wandboard.dtsi
@@ -85,7 +85,7 @@
codec: sgtl5000@0a {
compatible = "fsl,sgtl5000";
reg = <0x0a>;
- clocks = <&clks 201>;
+ clocks = <&clks IMX6QDL_CLK_CKO>;
VDDA-supply = <&reg_2p5v>;
VDDIO-supply = <&reg_3p3v>;
};
diff --git a/arch/arm/boot/dts/imx6qdl.dtsi b/arch/arm/boot/dts/imx6qdl.dtsi
index b42822aa14f2..ed613ebe0812 100644
--- a/arch/arm/boot/dts/imx6qdl.dtsi
+++ b/arch/arm/boot/dts/imx6qdl.dtsi
@@ -621,7 +621,7 @@
<0 54 IRQ_TYPE_LEVEL_HIGH>,
<0 127 IRQ_TYPE_LEVEL_HIGH>;
- regulator-1p1@110 {
+ regulator-1p1 {
compatible = "fsl,anatop-regulator";
regulator-name = "vdd1p1";
regulator-min-microvolt = <800000>;
@@ -635,7 +635,7 @@
anatop-max-voltage = <1375000>;
};
- regulator-3p0@120 {
+ regulator-3p0 {
compatible = "fsl,anatop-regulator";
regulator-name = "vdd3p0";
regulator-min-microvolt = <2800000>;
@@ -649,7 +649,7 @@
anatop-max-voltage = <3400000>;
};
- regulator-2p5@130 {
+ regulator-2p5 {
compatible = "fsl,anatop-regulator";
regulator-name = "vdd2p5";
regulator-min-microvolt = <2000000>;
@@ -663,7 +663,7 @@
anatop-max-voltage = <2750000>;
};
- reg_arm: regulator-vddcore@140 {
+ reg_arm: regulator-vddcore {
compatible = "fsl,anatop-regulator";
regulator-name = "vddarm";
regulator-min-microvolt = <725000>;
@@ -680,7 +680,7 @@
anatop-max-voltage = <1450000>;
};
- reg_pu: regulator-vddpu@140 {
+ reg_pu: regulator-vddpu {
compatible = "fsl,anatop-regulator";
regulator-name = "vddpu";
regulator-min-microvolt = <725000>;
@@ -697,7 +697,7 @@
anatop-max-voltage = <1450000>;
};
- reg_soc: regulator-vddsoc@140 {
+ reg_soc: regulator-vddsoc {
compatible = "fsl,anatop-regulator";
regulator-name = "vddsoc";
regulator-min-microvolt = <725000>;
@@ -1230,22 +1230,22 @@
#size-cells = <0>;
reg = <2>;
- ipu1_di0_disp0: endpoint@0 {
+ ipu1_di0_disp0: disp0-endpoint {
};
- ipu1_di0_hdmi: endpoint@1 {
+ ipu1_di0_hdmi: hdmi-endpoint {
remote-endpoint = <&hdmi_mux_0>;
};
- ipu1_di0_mipi: endpoint@2 {
+ ipu1_di0_mipi: mipi-endpoint {
remote-endpoint = <&mipi_mux_0>;
};
- ipu1_di0_lvds0: endpoint@3 {
+ ipu1_di0_lvds0: lvds0-endpoint {
remote-endpoint = <&lvds0_mux_0>;
};
- ipu1_di0_lvds1: endpoint@4 {
+ ipu1_di0_lvds1: lvds1-endpoint {
remote-endpoint = <&lvds1_mux_0>;
};
};
@@ -1255,22 +1255,22 @@
#size-cells = <0>;
reg = <3>;
- ipu1_di0_disp1: endpoint@0 {
+ ipu1_di0_disp1: disp1-endpoint {
};
- ipu1_di1_hdmi: endpoint@1 {
+ ipu1_di1_hdmi: hdmi-endpoint {
remote-endpoint = <&hdmi_mux_1>;
};
- ipu1_di1_mipi: endpoint@2 {
+ ipu1_di1_mipi: mipi-endpoint {
remote-endpoint = <&mipi_mux_1>;
};
- ipu1_di1_lvds0: endpoint@3 {
+ ipu1_di1_lvds0: lvds0-endpoint {
remote-endpoint = <&lvds0_mux_1>;
};
- ipu1_di1_lvds1: endpoint@4 {
+ ipu1_di1_lvds1: lvds1-endpoint {
remote-endpoint = <&lvds1_mux_1>;
};
};
diff --git a/arch/arm/boot/dts/imx6qp-nitrogen6_max.dts b/arch/arm/boot/dts/imx6qp-nitrogen6_max.dts
new file mode 100644
index 000000000000..a39b86036581
--- /dev/null
+++ b/arch/arm/boot/dts/imx6qp-nitrogen6_max.dts
@@ -0,0 +1,59 @@
+/*
+ * Copyright 2016 Boundary Devices, Inc.
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+
+#include "imx6qp.dtsi"
+#include "imx6qdl-nitrogen6_max.dtsi"
+
+/ {
+ model = "Boundary Devices i.MX6 Quad Plus Nitrogen6_MAX Board";
+ compatible = "boundary,imx6qp-nitrogen6_max", "fsl,imx6qp";
+};
+
+&pcie {
+ status = "disabled";
+};
+
+&sata {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/imx6qp.dtsi b/arch/arm/boot/dts/imx6qp.dtsi
index 1ada71437e49..886dbf2eca49 100644
--- a/arch/arm/boot/dts/imx6qp.dtsi
+++ b/arch/arm/boot/dts/imx6qp.dtsi
@@ -82,5 +82,8 @@
"ldb_di0", "ldb_di1", "prg";
};
+ pcie: pcie@0x01000000 {
+ compatible = "fsl,imx6qp-pcie", "snps,dw-pcie";
+ };
};
};
diff --git a/arch/arm/boot/dts/imx6sx-nitrogen6sx.dts b/arch/arm/boot/dts/imx6sx-nitrogen6sx.dts
new file mode 100644
index 000000000000..ba62348d8284
--- /dev/null
+++ b/arch/arm/boot/dts/imx6sx-nitrogen6sx.dts
@@ -0,0 +1,709 @@
+/*
+ * Copyright (C) 2016 Boundary Devices, Inc.
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED , WITHOUT WARRANTY OF ANY KIND
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+
+#include "imx6sx.dtsi"
+
+/ {
+ model = "Boundary Devices i.MX6 SoloX Nitrogen6sx Board";
+ compatible = "boundary,imx6sx-nitrogen6sx", "fsl,imx6sx";
+
+ aliases {
+ fb_lcd = &lcdif1;
+ t_lcd = &t_lcd;
+ };
+
+ memory {
+ reg = <0x80000000 0x40000000>;
+ };
+
+ backlight-lvds {
+ compatible = "pwm-backlight";
+ pwms = <&pwm4 0 5000000>;
+ brightness-levels = <0 4 8 16 32 64 128 255>;
+ default-brightness-level = <6>;
+ power-supply = <&reg_3p3v>;
+ };
+
+ reg_1p8v: regulator-1p8v {
+ compatible = "regulator-fixed";
+ regulator-name = "1P8V";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+ };
+
+ reg_3p3v: regulator-3p3v {
+ compatible = "regulator-fixed";
+ regulator-name = "3P3V";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ };
+
+ reg_can1_3v3: regulator-can1-3v3 {
+ compatible = "regulator-fixed";
+ regulator-name = "can1-3v3";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ gpio = <&gpio4 27 GPIO_ACTIVE_LOW>;
+ };
+
+ reg_can2_3v3: regulator-can2-3v3 {
+ compatible = "regulator-fixed";
+ regulator-name = "can2-3v3";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ gpio = <&gpio4 24 GPIO_ACTIVE_LOW>;
+ };
+
+ reg_usb_otg1_vbus: regulator-usb-otg1-vbus {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usbotg1_vbus>;
+ compatible = "regulator-fixed";
+ regulator-name = "usb_otg1_vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ gpio = <&gpio1 9 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ };
+
+ reg_wlan: regulator-wlan {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_reg_wlan>;
+ compatible = "regulator-fixed";
+ clocks = <&clks IMX6SX_CLK_CKO>;
+ clock-names = "slow";
+ regulator-name = "wlan-en";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ startup-delay-us = <70000>;
+ gpio = <&gpio7 6 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ };
+
+ sound {
+ compatible = "fsl,imx-audio-sgtl5000";
+ model = "imx6sx-nitrogen6sx-sgtl5000";
+ cpu-dai = <&ssi1>;
+ audio-codec = <&codec>;
+ audio-routing =
+ "MIC_IN", "Mic Jack",
+ "Mic Jack", "Mic Bias",
+ "Headphone Jack", "HP_OUT";
+ mux-int-port = <1>;
+ mux-ext-port = <5>;
+ };
+};
+
+&audmux {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_audmux>;
+ status = "okay";
+};
+
+&ecspi1 {
+ fsl,spi-num-chipselects = <1>;
+ cs-gpios = <&gpio2 16 GPIO_ACTIVE_LOW>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_ecspi1>;
+ status = "okay";
+
+ flash: m25p80@0 {
+ compatible = "microchip,sst25vf016b";
+ spi-max-frequency = <20000000>;
+ reg = <0>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+ partition@0 {
+ label = "U-Boot";
+ reg = <0x0 0xc0000>;
+ read-only;
+ };
+
+ partition@c0000 {
+ label = "env";
+ reg = <0xc0000 0x2000>;
+ read-only;
+ };
+
+ partition@c2000 {
+ label = "Kernel";
+ reg = <0xc2000 0x11e000>;
+ };
+
+ partition@1e0000 {
+ label = "M4";
+ reg = <0x1e0000 0x20000>;
+ };
+ };
+};
+
+&fec1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_enet1>;
+ phy-mode = "rgmii";
+ phy-handle = <&ethphy1>;
+ phy-supply = <&reg_3p3v>;
+ fsl,magic-packet;
+ status = "okay";
+
+ mdio {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ ethphy1: ethernet-phy@4 {
+ reg = <4>;
+ };
+
+ ethphy2: ethernet-phy@5 {
+ reg = <5>;
+ };
+ };
+};
+
+&fec2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_enet2>;
+ phy-mode = "rgmii";
+ phy-handle = <&ethphy2>;
+ phy-supply = <&reg_3p3v>;
+ fsl,magic-packet;
+ status = "okay";
+};
+
+&flexcan1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_flexcan1>;
+ xceiver-supply = <&reg_can1_3v3>;
+ status = "okay";
+};
+
+&flexcan2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_flexcan2>;
+ xceiver-supply = <&reg_can2_3v3>;
+ status = "okay";
+};
+
+&i2c1 {
+ clock-frequency = <100000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c1>;
+ status = "okay";
+
+ codec: sgtl5000@0a {
+ compatible = "fsl,sgtl5000";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_sgtl5000>;
+ reg = <0x0a>;
+ clocks = <&clks IMX6SX_CLK_CKO2>;
+ VDDA-supply = <&reg_1p8v>;
+ VDDIO-supply = <&reg_1p8v>;
+ VDDD-supply = <&reg_1p8v>;
+ assigned-clocks = <&clks IMX6SX_CLK_CKO2_SEL>,
+ <&clks IMX6SX_CLK_CKO2>;
+ assigned-clock-parents = <&clks IMX6SX_CLK_OSC>;
+ assigned-clock-rates = <0>, <24000000>;
+ };
+};
+
+&i2c2 {
+ clock-frequency = <100000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c2>;
+ status = "okay";
+};
+
+&i2c3 {
+ clock-frequency = <100000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c3>;
+ status = "okay";
+};
+
+&lcdif1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_lcdif1>;
+ lcd-supply = <&reg_3p3v>;
+ display = <&display0>;
+ status = "okay";
+
+ display0: display0 {
+ bits-per-pixel = <16>;
+ bus-width = <24>;
+
+ display-timings {
+ native-mode = <&t_lcd>;
+ t_lcd: t_lcd_default {
+ clock-frequency = <74160000>;
+ hactive = <1280>;
+ vactive = <720>;
+ hback-porch = <220>;
+ hfront-porch = <110>;
+ vback-porch = <20>;
+ vfront-porch = <5>;
+ hsync-len = <40>;
+ vsync-len = <5>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+ };
+ };
+};
+
+&pcie {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_pcie>;
+ reset-gpio = <&gpio4 10 GPIO_ACTIVE_HIGH>;
+ status = "okay";
+};
+
+&pwm4 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_pwm4>;
+ status = "okay";
+};
+
+&ssi1 {
+ status = "okay";
+};
+
+&uart1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart1>;
+ status = "okay";
+};
+
+&uart2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart2>;
+ status = "okay";
+};
+
+&uart3 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart3>;
+ fsl,uart-has-rtscts;
+ status = "okay";
+};
+
+&uart5 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart5>;
+ status = "okay";
+};
+
+&usbotg1 {
+ vbus-supply = <&reg_usb_otg1_vbus>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usbotg1>;
+ status = "okay";
+};
+
+&usbotg2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usbotg2>;
+ dr_mode = "host";
+ disable-over-current;
+ reset-gpios = <&gpio4 26 GPIO_ACTIVE_LOW>;
+ status = "okay";
+};
+
+&usdhc2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usdhc2>;
+ bus-width = <4>;
+ cd-gpios = <&gpio2 12 GPIO_ACTIVE_LOW>;
+ keep-power-in-suspend;
+ wakeup-source;
+ status = "okay";
+};
+
+&usdhc3 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usdhc3>;
+ bus-width = <4>;
+ non-removable;
+ keep-power-in-suspend;
+ vmmc-supply = <&reg_wlan>;
+ cap-power-off-card;
+ cap-sdio-irq;
+ status = "okay";
+
+ brcmf: bcrmf@1 {
+ reg = <1>;
+ compatible = "brcm,bcm4329-fmac";
+ interrupt-parent = <&gpio7>;
+ interrupts = <7 IRQ_TYPE_LEVEL_LOW>;
+ };
+
+ wlcore: wlcore@2 {
+ compatible = "ti,wl1271";
+ reg = <2>;
+ interrupt-parent = <&gpio7>;
+ interrupts = <7 IRQ_TYPE_LEVEL_LOW>;
+ ref-clock-frequency = <38400000>;
+ };
+};
+
+&usdhc4 {
+ pinctrl-names = "default", "state_100mhz", "state_200mhz";
+ pinctrl-0 = <&pinctrl_usdhc4_50mhz>;
+ pinctrl-1 = <&pinctrl_usdhc4_100mhz>;
+ pinctrl-2 = <&pinctrl_usdhc4_200mhz>;
+ bus-width = <8>;
+ non-removable;
+ vmmc-supply = <&reg_1p8v>;
+ keep-power-in-suspend;
+ status = "okay";
+};
+
+&iomuxc {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_hog>;
+
+ pinctrl_audmux: audmuxgrp {
+ fsl,pins = <
+ MX6SX_PAD_SD1_DATA0__AUDMUX_AUD5_RXD 0x1b0b0
+ MX6SX_PAD_SD1_DATA1__AUDMUX_AUD5_TXC 0x1b0b0
+ MX6SX_PAD_SD1_DATA2__AUDMUX_AUD5_TXFS 0x1b0b0
+ MX6SX_PAD_SD1_DATA3__AUDMUX_AUD5_TXD 0x1b0b0
+ >;
+ };
+
+ pinctrl_ecspi1: ecspi1grp {
+ fsl,pins = <
+ MX6SX_PAD_KEY_COL1__ECSPI1_MISO 0x100b1
+ MX6SX_PAD_KEY_ROW0__ECSPI1_MOSI 0x100b1
+ MX6SX_PAD_KEY_COL0__ECSPI1_SCLK 0x100b1
+ MX6SX_PAD_KEY_ROW1__GPIO2_IO_16 0x0b0b1
+ >;
+ };
+
+ pinctrl_enet1: enet1grp {
+ fsl,pins = <
+ MX6SX_PAD_ENET1_MDIO__ENET1_MDIO 0x1b0b0
+ MX6SX_PAD_ENET1_MDC__ENET1_MDC 0x1b0b0
+ MX6SX_PAD_RGMII1_TD0__ENET1_TX_DATA_0 0x30b1
+ MX6SX_PAD_RGMII1_TD1__ENET1_TX_DATA_1 0x30b1
+ MX6SX_PAD_RGMII1_TD2__ENET1_TX_DATA_2 0x30b1
+ MX6SX_PAD_RGMII1_TD3__ENET1_TX_DATA_3 0x30b1
+ MX6SX_PAD_RGMII1_TXC__ENET1_RGMII_TXC 0x30b1
+ MX6SX_PAD_RGMII1_TX_CTL__ENET1_TX_EN 0x30b1
+ MX6SX_PAD_RGMII1_RD0__ENET1_RX_DATA_0 0x3081
+ MX6SX_PAD_RGMII1_RD1__ENET1_RX_DATA_1 0x3081
+ MX6SX_PAD_RGMII1_RX_CTL__ENET1_RX_EN 0x3081
+ MX6SX_PAD_RGMII1_RD2__ENET1_RX_DATA_2 0x3081
+ MX6SX_PAD_RGMII1_RD3__ENET1_RX_DATA_3 0x3081
+ MX6SX_PAD_RGMII1_RXC__ENET1_RX_CLK 0x3081
+ MX6SX_PAD_ENET2_CRS__GPIO2_IO_7 0xb0b0
+ MX6SX_PAD_ENET1_RX_CLK__GPIO2_IO_4 0xb0b0
+ MX6SX_PAD_ENET1_TX_CLK__GPIO2_IO_5 0xb0b0
+ >;
+ };
+
+ pinctrl_enet2: enet2grp {
+ fsl,pins = <
+ MX6SX_PAD_RGMII2_TD0__ENET2_TX_DATA_0 0x30b1
+ MX6SX_PAD_RGMII2_TD1__ENET2_TX_DATA_1 0x30b1
+ MX6SX_PAD_RGMII2_TD2__ENET2_TX_DATA_2 0x30b1
+ MX6SX_PAD_RGMII2_TD3__ENET2_TX_DATA_3 0x30b1
+ MX6SX_PAD_RGMII2_TXC__ENET2_RGMII_TXC 0x30b1
+ MX6SX_PAD_RGMII2_TX_CTL__ENET2_TX_EN 0x30b1
+ MX6SX_PAD_RGMII2_RD0__ENET2_RX_DATA_0 0x3081
+ MX6SX_PAD_RGMII2_RD1__ENET2_RX_DATA_1 0x3081
+ MX6SX_PAD_RGMII2_RX_CTL__ENET2_RX_EN 0x3081
+ MX6SX_PAD_RGMII2_RD2__ENET2_RX_DATA_2 0x3081
+ MX6SX_PAD_RGMII2_RD3__ENET2_RX_DATA_3 0x3081
+ MX6SX_PAD_RGMII2_RXC__ENET2_RX_CLK 0x3081
+ MX6SX_PAD_ENET2_COL__GPIO2_IO_6 0xb0b0
+ MX6SX_PAD_ENET2_RX_CLK__GPIO2_IO_8 0xb0b0
+ MX6SX_PAD_ENET2_TX_CLK__GPIO2_IO_9 0xb0b0
+ >;
+ };
+
+ pinctrl_flexcan1: flexcan1grp {
+ fsl,pins = <
+ MX6SX_PAD_QSPI1B_DQS__CAN1_TX 0x1b0b0
+ MX6SX_PAD_QSPI1A_SS1_B__CAN1_RX 0x1b0b0
+			MX6SX_PAD_QSPI1B_DATA3__GPIO4_IO_27	0x0b0b0
+ >;
+ };
+
+ pinctrl_flexcan2: flexcan2grp {
+ fsl,pins = <
+ MX6SX_PAD_QSPI1A_DQS__CAN2_TX 0x1b0b0
+ MX6SX_PAD_QSPI1B_SS1_B__CAN2_RX 0x1b0b0
+ MX6SX_PAD_QSPI1B_DATA0__GPIO4_IO_24 0x0b0b0
+ >;
+ };
+
+ pinctrl_hog: hoggrp {
+ fsl,pins = <
+ MX6SX_PAD_NAND_CE0_B__GPIO4_IO_1 0x1b0b0
+ MX6SX_PAD_NAND_CLE__GPIO4_IO_3 0x1b0b0
+ MX6SX_PAD_NAND_RE_B__GPIO4_IO_12 0x1b0b0
+ MX6SX_PAD_NAND_WE_B__GPIO4_IO_14 0x1b0b0
+ MX6SX_PAD_NAND_WP_B__GPIO4_IO_15 0x1b0b0
+ MX6SX_PAD_NAND_READY_B__GPIO4_IO_13 0x1b0b0
+ MX6SX_PAD_QSPI1A_DATA0__GPIO4_IO_16 0x1b0b0
+ MX6SX_PAD_QSPI1A_DATA1__GPIO4_IO_17 0x1b0b0
+ MX6SX_PAD_QSPI1A_DATA2__GPIO4_IO_18 0x1b0b0
+ MX6SX_PAD_QSPI1A_DATA3__GPIO4_IO_19 0x1b0b0
+ MX6SX_PAD_SD1_CMD__CCM_CLKO1 0x000b0
+ MX6SX_PAD_SD3_DATA5__GPIO7_IO_7 0x1b0b0
+ /* Test points */
+ MX6SX_PAD_NAND_DATA04__GPIO4_IO_8 0x1b0b0
+ MX6SX_PAD_QSPI1B_DATA1__GPIO4_IO_25 0x1b0b0
+ >;
+ };
+
+ pinctrl_i2c1: i2c1grp {
+ fsl,pins = <
+ MX6SX_PAD_GPIO1_IO00__I2C1_SCL 0x4001b8b1
+ MX6SX_PAD_GPIO1_IO01__I2C1_SDA 0x4001b8b1
+ >;
+ };
+
+ pinctrl_i2c2: i2c2grp {
+ fsl,pins = <
+ MX6SX_PAD_GPIO1_IO02__I2C2_SCL 0x4001b8b1
+ MX6SX_PAD_GPIO1_IO03__I2C2_SDA 0x4001b8b1
+ >;
+ };
+
+ pinctrl_i2c3: i2c3grp {
+ fsl,pins = <
+ MX6SX_PAD_KEY_COL4__I2C3_SCL 0x4001b8b1
+ MX6SX_PAD_KEY_ROW4__I2C3_SDA 0x4001b8b1
+ >;
+ };
+
+ pinctrl_lcdif1: lcdif1grp {
+ fsl,pins = <
+ MX6SX_PAD_LCD1_CLK__LCDIF1_CLK 0x4001b0b0
+ MX6SX_PAD_LCD1_ENABLE__LCDIF1_ENABLE 0x4001b0b0
+ MX6SX_PAD_LCD1_HSYNC__LCDIF1_HSYNC 0x4001b0b0
+ MX6SX_PAD_LCD1_VSYNC__LCDIF1_VSYNC 0x4001b0b0
+ MX6SX_PAD_LCD1_RESET__GPIO3_IO_27 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA00__LCDIF1_DATA_0 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA01__LCDIF1_DATA_1 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA02__LCDIF1_DATA_2 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA03__LCDIF1_DATA_3 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA04__LCDIF1_DATA_4 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA05__LCDIF1_DATA_5 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA06__LCDIF1_DATA_6 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA07__LCDIF1_DATA_7 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA08__LCDIF1_DATA_8 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA09__LCDIF1_DATA_9 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA10__LCDIF1_DATA_10 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA11__LCDIF1_DATA_11 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA12__LCDIF1_DATA_12 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA13__LCDIF1_DATA_13 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA14__LCDIF1_DATA_14 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA15__LCDIF1_DATA_15 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA16__LCDIF1_DATA_16 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA17__LCDIF1_DATA_17 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA18__LCDIF1_DATA_18 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA19__LCDIF1_DATA_19 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA20__LCDIF1_DATA_20 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA21__LCDIF1_DATA_21 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA22__LCDIF1_DATA_22 0x4001b0b0
+ MX6SX_PAD_LCD1_DATA23__LCDIF1_DATA_23 0x4001b0b0
+ >;
+ };
+
+ pinctrl_pcie: pciegrp {
+ fsl,pins = <
+ MX6SX_PAD_NAND_DATA05__GPIO4_IO_9 0xb0b0
+ MX6SX_PAD_NAND_DATA06__GPIO4_IO_10 0xb0b0
+ MX6SX_PAD_NAND_DATA07__GPIO4_IO_11 0xb0b0
+ >;
+ };
+
+ pinctrl_pwm4: pwm4grp {
+ fsl,pins = <
+ MX6SX_PAD_GPIO1_IO13__PWM4_OUT 0x110b0
+ >;
+ };
+
+ pinctrl_reg_wlan: reg-wlangrp {
+ fsl,pins = <
+ MX6SX_PAD_SD3_DATA4__GPIO7_IO_6 0x1b0b0
+ MX6SX_PAD_GPIO1_IO11__CCM_CLKO1 0x000b0
+ >;
+ };
+
+ pinctrl_sgtl5000: sgtl5000grp {
+ fsl,pins = <
+ MX6SX_PAD_GPIO1_IO12__CCM_CLKO2 0x000b0
+ MX6SX_PAD_ENET1_COL__GPIO2_IO_0 0x1b0b0
+ MX6SX_PAD_ENET1_CRS__GPIO2_IO_1 0x1b0b0
+ MX6SX_PAD_QSPI1A_SS0_B__GPIO4_IO_22 0xb0b0
+ >;
+ };
+
+ pinctrl_uart1: uart1grp {
+ fsl,pins = <
+ MX6SX_PAD_GPIO1_IO04__UART1_TX 0x1b0b1
+ MX6SX_PAD_GPIO1_IO05__UART1_RX 0x1b0b1
+ >;
+ };
+
+ pinctrl_uart2: uart2grp {
+ fsl,pins = <
+ MX6SX_PAD_GPIO1_IO06__UART2_TX 0x1b0b1
+ MX6SX_PAD_GPIO1_IO07__UART2_RX 0x1b0b1
+ >;
+ };
+
+ pinctrl_uart3: uart3grp {
+ fsl,pins = <
+			MX6SX_PAD_QSPI1B_SS0_B__UART3_TX	0x1b0b1
+			MX6SX_PAD_QSPI1B_SCLK__UART3_RX		0x1b0b1
+			MX6SX_PAD_SD3_DATA6__UART3_RTS_B	0x1b0b1
+			MX6SX_PAD_SD3_DATA7__UART3_CTS_B	0x1b0b1
+		>;
+	};
+
+	pinctrl_uart5: uart5grp {
+		fsl,pins = <
+			MX6SX_PAD_KEY_COL3__UART5_TX		0x1b0b1
+			MX6SX_PAD_KEY_ROW3__UART5_RX		0x1b0b1
+ >;
+ };
+
+ pinctrl_usbotg1: usbotg1grp {
+ fsl,pins = <
+ MX6SX_PAD_GPIO1_IO08__USB_OTG1_OC 0x1b0b0
+ MX6SX_PAD_GPIO1_IO10__ANATOP_OTG1_ID 0x170b1
+ >;
+ };
+
+ pinctrl_usbotg1_vbus: usbotg1-vbusgrp {
+ fsl,pins = <
+ MX6SX_PAD_GPIO1_IO09__GPIO1_IO_9 0x1b0b0
+ >;
+ };
+
+ pinctrl_usbotg2: usbotg2grp {
+ fsl,pins = <
+ MX6SX_PAD_QSPI1B_DATA2__GPIO4_IO_26 0xb0b0
+ >;
+ };
+
+ pinctrl_usdhc2: usdhc2grp {
+ fsl,pins = <
+ MX6SX_PAD_SD2_CMD__USDHC2_CMD 0x17059
+ MX6SX_PAD_SD2_CLK__USDHC2_CLK 0x10059
+ MX6SX_PAD_SD2_DATA0__USDHC2_DATA0 0x17059
+ MX6SX_PAD_SD2_DATA1__USDHC2_DATA1 0x17059
+ MX6SX_PAD_SD2_DATA2__USDHC2_DATA2 0x17059
+ MX6SX_PAD_SD2_DATA3__USDHC2_DATA3 0x17059
+ MX6SX_PAD_KEY_COL2__GPIO2_IO_12 0x1b0b0
+ >;
+ };
+
+ pinctrl_usdhc3: usdhc3grp {
+ fsl,pins = <
+ MX6SX_PAD_SD3_CLK__USDHC3_CLK 0x10071
+ MX6SX_PAD_SD3_CMD__USDHC3_CMD 0x17071
+ MX6SX_PAD_SD3_DATA0__USDHC3_DATA0 0x17071
+ MX6SX_PAD_SD3_DATA1__USDHC3_DATA1 0x17071
+ MX6SX_PAD_SD3_DATA2__USDHC3_DATA2 0x17071
+ MX6SX_PAD_SD3_DATA3__USDHC3_DATA3 0x17071
+ >;
+ };
+
+ pinctrl_usdhc4_50mhz: usdhc4-50mhzgrp {
+ fsl,pins = <
+ MX6SX_PAD_SD4_CLK__USDHC4_CLK 0x10071
+ MX6SX_PAD_SD4_CMD__USDHC4_CMD 0x17071
+ MX6SX_PAD_SD4_RESET_B__USDHC4_RESET_B 0x17071
+ MX6SX_PAD_SD4_DATA0__USDHC4_DATA0 0x17071
+ MX6SX_PAD_SD4_DATA1__USDHC4_DATA1 0x17071
+ MX6SX_PAD_SD4_DATA2__USDHC4_DATA2 0x17071
+ MX6SX_PAD_SD4_DATA3__USDHC4_DATA3 0x17071
+ MX6SX_PAD_SD4_DATA4__USDHC4_DATA4 0x17071
+ MX6SX_PAD_SD4_DATA5__USDHC4_DATA5 0x17071
+ MX6SX_PAD_SD4_DATA6__USDHC4_DATA6 0x17071
+ MX6SX_PAD_SD4_DATA7__USDHC4_DATA7 0x17071
+ >;
+ };
+
+ pinctrl_usdhc4_100mhz: usdhc4-100mhzgrp {
+ fsl,pins = <
+ MX6SX_PAD_SD4_CLK__USDHC4_CLK 0x100b9
+ MX6SX_PAD_SD4_CMD__USDHC4_CMD 0x170b9
+ MX6SX_PAD_SD4_DATA0__USDHC4_DATA0 0x170b9
+ MX6SX_PAD_SD4_DATA1__USDHC4_DATA1 0x170b9
+ MX6SX_PAD_SD4_DATA2__USDHC4_DATA2 0x170b9
+ MX6SX_PAD_SD4_DATA3__USDHC4_DATA3 0x170b9
+ MX6SX_PAD_SD4_DATA4__USDHC4_DATA4 0x170b9
+ MX6SX_PAD_SD4_DATA5__USDHC4_DATA5 0x170b9
+ MX6SX_PAD_SD4_DATA6__USDHC4_DATA6 0x170b9
+ MX6SX_PAD_SD4_DATA7__USDHC4_DATA7 0x170b9
+ >;
+ };
+
+ pinctrl_usdhc4_200mhz: usdhc4-200mhzgrp {
+ fsl,pins = <
+ MX6SX_PAD_SD4_CLK__USDHC4_CLK 0x100f9
+ MX6SX_PAD_SD4_CMD__USDHC4_CMD 0x170f9
+ MX6SX_PAD_SD4_DATA0__USDHC4_DATA0 0x170f9
+ MX6SX_PAD_SD4_DATA1__USDHC4_DATA1 0x170f9
+ MX6SX_PAD_SD4_DATA2__USDHC4_DATA2 0x170f9
+ MX6SX_PAD_SD4_DATA3__USDHC4_DATA3 0x170f9
+ MX6SX_PAD_SD4_DATA4__USDHC4_DATA4 0x170f9
+ MX6SX_PAD_SD4_DATA5__USDHC4_DATA5 0x170f9
+ MX6SX_PAD_SD4_DATA6__USDHC4_DATA6 0x170f9
+ MX6SX_PAD_SD4_DATA7__USDHC4_DATA7 0x170f9
+ >;
+ };
+};
diff --git a/arch/arm/boot/dts/imx6sx-sdb-sai.dts b/arch/arm/boot/dts/imx6sx-sdb-sai.dts
new file mode 100644
index 000000000000..0155450d680e
--- /dev/null
+++ b/arch/arm/boot/dts/imx6sx-sdb-sai.dts
@@ -0,0 +1,67 @@
+/*
+ * Copyright (C) 2016 NXP Semiconductors
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED , WITHOUT WARRANTY OF ANY KIND
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "imx6sx-sdb.dts"
+
+/ {
+ sound {
+ audio-cpu = <&sai1>;
+ };
+};
+
+&audmux {
+ /* pin conflict with sai */
+ status = "disabled";
+};
+
+&sai1 {
+ status = "okay";
+};
+
+&sdma {
+ gpr = <&gpr>;
+ /* SDMA event remap for SAI1 */
+ fsl,sdma-event-remap = <0 15 1>, <0 16 1>;
+};
+
+&ssi2 {
+ status = "disabled";
+};
diff --git a/arch/arm/boot/dts/imx6sx-sdb.dts b/arch/arm/boot/dts/imx6sx-sdb.dts
index 0ad164ab5729..5bb8fd57e7f5 100644
--- a/arch/arm/boot/dts/imx6sx-sdb.dts
+++ b/arch/arm/boot/dts/imx6sx-sdb.dts
@@ -18,12 +18,14 @@
996000 1250000
792000 1175000
396000 1175000
+ 198000 1175000
>;
fsl,soc-operating-points = <
/* ARM kHz SOC uV */
996000 1250000
792000 1175000
396000 1175000
+ 198000 1175000
>;
};
diff --git a/arch/arm/boot/dts/imx6sx-sdb.dtsi b/arch/arm/boot/dts/imx6sx-sdb.dtsi
index f1d37306e8bf..e5eafe4d9a70 100644
--- a/arch/arm/boot/dts/imx6sx-sdb.dtsi
+++ b/arch/arm/boot/dts/imx6sx-sdb.dtsi
@@ -254,6 +254,12 @@
status = "okay";
};
+&sai1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_sai1>;
+ status = "disabled";
+};
+
&ssi2 {
status = "okay";
};
@@ -468,6 +474,16 @@
>;
};
+ pinctrl_sai1: sai1grp {
+ fsl,pins = <
+ MX6SX_PAD_CSI_DATA00__SAI1_TX_BCLK 0x130b0
+ MX6SX_PAD_CSI_DATA01__SAI1_TX_SYNC 0x130b0
+ MX6SX_PAD_CSI_HSYNC__SAI1_TX_DATA_0 0x120b0
+ MX6SX_PAD_CSI_VSYNC__SAI1_RX_DATA_0 0x130b0
+ MX6SX_PAD_CSI_PIXCLK__AUDMUX_MCLK 0x130b0
+ >;
+ };
+
pinctrl_uart1: uart1grp {
fsl,pins = <
MX6SX_PAD_GPIO1_IO04__UART1_TX 0x1b0b1
diff --git a/arch/arm/boot/dts/imx6sx.dtsi b/arch/arm/boot/dts/imx6sx.dtsi
index a5f76025a0ce..6a993bfda248 100644
--- a/arch/arm/boot/dts/imx6sx.dtsi
+++ b/arch/arm/boot/dts/imx6sx.dtsi
@@ -63,12 +63,14 @@
996000 1250000
792000 1175000
396000 1075000
+ 198000 975000
>;
fsl,soc-operating-points = <
/* ARM kHz SOC uV */
996000 1175000
792000 1175000
396000 1175000
+ 198000 1175000
>;
clock-latency = <61036>; /* two CLK32 periods */
clocks = <&clks IMX6SX_CLK_ARM>,
@@ -970,8 +972,7 @@
<&clks 0>, <&clks 0>;
clock-names = "bus", "mclk1", "mclk2", "mclk3";
dma-names = "rx", "tx";
- dmas = <&sdma 31 23 0>, <&sdma 32 23 0>;
- dma-source = <&gpr 0 15 0 16>;
+ dmas = <&sdma 31 24 0>, <&sdma 32 24 0>;
status = "disabled";
};
@@ -990,8 +991,7 @@
<&clks 0>, <&clks 0>;
clock-names = "bus", "mclk1", "mclk2", "mclk3";
dma-names = "rx", "tx";
- dmas = <&sdma 33 23 0>, <&sdma 34 23 0>;
- dma-source = <&gpr 0 17 0 18>;
+ dmas = <&sdma 33 24 0>, <&sdma 34 24 0>;
status = "disabled";
};
diff --git a/arch/arm/boot/dts/imx6ul-14x14-evk.dts b/arch/arm/boot/dts/imx6ul-14x14-evk.dts
index 720728001d3c..668a72997590 100644
--- a/arch/arm/boot/dts/imx6ul-14x14-evk.dts
+++ b/arch/arm/boot/dts/imx6ul-14x14-evk.dts
@@ -36,6 +36,45 @@
enable-active-high;
};
};
+
+ sound {
+ compatible = "simple-audio-card";
+ simple-audio-card,name = "mx6ul-wm8960";
+ simple-audio-card,format = "i2s";
+ simple-audio-card,bitclock-master = <&dailink_master>;
+ simple-audio-card,frame-master = <&dailink_master>;
+ simple-audio-card,widgets =
+ "Microphone", "Mic Jack",
+ "Line", "Line In",
+ "Line", "Line Out",
+ "Speaker", "Speaker",
+ "Headphone", "Headphone Jack";
+ simple-audio-card,routing =
+ "Headphone Jack", "HP_L",
+ "Headphone Jack", "HP_R",
+ "Speaker", "SPK_LP",
+ "Speaker", "SPK_LN",
+ "Speaker", "SPK_RP",
+ "Speaker", "SPK_RN",
+ "LINPUT1", "Mic Jack",
+ "LINPUT3", "Mic Jack",
+ "RINPUT1", "Mic Jack",
+ "RINPUT2", "Mic Jack";
+
+ simple-audio-card,cpu {
+ sound-dai = <&sai2>;
+ };
+
+ dailink_master: simple-audio-card,codec {
+ sound-dai = <&codec>;
+ clocks = <&clks IMX6UL_CLK_SAI2>;
+ };
+ };
+};
+
+&clks {
+ assigned-clocks = <&clks IMX6UL_CLK_PLL4_AUDIO_DIV>;
+ assigned-clock-rates = <786432000>;
};
&cpu0 {
@@ -43,6 +82,20 @@
soc-supply = <&reg_soc>;
};
+&i2c2 {
+ clock_frequency = <100000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c2>;
+ status = "okay";
+
+ codec: wm8960@1a {
+ #sound-dai-cells = <0>;
+ compatible = "wlf,wm8960";
+ reg = <0x1a>;
+ wlf,shared-lrclk;
+ };
+};
+
&fec1 {
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_enet1>;
@@ -86,6 +139,16 @@
};
};
+&sai2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_sai2>;
+ assigned-clocks = <&clks IMX6UL_CLK_SAI2_SEL>,
+ <&clks IMX6UL_CLK_SAI2>;
+ assigned-clock-parents = <&clks IMX6UL_CLK_PLL4_AUDIO_DIV>;
+ assigned-clock-rates = <0>, <12288000>;
+ status = "okay";
+};
+
&snvs_poweroff {
status = "okay";
};
@@ -272,6 +335,17 @@
>;
};
+ pinctrl_sai2: sai2grp {
+ fsl,pins = <
+ MX6UL_PAD_JTAG_TDI__SAI2_TX_BCLK 0x17088
+ MX6UL_PAD_JTAG_TDO__SAI2_TX_SYNC 0x17088
+ MX6UL_PAD_JTAG_TRST_B__SAI2_TX_DATA 0x11088
+ MX6UL_PAD_JTAG_TCK__SAI2_RX_DATA 0x11088
+ MX6UL_PAD_JTAG_TMS__SAI2_MCLK 0x17088
+ MX6UL_PAD_SNVS_TAMPER4__GPIO5_IO04 0x17059
+ >;
+ };
+
pinctrl_pwm1: pwm1grp {
fsl,pins = <
MX6UL_PAD_GPIO1_IO08__PWM1_OUT 0x110b0
diff --git a/arch/arm/boot/dts/imx6ul-pico-hobbit.dts b/arch/arm/boot/dts/imx6ul-pico-hobbit.dts
new file mode 100644
index 000000000000..8ce1fec36e86
--- /dev/null
+++ b/arch/arm/boot/dts/imx6ul-pico-hobbit.dts
@@ -0,0 +1,516 @@
+/*
+ * Copyright 2015 Technexion Ltd.
+ *
+ * Author: Wig Cheng <wig.cheng@technexion.com>
+ * Richard Hu <richard.hu@technexion.com>
+ * Tapani Utriainen <tapani@technexion.com>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED , WITHOUT WARRANTY OF ANY KIND
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+
+#include "imx6ul.dtsi"
+
+/ {
+ model = "Technexion Pico i.MX6UL Board";
+ compatible = "technexion,imx6ul-pico-hobbit", "fsl,imx6ul";
+
+ memory {
+ reg = <0x80000000 0x10000000>;
+ };
+
+ chosen {
+ stdout-path = &uart6;
+ };
+
+ backlight {
+ compatible = "pwm-backlight";
+ pwms = <&pwm3 0 5000000>;
+ brightness-levels = <0 4 8 16 32 64 128 255>;
+ default-brightness-level = <6>;
+ status = "okay";
+ };
+
+ reg_2p5v: regulator-2p5v {
+ compatible = "regulator-fixed";
+ regulator-name = "2P5V";
+ regulator-min-microvolt = <2500000>;
+ regulator-max-microvolt = <2500000>;
+ };
+
+ reg_3p3v: regulator-3p3v {
+ compatible = "regulator-fixed";
+ regulator-name = "3P3V";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ };
+
+ reg_sd1_vmmc: regulator-sd1-vmmc {
+ compatible = "regulator-fixed";
+ regulator-name = "VSD_3V3";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ gpio = <&gpio1 9 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ };
+
+ reg_usb_otg_vbus: regulator-usb-otg-vbus {
+ compatible = "regulator-fixed";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usb_otg1>;
+ regulator-name = "usb_otg_vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ gpio = <&gpio1 6 0>;
+ };
+
+ sound {
+ compatible = "fsl,imx-audio-sgtl5000";
+ model = "imx6ul-sgtl5000";
+ audio-cpu = <&sai1>;
+ audio-codec = <&codec>;
+ audio-routing =
+ "LINE_IN", "Line In Jack",
+ "MIC_IN", "Mic Jack",
+ "Mic Jack", "Mic Bias",
+ "Headphone Jack", "HP_OUT";
+ };
+
+ sys_mclk: clock-sys-mclk {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <24576000>;
+ };
+
+ leds {
+ compatible = "gpio-leds";
+
+ hobbitled {
+ label = "hobbitled";
+ gpios = <&gpio1 29 GPIO_ACTIVE_LOW>;
+ };
+ };
+};
+
+&can1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_flexcan1>;
+ status = "okay";
+};
+
+&can2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_flexcan2>;
+ status = "okay";
+};
+
+&clks {
+ assigned-clocks = <&clks IMX6UL_CLK_PLL4_AUDIO_DIV>;
+ assigned-clock-rates = <786432000>;
+};
+
+&fec2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_enet2>;
+ phy-mode = "rmii";
+ phy-handle = <&ethphy1>;
+ status = "okay";
+ phy-reset-gpios = <&gpio2 15 GPIO_ACTIVE_LOW>;
+ phy-reset-duration = <11>;
+
+ mdio {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ ethphy1: ethernet-phy@1 {
+ compatible = "ethernet-phy-ieee802.3-c22";
+ reg = <1>;
+ max-speed = <100>;
+ interrupt-parent = <&gpio5>;
+ interrupts = <6 IRQ_TYPE_LEVEL_LOW 0>;
+ };
+ };
+};
+
+&i2c1 {
+ clock-frequency = <100000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c1>;
+ status = "okay";
+
+ pmic: pfuze3000@08 {
+ compatible = "fsl,pfuze3000";
+ reg = <0x08>;
+
+ regulators {
+ /* VDD_ARM_SOC_IN*/
+ sw1b_reg: sw1b {
+ regulator-min-microvolt = <700000>;
+ regulator-max-microvolt = <1475000>;
+ regulator-boot-on;
+ regulator-always-on;
+ regulator-ramp-delay = <6250>;
+ };
+
+ /* DRAM */
+ sw3a_reg: sw3 {
+ regulator-min-microvolt = <900000>;
+ regulator-max-microvolt = <1650000>;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ /* DRAM */
+ vref_reg: vrefddr {
+ regulator-boot-on;
+ regulator-always-on;
+ };
+ };
+ };
+};
+
+&i2c2 {
+ clock_frequency = <100000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c2>;
+ status = "okay";
+
+ codec: sgtl5000@0a {
+ reg = <0x0a>;
+ compatible = "fsl,sgtl5000";
+ clocks = <&sys_mclk>;
+ VDDA-supply = <&reg_2p5v>;
+ VDDIO-supply = <&reg_3p3v>;
+ };
+};
+
+&i2c3 {
+ clock_frequency = <100000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c3>;
+ status = "okay";
+};
+
+&lcdif {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_lcdif_dat &pinctrl_lcdif_ctrl>;
+ display = <&display0>;
+ status = "okay";
+
+ display0: display0 {
+ bits-per-pixel = <32>;
+ bus-width = <24>;
+
+ display-timings {
+ native-mode = <&timing0>;
+
+ timing0: timing0 {
+ clock-frequency = <33200000>;
+ hactive = <800>;
+ vactive = <480>;
+ hfront-porch = <210>;
+ hback-porch = <46>;
+ hsync-len = <1>;
+ vback-porch = <22>;
+ vfront-porch = <23>;
+ vsync-len = <1>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+ };
+ };
+};
+
+&pwm3 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_pwm3>;
+ status = "okay";
+};
+
+&pwm7 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_pwm7>;
+ status = "okay";
+};
+
+&pwm8 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_pwm8>;
+ status = "okay";
+};
+
+&sai1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_sai1>;
+ status = "okay";
+};
+
+&uart3 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart3>;
+ fsl,uart-has-rtscts;
+ status = "okay";
+};
+
+&uart6 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart6>;
+ status = "okay";
+};
+
+&usbotg1 {
+ vbus-supply = <&reg_usb_otg_vbus>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usb_otg1_id>;
+ dr_mode = "otg";
+ disable-over-current;
+ status = "okay";
+};
+
+&usbotg2 {
+ dr_mode = "host";
+ disable-over-current;
+ status = "okay";
+};
+
+&usdhc1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usdhc1>;
+ bus-width = <8>;
+ no-1-8-v;
+ non-removable;
+ keep-power-in-suspend;
+ status = "okay";
+};
+
+&usdhc2 { /* Wifi SDIO */
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usdhc2>;
+ no-1-8-v;
+ keep-power-in-suspend;
+ wakeup-source;
+ status = "okay";
+};
+
+&iomuxc {
+ pinctrl_enet2: enet2grp {
+ fsl,pins = <
+ MX6UL_PAD_ENET1_TX_DATA1__ENET2_MDIO 0x1b0b0
+ MX6UL_PAD_ENET1_TX_EN__ENET2_MDC 0x1b0b0
+ MX6UL_PAD_ENET2_RX_EN__ENET2_RX_EN 0x1b0b0
+ MX6UL_PAD_ENET2_RX_ER__ENET2_RX_ER 0x1b0b0
+ MX6UL_PAD_ENET2_RX_DATA0__ENET2_RDATA00 0x1b0b0
+ MX6UL_PAD_ENET2_RX_DATA1__ENET2_RDATA01 0x1b0b0
+ MX6UL_PAD_ENET2_TX_EN__ENET2_TX_EN 0x1b0b0
+ MX6UL_PAD_ENET2_TX_DATA0__ENET2_TDATA00 0x1b0b0
+ MX6UL_PAD_ENET2_TX_DATA1__ENET2_TDATA01 0x1b0b0
+ MX6UL_PAD_ENET2_TX_CLK__ENET2_REF_CLK2 0x4001b031
+ MX6UL_PAD_SNVS_TAMPER6__GPIO5_IO06 0x800
+ MX6UL_PAD_UART4_TX_DATA__GPIO1_IO28 0x79
+ >;
+ };
+
+ pinctrl_flexcan1: flexcan1grp {
+ fsl,pins = <
+ MX6UL_PAD_ENET1_RX_DATA0__FLEXCAN1_TX 0x1b020
+ MX6UL_PAD_ENET1_RX_DATA1__FLEXCAN1_RX 0x1b020
+ >;
+ };
+
+ pinctrl_flexcan2: flexcan2grp {
+ fsl,pins = <
+ MX6UL_PAD_ENET1_TX_DATA0__FLEXCAN2_RX 0x1b020
+ MX6UL_PAD_ENET1_RX_EN__FLEXCAN2_TX 0x1b020
+ >;
+ };
+
+ pinctrl_i2c1: i2c1grp {
+ fsl,pins = <
+ MX6UL_PAD_GPIO1_IO02__I2C1_SCL 0x4001b8b0
+ MX6UL_PAD_GPIO1_IO03__I2C1_SDA 0x4001b8b0
+ >;
+ };
+
+ pinctrl_i2c2: i2c2grp {
+ fsl,pins = <
+ MX6UL_PAD_UART5_TX_DATA__I2C2_SCL 0x4001b8b0
+ MX6UL_PAD_UART5_RX_DATA__I2C2_SDA 0x4001b8b0
+ >;
+ };
+
+ pinctrl_i2c3: i2c3grp {
+ fsl,pins = <
+ MX6UL_PAD_UART1_TX_DATA__I2C3_SCL 0x4001b8b0
+ MX6UL_PAD_UART1_RX_DATA__I2C3_SDA 0x4001b8b0
+ >;
+ };
+
+ pinctrl_lcdif_dat: lcdifdatgrp {
+ fsl,pins = <
+ MX6UL_PAD_LCD_DATA00__LCDIF_DATA00 0x79
+ MX6UL_PAD_LCD_DATA01__LCDIF_DATA01 0x79
+ MX6UL_PAD_LCD_DATA02__LCDIF_DATA02 0x79
+ MX6UL_PAD_LCD_DATA03__LCDIF_DATA03 0x79
+ MX6UL_PAD_LCD_DATA04__LCDIF_DATA04 0x79
+ MX6UL_PAD_LCD_DATA05__LCDIF_DATA05 0x79
+ MX6UL_PAD_LCD_DATA06__LCDIF_DATA06 0x79
+ MX6UL_PAD_LCD_DATA07__LCDIF_DATA07 0x79
+ MX6UL_PAD_LCD_DATA08__LCDIF_DATA08 0x79
+ MX6UL_PAD_LCD_DATA09__LCDIF_DATA09 0x79
+ MX6UL_PAD_LCD_DATA10__LCDIF_DATA10 0x79
+ MX6UL_PAD_LCD_DATA11__LCDIF_DATA11 0x79
+ MX6UL_PAD_LCD_DATA12__LCDIF_DATA12 0x79
+ MX6UL_PAD_LCD_DATA13__LCDIF_DATA13 0x79
+ MX6UL_PAD_LCD_DATA14__LCDIF_DATA14 0x79
+ MX6UL_PAD_LCD_DATA15__LCDIF_DATA15 0x79
+ MX6UL_PAD_LCD_DATA16__LCDIF_DATA16 0x79
+ MX6UL_PAD_LCD_DATA17__LCDIF_DATA17 0x79
+ MX6UL_PAD_LCD_DATA18__LCDIF_DATA18 0x79
+ MX6UL_PAD_LCD_DATA19__LCDIF_DATA19 0x79
+ MX6UL_PAD_LCD_DATA20__LCDIF_DATA20 0x79
+ MX6UL_PAD_LCD_DATA21__LCDIF_DATA21 0x79
+ MX6UL_PAD_LCD_DATA22__LCDIF_DATA22 0x79
+ MX6UL_PAD_LCD_DATA23__LCDIF_DATA23 0x79
+ >;
+ };
+
+ pinctrl_lcdif_ctrl: lcdifctrlgrp {
+ fsl,pins = <
+ MX6UL_PAD_LCD_CLK__LCDIF_CLK 0x79
+ MX6UL_PAD_LCD_ENABLE__LCDIF_ENABLE 0x79
+ MX6UL_PAD_LCD_HSYNC__LCDIF_HSYNC 0x79
+ MX6UL_PAD_LCD_VSYNC__LCDIF_VSYNC 0x79
+ /* LCD reset */
+ MX6UL_PAD_SNVS_TAMPER9__GPIO5_IO09 0x79
+ >;
+ };
+
+ pinctrl_pwm3: pwm3grp {
+ fsl,pins = <
+ MX6UL_PAD_NAND_ALE__PWM3_OUT 0x110b0
+ >;
+ };
+
+ pinctrl_pwm7: pwm7grp {
+ fsl,pins = <
+ MX6UL_PAD_ENET1_TX_CLK__PWM7_OUT 0x110b0
+ >;
+ };
+
+ pinctrl_pwm8: pwm8grp {
+ fsl,pins = <
+ MX6UL_PAD_ENET1_RX_ER__PWM8_OUT 0x110b0
+ >;
+ };
+
+ pinctrl_sai1: sai1grp {
+ fsl,pins = <
+ MX6UL_PAD_CSI_DATA04__SAI1_TX_SYNC 0x1b0b0
+ MX6UL_PAD_CSI_DATA05__SAI1_TX_BCLK 0x1b0b0
+ MX6UL_PAD_CSI_DATA06__SAI1_RX_DATA 0x110b0
+ MX6UL_PAD_CSI_DATA07__SAI1_TX_DATA 0x1f0b8
+ >;
+ };
+
+ pinctrl_uart3: uart3grp {
+ fsl,pins = <
+ MX6UL_PAD_UART3_TX_DATA__UART3_DCE_TX 0x1b0b0
+ MX6UL_PAD_UART3_RX_DATA__UART3_DCE_RX 0x1b0b0
+ MX6UL_PAD_UART3_RTS_B__UART3_DCE_RTS 0x1b0b0
+ MX6UL_PAD_UART3_CTS_B__UART3_DCE_CTS 0x1b0b0
+ >;
+ };
+
+ pinctrl_uart5: uart5grp {
+ fsl,pins = <
+ MX6UL_PAD_GPIO1_IO04__UART5_DCE_TX 0x1b0b1
+ MX6UL_PAD_GPIO1_IO05__UART5_DCE_RX 0x1b0b1
+ MX6UL_PAD_GPIO1_IO08__UART5_DCE_RTS 0x1b0b1
+ MX6UL_PAD_GPIO1_IO09__UART5_DCE_CTS 0x1b0b1
+ >;
+ };
+
+ pinctrl_uart6: uart6grp {
+ fsl,pins = <
+ MX6UL_PAD_CSI_MCLK__UART6_DCE_TX 0x1b0b1
+ MX6UL_PAD_CSI_PIXCLK__UART6_DCE_RX 0x1b0b1
+ >;
+ };
+
+ pinctrl_usb_otg1: usbotg1grp {
+ fsl,pins = <
+ MX6UL_PAD_GPIO1_IO06__GPIO1_IO06 0x10b0
+ >;
+ };
+
+ pinctrl_usb_otg1_id: usbotg1idgrp {
+ fsl,pins = <
+ MX6UL_PAD_GPIO1_IO00__ANATOP_OTG1_ID 0x17059
+ >;
+ };
+
+ pinctrl_usdhc1: usdhc1grp {
+ fsl,pins = <
+ MX6UL_PAD_SD1_CMD__USDHC1_CMD 0x17059
+ MX6UL_PAD_SD1_CLK__USDHC1_CLK 0x10071
+ MX6UL_PAD_SD1_DATA0__USDHC1_DATA0 0x17059
+ MX6UL_PAD_SD1_DATA1__USDHC1_DATA1 0x17059
+ MX6UL_PAD_SD1_DATA2__USDHC1_DATA2 0x17059
+ MX6UL_PAD_SD1_DATA3__USDHC1_DATA3 0x17059
+ MX6UL_PAD_UART1_RTS_B__USDHC1_CD_B 0x03029
+ MX6UL_PAD_NAND_READY_B__USDHC1_DATA4 0x17059
+ MX6UL_PAD_NAND_CE0_B__USDHC1_DATA5 0x17059
+ MX6UL_PAD_NAND_CE1_B__USDHC1_DATA6 0x17059
+ MX6UL_PAD_NAND_CLE__USDHC1_DATA7 0x17059
+ >;
+ };
+
+ pinctrl_usdhc2: usdhc2grp {
+ fsl,pins = <
+ MX6UL_PAD_NAND_WE_B__USDHC2_CMD 0x17059
+ MX6UL_PAD_NAND_RE_B__USDHC2_CLK 0x10059
+ MX6UL_PAD_NAND_DATA00__USDHC2_DATA0 0x17059
+ MX6UL_PAD_NAND_DATA01__USDHC2_DATA1 0x17059
+ MX6UL_PAD_NAND_DATA02__USDHC2_DATA2 0x17059
+ MX6UL_PAD_NAND_DATA03__USDHC2_DATA3 0x17059
+ >;
+ };
+};
diff --git a/arch/arm/boot/dts/imx6ul-tx6ul-0010.dts b/arch/arm/boot/dts/imx6ul-tx6ul-0010.dts
new file mode 100644
index 000000000000..8c2f3df79b47
--- /dev/null
+++ b/arch/arm/boot/dts/imx6ul-tx6ul-0010.dts
@@ -0,0 +1,53 @@
+/*
+ * Copyright 2015 Lothar Waßmann <LW@KARO-electronics.de>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "imx6ul.dtsi"
+#include "imx6ul-tx6ul.dtsi"
+
+/ {
+ model = "Ka-Ro electronics TXUL-0010 Module";
+ compatible = "karo,imx6ul-tx6ul", "fsl,imx6ul";
+
+ aliases {
+ /delete-property/ mmc1;
+ };
+};
diff --git a/arch/arm/boot/dts/imx6ul-tx6ul-0011.dts b/arch/arm/boot/dts/imx6ul-tx6ul-0011.dts
new file mode 100644
index 000000000000..d82698e7d50f
--- /dev/null
+++ b/arch/arm/boot/dts/imx6ul-tx6ul-0011.dts
@@ -0,0 +1,68 @@
+/*
+ * Copyright 2015 Lothar Waßmann <LW@KARO-electronics.de>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "imx6ul.dtsi"
+#include "imx6ul-tx6ul.dtsi"
+
+/ {
+ model = "Ka-Ro electronics TXUL-0011 Module";
+ compatible = "karo,imx6ul-tx6ul", "fsl,imx6ul";
+
+ aliases {
+ mmc0 = &usdhc2;
+ mmc1 = &usdhc1;
+ };
+};
+
+&gpmi {
+ status = "disabled";
+};
+
+&usdhc2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usdhc2>;
+ bus-width = <4>;
+ no-1-8-v;
+ non-removable;
+ fsl,wp-controller;
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/imx6ul-tx6ul-mainboard.dts b/arch/arm/boot/dts/imx6ul-tx6ul-mainboard.dts
new file mode 100644
index 000000000000..d25899b71575
--- /dev/null
+++ b/arch/arm/boot/dts/imx6ul-tx6ul-mainboard.dts
@@ -0,0 +1,271 @@
+/*
+ * Copyright 2015 Lothar Waßmann <LW@KARO-electronics.de>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "imx6ul.dtsi"
+#include "imx6ul-tx6ul.dtsi"
+
+/ {
+ model = "Ka-Ro electronics TXUL-0010 Module on TXUL Mainboard";
+ compatible = "karo,imx6ul-tx6ul", "fsl,imx6ul";
+
+ aliases {
+ lcdif_24bit_pins_a = &pinctrl_disp0_3;
+ mmc0 = &usdhc1;
+ /delete-property/ mmc1;
+ serial2 = &uart3;
+ serial4 = &uart5;
+ };
+ /delete-node/ sound;
+};
+
+&can1 {
+ xceiver-supply = <&reg_3v3>;
+};
+
+&can2 {
+ xceiver-supply = <&reg_3v3>;
+};
+
+&ds1339 {
+ status = "disabled";
+};
+
+&fec1 {
+ pinctrl-0 = <&pinctrl_enet1 &pinctrl_etnphy0_rst>;
+ /delete-node/ mdio;
+};
+
+&fec2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_enet2 &pinctrl_enet2_mdio &pinctrl_etnphy1_rst>;
+ phy-mode = "rmii";
+ phy-reset-gpios = <&gpio4 28 GPIO_ACTIVE_HIGH>;
+ phy-supply = <&reg_3v3_etn>;
+ phy-handle = <&etnphy1>;
+ status = "okay";
+
+ mdio {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ etnphy0: ethernet-phy@0 {
+ compatible = "ethernet-phy-ieee802.3-c22";
+ reg = <0>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_etnphy0_int>;
+ interrupt-parent = <&gpio5>;
+ interrupts = <5 IRQ_TYPE_EDGE_FALLING>;
+ interrupts-extended = <&gpio5 5 IRQ_TYPE_EDGE_FALLING>;
+ status = "okay";
+ };
+
+ etnphy1: ethernet-phy@2 {
+ compatible = "ethernet-phy-ieee802.3-c22";
+ reg = <2>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_etnphy1_int>;
+ interrupt-parent = <&gpio4>;
+ interrupts = <27 IRQ_TYPE_EDGE_FALLING>;
+ interrupts-extended = <&gpio4 27 IRQ_TYPE_EDGE_FALLING>;
+ status = "okay";
+ };
+ };
+};
+
+&i2c_gpio {
+ status = "disabled";
+};
+
+&i2c2 {
+ /delete-node/ codec@0a;
+ /delete-node/ touchscreen@48;
+
+ rtc: mcp7940x@6f {
+ compatible = "microchip,mcp7940x";
+ reg = <0x6f>;
+ };
+};
+
+&kpp {
+ status = "disabled";
+};
+
+&lcdif {
+ pinctrl-0 = <&pinctrl_disp0_3>;
+};
+
+&reg_usbotg_vbus {
+ status = "disabled";
+};
+
+&usdhc1 {
+ pinctrl-0 = <&pinctrl_usdhc1>;
+ non-removable;
+ /delete-property/ cd-gpios;
+ cap-sdio-irq;
+};
+
+&uart1 {
+ pinctrl-0 = <&pinctrl_uart1>;
+ /delete-property/ fsl,uart-has-rtscts;
+};
+
+&uart2 {
+ pinctrl-0 = <&pinctrl_uart2>;
+ /delete-property/ fsl,uart-has-rtscts;
+ status = "okay";
+};
+
+&uart3 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart3>;
+ status = "okay";
+};
+
+&uart4 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart4>;
+ status = "okay";
+};
+
+&uart5 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart5>;
+ status = "okay";
+};
+
+&uart6 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart6>;
+ status = "okay";
+};
+
+&uart7 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart7>;
+ status = "okay";
+};
+
+&uart8 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart8>;
+ status = "disabled"; /* conflicts with LCDIF */
+};
+
+&iomuxc {
+ hoggrp {
+ fsl,pins = <
+ MX6UL_PAD_CSI_DATA01__GPIO4_IO22 0x0b0b0 /* WLAN_RESET */
+ >;
+ };
+
+ pinctrl_disp0_3: disp0grp-3 {
+ fsl,pins = <
+ MX6UL_PAD_LCD_CLK__LCDIF_CLK 0x10 /* LSCLK */
+ MX6UL_PAD_LCD_ENABLE__LCDIF_ENABLE 0x10 /* OE_ACD */
+ MX6UL_PAD_LCD_HSYNC__LCDIF_HSYNC 0x10 /* HSYNC */
+ MX6UL_PAD_LCD_VSYNC__LCDIF_VSYNC 0x10 /* VSYNC */
+ MX6UL_PAD_LCD_DATA02__LCDIF_DATA02 0x10
+ MX6UL_PAD_LCD_DATA03__LCDIF_DATA03 0x10
+ MX6UL_PAD_LCD_DATA04__LCDIF_DATA04 0x10
+ MX6UL_PAD_LCD_DATA05__LCDIF_DATA05 0x10
+ MX6UL_PAD_LCD_DATA06__LCDIF_DATA06 0x10
+ MX6UL_PAD_LCD_DATA07__LCDIF_DATA07 0x10
+ /* LCD_DATA08..09 not wired */
+ MX6UL_PAD_LCD_DATA10__LCDIF_DATA10 0x10
+ MX6UL_PAD_LCD_DATA11__LCDIF_DATA11 0x10
+ MX6UL_PAD_LCD_DATA12__LCDIF_DATA12 0x10
+ MX6UL_PAD_LCD_DATA13__LCDIF_DATA13 0x10
+ MX6UL_PAD_LCD_DATA14__LCDIF_DATA14 0x10
+ MX6UL_PAD_LCD_DATA15__LCDIF_DATA15 0x10
+ /* LCD_DATA16..17 not wired */
+ MX6UL_PAD_LCD_DATA18__LCDIF_DATA18 0x10
+ MX6UL_PAD_LCD_DATA19__LCDIF_DATA19 0x10
+ MX6UL_PAD_LCD_DATA20__LCDIF_DATA20 0x10
+ MX6UL_PAD_LCD_DATA21__LCDIF_DATA21 0x10
+ MX6UL_PAD_LCD_DATA22__LCDIF_DATA22 0x10
+ MX6UL_PAD_LCD_DATA23__LCDIF_DATA23 0x10
+ >;
+ };
+
+ pinctrl_enet2_mdio: enet2-mdiogrp {
+ fsl,pins = <
+ MX6UL_PAD_GPIO1_IO07__ENET2_MDC 0x0b0b0
+ MX6UL_PAD_GPIO1_IO06__ENET2_MDIO 0x1b0b0
+ >;
+ };
+
+ pinctrl_uart3: uart3grp {
+ fsl,pins = <
+ MX6UL_PAD_UART3_TX_DATA__UART3_DCE_TX 0x0b0b0
+ MX6UL_PAD_UART3_RX_DATA__UART3_DCE_RX 0x0b0b0
+ >;
+ };
+
+ pinctrl_uart4: uart4grp {
+ fsl,pins = <
+ MX6UL_PAD_UART4_TX_DATA__UART4_DCE_TX 0x0b0b0
+ MX6UL_PAD_UART4_RX_DATA__UART4_DCE_RX 0x0b0b0
+ >;
+ };
+
+ pinctrl_uart6: uart6grp {
+ fsl,pins = <
+ MX6UL_PAD_CSI_MCLK__UART6_DCE_TX 0x0b0b0
+ MX6UL_PAD_CSI_PIXCLK__UART6_DCE_RX 0x0b0b0
+ >;
+ };
+
+ pinctrl_uart7: uart7grp {
+ fsl,pins = <
+ MX6UL_PAD_LCD_DATA16__UART7_DCE_TX 0x0b0b0
+ MX6UL_PAD_LCD_DATA17__UART7_DCE_RX 0x0b0b0
+ >;
+ };
+
+ pinctrl_uart8: uart8grp {
+ fsl,pins = <
+ MX6UL_PAD_LCD_DATA20__UART8_DCE_TX 0x0b0b0
+ MX6UL_PAD_LCD_DATA21__UART8_DCE_RX 0x0b0b0
+ >;
+ };
+};
diff --git a/arch/arm/boot/dts/imx6ul-tx6ul.dtsi b/arch/arm/boot/dts/imx6ul-tx6ul.dtsi
new file mode 100644
index 000000000000..437e9aad5920
--- /dev/null
+++ b/arch/arm/boot/dts/imx6ul-tx6ul.dtsi
@@ -0,0 +1,973 @@
+/*
+ * Copyright 2015 Lothar Waßmann <LW@KARO-electronics.de>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/interrupt-controller/irq.h>
+#include <dt-bindings/pwm/pwm.h>
+
+/ {
+ aliases {
+ can0 = &can2;
+ can1 = &can1;
+ display = &display;
+ i2c0 = &i2c2;
+ i2c1 = &i2c_gpio;
+ i2c2 = &i2c1;
+ i2c3 = &i2c3;
+ i2c4 = &i2c4;
+ lcdif_23bit_pins_a = &pinctrl_disp0_1;
+ lcdif_24bit_pins_a = &pinctrl_disp0_2;
+ pwm0 = &pwm5;
+ reg_can_xcvr = &reg_can_xcvr;
+ serial2 = &uart5;
+ serial4 = &uart3;
+ spi0 = &ecspi2;
+ spi1 = &spi_gpio;
+ stk5led = &user_led;
+ usbh1 = &usbotg2;
+ usbotg = &usbotg1;
+ };
+
+ chosen {
+ stdout-path = &uart1;
+ };
+
+ memory {
+ reg = <0 0>; /* will be filled by U-Boot */
+ };
+
+ clocks {
+ mclk: mclk {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <26000000>;
+ };
+ };
+
+ backlight: backlight {
+ compatible = "pwm-backlight";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_lcd_rst>;
+ enable-gpios = <&gpio3 4 GPIO_ACTIVE_HIGH>;
+ pwms = <&pwm5 0 500000 PWM_POLARITY_INVERTED>;
+ power-supply = <&reg_lcd_pwr>;
+ /*
+ * a poor man's way to create a 1:1 relationship between
+ * the PWM value and the actual duty cycle
+ */
+ brightness-levels = < 0 1 2 3 4 5 6 7 8 9
+ 10 11 12 13 14 15 16 17 18 19
+ 20 21 22 23 24 25 26 27 28 29
+ 30 31 32 33 34 35 36 37 38 39
+ 40 41 42 43 44 45 46 47 48 49
+ 50 51 52 53 54 55 56 57 58 59
+ 60 61 62 63 64 65 66 67 68 69
+ 70 71 72 73 74 75 76 77 78 79
+ 80 81 82 83 84 85 86 87 88 89
+ 90 91 92 93 94 95 96 97 98 99
+ 100>;
+ default-brightness-level = <50>;
+ };
+
+ i2c_gpio: i2c-gpio {
+ compatible = "i2c-gpio";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c_gpio>;
+ gpios = <
+ &gpio5 1 GPIO_ACTIVE_HIGH /* SDA */
+ &gpio5 0 GPIO_ACTIVE_HIGH /* SCL */
+ >;
+ clock-frequency = <400000>;
+ status = "okay";
+
+ ds1339: rtc@68 {
+ compatible = "dallas,ds1339";
+ reg = <0x68>;
+ status = "disabled";
+ };
+ };
+
+ leds {
+ compatible = "gpio-leds";
+
+ user_led: user {
+ label = "Heartbeat";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_led>;
+ gpios = <&gpio5 9 GPIO_ACTIVE_HIGH>;
+ linux,default-trigger = "heartbeat";
+ };
+ };
+
+ reg_3v3_etn: regulator-3v3etn {
+ compatible = "regulator-fixed";
+ regulator-name = "3V3_ETN";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_etnphy_power>;
+ gpio = <&gpio5 7 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ };
+
+ reg_2v5: regulator-2v5 {
+ compatible = "regulator-fixed";
+ regulator-name = "2V5";
+ regulator-min-microvolt = <2500000>;
+ regulator-max-microvolt = <2500000>;
+ regulator-always-on;
+ };
+
+ reg_3v3: regulator-3v3 {
+ compatible = "regulator-fixed";
+ regulator-name = "3V3";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ };
+
+ reg_can_xcvr: regulator-canxcvr {
+ compatible = "regulator-fixed";
+ regulator-name = "CAN XCVR";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_flexcan_xcvr>;
+ gpio = <&gpio3 5 GPIO_ACTIVE_HIGH>;
+ enable-active-low;
+ };
+
+ reg_lcd_pwr: regulator-lcdpwr {
+ compatible = "regulator-fixed";
+ regulator-name = "LCD POWER";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_lcd_pwr>;
+ gpio = <&gpio5 4 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ reg_usbh1_vbus: regulator-usbh1vbus {
+ compatible = "regulator-fixed";
+ regulator-name = "usbh1_vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usbh1_vbus &pinctrl_usbh1_oc>;
+ gpio = <&gpio1 2 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ };
+
+ reg_usbotg_vbus: regulator-usbotgvbus {
+ compatible = "regulator-fixed";
+ regulator-name = "usbotg_vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usbotg_vbus &pinctrl_usbotg_oc>;
+ gpio = <&gpio1 26 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ };
+
+ spi_gpio: spi-gpio {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ compatible = "spi-gpio";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_spi_gpio>;
+ gpio-mosi = <&gpio1 30 GPIO_ACTIVE_HIGH>;
+ gpio-miso = <&gpio1 31 GPIO_ACTIVE_HIGH>;
+ gpio-sck = <&gpio1 28 GPIO_ACTIVE_HIGH>;
+ num-chipselects = <2>;
+ cs-gpios = <
+ &gpio1 29 GPIO_ACTIVE_HIGH
+ &gpio1 10 GPIO_ACTIVE_HIGH
+ >;
+ status = "disabled";
+
+ spi@0 {
+ compatible = "spidev";
+ reg = <0>;
+ spi-max-frequency = <660000>;
+ };
+
+ spi@1 {
+ compatible = "spidev";
+ reg = <1>;
+ spi-max-frequency = <660000>;
+ };
+ };
+
+ sound {
+ compatible = "karo,imx6ul-tx6ul-sgtl5000",
+ "simple-audio-card";
+ simple-audio-card,name = "imx6ul-tx6ul-sgtl5000-audio";
+ simple-audio-card,format = "i2s";
+ simple-audio-card,bitclock-master = <&codec_dai>;
+ simple-audio-card,frame-master = <&codec_dai>;
+ simple-audio-card,widgets =
+ "Microphone", "Mic Jack",
+ "Line", "Line In",
+ "Line", "Line Out",
+ "Headphone", "Headphone Jack";
+ simple-audio-card,routing =
+ "MIC_IN", "Mic Jack",
+ "Mic Jack", "Mic Bias",
+ "Headphone Jack", "HP_OUT";
+
+ cpu_dai: simple-audio-card,cpu {
+ sound-dai = <&sai2>;
+ };
+
+ codec_dai: simple-audio-card,codec {
+ sound-dai = <&sgtl5000>;
+ };
+ };
+};
+
+&can1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_flexcan1>;
+ xceiver-supply = <&reg_can_xcvr>;
+ status = "okay";
+};
+
+&can2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_flexcan2>;
+ xceiver-supply = <&reg_can_xcvr>;
+ status = "okay";
+};
+
+&ecspi2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_ecspi2>;
+ fsl,spi-num-chipselects = <2>;
+ cs-gpios = <
+ &gpio1 29 GPIO_ACTIVE_HIGH
+ &gpio1 10 GPIO_ACTIVE_HIGH
+ >;
+ status = "disabled";
+
+ spidev0: spi@0 {
+ compatible = "spidev";
+ reg = <0>;
+ spi-max-frequency = <60000000>;
+ };
+
+ spidev1: spi@1 {
+ compatible = "spidev";
+ reg = <1>;
+ spi-max-frequency = <60000000>;
+ };
+};
+
+&fec1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_enet1 &pinctrl_enet1_mdio &pinctrl_etnphy0_rst>;
+ phy-mode = "rmii";
+ phy-reset-gpios = <&gpio5 6 GPIO_ACTIVE_HIGH>;
+ phy-supply = <&reg_3v3_etn>;
+ phy-handle = <&etnphy0>;
+ status = "okay";
+
+ mdio {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ etnphy0: ethernet-phy@0 {
+ compatible = "ethernet-phy-ieee802.3-c22";
+ reg = <0>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_etnphy0_int>;
+ interrupt-parent = <&gpio5>;
+ interrupts = <5 IRQ_TYPE_EDGE_FALLING>;
+ status = "okay";
+ };
+
+ etnphy1: ethernet-phy@2 {
+ compatible = "ethernet-phy-ieee802.3-c22";
+ reg = <2>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_etnphy1_int>;
+ interrupt-parent = <&gpio4>;
+ interrupts = <27 IRQ_TYPE_EDGE_FALLING>;
+ status = "okay";
+ };
+ };
+};
+
+&fec2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_enet2 &pinctrl_etnphy1_rst>;
+ phy-mode = "rmii";
+ phy-reset-gpios = <&gpio4 28 GPIO_ACTIVE_HIGH>;
+ phy-supply = <&reg_3v3_etn>;
+ phy-handle = <&etnphy1>;
+ status = "disabled";
+};
+
+&gpmi {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_gpmi_nand>;
+ nand-on-flash-bbt;
+ fsl,no-blockmark-swap;
+ status = "okay";
+};
+
+&i2c2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c2>;
+ clock-frequency = <400000>;
+ status = "okay";
+
+ sgtl5000: codec@0a {
+ compatible = "fsl,sgtl5000";
+ reg = <0x0a>;
+ #sound-dai-cells = <0>;
+ VDDA-supply = <&reg_2v5>;
+ VDDIO-supply = <&reg_3v3>;
+ clocks = <&mclk>;
+ };
+
+ polytouch: polytouch@38 {
+ compatible = "edt,edt-ft5x06";
+ reg = <0x38>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_edt_ft5x06>;
+ interrupt-parent = <&gpio5>;
+ interrupts = <2 IRQ_TYPE_EDGE_FALLING>;
+ reset-gpios = <&gpio5 3 GPIO_ACTIVE_LOW>;
+ wake-gpios = <&gpio5 8 GPIO_ACTIVE_HIGH>;
+ wakeup-source;
+ };
+
+ touchscreen: touchscreen@48 {
+ compatible = "ti,tsc2007";
+ reg = <0x48>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_tsc2007>;
+ interrupt-parent = <&gpio3>;
+ interrupts = <26 IRQ_TYPE_NONE>;
+ gpios = <&gpio3 26 GPIO_ACTIVE_LOW>;
+ ti,x-plate-ohms = <660>;
+ wakeup-source;
+ };
+};
+
+&kpp {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_kpp>;
+ /* sample keymap */
+ /* row/col 0..3 are mapped to KPP row/col 4..7 */
+ linux,keymap = <
+ MATRIX_KEY(4, 4, KEY_POWER)
+ MATRIX_KEY(4, 5, KEY_KP0)
+ MATRIX_KEY(4, 6, KEY_KP1)
+ MATRIX_KEY(4, 7, KEY_KP2)
+ MATRIX_KEY(5, 4, KEY_KP3)
+ MATRIX_KEY(5, 5, KEY_KP4)
+ MATRIX_KEY(5, 6, KEY_KP5)
+ MATRIX_KEY(5, 7, KEY_KP6)
+ MATRIX_KEY(6, 4, KEY_KP7)
+ MATRIX_KEY(6, 5, KEY_KP8)
+ MATRIX_KEY(6, 6, KEY_KP9)
+ >;
+ status = "okay";
+};
+
+&lcdif {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_disp0_1>;
+ lcd-supply = <&reg_lcd_pwr>;
+ display = <&display>;
+ status = "okay";
+
+ display: display@di0 {
+ bits-per-pixel = <32>;
+ bus-width = <24>;
+ status = "okay";
+
+ display-timings {
+ VGA {
+ clock-frequency = <25200000>;
+ hactive = <640>;
+ vactive = <480>;
+ hback-porch = <48>;
+ hsync-len = <96>;
+ hfront-porch = <16>;
+ vback-porch = <31>;
+ vsync-len = <2>;
+ vfront-porch = <12>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <1>;
+ };
+
+ ETV570 {
+ clock-frequency = <25200000>;
+ hactive = <640>;
+ vactive = <480>;
+ hback-porch = <114>;
+ hsync-len = <30>;
+ hfront-porch = <16>;
+ vback-porch = <32>;
+ vsync-len = <3>;
+ vfront-porch = <10>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <1>;
+ };
+
+ ET0350 {
+ clock-frequency = <6413760>;
+ hactive = <320>;
+ vactive = <240>;
+ hback-porch = <34>;
+ hsync-len = <34>;
+ hfront-porch = <20>;
+ vback-porch = <15>;
+ vsync-len = <3>;
+ vfront-porch = <4>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <1>;
+ };
+
+ ET0430 {
+ clock-frequency = <9009000>;
+ hactive = <480>;
+ vactive = <272>;
+ hback-porch = <2>;
+ hsync-len = <41>;
+ hfront-porch = <2>;
+ vback-porch = <2>;
+ vsync-len = <10>;
+ vfront-porch = <2>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+
+ ET0500 {
+ clock-frequency = <33264000>;
+ hactive = <800>;
+ vactive = <480>;
+ hback-porch = <88>;
+ hsync-len = <128>;
+ hfront-porch = <40>;
+ vback-porch = <33>;
+ vsync-len = <2>;
+ vfront-porch = <10>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <1>;
+ };
+
+ ET0700 { /* same as ET0500 */
+ clock-frequency = <33264000>;
+ hactive = <800>;
+ vactive = <480>;
+ hback-porch = <88>;
+ hsync-len = <128>;
+ hfront-porch = <40>;
+ vback-porch = <33>;
+ vsync-len = <2>;
+ vfront-porch = <10>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <1>;
+ };
+
+ ETQ570 {
+ clock-frequency = <6596040>;
+ hactive = <320>;
+ vactive = <240>;
+ hback-porch = <38>;
+ hsync-len = <30>;
+ hfront-porch = <30>;
+ vback-porch = <16>;
+ vsync-len = <3>;
+ vfront-porch = <4>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <1>;
+ };
+ };
+ };
+};
+
+&pwm5 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_pwm5>;
+ #pwm-cells = <3>;
+ status = "okay";
+};
+
+&sai2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_sai2>;
+ status = "okay";
+};
+
+&uart1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart1 &pinctrl_uart1_rtscts>;
+ fsl,uart-has-rtscts;
+ status = "okay";
+};
+
+&uart2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart2 &pinctrl_uart2_rtscts>;
+ fsl,uart-has-rtscts;
+ status = "okay";
+};
+
+&uart5 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart5 &pinctrl_uart5_rtscts>;
+ fsl,uart-has-rtscts;
+ status = "okay";
+};
+
+&usbotg1 {
+ vbus-supply = <&reg_usbotg_vbus>;
+ dr_mode = "peripheral";
+ disable-over-current;
+ status = "okay";
+};
+
+&usbotg2 {
+ vbus-supply = <&reg_usbh1_vbus>;
+ dr_mode = "host";
+ disable-over-current;
+ status = "okay";
+};
+
+&usdhc1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usdhc1 &pinctrl_usdhc1_cd>;
+ bus-width = <4>;
+ no-1-8-v;
+ cd-gpios = <&gpio4 14 GPIO_ACTIVE_LOW>;
+ fsl,wp-controller;
+ status = "okay";
+};
+
+&iomuxc {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_hog>;
+
+ pinctrl_hog: hoggrp {
+ };
+
+ pinctrl_led: ledgrp {
+ fsl,pins = <
+ MX6UL_PAD_SNVS_TAMPER9__GPIO5_IO09 0x0b0b0 /* LED */
+ >;
+ };
+
+ pinctrl_disp0_1: disp0grp-1 {
+ fsl,pins = <
+ MX6UL_PAD_LCD_CLK__LCDIF_CLK 0x10 /* LSCLK */
+ MX6UL_PAD_LCD_ENABLE__LCDIF_ENABLE 0x10 /* OE_ACD */
+ MX6UL_PAD_LCD_HSYNC__LCDIF_HSYNC 0x10 /* HSYNC */
+ MX6UL_PAD_LCD_VSYNC__LCDIF_VSYNC 0x10 /* VSYNC */
+ /* PAD DISP0_DAT0 is used for the Flexcan transceiver control on STK5-v5 */
+ MX6UL_PAD_LCD_DATA01__LCDIF_DATA01 0x10
+ MX6UL_PAD_LCD_DATA02__LCDIF_DATA02 0x10
+ MX6UL_PAD_LCD_DATA03__LCDIF_DATA03 0x10
+ MX6UL_PAD_LCD_DATA04__LCDIF_DATA04 0x10
+ MX6UL_PAD_LCD_DATA05__LCDIF_DATA05 0x10
+ MX6UL_PAD_LCD_DATA06__LCDIF_DATA06 0x10
+ MX6UL_PAD_LCD_DATA07__LCDIF_DATA07 0x10
+ MX6UL_PAD_LCD_DATA08__LCDIF_DATA08 0x10
+ MX6UL_PAD_LCD_DATA09__LCDIF_DATA09 0x10
+ MX6UL_PAD_LCD_DATA10__LCDIF_DATA10 0x10
+ MX6UL_PAD_LCD_DATA11__LCDIF_DATA11 0x10
+ MX6UL_PAD_LCD_DATA12__LCDIF_DATA12 0x10
+ MX6UL_PAD_LCD_DATA13__LCDIF_DATA13 0x10
+ MX6UL_PAD_LCD_DATA14__LCDIF_DATA14 0x10
+ MX6UL_PAD_LCD_DATA15__LCDIF_DATA15 0x10
+ MX6UL_PAD_LCD_DATA16__LCDIF_DATA16 0x10
+ MX6UL_PAD_LCD_DATA17__LCDIF_DATA17 0x10
+ MX6UL_PAD_LCD_DATA18__LCDIF_DATA18 0x10
+ MX6UL_PAD_LCD_DATA19__LCDIF_DATA19 0x10
+ MX6UL_PAD_LCD_DATA20__LCDIF_DATA20 0x10
+ MX6UL_PAD_LCD_DATA21__LCDIF_DATA21 0x10
+ MX6UL_PAD_LCD_DATA22__LCDIF_DATA22 0x10
+ MX6UL_PAD_LCD_DATA23__LCDIF_DATA23 0x10
+ >;
+ };
+
+ pinctrl_disp0_2: disp0grp-2 {
+ fsl,pins = <
+ MX6UL_PAD_LCD_CLK__LCDIF_CLK 0x10 /* LSCLK */
+ MX6UL_PAD_LCD_ENABLE__LCDIF_ENABLE 0x10 /* OE_ACD */
+ MX6UL_PAD_LCD_HSYNC__LCDIF_HSYNC 0x10 /* HSYNC */
+ MX6UL_PAD_LCD_VSYNC__LCDIF_VSYNC 0x10 /* VSYNC */
+ MX6UL_PAD_LCD_DATA00__LCDIF_DATA00 0x10
+ MX6UL_PAD_LCD_DATA01__LCDIF_DATA01 0x10
+ MX6UL_PAD_LCD_DATA02__LCDIF_DATA02 0x10
+ MX6UL_PAD_LCD_DATA03__LCDIF_DATA03 0x10
+ MX6UL_PAD_LCD_DATA04__LCDIF_DATA04 0x10
+ MX6UL_PAD_LCD_DATA05__LCDIF_DATA05 0x10
+ MX6UL_PAD_LCD_DATA06__LCDIF_DATA06 0x10
+ MX6UL_PAD_LCD_DATA07__LCDIF_DATA07 0x10
+ MX6UL_PAD_LCD_DATA08__LCDIF_DATA08 0x10
+ MX6UL_PAD_LCD_DATA09__LCDIF_DATA09 0x10
+ MX6UL_PAD_LCD_DATA10__LCDIF_DATA10 0x10
+ MX6UL_PAD_LCD_DATA11__LCDIF_DATA11 0x10
+ MX6UL_PAD_LCD_DATA12__LCDIF_DATA12 0x10
+ MX6UL_PAD_LCD_DATA13__LCDIF_DATA13 0x10
+ MX6UL_PAD_LCD_DATA14__LCDIF_DATA14 0x10
+ MX6UL_PAD_LCD_DATA15__LCDIF_DATA15 0x10
+ MX6UL_PAD_LCD_DATA16__LCDIF_DATA16 0x10
+ MX6UL_PAD_LCD_DATA17__LCDIF_DATA17 0x10
+ MX6UL_PAD_LCD_DATA18__LCDIF_DATA18 0x10
+ MX6UL_PAD_LCD_DATA19__LCDIF_DATA19 0x10
+ MX6UL_PAD_LCD_DATA20__LCDIF_DATA20 0x10
+ MX6UL_PAD_LCD_DATA21__LCDIF_DATA21 0x10
+ MX6UL_PAD_LCD_DATA22__LCDIF_DATA22 0x10
+ MX6UL_PAD_LCD_DATA23__LCDIF_DATA23 0x10
+ >;
+ };
+
+ pinctrl_ecspi2: ecspi2grp {
+ fsl,pins = <
+ MX6UL_PAD_UART4_RX_DATA__GPIO1_IO29 0x0b0b0 /* CSPI_SS */
+ MX6UL_PAD_JTAG_MOD__GPIO1_IO10 0x0b0b0 /* CSPI_SS */
+ MX6UL_PAD_UART5_TX_DATA__ECSPI2_MOSI 0x0b0b0 /* CSPI_MOSI */
+ MX6UL_PAD_UART5_RX_DATA__ECSPI2_MISO 0x0b0b0 /* CSPI_MISO */
+ MX6UL_PAD_UART4_TX_DATA__ECSPI2_SCLK 0x0b0b0 /* CSPI_SCLK */
+ >;
+ };
+
+ pinctrl_edt_ft5x06: edt-ft5x06grp {
+ fsl,pins = <
+ MX6UL_PAD_SNVS_TAMPER2__GPIO5_IO02 0x1b0b0 /* Interrupt */
+ MX6UL_PAD_SNVS_TAMPER3__GPIO5_IO03 0x1b0b0 /* Reset */
+ MX6UL_PAD_SNVS_TAMPER8__GPIO5_IO08 0x1b0b0 /* Wake */
+ >;
+ };
+
+ pinctrl_enet1: enet1grp {
+ fsl,pins = <
+ MX6UL_PAD_ENET1_RX_DATA0__ENET1_RDATA00 0x000b0
+ MX6UL_PAD_ENET1_RX_DATA1__ENET1_RDATA01 0x000b0
+ MX6UL_PAD_ENET1_RX_EN__ENET1_RX_EN 0x000b0
+ MX6UL_PAD_ENET1_RX_ER__ENET1_RX_ER 0x000b0
+ MX6UL_PAD_ENET1_TX_EN__ENET1_TX_EN 0x000b0
+ MX6UL_PAD_ENET1_TX_DATA0__ENET1_TDATA00 0x000b0
+ MX6UL_PAD_ENET1_TX_DATA1__ENET1_TDATA01 0x000b0
+ MX6UL_PAD_ENET1_TX_CLK__ENET1_REF_CLK1 0x400000b1
+ >;
+ };
+
+ pinctrl_enet2: enet2grp {
+ fsl,pins = <
+ MX6UL_PAD_ENET2_RX_DATA0__ENET2_RDATA00 0x000b0
+ MX6UL_PAD_ENET2_RX_DATA1__ENET2_RDATA01 0x000b0
+ MX6UL_PAD_ENET2_RX_EN__ENET2_RX_EN 0x000b0
+ MX6UL_PAD_ENET2_RX_ER__ENET2_RX_ER 0x000b0
+ MX6UL_PAD_ENET2_TX_EN__ENET2_TX_EN 0x000b0
+ MX6UL_PAD_ENET2_TX_DATA0__ENET2_TDATA00 0x000b0
+ MX6UL_PAD_ENET2_TX_DATA1__ENET2_TDATA01 0x000b0
+ MX6UL_PAD_ENET2_TX_CLK__ENET2_REF_CLK2 0x400000b1
+ >;
+ };
+
+ pinctrl_enet1_mdio: enet1-mdiogrp {
+ fsl,pins = <
+ MX6UL_PAD_GPIO1_IO07__ENET1_MDC 0x0b0b0
+ MX6UL_PAD_GPIO1_IO06__ENET1_MDIO 0x1b0b0
+ >;
+ };
+
+ pinctrl_etnphy_power: etnphy-pwrgrp {
+ fsl,pins = <
+ MX6UL_PAD_SNVS_TAMPER7__GPIO5_IO07 0x0b0b0 /* ETN PHY POWER */
+ >;
+ };
+
+ pinctrl_etnphy0_int: etnphy-intgrp-0 {
+ fsl,pins = <
+ MX6UL_PAD_SNVS_TAMPER5__GPIO5_IO05 0x0b0b0 /* ETN PHY INT */
+ >;
+ };
+
+ pinctrl_etnphy0_rst: etnphy-rstgrp-0 {
+ fsl,pins = <
+ MX6UL_PAD_SNVS_TAMPER6__GPIO5_IO06 0x0b0b0 /* ETN PHY RESET */
+ >;
+ };
+
+ pinctrl_etnphy1_int: etnphy-intgrp-1 {
+ fsl,pins = <
+ MX6UL_PAD_CSI_DATA06__GPIO4_IO27 0x0b0b0 /* ETN PHY INT */
+ >;
+ };
+
+ pinctrl_etnphy1_rst: etnphy-rstgrp-1 {
+ fsl,pins = <
+ MX6UL_PAD_CSI_DATA07__GPIO4_IO28 0x0b0b0 /* ETN PHY RESET */
+ >;
+ };
+
+ pinctrl_flexcan1: flexcan1grp {
+ fsl,pins = <
+ MX6UL_PAD_UART3_CTS_B__FLEXCAN1_TX 0x0b0b0
+ MX6UL_PAD_UART3_RTS_B__FLEXCAN1_RX 0x0b0b0
+ >;
+ };
+
+ pinctrl_flexcan2: flexcan2grp {
+ fsl,pins = <
+ MX6UL_PAD_UART2_CTS_B__FLEXCAN2_TX 0x0b0b0
+ MX6UL_PAD_UART2_RTS_B__FLEXCAN2_RX 0x0b0b0
+ >;
+ };
+
+ pinctrl_flexcan_xcvr: flexcan-xcvrgrp {
+ fsl,pins = <
+ MX6UL_PAD_LCD_DATA00__GPIO3_IO05 0x0b0b0 /* Flexcan XCVR enable */
+ >;
+ };
+
+ pinctrl_gpmi_nand: gpminandgrp {
+ fsl,pins = <
+ MX6UL_PAD_NAND_CLE__RAWNAND_CLE 0x0b0b1
+ MX6UL_PAD_NAND_ALE__RAWNAND_ALE 0x0b0b1
+ MX6UL_PAD_NAND_WP_B__RAWNAND_WP_B 0x0b0b1
+ MX6UL_PAD_NAND_READY_B__RAWNAND_READY_B 0x0b000
+ MX6UL_PAD_NAND_CE0_B__RAWNAND_CE0_B 0x0b0b1
+ MX6UL_PAD_NAND_RE_B__RAWNAND_RE_B 0x0b0b1
+ MX6UL_PAD_NAND_WE_B__RAWNAND_WE_B 0x0b0b1
+ MX6UL_PAD_NAND_DATA00__RAWNAND_DATA00 0x0b0b1
+ MX6UL_PAD_NAND_DATA01__RAWNAND_DATA01 0x0b0b1
+ MX6UL_PAD_NAND_DATA02__RAWNAND_DATA02 0x0b0b1
+ MX6UL_PAD_NAND_DATA03__RAWNAND_DATA03 0x0b0b1
+ MX6UL_PAD_NAND_DATA04__RAWNAND_DATA04 0x0b0b1
+ MX6UL_PAD_NAND_DATA05__RAWNAND_DATA05 0x0b0b1
+ MX6UL_PAD_NAND_DATA06__RAWNAND_DATA06 0x0b0b1
+ MX6UL_PAD_NAND_DATA07__RAWNAND_DATA07 0x0b0b1
+ >;
+ };
+
+ pinctrl_i2c_gpio: i2c-gpiogrp {
+ fsl,pins = <
+ MX6UL_PAD_SNVS_TAMPER0__GPIO5_IO00 0x4001b8b1 /* I2C SCL */
+ MX6UL_PAD_SNVS_TAMPER1__GPIO5_IO01 0x4001b8b1 /* I2C SDA */
+ >;
+ };
+
+ pinctrl_i2c2: i2c2grp {
+ fsl,pins = <
+ MX6UL_PAD_GPIO1_IO00__I2C2_SCL 0x4001b8b1
+ MX6UL_PAD_GPIO1_IO01__I2C2_SDA 0x4001b8b1
+ >;
+ };
+
+ pinctrl_kpp: kppgrp {
+ fsl,pins = <
+ MX6UL_PAD_ENET2_RX_DATA1__KPP_COL04 0x1b0b0
+ MX6UL_PAD_ENET2_TX_DATA0__KPP_COL05 0x1b0b0
+ MX6UL_PAD_ENET2_TX_EN__KPP_COL06 0x1b0b0
+ MX6UL_PAD_ENET2_RX_ER__KPP_COL07 0x1b0b0
+ MX6UL_PAD_ENET2_RX_DATA0__KPP_ROW04 0x1b0b0
+ MX6UL_PAD_ENET2_RX_EN__KPP_ROW05 0x1b0b0
+ MX6UL_PAD_ENET2_TX_DATA1__KPP_ROW06 0x1b0b0
+ MX6UL_PAD_ENET2_TX_CLK__KPP_ROW07 0x1b0b0
+ >;
+ };
+
+ pinctrl_lcd_pwr: lcd-pwrgrp {
+ fsl,pins = <
+ MX6UL_PAD_SNVS_TAMPER4__GPIO5_IO04 0x0b0b0 /* LCD Power Enable */
+ >;
+ };
+
+ pinctrl_lcd_rst: lcd-rstgrp {
+ fsl,pins = <
+ MX6UL_PAD_LCD_RESET__GPIO3_IO04 0x0b0b0 /* LCD Reset */
+ >;
+ };
+
+ pinctrl_pwm5: pwm5grp {
+ fsl,pins = <
+ MX6UL_PAD_NAND_DQS__PWM5_OUT 0x0b0b0
+ >;
+ };
+
+ pinctrl_sai2: sai2grp {
+ fsl,pins = <
+ MX6UL_PAD_JTAG_TCK__SAI2_RX_DATA 0x0b0b0 /* SSI1_RXD */
+ MX6UL_PAD_JTAG_TRST_B__SAI2_TX_DATA 0x0b0b0 /* SSI1_TXD */
+ MX6UL_PAD_JTAG_TDI__SAI2_TX_BCLK 0x0b0b0 /* SSI1_CLK */
+ MX6UL_PAD_JTAG_TDO__SAI2_TX_SYNC 0x0b0b0 /* SSI1_FS */
+ >;
+ };
+
+ pinctrl_spi_gpio: spi-gpiogrp {
+ fsl,pins = <
+ MX6UL_PAD_UART4_RX_DATA__GPIO1_IO29 0x0b0b0 /* CSPI_SS */
+ MX6UL_PAD_JTAG_MOD__GPIO1_IO10 0x0b0b0 /* CSPI_SS */
+ MX6UL_PAD_UART5_TX_DATA__GPIO1_IO30 0x0b0b0 /* CSPI_MOSI */
+ MX6UL_PAD_UART5_RX_DATA__GPIO1_IO31 0x0b0b0 /* CSPI_MISO */
+ MX6UL_PAD_UART4_TX_DATA__GPIO1_IO28 0x0b0b0 /* CSPI_SCLK */
+ >;
+ };
+
+ pinctrl_tsc2007: tsc2007grp {
+ fsl,pins = <
+ MX6UL_PAD_JTAG_TMS__GPIO1_IO11 0x1b0b0 /* Interrupt */
+ >;
+ };
+
+ pinctrl_uart1: uart1grp {
+ fsl,pins = <
+ MX6UL_PAD_UART1_TX_DATA__UART1_DCE_TX 0x0b0b0
+ MX6UL_PAD_UART1_RX_DATA__UART1_DCE_RX 0x0b0b0
+ >;
+ };
+
+ pinctrl_uart1_rtscts: uart1-rtsctsgrp {
+ fsl,pins = <
+ MX6UL_PAD_UART1_RTS_B__UART1_DCE_RTS 0x0b0b0
+ MX6UL_PAD_UART1_CTS_B__UART1_DCE_CTS 0x0b0b0
+ >;
+ };
+
+ pinctrl_uart2: uart2grp {
+ fsl,pins = <
+ MX6UL_PAD_UART2_TX_DATA__UART2_DCE_TX 0x0b0b0
+ MX6UL_PAD_UART2_RX_DATA__UART2_DCE_RX 0x0b0b0
+ >;
+ };
+
+ pinctrl_uart2_rtscts: uart2-rtsctsgrp {
+ fsl,pins = <
+ MX6UL_PAD_UART3_RX_DATA__UART2_DCE_RTS 0x0b0b0
+ MX6UL_PAD_UART3_TX_DATA__UART2_DCE_CTS 0x0b0b0
+ >;
+ };
+
+ pinctrl_uart5: uart5grp {
+ fsl,pins = <
+ MX6UL_PAD_GPIO1_IO04__UART5_DCE_TX 0x0b0b0
+ MX6UL_PAD_GPIO1_IO05__UART5_DCE_RX 0x0b0b0
+ >;
+ };
+
+ pinctrl_uart5_rtscts: uart5-rtsctsgrp {
+ fsl,pins = <
+ MX6UL_PAD_GPIO1_IO08__UART5_DCE_RTS 0x0b0b0
+ MX6UL_PAD_GPIO1_IO09__UART5_DCE_CTS 0x0b0b0
+ >;
+ };
+
+ pinctrl_usbh1_oc: usbh1-ocgrp {
+ fsl,pins = <
+ MX6UL_PAD_GPIO1_IO03__GPIO1_IO03 0x17059 /* USBH1_OC */
+ >;
+ };
+
+ pinctrl_usbh1_vbus: usbh1-vbusgrp {
+ fsl,pins = <
+ MX6UL_PAD_GPIO1_IO02__GPIO1_IO02 0x0b0b0 /* USBH1_VBUSEN */
+ >;
+ };
+
+ pinctrl_usbotg_oc: usbotg-ocgrp {
+ fsl,pins = <
+ MX6UL_PAD_UART3_RTS_B__GPIO1_IO27 0x17059 /* USBOTG_OC */
+ >;
+ };
+
+ pinctrl_usbotg_vbus: usbotg-vbusgrp {
+ fsl,pins = <
+ MX6UL_PAD_UART3_CTS_B__GPIO1_IO26 0x1b0b0 /* USBOTG_VBUSEN */
+ >;
+ };
+
+ pinctrl_usdhc1: usdhc1grp {
+ fsl,pins = <
+ MX6UL_PAD_SD1_CMD__USDHC1_CMD 0x070b1
+ MX6UL_PAD_SD1_CLK__USDHC1_CLK 0x07099
+ MX6UL_PAD_SD1_DATA0__USDHC1_DATA0 0x070b1
+ MX6UL_PAD_SD1_DATA1__USDHC1_DATA1 0x070b1
+ MX6UL_PAD_SD1_DATA2__USDHC1_DATA2 0x070b1
+ MX6UL_PAD_SD1_DATA3__USDHC1_DATA3 0x070b1
+ >;
+ };
+
+ pinctrl_usdhc1_cd: usdhc1cdgrp {
+ fsl,pins = <
+ MX6UL_PAD_NAND_CE1_B__GPIO4_IO14 0x170b0 /* SD1 CD */
+ >;
+ };
+
+ pinctrl_usdhc2: usdhc2grp {
+ fsl,pins = <
+ MX6UL_PAD_NAND_WE_B__USDHC2_CMD 0x070b1
+ MX6UL_PAD_NAND_RE_B__USDHC2_CLK 0x070b1
+ MX6UL_PAD_NAND_DATA00__USDHC2_DATA0 0x070b1
+ MX6UL_PAD_NAND_DATA01__USDHC2_DATA1 0x070b1
+ MX6UL_PAD_NAND_DATA02__USDHC2_DATA2 0x070b1
+ MX6UL_PAD_NAND_DATA03__USDHC2_DATA3 0x070b1
+ /* eMMC RESET */
+ MX6UL_PAD_NAND_ALE__USDHC2_RESET_B 0x170b0
+ >;
+ };
+};
diff --git a/arch/arm/boot/dts/imx6ul.dtsi b/arch/arm/boot/dts/imx6ul.dtsi
index 71778992f03d..4356b655ef02 100644
--- a/arch/arm/boot/dts/imx6ul.dtsi
+++ b/arch/arm/boot/dts/imx6ul.dtsi
@@ -55,15 +55,15 @@
clock-latency = <61036>; /* two CLK32 periods */
operating-points = <
/* kHz uV */
- 528000 1250000
- 396000 1150000
- 198000 1150000
+ 528000 1175000
+ 396000 1025000
+ 198000 950000
>;
fsl,soc-operating-points = <
/* KHz uV */
- 528000 1250000
- 396000 1150000
- 198000 1150000
+ 528000 1175000
+ 396000 1175000
+ 198000 1175000
>;
clocks = <&clks IMX6UL_CLK_ARM>,
<&clks IMX6UL_CLK_PLL2_BUS>,
diff --git a/arch/arm/boot/dts/imx7d-nitrogen7.dts b/arch/arm/boot/dts/imx7d-nitrogen7.dts
new file mode 100644
index 000000000000..1ce97800f0c5
--- /dev/null
+++ b/arch/arm/boot/dts/imx7d-nitrogen7.dts
@@ -0,0 +1,745 @@
+/*
+ * Copyright 2016 Boundary Devices, Inc.
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+
+#include <dt-bindings/input/input.h>
+#include "imx7d.dtsi"
+
+/ {
+ model = "Boundary Devices i.MX7 Nitrogen7 Board";
+ compatible = "boundary,imx7d-nitrogen7", "fsl,imx7d";
+
+ aliases {
+ fb_lcd = &lcdif;
+ t_lcd = &t_lcd;
+ };
+
+ memory {
+ reg = <0x80000000 0x40000000>;
+ };
+
+ backlight-j9 {
+ compatible = "gpio-backlight";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_backlight_j9>;
+ gpios = <&gpio1 7 GPIO_ACTIVE_HIGH>;
+ default-on;
+ };
+
+ backlight-j20 {
+ compatible = "pwm-backlight";
+ pwms = <&pwm1 0 5000000>;
+ brightness-levels = <0 4 8 16 32 64 128 255>;
+ default-brightness-level = <6>;
+ status = "okay";
+ };
+
+ reg_usb_otg1_vbus: regulator-usb-otg1-vbus {
+ compatible = "regulator-fixed";
+ regulator-name = "usb_otg1_vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ gpio = <&gpio1 5 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ };
+
+ reg_usb_otg2_vbus: regulator-usb-otg2-vbus {
+ compatible = "regulator-fixed";
+ regulator-name = "usb_otg2_vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ gpio = <&gpio4 7 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ };
+
+ reg_can2_3v3: regulator-can2-3v3 {
+ compatible = "regulator-fixed";
+ regulator-name = "can2-3v3";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ gpio = <&gpio2 14 GPIO_ACTIVE_LOW>;
+ };
+
+ reg_vref_1v8: regulator-vref-1v8 {
+ compatible = "regulator-fixed";
+ regulator-name = "vref-1v8";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ };
+
+ reg_vref_3v3: regulator-vref-3v3 {
+ compatible = "regulator-fixed";
+ regulator-name = "vref-3v3";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ };
+
+ reg_wlan: regulator-wlan {
+ compatible = "regulator-fixed";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ clocks = <&clks IMX7D_CLKO2_ROOT_DIV>;
+ clock-names = "slow";
+ regulator-name = "reg_wlan";
+ startup-delay-us = <70000>;
+ gpio = <&gpio4 21 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ };
+};
+
+&adc1 {
+ vref-supply = <&reg_vref_1v8>;
+ status = "okay";
+};
+
+&adc2 {
+ vref-supply = <&reg_vref_1v8>;
+ status = "okay";
+};
+
+&clks {
+ assigned-clocks = <&clks IMX7D_CLKO2_ROOT_SRC>,
+ <&clks IMX7D_CLKO2_ROOT_DIV>;
+ assigned-clock-parents = <&clks IMX7D_CKIL>;
+ assigned-clock-rates = <0>, <32768>;
+};
+
+&cpu0 {
+ arm-supply = <&sw1a_reg>;
+};
+
+&fec1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_enet1>;
+ assigned-clocks = <&clks IMX7D_ENET1_TIME_ROOT_SRC>,
+ <&clks IMX7D_ENET1_TIME_ROOT_CLK>;
+ assigned-clock-parents = <&clks IMX7D_PLL_ENET_MAIN_100M_CLK>;
+ assigned-clock-rates = <0>, <100000000>;
+ phy-mode = "rgmii";
+ phy-handle = <&ethphy0>;
+ fsl,magic-packet;
+ status = "okay";
+
+ mdio {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ ethphy0: ethernet-phy@4 {
+ reg = <4>;
+ };
+ };
+};
+
+&flexcan2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_flexcan2>;
+ xceiver-supply = <&reg_can2_3v3>;
+ status = "okay";
+};
+
+&i2c1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c1>;
+ status = "okay";
+
+ pmic: pfuze3000@08 {
+ compatible = "fsl,pfuze3000";
+ reg = <0x08>;
+
+ regulators {
+ sw1a_reg: sw1a {
+ regulator-min-microvolt = <700000>;
+ regulator-max-microvolt = <1475000>;
+ regulator-boot-on;
+ regulator-always-on;
+ regulator-ramp-delay = <6250>;
+ };
+
+ /* use sw1c_reg to align with pfuze100/pfuze200 */
+ sw1c_reg: sw1b {
+ regulator-min-microvolt = <700000>;
+ regulator-max-microvolt = <1475000>;
+ regulator-boot-on;
+ regulator-always-on;
+ regulator-ramp-delay = <6250>;
+ };
+
+ sw2_reg: sw2 {
+ regulator-min-microvolt = <1500000>;
+ regulator-max-microvolt = <1850000>;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ sw3a_reg: sw3 {
+ regulator-min-microvolt = <900000>;
+ regulator-max-microvolt = <1650000>;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ swbst_reg: swbst {
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5150000>;
+ };
+
+ snvs_reg: vsnvs {
+ regulator-min-microvolt = <1000000>;
+ regulator-max-microvolt = <3000000>;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ vref_reg: vrefddr {
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ vgen1_reg: vldo1 {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ };
+
+ vgen2_reg: vldo2 {
+ regulator-min-microvolt = <800000>;
+ regulator-max-microvolt = <1550000>;
+ regulator-always-on;
+ };
+
+ vgen3_reg: vccsd {
+ regulator-min-microvolt = <2850000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ };
+
+ vgen4_reg: v33 {
+ regulator-min-microvolt = <2850000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ };
+
+ vgen5_reg: vldo3 {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ };
+
+ vgen6_reg: vldo4 {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ };
+ };
+ };
+};
+
+&i2c2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c2>;
+ status = "okay";
+
+ rtc@68 {
+ compatible = "rv4162";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c2_rv4162>;
+ reg = <0x68>;
+ interrupts-extended = <&gpio2 15 IRQ_TYPE_LEVEL_LOW>;
+ };
+};
+
+&i2c3 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c3>;
+ status = "okay";
+
+ touch@48 {
+ compatible = "ti,tsc2004";
+ reg = <0x48>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c3_tsc2004>;
+ interrupts-extended = <&gpio3 4 IRQ_TYPE_EDGE_FALLING>;
+ wakeup-gpios = <&gpio3 4 GPIO_ACTIVE_LOW>;
+ };
+};
+
+&i2c4 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c4>;
+ status = "okay";
+
+ codec: wm8960@1a {
+ compatible = "wlf,wm8960";
+ reg = <0x1a>;
+ clocks = <&clks IMX7D_AUDIO_MCLK_ROOT_CLK>;
+ clock-names = "mclk";
+ wlf,shared-lrclk;
+ };
+};
+
+&lcdif {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_lcdif_dat
+ &pinctrl_lcdif_ctrl>;
+ lcd-supply = <&reg_vref_3v3>;
+ display = <&display0>;
+ status = "okay";
+
+ display0: lcd-display {
+ bits-per-pixel = <16>;
+ bus-width = <18>;
+
+ display-timings {
+ native-mode = <&t_lcd>;
+ t_lcd: t_lcd_default {
+ /* default to Okaya display */
+ clock-frequency = <30000000>;
+ hactive = <800>;
+ vactive = <480>;
+ hfront-porch = <40>;
+ hback-porch = <40>;
+ hsync-len = <48>;
+ vback-porch = <29>;
+ vfront-porch = <13>;
+ vsync-len = <3>;
+ hsync-active = <0>;
+ vsync-active = <0>;
+ de-active = <1>;
+ pixelclk-active = <0>;
+ };
+ };
+ };
+};
+
+&pwm1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_pwm1>;
+ status = "okay";
+};
+
+&pwm2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_pwm2>;
+ status = "okay";
+};
+
+&uart1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart1>;
+ assigned-clocks = <&clks IMX7D_UART1_ROOT_SRC>;
+ assigned-clock-parents = <&clks IMX7D_OSC_24M_CLK>;
+ status = "okay";
+};
+
+&uart2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart2>;
+ assigned-clocks = <&clks IMX7D_UART2_ROOT_SRC>;
+ assigned-clock-parents = <&clks IMX7D_OSC_24M_CLK>;
+ status = "okay";
+};
+
+&uart3 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart3>;
+ assigned-clocks = <&clks IMX7D_UART3_ROOT_SRC>;
+ assigned-clock-parents = <&clks IMX7D_OSC_24M_CLK>;
+ status = "okay";
+};
+
+&uart6 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart6>;
+ assigned-clocks = <&clks IMX7D_UART6_ROOT_SRC>;
+ assigned-clock-parents = <&clks IMX7D_PLL_SYS_MAIN_240M_CLK>;
+ fsl,uart-has-rtscts;
+ status = "okay";
+};
+
+&usbotg1 {
+ vbus-supply = <&reg_usb_otg1_vbus>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usbotg1>;
+ status = "okay";
+};
+
+&usbotg2 {
+ vbus-supply = <&reg_usb_otg2_vbus>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usbotg2>;
+ dr_mode = "host";
+ status = "okay";
+};
+
+&usdhc1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usdhc1>;
+ cd-gpios = <&gpio5 0 GPIO_ACTIVE_LOW>;
+ vmmc-supply = <&vgen3_reg>;
+ bus-width = <4>;
+ fsl,tuning-step = <2>;
+ wakeup-source;
+ keep-power-in-suspend;
+ status = "okay";
+};
+
+&usdhc2 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usdhc2>;
+ bus-width = <4>;
+ non-removable;
+ vmmc-supply = <&reg_wlan>;
+ cap-power-off-card;
+ keep-power-in-suspend;
+ status = "okay";
+
+ wlcore: wlcore@2 {
+ compatible = "ti,wl1271";
+ reg = <2>;
+ interrupt-parent = <&gpio4>;
+ interrupts = <20 IRQ_TYPE_LEVEL_HIGH>;
+ ref-clock-frequency = <38400000>;
+ };
+};
+
+&usdhc3 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_usdhc3>;
+ assigned-clocks = <&clks IMX7D_USDHC3_ROOT_CLK>;
+ assigned-clock-rates = <400000000>;
+ bus-width = <8>;
+ fsl,tuning-step = <2>;
+ non-removable;
+ status = "okay";
+};
+
+&wdog1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_wdog1>;
+ status = "okay";
+};
+
+&iomuxc {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_hog_1 &pinctrl_j2>;
+
+ pinctrl_hog_1: hoggrp-1 {
+ fsl,pins = <
+ MX7D_PAD_SD3_RESET_B__GPIO6_IO11 0x5d
+ MX7D_PAD_GPIO1_IO13__GPIO1_IO13 0x7d
+ MX7D_PAD_ECSPI2_MISO__GPIO4_IO22 0x7d
+ >;
+ };
+
+ pinctrl_enet1: enet1grp {
+ fsl,pins = <
+ MX7D_PAD_GPIO1_IO10__ENET1_MDIO 0x3
+ MX7D_PAD_GPIO1_IO11__ENET1_MDC 0x3
+ MX7D_PAD_GPIO1_IO12__CCM_ENET_REF_CLK1 0x3
+ MX7D_PAD_ENET1_RGMII_TXC__ENET1_RGMII_TXC 0x71
+ MX7D_PAD_ENET1_RGMII_TD0__ENET1_RGMII_TD0 0x71
+ MX7D_PAD_ENET1_RGMII_TD1__ENET1_RGMII_TD1 0x71
+ MX7D_PAD_ENET1_RGMII_TD2__ENET1_RGMII_TD2 0x71
+ MX7D_PAD_ENET1_RGMII_TD3__ENET1_RGMII_TD3 0x71
+ MX7D_PAD_ENET1_RGMII_TX_CTL__ENET1_RGMII_TX_CTL 0x71
+ MX7D_PAD_ENET1_RGMII_RXC__ENET1_RGMII_RXC 0x71
+ MX7D_PAD_ENET1_RGMII_RD0__ENET1_RGMII_RD0 0x11
+ MX7D_PAD_ENET1_RGMII_RD1__ENET1_RGMII_RD1 0x11
+ MX7D_PAD_ENET1_RGMII_RD2__ENET1_RGMII_RD2 0x11
+ MX7D_PAD_ENET1_RGMII_RD3__ENET1_RGMII_RD3 0x71
+ MX7D_PAD_ENET1_RGMII_RX_CTL__ENET1_RGMII_RX_CTL 0x11
+ MX7D_PAD_SD3_STROBE__GPIO6_IO10 0x75
+ >;
+ };
+
+ pinctrl_flexcan2: flexcan2grp {
+ fsl,pins = <
+ MX7D_PAD_GPIO1_IO14__FLEXCAN2_RX 0x7d
+ MX7D_PAD_GPIO1_IO15__FLEXCAN2_TX 0x7d
+ MX7D_PAD_EPDC_DATA14__GPIO2_IO14 0x7d
+ >;
+ };
+
+ pinctrl_i2c1: i2c1grp {
+ fsl,pins = <
+ MX7D_PAD_I2C1_SDA__I2C1_SDA 0x4000007f
+ MX7D_PAD_I2C1_SCL__I2C1_SCL 0x4000007f
+ >;
+ };
+
+ pinctrl_i2c2: i2c2grp {
+ fsl,pins = <
+ MX7D_PAD_I2C2_SDA__I2C2_SDA 0x4000007f
+ MX7D_PAD_I2C2_SCL__I2C2_SCL 0x4000007f
+ >;
+ };
+
+ pinctrl_i2c2_rv4162: i2c2-rv4162grp {
+ fsl,pins = <
+ MX7D_PAD_EPDC_DATA15__GPIO2_IO15 0x7d
+ >;
+ };
+
+ pinctrl_i2c3: i2c3grp {
+ fsl,pins = <
+ MX7D_PAD_I2C3_SDA__I2C3_SDA 0x4000007f
+ MX7D_PAD_I2C3_SCL__I2C3_SCL 0x4000007f
+ >;
+ };
+
+ pinctrl_i2c3_tsc2004: i2c3tsc2004grp {
+ fsl,pins = <
+ MX7D_PAD_LCD_RESET__GPIO3_IO4 0x79
+ MX7D_PAD_SD2_WP__GPIO5_IO10 0x7d
+ >;
+ };
+
+ pinctrl_i2c4: i2c4grp {
+ fsl,pins = <
+ MX7D_PAD_I2C4_SDA__I2C4_SDA 0x4000007f
+ MX7D_PAD_I2C4_SCL__I2C4_SCL 0x4000007f
+ >;
+ };
+
+ pinctrl_j2: j2grp {
+ fsl,pins = <
+ MX7D_PAD_SAI1_TX_DATA__GPIO6_IO15 0x7d
+ MX7D_PAD_EPDC_BDR0__GPIO2_IO28 0x7d
+ MX7D_PAD_SAI1_RX_DATA__GPIO6_IO12 0x7d
+ MX7D_PAD_EPDC_BDR1__GPIO2_IO29 0x7d
+ MX7D_PAD_SD1_WP__GPIO5_IO1 0x7d
+ MX7D_PAD_EPDC_SDSHR__GPIO2_IO19 0x7d
+ MX7D_PAD_SD1_RESET_B__GPIO5_IO2 0x7d
+ MX7D_PAD_SD2_RESET_B__GPIO5_IO11 0x7d
+ MX7D_PAD_EPDC_DATA07__GPIO2_IO7 0x7d
+ MX7D_PAD_EPDC_DATA08__GPIO2_IO8 0x7d
+ MX7D_PAD_EPDC_DATA09__GPIO2_IO9 0x7d
+ MX7D_PAD_EPDC_DATA10__GPIO2_IO10 0x7d
+ MX7D_PAD_EPDC_DATA11__GPIO2_IO11 0x7d
+ MX7D_PAD_EPDC_DATA12__GPIO2_IO12 0x7d
+ MX7D_PAD_SAI1_TX_SYNC__GPIO6_IO14 0x7d
+ MX7D_PAD_EPDC_DATA13__GPIO2_IO13 0x7d
+ MX7D_PAD_SAI1_TX_BCLK__GPIO6_IO13 0x7d
+ MX7D_PAD_SD2_CD_B__GPIO5_IO9 0x7d
+ MX7D_PAD_EPDC_GDCLK__GPIO2_IO24 0x7d
+ MX7D_PAD_SAI2_RX_DATA__GPIO6_IO21 0x7d
+ MX7D_PAD_EPDC_GDOE__GPIO2_IO25 0x7d
+ MX7D_PAD_EPDC_GDRL__GPIO2_IO26 0x7d
+ MX7D_PAD_SAI2_TX_DATA__GPIO6_IO22 0x7d
+ MX7D_PAD_EPDC_SDCE0__GPIO2_IO20 0x7d
+ MX7D_PAD_SAI2_TX_BCLK__GPIO6_IO20 0x7d
+ MX7D_PAD_EPDC_SDCE1__GPIO2_IO21 0x7d
+ MX7D_PAD_SAI2_TX_SYNC__GPIO6_IO19 0x7d
+ MX7D_PAD_EPDC_SDCE2__GPIO2_IO22 0x7d
+ MX7D_PAD_EPDC_SDCE3__GPIO2_IO23 0x7d
+ MX7D_PAD_EPDC_GDSP__GPIO2_IO27 0x7d
+ MX7D_PAD_EPDC_SDCLK__GPIO2_IO16 0x7d
+ MX7D_PAD_EPDC_SDLE__GPIO2_IO17 0x7d
+ MX7D_PAD_EPDC_SDOE__GPIO2_IO18 0x7d
+ MX7D_PAD_EPDC_PWR_COM__GPIO2_IO30 0x7d
+ MX7D_PAD_EPDC_PWR_STAT__GPIO2_IO31 0x7d
+ >;
+ };
+
+ pinctrl_lcdif_dat: lcdifdatgrp {
+ fsl,pins = <
+ MX7D_PAD_LCD_DATA00__LCD_DATA0 0x79
+ MX7D_PAD_LCD_DATA01__LCD_DATA1 0x79
+ MX7D_PAD_LCD_DATA02__LCD_DATA2 0x79
+ MX7D_PAD_LCD_DATA03__LCD_DATA3 0x79
+ MX7D_PAD_LCD_DATA04__LCD_DATA4 0x79
+ MX7D_PAD_LCD_DATA05__LCD_DATA5 0x79
+ MX7D_PAD_LCD_DATA06__LCD_DATA6 0x79
+ MX7D_PAD_LCD_DATA07__LCD_DATA7 0x79
+ MX7D_PAD_LCD_DATA08__LCD_DATA8 0x79
+ MX7D_PAD_LCD_DATA09__LCD_DATA9 0x79
+ MX7D_PAD_LCD_DATA10__LCD_DATA10 0x79
+ MX7D_PAD_LCD_DATA11__LCD_DATA11 0x79
+ MX7D_PAD_LCD_DATA12__LCD_DATA12 0x79
+ MX7D_PAD_LCD_DATA13__LCD_DATA13 0x79
+ MX7D_PAD_LCD_DATA14__LCD_DATA14 0x79
+ MX7D_PAD_LCD_DATA15__LCD_DATA15 0x79
+ MX7D_PAD_LCD_DATA16__LCD_DATA16 0x79
+ MX7D_PAD_LCD_DATA17__LCD_DATA17 0x79
+ MX7D_PAD_LCD_DATA18__LCD_DATA18 0x79
+ MX7D_PAD_LCD_DATA19__LCD_DATA19 0x79
+ MX7D_PAD_LCD_DATA20__LCD_DATA20 0x79
+ MX7D_PAD_LCD_DATA21__LCD_DATA21 0x79
+ MX7D_PAD_LCD_DATA22__LCD_DATA22 0x79
+ MX7D_PAD_LCD_DATA23__LCD_DATA23 0x79
+ >;
+ };
+
+ pinctrl_lcdif_ctrl: lcdifctrlgrp {
+ fsl,pins = <
+ MX7D_PAD_LCD_CLK__LCD_CLK 0x79
+ MX7D_PAD_LCD_ENABLE__LCD_ENABLE 0x79
+ MX7D_PAD_LCD_VSYNC__LCD_VSYNC 0x79
+ MX7D_PAD_LCD_HSYNC__LCD_HSYNC 0x79
+ >;
+ };
+
+ pinctrl_pwm2: pwm2grp {
+ fsl,pins = <
+ MX7D_PAD_GPIO1_IO09__PWM2_OUT 0x7d
+ >;
+ };
+
+ pinctrl_uart1: uart1grp {
+ fsl,pins = <
+ MX7D_PAD_UART1_TX_DATA__UART1_DCE_TX 0x79
+ MX7D_PAD_UART1_RX_DATA__UART1_DCE_RX 0x79
+ >;
+ };
+
+ pinctrl_uart2: uart2grp {
+ fsl,pins = <
+ MX7D_PAD_UART2_TX_DATA__UART2_DCE_TX 0x79
+ MX7D_PAD_UART2_RX_DATA__UART2_DCE_RX 0x79
+ >;
+ };
+
+ pinctrl_uart3: uart3grp {
+ fsl,pins = <
+ MX7D_PAD_UART3_TX_DATA__UART3_DCE_TX 0x79
+ MX7D_PAD_UART3_RX_DATA__UART3_DCE_RX 0x79
+ MX7D_PAD_EPDC_DATA04__GPIO2_IO4 0x7d
+ >;
+ };
+
+ pinctrl_uart6: uart6grp {
+ fsl,pins = <
+ MX7D_PAD_ECSPI1_MOSI__UART6_DCE_TX 0x79
+ MX7D_PAD_ECSPI1_SCLK__UART6_DCE_RX 0x79
+ MX7D_PAD_ECSPI1_SS0__UART6_DCE_CTS 0x79
+ MX7D_PAD_ECSPI1_MISO__UART6_DCE_RTS 0x79
+ >;
+ };
+
+ pinctrl_usbotg2: usbotg2grp {
+ fsl,pins = <
+ MX7D_PAD_UART3_RTS_B__USB_OTG2_OC 0x7d
+ MX7D_PAD_UART3_CTS_B__GPIO4_IO7 0x14
+ >;
+ };
+
+ pinctrl_usdhc1: usdhc1grp {
+ fsl,pins = <
+ MX7D_PAD_SD1_CMD__SD1_CMD 0x59
+ MX7D_PAD_SD1_CLK__SD1_CLK 0x19
+ MX7D_PAD_SD1_DATA0__SD1_DATA0 0x59
+ MX7D_PAD_SD1_DATA1__SD1_DATA1 0x59
+ MX7D_PAD_SD1_DATA2__SD1_DATA2 0x59
+ MX7D_PAD_SD1_DATA3__SD1_DATA3 0x59
+ MX7D_PAD_GPIO1_IO08__SD1_VSELECT 0x75
+ MX7D_PAD_SD1_CD_B__GPIO5_IO0 0x75
+ >;
+ };
+
+ pinctrl_usdhc2: usdhc2grp {
+ fsl,pins = <
+ MX7D_PAD_SD2_CMD__SD2_CMD 0x59
+ MX7D_PAD_SD2_CLK__SD2_CLK 0x19
+ MX7D_PAD_SD2_DATA0__SD2_DATA0 0x59
+ MX7D_PAD_SD2_DATA1__SD2_DATA1 0x59
+ MX7D_PAD_SD2_DATA2__SD2_DATA2 0x59
+ MX7D_PAD_SD2_DATA3__SD2_DATA3 0x59
+ MX7D_PAD_ECSPI2_SCLK__GPIO4_IO20 0x59
+ MX7D_PAD_ECSPI2_MOSI__GPIO4_IO21 0x59
+ >;
+ };
+
+ pinctrl_usdhc3: usdhc3grp {
+ fsl,pins = <
+ MX7D_PAD_SD3_CMD__SD3_CMD 0x59
+ MX7D_PAD_SD3_CLK__SD3_CLK 0x19
+ MX7D_PAD_SD3_DATA0__SD3_DATA0 0x59
+ MX7D_PAD_SD3_DATA1__SD3_DATA1 0x59
+ MX7D_PAD_SD3_DATA2__SD3_DATA2 0x59
+ MX7D_PAD_SD3_DATA3__SD3_DATA3 0x59
+ MX7D_PAD_SD3_DATA4__SD3_DATA4 0x59
+ MX7D_PAD_SD3_DATA5__SD3_DATA5 0x59
+ MX7D_PAD_SD3_DATA6__SD3_DATA6 0x59
+ MX7D_PAD_SD3_DATA7__SD3_DATA7 0x59
+ >;
+ };
+};
+
+&iomuxc_lpsr {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_hog_2>;
+
+ pinctrl_hog_2: hoggrp-2 {
+ fsl,pins = <
+ MX7D_PAD_GPIO1_IO02__GPIO1_IO2 0x7d
+ MX7D_PAD_GPIO1_IO03__CCM_CLKO2 0x7d
+ >;
+ };
+
+ pinctrl_backlight_j9: backlightj9grp {
+ fsl,pins = <
+ MX7D_PAD_GPIO1_IO07__GPIO1_IO7 0x7d
+ >;
+ };
+
+ pinctrl_pwm1: pwm1grp {
+ fsl,pins = <
+ MX7D_PAD_GPIO1_IO01__PWM1_OUT 0x7d
+ >;
+ };
+
+ pinctrl_usbotg1: usbotg1grp {
+ fsl,pins = <
+ MX7D_PAD_GPIO1_IO04__USB_OTG1_OC 0x7d
+ MX7D_PAD_GPIO1_IO05__GPIO1_IO5 0x14
+ >;
+ };
+
+ pinctrl_wdog1: wdog1grp {
+ fsl,pins = <
+ MX7D_PAD_GPIO1_IO00__WDOD1_WDOG_B 0x75
+ >;
+ };
+};
diff --git a/arch/arm/boot/dts/imx7d.dtsi b/arch/arm/boot/dts/imx7d.dtsi
index b5a50e0e7ff1..6b3faa298417 100644
--- a/arch/arm/boot/dts/imx7d.dtsi
+++ b/arch/arm/boot/dts/imx7d.dtsi
@@ -651,6 +651,17 @@
#pwm-cells = <2>;
status = "disabled";
};
+
+ lcdif: lcdif@30730000 {
+ compatible = "fsl,imx7d-lcdif", "fsl,imx28-lcdif";
+ reg = <0x30730000 0x10000>;
+ interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clks IMX7D_LCDIF_PIXEL_ROOT_CLK>,
+ <&clks IMX7D_CLK_DUMMY>,
+ <&clks IMX7D_CLK_DUMMY>;
+ clock-names = "pix", "axi", "disp_axi";
+ status = "disabled";
+ };
};
aips3: aips-bus@30800000 {
@@ -693,6 +704,26 @@
status = "disabled";
};
+ flexcan1: can@30a00000 {
+ compatible = "fsl,imx7d-flexcan", "fsl,imx6q-flexcan";
+ reg = <0x30a00000 0x10000>;
+ interrupts = <GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clks IMX7D_CLK_DUMMY>,
+ <&clks IMX7D_CAN1_ROOT_CLK>;
+ clock-names = "ipg", "per";
+ status = "disabled";
+ };
+
+ flexcan2: can@30a10000 {
+ compatible = "fsl,imx7d-flexcan", "fsl,imx6q-flexcan";
+ reg = <0x30a10000 0x10000>;
+ interrupts = <GIC_SPI 111 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clks IMX7D_CLK_DUMMY>,
+ <&clks IMX7D_CAN2_ROOT_CLK>;
+ clock-names = "ipg", "per";
+ status = "disabled";
+ };
+
i2c1: i2c@30a20000 {
#address-cells = <1>;
#size-cells = <0>;
diff --git a/arch/arm/boot/dts/integrator.dtsi b/arch/arm/boot/dts/integrator.dtsi
index b82f0e6d9a63..6fe0dd1d3541 100644
--- a/arch/arm/boot/dts/integrator.dtsi
+++ b/arch/arm/boot/dts/integrator.dtsi
@@ -52,8 +52,9 @@
};
flash@24000000 {
- compatible = "cfi-flash";
+ compatible = "arm,versatile-flash", "cfi-flash";
reg = <0x24000000 0x02000000>;
+ bank-width = <4>;
};
fpga {
diff --git a/arch/arm/boot/dts/k2e-clocks.dtsi b/arch/arm/boot/dts/keystone-k2e-clocks.dtsi
index d56d68fe7ffc..d56d68fe7ffc 100644
--- a/arch/arm/boot/dts/k2e-clocks.dtsi
+++ b/arch/arm/boot/dts/keystone-k2e-clocks.dtsi
diff --git a/arch/arm/boot/dts/k2e-evm.dts b/arch/arm/boot/dts/keystone-k2e-evm.dts
index b7e99807f5c2..4c32ebc1425a 100644
--- a/arch/arm/boot/dts/k2e-evm.dts
+++ b/arch/arm/boot/dts/keystone-k2e-evm.dts
@@ -10,7 +10,7 @@
/dts-v1/;
#include "keystone.dtsi"
-#include "k2e.dtsi"
+#include "keystone-k2e.dtsi"
/ {
compatible = "ti,k2e-evm", "ti,k2e", "ti,keystone";
diff --git a/arch/arm/boot/dts/k2e-netcp.dtsi b/arch/arm/boot/dts/keystone-k2e-netcp.dtsi
index ac990f679725..ac990f679725 100644
--- a/arch/arm/boot/dts/k2e-netcp.dtsi
+++ b/arch/arm/boot/dts/keystone-k2e-netcp.dtsi
diff --git a/arch/arm/boot/dts/k2e.dtsi b/arch/arm/boot/dts/keystone-k2e.dtsi
index 1097dada56d2..96b349fb0430 100644
--- a/arch/arm/boot/dts/k2e.dtsi
+++ b/arch/arm/boot/dts/keystone-k2e.dtsi
@@ -44,7 +44,7 @@
};
soc {
- /include/ "k2e-clocks.dtsi"
+ /include/ "keystone-k2e-clocks.dtsi"
usb: usb@2680000 {
interrupts = <GIC_SPI 152 IRQ_TYPE_EDGE_RISING>;
@@ -145,6 +145,6 @@
clock-names = "fck";
bus_freq = <2500000>;
};
- /include/ "k2e-netcp.dtsi"
+ /include/ "keystone-k2e-netcp.dtsi"
};
};
diff --git a/arch/arm/boot/dts/k2hk-clocks.dtsi b/arch/arm/boot/dts/keystone-k2hk-clocks.dtsi
index af9b7190533a..af9b7190533a 100644
--- a/arch/arm/boot/dts/k2hk-clocks.dtsi
+++ b/arch/arm/boot/dts/keystone-k2hk-clocks.dtsi
diff --git a/arch/arm/boot/dts/k2hk-evm.dts b/arch/arm/boot/dts/keystone-k2hk-evm.dts
index 8161bf53271b..b38b3441818b 100644
--- a/arch/arm/boot/dts/k2hk-evm.dts
+++ b/arch/arm/boot/dts/keystone-k2hk-evm.dts
@@ -10,7 +10,7 @@
/dts-v1/;
#include "keystone.dtsi"
-#include "k2hk.dtsi"
+#include "keystone-k2hk.dtsi"
/ {
compatible = "ti,k2hk-evm", "ti,k2hk", "ti,keystone";
diff --git a/arch/arm/boot/dts/k2hk-netcp.dtsi b/arch/arm/boot/dts/keystone-k2hk-netcp.dtsi
index f86d6ddb832b..f86d6ddb832b 100644
--- a/arch/arm/boot/dts/k2hk-netcp.dtsi
+++ b/arch/arm/boot/dts/keystone-k2hk-netcp.dtsi
diff --git a/arch/arm/boot/dts/k2hk.dtsi b/arch/arm/boot/dts/keystone-k2hk.dtsi
index ada4c7ac96e7..8f67fa8df936 100644
--- a/arch/arm/boot/dts/k2hk.dtsi
+++ b/arch/arm/boot/dts/keystone-k2hk.dtsi
@@ -44,7 +44,7 @@
};
soc {
- /include/ "k2hk-clocks.dtsi"
+ /include/ "keystone-k2hk-clocks.dtsi"
dspgpio0: keystone_dsp_gpio@02620240 {
compatible = "ti,keystone-dsp-gpio";
@@ -112,6 +112,6 @@
clock-names = "fck";
bus_freq = <2500000>;
};
- /include/ "k2hk-netcp.dtsi"
+ /include/ "keystone-k2hk-netcp.dtsi"
};
};
diff --git a/arch/arm/boot/dts/k2l-clocks.dtsi b/arch/arm/boot/dts/keystone-k2l-clocks.dtsi
index ef8464bb11ff..ef8464bb11ff 100644
--- a/arch/arm/boot/dts/k2l-clocks.dtsi
+++ b/arch/arm/boot/dts/keystone-k2l-clocks.dtsi
diff --git a/arch/arm/boot/dts/k2l-evm.dts b/arch/arm/boot/dts/keystone-k2l-evm.dts
index 00861244d788..7f9c2e94d605 100644
--- a/arch/arm/boot/dts/k2l-evm.dts
+++ b/arch/arm/boot/dts/keystone-k2l-evm.dts
@@ -10,7 +10,7 @@
/dts-v1/;
#include "keystone.dtsi"
-#include "k2l.dtsi"
+#include "keystone-k2l.dtsi"
/ {
compatible = "ti,k2l-evm", "ti,k2l", "ti,keystone";
diff --git a/arch/arm/boot/dts/k2l-netcp.dtsi b/arch/arm/boot/dts/keystone-k2l-netcp.dtsi
index 5acbd0dcc2ab..5acbd0dcc2ab 100644
--- a/arch/arm/boot/dts/k2l-netcp.dtsi
+++ b/arch/arm/boot/dts/keystone-k2l-netcp.dtsi
diff --git a/arch/arm/boot/dts/k2l.dtsi b/arch/arm/boot/dts/keystone-k2l.dtsi
index 4446da72b0ae..ff22ffc3dee7 100644
--- a/arch/arm/boot/dts/k2l.dtsi
+++ b/arch/arm/boot/dts/keystone-k2l.dtsi
@@ -32,7 +32,7 @@
};
soc {
- /include/ "k2l-clocks.dtsi"
+ /include/ "keystone-k2l-clocks.dtsi"
uart2: serial@02348400 {
compatible = "ns16550a";
@@ -92,7 +92,7 @@
clock-names = "fck";
bus_freq = <2500000>;
};
- /include/ "k2l-netcp.dtsi"
+ /include/ "keystone-k2l-netcp.dtsi"
};
};
diff --git a/arch/arm/boot/dts/keystone.dtsi b/arch/arm/boot/dts/keystone.dtsi
index 3f272826f537..e34b2265458a 100644
--- a/arch/arm/boot/dts/keystone.dtsi
+++ b/arch/arm/boot/dts/keystone.dtsi
@@ -20,6 +20,9 @@
aliases {
serial0 = &uart0;
+ spi0 = &spi0;
+ spi1 = &spi1;
+ spi2 = &spi2;
};
memory {
@@ -59,6 +62,14 @@
<GIC_SPI 23 IRQ_TYPE_EDGE_RISING>;
};
+ psci {
+ compatible = "arm,psci";
+ method = "smc";
+ cpu_suspend = <0x84000001>;
+ cpu_off = <0x84000002>;
+ cpu_on = <0x84000003>;
+ };
+
soc {
#address-cells = <1>;
#size-cells = <1>;
diff --git a/arch/arm/boot/dts/kirkwood-6192.dtsi b/arch/arm/boot/dts/kirkwood-6192.dtsi
index 9e6e9e2691d5..d573e03f3134 100644
--- a/arch/arm/boot/dts/kirkwood-6192.dtsi
+++ b/arch/arm/boot/dts/kirkwood-6192.dtsi
@@ -1,6 +1,6 @@
/ {
- mbus {
- pciec: pcie-controller {
+ mbus@f1000000 {
+ pciec: pcie-controller@82000000 {
compatible = "marvell,kirkwood-pcie";
status = "disabled";
device_type = "pci";
diff --git a/arch/arm/boot/dts/kirkwood-6281.dtsi b/arch/arm/boot/dts/kirkwood-6281.dtsi
index 7dc7d6782e83..748d0b62f233 100644
--- a/arch/arm/boot/dts/kirkwood-6281.dtsi
+++ b/arch/arm/boot/dts/kirkwood-6281.dtsi
@@ -1,6 +1,6 @@
/ {
- mbus {
- pciec: pcie-controller {
+ mbus@f1000000 {
+ pciec: pcie-controller@82000000 {
compatible = "marvell,kirkwood-pcie";
status = "disabled";
device_type = "pci";
diff --git a/arch/arm/boot/dts/kirkwood-6282.dtsi b/arch/arm/boot/dts/kirkwood-6282.dtsi
index 4680eec990f0..bb63d2d50fc5 100644
--- a/arch/arm/boot/dts/kirkwood-6282.dtsi
+++ b/arch/arm/boot/dts/kirkwood-6282.dtsi
@@ -1,6 +1,6 @@
/ {
- mbus {
- pciec: pcie-controller {
+ mbus@f1000000 {
+ pciec: pcie-controller@82000000 {
compatible = "marvell,kirkwood-pcie";
status = "disabled";
device_type = "pci";
diff --git a/arch/arm/boot/dts/kirkwood-98dx4122.dtsi b/arch/arm/boot/dts/kirkwood-98dx4122.dtsi
index 9e1f741d74ff..720c210d491d 100644
--- a/arch/arm/boot/dts/kirkwood-98dx4122.dtsi
+++ b/arch/arm/boot/dts/kirkwood-98dx4122.dtsi
@@ -1,6 +1,6 @@
/ {
- mbus {
- pciec: pcie-controller {
+ mbus@f1000000 {
+ pciec: pcie-controller@82000000 {
compatible = "marvell,kirkwood-pcie";
status = "disabled";
device_type = "pci";
diff --git a/arch/arm/boot/dts/kirkwood-b3.dts b/arch/arm/boot/dts/kirkwood-b3.dts
index d2936ad3af1d..d091ecb61cd2 100644
--- a/arch/arm/boot/dts/kirkwood-b3.dts
+++ b/arch/arm/boot/dts/kirkwood-b3.dts
@@ -33,17 +33,6 @@
stdout-path = &uart0;
};
- mbus {
- pcie-controller {
- status = "okay";
-
- /* Wifi model has Atheros chipset on pcie port */
- pcie@1,0 {
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
pinctrl: pin-controller@10000 {
pmx_button_power: pmx-button-power {
@@ -199,3 +188,11 @@
};
};
+/* Wifi model has Atheros chipset on pcie port */
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-blackarmor-nas220.dts b/arch/arm/boot/dts/kirkwood-blackarmor-nas220.dts
index fa02a9aff05e..f16a73e49a88 100644
--- a/arch/arm/boot/dts/kirkwood-blackarmor-nas220.dts
+++ b/arch/arm/boot/dts/kirkwood-blackarmor-nas220.dts
@@ -36,13 +36,13 @@
gpio_keys {
compatible = "gpio-keys";
- button@1{
+ reset {
label = "Reset";
linux,code = <KEY_POWER>;
gpios = <&gpio0 29 GPIO_ACTIVE_HIGH>;
};
- button@2{
+ button {
label = "Power";
linux,code = <KEY_SLEEP>;
gpios = <&gpio0 26 GPIO_ACTIVE_LOW>;
diff --git a/arch/arm/boot/dts/kirkwood-cloudbox.dts b/arch/arm/boot/dts/kirkwood-cloudbox.dts
index 7ec76566acf2..555b7e4c58a5 100644
--- a/arch/arm/boot/dts/kirkwood-cloudbox.dts
+++ b/arch/arm/boot/dts/kirkwood-cloudbox.dts
@@ -60,7 +60,7 @@
#address-cells = <1>;
#size-cells = <0>;
- button@1 {
+ power {
label = "Power push button";
linux,code = <KEY_POWER>;
gpios = <&gpio0 16 GPIO_ACTIVE_LOW>;
diff --git a/arch/arm/boot/dts/kirkwood-db-88f6281.dts b/arch/arm/boot/dts/kirkwood-db-88f6281.dts
index c39dd766c75a..aee6f02b1c80 100644
--- a/arch/arm/boot/dts/kirkwood-db-88f6281.dts
+++ b/arch/arm/boot/dts/kirkwood-db-88f6281.dts
@@ -17,14 +17,12 @@
/ {
model = "Marvell DB-88F6281-BP Development Board";
compatible = "marvell,db-88f6281-bp", "marvell,kirkwood-88f6281", "marvell,kirkwood";
+};
- mbus {
- pcie-controller {
- status = "okay";
+&pciec {
+ status = "okay";
+};
- pcie@1,0 {
- status = "okay";
- };
- };
- };
+&pcie0 {
+ status = "okay";
};
diff --git a/arch/arm/boot/dts/kirkwood-db-88f6282.dts b/arch/arm/boot/dts/kirkwood-db-88f6282.dts
index 701c6b6cdaa2..e8b23e13ec0c 100644
--- a/arch/arm/boot/dts/kirkwood-db-88f6282.dts
+++ b/arch/arm/boot/dts/kirkwood-db-88f6282.dts
@@ -17,18 +17,16 @@
/ {
model = "Marvell DB-88F6282-BP Development Board";
compatible = "marvell,db-88f6282-bp", "marvell,kirkwood-88f6282", "marvell,kirkwood";
+};
- mbus {
- pcie-controller {
- status = "okay";
+&pciec {
+ status = "okay";
+};
- pcie@1,0 {
- status = "okay";
- };
+&pcie0 {
+ status = "okay";
+};
- pcie@2,0 {
- status = "okay";
- };
- };
- };
+&pcie1 {
+ status = "okay";
};
diff --git a/arch/arm/boot/dts/kirkwood-dir665.dts b/arch/arm/boot/dts/kirkwood-dir665.dts
index 0473fcc260f7..41acbb6dd6ab 100644
--- a/arch/arm/boot/dts/kirkwood-dir665.dts
+++ b/arch/arm/boot/dts/kirkwood-dir665.dts
@@ -25,16 +25,6 @@
stdout-path = &uart0;
};
- mbus {
- pcie-controller {
- status = "okay";
-
- pcie@1,0 {
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
pinctrl: pin-controller@10000 {
pinctrl-0 =< &pmx_led_usb
@@ -203,7 +193,7 @@
};
};
- dsa@0 {
+ dsa {
compatible = "marvell,dsa";
#address-cells = <2>;
#size-cells = <0>;
@@ -276,3 +266,11 @@
&rtc {
status = "disabled";
};
+
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-dnskw.dtsi b/arch/arm/boot/dts/kirkwood-dnskw.dtsi
index 113dcf056dcf..d8fca9db46d0 100644
--- a/arch/arm/boot/dts/kirkwood-dnskw.dtsi
+++ b/arch/arm/boot/dts/kirkwood-dnskw.dtsi
@@ -13,17 +13,17 @@
&pmx_button_reset>;
pinctrl-names = "default";
- button@1 {
+ power {
label = "Power button";
linux,code = <KEY_POWER>;
gpios = <&gpio1 2 GPIO_ACTIVE_LOW>;
};
- button@2 {
+ eject {
label = "USB unmount button";
linux,code = <KEY_EJECTCD>;
gpios = <&gpio1 15 GPIO_ACTIVE_LOW>;
};
- button@3 {
+ reset {
label = "Reset button";
linux,code = <KEY_RESTART>;
gpios = <&gpio1 16 GPIO_ACTIVE_LOW>;
diff --git a/arch/arm/boot/dts/kirkwood-ds111.dts b/arch/arm/boot/dts/kirkwood-ds111.dts
index 61f47fbe44d0..a85a4664431b 100644
--- a/arch/arm/boot/dts/kirkwood-ds111.dts
+++ b/arch/arm/boot/dts/kirkwood-ds111.dts
@@ -40,6 +40,6 @@
status = "okay";
};
-&pcie2 {
+&pcie1 {
status = "okay";
};
diff --git a/arch/arm/boot/dts/kirkwood-ds112.dts b/arch/arm/boot/dts/kirkwood-ds112.dts
index b84af3da8c84..6cef4bdbc01b 100644
--- a/arch/arm/boot/dts/kirkwood-ds112.dts
+++ b/arch/arm/boot/dts/kirkwood-ds112.dts
@@ -44,6 +44,10 @@
status = "okay";
};
-&pcie2 {
+&pciec {
+ status = "okay";
+};
+
+&pcie1 {
status = "okay";
};
diff --git a/arch/arm/boot/dts/kirkwood-ds212.dts b/arch/arm/boot/dts/kirkwood-ds212.dts
index 99afd462f956..7f32e7abffac 100644
--- a/arch/arm/boot/dts/kirkwood-ds212.dts
+++ b/arch/arm/boot/dts/kirkwood-ds212.dts
@@ -43,6 +43,6 @@
status = "okay";
};
-&pcie2 {
+&pcie1 {
status = "okay";
};
diff --git a/arch/arm/boot/dts/kirkwood-ds411.dts b/arch/arm/boot/dts/kirkwood-ds411.dts
index 623cd4a37d71..72e58307416d 100644
--- a/arch/arm/boot/dts/kirkwood-ds411.dts
+++ b/arch/arm/boot/dts/kirkwood-ds411.dts
@@ -48,6 +48,10 @@
status = "okay";
};
-&pcie2 {
+&pciec {
+ status = "okay";
+};
+
+&pcie1 {
status = "okay";
};
diff --git a/arch/arm/boot/dts/kirkwood-ds411slim.dts b/arch/arm/boot/dts/kirkwood-ds411slim.dts
index a0a1fad8b4de..aaaf31b81522 100644
--- a/arch/arm/boot/dts/kirkwood-ds411slim.dts
+++ b/arch/arm/boot/dts/kirkwood-ds411slim.dts
@@ -44,6 +44,6 @@
status = "okay";
};
-&pcie2 {
+&pcie1 {
status = "okay";
};
diff --git a/arch/arm/boot/dts/kirkwood-ib62x0.dts b/arch/arm/boot/dts/kirkwood-ib62x0.dts
index bfa5edde179c..ef84d8699a76 100644
--- a/arch/arm/boot/dts/kirkwood-ib62x0.dts
+++ b/arch/arm/boot/dts/kirkwood-ib62x0.dts
@@ -62,12 +62,12 @@
pinctrl-0 = <&pmx_button_reset &pmx_button_usb_copy>;
pinctrl-names = "default";
- button@1 {
+ copy {
label = "USB Copy";
linux,code = <KEY_COPY>;
gpios = <&gpio0 29 GPIO_ACTIVE_LOW>;
};
- button@2 {
+ reset {
label = "Reset";
linux,code = <KEY_RESTART>;
gpios = <&gpio0 28 GPIO_ACTIVE_LOW>;
diff --git a/arch/arm/boot/dts/kirkwood-iconnect.dts b/arch/arm/boot/dts/kirkwood-iconnect.dts
index 38e31d15a62d..d25184ae4af3 100644
--- a/arch/arm/boot/dts/kirkwood-iconnect.dts
+++ b/arch/arm/boot/dts/kirkwood-iconnect.dts
@@ -19,16 +19,6 @@
linux,initrd-end = <0x4800000>;
};
- mbus {
- pcie-controller {
- status = "okay";
-
- pcie@1,0 {
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
pinctrl: pin-controller@10000 {
pmx_button_reset: pmx-button-reset {
@@ -136,13 +126,13 @@
pinctrl-0 = < &pmx_button_reset &pmx_button_otb >;
pinctrl-names = "default";
- button@1 {
+ otb {
label = "OTB Button";
linux,code = <KEY_COPY>;
gpios = <&gpio1 3 GPIO_ACTIVE_LOW>;
debounce-interval = <100>;
};
- button@2 {
+ reset {
label = "Reset";
linux,code = <KEY_RESTART>;
gpios = <&gpio0 12 GPIO_ACTIVE_LOW>;
@@ -194,3 +184,11 @@
phy-handle = <&ethphy0>;
};
};
+
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-km_common.dtsi b/arch/arm/boot/dts/kirkwood-km_common.dtsi
index 8367c772c764..7962bdefde49 100644
--- a/arch/arm/boot/dts/kirkwood-km_common.dtsi
+++ b/arch/arm/boot/dts/kirkwood-km_common.dtsi
@@ -4,16 +4,6 @@
stdout-path = &uart0;
};
- mbus {
- pcie-controller {
- status = "okay";
-
- pcie@1,0 {
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
pinctrl: pin-controller@10000 {
pinctrl-0 = < &pmx_i2c_gpio_sda &pmx_i2c_gpio_scl >;
@@ -34,7 +24,7 @@
};
};
- i2c@0 {
+ i2c {
compatible = "i2c-gpio";
gpios = < &gpio0 8 GPIO_ACTIVE_HIGH /* sda */
&gpio0 9 GPIO_ACTIVE_HIGH>; /* scl */
@@ -46,3 +36,11 @@
status = "okay";
chip-delay = <25>;
};
+
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-laplug.dts b/arch/arm/boot/dts/kirkwood-laplug.dts
index 24425660e973..1b0f070c2676 100644
--- a/arch/arm/boot/dts/kirkwood-laplug.dts
+++ b/arch/arm/boot/dts/kirkwood-laplug.dts
@@ -27,15 +27,6 @@
stdout-path = &uart0;
};
- mbus {
- pcie-controller {
- status = "okay";
- pcie@1,0 {
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
serial@12000 {
status = "okay";
@@ -62,7 +53,7 @@
gpio_keys {
compatible = "gpio-keys";
- button@1{
+ power {
label = "Power push button";
linux,code = <KEY_POWER>;
gpios = <&gpio1 0 GPIO_ACTIVE_HIGH>;
@@ -169,3 +160,11 @@
phy-handle = <&ethphy0>;
};
};
+
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-linkstation.dtsi b/arch/arm/boot/dts/kirkwood-linkstation.dtsi
index 69061b6e987b..36c54c9dfa30 100644
--- a/arch/arm/boot/dts/kirkwood-linkstation.dtsi
+++ b/arch/arm/boot/dts/kirkwood-linkstation.dtsi
@@ -49,15 +49,6 @@
stdout-path = &uart0;
};
- mbus {
- pcie-controller {
- status = "okay";
- pcie@1,0 {
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
pinctrl: pin-controller@10000 {
pmx_power_hdd0: pmx-power-hdd0 {
@@ -200,3 +191,11 @@
};
};
};
+
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-linksys-viper.dts b/arch/arm/boot/dts/kirkwood-linksys-viper.dts
new file mode 100644
index 000000000000..345fcac48dc7
--- /dev/null
+++ b/arch/arm/boot/dts/kirkwood-linksys-viper.dts
@@ -0,0 +1,240 @@
+/*
+ * kirkwood-viper.dts - Device Tree file for Linksys viper (E4200v2 / EA4500)
+ *
+ * (c) 2013 Jonas Gorski <jogo@openwrt.org>
+ * (c) 2013 Deutsche Telekom Innovation Laboratories
+ * (c) 2014 Luka Perkov <luka@openwrt.org>
+ * (c) 2014 Randy C. Will <randall.will@gmail.com>
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+/dts-v1/;
+
+#include "kirkwood.dtsi"
+#include "kirkwood-6282.dtsi"
+
+/ {
+ model = "Linksys Viper (E4200v2 / EA4500)";
+ compatible = "linksys,viper", "marvell,kirkwood-88f6282", "marvell,kirkwood";
+
+ memory {
+ device_type = "memory";
+ reg = <0x00000000 0x8000000>;
+ };
+
+ aliases {
+ serial0 = &uart0;
+ };
+
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
+ gpio_keys {
+ compatible = "gpio-keys";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ pinctrl-0 = < &pmx_btn_wps &pmx_btn_reset >;
+ pinctrl-names = "default";
+
+ wps {
+ label = "WPS Button";
+ linux,code = <KEY_WPS_BUTTON>;
+ gpios = <&gpio1 15 GPIO_ACTIVE_LOW>;
+ };
+
+ reset {
+ label = "Reset Button";
+ linux,code = <KEY_RESTART>;
+ gpios = <&gpio1 16 GPIO_ACTIVE_LOW>;
+ };
+ };
+
+ gpio-leds {
+ compatible = "gpio-leds";
+ pinctrl-0 = < &pmx_led_white_health &pmx_led_white_pulse >;
+ pinctrl-names = "default";
+
+ white-health {
+ label = "viper:white:health";
+ gpios = <&gpio0 7 GPIO_ACTIVE_HIGH>;
+ };
+
+ white-pulse {
+ label = "viper:white:pulse";
+ gpios = <&gpio0 14 GPIO_ACTIVE_HIGH>;
+ };
+ };
+
+ dsa {
+ compatible = "marvell,dsa";
+ #address-cells = <2>;
+ #size-cells = <0>;
+
+ dsa,ethernet = <&eth0port>;
+ dsa,mii-bus = <&mdio>;
+
+ switch@16,0 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <16 0>; /* MDIO address 16, switch 0 in tree */
+
+ port@0 {
+ reg = <0>;
+ label = "ethernet1";
+ };
+
+ port@1 {
+ reg = <1>;
+ label = "ethernet2";
+ };
+
+ port@2 {
+ reg = <2>;
+ label = "ethernet3";
+ };
+
+ port@3 {
+ reg = <3>;
+ label = "ethernet4";
+ };
+
+ port@4 {
+ reg = <4>;
+ label = "internet";
+ };
+
+ port@5 {
+ reg = <5>;
+ label = "cpu";
+ };
+ };
+ };
+};
+
+&pinctrl {
+ pmx_led_white_health: pmx-led-white-health {
+ marvell,pins = "mpp7";
+ marvell,function = "gpo";
+ };
+ pmx_led_white_pulse: pmx-led-white-pulse {
+ marvell,pins = "mpp14";
+ marvell,function = "gpio";
+ };
+ pmx_btn_wps: pmx-btn-wps {
+ marvell,pins = "mpp47";
+ marvell,function = "gpio";
+ };
+ pmx_btn_reset: pmx-btn-reset {
+ marvell,pins = "mpp48";
+ marvell,function = "gpio";
+ };
+};
+
+&nand {
+ status = "okay";
+ pinctrl-0 = <&pmx_nand>;
+ pinctrl-names = "default";
+
+ partitions {
+ compatible = "fixed-partitions";
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+ partition@0 {
+ label = "u-boot";
+ reg = <0x0 0x80000>;
+ read-only;
+ };
+
+ partition@80000 {
+ label = "u_env";
+ reg = <0x80000 0x20000>;
+ };
+
+ partition@A0000 {
+ label = "s_env";
+ reg = <0xA0000 0x20000>;
+ };
+
+ partition@200000 {
+ label = "kernel";
+ reg = <0x200000 0x2A0000>;
+ };
+
+ partition@4A0000 {
+ label = "rootfs";
+ reg = <0x4A0000 0x1760000>;
+ };
+
+ partition@1C00000 {
+ label = "alt_kernel";
+ reg = <0x1C00000 0x2A0000>;
+ };
+
+ partition@1EA0000 {
+ label = "alt_rootfs";
+ reg = <0x1EA0000 0x1760000>;
+ };
+
+ partition@3600000 {
+ label = "syscfg";
+ reg = <0x3600000 0x4A00000>;
+ };
+
+ partition@C0000 {
+ label = "unused";
+ reg = <0xC0000 0x140000>;
+ };
+
+ };
+};
+
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
+
+&pcie1 {
+ status = "okay";
+};
+
+&mdio {
+ status = "okay";
+};
+
+&uart0 {
+ status = "okay";
+};
+
+/* eth0 is connected to a Marvell 88E6171 switch, without a PHY. So set
+ * fixed speed and duplex.
+ */
+&eth0 {
+ status = "okay";
+ ethernet0-port@0 {
+ speed = <1000>;
+ duplex = <1>;
+ };
+};
+
+/* eth1 is connected to the switch at port 6. However DSA only supports a
+ * single CPU port. So leave this port disabled to avoid confusion.
+ */
+&eth1 {
+ status = "disabled";
+};
+
+/* There is no battery on the board, so the RTC does not keep
+ * time when there is no power, making it useless.
+ */
+&rtc {
+ status = "disabled";
+};
+
diff --git a/arch/arm/boot/dts/kirkwood-lsxl.dtsi b/arch/arm/boot/dts/kirkwood-lsxl.dtsi
index 1d6528d82969..8b7c6ce79a41 100644
--- a/arch/arm/boot/dts/kirkwood-lsxl.dtsi
+++ b/arch/arm/boot/dts/kirkwood-lsxl.dtsi
@@ -107,18 +107,18 @@
&pmx_power_auto_switch>;
pinctrl-names = "default";
- button@1 {
+ option {
label = "Function Button";
linux,code = <KEY_OPTION>;
gpios = <&gpio1 9 GPIO_ACTIVE_LOW>;
};
- button@2 {
+ reserved {
label = "Power-on Switch";
linux,code = <KEY_RESERVED>;
linux,input-type = <5>;
gpios = <&gpio1 10 GPIO_ACTIVE_LOW>;
};
- button@3 {
+ power {
label = "Power-auto Switch";
linux,code = <KEY_ESC>;
linux,input-type = <5>;
@@ -133,28 +133,28 @@
&pmx_led_function_blue>;
pinctrl-names = "default";
- led@1 {
+ func_blue {
label = "lsxl:blue:func";
gpios = <&gpio1 4 GPIO_ACTIVE_LOW>;
};
- led@2 {
+ alarm {
label = "lsxl:red:alarm";
gpios = <&gpio1 5 GPIO_ACTIVE_LOW>;
};
- led@3 {
+ info {
label = "lsxl:amber:info";
gpios = <&gpio1 6 GPIO_ACTIVE_LOW>;
};
- led@4 {
+ power {
label = "lsxl:blue:power";
gpios = <&gpio1 7 GPIO_ACTIVE_LOW>;
default-state = "keep";
};
- led@5 {
+ func_red {
label = "lsxl:red:func";
gpios = <&gpio1 16 GPIO_ACTIVE_LOW>;
};
diff --git a/arch/arm/boot/dts/kirkwood-mplcec4.dts b/arch/arm/boot/dts/kirkwood-mplcec4.dts
index f3a991837515..aa413b0bcce2 100644
--- a/arch/arm/boot/dts/kirkwood-mplcec4.dts
+++ b/arch/arm/boot/dts/kirkwood-mplcec4.dts
@@ -17,16 +17,6 @@
stdout-path = &uart0;
};
- mbus {
- pcie-controller {
- status = "okay";
-
- pcie@1,0 {
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
pinctrl: pin-controller@10000 {
pmx_led_health: pmx-led-health {
@@ -215,3 +205,11 @@
phy-handle = <&ethphy1>;
};
};
+
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-mv88f6281gtw-ge.dts b/arch/arm/boot/dts/kirkwood-mv88f6281gtw-ge.dts
index b7e7d78c484e..172a38c0b8a9 100644
--- a/arch/arm/boot/dts/kirkwood-mv88f6281gtw-ge.dts
+++ b/arch/arm/boot/dts/kirkwood-mv88f6281gtw-ge.dts
@@ -31,16 +31,6 @@
stdout-path = &uart0;
};
- mbus {
- pcie-controller {
- status = "okay";
-
- pcie@1,0 {
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
pin-controller@10000 {
pmx_usb_led: pmx-usb-led {
@@ -109,19 +99,19 @@
pinctrl-0 = <&pmx_keys>;
pinctrl-names = "default";
- button@1 {
+ restart {
label = "SWR Button";
linux,code = <KEY_RESTART>;
gpios = <&gpio1 15 GPIO_ACTIVE_LOW>;
};
- button@2 {
+ wps {
label = "WPS Button";
linux,code = <KEY_WPS_BUTTON>;
gpios = <&gpio1 14 GPIO_ACTIVE_LOW>;
};
};
- dsa@0 {
+ dsa {
compatible = "marvell,dsa";
#address-cells = <1>;
#size-cells = <0>;
@@ -179,3 +169,11 @@
duplex = <1>;
};
};
+
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-nas2big.dts b/arch/arm/boot/dts/kirkwood-nas2big.dts
index 7427ec50b829..f53bcacf6b63 100644
--- a/arch/arm/boot/dts/kirkwood-nas2big.dts
+++ b/arch/arm/boot/dts/kirkwood-nas2big.dts
@@ -28,16 +28,6 @@
stdout-path = &uart0;
};
- mbus {
- pcie-controller {
- status = "okay";
-
- pcie@1,0 {
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
rtc@10300 {
/* The on-chip RTC is not powered (no supercap). */
@@ -141,3 +131,11 @@
reg = <0x9100000 0x6f00000>;
};
};
+
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-netgear_readynas_duo_v2.dts b/arch/arm/boot/dts/kirkwood-netgear_readynas_duo_v2.dts
index fd733c63bc27..c0413b63cf2e 100644
--- a/arch/arm/boot/dts/kirkwood-netgear_readynas_duo_v2.dts
+++ b/arch/arm/boot/dts/kirkwood-netgear_readynas_duo_v2.dts
@@ -28,16 +28,6 @@
stdout-path = &uart0;
};
- mbus {
- pcie-controller {
- status = "okay";
-
- pcie@1,0 {
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
pinctrl: pin-controller@10000 {
pmx_button_power: pmx-button-power {
@@ -193,7 +183,7 @@
#address-cells = <1>;
#size-cells = <0>;
- usb3_regulator: usb3-regulator {
+ usb3_regulator: usb3-regulator@1 {
compatible = "regulator-fixed";
reg = <1>;
regulator-name = "USB 3.0 Power";
@@ -251,3 +241,11 @@
phy-handle = <&ethphy0>;
};
};
+
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-netgear_readynas_nv+_v2.dts b/arch/arm/boot/dts/kirkwood-netgear_readynas_nv+_v2.dts
index b514d643fb6c..2bfc6cfa151d 100644
--- a/arch/arm/boot/dts/kirkwood-netgear_readynas_nv+_v2.dts
+++ b/arch/arm/boot/dts/kirkwood-netgear_readynas_nv+_v2.dts
@@ -28,18 +28,6 @@
stdout-path = &uart0;
};
- mbus {
- pcie-controller {
- status = "okay";
-
- /* Connected to NEC uPD720200 USB 3.0 controller */
- pcie@1,0 {
- /* Port 0, Lane 0 */
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
pinctrl: pin-controller@10000 {
pmx_button_power: pmx-button-power {
@@ -205,7 +193,7 @@
#address-cells = <1>;
#size-cells = <0>;
- usb3_regulator: usb3-regulator {
+ usb3_regulator: usb3-regulator@1 {
compatible = "regulator-fixed";
reg = <1>;
regulator-name = "USB 3.0 Power";
@@ -265,3 +253,12 @@
phy-handle = <&ethphy0>;
};
};
+
+/* Connected to NEC uPD720200 USB 3.0 controller */
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-netxbig.dtsi b/arch/arm/boot/dts/kirkwood-netxbig.dtsi
index 62515a8b99b9..52b58fe0c4fe 100644
--- a/arch/arm/boot/dts/kirkwood-netxbig.dtsi
+++ b/arch/arm/boot/dts/kirkwood-netxbig.dtsi
@@ -59,22 +59,22 @@
#size-cells = <0>;
/*
- * button@1 and button@2 represent a three position rocker
+ * esc and power represent a three position rocker
* switch. Thus the conventional KEY_POWER does not fit
*/
- button@1 {
+ exc {
label = "Back power switch (on|auto)";
linux,code = <KEY_ESC>;
linux,input-type = <5>;
gpios = <&gpio0 13 GPIO_ACTIVE_LOW>;
};
- button@2 {
+ power {
label = "Back power switch (auto|off)";
linux,code = <KEY_1>;
linux,input-type = <5>;
gpios = <&gpio0 15 GPIO_ACTIVE_LOW>;
};
- button@3 {
+ option {
label = "Function button";
linux,code = <KEY_OPTION>;
gpios = <&gpio1 2 GPIO_ACTIVE_LOW>;
diff --git a/arch/arm/boot/dts/kirkwood-ns2-common.dtsi b/arch/arm/boot/dts/kirkwood-ns2-common.dtsi
index e832b6320264..282605f4c92c 100644
--- a/arch/arm/boot/dts/kirkwood-ns2-common.dtsi
+++ b/arch/arm/boot/dts/kirkwood-ns2-common.dtsi
@@ -57,7 +57,7 @@
#address-cells = <1>;
#size-cells = <0>;
- button@1 {
+ power {
label = "Power push button";
linux,code = <KEY_POWER>;
gpios = <&gpio1 0 GPIO_ACTIVE_HIGH>;
@@ -83,7 +83,7 @@
&mdio {
status = "okay";
- ethphy0: ethernet-phy {
+ ethphy0: ethernet-phy@X {
/* overwrite reg property in board file */
};
};
diff --git a/arch/arm/boot/dts/kirkwood-nsa310.dts b/arch/arm/boot/dts/kirkwood-nsa310.dts
index 6139df0f376c..0b69ee4934fa 100644
--- a/arch/arm/boot/dts/kirkwood-nsa310.dts
+++ b/arch/arm/boot/dts/kirkwood-nsa310.dts
@@ -15,16 +15,6 @@
stdout-path = &uart0;
};
- mbus {
- pcie-controller {
- status = "okay";
-
- pcie@1,0 {
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
pinctrl: pin-controller@10000 {
pinctrl-0 = <&pmx_unknown>;
@@ -138,3 +128,11 @@
};
};
};
+
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-nsa320.dts b/arch/arm/boot/dts/kirkwood-nsa320.dts
index 24f686d1044d..6ab104b4bb42 100644
--- a/arch/arm/boot/dts/kirkwood-nsa320.dts
+++ b/arch/arm/boot/dts/kirkwood-nsa320.dts
@@ -27,16 +27,6 @@
stdout-path = &uart0;
};
- mbus {
- pcie-controller {
- status = "okay";
-
- pcie@1,0 {
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
pinctrl: pin-controller@10000 {
pinctrl-names = "default";
@@ -193,10 +183,19 @@
};
};
+ hwmon {
+ compatible = "zyxel,nsa320-mcu";
+ pinctrl-0 = <&pmx_mcu_data &pmx_mcu_clk &pmx_mcu_act>;
+ pinctrl-names = "default";
+
+ data-gpios = <&gpio0 14 GPIO_ACTIVE_HIGH>;
+ clk-gpios = <&gpio0 16 GPIO_ACTIVE_HIGH>;
+ act-gpios = <&gpio0 17 GPIO_ACTIVE_LOW>;
+ };
+
/* The following pins are currently not assigned to a driver,
some of them should be configured as inputs.
- pinctrl-0 = <&pmx_mcu_data &pmx_mcu_clk &pmx_mcu_act
- &pmx_htp &pmx_vid_b1
+ pinctrl-0 = <&pmx_htp &pmx_vid_b1
&pmx_power_resume_data &pmx_power_resume_clk>; */
};
@@ -213,3 +212,11 @@
phy-handle = <&ethphy0>;
};
};
+
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-nsa325.dts b/arch/arm/boot/dts/kirkwood-nsa325.dts
index bc4ec9332387..36c64816bf7f 100644
--- a/arch/arm/boot/dts/kirkwood-nsa325.dts
+++ b/arch/arm/boot/dts/kirkwood-nsa325.dts
@@ -28,16 +28,6 @@
stdout-path = &uart0;
};
- mbus {
- pcie-controller {
- status = "okay";
-
- pcie@1,0 {
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
pinctrl: pin-controller@10000 {
pinctrl-names = "default";
@@ -236,3 +226,10 @@
};
};
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-nsa3x0-common.dtsi b/arch/arm/boot/dts/kirkwood-nsa3x0-common.dtsi
index 2075a2e828f1..e09b79ac73fd 100644
--- a/arch/arm/boot/dts/kirkwood-nsa3x0-common.dtsi
+++ b/arch/arm/boot/dts/kirkwood-nsa3x0-common.dtsi
@@ -4,16 +4,6 @@
/ {
model = "ZyXEL NSA310";
- mbus {
- pcie-controller {
- status = "okay";
-
- pcie@1,0 {
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
pinctrl: pin-controller@10000 {
@@ -77,17 +67,17 @@
pinctrl-0 = <&pmx_btn_reset &pmx_btn_copy &pmx_btn_power>;
pinctrl-names = "default";
- button@1 {
+ power {
label = "Power Button";
linux,code = <KEY_POWER>;
gpios = <&gpio1 14 GPIO_ACTIVE_HIGH>;
};
- button@2 {
+ copy {
label = "Copy Button";
linux,code = <KEY_COPY>;
gpios = <&gpio1 5 GPIO_ACTIVE_LOW>;
};
- button@3 {
+ reset {
label = "Reset Button";
linux,code = <KEY_RESTART>;
gpios = <&gpio1 4 GPIO_ACTIVE_LOW>;
@@ -157,3 +147,11 @@
reg = <0x5040000 0x2fc0000>;
};
};
+
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-openblocks_a6.dts b/arch/arm/boot/dts/kirkwood-openblocks_a6.dts
index fb9dc227255d..0db0e3edc88f 100644
--- a/arch/arm/boot/dts/kirkwood-openblocks_a6.dts
+++ b/arch/arm/boot/dts/kirkwood-openblocks_a6.dts
@@ -117,7 +117,7 @@
#address-cells = <1>;
#size-cells = <0>;
- button@1 {
+ power {
label = "Init Button";
linux,code = <KEY_POWER>;
gpios = <&gpio1 6 GPIO_ACTIVE_HIGH>;
diff --git a/arch/arm/boot/dts/kirkwood-openblocks_a7.dts b/arch/arm/boot/dts/kirkwood-openblocks_a7.dts
index d5e3bc518968..cf2f5240e176 100644
--- a/arch/arm/boot/dts/kirkwood-openblocks_a7.dts
+++ b/arch/arm/boot/dts/kirkwood-openblocks_a7.dts
@@ -135,7 +135,7 @@
#address-cells = <1>;
#size-cells = <0>;
- button@1 {
+ button {
label = "Init Button";
linux,code = <KEY_POWER>;
gpios = <&gpio1 6 GPIO_ACTIVE_HIGH>;
diff --git a/arch/arm/boot/dts/kirkwood-openrd.dtsi b/arch/arm/boot/dts/kirkwood-openrd.dtsi
index 24f1d30970a0..e4ecab112601 100644
--- a/arch/arm/boot/dts/kirkwood-openrd.dtsi
+++ b/arch/arm/boot/dts/kirkwood-openrd.dtsi
@@ -25,16 +25,6 @@
stdout-path = &uart0;
};
- mbus {
- pcie-controller {
- status = "okay";
-
- pcie@1,0 {
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
pinctrl: pin-controller@10000 {
pinctrl-0 = <&pmx_select28 &pmx_sdio_cd &pmx_select34>;
@@ -125,3 +115,7 @@
reg = <0x0600000 0x1FA00000>;
};
};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-pogoplug-series-4.dts b/arch/arm/boot/dts/kirkwood-pogoplug-series-4.dts
index 8082d64266a3..b2f26239d298 100644
--- a/arch/arm/boot/dts/kirkwood-pogoplug-series-4.dts
+++ b/arch/arm/boot/dts/kirkwood-pogoplug-series-4.dts
@@ -33,7 +33,7 @@
pinctrl-0 = <&pmx_button_eject>;
pinctrl-names = "default";
- button@1 {
+ eject {
debounce_interval = <50>;
wakeup-source;
linux,code = <KEY_EJECTCD>;
diff --git a/arch/arm/boot/dts/kirkwood-rd88f6192.dts b/arch/arm/boot/dts/kirkwood-rd88f6192.dts
index e0b959396ca2..b8af907249fb 100644
--- a/arch/arm/boot/dts/kirkwood-rd88f6192.dts
+++ b/arch/arm/boot/dts/kirkwood-rd88f6192.dts
@@ -29,16 +29,6 @@
stdout-path = &uart0;
};
- mbus {
- pcie-controller {
- status = "okay";
-
- pcie@1,0 {
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
pinctrl: pin-controller@10000 {
pinctrl-0 = <&pmx_usb_power>;
@@ -108,4 +98,12 @@
ethernet0-port@0 {
phy-handle = <&ethphy0>;
};
-};
\ No newline at end of file
+};
+
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-rd88f6281-a.dts b/arch/arm/boot/dts/kirkwood-rd88f6281-a.dts
index f2e08b3b33ea..6f771a99cb02 100644
--- a/arch/arm/boot/dts/kirkwood-rd88f6281-a.dts
+++ b/arch/arm/boot/dts/kirkwood-rd88f6281-a.dts
@@ -19,7 +19,7 @@
model = "Marvell RD88f6281 Reference design, with A0 or higher SoC";
compatible = "marvell,rd88f6281-a", "marvell,rd88f6281","marvell,kirkwood-88f6281", "marvell,kirkwood";
- dsa@0 {
+ dsa {
switch@0 {
reg = <10 0>; /* MDIO address 10, switch 0 in tree */
};
diff --git a/arch/arm/boot/dts/kirkwood-rd88f6281-z0.dts b/arch/arm/boot/dts/kirkwood-rd88f6281-z0.dts
index f4272b64ed7f..1a797381d3d4 100644
--- a/arch/arm/boot/dts/kirkwood-rd88f6281-z0.dts
+++ b/arch/arm/boot/dts/kirkwood-rd88f6281-z0.dts
@@ -19,7 +19,7 @@
model = "Marvell RD88f6281 Reference design, with Z0 SoC";
compatible = "marvell,rd88f6281-z0", "marvell,rd88f6281","marvell,kirkwood-88f6281", "marvell,kirkwood";
- dsa@0 {
+ dsa {
switch@0 {
reg = <0 0>; /* MDIO address 0, switch 0 in tree */
port@4 {
diff --git a/arch/arm/boot/dts/kirkwood-rd88f6281.dtsi b/arch/arm/boot/dts/kirkwood-rd88f6281.dtsi
index d195e884b3b5..d5aacf137e40 100644
--- a/arch/arm/boot/dts/kirkwood-rd88f6281.dtsi
+++ b/arch/arm/boot/dts/kirkwood-rd88f6281.dtsi
@@ -25,16 +25,6 @@
stdout-path = &uart0;
};
- mbus {
- pcie-controller {
- status = "okay";
-
- pcie@1,0 {
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
pinctrl: pin-controller@10000 {
pinctrl-names = "default";
@@ -63,7 +53,7 @@
};
};
- dsa@0 {
+ dsa {
compatible = "marvell,dsa";
#address-cells = <2>;
#size-cells = <0>;
@@ -134,3 +124,11 @@
duplex = <1>;
};
};
+
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-rs212.dts b/arch/arm/boot/dts/kirkwood-rs212.dts
index 3b19f1fd4cac..2c722ecd5331 100644
--- a/arch/arm/boot/dts/kirkwood-rs212.dts
+++ b/arch/arm/boot/dts/kirkwood-rs212.dts
@@ -44,6 +44,10 @@
status = "okay";
};
-&pcie2 {
+&pciec {
+ status = "okay";
+};
+
+&pcie1 {
status = "okay";
};
diff --git a/arch/arm/boot/dts/kirkwood-synology.dtsi b/arch/arm/boot/dts/kirkwood-synology.dtsi
index 04015c174b99..65e9524e852a 100644
--- a/arch/arm/boot/dts/kirkwood-synology.dtsi
+++ b/arch/arm/boot/dts/kirkwood-synology.dtsi
@@ -10,20 +10,6 @@
*/
/ {
- mbus {
- pcie-controller {
- status = "okay";
-
- pcie@1,0 {
- status = "okay";
- };
-
- pcie2: pcie@2,0 {
- status = "disabled";
- };
- };
- };
-
ocp@f1000000 {
pinctrl: pin-controller@10000 {
pmx_alarmled_12: pmx-alarmled-12 {
@@ -861,3 +847,11 @@
phy-handle = <&ethphy1>;
};
};
+
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-t5325.dts b/arch/arm/boot/dts/kirkwood-t5325.dts
index ed956b849a71..3500f4738fb0 100644
--- a/arch/arm/boot/dts/kirkwood-t5325.dts
+++ b/arch/arm/boot/dts/kirkwood-t5325.dts
@@ -30,16 +30,6 @@
stdout-path = &uart0;
};
- mbus {
- pcie-controller {
- status = "okay";
-
- pcie@1,0 {
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
pinctrl: pin-controller@10000 {
pinctrl-0 = <&pmx_i2s &pmx_sysrst>;
@@ -173,7 +163,7 @@
pinctrl-0 = <&pmx_button_power>;
pinctrl-names = "default";
- button@1 {
+ power {
label = "Power Button";
linux,code = <KEY_POWER>;
gpios = <&gpio1 13 GPIO_ACTIVE_HIGH>;
@@ -217,7 +207,7 @@
&mdio {
status = "okay";
- ethphy0: ethernet-phy {
+ ethphy0: ethernet-phy@8 {
device_type = "ethernet-phy";
reg = <8>;
};
@@ -229,3 +219,11 @@
phy-handle = <&ethphy0>;
};
};
+
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-ts219-6281.dts b/arch/arm/boot/dts/kirkwood-ts219-6281.dts
index 9767d73f3857..ee62204e4ecd 100644
--- a/arch/arm/boot/dts/kirkwood-ts219-6281.dts
+++ b/arch/arm/boot/dts/kirkwood-ts219-6281.dts
@@ -39,12 +39,12 @@
pinctrl-0 = <&pmx_reset_button &pmx_USB_copy_button>;
pinctrl-names = "default";
- button@1 {
+ copy {
label = "USB Copy";
linux,code = <KEY_COPY>;
gpios = <&gpio0 15 GPIO_ACTIVE_LOW>;
};
- button@2 {
+ reset {
label = "Reset";
linux,code = <KEY_RESTART>;
gpios = <&gpio0 16 GPIO_ACTIVE_LOW>;
diff --git a/arch/arm/boot/dts/kirkwood-ts219-6282.dts b/arch/arm/boot/dts/kirkwood-ts219-6282.dts
index bfc1a32d4e42..3437bb396844 100644
--- a/arch/arm/boot/dts/kirkwood-ts219-6282.dts
+++ b/arch/arm/boot/dts/kirkwood-ts219-6282.dts
@@ -5,16 +5,6 @@
#include "kirkwood-ts219.dtsi"
/ {
- mbus {
- pcie-controller {
- status = "okay";
-
- pcie@2,0 {
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
pinctrl: pin-controller@10000 {
@@ -49,12 +39,12 @@
pinctrl-0 = <&pmx_reset_button &pmx_USB_copy_button>;
pinctrl-names = "default";
- button@1 {
+ copy {
label = "USB Copy";
linux,code = <KEY_COPY>;
gpios = <&gpio1 11 GPIO_ACTIVE_LOW>;
};
- button@2 {
+ reset {
label = "Reset";
linux,code = <KEY_RESTART>;
gpios = <&gpio1 5 GPIO_ACTIVE_LOW>;
@@ -63,3 +53,5 @@
};
&ethphy0 { reg = <0>; };
+
+&pcie1 { status = "okay"; };
diff --git a/arch/arm/boot/dts/kirkwood-ts219.dtsi b/arch/arm/boot/dts/kirkwood-ts219.dtsi
index 0e46560551f4..62e5e2d5c348 100644
--- a/arch/arm/boot/dts/kirkwood-ts219.dtsi
+++ b/arch/arm/boot/dts/kirkwood-ts219.dtsi
@@ -12,16 +12,6 @@
stdout-path = &uart0;
};
- mbus {
- pcie-controller {
- status = "okay";
-
- pcie@1,0 {
- status = "okay";
- };
- };
- };
-
ocp@f1000000 {
i2c@11000 {
status = "okay";
@@ -94,7 +84,7 @@
&mdio {
status = "okay";
- ethphy0: ethernet-phy {
+ ethphy0: ethernet-phy@X {
/* overwrite reg property in board file */
};
};
@@ -105,3 +95,11 @@
phy-handle = <&ethphy0>;
};
};
+
+&pciec {
+ status = "okay";
+};
+
+&pcie0 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/kirkwood-ts419-6282.dts b/arch/arm/boot/dts/kirkwood-ts419-6282.dts
index d7512d4cdced..e3e71f48acc8 100644
--- a/arch/arm/boot/dts/kirkwood-ts419-6282.dts
+++ b/arch/arm/boot/dts/kirkwood-ts419-6282.dts
@@ -16,17 +16,8 @@
#include "kirkwood-ts219.dtsi"
#include "kirkwood-ts419.dtsi"
-/ {
- mbus {
- pcie-controller {
- status = "okay";
-
- pcie@2,0 {
- status = "okay";
- };
- };
- };
-};
-
&ethphy0 { reg = <0>; };
&ethphy1 { reg = <1>; };
+
+&pciec { status = "okay"; };
+&pcie1 { status = "okay"; };
diff --git a/arch/arm/boot/dts/kirkwood-ts419.dtsi b/arch/arm/boot/dts/kirkwood-ts419.dtsi
index 30ab93bfb1e4..02bd53762705 100644
--- a/arch/arm/boot/dts/kirkwood-ts419.dtsi
+++ b/arch/arm/boot/dts/kirkwood-ts419.dtsi
@@ -45,12 +45,12 @@
pinctrl-0 = <&pmx_reset_button &pmx_USB_copy_button>;
pinctrl-names = "default";
- button@1 {
+ copy {
label = "USB Copy";
linux,code = <KEY_COPY>;
gpios = <&gpio1 11 GPIO_ACTIVE_LOW>;
};
- button@2 {
+ reset {
label = "Reset";
linux,code = <KEY_RESTART>;
gpios = <&gpio1 5 GPIO_ACTIVE_LOW>;
diff --git a/arch/arm/boot/dts/kirkwood.dtsi b/arch/arm/boot/dts/kirkwood.dtsi
index 7445a15e259d..29b8bd7e0d93 100644
--- a/arch/arm/boot/dts/kirkwood.dtsi
+++ b/arch/arm/boot/dts/kirkwood.dtsi
@@ -27,7 +27,7 @@
i2c0 = &i2c0;
};
- mbus {
+ mbus@f1000000 {
compatible = "marvell,kirkwood-mbus", "simple-bus";
#address-cells = <2>;
#size-cells = <1>;
diff --git a/arch/arm/boot/dts/lpc18xx.dtsi b/arch/arm/boot/dts/lpc18xx.dtsi
index 053a1f54f4bb..fdb736c82045 100644
--- a/arch/arm/boot/dts/lpc18xx.dtsi
+++ b/arch/arm/boot/dts/lpc18xx.dtsi
@@ -195,13 +195,19 @@
clocks = <&ccu1 CLK_CPU_CREG>;
resets = <&rgu 5>;
- usb0_otg_phy: phy@004 {
+ creg_clk: clock-controller {
+ compatible = "nxp,lpc1850-creg-clk";
+ clocks = <&xtal32>;
+ #clock-cells = <1>;
+ };
+
+ usb0_otg_phy: phy {
compatible = "nxp,lpc1850-usb-otg-phy";
clocks = <&ccu1 CLK_USB0>;
#phy-cells = <0>;
};
- dmamux: dma-mux@11c {
+ dmamux: dma-mux {
compatible = "nxp,lpc1850-dmamux";
#dma-cells = <3>;
dma-requests = <64>;
@@ -209,11 +215,19 @@
};
};
+ rtc: rtc@40046000 {
+ compatible = "nxp,lpc1850-rtc", "nxp,lpc1788-rtc";
+ reg = <0x40046000 0x1000>;
+ interrupts = <47>;
+ clocks = <&creg_clk 0>, <&ccu1 CLK_CPU_BUS>;
+ clock-names = "rtc", "reg";
+ };
+
cgu: clock-controller@40050000 {
compatible = "nxp,lpc1850-cgu";
reg = <0x40050000 0x1000>;
#clock-cells = <1>;
- clocks = <&xtal>, <&xtal32>, <&enet_rx_clk>, <&enet_tx_clk>, <&gp_clkin>;
+ clocks = <&xtal>, <&creg_clk 1>, <&enet_rx_clk>, <&enet_tx_clk>, <&gp_clkin>;
};
ccu1: clock-controller@40051000 {
@@ -430,6 +444,15 @@
status = "disabled";
};
+ dac: dac@400e1000 {
+ compatible = "nxp,lpc1850-dac";
+ reg = <0x400e1000 0x1000>;
+ interrupts = <0>;
+ clocks = <&ccu1 CLK_APB3_DAC>;
+ resets = <&rgu 42>;
+ status = "disabled";
+ };
+
can0: can@400e2000 {
compatible = "bosch,c_can";
reg = <0x400e2000 0x1000>;
@@ -439,6 +462,24 @@
status = "disabled";
};
+ adc0: adc@400e3000 {
+ compatible = "nxp,lpc1850-adc";
+ reg = <0x400e3000 0x1000>;
+ interrupts = <17>;
+ clocks = <&ccu1 CLK_APB3_ADC0>;
+ resets = <&rgu 40>;
+ status = "disabled";
+ };
+
+ adc1: adc@400e4000 {
+ compatible = "nxp,lpc1850-adc";
+ reg = <0x400e4000 0x1000>;
+ interrupts = <21>;
+ clocks = <&ccu1 CLK_APB3_ADC1>;
+ resets = <&rgu 41>;
+ status = "disabled";
+ };
+
gpio: gpio@400f4000 {
compatible = "nxp,lpc1850-gpio";
reg = <0x400f4000 0x4000>;
diff --git a/arch/arm/boot/dts/ea3250.dts b/arch/arm/boot/dts/lpc3250-ea3250.dts
index a4a281fe82af..52b3ed10283a 100644
--- a/arch/arm/boot/dts/ea3250.dts
+++ b/arch/arm/boot/dts/lpc3250-ea3250.dts
@@ -25,119 +25,6 @@
reg = <0x80000000 0x4000000>;
};
- ahb {
- mac: ethernet@31060000 {
- phy-mode = "rmii";
- use-iram;
- };
-
- /* 128MB Flash via SLC NAND controller */
- slc: flash@20020000 {
- status = "okay";
- #address-cells = <1>;
- #size-cells = <1>;
-
- nxp,wdr-clks = <14>;
- nxp,wwidth = <260000000>;
- nxp,whold = <104000000>;
- nxp,wsetup = <200000000>;
- nxp,rdr-clks = <14>;
- nxp,rwidth = <34666666>;
- nxp,rhold = <104000000>;
- nxp,rsetup = <200000000>;
- nand-on-flash-bbt;
- gpios = <&gpio 5 19 1>; /* GPO_P3 19, active low */
-
- mtd0@00000000 {
- label = "ea3250-boot";
- reg = <0x00000000 0x00080000>;
- read-only;
- };
-
- mtd1@00080000 {
- label = "ea3250-uboot";
- reg = <0x00080000 0x000c0000>;
- read-only;
- };
-
- mtd2@00140000 {
- label = "ea3250-kernel";
- reg = <0x00140000 0x00400000>;
- };
-
- mtd3@00540000 {
- label = "ea3250-rootfs";
- reg = <0x00540000 0x07ac0000>;
- };
- };
-
- apb {
- uart5: serial@40090000 {
- status = "okay";
- };
-
- uart3: serial@40080000 {
- status = "okay";
- };
-
- uart6: serial@40098000 {
- status = "okay";
- };
-
- i2c1: i2c@400A0000 {
- clock-frequency = <100000>;
-
- eeprom@50 {
- compatible = "at,24c256";
- reg = <0x50>;
- };
-
- eeprom@57 {
- compatible = "at,24c64";
- reg = <0x57>;
- };
-
- uda1380: uda1380@18 {
- compatible = "nxp,uda1380";
- reg = <0x18>;
- power-gpio = <&gpio 0x59 0>;
- reset-gpio = <&gpio 0x51 0>;
- dac-clk = "wspll";
- };
-
- pca9532: pca9532@60 {
- compatible = "nxp,pca9532";
- gpio-controller;
- #gpio-cells = <2>;
- reg = <0x60>;
- };
- };
-
- i2c2: i2c@400A8000 {
- clock-frequency = <100000>;
- };
-
- sd@20098000 {
- wp-gpios = <&pca9532 5 0>;
- cd-gpios = <&pca9532 4 0>;
- cd-inverted;
- bus-width = <4>;
- status = "okay";
- };
- };
-
- fab {
- uart1: serial@40014000 {
- status = "okay";
- };
-
- /* 3-axis accelerometer X,Y,Z (or AD-IN instead of Z) */
- adc@40048000 {
- status = "okay";
- };
- };
- };
-
gpio_keys {
compatible = "gpio-keys";
#address-cells = <1>;
@@ -258,12 +145,44 @@
};
};
-/* Here, choose exactly one from: ohci, usbd */
-&ohci /* &usbd */ {
- transceiver = <&isp1301>;
+/* 3-axis accelerometer X,Y,Z (or AD-IN instead of Z) */
+&adc {
status = "okay";
};
+&i2c1 {
+ clock-frequency = <100000>;
+
+ uda1380: uda1380@18 {
+ compatible = "nxp,uda1380";
+ reg = <0x18>;
+ power-gpio = <&gpio 0x59 0>;
+ reset-gpio = <&gpio 0x51 0>;
+ dac-clk = "wspll";
+ };
+
+ eeprom@50 {
+ compatible = "atmel,24c256";
+ reg = <0x50>;
+ };
+
+ eeprom@57 {
+ compatible = "atmel,24c64";
+ reg = <0x57>;
+ };
+
+ pca9532: pca9532@60 {
+ compatible = "nxp,pca9532";
+ gpio-controller;
+ #gpio-cells = <2>;
+ reg = <0x60>;
+ };
+};
+
+&i2c2 {
+ clock-frequency = <100000>;
+};
+
&i2cusb {
clock-frequency = <100000>;
@@ -272,3 +191,82 @@
reg = <0x2d>;
};
};
+
+&mac {
+ phy-mode = "rmii";
+ use-iram;
+};
+
+/* Here, choose exactly one from: ohci, usbd */
+&ohci /* &usbd */ {
+ transceiver = <&isp1301>;
+ status = "okay";
+};
+
+&sd {
+ wp-gpios = <&pca9532 5 0>;
+ cd-gpios = <&pca9532 4 0>;
+ cd-inverted;
+ bus-width = <4>;
+ status = "okay";
+};
+
+/* 128MB Flash via SLC NAND controller */
+&slc {
+ status = "okay";
+
+ nxp,wdr-clks = <14>;
+ nxp,wwidth = <260000000>;
+ nxp,whold = <104000000>;
+ nxp,wsetup = <200000000>;
+ nxp,rdr-clks = <14>;
+ nxp,rwidth = <34666666>;
+ nxp,rhold = <104000000>;
+ nxp,rsetup = <200000000>;
+ nand-on-flash-bbt;
+ gpios = <&gpio 5 19 1>; /* GPO_P3 19, active low */
+
+ partitions {
+ compatible = "fixed-partitions";
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+ mtd0@00000000 {
+ label = "ea3250-boot";
+ reg = <0x00000000 0x00080000>;
+ read-only;
+ };
+
+ mtd1@00080000 {
+ label = "ea3250-uboot";
+ reg = <0x00080000 0x000c0000>;
+ read-only;
+ };
+
+ mtd2@00140000 {
+ label = "ea3250-kernel";
+ reg = <0x00140000 0x00400000>;
+ };
+
+ mtd3@00540000 {
+ label = "ea3250-rootfs";
+ reg = <0x00540000 0x07ac0000>;
+ };
+ };
+};
+
+&uart1 {
+ status = "okay";
+};
+
+&uart3 {
+ status = "okay";
+};
+
+&uart5 {
+ status = "okay";
+};
+
+&uart6 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/lpc3250-phy3250.dts b/arch/arm/boot/dts/lpc3250-phy3250.dts
new file mode 100644
index 000000000000..fd95e2b10357
--- /dev/null
+++ b/arch/arm/boot/dts/lpc3250-phy3250.dts
@@ -0,0 +1,226 @@
+/*
+ * PHYTEC phyCORE-LPC3250 board
+ *
+ * Copyright 2012 Roland Stigge <stigge@antcom.de>
+ *
+ * The code contained herein is licensed under the GNU General Public
+ * License. You may obtain a copy of the GNU General Public License
+ * Version 2 or later at the following locations:
+ *
+ * http://www.opensource.org/licenses/gpl-license.html
+ * http://www.gnu.org/copyleft/gpl.html
+ */
+
+/dts-v1/;
+#include "lpc32xx.dtsi"
+
+/ {
+ model = "PHYTEC phyCORE-LPC3250 board based on NXP LPC3250";
+ compatible = "phytec,phy3250", "nxp,lpc3250";
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+ memory {
+ device_type = "memory";
+ reg = <0x80000000 0x4000000>;
+ };
+
+ regulators {
+ backlight_reg: regulator@0 {
+ compatible = "regulator-fixed";
+ regulator-name = "backlight_reg";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ gpio = <&gpio 5 4 0>;
+ enable-active-high;
+ regulator-boot-on;
+ };
+
+ lcd_reg: regulator@1 {
+ compatible = "regulator-fixed";
+ regulator-name = "lcd_reg";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ gpio = <&gpio 5 0 0>;
+ enable-active-high;
+ regulator-boot-on;
+ };
+
+ sd_reg: regulator@2 {
+ compatible = "regulator-fixed";
+ regulator-name = "sd_reg";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ gpio = <&gpio 5 5 0>;
+ enable-active-high;
+ };
+ };
+
+ leds {
+ compatible = "gpio-leds";
+
+ led0 { /* red */
+ gpios = <&gpio 5 1 0>; /* GPO_P3 1, GPIO 80, active high */
+ default-state = "off";
+ };
+
+ led1 { /* green */
+ gpios = <&gpio 5 14 0>; /* GPO_P3 14, GPIO 93, active high */
+ linux,default-trigger = "heartbeat";
+ };
+ };
+};
+
+&clcd {
+ status = "okay";
+};
+
+&i2c1 {
+ clock-frequency = <100000>;
+
+ uda1380: uda1380@18 {
+ compatible = "nxp,uda1380";
+ reg = <0x18>;
+ power-gpio = <&gpio 0x59 0>;
+ reset-gpio = <&gpio 0x51 0>;
+ dac-clk = "wspll";
+ };
+
+ pcf8563: rtc@51 {
+ compatible = "nxp,pcf8563";
+ reg = <0x51>;
+ };
+};
+
+&i2c2 {
+ clock-frequency = <100000>;
+};
+
+&i2cusb {
+ clock-frequency = <100000>;
+
+ isp1301: usb-transceiver@2c {
+ compatible = "nxp,isp1301";
+ reg = <0x2c>;
+ };
+};
+
+&key {
+ keypad,num-rows = <1>;
+ keypad,num-columns = <1>;
+ nxp,debounce-delay-ms = <3>;
+ nxp,scan-delay-ms = <34>;
+ linux,keymap = <0x00000002>;
+ status = "okay";
+};
+
+&mac {
+ phy-mode = "rmii";
+ use-iram;
+};
+
+/* Here, choose exactly one from: ohci, usbd */
+&ohci /* &usbd */ {
+ transceiver = <&isp1301>;
+ status = "okay";
+};
+
+&sd {
+ wp-gpios = <&gpio 3 0 0>;
+ cd-gpios = <&gpio 3 1 0>;
+ cd-inverted;
+ bus-width = <4>;
+ vmmc-supply = <&sd_reg>;
+ status = "okay";
+};
+
+/* 64MB Flash via SLC NAND controller */
+&slc {
+ status = "okay";
+
+ nxp,wdr-clks = <14>;
+ nxp,wwidth = <40000000>;
+ nxp,whold = <100000000>;
+ nxp,wsetup = <100000000>;
+ nxp,rdr-clks = <14>;
+ nxp,rwidth = <40000000>;
+ nxp,rhold = <66666666>;
+ nxp,rsetup = <100000000>;
+ nand-on-flash-bbt;
+ gpios = <&gpio 5 19 1>; /* GPO_P3 19, active low */
+
+ partitions {
+ compatible = "fixed-partitions";
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+ mtd0@00000000 {
+ label = "phy3250-boot";
+ reg = <0x00000000 0x00064000>;
+ read-only;
+ };
+
+ mtd1@00064000 {
+ label = "phy3250-uboot";
+ reg = <0x00064000 0x00190000>;
+ read-only;
+ };
+
+ mtd2@001f4000 {
+ label = "phy3250-ubt-prms";
+ reg = <0x001f4000 0x00010000>;
+ };
+
+ mtd3@00204000 {
+ label = "phy3250-kernel";
+ reg = <0x00204000 0x00400000>;
+ };
+
+ mtd4@00604000 {
+ label = "phy3250-rootfs";
+ reg = <0x00604000 0x039fc000>;
+ };
+ };
+};
+
+&ssp0 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ num-cs = <1>;
+ cs-gpios = <&gpio 3 5 0>;
+ status = "okay";
+
+ eeprom: at25@0 {
+ compatible = "atmel,at25";
+ reg = <0>;
+ spi-max-frequency = <5000000>;
+
+ pl022,interface = <0>;
+ pl022,com-mode = <0>;
+ pl022,rx-level-trig = <1>;
+ pl022,tx-level-trig = <1>;
+ pl022,ctrl-len = <11>;
+ pl022,wait-state = <0>;
+ pl022,duplex = <0>;
+
+ at25,byte-len = <0x8000>;
+ at25,addr-mode = <2>;
+ at25,page-size = <64>;
+ };
+};
+
+&tsc {
+ status = "okay";
+};
+
+&uart2 {
+ status = "okay";
+};
+
+&uart3 {
+ status = "okay";
+};
+
+&uart5 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/lpc32xx.dtsi b/arch/arm/boot/dts/lpc32xx.dtsi
index c58d8da9ea2a..e295e1ec82a5 100644
--- a/arch/arm/boot/dts/lpc32xx.dtsi
+++ b/arch/arm/boot/dts/lpc32xx.dtsi
@@ -92,7 +92,8 @@
ohci: ohci@0 {
compatible = "nxp,ohci-nxp", "usb-ohci";
reg = <0x0 0x300>;
- interrupts = <59 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&sic1>;
+ interrupts = <27 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&usbclk LPC32XX_USB_CLK_HOST>;
status = "disabled";
};
@@ -100,10 +101,11 @@
usbd: usbd@0 {
compatible = "nxp,lpc3220-udc";
reg = <0x0 0x300>;
- interrupts = <61 IRQ_TYPE_LEVEL_HIGH>,
- <62 IRQ_TYPE_LEVEL_HIGH>,
- <60 IRQ_TYPE_LEVEL_HIGH>,
- <58 IRQ_TYPE_LEVEL_LOW>;
+ interrupt-parent = <&sic1>;
+ interrupts = <29 IRQ_TYPE_LEVEL_HIGH>,
+ <30 IRQ_TYPE_LEVEL_HIGH>,
+ <28 IRQ_TYPE_LEVEL_HIGH>,
+ <26 IRQ_TYPE_LEVEL_LOW>;
clocks = <&usbclk LPC32XX_USB_CLK_DEVICE>;
status = "disabled";
};
@@ -111,7 +113,8 @@
i2cusb: i2c@300 {
compatible = "nxp,pnx-i2c";
reg = <0x300 0x100>;
- interrupts = <63 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&sic1>;
+ interrupts = <31 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&usbclk LPC32XX_USB_CLK_I2C>;
#address-cells = <1>;
#size-cells = <0>;
@@ -162,30 +165,44 @@
compatible = "simple-bus";
ranges = <0x20000000 0x20000000 0x30000000>;
+ /*
+ * ssp0 and spi1 are shared pins;
+ * enable one in your board dts, as needed.
+ */
ssp0: ssp@20084000 {
compatible = "arm,pl022", "arm,primecell";
reg = <0x20084000 0x1000>;
interrupts = <20 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LPC32XX_CLK_SSP0>;
clock-names = "apb_pclk";
+ status = "disabled";
};
spi1: spi@20088000 {
compatible = "nxp,lpc3220-spi";
reg = <0x20088000 0x1000>;
+ clocks = <&clk LPC32XX_CLK_SPI1>;
+ status = "disabled";
};
+ /*
+ * ssp1 and spi2 are shared pins;
+ * enable one in your board dts, as needed.
+ */
ssp1: ssp@2008c000 {
compatible = "arm,pl022", "arm,primecell";
reg = <0x2008c000 0x1000>;
interrupts = <21 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LPC32XX_CLK_SSP1>;
clock-names = "apb_pclk";
+ status = "disabled";
};
spi2: spi@20090000 {
compatible = "nxp,lpc3220-spi";
reg = <0x20090000 0x1000>;
+ clocks = <&clk LPC32XX_CLK_SPI2>;
+ status = "disabled";
};
i2s0: i2s@20094000 {
@@ -249,7 +266,8 @@
i2c1: i2c@400A0000 {
compatible = "nxp,pnx-i2c";
reg = <0x400A0000 0x100>;
- interrupts = <51 IRQ_TYPE_LEVEL_LOW>;
+ interrupt-parent = <&sic1>;
+ interrupts = <19 IRQ_TYPE_LEVEL_LOW>;
#address-cells = <1>;
#size-cells = <0>;
pnx,timeout = <0x64>;
@@ -259,7 +277,8 @@
i2c2: i2c@400A8000 {
compatible = "nxp,pnx-i2c";
reg = <0x400A8000 0x100>;
- interrupts = <50 IRQ_TYPE_LEVEL_LOW>;
+ interrupt-parent = <&sic1>;
+ interrupts = <18 IRQ_TYPE_LEVEL_LOW>;
#address-cells = <1>;
#size-cells = <0>;
pnx,timeout = <0x64>;
@@ -294,22 +313,41 @@
clocks = <&xtal_32k>, <&xtal>;
clock-names = "xtal_32k", "xtal";
+
+ assigned-clocks = <&clk LPC32XX_CLK_HCLK_PLL>;
+ assigned-clock-rates = <208000000>;
};
};
- /*
- * MIC Interrupt controller includes:
- * MIC @40008000
- * SIC1 @4000C000
- * SIC2 @40010000
- */
mic: interrupt-controller@40008000 {
compatible = "nxp,lpc3220-mic";
+ reg = <0x40008000 0x4000>;
interrupt-controller;
- reg = <0x40008000 0xC000>;
#interrupt-cells = <2>;
};
+ sic1: interrupt-controller@4000c000 {
+ compatible = "nxp,lpc3220-sic";
+ reg = <0x4000c000 0x4000>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+
+ interrupt-parent = <&mic>;
+ interrupts = <0 IRQ_TYPE_LEVEL_LOW>,
+ <30 IRQ_TYPE_LEVEL_LOW>;
+ };
+
+ sic2: interrupt-controller@40010000 {
+ compatible = "nxp,lpc3220-sic";
+ reg = <0x40010000 0x4000>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+
+ interrupt-parent = <&mic>;
+ interrupts = <1 IRQ_TYPE_LEVEL_LOW>,
+ <31 IRQ_TYPE_LEVEL_LOW>;
+ };
+
uart1: serial@40014000 {
compatible = "nxp,lpc3220-hsuart";
reg = <0x40014000 0x1000>;
@@ -334,7 +372,8 @@
rtc: rtc@40024000 {
compatible = "nxp,lpc3220-rtc";
reg = <0x40024000 0x1000>;
- interrupts = <52 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&sic1>;
+ interrupts = <20 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LPC32XX_CLK_RTC>;
};
@@ -387,7 +426,8 @@
adc: adc@40048000 {
compatible = "nxp,lpc3220-adc";
reg = <0x40048000 0x1000>;
- interrupts = <39 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&sic1>;
+ interrupts = <7 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LPC32XX_CLK_ADC>;
status = "disabled";
};
@@ -395,7 +435,8 @@
tsc: tsc@40048000 {
compatible = "nxp,lpc3220-tsc";
reg = <0x40048000 0x1000>;
- interrupts = <39 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-parent = <&sic1>;
+ interrupts = <7 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&clk LPC32XX_CLK_ADC>;
status = "disabled";
};
diff --git a/arch/arm/boot/dts/lpc4350-hitex-eval.dts b/arch/arm/boot/dts/lpc4350-hitex-eval.dts
index 022d495432c1..6c9048d4d03c 100644
--- a/arch/arm/boot/dts/lpc4350-hitex-eval.dts
+++ b/arch/arm/boot/dts/lpc4350-hitex-eval.dts
@@ -45,50 +45,50 @@
poll-interval = <100>;
autorepeat;
- button@0 {
+ button0 {
label = "joy:right";
linux,code = <KEY_RIGHT>;
gpios = <&pca_gpio 8 GPIO_ACTIVE_LOW>;
};
- button@1 {
+ button1 {
label = "joy:up";
linux,code = <KEY_UP>;
gpios = <&pca_gpio 9 GPIO_ACTIVE_LOW>;
};
- button@2 {
+ button2 {
label = "joy:enter";
linux,code = <KEY_ENTER>;
gpios = <&pca_gpio 10 GPIO_ACTIVE_LOW>;
};
- button@3 {
+ button3 {
label = "joy:left";
linux,code = <KEY_LEFT>;
gpios = <&pca_gpio 11 GPIO_ACTIVE_LOW>;
};
- button@4 {
+ button4 {
label = "joy:down";
linux,code = <KEY_DOWN>;
gpios = <&pca_gpio 12 GPIO_ACTIVE_LOW>;
};
- button@5 {
+ button5 {
label = "user:sw3";
linux,code = <KEY_F1>;
gpios = <&pca_gpio 13 GPIO_ACTIVE_LOW>;
};
- button@6 {
+ button6 {
label = "user:sw4";
linux,code = <KEY_F2>;
gpios = <&pca_gpio 14 GPIO_ACTIVE_LOW>;
};
- button@7 {
+ button7 {
label = "user:sw5";
linux,code = <KEY_F3>;
gpios = <&pca_gpio 15 GPIO_ACTIVE_LOW>;
@@ -119,9 +119,25 @@
gpios = <&pca_gpio 3 GPIO_ACTIVE_LOW>;
};
};
+
+ vcc: vcc_fixed {
+ compatible = "regulator-fixed";
+ regulator-name = "3v3io";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ };
};
&pinctrl {
+ adc1_pins: adc1-pins {
+ adc1_pins_cfg {
+ pins = "pf_9";
+ function = "adc";
+ input-disable;
+ bias-disable;
+ };
+ };
+
emc_pins: emc-pins {
emc_addr0_23_cfg {
pins = "p2_9", "p2_10", "p2_11", "p2_12",
@@ -325,6 +341,13 @@
};
};
+&adc1 {
+ status = "okay";
+ vref-supply = <&vcc>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&adc1_pins>;
+};
+
&emc {
status = "okay";
pinctrl-names = "default";
@@ -430,7 +453,7 @@
pinctrl-names = "default";
pinctrl-0 = <&spifi_pins>;
- flash@0 {
+ flash {
compatible = "jedec,spi-nor";
spi-rx-bus-width = <4>;
#address-cells = <1>;
diff --git a/arch/arm/boot/dts/lpc4357-ea4357-devkit.dts b/arch/arm/boot/dts/lpc4357-ea4357-devkit.dts
index 079d3cf8c00b..1919be4dab2b 100644
--- a/arch/arm/boot/dts/lpc4357-ea4357-devkit.dts
+++ b/arch/arm/boot/dts/lpc4357-ea4357-devkit.dts
@@ -38,6 +38,13 @@
reg = <0x28000000 0x2000000>; /* 32 MB */
};
+ vcc: vcc_fixed {
+ compatible = "regulator-fixed";
+ regulator-name = "3v3-supply";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ };
+
/* vmmc is controlled by sdmmc host internally */
vmmc: vmmc_fixed {
compatible = "regulator-fixed";
@@ -55,31 +62,31 @@
poll-interval = <100>;
autorepeat;
- button@0 {
+ button0 {
label = "joy_enter";
linux,code = <KEY_ENTER>;
gpios = <&gpio LPC_GPIO(4,8) GPIO_ACTIVE_LOW>;
};
- button@1 {
+ button1 {
label = "joy_left";
linux,code = <KEY_LEFT>;
gpios = <&gpio LPC_GPIO(4,9) GPIO_ACTIVE_LOW>;
};
- button@2 {
+ button2 {
label = "joy_up";
linux,code = <KEY_UP>;
gpios = <&gpio LPC_GPIO(4,10) GPIO_ACTIVE_LOW>;
};
- button@3 {
+ button3 {
label = "joy_right";
linux,code = <KEY_RIGHT>;
gpios = <&gpio LPC_GPIO(4,12) GPIO_ACTIVE_LOW>;
};
- button@4 {
+ button4 {
label = "joy_down";
linux,code = <KEY_DOWN>;
gpios = <&gpio LPC_GPIO(4,13) GPIO_ACTIVE_LOW>;
@@ -461,6 +468,11 @@
};
};
+&adc0 {
+ status = "okay";
+ vref-supply = <&vcc>;
+};
+
&i2c0 {
status = "okay";
pinctrl-names = "default";
@@ -483,6 +495,11 @@
};
};
+&dac {
+ status = "okay";
+ vref-supply = <&vcc>;
+};
+
&emc {
status = "okay";
pinctrl-names = "default";
@@ -567,7 +584,7 @@
pinctrl-names = "default";
pinctrl-0 = <&spifi_pins>;
- flash@0 {
+ flash {
compatible = "jedec,spi-nor";
spi-cpol;
spi-cpha;
diff --git a/arch/arm/boot/dts/ls1021a.dtsi b/arch/arm/boot/dts/ls1021a.dtsi
index 726372d3adc0..5ae8e9297e9a 100644
--- a/arch/arm/boot/dts/ls1021a.dtsi
+++ b/arch/arm/boot/dts/ls1021a.dtsi
@@ -119,6 +119,20 @@
};
+ msi1: msi-controller@1570e00 {
+ compatible = "fsl,1s1021a-msi";
+ reg = <0x0 0x1570e00 0x0 0x8>;
+ msi-controller;
+ interrupts = <GIC_SPI 179 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
+ msi2: msi-controller@1570e08 {
+ compatible = "fsl,1s1021a-msi";
+ reg = <0x0 0x1570e08 0x0 0x8>;
+ msi-controller;
+ interrupts = <GIC_SPI 180 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
ifc: ifc@1530000 {
compatible = "fsl,ifc", "simple-bus";
reg = <0x0 0x1530000 0x0 0x10000>;
@@ -245,7 +259,7 @@
interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_HIGH>;
clock-names = "dspi";
clocks = <&platform_clk 1>;
- spi-num-chipselects = <5>;
+ spi-num-chipselects = <6>;
big-endian;
status = "disabled";
};
@@ -258,7 +272,7 @@
interrupts = <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>;
clock-names = "dspi";
clocks = <&platform_clk 1>;
- spi-num-chipselects = <5>;
+ spi-num-chipselects = <6>;
big-endian;
status = "disabled";
};
@@ -332,6 +346,46 @@
status = "disabled";
};
+ gpio0: gpio@2300000 {
+ compatible = "fsl,ls1021a-gpio", "fsl,qoriq-gpio";
+ reg = <0x0 0x2300000 0x0 0x10000>;
+ interrupts = <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
+ gpio1: gpio@2310000 {
+ compatible = "fsl,ls1021a-gpio", "fsl,qoriq-gpio";
+ reg = <0x0 0x2310000 0x0 0x10000>;
+ interrupts = <GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
+ gpio2: gpio@2320000 {
+ compatible = "fsl,ls1021a-gpio", "fsl,qoriq-gpio";
+ reg = <0x0 0x2320000 0x0 0x10000>;
+ interrupts = <GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
+ gpio3: gpio@2330000 {
+ compatible = "fsl,ls1021a-gpio", "fsl,qoriq-gpio";
+ reg = <0x0 0x2330000 0x0 0x10000>;
+ interrupts = <GIC_SPI 166 IRQ_TYPE_LEVEL_HIGH>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
lpuart0: serial@2950000 {
compatible = "fsl,ls1021a-lpuart";
reg = <0x0 0x2950000 0x0 0x1000>;
@@ -443,8 +497,9 @@
compatible = "fsl,ls1021a-dcu";
reg = <0x0 0x2ce0000 0x0 0x10000>;
interrupts = <GIC_SPI 172 IRQ_TYPE_LEVEL_HIGH>;
- clocks = <&platform_clk 0>;
- clock-names = "dcu";
+ clocks = <&platform_clk 0>,
+ <&platform_clk 0>;
+ clock-names = "dcu", "pix";
big-endian;
status = "disabled";
};
@@ -587,6 +642,7 @@
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x00000000 0x40 0x00010000 0x0 0x00010000 /* downstream I/O */
0x82000000 0x0 0x40000000 0x40 0x40000000 0x0 0x40000000>; /* non-prefetchable memory */
+ msi-parent = <&msi1>;
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0000 0 0 1 &gic GIC_SPI 91 IRQ_TYPE_LEVEL_HIGH>,
@@ -609,6 +665,7 @@
bus-range = <0x0 0xff>;
ranges = <0x81000000 0x0 0x00000000 0x48 0x00010000 0x0 0x00010000 /* downstream I/O */
0x82000000 0x0 0x40000000 0x48 0x40000000 0x0 0x40000000>; /* non-prefetchable memory */
+ msi-parent = <&msi2>;
#interrupt-cells = <1>;
interrupt-map-mask = <0 0 0 7>;
interrupt-map = <0000 0 0 1 &gic GIC_SPI 92 IRQ_TYPE_LEVEL_HIGH>,
diff --git a/arch/arm/boot/dts/rk3288-thermal.dtsi b/arch/arm/boot/dts/mps2-an385.dts
index 651b962e3d53..31c374d72a6f 100644
--- a/arch/arm/boot/dts/rk3288-thermal.dtsi
+++ b/arch/arm/boot/dts/mps2-an385.dts
@@ -1,7 +1,7 @@
/*
- * Device Tree Source for RK3288 SoC thermal
+ * Copyright (C) 2015 ARM Limited
*
- * Copyright (c) 2014, Fuzhou Rockchip Electronics Co., Ltd
+ * Author: Vladimir Murzin <vladimir.murzin@arm.com>
*
* This file is dual-licensed: you can use it either under the terms
* of the GPL or the X11 license, at your option. Note that this dual
@@ -42,77 +42,51 @@
* OTHER DEALINGS IN THE SOFTWARE.
*/
-#include <dt-bindings/thermal/thermal.h>
+/dts-v1/;
-reserve_thermal: reserve_thermal {
- polling-delay-passive = <1000>; /* milliseconds */
- polling-delay = <5000>; /* milliseconds */
+#include "mps2.dtsi"
- thermal-sensors = <&tsadc 0>;
-};
+/ {
+ model = "ARM MPS2 Application Note 385/386";
+ compatible = "arm,mps2";
-cpu_thermal: cpu_thermal {
- polling-delay-passive = <100>; /* milliseconds */
- polling-delay = <5000>; /* milliseconds */
+ aliases {
+ serial0 = &uart0;
+ };
- thermal-sensors = <&tsadc 1>;
+ chosen {
+ bootargs = "earlycon";
+ stdout-path = "serial0:9600n8";
+ };
- trips {
- cpu_alert0: cpu_alert0 {
- temperature = <70000>; /* millicelsius */
- hysteresis = <2000>; /* millicelsius */
- type = "passive";
- };
- cpu_alert1: cpu_alert1 {
- temperature = <75000>; /* millicelsius */
- hysteresis = <2000>; /* millicelsius */
- type = "passive";
- };
- cpu_crit: cpu_crit {
- temperature = <90000>; /* millicelsius */
- hysteresis = <2000>; /* millicelsius */
- type = "critical";
- };
+ memory {
+ device_type = "memory";
+ reg = <0x21000000 0x1000000>;
};
- cooling-maps {
- map0 {
- trip = <&cpu_alert0>;
- cooling-device =
- <&cpu0 THERMAL_NO_LIMIT 6>;
- };
- map1 {
- trip = <&cpu_alert1>;
- cooling-device =
- <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ smb {
+ ethernet@0,0 {
+ compatible = "smsc,lan9220", "smsc,lan9115";
+ reg = <0 0x0 0x10000>;
+ interrupts = <13>;
+ interrupt-parent = <&nvic>;
+ smsc,irq-active-high;
};
};
};
-gpu_thermal: gpu_thermal {
- polling-delay-passive = <100>; /* milliseconds */
- polling-delay = <5000>; /* milliseconds */
+&uart0 {
+ status = "okay";
+};
- thermal-sensors = <&tsadc 2>;
+&timer0 {
+ status = "okay";
+};
- trips {
- gpu_alert0: gpu_alert0 {
- temperature = <70000>; /* millicelsius */
- hysteresis = <2000>; /* millicelsius */
- type = "passive";
- };
- gpu_crit: gpu_crit {
- temperature = <90000>; /* millicelsius */
- hysteresis = <2000>; /* millicelsius */
- type = "critical";
- };
- };
+&timer1 {
+ status = "okay";
+};
- cooling-maps {
- map0 {
- trip = <&gpu_alert0>;
- cooling-device =
- <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
- };
- };
+&wdt {
+ status = "okay";
};
diff --git a/arch/arm/boot/dts/mps2-an399.dts b/arch/arm/boot/dts/mps2-an399.dts
new file mode 100644
index 000000000000..5e7e5ca2edbf
--- /dev/null
+++ b/arch/arm/boot/dts/mps2-an399.dts
@@ -0,0 +1,92 @@
+/*
+ * Copyright (C) 2015 ARM Limited
+ *
+ * Author: Vladimir Murzin <vladimir.murzin@arm.com>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+
+#include "mps2.dtsi"
+
+/ {
+ model = "ARM MPS2 Application Note 399/400";
+ compatible = "arm,mps2";
+
+ aliases {
+ serial0 = &uart0;
+ };
+
+ chosen {
+ bootargs = "earlycon";
+ stdout-path = "serial0:9600n8";
+ };
+
+ memory {
+ device_type = "memory";
+ reg = <0x60000000 0x1000000>;
+ };
+
+ smb {
+ ethernet@1,0 {
+ compatible = "smsc,lan9220", "smsc,lan9115";
+ reg = <1 0x0 0x10000>;
+ interrupts = <13>;
+ interrupt-parent = <&nvic>;
+ smsc,irq-active-high;
+ };
+ };
+};
+
+&uart0 {
+ status = "okay";
+};
+
+&timer0 {
+ status = "okay";
+};
+
+&timer1 {
+ status = "okay";
+};
+
+&wdt {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/mps2.dtsi b/arch/arm/boot/dts/mps2.dtsi
new file mode 100644
index 000000000000..e3fed8d34558
--- /dev/null
+++ b/arch/arm/boot/dts/mps2.dtsi
@@ -0,0 +1,241 @@
+/*
+ * Copyright (C) 2015 ARM Limited
+ *
+ * Author: Vladimir Murzin <vladimir.murzin@arm.com>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "armv7-m.dtsi"
+
+/ {
+ oscclk0: clk-osc0 {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <50000000>;
+ };
+
+ oscclk1: clk-osc1 {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <24576000>;
+ };
+
+ oscclk2: clk-osc2 {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <25000000>;
+ };
+
+ cfgclk: clk-cfg {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <5000000>;
+ };
+
+ spicfgclk: clk-spicfg {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <75000000>;
+ };
+
+ sysclk: clk-sys {
+ compatible = "fixed-factor-clock";
+ clocks = <&oscclk0>;
+ #clock-cells = <0>;
+ clock-div = <2>;
+ clock-mult = <1>;
+ };
+
+ audmclk: clk-audm {
+ compatible = "fixed-factor-clock";
+ clocks = <&oscclk1>;
+ #clock-cells = <0>;
+ clock-div = <2>;
+ clock-mult = <1>;
+ };
+
+ audsclk: clk-auds {
+ compatible = "fixed-factor-clock";
+ clocks = <&oscclk1>;
+ #clock-cells = <0>;
+ clock-div = <8>;
+ clock-mult = <1>;
+ };
+
+ spiclcd: clk-cpiclcd {
+ compatible = "fixed-factor-clock";
+ clocks = <&oscclk0>;
+ #clock-cells = <0>;
+ clock-div = <2>;
+ clock-mult = <1>;
+ };
+
+ spicon: clk-spicon {
+ compatible = "fixed-factor-clock";
+ clocks = <&oscclk0>;
+ #clock-cells = <0>;
+ clock-div = <2>;
+ clock-mult = <1>;
+ };
+
+ i2cclcd: clk-i2cclcd {
+ compatible = "fixed-factor-clock";
+ clocks = <&oscclk0>;
+ #clock-cells = <0>;
+ clock-div = <2>;
+ clock-mult = <1>;
+ };
+
+ i2caud: clk-i2caud {
+ compatible = "fixed-factor-clock";
+ clocks = <&oscclk0>;
+ #clock-cells = <0>;
+ clock-div = <2>;
+ clock-mult = <1>;
+ };
+
+ soc {
+ compatible = "simple-bus";
+ ranges;
+
+ apb@40000000 {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0 0x40000000 0x10000>;
+
+ timer0: mps2-timer0@0 {
+ compatible = "arm,mps2-timer";
+ reg = <0x0 0x1000>;
+ interrupts = <8>;
+ clocks = <&sysclk>;
+ status = "disabled";
+ };
+
+ timer1: mps2-timer1@1000 {
+ compatible = "arm,mps2-timer";
+ reg = <0x1000 0x1000>;
+ interrupts = <9>;
+ clocks = <&sysclk>;
+ status = "disabled";
+ };
+
+ timer2: dual-timer@2000 {
+ compatible = "arm,sp804";
+ reg = <0x2000 0x1000>;
+ clocks = <&sysclk>;
+ interrupts = <10>;
+ status = "disabled";
+ };
+
+ uart0: serial@4000 {
+ compatible = "arm,mps2-uart";
+ reg = <0x4000 0x1000>;
+ interrupts = <0 1 12>;
+ clocks = <&sysclk>;
+ status = "disabled";
+ };
+
+ uart1: serial@5000 {
+ compatible = "arm,mps2-uart";
+ reg = <0x5000 0x1000>;
+ interrupts = <2 3 12>;
+ clocks = <&sysclk>;
+ status = "disabled";
+ };
+
+ uart2: serial@6000 {
+ compatible = "arm,mps2-uart";
+ reg = <0x6000 0x1000>;
+ interrupts = <4 5 12>;
+ clocks = <&sysclk>;
+ status = "disabled";
+ };
+
+ wdt: watchdog@8000 {
+ compatible = "arm,sp805", "arm,primecell";
+ arm,primecell-periphid = <0x00141805>;
+ reg = <0x8000 0x1000>;
+ interrupts = <0>;
+ clocks = <&sysclk>;
+ clock-names = "apb_pclk";
+ status = "disabled";
+ };
+ };
+ };
+
+ fpga@40020000 {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0 0x40020000 0x10000>;
+
+ fpgaio@8000 {
+ compatible = "syscon", "simple-mfd";
+ reg = <0x8000 0x10>;
+
+ led0 {
+ compatible = "register-bit-led";
+ offset = <0x0>;
+ mask = <0x01>;
+ label = "userled:0";
+ linux,default-trigger = "heartbeat";
+ default-state = "on";
+ };
+
+ led1 {
+ compatible = "register-bit-led";
+ offset = <0x0>;
+ mask = <0x02>;
+ label = "userled:1";
+ linux,default-trigger = "usr";
+ default-state = "off";
+ };
+ };
+ };
+
+ smb {
+ compatible = "simple-bus";
+ #address-cells = <2>;
+ #size-cells = <1>;
+ ranges = <0 0 0x40200000 0x10000>,
+ <1 0 0xa0000000 0x10000>;
+ };
+};
diff --git a/arch/arm/boot/dts/mt2701.dtsi b/arch/arm/boot/dts/mt2701.dtsi
index 83437683aa60..18596a2c58a1 100644
--- a/arch/arm/boot/dts/mt2701.dtsi
+++ b/arch/arm/boot/dts/mt2701.dtsi
@@ -15,6 +15,7 @@
#include <dt-bindings/interrupt-controller/irq.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include "skeleton64.dtsi"
+#include "mt2701-pinfunc.h"
/ {
compatible = "mediatek,mt2701";
@@ -85,6 +86,24 @@
<GIC_PPI 10 (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_HIGH)>;
};
+ pio: pinctrl@10005000 {
+ compatible = "mediatek,mt2701-pinctrl";
+ reg = <0 0x1000b000 0 0x1000>;
+ mediatek,pctl-regmap = <&syscfg_pctl_a>;
+ pins-are-numbered;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ interrupts = <GIC_SPI 113 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 114 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
+ syscfg_pctl_a: syscfg@10005000 {
+ compatible = "mediatek,mt2701-pctl-a-syscfg", "syscon";
+ reg = <0 0x10005000 0 0x1000>;
+ };
+
watchdog: watchdog@10007000 {
compatible = "mediatek,mt2701-wdt",
"mediatek,mt6589-wdt";
diff --git a/arch/arm/boot/dts/omap2420-clocks.dtsi b/arch/arm/boot/dts/omap2420-clocks.dtsi
index ce8c742d7e92..f8e5bd3cc628 100644
--- a/arch/arm/boot/dts/omap2420-clocks.dtsi
+++ b/arch/arm/boot/dts/omap2420-clocks.dtsi
@@ -9,7 +9,7 @@
*/
&prcm_clocks {
- sys_clkout2_src_gate: sys_clkout2_src_gate {
+ sys_clkout2_src_gate: sys_clkout2_src_gate@70 {
#clock-cells = <0>;
compatible = "ti,composite-no-wait-gate-clock";
clocks = <&core_ck>;
@@ -17,7 +17,7 @@
reg = <0x0070>;
};
- sys_clkout2_src_mux: sys_clkout2_src_mux {
+ sys_clkout2_src_mux: sys_clkout2_src_mux@70 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&core_ck>, <&sys_ck>, <&func_96m_ck>, <&func_54m_ck>;
@@ -31,7 +31,7 @@
clocks = <&sys_clkout2_src_gate>, <&sys_clkout2_src_mux>;
};
- sys_clkout2: sys_clkout2 {
+ sys_clkout2: sys_clkout2@70 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&sys_clkout2_src>;
@@ -41,7 +41,7 @@
ti,index-power-of-two;
};
- dsp_gate_ick: dsp_gate_ick {
+ dsp_gate_ick: dsp_gate_ick@810 {
#clock-cells = <0>;
compatible = "ti,composite-interface-clock";
clocks = <&dsp_fck>;
@@ -49,7 +49,7 @@
reg = <0x0810>;
};
- dsp_div_ick: dsp_div_ick {
+ dsp_div_ick: dsp_div_ick@840 {
#clock-cells = <0>;
compatible = "ti,composite-divider-clock";
clocks = <&dsp_fck>;
@@ -65,7 +65,7 @@
clocks = <&dsp_gate_ick>, <&dsp_div_ick>;
};
- iva1_gate_ifck: iva1_gate_ifck {
+ iva1_gate_ifck: iva1_gate_ifck@800 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&core_ck>;
@@ -73,7 +73,7 @@
reg = <0x0800>;
};
- iva1_div_ifck: iva1_div_ifck {
+ iva1_div_ifck: iva1_div_ifck@840 {
#clock-cells = <0>;
compatible = "ti,composite-divider-clock";
clocks = <&core_ck>;
@@ -96,7 +96,7 @@
clock-div = <2>;
};
- iva1_mpu_int_ifck: iva1_mpu_int_ifck {
+ iva1_mpu_int_ifck: iva1_mpu_int_ifck@800 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&iva1_ifck_div>;
@@ -104,7 +104,7 @@
reg = <0x0800>;
};
- wdt3_ick: wdt3_ick {
+ wdt3_ick: wdt3_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -112,7 +112,7 @@
reg = <0x0210>;
};
- wdt3_fck: wdt3_fck {
+ wdt3_fck: wdt3_fck@200 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_32k_ck>;
@@ -120,7 +120,7 @@
reg = <0x0200>;
};
- mmc_ick: mmc_ick {
+ mmc_ick: mmc_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -128,7 +128,7 @@
reg = <0x0210>;
};
- mmc_fck: mmc_fck {
+ mmc_fck: mmc_fck@200 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_96m_ck>;
@@ -136,7 +136,7 @@
reg = <0x0200>;
};
- eac_ick: eac_ick {
+ eac_ick: eac_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -144,7 +144,7 @@
reg = <0x0210>;
};
- eac_fck: eac_fck {
+ eac_fck: eac_fck@200 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_96m_ck>;
@@ -152,7 +152,7 @@
reg = <0x0200>;
};
- i2c1_fck: i2c1_fck {
+ i2c1_fck: i2c1_fck@200 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_12m_ck>;
@@ -160,7 +160,7 @@
reg = <0x0200>;
};
- i2c2_fck: i2c2_fck {
+ i2c2_fck: i2c2_fck@200 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_12m_ck>;
@@ -168,7 +168,7 @@
reg = <0x0200>;
};
- vlynq_ick: vlynq_ick {
+ vlynq_ick: vlynq_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l3_ck>;
@@ -176,7 +176,7 @@
reg = <0x0210>;
};
- vlynq_gate_fck: vlynq_gate_fck {
+ vlynq_gate_fck: vlynq_gate_fck@200 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&core_ck>;
@@ -192,7 +192,7 @@
clock-div = <18>;
};
- vlynq_mux_fck: vlynq_mux_fck {
+ vlynq_mux_fck: vlynq_mux_fck@240 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&func_96m_ck>, <&core_ck>, <&core_d2_ck>, <&core_d3_ck>, <&core_d4_ck>, <&dummy_ck>, <&core_d6_ck>, <&dummy_ck>, <&core_d8_ck>, <&core_d9_ck>, <&dummy_ck>, <&dummy_ck>, <&core_d12_ck>, <&dummy_ck>, <&dummy_ck>, <&dummy_ck>, <&core_d16_ck>, <&dummy_ck>, <&core_d18_ck>;
diff --git a/arch/arm/boot/dts/omap2420-n8x0-common.dtsi b/arch/arm/boot/dts/omap2420-n8x0-common.dtsi
index 8491f46c61b7..db95aadcca70 100644
--- a/arch/arm/boot/dts/omap2420-n8x0-common.dtsi
+++ b/arch/arm/boot/dts/omap2420-n8x0-common.dtsi
@@ -7,7 +7,7 @@
};
ocp {
- i2c@0 {
+ i2c0 {
compatible = "i2c-cbus-gpio";
gpios = <&gpio3 2 GPIO_ACTIVE_HIGH /* gpio66 clk */
&gpio3 1 GPIO_ACTIVE_HIGH /* gpio65 dat */
diff --git a/arch/arm/boot/dts/omap2420.dtsi b/arch/arm/boot/dts/omap2420.dtsi
index 5b9a376cc31e..fb712b9aa874 100644
--- a/arch/arm/boot/dts/omap2420.dtsi
+++ b/arch/arm/boot/dts/omap2420.dtsi
@@ -130,6 +130,10 @@
gpmc,num-cs = <8>;
gpmc,num-waitpins = <4>;
ti,hwmods = "gpmc";
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ gpio-controller;
+ #gpio-cells = <2>;
};
mcbsp1: mcbsp@48074000 {
diff --git a/arch/arm/boot/dts/omap2430-clocks.dtsi b/arch/arm/boot/dts/omap2430-clocks.dtsi
index 93fed68839b9..a5aa7d619849 100644
--- a/arch/arm/boot/dts/omap2430-clocks.dtsi
+++ b/arch/arm/boot/dts/omap2430-clocks.dtsi
@@ -9,7 +9,7 @@
*/
&scm_clocks {
- mcbsp3_mux_fck: mcbsp3_mux_fck {
+ mcbsp3_mux_fck: mcbsp3_mux_fck@78 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&func_96m_ck>, <&mcbsp_clks>;
@@ -22,7 +22,7 @@
clocks = <&mcbsp3_gate_fck>, <&mcbsp3_mux_fck>;
};
- mcbsp4_mux_fck: mcbsp4_mux_fck {
+ mcbsp4_mux_fck: mcbsp4_mux_fck@78 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&func_96m_ck>, <&mcbsp_clks>;
@@ -36,7 +36,7 @@
clocks = <&mcbsp4_gate_fck>, <&mcbsp4_mux_fck>;
};
- mcbsp5_mux_fck: mcbsp5_mux_fck {
+ mcbsp5_mux_fck: mcbsp5_mux_fck@78 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&func_96m_ck>, <&mcbsp_clks>;
@@ -52,7 +52,7 @@
};
&prcm_clocks {
- iva2_1_gate_ick: iva2_1_gate_ick {
+ iva2_1_gate_ick: iva2_1_gate_ick@800 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&dsp_fck>;
@@ -60,7 +60,7 @@
reg = <0x0800>;
};
- iva2_1_div_ick: iva2_1_div_ick {
+ iva2_1_div_ick: iva2_1_div_ick@840 {
#clock-cells = <0>;
compatible = "ti,composite-divider-clock";
clocks = <&dsp_fck>;
@@ -76,7 +76,7 @@
clocks = <&iva2_1_gate_ick>, <&iva2_1_div_ick>;
};
- mdm_gate_ick: mdm_gate_ick {
+ mdm_gate_ick: mdm_gate_ick@c10 {
#clock-cells = <0>;
compatible = "ti,composite-interface-clock";
clocks = <&core_ck>;
@@ -84,7 +84,7 @@
reg = <0x0c10>;
};
- mdm_div_ick: mdm_div_ick {
+ mdm_div_ick: mdm_div_ick@c40 {
#clock-cells = <0>;
compatible = "ti,composite-divider-clock";
clocks = <&core_ck>;
@@ -98,7 +98,7 @@
clocks = <&mdm_gate_ick>, <&mdm_div_ick>;
};
- mdm_osc_ck: mdm_osc_ck {
+ mdm_osc_ck: mdm_osc_ck@c00 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&osc_ck>;
@@ -106,7 +106,7 @@
reg = <0x0c00>;
};
- mcbsp3_ick: mcbsp3_ick {
+ mcbsp3_ick: mcbsp3_ick@214 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -114,7 +114,7 @@
reg = <0x0214>;
};
- mcbsp3_gate_fck: mcbsp3_gate_fck {
+ mcbsp3_gate_fck: mcbsp3_gate_fck@204 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&mcbsp_clks>;
@@ -122,7 +122,7 @@
reg = <0x0204>;
};
- mcbsp4_ick: mcbsp4_ick {
+ mcbsp4_ick: mcbsp4_ick@214 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -130,7 +130,7 @@
reg = <0x0214>;
};
- mcbsp4_gate_fck: mcbsp4_gate_fck {
+ mcbsp4_gate_fck: mcbsp4_gate_fck@204 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&mcbsp_clks>;
@@ -138,7 +138,7 @@
reg = <0x0204>;
};
- mcbsp5_ick: mcbsp5_ick {
+ mcbsp5_ick: mcbsp5_ick@214 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -146,7 +146,7 @@
reg = <0x0214>;
};
- mcbsp5_gate_fck: mcbsp5_gate_fck {
+ mcbsp5_gate_fck: mcbsp5_gate_fck@204 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&mcbsp_clks>;
@@ -154,7 +154,7 @@
reg = <0x0204>;
};
- mcspi3_ick: mcspi3_ick {
+ mcspi3_ick: mcspi3_ick@214 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -162,7 +162,7 @@
reg = <0x0214>;
};
- mcspi3_fck: mcspi3_fck {
+ mcspi3_fck: mcspi3_fck@204 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_48m_ck>;
@@ -170,7 +170,7 @@
reg = <0x0204>;
};
- icr_ick: icr_ick {
+ icr_ick: icr_ick@410 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&sys_ck>;
@@ -178,7 +178,7 @@
reg = <0x0410>;
};
- i2chs1_fck: i2chs1_fck {
+ i2chs1_fck: i2chs1_fck@204 {
#clock-cells = <0>;
compatible = "ti,omap2430-interface-clock";
clocks = <&func_96m_ck>;
@@ -186,7 +186,7 @@
reg = <0x0204>;
};
- i2chs2_fck: i2chs2_fck {
+ i2chs2_fck: i2chs2_fck@204 {
#clock-cells = <0>;
compatible = "ti,omap2430-interface-clock";
clocks = <&func_96m_ck>;
@@ -194,7 +194,7 @@
reg = <0x0204>;
};
- usbhs_ick: usbhs_ick {
+ usbhs_ick: usbhs_ick@214 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l3_ck>;
@@ -202,7 +202,7 @@
reg = <0x0214>;
};
- mmchs1_ick: mmchs1_ick {
+ mmchs1_ick: mmchs1_ick@214 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -210,7 +210,7 @@
reg = <0x0214>;
};
- mmchs1_fck: mmchs1_fck {
+ mmchs1_fck: mmchs1_fck@204 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_96m_ck>;
@@ -218,7 +218,7 @@
reg = <0x0204>;
};
- mmchs2_ick: mmchs2_ick {
+ mmchs2_ick: mmchs2_ick@214 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -226,7 +226,7 @@
reg = <0x0214>;
};
- mmchs2_fck: mmchs2_fck {
+ mmchs2_fck: mmchs2_fck@204 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_96m_ck>;
@@ -234,7 +234,7 @@
reg = <0x0204>;
};
- gpio5_ick: gpio5_ick {
+ gpio5_ick: gpio5_ick@214 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -242,7 +242,7 @@
reg = <0x0214>;
};
- gpio5_fck: gpio5_fck {
+ gpio5_fck: gpio5_fck@204 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_32k_ck>;
@@ -250,7 +250,7 @@
reg = <0x0204>;
};
- mdm_intc_ick: mdm_intc_ick {
+ mdm_intc_ick: mdm_intc_ick@214 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -258,7 +258,7 @@
reg = <0x0214>;
};
- mmchsdb1_fck: mmchsdb1_fck {
+ mmchsdb1_fck: mmchsdb1_fck@204 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_32k_ck>;
@@ -266,7 +266,7 @@
reg = <0x0204>;
};
- mmchsdb2_fck: mmchsdb2_fck {
+ mmchsdb2_fck: mmchsdb2_fck@204 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_32k_ck>;
diff --git a/arch/arm/boot/dts/omap2430.dtsi b/arch/arm/boot/dts/omap2430.dtsi
index 798dda072b2a..455aaea407dd 100644
--- a/arch/arm/boot/dts/omap2430.dtsi
+++ b/arch/arm/boot/dts/omap2430.dtsi
@@ -63,7 +63,7 @@
#size-cells = <0>;
};
- pbias_regulator: pbias_regulator {
+ pbias_regulator: pbias_regulator@230 {
compatible = "ti,pbias-omap2", "ti,pbias-omap";
reg = <0x230 0x4>;
syscon = <&scm_conf>;
@@ -154,6 +154,10 @@
gpmc,num-cs = <8>;
gpmc,num-waitpins = <4>;
ti,hwmods = "gpmc";
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ gpio-controller;
+ #gpio-cells = <2>;
};
mcbsp1: mcbsp@48074000 {
diff --git a/arch/arm/boot/dts/omap24xx-clocks.dtsi b/arch/arm/boot/dts/omap24xx-clocks.dtsi
index 63965b876973..ca73722b5ea4 100644
--- a/arch/arm/boot/dts/omap24xx-clocks.dtsi
+++ b/arch/arm/boot/dts/omap24xx-clocks.dtsi
@@ -8,7 +8,7 @@
* published by the Free Software Foundation.
*/
&scm_clocks {
- mcbsp1_mux_fck: mcbsp1_mux_fck {
+ mcbsp1_mux_fck: mcbsp1_mux_fck@4 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&func_96m_ck>, <&mcbsp_clks>;
@@ -22,7 +22,7 @@
clocks = <&mcbsp1_gate_fck>, <&mcbsp1_mux_fck>;
};
- mcbsp2_mux_fck: mcbsp2_mux_fck {
+ mcbsp2_mux_fck: mcbsp2_mux_fck@4 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&func_96m_ck>, <&mcbsp_clks>;
@@ -74,7 +74,7 @@
clock-frequency = <26000000>;
};
- aplls_clkin_ck: aplls_clkin_ck {
+ aplls_clkin_ck: aplls_clkin_ck@540 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&virt_19200000_ck>, <&virt_26m_ck>, <&virt_13m_ck>, <&virt_12m_ck>;
@@ -90,7 +90,7 @@
clock-div = <1>;
};
- osc_ck: osc_ck {
+ osc_ck: osc_ck@60 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&aplls_clkin_ck>, <&aplls_clkin_x2_ck>;
@@ -99,7 +99,7 @@
ti,index-starts-at-one;
};
- sys_ck: sys_ck {
+ sys_ck: sys_ck@60 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&osc_ck>;
@@ -121,14 +121,14 @@
clock-frequency = <0x0>;
};
- dpll_ck: dpll_ck {
+ dpll_ck: dpll_ck@500 {
#clock-cells = <0>;
compatible = "ti,omap2-dpll-core-clock";
clocks = <&sys_ck>, <&sys_ck>;
reg = <0x0500>, <0x0540>;
};
- apll96_ck: apll96_ck {
+ apll96_ck: apll96_ck@500 {
#clock-cells = <0>;
compatible = "ti,omap2-apll-clock";
clocks = <&sys_ck>;
@@ -138,7 +138,7 @@
reg = <0x0500>, <0x0530>, <0x0520>;
};
- apll54_ck: apll54_ck {
+ apll54_ck: apll54_ck@500 {
#clock-cells = <0>;
compatible = "ti,omap2-apll-clock";
clocks = <&sys_ck>;
@@ -148,7 +148,7 @@
reg = <0x0500>, <0x0530>, <0x0520>;
};
- func_54m_ck: func_54m_ck {
+ func_54m_ck: func_54m_ck@540 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&apll54_ck>, <&alt_ck>;
@@ -176,7 +176,7 @@
clock-div = <2>;
};
- func_48m_ck: func_48m_ck {
+ func_48m_ck: func_48m_ck@540 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&apll96_d2_ck>, <&alt_ck>;
@@ -192,7 +192,7 @@
clock-div = <4>;
};
- sys_clkout_src_gate: sys_clkout_src_gate {
+ sys_clkout_src_gate: sys_clkout_src_gate@70 {
#clock-cells = <0>;
compatible = "ti,composite-no-wait-gate-clock";
clocks = <&core_ck>;
@@ -200,7 +200,7 @@
reg = <0x0070>;
};
- sys_clkout_src_mux: sys_clkout_src_mux {
+ sys_clkout_src_mux: sys_clkout_src_mux@70 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&core_ck>, <&sys_ck>, <&func_96m_ck>, <&func_54m_ck>;
@@ -213,7 +213,7 @@
clocks = <&sys_clkout_src_gate>, <&sys_clkout_src_mux>;
};
- sys_clkout: sys_clkout {
+ sys_clkout: sys_clkout@70 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&sys_clkout_src>;
@@ -223,7 +223,7 @@
ti,index-power-of-two;
};
- emul_ck: emul_ck {
+ emul_ck: emul_ck@78 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&func_54m_ck>;
@@ -231,7 +231,7 @@
reg = <0x0078>;
};
- mpu_ck: mpu_ck {
+ mpu_ck: mpu_ck@140 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&core_ck>;
@@ -240,7 +240,7 @@
ti,index-starts-at-one;
};
- dsp_gate_fck: dsp_gate_fck {
+ dsp_gate_fck: dsp_gate_fck@800 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&core_ck>;
@@ -248,7 +248,7 @@
reg = <0x0800>;
};
- dsp_div_fck: dsp_div_fck {
+ dsp_div_fck: dsp_div_fck@840 {
#clock-cells = <0>;
compatible = "ti,composite-divider-clock";
clocks = <&core_ck>;
@@ -261,7 +261,7 @@
clocks = <&dsp_gate_fck>, <&dsp_div_fck>;
};
- core_l3_ck: core_l3_ck {
+ core_l3_ck: core_l3_ck@240 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&core_ck>;
@@ -270,7 +270,7 @@
ti,index-starts-at-one;
};
- gfx_3d_gate_fck: gfx_3d_gate_fck {
+ gfx_3d_gate_fck: gfx_3d_gate_fck@300 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&core_l3_ck>;
@@ -278,7 +278,7 @@
reg = <0x0300>;
};
- gfx_3d_div_fck: gfx_3d_div_fck {
+ gfx_3d_div_fck: gfx_3d_div_fck@340 {
#clock-cells = <0>;
compatible = "ti,composite-divider-clock";
clocks = <&core_l3_ck>;
@@ -293,7 +293,7 @@
clocks = <&gfx_3d_gate_fck>, <&gfx_3d_div_fck>;
};
- gfx_2d_gate_fck: gfx_2d_gate_fck {
+ gfx_2d_gate_fck: gfx_2d_gate_fck@300 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&core_l3_ck>;
@@ -301,7 +301,7 @@
reg = <0x0300>;
};
- gfx_2d_div_fck: gfx_2d_div_fck {
+ gfx_2d_div_fck: gfx_2d_div_fck@340 {
#clock-cells = <0>;
compatible = "ti,composite-divider-clock";
clocks = <&core_l3_ck>;
@@ -316,7 +316,7 @@
clocks = <&gfx_2d_gate_fck>, <&gfx_2d_div_fck>;
};
- gfx_ick: gfx_ick {
+ gfx_ick: gfx_ick@310 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&core_l3_ck>;
@@ -324,7 +324,7 @@
reg = <0x0310>;
};
- l4_ck: l4_ck {
+ l4_ck: l4_ck@240 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&core_l3_ck>;
@@ -334,7 +334,7 @@
ti,index-starts-at-one;
};
- dss_ick: dss_ick {
+ dss_ick: dss_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-no-wait-interface-clock";
clocks = <&l4_ck>;
@@ -342,7 +342,7 @@
reg = <0x0210>;
};
- dss1_gate_fck: dss1_gate_fck {
+ dss1_gate_fck: dss1_gate_fck@200 {
#clock-cells = <0>;
compatible = "ti,composite-no-wait-gate-clock";
clocks = <&core_ck>;
@@ -428,7 +428,7 @@
clock-div = <16>;
};
- dss1_mux_fck: dss1_mux_fck {
+ dss1_mux_fck: dss1_mux_fck@240 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&sys_ck>, <&core_ck>, <&core_d2_ck>, <&core_d3_ck>, <&core_d4_ck>, <&core_d5_ck>, <&core_d6_ck>, <&core_d8_ck>, <&core_d9_ck>, <&core_d12_ck>, <&core_d16_ck>;
@@ -442,7 +442,7 @@
clocks = <&dss1_gate_fck>, <&dss1_mux_fck>;
};
- dss2_gate_fck: dss2_gate_fck {
+ dss2_gate_fck: dss2_gate_fck@200 {
#clock-cells = <0>;
compatible = "ti,composite-no-wait-gate-clock";
clocks = <&func_48m_ck>;
@@ -450,7 +450,7 @@
reg = <0x0200>;
};
- dss2_mux_fck: dss2_mux_fck {
+ dss2_mux_fck: dss2_mux_fck@240 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&sys_ck>, <&func_48m_ck>;
@@ -464,7 +464,7 @@
clocks = <&dss2_gate_fck>, <&dss2_mux_fck>;
};
- dss_54m_fck: dss_54m_fck {
+ dss_54m_fck: dss_54m_fck@200 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_54m_ck>;
@@ -472,7 +472,7 @@
reg = <0x0200>;
};
- ssi_ssr_sst_gate_fck: ssi_ssr_sst_gate_fck {
+ ssi_ssr_sst_gate_fck: ssi_ssr_sst_gate_fck@204 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&core_ck>;
@@ -480,7 +480,7 @@
reg = <0x0204>;
};
- ssi_ssr_sst_div_fck: ssi_ssr_sst_div_fck {
+ ssi_ssr_sst_div_fck: ssi_ssr_sst_div_fck@240 {
#clock-cells = <0>;
compatible = "ti,composite-divider-clock";
clocks = <&core_ck>;
@@ -494,7 +494,7 @@
clocks = <&ssi_ssr_sst_gate_fck>, <&ssi_ssr_sst_div_fck>;
};
- usb_l4_gate_ick: usb_l4_gate_ick {
+ usb_l4_gate_ick: usb_l4_gate_ick@214 {
#clock-cells = <0>;
compatible = "ti,composite-interface-clock";
clocks = <&core_l3_ck>;
@@ -502,7 +502,7 @@
reg = <0x0214>;
};
- usb_l4_div_ick: usb_l4_div_ick {
+ usb_l4_div_ick: usb_l4_div_ick@240 {
#clock-cells = <0>;
compatible = "ti,composite-divider-clock";
clocks = <&core_l3_ck>;
@@ -517,7 +517,7 @@
clocks = <&usb_l4_gate_ick>, <&usb_l4_div_ick>;
};
- ssi_l4_ick: ssi_l4_ick {
+ ssi_l4_ick: ssi_l4_ick@214 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -525,7 +525,7 @@
reg = <0x0214>;
};
- gpt1_ick: gpt1_ick {
+ gpt1_ick: gpt1_ick@410 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&sys_ck>;
@@ -533,7 +533,7 @@
reg = <0x0410>;
};
- gpt1_gate_fck: gpt1_gate_fck {
+ gpt1_gate_fck: gpt1_gate_fck@400 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&func_32k_ck>;
@@ -541,7 +541,7 @@
reg = <0x0400>;
};
- gpt1_mux_fck: gpt1_mux_fck {
+ gpt1_mux_fck: gpt1_mux_fck@440 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&func_32k_ck>, <&sys_ck>, <&alt_ck>;
@@ -554,7 +554,7 @@
clocks = <&gpt1_gate_fck>, <&gpt1_mux_fck>;
};
- gpt2_ick: gpt2_ick {
+ gpt2_ick: gpt2_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -562,7 +562,7 @@
reg = <0x0210>;
};
- gpt2_gate_fck: gpt2_gate_fck {
+ gpt2_gate_fck: gpt2_gate_fck@200 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&func_32k_ck>;
@@ -570,7 +570,7 @@
reg = <0x0200>;
};
- gpt2_mux_fck: gpt2_mux_fck {
+ gpt2_mux_fck: gpt2_mux_fck@244 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&func_32k_ck>, <&sys_ck>, <&alt_ck>;
@@ -584,7 +584,7 @@
clocks = <&gpt2_gate_fck>, <&gpt2_mux_fck>;
};
- gpt3_ick: gpt3_ick {
+ gpt3_ick: gpt3_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -592,7 +592,7 @@
reg = <0x0210>;
};
- gpt3_gate_fck: gpt3_gate_fck {
+ gpt3_gate_fck: gpt3_gate_fck@200 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&func_32k_ck>;
@@ -600,7 +600,7 @@
reg = <0x0200>;
};
- gpt3_mux_fck: gpt3_mux_fck {
+ gpt3_mux_fck: gpt3_mux_fck@244 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&func_32k_ck>, <&sys_ck>, <&alt_ck>;
@@ -614,7 +614,7 @@
clocks = <&gpt3_gate_fck>, <&gpt3_mux_fck>;
};
- gpt4_ick: gpt4_ick {
+ gpt4_ick: gpt4_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -622,7 +622,7 @@
reg = <0x0210>;
};
- gpt4_gate_fck: gpt4_gate_fck {
+ gpt4_gate_fck: gpt4_gate_fck@200 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&func_32k_ck>;
@@ -630,7 +630,7 @@
reg = <0x0200>;
};
- gpt4_mux_fck: gpt4_mux_fck {
+ gpt4_mux_fck: gpt4_mux_fck@244 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&func_32k_ck>, <&sys_ck>, <&alt_ck>;
@@ -644,7 +644,7 @@
clocks = <&gpt4_gate_fck>, <&gpt4_mux_fck>;
};
- gpt5_ick: gpt5_ick {
+ gpt5_ick: gpt5_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -652,7 +652,7 @@
reg = <0x0210>;
};
- gpt5_gate_fck: gpt5_gate_fck {
+ gpt5_gate_fck: gpt5_gate_fck@200 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&func_32k_ck>;
@@ -660,7 +660,7 @@
reg = <0x0200>;
};
- gpt5_mux_fck: gpt5_mux_fck {
+ gpt5_mux_fck: gpt5_mux_fck@244 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&func_32k_ck>, <&sys_ck>, <&alt_ck>;
@@ -674,7 +674,7 @@
clocks = <&gpt5_gate_fck>, <&gpt5_mux_fck>;
};
- gpt6_ick: gpt6_ick {
+ gpt6_ick: gpt6_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -682,7 +682,7 @@
reg = <0x0210>;
};
- gpt6_gate_fck: gpt6_gate_fck {
+ gpt6_gate_fck: gpt6_gate_fck@200 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&func_32k_ck>;
@@ -690,7 +690,7 @@
reg = <0x0200>;
};
- gpt6_mux_fck: gpt6_mux_fck {
+ gpt6_mux_fck: gpt6_mux_fck@244 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&func_32k_ck>, <&sys_ck>, <&alt_ck>;
@@ -704,7 +704,7 @@
clocks = <&gpt6_gate_fck>, <&gpt6_mux_fck>;
};
- gpt7_ick: gpt7_ick {
+ gpt7_ick: gpt7_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -712,7 +712,7 @@
reg = <0x0210>;
};
- gpt7_gate_fck: gpt7_gate_fck {
+ gpt7_gate_fck: gpt7_gate_fck@200 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&func_32k_ck>;
@@ -720,7 +720,7 @@
reg = <0x0200>;
};
- gpt7_mux_fck: gpt7_mux_fck {
+ gpt7_mux_fck: gpt7_mux_fck@244 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&func_32k_ck>, <&sys_ck>, <&alt_ck>;
@@ -734,7 +734,7 @@
clocks = <&gpt7_gate_fck>, <&gpt7_mux_fck>;
};
- gpt8_ick: gpt8_ick {
+ gpt8_ick: gpt8_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -742,7 +742,7 @@
reg = <0x0210>;
};
- gpt8_gate_fck: gpt8_gate_fck {
+ gpt8_gate_fck: gpt8_gate_fck@200 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&func_32k_ck>;
@@ -750,7 +750,7 @@
reg = <0x0200>;
};
- gpt8_mux_fck: gpt8_mux_fck {
+ gpt8_mux_fck: gpt8_mux_fck@244 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&func_32k_ck>, <&sys_ck>, <&alt_ck>;
@@ -764,7 +764,7 @@
clocks = <&gpt8_gate_fck>, <&gpt8_mux_fck>;
};
- gpt9_ick: gpt9_ick {
+ gpt9_ick: gpt9_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -772,7 +772,7 @@
reg = <0x0210>;
};
- gpt9_gate_fck: gpt9_gate_fck {
+ gpt9_gate_fck: gpt9_gate_fck@200 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&func_32k_ck>;
@@ -780,7 +780,7 @@
reg = <0x0200>;
};
- gpt9_mux_fck: gpt9_mux_fck {
+ gpt9_mux_fck: gpt9_mux_fck@244 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&func_32k_ck>, <&sys_ck>, <&alt_ck>;
@@ -794,7 +794,7 @@
clocks = <&gpt9_gate_fck>, <&gpt9_mux_fck>;
};
- gpt10_ick: gpt10_ick {
+ gpt10_ick: gpt10_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -802,7 +802,7 @@
reg = <0x0210>;
};
- gpt10_gate_fck: gpt10_gate_fck {
+ gpt10_gate_fck: gpt10_gate_fck@200 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&func_32k_ck>;
@@ -810,7 +810,7 @@
reg = <0x0200>;
};
- gpt10_mux_fck: gpt10_mux_fck {
+ gpt10_mux_fck: gpt10_mux_fck@244 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&func_32k_ck>, <&sys_ck>, <&alt_ck>;
@@ -824,7 +824,7 @@
clocks = <&gpt10_gate_fck>, <&gpt10_mux_fck>;
};
- gpt11_ick: gpt11_ick {
+ gpt11_ick: gpt11_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -832,7 +832,7 @@
reg = <0x0210>;
};
- gpt11_gate_fck: gpt11_gate_fck {
+ gpt11_gate_fck: gpt11_gate_fck@200 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&func_32k_ck>;
@@ -840,7 +840,7 @@
reg = <0x0200>;
};
- gpt11_mux_fck: gpt11_mux_fck {
+ gpt11_mux_fck: gpt11_mux_fck@244 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&func_32k_ck>, <&sys_ck>, <&alt_ck>;
@@ -854,7 +854,7 @@
clocks = <&gpt11_gate_fck>, <&gpt11_mux_fck>;
};
- gpt12_ick: gpt12_ick {
+ gpt12_ick: gpt12_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -862,7 +862,7 @@
reg = <0x0210>;
};
- gpt12_gate_fck: gpt12_gate_fck {
+ gpt12_gate_fck: gpt12_gate_fck@200 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&func_32k_ck>;
@@ -870,7 +870,7 @@
reg = <0x0200>;
};
- gpt12_mux_fck: gpt12_mux_fck {
+ gpt12_mux_fck: gpt12_mux_fck@244 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&func_32k_ck>, <&sys_ck>, <&alt_ck>;
@@ -884,7 +884,7 @@
clocks = <&gpt12_gate_fck>, <&gpt12_mux_fck>;
};
- mcbsp1_ick: mcbsp1_ick {
+ mcbsp1_ick: mcbsp1_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -892,7 +892,7 @@
reg = <0x0210>;
};
- mcbsp1_gate_fck: mcbsp1_gate_fck {
+ mcbsp1_gate_fck: mcbsp1_gate_fck@200 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&mcbsp_clks>;
@@ -900,7 +900,7 @@
reg = <0x0200>;
};
- mcbsp2_ick: mcbsp2_ick {
+ mcbsp2_ick: mcbsp2_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -908,7 +908,7 @@
reg = <0x0210>;
};
- mcbsp2_gate_fck: mcbsp2_gate_fck {
+ mcbsp2_gate_fck: mcbsp2_gate_fck@200 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&mcbsp_clks>;
@@ -916,7 +916,7 @@
reg = <0x0200>;
};
- mcspi1_ick: mcspi1_ick {
+ mcspi1_ick: mcspi1_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -924,7 +924,7 @@
reg = <0x0210>;
};
- mcspi1_fck: mcspi1_fck {
+ mcspi1_fck: mcspi1_fck@200 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_48m_ck>;
@@ -932,7 +932,7 @@
reg = <0x0200>;
};
- mcspi2_ick: mcspi2_ick {
+ mcspi2_ick: mcspi2_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -940,7 +940,7 @@
reg = <0x0210>;
};
- mcspi2_fck: mcspi2_fck {
+ mcspi2_fck: mcspi2_fck@200 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_48m_ck>;
@@ -948,7 +948,7 @@
reg = <0x0200>;
};
- uart1_ick: uart1_ick {
+ uart1_ick: uart1_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -956,7 +956,7 @@
reg = <0x0210>;
};
- uart1_fck: uart1_fck {
+ uart1_fck: uart1_fck@200 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_48m_ck>;
@@ -964,7 +964,7 @@
reg = <0x0200>;
};
- uart2_ick: uart2_ick {
+ uart2_ick: uart2_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -972,7 +972,7 @@
reg = <0x0210>;
};
- uart2_fck: uart2_fck {
+ uart2_fck: uart2_fck@200 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_48m_ck>;
@@ -980,7 +980,7 @@
reg = <0x0200>;
};
- uart3_ick: uart3_ick {
+ uart3_ick: uart3_ick@214 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -988,7 +988,7 @@
reg = <0x0214>;
};
- uart3_fck: uart3_fck {
+ uart3_fck: uart3_fck@204 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_48m_ck>;
@@ -996,7 +996,7 @@
reg = <0x0204>;
};
- gpios_ick: gpios_ick {
+ gpios_ick: gpios_ick@410 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&sys_ck>;
@@ -1004,7 +1004,7 @@
reg = <0x0410>;
};
- gpios_fck: gpios_fck {
+ gpios_fck: gpios_fck@400 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_32k_ck>;
@@ -1012,7 +1012,7 @@
reg = <0x0400>;
};
- mpu_wdt_ick: mpu_wdt_ick {
+ mpu_wdt_ick: mpu_wdt_ick@410 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&sys_ck>;
@@ -1020,7 +1020,7 @@
reg = <0x0410>;
};
- mpu_wdt_fck: mpu_wdt_fck {
+ mpu_wdt_fck: mpu_wdt_fck@400 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_32k_ck>;
@@ -1028,7 +1028,7 @@
reg = <0x0400>;
};
- sync_32k_ick: sync_32k_ick {
+ sync_32k_ick: sync_32k_ick@410 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&sys_ck>;
@@ -1036,7 +1036,7 @@
reg = <0x0410>;
};
- wdt1_ick: wdt1_ick {
+ wdt1_ick: wdt1_ick@410 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&sys_ck>;
@@ -1044,7 +1044,7 @@
reg = <0x0410>;
};
- omapctrl_ick: omapctrl_ick {
+ omapctrl_ick: omapctrl_ick@410 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&sys_ck>;
@@ -1052,7 +1052,7 @@
reg = <0x0410>;
};
- cam_fck: cam_fck {
+ cam_fck: cam_fck@200 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&func_96m_ck>;
@@ -1060,7 +1060,7 @@
reg = <0x0200>;
};
- cam_ick: cam_ick {
+ cam_ick: cam_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-no-wait-interface-clock";
clocks = <&l4_ck>;
@@ -1068,7 +1068,7 @@
reg = <0x0210>;
};
- mailboxes_ick: mailboxes_ick {
+ mailboxes_ick: mailboxes_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -1076,7 +1076,7 @@
reg = <0x0210>;
};
- wdt4_ick: wdt4_ick {
+ wdt4_ick: wdt4_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -1084,7 +1084,7 @@
reg = <0x0210>;
};
- wdt4_fck: wdt4_fck {
+ wdt4_fck: wdt4_fck@200 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_32k_ck>;
@@ -1092,7 +1092,7 @@
reg = <0x0200>;
};
- mspro_ick: mspro_ick {
+ mspro_ick: mspro_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -1100,7 +1100,7 @@
reg = <0x0210>;
};
- mspro_fck: mspro_fck {
+ mspro_fck: mspro_fck@200 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_96m_ck>;
@@ -1108,7 +1108,7 @@
reg = <0x0200>;
};
- fac_ick: fac_ick {
+ fac_ick: fac_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -1116,7 +1116,7 @@
reg = <0x0210>;
};
- fac_fck: fac_fck {
+ fac_fck: fac_fck@200 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_12m_ck>;
@@ -1124,7 +1124,7 @@
reg = <0x0200>;
};
- hdq_ick: hdq_ick {
+ hdq_ick: hdq_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -1132,7 +1132,7 @@
reg = <0x0210>;
};
- hdq_fck: hdq_fck {
+ hdq_fck: hdq_fck@200 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_12m_ck>;
@@ -1140,7 +1140,7 @@
reg = <0x0200>;
};
- i2c1_ick: i2c1_ick {
+ i2c1_ick: i2c1_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -1148,7 +1148,7 @@
reg = <0x0210>;
};
- i2c2_ick: i2c2_ick {
+ i2c2_ick: i2c2_ick@210 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -1156,7 +1156,7 @@
reg = <0x0210>;
};
- gpmc_fck: gpmc_fck {
+ gpmc_fck: gpmc_fck@238 {
#clock-cells = <0>;
compatible = "ti,fixed-factor-clock";
clocks = <&core_l3_ck>;
@@ -1174,7 +1174,7 @@
clock-div = <1>;
};
- sdma_ick: sdma_ick {
+ sdma_ick: sdma_ick@238 {
#clock-cells = <0>;
compatible = "ti,fixed-factor-clock";
clocks = <&core_l3_ck>;
@@ -1184,7 +1184,7 @@
ti,clock-mult = <1>;
};
- sdrc_ick: sdrc_ick {
+ sdrc_ick: sdrc_ick@238 {
#clock-cells = <0>;
compatible = "ti,fixed-factor-clock";
clocks = <&core_l3_ck>;
@@ -1194,7 +1194,7 @@
ti,clock-mult = <1>;
};
- des_ick: des_ick {
+ des_ick: des_ick@21c {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -1202,7 +1202,7 @@
reg = <0x021c>;
};
- sha_ick: sha_ick {
+ sha_ick: sha_ick@21c {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -1210,7 +1210,7 @@
reg = <0x021c>;
};
- rng_ick: rng_ick {
+ rng_ick: rng_ick@21c {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -1218,7 +1218,7 @@
reg = <0x021c>;
};
- aes_ick: aes_ick {
+ aes_ick: aes_ick@21c {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -1226,7 +1226,7 @@
reg = <0x021c>;
};
- pka_ick: pka_ick {
+ pka_ick: pka_ick@21c {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l4_ck>;
@@ -1234,7 +1234,7 @@
reg = <0x021c>;
};
- usb_fck: usb_fck {
+ usb_fck: usb_fck@204 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&func_48m_ck>;
diff --git a/arch/arm/boot/dts/omap3-beagle.dts b/arch/arm/boot/dts/omap3-beagle.dts
index 4602866792be..a4deff0e2d52 100644
--- a/arch/arm/boot/dts/omap3-beagle.dts
+++ b/arch/arm/boot/dts/omap3-beagle.dts
@@ -390,6 +390,7 @@
interrupts = <0 IRQ_TYPE_NONE>, /* fifoevent */
<1 IRQ_TYPE_NONE>; /* termcount */
ti,nand-ecc-opt = "ham1";
+ rb-gpios = <&gpmc 0 GPIO_ACTIVE_HIGH>; /* gpmc_wait0 */
nand-bus-width = <16>;
#address-cells = <1>;
#size-cells = <1>;
diff --git a/arch/arm/boot/dts/omap3-n9.dts b/arch/arm/boot/dts/omap3-n9.dts
index 5c67429a4da7..b9e58c536afd 100644
--- a/arch/arm/boot/dts/omap3-n9.dts
+++ b/arch/arm/boot/dts/omap3-n9.dts
@@ -57,3 +57,17 @@
&modem {
compatible = "nokia,n9-modem";
};
+
+&lis302 {
+ st,axis-x = <1>; /* LIS3_DEV_X */
+ st,axis-y = <(-2)>; /* LIS3_INV_DEV_Y */
+ st,axis-z = <(-3)>; /* LIS3_INV_DEV_Z */
+
+ st,min-limit-x = <(-46)>;
+ st,min-limit-y = <3>;
+ st,min-limit-z = <3>;
+
+ st,max-limit-x = <(-3)>;
+ st,max-limit-y = <46>;
+ st,max-limit-z = <46>;
+};
diff --git a/arch/arm/boot/dts/omap3-n950-n9.dtsi b/arch/arm/boot/dts/omap3-n950-n9.dtsi
index 858a25048102..a00ca761675d 100644
--- a/arch/arm/boot/dts/omap3-n950-n9.dtsi
+++ b/arch/arm/boot/dts/omap3-n950-n9.dtsi
@@ -14,6 +14,13 @@
cpus {
cpu@0 {
cpu0-supply = <&vcc>;
+ operating-points = <
+ /* kHz uV */
+ 300000 1012500
+ 600000 1200000
+ 800000 1325000
+ 1000000 1375000
+ >;
};
};
@@ -39,9 +46,34 @@
enable-active-high;
regulator-boot-off;
};
+
+ leds {
+ compatible = "gpio-leds";
+
+ heartbeat {
+ label = "debug::sleep";
+ gpios = <&gpio3 28 GPIO_ACTIVE_HIGH>; /* gpio92 */
+ linux,default-trigger = "default-on";
+ pinctrl-names = "default";
+ pinctrl-0 = <&debug_leds>;
+ };
+ };
};
&omap3_pmx_core {
+ accelerator_pins: pinmux_accelerator_pins {
+ pinctrl-single,pins = <
+ OMAP3_CORE1_IOPAD(0x21da, PIN_INPUT | MUX_MODE4) /* mcspi2_somi.gpio_180 -> LIS302 INT1 */
+ OMAP3_CORE1_IOPAD(0x21dc, PIN_INPUT | MUX_MODE4) /* mcspi2_cs0.gpio_181 -> LIS302 INT2 */
+ >;
+ };
+
+ debug_leds: pinmux_debug_led_pins {
+ pinctrl-single,pins = <
+ OMAP3_CORE1_IOPAD(0x2108, PIN_OUTPUT | MUX_MODE4) /* dss_data22.gpio_92 */
+ >;
+ };
+
mmc2_pins: pinmux_mmc2_pins {
pinctrl-single,pins = <
OMAP3_CORE1_IOPAD(0x2158, PIN_INPUT_PULLUP | MUX_MODE0) /* sdmmc2_clk */
@@ -129,6 +161,30 @@
ti,pulldowns = <0x008106>; /* BIT(1) | BIT(2) | BIT(8) | BIT(15) */
};
+&vdac {
+ regulator-name = "vdac";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+};
+
+&vpll1 {
+ regulator-name = "vpll1";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+};
+
+&vpll2 {
+ regulator-name = "vpll2";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+};
+
+&vaux1 {
+ regulator-name = "vaux1";
+ regulator-min-microvolt = <2800000>;
+ regulator-max-microvolt = <2800000>;
+};
+
/* CSI-2 receiver */
&vaux2 {
regulator-name = "vaux2";
@@ -143,12 +199,107 @@
regulator-max-microvolt = <2800000>;
};
+&vaux4 {
+ regulator-name = "vaux4";
+ regulator-min-microvolt = <2800000>;
+ regulator-max-microvolt = <2800000>;
+};
+
+&vmmc1 {
+ regulator-name = "vmmc1";
+ regulator-min-microvolt = <1850000>;
+ regulator-max-microvolt = <3150000>;
+};
+
+&vmmc2 {
+ regulator-name = "vmmc2";
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3000000>;
+};
+
+&vintana1 {
+ regulator-name = "vintana1";
+ regulator-min-microvolt = <1500000>;
+ regulator-max-microvolt = <1500000>;
+};
+
+&vintana2 {
+ regulator-name = "vintana2";
+ regulator-min-microvolt = <2750000>;
+ regulator-max-microvolt = <2750000>;
+};
+
+&vintdig {
+ regulator-name = "vintdig";
+ regulator-min-microvolt = <1500000>;
+ regulator-max-microvolt = <1500000>;
+};
+
+&vsim {
+ regulator-name = "vsim";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+};
+
+&vio {
+ regulator-name = "vio";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+};
+
&i2c2 {
clock-frequency = <400000>;
};
&i2c3 {
clock-frequency = <400000>;
+
+ lis302: lis302@1d {
+ compatible = "st,lis3lv02d";
+ reg = <0x1d>;
+
+ Vdd-supply = <&vaux1>;
+ Vdd_IO-supply = <&vio>;
+
+ pinctrl-names = "default";
+ pinctrl-0 = <&accelerator_pins>;
+
+ interrupts-extended = <&gpio6 20 IRQ_TYPE_EDGE_FALLING>, <&gpio6 21 IRQ_TYPE_EDGE_FALLING>; /* 180, 181 */
+
+ /* click flags */
+ st,click-single-x;
+ st,click-single-y;
+ st,click-single-z;
+
+ /* Limits are 0.5g * value */
+ st,click-threshold-x = <8>;
+ st,click-threshold-y = <8>;
+ st,click-threshold-z = <10>;
+
+ /* Click must be longer than time limit */
+ st,click-time-limit = <9>;
+
+ /* Kind of debounce filter */
+ st,click-latency = <50>;
+
+ st,wakeup-x-hi;
+ st,wakeup-y-hi;
+ st,wakeup-threshold = <(800/18)>; /* millig-value / 18 to get HW values */
+
+ st,wakeup2-z-hi;
+ st,wakeup2-threshold = <(1000/18)>; /* millig-value / 18 to get HW values */
+
+ st,highpass-cutoff-hz = <2>;
+
+ /* Interrupt line 1 for thresholds */
+ st,irq1-ff-wu-1;
+ st,irq1-ff-wu-2;
+ /* Interrupt line 2 for click detection */
+ st,irq2-click;
+
+ st,wu-duration-1 = <8>;
+ st,wu-duration-2 = <8>;
+ };
};
&mmc1 {
diff --git a/arch/arm/boot/dts/omap3-n950.dts b/arch/arm/boot/dts/omap3-n950.dts
index 7f219a9dc5be..646601a3ebd8 100644
--- a/arch/arm/boot/dts/omap3-n950.dts
+++ b/arch/arm/boot/dts/omap3-n950.dts
@@ -11,10 +11,33 @@
/dts-v1/;
#include "omap3-n950-n9.dtsi"
+#include <dt-bindings/input/input.h>
/ {
model = "Nokia N950";
compatible = "nokia,omap3-n950", "ti,omap36xx", "ti,omap3";
+
+ keys {
+ compatible = "gpio-keys";
+
+ keypad_slide {
+ label = "Keypad Slide";
+ gpios = <&gpio4 13 GPIO_ACTIVE_LOW>; /* 109 */
+ linux,input-type = <EV_SW>;
+ linux,code = <SW_KEYPAD_SLIDE>;
+ wakeup-source;
+ pinctrl-names = "default";
+ pinctrl-0 = <&keypad_slide_pins>;
+ };
+ };
+};
+
+&omap3_pmx_core {
+ keypad_slide_pins: pinmux_debug_led_pins {
+ pinctrl-single,pins = <
+ OMAP3_CORE1_IOPAD(0x212a, PIN_INPUT | MUX_MODE4) /* cam_d10.gpio_109 */
+ >;
+ };
};
&omap3_pmx_core {
@@ -86,3 +109,79 @@
&modem {
compatible = "nokia,n950-modem";
};
+
+&twl {
+ twl_audio: audio {
+ compatible = "ti,twl4030-audio";
+ ti,enable-vibra = <1>;
+ };
+};
+
+&twl_keypad {
+ linux,keymap = < MATRIX_KEY(0x00, 0x00, KEY_BACKSLASH)
+ MATRIX_KEY(0x01, 0x00, KEY_LEFTSHIFT)
+ MATRIX_KEY(0x02, 0x00, KEY_COMPOSE)
+ MATRIX_KEY(0x03, 0x00, KEY_LEFTMETA)
+ MATRIX_KEY(0x04, 0x00, KEY_RIGHTCTRL)
+ MATRIX_KEY(0x05, 0x00, KEY_BACKSPACE)
+ MATRIX_KEY(0x06, 0x00, KEY_VOLUMEDOWN)
+ MATRIX_KEY(0x07, 0x00, KEY_VOLUMEUP)
+
+ MATRIX_KEY(0x03, 0x01, KEY_Z)
+ MATRIX_KEY(0x04, 0x01, KEY_A)
+ MATRIX_KEY(0x05, 0x01, KEY_Q)
+ MATRIX_KEY(0x06, 0x01, KEY_W)
+ MATRIX_KEY(0x07, 0x01, KEY_E)
+
+ MATRIX_KEY(0x03, 0x02, KEY_X)
+ MATRIX_KEY(0x04, 0x02, KEY_S)
+ MATRIX_KEY(0x05, 0x02, KEY_D)
+ MATRIX_KEY(0x06, 0x02, KEY_C)
+ MATRIX_KEY(0x07, 0x02, KEY_V)
+
+ MATRIX_KEY(0x03, 0x03, KEY_O)
+ MATRIX_KEY(0x04, 0x03, KEY_I)
+ MATRIX_KEY(0x05, 0x03, KEY_U)
+ MATRIX_KEY(0x06, 0x03, KEY_L)
+ MATRIX_KEY(0x07, 0x03, KEY_APOSTROPHE)
+
+ MATRIX_KEY(0x03, 0x04, KEY_Y)
+ MATRIX_KEY(0x04, 0x04, KEY_K)
+ MATRIX_KEY(0x05, 0x04, KEY_J)
+ MATRIX_KEY(0x06, 0x04, KEY_H)
+ MATRIX_KEY(0x07, 0x04, KEY_G)
+
+ MATRIX_KEY(0x03, 0x05, KEY_B)
+ MATRIX_KEY(0x04, 0x05, KEY_COMMA)
+ MATRIX_KEY(0x05, 0x05, KEY_M)
+ MATRIX_KEY(0x06, 0x05, KEY_N)
+ MATRIX_KEY(0x07, 0x05, KEY_DOT)
+
+ MATRIX_KEY(0x00, 0x06, KEY_SPACE)
+ MATRIX_KEY(0x03, 0x06, KEY_T)
+ MATRIX_KEY(0x04, 0x06, KEY_UP)
+ MATRIX_KEY(0x05, 0x06, KEY_LEFT)
+ MATRIX_KEY(0x06, 0x06, KEY_RIGHT)
+ MATRIX_KEY(0x07, 0x06, KEY_DOWN)
+
+ MATRIX_KEY(0x03, 0x07, KEY_P)
+ MATRIX_KEY(0x04, 0x07, KEY_ENTER)
+ MATRIX_KEY(0x05, 0x07, KEY_SLASH)
+ MATRIX_KEY(0x06, 0x07, KEY_F)
+ MATRIX_KEY(0x07, 0x07, KEY_R)
+ >;
+};
+
+&lis302 {
+ st,axis-x = <(-2)>; /* LIS3_INV_DEV_Y */
+ st,axis-y = <(-1)>; /* LIS3_INV_DEV_X */
+ st,axis-z = <(-3)>; /* LIS3_INV_DEV_Z */
+
+ st,min-limit-x = <(-32)>;
+ st,min-limit-y = <3>;
+ st,min-limit-z = <3>;
+
+ st,max-limit-x = <(-3)>;
+ st,max-limit-y = <32>;
+ st,max-limit-z = <32>;
+};
diff --git a/arch/arm/boot/dts/omap3.dtsi b/arch/arm/boot/dts/omap3.dtsi
index b41d07e8e765..9fbda38528dc 100644
--- a/arch/arm/boot/dts/omap3.dtsi
+++ b/arch/arm/boot/dts/omap3.dtsi
@@ -43,7 +43,7 @@
};
};
- pmu {
+ pmu@54000000 {
compatible = "arm,cortex-a8-pmu";
reg = <0x54000000 0x800000>;
interrupts = <3>;
@@ -119,7 +119,7 @@
#size-cells = <1>;
ranges = <0 0x270 0x330>;
- pbias_regulator: pbias_regulator {
+ pbias_regulator: pbias_regulator@2b0 {
compatible = "ti,pbias-omap3", "ti,pbias-omap";
reg = <0x2b0 0x4>;
syscon = <&scm_conf>;
@@ -725,6 +725,8 @@
#size-cells = <1>;
interrupt-controller;
#interrupt-cells = <2>;
+ gpio-controller;
+ #gpio-cells = <2>;
};
usb_otg_hs: usb_otg_hs@480ab000 {
diff --git a/arch/arm/boot/dts/omap3430es1-clocks.dtsi b/arch/arm/boot/dts/omap3430es1-clocks.dtsi
index 4c22f3a7f813..86de819a0dcf 100644
--- a/arch/arm/boot/dts/omap3430es1-clocks.dtsi
+++ b/arch/arm/boot/dts/omap3430es1-clocks.dtsi
@@ -8,7 +8,7 @@
* published by the Free Software Foundation.
*/
&cm_clocks {
- gfx_l3_ck: gfx_l3_ck {
+ gfx_l3_ck: gfx_l3_ck@b10 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&l3_ick>;
@@ -16,7 +16,7 @@
ti,bit-shift = <0>;
};
- gfx_l3_fck: gfx_l3_fck {
+ gfx_l3_fck: gfx_l3_fck@b40 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&l3_ick>;
@@ -33,7 +33,7 @@
clock-div = <1>;
};
- gfx_cg1_ck: gfx_cg1_ck {
+ gfx_cg1_ck: gfx_cg1_ck@b00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&gfx_l3_fck>;
@@ -41,7 +41,7 @@
ti,bit-shift = <1>;
};
- gfx_cg2_ck: gfx_cg2_ck {
+ gfx_cg2_ck: gfx_cg2_ck@b00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&gfx_l3_fck>;
@@ -49,7 +49,7 @@
ti,bit-shift = <2>;
};
- d2d_26m_fck: d2d_26m_fck {
+ d2d_26m_fck: d2d_26m_fck@a00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&sys_ck>;
@@ -57,7 +57,7 @@
ti,bit-shift = <3>;
};
- fshostusb_fck: fshostusb_fck {
+ fshostusb_fck: fshostusb_fck@a00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&core_48m_fck>;
@@ -65,7 +65,7 @@
ti,bit-shift = <5>;
};
- ssi_ssr_gate_fck_3430es1: ssi_ssr_gate_fck_3430es1 {
+ ssi_ssr_gate_fck_3430es1: ssi_ssr_gate_fck_3430es1@a00 {
#clock-cells = <0>;
compatible = "ti,composite-no-wait-gate-clock";
clocks = <&corex2_fck>;
@@ -73,7 +73,7 @@
reg = <0x0a00>;
};
- ssi_ssr_div_fck_3430es1: ssi_ssr_div_fck_3430es1 {
+ ssi_ssr_div_fck_3430es1: ssi_ssr_div_fck_3430es1@a40 {
#clock-cells = <0>;
compatible = "ti,composite-divider-clock";
clocks = <&corex2_fck>;
@@ -96,7 +96,7 @@
clock-div = <2>;
};
- hsotgusb_ick_3430es1: hsotgusb_ick_3430es1 {
+ hsotgusb_ick_3430es1: hsotgusb_ick_3430es1@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-no-wait-interface-clock";
clocks = <&core_l3_ick>;
@@ -104,7 +104,7 @@
ti,bit-shift = <4>;
};
- fac_ick: fac_ick {
+ fac_ick: fac_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -120,7 +120,7 @@
clock-div = <1>;
};
- ssi_ick: ssi_ick_3430es1 {
+ ssi_ick: ssi_ick_3430es1@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-no-wait-interface-clock";
clocks = <&ssi_l4_ick>;
@@ -128,7 +128,7 @@
ti,bit-shift = <0>;
};
- usb_l4_gate_ick: usb_l4_gate_ick {
+ usb_l4_gate_ick: usb_l4_gate_ick@a10 {
#clock-cells = <0>;
compatible = "ti,composite-interface-clock";
clocks = <&l4_ick>;
@@ -136,7 +136,7 @@
reg = <0x0a10>;
};
- usb_l4_div_ick: usb_l4_div_ick {
+ usb_l4_div_ick: usb_l4_div_ick@a40 {
#clock-cells = <0>;
compatible = "ti,composite-divider-clock";
clocks = <&l4_ick>;
@@ -152,7 +152,7 @@
clocks = <&usb_l4_gate_ick>, <&usb_l4_div_ick>;
};
- dss1_alwon_fck: dss1_alwon_fck_3430es1 {
+ dss1_alwon_fck: dss1_alwon_fck_3430es1@e00 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll4_m4x2_ck>;
@@ -161,7 +161,7 @@
ti,set-rate-parent;
};
- dss_ick: dss_ick_3430es1 {
+ dss_ick: dss_ick_3430es1@e10 {
#clock-cells = <0>;
compatible = "ti,omap3-no-wait-interface-clock";
clocks = <&l4_ick>;
diff --git a/arch/arm/boot/dts/omap34xx-omap36xx-clocks.dtsi b/arch/arm/boot/dts/omap34xx-omap36xx-clocks.dtsi
index b02017b7630e..858aa0796ec8 100644
--- a/arch/arm/boot/dts/omap34xx-omap36xx-clocks.dtsi
+++ b/arch/arm/boot/dts/omap34xx-omap36xx-clocks.dtsi
@@ -16,7 +16,7 @@
clock-div = <1>;
};
- aes1_ick: aes1_ick {
+ aes1_ick: aes1_ick@a14 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&security_l4_ick2>;
@@ -24,7 +24,7 @@
reg = <0x0a14>;
};
- rng_ick: rng_ick {
+ rng_ick: rng_ick@a14 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&security_l4_ick2>;
@@ -32,7 +32,7 @@
ti,bit-shift = <2>;
};
- sha11_ick: sha11_ick {
+ sha11_ick: sha11_ick@a14 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&security_l4_ick2>;
@@ -40,7 +40,7 @@
ti,bit-shift = <1>;
};
- des1_ick: des1_ick {
+ des1_ick: des1_ick@a14 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&security_l4_ick2>;
@@ -48,7 +48,7 @@
ti,bit-shift = <0>;
};
- cam_mclk: cam_mclk {
+ cam_mclk: cam_mclk@f00 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll4_m5x2_ck>;
@@ -57,7 +57,7 @@
ti,set-rate-parent;
};
- cam_ick: cam_ick {
+ cam_ick: cam_ick@f10 {
#clock-cells = <0>;
compatible = "ti,omap3-no-wait-interface-clock";
clocks = <&l4_ick>;
@@ -65,7 +65,7 @@
ti,bit-shift = <0>;
};
- csi2_96m_fck: csi2_96m_fck {
+ csi2_96m_fck: csi2_96m_fck@f00 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&core_96m_fck>;
@@ -81,7 +81,7 @@
clock-div = <1>;
};
- pka_ick: pka_ick {
+ pka_ick: pka_ick@a14 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&security_l3_ick>;
@@ -89,7 +89,7 @@
ti,bit-shift = <4>;
};
- icr_ick: icr_ick {
+ icr_ick: icr_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -97,7 +97,7 @@
ti,bit-shift = <29>;
};
- des2_ick: des2_ick {
+ des2_ick: des2_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -105,7 +105,7 @@
ti,bit-shift = <26>;
};
- mspro_ick: mspro_ick {
+ mspro_ick: mspro_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -113,7 +113,7 @@
ti,bit-shift = <23>;
};
- mailboxes_ick: mailboxes_ick {
+ mailboxes_ick: mailboxes_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -129,7 +129,7 @@
clock-div = <1>;
};
- sr1_fck: sr1_fck {
+ sr1_fck: sr1_fck@c00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&sys_ck>;
@@ -137,7 +137,7 @@
ti,bit-shift = <6>;
};
- sr2_fck: sr2_fck {
+ sr2_fck: sr2_fck@c00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&sys_ck>;
@@ -153,7 +153,7 @@
clock-div = <1>;
};
- dpll2_fck: dpll2_fck {
+ dpll2_fck: dpll2_fck@40 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&core_ck>;
@@ -163,7 +163,7 @@
ti,index-starts-at-one;
};
- dpll2_ck: dpll2_ck {
+ dpll2_ck: dpll2_ck@4 {
#clock-cells = <0>;
compatible = "ti,omap3-dpll-clock";
clocks = <&sys_ck>, <&dpll2_fck>;
@@ -173,7 +173,7 @@
ti,low-power-bypass;
};
- dpll2_m2_ck: dpll2_m2_ck {
+ dpll2_m2_ck: dpll2_m2_ck@44 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll2_ck>;
@@ -182,7 +182,7 @@
ti,index-starts-at-one;
};
- iva2_ck: iva2_ck {
+ iva2_ck: iva2_ck@0 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&dpll2_m2_ck>;
@@ -190,7 +190,7 @@
ti,bit-shift = <0>;
};
- modem_fck: modem_fck {
+ modem_fck: modem_fck@a00 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&sys_ck>;
@@ -198,7 +198,7 @@
ti,bit-shift = <31>;
};
- sad2d_ick: sad2d_ick {
+ sad2d_ick: sad2d_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l3_ick>;
@@ -206,7 +206,7 @@
ti,bit-shift = <3>;
};
- mad2d_ick: mad2d_ick {
+ mad2d_ick: mad2d_ick@a18 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&l3_ick>;
@@ -214,7 +214,7 @@
ti,bit-shift = <3>;
};
- mspro_fck: mspro_fck {
+ mspro_fck: mspro_fck@a00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&core_96m_fck>;
diff --git a/arch/arm/boot/dts/omap34xx.dtsi b/arch/arm/boot/dts/omap34xx.dtsi
index 96f8ce7bd2af..e44656258225 100644
--- a/arch/arm/boot/dts/omap34xx.dtsi
+++ b/arch/arm/boot/dts/omap34xx.dtsi
@@ -55,7 +55,7 @@
};
};
- bandgap {
+ bandgap@48002524 {
reg = <0x48002524 0x4>;
compatible = "ti,omap34xx-bandgap";
#thermal-sensor-cells = <0>;
diff --git a/arch/arm/boot/dts/omap36xx-am35xx-omap3430es2plus-clocks.dtsi b/arch/arm/boot/dts/omap36xx-am35xx-omap3430es2plus-clocks.dtsi
index 080fb3f4e429..15d18669000e 100644
--- a/arch/arm/boot/dts/omap36xx-am35xx-omap3430es2plus-clocks.dtsi
+++ b/arch/arm/boot/dts/omap36xx-am35xx-omap3430es2plus-clocks.dtsi
@@ -25,7 +25,7 @@
};
};
&cm_clocks {
- dpll5_ck: dpll5_ck {
+ dpll5_ck: dpll5_ck@d04 {
#clock-cells = <0>;
compatible = "ti,omap3-dpll-clock";
clocks = <&sys_ck>, <&sys_ck>;
@@ -34,7 +34,7 @@
ti,lock;
};
- dpll5_m2_ck: dpll5_m2_ck {
+ dpll5_m2_ck: dpll5_m2_ck@d50 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll5_ck>;
@@ -43,7 +43,7 @@
ti,index-starts-at-one;
};
- sgx_gate_fck: sgx_gate_fck {
+ sgx_gate_fck: sgx_gate_fck@b00 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&core_ck>;
@@ -91,7 +91,7 @@
clock-div = <2>;
};
- sgx_mux_fck: sgx_mux_fck {
+ sgx_mux_fck: sgx_mux_fck@b40 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&core_d3_ck>, <&core_d4_ck>, <&core_d6_ck>, <&cm_96m_fck>, <&omap_192m_alwon_fck>, <&core_d2_ck>, <&corex2_d3_fck>, <&corex2_d5_fck>;
@@ -104,7 +104,7 @@
clocks = <&sgx_gate_fck>, <&sgx_mux_fck>;
};
- sgx_ick: sgx_ick {
+ sgx_ick: sgx_ick@b10 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&l3_ick>;
@@ -112,7 +112,7 @@
ti,bit-shift = <0>;
};
- cpefuse_fck: cpefuse_fck {
+ cpefuse_fck: cpefuse_fck@a08 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_ck>;
@@ -120,7 +120,7 @@
ti,bit-shift = <0>;
};
- ts_fck: ts_fck {
+ ts_fck: ts_fck@a08 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&omap_32k_fck>;
@@ -128,7 +128,7 @@
ti,bit-shift = <1>;
};
- usbtll_fck: usbtll_fck {
+ usbtll_fck: usbtll_fck@a08 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&dpll5_m2_ck>;
@@ -136,7 +136,7 @@
ti,bit-shift = <2>;
};
- usbtll_ick: usbtll_ick {
+ usbtll_ick: usbtll_ick@a18 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -144,7 +144,7 @@
ti,bit-shift = <2>;
};
- mmchs3_ick: mmchs3_ick {
+ mmchs3_ick: mmchs3_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -152,7 +152,7 @@
ti,bit-shift = <30>;
};
- mmchs3_fck: mmchs3_fck {
+ mmchs3_fck: mmchs3_fck@a00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&core_96m_fck>;
@@ -160,7 +160,7 @@
ti,bit-shift = <30>;
};
- dss1_alwon_fck: dss1_alwon_fck_3430es2 {
+ dss1_alwon_fck: dss1_alwon_fck_3430es2@e00 {
#clock-cells = <0>;
compatible = "ti,dss-gate-clock";
clocks = <&dpll4_m4x2_ck>;
@@ -169,7 +169,7 @@
ti,set-rate-parent;
};
- dss_ick: dss_ick_3430es2 {
+ dss_ick: dss_ick_3430es2@e10 {
#clock-cells = <0>;
compatible = "ti,omap3-dss-interface-clock";
clocks = <&l4_ick>;
@@ -177,7 +177,7 @@
ti,bit-shift = <0>;
};
- usbhost_120m_fck: usbhost_120m_fck {
+ usbhost_120m_fck: usbhost_120m_fck@1400 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll5_m2_ck>;
@@ -185,7 +185,7 @@
ti,bit-shift = <1>;
};
- usbhost_48m_fck: usbhost_48m_fck {
+ usbhost_48m_fck: usbhost_48m_fck@1400 {
#clock-cells = <0>;
compatible = "ti,dss-gate-clock";
clocks = <&omap_48m_fck>;
@@ -193,7 +193,7 @@
ti,bit-shift = <0>;
};
- usbhost_ick: usbhost_ick {
+ usbhost_ick: usbhost_ick@1410 {
#clock-cells = <0>;
compatible = "ti,omap3-dss-interface-clock";
clocks = <&l4_ick>;
diff --git a/arch/arm/boot/dts/omap36xx-clocks.dtsi b/arch/arm/boot/dts/omap36xx-clocks.dtsi
index 200ae3a5cbbb..a21d1f021267 100644
--- a/arch/arm/boot/dts/omap36xx-clocks.dtsi
+++ b/arch/arm/boot/dts/omap36xx-clocks.dtsi
@@ -8,14 +8,14 @@
* published by the Free Software Foundation.
*/
&cm_clocks {
- dpll4_ck: dpll4_ck {
+ dpll4_ck: dpll4_ck@d00 {
#clock-cells = <0>;
compatible = "ti,omap3-dpll-per-j-type-clock";
clocks = <&sys_ck>, <&sys_ck>;
reg = <0x0d00>, <0x0d20>, <0x0d44>, <0x0d30>;
};
- dpll4_m5x2_ck: dpll4_m5x2_ck {
+ dpll4_m5x2_ck: dpll4_m5x2_ck@d00 {
#clock-cells = <0>;
compatible = "ti,hsdiv-gate-clock";
clocks = <&dpll4_m5x2_mul_ck>;
@@ -25,7 +25,7 @@
ti,set-bit-to-disable;
};
- dpll4_m2x2_ck: dpll4_m2x2_ck {
+ dpll4_m2x2_ck: dpll4_m2x2_ck@d00 {
#clock-cells = <0>;
compatible = "ti,hsdiv-gate-clock";
clocks = <&dpll4_m2x2_mul_ck>;
@@ -34,7 +34,7 @@
ti,set-bit-to-disable;
};
- dpll3_m3x2_ck: dpll3_m3x2_ck {
+ dpll3_m3x2_ck: dpll3_m3x2_ck@d00 {
#clock-cells = <0>;
compatible = "ti,hsdiv-gate-clock";
clocks = <&dpll3_m3x2_mul_ck>;
@@ -43,7 +43,7 @@
ti,set-bit-to-disable;
};
- dpll4_m3x2_ck: dpll4_m3x2_ck {
+ dpll4_m3x2_ck: dpll4_m3x2_ck@d00 {
#clock-cells = <0>;
compatible = "ti,hsdiv-gate-clock";
clocks = <&dpll4_m3x2_mul_ck>;
@@ -52,7 +52,7 @@
ti,set-bit-to-disable;
};
- dpll4_m6x2_ck: dpll4_m6x2_ck {
+ dpll4_m6x2_ck: dpll4_m6x2_ck@d00 {
#clock-cells = <0>;
compatible = "ti,hsdiv-gate-clock";
clocks = <&dpll4_m6x2_mul_ck>;
@@ -61,7 +61,7 @@
ti,set-bit-to-disable;
};
- uart4_fck: uart4_fck {
+ uart4_fck: uart4_fck@1000 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&per_48m_fck>;
diff --git a/arch/arm/boot/dts/omap36xx-omap3430es2plus-clocks.dtsi b/arch/arm/boot/dts/omap36xx-omap3430es2plus-clocks.dtsi
index 877318c28364..1a4fbdf0d9cc 100644
--- a/arch/arm/boot/dts/omap36xx-omap3430es2plus-clocks.dtsi
+++ b/arch/arm/boot/dts/omap36xx-omap3430es2plus-clocks.dtsi
@@ -8,7 +8,7 @@
* published by the Free Software Foundation.
*/
&cm_clocks {
- ssi_ssr_gate_fck_3430es2: ssi_ssr_gate_fck_3430es2 {
+ ssi_ssr_gate_fck_3430es2: ssi_ssr_gate_fck_3430es2@a00 {
#clock-cells = <0>;
compatible = "ti,composite-no-wait-gate-clock";
clocks = <&corex2_fck>;
@@ -16,7 +16,7 @@
reg = <0x0a00>;
};
- ssi_ssr_div_fck_3430es2: ssi_ssr_div_fck_3430es2 {
+ ssi_ssr_div_fck_3430es2: ssi_ssr_div_fck_3430es2@a40 {
#clock-cells = <0>;
compatible = "ti,composite-divider-clock";
clocks = <&corex2_fck>;
@@ -39,7 +39,7 @@
clock-div = <2>;
};
- hsotgusb_ick_3430es2: hsotgusb_ick_3430es2 {
+ hsotgusb_ick_3430es2: hsotgusb_ick_3430es2@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-hsotgusb-interface-clock";
clocks = <&core_l3_ick>;
@@ -55,7 +55,7 @@
clock-div = <1>;
};
- ssi_ick: ssi_ick_3430es2 {
+ ssi_ick: ssi_ick_3430es2@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-ssi-interface-clock";
clocks = <&ssi_l4_ick>;
@@ -63,7 +63,7 @@
ti,bit-shift = <0>;
};
- usim_gate_fck: usim_gate_fck {
+ usim_gate_fck: usim_gate_fck@c00 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&omap_96m_fck>;
@@ -143,7 +143,7 @@
clock-div = <20>;
};
- usim_mux_fck: usim_mux_fck {
+ usim_mux_fck: usim_mux_fck@c40 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&sys_ck>, <&sys_d2_ck>, <&omap_96m_d2_fck>, <&omap_96m_d4_fck>, <&omap_96m_d8_fck>, <&omap_96m_d10_fck>, <&dpll5_m2_d4_ck>, <&dpll5_m2_d8_ck>, <&dpll5_m2_d16_ck>, <&dpll5_m2_d20_ck>;
@@ -158,7 +158,7 @@
clocks = <&usim_gate_fck>, <&usim_mux_fck>;
};
- usim_ick: usim_ick {
+ usim_ick: usim_ick@c10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&wkup_l4_ick>;
diff --git a/arch/arm/boot/dts/omap36xx.dtsi b/arch/arm/boot/dts/omap36xx.dtsi
index f19c87bd6bf3..8b7979153008 100644
--- a/arch/arm/boot/dts/omap36xx.dtsi
+++ b/arch/arm/boot/dts/omap36xx.dtsi
@@ -44,7 +44,7 @@
abb_mpu_iva: regulator-abb-mpu {
compatible = "ti,abb-v1";
regulator-name = "abb_mpu_iva";
- #address-cell = <0>;
+ #address-cells = <0>;
#size-cells = <0>;
reg = <0x483072f0 0x8>, <0x48306818 0x4>;
reg-names = "base-address", "int-address";
@@ -87,7 +87,7 @@
};
};
- bandgap {
+ bandgap@48002524 {
reg = <0x48002524 0x4>;
compatible = "ti,omap36xx-bandgap";
#thermal-sensor-cells = <0>;
diff --git a/arch/arm/boot/dts/omap3xxx-clocks.dtsi b/arch/arm/boot/dts/omap3xxx-clocks.dtsi
index bbba5bdc4bc9..9bd91641aa7c 100644
--- a/arch/arm/boot/dts/omap3xxx-clocks.dtsi
+++ b/arch/arm/boot/dts/omap3xxx-clocks.dtsi
@@ -14,14 +14,14 @@
clock-frequency = <16800000>;
};
- osc_sys_ck: osc_sys_ck {
+ osc_sys_ck: osc_sys_ck@d40 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&virt_12m_ck>, <&virt_13m_ck>, <&virt_19200000_ck>, <&virt_26000000_ck>, <&virt_38_4m_ck>, <&virt_16_8m_ck>;
reg = <0x0d40>;
};
- sys_ck: sys_ck {
+ sys_ck: sys_ck@1270 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&osc_sys_ck>;
@@ -31,7 +31,7 @@
ti,index-starts-at-one;
};
- sys_clkout1: sys_clkout1 {
+ sys_clkout1: sys_clkout1@d70 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&osc_sys_ck>;
@@ -81,7 +81,7 @@
};
&scm_clocks {
- mcbsp5_mux_fck: mcbsp5_mux_fck {
+ mcbsp5_mux_fck: mcbsp5_mux_fck@68 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&core_96m_fck>, <&mcbsp_clks>;
@@ -95,7 +95,7 @@
clocks = <&mcbsp5_gate_fck>, <&mcbsp5_mux_fck>;
};
- mcbsp1_mux_fck: mcbsp1_mux_fck {
+ mcbsp1_mux_fck: mcbsp1_mux_fck@4 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&core_96m_fck>, <&mcbsp_clks>;
@@ -109,7 +109,7 @@
clocks = <&mcbsp1_gate_fck>, <&mcbsp1_mux_fck>;
};
- mcbsp2_mux_fck: mcbsp2_mux_fck {
+ mcbsp2_mux_fck: mcbsp2_mux_fck@4 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&per_96m_fck>, <&mcbsp_clks>;
@@ -123,7 +123,7 @@
clocks = <&mcbsp2_gate_fck>, <&mcbsp2_mux_fck>;
};
- mcbsp3_mux_fck: mcbsp3_mux_fck {
+ mcbsp3_mux_fck: mcbsp3_mux_fck@68 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&per_96m_fck>, <&mcbsp_clks>;
@@ -136,7 +136,7 @@
clocks = <&mcbsp3_gate_fck>, <&mcbsp3_mux_fck>;
};
- mcbsp4_mux_fck: mcbsp4_mux_fck {
+ mcbsp4_mux_fck: mcbsp4_mux_fck@68 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&per_96m_fck>, <&mcbsp_clks>;
@@ -193,14 +193,14 @@
clock-frequency = <38400000>;
};
- dpll4_ck: dpll4_ck {
+ dpll4_ck: dpll4_ck@d00 {
#clock-cells = <0>;
compatible = "ti,omap3-dpll-per-clock";
clocks = <&sys_ck>, <&sys_ck>;
reg = <0x0d00>, <0x0d20>, <0x0d44>, <0x0d30>;
};
- dpll4_m2_ck: dpll4_m2_ck {
+ dpll4_m2_ck: dpll4_m2_ck@d48 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll4_ck>;
@@ -217,7 +217,7 @@
clock-div = <1>;
};
- dpll4_m2x2_ck: dpll4_m2x2_ck {
+ dpll4_m2x2_ck: dpll4_m2x2_ck@d00 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll4_m2x2_mul_ck>;
@@ -234,14 +234,14 @@
clock-div = <1>;
};
- dpll3_ck: dpll3_ck {
+ dpll3_ck: dpll3_ck@d00 {
#clock-cells = <0>;
compatible = "ti,omap3-dpll-core-clock";
clocks = <&sys_ck>, <&sys_ck>;
reg = <0x0d00>, <0x0d20>, <0x0d40>, <0x0d30>;
};
- dpll3_m3_ck: dpll3_m3_ck {
+ dpll3_m3_ck: dpll3_m3_ck@1140 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll3_ck>;
@@ -259,7 +259,7 @@
clock-div = <1>;
};
- dpll3_m3x2_ck: dpll3_m3x2_ck {
+ dpll3_m3x2_ck: dpll3_m3x2_ck@d00 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll3_m3x2_mul_ck>;
@@ -288,7 +288,7 @@
clock-frequency = <0x0>;
};
- dpll3_m2_ck: dpll3_m2_ck {
+ dpll3_m2_ck: dpll3_m2_ck@d40 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll3_ck>;
@@ -306,7 +306,7 @@
clock-div = <1>;
};
- dpll1_fck: dpll1_fck {
+ dpll1_fck: dpll1_fck@940 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&core_ck>;
@@ -316,7 +316,7 @@
ti,index-starts-at-one;
};
- dpll1_ck: dpll1_ck {
+ dpll1_ck: dpll1_ck@904 {
#clock-cells = <0>;
compatible = "ti,omap3-dpll-clock";
clocks = <&sys_ck>, <&dpll1_fck>;
@@ -331,7 +331,7 @@
clock-div = <1>;
};
- dpll1_x2m2_ck: dpll1_x2m2_ck {
+ dpll1_x2m2_ck: dpll1_x2m2_ck@944 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll1_x2_ck>;
@@ -348,7 +348,7 @@
clock-div = <1>;
};
- omap_96m_fck: omap_96m_fck {
+ omap_96m_fck: omap_96m_fck@d40 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&cm_96m_fck>, <&sys_ck>;
@@ -356,7 +356,7 @@
reg = <0x0d40>;
};
- dpll4_m3_ck: dpll4_m3_ck {
+ dpll4_m3_ck: dpll4_m3_ck@e40 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll4_ck>;
@@ -374,7 +374,7 @@
clock-div = <1>;
};
- dpll4_m3x2_ck: dpll4_m3x2_ck {
+ dpll4_m3x2_ck: dpll4_m3x2_ck@d00 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll4_m3x2_mul_ck>;
@@ -383,7 +383,7 @@
ti,set-bit-to-disable;
};
- omap_54m_fck: omap_54m_fck {
+ omap_54m_fck: omap_54m_fck@d40 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&dpll4_m3x2_ck>, <&sys_altclk>;
@@ -399,7 +399,7 @@
clock-div = <2>;
};
- omap_48m_fck: omap_48m_fck {
+ omap_48m_fck: omap_48m_fck@d40 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&cm_96m_d2_fck>, <&sys_altclk>;
@@ -415,7 +415,7 @@
clock-div = <4>;
};
- dpll4_m4_ck: dpll4_m4_ck {
+ dpll4_m4_ck: dpll4_m4_ck@e40 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll4_ck>;
@@ -433,7 +433,7 @@
ti,set-rate-parent;
};
- dpll4_m4x2_ck: dpll4_m4x2_ck {
+ dpll4_m4x2_ck: dpll4_m4x2_ck@d00 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll4_m4x2_mul_ck>;
@@ -443,7 +443,7 @@
ti,set-rate-parent;
};
- dpll4_m5_ck: dpll4_m5_ck {
+ dpll4_m5_ck: dpll4_m5_ck@f40 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll4_ck>;
@@ -461,7 +461,7 @@
ti,set-rate-parent;
};
- dpll4_m5x2_ck: dpll4_m5x2_ck {
+ dpll4_m5x2_ck: dpll4_m5x2_ck@d00 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll4_m5x2_mul_ck>;
@@ -471,7 +471,7 @@
ti,set-rate-parent;
};
- dpll4_m6_ck: dpll4_m6_ck {
+ dpll4_m6_ck: dpll4_m6_ck@1140 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll4_ck>;
@@ -489,7 +489,7 @@
clock-div = <1>;
};
- dpll4_m6x2_ck: dpll4_m6x2_ck {
+ dpll4_m6x2_ck: dpll4_m6x2_ck@d00 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll4_m6x2_mul_ck>;
@@ -506,7 +506,7 @@
clock-div = <1>;
};
- clkout2_src_gate_ck: clkout2_src_gate_ck {
+ clkout2_src_gate_ck: clkout2_src_gate_ck@d70 {
#clock-cells = <0>;
compatible = "ti,composite-no-wait-gate-clock";
clocks = <&core_ck>;
@@ -514,7 +514,7 @@
reg = <0x0d70>;
};
- clkout2_src_mux_ck: clkout2_src_mux_ck {
+ clkout2_src_mux_ck: clkout2_src_mux_ck@d70 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&core_ck>, <&sys_ck>, <&cm_96m_fck>, <&omap_54m_fck>;
@@ -527,7 +527,7 @@
clocks = <&clkout2_src_gate_ck>, <&clkout2_src_mux_ck>;
};
- sys_clkout2: sys_clkout2 {
+ sys_clkout2: sys_clkout2@d70 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&clkout2_src_ck>;
@@ -545,7 +545,7 @@
clock-div = <1>;
};
- arm_fck: arm_fck {
+ arm_fck: arm_fck@924 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&mpu_ck>;
@@ -561,7 +561,7 @@
clock-div = <1>;
};
- l3_ick: l3_ick {
+ l3_ick: l3_ick@a40 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&core_ck>;
@@ -570,7 +570,7 @@
ti,index-starts-at-one;
};
- l4_ick: l4_ick {
+ l4_ick: l4_ick@a40 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&l3_ick>;
@@ -580,7 +580,7 @@
ti,index-starts-at-one;
};
- rm_ick: rm_ick {
+ rm_ick: rm_ick@c40 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&l4_ick>;
@@ -590,7 +590,7 @@
ti,index-starts-at-one;
};
- gpt10_gate_fck: gpt10_gate_fck {
+ gpt10_gate_fck: gpt10_gate_fck@a00 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&sys_ck>;
@@ -598,7 +598,7 @@
reg = <0x0a00>;
};
- gpt10_mux_fck: gpt10_mux_fck {
+ gpt10_mux_fck: gpt10_mux_fck@a40 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&omap_32k_fck>, <&sys_ck>;
@@ -612,7 +612,7 @@
clocks = <&gpt10_gate_fck>, <&gpt10_mux_fck>;
};
- gpt11_gate_fck: gpt11_gate_fck {
+ gpt11_gate_fck: gpt11_gate_fck@a00 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&sys_ck>;
@@ -620,7 +620,7 @@
reg = <0x0a00>;
};
- gpt11_mux_fck: gpt11_mux_fck {
+ gpt11_mux_fck: gpt11_mux_fck@a40 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&omap_32k_fck>, <&sys_ck>;
@@ -642,7 +642,7 @@
clock-div = <1>;
};
- mmchs2_fck: mmchs2_fck {
+ mmchs2_fck: mmchs2_fck@a00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&core_96m_fck>;
@@ -650,7 +650,7 @@
ti,bit-shift = <25>;
};
- mmchs1_fck: mmchs1_fck {
+ mmchs1_fck: mmchs1_fck@a00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&core_96m_fck>;
@@ -658,7 +658,7 @@
ti,bit-shift = <24>;
};
- i2c3_fck: i2c3_fck {
+ i2c3_fck: i2c3_fck@a00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&core_96m_fck>;
@@ -666,7 +666,7 @@
ti,bit-shift = <17>;
};
- i2c2_fck: i2c2_fck {
+ i2c2_fck: i2c2_fck@a00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&core_96m_fck>;
@@ -674,7 +674,7 @@
ti,bit-shift = <16>;
};
- i2c1_fck: i2c1_fck {
+ i2c1_fck: i2c1_fck@a00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&core_96m_fck>;
@@ -682,7 +682,7 @@
ti,bit-shift = <15>;
};
- mcbsp5_gate_fck: mcbsp5_gate_fck {
+ mcbsp5_gate_fck: mcbsp5_gate_fck@a00 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&mcbsp_clks>;
@@ -690,7 +690,7 @@
reg = <0x0a00>;
};
- mcbsp1_gate_fck: mcbsp1_gate_fck {
+ mcbsp1_gate_fck: mcbsp1_gate_fck@a00 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&mcbsp_clks>;
@@ -706,7 +706,7 @@
clock-div = <1>;
};
- mcspi4_fck: mcspi4_fck {
+ mcspi4_fck: mcspi4_fck@a00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&core_48m_fck>;
@@ -714,7 +714,7 @@
ti,bit-shift = <21>;
};
- mcspi3_fck: mcspi3_fck {
+ mcspi3_fck: mcspi3_fck@a00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&core_48m_fck>;
@@ -722,7 +722,7 @@
ti,bit-shift = <20>;
};
- mcspi2_fck: mcspi2_fck {
+ mcspi2_fck: mcspi2_fck@a00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&core_48m_fck>;
@@ -730,7 +730,7 @@
ti,bit-shift = <19>;
};
- mcspi1_fck: mcspi1_fck {
+ mcspi1_fck: mcspi1_fck@a00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&core_48m_fck>;
@@ -738,7 +738,7 @@
ti,bit-shift = <18>;
};
- uart2_fck: uart2_fck {
+ uart2_fck: uart2_fck@a00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&core_48m_fck>;
@@ -746,7 +746,7 @@
ti,bit-shift = <14>;
};
- uart1_fck: uart1_fck {
+ uart1_fck: uart1_fck@a00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&core_48m_fck>;
@@ -762,7 +762,7 @@
clock-div = <1>;
};
- hdq_fck: hdq_fck {
+ hdq_fck: hdq_fck@a00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&core_12m_fck>;
@@ -778,7 +778,7 @@
clock-div = <1>;
};
- sdrc_ick: sdrc_ick {
+ sdrc_ick: sdrc_ick@a10 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&core_l3_ick>;
@@ -802,7 +802,7 @@
clock-div = <1>;
};
- mmchs2_ick: mmchs2_ick {
+ mmchs2_ick: mmchs2_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -810,7 +810,7 @@
ti,bit-shift = <25>;
};
- mmchs1_ick: mmchs1_ick {
+ mmchs1_ick: mmchs1_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -818,7 +818,7 @@
ti,bit-shift = <24>;
};
- hdq_ick: hdq_ick {
+ hdq_ick: hdq_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -826,7 +826,7 @@
ti,bit-shift = <22>;
};
- mcspi4_ick: mcspi4_ick {
+ mcspi4_ick: mcspi4_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -834,7 +834,7 @@
ti,bit-shift = <21>;
};
- mcspi3_ick: mcspi3_ick {
+ mcspi3_ick: mcspi3_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -842,7 +842,7 @@
ti,bit-shift = <20>;
};
- mcspi2_ick: mcspi2_ick {
+ mcspi2_ick: mcspi2_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -850,7 +850,7 @@
ti,bit-shift = <19>;
};
- mcspi1_ick: mcspi1_ick {
+ mcspi1_ick: mcspi1_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -858,7 +858,7 @@
ti,bit-shift = <18>;
};
- i2c3_ick: i2c3_ick {
+ i2c3_ick: i2c3_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -866,7 +866,7 @@
ti,bit-shift = <17>;
};
- i2c2_ick: i2c2_ick {
+ i2c2_ick: i2c2_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -874,7 +874,7 @@
ti,bit-shift = <16>;
};
- i2c1_ick: i2c1_ick {
+ i2c1_ick: i2c1_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -882,7 +882,7 @@
ti,bit-shift = <15>;
};
- uart2_ick: uart2_ick {
+ uart2_ick: uart2_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -890,7 +890,7 @@
ti,bit-shift = <14>;
};
- uart1_ick: uart1_ick {
+ uart1_ick: uart1_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -898,7 +898,7 @@
ti,bit-shift = <13>;
};
- gpt11_ick: gpt11_ick {
+ gpt11_ick: gpt11_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -906,7 +906,7 @@
ti,bit-shift = <12>;
};
- gpt10_ick: gpt10_ick {
+ gpt10_ick: gpt10_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -914,7 +914,7 @@
ti,bit-shift = <11>;
};
- mcbsp5_ick: mcbsp5_ick {
+ mcbsp5_ick: mcbsp5_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -922,7 +922,7 @@
ti,bit-shift = <10>;
};
- mcbsp1_ick: mcbsp1_ick {
+ mcbsp1_ick: mcbsp1_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -930,7 +930,7 @@
ti,bit-shift = <9>;
};
- omapctrl_ick: omapctrl_ick {
+ omapctrl_ick: omapctrl_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -938,7 +938,7 @@
ti,bit-shift = <6>;
};
- dss_tv_fck: dss_tv_fck {
+ dss_tv_fck: dss_tv_fck@e00 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&omap_54m_fck>;
@@ -946,7 +946,7 @@
ti,bit-shift = <2>;
};
- dss_96m_fck: dss_96m_fck {
+ dss_96m_fck: dss_96m_fck@e00 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&omap_96m_fck>;
@@ -954,7 +954,7 @@
ti,bit-shift = <2>;
};
- dss2_alwon_fck: dss2_alwon_fck {
+ dss2_alwon_fck: dss2_alwon_fck@e00 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_ck>;
@@ -968,7 +968,7 @@
clock-frequency = <0>;
};
- gpt1_gate_fck: gpt1_gate_fck {
+ gpt1_gate_fck: gpt1_gate_fck@c00 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&sys_ck>;
@@ -976,7 +976,7 @@
reg = <0x0c00>;
};
- gpt1_mux_fck: gpt1_mux_fck {
+ gpt1_mux_fck: gpt1_mux_fck@c40 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&omap_32k_fck>, <&sys_ck>;
@@ -989,7 +989,7 @@
clocks = <&gpt1_gate_fck>, <&gpt1_mux_fck>;
};
- aes2_ick: aes2_ick {
+ aes2_ick: aes2_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -1005,7 +1005,7 @@
clock-div = <1>;
};
- gpio1_dbck: gpio1_dbck {
+ gpio1_dbck: gpio1_dbck@c00 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&wkup_32k_fck>;
@@ -1013,7 +1013,7 @@
ti,bit-shift = <3>;
};
- sha12_ick: sha12_ick {
+ sha12_ick: sha12_ick@a10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&core_l4_ick>;
@@ -1021,7 +1021,7 @@
ti,bit-shift = <27>;
};
- wdt2_fck: wdt2_fck {
+ wdt2_fck: wdt2_fck@c00 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&wkup_32k_fck>;
@@ -1029,7 +1029,7 @@
ti,bit-shift = <5>;
};
- wdt2_ick: wdt2_ick {
+ wdt2_ick: wdt2_ick@c10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&wkup_l4_ick>;
@@ -1037,7 +1037,7 @@
ti,bit-shift = <5>;
};
- wdt1_ick: wdt1_ick {
+ wdt1_ick: wdt1_ick@c10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&wkup_l4_ick>;
@@ -1045,7 +1045,7 @@
ti,bit-shift = <4>;
};
- gpio1_ick: gpio1_ick {
+ gpio1_ick: gpio1_ick@c10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&wkup_l4_ick>;
@@ -1053,7 +1053,7 @@
ti,bit-shift = <3>;
};
- omap_32ksync_ick: omap_32ksync_ick {
+ omap_32ksync_ick: omap_32ksync_ick@c10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&wkup_l4_ick>;
@@ -1061,7 +1061,7 @@
ti,bit-shift = <2>;
};
- gpt12_ick: gpt12_ick {
+ gpt12_ick: gpt12_ick@c10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&wkup_l4_ick>;
@@ -1069,7 +1069,7 @@
ti,bit-shift = <1>;
};
- gpt1_ick: gpt1_ick {
+ gpt1_ick: gpt1_ick@c10 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&wkup_l4_ick>;
@@ -1093,7 +1093,7 @@
clock-div = <1>;
};
- uart3_fck: uart3_fck {
+ uart3_fck: uart3_fck@1000 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&per_48m_fck>;
@@ -1101,7 +1101,7 @@
ti,bit-shift = <11>;
};
- gpt2_gate_fck: gpt2_gate_fck {
+ gpt2_gate_fck: gpt2_gate_fck@1000 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&sys_ck>;
@@ -1109,7 +1109,7 @@
reg = <0x1000>;
};
- gpt2_mux_fck: gpt2_mux_fck {
+ gpt2_mux_fck: gpt2_mux_fck@1040 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&omap_32k_fck>, <&sys_ck>;
@@ -1122,7 +1122,7 @@
clocks = <&gpt2_gate_fck>, <&gpt2_mux_fck>;
};
- gpt3_gate_fck: gpt3_gate_fck {
+ gpt3_gate_fck: gpt3_gate_fck@1000 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&sys_ck>;
@@ -1130,7 +1130,7 @@
reg = <0x1000>;
};
- gpt3_mux_fck: gpt3_mux_fck {
+ gpt3_mux_fck: gpt3_mux_fck@1040 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&omap_32k_fck>, <&sys_ck>;
@@ -1144,7 +1144,7 @@
clocks = <&gpt3_gate_fck>, <&gpt3_mux_fck>;
};
- gpt4_gate_fck: gpt4_gate_fck {
+ gpt4_gate_fck: gpt4_gate_fck@1000 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&sys_ck>;
@@ -1152,7 +1152,7 @@
reg = <0x1000>;
};
- gpt4_mux_fck: gpt4_mux_fck {
+ gpt4_mux_fck: gpt4_mux_fck@1040 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&omap_32k_fck>, <&sys_ck>;
@@ -1166,7 +1166,7 @@
clocks = <&gpt4_gate_fck>, <&gpt4_mux_fck>;
};
- gpt5_gate_fck: gpt5_gate_fck {
+ gpt5_gate_fck: gpt5_gate_fck@1000 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&sys_ck>;
@@ -1174,7 +1174,7 @@
reg = <0x1000>;
};
- gpt5_mux_fck: gpt5_mux_fck {
+ gpt5_mux_fck: gpt5_mux_fck@1040 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&omap_32k_fck>, <&sys_ck>;
@@ -1188,7 +1188,7 @@
clocks = <&gpt5_gate_fck>, <&gpt5_mux_fck>;
};
- gpt6_gate_fck: gpt6_gate_fck {
+ gpt6_gate_fck: gpt6_gate_fck@1000 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&sys_ck>;
@@ -1196,7 +1196,7 @@
reg = <0x1000>;
};
- gpt6_mux_fck: gpt6_mux_fck {
+ gpt6_mux_fck: gpt6_mux_fck@1040 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&omap_32k_fck>, <&sys_ck>;
@@ -1210,7 +1210,7 @@
clocks = <&gpt6_gate_fck>, <&gpt6_mux_fck>;
};
- gpt7_gate_fck: gpt7_gate_fck {
+ gpt7_gate_fck: gpt7_gate_fck@1000 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&sys_ck>;
@@ -1218,7 +1218,7 @@
reg = <0x1000>;
};
- gpt7_mux_fck: gpt7_mux_fck {
+ gpt7_mux_fck: gpt7_mux_fck@1040 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&omap_32k_fck>, <&sys_ck>;
@@ -1232,7 +1232,7 @@
clocks = <&gpt7_gate_fck>, <&gpt7_mux_fck>;
};
- gpt8_gate_fck: gpt8_gate_fck {
+ gpt8_gate_fck: gpt8_gate_fck@1000 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&sys_ck>;
@@ -1240,7 +1240,7 @@
reg = <0x1000>;
};
- gpt8_mux_fck: gpt8_mux_fck {
+ gpt8_mux_fck: gpt8_mux_fck@1040 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&omap_32k_fck>, <&sys_ck>;
@@ -1254,7 +1254,7 @@
clocks = <&gpt8_gate_fck>, <&gpt8_mux_fck>;
};
- gpt9_gate_fck: gpt9_gate_fck {
+ gpt9_gate_fck: gpt9_gate_fck@1000 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&sys_ck>;
@@ -1262,7 +1262,7 @@
reg = <0x1000>;
};
- gpt9_mux_fck: gpt9_mux_fck {
+ gpt9_mux_fck: gpt9_mux_fck@1040 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&omap_32k_fck>, <&sys_ck>;
@@ -1284,7 +1284,7 @@
clock-div = <1>;
};
- gpio6_dbck: gpio6_dbck {
+ gpio6_dbck: gpio6_dbck@1000 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&per_32k_alwon_fck>;
@@ -1292,7 +1292,7 @@
ti,bit-shift = <17>;
};
- gpio5_dbck: gpio5_dbck {
+ gpio5_dbck: gpio5_dbck@1000 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&per_32k_alwon_fck>;
@@ -1300,7 +1300,7 @@
ti,bit-shift = <16>;
};
- gpio4_dbck: gpio4_dbck {
+ gpio4_dbck: gpio4_dbck@1000 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&per_32k_alwon_fck>;
@@ -1308,7 +1308,7 @@
ti,bit-shift = <15>;
};
- gpio3_dbck: gpio3_dbck {
+ gpio3_dbck: gpio3_dbck@1000 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&per_32k_alwon_fck>;
@@ -1316,7 +1316,7 @@
ti,bit-shift = <14>;
};
- gpio2_dbck: gpio2_dbck {
+ gpio2_dbck: gpio2_dbck@1000 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&per_32k_alwon_fck>;
@@ -1324,7 +1324,7 @@
ti,bit-shift = <13>;
};
- wdt3_fck: wdt3_fck {
+ wdt3_fck: wdt3_fck@1000 {
#clock-cells = <0>;
compatible = "ti,wait-gate-clock";
clocks = <&per_32k_alwon_fck>;
@@ -1340,7 +1340,7 @@
clock-div = <1>;
};
- gpio6_ick: gpio6_ick {
+ gpio6_ick: gpio6_ick@1010 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&per_l4_ick>;
@@ -1348,7 +1348,7 @@
ti,bit-shift = <17>;
};
- gpio5_ick: gpio5_ick {
+ gpio5_ick: gpio5_ick@1010 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&per_l4_ick>;
@@ -1356,7 +1356,7 @@
ti,bit-shift = <16>;
};
- gpio4_ick: gpio4_ick {
+ gpio4_ick: gpio4_ick@1010 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&per_l4_ick>;
@@ -1364,7 +1364,7 @@
ti,bit-shift = <15>;
};
- gpio3_ick: gpio3_ick {
+ gpio3_ick: gpio3_ick@1010 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&per_l4_ick>;
@@ -1372,7 +1372,7 @@
ti,bit-shift = <14>;
};
- gpio2_ick: gpio2_ick {
+ gpio2_ick: gpio2_ick@1010 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&per_l4_ick>;
@@ -1380,7 +1380,7 @@
ti,bit-shift = <13>;
};
- wdt3_ick: wdt3_ick {
+ wdt3_ick: wdt3_ick@1010 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&per_l4_ick>;
@@ -1388,7 +1388,7 @@
ti,bit-shift = <12>;
};
- uart3_ick: uart3_ick {
+ uart3_ick: uart3_ick@1010 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&per_l4_ick>;
@@ -1396,7 +1396,7 @@
ti,bit-shift = <11>;
};
- uart4_ick: uart4_ick {
+ uart4_ick: uart4_ick@1010 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&per_l4_ick>;
@@ -1404,7 +1404,7 @@
ti,bit-shift = <18>;
};
- gpt9_ick: gpt9_ick {
+ gpt9_ick: gpt9_ick@1010 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&per_l4_ick>;
@@ -1412,7 +1412,7 @@
ti,bit-shift = <10>;
};
- gpt8_ick: gpt8_ick {
+ gpt8_ick: gpt8_ick@1010 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&per_l4_ick>;
@@ -1420,7 +1420,7 @@
ti,bit-shift = <9>;
};
- gpt7_ick: gpt7_ick {
+ gpt7_ick: gpt7_ick@1010 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&per_l4_ick>;
@@ -1428,7 +1428,7 @@
ti,bit-shift = <8>;
};
- gpt6_ick: gpt6_ick {
+ gpt6_ick: gpt6_ick@1010 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&per_l4_ick>;
@@ -1436,7 +1436,7 @@
ti,bit-shift = <7>;
};
- gpt5_ick: gpt5_ick {
+ gpt5_ick: gpt5_ick@1010 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&per_l4_ick>;
@@ -1444,7 +1444,7 @@
ti,bit-shift = <6>;
};
- gpt4_ick: gpt4_ick {
+ gpt4_ick: gpt4_ick@1010 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&per_l4_ick>;
@@ -1452,7 +1452,7 @@
ti,bit-shift = <5>;
};
- gpt3_ick: gpt3_ick {
+ gpt3_ick: gpt3_ick@1010 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&per_l4_ick>;
@@ -1460,7 +1460,7 @@
ti,bit-shift = <4>;
};
- gpt2_ick: gpt2_ick {
+ gpt2_ick: gpt2_ick@1010 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&per_l4_ick>;
@@ -1468,7 +1468,7 @@
ti,bit-shift = <3>;
};
- mcbsp2_ick: mcbsp2_ick {
+ mcbsp2_ick: mcbsp2_ick@1010 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&per_l4_ick>;
@@ -1476,7 +1476,7 @@
ti,bit-shift = <0>;
};
- mcbsp3_ick: mcbsp3_ick {
+ mcbsp3_ick: mcbsp3_ick@1010 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&per_l4_ick>;
@@ -1484,7 +1484,7 @@
ti,bit-shift = <1>;
};
- mcbsp4_ick: mcbsp4_ick {
+ mcbsp4_ick: mcbsp4_ick@1010 {
#clock-cells = <0>;
compatible = "ti,omap3-interface-clock";
clocks = <&per_l4_ick>;
@@ -1492,7 +1492,7 @@
ti,bit-shift = <2>;
};
- mcbsp2_gate_fck: mcbsp2_gate_fck {
+ mcbsp2_gate_fck: mcbsp2_gate_fck@1000 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&mcbsp_clks>;
@@ -1500,7 +1500,7 @@
reg = <0x1000>;
};
- mcbsp3_gate_fck: mcbsp3_gate_fck {
+ mcbsp3_gate_fck: mcbsp3_gate_fck@1000 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&mcbsp_clks>;
@@ -1508,7 +1508,7 @@
reg = <0x1000>;
};
- mcbsp4_gate_fck: mcbsp4_gate_fck {
+ mcbsp4_gate_fck: mcbsp4_gate_fck@1000 {
#clock-cells = <0>;
compatible = "ti,composite-gate-clock";
clocks = <&mcbsp_clks>;
@@ -1516,7 +1516,7 @@
reg = <0x1000>;
};
- emu_src_mux_ck: emu_src_mux_ck {
+ emu_src_mux_ck: emu_src_mux_ck@1140 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_ck>, <&emu_core_alwon_ck>, <&emu_per_alwon_ck>, <&emu_mpu_alwon_ck>;
@@ -1529,7 +1529,7 @@
clocks = <&emu_src_mux_ck>;
};
- pclk_fck: pclk_fck {
+ pclk_fck: pclk_fck@1140 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&emu_src_ck>;
@@ -1539,7 +1539,7 @@
ti,index-starts-at-one;
};
- pclkx2_fck: pclkx2_fck {
+ pclkx2_fck: pclkx2_fck@1140 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&emu_src_ck>;
@@ -1549,7 +1549,7 @@
ti,index-starts-at-one;
};
- atclk_fck: atclk_fck {
+ atclk_fck: atclk_fck@1140 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&emu_src_ck>;
@@ -1559,7 +1559,7 @@
ti,index-starts-at-one;
};
- traceclk_src_fck: traceclk_src_fck {
+ traceclk_src_fck: traceclk_src_fck@1140 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_ck>, <&emu_core_alwon_ck>, <&emu_per_alwon_ck>, <&emu_mpu_alwon_ck>;
@@ -1567,7 +1567,7 @@
reg = <0x1140>;
};
- traceclk_fck: traceclk_fck {
+ traceclk_fck: traceclk_fck@1140 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&traceclk_src_fck>;
diff --git a/arch/arm/boot/dts/omap4-kc1.dts b/arch/arm/boot/dts/omap4-kc1.dts
new file mode 100644
index 000000000000..2251bd54e4e6
--- /dev/null
+++ b/arch/arm/boot/dts/omap4-kc1.dts
@@ -0,0 +1,182 @@
+/*
+ * Copyright (C) 2016 Paul Kocialkowski <contact@paulk.fr>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+/dts-v1/;
+
+#include "omap443x.dtsi"
+
+/ {
+ model = "Amazon Kindle Fire (first generation)";
+ compatible = "amazon,omap4-kc1", "ti,omap4430", "ti,omap4";
+
+ memory {
+ device_type = "memory";
+ reg = <0x80000000 0x20000000>; /* 512 MB */
+ };
+
+ pwmleds {
+ compatible = "pwm-leds";
+
+ green {
+ label = "green";
+ pwms = <&twl_pwm 0 7812500>;
+ max-brightness = <127>;
+ };
+
+ orange {
+ label = "orange";
+ pwms = <&twl_pwm 1 7812500>;
+ max-brightness = <127>;
+ };
+ };
+};
+
+&omap4_pmx_core {
+ pinctrl-names = "default";
+
+ uart3_pins: pinmux_uart3_pins {
+ pinctrl-single,pins = <
+ OMAP4_IOPAD(0x144, PIN_INPUT | MUX_MODE0) /* uart3_rx_irrx */
+ OMAP4_IOPAD(0x146, PIN_OUTPUT | MUX_MODE0) /* uart3_tx_irtx */
+ >;
+ };
+
+ i2c1_pins: pinmux_i2c1_pins {
+ pinctrl-single,pins = <
+ OMAP4_IOPAD(0x122, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c1_scl */
+ OMAP4_IOPAD(0x124, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c1_sda */
+ >;
+ };
+
+ i2c2_pins: pinmux_i2c2_pins {
+ pinctrl-single,pins = <
+ OMAP4_IOPAD(0x126, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c2_scl */
+ OMAP4_IOPAD(0x128, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c2_sda */
+ >;
+ };
+
+ i2c3_pins: pinmux_i2c3_pins {
+ pinctrl-single,pins = <
+ OMAP4_IOPAD(0x12a, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c3_scl */
+ OMAP4_IOPAD(0x12c, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c3_sda */
+ >;
+ };
+
+ i2c4_pins: pinmux_i2c4_pins {
+ pinctrl-single,pins = <
+ OMAP4_IOPAD(0x12e, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c4_scl */
+ OMAP4_IOPAD(0x130, PIN_INPUT_PULLUP | MUX_MODE0) /* i2c4_sda */
+ >;
+ };
+
+ mmc2_pins: pinmux_mmc2_pins {
+ pinctrl-single,pins = <
+ OMAP4_IOPAD(0x040, PIN_INPUT_PULLUP | MUX_MODE1) /* sdmmc2_dat0 */
+ OMAP4_IOPAD(0x042, PIN_INPUT_PULLUP | MUX_MODE1) /* sdmmc2_dat1 */
+ OMAP4_IOPAD(0x044, PIN_INPUT_PULLUP | MUX_MODE1) /* sdmmc2_dat2 */
+ OMAP4_IOPAD(0x046, PIN_INPUT_PULLUP | MUX_MODE1) /* sdmmc2_dat3 */
+ OMAP4_IOPAD(0x048, PIN_INPUT_PULLUP | MUX_MODE1) /* sdmmc2_dat4 */
+ OMAP4_IOPAD(0x04a, PIN_INPUT_PULLUP | MUX_MODE1) /* sdmmc2_dat5 */
+ OMAP4_IOPAD(0x04c, PIN_INPUT_PULLUP | MUX_MODE1) /* sdmmc2_dat6 */
+ OMAP4_IOPAD(0x04e, PIN_INPUT_PULLUP | MUX_MODE1) /* sdmmc2_dat7 */
+ OMAP4_IOPAD(0x082, PIN_INPUT_PULLUP | MUX_MODE1) /* sdmmc2_clk */
+ OMAP4_IOPAD(0x084, PIN_INPUT_PULLUP | MUX_MODE1) /* sdmmc2_cmd */
+ >;
+ };
+
+ usb_otg_hs_pins: pinmux_usb_otg_hs_pins {
+ pinctrl-single,pins = <
+ OMAP4_IOPAD(0x194, PIN_OUTPUT_PULLDOWN | MUX_MODE0) /* usba0_otg_ce */
+ OMAP4_IOPAD(0x196, PIN_INPUT | MUX_MODE0) /* usba0_otg_dp */
+ OMAP4_IOPAD(0x198, PIN_INPUT | MUX_MODE0) /* usba0_otg_dm */
+ >;
+ };
+};
+
+&uart3 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart3_pins>;
+
+ interrupts-extended = <&wakeupgen GIC_SPI 74 IRQ_TYPE_LEVEL_HIGH
+ &omap4_pmx_core OMAP4_UART3_RX>;
+};
+
+&i2c1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c1_pins>;
+
+ clock-frequency = <400000>;
+
+ twl: twl@48 {
+ reg = <0x48>;
+ /* IRQ# = 7 */
+ interrupts = <GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>; /* IRQ_SYS_1N cascaded to gic */
+
+ twl_power: power {
+ compatible = "ti,twl6030-power";
+ ti,system-power-controller;
+ };
+ };
+};
+
+&i2c2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c2_pins>;
+
+ clock-frequency = <400000>;
+};
+
+&i2c3 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c3_pins>;
+
+ clock-frequency = <400000>;
+};
+
+&i2c4 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c4_pins>;
+
+ clock-frequency = <400000>;
+};
+
+&mmc1 {
+ status = "disabled";
+};
+
+&mmc2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc2_pins>;
+
+ vmmc-supply = <&vaux1>;
+ ti,non-removable;
+ bus-width = <8>;
+};
+
+&mmc3 {
+ status = "disabled";
+};
+
+&mmc4 {
+ status = "disabled";
+};
+
+&usb_otg_hs {
+ pinctrl-names = "default";
+ pinctrl-0 = <&usb_otg_hs_pins>;
+
+ interface-type = <1>;
+ mode = <3>;
+ power = <50>;
+};
+
+#include "twl6030.dtsi"
+#include "twl6030_omap4.dtsi"
+
+&twl_usb_comparator {
+ usb-supply = <&vusb>;
+};
diff --git a/arch/arm/boot/dts/omap4-var-som-om44.dtsi b/arch/arm/boot/dts/omap4-var-som-om44.dtsi
index 49d032b846be..a17997f4e9aa 100644
--- a/arch/arm/boot/dts/omap4-var-som-om44.dtsi
+++ b/arch/arm/boot/dts/omap4-var-som-om44.dtsi
@@ -17,7 +17,7 @@
reg = <0x80000000 0x40000000>; /* 1 GB */
};
- sound: sound@0 {
+ sound: sound {
compatible = "ti,abe-twl6040";
ti,model = "VAR-SOM-OM44";
diff --git a/arch/arm/boot/dts/omap4.dtsi b/arch/arm/boot/dts/omap4.dtsi
index 421fe9f8a9eb..3fdc51cd0fad 100644
--- a/arch/arm/boot/dts/omap4.dtsi
+++ b/arch/arm/boot/dts/omap4.dtsi
@@ -198,7 +198,7 @@
#size-cells = <1>;
ranges = <0 0x5a0 0x170>;
- pbias_regulator: pbias_regulator {
+ pbias_regulator: pbias_regulator@60 {
compatible = "ti,pbias-omap4", "ti,pbias-omap";
reg = <0x60 0x4>;
syscon = <&omap4_padconf_global>;
@@ -370,6 +370,10 @@
ti,no-idle-on-init;
clocks = <&l3_div_ck>;
clock-names = "fck";
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ gpio-controller;
+ #gpio-cells = <2>;
};
uart1: serial@4806a000 {
diff --git a/arch/arm/boot/dts/omap443x-clocks.dtsi b/arch/arm/boot/dts/omap443x-clocks.dtsi
index 2bd2166f88d3..f370d96a87e5 100644
--- a/arch/arm/boot/dts/omap443x-clocks.dtsi
+++ b/arch/arm/boot/dts/omap443x-clocks.dtsi
@@ -8,7 +8,7 @@
* published by the Free Software Foundation.
*/
&prm_clocks {
- bandgap_fclk: bandgap_fclk {
+ bandgap_fclk: bandgap_fclk@1888 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
diff --git a/arch/arm/boot/dts/omap443x.dtsi b/arch/arm/boot/dts/omap443x.dtsi
index 0adfa1d1ef20..fc6a8610c24c 100644
--- a/arch/arm/boot/dts/omap443x.dtsi
+++ b/arch/arm/boot/dts/omap443x.dtsi
@@ -35,7 +35,7 @@
};
ocp {
- bandgap: bandgap {
+ bandgap: bandgap@4a002260 {
reg = <0x4a002260 0x4
0x4a00232C 0x4>;
compatible = "ti,omap4430-bandgap";
diff --git a/arch/arm/boot/dts/omap4460.dtsi b/arch/arm/boot/dts/omap4460.dtsi
index 5fa68f191af7..ef66e12e0a67 100644
--- a/arch/arm/boot/dts/omap4460.dtsi
+++ b/arch/arm/boot/dts/omap4460.dtsi
@@ -40,7 +40,7 @@
};
ocp {
- bandgap: bandgap {
+ bandgap: bandgap@4a002260 {
reg = <0x4a002260 0x4
0x4a00232C 0x4
0x4a002378 0x18>;
diff --git a/arch/arm/boot/dts/omap446x-clocks.dtsi b/arch/arm/boot/dts/omap446x-clocks.dtsi
index be033e9803e9..fb5929b742d4 100644
--- a/arch/arm/boot/dts/omap446x-clocks.dtsi
+++ b/arch/arm/boot/dts/omap446x-clocks.dtsi
@@ -8,7 +8,7 @@
* published by the Free Software Foundation.
*/
&prm_clocks {
- div_ts_ck: div_ts_ck {
+ div_ts_ck: div_ts_ck@1888 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&l4_wkup_clk_mux_ck>;
@@ -17,7 +17,7 @@
ti,dividers = <8>, <16>, <32>;
};
- bandgap_ts_fclk: bandgap_ts_fclk {
+ bandgap_ts_fclk: bandgap_ts_fclk@1888 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&div_ts_ck>;
diff --git a/arch/arm/boot/dts/omap44xx-clocks.dtsi b/arch/arm/boot/dts/omap44xx-clocks.dtsi
index f2c48f09824e..9573b37fbaa7 100644
--- a/arch/arm/boot/dts/omap44xx-clocks.dtsi
+++ b/arch/arm/boot/dts/omap44xx-clocks.dtsi
@@ -20,7 +20,7 @@
clock-frequency = <12000000>;
};
- pad_clks_ck: pad_clks_ck {
+ pad_clks_ck: pad_clks_ck@108 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&pad_clks_src_ck>;
@@ -46,7 +46,7 @@
clock-frequency = <12000000>;
};
- slimbus_clk: slimbus_clk {
+ slimbus_clk: slimbus_clk@108 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&slimbus_src_clk>;
@@ -132,21 +132,21 @@
clock-frequency = <60000000>;
};
- dpll_abe_ck: dpll_abe_ck {
+ dpll_abe_ck: dpll_abe_ck@1e0 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-m4xen-clock";
clocks = <&abe_dpll_refclk_mux_ck>, <&abe_dpll_bypass_clk_mux_ck>;
reg = <0x01e0>, <0x01e4>, <0x01ec>, <0x01e8>;
};
- dpll_abe_x2_ck: dpll_abe_x2_ck {
+ dpll_abe_x2_ck: dpll_abe_x2_ck@1f0 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-x2-clock";
clocks = <&dpll_abe_ck>;
reg = <0x01f0>;
};
- dpll_abe_m2x2_ck: dpll_abe_m2x2_ck {
+ dpll_abe_m2x2_ck: dpll_abe_m2x2_ck@1f0 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_abe_x2_ck>;
@@ -165,7 +165,7 @@
clock-div = <8>;
};
- abe_clk: abe_clk {
+ abe_clk: abe_clk@108 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_abe_m2x2_ck>;
@@ -174,7 +174,7 @@
ti,index-power-of-two;
};
- aess_fclk: aess_fclk {
+ aess_fclk: aess_fclk@528 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&abe_clk>;
@@ -183,7 +183,7 @@
reg = <0x0528>;
};
- dpll_abe_m3x2_ck: dpll_abe_m3x2_ck {
+ dpll_abe_m3x2_ck: dpll_abe_m3x2_ck@1f4 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_abe_x2_ck>;
@@ -194,7 +194,7 @@
ti,invert-autoidle-bit;
};
- core_hsd_byp_clk_mux_ck: core_hsd_byp_clk_mux_ck {
+ core_hsd_byp_clk_mux_ck: core_hsd_byp_clk_mux_ck@12c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin_ck>, <&dpll_abe_m3x2_ck>;
@@ -202,7 +202,7 @@
reg = <0x012c>;
};
- dpll_core_ck: dpll_core_ck {
+ dpll_core_ck: dpll_core_ck@120 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-core-clock";
clocks = <&sys_clkin_ck>, <&core_hsd_byp_clk_mux_ck>;
@@ -215,7 +215,7 @@
clocks = <&dpll_core_ck>;
};
- dpll_core_m6x2_ck: dpll_core_m6x2_ck {
+ dpll_core_m6x2_ck: dpll_core_m6x2_ck@140 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -226,7 +226,7 @@
ti,invert-autoidle-bit;
};
- dpll_core_m2_ck: dpll_core_m2_ck {
+ dpll_core_m2_ck: dpll_core_m2_ck@130 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_ck>;
@@ -245,7 +245,7 @@
clock-div = <2>;
};
- dpll_core_m5x2_ck: dpll_core_m5x2_ck {
+ dpll_core_m5x2_ck: dpll_core_m5x2_ck@13c {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -256,7 +256,7 @@
ti,invert-autoidle-bit;
};
- div_core_ck: div_core_ck {
+ div_core_ck: div_core_ck@100 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_m5x2_ck>;
@@ -264,7 +264,7 @@
ti,max-div = <2>;
};
- div_iva_hs_clk: div_iva_hs_clk {
+ div_iva_hs_clk: div_iva_hs_clk@1dc {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_m5x2_ck>;
@@ -273,7 +273,7 @@
ti,index-power-of-two;
};
- div_mpu_hs_clk: div_mpu_hs_clk {
+ div_mpu_hs_clk: div_mpu_hs_clk@19c {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_m5x2_ck>;
@@ -282,7 +282,7 @@
ti,index-power-of-two;
};
- dpll_core_m4x2_ck: dpll_core_m4x2_ck {
+ dpll_core_m4x2_ck: dpll_core_m4x2_ck@138 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -301,7 +301,7 @@
clock-div = <2>;
};
- dpll_abe_m2_ck: dpll_abe_m2_ck {
+ dpll_abe_m2_ck: dpll_abe_m2_ck@1f0 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_abe_ck>;
@@ -310,7 +310,7 @@
ti,index-starts-at-one;
};
- dpll_core_m3x2_gate_ck: dpll_core_m3x2_gate_ck {
+ dpll_core_m3x2_gate_ck: dpll_core_m3x2_gate_ck@134 {
#clock-cells = <0>;
compatible = "ti,composite-no-wait-gate-clock";
clocks = <&dpll_core_x2_ck>;
@@ -318,7 +318,7 @@
reg = <0x0134>;
};
- dpll_core_m3x2_div_ck: dpll_core_m3x2_div_ck {
+ dpll_core_m3x2_div_ck: dpll_core_m3x2_div_ck@134 {
#clock-cells = <0>;
compatible = "ti,composite-divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -333,7 +333,7 @@
clocks = <&dpll_core_m3x2_gate_ck>, <&dpll_core_m3x2_div_ck>;
};
- dpll_core_m7x2_ck: dpll_core_m7x2_ck {
+ dpll_core_m7x2_ck: dpll_core_m7x2_ck@144 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -344,7 +344,7 @@
ti,invert-autoidle-bit;
};
- iva_hsd_byp_clk_mux_ck: iva_hsd_byp_clk_mux_ck {
+ iva_hsd_byp_clk_mux_ck: iva_hsd_byp_clk_mux_ck@1ac {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin_ck>, <&div_iva_hs_clk>;
@@ -352,7 +352,7 @@
reg = <0x01ac>;
};
- dpll_iva_ck: dpll_iva_ck {
+ dpll_iva_ck: dpll_iva_ck@1a0 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
clocks = <&sys_clkin_ck>, <&iva_hsd_byp_clk_mux_ck>;
@@ -365,7 +365,7 @@
clocks = <&dpll_iva_ck>;
};
- dpll_iva_m4x2_ck: dpll_iva_m4x2_ck {
+ dpll_iva_m4x2_ck: dpll_iva_m4x2_ck@1b8 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_iva_x2_ck>;
@@ -376,7 +376,7 @@
ti,invert-autoidle-bit;
};
- dpll_iva_m5x2_ck: dpll_iva_m5x2_ck {
+ dpll_iva_m5x2_ck: dpll_iva_m5x2_ck@1bc {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_iva_x2_ck>;
@@ -387,14 +387,14 @@
ti,invert-autoidle-bit;
};
- dpll_mpu_ck: dpll_mpu_ck {
+ dpll_mpu_ck: dpll_mpu_ck@160 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
clocks = <&sys_clkin_ck>, <&div_mpu_hs_clk>;
reg = <0x0160>, <0x0164>, <0x016c>, <0x0168>;
};
- dpll_mpu_m2_ck: dpll_mpu_m2_ck {
+ dpll_mpu_m2_ck: dpll_mpu_m2_ck@170 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_mpu_ck>;
@@ -421,7 +421,7 @@
clock-div = <3>;
};
- l3_div_ck: l3_div_ck {
+ l3_div_ck: l3_div_ck@100 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&div_core_ck>;
@@ -430,7 +430,7 @@
reg = <0x0100>;
};
- l4_div_ck: l4_div_ck {
+ l4_div_ck: l4_div_ck@100 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&l3_div_ck>;
@@ -455,7 +455,7 @@
clock-div = <2>;
};
- ocp_abe_iclk: ocp_abe_iclk {
+ ocp_abe_iclk: ocp_abe_iclk@528 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&aess_fclk>;
@@ -472,7 +472,7 @@
clock-div = <4>;
};
- dmic_sync_mux_ck: dmic_sync_mux_ck {
+ dmic_sync_mux_ck: dmic_sync_mux_ck@538 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_24m_fclk>, <&syc_clk_div_ck>, <&func_24m_clk>;
@@ -480,7 +480,7 @@
reg = <0x0538>;
};
- func_dmic_abe_gfclk: func_dmic_abe_gfclk {
+ func_dmic_abe_gfclk: func_dmic_abe_gfclk@538 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&dmic_sync_mux_ck>, <&pad_clks_ck>, <&slimbus_clk>;
@@ -488,7 +488,7 @@
reg = <0x0538>;
};
- mcasp_sync_mux_ck: mcasp_sync_mux_ck {
+ mcasp_sync_mux_ck: mcasp_sync_mux_ck@540 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_24m_fclk>, <&syc_clk_div_ck>, <&func_24m_clk>;
@@ -496,7 +496,7 @@
reg = <0x0540>;
};
- func_mcasp_abe_gfclk: func_mcasp_abe_gfclk {
+ func_mcasp_abe_gfclk: func_mcasp_abe_gfclk@540 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&mcasp_sync_mux_ck>, <&pad_clks_ck>, <&slimbus_clk>;
@@ -504,7 +504,7 @@
reg = <0x0540>;
};
- mcbsp1_sync_mux_ck: mcbsp1_sync_mux_ck {
+ mcbsp1_sync_mux_ck: mcbsp1_sync_mux_ck@548 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_24m_fclk>, <&syc_clk_div_ck>, <&func_24m_clk>;
@@ -512,7 +512,7 @@
reg = <0x0548>;
};
- func_mcbsp1_gfclk: func_mcbsp1_gfclk {
+ func_mcbsp1_gfclk: func_mcbsp1_gfclk@548 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&mcbsp1_sync_mux_ck>, <&pad_clks_ck>, <&slimbus_clk>;
@@ -520,7 +520,7 @@
reg = <0x0548>;
};
- mcbsp2_sync_mux_ck: mcbsp2_sync_mux_ck {
+ mcbsp2_sync_mux_ck: mcbsp2_sync_mux_ck@550 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_24m_fclk>, <&syc_clk_div_ck>, <&func_24m_clk>;
@@ -528,7 +528,7 @@
reg = <0x0550>;
};
- func_mcbsp2_gfclk: func_mcbsp2_gfclk {
+ func_mcbsp2_gfclk: func_mcbsp2_gfclk@550 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&mcbsp2_sync_mux_ck>, <&pad_clks_ck>, <&slimbus_clk>;
@@ -536,7 +536,7 @@
reg = <0x0550>;
};
- mcbsp3_sync_mux_ck: mcbsp3_sync_mux_ck {
+ mcbsp3_sync_mux_ck: mcbsp3_sync_mux_ck@558 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_24m_fclk>, <&syc_clk_div_ck>, <&func_24m_clk>;
@@ -544,7 +544,7 @@
reg = <0x0558>;
};
- func_mcbsp3_gfclk: func_mcbsp3_gfclk {
+ func_mcbsp3_gfclk: func_mcbsp3_gfclk@558 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&mcbsp3_sync_mux_ck>, <&pad_clks_ck>, <&slimbus_clk>;
@@ -552,7 +552,7 @@
reg = <0x0558>;
};
- slimbus1_fclk_1: slimbus1_fclk_1 {
+ slimbus1_fclk_1: slimbus1_fclk_1@560 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&func_24m_clk>;
@@ -560,7 +560,7 @@
reg = <0x0560>;
};
- slimbus1_fclk_0: slimbus1_fclk_0 {
+ slimbus1_fclk_0: slimbus1_fclk_0@560 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&abe_24m_fclk>;
@@ -568,7 +568,7 @@
reg = <0x0560>;
};
- slimbus1_fclk_2: slimbus1_fclk_2 {
+ slimbus1_fclk_2: slimbus1_fclk_2@560 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&pad_clks_ck>;
@@ -576,7 +576,7 @@
reg = <0x0560>;
};
- slimbus1_slimbus_clk: slimbus1_slimbus_clk {
+ slimbus1_slimbus_clk: slimbus1_slimbus_clk@560 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&slimbus_clk>;
@@ -584,7 +584,7 @@
reg = <0x0560>;
};
- timer5_sync_mux: timer5_sync_mux {
+ timer5_sync_mux: timer5_sync_mux@568 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&syc_clk_div_ck>, <&sys_32k_ck>;
@@ -592,7 +592,7 @@
reg = <0x0568>;
};
- timer6_sync_mux: timer6_sync_mux {
+ timer6_sync_mux: timer6_sync_mux@570 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&syc_clk_div_ck>, <&sys_32k_ck>;
@@ -600,7 +600,7 @@
reg = <0x0570>;
};
- timer7_sync_mux: timer7_sync_mux {
+ timer7_sync_mux: timer7_sync_mux@578 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&syc_clk_div_ck>, <&sys_32k_ck>;
@@ -608,7 +608,7 @@
reg = <0x0578>;
};
- timer8_sync_mux: timer8_sync_mux {
+ timer8_sync_mux: timer8_sync_mux@580 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&syc_clk_div_ck>, <&sys_32k_ck>;
@@ -623,7 +623,7 @@
};
};
&prm_clocks {
- sys_clkin_ck: sys_clkin_ck {
+ sys_clkin_ck: sys_clkin_ck@110 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&virt_12000000_ck>, <&virt_13000000_ck>, <&virt_16800000_ck>, <&virt_19200000_ck>, <&virt_26000000_ck>, <&virt_27000000_ck>, <&virt_38400000_ck>;
@@ -631,7 +631,7 @@
ti,index-starts-at-one;
};
- abe_dpll_bypass_clk_mux_ck: abe_dpll_bypass_clk_mux_ck {
+ abe_dpll_bypass_clk_mux_ck: abe_dpll_bypass_clk_mux_ck@108 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin_ck>, <&sys_32k_ck>;
@@ -639,7 +639,7 @@
reg = <0x0108>;
};
- abe_dpll_refclk_mux_ck: abe_dpll_refclk_mux_ck {
+ abe_dpll_refclk_mux_ck: abe_dpll_refclk_mux_ck@10c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin_ck>, <&sys_32k_ck>;
@@ -654,14 +654,14 @@
clock-div = <1>;
};
- l4_wkup_clk_mux_ck: l4_wkup_clk_mux_ck {
+ l4_wkup_clk_mux_ck: l4_wkup_clk_mux_ck@108 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin_ck>, <&lp_clk_div_ck>;
reg = <0x0108>;
};
- syc_clk_div_ck: syc_clk_div_ck {
+ syc_clk_div_ck: syc_clk_div_ck@100 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&sys_clkin_ck>;
@@ -669,7 +669,7 @@
ti,max-div = <2>;
};
- gpio1_dbclk: gpio1_dbclk {
+ gpio1_dbclk: gpio1_dbclk@1838 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -677,7 +677,7 @@
reg = <0x1838>;
};
- dmt1_clk_mux: dmt1_clk_mux {
+ dmt1_clk_mux: dmt1_clk_mux@1840 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin_ck>, <&sys_32k_ck>;
@@ -685,7 +685,7 @@
reg = <0x1840>;
};
- usim_ck: usim_ck {
+ usim_ck: usim_ck@1858 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_m4x2_ck>;
@@ -694,7 +694,7 @@
ti,dividers = <14>, <18>;
};
- usim_fclk: usim_fclk {
+ usim_fclk: usim_fclk@1858 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&usim_ck>;
@@ -702,7 +702,7 @@
reg = <0x1858>;
};
- pmd_stm_clock_mux_ck: pmd_stm_clock_mux_ck {
+ pmd_stm_clock_mux_ck: pmd_stm_clock_mux_ck@1a20 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin_ck>, <&dpll_core_m6x2_ck>, <&tie_low_clock_ck>;
@@ -710,7 +710,7 @@
reg = <0x1a20>;
};
- pmd_trace_clk_mux_ck: pmd_trace_clk_mux_ck {
+ pmd_trace_clk_mux_ck: pmd_trace_clk_mux_ck@1a20 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin_ck>, <&dpll_core_m6x2_ck>, <&tie_low_clock_ck>;
@@ -718,7 +718,7 @@
reg = <0x1a20>;
};
- stm_clk_div_ck: stm_clk_div_ck {
+ stm_clk_div_ck: stm_clk_div_ck@1a20 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&pmd_stm_clock_mux_ck>;
@@ -728,7 +728,7 @@
ti,index-power-of-two;
};
- trace_clk_div_div_ck: trace_clk_div_div_ck {
+ trace_clk_div_div_ck: trace_clk_div_div_ck@1a20 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&pmd_trace_clk_mux_ck>;
@@ -752,7 +752,7 @@
};
&cm2_clocks {
- per_hsd_byp_clk_mux_ck: per_hsd_byp_clk_mux_ck {
+ per_hsd_byp_clk_mux_ck: per_hsd_byp_clk_mux_ck@14c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin_ck>, <&per_hs_clk_div_ck>;
@@ -760,14 +760,14 @@
reg = <0x014c>;
};
- dpll_per_ck: dpll_per_ck {
+ dpll_per_ck: dpll_per_ck@140 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
clocks = <&sys_clkin_ck>, <&per_hsd_byp_clk_mux_ck>;
reg = <0x0140>, <0x0144>, <0x014c>, <0x0148>;
};
- dpll_per_m2_ck: dpll_per_m2_ck {
+ dpll_per_m2_ck: dpll_per_m2_ck@150 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_ck>;
@@ -776,14 +776,14 @@
ti,index-starts-at-one;
};
- dpll_per_x2_ck: dpll_per_x2_ck {
+ dpll_per_x2_ck: dpll_per_x2_ck@150 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-x2-clock";
clocks = <&dpll_per_ck>;
reg = <0x0150>;
};
- dpll_per_m2x2_ck: dpll_per_m2x2_ck {
+ dpll_per_m2x2_ck: dpll_per_m2x2_ck@150 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_x2_ck>;
@@ -794,7 +794,7 @@
ti,invert-autoidle-bit;
};
- dpll_per_m3x2_gate_ck: dpll_per_m3x2_gate_ck {
+ dpll_per_m3x2_gate_ck: dpll_per_m3x2_gate_ck@154 {
#clock-cells = <0>;
compatible = "ti,composite-no-wait-gate-clock";
clocks = <&dpll_per_x2_ck>;
@@ -802,7 +802,7 @@
reg = <0x0154>;
};
- dpll_per_m3x2_div_ck: dpll_per_m3x2_div_ck {
+ dpll_per_m3x2_div_ck: dpll_per_m3x2_div_ck@154 {
#clock-cells = <0>;
compatible = "ti,composite-divider-clock";
clocks = <&dpll_per_x2_ck>;
@@ -817,7 +817,7 @@
clocks = <&dpll_per_m3x2_gate_ck>, <&dpll_per_m3x2_div_ck>;
};
- dpll_per_m4x2_ck: dpll_per_m4x2_ck {
+ dpll_per_m4x2_ck: dpll_per_m4x2_ck@158 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_x2_ck>;
@@ -828,7 +828,7 @@
ti,invert-autoidle-bit;
};
- dpll_per_m5x2_ck: dpll_per_m5x2_ck {
+ dpll_per_m5x2_ck: dpll_per_m5x2_ck@15c {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_x2_ck>;
@@ -839,7 +839,7 @@
ti,invert-autoidle-bit;
};
- dpll_per_m6x2_ck: dpll_per_m6x2_ck {
+ dpll_per_m6x2_ck: dpll_per_m6x2_ck@160 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_x2_ck>;
@@ -850,7 +850,7 @@
ti,invert-autoidle-bit;
};
- dpll_per_m7x2_ck: dpll_per_m7x2_ck {
+ dpll_per_m7x2_ck: dpll_per_m7x2_ck@164 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_x2_ck>;
@@ -861,14 +861,14 @@
ti,invert-autoidle-bit;
};
- dpll_usb_ck: dpll_usb_ck {
+ dpll_usb_ck: dpll_usb_ck@180 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-j-type-clock";
clocks = <&sys_clkin_ck>, <&usb_hs_clk_div_ck>;
reg = <0x0180>, <0x0184>, <0x018c>, <0x0188>;
};
- dpll_usb_clkdcoldo_ck: dpll_usb_clkdcoldo_ck {
+ dpll_usb_clkdcoldo_ck: dpll_usb_clkdcoldo_ck@1b4 {
#clock-cells = <0>;
compatible = "ti,fixed-factor-clock";
clocks = <&dpll_usb_ck>;
@@ -879,7 +879,7 @@
ti,invert-autoidle-bit;
};
- dpll_usb_m2_ck: dpll_usb_m2_ck {
+ dpll_usb_m2_ck: dpll_usb_m2_ck@190 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_usb_ck>;
@@ -890,7 +890,7 @@
ti,invert-autoidle-bit;
};
- ducati_clk_mux_ck: ducati_clk_mux_ck {
+ ducati_clk_mux_ck: ducati_clk_mux_ck@100 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&div_core_ck>, <&dpll_per_m6x2_ck>;
@@ -921,7 +921,7 @@
clock-div = <8>;
};
- func_48m_fclk: func_48m_fclk {
+ func_48m_fclk: func_48m_fclk@108 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_m2x2_ck>;
@@ -937,7 +937,7 @@
clock-div = <4>;
};
- func_64m_fclk: func_64m_fclk {
+ func_64m_fclk: func_64m_fclk@108 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_m4x2_ck>;
@@ -945,7 +945,7 @@
ti,dividers = <2>, <4>;
};
- func_96m_fclk: func_96m_fclk {
+ func_96m_fclk: func_96m_fclk@108 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_m2x2_ck>;
@@ -953,7 +953,7 @@
ti,dividers = <2>, <4>;
};
- init_60m_fclk: init_60m_fclk {
+ init_60m_fclk: init_60m_fclk@104 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_usb_m2_ck>;
@@ -961,7 +961,7 @@
ti,dividers = <1>, <8>;
};
- per_abe_nc_fclk: per_abe_nc_fclk {
+ per_abe_nc_fclk: per_abe_nc_fclk@108 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_abe_m2_ck>;
@@ -969,7 +969,7 @@
ti,max-div = <2>;
};
- aes1_fck: aes1_fck {
+ aes1_fck: aes1_fck@15a0 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l3_div_ck>;
@@ -977,7 +977,7 @@
reg = <0x15a0>;
};
- aes2_fck: aes2_fck {
+ aes2_fck: aes2_fck@15a8 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l3_div_ck>;
@@ -985,7 +985,7 @@
reg = <0x15a8>;
};
- dss_sys_clk: dss_sys_clk {
+ dss_sys_clk: dss_sys_clk@1120 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&syc_clk_div_ck>;
@@ -993,7 +993,7 @@
reg = <0x1120>;
};
- dss_tv_clk: dss_tv_clk {
+ dss_tv_clk: dss_tv_clk@1120 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&extalt_clkin_ck>;
@@ -1001,7 +1001,7 @@
reg = <0x1120>;
};
- dss_dss_clk: dss_dss_clk {
+ dss_dss_clk: dss_dss_clk@1120 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll_per_m5x2_ck>;
@@ -1010,7 +1010,7 @@
ti,set-rate-parent;
};
- dss_48mhz_clk: dss_48mhz_clk {
+ dss_48mhz_clk: dss_48mhz_clk@1120 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&func_48mc_fclk>;
@@ -1018,7 +1018,7 @@
reg = <0x1120>;
};
- fdif_fck: fdif_fck {
+ fdif_fck: fdif_fck@1028 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_m4x2_ck>;
@@ -1028,7 +1028,7 @@
ti,index-power-of-two;
};
- gpio2_dbclk: gpio2_dbclk {
+ gpio2_dbclk: gpio2_dbclk@1460 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1036,7 +1036,7 @@
reg = <0x1460>;
};
- gpio3_dbclk: gpio3_dbclk {
+ gpio3_dbclk: gpio3_dbclk@1468 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1044,7 +1044,7 @@
reg = <0x1468>;
};
- gpio4_dbclk: gpio4_dbclk {
+ gpio4_dbclk: gpio4_dbclk@1470 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1052,7 +1052,7 @@
reg = <0x1470>;
};
- gpio5_dbclk: gpio5_dbclk {
+ gpio5_dbclk: gpio5_dbclk@1478 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1060,7 +1060,7 @@
reg = <0x1478>;
};
- gpio6_dbclk: gpio6_dbclk {
+ gpio6_dbclk: gpio6_dbclk@1480 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1068,7 +1068,7 @@
reg = <0x1480>;
};
- sgx_clk_mux: sgx_clk_mux {
+ sgx_clk_mux: sgx_clk_mux@1220 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&dpll_core_m7x2_ck>, <&dpll_per_m7x2_ck>;
@@ -1076,7 +1076,7 @@
reg = <0x1220>;
};
- hsi_fck: hsi_fck {
+ hsi_fck: hsi_fck@1338 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_m2x2_ck>;
@@ -1086,7 +1086,7 @@
ti,index-power-of-two;
};
- iss_ctrlclk: iss_ctrlclk {
+ iss_ctrlclk: iss_ctrlclk@1020 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&func_96m_fclk>;
@@ -1094,7 +1094,7 @@
reg = <0x1020>;
};
- mcbsp4_sync_mux_ck: mcbsp4_sync_mux_ck {
+ mcbsp4_sync_mux_ck: mcbsp4_sync_mux_ck@14e0 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&func_96m_fclk>, <&per_abe_nc_fclk>;
@@ -1102,7 +1102,7 @@
reg = <0x14e0>;
};
- per_mcbsp4_gfclk: per_mcbsp4_gfclk {
+ per_mcbsp4_gfclk: per_mcbsp4_gfclk@14e0 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&mcbsp4_sync_mux_ck>, <&pad_clks_ck>;
@@ -1110,7 +1110,7 @@
reg = <0x14e0>;
};
- hsmmc1_fclk: hsmmc1_fclk {
+ hsmmc1_fclk: hsmmc1_fclk@1328 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&func_64m_fclk>, <&func_96m_fclk>;
@@ -1118,7 +1118,7 @@
reg = <0x1328>;
};
- hsmmc2_fclk: hsmmc2_fclk {
+ hsmmc2_fclk: hsmmc2_fclk@1330 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&func_64m_fclk>, <&func_96m_fclk>;
@@ -1126,7 +1126,7 @@
reg = <0x1330>;
};
- ocp2scp_usb_phy_phy_48m: ocp2scp_usb_phy_phy_48m {
+ ocp2scp_usb_phy_phy_48m: ocp2scp_usb_phy_phy_48m@13e0 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&func_48m_fclk>;
@@ -1134,7 +1134,7 @@
reg = <0x13e0>;
};
- sha2md5_fck: sha2md5_fck {
+ sha2md5_fck: sha2md5_fck@15c8 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l3_div_ck>;
@@ -1142,7 +1142,7 @@
reg = <0x15c8>;
};
- slimbus2_fclk_1: slimbus2_fclk_1 {
+ slimbus2_fclk_1: slimbus2_fclk_1@1538 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&per_abe_24m_fclk>;
@@ -1150,7 +1150,7 @@
reg = <0x1538>;
};
- slimbus2_fclk_0: slimbus2_fclk_0 {
+ slimbus2_fclk_0: slimbus2_fclk_0@1538 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&func_24mc_fclk>;
@@ -1158,7 +1158,7 @@
reg = <0x1538>;
};
- slimbus2_slimbus_clk: slimbus2_slimbus_clk {
+ slimbus2_slimbus_clk: slimbus2_slimbus_clk@1538 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&pad_slimbus_core_clks_ck>;
@@ -1166,7 +1166,7 @@
reg = <0x1538>;
};
- smartreflex_core_fck: smartreflex_core_fck {
+ smartreflex_core_fck: smartreflex_core_fck@638 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l4_wkup_clk_mux_ck>;
@@ -1174,7 +1174,7 @@
reg = <0x0638>;
};
- smartreflex_iva_fck: smartreflex_iva_fck {
+ smartreflex_iva_fck: smartreflex_iva_fck@630 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l4_wkup_clk_mux_ck>;
@@ -1182,7 +1182,7 @@
reg = <0x0630>;
};
- smartreflex_mpu_fck: smartreflex_mpu_fck {
+ smartreflex_mpu_fck: smartreflex_mpu_fck@628 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l4_wkup_clk_mux_ck>;
@@ -1190,7 +1190,7 @@
reg = <0x0628>;
};
- cm2_dm10_mux: cm2_dm10_mux {
+ cm2_dm10_mux: cm2_dm10_mux@1428 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin_ck>, <&sys_32k_ck>;
@@ -1198,7 +1198,7 @@
reg = <0x1428>;
};
- cm2_dm11_mux: cm2_dm11_mux {
+ cm2_dm11_mux: cm2_dm11_mux@1430 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin_ck>, <&sys_32k_ck>;
@@ -1206,7 +1206,7 @@
reg = <0x1430>;
};
- cm2_dm2_mux: cm2_dm2_mux {
+ cm2_dm2_mux: cm2_dm2_mux@1438 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin_ck>, <&sys_32k_ck>;
@@ -1214,7 +1214,7 @@
reg = <0x1438>;
};
- cm2_dm3_mux: cm2_dm3_mux {
+ cm2_dm3_mux: cm2_dm3_mux@1440 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin_ck>, <&sys_32k_ck>;
@@ -1222,7 +1222,7 @@
reg = <0x1440>;
};
- cm2_dm4_mux: cm2_dm4_mux {
+ cm2_dm4_mux: cm2_dm4_mux@1448 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin_ck>, <&sys_32k_ck>;
@@ -1230,7 +1230,7 @@
reg = <0x1448>;
};
- cm2_dm9_mux: cm2_dm9_mux {
+ cm2_dm9_mux: cm2_dm9_mux@1450 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin_ck>, <&sys_32k_ck>;
@@ -1238,7 +1238,7 @@
reg = <0x1450>;
};
- usb_host_fs_fck: usb_host_fs_fck {
+ usb_host_fs_fck: usb_host_fs_fck@13d0 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&func_48mc_fclk>;
@@ -1246,7 +1246,7 @@
reg = <0x13d0>;
};
- utmi_p1_gfclk: utmi_p1_gfclk {
+ utmi_p1_gfclk: utmi_p1_gfclk@1358 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&init_60m_fclk>, <&xclk60mhsp1_ck>;
@@ -1254,7 +1254,7 @@
reg = <0x1358>;
};
- usb_host_hs_utmi_p1_clk: usb_host_hs_utmi_p1_clk {
+ usb_host_hs_utmi_p1_clk: usb_host_hs_utmi_p1_clk@1358 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&utmi_p1_gfclk>;
@@ -1262,7 +1262,7 @@
reg = <0x1358>;
};
- utmi_p2_gfclk: utmi_p2_gfclk {
+ utmi_p2_gfclk: utmi_p2_gfclk@1358 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&init_60m_fclk>, <&xclk60mhsp2_ck>;
@@ -1270,7 +1270,7 @@
reg = <0x1358>;
};
- usb_host_hs_utmi_p2_clk: usb_host_hs_utmi_p2_clk {
+ usb_host_hs_utmi_p2_clk: usb_host_hs_utmi_p2_clk@1358 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&utmi_p2_gfclk>;
@@ -1278,7 +1278,7 @@
reg = <0x1358>;
};
- usb_host_hs_utmi_p3_clk: usb_host_hs_utmi_p3_clk {
+ usb_host_hs_utmi_p3_clk: usb_host_hs_utmi_p3_clk@1358 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&init_60m_fclk>;
@@ -1286,7 +1286,7 @@
reg = <0x1358>;
};
- usb_host_hs_hsic480m_p1_clk: usb_host_hs_hsic480m_p1_clk {
+ usb_host_hs_hsic480m_p1_clk: usb_host_hs_hsic480m_p1_clk@1358 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll_usb_m2_ck>;
@@ -1294,7 +1294,7 @@
reg = <0x1358>;
};
- usb_host_hs_hsic60m_p1_clk: usb_host_hs_hsic60m_p1_clk {
+ usb_host_hs_hsic60m_p1_clk: usb_host_hs_hsic60m_p1_clk@1358 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&init_60m_fclk>;
@@ -1302,7 +1302,7 @@
reg = <0x1358>;
};
- usb_host_hs_hsic60m_p2_clk: usb_host_hs_hsic60m_p2_clk {
+ usb_host_hs_hsic60m_p2_clk: usb_host_hs_hsic60m_p2_clk@1358 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&init_60m_fclk>;
@@ -1310,7 +1310,7 @@
reg = <0x1358>;
};
- usb_host_hs_hsic480m_p2_clk: usb_host_hs_hsic480m_p2_clk {
+ usb_host_hs_hsic480m_p2_clk: usb_host_hs_hsic480m_p2_clk@1358 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll_usb_m2_ck>;
@@ -1318,7 +1318,7 @@
reg = <0x1358>;
};
- usb_host_hs_func48mclk: usb_host_hs_func48mclk {
+ usb_host_hs_func48mclk: usb_host_hs_func48mclk@1358 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&func_48mc_fclk>;
@@ -1326,7 +1326,7 @@
reg = <0x1358>;
};
- usb_host_hs_fck: usb_host_hs_fck {
+ usb_host_hs_fck: usb_host_hs_fck@1358 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&init_60m_fclk>;
@@ -1334,7 +1334,7 @@
reg = <0x1358>;
};
- otg_60m_gfclk: otg_60m_gfclk {
+ otg_60m_gfclk: otg_60m_gfclk@1360 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&utmi_phy_clkout_ck>, <&xclk60motg_ck>;
@@ -1342,7 +1342,7 @@
reg = <0x1360>;
};
- usb_otg_hs_xclk: usb_otg_hs_xclk {
+ usb_otg_hs_xclk: usb_otg_hs_xclk@1360 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&otg_60m_gfclk>;
@@ -1350,7 +1350,7 @@
reg = <0x1360>;
};
- usb_otg_hs_ick: usb_otg_hs_ick {
+ usb_otg_hs_ick: usb_otg_hs_ick@1360 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l3_div_ck>;
@@ -1358,7 +1358,7 @@
reg = <0x1360>;
};
- usb_phy_cm_clk32k: usb_phy_cm_clk32k {
+ usb_phy_cm_clk32k: usb_phy_cm_clk32k@640 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1366,7 +1366,7 @@
reg = <0x0640>;
};
- usb_tll_hs_usb_ch2_clk: usb_tll_hs_usb_ch2_clk {
+ usb_tll_hs_usb_ch2_clk: usb_tll_hs_usb_ch2_clk@1368 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&init_60m_fclk>;
@@ -1374,7 +1374,7 @@
reg = <0x1368>;
};
- usb_tll_hs_usb_ch0_clk: usb_tll_hs_usb_ch0_clk {
+ usb_tll_hs_usb_ch0_clk: usb_tll_hs_usb_ch0_clk@1368 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&init_60m_fclk>;
@@ -1382,7 +1382,7 @@
reg = <0x1368>;
};
- usb_tll_hs_usb_ch1_clk: usb_tll_hs_usb_ch1_clk {
+ usb_tll_hs_usb_ch1_clk: usb_tll_hs_usb_ch1_clk@1368 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&init_60m_fclk>;
@@ -1390,7 +1390,7 @@
reg = <0x1368>;
};
- usb_tll_hs_ick: usb_tll_hs_ick {
+ usb_tll_hs_ick: usb_tll_hs_ick@1368 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l4_div_ck>;
@@ -1407,7 +1407,7 @@
};
&scrm_clocks {
- auxclk0_src_gate_ck: auxclk0_src_gate_ck {
+ auxclk0_src_gate_ck: auxclk0_src_gate_ck@310 {
#clock-cells = <0>;
compatible = "ti,composite-no-wait-gate-clock";
clocks = <&dpll_core_m3x2_ck>;
@@ -1415,7 +1415,7 @@
reg = <0x0310>;
};
- auxclk0_src_mux_ck: auxclk0_src_mux_ck {
+ auxclk0_src_mux_ck: auxclk0_src_mux_ck@310 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&sys_clkin_ck>, <&dpll_core_m3x2_ck>, <&dpll_per_m3x2_ck>;
@@ -1429,7 +1429,7 @@
clocks = <&auxclk0_src_gate_ck>, <&auxclk0_src_mux_ck>;
};
- auxclk0_ck: auxclk0_ck {
+ auxclk0_ck: auxclk0_ck@310 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&auxclk0_src_ck>;
@@ -1438,7 +1438,7 @@
reg = <0x0310>;
};
- auxclk1_src_gate_ck: auxclk1_src_gate_ck {
+ auxclk1_src_gate_ck: auxclk1_src_gate_ck@314 {
#clock-cells = <0>;
compatible = "ti,composite-no-wait-gate-clock";
clocks = <&dpll_core_m3x2_ck>;
@@ -1446,7 +1446,7 @@
reg = <0x0314>;
};
- auxclk1_src_mux_ck: auxclk1_src_mux_ck {
+ auxclk1_src_mux_ck: auxclk1_src_mux_ck@314 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&sys_clkin_ck>, <&dpll_core_m3x2_ck>, <&dpll_per_m3x2_ck>;
@@ -1460,7 +1460,7 @@
clocks = <&auxclk1_src_gate_ck>, <&auxclk1_src_mux_ck>;
};
- auxclk1_ck: auxclk1_ck {
+ auxclk1_ck: auxclk1_ck@314 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&auxclk1_src_ck>;
@@ -1469,7 +1469,7 @@
reg = <0x0314>;
};
- auxclk2_src_gate_ck: auxclk2_src_gate_ck {
+ auxclk2_src_gate_ck: auxclk2_src_gate_ck@318 {
#clock-cells = <0>;
compatible = "ti,composite-no-wait-gate-clock";
clocks = <&dpll_core_m3x2_ck>;
@@ -1477,7 +1477,7 @@
reg = <0x0318>;
};
- auxclk2_src_mux_ck: auxclk2_src_mux_ck {
+ auxclk2_src_mux_ck: auxclk2_src_mux_ck@318 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&sys_clkin_ck>, <&dpll_core_m3x2_ck>, <&dpll_per_m3x2_ck>;
@@ -1491,7 +1491,7 @@
clocks = <&auxclk2_src_gate_ck>, <&auxclk2_src_mux_ck>;
};
- auxclk2_ck: auxclk2_ck {
+ auxclk2_ck: auxclk2_ck@318 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&auxclk2_src_ck>;
@@ -1500,7 +1500,7 @@
reg = <0x0318>;
};
- auxclk3_src_gate_ck: auxclk3_src_gate_ck {
+ auxclk3_src_gate_ck: auxclk3_src_gate_ck@31c {
#clock-cells = <0>;
compatible = "ti,composite-no-wait-gate-clock";
clocks = <&dpll_core_m3x2_ck>;
@@ -1508,7 +1508,7 @@
reg = <0x031c>;
};
- auxclk3_src_mux_ck: auxclk3_src_mux_ck {
+ auxclk3_src_mux_ck: auxclk3_src_mux_ck@31c {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&sys_clkin_ck>, <&dpll_core_m3x2_ck>, <&dpll_per_m3x2_ck>;
@@ -1522,7 +1522,7 @@
clocks = <&auxclk3_src_gate_ck>, <&auxclk3_src_mux_ck>;
};
- auxclk3_ck: auxclk3_ck {
+ auxclk3_ck: auxclk3_ck@31c {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&auxclk3_src_ck>;
@@ -1531,7 +1531,7 @@
reg = <0x031c>;
};
- auxclk4_src_gate_ck: auxclk4_src_gate_ck {
+ auxclk4_src_gate_ck: auxclk4_src_gate_ck@320 {
#clock-cells = <0>;
compatible = "ti,composite-no-wait-gate-clock";
clocks = <&dpll_core_m3x2_ck>;
@@ -1539,7 +1539,7 @@
reg = <0x0320>;
};
- auxclk4_src_mux_ck: auxclk4_src_mux_ck {
+ auxclk4_src_mux_ck: auxclk4_src_mux_ck@320 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&sys_clkin_ck>, <&dpll_core_m3x2_ck>, <&dpll_per_m3x2_ck>;
@@ -1553,7 +1553,7 @@
clocks = <&auxclk4_src_gate_ck>, <&auxclk4_src_mux_ck>;
};
- auxclk4_ck: auxclk4_ck {
+ auxclk4_ck: auxclk4_ck@320 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&auxclk4_src_ck>;
@@ -1562,7 +1562,7 @@
reg = <0x0320>;
};
- auxclk5_src_gate_ck: auxclk5_src_gate_ck {
+ auxclk5_src_gate_ck: auxclk5_src_gate_ck@324 {
#clock-cells = <0>;
compatible = "ti,composite-no-wait-gate-clock";
clocks = <&dpll_core_m3x2_ck>;
@@ -1570,7 +1570,7 @@
reg = <0x0324>;
};
- auxclk5_src_mux_ck: auxclk5_src_mux_ck {
+ auxclk5_src_mux_ck: auxclk5_src_mux_ck@324 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&sys_clkin_ck>, <&dpll_core_m3x2_ck>, <&dpll_per_m3x2_ck>;
@@ -1584,7 +1584,7 @@
clocks = <&auxclk5_src_gate_ck>, <&auxclk5_src_mux_ck>;
};
- auxclk5_ck: auxclk5_ck {
+ auxclk5_ck: auxclk5_ck@324 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&auxclk5_src_ck>;
@@ -1593,7 +1593,7 @@
reg = <0x0324>;
};
- auxclkreq0_ck: auxclkreq0_ck {
+ auxclkreq0_ck: auxclkreq0_ck@210 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&auxclk0_ck>, <&auxclk1_ck>, <&auxclk2_ck>, <&auxclk3_ck>, <&auxclk4_ck>, <&auxclk5_ck>;
@@ -1601,7 +1601,7 @@
reg = <0x0210>;
};
- auxclkreq1_ck: auxclkreq1_ck {
+ auxclkreq1_ck: auxclkreq1_ck@214 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&auxclk0_ck>, <&auxclk1_ck>, <&auxclk2_ck>, <&auxclk3_ck>, <&auxclk4_ck>, <&auxclk5_ck>;
@@ -1609,7 +1609,7 @@
reg = <0x0214>;
};
- auxclkreq2_ck: auxclkreq2_ck {
+ auxclkreq2_ck: auxclkreq2_ck@218 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&auxclk0_ck>, <&auxclk1_ck>, <&auxclk2_ck>, <&auxclk3_ck>, <&auxclk4_ck>, <&auxclk5_ck>;
@@ -1617,7 +1617,7 @@
reg = <0x0218>;
};
- auxclkreq3_ck: auxclkreq3_ck {
+ auxclkreq3_ck: auxclkreq3_ck@21c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&auxclk0_ck>, <&auxclk1_ck>, <&auxclk2_ck>, <&auxclk3_ck>, <&auxclk4_ck>, <&auxclk5_ck>;
@@ -1625,7 +1625,7 @@
reg = <0x021c>;
};
- auxclkreq4_ck: auxclkreq4_ck {
+ auxclkreq4_ck: auxclkreq4_ck@220 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&auxclk0_ck>, <&auxclk1_ck>, <&auxclk2_ck>, <&auxclk3_ck>, <&auxclk4_ck>, <&auxclk5_ck>;
@@ -1633,7 +1633,7 @@
reg = <0x0220>;
};
- auxclkreq5_ck: auxclkreq5_ck {
+ auxclkreq5_ck: auxclkreq5_ck@224 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&auxclk0_ck>, <&auxclk1_ck>, <&auxclk2_ck>, <&auxclk3_ck>, <&auxclk4_ck>, <&auxclk5_ck>;
diff --git a/arch/arm/boot/dts/omap5-board-common.dtsi b/arch/arm/boot/dts/omap5-board-common.dtsi
index 914bf4c47404..dc759a3028b7 100644
--- a/arch/arm/boot/dts/omap5-board-common.dtsi
+++ b/arch/arm/boot/dts/omap5-board-common.dtsi
@@ -391,11 +391,21 @@
ti,backup-battery-charge-high-current;
};
+ gpadc {
+ compatible = "ti,palmas-gpadc";
+ interrupts = <18 0
+ 16 0
+ 17 0>;
+ #io-channel-cells = <1>;
+ ti,channel0-current-microamp = <5>;
+ ti,channel3-current-microamp = <10>;
+ };
+
palmas_pmic {
compatible = "ti,palmas-pmic";
interrupt-parent = <&palmas>;
interrupts = <14 IRQ_TYPE_NONE>;
- interrupt-name = "short-irq";
+ interrupt-names = "short-irq";
ti,ldo6-vibrator;
diff --git a/arch/arm/boot/dts/omap5-cm-t54.dts b/arch/arm/boot/dts/omap5-cm-t54.dts
index 4d87d9c6c86d..93fdfa96776e 100644
--- a/arch/arm/boot/dts/omap5-cm-t54.dts
+++ b/arch/arm/boot/dts/omap5-cm-t54.dts
@@ -434,7 +434,7 @@
compatible = "ti,palmas-pmic";
interrupt-parent = <&palmas>;
interrupts = <14 IRQ_TYPE_NONE>;
- interrupt-name = "short-irq";
+ interrupt-names = "short-irq";
ti,ldo6-vibrator;
diff --git a/arch/arm/boot/dts/omap5.dtsi b/arch/arm/boot/dts/omap5.dtsi
index 120b6b80cd39..84c10195e79b 100644
--- a/arch/arm/boot/dts/omap5.dtsi
+++ b/arch/arm/boot/dts/omap5.dtsi
@@ -187,7 +187,7 @@
#size-cells = <1>;
ranges = <0 0x5a0 0xec>;
- pbias_regulator: pbias_regulator {
+ pbias_regulator: pbias_regulator@60 {
compatible = "ti,pbias-omap5", "ti,pbias-omap";
reg = <0x60 0x4>;
syscon = <&omap5_padconf_global>;
@@ -398,6 +398,10 @@
ti,hwmods = "gpmc";
clocks = <&l3_iclk_div>;
clock-names = "fck";
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ gpio-controller;
+ #gpio-cells = <2>;
};
i2c1: i2c@48070000 {
diff --git a/arch/arm/boot/dts/omap54xx-clocks.dtsi b/arch/arm/boot/dts/omap54xx-clocks.dtsi
index 83b425fb3ac2..4899c2359d0a 100644
--- a/arch/arm/boot/dts/omap54xx-clocks.dtsi
+++ b/arch/arm/boot/dts/omap54xx-clocks.dtsi
@@ -14,7 +14,7 @@
clock-frequency = <12000000>;
};
- pad_clks_ck: pad_clks_ck {
+ pad_clks_ck: pad_clks_ck@108 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&pad_clks_src_ck>;
@@ -34,7 +34,7 @@
clock-frequency = <12000000>;
};
- slimbus_clk: slimbus_clk {
+ slimbus_clk: slimbus_clk@108 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&slimbus_src_clk>;
@@ -102,7 +102,7 @@
clock-frequency = <60000000>;
};
- dpll_abe_ck: dpll_abe_ck {
+ dpll_abe_ck: dpll_abe_ck@1e0 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-m4xen-clock";
clocks = <&abe_dpll_clk_mux>, <&abe_dpll_bypass_clk_mux>;
@@ -115,7 +115,7 @@
clocks = <&dpll_abe_ck>;
};
- dpll_abe_m2x2_ck: dpll_abe_m2x2_ck {
+ dpll_abe_m2x2_ck: dpll_abe_m2x2_ck@1f0 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_abe_x2_ck>;
@@ -132,7 +132,7 @@
clock-div = <8>;
};
- abe_clk: abe_clk {
+ abe_clk: abe_clk@108 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_abe_m2x2_ck>;
@@ -141,7 +141,7 @@
ti,index-power-of-two;
};
- abe_iclk: abe_iclk {
+ abe_iclk: abe_iclk@528 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&aess_fclk>;
@@ -158,7 +158,7 @@
clock-div = <16>;
};
- dpll_abe_m3x2_ck: dpll_abe_m3x2_ck {
+ dpll_abe_m3x2_ck: dpll_abe_m3x2_ck@1f4 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_abe_x2_ck>;
@@ -167,7 +167,7 @@
ti,index-starts-at-one;
};
- dpll_core_byp_mux: dpll_core_byp_mux {
+ dpll_core_byp_mux: dpll_core_byp_mux@12c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin>, <&dpll_abe_m3x2_ck>;
@@ -175,7 +175,7 @@
reg = <0x012c>;
};
- dpll_core_ck: dpll_core_ck {
+ dpll_core_ck: dpll_core_ck@120 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-core-clock";
clocks = <&sys_clkin>, <&dpll_core_byp_mux>;
@@ -188,7 +188,7 @@
clocks = <&dpll_core_ck>;
};
- dpll_core_h21x2_ck: dpll_core_h21x2_ck {
+ dpll_core_h21x2_ck: dpll_core_h21x2_ck@150 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -213,7 +213,7 @@
clock-div = <2>;
};
- dpll_core_h11x2_ck: dpll_core_h11x2_ck {
+ dpll_core_h11x2_ck: dpll_core_h11x2_ck@138 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -222,7 +222,7 @@
ti,index-starts-at-one;
};
- dpll_core_h12x2_ck: dpll_core_h12x2_ck {
+ dpll_core_h12x2_ck: dpll_core_h12x2_ck@13c {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -231,7 +231,7 @@
ti,index-starts-at-one;
};
- dpll_core_h13x2_ck: dpll_core_h13x2_ck {
+ dpll_core_h13x2_ck: dpll_core_h13x2_ck@140 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -240,7 +240,7 @@
ti,index-starts-at-one;
};
- dpll_core_h14x2_ck: dpll_core_h14x2_ck {
+ dpll_core_h14x2_ck: dpll_core_h14x2_ck@144 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -249,7 +249,7 @@
ti,index-starts-at-one;
};
- dpll_core_h22x2_ck: dpll_core_h22x2_ck {
+ dpll_core_h22x2_ck: dpll_core_h22x2_ck@154 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -258,7 +258,7 @@
ti,index-starts-at-one;
};
- dpll_core_h23x2_ck: dpll_core_h23x2_ck {
+ dpll_core_h23x2_ck: dpll_core_h23x2_ck@158 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -267,7 +267,7 @@
ti,index-starts-at-one;
};
- dpll_core_h24x2_ck: dpll_core_h24x2_ck {
+ dpll_core_h24x2_ck: dpll_core_h24x2_ck@15c {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -276,7 +276,7 @@
ti,index-starts-at-one;
};
- dpll_core_m2_ck: dpll_core_m2_ck {
+ dpll_core_m2_ck: dpll_core_m2_ck@130 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_ck>;
@@ -285,7 +285,7 @@
ti,index-starts-at-one;
};
- dpll_core_m3x2_ck: dpll_core_m3x2_ck {
+ dpll_core_m3x2_ck: dpll_core_m3x2_ck@134 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_core_x2_ck>;
@@ -302,7 +302,7 @@
clock-div = <1>;
};
- dpll_iva_byp_mux: dpll_iva_byp_mux {
+ dpll_iva_byp_mux: dpll_iva_byp_mux@1ac {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin>, <&iva_dpll_hs_clk_div>;
@@ -310,7 +310,7 @@
reg = <0x01ac>;
};
- dpll_iva_ck: dpll_iva_ck {
+ dpll_iva_ck: dpll_iva_ck@1a0 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
clocks = <&sys_clkin>, <&dpll_iva_byp_mux>;
@@ -323,7 +323,7 @@
clocks = <&dpll_iva_ck>;
};
- dpll_iva_h11x2_ck: dpll_iva_h11x2_ck {
+ dpll_iva_h11x2_ck: dpll_iva_h11x2_ck@1b8 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_iva_x2_ck>;
@@ -332,7 +332,7 @@
ti,index-starts-at-one;
};
- dpll_iva_h12x2_ck: dpll_iva_h12x2_ck {
+ dpll_iva_h12x2_ck: dpll_iva_h12x2_ck@1bc {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_iva_x2_ck>;
@@ -349,14 +349,14 @@
clock-div = <1>;
};
- dpll_mpu_ck: dpll_mpu_ck {
+ dpll_mpu_ck: dpll_mpu_ck@160 {
#clock-cells = <0>;
compatible = "ti,omap5-mpu-dpll-clock";
clocks = <&sys_clkin>, <&mpu_dpll_hs_clk_div>;
reg = <0x0160>, <0x0164>, <0x016c>, <0x0168>;
};
- dpll_mpu_m2_ck: dpll_mpu_m2_ck {
+ dpll_mpu_m2_ck: dpll_mpu_m2_ck@170 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_mpu_ck>;
@@ -381,7 +381,7 @@
clock-div = <3>;
};
- l3_iclk_div: l3_iclk_div {
+ l3_iclk_div: l3_iclk_div@100 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
ti,max-div = <2>;
@@ -399,7 +399,7 @@
clock-div = <1>;
};
- l4_root_clk_div: l4_root_clk_div {
+ l4_root_clk_div: l4_root_clk_div@100 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
ti,max-div = <2>;
@@ -409,7 +409,7 @@
ti,index-power-of-two;
};
- slimbus1_slimbus_clk: slimbus1_slimbus_clk {
+ slimbus1_slimbus_clk: slimbus1_slimbus_clk@560 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&slimbus_clk>;
@@ -417,7 +417,7 @@
reg = <0x0560>;
};
- aess_fclk: aess_fclk {
+ aess_fclk: aess_fclk@528 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&abe_clk>;
@@ -426,7 +426,7 @@
reg = <0x0528>;
};
- dmic_sync_mux_ck: dmic_sync_mux_ck {
+ dmic_sync_mux_ck: dmic_sync_mux_ck@538 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_24m_fclk>, <&dss_syc_gfclk_div>, <&func_24m_clk>;
@@ -434,7 +434,7 @@
reg = <0x0538>;
};
- dmic_gfclk: dmic_gfclk {
+ dmic_gfclk: dmic_gfclk@538 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&dmic_sync_mux_ck>, <&pad_clks_ck>, <&slimbus_clk>;
@@ -442,7 +442,7 @@
reg = <0x0538>;
};
- mcasp_sync_mux_ck: mcasp_sync_mux_ck {
+ mcasp_sync_mux_ck: mcasp_sync_mux_ck@540 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_24m_fclk>, <&dss_syc_gfclk_div>, <&func_24m_clk>;
@@ -450,7 +450,7 @@
reg = <0x0540>;
};
- mcasp_gfclk: mcasp_gfclk {
+ mcasp_gfclk: mcasp_gfclk@540 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&mcasp_sync_mux_ck>, <&pad_clks_ck>, <&slimbus_clk>;
@@ -458,7 +458,7 @@
reg = <0x0540>;
};
- mcbsp1_sync_mux_ck: mcbsp1_sync_mux_ck {
+ mcbsp1_sync_mux_ck: mcbsp1_sync_mux_ck@548 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_24m_fclk>, <&dss_syc_gfclk_div>, <&func_24m_clk>;
@@ -466,7 +466,7 @@
reg = <0x0548>;
};
- mcbsp1_gfclk: mcbsp1_gfclk {
+ mcbsp1_gfclk: mcbsp1_gfclk@548 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&mcbsp1_sync_mux_ck>, <&pad_clks_ck>, <&slimbus_clk>;
@@ -474,7 +474,7 @@
reg = <0x0548>;
};
- mcbsp2_sync_mux_ck: mcbsp2_sync_mux_ck {
+ mcbsp2_sync_mux_ck: mcbsp2_sync_mux_ck@550 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_24m_fclk>, <&dss_syc_gfclk_div>, <&func_24m_clk>;
@@ -482,7 +482,7 @@
reg = <0x0550>;
};
- mcbsp2_gfclk: mcbsp2_gfclk {
+ mcbsp2_gfclk: mcbsp2_gfclk@550 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&mcbsp2_sync_mux_ck>, <&pad_clks_ck>, <&slimbus_clk>;
@@ -490,7 +490,7 @@
reg = <0x0550>;
};
- mcbsp3_sync_mux_ck: mcbsp3_sync_mux_ck {
+ mcbsp3_sync_mux_ck: mcbsp3_sync_mux_ck@558 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&abe_24m_fclk>, <&dss_syc_gfclk_div>, <&func_24m_clk>;
@@ -498,7 +498,7 @@
reg = <0x0558>;
};
- mcbsp3_gfclk: mcbsp3_gfclk {
+ mcbsp3_gfclk: mcbsp3_gfclk@558 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&mcbsp3_sync_mux_ck>, <&pad_clks_ck>, <&slimbus_clk>;
@@ -506,7 +506,7 @@
reg = <0x0558>;
};
- timer5_gfclk_mux: timer5_gfclk_mux {
+ timer5_gfclk_mux: timer5_gfclk_mux@568 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&dss_syc_gfclk_div>, <&sys_32k_ck>;
@@ -514,7 +514,7 @@
reg = <0x0568>;
};
- timer6_gfclk_mux: timer6_gfclk_mux {
+ timer6_gfclk_mux: timer6_gfclk_mux@570 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&dss_syc_gfclk_div>, <&sys_32k_ck>;
@@ -522,7 +522,7 @@
reg = <0x0570>;
};
- timer7_gfclk_mux: timer7_gfclk_mux {
+ timer7_gfclk_mux: timer7_gfclk_mux@578 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&dss_syc_gfclk_div>, <&sys_32k_ck>;
@@ -530,7 +530,7 @@
reg = <0x0578>;
};
- timer8_gfclk_mux: timer8_gfclk_mux {
+ timer8_gfclk_mux: timer8_gfclk_mux@580 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&dss_syc_gfclk_div>, <&sys_32k_ck>;
@@ -545,7 +545,7 @@
};
};
&prm_clocks {
- sys_clkin: sys_clkin {
+ sys_clkin: sys_clkin@110 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&virt_12000000_ck>, <&virt_13000000_ck>, <&virt_16800000_ck>, <&virt_19200000_ck>, <&virt_26000000_ck>, <&virt_27000000_ck>, <&virt_38400000_ck>;
@@ -553,14 +553,14 @@
ti,index-starts-at-one;
};
- abe_dpll_bypass_clk_mux: abe_dpll_bypass_clk_mux {
+ abe_dpll_bypass_clk_mux: abe_dpll_bypass_clk_mux@108 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin>, <&sys_32k_ck>;
reg = <0x0108>;
};
- abe_dpll_clk_mux: abe_dpll_clk_mux {
+ abe_dpll_clk_mux: abe_dpll_clk_mux@10c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin>, <&sys_32k_ck>;
@@ -583,7 +583,7 @@
clock-div = <1>;
};
- wkupaon_iclk_mux: wkupaon_iclk_mux {
+ wkupaon_iclk_mux: wkupaon_iclk_mux@108 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin>, <&abe_lp_clk_div>;
@@ -598,7 +598,7 @@
clock-div = <1>;
};
- gpio1_dbclk: gpio1_dbclk {
+ gpio1_dbclk: gpio1_dbclk@1938 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -606,7 +606,7 @@
reg = <0x1938>;
};
- timer1_gfclk_mux: timer1_gfclk_mux {
+ timer1_gfclk_mux: timer1_gfclk_mux@1940 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin>, <&sys_32k_ck>;
@@ -616,7 +616,7 @@
};
&cm_core_clocks {
- dpll_per_byp_mux: dpll_per_byp_mux {
+ dpll_per_byp_mux: dpll_per_byp_mux@14c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin>, <&per_dpll_hs_clk_div>;
@@ -624,7 +624,7 @@
reg = <0x014c>;
};
- dpll_per_ck: dpll_per_ck {
+ dpll_per_ck: dpll_per_ck@140 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
clocks = <&sys_clkin>, <&dpll_per_byp_mux>;
@@ -637,7 +637,7 @@
clocks = <&dpll_per_ck>;
};
- dpll_per_h11x2_ck: dpll_per_h11x2_ck {
+ dpll_per_h11x2_ck: dpll_per_h11x2_ck@158 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_x2_ck>;
@@ -646,7 +646,7 @@
ti,index-starts-at-one;
};
- dpll_per_h12x2_ck: dpll_per_h12x2_ck {
+ dpll_per_h12x2_ck: dpll_per_h12x2_ck@15c {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_x2_ck>;
@@ -655,7 +655,7 @@
ti,index-starts-at-one;
};
- dpll_per_h14x2_ck: dpll_per_h14x2_ck {
+ dpll_per_h14x2_ck: dpll_per_h14x2_ck@164 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_x2_ck>;
@@ -664,7 +664,7 @@
ti,index-starts-at-one;
};
- dpll_per_m2_ck: dpll_per_m2_ck {
+ dpll_per_m2_ck: dpll_per_m2_ck@150 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_ck>;
@@ -673,7 +673,7 @@
ti,index-starts-at-one;
};
- dpll_per_m2x2_ck: dpll_per_m2x2_ck {
+ dpll_per_m2x2_ck: dpll_per_m2x2_ck@150 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_x2_ck>;
@@ -682,7 +682,7 @@
ti,index-starts-at-one;
};
- dpll_per_m3x2_ck: dpll_per_m3x2_ck {
+ dpll_per_m3x2_ck: dpll_per_m3x2_ck@154 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_x2_ck>;
@@ -691,7 +691,7 @@
ti,index-starts-at-one;
};
- dpll_unipro1_ck: dpll_unipro1_ck {
+ dpll_unipro1_ck: dpll_unipro1_ck@200 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
clocks = <&sys_clkin>, <&sys_clkin>;
@@ -706,7 +706,7 @@
clock-div = <1>;
};
- dpll_unipro1_m2_ck: dpll_unipro1_m2_ck {
+ dpll_unipro1_m2_ck: dpll_unipro1_m2_ck@210 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_unipro1_ck>;
@@ -715,7 +715,7 @@
ti,index-starts-at-one;
};
- dpll_unipro2_ck: dpll_unipro2_ck {
+ dpll_unipro2_ck: dpll_unipro2_ck@1c0 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-clock";
clocks = <&sys_clkin>, <&sys_clkin>;
@@ -730,7 +730,7 @@
clock-div = <1>;
};
- dpll_unipro2_m2_ck: dpll_unipro2_m2_ck {
+ dpll_unipro2_m2_ck: dpll_unipro2_m2_ck@1d0 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_unipro2_ck>;
@@ -739,7 +739,7 @@
ti,index-starts-at-one;
};
- dpll_usb_byp_mux: dpll_usb_byp_mux {
+ dpll_usb_byp_mux: dpll_usb_byp_mux@18c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin>, <&usb_dpll_hs_clk_div>;
@@ -747,7 +747,7 @@
reg = <0x018c>;
};
- dpll_usb_ck: dpll_usb_ck {
+ dpll_usb_ck: dpll_usb_ck@180 {
#clock-cells = <0>;
compatible = "ti,omap4-dpll-j-type-clock";
clocks = <&sys_clkin>, <&dpll_usb_byp_mux>;
@@ -762,7 +762,7 @@
clock-div = <1>;
};
- dpll_usb_m2_ck: dpll_usb_m2_ck {
+ dpll_usb_m2_ck: dpll_usb_m2_ck@190 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_usb_ck>;
@@ -811,7 +811,7 @@
clock-div = <2>;
};
- l3init_60m_fclk: l3init_60m_fclk {
+ l3init_60m_fclk: l3init_60m_fclk@104 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_usb_m2_ck>;
@@ -819,7 +819,7 @@
ti,dividers = <1>, <8>;
};
- dss_32khz_clk: dss_32khz_clk {
+ dss_32khz_clk: dss_32khz_clk@1420 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -827,7 +827,7 @@
reg = <0x1420>;
};
- dss_48mhz_clk: dss_48mhz_clk {
+ dss_48mhz_clk: dss_48mhz_clk@1420 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&func_48m_fclk>;
@@ -835,7 +835,7 @@
reg = <0x1420>;
};
- dss_dss_clk: dss_dss_clk {
+ dss_dss_clk: dss_dss_clk@1420 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll_per_h12x2_ck>;
@@ -844,7 +844,7 @@
ti,set-rate-parent;
};
- dss_sys_clk: dss_sys_clk {
+ dss_sys_clk: dss_sys_clk@1420 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dss_syc_gfclk_div>;
@@ -852,7 +852,7 @@
reg = <0x1420>;
};
- gpio2_dbclk: gpio2_dbclk {
+ gpio2_dbclk: gpio2_dbclk@1060 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -860,7 +860,7 @@
reg = <0x1060>;
};
- gpio3_dbclk: gpio3_dbclk {
+ gpio3_dbclk: gpio3_dbclk@1068 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -868,7 +868,7 @@
reg = <0x1068>;
};
- gpio4_dbclk: gpio4_dbclk {
+ gpio4_dbclk: gpio4_dbclk@1070 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -876,7 +876,7 @@
reg = <0x1070>;
};
- gpio5_dbclk: gpio5_dbclk {
+ gpio5_dbclk: gpio5_dbclk@1078 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -884,7 +884,7 @@
reg = <0x1078>;
};
- gpio6_dbclk: gpio6_dbclk {
+ gpio6_dbclk: gpio6_dbclk@1080 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -892,7 +892,7 @@
reg = <0x1080>;
};
- gpio7_dbclk: gpio7_dbclk {
+ gpio7_dbclk: gpio7_dbclk@1110 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -900,7 +900,7 @@
reg = <0x1110>;
};
- gpio8_dbclk: gpio8_dbclk {
+ gpio8_dbclk: gpio8_dbclk@1118 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -908,7 +908,7 @@
reg = <0x1118>;
};
- iss_ctrlclk: iss_ctrlclk {
+ iss_ctrlclk: iss_ctrlclk@1320 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&func_96m_fclk>;
@@ -916,7 +916,7 @@
reg = <0x1320>;
};
- lli_txphy_clk: lli_txphy_clk {
+ lli_txphy_clk: lli_txphy_clk@f20 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll_unipro1_clkdcoldo>;
@@ -924,7 +924,7 @@
reg = <0x0f20>;
};
- lli_txphy_ls_clk: lli_txphy_ls_clk {
+ lli_txphy_ls_clk: lli_txphy_ls_clk@f20 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll_unipro1_m2_ck>;
@@ -932,7 +932,7 @@
reg = <0x0f20>;
};
- mmc1_32khz_clk: mmc1_32khz_clk {
+ mmc1_32khz_clk: mmc1_32khz_clk@1628 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -940,7 +940,7 @@
reg = <0x1628>;
};
- sata_ref_clk: sata_ref_clk {
+ sata_ref_clk: sata_ref_clk@1688 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_clkin>;
@@ -948,7 +948,7 @@
reg = <0x1688>;
};
- usb_host_hs_hsic480m_p1_clk: usb_host_hs_hsic480m_p1_clk {
+ usb_host_hs_hsic480m_p1_clk: usb_host_hs_hsic480m_p1_clk@1658 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll_usb_m2_ck>;
@@ -956,7 +956,7 @@
reg = <0x1658>;
};
- usb_host_hs_hsic480m_p2_clk: usb_host_hs_hsic480m_p2_clk {
+ usb_host_hs_hsic480m_p2_clk: usb_host_hs_hsic480m_p2_clk@1658 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll_usb_m2_ck>;
@@ -964,7 +964,7 @@
reg = <0x1658>;
};
- usb_host_hs_hsic480m_p3_clk: usb_host_hs_hsic480m_p3_clk {
+ usb_host_hs_hsic480m_p3_clk: usb_host_hs_hsic480m_p3_clk@1658 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll_usb_m2_ck>;
@@ -972,7 +972,7 @@
reg = <0x1658>;
};
- usb_host_hs_hsic60m_p1_clk: usb_host_hs_hsic60m_p1_clk {
+ usb_host_hs_hsic60m_p1_clk: usb_host_hs_hsic60m_p1_clk@1658 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l3init_60m_fclk>;
@@ -980,7 +980,7 @@
reg = <0x1658>;
};
- usb_host_hs_hsic60m_p2_clk: usb_host_hs_hsic60m_p2_clk {
+ usb_host_hs_hsic60m_p2_clk: usb_host_hs_hsic60m_p2_clk@1658 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l3init_60m_fclk>;
@@ -988,7 +988,7 @@
reg = <0x1658>;
};
- usb_host_hs_hsic60m_p3_clk: usb_host_hs_hsic60m_p3_clk {
+ usb_host_hs_hsic60m_p3_clk: usb_host_hs_hsic60m_p3_clk@1658 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l3init_60m_fclk>;
@@ -996,7 +996,7 @@
reg = <0x1658>;
};
- utmi_p1_gfclk: utmi_p1_gfclk {
+ utmi_p1_gfclk: utmi_p1_gfclk@1658 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&l3init_60m_fclk>, <&xclk60mhsp1_ck>;
@@ -1004,7 +1004,7 @@
reg = <0x1658>;
};
- usb_host_hs_utmi_p1_clk: usb_host_hs_utmi_p1_clk {
+ usb_host_hs_utmi_p1_clk: usb_host_hs_utmi_p1_clk@1658 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&utmi_p1_gfclk>;
@@ -1012,7 +1012,7 @@
reg = <0x1658>;
};
- utmi_p2_gfclk: utmi_p2_gfclk {
+ utmi_p2_gfclk: utmi_p2_gfclk@1658 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&l3init_60m_fclk>, <&xclk60mhsp2_ck>;
@@ -1020,7 +1020,7 @@
reg = <0x1658>;
};
- usb_host_hs_utmi_p2_clk: usb_host_hs_utmi_p2_clk {
+ usb_host_hs_utmi_p2_clk: usb_host_hs_utmi_p2_clk@1658 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&utmi_p2_gfclk>;
@@ -1028,7 +1028,7 @@
reg = <0x1658>;
};
- usb_host_hs_utmi_p3_clk: usb_host_hs_utmi_p3_clk {
+ usb_host_hs_utmi_p3_clk: usb_host_hs_utmi_p3_clk@1658 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l3init_60m_fclk>;
@@ -1036,7 +1036,7 @@
reg = <0x1658>;
};
- usb_otg_ss_refclk960m: usb_otg_ss_refclk960m {
+ usb_otg_ss_refclk960m: usb_otg_ss_refclk960m@16f0 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&dpll_usb_clkdcoldo>;
@@ -1044,7 +1044,7 @@
reg = <0x16f0>;
};
- usb_phy_cm_clk32k: usb_phy_cm_clk32k {
+ usb_phy_cm_clk32k: usb_phy_cm_clk32k@640 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&sys_32k_ck>;
@@ -1052,7 +1052,7 @@
reg = <0x0640>;
};
- usb_tll_hs_usb_ch0_clk: usb_tll_hs_usb_ch0_clk {
+ usb_tll_hs_usb_ch0_clk: usb_tll_hs_usb_ch0_clk@1668 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l3init_60m_fclk>;
@@ -1060,7 +1060,7 @@
reg = <0x1668>;
};
- usb_tll_hs_usb_ch1_clk: usb_tll_hs_usb_ch1_clk {
+ usb_tll_hs_usb_ch1_clk: usb_tll_hs_usb_ch1_clk@1668 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l3init_60m_fclk>;
@@ -1068,7 +1068,7 @@
reg = <0x1668>;
};
- usb_tll_hs_usb_ch2_clk: usb_tll_hs_usb_ch2_clk {
+ usb_tll_hs_usb_ch2_clk: usb_tll_hs_usb_ch2_clk@1668 {
#clock-cells = <0>;
compatible = "ti,gate-clock";
clocks = <&l3init_60m_fclk>;
@@ -1076,7 +1076,7 @@
reg = <0x1668>;
};
- fdif_fclk: fdif_fclk {
+ fdif_fclk: fdif_fclk@1328 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_h11x2_ck>;
@@ -1085,7 +1085,7 @@
reg = <0x1328>;
};
- gpu_core_gclk_mux: gpu_core_gclk_mux {
+ gpu_core_gclk_mux: gpu_core_gclk_mux@1520 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&dpll_core_h14x2_ck>, <&dpll_per_h14x2_ck>;
@@ -1093,7 +1093,7 @@
reg = <0x1520>;
};
- gpu_hyd_gclk_mux: gpu_hyd_gclk_mux {
+ gpu_hyd_gclk_mux: gpu_hyd_gclk_mux@1520 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&dpll_core_h14x2_ck>, <&dpll_per_h14x2_ck>;
@@ -1101,7 +1101,7 @@
reg = <0x1520>;
};
- hsi_fclk: hsi_fclk {
+ hsi_fclk: hsi_fclk@1638 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&dpll_per_m2x2_ck>;
@@ -1110,7 +1110,7 @@
reg = <0x1638>;
};
- mmc1_fclk_mux: mmc1_fclk_mux {
+ mmc1_fclk_mux: mmc1_fclk_mux@1628 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&func_128m_clk>, <&dpll_per_m2x2_ck>;
@@ -1118,7 +1118,7 @@
reg = <0x1628>;
};
- mmc1_fclk: mmc1_fclk {
+ mmc1_fclk: mmc1_fclk@1628 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&mmc1_fclk_mux>;
@@ -1127,7 +1127,7 @@
reg = <0x1628>;
};
- mmc2_fclk_mux: mmc2_fclk_mux {
+ mmc2_fclk_mux: mmc2_fclk_mux@1630 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&func_128m_clk>, <&dpll_per_m2x2_ck>;
@@ -1135,7 +1135,7 @@
reg = <0x1630>;
};
- mmc2_fclk: mmc2_fclk {
+ mmc2_fclk: mmc2_fclk@1630 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&mmc2_fclk_mux>;
@@ -1144,7 +1144,7 @@
reg = <0x1630>;
};
- timer10_gfclk_mux: timer10_gfclk_mux {
+ timer10_gfclk_mux: timer10_gfclk_mux@1028 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin>, <&sys_32k_ck>;
@@ -1152,7 +1152,7 @@
reg = <0x1028>;
};
- timer11_gfclk_mux: timer11_gfclk_mux {
+ timer11_gfclk_mux: timer11_gfclk_mux@1030 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin>, <&sys_32k_ck>;
@@ -1160,7 +1160,7 @@
reg = <0x1030>;
};
- timer2_gfclk_mux: timer2_gfclk_mux {
+ timer2_gfclk_mux: timer2_gfclk_mux@1038 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin>, <&sys_32k_ck>;
@@ -1168,7 +1168,7 @@
reg = <0x1038>;
};
- timer3_gfclk_mux: timer3_gfclk_mux {
+ timer3_gfclk_mux: timer3_gfclk_mux@1040 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin>, <&sys_32k_ck>;
@@ -1176,7 +1176,7 @@
reg = <0x1040>;
};
- timer4_gfclk_mux: timer4_gfclk_mux {
+ timer4_gfclk_mux: timer4_gfclk_mux@1048 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin>, <&sys_32k_ck>;
@@ -1184,7 +1184,7 @@
reg = <0x1048>;
};
- timer9_gfclk_mux: timer9_gfclk_mux {
+ timer9_gfclk_mux: timer9_gfclk_mux@1050 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&sys_clkin>, <&sys_32k_ck>;
@@ -1201,7 +1201,7 @@
};
&scrm_clocks {
- auxclk0_src_gate_ck: auxclk0_src_gate_ck {
+ auxclk0_src_gate_ck: auxclk0_src_gate_ck@310 {
#clock-cells = <0>;
compatible = "ti,composite-no-wait-gate-clock";
clocks = <&dpll_core_m3x2_ck>;
@@ -1209,7 +1209,7 @@
reg = <0x0310>;
};
- auxclk0_src_mux_ck: auxclk0_src_mux_ck {
+ auxclk0_src_mux_ck: auxclk0_src_mux_ck@310 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&sys_clkin>, <&dpll_core_m3x2_ck>, <&dpll_per_m3x2_ck>;
@@ -1223,7 +1223,7 @@
clocks = <&auxclk0_src_gate_ck>, <&auxclk0_src_mux_ck>;
};
- auxclk0_ck: auxclk0_ck {
+ auxclk0_ck: auxclk0_ck@310 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&auxclk0_src_ck>;
@@ -1232,7 +1232,7 @@
reg = <0x0310>;
};
- auxclk1_src_gate_ck: auxclk1_src_gate_ck {
+ auxclk1_src_gate_ck: auxclk1_src_gate_ck@314 {
#clock-cells = <0>;
compatible = "ti,composite-no-wait-gate-clock";
clocks = <&dpll_core_m3x2_ck>;
@@ -1240,7 +1240,7 @@
reg = <0x0314>;
};
- auxclk1_src_mux_ck: auxclk1_src_mux_ck {
+ auxclk1_src_mux_ck: auxclk1_src_mux_ck@314 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&sys_clkin>, <&dpll_core_m3x2_ck>, <&dpll_per_m3x2_ck>;
@@ -1254,7 +1254,7 @@
clocks = <&auxclk1_src_gate_ck>, <&auxclk1_src_mux_ck>;
};
- auxclk1_ck: auxclk1_ck {
+ auxclk1_ck: auxclk1_ck@314 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&auxclk1_src_ck>;
@@ -1263,7 +1263,7 @@
reg = <0x0314>;
};
- auxclk2_src_gate_ck: auxclk2_src_gate_ck {
+ auxclk2_src_gate_ck: auxclk2_src_gate_ck@318 {
#clock-cells = <0>;
compatible = "ti,composite-no-wait-gate-clock";
clocks = <&dpll_core_m3x2_ck>;
@@ -1271,7 +1271,7 @@
reg = <0x0318>;
};
- auxclk2_src_mux_ck: auxclk2_src_mux_ck {
+ auxclk2_src_mux_ck: auxclk2_src_mux_ck@318 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&sys_clkin>, <&dpll_core_m3x2_ck>, <&dpll_per_m3x2_ck>;
@@ -1285,7 +1285,7 @@
clocks = <&auxclk2_src_gate_ck>, <&auxclk2_src_mux_ck>;
};
- auxclk2_ck: auxclk2_ck {
+ auxclk2_ck: auxclk2_ck@318 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&auxclk2_src_ck>;
@@ -1294,7 +1294,7 @@
reg = <0x0318>;
};
- auxclk3_src_gate_ck: auxclk3_src_gate_ck {
+ auxclk3_src_gate_ck: auxclk3_src_gate_ck@31c {
#clock-cells = <0>;
compatible = "ti,composite-no-wait-gate-clock";
clocks = <&dpll_core_m3x2_ck>;
@@ -1302,7 +1302,7 @@
reg = <0x031c>;
};
- auxclk3_src_mux_ck: auxclk3_src_mux_ck {
+ auxclk3_src_mux_ck: auxclk3_src_mux_ck@31c {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&sys_clkin>, <&dpll_core_m3x2_ck>, <&dpll_per_m3x2_ck>;
@@ -1316,7 +1316,7 @@
clocks = <&auxclk3_src_gate_ck>, <&auxclk3_src_mux_ck>;
};
- auxclk3_ck: auxclk3_ck {
+ auxclk3_ck: auxclk3_ck@31c {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&auxclk3_src_ck>;
@@ -1325,7 +1325,7 @@
reg = <0x031c>;
};
- auxclk4_src_gate_ck: auxclk4_src_gate_ck {
+ auxclk4_src_gate_ck: auxclk4_src_gate_ck@320 {
#clock-cells = <0>;
compatible = "ti,composite-no-wait-gate-clock";
clocks = <&dpll_core_m3x2_ck>;
@@ -1333,7 +1333,7 @@
reg = <0x0320>;
};
- auxclk4_src_mux_ck: auxclk4_src_mux_ck {
+ auxclk4_src_mux_ck: auxclk4_src_mux_ck@320 {
#clock-cells = <0>;
compatible = "ti,composite-mux-clock";
clocks = <&sys_clkin>, <&dpll_core_m3x2_ck>, <&dpll_per_m3x2_ck>;
@@ -1347,7 +1347,7 @@
clocks = <&auxclk4_src_gate_ck>, <&auxclk4_src_mux_ck>;
};
- auxclk4_ck: auxclk4_ck {
+ auxclk4_ck: auxclk4_ck@320 {
#clock-cells = <0>;
compatible = "ti,divider-clock";
clocks = <&auxclk4_src_ck>;
@@ -1356,7 +1356,7 @@
reg = <0x0320>;
};
- auxclkreq0_ck: auxclkreq0_ck {
+ auxclkreq0_ck: auxclkreq0_ck@210 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&auxclk0_ck>, <&auxclk1_ck>, <&auxclk2_ck>, <&auxclk3_ck>, <&auxclk4_ck>;
@@ -1364,7 +1364,7 @@
reg = <0x0210>;
};
- auxclkreq1_ck: auxclkreq1_ck {
+ auxclkreq1_ck: auxclkreq1_ck@214 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&auxclk0_ck>, <&auxclk1_ck>, <&auxclk2_ck>, <&auxclk3_ck>, <&auxclk4_ck>;
@@ -1372,7 +1372,7 @@
reg = <0x0214>;
};
- auxclkreq2_ck: auxclkreq2_ck {
+ auxclkreq2_ck: auxclkreq2_ck@218 {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&auxclk0_ck>, <&auxclk1_ck>, <&auxclk2_ck>, <&auxclk3_ck>, <&auxclk4_ck>;
@@ -1380,7 +1380,7 @@
reg = <0x0218>;
};
- auxclkreq3_ck: auxclkreq3_ck {
+ auxclkreq3_ck: auxclkreq3_ck@21c {
#clock-cells = <0>;
compatible = "ti,mux-clock";
clocks = <&auxclk0_ck>, <&auxclk1_ck>, <&auxclk2_ck>, <&auxclk3_ck>, <&auxclk4_ck>;
diff --git a/arch/arm/boot/dts/orion5x-kuroboxpro.dts b/arch/arm/boot/dts/orion5x-kuroboxpro.dts
new file mode 100644
index 000000000000..1a672b098d0b
--- /dev/null
+++ b/arch/arm/boot/dts/orion5x-kuroboxpro.dts
@@ -0,0 +1,127 @@
+/*
+ * Device Tree file for Buffalo/Revogear Kurobox Pro
+ *
+ * Copyright (C) 2016
+ * Roger Shimizu <rogershimizu@gmail.com>
+ *
+ * Based on the board file arch/arm/mach-orion5x/kurobox_pro-setup.c
+ * Copyright (C) Ronen Shitrit <rshitrit@marvell.com>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+
+#include "orion5x-linkstation.dtsi"
+#include <dt-bindings/gpio/gpio.h>
+
+/ {
+ model = "Buffalo/Revogear Kurobox Pro";
+ compatible = "buffalo,kurobox-pro", "marvell,orion5x-88f5182", "marvell,orion5x";
+
+ soc {
+ ranges = <MBUS_ID(0xf0, 0x01) 0 0xf1000000 0x100000>,
+ <MBUS_ID(0x09, 0x00) 0 0xf2200000 0x800>,
+ <MBUS_ID(0x01, 0x0f) 0 0xf4000000 0x40000>,
+ <MBUS_ID(0x01, 0x1e) 0 0xfc000000 0x1000000>;
+ };
+
+ memory { /* 128 MB */
+ device_type = "memory";
+ reg = <0x00000000 0x8000000>;
+ };
+};
+
+&pinctrl {
+ pmx_power_hdd: pmx-power-hdd {
+ marvell,pins = "mpp1";
+ marvell,function = "gpio";
+ };
+
+ pmx_power_usb: pmx-power-usb {
+ marvell,pins = "mpp9";
+ marvell,function = "gpio";
+ };
+};
+
+&devbus_cs0 {
+ status = "okay";
+ compatible = "marvell,orion-nand";
+ reg = <MBUS_ID(0x01, 0x1e) 0 0x400>;
+ cle = <0>;
+ ale = <1>;
+ bank-width = <1>;
+
+ partitions {
+ compatible = "fixed-partitions";
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+ uImage@0 { /* 4 MB */
+ reg = <0 0x400000>;
+ read-only;
+ };
+
+ rootfs@400000 { /* 64 MB */
+ reg = <0x400000 0x4000000>;
+ read-only;
+ };
+
+ extra@4400000 { /* 188 MB */
+ reg = <0x4400000 0xBC00000>;
+ read-only;
+ };
+ };
+};
+
+&hdd_power {
+ gpios = <&gpio0 1 GPIO_ACTIVE_HIGH>;
+};
+
+&usb_power {
+ gpios = <&gpio0 9 GPIO_ACTIVE_HIGH>;
+};
+
+&sata {
+ nr-ports = <2>;
+};
+
+&ehci1 {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/ox810se.dtsi b/arch/arm/boot/dts/ox810se.dtsi
new file mode 100644
index 000000000000..ce13705c38d4
--- /dev/null
+++ b/arch/arm/boot/dts/ox810se.dtsi
@@ -0,0 +1,336 @@
+/*
+ * ox810se.dtsi - Device tree file for Oxford Semiconductor OX810SE SoC
+ *
+ * Copyright (C) 2016 Neil Armstrong <narmstrong@baylibre.com>
+ *
+ * Licensed under GPLv2 or later
+ */
+
+/include/ "skeleton.dtsi"
+
+/ {
+ compatible = "oxsemi,ox810se";
+
+ cpus {
+ #address-cells = <0>;
+ #size-cells = <0>;
+
+ cpu {
+ device_type = "cpu";
+ compatible = "arm,arm926ej-s";
+ clocks = <&armclk>;
+ };
+ };
+
+ memory {
+ /* Max 256MB @ 0x48000000 */
+ reg = <0x48000000 0x10000000>;
+ };
+
+ clocks {
+ osc: oscillator {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <25000000>;
+ };
+
+ gmacclk: gmacclk {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <125000000>;
+ };
+
+ rpsclk: rpsclk {
+ compatible = "fixed-factor-clock";
+ #clock-cells = <0>;
+ clock-div = <1>;
+ clock-mult = <1>;
+ clocks = <&osc>;
+ };
+
+ pll400: pll400 {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <733333333>;
+ };
+
+ sysclk: sysclk {
+ compatible = "fixed-factor-clock";
+ #clock-cells = <0>;
+ clock-div = <4>;
+ clock-mult = <1>;
+ clocks = <&pll400>;
+ };
+
+ armclk: armclk {
+ compatible = "fixed-factor-clock";
+ #clock-cells = <0>;
+ clock-div = <2>;
+ clock-mult = <1>;
+ clocks = <&pll400>;
+ };
+ };
+
+ soc {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "simple-bus";
+ ranges;
+ interrupt-parent = <&intc>;
+
+ apb-bridge@44000000 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "simple-bus";
+ ranges = <0 0x44000000 0x1000000>;
+
+ pinctrl: pinctrl {
+ compatible = "oxsemi,ox810se-pinctrl";
+
+ /* Regmap for sys registers */
+ oxsemi,sys-ctrl = <&sys>;
+
+ pinctrl_uart0: uart0 {
+ uart0a {
+ pins = "gpio31";
+ function = "fct3";
+ };
+ uart0b {
+ pins = "gpio32";
+ function = "fct3";
+ };
+ };
+
+ pinctrl_uart0_modem: uart0_modem {
+ uart0c {
+ pins = "gpio27";
+ function = "fct3";
+ };
+ uart0d {
+ pins = "gpio28";
+ function = "fct3";
+ };
+ uart0e {
+ pins = "gpio29";
+ function = "fct3";
+ };
+ uart0f {
+ pins = "gpio30";
+ function = "fct3";
+ };
+ uart0g {
+ pins = "gpio33";
+ function = "fct3";
+ };
+ uart0h {
+ pins = "gpio34";
+ function = "fct3";
+ };
+ };
+
+ pinctrl_uart1: uart1 {
+ uart1a {
+ pins = "gpio20";
+ function = "fct3";
+ };
+ uart1b {
+ pins = "gpio22";
+ function = "fct3";
+ };
+ };
+
+ pinctrl_uart1_modem: uart1_modem {
+ uart1c {
+ pins = "gpio8";
+ function = "fct3";
+ };
+ uart1d {
+ pins = "gpio9";
+ function = "fct3";
+ };
+ uart1e {
+ pins = "gpio23";
+ function = "fct3";
+ };
+ uart1f {
+ pins = "gpio24";
+ function = "fct3";
+ };
+ uart1g {
+ pins = "gpio25";
+ function = "fct3";
+ };
+ uart1h {
+ pins = "gpio26";
+ function = "fct3";
+ };
+ };
+
+ pinctrl_uart2: uart2 {
+ uart2a {
+ pins = "gpio6";
+ function = "fct3";
+ };
+ uart2b {
+ pins = "gpio7";
+ function = "fct3";
+ };
+ };
+
+ pinctrl_uart2_modem: uart2_modem {
+ uart2c {
+ pins = "gpio0";
+ function = "fct3";
+ };
+ uart2d {
+ pins = "gpio1";
+ function = "fct3";
+ };
+ uart2e {
+ pins = "gpio2";
+ function = "fct3";
+ };
+ uart2f {
+ pins = "gpio3";
+ function = "fct3";
+ };
+ uart2g {
+ pins = "gpio4";
+ function = "fct3";
+ };
+ uart2h {
+ pins = "gpio5";
+ function = "fct3";
+ };
+ };
+ };
+
+ gpio0: gpio@000000 {
+ compatible = "oxsemi,ox810se-gpio";
+ reg = <0x000000 0x100000>;
+ interrupts = <21>;
+ #gpio-cells = <2>;
+ gpio-controller;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ ngpios = <32>;
+ oxsemi,gpio-bank = <0>;
+ gpio-ranges = <&pinctrl 0 0 32>;
+ };
+
+ gpio1: gpio@100000 {
+ compatible = "oxsemi,ox810se-gpio";
+ reg = <0x100000 0x100000>;
+ interrupts = <22>;
+ #gpio-cells = <2>;
+ gpio-controller;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ ngpios = <3>;
+ oxsemi,gpio-bank = <1>;
+ gpio-ranges = <&pinctrl 0 32 3>;
+ };
+
+ uart0: serial@200000 {
+ compatible = "ns16550a";
+ reg = <0x200000 0x100000>;
+ clocks = <&sysclk>;
+ interrupts = <23>;
+ reg-shift = <0>;
+ fifo-size = <16>;
+ reg-io-width = <1>;
+ current-speed = <115200>;
+ no-loopback-test;
+ status = "disabled";
+ resets = <&reset 17>;
+ };
+
+ uart1: serial@300000 {
+ compatible = "ns16550a";
+ reg = <0x300000 0x100000>;
+ clocks = <&sysclk>;
+ interrupts = <24>;
+ reg-shift = <0>;
+ fifo-size = <16>;
+ reg-io-width = <1>;
+ current-speed = <115200>;
+ no-loopback-test;
+ status = "disabled";
+ resets = <&reset 18>;
+ };
+
+ uart2: serial@900000 {
+ compatible = "ns16550a";
+ reg = <0x900000 0x100000>;
+ clocks = <&sysclk>;
+ interrupts = <29>;
+ reg-shift = <0>;
+ fifo-size = <16>;
+ reg-io-width = <1>;
+ current-speed = <115200>;
+ no-loopback-test;
+ status = "disabled";
+ resets = <&reset 22>;
+ };
+
+ uart3: serial@a00000 {
+ compatible = "ns16550a";
+ reg = <0xa00000 0x100000>;
+ clocks = <&sysclk>;
+ interrupts = <30>;
+ reg-shift = <0>;
+ fifo-size = <16>;
+ reg-io-width = <1>;
+ current-speed = <115200>;
+ no-loopback-test;
+ status = "disabled";
+ resets = <&reset 23>;
+ };
+ };
+
+ apb-bridge@45000000 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "simple-bus";
+ ranges = <0 0x45000000 0x1000000>;
+
+ sys: sys-ctrl@000000 {
+ compatible = "oxsemi,ox810se-sys-ctrl", "syscon", "simple-mfd";
+ reg = <0x000000 0x100000>;
+
+ reset: reset-controller {
+ compatible = "oxsemi,ox810se-reset";
+ #reset-cells = <1>;
+ };
+
+ stdclk: stdclk {
+ compatible = "oxsemi,ox810se-stdclk";
+ #clock-cells = <1>;
+ };
+ };
+
+ rps@300000 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "simple-bus";
+ ranges = <0 0x300000 0x100000>;
+
+ intc: interrupt-controller@0 {
+ compatible = "oxsemi,ox810se-rps-irq";
+ interrupt-controller;
+ reg = <0 0x200>;
+ #interrupt-cells = <1>;
+ valid-mask = <0xFFFFFFFF>;
+ clear-mask = <0>;
+ };
+
+ timer0: timer@200 {
+ compatible = "oxsemi,ox810se-rps-timer";
+ reg = <0x200 0x40>;
+ clocks = <&rpsclk>;
+ interrupts = <4 5>;
+ };
+ };
+ };
+ };
+};
diff --git a/arch/arm/boot/dts/phy3250.dts b/arch/arm/boot/dts/phy3250.dts
deleted file mode 100644
index a00d7ce7802b..000000000000
--- a/arch/arm/boot/dts/phy3250.dts
+++ /dev/null
@@ -1,227 +0,0 @@
-/*
- * PHYTEC phyCORE-LPC3250 board
- *
- * Copyright 2012 Roland Stigge <stigge@antcom.de>
- *
- * The code contained herein is licensed under the GNU General Public
- * License. You may obtain a copy of the GNU General Public License
- * Version 2 or later at the following locations:
- *
- * http://www.opensource.org/licenses/gpl-license.html
- * http://www.gnu.org/copyleft/gpl.html
- */
-
-/dts-v1/;
-#include "lpc32xx.dtsi"
-
-/ {
- model = "PHYTEC phyCORE-LPC3250 board based on NXP LPC3250";
- compatible = "phytec,phy3250", "nxp,lpc3250";
- #address-cells = <1>;
- #size-cells = <1>;
-
- memory {
- device_type = "memory";
- reg = <0x80000000 0x4000000>;
- };
-
- regulators {
- backlight_reg: regulator@0 {
- compatible = "regulator-fixed";
- regulator-name = "backlight_reg";
- regulator-min-microvolt = <1800000>;
- regulator-max-microvolt = <1800000>;
- gpio = <&gpio 5 4 0>;
- enable-active-high;
- regulator-boot-on;
- };
-
- lcd_reg: regulator@1 {
- compatible = "regulator-fixed";
- regulator-name = "lcd_reg";
- regulator-min-microvolt = <1800000>;
- regulator-max-microvolt = <1800000>;
- gpio = <&gpio 5 0 0>;
- enable-active-high;
- regulator-boot-on;
- };
-
- sd_reg: regulator@2 {
- compatible = "regulator-fixed";
- regulator-name = "sd_reg";
- regulator-min-microvolt = <1800000>;
- regulator-max-microvolt = <1800000>;
- gpio = <&gpio 5 5 0>;
- enable-active-high;
- };
- };
-
- ahb {
- mac: ethernet@31060000 {
- phy-mode = "rmii";
- use-iram;
- };
-
- clcd@31040000 {
- status = "okay";
- };
-
- /* 64MB Flash via SLC NAND controller */
- slc: flash@20020000 {
- status = "okay";
- #address-cells = <1>;
- #size-cells = <1>;
-
- nxp,wdr-clks = <14>;
- nxp,wwidth = <40000000>;
- nxp,whold = <100000000>;
- nxp,wsetup = <100000000>;
- nxp,rdr-clks = <14>;
- nxp,rwidth = <40000000>;
- nxp,rhold = <66666666>;
- nxp,rsetup = <100000000>;
- nand-on-flash-bbt;
- gpios = <&gpio 5 19 1>; /* GPO_P3 19, active low */
-
- mtd0@00000000 {
- label = "phy3250-boot";
- reg = <0x00000000 0x00064000>;
- read-only;
- };
-
- mtd1@00064000 {
- label = "phy3250-uboot";
- reg = <0x00064000 0x00190000>;
- read-only;
- };
-
- mtd2@001f4000 {
- label = "phy3250-ubt-prms";
- reg = <0x001f4000 0x00010000>;
- };
-
- mtd3@00204000 {
- label = "phy3250-kernel";
- reg = <0x00204000 0x00400000>;
- };
-
- mtd4@00604000 {
- label = "phy3250-rootfs";
- reg = <0x00604000 0x039fc000>;
- };
- };
-
- apb {
- uart5: serial@40090000 {
- status = "okay";
- };
-
- uart3: serial@40080000 {
- status = "okay";
- };
-
- i2c1: i2c@400A0000 {
- clock-frequency = <100000>;
-
- pcf8563: rtc@51 {
- compatible = "nxp,pcf8563";
- reg = <0x51>;
- };
-
- uda1380: uda1380@18 {
- compatible = "nxp,uda1380";
- reg = <0x18>;
- power-gpio = <&gpio 0x59 0>;
- reset-gpio = <&gpio 0x51 0>;
- dac-clk = "wspll";
- };
- };
-
- i2c2: i2c@400A8000 {
- clock-frequency = <100000>;
- };
-
- ssp0: ssp@20084000 {
- #address-cells = <1>;
- #size-cells = <0>;
- num-cs = <1>;
- cs-gpios = <&gpio 3 5 0>;
-
- eeprom: at25@0 {
- pl022,interface = <0>;
- pl022,com-mode = <0>;
- pl022,rx-level-trig = <1>;
- pl022,tx-level-trig = <1>;
- pl022,ctrl-len = <11>;
- pl022,wait-state = <0>;
- pl022,duplex = <0>;
-
- at25,byte-len = <0x8000>;
- at25,addr-mode = <2>;
- at25,page-size = <64>;
-
- compatible = "atmel,at25";
- reg = <0>;
- spi-max-frequency = <5000000>;
- };
- };
-
- sd@20098000 {
- wp-gpios = <&gpio 3 0 0>;
- cd-gpios = <&gpio 3 1 0>;
- cd-inverted;
- bus-width = <4>;
- vmmc-supply = <&sd_reg>;
- status = "okay";
- };
- };
-
- fab {
- uart2: serial@40018000 {
- status = "okay";
- };
-
- tsc@40048000 {
- status = "okay";
- };
-
- key@40050000 {
- status = "okay";
- keypad,num-rows = <1>;
- keypad,num-columns = <1>;
- nxp,debounce-delay-ms = <3>;
- nxp,scan-delay-ms = <34>;
- linux,keymap = <0x00000002>;
- };
- };
- };
-
- leds {
- compatible = "gpio-leds";
-
- led0 { /* red */
- gpios = <&gpio 5 1 0>; /* GPO_P3 1, GPIO 80, active high */
- default-state = "off";
- };
-
- led1 { /* green */
- gpios = <&gpio 5 14 0>; /* GPO_P3 14, GPIO 93, active high */
- linux,default-trigger = "heartbeat";
- };
- };
-};
-
-/* Here, choose exactly one from: ohci, usbd */
-&ohci /* &usbd */ {
- transceiver = <&isp1301>;
- status = "okay";
-};
-
-&i2cusb {
- clock-frequency = <100000>;
-
- isp1301: usb-transceiver@2c {
- compatible = "nxp,isp1301";
- reg = <0x2c>;
- };
-};
diff --git a/arch/arm/boot/dts/qcom-apq8064-arrow-db600c-pins.dtsi b/arch/arm/boot/dts/qcom-apq8064-arrow-db600c-pins.dtsi
new file mode 100644
index 000000000000..a3efb9704fcd
--- /dev/null
+++ b/arch/arm/boot/dts/qcom-apq8064-arrow-db600c-pins.dtsi
@@ -0,0 +1,52 @@
+&tlmm_pinmux {
+ card_detect: card-detect {
+ mux {
+ pins = "gpio26";
+ function = "gpio";
+ bias-disable;
+ };
+ };
+
+ pcie_pins: pcie-pinmux {
+ mux {
+ pins = "gpio27";
+ function = "gpio";
+ };
+ conf {
+ pins = "gpio27";
+ drive-strength = <12>;
+ bias-disable;
+ };
+ };
+
+ user_leds: user-leds {
+ mux {
+ pins = "gpio3", "gpio7", "gpio10", "gpio11";
+ function = "gpio";
+ };
+
+ conf {
+ pins = "gpio3", "gpio7", "gpio10", "gpio11";
+ function = "gpio";
+ output-low;
+ };
+ };
+
+ magneto_pins: magneto-pins {
+ mux {
+ pins = "gpio31", "gpio48";
+ function = "gpio";
+ bias-disable;
+ };
+ };
+};
+
+&pm8921_mpps {
+ mpp_leds: mpp-leds {
+ pinconf {
+ pins = "mpp7", "mpp8";
+ function = "digital";
+ output-low;
+ };
+ };
+};
diff --git a/arch/arm/boot/dts/qcom-apq8064-arrow-db600c.dts b/arch/arm/boot/dts/qcom-apq8064-arrow-db600c.dts
new file mode 100644
index 000000000000..e01b27ea7fba
--- /dev/null
+++ b/arch/arm/boot/dts/qcom-apq8064-arrow-db600c.dts
@@ -0,0 +1,349 @@
+#include "qcom-apq8064-v2.0.dtsi"
+#include "qcom-apq8064-arrow-db600c-pins.dtsi"
+#include <dt-bindings/gpio/gpio.h>
+
+/ {
+ model = "Arrow Electronics, APQ8064 DB600c";
+ compatible = "arrow,db600c", "qcom,apq8064";
+
+ aliases {
+ serial0 = &gsbi7_serial;
+ serial1 = &gsbi1_serial;
+ i2c0 = &gsbi2_i2c;
+ i2c1 = &gsbi3_i2c;
+ i2c2 = &gsbi4_i2c;
+ i2c3 = &gsbi7_i2c;
+ spi0 = &gsbi5_spi;
+ };
+
+ regulators {
+ compatible = "simple-bus";
+ vph: regulator-fixed@1 {
+ compatible = "regulator-fixed";
+ regulator-min-microvolt = <4500000>;
+ regulator-max-microvolt = <4500000>;
+ regulator-name = "VPH";
+ regulator-type = "voltage";
+ regulator-boot-on;
+ };
+
+ /* on board fixed 3.3v supply */
+ vcc3v3: vcc3v3 {
+ compatible = "regulator-fixed";
+ regulator-name = "VCC3V3";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ };
+
+ };
+
+ soc {
+ rpm@108000 {
+ regulators {
+ vdd_s1-supply = <&vph>;
+ vdd_s2-supply = <&vph>;
+ vdd_s3-supply = <&vph>;
+ vdd_s4-supply = <&vph>;
+ vdd_s5-supply = <&vph>;
+ vdd_s6-supply = <&vph>;
+ vdd_s7-supply = <&vph>;
+ vdd_l1_l2_l12_l18-supply = <&pm8921_s4>;
+ vdd_l3_l15_l17-supply = <&vph>;
+ vdd_l4_l14-supply = <&vph>;
+ vdd_l5_l8_l16-supply = <&vph>;
+ vdd_l6_l7-supply = <&vph>;
+ vdd_l9_l11-supply = <&vph>;
+ vdd_l10_l22-supply = <&vph>;
+ vdd_l21_l23_l29-supply = <&vph>;
+ vdd_l24-supply = <&pm8921_s1>;
+ vdd_l25-supply = <&pm8921_s1>;
+ vdd_l26-supply = <&pm8921_s7>;
+ vdd_l27-supply = <&pm8921_s7>;
+ vdd_l28-supply = <&pm8921_s7>;
+ vin_lvs1_3_6-supply = <&pm8921_s4>;
+ vin_lvs2-supply = <&pm8921_s1>;
+ vin_lvs4_5_7-supply = <&pm8921_s4>;
+
+ s1 {
+ regulator-always-on;
+ regulator-min-microvolt = <1225000>;
+ regulator-max-microvolt = <1225000>;
+ qcom,switch-mode-frequency = <3200000>;
+ bias-pull-down;
+ };
+
+ s3 {
+ regulator-min-microvolt = <1000000>;
+ regulator-max-microvolt = <1400000>;
+ qcom,switch-mode-frequency = <4800000>;
+ };
+
+ s4 {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ qcom,switch-mode-frequency = <3200000>;
+ bias-pull-down;
+ regulator-always-on;
+ };
+
+ s7 {
+ regulator-min-microvolt = <1300000>;
+ regulator-max-microvolt = <1300000>;
+ qcom,switch-mode-frequency = <3200000>;
+ };
+
+ l3 {
+ regulator-min-microvolt = <3050000>;
+ regulator-max-microvolt = <3300000>;
+ bias-pull-down;
+ };
+
+ l4 {
+ regulator-min-microvolt = <1000000>;
+ regulator-max-microvolt = <1800000>;
+ bias-pull-down;
+ };
+
+ l5 {
+ regulator-min-microvolt = <2750000>;
+ regulator-max-microvolt = <3000000>;
+ bias-pull-down;
+ regulator-boot-on;
+ regulator-always-on;
+ };
+
+ l6 {
+ regulator-min-microvolt = <2950000>;
+ regulator-max-microvolt = <2950000>;
+ bias-pull-down;
+ };
+
+ l23 {
+ regulator-min-microvolt = <1700000>;
+ regulator-max-microvolt = <1900000>;
+ bias-pull-down;
+ };
+
+ lvs6 {
+ bias-pull-down;
+ };
+
+ lvs7 {
+ bias-pull-down;
+ };
+ };
+ };
+
+ gsbi@12440000 {
+ status = "okay";
+ qcom,mode = <GSBI_PROT_UART_W_FC>;
+ serial@12450000 {
+ label = "LS-UART1";
+ status = "okay";
+ pinctrl-names = "default";
+ pinctrl-0 = <&gsbi1_uart_4pins>;
+ };
+ };
+
+ gsbi@12480000 {
+ status = "okay";
+ qcom,mode = <GSBI_PROT_I2C>;
+ i2c@124a0000 {
+ /* On Low speed expansion and Sensors */
+ label = "LS-I2C0";
+ status = "okay";
+ lis3mdl_mag@1e {
+ compatible = "st,lis3mdl-magn";
+ reg = <0x1e>;
+ vdd-supply = <&vcc3v3>;
+ vddio-supply = <&pm8921_s4>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&magneto_pins>;
+ interrupt-parent = <&tlmm_pinmux>;
+
+ st,drdy-int-pin = <2>;
+ interrupts = <48 IRQ_TYPE_EDGE_RISING>, /* DRDY line */
+ <31 IRQ_TYPE_EDGE_RISING>; /* INT */
+ };
+ };
+ };
+
+ gsbi@16200000 {
+ status = "okay";
+ qcom,mode = <GSBI_PROT_I2C>;
+ i2c@16280000 {
+ /* On Low speed expansion */
+ status = "okay";
+ label = "LS-I2C1";
+ clock-frequency = <200000>;
+ eeprom@52 {
+ compatible = "atmel,24c128";
+ reg = <0x52>;
+ pagesize = <64>;
+ };
+ };
+ };
+
+ gsbi@16300000 {
+ status = "okay";
+ qcom,mode = <GSBI_PROT_I2C>;
+ i2c@16380000 {
+ /* On High speed expansion */
+ label = "HS-CAM-I2C3";
+ status = "okay";
+ };
+ };
+
+ gsbi@1a200000 {
+ status = "okay";
+ spi@1a280000 {
+ /* On Low speed expansion */
+ label = "LS-SPI0";
+ status = "okay";
+ };
+ };
+
+ /* DEBUG UART */
+ gsbi@16600000 {
+ status = "okay";
+ qcom,mode = <GSBI_PROT_I2C_UART>;
+ serial@16640000 {
+ label = "LS-UART0";
+ status = "okay";
+ pinctrl-names = "default";
+ pinctrl-0 = <&gsbi7_uart_2pins>;
+ };
+
+ i2c@16680000 {
+ /* On High speed expansion */
+ status = "okay";
+ label = "HS-CAM-I2C2";
+ };
+ };
+
+ leds {
+ pinctrl-names = "default";
+ pinctrl-0 = <&user_leds>, <&mpp_leds>;
+
+ compatible = "gpio-leds";
+
+ user-led0 {
+ label = "user0-led";
+ gpios = <&tlmm_pinmux 3 GPIO_ACTIVE_HIGH>;
+ linux,default-trigger = "heartbeat";
+ default-state = "off";
+ };
+
+ user-led1 {
+ label = "user1-led";
+ gpios = <&tlmm_pinmux 7 GPIO_ACTIVE_HIGH>;
+ linux,default-trigger = "mmc0";
+ default-state = "off";
+ };
+
+ user-led2 {
+ label = "user2-led";
+ gpios = <&tlmm_pinmux 10 GPIO_ACTIVE_HIGH>;
+ linux,default-trigger = "mmc1";
+ default-state = "off";
+ };
+
+ user-led3 {
+ label = "user3-led";
+ gpios = <&tlmm_pinmux 11 GPIO_ACTIVE_HIGH>;
+ linux,default-trigger = "none";
+ default-state = "off";
+ };
+
+ wifi-led {
+ label = "WiFi-led";
+ gpios = <&pm8921_mpps 7 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+
+ bt-led {
+ label = "BT-led";
+ gpios = <&pm8921_mpps 8 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+ };
+
+ pci@1b500000 {
+ status = "okay";
+ vdda-supply = <&pm8921_s3>;
+ vdda_phy-supply = <&pm8921_lvs6>;
+ vdda_refclk-supply = <&vcc3v3>;
+ pinctrl-0 = <&pcie_pins>;
+ pinctrl-names = "default";
+ perst-gpio = <&tlmm_pinmux 27 GPIO_ACTIVE_LOW>;
+ };
+
+ phy@1b400000 {
+ status = "okay";
+ };
+
+ sata@29000000 {
+ status = "okay";
+ target-supply = <&pm8921_lvs7>;
+ };
+
+ /* OTG */
+ phy@12500000 {
+ status = "okay";
+ dr_mode = "peripheral";
+ vddcx-supply = <&pm8921_s3>;
+ v3p3-supply = <&pm8921_l3>;
+ v1p8-supply = <&pm8921_l4>;
+ };
+
+ phy@12520000 {
+ status = "okay";
+ vddcx-supply = <&pm8921_s3>;
+ v3p3-supply = <&pm8921_l3>;
+ v1p8-supply = <&pm8921_l23>;
+ };
+
+ phy@12530000 {
+ status = "okay";
+ vddcx-supply = <&pm8921_s3>;
+ v3p3-supply = <&pm8921_l3>;
+ v1p8-supply = <&pm8921_l23>;
+ };
+
+ gadget@12500000 {
+ status = "okay";
+ };
+
+ /* OTG */
+ usb@12500000 {
+ status = "okay";
+ };
+
+ usb@12520000 {
+ status = "okay";
+ };
+
+ usb@12530000 {
+ status = "okay";
+ };
+
+ amba {
+ /* eMMC */
+ sdcc@12400000 {
+ status = "okay";
+ vmmc-supply = <&pm8921_l5>;
+ vqmmc-supply = <&pm8921_s4>;
+ };
+
+ /* External micro SD card */
+ sdcc@12180000 {
+ status = "okay";
+ vmmc-supply = <&pm8921_l6>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&card_detect>;
+ cd-gpios = <&tlmm_pinmux 26 GPIO_ACTIVE_HIGH>;
+ };
+ };
+ };
+};
diff --git a/arch/arm/boot/dts/qcom-apq8064-asus-nexus7-flo.dts b/arch/arm/boot/dts/qcom-apq8064-asus-nexus7-flo.dts
index c535b3f0e5cf..32fedfa149d0 100644
--- a/arch/arm/boot/dts/qcom-apq8064-asus-nexus7-flo.dts
+++ b/arch/arm/boot/dts/qcom-apq8064-asus-nexus7-flo.dts
@@ -224,6 +224,12 @@
reg = <0x52>;
pagesize = <32>;
};
+
+ bq27541@55 {
+ compatible = "ti,bq27541";
+ reg = <0x55>;
+ };
+
};
};
diff --git a/arch/arm/boot/dts/qcom-apq8064-pins.dtsi b/arch/arm/boot/dts/qcom-apq8064-pins.dtsi
index b57c59d5bc00..4102a98f475b 100644
--- a/arch/arm/boot/dts/qcom-apq8064-pins.dtsi
+++ b/arch/arm/boot/dts/qcom-apq8064-pins.dtsi
@@ -39,6 +39,20 @@
};
};
+ gsbi1_uart_2pins: gsbi1_uart_2pins {
+ mux {
+ pins = "gpio18", "gpio19";
+ function = "gsbi1";
+ };
+ };
+
+ gsbi1_uart_4pins: gsbi1_uart_4pins {
+ mux {
+ pins = "gpio18", "gpio19", "gpio20", "gpio21";
+ function = "gsbi1";
+ };
+ };
+
i2c2_pins: i2c2 {
mux {
pins = "gpio24", "gpio25";
@@ -205,4 +219,29 @@
function = "gsbi7";
};
};
+
+ i2c7_pins: i2c7 {
+ mux {
+ pins = "gpio84", "gpio85";
+ function = "gsbi7";
+ };
+
+ pinconf {
+ pins = "gpio84", "gpio85";
+ drive-strength = <16>;
+ bias-disable;
+ };
+ };
+
+ i2c7_pins_sleep: i2c7_pins_sleep {
+ mux {
+ pins = "gpio84", "gpio85";
+ function = "gpio";
+ };
+ pinconf {
+ pins = "gpio84", "gpio85";
+ drive-strength = <2>;
+ bias-disable = <0>;
+ };
+ };
};
diff --git a/arch/arm/boot/dts/qcom-apq8064.dtsi b/arch/arm/boot/dts/qcom-apq8064.dtsi
index 04f541bffbdd..df96ccdc9bb4 100644
--- a/arch/arm/boot/dts/qcom-apq8064.dtsi
+++ b/arch/arm/boot/dts/qcom-apq8064.dtsi
@@ -124,6 +124,95 @@
hwlocks = <&sfpb_mutex 3>;
};
+ smd {
+ compatible = "qcom,smd";
+
+ modem@0 {
+ interrupts = <0 37 IRQ_TYPE_EDGE_RISING>;
+
+ qcom,ipc = <&l2cc 8 3>;
+ qcom,smd-edge = <0>;
+
+ status = "disabled";
+ };
+
+ q6@1 {
+ interrupts = <0 90 IRQ_TYPE_EDGE_RISING>;
+
+ qcom,ipc = <&l2cc 8 15>;
+ qcom,smd-edge = <1>;
+
+ status = "disabled";
+ };
+
+ dsps@3 {
+ interrupts = <0 138 IRQ_TYPE_EDGE_RISING>;
+
+ qcom,ipc = <&sps_sic_non_secure 0x4080 0>;
+ qcom,smd-edge = <3>;
+
+ status = "disabled";
+ };
+
+ riva@6 {
+ interrupts = <0 198 IRQ_TYPE_EDGE_RISING>;
+
+ qcom,ipc = <&l2cc 8 25>;
+ qcom,smd-edge = <6>;
+
+ status = "disabled";
+ };
+ };
+
+ smsm {
+ compatible = "qcom,smsm";
+
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ qcom,ipc-1 = <&l2cc 8 4>;
+ qcom,ipc-2 = <&l2cc 8 14>;
+ qcom,ipc-3 = <&l2cc 8 23>;
+ qcom,ipc-4 = <&sps_sic_non_secure 0x4094 0>;
+
+ apps_smsm: apps@0 {
+ reg = <0>;
+ #qcom,state-cells = <1>;
+ };
+
+ modem_smsm: modem@1 {
+ reg = <1>;
+ interrupts = <0 38 IRQ_TYPE_EDGE_RISING>;
+
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
+ q6_smsm: q6@2 {
+ reg = <2>;
+ interrupts = <0 89 IRQ_TYPE_EDGE_RISING>;
+
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
+ wcnss_smsm: wcnss@3 {
+ reg = <3>;
+ interrupts = <0 204 IRQ_TYPE_EDGE_RISING>;
+
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+
+ dsps_smsm: dsps@4 {
+ reg = <4>;
+ interrupts = <0 137 IRQ_TYPE_EDGE_RISING>;
+
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+ };
+
soc: soc {
#address-cells = <1>;
#size-cells = <1>;
@@ -212,6 +301,11 @@
regulator;
};
+ sps_sic_non_secure: sps-sic-non-secure@12100000 {
+ compatible = "syscon";
+ reg = <0x12100000 0x10000>;
+ };
+
gsbi1: gsbi@12440000 {
status = "disabled";
compatible = "qcom,gsbi-v1.0.0";
@@ -225,9 +319,20 @@
syscon-tcsr = <&tcsr>;
+ gsbi1_serial: serial@12450000 {
+ compatible = "qcom,msm-uartdm-v1.3", "qcom,msm-uartdm";
+ reg = <0x12450000 0x100>,
+ <0x12400000 0x03>;
+ interrupts = <0 193 0x0>;
+ clocks = <&gcc GSBI1_UART_CLK>, <&gcc GSBI1_H_CLK>;
+ clock-names = "core", "iface";
+ status = "disabled";
+ };
+
gsbi1_i2c: i2c@12460000 {
compatible = "qcom,i2c-qup-v1.1.1";
- pinctrl-0 = <&i2c1_pins &i2c1_pins_sleep>;
+ pinctrl-0 = <&i2c1_pins>;
+ pinctrl-1 = <&i2c1_pins_sleep>;
pinctrl-names = "default", "sleep";
reg = <0x12460000 0x1000>;
interrupts = <0 194 IRQ_TYPE_NONE>;
@@ -255,7 +360,8 @@
gsbi2_i2c: i2c@124a0000 {
compatible = "qcom,i2c-qup-v1.1.1";
reg = <0x124a0000 0x1000>;
- pinctrl-0 = <&i2c2_pins &i2c2_pins_sleep>;
+ pinctrl-0 = <&i2c2_pins>;
+ pinctrl-1 = <&i2c2_pins_sleep>;
pinctrl-names = "default", "sleep";
interrupts = <0 196 IRQ_TYPE_NONE>;
clocks = <&gcc GSBI2_QUP_CLK>, <&gcc GSBI2_H_CLK>;
@@ -277,7 +383,8 @@
ranges;
gsbi3_i2c: i2c@16280000 {
compatible = "qcom,i2c-qup-v1.1.1";
- pinctrl-0 = <&i2c3_pins &i2c3_pins_sleep>;
+ pinctrl-0 = <&i2c3_pins>;
+ pinctrl-1 = <&i2c3_pins_sleep>;
pinctrl-names = "default", "sleep";
reg = <0x16280000 0x1000>;
interrupts = <GIC_SPI 151 IRQ_TYPE_NONE>;
@@ -302,7 +409,8 @@
gsbi4_i2c: i2c@16380000 {
compatible = "qcom,i2c-qup-v1.1.1";
- pinctrl-0 = <&i2c4_pins &i2c4_pins_sleep>;
+ pinctrl-0 = <&i2c4_pins>;
+ pinctrl-1 = <&i2c4_pins_sleep>;
pinctrl-names = "default", "sleep";
reg = <0x16380000 0x1000>;
interrupts = <GIC_SPI 153 IRQ_TYPE_NONE>;
@@ -337,7 +445,8 @@
compatible = "qcom,spi-qup-v1.1.1";
reg = <0x1a280000 0x1000>;
interrupts = <0 155 0>;
- pinctrl-0 = <&spi5_default &spi5_sleep>;
+ pinctrl-0 = <&spi5_default>;
+ pinctrl-1 = <&spi5_sleep>;
pinctrl-names = "default", "sleep";
clocks = <&gcc GSBI5_QUP_CLK>, <&gcc GSBI5_H_CLK>;
clock-names = "core", "iface";
@@ -370,7 +479,8 @@
gsbi6_i2c: i2c@16580000 {
compatible = "qcom,i2c-qup-v1.1.1";
- pinctrl-0 = <&i2c6_pins &i2c6_pins_sleep>;
+ pinctrl-0 = <&i2c6_pins>;
+ pinctrl-1 = <&i2c6_pins_sleep>;
pinctrl-names = "default", "sleep";
reg = <0x16580000 0x1000>;
interrupts = <GIC_SPI 157 IRQ_TYPE_NONE>;
@@ -401,6 +511,19 @@
clock-names = "core", "iface";
status = "disabled";
};
+
+ gsbi7_i2c: i2c@16680000 {
+ compatible = "qcom,i2c-qup-v1.1.1";
+ pinctrl-0 = <&i2c7_pins>;
+ pinctrl-1 = <&i2c7_pins_sleep>;
+ pinctrl-names = "default", "sleep";
+ reg = <0x16680000 0x1000>;
+ interrupts = <GIC_SPI 159 IRQ_TYPE_NONE>;
+ clocks = <&gcc GSBI7_QUP_CLK>,
+ <&gcc GSBI7_H_CLK>;
+ clock-names = "core", "iface";
+ status = "disabled";
+ };
};
rng@1a500000 {
diff --git a/arch/arm/boot/dts/qcom-ipq4019-ap.dk01.1-c1.dts b/arch/arm/boot/dts/qcom-ipq4019-ap.dk01.1-c1.dts
new file mode 100644
index 000000000000..0d92f1bc3a13
--- /dev/null
+++ b/arch/arm/boot/dts/qcom-ipq4019-ap.dk01.1-c1.dts
@@ -0,0 +1,22 @@
+/* Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ *
+ */
+
+#include "qcom-ipq4019-ap.dk01.1.dtsi"
+
+/ {
+ model = "Qualcomm Technologies, Inc. IPQ40xx/AP-DK01.1-C1";
+
+};
diff --git a/arch/arm/boot/dts/qcom-ipq4019-ap.dk01.1.dtsi b/arch/arm/boot/dts/qcom-ipq4019-ap.dk01.1.dtsi
new file mode 100644
index 000000000000..b9457dd21a69
--- /dev/null
+++ b/arch/arm/boot/dts/qcom-ipq4019-ap.dk01.1.dtsi
@@ -0,0 +1,112 @@
+/* Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ *
+ */
+
+#include "qcom-ipq4019.dtsi"
+
+/ {
+ model = "Qualcomm Technologies, Inc. IPQ4019/AP-DK01.1";
+ compatible = "qcom,ipq4019";
+
+ clocks {
+ xo: xo {
+ compatible = "fixed-clock";
+ clock-frequency = <48000000>;
+ #clock-cells = <0>;
+ };
+ };
+
+ soc {
+
+
+ timer {
+ compatible = "arm,armv7-timer";
+ interrupts = <1 2 0xf08>,
+ <1 3 0xf08>,
+ <1 4 0xf08>,
+ <1 1 0xf08>;
+ clock-frequency = <48000000>;
+ };
+
+ pinctrl@0x01000000 {
+ serial_pins: serial_pinmux {
+ mux {
+ pins = "gpio60", "gpio61";
+ function = "blsp_uart0";
+ bias-disable;
+ };
+ };
+
+ spi_0_pins: spi_0_pinmux {
+ pinmux {
+ function = "blsp_spi0";
+ pins = "gpio55", "gpio56", "gpio57";
+ };
+ pinmux_cs {
+ function = "gpio";
+ pins = "gpio54";
+ };
+ pinconf {
+ pins = "gpio55", "gpio56", "gpio57";
+ drive-strength = <12>;
+ bias-disable;
+ };
+ pinconf_cs {
+ pins = "gpio54";
+ drive-strength = <2>;
+ bias-disable;
+ output-high;
+ };
+ };
+ };
+
+ blsp_dma: dma@7884000 {
+ status = "ok";
+ };
+
+ spi_0: spi@78b5000 {
+ pinctrl-0 = <&spi_0_pins>;
+ pinctrl-names = "default";
+ status = "ok";
+ cs-gpios = <&tlmm 54 0>;
+
+ mx25l25635e@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ reg = <0>;
+ compatible = "mx25l25635e";
+ spi-max-frequency = <24000000>;
+ };
+ };
+
+ serial@78af000 {
+ pinctrl-0 = <&serial_pins>;
+ pinctrl-names = "default";
+ status = "ok";
+ };
+
+ cryptobam: dma@8e04000 {
+ status = "ok";
+ };
+
+ crypto@8e3a000 {
+ status = "ok";
+ };
+
+ watchdog@b017000 {
+ status = "ok";
+ };
+ };
+};
diff --git a/arch/arm/boot/dts/qcom-ipq4019.dtsi b/arch/arm/boot/dts/qcom-ipq4019.dtsi
new file mode 100644
index 000000000000..5c08d19066c2
--- /dev/null
+++ b/arch/arm/boot/dts/qcom-ipq4019.dtsi
@@ -0,0 +1,267 @@
+/*
+ * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+/dts-v1/;
+
+#include "skeleton.dtsi"
+#include <dt-bindings/clock/qcom,gcc-ipq4019.h>
+#include <dt-bindings/interrupt-controller/arm-gic.h>
+#include <dt-bindings/interrupt-controller/irq.h>
+
+/ {
+ model = "Qualcomm Technologies, Inc. IPQ4019";
+ compatible = "qcom,ipq4019";
+ interrupt-parent = <&intc>;
+
+ aliases {
+ spi0 = &spi_0;
+ i2c0 = &i2c_0;
+ };
+
+ cpus {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ cpu@0 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a7";
+ enable-method = "qcom,kpss-acc-v1";
+ qcom,acc = <&acc0>;
+ qcom,saw = <&saw0>;
+ reg = <0x0>;
+ clocks = <&gcc GCC_APPS_CLK_SRC>;
+ clock-frequency = <0>;
+ operating-points = <
+ /* kHz uV (fixed) */
+ 48000 1100000
+ 200000 1100000
+ 500000 1100000
+ 666000 1100000
+ >;
+ clock-latency = <256000>;
+ };
+
+ cpu@1 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a7";
+ enable-method = "qcom,kpss-acc-v1";
+ qcom,acc = <&acc1>;
+ qcom,saw = <&saw1>;
+ reg = <0x1>;
+ clocks = <&gcc GCC_APPS_CLK_SRC>;
+ clock-frequency = <0>;
+ };
+
+ cpu@2 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a7";
+ enable-method = "qcom,kpss-acc-v1";
+ qcom,acc = <&acc2>;
+ qcom,saw = <&saw2>;
+ reg = <0x2>;
+ clocks = <&gcc GCC_APPS_CLK_SRC>;
+ clock-frequency = <0>;
+ };
+
+ cpu@3 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a7";
+ enable-method = "qcom,kpss-acc-v1";
+ qcom,acc = <&acc3>;
+ qcom,saw = <&saw3>;
+ reg = <0x3>;
+ clocks = <&gcc GCC_APPS_CLK_SRC>;
+ clock-frequency = <0>;
+ };
+ };
+
+ clocks {
+ sleep_clk: sleep_clk {
+ compatible = "fixed-clock";
+ clock-frequency = <32768>;
+ #clock-cells = <0>;
+ };
+ };
+
+ soc {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+ compatible = "simple-bus";
+
+ intc: interrupt-controller@b000000 {
+ compatible = "qcom,msm-qgic2";
+ interrupt-controller;
+ #interrupt-cells = <3>;
+ reg = <0x0b000000 0x1000>,
+ <0x0b002000 0x1000>;
+ };
+
+ gcc: clock-controller@1800000 {
+ compatible = "qcom,gcc-ipq4019";
+ #clock-cells = <1>;
+ #reset-cells = <1>;
+ reg = <0x1800000 0x60000>;
+ };
+
+ tlmm: pinctrl@0x01000000 {
+ compatible = "qcom,ipq4019-pinctrl";
+ reg = <0x01000000 0x300000>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ interrupts = <0 208 0>;
+ };
+
+ blsp_dma: dma@7884000 {
+ compatible = "qcom,bam-v1.7.0";
+ reg = <0x07884000 0x23000>;
+ interrupts = <GIC_SPI 238 IRQ_TYPE_NONE>;
+ clocks = <&gcc GCC_BLSP1_AHB_CLK>;
+ clock-names = "bam_clk";
+ #dma-cells = <1>;
+ qcom,ee = <0>;
+ status = "disabled";
+ };
+
+ spi_0: spi@78b5000 {
+ compatible = "qcom,spi-qup-v2.2.1";
+ reg = <0x78b5000 0x600>;
+ interrupts = <GIC_SPI 95 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&gcc GCC_BLSP1_QUP1_SPI_APPS_CLK>,
+ <&gcc GCC_BLSP1_AHB_CLK>;
+ clock-names = "core", "iface";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "disabled";
+ };
+
+ i2c_0: i2c@78b7000 {
+ compatible = "qcom,i2c-qup-v2.2.1";
+ reg = <0x78b7000 0x6000>;
+ interrupts = <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&gcc GCC_BLSP1_AHB_CLK>,
+ <&gcc GCC_BLSP1_QUP2_I2C_APPS_CLK>;
+ clock-names = "iface", "core";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "disabled";
+ };
+
+
+ cryptobam: dma@8e04000 {
+ compatible = "qcom,bam-v1.7.0";
+ reg = <0x08e04000 0x20000>;
+ interrupts = <GIC_SPI 207 0>;
+ clocks = <&gcc GCC_CRYPTO_AHB_CLK>;
+ clock-names = "bam_clk";
+ #dma-cells = <1>;
+ qcom,ee = <1>;
+ qcom,controlled-remotely;
+ status = "disabled";
+ };
+
+ crypto@8e3a000 {
+ compatible = "qcom,crypto-v5.1";
+ reg = <0x08e3a000 0x6000>;
+ clocks = <&gcc GCC_CRYPTO_AHB_CLK>,
+ <&gcc GCC_CRYPTO_AXI_CLK>,
+ <&gcc GCC_CRYPTO_CLK>;
+ clock-names = "iface", "bus", "core";
+ dmas = <&cryptobam 2>, <&cryptobam 3>;
+ dma-names = "rx", "tx";
+ status = "disabled";
+ };
+
+ acc0: clock-controller@b088000 {
+ compatible = "qcom,kpss-acc-v1";
+ reg = <0x0b088000 0x1000>, <0xb008000 0x1000>;
+ };
+
+ acc1: clock-controller@b098000 {
+ compatible = "qcom,kpss-acc-v1";
+ reg = <0x0b098000 0x1000>, <0xb008000 0x1000>;
+ };
+
+ acc2: clock-controller@b0a8000 {
+ compatible = "qcom,kpss-acc-v1";
+ reg = <0x0b0a8000 0x1000>, <0xb008000 0x1000>;
+ };
+
+ acc3: clock-controller@b0b8000 {
+ compatible = "qcom,kpss-acc-v1";
+ reg = <0x0b0b8000 0x1000>, <0xb008000 0x1000>;
+ };
+
+ saw0: regulator@b089000 {
+ compatible = "qcom,saw2";
+ reg = <0x02089000 0x1000>, <0x0b009000 0x1000>;
+ regulator;
+ };
+
+ saw1: regulator@b099000 {
+ compatible = "qcom,saw2";
+ reg = <0x0b099000 0x1000>, <0x0b009000 0x1000>;
+ regulator;
+ };
+
+ saw2: regulator@b0a9000 {
+ compatible = "qcom,saw2";
+ reg = <0x0b0a9000 0x1000>, <0x0b009000 0x1000>;
+ regulator;
+ };
+
+ saw3: regulator@b0b9000 {
+ compatible = "qcom,saw2";
+ reg = <0x0b0b9000 0x1000>, <0x0b009000 0x1000>;
+ regulator;
+ };
+
+ serial@78af000 {
+ compatible = "qcom,msm-uartdm-v1.4", "qcom,msm-uartdm";
+ reg = <0x78af000 0x200>;
+ interrupts = <0 107 0>;
+ status = "disabled";
+ clocks = <&gcc GCC_BLSP1_UART1_APPS_CLK>,
+ <&gcc GCC_BLSP1_AHB_CLK>;
+ clock-names = "core", "iface";
+ dmas = <&blsp_dma 1>, <&blsp_dma 0>;
+ dma-names = "rx", "tx";
+ };
+
+ serial@78b0000 {
+ compatible = "qcom,msm-uartdm-v1.4", "qcom,msm-uartdm";
+ reg = <0x78b0000 0x200>;
+ interrupts = <0 108 0>;
+ status = "disabled";
+ clocks = <&gcc GCC_BLSP1_UART2_APPS_CLK>,
+ <&gcc GCC_BLSP1_AHB_CLK>;
+ clock-names = "core", "iface";
+ dmas = <&blsp_dma 3>, <&blsp_dma 2>;
+ dma-names = "rx", "tx";
+ };
+
+ watchdog@b017000 {
+ compatible = "qcom,kpss-standalone";
+ reg = <0xb017000 0x40>;
+ clocks = <&sleep_clk>;
+ timeout-sec = <10>;
+ status = "disabled";
+ };
+
+ restart@4ab000 {
+ compatible = "qcom,pshold";
+ reg = <0x4ab000 0x4>;
+ };
+ };
+};
diff --git a/arch/arm/boot/dts/qcom-msm8974.dtsi b/arch/arm/boot/dts/qcom-msm8974.dtsi
index 8193139d0d87..6f164266a010 100644
--- a/arch/arm/boot/dts/qcom-msm8974.dtsi
+++ b/arch/arm/boot/dts/qcom-msm8974.dtsi
@@ -49,8 +49,13 @@
no-map;
};
- efs@0fd600000 {
- reg = <0x0fd60000 0x1a0000>;
+ rfsa@0fd60000 {
+ reg = <0x0fd60000 0x20000>;
+ no-map;
+ };
+
+ rmtfs@0fd80000 {
+ reg = <0x0fd80000 0x180000>;
no-map;
};
@@ -163,6 +168,31 @@
hwlocks = <&tcsr_mutex 3>;
};
+ smp2p-modem {
+ compatible = "qcom,smp2p";
+ qcom,smem = <435>, <428>;
+
+ interrupt-parent = <&intc>;
+ interrupts = <0 27 IRQ_TYPE_EDGE_RISING>;
+
+ qcom,ipc = <&apcs 8 14>;
+
+ qcom,local-pid = <0>;
+ qcom,remote-pid = <1>;
+
+ modem_smp2p_out: master-kernel {
+ qcom,entry-name = "master-kernel";
+ #qcom,state-cells = <1>;
+ };
+
+ modem_smp2p_in: slave-kernel {
+ qcom,entry-name = "slave-kernel";
+
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ };
+ };
+
smp2p-wcnss {
compatible = "qcom,smp2p";
qcom,smem = <451>, <431>;
@@ -440,6 +470,17 @@
interrupts = <0 208 0>;
};
+ i2c@f9924000 {
+ status = "disabled";
+ compatible = "qcom,i2c-qup-v2.1.1";
+ reg = <0xf9924000 0x1000>;
+ interrupts = <0 96 IRQ_TYPE_NONE>;
+ clocks = <&gcc GCC_BLSP1_QUP2_I2C_APPS_CLK>, <&gcc GCC_BLSP1_AHB_CLK>;
+ clock-names = "core", "iface";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ };
+
blsp_i2c8: i2c@f9964000 {
status = "disabled";
compatible = "qcom,i2c-qup-v2.1.1";
@@ -482,6 +523,13 @@
smd {
compatible = "qcom,smd";
+ modem {
+ interrupts = <0 25 IRQ_TYPE_EDGE_RISING>;
+
+ qcom,ipc = <&apcs 8 12>;
+ qcom,smd-edge = <0>;
+ };
+
rpm {
interrupts = <0 168 1>;
qcom,ipc = <&apcs 8 0>;
diff --git a/arch/arm/boot/dts/r7s72100.dtsi b/arch/arm/boot/dts/r7s72100.dtsi
index 89e46ebef1bc..e8e2a5d71976 100644
--- a/arch/arm/boot/dts/r7s72100.dtsi
+++ b/arch/arm/boot/dts/r7s72100.dtsi
@@ -37,46 +37,41 @@
#size-cells = <1>;
/* External clocks */
- extal_clk: extal_clk {
+ extal_clk: extal {
#clock-cells = <0>;
compatible = "fixed-clock";
/* If clk present, value must be set by board */
clock-frequency = <0>;
- clock-output-names = "extal";
};
- usb_x1_clk: usb_x1_clk {
+ usb_x1_clk: usb_x1 {
#clock-cells = <0>;
compatible = "fixed-clock";
/* If clk present, value must be set by board */
clock-frequency = <0>;
- clock-output-names = "usb_x1";
};
/* Fixed factor clocks */
- b_clk: b_clk {
+ b_clk: b {
#clock-cells = <0>;
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R7S72100_CLK_PLL>;
clock-mult = <1>;
clock-div = <3>;
- clock-output-names = "b";
};
- p1_clk: p1_clk {
+ p1_clk: p1 {
#clock-cells = <0>;
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R7S72100_CLK_PLL>;
clock-mult = <1>;
clock-div = <6>;
- clock-output-names = "p1";
};
- p0_clk: p0_clk {
+ p0_clk: p0 {
#clock-cells = <0>;
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R7S72100_CLK_PLL>;
clock-mult = <1>;
clock-div = <12>;
- clock-output-names = "p0";
};
/* Special CPG clocks */
diff --git a/arch/arm/boot/dts/r8a73a4-ape6evm.dts b/arch/arm/boot/dts/r8a73a4-ape6evm.dts
index 590257095700..93ace33e3e36 100644
--- a/arch/arm/boot/dts/r8a73a4-ape6evm.dts
+++ b/arch/arm/boot/dts/r8a73a4-ape6evm.dts
@@ -189,28 +189,28 @@
&pfc {
scifa0_pins: serial0 {
- renesas,groups = "scifa0_data";
- renesas,function = "scifa0";
+ groups = "scifa0_data";
+ function = "scifa0";
};
mmc0_pins: mmc {
- renesas,groups = "mmc0_data8", "mmc0_ctrl";
- renesas,function = "mmc0";
+ groups = "mmc0_data8", "mmc0_ctrl";
+ function = "mmc0";
};
sdhi0_pins: sd0 {
- renesas,groups = "sdhi0_data4", "sdhi0_ctrl", "sdhi0_cd";
- renesas,function = "sdhi0";
+ groups = "sdhi0_data4", "sdhi0_ctrl", "sdhi0_cd";
+ function = "sdhi0";
};
sdhi1_pins: sd1 {
- renesas,groups = "sdhi1_data4", "sdhi1_ctrl";
- renesas,function = "sdhi1";
+ groups = "sdhi1_data4", "sdhi1_ctrl";
+ function = "sdhi1";
};
keyboard_pins: keyboard {
- renesas,pins = "PORT324", "PORT325", "PORT326", "PORT327",
- "PORT328", "PORT329";
+ pins = "PORT324", "PORT325", "PORT326", "PORT327", "PORT328",
+ "PORT329";
bias-pull-up;
};
};
diff --git a/arch/arm/boot/dts/r8a73a4.dtsi b/arch/arm/boot/dts/r8a73a4.dtsi
index 6583a1dfca1f..6954912a3753 100644
--- a/arch/arm/boot/dts/r8a73a4.dtsi
+++ b/arch/arm/boot/dts/r8a73a4.dtsi
@@ -486,37 +486,32 @@
ranges;
/* External root clocks */
- extalr_clk: extalr_clk {
+ extalr_clk: extalr {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <32768>;
- clock-output-names = "extalr";
};
- extal1_clk: extal1_clk {
+ extal1_clk: extal1 {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <25000000>;
- clock-output-names = "extal1";
};
- extal2_clk: extal2_clk {
+ extal2_clk: extal2 {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <48000000>;
- clock-output-names = "extal2";
};
- fsiack_clk: fsiack_clk {
+ fsiack_clk: fsiack {
compatible = "fixed-clock";
#clock-cells = <0>;
/* This value must be overridden by the board. */
clock-frequency = <0>;
- clock-output-names = "fsiack";
};
- fsibck_clk: fsibck_clk {
+ fsibck_clk: fsibck {
compatible = "fixed-clock";
#clock-cells = <0>;
/* This value must be overridden by the board. */
clock-frequency = <0>;
- clock-output-names = "fsibck";
};
/* Special CPG clocks */
@@ -540,171 +535,151 @@
#clock-cells = <0>;
clock-output-names = "zb";
};
- sdhi0_clk: sdhi0_clk@e6150074 {
+ sdhi0_clk: sdhi0ck@e6150074 {
compatible = "renesas,r8a73a4-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150074 0 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks R8A73A4_CLK_PLL2S>,
<0>, <&extal2_clk>;
#clock-cells = <0>;
- clock-output-names = "sdhi0ck";
};
- sdhi1_clk: sdhi1_clk@e6150078 {
+ sdhi1_clk: sdhi1ck@e6150078 {
compatible = "renesas,r8a73a4-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150078 0 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks R8A73A4_CLK_PLL2S>,
<0>, <&extal2_clk>;
#clock-cells = <0>;
- clock-output-names = "sdhi1ck";
};
- sdhi2_clk: sdhi2_clk@e615007c {
+ sdhi2_clk: sdhi2ck@e615007c {
compatible = "renesas,r8a73a4-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe615007c 0 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks R8A73A4_CLK_PLL2S>,
<0>, <&extal2_clk>;
#clock-cells = <0>;
- clock-output-names = "sdhi2ck";
};
- mmc0_clk: mmc0_clk@e6150240 {
+ mmc0_clk: mmc0@e6150240 {
compatible = "renesas,r8a73a4-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150240 0 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks R8A73A4_CLK_PLL2S>,
<0>, <&extal2_clk>;
#clock-cells = <0>;
- clock-output-names = "mmc0";
};
- mmc1_clk: mmc1_clk@e6150244 {
+ mmc1_clk: mmc1@e6150244 {
compatible = "renesas,r8a73a4-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150244 0 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks R8A73A4_CLK_PLL2S>,
<0>, <&extal2_clk>;
#clock-cells = <0>;
- clock-output-names = "mmc1";
};
- vclk1_clk: vclk1_clk@e6150008 {
+ vclk1_clk: vclk1@e6150008 {
compatible = "renesas,r8a73a4-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150008 0 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks R8A73A4_CLK_PLL2S>,
<0>, <&extal2_clk>, <&main_div2_clk>,
<&extalr_clk>, <0>, <0>;
#clock-cells = <0>;
- clock-output-names = "vclk1";
};
- vclk2_clk: vclk2_clk@e615000c {
+ vclk2_clk: vclk2@e615000c {
compatible = "renesas,r8a73a4-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe615000c 0 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks R8A73A4_CLK_PLL2S>,
<0>, <&extal2_clk>, <&main_div2_clk>,
<&extalr_clk>, <0>, <0>;
#clock-cells = <0>;
- clock-output-names = "vclk2";
};
- vclk3_clk: vclk3_clk@e615001c {
+ vclk3_clk: vclk3@e615001c {
compatible = "renesas,r8a73a4-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe615001c 0 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks R8A73A4_CLK_PLL2S>,
<0>, <&extal2_clk>, <&main_div2_clk>,
<&extalr_clk>, <0>, <0>;
#clock-cells = <0>;
- clock-output-names = "vclk3";
};
- vclk4_clk: vclk4_clk@e6150014 {
+ vclk4_clk: vclk4@e6150014 {
compatible = "renesas,r8a73a4-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150014 0 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks R8A73A4_CLK_PLL2S>,
<0>, <&extal2_clk>, <&main_div2_clk>,
<&extalr_clk>, <0>, <0>;
#clock-cells = <0>;
- clock-output-names = "vclk4";
};
- vclk5_clk: vclk5_clk@e6150034 {
+ vclk5_clk: vclk5@e6150034 {
compatible = "renesas,r8a73a4-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150034 0 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks R8A73A4_CLK_PLL2S>,
<0>, <&extal2_clk>, <&main_div2_clk>,
<&extalr_clk>, <0>, <0>;
#clock-cells = <0>;
- clock-output-names = "vclk5";
};
- fsia_clk: fsia_clk@e6150018 {
+ fsia_clk: fsia@e6150018 {
compatible = "renesas,r8a73a4-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150018 0 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks R8A73A4_CLK_PLL2S>,
<&fsiack_clk>, <0>;
#clock-cells = <0>;
- clock-output-names = "fsia";
};
- fsib_clk: fsib_clk@e6150090 {
+ fsib_clk: fsib@e6150090 {
compatible = "renesas,r8a73a4-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150090 0 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks R8A73A4_CLK_PLL2S>,
<&fsibck_clk>, <0>;
#clock-cells = <0>;
- clock-output-names = "fsib";
};
- mp_clk: mp_clk@e6150080 {
+ mp_clk: mp@e6150080 {
compatible = "renesas,r8a73a4-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150080 0 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks R8A73A4_CLK_PLL2S>,
<&extal2_clk>, <&extal2_clk>;
#clock-cells = <0>;
- clock-output-names = "mp";
};
- m4_clk: m4_clk@e6150098 {
+ m4_clk: m4@e6150098 {
compatible = "renesas,r8a73a4-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150098 0 4>;
clocks = <&cpg_clocks R8A73A4_CLK_PLL2S>;
#clock-cells = <0>;
- clock-output-names = "m4";
};
- hsi_clk: hsi_clk@e615026c {
+ hsi_clk: hsi@e615026c {
compatible = "renesas,r8a73a4-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe615026c 0 4>;
clocks = <&cpg_clocks R8A73A4_CLK_PLL2H>, <&pll1_div2_clk>,
<&cpg_clocks R8A73A4_CLK_PLL2S>, <0>;
#clock-cells = <0>;
- clock-output-names = "hsi";
};
- spuv_clk: spuv_clk@e6150094 {
+ spuv_clk: spuv@e6150094 {
compatible = "renesas,r8a73a4-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150094 0 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks R8A73A4_CLK_PLL2S>,
<&extal2_clk>, <&extal2_clk>;
#clock-cells = <0>;
- clock-output-names = "spuv";
};
/* Fixed factor clocks */
- main_div2_clk: main_div2_clk {
+ main_div2_clk: main_div2 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A73A4_CLK_MAIN>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "main_div2";
};
- pll0_div2_clk: pll0_div2_clk {
+ pll0_div2_clk: pll0_div2 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A73A4_CLK_PLL0>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "pll0_div2";
};
- pll1_div2_clk: pll1_div2_clk {
+ pll1_div2_clk: pll1_div2 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A73A4_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "pll1_div2";
};
- extal1_div2_clk: extal1_div2_clk {
+ extal1_div2_clk: extal1_div2 {
compatible = "fixed-factor-clock";
clocks = <&extal1_clk>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "extal1_div2";
};
/* Gate clocks */
diff --git a/arch/arm/boot/dts/r8a7740-armadillo800eva.dts b/arch/arm/boot/dts/r8a7740-armadillo800eva.dts
index c548cabb102f..2c82dab2b6f4 100644
--- a/arch/arm/boot/dts/r8a7740-armadillo800eva.dts
+++ b/arch/arm/boot/dts/r8a7740-armadillo800eva.dts
@@ -228,44 +228,44 @@
pinctrl-names = "default";
ether_pins: ether {
- renesas,groups = "gether_mii", "gether_int";
- renesas,function = "gether";
+ groups = "gether_mii", "gether_int";
+ function = "gether";
};
scifa1_pins: serial1 {
- renesas,groups = "scifa1_data";
- renesas,function = "scifa1";
+ groups = "scifa1_data";
+ function = "scifa1";
};
st1232_pins: touchscreen {
- renesas,groups = "intc_irq10";
- renesas,function = "intc";
+ groups = "intc_irq10";
+ function = "intc";
};
backlight_pins: backlight {
- renesas,groups = "tpu0_to2_1";
- renesas,function = "tpu0";
+ groups = "tpu0_to2_1";
+ function = "tpu0";
};
mmc0_pins: mmc0 {
- renesas,groups = "mmc0_data8_1", "mmc0_ctrl_1";
- renesas,function = "mmc0";
+ groups = "mmc0_data8_1", "mmc0_ctrl_1";
+ function = "mmc0";
};
sdhi0_pins: sd0 {
- renesas,groups = "sdhi0_data4", "sdhi0_ctrl", "sdhi0_wp";
- renesas,function = "sdhi0";
+ groups = "sdhi0_data4", "sdhi0_ctrl", "sdhi0_wp";
+ function = "sdhi0";
};
fsia_pins: sounda {
- renesas,groups = "fsia_sclk_in", "fsia_mclk_out",
- "fsia_data_in_1", "fsia_data_out_0";
- renesas,function = "fsia";
+ groups = "fsia_sclk_in", "fsia_mclk_out",
+ "fsia_data_in_1", "fsia_data_out_0";
+ function = "fsia";
};
lcd0_pins: lcd0 {
- renesas,groups = "lcd0_data24_0", "lcd0_lclk_1", "lcd0_sync";
- renesas,function = "lcd0";
+ groups = "lcd0_data24_0", "lcd0_lclk_1", "lcd0_sync";
+ function = "lcd0";
/* DBGMD/LCDC0/FSIA MUX */
gpio-hog;
diff --git a/arch/arm/boot/dts/r8a7740.dtsi b/arch/arm/boot/dts/r8a7740.dtsi
index 995fbda74b7a..39b2f88ad151 100644
--- a/arch/arm/boot/dts/r8a7740.dtsi
+++ b/arch/arm/boot/dts/r8a7740.dtsi
@@ -422,53 +422,45 @@
ranges;
/* External root clock */
- extalr_clk: extalr_clk {
+ extalr_clk: extalr {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <32768>;
- clock-output-names = "extalr";
};
- extal1_clk: extal1_clk {
+ extal1_clk: extal1 {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
- clock-output-names = "extal1";
};
- extal2_clk: extal2_clk {
+ extal2_clk: extal2 {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
- clock-output-names = "extal2";
};
- dv_clk: dv_clk {
+ dv_clk: dv {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <27000000>;
- clock-output-names = "dv";
};
- fmsick_clk: fmsick_clk {
+ fmsick_clk: fmsick {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
- clock-output-names = "fmsick";
};
- fmsock_clk: fmsock_clk {
+ fmsock_clk: fmsock {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
- clock-output-names = "fmsock";
};
- fsiack_clk: fsiack_clk {
+ fsiack_clk: fsiack {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
- clock-output-names = "fsiack";
};
- fsibck_clk: fsibck_clk {
+ fsibck_clk: fsibck {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
- clock-output-names = "fsibck";
};
/* Special CPG clocks */
@@ -486,7 +478,7 @@
};
/* Variable factor clocks (DIV6) */
- vclk1_clk: vclk1_clk@e6150008 {
+ vclk1_clk: vclk1@e6150008 {
compatible = "renesas,r8a7740-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe6150008 4>;
clocks = <&pllc1_div2_clk>, <0>, <&dv_clk>,
@@ -494,9 +486,8 @@
<&extal1_div2_clk>, <&extalr_clk>, <0>,
<0>;
#clock-cells = <0>;
- clock-output-names = "vclk1";
};
- vclk2_clk: vclk2_clk@e615000c {
+ vclk2_clk: vclk2@e615000c {
compatible = "renesas,r8a7740-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe615000c 4>;
clocks = <&pllc1_div2_clk>, <0>, <&dv_clk>,
@@ -504,77 +495,67 @@
<&extal1_div2_clk>, <&extalr_clk>, <0>,
<0>;
#clock-cells = <0>;
- clock-output-names = "vclk2";
};
- fmsi_clk: fmsi_clk@e6150010 {
+ fmsi_clk: fmsi@e6150010 {
compatible = "renesas,r8a7740-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe6150010 4>;
clocks = <&pllc1_div2_clk>, <&fmsick_clk>, <0>, <0>;
#clock-cells = <0>;
- clock-output-names = "fmsi";
};
- fmso_clk: fmso_clk@e6150014 {
+ fmso_clk: fmso@e6150014 {
compatible = "renesas,r8a7740-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe6150014 4>;
clocks = <&pllc1_div2_clk>, <&fmsock_clk>, <0>, <0>;
#clock-cells = <0>;
- clock-output-names = "fmso";
};
- fsia_clk: fsia_clk@e6150018 {
+ fsia_clk: fsia@e6150018 {
compatible = "renesas,r8a7740-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe6150018 4>;
clocks = <&pllc1_div2_clk>, <&fsiack_clk>, <0>, <0>;
#clock-cells = <0>;
- clock-output-names = "fsia";
};
- sub_clk: sub_clk@e6150080 {
+ sub_clk: sub@e6150080 {
compatible = "renesas,r8a7740-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe6150080 4>;
clocks = <&pllc1_div2_clk>,
<&cpg_clocks R8A7740_CLK_USB24S>, <0>, <0>;
#clock-cells = <0>;
- clock-output-names = "sub";
};
- spu_clk: spu_clk@e6150084 {
+ spu_clk: spu@e6150084 {
compatible = "renesas,r8a7740-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe6150084 4>;
clocks = <&pllc1_div2_clk>,
<&cpg_clocks R8A7740_CLK_USB24S>, <0>, <0>;
#clock-cells = <0>;
- clock-output-names = "spu";
};
- vou_clk: vou_clk@e6150088 {
+ vou_clk: vou@e6150088 {
compatible = "renesas,r8a7740-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe6150088 4>;
clocks = <&pllc1_div2_clk>, <&extal1_clk>, <&dv_clk>,
<0>;
#clock-cells = <0>;
- clock-output-names = "vou";
};
- stpro_clk: stpro_clk@e615009c {
+ stpro_clk: stpro@e615009c {
compatible = "renesas,r8a7740-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe615009c 4>;
clocks = <&cpg_clocks R8A7740_CLK_PLLC0>;
#clock-cells = <0>;
- clock-output-names = "stpro";
};
/* Fixed factor clocks */
- pllc1_div2_clk: pllc1_div2_clk {
+ pllc1_div2_clk: pllc1_div2 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7740_CLK_PLLC1>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "pllc1_div2";
};
- extal1_div2_clk: extal1_div2_clk {
+ extal1_div2_clk: extal1_div2 {
compatible = "fixed-factor-clock";
clocks = <&extal1_clk>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "extal1_div2";
};
/* Gate clocks */
diff --git a/arch/arm/boot/dts/r8a7778-bockw.dts b/arch/arm/boot/dts/r8a7778-bockw.dts
index 21e3b9dda2da..e0dab1464648 100644
--- a/arch/arm/boot/dts/r8a7778-bockw.dts
+++ b/arch/arm/boot/dts/r8a7778-bockw.dts
@@ -130,53 +130,53 @@
pinctrl-names = "default";
scif0_pins: serial0 {
- renesas,groups = "scif0_data_a", "scif0_ctrl";
- renesas,function = "scif0";
+ groups = "scif0_data_a", "scif0_ctrl";
+ function = "scif0";
};
scif_clk_pins: scif_clk {
- renesas,groups = "scif_clk";
- renesas,function = "scif_clk";
+ groups = "scif_clk";
+ function = "scif_clk";
};
mmc_pins: mmc {
- renesas,groups = "mmc_data8", "mmc_ctrl";
- renesas,function = "mmc";
+ groups = "mmc_data8", "mmc_ctrl";
+ function = "mmc";
};
sdhi0_pins: sd0 {
- renesas,groups = "sdhi0_data4", "sdhi0_ctrl";
- renesas,function = "sdhi0";
+ groups = "sdhi0_data4", "sdhi0_ctrl";
+ function = "sdhi0";
};
sdhi0_pup_pins: sd0_pup {
- renesas,groups = "sdhi0_cd", "sdhi0_wp";
- renesas,function = "sdhi0";
+ groups = "sdhi0_cd", "sdhi0_wp";
+ function = "sdhi0";
bias-pull-up;
};
hspi0_pins: hspi0 {
- renesas,groups = "hspi0_a";
- renesas,function = "hspi0";
+ groups = "hspi0_a";
+ function = "hspi0";
};
usb0_pins: usb0 {
- renesas,groups = "usb0";
- renesas,function = "usb0";
+ groups = "usb0";
+ function = "usb0";
};
usb1_pins: usb1 {
- renesas,groups = "usb1";
- renesas,function = "usb1";
+ groups = "usb1";
+ function = "usb1";
};
vin0_pins: vin0 {
- renesas,groups = "vin0_data8", "vin0_clk";
- renesas,function = "vin0";
+ groups = "vin0_data8", "vin0_clk";
+ function = "vin0";
};
vin1_pins: vin1 {
- renesas,groups = "vin1_data8", "vin1_clk";
- renesas,function = "vin1";
+ groups = "vin1_data8", "vin1_clk";
+ function = "vin1";
};
};
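The bockw hunks make a single mechanical substitution: the vendor-prefixed renesas,groups/renesas,function properties become the generic pinctrl groups/function properties, with the values unchanged. As a hedged sketch of how a consumer picks such a group up (the scif0 consumer node shown here is illustrative and not part of this patch; the group names are the ones visible above):

	&pfc {
		scif0_pins: serial0 {
			groups = "scif0_data_a", "scif0_ctrl";
			function = "scif0";
		};
	};

	&scif0 {
		pinctrl-0 = <&scif0_pins>;
		pinctrl-names = "default";
		status = "okay";
	};
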
diff --git a/arch/arm/boot/dts/r8a7778.dtsi b/arch/arm/boot/dts/r8a7778.dtsi
index f83a348fc07a..fe787b4751d2 100644
--- a/arch/arm/boot/dts/r8a7778.dtsi
+++ b/arch/arm/boot/dts/r8a7778.dtsi
@@ -443,11 +443,10 @@
ranges;
/* External input clock */
- extal_clk: extal_clk {
+ extal_clk: extal {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
- clock-output-names = "extal";
};
/* External SCIF clock */
@@ -456,7 +455,6 @@
#clock-cells = <0>;
/* This value must be overridden by the board. */
clock-frequency = <0>;
- status = "disabled";
};
/* Special CPG clocks */
@@ -474,59 +472,51 @@
audio_clk_a: audio_clk_a {
compatible = "fixed-clock";
#clock-cells = <0>;
- clock-output-names = "audio_clk_a";
};
audio_clk_b: audio_clk_b {
compatible = "fixed-clock";
#clock-cells = <0>;
- clock-output-names = "audio_clk_b";
};
audio_clk_c: audio_clk_c {
compatible = "fixed-clock";
#clock-cells = <0>;
- clock-output-names = "audio_clk_c";
};
/* Fixed ratio clocks */
- g_clk: g_clk {
+ g_clk: g {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7778_CLK_PLLA>;
#clock-cells = <0>;
clock-div = <12>;
clock-mult = <1>;
- clock-output-names = "g";
};
- i_clk: i_clk {
+ i_clk: i {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7778_CLK_PLLA>;
#clock-cells = <0>;
clock-div = <1>;
clock-mult = <1>;
- clock-output-names = "i";
};
- s3_clk: s3_clk {
+ s3_clk: s3 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7778_CLK_PLLA>;
#clock-cells = <0>;
clock-div = <4>;
clock-mult = <1>;
- clock-output-names = "s3";
};
- s4_clk: s4_clk {
+ s4_clk: s4 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7778_CLK_PLLA>;
#clock-cells = <0>;
clock-div = <8>;
clock-mult = <1>;
- clock-output-names = "s4";
};
- z_clk: z_clk {
+ z_clk: z {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7778_CLK_PLLB>;
#clock-cells = <0>;
clock-div = <1>;
clock-mult = <1>;
- clock-output-names = "z";
};
/* Gate clocks */
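In the r8a7778 SoC file the external clocks are now described with clock-frequency = <0> and without status = "disabled", so a board that provides them simply overrides the rate. A minimal board-side sketch (both frequency values are illustrative placeholders, not taken from this patch):

	&extal_clk {
		/* illustrative board crystal rate */
		clock-frequency = <33333333>;
	};

	&scif_clk {
		/* illustrative external SCIF clock rate */
		clock-frequency = <14745600>;
	};
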
diff --git a/arch/arm/boot/dts/r8a7779-marzen.dts b/arch/arm/boot/dts/r8a7779-marzen.dts
index e111d35d02ae..b795da6f5503 100644
--- a/arch/arm/boot/dts/r8a7779-marzen.dts
+++ b/arch/arm/boot/dts/r8a7779-marzen.dts
@@ -170,49 +170,49 @@
du_pins: du {
du0 {
- renesas,groups = "du0_rgb888", "du0_sync_1", "du0_clk_out_0";
- renesas,function = "du0";
+ groups = "du0_rgb888", "du0_sync_1", "du0_clk_out_0";
+ function = "du0";
};
du1 {
- renesas,groups = "du1_rgb666", "du1_sync_1", "du1_clk_out";
- renesas,function = "du1";
+ groups = "du1_rgb666", "du1_sync_1", "du1_clk_out";
+ function = "du1";
};
};
scif_clk_pins: scif_clk {
- renesas,groups = "scif_clk_b";
- renesas,function = "scif_clk";
+ groups = "scif_clk_b";
+ function = "scif_clk";
};
ethernet_pins: ethernet {
intc {
- renesas,groups = "intc_irq1_b";
- renesas,function = "intc";
+ groups = "intc_irq1_b";
+ function = "intc";
};
lbsc {
- renesas,groups = "lbsc_ex_cs0";
- renesas,function = "lbsc";
+ groups = "lbsc_ex_cs0";
+ function = "lbsc";
};
};
scif2_pins: serial2 {
- renesas,groups = "scif2_data_c";
- renesas,function = "scif2";
+ groups = "scif2_data_c";
+ function = "scif2";
};
scif4_pins: serial4 {
- renesas,groups = "scif4_data";
- renesas,function = "scif4";
+ groups = "scif4_data";
+ function = "scif4";
};
sdhi0_pins: sd0 {
- renesas,groups = "sdhi0_data4", "sdhi0_ctrl", "sdhi0_cd";
- renesas,function = "sdhi0";
+ groups = "sdhi0_data4", "sdhi0_ctrl", "sdhi0_cd";
+ function = "sdhi0";
};
hspi0_pins: hspi0 {
- renesas,groups = "hspi0";
- renesas,function = "hspi0";
+ groups = "hspi0";
+ function = "hspi0";
};
};
diff --git a/arch/arm/boot/dts/r8a7779.dtsi b/arch/arm/boot/dts/r8a7779.dtsi
index a0cc08e6295b..b9bbcce69dfb 100644
--- a/arch/arm/boot/dts/r8a7779.dtsi
+++ b/arch/arm/boot/dts/r8a7779.dtsi
@@ -14,6 +14,7 @@
#include <dt-bindings/clock/r8a7779-clock.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/interrupt-controller/irq.h>
+#include <dt-bindings/power/r8a7779-sysc.h>
/ {
compatible = "renesas,r8a7779";
@@ -34,18 +35,21 @@
compatible = "arm,cortex-a9";
reg = <1>;
clock-frequency = <1000000000>;
+ power-domains = <&sysc R8A7779_PD_ARM1>;
};
cpu@2 {
device_type = "cpu";
compatible = "arm,cortex-a9";
reg = <2>;
clock-frequency = <1000000000>;
+ power-domains = <&sysc R8A7779_PD_ARM2>;
};
cpu@3 {
device_type = "cpu";
compatible = "arm,cortex-a9";
reg = <3>;
clock-frequency = <1000000000>;
+ power-domains = <&sysc R8A7779_PD_ARM3>;
};
};
@@ -67,7 +71,7 @@
compatible = "arm,cortex-a9-twd-timer";
reg = <0xf0000600 0x20>;
interrupts = <GIC_PPI 13
- (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_HIGH)>;
+ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_EDGE_RISING)>;
clocks = <&cpg_clocks R8A7779_CLK_ZS>;
};
@@ -173,7 +177,7 @@
reg = <0xffc70000 0x1000>;
interrupts = <GIC_SPI 79 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp0_clks R8A7779_CLK_I2C0>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -184,7 +188,7 @@
reg = <0xffc71000 0x1000>;
interrupts = <GIC_SPI 82 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp0_clks R8A7779_CLK_I2C1>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -195,7 +199,7 @@
reg = <0xffc72000 0x1000>;
interrupts = <GIC_SPI 80 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp0_clks R8A7779_CLK_I2C2>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -206,7 +210,7 @@
reg = <0xffc73000 0x1000>;
interrupts = <GIC_SPI 81 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp0_clks R8A7779_CLK_I2C3>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -218,7 +222,7 @@
clocks = <&mstp0_clks R8A7779_CLK_SCIF0>,
<&cpg_clocks R8A7779_CLK_S1>, <&scif_clk>;
clock-names = "fck", "brg_int", "scif_clk";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -230,7 +234,7 @@
clocks = <&mstp0_clks R8A7779_CLK_SCIF1>,
<&cpg_clocks R8A7779_CLK_S1>, <&scif_clk>;
clock-names = "fck", "brg_int", "scif_clk";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -242,7 +246,7 @@
clocks = <&mstp0_clks R8A7779_CLK_SCIF2>,
<&cpg_clocks R8A7779_CLK_S1>, <&scif_clk>;
clock-names = "fck", "brg_int", "scif_clk";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -254,7 +258,7 @@
clocks = <&mstp0_clks R8A7779_CLK_SCIF3>,
<&cpg_clocks R8A7779_CLK_S1>, <&scif_clk>;
clock-names = "fck", "brg_int", "scif_clk";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -266,7 +270,7 @@
clocks = <&mstp0_clks R8A7779_CLK_SCIF4>,
<&cpg_clocks R8A7779_CLK_S1>, <&scif_clk>;
clock-names = "fck", "brg_int", "scif_clk";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -278,7 +282,7 @@
clocks = <&mstp0_clks R8A7779_CLK_SCIF5>,
<&cpg_clocks R8A7779_CLK_S1>, <&scif_clk>;
clock-names = "fck", "brg_int", "scif_clk";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -300,7 +304,7 @@
<GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp0_clks R8A7779_CLK_TMU0>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
#renesas,channels = <3>;
@@ -315,7 +319,7 @@
<GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp0_clks R8A7779_CLK_TMU1>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
#renesas,channels = <3>;
@@ -330,7 +334,7 @@
<GIC_SPI 42 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp0_clks R8A7779_CLK_TMU2>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
#renesas,channels = <3>;
@@ -342,7 +346,7 @@
reg = <0xfc600000 0x2000>;
interrupts = <GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp1_clks R8A7779_CLK_SATA>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
};
sdhi0: sd@ffe4c000 {
@@ -350,7 +354,7 @@
reg = <0xffe4c000 0x100>;
interrupts = <GIC_SPI 104 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp3_clks R8A7779_CLK_SDHI0>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -359,7 +363,7 @@
reg = <0xffe4d000 0x100>;
interrupts = <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp3_clks R8A7779_CLK_SDHI1>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -368,7 +372,7 @@
reg = <0xffe4e000 0x100>;
interrupts = <GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp3_clks R8A7779_CLK_SDHI2>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -377,7 +381,7 @@
reg = <0xffe4f000 0x100>;
interrupts = <GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp3_clks R8A7779_CLK_SDHI3>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -388,7 +392,7 @@
#address-cells = <1>;
#size-cells = <0>;
clocks = <&mstp0_clks R8A7779_CLK_HSPI>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -399,7 +403,7 @@
#address-cells = <1>;
#size-cells = <0>;
clocks = <&mstp0_clks R8A7779_CLK_HSPI>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -410,7 +414,7 @@
#address-cells = <1>;
#size-cells = <0>;
clocks = <&mstp0_clks R8A7779_CLK_HSPI>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -419,7 +423,7 @@
reg = <0 0xfff80000 0 0x40000>;
interrupts = <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp1_clks R8A7779_CLK_DU>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
status = "disabled";
ports {
@@ -445,12 +449,11 @@
ranges;
/* External root clock */
- extal_clk: extal_clk {
+ extal_clk: extal {
compatible = "fixed-clock";
#clock-cells = <0>;
/* This value must be overridden by the board. */
clock-frequency = <0>;
- clock-output-names = "extal";
};
/* External SCIF clock */
@@ -459,7 +462,6 @@
#clock-cells = <0>;
/* This value must be overridden by the board. */
clock-frequency = <0>;
- status = "disabled";
};
/* Special CPG clocks */
@@ -474,37 +476,33 @@
};
/* Fixed factor clocks */
- i_clk: i_clk {
+ i_clk: i {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7779_CLK_PLLA>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "i";
};
- s3_clk: s3_clk {
+ s3_clk: s3 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7779_CLK_PLLA>;
#clock-cells = <0>;
clock-div = <8>;
clock-mult = <1>;
- clock-output-names = "s3";
};
- s4_clk: s4_clk {
+ s4_clk: s4 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7779_CLK_PLLA>;
#clock-cells = <0>;
clock-div = <16>;
clock-mult = <1>;
- clock-output-names = "s4";
};
- g_clk: g_clk {
+ g_clk: g {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7779_CLK_PLLA>;
#clock-cells = <0>;
clock-div = <24>;
clock-mult = <1>;
- clock-output-names = "g";
};
/* Gate clocks */
@@ -591,4 +589,10 @@
"mmc1", "mmc0";
};
};
+
+ sysc: system-controller@ffd85000 {
+ compatible = "renesas,r8a7779-sysc";
+ reg = <0xffd85000 0x0200>;
+ #power-domain-cells = <1>;
+ };
};
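The r8a7779 hunks introduce a SYSC node with #power-domain-cells = <1> and repoint every power-domains consumer from the clock controller to a named power area. A hedged sketch of a consumer after this change, reusing the I2C0 resources visible in the hunks above (the compatible string is assumed, not shown in this patch):

	i2c0: i2c@ffc70000 {
		compatible = "renesas,i2c-r8a7779";	/* assumed */
		reg = <0xffc70000 0x1000>;
		interrupts = <GIC_SPI 79 IRQ_TYPE_LEVEL_HIGH>;
		clocks = <&mstp0_clks R8A7779_CLK_I2C0>;
		power-domains = <&sysc R8A7779_PD_ALWAYS_ON>;
		status = "disabled";
	};
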
diff --git a/arch/arm/boot/dts/r8a7790-lager.dts b/arch/arm/boot/dts/r8a7790-lager.dts
index aa6ca92a9485..749ba02b6a53 100644
--- a/arch/arm/boot/dts/r8a7790-lager.dts
+++ b/arch/arm/boot/dts/r8a7790-lager.dts
@@ -176,11 +176,10 @@
1800000 0>;
};
- audio_clock: clock {
+ audio_clock: audio_clock {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <11289600>;
- clock-output-names = "audio_clock";
};
rsnd_ak4643: sound {
@@ -314,119 +313,133 @@
pinctrl-names = "default";
du_pins: du {
- renesas,groups = "du_rgb666", "du_sync_1", "du_clk_out_0";
- renesas,function = "du";
+ groups = "du_rgb666", "du_sync_1", "du_clk_out_0";
+ function = "du";
};
scif0_pins: serial0 {
- renesas,groups = "scif0_data";
- renesas,function = "scif0";
+ groups = "scif0_data";
+ function = "scif0";
};
scif_clk_pins: scif_clk {
- renesas,groups = "scif_clk";
- renesas,function = "scif_clk";
+ groups = "scif_clk";
+ function = "scif_clk";
};
ether_pins: ether {
- renesas,groups = "eth_link", "eth_mdio", "eth_rmii";
- renesas,function = "eth";
+ groups = "eth_link", "eth_mdio", "eth_rmii";
+ function = "eth";
};
phy1_pins: phy1 {
- renesas,groups = "intc_irq0";
- renesas,function = "intc";
+ groups = "intc_irq0";
+ function = "intc";
};
scifa1_pins: serial1 {
- renesas,groups = "scifa1_data";
- renesas,function = "scifa1";
+ groups = "scifa1_data";
+ function = "scifa1";
};
sdhi0_pins: sd0 {
- renesas,groups = "sdhi0_data4", "sdhi0_ctrl";
- renesas,function = "sdhi0";
+ groups = "sdhi0_data4", "sdhi0_ctrl";
+ function = "sdhi0";
+ power-source = <3300>;
+ };
+
+ sdhi0_pins_uhs: sd0_uhs {
+ groups = "sdhi0_data4", "sdhi0_ctrl";
+ function = "sdhi0";
+ power-source = <1800>;
};
sdhi2_pins: sd2 {
- renesas,groups = "sdhi2_data4", "sdhi2_ctrl";
- renesas,function = "sdhi2";
+ groups = "sdhi2_data4", "sdhi2_ctrl";
+ function = "sdhi2";
+ power-source = <3300>;
+ };
+
+ sdhi2_pins_uhs: sd2_uhs {
+ groups = "sdhi2_data4", "sdhi2_ctrl";
+ function = "sdhi2";
+ power-source = <1800>;
};
mmc1_pins: mmc1 {
- renesas,groups = "mmc1_data8", "mmc1_ctrl";
- renesas,function = "mmc1";
+ groups = "mmc1_data8", "mmc1_ctrl";
+ function = "mmc1";
};
qspi_pins: spi0 {
- renesas,groups = "qspi_ctrl", "qspi_data4";
- renesas,function = "qspi";
+ groups = "qspi_ctrl", "qspi_data4";
+ function = "qspi";
};
msiof1_pins: spi2 {
- renesas,groups = "msiof1_clk", "msiof1_sync", "msiof1_rx",
+ groups = "msiof1_clk", "msiof1_sync", "msiof1_rx",
"msiof1_tx";
- renesas,function = "msiof1";
+ function = "msiof1";
};
i2c0_pins: i2c0 {
- renesas,groups = "i2c0";
- renesas,function = "i2c0";
+ groups = "i2c0";
+ function = "i2c0";
};
iic0_pins: iic0 {
- renesas,groups = "iic0";
- renesas,function = "iic0";
+ groups = "iic0";
+ function = "iic0";
};
iic1_pins: iic1 {
- renesas,groups = "iic1";
- renesas,function = "iic1";
+ groups = "iic1";
+ function = "iic1";
};
iic2_pins: iic2 {
- renesas,groups = "iic2";
- renesas,function = "iic2";
+ groups = "iic2";
+ function = "iic2";
};
iic3_pins: iic3 {
- renesas,groups = "iic3";
- renesas,function = "iic3";
+ groups = "iic3";
+ function = "iic3";
};
hsusb_pins: hsusb {
- renesas,groups = "usb0_ovc_vbus";
- renesas,function = "usb0";
+ groups = "usb0_ovc_vbus";
+ function = "usb0";
};
usb0_pins: usb0 {
- renesas,groups = "usb0";
- renesas,function = "usb0";
+ groups = "usb0";
+ function = "usb0";
};
usb1_pins: usb1 {
- renesas,groups = "usb1";
- renesas,function = "usb1";
+ groups = "usb1";
+ function = "usb1";
};
usb2_pins: usb2 {
- renesas,groups = "usb2";
- renesas,function = "usb2";
+ groups = "usb2";
+ function = "usb2";
};
vin1_pins: vin {
- renesas,groups = "vin1_data8", "vin1_clk";
- renesas,function = "vin1";
+ groups = "vin1_data8", "vin1_clk";
+ function = "vin1";
};
sound_pins: sound {
- renesas,groups = "ssi0129_ctrl", "ssi0_data", "ssi1_data";
- renesas,function = "ssi";
+ groups = "ssi0129_ctrl", "ssi0_data", "ssi1_data";
+ function = "ssi";
};
sound_clk_pins: sound_clk {
- renesas,groups = "audio_clk_a";
- renesas,function = "audio_clk";
+ groups = "audio_clk_a";
+ function = "audio_clk";
};
};
@@ -539,21 +552,25 @@
&sdhi0 {
pinctrl-0 = <&sdhi0_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&sdhi0_pins_uhs>;
+ pinctrl-names = "default", "state_uhs";
vmmc-supply = <&vcc_sdhi0>;
vqmmc-supply = <&vccq_sdhi0>;
cd-gpios = <&gpio3 6 GPIO_ACTIVE_LOW>;
+ sd-uhs-sdr50;
status = "okay";
};
&sdhi2 {
pinctrl-0 = <&sdhi2_pins>;
- pinctrl-names = "default";
+ pinctrl-1 = <&sdhi2_pins_uhs>;
+ pinctrl-names = "default", "state_uhs";
vmmc-supply = <&vcc_sdhi2>;
vqmmc-supply = <&vccq_sdhi2>;
cd-gpios = <&gpio3 22 GPIO_ACTIVE_LOW>;
+ sd-uhs-sdr50;
status = "okay";
};
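Taken together, the Lager hunks pair a 3.3 V "default" pin state with a 1.8 V "state_uhs" state and let the MMC core switch between them when SDR50 is negotiated. A condensed sketch of that pairing for SDHI0, with the card-detect line omitted for brevity (regulator phandles are the board's own, as in the hunks above):

	&sdhi0 {
		pinctrl-0 = <&sdhi0_pins>;	/* power-source = <3300> */
		pinctrl-1 = <&sdhi0_pins_uhs>;	/* power-source = <1800> */
		pinctrl-names = "default", "state_uhs";
		vmmc-supply = <&vcc_sdhi0>;
		vqmmc-supply = <&vccq_sdhi0>;
		sd-uhs-sdr50;
		status = "okay";
	};
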
diff --git a/arch/arm/boot/dts/r8a7790.dtsi b/arch/arm/boot/dts/r8a7790.dtsi
index 38b706399a6b..83cf23cd26bb 100644
--- a/arch/arm/boot/dts/r8a7790.dtsi
+++ b/arch/arm/boot/dts/r8a7790.dtsi
@@ -13,6 +13,7 @@
#include <dt-bindings/clock/r8a7790-clock.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/interrupt-controller/irq.h>
+#include <dt-bindings/power/r8a7790-sysc.h>
/ {
compatible = "renesas,r8a7790";
@@ -52,6 +53,7 @@
voltage-tolerance = <1>; /* 1% */
clocks = <&cpg_clocks R8A7790_CLK_Z>;
clock-latency = <300000>; /* 300 us */
+ power-domains = <&sysc R8A7790_PD_CA15_CPU0>;
next-level-cache = <&L2_CA15>;
/* kHz - uV - OPPs unknown yet */
@@ -68,6 +70,7 @@
compatible = "arm,cortex-a15";
reg = <1>;
clock-frequency = <1300000000>;
+ power-domains = <&sysc R8A7790_PD_CA15_CPU1>;
next-level-cache = <&L2_CA15>;
};
@@ -76,6 +79,7 @@
compatible = "arm,cortex-a15";
reg = <2>;
clock-frequency = <1300000000>;
+ power-domains = <&sysc R8A7790_PD_CA15_CPU2>;
next-level-cache = <&L2_CA15>;
};
@@ -84,6 +88,7 @@
compatible = "arm,cortex-a15";
reg = <3>;
clock-frequency = <1300000000>;
+ power-domains = <&sysc R8A7790_PD_CA15_CPU3>;
next-level-cache = <&L2_CA15>;
};
@@ -92,6 +97,7 @@
compatible = "arm,cortex-a7";
reg = <0x100>;
clock-frequency = <780000000>;
+ power-domains = <&sysc R8A7790_PD_CA7_CPU0>;
next-level-cache = <&L2_CA7>;
};
@@ -100,6 +106,7 @@
compatible = "arm,cortex-a7";
reg = <0x101>;
clock-frequency = <780000000>;
+ power-domains = <&sysc R8A7790_PD_CA7_CPU1>;
next-level-cache = <&L2_CA7>;
};
@@ -108,6 +115,7 @@
compatible = "arm,cortex-a7";
reg = <0x102>;
clock-frequency = <780000000>;
+ power-domains = <&sysc R8A7790_PD_CA7_CPU2>;
next-level-cache = <&L2_CA7>;
};
@@ -116,6 +124,7 @@
compatible = "arm,cortex-a7";
reg = <0x103>;
clock-frequency = <780000000>;
+ power-domains = <&sysc R8A7790_PD_CA7_CPU3>;
next-level-cache = <&L2_CA7>;
};
};
@@ -141,12 +150,14 @@
L2_CA15: cache-controller@0 {
compatible = "cache";
+ power-domains = <&sysc R8A7790_PD_CA15_SCU>;
cache-unified;
cache-level = <2>;
};
L2_CA7: cache-controller@1 {
compatible = "cache";
+ power-domains = <&sysc R8A7790_PD_CA7_SCU>;
cache-unified;
cache-level = <2>;
};
@@ -173,7 +184,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7790_CLK_GPIO0>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
};
gpio1: gpio@e6051000 {
@@ -186,7 +197,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7790_CLK_GPIO1>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
};
gpio2: gpio@e6052000 {
@@ -199,7 +210,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7790_CLK_GPIO2>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
};
gpio3: gpio@e6053000 {
@@ -212,7 +223,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7790_CLK_GPIO3>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
};
gpio4: gpio@e6054000 {
@@ -225,7 +236,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7790_CLK_GPIO4>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
};
gpio5: gpio@e6055000 {
@@ -238,7 +249,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7790_CLK_GPIO5>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
};
thermal: thermal@e61f0000 {
@@ -248,7 +259,7 @@
reg = <0 0xe61f0000 0 0x14>, <0 0xe61f0100 0 0x38>;
interrupts = <GIC_SPI 69 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp5_clks R8A7790_CLK_THERMAL>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
#thermal-sensor-cells = <0>;
};
@@ -267,7 +278,7 @@
<GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp1_clks R8A7790_CLK_CMT0>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
renesas,channels-mask = <0x60>;
@@ -287,7 +298,7 @@
<GIC_SPI 127 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp3_clks R8A7790_CLK_CMT1>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
renesas,channels-mask = <0xff>;
@@ -304,7 +315,7 @@
<GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp4_clks R8A7790_CLK_IRQC>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
};
dmac0: dma-controller@e6700000 {
@@ -333,7 +344,7 @@
"ch12", "ch13", "ch14";
clocks = <&mstp2_clks R8A7790_CLK_SYS_DMAC0>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <15>;
};
@@ -364,7 +375,7 @@
"ch12", "ch13", "ch14";
clocks = <&mstp2_clks R8A7790_CLK_SYS_DMAC1>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <15>;
};
@@ -393,7 +404,7 @@
"ch12";
clocks = <&mstp5_clks R8A7790_CLK_AUDIO_DMAC0>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <13>;
};
@@ -422,7 +433,7 @@
"ch12";
clocks = <&mstp5_clks R8A7790_CLK_AUDIO_DMAC1>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <13>;
};
@@ -434,7 +445,7 @@
GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "ch0", "ch1";
clocks = <&mstp3_clks R8A7790_CLK_USBDMAC0>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <2>;
};
@@ -446,7 +457,7 @@
GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "ch0", "ch1";
clocks = <&mstp3_clks R8A7790_CLK_USBDMAC1>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <2>;
};
@@ -458,7 +469,7 @@
reg = <0 0xe6508000 0 0x40>;
interrupts = <GIC_SPI 287 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7790_CLK_I2C0>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <110>;
status = "disabled";
};
@@ -470,7 +481,7 @@
reg = <0 0xe6518000 0 0x40>;
interrupts = <GIC_SPI 288 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7790_CLK_I2C1>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <6>;
status = "disabled";
};
@@ -482,7 +493,7 @@
reg = <0 0xe6530000 0 0x40>;
interrupts = <GIC_SPI 286 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7790_CLK_I2C2>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <6>;
status = "disabled";
};
@@ -494,7 +505,7 @@
reg = <0 0xe6540000 0 0x40>;
interrupts = <GIC_SPI 290 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7790_CLK_I2C3>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <110>;
status = "disabled";
};
@@ -508,7 +519,7 @@
clocks = <&mstp3_clks R8A7790_CLK_IIC0>;
dmas = <&dmac0 0x61>, <&dmac0 0x62>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -521,7 +532,7 @@
clocks = <&mstp3_clks R8A7790_CLK_IIC1>;
dmas = <&dmac0 0x65>, <&dmac0 0x66>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -534,7 +545,7 @@
clocks = <&mstp3_clks R8A7790_CLK_IIC2>;
dmas = <&dmac0 0x69>, <&dmac0 0x6a>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -547,7 +558,7 @@
clocks = <&mstp9_clks R8A7790_CLK_IICDVFS>;
dmas = <&dmac0 0x77>, <&dmac0 0x78>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -558,7 +569,7 @@
clocks = <&mstp3_clks R8A7790_CLK_MMCIF0>;
dmas = <&dmac0 0xd1>, <&dmac0 0xd2>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
reg-io-width = <4>;
status = "disabled";
max-frequency = <97500000>;
@@ -571,7 +582,7 @@
clocks = <&mstp3_clks R8A7790_CLK_MMCIF1>;
dmas = <&dmac0 0xe1>, <&dmac0 0xe2>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
reg-io-width = <4>;
status = "disabled";
max-frequency = <97500000>;
@@ -589,7 +600,8 @@
clocks = <&mstp3_clks R8A7790_CLK_SDHI0>;
dmas = <&dmac1 0xcd>, <&dmac1 0xce>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ max-frequency = <195000000>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -600,7 +612,8 @@
clocks = <&mstp3_clks R8A7790_CLK_SDHI1>;
dmas = <&dmac1 0xc9>, <&dmac1 0xca>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ max-frequency = <195000000>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -611,7 +624,8 @@
clocks = <&mstp3_clks R8A7790_CLK_SDHI2>;
dmas = <&dmac1 0xc1>, <&dmac1 0xc2>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ max-frequency = <97500000>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -622,7 +636,8 @@
clocks = <&mstp3_clks R8A7790_CLK_SDHI3>;
dmas = <&dmac1 0xd3>, <&dmac1 0xd4>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ max-frequency = <97500000>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -635,7 +650,7 @@
clock-names = "fck";
dmas = <&dmac0 0x21>, <&dmac0 0x22>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -648,7 +663,7 @@
clock-names = "fck";
dmas = <&dmac0 0x25>, <&dmac0 0x26>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -661,7 +676,7 @@
clock-names = "fck";
dmas = <&dmac0 0x27>, <&dmac0 0x28>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -674,7 +689,7 @@
clock-names = "fck";
dmas = <&dmac0 0x3d>, <&dmac0 0x3e>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -687,7 +702,7 @@
clock-names = "fck";
dmas = <&dmac0 0x19>, <&dmac0 0x1a>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -700,7 +715,7 @@
clock-names = "fck";
dmas = <&dmac0 0x1d>, <&dmac0 0x1e>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -714,7 +729,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x29>, <&dmac0 0x2a>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -728,7 +743,21 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x2d>, <&dmac0 0x2e>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
+ status = "disabled";
+ };
+
+ scif2: serial@e6e56000 {
+ compatible = "renesas,scif-r8a7790", "renesas,rcar-gen2-scif",
+ "renesas,scif";
+ reg = <0 0xe6e56000 0 64>;
+ interrupts = <GIC_SPI 164 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&mstp3_clks R8A7790_CLK_SCIF2>, <&zs_clk>,
+ <&scif_clk>;
+ clock-names = "fck", "brg_int", "scif_clk";
+ dmas = <&dmac0 0x2b>, <&dmac0 0x2c>;
+ dma-names = "tx", "rx";
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -742,7 +771,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x39>, <&dmac0 0x3a>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -756,7 +785,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x4d>, <&dmac0 0x4e>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -765,7 +794,7 @@
reg = <0 0xee700000 0 0x400>;
interrupts = <GIC_SPI 162 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7790_CLK_ETHER>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
phy-mode = "rmii";
#address-cells = <1>;
#size-cells = <0>;
@@ -778,7 +807,7 @@
reg = <0 0xe6800000 0 0x800>, <0 0xee0e8000 0 0x4000>;
interrupts = <GIC_SPI 163 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7790_CLK_ETHERAVB>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
#address-cells = <1>;
#size-cells = <0>;
status = "disabled";
@@ -789,7 +818,7 @@
reg = <0 0xee300000 0 0x2000>;
interrupts = <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7790_CLK_SATA0>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -798,7 +827,7 @@
reg = <0 0xee500000 0 0x2000>;
interrupts = <GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7790_CLK_SATA1>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -810,7 +839,7 @@
dmas = <&usb_dmac0 0>, <&usb_dmac0 1>,
<&usb_dmac1 0>, <&usb_dmac1 1>;
dma-names = "ch0", "ch1", "ch2", "ch3";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
renesas,buswait = <4>;
phys = <&usb0 1>;
phy-names = "usb";
@@ -824,7 +853,7 @@
#size-cells = <0>;
clocks = <&mstp7_clks R8A7790_CLK_HSUSB>;
clock-names = "usbhs";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
usb0: usb-channel@0 {
@@ -842,7 +871,7 @@
reg = <0 0xe6ef0000 0 0x1000>;
interrupts = <GIC_SPI 188 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7790_CLK_VIN0>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -851,7 +880,7 @@
reg = <0 0xe6ef1000 0 0x1000>;
interrupts = <GIC_SPI 189 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7790_CLK_VIN1>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -860,7 +889,7 @@
reg = <0 0xe6ef2000 0 0x1000>;
interrupts = <GIC_SPI 190 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7790_CLK_VIN2>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -869,7 +898,7 @@
reg = <0 0xe6ef3000 0 0x1000>;
interrupts = <GIC_SPI 191 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7790_CLK_VIN3>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -878,7 +907,7 @@
reg = <0 0xfe920000 0 0x8000>;
interrupts = <GIC_SPI 266 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp1_clks R8A7790_CLK_VSP1_R>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
renesas,has-sru;
renesas,#rpf = <5>;
@@ -891,7 +920,7 @@
reg = <0 0xfe928000 0 0x8000>;
interrupts = <GIC_SPI 267 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp1_clks R8A7790_CLK_VSP1_S>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
renesas,has-lut;
renesas,has-sru;
@@ -905,7 +934,7 @@
reg = <0 0xfe930000 0 0x8000>;
interrupts = <GIC_SPI 246 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp1_clks R8A7790_CLK_VSP1_DU0>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
renesas,has-lif;
renesas,has-lut;
@@ -919,7 +948,7 @@
reg = <0 0xfe938000 0 0x8000>;
interrupts = <GIC_SPI 247 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp1_clks R8A7790_CLK_VSP1_DU1>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
renesas,has-lif;
renesas,has-lut;
@@ -968,33 +997,33 @@
};
can0: can@e6e80000 {
- compatible = "renesas,can-r8a7790";
+ compatible = "renesas,can-r8a7790", "renesas,rcar-gen2-can";
reg = <0 0xe6e80000 0 0x1000>;
interrupts = <GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7790_CLK_RCAN0>,
<&cpg_clocks R8A7790_CLK_RCAN>, <&can_clk>;
clock-names = "clkp1", "clkp2", "can_clk";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
can1: can@e6e88000 {
- compatible = "renesas,can-r8a7790";
+ compatible = "renesas,can-r8a7790", "renesas,rcar-gen2-can";
reg = <0 0xe6e88000 0 0x1000>;
interrupts = <GIC_SPI 187 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7790_CLK_RCAN1>,
<&cpg_clocks R8A7790_CLK_RCAN>, <&can_clk>;
clock-names = "clkp1", "clkp2", "can_clk";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
jpu: jpeg-codec@fe980000 {
- compatible = "renesas,jpu-r8a7790";
+ compatible = "renesas,jpu-r8a7790", "renesas,rcar-gen2-jpu";
reg = <0 0xfe980000 0 0x10300>;
interrupts = <GIC_SPI 272 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp1_clks R8A7790_CLK_JPU>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
};
clocks {
@@ -1003,21 +1032,18 @@
ranges;
/* External root clock */
- extal_clk: extal_clk {
+ extal_clk: extal {
compatible = "fixed-clock";
#clock-cells = <0>;
/* This value must be overridden by the board. */
clock-frequency = <0>;
- clock-output-names = "extal";
};
/* External PCIe clock - can be overridden by the board */
- pcie_bus_clk: pcie_bus_clk {
+ pcie_bus_clk: pcie_bus {
compatible = "fixed-clock";
#clock-cells = <0>;
- clock-frequency = <100000000>;
- clock-output-names = "pcie_bus";
- status = "disabled";
+ clock-frequency = <0>;
};
/*
@@ -1028,19 +1054,16 @@
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
- clock-output-names = "audio_clk_a";
};
audio_clk_b: audio_clk_b {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
- clock-output-names = "audio_clk_b";
};
audio_clk_c: audio_clk_c {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
- clock-output-names = "audio_clk_c";
};
/* External SCIF clock */
@@ -1049,15 +1072,13 @@
#clock-cells = <0>;
/* This value must be overridden by the board. */
clock-frequency = <0>;
- status = "disabled";
};
/* External USB clock - can be overridden by the board */
- usb_extal_clk: usb_extal_clk {
+ usb_extal_clk: usb_extal {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <48000000>;
- clock-output-names = "usb_extal";
};
/* External CAN clock */
@@ -1066,8 +1087,6 @@
#clock-cells = <0>;
/* This value must be overridden by the board. */
clock-frequency = <0>;
- clock-output-names = "can_clk";
- status = "disabled";
};
/* Special CPG clocks */
@@ -1084,201 +1103,176 @@
};
/* Variable factor clocks */
- sd2_clk: sd2_clk@e6150078 {
+ sd2_clk: sd2@e6150078 {
compatible = "renesas,r8a7790-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150078 0 4>;
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
- clock-output-names = "sd2";
};
- sd3_clk: sd3_clk@e615026c {
+ sd3_clk: sd3@e615026c {
compatible = "renesas,r8a7790-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe615026c 0 4>;
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
- clock-output-names = "sd3";
};
- mmc0_clk: mmc0_clk@e6150240 {
+ mmc0_clk: mmc0@e6150240 {
compatible = "renesas,r8a7790-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150240 0 4>;
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
- clock-output-names = "mmc0";
};
- mmc1_clk: mmc1_clk@e6150244 {
+ mmc1_clk: mmc1@e6150244 {
compatible = "renesas,r8a7790-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150244 0 4>;
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
- clock-output-names = "mmc1";
};
- ssp_clk: ssp_clk@e6150248 {
+ ssp_clk: ssp@e6150248 {
compatible = "renesas,r8a7790-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150248 0 4>;
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
- clock-output-names = "ssp";
};
- ssprs_clk: ssprs_clk@e615024c {
+ ssprs_clk: ssprs@e615024c {
compatible = "renesas,r8a7790-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe615024c 0 4>;
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
- clock-output-names = "ssprs";
};
/* Fixed factor clocks */
- pll1_div2_clk: pll1_div2_clk {
+ pll1_div2_clk: pll1_div2 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7790_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "pll1_div2";
};
- z2_clk: z2_clk {
+ z2_clk: z2 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7790_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "z2";
};
- zg_clk: zg_clk {
+ zg_clk: zg {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7790_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <3>;
clock-mult = <1>;
- clock-output-names = "zg";
};
- zx_clk: zx_clk {
+ zx_clk: zx {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7790_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <3>;
clock-mult = <1>;
- clock-output-names = "zx";
};
- zs_clk: zs_clk {
+ zs_clk: zs {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7790_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <6>;
clock-mult = <1>;
- clock-output-names = "zs";
};
- hp_clk: hp_clk {
+ hp_clk: hp {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7790_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <12>;
clock-mult = <1>;
- clock-output-names = "hp";
};
- i_clk: i_clk {
+ i_clk: i {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7790_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "i";
};
- b_clk: b_clk {
+ b_clk: b {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7790_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <12>;
clock-mult = <1>;
- clock-output-names = "b";
};
- p_clk: p_clk {
+ p_clk: p {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7790_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <24>;
clock-mult = <1>;
- clock-output-names = "p";
};
- cl_clk: cl_clk {
+ cl_clk: cl {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7790_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <48>;
clock-mult = <1>;
- clock-output-names = "cl";
};
- m2_clk: m2_clk {
+ m2_clk: m2 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7790_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <8>;
clock-mult = <1>;
- clock-output-names = "m2";
};
- imp_clk: imp_clk {
+ imp_clk: imp {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7790_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <4>;
clock-mult = <1>;
- clock-output-names = "imp";
};
- rclk_clk: rclk_clk {
+ rclk_clk: rclk {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7790_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <(48 * 1024)>;
clock-mult = <1>;
- clock-output-names = "rclk";
};
- oscclk_clk: oscclk_clk {
+ oscclk_clk: oscclk {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7790_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <(12 * 1024)>;
clock-mult = <1>;
- clock-output-names = "oscclk";
};
- zb3_clk: zb3_clk {
+ zb3_clk: zb3 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7790_CLK_PLL3>;
#clock-cells = <0>;
clock-div = <4>;
clock-mult = <1>;
- clock-output-names = "zb3";
};
- zb3d2_clk: zb3d2_clk {
+ zb3d2_clk: zb3d2 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7790_CLK_PLL3>;
#clock-cells = <0>;
clock-div = <8>;
clock-mult = <1>;
- clock-output-names = "zb3d2";
};
- ddr_clk: ddr_clk {
+ ddr_clk: ddr {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7790_CLK_PLL3>;
#clock-cells = <0>;
clock-div = <8>;
clock-mult = <1>;
- clock-output-names = "ddr";
};
- mp_clk: mp_clk {
+ mp_clk: mp {
compatible = "fixed-factor-clock";
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
clock-div = <15>;
clock-mult = <1>;
- clock-output-names = "mp";
};
- cp_clk: cp_clk {
+ cp_clk: cp {
compatible = "fixed-factor-clock";
clocks = <&extal_clk>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "cp";
};
/* Gate clocks */
@@ -1334,19 +1328,19 @@
mstp3_clks: mstp3_clks@e615013c {
compatible = "renesas,r8a7790-mstp-clocks", "renesas,cpg-mstp-clocks";
reg = <0 0xe615013c 0 4>, <0 0xe6150048 0 4>;
- clocks = <&hp_clk>, <&cp_clk>, <&mmc1_clk>, <&sd3_clk>,
+ clocks = <&hp_clk>, <&cp_clk>, <&mmc1_clk>, <&p_clk>, <&sd3_clk>,
<&sd2_clk>, <&cpg_clocks R8A7790_CLK_SD1>, <&cpg_clocks R8A7790_CLK_SD0>, <&mmc0_clk>,
<&hp_clk>, <&mp_clk>, <&hp_clk>, <&mp_clk>, <&rclk_clk>,
<&hp_clk>, <&hp_clk>;
#clock-cells = <1>;
clock-indices = <
- R8A7790_CLK_IIC2 R8A7790_CLK_TPU0 R8A7790_CLK_MMCIF1 R8A7790_CLK_SDHI3
+ R8A7790_CLK_IIC2 R8A7790_CLK_TPU0 R8A7790_CLK_MMCIF1 R8A7790_CLK_SCIF2 R8A7790_CLK_SDHI3
R8A7790_CLK_SDHI2 R8A7790_CLK_SDHI1 R8A7790_CLK_SDHI0 R8A7790_CLK_MMCIF0
R8A7790_CLK_IIC0 R8A7790_CLK_PCIEC R8A7790_CLK_IIC1 R8A7790_CLK_SSUSB R8A7790_CLK_CMT1
R8A7790_CLK_USBDMAC0 R8A7790_CLK_USBDMAC1
>;
clock-output-names =
- "iic2", "tpu0", "mmcif1", "sdhi3",
+ "iic2", "tpu0", "mmcif1", "scif2", "sdhi3",
"sdhi2", "sdhi1", "sdhi0", "mmcif0",
"iic0", "pciec", "iic1", "ssusb", "cmt1",
"usbdmac0", "usbdmac1";
@@ -1464,6 +1458,12 @@
};
};
+ sysc: system-controller@e6180000 {
+ compatible = "renesas,r8a7790-sysc";
+ reg = <0 0xe6180000 0 0x0200>;
+ #power-domain-cells = <1>;
+ };
+
qspi: spi@e6b10000 {
compatible = "renesas,qspi-r8a7790", "renesas,qspi";
reg = <0 0xe6b10000 0 0x2c>;
@@ -1471,7 +1471,7 @@
clocks = <&mstp9_clks R8A7790_CLK_QSPI_MOD>;
dmas = <&dmac0 0x17>, <&dmac0 0x18>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
num-cs = <1>;
#address-cells = <1>;
#size-cells = <0>;
@@ -1485,7 +1485,7 @@
clocks = <&mstp0_clks R8A7790_CLK_MSIOF0>;
dmas = <&dmac0 0x51>, <&dmac0 0x52>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
#address-cells = <1>;
#size-cells = <0>;
status = "disabled";
@@ -1498,7 +1498,7 @@
clocks = <&mstp2_clks R8A7790_CLK_MSIOF1>;
dmas = <&dmac0 0x55>, <&dmac0 0x56>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
#address-cells = <1>;
#size-cells = <0>;
status = "disabled";
@@ -1511,7 +1511,7 @@
clocks = <&mstp2_clks R8A7790_CLK_MSIOF2>;
dmas = <&dmac0 0x41>, <&dmac0 0x42>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
#address-cells = <1>;
#size-cells = <0>;
status = "disabled";
@@ -1524,18 +1524,18 @@
clocks = <&mstp2_clks R8A7790_CLK_MSIOF3>;
dmas = <&dmac0 0x45>, <&dmac0 0x46>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
#address-cells = <1>;
#size-cells = <0>;
status = "disabled";
};
xhci: usb@ee000000 {
- compatible = "renesas,xhci-r8a7790";
+ compatible = "renesas,xhci-r8a7790", "renesas,rcar-gen2-xhci";
reg = <0 0xee000000 0 0xc00>;
interrupts = <GIC_SPI 101 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp3_clks R8A7790_CLK_SSUSB>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
phys = <&usb2 1>;
phy-names = "usb";
status = "disabled";
@@ -1548,7 +1548,7 @@
<0 0xee080000 0 0x1100>;
interrupts = <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp7_clks R8A7790_CLK_EHCI>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
bus-range = <0 0>;
@@ -1583,7 +1583,7 @@
<0 0xee0a0000 0 0x1100>;
interrupts = <GIC_SPI 112 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp7_clks R8A7790_CLK_EHCI>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
bus-range = <1 1>;
@@ -1601,7 +1601,7 @@
compatible = "renesas,pci-r8a7790", "renesas,pci-rcar-gen2";
device_type = "pci";
clocks = <&mstp7_clks R8A7790_CLK_EHCI>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
reg = <0 0xee0d0000 0 0xc00>,
<0 0xee0c0000 0 0x1100>;
interrupts = <GIC_SPI 113 IRQ_TYPE_LEVEL_HIGH>;
@@ -1654,7 +1654,7 @@
interrupt-map = <0 0 0 0 &gic GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp3_clks R8A7790_CLK_PCIEC>, <&pcie_bus_clk>;
clock-names = "pcie", "pcie_bus";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -1697,7 +1697,7 @@
"mix.0", "mix.1",
"dvc.0", "dvc.1",
"clk_a", "clk_b", "clk_c", "clk_i";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7790_PD_ALWAYS_ON>;
status = "disabled";
diff --git a/arch/arm/boot/dts/r8a7791-koelsch.dts b/arch/arm/boot/dts/r8a7791-koelsch.dts
index cc6e28f81fe4..da59c2844b8a 100644
--- a/arch/arm/boot/dts/r8a7791-koelsch.dts
+++ b/arch/arm/boot/dts/r8a7791-koelsch.dts
@@ -242,11 +242,10 @@
1800000 0>;
};
- audio_clock: clock {
+ audio_clock: audio_clock {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <11289600>;
- clock-output-names = "audio_clock";
};
rsnd_ak4643: sound {
@@ -324,89 +323,89 @@
pinctrl-names = "default";
i2c2_pins: i2c2 {
- renesas,groups = "i2c2";
- renesas,function = "i2c2";
+ groups = "i2c2";
+ function = "i2c2";
};
du_pins: du {
- renesas,groups = "du_rgb888", "du_sync", "du_disp", "du_clk_out_0";
- renesas,function = "du";
+ groups = "du_rgb888", "du_sync", "du_disp", "du_clk_out_0";
+ function = "du";
};
scif0_pins: serial0 {
- renesas,groups = "scif0_data_d";
- renesas,function = "scif0";
+ groups = "scif0_data_d";
+ function = "scif0";
};
scif1_pins: serial1 {
- renesas,groups = "scif1_data_d";
- renesas,function = "scif1";
+ groups = "scif1_data_d";
+ function = "scif1";
};
scif_clk_pins: scif_clk {
- renesas,groups = "scif_clk";
- renesas,function = "scif_clk";
+ groups = "scif_clk";
+ function = "scif_clk";
};
ether_pins: ether {
- renesas,groups = "eth_link", "eth_mdio", "eth_rmii";
- renesas,function = "eth";
+ groups = "eth_link", "eth_mdio", "eth_rmii";
+ function = "eth";
};
phy1_pins: phy1 {
- renesas,groups = "intc_irq0";
- renesas,function = "intc";
+ groups = "intc_irq0";
+ function = "intc";
};
sdhi0_pins: sd0 {
- renesas,groups = "sdhi0_data4", "sdhi0_ctrl";
- renesas,function = "sdhi0";
+ groups = "sdhi0_data4", "sdhi0_ctrl";
+ function = "sdhi0";
};
sdhi1_pins: sd1 {
- renesas,groups = "sdhi1_data4", "sdhi1_ctrl";
- renesas,function = "sdhi1";
+ groups = "sdhi1_data4", "sdhi1_ctrl";
+ function = "sdhi1";
};
sdhi2_pins: sd2 {
- renesas,groups = "sdhi2_data4", "sdhi2_ctrl";
- renesas,function = "sdhi2";
+ groups = "sdhi2_data4", "sdhi2_ctrl";
+ function = "sdhi2";
};
qspi_pins: spi0 {
- renesas,groups = "qspi_ctrl", "qspi_data4";
- renesas,function = "qspi";
+ groups = "qspi_ctrl", "qspi_data4";
+ function = "qspi";
};
msiof0_pins: spi1 {
- renesas,groups = "msiof0_clk", "msiof0_sync", "msiof0_rx",
+ groups = "msiof0_clk", "msiof0_sync", "msiof0_rx",
"msiof0_tx";
- renesas,function = "msiof0";
+ function = "msiof0";
};
usb0_pins: usb0 {
- renesas,groups = "usb0";
- renesas,function = "usb0";
+ groups = "usb0";
+ function = "usb0";
};
usb1_pins: usb1 {
- renesas,groups = "usb1";
- renesas,function = "usb1";
+ groups = "usb1";
+ function = "usb1";
};
vin1_pins: vin1 {
- renesas,groups = "vin1_data8", "vin1_clk";
- renesas,function = "vin1";
+ groups = "vin1_data8", "vin1_clk";
+ function = "vin1";
};
sound_pins: sound {
- renesas,groups = "ssi0129_ctrl", "ssi0_data", "ssi1_data";
- renesas,function = "ssi";
+ groups = "ssi0129_ctrl", "ssi0_data", "ssi1_data";
+ function = "ssi";
};
sound_clk_pins: sound_clk {
- renesas,groups = "audio_clk_a";
- renesas,function = "audio_clk";
+ groups = "audio_clk_a";
+ function = "audio_clk";
};
};
diff --git a/arch/arm/boot/dts/r8a7791-porter.dts b/arch/arm/boot/dts/r8a7791-porter.dts
index a9285d9a57cd..6a1bb1a8209b 100644
--- a/arch/arm/boot/dts/r8a7791-porter.dts
+++ b/arch/arm/boot/dts/r8a7791-porter.dts
@@ -113,11 +113,10 @@
clock-frequency = <74250000>;
};
- x14_clk: x14-clock {
+ x14_clk: audio_clock {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <11289600>;
- clock-output-names = "audio_clock";
};
sound {
@@ -144,73 +143,73 @@
&pfc {
scif0_pins: serial0 {
- renesas,groups = "scif0_data_d";
- renesas,function = "scif0";
+ groups = "scif0_data_d";
+ function = "scif0";
};
ether_pins: ether {
- renesas,groups = "eth_link", "eth_mdio", "eth_rmii";
- renesas,function = "eth";
+ groups = "eth_link", "eth_mdio", "eth_rmii";
+ function = "eth";
};
phy1_pins: phy1 {
- renesas,groups = "intc_irq0";
- renesas,function = "intc";
+ groups = "intc_irq0";
+ function = "intc";
};
sdhi0_pins: sd0 {
- renesas,groups = "sdhi0_data4", "sdhi0_ctrl";
- renesas,function = "sdhi0";
+ groups = "sdhi0_data4", "sdhi0_ctrl";
+ function = "sdhi0";
};
sdhi2_pins: sd2 {
- renesas,groups = "sdhi2_data4", "sdhi2_ctrl";
- renesas,function = "sdhi2";
+ groups = "sdhi2_data4", "sdhi2_ctrl";
+ function = "sdhi2";
};
qspi_pins: spi0 {
- renesas,groups = "qspi_ctrl", "qspi_data4";
- renesas,function = "qspi";
+ groups = "qspi_ctrl", "qspi_data4";
+ function = "qspi";
};
i2c2_pins: i2c2 {
- renesas,groups = "i2c2";
- renesas,function = "i2c2";
+ groups = "i2c2";
+ function = "i2c2";
};
usb0_pins: usb0 {
- renesas,groups = "usb0";
- renesas,function = "usb0";
+ groups = "usb0";
+ function = "usb0";
};
usb1_pins: usb1 {
- renesas,groups = "usb1";
- renesas,function = "usb1";
+ groups = "usb1";
+ function = "usb1";
};
vin0_pins: vin0 {
- renesas,groups = "vin0_data8", "vin0_clk";
- renesas,function = "vin0";
+ groups = "vin0_data8", "vin0_clk";
+ function = "vin0";
};
can0_pins: can0 {
- renesas,groups = "can0_data";
- renesas,function = "can0";
+ groups = "can0_data";
+ function = "can0";
};
du_pins: du {
- renesas,groups = "du_rgb888", "du_sync", "du_disp", "du_clk_out_0";
- renesas,function = "du";
+ groups = "du_rgb888", "du_sync", "du_disp", "du_clk_out_0";
+ function = "du";
};
ssi_pins: sound {
- renesas,groups = "ssi0129_ctrl", "ssi0_data", "ssi1_data";
- renesas,function = "ssi";
+ groups = "ssi0129_ctrl", "ssi0_data", "ssi1_data";
+ function = "ssi";
};
audio_clk_pins: audio_clk {
- renesas,groups = "audio_clk_a";
- renesas,function = "audio_clk";
+ groups = "audio_clk_a";
+ function = "audio_clk";
};
};
diff --git a/arch/arm/boot/dts/r8a7791.dtsi b/arch/arm/boot/dts/r8a7791.dtsi
index 1cd1b6a3a72a..db67e342c585 100644
--- a/arch/arm/boot/dts/r8a7791.dtsi
+++ b/arch/arm/boot/dts/r8a7791.dtsi
@@ -13,6 +13,7 @@
#include <dt-bindings/clock/r8a7791-clock.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/interrupt-controller/irq.h>
+#include <dt-bindings/power/r8a7791-sysc.h>
/ {
compatible = "renesas,r8a7791";
@@ -51,6 +52,7 @@
voltage-tolerance = <1>; /* 1% */
clocks = <&cpg_clocks R8A7791_CLK_Z>;
clock-latency = <300000>; /* 300 us */
+ power-domains = <&sysc R8A7791_PD_CA15_CPU0>;
next-level-cache = <&L2_CA15>;
/* kHz - uV - OPPs unknown yet */
@@ -67,6 +69,7 @@
compatible = "arm,cortex-a15";
reg = <1>;
clock-frequency = <1500000000>;
+ power-domains = <&sysc R8A7791_PD_CA15_CPU1>;
next-level-cache = <&L2_CA15>;
};
};
@@ -92,6 +95,7 @@
L2_CA15: cache-controller@0 {
compatible = "cache";
+ power-domains = <&sysc R8A7791_PD_CA15_SCU>;
cache-unified;
cache-level = <2>;
};
@@ -118,7 +122,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7791_CLK_GPIO0>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
};
gpio1: gpio@e6051000 {
@@ -131,7 +135,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7791_CLK_GPIO1>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
};
gpio2: gpio@e6052000 {
@@ -144,7 +148,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7791_CLK_GPIO2>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
};
gpio3: gpio@e6053000 {
@@ -157,7 +161,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7791_CLK_GPIO3>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
};
gpio4: gpio@e6054000 {
@@ -170,7 +174,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7791_CLK_GPIO4>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
};
gpio5: gpio@e6055000 {
@@ -183,7 +187,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7791_CLK_GPIO5>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
};
gpio6: gpio@e6055400 {
@@ -196,7 +200,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7791_CLK_GPIO6>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
};
gpio7: gpio@e6055800 {
@@ -209,7 +213,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7791_CLK_GPIO7>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
};
thermal: thermal@e61f0000 {
@@ -219,7 +223,7 @@
reg = <0 0xe61f0000 0 0x14>, <0 0xe61f0100 0 0x38>;
interrupts = <GIC_SPI 69 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp5_clks R8A7791_CLK_THERMAL>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
#thermal-sensor-cells = <0>;
};
@@ -238,7 +242,7 @@
<GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp1_clks R8A7791_CLK_CMT0>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
renesas,channels-mask = <0x60>;
@@ -258,7 +262,7 @@
<GIC_SPI 127 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp3_clks R8A7791_CLK_CMT1>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
renesas,channels-mask = <0xff>;
@@ -281,7 +285,7 @@
<GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp4_clks R8A7791_CLK_IRQC>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
};
dmac0: dma-controller@e6700000 {
@@ -310,7 +314,7 @@
"ch12", "ch13", "ch14";
clocks = <&mstp2_clks R8A7791_CLK_SYS_DMAC0>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <15>;
};
@@ -341,7 +345,7 @@
"ch12", "ch13", "ch14";
clocks = <&mstp2_clks R8A7791_CLK_SYS_DMAC1>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <15>;
};
@@ -370,7 +374,7 @@
"ch12";
clocks = <&mstp5_clks R8A7791_CLK_AUDIO_DMAC0>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <13>;
};
@@ -399,7 +403,7 @@
"ch12";
clocks = <&mstp5_clks R8A7791_CLK_AUDIO_DMAC1>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <13>;
};
@@ -411,7 +415,7 @@
GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "ch0", "ch1";
clocks = <&mstp3_clks R8A7791_CLK_USBDMAC0>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <2>;
};
@@ -423,7 +427,7 @@
GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "ch0", "ch1";
clocks = <&mstp3_clks R8A7791_CLK_USBDMAC1>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <2>;
};
@@ -436,7 +440,7 @@
reg = <0 0xe6508000 0 0x40>;
interrupts = <GIC_SPI 287 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7791_CLK_I2C0>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <6>;
status = "disabled";
};
@@ -448,7 +452,7 @@
reg = <0 0xe6518000 0 0x40>;
interrupts = <GIC_SPI 288 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7791_CLK_I2C1>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <6>;
status = "disabled";
};
@@ -460,7 +464,7 @@
reg = <0 0xe6530000 0 0x40>;
interrupts = <GIC_SPI 286 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7791_CLK_I2C2>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <6>;
status = "disabled";
};
@@ -472,7 +476,7 @@
reg = <0 0xe6540000 0 0x40>;
interrupts = <GIC_SPI 290 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7791_CLK_I2C3>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <6>;
status = "disabled";
};
@@ -484,7 +488,7 @@
reg = <0 0xe6520000 0 0x40>;
interrupts = <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7791_CLK_I2C4>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <6>;
status = "disabled";
};
@@ -497,7 +501,7 @@
reg = <0 0xe6528000 0 0x40>;
interrupts = <GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7791_CLK_I2C5>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <110>;
status = "disabled";
};
@@ -512,7 +516,7 @@
clocks = <&mstp9_clks R8A7791_CLK_IICDVFS>;
dmas = <&dmac0 0x77>, <&dmac0 0x78>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -525,7 +529,7 @@
clocks = <&mstp3_clks R8A7791_CLK_IIC0>;
dmas = <&dmac0 0x61>, <&dmac0 0x62>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -538,7 +542,7 @@
clocks = <&mstp3_clks R8A7791_CLK_IIC1>;
dmas = <&dmac0 0x65>, <&dmac0 0x66>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -554,7 +558,7 @@
clocks = <&mstp3_clks R8A7791_CLK_MMCIF0>;
dmas = <&dmac0 0xd1>, <&dmac0 0xd2>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
reg-io-width = <4>;
status = "disabled";
max-frequency = <97500000>;
@@ -567,7 +571,7 @@
clocks = <&mstp3_clks R8A7791_CLK_SDHI0>;
dmas = <&dmac1 0xcd>, <&dmac1 0xce>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -578,7 +582,7 @@
clocks = <&mstp3_clks R8A7791_CLK_SDHI1>;
dmas = <&dmac1 0xc1>, <&dmac1 0xc2>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -589,7 +593,7 @@
clocks = <&mstp3_clks R8A7791_CLK_SDHI2>;
dmas = <&dmac1 0xd3>, <&dmac1 0xd4>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -602,7 +606,7 @@
clock-names = "fck";
dmas = <&dmac0 0x21>, <&dmac0 0x22>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -615,7 +619,7 @@
clock-names = "fck";
dmas = <&dmac0 0x25>, <&dmac0 0x26>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -628,7 +632,7 @@
clock-names = "fck";
dmas = <&dmac0 0x27>, <&dmac0 0x28>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -641,7 +645,7 @@
clock-names = "fck";
dmas = <&dmac0 0x1b>, <&dmac0 0x1c>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -654,7 +658,7 @@
clock-names = "fck";
dmas = <&dmac0 0x1f>, <&dmac0 0x20>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -667,7 +671,7 @@
clock-names = "fck";
dmas = <&dmac0 0x23>, <&dmac0 0x24>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -680,7 +684,7 @@
clock-names = "fck";
dmas = <&dmac0 0x3d>, <&dmac0 0x3e>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -693,7 +697,7 @@
clock-names = "fck";
dmas = <&dmac0 0x19>, <&dmac0 0x1a>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -706,7 +710,7 @@
clock-names = "fck";
dmas = <&dmac0 0x1d>, <&dmac0 0x1e>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -720,7 +724,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x29>, <&dmac0 0x2a>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -734,7 +738,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x2d>, <&dmac0 0x2e>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -748,7 +752,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x2b>, <&dmac0 0x2c>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -762,7 +766,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x2f>, <&dmac0 0x30>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -776,7 +780,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0xfb>, <&dmac0 0xfc>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -790,7 +794,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0xfd>, <&dmac0 0xfe>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -804,7 +808,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x39>, <&dmac0 0x3a>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -818,7 +822,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x4d>, <&dmac0 0x4e>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -832,7 +836,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x3b>, <&dmac0 0x3c>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -841,7 +845,7 @@
reg = <0 0xee700000 0 0x400>;
interrupts = <GIC_SPI 162 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7791_CLK_ETHER>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
phy-mode = "rmii";
#address-cells = <1>;
#size-cells = <0>;
@@ -854,7 +858,7 @@
reg = <0 0xe6800000 0 0x800>, <0 0xee0e8000 0 0x4000>;
interrupts = <GIC_SPI 163 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7791_CLK_ETHERAVB>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
#address-cells = <1>;
#size-cells = <0>;
status = "disabled";
@@ -865,7 +869,7 @@
reg = <0 0xee300000 0 0x2000>;
interrupts = <GIC_SPI 105 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7791_CLK_SATA0>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -874,7 +878,7 @@
reg = <0 0xee500000 0 0x2000>;
interrupts = <GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7791_CLK_SATA1>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -886,7 +890,7 @@
dmas = <&usb_dmac0 0>, <&usb_dmac0 1>,
<&usb_dmac1 0>, <&usb_dmac1 1>;
dma-names = "ch0", "ch1", "ch2", "ch3";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
renesas,buswait = <4>;
phys = <&usb0 1>;
phy-names = "usb";
@@ -900,7 +904,7 @@
#size-cells = <0>;
clocks = <&mstp7_clks R8A7791_CLK_HSUSB>;
clock-names = "usbhs";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
usb0: usb-channel@0 {
@@ -918,7 +922,7 @@
reg = <0 0xe6ef0000 0 0x1000>;
interrupts = <GIC_SPI 188 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7791_CLK_VIN0>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -927,7 +931,7 @@
reg = <0 0xe6ef1000 0 0x1000>;
interrupts = <GIC_SPI 189 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7791_CLK_VIN1>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -936,7 +940,7 @@
reg = <0 0xe6ef2000 0 0x1000>;
interrupts = <GIC_SPI 190 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7791_CLK_VIN2>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -945,7 +949,7 @@
reg = <0 0xfe928000 0 0x8000>;
interrupts = <GIC_SPI 267 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp1_clks R8A7791_CLK_VSP1_S>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
renesas,has-lut;
renesas,has-sru;
@@ -959,7 +963,7 @@
reg = <0 0xfe930000 0 0x8000>;
interrupts = <GIC_SPI 246 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp1_clks R8A7791_CLK_VSP1_DU0>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
renesas,has-lif;
renesas,has-lut;
@@ -973,7 +977,7 @@
reg = <0 0xfe938000 0 0x8000>;
interrupts = <GIC_SPI 247 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp1_clks R8A7791_CLK_VSP1_DU1>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
renesas,has-lif;
renesas,has-lut;
@@ -1013,33 +1017,33 @@
};
can0: can@e6e80000 {
- compatible = "renesas,can-r8a7791";
+ compatible = "renesas,can-r8a7791", "renesas,rcar-gen2-can";
reg = <0 0xe6e80000 0 0x1000>;
interrupts = <GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7791_CLK_RCAN0>,
<&cpg_clocks R8A7791_CLK_RCAN>, <&can_clk>;
clock-names = "clkp1", "clkp2", "can_clk";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
can1: can@e6e88000 {
- compatible = "renesas,can-r8a7791";
+ compatible = "renesas,can-r8a7791", "renesas,rcar-gen2-can";
reg = <0 0xe6e88000 0 0x1000>;
interrupts = <GIC_SPI 187 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7791_CLK_RCAN1>,
<&cpg_clocks R8A7791_CLK_RCAN>, <&can_clk>;
clock-names = "clkp1", "clkp2", "can_clk";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
jpu: jpeg-codec@fe980000 {
- compatible = "renesas,jpu-r8a7791";
+ compatible = "renesas,jpu-r8a7791", "renesas,rcar-gen2-jpu";
reg = <0 0xfe980000 0 0x10300>;
interrupts = <GIC_SPI 272 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp1_clks R8A7791_CLK_JPU>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
};
clocks {
@@ -1048,12 +1052,11 @@
ranges;
/* External root clock */
- extal_clk: extal_clk {
+ extal_clk: extal {
compatible = "fixed-clock";
#clock-cells = <0>;
/* This value must be overridden by the board. */
clock-frequency = <0>;
- clock-output-names = "extal";
};
/*
@@ -1064,27 +1067,23 @@
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
- clock-output-names = "audio_clk_a";
};
audio_clk_b: audio_clk_b {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
- clock-output-names = "audio_clk_b";
};
audio_clk_c: audio_clk_c {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
- clock-output-names = "audio_clk_c";
};
/* External PCIe clock - can be overridden by the board */
- pcie_bus_clk: pcie_bus_clk {
+ pcie_bus_clk: pcie_bus {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
- clock-output-names = "pcie_bus";
};
/* External SCIF clock */
@@ -1096,11 +1095,10 @@
};
/* External USB clock - can be overridden by the board */
- usb_extal_clk: usb_extal_clk {
+ usb_extal_clk: usb_extal {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <48000000>;
- clock-output-names = "usb_extal";
};
/* External CAN clock */
@@ -1109,7 +1107,6 @@
#clock-cells = <0>;
/* This value must be overridden by the board. */
clock-frequency = <0>;
- clock-output-names = "can_clk";
};
/* Special CPG clocks */
@@ -1126,178 +1123,156 @@
};
/* Variable factor clocks */
- sd2_clk: sd2_clk@e6150078 {
+ sd2_clk: sd2@e6150078 {
compatible = "renesas,r8a7791-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150078 0 4>;
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
- clock-output-names = "sd2";
};
- sd3_clk: sd3_clk@e615026c {
+ sd3_clk: sd3@e615026c {
compatible = "renesas,r8a7791-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe615026c 0 4>;
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
- clock-output-names = "sd3";
};
- mmc0_clk: mmc0_clk@e6150240 {
+ mmc0_clk: mmc0@e6150240 {
compatible = "renesas,r8a7791-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150240 0 4>;
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
- clock-output-names = "mmc0";
};
- ssp_clk: ssp_clk@e6150248 {
+ ssp_clk: ssp@e6150248 {
compatible = "renesas,r8a7791-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150248 0 4>;
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
- clock-output-names = "ssp";
};
- ssprs_clk: ssprs_clk@e615024c {
+ ssprs_clk: ssprs@e615024c {
compatible = "renesas,r8a7791-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe615024c 0 4>;
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
- clock-output-names = "ssprs";
};
/* Fixed factor clocks */
- pll1_div2_clk: pll1_div2_clk {
+ pll1_div2_clk: pll1_div2 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7791_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "pll1_div2";
};
- zg_clk: zg_clk {
+ zg_clk: zg {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7791_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <3>;
clock-mult = <1>;
- clock-output-names = "zg";
};
- zx_clk: zx_clk {
+ zx_clk: zx {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7791_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <3>;
clock-mult = <1>;
- clock-output-names = "zx";
};
- zs_clk: zs_clk {
+ zs_clk: zs {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7791_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <6>;
clock-mult = <1>;
- clock-output-names = "zs";
};
- hp_clk: hp_clk {
+ hp_clk: hp {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7791_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <12>;
clock-mult = <1>;
- clock-output-names = "hp";
};
- i_clk: i_clk {
+ i_clk: i {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7791_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "i";
};
- b_clk: b_clk {
+ b_clk: b {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7791_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <12>;
clock-mult = <1>;
- clock-output-names = "b";
};
- p_clk: p_clk {
+ p_clk: p {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7791_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <24>;
clock-mult = <1>;
- clock-output-names = "p";
};
- cl_clk: cl_clk {
+ cl_clk: cl {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7791_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <48>;
clock-mult = <1>;
- clock-output-names = "cl";
};
- m2_clk: m2_clk {
+ m2_clk: m2 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7791_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <8>;
clock-mult = <1>;
- clock-output-names = "m2";
};
- rclk_clk: rclk_clk {
+ rclk_clk: rclk {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7791_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <(48 * 1024)>;
clock-mult = <1>;
- clock-output-names = "rclk";
};
- oscclk_clk: oscclk_clk {
+ oscclk_clk: oscclk {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7791_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <(12 * 1024)>;
clock-mult = <1>;
- clock-output-names = "oscclk";
};
- zb3_clk: zb3_clk {
+ zb3_clk: zb3 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7791_CLK_PLL3>;
#clock-cells = <0>;
clock-div = <4>;
clock-mult = <1>;
- clock-output-names = "zb3";
};
- zb3d2_clk: zb3d2_clk {
+ zb3d2_clk: zb3d2 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7791_CLK_PLL3>;
#clock-cells = <0>;
clock-div = <8>;
clock-mult = <1>;
- clock-output-names = "zb3d2";
};
- ddr_clk: ddr_clk {
+ ddr_clk: ddr {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7791_CLK_PLL3>;
#clock-cells = <0>;
clock-div = <8>;
clock-mult = <1>;
- clock-output-names = "ddr";
};
- mp_clk: mp_clk {
+ mp_clk: mp {
compatible = "fixed-factor-clock";
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
clock-div = <15>;
clock-mult = <1>;
- clock-output-names = "mp";
};
- cp_clk: cp_clk {
+ cp_clk: cp {
compatible = "fixed-factor-clock";
clocks = <&extal_clk>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "cp";
};
/* Gate clocks */
@@ -1492,6 +1467,12 @@
};
};
+ sysc: system-controller@e6180000 {
+ compatible = "renesas,r8a7791-sysc";
+ reg = <0 0xe6180000 0 0x0200>;
+ #power-domain-cells = <1>;
+ };
+
qspi: spi@e6b10000 {
compatible = "renesas,qspi-r8a7791", "renesas,qspi";
reg = <0 0xe6b10000 0 0x2c>;
@@ -1499,7 +1480,7 @@
clocks = <&mstp9_clks R8A7791_CLK_QSPI_MOD>;
dmas = <&dmac0 0x17>, <&dmac0 0x18>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
num-cs = <1>;
#address-cells = <1>;
#size-cells = <0>;
@@ -1513,7 +1494,7 @@
clocks = <&mstp0_clks R8A7791_CLK_MSIOF0>;
dmas = <&dmac0 0x51>, <&dmac0 0x52>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
#address-cells = <1>;
#size-cells = <0>;
status = "disabled";
@@ -1526,7 +1507,7 @@
clocks = <&mstp2_clks R8A7791_CLK_MSIOF1>;
dmas = <&dmac0 0x55>, <&dmac0 0x56>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
#address-cells = <1>;
#size-cells = <0>;
status = "disabled";
@@ -1539,18 +1520,18 @@
clocks = <&mstp2_clks R8A7791_CLK_MSIOF2>;
dmas = <&dmac0 0x41>, <&dmac0 0x42>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
#address-cells = <1>;
#size-cells = <0>;
status = "disabled";
};
xhci: usb@ee000000 {
- compatible = "renesas,xhci-r8a7791";
+ compatible = "renesas,xhci-r8a7791", "renesas,rcar-gen2-xhci";
reg = <0 0xee000000 0 0xc00>;
interrupts = <GIC_SPI 101 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp3_clks R8A7791_CLK_SSUSB>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
phys = <&usb2 1>;
phy-names = "usb";
status = "disabled";
@@ -1563,7 +1544,7 @@
<0 0xee080000 0 0x1100>;
interrupts = <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp7_clks R8A7791_CLK_EHCI>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
bus-range = <0 0>;
@@ -1598,7 +1579,7 @@
<0 0xee0c0000 0 0x1100>;
interrupts = <GIC_SPI 113 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp7_clks R8A7791_CLK_EHCI>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
bus-range = <1 1>;
@@ -1648,7 +1629,7 @@
interrupt-map = <0 0 0 0 &gic GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp3_clks R8A7791_CLK_PCIEC>, <&pcie_bus_clk>;
clock-names = "pcie", "pcie_bus";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -1751,7 +1732,7 @@
"mix.0", "mix.1",
"dvc.0", "dvc.1",
"clk_a", "clk_b", "clk_c", "clk_i";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7791_PD_ALWAYS_ON>;
status = "disabled";
diff --git a/arch/arm/boot/dts/r8a7793-gose.dts b/arch/arm/boot/dts/r8a7793-gose.dts
index 87e89ec9dd47..0ebc3ee34923 100644
--- a/arch/arm/boot/dts/r8a7793-gose.dts
+++ b/arch/arm/boot/dts/r8a7793-gose.dts
@@ -158,11 +158,82 @@
};
};
- audio_clock: clock {
+ vcc_sdhi0: regulator@0 {
+ compatible = "regulator-fixed";
+
+ regulator-name = "SDHI0 Vcc";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+
+ gpio = <&gpio7 17 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ };
+
+ vccq_sdhi0: regulator@1 {
+ compatible = "regulator-gpio";
+
+ regulator-name = "SDHI0 VccQ";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3300000>;
+
+ gpios = <&gpio2 12 GPIO_ACTIVE_HIGH>;
+ gpios-states = <1>;
+ states = <3300000 1
+ 1800000 0>;
+ };
+
+ vcc_sdhi1: regulator@2 {
+ compatible = "regulator-fixed";
+
+ regulator-name = "SDHI1 Vcc";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+
+ gpio = <&gpio7 18 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ };
+
+ vccq_sdhi1: regulator@3 {
+ compatible = "regulator-gpio";
+
+ regulator-name = "SDHI1 VccQ";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3300000>;
+
+ gpios = <&gpio2 13 GPIO_ACTIVE_HIGH>;
+ gpios-states = <1>;
+ states = <3300000 1
+ 1800000 0>;
+ };
+
+ vcc_sdhi2: regulator@4 {
+ compatible = "regulator-fixed";
+
+ regulator-name = "SDHI2 Vcc";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+
+ gpio = <&gpio7 19 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ };
+
+ vccq_sdhi2: regulator@5 {
+ compatible = "regulator-gpio";
+
+ regulator-name = "SDHI2 VccQ";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3300000>;
+
+ gpios = <&gpio2 26 GPIO_ACTIVE_HIGH>;
+ gpios-states = <1>;
+ states = <3300000 1
+ 1800000 0>;
+ };
+
+ audio_clock: audio_clock {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <11289600>;
- clock-output-names = "audio_clock";
};
rsnd_ak4643: sound {
@@ -240,53 +311,68 @@
pinctrl-names = "default";
i2c2_pins: i2c2 {
- renesas,groups = "i2c2";
- renesas,function = "i2c2";
+ groups = "i2c2";
+ function = "i2c2";
};
du_pins: du {
- renesas,groups = "du_rgb888", "du_sync", "du_disp", "du_clk_out_0";
- renesas,function = "du";
+ groups = "du_rgb888", "du_sync", "du_disp", "du_clk_out_0";
+ function = "du";
};
scif0_pins: serial0 {
- renesas,groups = "scif0_data_d";
- renesas,function = "scif0";
+ groups = "scif0_data_d";
+ function = "scif0";
};
scif1_pins: serial1 {
- renesas,groups = "scif1_data_d";
- renesas,function = "scif1";
+ groups = "scif1_data_d";
+ function = "scif1";
};
scif_clk_pins: scif_clk {
- renesas,groups = "scif_clk";
- renesas,function = "scif_clk";
+ groups = "scif_clk";
+ function = "scif_clk";
};
ether_pins: ether {
- renesas,groups = "eth_link", "eth_mdio", "eth_rmii";
- renesas,function = "eth";
+ groups = "eth_link", "eth_mdio", "eth_rmii";
+ function = "eth";
};
phy1_pins: phy1 {
- renesas,groups = "intc_irq0";
- renesas,function = "intc";
+ groups = "intc_irq0";
+ function = "intc";
+ };
+
+ sdhi0_pins: sd0 {
+ renesas,groups = "sdhi0_data4", "sdhi0_ctrl";
+ renesas,function = "sdhi0";
+ };
+
+ sdhi1_pins: sd1 {
+ renesas,groups = "sdhi1_data4", "sdhi1_ctrl";
+ renesas,function = "sdhi1";
+ };
+
+ sdhi2_pins: sd2 {
+ renesas,groups = "sdhi2_data4", "sdhi2_ctrl";
+ renesas,function = "sdhi2";
};
qspi_pins: spi0 {
- renesas,groups = "qspi_ctrl", "qspi_data4";
- renesas,function = "qspi";
+ groups = "qspi_ctrl", "qspi_data4";
+ function = "qspi";
};
sound_pins: sound {
- renesas,groups = "ssi0129_ctrl", "ssi0_data", "ssi1_data";
- renesas,function = "ssi";
+ groups = "ssi0129_ctrl", "ssi0_data", "ssi1_data";
+ function = "ssi";
};
sound_clk_pins: sound_clk {
- renesas,groups = "audio_clk_a";
- renesas,function = "audio_clk";
+ groups = "audio_clk_a";
+ function = "audio_clk";
};
};
@@ -329,6 +415,38 @@
status = "okay";
};
+&sdhi0 {
+ pinctrl-0 = <&sdhi0_pins>;
+ pinctrl-names = "default";
+
+ vmmc-supply = <&vcc_sdhi0>;
+ vqmmc-supply = <&vccq_sdhi0>;
+ cd-gpios = <&gpio6 6 GPIO_ACTIVE_LOW>;
+ wp-gpios = <&gpio6 7 GPIO_ACTIVE_HIGH>;
+ status = "okay";
+};
+
+&sdhi1 {
+ pinctrl-0 = <&sdhi1_pins>;
+ pinctrl-names = "default";
+
+ vmmc-supply = <&vcc_sdhi1>;
+ vqmmc-supply = <&vccq_sdhi1>;
+ cd-gpios = <&gpio6 14 GPIO_ACTIVE_LOW>;
+ wp-gpios = <&gpio6 15 GPIO_ACTIVE_HIGH>;
+ status = "okay";
+};
+
+&sdhi2 {
+ pinctrl-0 = <&sdhi2_pins>;
+ pinctrl-names = "default";
+
+ vmmc-supply = <&vcc_sdhi2>;
+ vqmmc-supply = <&vccq_sdhi2>;
+ cd-gpios = <&gpio6 22 GPIO_ACTIVE_LOW>;
+ status = "okay";
+};
+
&qspi {
pinctrl-0 = <&qspi_pins>;
pinctrl-names = "default";
diff --git a/arch/arm/boot/dts/r8a7793.dtsi b/arch/arm/boot/dts/r8a7793.dtsi
index b48215945241..1dd6d202cd4c 100644
--- a/arch/arm/boot/dts/r8a7793.dtsi
+++ b/arch/arm/boot/dts/r8a7793.dtsi
@@ -11,6 +11,7 @@
#include <dt-bindings/clock/r8a7793-clock.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/interrupt-controller/irq.h>
+#include <dt-bindings/power/r8a7793-sysc.h>
/ {
compatible = "renesas,r8a7793";
@@ -43,6 +44,7 @@
voltage-tolerance = <1>; /* 1% */
clocks = <&cpg_clocks R8A7793_CLK_Z>;
clock-latency = <300000>; /* 300 us */
+ power-domains = <&sysc R8A7793_PD_CA15_CPU0>;
/* kHz - uV - OPPs unknown yet */
operating-points = <1500000 1000000>,
@@ -76,6 +78,7 @@
L2_CA15: cache-controller@0 {
compatible = "cache";
+ power-domains = <&sysc R8A7793_PD_CA15_SCU>;
cache-unified;
cache-level = <2>;
};
@@ -102,7 +105,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7793_CLK_GPIO0>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
};
gpio1: gpio@e6051000 {
@@ -115,7 +118,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7793_CLK_GPIO1>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
};
gpio2: gpio@e6052000 {
@@ -128,7 +131,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7793_CLK_GPIO2>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
};
gpio3: gpio@e6053000 {
@@ -141,7 +144,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7793_CLK_GPIO3>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
};
gpio4: gpio@e6054000 {
@@ -154,7 +157,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7793_CLK_GPIO4>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
};
gpio5: gpio@e6055000 {
@@ -167,7 +170,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7793_CLK_GPIO5>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
};
gpio6: gpio@e6055400 {
@@ -180,7 +183,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7793_CLK_GPIO6>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
};
gpio7: gpio@e6055800 {
@@ -193,7 +196,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7793_CLK_GPIO7>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
};
thermal: thermal@e61f0000 {
@@ -203,7 +206,7 @@
reg = <0 0xe61f0000 0 0x14>, <0 0xe61f0100 0 0x38>;
interrupts = <GIC_SPI 69 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp5_clks R8A7793_CLK_THERMAL>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
#thermal-sensor-cells = <0>;
};
@@ -222,7 +225,7 @@
<GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp1_clks R8A7793_CLK_CMT0>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
renesas,channels-mask = <0x60>;
@@ -242,7 +245,7 @@
<GIC_SPI 127 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp3_clks R8A7793_CLK_CMT1>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
renesas,channels-mask = <0xff>;
@@ -265,7 +268,7 @@
<GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp4_clks R8A7793_CLK_IRQC>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
};
dmac0: dma-controller@e6700000 {
@@ -294,7 +297,7 @@
"ch12", "ch13", "ch14";
clocks = <&mstp2_clks R8A7793_CLK_SYS_DMAC0>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <15>;
};
@@ -325,7 +328,7 @@
"ch12", "ch13", "ch14";
clocks = <&mstp2_clks R8A7793_CLK_SYS_DMAC1>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <15>;
};
@@ -354,7 +357,7 @@
"ch12";
clocks = <&mstp5_clks R8A7793_CLK_AUDIO_DMAC0>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <13>;
};
@@ -383,7 +386,7 @@
"ch12";
clocks = <&mstp5_clks R8A7793_CLK_AUDIO_DMAC1>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <13>;
};
@@ -396,7 +399,7 @@
reg = <0 0xe6508000 0 0x40>;
interrupts = <GIC_SPI 287 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7793_CLK_I2C0>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <6>;
status = "disabled";
};
@@ -408,7 +411,7 @@
reg = <0 0xe6518000 0 0x40>;
interrupts = <GIC_SPI 288 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7793_CLK_I2C1>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <6>;
status = "disabled";
};
@@ -420,7 +423,7 @@
reg = <0 0xe6530000 0 0x40>;
interrupts = <GIC_SPI 286 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7793_CLK_I2C2>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <6>;
status = "disabled";
};
@@ -432,7 +435,7 @@
reg = <0 0xe6540000 0 0x40>;
interrupts = <GIC_SPI 290 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7793_CLK_I2C3>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <6>;
status = "disabled";
};
@@ -444,7 +447,7 @@
reg = <0 0xe6520000 0 0x40>;
interrupts = <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7793_CLK_I2C4>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <6>;
status = "disabled";
};
@@ -457,7 +460,7 @@
reg = <0 0xe6528000 0 0x40>;
interrupts = <GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7793_CLK_I2C5>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <110>;
status = "disabled";
};
@@ -472,7 +475,7 @@
clocks = <&mstp9_clks R8A7793_CLK_IICDVFS>;
dmas = <&dmac0 0x77>, <&dmac0 0x78>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -485,7 +488,7 @@
clocks = <&mstp3_clks R8A7793_CLK_IIC0>;
dmas = <&dmac0 0x61>, <&dmac0 0x62>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -498,7 +501,7 @@
clocks = <&mstp3_clks R8A7793_CLK_IIC1>;
dmas = <&dmac0 0x65>, <&dmac0 0x66>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -507,6 +510,39 @@
reg = <0 0xe6060000 0 0x250>;
};
+ sdhi0: sd@ee100000 {
+ compatible = "renesas,sdhi-r8a7793";
+ reg = <0 0xee100000 0 0x328>;
+ interrupts = <GIC_SPI 165 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&mstp3_clks R8A7793_CLK_SDHI0>;
+ dmas = <&dmac0 0xcd>, <&dmac0 0xce>;
+ dma-names = "tx", "rx";
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
+ status = "disabled";
+ };
+
+ sdhi1: sd@ee140000 {
+ compatible = "renesas,sdhi-r8a7793";
+ reg = <0 0xee140000 0 0x100>;
+ interrupts = <GIC_SPI 167 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&mstp3_clks R8A7793_CLK_SDHI1>;
+ dmas = <&dmac0 0xc1>, <&dmac0 0xc2>;
+ dma-names = "tx", "rx";
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
+ status = "disabled";
+ };
+
+ sdhi2: sd@ee160000 {
+ compatible = "renesas,sdhi-r8a7793";
+ reg = <0 0xee160000 0 0x100>;
+ interrupts = <GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&mstp3_clks R8A7793_CLK_SDHI2>;
+ dmas = <&dmac0 0xd3>, <&dmac0 0xd4>;
+ dma-names = "tx", "rx";
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
+ status = "disabled";
+ };
+
scifa0: serial@e6c40000 {
compatible = "renesas,scifa-r8a7793",
"renesas,rcar-gen2-scifa", "renesas,scifa";
@@ -516,7 +552,7 @@
clock-names = "fck";
dmas = <&dmac0 0x21>, <&dmac0 0x22>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -529,7 +565,7 @@
clock-names = "fck";
dmas = <&dmac0 0x25>, <&dmac0 0x26>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -542,7 +578,7 @@
clock-names = "fck";
dmas = <&dmac0 0x27>, <&dmac0 0x28>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -555,7 +591,7 @@
clock-names = "fck";
dmas = <&dmac0 0x1b>, <&dmac0 0x1c>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -568,7 +604,7 @@
clock-names = "fck";
dmas = <&dmac0 0x1f>, <&dmac0 0x20>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -581,7 +617,7 @@
clock-names = "fck";
dmas = <&dmac0 0x23>, <&dmac0 0x24>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -594,7 +630,7 @@
clock-names = "fck";
dmas = <&dmac0 0x3d>, <&dmac0 0x3e>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -607,7 +643,7 @@
clock-names = "fck";
dmas = <&dmac0 0x19>, <&dmac0 0x1a>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -620,7 +656,7 @@
clock-names = "fck";
dmas = <&dmac0 0x1d>, <&dmac0 0x1e>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -634,7 +670,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x29>, <&dmac0 0x2a>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -648,7 +684,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x2d>, <&dmac0 0x2e>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -662,7 +698,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x2b>, <&dmac0 0x2c>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -676,7 +712,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x2f>, <&dmac0 0x30>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -690,7 +726,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0xfb>, <&dmac0 0xfc>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -704,7 +740,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0xfd>, <&dmac0 0xfe>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -718,7 +754,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x39>, <&dmac0 0x3a>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -732,7 +768,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x4d>, <&dmac0 0x4e>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -746,7 +782,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x3b>, <&dmac0 0x3c>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -755,7 +791,7 @@
reg = <0 0xee700000 0 0x400>;
interrupts = <GIC_SPI 162 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7793_CLK_ETHER>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
phy-mode = "rmii";
#address-cells = <1>;
#size-cells = <0>;
@@ -769,7 +805,7 @@
clocks = <&mstp9_clks R8A7793_CLK_QSPI_MOD>;
dmas = <&dmac0 0x17>, <&dmac0 0x18>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
num-cs = <1>;
#address-cells = <1>;
#size-cells = <0>;
@@ -806,18 +842,39 @@
};
};
+ can0: can@e6e80000 {
+ compatible = "renesas,can-r8a7793", "renesas,rcar-gen2-can";
+ reg = <0 0xe6e80000 0 0x1000>;
+ interrupts = <GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&mstp9_clks R8A7793_CLK_RCAN0>,
+ <&cpg_clocks R8A7793_CLK_RCAN>, <&can_clk>;
+ clock-names = "clkp1", "clkp2", "can_clk";
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
+ status = "disabled";
+ };
+
+ can1: can@e6e88000 {
+ compatible = "renesas,can-r8a7793", "renesas,rcar-gen2-can";
+ reg = <0 0xe6e88000 0 0x1000>;
+ interrupts = <GIC_SPI 187 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&mstp9_clks R8A7793_CLK_RCAN1>,
+ <&cpg_clocks R8A7793_CLK_RCAN>, <&can_clk>;
+ clock-names = "clkp1", "clkp2", "can_clk";
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
+ status = "disabled";
+ };
+
clocks {
#address-cells = <2>;
#size-cells = <2>;
ranges;
/* External root clock */
- extal_clk: extal_clk {
+ extal_clk: extal {
compatible = "fixed-clock";
#clock-cells = <0>;
/* This value must be overridden by the board. */
clock-frequency = <0>;
- clock-output-names = "extal";
};
/*
@@ -828,19 +885,31 @@
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
- clock-output-names = "audio_clk_a";
};
audio_clk_b: audio_clk_b {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
- clock-output-names = "audio_clk_b";
};
audio_clk_c: audio_clk_c {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
- clock-output-names = "audio_clk_c";
+ };
+
+ /* External USB clock - can be overridden by the board */
+ usb_extal_clk: usb_extal {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <48000000>;
+ };
+
+ /* External CAN clock */
+ can_clk: can {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ /* This value must be overridden by the board. */
+ clock-frequency = <0>;
};
/* External SCIF clock */
@@ -849,7 +918,6 @@
#clock-cells = <0>;
/* This value must be overridden by the board. */
clock-frequency = <0>;
- status = "disabled";
};
/* Special CPG clocks */
@@ -857,7 +925,7 @@
compatible = "renesas,r8a7793-cpg-clocks",
"renesas,rcar-gen2-cpg-clocks";
reg = <0 0xe6150000 0 0x1000>;
- clocks = <&extal_clk>;
+ clocks = <&extal_clk &usb_extal_clk>;
#clock-cells = <1>;
clock-output-names = "main", "pll0", "pll1", "pll3",
"lb", "qspi", "sdh", "sd0", "z",
@@ -866,111 +934,98 @@
};
/* Variable factor clocks */
- sd2_clk: sd2_clk@e6150078 {
+ sd2_clk: sd2@e6150078 {
compatible = "renesas,r8a7793-div6-clock",
"renesas,cpg-div6-clock";
reg = <0 0xe6150078 0 4>;
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
- clock-output-names = "sd2";
};
- sd3_clk: sd3_clk@e615026c {
+ sd3_clk: sd3@e615026c {
compatible = "renesas,r8a7793-div6-clock",
"renesas,cpg-div6-clock";
reg = <0 0xe615026c 0 4>;
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
- clock-output-names = "sd3";
};
- mmc0_clk: mmc0_clk@e6150240 {
+ mmc0_clk: mmc0@e6150240 {
compatible = "renesas,r8a7793-div6-clock",
"renesas,cpg-div6-clock";
reg = <0 0xe6150240 0 4>;
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
- clock-output-names = "mmc0";
};
/* Fixed factor clocks */
- pll1_div2_clk: pll1_div2_clk {
+ pll1_div2_clk: pll1_div2 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7793_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "pll1_div2";
};
- zg_clk: zg_clk {
+ zg_clk: zg {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7793_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <5>;
clock-mult = <1>;
- clock-output-names = "zg";
};
- zx_clk: zx_clk {
+ zx_clk: zx {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7793_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <3>;
clock-mult = <1>;
- clock-output-names = "zx";
};
- zs_clk: zs_clk {
+ zs_clk: zs {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7793_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <6>;
clock-mult = <1>;
- clock-output-names = "zs";
};
- hp_clk: hp_clk {
+ hp_clk: hp {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7793_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <12>;
clock-mult = <1>;
- clock-output-names = "hp";
};
- p_clk: p_clk {
+ p_clk: p {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7793_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <24>;
clock-mult = <1>;
- clock-output-names = "p";
};
- m2_clk: m2_clk {
+ m2_clk: m2 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7793_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <8>;
clock-mult = <1>;
- clock-output-names = "m2";
};
- rclk_clk: rclk_clk {
+ rclk_clk: rclk {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7793_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <(48 * 1024)>;
clock-mult = <1>;
- clock-output-names = "rclk";
};
- mp_clk: mp_clk {
+ mp_clk: mp {
compatible = "fixed-factor-clock";
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
clock-div = <15>;
clock-mult = <1>;
- clock-output-names = "mp";
};
- cp_clk: cp_clk {
+ cp_clk: cp {
compatible = "fixed-factor-clock";
clocks = <&extal_clk>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "cp";
};
/* Gate clocks */
@@ -1098,6 +1153,7 @@
reg = <0 0xe6150994 0 4>, <0 0xe61509a4 0 4>;
clocks = <&cp_clk>, <&cp_clk>, <&cp_clk>, <&cp_clk>,
<&cp_clk>, <&cp_clk>, <&cp_clk>, <&cp_clk>,
+ <&p_clk>, <&p_clk>,
<&cpg_clocks R8A7793_CLK_QSPI>, <&hp_clk>,
<&cp_clk>, <&hp_clk>, <&hp_clk>, <&hp_clk>,
<&hp_clk>, <&hp_clk>;
@@ -1107,7 +1163,8 @@
R8A7793_CLK_GPIO5 R8A7793_CLK_GPIO4
R8A7793_CLK_GPIO3 R8A7793_CLK_GPIO2
R8A7793_CLK_GPIO1 R8A7793_CLK_GPIO0
- R8A7793_CLK_QSPI_MOD R8A7793_CLK_I2C5
+ R8A7793_CLK_QSPI_MOD R8A7793_CLK_RCAN1
+ R8A7793_CLK_RCAN0 R8A7793_CLK_I2C5
R8A7793_CLK_IICDVFS R8A7793_CLK_I2C4
R8A7793_CLK_I2C3 R8A7793_CLK_I2C2
R8A7793_CLK_I2C1 R8A7793_CLK_I2C0
@@ -1115,8 +1172,9 @@
clock-output-names =
"gpio7", "gpio6", "gpio5", "gpio4",
"gpio3", "gpio2", "gpio1", "gpio0",
- "qspi_mod", "i2c5", "i2c6", "i2c4",
- "i2c3", "i2c2", "i2c1", "i2c0";
+ "rcan1", "rcan0", "qspi_mod", "i2c5",
+ "i2c6", "i2c4", "i2c3", "i2c2", "i2c1",
+ "i2c0";
};
mstp10_clks: mstp10_clks@e6150998 {
compatible = "renesas,r8a7793-mstp-clocks", "renesas,cpg-mstp-clocks";
@@ -1166,6 +1224,12 @@
};
};
+ sysc: system-controller@e6180000 {
+ compatible = "renesas,r8a7793-sysc";
+ reg = <0 0xe6180000 0 0x0200>;
+ #power-domain-cells = <1>;
+ };
+
ipmmu_sy0: mmu@e6280000 {
compatible = "renesas,ipmmu-r8a7793", "renesas,ipmmu-vmsa";
reg = <0 0xe6280000 0 0x1000>;
@@ -1261,7 +1325,7 @@
"src.4", "src.3", "src.2", "src.1", "src.0",
"dvc.0", "dvc.1",
"clk_a", "clk_b", "clk_c", "clk_i";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7793_PD_ALWAYS_ON>;
status = "disabled";
diff --git a/arch/arm/boot/dts/r8a7794-alt.dts b/arch/arm/boot/dts/r8a7794-alt.dts
index ca9bc4fff287..383ad791f1db 100644
--- a/arch/arm/boot/dts/r8a7794-alt.dts
+++ b/arch/arm/boot/dts/r8a7794-alt.dts
@@ -107,38 +107,38 @@
pinctrl-names = "default";
du_pins: du {
- renesas,groups = "du1_rgb666", "du1_sync", "du1_disp", "du1_dotclkout0";
- renesas,function = "du";
+ groups = "du1_rgb666", "du1_sync", "du1_disp", "du1_dotclkout0";
+ function = "du";
};
scif2_pins: serial2 {
- renesas,groups = "scif2_data";
- renesas,function = "scif2";
+ groups = "scif2_data";
+ function = "scif2";
};
scif_clk_pins: scif_clk {
- renesas,groups = "scif_clk";
- renesas,function = "scif_clk";
+ groups = "scif_clk";
+ function = "scif_clk";
};
ether_pins: ether {
- renesas,groups = "eth_link", "eth_mdio", "eth_rmii";
- renesas,function = "eth";
+ groups = "eth_link", "eth_mdio", "eth_rmii";
+ function = "eth";
};
phy1_pins: phy1 {
- renesas,groups = "intc_irq8";
- renesas,function = "intc";
+ groups = "intc_irq8";
+ function = "intc";
};
i2c1_pins: i2c1 {
- renesas,groups = "i2c1";
- renesas,function = "i2c1";
+ groups = "i2c1";
+ function = "i2c1";
};
vin0_pins: vin0 {
- renesas,groups = "vin0_data8", "vin0_clk";
- renesas,function = "vin0";
+ groups = "vin0_data8", "vin0_clk";
+ function = "vin0";
};
};
@@ -148,8 +148,8 @@
&pfc {
qspi_pins: spi0 {
- renesas,groups = "qspi_ctrl", "qspi_data4";
- renesas,function = "qspi";
+ groups = "qspi_ctrl", "qspi_data4";
+ function = "qspi";
};
};
diff --git a/arch/arm/boot/dts/r8a7794-silk.dts b/arch/arm/boot/dts/r8a7794-silk.dts
index 66f077a3ca41..56d98d5b2185 100644
--- a/arch/arm/boot/dts/r8a7794-silk.dts
+++ b/arch/arm/boot/dts/r8a7794-silk.dts
@@ -130,58 +130,58 @@
pinctrl-names = "default";
scif2_pins: serial2 {
- renesas,groups = "scif2_data";
- renesas,function = "scif2";
+ groups = "scif2_data";
+ function = "scif2";
};
scif_clk_pins: scif_clk {
- renesas,groups = "scif_clk";
- renesas,function = "scif_clk";
+ groups = "scif_clk";
+ function = "scif_clk";
};
ether_pins: ether {
- renesas,groups = "eth_link", "eth_mdio", "eth_rmii";
- renesas,function = "eth";
+ groups = "eth_link", "eth_mdio", "eth_rmii";
+ function = "eth";
};
phy1_pins: phy1 {
- renesas,groups = "intc_irq8";
- renesas,function = "intc";
+ groups = "intc_irq8";
+ function = "intc";
};
i2c1_pins: i2c1 {
- renesas,groups = "i2c1";
- renesas,function = "i2c1";
+ groups = "i2c1";
+ function = "i2c1";
};
mmcif0_pins: mmcif0 {
- renesas,groups = "mmc_data8", "mmc_ctrl";
- renesas,function = "mmc";
+ groups = "mmc_data8", "mmc_ctrl";
+ function = "mmc";
};
sdhi1_pins: sd1 {
- renesas,groups = "sdhi1_data4", "sdhi1_ctrl";
- renesas,function = "sdhi1";
+ groups = "sdhi1_data4", "sdhi1_ctrl";
+ function = "sdhi1";
};
qspi_pins: spi0 {
- renesas,groups = "qspi_ctrl", "qspi_data4";
- renesas,function = "qspi";
+ groups = "qspi_ctrl", "qspi_data4";
+ function = "qspi";
};
vin0_pins: vin0 {
- renesas,groups = "vin0_data8", "vin0_clk";
- renesas,function = "vin0";
+ groups = "vin0_data8", "vin0_clk";
+ function = "vin0";
};
usb0_pins: usb0 {
- renesas,groups = "usb0";
- renesas,function = "usb0";
+ groups = "usb0";
+ function = "usb0";
};
usb1_pins: usb1 {
- renesas,groups = "usb1";
- renesas,function = "usb1";
+ groups = "usb1";
+ function = "usb1";
};
};
diff --git a/arch/arm/boot/dts/r8a7794.dtsi b/arch/arm/boot/dts/r8a7794.dtsi
index eacb2b291361..f334a3a715f2 100644
--- a/arch/arm/boot/dts/r8a7794.dtsi
+++ b/arch/arm/boot/dts/r8a7794.dtsi
@@ -12,6 +12,7 @@
#include <dt-bindings/clock/r8a7794-clock.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/interrupt-controller/irq.h>
+#include <dt-bindings/power/r8a7794-sysc.h>
/ {
compatible = "renesas,r8a7794";
@@ -26,6 +27,8 @@
i2c3 = &i2c3;
i2c4 = &i2c4;
i2c5 = &i2c5;
+ i2c6 = &i2c6;
+ i2c7 = &i2c7;
spi0 = &qspi;
vin0 = &vin0;
vin1 = &vin1;
@@ -40,6 +43,7 @@
compatible = "arm,cortex-a7";
reg = <0>;
clock-frequency = <1000000000>;
+ power-domains = <&sysc R8A7794_PD_CA7_CPU0>;
next-level-cache = <&L2_CA7>;
};
@@ -48,12 +52,14 @@
compatible = "arm,cortex-a7";
reg = <1>;
clock-frequency = <1000000000>;
+ power-domains = <&sysc R8A7794_PD_CA7_CPU1>;
next-level-cache = <&L2_CA7>;
};
};
L2_CA7: cache-controller@1 {
compatible = "cache";
+ power-domains = <&sysc R8A7794_PD_CA7_SCU>;
cache-unified;
cache-level = <2>;
};
@@ -80,7 +86,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7794_CLK_GPIO0>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
};
gpio1: gpio@e6051000 {
@@ -93,7 +99,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7794_CLK_GPIO1>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
};
gpio2: gpio@e6052000 {
@@ -106,7 +112,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7794_CLK_GPIO2>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
};
gpio3: gpio@e6053000 {
@@ -119,7 +125,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7794_CLK_GPIO3>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
};
gpio4: gpio@e6054000 {
@@ -132,7 +138,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7794_CLK_GPIO4>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
};
gpio5: gpio@e6055000 {
@@ -145,7 +151,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7794_CLK_GPIO5>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
};
gpio6: gpio@e6055400 {
@@ -158,7 +164,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&mstp9_clks R8A7794_CLK_GPIO6>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
};
cmt0: timer@ffca0000 {
@@ -168,7 +174,7 @@
<GIC_SPI 143 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp1_clks R8A7794_CLK_CMT0>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
renesas,channels-mask = <0x60>;
@@ -188,7 +194,7 @@
<GIC_SPI 127 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp3_clks R8A7794_CLK_CMT1>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
renesas,channels-mask = <0xff>;
@@ -219,7 +225,7 @@
<GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp4_clks R8A7794_CLK_IRQC>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
};
pfc: pin-controller@e6060000 {
@@ -253,7 +259,7 @@
"ch12", "ch13", "ch14";
clocks = <&mstp2_clks R8A7794_CLK_SYS_DMAC0>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <15>;
};
@@ -284,7 +290,7 @@
"ch12", "ch13", "ch14";
clocks = <&mstp2_clks R8A7794_CLK_SYS_DMAC1>;
clock-names = "fck";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <15>;
};
@@ -298,7 +304,7 @@
clock-names = "fck";
dmas = <&dmac0 0x21>, <&dmac0 0x22>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -311,7 +317,7 @@
clock-names = "fck";
dmas = <&dmac0 0x25>, <&dmac0 0x26>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -324,7 +330,7 @@
clock-names = "fck";
dmas = <&dmac0 0x27>, <&dmac0 0x28>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -337,7 +343,7 @@
clock-names = "fck";
dmas = <&dmac0 0x1b>, <&dmac0 0x1c>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -350,7 +356,7 @@
clock-names = "fck";
dmas = <&dmac0 0x1f>, <&dmac0 0x20>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -363,7 +369,7 @@
clock-names = "fck";
dmas = <&dmac0 0x23>, <&dmac0 0x24>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -376,7 +382,7 @@
clock-names = "fck";
dmas = <&dmac0 0x3d>, <&dmac0 0x3e>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -389,7 +395,7 @@
clock-names = "fck";
dmas = <&dmac0 0x19>, <&dmac0 0x1a>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -402,7 +408,7 @@
clock-names = "fck";
dmas = <&dmac0 0x1d>, <&dmac0 0x1e>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -416,7 +422,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x29>, <&dmac0 0x2a>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -430,7 +436,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x2d>, <&dmac0 0x2e>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -444,7 +450,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x2b>, <&dmac0 0x2c>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -458,7 +464,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x2f>, <&dmac0 0x30>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -472,7 +478,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0xfb>, <&dmac0 0xfc>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -486,7 +492,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0xfd>, <&dmac0 0xfe>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -500,7 +506,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x39>, <&dmac0 0x3a>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -514,7 +520,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x4d>, <&dmac0 0x4e>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -528,7 +534,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x3b>, <&dmac0 0x3c>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -537,7 +543,7 @@
reg = <0 0xee700000 0 0x400>;
interrupts = <GIC_SPI 162 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7794_CLK_ETHER>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
phy-mode = "rmii";
#address-cells = <1>;
#size-cells = <0>;
@@ -550,7 +556,7 @@
reg = <0 0xe6800000 0 0x800>, <0 0xee0e8000 0 0x4000>;
interrupts = <GIC_SPI 163 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7794_CLK_ETHERAVB>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
#address-cells = <1>;
#size-cells = <0>;
status = "disabled";
@@ -562,7 +568,7 @@
reg = <0 0xe6508000 0 0x40>;
interrupts = <GIC_SPI 287 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7794_CLK_I2C0>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
#address-cells = <1>;
#size-cells = <0>;
i2c-scl-internal-delay-ns = <6>;
@@ -574,7 +580,7 @@
reg = <0 0xe6518000 0 0x40>;
interrupts = <GIC_SPI 288 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7794_CLK_I2C1>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
#address-cells = <1>;
#size-cells = <0>;
i2c-scl-internal-delay-ns = <6>;
@@ -586,7 +592,7 @@
reg = <0 0xe6530000 0 0x40>;
interrupts = <GIC_SPI 286 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7794_CLK_I2C2>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
#address-cells = <1>;
#size-cells = <0>;
i2c-scl-internal-delay-ns = <6>;
@@ -598,7 +604,7 @@
reg = <0 0xe6540000 0 0x40>;
interrupts = <GIC_SPI 290 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7794_CLK_I2C3>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
#address-cells = <1>;
#size-cells = <0>;
i2c-scl-internal-delay-ns = <6>;
@@ -610,7 +616,7 @@
reg = <0 0xe6520000 0 0x40>;
interrupts = <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7794_CLK_I2C4>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
#address-cells = <1>;
#size-cells = <0>;
i2c-scl-internal-delay-ns = <6>;
@@ -622,13 +628,39 @@
reg = <0 0xe6528000 0 0x40>;
interrupts = <GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp9_clks R8A7794_CLK_I2C5>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
#address-cells = <1>;
#size-cells = <0>;
i2c-scl-internal-delay-ns = <6>;
status = "disabled";
};
+ i2c6: i2c@e6500000 {
+ compatible = "renesas,iic-r8a7794", "renesas,rmobile-iic";
+ reg = <0 0xe6500000 0 0x425>;
+ interrupts = <GIC_SPI 174 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&mstp3_clks R8A7794_CLK_IIC0>;
+ dmas = <&dmac0 0x61>, <&dmac0 0x62>;
+ dma-names = "tx", "rx";
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "disabled";
+ };
+
+ i2c7: i2c@e6510000 {
+ compatible = "renesas,iic-r8a7794", "renesas,rmobile-iic";
+ reg = <0 0xe6510000 0 0x425>;
+ interrupts = <GIC_SPI 175 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&mstp3_clks R8A7794_CLK_IIC1>;
+ dmas = <&dmac0 0x65>, <&dmac0 0x66>;
+ dma-names = "tx", "rx";
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "disabled";
+ };
+
mmcif0: mmc@ee200000 {
compatible = "renesas,mmcif-r8a7794", "renesas,sh-mmcif";
reg = <0 0xee200000 0 0x80>;
@@ -636,7 +668,7 @@
clocks = <&mstp3_clks R8A7794_CLK_MMCIF0>;
dmas = <&dmac0 0xd1>, <&dmac0 0xd2>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
reg-io-width = <4>;
status = "disabled";
};
@@ -646,7 +678,7 @@
reg = <0 0xee100000 0 0x200>;
interrupts = <GIC_SPI 165 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp3_clks R8A7794_CLK_SDHI0>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -655,7 +687,7 @@
reg = <0 0xee140000 0 0x100>;
interrupts = <GIC_SPI 167 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp3_clks R8A7794_CLK_SDHI1>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -664,7 +696,7 @@
reg = <0 0xee160000 0 0x100>;
interrupts = <GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp3_clks R8A7794_CLK_SDHI2>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -675,7 +707,7 @@
clocks = <&mstp9_clks R8A7794_CLK_QSPI_MOD>;
dmas = <&dmac0 0x17>, <&dmac0 0x18>;
dma-names = "tx", "rx";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
num-cs = <1>;
#address-cells = <1>;
#size-cells = <0>;
@@ -687,7 +719,7 @@
reg = <0 0xe6ef0000 0 0x1000>;
interrupts = <GIC_SPI 188 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7794_CLK_VIN0>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -696,7 +728,7 @@
reg = <0 0xe6ef1000 0 0x1000>;
interrupts = <GIC_SPI 189 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp8_clks R8A7794_CLK_VIN1>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -707,7 +739,7 @@
<0 0xee080000 0 0x1100>;
interrupts = <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp7_clks R8A7794_CLK_EHCI>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
bus-range = <0 0>;
@@ -742,7 +774,7 @@
<0 0xee0c0000 0 0x1100>;
interrupts = <GIC_SPI 113 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp7_clks R8A7794_CLK_EHCI>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
bus-range = <1 1>;
@@ -775,7 +807,7 @@
reg = <0 0xe6590000 0 0x100>;
interrupts = <GIC_SPI 107 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&mstp7_clks R8A7794_CLK_HSUSB>;
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
renesas,buswait = <4>;
phys = <&usb0 1>;
phy-names = "usb";
@@ -789,7 +821,7 @@
#size-cells = <0>;
clocks = <&mstp7_clks R8A7794_CLK_HSUSB>;
clock-names = "usbhs";
- power-domains = <&cpg_clocks>;
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
status = "disabled";
usb0: usb-channel@0 {
@@ -830,18 +862,54 @@
};
};
+ can0: can@e6e80000 {
+ compatible = "renesas,can-r8a7794", "renesas,rcar-gen2-can";
+ reg = <0 0xe6e80000 0 0x1000>;
+ interrupts = <GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&mstp9_clks R8A7794_CLK_RCAN0>,
+ <&cpg_clocks R8A7794_CLK_RCAN>, <&can_clk>;
+ clock-names = "clkp1", "clkp2", "can_clk";
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
+ status = "disabled";
+ };
+
+ can1: can@e6e88000 {
+ compatible = "renesas,can-r8a7794", "renesas,rcar-gen2-can";
+ reg = <0 0xe6e88000 0 0x1000>;
+ interrupts = <GIC_SPI 187 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&mstp9_clks R8A7794_CLK_RCAN1>,
+ <&cpg_clocks R8A7794_CLK_RCAN>, <&can_clk>;
+ clock-names = "clkp1", "clkp2", "can_clk";
+ power-domains = <&sysc R8A7794_PD_ALWAYS_ON>;
+ status = "disabled";
+ };
+
clocks {
#address-cells = <2>;
#size-cells = <2>;
ranges;
/* External root clock */
- extal_clk: extal_clk {
+ extal_clk: extal {
compatible = "fixed-clock";
#clock-cells = <0>;
/* This value must be overridden by the board. */
clock-frequency = <0>;
- clock-output-names = "extal";
+ };
+
+ /* External USB clock - can be overridden by the board */
+ usb_extal_clk: usb_extal {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <48000000>;
+ };
+
+ /* External CAN clock */
+ can_clk: can {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ /* This value must be overridden by the board. */
+ clock-frequency = <0>;
};
/* External SCIF clock */
@@ -850,7 +918,6 @@
#clock-cells = <0>;
/* This value must be overridden by the board. */
clock-frequency = <0>;
- status = "disabled";
};
/* Special CPG clocks */
@@ -858,180 +925,160 @@
compatible = "renesas,r8a7794-cpg-clocks",
"renesas,rcar-gen2-cpg-clocks";
reg = <0 0xe6150000 0 0x1000>;
- clocks = <&extal_clk>;
+ clocks = <&extal_clk &usb_extal_clk>;
#clock-cells = <1>;
clock-output-names = "main", "pll0", "pll1", "pll3",
- "lb", "qspi", "sdh", "sd0", "z";
+ "lb", "qspi", "sdh", "sd0", "z",
+ "rcan";
#power-domain-cells = <0>;
};
/* Variable factor clocks */
- sd2_clk: sd2_clk@e6150078 {
+ sd2_clk: sd2@e6150078 {
compatible = "renesas,r8a7794-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150078 0 4>;
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
- clock-output-names = "sd2";
};
- sd3_clk: sd3_clk@e615026c {
+ sd3_clk: sd3@e615026c {
compatible = "renesas,r8a7794-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe615026c 0 4>;
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
- clock-output-names = "sd3";
};
- mmc0_clk: mmc0_clk@e6150240 {
+ mmc0_clk: mmc0@e6150240 {
compatible = "renesas,r8a7794-div6-clock", "renesas,cpg-div6-clock";
reg = <0 0xe6150240 0 4>;
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
- clock-output-names = "mmc0";
};
/* Fixed factor clocks */
- pll1_div2_clk: pll1_div2_clk {
+ pll1_div2_clk: pll1_div2 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7794_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "pll1_div2";
};
- zg_clk: zg_clk {
+ zg_clk: zg {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7794_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <6>;
clock-mult = <1>;
- clock-output-names = "zg";
};
- zx_clk: zx_clk {
+ zx_clk: zx {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7794_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <3>;
clock-mult = <1>;
- clock-output-names = "zx";
};
- zs_clk: zs_clk {
+ zs_clk: zs {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7794_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <6>;
clock-mult = <1>;
- clock-output-names = "zs";
};
- hp_clk: hp_clk {
+ hp_clk: hp {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7794_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <12>;
clock-mult = <1>;
- clock-output-names = "hp";
};
- i_clk: i_clk {
+ i_clk: i {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7794_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "i";
};
- b_clk: b_clk {
+ b_clk: b {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7794_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <12>;
clock-mult = <1>;
- clock-output-names = "b";
};
- p_clk: p_clk {
+ p_clk: p {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7794_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <24>;
clock-mult = <1>;
- clock-output-names = "p";
};
- cl_clk: cl_clk {
+ cl_clk: cl {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7794_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <48>;
clock-mult = <1>;
- clock-output-names = "cl";
};
- m2_clk: m2_clk {
+ m2_clk: m2 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7794_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <8>;
clock-mult = <1>;
- clock-output-names = "m2";
};
- rclk_clk: rclk_clk {
+ rclk_clk: rclk {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7794_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <(48 * 1024)>;
clock-mult = <1>;
- clock-output-names = "rclk";
};
- oscclk_clk: oscclk_clk {
+ oscclk_clk: oscclk {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7794_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <(12 * 1024)>;
clock-mult = <1>;
- clock-output-names = "oscclk";
};
- zb3_clk: zb3_clk {
+ zb3_clk: zb3 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7794_CLK_PLL3>;
#clock-cells = <0>;
clock-div = <4>;
clock-mult = <1>;
- clock-output-names = "zb3";
};
- zb3d2_clk: zb3d2_clk {
+ zb3d2_clk: zb3d2 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7794_CLK_PLL3>;
#clock-cells = <0>;
clock-div = <8>;
clock-mult = <1>;
- clock-output-names = "zb3d2";
};
- ddr_clk: ddr_clk {
+ ddr_clk: ddr {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7794_CLK_PLL3>;
#clock-cells = <0>;
clock-div = <8>;
clock-mult = <1>;
- clock-output-names = "ddr";
};
- mp_clk: mp_clk {
+ mp_clk: mp {
compatible = "fixed-factor-clock";
clocks = <&pll1_div2_clk>;
#clock-cells = <0>;
clock-div = <15>;
clock-mult = <1>;
- clock-output-names = "mp";
};
- cp_clk: cp_clk {
+ cp_clk: cp {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks R8A7794_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <48>;
clock-mult = <1>;
- clock-output-names = "cp";
};
- acp_clk: acp_clk {
+ acp_clk: acp {
compatible = "fixed-factor-clock";
clocks = <&extal_clk>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "acp";
};
/* Gate clocks */
@@ -1082,16 +1129,19 @@
compatible = "renesas,r8a7794-mstp-clocks", "renesas,cpg-mstp-clocks";
reg = <0 0xe615013c 0 4>, <0 0xe6150048 0 4>;
clocks = <&sd3_clk>, <&sd2_clk>, <&cpg_clocks R8A7794_CLK_SD0>,
- <&mmc0_clk>, <&rclk_clk>, <&hp_clk>, <&hp_clk>;
+ <&mmc0_clk>, <&hp_clk>, <&hp_clk>, <&rclk_clk>,
+ <&hp_clk>, <&hp_clk>;
#clock-cells = <1>;
clock-indices = <
R8A7794_CLK_SDHI2 R8A7794_CLK_SDHI1 R8A7794_CLK_SDHI0
- R8A7794_CLK_MMCIF0 R8A7794_CLK_CMT1
+ R8A7794_CLK_MMCIF0 R8A7794_CLK_IIC0
+ R8A7794_CLK_IIC1 R8A7794_CLK_CMT1
R8A7794_CLK_USBDMAC0 R8A7794_CLK_USBDMAC1
>;
clock-output-names =
"sdhi2", "sdhi1", "sdhi0",
- "mmcif0", "cmt1", "usbdmac0", "usbdmac1";
+ "mmcif0", "i2c6", "i2c7",
+ "cmt1", "usbdmac0", "usbdmac1";
};
mstp4_clks: mstp4_clks@e6150140 {
compatible = "renesas,r8a7794-mstp-clocks", "renesas,cpg-mstp-clocks";
@@ -1137,20 +1187,22 @@
compatible = "renesas,r8a7794-mstp-clocks", "renesas,cpg-mstp-clocks";
reg = <0 0xe6150994 0 4>, <0 0xe61509a4 0 4>;
clocks = <&cp_clk>, <&cp_clk>, <&cp_clk>, <&cp_clk>,
- <&cp_clk>, <&cp_clk>, <&cp_clk>,
- <&cpg_clocks R8A7794_CLK_QSPI>, <&hp_clk>, <&hp_clk>,
- <&hp_clk>, <&hp_clk>, <&hp_clk>, <&hp_clk>;
+ <&cp_clk>, <&cp_clk>, <&cp_clk>, <&p_clk>,
+ <&p_clk>, <&cpg_clocks R8A7794_CLK_QSPI>,
+ <&hp_clk>, <&hp_clk>, <&hp_clk>, <&hp_clk>,
+ <&hp_clk>, <&hp_clk>;
#clock-cells = <1>;
clock-indices = <R8A7794_CLK_GPIO6 R8A7794_CLK_GPIO5
R8A7794_CLK_GPIO4 R8A7794_CLK_GPIO3
R8A7794_CLK_GPIO2 R8A7794_CLK_GPIO1
- R8A7794_CLK_GPIO0 R8A7794_CLK_QSPI_MOD
+ R8A7794_CLK_GPIO0 R8A7794_CLK_RCAN1
+ R8A7794_CLK_RCAN0 R8A7794_CLK_QSPI_MOD
R8A7794_CLK_I2C5 R8A7794_CLK_I2C4
R8A7794_CLK_I2C3 R8A7794_CLK_I2C2
R8A7794_CLK_I2C1 R8A7794_CLK_I2C0>;
clock-output-names =
"gpio6", "gpio5", "gpio4", "gpio3", "gpio2",
- "gpio1", "gpio0", "qspi_mod",
+ "gpio1", "gpio0", "rcan1", "rcan0", "qspi_mod",
"i2c5", "i2c4", "i2c3", "i2c2", "i2c1", "i2c0";
};
mstp11_clks: mstp11_clks@e615099c {
@@ -1165,6 +1217,12 @@
};
};
+ sysc: system-controller@e6180000 {
+ compatible = "renesas,r8a7794-sysc";
+ reg = <0 0xe6180000 0 0x0200>;
+ #power-domain-cells = <1>;
+ };
+
ipmmu_sy0: mmu@e6280000 {
compatible = "renesas,ipmmu-r8a7794", "renesas,ipmmu-vmsa";
reg = <0 0xe6280000 0 0x1000>;
diff --git a/arch/arm/boot/dts/rk3036-evb.dts b/arch/arm/boot/dts/rk3036-evb.dts
index b3d6ec87f615..8db9e9b197a2 100644
--- a/arch/arm/boot/dts/rk3036-evb.dts
+++ b/arch/arm/boot/dts/rk3036-evb.dts
@@ -45,6 +45,11 @@
/ {
model = "Rockchip RK3036 Evaluation board";
compatible = "rockchip,rk3036-evb", "rockchip,rk3036";
+
+ memory {
+ device_type = "memory";
+ reg = <0x60000000 0x40000000>;
+ };
};
&emac {
diff --git a/arch/arm/boot/dts/rk3036-kylin.dts b/arch/arm/boot/dts/rk3036-kylin.dts
index 6251d109eff4..1df1557a46c3 100644
--- a/arch/arm/boot/dts/rk3036-kylin.dts
+++ b/arch/arm/boot/dts/rk3036-kylin.dts
@@ -46,6 +46,11 @@
model = "Rockchip RK3036 KylinBoard";
compatible = "rockchip,rk3036-kylin", "rockchip,rk3036";
+ memory {
+ device_type = "memory";
+ reg = <0x60000000 0x20000000>;
+ };
+
leds: gpio-leds {
compatible = "gpio-leds";
@@ -130,6 +135,10 @@
status = "okay";
};
+&hdmi {
+ status = "okay";
+};
+
&i2c1 {
clock-frequency = <400000>;
@@ -341,7 +350,6 @@
&sdio {
status = "okay";
- broken-cd;
bus-width = <4>;
cap-sd-highspeed;
cap-sdio-irq;
@@ -385,6 +393,14 @@
status = "okay";
};
+&vop {
+ status = "okay";
+};
+
+&vop_mmu {
+ status = "okay";
+};
+
&pinctrl {
leds {
led_ctl: led-ctl {
diff --git a/arch/arm/boot/dts/rk3036.dtsi b/arch/arm/boot/dts/rk3036.dtsi
index d0f4bb7e1e50..843d2be2e4e9 100644
--- a/arch/arm/boot/dts/rk3036.dtsi
+++ b/arch/arm/boot/dts/rk3036.dtsi
@@ -63,11 +63,6 @@
spi = &spi;
};
- memory {
- device_type = "memory";
- reg = <0x60000000 0x40000000>;
- };
-
cpus {
#address-cells = <1>;
#size-cells = <0>;
@@ -119,6 +114,11 @@
interrupt-affinity = <&cpu0>, <&cpu1>;
};
+ display-subsystem {
+ compatible = "rockchip,display-subsystem";
+ ports = <&vop_out>;
+ };
+
timer {
compatible = "arm,armv7-timer";
arm,cpu-registers-not-fw-configured;
@@ -149,6 +149,36 @@
};
};
+ vop: vop@10118000 {
+ compatible = "rockchip,rk3036-vop";
+ reg = <0x10118000 0x19c>;
+ interrupts = <GIC_SPI 43 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cru ACLK_LCDC>, <&cru SCLK_LCDC>, <&cru HCLK_LCDC>;
+ clock-names = "aclk_vop", "dclk_vop", "hclk_vop";
+ resets = <&cru SRST_LCDC1_A>, <&cru SRST_LCDC1_H>, <&cru SRST_LCDC1_D>;
+ reset-names = "axi", "ahb", "dclk";
+ iommus = <&vop_mmu>;
+ status = "disabled";
+
+ vop_out: port {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ vop_out_hdmi: endpoint@0 {
+ reg = <0>;
+ remote-endpoint = <&hdmi_in_vop>;
+ };
+ };
+ };
+
+ vop_mmu: iommu@10118300 {
+ compatible = "rockchip,iommu";
+ reg = <0x10118300 0x100>;
+ interrupts = <GIC_SPI 43 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-names = "vop_mmu";
+ #iommu-cells = <0>;
+ status = "disabled";
+ };
+
gic: interrupt-controller@10139000 {
compatible = "arm,gic-400";
interrupt-controller;
@@ -237,7 +267,6 @@
compatible = "rockchip,rk3036-dw-mshc", "rockchip,rk3288-dw-mshc";
reg = <0x1021c000 0x4000>;
interrupts = <GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>;
- broken-cd;
bus-width = <8>;
cap-mmc-highspeed;
clock-frequency = <37500000>;
@@ -297,6 +326,27 @@
status = "disabled";
};
+ hdmi: hdmi@20034000 {
+ compatible = "rockchip,rk3036-inno-hdmi";
+ reg = <0x20034000 0x4000>;
+ interrupts = <GIC_SPI 45 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cru PCLK_HDMI>;
+ clock-names = "pclk";
+ rockchip,grf = <&grf>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&hdmi_ctl>;
+ status = "disabled";
+
+ hdmi_in: port {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ hdmi_in_vop: endpoint@0 {
+ reg = <0>;
+ remote-endpoint = <&vop_out_hdmi>;
+ };
+ };
+ };
+
timer: timer@20044000 {
compatible = "rockchip,rk3036-timer", "rockchip,rk3288-timer";
reg = <0x20044000 0x20>;
@@ -644,6 +694,15 @@
};
};
+ hdmi {
+ hdmi_ctl: hdmi-ctl {
+ rockchip,pins = <1 8 RK_FUNC_1 &pcfg_pull_none>,
+ <1 9 RK_FUNC_1 &pcfg_pull_none>,
+ <1 10 RK_FUNC_1 &pcfg_pull_none>,
+ <1 11 RK_FUNC_1 &pcfg_pull_none>;
+ };
+ };
+
uart0 {
uart0_xfer: uart0-xfer {
rockchip,pins = <0 16 RK_FUNC_1 &pcfg_pull_default>,
diff --git a/arch/arm/boot/dts/rk3066a-bqcurie2.dts b/arch/arm/boot/dts/rk3066a-bqcurie2.dts
index 6d2a5b3a84a8..bc674ee206ec 100644
--- a/arch/arm/boot/dts/rk3066a-bqcurie2.dts
+++ b/arch/arm/boot/dts/rk3066a-bqcurie2.dts
@@ -42,6 +42,7 @@
*/
/dts-v1/;
+#include <dt-bindings/input/input.h>
#include "rk3066a.dtsi"
/ {
@@ -77,21 +78,19 @@
gpio-keys {
compatible = "gpio-keys";
- #address-cells = <1>;
- #size-cells = <0>;
autorepeat;
- button@0 {
+ power {
gpios = <&gpio6 2 GPIO_ACTIVE_LOW>; /* GPIO6_A2 */
- linux,code = <116>;
+ linux,code = <KEY_POWER>;
label = "GPIO Key Power";
linux,input-type = <1>;
wakeup-source;
debounce-interval = <100>;
};
- button@1 {
+ volume-down {
gpios = <&gpio4 21 GPIO_ACTIVE_LOW>; /* GPIO4_C5 */
- linux,code = <104>;
+ linux,code = <KEY_VOLUMEDOWN>;
label = "GPIO Key Vol-";
linux,input-type = <1>;
debounce-interval = <100>;
diff --git a/arch/arm/boot/dts/rk3066a-rayeager.dts b/arch/arm/boot/dts/rk3066a-rayeager.dts
index 05533005a809..6e7f2187a0e3 100644
--- a/arch/arm/boot/dts/rk3066a-rayeager.dts
+++ b/arch/arm/boot/dts/rk3066a-rayeager.dts
@@ -41,6 +41,7 @@
*/
/dts-v1/;
+#include <dt-bindings/input/input.h>
#include "rk3066a.dtsi"
/ {
@@ -61,14 +62,12 @@
keys: gpio-keys {
compatible = "gpio-keys";
- #address-cells = <1>;
- #size-cells = <0>;
- button@0 {
+ power {
wakeup-source;
gpios = <&gpio6 2 GPIO_ACTIVE_LOW>;
label = "GPIO Power";
- linux,code = <116>;
+ linux,code = <KEY_POWER>;
pinctrl-names = "default";
pinctrl-0 = <&pwr_key>;
};
@@ -182,7 +181,6 @@
};
&emmc {
- broken-cd;
bus-width = <8>;
cap-mmc-highspeed;
disable-wp;
@@ -348,7 +346,6 @@
};
&mmc1 {
- broken-cd;
bus-width = <4>;
disable-wp;
non-removable;
diff --git a/arch/arm/boot/dts/rk3066a.dtsi b/arch/arm/boot/dts/rk3066a.dtsi
index cb0a552e0b18..c0ba86c3a2ab 100644
--- a/arch/arm/boot/dts/rk3066a.dtsi
+++ b/arch/arm/boot/dts/rk3066a.dtsi
@@ -169,7 +169,7 @@
clocks = <&cru PCLK_EFUSE>;
clock-names = "pclk_efuse";
- cpu_leakage: cpu_leakage {
+ cpu_leakage: cpu_leakage@17 {
reg = <0x17 0x1>;
};
};
@@ -207,7 +207,7 @@
#size-cells = <0>;
status = "disabled";
- usbphy0: usb-phy0 {
+ usbphy0: usb-phy@17c {
#phy-cells = <0>;
reg = <0x17c>;
clocks = <&cru SCLK_OTGPHY0>;
@@ -215,7 +215,7 @@
#clock-cells = <0>;
};
- usbphy1: usb-phy1 {
+ usbphy1: usb-phy@188 {
#phy-cells = <0>;
reg = <0x188>;
clocks = <&cru SCLK_OTGPHY1>;
diff --git a/arch/arm/boot/dts/rk3188-radxarock.dts b/arch/arm/boot/dts/rk3188-radxarock.dts
index 0b6924c97b6b..1da46d138029 100644
--- a/arch/arm/boot/dts/rk3188-radxarock.dts
+++ b/arch/arm/boot/dts/rk3188-radxarock.dts
@@ -41,6 +41,7 @@
*/
/dts-v1/;
+#include <dt-bindings/input/input.h>
#include "rk3188.dtsi"
/ {
@@ -54,13 +55,11 @@
gpio-keys {
compatible = "gpio-keys";
- #address-cells = <1>;
- #size-cells = <0>;
autorepeat;
- button@0 {
+ power {
gpios = <&gpio0 4 GPIO_ACTIVE_LOW>;
- linux,code = <116>;
+ linux,code = <KEY_POWER>;
label = "GPIO Key Power";
linux,input-type = <1>;
wakeup-source;
diff --git a/arch/arm/boot/dts/rk3188.dtsi b/arch/arm/boot/dts/rk3188.dtsi
index 9271833958f9..31f81b265cef 100644
--- a/arch/arm/boot/dts/rk3188.dtsi
+++ b/arch/arm/boot/dts/rk3188.dtsi
@@ -154,7 +154,7 @@
clocks = <&cru PCLK_EFUSE>;
clock-names = "pclk_efuse";
- cpu_leakage: cpu_leakage {
+ cpu_leakage: cpu_leakage@17 {
reg = <0x17 0x1>;
};
};
@@ -166,7 +166,7 @@
#size-cells = <0>;
status = "disabled";
- usbphy0: usb-phy0 {
+ usbphy0: usb-phy@10c {
#phy-cells = <0>;
reg = <0x10c>;
clocks = <&cru SCLK_OTGPHY0>;
@@ -174,7 +174,7 @@
#clock-cells = <0>;
};
- usbphy1: usb-phy1 {
+ usbphy1: usb-phy@11c {
#phy-cells = <0>;
reg = <0x11c>;
clocks = <&cru SCLK_OTGPHY1>;
diff --git a/arch/arm/boot/dts/rk3228-evb.dts b/arch/arm/boot/dts/rk3228-evb.dts
index e3898b810150..5956e8246abe 100644
--- a/arch/arm/boot/dts/rk3228-evb.dts
+++ b/arch/arm/boot/dts/rk3228-evb.dts
@@ -53,7 +53,6 @@
};
&emmc {
- broken-cd;
cap-mmc-highspeed;
mmc-ddr-1_8v;
disable-wp;
@@ -61,6 +60,13 @@
status = "okay";
};
+&tsadc {
+ status = "okay";
+
+ rockchip,hw-tshut-mode = <0>; /* tshut mode 0:CRU 1:GPIO */
+ rockchip,hw-tshut-polarity = <1>; /* tshut polarity 0:LOW 1:HIGH */
+};
+
&uart2 {
status = "okay";
};
diff --git a/arch/arm/boot/dts/rk3228.dtsi b/arch/arm/boot/dts/rk3228.dtsi
index 4dae42a01509..e23a22e29155 100644
--- a/arch/arm/boot/dts/rk3228.dtsi
+++ b/arch/arm/boot/dts/rk3228.dtsi
@@ -43,6 +43,7 @@
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/pinctrl/rockchip.h>
#include <dt-bindings/clock/rk3228-cru.h>
+#include <dt-bindings/thermal/thermal.h>
#include "skeleton.dtsi"
/ {
@@ -69,6 +70,7 @@
/* KHz uV */
816000 1000000
>;
+ #cooling-cells = <2>; /* min followed by max */
clock-latency = <40000>;
clocks = <&cru ARMCLK>;
};
@@ -185,6 +187,58 @@
status = "disabled";
};
+ i2c0: i2c@11050000 {
+ compatible = "rockchip,rk3228-i2c";
+ reg = <0x11050000 0x1000>;
+ interrupts = <GIC_SPI 36 IRQ_TYPE_LEVEL_HIGH>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ clock-names = "i2c";
+ clocks = <&cru PCLK_I2C0>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c0_xfer>;
+ status = "disabled";
+ };
+
+ i2c1: i2c@11060000 {
+ compatible = "rockchip,rk3228-i2c";
+ reg = <0x11060000 0x1000>;
+ interrupts = <GIC_SPI 37 IRQ_TYPE_LEVEL_HIGH>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ clock-names = "i2c";
+ clocks = <&cru PCLK_I2C1>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c1_xfer>;
+ status = "disabled";
+ };
+
+ i2c2: i2c@11070000 {
+ compatible = "rockchip,rk3228-i2c";
+ reg = <0x11070000 0x1000>;
+ interrupts = <GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ clock-names = "i2c";
+ clocks = <&cru PCLK_I2C2>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c2_xfer>;
+ status = "disabled";
+ };
+
+ i2c3: i2c@11080000 {
+ compatible = "rockchip,rk3228-i2c";
+ reg = <0x11080000 0x1000>;
+ interrupts = <GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ clock-names = "i2c";
+ clocks = <&cru PCLK_I2C3>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c3_xfer>;
+ status = "disabled";
+ };
+
pwm0: pwm@110b0000 {
compatible = "rockchip,rk3288-pwm";
reg = <0x110b0000 0x10>;
@@ -247,6 +301,63 @@
assigned-clock-rates = <594000000>;
};
+ thermal-zones {
+ cpu_thermal: cpu-thermal {
+ polling-delay-passive = <100>; /* milliseconds */
+ polling-delay = <5000>; /* milliseconds */
+
+ thermal-sensors = <&tsadc 0>;
+
+ trips {
+ cpu_alert0: cpu_alert0 {
+ temperature = <70000>; /* millicelsius */
+ hysteresis = <2000>; /* millicelsius */
+ type = "passive";
+ };
+ cpu_alert1: cpu_alert1 {
+ temperature = <75000>; /* millicelsius */
+ hysteresis = <2000>; /* millicelsius */
+ type = "passive";
+ };
+ cpu_crit: cpu_crit {
+ temperature = <90000>; /* millicelsius */
+ hysteresis = <2000>; /* millicelsius */
+ type = "critical";
+ };
+ };
+
+ cooling-maps {
+ map0 {
+ trip = <&cpu_alert0>;
+ cooling-device =
+ <&cpu0 THERMAL_NO_LIMIT 6>;
+ };
+ map1 {
+ trip = <&cpu_alert1>;
+ cooling-device =
+ <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+ };
+
+ tsadc: tsadc@11150000 {
+ compatible = "rockchip,rk3228-tsadc";
+ reg = <0x11150000 0x100>;
+ interrupts = <GIC_SPI 58 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cru SCLK_TSADC>, <&cru PCLK_TSADC>;
+ clock-names = "tsadc", "apb_pclk";
+ resets = <&cru SRST_TSADC>;
+ reset-names = "tsadc-apb";
+ pinctrl-names = "init", "default", "sleep";
+ pinctrl-0 = <&otp_gpio>;
+ pinctrl-1 = <&otp_out>;
+ pinctrl-2 = <&otp_gpio>;
+ #thermal-sensor-cells = <0>;
+ rockchip,hw-tshut-temp = <95000>;
+ status = "disabled";
+ };
+
emmc: dwmmc@30020000 {
compatible = "rockchip,rk3288-dw-mshc";
reg = <0x30020000 0x4000>;
@@ -370,6 +481,34 @@
};
};
+ i2c0 {
+ i2c0_xfer: i2c0-xfer {
+ rockchip,pins = <0 0 RK_FUNC_1 &pcfg_pull_none>,
+ <0 1 RK_FUNC_1 &pcfg_pull_none>;
+ };
+ };
+
+ i2c1 {
+ i2c1_xfer: i2c1-xfer {
+ rockchip,pins = <0 2 RK_FUNC_1 &pcfg_pull_none>,
+ <0 3 RK_FUNC_1 &pcfg_pull_none>;
+ };
+ };
+
+ i2c2 {
+ i2c2_xfer: i2c2-xfer {
+ rockchip,pins = <2 20 RK_FUNC_1 &pcfg_pull_none>,
+ <2 21 RK_FUNC_1 &pcfg_pull_none>;
+ };
+ };
+
+ i2c3 {
+ i2c3_xfer: i2c3-xfer {
+ rockchip,pins = <0 6 RK_FUNC_1 &pcfg_pull_none>,
+ <0 7 RK_FUNC_1 &pcfg_pull_none>;
+ };
+ };
+
pwm0 {
pwm0_pin: pwm0-pin {
rockchip,pins = <3 21 RK_FUNC_1 &pcfg_pull_none>;
@@ -394,6 +533,16 @@
};
};
+ tsadc {
+ otp_gpio: otp-gpio {
+ rockchip,pins = <0 24 RK_FUNC_GPIO &pcfg_pull_none>;
+ };
+
+ otp_out: otp-out {
+ rockchip,pins = <0 24 RK_FUNC_2 &pcfg_pull_none>;
+ };
+ };
+
uart0 {
uart0_xfer: uart0-xfer {
rockchip,pins = <2 26 RK_FUNC_1 &pcfg_pull_none>,
diff --git a/arch/arm/boot/dts/rk3288-evb.dtsi b/arch/arm/boot/dts/rk3288-evb.dtsi
index 78d47f7d2938..963365d12208 100644
--- a/arch/arm/boot/dts/rk3288-evb.dtsi
+++ b/arch/arm/boot/dts/rk3288-evb.dtsi
@@ -38,6 +38,7 @@
* OTHER DEALINGS IN THE SOFTWARE.
*/
+#include <dt-bindings/input/input.h>
#include <dt-bindings/pwm/pwm.h>
#include "rk3288.dtsi"
@@ -98,16 +99,14 @@
gpio-keys {
compatible = "gpio-keys";
- #address-cells = <1>;
- #size-cells = <0>;
autorepeat;
pinctrl-names = "default";
pinctrl-0 = <&pwrbtn>;
- button@0 {
+ power {
gpios = <&gpio0 5 GPIO_ACTIVE_LOW>;
- linux,code = <116>;
+ linux,code = <KEY_POWER>;
label = "GPIO Key Power";
linux,input-type = <1>;
wakeup-source;
@@ -172,7 +171,6 @@
};
&emmc {
- broken-cd;
bus-width = <8>;
cap-mmc-highspeed;
disable-wp;
diff --git a/arch/arm/boot/dts/rk3288-firefly.dtsi b/arch/arm/boot/dts/rk3288-firefly.dtsi
index 98c586a43c73..d6cf9ada13c9 100644
--- a/arch/arm/boot/dts/rk3288-firefly.dtsi
+++ b/arch/arm/boot/dts/rk3288-firefly.dtsi
@@ -40,6 +40,7 @@
* OTHER DEALINGS IN THE SOFTWARE.
*/
+#include <dt-bindings/input/input.h>
#include "rk3288.dtsi"
/ {
@@ -87,14 +88,12 @@
keys: gpio-keys {
compatible = "gpio-keys";
- #address-cells = <1>;
- #size-cells = <0>;
- button@0 {
+ power {
wakeup-source;
gpios = <&gpio0 5 GPIO_ACTIVE_LOW>;
label = "GPIO Power";
- linux,code = <116>;
+ linux,code = <KEY_POWER>;
pinctrl-names = "default";
pinctrl-0 = <&pwr_key>;
};
@@ -208,7 +207,6 @@
};
&emmc {
- broken-cd;
bus-width = <8>;
cap-mmc-highspeed;
disable-wp;
@@ -509,7 +507,6 @@
};
&sdio0 {
- broken-cd;
bus-width = <4>;
disable-wp;
non-removable;
diff --git a/arch/arm/boot/dts/rk3288-miqi.dts b/arch/arm/boot/dts/rk3288-miqi.dts
new file mode 100644
index 000000000000..8643103d8cd8
--- /dev/null
+++ b/arch/arm/boot/dts/rk3288-miqi.dts
@@ -0,0 +1,472 @@
+/*
+ * Copyright (c) 2016 Heiko Stuebner <heiko@sntech.de>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include <dt-bindings/input/input.h>
+#include "rk3288.dtsi"
+
+/ {
+ model = "mqmaker MiQi";
+ compatible = "mqmaker,miqi", "rockchip,rk3288";
+
+ chosen {
+ stdout-path = "serial2:115200n8";
+ };
+
+ memory {
+ device_type = "memory";
+ reg = <0 0x80000000>;
+ };
+
+ ext_gmac: external-gmac-clock {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <125000000>;
+ clock-output-names = "ext_gmac";
+ };
+
+ io_domains: io-domains {
+ compatible = "rockchip,rk3288-io-voltage-domain";
+
+ audio-supply = <&vcca_33>;
+ flash0-supply = <&vcc_flash>;
+ flash1-supply = <&vcc_lan>;
+ gpio30-supply = <&vcc_io>;
+ gpio1830-supply = <&vcc_io>;
+ lcdc-supply = <&vcc_io>;
+ sdcard-supply = <&vccio_sd>;
+ wifi-supply = <&vcc_18>;
+ };
+
+ leds {
+ compatible = "gpio-leds";
+
+ work {
+ gpios = <&gpio7 4 GPIO_ACTIVE_LOW>;
+ label = "miqi:green:user";
+ linux,default-trigger = "default-on";
+ pinctrl-names = "default";
+ pinctrl-0 = <&led_ctl>;
+ };
+ };
+
+ vcc_flash: flash-regulator {
+ compatible = "regulator-fixed";
+ regulator-name = "vcc_flash";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ vin-supply = <&vcc_io>;
+ };
+
+ vcc_host: usb-host-regulator {
+ compatible = "regulator-fixed";
+ enable-active-high;
+ gpio = <&gpio0 14 GPIO_ACTIVE_HIGH>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&host_vbus_drv>;
+ regulator-name = "vcc_host";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ regulator-always-on;
+ vin-supply = <&vcc_sys>;
+ };
+
+ vcc_sd: sdmmc-regulator {
+ compatible = "regulator-fixed";
+ gpio = <&gpio7 11 GPIO_ACTIVE_LOW>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&sdmmc_pwr>;
+ regulator-name = "vcc_sd";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ startup-delay-us = <100000>;
+ vin-supply = <&vcc_io>;
+ };
+
+ vcc_sys: vsys-regulator {
+ compatible = "regulator-fixed";
+ regulator-name = "vcc_sys";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+};
+
+&cpu0 {
+ cpu0-supply = <&vdd_cpu>;
+};
+
+&emmc {
+ bus-width = <8>;
+ cap-mmc-highspeed;
+ disable-wp;
+ non-removable;
+ num-slots = <1>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&emmc_clk>, <&emmc_cmd>, <&emmc_pwr>, <&emmc_bus8>;
+ vmmc-supply = <&vcc_io>;
+ vqmmc-supply = <&vcc_flash>;
+ status = "okay";
+};
+
+&gmac {
+ assigned-clocks = <&cru SCLK_MAC>;
+ assigned-clock-parents = <&ext_gmac>;
+ clock_in_out = "input";
+ pinctrl-names = "default";
+ pinctrl-0 = <&rgmii_pins>, <&phy_rst>, <&phy_pmeb>, <&phy_int>;
+ phy-supply = <&vcc_lan>;
+ phy-mode = "rgmii";
+ snps,reset-active-low;
+ snps,reset-delays-us = <0 10000 1000000>;
+ snps,reset-gpio = <&gpio4 8 GPIO_ACTIVE_LOW>;
+ tx_delay = <0x30>;
+ rx_delay = <0x10>;
+ status = "ok";
+};
+
+&hdmi {
+ ddc-i2c-bus = <&i2c5>;
+ status = "okay";
+};
+
+&i2c0 {
+ clock-frequency = <400000>;
+ status = "okay";
+
+ vdd_cpu: syr827@40 {
+ compatible = "silergy,syr827";
+ fcs,suspend-voltage-selector = <1>;
+ reg = <0x40>;
+ regulator-name = "vdd_cpu";
+ regulator-min-microvolt = <850000>;
+ regulator-max-microvolt = <1350000>;
+ regulator-always-on;
+ regulator-boot-on;
+ regulator-enable-ramp-delay = <300>;
+ regulator-ramp-delay = <8000>;
+ vin-supply = <&vcc_sys>;
+ };
+
+ vdd_gpu: syr828@41 {
+ compatible = "silergy,syr828";
+ fcs,suspend-voltage-selector = <1>;
+ reg = <0x41>;
+ regulator-name = "vdd_gpu";
+ regulator-min-microvolt = <850000>;
+ regulator-max-microvolt = <1350000>;
+ regulator-always-on;
+ vin-supply = <&vcc_sys>;
+ };
+
+ hym8563: hym8563@51 {
+ compatible = "haoyu,hym8563";
+ reg = <0x51>;
+ #clock-cells = <0>;
+ clock-frequency = <32768>;
+ clock-output-names = "xin32k";
+ };
+
+ act8846: act8846@5a {
+ compatible = "active-semi,act8846";
+ reg = <0x5a>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pmic_vsel>;
+ system-power-controller;
+
+ vp1-supply = <&vcc_sys>;
+ vp2-supply = <&vcc_sys>;
+ vp3-supply = <&vcc_sys>;
+ vp4-supply = <&vcc_sys>;
+ inl1-supply = <&vcc_sys>;
+ inl2-supply = <&vcc_sys>;
+ inl3-supply = <&vcc_20>;
+
+ regulators {
+ vcc_ddr: REG1 {
+ regulator-name = "vcc_ddr";
+ regulator-always-on;
+ };
+
+ vcc_io: REG2 {
+ regulator-name = "vcc_io";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ };
+
+ vdd_log: REG3 {
+ regulator-name = "vdd_log";
+ regulator-min-microvolt = <1100000>;
+ regulator-max-microvolt = <1100000>;
+ regulator-always-on;
+ };
+
+ vcc_20: REG4 {
+ regulator-name = "vcc_20";
+ regulator-min-microvolt = <2000000>;
+ regulator-max-microvolt = <2000000>;
+ regulator-always-on;
+ };
+
+ vccio_sd: REG5 {
+ regulator-name = "vccio_sd";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-always-on;
+ };
+
+ vdd10_lcd: REG6 {
+ regulator-name = "vdd10_lcd";
+ regulator-min-microvolt = <1000000>;
+ regulator-max-microvolt = <1000000>;
+ regulator-always-on;
+ };
+
+ vcca_18: REG7 {
+ regulator-name = "vcca_18";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ };
+
+ vcca_33: REG8 {
+ regulator-name = "vcca_33";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ };
+
+ vcc_lan: REG9 {
+ regulator-name = "vcc_lan";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ };
+
+ vdd_10: REG10 {
+ regulator-name = "vdd_10";
+ regulator-min-microvolt = <1000000>;
+ regulator-max-microvolt = <1000000>;
+ regulator-always-on;
+ };
+
+ vcc_18: REG11 {
+ regulator-name = "vcc_18";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+ };
+
+ vcc18_lcd: REG12 {
+ regulator-name = "vcc18_lcd";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-always-on;
+ };
+ };
+ };
+};
+
+&i2c1 {
+ status = "okay";
+};
+
+&i2c2 {
+ status = "okay";
+};
+
+&i2c4 {
+ status = "okay";
+};
+
+&i2c5 {
+ status = "okay";
+};
+
+&pinctrl {
+ pcfg_output_high: pcfg-output-high {
+ output-high;
+ };
+
+ pcfg_output_low: pcfg-output-low {
+ output-low;
+ };
+
+ pcfg_pull_up_drv_12ma: pcfg-pull-up-drv-12ma {
+ bias-pull-up;
+ drive-strength = <12>;
+ };
+
+ act8846 {
+ pmic_int: pmic-int {
+ rockchip,pins = <0 4 RK_FUNC_GPIO &pcfg_pull_up>;
+ };
+
+ pmic_sleep: pmic-sleep {
+ rockchip,pins = <0 0 RK_FUNC_GPIO &pcfg_output_low>;
+ };
+
+ pmic_vsel: pmic-vsel {
+ rockchip,pins = <7 1 RK_FUNC_GPIO &pcfg_output_low>;
+ };
+ };
+
+ gmac {
+ phy_int: phy-int {
+ rockchip,pins = <0 9 RK_FUNC_GPIO &pcfg_pull_up>;
+ };
+
+ phy_pmeb: phy-pmeb {
+ rockchip,pins = <0 8 RK_FUNC_GPIO &pcfg_pull_up>;
+ };
+
+ phy_rst: phy-rst {
+ rockchip,pins = <4 8 RK_FUNC_GPIO &pcfg_output_high>;
+ };
+ };
+
+ leds {
+ led_ctl: led-ctl {
+ rockchip,pins = <7 4 RK_FUNC_GPIO &pcfg_pull_none>;
+ };
+ };
+
+ sdmmc {
+ /*
+ * Default drive strength isn't enough to achieve even
+ * high-speed mode on firefly board so bump up to 12ma.
+ */
+ sdmmc_bus4: sdmmc-bus4 {
+ rockchip,pins = <6 16 RK_FUNC_1 &pcfg_pull_up_drv_12ma>,
+ <6 17 RK_FUNC_1 &pcfg_pull_up_drv_12ma>,
+ <6 18 RK_FUNC_1 &pcfg_pull_up_drv_12ma>,
+ <6 19 RK_FUNC_1 &pcfg_pull_up_drv_12ma>;
+ };
+
+ sdmmc_clk: sdmmc-clk {
+ rockchip,pins = <6 20 RK_FUNC_1 &pcfg_pull_none_12ma>;
+ };
+
+ sdmmc_cmd: sdmmc-cmd {
+ rockchip,pins = <6 21 RK_FUNC_1 &pcfg_pull_up_drv_12ma>;
+ };
+
+ sdmmc_pwr: sdmmc-pwr {
+ rockchip,pins = <7 11 RK_FUNC_GPIO &pcfg_pull_none>;
+ };
+ };
+
+ usb_host {
+ host_vbus_drv: host-vbus-drv {
+ rockchip,pins = <0 14 RK_FUNC_GPIO &pcfg_pull_none>;
+ };
+ };
+};
+
+&saradc {
+ vref-supply = <&vcc_18>;
+ status = "okay";
+};
+
+&sdmmc {
+ bus-width = <4>;
+ cap-mmc-highspeed;
+ cap-sd-highspeed;
+ card-detect-delay = <200>;
+ disable-wp;
+ num-slots = <1>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&sdmmc_clk>, <&sdmmc_cmd>, <&sdmmc_cd>, <&sdmmc_bus4>;
+ vmmc-supply = <&vcc_sd>;
+ vqmmc-supply = <&vccio_sd>;
+ status = "okay";
+};
+
+&tsadc {
+ rockchip,hw-tshut-mode = <0>;
+ rockchip,hw-tshut-polarity = <0>;
+ status = "okay";
+};
+
+&uart2 {
+ status = "okay";
+};
+
+&uart3 {
+ status = "okay";
+};
+
+&usbphy {
+ status = "okay";
+};
+
+&usb_host1 {
+ status = "okay";
+};
+
+&usb_otg {
+ /*
+ * The otg controller is the only system power source,
+ * so needs to always stay in device mode.
+ */
+ dr_mode = "peripheral";
+ status = "okay";
+};
+
+&vopb {
+ status = "okay";
+};
+
+&vopb_mmu {
+ status = "okay";
+};
+
+&vopl {
+ status = "okay";
+};
+
+&vopl_mmu {
+ status = "okay";
+};
+
+&wdt {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/rk3288-popmetal.dts b/arch/arm/boot/dts/rk3288-popmetal.dts
index 2ff9689d2e1b..720717bb3614 100644
--- a/arch/arm/boot/dts/rk3288-popmetal.dts
+++ b/arch/arm/boot/dts/rk3288-popmetal.dts
@@ -41,7 +41,7 @@
*/
/dts-v1/;
-
+#include <dt-bindings/input/input.h>
#include "rk3288.dtsi"
/ {
@@ -62,16 +62,14 @@
gpio-keys {
compatible = "gpio-keys";
- #address-cells = <1>;
- #size-cells = <0>;
autorepeat;
pinctrl-names = "default";
pinctrl-0 = <&pwrbtn>;
- button@0 {
+ power {
gpios = <&gpio0 5 GPIO_ACTIVE_LOW>;
- linux,code = <116>;
+ linux,code = <KEY_POWER>;
label = "GPIO Key Power";
linux,input-type = <1>;
wakeup-source;
@@ -162,7 +160,6 @@
};
&emmc {
- broken-cd;
bus-width = <8>;
cap-mmc-highspeed;
disable-wp;
diff --git a/arch/arm/boot/dts/rk3288-r89.dts b/arch/arm/boot/dts/rk3288-r89.dts
index 510a1d0d7abb..4b8a8adb243c 100644
--- a/arch/arm/boot/dts/rk3288-r89.dts
+++ b/arch/arm/boot/dts/rk3288-r89.dts
@@ -41,6 +41,7 @@
*/
/dts-v1/;
+#include <dt-bindings/input/input.h>
#include <dt-bindings/pwm/pwm.h>
#include "rk3288.dtsi"
@@ -61,16 +62,14 @@
gpio-keys {
compatible = "gpio-keys";
- #address-cells = <1>;
- #size-cells = <0>;
autorepeat;
pinctrl-names = "default";
pinctrl-0 = <&pwrbtn>;
- button@0 {
+ power {
gpios = <&gpio0 5 GPIO_ACTIVE_LOW>;
- linux,code = <116>;
+ linux,code = <KEY_POWER>;
label = "GPIO Key Power";
linux,input-type = <1>;
wakeup-source;
diff --git a/arch/arm/boot/dts/rk3288-veyron-chromebook.dtsi b/arch/arm/boot/dts/rk3288-veyron-chromebook.dtsi
index 610769d99522..2958c36d12a0 100644
--- a/arch/arm/boot/dts/rk3288-veyron-chromebook.dtsi
+++ b/arch/arm/boot/dts/rk3288-veyron-chromebook.dtsi
@@ -54,6 +54,50 @@
i2c20 = &i2c_tunnel;
};
+ backlight: backlight {
+ compatible = "pwm-backlight";
+ brightness-levels = <
+ 0 1 2 3 4 5 6 7
+ 8 9 10 11 12 13 14 15
+ 16 17 18 19 20 21 22 23
+ 24 25 26 27 28 29 30 31
+ 32 33 34 35 36 37 38 39
+ 40 41 42 43 44 45 46 47
+ 48 49 50 51 52 53 54 55
+ 56 57 58 59 60 61 62 63
+ 64 65 66 67 68 69 70 71
+ 72 73 74 75 76 77 78 79
+ 80 81 82 83 84 85 86 87
+ 88 89 90 91 92 93 94 95
+ 96 97 98 99 100 101 102 103
+ 104 105 106 107 108 109 110 111
+ 112 113 114 115 116 117 118 119
+ 120 121 122 123 124 125 126 127
+ 128 129 130 131 132 133 134 135
+ 136 137 138 139 140 141 142 143
+ 144 145 146 147 148 149 150 151
+ 152 153 154 155 156 157 158 159
+ 160 161 162 163 164 165 166 167
+ 168 169 170 171 172 173 174 175
+ 176 177 178 179 180 181 182 183
+ 184 185 186 187 188 189 190 191
+ 192 193 194 195 196 197 198 199
+ 200 201 202 203 204 205 206 207
+ 208 209 210 211 212 213 214 215
+ 216 217 218 219 220 221 222 223
+ 224 225 226 227 228 229 230 231
+ 232 233 234 235 236 237 238 239
+ 240 241 242 243 244 245 246 247
+ 248 249 250 251 252 253 254 255>;
+ default-brightness-level = <128>;
+ enable-gpios = <&gpio7 2 GPIO_ACTIVE_HIGH>;
+ backlight-boot-off;
+ pinctrl-names = "default";
+ pinctrl-0 = <&bl_en>;
+ pwms = <&pwm0 0 1000000 0>;
+ pwm-delay-us = <10000>;
+ };
+
gpio-charger {
compatible = "gpio-charger";
charger-type = "mains";
@@ -62,6 +106,21 @@
pinctrl-0 = <&ac_present_ap>;
};
+ panel: panel {
+ compatible ="innolux,n116bge", "simple-panel";
+ status = "okay";
+ power-supply = <&vcc33_lcd>;
+ backlight = <&backlight>;
+
+ ports {
+ panel_in: port {
+ panel_in_edp: endpoint {
+ remote-endpoint = <&edp_out_panel>;
+ };
+ };
+ };
+ };
+
/* A non-regulated voltage from power supply or battery */
vccsys: vccsys {
compatible = "regulator-fixed";
@@ -103,6 +162,29 @@
};
};
+&edp {
+ status = "okay";
+
+ pinctrl-names = "default";
+ pinctrl-0 = <&edp_hpd>;
+
+ ports {
+ edp_out: port@1 {
+ reg = <1>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ edp_out_panel: endpoint {
+ reg = <0>;
+ remote-endpoint = <&panel_in_edp>;
+ };
+ };
+ };
+};
+
+&edp_phy {
+ status = "okay";
+};
+
&gpio_keys {
pinctrl-0 = <&pwr_key_l &ap_lid_int_l>;
lid {
@@ -115,6 +197,10 @@
};
};
+&pwm0 {
+ status = "okay";
+};
+
&rk808 {
vcc11-supply = <&vcc_5v>;
@@ -168,6 +254,14 @@
};
};
+&vopl {
+ status = "okay";
+};
+
+&vopl_mmu {
+ status = "okay";
+};
+
&pinctrl {
pinctrl-0 = <
/* Common for sleep and wake, but no owners */
@@ -184,6 +278,12 @@
&suspend_l_sleep
>;
+ backlight {
+ bl_en: bl-en {
+ rockchip,pins = <7 2 RK_FUNC_GPIO &pcfg_pull_none>;
+ };
+ };
+
buttons {
ap_lid_int_l: ap-lid-int-l {
rockchip,pins = <0 6 RK_FUNC_GPIO &pcfg_pull_up>;
diff --git a/arch/arm/boot/dts/rk3288-veyron-jaq.dts b/arch/arm/boot/dts/rk3288-veyron-jaq.dts
index c2f52cfb4d06..3748abf562b1 100644
--- a/arch/arm/boot/dts/rk3288-veyron-jaq.dts
+++ b/arch/arm/boot/dts/rk3288-veyron-jaq.dts
@@ -61,6 +61,7 @@
pinctrl-names = "default";
pinctrl-0 = <&lcd_enable_h>;
regulator-name = "panel_regulator";
+ startup-delay-us = <100000>;
vin-supply = <&vcc33_sys>;
};
@@ -88,6 +89,48 @@
};
};
+&backlight {
+ /* Jaq panel PWM must be >= 3%, so start non-zero brightness at 8 */
+ brightness-levels = <
+ 0
+ 8 9 10 11 12 13 14 15
+ 16 17 18 19 20 21 22 23
+ 24 25 26 27 28 29 30 31
+ 32 33 34 35 36 37 38 39
+ 40 41 42 43 44 45 46 47
+ 48 49 50 51 52 53 54 55
+ 56 57 58 59 60 61 62 63
+ 64 65 66 67 68 69 70 71
+ 72 73 74 75 76 77 78 79
+ 80 81 82 83 84 85 86 87
+ 88 89 90 91 92 93 94 95
+ 96 97 98 99 100 101 102 103
+ 104 105 106 107 108 109 110 111
+ 112 113 114 115 116 117 118 119
+ 120 121 122 123 124 125 126 127
+ 128 129 130 131 132 133 134 135
+ 136 137 138 139 140 141 142 143
+ 144 145 146 147 148 149 150 151
+ 152 153 154 155 156 157 158 159
+ 160 161 162 163 164 165 166 167
+ 168 169 170 171 172 173 174 175
+ 176 177 178 179 180 181 182 183
+ 184 185 186 187 188 189 190 191
+ 192 193 194 195 196 197 198 199
+ 200 201 202 203 204 205 206 207
+ 208 209 210 211 212 213 214 215
+ 216 217 218 219 220 221 222 223
+ 224 225 226 227 228 229 230 231
+ 232 233 234 235 236 237 238 239
+ 240 241 242 243 244 245 246 247
+ 248 249 250 251 252 253 254 255>;
+ power-supply = <&backlight_regulator>;
+};
+
+&panel {
+ power-supply = <&panel_regulator>;
+};
+
&rk808 {
pinctrl-names = "default";
pinctrl-0 = <&pmic_int_l &dvs_1 &dvs_2>;
@@ -142,12 +185,6 @@
};
};
- edp {
- edp_hpd: edp_hpd {
- rockchip,pins = <7 11 RK_FUNC_2 &pcfg_pull_down>;
- };
- };
-
hdmi {
vcc50_hdmi_en: vcc50-hdmi-en {
rockchip,pins = <5 19 RK_FUNC_GPIO &pcfg_pull_none>;
diff --git a/arch/arm/boot/dts/rk3288-veyron-jerry.dts b/arch/arm/boot/dts/rk3288-veyron-jerry.dts
index 60bd6e91e308..f6b2eaaebb9a 100644
--- a/arch/arm/boot/dts/rk3288-veyron-jerry.dts
+++ b/arch/arm/boot/dts/rk3288-veyron-jerry.dts
@@ -60,6 +60,7 @@
pinctrl-names = "default";
pinctrl-0 = <&lcd_enable_h>;
regulator-name = "panel_regulator";
+ startup-delay-us = <100000>;
vin-supply = <&vcc33_sys>;
};
@@ -87,6 +88,14 @@
};
};
+&backlight {
+ power-supply = <&backlight_regulator>;
+};
+
+&panel {
+ power-supply = <&panel_regulator>;
+};
+
&rk808 {
pinctrl-names = "default";
pinctrl-0 = <&pmic_int_l>;
diff --git a/arch/arm/boot/dts/rk3288-veyron-minnie.dts b/arch/arm/boot/dts/rk3288-veyron-minnie.dts
index 699beb0a9481..f72d616d1bf8 100644
--- a/arch/arm/boot/dts/rk3288-veyron-minnie.dts
+++ b/arch/arm/boot/dts/rk3288-veyron-minnie.dts
@@ -70,6 +70,7 @@
pinctrl-names = "default";
pinctrl-0 = <&lcd_enable_h>;
regulator-name = "panel_regulator";
+ startup-delay-us = <100000>;
vin-supply = <&vcc33_sys>;
};
@@ -86,6 +87,44 @@
};
};
+&backlight {
+ /* Minnie panel PWM must be >= 1%, so start non-zero brightness at 3 */
+ brightness-levels = <
+ 0 3 4 5 6 7
+ 8 9 10 11 12 13 14 15
+ 16 17 18 19 20 21 22 23
+ 24 25 26 27 28 29 30 31
+ 32 33 34 35 36 37 38 39
+ 40 41 42 43 44 45 46 47
+ 48 49 50 51 52 53 54 55
+ 56 57 58 59 60 61 62 63
+ 64 65 66 67 68 69 70 71
+ 72 73 74 75 76 77 78 79
+ 80 81 82 83 84 85 86 87
+ 88 89 90 91 92 93 94 95
+ 96 97 98 99 100 101 102 103
+ 104 105 106 107 108 109 110 111
+ 112 113 114 115 116 117 118 119
+ 120 121 122 123 124 125 126 127
+ 128 129 130 131 132 133 134 135
+ 136 137 138 139 140 141 142 143
+ 144 145 146 147 148 149 150 151
+ 152 153 154 155 156 157 158 159
+ 160 161 162 163 164 165 166 167
+ 168 169 170 171 172 173 174 175
+ 176 177 178 179 180 181 182 183
+ 184 185 186 187 188 189 190 191
+ 192 193 194 195 196 197 198 199
+ 200 201 202 203 204 205 206 207
+ 208 209 210 211 212 213 214 215
+ 216 217 218 219 220 221 222 223
+ 224 225 226 227 228 229 230 231
+ 232 233 234 235 236 237 238 239
+ 240 241 242 243 244 245 246 247
+ 248 249 250 251 252 253 254 255>;
+ power-supply = <&backlight_regulator>;
+};
+
&emmc {
/delete-property/mmc-hs200-1_8v;
};
@@ -135,6 +174,11 @@
};
};
+&panel {
+ compatible = "auo,b101ean01", "simple-panel";
+ power-supply = <&panel_regulator>;
+};
+
&rk808 {
pinctrl-names = "default";
pinctrl-0 = <&pmic_int_l &dvs_1 &dvs_2>;
diff --git a/arch/arm/boot/dts/rk3288-veyron-pinky.dts b/arch/arm/boot/dts/rk3288-veyron-pinky.dts
index 94b56e33d947..d44351ec2333 100644
--- a/arch/arm/boot/dts/rk3288-veyron-pinky.dts
+++ b/arch/arm/boot/dts/rk3288-veyron-pinky.dts
@@ -65,6 +65,13 @@
pinctrl-0 = <&emmc_clk &emmc_cmd &emmc_bus8 &emmc_reset>;
};
+&edp {
+ /delete-property/pinctrl-names;
+ /delete-property/pinctrl-0;
+
+ force-hpd;
+};
+
&gpio_keys {
pinctrl-0 = <&pwr_key_h &ap_lid_int_l>;
diff --git a/arch/arm/boot/dts/rk3288-veyron-speedy.dts b/arch/arm/boot/dts/rk3288-veyron-speedy.dts
index b34a7b5b3f62..a0d033f6fe52 100644
--- a/arch/arm/boot/dts/rk3288-veyron-speedy.dts
+++ b/arch/arm/boot/dts/rk3288-veyron-speedy.dts
@@ -61,6 +61,7 @@
pinctrl-names = "default";
pinctrl-0 = <&lcd_enable_h>;
regulator-name = "panel_regulator";
+ startup-delay-us = <100000>;
vin-supply = <&vcc33_sys>;
};
@@ -88,6 +89,10 @@
};
};
+&backlight {
+ power-supply = <&backlight_regulator>;
+};
+
&cpu_alert0 {
temperature = <65000>;
};
@@ -96,6 +101,17 @@
temperature = <70000>;
};
+&edp {
+ /delete-property/pinctrl-names;
+ /delete-property/pinctrl-0;
+
+ force-hpd;
+};
+
+&panel {
+ power-supply = <&panel_regulator>;
+};
+
&rk808 {
pinctrl-names = "default";
pinctrl-0 = <&pmic_int_l>;
diff --git a/arch/arm/boot/dts/rk3288-veyron.dtsi b/arch/arm/boot/dts/rk3288-veyron.dtsi
index 412809c60d01..b2557bf5a58f 100644
--- a/arch/arm/boot/dts/rk3288-veyron.dtsi
+++ b/arch/arm/boot/dts/rk3288-veyron.dtsi
@@ -141,12 +141,27 @@
&cpu0 {
cpu0-supply = <&vdd_cpu>;
+ operating-points = <
+ /* KHz uV */
+ 1800000 1400000
+ 1704000 1350000
+ 1608000 1300000
+ 1512000 1250000
+ 1416000 1200000
+ 1200000 1100000
+ 1008000 1050000
+ 816000 1000000
+ 696000 950000
+ 600000 900000
+ 408000 900000
+ 216000 900000
+ 126000 900000
+ >;
};
&emmc {
status = "okay";
- broken-cd;
bus-width = <8>;
cap-mmc-highspeed;
rockchip,default-sample-phase = <158>;
@@ -347,7 +362,6 @@
&sdio0 {
status = "okay";
- broken-cd;
bus-width = <4>;
cap-sd-highspeed;
cap-sdio-irq;
diff --git a/arch/arm/boot/dts/rk3288.dtsi b/arch/arm/boot/dts/rk3288.dtsi
index 31f7e20ef418..3b44ef3cff12 100644
--- a/arch/arm/boot/dts/rk3288.dtsi
+++ b/arch/arm/boot/dts/rk3288.dtsi
@@ -445,7 +445,78 @@
};
thermal-zones {
- #include "rk3288-thermal.dtsi"
+ reserve_thermal: reserve_thermal {
+ polling-delay-passive = <1000>; /* milliseconds */
+ polling-delay = <5000>; /* milliseconds */
+
+ thermal-sensors = <&tsadc 0>;
+ };
+
+ cpu_thermal: cpu_thermal {
+ polling-delay-passive = <100>; /* milliseconds */
+ polling-delay = <5000>; /* milliseconds */
+
+ thermal-sensors = <&tsadc 1>;
+
+ trips {
+ cpu_alert0: cpu_alert0 {
+ temperature = <70000>; /* millicelsius */
+ hysteresis = <2000>; /* millicelsius */
+ type = "passive";
+ };
+ cpu_alert1: cpu_alert1 {
+ temperature = <75000>; /* millicelsius */
+ hysteresis = <2000>; /* millicelsius */
+ type = "passive";
+ };
+ cpu_crit: cpu_crit {
+ temperature = <90000>; /* millicelsius */
+ hysteresis = <2000>; /* millicelsius */
+ type = "critical";
+ };
+ };
+
+ cooling-maps {
+ map0 {
+ trip = <&cpu_alert0>;
+ cooling-device =
+ <&cpu0 THERMAL_NO_LIMIT 6>;
+ };
+ map1 {
+ trip = <&cpu_alert1>;
+ cooling-device =
+ <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+
+ gpu_thermal: gpu_thermal {
+ polling-delay-passive = <100>; /* milliseconds */
+ polling-delay = <5000>; /* milliseconds */
+
+ thermal-sensors = <&tsadc 2>;
+
+ trips {
+ gpu_alert0: gpu_alert0 {
+ temperature = <70000>; /* millicelsius */
+ hysteresis = <2000>; /* millicelsius */
+ type = "passive";
+ };
+ gpu_crit: gpu_crit {
+ temperature = <90000>; /* millicelsius */
+ hysteresis = <2000>; /* millicelsius */
+ type = "critical";
+ };
+ };
+
+ cooling-maps {
+ map0 {
+ trip = <&gpu_alert0>;
+ cooling-device =
+ <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
};
tsadc: tsadc@ff280000 {
@@ -659,7 +730,7 @@
* *_HDMI HDMI
* *_MIPI_* MIPI
*/
- pd_vio {
+ pd_vio@RK3288_PD_VIO {
reg = <RK3288_PD_VIO>;
clocks = <&cru ACLK_IEP>,
<&cru ACLK_ISP>,
@@ -692,7 +763,7 @@
* Note: The following 3 are HEVC(H.265) clocks,
* and on the ACLK_HEVC_NIU (NOC).
*/
- pd_hevc {
+ pd_hevc@RK3288_PD_HEVC {
reg = <RK3288_PD_HEVC>;
clocks = <&cru ACLK_HEVC>,
<&cru SCLK_HEVC_CABAC>,
@@ -704,7 +775,7 @@
* (video endecoder & decoder) clocks that on the
* ACLK_VCODEC_NIU and HCLK_VCODEC_NIU (NOC).
*/
- pd_video {
+ pd_video@RK3288_PD_VIDEO {
reg = <RK3288_PD_VIDEO>;
clocks = <&cru ACLK_VCODEC>,
<&cru HCLK_VCODEC>;
@@ -714,7 +785,7 @@
* Note: ACLK_GPU is the GPU clock,
* and on the ACLK_GPU_NIU (NOC).
*/
- pd_gpu {
+ pd_gpu@RK3288_PD_GPU {
reg = <RK3288_PD_GPU>;
clocks = <&cru ACLK_GPU>;
};
@@ -745,8 +816,16 @@
};
grf: syscon@ff770000 {
- compatible = "rockchip,rk3288-grf", "syscon";
+ compatible = "rockchip,rk3288-grf", "syscon", "simple-mfd";
reg = <0xff770000 0x1000>;
+
+ edp_phy: edp-phy {
+ compatible = "rockchip,rk3288-dp-phy";
+ clocks = <&cru SCLK_EDP_24M>;
+ clock-names = "24m";
+ #phy-cells = <0>;
+ status = "disabled";
+ };
};
wdt: watchdog@ff800000 {
@@ -765,7 +844,7 @@
clocks = <&cru HCLK_SPDIF8CH>, <&cru SCLK_SPDIF8CH>;
dmas = <&dmac_bus_s 3>;
dma-names = "tx";
- interrupts = <GIC_SPI 85 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 54 IRQ_TYPE_LEVEL_HIGH>;
pinctrl-names = "default";
pinctrl-0 = <&spdif_tx>;
rockchip,grf = <&grf>;
@@ -775,7 +854,7 @@
i2s: i2s@ff890000 {
compatible = "rockchip,rk3288-i2s", "rockchip,rk3066-i2s";
reg = <0xff890000 0x10000>;
- interrupts = <GIC_SPI 85 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>;
#address-cells = <1>;
#size-cells = <0>;
dmas = <&dmac_bus_s 0>, <&dmac_bus_s 1>;
@@ -821,6 +900,12 @@
reg = <0>;
remote-endpoint = <&hdmi_in_vopb>;
};
+
+ vopb_out_edp: endpoint@1 {
+ reg = <1>;
+ remote-endpoint = <&edp_in_vopb>;
+ };
+
vopb_out_mipi: endpoint@2 {
reg = <2>;
remote-endpoint = <&mipi_in_vopb>;
@@ -858,6 +943,12 @@
reg = <0>;
remote-endpoint = <&hdmi_in_vopl>;
};
+
+ vopl_out_edp: endpoint@1 {
+ reg = <1>;
+ remote-endpoint = <&edp_in_vopl>;
+ };
+
vopl_out_mipi: endpoint@2 {
reg = <2>;
remote-endpoint = <&mipi_in_vopl>;
@@ -878,19 +969,16 @@
mipi_dsi: mipi@ff960000 {
compatible = "rockchip,rk3288-mipi-dsi", "snps,dw-mipi-dsi";
reg = <0xff960000 0x4000>;
- interrupts = <GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cru SCLK_MIPIDSI_24M>, <&cru PCLK_MIPI_DSI0>;
clock-names = "ref", "pclk";
+ power-domains = <&power RK3288_PD_VIO>;
rockchip,grf = <&grf>;
#address-cells = <1>;
#size-cells = <0>;
status = "disabled";
ports {
- #address-cells = <1>;
- #size-cells = <0>;
- reg = <1>;
-
mipi_in: port {
#address-cells = <1>;
#size-cells = <0>;
@@ -906,6 +994,38 @@
};
};
+ edp: dp@ff970000 {
+ compatible = "rockchip,rk3288-dp";
+ reg = <0xff970000 0x4000>;
+ interrupts = <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cru SCLK_EDP>, <&cru PCLK_EDP_CTRL>;
+ clock-names = "dp", "pclk";
+ phys = <&edp_phy>;
+ phy-names = "dp";
+ resets = <&cru SRST_EDP>;
+ reset-names = "dp";
+ rockchip,grf = <&grf>;
+ status = "disabled";
+
+ ports {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ edp_in: port@0 {
+ reg = <0>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ edp_in_vopb: endpoint@0 {
+ reg = <0>;
+ remote-endpoint = <&vopb_out_edp>;
+ };
+ edp_in_vopl: endpoint@1 {
+ reg = <1>;
+ remote-endpoint = <&vopl_out_edp>;
+ };
+ };
+ };
+ };
+
hdmi: hdmi@ff980000 {
compatible = "rockchip,rk3288-dw-hdmi";
reg = <0xff980000 0x20000>;
@@ -966,7 +1086,7 @@
#size-cells = <0>;
status = "disabled";
- usbphy0: usb-phy0 {
+ usbphy0: usb-phy@320 {
#phy-cells = <0>;
reg = <0x320>;
clocks = <&cru SCLK_OTGPHY0>;
@@ -974,7 +1094,7 @@
#clock-cells = <0>;
};
- usbphy1: usb-phy1 {
+ usbphy1: usb-phy@334 {
#phy-cells = <0>;
reg = <0x334>;
clocks = <&cru SCLK_OTGPHY1>;
@@ -982,7 +1102,7 @@
#clock-cells = <0>;
};
- usbphy2: usb-phy2 {
+ usbphy2: usb-phy@348 {
#phy-cells = <0>;
reg = <0x348>;
clocks = <&cru SCLK_OTGPHY2>;
@@ -1158,6 +1278,12 @@
};
};
+ edp {
+ edp_hpd: edp-hpd {
+ rockchip,pins = <7 11 RK_FUNC_2 &pcfg_pull_down>;
+ };
+ };
+
i2c0 {
i2c0_xfer: i2c0-xfer {
rockchip,pins = <0 15 RK_FUNC_1 &pcfg_pull_none>,
diff --git a/arch/arm/boot/dts/s5pv210-smdkv210.dts b/arch/arm/boot/dts/s5pv210-smdkv210.dts
index 54fcc3fc82e2..9eb6aff3e38f 100644
--- a/arch/arm/boot/dts/s5pv210-smdkv210.dts
+++ b/arch/arm/boot/dts/s5pv210-smdkv210.dts
@@ -197,7 +197,7 @@
display-timings {
native-mode = <&timing0>;
- timing0: timing@0 {
+ timing0: timing {
/* 800x480@60Hz */
clock-frequency = <24373920>;
hactive = <800>;
diff --git a/arch/arm/boot/dts/sama5d2-pinfunc.h b/arch/arm/boot/dts/sama5d2-pinfunc.h
index b0c912feaa2f..8a394f336003 100644
--- a/arch/arm/boot/dts/sama5d2-pinfunc.h
+++ b/arch/arm/boot/dts/sama5d2-pinfunc.h
@@ -837,8 +837,8 @@
#define PIN_PD23__ISC_FIELD PINMUX_PIN(PIN_PD23, 6, 4)
#define PIN_PD24 120
#define PIN_PD24__GPIO PINMUX_PIN(PIN_PD24, 0, 0)
-#define PIN_PD24__UTXD2 PINMUX_PIN(PIN_PD23, 1, 2)
-#define PIN_PD24__FLEXCOM4_IO3 PINMUX_PIN(PIN_PD23, 3, 3)
+#define PIN_PD24__UTXD2 PINMUX_PIN(PIN_PD24, 1, 2)
+#define PIN_PD24__FLEXCOM4_IO3 PINMUX_PIN(PIN_PD24, 3, 3)
#define PIN_PD25 121
#define PIN_PD25__GPIO PINMUX_PIN(PIN_PD25, 0, 0)
#define PIN_PD25__SPI1_SPCK PINMUX_PIN(PIN_PD25, 1, 3)
diff --git a/arch/arm/boot/dts/sama5d2.dtsi b/arch/arm/boot/dts/sama5d2.dtsi
index 78996bdbd3df..2827e7ab5ebc 100644
--- a/arch/arm/boot/dts/sama5d2.dtsi
+++ b/arch/arm/boot/dts/sama5d2.dtsi
@@ -280,7 +280,7 @@
status = "disabled";
nfc@c0000000 {
- compatible = "atmel,sama5d4-nfc";
+ compatible = "atmel,sama5d3-nfc";
#address-cells = <1>;
#size-cells = <1>;
reg = < /* NFC Command Registers */
@@ -319,6 +319,32 @@
#size-cells = <1>;
ranges;
+ hlcdc: hlcdc@f0000000 {
+ compatible = "atmel,sama5d2-hlcdc";
+ reg = <0xf0000000 0x2000>;
+ interrupts = <45 IRQ_TYPE_LEVEL_HIGH 0>;
+ clocks = <&lcdc_clk>, <&lcdck>, <&clk32k>;
+ clock-names = "periph_clk","sys_clk", "slow_clk";
+ status = "disabled";
+
+ hlcdc-display-controller {
+ compatible = "atmel,hlcdc-display-controller";
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ port@0 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0>;
+ };
+ };
+
+ hlcdc_pwm: hlcdc-pwm {
+ compatible = "atmel,hlcdc-pwm";
+ #pwm-cells = <3>;
+ };
+ };
+
ramc0: ramc@f000c000 {
compatible = "atmel,sama5d3-ddramc";
reg = <0xf000c000 0x200>;
@@ -973,6 +999,11 @@
status = "disabled";
};
+ sfr: sfr@f8030000 {
+ compatible = "atmel,sama5d2-sfr", "syscon";
+ reg = <0xf8030000 0x98>;
+ };
+
flx0: flexcom@f8034000 {
compatible = "atmel,sama5d2-flexcom";
reg = <0xf8034000 0x200>;
@@ -999,6 +1030,15 @@
clocks = <&clk32k>;
};
+ shdwc@f8048010 {
+ compatible = "atmel,sama5d2-shdwc";
+ reg = <0xf8048010 0x10>;
+ clocks = <&clk32k>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ atmel,wakeup-rtc-timer;
+ };
+
pit: timer@f8048030 {
compatible = "atmel,at91sam9260-pit";
reg = <0xf8048030 0x10>;
@@ -1010,6 +1050,7 @@
compatible = "atmel,sama5d4-wdt";
reg = <0xf8048040 0x10>;
interrupts = <4 IRQ_TYPE_LEVEL_HIGH 7>;
+ clocks = <&clk32k>;
status = "disabled";
};
@@ -1127,6 +1168,13 @@
status = "disabled";
};
+ trng@fc01c000 {
+ compatible = "atmel,at91sam9g45-trng";
+ reg = <0xfc01c000 0x100>;
+ interrupts = <47 IRQ_TYPE_LEVEL_HIGH 0>;
+ clocks = <&trng_clk>;
+ };
+
aic: interrupt-controller@fc020000 {
#interrupt-cells = <3>;
compatible = "atmel,sama5d2-aic";
@@ -1193,6 +1241,11 @@
clock-names = "tdes_clk";
status = "okay";
};
+
+ chipid@fc069000 {
+ compatible = "atmel,sama5d2-chipid";
+ reg = <0xfc069000 0x8>;
+ };
};
};
};
diff --git a/arch/arm/boot/dts/sama5d3.dtsi b/arch/arm/boot/dts/sama5d3.dtsi
index a53279160f98..36301bd9a14a 100644
--- a/arch/arm/boot/dts/sama5d3.dtsi
+++ b/arch/arm/boot/dts/sama5d3.dtsi
@@ -426,6 +426,13 @@
clock-names = "tdes_clk";
};
+ trng@f8040000 {
+ compatible = "atmel,at91sam9g45-trng";
+ reg = <0xf8040000 0x100>;
+ interrupts = <45 IRQ_TYPE_LEVEL_HIGH 0>;
+ clocks = <&trng_clk>;
+ };
+
dma0: dma-controller@ffffe600 {
compatible = "atmel,at91sam9g45-dma";
reg = <0xffffe600 0x200>;
diff --git a/arch/arm/boot/dts/sama5d4.dtsi b/arch/arm/boot/dts/sama5d4.dtsi
index db1151c18466..4e2cc30d6615 100644
--- a/arch/arm/boot/dts/sama5d4.dtsi
+++ b/arch/arm/boot/dts/sama5d4.dtsi
@@ -1202,6 +1202,13 @@
status = "disabled";
};
+ trng@fc030000 {
+ compatible = "atmel,at91sam9g45-trng";
+ reg = <0xfc030000 0x100>;
+ interrupts = <53 IRQ_TYPE_LEVEL_HIGH 0>;
+ clocks = <&trng_clk>;
+ };
+
adc0: adc@fc034000 {
compatible = "atmel,at91sam9x5-adc";
reg = <0xfc034000 0x100>;
@@ -1302,6 +1309,7 @@
watchdog@fc068640 {
compatible = "atmel,sama5d4-wdt";
reg = <0xfc068640 0x10>;
+ interrupts = <4 IRQ_TYPE_LEVEL_HIGH 7>;
clocks = <&clk32k>;
status = "disabled";
};
diff --git a/arch/arm/boot/dts/sh73a0-kzm9g.dts b/arch/arm/boot/dts/sh73a0-kzm9g.dts
index aa8bae3b8fcf..c2d8a080e392 100644
--- a/arch/arm/boot/dts/sh73a0-kzm9g.dts
+++ b/arch/arm/boot/dts/sh73a0-kzm9g.dts
@@ -149,6 +149,13 @@
label = "SW1";
wakeup-source;
};
+
+ wakeup-key {
+ gpios = <&pfc 159 GPIO_ACTIVE_LOW>;
+ linux,code = <KEY_WAKEUP>;
+ label = "NMI";
+ wakeup-source;
+ };
};
sound {
@@ -329,41 +336,41 @@
&pfc {
i2c3_pins: i2c3 {
- renesas,groups = "i2c3_1";
- renesas,function = "i2c3";
+ groups = "i2c3_1";
+ function = "i2c3";
};
mmcif_pins: mmc {
mux {
- renesas,groups = "mmc0_data8_0", "mmc0_ctrl_0";
- renesas,function = "mmc0";
+ groups = "mmc0_data8_0", "mmc0_ctrl_0";
+ function = "mmc0";
};
cfg {
- renesas,groups = "mmc0_data8_0";
- renesas,pins = "PORT279";
+ groups = "mmc0_data8_0";
+ pins = "PORT279";
bias-pull-up;
};
};
scifa4_pins: serial4 {
- renesas,groups = "scifa4_data", "scifa4_ctrl";
- renesas,function = "scifa4";
+ groups = "scifa4_data", "scifa4_ctrl";
+ function = "scifa4";
};
sdhi0_pins: sd0 {
- renesas,groups = "sdhi0_data4", "sdhi0_ctrl", "sdhi0_cd", "sdhi0_wp";
- renesas,function = "sdhi0";
+ groups = "sdhi0_data4", "sdhi0_ctrl", "sdhi0_cd", "sdhi0_wp";
+ function = "sdhi0";
};
sdhi2_pins: sd2 {
- renesas,groups = "sdhi2_data4", "sdhi2_ctrl";
- renesas,function = "sdhi2";
+ groups = "sdhi2_data4", "sdhi2_ctrl";
+ function = "sdhi2";
};
fsia_pins: sounda {
- renesas,groups = "fsia_mclk_in", "fsia_sclk_in",
- "fsia_data_in", "fsia_data_out";
- renesas,function = "fsia";
+ groups = "fsia_mclk_in", "fsia_sclk_in",
+ "fsia_data_in", "fsia_data_out";
+ function = "fsia";
};
};
diff --git a/arch/arm/boot/dts/sh73a0.dtsi b/arch/arm/boot/dts/sh73a0.dtsi
index bf825ca4f6f7..c4f434cdec60 100644
--- a/arch/arm/boot/dts/sh73a0.dtsi
+++ b/arch/arm/boot/dts/sh73a0.dtsi
@@ -43,7 +43,7 @@
timer@f0000600 {
compatible = "arm,cortex-a9-twd-timer";
reg = <0xf0000600 0x20>;
- interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_HIGH)>;
+ interrupts = <GIC_PPI 13 (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_EDGE_RISING)>;
clocks = <&twd_clk>;
};
@@ -602,39 +602,33 @@
ranges;
/* External root clocks */
- extalr_clk: extalr_clk {
+ extalr_clk: extalr {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <32768>;
- clock-output-names = "extalr";
};
- extal1_clk: extal1_clk {
+ extal1_clk: extal1 {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <26000000>;
- clock-output-names = "extal1";
};
- extal2_clk: extal2_clk {
+ extal2_clk: extal2 {
compatible = "fixed-clock";
#clock-cells = <0>;
- clock-output-names = "extal2";
};
- extcki_clk: extcki_clk {
+ extcki_clk: extcki {
compatible = "fixed-clock";
#clock-cells = <0>;
- clock-output-names = "extcki";
};
- fsiack_clk: fsiack_clk {
+ fsiack_clk: fsiack {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
- clock-output-names = "fsiack";
};
- fsibck_clk: fsibck_clk {
+ fsibck_clk: fsibck {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <0>;
- clock-output-names = "fsibck";
};
/* Special CPG clocks */
@@ -650,7 +644,7 @@
};
/* Variable factor clocks (DIV6) */
- vclk1_clk: vclk1_clk@e6150008 {
+ vclk1_clk: vclk1@e6150008 {
compatible = "renesas,sh73a0-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe6150008 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks SH73A0_CLK_PLL2>,
@@ -658,9 +652,8 @@
<&extalr_clk>, <&cpg_clocks SH73A0_CLK_MAIN>,
<0>;
#clock-cells = <0>;
- clock-output-names = "vclk1";
};
- vclk2_clk: vclk2_clk@e615000c {
+ vclk2_clk: vclk2@e615000c {
compatible = "renesas,sh73a0-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe615000c 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks SH73A0_CLK_PLL2>,
@@ -668,9 +661,8 @@
<&extalr_clk>, <&cpg_clocks SH73A0_CLK_MAIN>,
<0>;
#clock-cells = <0>;
- clock-output-names = "vclk2";
};
- vclk3_clk: vclk3_clk@e615001c {
+ vclk3_clk: vclk3@e615001c {
compatible = "renesas,sh73a0-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe615001c 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks SH73A0_CLK_PLL2>,
@@ -678,7 +670,6 @@
<&extalr_clk>, <&cpg_clocks SH73A0_CLK_MAIN>,
<0>;
#clock-cells = <0>;
- clock-output-names = "vclk3";
};
zb_clk: zb_clk@e6150010 {
compatible = "renesas,sh73a0-div6-clock", "renesas,cpg-div6-clock";
@@ -688,168 +679,148 @@
#clock-cells = <0>;
clock-output-names = "zb";
};
- flctl_clk: flctl_clk@e6150014 {
+ flctl_clk: flctlck@e6150014 {
compatible = "renesas,sh73a0-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe6150014 4>;
clocks = <&pll1_div2_clk>, <0>,
<&cpg_clocks SH73A0_CLK_PLL2>, <0>;
#clock-cells = <0>;
- clock-output-names = "flctlck";
};
- sdhi0_clk: sdhi0_clk@e6150074 {
+ sdhi0_clk: sdhi0ck@e6150074 {
compatible = "renesas,sh73a0-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe6150074 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks SH73A0_CLK_PLL2>,
<&pll1_div13_clk>, <0>;
#clock-cells = <0>;
- clock-output-names = "sdhi0ck";
};
- sdhi1_clk: sdhi1_clk@e6150078 {
+ sdhi1_clk: sdhi1ck@e6150078 {
compatible = "renesas,sh73a0-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe6150078 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks SH73A0_CLK_PLL2>,
<&pll1_div13_clk>, <0>;
#clock-cells = <0>;
- clock-output-names = "sdhi1ck";
};
- sdhi2_clk: sdhi2_clk@e615007c {
+ sdhi2_clk: sdhi2ck@e615007c {
compatible = "renesas,sh73a0-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe615007c 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks SH73A0_CLK_PLL2>,
<&pll1_div13_clk>, <0>;
#clock-cells = <0>;
- clock-output-names = "sdhi2ck";
};
- fsia_clk: fsia_clk@e6150018 {
+ fsia_clk: fsia@e6150018 {
compatible = "renesas,sh73a0-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe6150018 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks SH73A0_CLK_PLL2>,
<&fsiack_clk>, <&fsiack_clk>;
#clock-cells = <0>;
- clock-output-names = "fsia";
};
- fsib_clk: fsib_clk@e6150090 {
+ fsib_clk: fsib@e6150090 {
compatible = "renesas,sh73a0-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe6150090 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks SH73A0_CLK_PLL2>,
<&fsibck_clk>, <&fsibck_clk>;
#clock-cells = <0>;
- clock-output-names = "fsib";
};
- sub_clk: sub_clk@e6150080 {
+ sub_clk: sub@e6150080 {
compatible = "renesas,sh73a0-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe6150080 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks SH73A0_CLK_PLL2>,
<&extal2_clk>, <&extal2_clk>;
#clock-cells = <0>;
- clock-output-names = "sub";
};
- spua_clk: spua_clk@e6150084 {
+ spua_clk: spua@e6150084 {
compatible = "renesas,sh73a0-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe6150084 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks SH73A0_CLK_PLL2>,
<&extal2_clk>, <&extal2_clk>;
#clock-cells = <0>;
- clock-output-names = "spua";
};
- spuv_clk: spuv_clk@e6150094 {
+ spuv_clk: spuv@e6150094 {
compatible = "renesas,sh73a0-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe6150094 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks SH73A0_CLK_PLL2>,
<&extal2_clk>, <&extal2_clk>;
#clock-cells = <0>;
- clock-output-names = "spuv";
};
- msu_clk: msu_clk@e6150088 {
+ msu_clk: msu@e6150088 {
compatible = "renesas,sh73a0-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe6150088 4>;
clocks = <&pll1_div2_clk>, <0>,
<&cpg_clocks SH73A0_CLK_PLL2>, <0>;
#clock-cells = <0>;
- clock-output-names = "msu";
};
- hsi_clk: hsi_clk@e615008c {
+ hsi_clk: hsi@e615008c {
compatible = "renesas,sh73a0-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe615008c 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks SH73A0_CLK_PLL2>,
<&pll1_div7_clk>, <0>;
#clock-cells = <0>;
- clock-output-names = "hsi";
};
- mfg1_clk: mfg1_clk@e6150098 {
+ mfg1_clk: mfg1@e6150098 {
compatible = "renesas,sh73a0-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe6150098 4>;
clocks = <&pll1_div2_clk>, <0>,
<&cpg_clocks SH73A0_CLK_PLL2>, <0>;
#clock-cells = <0>;
- clock-output-names = "mfg1";
};
- mfg2_clk: mfg2_clk@e615009c {
+ mfg2_clk: mfg2@e615009c {
compatible = "renesas,sh73a0-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe615009c 4>;
clocks = <&pll1_div2_clk>, <0>,
<&cpg_clocks SH73A0_CLK_PLL2>, <0>;
#clock-cells = <0>;
- clock-output-names = "mfg2";
};
- dsit_clk: dsit_clk@e6150060 {
+ dsit_clk: dsit@e6150060 {
compatible = "renesas,sh73a0-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe6150060 4>;
clocks = <&pll1_div2_clk>, <0>,
<&cpg_clocks SH73A0_CLK_PLL2>, <0>;
#clock-cells = <0>;
- clock-output-names = "dsit";
};
- dsi0p_clk: dsi0p_clk@e6150064 {
+ dsi0p_clk: dsi0pck@e6150064 {
compatible = "renesas,sh73a0-div6-clock", "renesas,cpg-div6-clock";
reg = <0xe6150064 4>;
clocks = <&pll1_div2_clk>, <&cpg_clocks SH73A0_CLK_PLL2>,
<&cpg_clocks SH73A0_CLK_MAIN>, <&extal2_clk>,
<&extcki_clk>, <0>, <0>, <0>;
#clock-cells = <0>;
- clock-output-names = "dsi0pck";
};
/* Fixed factor clocks */
- main_div2_clk: main_div2_clk {
+ main_div2_clk: main_div2 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks SH73A0_CLK_MAIN>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "main_div2";
};
- pll1_div2_clk: pll1_div2_clk {
+ pll1_div2_clk: pll1_div2 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks SH73A0_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <2>;
clock-mult = <1>;
- clock-output-names = "pll1_div2";
};
- pll1_div7_clk: pll1_div7_clk {
+ pll1_div7_clk: pll1_div7 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks SH73A0_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <7>;
clock-mult = <1>;
- clock-output-names = "pll1_div7";
};
- pll1_div13_clk: pll1_div13_clk {
+ pll1_div13_clk: pll1_div13 {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks SH73A0_CLK_PLL1>;
#clock-cells = <0>;
clock-div = <13>;
clock-mult = <1>;
- clock-output-names = "pll1_div13";
};
- twd_clk: twd_clk {
+ twd_clk: twd {
compatible = "fixed-factor-clock";
clocks = <&cpg_clocks SH73A0_CLK_Z>;
#clock-cells = <0>;
clock-div = <4>;
clock-mult = <1>;
- clock-output-names = "twd";
};
/* Gate clocks */
diff --git a/arch/arm/boot/dts/socfpga.dtsi b/arch/arm/boot/dts/socfpga.dtsi
index b89cbde3b289..9f48141270b8 100644
--- a/arch/arm/boot/dts/socfpga.dtsi
+++ b/arch/arm/boot/dts/socfpga.dtsi
@@ -831,6 +831,8 @@
interrupts = <0 125 4>;
clocks = <&usb_mp_clk>;
clock-names = "otg";
+ resets = <&rst USB0_RESET>;
+ reset-names = "dwc2";
phys = <&usbphy0>;
phy-names = "usb2-phy";
status = "disabled";
@@ -842,6 +844,8 @@
interrupts = <0 128 4>;
clocks = <&usb_mp_clk>;
clock-names = "otg";
+ resets = <&rst USB1_RESET>;
+ reset-names = "dwc2";
phys = <&usbphy0>;
phy-names = "usb2-phy";
status = "disabled";
diff --git a/arch/arm/boot/dts/socfpga_arria10.dtsi b/arch/arm/boot/dts/socfpga_arria10.dtsi
index 1c5e139e4d05..17e81dc9213e 100644
--- a/arch/arm/boot/dts/socfpga_arria10.dtsi
+++ b/arch/arm/boot/dts/socfpga_arria10.dtsi
@@ -78,10 +78,13 @@
<0 87 IRQ_TYPE_LEVEL_HIGH>,
<0 88 IRQ_TYPE_LEVEL_HIGH>,
<0 89 IRQ_TYPE_LEVEL_HIGH>,
- <0 90 IRQ_TYPE_LEVEL_HIGH>;
+ <0 90 IRQ_TYPE_LEVEL_HIGH>,
+ <0 91 IRQ_TYPE_LEVEL_HIGH>;
#dma-cells = <1>;
#dma-channels = <8>;
#dma-requests = <32>;
+ clocks = <&l4_main_clk>;
+ clock-names = "apb_pclk";
};
};
@@ -362,6 +365,7 @@
compatible = "altr,socfpga-a10-gate-clk";
clocks = <&sdmmc_free_clk>;
clk-gate = <0xC8 5>;
+ clk-phase = <0 135>;
};
qspi_clk: qspi_clk {
@@ -589,7 +593,7 @@
reg = <0xff808000 0x1000>;
interrupts = <0 98 IRQ_TYPE_LEVEL_HIGH>;
fifo-depth = <0x400>;
- clocks = <&l4_mp_clk>, <&sdmmc_free_clk>;
+ clocks = <&l4_mp_clk>, <&sdmmc_clk>;
clock-names = "biu", "ciu";
status = "disabled";
};
@@ -599,6 +603,26 @@
reg = <0xffe00000 0x40000>;
};
+ eccmgr: eccmgr@ffd06000 {
+ compatible = "altr,socfpga-a10-ecc-manager";
+ altr,sysmgr-syscon = <&sysmgr>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ interrupts = <0 2 IRQ_TYPE_LEVEL_HIGH>,
+ <0 0 IRQ_TYPE_LEVEL_HIGH>;
+ ranges;
+
+ l2-ecc@ffd06010 {
+ compatible = "altr,socfpga-a10-l2-ecc";
+ reg = <0xffd06010 0x4>;
+ };
+
+ ocram-ecc@ff8c3000 {
+ compatible = "altr,socfpga-a10-ocram-ecc";
+ reg = <0xff8c3000 0x400>;
+ };
+ };
+
rst: rstmgr@ffd05000 {
#reset-cells = <1>;
compatible = "altr,rst-mgr";
@@ -689,6 +713,8 @@
interrupts = <0 95 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&usb_clk>;
clock-names = "otg";
+ resets = <&rst USB0_RESET>;
+ reset-names = "dwc2";
phys = <&usbphy0>;
phy-names = "usb2-phy";
status = "disabled";
@@ -700,6 +726,8 @@
interrupts = <0 96 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&usb_clk>;
clock-names = "otg";
+ resets = <&rst USB1_RESET>;
+ reset-names = "dwc2";
phys = <&usbphy0>;
phy-names = "usb2-phy";
status = "disabled";
diff --git a/arch/arm/boot/dts/socfpga_arria10_socdk_sdmmc.dts b/arch/arm/boot/dts/socfpga_arria10_socdk_sdmmc.dts
index dbbb751ac1ba..8a7dfa473e98 100644
--- a/arch/arm/boot/dts/socfpga_arria10_socdk_sdmmc.dts
+++ b/arch/arm/boot/dts/socfpga_arria10_socdk_sdmmc.dts
@@ -21,6 +21,7 @@
&mmc {
status = "okay";
num-slots = <1>;
+ cap-sd-highspeed;
broken-cd;
bus-width = <4>;
};
diff --git a/arch/arm/boot/dts/socfpga_cyclone5.dtsi b/arch/arm/boot/dts/socfpga_cyclone5.dtsi
index 06db951e06f8..a05e3df23103 100644
--- a/arch/arm/boot/dts/socfpga_cyclone5.dtsi
+++ b/arch/arm/boot/dts/socfpga_cyclone5.dtsi
@@ -38,12 +38,6 @@
cap-sd-highspeed;
};
- ethernet@ff702000 {
- phy-mode = "rgmii";
- phy-addr = <0xffffffff>; /* probe for phy addr */
- status = "okay";
- };
-
sysmgr@ffd08000 {
cpu1-start-addr = <0xffd080c4>;
};
diff --git a/arch/arm/boot/dts/socfpga_cyclone5_sockit.dts b/arch/arm/boot/dts/socfpga_cyclone5_sockit.dts
index b61f22f9ac9f..02e22f554ef0 100644
--- a/arch/arm/boot/dts/socfpga_cyclone5_sockit.dts
+++ b/arch/arm/boot/dts/socfpga_cyclone5_sockit.dts
@@ -39,6 +39,90 @@
ethernet0 = &gmac1;
};
+ leds {
+ compatible = "gpio-leds";
+
+ hps_led0 {
+ label = "hps:blue:led0";
+ gpios = <&portb 24 0>; /* HPS_GPIO53 */
+ linux,default-trigger = "heartbeat";
+ };
+
+ hps_led1 {
+ label = "hps:blue:led1";
+ gpios = <&portb 25 0>; /* HPS_GPIO54 */
+ linux,default-trigger = "heartbeat";
+ };
+
+ hps_led2 {
+ label = "hps:blue:led2";
+ gpios = <&portb 26 0>; /* HPS_GPIO55 */
+ linux,default-trigger = "heartbeat";
+ };
+
+ hps_led3 {
+ label = "hps:blue:led3";
+ gpios = <&portb 27 0>; /* HPS_GPIO56 */
+ linux,default-trigger = "heartbeat";
+ };
+ };
+
+ gpio-keys {
+ compatible = "gpio-keys";
+
+ hps_sw0 {
+ label = "hps_sw0";
+ gpios = <&portc 20 0>; /* HPS_GPI7 */
+ linux,input-type = <5>; /* EV_SW */
+ linux,code = <0x0>; /* SW_LID */
+ };
+
+ hps_sw1 {
+ label = "hps_sw1";
+ gpios = <&portc 19 0>; /* HPS_GPI6 */
+ linux,input-type = <5>; /* EV_SW */
+ linux,code = <0x5>; /* SW_DOCK */
+ };
+
+ hps_sw2 {
+ label = "hps_sw2";
+ gpios = <&portc 18 0>; /* HPS_GPI5 */
+ linux,input-type = <5>; /* EV_SW */
+ linux,code = <0xa>; /* SW_KEYPAD_SLIDE */
+ };
+
+ hps_sw3 {
+ label = "hps_sw3";
+ gpios = <&portc 17 0>; /* HPS_GPI4 */
+ linux,input-type = <5>; /* EV_SW */
+ linux,code = <0xc>; /* SW_ROTATE_LOCK */
+ };
+
+ hps_hkey0 {
+ label = "hps_hkey0";
+ gpios = <&portc 21 1>; /* HPS_GPI8 */
+ linux,code = <187>; /* KEY_F17 */
+ };
+
+ hps_hkey1 {
+ label = "hps_hkey1";
+ gpios = <&portc 22 1>; /* HPS_GPI9 */
+ linux,code = <188>; /* KEY_F18 */
+ };
+
+ hps_hkey2 {
+ label = "hps_hkey2";
+ gpios = <&portc 23 1>; /* HPS_GPI10 */
+ linux,code = <189>; /* KEY_F19 */
+ };
+
+ hps_hkey3 {
+ label = "hps_hkey3";
+ gpios = <&portc 24 1>; /* HPS_GPI11 */
+ linux,code = <190>; /* KEY_F20 */
+ };
+ };
+
regulator_3_3v: vcc3p3-regulator {
compatible = "regulator-fixed";
regulator-name = "VCC3P3";
@@ -61,7 +145,15 @@
rxc-skew-ps = <2000>;
};
-&gpio2 {
+&gpio0 { /* GPIO 0..29 */
+ status = "okay";
+};
+
+&gpio1 { /* GPIO 30..57 */
+ status = "okay";
+};
+
+&gpio2 { /* GPIO 58..66 (HLGPI 0..13 at offset 13) */
status = "okay";
};
diff --git a/arch/arm/boot/dts/socfpga_cyclone5_socrates.dts b/arch/arm/boot/dts/socfpga_cyclone5_socrates.dts
index 019dd2fea208..e1a61f20873f 100644
--- a/arch/arm/boot/dts/socfpga_cyclone5_socrates.dts
+++ b/arch/arm/boot/dts/socfpga_cyclone5_socrates.dts
@@ -36,6 +36,7 @@
};
&gmac1 {
+ phy-mode = "rgmii";
status = "okay";
};
diff --git a/arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts b/arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts
new file mode 100644
index 000000000000..a3601e4c0a2e
--- /dev/null
+++ b/arch/arm/boot/dts/socfpga_cyclone5_vining_fpga.dts
@@ -0,0 +1,310 @@
+/*
+ * Copyright (C) 2015 Marek Vasut <marex@denx.de>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of
+ * the License, or (at your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this file; if not, write to the Free
+ * Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston,
+ * MA 02110-1301 USA
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "socfpga_cyclone5.dtsi"
+#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/input/input.h>
+
+/ {
+ model = "samtec VIN|ING FPGA";
+ compatible = "altr,socfpga-cyclone5", "altr,socfpga";
+
+ chosen {
+ bootargs = "console=ttyS0,115200";
+ };
+
+ memory {
+ name = "memory";
+ device_type = "memory";
+ reg = <0x0 0x40000000>; /* 1GB */
+ };
+
+ aliases {
+ /*
+ * This allows the ethaddr U-Boot environment variable contents
+ * to be added to the gmac1 device tree blob.
+ */
+ ethernet0 = &gmac1;
+ };
+
+ leds {
+ compatible = "gpio-leds";
+
+ hps_led0 {
+ label = "hps:green:led0"; /* ALIVE_LED_GR */
+ gpios = <&portb 19 0>; /* HPS_GPIO48 */
+ linux,default-trigger = "heartbeat";
+ };
+
+ hps_led1 {
+ label = "hps:red:led0"; /* ALIVE_LED_RD */
+ gpios = <&portb 24 0>; /* HPS_GPIO53 */
+ linux,default-trigger = "none";
+ };
+
+ hps_led2 {
+ label = "hps:green:led1"; /* LINK2HOST_LED_GR */
+ gpios = <&portb 25 0>; /* HPS_GPIO54 */
+ linux,default-trigger = "heartbeat";
+ };
+
+ hps_led3 {
+ label = "hps:red:led1"; /* LINK2HOST_LED_RD */
+ gpios = <&portc 7 0>; /* HPS_GPIO65 */
+ linux,default-trigger = "none";
+ };
+ };
+
+ gpio-keys {
+ compatible = "gpio-keys";
+
+ hps_temp0 {
+ label = "BTN_0"; /* TEMP_OS */
+ gpios = <&portc 18 GPIO_ACTIVE_LOW>; /* HPS_GPIO60 */
+ linux,code = <BTN_0>;
+ };
+
+ hps_hkey0 {
+ label = "BTN_1"; /* DIS_PWR */
+ gpios = <&portc 19 GPIO_ACTIVE_LOW>; /* HPS_GPIO61 */
+ linux,code = <BTN_1>;
+ };
+
+ hps_hkey1 {
+ label = "hps_hkey1"; /* POWER_DOWN */
+ gpios = <&portc 20 GPIO_ACTIVE_LOW>; /* HPS_GPIO62 */
+ linux,code = <KEY_POWER>;
+ };
+ };
+
+ regulator-usb-nrst {
+ compatible = "regulator-fixed";
+ regulator-name = "usb_nrst";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ gpio = <&portb 5 GPIO_ACTIVE_HIGH>;
+ startup-delay-us = <70000>;
+ enable-active-high;
+ regulator-always-on;
+ };
+};
+
+&gmac1 {
+ status = "okay";
+ phy-mode = "rgmii";
+
+ snps,reset-gpio = <&porta 0 GPIO_ACTIVE_LOW>;
+ snps,reset-active-low;
+ snps,reset-delays-us = <10000 10000 10000>;
+
+ mdio0 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ compatible = "snps,dwmac-mdio";
+ phy1: ethernet-phy@1 {
+ reg = <1>;
+ rxd0-skew-ps = <0>;
+ rxd1-skew-ps = <0>;
+ rxd2-skew-ps = <0>;
+ rxd3-skew-ps = <0>;
+ txen-skew-ps = <0>;
+ txc-skew-ps = <2600>;
+ rxdv-skew-ps = <0>;
+ rxc-skew-ps = <2000>;
+ };
+ };
+};
+
+&gpio0 { /* GPIO 0..29 */
+ status = "okay";
+};
+
+&gpio1 { /* GPIO 30..57 */
+ status = "okay";
+};
+
+&gpio2 { /* GPIO 58..66 (HLGPI 0..13 at offset 13) */
+ status = "okay";
+};
+
+&i2c0 {
+ status = "okay";
+
+ gpio: pca9557@1f {
+ compatible = "nxp,pca9557";
+ reg = <0x1f>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ };
+
+ temp: lm75@48 {
+ compatible = "lm75";
+ reg = <0x48>;
+ };
+
+ at24@50 {
+ compatible = "at24,24c01";
+ pagesize = <8>;
+ reg = <0x50>;
+ };
+
+ i2cswitch@70 {
+ compatible = "nxp,pca9548";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0x70>;
+
+ i2c@0 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0>;
+ eeprom@51 {
+ compatible = "at,24c01";
+ pagesize = <8>;
+ reg = <0x51>;
+ };
+ };
+
+ i2c@1 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <1>;
+ eeprom@51 {
+ compatible = "at,24c01";
+ pagesize = <8>;
+ reg = <0x51>;
+ };
+ };
+
+ i2c@2 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <2>;
+ eeprom@51 {
+ compatible = "at,24c01";
+ pagesize = <8>;
+ reg = <0x51>;
+ };
+ };
+
+ i2c@3 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <3>;
+ eeprom@51 {
+ compatible = "at,24c01";
+ pagesize = <8>;
+ reg = <0x51>;
+ };
+ };
+
+ i2c@4 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <4>;
+ eeprom@51 {
+ compatible = "at,24c01";
+ pagesize = <8>;
+ reg = <0x51>;
+ };
+ };
+
+ i2c@5 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <5>;
+ eeprom@51 {
+ compatible = "at,24c01";
+ pagesize = <8>;
+ reg = <0x51>;
+ };
+ };
+
+ i2c@6 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <6>;
+ eeprom@51 {
+ compatible = "at,24c01";
+ pagesize = <8>;
+ reg = <0x51>;
+ };
+ };
+
+ i2c@7 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <7>;
+ eeprom@51 {
+ compatible = "at,24c01";
+ pagesize = <8>;
+ reg = <0x51>;
+ };
+ };
+ };
+};
+
+&i2c1 {
+ status = "okay";
+ clock-frequency = <100000>;
+
+ at24@50 {
+ compatible = "at24,24c02";
+ pagesize = <8>;
+ reg = <0x50>;
+ };
+};
+
+&usb0 {
+ dr_mode = "host";
+ status = "okay";
+};
+
+&usb1 {
+ dr_mode = "peripheral";
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/spear13xx.dtsi b/arch/arm/boot/dts/spear13xx.dtsi
index 14594ce8c18a..449acf0d8272 100644
--- a/arch/arm/boot/dts/spear13xx.dtsi
+++ b/arch/arm/boot/dts/spear13xx.dtsi
@@ -117,7 +117,7 @@
chan_priority = <1>;
block_size = <0xfff>;
dma-masters = <2>;
- data_width = <3 3>;
+ data-width = <8 8>;
};
dma@eb000000 {
@@ -133,7 +133,7 @@
chan_allocation_order = <1>;
chan_priority = <1>;
block_size = <0xfff>;
- data_width = <3 3>;
+ data-width = <8 8>;
};
fsmc: flash@b0000000 {
diff --git a/arch/arm/boot/dts/ste-ccu9540.dts b/arch/arm/boot/dts/ste-ccu9540.dts
index c8b815819cfe..b3b9bb8e1aa8 100644
--- a/arch/arm/boot/dts/ste-ccu9540.dts
+++ b/arch/arm/boot/dts/ste-ccu9540.dts
@@ -49,7 +49,7 @@
cap-mmc-highspeed;
vmmc-supply = <&ab8500_ldo_aux3_reg>;
- cd-gpios = <&gpio7 6 0x4>; // 230
+ cd-gpios = <&gpio7 6 GPIO_ACTIVE_HIGH>; // 230
cd-inverted;
status = "okay";
diff --git a/arch/arm/boot/dts/ste-dbx5x0.dtsi b/arch/arm/boot/dts/ste-dbx5x0.dtsi
index 341f5b7ed242..6ae56838bd3a 100644
--- a/arch/arm/boot/dts/ste-dbx5x0.dtsi
+++ b/arch/arm/boot/dts/ste-dbx5x0.dtsi
@@ -10,8 +10,10 @@
*/
#include <dt-bindings/interrupt-controller/irq.h>
+#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/mfd/dbx500-prcmu.h>
#include <dt-bindings/arm/ux500_pm_domains.h>
+#include <dt-bindings/gpio/gpio.h>
#include "skeleton.dtsi"
/ {
@@ -203,14 +205,14 @@
L2: l2-cache {
compatible = "arm,pl310-cache";
reg = <0xa0412000 0x1000>;
- interrupts = <0 13 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>;
cache-unified;
cache-level = <2>;
};
pmu {
compatible = "arm,cortex-a9-pmu";
- interrupts = <0 7 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>;
};
pm_domains: pm_domains0 {
@@ -253,7 +255,7 @@
/* Nomadik System Timer */
compatible = "st,nomadik-mtu";
reg = <0xa03c6000 0x1000>;
- interrupts = <0 4 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&prcmu_clk PRCMU_TIMCLK>, <&prcc_pclk 6 6>;
clock-names = "timclk", "apb_pclk";
@@ -262,7 +264,7 @@
timer@a0410600 {
compatible = "arm,cortex-a9-twd-timer";
reg = <0xa0410600 0x20>;
- interrupts = <1 13 0x304>; /* IRQ level high per-CPU */
+ interrupts = <GIC_PPI 13 (GIC_CPU_MASK_RAW(3) | IRQ_TYPE_LEVEL_HIGH)>;
clocks = <&smp_twd_clk>;
};
@@ -270,14 +272,14 @@
watchdog@a0410620 {
compatible = "arm,cortex-a9-twd-wdt";
reg = <0xa0410620 0x20>;
- interrupts = <1 14 0x304>;
+ interrupts = <GIC_PPI 14 (GIC_CPU_MASK_RAW(3) | IRQ_TYPE_LEVEL_HIGH)>;
clocks = <&smp_twd_clk>;
};
rtc@80154000 {
compatible = "arm,rtc-pl031", "arm,primecell";
reg = <0x80154000 0x1000>;
- interrupts = <0 18 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 18 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&rtc_clk>;
clock-names = "apb_pclk";
@@ -287,7 +289,7 @@
compatible = "stericsson,db8500-gpio",
"st,nomadik-gpio";
reg = <0x8012e000 0x80>;
- interrupts = <0 119 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 119 IRQ_TYPE_LEVEL_HIGH>;
interrupt-controller;
#interrupt-cells = <2>;
st,supports-sleepmode;
@@ -302,7 +304,7 @@
compatible = "stericsson,db8500-gpio",
"st,nomadik-gpio";
reg = <0x8012e080 0x80>;
- interrupts = <0 120 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>;
interrupt-controller;
#interrupt-cells = <2>;
st,supports-sleepmode;
@@ -317,7 +319,7 @@
compatible = "stericsson,db8500-gpio",
"st,nomadik-gpio";
reg = <0x8000e000 0x80>;
- interrupts = <0 121 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 121 IRQ_TYPE_LEVEL_HIGH>;
interrupt-controller;
#interrupt-cells = <2>;
st,supports-sleepmode;
@@ -332,7 +334,7 @@
compatible = "stericsson,db8500-gpio",
"st,nomadik-gpio";
reg = <0x8000e080 0x80>;
- interrupts = <0 122 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 122 IRQ_TYPE_LEVEL_HIGH>;
interrupt-controller;
#interrupt-cells = <2>;
st,supports-sleepmode;
@@ -347,7 +349,7 @@
compatible = "stericsson,db8500-gpio",
"st,nomadik-gpio";
reg = <0x8000e100 0x80>;
- interrupts = <0 123 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 123 IRQ_TYPE_LEVEL_HIGH>;
interrupt-controller;
#interrupt-cells = <2>;
st,supports-sleepmode;
@@ -362,7 +364,7 @@
compatible = "stericsson,db8500-gpio",
"st,nomadik-gpio";
reg = <0x8000e180 0x80>;
- interrupts = <0 124 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 124 IRQ_TYPE_LEVEL_HIGH>;
interrupt-controller;
#interrupt-cells = <2>;
st,supports-sleepmode;
@@ -377,7 +379,7 @@
compatible = "stericsson,db8500-gpio",
"st,nomadik-gpio";
reg = <0x8011e000 0x80>;
- interrupts = <0 125 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 125 IRQ_TYPE_LEVEL_HIGH>;
interrupt-controller;
#interrupt-cells = <2>;
st,supports-sleepmode;
@@ -392,7 +394,7 @@
compatible = "stericsson,db8500-gpio",
"st,nomadik-gpio";
reg = <0x8011e080 0x80>;
- interrupts = <0 126 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 126 IRQ_TYPE_LEVEL_HIGH>;
interrupt-controller;
#interrupt-cells = <2>;
st,supports-sleepmode;
@@ -407,7 +409,7 @@
compatible = "stericsson,db8500-gpio",
"st,nomadik-gpio";
reg = <0xa03fe000 0x80>;
- interrupts = <0 127 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 127 IRQ_TYPE_LEVEL_HIGH>;
interrupt-controller;
#interrupt-cells = <2>;
st,supports-sleepmode;
@@ -429,7 +431,7 @@
usb_per5@a03e0000 {
compatible = "stericsson,db8500-musb";
reg = <0xa03e0000 0x10000>;
- interrupts = <0 23 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 23 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "mc";
dr_mode = "otg";
@@ -467,7 +469,7 @@
compatible = "stericsson,db8500-dma40", "stericsson,dma40";
reg = <0x801C0000 0x1000 0x40010000 0x800>;
reg-names = "base", "lcpa";
- interrupts = <0 25 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 25 IRQ_TYPE_LEVEL_HIGH>;
#dma-cells = <3>;
memcpy-channels = <56 57 58 59 60>;
@@ -479,7 +481,7 @@
compatible = "stericsson,db8500-prcmu";
reg = <0x80157000 0x2000>, <0x801b0000 0x8000>, <0x801b8000 0x1000>;
reg-names = "prcmu", "prcmu-tcpm", "prcmu-tcdm";
- interrupts = <0 47 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 47 IRQ_TYPE_LEVEL_HIGH>;
#address-cells = <1>;
#size-cells = <1>;
interrupt-controller;
@@ -597,7 +599,7 @@
ab8500 {
compatible = "stericsson,ab8500";
interrupt-parent = <&intc>;
- interrupts = <0 40 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>;
interrupt-controller;
#interrupt-cells = <2>;
@@ -785,7 +787,7 @@
i2c@80004000 {
compatible = "stericsson,db8500-i2c", "st,nomadik-i2c", "arm,primecell";
reg = <0x80004000 0x1000>;
- interrupts = <0 21 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>;
#address-cells = <1>;
#size-cells = <0>;
@@ -800,7 +802,7 @@
i2c@80122000 {
compatible = "stericsson,db8500-i2c", "st,nomadik-i2c", "arm,primecell";
reg = <0x80122000 0x1000>;
- interrupts = <0 22 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 22 IRQ_TYPE_LEVEL_HIGH>;
#address-cells = <1>;
#size-cells = <0>;
@@ -816,7 +818,7 @@
i2c@80128000 {
compatible = "stericsson,db8500-i2c", "st,nomadik-i2c", "arm,primecell";
reg = <0x80128000 0x1000>;
- interrupts = <0 55 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 55 IRQ_TYPE_LEVEL_HIGH>;
#address-cells = <1>;
#size-cells = <0>;
@@ -832,7 +834,7 @@
i2c@80110000 {
compatible = "stericsson,db8500-i2c", "st,nomadik-i2c", "arm,primecell";
reg = <0x80110000 0x1000>;
- interrupts = <0 12 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>;
#address-cells = <1>;
#size-cells = <0>;
@@ -848,7 +850,7 @@
i2c@8012a000 {
compatible = "stericsson,db8500-i2c", "st,nomadik-i2c", "arm,primecell";
reg = <0x8012a000 0x1000>;
- interrupts = <0 51 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 51 IRQ_TYPE_LEVEL_HIGH>;
#address-cells = <1>;
#size-cells = <0>;
@@ -864,7 +866,7 @@
ssp@80002000 {
compatible = "arm,pl022", "arm,primecell";
reg = <0x80002000 0x1000>;
- interrupts = <0 14 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
#address-cells = <1>;
#size-cells = <0>;
clocks = <&prcc_kclk 3 1>, <&prcc_pclk 3 1>;
@@ -878,7 +880,7 @@
ssp@80003000 {
compatible = "arm,pl022", "arm,primecell";
reg = <0x80003000 0x1000>;
- interrupts = <0 52 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>;
#address-cells = <1>;
#size-cells = <0>;
clocks = <&prcc_kclk 3 2>, <&prcc_pclk 3 2>;
@@ -892,7 +894,7 @@
spi@8011a000 {
compatible = "arm,pl022", "arm,primecell";
reg = <0x8011a000 0x1000>;
- interrupts = <0 8 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>;
#address-cells = <1>;
#size-cells = <0>;
/* Same clock wired to kernel and pclk */
@@ -907,7 +909,7 @@
spi@80112000 {
compatible = "arm,pl022", "arm,primecell";
reg = <0x80112000 0x1000>;
- interrupts = <0 96 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 96 IRQ_TYPE_LEVEL_HIGH>;
#address-cells = <1>;
#size-cells = <0>;
/* Same clock wired to kernel and pclk */
@@ -922,7 +924,7 @@
spi@80111000 {
compatible = "arm,pl022", "arm,primecell";
reg = <0x80111000 0x1000>;
- interrupts = <0 6 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>;
#address-cells = <1>;
#size-cells = <0>;
/* Same clock wired to kernel and pclk */
@@ -937,7 +939,7 @@
spi@80129000 {
compatible = "arm,pl022", "arm,primecell";
reg = <0x80129000 0x1000>;
- interrupts = <0 49 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 49 IRQ_TYPE_LEVEL_HIGH>;
#address-cells = <1>;
#size-cells = <0>;
/* Same clock wired to kernel and pclk */
@@ -952,7 +954,7 @@
ux500_serial0: uart@80120000 {
compatible = "arm,pl011", "arm,primecell";
reg = <0x80120000 0x1000>;
- interrupts = <0 11 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
dmas = <&dma 13 0 0x2>, /* Logical - DevToMem */
<&dma 13 0 0x0>; /* Logical - MemToDev */
@@ -967,7 +969,7 @@
ux500_serial1: uart@80121000 {
compatible = "arm,pl011", "arm,primecell";
reg = <0x80121000 0x1000>;
- interrupts = <0 19 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>;
dmas = <&dma 12 0 0x2>, /* Logical - DevToMem */
<&dma 12 0 0x0>; /* Logical - MemToDev */
@@ -982,7 +984,7 @@
ux500_serial2: uart@80007000 {
compatible = "arm,pl011", "arm,primecell";
reg = <0x80007000 0x1000>;
- interrupts = <0 26 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>;
dmas = <&dma 11 0 0x2>, /* Logical - DevToMem */
<&dma 11 0 0x0>; /* Logical - MemToDev */
@@ -997,7 +999,7 @@
sdi0_per1@80126000 {
compatible = "arm,pl18x", "arm,primecell";
reg = <0x80126000 0x1000>;
- interrupts = <0 60 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 60 IRQ_TYPE_LEVEL_HIGH>;
dmas = <&dma 29 0 0x2>, /* Logical - DevToMem */
<&dma 29 0 0x0>; /* Logical - MemToDev */
@@ -1013,7 +1015,7 @@
sdi1_per2@80118000 {
compatible = "arm,pl18x", "arm,primecell";
reg = <0x80118000 0x1000>;
- interrupts = <0 50 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 50 IRQ_TYPE_LEVEL_HIGH>;
dmas = <&dma 32 0 0x2>, /* Logical - DevToMem */
<&dma 32 0 0x0>; /* Logical - MemToDev */
@@ -1029,7 +1031,7 @@
sdi2_per3@80005000 {
compatible = "arm,pl18x", "arm,primecell";
reg = <0x80005000 0x1000>;
- interrupts = <0 41 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>;
dmas = <&dma 28 0 0x2>, /* Logical - DevToMem */
<&dma 28 0 0x0>; /* Logical - MemToDev */
@@ -1045,7 +1047,7 @@
sdi3_per2@80119000 {
compatible = "arm,pl18x", "arm,primecell";
reg = <0x80119000 0x1000>;
- interrupts = <0 59 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 59 IRQ_TYPE_LEVEL_HIGH>;
dmas = <&dma 41 0 0x2>, /* Logical - DevToMem */
<&dma 41 0 0x0>; /* Logical - MemToDev */
@@ -1061,7 +1063,7 @@
sdi4_per2@80114000 {
compatible = "arm,pl18x", "arm,primecell";
reg = <0x80114000 0x1000>;
- interrupts = <0 99 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>;
dmas = <&dma 42 0 0x2>, /* Logical - DevToMem */
<&dma 42 0 0x0>; /* Logical - MemToDev */
@@ -1077,7 +1079,7 @@
sdi5_per3@80008000 {
compatible = "arm,pl18x", "arm,primecell";
reg = <0x80008000 0x1000>;
- interrupts = <0 100 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>;
dmas = <&dma 43 0 0x2>, /* Logical - DevToMem */
<&dma 43 0 0x0>; /* Logical - MemToDev */
@@ -1093,7 +1095,7 @@
msp0: msp@80123000 {
compatible = "stericsson,ux500-msp-i2s";
reg = <0x80123000 0x1000>;
- interrupts = <0 31 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>;
v-ape-supply = <&db8500_vape_reg>;
dmas = <&dma 31 0 0x12>, /* Logical - DevToMem - HighPrio */
@@ -1109,7 +1111,7 @@
msp1: msp@80124000 {
compatible = "stericsson,ux500-msp-i2s";
reg = <0x80124000 0x1000>;
- interrupts = <0 62 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 62 IRQ_TYPE_LEVEL_HIGH>;
v-ape-supply = <&db8500_vape_reg>;
/* This DMA channel only exist on DB8500 v1 */
@@ -1126,7 +1128,7 @@
msp2: msp@80117000 {
compatible = "stericsson,ux500-msp-i2s";
reg = <0x80117000 0x1000>;
- interrupts = <0 98 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>;
v-ape-supply = <&db8500_vape_reg>;
dmas = <&dma 14 0 0x12>, /* Logical - DevToMem - HighPrio */
@@ -1143,7 +1145,7 @@
msp3: msp@80125000 {
compatible = "stericsson,ux500-msp-i2s";
reg = <0x80125000 0x1000>;
- interrupts = <0 62 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 62 IRQ_TYPE_LEVEL_HIGH>;
v-ape-supply = <&db8500_vape_reg>;
/* This DMA channel only exist on DB8500 v2 */
@@ -1176,7 +1178,7 @@
<0xa0351000 0x1000>, /* DSI link 1 */
<0xa0352000 0x1000>, /* DSI link 2 */
<0xa0353000 0x1000>; /* DSI link 3 */
- interrupts = <0 48 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 48 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&prcmu_clk PRCMU_MCDECLK>, /* Main MCDE clock */
<&prcmu_clk PRCMU_LCDCLK>, /* LCD clock */
<&prcmu_clk PRCMU_PLLDSI>, /* HDMI clock */
@@ -1190,7 +1192,7 @@
cryp@a03cb000 {
compatible = "stericsson,ux500-cryp";
reg = <0xa03cb000 0x1000>;
- interrupts = <0 15 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>;
v-ape-supply = <&db8500_vape_reg>;
clocks = <&prcc_pclk 6 1>;
diff --git a/arch/arm/boot/dts/ste-href-stuib.dtsi b/arch/arm/boot/dts/ste-href-stuib.dtsi
index c3987ad06d79..6f720756057d 100644
--- a/arch/arm/boot/dts/ste-href-stuib.dtsi
+++ b/arch/arm/boot/dts/ste-href-stuib.dtsi
@@ -22,13 +22,13 @@
button@139 {
/* Proximity sensor */
- gpios = <&gpio6 25 0x4>;
+ gpios = <&gpio6 25 GPIO_ACTIVE_HIGH>;
linux,code = <11>; /* SW_FRONT_PROXIMITY */
label = "SFH7741 Proximity Sensor";
};
button@145 {
/* Hall sensor */
- gpios = <&gpio4 17 0x4>;
+ gpios = <&gpio4 17 GPIO_ACTIVE_HIGH>;
linux,code = <0>; /* SW_LID */
label = "HED54XXU11 Hall Effect Sensor";
};
diff --git a/arch/arm/boot/dts/ste-href-tvk1281618.dtsi b/arch/arm/boot/dts/ste-href-tvk1281618.dtsi
index 55f9d0cc90f3..fc5e8ce700c3 100644
--- a/arch/arm/boot/dts/ste-href-tvk1281618.dtsi
+++ b/arch/arm/boot/dts/ste-href-tvk1281618.dtsi
@@ -24,13 +24,13 @@
button@139 {
/* Proximity sensor */
- gpios = <&gpio6 25 0x4>;
+ gpios = <&gpio6 25 GPIO_ACTIVE_HIGH>;
linux,code = <11>; /* SW_FRONT_PROXIMITY */
label = "SFH7741 Proximity Sensor";
};
button@145 {
/* Hall sensor */
- gpios = <&gpio4 17 0x4>;
+ gpios = <&gpio4 17 GPIO_ACTIVE_HIGH>;
linux,code = <0>; /* SW_LID */
label = "HED54XXU11 Hall Effect Sensor";
};
@@ -93,14 +93,15 @@
/* Accelerometer */
compatible = "st,lsm303dlh-accel";
st,drdy-int-pin = <1>;
+ drive-open-drain;
reg = <0x18>;
vdd-supply = <&ab8500_ldo_aux1_reg>;
vddio-supply = <&db8500_vsmps2_reg>;
pinctrl-names = "default";
pinctrl-0 = <&accel_tvk_mode>;
interrupt-parent = <&gpio2>;
- interrupts = <18 IRQ_TYPE_EDGE_RISING>,
- <19 IRQ_TYPE_EDGE_RISING>;
+ interrupts = <18 IRQ_TYPE_EDGE_FALLING>,
+ <19 IRQ_TYPE_EDGE_FALLING>;
};
lsm303dlh@1e {
/*
@@ -118,14 +119,15 @@
/* Accelerometer */
compatible = "st,lis331dl-accel";
st,drdy-int-pin = <1>;
+ drive-open-drain;
reg = <0x1c>;
vdd-supply = <&ab8500_ldo_aux1_reg>;
vddio-supply = <&db8500_vsmps2_reg>;
pinctrl-names = "default";
pinctrl-0 = <&accel_tvk_mode>;
interrupt-parent = <&gpio2>;
- interrupts = <18 IRQ_TYPE_EDGE_RISING>,
- <19 IRQ_TYPE_EDGE_RISING>;
+ interrupts = <18 IRQ_TYPE_EDGE_FALLING>,
+ <19 IRQ_TYPE_EDGE_FALLING>;
};
ak8974@0f {
/* Magnetometer */
@@ -216,7 +218,7 @@
/* Accelerometer interrupt lines 1 & 2 */
tvk_cfg {
pins = "GPIO82_C1", "GPIO83_D3";
- ste,config = <&gpio_in_pd>;
+ ste,config = <&gpio_in_pu>;
};
};
};
diff --git a/arch/arm/boot/dts/ste-hrefprev60.dtsi b/arch/arm/boot/dts/ste-hrefprev60.dtsi
index b0278f4c486c..ece222d51717 100644
--- a/arch/arm/boot/dts/ste-hrefprev60.dtsi
+++ b/arch/arm/boot/dts/ste-hrefprev60.dtsi
@@ -18,7 +18,7 @@
/ {
gpio_keys {
button@1 {
- gpios = <&tc3589x_gpio 7 0x4>;
+ gpios = <&tc3589x_gpio 7 GPIO_ACTIVE_HIGH>;
};
};
@@ -68,12 +68,12 @@
// External Micro SD slot
sdi0_per1@80126000 {
- cd-gpios = <&tc3589x_gpio 3 0x4>;
+ cd-gpios = <&tc3589x_gpio 3 GPIO_ACTIVE_HIGH>;
};
vmmci: regulator-gpio {
- gpios = <&tc3589x_gpio 18 0x4>;
- enable-gpio = <&tc3589x_gpio 17 0x4>;
+ gpios = <&tc3589x_gpio 18 GPIO_ACTIVE_HIGH>;
+ enable-gpio = <&tc3589x_gpio 17 GPIO_ACTIVE_HIGH>;
};
pinctrl {
diff --git a/arch/arm/boot/dts/ste-hrefv60plus.dtsi b/arch/arm/boot/dts/ste-hrefv60plus.dtsi
index 149a72e7e37a..45d7af326718 100644
--- a/arch/arm/boot/dts/ste-hrefv60plus.dtsi
+++ b/arch/arm/boot/dts/ste-hrefv60plus.dtsi
@@ -20,12 +20,12 @@
soc {
// External Micro SD slot
sdi0_per1@80126000 {
- cd-gpios = <&gpio2 31 0x4>; // 95
+ cd-gpios = <&gpio2 31 GPIO_ACTIVE_HIGH>; // 95
};
vmmci: regulator-gpio {
- gpios = <&gpio0 5 0x4>;
- enable-gpio = <&gpio5 9 0x4>;
+ gpios = <&gpio0 5 GPIO_ACTIVE_HIGH>;
+ enable-gpio = <&gpio5 9 GPIO_ACTIVE_HIGH>;
};
pinctrl {
diff --git a/arch/arm/boot/dts/ste-nomadik-nhk15.dts b/arch/arm/boot/dts/ste-nomadik-nhk15.dts
index 4a21c6492dbb..d35aa88791ad 100644
--- a/arch/arm/boot/dts/ste-nomadik-nhk15.dts
+++ b/arch/arm/boot/dts/ste-nomadik-nhk15.dts
@@ -57,8 +57,15 @@
};
};
};
+ lis3lv02dl {
+ lis3lv02dl_nhk_mode: lis3lv02dl_nhk {
+ nhk_cfg1 {
+ pins = "GPIO82_C10"; // IRQ line
+ ste,input = <0>;
+ };
+ };
+ };
};
-
src@101e0000 {
/* These crystal outputs are not used on this board */
disable-sxtalo;
@@ -86,6 +93,10 @@
lis3lv02dl@1d {
/* Accelerometer */
compatible = "st,lis3lv02dl-accel";
+ interrupt-parent = <&gpio2>;
+ interrupts = <18 IRQ_TYPE_EDGE_RISING>; // GPIO 82
+ pinctrl-0 = <&lis3lv02dl_nhk_mode>;
+ pinctrl-names = "default";
reg = <0x1d>;
};
stmpe0: stmpe2401@43 {
diff --git a/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi b/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi
index e2be53343064..d2d532a9d783 100644
--- a/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi
+++ b/arch/arm/boot/dts/ste-nomadik-stn8815.dtsi
@@ -748,6 +748,9 @@
clocks = <&uart0clk>, <&pclkuart0>;
clock-names = "uartclk", "apb_pclk";
status = "disabled";
+ dmas = <&dmac0 14 1>,
+ <&dmac0 15 1>;
+ dma-names = "rx", "tx";
};
uart1: uart@101fb000 {
@@ -759,6 +762,9 @@
clock-names = "uartclk", "apb_pclk";
pinctrl-names = "default";
pinctrl-0 = <&uart1_default_mux>;
+ dmas = <&dmac1 22 1>,
+ <&dmac1 23 1>;
+ dma-names = "rx", "tx";
};
uart2: uart@101f2000 {
@@ -769,6 +775,9 @@
clocks = <&uart2clk>, <&pclkuart2>;
clock-names = "uartclk", "apb_pclk";
status = "disabled";
+ dmas = <&dmac1 30 1>,
+ <&dmac1 31 1>;
+ dma-names = "rx", "tx";
};
rng: rng@101b0000 {
@@ -813,5 +822,34 @@
pinctrl-0 = <&mmcsd_default_mux>, <&mmcsd_default_mode>;
vmmc-supply = <&vmmc_regulator>;
};
+
+ dmac0: dma-controller@10130000 {
+ compatible = "arm,pl080", "arm,primecell";
+ reg = <0x10130000 0x1000>;
+ interrupt-parent = <&vica>;
+ interrupts = <15>;
+ clocks = <&hclkdma0>;
+ clock-names = "apb_pclk";
+ lli-bus-interface-ahb1;
+ lli-bus-interface-ahb2;
+ mem-bus-interface-ahb2;
+ memcpy-burst-size = <256>;
+ memcpy-bus-width = <32>;
+ #dma-cells = <2>;
+ };
+ dmac1: dma-controller@10150000 {
+ compatible = "arm,pl080", "arm,primecell";
+ reg = <0x10150000 0x1000>;
+ interrupt-parent = <&vica>;
+ interrupts = <13>;
+ clocks = <&hclkdma1>;
+ clock-names = "apb_pclk";
+ lli-bus-interface-ahb1;
+ lli-bus-interface-ahb2;
+ mem-bus-interface-ahb2;
+ memcpy-burst-size = <256>;
+ memcpy-bus-width = <32>;
+ #dma-cells = <2>;
+ };
};
};
diff --git a/arch/arm/boot/dts/ste-snowball.dts b/arch/arm/boot/dts/ste-snowball.dts
index 08f82077b64d..36e84efc401c 100644
--- a/arch/arm/boot/dts/ste-snowball.dts
+++ b/arch/arm/boot/dts/ste-snowball.dts
@@ -50,35 +50,35 @@
wakeup-source;
linux,code = <2>;
label = "userpb";
- gpios = <&gpio1 0 0x4>;
+ gpios = <&gpio1 0 GPIO_ACTIVE_HIGH>;
};
button@2 {
debounce_interval = <50>;
wakeup-source;
linux,code = <3>;
label = "extkb1";
- gpios = <&gpio4 23 0x4>;
+ gpios = <&gpio4 23 GPIO_ACTIVE_HIGH>;
};
button@3 {
debounce_interval = <50>;
wakeup-source;
linux,code = <4>;
label = "extkb2";
- gpios = <&gpio4 24 0x4>;
+ gpios = <&gpio4 24 GPIO_ACTIVE_HIGH>;
};
button@4 {
debounce_interval = <50>;
wakeup-source;
linux,code = <5>;
label = "extkb3";
- gpios = <&gpio5 1 0x4>;
+ gpios = <&gpio5 1 GPIO_ACTIVE_HIGH>;
};
button@5 {
debounce_interval = <50>;
wakeup-source;
linux,code = <6>;
label = "extkb4";
- gpios = <&gpio5 2 0x4>;
+ gpios = <&gpio5 2 GPIO_ACTIVE_HIGH>;
};
};
@@ -88,7 +88,7 @@
pinctrl-0 = <&gpioled_snowball_mode>;
used-led {
label = "user_led";
- gpios = <&gpio4 14 0x4>;
+ gpios = <&gpio4 14 GPIO_ACTIVE_HIGH>;
default-state = "on";
linux,default-trigger = "heartbeat";
};
@@ -155,8 +155,8 @@
vmmci: regulator-gpio {
compatible = "regulator-gpio";
- gpios = <&gpio7 4 0x4>;
- enable-gpio = <&gpio6 25 0x4>;
+ gpios = <&gpio7 4 GPIO_ACTIVE_HIGH>;
+ enable-gpio = <&gpio6 25 GPIO_ACTIVE_HIGH>;
regulator-min-microvolt = <1800000>;
regulator-max-microvolt = <2900000>;
@@ -182,8 +182,7 @@
pinctrl-0 = <&sdi0_default_mode>;
pinctrl-1 = <&sdi0_sleep_mode>;
- cd-gpios = <&gpio6 26 0x4>; // 218
- cd-inverted;
+ cd-gpios = <&gpio6 26 GPIO_ACTIVE_LOW>; // 218
status = "okay";
};
diff --git a/arch/arm/boot/dts/stih407-family.dtsi b/arch/arm/boot/dts/stih407-family.dtsi
index 81f81214cdf9..ad8ba10764a3 100644
--- a/arch/arm/boot/dts/stih407-family.dtsi
+++ b/arch/arm/boot/dts/stih407-family.dtsi
@@ -15,6 +15,36 @@
#address-cells = <1>;
#size-cells = <1>;
+ reserved-memory {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+
+ gp0_reserved: rproc@40000000 {
+ compatible = "shared-dma-pool";
+ reg = <0x40000000 0x01000000>;
+ no-map;
+ };
+
+ gp1_reserved: rproc@41000000 {
+ compatible = "shared-dma-pool";
+ reg = <0x41000000 0x01000000>;
+ no-map;
+ };
+
+ audio_reserved: rproc@42000000 {
+ compatible = "shared-dma-pool";
+ reg = <0x42000000 0x01000000>;
+ no-map;
+ };
+
+ dmu_reserved: rproc@43000000 {
+ compatible = "shared-dma-pool";
+ reg = <0x43000000 0x01000000>;
+ no-map;
+ };
+ };
+
cpus {
#address-cells = <1>;
#size-cells = <0>;
@@ -22,15 +52,35 @@
device_type = "cpu";
compatible = "arm,cortex-a9";
reg = <0>;
+
/* u-boot puts hpen in SBC dmem at 0xa4 offset */
cpu-release-addr = <0x94100A4>;
+
+ /* kHz uV */
+ operating-points = <1500000 0
+ 1200000 0
+ 800000 0
+ 500000 0>;
+
+ clocks = <&clk_m_a9>;
+ clock-names = "cpu";
+ clock-latency = <100000>;
+ cpu0-supply = <&pwm_regulator>;
+ st,syscfg = <&syscfg_core 0x8e0>;
};
cpu@1 {
device_type = "cpu";
compatible = "arm,cortex-a9";
reg = <1>;
+
/* u-boot puts hpen in SBC dmem at 0xa4 offset */
cpu-release-addr = <0x94100A4>;
+
+ /* kHz uV */
+ operating-points = <1500000 0
+ 1200000 0
+ 800000 0
+ 500000 0>;
};
};
@@ -534,7 +584,7 @@
reg = <0x8788000 0x1000>;
interrupts = <GIC_SPI 130 IRQ_TYPE_EDGE_RISING>;
clocks = <&clk_s_d3_flexgen CLK_LPC_1>;
- st,lpc-mode = <ST_LPC_MODE_RTC>;
+ st,lpc-mode = <ST_LPC_MODE_CLKSRC>;
};
sata0: sata@9b20000 {
@@ -694,5 +744,79 @@
clocks = <&clk_sysin>;
status = "okay";
};
+
+ mailbox0: mailbox@8f00000 {
+ compatible = "st,stih407-mailbox";
+ reg = <0x8f00000 0x1000>;
+ interrupts = <GIC_SPI 1 IRQ_TYPE_NONE>;
+ #mbox-cells = <2>;
+ mbox-name = "a9";
+ status = "okay";
+ };
+
+ mailbox1: mailbox@8f01000 {
+ compatible = "st,stih407-mailbox";
+ reg = <0x8f01000 0x1000>;
+ #mbox-cells = <2>;
+ mbox-name = "st231_gp_1";
+ status = "okay";
+ };
+
+ mailbox2: mailbox@8f02000 {
+ compatible = "st,stih407-mailbox";
+ reg = <0x8f02000 0x1000>;
+ #mbox-cells = <2>;
+ mbox-name = "st231_gp_0";
+ status = "okay";
+ };
+
+ mailbox3: mailbox@8f03000 {
+ compatible = "st,stih407-mailbox";
+ reg = <0x8f03000 0x1000>;
+ #mbox-cells = <2>;
+ mbox-name = "st231_audio_video";
+ status = "okay";
+ };
+
+ st231_gp0: remote-processor {
+ compatible = "st,st231-rproc";
+ memory-region = <&gp0_reserved>;
+ resets = <&softreset STIH407_ST231_GP0_SOFTRESET>;
+ reset-names = "sw_reset";
+ clocks = <&clk_s_c0_flexgen CLK_ST231_GP_0>;
+ clock-frequency = <600000000>;
+ st,syscfg = <&syscfg_core 0x22c>;
+ };
+
+
+ st231_gp1: remote-processor {
+ compatible = "st,st231-rproc";
+ memory-region = <&gp1_reserved>;
+ resets = <&softreset STIH407_ST231_GP1_SOFTRESET>;
+ reset-names = "sw_reset";
+ clocks = <&clk_s_c0_flexgen CLK_ST231_GP_1>;
+ clock-frequency = <600000000>;
+ st,syscfg = <&syscfg_core 0x220>;
+ };
+
+ st231_audio: remote-processor {
+ compatible = "st,st231-rproc";
+ memory-region = <&audio_reserved>;
+ resets = <&softreset STIH407_ST231_AUD_SOFTRESET>;
+ reset-names = "sw_reset";
+ clocks = <&clk_s_c0_flexgen CLK_ST231_AUD_0>;
+ clock-frequency = <600000000>;
+ st,syscfg = <&syscfg_core 0x228>;
+ };
+
+ st231_dmu: remote-processor {
+ compatible = "st,st231-rproc";
+ memory-region = <&dmu_reserved>;
+ resets = <&softreset STIH407_ST231_DMU_SOFTRESET>;
+ reset-names = "sw_reset";
+ clocks = <&clk_s_c0_flexgen CLK_ST231_DMU>;
+ clock-frequency = <600000000>;
+ st,syscfg = <&syscfg_core 0x224>;
+ };
};
};
diff --git a/arch/arm/boot/dts/sun4i-a10-a1000.dts b/arch/arm/boot/dts/sun4i-a10-a1000.dts
index 97570cb7f2fc..c92a1ae33a1e 100644
--- a/arch/arm/boot/dts/sun4i-a10-a1000.dts
+++ b/arch/arm/boot/dts/sun4i-a10-a1000.dts
@@ -87,6 +87,24 @@
enable-active-high;
gpio = <&pio 7 15 GPIO_ACTIVE_HIGH>;
};
+
+ sound {
+ compatible = "simple-audio-card";
+ simple-audio-card,name = "On-board SPDIF";
+
+ simple-audio-card,cpu {
+ sound-dai = <&spdif>;
+ };
+
+ simple-audio-card,codec {
+ sound-dai = <&spdif_out>;
+ };
+ };
+
+ spdif_out: spdif-out {
+ #sound-dai-cells = <0>;
+ compatible = "linux,spdif-dit";
+ };
};
&ahci {
@@ -188,6 +206,12 @@
status = "okay";
};
+&spdif {
+ pinctrl-names = "default";
+ pinctrl-0 = <&spdif_tx_pins_a>;
+ status = "okay";
+};
+
&uart0 {
pinctrl-names = "default";
pinctrl-0 = <&uart0_pins_a>;
diff --git a/arch/arm/boot/dts/sun4i-a10-dserve-dsrv9703c.dts b/arch/arm/boot/dts/sun4i-a10-dserve-dsrv9703c.dts
new file mode 100644
index 000000000000..893497e397da
--- /dev/null
+++ b/arch/arm/boot/dts/sun4i-a10-dserve-dsrv9703c.dts
@@ -0,0 +1,281 @@
+/*
+ * Copyright 2016 Hans de Goede <hdegoede@redhat.com>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "sun4i-a10.dtsi"
+#include "sunxi-common-regulators.dtsi"
+#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/input/input.h>
+#include <dt-bindings/interrupt-controller/irq.h>
+#include <dt-bindings/pinctrl/sun4i-a10.h>
+#include <dt-bindings/pwm/pwm.h>
+
+/ {
+ model = "Dserve DSRV9703C";
+ compatible = "dserve,dsrv9703c", "allwinner,sun4i-a10";
+
+ aliases {
+ serial0 = &uart0;
+ };
+
+ backlight: backlight {
+ compatible = "pwm-backlight";
+ pinctrl-names = "default";
+ pinctrl-0 = <&bl_en_pin_dsrv9703c>;
+ pwms = <&pwm 0 50000 PWM_POLARITY_INVERTED>;
+ brightness-levels = <0 10 20 30 40 50 60 70 80 90 100>;
+ default-brightness-level = <8>;
+ enable-gpios = <&pio 7 7 GPIO_ACTIVE_HIGH>; /* PH7 */
+ };
+
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
+ haptics {
+ compatible = "regulator-haptic";
+ haptic-supply = <&reg_motor>;
+ min-microvolt = <3000000>;
+ max-microvolt = <3000000>;
+ };
+
+ reg_motor: reg_motor {
+ compatible = "regulator-fixed";
+ pinctrl-names = "default";
+ pinctrl-0 = <&motor_pins>;
+ regulator-name = "vcc-motor";
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3000000>;
+ enable-active-high;
+ gpio = <&pio 1 3 GPIO_ACTIVE_HIGH>; /* PB3 */
+ };
+};
+
+&codec {
+ pinctrl-names = "default";
+ pinctrl-0 = <&codec_pa_pin>;
+ allwinner,pa-gpios = <&pio 7 15 GPIO_ACTIVE_HIGH>; /* PH15 */
+ status = "okay";
+};
+
+&cpu0 {
+ cpu-supply = <&reg_dcdc2>;
+};
+
+&ehci1 {
+ status = "okay";
+};
+
+&i2c0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c0_pins_a>;
+ status = "okay";
+
+ axp209: pmic@34 {
+ reg = <0x34>;
+ interrupts = <0>;
+ };
+};
+
+#include "axp209.dtsi"
+
+&i2c1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c1_pins_a>;
+ /* pull-ups and devices require AXP209 LDO3 */
+ status = "failed";
+};
+
+&i2c2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c2_pins_a>;
+ status = "okay";
+
+ ft5406ee8: touchscreen@38 {
+ compatible = "edt,edt-ft5406";
+ reg = <0x38>;
+ interrupt-parent = <&pio>;
+ interrupts = <7 21 IRQ_TYPE_EDGE_FALLING>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&touchscreen_pins>;
+ reset-gpios = <&pio 1 13 GPIO_ACTIVE_LOW>;
+ touchscreen-size-x = <1024>;
+ touchscreen-size-y = <768>;
+ };
+};
+
+&lradc {
+ vref-supply = <&reg_ldo2>;
+ status = "okay";
+
+ button@400 {
+ label = "Volume Down";
+ linux,code = <KEY_VOLUMEDOWN>;
+ channel = <0>;
+ voltage = <400000>;
+ };
+
+ button@800 {
+ label = "Volume Up";
+ linux,code = <KEY_VOLUMEUP>;
+ channel = <0>;
+ voltage = <800000>;
+ };
+};
+
+&mmc0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc0_pins_a>, <&mmc0_cd_pin_reference_design>;
+ vmmc-supply = <&reg_vcc3v3>;
+ bus-width = <4>;
+ cd-gpios = <&pio 7 1 GPIO_ACTIVE_HIGH>; /* PH1 */
+ cd-inverted;
+ status = "okay";
+};
+
+&otg_sram {
+ status = "okay";
+};
+
+&pio {
+ bl_en_pin_dsrv9703c: bl_en_pin@0 {
+ allwinner,pins = "PH7";
+ allwinner,function = "gpio_out";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
+ };
+
+ codec_pa_pin: codec_pa_pin@0 {
+ allwinner,pins = "PH15";
+ allwinner,function = "gpio_out";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
+ };
+
+ motor_pins: motor_pins@0 {
+ allwinner,pins = "PB3";
+ allwinner,function = "gpio_out";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
+ };
+
+ touchscreen_pins: touchscreen_pins@0 {
+ allwinner,pins = "PB13";
+ allwinner,function = "gpio_out";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
+ };
+
+ usb0_id_detect_pin: usb0_id_detect_pin@0 {
+ allwinner,pins = "PH4";
+ allwinner,function = "gpio_in";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_PULL_UP>;
+ };
+
+ usb0_vbus_detect_pin: usb0_vbus_detect_pin@0 {
+ allwinner,pins = "PH5";
+ allwinner,function = "gpio_in";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_PULL_DOWN>;
+ };
+};
+
+&pwm {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pwm0_pins_a>;
+ status = "okay";
+};
+
+&reg_dcdc2 {
+ regulator-always-on;
+ regulator-min-microvolt = <1000000>;
+ regulator-max-microvolt = <1400000>;
+ regulator-name = "vdd-cpu";
+};
+
+&reg_dcdc3 {
+ regulator-always-on;
+ regulator-min-microvolt = <1250000>;
+ regulator-max-microvolt = <1250000>;
+ regulator-name = "vdd-int-dll";
+};
+
+&reg_ldo1 {
+ regulator-name = "vdd-rtc";
+};
+
+&reg_ldo2 {
+ regulator-always-on;
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3000000>;
+ regulator-name = "avcc";
+};
+
+&reg_usb0_vbus {
+ status = "okay";
+};
+
+&reg_usb2_vbus {
+ status = "okay";
+};
+
+&uart0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart0_pins_a>;
+ status = "okay";
+};
+
+&usb_otg {
+ dr_mode = "otg";
+ status = "okay";
+};
+
+&usbphy {
+ pinctrl-names = "default";
+ pinctrl-0 = <&usb0_id_detect_pin>, <&usb0_vbus_detect_pin>;
+ usb0_id_det-gpio = <&pio 7 4 GPIO_ACTIVE_HIGH>; /* PH4 */
+ usb0_vbus_det-gpio = <&pio 7 5 GPIO_ACTIVE_HIGH>; /* PH5 */
+ usb0_vbus-supply = <&reg_usb0_vbus>;
+ usb2_vbus-supply = <&reg_usb2_vbus>;
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/sun4i-a10.dtsi b/arch/arm/boot/dts/sun4i-a10.dtsi
index 2c8f5e6ad905..a03e56fb5dbc 100644
--- a/arch/arm/boot/dts/sun4i-a10.dtsi
+++ b/arch/arm/boot/dts/sun4i-a10.dtsi
@@ -96,7 +96,7 @@
allwinner,pipeline = "de_fe0-de_be0-lcd0-tve0";
clocks = <&pll5 1>, <&ahb_gates 34>, <&ahb_gates 36>,
<&ahb_gates 44>, <&ahb_gates 46>,
- <&dram_gates 25>, <&dram_gates 26>;
+ <&dram_gates 5>, <&dram_gates 25>, <&dram_gates 26>;
status = "disabled";
};
};
@@ -184,6 +184,15 @@
clock-output-names = "osc24M";
};
+ osc3M: osc3M_clk {
+ compatible = "fixed-factor-clock";
+ #clock-cells = <0>;
+ clock-div = <8>;
+ clock-mult = <1>;
+ clocks = <&osc24M>;
+ clock-output-names = "osc3M";
+ };
+
osc32k: clk@0 {
#clock-cells = <0>;
compatible = "fixed-clock";
@@ -208,6 +217,23 @@
"pll2-4x", "pll2-8x";
};
+ pll3: clk@01c20010 {
+ #clock-cells = <0>;
+ compatible = "allwinner,sun4i-a10-pll3-clk";
+ reg = <0x01c20010 0x4>;
+ clocks = <&osc3M>;
+ clock-output-names = "pll3";
+ };
+
+ pll3x2: pll3x2_clk {
+ compatible = "fixed-factor-clock";
+ #clock-cells = <0>;
+ clock-div = <1>;
+ clock-mult = <2>;
+ clocks = <&pll3>;
+ clock-output-names = "pll3-2x";
+ };
+
pll4: clk@01c20018 {
#clock-cells = <0>;
compatible = "allwinner,sun4i-a10-pll1-clk";
@@ -232,6 +258,23 @@
clock-output-names = "pll6_sata", "pll6_other", "pll6";
};
+ pll7: clk@01c20030 {
+ #clock-cells = <0>;
+ compatible = "allwinner,sun4i-a10-pll3-clk";
+ reg = <0x01c20030 0x4>;
+ clocks = <&osc3M>;
+ clock-output-names = "pll7";
+ };
+
+ pll7x2: pll7x2_clk {
+ compatible = "fixed-factor-clock";
+ #clock-cells = <0>;
+ clock-div = <1>;
+ clock-mult = <2>;
+ clocks = <&pll7>;
+ clock-output-names = "pll7-2x";
+ };
+
/* dummy is 200M */
cpu: cpu@01c20054 {
#clock-cells = <0>;
@@ -477,6 +520,17 @@
clock-output-names = "ir1";
};
+ spdif_clk: clk@01c200c0 {
+ #clock-cells = <0>;
+ compatible = "allwinner,sun4i-a10-mod1-clk";
+ reg = <0x01c200c0 0x4>;
+ clocks = <&pll2 SUN4I_A10_PLL2_8X>,
+ <&pll2 SUN4I_A10_PLL2_4X>,
+ <&pll2 SUN4I_A10_PLL2_2X>,
+ <&pll2 SUN4I_A10_PLL2_1X>;
+ clock-output-names = "spdif";
+ };
+
usb_clk: clk@01c200cc {
#clock-cells = <1>;
#reset-cells = <1>;
@@ -1006,6 +1060,13 @@
allwinner,drive = <SUN4I_PINCTRL_10_MA>;
allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
};
+
+ spdif_tx_pins_a: spdif@0 {
+ allwinner,pins = "PB13";
+ allwinner,function = "spdif";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_PULL_UP>;
+ };
};
timer@01c20c00 {
@@ -1034,6 +1095,19 @@
status = "disabled";
};
+ spdif: spdif@01c21000 {
+ #sound-dai-cells = <0>;
+ compatible = "allwinner,sun4i-a10-spdif";
+ reg = <0x01c21000 0x400>;
+ interrupts = <13>;
+ clocks = <&apb0_gates 1>, <&spdif_clk>;
+ clock-names = "apb", "spdif";
+ dmas = <&dma SUN4I_DMA_NORMAL 2>,
+ <&dma SUN4I_DMA_NORMAL 2>;
+ dma-names = "rx", "tx";
+ status = "disabled";
+ };
+
ir0: ir@01c21800 {
compatible = "allwinner,sun4i-a10-ir";
clocks = <&apb0_gates 6>, <&ir0_clk>;
diff --git a/arch/arm/boot/dts/sun5i-a13-difrnce-dit4350.dts b/arch/arm/boot/dts/sun5i-a13-difrnce-dit4350.dts
new file mode 100644
index 000000000000..6546fa02901d
--- /dev/null
+++ b/arch/arm/boot/dts/sun5i-a13-difrnce-dit4350.dts
@@ -0,0 +1,226 @@
+/*
+ * Copyright 2016 Hans de Goede <hdegoede@redhat.com>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "sun5i-a13.dtsi"
+#include "sunxi-common-regulators.dtsi"
+#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/input/input.h>
+#include <dt-bindings/interrupt-controller/irq.h>
+#include <dt-bindings/pinctrl/sun4i-a10.h>
+#include <dt-bindings/pwm/pwm.h>
+
+/ {
+ model = "Difrnce DIT4350";
+ compatible = "difrnce,dit4350", "allwinner,sun5i-a13";
+
+ aliases {
+ serial0 = &uart1;
+ };
+
+ backlight: backlight {
+ compatible = "pwm-backlight";
+ pwms = <&pwm 0 50000 PWM_POLARITY_INVERTED>;
+ brightness-levels = <0 10 20 30 40 50 60 70 80 90 100>;
+ default-brightness-level = <8>;
+ /* TODO: backlight uses axp gpio1 as enable pin */
+ };
+
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+};
+
+&cpu0 {
+ cpu-supply = <&reg_dcdc2>;
+};
+
+&ehci0 {
+ status = "okay";
+};
+
+&i2c0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c0_pins_a>;
+ status = "okay";
+
+ axp209: pmic@34 {
+ reg = <0x34>;
+ interrupts = <0>;
+ };
+};
+
+#include "axp209.dtsi"
+
+&i2c1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c1_pins_a>;
+ status = "okay";
+
+ pcf8563: rtc@51 {
+ compatible = "nxp,pcf8563";
+ reg = <0x51>;
+ };
+};
+
+&lradc {
+ vref-supply = <&reg_ldo2>;
+ status = "okay";
+
+ button@200 {
+ label = "Volume Up";
+ linux,code = <KEY_VOLUMEUP>;
+ channel = <0>;
+ voltage = <200000>;
+ };
+
+ button@400 {
+ label = "Volume Down";
+ linux,code = <KEY_VOLUMEDOWN>;
+ channel = <0>;
+ voltage = <400000>;
+ };
+};
+
+&mmc0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc0_pins_a>, <&mmc0_cd_pin_d709>;
+ vmmc-supply = <&reg_vcc3v3>;
+ bus-width = <4>;
+ cd-gpios = <&pio 6 0 GPIO_ACTIVE_HIGH>; /* PG0 */
+ cd-inverted;
+ status = "okay";
+};
+
+&otg_sram {
+ status = "okay";
+};
+
+&pio {
+ mmc0_cd_pin_d709: mmc0_cd_pin@0 {
+ allwinner,pins = "PG0";
+ allwinner,function = "gpio_in";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_PULL_UP>;
+ };
+
+ usb0_vbus_detect_pin: usb0_vbus_detect_pin@0 {
+ allwinner,pins = "PG1";
+ allwinner,function = "gpio_in";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_PULL_DOWN>;
+ };
+
+ usb0_id_detect_pin: usb0_id_detect_pin@0 {
+ allwinner,pins = "PG2";
+ allwinner,function = "gpio_in";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_PULL_UP>;
+ };
+};
+
+&pwm {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pwm0_pins>;
+ status = "okay";
+};
+
+&reg_dcdc2 {
+ regulator-always-on;
+ regulator-min-microvolt = <1000000>;
+ regulator-max-microvolt = <1400000>;
+ regulator-name = "vdd-cpu";
+};
+
+&reg_dcdc3 {
+ regulator-always-on;
+ regulator-min-microvolt = <1250000>;
+ regulator-max-microvolt = <1250000>;
+ regulator-name = "vdd-int-pll";
+};
+
+&reg_ldo1 {
+ regulator-name = "vdd-rtc";
+};
+
+&reg_ldo2 {
+ regulator-always-on;
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3000000>;
+ regulator-name = "avcc";
+};
+
+&reg_ldo3 {
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-name = "vcc-wifi";
+};
+
+&reg_usb0_vbus {
+ gpio = <&pio 6 12 GPIO_ACTIVE_HIGH>; /* PG12 */
+ status = "okay";
+};
+
+&uart1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart1_pins_b>;
+ status = "okay";
+};
+
+&usb_otg {
+ dr_mode = "otg";
+ status = "okay";
+};
+
+&usb0_vbus_pin_a {
+ allwinner,pins = "PG12";
+};
+
+&usbphy {
+ pinctrl-names = "default";
+ pinctrl-0 = <&usb0_id_detect_pin>, <&usb0_vbus_detect_pin>;
+ usb0_id_det-gpio = <&pio 6 2 GPIO_ACTIVE_HIGH>; /* PG2 */
+ usb0_vbus_det-gpio = <&pio 6 1 GPIO_ACTIVE_HIGH>; /* PG1 */
+ usb0_vbus-supply = <&reg_usb0_vbus>;
+ usb1_vbus-supply = <&reg_ldo3>;
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/sun5i-a13-empire-electronix-d709.dts b/arch/arm/boot/dts/sun5i-a13-empire-electronix-d709.dts
index 7fbb0b0558a9..6efbba6d40a9 100644
--- a/arch/arm/boot/dts/sun5i-a13-empire-electronix-d709.dts
+++ b/arch/arm/boot/dts/sun5i-a13-empire-electronix-d709.dts
@@ -123,7 +123,7 @@
&mmc0 {
pinctrl-names = "default";
- pinctrl-0 = <&mmc0_pins_a>, <&mmc0_cd_pin_inet98fv2>;
+ pinctrl-0 = <&mmc0_pins_a>, <&mmc0_cd_pin_d709>;
vmmc-supply = <&reg_vcc3v3>;
bus-width = <4>;
cd-gpios = <&pio 6 0 GPIO_ACTIVE_HIGH>; /* PG0 */
@@ -131,27 +131,12 @@
status = "okay";
};
-&mmc2 {
- pinctrl-names = "default";
- pinctrl-0 = <&mmc2_pins_a>;
- vmmc-supply = <&reg_vcc3v3>;
- bus-width = <8>;
- non-removable;
- status = "okay";
-
- mmccard: mmccard@0 {
- reg = <0>;
- compatible = "mmc-card";
- broken-hpi;
- };
-};
-
&otg_sram {
status = "okay";
};
&pio {
- mmc0_cd_pin_inet98fv2: mmc0_cd_pin@0 {
+ mmc0_cd_pin_d709: mmc0_cd_pin@0 {
allwinner,pins = "PG0";
allwinner,function = "gpio_in";
allwinner,drive = <SUN4I_PINCTRL_10_MA>;
diff --git a/arch/arm/boot/dts/sun5i-a13-inet-98v-rev2.dts b/arch/arm/boot/dts/sun5i-a13-inet-98v-rev2.dts
index 6fa54b661423..1b11ec95ae53 100644
--- a/arch/arm/boot/dts/sun5i-a13-inet-98v-rev2.dts
+++ b/arch/arm/boot/dts/sun5i-a13-inet-98v-rev2.dts
@@ -123,21 +123,6 @@
status = "okay";
};
-&mmc2 {
- pinctrl-names = "default";
- pinctrl-0 = <&mmc2_pins_a>;
- vmmc-supply = <&reg_vcc3v3>;
- bus-width = <8>;
- non-removable;
- status = "okay";
-
- mmccard: mmccard@0 {
- reg = <0>;
- compatible = "mmc-card";
- broken-hpi;
- };
-};
-
&otg_sram {
status = "okay";
};
diff --git a/arch/arm/boot/dts/sun5i-a13-olinuxino-micro.dts b/arch/arm/boot/dts/sun5i-a13-olinuxino-micro.dts
index ad84fe4276c9..081329e2b80b 100644
--- a/arch/arm/boot/dts/sun5i-a13-olinuxino-micro.dts
+++ b/arch/arm/boot/dts/sun5i-a13-olinuxino-micro.dts
@@ -109,6 +109,10 @@
status = "okay";
};
+&otg_sram {
+ status = "okay";
+};
+
&pio {
mmc0_cd_pin_olinuxinom: mmc0_cd_pin@0 {
allwinner,pins = "PG0";
@@ -124,6 +128,27 @@
allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
};
+ usb0_id_detect_pin: usb0_id_detect_pin@0 {
+ allwinner,pins = "PG2";
+ allwinner,function = "gpio_in";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_PULL_UP>;
+ };
+
+ usb0_vbus_detect_pin: usb0_vbus_detect_pin@0 {
+ allwinner,pins = "PG1";
+ allwinner,function = "gpio_in";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_PULL_DOWN>;
+ };
+
+ usb0_vbus_pin_olinuxinom: usb0_vbus_pin@0 {
+ allwinner,pins = "PG12";
+ allwinner,function = "gpio_out";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
+ };
+
usb1_vbus_pin_olinuxinom: usb1_vbus_pin@0 {
allwinner,pins = "PG11";
allwinner,function = "gpio_out";
@@ -132,6 +157,12 @@
};
};
+&reg_usb0_vbus {
+ pinctrl-0 = <&usb0_vbus_pin_olinuxinom>;
+ gpio = <&pio 6 12 GPIO_ACTIVE_HIGH>;
+ status = "okay";
+};
+
&reg_usb1_vbus {
pinctrl-0 = <&usb1_vbus_pin_olinuxinom>;
gpio = <&pio 6 11 GPIO_ACTIVE_HIGH>;
@@ -144,7 +175,17 @@
status = "okay";
};
+&usb_otg {
+ dr_mode = "otg";
+ status = "okay";
+};
+
&usbphy {
+ pinctrl-names = "default";
+ pinctrl-0 = <&usb0_id_detect_pin>, <&usb0_vbus_detect_pin>;
+ usb0_id_det-gpio = <&pio 6 2 GPIO_ACTIVE_HIGH>; /* PG2 */
+ usb0_vbus_det-gpio = <&pio 6 1 GPIO_ACTIVE_HIGH>; /* PG1 */
+ usb0_vbus-supply = <&reg_usb0_vbus>;
usb1_vbus-supply = <&reg_usb1_vbus>;
status = "okay";
};
diff --git a/arch/arm/boot/dts/sun5i-a13.dtsi b/arch/arm/boot/dts/sun5i-a13.dtsi
index d910d3a6c41c..263d46dbc7e6 100644
--- a/arch/arm/boot/dts/sun5i-a13.dtsi
+++ b/arch/arm/boot/dts/sun5i-a13.dtsi
@@ -61,7 +61,8 @@
compatible = "allwinner,simple-framebuffer",
"simple-framebuffer";
allwinner,pipeline = "de_be0-lcd0";
- clocks = <&pll5 1>, <&ahb_gates 36>, <&ahb_gates 44>;
+ clocks = <&ahb_gates 36>, <&ahb_gates 44>, <&de_be_clk>,
+ <&tcon_ch0_clk>, <&dram_gates 26>;
status = "disabled";
};
};
@@ -110,8 +111,8 @@
<10>, <13>,
<14>, <20>,
<21>, <22>,
- <28>, <32>, <36>,
- <40>, <44>,
+ <28>, <32>, <34>,
+ <36>, <40>, <44>,
<46>, <51>,
<52>;
clock-output-names = "ahb_usbotg", "ahb_ehci",
@@ -120,8 +121,8 @@
"ahb_mmc2", "ahb_nand",
"ahb_sdram", "ahb_spi0",
"ahb_spi1", "ahb_spi2",
- "ahb_stimer", "ahb_ve", "ahb_lcd",
- "ahb_csi", "ahb_de_be",
+ "ahb_stimer", "ahb_ve", "ahb_tve",
+ "ahb_lcd", "ahb_csi", "ahb_de_be",
"ahb_de_fe", "ahb_iep",
"ahb_mali400";
};
@@ -149,6 +150,61 @@
"apb1_i2c2", "apb1_uart1",
"apb1_uart3";
};
+
+ dram_gates: clk@01c20100 {
+ #clock-cells = <1>;
+ compatible = "allwinner,sun5i-a13-dram-gates-clk",
+ "allwinner,sun4i-a10-gates-clk";
+ reg = <0x01c20100 0x4>;
+ clocks = <&pll5 0>;
+ clock-indices = <0>,
+ <1>,
+ <25>,
+ <26>,
+ <29>,
+ <31>;
+ clock-output-names = "dram_ve",
+ "dram_csi",
+ "dram_de_fe",
+ "dram_de_be",
+ "dram_ace",
+ "dram_iep";
+ };
+
+ de_be_clk: clk@01c20104 {
+ #clock-cells = <0>;
+ #reset-cells = <0>;
+ compatible = "allwinner,sun4i-a10-display-clk";
+ reg = <0x01c20104 0x4>;
+ clocks = <&pll3>, <&pll7>, <&pll5 1>;
+ clock-output-names = "de-be";
+ };
+
+ de_fe_clk: clk@01c2010c {
+ #clock-cells = <0>;
+ #reset-cells = <0>;
+ compatible = "allwinner,sun4i-a10-display-clk";
+ reg = <0x01c2010c 0x4>;
+ clocks = <&pll3>, <&pll7>, <&pll5 1>;
+ clock-output-names = "de-fe";
+ };
+
+ tcon_ch0_clk: clk@01c20118 {
+ #clock-cells = <0>;
+ #reset-cells = <1>;
+ compatible = "allwinner,sun4i-a10-tcon-ch0-clk";
+ reg = <0x01c20118 0x4>;
+ clocks = <&pll3>, <&pll7>, <&pll3x2>, <&pll7x2>;
+ clock-output-names = "tcon-ch0-sclk";
+ };
+
+ tcon_ch1_clk: clk@01c2012c {
+ #clock-cells = <0>;
+ compatible = "allwinner,sun4i-a10-tcon-ch1-clk";
+ reg = <0x01c2012c 0x4>;
+ clocks = <&pll3>, <&pll7>, <&pll3x2>, <&pll7x2>;
+ clock-output-names = "tcon-ch1-sclk";
+ };
};
soc@01c00000 {
diff --git a/arch/arm/boot/dts/sun5i-r8-chip.dts b/arch/arm/boot/dts/sun5i-r8-chip.dts
index f6898c6b84d4..a8d8b4582397 100644
--- a/arch/arm/boot/dts/sun5i-r8-chip.dts
+++ b/arch/arm/boot/dts/sun5i-r8-chip.dts
@@ -66,6 +66,10 @@
};
};
+&be0 {
+ status = "okay";
+};
+
&codec {
status = "okay";
};
@@ -188,6 +192,14 @@
status = "okay";
};
+&tcon0 {
+ status = "okay";
+};
+
+&tve0 {
+ status = "okay";
+};
+
&uart1 {
pinctrl-names = "default";
pinctrl-0 = <&uart1_pins_b>;
diff --git a/arch/arm/boot/dts/sun5i-r8.dtsi b/arch/arm/boot/dts/sun5i-r8.dtsi
index 0ef865601ac9..c04cf690b858 100644
--- a/arch/arm/boot/dts/sun5i-r8.dtsi
+++ b/arch/arm/boot/dts/sun5i-r8.dtsi
@@ -51,9 +51,147 @@
compatible = "allwinner,simple-framebuffer",
"simple-framebuffer";
allwinner,pipeline = "de_be0-lcd0-tve0";
- clocks = <&pll5 1>, <&ahb_gates 34>, <&ahb_gates 36>,
- <&ahb_gates 44>;
+ clocks = <&ahb_gates 34>, <&ahb_gates 36>,
+ <&ahb_gates 44>, <&de_be_clk>,
+ <&tcon_ch1_clk>, <&dram_gates 26>;
status = "disabled";
};
};
+
+ soc@01c00000 {
+ tve0: tv-encoder@01c0a000 {
+ compatible = "allwinner,sun4i-a10-tv-encoder";
+ reg = <0x01c0a000 0x1000>;
+ clocks = <&ahb_gates 34>;
+ resets = <&tcon_ch0_clk 0>;
+ status = "disabled";
+
+ port {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ tve0_in_tcon0: endpoint@0 {
+ reg = <0>;
+ remote-endpoint = <&tcon0_out_tve0>;
+ };
+ };
+ };
+
+ tcon0: lcd-controller@01c0c000 {
+ compatible = "allwinner,sun5i-a13-tcon";
+ reg = <0x01c0c000 0x1000>;
+ interrupts = <44>;
+ resets = <&tcon_ch0_clk 1>;
+ reset-names = "lcd";
+ clocks = <&ahb_gates 36>,
+ <&tcon_ch0_clk>,
+ <&tcon_ch1_clk>;
+ clock-names = "ahb",
+ "tcon-ch0",
+ "tcon-ch1";
+ clock-output-names = "tcon-pixel-clock";
+ status = "disabled";
+
+ ports {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ tcon0_in: port@0 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0>;
+
+ tcon0_in_be0: endpoint@0 {
+ reg = <0>;
+ remote-endpoint = <&be0_out_tcon0>;
+ };
+ };
+
+ tcon0_out: port@1 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <1>;
+
+ tcon0_out_tve0: endpoint@1 {
+ reg = <1>;
+ remote-endpoint = <&tve0_in_tcon0>;
+ };
+ };
+ };
+ };
+
+ fe0: display-frontend@01e00000 {
+ compatible = "allwinner,sun5i-a13-display-frontend";
+ reg = <0x01e00000 0x20000>;
+ interrupts = <47>;
+ clocks = <&ahb_gates 46>, <&de_fe_clk>,
+ <&dram_gates 25>;
+ clock-names = "ahb", "mod",
+ "ram";
+ resets = <&de_fe_clk>;
+ status = "disabled";
+
+ ports {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ fe0_out: port@1 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <1>;
+
+ fe0_out_be0: endpoint@0 {
+ reg = <0>;
+ remote-endpoint = <&be0_in_fe0>;
+ };
+ };
+ };
+ };
+
+ be0: display-backend@01e60000 {
+ compatible = "allwinner,sun5i-a13-display-backend";
+ reg = <0x01e60000 0x10000>;
+ clocks = <&ahb_gates 44>, <&de_be_clk>,
+ <&dram_gates 26>;
+ clock-names = "ahb", "mod",
+ "ram";
+ resets = <&de_be_clk>;
+ status = "disabled";
+
+ assigned-clocks = <&de_be_clk>;
+ assigned-clock-rates = <300000000>;
+
+ ports {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ be0_in: port@0 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0>;
+
+ be0_in_fe0: endpoint@0 {
+ reg = <0>;
+ remote-endpoint = <&fe0_out_be0>;
+ };
+ };
+
+ be0_out: port@1 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <1>;
+
+ be0_out_tcon0: endpoint@0 {
+ reg = <0>;
+ remote-endpoint = <&tcon0_in_be0>;
+ };
+ };
+ };
+ };
+ };
+
+ display-engine {
+ compatible = "allwinner,sun5i-a13-display-engine";
+ allwinner,pipelines = <&fe0>;
+ };
};
diff --git a/arch/arm/boot/dts/sun5i.dtsi b/arch/arm/boot/dts/sun5i.dtsi
index 59a9426e3bd4..0840612b5ed6 100644
--- a/arch/arm/boot/dts/sun5i.dtsi
+++ b/arch/arm/boot/dts/sun5i.dtsi
@@ -88,6 +88,15 @@
clock-output-names = "osc24M";
};
+ osc3M: osc3M_clk {
+ compatible = "fixed-factor-clock";
+ #clock-cells = <0>;
+ clock-div = <8>;
+ clock-mult = <1>;
+ clocks = <&osc24M>;
+ clock-output-names = "osc3M";
+ };
+
osc32k: clk@0 {
#clock-cells = <0>;
compatible = "fixed-clock";
@@ -112,6 +121,23 @@
"pll2-4x", "pll2-8x";
};
+ pll3: clk@01c20010 {
+ #clock-cells = <0>;
+ compatible = "allwinner,sun4i-a10-pll3-clk";
+ reg = <0x01c20010 0x4>;
+ clocks = <&osc3M>;
+ clock-output-names = "pll3";
+ };
+
+ pll3x2: pll3x2_clk {
+ compatible = "fixed-factor-clock";
+ #clock-cells = <0>;
+ clock-div = <1>;
+ clock-mult = <2>;
+ clocks = <&pll3>;
+ clock-output-names = "pll3-2x";
+ };
+
pll4: clk@01c20018 {
#clock-cells = <0>;
compatible = "allwinner,sun4i-a10-pll1-clk";
@@ -136,6 +162,23 @@
clock-output-names = "pll6_sata", "pll6_other", "pll6";
};
+ pll7: clk@01c20030 {
+ #clock-cells = <0>;
+ compatible = "allwinner,sun4i-a10-pll3-clk";
+ reg = <0x01c20030 0x4>;
+ clocks = <&osc3M>;
+ clock-output-names = "pll7";
+ };
+
+ pll7x2: pll7x2_clk {
+ compatible = "fixed-factor-clock";
+ #clock-cells = <0>;
+ clock-div = <1>;
+ clock-mult = <2>;
+ clocks = <&pll7>;
+ clock-output-names = "pll7-2x";
+ };
+
/* dummy is 200M */
cpu: cpu@01c20054 {
#clock-cells = <0>;
diff --git a/arch/arm/boot/dts/sun6i-a31s-colorfly-e708-q1.dts b/arch/arm/boot/dts/sun6i-a31s-colorfly-e708-q1.dts
new file mode 100644
index 000000000000..e182eec6d878
--- /dev/null
+++ b/arch/arm/boot/dts/sun6i-a31s-colorfly-e708-q1.dts
@@ -0,0 +1,208 @@
+/*
+ * Copyright 2016 Hans de Goede <hdegoede@redhat.com>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "sun6i-a31s.dtsi"
+#include "sunxi-common-regulators.dtsi"
+
+#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/input/input.h>
+#include <dt-bindings/pinctrl/sun4i-a10.h>
+
+/ {
+ model = "Colorfly E708 Q1 tablet";
+ compatible = "colorfly,e708-q1", "allwinner,sun6i-a31s";
+
+ aliases {
+ serial0 = &uart0;
+ };
+
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+};
+
+&cpu0 {
+ cpu-supply = <&reg_dcdc3>;
+};
+
+&ehci0 {
+ /* rtl8188etv wifi is connected here */
+ status = "okay";
+};
+
+&lradc {
+ vref-supply = <&reg_aldo3>;
+ status = "okay";
+
+ button@1000 {
+ label = "Home";
+ linux,code = <KEY_HOMEPAGE>;
+ channel = <0>;
+ voltage = <1000000>;
+ };
+};
+
+&mmc0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc0_pins_a>, <&mmc0_cd_pin_e708_q1>;
+ vmmc-supply = <&reg_dcdc1>;
+ bus-width = <4>;
+ cd-gpios = <&pio 0 8 GPIO_ACTIVE_HIGH>; /* PA8 */
+ cd-inverted;
+ status = "okay";
+};
+
+&pio {
+ mma8452_int_e708_q1: mma8452_int_pin@0 {
+ allwinner,pins = "PA9";
+ allwinner,function = "gpio_in";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_PULL_UP>;
+ };
+
+ mmc0_cd_pin_e708_q1: mmc0_cd_pin@0 {
+ allwinner,pins = "PA8";
+ allwinner,function = "gpio_in";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_PULL_UP>;
+ };
+};
+
+&p2wi {
+ status = "okay";
+
+ axp22x: pmic@68 {
+ compatible = "x-powers,axp221";
+ reg = <0x68>;
+ interrupt-parent = <&nmi_intc>;
+ interrupts = <0 IRQ_TYPE_LEVEL_LOW>;
+ };
+};
+
+#include "axp22x.dtsi"
+
+&reg_aldo3 {
+ regulator-always-on;
+ regulator-min-microvolt = <2700000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-name = "avcc";
+};
+
+&reg_dc1sw {
+ regulator-name = "vcc-lcd";
+};
+
+&reg_dc5ldo {
+ regulator-always-on;
+ regulator-min-microvolt = <700000>;
+ regulator-max-microvolt = <1320000>;
+ regulator-name = "vdd-cpus"; /* This is an educated guess */
+};
+
+&reg_dcdc1 {
+ regulator-always-on;
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3000000>;
+ regulator-name = "vcc-3v0";
+};
+
+&reg_dcdc2 {
+ regulator-min-microvolt = <700000>;
+ regulator-max-microvolt = <1320000>;
+ regulator-name = "vdd-gpu";
+};
+
+&reg_dcdc3 {
+ regulator-always-on;
+ regulator-min-microvolt = <700000>;
+ regulator-max-microvolt = <1320000>;
+ regulator-name = "vdd-cpu";
+};
+
+&reg_dcdc4 {
+ regulator-always-on;
+ regulator-min-microvolt = <700000>;
+ regulator-max-microvolt = <1320000>;
+ regulator-name = "vdd-sys-dll";
+};
+
+&reg_dcdc5 {
+ regulator-always-on;
+ regulator-min-microvolt = <1500000>;
+ regulator-max-microvolt = <1500000>;
+ regulator-name = "vcc-dram";
+};
+
+&reg_dldo1 {
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-name = "vcc-wifi";
+};
+
+&reg_dldo2 {
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-name = "vcc-pg";
+};
+
+&simplefb_lcd {
+ vcc-lcd-supply = <&reg_dc1sw>;
+ vcc-pg-supply = <&reg_dldo2>;
+};
+
+/*
+ * FIXME for now we only support host mode and rely on u-boot to have
+ * turned on Vbus which is controlled by the axp221 pmic on the board.
+ *
+ * Once we have axp221 power-supply and vbus-usb support we should switch
+ * to fully supporting otg.
+ */
+&usb_otg {
+ dr_mode = "host";
+ status = "okay";
+};
+
+&usbphy {
+ usb1_vbus-supply = <&reg_dldo1>;
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/sun7i-a20-cubietruck.dts b/arch/arm/boot/dts/sun7i-a20-cubietruck.dts
index 8da939ab8350..83f39b0362cb 100644
--- a/arch/arm/boot/dts/sun7i-a20-cubietruck.dts
+++ b/arch/arm/boot/dts/sun7i-a20-cubietruck.dts
@@ -94,6 +94,24 @@
pinctrl-0 = <&mmc3_pwrseq_pin_cubietruck>;
reset-gpios = <&pio 7 9 GPIO_ACTIVE_LOW>; /* PH9 WIFI_EN */
};
+
+ sound {
+ compatible = "simple-audio-card";
+ simple-audio-card,name = "On-board SPDIF";
+
+ simple-audio-card,cpu {
+ sound-dai = <&spdif>;
+ };
+
+ simple-audio-card,codec {
+ sound-dai = <&spdif_out>;
+ };
+ };
+
+ spdif_out: spdif-out {
+ #sound-dai-cells = <0>;
+ compatible = "linux,spdif-dit";
+ };
};
&ahci {
@@ -301,6 +319,12 @@
status = "okay";
};
+&spdif {
+ pinctrl-names = "default";
+ pinctrl-0 = <&spdif_tx_pins_a>;
+ status = "okay";
+};
+
&uart0 {
pinctrl-names = "default";
pinctrl-0 = <&uart0_pins_a>;
diff --git a/arch/arm/boot/dts/sun7i-a20-itead-ibox.dts b/arch/arm/boot/dts/sun7i-a20-itead-ibox.dts
index 661c21d9bdbd..10d48cbf81ff 100644
--- a/arch/arm/boot/dts/sun7i-a20-itead-ibox.dts
+++ b/arch/arm/boot/dts/sun7i-a20-itead-ibox.dts
@@ -65,6 +65,24 @@
default-state = "on";
};
};
+
+ sound {
+ compatible = "simple-audio-card";
+ simple-audio-card,name = "On-board SPDIF";
+
+ simple-audio-card,cpu {
+ sound-dai = <&spdif>;
+ };
+
+ simple-audio-card,codec {
+ sound-dai = <&spdif_out>;
+ };
+ };
+
+ spdif_out: spdif-out {
+ #sound-dai-cells = <0>;
+ compatible = "linux,spdif-dit";
+ };
};
&ahci {
@@ -123,3 +141,9 @@
&reg_ahci_5v {
status = "okay";
};
+
+&spdif {
+ pinctrl-names = "default";
+ pinctrl-0 = <&spdif_tx_pins_a>;
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/sun7i-a20-olinuxino-lime2-emmc.dts b/arch/arm/boot/dts/sun7i-a20-olinuxino-lime2-emmc.dts
new file mode 100644
index 000000000000..5ea4915f6d75
--- /dev/null
+++ b/arch/arm/boot/dts/sun7i-a20-olinuxino-lime2-emmc.dts
@@ -0,0 +1,82 @@
+ /*
+ * Copyright 2015 - Ultimaker B.V.
+ * Author Olliver Schinagl <oliver@schinagl.nl>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "sun7i-a20-olinuxino-lime2.dts"
+
+/ {
+ model = "Olimex A20-OLinuXino-LIME2-eMMC";
+ compatible = "olimex,a20-olinuxino-lime2-emmc", "allwinner,sun7i-a20";
+
+ mmc2_pwrseq: pwrseq {
+ pinctrl-0 = <&mmc2_pins_nrst>;
+ pinctrl-names = "default";
+ compatible = "mmc-pwrseq-emmc";
+ reset-gpios = <&pio 2 16 GPIO_ACTIVE_LOW>;
+ };
+};
+
+&pio {
+ mmc2_pins_nrst: mmc2@0 {
+ allwinner,pins = "PC16";
+ allwinner,function = "gpio_out";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
+ };
+};
+
+&mmc2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc2_pins_a>;
+ vmmc-supply = <&reg_vcc3v3>;
+ vqmmc-supply = <&reg_vcc3v3>;
+ bus-width = <4>;
+ non-removable;
+ mmc-pwrseq = <&mmc2_pwrseq>;
+ status = "okay";
+
+ emmc: emmc@0 {
+ reg = <0>;
+ compatible = "mmc-card";
+ broken-hpi;
+ };
+};
diff --git a/arch/arm/boot/dts/sun7i-a20.dtsi b/arch/arm/boot/dts/sun7i-a20.dtsi
index 0940a788f824..febdf4c72fb0 100644
--- a/arch/arm/boot/dts/sun7i-a20.dtsi
+++ b/arch/arm/boot/dts/sun7i-a20.dtsi
@@ -85,8 +85,9 @@
compatible = "allwinner,simple-framebuffer",
"simple-framebuffer";
allwinner,pipeline = "de_be0-lcd0-tve0";
- clocks = <&pll5 1>, <&ahb_gates 34>, <&ahb_gates 36>,
- <&ahb_gates 44>, <&dram_gates 26>;
+ clocks = <&pll5 1>,
+ <&ahb_gates 34>, <&ahb_gates 36>, <&ahb_gates 44>,
+ <&dram_gates 5>, <&dram_gates 26>;
status = "disabled";
};
};
@@ -186,6 +187,15 @@
clock-output-names = "osc24M";
};
+ osc3M: osc3M_clk {
+ #clock-cells = <0>;
+ compatible = "fixed-factor-clock";
+ clock-div = <8>;
+ clock-mult = <1>;
+ clocks = <&osc24M>;
+ clock-output-names = "osc3M";
+ };
+
osc32k: clk@0 {
#clock-cells = <0>;
compatible = "fixed-clock";
@@ -210,6 +220,22 @@
"pll2-4x", "pll2-8x";
};
+ pll3: clk@01c20010 {
+ #clock-cells = <0>;
+ compatible = "allwinner,sun4i-a10-pll3-clk";
+ reg = <0x01c20010 0x4>;
+ clocks = <&osc3M>;
+ clock-output-names = "pll3";
+ };
+
+ pll3x2: pll3x2_clk {
+ #clock-cells = <0>;
+ compatible = "fixed-factor-clock";
+ clock-div = <1>;
+ clock-mult = <2>;
+ clocks = <&pll3>;
+ clock-output-names = "pll3-2x";
+ };
+
pll4: clk@01c20018 {
#clock-cells = <0>;
compatible = "allwinner,sun7i-a20-pll4-clk";
@@ -235,6 +261,22 @@
"pll6_div_4";
};
+ pll7: clk@01c20030 {
+ #clock-cells = <0>;
+ compatible = "allwinner,sun4i-a10-pll3-clk";
+ reg = <0x01c20030 0x4>;
+ clocks = <&osc3M>;
+ clock-output-names = "pll7";
+ };
+
+ pll7x2: pll7x2_clk {
+ #clock-cells = <0>;
+ compatible = "fixed-factor-clock";
+ clock-div = <1>;
+ clock-mult = <2>;
+ clocks = <&pll7>;
+ clock-output-names = "pll7-2x";
+ };
+
pll8: clk@01c20040 {
#clock-cells = <0>;
compatible = "allwinner,sun7i-a20-pll4-clk";
@@ -476,6 +518,17 @@
clock-output-names = "ir1";
};
+ spdif_clk: clk@01c200c0 {
+ #clock-cells = <0>;
+ compatible = "allwinner,sun4i-a10-mod1-clk";
+ reg = <0x01c200c0 0x4>;
+ clocks = <&pll2 SUN4I_A10_PLL2_8X>,
+ <&pll2 SUN4I_A10_PLL2_4X>,
+ <&pll2 SUN4I_A10_PLL2_2X>,
+ <&pll2 SUN4I_A10_PLL2_1X>;
+ clock-output-names = "spdif";
+ };
+
keypad_clk: clk@01c200c4 {
#clock-cells = <0>;
compatible = "allwinner,sun4i-a10-mod0-clk";
@@ -1193,6 +1246,13 @@
allwinner,drive = <SUN4I_PINCTRL_10_MA>;
allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
};
+
+ spdif_tx_pins_a: spdif@0 {
+ allwinner,pins = "PB13";
+ allwinner,function = "spdif";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_PULL_UP>;
+ };
};
timer@01c20c00 {
@@ -1226,6 +1286,19 @@
status = "disabled";
};
+ spdif: spdif@01c21000 {
+ #sound-dai-cells = <0>;
+ compatible = "allwinner,sun4i-a10-spdif";
+ reg = <0x01c21000 0x400>;
+ interrupts = <GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&apb0_gates 1>, <&spdif_clk>;
+ clock-names = "apb", "spdif";
+ dmas = <&dma SUN4I_DMA_NORMAL 2>,
+ <&dma SUN4I_DMA_NORMAL 2>;
+ dma-names = "rx", "tx";
+ status = "disabled";
+ };
+
ir0: ir@01c21800 {
compatible = "allwinner,sun4i-a10-ir";
clocks = <&apb0_gates 6>, <&ir0_clk>;
diff --git a/arch/arm/boot/dts/sun8i-a23-gt90h-v4.dts b/arch/arm/boot/dts/sun8i-a23-gt90h-v4.dts
index 1aeb06c649b9..b2ce284a65a2 100644
--- a/arch/arm/boot/dts/sun8i-a23-gt90h-v4.dts
+++ b/arch/arm/boot/dts/sun8i-a23-gt90h-v4.dts
@@ -47,15 +47,26 @@
#include <dt-bindings/gpio/gpio.h>
#include <dt-bindings/input/input.h>
#include <dt-bindings/pinctrl/sun4i-a10.h>
+#include <dt-bindings/pwm/pwm.h>
/ {
- model = "Allwinner GT90H Quad Core Tablet (v4)";
- compatible = "allwinner,gt90h-v4", "allwinner,sun8i-a33";
+ model = "Allwinner GT90H Dual Core Tablet (v4)";
+ compatible = "allwinner,gt90h-v4", "allwinner,sun8i-a23";
aliases {
serial0 = &r_uart;
};
+ backlight: backlight {
+ compatible = "pwm-backlight";
+ pinctrl-names = "default";
+ pinctrl-0 = <&bl_en_pin_gt90h>;
+ pwms = <&pwm 0 50000 PWM_POLARITY_INVERTED>;
+ brightness-levels = <0 10 20 30 40 50 60 70 80 90 100>;
+ default-brightness-level = <8>;
+ enable-gpios = <&pio 7 6 GPIO_ACTIVE_HIGH>; /* PH6 */
+ };
+
chosen {
stdout-path = "serial0:115200n8";
};
@@ -106,8 +117,7 @@
&mmc0 {
pinctrl-names = "default";
pinctrl-0 = <&mmc0_pins_a>, <&mmc0_cd_pin_gt90h>;
- /* FIXME this really is aldo1, correct once we've pmic support */
- vmmc-supply = <&reg_vcc3v0>;
+ vmmc-supply = <&reg_aldo1>;
bus-width = <4>;
cd-gpios = <&pio 1 4 GPIO_ACTIVE_HIGH>; /* PB4 */
cd-inverted;
@@ -115,6 +125,13 @@
};
&pio {
+ bl_en_pin_gt90h: bl_en_pin@0 {
+ allwinner,pins = "PH6";
+ allwinner,function = "gpio_in";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
+ };
+
mmc0_cd_pin_gt90h: mmc0_cd_pin@0 {
allwinner,pins = "PB4";
allwinner,function = "gpio_in";
@@ -123,12 +140,106 @@
};
};
+&pwm {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pwm0_pins>;
+ status = "okay";
+};
+
+&r_rsb {
+ status = "okay";
+
+ axp22x: pmic@3a3 {
+ compatible = "x-powers,axp223";
+ reg = <0x3a3>;
+ interrupt-parent = <&nmi_intc>;
+ interrupts = <0 IRQ_TYPE_LEVEL_LOW>;
+ eldoin-supply = <&reg_dcdc1>;
+ };
+};
+
&r_uart {
pinctrl-names = "default";
pinctrl-0 = <&r_uart_pins_a>;
status = "okay";
};
+#include "axp22x.dtsi"
+
+&reg_aldo1 {
+ regulator-always-on;
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3000000>;
+ regulator-name = "vcc-io";
+};
+
+&reg_aldo2 {
+ regulator-always-on;
+ regulator-min-microvolt = <2350000>;
+ regulator-max-microvolt = <2650000>;
+ regulator-name = "vdd-dll";
+};
+
+&reg_aldo3 {
+ regulator-always-on;
+ regulator-min-microvolt = <2700000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-name = "vcc-pll-avcc";
+};
+
+&reg_dc1sw {
+ regulator-name = "vcc-lcd";
+};
+
+&reg_dc5ldo {
+ regulator-always-on;
+ regulator-min-microvolt = <900000>;
+ regulator-max-microvolt = <1400000>;
+ regulator-name = "vdd-cpus";
+};
+
+&reg_dcdc1 {
+ regulator-always-on;
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3000000>;
+ regulator-name = "vcc-3v0";
+};
+
+&reg_dcdc2 {
+ regulator-always-on;
+ regulator-min-microvolt = <900000>;
+ regulator-max-microvolt = <1400000>;
+ regulator-name = "vdd-sys";
+};
+
+&reg_dcdc3 {
+ regulator-always-on;
+ regulator-min-microvolt = <900000>;
+ regulator-max-microvolt = <1400000>;
+ regulator-name = "vdd-cpu";
+};
+
+&reg_dcdc5 {
+ regulator-always-on;
+ regulator-min-microvolt = <1500000>;
+ regulator-max-microvolt = <1500000>;
+ regulator-name = "vcc-dram";
+};
+
+&reg_dldo1 {
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-name = "vcc-wifi";
+};
+
+&reg_rtc_ldo {
+ regulator-name = "vcc-rtc";
+};
+
+&simplefb_lcd {
+ vcc-lcd-supply = <&reg_dc1sw>;
+};
+
/*
* FIXME for now we only support host mode and rely on u-boot to have
* turned on Vbus which is controlled by the axp223 pmic on the board.
@@ -141,5 +252,6 @@
};
&usbphy {
+ usb1_vbus-supply = <&reg_dldo1>;
status = "okay";
};
diff --git a/arch/arm/boot/dts/sun8i-a23-polaroid-mid2809pxe04.dts b/arch/arm/boot/dts/sun8i-a23-polaroid-mid2809pxe04.dts
new file mode 100644
index 000000000000..cb5daafcb7c2
--- /dev/null
+++ b/arch/arm/boot/dts/sun8i-a23-polaroid-mid2809pxe04.dts
@@ -0,0 +1,243 @@
+/*
+ * Copyright 2016 Hans de Goede <hdegoede@redhat.com>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "sun8i-a23.dtsi"
+#include "sunxi-common-regulators.dtsi"
+
+#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/input/input.h>
+#include <dt-bindings/pinctrl/sun4i-a10.h>
+#include <dt-bindings/pwm/pwm.h>
+
+/ {
+ model = "Polaroid MID2809PXE04 tablet";
+ compatible = "polaroid,mid2809pxe04", "allwinner,sun8i-a23";
+
+ aliases {
+ serial0 = &r_uart;
+ };
+
+ backlight: backlight {
+ compatible = "pwm-backlight";
+ pinctrl-names = "default";
+ pinctrl-0 = <&bl_en_pin_mid2809>;
+ pwms = <&pwm 0 50000 PWM_POLARITY_INVERTED>;
+ brightness-levels = <0 10 20 30 40 50 60 70 80 90 100>;
+ default-brightness-level = <8>;
+ enable-gpios = <&pio 7 6 GPIO_ACTIVE_HIGH>; /* PH6 */
+ };
+
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+};
+
+&ehci0 {
+ status = "okay";
+};
+
+&i2c0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c0_pins_a>;
+ status = "okay";
+};
+
+&i2c1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c1_pins_a>;
+ status = "okay";
+};
+
+&lradc {
+ vref-supply = <&reg_vcc3v0>;
+ status = "okay";
+
+ button@200 {
+ label = "Volume Up";
+ linux,code = <KEY_VOLUMEUP>;
+ channel = <0>;
+ voltage = <200000>;
+ };
+
+ button@400 {
+ label = "Volume Down";
+ linux,code = <KEY_VOLUMEDOWN>;
+ channel = <0>;
+ voltage = <400000>;
+ };
+};
+
+&mmc0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc0_pins_a>, <&mmc0_cd_pin_mid2809>;
+ vmmc-supply = <&reg_dcdc1>;
+ bus-width = <4>;
+ cd-gpios = <&pio 1 4 GPIO_ACTIVE_HIGH>; /* PB4 */
+ cd-inverted;
+ status = "okay";
+};
+
+&pio {
+ bl_en_pin_mid2809: bl_en_pin@0 {
+ allwinner,pins = "PH6";
+ allwinner,function = "gpio_in";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
+ };
+
+ mmc0_cd_pin_mid2809: mmc0_cd_pin@0 {
+ allwinner,pins = "PB4";
+ allwinner,function = "gpio_in";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_PULL_UP>;
+ };
+};
+
+&pwm {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pwm0_pins>;
+ status = "okay";
+};
+
+&r_rsb {
+ status = "okay";
+
+ axp22x: pmic@3a3 {
+ compatible = "x-powers,axp223";
+ reg = <0x3a3>;
+ interrupt-parent = <&nmi_intc>;
+ interrupts = <0 IRQ_TYPE_LEVEL_LOW>;
+ eldoin-supply = <&reg_dcdc1>;
+ };
+};
+
+&r_uart {
+ pinctrl-names = "default";
+ pinctrl-0 = <&r_uart_pins_a>;
+ status = "okay";
+};
+
+#include "axp22x.dtsi"
+
+&reg_aldo1 {
+ regulator-always-on;
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3000000>;
+ regulator-name = "vcc-io";
+};
+
+&reg_aldo2 {
+ regulator-always-on;
+ regulator-min-microvolt = <2350000>;
+ regulator-max-microvolt = <2650000>;
+ regulator-name = "vdd-dll";
+};
+
+&reg_aldo3 {
+ regulator-always-on;
+ regulator-min-microvolt = <2700000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-name = "vcc-pll-avcc";
+};
+
+&reg_dc1sw {
+ regulator-name = "vcc-lcd";
+};
+
+&reg_dc5ldo {
+ regulator-always-on;
+ regulator-min-microvolt = <900000>;
+ regulator-max-microvolt = <1400000>;
+ regulator-name = "vdd-cpus";
+};
+
+&reg_dcdc1 {
+ regulator-always-on;
+ regulator-min-microvolt = <3000000>;
+ regulator-max-microvolt = <3000000>;
+ regulator-name = "vcc-3v0";
+};
+
+&reg_dcdc2 {
+ regulator-always-on;
+ regulator-min-microvolt = <900000>;
+ regulator-max-microvolt = <1400000>;
+ regulator-name = "vdd-sys";
+};
+
+&reg_dcdc3 {
+ regulator-always-on;
+ regulator-min-microvolt = <900000>;
+ regulator-max-microvolt = <1400000>;
+ regulator-name = "vdd-cpu";
+};
+
+&reg_dcdc5 {
+ regulator-always-on;
+ regulator-min-microvolt = <1500000>;
+ regulator-max-microvolt = <1500000>;
+ regulator-name = "vcc-dram";
+};
+
+&reg_rtc_ldo {
+ regulator-name = "vcc-rtc";
+};
+
+&simplefb_lcd {
+ vcc-lcd-supply = <&reg_dc1sw>;
+};
+
+/*
+ * FIXME for now we only support host mode and rely on u-boot to have
+ * turned on Vbus which is controlled by the axp223 pmic on the board.
+ *
+ * Once we have axp223 support we should switch to fully supporting otg.
+ */
+&usb_otg {
+ dr_mode = "host";
+ status = "okay";
+};
+
+&usbphy {
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/sun8i-h3-orangepi-2.dts b/arch/arm/boot/dts/sun8i-h3-orangepi-2.dts
new file mode 100644
index 000000000000..f93f5d1695c4
--- /dev/null
+++ b/arch/arm/boot/dts/sun8i-h3-orangepi-2.dts
@@ -0,0 +1,186 @@
+/*
+ * Copyright (C) 2016 Hans de Goede <hdegoede@redhat.com>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "sun8i-h3.dtsi"
+#include "sunxi-common-regulators.dtsi"
+
+#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/input/input.h>
+#include <dt-bindings/pinctrl/sun4i-a10.h>
+
+/ {
+ model = "Xunlong Orange Pi 2";
+ compatible = "xunlong,orangepi-2", "allwinner,sun8i-h3";
+
+ aliases {
+ serial0 = &uart0;
+ };
+
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
+ leds {
+ compatible = "gpio-leds";
+ pinctrl-names = "default";
+ pinctrl-0 = <&leds_opc>, <&leds_r_opc>;
+
+ status_led {
+ label = "orangepi:red:status";
+ gpios = <&pio 0 15 GPIO_ACTIVE_HIGH>;
+ };
+
+ pwr_led {
+ label = "orangepi:green:pwr";
+ gpios = <&r_pio 0 10 GPIO_ACTIVE_HIGH>;
+ default-state = "on";
+ };
+ };
+
+ r_gpio_keys {
+ compatible = "gpio-keys";
+ pinctrl-names = "default";
+ pinctrl-0 = <&sw_r_opc>;
+
+ sw2 {
+ label = "sw2";
+ linux,code = <BTN_1>;
+ gpios = <&r_pio 0 4 GPIO_ACTIVE_LOW>;
+ };
+
+ sw4 {
+ label = "sw4";
+ linux,code = <BTN_0>;
+ gpios = <&r_pio 0 3 GPIO_ACTIVE_LOW>;
+ };
+ };
+
+ wifi_pwrseq: wifi_pwrseq {
+ compatible = "mmc-pwrseq-simple";
+ pinctrl-names = "default";
+ pinctrl-0 = <&wifi_pwrseq_pin_orangepi>;
+ reset-gpios = <&r_pio 0 7 GPIO_ACTIVE_LOW>; /* PL7 WIFI_EN */
+ };
+};
+
+&ehci1 {
+ status = "okay";
+};
+
+&ir {
+ pinctrl-names = "default";
+ pinctrl-0 = <&ir_pins_a>;
+ status = "okay";
+};
+
+&mmc0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc0_pins_a>, <&mmc0_cd_pin>;
+ vmmc-supply = <&reg_vcc3v3>;
+ bus-width = <4>;
+ cd-gpios = <&pio 5 6 GPIO_ACTIVE_HIGH>; /* PF6 */
+ cd-inverted;
+ status = "okay";
+};
+
+&mmc1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc1_pins_a>;
+ vmmc-supply = <&reg_vcc3v3>;
+ mmc-pwrseq = <&wifi_pwrseq>;
+ bus-width = <4>;
+ non-removable;
+ status = "okay";
+};
+
+&pio {
+ leds_opc: led_pins@0 {
+ allwinner,pins = "PA15";
+ allwinner,function = "gpio_out";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
+ };
+};
+
+&r_pio {
+ leds_r_opc: led_pins@0 {
+ allwinner,pins = "PL10";
+ allwinner,function = "gpio_out";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
+ };
+
+ sw_r_opc: key_pins@0 {
+ allwinner,pins = "PL3", "PL4";
+ allwinner,function = "gpio_in";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
+ };
+
+ wifi_pwrseq_pin_orangepi: wifi_pwrseq_pin@0 {
+ allwinner,pins = "PL7";
+ allwinner,function = "gpio_out";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
+ };
+};
+
+&reg_usb1_vbus {
+ gpio = <&pio 6 13 GPIO_ACTIVE_HIGH>;
+ status = "okay";
+};
+
+&uart0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart0_pins_a>;
+ status = "okay";
+};
+
+&usb1_vbus_pin_a {
+ allwinner,pins = "PG13";
+};
+
+&usbphy {
+ usb1_vbus-supply = <&reg_usb1_vbus>;
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/sun8i-h3-orangepi-one.dts b/arch/arm/boot/dts/sun8i-h3-orangepi-one.dts
new file mode 100644
index 000000000000..0adf932fd923
--- /dev/null
+++ b/arch/arm/boot/dts/sun8i-h3-orangepi-one.dts
@@ -0,0 +1,145 @@
+/*
+ * Copyright (C) 2016 Hans de Goede <hdegoede@redhat.com>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "sun8i-h3.dtsi"
+#include "sunxi-common-regulators.dtsi"
+
+#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/input/input.h>
+#include <dt-bindings/pinctrl/sun4i-a10.h>
+
+/ {
+ model = "Xunlong Orange Pi One";
+ compatible = "xunlong,orangepi-one", "allwinner,sun8i-h3";
+
+ aliases {
+ serial0 = &uart0;
+ };
+
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
+ leds {
+ compatible = "gpio-leds";
+ pinctrl-names = "default";
+ pinctrl-0 = <&leds_opc>, <&leds_r_opc>;
+
+ pwr_led {
+ label = "orangepi:green:pwr";
+ gpios = <&r_pio 0 10 GPIO_ACTIVE_HIGH>;
+ default-state = "on";
+ };
+
+ status_led {
+ label = "orangepi:red:status";
+ gpios = <&pio 0 15 GPIO_ACTIVE_HIGH>;
+ };
+ };
+
+ r_gpio_keys {
+ compatible = "gpio-keys";
+ pinctrl-names = "default";
+ pinctrl-0 = <&sw_r_opc>;
+
+ sw4 {
+ label = "sw4";
+ linux,code = <BTN_0>;
+ gpios = <&r_pio 0 3 GPIO_ACTIVE_LOW>;
+ };
+ };
+};
+
+&ehci1 {
+ status = "okay";
+};
+
+&mmc0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc0_pins_a>, <&mmc0_cd_pin>;
+ vmmc-supply = <&reg_vcc3v3>;
+ bus-width = <4>;
+ cd-gpios = <&pio 5 6 GPIO_ACTIVE_HIGH>; /* PF6 */
+ cd-inverted;
+ status = "okay";
+};
+
+&ohci1 {
+ status = "okay";
+};
+
+&pio {
+ leds_opc: led_pins@0 {
+ allwinner,pins = "PA15";
+ allwinner,function = "gpio_out";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
+ };
+};
+
+&r_pio {
+ leds_r_opc: led_pins@0 {
+ allwinner,pins = "PL10";
+ allwinner,function = "gpio_out";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
+ };
+
+ sw_r_opc: key_pins@0 {
+ allwinner,pins = "PL3";
+ allwinner,function = "gpio_in";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
+ };
+};
+
+&uart0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart0_pins_a>;
+ status = "okay";
+};
+
+&usbphy {
+ /* USB VBUS is always on */
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/sun8i-h3-orangepi-pc.dts b/arch/arm/boot/dts/sun8i-h3-orangepi-pc.dts
new file mode 100644
index 000000000000..daf50b9a6657
--- /dev/null
+++ b/arch/arm/boot/dts/sun8i-h3-orangepi-pc.dts
@@ -0,0 +1,167 @@
+/*
+ * Copyright (C) 2015 Chen-Yu Tsai <wens@csie.org>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "sun8i-h3.dtsi"
+#include "sunxi-common-regulators.dtsi"
+
+#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/input/input.h>
+#include <dt-bindings/pinctrl/sun4i-a10.h>
+
+/ {
+ model = "Xunlong Orange Pi PC";
+ compatible = "xunlong,orangepi-pc", "allwinner,sun8i-h3";
+
+ aliases {
+ serial0 = &uart0;
+ };
+
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
+ leds {
+ compatible = "gpio-leds";
+ pinctrl-names = "default";
+ pinctrl-0 = <&leds_opc>, <&leds_r_opc>;
+
+ pwr_led {
+ label = "orangepi:green:pwr";
+ gpios = <&r_pio 0 10 GPIO_ACTIVE_HIGH>;
+ default-state = "on";
+ };
+
+ status_led {
+ label = "orangepi:red:status";
+ gpios = <&pio 0 15 GPIO_ACTIVE_HIGH>;
+ };
+ };
+
+ r_gpio_keys {
+ compatible = "gpio-keys";
+ pinctrl-names = "default";
+ pinctrl-0 = <&sw_r_opc>;
+
+ sw4 {
+ label = "sw4";
+ linux,code = <BTN_0>;
+ gpios = <&r_pio 0 3 GPIO_ACTIVE_LOW>;
+ };
+ };
+};
+
+&ehci1 {
+ status = "okay";
+};
+
+&ehci2 {
+ status = "okay";
+};
+
+&ehci3 {
+ status = "okay";
+};
+
+&ir {
+ pinctrl-names = "default";
+ pinctrl-0 = <&ir_pins_a>;
+ status = "okay";
+};
+
+&mmc0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc0_pins_a>, <&mmc0_cd_pin>;
+ vmmc-supply = <&reg_vcc3v3>;
+ bus-width = <4>;
+ cd-gpios = <&pio 5 6 GPIO_ACTIVE_HIGH>; /* PF6 */
+ cd-inverted;
+ status = "okay";
+};
+
+&ohci1 {
+ status = "okay";
+};
+
+&ohci2 {
+ status = "okay";
+};
+
+&ohci3 {
+ status = "okay";
+};
+
+&pio {
+ leds_opc: led_pins@0 {
+ allwinner,pins = "PA15";
+ allwinner,function = "gpio_out";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
+ };
+};
+
+&r_pio {
+ leds_r_opc: led_pins@0 {
+ allwinner,pins = "PL10";
+ allwinner,function = "gpio_out";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
+ };
+
+ sw_r_opc: key_pins@0 {
+ allwinner,pins = "PL3";
+ allwinner,function = "gpio_in";
+ allwinner,drive = <SUN4I_PINCTRL_10_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
+ };
+};
+
+&uart0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart0_pins_a>;
+ status = "okay";
+};
+
+&usbphy {
+ /* USB VBUS is always on */
+ status = "okay";
+};
diff --git a/arch/arm/boot/dts/sun8i-h3-orangepi-plus.dts b/arch/arm/boot/dts/sun8i-h3-orangepi-plus.dts
index 79f40c3e6101..b0cb41787e09 100644
--- a/arch/arm/boot/dts/sun8i-h3-orangepi-plus.dts
+++ b/arch/arm/boot/dts/sun8i-h3-orangepi-plus.dts
@@ -40,95 +40,56 @@
* OTHER DEALINGS IN THE SOFTWARE.
*/
-/dts-v1/;
-#include "sun8i-h3.dtsi"
-#include "sunxi-common-regulators.dtsi"
-
-#include <dt-bindings/gpio/gpio.h>
-#include <dt-bindings/input/input.h>
-#include <dt-bindings/pinctrl/sun4i-a10.h>
+/* The Orange Pi Plus is an extended version of the Orange Pi 2 */
+#include "sun8i-h3-orangepi-2.dts"
/ {
model = "Xunlong Orange Pi Plus";
compatible = "xunlong,orangepi-plus", "allwinner,sun8i-h3";
- aliases {
- serial0 = &uart0;
- };
-
- chosen {
- stdout-path = "serial0:115200n8";
- };
-
- leds {
- compatible = "gpio-leds";
+ reg_usb3_vbus: usb3-vbus {
+ compatible = "regulator-fixed";
pinctrl-names = "default";
- pinctrl-0 = <&leds_opc>, <&leds_r_opc>;
-
- status_led {
- label = "orangepi-plus:red:status";
- gpios = <&pio 0 15 GPIO_ACTIVE_HIGH>;
- };
-
- pwr_led {
- label = "orangepi-plus:green:pwr";
- gpios = <&r_pio 0 10 GPIO_ACTIVE_HIGH>;
- default-state = "on";
- };
+ pinctrl-0 = <&usb3_vbus_pin_a>;
+ regulator-name = "usb3-vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ regulator-boot-on;
+ enable-active-high;
+ gpio = <&pio 6 11 GPIO_ACTIVE_HIGH>;
};
+};
- r_gpio_keys {
- compatible = "gpio-keys";
- input-name = "sw4";
-
- pinctrl-names = "default";
- pinctrl-0 = <&sw_r_opc>;
+&ehci3 {
+ status = "okay";
+};
- sw4@0 {
- label = "sw4";
- linux,code = <BTN_0>;
- gpios = <&r_pio 0 3 GPIO_ACTIVE_LOW>;
- };
- };
+&mmc2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&mmc2_8bit_pins>;
+ vmmc-supply = <&reg_vcc3v3>;
+ bus-width = <8>;
+ non-removable;
+ cap-mmc-hw-reset;
+ status = "okay";
};
-&pio {
- leds_opc: led_pins@0 {
- allwinner,pins = "PA15";
- allwinner,function = "gpio_out";
- allwinner,drive = <SUN4I_PINCTRL_10_MA>;
- allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
- };
+&mmc2_8bit_pins {
+ /* Increase drive strength for DDR modes */
+ allwinner,drive = <SUN4I_PINCTRL_40_MA>;
+ /* eMMC is missing pull-ups */
+ allwinner,pull = <SUN4I_PINCTRL_PULL_UP>;
};
-&r_pio {
- leds_r_opc: led_pins@0 {
- allwinner,pins = "PL10";
+&pio {
+ usb3_vbus_pin_a: usb3_vbus_pin@0 {
+ allwinner,pins = "PG11";
allwinner,function = "gpio_out";
allwinner,drive = <SUN4I_PINCTRL_10_MA>;
allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
};
-
- sw_r_opc: key_pins@0 {
- allwinner,pins = "PL03";
- allwinner,function = "gpio_in";
- allwinner,drive = <SUN4I_PINCTRL_10_MA>;
- allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
- };
-};
-
-&mmc0 {
- pinctrl-names = "default";
- pinctrl-0 = <&mmc0_pins_a>, <&mmc0_cd_pin>;
- vmmc-supply = <&reg_vcc3v3>;
- bus-width = <4>;
- cd-gpios = <&pio 5 6 GPIO_ACTIVE_HIGH>; /* PF6 */
- cd-inverted;
- status = "okay";
};
-&uart0 {
- pinctrl-names = "default";
- pinctrl-0 = <&uart0_pins_a>;
- status = "okay";
+&usbphy {
+ usb3_vbus-supply = <&reg_usb3_vbus>;
};
diff --git a/arch/arm/boot/dts/sun8i-h3.dtsi b/arch/arm/boot/dts/sun8i-h3.dtsi
index dadb7f60c062..4a4926b0b0ed 100644
--- a/arch/arm/boot/dts/sun8i-h3.dtsi
+++ b/arch/arm/boot/dts/sun8i-h3.dtsi
@@ -269,6 +269,18 @@
"mmc2_sample";
};
+ usb_clk: clk@01c200cc {
+ #clock-cells = <1>;
+ #reset-cells = <1>;
+ compatible = "allwinner,sun8i-h3-usb-clk";
+ reg = <0x01c200cc 0x4>;
+ clocks = <&osc24M>;
+ clock-output-names = "usb_phy0", "usb_phy1",
+ "usb_phy2", "usb_phy3",
+ "usb_ohci0", "usb_ohci1",
+ "usb_ohci2", "usb_ohci3";
+ };
+
mbus_clk: clk@01c2015c {
#clock-cells = <0>;
compatible = "allwinner,sun8i-a23-mbus-clk";
@@ -377,6 +389,107 @@
#size-cells = <0>;
};
+ usbphy: phy@01c19400 {
+ compatible = "allwinner,sun8i-h3-usb-phy";
+ reg = <0x01c19400 0x2c>,
+ <0x01c1a800 0x4>,
+ <0x01c1b800 0x4>,
+ <0x01c1c800 0x4>,
+ <0x01c1d800 0x4>;
+ reg-names = "phy_ctrl",
+ "pmu0",
+ "pmu1",
+ "pmu2",
+ "pmu3";
+ clocks = <&usb_clk 8>,
+ <&usb_clk 9>,
+ <&usb_clk 10>,
+ <&usb_clk 11>;
+ clock-names = "usb0_phy",
+ "usb1_phy",
+ "usb2_phy",
+ "usb3_phy";
+ resets = <&usb_clk 0>,
+ <&usb_clk 1>,
+ <&usb_clk 2>,
+ <&usb_clk 3>;
+ reset-names = "usb0_reset",
+ "usb1_reset",
+ "usb2_reset",
+ "usb3_reset";
+ status = "disabled";
+ #phy-cells = <1>;
+ };
+
+ ehci1: usb@01c1b000 {
+ compatible = "allwinner,sun8i-h3-ehci", "generic-ehci";
+ reg = <0x01c1b000 0x100>;
+ interrupts = <GIC_SPI 74 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&bus_gates 25>, <&bus_gates 29>;
+ resets = <&ahb_rst 25>, <&ahb_rst 29>;
+ phys = <&usbphy 1>;
+ phy-names = "usb";
+ status = "disabled";
+ };
+
+ ohci1: usb@01c1b400 {
+ compatible = "allwinner,sun8i-h3-ohci", "generic-ohci";
+ reg = <0x01c1b400 0x100>;
+ interrupts = <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&bus_gates 29>, <&bus_gates 25>,
+ <&usb_clk 17>;
+ resets = <&ahb_rst 29>, <&ahb_rst 25>;
+ phys = <&usbphy 1>;
+ phy-names = "usb";
+ status = "disabled";
+ };
+
+ ehci2: usb@01c1c000 {
+ compatible = "allwinner,sun8i-h3-ehci", "generic-ehci";
+ reg = <0x01c1c000 0x100>;
+ interrupts = <GIC_SPI 76 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&bus_gates 26>, <&bus_gates 30>;
+ resets = <&ahb_rst 26>, <&ahb_rst 30>;
+ phys = <&usbphy 2>;
+ phy-names = "usb";
+ status = "disabled";
+ };
+
+ ohci2: usb@01c1c400 {
+ compatible = "allwinner,sun8i-h3-ohci", "generic-ohci";
+ reg = <0x01c1c400 0x100>;
+ interrupts = <GIC_SPI 77 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&bus_gates 30>, <&bus_gates 26>,
+ <&usb_clk 18>;
+ resets = <&ahb_rst 30>, <&ahb_rst 26>;
+ phys = <&usbphy 2>;
+ phy-names = "usb";
+ status = "disabled";
+ };
+
+ ehci3: usb@01c1d000 {
+ compatible = "allwinner,sun8i-h3-ehci", "generic-ehci";
+ reg = <0x01c1d000 0x100>;
+ interrupts = <GIC_SPI 78 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&bus_gates 27>, <&bus_gates 31>;
+ resets = <&ahb_rst 27>, <&ahb_rst 31>;
+ phys = <&usbphy 3>;
+ phy-names = "usb";
+ status = "disabled";
+ };
+
+ ohci3: usb@01c1d400 {
+ compatible = "allwinner,sun8i-h3-ohci", "generic-ohci";
+ reg = <0x01c1d400 0x100>;
+ interrupts = <GIC_SPI 79 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&bus_gates 31>, <&bus_gates 27>,
+ <&usb_clk 19>;
+ resets = <&ahb_rst 31>, <&ahb_rst 27>;
+ phys = <&usbphy 3>;
+ phy-names = "usb";
+ status = "disabled";
+ };
+
pio: pinctrl@01c20800 {
compatible = "allwinner,sun8i-h3-pinctrl";
reg = <0x01c20800 0x400>;
@@ -417,6 +530,16 @@
allwinner,drive = <SUN4I_PINCTRL_30_MA>;
allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
};
+
+ mmc2_8bit_pins: mmc2_8bit {
+ allwinner,pins = "PC5", "PC6", "PC8",
+ "PC9", "PC10", "PC11",
+ "PC12", "PC13", "PC14",
+ "PC15", "PC16";
+ allwinner,function = "mmc2";
+ allwinner,drive = <SUN4I_PINCTRL_30_MA>;
+ allwinner,pull = <SUN4I_PINCTRL_NO_PULL>;
+ };
};
ahb_rst: reset@01c202c0 {
diff --git a/arch/arm/boot/dts/tango4-common.dtsi b/arch/arm/boot/dts/tango4-common.dtsi
index ef665d21d317..dd7eb5f624d9 100644
--- a/arch/arm/boot/dts/tango4-common.dtsi
+++ b/arch/arm/boot/dts/tango4-common.dtsi
@@ -3,11 +3,13 @@
* https://github.com/mansr/linux-tangox
*/
-#define CPU_CLK 0
-#define SYS_CLK 1
-
#include <dt-bindings/interrupt-controller/arm-gic.h>
+#define CPU_CLK 0
+#define SYS_CLK 1
+#define USB_CLK 2
+#define SDIO_CLK 3
+
/ {
interrupt-parent = <&gic>;
#address-cells = <1>;
@@ -70,7 +72,7 @@
clkgen: clkgen@10000 {
compatible = "sigma,tango4-clkgen";
- reg = <0x10000 0x40>;
+ reg = <0x10000 0x100>;
clocks = <&xtal>;
#clock-cells = <1>;
};
@@ -89,6 +91,12 @@
reg-shift = <2>;
};
+ watchdog@1fd00 {
+ compatible = "sigma,smp8759-wdt";
+ reg = <0x1fd00 8>;
+ clocks = <&xtal>;
+ };
+
eth0: ethernet@26000 {
compatible = "sigma,smp8734-ethernet";
reg = <0x26000 0x800>;
diff --git a/arch/arm/boot/dts/tango4-smp8758.dtsi b/arch/arm/boot/dts/tango4-smp8758.dtsi
index 7ed88ee629fb..d2e65c46bcc7 100644
--- a/arch/arm/boot/dts/tango4-smp8758.dtsi
+++ b/arch/arm/boot/dts/tango4-smp8758.dtsi
@@ -1,4 +1,4 @@
-#include <dt-bindings/interrupt-controller/arm-gic.h>
+#include "tango4-common.dtsi"
/ {
cpus {
@@ -11,6 +11,9 @@
next-level-cache = <&l2cc>;
device_type = "cpu";
reg = <0>;
+ clocks = <&clkgen CPU_CLK>;
+ clock-latency = <1>;
+ operating-points = <1215000 0 607500 0 405000 0 243000 0 135000 0>;
};
cpu1: cpu@1 {
@@ -28,4 +31,27 @@
<GIC_SPI 12 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 13 IRQ_TYPE_LEVEL_HIGH>;
};
+
+ soc {
+ cpu_temp: thermal@920100 {
+ #thermal-sensor-cells = <0>;
+ compatible = "sigma,smp8758-thermal";
+ reg = <0x920100 12>;
+ };
+ };
+
+ thermal-zones {
+ cpu_thermal: cpu-thermal {
+ polling-delay = <997>; /* milliseconds */
+ polling-delay-passive = <499>; /* milliseconds */
+ thermal-sensors = <&cpu_temp>;
+ trips {
+ cpu_critical {
+ temperature = <120000>;
+ hysteresis = <2500>;
+ type = "critical";
+ };
+ };
+ };
+ };
};
diff --git a/arch/arm/boot/dts/tango4-vantage-1172.dts b/arch/arm/boot/dts/tango4-vantage-1172.dts
index 3e5b9c81a51c..4cab64cb581e 100644
--- a/arch/arm/boot/dts/tango4-vantage-1172.dts
+++ b/arch/arm/boot/dts/tango4-vantage-1172.dts
@@ -1,7 +1,6 @@
/dts-v1/;
#include "tango4-smp8758.dtsi"
-#include "tango4-common.dtsi"
/ {
model = "Sigma Designs SMP8758 Vantage-1172 Rev E1";
diff --git a/arch/arm/boot/dts/tegra114-dalmore.dts b/arch/arm/boot/dts/tegra114-dalmore.dts
index 8b7aa0dcdc6e..c970bf65c74c 100644
--- a/arch/arm/boot/dts/tegra114-dalmore.dts
+++ b/arch/arm/boot/dts/tegra114-dalmore.dts
@@ -18,6 +18,10 @@
serial0 = &uartd;
};
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
memory {
reg = <0x80000000 0x40000000>;
};
@@ -1164,7 +1168,7 @@
label = "Power";
gpios = <&gpio TEGRA_GPIO(Q, 0) GPIO_ACTIVE_LOW>;
linux,code = <KEY_POWER>;
- gpio-key,wakeup;
+ wakeup-source;
};
volume_down {
diff --git a/arch/arm/boot/dts/tegra114-roth.dts b/arch/arm/boot/dts/tegra114-roth.dts
index 38acf78d7815..9d868af97b8e 100644
--- a/arch/arm/boot/dts/tegra114-roth.dts
+++ b/arch/arm/boot/dts/tegra114-roth.dts
@@ -1047,7 +1047,7 @@
label = "Power";
gpios = <&gpio TEGRA_GPIO(Q, 0) GPIO_ACTIVE_LOW>;
linux,code = <KEY_POWER>;
- gpio-key,wakeup;
+ wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/tegra114-tn7.dts b/arch/arm/boot/dts/tegra114-tn7.dts
index f91c2c9b2f94..89047edb5c5f 100644
--- a/arch/arm/boot/dts/tegra114-tn7.dts
+++ b/arch/arm/boot/dts/tegra114-tn7.dts
@@ -292,7 +292,7 @@
label = "Power";
gpios = <&gpio TEGRA_GPIO(Q, 0) GPIO_ACTIVE_LOW>;
linux,code = <KEY_POWER>;
- gpio-key,wakeup;
+ wakeup-source;
};
volume_down {
diff --git a/arch/arm/boot/dts/tegra114.dtsi b/arch/arm/boot/dts/tegra114.dtsi
index d845bd1448b5..cb9393a53422 100644
--- a/arch/arm/boot/dts/tegra114.dtsi
+++ b/arch/arm/boot/dts/tegra114.dtsi
@@ -150,7 +150,7 @@
};
timer@60005000 {
- compatible = "nvidia,tegra114-timer", "nvidia,tegra20-timer";
+ compatible = "nvidia,tegra114-timer", "nvidia,tegra30-timer", "nvidia,tegra20-timer";
reg = <0x60005000 0x400>;
interrupts = <GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
@@ -256,7 +256,7 @@
* driver and APB DMA based serial driver for higher baudrate
* and performace. To enable the 8250 based driver, the compatible
* is "nvidia,tegra114-uart", "nvidia,tegra20-uart" and to enable
- * the APB DMA based serial driver, the comptible is
+ * the APB DMA based serial driver, the compatible is
* "nvidia,tegra114-hsuart", "nvidia,tegra30-hsuart".
*/
uarta: serial@70006000 {
diff --git a/arch/arm/boot/dts/tegra124-jetson-tk1.dts b/arch/arm/boot/dts/tegra124-jetson-tk1.dts
index 66b4451eb2ca..941f36263c8f 100644
--- a/arch/arm/boot/dts/tegra124-jetson-tk1.dts
+++ b/arch/arm/boot/dts/tegra124-jetson-tk1.dts
@@ -12,7 +12,15 @@
aliases {
rtc0 = "/i2c@0,7000d000/pmic@40";
rtc1 = "/rtc@0,7000e000";
+
+ /* This order keeps the mapping DB9 connector <-> ttyS0 */
serial0 = &uartd;
+ serial1 = &uarta;
+ serial2 = &uartb;
+ };
+
+ chosen {
+ stdout-path = "serial0:115200n8";
};
memory {
@@ -30,11 +38,17 @@
vddio-pex-ctl-supply = <&vdd_3v3_lp0>;
avdd-pll-erefe-supply = <&avdd_1v05_run>;
+ /* Mini PCIe */
pci@1,0 {
+ phys = <&{/padctl@0,7009f000/pads/pcie/lanes/pcie-4}>;
+ phy-names = "pcie-0";
status = "okay";
};
+ /* Gigabit Ethernet */
pci@2,0 {
+ phys = <&{/padctl@0,7009f000/pads/pcie/lanes/pcie-2}>;
+ phy-names = "pcie-0";
status = "okay";
};
};
@@ -1367,6 +1381,28 @@
};
};
+ /*
+ * First high speed UART, exposed on the expansion connector J3A2
+ * Pin 41: BR_UART1_TXD
+ * Pin 44: BR_UART1_RXD
+ */
+ serial@70006000 {
+ compatible = "nvidia,tegra124-hsuart", "nvidia,tegra30-hsuart";
+ status = "okay";
+ };
+
+ /*
+ * Second high speed UART, exposed on the expansion connector J3A2
+ * Pin 65: UART2_RXD
+ * Pin 68: UART2_TXD
+ * Pin 71: UART2_CTS_L
+ * Pin 74: UART2_RTS_L
+ */
+ serial@70006040 {
+ compatible = "nvidia,tegra124-hsuart", "nvidia,tegra30-hsuart";
+ status = "okay";
+ };
+
/* DB9 serial port */
serial@0,70006300 {
status = "okay";
@@ -1647,6 +1683,9 @@
sata@0,70020000 {
status = "okay";
+ phys = <&{/padctl@0,7009f000/pads/sata/lanes/sata-0}>;
+ phy-names = "sata-0";
+
hvdd-supply = <&vdd_3v3_lp0>;
vddio-supply = <&vdd_1v05_run>;
avdd-supply = <&vdd_1v05_run>;
@@ -1659,28 +1698,107 @@
status = "okay";
};
+ usb@0,70090000 {
+ phys = <&{/padctl@0,7009f000/pads/usb2/lanes/usb2-0}>, /* Micro A/B */
+ <&{/padctl@0,7009f000/pads/usb2/lanes/usb2-1}>, /* Mini PCIe */
+ <&{/padctl@0,7009f000/pads/usb2/lanes/usb2-2}>, /* USB3 */
+ <&{/padctl@0,7009f000/pads/pcie/lanes/pcie-0}>; /* USB3 */
+ phy-names = "usb2-0", "usb2-1", "usb2-2", "usb3-0";
+
+ avddio-pex-supply = <&vdd_1v05_run>;
+ dvddio-pex-supply = <&vdd_1v05_run>;
+ avdd-usb-supply = <&vdd_3v3_lp0>;
+ avdd-pll-utmip-supply = <&vddio_1v8>;
+ avdd-pll-erefe-supply = <&avdd_1v05_run>;
+ avdd-usb-ss-pll-supply = <&vdd_1v05_run>;
+ hvdd-usb-ss-supply = <&vdd_3v3_lp0>;
+ hvdd-usb-ss-pll-e-supply = <&vdd_3v3_lp0>;
+
+ status = "okay";
+ };
+
padctl@0,7009f000 {
- pinctrl-0 = <&padctl_default>;
- pinctrl-names = "default";
+ status = "okay";
- padctl_default: pinmux {
- usb3 {
- nvidia,lanes = "pcie-0", "pcie-1";
- nvidia,function = "usb3";
- nvidia,iddq = <0>;
+ pads {
+ usb2 {
+ status = "okay";
+
+ lanes {
+ usb2-0 {
+ nvidia,function = "xusb";
+ status = "okay";
+ };
+
+ usb2-1 {
+ nvidia,function = "xusb";
+ status = "okay";
+ };
+
+ usb2-2 {
+ nvidia,function = "xusb";
+ status = "okay";
+ };
+ };
};
pcie {
- nvidia,lanes = "pcie-2", "pcie-3",
- "pcie-4";
- nvidia,function = "pcie";
- nvidia,iddq = <0>;
+ status = "okay";
+
+ lanes {
+ pcie-0 {
+ nvidia,function = "usb3-ss";
+ status = "okay";
+ };
+
+ pcie-2 {
+ nvidia,function = "pcie";
+ status = "okay";
+ };
+
+ pcie-4 {
+ nvidia,function = "pcie";
+ status = "okay";
+ };
+ };
};
sata {
- nvidia,lanes = "sata-0";
- nvidia,function = "sata";
- nvidia,iddq = <0>;
+ status = "okay";
+
+ lanes {
+ sata-0 {
+ nvidia,function = "sata";
+ status = "okay";
+ };
+ };
+ };
+ };
+
+ ports {
+ /* Micro A/B */
+ usb2-0 {
+ status = "okay";
+ mode = "otg";
+ };
+
+ /* Mini PCIe */
+ usb2-1 {
+ status = "okay";
+ mode = "host";
+ };
+
+ /* USB3 */
+ usb2-2 {
+ status = "okay";
+ mode = "host";
+
+ vbus-supply = <&vdd_usb3_vbus>;
+ };
+
+ usb3-0 {
+ nvidia,usb2-companion = <2>;
+ status = "okay";
};
};
};
@@ -1761,7 +1879,7 @@
gpios = <&gpio TEGRA_GPIO(Q, 0) GPIO_ACTIVE_LOW>;
linux,code = <KEY_POWER>;
debounce-interval = <10>;
- gpio-key,wakeup;
+ wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/tegra124-nyan.dtsi b/arch/arm/boot/dts/tegra124-nyan.dtsi
index ec1aa64ded68..0710a600cc69 100644
--- a/arch/arm/boot/dts/tegra124-nyan.dtsi
+++ b/arch/arm/boot/dts/tegra124-nyan.dtsi
@@ -8,6 +8,10 @@
serial0 = &uarta;
};
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
memory {
reg = <0x0 0x80000000 0x0 0x80000000>;
};
@@ -220,7 +224,7 @@
regulator-always-on;
};
- ldo0 {
+ avdd_1v05_run: ldo0 {
regulator-name = "+1.05V_RUN_AVDD";
regulator-min-microvolt = <1050000>;
regulator-max-microvolt = <1050000>;
@@ -364,6 +368,99 @@
status = "okay";
};
+ usb@0,70090000 {
+ phys = <&{/padctl@0,7009f000/pads/usb2/lanes/usb2-0}>, /* 1st USB A */
+ <&{/padctl@0,7009f000/pads/usb2/lanes/usb2-1}>, /* Internal USB */
+ <&{/padctl@0,7009f000/pads/usb2/lanes/usb2-2}>, /* 2nd USB A */
+ <&{/padctl@0,7009f000/pads/pcie/lanes/pcie-0}>, /* 1st USB A */
+ <&{/padctl@0,7009f000/pads/pcie/lanes/pcie-1}>; /* 2nd USB A */
+ phy-names = "usb2-0", "usb2-1", "usb2-2", "usb3-0", "usb3-1";
+
+ avddio-pex-supply = <&vdd_1v05_run>;
+ dvddio-pex-supply = <&vdd_1v05_run>;
+ avdd-usb-supply = <&vdd_3v3_lp0>;
+ avdd-pll-utmip-supply = <&vddio_1v8>;
+ avdd-pll-erefe-supply = <&avdd_1v05_run>;
+ avdd-usb-ss-pll-supply = <&vdd_1v05_run>;
+ hvdd-usb-ss-supply = <&vdd_3v3_lp0>;
+ hvdd-usb-ss-pll-e-supply = <&vdd_3v3_lp0>;
+
+ status = "okay";
+ };
+
+ padctl@0,7009f000 {
+ status = "okay";
+
+ pads {
+ usb2 {
+ status = "okay";
+
+ lanes {
+ usb2-0 {
+ nvidia,function = "xusb";
+ status = "okay";
+ };
+
+ usb2-1 {
+ nvidia,function = "xusb";
+ status = "okay";
+ };
+
+ usb2-2 {
+ nvidia,function = "xusb";
+ status = "okay";
+ };
+ };
+ };
+
+ pcie {
+ status = "okay";
+
+ lanes {
+ pcie-0 {
+ nvidia,function = "usb3-ss";
+ status = "okay";
+ };
+
+ pcie-1 {
+ nvidia,function = "usb3-ss";
+ status = "okay";
+ };
+ };
+ };
+ };
+
+ ports {
+ usb2-0 {
+ vbus-supply = <&vdd_usb1_vbus>;
+ status = "okay";
+ mode = "otg";
+ };
+
+ usb2-1 {
+ vbus-supply = <&vdd_run_cam>;
+ status = "okay";
+ mode = "host";
+ };
+
+ usb2-2 {
+ vbus-supply = <&vdd_usb3_vbus>;
+ status = "okay";
+ mode = "host";
+ };
+
+ usb3-0 {
+ nvidia,usb2-companion = <0>;
+ status = "okay";
+ };
+
+ usb3-1 {
+ nvidia,usb2-companion = <1>;
+ status = "okay";
+ };
+ };
+ };
+
sdhci0_pwrseq: sdhci0_pwrseq {
compatible = "mmc-pwrseq-simple";
@@ -410,33 +507,6 @@
};
};
- usb@0,7d000000 { /* Rear external USB port. */
- status = "okay";
- };
-
- usb-phy@0,7d000000 {
- status = "okay";
- vbus-supply = <&vdd_usb1_vbus>;
- };
-
- usb@0,7d004000 { /* Internal webcam. */
- status = "okay";
- };
-
- usb-phy@0,7d004000 {
- status = "okay";
- vbus-supply = <&vdd_run_cam>;
- };
-
- usb@0,7d008000 { /* Left external USB port. */
- status = "okay";
- };
-
- usb-phy@0,7d008000 {
- status = "okay";
- vbus-supply = <&vdd_usb3_vbus>;
- };
-
backlight: backlight {
compatible = "pwm-backlight";
@@ -509,7 +579,7 @@
linux,input-type = <5>;
linux,code = <KEY_RESERVED>;
debounce-interval = <1>;
- gpio-key,wakeup;
+ wakeup-source;
};
power {
@@ -517,7 +587,7 @@
gpios = <&gpio TEGRA_GPIO(Q, 0) GPIO_ACTIVE_LOW>;
linux,code = <KEY_POWER>;
debounce-interval = <30>;
- gpio-key,wakeup;
+ wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/tegra124-venice2.dts b/arch/arm/boot/dts/tegra124-venice2.dts
index cfbdf429b45d..973446d07182 100644
--- a/arch/arm/boot/dts/tegra124-venice2.dts
+++ b/arch/arm/boot/dts/tegra124-venice2.dts
@@ -13,6 +13,10 @@
serial0 = &uarta;
};
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
memory {
reg = <0x0 0x80000000 0x0 0x80000000>;
};
@@ -753,7 +757,7 @@
regulator-always-on;
};
- ldo0 {
+ avdd_1v05_run: ldo0 {
regulator-name = "+1.05V_RUN_AVDD";
regulator-min-microvolt = <1050000>;
regulator-max-microvolt = <1050000>;
@@ -895,6 +899,105 @@
status = "okay";
};
+ usb@0,70090000 {
+ phys = <&{/padctl@0,7009f000/pads/usb2/lanes/usb2-0}>, /* 1st USB A */
+ <&{/padctl@0,7009f000/pads/usb2/lanes/usb2-1}>, /* Internal USB */
+ <&{/padctl@0,7009f000/pads/usb2/lanes/usb2-2}>, /* 2nd USB A */
+ <&{/padctl@0,7009f000/pads/pcie/lanes/pcie-0}>, /* 1st USB A */
+ <&{/padctl@0,7009f000/pads/pcie/lanes/pcie-1}>; /* 2nd USB A */
+ phy-names = "usb2-0", "usb2-1", "usb2-2", "usb3-0", "usb3-1";
+
+ avddio-pex-supply = <&vdd_1v05_run>;
+ dvddio-pex-supply = <&vdd_1v05_run>;
+ avdd-usb-supply = <&vdd_3v3_lp0>;
+ avdd-pll-utmip-supply = <&vddio_1v8>;
+ avdd-pll-erefe-supply = <&avdd_1v05_run>;
+ avdd-usb-ss-pll-supply = <&vdd_1v05_run>;
+ hvdd-usb-ss-supply = <&vdd_3v3_lp0>;
+ hvdd-usb-ss-pll-e-supply = <&vdd_3v3_lp0>;
+
+ status = "okay";
+ };
+
+ padctl@0,7009f000 {
+ pads {
+ usb2 {
+ status = "okay";
+
+ lanes {
+ usb2-0 {
+ nvidia,function = "xusb";
+ status = "okay";
+ };
+
+ usb2-1 {
+ nvidia,function = "xusb";
+ status = "okay";
+ };
+
+ usb2-2 {
+ nvidia,function = "xusb";
+ status = "okay";
+ };
+ };
+ };
+
+ pcie {
+ status = "okay";
+
+ lanes {
+ pcie-0 {
+ nvidia,function = "usb3-ss";
+ status = "okay";
+ };
+
+ pcie-1 {
+ nvidia,function = "usb3-ss";
+ status = "okay";
+ };
+ };
+ };
+ };
+
+ ports {
+ usb2-0 {
+ status = "okay";
+ mode = "otg";
+
+ vbus-supply = <&vdd_usb1_vbus>;
+ };
+
+ usb2-1 {
+ status = "okay";
+ mode = "host";
+
+ vbus-supply = <&vdd_run_cam>;
+ };
+
+ usb2-2 {
+ status = "okay";
+ mode = "host";
+
+ vbus-supply = <&vdd_usb3_vbus>;
+ };
+
+ usb3-0 {
+ nvidia,usb2-companion = <0>;
+ status = "okay";
+ };
+
+ usb3-1 {
+ nvidia,usb2-companion = <2>;
+ status = "okay";
+ };
+ };
+ };
+
sdhci@0,700b0400 {
cd-gpios = <&gpio TEGRA_GPIO(V, 2) GPIO_ACTIVE_HIGH>;
power-gpios = <&gpio TEGRA_GPIO(R, 0) GPIO_ACTIVE_HIGH>;
@@ -975,7 +1078,7 @@
gpios = <&gpio TEGRA_GPIO(Q, 0) GPIO_ACTIVE_LOW>;
linux,code = <KEY_POWER>;
debounce-interval = <10>;
- gpio-key,wakeup;
+ wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/tegra124.dtsi b/arch/arm/boot/dts/tegra124.dtsi
index 68669f791c8b..ea4811870de2 100644
--- a/arch/arm/boot/dts/tegra124.dtsi
+++ b/arch/arm/boot/dts/tegra124.dtsi
@@ -2,7 +2,6 @@
#include <dt-bindings/gpio/tegra-gpio.h>
#include <dt-bindings/memory/tegra124-mc.h>
#include <dt-bindings/pinctrl/pinctrl-tegra.h>
-#include <dt-bindings/pinctrl/pinctrl-tegra-xusb.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/reset/tegra124-car.h>
#include <dt-bindings/thermal/tegra124-soctherm.h>
@@ -51,9 +50,6 @@
reset-names = "pex", "afi", "pcie_x";
status = "disabled";
- phys = <&padctl TEGRA_XUSB_PADCTL_PCIE>;
- phy-names = "pcie";
-
pci@1,0 {
device_type = "pci";
assigned-addresses = <0x82000800 0 0x01000000 0 0x1000>;
@@ -208,7 +204,7 @@
};
timer@0,60005000 {
- compatible = "nvidia,tegra124-timer", "nvidia,tegra20-timer";
+ compatible = "nvidia,tegra124-timer", "nvidia,tegra30-timer", "nvidia,tegra20-timer";
reg = <0x0 0x60005000 0x0 0x400>;
interrupts = <GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
<GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>,
@@ -322,7 +318,7 @@
* driver and APB DMA based serial driver for higher baudrate
* and performace. To enable the 8250 based driver, the compatible
* is "nvidia,tegra124-uart", "nvidia,tegra20-uart" and to enable
- * the APB DMA based serial driver, the comptible is
+ * the APB DMA based serial driver, the compatible is
* "nvidia,tegra124-hsuart", "nvidia,tegra30-hsuart".
*/
uarta: serial@0,70006000 {
@@ -622,8 +618,6 @@
<&tegra_car 123>,
<&tegra_car 129>;
reset-names = "sata", "sata-oob", "sata-cold";
- phys = <&padctl TEGRA_XUSB_PADCTL_SATA>;
- phy-names = "sata-phy";
status = "disabled";
};
@@ -642,13 +636,172 @@
status = "disabled";
};
+ usb@0,70090000 {
+ compatible = "nvidia,tegra124-xusb";
+ reg = <0x0 0x70090000 0x0 0x8000>,
+ <0x0 0x70098000 0x0 0x1000>,
+ <0x0 0x70099000 0x0 0x1000>;
+ reg-names = "hcd", "fpci", "ipfs";
+
+ interrupts = <GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>;
+
+ clocks = <&tegra_car TEGRA124_CLK_XUSB_HOST>,
+ <&tegra_car TEGRA124_CLK_XUSB_HOST_SRC>,
+ <&tegra_car TEGRA124_CLK_XUSB_FALCON_SRC>,
+ <&tegra_car TEGRA124_CLK_XUSB_SS>,
+ <&tegra_car TEGRA124_CLK_XUSB_SS_DIV2>,
+ <&tegra_car TEGRA124_CLK_XUSB_SS_SRC>,
+ <&tegra_car TEGRA124_CLK_XUSB_HS_SRC>,
+ <&tegra_car TEGRA124_CLK_XUSB_FS_SRC>,
+ <&tegra_car TEGRA124_CLK_PLL_U_480M>,
+ <&tegra_car TEGRA124_CLK_CLK_M>,
+ <&tegra_car TEGRA124_CLK_PLL_E>;
+ clock-names = "xusb_host", "xusb_host_src",
+ "xusb_falcon_src", "xusb_ss",
+ "xusb_ss_div2", "xusb_ss_src",
+ "xusb_hs_src", "xusb_fs_src",
+ "pll_u_480m", "clk_m", "pll_e";
+ resets = <&tegra_car 89>, <&tegra_car 156>,
+ <&tegra_car 143>;
+ reset-names = "xusb_host", "xusb_ss", "xusb_src";
+
+ nvidia,xusb-padctl = <&padctl>;
+
+ status = "disabled";
+ };
+
padctl: padctl@0,7009f000 {
compatible = "nvidia,tegra124-xusb-padctl";
reg = <0x0 0x7009f000 0x0 0x1000>;
resets = <&tegra_car 142>;
reset-names = "padctl";
- #phy-cells = <1>;
+ pads {
+ usb2 {
+ status = "disabled";
+
+ lanes {
+ usb2-0 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ usb2-1 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ usb2-2 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+ };
+ };
+
+ ulpi {
+ status = "disabled";
+
+ lanes {
+ ulpi-0 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+ };
+ };
+
+ hsic {
+ status = "disabled";
+
+ lanes {
+ hsic-0 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ hsic-1 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+ };
+ };
+
+ pcie {
+ status = "disabled";
+
+ lanes {
+ pcie-0 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ pcie-1 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ pcie-2 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ pcie-3 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+
+ pcie-4 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+ };
+ };
+
+ sata {
+ status = "disabled";
+
+ lanes {
+ sata-0 {
+ status = "disabled";
+ #phy-cells = <0>;
+ };
+ };
+ };
+ };
+
+ ports {
+ usb2-0 {
+ status = "disabled";
+ };
+
+ usb2-1 {
+ status = "disabled";
+ };
+
+ usb2-2 {
+ status = "disabled";
+ };
+
+ ulpi-0 {
+ status = "disabled";
+ };
+
+ hsic-0 {
+ status = "disabled";
+ };
+
+ hsic-1 {
+ status = "disabled";
+ };
+
+ usb3-0 {
+ status = "disabled";
+ };
+
+ usb3-1 {
+ status = "disabled";
+ };
+ };
};
sdhci@0,700b0000 {
diff --git a/arch/arm/boot/dts/tegra20-harmony.dts b/arch/arm/boot/dts/tegra20-harmony.dts
index b926a07b9443..d2e960cbc001 100644
--- a/arch/arm/boot/dts/tegra20-harmony.dts
+++ b/arch/arm/boot/dts/tegra20-harmony.dts
@@ -13,6 +13,10 @@
serial0 = &uartd;
};
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
memory {
reg = <0x00000000 0x40000000>;
};
@@ -655,7 +659,7 @@
label = "Power";
gpios = <&gpio TEGRA_GPIO(V, 2) GPIO_ACTIVE_LOW>;
linux,code = <KEY_POWER>;
- gpio-key,wakeup;
+ wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/tegra20-iris-512.dts b/arch/arm/boot/dts/tegra20-iris-512.dts
index 1dd7d7bfdfcc..bb56dfe9e10c 100644
--- a/arch/arm/boot/dts/tegra20-iris-512.dts
+++ b/arch/arm/boot/dts/tegra20-iris-512.dts
@@ -11,6 +11,10 @@
serial1 = &uartd;
};
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
host1x@50000000 {
hdmi@54280000 {
status = "okay";
diff --git a/arch/arm/boot/dts/tegra20-medcom-wide.dts b/arch/arm/boot/dts/tegra20-medcom-wide.dts
index 9b87526ab0b7..34c6588e92ef 100644
--- a/arch/arm/boot/dts/tegra20-medcom-wide.dts
+++ b/arch/arm/boot/dts/tegra20-medcom-wide.dts
@@ -10,6 +10,10 @@
serial0 = &uartd;
};
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
pwm@7000a000 {
status = "okay";
};
diff --git a/arch/arm/boot/dts/tegra20-paz00.dts b/arch/arm/boot/dts/tegra20-paz00.dts
index ed7e1009326c..33ed2b23026b 100644
--- a/arch/arm/boot/dts/tegra20-paz00.dts
+++ b/arch/arm/boot/dts/tegra20-paz00.dts
@@ -14,6 +14,10 @@
serial1 = &uartc;
};
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
memory {
reg = <0x00000000 0x20000000>;
};
@@ -521,7 +525,7 @@
label = "Power";
gpios = <&gpio TEGRA_GPIO(J, 7) GPIO_ACTIVE_LOW>;
linux,code = <KEY_POWER>;
- gpio-key,wakeup;
+ wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/tegra20-seaboard.dts b/arch/arm/boot/dts/tegra20-seaboard.dts
index aea8994b35f2..94b60a710dd8 100644
--- a/arch/arm/boot/dts/tegra20-seaboard.dts
+++ b/arch/arm/boot/dts/tegra20-seaboard.dts
@@ -13,6 +13,10 @@
serial0 = &uartd;
};
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
memory {
reg = <0x00000000 0x40000000>;
};
@@ -807,7 +811,7 @@
label = "Power";
gpios = <&gpio TEGRA_GPIO(V, 2) GPIO_ACTIVE_LOW>;
linux,code = <KEY_POWER>;
- gpio-key,wakeup;
+ wakeup-source;
};
lid {
@@ -816,7 +820,7 @@
linux,input-type = <5>; /* EV_SW */
linux,code = <0>; /* SW_LID */
debounce-interval = <1>;
- gpio-key,wakeup;
+ wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/tegra20-tamonten.dtsi b/arch/arm/boot/dts/tegra20-tamonten.dtsi
index 13d4e6185275..025e9e8037da 100644
--- a/arch/arm/boot/dts/tegra20-tamonten.dtsi
+++ b/arch/arm/boot/dts/tegra20-tamonten.dtsi
@@ -10,6 +10,10 @@
serial0 = &uartd;
};
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
memory {
reg = <0x00000000 0x20000000>;
};
diff --git a/arch/arm/boot/dts/tegra20-trimslice.dts b/arch/arm/boot/dts/tegra20-trimslice.dts
index d99af4ef9c64..4a035f74043a 100644
--- a/arch/arm/boot/dts/tegra20-trimslice.dts
+++ b/arch/arm/boot/dts/tegra20-trimslice.dts
@@ -13,6 +13,10 @@
serial0 = &uarta;
};
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
memory {
reg = <0x00000000 0x40000000>;
};
@@ -392,7 +396,7 @@
label = "Power";
gpios = <&gpio TEGRA_GPIO(X, 6) GPIO_ACTIVE_LOW>;
linux,code = <KEY_POWER>;
- gpio-key,wakeup;
+ wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/tegra20-ventana.dts b/arch/arm/boot/dts/tegra20-ventana.dts
index 04c58e9ca490..a28c060a839b 100644
--- a/arch/arm/boot/dts/tegra20-ventana.dts
+++ b/arch/arm/boot/dts/tegra20-ventana.dts
@@ -13,6 +13,10 @@
serial0 = &uartd;
};
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
memory {
reg = <0x00000000 0x40000000>;
};
@@ -601,7 +605,7 @@
label = "Power";
gpios = <&gpio TEGRA_GPIO(V, 2) GPIO_ACTIVE_LOW>;
linux,code = <KEY_POWER>;
- gpio-key,wakeup;
+ wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/tegra20-whistler.dts b/arch/arm/boot/dts/tegra20-whistler.dts
index 340d81108df1..073806d07b2b 100644
--- a/arch/arm/boot/dts/tegra20-whistler.dts
+++ b/arch/arm/boot/dts/tegra20-whistler.dts
@@ -13,6 +13,10 @@
serial0 = &uarta;
};
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
memory {
reg = <0x00000000 0x20000000>;
};
@@ -508,7 +512,7 @@
nvidia,repeat-delay-ms = <160>;
nvidia,kbc-row-pins = <0 1 2>;
nvidia,kbc-col-pins = <16 17>;
- nvidia,wakeup-source;
+ wakeup-source;
linux,keymap = <MATRIX_KEY(0x00, 0x00, KEY_POWER)
MATRIX_KEY(0x01, 0x00, KEY_HOME)
MATRIX_KEY(0x01, 0x01, KEY_BACK)
diff --git a/arch/arm/boot/dts/tegra20.dtsi b/arch/arm/boot/dts/tegra20.dtsi
index 33173e1bace9..2207c08e3fa3 100644
--- a/arch/arm/boot/dts/tegra20.dtsi
+++ b/arch/arm/boot/dts/tegra20.dtsi
@@ -145,7 +145,7 @@
interrupt-parent = <&intc>;
reg = <0x50040600 0x20>;
interrupts = <GIC_PPI 13
- (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_LEVEL_HIGH)>;
+ (GIC_CPU_MASK_SIMPLE(2) | IRQ_TYPE_EDGE_RISING)>;
clocks = <&tegra_car TEGRA20_CLK_TWD>;
};
@@ -309,7 +309,7 @@
* driver and APB DMA based serial driver for higher baudrate
* and performace. To enable the 8250 based driver, the compatible
* is "nvidia,tegra20-uart" and to enable the APB DMA based serial
- * driver, the comptible is "nvidia,tegra20-hsuart".
+ * driver, the compatible is "nvidia,tegra20-hsuart".
*/
uarta: serial@70006000 {
compatible = "nvidia,tegra20-uart";
diff --git a/arch/arm/boot/dts/tegra30-apalis-eval.dts b/arch/arm/boot/dts/tegra30-apalis-eval.dts
index f2879cfcca62..99a69457dbf5 100644
--- a/arch/arm/boot/dts/tegra30-apalis-eval.dts
+++ b/arch/arm/boot/dts/tegra30-apalis-eval.dts
@@ -17,6 +17,10 @@
serial3 = &uartd;
};
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
pcie-controller@00003000 {
status = "okay";
@@ -196,7 +200,7 @@
gpios = <&gpio TEGRA_GPIO(V, 1) GPIO_ACTIVE_LOW>;
linux,code = <KEY_WAKEUP>;
debounce-interval = <10>;
- gpio-key,wakeup;
+ wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/tegra30-beaver.dts b/arch/arm/boot/dts/tegra30-beaver.dts
index 3dede3934446..1eca3b28ac64 100644
--- a/arch/arm/boot/dts/tegra30-beaver.dts
+++ b/arch/arm/boot/dts/tegra30-beaver.dts
@@ -12,6 +12,10 @@
serial0 = &uarta;
};
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
memory {
reg = <0x80000000 0x7ff00000>;
};
diff --git a/arch/arm/boot/dts/tegra30-cardhu.dtsi b/arch/arm/boot/dts/tegra30-cardhu.dtsi
index bb1ca158273c..4721c1c9c780 100644
--- a/arch/arm/boot/dts/tegra30-cardhu.dtsi
+++ b/arch/arm/boot/dts/tegra30-cardhu.dtsi
@@ -35,6 +35,10 @@
serial1 = &uartc;
};
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
memory {
reg = <0x80000000 0x40000000>;
};
@@ -626,7 +630,7 @@
interrupts = <2 0>;
linux,code = <KEY_POWER>;
debounce-interval = <100>;
- gpio-key,wakeup;
+ wakeup-source;
};
volume-down {
diff --git a/arch/arm/boot/dts/tegra30-colibri-eval-v3.dts b/arch/arm/boot/dts/tegra30-colibri-eval-v3.dts
index 3ff019f47d00..76875c3160fe 100644
--- a/arch/arm/boot/dts/tegra30-colibri-eval-v3.dts
+++ b/arch/arm/boot/dts/tegra30-colibri-eval-v3.dts
@@ -15,6 +15,10 @@
serial2 = &uartd;
};
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
host1x@50000000 {
dc@54200000 {
rgb {
@@ -142,7 +146,7 @@
gpios = <&gpio TEGRA_GPIO(V, 1) GPIO_ACTIVE_HIGH>;
linux,code = <KEY_WAKEUP>;
debounce-interval = <10>;
- gpio-key,wakeup;
+ wakeup-source;
};
};
diff --git a/arch/arm/boot/dts/tegra30.dtsi b/arch/arm/boot/dts/tegra30.dtsi
index 313e260529a3..5030065cbdfe 100644
--- a/arch/arm/boot/dts/tegra30.dtsi
+++ b/arch/arm/boot/dts/tegra30.dtsi
@@ -230,7 +230,7 @@
reg = <0x50040600 0x20>;
interrupt-parent = <&intc>;
interrupts = <GIC_PPI 13
- (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_HIGH)>;
+ (GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_EDGE_RISING)>;
clocks = <&tegra_car TEGRA30_CLK_TWD>;
};
@@ -371,7 +371,7 @@
* driver and APB DMA based serial driver for higher baudrate
* and performace. To enable the 8250 based driver, the compatible
* is "nvidia,tegra30-uart", "nvidia,tegra20-uart" and to enable
- * the APB DMA based serial driver, the comptible is
+ * the APB DMA based serial driver, the compatible is
* "nvidia,tegra30-hsuart", "nvidia,tegra20-hsuart".
*/
uarta: serial@70006000 {
diff --git a/arch/arm/boot/dts/twl6030.dtsi b/arch/arm/boot/dts/twl6030.dtsi
index 55eb35f068fb..c45f97f37563 100644
--- a/arch/arm/boot/dts/twl6030.dtsi
+++ b/arch/arm/boot/dts/twl6030.dtsi
@@ -99,4 +99,10 @@
compatible = "ti,twl6030-pwmled";
#pwm-cells = <2>;
};
+
+ gpadc {
+ compatible = "ti,twl6030-gpadc";
+ interrupts = <3>;
+ #io-channel-cells = <1>;
+ };
};
diff --git a/arch/arm/boot/dts/uniphier-pinctrl.dtsi b/arch/arm/boot/dts/uniphier-pinctrl.dtsi
index 24592798a368..f2f3fbe2d517 100644
--- a/arch/arm/boot/dts/uniphier-pinctrl.dtsi
+++ b/arch/arm/boot/dts/uniphier-pinctrl.dtsi
@@ -68,6 +68,16 @@
function = "i2c4";
};
+ pinctrl_nand: nand_grp {
+ groups = "nand";
+ function = "nand";
+ };
+
+ pinctrl_nand2cs: nand2cs_grp {
+ groups = "nand", "nand_cs1";
+ function = "nand";
+ };
+
pinctrl_uart0: uart0_grp {
groups = "uart0";
function = "uart0";
diff --git a/arch/arm/boot/dts/versatile-ab.dts b/arch/arm/boot/dts/versatile-ab.dts
index d23320af5ea7..409e069b3a84 100644
--- a/arch/arm/boot/dts/versatile-ab.dts
+++ b/arch/arm/boot/dts/versatile-ab.dts
@@ -119,8 +119,9 @@
};
flash@34000000 {
- compatible = "arm,versatile-flash";
- reg = <0x34000000 0x4000000>;
+ /* 64 MiB NOR flash in non-interleaved chips */
+ compatible = "arm,versatile-flash", "cfi-flash";
+ reg = <0x34000000 0x04000000>;
bank-width = <4>;
};
diff --git a/arch/arm/boot/dts/vexpress-v2m-rs1.dtsi b/arch/arm/boot/dts/vexpress-v2m-rs1.dtsi
index 7a556b92e55c..3086efacd00e 100644
--- a/arch/arm/boot/dts/vexpress-v2m-rs1.dtsi
+++ b/arch/arm/boot/dts/vexpress-v2m-rs1.dtsi
@@ -75,19 +75,19 @@
compatible = "arm,vexpress-sysreg";
reg = <0x010000 0x1000>;
- v2m_led_gpios: sys_led@08 {
+ v2m_led_gpios: sys_led {
compatible = "arm,vexpress-sysreg,sys_led";
gpio-controller;
#gpio-cells = <2>;
};
- v2m_mmc_gpios: sys_mci@48 {
+ v2m_mmc_gpios: sys_mci {
compatible = "arm,vexpress-sysreg,sys_mci";
gpio-controller;
#gpio-cells = <2>;
};
- v2m_flash_gpios: sys_flash@4c {
+ v2m_flash_gpios: sys_flash {
compatible = "arm,vexpress-sysreg,sys_flash";
gpio-controller;
#gpio-cells = <2>;
@@ -286,7 +286,7 @@
};
};
- v2m_fixed_3v3: fixedregulator@0 {
+ v2m_fixed_3v3: fixed-regulator-0 {
compatible = "regulator-fixed";
regulator-name = "3V3";
regulator-min-microvolt = <3300000>;
@@ -318,49 +318,49 @@
leds {
compatible = "gpio-leds";
- user@1 {
+ user1 {
label = "v2m:green:user1";
gpios = <&v2m_led_gpios 0 0>;
linux,default-trigger = "heartbeat";
};
- user@2 {
+ user2 {
label = "v2m:green:user2";
gpios = <&v2m_led_gpios 1 0>;
linux,default-trigger = "mmc0";
};
- user@3 {
+ user3 {
label = "v2m:green:user3";
gpios = <&v2m_led_gpios 2 0>;
linux,default-trigger = "cpu0";
};
- user@4 {
+ user4 {
label = "v2m:green:user4";
gpios = <&v2m_led_gpios 3 0>;
linux,default-trigger = "cpu1";
};
- user@5 {
+ user5 {
label = "v2m:green:user5";
gpios = <&v2m_led_gpios 4 0>;
linux,default-trigger = "cpu2";
};
- user@6 {
+ user6 {
label = "v2m:green:user6";
gpios = <&v2m_led_gpios 5 0>;
linux,default-trigger = "cpu3";
};
- user@7 {
+ user7 {
label = "v2m:green:user7";
gpios = <&v2m_led_gpios 6 0>;
linux,default-trigger = "cpu4";
};
- user@8 {
+ user8 {
label = "v2m:green:user8";
gpios = <&v2m_led_gpios 7 0>;
linux,default-trigger = "cpu5";
@@ -371,7 +371,7 @@
compatible = "arm,vexpress,config-bus";
arm,vexpress,config-bridge = <&v2m_sysreg>;
- osc@0 {
+ oscclk0 {
/* MCC static memory clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 0>;
@@ -380,7 +380,7 @@
clock-output-names = "v2m:oscclk0";
};
- v2m_oscclk1: osc@1 {
+ v2m_oscclk1: oscclk1 {
/* CLCD clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 1>;
@@ -389,7 +389,7 @@
clock-output-names = "v2m:oscclk1";
};
- v2m_oscclk2: osc@2 {
+ v2m_oscclk2: oscclk2 {
/* IO FPGA peripheral clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 2>;
@@ -398,7 +398,7 @@
clock-output-names = "v2m:oscclk2";
};
- volt@0 {
+ volt-vio {
/* Logic level voltage */
compatible = "arm,vexpress-volt";
arm,vexpress-sysreg,func = <2 0>;
@@ -407,34 +407,34 @@
label = "VIO";
};
- temp@0 {
+ temp-mcc {
/* MCC internal operating temperature */
compatible = "arm,vexpress-temp";
arm,vexpress-sysreg,func = <4 0>;
label = "MCC";
};
- reset@0 {
+ reset {
compatible = "arm,vexpress-reset";
arm,vexpress-sysreg,func = <5 0>;
};
- muxfpga@0 {
+ muxfpga {
compatible = "arm,vexpress-muxfpga";
arm,vexpress-sysreg,func = <7 0>;
};
- shutdown@0 {
+ shutdown {
compatible = "arm,vexpress-shutdown";
arm,vexpress-sysreg,func = <8 0>;
};
- reboot@0 {
+ reboot {
compatible = "arm,vexpress-reboot";
arm,vexpress-sysreg,func = <9 0>;
};
- dvimode@0 {
+ dvimode {
compatible = "arm,vexpress-dvimode";
arm,vexpress-sysreg,func = <11 0>;
};
diff --git a/arch/arm/boot/dts/vexpress-v2m.dtsi b/arch/arm/boot/dts/vexpress-v2m.dtsi
index 47e4a115adef..c6393d3f1719 100644
--- a/arch/arm/boot/dts/vexpress-v2m.dtsi
+++ b/arch/arm/boot/dts/vexpress-v2m.dtsi
@@ -74,19 +74,19 @@
compatible = "arm,vexpress-sysreg";
reg = <0x00000 0x1000>;
- v2m_led_gpios: sys_led@08 {
+ v2m_led_gpios: sys_led {
compatible = "arm,vexpress-sysreg,sys_led";
gpio-controller;
#gpio-cells = <2>;
};
- v2m_mmc_gpios: sys_mci@48 {
+ v2m_mmc_gpios: sys_mci {
compatible = "arm,vexpress-sysreg,sys_mci";
gpio-controller;
#gpio-cells = <2>;
};
- v2m_flash_gpios: sys_flash@4c {
+ v2m_flash_gpios: sys_flash {
compatible = "arm,vexpress-sysreg,sys_flash";
gpio-controller;
#gpio-cells = <2>;
@@ -285,7 +285,7 @@
};
};
- v2m_fixed_3v3: fixedregulator@0 {
+ v2m_fixed_3v3: fixed-regulator-0 {
compatible = "regulator-fixed";
regulator-name = "3V3";
regulator-min-microvolt = <3300000>;
@@ -317,49 +317,49 @@
leds {
compatible = "gpio-leds";
- user@1 {
+ user1 {
label = "v2m:green:user1";
gpios = <&v2m_led_gpios 0 0>;
linux,default-trigger = "heartbeat";
};
- user@2 {
+ user2 {
label = "v2m:green:user2";
gpios = <&v2m_led_gpios 1 0>;
linux,default-trigger = "mmc0";
};
- user@3 {
+ user3 {
label = "v2m:green:user3";
gpios = <&v2m_led_gpios 2 0>;
linux,default-trigger = "cpu0";
};
- user@4 {
+ user4 {
label = "v2m:green:user4";
gpios = <&v2m_led_gpios 3 0>;
linux,default-trigger = "cpu1";
};
- user@5 {
+ user5 {
label = "v2m:green:user5";
gpios = <&v2m_led_gpios 4 0>;
linux,default-trigger = "cpu2";
};
- user@6 {
+ user6 {
label = "v2m:green:user6";
gpios = <&v2m_led_gpios 5 0>;
linux,default-trigger = "cpu3";
};
- user@7 {
+ user7 {
label = "v2m:green:user7";
gpios = <&v2m_led_gpios 6 0>;
linux,default-trigger = "cpu4";
};
- user@8 {
+ user8 {
label = "v2m:green:user8";
gpios = <&v2m_led_gpios 7 0>;
linux,default-trigger = "cpu5";
@@ -370,7 +370,7 @@
compatible = "arm,vexpress,config-bus";
arm,vexpress,config-bridge = <&v2m_sysreg>;
- osc@0 {
+ oscclk0 {
/* MCC static memory clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 0>;
@@ -379,7 +379,7 @@
clock-output-names = "v2m:oscclk0";
};
- v2m_oscclk1: osc@1 {
+ v2m_oscclk1: oscclk1 {
/* CLCD clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 1>;
@@ -388,7 +388,7 @@
clock-output-names = "v2m:oscclk1";
};
- v2m_oscclk2: osc@2 {
+ v2m_oscclk2: oscclk2 {
/* IO FPGA peripheral clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 2>;
@@ -397,7 +397,7 @@
clock-output-names = "v2m:oscclk2";
};
- volt@0 {
+ volt-vio {
/* Logic level voltage */
compatible = "arm,vexpress-volt";
arm,vexpress-sysreg,func = <2 0>;
@@ -406,34 +406,34 @@
label = "VIO";
};
- temp@0 {
+ temp-mcc {
/* MCC internal operating temperature */
compatible = "arm,vexpress-temp";
arm,vexpress-sysreg,func = <4 0>;
label = "MCC";
};
- reset@0 {
+ reset {
compatible = "arm,vexpress-reset";
arm,vexpress-sysreg,func = <5 0>;
};
- muxfpga@0 {
+ muxfpga {
compatible = "arm,vexpress-muxfpga";
arm,vexpress-sysreg,func = <7 0>;
};
- shutdown@0 {
+ shutdown {
compatible = "arm,vexpress-shutdown";
arm,vexpress-sysreg,func = <8 0>;
};
- reboot@0 {
+ reboot {
compatible = "arm,vexpress-reboot";
arm,vexpress-sysreg,func = <9 0>;
};
- dvimode@0 {
+ dvimode {
compatible = "arm,vexpress-dvimode";
arm,vexpress-sysreg,func = <11 0>;
};
diff --git a/arch/arm/boot/dts/vexpress-v2p-ca15-tc1.dts b/arch/arm/boot/dts/vexpress-v2p-ca15-tc1.dts
index 9420053acc14..102838fcc588 100644
--- a/arch/arm/boot/dts/vexpress-v2p-ca15-tc1.dts
+++ b/arch/arm/boot/dts/vexpress-v2p-ca15-tc1.dts
@@ -55,14 +55,14 @@
compatible = "arm,hdlcd";
reg = <0 0x2b000000 0 0x1000>;
interrupts = <0 85 4>;
- clocks = <&oscclk5>;
+ clocks = <&hdlcd_clk>;
clock-names = "pxlclk";
};
memory-controller@2b0a0000 {
compatible = "arm,pl341", "arm,primecell";
reg = <0 0x2b0a0000 0 0x1000>;
- clocks = <&oscclk7>;
+ clocks = <&sys_pll>;
clock-names = "apb_pclk";
};
@@ -71,7 +71,7 @@
status = "disabled";
reg = <0 0x2b060000 0 0x1000>;
interrupts = <0 98 4>;
- clocks = <&oscclk7>;
+ clocks = <&sys_pll>;
clock-names = "apb_pclk";
};
@@ -92,7 +92,7 @@
reg = <0 0x7ffd0000 0 0x1000>;
interrupts = <0 86 4>,
<0 87 4>;
- clocks = <&oscclk7>;
+ clocks = <&sys_pll>;
clock-names = "apb_pclk";
};
@@ -104,7 +104,7 @@
<0 89 4>,
<0 90 4>,
<0 91 4>;
- clocks = <&oscclk7>;
+ clocks = <&sys_pll>;
clock-names = "apb_pclk";
};
@@ -126,7 +126,7 @@
compatible = "arm,vexpress,config-bus";
arm,vexpress,config-bridge = <&v2m_sysreg>;
- osc@0 {
+ oscclk0 {
/* CPU PLL reference clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 0>;
@@ -135,7 +135,7 @@
clock-output-names = "oscclk0";
};
- osc@4 {
+ oscclk4 {
/* Multiplexed AXI master clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 4>;
@@ -144,7 +144,7 @@
clock-output-names = "oscclk4";
};
- oscclk5: osc@5 {
+ hdlcd_clk: oscclk5 {
/* HDLCD PLL reference clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 5>;
@@ -153,7 +153,7 @@
clock-output-names = "oscclk5";
};
- smbclk: osc@6 {
+ smbclk: oscclk6 {
/* SMB clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 6>;
@@ -162,7 +162,7 @@
clock-output-names = "oscclk6";
};
- oscclk7: osc@7 {
+ sys_pll: oscclk7 {
/* SYS PLL reference clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 7>;
@@ -171,7 +171,7 @@
clock-output-names = "oscclk7";
};
- osc@8 {
+ oscclk8 {
/* DDR2 PLL reference clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 8>;
@@ -180,7 +180,7 @@
clock-output-names = "oscclk8";
};
- volt@0 {
+ volt-cores {
/* CPU core voltage */
compatible = "arm,vexpress-volt";
arm,vexpress-sysreg,func = <2 0>;
@@ -191,28 +191,28 @@
label = "Cores";
};
- amp@0 {
+ amp-cores {
/* Total current for the two cores */
compatible = "arm,vexpress-amp";
arm,vexpress-sysreg,func = <3 0>;
label = "Cores";
};
- temp@0 {
+ temp-dcc {
/* DCC internal temperature */
compatible = "arm,vexpress-temp";
arm,vexpress-sysreg,func = <4 0>;
label = "DCC";
};
- power@0 {
+ power-cores {
/* Total power */
compatible = "arm,vexpress-power";
arm,vexpress-sysreg,func = <12 0>;
label = "Cores";
};
- energy@0 {
+ energy {
/* Total energy */
compatible = "arm,vexpress-energy";
arm,vexpress-sysreg,func = <13 0>;
@@ -220,7 +220,7 @@
};
};
- smb {
+ smb@08000000 {
compatible = "simple-bus";
#address-cells = <2>;
@@ -280,4 +280,17 @@
/include/ "vexpress-v2m-rs1.dtsi"
};
+
+ site2: hsb@40000000 {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0 0 0x40000000 0x3fef0000>;
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 3>;
+ interrupt-map = <0 0 &gic 0 36 4>,
+ <0 1 &gic 0 37 4>,
+ <0 2 &gic 0 38 4>,
+ <0 3 &gic 0 39 4>;
+ };
};
diff --git a/arch/arm/boot/dts/vexpress-v2p-ca15_a7.dts b/arch/arm/boot/dts/vexpress-v2p-ca15_a7.dts
index 17f63f7dfd9e..0205c97efdef 100644
--- a/arch/arm/boot/dts/vexpress-v2p-ca15_a7.dts
+++ b/arch/arm/boot/dts/vexpress-v2p-ca15_a7.dts
@@ -109,7 +109,7 @@
compatible = "arm,hdlcd";
reg = <0 0x2b000000 0 0x1000>;
interrupts = <0 85 4>;
- clocks = <&oscclk5>;
+ clocks = <&hdlcd_clk>;
clock-names = "pxlclk";
};
@@ -227,7 +227,7 @@
compatible = "arm,vexpress,config-bus";
arm,vexpress,config-bridge = <&v2m_sysreg>;
- osc@0 {
+ oscclk0 {
/* A15 PLL 0 reference clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 0>;
@@ -236,7 +236,7 @@
clock-output-names = "oscclk0";
};
- osc@1 {
+ oscclk1 {
/* A15 PLL 1 reference clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 1>;
@@ -245,7 +245,7 @@
clock-output-names = "oscclk1";
};
- osc@2 {
+ oscclk2 {
/* A7 PLL 0 reference clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 2>;
@@ -254,7 +254,7 @@
clock-output-names = "oscclk2";
};
- osc@3 {
+ oscclk3 {
/* A7 PLL 1 reference clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 3>;
@@ -263,7 +263,7 @@
clock-output-names = "oscclk3";
};
- osc@4 {
+ oscclk4 {
/* External AXI master clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 4>;
@@ -272,7 +272,7 @@
clock-output-names = "oscclk4";
};
- oscclk5: osc@5 {
+ hdlcd_clk: oscclk5 {
/* HDLCD PLL reference clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 5>;
@@ -281,7 +281,7 @@
clock-output-names = "oscclk5";
};
- smbclk: osc@6 {
+ smbclk: oscclk6 {
/* Static memory controller clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 6>;
@@ -290,7 +290,7 @@
clock-output-names = "oscclk6";
};
- osc@7 {
+ oscclk7 {
/* SYS PLL reference clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 7>;
@@ -299,7 +299,7 @@
clock-output-names = "oscclk7";
};
- osc@8 {
+ oscclk8 {
/* DDR2 PLL reference clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 8>;
@@ -308,7 +308,7 @@
clock-output-names = "oscclk8";
};
- volt@0 {
+ volt-a15 {
/* A15 CPU core voltage */
compatible = "arm,vexpress-volt";
arm,vexpress-sysreg,func = <2 0>;
@@ -319,7 +319,7 @@
label = "A15 Vcore";
};
- volt@1 {
+ volt-a7 {
/* A7 CPU core voltage */
compatible = "arm,vexpress-volt";
arm,vexpress-sysreg,func = <2 1>;
@@ -330,49 +330,49 @@
label = "A7 Vcore";
};
- amp@0 {
+ amp-a15 {
/* Total current for the two A15 cores */
compatible = "arm,vexpress-amp";
arm,vexpress-sysreg,func = <3 0>;
label = "A15 Icore";
};
- amp@1 {
+ amp-a7 {
/* Total current for the three A7 cores */
compatible = "arm,vexpress-amp";
arm,vexpress-sysreg,func = <3 1>;
label = "A7 Icore";
};
- temp@0 {
+ temp-dcc {
/* DCC internal temperature */
compatible = "arm,vexpress-temp";
arm,vexpress-sysreg,func = <4 0>;
label = "DCC";
};
- power@0 {
+ power-a15 {
/* Total power for the two A15 cores */
compatible = "arm,vexpress-power";
arm,vexpress-sysreg,func = <12 0>;
label = "A15 Pcore";
};
- power@1 {
+ power-a7 {
/* Total power for the three A7 cores */
compatible = "arm,vexpress-power";
arm,vexpress-sysreg,func = <12 1>;
label = "A7 Pcore";
};
- energy@0 {
+ energy-a15 {
/* Total energy for the two A15 cores */
compatible = "arm,vexpress-energy";
arm,vexpress-sysreg,func = <13 0>, <13 1>;
label = "A15 Jcore";
};
- energy@2 {
+ energy-a7 {
/* Total energy for the three A7 cores */
compatible = "arm,vexpress-energy";
arm,vexpress-sysreg,func = <13 2>, <13 3>;
@@ -387,7 +387,7 @@
clocks = <&oscclk6a>;
clock-names = "apb_pclk";
port {
- etb_in_port: endpoint@0 {
+ etb_in_port: endpoint {
slave-mode;
remote-endpoint = <&replicator_out_port0>;
};
@@ -401,7 +401,7 @@
clocks = <&oscclk6a>;
clock-names = "apb_pclk";
port {
- tpiu_in_port: endpoint@0 {
+ tpiu_in_port: endpoint {
slave-mode;
remote-endpoint = <&replicator_out_port1>;
};
@@ -578,7 +578,7 @@
};
};
- smb {
+ smb@08000000 {
compatible = "simple-bus";
#address-cells = <2>;
@@ -638,4 +638,17 @@
/include/ "vexpress-v2m-rs1.dtsi"
};
+
+ site2: hsb@40000000 {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0 0 0x40000000 0x3fef0000>;
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 3>;
+ interrupt-map = <0 0 &gic 0 36 4>,
+ <0 1 &gic 0 37 4>,
+ <0 2 &gic 0 38 4>,
+ <0 3 &gic 0 39 4>;
+ };
};
diff --git a/arch/arm/boot/dts/vexpress-v2p-ca5s.dts b/arch/arm/boot/dts/vexpress-v2p-ca5s.dts
index d2709b73316b..1acecaf4b13d 100644
--- a/arch/arm/boot/dts/vexpress-v2p-ca5s.dts
+++ b/arch/arm/boot/dts/vexpress-v2p-ca5s.dts
@@ -57,14 +57,14 @@
compatible = "arm,hdlcd";
reg = <0x2a110000 0x1000>;
interrupts = <0 85 4>;
- clocks = <&oscclk3>;
+ clocks = <&hdlcd_clk>;
clock-names = "pxlclk";
};
memory-controller@2a150000 {
compatible = "arm,pl341", "arm,primecell";
reg = <0x2a150000 0x1000>;
- clocks = <&oscclk1>;
+ clocks = <&axi_clk>;
clock-names = "apb_pclk";
};
@@ -73,7 +73,7 @@
reg = <0x2a190000 0x1000>;
interrupts = <0 86 4>,
<0 87 4>;
- clocks = <&oscclk1>;
+ clocks = <&axi_clk>;
clock-names = "apb_pclk";
};
@@ -93,7 +93,7 @@
"arm,cortex-a9-global-timer";
reg = <0x2c000200 0x20>;
interrupts = <1 11 0x304>;
- clocks = <&oscclk0>;
+ clocks = <&cpu_clk>;
};
watchdog@2c000620 {
@@ -128,7 +128,7 @@
compatible = "arm,vexpress,config-bus";
arm,vexpress,config-bridge = <&v2m_sysreg>;
- oscclk0: osc@0 {
+ cpu_clk: oscclk0 {
/* CPU and internal AXI reference clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 0>;
@@ -137,7 +137,7 @@
clock-output-names = "oscclk0";
};
- oscclk1: osc@1 {
+ axi_clk: oscclk1 {
/* Multiplexed AXI master clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 1>;
@@ -146,7 +146,7 @@
clock-output-names = "oscclk1";
};
- osc@2 {
+ oscclk2 {
/* DDR2 */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 2>;
@@ -155,7 +155,7 @@
clock-output-names = "oscclk2";
};
- oscclk3: osc@3 {
+ hdlcd_clk: oscclk3 {
/* HDLCD */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 3>;
@@ -164,7 +164,7 @@
clock-output-names = "oscclk3";
};
- osc@4 {
+ oscclk4 {
/* Test chip gate configuration */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 4>;
@@ -173,7 +173,7 @@
clock-output-names = "oscclk4";
};
- smbclk: osc@5 {
+ smbclk: oscclk5 {
/* SMB clock */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 5>;
@@ -182,7 +182,7 @@
clock-output-names = "oscclk5";
};
- temp@0 {
+ temp-dcc {
/* DCC internal operating temperature */
compatible = "arm,vexpress-temp";
arm,vexpress-sysreg,func = <4 0>;
@@ -190,7 +190,7 @@
};
};
- smb {
+ smb@08000000 {
compatible = "simple-bus";
#address-cells = <2>;
@@ -250,4 +250,17 @@
/include/ "vexpress-v2m-rs1.dtsi"
};
+
+ site2: hsb@40000000 {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0 0x40000000 0x40000000>;
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 3>;
+ interrupt-map = <0 0 &gic 0 36 4>,
+ <0 1 &gic 0 37 4>,
+ <0 2 &gic 0 38 4>,
+ <0 3 &gic 0 39 4>;
+ };
};
diff --git a/arch/arm/boot/dts/vexpress-v2p-ca9.dts b/arch/arm/boot/dts/vexpress-v2p-ca9.dts
index d949facba376..b608a03ee02f 100644
--- a/arch/arm/boot/dts/vexpress-v2p-ca9.dts
+++ b/arch/arm/boot/dts/vexpress-v2p-ca9.dts
@@ -190,7 +190,7 @@
compatible = "arm,vexpress,config-bus";
arm,vexpress,config-bridge = <&v2m_sysreg>;
- osc@0 {
+ oscclk0: extsaxiclk {
/* ACLK clock to the AXI master port on the test chip */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 0>;
@@ -199,7 +199,7 @@
clock-output-names = "extsaxiclk";
};
- oscclk1: osc@1 {
+ oscclk1: clcdclk {
/* Reference clock for the CLCD */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 1>;
@@ -208,7 +208,7 @@
clock-output-names = "clcdclk";
};
- smbclk: oscclk2: osc@2 {
+ smbclk: oscclk2: tcrefclk {
/* Reference clock for the test chip internal PLLs */
compatible = "arm,vexpress-osc";
arm,vexpress-sysreg,func = <1 2>;
@@ -217,7 +217,7 @@
clock-output-names = "tcrefclk";
};
- volt@0 {
+ volt-vd10 {
/* Test Chip internal logic voltage */
compatible = "arm,vexpress-volt";
arm,vexpress-sysreg,func = <2 0>;
@@ -226,7 +226,7 @@
label = "VD10";
};
- volt@1 {
+ volt-vd10-s2 {
/* PL310, L2 cache, RAM cell supply (not PL310 logic) */
compatible = "arm,vexpress-volt";
arm,vexpress-sysreg,func = <2 1>;
@@ -235,7 +235,7 @@
label = "VD10_S2";
};
- volt@2 {
+ volt-vd10-s3 {
/* Cortex-A9 system supply, Cores, MPEs, SCU and PL310 logic */
compatible = "arm,vexpress-volt";
arm,vexpress-sysreg,func = <2 2>;
@@ -244,7 +244,7 @@
label = "VD10_S3";
};
- volt@3 {
+ volt-vcc1v8 {
/* DDR2 SDRAM and Test Chip DDR2 I/O supply */
compatible = "arm,vexpress-volt";
arm,vexpress-sysreg,func = <2 3>;
@@ -253,7 +253,7 @@
label = "VCC1V8";
};
- volt@4 {
+ volt-ddr2vtt {
/* DDR2 SDRAM VTT termination voltage */
compatible = "arm,vexpress-volt";
arm,vexpress-sysreg,func = <2 4>;
@@ -262,7 +262,7 @@
label = "DDR2VTT";
};
- volt@5 {
+ volt-vcc3v3 {
/* Local board supply for miscellaneous logic external to the Test Chip */
arm,vexpress-sysreg,func = <2 5>;
compatible = "arm,vexpress-volt";
@@ -271,28 +271,28 @@
label = "VCC3V3";
};
- amp@0 {
+ amp-vd10-s2 {
/* PL310, L2 cache, RAM cell supply (not PL310 logic) */
compatible = "arm,vexpress-amp";
arm,vexpress-sysreg,func = <3 0>;
label = "VD10_S2";
};
- amp@1 {
+ amp-vd10-s3 {
/* Cortex-A9 system supply, Cores, MPEs, SCU and PL310 logic */
compatible = "arm,vexpress-amp";
arm,vexpress-sysreg,func = <3 1>;
label = "VD10_S3";
};
- power@0 {
+ power-vd10-s2 {
/* PL310, L2 cache, RAM cell supply (not PL310 logic) */
compatible = "arm,vexpress-power";
arm,vexpress-sysreg,func = <12 0>;
label = "PVD10_S2";
};
- power@1 {
+ power-vd10-s3 {
/* Cortex-A9 system supply, Cores, MPEs, SCU and PL310 logic */
compatible = "arm,vexpress-power";
arm,vexpress-sysreg,func = <12 1>;
@@ -300,7 +300,7 @@
};
};
- smb {
+ smb@04000000 {
compatible = "simple-bus";
#address-cells = <2>;
@@ -359,4 +359,17 @@
/include/ "vexpress-v2m.dtsi"
};
+
+ site2: hsb@e0000000 {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0 0xe0000000 0x20000000>;
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 3>;
+ interrupt-map = <0 0 &gic 0 36 4>,
+ <0 1 &gic 0 37 4>,
+ <0 2 &gic 0 38 4>,
+ <0 3 &gic 0 39 4>;
+ };
};
diff --git a/arch/arm/boot/dts/vf-colibri-eval-v3.dtsi b/arch/arm/boot/dts/vf-colibri-eval-v3.dtsi
index 4d8b7f693535..a8a8e434fb27 100644
--- a/arch/arm/boot/dts/vf-colibri-eval-v3.dtsi
+++ b/arch/arm/boot/dts/vf-colibri-eval-v3.dtsi
@@ -50,6 +50,11 @@
clock-frequency = <16000000>;
};
+ panel: panel {
+ compatible = "edt,et057090dhu";
+ backlight = <&bl>;
+ };
+
reg_3v3: regulator-3v3 {
compatible = "regulator-fixed";
regulator-name = "3.3V";
@@ -83,6 +88,13 @@
status = "okay";
};
+&dcu0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_dcu0_1>;
+ fsl,panel = <&panel>;
+ status = "okay";
+};
+
&dspi1 {
status = "okay";
@@ -134,6 +146,10 @@
vin-supply = <&reg_3v3>;
};
+&tcon0 {
+ status = "okay";
+};
+
&uart0 {
status = "okay";
};
diff --git a/arch/arm/boot/dts/vf-colibri.dtsi b/arch/arm/boot/dts/vf-colibri.dtsi
index fda7f28101e1..b7417094dc11 100644
--- a/arch/arm/boot/dts/vf-colibri.dtsi
+++ b/arch/arm/boot/dts/vf-colibri.dtsi
@@ -40,6 +40,11 @@
*/
/ {
+ aliases {
+ ethernet0 = &fec1;
+ ethernet1 = &fec0;
+ };
+
bl: backlight {
compatible = "pwm-backlight";
pinctrl-names = "default";
@@ -125,8 +130,6 @@
};
&nfc {
- assigned-clocks = <&clks VF610_CLK_NFC>;
- assigned-clock-rates = <33000000>;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_nfc>;
status = "okay";
@@ -219,6 +222,39 @@
>;
};
+ pinctrl_dcu0_1: dcu0grp_1 {
+ fsl,pins = <
+ VF610_PAD_PTE0__DCU0_HSYNC 0x1902
+ VF610_PAD_PTE1__DCU0_VSYNC 0x1902
+ VF610_PAD_PTE2__DCU0_PCLK 0x1902
+ VF610_PAD_PTE4__DCU0_DE 0x1902
+ VF610_PAD_PTE5__DCU0_R0 0x1902
+ VF610_PAD_PTE6__DCU0_R1 0x1902
+ VF610_PAD_PTE7__DCU0_R2 0x1902
+ VF610_PAD_PTE8__DCU0_R3 0x1902
+ VF610_PAD_PTE9__DCU0_R4 0x1902
+ VF610_PAD_PTE10__DCU0_R5 0x1902
+ VF610_PAD_PTE11__DCU0_R6 0x1902
+ VF610_PAD_PTE12__DCU0_R7 0x1902
+ VF610_PAD_PTE13__DCU0_G0 0x1902
+ VF610_PAD_PTE14__DCU0_G1 0x1902
+ VF610_PAD_PTE15__DCU0_G2 0x1902
+ VF610_PAD_PTE16__DCU0_G3 0x1902
+ VF610_PAD_PTE17__DCU0_G4 0x1902
+ VF610_PAD_PTE18__DCU0_G5 0x1902
+ VF610_PAD_PTE19__DCU0_G6 0x1902
+ VF610_PAD_PTE20__DCU0_G7 0x1902
+ VF610_PAD_PTE21__DCU0_B0 0x1902
+ VF610_PAD_PTE22__DCU0_B1 0x1902
+ VF610_PAD_PTE23__DCU0_B2 0x1902
+ VF610_PAD_PTE24__DCU0_B3 0x1902
+ VF610_PAD_PTE25__DCU0_B4 0x1902
+ VF610_PAD_PTE26__DCU0_B5 0x1902
+ VF610_PAD_PTE27__DCU0_B6 0x1902
+ VF610_PAD_PTE28__DCU0_B7 0x1902
+ >;
+ };
+
pinctrl_dspi1: dspi1grp {
fsl,pins = <
VF610_PAD_PTD5__DSPI1_CS0 0x33e2
diff --git a/arch/arm/boot/dts/vf500-colibri.dtsi b/arch/arm/boot/dts/vf500-colibri.dtsi
index 3fe1f48c2aec..1a8a0efa19a6 100644
--- a/arch/arm/boot/dts/vf500-colibri.dtsi
+++ b/arch/arm/boot/dts/vf500-colibri.dtsi
@@ -69,6 +69,11 @@
};
};
+&nfc {
+ assigned-clocks = <&clks VF610_CLK_NFC>;
+ assigned-clock-rates = <33000000>;
+};
+
&iomuxc {
vf610-colibri {
pinctrl_touchctrl_idle: touchctrl_idle {
diff --git a/arch/arm/boot/dts/vf500.dtsi b/arch/arm/boot/dts/vf500.dtsi
index 9d372720ad3f..a3824e61bd72 100644
--- a/arch/arm/boot/dts/vf500.dtsi
+++ b/arch/arm/boot/dts/vf500.dtsi
@@ -81,6 +81,7 @@
compatible = "arm,cortex-a5-pmu";
interrupts = <7 IRQ_TYPE_LEVEL_HIGH>;
interrupt-affinity = <&a5_cpu>;
+ reg = <0x40089000 0x1000>;
};
};
diff --git a/arch/arm/boot/dts/vf610-colibri.dtsi b/arch/arm/boot/dts/vf610-colibri.dtsi
index ab4a29f95593..9ec9e337f5a8 100644
--- a/arch/arm/boot/dts/vf610-colibri.dtsi
+++ b/arch/arm/boot/dts/vf610-colibri.dtsi
@@ -50,3 +50,8 @@
reg = <0x80000000 0x10000000>;
};
};
+
+&nfc {
+ assigned-clocks = <&clks VF610_CLK_NFC>;
+ assigned-clock-rates = <50000000>;
+};
diff --git a/arch/arm/boot/dts/vf610-zii-dev-rev-b.dts b/arch/arm/boot/dts/vf610-zii-dev-rev-b.dts
new file mode 100644
index 000000000000..6c60b7f91104
--- /dev/null
+++ b/arch/arm/boot/dts/vf610-zii-dev-rev-b.dts
@@ -0,0 +1,734 @@
+/*
+ * Copyright (C) 2015, 2016 Zodiac Inflight Innovations
+ *
+ * Based on an original 'vf610-twr.dts' which is Copyright 2015,
+ * Freescale Semiconductor, Inc.
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "vf610.dtsi"
+
+/ {
+ model = "ZII VF610 Development Board, Rev B";
+ compatible = "zii,vf610dev-b", "zii,vf610dev", "fsl,vf610";
+
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
+ memory {
+ reg = <0x80000000 0x20000000>;
+ };
+
+ gpio-leds {
+ compatible = "gpio-leds";
+ pinctrl-0 = <&pinctrl_leds_debug>;
+ pinctrl-names = "default";
+
+ debug {
+ label = "zii:green:debug1";
+ gpios = <&gpio2 10 GPIO_ACTIVE_HIGH>;
+ linux,default-trigger = "heartbeat";
+ };
+ };
+
+ mdio-mux {
+ compatible = "mdio-mux-gpio";
+ pinctrl-0 = <&pinctrl_mdio_mux>;
+ pinctrl-names = "default";
+ gpios = <&gpio0 8 GPIO_ACTIVE_HIGH
+ &gpio0 9 GPIO_ACTIVE_HIGH
+ &gpio0 24 GPIO_ACTIVE_HIGH
+ &gpio0 25 GPIO_ACTIVE_HIGH>;
+ mdio-parent-bus = <&mdio1>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ mdio_mux_1: mdio@1 {
+ reg = <1>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ };
+
+ mdio_mux_2: mdio@2 {
+ reg = <2>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ };
+
+ mdio_mux_4: mdio@4 {
+ reg = <4>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ };
+
+ mdio_mux_8: mdio@8 {
+ reg = <8>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ };
+ };
+
+ dsa {
+ compatible = "marvell,dsa";
+ #address-cells = <2>;
+ #size-cells = <0>;
+ dsa,ethernet = <&fec1>;
+ dsa,mii-bus = <&mdio_mux_1>;
+
+ /* 6352 - Primary - 7 ports */
+ switch0: switch@0-0 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0x00 0>;
+ eeprom-length = <512>;
+
+ port@0 {
+ reg = <0>;
+ label = "lan0";
+ };
+
+ port@1 {
+ reg = <1>;
+ label = "lan1";
+ };
+
+ port@2 {
+ reg = <2>;
+ label = "lan2";
+ };
+
+ switch0port5: port@5 {
+ reg = <5>;
+ label = "dsa";
+ phy-mode = "rgmii-txid";
+ link = <&switch1port6
+ &switch2port9>;
+
+ fixed-link {
+ speed = <1000>;
+ full-duplex;
+ };
+ };
+
+ port@6 {
+ reg = <6>;
+ label = "cpu";
+
+ fixed-link {
+ speed = <100>;
+ full-duplex;
+ };
+ };
+
+ };
+
+ /* 6352 - Secondary - 7 ports */
+ switch1: switch@0-1 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0x00 1>;
+ eeprom-length = <512>;
+ mii-bus = <&mdio_mux_2>;
+
+ port@0 {
+ reg = <0>;
+ label = "lan3";
+ };
+
+ port@1 {
+ reg = <1>;
+ label = "lan4";
+ };
+
+ port@2 {
+ reg = <2>;
+ label = "lan5";
+ };
+
+ switch1port5: port@5 {
+ reg = <5>;
+ label = "dsa";
+ link = <&switch2port9>;
+ phy-mode = "rgmii-txid";
+
+ fixed-link {
+ speed = <1000>;
+ full-duplex;
+ };
+ };
+
+ switch1port6: port@6 {
+ reg = <6>;
+ label = "dsa";
+ phy-mode = "rgmii-txid";
+ link = <&switch0port5>;
+
+ fixed-link {
+ speed = <1000>;
+ full-duplex;
+ };
+ };
+ };
+
+ /* 6185 - 10 ports */
+ switch2: switch@0-2 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0x00 2>;
+ mii-bus = <&mdio_mux_4>;
+
+ port@0 {
+ reg = <0>;
+ label = "lan6";
+ };
+
+ port@1 {
+ reg = <1>;
+ label = "lan7";
+ };
+
+ port@2 {
+ reg = <2>;
+ label = "lan8";
+ };
+
+ port@3 {
+ reg = <3>;
+ label = "optical3";
+
+ fixed-link {
+ speed = <1000>;
+ full-duplex;
+ link-gpios = <&gpio6 2
+ GPIO_ACTIVE_HIGH>;
+ };
+ };
+
+ port@4 {
+ reg = <4>;
+ label = "optical4";
+
+ fixed-link {
+ speed = <1000>;
+ full-duplex;
+ link-gpios = <&gpio6 3
+ GPIO_ACTIVE_HIGH>;
+ };
+ };
+
+ switch2port9: port@9 {
+ reg = <9>;
+ label = "dsa";
+ phy-mode = "rgmii-txid";
+ link = <&switch1port5
+ &switch0port5>;
+
+ fixed-link {
+ speed = <1000>;
+ full-duplex;
+ };
+ };
+ };
+ };
+
+ reg_vcc_3v3_mcu: regulator-vcc-3v3-mcu {
+ compatible = "regulator-fixed";
+ regulator-name = "vcc_3v3_mcu";
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ };
+
+ usb0_vbus: regulator-usb0-vbus {
+ compatible = "regulator-fixed";
+ pinctrl-0 = <&pinctrl_usb_vbus>;
+ regulator-name = "usb_vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ enable-active-high;
+ regulator-always-on;
+ regulator-boot-on;
+ gpio = <&gpio0 6 0>;
+ };
+
+ spi0 {
+ compatible = "spi-gpio";
+ pinctrl-0 = <&pinctrl_gpio_spi0>;
+ pinctrl-names = "default";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ gpio-sck = <&gpio1 12 GPIO_ACTIVE_HIGH>;
+ gpio-mosi = <&gpio1 11 GPIO_ACTIVE_HIGH>;
+ gpio-miso = <&gpio1 10 GPIO_ACTIVE_HIGH>;
+ cs-gpios = <&gpio1 9 GPIO_ACTIVE_HIGH
+ &gpio1 8 GPIO_ACTIVE_HIGH>;
+ num-chipselects = <2>;
+
+ m25p128@0 {
+ compatible = "m25p128", "jedec,spi-nor";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ reg = <0>;
+ spi-max-frequency = <1000000>;
+ };
+
+ at93c46d@1 {
+ compatible = "atmel,at93c46d";
+ pinctrl-0 = <&pinctrl_gpio_e6185_eeprom_sel>;
+ pinctrl-names = "default";
+ #address-cells = <0>;
+ #size-cells = <0>;
+ reg = <1>;
+ spi-max-frequency = <500000>;
+ spi-cs-high;
+ data-size = <16>;
+ select-gpios = <&gpio4 4 GPIO_ACTIVE_HIGH>;
+ };
+ };
+};
+
+&adc0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_adc0_ad5>;
+ vref-supply = <&reg_vcc_3v3_mcu>;
+ status = "okay";
+};
+
+&edma0 {
+ status = "okay";
+};
+
+&esdhc1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_esdhc1>;
+ bus-width = <4>;
+ status = "okay";
+};
+
+&fec0 {
+ phy-mode = "rmii";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_fec0>;
+ status = "okay";
+};
+
+&fec1 {
+ phy-mode = "rmii";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_fec1>;
+ status = "okay";
+
+ fixed-link {
+ speed = <100>;
+ full-duplex;
+ };
+
+ mdio1: mdio {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "okay";
+ };
+};
+
+&i2c0 {
+ clock-frequency = <100000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c0>;
+ status = "okay";
+
+ gpio5: pca9554@20 {
+ compatible = "nxp,pca9554";
+ reg = <0x20>;
+ gpio-controller;
+ #gpio-cells = <2>;
+
+ };
+
+ gpio6: pca9554@22 {
+ compatible = "nxp,pca9554";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_pca9554_22>;
+ reg = <0x22>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-parent = <&gpio2>;
+ interrupts = <2 IRQ_TYPE_LEVEL_LOW>;
+ };
+
+ lm75@48 {
+ compatible = "national,lm75";
+ reg = <0x48>;
+ };
+
+ at24c04@50 {
+ compatible = "atmel,24c04";
+ reg = <0x50>;
+ };
+
+ at24c04@52 {
+ compatible = "atmel,24c04";
+ reg = <0x52>;
+ };
+
+ ds1682@6b {
+ compatible = "dallas,ds1682";
+ reg = <0x6b>;
+ };
+};
+
+&i2c1 {
+ clock-frequency = <100000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c1>;
+ status = "okay";
+};
+
+&i2c2 {
+ clock-frequency = <100000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c2>;
+ status = "okay";
+
+ tca9548@70 {
+ compatible = "nxp,pca9548";
+ pinctrl-0 = <&pinctrl_i2c_mux_reset>;
+ pinctrl-names = "default";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0x70>;
+ reset-gpios = <&gpio3 23 GPIO_ACTIVE_LOW>;
+
+ i2c@0 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0>;
+
+ sfp1: at24c04@50 {
+ compatible = "atmel,24c02";
+ reg = <0x50>;
+ };
+ };
+
+ i2c@1 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <1>;
+
+ sfp2: at24c04@50 {
+ compatible = "atmel,24c02";
+ reg = <0x50>;
+ };
+ };
+
+ i2c@2 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <2>;
+
+ sfp3: at24c04@50 {
+ compatible = "atmel,24c02";
+ reg = <0x50>;
+ };
+ };
+
+ i2c@3 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <3>;
+
+ sfp4: at24c04@50 {
+ compatible = "atmel,24c02";
+ reg = <0x50>;
+ };
+ };
+
+ i2c@4 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <4>;
+ };
+ };
+};
+
+&i2c3 {
+ clock-frequency = <100000>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_i2c3>;
+ status = "okay";
+};
+
+&uart0 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart0>;
+ status = "okay";
+};
+
+&uart1 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart1>;
+ status = "okay";
+};
+
+&uart2 {
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart2>;
+ status = "okay";
+};
+
+&usbdev0 {
+ disable-over-current;
+ vbus-supply = <&usb0_vbus>;
+ dr_mode = "host";
+ status = "okay";
+};
+
+&usbh1 {
+ disable-over-current;
+ status = "okay";
+};
+
+&usbmisc0 {
+ status = "okay";
+};
+
+&usbmisc1 {
+ status = "okay";
+};
+
+&usbphy0 {
+ status = "okay";
+};
+
+&usbphy1 {
+ status = "okay";
+};
+
+&iomuxc {
+ pinctrl_adc0_ad5: adc0ad5grp {
+ fsl,pins = <
+ VF610_PAD_PTC30__ADC0_SE5 0x00a1
+ >;
+ };
+
+ pinctrl_dspi0: dspi0grp {
+ fsl,pins = <
+ VF610_PAD_PTB18__DSPI0_CS1 0x1182
+ VF610_PAD_PTB19__DSPI0_CS0 0x1182
+ VF610_PAD_PTB20__DSPI0_SIN 0x1181
+ VF610_PAD_PTB21__DSPI0_SOUT 0x1182
+ VF610_PAD_PTB22__DSPI0_SCK 0x1182
+ >;
+ };
+
+ pinctrl_dspi2: dspi2grp {
+ fsl,pins = <
+ VF610_PAD_PTD31__DSPI2_CS1 0x1182
+ VF610_PAD_PTD30__DSPI2_CS0 0x1182
+ VF610_PAD_PTD29__DSPI2_SIN 0x1181
+ VF610_PAD_PTD28__DSPI2_SOUT 0x1182
+ VF610_PAD_PTD27__DSPI2_SCK 0x1182
+ >;
+ };
+
+ pinctrl_esdhc1: esdhc1grp {
+ fsl,pins = <
+ VF610_PAD_PTA24__ESDHC1_CLK 0x31ef
+ VF610_PAD_PTA25__ESDHC1_CMD 0x31ef
+ VF610_PAD_PTA26__ESDHC1_DAT0 0x31ef
+ VF610_PAD_PTA27__ESDHC1_DAT1 0x31ef
+ VF610_PAD_PTA28__ESDHC1_DATA2 0x31ef
+ VF610_PAD_PTA29__ESDHC1_DAT3 0x31ef
+ VF610_PAD_PTA7__GPIO_134 0x219d
+ >;
+ };
+
+ pinctrl_fec0: fec0grp {
+ fsl,pins = <
+ VF610_PAD_PTC0__ENET_RMII0_MDC 0x30d2
+ VF610_PAD_PTC1__ENET_RMII0_MDIO 0x30d3
+ VF610_PAD_PTC2__ENET_RMII0_CRS 0x30d1
+ VF610_PAD_PTC3__ENET_RMII0_RXD1 0x30d1
+ VF610_PAD_PTC4__ENET_RMII0_RXD0 0x30d1
+ VF610_PAD_PTC5__ENET_RMII0_RXER 0x30d1
+ VF610_PAD_PTC6__ENET_RMII0_TXD1 0x30d2
+ VF610_PAD_PTC7__ENET_RMII0_TXD0 0x30d2
+ VF610_PAD_PTC8__ENET_RMII0_TXEN 0x30d2
+ >;
+ };
+
+ pinctrl_fec1: fec1grp {
+ fsl,pins = <
+ VF610_PAD_PTA6__RMII_CLKIN 0x30d1
+ VF610_PAD_PTC9__ENET_RMII1_MDC 0x30d2
+ VF610_PAD_PTC10__ENET_RMII1_MDIO 0x30d3
+ VF610_PAD_PTC11__ENET_RMII1_CRS 0x30d1
+ VF610_PAD_PTC12__ENET_RMII1_RXD1 0x30d1
+ VF610_PAD_PTC13__ENET_RMII1_RXD0 0x30d1
+ VF610_PAD_PTC14__ENET_RMII1_RXER 0x30d1
+ VF610_PAD_PTC15__ENET_RMII1_TXD1 0x30d2
+ VF610_PAD_PTC16__ENET_RMII1_TXD0 0x30d2
+ VF610_PAD_PTC17__ENET_RMII1_TXEN 0x30d2
+ >;
+ };
+
+ pinctrl_gpio_e6185_eeprom_sel: pinctrl-gpio-e6185-eeprom-spi0 {
+ fsl,pins = <
+ VF610_PAD_PTE27__GPIO_132 0x33e2
+ >;
+ };
+
+ pinctrl_gpio_spi0: pinctrl-gpio-spi0 {
+ fsl,pins = <
+ VF610_PAD_PTB22__GPIO_44 0x33e2
+ VF610_PAD_PTB21__GPIO_43 0x33e2
+ VF610_PAD_PTB20__GPIO_42 0x33e1
+ VF610_PAD_PTB19__GPIO_41 0x33e2
+ VF610_PAD_PTB18__GPIO_40 0x33e2
+ >;
+ };
+
+ pinctrl_i2c_mux_reset: pinctrl-i2c-mux-reset {
+ fsl,pins = <
+ VF610_PAD_PTE14__GPIO_119 0x31c2
+ >;
+ };
+
+ pinctrl_i2c0: i2c0grp {
+ fsl,pins = <
+ VF610_PAD_PTB14__I2C0_SCL 0x37ff
+ VF610_PAD_PTB15__I2C0_SDA 0x37ff
+ >;
+ };
+
+ pinctrl_i2c1: i2c1grp {
+ fsl,pins = <
+ VF610_PAD_PTB16__I2C1_SCL 0x37ff
+ VF610_PAD_PTB17__I2C1_SDA 0x37ff
+ >;
+ };
+
+ pinctrl_i2c2: i2c2grp {
+ fsl,pins = <
+ VF610_PAD_PTA22__I2C2_SCL 0x37ff
+ VF610_PAD_PTA23__I2C2_SDA 0x37ff
+ >;
+ };
+
+ pinctrl_i2c3: i2c3grp {
+ fsl,pins = <
+ VF610_PAD_PTA30__I2C3_SCL 0x37ff
+ VF610_PAD_PTA31__I2C3_SDA 0x37ff
+ >;
+ };
+
+ pinctrl_leds_debug: pinctrl-leds-debug {
+ fsl,pins = <
+ VF610_PAD_PTD20__GPIO_74 0x31c2
+ >;
+ };
+
+ pinctrl_mdio_mux: pinctrl-mdio-mux {
+ fsl,pins = <
+ VF610_PAD_PTA18__GPIO_8 0x31c2
+ VF610_PAD_PTA19__GPIO_9 0x31c2
+ VF610_PAD_PTB2__GPIO_24 0x31c2
+ VF610_PAD_PTB3__GPIO_25 0x31c2
+ >;
+ };
+
+ pinctrl_pca9554_22: pinctrl-pca9554-22 {
+ fsl,pins = <
+ VF610_PAD_PTB28__GPIO_98 0x219d
+ >;
+ };
+
+ pinctrl_pwm0: pwm0grp {
+ fsl,pins = <
+ VF610_PAD_PTB0__FTM0_CH0 0x1582
+ VF610_PAD_PTB1__FTM0_CH1 0x1582
+ VF610_PAD_PTB2__FTM0_CH2 0x1582
+ VF610_PAD_PTB3__FTM0_CH3 0x1582
+ >;
+ };
+
+ pinctrl_qspi0: qspi0grp {
+ fsl,pins = <
+ VF610_PAD_PTD7__QSPI0_B_QSCK 0x31c3
+ VF610_PAD_PTD8__QSPI0_B_CS0 0x31ff
+ VF610_PAD_PTD9__QSPI0_B_DATA3 0x31c3
+ VF610_PAD_PTD10__QSPI0_B_DATA2 0x31c3
+ VF610_PAD_PTD11__QSPI0_B_DATA1 0x31c3
+ VF610_PAD_PTD12__QSPI0_B_DATA0 0x31c3
+ >;
+ };
+
+ pinctrl_uart0: uart0grp {
+ fsl,pins = <
+ VF610_PAD_PTB10__UART0_TX 0x21a2
+ VF610_PAD_PTB11__UART0_RX 0x21a1
+ >;
+ };
+
+ pinctrl_uart1: uart1grp {
+ fsl,pins = <
+ VF610_PAD_PTB23__UART1_TX 0x21a2
+ VF610_PAD_PTB24__UART1_RX 0x21a1
+ >;
+ };
+
+ pinctrl_uart2: uart2grp {
+ fsl,pins = <
+ VF610_PAD_PTD0__UART2_TX 0x21a2
+ VF610_PAD_PTD1__UART2_RX 0x21a1
+ >;
+ };
+
+ pinctrl_usb_vbus: pinctrl-usb-vbus {
+ fsl,pins = <
+ VF610_PAD_PTA16__GPIO_6 0x31c2
+ >;
+ };
+
+ pinctrl_usb0_host: usb0-host-grp {
+ fsl,pins = <
+ VF610_PAD_PTD6__GPIO_85 0x0062
+ >;
+ };
+};
diff --git a/arch/arm/boot/dts/vfxxx.dtsi b/arch/arm/boot/dts/vfxxx.dtsi
index 5c0975451d4e..2c13ec696ac5 100644
--- a/arch/arm/boot/dts/vfxxx.dtsi
+++ b/arch/arm/boot/dts/vfxxx.dtsi
@@ -95,6 +95,7 @@
compatible = "fsl,aips-bus", "simple-bus";
#address-cells = <1>;
#size-cells = <1>;
+ reg = <0x40000000 0x00070000>;
ranges;
mscm_cpucfg: cpucfg@40001000 {
@@ -310,6 +311,14 @@
<20000000>;
};
+ tcon0: timing-controller@4003d000 {
+ compatible = "fsl,vf610-tcon";
+ reg = <0x4003d000 0x1000>;
+ clocks = <&clks VF610_CLK_TCON0>;
+ clock-names = "ipg";
+ status = "disabled";
+ };
+
wdoga5: wdog@4003e000 {
compatible = "fsl,vf610-wdt", "fsl,imx21-wdt";
reg = <0x4003e000 0x1000>;
@@ -415,6 +424,17 @@
status = "disabled";
};
+ dcu0: dcu@40058000 {
+ compatible = "fsl,vf610-dcu";
+ reg = <0x40058000 0x1200>;
+ interrupts = <30 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clks VF610_CLK_DCU0>,
+ <&clks VF610_CLK_DCU0_DIV>;
+ clock-names = "dcu", "pix";
+ fsl,tcon = <&tcon0>;
+ status = "disabled";
+ };
+
i2c0: i2c@40066000 {
#address-cells = <1>;
#size-cells = <0>;
@@ -481,6 +501,7 @@
compatible = "fsl,aips-bus", "simple-bus";
#address-cells = <1>;
#size-cells = <1>;
+ reg = <0x40080000 0x0007f000>;
ranges;
edma1: dma-controller@40098000 {
diff --git a/arch/arm/boot/dts/wd-mbwe.dts b/arch/arm/boot/dts/wd-mbwe.dts
new file mode 100644
index 000000000000..ac3250ae8fc4
--- /dev/null
+++ b/arch/arm/boot/dts/wd-mbwe.dts
@@ -0,0 +1,112 @@
+/*
+ * wd-mbwe.dts - Device tree file for Western Digital My Book World Edition
+ *
+ * Copyright (C) 2016 Neil Armstrong <narmstrong@baylibre.com>
+ *
+ * Licensed under GPLv2 or later
+ */
+
+/dts-v1/;
+#include "ox810se.dtsi"
+
+/ {
+ model = "Western Digital My Book World Edition";
+
+ compatible = "wd,mbwe", "oxsemi,ox810se";
+
+ chosen {
+ bootargs = "console=ttyS1,115200n8 earlyprintk=serial";
+ };
+
+ memory {
+ /* 128Mbytes DDR */
+ reg = <0x48000000 0x8000000>;
+ };
+
+ aliases {
+ serial1 = &uart1;
+ gpio0 = &gpio0;
+ gpio1 = &gpio1;
+ };
+
+ gpio-keys-polled {
+ compatible = "gpio-keys-polled";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ poll-interval = <100>;
+
+ power {
+ label = "power";
+ gpios = <&gpio0 0 1>;
+ linux,code = <0x198>;
+ };
+
+ recovery {
+ label = "recovery";
+ gpios = <&gpio0 4 1>;
+ linux,code = <0xab>;
+ };
+ };
+
+ leds {
+ compatible = "gpio-leds";
+
+ a0 {
+ label = "activity0";
+ gpios = <&gpio0 25 0>;
+ default-state = "keep";
+ };
+
+ a1 {
+ label = "activity1";
+ gpios = <&gpio0 26 0>;
+ default-state = "keep";
+ };
+
+ a2 {
+ label = "activity2";
+ gpios = <&gpio0 5 0>;
+ default-state = "keep";
+ };
+
+ a3 {
+ label = "activity3";
+ gpios = <&gpio0 6 0>;
+ default-state = "keep";
+ };
+
+ a4 {
+ label = "activity4";
+ gpios = <&gpio0 7 0>;
+ default-state = "keep";
+ };
+
+ a5 {
+ label = "activity5";
+ gpios = <&gpio1 2 0>;
+ default-state = "keep";
+ };
+ };
+
+ i2c-gpio {
+ compatible = "i2c-gpio";
+ gpios = <&gpio0 3 0 /* sda */
+ &gpio0 2 0 /* scl */
+ >;
+ i2c-gpio,delay-us = <2>; /* ~100 kHz */
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ rtc0: rtc@68 {
+ compatible = "st,m41t00";
+ reg = <0x68>;
+ };
+ };
+};
+
+&uart1 {
+ status = "okay";
+
+ pinctrl-names = "default";
+ pinctrl-0 = <&pinctrl_uart1>;
+};
diff --git a/arch/arm/configs/aspeed_g4_defconfig b/arch/arm/configs/aspeed_g4_defconfig
new file mode 100644
index 000000000000..b6e54ee9bdbd
--- /dev/null
+++ b/arch/arm/configs/aspeed_g4_defconfig
@@ -0,0 +1,86 @@
+CONFIG_KERNEL_XZ=y
+CONFIG_SYSVIPC=y
+CONFIG_USELIB=y
+CONFIG_IRQ_DOMAIN_DEBUG=y
+CONFIG_NO_HZ_IDLE=y
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_LOG_BUF_SHIFT=14
+CONFIG_CGROUPS=y
+CONFIG_BLK_DEV_INITRD=y
+# CONFIG_RD_BZIP2 is not set
+# CONFIG_RD_LZMA is not set
+# CONFIG_RD_LZO is not set
+# CONFIG_RD_LZ4 is not set
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
+CONFIG_BPF_SYSCALL=y
+# CONFIG_SHMEM is not set
+# CONFIG_AIO is not set
+CONFIG_EMBEDDED=y
+# CONFIG_COMPAT_BRK is not set
+CONFIG_SLAB=y
+CONFIG_CC_STACKPROTECTOR_STRONG=y
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+# CONFIG_BLOCK is not set
+# CONFIG_ARCH_MULTI_V7 is not set
+CONFIG_ARCH_ASPEED=y
+CONFIG_MACH_ASPEED_G4=y
+CONFIG_DEBUG_RODATA=y
+CONFIG_AEABI=y
+CONFIG_UACCESS_WITH_MEMCPY=y
+CONFIG_SECCOMP=y
+# CONFIG_ATAGS is not set
+CONFIG_ZBOOT_ROM_TEXT=0x0
+CONFIG_ZBOOT_ROM_BSS=0x0
+CONFIG_ARM_APPENDED_DTB=y
+CONFIG_ARM_ATAG_DTB_COMPAT=y
+CONFIG_KEXEC=y
+# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
+CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_PREVENT_FIRMWARE_BUILD is not set
+# CONFIG_INPUT is not set
+# CONFIG_SERIO is not set
+# CONFIG_VT is not set
+# CONFIG_LEGACY_PTYS is not set
+# CONFIG_DEVKMEM is not set
+CONFIG_SERIAL_8250=y
+# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_NR_UARTS=6
+CONFIG_SERIAL_8250_RUNTIME_UARTS=6
+CONFIG_SERIAL_8250_EXTENDED=y
+CONFIG_SERIAL_8250_SHARE_IRQ=y
+CONFIG_SERIAL_OF_PLATFORM=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_USB_SUPPORT is not set
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_FIRMWARE_MEMMAP=y
+CONFIG_FANOTIFY=y
+CONFIG_PRINTK_TIME=1
+CONFIG_DYNAMIC_DEBUG=y
+CONFIG_STRIP_ASM_SYMS=y
+CONFIG_PAGE_POISONING=y
+CONFIG_DEBUG_KMEMLEAK=y
+CONFIG_DEBUG_SHIRQ=y
+CONFIG_LOCKUP_DETECTOR=y
+CONFIG_WQ_WATCHDOG=y
+# CONFIG_SCHED_DEBUG is not set
+CONFIG_SCHED_STACK_END_CHECK=y
+CONFIG_DEBUG_RT_MUTEXES=y
+CONFIG_DEBUG_WW_MUTEX_SLOWPATH=y
+# CONFIG_FTRACE is not set
+CONFIG_MEMTEST=y
+CONFIG_UBSAN=y
+CONFIG_DEBUG_USER=y
+CONFIG_DEBUG_LL=y
+CONFIG_DEBUG_LL_UART_8250=y
+CONFIG_DEBUG_UART_PHYS=0x1e784000
+CONFIG_DEBUG_UART_VIRT=0xe8784000
+CONFIG_EARLY_PRINTK=y
+CONFIG_DEBUG_SET_MODULE_RONX=y
+# CONFIG_XZ_DEC_X86 is not set
+# CONFIG_XZ_DEC_POWERPC is not set
+# CONFIG_XZ_DEC_IA64 is not set
+# CONFIG_XZ_DEC_SPARC is not set
diff --git a/arch/arm/configs/aspeed_g5_defconfig b/arch/arm/configs/aspeed_g5_defconfig
new file mode 100644
index 000000000000..892605167357
--- /dev/null
+++ b/arch/arm/configs/aspeed_g5_defconfig
@@ -0,0 +1,88 @@
+CONFIG_KERNEL_XZ=y
+CONFIG_SYSVIPC=y
+CONFIG_USELIB=y
+CONFIG_IRQ_DOMAIN_DEBUG=y
+CONFIG_NO_HZ_IDLE=y
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_LOG_BUF_SHIFT=14
+CONFIG_CGROUPS=y
+CONFIG_BLK_DEV_INITRD=y
+# CONFIG_RD_BZIP2 is not set
+# CONFIG_RD_LZMA is not set
+# CONFIG_RD_LZO is not set
+# CONFIG_RD_LZ4 is not set
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
+CONFIG_BPF_SYSCALL=y
+# CONFIG_SHMEM is not set
+# CONFIG_AIO is not set
+CONFIG_EMBEDDED=y
+# CONFIG_COMPAT_BRK is not set
+CONFIG_SLAB=y
+CONFIG_CC_STACKPROTECTOR_STRONG=y
+CONFIG_MODULES=y
+CONFIG_MODULE_UNLOAD=y
+# CONFIG_BLOCK is not set
+CONFIG_ARCH_MULTI_V6=y
+# CONFIG_ARCH_MULTI_V7 is not set
+CONFIG_ARCH_ASPEED=y
+CONFIG_MACH_ASPEED_G5=y
+CONFIG_DEBUG_RODATA=y
+CONFIG_AEABI=y
+CONFIG_UACCESS_WITH_MEMCPY=y
+CONFIG_SECCOMP=y
+# CONFIG_ATAGS is not set
+CONFIG_ZBOOT_ROM_TEXT=0x0
+CONFIG_ZBOOT_ROM_BSS=0x0
+CONFIG_ARM_APPENDED_DTB=y
+CONFIG_ARM_ATAG_DTB_COMPAT=y
+CONFIG_KEXEC=y
+# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
+CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_PREVENT_FIRMWARE_BUILD is not set
+# CONFIG_INPUT is not set
+# CONFIG_SERIO is not set
+# CONFIG_VT is not set
+# CONFIG_LEGACY_PTYS is not set
+# CONFIG_DEVKMEM is not set
+CONFIG_SERIAL_8250=y
+# CONFIG_SERIAL_8250_DEPRECATED_OPTIONS is not set
+CONFIG_SERIAL_8250_CONSOLE=y
+CONFIG_SERIAL_8250_NR_UARTS=6
+CONFIG_SERIAL_8250_RUNTIME_UARTS=6
+CONFIG_SERIAL_8250_EXTENDED=y
+CONFIG_SERIAL_8250_SHARE_IRQ=y
+CONFIG_SERIAL_OF_PLATFORM=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_USB_SUPPORT is not set
+# CONFIG_IOMMU_SUPPORT is not set
+CONFIG_FIRMWARE_MEMMAP=y
+CONFIG_FANOTIFY=y
+CONFIG_PRINTK_TIME=1
+CONFIG_DYNAMIC_DEBUG=y
+CONFIG_STRIP_ASM_SYMS=y
+CONFIG_PAGE_POISONING=y
+CONFIG_DEBUG_KMEMLEAK=y
+CONFIG_DEBUG_SHIRQ=y
+CONFIG_LOCKUP_DETECTOR=y
+CONFIG_WQ_WATCHDOG=y
+# CONFIG_SCHED_DEBUG is not set
+CONFIG_SCHED_STACK_END_CHECK=y
+CONFIG_DEBUG_RT_MUTEXES=y
+CONFIG_DEBUG_WW_MUTEX_SLOWPATH=y
+# CONFIG_FTRACE is not set
+CONFIG_MEMTEST=y
+CONFIG_UBSAN=y
+CONFIG_UBSAN_ALIGNMENT=y
+CONFIG_DEBUG_USER=y
+CONFIG_DEBUG_LL=y
+CONFIG_DEBUG_LL_UART_8250=y
+CONFIG_DEBUG_UART_PHYS=0x1e784000
+CONFIG_DEBUG_UART_VIRT=0xe8784000
+CONFIG_EARLY_PRINTK=y
+CONFIG_DEBUG_SET_MODULE_RONX=y
+# CONFIG_XZ_DEC_X86 is not set
+# CONFIG_XZ_DEC_POWERPC is not set
+# CONFIG_XZ_DEC_IA64 is not set
+# CONFIG_XZ_DEC_SPARC is not set
diff --git a/arch/arm/configs/bcm2835_defconfig b/arch/arm/configs/bcm2835_defconfig
index 1ef69fcbdf2e..79de828e49ad 100644
--- a/arch/arm/configs/bcm2835_defconfig
+++ b/arch/arm/configs/bcm2835_defconfig
@@ -38,10 +38,13 @@ CONFIG_CRASH_DUMP=y
CONFIG_VFP=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
# CONFIG_SUSPEND is not set
+CONFIG_PM=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
CONFIG_INET=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
CONFIG_NETWORK_SECMARK=y
CONFIG_NETFILTER=y
CONFIG_CFG80211=y
@@ -63,7 +66,6 @@ CONFIG_INPUT_EVDEV=y
CONFIG_SERIAL_AMBA_PL011=y
CONFIG_SERIAL_AMBA_PL011_CONSOLE=y
CONFIG_TTY_PRINTK=y
-CONFIG_I2C=y
CONFIG_I2C_CHARDEV=y
CONFIG_I2C_BCM2835=y
CONFIG_SPI=y
@@ -73,10 +75,10 @@ CONFIG_GPIO_SYSFS=y
# CONFIG_HWMON is not set
CONFIG_WATCHDOG=y
CONFIG_BCM2835_WDT=y
-CONFIG_FB=y
+CONFIG_DRM=y
+CONFIG_DRM_VC4=y
CONFIG_FB_SIMPLE=y
CONFIG_FRAMEBUFFER_CONSOLE=y
-CONFIG_FRAMEBUFFER_CONSOLE_DETECT_PRIMARY=y
CONFIG_SOUND=y
CONFIG_SND=y
CONFIG_SND_SOC=y
@@ -87,7 +89,7 @@ CONFIG_USB_DWC2=y
CONFIG_MMC=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
-CONFIG_MMC_SDHCI_BCM2835=y
+CONFIG_MMC_SDHCI_IPROC=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=y
CONFIG_LEDS_GPIO=y
@@ -122,6 +124,7 @@ CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
# CONFIG_MISC_FILESYSTEMS is not set
CONFIG_NFS_FS=y
+CONFIG_ROOT_NFS=y
CONFIG_NFSD=y
CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ASCII=y
diff --git a/arch/arm/configs/bcm_defconfig b/arch/arm/configs/bcm_defconfig
index 7117662bab2e..909049a280ec 100644
--- a/arch/arm/configs/bcm_defconfig
+++ b/arch/arm/configs/bcm_defconfig
@@ -12,7 +12,6 @@ CONFIG_CGROUPS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
-CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_SCHED=y
CONFIG_BLK_CGROUP=y
CONFIG_NAMESPACES=y
diff --git a/arch/arm/configs/davinci_all_defconfig b/arch/arm/configs/davinci_all_defconfig
index 235842c9ba96..f33d042b1273 100644
--- a/arch/arm/configs/davinci_all_defconfig
+++ b/arch/arm/configs/davinci_all_defconfig
@@ -2,6 +2,7 @@ CONFIG_EXPERIMENTAL=y
# CONFIG_SWAP is not set
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
+CONFIG_FHANDLE=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=14
@@ -70,8 +71,10 @@ CONFIG_MTD_CFI=m
CONFIG_MTD_CFI_INTELEXT=m
CONFIG_MTD_CFI_AMDSTD=m
CONFIG_MTD_PHYSMAP=m
+CONFIG_MTD_M25P80=m
CONFIG_MTD_NAND=m
CONFIG_MTD_NAND_DAVINCI=m
+CONFIG_MTD_SPI_NOR=m
CONFIG_PROC_DEVICETREE=y
CONFIG_BLK_DEV_LOOP=m
CONFIG_BLK_DEV_RAM=y
@@ -117,7 +120,10 @@ CONFIG_SERIAL_OF_PLATFORM=y
CONFIG_I2C=y
CONFIG_I2C_CHARDEV=y
CONFIG_I2C_DAVINCI=y
+CONFIG_SPI=y
+CONFIG_SPI_DAVINCI=m
CONFIG_PINCTRL_SINGLE=y
+CONFIG_GPIO_SYSFS=y
CONFIG_GPIO_PCF857X=y
CONFIG_WATCHDOG=y
CONFIG_DAVINCI_WATCHDOG=m
@@ -187,7 +193,6 @@ CONFIG_TI_EDMA=y
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
CONFIG_XFS_FS=m
-CONFIG_INOTIFY=y
CONFIG_AUTOFS4_FS=m
CONFIG_MSDOS_FS=y
CONFIG_VFAT_FS=y
diff --git a/arch/arm/configs/exynos_defconfig b/arch/arm/configs/exynos_defconfig
index 6ffd7e76f3ce..10f49ab5328e 100644
--- a/arch/arm/configs/exynos_defconfig
+++ b/arch/arm/configs/exynos_defconfig
@@ -28,6 +28,10 @@ CONFIG_CMDLINE="root=/dev/ram0 rw ramdisk=8192 initrd=0x41000000,8M console=ttyS
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_STAT_DETAILS=y
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
+CONFIG_CPU_FREQ_GOV_POWERSAVE=m
+CONFIG_CPU_FREQ_GOV_USERSPACE=m
+CONFIG_CPU_FREQ_GOV_CONSERVATIVE=m
+CONFIG_CPU_FREQ_GOV_SCHEDUTIL=m
CONFIG_CPUFREQ_DT=y
CONFIG_CPU_IDLE=y
CONFIG_ARM_EXYNOS_CPUIDLE=y
@@ -74,6 +78,7 @@ CONFIG_KEYBOARD_CROS_EC=y
CONFIG_MOUSE_CYAPA=y
CONFIG_INPUT_TOUCHSCREEN=y
CONFIG_TOUCHSCREEN_ATMEL_MXT=y
+CONFIG_TOUCHSCREEN_MMS114=y
CONFIG_INPUT_MISC=y
CONFIG_INPUT_MAX77693_HAPTIC=y
CONFIG_INPUT_MAX8997_HAPTIC=y
@@ -93,6 +98,7 @@ CONFIG_SPI=y
CONFIG_SPI_GPIO=y
CONFIG_SPI_S3C64XX=y
CONFIG_DEBUG_GPIO=y
+CONFIG_GPIO_WM8994=y
CONFIG_POWER_SUPPLY=y
CONFIG_BATTERY_SBS=y
CONFIG_BATTERY_MAX17040=y
@@ -134,6 +140,7 @@ CONFIG_REGULATOR_S2MPA01=y
CONFIG_REGULATOR_S2MPS11=y
CONFIG_REGULATOR_S5M8767=y
CONFIG_REGULATOR_TPS65090=y
+CONFIG_REGULATOR_WM8994=y
CONFIG_MEDIA_SUPPORT=m
CONFIG_MEDIA_CAMERA_SUPPORT=y
CONFIG_MEDIA_USB_SUPPORT=y
@@ -160,6 +167,8 @@ CONFIG_SOUND=y
CONFIG_SND=y
CONFIG_SND_SOC=y
CONFIG_SND_SOC_SAMSUNG=y
+CONFIG_SND_SOC_SAMSUNG_SMDK_WM8994=y
+CONFIG_SND_SOC_SMDK_WM8994_PCM=y
CONFIG_SND_SOC_SNOW=y
CONFIG_SND_SOC_ODROIDX2=y
CONFIG_SND_SIMPLE_CARD=y
@@ -210,6 +219,8 @@ CONFIG_EXTCON_MAX77693=y
CONFIG_EXTCON_MAX8997=y
CONFIG_IIO=y
CONFIG_EXYNOS_ADC=y
+CONFIG_CM36651=y
+CONFIG_AK8975=y
CONFIG_PWM=y
CONFIG_PWM_SAMSUNG=y
CONFIG_PHY_EXYNOS5250_SATA=y
diff --git a/arch/arm/configs/imx_v6_v7_defconfig b/arch/arm/configs/imx_v6_v7_defconfig
index 978c5deeb47c..21339ce29654 100644
--- a/arch/arm/configs/imx_v6_v7_defconfig
+++ b/arch/arm/configs/imx_v6_v7_defconfig
@@ -142,6 +142,7 @@ CONFIG_SMC911X=y
CONFIG_SMSC911X=y
# CONFIG_NET_VENDOR_STMICRO is not set
CONFIG_AT803X_PHY=y
+CONFIG_MICREL_PHY=y
CONFIG_USB_PEGASUS=m
CONFIG_USB_RTL8150=m
CONFIG_USB_RTL8152=m
@@ -162,7 +163,9 @@ CONFIG_MOUSE_PS2_ELANTECH=y
CONFIG_INPUT_TOUCHSCREEN=y
CONFIG_TOUCHSCREEN_EGALAX=y
CONFIG_TOUCHSCREEN_IMX6UL_TSC=y
+CONFIG_TOUCHSCREEN_EDT_FT5X06=y
CONFIG_TOUCHSCREEN_MC13783=y
+CONFIG_TOUCHSCREEN_TSC2004=y
CONFIG_TOUCHSCREEN_TSC2007=y
CONFIG_TOUCHSCREEN_STMPE=y
CONFIG_TOUCHSCREEN_SX8654=y
@@ -178,9 +181,11 @@ CONFIG_SERIAL_FSL_LPUART=y
CONFIG_SERIAL_FSL_LPUART_CONSOLE=y
# CONFIG_I2C_COMPAT is not set
CONFIG_I2C_CHARDEV=y
+CONFIG_I2C_MUX_GPIO=y
# CONFIG_I2C_HELPER_AUTO is not set
CONFIG_I2C_ALGOPCF=m
CONFIG_I2C_ALGOPCA=m
+CONFIG_I2C_GPIO=y
CONFIG_I2C_IMX=y
CONFIG_SPI=y
CONFIG_SPI_IMX=y
@@ -313,6 +318,7 @@ CONFIG_RTC_DRV_DS1307=y
CONFIG_RTC_DRV_ISL1208=y
CONFIG_RTC_DRV_PCF8523=y
CONFIG_RTC_DRV_PCF8563=y
+CONFIG_RTC_DRV_M41T80=y
CONFIG_RTC_DRV_MC13XXX=y
CONFIG_RTC_DRV_MXC=y
CONFIG_RTC_DRV_SNVS=y
diff --git a/arch/arm/configs/keystone_defconfig b/arch/arm/configs/keystone_defconfig
index 5bcc9cf9d8f1..faba04d93ad5 100644
--- a/arch/arm/configs/keystone_defconfig
+++ b/arch/arm/configs/keystone_defconfig
@@ -30,6 +30,8 @@ CONFIG_PCI=y
CONFIG_PCI_MSI=y
CONFIG_PCI_KEYSTONE=y
CONFIG_SMP=y
+CONFIG_HOTPLUG_CPU=y
+CONFIG_ARM_PSCI=y
CONFIG_PREEMPT=y
CONFIG_AEABI=y
CONFIG_HIGHMEM=y
diff --git a/arch/arm/configs/lpc32xx_defconfig b/arch/arm/configs/lpc32xx_defconfig
index 9f56ca3985ae..6ba430d2b5b2 100644
--- a/arch/arm/configs/lpc32xx_defconfig
+++ b/arch/arm/configs/lpc32xx_defconfig
@@ -17,8 +17,6 @@ CONFIG_MODULE_UNLOAD=y
# CONFIG_BLK_DEV_BSG is not set
CONFIG_PARTITION_ADVANCED=y
CONFIG_ARCH_LPC32XX=y
-CONFIG_GPIO_PCA953X=y
-CONFIG_KEYBOARD_GPIO_POLLED=y
CONFIG_PREEMPT=y
CONFIG_AEABI=y
CONFIG_ZBOOT_ROM_TEXT=0x0
@@ -27,10 +25,8 @@ CONFIG_ARM_APPENDED_DTB=y
CONFIG_ARM_ATAG_DTB_COMPAT=y
CONFIG_CMDLINE="console=ttyS0,115200n81 root=/dev/ram0"
CONFIG_CPU_IDLE=y
-CONFIG_FPE_NWFPE=y
CONFIG_VFP=y
# CONFIG_CORE_DUMP_DEFAULT_ELF_HEADERS is not set
-CONFIG_BINFMT_AOUT=y
CONFIG_NET=y
CONFIG_PACKET=y
CONFIG_UNIX=y
@@ -42,10 +38,7 @@ CONFIG_IP_PNP_BOOTP=y
# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
# CONFIG_INET_XFRM_MODE_TUNNEL is not set
# CONFIG_INET_XFRM_MODE_BEET is not set
-# CONFIG_INET_LRO is not set
# CONFIG_INET_DIAG is not set
-CONFIG_IPV6=y
-CONFIG_IPV6_PRIVACY=y
# CONFIG_WIRELESS is not set
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_DEVTMPFS=y
@@ -53,9 +46,7 @@ CONFIG_DEVTMPFS_MOUNT=y
# CONFIG_FW_LOADER is not set
CONFIG_MTD=y
CONFIG_MTD_CMDLINE_PARTS=y
-CONFIG_MTD_CHAR=y
CONFIG_MTD_BLOCK=y
-CONFIG_MTD_M25P80=y
CONFIG_MTD_NAND=y
CONFIG_MTD_NAND_SLC_LPC32XX=y
CONFIG_MTD_NAND_MLC_LPC32XX=y
@@ -70,7 +61,6 @@ CONFIG_EEPROM_AT25=y
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y
CONFIG_NETDEVICES=y
-CONFIG_MII=y
# CONFIG_NET_VENDOR_BROADCOM is not set
# CONFIG_NET_VENDOR_CIRRUS is not set
# CONFIG_NET_VENDOR_FARADAY is not set
@@ -91,6 +81,7 @@ CONFIG_INPUT_MOUSEDEV_SCREEN_Y=320
CONFIG_INPUT_EVDEV=y
# CONFIG_KEYBOARD_ATKBD is not set
CONFIG_KEYBOARD_GPIO=y
+CONFIG_KEYBOARD_GPIO_POLLED=y
CONFIG_KEYBOARD_LPC32XX=y
# CONFIG_INPUT_MOUSE is not set
CONFIG_INPUT_TOUCHSCREEN=y
@@ -99,8 +90,8 @@ CONFIG_SERIO_LIBPS2=y
# CONFIG_LEGACY_PTYS is not set
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
-CONFIG_SERIAL_HS_LPC32XX=y
CONFIG_SERIAL_OF_PLATFORM=y
+CONFIG_SERIAL_HS_LPC32XX=y
# CONFIG_HW_RANDOM is not set
CONFIG_I2C=y
CONFIG_I2C_CHARDEV=y
@@ -108,19 +99,20 @@ CONFIG_I2C_PNX=y
CONFIG_SPI=y
CONFIG_SPI_PL022=y
CONFIG_GPIO_SYSFS=y
-CONFIG_GPIO_GENERIC_PLATFORM=y
CONFIG_GPIO_EM=y
+CONFIG_GPIO_GENERIC_PLATFORM=y
CONFIG_GPIO_PL061=y
+CONFIG_GPIO_ADP5588=y
+CONFIG_GPIO_ADNP=y
CONFIG_GPIO_MAX7300=y
CONFIG_GPIO_MAX732X=y
+CONFIG_GPIO_PCA953X=y
CONFIG_GPIO_PCF857X=y
CONFIG_GPIO_SX150X=y
-CONFIG_GPIO_ADP5588=y
-CONFIG_GPIO_ADNP=y
+CONFIG_GPIO_74X164=y
CONFIG_GPIO_MAX7301=y
-CONFIG_GPIO_MCP23S08=y
CONFIG_GPIO_MC33880=y
-CONFIG_GPIO_74X164=y
+CONFIG_GPIO_MCP23S08=y
CONFIG_SENSORS_DS620=y
CONFIG_SENSORS_MAX6639=y
CONFIG_WATCHDOG=y
@@ -147,7 +139,6 @@ CONFIG_SND_DEBUG_VERBOSE=y
# CONFIG_SND_SPI is not set
CONFIG_SND_SOC=y
CONFIG_USB=y
-CONFIG_USB_PHY=y
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_STORAGE=y
CONFIG_USB_GADGET=y
@@ -179,6 +170,8 @@ CONFIG_DMADEVICES=y
CONFIG_AMBA_PL08X=y
CONFIG_STAGING=y
CONFIG_LPC32XX_ADC=y
+CONFIG_MEMORY=y
+CONFIG_ARM_PL172_MPMC=y
CONFIG_IIO=y
CONFIG_MAX517=y
CONFIG_PWM=y
@@ -198,9 +191,9 @@ CONFIG_NLS_CODEPAGE_437=y
CONFIG_NLS_ASCII=y
CONFIG_NLS_ISO8859_1=y
CONFIG_NLS_UTF8=y
+CONFIG_DEBUG_INFO=y
# CONFIG_SCHED_DEBUG is not set
# CONFIG_DEBUG_PREEMPT is not set
-CONFIG_DEBUG_INFO=y
# CONFIG_FTRACE is not set
# CONFIG_ARM_UNWIND is not set
CONFIG_DEBUG_LL=y
diff --git a/arch/arm/configs/mps2_defconfig b/arch/arm/configs/mps2_defconfig
new file mode 100644
index 000000000000..19d119f5b77e
--- /dev/null
+++ b/arch/arm/configs/mps2_defconfig
@@ -0,0 +1,109 @@
+CONFIG_NO_HZ_IDLE=y
+CONFIG_HIGH_RES_TIMERS=y
+CONFIG_LOG_BUF_SHIFT=16
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
+CONFIG_EXPERT=y
+# CONFIG_UID16 is not set
+# CONFIG_BASE_FULL is not set
+# CONFIG_FUTEX is not set
+# CONFIG_EPOLL is not set
+# CONFIG_SIGNALFD is not set
+# CONFIG_EVENTFD is not set
+# CONFIG_AIO is not set
+# CONFIG_VM_EVENT_COUNTERS is not set
+# CONFIG_SLUB_DEBUG is not set
+# CONFIG_BLOCK is not set
+# CONFIG_MMU is not set
+CONFIG_ARCH_MPS2=y
+CONFIG_SET_MEM_PARAM=y
+CONFIG_DRAM_BASE=0x21000000
+CONFIG_DRAM_SIZE=0x1000000
+CONFIG_PREEMPT_VOLUNTARY=y
+# CONFIG_ATAGS is not set
+CONFIG_ZBOOT_ROM_TEXT=0x0
+CONFIG_ZBOOT_ROM_BSS=0x0
+CONFIG_BINFMT_FLAT=y
+CONFIG_BINFMT_SHARED_FLAT=y
+# CONFIG_COREDUMP is not set
+# CONFIG_SUSPEND is not set
+CONFIG_NET=y
+CONFIG_PACKET=y
+CONFIG_UNIX=y
+CONFIG_INET=y
+CONFIG_IP_PNP=y
+CONFIG_IP_PNP_DHCP=y
+# CONFIG_INET_XFRM_MODE_TRANSPORT is not set
+# CONFIG_INET_XFRM_MODE_TUNNEL is not set
+# CONFIG_INET_XFRM_MODE_BEET is not set
+# CONFIG_INET_LRO is not set
+# CONFIG_INET_DIAG is not set
+# CONFIG_IPV6 is not set
+# CONFIG_WIRELESS is not set
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+# CONFIG_FW_LOADER is not set
+CONFIG_NETDEVICES=y
+# CONFIG_NET_CORE is not set
+# CONFIG_NET_VENDOR_ARC is not set
+# CONFIG_NET_CADENCE is not set
+# CONFIG_NET_VENDOR_BROADCOM is not set
+# CONFIG_NET_VENDOR_CIRRUS is not set
+# CONFIG_NET_VENDOR_EZCHIP is not set
+# CONFIG_NET_VENDOR_FARADAY is not set
+# CONFIG_NET_VENDOR_HISILICON is not set
+# CONFIG_NET_VENDOR_INTEL is not set
+# CONFIG_NET_VENDOR_MARVELL is not set
+# CONFIG_NET_VENDOR_MICREL is not set
+# CONFIG_NET_VENDOR_NATSEMI is not set
+# CONFIG_NET_VENDOR_QUALCOMM is not set
+# CONFIG_NET_VENDOR_RENESAS is not set
+# CONFIG_NET_VENDOR_ROCKER is not set
+# CONFIG_NET_VENDOR_SAMSUNG is not set
+# CONFIG_NET_VENDOR_SEEQ is not set
+CONFIG_SMSC911X=y
+# CONFIG_NET_VENDOR_STMICRO is not set
+# CONFIG_NET_VENDOR_VIA is not set
+# CONFIG_NET_VENDOR_WIZNET is not set
+# CONFIG_WLAN is not set
+# CONFIG_INPUT is not set
+# CONFIG_SERIO is not set
+# CONFIG_VT is not set
+# CONFIG_LEGACY_PTYS is not set
+CONFIG_SERIAL_NONSTANDARD=y
+# CONFIG_DEVKMEM is not set
+CONFIG_SERIAL_MPS2_UART_CONSOLE=y
+CONFIG_SERIAL_MPS2_UART=y
+# CONFIG_HW_RANDOM is not set
+# CONFIG_HWMON is not set
+CONFIG_WATCHDOG=y
+CONFIG_ARM_SP805_WATCHDOG=y
+CONFIG_MFD_SYSCON=y
+# CONFIG_USB_SUPPORT is not set
+CONFIG_NEW_LEDS=y
+CONFIG_LEDS_CLASS=y
+CONFIG_LEDS_SYSCON=y
+CONFIG_LEDS_TRIGGERS=y
+CONFIG_LEDS_TRIGGER_TIMER=y
+CONFIG_LEDS_TRIGGER_ONESHOT=y
+CONFIG_LEDS_TRIGGER_HEARTBEAT=y
+CONFIG_LEDS_TRIGGER_BACKLIGHT=y
+CONFIG_LEDS_TRIGGER_CPU=y
+CONFIG_LEDS_TRIGGER_DEFAULT_ON=y
+CONFIG_ARM_TIMER_SP804=y
+# CONFIG_DNOTIFY is not set
+# CONFIG_INOTIFY_USER is not set
+# CONFIG_MISC_FILESYSTEMS is not set
+CONFIG_NFS_FS=y
+CONFIG_NFS_V4=y
+CONFIG_NFS_V4_1=y
+CONFIG_NFS_V4_2=y
+CONFIG_ROOT_NFS=y
+CONFIG_NLS=y
+CONFIG_PRINTK_TIME=y
+CONFIG_DEBUG_INFO=y
+# CONFIG_ENABLE_WARN_DEPRECATED is not set
+# CONFIG_ENABLE_MUST_CHECK is not set
+CONFIG_DEBUG_FS=y
+# CONFIG_SCHED_DEBUG is not set
+# CONFIG_DEBUG_BUGVERBOSE is not set
+CONFIG_MEMTEST=y
diff --git a/arch/arm/configs/multi_v5_defconfig b/arch/arm/configs/multi_v5_defconfig
index e11d99d529ee..4f82656f7ea2 100644
--- a/arch/arm/configs/multi_v5_defconfig
+++ b/arch/arm/configs/multi_v5_defconfig
@@ -1,18 +1,26 @@
CONFIG_SYSVIPC=y
+CONFIG_FHANDLE=y
CONFIG_NO_HZ=y
CONFIG_HIGH_RES_TIMERS=y
CONFIG_LOG_BUF_SHIFT=19
+CONFIG_CGROUPS=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_PROFILING=y
CONFIG_OPROFILE=y
CONFIG_KPROBES=y
CONFIG_MODULES=y
CONFIG_MODULE_UNLOAD=y
-# CONFIG_BLK_DEV_BSG is not set
# CONFIG_ARCH_MULTI_V7 is not set
CONFIG_ARCH_MVEBU=y
CONFIG_MACH_KIRKWOOD=y
+CONFIG_ARCH_ASPEED=y
+CONFIG_MACH_ASPEED_G4=y
CONFIG_ARCH_MXC=y
+CONFIG_MACH_MX21ADS=y
+CONFIG_MACH_MX27ADS=y
+CONFIG_MACH_MX27_3DS=y
+CONFIG_MACH_IMX27_VISSTRIM_M10=y
+CONFIG_MACH_PCA100=y
CONFIG_MACH_IMX27_DT=y
CONFIG_SOC_IMX25=y
CONFIG_ARCH_ORION5X=y
@@ -60,13 +68,15 @@ CONFIG_IP_MULTICAST=y
CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
-# CONFIG_IPV6 is not set
CONFIG_NET_DSA=y
CONFIG_NET_SWITCHDEV=y
CONFIG_NET_PKTGEN=m
CONFIG_CFG80211=y
CONFIG_MAC80211=y
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
+CONFIG_DEVTMPFS=y
+CONFIG_DEVTMPFS_MOUNT=y
+CONFIG_IMX_WEIM=y
CONFIG_MTD=y
CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_BLOCK=y
@@ -91,10 +101,7 @@ CONFIG_SATA_AHCI=y
CONFIG_SATA_MV=y
CONFIG_NETDEVICES=y
CONFIG_NET_DSA_MV88E6060=y
-CONFIG_NET_DSA_MV88E6131=y
-CONFIG_NET_DSA_MV88E6123=y
-CONFIG_NET_DSA_MV88E6171=y
-CONFIG_NET_DSA_MV88E6352=y
+CONFIG_NET_DSA_MV88E6XXX=y
CONFIG_MV643XX_ETH=y
CONFIG_R8169=y
CONFIG_MARVELL_PHY=y
@@ -108,15 +115,22 @@ CONFIG_LEGACY_PTY_COUNT=16
# CONFIG_DEVKMEM is not set
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
-CONFIG_SERIAL_8250_RUNTIME_UARTS=2
+CONFIG_SERIAL_8250_NR_UARTS=6
+CONFIG_SERIAL_8250_RUNTIME_UARTS=6
+CONFIG_SERIAL_8250_EXTENDED=y
+CONFIG_SERIAL_8250_MANY_PORTS=y
CONFIG_SERIAL_OF_PLATFORM=y
+CONFIG_SERIAL_IMX=y
+CONFIG_SERIAL_IMX_CONSOLE=y
# CONFIG_HW_RANDOM is not set
CONFIG_I2C=y
# CONFIG_I2C_COMPAT is not set
CONFIG_I2C_CHARDEV=y
+CONFIG_I2C_IMX=y
CONFIG_I2C_MV64XXX=y
CONFIG_I2C_NOMADIK=y
CONFIG_SPI=y
+CONFIG_SPI_IMX=y
CONFIG_SPI_ORION=y
CONFIG_GPIO_SYSFS=y
CONFIG_POWER_RESET=y
@@ -129,12 +143,13 @@ CONFIG_SENSORS_LM75=y
CONFIG_SENSORS_LM85=y
CONFIG_THERMAL=y
CONFIG_KIRKWOOD_THERMAL=y
-CONFIG_WATCHDOG=y
CONFIG_ORION_WATCHDOG=y
+CONFIG_IMX2_WDT=y
# CONFIG_ABX500_CORE is not set
CONFIG_REGULATOR=y
CONFIG_REGULATOR_FIXED_VOLTAGE=y
CONFIG_FB=y
+CONFIG_FB_IMX=y
CONFIG_SOUND=y
CONFIG_SND=y
CONFIG_SND_SOC=y
@@ -158,7 +173,6 @@ CONFIG_HID_ZEROPLUS=y
CONFIG_USB=y
CONFIG_USB_XHCI_HCD=y
CONFIG_USB_EHCI_HCD=y
-CONFIG_USB_EHCI_ROOT_HUB_TT=y
CONFIG_USB_PRINTER=m
CONFIG_USB_STORAGE=y
CONFIG_USB_STORAGE_DATAFAB=y
@@ -166,6 +180,8 @@ CONFIG_USB_STORAGE_FREECOM=y
CONFIG_USB_STORAGE_SDDR09=y
CONFIG_USB_STORAGE_SDDR55=y
CONFIG_USB_STORAGE_JUMPSHOT=y
+CONFIG_USB_CHIPIDEA=y
+CONFIG_USB_CHIPIDEA_HOST=y
CONFIG_MMC=y
CONFIG_SDIO_UART=y
CONFIG_MMC_MVSDIO=y
diff --git a/arch/arm/configs/multi_v7_defconfig b/arch/arm/configs/multi_v7_defconfig
index 28234906a064..8f857564657f 100644
--- a/arch/arm/configs/multi_v7_defconfig
+++ b/arch/arm/configs/multi_v7_defconfig
@@ -105,7 +105,6 @@ CONFIG_ARCH_UNIPHIER=y
CONFIG_ARCH_U8500=y
CONFIG_MACH_HREFV60=y
CONFIG_MACH_SNOWBALL=y
-CONFIG_MACH_UX500_DT=y
CONFIG_ARCH_VEXPRESS=y
CONFIG_ARCH_VEXPRESS_CA9X4=y
CONFIG_ARCH_VEXPRESS_TC2_PM=y
@@ -119,7 +118,7 @@ CONFIG_PCI_MSI=y
CONFIG_PCI_MVEBU=y
CONFIG_PCI_TEGRA=y
CONFIG_PCI_RCAR_GEN2=y
-CONFIG_PCI_RCAR_GEN2_PCIE=y
+CONFIG_PCIE_RCAR=y
CONFIG_PCIEPORTBUS=y
CONFIG_SMP=y
CONFIG_NR_CPUS=16
@@ -131,6 +130,10 @@ CONFIG_KEXEC=y
CONFIG_CPU_FREQ=y
CONFIG_CPU_FREQ_STAT_DETAILS=y
CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
+CONFIG_CPU_FREQ_GOV_POWERSAVE=m
+CONFIG_CPU_FREQ_GOV_USERSPACE=m
+CONFIG_CPU_FREQ_GOV_CONSERVATIVE=m
+CONFIG_CPU_FREQ_GOV_SCHEDUTIL=m
CONFIG_QORIQ_CPUFREQ=y
CONFIG_CPU_IDLE=y
CONFIG_ARM_CPUIDLE=y
@@ -183,6 +186,7 @@ CONFIG_MTD_CMDLINE_PARTS=y
CONFIG_MTD_BLOCK=y
CONFIG_MTD_M25P80=y
CONFIG_MTD_NAND=y
+CONFIG_MTD_NAND_DENALI_DT=y
CONFIG_MTD_NAND_ATMEL=y
CONFIG_MTD_NAND_BRCMNAND=y
CONFIG_MTD_NAND_VF610_NFC=y
@@ -322,6 +326,7 @@ CONFIG_I2C_MUX=y
CONFIG_I2C_ARB_GPIO_CHALLENGE=m
CONFIG_I2C_MUX_PCA954x=y
CONFIG_I2C_MUX_PINCTRL=y
+CONFIG_I2C_DEMUX_PINCTRL=y
CONFIG_I2C_AT91=m
CONFIG_I2C_BCM2835=y
CONFIG_I2C_CADENCE=y
@@ -345,6 +350,7 @@ CONFIG_I2C_UNIPHIER_F=y
CONFIG_I2C_XILINX=y
CONFIG_I2C_RCAR=y
CONFIG_I2C_CROS_EC_TUNNEL=m
+CONFIG_I2C_SLAVE_EEPROM=y
CONFIG_SPI=y
CONFIG_SPI_ATMEL=m
CONFIG_SPI_BCM2835=y
@@ -410,6 +416,7 @@ CONFIG_POWER_RESET_GPIO=y
CONFIG_POWER_RESET_GPIO_RESTART=y
CONFIG_POWER_RESET_KEYSTONE=y
CONFIG_POWER_RESET_RMOBILE=y
+CONFIG_POWER_RESET_ST=y
CONFIG_POWER_AVS=y
CONFIG_ROCKCHIP_IODOMAIN=y
CONFIG_SENSORS_IIO_HWMON=y
@@ -430,6 +437,8 @@ CONFIG_WATCHDOG=y
CONFIG_DA9063_WATCHDOG=m
CONFIG_XILINX_WATCHDOG=y
CONFIG_ARM_SP805_WATCHDOG=y
+CONFIG_AT91SAM9X_WATCHDOG=y
+CONFIG_SAMA5D4_WATCHDOG=y
CONFIG_ORION_WATCHDOG=y
CONFIG_ST_LPC_WATCHDOG=y
CONFIG_SUNXI_WATCHDOG=y
@@ -438,9 +447,11 @@ CONFIG_TEGRA_WATCHDOG=m
CONFIG_MESON_WATCHDOG=y
CONFIG_DW_WATCHDOG=y
CONFIG_DIGICOLOR_WATCHDOG=y
+CONFIG_BCM2835_WDT=y
CONFIG_MFD_AS3711=y
CONFIG_MFD_AS3722=y
CONFIG_MFD_ATMEL_FLEXCOM=y
+CONFIG_MFD_ATMEL_HLCDC=m
CONFIG_MFD_BCM590XX=y
CONFIG_MFD_AXP20X=y
CONFIG_MFD_AXP20X_I2C=m
@@ -488,7 +499,7 @@ CONFIG_REGULATOR_MAX77693=m
CONFIG_REGULATOR_MAX77802=m
CONFIG_REGULATOR_PALMAS=y
CONFIG_REGULATOR_PBIAS=y
-CONFIG_REGULATOR_PWM=m
+CONFIG_REGULATOR_PWM=y
CONFIG_REGULATOR_QCOM_RPM=y
CONFIG_REGULATOR_QCOM_SMD_RPM=y
CONFIG_REGULATOR_S2MPS11=y
@@ -514,6 +525,7 @@ CONFIG_V4L_PLATFORM_DRIVERS=y
CONFIG_SOC_CAMERA=m
CONFIG_SOC_CAMERA_PLATFORM=m
CONFIG_VIDEO_RCAR_VIN=m
+CONFIG_VIDEO_ATMEL_ISI=m
CONFIG_V4L_MEM2MEM_DRIVERS=y
CONFIG_VIDEO_RENESAS_JPU=m
CONFIG_VIDEO_RENESAS_VSP1=m
@@ -532,7 +544,11 @@ CONFIG_DRM_EXYNOS_DSI=y
CONFIG_DRM_EXYNOS_FIMD=y
CONFIG_DRM_EXYNOS_HDMI=y
CONFIG_DRM_ROCKCHIP=m
+CONFIG_ROCKCHIP_ANALOGIX_DP=m
CONFIG_ROCKCHIP_DW_HDMI=m
+CONFIG_ROCKCHIP_DW_MIPI_DSI=m
+CONFIG_ROCKCHIP_INNO_HDMI=m
+CONFIG_DRM_ATMEL_HLCDC=m
CONFIG_DRM_RCAR_DU=m
CONFIG_DRM_RCAR_HDMI=y
CONFIG_DRM_RCAR_LVDS=y
@@ -564,6 +580,8 @@ CONFIG_SND_USB_AUDIO=m
CONFIG_SND_SOC=m
CONFIG_SND_ATMEL_SOC=m
CONFIG_SND_ATMEL_SOC_WM8904=m
+CONFIG_SND_ATMEL_SOC_PDMIC=m
+CONFIG_SND_BCM2835_SOC_I2S=m
CONFIG_SND_SOC_FSL_SAI=m
CONFIG_SND_SOC_ROCKCHIP=m
CONFIG_SND_SOC_ROCKCHIP_SPDIF=m
@@ -592,6 +610,7 @@ CONFIG_USB=y
CONFIG_USB_XHCI_HCD=y
CONFIG_USB_XHCI_MVEBU=y
CONFIG_USB_XHCI_RCAR=m
+CONFIG_USB_XHCI_TEGRA=m
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_MSM=m
CONFIG_USB_EHCI_EXYNOS=y
@@ -609,7 +628,7 @@ CONFIG_USB_STORAGE=y
CONFIG_USB_MUSB_HDRC=m
CONFIG_USB_MUSB_SUNXI=m
CONFIG_USB_DWC3=y
-CONFIG_USB_DWC2=m
+CONFIG_USB_DWC2=y
CONFIG_USB_CHIPIDEA=y
CONFIG_USB_CHIPIDEA_UDC=y
CONFIG_USB_CHIPIDEA_HOST=y
@@ -640,7 +659,6 @@ CONFIG_MMC_SDHCI_SPEAR=y
CONFIG_MMC_SDHCI_S3C=y
CONFIG_MMC_SDHCI_S3C_DMA=y
CONFIG_MMC_SDHCI_BCM_KONA=y
-CONFIG_MMC_SDHCI_BCM2835=y
CONFIG_MMC_SDHCI_ST=y
CONFIG_MMC_OMAP=y
CONFIG_MMC_OMAP_HS=y
@@ -767,6 +785,7 @@ CONFIG_EXTCON=y
CONFIG_TI_AEMIF=y
CONFIG_IIO=y
CONFIG_AT91_ADC=m
+CONFIG_AT91_SAMA5D2_ADC=m
CONFIG_BERLIN2_ADC=m
CONFIG_EXYNOS_ADC=m
CONFIG_VF610_ADC=m
@@ -775,6 +794,7 @@ CONFIG_AK8975=y
CONFIG_RASPBERRYPI_POWER=y
CONFIG_PWM=y
CONFIG_PWM_ATMEL=m
+CONFIG_PWM_ATMEL_HLCDC_PWM=m
CONFIG_PWM_ATMEL_TCB=m
CONFIG_PWM_FSL_FTM=m
CONFIG_PWM_RENESAS_TPU=y
@@ -784,12 +804,13 @@ CONFIG_PWM_SUN4I=y
CONFIG_PWM_TEGRA=y
CONFIG_PWM_VT8500=y
CONFIG_PHY_HIX5HD2_SATA=y
-CONFIG_PWM_STI=m
+CONFIG_PWM_STI=y
CONFIG_PWM_BCM2835=y
CONFIG_OMAP_USB2=y
CONFIG_TI_PIPE3=y
CONFIG_PHY_BERLIN_USB=y
CONFIG_PHY_BERLIN_SATA=y
+CONFIG_PHY_ROCKCHIP_DP=m
CONFIG_PHY_ROCKCHIP_USB=m
CONFIG_PHY_QCOM_APQ8064_SATA=m
CONFIG_PHY_MIPHY28LP=y
@@ -800,6 +821,7 @@ CONFIG_PHY_STIH407_USB=y
CONFIG_PHY_SUN4I_USB=y
CONFIG_PHY_SUN9I_USB=y
CONFIG_PHY_SAMSUNG_USB2=m
+CONFIG_PHY_TEGRA_XUSB=y
CONFIG_NVMEM=y
CONFIG_NVMEM_SUNXI_SID=y
CONFIG_BCM2835_MBOX=y
@@ -829,6 +851,9 @@ CONFIG_LOCKUP_DETECTOR=y
CONFIG_CRYPTO_DEV_TEGRA_AES=y
CONFIG_CPUFREQ_DT=y
CONFIG_KEYSTONE_IRQ=y
+CONFIG_HW_RANDOM=y
+CONFIG_HW_RANDOM_ST=y
+CONFIG_CRYPTO_DEV_MARVELL_CESA=m
CONFIG_CRYPTO_DEV_SUN4I_SS=m
CONFIG_CRYPTO_DEV_ROCKCHIP=m
CONFIG_ARM_CRYPTO=y
diff --git a/arch/arm/configs/mvebu_v5_defconfig b/arch/arm/configs/mvebu_v5_defconfig
index 4562006aae68..6051c51ca188 100644
--- a/arch/arm/configs/mvebu_v5_defconfig
+++ b/arch/arm/configs/mvebu_v5_defconfig
@@ -208,8 +208,8 @@ CONFIG_DEBUG_KERNEL=y
# CONFIG_DEBUG_PREEMPT is not set
# CONFIG_FTRACE is not set
CONFIG_DEBUG_USER=y
+CONFIG_CRYPTO_DEV_MARVELL_CESA=y
CONFIG_CRYPTO_CBC=m
CONFIG_CRYPTO_PCBC=m
-CONFIG_CRYPTO_DEV_MV_CESA=y
CONFIG_CRC_CCITT=y
CONFIG_LIBCRC32C=y
diff --git a/arch/arm/configs/mvebu_v7_defconfig b/arch/arm/configs/mvebu_v7_defconfig
index dc5797a2efab..486a4cabb0dd 100644
--- a/arch/arm/configs/mvebu_v7_defconfig
+++ b/arch/arm/configs/mvebu_v7_defconfig
@@ -66,7 +66,7 @@ CONFIG_SATA_AHCI=y
CONFIG_AHCI_MVEBU=y
CONFIG_SATA_MV=y
CONFIG_NETDEVICES=y
-CONFIG_NET_DSA_MV88E6171=y
+CONFIG_NET_DSA_MV88E6XXX=y
CONFIG_MV643XX_ETH=y
CONFIG_MVNETA=y
CONFIG_MVPP2=y
@@ -155,3 +155,4 @@ CONFIG_MAGIC_SYSRQ=y
CONFIG_TIMER_STATS=y
# CONFIG_DEBUG_BUGVERBOSE is not set
CONFIG_DEBUG_USER=y
+CONFIG_CRYPTO_DEV_MARVELL_CESA=y
diff --git a/arch/arm/configs/omap2plus_defconfig b/arch/arm/configs/omap2plus_defconfig
index 156bc88b8523..ac717cccd2b5 100644
--- a/arch/arm/configs/omap2plus_defconfig
+++ b/arch/arm/configs/omap2plus_defconfig
@@ -10,18 +10,17 @@ CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=16
CONFIG_CGROUPS=y
-CONFIG_CGROUP_FREEZER=y
-CONFIG_CGROUP_DEVICE=y
-CONFIG_CPUSETS=y
-CONFIG_CGROUP_CPUACCT=y
CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y
-CONFIG_MEMCG_KMEM=y
-CONFIG_CGROUP_PERF=y
+CONFIG_BLK_CGROUP=y
CONFIG_CGROUP_SCHED=y
CONFIG_CFS_BANDWIDTH=y
CONFIG_RT_GROUP_SCHED=y
-CONFIG_BLK_CGROUP=y
+CONFIG_CGROUP_FREEZER=y
+CONFIG_CPUSETS=y
+CONFIG_CGROUP_DEVICE=y
+CONFIG_CGROUP_CPUACCT=y
+CONFIG_CGROUP_PERF=y
CONFIG_NAMESPACES=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_EXPERT=y
@@ -50,7 +49,6 @@ CONFIG_SOC_AM33XX=y
CONFIG_SOC_AM43XX=y
CONFIG_SOC_DRA7XX=y
CONFIG_ARM_THUMBEE=y
-CONFIG_ARM_KERNMEM_PERMS=y
CONFIG_ARM_ERRATA_411920=y
CONFIG_ARM_ERRATA_430973=y
CONFIG_SMP=y
@@ -69,7 +67,7 @@ CONFIG_CPU_FREQ_DEFAULT_GOV_ONDEMAND=y
CONFIG_CPU_FREQ_GOV_POWERSAVE=y
CONFIG_CPU_FREQ_GOV_USERSPACE=y
CONFIG_CPU_FREQ_GOV_CONSERVATIVE=y
-CONFIG_CPUFREQ_DT=y
+CONFIG_CPUFREQ_DT=m
# CONFIG_ARM_OMAP2PLUS_CPUFREQ is not set
CONFIG_CPU_IDLE=y
CONFIG_BINFMT_MISC=y
@@ -86,7 +84,6 @@ CONFIG_IP_PNP=y
CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
CONFIG_IP_PNP_RARP=y
-# CONFIG_INET_LRO is not set
CONFIG_NETFILTER=y
CONFIG_PHONET=m
CONFIG_CAN=m
@@ -102,19 +99,18 @@ CONFIG_BT_HIDP=m
CONFIG_BT_HCIBTUSB=m
CONFIG_BT_HCIBTSDIO=m
CONFIG_BT_HCIUART=m
-CONFIG_BT_HCIUART_H4=y
CONFIG_BT_HCIUART_BCSP=y
CONFIG_BT_HCIUART_LL=y
CONFIG_BT_HCIUART_3WIRE=y
CONFIG_BT_HCIBCM203X=m
CONFIG_BT_HCIBPA10X=m
-CONFIG_CFG80211=m
CONFIG_BT_HCIBFUSB=m
CONFIG_BT_HCIVHCI=m
CONFIG_BT_MRVL=m
CONFIG_BT_MRVL_SDIO=m
CONFIG_AF_RXRPC=m
-CONFIG_RXKAD=m
+CONFIG_RXKAD=y
+CONFIG_CFG80211=m
CONFIG_MAC80211=m
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
@@ -129,6 +125,7 @@ CONFIG_MTD_CFI=y
CONFIG_MTD_CFI_INTELEXT=y
CONFIG_MTD_PHYSMAP=y
CONFIG_MTD_PHYSMAP_OF=y
+CONFIG_MTD_M25P80=m
CONFIG_MTD_NAND=y
CONFIG_MTD_NAND_ECC_BCH=y
CONFIG_MTD_NAND_OMAP2=y
@@ -136,9 +133,8 @@ CONFIG_MTD_NAND_OMAP_BCH=y
CONFIG_MTD_ONENAND=y
CONFIG_MTD_ONENAND_VERIFY_WRITE=y
CONFIG_MTD_ONENAND_OMAP2=y
-CONFIG_MTD_UBI=y
CONFIG_MTD_SPI_NOR=m
-CONFIG_MTD_M25P80=m
+CONFIG_MTD_UBI=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=16384
@@ -184,22 +180,19 @@ CONFIG_USB_NET_SMSC95XX=m
CONFIG_USB_ALI_M5632=y
CONFIG_USB_AN2720=y
CONFIG_USB_EPSON2888=y
-CONFIG_USB_EHCI_HCD=m
-CONFIG_USB_OHCI_HCD=m
CONFIG_USB_KC2190=y
CONFIG_USB_CDC_PHONET=m
CONFIG_LIBERTAS=m
CONFIG_LIBERTAS_USB=m
CONFIG_LIBERTAS_SDIO=m
CONFIG_LIBERTAS_DEBUG=y
-CONFIG_WL_TI=y
+CONFIG_MWIFIEX=m
+CONFIG_MWIFIEX_SDIO=m
+CONFIG_MWIFIEX_USB=m
CONFIG_WL12XX=m
CONFIG_WL18XX=m
CONFIG_WLCORE_SPI=m
CONFIG_WLCORE_SDIO=m
-CONFIG_MWIFIEX=m
-CONFIG_MWIFIEX_SDIO=m
-CONFIG_MWIFIEX_USB=m
CONFIG_INPUT_JOYDEV=m
CONFIG_INPUT_EVDEV=m
CONFIG_KEYBOARD_ATKBD=m
@@ -211,10 +204,10 @@ CONFIG_KEYBOARD_TWL4030=m
CONFIG_INPUT_TOUCHSCREEN=y
CONFIG_TOUCHSCREEN_ADS7846=m
CONFIG_TOUCHSCREEN_EDT_FT5X06=m
+CONFIG_TOUCHSCREEN_TI_AM335X_TSC=m
CONFIG_TOUCHSCREEN_PIXCIR=m
CONFIG_TOUCHSCREEN_TSC2005=m
CONFIG_TOUCHSCREEN_TSC2007=m
-CONFIG_TOUCHSCREEN_TI_AM335X_TSC=m
CONFIG_INPUT_MISC=y
CONFIG_INPUT_TPS65218_PWRBUTTON=m
CONFIG_INPUT_TWL4030_PWRBUTTON=m
@@ -238,15 +231,14 @@ CONFIG_SPI_OMAP24XX=y
CONFIG_SPI_TI_QSPI=m
CONFIG_HSI=m
CONFIG_OMAP_SSI=m
-CONFIG_NOKIA_MODEM=m
CONFIG_SSI_PROTOCOL=m
CONFIG_PINCTRL_SINGLE=y
CONFIG_DEBUG_GPIO=y
CONFIG_GPIO_SYSFS=y
CONFIG_GPIO_PCA953X=m
CONFIG_GPIO_PCF857X=y
-CONFIG_GPIO_TWL4030=y
CONFIG_GPIO_PALMAS=y
+CONFIG_GPIO_TWL4030=y
CONFIG_W1=m
CONFIG_HDQ_MASTER_OMAP=m
CONFIG_BATTERY_BQ27XXX=m
@@ -273,11 +265,11 @@ CONFIG_DRA752_THERMAL=y
CONFIG_WATCHDOG=y
CONFIG_OMAP_WATCHDOG=m
CONFIG_TWL4030_WATCHDOG=m
+CONFIG_MFD_TI_AM335X_TSCADC=m
CONFIG_MFD_PALMAS=y
CONFIG_MFD_TPS65217=y
CONFIG_MFD_TPS65218=y
CONFIG_MFD_TPS65910=y
-CONFIG_MFD_TI_AM335X_TSCADC=m
CONFIG_TWL6040_CORE=y
CONFIG_REGULATOR_LP872X=y
CONFIG_REGULATOR_PALMAS=y
@@ -292,8 +284,12 @@ CONFIG_REGULATOR_TPS65910=y
CONFIG_REGULATOR_TWL4030=y
CONFIG_MEDIA_SUPPORT=m
CONFIG_MEDIA_CAMERA_SUPPORT=y
+CONFIG_MEDIA_RC_SUPPORT=y
CONFIG_MEDIA_CONTROLLER=y
CONFIG_VIDEO_V4L2_SUBDEV_API=y
+CONFIG_LIRC=m
+CONFIG_RC_DEVICES=y
+CONFIG_IR_RX51=m
CONFIG_V4L_PLATFORM_DRIVERS=y
CONFIG_VIDEO_OMAP3=m
# CONFIG_MEDIA_SUBDRV_AUTOSELECT is not set
@@ -302,10 +298,10 @@ CONFIG_FB=y
CONFIG_FIRMWARE_EDID=y
CONFIG_FB_MODE_HELPERS=y
CONFIG_FB_TILEBLITTING=y
+CONFIG_FB_OMAP2=m
CONFIG_FB_OMAP5_DSS_HDMI=y
CONFIG_FB_OMAP2_DSS_SDI=y
CONFIG_FB_OMAP2_DSS_DSI=y
-CONFIG_FB_OMAP2=m
CONFIG_FB_OMAP2_ENCODER_TFP410=m
CONFIG_FB_OMAP2_ENCODER_TPD12S015=m
CONFIG_FB_OMAP2_CONNECTOR_DVI=m
@@ -341,13 +337,11 @@ CONFIG_SND_USB_AUDIO=m
CONFIG_SND_SOC=m
CONFIG_SND_EDMA_SOC=m
CONFIG_SND_AM33XX_SOC_EVM=m
-CONFIG_SND_DAVINCI_SOC_MCASP=m
CONFIG_SND_OMAP_SOC=m
CONFIG_SND_OMAP_SOC_OMAP_TWL4030=m
CONFIG_SND_OMAP_SOC_OMAP_ABE_TWL6040=m
CONFIG_SND_OMAP_SOC_OMAP3_PANDORA=m
CONFIG_SND_SIMPLE_CARD=m
-CONFIG_SND_SOC_TLV320AIC3X=m
CONFIG_HID_GENERIC=m
CONFIG_USB_HIDDEV=y
CONFIG_USB_KBD=m
@@ -356,6 +350,8 @@ CONFIG_USB=m
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
CONFIG_USB_MON=m
CONFIG_USB_XHCI_HCD=m
+CONFIG_USB_EHCI_HCD=m
+CONFIG_USB_OHCI_HCD=m
CONFIG_USB_WDM=m
CONFIG_USB_STORAGE=m
CONFIG_USB_MUSB_HDRC=m
@@ -372,6 +368,7 @@ CONFIG_USB_SERIAL_FTDI_SIO=m
CONFIG_USB_SERIAL_PL2303=m
CONFIG_USB_TEST=m
CONFIG_AM335X_PHY_USB=y
+CONFIG_TWL6030_USB=m
CONFIG_USB_GADGET=m
CONFIG_USB_GADGET_DEBUG=y
CONFIG_USB_GADGET_DEBUG_FILES=y
@@ -402,8 +399,8 @@ CONFIG_MMC_OMAP_HS=y
CONFIG_NEW_LEDS=y
CONFIG_LEDS_CLASS=m
CONFIG_LEDS_GPIO=m
-CONFIG_LEDS_PWM=m
CONFIG_LEDS_PCA963X=m
+CONFIG_LEDS_PWM=m
CONFIG_LEDS_TRIGGERS=y
CONFIG_LEDS_TRIGGER_TIMER=m
CONFIG_LEDS_TRIGGER_ONESHOT=m
@@ -414,22 +411,22 @@ CONFIG_LEDS_TRIGGER_GPIO=m
CONFIG_LEDS_TRIGGER_DEFAULT_ON=m
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_DS1307=m
-CONFIG_RTC_DRV_PALMAS=m
CONFIG_RTC_DRV_TWL92330=y
CONFIG_RTC_DRV_TWL4030=m
+CONFIG_RTC_DRV_PALMAS=m
CONFIG_RTC_DRV_OMAP=m
CONFIG_DMADEVICES=y
-CONFIG_TI_EDMA=y
CONFIG_DMA_OMAP=y
-CONFIG_IOMMU_SUPPORT=y
+CONFIG_TI_EDMA=y
CONFIG_OMAP_IOMMU=y
CONFIG_EXTCON=m
-CONFIG_EXTCON_USB_GPIO=m
CONFIG_EXTCON_PALMAS=m
+CONFIG_EXTCON_USB_GPIO=m
CONFIG_TI_EMIF=m
CONFIG_IIO=m
CONFIG_TI_AM335X_ADC=m
CONFIG_PWM=y
+CONFIG_PWM_OMAP_DMTIMER=m
CONFIG_PWM_TIECAP=m
CONFIG_PWM_TIEHRPWM=m
CONFIG_PWM_TWL=m
@@ -440,8 +437,6 @@ CONFIG_TI_PIPE3=y
CONFIG_TWL4030_USB=m
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
-# CONFIG_EXT3_FS_XATTR is not set
-CONFIG_EXT4_FS=y
CONFIG_FANOTIFY=y
CONFIG_QUOTA=y
CONFIG_QFMT_V2=y
@@ -476,7 +471,6 @@ CONFIG_PROVE_LOCKING=y
# CONFIG_DEBUG_BUGVERBOSE is not set
CONFIG_SECURITY=y
CONFIG_CRYPTO_MICHAEL_MIC=y
-# CONFIG_CRYPTO_ANSI_CPRNG is not set
CONFIG_CRC_CCITT=y
CONFIG_CRC_T10DIF=y
CONFIG_CRC_ITU_T=y
diff --git a/arch/arm/configs/orion5x_defconfig b/arch/arm/configs/orion5x_defconfig
index 6a5bc27538f1..27a70a7a50f6 100644
--- a/arch/arm/configs/orion5x_defconfig
+++ b/arch/arm/configs/orion5x_defconfig
@@ -85,8 +85,7 @@ CONFIG_ATA=y
CONFIG_SATA_MV=y
CONFIG_NETDEVICES=y
CONFIG_MII=y
-CONFIG_NET_DSA_MV88E6131=y
-CONFIG_NET_DSA_MV88E6123=y
+CONFIG_NET_DSA_MV88E6XXX=y
CONFIG_MV643XX_ETH=y
CONFIG_MARVELL_PHY=y
# CONFIG_INPUT_MOUSEDEV is not set
diff --git a/arch/arm/configs/sama5_defconfig b/arch/arm/configs/sama5_defconfig
index afbda413d61a..9cb1a85bb166 100644
--- a/arch/arm/configs/sama5_defconfig
+++ b/arch/arm/configs/sama5_defconfig
@@ -1,8 +1,10 @@
# CONFIG_LOCALVERSION_AUTO is not set
# CONFIG_SWAP is not set
CONFIG_SYSVIPC=y
+CONFIG_FHANDLE=y
CONFIG_IRQ_DOMAIN_DEBUG=y
CONFIG_LOG_BUF_SHIFT=14
+CONFIG_CGROUPS=y
CONFIG_BLK_DEV_INITRD=y
CONFIG_EMBEDDED=y
CONFIG_SLAB=y
@@ -133,6 +135,7 @@ CONFIG_WATCHDOG=y
CONFIG_AT91SAM9X_WATCHDOG=y
CONFIG_SAMA5D4_WATCHDOG=y
CONFIG_MFD_ATMEL_FLEXCOM=y
+CONFIG_MFD_ATMEL_HLCDC=y
CONFIG_REGULATOR=y
CONFIG_REGULATOR_FIXED_VOLTAGE=y
CONFIG_REGULATOR_ACT8865=y
@@ -142,11 +145,14 @@ CONFIG_V4L_PLATFORM_DRIVERS=y
CONFIG_SOC_CAMERA=y
CONFIG_VIDEO_ATMEL_ISI=y
CONFIG_SOC_CAMERA_OV2640=y
-CONFIG_FB=y
+CONFIG_DRM=y
+CONFIG_DRM_ATMEL_HLCDC=y
+CONFIG_DRM_PANEL_SIMPLE=y
CONFIG_BACKLIGHT_LCD_SUPPORT=y
-# CONFIG_LCD_CLASS_DEVICE is not set
+CONFIG_LCD_CLASS_DEVICE=y
CONFIG_BACKLIGHT_CLASS_DEVICE=y
# CONFIG_BACKLIGHT_GENERIC is not set
+CONFIG_BACKLIGHT_PWM=y
CONFIG_FRAMEBUFFER_CONSOLE=y
CONFIG_SOUND=y
CONFIG_SND=y
@@ -154,6 +160,7 @@ CONFIG_SND_SOC=y
CONFIG_SND_ATMEL_SOC=y
CONFIG_SND_ATMEL_SOC_WM8904=y
# CONFIG_HID_GENERIC is not set
+CONFIG_SND_ATMEL_SOC_PDMIC=y
CONFIG_USB=y
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
CONFIG_USB_EHCI_HCD=y
@@ -183,6 +190,7 @@ CONFIG_LEDS_TRIGGER_GPIO=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_AT91RM9200=y
CONFIG_DMADEVICES=y
+CONFIG_AT_HDMAC=y
CONFIG_AT_XDMAC=y
# CONFIG_IOMMU_SUPPORT is not set
CONFIG_IIO=y
@@ -190,6 +198,7 @@ CONFIG_AT91_ADC=y
CONFIG_AT91_SAMA5D2_ADC=y
CONFIG_PWM=y
CONFIG_PWM_ATMEL=y
+CONFIG_PWM_ATMEL_HLCDC_PWM=y
CONFIG_PWM_ATMEL_TCB=y
CONFIG_EXT4_FS=y
CONFIG_FANOTIFY=y
diff --git a/arch/arm/configs/shmobile_defconfig b/arch/arm/configs/shmobile_defconfig
index b7b714c3958c..f2d635566a13 100644
--- a/arch/arm/configs/shmobile_defconfig
+++ b/arch/arm/configs/shmobile_defconfig
@@ -24,7 +24,7 @@ CONFIG_PL310_ERRATA_588369=y
CONFIG_ARM_ERRATA_754322=y
CONFIG_PCI=y
CONFIG_PCI_RCAR_GEN2=y
-CONFIG_PCI_RCAR_GEN2_PCIE=y
+CONFIG_PCIE_RCAR=y
CONFIG_SMP=y
CONFIG_SCHED_MC=y
CONFIG_HAVE_ARM_ARCH_TIMER=y
@@ -99,11 +99,14 @@ CONFIG_SERIAL_SH_SCI=y
CONFIG_SERIAL_SH_SCI_NR_UARTS=20
CONFIG_SERIAL_SH_SCI_CONSOLE=y
CONFIG_I2C_CHARDEV=y
+CONFIG_I2C_MUX=y
+CONFIG_I2C_DEMUX_PINCTRL=y
CONFIG_I2C_EMEV2=y
CONFIG_I2C_GPIO=y
CONFIG_I2C_RIIC=y
CONFIG_I2C_SH_MOBILE=y
CONFIG_I2C_RCAR=y
+CONFIG_I2C_SLAVE_EEPROM=y
CONFIG_SPI=y
CONFIG_SPI_RSPI=y
CONFIG_SPI_SH_MSIOF=y
diff --git a/arch/arm/configs/tegra_defconfig b/arch/arm/configs/tegra_defconfig
index 3a36244e3cf6..6012a1ec779f 100644
--- a/arch/arm/configs/tegra_defconfig
+++ b/arch/arm/configs/tegra_defconfig
@@ -218,6 +218,7 @@ CONFIG_SND_SOC_TEGRA_ALC5632=y
CONFIG_SND_SOC_TEGRA_MAX98090=y
CONFIG_USB=y
CONFIG_USB_XHCI_HCD=y
+CONFIG_USB_XHCI_TEGRA=y
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_TEGRA=y
CONFIG_USB_ACM=y
@@ -266,6 +267,7 @@ CONFIG_IIO=y
CONFIG_AK8975=y
CONFIG_PWM=y
CONFIG_PWM_TEGRA=y
+CONFIG_PHY_TEGRA_XUSB=y
CONFIG_EXT2_FS=y
CONFIG_EXT2_FS_XATTR=y
CONFIG_EXT2_FS_POSIX_ACL=y
diff --git a/arch/arm/configs/u8500_defconfig b/arch/arm/configs/u8500_defconfig
index a691d590fbd1..b7b09189f1c5 100644
--- a/arch/arm/configs/u8500_defconfig
+++ b/arch/arm/configs/u8500_defconfig
@@ -45,7 +45,6 @@ CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_BLK_DEV_RAM=y
CONFIG_BLK_DEV_RAM_SIZE=65536
-CONFIG_SENSORS_BH1780=y
CONFIG_NETDEVICES=y
CONFIG_SMSC911X=y
CONFIG_SMSC_PHY=y
@@ -107,12 +106,12 @@ CONFIG_RTC_DRV_AB8500=y
CONFIG_RTC_DRV_PL031=y
CONFIG_DMADEVICES=y
CONFIG_STE_DMA40=y
-CONFIG_STAGING=y
-CONFIG_TOUCHSCREEN_SYNAPTICS_I2C_RMI4=y
CONFIG_HSEM_U8500=y
CONFIG_IIO=y
+CONFIG_IIO_BUFFER=y
CONFIG_IIO_ST_ACCEL_3AXIS=y
CONFIG_IIO_ST_GYRO_3AXIS=y
+CONFIG_BH1780=y
CONFIG_IIO_ST_MAGN_3AXIS=y
CONFIG_IIO_ST_PRESS=y
CONFIG_EXT2_FS=y
diff --git a/arch/arm/configs/zx_defconfig b/arch/arm/configs/zx_defconfig
index ab683fbbb954..d6253a48a9fa 100644
--- a/arch/arm/configs/zx_defconfig
+++ b/arch/arm/configs/zx_defconfig
@@ -7,7 +7,6 @@ CONFIG_CGROUPS=y
CONFIG_CGROUP_DEBUG=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_CPUACCT=y
-CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_SCHED=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_NAMESPACES=y
diff --git a/arch/arm/include/asm/cpuidle.h b/arch/arm/include/asm/cpuidle.h
index 3848259bebf8..baefe1d51517 100644
--- a/arch/arm/include/asm/cpuidle.h
+++ b/arch/arm/include/asm/cpuidle.h
@@ -36,7 +36,7 @@ struct cpuidle_ops {
struct of_cpuidle_method {
const char *method;
- struct cpuidle_ops *ops;
+ const struct cpuidle_ops *ops;
};
#define CPUIDLE_METHOD_OF_DECLARE(name, _method, _ops) \
diff --git a/arch/arm/include/asm/dma-mapping.h b/arch/arm/include/asm/dma-mapping.h
index 6ad1ceda62a5..a83570f10124 100644
--- a/arch/arm/include/asm/dma-mapping.h
+++ b/arch/arm/include/asm/dma-mapping.h
@@ -118,7 +118,7 @@ static inline unsigned long dma_max_pfn(struct device *dev)
#define arch_setup_dma_ops arch_setup_dma_ops
extern void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
- struct iommu_ops *iommu, bool coherent);
+ const struct iommu_ops *iommu, bool coherent);
#define arch_teardown_dma_ops arch_teardown_dma_ops
extern void arch_teardown_dma_ops(struct device *dev);
@@ -162,8 +162,6 @@ static inline bool dma_capable(struct device *dev, dma_addr_t addr, size_t size)
static inline void dma_mark_clean(void *addr, size_t size) { }
-extern int arm_dma_set_mask(struct device *dev, u64 dma_mask);
-
/**
* arm_dma_alloc - allocate consistent memory for DMA
* @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
diff --git a/arch/arm/include/asm/efi.h b/arch/arm/include/asm/efi.h
index e0eea72deb87..a708fa1f0905 100644
--- a/arch/arm/include/asm/efi.h
+++ b/arch/arm/include/asm/efi.h
@@ -17,34 +17,28 @@
#include <asm/mach/map.h>
#include <asm/mmu_context.h>
#include <asm/pgtable.h>
+#include <asm/ptrace.h>
#ifdef CONFIG_EFI
void efi_init(void);
int efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md);
+int efi_set_mapping_permissions(struct mm_struct *mm, efi_memory_desc_t *md);
-#define efi_call_virt(f, ...) \
-({ \
- efi_##f##_t *__f; \
- efi_status_t __s; \
- \
- efi_virtmap_load(); \
- __f = efi.systab->runtime->f; \
- __s = __f(__VA_ARGS__); \
- efi_virtmap_unload(); \
- __s; \
-})
+#define arch_efi_call_virt_setup() efi_virtmap_load()
+#define arch_efi_call_virt_teardown() efi_virtmap_unload()
-#define __efi_call_virt(f, ...) \
+#define arch_efi_call_virt(f, args...) \
({ \
efi_##f##_t *__f; \
- \
- efi_virtmap_load(); \
__f = efi.systab->runtime->f; \
- __f(__VA_ARGS__); \
- efi_virtmap_unload(); \
+ __f(args); \
})
+#define ARCH_EFI_IRQ_FLAGS_MASK \
+ (PSR_J_BIT | PSR_E_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT | \
+ PSR_T_BIT | MODE_MASK)
+
static inline void efi_set_pgd(struct mm_struct *mm)
{
check_and_switch_context(mm, NULL);
@@ -59,7 +53,16 @@ void efi_virtmap_unload(void);
/* arch specific definitions used by the stub code */
-#define efi_call_early(f, ...) sys_table_arg->boottime->f(__VA_ARGS__)
+#define efi_call_early(f, ...) sys_table_arg->boottime->f(__VA_ARGS__)
+#define __efi_call_early(f, ...) f(__VA_ARGS__)
+#define efi_is_64bit() (false)
+
+struct screen_info *alloc_screen_info(efi_system_table_t *sys_table_arg);
+void free_screen_info(efi_system_table_t *sys_table, struct screen_info *si);
+
+static inline void efifb_setup_from_dmi(struct screen_info *si, const char *opt)
+{
+}
/*
* A reasonable upper bound for the uncompressed kernel size is 32 MBytes,
diff --git a/arch/arm/include/asm/io.h b/arch/arm/include/asm/io.h
index 485982084fe9..781ef5fe235d 100644
--- a/arch/arm/include/asm/io.h
+++ b/arch/arm/include/asm/io.h
@@ -392,9 +392,18 @@ void __iomem *ioremap(resource_size_t res_cookie, size_t size);
#define ioremap ioremap
#define ioremap_nocache ioremap
+/*
+ * Do not use ioremap_cache for mapping memory. Use memremap instead.
+ */
void __iomem *ioremap_cache(resource_size_t res_cookie, size_t size);
#define ioremap_cache ioremap_cache
+/*
+ * Do not use ioremap_cached in new code. Provided for the benefit of
+ * the pxa2xx-flash MTD driver only.
+ */
+void __iomem *ioremap_cached(resource_size_t res_cookie, size_t size);
+
void __iomem *ioremap_wc(resource_size_t res_cookie, size_t size);
#define ioremap_wc ioremap_wc
#define ioremap_wt ioremap_wc
@@ -402,6 +411,9 @@ void __iomem *ioremap_wc(resource_size_t res_cookie, size_t size);
void iounmap(volatile void __iomem *iomem_cookie);
#define iounmap iounmap
+void *arch_memremap_wb(phys_addr_t phys_addr, size_t size);
+#define arch_memremap_wb arch_memremap_wb
+
/*
* io{read,write}{16,32}be() macros
*/
diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 385070180c25..96387d477e91 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -41,6 +41,8 @@
#define KVM_MAX_VCPUS VGIC_V2_MAX_CPUS
+#define KVM_REQ_VCPU_EXIT 8
+
u32 *kvm_vcpu_reg(struct kvm_vcpu *vcpu, u8 reg_num, u32 mode);
int __attribute_const__ kvm_target_cpu(void);
int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
@@ -187,6 +189,7 @@ struct kvm_vm_stat {
struct kvm_vcpu_stat {
u32 halt_successful_poll;
u32 halt_attempted_poll;
+ u32 halt_poll_invalid;
u32 halt_wakeup;
u32 hvc_exit_stat;
u64 wfe_exit_stat;
@@ -225,6 +228,10 @@ static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
struct kvm_vcpu __percpu **kvm_get_running_vcpus(void);
+void kvm_arm_halt_guest(struct kvm *kvm);
+void kvm_arm_resume_guest(struct kvm *kvm);
+void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
+void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
int kvm_arm_copy_coproc_indices(struct kvm_vcpu *vcpu, u64 __user *uindices);
unsigned long kvm_arm_num_coproc_regs(struct kvm_vcpu *vcpu);
@@ -265,6 +272,15 @@ static inline void __cpu_init_stage2(void)
kvm_call_hyp(__init_stage2_translation);
}
+static inline void __cpu_reset_hyp_mode(phys_addr_t boot_pgd_ptr,
+ phys_addr_t phys_idmap_start)
+{
+ /*
+ * TODO
+ * kvm_call_reset(boot_pgd_ptr, phys_idmap_start);
+ */
+}
+
static inline int kvm_arch_dev_ioctl_check_extension(long ext)
{
return 0;
@@ -277,11 +293,11 @@ void kvm_mmu_wp_memory_region(struct kvm *kvm, int slot);
struct kvm_vcpu *kvm_mpidr_to_vcpu(struct kvm *kvm, unsigned long mpidr);
-static inline void kvm_arch_hardware_disable(void) {}
static inline void kvm_arch_hardware_unsetup(void) {}
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
+static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
static inline void kvm_arm_init_debug(void) {}
static inline void kvm_arm_setup_debug(struct kvm_vcpu *vcpu) {}
diff --git a/arch/arm/include/asm/kvm_mmio.h b/arch/arm/include/asm/kvm_mmio.h
index d8e90c8cb5fa..f3a7de71f515 100644
--- a/arch/arm/include/asm/kvm_mmio.h
+++ b/arch/arm/include/asm/kvm_mmio.h
@@ -28,6 +28,9 @@ struct kvm_decode {
bool sign_extend;
};
+void kvm_mmio_write_buf(void *buf, unsigned int len, unsigned long data);
+unsigned long kvm_mmio_read_buf(const void *buf, unsigned int len);
+
int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run);
int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
phys_addr_t fault_ipa);
diff --git a/arch/arm/include/asm/kvm_mmu.h b/arch/arm/include/asm/kvm_mmu.h
index da44be9db4fa..f9a65061130b 100644
--- a/arch/arm/include/asm/kvm_mmu.h
+++ b/arch/arm/include/asm/kvm_mmu.h
@@ -47,6 +47,7 @@
#include <linux/highmem.h>
#include <asm/cacheflush.h>
#include <asm/pgalloc.h>
+#include <asm/stage2_pgtable.h>
int create_hyp_mappings(void *from, void *to);
int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
@@ -66,6 +67,7 @@ void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
phys_addr_t kvm_mmu_get_httbr(void);
phys_addr_t kvm_mmu_get_boot_httbr(void);
phys_addr_t kvm_get_idmap_vector(void);
+phys_addr_t kvm_get_idmap_start(void);
int kvm_mmu_init(void);
void kvm_clear_hyp_idmap(void);
@@ -105,14 +107,16 @@ static inline void kvm_clean_pte(pte_t *pte)
clean_pte_table(pte);
}
-static inline void kvm_set_s2pte_writable(pte_t *pte)
+static inline pte_t kvm_s2pte_mkwrite(pte_t pte)
{
- pte_val(*pte) |= L_PTE_S2_RDWR;
+ pte_val(pte) |= L_PTE_S2_RDWR;
+ return pte;
}
-static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
+static inline pmd_t kvm_s2pmd_mkwrite(pmd_t pmd)
{
- pmd_val(*pmd) |= L_PMD_S2_RDWR;
+ pmd_val(pmd) |= L_PMD_S2_RDWR;
+ return pmd;
}
static inline void kvm_set_s2pte_readonly(pte_t *pte)
@@ -135,22 +139,6 @@ static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
return (pmd_val(*pmd) & L_PMD_S2_RDWR) == L_PMD_S2_RDONLY;
}
-
-/* Open coded p*d_addr_end that can deal with 64bit addresses */
-#define kvm_pgd_addr_end(addr, end) \
-({ u64 __boundary = ((addr) + PGDIR_SIZE) & PGDIR_MASK; \
- (__boundary - 1 < (end) - 1)? __boundary: (end); \
-})
-
-#define kvm_pud_addr_end(addr,end) (end)
-
-#define kvm_pmd_addr_end(addr, end) \
-({ u64 __boundary = ((addr) + PMD_SIZE) & PMD_MASK; \
- (__boundary - 1 < (end) - 1)? __boundary: (end); \
-})
-
-#define kvm_pgd_index(addr) pgd_index(addr)
-
static inline bool kvm_page_empty(void *ptr)
{
struct page *ptr_page = virt_to_page(ptr);
@@ -159,19 +147,11 @@ static inline bool kvm_page_empty(void *ptr)
#define kvm_pte_table_empty(kvm, ptep) kvm_page_empty(ptep)
#define kvm_pmd_table_empty(kvm, pmdp) kvm_page_empty(pmdp)
-#define kvm_pud_table_empty(kvm, pudp) (0)
-
-#define KVM_PREALLOC_LEVEL 0
+#define kvm_pud_table_empty(kvm, pudp) false
-static inline void *kvm_get_hwpgd(struct kvm *kvm)
-{
- return kvm->arch.pgd;
-}
-
-static inline unsigned int kvm_get_hwpgd_size(void)
-{
- return PTRS_PER_S2_PGD * sizeof(pgd_t);
-}
+#define hyp_pte_table_empty(ptep) kvm_page_empty(ptep)
+#define hyp_pmd_table_empty(pmdp) kvm_page_empty(pmdp)
+#define hyp_pud_table_empty(pudp) false
struct kvm;
diff --git a/arch/arm/include/asm/memory.h b/arch/arm/include/asm/memory.h
index 9427fd632552..31c07a2cc100 100644
--- a/arch/arm/include/asm/memory.h
+++ b/arch/arm/include/asm/memory.h
@@ -288,19 +288,43 @@ static inline void *phys_to_virt(phys_addr_t x)
#define __va(x) ((void *)__phys_to_virt((phys_addr_t)(x)))
#define pfn_to_kaddr(pfn) __va((phys_addr_t)(pfn) << PAGE_SHIFT)
-extern unsigned long (*arch_virt_to_idmap)(unsigned long x);
+extern long long arch_phys_to_idmap_offset;
/*
- * These are for systems that have a hardware interconnect supported alias of
- * physical memory for idmap purposes. Most cases should leave these
+ * These are for systems that have a hardware interconnect supported alias
+ * of physical memory for idmap purposes. Most cases should leave these
* untouched. Note: this can only return addresses less than 4GiB.
*/
+static inline bool arm_has_idmap_alias(void)
+{
+ return IS_ENABLED(CONFIG_MMU) && arch_phys_to_idmap_offset != 0;
+}
+
+#define IDMAP_INVALID_ADDR ((u32)~0)
+
+static inline unsigned long phys_to_idmap(phys_addr_t addr)
+{
+ if (IS_ENABLED(CONFIG_MMU) && arch_phys_to_idmap_offset) {
+ addr += arch_phys_to_idmap_offset;
+ if (addr > (u32)~0)
+ addr = IDMAP_INVALID_ADDR;
+ }
+ return addr;
+}
+
+static inline phys_addr_t idmap_to_phys(unsigned long idmap)
+{
+ phys_addr_t addr = idmap;
+
+ if (IS_ENABLED(CONFIG_MMU) && arch_phys_to_idmap_offset)
+ addr -= arch_phys_to_idmap_offset;
+
+ return addr;
+}
+
static inline unsigned long __virt_to_idmap(unsigned long x)
{
- if (IS_ENABLED(CONFIG_MMU) && arch_virt_to_idmap)
- return arch_virt_to_idmap(x);
- else
- return __virt_to_phys(x);
+ return phys_to_idmap(__virt_to_phys(x));
}
#define virt_to_idmap(x) __virt_to_idmap((unsigned long)(x))
diff --git a/arch/arm/include/asm/mmu_context.h b/arch/arm/include/asm/mmu_context.h
index fa5b42d44985..3cc14dd8587c 100644
--- a/arch/arm/include/asm/mmu_context.h
+++ b/arch/arm/include/asm/mmu_context.h
@@ -15,6 +15,7 @@
#include <linux/compiler.h>
#include <linux/sched.h>
+#include <linux/preempt.h>
#include <asm/cacheflush.h>
#include <asm/cachetype.h>
#include <asm/proc-fns.h>
@@ -66,6 +67,7 @@ static inline void check_and_switch_context(struct mm_struct *mm,
cpu_switch_mm(mm->pgd, mm);
}
+#ifndef MODULE
#define finish_arch_post_lock_switch \
finish_arch_post_lock_switch
static inline void finish_arch_post_lock_switch(void)
@@ -87,6 +89,7 @@ static inline void finish_arch_post_lock_switch(void)
preempt_enable_no_resched();
}
}
+#endif /* !MODULE */
#endif /* CONFIG_MMU */
diff --git a/arch/arm/include/asm/pgtable-3level.h b/arch/arm/include/asm/pgtable-3level.h
index dc46398bc3a5..fa70db7c714b 100644
--- a/arch/arm/include/asm/pgtable-3level.h
+++ b/arch/arm/include/asm/pgtable-3level.h
@@ -281,11 +281,6 @@ static inline void set_pmd_at(struct mm_struct *mm, unsigned long addr,
flush_pmd_entry(pmdp);
}
-static inline int has_transparent_hugepage(void)
-{
- return 1;
-}
-
#endif /* __ASSEMBLY__ */
#endif /* _ASM_PGTABLE_3LEVEL_H */
diff --git a/arch/arm/include/asm/stage2_pgtable.h b/arch/arm/include/asm/stage2_pgtable.h
new file mode 100644
index 000000000000..460d616bb2d6
--- /dev/null
+++ b/arch/arm/include/asm/stage2_pgtable.h
@@ -0,0 +1,61 @@
+/*
+ * Copyright (C) 2016 - ARM Ltd
+ *
+ * stage2 page table helpers
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM_S2_PGTABLE_H_
+#define __ARM_S2_PGTABLE_H_
+
+#define stage2_pgd_none(pgd) pgd_none(pgd)
+#define stage2_pgd_clear(pgd) pgd_clear(pgd)
+#define stage2_pgd_present(pgd) pgd_present(pgd)
+#define stage2_pgd_populate(pgd, pud) pgd_populate(NULL, pgd, pud)
+#define stage2_pud_offset(pgd, address) pud_offset(pgd, address)
+#define stage2_pud_free(pud) pud_free(NULL, pud)
+
+#define stage2_pud_none(pud) pud_none(pud)
+#define stage2_pud_clear(pud) pud_clear(pud)
+#define stage2_pud_present(pud) pud_present(pud)
+#define stage2_pud_populate(pud, pmd) pud_populate(NULL, pud, pmd)
+#define stage2_pmd_offset(pud, address) pmd_offset(pud, address)
+#define stage2_pmd_free(pmd) pmd_free(NULL, pmd)
+
+#define stage2_pud_huge(pud) pud_huge(pud)
+
+/* Open coded p*d_addr_end that can deal with 64bit addresses */
+static inline phys_addr_t stage2_pgd_addr_end(phys_addr_t addr, phys_addr_t end)
+{
+ phys_addr_t boundary = (addr + PGDIR_SIZE) & PGDIR_MASK;
+
+ return (boundary - 1 < end - 1) ? boundary : end;
+}
+
+#define stage2_pud_addr_end(addr, end) (end)
+
+static inline phys_addr_t stage2_pmd_addr_end(phys_addr_t addr, phys_addr_t end)
+{
+ phys_addr_t boundary = (addr + PMD_SIZE) & PMD_MASK;
+
+ return (boundary - 1 < end - 1) ? boundary : end;
+}
+
+#define stage2_pgd_index(addr) pgd_index(addr)
+
+#define stage2_pte_table_empty(ptep) kvm_page_empty(ptep)
+#define stage2_pmd_table_empty(pmdp) kvm_page_empty(pmdp)
+#define stage2_pud_table_empty(pudp) false
+
+#endif /* __ARM_S2_PGTABLE_H_ */
diff --git a/arch/arm/kernel/bios32.c b/arch/arm/kernel/bios32.c
index 066f7f9ba411..05e61a2eeabe 100644
--- a/arch/arm/kernel/bios32.c
+++ b/arch/arm/kernel/bios32.c
@@ -550,9 +550,6 @@ char * __init pcibios_setup(char *str)
if (!strcmp(str, "debug")) {
debug_pci = 1;
return NULL;
- } else if (!strcmp(str, "firmware")) {
- pci_add_flags(PCI_PROBE_ONLY);
- return NULL;
}
return str;
}
diff --git a/arch/arm/kernel/cpuidle.c b/arch/arm/kernel/cpuidle.c
index 703926e7007b..a44b268e12e1 100644
--- a/arch/arm/kernel/cpuidle.c
+++ b/arch/arm/kernel/cpuidle.c
@@ -70,7 +70,7 @@ int arm_cpuidle_suspend(int index)
*
* Returns a struct cpuidle_ops pointer, NULL if not found.
*/
-static struct cpuidle_ops *__init arm_cpuidle_get_ops(const char *method)
+static const struct cpuidle_ops *__init arm_cpuidle_get_ops(const char *method)
{
struct of_cpuidle_method *m = __cpuidle_method_of_table;
@@ -88,7 +88,7 @@ static struct cpuidle_ops *__init arm_cpuidle_get_ops(const char *method)
*
* Get the method name defined in the 'enable-method' property, retrieve the
* associated cpuidle_ops and do a struct copy. This copy is needed because all
- * cpuidle_ops are tagged __initdata and will be unloaded after the init
+ * cpuidle_ops are tagged __initconst and will be unloaded after the init
* process.
*
 * Return 0 on success, -ENOENT if no 'enable-method' is defined, -EOPNOTSUPP if
@@ -97,7 +97,7 @@ static struct cpuidle_ops *__init arm_cpuidle_get_ops(const char *method)
static int __init arm_cpuidle_read_ops(struct device_node *dn, int cpu)
{
const char *enable_method;
- struct cpuidle_ops *ops;
+ const struct cpuidle_ops *ops;
enable_method = of_get_property(dn, "enable-method", NULL);
if (!enable_method)
diff --git a/arch/arm/kernel/efi.c b/arch/arm/kernel/efi.c
index ff8a9d8acfac..9f43ba012d10 100644
--- a/arch/arm/kernel/efi.c
+++ b/arch/arm/kernel/efi.c
@@ -11,6 +11,41 @@
#include <asm/mach/map.h>
#include <asm/mmu_context.h>
+static int __init set_permissions(pte_t *ptep, pgtable_t token,
+ unsigned long addr, void *data)
+{
+ efi_memory_desc_t *md = data;
+ pte_t pte = *ptep;
+
+ if (md->attribute & EFI_MEMORY_RO)
+ pte = set_pte_bit(pte, __pgprot(L_PTE_RDONLY));
+ if (md->attribute & EFI_MEMORY_XP)
+ pte = set_pte_bit(pte, __pgprot(L_PTE_XN));
+ set_pte_ext(ptep, pte, PTE_EXT_NG);
+ return 0;
+}
+
+int __init efi_set_mapping_permissions(struct mm_struct *mm,
+ efi_memory_desc_t *md)
+{
+ unsigned long base, size;
+
+ base = md->virt_addr;
+ size = md->num_pages << EFI_PAGE_SHIFT;
+
+ /*
+ * We can only use apply_to_page_range() if we can guarantee that the
+ * entire region was mapped using pages. This should be the case if the
+ * region does not cover any naturally aligned SECTION_SIZE sized
+ * blocks.
+ */
+ if (round_down(base + size, SECTION_SIZE) <
+ round_up(base, SECTION_SIZE) + SECTION_SIZE)
+ return apply_to_page_range(mm, base, size, set_permissions, md);
+
+ return 0;
+}
+
int __init efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md)
{
struct map_desc desc = {
@@ -34,5 +69,11 @@ int __init efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md)
desc.type = MT_DEVICE;
create_mapping_late(mm, &desc, true);
+
+ /*
+ * If stricter permissions were specified, apply them now.
+ */
+ if (md->attribute & (EFI_MEMORY_RO | EFI_MEMORY_XP))
+ return efi_set_mapping_permissions(mm, md);
return 0;
}
diff --git a/arch/arm/kernel/hw_breakpoint.c b/arch/arm/kernel/hw_breakpoint.c
index 6284779d64ee..b8df45883cf7 100644
--- a/arch/arm/kernel/hw_breakpoint.c
+++ b/arch/arm/kernel/hw_breakpoint.c
@@ -631,7 +631,7 @@ int arch_validate_hwbkpt_settings(struct perf_event *bp)
info->address &= ~alignment_mask;
info->ctrl.len <<= offset;
- if (!bp->overflow_handler) {
+ if (is_default_overflow_handler(bp)) {
/*
* Mismatch breakpoints are required for single-stepping
* breakpoints.
@@ -754,7 +754,7 @@ static void watchpoint_handler(unsigned long addr, unsigned int fsr,
* mismatch breakpoint so we can single-step over the
* watchpoint trigger.
*/
- if (!wp->overflow_handler)
+ if (is_default_overflow_handler(wp))
enable_single_step(wp, instruction_pointer(regs));
unlock:
diff --git a/arch/arm/kernel/perf_callchain.c b/arch/arm/kernel/perf_callchain.c
index 4e02ae5950ff..22bf1f64d99a 100644
--- a/arch/arm/kernel/perf_callchain.c
+++ b/arch/arm/kernel/perf_callchain.c
@@ -31,7 +31,7 @@ struct frame_tail {
*/
static struct frame_tail __user *
user_backtrace(struct frame_tail __user *tail,
- struct perf_callchain_entry *entry)
+ struct perf_callchain_entry_ctx *entry)
{
struct frame_tail buftail;
unsigned long err;
@@ -59,7 +59,7 @@ user_backtrace(struct frame_tail __user *tail,
}
void
-perf_callchain_user(struct perf_callchain_entry *entry, struct pt_regs *regs)
+perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
{
struct frame_tail __user *tail;
@@ -75,7 +75,7 @@ perf_callchain_user(struct perf_callchain_entry *entry, struct pt_regs *regs)
tail = (struct frame_tail __user *)regs->ARM_fp - 1;
- while ((entry->nr < PERF_MAX_STACK_DEPTH) &&
+ while ((entry->nr < entry->max_stack) &&
tail && !((unsigned long)tail & 0x3))
tail = user_backtrace(tail, entry);
}
@@ -89,13 +89,13 @@ static int
callchain_trace(struct stackframe *fr,
void *data)
{
- struct perf_callchain_entry *entry = data;
+ struct perf_callchain_entry_ctx *entry = data;
perf_callchain_store(entry, fr->pc);
return 0;
}
void
-perf_callchain_kernel(struct perf_callchain_entry *entry, struct pt_regs *regs)
+perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
{
struct stackframe fr;
diff --git a/arch/arm/kernel/process.c b/arch/arm/kernel/process.c
index 4adfb46e3ee9..4a803c5a1ff7 100644
--- a/arch/arm/kernel/process.c
+++ b/arch/arm/kernel/process.c
@@ -193,9 +193,9 @@ EXPORT_SYMBOL_GPL(thread_notify_head);
/*
* Free current thread data structures etc..
*/
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
{
- thread_notify(THREAD_NOTIFY_EXIT, current_thread_info());
+ thread_notify(THREAD_NOTIFY_EXIT, task_thread_info(tsk));
}
void flush_thread(void)
@@ -420,7 +420,8 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
npages = 1; /* for sigpage */
npages += vdso_total_pages;
- down_write(&mm->mmap_sem);
+ if (down_write_killable(&mm->mmap_sem))
+ return -EINTR;
hint = sigpage_addr(mm, npages);
addr = get_unmapped_area(NULL, hint, npages << PAGE_SHIFT, 0, 0);
if (IS_ERR_VALUE(addr)) {
diff --git a/arch/arm/kernel/reboot.c b/arch/arm/kernel/reboot.c
index 71a2ff9ec490..3fa867a2aae6 100644
--- a/arch/arm/kernel/reboot.c
+++ b/arch/arm/kernel/reboot.c
@@ -104,8 +104,6 @@ void machine_halt(void)
{
local_irq_disable();
smp_send_stop();
-
- local_irq_disable();
while (1);
}
@@ -150,6 +148,5 @@ void machine_restart(char *cmd)
/* Whoops - the platform was unable to reboot. Tell the user! */
printk("Reboot failed -- System halted\n");
- local_irq_disable();
while (1);
}
diff --git a/arch/arm/kernel/setup.c b/arch/arm/kernel/setup.c
index 2c4bea39cf22..7b5350060612 100644
--- a/arch/arm/kernel/setup.c
+++ b/arch/arm/kernel/setup.c
@@ -883,7 +883,8 @@ static void __init request_standard_resources(const struct machine_desc *mdesc)
request_resource(&ioport_resource, &lp2);
}
-#if defined(CONFIG_VGA_CONSOLE) || defined(CONFIG_DUMMY_CONSOLE)
+#if defined(CONFIG_VGA_CONSOLE) || defined(CONFIG_DUMMY_CONSOLE) || \
+ defined(CONFIG_EFI)
struct screen_info screen_info = {
.orig_video_lines = 30,
.orig_video_cols = 80,
@@ -940,6 +941,12 @@ static int __init init_machine_late(void)
late_initcall(init_machine_late);
#ifdef CONFIG_KEXEC
+/*
+ * The crash region must be aligned to 128MB to avoid
+ * zImage relocating below the reserved region.
+ */
+#define CRASH_ALIGN (128 << 20)
+
static inline unsigned long long get_total_mem(void)
{
unsigned long total;
@@ -967,6 +974,26 @@ static void __init reserve_crashkernel(void)
if (ret)
return;
+ if (crash_base <= 0) {
+ unsigned long long crash_max = idmap_to_phys((u32)~0);
+ crash_base = memblock_find_in_range(CRASH_ALIGN, crash_max,
+ crash_size, CRASH_ALIGN);
+ if (!crash_base) {
+ pr_err("crashkernel reservation failed - No suitable area found.\n");
+ return;
+ }
+ } else {
+ unsigned long long start;
+
+ start = memblock_find_in_range(crash_base,
+ crash_base + crash_size,
+ crash_size, SECTION_SIZE);
+ if (start != crash_base) {
+ pr_err("crashkernel reservation failed - memory is in use.\n");
+ return;
+ }
+ }
+
ret = memblock_reserve(crash_base, crash_size);
if (ret < 0) {
pr_warn("crashkernel reservation failed - memory is in use (0x%lx)\n",
diff --git a/arch/arm/kernel/smp.c b/arch/arm/kernel/smp.c
index baee70267f29..df90bc59bfce 100644
--- a/arch/arm/kernel/smp.c
+++ b/arch/arm/kernel/smp.c
@@ -644,9 +644,11 @@ void handle_IPI(int ipinr, struct pt_regs *regs)
break;
case IPI_CPU_BACKTRACE:
+ printk_nmi_enter();
irq_enter();
nmi_cpu_backtrace(regs);
irq_exit();
+ printk_nmi_exit();
break;
default:
diff --git a/arch/arm/kvm/Kconfig b/arch/arm/kvm/Kconfig
index 95a000515e43..02abfff68ee5 100644
--- a/arch/arm/kvm/Kconfig
+++ b/arch/arm/kvm/Kconfig
@@ -46,6 +46,13 @@ config KVM_ARM_HOST
---help---
Provides host support for ARM processors.
+config KVM_NEW_VGIC
+ bool "New VGIC implementation"
+ depends on KVM
+ default y
+ ---help---
+	  Uses the new VGIC implementation.
+
source drivers/vhost/Kconfig
endif # VIRTUALIZATION
diff --git a/arch/arm/kvm/Makefile b/arch/arm/kvm/Makefile
index eb1bf4309c13..a596b58f6d37 100644
--- a/arch/arm/kvm/Makefile
+++ b/arch/arm/kvm/Makefile
@@ -21,7 +21,18 @@ obj-$(CONFIG_KVM_ARM_HOST) += hyp/
obj-y += kvm-arm.o init.o interrupts.o
obj-y += arm.o handle_exit.o guest.o mmu.o emulate.o reset.o
obj-y += coproc.o coproc_a15.o coproc_a7.o mmio.o psci.o perf.o
+
+ifeq ($(CONFIG_KVM_NEW_VGIC),y)
+obj-y += $(KVM)/arm/vgic/vgic.o
+obj-y += $(KVM)/arm/vgic/vgic-init.o
+obj-y += $(KVM)/arm/vgic/vgic-irqfd.o
+obj-y += $(KVM)/arm/vgic/vgic-v2.o
+obj-y += $(KVM)/arm/vgic/vgic-mmio.o
+obj-y += $(KVM)/arm/vgic/vgic-mmio-v2.o
+obj-y += $(KVM)/arm/vgic/vgic-kvm-device.o
+else
obj-y += $(KVM)/arm/vgic.o
obj-y += $(KVM)/arm/vgic-v2.o
obj-y += $(KVM)/arm/vgic-v2-emul.o
+endif
obj-y += $(KVM)/arm/arch_timer.o
diff --git a/arch/arm/kvm/arm.c b/arch/arm/kvm/arm.c
index dded1b763c16..893941ec98dc 100644
--- a/arch/arm/kvm/arm.c
+++ b/arch/arm/kvm/arm.c
@@ -16,7 +16,6 @@
* Foundation, 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
-#include <linux/cpu.h>
#include <linux/cpu_pm.h>
#include <linux/errno.h>
#include <linux/err.h>
@@ -66,6 +65,8 @@ static DEFINE_SPINLOCK(kvm_vmid_lock);
static bool vgic_present;
+static DEFINE_PER_CPU(unsigned char, kvm_arm_hardware_enabled);
+
static void kvm_arm_set_running_vcpu(struct kvm_vcpu *vcpu)
{
BUG_ON(preemptible());
@@ -90,11 +91,6 @@ struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void)
return &kvm_arm_running_vcpu;
}
-int kvm_arch_hardware_enable(void)
-{
- return 0;
-}
-
int kvm_arch_vcpu_should_kick(struct kvm_vcpu *vcpu)
{
return kvm_vcpu_exiting_guest_mode(vcpu) == IN_GUEST_MODE;
@@ -448,7 +444,7 @@ static void update_vttbr(struct kvm *kvm)
kvm_next_vmid &= (1 << kvm_vmid_bits) - 1;
/* update vttbr to be used with the new vmid */
- pgd_phys = virt_to_phys(kvm_get_hwpgd(kvm));
+ pgd_phys = virt_to_phys(kvm->arch.pgd);
BUG_ON(pgd_phys & ~VTTBR_BADDR_MASK);
vmid = ((u64)(kvm->arch.vmid) << VTTBR_VMID_SHIFT) & VTTBR_VMID_MASK(kvm_vmid_bits);
kvm->arch.vttbr = pgd_phys | vmid;
@@ -459,7 +455,7 @@ static void update_vttbr(struct kvm *kvm)
static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
{
struct kvm *kvm = vcpu->kvm;
- int ret;
+ int ret = 0;
if (likely(vcpu->arch.has_run_once))
return 0;
@@ -482,9 +478,9 @@ static int kvm_vcpu_first_run_init(struct kvm_vcpu *vcpu)
* interrupts from the virtual timer with a userspace gic.
*/
if (irqchip_in_kernel(kvm) && vgic_initialized(kvm))
- kvm_timer_enable(kvm);
+ ret = kvm_timer_enable(vcpu);
- return 0;
+ return ret;
}
bool kvm_arch_intc_initialized(struct kvm *kvm)
@@ -492,30 +488,37 @@ bool kvm_arch_intc_initialized(struct kvm *kvm)
return vgic_initialized(kvm);
}
-static void kvm_arm_halt_guest(struct kvm *kvm) __maybe_unused;
-static void kvm_arm_resume_guest(struct kvm *kvm) __maybe_unused;
-
-static void kvm_arm_halt_guest(struct kvm *kvm)
+void kvm_arm_halt_guest(struct kvm *kvm)
{
int i;
struct kvm_vcpu *vcpu;
kvm_for_each_vcpu(i, vcpu, kvm)
vcpu->arch.pause = true;
- force_vm_exit(cpu_all_mask);
+ kvm_make_all_cpus_request(kvm, KVM_REQ_VCPU_EXIT);
+}
+
+void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu)
+{
+ vcpu->arch.pause = true;
+ kvm_vcpu_kick(vcpu);
}
-static void kvm_arm_resume_guest(struct kvm *kvm)
+void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu)
+{
+ struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
+
+ vcpu->arch.pause = false;
+ swake_up(wq);
+}
+
+void kvm_arm_resume_guest(struct kvm *kvm)
{
int i;
struct kvm_vcpu *vcpu;
- kvm_for_each_vcpu(i, vcpu, kvm) {
- struct swait_queue_head *wq = kvm_arch_vcpu_wq(vcpu);
-
- vcpu->arch.pause = false;
- swake_up(wq);
- }
+ kvm_for_each_vcpu(i, vcpu, kvm)
+ kvm_arm_resume_vcpu(vcpu);
}
static void vcpu_sleep(struct kvm_vcpu *vcpu)
@@ -1033,11 +1036,6 @@ long kvm_arch_vm_ioctl(struct file *filp,
}
}
-static void cpu_init_stage2(void *dummy)
-{
- __cpu_init_stage2();
-}
-
static void cpu_init_hyp_mode(void *dummy)
{
phys_addr_t boot_pgd_ptr;
@@ -1065,43 +1063,87 @@ static void cpu_hyp_reinit(void)
{
if (is_kernel_in_hyp_mode()) {
/*
- * cpu_init_stage2() is safe to call even if the PM
+ * __cpu_init_stage2() is safe to call even if the PM
* event was cancelled before the CPU was reset.
*/
- cpu_init_stage2(NULL);
+ __cpu_init_stage2();
} else {
if (__hyp_get_vectors() == hyp_default_vectors)
cpu_init_hyp_mode(NULL);
}
}
-static int hyp_init_cpu_notify(struct notifier_block *self,
- unsigned long action, void *cpu)
+static void cpu_hyp_reset(void)
{
- switch (action) {
- case CPU_STARTING:
- case CPU_STARTING_FROZEN:
+ phys_addr_t boot_pgd_ptr;
+ phys_addr_t phys_idmap_start;
+
+ if (!is_kernel_in_hyp_mode()) {
+ boot_pgd_ptr = kvm_mmu_get_boot_httbr();
+ phys_idmap_start = kvm_get_idmap_start();
+
+ __cpu_reset_hyp_mode(boot_pgd_ptr, phys_idmap_start);
+ }
+}
+
+static void _kvm_arch_hardware_enable(void *discard)
+{
+ if (!__this_cpu_read(kvm_arm_hardware_enabled)) {
cpu_hyp_reinit();
+ __this_cpu_write(kvm_arm_hardware_enabled, 1);
}
+}
+
+int kvm_arch_hardware_enable(void)
+{
+ _kvm_arch_hardware_enable(NULL);
+ return 0;
+}
- return NOTIFY_OK;
+static void _kvm_arch_hardware_disable(void *discard)
+{
+ if (__this_cpu_read(kvm_arm_hardware_enabled)) {
+ cpu_hyp_reset();
+ __this_cpu_write(kvm_arm_hardware_enabled, 0);
+ }
}
-static struct notifier_block hyp_init_cpu_nb = {
- .notifier_call = hyp_init_cpu_notify,
-};
+void kvm_arch_hardware_disable(void)
+{
+ _kvm_arch_hardware_disable(NULL);
+}
#ifdef CONFIG_CPU_PM
static int hyp_init_cpu_pm_notifier(struct notifier_block *self,
unsigned long cmd,
void *v)
{
- if (cmd == CPU_PM_EXIT) {
- cpu_hyp_reinit();
+ /*
+ * kvm_arm_hardware_enabled is left with its old value over
+ * PM_ENTER->PM_EXIT. It is used to indicate PM_EXIT should
+ * re-enable hyp.
+ */
+ switch (cmd) {
+ case CPU_PM_ENTER:
+ if (__this_cpu_read(kvm_arm_hardware_enabled))
+ /*
+ * don't update kvm_arm_hardware_enabled here
+ * so that the hardware will be re-enabled
+ * when we resume. See below.
+ */
+ cpu_hyp_reset();
+
return NOTIFY_OK;
- }
+ case CPU_PM_EXIT:
+ if (__this_cpu_read(kvm_arm_hardware_enabled))
+ /* The hardware was enabled before suspend. */
+ cpu_hyp_reinit();
- return NOTIFY_DONE;
+ return NOTIFY_OK;
+
+ default:
+ return NOTIFY_DONE;
+ }
}
static struct notifier_block hyp_init_cpu_pm_nb = {
@@ -1143,16 +1185,12 @@ static int init_common_resources(void)
static int init_subsystems(void)
{
- int err;
+ int err = 0;
/*
- * Register CPU Hotplug notifier
+ * Enable hardware so that subsystem initialisation can access EL2.
*/
- err = register_cpu_notifier(&hyp_init_cpu_nb);
- if (err) {
- kvm_err("Cannot register KVM init CPU notifier (%d)\n", err);
- return err;
- }
+ on_each_cpu(_kvm_arch_hardware_enable, NULL, 1);
/*
* Register CPU lower-power notifier
@@ -1170,9 +1208,10 @@ static int init_subsystems(void)
case -ENODEV:
case -ENXIO:
vgic_present = false;
+ err = 0;
break;
default:
- return err;
+ goto out;
}
/*
@@ -1180,12 +1219,15 @@ static int init_subsystems(void)
*/
err = kvm_timer_hyp_init();
if (err)
- return err;
+ goto out;
kvm_perf_init();
kvm_coproc_table_init();
- return 0;
+out:
+ on_each_cpu(_kvm_arch_hardware_disable, NULL, 1);
+
+ return err;
}
static void teardown_hyp_mode(void)
@@ -1198,17 +1240,11 @@ static void teardown_hyp_mode(void)
free_hyp_pgds();
for_each_possible_cpu(cpu)
free_page(per_cpu(kvm_arm_hyp_stack_page, cpu));
- unregister_cpu_notifier(&hyp_init_cpu_nb);
hyp_cpu_pm_exit();
}
static int init_vhe_mode(void)
{
- /*
- * Execute the init code on each CPU.
- */
- on_each_cpu(cpu_init_stage2, NULL, 1);
-
/* set size of VMID supported by CPU */
kvm_vmid_bits = kvm_get_vmid_bits();
kvm_info("%d-bit VMID\n", kvm_vmid_bits);
@@ -1295,11 +1331,6 @@ static int init_hyp_mode(void)
}
}
- /*
- * Execute the init code on each CPU.
- */
- on_each_cpu(cpu_init_hyp_mode, NULL, 1);
-
#ifndef CONFIG_HOTPLUG_CPU
free_boot_hyp_pgd();
#endif
diff --git a/arch/arm/kvm/mmio.c b/arch/arm/kvm/mmio.c
index 0f6600f05137..10f80a6c797a 100644
--- a/arch/arm/kvm/mmio.c
+++ b/arch/arm/kvm/mmio.c
@@ -23,7 +23,7 @@
#include "trace.h"
-static void mmio_write_buf(char *buf, unsigned int len, unsigned long data)
+void kvm_mmio_write_buf(void *buf, unsigned int len, unsigned long data)
{
void *datap = NULL;
union {
@@ -55,7 +55,7 @@ static void mmio_write_buf(char *buf, unsigned int len, unsigned long data)
memcpy(buf, datap, len);
}
-static unsigned long mmio_read_buf(char *buf, unsigned int len)
+unsigned long kvm_mmio_read_buf(const void *buf, unsigned int len)
{
unsigned long data = 0;
union {
@@ -66,7 +66,7 @@ static unsigned long mmio_read_buf(char *buf, unsigned int len)
switch (len) {
case 1:
- data = buf[0];
+ data = *(u8 *)buf;
break;
case 2:
memcpy(&tmp.hword, buf, len);
@@ -87,11 +87,10 @@ static unsigned long mmio_read_buf(char *buf, unsigned int len)
/**
* kvm_handle_mmio_return -- Handle MMIO loads after user space emulation
+ * or in-kernel IO emulation
+ *
* @vcpu: The VCPU pointer
* @run: The VCPU run struct containing the mmio data
- *
- * This should only be called after returning from userspace for MMIO load
- * emulation.
*/
int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
{
@@ -104,7 +103,7 @@ int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run)
if (len > sizeof(unsigned long))
return -EINVAL;
- data = mmio_read_buf(run->mmio.data, len);
+ data = kvm_mmio_read_buf(run->mmio.data, len);
if (vcpu->arch.mmio_decode.sign_extend &&
len < sizeof(unsigned long)) {
@@ -190,7 +189,7 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
len);
trace_kvm_mmio(KVM_TRACE_MMIO_WRITE, len, fault_ipa, data);
- mmio_write_buf(data_buf, len, data);
+ kvm_mmio_write_buf(data_buf, len, data);
ret = kvm_io_bus_write(vcpu, KVM_MMIO_BUS, fault_ipa, len,
data_buf);
@@ -206,18 +205,19 @@ int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
run->mmio.is_write = is_write;
run->mmio.phys_addr = fault_ipa;
run->mmio.len = len;
- if (is_write)
- memcpy(run->mmio.data, data_buf, len);
if (!ret) {
/* We handled the access successfully in the kernel. */
+ if (!is_write)
+ memcpy(run->mmio.data, data_buf, len);
vcpu->stat.mmio_exit_kernel++;
kvm_handle_mmio_return(vcpu, run);
return 1;
- } else {
- vcpu->stat.mmio_exit_user++;
}
+ if (is_write)
+ memcpy(run->mmio.data, data_buf, len);
+ vcpu->stat.mmio_exit_user++;
run->exit_reason = KVM_EXIT_MMIO;
return 0;
}
diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index d6d4191e68f2..45c43aecb8f2 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -43,11 +43,9 @@ static unsigned long hyp_idmap_start;
static unsigned long hyp_idmap_end;
static phys_addr_t hyp_idmap_vector;
+#define S2_PGD_SIZE (PTRS_PER_S2_PGD * sizeof(pgd_t))
#define hyp_pgd_order get_order(PTRS_PER_PGD * sizeof(pgd_t))
-#define kvm_pmd_huge(_x) (pmd_huge(_x) || pmd_trans_huge(_x))
-#define kvm_pud_huge(_x) pud_huge(_x)
-
#define KVM_S2PTE_FLAG_IS_IOMAP (1UL << 0)
#define KVM_S2_FLAG_LOGGING_ACTIVE (1UL << 1)
@@ -69,14 +67,7 @@ void kvm_flush_remote_tlbs(struct kvm *kvm)
static void kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
{
- /*
- * This function also gets called when dealing with HYP page
- * tables. As HYP doesn't have an associated struct kvm (and
- * the HYP page tables are fairly static), we don't do
- * anything there.
- */
- if (kvm)
- kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, kvm, ipa);
+ kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, kvm, ipa);
}
/*
@@ -115,7 +106,7 @@ static bool kvm_is_device_pfn(unsigned long pfn)
*/
static void stage2_dissolve_pmd(struct kvm *kvm, phys_addr_t addr, pmd_t *pmd)
{
- if (!kvm_pmd_huge(*pmd))
+ if (!pmd_thp_or_huge(*pmd))
return;
pmd_clear(pmd);
@@ -155,29 +146,29 @@ static void *mmu_memory_cache_alloc(struct kvm_mmu_memory_cache *mc)
return p;
}
-static void clear_pgd_entry(struct kvm *kvm, pgd_t *pgd, phys_addr_t addr)
+static void clear_stage2_pgd_entry(struct kvm *kvm, pgd_t *pgd, phys_addr_t addr)
{
- pud_t *pud_table __maybe_unused = pud_offset(pgd, 0);
- pgd_clear(pgd);
+ pud_t *pud_table __maybe_unused = stage2_pud_offset(pgd, 0UL);
+ stage2_pgd_clear(pgd);
kvm_tlb_flush_vmid_ipa(kvm, addr);
- pud_free(NULL, pud_table);
+ stage2_pud_free(pud_table);
put_page(virt_to_page(pgd));
}
-static void clear_pud_entry(struct kvm *kvm, pud_t *pud, phys_addr_t addr)
+static void clear_stage2_pud_entry(struct kvm *kvm, pud_t *pud, phys_addr_t addr)
{
- pmd_t *pmd_table = pmd_offset(pud, 0);
- VM_BUG_ON(pud_huge(*pud));
- pud_clear(pud);
+ pmd_t *pmd_table __maybe_unused = stage2_pmd_offset(pud, 0);
+ VM_BUG_ON(stage2_pud_huge(*pud));
+ stage2_pud_clear(pud);
kvm_tlb_flush_vmid_ipa(kvm, addr);
- pmd_free(NULL, pmd_table);
+ stage2_pmd_free(pmd_table);
put_page(virt_to_page(pud));
}
-static void clear_pmd_entry(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr)
+static void clear_stage2_pmd_entry(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr)
{
pte_t *pte_table = pte_offset_kernel(pmd, 0);
- VM_BUG_ON(kvm_pmd_huge(*pmd));
+ VM_BUG_ON(pmd_thp_or_huge(*pmd));
pmd_clear(pmd);
kvm_tlb_flush_vmid_ipa(kvm, addr);
pte_free_kernel(NULL, pte_table);
@@ -204,7 +195,7 @@ static void clear_pmd_entry(struct kvm *kvm, pmd_t *pmd, phys_addr_t addr)
* the corresponding TLBs, we call kvm_flush_dcache_p*() to make sure
* the IO subsystem will never hit in the cache.
*/
-static void unmap_ptes(struct kvm *kvm, pmd_t *pmd,
+static void unmap_stage2_ptes(struct kvm *kvm, pmd_t *pmd,
phys_addr_t addr, phys_addr_t end)
{
phys_addr_t start_addr = addr;
@@ -226,21 +217,21 @@ static void unmap_ptes(struct kvm *kvm, pmd_t *pmd,
}
} while (pte++, addr += PAGE_SIZE, addr != end);
- if (kvm_pte_table_empty(kvm, start_pte))
- clear_pmd_entry(kvm, pmd, start_addr);
+ if (stage2_pte_table_empty(start_pte))
+ clear_stage2_pmd_entry(kvm, pmd, start_addr);
}
-static void unmap_pmds(struct kvm *kvm, pud_t *pud,
+static void unmap_stage2_pmds(struct kvm *kvm, pud_t *pud,
phys_addr_t addr, phys_addr_t end)
{
phys_addr_t next, start_addr = addr;
pmd_t *pmd, *start_pmd;
- start_pmd = pmd = pmd_offset(pud, addr);
+ start_pmd = pmd = stage2_pmd_offset(pud, addr);
do {
- next = kvm_pmd_addr_end(addr, end);
+ next = stage2_pmd_addr_end(addr, end);
if (!pmd_none(*pmd)) {
- if (kvm_pmd_huge(*pmd)) {
+ if (pmd_thp_or_huge(*pmd)) {
pmd_t old_pmd = *pmd;
pmd_clear(pmd);
@@ -250,57 +241,64 @@ static void unmap_pmds(struct kvm *kvm, pud_t *pud,
put_page(virt_to_page(pmd));
} else {
- unmap_ptes(kvm, pmd, addr, next);
+ unmap_stage2_ptes(kvm, pmd, addr, next);
}
}
} while (pmd++, addr = next, addr != end);
- if (kvm_pmd_table_empty(kvm, start_pmd))
- clear_pud_entry(kvm, pud, start_addr);
+ if (stage2_pmd_table_empty(start_pmd))
+ clear_stage2_pud_entry(kvm, pud, start_addr);
}
-static void unmap_puds(struct kvm *kvm, pgd_t *pgd,
+static void unmap_stage2_puds(struct kvm *kvm, pgd_t *pgd,
phys_addr_t addr, phys_addr_t end)
{
phys_addr_t next, start_addr = addr;
pud_t *pud, *start_pud;
- start_pud = pud = pud_offset(pgd, addr);
+ start_pud = pud = stage2_pud_offset(pgd, addr);
do {
- next = kvm_pud_addr_end(addr, end);
- if (!pud_none(*pud)) {
- if (pud_huge(*pud)) {
+ next = stage2_pud_addr_end(addr, end);
+ if (!stage2_pud_none(*pud)) {
+ if (stage2_pud_huge(*pud)) {
pud_t old_pud = *pud;
- pud_clear(pud);
+ stage2_pud_clear(pud);
kvm_tlb_flush_vmid_ipa(kvm, addr);
-
kvm_flush_dcache_pud(old_pud);
-
put_page(virt_to_page(pud));
} else {
- unmap_pmds(kvm, pud, addr, next);
+ unmap_stage2_pmds(kvm, pud, addr, next);
}
}
} while (pud++, addr = next, addr != end);
- if (kvm_pud_table_empty(kvm, start_pud))
- clear_pgd_entry(kvm, pgd, start_addr);
+ if (stage2_pud_table_empty(start_pud))
+ clear_stage2_pgd_entry(kvm, pgd, start_addr);
}
-
-static void unmap_range(struct kvm *kvm, pgd_t *pgdp,
- phys_addr_t start, u64 size)
+/**
+ * unmap_stage2_range -- Clear stage2 page table entries to unmap a range
+ * @kvm: The VM pointer
+ * @start: The intermediate physical base address of the range to unmap
+ * @size: The size of the area to unmap
+ *
+ * Clear a range of stage-2 mappings, lowering the various ref-counts. Must
+ * be called while holding mmu_lock (unless for freeing the stage2 pgd before
+ * destroying the VM), otherwise another faulting VCPU may come in and mess
+ * with things behind our backs.
+ */
+static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
{
pgd_t *pgd;
phys_addr_t addr = start, end = start + size;
phys_addr_t next;
- pgd = pgdp + kvm_pgd_index(addr);
+ pgd = kvm->arch.pgd + stage2_pgd_index(addr);
do {
- next = kvm_pgd_addr_end(addr, end);
- if (!pgd_none(*pgd))
- unmap_puds(kvm, pgd, addr, next);
+ next = stage2_pgd_addr_end(addr, end);
+ if (!stage2_pgd_none(*pgd))
+ unmap_stage2_puds(kvm, pgd, addr, next);
} while (pgd++, addr = next, addr != end);
}
@@ -322,11 +320,11 @@ static void stage2_flush_pmds(struct kvm *kvm, pud_t *pud,
pmd_t *pmd;
phys_addr_t next;
- pmd = pmd_offset(pud, addr);
+ pmd = stage2_pmd_offset(pud, addr);
do {
- next = kvm_pmd_addr_end(addr, end);
+ next = stage2_pmd_addr_end(addr, end);
if (!pmd_none(*pmd)) {
- if (kvm_pmd_huge(*pmd))
+ if (pmd_thp_or_huge(*pmd))
kvm_flush_dcache_pmd(*pmd);
else
stage2_flush_ptes(kvm, pmd, addr, next);
@@ -340,11 +338,11 @@ static void stage2_flush_puds(struct kvm *kvm, pgd_t *pgd,
pud_t *pud;
phys_addr_t next;
- pud = pud_offset(pgd, addr);
+ pud = stage2_pud_offset(pgd, addr);
do {
- next = kvm_pud_addr_end(addr, end);
- if (!pud_none(*pud)) {
- if (pud_huge(*pud))
+ next = stage2_pud_addr_end(addr, end);
+ if (!stage2_pud_none(*pud)) {
+ if (stage2_pud_huge(*pud))
kvm_flush_dcache_pud(*pud);
else
stage2_flush_pmds(kvm, pud, addr, next);
@@ -360,9 +358,9 @@ static void stage2_flush_memslot(struct kvm *kvm,
phys_addr_t next;
pgd_t *pgd;
- pgd = kvm->arch.pgd + kvm_pgd_index(addr);
+ pgd = kvm->arch.pgd + stage2_pgd_index(addr);
do {
- next = kvm_pgd_addr_end(addr, end);
+ next = stage2_pgd_addr_end(addr, end);
stage2_flush_puds(kvm, pgd, addr, next);
} while (pgd++, addr = next, addr != end);
}
@@ -391,6 +389,100 @@ static void stage2_flush_vm(struct kvm *kvm)
srcu_read_unlock(&kvm->srcu, idx);
}
+static void clear_hyp_pgd_entry(pgd_t *pgd)
+{
+ pud_t *pud_table __maybe_unused = pud_offset(pgd, 0UL);
+ pgd_clear(pgd);
+ pud_free(NULL, pud_table);
+ put_page(virt_to_page(pgd));
+}
+
+static void clear_hyp_pud_entry(pud_t *pud)
+{
+ pmd_t *pmd_table __maybe_unused = pmd_offset(pud, 0);
+ VM_BUG_ON(pud_huge(*pud));
+ pud_clear(pud);
+ pmd_free(NULL, pmd_table);
+ put_page(virt_to_page(pud));
+}
+
+static void clear_hyp_pmd_entry(pmd_t *pmd)
+{
+ pte_t *pte_table = pte_offset_kernel(pmd, 0);
+ VM_BUG_ON(pmd_thp_or_huge(*pmd));
+ pmd_clear(pmd);
+ pte_free_kernel(NULL, pte_table);
+ put_page(virt_to_page(pmd));
+}
+
+static void unmap_hyp_ptes(pmd_t *pmd, phys_addr_t addr, phys_addr_t end)
+{
+ pte_t *pte, *start_pte;
+
+ start_pte = pte = pte_offset_kernel(pmd, addr);
+ do {
+ if (!pte_none(*pte)) {
+ kvm_set_pte(pte, __pte(0));
+ put_page(virt_to_page(pte));
+ }
+ } while (pte++, addr += PAGE_SIZE, addr != end);
+
+ if (hyp_pte_table_empty(start_pte))
+ clear_hyp_pmd_entry(pmd);
+}
+
+static void unmap_hyp_pmds(pud_t *pud, phys_addr_t addr, phys_addr_t end)
+{
+ phys_addr_t next;
+ pmd_t *pmd, *start_pmd;
+
+ start_pmd = pmd = pmd_offset(pud, addr);
+ do {
+ next = pmd_addr_end(addr, end);
+ /* Hyp doesn't use huge pmds */
+ if (!pmd_none(*pmd))
+ unmap_hyp_ptes(pmd, addr, next);
+ } while (pmd++, addr = next, addr != end);
+
+ if (hyp_pmd_table_empty(start_pmd))
+ clear_hyp_pud_entry(pud);
+}
+
+static void unmap_hyp_puds(pgd_t *pgd, phys_addr_t addr, phys_addr_t end)
+{
+ phys_addr_t next;
+ pud_t *pud, *start_pud;
+
+ start_pud = pud = pud_offset(pgd, addr);
+ do {
+ next = pud_addr_end(addr, end);
+ /* Hyp doesn't use huge puds */
+ if (!pud_none(*pud))
+ unmap_hyp_pmds(pud, addr, next);
+ } while (pud++, addr = next, addr != end);
+
+ if (hyp_pud_table_empty(start_pud))
+ clear_hyp_pgd_entry(pgd);
+}
+
+static void unmap_hyp_range(pgd_t *pgdp, phys_addr_t start, u64 size)
+{
+ pgd_t *pgd;
+ phys_addr_t addr = start, end = start + size;
+ phys_addr_t next;
+
+ /*
+ * We don't unmap anything from HYP, except at the hyp tear down.
+ * Hence, we don't have to invalidate the TLBs here.
+ */
+ pgd = pgdp + pgd_index(addr);
+ do {
+ next = pgd_addr_end(addr, end);
+ if (!pgd_none(*pgd))
+ unmap_hyp_puds(pgd, addr, next);
+ } while (pgd++, addr = next, addr != end);
+}
+
/**
* free_boot_hyp_pgd - free HYP boot page tables
*
@@ -401,14 +493,14 @@ void free_boot_hyp_pgd(void)
mutex_lock(&kvm_hyp_pgd_mutex);
if (boot_hyp_pgd) {
- unmap_range(NULL, boot_hyp_pgd, hyp_idmap_start, PAGE_SIZE);
- unmap_range(NULL, boot_hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
+ unmap_hyp_range(boot_hyp_pgd, hyp_idmap_start, PAGE_SIZE);
+ unmap_hyp_range(boot_hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
free_pages((unsigned long)boot_hyp_pgd, hyp_pgd_order);
boot_hyp_pgd = NULL;
}
if (hyp_pgd)
- unmap_range(NULL, hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
+ unmap_hyp_range(hyp_pgd, TRAMPOLINE_VA, PAGE_SIZE);
mutex_unlock(&kvm_hyp_pgd_mutex);
}
@@ -433,9 +525,9 @@ void free_hyp_pgds(void)
if (hyp_pgd) {
for (addr = PAGE_OFFSET; virt_addr_valid(addr); addr += PGDIR_SIZE)
- unmap_range(NULL, hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE);
+ unmap_hyp_range(hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE);
for (addr = VMALLOC_START; is_vmalloc_addr((void*)addr); addr += PGDIR_SIZE)
- unmap_range(NULL, hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE);
+ unmap_hyp_range(hyp_pgd, KERN_TO_HYP(addr), PGDIR_SIZE);
free_pages((unsigned long)hyp_pgd, hyp_pgd_order);
hyp_pgd = NULL;
@@ -645,20 +737,6 @@ int create_hyp_io_mappings(void *from, void *to, phys_addr_t phys_addr)
__phys_to_pfn(phys_addr), PAGE_HYP_DEVICE);
}
-/* Free the HW pgd, one page at a time */
-static void kvm_free_hwpgd(void *hwpgd)
-{
- free_pages_exact(hwpgd, kvm_get_hwpgd_size());
-}
-
-/* Allocate the HW PGD, making sure that each page gets its own refcount */
-static void *kvm_alloc_hwpgd(void)
-{
- unsigned int size = kvm_get_hwpgd_size();
-
- return alloc_pages_exact(size, GFP_KERNEL | __GFP_ZERO);
-}
-
/**
* kvm_alloc_stage2_pgd - allocate level-1 table for stage-2 translation.
* @kvm: The KVM struct pointer for the VM.
@@ -673,81 +751,22 @@ static void *kvm_alloc_hwpgd(void)
int kvm_alloc_stage2_pgd(struct kvm *kvm)
{
pgd_t *pgd;
- void *hwpgd;
if (kvm->arch.pgd != NULL) {
kvm_err("kvm_arch already initialized?\n");
return -EINVAL;
}
- hwpgd = kvm_alloc_hwpgd();
- if (!hwpgd)
+ /* Allocate the HW PGD, making sure that each page gets its own refcount */
+ pgd = alloc_pages_exact(S2_PGD_SIZE, GFP_KERNEL | __GFP_ZERO);
+ if (!pgd)
return -ENOMEM;
- /* When the kernel uses more levels of page tables than the
- * guest, we allocate a fake PGD and pre-populate it to point
- * to the next-level page table, which will be the real
- * initial page table pointed to by the VTTBR.
- *
- * When KVM_PREALLOC_LEVEL==2, we allocate a single page for
- * the PMD and the kernel will use folded pud.
- * When KVM_PREALLOC_LEVEL==1, we allocate 2 consecutive PUD
- * pages.
- */
- if (KVM_PREALLOC_LEVEL > 0) {
- int i;
-
- /*
- * Allocate fake pgd for the page table manipulation macros to
- * work. This is not used by the hardware and we have no
- * alignment requirement for this allocation.
- */
- pgd = kmalloc(PTRS_PER_S2_PGD * sizeof(pgd_t),
- GFP_KERNEL | __GFP_ZERO);
-
- if (!pgd) {
- kvm_free_hwpgd(hwpgd);
- return -ENOMEM;
- }
-
- /* Plug the HW PGD into the fake one. */
- for (i = 0; i < PTRS_PER_S2_PGD; i++) {
- if (KVM_PREALLOC_LEVEL == 1)
- pgd_populate(NULL, pgd + i,
- (pud_t *)hwpgd + i * PTRS_PER_PUD);
- else if (KVM_PREALLOC_LEVEL == 2)
- pud_populate(NULL, pud_offset(pgd, 0) + i,
- (pmd_t *)hwpgd + i * PTRS_PER_PMD);
- }
- } else {
- /*
- * Allocate actual first-level Stage-2 page table used by the
- * hardware for Stage-2 page table walks.
- */
- pgd = (pgd_t *)hwpgd;
- }
-
kvm_clean_pgd(pgd);
kvm->arch.pgd = pgd;
return 0;
}
-/**
- * unmap_stage2_range -- Clear stage2 page table entries to unmap a range
- * @kvm: The VM pointer
- * @start: The intermediate physical base address of the range to unmap
- * @size: The size of the area to unmap
- *
- * Clear a range of stage-2 mappings, lowering the various ref-counts. Must
- * be called while holding mmu_lock (unless for freeing the stage2 pgd before
- * destroying the VM), otherwise another faulting VCPU may come in and mess
- * with things behind our backs.
- */
-static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
-{
- unmap_range(kvm, kvm->arch.pgd, start, size);
-}
-
static void stage2_unmap_memslot(struct kvm *kvm,
struct kvm_memory_slot *memslot)
{
@@ -830,10 +849,8 @@ void kvm_free_stage2_pgd(struct kvm *kvm)
return;
unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
- kvm_free_hwpgd(kvm_get_hwpgd(kvm));
- if (KVM_PREALLOC_LEVEL > 0)
- kfree(kvm->arch.pgd);
-
+ /* Free the HW pgd, one page at a time */
+ free_pages_exact(kvm->arch.pgd, S2_PGD_SIZE);
kvm->arch.pgd = NULL;
}
@@ -843,16 +860,16 @@ static pud_t *stage2_get_pud(struct kvm *kvm, struct kvm_mmu_memory_cache *cache
pgd_t *pgd;
pud_t *pud;
- pgd = kvm->arch.pgd + kvm_pgd_index(addr);
- if (WARN_ON(pgd_none(*pgd))) {
+ pgd = kvm->arch.pgd + stage2_pgd_index(addr);
+ if (WARN_ON(stage2_pgd_none(*pgd))) {
if (!cache)
return NULL;
pud = mmu_memory_cache_alloc(cache);
- pgd_populate(NULL, pgd, pud);
+ stage2_pgd_populate(pgd, pud);
get_page(virt_to_page(pgd));
}
- return pud_offset(pgd, addr);
+ return stage2_pud_offset(pgd, addr);
}
static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
@@ -862,15 +879,15 @@ static pmd_t *stage2_get_pmd(struct kvm *kvm, struct kvm_mmu_memory_cache *cache
pmd_t *pmd;
pud = stage2_get_pud(kvm, cache, addr);
- if (pud_none(*pud)) {
+ if (stage2_pud_none(*pud)) {
if (!cache)
return NULL;
pmd = mmu_memory_cache_alloc(cache);
- pud_populate(NULL, pud, pmd);
+ stage2_pud_populate(pud, pmd);
get_page(virt_to_page(pud));
}
- return pmd_offset(pud, addr);
+ return stage2_pmd_offset(pud, addr);
}
static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
@@ -893,11 +910,14 @@ static int stage2_set_pmd_huge(struct kvm *kvm, struct kvm_mmu_memory_cache
VM_BUG_ON(pmd_present(*pmd) && pmd_pfn(*pmd) != pmd_pfn(*new_pmd));
old_pmd = *pmd;
- kvm_set_pmd(pmd, *new_pmd);
- if (pmd_present(old_pmd))
+ if (pmd_present(old_pmd)) {
+ pmd_clear(pmd);
kvm_tlb_flush_vmid_ipa(kvm, addr);
- else
+ } else {
get_page(virt_to_page(pmd));
+ }
+
+ kvm_set_pmd(pmd, *new_pmd);
return 0;
}
@@ -946,15 +966,38 @@ static int stage2_set_pte(struct kvm *kvm, struct kvm_mmu_memory_cache *cache,
/* Create 2nd stage page table mapping - Level 3 */
old_pte = *pte;
- kvm_set_pte(pte, *new_pte);
- if (pte_present(old_pte))
+ if (pte_present(old_pte)) {
+ kvm_set_pte(pte, __pte(0));
kvm_tlb_flush_vmid_ipa(kvm, addr);
- else
+ } else {
get_page(virt_to_page(pte));
+ }
+ kvm_set_pte(pte, *new_pte);
return 0;
}
+#ifndef __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
+static int stage2_ptep_test_and_clear_young(pte_t *pte)
+{
+ if (pte_young(*pte)) {
+ *pte = pte_mkold(*pte);
+ return 1;
+ }
+ return 0;
+}
+#else
+static int stage2_ptep_test_and_clear_young(pte_t *pte)
+{
+ return __ptep_test_and_clear_young(pte);
+}
+#endif
+
+static int stage2_pmdp_test_and_clear_young(pmd_t *pmd)
+{
+ return stage2_ptep_test_and_clear_young((pte_t *)pmd);
+}
+
/**
* kvm_phys_addr_ioremap - map a device range to guest IPA
*
@@ -978,7 +1021,7 @@ int kvm_phys_addr_ioremap(struct kvm *kvm, phys_addr_t guest_ipa,
pte_t pte = pfn_pte(pfn, PAGE_S2_DEVICE);
if (writable)
- kvm_set_s2pte_writable(&pte);
+ pte = kvm_s2pte_mkwrite(pte);
ret = mmu_topup_memory_cache(&cache, KVM_MMU_CACHE_MIN_PAGES,
KVM_NR_MEM_OBJS);
@@ -1078,12 +1121,12 @@ static void stage2_wp_pmds(pud_t *pud, phys_addr_t addr, phys_addr_t end)
pmd_t *pmd;
phys_addr_t next;
- pmd = pmd_offset(pud, addr);
+ pmd = stage2_pmd_offset(pud, addr);
do {
- next = kvm_pmd_addr_end(addr, end);
+ next = stage2_pmd_addr_end(addr, end);
if (!pmd_none(*pmd)) {
- if (kvm_pmd_huge(*pmd)) {
+ if (pmd_thp_or_huge(*pmd)) {
if (!kvm_s2pmd_readonly(pmd))
kvm_set_s2pmd_readonly(pmd);
} else {
@@ -1106,12 +1149,12 @@ static void stage2_wp_puds(pgd_t *pgd, phys_addr_t addr, phys_addr_t end)
pud_t *pud;
phys_addr_t next;
- pud = pud_offset(pgd, addr);
+ pud = stage2_pud_offset(pgd, addr);
do {
- next = kvm_pud_addr_end(addr, end);
- if (!pud_none(*pud)) {
+ next = stage2_pud_addr_end(addr, end);
+ if (!stage2_pud_none(*pud)) {
/* TODO:PUD not supported, revisit later if supported */
- BUG_ON(kvm_pud_huge(*pud));
+ BUG_ON(stage2_pud_huge(*pud));
stage2_wp_pmds(pud, addr, next);
}
} while (pud++, addr = next, addr != end);
@@ -1128,7 +1171,7 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
pgd_t *pgd;
phys_addr_t next;
- pgd = kvm->arch.pgd + kvm_pgd_index(addr);
+ pgd = kvm->arch.pgd + stage2_pgd_index(addr);
do {
/*
* Release kvm_mmu_lock periodically if the memory region is
@@ -1140,8 +1183,8 @@ static void stage2_wp_range(struct kvm *kvm, phys_addr_t addr, phys_addr_t end)
if (need_resched() || spin_needbreak(&kvm->mmu_lock))
cond_resched_lock(&kvm->mmu_lock);
- next = kvm_pgd_addr_end(addr, end);
- if (pgd_present(*pgd))
+ next = stage2_pgd_addr_end(addr, end);
+ if (stage2_pgd_present(*pgd))
stage2_wp_puds(pgd, addr, next);
} while (pgd++, addr = next, addr != end);
}
@@ -1320,7 +1363,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
pmd_t new_pmd = pfn_pmd(pfn, mem_type);
new_pmd = pmd_mkhuge(new_pmd);
if (writable) {
- kvm_set_s2pmd_writable(&new_pmd);
+ new_pmd = kvm_s2pmd_mkwrite(new_pmd);
kvm_set_pfn_dirty(pfn);
}
coherent_cache_guest_page(vcpu, pfn, PMD_SIZE, fault_ipa_uncached);
@@ -1329,7 +1372,7 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
pte_t new_pte = pfn_pte(pfn, mem_type);
if (writable) {
- kvm_set_s2pte_writable(&new_pte);
+ new_pte = kvm_s2pte_mkwrite(new_pte);
kvm_set_pfn_dirty(pfn);
mark_page_dirty(kvm, gfn);
}
@@ -1348,6 +1391,8 @@ out_unlock:
* Resolve the access fault by making the page young again.
* Note that because the faulting entry is guaranteed not to be
* cached in the TLB, we don't need to invalidate anything.
+ * Only the HW Access Flag updates are supported for Stage 2 (no DBM),
+ * so there is no need for atomic (pte|pmd)_mkyoung operations.
*/
static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
{
@@ -1364,7 +1409,7 @@ static void handle_access_fault(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa)
if (!pmd || pmd_none(*pmd)) /* Nothing there */
goto out;
- if (kvm_pmd_huge(*pmd)) { /* THP, HugeTLB */
+ if (pmd_thp_or_huge(*pmd)) { /* THP, HugeTLB */
*pmd = pmd_mkyoung(*pmd);
pfn = pmd_pfn(*pmd);
pfn_valid = true;
@@ -1588,25 +1633,14 @@ static int kvm_age_hva_handler(struct kvm *kvm, gpa_t gpa, void *data)
if (!pmd || pmd_none(*pmd)) /* Nothing there */
return 0;
- if (kvm_pmd_huge(*pmd)) { /* THP, HugeTLB */
- if (pmd_young(*pmd)) {
- *pmd = pmd_mkold(*pmd);
- return 1;
- }
-
- return 0;
- }
+ if (pmd_thp_or_huge(*pmd)) /* THP, HugeTLB */
+ return stage2_pmdp_test_and_clear_young(pmd);
pte = pte_offset_kernel(pmd, gpa);
if (pte_none(*pte))
return 0;
- if (pte_young(*pte)) {
- *pte = pte_mkold(*pte); /* Just a page... */
- return 1;
- }
-
- return 0;
+ return stage2_ptep_test_and_clear_young(pte);
}
static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, void *data)
@@ -1618,7 +1652,7 @@ static int kvm_test_age_hva_handler(struct kvm *kvm, gpa_t gpa, void *data)
if (!pmd || pmd_none(*pmd)) /* Nothing there */
return 0;
- if (kvm_pmd_huge(*pmd)) /* THP, HugeTLB */
+ if (pmd_thp_or_huge(*pmd)) /* THP, HugeTLB */
return pmd_young(*pmd);
pte = pte_offset_kernel(pmd, gpa);
@@ -1666,6 +1700,11 @@ phys_addr_t kvm_get_idmap_vector(void)
return hyp_idmap_vector;
}
+phys_addr_t kvm_get_idmap_start(void)
+{
+ return hyp_idmap_start;
+}
+
int kvm_mmu_init(void)
{
int err;
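
The arch/arm/kvm/mmu.c hunks above replace the kvm_p*d_* wrappers with stage2_* accessors and fold the open-coded pmd/pte aging into stage2_pmdp_test_and_clear_young() and stage2_ptep_test_and_clear_young(). The pattern behind those helpers is to clear the Access Flag in the entry and report whether it was set. A minimal stand-alone sketch of that pattern, using an invented PTE_AF bit and a GCC atomic builtin rather than the kernel's page-table types:

/*
 * Toy model of a "test and clear young" helper: clear the Access Flag
 * in a page-table entry and report whether it was set. PTE_AF and the
 * pte type here are made up for illustration; the real helpers operate
 * on stage-2 descriptors.
 */
#include <stdint.h>
#include <stdio.h>

#define PTE_AF (1ULL << 10)	/* hypothetical Access Flag bit */

typedef uint64_t pte_t;

static int test_and_clear_young(pte_t *ptep)
{
	pte_t old = __atomic_fetch_and(ptep, ~PTE_AF, __ATOMIC_RELAXED);

	return !!(old & PTE_AF);
}

int main(void)
{
	pte_t pte = 0xdead0000ULL | PTE_AF;

	printf("young: %d\n", test_and_clear_young(&pte));	/* 1 */
	printf("young: %d\n", test_and_clear_young(&pte));	/* 0: AF now clear */
	return 0;
}
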
diff --git a/arch/arm/mach-aspeed/Kconfig b/arch/arm/mach-aspeed/Kconfig
new file mode 100644
index 000000000000..5225fbcb250d
--- /dev/null
+++ b/arch/arm/mach-aspeed/Kconfig
@@ -0,0 +1,30 @@
+menuconfig ARCH_ASPEED
+ bool "Aspeed BMC architectures"
+ depends on ARCH_MULTI_V5 || ARCH_MULTI_V6
+ select SRAM
+ select WATCHDOG
+ select ASPEED_WATCHDOG
+ select MOXART_TIMER
+ help
+ Say Y here if you want to run your kernel on an Aspeed BMC SoC.
+
+if ARCH_ASPEED
+
+config MACH_ASPEED_G4
+ bool "Aspeed SoC 4th Generation"
+ depends on ARCH_MULTI_V5
+ select CPU_ARM926T
+ help
+ Say yes if you intend to run on an Aspeed ast2400 or similar
+ fourth generation BMC, such as those used by OpenPower Power8
+ systems.
+
+config MACH_ASPEED_G5
+ bool "Aspeed SoC 5th Generation"
+ depends on ARCH_MULTI_V6
+ select CPU_V6
+ help
+ Say yes if you intend to run on an Aspeed ast2500 or similar
+ fifth generation Aspeed BMC.
+
+endif
diff --git a/arch/arm/mach-at91/sama5.c b/arch/arm/mach-at91/sama5.c
index df8fdf1cf66d..922b85f07cd2 100644
--- a/arch/arm/mach-at91/sama5.c
+++ b/arch/arm/mach-at91/sama5.c
@@ -18,8 +18,26 @@
#include "soc.h"
static const struct at91_soc sama5_socs[] = {
- AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D27_EXID_MATCH,
+ AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D21CU_EXID_MATCH,
+ "sama5d21", "sama5d2"),
+ AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D22CU_EXID_MATCH,
+ "sama5d22", "sama5d2"),
+ AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D23CU_EXID_MATCH,
+ "sama5d23", "sama5d2"),
+ AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D24CX_EXID_MATCH,
+ "sama5d24", "sama5d2"),
+ AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D24CU_EXID_MATCH,
+ "sama5d24", "sama5d2"),
+ AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D26CU_EXID_MATCH,
+ "sama5d26", "sama5d2"),
+ AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D27CU_EXID_MATCH,
"sama5d27", "sama5d2"),
+ AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D27CN_EXID_MATCH,
+ "sama5d27", "sama5d2"),
+ AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D28CU_EXID_MATCH,
+ "sama5d28", "sama5d2"),
+ AT91_SOC(SAMA5D2_CIDR_MATCH, SAMA5D28CN_EXID_MATCH,
+ "sama5d28", "sama5d2"),
AT91_SOC(SAMA5D3_CIDR_MATCH, SAMA5D31_EXID_MATCH,
"sama5d31", "sama5d3"),
AT91_SOC(SAMA5D3_CIDR_MATCH, SAMA5D33_EXID_MATCH,
diff --git a/arch/arm/mach-at91/soc.c b/arch/arm/mach-at91/soc.c
index 54343ffa3e53..c6fda75ddb89 100644
--- a/arch/arm/mach-at91/soc.c
+++ b/arch/arm/mach-at91/soc.c
@@ -22,48 +22,93 @@
#include "soc.h"
#define AT91_DBGU_CIDR 0x40
-#define AT91_DBGU_CIDR_VERSION(x) ((x) & 0x1f)
-#define AT91_DBGU_CIDR_EXT BIT(31)
-#define AT91_DBGU_CIDR_MATCH_MASK 0x7fffffe0
#define AT91_DBGU_EXID 0x44
+#define AT91_CHIPID_CIDR 0x00
+#define AT91_CHIPID_EXID 0x04
+#define AT91_CIDR_VERSION(x) ((x) & 0x1f)
+#define AT91_CIDR_EXT BIT(31)
+#define AT91_CIDR_MATCH_MASK 0x7fffffe0
-struct soc_device * __init at91_soc_init(const struct at91_soc *socs)
+static int __init at91_get_cidr_exid_from_dbgu(u32 *cidr, u32 *exid)
{
- struct soc_device_attribute *soc_dev_attr;
- const struct at91_soc *soc;
- struct soc_device *soc_dev;
struct device_node *np;
void __iomem *regs;
- u32 cidr, exid;
np = of_find_compatible_node(NULL, NULL, "atmel,at91rm9200-dbgu");
if (!np)
np = of_find_compatible_node(NULL, NULL,
"atmel,at91sam9260-dbgu");
+ if (!np)
+ return -ENODEV;
- if (!np) {
- pr_warn("Could not find DBGU node");
- return NULL;
+ regs = of_iomap(np, 0);
+ of_node_put(np);
+
+ if (!regs) {
+ pr_warn("Could not map DBGU iomem range");
+ return -ENXIO;
}
+ *cidr = readl(regs + AT91_DBGU_CIDR);
+ *exid = readl(regs + AT91_DBGU_EXID);
+
+ iounmap(regs);
+
+ return 0;
+}
+
+static int __init at91_get_cidr_exid_from_chipid(u32 *cidr, u32 *exid)
+{
+ struct device_node *np;
+ void __iomem *regs;
+
+ np = of_find_compatible_node(NULL, NULL, "atmel,sama5d2-chipid");
+ if (!np)
+ return -ENODEV;
+
regs = of_iomap(np, 0);
of_node_put(np);
if (!regs) {
pr_warn("Could not map DBGU iomem range");
- return NULL;
+ return -ENXIO;
}
- cidr = readl(regs + AT91_DBGU_CIDR);
- exid = readl(regs + AT91_DBGU_EXID);
+ *cidr = readl(regs + AT91_CHIPID_CIDR);
+ *exid = readl(regs + AT91_CHIPID_EXID);
iounmap(regs);
+ return 0;
+}
+
+struct soc_device * __init at91_soc_init(const struct at91_soc *socs)
+{
+ struct soc_device_attribute *soc_dev_attr;
+ const struct at91_soc *soc;
+ struct soc_device *soc_dev;
+ u32 cidr, exid;
+ int ret;
+
+ /*
+ * With SAMA5D2 and later SoCs, the CIDR and EXID registers are no
+ * longer in the dbgu device but in the chipid device, whose only
+ * purpose is to expose these two registers.
+ */
+ ret = at91_get_cidr_exid_from_dbgu(&cidr, &exid);
+ if (ret)
+ ret = at91_get_cidr_exid_from_chipid(&cidr, &exid);
+ if (ret) {
+ if (ret == -ENODEV)
+ pr_warn("Could not find identification node");
+ return NULL;
+ }
+
for (soc = socs; soc->name; soc++) {
- if (soc->cidr_match != (cidr & AT91_DBGU_CIDR_MATCH_MASK))
+ if (soc->cidr_match != (cidr & AT91_CIDR_MATCH_MASK))
continue;
- if (!(cidr & AT91_DBGU_CIDR_EXT) || soc->exid_match == exid)
+ if (!(cidr & AT91_CIDR_EXT) || soc->exid_match == exid)
break;
}
@@ -79,7 +124,7 @@ struct soc_device * __init at91_soc_init(const struct at91_soc *socs)
soc_dev_attr->family = soc->family;
soc_dev_attr->soc_id = soc->name;
soc_dev_attr->revision = kasprintf(GFP_KERNEL, "%X",
- AT91_DBGU_CIDR_VERSION(cidr));
+ AT91_CIDR_VERSION(cidr));
soc_dev = soc_device_register(soc_dev_attr);
if (IS_ERR(soc_dev)) {
kfree(soc_dev_attr->revision);
@@ -91,7 +136,7 @@ struct soc_device * __init at91_soc_init(const struct at91_soc *socs)
if (soc->family)
pr_info("Detected SoC family: %s\n", soc->family);
pr_info("Detected SoC: %s, revision %X\n", soc->name,
- AT91_DBGU_CIDR_VERSION(cidr));
+ AT91_CIDR_VERSION(cidr));
return soc_dev;
}
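
After this rework, at91_soc_init() matches whichever CIDR/EXID pair it obtained (from the DBGU or from the new chipid block) against the socs table: the version field is masked off with AT91_CIDR_MATCH_MASK and the EXID is only compared when the CIDR's EXT bit says an extension ID is present. A stand-alone sketch of that match loop, with a made-up table in place of the real soc.h entries:

/*
 * Stand-alone sketch of the CIDR/EXID matching done in at91_soc_init():
 * mask out the version field, compare the CIDR, and only compare the
 * EXID when the CIDR says an extension ID is present. The table values
 * below are invented; the real ones live in arch/arm/mach-at91/soc.h.
 */
#include <stdint.h>
#include <stdio.h>

#define AT91_CIDR_EXT		(1U << 31)
#define AT91_CIDR_MATCH_MASK	0x7fffffe0

struct soc_id {
	const char *name;
	uint32_t cidr_match;
	uint32_t exid_match;
};

static const struct soc_id socs[] = {
	{ "soc-a", 0x0a5c08c0, 0x00000011 },	/* hypothetical entries */
	{ "soc-b", 0x0a5c08c0, 0x00000021 },
	{ NULL, 0, 0 },
};

static const char *match_soc(uint32_t cidr, uint32_t exid)
{
	const struct soc_id *soc;

	for (soc = socs; soc->name; soc++) {
		if (soc->cidr_match != (cidr & AT91_CIDR_MATCH_MASK))
			continue;
		if (!(cidr & AT91_CIDR_EXT) || soc->exid_match == exid)
			return soc->name;
	}
	return "unknown";
}

int main(void)
{
	/* Bit 31 set: the EXID is valid and must match. */
	printf("%s\n", match_soc(0x8a5c08c5, 0x00000021));	/* soc-b */
	return 0;
}
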
diff --git a/arch/arm/mach-at91/soc.h b/arch/arm/mach-at91/soc.h
index 8ede0ef86172..228efded5085 100644
--- a/arch/arm/mach-at91/soc.h
+++ b/arch/arm/mach-at91/soc.h
@@ -63,7 +63,17 @@ at91_soc_init(const struct at91_soc *socs);
#define AT91SAM9XE512_CIDR_MATCH 0x329aa3a0
#define SAMA5D2_CIDR_MATCH 0x0a5c08c0
-#define SAMA5D27_EXID_MATCH 0x00000021
+#define SAMA5D21CU_EXID_MATCH 0x0000005a
+#define SAMA5D22CU_EXID_MATCH 0x00000059
+#define SAMA5D22CN_EXID_MATCH 0x00000069
+#define SAMA5D23CU_EXID_MATCH 0x00000058
+#define SAMA5D24CX_EXID_MATCH 0x00000004
+#define SAMA5D24CU_EXID_MATCH 0x00000014
+#define SAMA5D26CU_EXID_MATCH 0x00000012
+#define SAMA5D27CU_EXID_MATCH 0x00000011
+#define SAMA5D27CN_EXID_MATCH 0x00000021
+#define SAMA5D28CU_EXID_MATCH 0x00000010
+#define SAMA5D28CN_EXID_MATCH 0x00000020
#define SAMA5D3_CIDR_MATCH 0x0a5c07c0
#define SAMA5D31_EXID_MATCH 0x00444300
diff --git a/arch/arm/mach-bcm/Kconfig b/arch/arm/mach-bcm/Kconfig
index 7ef121472cdd..68ab6412392a 100644
--- a/arch/arm/mach-bcm/Kconfig
+++ b/arch/arm/mach-bcm/Kconfig
@@ -173,12 +173,12 @@ config ARCH_BRCMSTB
select ARM_GIC
select ARM_ERRATA_798181 if SMP
select HAVE_ARM_ARCH_TIMER
- select BRCMSTB_GISB_ARB
select BRCMSTB_L2_IRQ
select BCM7120_L2_IRQ
select ARCH_DMA_ADDR_T_64BIT if ARM_LPAE
select ARCH_WANT_OPTIONAL_GPIOLIB
select SOC_BRCMSTB
+ select SOC_BUS
help
Say Y if you intend to run the kernel on a Broadcom ARM-based STB
chipset.
diff --git a/arch/arm/mach-berlin/berlin.c b/arch/arm/mach-berlin/berlin.c
index 25d73870ccca..ac181c6797ee 100644
--- a/arch/arm/mach-berlin/berlin.c
+++ b/arch/arm/mach-berlin/berlin.c
@@ -18,11 +18,6 @@
#include <asm/hardware/cache-l2x0.h>
#include <asm/mach/arch.h>
-static void __init berlin_init_late(void)
-{
- platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
-}
-
static const char * const berlin_dt_compat[] = {
"marvell,berlin",
NULL,
@@ -30,7 +25,6 @@ static const char * const berlin_dt_compat[] = {
DT_MACHINE_START(BERLIN_DT, "Marvell Berlin")
.dt_compat = berlin_dt_compat,
- .init_late = berlin_init_late,
/*
* with DT probing for L2CCs, berlin_init_machine can be removed.
* Note: 88DE3005 (Armada 1500-mini) uses pl310 l2cc
diff --git a/arch/arm/mach-davinci/Makefile b/arch/arm/mach-davinci/Makefile
index 2e3464b8bab4..da4c336b4637 100644
--- a/arch/arm/mach-davinci/Makefile
+++ b/arch/arm/mach-davinci/Makefile
@@ -14,8 +14,8 @@ obj-$(CONFIG_ARCH_DAVINCI_DM644x) += dm644x.o devices.o
obj-$(CONFIG_ARCH_DAVINCI_DM355) += dm355.o devices.o
obj-$(CONFIG_ARCH_DAVINCI_DM646x) += dm646x.o devices.o
obj-$(CONFIG_ARCH_DAVINCI_DM365) += dm365.o devices.o
-obj-$(CONFIG_ARCH_DAVINCI_DA830) += da830.o devices-da8xx.o
-obj-$(CONFIG_ARCH_DAVINCI_DA850) += da850.o devices-da8xx.o
+obj-$(CONFIG_ARCH_DAVINCI_DA830) += da830.o devices-da8xx.o usb-da8xx.o
+obj-$(CONFIG_ARCH_DAVINCI_DA850) += da850.o devices-da8xx.o usb-da8xx.o
obj-$(CONFIG_AINTC) += irq.o
obj-$(CONFIG_CP_INTC) += cp_intc.o
diff --git a/arch/arm/mach-davinci/clock.c b/arch/arm/mach-davinci/clock.c
index 3424eac6b588..df42c93a93d6 100644
--- a/arch/arm/mach-davinci/clock.c
+++ b/arch/arm/mach-davinci/clock.c
@@ -195,6 +195,14 @@ int clk_set_parent(struct clk *clk, struct clk *parent)
return -EINVAL;
mutex_lock(&clocks_mutex);
+ if (clk->set_parent) {
+ int ret = clk->set_parent(clk, parent);
+
+ if (ret) {
+ mutex_unlock(&clocks_mutex);
+ return ret;
+ }
+ }
clk->parent = parent;
list_del_init(&clk->childnode);
list_add(&clk->childnode, &clk->parent->children);
@@ -224,8 +232,17 @@ int clk_register(struct clk *clk)
mutex_lock(&clocks_mutex);
list_add_tail(&clk->node, &clocks);
- if (clk->parent)
+ if (clk->parent) {
+ if (clk->set_parent) {
+ int ret = clk->set_parent(clk, clk->parent);
+
+ if (ret) {
+ mutex_unlock(&clocks_mutex);
+ return ret;
+ }
+ }
list_add_tail(&clk->childnode, &clk->parent->children);
+ }
mutex_unlock(&clocks_mutex);
/* If rate is already set, use it */
@@ -560,7 +577,7 @@ EXPORT_SYMBOL(davinci_set_pllrate);
* than that used by default in <soc>.c file. The reference clock rate
* should be updated early in the boot process; ideally soon after the
* clock tree has been initialized once with the default reference clock
- * rate (davinci_common_init()).
+ * rate (davinci_clk_init()).
*
* Returns 0 on success, error otherwise.
*/
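
With this change, clk_set_parent() and clk_register() call an optional per-clock set_parent() hook before re-linking the child, so a mux clock can program its hardware select bit as part of the reparent. A simplified, stand-alone sketch of that callback flow; the types are reduced to the essentials and the locking and child-list bookkeeping are left as comments:

/*
 * Minimal sketch of the new set_parent() hook: the framework lets the
 * clock flip its own mux first and only then re-links the parent, so a
 * refused request leaves the tree untouched.
 */
#include <errno.h>
#include <stdio.h>

struct clk {
	const char *name;
	struct clk *parent;
	int (*set_parent)(struct clk *clk, struct clk *parent);
};

static int clk_set_parent(struct clk *clk, struct clk *parent)
{
	if (!clk || !parent)
		return -EINVAL;

	/* mutex_lock(&clocks_mutex) in the real driver */
	if (clk->set_parent) {
		int ret = clk->set_parent(clk, parent);

		if (ret)
			return ret;	/* mux refused, keep the old parent */
	}
	clk->parent = parent;
	/* move clk->childnode to parent->children, then unlock */
	return 0;
}

static int demo_mux(struct clk *clk, struct clk *parent)
{
	printf("%s now fed by %s\n", clk->name, parent->name);
	return 0;
}

int main(void)
{
	struct clk pll0 = { .name = "pll0_sysclk2" };
	struct clk pll1 = { .name = "pll1_sysclk2" };
	struct clk async3 = { .name = "async3", .parent = &pll0,
			      .set_parent = demo_mux };

	return clk_set_parent(&async3, &pll1);
}
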
diff --git a/arch/arm/mach-davinci/clock.h b/arch/arm/mach-davinci/clock.h
index 1e4e836173a1..e2a5437a1aee 100644
--- a/arch/arm/mach-davinci/clock.h
+++ b/arch/arm/mach-davinci/clock.h
@@ -106,6 +106,7 @@ struct clk {
int (*reset) (struct clk *clk, bool reset);
void (*clk_enable) (struct clk *clk);
void (*clk_disable) (struct clk *clk);
+ int (*set_parent) (struct clk *clk, struct clk *parent);
};
/* Clock flags: SoC-specific flags start at BIT(16) */
diff --git a/arch/arm/mach-davinci/common.c b/arch/arm/mach-davinci/common.c
index 742133b7266a..049025f6d531 100644
--- a/arch/arm/mach-davinci/common.c
+++ b/arch/arm/mach-davinci/common.c
@@ -108,12 +108,6 @@ void __init davinci_common_init(struct davinci_soc_info *soc_info)
if (ret < 0)
goto err;
- if (davinci_soc_info.cpu_clks) {
- ret = davinci_clk_init(davinci_soc_info.cpu_clks);
-
- if (ret != 0)
- goto err;
- }
return;
diff --git a/arch/arm/mach-davinci/cp_intc.c b/arch/arm/mach-davinci/cp_intc.c
index 1a68d2477de6..94085d21018e 100644
--- a/arch/arm/mach-davinci/cp_intc.c
+++ b/arch/arm/mach-davinci/cp_intc.c
@@ -12,6 +12,7 @@
#include <linux/export.h>
#include <linux/init.h>
#include <linux/irq.h>
+#include <linux/irqchip.h>
#include <linux/irqdomain.h>
#include <linux/io.h>
#include <linux/of.h>
@@ -210,3 +211,5 @@ void __init cp_intc_init(void)
{
cp_intc_of_init(NULL, NULL);
}
+
+IRQCHIP_DECLARE(cp_intc, "ti,cp-intc", cp_intc_of_init);
diff --git a/arch/arm/mach-davinci/da830.c b/arch/arm/mach-davinci/da830.c
index 7187e7fc2822..426fd7477357 100644
--- a/arch/arm/mach-davinci/da830.c
+++ b/arch/arm/mach-davinci/da830.c
@@ -1214,4 +1214,6 @@ void __init da830_init(void)
da8xx_syscfg0_base = ioremap(DA8XX_SYSCFG0_BASE, SZ_4K);
WARN(!da8xx_syscfg0_base, "Unable to map syscfg0 module");
+
+ davinci_clk_init(davinci_soc_info_da830.cpu_clks);
}
diff --git a/arch/arm/mach-davinci/da850.c b/arch/arm/mach-davinci/da850.c
index 97d8779a9a65..239886299968 100644
--- a/arch/arm/mach-davinci/da850.c
+++ b/arch/arm/mach-davinci/da850.c
@@ -34,9 +34,6 @@
#include "clock.h"
#include "mux.h"
-/* SoC specific clock flags */
-#define DA850_CLK_ASYNC3 BIT(16)
-
#define DA850_PLL1_BASE 0x01e1a000
#define DA850_TIMER64P2_BASE 0x01f0c000
#define DA850_TIMER64P3_BASE 0x01f0d000
@@ -161,6 +158,32 @@ static struct clk pll1_sysclk3 = {
.div_reg = PLLDIV3,
};
+static int da850_async3_set_parent(struct clk *clk, struct clk *parent)
+{
+ u32 val;
+
+ val = readl(DA8XX_SYSCFG0_VIRT(DA8XX_CFGCHIP3_REG));
+
+ if (parent == &pll0_sysclk2) {
+ val &= ~CFGCHIP3_ASYNC3_CLKSRC;
+ } else if (parent == &pll1_sysclk2) {
+ val |= CFGCHIP3_ASYNC3_CLKSRC;
+ } else {
+ pr_err("Bad parent on async3 clock mux\n");
+ return -EINVAL;
+ }
+
+ writel(val, DA8XX_SYSCFG0_VIRT(DA8XX_CFGCHIP3_REG));
+
+ return 0;
+}
+
+static struct clk async3_clk = {
+ .name = "async3",
+ .parent = &pll1_sysclk2,
+ .set_parent = da850_async3_set_parent,
+};
+
static struct clk i2c0_clk = {
.name = "i2c0",
.parent = &pll0_aux_clk,
@@ -234,18 +257,16 @@ static struct clk uart0_clk = {
static struct clk uart1_clk = {
.name = "uart1",
- .parent = &pll0_sysclk2,
+ .parent = &async3_clk,
.lpsc = DA8XX_LPSC1_UART1,
.gpsc = 1,
- .flags = DA850_CLK_ASYNC3,
};
static struct clk uart2_clk = {
.name = "uart2",
- .parent = &pll0_sysclk2,
+ .parent = &async3_clk,
.lpsc = DA8XX_LPSC1_UART2,
.gpsc = 1,
- .flags = DA850_CLK_ASYNC3,
};
static struct clk aintc_clk = {
@@ -300,10 +321,9 @@ static struct clk emac_clk = {
static struct clk mcasp_clk = {
.name = "mcasp",
- .parent = &pll0_sysclk2,
+ .parent = &async3_clk,
.lpsc = DA8XX_LPSC1_McASP0,
.gpsc = 1,
- .flags = DA850_CLK_ASYNC3,
};
static struct clk lcdc_clk = {
@@ -355,10 +375,9 @@ static struct clk spi0_clk = {
static struct clk spi1_clk = {
.name = "spi1",
- .parent = &pll0_sysclk2,
+ .parent = &async3_clk,
.lpsc = DA8XX_LPSC1_SPI1,
.gpsc = 1,
- .flags = DA850_CLK_ASYNC3,
};
static struct clk vpif_clk = {
@@ -386,10 +405,9 @@ static struct clk dsp_clk = {
static struct clk ehrpwm_clk = {
.name = "ehrpwm",
- .parent = &pll0_sysclk2,
+ .parent = &async3_clk,
.lpsc = DA8XX_LPSC1_PWM,
.gpsc = 1,
- .flags = DA850_CLK_ASYNC3,
};
#define DA8XX_EHRPWM_TBCLKSYNC BIT(12)
@@ -421,10 +439,9 @@ static struct clk ehrpwm_tbclk = {
static struct clk ecap_clk = {
.name = "ecap",
- .parent = &pll0_sysclk2,
+ .parent = &async3_clk,
.lpsc = DA8XX_LPSC1_ECAP,
.gpsc = 1,
- .flags = DA850_CLK_ASYNC3,
};
static struct clk_lookup da850_clks[] = {
@@ -442,6 +459,7 @@ static struct clk_lookup da850_clks[] = {
CLK(NULL, "pll1_aux", &pll1_aux_clk),
CLK(NULL, "pll1_sysclk2", &pll1_sysclk2),
CLK(NULL, "pll1_sysclk3", &pll1_sysclk3),
+ CLK(NULL, "async3", &async3_clk),
CLK("i2c_davinci.1", NULL, &i2c0_clk),
CLK(NULL, "timer0", &timerp64_0_clk),
CLK("davinci-wdt", NULL, &timerp64_1_clk),
@@ -909,30 +927,6 @@ static struct davinci_timer_info da850_timer_info = {
.clocksource_id = T0_TOP,
};
-static void da850_set_async3_src(int pllnum)
-{
- struct clk *clk, *newparent = pllnum ? &pll1_sysclk2 : &pll0_sysclk2;
- struct clk_lookup *c;
- unsigned int v;
- int ret;
-
- for (c = da850_clks; c->clk; c++) {
- clk = c->clk;
- if (clk->flags & DA850_CLK_ASYNC3) {
- ret = clk_set_parent(clk, newparent);
- WARN(ret, "DA850: unable to re-parent clock %s",
- clk->name);
- }
- }
-
- v = __raw_readl(DA8XX_SYSCFG0_VIRT(DA8XX_CFGCHIP3_REG));
- if (pllnum)
- v |= CFGCHIP3_ASYNC3_CLKSRC;
- else
- v &= ~CFGCHIP3_ASYNC3_CLKSRC;
- __raw_writel(v, DA8XX_SYSCFG0_VIRT(DA8XX_CFGCHIP3_REG));
-}
-
#ifdef CONFIG_CPU_FREQ
/*
* Notes:
@@ -1328,15 +1322,6 @@ void __init da850_init(void)
if (WARN(!da8xx_syscfg1_base, "Unable to map syscfg1 module"))
return;
- /*
- * Move the clock source of Async3 domain to PLL1 SYSCLK2.
- * This helps keeping the peripherals on this domain insulated
- * from CPU frequency changes caused by DVFS. The firmware sets
- * both PLL0 and PLL1 to the same frequency so, there should not
- * be any noticeable change even in non-DVFS use cases.
- */
- da850_set_async3_src(1);
-
/* Unlock writing to PLL0 registers */
v = __raw_readl(DA8XX_SYSCFG0_VIRT(DA8XX_CFGCHIP0_REG));
v &= ~CFGCHIP0_PLL_MASTER_LOCK;
@@ -1346,4 +1331,6 @@ void __init da850_init(void)
v = __raw_readl(DA8XX_SYSCFG0_VIRT(DA8XX_CFGCHIP3_REG));
v &= ~CFGCHIP3_PLL1_MASTER_LOCK;
__raw_writel(v, DA8XX_SYSCFG0_VIRT(DA8XX_CFGCHIP3_REG));
+
+ davinci_clk_init(davinci_soc_info_da850.cpu_clks);
}
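
da850_async3_set_parent() is a plain read-modify-write of the ASYNC3 source bit in CFGCHIP3; with the async3 clock in the tree, the UART1/UART2/McASP/SPI1/PWM/eCAP clocks simply name it as their parent and one clk_set_parent() call moves the whole domain. The register access pattern, with the memory-mapped SYSCFG register replaced by a plain variable and a hypothetical bit position:

/*
 * CFGCHIP3 ASYNC3 source select, modelled with a plain variable in
 * place of the memory-mapped SYSCFG register. Only the read-modify-
 * write pattern mirrors the driver; the bit position is illustrative.
 */
#include <stdint.h>
#include <stdio.h>

#define CFGCHIP3_ASYNC3_CLKSRC	(1U << 4)	/* hypothetical bit */

static uint32_t cfgchip3;	/* stand-in for the mapped CFGCHIP3 register */

static void async3_select_pll1(int use_pll1)
{
	uint32_t val = cfgchip3;		/* readl() */

	if (use_pll1)
		val |= CFGCHIP3_ASYNC3_CLKSRC;	/* feed from PLL1 SYSCLK2 */
	else
		val &= ~CFGCHIP3_ASYNC3_CLKSRC;	/* feed from PLL0 SYSCLK2 */

	cfgchip3 = val;				/* writel() */
}

int main(void)
{
	async3_select_pll1(1);
	printf("CFGCHIP3 = 0x%08x\n", cfgchip3);
	return 0;
}
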
diff --git a/arch/arm/mach-davinci/da8xx-dt.c b/arch/arm/mach-davinci/da8xx-dt.c
index c4b5808ca7c1..754f478110b4 100644
--- a/arch/arm/mach-davinci/da8xx-dt.c
+++ b/arch/arm/mach-davinci/da8xx-dt.c
@@ -18,20 +18,9 @@
#include "cp_intc.h"
#include <mach/da8xx.h>
-#define DA8XX_NUM_UARTS 3
-
-static const struct of_device_id const da8xx_irq_match[] __initconst = {
- { .compatible = "ti,cp-intc", .data = cp_intc_of_init, },
- { }
-};
-
-static void __init da8xx_init_irq(void)
-{
- of_irq_init(da8xx_irq_match);
-}
-
static struct of_dev_auxdata da850_auxdata_lookup[] __initdata = {
OF_DEV_AUXDATA("ti,davinci-i2c", 0x01c22000, "i2c_davinci.1", NULL),
+ OF_DEV_AUXDATA("ti,davinci-i2c", 0x01e28000, "i2c_davinci.2", NULL),
OF_DEV_AUXDATA("ti,davinci-wdt", 0x01c21000, "davinci-wdt", NULL),
OF_DEV_AUXDATA("ti,da830-mmc", 0x01c40000, "da830-mmc.0", NULL),
OF_DEV_AUXDATA("ti,da850-ehrpwm", 0x01f00000, "ehrpwm", NULL),
@@ -39,6 +28,7 @@ static struct of_dev_auxdata da850_auxdata_lookup[] __initdata = {
OF_DEV_AUXDATA("ti,da850-ecap", 0x01f06000, "ecap", NULL),
OF_DEV_AUXDATA("ti,da850-ecap", 0x01f07000, "ecap", NULL),
OF_DEV_AUXDATA("ti,da850-ecap", 0x01f08000, "ecap", NULL),
+ OF_DEV_AUXDATA("ti,da830-spi", 0x01c41000, "spi_davinci.0", NULL),
OF_DEV_AUXDATA("ti,da830-spi", 0x01f0e000, "spi_davinci.1", NULL),
OF_DEV_AUXDATA("ns16550a", 0x01c42000, "serial8250.0", NULL),
OF_DEV_AUXDATA("ns16550a", 0x01d0c000, "serial8250.1", NULL),
@@ -54,9 +44,7 @@ static struct of_dev_auxdata da850_auxdata_lookup[] __initdata = {
static void __init da850_init_machine(void)
{
- of_platform_populate(NULL, of_default_bus_match_table,
- da850_auxdata_lookup, NULL);
-
+ of_platform_default_populate(NULL, da850_auxdata_lookup, NULL);
}
static const char *const da850_boards_compat[] __initconst = {
@@ -68,7 +56,6 @@ static const char *const da850_boards_compat[] __initconst = {
DT_MACHINE_START(DA850_DT, "Generic DA850/OMAP-L138/AM18x")
.map_io = da850_init,
- .init_irq = da8xx_init_irq,
.init_time = davinci_timer_init,
.init_machine = da850_init_machine,
.dt_compat = da850_boards_compat,
diff --git a/arch/arm/mach-davinci/devices-da8xx.c b/arch/arm/mach-davinci/devices-da8xx.c
index 725e693639d2..add3771d38f6 100644
--- a/arch/arm/mach-davinci/devices-da8xx.c
+++ b/arch/arm/mach-davinci/devices-da8xx.c
@@ -751,16 +751,6 @@ static struct resource da8xx_mmcsd0_resources[] = {
.end = IRQ_DA8XX_MMCSDINT0,
.flags = IORESOURCE_IRQ,
},
- { /* DMA RX */
- .start = DA8XX_DMA_MMCSD0_RX,
- .end = DA8XX_DMA_MMCSD0_RX,
- .flags = IORESOURCE_DMA,
- },
- { /* DMA TX */
- .start = DA8XX_DMA_MMCSD0_TX,
- .end = DA8XX_DMA_MMCSD0_TX,
- .flags = IORESOURCE_DMA,
- },
};
static struct platform_device da8xx_mmcsd0_device = {
@@ -788,16 +778,6 @@ static struct resource da850_mmcsd1_resources[] = {
.end = IRQ_DA850_MMCSDINT0_1,
.flags = IORESOURCE_IRQ,
},
- { /* DMA RX */
- .start = DA850_DMA_MMCSD1_RX,
- .end = DA850_DMA_MMCSD1_RX,
- .flags = IORESOURCE_DMA,
- },
- { /* DMA TX */
- .start = DA850_DMA_MMCSD1_TX,
- .end = DA850_DMA_MMCSD1_TX,
- .flags = IORESOURCE_DMA,
- },
};
static struct platform_device da850_mmcsd1_device = {
diff --git a/arch/arm/mach-davinci/devices.c b/arch/arm/mach-davinci/devices.c
index 6257aa452568..67d26c5bda0b 100644
--- a/arch/arm/mach-davinci/devices.c
+++ b/arch/arm/mach-davinci/devices.c
@@ -144,14 +144,6 @@ static struct resource mmcsd0_resources[] = {
.start = IRQ_SDIOINT,
.flags = IORESOURCE_IRQ,
},
- /* DMA channels: RX, then TX */
- {
- .start = EDMA_CTLR_CHAN(0, DAVINCI_DMA_MMCRXEVT),
- .flags = IORESOURCE_DMA,
- }, {
- .start = EDMA_CTLR_CHAN(0, DAVINCI_DMA_MMCTXEVT),
- .flags = IORESOURCE_DMA,
- },
};
static struct platform_device davinci_mmcsd0_device = {
@@ -181,14 +173,6 @@ static struct resource mmcsd1_resources[] = {
.start = IRQ_DM355_SDIOINT1,
.flags = IORESOURCE_IRQ,
},
- /* DMA channels: RX, then TX */
- {
- .start = EDMA_CTLR_CHAN(0, 30), /* rx */
- .flags = IORESOURCE_DMA,
- }, {
- .start = EDMA_CTLR_CHAN(0, 31), /* tx */
- .flags = IORESOURCE_DMA,
- },
};
static struct platform_device davinci_mmcsd1_device = {
diff --git a/arch/arm/mach-davinci/dm355.c b/arch/arm/mach-davinci/dm355.c
index a0ecf499c2f2..5a19cca7ed6a 100644
--- a/arch/arm/mach-davinci/dm355.c
+++ b/arch/arm/mach-davinci/dm355.c
@@ -1052,6 +1052,7 @@ void __init dm355_init(void)
{
davinci_common_init(&davinci_soc_info_dm355);
davinci_map_sysmod();
+ davinci_clk_init(davinci_soc_info_dm355.cpu_clks);
}
int __init dm355_init_video(struct vpfe_config *vpfe_cfg,
diff --git a/arch/arm/mach-davinci/dm365.c b/arch/arm/mach-davinci/dm365.c
index 384d3674dd9b..8aa004b02fe9 100644
--- a/arch/arm/mach-davinci/dm365.c
+++ b/arch/arm/mach-davinci/dm365.c
@@ -1176,6 +1176,7 @@ void __init dm365_init(void)
{
davinci_common_init(&davinci_soc_info_dm365);
davinci_map_sysmod();
+ davinci_clk_init(davinci_soc_info_dm365.cpu_clks);
}
static struct resource dm365_vpss_resources[] = {
diff --git a/arch/arm/mach-davinci/dm644x.c b/arch/arm/mach-davinci/dm644x.c
index b4b3a8b9ca20..0afa279ec460 100644
--- a/arch/arm/mach-davinci/dm644x.c
+++ b/arch/arm/mach-davinci/dm644x.c
@@ -932,6 +932,7 @@ void __init dm644x_init(void)
{
davinci_common_init(&davinci_soc_info_dm644x);
davinci_map_sysmod();
+ davinci_clk_init(davinci_soc_info_dm644x.cpu_clks);
}
int __init dm644x_init_video(struct vpfe_config *vpfe_cfg,
diff --git a/arch/arm/mach-davinci/dm646x.c b/arch/arm/mach-davinci/dm646x.c
index a43db0f5fbaa..da21353cac45 100644
--- a/arch/arm/mach-davinci/dm646x.c
+++ b/arch/arm/mach-davinci/dm646x.c
@@ -956,6 +956,7 @@ void __init dm646x_init(void)
{
davinci_common_init(&davinci_soc_info_dm646x);
davinci_map_sysmod();
+ davinci_clk_init(davinci_soc_info_dm646x.cpu_clks);
}
static int __init dm646x_init_devices(void)
diff --git a/arch/arm/mach-davinci/usb-da8xx.c b/arch/arm/mach-davinci/usb-da8xx.c
new file mode 100644
index 000000000000..f141f5171906
--- /dev/null
+++ b/arch/arm/mach-davinci/usb-da8xx.c
@@ -0,0 +1,107 @@
+/*
+ * DA8xx USB
+ */
+#include <linux/dma-mapping.h>
+#include <linux/init.h>
+#include <linux/platform_data/usb-davinci.h>
+#include <linux/platform_device.h>
+#include <linux/usb/musb.h>
+
+#include <mach/common.h>
+#include <mach/cputype.h>
+#include <mach/da8xx.h>
+#include <mach/irqs.h>
+
+#define DA8XX_USB0_BASE 0x01e00000
+#define DA8XX_USB1_BASE 0x01e25000
+
+#if IS_ENABLED(CONFIG_USB_MUSB_HDRC)
+
+static struct musb_hdrc_config musb_config = {
+ .multipoint = true,
+ .num_eps = 5,
+ .ram_bits = 10,
+};
+
+static struct musb_hdrc_platform_data usb_data = {
+ /* OTG requires a Mini-AB connector */
+ .mode = MUSB_OTG,
+ .clock = "usb20",
+ .config = &musb_config,
+};
+
+static struct resource da8xx_usb20_resources[] = {
+ {
+ .start = DA8XX_USB0_BASE,
+ .end = DA8XX_USB0_BASE + SZ_64K - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ {
+ .start = IRQ_DA8XX_USB_INT,
+ .flags = IORESOURCE_IRQ,
+ .name = "mc",
+ },
+};
+
+static u64 usb_dmamask = DMA_BIT_MASK(32);
+
+static struct platform_device usb_dev = {
+ .name = "musb-da8xx",
+ .id = -1,
+ .dev = {
+ .platform_data = &usb_data,
+ .dma_mask = &usb_dmamask,
+ .coherent_dma_mask = DMA_BIT_MASK(32),
+ },
+ .resource = da8xx_usb20_resources,
+ .num_resources = ARRAY_SIZE(da8xx_usb20_resources),
+};
+
+int __init da8xx_register_usb20(unsigned int mA, unsigned int potpgt)
+{
+ usb_data.power = mA > 510 ? 255 : mA / 2;
+ usb_data.potpgt = (potpgt + 1) / 2;
+
+ return platform_device_register(&usb_dev);
+}
+
+#else
+
+int __init da8xx_register_usb20(unsigned int mA, unsigned int potpgt)
+{
+ return 0;
+}
+
+#endif /* CONFIG_USB_MUSB_HDRC */
+
+static struct resource da8xx_usb11_resources[] = {
+ [0] = {
+ .start = DA8XX_USB1_BASE,
+ .end = DA8XX_USB1_BASE + SZ_4K - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = IRQ_DA8XX_IRQN,
+ .end = IRQ_DA8XX_IRQN,
+ .flags = IORESOURCE_IRQ,
+ },
+};
+
+static u64 da8xx_usb11_dma_mask = DMA_BIT_MASK(32);
+
+static struct platform_device da8xx_usb11_device = {
+ .name = "ohci",
+ .id = 0,
+ .dev = {
+ .dma_mask = &da8xx_usb11_dma_mask,
+ .coherent_dma_mask = DMA_BIT_MASK(32),
+ },
+ .num_resources = ARRAY_SIZE(da8xx_usb11_resources),
+ .resource = da8xx_usb11_resources,
+};
+
+int __init da8xx_register_usb11(struct da8xx_ohci_root_hub *pdata)
+{
+ da8xx_usb11_device.dev.platform_data = pdata;
+ return platform_device_register(&da8xx_usb11_device);
+}
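
da8xx_register_usb20() converts the board-supplied values into the MUSB platform data encoding: VBUS current is stored in 2 mA units and clamped at 255 (i.e. 510 mA), and the power-on-to-power-good time is stored in 2 ms units, rounded up. The same arithmetic as a stand-alone check:

/* Stand-alone check of the mA/potpgt encoding used above. */
#include <stdio.h>

static unsigned int encode_power(unsigned int mA)
{
	return mA > 510 ? 255 : mA / 2;		/* 2 mA units, capped */
}

static unsigned int encode_potpgt(unsigned int ms)
{
	return (ms + 1) / 2;			/* 2 ms units, rounded up */
}

int main(void)
{
	printf("500 mA -> %u, 1000 mA -> %u\n",
	       encode_power(500), encode_power(1000));	/* 250, 255 */
	printf("8 ms -> %u, 9 ms -> %u\n",
	       encode_potpgt(8), encode_potpgt(9));	/* 4, 5 */
	return 0;
}
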
diff --git a/arch/arm/mach-davinci/usb.c b/arch/arm/mach-davinci/usb.c
index b0a6b522575f..0e7e89c1f331 100644
--- a/arch/arm/mach-davinci/usb.c
+++ b/arch/arm/mach-davinci/usb.c
@@ -10,36 +10,16 @@
#include <mach/common.h>
#include <mach/irqs.h>
#include <mach/cputype.h>
-#include <mach/da8xx.h>
#include <linux/platform_data/usb-davinci.h>
#define DAVINCI_USB_OTG_BASE 0x01c64000
-#define DA8XX_USB0_BASE 0x01e00000
-#define DA8XX_USB1_BASE 0x01e25000
-
#if IS_ENABLED(CONFIG_USB_MUSB_HDRC)
-static struct musb_hdrc_eps_bits musb_eps[] = {
- { "ep1_tx", 8, },
- { "ep1_rx", 8, },
- { "ep2_tx", 8, },
- { "ep2_rx", 8, },
- { "ep3_tx", 5, },
- { "ep3_rx", 5, },
- { "ep4_tx", 5, },
- { "ep4_rx", 5, },
-};
-
static struct musb_hdrc_config musb_config = {
.multipoint = true,
- .dyn_fifo = true,
- .soft_con = true,
- .dma = true,
.num_eps = 5,
- .dma_channels = 8,
.ram_bits = 10,
- .eps_bits = musb_eps,
};
static struct musb_hdrc_platform_data usb_data = {
@@ -97,79 +77,10 @@ void __init davinci_setup_usb(unsigned mA, unsigned potpgt_ms)
platform_device_register(&usb_dev);
}
-#ifdef CONFIG_ARCH_DAVINCI_DA8XX
-static struct resource da8xx_usb20_resources[] = {
- {
- .start = DA8XX_USB0_BASE,
- .end = DA8XX_USB0_BASE + SZ_64K - 1,
- .flags = IORESOURCE_MEM,
- },
- {
- .start = IRQ_DA8XX_USB_INT,
- .flags = IORESOURCE_IRQ,
- .name = "mc",
- },
-};
-
-int __init da8xx_register_usb20(unsigned mA, unsigned potpgt)
-{
- usb_data.clock = "usb20";
- usb_data.power = mA > 510 ? 255 : mA / 2;
- usb_data.potpgt = (potpgt + 1) / 2;
-
- usb_dev.resource = da8xx_usb20_resources;
- usb_dev.num_resources = ARRAY_SIZE(da8xx_usb20_resources);
- usb_dev.name = "musb-da8xx";
-
- return platform_device_register(&usb_dev);
-}
-#endif /* CONFIG_DAVINCI_DA8XX */
-
#else
void __init davinci_setup_usb(unsigned mA, unsigned potpgt_ms)
{
}
-#ifdef CONFIG_ARCH_DAVINCI_DA8XX
-int __init da8xx_register_usb20(unsigned mA, unsigned potpgt)
-{
- return 0;
-}
-#endif
-
#endif /* CONFIG_USB_MUSB_HDRC */
-
-#ifdef CONFIG_ARCH_DAVINCI_DA8XX
-static struct resource da8xx_usb11_resources[] = {
- [0] = {
- .start = DA8XX_USB1_BASE,
- .end = DA8XX_USB1_BASE + SZ_4K - 1,
- .flags = IORESOURCE_MEM,
- },
- [1] = {
- .start = IRQ_DA8XX_IRQN,
- .end = IRQ_DA8XX_IRQN,
- .flags = IORESOURCE_IRQ,
- },
-};
-
-static u64 da8xx_usb11_dma_mask = DMA_BIT_MASK(32);
-
-static struct platform_device da8xx_usb11_device = {
- .name = "ohci",
- .id = 0,
- .dev = {
- .dma_mask = &da8xx_usb11_dma_mask,
- .coherent_dma_mask = DMA_BIT_MASK(32),
- },
- .num_resources = ARRAY_SIZE(da8xx_usb11_resources),
- .resource = da8xx_usb11_resources,
-};
-
-int __init da8xx_register_usb11(struct da8xx_ohci_root_hub *pdata)
-{
- da8xx_usb11_device.dev.platform_data = pdata;
- return platform_device_register(&da8xx_usb11_device);
-}
-#endif /* CONFIG_DAVINCI_DA8XX */
diff --git a/arch/arm/mach-dove/common.c b/arch/arm/mach-dove/common.c
index 0cdaa3851d2e..0d420a2bfe3e 100644
--- a/arch/arm/mach-dove/common.c
+++ b/arch/arm/mach-dove/common.c
@@ -88,8 +88,7 @@ static void __init dove_clk_init(void)
struct clk *nand, *camera, *i2s0, *i2s1, *crypto, *ac97, *pdma;
struct clk *xor0, *xor1, *ge, *gephy;
- tclk = clk_register_fixed_rate(NULL, "tclk", NULL, CLK_IS_ROOT,
- dove_tclk);
+ tclk = clk_register_fixed_rate(NULL, "tclk", NULL, 0, dove_tclk);
usb0 = dove_register_gate("usb0", "tclk", CLOCK_GATING_BIT_USB0);
usb1 = dove_register_gate("usb1", "tclk", CLOCK_GATING_BIT_USB1);
diff --git a/arch/arm/mach-exynos/Kconfig b/arch/arm/mach-exynos/Kconfig
index 207fa2c737a6..e65aa7d11b20 100644
--- a/arch/arm/mach-exynos/Kconfig
+++ b/arch/arm/mach-exynos/Kconfig
@@ -18,6 +18,7 @@ menuconfig ARCH_EXYNOS
select COMMON_CLK_SAMSUNG
select EXYNOS_THERMAL
select EXYNOS_PMU
+ select EXYNOS_SROM
select HAVE_ARM_SCU if SMP
select HAVE_S3C2410_I2C if I2C
select HAVE_S3C2410_WATCHDOG if WATCHDOG
@@ -26,11 +27,13 @@ menuconfig ARCH_EXYNOS
select PINCTRL_EXYNOS
select PM_GENERIC_DOMAINS if PM
select S5P_DEV_MFC
+ select SAMSUNG_MC
select SOC_SAMSUNG
select SRAM
select THERMAL
select THERMAL_OF
select MFD_SYSCON
+ select MEMORY
select CLKSRC_EXYNOS_MCT
select POWER_RESET
select POWER_RESET_SYSCON
diff --git a/arch/arm/mach-exynos/exynos.c b/arch/arm/mach-exynos/exynos.c
index bbf51a46f772..52ccf247e079 100644
--- a/arch/arm/mach-exynos/exynos.c
+++ b/arch/arm/mach-exynos/exynos.c
@@ -31,11 +31,6 @@
static struct map_desc exynos4_iodesc[] __initdata = {
{
- .virtual = (unsigned long)S5P_VA_SROMC,
- .pfn = __phys_to_pfn(EXYNOS4_PA_SROMC),
- .length = SZ_4K,
- .type = MT_DEVICE,
- }, {
.virtual = (unsigned long)S5P_VA_CMU,
.pfn = __phys_to_pfn(EXYNOS4_PA_CMU),
.length = SZ_128K,
@@ -58,15 +53,6 @@ static struct map_desc exynos4_iodesc[] __initdata = {
},
};
-static struct map_desc exynos5_iodesc[] __initdata = {
- {
- .virtual = (unsigned long)S5P_VA_SROMC,
- .pfn = __phys_to_pfn(EXYNOS5_PA_SROMC),
- .length = SZ_4K,
- .type = MT_DEVICE,
- },
-};
-
static struct platform_device exynos_cpuidle = {
.name = "exynos_cpuidle",
#ifdef CONFIG_ARM_EXYNOS_CPUIDLE
@@ -138,9 +124,6 @@ static void __init exynos_map_io(void)
{
if (soc_is_exynos4())
iotable_init(exynos4_iodesc, ARRAY_SIZE(exynos4_iodesc));
-
- if (soc_is_exynos5())
- iotable_init(exynos5_iodesc, ARRAY_SIZE(exynos5_iodesc));
}
static void __init exynos_init_io(void)
@@ -213,33 +196,6 @@ static void __init exynos_init_irq(void)
exynos_map_pmu();
}
-static const struct of_device_id exynos_cpufreq_matches[] = {
- { .compatible = "samsung,exynos3250", .data = "cpufreq-dt" },
- { .compatible = "samsung,exynos4210", .data = "cpufreq-dt" },
- { .compatible = "samsung,exynos4212", .data = "cpufreq-dt" },
- { .compatible = "samsung,exynos4412", .data = "cpufreq-dt" },
- { .compatible = "samsung,exynos5250", .data = "cpufreq-dt" },
-#ifndef CONFIG_BL_SWITCHER
- { .compatible = "samsung,exynos5420", .data = "cpufreq-dt" },
- { .compatible = "samsung,exynos5800", .data = "cpufreq-dt" },
-#endif
- { /* sentinel */ }
-};
-
-static void __init exynos_cpufreq_init(void)
-{
- struct device_node *root = of_find_node_by_path("/");
- const struct of_device_id *match;
-
- match = of_match_node(exynos_cpufreq_matches, root);
- if (!match) {
- platform_device_register_simple("exynos-cpufreq", -1, NULL, 0);
- return;
- }
-
- platform_device_register_simple(match->data, -1, NULL, 0);
-}
-
static void __init exynos_dt_machine_init(void)
{
/*
@@ -262,8 +218,6 @@ static void __init exynos_dt_machine_init(void)
of_machine_is_compatible("samsung,exynos5250"))
platform_device_register(&exynos_cpuidle);
- exynos_cpufreq_init();
-
of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
}
diff --git a/arch/arm/mach-exynos/include/mach/map.h b/arch/arm/mach-exynos/include/mach/map.h
index c88325d56743..c48ba4fbdfd2 100644
--- a/arch/arm/mach-exynos/include/mach/map.h
+++ b/arch/arm/mach-exynos/include/mach/map.h
@@ -25,7 +25,4 @@
#define EXYNOS4_PA_COREPERI 0x10500000
-#define EXYNOS4_PA_SROMC 0x12570000
-#define EXYNOS5_PA_SROMC 0x12250000
-
#endif /* __ASM_ARCH_MAP_H */
diff --git a/arch/arm/mach-exynos/regs-srom.h b/arch/arm/mach-exynos/regs-srom.h
deleted file mode 100644
index 5c4d4427db7b..000000000000
--- a/arch/arm/mach-exynos/regs-srom.h
+++ /dev/null
@@ -1,53 +0,0 @@
-/*
- * Copyright (c) 2010 Samsung Electronics Co., Ltd.
- * http://www.samsung.com
- *
- * S5P SROMC register definitions
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
-*/
-
-#ifndef __PLAT_SAMSUNG_REGS_SROM_H
-#define __PLAT_SAMSUNG_REGS_SROM_H __FILE__
-
-#include <mach/map.h>
-
-#define S5P_SROMREG(x) (S5P_VA_SROMC + (x))
-
-#define S5P_SROM_BW S5P_SROMREG(0x0)
-#define S5P_SROM_BC0 S5P_SROMREG(0x4)
-#define S5P_SROM_BC1 S5P_SROMREG(0x8)
-#define S5P_SROM_BC2 S5P_SROMREG(0xc)
-#define S5P_SROM_BC3 S5P_SROMREG(0x10)
-#define S5P_SROM_BC4 S5P_SROMREG(0x14)
-#define S5P_SROM_BC5 S5P_SROMREG(0x18)
-
-/* one register BW holds 4 x 4-bit packed settings for NCS0 - NCS3 */
-
-#define S5P_SROM_BW__DATAWIDTH__SHIFT 0
-#define S5P_SROM_BW__ADDRMODE__SHIFT 1
-#define S5P_SROM_BW__WAITENABLE__SHIFT 2
-#define S5P_SROM_BW__BYTEENABLE__SHIFT 3
-
-#define S5P_SROM_BW__CS_MASK 0xf
-
-#define S5P_SROM_BW__NCS0__SHIFT 0
-#define S5P_SROM_BW__NCS1__SHIFT 4
-#define S5P_SROM_BW__NCS2__SHIFT 8
-#define S5P_SROM_BW__NCS3__SHIFT 12
-#define S5P_SROM_BW__NCS4__SHIFT 16
-#define S5P_SROM_BW__NCS5__SHIFT 20
-
-/* applies to same to BCS0 - BCS3 */
-
-#define S5P_SROM_BCX__PMC__SHIFT 0
-#define S5P_SROM_BCX__TACP__SHIFT 4
-#define S5P_SROM_BCX__TCAH__SHIFT 8
-#define S5P_SROM_BCX__TCOH__SHIFT 12
-#define S5P_SROM_BCX__TACC__SHIFT 16
-#define S5P_SROM_BCX__TCOS__SHIFT 24
-#define S5P_SROM_BCX__TACS__SHIFT 28
-
-#endif /* __PLAT_SAMSUNG_REGS_SROM_H */
diff --git a/arch/arm/mach-exynos/suspend.c b/arch/arm/mach-exynos/suspend.c
index fee2b003e662..f21690937b7d 100644
--- a/arch/arm/mach-exynos/suspend.c
+++ b/arch/arm/mach-exynos/suspend.c
@@ -34,10 +34,11 @@
#include <asm/smp_scu.h>
#include <asm/suspend.h>
+#include <mach/map.h>
+
#include <plat/pm-common.h>
#include "common.h"
-#include "regs-srom.h"
#define REG_TABLE_END (-1U)
@@ -53,15 +54,6 @@ struct exynos_wkup_irq {
u32 mask;
};
-static struct sleep_save exynos_core_save[] = {
- /* SROM side */
- SAVE_ITEM(S5P_SROM_BW),
- SAVE_ITEM(S5P_SROM_BC0),
- SAVE_ITEM(S5P_SROM_BC1),
- SAVE_ITEM(S5P_SROM_BC2),
- SAVE_ITEM(S5P_SROM_BC3),
-};
-
struct exynos_pm_data {
const struct exynos_wkup_irq *wkup_irq;
unsigned int wake_disable_mask;
@@ -343,8 +335,6 @@ static void exynos_pm_prepare(void)
/* Set wake-up mask registers */
exynos_pm_set_wakeup_mask();
- s3c_pm_do_save(exynos_core_save, ARRAY_SIZE(exynos_core_save));
-
exynos_pm_enter_sleep_mode();
/* ensure at least INFORM0 has the resume address */
@@ -375,8 +365,6 @@ static void exynos5420_pm_prepare(void)
/* Set wake-up mask registers */
exynos_pm_set_wakeup_mask();
- s3c_pm_do_save(exynos_core_save, ARRAY_SIZE(exynos_core_save));
-
exynos_pmu_spare3 = pmu_raw_readl(S5P_PMU_SPARE3);
/*
* The cpu state needs to be saved and restored so that the
@@ -467,8 +455,6 @@ static void exynos_pm_resume(void)
/* For release retention */
exynos_pm_release_retention();
- s3c_pm_do_restore_core(exynos_core_save, ARRAY_SIZE(exynos_core_save));
-
if (cpuid == ARM_CPU_PART_CORTEX_A9)
scu_enable(S5P_VA_SCU);
@@ -535,8 +521,6 @@ static void exynos5420_pm_resume(void)
pmu_raw_writel(exynos_pmu_spare3, S5P_PMU_SPARE3);
- s3c_pm_do_restore_core(exynos_core_save, ARRAY_SIZE(exynos_core_save));
-
early_wakeup:
tmp = pmu_raw_readl(EXYNOS5420_SFR_AXI_CGDIS1);
diff --git a/arch/arm/mach-imx/Kconfig b/arch/arm/mach-imx/Kconfig
index 8973fae25436..dd905b9602a0 100644
--- a/arch/arm/mach-imx/Kconfig
+++ b/arch/arm/mach-imx/Kconfig
@@ -526,7 +526,7 @@ config SOC_IMX6Q
bool "i.MX6 Quad/DualLite support"
select ARM_ERRATA_764369 if SMP
select HAVE_ARM_SCU if SMP
- select HAVE_ARM_TWD if SMP
+ select HAVE_ARM_TWD
select PCI_DOMAINS if PCI
select PINCTRL_IMX6Q
select SOC_IMX6
diff --git a/arch/arm/mach-imx/imx27-dt.c b/arch/arm/mach-imx/imx27-dt.c
index bd42d1bd10af..530a728c2acc 100644
--- a/arch/arm/mach-imx/imx27-dt.c
+++ b/arch/arm/mach-imx/imx27-dt.c
@@ -18,15 +18,6 @@
#include "common.h"
#include "mx27.h"
-static void __init imx27_dt_init(void)
-{
- struct platform_device_info devinfo = { .name = "cpufreq-dt", };
-
- of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
-
- platform_device_register_full(&devinfo);
-}
-
static const char * const imx27_dt_board_compat[] __initconst = {
"fsl,imx27",
NULL
@@ -36,6 +27,5 @@ DT_MACHINE_START(IMX27_DT, "Freescale i.MX27 (Device Tree Support)")
.map_io = mx27_map_io,
.init_early = imx27_init_early,
.init_irq = mx27_init_irq,
- .init_machine = imx27_dt_init,
.dt_compat = imx27_dt_board_compat,
MACHINE_END
diff --git a/arch/arm/mach-imx/mach-imx51.c b/arch/arm/mach-imx/mach-imx51.c
index 6883fbaf9484..10a82a4f1e58 100644
--- a/arch/arm/mach-imx/mach-imx51.c
+++ b/arch/arm/mach-imx/mach-imx51.c
@@ -50,13 +50,10 @@ static void __init imx51_ipu_mipi_setup(void)
static void __init imx51_dt_init(void)
{
- struct platform_device_info devinfo = { .name = "cpufreq-dt", };
-
imx51_ipu_mipi_setup();
imx_src_init();
of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
- platform_device_register_full(&devinfo);
}
static void __init imx51_init_late(void)
diff --git a/arch/arm/mach-imx/mach-imx53.c b/arch/arm/mach-imx/mach-imx53.c
index 86316a979297..18b5c5c136db 100644
--- a/arch/arm/mach-imx/mach-imx53.c
+++ b/arch/arm/mach-imx/mach-imx53.c
@@ -40,8 +40,6 @@ static void __init imx53_dt_init(void)
static void __init imx53_init_late(void)
{
imx53_pm_init();
-
- platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
}
static const char * const imx53_dt_board_compat[] __initconst = {
diff --git a/arch/arm/mach-imx/mach-imx7d.c b/arch/arm/mach-imx/mach-imx7d.c
index 5a27f20c9a82..b450f525a670 100644
--- a/arch/arm/mach-imx/mach-imx7d.c
+++ b/arch/arm/mach-imx/mach-imx7d.c
@@ -105,11 +105,6 @@ static void __init imx7d_init_irq(void)
irqchip_init();
}
-static void __init imx7d_init_late(void)
-{
- platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
-}
-
static const char *const imx7d_dt_compat[] __initconst = {
"fsl,imx7d",
NULL,
@@ -117,7 +112,6 @@ static const char *const imx7d_dt_compat[] __initconst = {
DT_MACHINE_START(IMX7D, "Freescale i.MX7 Dual (Device Tree)")
.init_irq = imx7d_init_irq,
- .init_late = imx7d_init_late,
.init_machine = imx7d_init_machine,
.dt_compat = imx7d_dt_compat,
MACHINE_END
diff --git a/arch/arm/mach-integrator/integrator_ap.c b/arch/arm/mach-integrator/integrator_ap.c
index 5b0e363fe5ba..2b118f20c62c 100644
--- a/arch/arm/mach-integrator/integrator_ap.c
+++ b/arch/arm/mach-integrator/integrator_ap.c
@@ -29,7 +29,6 @@
#include <linux/amba/kmi.h>
#include <linux/io.h>
#include <linux/irqchip.h>
-#include <linux/mtd/physmap.h>
#include <linux/platform_data/clk-integrator.h>
#include <linux/of_irq.h>
#include <linux/of_address.h>
@@ -147,65 +146,6 @@ static int __init irq_syscore_init(void)
device_initcall(irq_syscore_init);
/*
- * Flash handling.
- */
-static int ap_flash_init(struct platform_device *dev)
-{
- u32 tmp;
-
- writel(INTEGRATOR_SC_CTRL_nFLVPPEN | INTEGRATOR_SC_CTRL_nFLWP,
- ap_syscon_base + INTEGRATOR_SC_CTRLC_OFFSET);
-
- tmp = readl(ebi_base + INTEGRATOR_EBI_CSR1_OFFSET) |
- INTEGRATOR_EBI_WRITE_ENABLE;
- writel(tmp, ebi_base + INTEGRATOR_EBI_CSR1_OFFSET);
-
- if (!(readl(ebi_base + INTEGRATOR_EBI_CSR1_OFFSET)
- & INTEGRATOR_EBI_WRITE_ENABLE)) {
- writel(0xa05f, ebi_base + INTEGRATOR_EBI_LOCK_OFFSET);
- writel(tmp, ebi_base + INTEGRATOR_EBI_CSR1_OFFSET);
- writel(0, ebi_base + INTEGRATOR_EBI_LOCK_OFFSET);
- }
- return 0;
-}
-
-static void ap_flash_exit(struct platform_device *dev)
-{
- u32 tmp;
-
- writel(INTEGRATOR_SC_CTRL_nFLVPPEN | INTEGRATOR_SC_CTRL_nFLWP,
- ap_syscon_base + INTEGRATOR_SC_CTRLC_OFFSET);
-
- tmp = readl(ebi_base + INTEGRATOR_EBI_CSR1_OFFSET) &
- ~INTEGRATOR_EBI_WRITE_ENABLE;
- writel(tmp, ebi_base + INTEGRATOR_EBI_CSR1_OFFSET);
-
- if (readl(ebi_base + INTEGRATOR_EBI_CSR1_OFFSET) &
- INTEGRATOR_EBI_WRITE_ENABLE) {
- writel(0xa05f, ebi_base + INTEGRATOR_EBI_LOCK_OFFSET);
- writel(tmp, ebi_base + INTEGRATOR_EBI_CSR1_OFFSET);
- writel(0, ebi_base + INTEGRATOR_EBI_LOCK_OFFSET);
- }
-}
-
-static void ap_flash_set_vpp(struct platform_device *pdev, int on)
-{
- if (on)
- writel(INTEGRATOR_SC_CTRL_nFLVPPEN,
- ap_syscon_base + INTEGRATOR_SC_CTRLS_OFFSET);
- else
- writel(INTEGRATOR_SC_CTRL_nFLVPPEN,
- ap_syscon_base + INTEGRATOR_SC_CTRLC_OFFSET);
-}
-
-static struct physmap_flash_data ap_flash_data = {
- .width = 4,
- .init = ap_flash_init,
- .exit = ap_flash_exit,
- .set_vpp = ap_flash_set_vpp,
-};
-
-/*
* For the PL010 found in the Integrator/AP some of the UART control is
* implemented in the system controller and accessed using a callback
* from the driver.
@@ -266,8 +206,6 @@ static struct of_dev_auxdata ap_auxdata_lookup[] __initdata = {
"kmi0", NULL),
OF_DEV_AUXDATA("arm,primecell", KMI1_BASE,
"kmi1", NULL),
- OF_DEV_AUXDATA("cfi-flash", INTEGRATOR_FLASH_BASE,
- "physmap-flash", &ap_flash_data),
{ /* sentinel */ },
};
diff --git a/arch/arm/mach-integrator/integrator_cp.c b/arch/arm/mach-integrator/integrator_cp.c
index b5fb71a36ee6..6f6b051e81e0 100644
--- a/arch/arm/mach-integrator/integrator_cp.c
+++ b/arch/arm/mach-integrator/integrator_cp.c
@@ -23,7 +23,6 @@
#include <linux/io.h>
#include <linux/irqchip.h>
#include <linux/gfp.h>
-#include <linux/mtd/physmap.h>
#include <linux/of_irq.h>
#include <linux/of_address.h>
#include <linux/of_platform.h>
@@ -43,14 +42,8 @@
/* Base address to the CP controller */
static void __iomem *intcp_con_base;
-#define INTCP_PA_FLASH_BASE 0x24000000
-
#define INTCP_PA_CLCD_BASE 0xc0000000
-#define INTCP_FLASHPROG 0x04
-#define CINTEGRATOR_FLASHPROG_FLVPPEN (1 << 0)
-#define CINTEGRATOR_FLASHPROG_FLWREN (1 << 1)
-
/*
* Logical Physical
* f1000000 10000000 Core module registers
@@ -108,48 +101,6 @@ static void __init intcp_map_io(void)
}
/*
- * Flash handling.
- */
-static int intcp_flash_init(struct platform_device *dev)
-{
- u32 val;
-
- val = readl(intcp_con_base + INTCP_FLASHPROG);
- val |= CINTEGRATOR_FLASHPROG_FLWREN;
- writel(val, intcp_con_base + INTCP_FLASHPROG);
-
- return 0;
-}
-
-static void intcp_flash_exit(struct platform_device *dev)
-{
- u32 val;
-
- val = readl(intcp_con_base + INTCP_FLASHPROG);
- val &= ~(CINTEGRATOR_FLASHPROG_FLVPPEN|CINTEGRATOR_FLASHPROG_FLWREN);
- writel(val, intcp_con_base + INTCP_FLASHPROG);
-}
-
-static void intcp_flash_set_vpp(struct platform_device *pdev, int on)
-{
- u32 val;
-
- val = readl(intcp_con_base + INTCP_FLASHPROG);
- if (on)
- val |= CINTEGRATOR_FLASHPROG_FLVPPEN;
- else
- val &= ~CINTEGRATOR_FLASHPROG_FLVPPEN;
- writel(val, intcp_con_base + INTCP_FLASHPROG);
-}
-
-static struct physmap_flash_data intcp_flash_data = {
- .width = 4,
- .init = intcp_flash_init,
- .exit = intcp_flash_exit,
- .set_vpp = intcp_flash_set_vpp,
-};
-
-/*
* It seems that the card insertion interrupt remains active after
* we've acknowledged it. We therefore ignore the interrupt, and
* rely on reading it from the SIC. This also means that we must
@@ -260,8 +211,6 @@ static struct of_dev_auxdata intcp_auxdata_lookup[] __initdata = {
"aaci", &mmc_data),
OF_DEV_AUXDATA("arm,primecell", INTCP_PA_CLCD_BASE,
"clcd", &clcd_data),
- OF_DEV_AUXDATA("cfi-flash", INTCP_PA_FLASH_BASE,
- "physmap-flash", &intcp_flash_data),
{ /* sentinel */ },
};
diff --git a/arch/arm/mach-keystone/keystone.c b/arch/arm/mach-keystone/keystone.c
index e6b9cb1e6709..a33a296b00dc 100644
--- a/arch/arm/mach-keystone/keystone.c
+++ b/arch/arm/mach-keystone/keystone.c
@@ -63,11 +63,6 @@ static void __init keystone_init(void)
of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
}
-static unsigned long keystone_virt_to_idmap(unsigned long x)
-{
- return (phys_addr_t)(x) - CONFIG_PAGE_OFFSET + KEYSTONE_LOW_PHYS_START;
-}
-
static long long __init keystone_pv_fixup(void)
{
long long offset;
@@ -91,7 +86,7 @@ static long long __init keystone_pv_fixup(void)
offset = KEYSTONE_HIGH_PHYS_START - KEYSTONE_LOW_PHYS_START;
/* Populate the arch idmap hook */
- arch_virt_to_idmap = keystone_virt_to_idmap;
+ arch_phys_to_idmap_offset = -offset;
return offset;
}
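
The keystone change drops the custom virt_to_idmap() callback in favour of arch_phys_to_idmap_offset, a signed constant that the generic idmap code can apply to a physical address; here it is set to -(KEYSTONE_HIGH_PHYS_START - KEYSTONE_LOW_PHYS_START), so addresses in the high physical alias translate back to the low alias used for identity mapping. A minimal sketch of that offset-based translation, with illustrative address constants standing in for the Keystone definitions:

/*
 * Offset-based idmap translation: one signed offset added to the
 * physical address instead of a per-platform callback. The constants
 * are illustrative; only "idmap = phys + offset" mirrors the patch.
 */
#include <stdint.h>
#include <stdio.h>

#define HIGH_PHYS_START	0x800000000ULL	/* hypothetical high alias base */
#define LOW_PHYS_START	0x080000000ULL	/* hypothetical low alias base */

static long long arch_phys_to_idmap_offset;

static uint64_t phys_to_idmap(uint64_t phys)
{
	return phys + arch_phys_to_idmap_offset;
}

int main(void)
{
	arch_phys_to_idmap_offset = -(long long)(HIGH_PHYS_START -
						 LOW_PHYS_START);

	/* A page in the high alias maps back into the low 32-bit alias. */
	printf("0x%llx -> 0x%llx\n",
	       (unsigned long long)(HIGH_PHYS_START + 0x1000),
	       (unsigned long long)phys_to_idmap(HIGH_PHYS_START + 0x1000));
	return 0;
}
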
diff --git a/arch/arm/mach-lpc32xx/Makefile b/arch/arm/mach-lpc32xx/Makefile
index c70709ada692..79b6b07e115d 100644
--- a/arch/arm/mach-lpc32xx/Makefile
+++ b/arch/arm/mach-lpc32xx/Makefile
@@ -2,6 +2,6 @@
# Makefile for the linux kernel.
#
-obj-y := irq.o common.o serial.o
+obj-y := common.o serial.o
obj-y += pm.o suspend.o
obj-y += phy3250.o
diff --git a/arch/arm/mach-lpc32xx/common.c b/arch/arm/mach-lpc32xx/common.c
index 5b7a1e78c3a5..2f6067bce7c3 100644
--- a/arch/arm/mach-lpc32xx/common.c
+++ b/arch/arm/mach-lpc32xx/common.c
@@ -17,13 +17,6 @@
*/
#include <linux/init.h>
-#include <linux/platform_device.h>
-#include <linux/interrupt.h>
-#include <linux/irq.h>
-#include <linux/err.h>
-#include <linux/i2c.h>
-#include <linux/i2c-pnx.h>
-#include <linux/io.h>
#include <asm/mach/map.h>
#include <asm/system_info.h>
@@ -44,19 +37,6 @@ void lpc32xx_get_uid(u32 devid[4])
}
/*
- * Returns SYSCLK source
- * 0 = PLL397, 1 = main oscillator
- */
-int clk_is_sysclk_mainosc(void)
-{
- if ((__raw_readl(LPC32XX_CLKPWR_SYSCLK_CTRL) &
- LPC32XX_CLKPWR_SYSCTRL_SYSCLKMUX) == 0)
- return 1;
-
- return 0;
-}
-
-/*
* Detects and returns IRAM size for the device variation
*/
#define LPC32XX_IRAM_BANK_SIZE SZ_128K
@@ -87,81 +67,6 @@ u32 lpc32xx_return_iram_size(void)
}
EXPORT_SYMBOL_GPL(lpc32xx_return_iram_size);
-/*
- * Computes PLL rate from PLL register and input clock
- */
-u32 clk_check_pll_setup(u32 ifreq, struct clk_pll_setup *pllsetup)
-{
- u32 ilfreq, p, m, n, fcco, fref, cfreq;
- int mode;
-
- /*
- * PLL requirements
- * ifreq must be >= 1MHz and <= 20MHz
- * FCCO must be >= 156MHz and <= 320MHz
- * FREF must be >= 1MHz and <= 27MHz
- * Assume the passed input data is not valid
- */
-
- ilfreq = ifreq;
- m = pllsetup->pll_m;
- n = pllsetup->pll_n;
- p = pllsetup->pll_p;
-
- mode = (pllsetup->cco_bypass_b15 << 2) |
- (pllsetup->direct_output_b14 << 1) |
- pllsetup->fdbk_div_ctrl_b13;
-
- switch (mode) {
- case 0x0: /* Non-integer mode */
- cfreq = (m * ilfreq) / (2 * p * n);
- fcco = (m * ilfreq) / n;
- fref = ilfreq / n;
- break;
-
- case 0x1: /* integer mode */
- cfreq = (m * ilfreq) / n;
- fcco = (m * ilfreq) / (n * 2 * p);
- fref = ilfreq / n;
- break;
-
- case 0x2:
- case 0x3: /* Direct mode */
- cfreq = (m * ilfreq) / n;
- fcco = cfreq;
- fref = ilfreq / n;
- break;
-
- case 0x4:
- case 0x5: /* Bypass mode */
- cfreq = ilfreq / (2 * p);
- fcco = 156000000;
- fref = 1000000;
- break;
-
- case 0x6:
- case 0x7: /* Direct bypass mode */
- default:
- cfreq = ilfreq;
- fcco = 156000000;
- fref = 1000000;
- break;
- }
-
- if (fcco < 156000000 || fcco > 320000000)
- cfreq = 0;
-
- if (fref < 1000000 || fref > 27000000)
- cfreq = 0;
-
- return (u32) cfreq;
-}
-
-u32 clk_get_pclk_div(void)
-{
- return 1 + ((__raw_readl(LPC32XX_CLKPWR_HCLK_DIV) >> 2) & 0x1F);
-}
-
static struct map_desc lpc32xx_io_desc[] __initdata = {
{
.virtual = (unsigned long)IO_ADDRESS(LPC32XX_AHB0_START),
diff --git a/arch/arm/mach-lpc32xx/common.h b/arch/arm/mach-lpc32xx/common.h
index 2d90801ed1e1..30c9e64fc65b 100644
--- a/arch/arm/mach-lpc32xx/common.h
+++ b/arch/arm/mach-lpc32xx/common.h
@@ -19,37 +19,15 @@
#ifndef __LPC32XX_COMMON_H
#define __LPC32XX_COMMON_H
-#include <mach/board.h>
-#include <linux/platform_device.h>
-#include <linux/reboot.h>
+#include <linux/init.h>
/*
* Other arch specific structures and functions
*/
-extern void lpc32xx_timer_init(void);
extern void __init lpc32xx_init_irq(void);
extern void __init lpc32xx_map_io(void);
extern void __init lpc32xx_serial_init(void);
-
-/*
- * Structure used for setting up and querying the PLLS
- */
-struct clk_pll_setup {
- int analog_on;
- int cco_bypass_b15;
- int direct_output_b14;
- int fdbk_div_ctrl_b13;
- int pll_p;
- int pll_n;
- u32 pll_m;
-};
-
-extern int clk_is_sysclk_mainosc(void);
-extern u32 clk_check_pll_setup(u32 ifreq, struct clk_pll_setup *pllsetup);
-extern u32 clk_get_pllrate_from_reg(u32 inputclk, u32 regval);
-extern u32 clk_get_pclk_div(void);
-
/*
* Returns the LPC32xx unique 128-bit chip ID
*/
diff --git a/arch/arm/mach-lpc32xx/include/mach/irqs.h b/arch/arm/mach-lpc32xx/include/mach/irqs.h
index 9e3b90df32e1..00190535df90 100644
--- a/arch/arm/mach-lpc32xx/include/mach/irqs.h
+++ b/arch/arm/mach-lpc32xx/include/mach/irqs.h
@@ -112,6 +112,6 @@
#define IRQ_LPC32XX_GPI_06 LPC32XX_SIC2_IRQ(28)
#define IRQ_LPC32XX_SYSCLK LPC32XX_SIC2_IRQ(31)
-#define NR_IRQS 96
+#define LPC32XX_NR_IRQS 96
#endif
diff --git a/arch/arm/mach-lpc32xx/irq.c b/arch/arm/mach-lpc32xx/irq.c
deleted file mode 100644
index 2ae431e8bc1b..000000000000
--- a/arch/arm/mach-lpc32xx/irq.c
+++ /dev/null
@@ -1,477 +0,0 @@
-/*
- * arch/arm/mach-lpc32xx/irq.c
- *
- * Author: Kevin Wells <kevin.wells@nxp.com>
- *
- * Copyright (C) 2010 NXP Semiconductors
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/interrupt.h>
-#include <linux/irq.h>
-#include <linux/err.h>
-#include <linux/io.h>
-#include <linux/of.h>
-#include <linux/of_address.h>
-#include <linux/of_irq.h>
-#include <linux/irqdomain.h>
-#include <linux/module.h>
-
-#include <mach/irqs.h>
-#include <mach/hardware.h>
-#include <mach/platform.h>
-#include "common.h"
-
-/*
- * Default value representing the Activation polarity of all internal
- * interrupt sources
- */
-#define MIC_APR_DEFAULT 0x3FF0EFE0
-#define SIC1_APR_DEFAULT 0xFBD27186
-#define SIC2_APR_DEFAULT 0x801810C0
-
-/*
- * Default value representing the Activation Type of all internal
- * interrupt sources. All are level sensitive.
- */
-#define MIC_ATR_DEFAULT 0x00000000
-#define SIC1_ATR_DEFAULT 0x00026000
-#define SIC2_ATR_DEFAULT 0x00000000
-
-static struct irq_domain *lpc32xx_mic_domain;
-static struct device_node *lpc32xx_mic_np;
-
-struct lpc32xx_event_group_regs {
- void __iomem *enab_reg;
- void __iomem *edge_reg;
- void __iomem *maskstat_reg;
- void __iomem *rawstat_reg;
-};
-
-static const struct lpc32xx_event_group_regs lpc32xx_event_int_regs = {
- .enab_reg = LPC32XX_CLKPWR_INT_ER,
- .edge_reg = LPC32XX_CLKPWR_INT_AP,
- .maskstat_reg = LPC32XX_CLKPWR_INT_SR,
- .rawstat_reg = LPC32XX_CLKPWR_INT_RS,
-};
-
-static const struct lpc32xx_event_group_regs lpc32xx_event_pin_regs = {
- .enab_reg = LPC32XX_CLKPWR_PIN_ER,
- .edge_reg = LPC32XX_CLKPWR_PIN_AP,
- .maskstat_reg = LPC32XX_CLKPWR_PIN_SR,
- .rawstat_reg = LPC32XX_CLKPWR_PIN_RS,
-};
-
-struct lpc32xx_event_info {
- const struct lpc32xx_event_group_regs *event_group;
- u32 mask;
-};
-
-/*
- * Maps an IRQ number to and event mask and register
- */
-static const struct lpc32xx_event_info lpc32xx_events[NR_IRQS] = {
- [IRQ_LPC32XX_GPI_08] = {
- .event_group = &lpc32xx_event_pin_regs,
- .mask = LPC32XX_CLKPWR_EXTSRC_GPI_08_BIT,
- },
- [IRQ_LPC32XX_GPI_09] = {
- .event_group = &lpc32xx_event_pin_regs,
- .mask = LPC32XX_CLKPWR_EXTSRC_GPI_09_BIT,
- },
- [IRQ_LPC32XX_GPI_19] = {
- .event_group = &lpc32xx_event_pin_regs,
- .mask = LPC32XX_CLKPWR_EXTSRC_GPI_19_BIT,
- },
- [IRQ_LPC32XX_GPI_07] = {
- .event_group = &lpc32xx_event_pin_regs,
- .mask = LPC32XX_CLKPWR_EXTSRC_GPI_07_BIT,
- },
- [IRQ_LPC32XX_GPI_00] = {
- .event_group = &lpc32xx_event_pin_regs,
- .mask = LPC32XX_CLKPWR_EXTSRC_GPI_00_BIT,
- },
- [IRQ_LPC32XX_GPI_01] = {
- .event_group = &lpc32xx_event_pin_regs,
- .mask = LPC32XX_CLKPWR_EXTSRC_GPI_01_BIT,
- },
- [IRQ_LPC32XX_GPI_02] = {
- .event_group = &lpc32xx_event_pin_regs,
- .mask = LPC32XX_CLKPWR_EXTSRC_GPI_02_BIT,
- },
- [IRQ_LPC32XX_GPI_03] = {
- .event_group = &lpc32xx_event_pin_regs,
- .mask = LPC32XX_CLKPWR_EXTSRC_GPI_03_BIT,
- },
- [IRQ_LPC32XX_GPI_04] = {
- .event_group = &lpc32xx_event_pin_regs,
- .mask = LPC32XX_CLKPWR_EXTSRC_GPI_04_BIT,
- },
- [IRQ_LPC32XX_GPI_05] = {
- .event_group = &lpc32xx_event_pin_regs,
- .mask = LPC32XX_CLKPWR_EXTSRC_GPI_05_BIT,
- },
- [IRQ_LPC32XX_GPI_06] = {
- .event_group = &lpc32xx_event_pin_regs,
- .mask = LPC32XX_CLKPWR_EXTSRC_GPI_06_BIT,
- },
- [IRQ_LPC32XX_GPI_28] = {
- .event_group = &lpc32xx_event_pin_regs,
- .mask = LPC32XX_CLKPWR_EXTSRC_GPI_28_BIT,
- },
- [IRQ_LPC32XX_GPIO_00] = {
- .event_group = &lpc32xx_event_int_regs,
- .mask = LPC32XX_CLKPWR_INTSRC_GPIO_00_BIT,
- },
- [IRQ_LPC32XX_GPIO_01] = {
- .event_group = &lpc32xx_event_int_regs,
- .mask = LPC32XX_CLKPWR_INTSRC_GPIO_01_BIT,
- },
- [IRQ_LPC32XX_GPIO_02] = {
- .event_group = &lpc32xx_event_int_regs,
- .mask = LPC32XX_CLKPWR_INTSRC_GPIO_02_BIT,
- },
- [IRQ_LPC32XX_GPIO_03] = {
- .event_group = &lpc32xx_event_int_regs,
- .mask = LPC32XX_CLKPWR_INTSRC_GPIO_03_BIT,
- },
- [IRQ_LPC32XX_GPIO_04] = {
- .event_group = &lpc32xx_event_int_regs,
- .mask = LPC32XX_CLKPWR_INTSRC_GPIO_04_BIT,
- },
- [IRQ_LPC32XX_GPIO_05] = {
- .event_group = &lpc32xx_event_int_regs,
- .mask = LPC32XX_CLKPWR_INTSRC_GPIO_05_BIT,
- },
- [IRQ_LPC32XX_KEY] = {
- .event_group = &lpc32xx_event_int_regs,
- .mask = LPC32XX_CLKPWR_INTSRC_KEY_BIT,
- },
- [IRQ_LPC32XX_ETHERNET] = {
- .event_group = &lpc32xx_event_int_regs,
- .mask = LPC32XX_CLKPWR_INTSRC_MAC_BIT,
- },
- [IRQ_LPC32XX_USB_OTG_ATX] = {
- .event_group = &lpc32xx_event_int_regs,
- .mask = LPC32XX_CLKPWR_INTSRC_USBATXINT_BIT,
- },
- [IRQ_LPC32XX_USB_HOST] = {
- .event_group = &lpc32xx_event_int_regs,
- .mask = LPC32XX_CLKPWR_INTSRC_USB_BIT,
- },
- [IRQ_LPC32XX_RTC] = {
- .event_group = &lpc32xx_event_int_regs,
- .mask = LPC32XX_CLKPWR_INTSRC_RTC_BIT,
- },
- [IRQ_LPC32XX_MSTIMER] = {
- .event_group = &lpc32xx_event_int_regs,
- .mask = LPC32XX_CLKPWR_INTSRC_MSTIMER_BIT,
- },
- [IRQ_LPC32XX_TS_AUX] = {
- .event_group = &lpc32xx_event_int_regs,
- .mask = LPC32XX_CLKPWR_INTSRC_TS_AUX_BIT,
- },
- [IRQ_LPC32XX_TS_P] = {
- .event_group = &lpc32xx_event_int_regs,
- .mask = LPC32XX_CLKPWR_INTSRC_TS_P_BIT,
- },
- [IRQ_LPC32XX_TS_IRQ] = {
- .event_group = &lpc32xx_event_int_regs,
- .mask = LPC32XX_CLKPWR_INTSRC_ADC_BIT,
- },
-};
-
-static void get_controller(unsigned int irq, unsigned int *base,
- unsigned int *irqbit)
-{
- if (irq < 32) {
- *base = LPC32XX_MIC_BASE;
- *irqbit = 1 << irq;
- } else if (irq < 64) {
- *base = LPC32XX_SIC1_BASE;
- *irqbit = 1 << (irq - 32);
- } else {
- *base = LPC32XX_SIC2_BASE;
- *irqbit = 1 << (irq - 64);
- }
-}
-
-static void lpc32xx_mask_irq(struct irq_data *d)
-{
- unsigned int reg, ctrl, mask;
-
- get_controller(d->hwirq, &ctrl, &mask);
-
- reg = __raw_readl(LPC32XX_INTC_MASK(ctrl)) & ~mask;
- __raw_writel(reg, LPC32XX_INTC_MASK(ctrl));
-}
-
-static void lpc32xx_unmask_irq(struct irq_data *d)
-{
- unsigned int reg, ctrl, mask;
-
- get_controller(d->hwirq, &ctrl, &mask);
-
- reg = __raw_readl(LPC32XX_INTC_MASK(ctrl)) | mask;
- __raw_writel(reg, LPC32XX_INTC_MASK(ctrl));
-}
-
-static void lpc32xx_ack_irq(struct irq_data *d)
-{
- unsigned int ctrl, mask;
-
- get_controller(d->hwirq, &ctrl, &mask);
-
- __raw_writel(mask, LPC32XX_INTC_RAW_STAT(ctrl));
-
- /* Also need to clear pending wake event */
- if (lpc32xx_events[d->hwirq].mask != 0)
- __raw_writel(lpc32xx_events[d->hwirq].mask,
- lpc32xx_events[d->hwirq].event_group->rawstat_reg);
-}
-
-static void __lpc32xx_set_irq_type(unsigned int irq, int use_high_level,
- int use_edge)
-{
- unsigned int reg, ctrl, mask;
-
- get_controller(irq, &ctrl, &mask);
-
- /* Activation level, high or low */
- reg = __raw_readl(LPC32XX_INTC_POLAR(ctrl));
- if (use_high_level)
- reg |= mask;
- else
- reg &= ~mask;
- __raw_writel(reg, LPC32XX_INTC_POLAR(ctrl));
-
- /* Activation type, edge or level */
- reg = __raw_readl(LPC32XX_INTC_ACT_TYPE(ctrl));
- if (use_edge)
- reg |= mask;
- else
- reg &= ~mask;
- __raw_writel(reg, LPC32XX_INTC_ACT_TYPE(ctrl));
-
- /* Use same polarity for the wake events */
- if (lpc32xx_events[irq].mask != 0) {
- reg = __raw_readl(lpc32xx_events[irq].event_group->edge_reg);
-
- if (use_high_level)
- reg |= lpc32xx_events[irq].mask;
- else
- reg &= ~lpc32xx_events[irq].mask;
-
- __raw_writel(reg, lpc32xx_events[irq].event_group->edge_reg);
- }
-}
-
-static int lpc32xx_set_irq_type(struct irq_data *d, unsigned int type)
-{
- switch (type) {
- case IRQ_TYPE_EDGE_RISING:
- /* Rising edge sensitive */
- __lpc32xx_set_irq_type(d->hwirq, 1, 1);
- irq_set_handler_locked(d, handle_edge_irq);
- break;
-
- case IRQ_TYPE_EDGE_FALLING:
- /* Falling edge sensitive */
- __lpc32xx_set_irq_type(d->hwirq, 0, 1);
- irq_set_handler_locked(d, handle_edge_irq);
- break;
-
- case IRQ_TYPE_LEVEL_LOW:
- /* Low level sensitive */
- __lpc32xx_set_irq_type(d->hwirq, 0, 0);
- irq_set_handler_locked(d, handle_level_irq);
- break;
-
- case IRQ_TYPE_LEVEL_HIGH:
- /* High level sensitive */
- __lpc32xx_set_irq_type(d->hwirq, 1, 0);
- irq_set_handler_locked(d, handle_level_irq);
- break;
-
- /* Other modes are not supported */
- default:
- return -EINVAL;
- }
-
- return 0;
-}
-
-static int lpc32xx_irq_wake(struct irq_data *d, unsigned int state)
-{
- unsigned long eventreg;
-
- if (lpc32xx_events[d->hwirq].mask != 0) {
- eventreg = __raw_readl(lpc32xx_events[d->hwirq].
- event_group->enab_reg);
-
- if (state)
- eventreg |= lpc32xx_events[d->hwirq].mask;
- else {
- eventreg &= ~lpc32xx_events[d->hwirq].mask;
-
- /*
- * When disabling the wakeup, clear the latched
- * event
- */
- __raw_writel(lpc32xx_events[d->hwirq].mask,
- lpc32xx_events[d->hwirq].
- event_group->rawstat_reg);
- }
-
- __raw_writel(eventreg,
- lpc32xx_events[d->hwirq].event_group->enab_reg);
-
- return 0;
- }
-
- /* Clear event */
- __raw_writel(lpc32xx_events[d->hwirq].mask,
- lpc32xx_events[d->hwirq].event_group->rawstat_reg);
-
- return -ENODEV;
-}
-
-static void __init lpc32xx_set_default_mappings(unsigned int apr,
- unsigned int atr, unsigned int offset)
-{
- unsigned int i;
-
- /* Set activation levels for each interrupt */
- i = 0;
- while (i < 32) {
- __lpc32xx_set_irq_type(offset + i, ((apr >> i) & 0x1),
- ((atr >> i) & 0x1));
- i++;
- }
-}
-
-static struct irq_chip lpc32xx_irq_chip = {
- .name = "MIC",
- .irq_ack = lpc32xx_ack_irq,
- .irq_mask = lpc32xx_mask_irq,
- .irq_unmask = lpc32xx_unmask_irq,
- .irq_set_type = lpc32xx_set_irq_type,
- .irq_set_wake = lpc32xx_irq_wake
-};
-
-static void lpc32xx_sic1_handler(struct irq_desc *desc)
-{
- unsigned long ints = __raw_readl(LPC32XX_INTC_STAT(LPC32XX_SIC1_BASE));
-
- while (ints != 0) {
- int irqno = fls(ints) - 1;
-
- ints &= ~(1 << irqno);
-
- generic_handle_irq(LPC32XX_SIC1_IRQ(irqno));
- }
-}
-
-static void lpc32xx_sic2_handler(struct irq_desc *desc)
-{
- unsigned long ints = __raw_readl(LPC32XX_INTC_STAT(LPC32XX_SIC2_BASE));
-
- while (ints != 0) {
- int irqno = fls(ints) - 1;
-
- ints &= ~(1 << irqno);
-
- generic_handle_irq(LPC32XX_SIC2_IRQ(irqno));
- }
-}
-
-static int __init __lpc32xx_mic_of_init(struct device_node *node,
- struct device_node *parent)
-{
- lpc32xx_mic_np = node;
-
- return 0;
-}
-
-static const struct of_device_id mic_of_match[] __initconst = {
- { .compatible = "nxp,lpc3220-mic", .data = __lpc32xx_mic_of_init },
- { }
-};
-
-void __init lpc32xx_init_irq(void)
-{
- unsigned int i;
-
- /* Setup MIC */
- __raw_writel(0, LPC32XX_INTC_MASK(LPC32XX_MIC_BASE));
- __raw_writel(MIC_APR_DEFAULT, LPC32XX_INTC_POLAR(LPC32XX_MIC_BASE));
- __raw_writel(MIC_ATR_DEFAULT, LPC32XX_INTC_ACT_TYPE(LPC32XX_MIC_BASE));
-
- /* Setup SIC1 */
- __raw_writel(0, LPC32XX_INTC_MASK(LPC32XX_SIC1_BASE));
- __raw_writel(SIC1_APR_DEFAULT, LPC32XX_INTC_POLAR(LPC32XX_SIC1_BASE));
- __raw_writel(SIC1_ATR_DEFAULT,
- LPC32XX_INTC_ACT_TYPE(LPC32XX_SIC1_BASE));
-
- /* Setup SIC2 */
- __raw_writel(0, LPC32XX_INTC_MASK(LPC32XX_SIC2_BASE));
- __raw_writel(SIC2_APR_DEFAULT, LPC32XX_INTC_POLAR(LPC32XX_SIC2_BASE));
- __raw_writel(SIC2_ATR_DEFAULT,
- LPC32XX_INTC_ACT_TYPE(LPC32XX_SIC2_BASE));
-
- /* Configure supported IRQ's */
- for (i = 0; i < NR_IRQS; i++) {
- irq_set_chip_and_handler(i, &lpc32xx_irq_chip,
- handle_level_irq);
- irq_clear_status_flags(i, IRQ_NOREQUEST);
- }
-
- /* Set default mappings */
- lpc32xx_set_default_mappings(MIC_APR_DEFAULT, MIC_ATR_DEFAULT, 0);
- lpc32xx_set_default_mappings(SIC1_APR_DEFAULT, SIC1_ATR_DEFAULT, 32);
- lpc32xx_set_default_mappings(SIC2_APR_DEFAULT, SIC2_ATR_DEFAULT, 64);
-
- /* Initially disable all wake events */
- __raw_writel(0, LPC32XX_CLKPWR_P01_ER);
- __raw_writel(0, LPC32XX_CLKPWR_INT_ER);
- __raw_writel(0, LPC32XX_CLKPWR_PIN_ER);
-
- /*
- * Default wake activation polarities, all pin sources are low edge
- * triggered
- */
- __raw_writel(LPC32XX_CLKPWR_INTSRC_TS_P_BIT |
- LPC32XX_CLKPWR_INTSRC_MSTIMER_BIT |
- LPC32XX_CLKPWR_INTSRC_RTC_BIT,
- LPC32XX_CLKPWR_INT_AP);
- __raw_writel(0, LPC32XX_CLKPWR_PIN_AP);
-
- /* Clear latched wake event states */
- __raw_writel(__raw_readl(LPC32XX_CLKPWR_PIN_RS),
- LPC32XX_CLKPWR_PIN_RS);
- __raw_writel(__raw_readl(LPC32XX_CLKPWR_INT_RS),
- LPC32XX_CLKPWR_INT_RS);
-
- of_irq_init(mic_of_match);
-
- lpc32xx_mic_domain = irq_domain_add_legacy(lpc32xx_mic_np, NR_IRQS,
- 0, 0, &irq_domain_simple_ops,
- NULL);
- if (!lpc32xx_mic_domain)
- panic("Unable to add MIC irq domain\n");
-
- /* MIC SUBIRQx interrupts will route handling to the chain handlers */
- irq_set_chained_handler(IRQ_LPC32XX_SUB1IRQ, lpc32xx_sic1_handler);
- irq_set_chained_handler(IRQ_LPC32XX_SUB2IRQ, lpc32xx_sic2_handler);
-}
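
Of the legacy interrupt code removed above, the piece that captures the controller topology is get_controller(): hwirqs 0-31 belong to the MIC, 32-63 to SIC1 and 64 upwards to SIC2, with the per-controller mask bit derived from the hwirq offset. A minimal standalone sketch of that mapping follows; the base addresses are placeholders for illustration, not the real LPC32xx register addresses.

    #include <stdio.h>

    /* Placeholder bases - the real values come from the LPC32xx platform headers. */
    #define MIC_BASE   0x40008000u
    #define SIC1_BASE  0x4000C000u
    #define SIC2_BASE  0x40010000u

    /* Same routing as the removed get_controller(): 32 hwirqs per controller. */
    static void route_hwirq(unsigned int hwirq, unsigned int *base,
                            unsigned int *bit)
    {
        if (hwirq < 32) {
            *base = MIC_BASE;
            *bit = 1u << hwirq;
        } else if (hwirq < 64) {
            *base = SIC1_BASE;
            *bit = 1u << (hwirq - 32);
        } else {
            *base = SIC2_BASE;
            *bit = 1u << (hwirq - 64);
        }
    }

    int main(void)
    {
        unsigned int base, bit;

        route_hwirq(40, &base, &bit);   /* hwirq 40 -> SIC1, bit 8 */
        printf("base=0x%08x bit=0x%08x\n", base, bit);
        return 0;
    }

Routing by plain ranges works here because each controller exposes exactly 32 status/mask bits, so the hwirq offset within a range maps one-to-one onto a bit position.
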
diff --git a/arch/arm/mach-lpc32xx/phy3250.c b/arch/arm/mach-lpc32xx/phy3250.c
index b2f9e226febe..81265e80302d 100644
--- a/arch/arm/mach-lpc32xx/phy3250.c
+++ b/arch/arm/mach-lpc32xx/phy3250.c
@@ -159,7 +159,7 @@ static struct lpc32xx_mlc_platform_data lpc32xx_mlc_data = {
.dma_filter = pl08x_filter_id,
};
-static const struct of_dev_auxdata const lpc32xx_auxdata_lookup[] __initconst = {
+static const struct of_dev_auxdata lpc32xx_auxdata_lookup[] __initconst = {
OF_DEV_AUXDATA("arm,pl022", 0x20084000, "dev:ssp0", NULL),
OF_DEV_AUXDATA("arm,pl022", 0x2008C000, "dev:ssp1", NULL),
OF_DEV_AUXDATA("arm,pl110", 0x31040000, "dev:clcd", &lpc32xx_clcd_data),
@@ -206,7 +206,6 @@ static const char *const lpc32xx_dt_compat[] __initconst = {
DT_MACHINE_START(LPC32XX_DT, "LPC32XX SoC (Flattened Device Tree)")
.atag_offset = 0x100,
.map_io = lpc32xx_map_io,
- .init_irq = lpc32xx_init_irq,
.init_machine = lpc3250_machine_init,
.dt_compat = lpc32xx_dt_compat,
MACHINE_END
diff --git a/arch/arm/mach-mediatek/Kconfig b/arch/arm/mach-mediatek/Kconfig
index 8ced4ad94af0..70e49d54434e 100644
--- a/arch/arm/mach-mediatek/Kconfig
+++ b/arch/arm/mach-mediatek/Kconfig
@@ -10,6 +10,10 @@ menuconfig ARCH_MEDIATEK
if ARCH_MEDIATEK
+config MACH_MT2701
+ bool "MediaTek MT2701 SoCs support"
+ default ARCH_MEDIATEK
+
config MACH_MT6589
bool "MediaTek MT6589 SoCs support"
default ARCH_MEDIATEK
diff --git a/arch/arm/mach-mediatek/mediatek.c b/arch/arm/mach-mediatek/mediatek.c
index 9c2e38d30f47..a6e3c98b95ed 100644
--- a/arch/arm/mach-mediatek/mediatek.c
+++ b/arch/arm/mach-mediatek/mediatek.c
@@ -29,6 +29,7 @@ static void __init mediatek_timer_init(void)
void __iomem *gpt_base;
if (of_machine_is_compatible("mediatek,mt6589") ||
+ of_machine_is_compatible("mediatek,mt7623") ||
of_machine_is_compatible("mediatek,mt8135") ||
of_machine_is_compatible("mediatek,mt8127")) {
/* turn on GPT6 which ungates arch timer clocks */
diff --git a/arch/arm/mach-mv78xx0/common.c b/arch/arm/mach-mv78xx0/common.c
index 99cc93900a24..45a05207b418 100644
--- a/arch/arm/mach-mv78xx0/common.c
+++ b/arch/arm/mach-mv78xx0/common.c
@@ -168,8 +168,7 @@ static struct clk *tclk;
static void __init clk_init(void)
{
- tclk = clk_register_fixed_rate(NULL, "tclk", NULL, CLK_IS_ROOT,
- get_tclk());
+ tclk = clk_register_fixed_rate(NULL, "tclk", NULL, 0, get_tclk());
orion_clkdev_init(tclk);
}
diff --git a/arch/arm/mach-mvebu/pmsu.c b/arch/arm/mach-mvebu/pmsu.c
index ed8fda4cd055..b44442338e4e 100644
--- a/arch/arm/mach-mvebu/pmsu.c
+++ b/arch/arm/mach-mvebu/pmsu.c
@@ -20,7 +20,6 @@
#include <linux/clk.h>
#include <linux/cpu_pm.h>
-#include <linux/cpufreq-dt.h>
#include <linux/delay.h>
#include <linux/init.h>
#include <linux/io.h>
@@ -29,7 +28,6 @@
#include <linux/of_address.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
-#include <linux/pm_opp.h>
#include <linux/resource.h>
#include <linux/slab.h>
#include <linux/smp.h>
@@ -608,86 +606,3 @@ int mvebu_pmsu_dfs_request(int cpu)
return 0;
}
-
-struct cpufreq_dt_platform_data cpufreq_dt_pd = {
- .independent_clocks = true,
-};
-
-static int __init armada_xp_pmsu_cpufreq_init(void)
-{
- struct device_node *np;
- struct resource res;
- int ret, cpu;
-
- if (!of_machine_is_compatible("marvell,armadaxp"))
- return 0;
-
- /*
- * In order to have proper cpufreq handling, we need to ensure
- * that the Device Tree description of the CPU clock includes
- * the definition of the PMU DFS registers. If not, we do not
- * register the clock notifier and the cpufreq driver. This
- * piece of code is only for compatibility with old Device
- * Trees.
- */
- np = of_find_compatible_node(NULL, NULL, "marvell,armada-xp-cpu-clock");
- if (!np)
- return 0;
-
- ret = of_address_to_resource(np, 1, &res);
- if (ret) {
- pr_warn(FW_WARN "not enabling cpufreq, deprecated armada-xp-cpu-clock binding\n");
- of_node_put(np);
- return 0;
- }
-
- of_node_put(np);
-
- /*
- * For each CPU, this loop registers the operating points
- * supported (which are the nominal CPU frequency and half of
- * it), and registers the clock notifier that will take care
- * of doing the PMSU part of a frequency transition.
- */
- for_each_possible_cpu(cpu) {
- struct device *cpu_dev;
- struct clk *clk;
- int ret;
-
- cpu_dev = get_cpu_device(cpu);
- if (!cpu_dev) {
- pr_err("Cannot get CPU %d\n", cpu);
- continue;
- }
-
- clk = clk_get(cpu_dev, 0);
- if (IS_ERR(clk)) {
- pr_err("Cannot get clock for CPU %d\n", cpu);
- return PTR_ERR(clk);
- }
-
- /*
- * In case of a failure of dev_pm_opp_add(), we don't
- * bother with cleaning up the registered OPP (there's
- * no function to do so), and simply cancel the
- * registration of the cpufreq device.
- */
- ret = dev_pm_opp_add(cpu_dev, clk_get_rate(clk), 0);
- if (ret) {
- clk_put(clk);
- return ret;
- }
-
- ret = dev_pm_opp_add(cpu_dev, clk_get_rate(clk) / 2, 0);
- if (ret) {
- clk_put(clk);
- return ret;
- }
- }
-
- platform_device_register_data(NULL, "cpufreq-dt", -1,
- &cpufreq_dt_pd, sizeof(cpufreq_dt_pd));
- return 0;
-}
-
-device_initcall(armada_xp_pmsu_cpufreq_init);
diff --git a/arch/arm/mach-omap2/Makefile b/arch/arm/mach-omap2/Makefile
index 0ba6a0e6fa19..04e276ce8413 100644
--- a/arch/arm/mach-omap2/Makefile
+++ b/arch/arm/mach-omap2/Makefile
@@ -2,7 +2,7 @@
# Makefile for the linux kernel.
#
-ccflags-$(CONFIG_ARCH_MULTIPLATFORM) := -I$(srctree)/$(src)/include \
+ccflags-y := -I$(srctree)/$(src)/include \
-I$(srctree)/arch/arm/plat-omap/include
# Common support
diff --git a/arch/arm/mach-omap2/board-rx51-peripherals.c b/arch/arm/mach-omap2/board-rx51-peripherals.c
index da174c0d603b..9a7073949d1d 100644
--- a/arch/arm/mach-omap2/board-rx51-peripherals.c
+++ b/arch/arm/mach-omap2/board-rx51-peripherals.c
@@ -30,6 +30,8 @@
#include <linux/platform_data/spi-omap2-mcspi.h>
#include <linux/platform_data/mtd-onenand-omap2.h>
+#include <plat/dmtimer.h>
+
#include <asm/system_info.h>
#include "common.h"
@@ -47,9 +49,8 @@
#include <video/omap-panel-data.h>
-#if defined(CONFIG_IR_RX51) || defined(CONFIG_IR_RX51_MODULE)
+#include <linux/platform_data/pwm_omap_dmtimer.h>
#include <linux/platform_data/media/ir-rx51.h>
-#endif
#include "mux.h"
#include "omap-pm.h"
@@ -1212,10 +1213,40 @@ static void __init rx51_init_tsc2005(void)
gpio_to_irq(RX51_TSC2005_IRQ_GPIO);
}
+#if IS_ENABLED(CONFIG_OMAP_DM_TIMER)
+static struct pwm_omap_dmtimer_pdata __maybe_unused pwm_dmtimer_pdata = {
+ .request_by_node = omap_dm_timer_request_by_node,
+ .request_specific = omap_dm_timer_request_specific,
+ .request = omap_dm_timer_request,
+ .set_source = omap_dm_timer_set_source,
+ .get_irq = omap_dm_timer_get_irq,
+ .set_int_enable = omap_dm_timer_set_int_enable,
+ .set_int_disable = omap_dm_timer_set_int_disable,
+ .free = omap_dm_timer_free,
+ .enable = omap_dm_timer_enable,
+ .disable = omap_dm_timer_disable,
+ .get_fclk = omap_dm_timer_get_fclk,
+ .start = omap_dm_timer_start,
+ .stop = omap_dm_timer_stop,
+ .set_load = omap_dm_timer_set_load,
+ .set_match = omap_dm_timer_set_match,
+ .set_pwm = omap_dm_timer_set_pwm,
+ .set_prescaler = omap_dm_timer_set_prescaler,
+ .read_counter = omap_dm_timer_read_counter,
+ .write_counter = omap_dm_timer_write_counter,
+ .read_status = omap_dm_timer_read_status,
+ .write_status = omap_dm_timer_write_status,
+};
+#endif
+
#if defined(CONFIG_IR_RX51) || defined(CONFIG_IR_RX51_MODULE)
static struct lirc_rx51_platform_data rx51_lirc_data = {
.set_max_mpu_wakeup_lat = omap_pm_set_max_mpu_wakeup_lat,
.pwm_timer = 9, /* Use GPT 9 for CIR */
+#if IS_ENABLED(CONFIG_OMAP_DM_TIMER)
+ .dmtimer = &pwm_dmtimer_pdata,
+#endif
+
};
static struct platform_device rx51_lirc_device = {
diff --git a/arch/arm/mach-omap2/gpmc-nand.c b/arch/arm/mach-omap2/gpmc-nand.c
index 72918c4973ea..f6ac027f3c3b 100644
--- a/arch/arm/mach-omap2/gpmc-nand.c
+++ b/arch/arm/mach-omap2/gpmc-nand.c
@@ -97,10 +97,7 @@ int gpmc_nand_init(struct omap_nand_platform_data *gpmc_nand_data,
gpmc_nand_res[2].start = gpmc_get_client_irq(GPMC_IRQ_COUNT_EVENT);
memset(&s, 0, sizeof(struct gpmc_settings));
- if (gpmc_nand_data->of_node)
- gpmc_read_settings_dt(gpmc_nand_data->of_node, &s);
- else
- gpmc_set_legacy(gpmc_nand_data, &s);
+ gpmc_set_legacy(gpmc_nand_data, &s);
s.device_nand = true;
@@ -121,8 +118,6 @@ int gpmc_nand_init(struct omap_nand_platform_data *gpmc_nand_data,
if (err < 0)
goto out_free_cs;
- gpmc_update_nand_reg(&gpmc_nand_data->reg, gpmc_nand_data->cs);
-
if (!gpmc_hwecc_bch_capable(gpmc_nand_data->ecc_opt)) {
pr_err("omap2-nand: Unsupported NAND ECC scheme selected\n");
err = -EINVAL;
diff --git a/arch/arm/mach-omap2/omap-wakeupgen.c b/arch/arm/mach-omap2/omap-wakeupgen.c
index 2c04f2741476..0c4754386532 100644
--- a/arch/arm/mach-omap2/omap-wakeupgen.c
+++ b/arch/arm/mach-omap2/omap-wakeupgen.c
@@ -327,6 +327,11 @@ static int irq_cpu_hotplug_notify(struct notifier_block *self,
{
unsigned int cpu = (unsigned int)hcpu;
+ /*
+ * Corresponding FROZEN transitions do not have to be handled,
+	 * they are handled at a higher level
+ * (drivers/cpuidle/coupled.c).
+ */
switch (action) {
case CPU_ONLINE:
wakeupgen_irqmask_all(cpu, 0);
diff --git a/arch/arm/mach-omap2/omap_hwmod.c b/arch/arm/mach-omap2/omap_hwmod.c
index 2af6ff63e3b4..83cb527755a9 100644
--- a/arch/arm/mach-omap2/omap_hwmod.c
+++ b/arch/arm/mach-omap2/omap_hwmod.c
@@ -2207,15 +2207,15 @@ static int _idle(struct omap_hwmod *oh)
pr_debug("omap_hwmod: %s: idling\n", oh->name);
+ if (_are_all_hardreset_lines_asserted(oh))
+ return 0;
+
if (oh->_state != _HWMOD_STATE_ENABLED) {
WARN(1, "omap_hwmod: %s: idle state can only be entered from enabled state\n",
oh->name);
return -EINVAL;
}
- if (_are_all_hardreset_lines_asserted(oh))
- return 0;
-
if (oh->class->sysc)
_idle_sysc(oh);
_del_initiator_dep(oh, mpu_oh);
@@ -2262,6 +2262,9 @@ static int _shutdown(struct omap_hwmod *oh)
int ret, i;
u8 prev_state;
+ if (_are_all_hardreset_lines_asserted(oh))
+ return 0;
+
if (oh->_state != _HWMOD_STATE_IDLE &&
oh->_state != _HWMOD_STATE_ENABLED) {
WARN(1, "omap_hwmod: %s: disabled state can only be entered from idle, or enabled state\n",
@@ -2269,9 +2272,6 @@ static int _shutdown(struct omap_hwmod *oh)
return -EINVAL;
}
- if (_are_all_hardreset_lines_asserted(oh))
- return 0;
-
pr_debug("omap_hwmod: %s: disabling\n", oh->name);
if (oh->class->pre_shutdown) {
diff --git a/arch/arm/mach-omap2/omap_hwmod.h b/arch/arm/mach-omap2/omap_hwmod.h
index 7c7a31169475..4041bad79a9a 100644
--- a/arch/arm/mach-omap2/omap_hwmod.h
+++ b/arch/arm/mach-omap2/omap_hwmod.h
@@ -754,6 +754,8 @@ const char *omap_hwmod_get_main_clk(struct omap_hwmod *oh);
*/
extern int omap_hwmod_aess_preprogram(struct omap_hwmod *oh);
+void omap_hwmod_rtc_unlock(struct omap_hwmod *oh);
+void omap_hwmod_rtc_lock(struct omap_hwmod *oh);
/*
* Chip variant-specific hwmod init routines - XXX should be converted
diff --git a/arch/arm/mach-omap2/omap_hwmod_33xx_43xx_ipblock_data.c b/arch/arm/mach-omap2/omap_hwmod_33xx_43xx_ipblock_data.c
index 907a452b78ea..aed33621deeb 100644
--- a/arch/arm/mach-omap2/omap_hwmod_33xx_43xx_ipblock_data.c
+++ b/arch/arm/mach-omap2/omap_hwmod_33xx_43xx_ipblock_data.c
@@ -918,6 +918,8 @@ static struct omap_hwmod_class_sysconfig am33xx_rtc_sysc = {
static struct omap_hwmod_class am33xx_rtc_hwmod_class = {
.name = "rtc",
.sysc = &am33xx_rtc_sysc,
+ .unlock = &omap_hwmod_rtc_unlock,
+ .lock = &omap_hwmod_rtc_lock,
};
struct omap_hwmod am33xx_rtc_hwmod = {
diff --git a/arch/arm/mach-omap2/omap_hwmod_7xx_data.c b/arch/arm/mach-omap2/omap_hwmod_7xx_data.c
index 9442d89bd229..d0e7e5259ec3 100644
--- a/arch/arm/mach-omap2/omap_hwmod_7xx_data.c
+++ b/arch/arm/mach-omap2/omap_hwmod_7xx_data.c
@@ -383,6 +383,68 @@ static struct omap_hwmod dra7xx_dcan2_hwmod = {
},
};
+/* pwmss */
+static struct omap_hwmod_class_sysconfig dra7xx_epwmss_sysc = {
+ .rev_offs = 0x0,
+ .sysc_offs = 0x4,
+ .sysc_flags = SYSC_HAS_SIDLEMODE | SYSC_HAS_SOFTRESET,
+ .idlemodes = (SIDLE_FORCE | SIDLE_NO | SIDLE_SMART),
+ .sysc_fields = &omap_hwmod_sysc_type2,
+};
+
+/*
+ * epwmss class
+ */
+static struct omap_hwmod_class dra7xx_epwmss_hwmod_class = {
+ .name = "epwmss",
+ .sysc = &dra7xx_epwmss_sysc,
+};
+
+/* epwmss0 */
+static struct omap_hwmod dra7xx_epwmss0_hwmod = {
+ .name = "epwmss0",
+ .class = &dra7xx_epwmss_hwmod_class,
+ .clkdm_name = "l4per2_clkdm",
+ .main_clk = "l4_root_clk_div",
+ .prcm = {
+ .omap4 = {
+ .modulemode = MODULEMODE_SWCTRL,
+ .clkctrl_offs = DRA7XX_CM_L4PER2_PWMSS1_CLKCTRL_OFFSET,
+ .context_offs = DRA7XX_RM_L4PER2_PWMSS1_CONTEXT_OFFSET,
+ },
+ },
+};
+
+/* epwmss1 */
+static struct omap_hwmod dra7xx_epwmss1_hwmod = {
+ .name = "epwmss1",
+ .class = &dra7xx_epwmss_hwmod_class,
+ .clkdm_name = "l4per2_clkdm",
+ .main_clk = "l4_root_clk_div",
+ .prcm = {
+ .omap4 = {
+ .modulemode = MODULEMODE_SWCTRL,
+ .clkctrl_offs = DRA7XX_CM_L4PER2_PWMSS2_CLKCTRL_OFFSET,
+ .context_offs = DRA7XX_RM_L4PER2_PWMSS2_CONTEXT_OFFSET,
+ },
+ },
+};
+
+/* epwmss2 */
+static struct omap_hwmod dra7xx_epwmss2_hwmod = {
+ .name = "epwmss2",
+ .class = &dra7xx_epwmss_hwmod_class,
+ .clkdm_name = "l4per2_clkdm",
+ .main_clk = "l4_root_clk_div",
+ .prcm = {
+ .omap4 = {
+ .modulemode = MODULEMODE_SWCTRL,
+ .clkctrl_offs = DRA7XX_CM_L4PER2_PWMSS3_CLKCTRL_OFFSET,
+ .context_offs = DRA7XX_RM_L4PER2_PWMSS3_CONTEXT_OFFSET,
+ },
+ },
+};
+
/*
* 'dma' class
*
@@ -1374,6 +1436,52 @@ static struct omap_hwmod_class dra7xx_mcasp_hwmod_class = {
.sysc = &dra7xx_mcasp_sysc,
};
+/* mcasp1 */
+static struct omap_hwmod_opt_clk mcasp1_opt_clks[] = {
+ { .role = "ahclkx", .clk = "mcasp1_ahclkx_mux" },
+ { .role = "ahclkr", .clk = "mcasp1_ahclkr_mux" },
+};
+
+static struct omap_hwmod dra7xx_mcasp1_hwmod = {
+ .name = "mcasp1",
+ .class = &dra7xx_mcasp_hwmod_class,
+ .clkdm_name = "ipu_clkdm",
+ .main_clk = "mcasp1_aux_gfclk_mux",
+ .flags = HWMOD_OPT_CLKS_NEEDED,
+ .prcm = {
+ .omap4 = {
+ .clkctrl_offs = DRA7XX_CM_IPU_MCASP1_CLKCTRL_OFFSET,
+ .context_offs = DRA7XX_RM_IPU_MCASP1_CONTEXT_OFFSET,
+ .modulemode = MODULEMODE_SWCTRL,
+ },
+ },
+ .opt_clks = mcasp1_opt_clks,
+ .opt_clks_cnt = ARRAY_SIZE(mcasp1_opt_clks),
+};
+
+/* mcasp2 */
+static struct omap_hwmod_opt_clk mcasp2_opt_clks[] = {
+ { .role = "ahclkx", .clk = "mcasp2_ahclkx_mux" },
+ { .role = "ahclkr", .clk = "mcasp2_ahclkr_mux" },
+};
+
+static struct omap_hwmod dra7xx_mcasp2_hwmod = {
+ .name = "mcasp2",
+ .class = &dra7xx_mcasp_hwmod_class,
+ .clkdm_name = "l4per2_clkdm",
+ .main_clk = "mcasp2_aux_gfclk_mux",
+ .flags = HWMOD_OPT_CLKS_NEEDED,
+ .prcm = {
+ .omap4 = {
+ .clkctrl_offs = DRA7XX_CM_L4PER2_MCASP2_CLKCTRL_OFFSET,
+ .context_offs = DRA7XX_RM_L4PER2_MCASP2_CONTEXT_OFFSET,
+ .modulemode = MODULEMODE_SWCTRL,
+ },
+ },
+ .opt_clks = mcasp2_opt_clks,
+ .opt_clks_cnt = ARRAY_SIZE(mcasp2_opt_clks),
+};
+
/* mcasp3 */
static struct omap_hwmod_opt_clk mcasp3_opt_clks[] = {
{ .role = "ahclkx", .clk = "mcasp3_ahclkx_mux" },
@@ -1396,6 +1504,116 @@ static struct omap_hwmod dra7xx_mcasp3_hwmod = {
.opt_clks_cnt = ARRAY_SIZE(mcasp3_opt_clks),
};
+/* mcasp4 */
+static struct omap_hwmod_opt_clk mcasp4_opt_clks[] = {
+ { .role = "ahclkx", .clk = "mcasp4_ahclkx_mux" },
+};
+
+static struct omap_hwmod dra7xx_mcasp4_hwmod = {
+ .name = "mcasp4",
+ .class = &dra7xx_mcasp_hwmod_class,
+ .clkdm_name = "l4per2_clkdm",
+ .main_clk = "mcasp4_aux_gfclk_mux",
+ .flags = HWMOD_OPT_CLKS_NEEDED,
+ .prcm = {
+ .omap4 = {
+ .clkctrl_offs = DRA7XX_CM_L4PER2_MCASP4_CLKCTRL_OFFSET,
+ .context_offs = DRA7XX_RM_L4PER2_MCASP4_CONTEXT_OFFSET,
+ .modulemode = MODULEMODE_SWCTRL,
+ },
+ },
+ .opt_clks = mcasp4_opt_clks,
+ .opt_clks_cnt = ARRAY_SIZE(mcasp4_opt_clks),
+};
+
+/* mcasp5 */
+static struct omap_hwmod_opt_clk mcasp5_opt_clks[] = {
+ { .role = "ahclkx", .clk = "mcasp5_ahclkx_mux" },
+};
+
+static struct omap_hwmod dra7xx_mcasp5_hwmod = {
+ .name = "mcasp5",
+ .class = &dra7xx_mcasp_hwmod_class,
+ .clkdm_name = "l4per2_clkdm",
+ .main_clk = "mcasp5_aux_gfclk_mux",
+ .flags = HWMOD_OPT_CLKS_NEEDED,
+ .prcm = {
+ .omap4 = {
+ .clkctrl_offs = DRA7XX_CM_L4PER2_MCASP5_CLKCTRL_OFFSET,
+ .context_offs = DRA7XX_RM_L4PER2_MCASP5_CONTEXT_OFFSET,
+ .modulemode = MODULEMODE_SWCTRL,
+ },
+ },
+ .opt_clks = mcasp5_opt_clks,
+ .opt_clks_cnt = ARRAY_SIZE(mcasp5_opt_clks),
+};
+
+/* mcasp6 */
+static struct omap_hwmod_opt_clk mcasp6_opt_clks[] = {
+ { .role = "ahclkx", .clk = "mcasp6_ahclkx_mux" },
+};
+
+static struct omap_hwmod dra7xx_mcasp6_hwmod = {
+ .name = "mcasp6",
+ .class = &dra7xx_mcasp_hwmod_class,
+ .clkdm_name = "l4per2_clkdm",
+ .main_clk = "mcasp6_aux_gfclk_mux",
+ .flags = HWMOD_OPT_CLKS_NEEDED,
+ .prcm = {
+ .omap4 = {
+ .clkctrl_offs = DRA7XX_CM_L4PER2_MCASP6_CLKCTRL_OFFSET,
+ .context_offs = DRA7XX_RM_L4PER2_MCASP6_CONTEXT_OFFSET,
+ .modulemode = MODULEMODE_SWCTRL,
+ },
+ },
+ .opt_clks = mcasp6_opt_clks,
+ .opt_clks_cnt = ARRAY_SIZE(mcasp6_opt_clks),
+};
+
+/* mcasp7 */
+static struct omap_hwmod_opt_clk mcasp7_opt_clks[] = {
+ { .role = "ahclkx", .clk = "mcasp7_ahclkx_mux" },
+};
+
+static struct omap_hwmod dra7xx_mcasp7_hwmod = {
+ .name = "mcasp7",
+ .class = &dra7xx_mcasp_hwmod_class,
+ .clkdm_name = "l4per2_clkdm",
+ .main_clk = "mcasp7_aux_gfclk_mux",
+ .flags = HWMOD_OPT_CLKS_NEEDED,
+ .prcm = {
+ .omap4 = {
+ .clkctrl_offs = DRA7XX_CM_L4PER2_MCASP7_CLKCTRL_OFFSET,
+ .context_offs = DRA7XX_RM_L4PER2_MCASP7_CONTEXT_OFFSET,
+ .modulemode = MODULEMODE_SWCTRL,
+ },
+ },
+ .opt_clks = mcasp7_opt_clks,
+ .opt_clks_cnt = ARRAY_SIZE(mcasp7_opt_clks),
+};
+
+/* mcasp8 */
+static struct omap_hwmod_opt_clk mcasp8_opt_clks[] = {
+ { .role = "ahclkx", .clk = "mcasp8_ahclkx_mux" },
+};
+
+static struct omap_hwmod dra7xx_mcasp8_hwmod = {
+ .name = "mcasp8",
+ .class = &dra7xx_mcasp_hwmod_class,
+ .clkdm_name = "l4per2_clkdm",
+ .main_clk = "mcasp8_aux_gfclk_mux",
+ .flags = HWMOD_OPT_CLKS_NEEDED,
+ .prcm = {
+ .omap4 = {
+ .clkctrl_offs = DRA7XX_CM_L4PER2_MCASP8_CLKCTRL_OFFSET,
+ .context_offs = DRA7XX_RM_L4PER2_MCASP8_CONTEXT_OFFSET,
+ .modulemode = MODULEMODE_SWCTRL,
+ },
+ },
+ .opt_clks = mcasp8_opt_clks,
+ .opt_clks_cnt = ARRAY_SIZE(mcasp8_opt_clks),
+};
+
/*
* 'mmc' class
*
@@ -1707,6 +1925,8 @@ static struct omap_hwmod_class_sysconfig dra7xx_rtcss_sysc = {
static struct omap_hwmod_class dra7xx_rtcss_hwmod_class = {
.name = "rtcss",
.sysc = &dra7xx_rtcss_sysc,
+ .unlock = &omap_hwmod_rtc_unlock,
+ .lock = &omap_hwmod_rtc_lock,
};
/* rtcss */
@@ -2065,6 +2285,20 @@ static struct omap_hwmod dra7xx_timer11_hwmod = {
},
};
+/* timer12 */
+static struct omap_hwmod dra7xx_timer12_hwmod = {
+ .name = "timer12",
+ .class = &dra7xx_timer_hwmod_class,
+ .clkdm_name = "wkupaon_clkdm",
+ .main_clk = "secure_32k_clk_src_ck",
+ .prcm = {
+ .omap4 = {
+ .clkctrl_offs = DRA7XX_CM_WKUPAON_TIMER12_CLKCTRL_OFFSET,
+ .context_offs = DRA7XX_RM_WKUPAON_TIMER12_CONTEXT_OFFSET,
+ },
+ },
+};
+
/* timer13 */
static struct omap_hwmod dra7xx_timer13_hwmod = {
.name = "timer13",
@@ -2726,6 +2960,38 @@ static struct omap_hwmod_ocp_if dra7xx_l3_main_1__hdmi = {
.user = OCP_USER_MPU | OCP_USER_SDMA,
};
+/* l4_per2 -> mcasp1 */
+static struct omap_hwmod_ocp_if dra7xx_l4_per2__mcasp1 = {
+ .master = &dra7xx_l4_per2_hwmod,
+ .slave = &dra7xx_mcasp1_hwmod,
+ .clk = "l4_root_clk_div",
+ .user = OCP_USER_MPU | OCP_USER_SDMA,
+};
+
+/* l3_main_1 -> mcasp1 */
+static struct omap_hwmod_ocp_if dra7xx_l3_main_1__mcasp1 = {
+ .master = &dra7xx_l3_main_1_hwmod,
+ .slave = &dra7xx_mcasp1_hwmod,
+ .clk = "l3_iclk_div",
+ .user = OCP_USER_MPU | OCP_USER_SDMA,
+};
+
+/* l4_per2 -> mcasp2 */
+static struct omap_hwmod_ocp_if dra7xx_l4_per2__mcasp2 = {
+ .master = &dra7xx_l4_per2_hwmod,
+ .slave = &dra7xx_mcasp2_hwmod,
+ .clk = "l4_root_clk_div",
+ .user = OCP_USER_MPU | OCP_USER_SDMA,
+};
+
+/* l3_main_1 -> mcasp2 */
+static struct omap_hwmod_ocp_if dra7xx_l3_main_1__mcasp2 = {
+ .master = &dra7xx_l3_main_1_hwmod,
+ .slave = &dra7xx_mcasp2_hwmod,
+ .clk = "l3_iclk_div",
+ .user = OCP_USER_MPU | OCP_USER_SDMA,
+};
+
/* l4_per2 -> mcasp3 */
static struct omap_hwmod_ocp_if dra7xx_l4_per2__mcasp3 = {
.master = &dra7xx_l4_per2_hwmod,
@@ -2742,6 +3008,46 @@ static struct omap_hwmod_ocp_if dra7xx_l3_main_1__mcasp3 = {
.user = OCP_USER_MPU | OCP_USER_SDMA,
};
+/* l4_per2 -> mcasp4 */
+static struct omap_hwmod_ocp_if dra7xx_l4_per2__mcasp4 = {
+ .master = &dra7xx_l4_per2_hwmod,
+ .slave = &dra7xx_mcasp4_hwmod,
+ .clk = "l4_root_clk_div",
+ .user = OCP_USER_MPU | OCP_USER_SDMA,
+};
+
+/* l4_per2 -> mcasp5 */
+static struct omap_hwmod_ocp_if dra7xx_l4_per2__mcasp5 = {
+ .master = &dra7xx_l4_per2_hwmod,
+ .slave = &dra7xx_mcasp5_hwmod,
+ .clk = "l4_root_clk_div",
+ .user = OCP_USER_MPU | OCP_USER_SDMA,
+};
+
+/* l4_per2 -> mcasp6 */
+static struct omap_hwmod_ocp_if dra7xx_l4_per2__mcasp6 = {
+ .master = &dra7xx_l4_per2_hwmod,
+ .slave = &dra7xx_mcasp6_hwmod,
+ .clk = "l4_root_clk_div",
+ .user = OCP_USER_MPU | OCP_USER_SDMA,
+};
+
+/* l4_per2 -> mcasp7 */
+static struct omap_hwmod_ocp_if dra7xx_l4_per2__mcasp7 = {
+ .master = &dra7xx_l4_per2_hwmod,
+ .slave = &dra7xx_mcasp7_hwmod,
+ .clk = "l4_root_clk_div",
+ .user = OCP_USER_MPU | OCP_USER_SDMA,
+};
+
+/* l4_per2 -> mcasp8 */
+static struct omap_hwmod_ocp_if dra7xx_l4_per2__mcasp8 = {
+ .master = &dra7xx_l4_per2_hwmod,
+ .slave = &dra7xx_mcasp8_hwmod,
+ .clk = "l4_root_clk_div",
+ .user = OCP_USER_MPU | OCP_USER_SDMA,
+};
+
/* l4_per1 -> elm */
static struct omap_hwmod_ocp_if dra7xx_l4_per1__elm = {
.master = &dra7xx_l4_per1_hwmod,
@@ -3281,6 +3587,14 @@ static struct omap_hwmod_ocp_if dra7xx_l4_per1__timer11 = {
.user = OCP_USER_MPU | OCP_USER_SDMA,
};
+/* l4_wkup -> timer12 */
+static struct omap_hwmod_ocp_if dra7xx_l4_wkup__timer12 = {
+ .master = &dra7xx_l4_wkup_hwmod,
+ .slave = &dra7xx_timer12_hwmod,
+ .clk = "wkupaon_iclk_mux",
+ .user = OCP_USER_MPU | OCP_USER_SDMA,
+};
+
/* l4_per3 -> timer13 */
static struct omap_hwmod_ocp_if dra7xx_l4_per3__timer13 = {
.master = &dra7xx_l4_per3_hwmod,
@@ -3465,6 +3779,30 @@ static struct omap_hwmod_ocp_if dra7xx_l4_wkup__wd_timer2 = {
.user = OCP_USER_MPU | OCP_USER_SDMA,
};
+/* l4_per2 -> epwmss0 */
+static struct omap_hwmod_ocp_if dra7xx_l4_per2__epwmss0 = {
+ .master = &dra7xx_l4_per2_hwmod,
+ .slave = &dra7xx_epwmss0_hwmod,
+ .clk = "l4_root_clk_div",
+ .user = OCP_USER_MPU,
+};
+
+/* l4_per2 -> epwmss1 */
+static struct omap_hwmod_ocp_if dra7xx_l4_per2__epwmss1 = {
+ .master = &dra7xx_l4_per2_hwmod,
+ .slave = &dra7xx_epwmss1_hwmod,
+ .clk = "l4_root_clk_div",
+ .user = OCP_USER_MPU,
+};
+
+/* l4_per2 -> epwmss2 */
+static struct omap_hwmod_ocp_if dra7xx_l4_per2__epwmss2 = {
+ .master = &dra7xx_l4_per2_hwmod,
+ .slave = &dra7xx_epwmss2_hwmod,
+ .clk = "l4_root_clk_div",
+ .user = OCP_USER_MPU,
+};
+
static struct omap_hwmod_ocp_if *dra7xx_hwmod_ocp_ifs[] __initdata = {
&dra7xx_l3_main_1__dmm,
&dra7xx_l3_main_2__l3_instr,
@@ -3484,8 +3822,17 @@ static struct omap_hwmod_ocp_if *dra7xx_hwmod_ocp_ifs[] __initdata = {
&dra7xx_l4_wkup__dcan1,
&dra7xx_l4_per2__dcan2,
&dra7xx_l4_per2__cpgmac0,
+ &dra7xx_l4_per2__mcasp1,
+ &dra7xx_l3_main_1__mcasp1,
+ &dra7xx_l4_per2__mcasp2,
+ &dra7xx_l3_main_1__mcasp2,
&dra7xx_l4_per2__mcasp3,
&dra7xx_l3_main_1__mcasp3,
+ &dra7xx_l4_per2__mcasp4,
+ &dra7xx_l4_per2__mcasp5,
+ &dra7xx_l4_per2__mcasp6,
+ &dra7xx_l4_per2__mcasp7,
+ &dra7xx_l4_per2__mcasp8,
&dra7xx_gmac__mdio,
&dra7xx_l4_cfg__dma_system,
&dra7xx_l3_main_1__tpcc,
@@ -3577,9 +3924,19 @@ static struct omap_hwmod_ocp_if *dra7xx_hwmod_ocp_ifs[] __initdata = {
&dra7xx_l3_main_1__vcp2,
&dra7xx_l4_per2__vcp2,
&dra7xx_l4_wkup__wd_timer2,
+ &dra7xx_l4_per2__epwmss0,
+ &dra7xx_l4_per2__epwmss1,
+ &dra7xx_l4_per2__epwmss2,
+ NULL,
+};
+
+/* GP-only hwmod links */
+static struct omap_hwmod_ocp_if *dra7xx_gp_hwmod_ocp_ifs[] __initdata = {
+ &dra7xx_l4_wkup__timer12,
NULL,
};
+/* SoC variant specific hwmod links */
static struct omap_hwmod_ocp_if *dra74x_hwmod_ocp_ifs[] __initdata = {
&dra7xx_l4_per3__usb_otg_ss4,
NULL,
@@ -3597,9 +3954,12 @@ int __init dra7xx_hwmod_init(void)
ret = omap_hwmod_register_links(dra7xx_hwmod_ocp_ifs);
if (!ret && soc_is_dra74x())
- return omap_hwmod_register_links(dra74x_hwmod_ocp_ifs);
+ ret = omap_hwmod_register_links(dra74x_hwmod_ocp_ifs);
else if (!ret && soc_is_dra72x())
- return omap_hwmod_register_links(dra72x_hwmod_ocp_ifs);
+ ret = omap_hwmod_register_links(dra72x_hwmod_ocp_ifs);
+
+ if (!ret && omap_type() == OMAP2_DEVICE_TYPE_GP)
+ ret = omap_hwmod_register_links(dra7xx_gp_hwmod_ocp_ifs);
return ret;
}
diff --git a/arch/arm/mach-omap2/omap_hwmod_reset.c b/arch/arm/mach-omap2/omap_hwmod_reset.c
index 65e186c9df55..b68f9c0aff0b 100644
--- a/arch/arm/mach-omap2/omap_hwmod_reset.c
+++ b/arch/arm/mach-omap2/omap_hwmod_reset.c
@@ -29,6 +29,16 @@
#include <sound/aess.h>
#include "omap_hwmod.h"
+#include "common.h"
+
+#define OMAP_RTC_STATUS_REG 0x44
+#define OMAP_RTC_KICK0_REG 0x6c
+#define OMAP_RTC_KICK1_REG 0x70
+
+#define OMAP_RTC_KICK0_VALUE 0x83E70B13
+#define OMAP_RTC_KICK1_VALUE 0x95A4F1E0
+#define OMAP_RTC_STATUS_BUSY BIT(0)
+#define OMAP_RTC_MAX_READY_TIME 50
/**
* omap_hwmod_aess_preprogram - enable AESS internal autogating
@@ -51,3 +61,58 @@ int omap_hwmod_aess_preprogram(struct omap_hwmod *oh)
return 0;
}
+
+/**
+ * omap_rtc_wait_not_busy - Wait for the RTC BUSY flag
+ * @oh: struct omap_hwmod *
+ *
+ * For updating certain RTC registers, the MPU must wait
+ * for the BUSY status in OMAP_RTC_STATUS_REG to become zero.
+ * Once the BUSY status is zero, there is a 15 microsecond access
+ * period in which the MPU can program the registers.
+ */
+static void omap_rtc_wait_not_busy(struct omap_hwmod *oh)
+{
+ int i;
+
+ /* BUSY may stay active for 1/32768 second (~30 usec) */
+ omap_test_timeout(omap_hwmod_read(oh, OMAP_RTC_STATUS_REG)
+ & OMAP_RTC_STATUS_BUSY, OMAP_RTC_MAX_READY_TIME, i);
+ /* now we have ~15 microseconds to read/write various registers */
+}
+
+/**
+ * omap_hwmod_rtc_unlock - Unlock the Kicker mechanism.
+ * @oh: struct omap_hwmod *
+ *
+ * The RTC IP has a kicker feature that prevents spurious writes to its
+ * registers. In order to write into any of the RTC registers, the KICK
+ * values have to be written to the respective KICK registers first. This is
+ * needed for hwmod to be able to write into the sysconfig register.
+ */
+void omap_hwmod_rtc_unlock(struct omap_hwmod *oh)
+{
+ local_irq_disable();
+ omap_rtc_wait_not_busy(oh);
+ omap_hwmod_write(OMAP_RTC_KICK0_VALUE, oh, OMAP_RTC_KICK0_REG);
+ omap_hwmod_write(OMAP_RTC_KICK1_VALUE, oh, OMAP_RTC_KICK1_REG);
+ local_irq_enable();
+}
+
+/**
+ * omap_hwmod_rtc_lock - Lock the Kicker mechanism.
+ * @oh: struct omap_hwmod *
+ *
+ * The RTC IP has a kicker feature that prevents spurious writes to its
+ * registers. Once the RTC registers have been written, the KICK mechanism
+ * needs to be locked again to prevent further spurious writes. This function
+ * locks the RTC registers back once hwmod has completed its sysconfig write.
+ */
+void omap_hwmod_rtc_lock(struct omap_hwmod *oh)
+{
+ local_irq_disable();
+ omap_rtc_wait_not_busy(oh);
+ omap_hwmod_write(0x0, oh, OMAP_RTC_KICK0_REG);
+ omap_hwmod_write(0x0, oh, OMAP_RTC_KICK1_REG);
+ local_irq_enable();
+}
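
Taken together, the busy-wait and the unlock/lock helpers above reduce to one ordering rule: wait for BUSY to drop, write the two magic KICK values to open the window, perform the protected write, then write zeroes to close it again. Below is a rough standalone sketch of that sequence, with plain array accesses standing in for omap_hwmod_read()/omap_hwmod_write() and the offsets and values copied from the #defines in this patch; the real code additionally bounds the busy-wait and runs with interrupts disabled.

    /* Offsets and magic values as defined above. */
    #define RTC_STATUS_REG   0x44
    #define RTC_KICK0_REG    0x6c
    #define RTC_KICK1_REG    0x70
    #define RTC_KICK0_VALUE  0x83E70B13u
    #define RTC_KICK1_VALUE  0x95A4F1E0u
    #define RTC_STATUS_BUSY  (1u << 0)

    /* Stand-ins for omap_hwmod_read()/omap_hwmod_write(); offsets are in bytes. */
    static unsigned int rtc_read(volatile unsigned int *base, unsigned int off)
    {
        return base[off / 4];
    }

    static void rtc_write(volatile unsigned int *base, unsigned int off,
                          unsigned int val)
    {
        base[off / 4] = val;
    }

    void rtc_protected_write(volatile unsigned int *base, unsigned int off,
                             unsigned int val)
    {
        /* BUSY may stay set for ~30 us; the kernel version bounds this wait. */
        while (rtc_read(base, RTC_STATUS_REG) & RTC_STATUS_BUSY)
            ;

        /* Unlock: both KICK values must be written. */
        rtc_write(base, RTC_KICK0_REG, RTC_KICK0_VALUE);
        rtc_write(base, RTC_KICK1_REG, RTC_KICK1_VALUE);

        rtc_write(base, off, val);              /* the protected register write */

        /* Lock again so later stray writes are ignored. */
        rtc_write(base, RTC_KICK0_REG, 0);
        rtc_write(base, RTC_KICK1_REG, 0);
    }
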
diff --git a/arch/arm/mach-omap2/pdata-quirks.c b/arch/arm/mach-omap2/pdata-quirks.c
index a935d28443da..6571ad959908 100644
--- a/arch/arm/mach-omap2/pdata-quirks.c
+++ b/arch/arm/mach-omap2/pdata-quirks.c
@@ -21,9 +21,11 @@
#include <linux/regulator/fixed.h>
#include <linux/platform_data/pinctrl-single.h>
+#include <linux/platform_data/hsmmc-omap.h>
#include <linux/platform_data/iommu-omap.h>
#include <linux/platform_data/wkup_m3.h>
#include <linux/platform_data/pwm_omap_dmtimer.h>
+#include <linux/platform_data/media/ir-rx51.h>
#include <plat/dmtimer.h>
#include "common.h"
@@ -31,10 +33,13 @@
#include "dss-common.h"
#include "control.h"
#include "omap_device.h"
+#include "omap-pm.h"
#include "omap-secure.h"
#include "soc.h"
#include "hsmmc.h"
+static struct omap_hsmmc_platform_data __maybe_unused mmc_pdata[2];
+
struct pdata_init {
const char *compatible;
void (*fn)(void);
@@ -268,9 +273,13 @@ static struct platform_device omap3_rom_rng_device = {
},
};
+static struct platform_device rx51_lirc_device;
+
static void __init nokia_n900_legacy_init(void)
{
hsmmc2_internal_input_clk();
+ mmc_pdata[0].name = "external";
+ mmc_pdata[1].name = "internal";
if (omap_type() == OMAP2_DEVICE_TYPE_SEC) {
if (IS_ENABLED(CONFIG_ARM_ERRATA_430973)) {
@@ -286,6 +295,8 @@ static void __init nokia_n900_legacy_init(void)
platform_device_register(&omap3_rom_rng_device);
}
+
+ platform_device_register(&rx51_lirc_device);
}
static void __init omap3_tao3530_legacy_init(void)
@@ -453,8 +464,14 @@ void omap_auxdata_legacy_init(struct device *dev)
/* Dual mode timer PWM callbacks platdata */
#if IS_ENABLED(CONFIG_OMAP_DM_TIMER)
-struct pwm_omap_dmtimer_pdata pwm_dmtimer_pdata = {
+static struct pwm_omap_dmtimer_pdata pwm_dmtimer_pdata = {
.request_by_node = omap_dm_timer_request_by_node,
+ .request_specific = omap_dm_timer_request_specific,
+ .request = omap_dm_timer_request,
+ .set_source = omap_dm_timer_set_source,
+ .get_irq = omap_dm_timer_get_irq,
+ .set_int_enable = omap_dm_timer_set_int_enable,
+ .set_int_disable = omap_dm_timer_set_int_disable,
.free = omap_dm_timer_free,
.enable = omap_dm_timer_enable,
.disable = omap_dm_timer_disable,
@@ -465,10 +482,29 @@ struct pwm_omap_dmtimer_pdata pwm_dmtimer_pdata = {
.set_match = omap_dm_timer_set_match,
.set_pwm = omap_dm_timer_set_pwm,
.set_prescaler = omap_dm_timer_set_prescaler,
+ .read_counter = omap_dm_timer_read_counter,
.write_counter = omap_dm_timer_write_counter,
+ .read_status = omap_dm_timer_read_status,
+ .write_status = omap_dm_timer_write_status,
};
#endif
+static struct lirc_rx51_platform_data __maybe_unused rx51_lirc_data = {
+ .set_max_mpu_wakeup_lat = omap_pm_set_max_mpu_wakeup_lat,
+ .pwm_timer = 9, /* Use GPT 9 for CIR */
+#if IS_ENABLED(CONFIG_OMAP_DM_TIMER)
+ .dmtimer = &pwm_dmtimer_pdata,
+#endif
+};
+
+static struct platform_device __maybe_unused rx51_lirc_device = {
+ .name = "lirc_rx51",
+ .id = -1,
+ .dev = {
+ .platform_data = &rx51_lirc_data,
+ },
+};
+
/*
* Few boards still need auxdata populated before we populate
* the dev entries in of_platform_populate().
@@ -492,11 +528,10 @@ static struct of_dev_auxdata omap_auxdata_lookup[] __initdata = {
OF_DEV_AUXDATA("tlv320aic3x", 0x18, "2-0018", &n810_aic33_data),
#endif
#ifdef CONFIG_ARCH_OMAP3
- OF_DEV_AUXDATA("ti,omap3-padconf", 0x48002030, "48002030.pinmux", &pcs_pdata),
- OF_DEV_AUXDATA("ti,omap3-padconf", 0x480025a0, "480025a0.pinmux", &pcs_pdata),
- OF_DEV_AUXDATA("ti,omap3-padconf", 0x48002a00, "48002a00.pinmux", &pcs_pdata),
OF_DEV_AUXDATA("ti,omap2-iommu", 0x5d000000, "5d000000.mmu",
&omap3_iommu_pdata),
+ OF_DEV_AUXDATA("ti,omap3-hsmmc", 0x4809c000, "4809c000.mmc", &mmc_pdata[0]),
+ OF_DEV_AUXDATA("ti,omap3-hsmmc", 0x480b4000, "480b4000.mmc", &mmc_pdata[1]),
/* Only on am3517 */
OF_DEV_AUXDATA("ti,davinci_mdio", 0x5c030000, "davinci_mdio.0", NULL),
OF_DEV_AUXDATA("ti,am3517-emac", 0x5c000000, "davinci_emac.0",
@@ -506,19 +541,7 @@ static struct of_dev_auxdata omap_auxdata_lookup[] __initdata = {
OF_DEV_AUXDATA("ti,am3352-wkup-m3", 0x44d00000, "44d00000.wkup_m3",
&wkup_m3_data),
#endif
-#ifdef CONFIG_ARCH_OMAP4
- OF_DEV_AUXDATA("ti,omap4-padconf", 0x4a100040, "4a100040.pinmux", &pcs_pdata),
- OF_DEV_AUXDATA("ti,omap4-padconf", 0x4a31e040, "4a31e040.pinmux", &pcs_pdata),
-#endif
-#ifdef CONFIG_SOC_OMAP5
- OF_DEV_AUXDATA("ti,omap5-padconf", 0x4a002840, "4a002840.pinmux", &pcs_pdata),
- OF_DEV_AUXDATA("ti,omap5-padconf", 0x4ae0c840, "4ae0c840.pinmux", &pcs_pdata),
-#endif
-#ifdef CONFIG_SOC_DRA7XX
- OF_DEV_AUXDATA("ti,dra7-padconf", 0x4a003400, "4a003400.pinmux", &pcs_pdata),
-#endif
#ifdef CONFIG_SOC_AM43XX
- OF_DEV_AUXDATA("ti,am437-padconf", 0x44e10800, "44e10800.pinmux", &pcs_pdata),
OF_DEV_AUXDATA("ti,am4372-wkup-m3", 0x44d00000, "44d00000.wkup_m3",
&wkup_m3_data),
#endif
@@ -531,6 +554,8 @@ static struct of_dev_auxdata omap_auxdata_lookup[] __initdata = {
OF_DEV_AUXDATA("ti,omap4-iommu", 0x55082000, "55082000.mmu",
&omap4_iommu_pdata),
#endif
+ /* Common auxdata */
+ OF_DEV_AUXDATA("pinctrl-single", 0, NULL, &pcs_pdata),
{ /* sentinel */ },
};
diff --git a/arch/arm/mach-omap2/pm.c b/arch/arm/mach-omap2/pm.c
index 58920bc8807b..2f7b11da7d5d 100644
--- a/arch/arm/mach-omap2/pm.c
+++ b/arch/arm/mach-omap2/pm.c
@@ -277,13 +277,10 @@ static void __init omap4_init_voltages(void)
static inline void omap_init_cpufreq(void)
{
- struct platform_device_info devinfo = { };
+ struct platform_device_info devinfo = { .name = "omap-cpufreq" };
if (!of_have_populated_dt())
- devinfo.name = "omap-cpufreq";
- else
- devinfo.name = "cpufreq-dt";
- platform_device_register_full(&devinfo);
+ platform_device_register_full(&devinfo);
}
static int __init omap2_common_pm_init(void)
diff --git a/arch/arm/mach-omap2/powerdomains7xx_data.c b/arch/arm/mach-omap2/powerdomains7xx_data.c
index 287a2037aa16..0ec2d00f4237 100644
--- a/arch/arm/mach-omap2/powerdomains7xx_data.c
+++ b/arch/arm/mach-omap2/powerdomains7xx_data.c
@@ -35,7 +35,7 @@ static struct powerdomain iva_7xx_pwrdm = {
.name = "iva_pwrdm",
.prcm_offs = DRA7XX_PRM_IVA_INST,
.prcm_partition = DRA7XX_PRM_PARTITION,
- .pwrsts = PWRSTS_OFF_RET_ON,
+ .pwrsts = PWRSTS_OFF_ON,
.pwrsts_logic_ret = PWRSTS_OFF,
.banks = 4,
.pwrsts_mem_ret = {
@@ -45,10 +45,10 @@ static struct powerdomain iva_7xx_pwrdm = {
[3] = PWRSTS_OFF_RET, /* tcm2_mem */
},
.pwrsts_mem_on = {
- [0] = PWRSTS_OFF_RET, /* hwa_mem */
- [1] = PWRSTS_OFF_RET, /* sl2_mem */
- [2] = PWRSTS_OFF_RET, /* tcm1_mem */
- [3] = PWRSTS_OFF_RET, /* tcm2_mem */
+ [0] = PWRSTS_ON, /* hwa_mem */
+ [1] = PWRSTS_ON, /* sl2_mem */
+ [2] = PWRSTS_ON, /* tcm1_mem */
+ [3] = PWRSTS_ON, /* tcm2_mem */
},
.flags = PWRDM_HAS_LOWPOWERSTATECHANGE,
};
@@ -75,7 +75,7 @@ static struct powerdomain ipu_7xx_pwrdm = {
.name = "ipu_pwrdm",
.prcm_offs = DRA7XX_PRM_IPU_INST,
.prcm_partition = DRA7XX_PRM_PARTITION,
- .pwrsts = PWRSTS_OFF_RET_ON,
+ .pwrsts = PWRSTS_OFF_ON,
.pwrsts_logic_ret = PWRSTS_OFF,
.banks = 2,
.pwrsts_mem_ret = {
@@ -83,8 +83,8 @@ static struct powerdomain ipu_7xx_pwrdm = {
[1] = PWRSTS_OFF_RET, /* periphmem */
},
.pwrsts_mem_on = {
- [0] = PWRSTS_OFF_RET, /* aessmem */
- [1] = PWRSTS_OFF_RET, /* periphmem */
+ [0] = PWRSTS_ON, /* aessmem */
+ [1] = PWRSTS_ON, /* periphmem */
},
.flags = PWRDM_HAS_LOWPOWERSTATECHANGE,
};
@@ -94,14 +94,14 @@ static struct powerdomain dss_7xx_pwrdm = {
.name = "dss_pwrdm",
.prcm_offs = DRA7XX_PRM_DSS_INST,
.prcm_partition = DRA7XX_PRM_PARTITION,
- .pwrsts = PWRSTS_OFF_RET_ON,
+ .pwrsts = PWRSTS_OFF_ON,
.pwrsts_logic_ret = PWRSTS_OFF,
.banks = 1,
.pwrsts_mem_ret = {
[0] = PWRSTS_OFF_RET, /* dss_mem */
},
.pwrsts_mem_on = {
- [0] = PWRSTS_OFF_RET, /* dss_mem */
+ [0] = PWRSTS_ON, /* dss_mem */
},
.flags = PWRDM_HAS_LOWPOWERSTATECHANGE,
};
@@ -112,15 +112,15 @@ static struct powerdomain l4per_7xx_pwrdm = {
.prcm_offs = DRA7XX_PRM_L4PER_INST,
.prcm_partition = DRA7XX_PRM_PARTITION,
.pwrsts = PWRSTS_RET_ON,
- .pwrsts_logic_ret = PWRSTS_OFF_RET,
+ .pwrsts_logic_ret = PWRSTS_RET,
.banks = 2,
.pwrsts_mem_ret = {
[0] = PWRSTS_OFF_RET, /* nonretained_bank */
[1] = PWRSTS_OFF_RET, /* retained_bank */
},
.pwrsts_mem_on = {
- [0] = PWRSTS_OFF_RET, /* nonretained_bank */
- [1] = PWRSTS_OFF_RET, /* retained_bank */
+ [0] = PWRSTS_ON, /* nonretained_bank */
+ [1] = PWRSTS_ON, /* retained_bank */
},
.flags = PWRDM_HAS_LOWPOWERSTATECHANGE,
};
@@ -136,7 +136,7 @@ static struct powerdomain gpu_7xx_pwrdm = {
[0] = PWRSTS_OFF_RET, /* gpu_mem */
},
.pwrsts_mem_on = {
- [0] = PWRSTS_OFF_RET, /* gpu_mem */
+ [0] = PWRSTS_ON, /* gpu_mem */
},
.flags = PWRDM_HAS_LOWPOWERSTATECHANGE,
};
@@ -160,7 +160,7 @@ static struct powerdomain core_7xx_pwrdm = {
.name = "core_pwrdm",
.prcm_offs = DRA7XX_PRM_CORE_INST,
.prcm_partition = DRA7XX_PRM_PARTITION,
- .pwrsts = PWRSTS_INA_ON,
+ .pwrsts = PWRSTS_ON,
.pwrsts_logic_ret = PWRSTS_RET,
.banks = 5,
.pwrsts_mem_ret = {
@@ -171,11 +171,11 @@ static struct powerdomain core_7xx_pwrdm = {
[4] = PWRSTS_OFF_RET, /* ipu_unicache */
},
.pwrsts_mem_on = {
- [0] = PWRSTS_OFF_RET, /* core_nret_bank */
- [1] = PWRSTS_OFF_RET, /* core_ocmram */
- [2] = PWRSTS_OFF_RET, /* core_other_bank */
- [3] = PWRSTS_OFF_RET, /* ipu_l2ram */
- [4] = PWRSTS_OFF_RET, /* ipu_unicache */
+ [0] = PWRSTS_ON, /* core_nret_bank */
+ [1] = PWRSTS_ON, /* core_ocmram */
+ [2] = PWRSTS_ON, /* core_other_bank */
+ [3] = PWRSTS_ON, /* ipu_l2ram */
+ [4] = PWRSTS_ON, /* ipu_unicache */
},
.flags = PWRDM_HAS_LOWPOWERSTATECHANGE,
};
@@ -225,14 +225,14 @@ static struct powerdomain vpe_7xx_pwrdm = {
.name = "vpe_pwrdm",
.prcm_offs = DRA7XX_PRM_VPE_INST,
.prcm_partition = DRA7XX_PRM_PARTITION,
- .pwrsts = PWRSTS_OFF_RET_ON,
- .pwrsts_logic_ret = PWRSTS_OFF_RET,
+ .pwrsts = PWRSTS_OFF_ON,
+ .pwrsts_logic_ret = PWRSTS_OFF,
.banks = 1,
.pwrsts_mem_ret = {
[0] = PWRSTS_OFF_RET, /* vpe_bank */
},
.pwrsts_mem_on = {
- [0] = PWRSTS_OFF_RET, /* vpe_bank */
+ [0] = PWRSTS_ON, /* vpe_bank */
},
.flags = PWRDM_HAS_LOWPOWERSTATECHANGE,
};
@@ -250,8 +250,8 @@ static struct powerdomain mpu_7xx_pwrdm = {
[1] = PWRSTS_RET, /* mpu_ram */
},
.pwrsts_mem_on = {
- [0] = PWRSTS_OFF_RET, /* mpu_l2 */
- [1] = PWRSTS_OFF_RET, /* mpu_ram */
+ [0] = PWRSTS_ON, /* mpu_l2 */
+ [1] = PWRSTS_ON, /* mpu_ram */
},
};
@@ -261,7 +261,7 @@ static struct powerdomain l3init_7xx_pwrdm = {
.prcm_offs = DRA7XX_PRM_L3INIT_INST,
.prcm_partition = DRA7XX_PRM_PARTITION,
.pwrsts = PWRSTS_RET_ON,
- .pwrsts_logic_ret = PWRSTS_OFF_RET,
+ .pwrsts_logic_ret = PWRSTS_RET,
.banks = 3,
.pwrsts_mem_ret = {
[0] = PWRSTS_OFF_RET, /* gmac_bank */
@@ -269,9 +269,9 @@ static struct powerdomain l3init_7xx_pwrdm = {
[2] = PWRSTS_OFF_RET, /* l3init_bank2 */
},
.pwrsts_mem_on = {
- [0] = PWRSTS_OFF_RET, /* gmac_bank */
- [1] = PWRSTS_OFF_RET, /* l3init_bank1 */
- [2] = PWRSTS_OFF_RET, /* l3init_bank2 */
+ [0] = PWRSTS_ON, /* gmac_bank */
+ [1] = PWRSTS_ON, /* l3init_bank1 */
+ [2] = PWRSTS_ON, /* l3init_bank2 */
},
.flags = PWRDM_HAS_LOWPOWERSTATECHANGE,
};
@@ -287,7 +287,7 @@ static struct powerdomain eve3_7xx_pwrdm = {
[0] = PWRSTS_OFF_RET, /* eve3_bank */
},
.pwrsts_mem_on = {
- [0] = PWRSTS_OFF_RET, /* eve3_bank */
+ [0] = PWRSTS_ON, /* eve3_bank */
},
.flags = PWRDM_HAS_LOWPOWERSTATECHANGE,
};
@@ -303,7 +303,7 @@ static struct powerdomain emu_7xx_pwrdm = {
[0] = PWRSTS_OFF_RET, /* emu_bank */
},
.pwrsts_mem_on = {
- [0] = PWRSTS_OFF_RET, /* emu_bank */
+ [0] = PWRSTS_ON, /* emu_bank */
},
};
@@ -320,9 +320,9 @@ static struct powerdomain dsp2_7xx_pwrdm = {
[2] = PWRSTS_OFF_RET, /* dsp2_l2 */
},
.pwrsts_mem_on = {
- [0] = PWRSTS_OFF_RET, /* dsp2_edma */
- [1] = PWRSTS_OFF_RET, /* dsp2_l1 */
- [2] = PWRSTS_OFF_RET, /* dsp2_l2 */
+ [0] = PWRSTS_ON, /* dsp2_edma */
+ [1] = PWRSTS_ON, /* dsp2_l1 */
+ [2] = PWRSTS_ON, /* dsp2_l2 */
},
.flags = PWRDM_HAS_LOWPOWERSTATECHANGE,
};
@@ -340,9 +340,9 @@ static struct powerdomain dsp1_7xx_pwrdm = {
[2] = PWRSTS_OFF_RET, /* dsp1_l2 */
},
.pwrsts_mem_on = {
- [0] = PWRSTS_OFF_RET, /* dsp1_edma */
- [1] = PWRSTS_OFF_RET, /* dsp1_l1 */
- [2] = PWRSTS_OFF_RET, /* dsp1_l2 */
+ [0] = PWRSTS_ON, /* dsp1_edma */
+ [1] = PWRSTS_ON, /* dsp1_l1 */
+ [2] = PWRSTS_ON, /* dsp1_l2 */
},
.flags = PWRDM_HAS_LOWPOWERSTATECHANGE,
};
@@ -358,7 +358,7 @@ static struct powerdomain cam_7xx_pwrdm = {
[0] = PWRSTS_OFF_RET, /* vip_bank */
},
.pwrsts_mem_on = {
- [0] = PWRSTS_OFF_RET, /* vip_bank */
+ [0] = PWRSTS_ON, /* vip_bank */
},
.flags = PWRDM_HAS_LOWPOWERSTATECHANGE,
};
@@ -374,7 +374,7 @@ static struct powerdomain eve4_7xx_pwrdm = {
[0] = PWRSTS_OFF_RET, /* eve4_bank */
},
.pwrsts_mem_on = {
- [0] = PWRSTS_OFF_RET, /* eve4_bank */
+ [0] = PWRSTS_ON, /* eve4_bank */
},
.flags = PWRDM_HAS_LOWPOWERSTATECHANGE,
};
@@ -390,7 +390,7 @@ static struct powerdomain eve2_7xx_pwrdm = {
[0] = PWRSTS_OFF_RET, /* eve2_bank */
},
.pwrsts_mem_on = {
- [0] = PWRSTS_OFF_RET, /* eve2_bank */
+ [0] = PWRSTS_ON, /* eve2_bank */
},
.flags = PWRDM_HAS_LOWPOWERSTATECHANGE,
};
@@ -406,7 +406,7 @@ static struct powerdomain eve1_7xx_pwrdm = {
[0] = PWRSTS_OFF_RET, /* eve1_bank */
},
.pwrsts_mem_on = {
- [0] = PWRSTS_OFF_RET, /* eve1_bank */
+ [0] = PWRSTS_ON, /* eve1_bank */
},
.flags = PWRDM_HAS_LOWPOWERSTATECHANGE,
};
diff --git a/arch/arm/mach-omap2/soc.h b/arch/arm/mach-omap2/soc.h
index 364418c78bf3..2aa01c270898 100644
--- a/arch/arm/mach-omap2/soc.h
+++ b/arch/arm/mach-omap2/soc.h
@@ -39,82 +39,10 @@
#include <linux/of.h>
/*
- * Test if multicore OMAP support is needed
+ * OMAP2+ is always defined as ARCH_MULTIPLATFORM in Kconfig
*/
#undef MULTI_OMAP2
-#undef OMAP_NAME
-
-#ifdef CONFIG_ARCH_MULTIPLATFORM
#define MULTI_OMAP2
-#endif
-#ifdef CONFIG_SOC_OMAP2420
-# ifdef OMAP_NAME
-# undef MULTI_OMAP2
-# define MULTI_OMAP2
-# else
-# define OMAP_NAME omap2420
-# endif
-#endif
-#ifdef CONFIG_SOC_OMAP2430
-# ifdef OMAP_NAME
-# undef MULTI_OMAP2
-# define MULTI_OMAP2
-# else
-# define OMAP_NAME omap2430
-# endif
-#endif
-#ifdef CONFIG_ARCH_OMAP3
-# ifdef OMAP_NAME
-# undef MULTI_OMAP2
-# define MULTI_OMAP2
-# else
-# define OMAP_NAME omap3
-# endif
-#endif
-#ifdef CONFIG_ARCH_OMAP4
-# ifdef OMAP_NAME
-# undef MULTI_OMAP2
-# define MULTI_OMAP2
-# else
-# define OMAP_NAME omap4
-# endif
-#endif
-
-#ifdef CONFIG_SOC_OMAP5
-# ifdef OMAP_NAME
-# undef MULTI_OMAP2
-# define MULTI_OMAP2
-# else
-# define OMAP_NAME omap5
-# endif
-#endif
-
-#ifdef CONFIG_SOC_AM33XX
-# ifdef OMAP_NAME
-# undef MULTI_OMAP2
-# define MULTI_OMAP2
-# else
-# define OMAP_NAME am33xx
-# endif
-#endif
-
-#ifdef CONFIG_SOC_AM43XX
-# ifdef OMAP_NAME
-# undef MULTI_OMAP2
-# define MULTI_OMAP2
-# else
-# define OMAP_NAME am43xx
-# endif
-#endif
-
-#ifdef CONFIG_SOC_DRA7XX
-# ifdef OMAP_NAME
-# undef MULTI_OMAP2
-# define MULTI_OMAP2
-# else
-# define OMAP_NAME DRA7XX
-# endif
-#endif
/*
* Omap device type i.e. EMU/HS/TST/GP/BAD
@@ -242,11 +170,6 @@ IS_AM_SUBCLASS(437x, 0x437)
IS_DRA_SUBCLASS(75x, 0x75)
IS_DRA_SUBCLASS(72x, 0x72)
-#define soc_is_omap24xx() 0
-#define soc_is_omap242x() 0
-#define soc_is_omap243x() 0
-#define soc_is_omap34xx() 0
-#define soc_is_omap343x() 0
#define soc_is_ti81xx() 0
#define soc_is_ti816x() 0
#define soc_is_ti814x() 0
@@ -265,46 +188,27 @@ IS_DRA_SUBCLASS(72x, 0x72)
#define soc_is_dra74x() 0
#define soc_is_dra72x() 0
-#if defined(MULTI_OMAP2)
-# if defined(CONFIG_ARCH_OMAP2)
-# undef soc_is_omap24xx
-# define soc_is_omap24xx() is_omap24xx()
-# endif
-# if defined (CONFIG_SOC_OMAP2420)
-# undef soc_is_omap242x
-# define soc_is_omap242x() is_omap242x()
-# endif
-# if defined (CONFIG_SOC_OMAP2430)
-# undef soc_is_omap243x
-# define soc_is_omap243x() is_omap243x()
-# endif
-# if defined(CONFIG_ARCH_OMAP3)
-# undef soc_is_omap34xx
-# undef soc_is_omap343x
-# define soc_is_omap34xx() is_omap34xx()
-# define soc_is_omap343x() is_omap343x()
-# endif
+#if defined(CONFIG_ARCH_OMAP2)
+# define soc_is_omap24xx() is_omap24xx()
#else
-# if defined(CONFIG_ARCH_OMAP2)
-# undef soc_is_omap24xx
-# define soc_is_omap24xx() 1
-# endif
-# if defined(CONFIG_SOC_OMAP2420)
-# undef soc_is_omap242x
-# define soc_is_omap242x() 1
-# endif
-# if defined(CONFIG_SOC_OMAP2430)
-# undef soc_is_omap243x
-# define soc_is_omap243x() 1
-# endif
-# if defined(CONFIG_ARCH_OMAP3)
-# undef soc_is_omap34xx
-# define soc_is_omap34xx() 1
-# endif
-# if defined(CONFIG_SOC_OMAP3430)
-# undef soc_is_omap343x
-# define soc_is_omap343x() 1
-# endif
+# define soc_is_omap24xx() 0
+#endif
+#if defined(CONFIG_SOC_OMAP2420)
+# define soc_is_omap242x() is_omap242x()
+#else
+# define soc_is_omap242x() 0
+#endif
+#if defined(CONFIG_SOC_OMAP2430)
+# define soc_is_omap243x() is_omap243x()
+#else
+# define soc_is_omap243x() 0
+#endif
+#if defined(CONFIG_ARCH_OMAP3)
+# define soc_is_omap34xx() is_omap34xx()
+# define soc_is_omap343x() is_omap343x()
+#else
+# define soc_is_omap34xx() 0
+# define soc_is_omap343x() 0
#endif
/*
@@ -339,7 +243,6 @@ IS_OMAP_TYPE(3430, 0x3430)
#define soc_is_omap5430() 0
/* These are needed for the common code */
-#ifdef CONFIG_ARCH_OMAP2PLUS
#define soc_is_omap7xx() 0
#define soc_is_omap15xx() 0
#define soc_is_omap16xx() 0
@@ -350,7 +253,6 @@ IS_OMAP_TYPE(3430, 0x3430)
#define soc_is_omap1710() 0
#define cpu_class_is_omap1() 0
#define cpu_class_is_omap2() 1
-#endif
#if defined(CONFIG_ARCH_OMAP2)
# undef soc_is_omap2420
diff --git a/arch/arm/mach-orion5x/common.c b/arch/arm/mach-orion5x/common.c
index 70c3366c8d03..058994e99570 100644
--- a/arch/arm/mach-orion5x/common.c
+++ b/arch/arm/mach-orion5x/common.c
@@ -66,8 +66,7 @@ static struct clk *tclk;
void __init clk_init(void)
{
- tclk = clk_register_fixed_rate(NULL, "tclk", NULL, CLK_IS_ROOT,
- orion5x_tclk);
+ tclk = clk_register_fixed_rate(NULL, "tclk", NULL, 0, orion5x_tclk);
orion_clkdev_init(tclk);
}
diff --git a/arch/arm/mach-oxnas/Kconfig b/arch/arm/mach-oxnas/Kconfig
new file mode 100644
index 000000000000..4fff3c7666df
--- /dev/null
+++ b/arch/arm/mach-oxnas/Kconfig
@@ -0,0 +1,24 @@
+menuconfig ARCH_OXNAS
+ bool "Oxford Semiconductor OXNAS Family SoCs"
+ select ARCH_REQUIRE_GPIOLIB
+ select ARCH_HAS_RESET_CONTROLLER
+ select PINCTRL
+ depends on ARCH_MULTI_V5
+ help
+	  Support for the OXNAS SoC family developed by Oxford Semiconductor.
+
+if ARCH_OXNAS
+
+config MACH_OX810SE
+ bool "Support OX810SE Based Products"
+ select ARM_TIMER_SP804
+ select COMMON_CLK_OXNAS
+ select CPU_ARM926T
+ select MFD_SYSCON
+ select PINCTRL_OXNAS
+ select RESET_OXNAS
+ select VERSATILE_FPGA_IRQ
+ help
+	  Include support for products based on the Oxford Semiconductor OX810SE SoC.
+
+endif
diff --git a/arch/arm/mach-pxa/Kconfig b/arch/arm/mach-pxa/Kconfig
index 7ee4652b4c61..cd894d69e766 100644
--- a/arch/arm/mach-pxa/Kconfig
+++ b/arch/arm/mach-pxa/Kconfig
@@ -6,6 +6,7 @@ comment "Intel/Marvell Dev Platforms (sorted by hardware release time)"
config MACH_PXA27X_DT
bool "Support PXA27x platforms from device tree"
+ select PINCTRL
select POWER_SUPPLY
select PXA27x
select USE_OF
@@ -17,6 +18,7 @@ config MACH_PXA27X_DT
config MACH_PXA3XX_DT
bool "Support PXA3xx platforms from device tree"
select CPU_PXA300
+ select PINCTRL
select POWER_SUPPLY
select PXA3xx
select USE_OF
diff --git a/arch/arm/mach-pxa/eseries.c b/arch/arm/mach-pxa/eseries.c
index e838b11fb8c7..fa9d71d194f0 100644
--- a/arch/arm/mach-pxa/eseries.c
+++ b/arch/arm/mach-pxa/eseries.c
@@ -128,7 +128,7 @@ struct resource eseries_tmio_resources[] = {
/* Some e-series hardware cannot control the 32K clock */
static void __init __maybe_unused eseries_register_clks(void)
{
- clk_register_fixed_rate(NULL, "CLK_CK32K", NULL, CLK_IS_ROOT, 32768);
+ clk_register_fixed_rate(NULL, "CLK_CK32K", NULL, 0, 32768);
}
#ifdef CONFIG_MACH_E330
diff --git a/arch/arm/mach-pxa/raumfeld.c b/arch/arm/mach-pxa/raumfeld.c
index 5a941bd3dbed..e216433b56ed 100644
--- a/arch/arm/mach-pxa/raumfeld.c
+++ b/arch/arm/mach-pxa/raumfeld.c
@@ -385,10 +385,6 @@ static struct property_entry raumfeld_rotary_properties[] = {
{ },
};
-static struct property_set raumfeld_rotary_property_set = {
- .properties = raumfeld_rotary_properties,
-};
-
static struct platform_device rotary_encoder_device = {
.name = "rotary-encoder",
.id = 0,
@@ -1063,8 +1059,8 @@ static void __init __maybe_unused raumfeld_controller_init(void)
pxa3xx_mfp_config(ARRAY_AND_SIZE(raumfeld_controller_pin_config));
gpiod_add_lookup_table(&raumfeld_rotary_gpios_table);
- device_add_property_set(&rotary_encoder_device.dev,
- &raumfeld_rotary_property_set);
+ device_add_properties(&rotary_encoder_device.dev,
+ raumfeld_rotary_properties);
platform_device_register(&rotary_encoder_device);
spi_register_board_info(ARRAY_AND_SIZE(controller_spi_devices));
@@ -1103,8 +1099,8 @@ static void __init __maybe_unused raumfeld_speaker_init(void)
platform_device_register(&smc91x_device);
gpiod_add_lookup_table(&raumfeld_rotary_gpios_table);
- device_add_property_set(&rotary_encoder_device.dev,
- &raumfeld_rotary_property_set);
+ device_add_properties(&rotary_encoder_device.dev,
+ raumfeld_rotary_properties);
platform_device_register(&rotary_encoder_device);
raumfeld_audio_init();
diff --git a/arch/arm/mach-pxa/spitz.c b/arch/arm/mach-pxa/spitz.c
index d9578bc49fdc..bd7cd8b6a286 100644
--- a/arch/arm/mach-pxa/spitz.c
+++ b/arch/arm/mach-pxa/spitz.c
@@ -763,14 +763,49 @@ static struct nand_bbt_descr spitz_nand_bbt = {
.pattern = scan_ff_pattern
};
-static struct nand_ecclayout akita_oobinfo = {
- .oobfree = { {0x08, 0x09} },
- .eccbytes = 24,
- .eccpos = {
- 0x05, 0x01, 0x02, 0x03, 0x06, 0x07, 0x15, 0x11,
- 0x12, 0x13, 0x16, 0x17, 0x25, 0x21, 0x22, 0x23,
- 0x26, 0x27, 0x35, 0x31, 0x32, 0x33, 0x36, 0x37,
- },
+static int akita_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section > 12)
+ return -ERANGE;
+
+ switch (section % 3) {
+ case 0:
+ oobregion->offset = 5;
+ oobregion->length = 1;
+ break;
+
+ case 1:
+ oobregion->offset = 1;
+ oobregion->length = 3;
+ break;
+
+ case 2:
+ oobregion->offset = 6;
+ oobregion->length = 2;
+ break;
+ }
+
+ oobregion->offset += (section / 3) * 0x10;
+
+ return 0;
+}
+
+static int akita_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section)
+ return -ERANGE;
+
+ oobregion->offset = 8;
+ oobregion->length = 9;
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops akita_ooblayout_ops = {
+ .ecc = akita_ooblayout_ecc,
+ .free = akita_ooblayout_free,
};
static struct sharpsl_nand_platform_data spitz_nand_pdata = {
@@ -804,11 +839,11 @@ static void __init spitz_nand_init(void)
} else if (machine_is_akita()) {
spitz_nand_partitions[1].size = 58 * 1024 * 1024;
spitz_nand_bbt.len = 1;
- spitz_nand_pdata.ecc_layout = &akita_oobinfo;
+ spitz_nand_pdata.ecc_layout = &akita_ooblayout_ops;
} else if (machine_is_borzoi()) {
spitz_nand_partitions[1].size = 32 * 1024 * 1024;
spitz_nand_bbt.len = 1;
- spitz_nand_pdata.ecc_layout = &akita_oobinfo;
+ spitz_nand_pdata.ecc_layout = &akita_ooblayout_ops;
}
platform_device_register(&spitz_nand_device);
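
The two callbacks replace the fixed akita_oobinfo table, so it is worth checking that they describe the same layout; the free region (offset 8, length 9) matches the old oobfree entry directly. The snippet below re-implements the same ECC arithmetic outside the kernel, with a local struct standing in for struct mtd_oob_region, and for sections 0-11 prints exactly the 24 ECC byte positions of the old eccpos array: 0x05, 0x01-0x03 and 0x06-0x07, repeated at 0x10 strides.

    #include <stdio.h>

    /* Local stand-in for struct mtd_oob_region, just for this check. */
    struct region { int offset; int length; };

    /* Same arithmetic as akita_ooblayout_ecc() above. */
    static int akita_ecc(int section, struct region *r)
    {
        if (section > 12)
            return -1;

        switch (section % 3) {
        case 0: r->offset = 5; r->length = 1; break;
        case 1: r->offset = 1; r->length = 3; break;
        case 2: r->offset = 6; r->length = 2; break;
        }
        r->offset += (section / 3) * 0x10;
        return 0;
    }

    int main(void)
    {
        struct region r;
        int s;

        /* Sections 0..11 cover the 24 ECC bytes of the old eccpos table. */
        for (s = 0; s < 12; s++) {
            akita_ecc(s, &r);
            printf("section %2d: 0x%02x..0x%02x\n",
                   s, r.offset, r.offset + r.length - 1);
        }
        return 0;
    }
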
diff --git a/arch/arm/mach-realview/realview_pbx.c b/arch/arm/mach-realview/realview_pbx.c
index b9f0757787ac..be1cec5fe3ad 100644
--- a/arch/arm/mach-realview/realview_pbx.c
+++ b/arch/arm/mach-realview/realview_pbx.c
@@ -248,6 +248,7 @@ static struct resource realview_pbx_isp1761_resources[] = {
},
};
+#ifdef CONFIG_CACHE_L2X0
static struct resource pmu_resources[] = {
[0] = {
.start = IRQ_PBX_PMU_CPU0,
@@ -277,6 +278,7 @@ static struct platform_device pmu_device = {
.num_resources = ARRAY_SIZE(pmu_resources),
.resource = pmu_resources,
};
+#endif
static void __init gic_init_irq(void)
{
diff --git a/arch/arm/mach-rockchip/platsmp.c b/arch/arm/mach-rockchip/platsmp.c
index d42a07e33482..4d827a069d49 100644
--- a/arch/arm/mach-rockchip/platsmp.c
+++ b/arch/arm/mach-rockchip/platsmp.c
@@ -65,7 +65,7 @@ static struct reset_control *rockchip_get_core_reset(int cpu)
if (dev)
np = dev->of_node;
else
- np = of_get_cpu_node(cpu, 0);
+ np = of_get_cpu_node(cpu, NULL);
return of_reset_control_get(np, NULL);
}
diff --git a/arch/arm/mach-rockchip/rockchip.c b/arch/arm/mach-rockchip/rockchip.c
index 3f07cc5dfe5f..beb71da5d9c8 100644
--- a/arch/arm/mach-rockchip/rockchip.c
+++ b/arch/arm/mach-rockchip/rockchip.c
@@ -74,7 +74,6 @@ static void __init rockchip_dt_init(void)
{
rockchip_suspend_init();
of_platform_populate(NULL, of_default_bus_match_table, NULL, NULL);
- platform_device_register_simple("cpufreq-dt", 0, NULL, 0);
}
static const char * const rockchip_board_dt_compat[] = {
diff --git a/arch/arm/mach-s3c24xx/mach-rx1950.c b/arch/arm/mach-s3c24xx/mach-rx1950.c
index 774c982a7b7e..25a139bb9826 100644
--- a/arch/arm/mach-s3c24xx/mach-rx1950.c
+++ b/arch/arm/mach-s3c24xx/mach-rx1950.c
@@ -496,6 +496,12 @@ static int rx1950_backlight_init(struct device *dev)
return PTR_ERR(lcd_pwm);
}
+ /*
+ * FIXME: pwm_apply_args() should be removed when switching to
+ * the atomic PWM API.
+ */
+ pwm_apply_args(lcd_pwm);
+
rx1950_lcd_power(1);
rx1950_bl_power(1);
diff --git a/arch/arm/mach-shmobile/Kconfig b/arch/arm/mach-shmobile/Kconfig
index f2bc5c353119..fe4ccb52f921 100644
--- a/arch/arm/mach-shmobile/Kconfig
+++ b/arch/arm/mach-shmobile/Kconfig
@@ -4,11 +4,6 @@ config ARCH_SHMOBILE
config ARCH_SHMOBILE_MULTI
bool
-config PM_RCAR
- bool
- select PM
- select PM_GENERIC_DOMAINS
-
config PM_RMOBILE
bool
select PM
@@ -16,13 +11,15 @@ config PM_RMOBILE
config ARCH_RCAR_GEN1
bool
- select PM_RCAR
+ select PM
+ select PM_GENERIC_DOMAINS
select RENESAS_INTC_IRQPIN
select SYS_SUPPORTS_SH_TMU
config ARCH_RCAR_GEN2
bool
- select PM_RCAR
+ select PM
+ select PM_GENERIC_DOMAINS
select RENESAS_IRQC
select SYS_SUPPORTS_SH_CMT
select PCI_DOMAINS if PCI
diff --git a/arch/arm/mach-shmobile/Makefile b/arch/arm/mach-shmobile/Makefile
index a65c80ac9009..fc95f7bd2dd9 100644
--- a/arch/arm/mach-shmobile/Makefile
+++ b/arch/arm/mach-shmobile/Makefile
@@ -38,8 +38,6 @@ smp-$(CONFIG_ARCH_EMEV2) += smp-emev2.o headsmp-scu.o platsmp-scu.o
# PM objects
obj-$(CONFIG_SUSPEND) += suspend.o
-obj-$(CONFIG_CPU_FREQ) += cpufreq.o
-obj-$(CONFIG_PM_RCAR) += pm-rcar.o
obj-$(CONFIG_PM_RMOBILE) += pm-rmobile.o
obj-$(CONFIG_ARCH_RCAR_GEN2) += pm-rcar-gen2.o
diff --git a/arch/arm/mach-shmobile/common.h b/arch/arm/mach-shmobile/common.h
index 5464b7a75e30..3b562d87826d 100644
--- a/arch/arm/mach-shmobile/common.h
+++ b/arch/arm/mach-shmobile/common.h
@@ -25,16 +25,9 @@ static inline int shmobile_suspend_init(void) { return 0; }
static inline void shmobile_smp_apmu_suspend_init(void) { }
#endif
-#ifdef CONFIG_CPU_FREQ
-int shmobile_cpufreq_init(void);
-#else
-static inline int shmobile_cpufreq_init(void) { return 0; }
-#endif
-
static inline void __init shmobile_init_late(void)
{
shmobile_suspend_init();
- shmobile_cpufreq_init();
}
#endif /* __ARCH_MACH_COMMON_H */
diff --git a/arch/arm/mach-shmobile/cpufreq.c b/arch/arm/mach-shmobile/cpufreq.c
deleted file mode 100644
index 634d701c56a7..000000000000
--- a/arch/arm/mach-shmobile/cpufreq.c
+++ /dev/null
@@ -1,19 +0,0 @@
-/*
- * CPUFreq support code for SH-Mobile ARM
- *
- * Copyright (C) 2014 Gaku Inami
- *
- * This file is subject to the terms and conditions of the GNU General Public
- * License. See the file "COPYING" in the main directory of this archive
- * for more details.
- */
-
-#include <linux/platform_device.h>
-
-#include "common.h"
-
-int __init shmobile_cpufreq_init(void)
-{
- platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
- return 0;
-}
diff --git a/arch/arm/mach-shmobile/pm-r8a7779.c b/arch/arm/mach-shmobile/pm-r8a7779.c
index 14c42a1bdf1e..4174cbcbc467 100644
--- a/arch/arm/mach-shmobile/pm-r8a7779.c
+++ b/arch/arm/mach-shmobile/pm-r8a7779.c
@@ -9,9 +9,10 @@
* for more details.
*/
+#include <linux/soc/renesas/rcar-sysc.h>
+
#include <asm/io.h>
-#include "pm-rcar.h"
#include "r8a7779.h"
/* SYSC */
diff --git a/arch/arm/mach-shmobile/pm-rcar-gen2.c b/arch/arm/mach-shmobile/pm-rcar-gen2.c
index 6815781ad116..691ac166a277 100644
--- a/arch/arm/mach-shmobile/pm-rcar-gen2.c
+++ b/arch/arm/mach-shmobile/pm-rcar-gen2.c
@@ -13,9 +13,9 @@
#include <linux/kernel.h>
#include <linux/of.h>
#include <linux/smp.h>
+#include <linux/soc/renesas/rcar-sysc.h>
#include <asm/io.h>
#include "common.h"
-#include "pm-rcar.h"
#include "rcar-gen2.h"
/* RST */
diff --git a/arch/arm/mach-shmobile/pm-rcar.c b/arch/arm/mach-shmobile/pm-rcar.c
deleted file mode 100644
index 0af05d288b09..000000000000
--- a/arch/arm/mach-shmobile/pm-rcar.c
+++ /dev/null
@@ -1,164 +0,0 @@
-/*
- * R-Car SYSC Power management support
- *
- * Copyright (C) 2014 Magnus Damm
- *
- * This file is subject to the terms and conditions of the GNU General Public
- * License. See the file "COPYING" in the main directory of this archive
- * for more details.
- */
-
-#include <linux/delay.h>
-#include <linux/err.h>
-#include <linux/mm.h>
-#include <linux/spinlock.h>
-#include <linux/io.h>
-#include "pm-rcar.h"
-
-/* SYSC Common */
-#define SYSCSR 0x00 /* SYSC Status Register */
-#define SYSCISR 0x04 /* Interrupt Status Register */
-#define SYSCISCR 0x08 /* Interrupt Status Clear Register */
-#define SYSCIER 0x0c /* Interrupt Enable Register */
-#define SYSCIMR 0x10 /* Interrupt Mask Register */
-
-/* SYSC Status Register */
-#define SYSCSR_PONENB 1 /* Ready for power resume requests */
-#define SYSCSR_POFFENB 0 /* Ready for power shutoff requests */
-
-/*
- * Power Control Register Offsets inside the register block for each domain
- * Note: The "CR" registers for ARM cores exist on H1 only
- * Use WFI to power off, CPG/APMU to resume ARM cores on R-Car Gen2
- */
-#define PWRSR_OFFS 0x00 /* Power Status Register */
-#define PWROFFCR_OFFS 0x04 /* Power Shutoff Control Register */
-#define PWROFFSR_OFFS 0x08 /* Power Shutoff Status Register */
-#define PWRONCR_OFFS 0x0c /* Power Resume Control Register */
-#define PWRONSR_OFFS 0x10 /* Power Resume Status Register */
-#define PWRER_OFFS 0x14 /* Power Shutoff/Resume Error */
-
-
-#define SYSCSR_RETRIES 100
-#define SYSCSR_DELAY_US 1
-
-#define PWRER_RETRIES 100
-#define PWRER_DELAY_US 1
-
-#define SYSCISR_RETRIES 1000
-#define SYSCISR_DELAY_US 1
-
-static void __iomem *rcar_sysc_base;
-static DEFINE_SPINLOCK(rcar_sysc_lock); /* SMP CPUs + I/O devices */
-
-static int rcar_sysc_pwr_on_off(const struct rcar_sysc_ch *sysc_ch, bool on)
-{
- unsigned int sr_bit, reg_offs;
- int k;
-
- if (on) {
- sr_bit = SYSCSR_PONENB;
- reg_offs = PWRONCR_OFFS;
- } else {
- sr_bit = SYSCSR_POFFENB;
- reg_offs = PWROFFCR_OFFS;
- }
-
- /* Wait until SYSC is ready to accept a power request */
- for (k = 0; k < SYSCSR_RETRIES; k++) {
- if (ioread32(rcar_sysc_base + SYSCSR) & BIT(sr_bit))
- break;
- udelay(SYSCSR_DELAY_US);
- }
-
- if (k == SYSCSR_RETRIES)
- return -EAGAIN;
-
- /* Submit power shutoff or power resume request */
- iowrite32(BIT(sysc_ch->chan_bit),
- rcar_sysc_base + sysc_ch->chan_offs + reg_offs);
-
- return 0;
-}
-
-static int rcar_sysc_power(const struct rcar_sysc_ch *sysc_ch, bool on)
-{
- unsigned int isr_mask = BIT(sysc_ch->isr_bit);
- unsigned int chan_mask = BIT(sysc_ch->chan_bit);
- unsigned int status;
- unsigned long flags;
- int ret = 0;
- int k;
-
- spin_lock_irqsave(&rcar_sysc_lock, flags);
-
- iowrite32(isr_mask, rcar_sysc_base + SYSCISCR);
-
- /* Submit power shutoff or resume request until it was accepted */
- for (k = 0; k < PWRER_RETRIES; k++) {
- ret = rcar_sysc_pwr_on_off(sysc_ch, on);
- if (ret)
- goto out;
-
- status = ioread32(rcar_sysc_base +
- sysc_ch->chan_offs + PWRER_OFFS);
- if (!(status & chan_mask))
- break;
-
- udelay(PWRER_DELAY_US);
- }
-
- if (k == PWRER_RETRIES) {
- ret = -EIO;
- goto out;
- }
-
- /* Wait until the power shutoff or resume request has completed * */
- for (k = 0; k < SYSCISR_RETRIES; k++) {
- if (ioread32(rcar_sysc_base + SYSCISR) & isr_mask)
- break;
- udelay(SYSCISR_DELAY_US);
- }
-
- if (k == SYSCISR_RETRIES)
- ret = -EIO;
-
- iowrite32(isr_mask, rcar_sysc_base + SYSCISCR);
-
- out:
- spin_unlock_irqrestore(&rcar_sysc_lock, flags);
-
- pr_debug("sysc power domain %d: %08x -> %d\n",
- sysc_ch->isr_bit, ioread32(rcar_sysc_base + SYSCISR), ret);
- return ret;
-}
-
-int rcar_sysc_power_down(const struct rcar_sysc_ch *sysc_ch)
-{
- return rcar_sysc_power(sysc_ch, false);
-}
-
-int rcar_sysc_power_up(const struct rcar_sysc_ch *sysc_ch)
-{
- return rcar_sysc_power(sysc_ch, true);
-}
-
-bool rcar_sysc_power_is_off(const struct rcar_sysc_ch *sysc_ch)
-{
- unsigned int st;
-
- st = ioread32(rcar_sysc_base + sysc_ch->chan_offs + PWRSR_OFFS);
- if (st & BIT(sysc_ch->chan_bit))
- return true;
-
- return false;
-}
-
-void __iomem *rcar_sysc_init(phys_addr_t base)
-{
- rcar_sysc_base = ioremap_nocache(base, PAGE_SIZE);
- if (!rcar_sysc_base)
- panic("unable to ioremap R-Car SYSC hardware block\n");
-
- return rcar_sysc_base;
-}
diff --git a/arch/arm/mach-shmobile/smp-r8a7779.c b/arch/arm/mach-shmobile/smp-r8a7779.c
index f5c31fbc10b2..c6951ee24588 100644
--- a/arch/arm/mach-shmobile/smp-r8a7779.c
+++ b/arch/arm/mach-shmobile/smp-r8a7779.c
@@ -19,13 +19,13 @@
#include <linux/spinlock.h>
#include <linux/io.h>
#include <linux/delay.h>
+#include <linux/soc/renesas/rcar-sysc.h>
#include <asm/cacheflush.h>
#include <asm/smp_plat.h>
#include <asm/smp_scu.h>
#include "common.h"
-#include "pm-rcar.h"
#include "r8a7779.h"
#define AVECR IOMEM(0xfe700040)
diff --git a/arch/arm/mach-shmobile/smp-r8a7790.c b/arch/arm/mach-shmobile/smp-r8a7790.c
index f6426c6fdefc..28f26d5362d8 100644
--- a/arch/arm/mach-shmobile/smp-r8a7790.c
+++ b/arch/arm/mach-shmobile/smp-r8a7790.c
@@ -17,12 +17,12 @@
#include <linux/init.h>
#include <linux/smp.h>
#include <linux/io.h>
+#include <linux/soc/renesas/rcar-sysc.h>
#include <asm/smp_plat.h>
#include "common.h"
#include "platsmp-apmu.h"
-#include "pm-rcar.h"
#include "rcar-gen2.h"
#include "r8a7790.h"
diff --git a/arch/arm/mach-shmobile/timer.c b/arch/arm/mach-shmobile/timer.c
index 67d79f9c6bad..6196a6380385 100644
--- a/arch/arm/mach-shmobile/timer.c
+++ b/arch/arm/mach-shmobile/timer.c
@@ -20,28 +20,9 @@
#include "common.h"
-static void __init shmobile_setup_delay_hz(unsigned int max_cpu_core_hz,
- unsigned int mult, unsigned int div)
-{
- /* calculate a worst-case loops-per-jiffy value
- * based on maximum cpu core hz setting and the
- * __delay() implementation in arch/arm/lib/delay.S
- *
- * this will result in a longer delay than expected
- * when the cpu core runs on lower frequencies.
- */
-
- unsigned int value = HZ * div / mult;
-
- if (!preset_lpj)
- preset_lpj = max_cpu_core_hz / value;
-}
-
void __init shmobile_init_delay(void)
{
struct device_node *np, *cpus;
- unsigned int div = 0;
- bool has_arch_timer = false;
u32 max_freq = 0;
cpus = of_find_node_by_path("/cpus");
@@ -51,25 +32,32 @@ void __init shmobile_init_delay(void)
for_each_child_of_node(cpus, np) {
u32 freq;
+ if (IS_ENABLED(CONFIG_ARM_ARCH_TIMER) &&
+ (of_device_is_compatible(np, "arm,cortex-a7") ||
+ of_device_is_compatible(np, "arm,cortex-a15"))) {
+ of_node_put(np);
+ of_node_put(cpus);
+ return;
+ }
+
if (!of_property_read_u32(np, "clock-frequency", &freq))
max_freq = max(max_freq, freq);
-
- if (of_device_is_compatible(np, "arm,cortex-a8")) {
- div = 2;
- } else if (of_device_is_compatible(np, "arm,cortex-a9")) {
- div = 1;
- } else if (of_device_is_compatible(np, "arm,cortex-a7") ||
- of_device_is_compatible(np, "arm,cortex-a15")) {
- div = 1;
- has_arch_timer = true;
- }
}
of_node_put(cpus);
- if (!max_freq || !div)
+ if (!max_freq)
return;
- if (!has_arch_timer || !IS_ENABLED(CONFIG_ARM_ARCH_TIMER))
- shmobile_setup_delay_hz(max_freq, 1, div);
+ /*
+ * Calculate a worst-case loops-per-jiffy value
+ * based on maximum cpu core hz setting and the
+ * __delay() implementation in arch/arm/lib/delay.S.
+ *
+ * This will result in a longer delay than expected
+ * when the cpu core runs on lower frequencies.
+ */
+
+ if (!preset_lpj)
+ preset_lpj = max_freq / HZ;
}
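With the arch-timer early return above, the only CPUs left on this path used mult/div of 1/1, so the worst-case loops-per-jiffy collapses to max_freq / HZ. A trivial stand-alone sketch of that arithmetic, assuming HZ = 100 and a 1.2 GHz fastest core (both values are illustrative, not taken from the patch):

#include <stdio.h>

int main(void)
{
	/* Assumed values: 100 scheduler ticks per second, 1.2 GHz max clock-frequency. */
	const unsigned int hz = 100;
	const unsigned int max_freq = 1200000000U;

	/* Worst-case loops-per-jiffy, as in the simplified shmobile_init_delay(). */
	unsigned int preset_lpj = max_freq / hz;

	printf("preset_lpj = %u\n", preset_lpj);	/* 12000000 */
	return 0;
}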
diff --git a/arch/arm/mach-socfpga/core.h b/arch/arm/mach-socfpga/core.h
index 575195be6687..65e1817d8afe 100644
--- a/arch/arm/mach-socfpga/core.h
+++ b/arch/arm/mach-socfpga/core.h
@@ -38,6 +38,8 @@ extern void socfpga_init_clocks(void);
extern void socfpga_sysmgr_init(void);
void socfpga_init_l2_ecc(void);
void socfpga_init_ocram_ecc(void);
+void socfpga_init_arria10_l2_ecc(void);
+void socfpga_init_arria10_ocram_ecc(void);
extern void __iomem *sys_manager_base_addr;
extern void __iomem *rst_manager_base_addr;
diff --git a/arch/arm/mach-socfpga/l2_cache.c b/arch/arm/mach-socfpga/l2_cache.c
index e3907ab58d05..4267c95f2158 100644
--- a/arch/arm/mach-socfpga/l2_cache.c
+++ b/arch/arm/mach-socfpga/l2_cache.c
@@ -17,6 +17,20 @@
#include <linux/of_platform.h>
#include <linux/of_address.h>
+#include "core.h"
+
+/* A10 System Manager L2 ECC Control register */
+#define A10_MPU_CTRL_L2_ECC_OFST 0x0
+#define A10_MPU_CTRL_L2_ECC_EN BIT(0)
+
+/* A10 System Manager Global IRQ Mask register */
+#define A10_SYSMGR_ECC_INTMASK_CLR_OFST 0x98
+#define A10_SYSMGR_ECC_INTMASK_CLR_L2 BIT(0)
+
+/* A10 System Manager L2 ECC IRQ Clear register */
+#define A10_SYSMGR_MPU_CLEAR_L2_ECC_OFST 0xA8
+#define A10_SYSMGR_MPU_CLEAR_L2_ECC (BIT(31) | BIT(15))
+
void socfpga_init_l2_ecc(void)
{
struct device_node *np;
@@ -39,3 +53,38 @@ void socfpga_init_l2_ecc(void)
writel(0x01, mapped_l2_edac_addr);
iounmap(mapped_l2_edac_addr);
}
+
+void socfpga_init_arria10_l2_ecc(void)
+{
+ struct device_node *np;
+ void __iomem *mapped_l2_edac_addr;
+
+ /* Find the L2 EDAC device tree node */
+ np = of_find_compatible_node(NULL, NULL, "altr,socfpga-a10-l2-ecc");
+ if (!np) {
+ pr_err("Unable to find socfpga-a10-l2-ecc in dtb\n");
+ return;
+ }
+
+ mapped_l2_edac_addr = of_iomap(np, 0);
+ of_node_put(np);
+ if (!mapped_l2_edac_addr) {
+ pr_err("Unable to find L2 ECC mapping in dtb\n");
+ return;
+ }
+
+ if (!sys_manager_base_addr) {
+ pr_err("System Mananger not mapped for L2 ECC\n");
+ goto exit;
+ }
+ /* Clear any pending IRQs */
+ writel(A10_SYSMGR_MPU_CLEAR_L2_ECC, (sys_manager_base_addr +
+ A10_SYSMGR_MPU_CLEAR_L2_ECC_OFST));
+ /* Enable ECC */
+ writel(A10_SYSMGR_ECC_INTMASK_CLR_L2, sys_manager_base_addr +
+ A10_SYSMGR_ECC_INTMASK_CLR_OFST);
+ writel(A10_MPU_CTRL_L2_ECC_EN, mapped_l2_edac_addr +
+ A10_MPU_CTRL_L2_ECC_OFST);
+exit:
+ iounmap(mapped_l2_edac_addr);
+}
diff --git a/arch/arm/mach-socfpga/ocram.c b/arch/arm/mach-socfpga/ocram.c
index 60ec643ac2be..10d673252395 100644
--- a/arch/arm/mach-socfpga/ocram.c
+++ b/arch/arm/mach-socfpga/ocram.c
@@ -13,12 +13,15 @@
* You should have received a copy of the GNU General Public License along with
* this program. If not, see <http://www.gnu.org/licenses/>.
*/
+#include <linux/delay.h>
#include <linux/io.h>
#include <linux/genalloc.h>
#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of_platform.h>
+#include "core.h"
+
#define ALTR_OCRAM_CLEAR_ECC 0x00000018
#define ALTR_OCRAM_ECC_EN 0x00000019
@@ -47,3 +50,133 @@ void socfpga_init_ocram_ecc(void)
iounmap(mapped_ocr_edac_addr);
}
+
+/* Arria10 OCRAM Section */
+#define ALTR_A10_ECC_CTRL_OFST 0x08
+#define ALTR_A10_OCRAM_ECC_EN_CTL (BIT(1) | BIT(0))
+#define ALTR_A10_ECC_INITA BIT(16)
+
+#define ALTR_A10_ECC_INITSTAT_OFST 0x0C
+#define ALTR_A10_ECC_INITCOMPLETEA BIT(0)
+#define ALTR_A10_ECC_INITCOMPLETEB BIT(8)
+
+#define ALTR_A10_ECC_ERRINTEN_OFST 0x10
+#define ALTR_A10_ECC_SERRINTEN BIT(0)
+
+#define ALTR_A10_ECC_INTSTAT_OFST 0x20
+#define ALTR_A10_ECC_SERRPENA BIT(0)
+#define ALTR_A10_ECC_DERRPENA BIT(8)
+#define ALTR_A10_ECC_ERRPENA_MASK (ALTR_A10_ECC_SERRPENA | \
+ ALTR_A10_ECC_DERRPENA)
+/* ECC Manager Defines */
+#define A10_SYSMGR_ECC_INTMASK_SET_OFST 0x94
+#define A10_SYSMGR_ECC_INTMASK_CLR_OFST 0x98
+#define A10_SYSMGR_ECC_INTMASK_OCRAM BIT(1)
+
+#define ALTR_A10_ECC_INIT_WATCHDOG_10US 10000
+
+static inline void ecc_set_bits(u32 bit_mask, void __iomem *ioaddr)
+{
+ u32 value = readl(ioaddr);
+
+ value |= bit_mask;
+ writel(value, ioaddr);
+}
+
+static inline void ecc_clear_bits(u32 bit_mask, void __iomem *ioaddr)
+{
+ u32 value = readl(ioaddr);
+
+ value &= ~bit_mask;
+ writel(value, ioaddr);
+}
+
+static inline int ecc_test_bits(u32 bit_mask, void __iomem *ioaddr)
+{
+ u32 value = readl(ioaddr);
+
+ return (value & bit_mask) ? 1 : 0;
+}
+
+/*
+ * This function uses the memory initialization block in the Arria10 ECC
+ * controller to initialize/clear the entire memory data and ECC data.
+ */
+static int altr_init_memory_port(void __iomem *ioaddr)
+{
+ int limit = ALTR_A10_ECC_INIT_WATCHDOG_10US;
+
+ ecc_set_bits(ALTR_A10_ECC_INITA, (ioaddr + ALTR_A10_ECC_CTRL_OFST));
+ while (limit--) {
+ if (ecc_test_bits(ALTR_A10_ECC_INITCOMPLETEA,
+ (ioaddr + ALTR_A10_ECC_INITSTAT_OFST)))
+ break;
+ udelay(1);
+ }
+ if (limit < 0)
+ return -EBUSY;
+
+ /* Clear any pending ECC interrupts */
+ writel(ALTR_A10_ECC_ERRPENA_MASK,
+ (ioaddr + ALTR_A10_ECC_INTSTAT_OFST));
+
+ return 0;
+}
+
+void socfpga_init_arria10_ocram_ecc(void)
+{
+ struct device_node *np;
+ int ret = 0;
+ void __iomem *ecc_block_base;
+
+ if (!sys_manager_base_addr) {
+ pr_err("SOCFPGA: sys-mgr is not initialized\n");
+ return;
+ }
+
+ /* Find the OCRAM EDAC device tree node */
+ np = of_find_compatible_node(NULL, NULL, "altr,socfpga-a10-ocram-ecc");
+ if (!np) {
+ pr_err("Unable to find socfpga-a10-ocram-ecc\n");
+ return;
+ }
+
+ /* Map the ECC Block */
+ ecc_block_base = of_iomap(np, 0);
+ of_node_put(np);
+ if (!ecc_block_base) {
+ pr_err("Unable to map OCRAM ECC block\n");
+ return;
+ }
+
+ /* Disable ECC */
+ writel(ALTR_A10_OCRAM_ECC_EN_CTL,
+ sys_manager_base_addr + A10_SYSMGR_ECC_INTMASK_SET_OFST);
+ ecc_clear_bits(ALTR_A10_ECC_SERRINTEN,
+ (ecc_block_base + ALTR_A10_ECC_ERRINTEN_OFST));
+ ecc_clear_bits(ALTR_A10_OCRAM_ECC_EN_CTL,
+ (ecc_block_base + ALTR_A10_ECC_CTRL_OFST));
+
+ /* Ensure all writes complete */
+ wmb();
+
+ /* Use HW initialization block to initialize memory for ECC */
+ ret = altr_init_memory_port(ecc_block_base);
+ if (ret) {
+ pr_err("ECC: cannot init OCRAM PORTA memory\n");
+ goto exit;
+ }
+
+ /* Enable ECC */
+ ecc_set_bits(ALTR_A10_OCRAM_ECC_EN_CTL,
+ (ecc_block_base + ALTR_A10_ECC_CTRL_OFST));
+ ecc_set_bits(ALTR_A10_ECC_SERRINTEN,
+ (ecc_block_base + ALTR_A10_ECC_ERRINTEN_OFST));
+ writel(ALTR_A10_OCRAM_ECC_EN_CTL,
+ sys_manager_base_addr + A10_SYSMGR_ECC_INTMASK_CLR_OFST);
+
+ /* Ensure all writes complete */
+ wmb();
+exit:
+ iounmap(ecc_block_base);
+}
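The Arria10 OCRAM init above relies on a bounded-poll idiom in altr_init_memory_port(): start the hardware initialization, poll a completion bit with a 1 us delay, and give up with -EBUSY once the watchdog count runs out. A stand-alone sketch of that pattern, with a stubbed status check standing in for the real INITCOMPLETEA register bit (the stub and function names are assumptions, not kernel API):

#include <errno.h>
#include <stdio.h>
#include <unistd.h>	/* usleep() */

#define INIT_WATCHDOG_COUNT 10000	/* mirrors ALTR_A10_ECC_INIT_WATCHDOG_10US */

/* Stub for "is the INITCOMPLETEA bit set?"; here it flips after a few polls. */
static int init_completed(int polls)
{
	return polls >= 4;
}

static int wait_for_memory_init(void)
{
	int limit = INIT_WATCHDOG_COUNT;
	int polls = 0;

	while (limit--) {
		if (init_completed(++polls))
			break;
		usleep(1);		/* udelay(1) in the kernel code */
	}
	if (limit < 0)
		return -EBUSY;		/* watchdog expired */

	return 0;			/* caller would clear pending ECC IRQs next */
}

int main(void)
{
	printf("wait_for_memory_init() = %d\n", wait_for_memory_init());
	return 0;
}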
diff --git a/arch/arm/mach-socfpga/socfpga.c b/arch/arm/mach-socfpga/socfpga.c
index 7e0aad2ec3d1..dde14f7bf2c3 100644
--- a/arch/arm/mach-socfpga/socfpga.c
+++ b/arch/arm/mach-socfpga/socfpga.c
@@ -66,6 +66,16 @@ static void __init socfpga_init_irq(void)
socfpga_init_ocram_ecc();
}
+static void __init socfpga_arria10_init_irq(void)
+{
+ irqchip_init();
+ socfpga_sysmgr_init();
+ if (IS_ENABLED(CONFIG_EDAC_ALTERA_L2C))
+ socfpga_init_arria10_l2_ecc();
+ if (IS_ENABLED(CONFIG_EDAC_ALTERA_OCRAM))
+ socfpga_init_arria10_ocram_ecc();
+}
+
static void socfpga_cyclone5_restart(enum reboot_mode mode, const char *cmd)
{
u32 temp;
@@ -113,7 +123,7 @@ static const char *altera_a10_dt_match[] = {
DT_MACHINE_START(SOCFPGA_A10, "Altera SOCFPGA Arria10")
.l2c_aux_val = 0,
.l2c_aux_mask = ~0,
- .init_irq = socfpga_init_irq,
+ .init_irq = socfpga_arria10_init_irq,
.restart = socfpga_arria10_restart,
.dt_compat = altera_a10_dt_match,
MACHINE_END
diff --git a/arch/arm/mach-sti/Kconfig b/arch/arm/mach-sti/Kconfig
index a196d14f65f5..6f1af29f935d 100644
--- a/arch/arm/mach-sti/Kconfig
+++ b/arch/arm/mach-sti/Kconfig
@@ -18,11 +18,10 @@ menuconfig ARCH_STI
select PL310_ERRATA_769419 if CACHE_L2X0
select RESET_CONTROLLER
help
- Include support for STiH41x SOCs like STiH415/416 using the device tree
- for discovery
- More information at Documentation/arm/STiH41x and
- at Documentation/devicetree
-
+ Include support for STMicroelectronics' STiH415/416, STiH407/10 and
+ STiH418 family SoCs using the Device Tree for discovery. More
+ information can be found in Documentation/arm/sti/ and
+ Documentation/devicetree.
if ARCH_STI
diff --git a/arch/arm/mach-sunxi/sunxi.c b/arch/arm/mach-sunxi/sunxi.c
index 3c156190a1d4..95dca8c2c9ed 100644
--- a/arch/arm/mach-sunxi/sunxi.c
+++ b/arch/arm/mach-sunxi/sunxi.c
@@ -17,11 +17,6 @@
#include <asm/mach/arch.h>
-static void __init sunxi_dt_cpufreq_init(void)
-{
- platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
-}
-
static const char * const sunxi_board_dt_compat[] = {
"allwinner,sun4i-a10",
"allwinner,sun5i-a10s",
@@ -32,7 +27,6 @@ static const char * const sunxi_board_dt_compat[] = {
DT_MACHINE_START(SUNXI_DT, "Allwinner sun4i/sun5i Families")
.dt_compat = sunxi_board_dt_compat,
- .init_late = sunxi_dt_cpufreq_init,
MACHINE_END
static const char * const sun6i_board_dt_compat[] = {
@@ -53,7 +47,6 @@ static void __init sun6i_timer_init(void)
DT_MACHINE_START(SUN6I_DT, "Allwinner sun6i (A31) Family")
.init_time = sun6i_timer_init,
.dt_compat = sun6i_board_dt_compat,
- .init_late = sunxi_dt_cpufreq_init,
MACHINE_END
static const char * const sun7i_board_dt_compat[] = {
@@ -63,7 +56,6 @@ static const char * const sun7i_board_dt_compat[] = {
DT_MACHINE_START(SUN7I_DT, "Allwinner sun7i (A20) Family")
.dt_compat = sun7i_board_dt_compat,
- .init_late = sunxi_dt_cpufreq_init,
MACHINE_END
static const char * const sun8i_board_dt_compat[] = {
@@ -77,7 +69,6 @@ static const char * const sun8i_board_dt_compat[] = {
DT_MACHINE_START(SUN8I_DT, "Allwinner sun8i Family")
.init_time = sun6i_timer_init,
.dt_compat = sun8i_board_dt_compat,
- .init_late = sunxi_dt_cpufreq_init,
MACHINE_END
static const char * const sun9i_board_dt_compat[] = {
diff --git a/arch/arm/mach-tegra/board-paz00.c b/arch/arm/mach-tegra/board-paz00.c
index 52db8bf7e153..7478f6fb3664 100644
--- a/arch/arm/mach-tegra/board-paz00.c
+++ b/arch/arm/mach-tegra/board-paz00.c
@@ -29,10 +29,6 @@ static struct property_entry __initdata wifi_rfkill_prop[] = {
{ },
};
-static struct property_set __initdata wifi_rfkill_pset = {
- .properties = wifi_rfkill_prop,
-};
-
static struct platform_device wifi_rfkill_device = {
.name = "rfkill_gpio",
.id = -1,
@@ -49,7 +45,7 @@ static struct gpiod_lookup_table wifi_gpio_lookup = {
void __init tegra_paz00_wifikill_init(void)
{
- platform_device_add_properties(&wifi_rfkill_device, &wifi_rfkill_pset);
+ platform_device_add_properties(&wifi_rfkill_device, wifi_rfkill_prop);
gpiod_add_lookup_table(&wifi_gpio_lookup);
platform_device_register(&wifi_rfkill_device);
}
diff --git a/arch/arm/mach-tegra/platsmp.c b/arch/arm/mach-tegra/platsmp.c
index f3f61dbbda97..75620ae73913 100644
--- a/arch/arm/mach-tegra/platsmp.c
+++ b/arch/arm/mach-tegra/platsmp.c
@@ -108,19 +108,9 @@ static int tegra30_boot_secondary(unsigned int cpu, struct task_struct *idle)
* be un-gated by un-toggling the power gate register
* manually.
*/
- if (!tegra_pmc_cpu_is_powered(cpu)) {
- ret = tegra_pmc_cpu_power_on(cpu);
- if (ret)
- return ret;
-
- /* Wait for the power to come up. */
- timeout = jiffies + msecs_to_jiffies(100);
- while (!tegra_pmc_cpu_is_powered(cpu)) {
- if (time_after(jiffies, timeout))
- return -ETIMEDOUT;
- udelay(10);
- }
- }
+ ret = tegra_pmc_cpu_power_on(cpu);
+ if (ret)
+ return ret;
remove_clamps:
/* CPU partition is powered. Enable the CPU clock. */
diff --git a/arch/arm/mach-uniphier/platsmp.c b/arch/arm/mach-uniphier/platsmp.c
index db04142f88bc..e802ca836ec7 100644
--- a/arch/arm/mach-uniphier/platsmp.c
+++ b/arch/arm/mach-uniphier/platsmp.c
@@ -99,16 +99,16 @@ static int __init uniphier_smp_prepare_trampoline(unsigned int max_cpus)
int ret;
np = of_find_compatible_node(NULL, NULL, "socionext,uniphier-smpctrl");
- of_node_put(np);
ret = of_address_to_resource(np, 0, &res);
+ of_node_put(np);
if (!ret) {
rom_rsv2_phys = res.start + UNIPHIER_SMPCTRL_ROM_RSV2;
} else {
/* try old binding too */
np = of_find_compatible_node(NULL, NULL,
"socionext,uniphier-system-bus-controller");
- of_node_put(np);
ret = of_address_to_resource(np, 1, &res);
+ of_node_put(np);
if (ret) {
pr_err("failed to get resource of SMP control\n");
return ret;
diff --git a/arch/arm/mach-versatile/versatile_dt.c b/arch/arm/mach-versatile/versatile_dt.c
index dff1c0595b67..d643b9210dbd 100644
--- a/arch/arm/mach-versatile/versatile_dt.c
+++ b/arch/arm/mach-versatile/versatile_dt.c
@@ -32,7 +32,6 @@
#include <linux/amba/clcd.h>
#include <linux/platform_data/video-clcd-versatile.h>
#include <linux/amba/mmci.h>
-#include <linux/mtd/physmap.h>
#include <asm/mach-types.h>
#include <asm/mach/arch.h>
#include <asm/mach/map.h>
@@ -42,27 +41,15 @@
#define __io_address(n) ((void __iomem __force *)IO_ADDRESS(n))
/*
- * Memory definitions
- */
-#define VERSATILE_FLASH_BASE 0x34000000
-#define VERSATILE_FLASH_SIZE SZ_64M
-
-/*
* ------------------------------------------------------------------------
* Versatile Registers
* ------------------------------------------------------------------------
*/
#define VERSATILE_SYS_PCICTL_OFFSET 0x44
#define VERSATILE_SYS_MCI_OFFSET 0x48
-#define VERSATILE_SYS_FLASH_OFFSET 0x4C
#define VERSATILE_SYS_CLCD_OFFSET 0x50
/*
- * VERSATILE_SYS_FLASH
- */
-#define VERSATILE_FLASHPROG_FLVPPEN (1 << 0) /* Enable writing to flash */
-
-/*
* VERSATILE peripheral addresses
*/
#define VERSATILE_MMCI0_BASE 0x10005000 /* MMC interface */
@@ -86,39 +73,6 @@
static void __iomem *versatile_sys_base;
static void __iomem *versatile_ib2_ctrl;
-static void versatile_flash_set_vpp(struct platform_device *pdev, int on)
-{
- u32 val;
-
- val = readl(versatile_sys_base + VERSATILE_SYS_FLASH_OFFSET);
- if (on)
- val |= VERSATILE_FLASHPROG_FLVPPEN;
- else
- val &= ~VERSATILE_FLASHPROG_FLVPPEN;
- writel(val, versatile_sys_base + VERSATILE_SYS_FLASH_OFFSET);
-}
-
-static struct physmap_flash_data versatile_flash_data = {
- .width = 4,
- .set_vpp = versatile_flash_set_vpp,
-};
-
-static struct resource versatile_flash_resource = {
- .start = VERSATILE_FLASH_BASE,
- .end = VERSATILE_FLASH_BASE + VERSATILE_FLASH_SIZE - 1,
- .flags = IORESOURCE_MEM,
-};
-
-struct platform_device versatile_flash_device = {
- .name = "physmap-flash",
- .id = 0,
- .dev = {
- .platform_data = &versatile_flash_data,
- },
- .num_resources = 1,
- .resource = &versatile_flash_resource,
-};
-
unsigned int mmc_status(struct device *dev)
{
struct amba_device *adev = container_of(dev, struct amba_device, dev);
@@ -390,7 +344,6 @@ static void __init versatile_dt_init(void)
versatile_dt_pci_init();
- platform_device_register(&versatile_flash_device);
of_platform_populate(NULL, of_default_bus_match_table,
versatile_auxdata_lookup, NULL);
}
diff --git a/arch/arm/mach-vexpress/Makefile b/arch/arm/mach-vexpress/Makefile
index f5c1006dd6a1..73caae71f307 100644
--- a/arch/arm/mach-vexpress/Makefile
+++ b/arch/arm/mach-vexpress/Makefile
@@ -4,7 +4,7 @@
ccflags-$(CONFIG_ARCH_MULTIPLATFORM) := \
-I$(srctree)/arch/arm/plat-versatile/include
-obj-y := v2m.o
+obj-$(CONFIG_ARCH_VEXPRESS) := v2m.o
obj-$(CONFIG_ARCH_VEXPRESS_DCSCB) += dcscb.o dcscb_setup.o
CFLAGS_dcscb.o += -march=armv7-a
CFLAGS_REMOVE_dcscb.o = -pg
@@ -15,3 +15,5 @@ CFLAGS_tc2_pm.o += -march=armv7-a
CFLAGS_REMOVE_tc2_pm.o = -pg
obj-$(CONFIG_SMP) += platsmp.o
obj-$(CONFIG_HOTPLUG_CPU) += hotplug.o
+
+obj-$(CONFIG_ARCH_MPS2) += v2m-mps2.o
diff --git a/arch/arm/mach-vexpress/Makefile.boot b/arch/arm/mach-vexpress/Makefile.boot
new file mode 100644
index 000000000000..eacfc3f5c33e
--- /dev/null
+++ b/arch/arm/mach-vexpress/Makefile.boot
@@ -0,0 +1,3 @@
+# Empty file waiting for deletion once Makefile.boot isn't needed any more.
+# Patch waits for application at
+# http://www.arm.linux.org.uk/developer/patches/viewpatch.php?id=7889/1 .
diff --git a/arch/arm/mach-vexpress/v2m-mps2.c b/arch/arm/mach-vexpress/v2m-mps2.c
new file mode 100644
index 000000000000..e7ad9c27231c
--- /dev/null
+++ b/arch/arm/mach-vexpress/v2m-mps2.c
@@ -0,0 +1,21 @@
+/*
+ * Copyright (C) 2015 ARM Limited
+ *
+ * Author: Vladimir Murzin <vladimir.murzin@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <asm/mach/arch.h>
+
+static const char *const mps2_compat[] __initconst = {
+ "arm,mps2",
+ NULL
+};
+
+DT_MACHINE_START(MPS2DT, "MPS2 (Device Tree Support)")
+ .dt_compat = mps2_compat,
+MACHINE_END
diff --git a/arch/arm/mach-zynq/common.c b/arch/arm/mach-zynq/common.c
index 860ffb663f02..da876d28ccbc 100644
--- a/arch/arm/mach-zynq/common.c
+++ b/arch/arm/mach-zynq/common.c
@@ -110,7 +110,6 @@ static void __init zynq_init_late(void)
*/
static void __init zynq_init_machine(void)
{
- struct platform_device_info devinfo = { .name = "cpufreq-dt", };
struct soc_device_attribute *soc_dev_attr;
struct soc_device *soc_dev;
struct device *parent = NULL;
@@ -145,7 +144,6 @@ out:
of_platform_populate(NULL, of_default_bus_match_table, NULL, parent);
platform_device_register(&zynq_cpuidle_device);
- platform_device_register_full(&devinfo);
}
static void __init zynq_timer_init(void)
diff --git a/arch/arm/mm/Kconfig b/arch/arm/mm/Kconfig
index 55347662e5ed..cb569b65a54d 100644
--- a/arch/arm/mm/Kconfig
+++ b/arch/arm/mm/Kconfig
@@ -421,18 +421,21 @@ config CPU_32v3
select CPU_USE_DOMAINS if MMU
select NEED_KUSER_HELPERS
select TLS_REG_EMUL if SMP || !MMU
+ select CPU_NO_EFFICIENT_FFS
config CPU_32v4
bool
select CPU_USE_DOMAINS if MMU
select NEED_KUSER_HELPERS
select TLS_REG_EMUL if SMP || !MMU
+ select CPU_NO_EFFICIENT_FFS
config CPU_32v4T
bool
select CPU_USE_DOMAINS if MMU
select NEED_KUSER_HELPERS
select TLS_REG_EMUL if SMP || !MMU
+ select CPU_NO_EFFICIENT_FFS
config CPU_32v5
bool
diff --git a/arch/arm/mm/cache-l2x0.c b/arch/arm/mm/cache-l2x0.c
index 9f9d54271aad..c61996c256cc 100644
--- a/arch/arm/mm/cache-l2x0.c
+++ b/arch/arm/mm/cache-l2x0.c
@@ -647,11 +647,6 @@ static void __init l2c310_enable(void __iomem *base, unsigned num_lock)
aux &= ~(L310_AUX_CTRL_FULL_LINE_ZERO | L310_AUX_CTRL_EARLY_BRESP);
}
- /* r3p0 or later has power control register */
- if (rev >= L310_CACHE_ID_RTL_R3P0)
- l2x0_saved_regs.pwr_ctrl = L310_DYNAMIC_CLK_GATING_EN |
- L310_STNDBY_MODE_EN;
-
/*
* Always enable non-secure access to the lockdown registers -
* we write to them as part of the L2C enable sequence so they
@@ -1141,6 +1136,7 @@ static void __init l2c310_of_parse(const struct device_node *np,
u32 filter[2] = { 0, 0 };
u32 assoc;
u32 prefetch;
+ u32 power;
u32 val;
int ret;
@@ -1271,6 +1267,26 @@ static void __init l2c310_of_parse(const struct device_node *np,
}
l2x0_saved_regs.prefetch_ctrl = prefetch;
+
+ power = l2x0_saved_regs.pwr_ctrl |
+ L310_DYNAMIC_CLK_GATING_EN | L310_STNDBY_MODE_EN;
+
+ ret = of_property_read_u32(np, "arm,dynamic-clock-gating", &val);
+ if (!ret) {
+ if (!val)
+ power &= ~L310_DYNAMIC_CLK_GATING_EN;
+ } else if (ret != -EINVAL) {
+ pr_err("L2C-310 OF dynamic-clock-gating property value is missing or invalid\n");
+ }
+ ret = of_property_read_u32(np, "arm,standby-mode", &val);
+ if (!ret) {
+ if (!val)
+ power &= ~L310_STNDBY_MODE_EN;
+ } else if (ret != -EINVAL) {
+ pr_err("L2C-310 OF standby-mode property value is missing or invalid\n");
+ }
+
+ l2x0_saved_regs.pwr_ctrl = power;
}
static const struct l2c_init_data of_l2c310_data __initconst = {
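For the new power-control parsing above: a property present with value 0 clears its enable bit, an absent property (of_property_read_u32() returning -EINVAL) silently keeps the default, and any other error is reported. A stand-alone sketch of that decision table, with the two L310 bits assumed as bit 1 (dynamic clock gating) and bit 0 (standby) purely for illustration:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define DYNAMIC_CLK_GATING_EN	(1u << 1)	/* assumed L310_DYNAMIC_CLK_GATING_EN */
#define STNDBY_MODE_EN		(1u << 0)	/* assumed L310_STNDBY_MODE_EN */

/* Present-with-0 clears the bit, absent (-EINVAL) keeps the default, other errors warn. */
static uint32_t apply_prop(uint32_t power, uint32_t bit, int ret, uint32_t val,
			   const char *name)
{
	if (!ret) {
		if (!val)
			power &= ~bit;
	} else if (ret != -EINVAL) {
		fprintf(stderr, "%s property value is missing or invalid\n", name);
	}
	return power;
}

int main(void)
{
	uint32_t power = DYNAMIC_CLK_GATING_EN | STNDBY_MODE_EN;

	/* e.g. arm,dynamic-clock-gating = <0> is present, arm,standby-mode is absent */
	power = apply_prop(power, DYNAMIC_CLK_GATING_EN, 0, 0,
			   "arm,dynamic-clock-gating");
	power = apply_prop(power, STNDBY_MODE_EN, -EINVAL, 0,
			   "arm,standby-mode");

	printf("pwr_ctrl = 0x%x\n", (unsigned int)power);	/* standby still set: 0x1 */
	return 0;
}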
diff --git a/arch/arm/mm/cache-uniphier.c b/arch/arm/mm/cache-uniphier.c
index a6fa7b73fbe0..c8e2f4947223 100644
--- a/arch/arm/mm/cache-uniphier.c
+++ b/arch/arm/mm/cache-uniphier.c
@@ -96,6 +96,7 @@ struct uniphier_cache_data {
void __iomem *ctrl_base;
void __iomem *rev_base;
void __iomem *op_base;
+ void __iomem *way_ctrl_base;
u32 way_present_mask;
u32 way_locked_mask;
u32 nsets;
@@ -256,10 +257,13 @@ static void __init __uniphier_cache_set_locked_ways(
struct uniphier_cache_data *data,
u32 way_mask)
{
+ unsigned int cpu;
+
data->way_locked_mask = way_mask & data->way_present_mask;
- writel_relaxed(~data->way_locked_mask & data->way_present_mask,
- data->ctrl_base + UNIPHIER_SSCLPDAWCR);
+ for_each_possible_cpu(cpu)
+ writel_relaxed(~data->way_locked_mask & data->way_present_mask,
+ data->way_ctrl_base + 4 * cpu);
}
static void uniphier_cache_maint_range(unsigned long start, unsigned long end,
@@ -459,6 +463,8 @@ static int __init __uniphier_cache_init(struct device_node *np,
goto err;
}
+ data->way_ctrl_base = data->ctrl_base + 0xc00;
+
if (*cache_level == 2) {
u32 revision = readl(data->rev_base + UNIPHIER_SSCID);
/*
@@ -467,6 +473,22 @@ static int __init __uniphier_cache_init(struct device_node *np,
*/
if (revision <= 0x16)
data->range_op_max_size = (u32)1 << 22;
+
+ /*
+ * Unfortunately, the offset address of active way control base
+ * varies from SoC to SoC.
+ */
+ switch (revision) {
+ case 0x11: /* sLD3 */
+ data->way_ctrl_base = data->ctrl_base + 0x870;
+ break;
+ case 0x12: /* LD4 */
+ case 0x16: /* sld8 */
+ data->way_ctrl_base = data->ctrl_base + 0x840;
+ break;
+ default:
+ break;
+ }
}
data->range_op_max_size -= data->line_size;
diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index c941e93048ad..ff7ed5697d3e 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -190,7 +190,6 @@ struct dma_map_ops arm_dma_ops = {
.sync_single_for_device = arm_dma_sync_single_for_device,
.sync_sg_for_cpu = arm_dma_sync_sg_for_cpu,
.sync_sg_for_device = arm_dma_sync_sg_for_device,
- .set_dma_mask = arm_dma_set_mask,
};
EXPORT_SYMBOL(arm_dma_ops);
@@ -209,7 +208,6 @@ struct dma_map_ops arm_coherent_dma_ops = {
.get_sgtable = arm_dma_get_sgtable,
.map_page = arm_coherent_dma_map_page,
.map_sg = arm_dma_map_sg,
- .set_dma_mask = arm_dma_set_mask,
};
EXPORT_SYMBOL(arm_coherent_dma_ops);
@@ -1143,16 +1141,6 @@ int dma_supported(struct device *dev, u64 mask)
}
EXPORT_SYMBOL(dma_supported);
-int arm_dma_set_mask(struct device *dev, u64 dma_mask)
-{
- if (!dev->dma_mask || !dma_supported(dev, dma_mask))
- return -EIO;
-
- *dev->dma_mask = dma_mask;
-
- return 0;
-}
-
#define PREALLOC_DMA_DEBUG_ENTRIES 4096
static int __init dma_debug_do_init(void)
@@ -2006,8 +1994,6 @@ struct dma_map_ops iommu_ops = {
.unmap_sg = arm_iommu_unmap_sg,
.sync_sg_for_cpu = arm_iommu_sync_sg_for_cpu,
.sync_sg_for_device = arm_iommu_sync_sg_for_device,
-
- .set_dma_mask = arm_dma_set_mask,
};
struct dma_map_ops iommu_coherent_ops = {
@@ -2021,8 +2007,6 @@ struct dma_map_ops iommu_coherent_ops = {
.map_sg = arm_coherent_iommu_map_sg,
.unmap_sg = arm_coherent_iommu_unmap_sg,
-
- .set_dma_mask = arm_dma_set_mask,
};
/**
@@ -2215,7 +2199,7 @@ static struct dma_map_ops *arm_get_iommu_dma_map_ops(bool coherent)
}
static bool arm_setup_iommu_dma_ops(struct device *dev, u64 dma_base, u64 size,
- struct iommu_ops *iommu)
+ const struct iommu_ops *iommu)
{
struct dma_iommu_mapping *mapping;
@@ -2253,7 +2237,7 @@ static void arm_teardown_iommu_dma_ops(struct device *dev)
#else
static bool arm_setup_iommu_dma_ops(struct device *dev, u64 dma_base, u64 size,
- struct iommu_ops *iommu)
+ const struct iommu_ops *iommu)
{
return false;
}
@@ -2270,7 +2254,7 @@ static struct dma_map_ops *arm_get_dma_map_ops(bool coherent)
}
void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
- struct iommu_ops *iommu, bool coherent)
+ const struct iommu_ops *iommu, bool coherent)
{
struct dma_map_ops *dma_ops;
diff --git a/arch/arm/mm/idmap.c b/arch/arm/mm/idmap.c
index bd274a05b8ff..c1a48f88764e 100644
--- a/arch/arm/mm/idmap.c
+++ b/arch/arm/mm/idmap.c
@@ -15,7 +15,7 @@
* page tables.
*/
pgd_t *idmap_pgd;
-unsigned long (*arch_virt_to_idmap)(unsigned long x);
+long long arch_phys_to_idmap_offset;
#ifdef CONFIG_ARM_LPAE
static void idmap_add_pmd(pud_t *pud, unsigned long addr, unsigned long end,
diff --git a/arch/arm/mm/ioremap.c b/arch/arm/mm/ioremap.c
index 66a978d05958..ff0eed23ddf1 100644
--- a/arch/arm/mm/ioremap.c
+++ b/arch/arm/mm/ioremap.c
@@ -297,9 +297,10 @@ static void __iomem * __arm_ioremap_pfn_caller(unsigned long pfn,
}
/*
- * Don't allow RAM to be mapped - this causes problems with ARMv6+
+ * Don't allow RAM to be mapped with mismatched attributes - this
+ * causes problems with ARMv6+
*/
- if (WARN_ON(pfn_valid(pfn)))
+ if (WARN_ON(pfn_valid(pfn) && mtype != MT_MEMORY_RW))
return NULL;
area = get_vm_area_caller(size, VM_IOREMAP, caller);
@@ -380,11 +381,15 @@ void __iomem *ioremap(resource_size_t res_cookie, size_t size)
EXPORT_SYMBOL(ioremap);
void __iomem *ioremap_cache(resource_size_t res_cookie, size_t size)
+ __alias(ioremap_cached);
+
+void __iomem *ioremap_cached(resource_size_t res_cookie, size_t size)
{
return arch_ioremap_caller(res_cookie, size, MT_DEVICE_CACHED,
__builtin_return_address(0));
}
EXPORT_SYMBOL(ioremap_cache);
+EXPORT_SYMBOL(ioremap_cached);
void __iomem *ioremap_wc(resource_size_t res_cookie, size_t size)
{
@@ -414,6 +419,13 @@ __arm_ioremap_exec(phys_addr_t phys_addr, size_t size, bool cached)
__builtin_return_address(0));
}
+void *arch_memremap_wb(phys_addr_t phys_addr, size_t size)
+{
+ return (__force void *)arch_ioremap_caller(phys_addr, size,
+ MT_MEMORY_RW,
+ __builtin_return_address(0));
+}
+
void __iounmap(volatile void __iomem *io_addr)
{
void *addr = (void *)(PAGE_MASK & (unsigned long)io_addr);
diff --git a/arch/arm/mm/nommu.c b/arch/arm/mm/nommu.c
index d5805e4bf2fc..2740967727e2 100644
--- a/arch/arm/mm/nommu.c
+++ b/arch/arm/mm/nommu.c
@@ -368,11 +368,15 @@ void __iomem *ioremap(resource_size_t res_cookie, size_t size)
EXPORT_SYMBOL(ioremap);
void __iomem *ioremap_cache(resource_size_t res_cookie, size_t size)
+ __alias(ioremap_cached);
+
+void __iomem *ioremap_cached(resource_size_t res_cookie, size_t size)
{
return __arm_ioremap_caller(res_cookie, size, MT_DEVICE_CACHED,
__builtin_return_address(0));
}
EXPORT_SYMBOL(ioremap_cache);
+EXPORT_SYMBOL(ioremap_cached);
void __iomem *ioremap_wc(resource_size_t res_cookie, size_t size)
{
@@ -381,6 +385,11 @@ void __iomem *ioremap_wc(resource_size_t res_cookie, size_t size)
}
EXPORT_SYMBOL(ioremap_wc);
+void *arch_memremap_wb(phys_addr_t phys_addr, size_t size)
+{
+ return (void *)phys_addr;
+}
+
void __iounmap(volatile void __iomem *addr)
{
}
diff --git a/arch/arm/plat-samsung/include/plat/map-s5p.h b/arch/arm/plat-samsung/include/plat/map-s5p.h
index 4ec9a7050185..b63aeebb93f3 100644
--- a/arch/arm/plat-samsung/include/plat/map-s5p.h
+++ b/arch/arm/plat-samsung/include/plat/map-s5p.h
@@ -18,7 +18,6 @@
#define S5P_VA_DMC0 S3C_ADDR(0x02440000)
#define S5P_VA_DMC1 S3C_ADDR(0x02480000)
-#define S5P_VA_SROMC S3C_ADDR(0x024C0000)
#define S5P_VA_COREPERI_BASE S3C_ADDR(0x02800000)
#define S5P_VA_COREPERI(x) (S5P_VA_COREPERI_BASE + (x))
diff --git a/arch/arm/tools/Makefile b/arch/arm/tools/Makefile
index 32d05c8219dc..6e4cd1867a9f 100644
--- a/arch/arm/tools/Makefile
+++ b/arch/arm/tools/Makefile
@@ -4,7 +4,10 @@
# Copyright (C) 2001 Russell King
#
-include/generated/mach-types.h: $(src)/gen-mach-types $(src)/mach-types
- @$(kecho) ' Generating $@'
- @mkdir -p $(dir $@)
- $(Q)$(AWK) -f $^ > $@ || { rm -f $@; /bin/false; }
+quiet_cmd_gen_mach = GEN $@
+ cmd_gen_mach = mkdir -p $(dir $@) && \
+ $(AWK) -f $(filter-out $(PHONY),$^) > $@ || \
+ { rm -f $@; /bin/false; }
+
+include/generated/mach-types.h: $(src)/gen-mach-types $(src)/mach-types FORCE
+ $(call if_changed,gen_mach)
diff --git a/arch/arm/vdso/Makefile b/arch/arm/vdso/Makefile
index 1160434eece0..59a8fa7b8a3b 100644
--- a/arch/arm/vdso/Makefile
+++ b/arch/arm/vdso/Makefile
@@ -74,5 +74,5 @@ $(MODLIB)/vdso: FORCE
@mkdir -p $(MODLIB)/vdso
PHONY += vdso_install
-vdso_install: $(obj)/vdso.so.dbg $(MODLIB)/vdso FORCE
+vdso_install: $(obj)/vdso.so.dbg $(MODLIB)/vdso
$(call cmd,vdso_install)
diff --git a/arch/arm/vfp/vfpmodule.c b/arch/arm/vfp/vfpmodule.c
index 2a61e4b04600..73085d3482ed 100644
--- a/arch/arm/vfp/vfpmodule.c
+++ b/arch/arm/vfp/vfpmodule.c
@@ -156,10 +156,6 @@ static void vfp_thread_copy(struct thread_info *thread)
* - we could be preempted if tree preempt rcu is enabled, so
* it is unsafe to use thread->cpu.
* THREAD_NOTIFY_EXIT
- * - the thread (v) will be running on the local CPU, so
- * v === current_thread_info()
- * - thread->cpu is the local CPU number at the time it is accessed,
- * but may change at any time.
* - we could be preempted if tree preempt rcu is enabled, so
* it is unsafe to use thread->cpu.
*/
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 4f436220384f..76747d92bc72 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -11,6 +11,7 @@ config ARM64
select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
select ARCH_USE_CMPXCHG_LOCKREF
select ARCH_SUPPORTS_ATOMIC_RMW
+ select ARCH_SUPPORTS_NUMA_BALANCING
select ARCH_WANT_OPTIONAL_GPIOLIB
select ARCH_WANT_COMPAT_IPC_PARSE_VERSION
select ARCH_WANT_FRAME_POINTERS
@@ -58,11 +59,14 @@ config ARM64
select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
select HAVE_ARCH_SECCOMP_FILTER
select HAVE_ARCH_TRACEHOOK
- select HAVE_BPF_JIT
+ select HAVE_ARCH_TRANSPARENT_HUGEPAGE
+ select HAVE_ARM_SMCCC
+ select HAVE_EBPF_JIT
select HAVE_C_RECORDMCOUNT
select HAVE_CC_STACKPROTECTOR
select HAVE_CMPXCHG_DOUBLE
select HAVE_CMPXCHG_LOCAL
+ select HAVE_CONTEXT_TRACKING
select HAVE_DEBUG_BUGVERBOSE
select HAVE_DEBUG_KMEMLEAK
select HAVE_DMA_API_DEBUG
@@ -76,6 +80,7 @@ config ARM64
select HAVE_HW_BREAKPOINT if PERF_EVENTS
select HAVE_IRQ_TIME_ACCOUNTING
select HAVE_MEMBLOCK
+ select HAVE_MEMBLOCK_NODE_MAP if NUMA
select HAVE_PATA_PLATFORM
select HAVE_PERF_EVENTS
select HAVE_PERF_REGS
@@ -89,15 +94,13 @@ config ARM64
select NO_BOOTMEM
select OF
select OF_EARLY_FLATTREE
+ select OF_NUMA if NUMA && OF
select OF_RESERVED_MEM
select PERF_USE_VMALLOC
select POWER_RESET
select POWER_SUPPLY
- select RTC_LIB
select SPARSE_IRQ
select SYSCTL_EXCEPTION_TRACE
- select HAVE_CONTEXT_TRACKING
- select HAVE_ARM_SMCCC
help
ARM 64-bit (AArch64) Linux support.
@@ -546,10 +549,35 @@ config HOTPLUG_CPU
Say Y here to experiment with turning CPUs off and on. CPUs
can be controlled through /sys/devices/system/cpu.
+# Common NUMA Features
+config NUMA
+ bool "Numa Memory Allocation and Scheduler Support"
+ depends on SMP
+ help
+ Enable NUMA (Non Uniform Memory Access) support.
+
+ The kernel will try to allocate memory used by a CPU on the
+ local memory of the CPU and add some more
+ NUMA awareness to the kernel.
+
+config NODES_SHIFT
+ int "Maximum NUMA Nodes (as a power of 2)"
+ range 1 10
+ default "2"
+ depends on NEED_MULTIPLE_NODES
+ help
+ Specify the maximum number of NUMA Nodes available on the target
+ system. Increases memory reserved to accommodate various tables.
+
+config USE_PERCPU_NUMA_NODE_ID
+ def_bool y
+ depends on NUMA
+
source kernel/Kconfig.preempt
source kernel/Kconfig.hz
config ARCH_SUPPORTS_DEBUG_PAGEALLOC
+ depends on !HIBERNATION
def_bool y
config ARCH_HAS_HOLES_MEMORYMODEL
@@ -578,9 +606,6 @@ config SYS_SUPPORTS_HUGETLBFS
config ARCH_WANT_HUGE_PMD_SHARE
def_bool y if ARM64_4K_PAGES || (ARM64_16K_PAGES && !ARM64_VA_BITS_36)
-config HAVE_ARCH_TRANSPARENT_HUGEPAGE
- def_bool y
-
config ARCH_HAS_CACHE_LINE_SIZE
def_bool y
@@ -953,6 +978,14 @@ menu "Power management options"
source "kernel/power/Kconfig"
+config ARCH_HIBERNATION_POSSIBLE
+ def_bool y
+ depends on CPU_PM
+
+config ARCH_HIBERNATION_HEADER
+ def_bool y
+ depends on HIBERNATION
+
config ARCH_SUSPEND_POSSIBLE
def_bool y
diff --git a/arch/arm64/Kconfig.debug b/arch/arm64/Kconfig.debug
index 7e76845a0434..710fde4ad0f0 100644
--- a/arch/arm64/Kconfig.debug
+++ b/arch/arm64/Kconfig.debug
@@ -59,7 +59,7 @@ config DEBUG_RODATA
If in doubt, say Y
config DEBUG_ALIGN_RODATA
- depends on DEBUG_RODATA && ARM64_4K_PAGES
+ depends on DEBUG_RODATA
bool "Align linker sections up to SECTION_SIZE"
help
If this option is enabled, sections that may potentially be marked as
diff --git a/arch/arm64/Kconfig.platforms b/arch/arm64/Kconfig.platforms
index efa77c146415..7ef1d05859ae 100644
--- a/arch/arm64/Kconfig.platforms
+++ b/arch/arm64/Kconfig.platforms
@@ -2,6 +2,7 @@ menu "Platform selection"
config ARCH_SUNXI
bool "Allwinner sunxi 64-bit SoC Family"
+ select GENERIC_IRQ_CHIP
help
This enables support for Allwinner sunxi based SoCs like the A64.
@@ -43,8 +44,14 @@ config ARCH_LAYERSCAPE
help
This enables support for the Freescale Layerscape SoC family.
+config ARCH_LG1K
+ bool "LG Electronics LG1K SoC Family"
+ help
+ This enables support for LG Electronics LG1K SoC Family
+
config ARCH_HISI
bool "Hisilicon SoC Family"
+ select ARM_TIMER_SP804
select HISILICON_IRQ_MBIGEN
help
This enables support for Hisilicon ARMv8 SoC family
@@ -64,8 +71,8 @@ config ARCH_MESON
config ARCH_MVEBU
bool "Marvell EBU SoC Family"
- select ARMADA_AP806_CORE_CLK
- select ARMADA_AP806_RING_CLK
+ select ARMADA_AP806_SYSCON
+ select ARMADA_CP110_SYSCON
select MVEBU_ODMI
help
This enables support for Marvell EBU family, including:
diff --git a/arch/arm64/boot/dts/Makefile b/arch/arm64/boot/dts/Makefile
index 330fae966cf3..6e199c903676 100644
--- a/arch/arm64/boot/dts/Makefile
+++ b/arch/arm64/boot/dts/Makefile
@@ -18,6 +18,7 @@ dts-dirs += rockchip
dts-dirs += socionext
dts-dirs += sprd
dts-dirs += xilinx
+dts-dirs += lg
subdir-y := $(dts-dirs)
diff --git a/arch/arm64/boot/dts/amlogic/Makefile b/arch/arm64/boot/dts/amlogic/Makefile
index eb672f38f89e..47ec703cb230 100644
--- a/arch/arm64/boot/dts/amlogic/Makefile
+++ b/arch/arm64/boot/dts/amlogic/Makefile
@@ -1,3 +1,6 @@
+dtb-$(CONFIG_ARCH_MESON) += meson-gxbb-odroidc2.dtb
+dtb-$(CONFIG_ARCH_MESON) += meson-gxbb-p200.dtb
+dtb-$(CONFIG_ARCH_MESON) += meson-gxbb-p201.dtb
dtb-$(CONFIG_ARCH_MESON) += meson-gxbb-vega-s95-pro.dtb
dtb-$(CONFIG_ARCH_MESON) += meson-gxbb-vega-s95-meta.dtb
dtb-$(CONFIG_ARCH_MESON) += meson-gxbb-vega-s95-telos.dtb
diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts
new file mode 100644
index 000000000000..7f2c6747a71e
--- /dev/null
+++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-odroidc2.dts
@@ -0,0 +1,69 @@
+/*
+ * Copyright (c) 2016 Andreas Färber
+ * Copyright (c) 2016 BayLibre, Inc.
+ * Author: Kevin Hilman <khilman@kernel.org>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+
+#include "meson-gxbb.dtsi"
+
+/ {
+ compatible = "hardkernel,odroid-c2", "amlogic,meson-gxbb";
+ model = "Hardkernel ODROID-C2";
+
+ aliases {
+ serial0 = &uart_AO;
+ };
+
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
+ memory@0 {
+ device_type = "memory";
+ reg = <0x0 0x0 0x0 0x80000000>;
+ };
+};
+
+&uart_AO {
+ status = "okay";
+};
diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-p200.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-p200.dts
new file mode 100644
index 000000000000..62979076e250
--- /dev/null
+++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-p200.dts
@@ -0,0 +1,52 @@
+/*
+ * Copyright (c) 2016 Andreas Färber
+ * Copyright (c) 2016 BayLibre, Inc.
+ * Author: Kevin Hilman <khilman@kernel.org>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+
+#include "meson-gxbb-p20x.dtsi"
+
+/ {
+ compatible = "amlogic,p200", "amlogic,meson-gxbb";
+ model = "Amlogic Meson GXBB P200 Development Board";
+};
diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-p201.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-p201.dts
new file mode 100644
index 000000000000..39bb037a3e47
--- /dev/null
+++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-p201.dts
@@ -0,0 +1,52 @@
+/*
+ * Copyright (c) 2016 Andreas Färber
+ * Copyright (c) 2016 BayLibre, Inc.
+ * Author: Kevin Hilman <khilman@kernel.org>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+
+#include "meson-gxbb-p20x.dtsi"
+
+/ {
+ compatible = "amlogic,p201", "amlogic,meson-gxbb";
+ model = "Amlogic Meson GXBB P201 Development Board";
+};
diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-p20x.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb-p20x.dtsi
new file mode 100644
index 000000000000..bf7ff1d41851
--- /dev/null
+++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-p20x.dtsi
@@ -0,0 +1,65 @@
+/*
+ * Copyright (c) 2016 Andreas Färber
+ * Copyright (c) 2016 BayLibre, Inc.
+ * Author: Kevin Hilman <khilman@kernel.org>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "meson-gxbb.dtsi"
+
+/ {
+ aliases {
+ serial0 = &uart_AO;
+ };
+
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
+ memory@0 {
+ device_type = "memory";
+ reg = <0x0 0x0 0x0 0x40000000>;
+ };
+};
+
+/* This UART is brought out to the DB9 connector */
+&uart_AO {
+ status = "okay";
+};
diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95-meta.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95-meta.dts
index 399aff9e7975..62fb4968d680 100644
--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95-meta.dts
+++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95-meta.dts
@@ -48,7 +48,7 @@
compatible = "tronsmart,vega-s95-meta", "tronsmart,vega-s95", "amlogic,meson-gxbb";
model = "Tronsmart Vega S95 Meta";
- memory {
+ memory@0 {
device_type = "memory";
reg = <0x0 0x0 0x0 0x80000000>;
};
diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95-pro.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95-pro.dts
index ac5a241b5ec2..9a9663abdf5c 100644
--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95-pro.dts
+++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95-pro.dts
@@ -48,7 +48,7 @@
compatible = "tronsmart,vega-s95-pro", "tronsmart,vega-s95", "amlogic,meson-gxbb";
model = "Tronsmart Vega S95 Pro";
- memory {
+ memory@0 {
device_type = "memory";
reg = <0x0 0x0 0x0 0x40000000>;
};
diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95-telos.dts b/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95-telos.dts
index fff7bfa2aa39..2fe167b2609d 100644
--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95-telos.dts
+++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95-telos.dts
@@ -48,7 +48,7 @@
compatible = "tronsmart,vega-s95-telos", "tronsmart,vega-s95", "amlogic,meson-gxbb";
model = "Tronsmart Vega S95 Telos";
- memory {
+ memory@0 {
device_type = "memory";
reg = <0x0 0x0 0x0 0x80000000>;
};
diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi
index c1fa2667ec5c..012cdccc8a35 100644
--- a/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi
+++ b/arch/arm64/boot/dts/amlogic/meson-gxbb-vega-s95.dtsi
@@ -45,6 +45,10 @@
/ {
compatible = "tronsmart,vega-s95", "amlogic,meson-gxbb";
+ aliases {
+ serial0 = &uart_AO;
+ };
+
chosen {
stdout-path = "serial0:115200n8";
};
diff --git a/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi b/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi
index eaa0a4553734..832815d80462 100644
--- a/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi
+++ b/arch/arm64/boot/dts/amlogic/meson-gxbb.dtsi
@@ -50,11 +50,6 @@
#address-cells = <2>;
#size-cells = <2>;
- aliases {
- serial0 = &uart_AO;
- serial1 = &uart_A;
- };
-
cpus {
#address-cells = <0x2>;
#size-cells = <0x0>;
diff --git a/arch/arm64/boot/dts/apm/apm-shadowcat.dtsi b/arch/arm64/boot/dts/apm/apm-shadowcat.dtsi
index a055a5d443b7..c569f761d090 100644
--- a/arch/arm64/boot/dts/apm/apm-shadowcat.dtsi
+++ b/arch/arm64/boot/dts/apm/apm-shadowcat.dtsi
@@ -543,7 +543,7 @@
};
sata1: sata@1a000000 {
- compatible = "apm,xgene-ahci";
+ compatible = "apm,xgene-ahci-v2";
reg = <0x0 0x1a000000 0x0 0x1000>,
<0x0 0x1f200000 0x0 0x1000>,
<0x0 0x1f20d000 0x0 0x1000>,
@@ -553,7 +553,7 @@
};
sata2: sata@1a200000 {
- compatible = "apm,xgene-ahci";
+ compatible = "apm,xgene-ahci-v2";
reg = <0x0 0x1a200000 0x0 0x1000>,
<0x0 0x1f210000 0x0 0x1000>,
<0x0 0x1f21d000 0x0 0x1000>,
@@ -563,7 +563,7 @@
};
sata3: sata@1a400000 {
- compatible = "apm,xgene-ahci";
+ compatible = "apm,xgene-ahci-v2";
reg = <0x0 0x1a400000 0x0 0x1000>,
<0x0 0x1f220000 0x0 0x1000>,
<0x0 0x1f22d000 0x0 0x1000>,
@@ -653,6 +653,7 @@
<0 113 4>,
<0 114 4>,
<0 115 4>;
+ channel = <12>;
port-id = <1>;
dma-coherent;
clocks = <&xge1clk 0>;
diff --git a/arch/arm64/boot/dts/apm/apm-storm.dtsi b/arch/arm64/boot/dts/apm/apm-storm.dtsi
index ae4a173df493..5147d7698924 100644
--- a/arch/arm64/boot/dts/apm/apm-storm.dtsi
+++ b/arch/arm64/boot/dts/apm/apm-storm.dtsi
@@ -993,6 +993,7 @@
<0x0 0x65 0x4>,
<0x0 0x66 0x4>,
<0x0 0x67 0x4>;
+ channel = <0>;
dma-coherent;
clocks = <&xge0clk 0>;
/* mac address will be overwritten by the bootloader */
diff --git a/arch/arm64/boot/dts/arm/juno-base.dtsi b/arch/arm64/boot/dts/arm/juno-base.dtsi
index 68ccc39a7a66..dee2386d3b9b 100644
--- a/arch/arm64/boot/dts/arm/juno-base.dtsi
+++ b/arch/arm64/boot/dts/arm/juno-base.dtsi
@@ -272,3 +272,13 @@
/include/ "juno-motherboard.dtsi"
};
+
+ site2: tlx@60000000 {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges = <0 0 0x60000000 0x10000000>;
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0>;
+ interrupt-map = <0 0 &gic 0 0 0 168 IRQ_TYPE_LEVEL_HIGH>;
+ };
diff --git a/arch/arm64/boot/dts/broadcom/ns2-clock.dtsi b/arch/arm64/boot/dts/broadcom/ns2-clock.dtsi
new file mode 100644
index 000000000000..99009fdf10a4
--- /dev/null
+++ b/arch/arm64/boot/dts/broadcom/ns2-clock.dtsi
@@ -0,0 +1,105 @@
+/*
+ * BSD LICENSE
+ *
+ * Copyright (c) 2016 Broadcom. All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * * Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * * Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * * Neither the name of Broadcom Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <dt-bindings/clock/bcm-ns2.h>
+
+ osc: oscillator {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <25000000>;
+ };
+
+ lcpll_ddr: lcpll_ddr@6501d058 {
+ #clock-cells = <1>;
+ compatible = "brcm,ns2-lcpll-ddr";
+ reg = <0x6501d058 0x20>,
+ <0x6501c020 0x4>,
+ <0x6501d04c 0x4>;
+ clocks = <&osc>;
+ clock-output-names = "lcpll_ddr", "pcie_sata_usb",
+ "ddr", "ddr_ch2_unused",
+ "ddr_ch3_unused", "ddr_ch4_unused",
+ "ddr_ch5_unused";
+ };
+
+ lcpll_ports: lcpll_ports@6501d078 {
+ #clock-cells = <1>;
+ compatible = "brcm,ns2-lcpll-ports";
+ reg = <0x6501d078 0x20>,
+ <0x6501c020 0x4>,
+ <0x6501d054 0x4>;
+ clocks = <&osc>;
+ clock-output-names = "lcpll_ports", "wan", "rgmii",
+ "ports_ch2_unused",
+ "ports_ch3_unused",
+ "ports_ch4_unused",
+ "ports_ch5_unused";
+ };
+
+ genpll_scr: genpll_scr@6501d098 {
+ #clock-cells = <1>;
+ compatible = "brcm,ns2-genpll-scr";
+ reg = <0x6501d098 0x32>,
+ <0x6501c020 0x4>,
+ <0x6501d044 0x4>;
+ clocks = <&osc>;
+ clock-output-names = "genpll_scr", "scr", "fs",
+ "audio_ref", "scr_ch3_unused",
+ "scr_ch4_unused", "scr_ch5_unused";
+ };
+
+ iprocmed: iprocmed {
+ #clock-cells = <0>;
+ compatible = "fixed-factor-clock";
+ clocks = <&genpll_scr BCM_NS2_GENPLL_SCR_SCR_CLK>;
+ clock-div = <2>;
+ clock-mult = <1>;
+ };
+
+ iprocslow: iprocslow {
+ #clock-cells = <0>;
+ compatible = "fixed-factor-clock";
+ clocks = <&genpll_scr BCM_NS2_GENPLL_SCR_SCR_CLK>;
+ clock-div = <4>;
+ clock-mult = <1>;
+ };
+
+ genpll_sw: genpll_sw@6501d0c4 {
+ #clock-cells = <1>;
+ compatible = "brcm,ns2-genpll-sw";
+ reg = <0x6501d0c4 0x32>,
+ <0x6501c020 0x4>,
+ <0x6501d044 0x4>;
+ clocks = <&osc>;
+ clock-output-names = "genpll_sw", "rpe", "250", "nic",
+ "chimp", "port", "sdio";
+ };
diff --git a/arch/arm64/boot/dts/broadcom/ns2-svk.dts b/arch/arm64/boot/dts/broadcom/ns2-svk.dts
index ce0ab84e0f2d..54ca40c9f711 100644
--- a/arch/arm64/boot/dts/broadcom/ns2-svk.dts
+++ b/arch/arm64/boot/dts/broadcom/ns2-svk.dts
@@ -72,6 +72,51 @@
status = "ok";
};
+&ssp0 {
+ status = "ok";
+
+ slic@0 {
+ compatible = "silabs,si3226x";
+ reg = <0>;
+ spi-max-frequency = <5000000>;
+ spi-cpha = <1>;
+ spi-cpol = <1>;
+ pl022,hierarchy = <0>;
+ pl022,interface = <0>;
+ pl022,slave-tx-disable = <0>;
+ pl022,com-mode = <0>;
+ pl022,rx-level-trig = <1>;
+ pl022,tx-level-trig = <1>;
+ pl022,ctrl-len = <11>;
+ pl022,wait-state = <0>;
+ pl022,duplex = <0>;
+ };
+};
+
+&ssp1 {
+ status = "ok";
+
+ at25@0 {
+ compatible = "atmel,at25";
+ reg = <0>;
+ spi-max-frequency = <5000000>;
+ at25,byte-len = <0x8000>;
+ at25,addr-mode = <2>;
+ at25,page-size = <64>;
+ spi-cpha = <1>;
+ spi-cpol = <1>;
+ pl022,hierarchy = <0>;
+ pl022,interface = <0>;
+ pl022,slave-tx-disable = <0>;
+ pl022,com-mode = <0>;
+ pl022,rx-level-trig = <1>;
+ pl022,tx-level-trig = <1>;
+ pl022,ctrl-len = <11>;
+ pl022,wait-state = <0>;
+ pl022,duplex = <0>;
+ };
+};
+
&sdio0 {
status = "ok";
};
diff --git a/arch/arm64/boot/dts/broadcom/ns2.dtsi b/arch/arm64/boot/dts/broadcom/ns2.dtsi
index 6f81c9d7fb06..ec68ec1a80c8 100644
--- a/arch/arm64/boot/dts/broadcom/ns2.dtsi
+++ b/arch/arm64/boot/dts/broadcom/ns2.dtsi
@@ -1,7 +1,7 @@
/*
* BSD LICENSE
*
- * Copyright(c) 2015 Broadcom Corporation. All rights reserved.
+ * Copyright (c) 2015 Broadcom. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
@@ -33,8 +33,6 @@
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/clock/bcm-ns2.h>
-/memreserve/ 0x84b00000 0x00000008;
-
/ {
compatible = "brcm,ns2";
interrupt-parent = <&gic>;
@@ -49,8 +47,7 @@
device_type = "cpu";
compatible = "arm,cortex-a57", "arm,armv8";
reg = <0 0>;
- enable-method = "spin-table";
- cpu-release-addr = <0 0x84b00000>;
+ enable-method = "psci";
next-level-cache = <&CLUSTER0_L2>;
};
@@ -58,8 +55,7 @@
device_type = "cpu";
compatible = "arm,cortex-a57", "arm,armv8";
reg = <0 1>;
- enable-method = "spin-table";
- cpu-release-addr = <0 0x84b00000>;
+ enable-method = "psci";
next-level-cache = <&CLUSTER0_L2>;
};
@@ -67,8 +63,7 @@
device_type = "cpu";
compatible = "arm,cortex-a57", "arm,armv8";
reg = <0 2>;
- enable-method = "spin-table";
- cpu-release-addr = <0 0x84b00000>;
+ enable-method = "psci";
next-level-cache = <&CLUSTER0_L2>;
};
@@ -76,8 +71,7 @@
device_type = "cpu";
compatible = "arm,cortex-a57", "arm,armv8";
reg = <0 3>;
- enable-method = "spin-table";
- cpu-release-addr = <0 0x84b00000>;
+ enable-method = "psci";
next-level-cache = <&CLUSTER0_L2>;
};
@@ -86,6 +80,11 @@
};
};
+ psci {
+ compatible = "arm,psci-1.0";
+ method = "smc";
+ };
+
timer {
compatible = "arm,armv8-timer";
interrupts = <GIC_PPI 13 (GIC_CPU_MASK_RAW(0xff) |
@@ -110,33 +109,6 @@
<&A57_3>;
};
- clocks {
- #address-cells = <1>;
- #size-cells = <1>;
-
- osc: oscillator {
- #clock-cells = <0>;
- compatible = "fixed-clock";
- clock-frequency = <25000000>;
- };
-
- iprocmed: iprocmed {
- #clock-cells = <0>;
- compatible = "fixed-factor-clock";
- clocks = <&genpll_scr BCM_NS2_GENPLL_SCR_SCR_CLK>;
- clock-div = <2>;
- clock-mult = <1>;
- };
-
- iprocslow: iprocslow {
- #clock-cells = <0>;
- compatible = "fixed-factor-clock";
- clocks = <&genpll_scr BCM_NS2_GENPLL_SCR_SCR_CLK>;
- clock-div = <4>;
- clock-mult = <1>;
- };
- };
-
pcie0: pcie@20020000 {
compatible = "brcm,iproc-pcie";
reg = <0 0x20020000 0 0x1000>;
@@ -217,6 +189,27 @@
#size-cells = <1>;
ranges = <0 0 0 0xffffffff>;
+ #include "ns2-clock.dtsi"
+
+ dma0: dma@61360000 {
+ compatible = "arm,pl330", "arm,primecell";
+ reg = <0x61360000 0x1000>;
+ interrupts = <GIC_SPI 208 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 209 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 210 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 211 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 212 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 213 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 214 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 215 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 216 IRQ_TYPE_LEVEL_HIGH>;
+ #dma-cells = <1>;
+ #dma-channels = <8>;
+ #dma-requests = <32>;
+ clocks = <&iprocslow>;
+ clock-names = "apb_pclk";
+ };
+
smmu: mmu@64000000 {
compatible = "arm,mmu-500";
reg = <0x64000000 0x40000>;
@@ -258,68 +251,6 @@
mmu-masters;
};
- lcpll_ddr: lcpll_ddr@6501d058 {
- #clock-cells = <1>;
- compatible = "brcm,ns2-lcpll-ddr";
- reg = <0x6501d058 0x20>,
- <0x6501c020 0x4>,
- <0x6501d04c 0x4>;
- clocks = <&osc>;
- clock-output-names = "lcpll_ddr", "pcie_sata_usb",
- "ddr", "ddr_ch2_unused",
- "ddr_ch3_unused", "ddr_ch4_unused",
- "ddr_ch5_unused";
- };
-
- lcpll_ports: lcpll_ports@6501d078 {
- #clock-cells = <1>;
- compatible = "brcm,ns2-lcpll-ports";
- reg = <0x6501d078 0x20>,
- <0x6501c020 0x4>,
- <0x6501d054 0x4>;
- clocks = <&osc>;
- clock-output-names = "lcpll_ports", "wan", "rgmii",
- "ports_ch2_unused",
- "ports_ch3_unused",
- "ports_ch4_unused",
- "ports_ch5_unused";
- };
-
- genpll_scr: genpll_scr@6501d098 {
- #clock-cells = <1>;
- compatible = "brcm,ns2-genpll-scr";
- reg = <0x6501d098 0x32>,
- <0x6501c020 0x4>,
- <0x6501d044 0x4>;
- clocks = <&osc>;
- clock-output-names = "genpll_scr", "scr", "fs",
- "audio_ref", "scr_ch3_unused",
- "scr_ch4_unused", "scr_ch5_unused";
- };
-
- genpll_sw: genpll_sw@6501d0c4 {
- #clock-cells = <1>;
- compatible = "brcm,ns2-genpll-sw";
- reg = <0x6501d0c4 0x32>,
- <0x6501c020 0x4>,
- <0x6501d044 0x4>;
- clocks = <&osc>;
- clock-output-names = "genpll_sw", "rpe", "250", "nic",
- "chimp", "port", "sdio";
- };
-
- crmu: crmu@65024000 {
- compatible = "syscon";
- reg = <0x65024000 0x100>;
- };
-
- reboot@65024000 {
- compatible ="syscon-reboot";
- regmap = <&crmu>;
- offset = <0x90>;
- mask = <0xfffffffd>;
- };
-
gic: interrupt-controller@65210000 {
compatible = "arm,gic-400";
#interrupt-cells = <3>;
@@ -328,6 +259,8 @@
<0x65220000 0x1000>,
<0x65240000 0x2000>,
<0x65260000 0x1000>;
+ interrupts = <GIC_PPI 9 (GIC_CPU_MASK_RAW(0xf) |
+ IRQ_TYPE_LEVEL_HIGH)>;
};
timer0: timer@66030000 {
@@ -408,6 +341,28 @@
status = "disabled";
};
+ ssp0: ssp@66180000 {
+ compatible = "arm,pl022", "arm,primecell";
+ reg = <0x66180000 0x1000>;
+ interrupts = <GIC_SPI 404 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&iprocslow>, <&iprocslow>;
+ clock-names = "spiclk", "apb_pclk";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "disabled";
+ };
+
+ ssp1: ssp@66190000 {
+ compatible = "arm,pl022", "arm,primecell";
+ reg = <0x66190000 0x1000>;
+ interrupts = <GIC_SPI 405 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&iprocslow>, <&iprocslow>;
+ clock-names = "spiclk", "apb_pclk";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "disabled";
+ };
+
hwrng: hwrng@66220000 {
compatible = "brcm,iproc-rng200";
reg = <0x66220000 0x28>;
diff --git a/arch/arm64/boot/dts/broadcom/vulcan.dtsi b/arch/arm64/boot/dts/broadcom/vulcan.dtsi
index 85820e2bca9d..34e11a9db2a0 100644
--- a/arch/arm64/boot/dts/broadcom/vulcan.dtsi
+++ b/arch/arm64/boot/dts/broadcom/vulcan.dtsi
@@ -86,7 +86,7 @@
};
pmu {
- compatible = "arm,armv8-pmuv3";
+ compatible = "brcm,vulcan-pmu", "arm,armv8-pmuv3";
interrupts = <GIC_PPI 7 IRQ_TYPE_LEVEL_HIGH>; /* PMU overflow */
};
diff --git a/arch/arm64/boot/dts/exynos/exynos7-tmu-sensor-conf.dtsi b/arch/arm64/boot/dts/exynos/exynos7-tmu-sensor-conf.dtsi
new file mode 100644
index 000000000000..1d6dcf2aadba
--- /dev/null
+++ b/arch/arm64/boot/dts/exynos/exynos7-tmu-sensor-conf.dtsi
@@ -0,0 +1,25 @@
+/*
+ * Device tree sources for Exynos7 TMU sensor configuration
+ *
+ * Copyright (c) 2016 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <dt-bindings/thermal/thermal_exynos.h>
+
+#thermal-sensor-cells = <0>;
+samsung,tmu_gain = <9>;
+samsung,tmu_reference_voltage = <17>;
+samsung,tmu_noise_cancel_mode = <4>;
+samsung,tmu_efuse_value = <75>;
+samsung,tmu_min_efuse_value = <15>;
+samsung,tmu_max_efuse_value = <100>;
+samsung,tmu_first_point_trim = <25>;
+samsung,tmu_second_point_trim = <85>;
+samsung,tmu_default_temp_offset = <50>;
+samsung,tmu_cal_type = <TYPE_ONE_POINT_TRIMMING>;
diff --git a/arch/arm64/boot/dts/exynos/exynos7-trip-points.dtsi b/arch/arm64/boot/dts/exynos/exynos7-trip-points.dtsi
new file mode 100644
index 000000000000..062358355a53
--- /dev/null
+++ b/arch/arm64/boot/dts/exynos/exynos7-trip-points.dtsi
@@ -0,0 +1,54 @@
+/*
+ * Device tree sources for default Exynos7 thermal zone definition
+ *
+ * Copyright (c) 2016 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+trips {
+ cpu-alert-0 {
+ temperature = <75000>; /* millicelsius */
+ hysteresis = <10000>; /* millicelsius */
+ type = "passive";
+ };
+ cpu-alert-1 {
+ temperature = <80000>; /* millicelsius */
+ hysteresis = <10000>; /* millicelsius */
+ type = "passive";
+ };
+ cpu-alert-2 {
+ temperature = <85000>; /* millicelsius */
+ hysteresis = <10000>; /* millicelsius */
+ type = "passive";
+ };
+ cpu-alert-3 {
+ temperature = <90000>; /* millicelsius */
+ hysteresis = <10000>; /* millicelsius */
+ type = "passive";
+ };
+ cpu-alert-4 {
+ temperature = <95000>; /* millicelsius */
+ hysteresis = <10000>; /* millicelsius */
+ type = "passive";
+ };
+ cpu-alert-5 {
+ temperature = <100000>; /* millicelsius */
+ hysteresis = <10000>; /* millicelsius */
+ type = "passive";
+ };
+ cpu-alert-6 {
+ temperature = <110000>; /* millicelsius */
+ hysteresis = <10000>; /* millicelsius */
+ type = "passive";
+ };
+ cpu-crit-0 {
+ temperature = <115000>; /* millicelsius */
+ hysteresis = <0>; /* millicelsius */
+ type = "critical";
+ };
+};
diff --git a/arch/arm64/boot/dts/exynos/exynos7.dtsi b/arch/arm64/boot/dts/exynos/exynos7.dtsi
index 93108f1a90f9..ca663dfe5189 100644
--- a/arch/arm64/boot/dts/exynos/exynos7.dtsi
+++ b/arch/arm64/boot/dts/exynos/exynos7.dtsi
@@ -27,6 +27,7 @@
pinctrl6 = &pinctrl_fsys0;
pinctrl7 = &pinctrl_fsys1;
pinctrl8 = &pinctrl_bus1;
+ tmuctrl0 = &tmuctrl_0;
};
cpus {
@@ -95,6 +96,35 @@
<0x11006000 0x2000>;
};
+ amba {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ ranges;
+
+ pdma0: pdma@10E10000 {
+ compatible = "arm,pl330", "arm,primecell";
+ reg = <0x10E10000 0x1000>;
+ interrupts = <0 225 0>;
+ clocks = <&clock_fsys0 ACLK_PDMA0>;
+ clock-names = "apb_pclk";
+ #dma-cells = <1>;
+ #dma-channels = <8>;
+ #dma-requests = <32>;
+ };
+
+ pdma1: pdma@10EB0000 {
+ compatible = "arm,pl330", "arm,primecell";
+ reg = <0x10EB0000 0x1000>;
+ interrupts = <0 226 0>;
+ clocks = <&clock_fsys0 ACLK_PDMA1>;
+ clock-names = "apb_pclk";
+ #dma-cells = <1>;
+ #dma-channels = <8>;
+ #dma-requests = <32>;
+ };
+ };
+
clock_topc: clock-controller@10570000 {
compatible = "samsung,exynos7-clock-topc";
reg = <0x10570000 0x10000>;
@@ -538,6 +568,25 @@
clocks = <&clock_peric0 PCLK_PWM>;
clock-names = "timers";
};
+
+ tmuctrl_0: tmu@10060000 {
+ compatible = "samsung,exynos7-tmu";
+ reg = <0x10060000 0x200>;
+ interrupts = <0 108 0>;
+ clocks = <&clock_peris PCLK_TMU>,
+ <&clock_peris SCLK_TMU>;
+ clock-names = "tmu_apbif", "tmu_sclk";
+ #include "exynos7-tmu-sensor-conf.dtsi"
+ };
+
+ thermal-zones {
+ atlas_thermal: cluster0-thermal {
+ polling-delay-passive = <0>; /* milliseconds */
+ polling-delay = <0>; /* milliseconds */
+ thermal-sensors = <&tmuctrl_0>;
+ #include "exynos7-trip-points.dtsi"
+ };
+ };
};
};
diff --git a/arch/arm64/boot/dts/freescale/Makefile b/arch/arm64/boot/dts/freescale/Makefile
index f3c25165dad3..1b7783db7de4 100644
--- a/arch/arm64/boot/dts/freescale/Makefile
+++ b/arch/arm64/boot/dts/freescale/Makefile
@@ -1,7 +1,8 @@
+dtb-$(CONFIG_ARCH_LAYERSCAPE) += fsl-ls1043a-qds.dtb
+dtb-$(CONFIG_ARCH_LAYERSCAPE) += fsl-ls1043a-rdb.dtb
dtb-$(CONFIG_ARCH_LAYERSCAPE) += fsl-ls2080a-qds.dtb
dtb-$(CONFIG_ARCH_LAYERSCAPE) += fsl-ls2080a-rdb.dtb
dtb-$(CONFIG_ARCH_LAYERSCAPE) += fsl-ls2080a-simu.dtb
-dtb-$(CONFIG_ARCH_LAYERSCAPE) += fsl-ls1043a-rdb.dtb
always := $(dtb-y)
subdir-y := $(dts-dirs)
diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1043a-qds.dts b/arch/arm64/boot/dts/freescale/fsl-ls1043a-qds.dts
new file mode 100644
index 000000000000..9d3e9fe1c87c
--- /dev/null
+++ b/arch/arm64/boot/dts/freescale/fsl-ls1043a-qds.dts
@@ -0,0 +1,181 @@
+/*
+ * Device Tree Include file for Freescale Layerscape-1043A family SoC.
+ *
+ * Copyright 2014-2015, Freescale Semiconductor
+ *
+ * Mingkai Hu <Mingkai.hu@freescale.com>
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPLv2 or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+/include/ "fsl-ls1043a.dtsi"
+
+/ {
+ model = "LS1043A QDS Board";
+ compatible = "fsl,ls1043a-qds", "fsl,ls1043a";
+
+ aliases {
+ gpio0 = &gpio1;
+ gpio1 = &gpio2;
+ gpio2 = &gpio3;
+ gpio3 = &gpio4;
+ serial0 = &lpuart0;
+ serial1 = &lpuart1;
+ serial2 = &lpuart2;
+ serial3 = &lpuart3;
+ serial4 = &lpuart4;
+ serial5 = &lpuart5;
+ };
+};
+
+&duart0 {
+ status = "okay";
+};
+
+&duart1 {
+ status = "okay";
+};
+
+&ifc {
+ #address-cells = <2>;
+ #size-cells = <1>;
+ /* NOR, NAND Flashes and FPGA on board */
+ ranges = <0x0 0x0 0x0 0x60000000 0x08000000
+ 0x1 0x0 0x0 0x7e800000 0x00010000
+ 0x2 0x0 0x0 0x7fb00000 0x00000100>;
+ status = "okay";
+
+ nor@0,0 {
+ compatible = "cfi-flash";
+ reg = <0x0 0x0 0x8000000>;
+ bank-width = <2>;
+ device-width = <1>;
+ };
+
+ nand@1,0 {
+ compatible = "fsl,ifc-nand";
+ reg = <0x1 0x0 0x10000>;
+ };
+
+ fpga: board-control@2,0 {
+ compatible = "fsl,ls1043aqds-fpga", "fsl,fpga-qixis";
+ reg = <0x2 0x0 0x0000100>;
+ };
+};
+
+&i2c0 {
+ status = "okay";
+
+ pca9547@77 {
+ compatible = "nxp,pca9547";
+ reg = <0x77>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ i2c@0 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0x0>;
+
+ rtc@68 {
+ compatible = "dallas,ds3232";
+ reg = <0x68>;
+ /* IRQ10_B */
+ interrupts = <0 150 0x4>;
+ };
+ };
+
+ i2c@2 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0x2>;
+
+ ina220@40 {
+ compatible = "ti,ina220";
+ reg = <0x40>;
+ shunt-resistor = <1000>;
+ };
+
+ ina220@41 {
+ compatible = "ti,ina220";
+ reg = <0x41>;
+ shunt-resistor = <1000>;
+ };
+ };
+
+ i2c@3 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0x3>;
+
+ eeprom@56 {
+ compatible = "atmel,24c512";
+ reg = <0x56>;
+ };
+
+ eeprom@57 {
+ compatible = "atmel,24c512";
+ reg = <0x57>;
+ };
+
+ temp-sensor@4c {
+ compatible = "adi,adt7461a";
+ reg = <0x4c>;
+ };
+ };
+ };
+};
+
+&lpuart0 {
+ status = "okay";
+};
+
+&qspi {
+ bus-num = <0>;
+ status = "okay";
+
+ qflash0: s25fl128s@0 {
+ compatible = "spansion,m25p80";
+ #address-cells = <1>;
+ #size-cells = <1>;
+ spi-max-frequency = <20000000>;
+ reg = <0>;
+ };
+};
diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1043a-rdb.dts b/arch/arm64/boot/dts/freescale/fsl-ls1043a-rdb.dts
index ce235577e90f..f895fc02ab06 100644
--- a/arch/arm64/boot/dts/freescale/fsl-ls1043a-rdb.dts
+++ b/arch/arm64/boot/dts/freescale/fsl-ls1043a-rdb.dts
@@ -107,6 +107,19 @@
};
};
+&dspi0 {
+ bus-num = <0>;
+ status = "okay";
+
+ flash@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "n25q128a13", "jedec,spi-nor"; /* 16MB */
+ reg = <0>;
+ spi-max-frequency = <1000000>; /* input clock */
+ };
+};
+
&duart0 {
status = "okay";
};
diff --git a/arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi
index be72bf5b58b5..de0323b48b1e 100644
--- a/arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi
+++ b/arch/arm64/boot/dts/freescale/fsl-ls1043a.dtsi
@@ -171,6 +171,20 @@
interrupts = <0 43 0x4>;
};
+ qspi: quadspi@1550000 {
+ compatible = "fsl,ls1043a-qspi", "fsl,ls1021a-qspi";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0x0 0x1550000 0x0 0x10000>,
+ <0x0 0x40000000 0x0 0x4000000>;
+ reg-names = "QuadSPI", "QuadSPI-memory";
+ interrupts = <0 99 0x4>;
+ clock-names = "qspi_en", "qspi";
+ clocks = <&clockgen 4 0>, <&clockgen 4 0>;
+ big-endian;
+ status = "disabled";
+ };
+
esdhc: esdhc@1560000 {
compatible = "fsl,ls1043a-esdhc", "fsl,esdhc";
reg = <0x0 0x1560000 0x0 0x10000>;
@@ -284,7 +298,7 @@
};
gpio1: gpio@2300000 {
- compatible = "fsl,ls1043a-gpio";
+ compatible = "fsl,ls1043a-gpio", "fsl,qoriq-gpio";
reg = <0x0 0x2300000 0x0 0x10000>;
interrupts = <0 66 0x4>;
gpio-controller;
@@ -294,7 +308,7 @@
};
gpio2: gpio@2310000 {
- compatible = "fsl,ls1043a-gpio";
+ compatible = "fsl,ls1043a-gpio", "fsl,qoriq-gpio";
reg = <0x0 0x2310000 0x0 0x10000>;
interrupts = <0 67 0x4>;
gpio-controller;
@@ -304,7 +318,7 @@
};
gpio3: gpio@2320000 {
- compatible = "fsl,ls1043a-gpio";
+ compatible = "fsl,ls1043a-gpio", "fsl,qoriq-gpio";
reg = <0x0 0x2320000 0x0 0x10000>;
interrupts = <0 68 0x4>;
gpio-controller;
@@ -314,7 +328,7 @@
};
gpio4: gpio@2330000 {
- compatible = "fsl,ls1043a-gpio";
+ compatible = "fsl,ls1043a-gpio", "fsl,qoriq-gpio";
reg = <0x0 0x2330000 0x0 0x10000>;
interrupts = <0 134 0x4>;
gpio-controller;
diff --git a/arch/arm64/boot/dts/freescale/fsl-ls2080a-qds.dts b/arch/arm64/boot/dts/freescale/fsl-ls2080a-qds.dts
index 4cb996d6e686..e8801faca996 100644
--- a/arch/arm64/boot/dts/freescale/fsl-ls2080a-qds.dts
+++ b/arch/arm64/boot/dts/freescale/fsl-ls2080a-qds.dts
@@ -178,7 +178,14 @@
&qspi {
status = "okay";
- qflash0: s25fl008k {
+ flash0: s25fl256s1@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "st,m25p80";
+ spi-max-frequency = <20000000>;
+ reg = <0>;
+ };
+ flash2: s25fl256s1@2 {
#address-cells = <1>;
#size-cells = <1>;
compatible = "st,m25p80";
diff --git a/arch/arm64/boot/dts/freescale/fsl-ls2080a.dtsi b/arch/arm64/boot/dts/freescale/fsl-ls2080a.dtsi
index 9d746c69a789..3187c822afa3 100644
--- a/arch/arm64/boot/dts/freescale/fsl-ls2080a.dtsi
+++ b/arch/arm64/boot/dts/freescale/fsl-ls2080a.dtsi
@@ -265,6 +265,104 @@
compatible = "fsl,qoriq-mc";
reg = <0x00000008 0x0c000000 0 0x40>, /* MC portal base */
<0x00000000 0x08340000 0 0x40000>; /* MC control reg */
+ msi-parent = <&its>;
+ #address-cells = <3>;
+ #size-cells = <1>;
+
+ /*
+ * Region type 0x0 - MC portals
+ * Region type 0x1 - QBMAN portals
+ */
+ ranges = <0x0 0x0 0x0 0x8 0x0c000000 0x4000000
+ 0x1 0x0 0x0 0x8 0x18000000 0x8000000>;
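+
+		/*
+		 * With #address-cells = <3> and #size-cells = <1> here, and
+		 * two-cell CPU addresses in the parent (as in the reg property
+		 * above), each ranges entry reads as
+		 * (region type, 64-bit child offset) -> (64-bit CPU address, size).
+		 * The illustrative Python sketch below (not part of this patch)
+		 * decodes the two entries:
+		 *
+		 *   # (type, off_hi, off_lo, cpu_hi, cpu_lo, size) per entry
+		 *   ranges = [
+		 *       (0x0, 0x0, 0x0, 0x8, 0x0c000000, 0x4000000),  # MC portals
+		 *       (0x1, 0x0, 0x0, 0x8, 0x18000000, 0x8000000),  # QBMAN portals
+		 *   ]
+		 *   for rtype, off_hi, off_lo, cpu_hi, cpu_lo, size in ranges:
+		 *       child = (off_hi << 32) | off_lo
+		 *       cpu = (cpu_hi << 32) | cpu_lo
+		 *       print("type %#x: child %#x -> CPU %#x, size %#x"
+		 *             % (rtype, child, cpu, size))
+		 *
+		 * i.e. MC portals at CPU 0x8_0c000000 (64 MiB) and QBMAN
+		 * portals at CPU 0x8_18000000 (128 MiB).
+		 */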
+
+ /*
+ * Define the maximum number of MACs present on the SoC.
+ */
+ dpmacs {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ dpmac1: dpmac@1 {
+ compatible = "fsl,qoriq-mc-dpmac";
+ reg = <0x1>;
+ };
+
+ dpmac2: dpmac@2 {
+ compatible = "fsl,qoriq-mc-dpmac";
+ reg = <0x2>;
+ };
+
+ dpmac3: dpmac@3 {
+ compatible = "fsl,qoriq-mc-dpmac";
+ reg = <0x3>;
+ };
+
+ dpmac4: dpmac@4 {
+ compatible = "fsl,qoriq-mc-dpmac";
+ reg = <0x4>;
+ };
+
+ dpmac5: dpmac@5 {
+ compatible = "fsl,qoriq-mc-dpmac";
+ reg = <0x5>;
+ };
+
+ dpmac6: dpmac@6 {
+ compatible = "fsl,qoriq-mc-dpmac";
+ reg = <0x6>;
+ };
+
+ dpmac7: dpmac@7 {
+ compatible = "fsl,qoriq-mc-dpmac";
+ reg = <0x7>;
+ };
+
+ dpmac8: dpmac@8 {
+ compatible = "fsl,qoriq-mc-dpmac";
+ reg = <0x8>;
+ };
+
+ dpmac9: dpmac@9 {
+ compatible = "fsl,qoriq-mc-dpmac";
+ reg = <0x9>;
+ };
+
+ dpmac10: dpmac@a {
+ compatible = "fsl,qoriq-mc-dpmac";
+ reg = <0xa>;
+ };
+
+ dpmac11: dpmac@b {
+ compatible = "fsl,qoriq-mc-dpmac";
+ reg = <0xb>;
+ };
+
+ dpmac12: dpmac@c {
+ compatible = "fsl,qoriq-mc-dpmac";
+ reg = <0xc>;
+ };
+
+ dpmac13: dpmac@d {
+ compatible = "fsl,qoriq-mc-dpmac";
+ reg = <0xd>;
+ };
+
+ dpmac14: dpmac@e {
+ compatible = "fsl,qoriq-mc-dpmac";
+ reg = <0xe>;
+ };
+
+ dpmac15: dpmac@f {
+ compatible = "fsl,qoriq-mc-dpmac";
+ reg = <0xf>;
+ };
+
+ dpmac16: dpmac@10 {
+ compatible = "fsl,qoriq-mc-dpmac";
+ reg = <0x10>;
+ };
+ };
};
smmu: iommu@5000000 {
@@ -318,7 +416,7 @@
dspi: dspi@2100000 {
status = "disabled";
- compatible = "fsl,vf610-dspi";
+ compatible = "fsl,ls2080a-dspi", "fsl,ls2085a-dspi";
#address-cells = <1>;
#size-cells = <0>;
reg = <0x0 0x2100000 0x0 0x10000>;
@@ -342,7 +440,7 @@
};
gpio0: gpio@2300000 {
- compatible = "fsl,qoriq-gpio";
+ compatible = "fsl,ls2080a-gpio", "fsl,qoriq-gpio";
reg = <0x0 0x2300000 0x0 0x10000>;
interrupts = <0 36 0x4>; /* Level high type */
gpio-controller;
@@ -353,7 +451,7 @@
};
gpio1: gpio@2310000 {
- compatible = "fsl,qoriq-gpio";
+ compatible = "fsl,ls2080a-gpio", "fsl,qoriq-gpio";
reg = <0x0 0x2310000 0x0 0x10000>;
interrupts = <0 36 0x4>; /* Level high type */
gpio-controller;
@@ -364,7 +462,7 @@
};
gpio2: gpio@2320000 {
- compatible = "fsl,qoriq-gpio";
+ compatible = "fsl,ls2080a-gpio", "fsl,qoriq-gpio";
reg = <0x0 0x2320000 0x0 0x10000>;
interrupts = <0 37 0x4>; /* Level high type */
gpio-controller;
@@ -375,7 +473,7 @@
};
gpio3: gpio@2330000 {
- compatible = "fsl,qoriq-gpio";
+ compatible = "fsl,ls2080a-gpio", "fsl,qoriq-gpio";
reg = <0x0 0x2330000 0x0 0x10000>;
interrupts = <0 37 0x4>; /* Level high type */
gpio-controller;
@@ -444,7 +542,7 @@
qspi: quadspi@20c0000 {
status = "disabled";
- compatible = "fsl,vf610-qspi";
+ compatible = "fsl,ls2080a-qspi", "fsl,ls1021a-qspi";
#address-cells = <1>;
#size-cells = <0>;
reg = <0x0 0x20c0000 0x0 0x10000>,
diff --git a/arch/arm64/boot/dts/hisilicon/Makefile b/arch/arm64/boot/dts/hisilicon/Makefile
index cd158b80e29b..d5f43a06b1c1 100644
--- a/arch/arm64/boot/dts/hisilicon/Makefile
+++ b/arch/arm64/boot/dts/hisilicon/Makefile
@@ -1,4 +1,6 @@
-dtb-$(CONFIG_ARCH_HISI) += hi6220-hikey.dtb hip05-d02.dtb
+dtb-$(CONFIG_ARCH_HISI) += hi6220-hikey.dtb
+dtb-$(CONFIG_ARCH_HISI) += hip05-d02.dtb
+dtb-$(CONFIG_ARCH_HISI) += hip06-d03.dtb
always := $(dtb-y)
subdir-y := $(dts-dirs)
diff --git a/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts b/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
index 818525197508..e92a30c87a82 100644
--- a/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
+++ b/arch/arm64/boot/dts/hisilicon/hi6220-hikey.dts
@@ -6,11 +6,9 @@
*/
/dts-v1/;
-
-/*Reserved 1MB memory for MCU*/
-/memreserve/ 0x05e00000 0x00100000;
-
#include "hi6220.dtsi"
+#include "hikey-pinctrl.dtsi"
+#include <dt-bindings/gpio/gpio.h>
/ {
model = "HiKey Development Board";
@@ -27,9 +25,201 @@
stdout-path = "serial3:115200n8";
};
+ /*
+	 * Reserve the regions below from the memory node:
+	 *
+	 * 0x05e0,0000 - 0x05ef,ffff: MCU firmware runtime region
+ * 0x06df,f000 - 0x06df,ffff: Mailbox message data
+ * 0x0740,f000 - 0x0740,ffff: MCU firmware section
+ * 0x3e00,0000 - 0x3fff,ffff: OP-TEE
+ */
memory@0 {
device_type = "memory";
- reg = <0x0 0x0 0x0 0x40000000>;
+ reg = <0x00000000 0x00000000 0x00000000 0x05e00000>,
+ <0x00000000 0x05f00000 0x00000000 0x00eff000>,
+ <0x00000000 0x06e00000 0x00000000 0x0060f000>,
+ <0x00000000 0x07410000 0x00000000 0x36bf0000>;
+ };
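+
+	/*
+	 * The four reg entries above are the HiKey's 1 GiB of RAM with the
+	 * reserved windows from the comment punched out. As a quick sanity
+	 * check (an illustrative, standalone Python sketch, not part of this
+	 * patch), the usable regions plus the reserved holes add up to
+	 * exactly 0x40000000 bytes:
+	 *
+	 *   # Usable RAM regions from the memory@0 node: (base, size)
+	 *   usable = [
+	 *       (0x00000000, 0x05e00000),
+	 *       (0x05f00000, 0x00eff000),
+	 *       (0x06e00000, 0x0060f000),
+	 *       (0x07410000, 0x36bf0000),
+	 *   ]
+	 *   # Reserved windows from the comment: (start, end inclusive)
+	 *   reserved = [
+	 *       (0x05e00000, 0x05efffff),  # MCU firmware runtime
+	 *       (0x06dff000, 0x06dfffff),  # mailbox message data
+	 *       (0x0740f000, 0x0740ffff),  # MCU firmware section
+	 *       (0x3e000000, 0x3fffffff),  # OP-TEE
+	 *   ]
+	 *   covered = sum(size for _, size in usable)
+	 *   holes = sum(end - start + 1 for start, end in reserved)
+	 *   assert covered + holes == 0x40000000   # exactly 1 GiB
+	 */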
+
+ soc {
+ spi0: spi@f7106000 {
+ status = "ok";
+ };
+
+ i2c0: i2c@f7100000 {
+ status = "ok";
+ };
+
+ i2c1: i2c@f7101000 {
+ status = "ok";
+ };
+
+ uart1: uart@f7111000 {
+ status = "ok";
+ };
+
+ uart2: uart@f7112000 {
+ status = "ok";
+ };
+
+ uart3: uart@f7113000 {
+ status = "ok";
+ };
+
+ dwmmc_2: dwmmc2@f723f000 {
+ ti,non-removable;
+ non-removable;
+ /* WL_EN */
+ vmmc-supply = <&wlan_en_reg>;
+
+ #address-cells = <0x1>;
+ #size-cells = <0x0>;
+ wlcore: wlcore@2 {
+ compatible = "ti,wl1835";
+ reg = <2>; /* sdio func num */
+ /* WL_IRQ, WL_HOST_WAKE_GPIO1_3 */
+ interrupt-parent = <&gpio1>;
+ interrupts = <3 IRQ_TYPE_EDGE_RISING>;
+ };
+ };
+
+ wlan_en_reg: regulator@1 {
+ compatible = "regulator-fixed";
+ regulator-name = "wlan-en-regulator";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ /* WLAN_EN GPIO */
+ gpio = <&gpio0 5 0>;
+ /* WLAN card specific delay */
+ startup-delay-us = <70000>;
+ enable-active-high;
+ };
+ };
+
+ leds {
+ compatible = "gpio-leds";
+ user_led4 {
+ label = "user_led4";
+ gpios = <&gpio4 0 0>; /* <&gpio_user_led_1>; */
+ linux,default-trigger = "heartbeat";
+ };
+
+ user_led3 {
+ label = "user_led3";
+ gpios = <&gpio4 1 0>; /* <&gpio_user_led_2>; */
+ linux,default-trigger = "mmc0";
+ };
+
+ user_led2 {
+ label = "user_led2";
+ gpios = <&gpio4 2 0>; /* <&gpio_user_led_3>; */
+ linux,default-trigger = "mmc1";
+ };
+
+ user_led1 {
+ label = "user_led1";
+ gpios = <&gpio4 3 0>; /* <&gpio_user_led_4>; */
+ linux,default-trigger = "cpu0";
+ };
+
+ wlan_active_led {
+ label = "wifi_active";
+ gpios = <&gpio3 5 0>; /* <&gpio_wlan_active_led>; */
+ linux,default-trigger = "phy0tx";
+ default-state = "off";
+ };
+
+ bt_active_led {
+ label = "bt_active";
+ gpios = <&gpio4 7 0>; /* <&gpio_bt_active_led>; */
+ linux,default-trigger = "hci0rx";
+ default-state = "off";
+ };
+ };
+
+ pmic: pmic@f8000000 {
+ compatible = "hisilicon,hi655x-pmic";
+ reg = <0x0 0xf8000000 0x0 0x1000>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ pmic-gpios = <&gpio1 2 GPIO_ACTIVE_HIGH>;
+
+ regulators {
+ ldo2: LDO2 {
+ regulator-name = "LDO2_2V8";
+ regulator-min-microvolt = <2500000>;
+ regulator-max-microvolt = <3200000>;
+ regulator-enable-ramp-delay = <120>;
+ };
+
+ ldo7: LDO7 {
+ regulator-name = "LDO7_SDIO";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-enable-ramp-delay = <120>;
+ };
+
+ ldo10: LDO10 {
+ regulator-name = "LDO10_2V85";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3000000>;
+ regulator-enable-ramp-delay = <360>;
+ };
+
+ ldo13: LDO13 {
+ regulator-name = "LDO13_1V8";
+ regulator-min-microvolt = <1600000>;
+ regulator-max-microvolt = <1950000>;
+ regulator-enable-ramp-delay = <120>;
+ };
+
+ ldo14: LDO14 {
+ regulator-name = "LDO14_2V8";
+ regulator-min-microvolt = <2500000>;
+ regulator-max-microvolt = <3200000>;
+ regulator-enable-ramp-delay = <120>;
+ };
+
+ ldo15: LDO15 {
+ regulator-name = "LDO15_1V8";
+ regulator-min-microvolt = <1600000>;
+ regulator-max-microvolt = <1950000>;
+ regulator-boot-on;
+ regulator-always-on;
+ regulator-enable-ramp-delay = <120>;
+ };
+
+ ldo17: LDO17 {
+ regulator-name = "LDO17_2V5";
+ regulator-min-microvolt = <2500000>;
+ regulator-max-microvolt = <3200000>;
+ regulator-enable-ramp-delay = <120>;
+ };
+
+ ldo19: LDO19 {
+ regulator-name = "LDO19_3V0";
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3000000>;
+ regulator-enable-ramp-delay = <360>;
+ };
+
+ ldo21: LDO21 {
+ regulator-name = "LDO21_1V8";
+ regulator-min-microvolt = <1650000>;
+ regulator-max-microvolt = <2000000>;
+ regulator-always-on;
+ regulator-enable-ramp-delay = <120>;
+ };
+
+ ldo22: LDO22 {
+ regulator-name = "LDO22_1V2";
+ regulator-min-microvolt = <900000>;
+ regulator-max-microvolt = <1200000>;
+ regulator-boot-on;
+ regulator-always-on;
+ regulator-enable-ramp-delay = <120>;
+ };
+ };
};
};
diff --git a/arch/arm64/boot/dts/hisilicon/hi6220.dtsi b/arch/arm64/boot/dts/hisilicon/hi6220.dtsi
index ad1f1ebcb05c..189d21541f9c 100644
--- a/arch/arm64/boot/dts/hisilicon/hi6220.dtsi
+++ b/arch/arm64/boot/dts/hisilicon/hi6220.dtsi
@@ -6,6 +6,8 @@
#include <dt-bindings/interrupt-controller/arm-gic.h>
#include <dt-bindings/clock/hi6220-clock.h>
+#include <dt-bindings/pinctrl/hisi.h>
+#include <dt-bindings/thermal/thermal.h>
/ {
compatible = "hisilicon,hi6220";
@@ -53,11 +55,42 @@
};
};
+ idle-states {
+ entry-method = "psci";
+
+ CPU_SLEEP: cpu-sleep {
+ compatible = "arm,idle-state";
+ local-timer-stop;
+ arm,psci-suspend-param = <0x0010000>;
+ entry-latency-us = <700>;
+ exit-latency-us = <250>;
+ min-residency-us = <1000>;
+ };
+
+ CLUSTER_SLEEP: cluster-sleep {
+ compatible = "arm,idle-state";
+ local-timer-stop;
+ arm,psci-suspend-param = <0x1010000>;
+ entry-latency-us = <1000>;
+ exit-latency-us = <700>;
+ min-residency-us = <2700>;
+ wakeup-latency-us = <1500>;
+ };
+ };
+
cpu0: cpu@0 {
compatible = "arm,cortex-a53", "arm,armv8";
device_type = "cpu";
reg = <0x0 0x0>;
enable-method = "psci";
+ next-level-cache = <&CLUSTER0_L2>;
+ clocks = <&stub_clock 0>;
+ operating-points-v2 = <&cpu_opp_table>;
+ cooling-min-level = <4>;
+ cooling-max-level = <0>;
+ #cooling-cells = <2>; /* min followed by max */
+ cpu-idle-states = <&CPU_SLEEP &CLUSTER_SLEEP>;
+ dynamic-power-coefficient = <311>;
};
cpu1: cpu@1 {
@@ -65,6 +98,9 @@
device_type = "cpu";
reg = <0x0 0x1>;
enable-method = "psci";
+ next-level-cache = <&CLUSTER0_L2>;
+ operating-points-v2 = <&cpu_opp_table>;
+ cpu-idle-states = <&CPU_SLEEP &CLUSTER_SLEEP>;
};
cpu2: cpu@2 {
@@ -72,6 +108,9 @@
device_type = "cpu";
reg = <0x0 0x2>;
enable-method = "psci";
+ next-level-cache = <&CLUSTER0_L2>;
+ operating-points-v2 = <&cpu_opp_table>;
+ cpu-idle-states = <&CPU_SLEEP &CLUSTER_SLEEP>;
};
cpu3: cpu@3 {
@@ -79,6 +118,9 @@
device_type = "cpu";
reg = <0x0 0x3>;
enable-method = "psci";
+ next-level-cache = <&CLUSTER0_L2>;
+ operating-points-v2 = <&cpu_opp_table>;
+ cpu-idle-states = <&CPU_SLEEP &CLUSTER_SLEEP>;
};
cpu4: cpu@100 {
@@ -86,6 +128,9 @@
device_type = "cpu";
reg = <0x0 0x100>;
enable-method = "psci";
+ next-level-cache = <&CLUSTER1_L2>;
+ operating-points-v2 = <&cpu_opp_table>;
+ cpu-idle-states = <&CPU_SLEEP &CLUSTER_SLEEP>;
};
cpu5: cpu@101 {
@@ -93,6 +138,9 @@
device_type = "cpu";
reg = <0x0 0x101>;
enable-method = "psci";
+ next-level-cache = <&CLUSTER1_L2>;
+ operating-points-v2 = <&cpu_opp_table>;
+ cpu-idle-states = <&CPU_SLEEP &CLUSTER_SLEEP>;
};
cpu6: cpu@102 {
@@ -100,6 +148,9 @@
device_type = "cpu";
reg = <0x0 0x102>;
enable-method = "psci";
+ next-level-cache = <&CLUSTER1_L2>;
+ operating-points-v2 = <&cpu_opp_table>;
+ cpu-idle-states = <&CPU_SLEEP &CLUSTER_SLEEP>;
};
cpu7: cpu@103 {
@@ -107,6 +158,48 @@
device_type = "cpu";
reg = <0x0 0x103>;
enable-method = "psci";
+ next-level-cache = <&CLUSTER1_L2>;
+ operating-points-v2 = <&cpu_opp_table>;
+ cpu-idle-states = <&CPU_SLEEP &CLUSTER_SLEEP>;
+ };
+
+ CLUSTER0_L2: l2-cache0 {
+ compatible = "cache";
+ };
+
+ CLUSTER1_L2: l2-cache1 {
+ compatible = "cache";
+ };
+ };
+
+ cpu_opp_table: cpu_opp_table {
+ compatible = "operating-points-v2";
+ opp-shared;
+
+ opp00 {
+ opp-hz = /bits/ 64 <208000000>;
+ opp-microvolt = <1040000>;
+ clock-latency-ns = <500000>;
+ };
+ opp01 {
+ opp-hz = /bits/ 64 <432000000>;
+ opp-microvolt = <1040000>;
+ clock-latency-ns = <500000>;
+ };
+ opp02 {
+ opp-hz = /bits/ 64 <729000000>;
+ opp-microvolt = <1090000>;
+ clock-latency-ns = <500000>;
+ };
+ opp03 {
+ opp-hz = /bits/ 64 <960000000>;
+ opp-microvolt = <1180000>;
+ clock-latency-ns = <500000>;
+ };
+ opp04 {
+ opp-hz = /bits/ 64 <1200000000>;
+ opp-microvolt = <1330000>;
+ clock-latency-ns = <500000>;
};
};
@@ -137,6 +230,11 @@
#size-cells = <2>;
ranges;
+ sram: sram@fff80000 {
+ compatible = "hisilicon,hi6220-sramctrl", "syscon";
+ reg = <0x0 0xfff80000 0x0 0x12000>;
+ };
+
ao_ctrl: ao_ctrl@f7800000 {
compatible = "hisilicon,hi6220-aoctrl", "syscon";
reg = <0x0 0xf7800000 0x0 0x2000>;
@@ -162,6 +260,14 @@
#clock-cells = <1>;
};
+ stub_clock: stub_clock {
+ compatible = "hisilicon,hi6220-stub-clk";
+ hisilicon,hi6220-clk-sram = <&sram>;
+ #clock-cells = <1>;
+ mbox-names = "mbox-tx";
+ mboxes = <&mailbox 1 0 11>;
+ };
+
uart0: uart@f8015000 { /* console */
compatible = "arm,pl011", "arm,primecell";
reg = <0x0 0xf8015000 0x0 0x1000>;
@@ -178,6 +284,8 @@
clocks = <&sys_ctrl HI6220_UART1_PCLK>,
<&sys_ctrl HI6220_UART1_PCLK>;
clock-names = "uartclk", "apb_pclk";
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart1_pmx_func &uart1_cfg_func1 &uart1_cfg_func2>;
status = "disabled";
};
@@ -188,6 +296,8 @@
clocks = <&sys_ctrl HI6220_UART2_PCLK>,
<&sys_ctrl HI6220_UART2_PCLK>;
clock-names = "uartclk", "apb_pclk";
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart2_pmx_func &uart2_cfg_func>;
status = "disabled";
};
@@ -198,6 +308,9 @@
clocks = <&sys_ctrl HI6220_UART3_PCLK>,
<&sys_ctrl HI6220_UART3_PCLK>;
clock-names = "uartclk", "apb_pclk";
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart3_pmx_func &uart3_cfg_func>;
+ status = "disabled";
};
uart4: uart@f7114000 {
@@ -207,7 +320,517 @@
clocks = <&sys_ctrl HI6220_UART4_PCLK>,
<&sys_ctrl HI6220_UART4_PCLK>;
clock-names = "uartclk", "apb_pclk";
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart4_pmx_func &uart4_cfg_func>;
status = "disabled";
};
+
+ dual_timer0: timer@f8008000 {
+ compatible = "arm,sp804", "arm,primecell";
+ reg = <0x0 0xf8008000 0x0 0x1000>;
+ interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&ao_ctrl HI6220_TIMER0_PCLK>,
+ <&ao_ctrl HI6220_TIMER0_PCLK>,
+ <&ao_ctrl HI6220_TIMER0_PCLK>;
+ clock-names = "timer1", "timer2", "apb_pclk";
+ };
+
+ pmx0: pinmux@f7010000 {
+ compatible = "pinctrl-single";
+ reg = <0x0 0xf7010000 0x0 0x27c>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ #gpio-range-cells = <3>;
+ pinctrl-single,register-width = <32>;
+ pinctrl-single,function-mask = <7>;
+ pinctrl-single,gpio-range = <
+ &range 80 8 MUX_M0 /* gpio 3: [0..7] */
+ &range 88 8 MUX_M0 /* gpio 4: [0..7] */
+ &range 96 8 MUX_M0 /* gpio 5: [0..7] */
+ &range 104 8 MUX_M0 /* gpio 6: [0..7] */
+ &range 112 8 MUX_M0 /* gpio 7: [0..7] */
+ &range 120 2 MUX_M0 /* gpio 8: [0..1] */
+ &range 2 6 MUX_M1 /* gpio 8: [2..7] */
+ &range 8 8 MUX_M1 /* gpio 9: [0..7] */
+ &range 0 1 MUX_M1 /* gpio 10: [0] */
+ &range 16 7 MUX_M1 /* gpio 10: [1..7] */
+ &range 23 3 MUX_M1 /* gpio 11: [0..2] */
+ &range 28 5 MUX_M1 /* gpio 11: [3..7] */
+ &range 33 3 MUX_M1 /* gpio 12: [0..2] */
+ &range 43 5 MUX_M1 /* gpio 12: [3..7] */
+ &range 48 8 MUX_M1 /* gpio 13: [0..7] */
+ &range 56 8 MUX_M1 /* gpio 14: [0..7] */
+ &range 74 6 MUX_M1 /* gpio 15: [0..5] */
+ &range 122 1 MUX_M1 /* gpio 15: [6] */
+ &range 126 1 MUX_M1 /* gpio 15: [7] */
+ &range 127 8 MUX_M1 /* gpio 16: [0..7] */
+ &range 135 8 MUX_M1 /* gpio 17: [0..7] */
+ &range 143 8 MUX_M1 /* gpio 18: [0..7] */
+ &range 151 8 MUX_M1 /* gpio 19: [0..7] */
+ >;
+ range: gpio-range {
+ #pinctrl-single,gpio-range-cells = <3>;
+ };
+ };
+
+ pmx1: pinmux@f7010800 {
+ compatible = "pinconf-single";
+ reg = <0x0 0xf7010800 0x0 0x28c>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ pinctrl-single,register-width = <32>;
+ };
+
+ pmx2: pinmux@f8001800 {
+ compatible = "pinconf-single";
+ reg = <0x0 0xf8001800 0x0 0x78>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ pinctrl-single,register-width = <32>;
+ };
+
+ gpio0: gpio@f8011000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x0 0xf8011000 0x0 0x1000>;
+ interrupts = <0 52 0x4>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&ao_ctrl 2>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio1: gpio@f8012000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x0 0xf8012000 0x0 0x1000>;
+ interrupts = <0 53 0x4>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&ao_ctrl 2>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio2: gpio@f8013000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x0 0xf8013000 0x0 0x1000>;
+ interrupts = <0 54 0x4>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&ao_ctrl 2>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio3: gpio@f8014000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x0 0xf8014000 0x0 0x1000>;
+ interrupts = <0 55 0x4>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ gpio-ranges = <&pmx0 0 80 8>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&ao_ctrl 2>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio4: gpio@f7020000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x0 0xf7020000 0x0 0x1000>;
+ interrupts = <0 56 0x4>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ gpio-ranges = <&pmx0 0 88 8>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&ao_ctrl 2>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio5: gpio@f7021000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x0 0xf7021000 0x0 0x1000>;
+ interrupts = <0 57 0x4>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ gpio-ranges = <&pmx0 0 96 8>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&ao_ctrl 2>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio6: gpio@f7022000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x0 0xf7022000 0x0 0x1000>;
+ interrupts = <0 58 0x4>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ gpio-ranges = <&pmx0 0 104 8>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&ao_ctrl 2>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio7: gpio@f7023000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x0 0xf7023000 0x0 0x1000>;
+ interrupts = <0 59 0x4>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ gpio-ranges = <&pmx0 0 112 8>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&ao_ctrl 2>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio8: gpio@f7024000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x0 0xf7024000 0x0 0x1000>;
+ interrupts = <0 60 0x4>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ gpio-ranges = <&pmx0 0 120 2 &pmx0 2 2 6>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&ao_ctrl 2>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio9: gpio@f7025000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x0 0xf7025000 0x0 0x1000>;
+ interrupts = <0 61 0x4>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ gpio-ranges = <&pmx0 0 8 8>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&ao_ctrl 2>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio10: gpio@f7026000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x0 0xf7026000 0x0 0x1000>;
+ interrupts = <0 62 0x4>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ gpio-ranges = <&pmx0 0 0 1 &pmx0 1 16 7>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&ao_ctrl 2>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio11: gpio@f7027000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x0 0xf7027000 0x0 0x1000>;
+ interrupts = <0 63 0x4>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ gpio-ranges = <&pmx0 0 23 3 &pmx0 3 28 5>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&ao_ctrl 2>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio12: gpio@f7028000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x0 0xf7028000 0x0 0x1000>;
+ interrupts = <0 64 0x4>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ gpio-ranges = <&pmx0 0 33 3 &pmx0 3 43 5>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&ao_ctrl 2>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio13: gpio@f7029000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x0 0xf7029000 0x0 0x1000>;
+ interrupts = <0 65 0x4>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ gpio-ranges = <&pmx0 0 48 8>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&ao_ctrl 2>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio14: gpio@f702a000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x0 0xf702a000 0x0 0x1000>;
+ interrupts = <0 66 0x4>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ gpio-ranges = <&pmx0 0 56 8>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&ao_ctrl 2>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio15: gpio@f702b000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x0 0xf702b000 0x0 0x1000>;
+ interrupts = <0 67 0x4>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ gpio-ranges = <
+ &pmx0 0 74 6
+ &pmx0 6 122 1
+ &pmx0 7 126 1
+ >;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&ao_ctrl 2>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio16: gpio@f702c000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x0 0xf702c000 0x0 0x1000>;
+ interrupts = <0 68 0x4>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ gpio-ranges = <&pmx0 0 127 8>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&ao_ctrl 2>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio17: gpio@f702d000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x0 0xf702d000 0x0 0x1000>;
+ interrupts = <0 69 0x4>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ gpio-ranges = <&pmx0 0 135 8>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&ao_ctrl 2>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio18: gpio@f702e000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x0 0xf702e000 0x0 0x1000>;
+ interrupts = <0 70 0x4>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ gpio-ranges = <&pmx0 0 143 8>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&ao_ctrl 2>;
+ clock-names = "apb_pclk";
+ };
+
+ gpio19: gpio@f702f000 {
+ compatible = "arm,pl061", "arm,primecell";
+ reg = <0x0 0xf702f000 0x0 0x1000>;
+ interrupts = <0 71 0x4>;
+ gpio-controller;
+ #gpio-cells = <2>;
+ gpio-ranges = <&pmx0 0 151 8>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ clocks = <&ao_ctrl 2>;
+ clock-names = "apb_pclk";
+ };
+
+ spi0: spi@f7106000 {
+ compatible = "arm,pl022", "arm,primecell";
+ reg = <0x0 0xf7106000 0x0 0x1000>;
+ interrupts = <0 50 4>;
+ bus-id = <0>;
+ enable-dma = <0>;
+ clocks = <&sys_ctrl HI6220_SPI_CLK>;
+ clock-names = "apb_pclk";
+ pinctrl-names = "default";
+ pinctrl-0 = <&spi0_pmx_func &spi0_cfg_func>;
+ num-cs = <1>;
+ cs-gpios = <&gpio6 2 0>;
+ status = "disabled";
+ };
+
+ i2c0: i2c@f7100000 {
+ compatible = "snps,designware-i2c";
+ reg = <0x0 0xf7100000 0x0 0x1000>;
+ interrupts = <0 44 4>;
+ clocks = <&sys_ctrl HI6220_I2C0_CLK>;
+ i2c-sda-hold-time-ns = <300>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c0_pmx_func &i2c0_cfg_func>;
+ status = "disabled";
+ };
+
+ i2c1: i2c@f7101000 {
+ compatible = "snps,designware-i2c";
+ reg = <0x0 0xf7101000 0x0 0x1000>;
+ clocks = <&sys_ctrl HI6220_I2C1_CLK>;
+ interrupts = <0 45 4>;
+ i2c-sda-hold-time-ns = <300>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c1_pmx_func &i2c1_cfg_func>;
+ status = "disabled";
+ };
+
+ i2c2: i2c@f7102000 {
+ compatible = "snps,designware-i2c";
+ reg = <0x0 0xf7102000 0x0 0x1000>;
+ clocks = <&sys_ctrl HI6220_I2C2_CLK>;
+ interrupts = <0 46 4>;
+ i2c-sda-hold-time-ns = <300>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2c2_pmx_func &i2c2_cfg_func>;
+ status = "disabled";
+ };
+
+ fixed_5v_hub: regulator@0 {
+ compatible = "regulator-fixed";
+ regulator-name = "fixed_5v_hub";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ regulator-boot-on;
+ gpio = <&gpio0 7 0>;
+ regulator-always-on;
+ };
+
+ usb_phy: usbphy {
+ compatible = "hisilicon,hi6220-usb-phy";
+ #phy-cells = <0>;
+ phy-supply = <&fixed_5v_hub>;
+ hisilicon,peripheral-syscon = <&sys_ctrl>;
+ };
+
+ usb: usb@f72c0000 {
+ compatible = "hisilicon,hi6220-usb";
+ reg = <0x0 0xf72c0000 0x0 0x40000>;
+ phys = <&usb_phy>;
+ phy-names = "usb2-phy";
+ clocks = <&sys_ctrl HI6220_USBOTG_HCLK>;
+ clock-names = "otg";
+ dr_mode = "otg";
+ g-use-dma;
+ g-rx-fifo-size = <512>;
+ g-np-tx-fifo-size = <128>;
+ g-tx-fifo-size = <128 128 128 128 128 128>;
+ interrupts = <0 77 0x4>;
+ };
+
+ mailbox: mailbox@f7510000 {
+ compatible = "hisilicon,hi6220-mbox";
+ reg = <0x0 0xf7510000 0x0 0x1000>, /* IPC_S */
+ <0x0 0x06dff800 0x0 0x0800>; /* Mailbox buffer */
+ interrupts = <GIC_SPI 94 IRQ_TYPE_LEVEL_HIGH>;
+ #mbox-cells = <3>;
+ };
+
+ dwmmc_0: dwmmc0@f723d000 {
+ compatible = "hisilicon,hi6220-dw-mshc";
+ num-slots = <0x1>;
+ cap-mmc-highspeed;
+ non-removable;
+ reg = <0x0 0xf723d000 0x0 0x1000>;
+ interrupts = <0x0 0x48 0x4>;
+ clocks = <&sys_ctrl 2>, <&sys_ctrl 1>;
+ clock-names = "ciu", "biu";
+ bus-width = <0x8>;
+ vmmc-supply = <&ldo19>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&emmc_pmx_func &emmc_clk_cfg_func
+ &emmc_cfg_func &emmc_rst_cfg_func>;
+ };
+
+ dwmmc_1: dwmmc1@f723e000 {
+ compatible = "hisilicon,hi6220-dw-mshc";
+ num-slots = <0x1>;
+ card-detect-delay = <200>;
+ hisilicon,peripheral-syscon = <&ao_ctrl>;
+ cap-sd-highspeed;
+ reg = <0x0 0xf723e000 0x0 0x1000>;
+ interrupts = <0x0 0x49 0x4>;
+ #address-cells = <0x1>;
+ #size-cells = <0x0>;
+ clocks = <&sys_ctrl 4>, <&sys_ctrl 3>;
+ clock-names = "ciu", "biu";
+ vqmmc-supply = <&ldo7>;
+ vmmc-supply = <&ldo10>;
+ bus-width = <0x4>;
+ disable-wp;
+ cd-gpios = <&gpio1 0 1>;
+ pinctrl-names = "default", "idle";
+ pinctrl-0 = <&sd_pmx_func &sd_clk_cfg_func &sd_cfg_func>;
+ pinctrl-1 = <&sd_pmx_idle &sd_clk_cfg_idle &sd_cfg_idle>;
+ };
+
+ dwmmc_2: dwmmc2@f723f000 {
+ compatible = "hisilicon,hi6220-dw-mshc";
+ num-slots = <0x1>;
+ reg = <0x0 0xf723f000 0x0 0x1000>;
+ interrupts = <0x0 0x4a 0x4>;
+ clocks = <&sys_ctrl HI6220_MMC2_CIUCLK>, <&sys_ctrl HI6220_MMC2_CLK>;
+ clock-names = "ciu", "biu";
+ bus-width = <0x4>;
+ broken-cd;
+ pinctrl-names = "default", "idle";
+ pinctrl-0 = <&sdio_pmx_func &sdio_clk_cfg_func &sdio_cfg_func>;
+ pinctrl-1 = <&sdio_pmx_idle &sdio_clk_cfg_idle &sdio_cfg_idle>;
+ };
+
+ tsensor: tsensor@0,f7030700 {
+ compatible = "hisilicon,tsensor";
+ reg = <0x0 0xf7030700 0x0 0x1000>;
+ interrupts = <GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&sys_ctrl 22>;
+ clock-names = "thermal_clk";
+ #thermal-sensor-cells = <1>;
+ };
+
+ thermal-zones {
+
+ cls0: cls0 {
+ polling-delay = <1000>;
+ polling-delay-passive = <100>;
+ sustainable-power = <3326>;
+
+ /* sensor ID */
+ thermal-sensors = <&tsensor 2>;
+
+ trips {
+ threshold: trip-point@0 {
+ temperature = <65000>;
+ hysteresis = <0>;
+ type = "passive";
+ };
+
+ target: trip-point@1 {
+ temperature = <75000>;
+ hysteresis = <0>;
+ type = "passive";
+ };
+ };
+
+ cooling-maps {
+ map0 {
+ trip = <&target>;
+ cooling-device = <&cpu0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+ };
};
};
diff --git a/arch/arm64/boot/dts/hisilicon/hikey-pinctrl.dtsi b/arch/arm64/boot/dts/hisilicon/hikey-pinctrl.dtsi
new file mode 100644
index 000000000000..0916e8459d6b
--- /dev/null
+++ b/arch/arm64/boot/dts/hisilicon/hikey-pinctrl.dtsi
@@ -0,0 +1,705 @@
+/*
+ * pinctrl dts file for the Hisilicon HiKey development board
+ *
+ */
+#include <dt-bindings/pinctrl/hisi.h>
+
+/ {
+ soc {
+ pmx0: pinmux@f7010000 {
+ pinctrl-names = "default";
+ pinctrl-0 = <
+ &boot_sel_pmx_func
+ &hkadc_ssi_pmx_func
+ &codec_clk_pmx_func
+ &pwm_in_pmx_func
+ &bl_pwm_pmx_func
+ >;
+
+ boot_sel_pmx_func: boot_sel_pmx_func {
+ pinctrl-single,pins = <
+ 0x0 MUX_M0 /* BOOT_SEL (IOMG000) */
+ >;
+ };
+
+ emmc_pmx_func: emmc_pmx_func {
+ pinctrl-single,pins = <
+ 0x100 MUX_M0 /* EMMC_CLK (IOMG064) */
+ 0x104 MUX_M0 /* EMMC_CMD (IOMG065) */
+ 0x108 MUX_M0 /* EMMC_DATA0 (IOMG066) */
+ 0x10c MUX_M0 /* EMMC_DATA1 (IOMG067) */
+ 0x110 MUX_M0 /* EMMC_DATA2 (IOMG068) */
+ 0x114 MUX_M0 /* EMMC_DATA3 (IOMG069) */
+ 0x118 MUX_M0 /* EMMC_DATA4 (IOMG070) */
+ 0x11c MUX_M0 /* EMMC_DATA5 (IOMG071) */
+ 0x120 MUX_M0 /* EMMC_DATA6 (IOMG072) */
+ 0x124 MUX_M0 /* EMMC_DATA7 (IOMG073) */
+ >;
+ };
+
+ sd_pmx_func: sd_pmx_func {
+ pinctrl-single,pins = <
+ 0xc MUX_M0 /* SD_CLK (IOMG003) */
+ 0x10 MUX_M0 /* SD_CMD (IOMG004) */
+ 0x14 MUX_M0 /* SD_DATA0 (IOMG005) */
+ 0x18 MUX_M0 /* SD_DATA1 (IOMG006) */
+ 0x1c MUX_M0 /* SD_DATA2 (IOMG007) */
+ 0x20 MUX_M0 /* SD_DATA3 (IOMG008) */
+ >;
+ };
+ sd_pmx_idle: sd_pmx_idle {
+ pinctrl-single,pins = <
+ 0xc MUX_M1 /* SD_CLK (IOMG003) */
+ 0x10 MUX_M1 /* SD_CMD (IOMG004) */
+ 0x14 MUX_M1 /* SD_DATA0 (IOMG005) */
+ 0x18 MUX_M1 /* SD_DATA1 (IOMG006) */
+ 0x1c MUX_M1 /* SD_DATA2 (IOMG007) */
+ 0x20 MUX_M1 /* SD_DATA3 (IOMG008) */
+ >;
+ };
+
+ sdio_pmx_func: sdio_pmx_func {
+ pinctrl-single,pins = <
+ 0x128 MUX_M0 /* SDIO_CLK (IOMG074) */
+ 0x12c MUX_M0 /* SDIO_CMD (IOMG075) */
+ 0x130 MUX_M0 /* SDIO_DATA0 (IOMG076) */
+ 0x134 MUX_M0 /* SDIO_DATA1 (IOMG077) */
+ 0x138 MUX_M0 /* SDIO_DATA2 (IOMG078) */
+ 0x13c MUX_M0 /* SDIO_DATA3 (IOMG079) */
+ >;
+ };
+ sdio_pmx_idle: sdio_pmx_idle {
+ pinctrl-single,pins = <
+ 0x128 MUX_M1 /* SDIO_CLK (IOMG074) */
+ 0x12c MUX_M1 /* SDIO_CMD (IOMG075) */
+ 0x130 MUX_M1 /* SDIO_DATA0 (IOMG076) */
+ 0x134 MUX_M1 /* SDIO_DATA1 (IOMG077) */
+ 0x138 MUX_M1 /* SDIO_DATA2 (IOMG078) */
+ 0x13c MUX_M1 /* SDIO_DATA3 (IOMG079) */
+ >;
+ };
+
+ isp_pmx_func: isp_pmx_func {
+ pinctrl-single,pins = <
+ 0x24 MUX_M0 /* ISP_PWDN0 (IOMG009) */
+ 0x28 MUX_M0 /* ISP_PWDN1 (IOMG010) */
+ 0x2c MUX_M0 /* ISP_PWDN2 (IOMG011) */
+ 0x30 MUX_M1 /* ISP_SHUTTER0 (IOMG012) */
+ 0x34 MUX_M1 /* ISP_SHUTTER1 (IOMG013) */
+ 0x38 MUX_M1 /* ISP_PWM (IOMG014) */
+ 0x3c MUX_M0 /* ISP_CCLK0 (IOMG015) */
+ 0x40 MUX_M0 /* ISP_CCLK1 (IOMG016) */
+ 0x44 MUX_M0 /* ISP_RESETB0 (IOMG017) */
+ 0x48 MUX_M0 /* ISP_RESETB1 (IOMG018) */
+ 0x4c MUX_M1 /* ISP_STROBE0 (IOMG019) */
+ 0x50 MUX_M1 /* ISP_STROBE1 (IOMG020) */
+ 0x54 MUX_M0 /* ISP_SDA0 (IOMG021) */
+ 0x58 MUX_M0 /* ISP_SCL0 (IOMG022) */
+ 0x5c MUX_M0 /* ISP_SDA1 (IOMG023) */
+ 0x60 MUX_M0 /* ISP_SCL1 (IOMG024) */
+ >;
+ };
+
+ hkadc_ssi_pmx_func: hkadc_ssi_pmx_func {
+ pinctrl-single,pins = <
+ 0x68 MUX_M0 /* HKADC_SSI (IOMG026) */
+ >;
+ };
+
+ codec_clk_pmx_func: codec_clk_pmx_func {
+ pinctrl-single,pins = <
+ 0x6c MUX_M0 /* CODEC_CLK (IOMG027) */
+ >;
+ };
+
+ codec_pmx_func: codec_pmx_func {
+ pinctrl-single,pins = <
+ 0x70 MUX_M1 /* DMIC_CLK (IOMG028) */
+ 0x74 MUX_M0 /* CODEC_SYNC (IOMG029) */
+ 0x78 MUX_M0 /* CODEC_DI (IOMG030) */
+ 0x7c MUX_M0 /* CODEC_DO (IOMG031) */
+ >;
+ };
+
+ fm_pmx_func: fm_pmx_func {
+ pinctrl-single,pins = <
+ 0x80 MUX_M1 /* FM_XCLK (IOMG032) */
+ 0x84 MUX_M1 /* FM_XFS (IOMG033) */
+ 0x88 MUX_M1 /* FM_DI (IOMG034) */
+ 0x8c MUX_M1 /* FM_DO (IOMG035) */
+ >;
+ };
+
+ bt_pmx_func: bt_pmx_func {
+ pinctrl-single,pins = <
+ 0x90 MUX_M0 /* BT_XCLK (IOMG036) */
+ 0x94 MUX_M0 /* BT_XFS (IOMG037) */
+ 0x98 MUX_M0 /* BT_DI (IOMG038) */
+ 0x9c MUX_M0 /* BT_DO (IOMG039) */
+ >;
+ };
+
+ pwm_in_pmx_func: pwm_in_pmx_func {
+ pinctrl-single,pins = <
+ 0xb8 MUX_M1 /* PWM_IN (IOMG046) */
+ >;
+ };
+
+ bl_pwm_pmx_func: bl_pwm_pmx_func {
+ pinctrl-single,pins = <
+ 0xbc MUX_M1 /* BL_PWM (IOMG047) */
+ >;
+ };
+
+ uart0_pmx_func: uart0_pmx_func {
+ pinctrl-single,pins = <
+ 0xc0 MUX_M0 /* UART0_RXD (IOMG048) */
+ 0xc4 MUX_M0 /* UART0_TXD (IOMG049) */
+ >;
+ };
+
+ uart1_pmx_func: uart1_pmx_func {
+ pinctrl-single,pins = <
+ 0xc8 MUX_M0 /* UART1_CTS_N (IOMG050) */
+ 0xcc MUX_M0 /* UART1_RTS_N (IOMG051) */
+ 0xd0 MUX_M0 /* UART1_RXD (IOMG052) */
+ 0xd4 MUX_M0 /* UART1_TXD (IOMG053) */
+ >;
+ };
+
+ uart2_pmx_func: uart2_pmx_func {
+ pinctrl-single,pins = <
+ 0xd8 MUX_M0 /* UART2_CTS_N (IOMG054) */
+ 0xdc MUX_M0 /* UART2_RTS_N (IOMG055) */
+ 0xe0 MUX_M0 /* UART2_RXD (IOMG056) */
+ 0xe4 MUX_M0 /* UART2_TXD (IOMG057) */
+ >;
+ };
+
+ uart3_pmx_func: uart3_pmx_func {
+ pinctrl-single,pins = <
+ 0x180 MUX_M1 /* UART3_CTS_N (IOMG096) */
+ 0x184 MUX_M1 /* UART3_RTS_N (IOMG097) */
+ 0x188 MUX_M1 /* UART3_RXD (IOMG098) */
+ 0x18c MUX_M1 /* UART3_TXD (IOMG099) */
+ >;
+ };
+
+ uart4_pmx_func: uart4_pmx_func {
+ pinctrl-single,pins = <
+ 0x1d0 MUX_M1 /* UART4_CTS_N (IOMG116) */
+ 0x1d4 MUX_M1 /* UART4_RTS_N (IOMG117) */
+ 0x1d8 MUX_M1 /* UART4_RXD (IOMG118) */
+ 0x1dc MUX_M1 /* UART4_TXD (IOMG119) */
+ >;
+ };
+
+ uart5_pmx_func: uart5_pmx_func {
+ pinctrl-single,pins = <
+ 0x1c8 MUX_M1 /* UART5_RXD (IOMG114) */
+ 0x1cc MUX_M1 /* UART5_TXD (IOMG115) */
+ >;
+ };
+
+ i2c0_pmx_func: i2c0_pmx_func {
+ pinctrl-single,pins = <
+ 0xe8 MUX_M0 /* I2C0_SCL (IOMG058) */
+ 0xec MUX_M0 /* I2C0_SDA (IOMG059) */
+ >;
+ };
+
+ i2c1_pmx_func: i2c1_pmx_func {
+ pinctrl-single,pins = <
+ 0xf0 MUX_M0 /* I2C1_SCL (IOMG060) */
+ 0xf4 MUX_M0 /* I2C1_SDA (IOMG061) */
+ >;
+ };
+
+ i2c2_pmx_func: i2c2_pmx_func {
+ pinctrl-single,pins = <
+ 0xf8 MUX_M0 /* I2C2_SCL (IOMG062) */
+ 0xfc MUX_M0 /* I2C2_SDA (IOMG063) */
+ >;
+ };
+
+ spi0_pmx_func: spi0_pmx_func {
+ pinctrl-single,pins = <
+ 0x1a0 MUX_M1 /* SPI0_DI (IOMG104) */
+ 0x1a4 MUX_M1 /* SPI0_DO (IOMG105) */
+ 0x1a8 MUX_M1 /* SPI0_CS_N (IOMG106) */
+ 0x1ac MUX_M1 /* SPI0_CLK (IOMG107) */
+ >;
+ };
+ };
+
+ pmx1: pinmux@f7010800 {
+
+ pinctrl-names = "default";
+ pinctrl-0 = <
+ &boot_sel_cfg_func
+ &hkadc_ssi_cfg_func
+ &codec_clk_cfg_func
+ &pwm_in_cfg_func
+ &bl_pwm_cfg_func
+ >;
+
+ boot_sel_cfg_func: boot_sel_cfg_func {
+ pinctrl-single,pins = <
+ 0x0 0x0 /* BOOT_SEL (IOCFG000) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_UP PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ hkadc_ssi_cfg_func: hkadc_ssi_cfg_func {
+ pinctrl-single,pins = <
+ 0x6c 0x0 /* HKADC_SSI (IOCFG027) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ emmc_clk_cfg_func: emmc_clk_cfg_func {
+ pinctrl-single,pins = <
+ 0x104 0x0 /* EMMC_CLK (IOCFG065) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_08MA DRIVE_MASK>;
+ };
+
+ emmc_cfg_func: emmc_cfg_func {
+ pinctrl-single,pins = <
+ 0x108 0x0 /* EMMC_CMD (IOCFG066) */
+ 0x10c 0x0 /* EMMC_DATA0 (IOCFG067) */
+ 0x110 0x0 /* EMMC_DATA1 (IOCFG068) */
+ 0x114 0x0 /* EMMC_DATA2 (IOCFG069) */
+ 0x118 0x0 /* EMMC_DATA3 (IOCFG070) */
+ 0x11c 0x0 /* EMMC_DATA4 (IOCFG071) */
+ 0x120 0x0 /* EMMC_DATA5 (IOCFG072) */
+ 0x124 0x0 /* EMMC_DATA6 (IOCFG073) */
+ 0x128 0x0 /* EMMC_DATA7 (IOCFG074) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_UP PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_04MA DRIVE_MASK>;
+ };
+
+ emmc_rst_cfg_func: emmc_rst_cfg_func {
+ pinctrl-single,pins = <
+ 0x12c 0x0 /* EMMC_RST_N (IOCFG075) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_04MA DRIVE_MASK>;
+ };
+
+ sd_clk_cfg_func: sd_clk_cfg_func {
+ pinctrl-single,pins = <
+ 0xc 0x0 /* SD_CLK (IOCFG003) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_10MA DRIVE_MASK>;
+ };
+ sd_clk_cfg_idle: sd_clk_cfg_idle {
+ pinctrl-single,pins = <
+ 0xc 0x0 /* SD_CLK (IOCFG003) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DOWN PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ sd_cfg_func: sd_cfg_func {
+ pinctrl-single,pins = <
+ 0x10 0x0 /* SD_CMD (IOCFG004) */
+ 0x14 0x0 /* SD_DATA0 (IOCFG005) */
+ 0x18 0x0 /* SD_DATA1 (IOCFG006) */
+ 0x1c 0x0 /* SD_DATA2 (IOCFG007) */
+ 0x20 0x0 /* SD_DATA3 (IOCFG008) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_08MA DRIVE_MASK>;
+ };
+ sd_cfg_idle: sd_cfg_idle {
+ pinctrl-single,pins = <
+ 0x10 0x0 /* SD_CMD (IOCFG004) */
+ 0x14 0x0 /* SD_DATA0 (IOCFG005) */
+ 0x18 0x0 /* SD_DATA1 (IOCFG006) */
+ 0x1c 0x0 /* SD_DATA2 (IOCFG007) */
+ 0x20 0x0 /* SD_DATA3 (IOCFG008) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DOWN PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ sdio_clk_cfg_func: sdio_clk_cfg_func {
+ pinctrl-single,pins = <
+ 0x134 0x0 /* SDIO_CLK (IOCFG077) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_08MA DRIVE_MASK>;
+ };
+ sdio_clk_cfg_idle: sdio_clk_cfg_idle {
+ pinctrl-single,pins = <
+ 0x134 0x0 /* SDIO_CLK (IOCFG077) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DOWN PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ sdio_cfg_func: sdio_cfg_func {
+ pinctrl-single,pins = <
+ 0x138 0x0 /* SDIO_CMD (IOCFG078) */
+ 0x13c 0x0 /* SDIO_DATA0 (IOCFG079) */
+ 0x140 0x0 /* SDIO_DATA1 (IOCFG080) */
+ 0x144 0x0 /* SDIO_DATA2 (IOCFG081) */
+ 0x148 0x0 /* SDIO_DATA3 (IOCFG082) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_UP PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_04MA DRIVE_MASK>;
+ };
+ sdio_cfg_idle: sdio_cfg_idle {
+ pinctrl-single,pins = <
+ 0x138 0x0 /* SDIO_CMD (IOCFG078) */
+ 0x13c 0x0 /* SDIO_DATA0 (IOCFG079) */
+ 0x140 0x0 /* SDIO_DATA1 (IOCFG080) */
+ 0x144 0x0 /* SDIO_DATA2 (IOCFG081) */
+ 0x148 0x0 /* SDIO_DATA3 (IOCFG082) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_UP PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ isp_cfg_func1: isp_cfg_func1 {
+ pinctrl-single,pins = <
+ 0x28 0x0 /* ISP_PWDN0 (IOCFG010) */
+ 0x2c 0x0 /* ISP_PWDN1 (IOCFG011) */
+ 0x30 0x0 /* ISP_PWDN2 (IOCFG012) */
+ 0x34 0x0 /* ISP_SHUTTER0 (IOCFG013) */
+ 0x38 0x0 /* ISP_SHUTTER1 (IOCFG014) */
+ 0x3c 0x0 /* ISP_PWM (IOCFG015) */
+ 0x40 0x0 /* ISP_CCLK0 (IOCFG016) */
+ 0x44 0x0 /* ISP_CCLK1 (IOCFG017) */
+ 0x48 0x0 /* ISP_RESETB0 (IOCFG018) */
+ 0x4c 0x0 /* ISP_RESETB1 (IOCFG019) */
+ 0x50 0x0 /* ISP_STROBE0 (IOCFG020) */
+ 0x58 0x0 /* ISP_SDA0 (IOCFG022) */
+ 0x5c 0x0 /* ISP_SCL0 (IOCFG023) */
+ 0x60 0x0 /* ISP_SDA1 (IOCFG024) */
+ 0x64 0x0 /* ISP_SCL1 (IOCFG025) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+ isp_cfg_idle1: isp_cfg_idle1 {
+ pinctrl-single,pins = <
+ 0x34 0x0 /* ISP_SHUTTER0 (IOCFG013) */
+ 0x38 0x0 /* ISP_SHUTTER1 (IOCFG014) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DOWN PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ isp_cfg_func2: isp_cfg_func2 {
+ pinctrl-single,pins = <
+ 0x54 0x0 /* ISP_STROBE1 (IOCFG021) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DOWN PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ codec_clk_cfg_func: codec_clk_cfg_func {
+ pinctrl-single,pins = <
+ 0x70 0x0 /* CODEC_CLK (IOCFG028) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_04MA DRIVE_MASK>;
+ };
+ codec_clk_cfg_idle: codec_clk_cfg_idle {
+ pinctrl-single,pins = <
+ 0x70 0x0 /* CODEC_CLK (IOCFG028) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ codec_cfg_func1: codec_cfg_func1 {
+ pinctrl-single,pins = <
+ 0x74 0x0 /* DMIC_CLK (IOCFG029) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DOWN PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ codec_cfg_func2: codec_cfg_func2 {
+ pinctrl-single,pins = <
+ 0x78 0x0 /* CODEC_SYNC (IOCFG030) */
+ 0x7c 0x0 /* CODEC_DI (IOCFG031) */
+ 0x80 0x0 /* CODEC_DO (IOCFG032) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_04MA DRIVE_MASK>;
+ };
+ codec_cfg_idle2: codec_cfg_idle2 {
+ pinctrl-single,pins = <
+ 0x78 0x0 /* CODEC_SYNC (IOCFG030) */
+ 0x7c 0x0 /* CODEC_DI (IOCFG031) */
+ 0x80 0x0 /* CODEC_DO (IOCFG032) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ fm_cfg_func: fm_cfg_func {
+ pinctrl-single,pins = <
+ 0x84 0x0 /* FM_XCLK (IOCFG033) */
+ 0x88 0x0 /* FM_XFS (IOCFG034) */
+ 0x8c 0x0 /* FM_DI (IOCFG035) */
+ 0x90 0x0 /* FM_DO (IOCFG036) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DOWN PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ bt_cfg_func: bt_cfg_func {
+ pinctrl-single,pins = <
+ 0x94 0x0 /* BT_XCLK (IOCFG037) */
+ 0x98 0x0 /* BT_XFS (IOCFG038) */
+ 0x9c 0x0 /* BT_DI (IOCFG039) */
+ 0xa0 0x0 /* BT_DO (IOCFG040) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+ bt_cfg_idle: bt_cfg_idle {
+ pinctrl-single,pins = <
+ 0x94 0x0 /* BT_XCLK (IOCFG037) */
+ 0x98 0x0 /* BT_XFS (IOCFG038) */
+ 0x9c 0x0 /* BT_DI (IOCFG039) */
+ 0xa0 0x0 /* BT_DO (IOCFG040) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DOWN PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ pwm_in_cfg_func: pwm_in_cfg_func {
+ pinctrl-single,pins = <
+ 0xbc 0x0 /* PWM_IN (IOCFG047) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DOWN PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ bl_pwm_cfg_func: bl_pwm_cfg_func {
+ pinctrl-single,pins = <
+ 0xc0 0x0 /* BL_PWM (IOCFG048) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DOWN PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ uart0_cfg_func1: uart0_cfg_func1 {
+ pinctrl-single,pins = <
+ 0xc4 0x0 /* UART0_RXD (IOCFG049) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_UP PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ uart0_cfg_func2: uart0_cfg_func2 {
+ pinctrl-single,pins = <
+ 0xc8 0x0 /* UART0_TXD (IOCFG050) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_04MA DRIVE_MASK>;
+ };
+
+ uart1_cfg_func1: uart1_cfg_func1 {
+ pinctrl-single,pins = <
+ 0xcc 0x0 /* UART1_CTS_N (IOCFG051) */
+ 0xd4 0x0 /* UART1_RXD (IOCFG053) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_UP PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ uart1_cfg_func2: uart1_cfg_func2 {
+ pinctrl-single,pins = <
+ 0xd0 0x0 /* UART1_RTS_N (IOCFG052) */
+ 0xd8 0x0 /* UART1_TXD (IOCFG054) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ uart2_cfg_func: uart2_cfg_func {
+ pinctrl-single,pins = <
+ 0xdc 0x0 /* UART2_CTS_N (IOCFG055) */
+ 0xe0 0x0 /* UART2_RTS_N (IOCFG056) */
+ 0xe4 0x0 /* UART2_RXD (IOCFG057) */
+ 0xe8 0x0 /* UART2_TXD (IOCFG058) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ uart3_cfg_func: uart3_cfg_func {
+ pinctrl-single,pins = <
+ 0x190 0x0 /* UART3_CTS_N (IOCFG100) */
+ 0x194 0x0 /* UART3_RTS_N (IOCFG101) */
+ 0x198 0x0 /* UART3_RXD (IOCFG102) */
+ 0x19c 0x0 /* UART3_TXD (IOCFG103) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DOWN PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ uart4_cfg_func: uart4_cfg_func {
+ pinctrl-single,pins = <
+ 0x1e0 0x0 /* UART4_CTS_N (IOCFG120) */
+ 0x1e4 0x0 /* UART4_RTS_N (IOCFG121) */
+ 0x1e8 0x0 /* UART4_RXD (IOCFG122) */
+ 0x1ec 0x0 /* UART4_TXD (IOCFG123) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DOWN PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ uart5_cfg_func: uart5_cfg_func {
+ pinctrl-single,pins = <
+ 0x1d8 0x0 /* UART5_RXD (IOCFG118) */
+ 0x1dc 0x0 /* UART5_TXD (IOCFG119) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DOWN PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ i2c0_cfg_func: i2c0_cfg_func {
+ pinctrl-single,pins = <
+ 0xec 0x0 /* I2C0_SCL (IOCFG059) */
+ 0xf0 0x0 /* I2C0_SDA (IOCFG060) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ i2c1_cfg_func: i2c1_cfg_func {
+ pinctrl-single,pins = <
+ 0xf4 0x0 /* I2C1_SCL (IOCFG061) */
+ 0xf8 0x0 /* I2C1_SDA (IOCFG062) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ i2c2_cfg_func: i2c2_cfg_func {
+ pinctrl-single,pins = <
+ 0xfc 0x0 /* I2C2_SCL (IOCFG063) */
+ 0x100 0x0 /* I2C2_SDA (IOCFG064) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ spi0_cfg_func: spi0_cfg_func {
+ pinctrl-single,pins = <
+ 0x1b0 0x0 /* SPI0_DI (IOCFG108) */
+ 0x1b4 0x0 /* SPI0_DO (IOCFG109) */
+ 0x1b8 0x0 /* SPI0_CS_N (IOCFG110) */
+ 0x1bc 0x0 /* SPI0_CLK (IOCFG111) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+ };
+
+ pmx2: pinmux@f8001800 {
+
+ pinctrl-names = "default";
+ pinctrl-0 = <
+ &rstout_n_cfg_func
+ >;
+
+ rstout_n_cfg_func: rstout_n_cfg_func {
+ pinctrl-single,pins = <
+ 0x0 0x0 /* RSTOUT_N (IOCFG000) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ pmu_peri_en_cfg_func: pmu_peri_en_cfg_func {
+ pinctrl-single,pins = <
+ 0x4 0x0 /* PMU_PERI_EN (IOCFG001) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ sysclk0_en_cfg_func: sysclk0_en_cfg_func {
+ pinctrl-single,pins = <
+ 0x8 0x0 /* SYSCLK0_EN (IOCFG002) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+
+ jtag_tdo_cfg_func: jtag_tdo_cfg_func {
+ pinctrl-single,pins = <
+ 0xc 0x0 /* JTAG_TDO (IOCFG003) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_08MA DRIVE_MASK>;
+ };
+
+ rf_reset_cfg_func: rf_reset_cfg_func {
+ pinctrl-single,pins = <
+ 0x70 0x0 /* RF_RESET0 (IOCFG028) */
+ 0x74 0x0 /* RF_RESET1 (IOCFG029) */
+ >;
+ pinctrl-single,bias-pulldown = <PULL_DIS PULL_DOWN PULL_DIS PULL_DOWN>;
+ pinctrl-single,bias-pullup = <PULL_DIS PULL_UP PULL_DIS PULL_UP>;
+ pinctrl-single,drive-strength = <DRIVE1_02MA DRIVE_MASK>;
+ };
+ };
+ };
+};
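
The pmx0 and pmx1 nodes above only define the "func" and "idle" pin states; a peripheral selects between them from its own node. A minimal sketch of such a consumer, assuming a hypothetical MMC host label (&dwmmc_1 and the state names are illustrative, not part of this patch):

	&dwmmc_1 {
		/* "default" applies the func states, "idle" the low-power states */
		pinctrl-names = "default", "idle";
		pinctrl-0 = <&sd_pmx_func &sd_clk_cfg_func &sd_cfg_func>;
		pinctrl-1 = <&sd_pmx_idle &sd_clk_cfg_idle &sd_cfg_idle>;
	};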
diff --git a/arch/arm64/boot/dts/hisilicon/hip05-d02.dts b/arch/arm64/boot/dts/hisilicon/hip05-d02.dts
index e9436c0d81f7..abba750b87f8 100644
--- a/arch/arm64/boot/dts/hisilicon/hip05-d02.dts
+++ b/arch/arm64/boot/dts/hisilicon/hip05-d02.dts
@@ -52,3 +52,37 @@
&peri_gpio0 {
status = "ok";
};
+
+&lbc {
+ status = "ok";
+ #address-cells = <2>;
+ #size-cells = <1>;
+ ranges = <0 0 0x0 0x90000000 0x08000000>,
+ <1 0 0x0 0x98000000 0x08000000>;
+
+ nor-flash@0,0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "numonyx,js28f00a", "cfi-flash";
+ reg = <0 0x0 0x08000000>;
+ bank-width = <2>;
+ /* The three partitions may not be used */
+ partition@0 {
+ label = "BIOS";
+ reg = <0x0 0x300000>;
+ };
+ partition@300000 {
+ label = "Linux";
+ reg = <0x300000 0xa00000>;
+ };
+ partition@1000000 {
+ label = "Rootfs";
+ reg = <0x01000000 0x02000000>;
+ };
+ };
+
+ cpld@1,0 {
+ compatible = "hisilicon,hip05-cpld";
+ reg = <1 0x0 0x100>;
+ };
+};
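
For reference, the local bus addressing above reads as follows (an interpretation of this file, not a quote from the binding document): with #address-cells = <2>, the first reg cell selects the chip-select and the second the offset within it, while the ranges property maps chip-select 0 to 0x90000000 and chip-select 1 to 0x98000000, each 0x08000000 (128 MiB) wide. The CPLD entry from the patch, repeated here with annotations, therefore decodes to 256 bytes at the start of the second window:

	cpld@1,0 {
		compatible = "hisilicon,hip05-cpld";
		/* chip-select 1, offset 0x0, 0x100 bytes => 0x98000000..0x980000ff */
		reg = <1 0x0 0x100>;
	};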
diff --git a/arch/arm64/boot/dts/hisilicon/hip05.dtsi b/arch/arm64/boot/dts/hisilicon/hip05.dtsi
index 6319ff3b03ea..bf322ed038b8 100644
--- a/arch/arm64/boot/dts/hisilicon/hip05.dtsi
+++ b/arch/arm64/boot/dts/hisilicon/hip05.dtsi
@@ -249,24 +249,28 @@
its_peri: interrupt-controller@8c000000 {
compatible = "arm,gic-v3-its";
msi-controller;
+ #msi-cells = <1>;
reg = <0x0 0x8c000000 0x0 0x40000>;
};
its_m3: interrupt-controller@a3000000 {
compatible = "arm,gic-v3-its";
msi-controller;
+ #msi-cells = <1>;
reg = <0x0 0xa3000000 0x0 0x40000>;
};
its_pcie: interrupt-controller@b7000000 {
compatible = "arm,gic-v3-its";
msi-controller;
+ #msi-cells = <1>;
reg = <0x0 0xb7000000 0x0 0x40000>;
};
its_dsa: interrupt-controller@c6000000 {
compatible = "arm,gic-v3-its";
msi-controller;
+ #msi-cells = <1>;
reg = <0x0 0xc6000000 0x0 0x40000>;
};
};
@@ -323,6 +327,12 @@
status = "disabled";
};
+ lbc: localbus@80380000 {
+ compatible = "hisilicon,hisi-localbus", "simple-bus";
+ reg = <0x0 0x80380000 0x0 0x10000>;
+ status = "disabled";
+ };
+
peri_gpio0: gpio@802e0000 {
#address-cells = <1>;
#size-cells = <0>;
diff --git a/arch/arm64/boot/dts/hisilicon/hip05_hns.dtsi b/arch/arm64/boot/dts/hisilicon/hip05_hns.dtsi
index 933cba359918..b6a130c2e5a4 100644
--- a/arch/arm64/boot/dts/hisilicon/hip05_hns.dtsi
+++ b/arch/arm64/boot/dts/hisilicon/hip05_hns.dtsi
@@ -24,17 +24,19 @@ soc0: soc@000000000 {
};
dsaf0: dsa@c7000000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
compatible = "hisilicon,hns-dsaf-v1";
mode = "6port-16rss";
interrupt-parent = <&mbigen_dsa>;
- reg = <0x0 0xC0000000 0x0 0x420000
- 0x0 0xC2000000 0x0 0x300000
- 0x0 0xc5000000 0x0 0x890000
+ reg = <0x0 0xc5000000 0x0 0x890000
0x0 0xc7000000 0x0 0x60000
>;
- phy-handle = <0 0 0 0 &soc0_phy0 &soc0_phy1 0 0>;
+ reg-names = "ppe-base","dsaf-base";
+ subctrl-syscon = <&dsaf_subctrl>;
+ reset-field-offset = <0>;
interrupts = <
/* [14] ge fifo err 8 / xge 6**/
149 0x4 150 0x4 151 0x4 152 0x4
@@ -122,12 +124,31 @@ soc0: soc@000000000 {
buf-size = <4096>;
desc-num = <1024>;
dma-coherent;
+
+ port@0 {
+ reg = <0>;
+ serdes-syscon = <&serdes_ctrl0>;
+ };
+ port@1 {
+ reg = <1>;
+ serdes-syscon = <&serdes_ctrl0>;
+ };
+ port@4 {
+ reg = <4>;
+ phy-handle = <&soc0_phy0>;
+ serdes-syscon = <&serdes_ctrl1>;
+ };
+ port@5 {
+ reg = <5>;
+ phy-handle = <&soc0_phy1>;
+ serdes-syscon = <&serdes_ctrl1>;
+ };
};
eth0: ethernet@0{
compatible = "hisilicon,hns-nic-v1";
ae-handle = <&dsaf0>;
- port-id = <0>;
+ port-idx-in-ae = <0>;
local-mac-address = [00 00 00 01 00 58];
status = "disabled";
dma-coherent;
@@ -135,56 +156,25 @@ soc0: soc@000000000 {
eth1: ethernet@1{
compatible = "hisilicon,hns-nic-v1";
ae-handle = <&dsaf0>;
- port-id = <1>;
+ port-idx-in-ae = <1>;
+ local-mac-address = [00 00 00 01 00 59];
status = "disabled";
dma-coherent;
};
- eth2: ethernet@2{
+ eth2: ethernet@4{
compatible = "hisilicon,hns-nic-v1";
ae-handle = <&dsaf0>;
- port-id = <2>;
+ port-idx-in-ae = <4>;
local-mac-address = [00 00 00 01 00 5a];
status = "disabled";
dma-coherent;
};
- eth3: ethernet@3{
+ eth3: ethernet@5{
compatible = "hisilicon,hns-nic-v1";
ae-handle = <&dsaf0>;
- port-id = <3>;
+ port-idx-in-ae = <5>;
local-mac-address = [00 00 00 01 00 5b];
status = "disabled";
dma-coherent;
};
- eth4: ethernet@4{
- compatible = "hisilicon,hns-nic-v1";
- ae-handle = <&dsaf0>;
- port-id = <4>;
- local-mac-address = [00 00 00 01 00 5c];
- status = "disabled";
- dma-coherent;
- };
- eth5: ethernet@5{
- compatible = "hisilicon,hns-nic-v1";
- ae-handle = <&dsaf0>;
- port-id = <5>;
- local-mac-address = [00 00 00 01 00 5d];
- status = "disabled";
- dma-coherent;
- };
- eth6: ethernet@6{
- compatible = "hisilicon,hns-nic-v1";
- ae-handle = <&dsaf0>;
- port-id = <6>;
- local-mac-address = [00 00 00 01 00 5e];
- status = "disabled";
- dma-coherent;
- };
- eth7: ethernet@7{
- compatible = "hisilicon,hns-nic-v1";
- ae-handle = <&dsaf0>;
- port-id = <7>;
- local-mac-address = [00 00 00 01 00 5f];
- status = "disabled";
- dma-coherent;
- };
};
diff --git a/arch/arm64/boot/dts/hisilicon/hip06-d03.dts b/arch/arm64/boot/dts/hisilicon/hip06-d03.dts
new file mode 100644
index 000000000000..f3e5323e430b
--- /dev/null
+++ b/arch/arm64/boot/dts/hisilicon/hip06-d03.dts
@@ -0,0 +1,34 @@
+/**
+ * dts file for Hisilicon D03 Development Board
+ *
+ * Copyright (C) 2016 Hisilicon Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+/dts-v1/;
+
+#include "hip06.dtsi"
+
+/ {
+ model = "Hisilicon Hip06 D03 Development Board";
+ compatible = "hisilicon,hip06-d03";
+
+ memory@00000000 {
+ device_type = "memory";
+ reg = <0x0 0x00000000 0x0 0x40000000>;
+ };
+
+ chosen { };
+};
+
+&usb_ohci {
+ status = "ok";
+};
+
+&usb_ehci {
+ status = "ok";
+};
diff --git a/arch/arm64/boot/dts/hisilicon/hip06.dtsi b/arch/arm64/boot/dts/hisilicon/hip06.dtsi
new file mode 100644
index 000000000000..5927bc472f1b
--- /dev/null
+++ b/arch/arm64/boot/dts/hisilicon/hip06.dtsi
@@ -0,0 +1,307 @@
+/**
+ * dts file for Hisilicon D03 Development Board
+ *
+ * Copyright (C) 2016 Hisilicon Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <dt-bindings/interrupt-controller/arm-gic.h>
+
+/ {
+ compatible = "hisilicon,hip06-d03";
+ interrupt-parent = <&gic>;
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ psci {
+ compatible = "arm,psci-0.2";
+ method = "smc";
+ };
+
+ cpus {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ cpu-map {
+ cluster0 {
+ core0 {
+ cpu = <&cpu0>;
+ };
+ core1 {
+ cpu = <&cpu1>;
+ };
+ core2 {
+ cpu = <&cpu2>;
+ };
+ core3 {
+ cpu = <&cpu3>;
+ };
+ };
+ cluster1 {
+ core0 {
+ cpu = <&cpu4>;
+ };
+ core1 {
+ cpu = <&cpu5>;
+ };
+ core2 {
+ cpu = <&cpu6>;
+ };
+ core3 {
+ cpu = <&cpu7>;
+ };
+ };
+ cluster2 {
+ core0 {
+ cpu = <&cpu8>;
+ };
+ core1 {
+ cpu = <&cpu9>;
+ };
+ core2 {
+ cpu = <&cpu10>;
+ };
+ core3 {
+ cpu = <&cpu11>;
+ };
+ };
+ cluster3 {
+ core0 {
+ cpu = <&cpu12>;
+ };
+ core1 {
+ cpu = <&cpu13>;
+ };
+ core2 {
+ cpu = <&cpu14>;
+ };
+ core3 {
+ cpu = <&cpu15>;
+ };
+ };
+ };
+
+ cpu0: cpu@10000 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a57", "arm,armv8";
+ reg = <0x10000>;
+ enable-method = "psci";
+ next-level-cache = <&cluster0_l2>;
+ };
+
+ cpu1: cpu@10001 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a57", "arm,armv8";
+ reg = <0x10001>;
+ enable-method = "psci";
+ next-level-cache = <&cluster0_l2>;
+ };
+
+ cpu2: cpu@10002 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a57", "arm,armv8";
+ reg = <0x10002>;
+ enable-method = "psci";
+ next-level-cache = <&cluster0_l2>;
+ };
+
+ cpu3: cpu@10003 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a57", "arm,armv8";
+ reg = <0x10003>;
+ enable-method = "psci";
+ next-level-cache = <&cluster0_l2>;
+ };
+
+ cpu4: cpu@10100 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a57", "arm,armv8";
+ reg = <0x10100>;
+ enable-method = "psci";
+ next-level-cache = <&cluster1_l2>;
+ };
+
+ cpu5: cpu@10101 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a57", "arm,armv8";
+ reg = <0x10101>;
+ enable-method = "psci";
+ next-level-cache = <&cluster1_l2>;
+ };
+
+ cpu6: cpu@10102 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a57", "arm,armv8";
+ reg = <0x10102>;
+ enable-method = "psci";
+ next-level-cache = <&cluster1_l2>;
+ };
+
+ cpu7: cpu@10103 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a57", "arm,armv8";
+ reg = <0x10103>;
+ enable-method = "psci";
+ next-level-cache = <&cluster1_l2>;
+ };
+
+ cpu8: cpu@10200 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a57", "arm,armv8";
+ reg = <0x10200>;
+ enable-method = "psci";
+ next-level-cache = <&cluster2_l2>;
+ };
+
+ cpu9: cpu@10201 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a57", "arm,armv8";
+ reg = <0x10201>;
+ enable-method = "psci";
+ next-level-cache = <&cluster2_l2>;
+ };
+
+ cpu10: cpu@10202 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a57", "arm,armv8";
+ reg = <0x10202>;
+ enable-method = "psci";
+ next-level-cache = <&cluster2_l2>;
+ };
+
+ cpu11: cpu@10203 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a57", "arm,armv8";
+ reg = <0x10203>;
+ enable-method = "psci";
+ next-level-cache = <&cluster2_l2>;
+ };
+
+ cpu12: cpu@10300 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a57", "arm,armv8";
+ reg = <0x10300>;
+ enable-method = "psci";
+ next-level-cache = <&cluster3_l2>;
+ };
+
+ cpu13: cpu@10301 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a57", "arm,armv8";
+ reg = <0x10301>;
+ enable-method = "psci";
+ next-level-cache = <&cluster3_l2>;
+ };
+
+ cpu14: cpu@10302 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a57", "arm,armv8";
+ reg = <0x10302>;
+ enable-method = "psci";
+ next-level-cache = <&cluster3_l2>;
+ };
+
+ cpu15: cpu@10303 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a57", "arm,armv8";
+ reg = <0x10303>;
+ enable-method = "psci";
+ next-level-cache = <&cluster3_l2>;
+ };
+
+ cluster0_l2: l2-cache0 {
+ compatible = "cache";
+ };
+
+ cluster1_l2: l2-cache1 {
+ compatible = "cache";
+ };
+
+ cluster2_l2: l2-cache2 {
+ compatible = "cache";
+ };
+
+ cluster3_l2: l2-cache3 {
+ compatible = "cache";
+ };
+ };
+
+ gic: interrupt-controller@4d000000 {
+ compatible = "arm,gic-v3";
+ #interrupt-cells = <3>;
+ #address-cells = <2>;
+ #size-cells = <2>;
+ ranges;
+ interrupt-controller;
+ #redistributor-regions = <1>;
+ redistributor-stride = <0x0 0x30000>;
+ reg = <0x0 0x4d000000 0 0x10000>, /* GICD */
+ <0x0 0x4d100000 0 0x300000>, /* GICR */
+ <0x0 0xfe000000 0 0x10000>, /* GICC */
+ <0x0 0xfe010000 0 0x10000>, /* GICH */
+ <0x0 0xfe020000 0 0x10000>; /* GICV */
+ interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
+
+ its_dsa: interrupt-controller@c6000000 {
+ compatible = "arm,gic-v3-its";
+ msi-controller;
+ #msi-cells = <1>;
+ reg = <0x0 0xc6000000 0x0 0x40000>;
+ };
+ };
+
+ timer {
+ compatible = "arm,armv8-timer";
+ interrupts = <GIC_PPI 13 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_PPI 14 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_PPI 11 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_PPI 10 IRQ_TYPE_LEVEL_LOW>;
+ };
+
+ pmu {
+ compatible = "arm,cortex-a57-pmu";
+ interrupts = <GIC_PPI 7 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
+ mbigen_pcie@a0080000 {
+ compatible = "hisilicon,mbigen-v2";
+ reg = <0x0 0xa0080000 0x0 0x10000>;
+
+ mbigen_usb: intc_usb {
+ msi-parent = <&its_dsa 0x40080>;
+ interrupt-controller;
+ #interrupt-cells = <2>;
+ num-pins = <2>;
+ };
+ };
+
+ soc {
+ compatible = "simple-bus";
+ #address-cells = <2>;
+ #size-cells = <2>;
+ ranges;
+
+ usb_ohci: ohci@a7030000 {
+ compatible = "generic-ohci";
+ reg = <0x0 0xa7030000 0x0 0x10000>;
+ interrupt-parent = <&mbigen_usb>;
+ interrupts = <64 4>;
+ dma-coherent;
+ status = "disabled";
+ };
+
+ usb_ehci: ehci@a7020000 {
+ compatible = "generic-ehci";
+ reg = <0x0 0xa7020000 0x0 0x10000>;
+ interrupt-parent = <&mbigen_usb>;
+ interrupts = <65 4>;
+ dma-coherent;
+ status = "disabled";
+ };
+ };
+
+};
diff --git a/arch/arm64/boot/dts/lg/Makefile b/arch/arm64/boot/dts/lg/Makefile
new file mode 100644
index 000000000000..b0cc64964171
--- /dev/null
+++ b/arch/arm64/boot/dts/lg/Makefile
@@ -0,0 +1,5 @@
+dtb-$(CONFIG_ARCH_LG1K) += lg1312-ref.dtb
+
+always := $(dtb-y)
+subdir-y := $(dts-dirs)
+clean-files := *.dtb
diff --git a/arch/arm64/boot/dts/lg/lg1312-ref.dts b/arch/arm64/boot/dts/lg/lg1312-ref.dts
new file mode 100644
index 000000000000..6d78d6bc7f9c
--- /dev/null
+++ b/arch/arm64/boot/dts/lg/lg1312-ref.dts
@@ -0,0 +1,36 @@
+/*
+ * dts file for lg1312 Reference Board.
+ *
+ * Copyright (C) 2016, LG Electronics
+ */
+
+/dts-v1/;
+
+#include "lg1312.dtsi"
+
+/ {
+ #address-cells = <2>;
+ #size-cells = <1>;
+
+ model = "LG Electronics, DTV SoC LG1312 Reference Board";
+ compatible = "lge,lg1312-ref", "lge,lg1312";
+
+ aliases {
+ serial0 = &uart0;
+ serial1 = &uart1;
+ serial2 = &uart2;
+ };
+
+ memory {
+ device_type = "memory";
+ reg = <0x0 0x00000000 0x20000000>;
+ };
+
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+};
+
+&uart0 {
+ status = "okay";
+};
diff --git a/arch/arm64/boot/dts/lg/lg1312.dtsi b/arch/arm64/boot/dts/lg/lg1312.dtsi
new file mode 100644
index 000000000000..3a4e9a2ab313
--- /dev/null
+++ b/arch/arm64/boot/dts/lg/lg1312.dtsi
@@ -0,0 +1,351 @@
+/*
+ * dts file for lg1312 SoC
+ *
+ * Copyright (C) 2016, LG Electronics
+ */
+
+#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/interrupt-controller/arm-gic.h>
+
+/ {
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ compatible = "lge,lg1312";
+ interrupt-parent = <&gic>;
+
+ cpus {
+ #address-cells = <2>;
+ #size-cells = <0>;
+
+ cpu0: cpu@0 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a53", "arm,armv8";
+ reg = <0x0 0x0>;
+ next-level-cache = <&L2_0>;
+ };
+ cpu1: cpu@1 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a53", "arm,armv8";
+ reg = <0x0 0x1>;
+ enable-method = "psci";
+ next-level-cache = <&L2_0>;
+ };
+ cpu2: cpu@2 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a53", "arm,armv8";
+ reg = <0x0 0x2>;
+ enable-method = "psci";
+ next-level-cache = <&L2_0>;
+ };
+ cpu3: cpu@3 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a53", "arm,armv8";
+ reg = <0x0 0x3>;
+ enable-method = "psci";
+ next-level-cache = <&L2_0>;
+ };
+ L2_0: l2-cache0 {
+ compatible = "cache";
+ };
+ };
+
+ psci {
+ compatible = "arm,psci-0.2", "arm,psci";
+ method = "smc";
+ cpu_suspend = <0x84000001>;
+ cpu_off = <0x84000002>;
+ cpu_on = <0x84000003>;
+ };
+
+ gic: interrupt-controller@c0001000 {
+ #interrupt-cells = <3>;
+ compatible = "arm,gic-400";
+ interrupt-controller;
+ reg = <0x0 0xc0001000 0x1000>,
+ <0x0 0xc0002000 0x2000>,
+ <0x0 0xc0004000 0x2000>,
+ <0x0 0xc0006000 0x2000>;
+ };
+
+ pmu {
+ compatible = "arm,cortex-a53-pmu";
+ interrupts = <GIC_SPI 149 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 150 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 151 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 152 IRQ_TYPE_LEVEL_HIGH>;
+ interrupt-affinity = <&cpu0>,
+ <&cpu1>,
+ <&cpu2>,
+ <&cpu3>;
+ };
+
+ timer {
+ compatible = "arm,armv8-timer";
+ interrupts = <GIC_PPI 13 (GIC_CPU_MASK_RAW(0x0f) |
+ IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 14 (GIC_CPU_MASK_RAW(0x0f) |
+ IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 11 (GIC_CPU_MASK_RAW(0x0f) |
+ IRQ_TYPE_LEVEL_LOW)>,
+ <GIC_PPI 10 (GIC_CPU_MASK_RAW(0x0f) |
+ IRQ_TYPE_LEVEL_LOW)>;
+ };
+
+ clk_bus: clk_bus {
+ #clock-cells = <0>;
+
+ compatible = "fixed-clock";
+ clock-frequency = <198000000>;
+ clock-output-names = "BUSCLK";
+ };
+
+ soc {
+ #address-cells = <2>;
+ #size-cells = <1>;
+
+ compatible = "simple-bus";
+ interrupt-parent = <&gic>;
+ ranges;
+
+ eth0: ethernet@c1b00000 {
+ compatible = "cdns,gem";
+ reg = <0x0 0xc1b00000 0x1000>;
+ interrupts = <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk_bus>, <&clk_bus>;
+ clock-names = "hclk", "pclk";
+ phy-mode = "rmii";
+ /* Filled in by boot */
+ mac-address = [ 00 00 00 00 00 00 ];
+ };
+ };
+
+ amba {
+ #address-cells = <2>;
+ #size-cells = <1>;
+ #interrupt-cells = <3>;
+
+ compatible = "arm,amba-bus";
+ interrupt-parent = <&gic>;
+ ranges;
+
+ timers: timer@fd100000 {
+ compatible = "arm,sp804";
+ reg = <0x0 0xfd100000 0x1000>;
+ interrupts = <GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ };
+ wdog: watchdog@fd200000 {
+ compatible = "arm,sp805", "arm,primecell";
+ reg = <0x0 0xfd200000 0x1000>;
+ interrupts = <GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ };
+ uart0: serial@fe000000 {
+ compatible = "arm,pl011", "arm,primecell";
+ reg = <0x0 0xfe000000 0x1000>;
+ interrupts = <GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ status="disabled";
+ };
+ uart1: serial@fe100000 {
+ compatible = "arm,pl011", "arm,primecell";
+ reg = <0x0 0xfe100000 0x1000>;
+ interrupts = <GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ status="disabled";
+ };
+ uart2: serial@fe200000 {
+ compatible = "arm,pl011", "arm,primecell";
+ reg = <0x0 0xfe200000 0x1000>;
+ interrupts = <GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ status="disabled";
+ };
+ spi0: ssp@fe800000 {
+ compatible = "arm,pl022", "arm,primecell";
+ reg = <0x0 0xfe800000 0x1000>;
+ interrupts = <GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ };
+ spi1: ssp@fe900000 {
+ compatible = "arm,pl022", "arm,primecell";
+ reg = <0x0 0xfe900000 0x1000>;
+ interrupts = <GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ };
+ dmac0: dma@c1128000 {
+ compatible = "arm,pl330", "arm,primecell";
+ reg = <0x0 0xc1128000 0x1000>;
+ interrupts = <GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ };
+ gpio0: gpio@fd400000 {
+ #gpio-cells = <2>;
+ compatible = "arm,pl061", "arm,primecell";
+ gpio-controller;
+ reg = <0x0 0xfd400000 0x1000>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ status="disabled";
+ };
+ gpio1: gpio@fd410000 {
+ #gpio-cells = <2>;
+ compatible = "arm,pl061", "arm,primecell";
+ gpio-controller;
+ reg = <0x0 0xfd410000 0x1000>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ status="disabled";
+ };
+ gpio2: gpio@fd420000 {
+ #gpio-cells = <2>;
+ compatible = "arm,pl061", "arm,primecell";
+ gpio-controller;
+ reg = <0x0 0xfd420000 0x1000>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ status="disabled";
+ };
+ gpio3: gpio@fd430000 {
+ #gpio-cells = <2>;
+ compatible = "arm,pl061", "arm,primecell";
+ gpio-controller;
+ reg = <0x0 0xfd430000 0x1000>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ };
+ gpio4: gpio@fd440000 {
+ #gpio-cells = <2>;
+ compatible = "arm,pl061", "arm,primecell";
+ gpio-controller;
+ reg = <0x0 0xfd440000 0x1000>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ status="disabled";
+ };
+ gpio5: gpio@fd450000 {
+ #gpio-cells = <2>;
+ compatible = "arm,pl061", "arm,primecell";
+ gpio-controller;
+ reg = <0x0 0xfd450000 0x1000>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ status="disabled";
+ };
+ gpio6: gpio@fd460000 {
+ #gpio-cells = <2>;
+ compatible = "arm,pl061", "arm,primecell";
+ gpio-controller;
+ reg = <0x0 0xfd460000 0x1000>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ status="disabled";
+ };
+ gpio7: gpio@fd470000 {
+ #gpio-cells = <2>;
+ compatible = "arm,pl061", "arm,primecell";
+ gpio-controller;
+ reg = <0x0 0xfd470000 0x1000>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ status="disabled";
+ };
+ gpio8: gpio@fd480000 {
+ #gpio-cells = <2>;
+ compatible = "arm,pl061", "arm,primecell";
+ gpio-controller;
+ reg = <0x0 0xfd480000 0x1000>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ status="disabled";
+ };
+ gpio9: gpio@fd490000 {
+ #gpio-cells = <2>;
+ compatible = "arm,pl061", "arm,primecell";
+ gpio-controller;
+ reg = <0x0 0xfd490000 0x1000>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ status="disabled";
+ };
+ gpio10: gpio@fd4a0000 {
+ #gpio-cells = <2>;
+ compatible = "arm,pl061", "arm,primecell";
+ gpio-controller;
+ reg = <0x0 0xfd4a0000 0x1000>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ status="disabled";
+ };
+ gpio11: gpio@fd4b0000 {
+ #gpio-cells = <2>;
+ compatible = "arm,pl061", "arm,primecell";
+ gpio-controller;
+ reg = <0x0 0xfd4b0000 0x1000>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ };
+ gpio12: gpio@fd4c0000 {
+ #gpio-cells = <2>;
+ compatible = "arm,pl061", "arm,primecell";
+ gpio-controller;
+ reg = <0x0 0xfd4c0000 0x1000>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ status="disabled";
+ };
+ gpio13: gpio@fd4d0000 {
+ #gpio-cells = <2>;
+ compatible = "arm,pl061", "arm,primecell";
+ gpio-controller;
+ reg = <0x0 0xfd4d0000 0x1000>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ status="disabled";
+ };
+ gpio14: gpio@fd4e0000 {
+ #gpio-cells = <2>;
+ compatible = "arm,pl061", "arm,primecell";
+ gpio-controller;
+ reg = <0x0 0xfd4e0000 0x1000>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ status="disabled";
+ };
+ gpio15: gpio@fd4f0000 {
+ #gpio-cells = <2>;
+ compatible = "arm,pl061", "arm,primecell";
+ gpio-controller;
+ reg = <0x0 0xfd4f0000 0x1000>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ status="disabled";
+ };
+ gpio16: gpio@fd500000 {
+ #gpio-cells = <2>;
+ compatible = "arm,pl061", "arm,primecell";
+ gpio-controller;
+ reg = <0x0 0xfd500000 0x1000>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ status="disabled";
+ };
+ gpio17: gpio@fd510000 {
+ #gpio-cells = <2>;
+ compatible = "arm,pl061", "arm,primecell";
+ gpio-controller;
+ reg = <0x0 0xfd510000 0x1000>;
+ clocks = <&clk_bus>;
+ clock-names = "apb_pclk";
+ };
+ };
+};
diff --git a/arch/arm64/boot/dts/marvell/armada-3720-db.dts b/arch/arm64/boot/dts/marvell/armada-3720-db.dts
index 359050154511..86110a6ae330 100644
--- a/arch/arm64/boot/dts/marvell/armada-3720-db.dts
+++ b/arch/arm64/boot/dts/marvell/armada-3720-db.dts
@@ -60,27 +60,19 @@
device_type = "memory";
reg = <0x00000000 0x00000000 0x00000000 0x20000000>;
};
+};
- soc {
- internal-regs {
- /*
- * Exported on the micro USB connector CON32
- * through an FTDI
- */
- uart0: serial@12000 {
- status = "okay";
- };
-
- /* CON31 */
- usb3@58000 {
- status = "okay";
- };
+/* CON3 */
+&sata {
+ status = "okay";
+};
- /* CON3 */
- sata@e0000 {
- status = "okay";
- };
- };
- };
+/* Exported on the micro USB connector CON32 through an FTDI */
+&uart0 {
+ status = "okay";
};
+/* CON31 */
+&usb3 {
+ status = "okay";
+};
diff --git a/arch/arm64/boot/dts/marvell/armada-372x.dtsi b/arch/arm64/boot/dts/marvell/armada-372x.dtsi
index f292a00ce97c..5120296596c2 100644
--- a/arch/arm64/boot/dts/marvell/armada-372x.dtsi
+++ b/arch/arm64/boot/dts/marvell/armada-372x.dtsi
@@ -59,5 +59,4 @@
enable-method = "psci";
};
};
-
};
diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
index ba9df7ff2a72..9e2efb882983 100644
--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
@@ -105,14 +105,28 @@
status = "disabled";
};
- usb3@58000 {
- compatible = "generic-xhci";
+ usb3: usb@58000 {
+ compatible = "marvell,armada3700-xhci",
+ "generic-xhci";
reg = <0x58000 0x4000>;
interrupts = <GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
};
- sata@e0000 {
+ xor@60900 {
+ compatible = "marvell,armada-3700-xor";
+ reg = <0x60900 0x100
+ 0x60b00 0x100>;
+
+ xor10 {
+ interrupts = <GIC_SPI 47 IRQ_TYPE_LEVEL_HIGH>;
+ };
+ xor11 {
+ interrupts = <GIC_SPI 48 IRQ_TYPE_LEVEL_HIGH>;
+ };
+ };
+
+ sata: sata@e0000 {
compatible = "marvell,armada-3700-ahci";
reg = <0xe0000 0x2000>;
interrupts = <GIC_SPI 27 IRQ_TYPE_LEVEL_HIGH>;
diff --git a/arch/arm64/boot/dts/marvell/armada-7020.dtsi b/arch/arm64/boot/dts/marvell/armada-7020.dtsi
index 52575756d0be..975e73302753 100644
--- a/arch/arm64/boot/dts/marvell/armada-7020.dtsi
+++ b/arch/arm64/boot/dts/marvell/armada-7020.dtsi
@@ -46,6 +46,7 @@
*/
#include "armada-ap806-dual.dtsi"
+#include "armada-cp110-master.dtsi"
/ {
model = "Marvell Armada 7020";
diff --git a/arch/arm64/boot/dts/marvell/armada-7040-db.dts b/arch/arm64/boot/dts/marvell/armada-7040-db.dts
index 064a251346dd..070b589680c5 100644
--- a/arch/arm64/boot/dts/marvell/armada-7040-db.dts
+++ b/arch/arm64/boot/dts/marvell/armada-7040-db.dts
@@ -51,42 +51,98 @@
compatible = "marvell,armada7040-db", "marvell,armada7040",
"marvell,armada-ap806-quad", "marvell,armada-ap806";
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
memory@00000000 {
device_type = "memory";
reg = <0x0 0x0 0x0 0x80000000>;
};
+};
+
+&i2c0 {
+ status = "okay";
+ clock-frequency = <100000>;
+};
+
+&spi0 {
+ status = "okay";
+
+ spi-flash@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "jedec,spi-nor";
+ reg = <0>;
+ spi-max-frequency = <10000000>;
- ap806 {
- config-space {
- spi@510600 {
- status = "okay";
-
- spi-flash@0 {
- #address-cells = <1>;
- #size-cells = <1>;
- compatible = "n25q128a13";
- reg = <0>; /* Chip select 0 */
- spi-max-frequency = <10000000>;
-
- partition@0 {
- label = "U-Boot";
- reg = <0 0x200000>;
- };
- partition@400000 {
- label = "Filesystem";
- reg = <0x200000 0xce0000>;
- };
- };
+ partitions {
+ compatible = "fixed-partitions";
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+ partition@0 {
+ label = "U-Boot";
+ reg = <0 0x200000>;
+ };
+ partition@400000 {
+ label = "Filesystem";
+ reg = <0x200000 0xce0000>;
};
+ };
+ };
+};
+
+&uart0 {
+ status = "okay";
+};
- i2c@511000 {
- status = "okay";
- clock-frequency = <100000>;
+
+&cpm_pcie2 {
+ status = "okay";
+};
+
+&cpm_i2c0 {
+ status = "okay";
+ clock-frequency = <100000>;
+};
+
+&cpm_spi1 {
+ status = "okay";
+
+ spi-flash@0 {
+ #address-cells = <0x1>;
+ #size-cells = <0x1>;
+ compatible = "jedec,spi-nor";
+ reg = <0x0>;
+ spi-max-frequency = <20000000>;
+
+ partitions {
+ compatible = "fixed-partitions";
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+ partition@0 {
+ label = "U-Boot";
+ reg = <0x0 0x200000>;
};
- serial@512000 {
- status = "okay";
+ partition@400000 {
+ label = "Filesystem";
+ reg = <0x200000 0xe00000>;
};
};
};
};
+
+&cpm_sata0 {
+ status = "okay";
+};
+
+&cpm_usb3_0 {
+ status = "okay";
+};
+
+&cpm_usb3_1 {
+ status = "okay";
+};
diff --git a/arch/arm64/boot/dts/marvell/armada-7040.dtsi b/arch/arm64/boot/dts/marvell/armada-7040.dtsi
index 7a2de8bf7907..78d995d62707 100644
--- a/arch/arm64/boot/dts/marvell/armada-7040.dtsi
+++ b/arch/arm64/boot/dts/marvell/armada-7040.dtsi
@@ -46,6 +46,7 @@
*/
#include "armada-ap806-quad.dtsi"
+#include "armada-cp110-master.dtsi"
/ {
model = "Marvell Armada 7040";
diff --git a/arch/arm64/boot/dts/marvell/armada-8020.dtsi b/arch/arm64/boot/dts/marvell/armada-8020.dtsi
index 73d69d99513f..3753c1c6d54d 100644
--- a/arch/arm64/boot/dts/marvell/armada-8020.dtsi
+++ b/arch/arm64/boot/dts/marvell/armada-8020.dtsi
@@ -46,6 +46,7 @@
*/
#include "armada-ap806-dual.dtsi"
+#include "armada-cp110-master.dtsi"
/ {
model = "Marvell Armada 8020";
diff --git a/arch/arm64/boot/dts/marvell/armada-8040.dtsi b/arch/arm64/boot/dts/marvell/armada-8040.dtsi
index a1406a40959e..8bd0d8f8ad4c 100644
--- a/arch/arm64/boot/dts/marvell/armada-8040.dtsi
+++ b/arch/arm64/boot/dts/marvell/armada-8040.dtsi
@@ -46,6 +46,7 @@
*/
#include "armada-ap806-quad.dtsi"
+#include "armada-cp110-master.dtsi"
/ {
model = "Marvell Armada 8040";
diff --git a/arch/arm64/boot/dts/marvell/armada-ap806-dual.dtsi b/arch/arm64/boot/dts/marvell/armada-ap806-dual.dtsi
index f25c5c17fad7..95a1ff60f6c1 100644
--- a/arch/arm64/boot/dts/marvell/armada-ap806-dual.dtsi
+++ b/arch/arm64/boot/dts/marvell/armada-ap806-dual.dtsi
@@ -68,4 +68,3 @@
};
};
};
-
diff --git a/arch/arm64/boot/dts/marvell/armada-ap806-quad.dtsi b/arch/arm64/boot/dts/marvell/armada-ap806-quad.dtsi
index baa7d9a516b3..ba43a4357b89 100644
--- a/arch/arm64/boot/dts/marvell/armada-ap806-quad.dtsi
+++ b/arch/arm64/boot/dts/marvell/armada-ap806-quad.dtsi
@@ -79,6 +79,4 @@
enable-method = "psci";
};
};
-
};
-
diff --git a/arch/arm64/boot/dts/marvell/armada-ap806.dtsi b/arch/arm64/boot/dts/marvell/armada-ap806.dtsi
index 556a92bcc2f6..20d256b32670 100644
--- a/arch/arm64/boot/dts/marvell/armada-ap806.dtsi
+++ b/arch/arm64/boot/dts/marvell/armada-ap806.dtsi
@@ -54,12 +54,16 @@
#address-cells = <2>;
#size-cells = <2>;
+ aliases {
+ serial0 = &uart0;
+ serial1 = &uart1;
+ };
+
psci {
compatible = "arm,psci-0.2";
method = "smc";
};
-
ap806 {
#address-cells = <2>;
#size-cells = <2>;
@@ -136,7 +140,7 @@
marvell,spi-base = <128>, <136>, <144>, <152>;
};
- xor0@400000 {
+ xor@400000 {
compatible = "marvell,mv-xor-v2";
reg = <0x400000 0x1000>,
<0x410000 0x1000>;
@@ -144,7 +148,7 @@
dma-coherent;
};
- xor1@420000 {
+ xor@420000 {
compatible = "marvell,mv-xor-v2";
reg = <0x420000 0x1000>,
<0x430000 0x1000>;
@@ -152,7 +156,7 @@
dma-coherent;
};
- xor2@440000 {
+ xor@440000 {
compatible = "marvell,mv-xor-v2";
reg = <0x440000 0x1000>,
<0x450000 0x1000>;
@@ -160,7 +164,7 @@
dma-coherent;
};
- xor3@460000 {
+ xor@460000 {
compatible = "marvell,mv-xor-v2";
reg = <0x460000 0x1000>,
<0x470000 0x1000>;
@@ -175,63 +179,51 @@
#size-cells = <0>;
cell-index = <0>;
interrupts = <GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>;
- clocks = <&ringclk 2>;
+ clocks = <&ap_syscon 3>;
status = "disabled";
};
i2c0: i2c@511000 {
- compatible = "marvell,mv64xxx-i2c";
+ compatible = "marvell,mv78230-i2c";
reg = <0x511000 0x20>;
#address-cells = <1>;
#size-cells = <0>;
interrupts = <GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>;
timeout-ms = <1000>;
- clocks = <&ringclk 2>;
+ clocks = <&ap_syscon 3>;
status = "disabled";
};
- serial@512000 {
+ uart0: serial@512000 {
compatible = "snps,dw-apb-uart";
reg = <0x512000 0x100>;
reg-shift = <2>;
interrupts = <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>;
reg-io-width = <1>;
- clocks = <&ringclk 2>;
+ clocks = <&ap_syscon 3>;
status = "disabled";
};
- serial@512100 {
+ uart1: serial@512100 {
compatible = "snps,dw-apb-uart";
reg = <0x512100 0x100>;
reg-shift = <2>;
interrupts = <GIC_SPI 29 IRQ_TYPE_LEVEL_HIGH>;
reg-io-width = <1>;
- clocks = <&ringclk 2>;
+ clocks = <&ap_syscon 3>;
status = "disabled";
};
- dfx-server@6f8000 {
- compatible = "simple-mfd", "syscon";
- reg = <0x6f8000 0x70000>;
-
- coreclk: clk@204 {
- compatible = "marvell,armada-ap806-core-clock";
- #clock-cells = <1>;
- clock-output-names = "ddr", "ring", "cpu";
- };
-
- ringclk: clk@250 {
- compatible = "marvell,armada-ap806-ring-clock";
- #clock-cells = <1>;
- clock-output-names = "ring-0", "ring-2",
- "ring-3", "ring-4",
- "ring-5";
- clocks = <&coreclk 1>;
- };
+ ap_syscon: system-controller@6f4000 {
+ compatible = "marvell,ap806-system-controller",
+ "syscon";
+ #clock-cells = <1>;
+ clock-output-names = "ap-cpu-cluster-0",
+ "ap-cpu-cluster-1",
+ "ap-fixed", "ap-mss";
+ reg = <0x6f4000 0x1000>;
};
};
};
-
};
-
diff --git a/arch/arm64/boot/dts/marvell/armada-cp110-master.dtsi b/arch/arm64/boot/dts/marvell/armada-cp110-master.dtsi
new file mode 100644
index 000000000000..367138bae3e0
--- /dev/null
+++ b/arch/arm64/boot/dts/marvell/armada-cp110-master.dtsi
@@ -0,0 +1,228 @@
+/*
+ * Copyright (C) 2016 Marvell Technology Group Ltd.
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPLv2 or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/*
+ * Device Tree file for Marvell Armada CP110 Master.
+ */
+
+/ {
+ cp110-master {
+ #address-cells = <2>;
+ #size-cells = <2>;
+ compatible = "simple-bus";
+ interrupt-parent = <&gic>;
+ ranges;
+
+ config-space {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "simple-bus";
+ interrupt-parent = <&gic>;
+ ranges = <0x0 0x0 0xf2000000 0x2000000>;
+
+ cpm_syscon0: system-controller@440000 {
+ compatible = "marvell,cp110-system-controller0",
+ "syscon";
+ reg = <0x440000 0x1000>;
+ #clock-cells = <2>;
+ core-clock-output-names =
+ "cpm-apll", "cpm-ppv2-core", "cpm-eip",
+ "cpm-core", "cpm-nand-core";
+ gate-clock-output-names =
+ "cpm-audio", "cpm-communit", "cpm-nand",
+ "cpm-ppv2", "cpm-sdio", "cpm-mg-domain",
+ "cpm-mg-core", "cpm-xor1", "cpm-xor0",
+ "cpm-gop-dp", "none", "cpm-pcie_x10",
+ "cpm-pcie_x11", "cpm-pcie_x4", "cpm-pcie-xor",
+ "cpm-sata", "cpm-sata-usb", "cpm-main",
+ "cpm-sd-mmc", "none", "none",
+ "cpm-slow-io", "cpm-usb3h0", "cpm-usb3h1",
+ "cpm-usb3dev", "cpm-eip150", "cpm-eip197";
+ };
+
+ cpm_sata0: sata@540000 {
+ compatible = "marvell,armada-8k-ahci";
+ reg = <0x540000 0x30000>;
+ interrupts = <GIC_SPI 63 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpm_syscon0 1 15>;
+ status = "disabled";
+ };
+
+ cpm_usb3_0: usb3@500000 {
+ compatible = "marvell,armada-8k-xhci",
+ "generic-xhci";
+ reg = <0x500000 0x4000>;
+ dma-coherent;
+ interrupts = <GIC_SPI 62 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpm_syscon0 1 22>;
+ status = "disabled";
+ };
+
+ cpm_usb3_1: usb3@510000 {
+ compatible = "marvell,armada-8k-xhci",
+ "generic-xhci";
+ reg = <0x510000 0x4000>;
+ dma-coherent;
+ interrupts = <GIC_SPI 61 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpm_syscon0 1 23>;
+ status = "disabled";
+ };
+
+ cpm_spi0: spi@700600 {
+ compatible = "marvell,armada-380-spi";
+ reg = <0x700600 0x50>;
+ #address-cells = <0x1>;
+ #size-cells = <0x0>;
+ cell-index = <1>;
+ clocks = <&cpm_syscon0 0 3>;
+ status = "disabled";
+ };
+
+ cpm_spi1: spi@700680 {
+ compatible = "marvell,armada-380-spi";
+ reg = <0x700680 0x50>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ cell-index = <2>;
+ clocks = <&cpm_syscon0 1 21>;
+ status = "disabled";
+ };
+
+ cpm_i2c0: i2c@701000 {
+ compatible = "marvell,mv78230-i2c";
+ reg = <0x701000 0x20>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ interrupts = <GIC_SPI 86 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpm_syscon0 1 21>;
+ status = "disabled";
+ };
+
+ cpm_i2c1: i2c@701100 {
+ compatible = "marvell,mv78230-i2c";
+ reg = <0x701100 0x20>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ interrupts = <GIC_SPI 87 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpm_syscon0 1 21>;
+ status = "disabled";
+ };
+ };
+
+ cpm_pcie0: pcie@f2600000 {
+ compatible = "marvell,armada8k-pcie", "snps,dw-pcie";
+ reg = <0 0xf2600000 0 0x10000>,
+ <0 0xf6f00000 0 0x80000>;
+ reg-names = "ctrl", "config";
+ #address-cells = <3>;
+ #size-cells = <2>;
+ #interrupt-cells = <1>;
+ device_type = "pci";
+ dma-coherent;
+
+ bus-range = <0 0xff>;
+ ranges =
+ /* downstream I/O */
+ <0x81000000 0 0xf9000000 0 0xf9000000 0 0x10000
+ /* non-prefetchable memory */
+ 0x82000000 0 0xf6000000 0 0xf6000000 0 0xf00000>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>;
+ num-lanes = <1>;
+ clocks = <&cpm_syscon0 1 13>;
+ status = "disabled";
+ };
+
+ cpm_pcie1: pcie@f2620000 {
+ compatible = "marvell,armada8k-pcie", "snps,dw-pcie";
+ reg = <0 0xf2620000 0 0x10000>,
+ <0 0xf7f00000 0 0x80000>;
+ reg-names = "ctrl", "config";
+ #address-cells = <3>;
+ #size-cells = <2>;
+ #interrupt-cells = <1>;
+ device_type = "pci";
+ dma-coherent;
+
+ bus-range = <0 0xff>;
+ ranges =
+ /* downstream I/O */
+ <0x81000000 0 0xf9010000 0 0xf9010000 0 0x10000
+ /* non-prefetchable memory */
+ 0x82000000 0 0xf7000000 0 0xf7000000 0 0xf00000>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 34 IRQ_TYPE_LEVEL_HIGH>;
+
+ num-lanes = <1>;
+ clocks = <&cpm_syscon0 1 11>;
+ status = "disabled";
+ };
+
+ cpm_pcie2: pcie@f2640000 {
+ compatible = "marvell,armada8k-pcie", "snps,dw-pcie";
+ reg = <0 0xf2640000 0 0x10000>,
+ <0 0xf8f00000 0 0x80000>;
+ reg-names = "ctrl", "config";
+ #address-cells = <3>;
+ #size-cells = <2>;
+ #interrupt-cells = <1>;
+ device_type = "pci";
+ dma-coherent;
+
+ bus-range = <0 0xff>;
+ ranges =
+ /* downstream I/O */
+ <0x81000000 0 0xf9020000 0 0xf9020000 0 0x10000
+ /* non-prefetchable memory */
+ 0x82000000 0 0xf8000000 0 0xf8000000 0 0xf00000>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &gic 0 GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>;
+ interrupts = <GIC_SPI 33 IRQ_TYPE_LEVEL_HIGH>;
+
+ num-lanes = <1>;
+ clocks = <&cpm_syscon0 1 12>;
+ status = "disabled";
+ };
+ };
+};
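
A note on the two-cell clock specifiers used throughout this file (a reading of the names listed in cpm_syscon0 above, not a statement of the binding): the first cell appears to select the clock group (0 for the core clocks, 1 for the gate clocks) and the second cell the index within that group. Annotated examples taken from the nodes above:

	/* core clock 3 is "cpm-core" in core-clock-output-names */
	clocks = <&cpm_syscon0 0 3>;	/* as used by cpm_spi0 */
	/* gate clock 15 is "cpm-sata" in gate-clock-output-names */
	clocks = <&cpm_syscon0 1 15>;	/* as used by cpm_sata0 */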
diff --git a/arch/arm64/boot/dts/mediatek/mt8173.dtsi b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
index eab7efc2302d..05f89c4a5413 100644
--- a/arch/arm64/boot/dts/mediatek/mt8173.dtsi
+++ b/arch/arm64/boot/dts/mediatek/mt8173.dtsi
@@ -125,6 +125,49 @@
clock-output-names = "cpum_ck";
};
+ thermal-zones {
+ cpu_thermal: cpu_thermal {
+ polling-delay-passive = <1000>; /* milliseconds */
+ polling-delay = <1000>; /* milliseconds */
+
+ thermal-sensors = <&thermal>;
+ sustainable-power = <1500>; /* milliwatts */
+
+ trips {
+ threshold: trip-point@0 {
+ temperature = <68000>;
+ hysteresis = <2000>;
+ type = "passive";
+ };
+
+ target: trip-point@1 {
+ temperature = <85000>;
+ hysteresis = <2000>;
+ type = "passive";
+ };
+
+ cpu_crit: cpu_crit@0 {
+ temperature = <115000>;
+ hysteresis = <2000>;
+ type = "critical";
+ };
+ };
+
+ cooling-maps {
+ map@0 {
+ trip = <&target>;
+ cooling-device = <&cpu0 0 0>;
+ contribution = <1024>;
+ };
+ map@1 {
+ trip = <&target>;
+ cooling-device = <&cpu2 0 0>;
+ contribution = <2048>;
+ };
+ };
+ };
+ };
+
timer {
compatible = "arm,armv8-timer";
interrupt-parent = <&gic>;
@@ -313,6 +356,11 @@
(GIC_CPU_MASK_SIMPLE(4) | IRQ_TYPE_LEVEL_HIGH)>;
};
+ auxadc: auxadc@11001000 {
+ compatible = "mediatek,mt8173-auxadc";
+ reg = <0 0x11001000 0 0x1000>;
+ };
+
uart0: serial@11002000 {
compatible = "mediatek,mt8173-uart",
"mediatek,mt6577-uart";
@@ -414,6 +462,18 @@
status = "disabled";
};
+ thermal: thermal@1100b000 {
+ #thermal-sensor-cells = <0>;
+ compatible = "mediatek,mt8173-thermal";
+ reg = <0 0x1100b000 0 0x1000>;
+ interrupts = <0 70 IRQ_TYPE_LEVEL_LOW>;
+ clocks = <&pericfg CLK_PERI_THERM>, <&pericfg CLK_PERI_AUXADC>;
+ clock-names = "therm", "auxadc";
+ resets = <&pericfg MT8173_PERI_THERM_SW_RST>;
+ mediatek,auxadc = <&auxadc>;
+ mediatek,apmixedsys = <&apmixedsys>;
+ };
+
nor_flash: spi@1100d000 {
compatible = "mediatek,mt8173-nor";
reg = <0 0x1100d000 0 0xe0>;
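The thermal-zones block added to mt8173.dtsi above follows the generic of-thermal shape: the zone points at a sensor through thermal-sensors, trip points set thresholds in millicelsius, and cooling-maps tie a trip to a throttling device. A minimal sketch of that pattern, with hypothetical node names and values rather than the MT8173 ones:

    thermal-zones {
        soc_thermal: soc-thermal {
            polling-delay-passive = <250>;     /* ms between reads while throttling */
            polling-delay = <1000>;            /* ms between reads otherwise */
            thermal-sensors = <&tsens>;        /* sensor phandle (+ id if #thermal-sensor-cells = 1) */

            trips {
                soc_alert: soc-alert {
                    temperature = <75000>;     /* millicelsius */
                    hysteresis = <2000>;
                    type = "passive";
                };
            };

            cooling-maps {
                map0 {
                    trip = <&soc_alert>;
                    cooling-device = <&cpu0 0 2>;  /* <phandle min-state max-state> */
                };
            };
        };
    };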
diff --git a/arch/arm64/boot/dts/nvidia/Makefile b/arch/arm64/boot/dts/nvidia/Makefile
index a7e865da1005..0f7cdf3e05c1 100644
--- a/arch/arm64/boot/dts/nvidia/Makefile
+++ b/arch/arm64/boot/dts/nvidia/Makefile
@@ -2,6 +2,7 @@ dtb-$(CONFIG_ARCH_TEGRA_132_SOC) += tegra132-norrin.dtb
dtb-$(CONFIG_ARCH_TEGRA_210_SOC) += tegra210-p2371-0000.dtb
dtb-$(CONFIG_ARCH_TEGRA_210_SOC) += tegra210-p2371-2180.dtb
dtb-$(CONFIG_ARCH_TEGRA_210_SOC) += tegra210-p2571.dtb
+dtb-$(CONFIG_ARCH_TEGRA_210_SOC) += tegra210-smaug.dtb
always := $(dtb-y)
clean-files := *.dtb
diff --git a/arch/arm64/boot/dts/nvidia/tegra132-norrin.dts b/arch/arm64/boot/dts/nvidia/tegra132-norrin.dts
index 62f33fc84e3e..759af96a6b49 100644
--- a/arch/arm64/boot/dts/nvidia/tegra132-norrin.dts
+++ b/arch/arm64/boot/dts/nvidia/tegra132-norrin.dts
@@ -8,19 +8,22 @@
compatible = "nvidia,norrin", "nvidia,tegra132", "nvidia,tegra124";
aliases {
- rtc0 = "/i2c@0,7000d000/as3722@40";
- rtc1 = "/rtc@0,7000e000";
+ rtc0 = "/i2c@7000d000/as3722@40";
+ rtc1 = "/rtc@7000e000";
+ serial0 = &uarta;
};
- chosen { };
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
memory {
device_type = "memory";
reg = <0x0 0x80000000 0x0 0x80000000>;
};
- host1x@0,50000000 {
- hdmi@0,54280000 {
+ host1x@50000000 {
+ hdmi@54280000 {
status = "disabled";
vdd-supply = <&vdd_3v3_hdmi>;
@@ -32,26 +35,26 @@
<&gpio TEGRA_GPIO(N, 7) GPIO_ACTIVE_HIGH>;
};
- sor@0,54540000 {
+ sor@54540000 {
status = "okay";
nvidia,dpaux = <&dpaux>;
nvidia,panel = <&panel>;
};
- dpaux: dpaux@0,545c0000 {
+ dpaux: dpaux@545c0000 {
vdd-supply = <&vdd_3v3_panel>;
status = "okay";
};
};
- gpu@0,57000000 {
+ gpu@57000000 {
status = "okay";
vdd-supply = <&vdd_gpu>;
};
- pinmux@0,70000868 {
+ pinmux@70000868 {
pinctrl-names = "default";
pinctrl-0 = <&pinmux_default>;
@@ -523,21 +526,21 @@
};
};
- serial@0,70006000 {
+ serial@70006000 {
status = "okay";
};
- pwm: pwm@0,7000a000 {
+ pwm: pwm@7000a000 {
status = "okay";
};
/* HDMI DDC */
- hdmi_ddc: i2c@0,7000c700 {
+ hdmi_ddc: i2c@7000c700 {
status = "okay";
clock-frequency = <100000>;
};
- i2c@0,7000d000 {
+ i2c@7000d000 {
status = "okay";
clock-frequency = <400000>;
@@ -744,7 +747,7 @@
};
};
- spi@0,7000d400 {
+ spi@7000d400 {
status = "okay";
ec: cros-ec@0 {
@@ -876,7 +879,7 @@
};
};
- pmc@0,7000e400 {
+ pmc@7000e400 {
nvidia,invert-interrupt;
nvidia,suspend-mode = <0>;
#wake-cells = <3>;
@@ -890,12 +893,12 @@
};
/* WIFI/BT module */
- sdhci@0,700b0000 {
+ sdhci@700b0000 {
status = "disabled";
};
/* external SD/MMC */
- sdhci@0,700b0400 {
+ sdhci@700b0400 {
cd-gpios = <&gpio TEGRA_GPIO(V, 2) GPIO_ACTIVE_LOW>;
power-gpios = <&gpio TEGRA_GPIO(R, 0) GPIO_ACTIVE_HIGH>;
wp-gpios = <&gpio TEGRA_GPIO(Q, 4) GPIO_ACTIVE_HIGH>;
@@ -905,35 +908,35 @@
};
/* EMMC 4.51 */
- sdhci@0,700b0600 {
+ sdhci@700b0600 {
status = "okay";
bus-width = <8>;
non-removable;
};
- usb@0,7d000000 {
+ usb@7d000000 {
status = "okay";
};
- usb-phy@0,7d000000 {
+ usb-phy@7d000000 {
status = "okay";
vbus-supply = <&vdd_usb1_vbus>;
};
- usb@0,7d004000 {
+ usb@7d004000 {
status = "okay";
};
- usb-phy@0,7d004000 {
+ usb-phy@7d004000 {
status = "okay";
vbus-supply = <&vdd_run_cam>;
};
- usb@0,7d008000 {
+ usb@7d008000 {
status = "okay";
};
- usb-phy@0,7d008000 {
+ usb-phy@7d008000 {
status = "okay";
vbus-supply = <&vdd_usb3_vbus>;
};
@@ -973,7 +976,7 @@
linux,input-type = <5>;
linux,code = <0>;
debounce-interval = <1>;
- gpio-key,wakeup;
+ wakeup-source;
};
power {
@@ -981,7 +984,7 @@
gpios = <&gpio TEGRA_GPIO(Q, 0) GPIO_ACTIVE_LOW>;
linux,code = <KEY_POWER>;
debounce-interval = <10>;
- gpio-key,wakeup;
+ wakeup-source;
};
};
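The two key nodes above also move from the deprecated, gpio-keys-specific gpio-key,wakeup flag to the generic wakeup-source property; either form marks the button as able to wake the system from suspend, but wakeup-source is what current bindings expect. Mirroring the power key from this patch, a wake-capable key ends up looking like:

    power {
        gpios = <&gpio TEGRA_GPIO(Q, 0) GPIO_ACTIVE_LOW>;
        linux,code = <KEY_POWER>;
        debounce-interval = <10>;
        wakeup-source;        /* replaces the old gpio-key,wakeup */
    };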
diff --git a/arch/arm64/boot/dts/nvidia/tegra132.dtsi b/arch/arm64/boot/dts/nvidia/tegra132.dtsi
index 6e28e41d7e3e..2013f8916084 100644
--- a/arch/arm64/boot/dts/nvidia/tegra132.dtsi
+++ b/arch/arm64/boot/dts/nvidia/tegra132.dtsi
@@ -11,7 +11,7 @@
#address-cells = <2>;
#size-cells = <2>;
- pcie-controller@0,01003000 {
+ pcie-controller@01003000 {
compatible = "nvidia,tegra124-pcie";
device_type = "pci";
reg = <0x0 0x01003000 0x0 0x00000800 /* PADS registers */
@@ -77,7 +77,7 @@
};
};
- host1x@0,50000000 {
+ host1x@50000000 {
compatible = "nvidia,tegra124-host1x", "simple-bus";
reg = <0x0 0x50000000 0x0 0x00034000>;
interrupts = <GIC_SPI 65 IRQ_TYPE_LEVEL_HIGH>, /* syncpt */
@@ -92,7 +92,7 @@
ranges = <0 0x54000000 0 0x54000000 0 0x01000000>;
- dc@0,54200000 {
+ dc@54200000 {
compatible = "nvidia,tegra124-dc";
reg = <0x0 0x54200000 0x0 0x00040000>;
interrupts = <GIC_SPI 73 IRQ_TYPE_LEVEL_HIGH>;
@@ -107,7 +107,7 @@
nvidia,head = <0>;
};
- dc@0,54240000 {
+ dc@54240000 {
compatible = "nvidia,tegra124-dc";
reg = <0x0 0x54240000 0x0 0x00040000>;
interrupts = <GIC_SPI 74 IRQ_TYPE_LEVEL_HIGH>;
@@ -122,7 +122,7 @@
nvidia,head = <1>;
};
- hdmi@0,54280000 {
+ hdmi@54280000 {
compatible = "nvidia,tegra124-hdmi";
reg = <0x0 0x54280000 0x0 0x00040000>;
interrupts = <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>;
@@ -134,7 +134,7 @@
status = "disabled";
};
- sor@0,54540000 {
+ sor@54540000 {
compatible = "nvidia,tegra124-sor";
reg = <0x0 0x54540000 0x0 0x00040000>;
interrupts = <GIC_SPI 76 IRQ_TYPE_LEVEL_HIGH>;
@@ -148,7 +148,7 @@
status = "disabled";
};
- dpaux: dpaux@0,545c0000 {
+ dpaux: dpaux@545c0000 {
compatible = "nvidia,tegra124-dpaux";
reg = <0x0 0x545c0000 0x0 0x00040000>;
interrupts = <GIC_SPI 159 IRQ_TYPE_LEVEL_HIGH>;
@@ -161,7 +161,7 @@
};
};
- gic: interrupt-controller@0,50041000 {
+ gic: interrupt-controller@50041000 {
compatible = "arm,cortex-a15-gic";
#interrupt-cells = <3>;
interrupt-controller;
@@ -174,7 +174,7 @@
interrupt-parent = <&gic>;
};
- gpu@0,57000000 {
+ gpu@57000000 {
compatible = "nvidia,gk20a";
reg = <0x0 0x57000000 0x0 0x01000000>,
<0x0 0x58000000 0x0 0x01000000>;
@@ -201,7 +201,7 @@
interrupt-parent = <&gic>;
};
- timer@0,60005000 {
+ timer@60005000 {
compatible = "nvidia,tegra124-timer", "nvidia,tegra20-timer";
reg = <0x0 0x60005000 0x0 0x400>;
interrupts = <GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
@@ -214,7 +214,7 @@
clock-names = "timer";
};
- tegra_car: clock@0,60006000 {
+ tegra_car: clock@60006000 {
compatible = "nvidia,tegra132-car";
reg = <0x0 0x60006000 0x0 0x1000>;
#clock-cells = <1>;
@@ -222,12 +222,12 @@
nvidia,external-memory-controller = <&emc>;
};
- flow-controller@0,60007000 {
+ flow-controller@60007000 {
compatible = "nvidia,tegra124-flowctrl";
reg = <0x0 0x60007000 0x0 0x1000>;
};
- actmon@0,6000c800 {
+ actmon@6000c800 {
compatible = "nvidia,tegra124-actmon";
reg = <0x0 0x6000c800 0x0 0x400>;
interrupts = <GIC_SPI 45 IRQ_TYPE_LEVEL_HIGH>;
@@ -238,7 +238,7 @@
reset-names = "actmon";
};
- gpio: gpio@0,6000d000 {
+ gpio: gpio@6000d000 {
compatible = "nvidia,tegra124-gpio", "nvidia,tegra30-gpio";
reg = <0x0 0x6000d000 0x0 0x1000>;
interrupts = <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>,
@@ -255,7 +255,7 @@
interrupt-controller;
};
- apbdma: dma@0,60020000 {
+ apbdma: dma@60020000 {
compatible = "nvidia,tegra124-apbdma", "nvidia,tegra148-apbdma";
reg = <0x0 0x60020000 0x0 0x1400>;
interrupts = <GIC_SPI 104 IRQ_TYPE_LEVEL_HIGH>,
@@ -297,13 +297,13 @@
#dma-cells = <1>;
};
- apbmisc@0,70000800 {
+ apbmisc@70000800 {
compatible = "nvidia,tegra124-apbmisc", "nvidia,tegra20-apbmisc";
reg = <0x0 0x70000800 0x0 0x64>, /* Chip revision */
<0x0 0x7000e864 0x0 0x04>; /* Strapping options */
};
- pinmux: pinmux@0,70000868 {
+ pinmux: pinmux@70000868 {
compatible = "nvidia,tegra124-pinmux";
reg = <0x0 0x70000868 0x0 0x164>, /* Pad control registers */
<0x0 0x70003000 0x0 0x434>, /* Mux registers */
@@ -315,10 +315,10 @@
* driver and APB DMA based serial driver for higher baudrate
* and performance. To enable the 8250 based driver, the compatible
* is "nvidia,tegra124-uart", "nvidia,tegra20-uart" and to enable
- * the APB DMA based serial driver, the comptible is
+ * the APB DMA based serial driver, the compatible is
* "nvidia,tegra124-hsuart", "nvidia,tegra30-hsuart".
*/
- uarta: serial@0,70006000 {
+ uarta: serial@70006000 {
compatible = "nvidia,tegra124-uart", "nvidia,tegra20-uart";
reg = <0x0 0x70006000 0x0 0x40>;
reg-shift = <2>;
@@ -332,7 +332,7 @@
status = "disabled";
};
- uartb: serial@0,70006040 {
+ uartb: serial@70006040 {
compatible = "nvidia,tegra124-uart", "nvidia,tegra20-uart";
reg = <0x0 0x70006040 0x0 0x40>;
reg-shift = <2>;
@@ -346,7 +346,7 @@
status = "disabled";
};
- uartc: serial@0,70006200 {
+ uartc: serial@70006200 {
compatible = "nvidia,tegra124-uart", "nvidia,tegra20-uart";
reg = <0x0 0x70006200 0x0 0x40>;
reg-shift = <2>;
@@ -360,7 +360,7 @@
status = "disabled";
};
- uartd: serial@0,70006300 {
+ uartd: serial@70006300 {
compatible = "nvidia,tegra124-uart", "nvidia,tegra20-uart";
reg = <0x0 0x70006300 0x0 0x40>;
reg-shift = <2>;
@@ -374,7 +374,7 @@
status = "disabled";
};
- pwm: pwm@0,7000a000 {
+ pwm: pwm@7000a000 {
compatible = "nvidia,tegra124-pwm", "nvidia,tegra20-pwm";
reg = <0x0 0x7000a000 0x0 0x100>;
#pwm-cells = <2>;
@@ -385,7 +385,7 @@
status = "disabled";
};
- i2c@0,7000c000 {
+ i2c@7000c000 {
compatible = "nvidia,tegra124-i2c", "nvidia,tegra114-i2c";
reg = <0x0 0x7000c000 0x0 0x100>;
interrupts = <GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>;
@@ -400,7 +400,7 @@
status = "disabled";
};
- i2c@0,7000c400 {
+ i2c@7000c400 {
compatible = "nvidia,tegra124-i2c", "nvidia,tegra114-i2c";
reg = <0x0 0x7000c400 0x0 0x100>;
interrupts = <GIC_SPI 84 IRQ_TYPE_LEVEL_HIGH>;
@@ -415,7 +415,7 @@
status = "disabled";
};
- i2c@0,7000c500 {
+ i2c@7000c500 {
compatible = "nvidia,tegra124-i2c", "nvidia,tegra114-i2c";
reg = <0x0 0x7000c500 0x0 0x100>;
interrupts = <GIC_SPI 92 IRQ_TYPE_LEVEL_HIGH>;
@@ -430,7 +430,7 @@
status = "disabled";
};
- i2c@0,7000c700 {
+ i2c@7000c700 {
compatible = "nvidia,tegra124-i2c", "nvidia,tegra114-i2c";
reg = <0x0 0x7000c700 0x0 0x100>;
interrupts = <GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>;
@@ -445,7 +445,7 @@
status = "disabled";
};
- i2c@0,7000d000 {
+ i2c@7000d000 {
compatible = "nvidia,tegra124-i2c", "nvidia,tegra114-i2c";
reg = <0x0 0x7000d000 0x0 0x100>;
interrupts = <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>;
@@ -460,7 +460,7 @@
status = "disabled";
};
- i2c@0,7000d100 {
+ i2c@7000d100 {
compatible = "nvidia,tegra124-i2c", "nvidia,tegra114-i2c";
reg = <0x0 0x7000d100 0x0 0x100>;
interrupts = <GIC_SPI 63 IRQ_TYPE_LEVEL_HIGH>;
@@ -475,7 +475,7 @@
status = "disabled";
};
- spi@0,7000d400 {
+ spi@7000d400 {
compatible = "nvidia,tegra124-spi", "nvidia,tegra114-spi";
reg = <0x0 0x7000d400 0x0 0x200>;
interrupts = <GIC_SPI 59 IRQ_TYPE_LEVEL_HIGH>;
@@ -490,7 +490,7 @@
status = "disabled";
};
- spi@0,7000d600 {
+ spi@7000d600 {
compatible = "nvidia,tegra124-spi", "nvidia,tegra114-spi";
reg = <0x0 0x7000d600 0x0 0x200>;
interrupts = <GIC_SPI 82 IRQ_TYPE_LEVEL_HIGH>;
@@ -505,7 +505,7 @@
status = "disabled";
};
- spi@0,7000d800 {
+ spi@7000d800 {
compatible = "nvidia,tegra124-spi", "nvidia,tegra114-spi";
reg = <0x0 0x7000d800 0x0 0x200>;
interrupts = <GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>;
@@ -520,7 +520,7 @@
status = "disabled";
};
- spi@0,7000da00 {
+ spi@7000da00 {
compatible = "nvidia,tegra124-spi", "nvidia,tegra114-spi";
reg = <0x0 0x7000da00 0x0 0x200>;
interrupts = <GIC_SPI 93 IRQ_TYPE_LEVEL_HIGH>;
@@ -535,7 +535,7 @@
status = "disabled";
};
- spi@0,7000dc00 {
+ spi@7000dc00 {
compatible = "nvidia,tegra124-spi", "nvidia,tegra114-spi";
reg = <0x0 0x7000dc00 0x0 0x200>;
interrupts = <GIC_SPI 94 IRQ_TYPE_LEVEL_HIGH>;
@@ -550,7 +550,7 @@
status = "disabled";
};
- spi@0,7000de00 {
+ spi@7000de00 {
compatible = "nvidia,tegra124-spi", "nvidia,tegra114-spi";
reg = <0x0 0x7000de00 0x0 0x200>;
interrupts = <GIC_SPI 79 IRQ_TYPE_LEVEL_HIGH>;
@@ -565,7 +565,7 @@
status = "disabled";
};
- rtc@0,7000e000 {
+ rtc@7000e000 {
compatible = "nvidia,tegra124-rtc", "nvidia,tegra20-rtc";
reg = <0x0 0x7000e000 0x0 0x100>;
interrupts = <GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>;
@@ -573,14 +573,14 @@
clock-names = "rtc";
};
- pmc@0,7000e400 {
+ pmc@7000e400 {
compatible = "nvidia,tegra124-pmc";
reg = <0x0 0x7000e400 0x0 0x400>;
clocks = <&tegra_car TEGRA124_CLK_PCLK>, <&clk32k_in>;
clock-names = "pclk", "clk32k_in";
};
- fuse@0,7000f800 {
+ fuse@7000f800 {
compatible = "nvidia,tegra124-efuse";
reg = <0x0 0x7000f800 0x0 0x400>;
clocks = <&tegra_car TEGRA124_CLK_FUSE>;
@@ -589,7 +589,7 @@
reset-names = "fuse";
};
- mc: memory-controller@0,70019000 {
+ mc: memory-controller@70019000 {
compatible = "nvidia,tegra132-mc";
reg = <0x0 0x70019000 0x0 0x1000>;
clocks = <&tegra_car TEGRA124_CLK_MC>;
@@ -600,14 +600,14 @@
#iommu-cells = <1>;
};
- emc: emc@0,7001b000 {
+ emc: emc@7001b000 {
compatible = "nvidia,tegra132-emc", "nvidia,tegra124-emc";
reg = <0x0 0x7001b000 0x0 0x1000>;
nvidia,memory-controller = <&mc>;
};
- sata@0,70020000 {
+ sata@70020000 {
compatible = "nvidia,tegra124-ahci";
reg = <0x0 0x70027000 0x0 0x2000>, /* AHCI */
<0x0 0x70020000 0x0 0x7000>; /* SATA */
@@ -626,7 +626,7 @@
status = "disabled";
};
- hda@0,70030000 {
+ hda@70030000 {
compatible = "nvidia,tegra132-hda", "nvidia,tegra124-hda",
"nvidia,tegra30-hda";
reg = <0x0 0x70030000 0x0 0x10000>;
@@ -642,7 +642,7 @@
status = "disabled";
};
- padctl: padctl@0,7009f000 {
+ padctl: padctl@7009f000 {
compatible = "nvidia,tegra132-xusb-padctl",
"nvidia,tegra124-xusb-padctl";
reg = <0x0 0x7009f000 0x0 0x1000>;
@@ -682,7 +682,7 @@
};
};
- sdhci@0,700b0000 {
+ sdhci@700b0000 {
compatible = "nvidia,tegra124-sdhci";
reg = <0x0 0x700b0000 0x0 0x200>;
interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
@@ -693,7 +693,7 @@
status = "disabled";
};
- sdhci@0,700b0200 {
+ sdhci@700b0200 {
compatible = "nvidia,tegra124-sdhci";
reg = <0x0 0x700b0200 0x0 0x200>;
interrupts = <GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>;
@@ -704,7 +704,7 @@
status = "disabled";
};
- sdhci@0,700b0400 {
+ sdhci@700b0400 {
compatible = "nvidia,tegra124-sdhci";
reg = <0x0 0x700b0400 0x0 0x200>;
interrupts = <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>;
@@ -715,7 +715,7 @@
status = "disabled";
};
- sdhci@0,700b0600 {
+ sdhci@700b0600 {
compatible = "nvidia,tegra124-sdhci";
reg = <0x0 0x700b0600 0x0 0x200>;
interrupts = <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>;
@@ -726,7 +726,7 @@
status = "disabled";
};
- soctherm: thermal-sensor@0,700e2000 {
+ soctherm: thermal-sensor@700e2000 {
compatible = "nvidia,tegra124-soctherm";
reg = <0x0 0x700e2000 0x0 0x1000>;
interrupts = <GIC_SPI 48 IRQ_TYPE_LEVEL_HIGH>;
@@ -738,7 +738,7 @@
#thermal-sensor-cells = <1>;
};
- ahub@0,70300000 {
+ ahub@70300000 {
compatible = "nvidia,tegra124-ahub";
reg = <0x0 0x70300000 0x0 0x200>,
<0x0 0x70300800 0x0 0x800>,
@@ -790,7 +790,7 @@
#address-cells = <2>;
#size-cells = <2>;
- tegra_i2s0: i2s@0,70301000 {
+ tegra_i2s0: i2s@70301000 {
compatible = "nvidia,tegra124-i2s";
reg = <0x0 0x70301000 0x0 0x100>;
nvidia,ahub-cif-ids = <4 4>;
@@ -801,7 +801,7 @@
status = "disabled";
};
- tegra_i2s1: i2s@0,70301100 {
+ tegra_i2s1: i2s@70301100 {
compatible = "nvidia,tegra124-i2s";
reg = <0x0 0x70301100 0x0 0x100>;
nvidia,ahub-cif-ids = <5 5>;
@@ -812,7 +812,7 @@
status = "disabled";
};
- tegra_i2s2: i2s@0,70301200 {
+ tegra_i2s2: i2s@70301200 {
compatible = "nvidia,tegra124-i2s";
reg = <0x0 0x70301200 0x0 0x100>;
nvidia,ahub-cif-ids = <6 6>;
@@ -823,7 +823,7 @@
status = "disabled";
};
- tegra_i2s3: i2s@0,70301300 {
+ tegra_i2s3: i2s@70301300 {
compatible = "nvidia,tegra124-i2s";
reg = <0x0 0x70301300 0x0 0x100>;
nvidia,ahub-cif-ids = <7 7>;
@@ -834,7 +834,7 @@
status = "disabled";
};
- tegra_i2s4: i2s@0,70301400 {
+ tegra_i2s4: i2s@70301400 {
compatible = "nvidia,tegra124-i2s";
reg = <0x0 0x70301400 0x0 0x100>;
nvidia,ahub-cif-ids = <8 8>;
@@ -846,7 +846,7 @@
};
};
- usb@0,7d000000 {
+ usb@7d000000 {
compatible = "nvidia,tegra124-ehci", "nvidia,tegra30-ehci", "usb-ehci";
reg = <0x0 0x7d000000 0x0 0x4000>;
interrupts = <GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>;
@@ -859,7 +859,7 @@
status = "disabled";
};
- phy1: usb-phy@0,7d000000 {
+ phy1: usb-phy@7d000000 {
compatible = "nvidia,tegra124-usb-phy", "nvidia,tegra30-usb-phy";
reg = <0x0 0x7d000000 0x0 0x4000>,
<0x0 0x7d000000 0x0 0x4000>;
@@ -884,7 +884,7 @@
status = "disabled";
};
- usb@0,7d004000 {
+ usb@7d004000 {
compatible = "nvidia,tegra124-ehci", "nvidia,tegra30-ehci", "usb-ehci";
reg = <0x0 0x7d004000 0x0 0x4000>;
interrupts = <GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>;
@@ -897,7 +897,7 @@
status = "disabled";
};
- phy2: usb-phy@0,7d004000 {
+ phy2: usb-phy@7d004000 {
compatible = "nvidia,tegra124-usb-phy", "nvidia,tegra30-usb-phy";
reg = <0x0 0x7d004000 0x0 0x4000>,
<0x0 0x7d000000 0x0 0x4000>;
@@ -921,7 +921,7 @@
status = "disabled";
};
- usb@0,7d008000 {
+ usb@7d008000 {
compatible = "nvidia,tegra124-ehci", "nvidia,tegra30-ehci", "usb-ehci";
reg = <0x0 0x7d008000 0x0 0x4000>;
interrupts = <GIC_SPI 97 IRQ_TYPE_LEVEL_HIGH>;
@@ -934,7 +934,7 @@
status = "disabled";
};
- phy3: usb-phy@0,7d008000 {
+ phy3: usb-phy@7d008000 {
compatible = "nvidia,tegra124-usb-phy", "nvidia,tegra30-usb-phy";
reg = <0x0 0x7d008000 0x0 0x4000>,
<0x0 0x7d000000 0x0 0x4000>;
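Most of the churn in tegra132.dtsi and tegra132-norrin.dts above is mechanical: the legacy two-part unit addresses of the form node@0,50000000 are collapsed so that the name after the '@' matches the first address in the node's reg property, which is what current device tree conventions (and the unit-address checks in dtc) expect. In sketch form, using the host1x node from this file:

    /* old name: host1x@0,50000000 */
    host1x@50000000 {
        compatible = "nvidia,tegra124-host1x", "simple-bus";
        reg = <0x0 0x50000000 0x0 0x00034000>;   /* unit address = first reg address */
    };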
diff --git a/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi b/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
index 2b7f88950d1e..316c92c03821 100644
--- a/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
+++ b/arch/arm64/boot/dts/nvidia/tegra210-p2180.dtsi
@@ -5,7 +5,7 @@
compatible = "nvidia,p2180", "nvidia,tegra210";
aliases {
- rtc1 = "/rtc@0,7000e000";
+ rtc1 = "/rtc@7000e000";
serial0 = &uarta;
};
@@ -15,16 +15,16 @@
};
/* debug port */
- serial@0,70006000 {
+ serial@70006000 {
status = "okay";
};
- pmc@0,7000e400 {
+ pmc@7000e400 {
nvidia,invert-interrupt;
};
/* eMMC */
- sdhci@0,700b0600 {
+ sdhci@700b0600 {
status = "okay";
bus-width = <8>;
non-removable;
diff --git a/arch/arm64/boot/dts/nvidia/tegra210-p2530.dtsi b/arch/arm64/boot/dts/nvidia/tegra210-p2530.dtsi
index ece0dec61fae..0ec92578cacb 100644
--- a/arch/arm64/boot/dts/nvidia/tegra210-p2530.dtsi
+++ b/arch/arm64/boot/dts/nvidia/tegra210-p2530.dtsi
@@ -5,31 +5,35 @@
compatible = "nvidia,p2530", "nvidia,tegra210";
aliases {
- rtc1 = "/rtc@0,7000e000";
+ rtc1 = "/rtc@7000e000";
serial0 = &uarta;
};
+ chosen {
+ stdout-path = "serial0:115200n8";
+ };
+
memory {
device_type = "memory";
reg = <0x0 0x80000000 0x0 0xc0000000>;
};
/* debug port */
- serial@0,70006000 {
+ serial@70006000 {
status = "okay";
};
- i2c@0,7000d000 {
+ i2c@7000d000 {
status = "okay";
clock-frequency = <400000>;
};
- pmc@0,7000e400 {
+ pmc@7000e400 {
nvidia,invert-interrupt;
};
/* eMMC */
- sdhci@0,700b0600 {
+ sdhci@700b0600 {
status = "okay";
bus-width = <8>;
non-removable;
diff --git a/arch/arm64/boot/dts/nvidia/tegra210-p2571.dts b/arch/arm64/boot/dts/nvidia/tegra210-p2571.dts
index 58d27ddd57ff..576957a55801 100644
--- a/arch/arm64/boot/dts/nvidia/tegra210-p2571.dts
+++ b/arch/arm64/boot/dts/nvidia/tegra210-p2571.dts
@@ -7,7 +7,7 @@
model = "NVIDIA Tegra210 P2571 reference design";
compatible = "nvidia,p2571", "nvidia,tegra210";
- pinmux: pinmux@0,700008d4 {
+ pinmux: pinmux@700008d4 {
pinctrl-names = "boot";
pinctrl-0 = <&state_boot>;
diff --git a/arch/arm64/boot/dts/nvidia/tegra210-p2595.dtsi b/arch/arm64/boot/dts/nvidia/tegra210-p2595.dtsi
index f3f91392214e..e008e3364d2a 100644
--- a/arch/arm64/boot/dts/nvidia/tegra210-p2595.dtsi
+++ b/arch/arm64/boot/dts/nvidia/tegra210-p2595.dtsi
@@ -2,7 +2,7 @@
model = "NVIDIA Tegra210 P2595 I/O board";
compatible = "nvidia,p2595", "nvidia,tegra210";
- pinmux: pinmux@0,700008d4 {
+ pinmux: pinmux@700008d4 {
pinctrl-names = "boot";
pinctrl-0 = <&state_boot>;
diff --git a/arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi b/arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi
index be3eccbe8013..a2480c0c7e72 100644
--- a/arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi
+++ b/arch/arm64/boot/dts/nvidia/tegra210-p2597.dtsi
@@ -1,8 +1,10 @@
+#include <dt-bindings/input/input.h>
+
/ {
model = "NVIDIA Tegra210 P2597 I/O board";
compatible = "nvidia,p2597", "nvidia,tegra210";
- pinmux: pinmux@0,700008d4 {
+ pinmux: pinmux@700008d4 {
pinctrl-names = "boot";
pinctrl-0 = <&state_boot>;
@@ -1260,11 +1262,35 @@
};
/* MMC/SD */
- sdhci@0,700b0000 {
+ sdhci@700b0000 {
status = "okay";
bus-width = <4>;
no-1-8-v;
cd-gpios = <&gpio TEGRA_GPIO(Z, 1) GPIO_ACTIVE_LOW>;
};
+
+ gpio-keys {
+ compatible = "gpio-keys";
+ label = "gpio-keys";
+
+ power {
+ label = "Power";
+ gpios = <&gpio TEGRA_GPIO(X, 5) GPIO_ACTIVE_LOW>;
+ linux,code = <KEY_POWER>;
+ wakeup-source;
+ };
+
+ volume_down {
+ label = "Volume Down";
+ gpios = <&gpio TEGRA_GPIO(Y, 0) GPIO_ACTIVE_LOW>;
+ linux,code = <KEY_VOLUMEDOWN>;
+ };
+
+ volume_up {
+ label = "Volume Up";
+ gpios = <&gpio TEGRA_GPIO(X, 6) GPIO_ACTIVE_LOW>;
+ linux,code = <KEY_VOLUMEUP>;
+ };
+ };
};
diff --git a/arch/arm64/boot/dts/nvidia/tegra210-smaug.dts b/arch/arm64/boot/dts/nvidia/tegra210-smaug.dts
new file mode 100644
index 000000000000..4d89f4e02d98
--- /dev/null
+++ b/arch/arm64/boot/dts/nvidia/tegra210-smaug.dts
@@ -0,0 +1,1424 @@
+/dts-v1/;
+
+#include <dt-bindings/input/input.h>
+#include <dt-bindings/pinctrl/pinctrl-tegra.h>
+
+#include "tegra210.dtsi"
+
+/ {
+ model = "Google Pixel C";
+ compatible = "google,smaug-rev8", "google,smaug-rev7",
+ "google,smaug-rev6", "google,smaug-rev5",
+ "google,smaug-rev4", "google,smaug-rev3",
+ "google,smaug-rev1", "google,smaug", "nvidia,tegra210";
+
+ aliases {
+ serial0 = &uarta;
+ };
+
+ chosen {
+ bootargs = "earlycon";
+ stdout-path = "serial0:115200n8";
+ };
+
+ memory {
+ device_type = "memory";
+ reg = <0x0 0x80000000 0x0 0xc0000000>;
+ };
+
+ pinmux: pinmux@700008d4 {
+ pinctrl-names = "boot";
+ pinctrl-0 = <&state_boot>;
+
+ state_boot: pinmux {
+ pex_l0_rst_n_pa0 {
+ nvidia,pins = "pex_l0_rst_n_pa0";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ nvidia,io-hv = <TEGRA_PIN_DISABLE>;
+ };
+ pex_l0_clkreq_n_pa1 {
+ nvidia,pins = "pex_l0_clkreq_n_pa1";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ nvidia,io-hv = <TEGRA_PIN_DISABLE>;
+ };
+ pex_wake_n_pa2 {
+ nvidia,pins = "pex_wake_n_pa2";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ nvidia,io-hv = <TEGRA_PIN_DISABLE>;
+ };
+ pex_l1_rst_n_pa3 {
+ nvidia,pins = "pex_l1_rst_n_pa3";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ nvidia,io-hv = <TEGRA_PIN_DISABLE>;
+ };
+ pex_l1_clkreq_n_pa4 {
+ nvidia,pins = "pex_l1_clkreq_n_pa4";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ nvidia,io-hv = <TEGRA_PIN_DISABLE>;
+ };
+ sata_led_active_pa5 {
+ nvidia,pins = "sata_led_active_pa5";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pa6 {
+ nvidia,pins = "pa6";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ dap1_fs_pb0 {
+ nvidia,pins = "dap1_fs_pb0";
+ nvidia,function = "i2s1";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ dap1_din_pb1 {
+ nvidia,pins = "dap1_din_pb1";
+ nvidia,function = "i2s1";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ dap1_dout_pb2 {
+ nvidia,pins = "dap1_dout_pb2";
+ nvidia,function = "i2s1";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ dap1_sclk_pb3 {
+ nvidia,pins = "dap1_sclk_pb3";
+ nvidia,function = "i2s1";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ spi2_mosi_pb4 {
+ nvidia,pins = "spi2_mosi_pb4";
+ nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ spi2_miso_pb5 {
+ nvidia,pins = "spi2_miso_pb5";
+ nvidia,function = "rsvd2";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ spi2_sck_pb6 {
+ nvidia,pins = "spi2_sck_pb6";
+ nvidia,function = "rsvd2";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ spi2_cs0_pb7 {
+ nvidia,pins = "spi2_cs0_pb7";
+ nvidia,function = "rsvd2";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ spi1_mosi_pc0 {
+ nvidia,pins = "spi1_mosi_pc0";
+ nvidia,function = "spi1";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ spi1_miso_pc1 {
+ nvidia,pins = "spi1_miso_pc1";
+ nvidia,function = "spi1";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ spi1_sck_pc2 {
+ nvidia,pins = "spi1_sck_pc2";
+ nvidia,function = "spi1";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ spi1_cs0_pc3 {
+ nvidia,pins = "spi1_cs0_pc3";
+ nvidia,function = "spi1";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ spi1_cs1_pc4 {
+ nvidia,pins = "spi1_cs1_pc4";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ spi4_sck_pc5 {
+ nvidia,pins = "spi4_sck_pc5";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ spi4_cs0_pc6 {
+ nvidia,pins = "spi4_cs0_pc6";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ spi4_mosi_pc7 {
+ nvidia,pins = "spi4_mosi_pc7";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ spi4_miso_pd0 {
+ nvidia,pins = "spi4_miso_pd0";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ uart3_tx_pd1 {
+ nvidia,pins = "uart3_tx_pd1";
+ nvidia,function = "uartc";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ uart3_rx_pd2 {
+ nvidia,pins = "uart3_rx_pd2";
+ nvidia,function = "uartc";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ uart3_rts_pd3 {
+ nvidia,pins = "uart3_rts_pd3";
+ nvidia,function = "uartc";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ uart3_cts_pd4 {
+ nvidia,pins = "uart3_cts_pd4";
+ nvidia,function = "uartc";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ dmic1_clk_pe0 {
+ nvidia,pins = "dmic1_clk_pe0";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ dmic1_dat_pe1 {
+ nvidia,pins = "dmic1_dat_pe1";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ dmic2_clk_pe2 {
+ nvidia,pins = "dmic2_clk_pe2";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ dmic2_dat_pe3 {
+ nvidia,pins = "dmic2_dat_pe3";
+ nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ dmic3_clk_pe4 {
+ nvidia,pins = "dmic3_clk_pe4";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ dmic3_dat_pe5 {
+ nvidia,pins = "dmic3_dat_pe5";
+ nvidia,function = "rsvd2";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pe6 {
+ nvidia,pins = "pe6";
+ nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pe7 {
+ nvidia,pins = "pe7";
+ nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ gen3_i2c_scl_pf0 {
+ nvidia,pins = "gen3_i2c_scl_pf0";
+ nvidia,function = "i2c3";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ nvidia,io-hv = <TEGRA_PIN_DISABLE>;
+ };
+ gen3_i2c_sda_pf1 {
+ nvidia,pins = "gen3_i2c_sda_pf1";
+ nvidia,function = "i2c3";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ nvidia,io-hv = <TEGRA_PIN_DISABLE>;
+ };
+ uart2_tx_pg0 {
+ nvidia,pins = "uart2_tx_pg0";
+ nvidia,function = "uartb";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ uart2_rx_pg1 {
+ nvidia,pins = "uart2_rx_pg1";
+ nvidia,function = "uartb";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ uart2_rts_pg2 {
+ nvidia,pins = "uart2_rts_pg2";
+ nvidia,function = "rsvd2";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ uart2_cts_pg3 {
+ nvidia,pins = "uart2_cts_pg3";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ wifi_en_ph0 {
+ nvidia,pins = "wifi_en_ph0";
+ nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ wifi_rst_ph1 {
+ nvidia,pins = "wifi_rst_ph1";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ wifi_wake_ap_ph2 {
+ nvidia,pins = "wifi_wake_ap_ph2";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ ap_wake_bt_ph3 {
+ nvidia,pins = "ap_wake_bt_ph3";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ bt_rst_ph4 {
+ nvidia,pins = "bt_rst_ph4";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ bt_wake_ap_ph5 {
+ nvidia,pins = "bt_wake_ap_ph5";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ ph6 {
+ nvidia,pins = "ph6";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ ap_wake_nfc_ph7 {
+ nvidia,pins = "ap_wake_nfc_ph7";
+ nvidia,function = "rsvd0";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ nfc_en_pi0 {
+ nvidia,pins = "nfc_en_pi0";
+ nvidia,function = "rsvd0";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ nfc_int_pi1 {
+ nvidia,pins = "nfc_int_pi1";
+ nvidia,function = "rsvd0";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ gps_en_pi2 {
+ nvidia,pins = "gps_en_pi2";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ gps_rst_pi3 {
+ nvidia,pins = "gps_rst_pi3";
+ nvidia,function = "rsvd0";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ uart4_tx_pi4 {
+ nvidia,pins = "uart4_tx_pi4";
+ nvidia,function = "uartd";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ uart4_rx_pi5 {
+ nvidia,pins = "uart4_rx_pi5";
+ nvidia,function = "uartd";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ uart4_rts_pi6 {
+ nvidia,pins = "uart4_rts_pi6";
+ nvidia,function = "uartd";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ uart4_cts_pi7 {
+ nvidia,pins = "uart4_cts_pi7";
+ nvidia,function = "uartd";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ gen1_i2c_sda_pj0 {
+ nvidia,pins = "gen1_i2c_sda_pj0";
+ nvidia,function = "i2c1";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ nvidia,io-hv = <TEGRA_PIN_DISABLE>;
+ };
+ gen1_i2c_scl_pj1 {
+ nvidia,pins = "gen1_i2c_scl_pj1";
+ nvidia,function = "i2c1";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ nvidia,io-hv = <TEGRA_PIN_DISABLE>;
+ };
+ gen2_i2c_scl_pj2 {
+ nvidia,pins = "gen2_i2c_scl_pj2";
+ nvidia,function = "i2c2";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ nvidia,io-hv = <TEGRA_PIN_ENABLE>;
+ };
+ gen2_i2c_sda_pj3 {
+ nvidia,pins = "gen2_i2c_sda_pj3";
+ nvidia,function = "i2c2";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ nvidia,io-hv = <TEGRA_PIN_ENABLE>;
+ };
+ dap4_fs_pj4 {
+ nvidia,pins = "dap4_fs_pj4";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ dap4_din_pj5 {
+ nvidia,pins = "dap4_din_pj5";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ dap4_dout_pj6 {
+ nvidia,pins = "dap4_dout_pj6";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ dap4_sclk_pj7 {
+ nvidia,pins = "dap4_sclk_pj7";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pk0 {
+ nvidia,pins = "pk0";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pk1 {
+ nvidia,pins = "pk1";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pk2 {
+ nvidia,pins = "pk2";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pk3 {
+ nvidia,pins = "pk3";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pk4 {
+ nvidia,pins = "pk4";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pk5 {
+ nvidia,pins = "pk5";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pk6 {
+ nvidia,pins = "pk6";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pk7 {
+ nvidia,pins = "pk7";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pl0 {
+ nvidia,pins = "pl0";
+ nvidia,function = "rsvd0";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pl1 {
+ nvidia,pins = "pl1";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ sdmmc1_clk_pm0 {
+ nvidia,pins = "sdmmc1_clk_pm0";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ sdmmc1_cmd_pm1 {
+ nvidia,pins = "sdmmc1_cmd_pm1";
+ nvidia,function = "rsvd2";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ sdmmc1_dat3_pm2 {
+ nvidia,pins = "sdmmc1_dat3_pm2";
+ nvidia,function = "rsvd2";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ sdmmc1_dat2_pm3 {
+ nvidia,pins = "sdmmc1_dat2_pm3";
+ nvidia,function = "rsvd2";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ sdmmc1_dat1_pm4 {
+ nvidia,pins = "sdmmc1_dat1_pm4";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ sdmmc1_dat0_pm5 {
+ nvidia,pins = "sdmmc1_dat0_pm5";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ sdmmc3_clk_pp0 {
+ nvidia,pins = "sdmmc3_clk_pp0";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ sdmmc3_cmd_pp1 {
+ nvidia,pins = "sdmmc3_cmd_pp1";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ sdmmc3_dat3_pp2 {
+ nvidia,pins = "sdmmc3_dat3_pp2";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ sdmmc3_dat2_pp3 {
+ nvidia,pins = "sdmmc3_dat2_pp3";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ sdmmc3_dat1_pp4 {
+ nvidia,pins = "sdmmc3_dat1_pp4";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ sdmmc3_dat0_pp5 {
+ nvidia,pins = "sdmmc3_dat0_pp5";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ cam1_mclk_ps0 {
+ nvidia,pins = "cam1_mclk_ps0";
+ nvidia,function = "extperiph3";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ cam2_mclk_ps1 {
+ nvidia,pins = "cam2_mclk_ps1";
+ nvidia,function = "extperiph3";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ cam_i2c_scl_ps2 {
+ nvidia,pins = "cam_i2c_scl_ps2";
+ nvidia,function = "i2cvi";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ nvidia,io-hv = <TEGRA_PIN_DISABLE>;
+ };
+ cam_i2c_sda_ps3 {
+ nvidia,pins = "cam_i2c_sda_ps3";
+ nvidia,function = "i2cvi";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ nvidia,io-hv = <TEGRA_PIN_DISABLE>;
+ };
+ cam_rst_ps4 {
+ nvidia,pins = "cam_rst_ps4";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ cam_af_en_ps5 {
+ nvidia,pins = "cam_af_en_ps5";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ cam_flash_en_ps6 {
+ nvidia,pins = "cam_flash_en_ps6";
+ nvidia,function = "rsvd2";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ cam1_pwdn_ps7 {
+ nvidia,pins = "cam1_pwdn_ps7";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ cam2_pwdn_pt0 {
+ nvidia,pins = "cam2_pwdn_pt0";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ cam1_strobe_pt1 {
+ nvidia,pins = "cam1_strobe_pt1";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ uart1_tx_pu0 {
+ nvidia,pins = "uart1_tx_pu0";
+ nvidia,function = "uarta";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ uart1_rx_pu1 {
+ nvidia,pins = "uart1_rx_pu1";
+ nvidia,function = "uarta";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ uart1_rts_pu2 {
+ nvidia,pins = "uart1_rts_pu2";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ uart1_cts_pu3 {
+ nvidia,pins = "uart1_cts_pu3";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ lcd_bl_pwm_pv0 {
+ nvidia,pins = "lcd_bl_pwm_pv0";
+ nvidia,function = "rsvd3";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ lcd_bl_en_pv1 {
+ nvidia,pins = "lcd_bl_en_pv1";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ lcd_rst_pv2 {
+ nvidia,pins = "lcd_rst_pv2";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ lcd_gpio1_pv3 {
+ nvidia,pins = "lcd_gpio1_pv3";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ lcd_gpio2_pv4 {
+ nvidia,pins = "lcd_gpio2_pv4";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ ap_ready_pv5 {
+ nvidia,pins = "ap_ready_pv5";
+ nvidia,function = "rsvd0";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ touch_rst_pv6 {
+ nvidia,pins = "touch_rst_pv6";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ touch_clk_pv7 {
+ nvidia,pins = "touch_clk_pv7";
+ nvidia,function = "touch";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ modem_wake_ap_px0 {
+ nvidia,pins = "modem_wake_ap_px0";
+ nvidia,function = "rsvd0";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ touch_int_px1 {
+ nvidia,pins = "touch_int_px1";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ motion_int_px2 {
+ nvidia,pins = "motion_int_px2";
+ nvidia,function = "rsvd0";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ als_prox_int_px3 {
+ nvidia,pins = "als_prox_int_px3";
+ nvidia,function = "rsvd0";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ temp_alert_px4 {
+ nvidia,pins = "temp_alert_px4";
+ nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ button_power_on_px5 {
+ nvidia,pins = "button_power_on_px5";
+ nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ button_vol_up_px6 {
+ nvidia,pins = "button_vol_up_px6";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ button_vol_down_px7 {
+ nvidia,pins = "button_vol_down_px7";
+ nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ button_slide_sw_py0 {
+ nvidia,pins = "button_slide_sw_py0";
+ nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ button_home_py1 {
+ nvidia,pins = "button_home_py1";
+ nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ lcd_te_py2 {
+ nvidia,pins = "lcd_te_py2";
+ nvidia,function = "displaya";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pwr_i2c_scl_py3 {
+ nvidia,pins = "pwr_i2c_scl_py3";
+ nvidia,function = "i2cpmu";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ nvidia,io-hv = <TEGRA_PIN_DISABLE>;
+ };
+ pwr_i2c_sda_py4 {
+ nvidia,pins = "pwr_i2c_sda_py4";
+ nvidia,function = "i2cpmu";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ nvidia,io-hv = <TEGRA_PIN_DISABLE>;
+ };
+ clk_32k_out_py5 {
+ nvidia,pins = "clk_32k_out_py5";
+ nvidia,function = "soc";
+ nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pz0 {
+ nvidia,pins = "pz0";
+ nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pz1 {
+ nvidia,pins = "pz1";
+ nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pz2 {
+ nvidia,pins = "pz2";
+ nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pz3 {
+ nvidia,pins = "pz3";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pz4 {
+ nvidia,pins = "pz4";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pz5 {
+ nvidia,pins = "pz5";
+ nvidia,function = "soc";
+ nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ dap2_fs_paa0 {
+ nvidia,pins = "dap2_fs_paa0";
+ nvidia,function = "i2s2";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ dap2_sclk_paa1 {
+ nvidia,pins = "dap2_sclk_paa1";
+ nvidia,function = "i2s2";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ dap2_din_paa2 {
+ nvidia,pins = "dap2_din_paa2";
+ nvidia,function = "i2s2";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ dap2_dout_paa3 {
+ nvidia,pins = "dap2_dout_paa3";
+ nvidia,function = "i2s2";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ aud_mclk_pbb0 {
+ nvidia,pins = "aud_mclk_pbb0";
+ nvidia,function = "aud";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ dvfs_pwm_pbb1 {
+ nvidia,pins = "dvfs_pwm_pbb1";
+ nvidia,function = "rsvd0";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ dvfs_clk_pbb2 {
+ nvidia,pins = "dvfs_clk_pbb2";
+ nvidia,function = "rsvd0";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ gpio_x1_aud_pbb3 {
+ nvidia,pins = "gpio_x1_aud_pbb3";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ gpio_x3_aud_pbb4 {
+ nvidia,pins = "gpio_x3_aud_pbb4";
+ nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ hdmi_cec_pcc0 {
+ nvidia,pins = "hdmi_cec_pcc0";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ nvidia,io-hv = <TEGRA_PIN_DISABLE>;
+ };
+ hdmi_int_dp_hpd_pcc1 {
+ nvidia,pins = "hdmi_int_dp_hpd_pcc1";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ nvidia,io-hv = <TEGRA_PIN_DISABLE>;
+ };
+ spdif_out_pcc2 {
+ nvidia,pins = "spdif_out_pcc2";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ spdif_in_pcc3 {
+ nvidia,pins = "spdif_in_pcc3";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ usb_vbus_en0_pcc4 {
+ nvidia,pins = "usb_vbus_en0_pcc4";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ nvidia,io-hv = <TEGRA_PIN_DISABLE>;
+ };
+ usb_vbus_en1_pcc5 {
+ nvidia,pins = "usb_vbus_en1_pcc5";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ nvidia,io-hv = <TEGRA_PIN_DISABLE>;
+ };
+ dp_hpd0_pcc6 {
+ nvidia,pins = "dp_hpd0_pcc6";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pcc7 {
+ nvidia,pins = "pcc7";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ nvidia,io-hv = <TEGRA_PIN_DISABLE>;
+ };
+ spi2_cs1_pdd0 {
+ nvidia,pins = "spi2_cs1_pdd0";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ qspi_sck_pee0 {
+ nvidia,pins = "qspi_sck_pee0";
+ nvidia,function = "qspi";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ qspi_cs_n_pee1 {
+ nvidia,pins = "qspi_cs_n_pee1";
+ nvidia,function = "qspi";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ qspi_io0_pee2 {
+ nvidia,pins = "qspi_io0_pee2";
+ nvidia,function = "qspi";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ qspi_io1_pee3 {
+ nvidia,pins = "qspi_io1_pee3";
+ nvidia,function = "qspi";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ qspi_io2_pee4 {
+ nvidia,pins = "qspi_io2_pee4";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ qspi_io3_pee5 {
+ nvidia,pins = "qspi_io3_pee5";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ core_pwr_req {
+ nvidia,pins = "core_pwr_req";
+ nvidia,function = "core";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ cpu_pwr_req {
+ nvidia,pins = "cpu_pwr_req";
+ nvidia,function = "cpu";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ pwr_int_n {
+ nvidia,pins = "pwr_int_n";
+ nvidia,function = "pmi";
+ nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ clk_32k_in {
+ nvidia,pins = "clk_32k_in";
+ nvidia,function = "clk";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_ENABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ jtag_rtck {
+ nvidia,pins = "jtag_rtck";
+ nvidia,function = "jtag";
+ nvidia,pull = <TEGRA_PIN_PULL_UP>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ clk_req {
+ nvidia,pins = "clk_req";
+ nvidia,function = "rsvd1";
+ nvidia,pull = <TEGRA_PIN_PULL_DOWN>;
+ nvidia,tristate = <TEGRA_PIN_ENABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ shutdown {
+ nvidia,pins = "shutdown";
+ nvidia,function = "shutdown";
+ nvidia,pull = <TEGRA_PIN_PULL_NONE>;
+ nvidia,tristate = <TEGRA_PIN_DISABLE>;
+ nvidia,enable-input = <TEGRA_PIN_DISABLE>;
+ nvidia,open-drain = <TEGRA_PIN_DISABLE>;
+ };
+ };
+ };
+
+ serial@70006000 {
+ status = "okay";
+ };
+
+ i2c@7000c400 {
+ status = "okay";
+ clock-frequency = <1000000>;
+
+ ec@1e {
+ compatible = "google,cros-ec-i2c";
+ reg = <0x1e>;
+ interrupt-parent = <&gpio>;
+ interrupts = <TEGRA_GPIO(Z, 1) IRQ_TYPE_LEVEL_LOW>;
+ wakeup-source;
+
+ ec_i2c_0: i2c-tunnel {
+ compatible = "google,cros-ec-i2c-tunnel";
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ google,remote-bus = <0>;
+
+ battery: bq27742@55 {
+ compatible = "ti,bq27742";
+ reg = <0x55>;
+ battery-name = "battery";
+ };
+ };
+ };
+ };
+
+ pmc@7000e400 {
+ nvidia,invert-interrupt;
+ nvidia,suspend-mode = <0>;
+ nvidia,cpu-pwr-good-time = <0>;
+ nvidia,cpu-pwr-off-time = <0>;
+ nvidia,core-pwr-good-time = <12000 6000>;
+ nvidia,core-pwr-off-time = <39053>;
+ nvidia,core-power-req-active-high;
+ nvidia,sys-clock-req-active-high;
+ status = "okay";
+ };
+
+ sdhci@700b0600 {
+ bus-width = <8>;
+ non-removable;
+ status = "okay";
+ };
+
+ clocks {
+ compatible = "simple-bus";
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ clk32k_in: clock@0 {
+ compatible = "fixed-clock";
+ reg = <0>;
+ #clock-cells = <0>;
+ clock-frequency = <32768>;
+ };
+ };
+
+ cpus {
+ cpu@0 {
+ enable-method = "psci";
+ };
+
+ cpu@1 {
+ enable-method = "psci";
+ };
+
+ cpu@2 {
+ enable-method = "psci";
+ };
+
+ cpu@3 {
+ enable-method = "psci";
+ };
+ };
+
+ gpio-keys {
+ compatible = "gpio-keys";
+ gpio-keys,name = "gpio-keys";
+
+ power {
+ label = "Power";
+ gpios = <&gpio TEGRA_GPIO(X, 5) GPIO_ACTIVE_LOW>;
+ linux,code = <KEY_POWER>;
+ debounce-interval = <30>;
+ wakeup-source;
+ };
+
+ lid {
+ label = "Lid";
+ gpios = <&gpio TEGRA_GPIO(B, 4) GPIO_ACTIVE_LOW>;
+ linux,input-type = <EV_SW>;
+ linux,code = <SW_LID>;
+ wakeup-source;
+ };
+
+ tablet_mode {
+ label = "Tablet Mode";
+ gpios = <&gpio TEGRA_GPIO(Z, 2) GPIO_ACTIVE_HIGH>;
+ linux,input-type = <EV_SW>;
+ linux,code = <SW_TABLET_MODE>;
+ wakeup-source;
+ };
+
+ volume_down {
+ label = "Volume Down";
+ gpios = <&gpio TEGRA_GPIO(X, 7) GPIO_ACTIVE_LOW>;
+ linux,code = <KEY_VOLUMEDOWN>;
+ };
+
+ volume_up {
+ label = "Volume Up";
+ gpios = <&gpio TEGRA_GPIO(M, 4) GPIO_ACTIVE_LOW>;
+ linux,code = <KEY_VOLUMEUP>;
+ };
+ };
+
+ psci {
+ compatible = "arm,psci-1.0";
+ method = "smc";
+ };
+};
diff --git a/arch/arm64/boot/dts/nvidia/tegra210.dtsi b/arch/arm64/boot/dts/nvidia/tegra210.dtsi
index 23b0630602cf..76fe31faa1a5 100644
--- a/arch/arm64/boot/dts/nvidia/tegra210.dtsi
+++ b/arch/arm64/boot/dts/nvidia/tegra210.dtsi
@@ -10,7 +10,7 @@
#address-cells = <2>;
#size-cells = <2>;
- host1x@0,50000000 {
+ host1x@50000000 {
compatible = "nvidia,tegra210-host1x", "simple-bus";
reg = <0x0 0x50000000 0x0 0x00034000>;
interrupts = <GIC_SPI 65 IRQ_TYPE_LEVEL_HIGH>, /* syncpt */
@@ -25,7 +25,7 @@
ranges = <0x0 0x54000000 0x0 0x54000000 0x0 0x01000000>;
- dpaux1: dpaux@0,54040000 {
+ dpaux1: dpaux@54040000 {
compatible = "nvidia,tegra210-dpaux";
reg = <0x0 0x54040000 0x0 0x00040000>;
interrupts = <GIC_SPI 11 IRQ_TYPE_LEVEL_HIGH>;
@@ -37,19 +37,19 @@
status = "disabled";
};
- vi@0,54080000 {
+ vi@54080000 {
compatible = "nvidia,tegra210-vi";
reg = <0x0 0x54080000 0x0 0x00040000>;
interrupts = <GIC_SPI 69 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
};
- tsec@0,54100000 {
+ tsec@54100000 {
compatible = "nvidia,tegra210-tsec";
reg = <0x0 0x54100000 0x0 0x00040000>;
};
- dc@0,54200000 {
+ dc@54200000 {
compatible = "nvidia,tegra210-dc";
reg = <0x0 0x54200000 0x0 0x00040000>;
interrupts = <GIC_SPI 73 IRQ_TYPE_LEVEL_HIGH>;
@@ -64,7 +64,7 @@
nvidia,head = <0>;
};
- dc@0,54240000 {
+ dc@54240000 {
compatible = "nvidia,tegra210-dc";
reg = <0x0 0x54240000 0x0 0x00040000>;
interrupts = <GIC_SPI 74 IRQ_TYPE_LEVEL_HIGH>;
@@ -79,7 +79,7 @@
nvidia,head = <1>;
};
- dsi@0,54300000 {
+ dsi@54300000 {
compatible = "nvidia,tegra210-dsi";
reg = <0x0 0x54300000 0x0 0x00040000>;
clocks = <&tegra_car TEGRA210_CLK_DSIA>,
@@ -96,19 +96,19 @@
#size-cells = <0>;
};
- vic@0,54340000 {
+ vic@54340000 {
compatible = "nvidia,tegra210-vic";
reg = <0x0 0x54340000 0x0 0x00040000>;
status = "disabled";
};
- nvjpg@0,54380000 {
+ nvjpg@54380000 {
compatible = "nvidia,tegra210-nvjpg";
reg = <0x0 0x54380000 0x0 0x00040000>;
status = "disabled";
};
- dsi@0,54400000 {
+ dsi@54400000 {
compatible = "nvidia,tegra210-dsi";
reg = <0x0 0x54400000 0x0 0x00040000>;
clocks = <&tegra_car TEGRA210_CLK_DSIB>,
@@ -125,25 +125,25 @@
#size-cells = <0>;
};
- nvdec@0,54480000 {
+ nvdec@54480000 {
compatible = "nvidia,tegra210-nvdec";
reg = <0x0 0x54480000 0x0 0x00040000>;
status = "disabled";
};
- nvenc@0,544c0000 {
+ nvenc@544c0000 {
compatible = "nvidia,tegra210-nvenc";
reg = <0x0 0x544c0000 0x0 0x00040000>;
status = "disabled";
};
- tsec@0,54500000 {
+ tsec@54500000 {
compatible = "nvidia,tegra210-tsec";
reg = <0x0 0x54500000 0x0 0x00040000>;
status = "disabled";
};
- sor@0,54540000 {
+ sor@54540000 {
compatible = "nvidia,tegra210-sor";
reg = <0x0 0x54540000 0x0 0x00040000>;
interrupts = <GIC_SPI 76 IRQ_TYPE_LEVEL_HIGH>;
@@ -157,7 +157,7 @@
status = "disabled";
};
- sor@0,54580000 {
+ sor@54580000 {
compatible = "nvidia,tegra210-sor1";
reg = <0x0 0x54580000 0x0 0x00040000>;
interrupts = <GIC_SPI 75 IRQ_TYPE_LEVEL_HIGH>;
@@ -171,7 +171,7 @@
status = "disabled";
};
- dpaux: dpaux@0,545c0000 {
+ dpaux: dpaux@545c0000 {
compatible = "nvidia,tegra124-dpaux";
reg = <0x0 0x545c0000 0x0 0x00040000>;
interrupts = <GIC_SPI 159 IRQ_TYPE_LEVEL_HIGH>;
@@ -183,21 +183,21 @@
status = "disabled";
};
- isp@0,54600000 {
+ isp@54600000 {
compatible = "nvidia,tegra210-isp";
reg = <0x0 0x54600000 0x0 0x00040000>;
interrupts = <GIC_SPI 71 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
};
- isp@0,54680000 {
+ isp@54680000 {
compatible = "nvidia,tegra210-isp";
reg = <0x0 0x54680000 0x0 0x00040000>;
interrupts = <GIC_SPI 70 IRQ_TYPE_LEVEL_HIGH>;
status = "disabled";
};
- i2c@0,546c0000 {
+ i2c@546c0000 {
compatible = "nvidia,tegra210-i2c-vi";
reg = <0x0 0x546c0000 0x0 0x00040000>;
interrupts = <GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>;
@@ -205,7 +205,7 @@
};
};
- gic: interrupt-controller@0,50041000 {
+ gic: interrupt-controller@50041000 {
compatible = "arm,gic-400";
#interrupt-cells = <3>;
interrupt-controller;
@@ -218,7 +218,7 @@
interrupt-parent = <&gic>;
};
- gpu@0,57000000 {
+ gpu@57000000 {
compatible = "nvidia,gm20b";
reg = <0x0 0x57000000 0x0 0x01000000>,
<0x0 0x58000000 0x0 0x01000000>;
@@ -226,14 +226,18 @@
<GIC_SPI 158 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "stall", "nonstall";
clocks = <&tegra_car TEGRA210_CLK_GPU>,
- <&tegra_car TEGRA210_CLK_PLL_P_OUT5>;
- clock-names = "gpu", "pwr";
+ <&tegra_car TEGRA210_CLK_PLL_P_OUT5>,
+ <&tegra_car TEGRA210_CLK_PLL_G_REF>;
+ clock-names = "gpu", "pwr", "ref";
resets = <&tegra_car 184>;
reset-names = "gpu";
+
+ iommus = <&mc TEGRA_SWGROUP_GPU>;
+
status = "disabled";
};
- lic: interrupt-controller@0,60004000 {
+ lic: interrupt-controller@60004000 {
compatible = "nvidia,tegra210-ictlr";
reg = <0x0 0x60004000 0x0 0x40>, /* primary controller */
<0x0 0x60004100 0x0 0x40>, /* secondary controller */
@@ -246,7 +250,7 @@
interrupt-parent = <&gic>;
};
- timer@0,60005000 {
+ timer@60005000 {
compatible = "nvidia,tegra210-timer", "nvidia,tegra20-timer";
reg = <0x0 0x60005000 0x0 0x400>;
interrupts = <GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>,
@@ -259,19 +263,19 @@
clock-names = "timer";
};
- tegra_car: clock@0,60006000 {
+ tegra_car: clock@60006000 {
compatible = "nvidia,tegra210-car";
reg = <0x0 0x60006000 0x0 0x1000>;
#clock-cells = <1>;
#reset-cells = <1>;
};
- flow-controller@0,60007000 {
+ flow-controller@60007000 {
compatible = "nvidia,tegra210-flowctrl";
reg = <0x0 0x60007000 0x0 0x1000>;
};
- gpio: gpio@0,6000d000 {
+ gpio: gpio@6000d000 {
compatible = "nvidia,tegra210-gpio", "nvidia,tegra124-gpio", "nvidia,tegra30-gpio";
reg = <0x0 0x6000d000 0x0 0x1000>;
interrupts = <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>,
@@ -288,7 +292,7 @@
interrupt-controller;
};
- apbdma: dma@0,60020000 {
+ apbdma: dma@60020000 {
compatible = "nvidia,tegra210-apbdma", "nvidia,tegra148-apbdma";
reg = <0x0 0x60020000 0x0 0x1400>;
interrupts = <GIC_SPI 104 IRQ_TYPE_LEVEL_HIGH>,
@@ -330,13 +334,13 @@
#dma-cells = <1>;
};
- apbmisc@0,70000800 {
+ apbmisc@70000800 {
compatible = "nvidia,tegra210-apbmisc", "nvidia,tegra20-apbmisc";
reg = <0x0 0x70000800 0x0 0x64>, /* Chip revision */
<0x0 0x7000e864 0x0 0x04>; /* Strapping options */
};
- pinmux: pinmux@0,700008d4 {
+ pinmux: pinmux@700008d4 {
compatible = "nvidia,tegra210-pinmux";
reg = <0x0 0x700008d4 0x0 0x29c>, /* Pad control registers */
<0x0 0x70003000 0x0 0x294>; /* Mux registers */
@@ -347,10 +351,10 @@
* driver and APB DMA based serial driver for higher baudrate
* and performance. To enable the 8250 based driver, the compatible
* is "nvidia,tegra124-uart", "nvidia,tegra20-uart" and to enable
- * the APB DMA based serial driver, the comptible is
+ * the APB DMA based serial driver, the compatible is
* "nvidia,tegra124-hsuart", "nvidia,tegra30-hsuart".
*/
- uarta: serial@0,70006000 {
+ uarta: serial@70006000 {
compatible = "nvidia,tegra210-uart", "nvidia,tegra20-uart";
reg = <0x0 0x70006000 0x0 0x40>;
reg-shift = <2>;
@@ -364,7 +368,7 @@
status = "disabled";
};
- uartb: serial@0,70006040 {
+ uartb: serial@70006040 {
compatible = "nvidia,tegra210-uart", "nvidia,tegra20-uart";
reg = <0x0 0x70006040 0x0 0x40>;
reg-shift = <2>;
@@ -378,7 +382,7 @@
status = "disabled";
};
- uartc: serial@0,70006200 {
+ uartc: serial@70006200 {
compatible = "nvidia,tegra210-uart", "nvidia,tegra20-uart";
reg = <0x0 0x70006200 0x0 0x40>;
reg-shift = <2>;
@@ -392,7 +396,7 @@
status = "disabled";
};
- uartd: serial@0,70006300 {
+ uartd: serial@70006300 {
compatible = "nvidia,tegra210-uart", "nvidia,tegra20-uart";
reg = <0x0 0x70006300 0x0 0x40>;
reg-shift = <2>;
@@ -406,7 +410,7 @@
status = "disabled";
};
- pwm: pwm@0,7000a000 {
+ pwm: pwm@7000a000 {
compatible = "nvidia,tegra210-pwm", "nvidia,tegra20-pwm";
reg = <0x0 0x7000a000 0x0 0x100>;
#pwm-cells = <2>;
@@ -417,7 +421,7 @@
status = "disabled";
};
- i2c@0,7000c000 {
+ i2c@7000c000 {
compatible = "nvidia,tegra210-i2c", "nvidia,tegra114-i2c";
reg = <0x0 0x7000c000 0x0 0x100>;
interrupts = <GIC_SPI 38 IRQ_TYPE_LEVEL_HIGH>;
@@ -432,7 +436,7 @@
status = "disabled";
};
- i2c@0,7000c400 {
+ i2c@7000c400 {
compatible = "nvidia,tegra210-i2c", "nvidia,tegra114-i2c";
reg = <0x0 0x7000c400 0x0 0x100>;
interrupts = <GIC_SPI 84 IRQ_TYPE_LEVEL_HIGH>;
@@ -447,7 +451,7 @@
status = "disabled";
};
- i2c@0,7000c500 {
+ i2c@7000c500 {
compatible = "nvidia,tegra210-i2c", "nvidia,tegra114-i2c";
reg = <0x0 0x7000c500 0x0 0x100>;
interrupts = <GIC_SPI 92 IRQ_TYPE_LEVEL_HIGH>;
@@ -462,7 +466,7 @@
status = "disabled";
};
- i2c@0,7000c700 {
+ i2c@7000c700 {
compatible = "nvidia,tegra210-i2c", "nvidia,tegra114-i2c";
reg = <0x0 0x7000c700 0x0 0x100>;
interrupts = <GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>;
@@ -477,7 +481,7 @@
status = "disabled";
};
- i2c@0,7000d000 {
+ i2c@7000d000 {
compatible = "nvidia,tegra210-i2c", "nvidia,tegra114-i2c";
reg = <0x0 0x7000d000 0x0 0x100>;
interrupts = <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>;
@@ -492,7 +496,7 @@
status = "disabled";
};
- i2c@0,7000d100 {
+ i2c@7000d100 {
compatible = "nvidia,tegra210-i2c", "nvidia,tegra114-i2c";
reg = <0x0 0x7000d100 0x0 0x100>;
interrupts = <GIC_SPI 63 IRQ_TYPE_LEVEL_HIGH>;
@@ -507,7 +511,7 @@
status = "disabled";
};
- spi@0,7000d400 {
+ spi@7000d400 {
compatible = "nvidia,tegra210-spi", "nvidia,tegra114-spi";
reg = <0x0 0x7000d400 0x0 0x200>;
interrupts = <GIC_SPI 59 IRQ_TYPE_LEVEL_HIGH>;
@@ -522,7 +526,7 @@
status = "disabled";
};
- spi@0,7000d600 {
+ spi@7000d600 {
compatible = "nvidia,tegra210-spi", "nvidia,tegra114-spi";
reg = <0x0 0x7000d600 0x0 0x200>;
interrupts = <GIC_SPI 82 IRQ_TYPE_LEVEL_HIGH>;
@@ -537,7 +541,7 @@
status = "disabled";
};
- spi@0,7000d800 {
+ spi@7000d800 {
compatible = "nvidia,tegra210-spi", "nvidia,tegra114-spi";
reg = <0x0 0x7000d800 0x0 0x200>;
interrupts = <GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>;
@@ -552,7 +556,7 @@
status = "disabled";
};
- spi@0,7000da00 {
+ spi@7000da00 {
compatible = "nvidia,tegra210-spi", "nvidia,tegra114-spi";
reg = <0x0 0x7000da00 0x0 0x200>;
interrupts = <GIC_SPI 93 IRQ_TYPE_LEVEL_HIGH>;
@@ -567,7 +571,7 @@
status = "disabled";
};
- rtc@0,7000e000 {
+ rtc@7000e000 {
compatible = "nvidia,tegra210-rtc", "nvidia,tegra20-rtc";
reg = <0x0 0x7000e000 0x0 0x100>;
interrupts = <GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>;
@@ -575,16 +579,14 @@
clock-names = "rtc";
};
- pmc: pmc@0,7000e400 {
+ pmc: pmc@7000e400 {
compatible = "nvidia,tegra210-pmc";
reg = <0x0 0x7000e400 0x0 0x400>;
clocks = <&tegra_car TEGRA210_CLK_PCLK>, <&clk32k_in>;
clock-names = "pclk", "clk32k_in";
-
- #power-domain-cells = <1>;
};
- fuse@0,7000f800 {
+ fuse@7000f800 {
compatible = "nvidia,tegra210-efuse";
reg = <0x0 0x7000f800 0x0 0x400>;
clocks = <&tegra_car TEGRA210_CLK_FUSE>;
@@ -593,7 +595,7 @@
reset-names = "fuse";
};
- mc: memory-controller@0,70019000 {
+ mc: memory-controller@70019000 {
compatible = "nvidia,tegra210-mc";
reg = <0x0 0x70019000 0x0 0x1000>;
clocks = <&tegra_car TEGRA210_CLK_MC>;
@@ -604,7 +606,7 @@
#iommu-cells = <1>;
};
- hda@0,70030000 {
+ hda@70030000 {
compatible = "nvidia,tegra210-hda", "nvidia,tegra30-hda";
reg = <0x0 0x70030000 0x0 0x10000>;
interrupts = <GIC_SPI 81 IRQ_TYPE_LEVEL_HIGH>;
@@ -619,7 +621,7 @@
status = "disabled";
};
- sdhci@0,700b0000 {
+ sdhci@700b0000 {
compatible = "nvidia,tegra210-sdhci", "nvidia,tegra124-sdhci";
reg = <0x0 0x700b0000 0x0 0x200>;
interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
@@ -630,7 +632,7 @@
status = "disabled";
};
- sdhci@0,700b0200 {
+ sdhci@700b0200 {
compatible = "nvidia,tegra210-sdhci", "nvidia,tegra124-sdhci";
reg = <0x0 0x700b0200 0x0 0x200>;
interrupts = <GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>;
@@ -641,7 +643,7 @@
status = "disabled";
};
- sdhci@0,700b0400 {
+ sdhci@700b0400 {
compatible = "nvidia,tegra210-sdhci", "nvidia,tegra124-sdhci";
reg = <0x0 0x700b0400 0x0 0x200>;
interrupts = <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>;
@@ -652,7 +654,7 @@
status = "disabled";
};
- sdhci@0,700b0600 {
+ sdhci@700b0600 {
compatible = "nvidia,tegra210-sdhci", "nvidia,tegra124-sdhci";
reg = <0x0 0x700b0600 0x0 0x200>;
interrupts = <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>;
@@ -663,7 +665,7 @@
status = "disabled";
};
- mipi: mipi@0,700e3000 {
+ mipi: mipi@700e3000 {
compatible = "nvidia,tegra210-mipi";
reg = <0x0 0x700e3000 0x0 0x100>;
clocks = <&tegra_car TEGRA210_CLK_MIPI_CAL>;
@@ -671,7 +673,7 @@
#nvidia,mipi-calibrate-cells = <1>;
};
- spi@0,70410000 {
+ spi@70410000 {
compatible = "nvidia,tegra210-qspi";
reg = <0x0 0x70410000 0x0 0x1000>;
interrupts = <GIC_SPI 10 IRQ_TYPE_LEVEL_HIGH>;
@@ -686,7 +688,7 @@
status = "disabled";
};
- usb@0,7d000000 {
+ usb@7d000000 {
compatible = "nvidia,tegra210-ehci", "nvidia,tegra30-ehci", "usb-ehci";
reg = <0x0 0x7d000000 0x0 0x4000>;
interrupts = <GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>;
@@ -699,7 +701,7 @@
status = "disabled";
};
- phy1: usb-phy@0,7d000000 {
+ phy1: usb-phy@7d000000 {
compatible = "nvidia,tegra210-usb-phy", "nvidia,tegra30-usb-phy";
reg = <0x0 0x7d000000 0x0 0x4000>,
<0x0 0x7d000000 0x0 0x4000>;
@@ -724,7 +726,7 @@
status = "disabled";
};
- usb@0,7d004000 {
+ usb@7d004000 {
compatible = "nvidia,tegra210-ehci", "nvidia,tegra30-ehci", "usb-ehci";
reg = <0x0 0x7d004000 0x0 0x4000>;
interrupts = <GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>;
@@ -737,7 +739,7 @@
status = "disabled";
};
- phy2: usb-phy@0,7d004000 {
+ phy2: usb-phy@7d004000 {
compatible = "nvidia,tegra210-usb-phy", "nvidia,tegra30-usb-phy";
reg = <0x0 0x7d004000 0x0 0x4000>,
<0x0 0x7d000000 0x0 0x4000>;
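Editorial sketch (not part of the patch above): the comment kept in the serial hunk of tegra210.dtsi explains that each UART can be served either by the 8250-based driver or by the APB-DMA-based high-speed driver, selected purely through the compatible string. A hypothetical board .dts that includes tegra210.dtsi could switch uarta to the high-speed driver by overriding the node's compatible with the strings named in that comment; the override below is an illustrative assumption, not taken from this series.

&uarta {
        /* Assumed board-level override: replaces the default compatible so
         * the APB DMA based serial driver binds instead of the 8250 one,
         * as described in the tegra210.dtsi comment. */
        compatible = "nvidia,tegra124-hsuart", "nvidia,tegra30-hsuart";
        status = "okay";
};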
diff --git a/arch/arm64/boot/dts/renesas/r8a7795-salvator-x.dts b/arch/arm64/boot/dts/renesas/r8a7795-salvator-x.dts
index b992b1a3d956..9f561c943f6f 100644
--- a/arch/arm64/boot/dts/renesas/r8a7795-salvator-x.dts
+++ b/arch/arm64/boot/dts/renesas/r8a7795-salvator-x.dts
@@ -141,62 +141,66 @@
clock-frequency = <16666666>;
};
+&extalr_clk {
+ clock-frequency = <32768>;
+};
+
&pfc {
pinctrl-0 = <&scif_clk_pins>;
pinctrl-names = "default";
scif1_pins: scif1 {
- renesas,groups = "scif1_data_a", "scif1_ctrl";
- renesas,function = "scif1";
+ groups = "scif1_data_a", "scif1_ctrl";
+ function = "scif1";
};
scif2_pins: scif2 {
- renesas,groups = "scif2_data_a";
- renesas,function = "scif2";
+ groups = "scif2_data_a";
+ function = "scif2";
};
scif_clk_pins: scif_clk {
- renesas,groups = "scif_clk_a";
- renesas,function = "scif_clk";
+ groups = "scif_clk_a";
+ function = "scif_clk";
};
i2c2_pins: i2c2 {
- renesas,groups = "i2c2_a";
- renesas,function = "i2c2";
+ groups = "i2c2_a";
+ function = "i2c2";
};
avb_pins: avb {
- renesas,groups = "avb_mdc";
- renesas,function = "avb";
+ groups = "avb_mdc";
+ function = "avb";
};
sdhi0_pins: sd0 {
- renesas,groups = "sdhi0_data4", "sdhi0_ctrl";
- renesas,function = "sdhi0";
+ groups = "sdhi0_data4", "sdhi0_ctrl";
+ function = "sdhi0";
};
sdhi3_pins: sd3 {
- renesas,groups = "sdhi3_data4", "sdhi3_ctrl";
- renesas,function = "sdhi3";
+ groups = "sdhi3_data4", "sdhi3_ctrl";
+ function = "sdhi3";
};
sound_pins: sound {
- renesas,groups = "ssi01239_ctrl", "ssi0_data", "ssi1_data_a";
- renesas,function = "ssi";
+ groups = "ssi01239_ctrl", "ssi0_data", "ssi1_data_a";
+ function = "ssi";
};
sound_clk_pins: sound_clk {
- renesas,groups = "audio_clk_a_a", "audio_clk_b_a", "audio_clk_c_a",
- "audio_clkout_a", "audio_clkout3_a";
- renesas,function = "audio_clk";
+ groups = "audio_clk_a_a", "audio_clk_b_a", "audio_clk_c_a",
+ "audio_clkout_a", "audio_clkout3_a";
+ function = "audio_clk";
};
usb1_pins: usb1 {
- renesas,groups = "usb1";
- renesas,function = "usb1";
+ groups = "usb1";
+ function = "usb1";
};
usb2_pins: usb2 {
- renesas,groups = "usb2";
- renesas,function = "usb2";
+ groups = "usb2";
+ function = "usb2";
};
};
@@ -388,3 +392,16 @@
&ohci2 {
status = "okay";
};
+
+&pcie_bus_clk {
+ clock-frequency = <100000000>;
+ status = "okay";
+};
+
+&pciec0 {
+ status = "okay";
+};
+
+&pciec1 {
+ status = "okay";
+};
diff --git a/arch/arm64/boot/dts/renesas/r8a7795.dtsi b/arch/arm64/boot/dts/renesas/r8a7795.dtsi
index 706d2426024f..3285a9286786 100644
--- a/arch/arm64/boot/dts/renesas/r8a7795.dtsi
+++ b/arch/arm64/boot/dts/renesas/r8a7795.dtsi
@@ -10,6 +10,7 @@
#include <dt-bindings/clock/r8a7795-cpg-mssr.h>
#include <dt-bindings/interrupt-controller/arm-gic.h>
+#include <dt-bindings/power/r8a7795-sysc.h>
/ {
compatible = "renesas,r8a7795";
@@ -39,6 +40,7 @@
compatible = "arm,cortex-a57", "arm,armv8";
reg = <0x0>;
device_type = "cpu";
+ power-domains = <&sysc R8A7795_PD_CA57_CPU0>;
next-level-cache = <&L2_CA57>;
enable-method = "psci";
};
@@ -47,6 +49,7 @@
compatible = "arm,cortex-a57","arm,armv8";
reg = <0x1>;
device_type = "cpu";
+ power-domains = <&sysc R8A7795_PD_CA57_CPU1>;
next-level-cache = <&L2_CA57>;
enable-method = "psci";
};
@@ -54,6 +57,7 @@
compatible = "arm,cortex-a57","arm,armv8";
reg = <0x2>;
device_type = "cpu";
+ power-domains = <&sysc R8A7795_PD_CA57_CPU2>;
next-level-cache = <&L2_CA57>;
enable-method = "psci";
};
@@ -61,6 +65,7 @@
compatible = "arm,cortex-a57","arm,armv8";
reg = <0x3>;
device_type = "cpu";
+ power-domains = <&sysc R8A7795_PD_CA57_CPU3>;
next-level-cache = <&L2_CA57>;
enable-method = "psci";
};
@@ -68,12 +73,14 @@
L2_CA57: cache-controller@0 {
compatible = "cache";
+ power-domains = <&sysc R8A7795_PD_CA57_SCU>;
cache-unified;
cache-level = <2>;
};
L2_CA53: cache-controller@1 {
compatible = "cache";
+ power-domains = <&sysc R8A7795_PD_CA53_SCU>;
cache-unified;
cache-level = <2>;
};
@@ -115,6 +122,13 @@
clock-frequency = <0>;
};
+ /* External CAN clock - to be overridden by boards that provide it */
+ can_clk: can {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <0>;
+ };
+
/* External SCIF clock - to be overridden by boards that provide it */
scif_clk: scif {
compatible = "fixed-clock";
@@ -122,6 +136,13 @@
clock-frequency = <0>;
};
+ /* External PCIe clock - can be overridden by the board */
+ pcie_bus_clk: pcie_bus {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <0>;
+ };
+
soc {
compatible = "simple-bus";
interrupt-parent = <&gic>;
@@ -154,7 +175,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&cpg CPG_MOD 912>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
};
gpio1: gpio@e6051000 {
@@ -168,7 +189,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&cpg CPG_MOD 911>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
};
gpio2: gpio@e6052000 {
@@ -182,7 +203,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&cpg CPG_MOD 910>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
};
gpio3: gpio@e6053000 {
@@ -196,7 +217,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&cpg CPG_MOD 909>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
};
gpio4: gpio@e6054000 {
@@ -210,7 +231,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&cpg CPG_MOD 908>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
};
gpio5: gpio@e6055000 {
@@ -224,7 +245,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&cpg CPG_MOD 907>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
};
gpio6: gpio@e6055400 {
@@ -238,7 +259,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&cpg CPG_MOD 906>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
};
gpio7: gpio@e6055800 {
@@ -252,7 +273,7 @@
#interrupt-cells = <2>;
interrupt-controller;
clocks = <&cpg CPG_MOD 905>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
};
pmu_a57 {
@@ -288,6 +309,12 @@
#power-domain-cells = <0>;
};
+ sysc: system-controller@e6180000 {
+ compatible = "renesas,r8a7795-sysc";
+ reg = <0 0xe6180000 0 0x0400>;
+ #power-domain-cells = <1>;
+ };
+
audma0: dma-controller@ec700000 {
compatible = "renesas,rcar-dmac";
reg = <0 0xec700000 0 0x10000>;
@@ -315,7 +342,7 @@
"ch12", "ch13", "ch14", "ch15";
clocks = <&cpg CPG_MOD 502>;
clock-names = "fck";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <16>;
};
@@ -347,7 +374,7 @@
"ch12", "ch13", "ch14", "ch15";
clocks = <&cpg CPG_MOD 501>;
clock-names = "fck";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <16>;
};
@@ -369,7 +396,7 @@
GIC_SPI 18 IRQ_TYPE_LEVEL_HIGH
GIC_SPI 161 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cpg CPG_MOD 407>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
};
dmac0: dma-controller@e6700000 {
@@ -400,7 +427,7 @@
"ch12", "ch13", "ch14", "ch15";
clocks = <&cpg CPG_MOD 219>;
clock-names = "fck";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <16>;
};
@@ -433,7 +460,7 @@
"ch12", "ch13", "ch14", "ch15";
clocks = <&cpg CPG_MOD 218>;
clock-names = "fck";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <16>;
};
@@ -466,7 +493,7 @@
"ch12", "ch13", "ch14", "ch15";
clocks = <&cpg CPG_MOD 217>;
clock-names = "fck";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <16>;
};
@@ -508,12 +535,42 @@
"ch20", "ch21", "ch22", "ch23",
"ch24";
clocks = <&cpg CPG_MOD 812>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
phy-mode = "rgmii-id";
#address-cells = <1>;
#size-cells = <0>;
};
+ can0: can@e6c30000 {
+ compatible = "renesas,can-r8a7795",
+ "renesas,rcar-gen3-can";
+ reg = <0 0xe6c30000 0 0x1000>;
+ interrupts = <GIC_SPI 186 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpg CPG_MOD 916>,
+ <&cpg CPG_CORE R8A7795_CLK_CANFD>,
+ <&can_clk>;
+ clock-names = "clkp1", "clkp2", "can_clk";
+ assigned-clocks = <&cpg CPG_CORE R8A7795_CLK_CANFD>;
+ assigned-clock-rates = <40000000>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
+ status = "disabled";
+ };
+
+ can1: can@e6c38000 {
+ compatible = "renesas,can-r8a7795",
+ "renesas,rcar-gen3-can";
+ reg = <0 0xe6c38000 0 0x1000>;
+ interrupts = <GIC_SPI 187 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpg CPG_MOD 915>,
+ <&cpg CPG_CORE R8A7795_CLK_CANFD>,
+ <&can_clk>;
+ clock-names = "clkp1", "clkp2", "can_clk";
+ assigned-clocks = <&cpg CPG_CORE R8A7795_CLK_CANFD>;
+ assigned-clock-rates = <40000000>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
+ status = "disabled";
+ };
+
hscif0: serial@e6540000 {
compatible = "renesas,hscif-r8a7795",
"renesas,rcar-gen3-hscif",
@@ -526,7 +583,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac1 0x31>, <&dmac1 0x30>;
dma-names = "tx", "rx";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -542,7 +599,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac1 0x33>, <&dmac1 0x32>;
dma-names = "tx", "rx";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -558,7 +615,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac1 0x35>, <&dmac1 0x34>;
dma-names = "tx", "rx";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -574,7 +631,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x37>, <&dmac0 0x36>;
dma-names = "tx", "rx";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -590,7 +647,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x39>, <&dmac0 0x38>;
dma-names = "tx", "rx";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -605,7 +662,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac1 0x51>, <&dmac1 0x50>;
dma-names = "tx", "rx";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -620,7 +677,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac1 0x53>, <&dmac1 0x52>;
dma-names = "tx", "rx";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -635,7 +692,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac1 0x13>, <&dmac1 0x12>;
dma-names = "tx", "rx";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -650,7 +707,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x57>, <&dmac0 0x56>;
dma-names = "tx", "rx";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -665,7 +722,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac0 0x59>, <&dmac0 0x58>;
dma-names = "tx", "rx";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -680,7 +737,7 @@
clock-names = "fck", "brg_int", "scif_clk";
dmas = <&dmac1 0x5b>, <&dmac1 0x5a>;
dma-names = "tx", "rx";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -691,7 +748,7 @@
reg = <0 0xe6500000 0 0x40>;
interrupts = <GIC_SPI 287 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cpg CPG_MOD 931>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <110>;
status = "disabled";
};
@@ -703,7 +760,7 @@
reg = <0 0xe6508000 0 0x40>;
interrupts = <GIC_SPI 288 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cpg CPG_MOD 930>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <6>;
status = "disabled";
};
@@ -715,7 +772,7 @@
reg = <0 0xe6510000 0 0x40>;
interrupts = <GIC_SPI 286 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cpg CPG_MOD 929>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <6>;
status = "disabled";
};
@@ -727,7 +784,7 @@
reg = <0 0xe66d0000 0 0x40>;
interrupts = <GIC_SPI 290 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cpg CPG_MOD 928>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <110>;
status = "disabled";
};
@@ -739,7 +796,7 @@
reg = <0 0xe66d8000 0 0x40>;
interrupts = <GIC_SPI 19 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cpg CPG_MOD 927>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <110>;
status = "disabled";
};
@@ -751,7 +808,7 @@
reg = <0 0xe66e0000 0 0x40>;
interrupts = <GIC_SPI 20 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cpg CPG_MOD 919>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <110>;
status = "disabled";
};
@@ -763,7 +820,7 @@
reg = <0 0xe66e8000 0 0x40>;
interrupts = <GIC_SPI 21 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cpg CPG_MOD 918>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
i2c-scl-internal-delay-ns = <6>;
status = "disabled";
};
@@ -813,7 +870,7 @@
"src.1", "src.0",
"dvc.0", "dvc.1",
"clk_a", "clk_b", "clk_c", "clk_i";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
rcar_sound,dvc {
@@ -943,20 +1000,20 @@
};
xhci0: usb@ee000000 {
- compatible = "renesas,xhci-r8a7795";
+ compatible = "renesas,xhci-r8a7795", "renesas,rcar-gen3-xhci";
reg = <0 0xee000000 0 0xc00>;
interrupts = <GIC_SPI 102 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cpg CPG_MOD 328>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
xhci1: usb@ee0400000 {
- compatible = "renesas,xhci-r8a7795";
+ compatible = "renesas,xhci-r8a7795", "renesas,rcar-gen3-xhci";
reg = <0 0xee040000 0 0xc00>;
interrupts = <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cpg CPG_MOD 327>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -968,7 +1025,7 @@
GIC_SPI 109 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "ch0", "ch1";
clocks = <&cpg CPG_MOD 330>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <2>;
};
@@ -981,7 +1038,7 @@
GIC_SPI 110 IRQ_TYPE_LEVEL_HIGH>;
interrupt-names = "ch0", "ch1";
clocks = <&cpg CPG_MOD 331>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
#dma-cells = <1>;
dma-channels = <2>;
};
@@ -991,7 +1048,7 @@
reg = <0 0xee100000 0 0x2000>;
interrupts = <GIC_SPI 165 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cpg CPG_MOD 314>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -1000,7 +1057,7 @@
reg = <0 0xee120000 0 0x2000>;
interrupts = <GIC_SPI 166 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cpg CPG_MOD 313>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -1009,7 +1066,7 @@
reg = <0 0xee140000 0 0x2000>;
interrupts = <GIC_SPI 167 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cpg CPG_MOD 312>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
cap-mmc-highspeed;
status = "disabled";
};
@@ -1019,7 +1076,7 @@
reg = <0 0xee160000 0 0x2000>;
interrupts = <GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cpg CPG_MOD 311>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
cap-mmc-highspeed;
status = "disabled";
};
@@ -1029,7 +1086,7 @@
reg = <0 0xee080200 0 0x700>;
interrupts = <GIC_SPI 108 IRQ_TYPE_LEVEL_HIGH>;
clocks = <&cpg CPG_MOD 703>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
#phy-cells = <0>;
status = "disabled";
};
@@ -1038,7 +1095,7 @@
compatible = "renesas,usb2-phy-r8a7795";
reg = <0 0xee0a0200 0 0x700>;
clocks = <&cpg CPG_MOD 702>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
#phy-cells = <0>;
status = "disabled";
};
@@ -1047,7 +1104,7 @@
compatible = "renesas,usb2-phy-r8a7795";
reg = <0 0xee0c0200 0 0x700>;
clocks = <&cpg CPG_MOD 701>;
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
#phy-cells = <0>;
status = "disabled";
};
@@ -1059,7 +1116,7 @@
clocks = <&cpg CPG_MOD 703>;
phys = <&usb2_phy0>;
phy-names = "usb";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -1070,7 +1127,7 @@
clocks = <&cpg CPG_MOD 702>;
phys = <&usb2_phy1>;
phy-names = "usb";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -1081,7 +1138,7 @@
clocks = <&cpg CPG_MOD 701>;
phys = <&usb2_phy2>;
phy-names = "usb";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -1092,7 +1149,7 @@
clocks = <&cpg CPG_MOD 703>;
phys = <&usb2_phy0>;
phy-names = "usb";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -1103,7 +1160,7 @@
clocks = <&cpg CPG_MOD 702>;
phys = <&usb2_phy1>;
phy-names = "usb";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
@@ -1114,7 +1171,56 @@
clocks = <&cpg CPG_MOD 701>;
phys = <&usb2_phy2>;
phy-names = "usb";
- power-domains = <&cpg>;
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
+ status = "disabled";
+ };
+ pciec0: pcie@fe000000 {
+ compatible = "renesas,pcie-r8a7795";
+ reg = <0 0xfe000000 0 0x80000>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+ bus-range = <0x00 0xff>;
+ device_type = "pci";
+ ranges = <0x01000000 0 0x00000000 0 0xfe100000 0 0x00100000
+ 0x02000000 0 0xfe200000 0 0xfe200000 0 0x00200000
+ 0x02000000 0 0x30000000 0 0x30000000 0 0x08000000
+ 0x42000000 0 0x38000000 0 0x38000000 0 0x08000000>;
+ /* Map all possible DDR as inbound ranges */
+ dma-ranges = <0x42000000 0 0x40000000 0 0x40000000 0 0x40000000>;
+ interrupts = <GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 117 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 118 IRQ_TYPE_LEVEL_HIGH>;
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &gic GIC_SPI 116 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpg CPG_MOD 319>, <&pcie_bus_clk>;
+ clock-names = "pcie", "pcie_bus";
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
+ status = "disabled";
+ };
+
+ pciec1: pcie@ee800000 {
+ compatible = "renesas,pcie-r8a7795";
+ reg = <0 0xee800000 0 0x80000>;
+ #address-cells = <3>;
+ #size-cells = <2>;
+ bus-range = <0x00 0xff>;
+ device_type = "pci";
+ ranges = <0x01000000 0 0x00000000 0 0xee900000 0 0x00100000
+ 0x02000000 0 0xeea00000 0 0xeea00000 0 0x00200000
+ 0x02000000 0 0xc0000000 0 0xc0000000 0 0x08000000
+ 0x42000000 0 0xc8000000 0 0xc8000000 0 0x08000000>;
+ /* Map all possible DDR as inbound ranges */
+ dma-ranges = <0x42000000 0 0x40000000 0 0x40000000 0 0x40000000>;
+ interrupts = <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 149 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 150 IRQ_TYPE_LEVEL_HIGH>;
+ #interrupt-cells = <1>;
+ interrupt-map-mask = <0 0 0 0>;
+ interrupt-map = <0 0 0 0 &gic GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cpg CPG_MOD 318>, <&pcie_bus_clk>;
+ clock-names = "pcie", "pcie_bus";
+ power-domains = <&sysc R8A7795_PD_ALWAYS_ON>;
status = "disabled";
};
};
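Editorial sketch (not part of the patch above): the two CAN controllers added to r8a7795.dtsi are left disabled and reference the new can_clk placeholder, which the comment says is to be overridden by boards that actually provide the external clock, the same pattern Salvator-X uses for extalr_clk and pcie_bus_clk earlier in this series. A minimal, hypothetical board fragment might look like the following; the 48 MHz rate and the choice of can0 are illustrative assumptions, and the pinctrl setup a real board would need is omitted.

&can_clk {
        /* Assumed rate of the external CAN clock fitted on the board */
        clock-frequency = <48000000>;
};

&can0 {
        /* Enable the first CAN controller; a real board would also
         * reference its CAN pin group here. */
        status = "okay";
};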
diff --git a/arch/arm64/boot/dts/rockchip/Makefile b/arch/arm64/boot/dts/rockchip/Makefile
index e3f0b5f4ba4e..7037a161b6ef 100644
--- a/arch/arm64/boot/dts/rockchip/Makefile
+++ b/arch/arm64/boot/dts/rockchip/Makefile
@@ -1,5 +1,7 @@
dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3368-evb-act8846.dtb
+dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3368-geekbox.dtb
dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3368-r88.dtb
+dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3399-evb.dtb
always := $(dtb-y)
subdir-y := $(dts-dirs)
diff --git a/arch/arm64/boot/dts/rockchip/rk3368-evb.dtsi b/arch/arm64/boot/dts/rockchip/rk3368-evb.dtsi
index 6e27b22704df..fff8b1931f26 100644
--- a/arch/arm64/boot/dts/rockchip/rk3368-evb.dtsi
+++ b/arch/arm64/boot/dts/rockchip/rk3368-evb.dtsi
@@ -40,6 +40,7 @@
* OTHER DEALINGS IN THE SOFTWARE.
*/
+#include <dt-bindings/input/input.h>
#include <dt-bindings/pwm/pwm.h>
#include "rk3368.dtsi"
@@ -105,16 +106,14 @@
keys: gpio-keys {
compatible = "gpio-keys";
- #address-cells = <1>;
- #size-cells = <0>;
pinctrl-names = "default";
pinctrl-0 = <&pwr_key>;
- button@0 {
+ power {
wakeup-source;
gpios = <&gpio0 2 GPIO_ACTIVE_LOW>;
label = "GPIO Power";
- linux,code = <116>;
+ linux,code = <KEY_POWER>;
};
};
@@ -152,7 +151,6 @@
};
&emmc {
- broken-cd;
bus-width = <8>;
cap-mmc-highspeed;
disable-wp;
diff --git a/arch/arm64/boot/dts/rockchip/rk3368-geekbox.dts b/arch/arm64/boot/dts/rockchip/rk3368-geekbox.dts
new file mode 100644
index 000000000000..46cdddfcea6c
--- /dev/null
+++ b/arch/arm64/boot/dts/rockchip/rk3368-geekbox.dts
@@ -0,0 +1,319 @@
+/*
+ * Copyright (c) 2016 Andreas Färber
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This file is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This file is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+/dts-v1/;
+#include "rk3368.dtsi"
+#include <dt-bindings/input/input.h>
+
+/ {
+ model = "GeekBox";
+ compatible = "geekbuying,geekbox", "rockchip,rk3368";
+
+ chosen {
+ stdout-path = "serial2:115200n8";
+ };
+
+ memory@0 {
+ device_type = "memory";
+ reg = <0x0 0x0 0x0 0x80000000>;
+ };
+
+ ext_gmac: gmac-clk {
+ compatible = "fixed-clock";
+ clock-frequency = <125000000>;
+ clock-output-names = "ext_gmac";
+ #clock-cells = <0>;
+ };
+
+ ir: ir-receiver {
+ compatible = "gpio-ir-receiver";
+ gpios = <&gpio3 30 GPIO_ACTIVE_LOW>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&ir_int>;
+ };
+
+ keys: gpio-keys {
+ compatible = "gpio-keys";
+ pinctrl-names = "default";
+ pinctrl-0 = <&pwr_key>;
+
+ power {
+ gpios = <&gpio0 2 GPIO_ACTIVE_LOW>;
+ label = "GPIO Power";
+ linux,code = <KEY_POWER>;
+ wakeup-source;
+ };
+ };
+
+ leds: gpio-leds {
+ compatible = "gpio-leds";
+
+ blue {
+ gpios = <&gpio2 2 GPIO_ACTIVE_HIGH>;
+ label = "geekbox:blue:led";
+ default-state = "on";
+ };
+
+ red {
+ gpios = <&gpio2 3 GPIO_ACTIVE_HIGH>;
+ label = "geekbox:red:led";
+ default-state = "off";
+ };
+ };
+
+ vcc_sys: vcc-sys-regulator {
+ compatible = "regulator-fixed";
+ regulator-name = "vcc_sys";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ regulator-always-on;
+ regulator-boot-on;
+ };
+};
+
+&emmc {
+ status = "okay";
+ bus-width = <8>;
+ cap-mmc-highspeed;
+ clock-frequency = <150000000>;
+ disable-wp;
+ keep-power-in-suspend;
+ non-removable;
+ num-slots = <1>;
+ vmmc-supply = <&vcc_io>;
+ vqmmc-supply = <&vcc18_flash>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&emmc_clk>, <&emmc_cmd>, <&emmc_bus8>;
+};
+
+&gmac {
+ status = "okay";
+ phy-supply = <&vcc_lan>;
+ phy-mode = "rgmii";
+ clock_in_out = "input";
+ assigned-clocks = <&cru SCLK_MAC>;
+ assigned-clock-parents = <&ext_gmac>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&rgmii_pins>;
+ tx_delay = <0x30>;
+ rx_delay = <0x10>;
+};
+
+&i2c0 {
+ status = "okay";
+
+ rk808: pmic@1b {
+ compatible = "rockchip,rk808";
+ reg = <0x1b>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pmic_int>, <&pmic_sleep>;
+ interrupt-parent = <&gpio0>;
+ interrupts = <5 IRQ_TYPE_LEVEL_LOW>;
+ rockchip,system-power-controller;
+ vcc1-supply = <&vcc_sys>;
+ vcc2-supply = <&vcc_sys>;
+ vcc3-supply = <&vcc_sys>;
+ vcc4-supply = <&vcc_sys>;
+ vcc6-supply = <&vcc_sys>;
+ vcc7-supply = <&vcc_sys>;
+ vcc8-supply = <&vcc_io>;
+ vcc9-supply = <&vcc_sys>;
+ vcc10-supply = <&vcc_sys>;
+ vcc11-supply = <&vcc_sys>;
+ vcc12-supply = <&vcc_io>;
+ clock-output-names = "xin32k", "rk808-clkout2";
+ #clock-cells = <1>;
+
+ regulators {
+ vdd_cpu: DCDC_REG1 {
+ regulator-always-on;
+ regulator-boot-on;
+ regulator-min-microvolt = <700000>;
+ regulator-max-microvolt = <1500000>;
+ regulator-name = "vdd_cpu";
+ };
+
+ vdd_log: DCDC_REG2 {
+ regulator-always-on;
+ regulator-boot-on;
+ regulator-min-microvolt = <700000>;
+ regulator-max-microvolt = <1500000>;
+ regulator-name = "vdd_log";
+ };
+
+ vcc_ddr: DCDC_REG3 {
+ regulator-always-on;
+ regulator-boot-on;
+ regulator-name = "vcc_ddr";
+ };
+
+ vcc_io: DCDC_REG4 {
+ regulator-always-on;
+ regulator-boot-on;
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-name = "vcc_io";
+ };
+
+ vcc18_flash: LDO_REG1 {
+ regulator-always-on;
+ regulator-boot-on;
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-name = "vcc18_flash";
+ };
+
+ vcc33_lcd: LDO_REG2 {
+ regulator-always-on;
+ regulator-boot-on;
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-name = "vcc33_lcd";
+ };
+
+ vdd_10: LDO_REG3 {
+ regulator-always-on;
+ regulator-boot-on;
+ regulator-min-microvolt = <1000000>;
+ regulator-max-microvolt = <1000000>;
+ regulator-name = "vdd_10";
+ };
+
+ vcca_18: LDO_REG4 {
+ regulator-boot-on;
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-name = "vcca_18";
+ };
+
+ vccio_sd: LDO_REG5 {
+ regulator-always-on;
+ regulator-boot-on;
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <3300000>;
+ regulator-name = "vccio_sd";
+ };
+
+ vdd10_lcd: LDO_REG6 {
+ regulator-always-on;
+ regulator-boot-on;
+ regulator-min-microvolt = <1000000>;
+ regulator-max-microvolt = <1000000>;
+ regulator-name = "vdd10_lcd";
+ };
+
+ vcc_18: LDO_REG7 {
+ regulator-always-on;
+ regulator-boot-on;
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-name = "vcc_18";
+ };
+
+ vcc18_lcd: LDO_REG8 {
+ regulator-always-on;
+ regulator-boot-on;
+ regulator-min-microvolt = <1800000>;
+ regulator-max-microvolt = <1800000>;
+ regulator-name = "vcc18_lcd";
+ };
+
+ vcc_sd: SWITCH_REG1 {
+ regulator-always-on;
+ regulator-boot-on;
+ regulator-name = "vcc_sd";
+ };
+
+ vcc_lan: SWITCH_REG2 {
+ regulator-always-on;
+ regulator-boot-on;
+ regulator-name = "vcc_lan";
+ };
+ };
+ };
+};
+
+&pinctrl {
+ ir {
+ ir_int: ir-int {
+ rockchip,pins = <3 30 RK_FUNC_GPIO &pcfg_pull_none>;
+ };
+ };
+
+ keys {
+ pwr_key: pwr-key {
+ rockchip,pins = <0 2 RK_FUNC_GPIO &pcfg_pull_none>;
+ };
+ };
+
+ pmic {
+ pmic_sleep: pmic-sleep {
+ rockchip,pins = <0 0 RK_FUNC_2 &pcfg_pull_none>;
+ };
+
+ pmic_int: pmic-int {
+ rockchip,pins = <0 5 RK_FUNC_GPIO &pcfg_pull_up>;
+ };
+ };
+};
+
+&tsadc {
+ status = "okay";
+ rockchip,hw-tshut-mode = <0>; /* CRU */
+ rockchip,hw-tshut-polarity = <1>; /* high */
+};
+
+&uart2 {
+ status = "okay";
+};
+
+&usb_host0_ehci {
+ status = "okay";
+};
+
+&usb_otg {
+ status = "okay";
+};
+
+&wdt {
+ status = "okay";
+};
diff --git a/arch/arm64/boot/dts/rockchip/rk3368-r88.dts b/arch/arm64/boot/dts/rockchip/rk3368-r88.dts
index 1f2b642e794a..b56b7205e39b 100644
--- a/arch/arm64/boot/dts/rockchip/rk3368-r88.dts
+++ b/arch/arm64/boot/dts/rockchip/rk3368-r88.dts
@@ -42,6 +42,7 @@
/dts-v1/;
#include "rk3368.dtsi"
+#include <dt-bindings/input/input.h>
/ {
model = "Rockchip R88";
@@ -65,16 +66,14 @@
keys: gpio-keys {
compatible = "gpio-keys";
- #address-cells = <1>;
- #size-cells = <0>;
pinctrl-names = "default";
pinctrl-0 = <&pwr_key>;
- button@0 {
+ power {
wakeup-source;
gpios = <&gpio0 2 GPIO_ACTIVE_LOW>;
label = "GPIO Power";
- linux,code = <116>;
+ linux,code = <KEY_POWER>;
};
};
@@ -185,7 +184,6 @@
};
&emmc {
- broken-cd;
bus-width = <8>;
cap-mmc-highspeed;
disable-wp;
diff --git a/arch/arm64/boot/dts/rockchip/rk3368.dtsi b/arch/arm64/boot/dts/rockchip/rk3368.dtsi
index 49d119103e31..8b4a7c9154e9 100644
--- a/arch/arm64/boot/dts/rockchip/rk3368.dtsi
+++ b/arch/arm64/boot/dts/rockchip/rk3368.dtsi
@@ -413,7 +413,71 @@
};
thermal-zones {
- #include "rk3368-thermal.dtsi"
+ cpu {
+ polling-delay-passive = <100>; /* milliseconds */
+ polling-delay = <5000>; /* milliseconds */
+
+ thermal-sensors = <&tsadc 0>;
+
+ trips {
+ cpu_alert0: cpu_alert0 {
+ temperature = <75000>; /* millicelsius */
+ hysteresis = <2000>; /* millicelsius */
+ type = "passive";
+ };
+ cpu_alert1: cpu_alert1 {
+ temperature = <80000>; /* millicelsius */
+ hysteresis = <2000>; /* millicelsius */
+ type = "passive";
+ };
+ cpu_crit: cpu_crit {
+ temperature = <95000>; /* millicelsius */
+ hysteresis = <2000>; /* millicelsius */
+ type = "critical";
+ };
+ };
+
+ cooling-maps {
+ map0 {
+ trip = <&cpu_alert0>;
+ cooling-device =
+ <&cpu_b0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ map1 {
+ trip = <&cpu_alert1>;
+ cooling-device =
+ <&cpu_l0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
+
+ gpu {
+ polling-delay-passive = <100>; /* milliseconds */
+ polling-delay = <5000>; /* milliseconds */
+
+ thermal-sensors = <&tsadc 1>;
+
+ trips {
+ gpu_alert0: gpu_alert0 {
+ temperature = <80000>; /* millicelsius */
+ hysteresis = <2000>; /* millicelsius */
+ type = "passive";
+ };
+ gpu_crit: gpu_crit {
+ temperature = <115000>; /* millicelsius */
+ hysteresis = <2000>; /* millicelsius */
+ type = "critical";
+ };
+ };
+
+ cooling-maps {
+ map0 {
+ trip = <&gpu_alert0>;
+ cooling-device =
+ <&cpu_b0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ };
+ };
+ };
};
tsadc: tsadc@ff280000 {
@@ -555,6 +619,18 @@
status = "disabled";
};
+ mbox: mbox@ff6b0000 {
+ compatible = "rockchip,rk3368-mailbox";
+ reg = <0x0 0xff6b0000 0x0 0x1000>;
+ interrupts = <GIC_SPI 146 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 147 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 148 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 149 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cru PCLK_MAILBOX>;
+ clock-names = "pclk_mailbox";
+ #mbox-cells = <1>;
+ };
+
pmugrf: syscon@ff738000 {
compatible = "rockchip,rk3368-pmugrf", "syscon";
reg = <0x0 0xff738000 0x0 0x1000>;
@@ -926,11 +1002,11 @@
tsadc {
otp_gpio: otp-gpio {
- rockchip,pins = <0 10 RK_FUNC_GPIO &pcfg_pull_none>;
+ rockchip,pins = <0 3 RK_FUNC_GPIO &pcfg_pull_none>;
};
otp_out: otp-out {
- rockchip,pins = <0 10 RK_FUNC_1 &pcfg_pull_none>;
+ rockchip,pins = <0 3 RK_FUNC_1 &pcfg_pull_none>;
};
};
diff --git a/arch/arm64/boot/dts/rockchip/rk3368-thermal.dtsi b/arch/arm64/boot/dts/rockchip/rk3399-evb.dts
index a10010f92f96..1a3eb1482050 100644
--- a/arch/arm64/boot/dts/rockchip/rk3368-thermal.dtsi
+++ b/arch/arm64/boot/dts/rockchip/rk3399-evb.dts
@@ -1,8 +1,5 @@
/*
- * Device Tree Source for RK3368 SoC thermal
- *
- * Copyright (c) 2015, Fuzhou Rockchip Electronics Co., Ltd
- * Caesar Wang <wxt@rock-chips.com>
+ * Copyright (c) 2016 Fuzhou Rockchip Electronics Co., Ltd
*
* This file is dual-licensed: you can use it either under the terms
* of the GPL or the X11 license, at your option. Note that this dual
@@ -43,70 +40,85 @@
* OTHER DEALINGS IN THE SOFTWARE.
*/
-#include <dt-bindings/thermal/thermal.h>
+/dts-v1/;
+#include <dt-bindings/pwm/pwm.h>
+#include "rk3399.dtsi"
-cpu_thermal: cpu_thermal {
- polling-delay-passive = <100>; /* milliseconds */
- polling-delay = <5000>; /* milliseconds */
+/ {
+ model = "Rockchip RK3399 Evaluation Board";
+ compatible = "rockchip,rk3399-evb", "rockchip,rk3399",
+ "google,rk3399evb-rev2";
- thermal-sensors = <&tsadc 0>;
+ vdd_center: vdd-center {
+ compatible = "pwm-regulator";
+ pwms = <&pwm3 0 25000 0>;
+ regulator-name = "vdd_center";
+ regulator-min-microvolt = <800000>;
+ regulator-max-microvolt = <1400000>;
+ regulator-always-on;
+ regulator-boot-on;
+ status = "okay";
+ };
- trips {
- cpu_alert0: cpu_alert0 {
- temperature = <75000>; /* millicelsius */
- hysteresis = <2000>; /* millicelsius */
- type = "passive";
- };
- cpu_alert1: cpu_alert1 {
- temperature = <80000>; /* millicelsius */
- hysteresis = <2000>; /* millicelsius */
- type = "passive";
- };
- cpu_crit: cpu_crit {
- temperature = <95000>; /* millicelsius */
- hysteresis = <2000>; /* millicelsius */
- type = "critical";
- };
+ vcc3v3_sys: vcc3v3-sys {
+ compatible = "regulator-fixed";
+ regulator-name = "vcc3v3_sys";
+ regulator-always-on;
+ regulator-boot-on;
+ regulator-min-microvolt = <3300000>;
+ regulator-max-microvolt = <3300000>;
};
- cooling-maps {
- map0 {
- trip = <&cpu_alert0>;
- cooling-device =
- <&cpu_b0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
- };
- map1 {
- trip = <&cpu_alert1>;
- cooling-device =
- <&cpu_l0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
- };
+ vcc_phy: vcc-phy-regulator {
+ compatible = "regulator-fixed";
+ regulator-name = "vcc_phy";
+ regulator-always-on;
+ regulator-boot-on;
};
};
-gpu_thermal: gpu_thermal {
- polling-delay-passive = <100>; /* milliseconds */
- polling-delay = <5000>; /* milliseconds */
+&pwm0 {
+ status = "okay";
+};
+
+&pwm2 {
+ status = "okay";
+};
- thermal-sensors = <&tsadc 1>;
+&pwm3 {
+ status = "okay";
+};
- trips {
- gpu_alert0: gpu_alert0 {
- temperature = <80000>; /* millicelsius */
- hysteresis = <2000>; /* millicelsius */
- type = "passive";
- };
- gpu_crit: gpu_crit {
- temperature = <1150000>; /* millicelsius */
- hysteresis = <2000>; /* millicelsius */
- type = "critical";
+&uart2 {
+ status = "okay";
+};
+
+&usb_host0_ehci {
+ status = "okay";
+};
+
+&usb_host0_ohci {
+ status = "okay";
+};
+
+&usb_host1_ehci {
+ status = "okay";
+};
+
+&usb_host1_ohci {
+ status = "okay";
+};
+
+&pinctrl {
+ pmic {
+ pmic_int_l: pmic-int-l {
+ rockchip,pins =
+ <1 21 RK_FUNC_GPIO &pcfg_pull_up>;
};
- };
- cooling-maps {
- map0 {
- trip = <&gpu_alert0>;
- cooling-device =
- <&cpu_b0 THERMAL_NO_LIMIT THERMAL_NO_LIMIT>;
+ pmic_dvs2: pmic-dvs2 {
+ rockchip,pins =
+ <1 18 RK_FUNC_GPIO &pcfg_pull_down>;
};
};
};
diff --git a/arch/arm64/boot/dts/rockchip/rk3399.dtsi b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
new file mode 100644
index 000000000000..46f325a143b0
--- /dev/null
+++ b/arch/arm64/boot/dts/rockchip/rk3399.dtsi
@@ -0,0 +1,1013 @@
+/*
+ * Copyright (c) 2016 Fuzhou Rockchip Electronics Co., Ltd
+ *
+ * This file is dual-licensed: you can use it either under the terms
+ * of the GPL or the X11 license, at your option. Note that this dual
+ * licensing only applies to this file, and not this project as a
+ * whole.
+ *
+ * a) This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of the
+ * License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Or, alternatively,
+ *
+ * b) Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without
+ * restriction, including without limitation the rights to use,
+ * copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following
+ * conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
+ * OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
+ * HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
+ * WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include <dt-bindings/clock/rk3399-cru.h>
+#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/interrupt-controller/arm-gic.h>
+#include <dt-bindings/interrupt-controller/irq.h>
+#include <dt-bindings/pinctrl/rockchip.h>
+
+/ {
+ compatible = "rockchip,rk3399";
+
+ interrupt-parent = <&gic>;
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ aliases {
+ serial0 = &uart0;
+ serial1 = &uart1;
+ serial2 = &uart2;
+ serial3 = &uart3;
+ serial4 = &uart4;
+ };
+
+ cpus {
+ #address-cells = <2>;
+ #size-cells = <0>;
+
+ cpu-map {
+ cluster0 {
+ core0 {
+ cpu = <&cpu_l0>;
+ };
+ core1 {
+ cpu = <&cpu_l1>;
+ };
+ core2 {
+ cpu = <&cpu_l2>;
+ };
+ core3 {
+ cpu = <&cpu_l3>;
+ };
+ };
+
+ cluster1 {
+ core0 {
+ cpu = <&cpu_b0>;
+ };
+ core1 {
+ cpu = <&cpu_b1>;
+ };
+ };
+ };
+
+ cpu_l0: cpu@0 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a53", "arm,armv8";
+ reg = <0x0 0x0>;
+ enable-method = "psci";
+ #cooling-cells = <2>; /* min followed by max */
+ clocks = <&cru ARMCLKL>;
+ };
+
+ cpu_l1: cpu@1 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a53", "arm,armv8";
+ reg = <0x0 0x1>;
+ enable-method = "psci";
+ clocks = <&cru ARMCLKL>;
+ };
+
+ cpu_l2: cpu@2 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a53", "arm,armv8";
+ reg = <0x0 0x2>;
+ enable-method = "psci";
+ clocks = <&cru ARMCLKL>;
+ };
+
+ cpu_l3: cpu@3 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a53", "arm,armv8";
+ reg = <0x0 0x3>;
+ enable-method = "psci";
+ clocks = <&cru ARMCLKL>;
+ };
+
+ cpu_b0: cpu@100 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a72", "arm,armv8";
+ reg = <0x0 0x100>;
+ enable-method = "psci";
+ #cooling-cells = <2>; /* min followed by max */
+ clocks = <&cru ARMCLKB>;
+ };
+
+ cpu_b1: cpu@101 {
+ device_type = "cpu";
+ compatible = "arm,cortex-a72", "arm,armv8";
+ reg = <0x0 0x101>;
+ enable-method = "psci";
+ clocks = <&cru ARMCLKB>;
+ };
+ };
+
+ psci {
+ compatible = "arm,psci-1.0";
+ method = "smc";
+ };
+
+ timer {
+ compatible = "arm,armv8-timer";
+ interrupts = <GIC_PPI 13 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_PPI 14 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_PPI 11 IRQ_TYPE_LEVEL_LOW>,
+ <GIC_PPI 10 IRQ_TYPE_LEVEL_LOW>;
+ };
+
+ xin24m: xin24m {
+ compatible = "fixed-clock";
+ clock-frequency = <24000000>;
+ clock-output-names = "xin24m";
+ #clock-cells = <0>;
+ };
+
+ amba {
+ compatible = "arm,amba-bus";
+ #address-cells = <2>;
+ #size-cells = <2>;
+ ranges;
+
+ dmac_bus: dma-controller@ff6d0000 {
+ compatible = "arm,pl330", "arm,primecell";
+ reg = <0x0 0xff6d0000 0x0 0x4000>;
+ interrupts = <GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>;
+ #dma-cells = <1>;
+ clocks = <&cru ACLK_DMAC0_PERILP>;
+ clock-names = "apb_pclk";
+ };
+
+ dmac_peri: dma-controller@ff6e0000 {
+ compatible = "arm,pl330", "arm,primecell";
+ reg = <0x0 0xff6e0000 0x0 0x4000>;
+ interrupts = <GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>,
+ <GIC_SPI 8 IRQ_TYPE_LEVEL_HIGH>;
+ #dma-cells = <1>;
+ clocks = <&cru ACLK_DMAC1_PERILP>;
+ clock-names = "apb_pclk";
+ };
+ };
+
+ sdio0: dwmmc@fe310000 {
+ compatible = "rockchip,rk3399-dw-mshc",
+ "rockchip,rk3288-dw-mshc";
+ reg = <0x0 0xfe310000 0x0 0x4000>;
+ interrupts = <GIC_SPI 64 IRQ_TYPE_LEVEL_HIGH>;
+ clock-freq-min-max = <400000 150000000>;
+ clocks = <&cru HCLK_SDIO>, <&cru SCLK_SDIO>,
+ <&cru SCLK_SDIO_DRV>, <&cru SCLK_SDIO_SAMPLE>;
+ clock-names = "biu", "ciu", "ciu-drive", "ciu-sample";
+ fifo-depth = <0x100>;
+ status = "disabled";
+ };
+
+ sdmmc: dwmmc@fe320000 {
+ compatible = "rockchip,rk3399-dw-mshc",
+ "rockchip,rk3288-dw-mshc";
+ reg = <0x0 0xfe320000 0x0 0x4000>;
+ interrupts = <GIC_SPI 65 IRQ_TYPE_LEVEL_HIGH>;
+ clock-freq-min-max = <400000 150000000>;
+ clocks = <&cru HCLK_SDMMC>, <&cru SCLK_SDMMC>,
+ <&cru SCLK_SDMMC_DRV>, <&cru SCLK_SDMMC_SAMPLE>;
+ clock-names = "biu", "ciu", "ciu-drive", "ciu-sample";
+ fifo-depth = <0x100>;
+ status = "disabled";
+ };
+
+ usb_host0_ehci: usb@fe380000 {
+ compatible = "generic-ehci";
+ reg = <0x0 0xfe380000 0x0 0x20000>;
+ interrupts = <GIC_SPI 26 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cru HCLK_HOST0>, <&cru HCLK_HOST0_ARB>;
+ clock-names = "hclk_host0", "hclk_host0_arb";
+ status = "disabled";
+ };
+
+ usb_host0_ohci: usb@fe3a0000 {
+ compatible = "generic-ohci";
+ reg = <0x0 0xfe3a0000 0x0 0x20000>;
+ interrupts = <GIC_SPI 28 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cru HCLK_HOST0>, <&cru HCLK_HOST0_ARB>;
+ clock-names = "hclk_host0", "hclk_host0_arb";
+ status = "disabled";
+ };
+
+ usb_host1_ehci: usb@fe3c0000 {
+ compatible = "generic-ehci";
+ reg = <0x0 0xfe3c0000 0x0 0x20000>;
+ interrupts = <GIC_SPI 30 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cru HCLK_HOST1>, <&cru HCLK_HOST1_ARB>;
+ clock-names = "hclk_host1", "hclk_host1_arb";
+ status = "disabled";
+ };
+
+ usb_host1_ohci: usb@fe3e0000 {
+ compatible = "generic-ohci";
+ reg = <0x0 0xfe3e0000 0x0 0x20000>;
+ interrupts = <GIC_SPI 32 IRQ_TYPE_LEVEL_HIGH>;
+ clocks = <&cru HCLK_HOST1>, <&cru HCLK_HOST1_ARB>;
+ clock-names = "hclk_host1", "hclk_host1_arb";
+ status = "disabled";
+ };
+
+ gic: interrupt-controller@fee00000 {
+ compatible = "arm,gic-v3";
+ #interrupt-cells = <3>;
+ #address-cells = <2>;
+ #size-cells = <2>;
+ ranges;
+ interrupt-controller;
+
+ reg = <0x0 0xfee00000 0 0x10000>, /* GICD */
+ <0x0 0xfef00000 0 0xc0000>, /* GICR */
+ <0x0 0xfff00000 0 0x10000>, /* GICC */
+ <0x0 0xfff10000 0 0x10000>, /* GICH */
+ <0x0 0xfff20000 0 0x10000>; /* GICV */
+ interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
+ its: interrupt-controller@fee20000 {
+ compatible = "arm,gic-v3-its";
+ msi-controller;
+ reg = <0x0 0xfee20000 0x0 0x20000>;
+ };
+ };
+
+ uart0: serial@ff180000 {
+ compatible = "rockchip,rk3399-uart", "snps,dw-apb-uart";
+ reg = <0x0 0xff180000 0x0 0x100>;
+ clocks = <&cru SCLK_UART0>, <&cru PCLK_UART0>;
+ clock-names = "baudclk", "apb_pclk";
+ interrupts = <GIC_SPI 99 IRQ_TYPE_LEVEL_HIGH>;
+ reg-shift = <2>;
+ reg-io-width = <4>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart0_xfer>;
+ status = "disabled";
+ };
+
+ uart1: serial@ff190000 {
+ compatible = "rockchip,rk3399-uart", "snps,dw-apb-uart";
+ reg = <0x0 0xff190000 0x0 0x100>;
+ clocks = <&cru SCLK_UART1>, <&cru PCLK_UART1>;
+ clock-names = "baudclk", "apb_pclk";
+ interrupts = <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>;
+ reg-shift = <2>;
+ reg-io-width = <4>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart1_xfer>;
+ status = "disabled";
+ };
+
+ uart2: serial@ff1a0000 {
+ compatible = "rockchip,rk3399-uart", "snps,dw-apb-uart";
+ reg = <0x0 0xff1a0000 0x0 0x100>;
+ clocks = <&cru SCLK_UART2>, <&cru PCLK_UART2>;
+ clock-names = "baudclk", "apb_pclk";
+ interrupts = <GIC_SPI 100 IRQ_TYPE_LEVEL_HIGH>;
+ reg-shift = <2>;
+ reg-io-width = <4>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart2c_xfer>;
+ status = "disabled";
+ };
+
+ uart3: serial@ff1b0000 {
+ compatible = "rockchip,rk3399-uart", "snps,dw-apb-uart";
+ reg = <0x0 0xff1b0000 0x0 0x100>;
+ clocks = <&cru SCLK_UART3>, <&cru PCLK_UART3>;
+ clock-names = "baudclk", "apb_pclk";
+ interrupts = <GIC_SPI 101 IRQ_TYPE_LEVEL_HIGH>;
+ reg-shift = <2>;
+ reg-io-width = <4>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart3_xfer>;
+ status = "disabled";
+ };
+
+ spi0: spi@ff1c0000 {
+ compatible = "rockchip,rk3399-spi", "rockchip,rk3066-spi";
+ reg = <0x0 0xff1c0000 0x0 0x1000>;
+ clocks = <&cru SCLK_SPI0>, <&cru PCLK_SPI0>;
+ clock-names = "spiclk", "apb_pclk";
+ interrupts = <GIC_SPI 68 IRQ_TYPE_LEVEL_HIGH>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&spi0_clk &spi0_tx &spi0_rx &spi0_cs0>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "disabled";
+ };
+
+ spi1: spi@ff1d0000 {
+ compatible = "rockchip,rk3399-spi", "rockchip,rk3066-spi";
+ reg = <0x0 0xff1d0000 0x0 0x1000>;
+ clocks = <&cru SCLK_SPI1>, <&cru PCLK_SPI1>;
+ clock-names = "spiclk", "apb_pclk";
+ interrupts = <GIC_SPI 53 IRQ_TYPE_LEVEL_HIGH>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&spi1_clk &spi1_tx &spi1_rx &spi1_cs0>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "disabled";
+ };
+
+ spi2: spi@ff1e0000 {
+ compatible = "rockchip,rk3399-spi", "rockchip,rk3066-spi";
+ reg = <0x0 0xff1e0000 0x0 0x1000>;
+ clocks = <&cru SCLK_SPI2>, <&cru PCLK_SPI2>;
+ clock-names = "spiclk", "apb_pclk";
+ interrupts = <GIC_SPI 52 IRQ_TYPE_LEVEL_HIGH>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&spi2_clk &spi2_tx &spi2_rx &spi2_cs0>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "disabled";
+ };
+
+ spi4: spi@ff1f0000 {
+ compatible = "rockchip,rk3399-spi", "rockchip,rk3066-spi";
+ reg = <0x0 0xff1f0000 0x0 0x1000>;
+ clocks = <&cru SCLK_SPI4>, <&cru PCLK_SPI4>;
+ clock-names = "spiclk", "apb_pclk";
+ interrupts = <GIC_SPI 67 IRQ_TYPE_LEVEL_HIGH>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&spi4_clk &spi4_tx &spi4_rx &spi4_cs0>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "disabled";
+ };
+
+ spi5: spi@ff200000 {
+ compatible = "rockchip,rk3399-spi", "rockchip,rk3066-spi";
+ reg = <0x0 0xff200000 0x0 0x1000>;
+ clocks = <&cru SCLK_SPI5>, <&cru PCLK_SPI5>;
+ clock-names = "spiclk", "apb_pclk";
+ interrupts = <GIC_SPI 132 IRQ_TYPE_LEVEL_HIGH>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&spi5_clk &spi5_tx &spi5_rx &spi5_cs0>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "disabled";
+ };
+
+ pmugrf: syscon@ff320000 {
+ compatible = "rockchip,rk3399-pmugrf", "syscon";
+ reg = <0x0 0xff320000 0x0 0x1000>;
+ };
+
+ spi3: spi@ff350000 {
+ compatible = "rockchip,rk3399-spi", "rockchip,rk3066-spi";
+ reg = <0x0 0xff350000 0x0 0x1000>;
+ clocks = <&pmucru SCLK_SPI3_PMU>, <&pmucru PCLK_SPI3_PMU>;
+ clock-names = "spiclk", "apb_pclk";
+ interrupts = <GIC_SPI 60 IRQ_TYPE_LEVEL_HIGH>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&spi3_clk &spi3_tx &spi3_rx &spi3_cs0>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "disabled";
+ };
+
+ uart4: serial@ff370000 {
+ compatible = "rockchip,rk3399-uart", "snps,dw-apb-uart";
+ reg = <0x0 0xff370000 0x0 0x100>;
+ clocks = <&pmucru SCLK_UART4_PMU>, <&pmucru PCLK_UART4_PMU>;
+ clock-names = "baudclk", "apb_pclk";
+ interrupts = <GIC_SPI 102 IRQ_TYPE_LEVEL_HIGH>;
+ reg-shift = <2>;
+ reg-io-width = <4>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&uart4_xfer>;
+ status = "disabled";
+ };
+
+ pwm0: pwm@ff420000 {
+ compatible = "rockchip,rk3399-pwm", "rockchip,rk3288-pwm";
+ reg = <0x0 0xff420000 0x0 0x10>;
+ #pwm-cells = <3>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pwm0_pin>;
+ clocks = <&pmucru PCLK_RKPWM_PMU>;
+ clock-names = "pwm";
+ status = "disabled";
+ };
+
+ pwm1: pwm@ff420010 {
+ compatible = "rockchip,rk3399-pwm", "rockchip,rk3288-pwm";
+ reg = <0x0 0xff420010 0x0 0x10>;
+ #pwm-cells = <3>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pwm1_pin>;
+ clocks = <&pmucru PCLK_RKPWM_PMU>;
+ clock-names = "pwm";
+ status = "disabled";
+ };
+
+ pwm2: pwm@ff420020 {
+ compatible = "rockchip,rk3399-pwm", "rockchip,rk3288-pwm";
+ reg = <0x0 0xff420020 0x0 0x10>;
+ #pwm-cells = <3>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pwm2_pin>;
+ clocks = <&pmucru PCLK_RKPWM_PMU>;
+ clock-names = "pwm";
+ status = "disabled";
+ };
+
+ pwm3: pwm@ff420030 {
+ compatible = "rockchip,rk3399-pwm", "rockchip,rk3288-pwm";
+ reg = <0x0 0xff420030 0x0 0x10>;
+ #pwm-cells = <3>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&pwm3a_pin>;
+ clocks = <&pmucru PCLK_RKPWM_PMU>;
+ clock-names = "pwm";
+ status = "disabled";
+ };
+
+ pmucru: pmu-clock-controller@ff750000 {
+ compatible = "rockchip,rk3399-pmucru";
+ reg = <0x0 0xff750000 0x0 0x1000>;
+ #clock-cells = <1>;
+ #reset-cells = <1>;
+ assigned-clocks = <&pmucru PLL_PPLL>;
+ assigned-clock-rates = <676000000>;
+ };
+
+ cru: clock-controller@ff760000 {
+ compatible = "rockchip,rk3399-cru";
+ reg = <0x0 0xff760000 0x0 0x1000>;
+ #clock-cells = <1>;
+ #reset-cells = <1>;
+ };
+
+ grf: syscon@ff770000 {
+ compatible = "rockchip,rk3399-grf", "syscon";
+ reg = <0x0 0xff770000 0x0 0x10000>;
+ };
+
+ watchdog@ff840000 {
+ compatible = "snps,dw-wdt";
+ reg = <0x0 0xff840000 0x0 0x100>;
+ clocks = <&cru PCLK_WDT>;
+ interrupts = <GIC_SPI 120 IRQ_TYPE_LEVEL_HIGH>;
+ };
+
+ spdif: spdif@ff870000 {
+ compatible = "rockchip,rk3399-spdif";
+ reg = <0x0 0xff870000 0x0 0x1000>;
+ interrupts = <GIC_SPI 66 IRQ_TYPE_LEVEL_HIGH>;
+ dmas = <&dmac_bus 7>;
+ dma-names = "tx";
+ clock-names = "mclk", "hclk";
+ clocks = <&cru SCLK_SPDIF_8CH>, <&cru HCLK_SPDIF>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&spdif_bus>;
+ status = "disabled";
+ };
+
+ i2s0: i2s@ff880000 {
+ compatible = "rockchip,rk3399-i2s", "rockchip,rk3066-i2s";
+ reg = <0x0 0xff880000 0x0 0x1000>;
+ rockchip,grf = <&grf>;
+ interrupts = <GIC_SPI 39 IRQ_TYPE_LEVEL_HIGH>;
+ dmas = <&dmac_bus 0>, <&dmac_bus 1>;
+ dma-names = "tx", "rx";
+ clock-names = "i2s_clk", "i2s_hclk";
+ clocks = <&cru SCLK_I2S0_8CH>, <&cru HCLK_I2S0_8CH>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2s0_8ch_bus>;
+ status = "disabled";
+ };
+
+ i2s1: i2s@ff890000 {
+ compatible = "rockchip,rk3399-i2s", "rockchip,rk3066-i2s";
+ reg = <0x0 0xff890000 0x0 0x1000>;
+ interrupts = <GIC_SPI 40 IRQ_TYPE_LEVEL_HIGH>;
+ dmas = <&dmac_bus 2>, <&dmac_bus 3>;
+ dma-names = "tx", "rx";
+ clock-names = "i2s_clk", "i2s_hclk";
+ clocks = <&cru SCLK_I2S1_8CH>, <&cru HCLK_I2S1_8CH>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&i2s1_2ch_bus>;
+ status = "disabled";
+ };
+
+ i2s2: i2s@ff8a0000 {
+ compatible = "rockchip,rk3399-i2s", "rockchip,rk3066-i2s";
+ reg = <0x0 0xff8a0000 0x0 0x1000>;
+ interrupts = <GIC_SPI 41 IRQ_TYPE_LEVEL_HIGH>;
+ dmas = <&dmac_bus 4>, <&dmac_bus 5>;
+ dma-names = "tx", "rx";
+ clock-names = "i2s_clk", "i2s_hclk";
+ clocks = <&cru SCLK_I2S2_8CH>, <&cru HCLK_I2S2_8CH>;
+ status = "disabled";
+ };
+
+ pinctrl: pinctrl {
+ compatible = "rockchip,rk3399-pinctrl";
+ rockchip,grf = <&grf>;
+ rockchip,pmu = <&pmugrf>;
+ #address-cells = <2>;
+ #size-cells = <2>;
+ ranges;
+
+ gpio0: gpio0@ff720000 {
+ compatible = "rockchip,gpio-bank";
+ reg = <0x0 0xff720000 0x0 0x100>;
+ clocks = <&pmucru PCLK_GPIO0_PMU>;
+ interrupts = <GIC_SPI 14 IRQ_TYPE_LEVEL_HIGH>;
+
+ gpio-controller;
+ #gpio-cells = <0x2>;
+
+ interrupt-controller;
+ #interrupt-cells = <0x2>;
+ };
+
+ gpio1: gpio1@ff730000 {
+ compatible = "rockchip,gpio-bank";
+ reg = <0x0 0xff730000 0x0 0x100>;
+ clocks = <&pmucru PCLK_GPIO1_PMU>;
+ interrupts = <GIC_SPI 15 IRQ_TYPE_LEVEL_HIGH>;
+
+ gpio-controller;
+ #gpio-cells = <0x2>;
+
+ interrupt-controller;
+ #interrupt-cells = <0x2>;
+ };
+
+ gpio2: gpio2@ff780000 {
+ compatible = "rockchip,gpio-bank";
+ reg = <0x0 0xff780000 0x0 0x100>;
+ clocks = <&cru PCLK_GPIO2>;
+ interrupts = <GIC_SPI 16 IRQ_TYPE_LEVEL_HIGH>;
+
+ gpio-controller;
+ #gpio-cells = <0x2>;
+
+ interrupt-controller;
+ #interrupt-cells = <0x2>;
+ };
+
+ gpio3: gpio3@ff788000 {
+ compatible = "rockchip,gpio-bank";
+ reg = <0x0 0xff788000 0x0 0x100>;
+ clocks = <&cru PCLK_GPIO3>;
+ interrupts = <GIC_SPI 17 IRQ_TYPE_LEVEL_HIGH>;
+
+ gpio-controller;
+ #gpio-cells = <0x2>;
+
+ interrupt-controller;
+ #interrupt-cells = <0x2>;
+ };
+
+ gpio4: gpio4@ff790000 {
+ compatible = "rockchip,gpio-bank";
+ reg = <0x0 0xff790000 0x0 0x100>;
+ clocks = <&cru PCLK_GPIO4>;
+ interrupts = <GIC_SPI 18 IRQ_TYPE_LEVEL_HIGH>;
+
+ gpio-controller;
+ #gpio-cells = <0x2>;
+
+ interrupt-controller;
+ #interrupt-cells = <0x2>;
+ };
+
+ pcfg_pull_up: pcfg-pull-up {
+ bias-pull-up;
+ };
+
+ pcfg_pull_down: pcfg-pull-down {
+ bias-pull-down;
+ };
+
+ pcfg_pull_none: pcfg-pull-none {
+ bias-disable;
+ };
+
+ pcfg_pull_none_12ma: pcfg-pull-none-12ma {
+ bias-disable;
+ drive-strength = <12>;
+ };
+
+ pcfg_pull_up_8ma: pcfg-pull-up-8ma {
+ bias-pull-up;
+ drive-strength = <8>;
+ };
+
+ pcfg_pull_down_4ma: pcfg-pull-down-4ma {
+ bias-pull-down;
+ drive-strength = <4>;
+ };
+
+ pcfg_pull_up_2ma: pcfg-pull-up-2ma {
+ bias-pull-up;
+ drive-strength = <2>;
+ };
+
+ pcfg_pull_down_12ma: pcfg-pull-down-12ma {
+ bias-pull-down;
+ drive-strength = <12>;
+ };
+
+ pcfg_pull_none_13ma: pcfg-pull-none-13ma {
+ bias-disable;
+ drive-strength = <13>;
+ };
+
+ i2c0 {
+ i2c0_xfer: i2c0-xfer {
+ rockchip,pins =
+ <1 15 RK_FUNC_2 &pcfg_pull_none>,
+ <1 16 RK_FUNC_2 &pcfg_pull_none>;
+ };
+ };
+
+ i2c1 {
+ i2c1_xfer: i2c1-xfer {
+ rockchip,pins =
+ <4 2 RK_FUNC_1 &pcfg_pull_none>,
+ <4 1 RK_FUNC_1 &pcfg_pull_none>;
+ };
+ };
+
+ i2c2 {
+ i2c2_xfer: i2c2-xfer {
+ rockchip,pins =
+ <2 1 RK_FUNC_2 &pcfg_pull_none_12ma>,
+ <2 0 RK_FUNC_2 &pcfg_pull_none_12ma>;
+ };
+ };
+
+ i2c3 {
+ i2c3_xfer: i2c3-xfer {
+ rockchip,pins =
+ <4 17 RK_FUNC_1 &pcfg_pull_none>,
+ <4 16 RK_FUNC_1 &pcfg_pull_none>;
+ };
+ };
+
+ i2c4 {
+ i2c4_xfer: i2c4-xfer {
+ rockchip,pins =
+ <1 12 RK_FUNC_1 &pcfg_pull_none>,
+ <1 11 RK_FUNC_1 &pcfg_pull_none>;
+ };
+ };
+
+ i2c5 {
+ i2c5_xfer: i2c5-xfer {
+ rockchip,pins =
+ <3 11 RK_FUNC_2 &pcfg_pull_none>,
+ <3 10 RK_FUNC_2 &pcfg_pull_none>;
+ };
+ };
+
+ i2c6 {
+ i2c6_xfer: i2c6-xfer {
+ rockchip,pins =
+ <2 10 RK_FUNC_2 &pcfg_pull_none>,
+ <2 9 RK_FUNC_2 &pcfg_pull_none>;
+ };
+ };
+
+ i2c7 {
+ i2c7_xfer: i2c7-xfer {
+ rockchip,pins =
+ <2 8 RK_FUNC_2 &pcfg_pull_none>,
+ <2 7 RK_FUNC_2 &pcfg_pull_none>;
+ };
+ };
+
+ i2c8 {
+ i2c8_xfer: i2c8-xfer {
+ rockchip,pins =
+ <1 21 RK_FUNC_1 &pcfg_pull_none>,
+ <1 20 RK_FUNC_1 &pcfg_pull_none>;
+ };
+ };
+
+ i2s0 {
+ i2s0_8ch_bus: i2s0-8ch-bus {
+ rockchip,pins =
+ <3 24 RK_FUNC_1 &pcfg_pull_none>,
+ <3 25 RK_FUNC_1 &pcfg_pull_none>,
+ <3 26 RK_FUNC_1 &pcfg_pull_none>,
+ <3 27 RK_FUNC_1 &pcfg_pull_none>,
+ <3 28 RK_FUNC_1 &pcfg_pull_none>,
+ <3 29 RK_FUNC_1 &pcfg_pull_none>,
+ <3 30 RK_FUNC_1 &pcfg_pull_none>,
+ <3 31 RK_FUNC_1 &pcfg_pull_none>,
+ <4 0 RK_FUNC_1 &pcfg_pull_none>;
+ };
+ };
+
+ i2s1 {
+ i2s1_2ch_bus: i2s1-2ch-bus {
+ rockchip,pins =
+ <4 3 RK_FUNC_1 &pcfg_pull_none>,
+ <4 4 RK_FUNC_1 &pcfg_pull_none>,
+ <4 5 RK_FUNC_1 &pcfg_pull_none>,
+ <4 6 RK_FUNC_1 &pcfg_pull_none>,
+ <4 7 RK_FUNC_1 &pcfg_pull_none>;
+ };
+ };
+
+ spdif {
+ spdif_bus: spdif-bus {
+ rockchip,pins =
+ <4 21 RK_FUNC_1 &pcfg_pull_none>;
+ };
+ };
+
+ spi0 {
+ spi0_clk: spi0-clk {
+ rockchip,pins =
+ <3 6 RK_FUNC_2 &pcfg_pull_up>;
+ };
+ spi0_cs0: spi0-cs0 {
+ rockchip,pins =
+ <3 7 RK_FUNC_2 &pcfg_pull_up>;
+ };
+ spi0_cs1: spi0-cs1 {
+ rockchip,pins =
+ <3 8 RK_FUNC_2 &pcfg_pull_up>;
+ };
+ spi0_tx: spi0-tx {
+ rockchip,pins =
+ <3 5 RK_FUNC_2 &pcfg_pull_up>;
+ };
+ spi0_rx: spi0-rx {
+ rockchip,pins =
+ <3 4 RK_FUNC_2 &pcfg_pull_up>;
+ };
+ };
+
+ spi1 {
+ spi1_clk: spi1-clk {
+ rockchip,pins =
+ <1 9 RK_FUNC_2 &pcfg_pull_up>;
+ };
+ spi1_cs0: spi1-cs0 {
+ rockchip,pins =
+ <1 10 RK_FUNC_2 &pcfg_pull_up>;
+ };
+ spi1_rx: spi1-rx {
+ rockchip,pins =
+ <1 7 RK_FUNC_2 &pcfg_pull_up>;
+ };
+ spi1_tx: spi1-tx {
+ rockchip,pins =
+ <1 8 RK_FUNC_2 &pcfg_pull_up>;
+ };
+ };
+
+ spi2 {
+ spi2_clk: spi2-clk {
+ rockchip,pins =
+ <2 11 RK_FUNC_1 &pcfg_pull_up>;
+ };
+ spi2_cs0: spi2-cs0 {
+ rockchip,pins =
+ <2 12 RK_FUNC_1 &pcfg_pull_up>;
+ };
+ spi2_rx: spi2-rx {
+ rockchip,pins =
+ <2 9 RK_FUNC_1 &pcfg_pull_up>;
+ };
+ spi2_tx: spi2-tx {
+ rockchip,pins =
+ <2 10 RK_FUNC_1 &pcfg_pull_up>;
+ };
+ };
+
+ spi3 {
+ spi3_clk: spi3-clk {
+ rockchip,pins =
+ <1 17 RK_FUNC_1 &pcfg_pull_up>;
+ };
+ spi3_cs0: spi3-cs0 {
+ rockchip,pins =
+ <1 18 RK_FUNC_1 &pcfg_pull_up>;
+ };
+ spi3_rx: spi3-rx {
+ rockchip,pins =
+ <1 15 RK_FUNC_1 &pcfg_pull_up>;
+ };
+ spi3_tx: spi3-tx {
+ rockchip,pins =
+ <1 16 RK_FUNC_1 &pcfg_pull_up>;
+ };
+ };
+
+ spi4 {
+ spi4_clk: spi4-clk {
+ rockchip,pins =
+ <3 2 RK_FUNC_2 &pcfg_pull_up>;
+ };
+ spi4_cs0: spi4-cs0 {
+ rockchip,pins =
+ <3 3 RK_FUNC_2 &pcfg_pull_up>;
+ };
+ spi4_rx: spi4-rx {
+ rockchip,pins =
+ <3 0 RK_FUNC_2 &pcfg_pull_up>;
+ };
+ spi4_tx: spi4-tx {
+ rockchip,pins =
+ <3 1 RK_FUNC_2 &pcfg_pull_up>;
+ };
+ };
+
+ spi5 {
+ spi5_clk: spi5-clk {
+ rockchip,pins =
+ <2 22 RK_FUNC_2 &pcfg_pull_up>;
+ };
+ spi5_cs0: spi5-cs0 {
+ rockchip,pins =
+ <2 23 RK_FUNC_2 &pcfg_pull_up>;
+ };
+ spi5_rx: spi5-rx {
+ rockchip,pins =
+ <2 20 RK_FUNC_2 &pcfg_pull_up>;
+ };
+ spi5_tx: spi5-tx {
+ rockchip,pins =
+ <2 21 RK_FUNC_2 &pcfg_pull_up>;
+ };
+ };
+
+ uart0 {
+ uart0_xfer: uart0-xfer {
+ rockchip,pins =
+ <2 16 RK_FUNC_1 &pcfg_pull_up>,
+ <2 17 RK_FUNC_1 &pcfg_pull_none>;
+ };
+
+ uart0_cts: uart0-cts {
+ rockchip,pins =
+ <2 18 RK_FUNC_1 &pcfg_pull_none>;
+ };
+
+ uart0_rts: uart0-rts {
+ rockchip,pins =
+ <2 19 RK_FUNC_1 &pcfg_pull_none>;
+ };
+ };
+
+ uart1 {
+ uart1_xfer: uart1-xfer {
+ rockchip,pins =
+ <3 12 RK_FUNC_2 &pcfg_pull_up>,
+ <3 13 RK_FUNC_2 &pcfg_pull_none>;
+ };
+ };
+
+ uart2a {
+ uart2a_xfer: uart2a-xfer {
+ rockchip,pins =
+ <4 8 RK_FUNC_2 &pcfg_pull_up>,
+ <4 9 RK_FUNC_2 &pcfg_pull_none>;
+ };
+ };
+
+ uart2b {
+ uart2b_xfer: uart2b-xfer {
+ rockchip,pins =
+ <4 16 RK_FUNC_2 &pcfg_pull_up>,
+ <4 17 RK_FUNC_2 &pcfg_pull_none>;
+ };
+ };
+
+ uart2c {
+ uart2c_xfer: uart2c-xfer {
+ rockchip,pins =
+ <4 19 RK_FUNC_1 &pcfg_pull_up>,
+ <4 20 RK_FUNC_1 &pcfg_pull_none>;
+ };
+ };
+
+ uart3 {
+ uart3_xfer: uart3-xfer {
+ rockchip,pins =
+ <3 14 RK_FUNC_2 &pcfg_pull_up>,
+ <3 15 RK_FUNC_2 &pcfg_pull_none>;
+ };
+
+ uart3_cts: uart3-cts {
+ rockchip,pins =
+ <3 18 RK_FUNC_2 &pcfg_pull_none>;
+ };
+
+ uart3_rts: uart3-rts {
+ rockchip,pins =
+ <3 19 RK_FUNC_2 &pcfg_pull_none>;
+ };
+ };
+
+ uart4 {
+ uart4_xfer: uart4-xfer {
+ rockchip,pins =
+ <1 7 RK_FUNC_1 &pcfg_pull_up>,
+ <1 8 RK_FUNC_1 &pcfg_pull_none>;
+ };
+ };
+
+ uarthdcp {
+ uarthdcp_xfer: uarthdcp-xfer {
+ rockchip,pins =
+ <4 21 RK_FUNC_2 &pcfg_pull_up>,
+ <4 22 RK_FUNC_2 &pcfg_pull_none>;
+ };
+ };
+
+ pwm0 {
+ pwm0_pin: pwm0-pin {
+ rockchip,pins =
+ <4 18 RK_FUNC_1 &pcfg_pull_none>;
+ };
+
+ vop0_pwm_pin: vop0-pwm-pin {
+ rockchip,pins =
+ <4 18 RK_FUNC_2 &pcfg_pull_none>;
+ };
+ };
+
+ pwm1 {
+ pwm1_pin: pwm1-pin {
+ rockchip,pins =
+ <4 22 RK_FUNC_1 &pcfg_pull_none>;
+ };
+
+ vop1_pwm_pin: vop1-pwm-pin {
+ rockchip,pins =
+ <4 18 RK_FUNC_3 &pcfg_pull_none>;
+ };
+ };
+
+ pwm2 {
+ pwm2_pin: pwm2-pin {
+ rockchip,pins =
+ <1 19 RK_FUNC_1 &pcfg_pull_none>;
+ };
+ };
+
+ pwm3a {
+ pwm3a_pin: pwm3a-pin {
+ rockchip,pins =
+ <0 6 RK_FUNC_1 &pcfg_pull_none>;
+ };
+ };
+
+ pwm3b {
+ pwm3b_pin: pwm3b-pin {
+ rockchip,pins =
+ <1 14 RK_FUNC_1 &pcfg_pull_none>;
+ };
+ };
+ };
+};
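Every rockchip,pins entry in the pinctrl node above is a <bank pin mux &config> tuple; board schematics usually refer to the same pin as GPIO<bank>_<letter><index>, with eight pins per letter group. The small helper below decodes that naming purely for illustration and is not part of the patch.

#include <stdio.h>

/* Decode a <bank pin ...> pair into the GPIOn_Xm name used in schematics. */
static void rockchip_pin_name(int bank, int pin, char *buf, size_t len)
{
        snprintf(buf, len, "GPIO%d_%c%d", bank, 'A' + pin / 8, pin % 8);
}

int main(void)
{
        char name[16];

        /* <1 21 RK_FUNC_GPIO &pcfg_pull_up> from the board's pmic_int_l node */
        rockchip_pin_name(1, 21, name, sizeof(name));
        printf("%s\n", name);   /* prints GPIO1_C5 */
        return 0;
}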
diff --git a/arch/arm64/boot/dts/socionext/uniphier-ph1-ld20-ref.dts b/arch/arm64/boot/dts/socionext/uniphier-ph1-ld20-ref.dts
index b0ed44313a5b..2adad8c8cd27 100644
--- a/arch/arm64/boot/dts/socionext/uniphier-ph1-ld20-ref.dts
+++ b/arch/arm64/boot/dts/socionext/uniphier-ph1-ld20-ref.dts
@@ -44,6 +44,7 @@
/dts-v1/;
/include/ "uniphier-ph1-ld20.dtsi"
+/include/ "uniphier-ref-daughter.dtsi"
/include/ "uniphier-support-card.dtsi"
/ {
diff --git a/arch/arm64/boot/dts/socionext/uniphier-ph1-ld20.dtsi b/arch/arm64/boot/dts/socionext/uniphier-ph1-ld20.dtsi
index 651c9d9d2d54..95328808e3c1 100644
--- a/arch/arm64/boot/dts/socionext/uniphier-ph1-ld20.dtsi
+++ b/arch/arm64/boot/dts/socionext/uniphier-ph1-ld20.dtsi
@@ -106,6 +106,12 @@
};
clocks {
+ refclk: ref {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <25000000>;
+ };
+
uart_clk: uart_clk {
#clock-cells = <0>;
compatible = "fixed-clock";
diff --git a/arch/arm64/boot/dts/socionext/uniphier-ref-daughter.dtsi b/arch/arm64/boot/dts/socionext/uniphier-ref-daughter.dtsi
new file mode 120000
index 000000000000..4685a8d89cba
--- /dev/null
+++ b/arch/arm64/boot/dts/socionext/uniphier-ref-daughter.dtsi
@@ -0,0 +1 @@
+../../../../arm/boot/dts/uniphier-ref-daughter.dtsi \ No newline at end of file
diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig
index a44ef995d8ae..fd2d74d0491e 100644
--- a/arch/arm64/configs/defconfig
+++ b/arch/arm64/configs/defconfig
@@ -1,7 +1,6 @@
# CONFIG_LOCALVERSION_AUTO is not set
CONFIG_SYSVIPC=y
CONFIG_POSIX_MQUEUE=y
-CONFIG_FHANDLE=y
CONFIG_AUDIT=y
CONFIG_NO_HZ_IDLE=y
CONFIG_HIGH_RES_TIMERS=y
@@ -36,8 +35,10 @@ CONFIG_ARCH_BCM_IPROC=y
CONFIG_ARCH_BERLIN=y
CONFIG_ARCH_EXYNOS=y
CONFIG_ARCH_LAYERSCAPE=y
+CONFIG_ARCH_LG1K=y
CONFIG_ARCH_HISI=y
CONFIG_ARCH_MEDIATEK=y
+CONFIG_ARCH_MESON=y
CONFIG_ARCH_MVEBU=y
CONFIG_ARCH_QCOM=y
CONFIG_ARCH_ROCKCHIP=y
@@ -56,12 +57,13 @@ CONFIG_ARCH_ZYNQMP=y
CONFIG_PCI=y
CONFIG_PCI_MSI=y
CONFIG_PCI_IOV=y
-CONFIG_PCI_RCAR_GEN2_PCIE=y
+CONFIG_PCIE_RCAR=y
CONFIG_PCI_HOST_GENERIC=y
CONFIG_PCI_XGENE=y
CONFIG_PCI_LAYERSCAPE=y
CONFIG_PCI_HISI=y
CONFIG_PCIE_QCOM=y
+CONFIG_ARM64_VA_BITS_48=y
CONFIG_SCHED_MC=y
CONFIG_PREEMPT=y
CONFIG_KSM=y
@@ -84,13 +86,19 @@ CONFIG_IP_PNP_DHCP=y
CONFIG_IP_PNP_BOOTP=y
# CONFIG_IPV6 is not set
CONFIG_BPF_JIT=y
-# CONFIG_WIRELESS is not set
+CONFIG_CFG80211=m
+CONFIG_MAC80211=m
+CONFIG_MAC80211_LEDS=y
+CONFIG_RFKILL=m
CONFIG_NET_9P=y
CONFIG_NET_9P_VIRTIO=y
CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
CONFIG_DMA_CMA=y
+CONFIG_MTD=y
+CONFIG_MTD_M25P80=y
+CONFIG_MTD_SPI_NOR=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_VIRTIO_BLK=y
# CONFIG_SCSI_PROC_FS is not set
@@ -102,7 +110,9 @@ CONFIG_SATA_AHCI_PLATFORM=y
CONFIG_AHCI_CEVA=y
CONFIG_AHCI_MVEBU=y
CONFIG_AHCI_XGENE=y
+CONFIG_AHCI_QORIQ=y
CONFIG_SATA_RCAR=y
+CONFIG_SATA_SIL24=y
CONFIG_PATA_PLATFORM=y
CONFIG_PATA_OF_PLATFORM=y
CONFIG_NETDEVICES=y
@@ -118,7 +128,18 @@ CONFIG_RAVB=y
CONFIG_SMC91X=y
CONFIG_SMSC911X=y
CONFIG_MICREL_PHY=y
-# CONFIG_WLAN is not set
+CONFIG_USB_PEGASUS=m
+CONFIG_USB_RTL8150=m
+CONFIG_USB_RTL8152=m
+CONFIG_USB_USBNET=m
+CONFIG_USB_NET_DM9601=m
+CONFIG_USB_NET_SR9800=m
+CONFIG_USB_NET_SMSC75XX=m
+CONFIG_USB_NET_SMSC95XX=m
+CONFIG_USB_NET_PLUSB=m
+CONFIG_USB_NET_MCS7830=m
+CONFIG_WL18XX=m
+CONFIG_WLCORE_SDIO=m
CONFIG_INPUT_EVDEV=y
CONFIG_KEYBOARD_GPIO=y
# CONFIG_SERIO_SERPORT is not set
@@ -138,6 +159,8 @@ CONFIG_SERIAL_TEGRA=y
CONFIG_SERIAL_SH_SCI=y
CONFIG_SERIAL_SH_SCI_NR_UARTS=11
CONFIG_SERIAL_SH_SCI_CONSOLE=y
+CONFIG_SERIAL_MESON=y
+CONFIG_SERIAL_MESON_CONSOLE=y
CONFIG_SERIAL_MSM=y
CONFIG_SERIAL_MSM_CONSOLE=y
CONFIG_SERIAL_XILINX_PS_UART=y
@@ -146,15 +169,20 @@ CONFIG_SERIAL_MVEBU_UART=y
CONFIG_VIRTIO_CONSOLE=y
# CONFIG_HW_RANDOM is not set
CONFIG_I2C_CHARDEV=y
+CONFIG_I2C_MUX=y
+CONFIG_I2C_MUX_PCA954x=y
CONFIG_I2C_DESIGNWARE_PLATFORM=y
+CONFIG_I2C_IMX=y
CONFIG_I2C_MV64XXX=y
CONFIG_I2C_QUP=y
CONFIG_I2C_TEGRA=y
CONFIG_I2C_UNIPHIER_F=y
CONFIG_I2C_RCAR=y
CONFIG_SPI=y
+CONFIG_SPI_ORION=y
CONFIG_SPI_PL022=y
CONFIG_SPI_QUP=y
+CONFIG_SPI_SPIDEV=m
CONFIG_SPMI=y
CONFIG_PINCTRL_SINGLE=y
CONFIG_PINCTRL_MSM8916=y
@@ -167,14 +195,19 @@ CONFIG_GPIO_XGENE=y
CONFIG_POWER_RESET_MSM=y
CONFIG_POWER_RESET_XGENE=y
CONFIG_POWER_RESET_SYSCON=y
-# CONFIG_HWMON is not set
+CONFIG_SENSORS_LM90=m
+CONFIG_SENSORS_INA2XX=m
CONFIG_THERMAL=y
CONFIG_THERMAL_EMULATION=y
CONFIG_EXYNOS_THERMAL=y
+CONFIG_WATCHDOG=y
+CONFIG_RENESAS_WDT=y
CONFIG_MFD_SPMI_PMIC=y
CONFIG_MFD_SEC_CORE=y
+CONFIG_MFD_HI655X_PMIC=y
CONFIG_REGULATOR=y
CONFIG_REGULATOR_FIXED_VOLTAGE=y
+CONFIG_REGULATOR_HI655X=y
CONFIG_REGULATOR_QCOM_SMD_RPM=y
CONFIG_REGULATOR_QCOM_SPMI=y
CONFIG_REGULATOR_S2MPS11=y
@@ -192,7 +225,7 @@ CONFIG_SND_SOC_AK4613=y
CONFIG_USB=y
CONFIG_USB_OTG=y
CONFIG_USB_XHCI_HCD=y
-CONFIG_USB_XHCI_PLATFORM=y
+CONFIG_USB_XHCI_RCAR=y
CONFIG_USB_EHCI_HCD=y
CONFIG_USB_EHCI_MSM=y
CONFIG_USB_EHCI_HCD_PLATFORM=y
@@ -213,6 +246,7 @@ CONFIG_MMC_BLOCK_MINORS=32
CONFIG_MMC_ARMMMCI=y
CONFIG_MMC_SDHCI=y
CONFIG_MMC_SDHCI_PLTFM=y
+CONFIG_MMC_SDHCI_OF_ESDHC=y
CONFIG_MMC_SDHCI_TEGRA=y
CONFIG_MMC_SDHCI_MSM=y
CONFIG_MMC_SPI=y
@@ -229,11 +263,13 @@ CONFIG_LEDS_TRIGGER_HEARTBEAT=y
CONFIG_LEDS_TRIGGER_CPU=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_S5M=y
+CONFIG_RTC_DRV_DS3232=y
CONFIG_RTC_DRV_EFI=y
CONFIG_RTC_DRV_PL031=y
CONFIG_RTC_DRV_SUN6I=y
CONFIG_RTC_DRV_XGENE=y
CONFIG_DMADEVICES=y
+CONFIG_PL330_DMA=y
CONFIG_TEGRA20_APB_DMA=y
CONFIG_QCOM_BAM_DMA=y
CONFIG_RCAR_DMAC=y
@@ -246,6 +282,7 @@ CONFIG_XEN_GNTDEV=y
CONFIG_XEN_GRANT_DEV_ALLOC=y
CONFIG_COMMON_CLK_SCPI=y
CONFIG_COMMON_CLK_CS2000_CP=y
+CONFIG_CLK_QORIQ=y
CONFIG_COMMON_CLK_QCOM=y
CONFIG_MSM_GCC_8916=y
CONFIG_HWSPINLOCK_QCOM=y
@@ -264,6 +301,7 @@ CONFIG_PHY_RCAR_GEN3_USB2=y
CONFIG_PHY_HI6220_USB=y
CONFIG_PHY_XGENE=y
CONFIG_ARM_SCPI_PROTOCOL=y
+CONFIG_ACPI=y
CONFIG_EXT2_FS=y
CONFIG_EXT3_FS=y
CONFIG_FANOTIFY=y
diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
index 70f7b9e04598..10b017c4bdd8 100644
--- a/arch/arm64/include/asm/assembler.h
+++ b/arch/arm64/include/asm/assembler.h
@@ -1,5 +1,5 @@
/*
- * Based on arch/arm/include/asm/assembler.h
+ * Based on arch/arm/include/asm/assembler.h, arch/arm/mm/proc-macros.S
*
* Copyright (C) 1996-2000 Russell King
* Copyright (C) 2012 ARM Ltd.
@@ -23,22 +23,13 @@
#ifndef __ASM_ASSEMBLER_H
#define __ASM_ASSEMBLER_H
+#include <asm/asm-offsets.h>
+#include <asm/page.h>
+#include <asm/pgtable-hwdef.h>
#include <asm/ptrace.h>
#include <asm/thread_info.h>
/*
- * Stack pushing/popping (register pairs only). Equivalent to store decrement
- * before, load increment after.
- */
- .macro push, xreg1, xreg2
- stp \xreg1, \xreg2, [sp, #-16]!
- .endm
-
- .macro pop, xreg1, xreg2
- ldp \xreg1, \xreg2, [sp], #16
- .endm
-
-/*
* Enable and disable interrupts.
*/
.macro disable_irq
@@ -212,6 +203,102 @@ lr .req x30 // link register
.endm
/*
+ * vma_vm_mm - get mm pointer from vma pointer (vma->vm_mm)
+ */
+ .macro vma_vm_mm, rd, rn
+ ldr \rd, [\rn, #VMA_VM_MM]
+ .endm
+
+/*
+ * mmid - get context id from mm pointer (mm->context.id)
+ */
+ .macro mmid, rd, rn
+ ldr \rd, [\rn, #MM_CONTEXT_ID]
+ .endm
+
+/*
+ * dcache_line_size - get the minimum D-cache line size from the CTR register.
+ */
+ .macro dcache_line_size, reg, tmp
+ mrs \tmp, ctr_el0 // read CTR
+ ubfm \tmp, \tmp, #16, #19 // cache line size encoding
+ mov \reg, #4 // bytes per word
+ lsl \reg, \reg, \tmp // actual cache line size
+ .endm
+
+/*
+ * icache_line_size - get the minimum I-cache line size from the CTR register.
+ */
+ .macro icache_line_size, reg, tmp
+ mrs \tmp, ctr_el0 // read CTR
+ and \tmp, \tmp, #0xf // cache line size encoding
+ mov \reg, #4 // bytes per word
+ lsl \reg, \reg, \tmp // actual cache line size
+ .endm
+
+/*
+ * tcr_set_idmap_t0sz - update TCR.T0SZ so that we can load the ID map
+ */
+ .macro tcr_set_idmap_t0sz, valreg, tmpreg
+#ifndef CONFIG_ARM64_VA_BITS_48
+ ldr_l \tmpreg, idmap_t0sz
+ bfi \valreg, \tmpreg, #TCR_T0SZ_OFFSET, #TCR_TxSZ_WIDTH
+#endif
+ .endm
+
+/*
+ * Macro to perform a data cache maintenance operation on the interval
+ * [kaddr, kaddr + size)
+ *
+ * op: operation passed to dc instruction
+ * domain: domain used in dsb instruction
+ * kaddr: starting virtual address of the region
+ * size: size of the region
+ * Corrupts: kaddr, size, tmp1, tmp2
+ */
+ .macro dcache_by_line_op op, domain, kaddr, size, tmp1, tmp2
+ dcache_line_size \tmp1, \tmp2
+ add \size, \kaddr, \size
+ sub \tmp2, \tmp1, #1
+ bic \kaddr, \kaddr, \tmp2
+9998: dc \op, \kaddr
+ add \kaddr, \kaddr, \tmp1
+ cmp \kaddr, \size
+ b.lo 9998b
+ dsb \domain
+ .endm
+
+/*
+ * reset_pmuserenr_el0 - reset PMUSERENR_EL0 if PMUv3 present
+ */
+ .macro reset_pmuserenr_el0, tmpreg
+ mrs \tmpreg, id_aa64dfr0_el1 // Check ID_AA64DFR0_EL1 PMUVer
+ sbfx \tmpreg, \tmpreg, #8, #4
+ cmp \tmpreg, #1 // Skip if no PMU present
+ b.lt 9000f
+ msr pmuserenr_el0, xzr // Disable PMU access from EL0
+9000:
+ .endm
+
+/*
+ * copy_page - copy src to dest using temp registers t1-t8
+ */
+ .macro copy_page dest:req src:req t1:req t2:req t3:req t4:req t5:req t6:req t7:req t8:req
+9998: ldp \t1, \t2, [\src]
+ ldp \t3, \t4, [\src, #16]
+ ldp \t5, \t6, [\src, #32]
+ ldp \t7, \t8, [\src, #48]
+ add \src, \src, #64
+ stnp \t1, \t2, [\dest]
+ stnp \t3, \t4, [\dest, #16]
+ stnp \t5, \t6, [\dest, #32]
+ stnp \t7, \t8, [\dest, #48]
+ add \dest, \dest, #64
+ tst \src, #(PAGE_SIZE - 1)
+ b.ne 9998b
+ .endm
+
+/*
* Annotate a function as position independent, i.e., safe to be called before
* the kernel virtual mapping is activated.
*/
@@ -233,4 +320,24 @@ lr .req x30 // link register
.long \sym\()_hi32
.endm
+ /*
+ * mov_q - move an immediate constant into a 64-bit register using
+ * between 2 and 4 movz/movk instructions (depending on the
+ * magnitude and sign of the operand)
+ */
+ .macro mov_q, reg, val
+ .if (((\val) >> 31) == 0 || ((\val) >> 31) == 0x1ffffffff)
+ movz \reg, :abs_g1_s:\val
+ .else
+ .if (((\val) >> 47) == 0 || ((\val) >> 47) == 0x1ffff)
+ movz \reg, :abs_g2_s:\val
+ .else
+ movz \reg, :abs_g3:\val
+ movk \reg, :abs_g2_nc:\val
+ .endif
+ movk \reg, :abs_g1_nc:\val
+ .endif
+ movk \reg, :abs_g0_nc:\val
+ .endm
+
#endif /* __ASM_ASSEMBLER_H */
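The new dcache_line_size macro above derives the smallest D-cache line from CTR_EL0: bits [19:16] (DminLine) hold log2 of the line length in 4-byte words. A C rendering of the same arithmetic, for reference only:

#include <stdint.h>

/* What `dcache_line_size reg, tmp` computes, expressed in C. */
static inline unsigned int dcache_line_size_from_ctr(uint64_t ctr_el0)
{
        unsigned int dminline = (ctr_el0 >> 16) & 0xf;  /* ubfm #16, #19 */

        return 4u << dminline;  /* e.g. DminLine = 4 -> 64-byte lines */
}

The icache_line_size macro works the same way on bits [3:0] (IminLine).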
diff --git a/arch/arm64/include/asm/cpufeature.h b/arch/arm64/include/asm/cpufeature.h
index b9b649422fca..224efe730e46 100644
--- a/arch/arm64/include/asm/cpufeature.h
+++ b/arch/arm64/include/asm/cpufeature.h
@@ -35,8 +35,9 @@
#define ARM64_ALT_PAN_NOT_UAO 10
#define ARM64_HAS_VIRT_HOST_EXTN 11
#define ARM64_WORKAROUND_CAVIUM_27456 12
+#define ARM64_HAS_32BIT_EL0 13
-#define ARM64_NCAPS 13
+#define ARM64_NCAPS 14
#ifndef __ASSEMBLY__
@@ -77,10 +78,17 @@ struct arm64_ftr_reg {
struct arm64_ftr_bits *ftr_bits;
};
+/* scope of capability check */
+enum {
+ SCOPE_SYSTEM,
+ SCOPE_LOCAL_CPU,
+};
+
struct arm64_cpu_capabilities {
const char *desc;
u16 capability;
- bool (*matches)(const struct arm64_cpu_capabilities *);
+ int def_scope; /* default scope */
+ bool (*matches)(const struct arm64_cpu_capabilities *caps, int scope);
void (*enable)(void *); /* Called on all active CPUs */
union {
struct { /* To be used for erratum handling only */
@@ -101,6 +109,8 @@ struct arm64_cpu_capabilities {
extern DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
+bool this_cpu_has_cap(unsigned int cap);
+
static inline bool cpu_have_feature(unsigned int num)
{
return elf_hwcap & (1UL << num);
@@ -170,12 +180,20 @@ static inline bool id_aa64mmfr0_mixed_endian_el0(u64 mmfr0)
cpuid_feature_extract_unsigned_field(mmfr0, ID_AA64MMFR0_BIGENDEL0_SHIFT) == 0x1;
}
+static inline bool id_aa64pfr0_32bit_el0(u64 pfr0)
+{
+ u32 val = cpuid_feature_extract_unsigned_field(pfr0, ID_AA64PFR0_EL0_SHIFT);
+
+ return val == ID_AA64PFR0_EL0_32BIT_64BIT;
+}
+
void __init setup_cpu_features(void);
void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
const char *info);
void check_local_cpu_errata(void);
+void verify_local_cpu_errata(void);
void verify_local_cpu_capabilities(void);
u64 read_system_reg(u32 id);
@@ -185,6 +203,11 @@ static inline bool cpu_supports_mixed_endian_el0(void)
return id_aa64mmfr0_mixed_endian_el0(read_cpuid(ID_AA64MMFR0_EL1));
}
+static inline bool system_supports_32bit_el0(void)
+{
+ return cpus_have_cap(ARM64_HAS_32BIT_EL0);
+}
+
static inline bool system_supports_mixed_endian_el0(void)
{
return id_aa64mmfr0_mixed_endian_el0(read_system_reg(SYS_ID_AA64MMFR0_EL1));
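For reference, the check performed by the new id_aa64pfr0_32bit_el0() helper above boils down to a 4-bit field compare: the EL0 field of ID_AA64PFR0_EL1 occupies bits [3:0], and the value 0x2 (ID_AA64PFR0_EL0_32BIT_64BIT, defined outside this hunk) means EL0 supports AArch32 as well as AArch64. A standalone sketch, with the constants restated as assumptions:

#include <stdbool.h>
#include <stdint.h>

#define EL0_FIELD_SHIFT         0       /* ID_AA64PFR0_EL0_SHIFT */
#define EL0_32BIT_64BIT         0x2     /* ID_AA64PFR0_EL0_32BIT_64BIT */

static bool pfr0_has_32bit_el0(uint64_t pfr0)
{
        return ((pfr0 >> EL0_FIELD_SHIFT) & 0xf) == EL0_32BIT_64BIT;
}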
diff --git a/arch/arm64/include/asm/dma-mapping.h b/arch/arm64/include/asm/dma-mapping.h
index ba437f090a74..7dbea6c070ec 100644
--- a/arch/arm64/include/asm/dma-mapping.h
+++ b/arch/arm64/include/asm/dma-mapping.h
@@ -48,7 +48,7 @@ static inline struct dma_map_ops *get_dma_ops(struct device *dev)
}
void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
- struct iommu_ops *iommu, bool coherent);
+ const struct iommu_ops *iommu, bool coherent);
#define arch_setup_dma_ops arch_setup_dma_ops
#ifdef CONFIG_IOMMU_DMA
diff --git a/arch/arm64/include/asm/efi.h b/arch/arm64/include/asm/efi.h
index 8e88a696c9cb..622db3c6474e 100644
--- a/arch/arm64/include/asm/efi.h
+++ b/arch/arm64/include/asm/efi.h
@@ -4,6 +4,7 @@
#include <asm/io.h>
#include <asm/mmu_context.h>
#include <asm/neon.h>
+#include <asm/ptrace.h>
#include <asm/tlbflush.h>
#ifdef CONFIG_EFI
@@ -14,32 +15,29 @@ extern void efi_init(void);
int efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md);
-#define efi_call_virt(f, ...) \
+#define efi_set_mapping_permissions efi_create_mapping
+
+#define arch_efi_call_virt_setup() \
({ \
- efi_##f##_t *__f; \
- efi_status_t __s; \
- \
kernel_neon_begin(); \
efi_virtmap_load(); \
- __f = efi.systab->runtime->f; \
- __s = __f(__VA_ARGS__); \
- efi_virtmap_unload(); \
- kernel_neon_end(); \
- __s; \
})
-#define __efi_call_virt(f, ...) \
+#define arch_efi_call_virt(f, args...) \
({ \
efi_##f##_t *__f; \
- \
- kernel_neon_begin(); \
- efi_virtmap_load(); \
__f = efi.systab->runtime->f; \
- __f(__VA_ARGS__); \
+ __f(args); \
+})
+
+#define arch_efi_call_virt_teardown() \
+({ \
efi_virtmap_unload(); \
kernel_neon_end(); \
})
+#define ARCH_EFI_IRQ_FLAGS_MASK (PSR_D_BIT | PSR_A_BIT | PSR_I_BIT | PSR_F_BIT)
+
/* arch specific definitions used by the stub code */
/*
@@ -50,7 +48,16 @@ int efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md);
#define EFI_FDT_ALIGN SZ_2M /* used by allocate_new_fdt_and_exit_boot() */
#define MAX_FDT_OFFSET SZ_512M
-#define efi_call_early(f, ...) sys_table_arg->boottime->f(__VA_ARGS__)
+#define efi_call_early(f, ...) sys_table_arg->boottime->f(__VA_ARGS__)
+#define __efi_call_early(f, ...) f(__VA_ARGS__)
+#define efi_is_64bit() (true)
+
+#define alloc_screen_info(x...) &screen_info
+#define free_screen_info(x...)
+
+static inline void efifb_setup_from_dmi(struct screen_info *si, const char *opt)
+{
+}
#define EFI_ALLOC_ALIGN SZ_64K
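The efi_call_virt()/__efi_call_virt() pair above is being split into setup, call and teardown hooks so a generic wrapper can drive them. The sketch below only shows the intended composition; the stub functions stand in for the macros above, and the real generic wrapper (which also checks the interrupt flags against ARCH_EFI_IRQ_FLAGS_MASK) lives outside this diff.

/* Stand-ins so the sketch is self-contained; the real hooks are the macros above. */
static void setup_stub(void)    { /* kernel_neon_begin(); efi_virtmap_load(); */ }
static void teardown_stub(void) { /* efi_virtmap_unload(); kernel_neon_end(); */ }

/* Rough shape of a generic efi_call_virt() built on the three hooks. */
static long efi_call_virt_sketch(long (*service)(void))
{
        long status;

        setup_stub();           /* arch_efi_call_virt_setup()    */
        status = service();     /* arch_efi_call_virt(f, args)   */
        teardown_stub();        /* arch_efi_call_virt_teardown() */

        return status;
}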
diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
index 24ed037f09fd..7a09c48c0475 100644
--- a/arch/arm64/include/asm/elf.h
+++ b/arch/arm64/include/asm/elf.h
@@ -177,7 +177,8 @@ typedef compat_elf_greg_t compat_elf_gregset_t[COMPAT_ELF_NGREG];
/* AArch32 EABI. */
#define EF_ARM_EABI_MASK 0xff000000
-#define compat_elf_check_arch(x) (((x)->e_machine == EM_ARM) && \
+#define compat_elf_check_arch(x) (system_supports_32bit_el0() && \
+ ((x)->e_machine == EM_ARM) && \
((x)->e_flags & EF_ARM_EABI_MASK))
#define compat_start_thread compat_start_thread
diff --git a/arch/arm64/include/asm/kernel-pgtable.h b/arch/arm64/include/asm/kernel-pgtable.h
index 5c6375d8528b..7e51d1b57c0c 100644
--- a/arch/arm64/include/asm/kernel-pgtable.h
+++ b/arch/arm64/include/asm/kernel-pgtable.h
@@ -19,6 +19,7 @@
#ifndef __ASM_KERNEL_PGTABLE_H
#define __ASM_KERNEL_PGTABLE_H
+#include <asm/sparsemem.h>
/*
* The linear mapping and the start of memory are both 2M aligned (per
@@ -86,10 +87,24 @@
* (64k granule), or a multiple that can be mapped using contiguous bits
* in the page tables: 32 * PMD_SIZE (16k granule)
*/
-#ifdef CONFIG_ARM64_64K_PAGES
-#define ARM64_MEMSTART_ALIGN SZ_512M
+#if defined(CONFIG_ARM64_4K_PAGES)
+#define ARM64_MEMSTART_SHIFT PUD_SHIFT
+#elif defined(CONFIG_ARM64_16K_PAGES)
+#define ARM64_MEMSTART_SHIFT (PMD_SHIFT + 5)
#else
-#define ARM64_MEMSTART_ALIGN SZ_1G
+#define ARM64_MEMSTART_SHIFT PMD_SHIFT
+#endif
+
+/*
+ * sparsemem vmemmap imposes an additional requirement on the alignment of
+ * memstart_addr, due to the fact that the base of the vmemmap region
+ * has a direct correspondence with the base of the linear region, and
+ * needs to appear sufficiently aligned in the virtual address space.
+ */
+#if defined(CONFIG_SPARSEMEM_VMEMMAP) && ARM64_MEMSTART_SHIFT < SECTION_SIZE_BITS
+#define ARM64_MEMSTART_ALIGN (1UL << SECTION_SIZE_BITS)
+#else
+#define ARM64_MEMSTART_ALIGN (1UL << ARM64_MEMSTART_SHIFT)
#endif
#endif /* __ASM_KERNEL_PGTABLE_H */
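With the hunk above, ARM64_MEMSTART_ALIGN is no longer a hard-coded size but 1 << ARM64_MEMSTART_SHIFT (or 1 << SECTION_SIZE_BITS when sparsemem vmemmap demands more). Plugging in the usual arm64 shifts gives the same values the removed lines used to hard-code:

/* Ignoring the SPARSEMEM_VMEMMAP override; shift values assume the
 * standard arm64 layouts (4K: PUD_SHIFT = 30, 16K: PMD_SHIFT = 25,
 * 64K: PMD_SHIFT = 29). */
#define MEMSTART_ALIGN_4K       (1UL << 30)             /* 1 GiB   (was SZ_1G)   */
#define MEMSTART_ALIGN_16K      (1UL << (25 + 5))       /* 1 GiB   (was SZ_1G)   */
#define MEMSTART_ALIGN_64K      (1UL << 29)             /* 512 MiB (was SZ_512M) */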
diff --git a/arch/arm64/include/asm/kvm_arm.h b/arch/arm64/include/asm/kvm_arm.h
index 3f29887995bc..2cdb6b551ac6 100644
--- a/arch/arm64/include/asm/kvm_arm.h
+++ b/arch/arm64/include/asm/kvm_arm.h
@@ -84,44 +84,38 @@
#define HCR_INT_OVERRIDE (HCR_FMO | HCR_IMO)
#define HCR_HOST_VHE_FLAGS (HCR_RW | HCR_TGE | HCR_E2H)
-/* Hyp System Control Register (SCTLR_EL2) bits */
-#define SCTLR_EL2_EE (1 << 25)
-#define SCTLR_EL2_WXN (1 << 19)
-#define SCTLR_EL2_I (1 << 12)
-#define SCTLR_EL2_SA (1 << 3)
-#define SCTLR_EL2_C (1 << 2)
-#define SCTLR_EL2_A (1 << 1)
-#define SCTLR_EL2_M 1
-#define SCTLR_EL2_FLAGS (SCTLR_EL2_M | SCTLR_EL2_A | SCTLR_EL2_C | \
- SCTLR_EL2_SA | SCTLR_EL2_I)
-
/* TCR_EL2 Registers bits */
-#define TCR_EL2_RES1 ((1 << 31) | (1 << 23))
-#define TCR_EL2_TBI (1 << 20)
-#define TCR_EL2_PS (7 << 16)
-#define TCR_EL2_PS_40B (2 << 16)
-#define TCR_EL2_TG0 (1 << 14)
-#define TCR_EL2_SH0 (3 << 12)
-#define TCR_EL2_ORGN0 (3 << 10)
-#define TCR_EL2_IRGN0 (3 << 8)
-#define TCR_EL2_T0SZ 0x3f
-#define TCR_EL2_MASK (TCR_EL2_TG0 | TCR_EL2_SH0 | \
- TCR_EL2_ORGN0 | TCR_EL2_IRGN0 | TCR_EL2_T0SZ)
+#define TCR_EL2_RES1 ((1 << 31) | (1 << 23))
+#define TCR_EL2_TBI (1 << 20)
+#define TCR_EL2_PS_SHIFT 16
+#define TCR_EL2_PS_MASK (7 << TCR_EL2_PS_SHIFT)
+#define TCR_EL2_PS_40B (2 << TCR_EL2_PS_SHIFT)
+#define TCR_EL2_TG0_MASK TCR_TG0_MASK
+#define TCR_EL2_SH0_MASK TCR_SH0_MASK
+#define TCR_EL2_ORGN0_MASK TCR_ORGN0_MASK
+#define TCR_EL2_IRGN0_MASK TCR_IRGN0_MASK
+#define TCR_EL2_T0SZ_MASK 0x3f
+#define TCR_EL2_MASK (TCR_EL2_TG0_MASK | TCR_EL2_SH0_MASK | \
+ TCR_EL2_ORGN0_MASK | TCR_EL2_IRGN0_MASK | TCR_EL2_T0SZ_MASK)
/* VTCR_EL2 Registers bits */
#define VTCR_EL2_RES1 (1 << 31)
-#define VTCR_EL2_PS_MASK (7 << 16)
-#define VTCR_EL2_TG0_MASK (1 << 14)
-#define VTCR_EL2_TG0_4K (0 << 14)
-#define VTCR_EL2_TG0_64K (1 << 14)
-#define VTCR_EL2_SH0_MASK (3 << 12)
-#define VTCR_EL2_SH0_INNER (3 << 12)
-#define VTCR_EL2_ORGN0_MASK (3 << 10)
-#define VTCR_EL2_ORGN0_WBWA (1 << 10)
-#define VTCR_EL2_IRGN0_MASK (3 << 8)
-#define VTCR_EL2_IRGN0_WBWA (1 << 8)
-#define VTCR_EL2_SL0_MASK (3 << 6)
-#define VTCR_EL2_SL0_LVL1 (1 << 6)
+#define VTCR_EL2_HD (1 << 22)
+#define VTCR_EL2_HA (1 << 21)
+#define VTCR_EL2_PS_MASK TCR_EL2_PS_MASK
+#define VTCR_EL2_TG0_MASK TCR_TG0_MASK
+#define VTCR_EL2_TG0_4K TCR_TG0_4K
+#define VTCR_EL2_TG0_16K TCR_TG0_16K
+#define VTCR_EL2_TG0_64K TCR_TG0_64K
+#define VTCR_EL2_SH0_MASK TCR_SH0_MASK
+#define VTCR_EL2_SH0_INNER TCR_SH0_INNER
+#define VTCR_EL2_ORGN0_MASK TCR_ORGN0_MASK
+#define VTCR_EL2_ORGN0_WBWA TCR_ORGN0_WBWA
+#define VTCR_EL2_IRGN0_MASK TCR_IRGN0_MASK
+#define VTCR_EL2_IRGN0_WBWA TCR_IRGN0_WBWA
+#define VTCR_EL2_SL0_SHIFT 6
+#define VTCR_EL2_SL0_MASK (3 << VTCR_EL2_SL0_SHIFT)
+#define VTCR_EL2_SL0_LVL1 (1 << VTCR_EL2_SL0_SHIFT)
#define VTCR_EL2_T0SZ_MASK 0x3f
#define VTCR_EL2_T0SZ_40B 24
#define VTCR_EL2_VS_SHIFT 19
@@ -137,35 +131,45 @@
* (see hyp-init.S).
*
* Note that when using 4K pages, we concatenate two first level page tables
- * together.
+ * together. With 16K pages, we concatenate 16 first level page tables.
*
* The magic numbers used for VTTBR_X in this patch can be found in Tables
* D4-23 and D4-25 in ARM DDI 0487A.b.
*/
+
+#define VTCR_EL2_T0SZ_IPA VTCR_EL2_T0SZ_40B
+#define VTCR_EL2_COMMON_BITS (VTCR_EL2_SH0_INNER | VTCR_EL2_ORGN0_WBWA | \
+ VTCR_EL2_IRGN0_WBWA | VTCR_EL2_RES1)
+
#ifdef CONFIG_ARM64_64K_PAGES
/*
* Stage2 translation configuration:
- * 40bits input (T0SZ = 24)
* 64kB pages (TG0 = 1)
* 2 level page tables (SL = 1)
*/
-#define VTCR_EL2_FLAGS (VTCR_EL2_TG0_64K | VTCR_EL2_SH0_INNER | \
- VTCR_EL2_ORGN0_WBWA | VTCR_EL2_IRGN0_WBWA | \
- VTCR_EL2_SL0_LVL1 | VTCR_EL2_RES1)
-#define VTTBR_X (38 - VTCR_EL2_T0SZ_40B)
-#else
+#define VTCR_EL2_TGRAN_FLAGS (VTCR_EL2_TG0_64K | VTCR_EL2_SL0_LVL1)
+#define VTTBR_X_TGRAN_MAGIC 38
+#elif defined(CONFIG_ARM64_16K_PAGES)
+/*
+ * Stage2 translation configuration:
+ * 16kB pages (TG0 = 2)
+ * 2 level page tables (SL = 1)
+ */
+#define VTCR_EL2_TGRAN_FLAGS (VTCR_EL2_TG0_16K | VTCR_EL2_SL0_LVL1)
+#define VTTBR_X_TGRAN_MAGIC 42
+#else /* 4K */
/*
* Stage2 translation configuration:
- * 40bits input (T0SZ = 24)
* 4kB pages (TG0 = 0)
* 3 level page tables (SL = 1)
*/
-#define VTCR_EL2_FLAGS (VTCR_EL2_TG0_4K | VTCR_EL2_SH0_INNER | \
- VTCR_EL2_ORGN0_WBWA | VTCR_EL2_IRGN0_WBWA | \
- VTCR_EL2_SL0_LVL1 | VTCR_EL2_RES1)
-#define VTTBR_X (37 - VTCR_EL2_T0SZ_40B)
+#define VTCR_EL2_TGRAN_FLAGS (VTCR_EL2_TG0_4K | VTCR_EL2_SL0_LVL1)
+#define VTTBR_X_TGRAN_MAGIC 37
#endif
+#define VTCR_EL2_FLAGS (VTCR_EL2_COMMON_BITS | VTCR_EL2_TGRAN_FLAGS)
+#define VTTBR_X (VTTBR_X_TGRAN_MAGIC - VTCR_EL2_T0SZ_IPA)
+
#define VTTBR_BADDR_SHIFT (VTTBR_X - 1)
#define VTTBR_BADDR_MASK (((UL(1) << (PHYS_MASK_SHIFT - VTTBR_X)) - 1) << VTTBR_BADDR_SHIFT)
#define VTTBR_VMID_SHIFT (UL(48))
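A quick worked example of the reshuffled VTTBR_X definition above: with the fixed 40-bit IPA (VTCR_EL2_T0SZ_IPA = 24), the per-granule magic numbers give

#define VTTBR_X_4K      (37 - 24)       /* 13: 4K granule, two concatenated first-level tables */
#define VTTBR_X_16K     (42 - 24)       /* 18: 16K granule, sixteen concatenated first-level tables */
#define VTTBR_X_64K     (38 - 24)       /* 14: 64K granule */

VTTBR_BADDR_SHIFT then follows as VTTBR_X - 1, exactly as in the pre-existing definition retained above.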
diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index 40a0a24e6c98..7561f63f1c28 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -22,6 +22,8 @@
#define ARM_EXCEPTION_IRQ 0
#define ARM_EXCEPTION_TRAP 1
+/* The hyp-stub will return this for any kvm_call_hyp() call */
+#define ARM_EXCEPTION_HYP_GONE 2
#define KVM_ARM64_DEBUG_DIRTY_SHIFT 0
#define KVM_ARM64_DEBUG_DIRTY (1 << KVM_ARM64_DEBUG_DIRTY_SHIFT)
@@ -40,6 +42,7 @@ struct kvm_vcpu;
extern char __kvm_hyp_init[];
extern char __kvm_hyp_init_end[];
+extern char __kvm_hyp_reset[];
extern char __kvm_hyp_vector[];
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index f5c6bd2541ef..49095fc4b482 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -43,9 +43,13 @@
#define KVM_VCPU_MAX_FEATURES 4
+#define KVM_REQ_VCPU_EXIT 8
+
int __attribute_const__ kvm_target_cpu(void);
int kvm_reset_vcpu(struct kvm_vcpu *vcpu);
int kvm_arch_dev_ioctl_check_extension(long ext);
+unsigned long kvm_hyp_reset_entry(void);
+void __extended_idmap_trampoline(phys_addr_t boot_pgd, phys_addr_t idmap_start);
struct kvm_arch {
/* The VMID generation used for the virt. memory system */
@@ -293,6 +297,7 @@ struct kvm_vm_stat {
struct kvm_vcpu_stat {
u32 halt_successful_poll;
u32 halt_attempted_poll;
+ u32 halt_poll_invalid;
u32 halt_wakeup;
u32 hvc_exit_stat;
u64 wfe_exit_stat;
@@ -324,6 +329,10 @@ static inline void kvm_arch_mmu_notifier_invalidate_page(struct kvm *kvm,
struct kvm_vcpu *kvm_arm_get_running_vcpu(void);
struct kvm_vcpu * __percpu *kvm_get_running_vcpus(void);
+void kvm_arm_halt_guest(struct kvm *kvm);
+void kvm_arm_resume_guest(struct kvm *kvm);
+void kvm_arm_halt_vcpu(struct kvm_vcpu *vcpu);
+void kvm_arm_resume_vcpu(struct kvm_vcpu *vcpu);
u64 __kvm_call_hyp(void *hypfn, ...);
#define kvm_call_hyp(f, ...) __kvm_call_hyp(kvm_ksym_ref(f), ##__VA_ARGS__)
@@ -352,11 +361,22 @@ static inline void __cpu_init_hyp_mode(phys_addr_t boot_pgd_ptr,
hyp_stack_ptr, vector_ptr);
}
-static inline void kvm_arch_hardware_disable(void) {}
+static inline void __cpu_reset_hyp_mode(phys_addr_t boot_pgd_ptr,
+ phys_addr_t phys_idmap_start)
+{
+ /*
+ * Call reset code, and switch back to stub hyp vectors.
+ * Uses __kvm_call_hyp() to avoid kaslr's kvm_ksym_ref() translation.
+ */
+ __kvm_call_hyp((void *)kvm_hyp_reset_entry(),
+ boot_pgd_ptr, phys_idmap_start);
+}
+
static inline void kvm_arch_hardware_unsetup(void) {}
static inline void kvm_arch_sync_events(struct kvm *kvm) {}
static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
+static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
void kvm_arm_init_debug(void);
void kvm_arm_setup_debug(struct kvm_vcpu *vcpu);
diff --git a/arch/arm64/include/asm/kvm_mmio.h b/arch/arm64/include/asm/kvm_mmio.h
index fe612a962576..75ea42079757 100644
--- a/arch/arm64/include/asm/kvm_mmio.h
+++ b/arch/arm64/include/asm/kvm_mmio.h
@@ -30,6 +30,9 @@ struct kvm_decode {
bool sign_extend;
};
+void kvm_mmio_write_buf(void *buf, unsigned int len, unsigned long data);
+unsigned long kvm_mmio_read_buf(const void *buf, unsigned int len);
+
int kvm_handle_mmio_return(struct kvm_vcpu *vcpu, struct kvm_run *run);
int io_mem_abort(struct kvm_vcpu *vcpu, struct kvm_run *run,
phys_addr_t fault_ipa);
diff --git a/arch/arm64/include/asm/kvm_mmu.h b/arch/arm64/include/asm/kvm_mmu.h
index 22732a5e3119..f05ac27d033e 100644
--- a/arch/arm64/include/asm/kvm_mmu.h
+++ b/arch/arm64/include/asm/kvm_mmu.h
@@ -45,18 +45,6 @@
*/
#define TRAMPOLINE_VA (HYP_PAGE_OFFSET_MASK & PAGE_MASK)
-/*
- * KVM_MMU_CACHE_MIN_PAGES is the number of stage2 page table translation
- * levels in addition to the PGD and potentially the PUD which are
- * pre-allocated (we pre-allocate the fake PGD and the PUD when the Stage-2
- * tables use one level of tables less than the kernel.
- */
-#ifdef CONFIG_ARM64_64K_PAGES
-#define KVM_MMU_CACHE_MIN_PAGES 1
-#else
-#define KVM_MMU_CACHE_MIN_PAGES 2
-#endif
-
#ifdef __ASSEMBLY__
#include <asm/alternative.h>
@@ -91,6 +79,8 @@ alternative_endif
#define KVM_PHYS_SIZE (1UL << KVM_PHYS_SHIFT)
#define KVM_PHYS_MASK (KVM_PHYS_SIZE - 1UL)
+#include <asm/stage2_pgtable.h>
+
int create_hyp_mappings(void *from, void *to);
int create_hyp_io_mappings(void *from, void *to, phys_addr_t);
void free_boot_hyp_pgd(void);
@@ -109,6 +99,7 @@ void kvm_mmu_free_memory_caches(struct kvm_vcpu *vcpu);
phys_addr_t kvm_mmu_get_httbr(void);
phys_addr_t kvm_mmu_get_boot_httbr(void);
phys_addr_t kvm_get_idmap_vector(void);
+phys_addr_t kvm_get_idmap_start(void);
int kvm_mmu_init(void);
void kvm_clear_hyp_idmap(void);
@@ -121,19 +112,32 @@ static inline void kvm_clean_pmd_entry(pmd_t *pmd) {}
static inline void kvm_clean_pte(pte_t *pte) {}
static inline void kvm_clean_pte_entry(pte_t *pte) {}
-static inline void kvm_set_s2pte_writable(pte_t *pte)
+static inline pte_t kvm_s2pte_mkwrite(pte_t pte)
{
- pte_val(*pte) |= PTE_S2_RDWR;
+ pte_val(pte) |= PTE_S2_RDWR;
+ return pte;
}
-static inline void kvm_set_s2pmd_writable(pmd_t *pmd)
+static inline pmd_t kvm_s2pmd_mkwrite(pmd_t pmd)
{
- pmd_val(*pmd) |= PMD_S2_RDWR;
+ pmd_val(pmd) |= PMD_S2_RDWR;
+ return pmd;
}
static inline void kvm_set_s2pte_readonly(pte_t *pte)
{
- pte_val(*pte) = (pte_val(*pte) & ~PTE_S2_RDWR) | PTE_S2_RDONLY;
+ pteval_t pteval;
+ unsigned long tmp;
+
+ asm volatile("// kvm_set_s2pte_readonly\n"
+ " prfm pstl1strm, %2\n"
+ "1: ldxr %0, %2\n"
+ " and %0, %0, %3 // clear PTE_S2_RDWR\n"
+ " orr %0, %0, %4 // set PTE_S2_RDONLY\n"
+ " stxr %w1, %0, %2\n"
+ " cbnz %w1, 1b\n"
+ : "=&r" (pteval), "=&r" (tmp), "+Q" (pte_val(*pte))
+ : "L" (~PTE_S2_RDWR), "L" (PTE_S2_RDONLY));
}
static inline bool kvm_s2pte_readonly(pte_t *pte)
@@ -143,69 +147,12 @@ static inline bool kvm_s2pte_readonly(pte_t *pte)
static inline void kvm_set_s2pmd_readonly(pmd_t *pmd)
{
- pmd_val(*pmd) = (pmd_val(*pmd) & ~PMD_S2_RDWR) | PMD_S2_RDONLY;
+ kvm_set_s2pte_readonly((pte_t *)pmd);
}
static inline bool kvm_s2pmd_readonly(pmd_t *pmd)
{
- return (pmd_val(*pmd) & PMD_S2_RDWR) == PMD_S2_RDONLY;
-}
-
-
-#define kvm_pgd_addr_end(addr, end) pgd_addr_end(addr, end)
-#define kvm_pud_addr_end(addr, end) pud_addr_end(addr, end)
-#define kvm_pmd_addr_end(addr, end) pmd_addr_end(addr, end)
-
-/*
- * In the case where PGDIR_SHIFT is larger than KVM_PHYS_SHIFT, we can address
- * the entire IPA input range with a single pgd entry, and we would only need
- * one pgd entry. Note that in this case, the pgd is actually not used by
- * the MMU for Stage-2 translations, but is merely a fake pgd used as a data
- * structure for the kernel pgtable macros to work.
- */
-#if PGDIR_SHIFT > KVM_PHYS_SHIFT
-#define PTRS_PER_S2_PGD_SHIFT 0
-#else
-#define PTRS_PER_S2_PGD_SHIFT (KVM_PHYS_SHIFT - PGDIR_SHIFT)
-#endif
-#define PTRS_PER_S2_PGD (1 << PTRS_PER_S2_PGD_SHIFT)
-
-#define kvm_pgd_index(addr) (((addr) >> PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1))
-
-/*
- * If we are concatenating first level stage-2 page tables, we would have less
- * than or equal to 16 pointers in the fake PGD, because that's what the
- * architecture allows. In this case, (4 - CONFIG_PGTABLE_LEVELS)
- * represents the first level for the host, and we add 1 to go to the next
- * level (which uses contatenation) for the stage-2 tables.
- */
-#if PTRS_PER_S2_PGD <= 16
-#define KVM_PREALLOC_LEVEL (4 - CONFIG_PGTABLE_LEVELS + 1)
-#else
-#define KVM_PREALLOC_LEVEL (0)
-#endif
-
-static inline void *kvm_get_hwpgd(struct kvm *kvm)
-{
- pgd_t *pgd = kvm->arch.pgd;
- pud_t *pud;
-
- if (KVM_PREALLOC_LEVEL == 0)
- return pgd;
-
- pud = pud_offset(pgd, 0);
- if (KVM_PREALLOC_LEVEL == 1)
- return pud;
-
- BUG_ON(KVM_PREALLOC_LEVEL != 2);
- return pmd_offset(pud, 0);
-}
-
-static inline unsigned int kvm_get_hwpgd_size(void)
-{
- if (KVM_PREALLOC_LEVEL > 0)
- return PTRS_PER_S2_PGD * PAGE_SIZE;
- return PTRS_PER_S2_PGD * sizeof(pgd_t);
+ return kvm_s2pte_readonly((pte_t *)pmd);
}
static inline bool kvm_page_empty(void *ptr)
@@ -214,23 +161,20 @@ static inline bool kvm_page_empty(void *ptr)
return page_count(ptr_page) == 1;
}
-#define kvm_pte_table_empty(kvm, ptep) kvm_page_empty(ptep)
+#define hyp_pte_table_empty(ptep) kvm_page_empty(ptep)
#ifdef __PAGETABLE_PMD_FOLDED
-#define kvm_pmd_table_empty(kvm, pmdp) (0)
+#define hyp_pmd_table_empty(pmdp) (0)
#else
-#define kvm_pmd_table_empty(kvm, pmdp) \
- (kvm_page_empty(pmdp) && (!(kvm) || KVM_PREALLOC_LEVEL < 2))
+#define hyp_pmd_table_empty(pmdp) kvm_page_empty(pmdp)
#endif
#ifdef __PAGETABLE_PUD_FOLDED
-#define kvm_pud_table_empty(kvm, pudp) (0)
+#define hyp_pud_table_empty(pudp) (0)
#else
-#define kvm_pud_table_empty(kvm, pudp) \
- (kvm_page_empty(pudp) && (!(kvm) || KVM_PREALLOC_LEVEL < 1))
+#define hyp_pud_table_empty(pudp) kvm_page_empty(pudp)
#endif
-
struct kvm;
#define kvm_flush_dcache_to_poc(a,l) __flush_dcache_area((a), (l))
diff --git a/arch/arm64/include/asm/memory.h b/arch/arm64/include/asm/memory.h
index 12f8a00fb3f1..72a3025bb583 100644
--- a/arch/arm64/include/asm/memory.h
+++ b/arch/arm64/include/asm/memory.h
@@ -40,6 +40,21 @@
#define PCI_IO_SIZE SZ_16M
/*
+ * Log2 of the upper bound of the size of a struct page. Used for sizing
+ * the vmemmap region only, does not affect actual memory footprint.
+ * We don't use sizeof(struct page) directly since taking its size here
+ * requires its definition to be available at this point in the inclusion
+ * chain, and it may not be a power of 2 in the first place.
+ */
+#define STRUCT_PAGE_MAX_SHIFT 6
+
+/*
+ * VMEMMAP_SIZE - allows the whole linear region to be covered by
+ * a struct page array
+ */
+#define VMEMMAP_SIZE (UL(1) << (VA_BITS - PAGE_SHIFT - 1 + STRUCT_PAGE_MAX_SHIFT))
+
+/*
* PAGE_OFFSET - the virtual address of the start of the kernel image (top
* (VA_BITS - 1))
* VA_BITS - the maximum number of bits for virtual addresses.
@@ -54,7 +69,8 @@
#define MODULES_END (MODULES_VADDR + MODULES_VSIZE)
#define MODULES_VADDR (VA_START + KASAN_SHADOW_SIZE)
#define MODULES_VSIZE (SZ_128M)
-#define PCI_IO_END (PAGE_OFFSET - SZ_2M)
+#define VMEMMAP_START (PAGE_OFFSET - VMEMMAP_SIZE)
+#define PCI_IO_END (VMEMMAP_START - SZ_2M)
#define PCI_IO_START (PCI_IO_END - PCI_IO_SIZE)
#define FIXADDR_TOP (PCI_IO_START - SZ_2M)
#define TASK_SIZE_64 (UL(1) << VA_BITS)
@@ -71,6 +87,9 @@
#define TASK_UNMAPPED_BASE (PAGE_ALIGN(TASK_SIZE / 4))
+#define KERNEL_START _text
+#define KERNEL_END _end
+
/*
* The size of the KASAN shadow region. This should be 1/8th of the
* size of the entire kernel virtual address space.
@@ -192,9 +211,19 @@ static inline void *phys_to_virt(phys_addr_t x)
*/
#define ARCH_PFN_OFFSET ((unsigned long)PHYS_PFN_OFFSET)
+#ifndef CONFIG_SPARSEMEM_VMEMMAP
#define virt_to_page(kaddr) pfn_to_page(__pa(kaddr) >> PAGE_SHIFT)
-#define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
+#define virt_addr_valid(kaddr) pfn_valid(__pa(kaddr) >> PAGE_SHIFT)
+#else
+#define __virt_to_pgoff(kaddr) (((u64)(kaddr) & ~PAGE_OFFSET) / PAGE_SIZE * sizeof(struct page))
+#define __page_to_voff(page) (((u64)(page) & ~VMEMMAP_START) * PAGE_SIZE / sizeof(struct page))
+#define page_to_virt(page) ((void *)((__page_to_voff(page)) | PAGE_OFFSET))
+#define virt_to_page(vaddr) ((struct page *)((__virt_to_pgoff(vaddr)) | VMEMMAP_START))
+
+#define virt_addr_valid(kaddr) pfn_valid((((u64)(kaddr) & ~PAGE_OFFSET) \
+ + PHYS_OFFSET) >> PAGE_SHIFT)
+#endif
#endif
#include <asm-generic/memory_model.h>
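The VMEMMAP_SIZE formula introduced above sizes the struct-page array for the whole linear region. For a 48-bit VA, 4K-page kernel the numbers work out as below; only VA_BITS and PAGE_SHIFT are assumptions of the example, the rest comes from the hunk.

#define EX_VA_BITS                      48
#define EX_PAGE_SHIFT                   12
#define EX_STRUCT_PAGE_MAX_SHIFT        6       /* from the hunk above */

/* linear region = half the VA space = 2^47 bytes = 2^(47-12) pages,
 * each needing at most 2^6 bytes of struct page, so 2^41 bytes (2 TiB)
 * of virtual space are reserved for the vmemmap. */
#define EX_VMEMMAP_SIZE \
        (1UL << (EX_VA_BITS - EX_PAGE_SHIFT - 1 + EX_STRUCT_PAGE_MAX_SHIFT))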
diff --git a/arch/arm64/include/asm/mmu.h b/arch/arm64/include/asm/mmu.h
index 990124a67eeb..97b1d8f26b9c 100644
--- a/arch/arm64/include/asm/mmu.h
+++ b/arch/arm64/include/asm/mmu.h
@@ -29,6 +29,7 @@ typedef struct {
#define ASID(mm) ((mm)->context.id.counter & 0xffff)
extern void paging_init(void);
+extern void bootmem_init(void);
extern void __iomem *early_io_map(phys_addr_t phys, unsigned long virt);
extern void init_mem_pgprot(void);
extern void create_pgd_mapping(struct mm_struct *mm, phys_addr_t phys,
diff --git a/arch/arm64/include/asm/mmzone.h b/arch/arm64/include/asm/mmzone.h
new file mode 100644
index 000000000000..a0de9e6ba73f
--- /dev/null
+++ b/arch/arm64/include/asm/mmzone.h
@@ -0,0 +1,12 @@
+#ifndef __ASM_MMZONE_H
+#define __ASM_MMZONE_H
+
+#ifdef CONFIG_NUMA
+
+#include <asm/numa.h>
+
+extern struct pglist_data *node_data[];
+#define NODE_DATA(nid) (node_data[(nid)])
+
+#endif /* CONFIG_NUMA */
+#endif /* __ASM_MMZONE_H */
diff --git a/arch/arm64/include/asm/numa.h b/arch/arm64/include/asm/numa.h
new file mode 100644
index 000000000000..e9b4f2942335
--- /dev/null
+++ b/arch/arm64/include/asm/numa.h
@@ -0,0 +1,45 @@
+#ifndef __ASM_NUMA_H
+#define __ASM_NUMA_H
+
+#include <asm/topology.h>
+
+#ifdef CONFIG_NUMA
+
+/* currently, arm64 implements flat NUMA topology */
+#define parent_node(node) (node)
+
+int __node_distance(int from, int to);
+#define node_distance(a, b) __node_distance(a, b)
+
+extern nodemask_t numa_nodes_parsed __initdata;
+
+/* Mappings between node number and cpus on that node. */
+extern cpumask_var_t node_to_cpumask_map[MAX_NUMNODES];
+void numa_clear_node(unsigned int cpu);
+
+#ifdef CONFIG_DEBUG_PER_CPU_MAPS
+const struct cpumask *cpumask_of_node(int node);
+#else
+/* Returns a pointer to the cpumask of CPUs on Node 'node'. */
+static inline const struct cpumask *cpumask_of_node(int node)
+{
+ return node_to_cpumask_map[node];
+}
+#endif
+
+void __init arm64_numa_init(void);
+int __init numa_add_memblk(int nodeid, u64 start, u64 end);
+void __init numa_set_distance(int from, int to, int distance);
+void __init numa_free_distance(void);
+void __init early_map_cpu_to_node(unsigned int cpu, int nid);
+void numa_store_cpu_info(unsigned int cpu);
+
+#else /* CONFIG_NUMA */
+
+static inline void numa_store_cpu_info(unsigned int cpu) { }
+static inline void arm64_numa_init(void) { }
+static inline void early_map_cpu_to_node(unsigned int cpu, int nid) { }
+
+#endif /* CONFIG_NUMA */
+
+#endif /* __ASM_NUMA_H */
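
A hedged sketch of how a firmware memory-affinity parser might drive the hooks declared above (illustrative only; the function name and the fixed distance value are assumptions, not part of this patch):

/* Sketch: register one memory range and its local distance with the NUMA core. */
static int __init example_register_node_memory(int nid, u64 start, u64 end)
{
	int ret;

	ret = numa_add_memblk(nid, start, end);
	if (ret < 0)
		return ret;

	/* a node is at LOCAL_DISTANCE (10) from itself; remote distances come later */
	numa_set_distance(nid, nid, 10);
	return 0;
}

The remaining node-to-node distances can be refined with further numa_set_distance() calls as the firmware tables are parsed.
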
diff --git a/arch/arm64/include/asm/page.h b/arch/arm64/include/asm/page.h
index ae615b9d9a55..17b45f7d96d3 100644
--- a/arch/arm64/include/asm/page.h
+++ b/arch/arm64/include/asm/page.h
@@ -19,6 +19,8 @@
#ifndef __ASM_PAGE_H
#define __ASM_PAGE_H
+#include <linux/const.h>
+
/* PAGE_SHIFT determines the page size */
/* CONT_SHIFT determines the number of pages which can be tracked together */
#ifdef CONFIG_ARM64_64K_PAGES
diff --git a/arch/arm64/include/asm/pgtable-hwdef.h b/arch/arm64/include/asm/pgtable-hwdef.h
index 5c25b831273d..2813748e2f24 100644
--- a/arch/arm64/include/asm/pgtable-hwdef.h
+++ b/arch/arm64/include/asm/pgtable-hwdef.h
@@ -133,7 +133,6 @@
* Section
*/
#define PMD_SECT_VALID (_AT(pmdval_t, 1) << 0)
-#define PMD_SECT_PROT_NONE (_AT(pmdval_t, 1) << 58)
#define PMD_SECT_USER (_AT(pmdval_t, 1) << 6) /* AP[1] */
#define PMD_SECT_RDONLY (_AT(pmdval_t, 1) << 7) /* AP[2] */
#define PMD_SECT_S (_AT(pmdval_t, 3) << 8)
@@ -208,23 +207,69 @@
#define TCR_T1SZ(x) ((UL(64) - (x)) << TCR_T1SZ_OFFSET)
#define TCR_TxSZ(x) (TCR_T0SZ(x) | TCR_T1SZ(x))
#define TCR_TxSZ_WIDTH 6
-#define TCR_IRGN_NC ((UL(0) << 8) | (UL(0) << 24))
-#define TCR_IRGN_WBWA ((UL(1) << 8) | (UL(1) << 24))
-#define TCR_IRGN_WT ((UL(2) << 8) | (UL(2) << 24))
-#define TCR_IRGN_WBnWA ((UL(3) << 8) | (UL(3) << 24))
-#define TCR_IRGN_MASK ((UL(3) << 8) | (UL(3) << 24))
-#define TCR_ORGN_NC ((UL(0) << 10) | (UL(0) << 26))
-#define TCR_ORGN_WBWA ((UL(1) << 10) | (UL(1) << 26))
-#define TCR_ORGN_WT ((UL(2) << 10) | (UL(2) << 26))
-#define TCR_ORGN_WBnWA ((UL(3) << 10) | (UL(3) << 26))
-#define TCR_ORGN_MASK ((UL(3) << 10) | (UL(3) << 26))
-#define TCR_SHARED ((UL(3) << 12) | (UL(3) << 28))
-#define TCR_TG0_4K (UL(0) << 14)
-#define TCR_TG0_64K (UL(1) << 14)
-#define TCR_TG0_16K (UL(2) << 14)
-#define TCR_TG1_16K (UL(1) << 30)
-#define TCR_TG1_4K (UL(2) << 30)
-#define TCR_TG1_64K (UL(3) << 30)
+
+#define TCR_IRGN0_SHIFT 8
+#define TCR_IRGN0_MASK (UL(3) << TCR_IRGN0_SHIFT)
+#define TCR_IRGN0_NC (UL(0) << TCR_IRGN0_SHIFT)
+#define TCR_IRGN0_WBWA (UL(1) << TCR_IRGN0_SHIFT)
+#define TCR_IRGN0_WT (UL(2) << TCR_IRGN0_SHIFT)
+#define TCR_IRGN0_WBnWA (UL(3) << TCR_IRGN0_SHIFT)
+
+#define TCR_IRGN1_SHIFT 24
+#define TCR_IRGN1_MASK (UL(3) << TCR_IRGN1_SHIFT)
+#define TCR_IRGN1_NC (UL(0) << TCR_IRGN1_SHIFT)
+#define TCR_IRGN1_WBWA (UL(1) << TCR_IRGN1_SHIFT)
+#define TCR_IRGN1_WT (UL(2) << TCR_IRGN1_SHIFT)
+#define TCR_IRGN1_WBnWA (UL(3) << TCR_IRGN1_SHIFT)
+
+#define TCR_IRGN_NC (TCR_IRGN0_NC | TCR_IRGN1_NC)
+#define TCR_IRGN_WBWA (TCR_IRGN0_WBWA | TCR_IRGN1_WBWA)
+#define TCR_IRGN_WT (TCR_IRGN0_WT | TCR_IRGN1_WT)
+#define TCR_IRGN_WBnWA (TCR_IRGN0_WBnWA | TCR_IRGN1_WBnWA)
+#define TCR_IRGN_MASK (TCR_IRGN0_MASK | TCR_IRGN1_MASK)
+
+
+#define TCR_ORGN0_SHIFT 10
+#define TCR_ORGN0_MASK (UL(3) << TCR_ORGN0_SHIFT)
+#define TCR_ORGN0_NC (UL(0) << TCR_ORGN0_SHIFT)
+#define TCR_ORGN0_WBWA (UL(1) << TCR_ORGN0_SHIFT)
+#define TCR_ORGN0_WT (UL(2) << TCR_ORGN0_SHIFT)
+#define TCR_ORGN0_WBnWA (UL(3) << TCR_ORGN0_SHIFT)
+
+#define TCR_ORGN1_SHIFT 26
+#define TCR_ORGN1_MASK (UL(3) << TCR_ORGN1_SHIFT)
+#define TCR_ORGN1_NC (UL(0) << TCR_ORGN1_SHIFT)
+#define TCR_ORGN1_WBWA (UL(1) << TCR_ORGN1_SHIFT)
+#define TCR_ORGN1_WT (UL(2) << TCR_ORGN1_SHIFT)
+#define TCR_ORGN1_WBnWA (UL(3) << TCR_ORGN1_SHIFT)
+
+#define TCR_ORGN_NC (TCR_ORGN0_NC | TCR_ORGN1_NC)
+#define TCR_ORGN_WBWA (TCR_ORGN0_WBWA | TCR_ORGN1_WBWA)
+#define TCR_ORGN_WT (TCR_ORGN0_WT | TCR_ORGN1_WT)
+#define TCR_ORGN_WBnWA (TCR_ORGN0_WBnWA | TCR_ORGN1_WBnWA)
+#define TCR_ORGN_MASK (TCR_ORGN0_MASK | TCR_ORGN1_MASK)
+
+#define TCR_SH0_SHIFT 12
+#define TCR_SH0_MASK (UL(3) << TCR_SH0_SHIFT)
+#define TCR_SH0_INNER (UL(3) << TCR_SH0_SHIFT)
+
+#define TCR_SH1_SHIFT 28
+#define TCR_SH1_MASK (UL(3) << TCR_SH1_SHIFT)
+#define TCR_SH1_INNER (UL(3) << TCR_SH1_SHIFT)
+#define TCR_SHARED (TCR_SH0_INNER | TCR_SH1_INNER)
+
+#define TCR_TG0_SHIFT 14
+#define TCR_TG0_MASK (UL(3) << TCR_TG0_SHIFT)
+#define TCR_TG0_4K (UL(0) << TCR_TG0_SHIFT)
+#define TCR_TG0_64K (UL(1) << TCR_TG0_SHIFT)
+#define TCR_TG0_16K (UL(2) << TCR_TG0_SHIFT)
+
+#define TCR_TG1_SHIFT 30
+#define TCR_TG1_MASK (UL(3) << TCR_TG1_SHIFT)
+#define TCR_TG1_16K (UL(1) << TCR_TG1_SHIFT)
+#define TCR_TG1_4K (UL(2) << TCR_TG1_SHIFT)
+#define TCR_TG1_64K (UL(3) << TCR_TG1_SHIFT)
+
#define TCR_ASID16 (UL(1) << 36)
#define TCR_TBI0 (UL(1) << 37)
#define TCR_HA (UL(1) << 39)
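
With the TCR cacheability, shareability and granule fields now broken out per TTBR, a translation control value can be composed field by field. A minimal sketch (illustrative only, not taken from this patch):

/* Sketch: write-back write-allocate, inner shareable, 4K granule on both TTBR0 and TTBR1. */
#define EXAMPLE_TCR_CACHED_SMP_FLAGS \
	(TCR_IRGN_WBWA | TCR_ORGN_WBWA | TCR_SHARED | TCR_TG0_4K | TCR_TG1_4K)

The per-field *_MASK definitions likewise make it easy to update a single field, e.g. (tcr & ~TCR_TG1_MASK) | TCR_TG1_64K.
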
diff --git a/arch/arm64/include/asm/pgtable-types.h b/arch/arm64/include/asm/pgtable-types.h
index 2b1bd7e52c3b..69b2fd41503c 100644
--- a/arch/arm64/include/asm/pgtable-types.h
+++ b/arch/arm64/include/asm/pgtable-types.h
@@ -27,10 +27,6 @@ typedef u64 pmdval_t;
typedef u64 pudval_t;
typedef u64 pgdval_t;
-#undef STRICT_MM_TYPECHECKS
-
-#ifdef STRICT_MM_TYPECHECKS
-
/*
* These are used to make use of C type-checking..
*/
@@ -58,34 +54,6 @@ typedef struct { pteval_t pgprot; } pgprot_t;
#define pgprot_val(x) ((x).pgprot)
#define __pgprot(x) ((pgprot_t) { (x) } )
-#else /* !STRICT_MM_TYPECHECKS */
-
-typedef pteval_t pte_t;
-#define pte_val(x) (x)
-#define __pte(x) (x)
-
-#if CONFIG_PGTABLE_LEVELS > 2
-typedef pmdval_t pmd_t;
-#define pmd_val(x) (x)
-#define __pmd(x) (x)
-#endif
-
-#if CONFIG_PGTABLE_LEVELS > 3
-typedef pudval_t pud_t;
-#define pud_val(x) (x)
-#define __pud(x) (x)
-#endif
-
-typedef pgdval_t pgd_t;
-#define pgd_val(x) (x)
-#define __pgd(x) (x)
-
-typedef pteval_t pgprot_t;
-#define pgprot_val(x) (x)
-#define __pgprot(x) (x)
-
-#endif /* STRICT_MM_TYPECHECKS */
-
#if CONFIG_PGTABLE_LEVELS == 2
#include <asm-generic/pgtable-nopmd.h>
#elif CONFIG_PGTABLE_LEVELS == 3
diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 989fef16d461..46472a91b6df 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -24,22 +24,16 @@
#include <asm/pgtable-prot.h>
/*
- * VMALLOC and SPARSEMEM_VMEMMAP ranges.
+ * VMALLOC range.
*
- * VMEMAP_SIZE: allows the whole linear region to be covered by a struct page array
- * (rounded up to PUD_SIZE).
* VMALLOC_START: beginning of the kernel vmalloc space
- * VMALLOC_END: extends to the available space below vmmemmap, PCI I/O space,
- * fixed mappings and modules
+ * VMALLOC_END: extends to the available space below vmemmap, PCI I/O space
+ * and fixed mappings
*/
-#define VMEMMAP_SIZE ALIGN((1UL << (VA_BITS - PAGE_SHIFT)) * sizeof(struct page), PUD_SIZE)
-
#define VMALLOC_START (MODULES_END)
#define VMALLOC_END (PAGE_OFFSET - PUD_SIZE - VMEMMAP_SIZE - SZ_64K)
-#define VMEMMAP_START (VMALLOC_END + SZ_64K)
-#define vmemmap ((struct page *)VMEMMAP_START - \
- SECTION_ALIGN_DOWN(memstart_addr >> PAGE_SHIFT))
+#define vmemmap ((struct page *)VMEMMAP_START - (memstart_addr >> PAGE_SHIFT))
#define FIRST_USER_ADDRESS 0UL
@@ -58,7 +52,7 @@ extern void __pgd_error(const char *file, int line, unsigned long val);
* for zero-mapped memory areas etc..
*/
extern unsigned long empty_zero_page[PAGE_SIZE / sizeof(unsigned long)];
-#define ZERO_PAGE(vaddr) virt_to_page(empty_zero_page)
+#define ZERO_PAGE(vaddr) pfn_to_page(PHYS_PFN(__pa(empty_zero_page)))
#define pte_ERROR(pte) __pte_error(__FILE__, __LINE__, pte_val(pte))
@@ -272,6 +266,21 @@ static inline pgprot_t mk_sect_prot(pgprot_t prot)
return __pgprot(pgprot_val(prot) & ~PTE_TABLE_BIT);
}
+#ifdef CONFIG_NUMA_BALANCING
+/*
+ * See the comment in include/asm-generic/pgtable.h
+ */
+static inline int pte_protnone(pte_t pte)
+{
+ return (pte_val(pte) & (PTE_VALID | PTE_PROT_NONE)) == PTE_PROT_NONE;
+}
+
+static inline int pmd_protnone(pmd_t pmd)
+{
+ return pte_protnone(pmd_pte(pmd));
+}
+#endif
+
/*
* THP definitions.
*/
@@ -280,15 +289,18 @@ static inline pgprot_t mk_sect_prot(pgprot_t prot)
#define pmd_trans_huge(pmd) (pmd_val(pmd) && !(pmd_val(pmd) & PMD_TABLE_BIT))
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+#define pmd_present(pmd) pte_present(pmd_pte(pmd))
#define pmd_dirty(pmd) pte_dirty(pmd_pte(pmd))
#define pmd_young(pmd) pte_young(pmd_pte(pmd))
#define pmd_wrprotect(pmd) pte_pmd(pte_wrprotect(pmd_pte(pmd)))
#define pmd_mkold(pmd) pte_pmd(pte_mkold(pmd_pte(pmd)))
#define pmd_mkwrite(pmd) pte_pmd(pte_mkwrite(pmd_pte(pmd)))
-#define pmd_mkclean(pmd) pte_pmd(pte_mkclean(pmd_pte(pmd)))
+#define pmd_mkclean(pmd) pte_pmd(pte_mkclean(pmd_pte(pmd)))
#define pmd_mkdirty(pmd) pte_pmd(pte_mkdirty(pmd_pte(pmd)))
#define pmd_mkyoung(pmd) pte_pmd(pte_mkyoung(pmd_pte(pmd)))
-#define pmd_mknotpresent(pmd) (__pmd(pmd_val(pmd) & ~PMD_TYPE_MASK))
+#define pmd_mknotpresent(pmd) (__pmd(pmd_val(pmd) & ~PMD_SECT_VALID))
+
+#define pmd_thp_or_huge(pmd) (pmd_huge(pmd) || pmd_trans_huge(pmd))
#define __HAVE_ARCH_PMD_WRITE
#define pmd_write(pmd) pte_write(pmd_pte(pmd))
@@ -304,11 +316,6 @@ static inline pgprot_t mk_sect_prot(pgprot_t prot)
#define set_pmd_at(mm, addr, pmdp, pmd) set_pte_at(mm, addr, (pte_t *)pmdp, pmd_pte(pmd))
-static inline int has_transparent_hugepage(void)
-{
- return 1;
-}
-
#define __pgprot_modify(prot,mask,bits) \
__pgprot((pgprot_val(prot) & ~(mask)) | (bits))
@@ -327,9 +334,8 @@ extern pgprot_t phys_mem_access_prot(struct file *file, unsigned long pfn,
unsigned long size, pgprot_t vma_prot);
#define pmd_none(pmd) (!pmd_val(pmd))
-#define pmd_present(pmd) (pmd_val(pmd))
-#define pmd_bad(pmd) (!(pmd_val(pmd) & 2))
+#define pmd_bad(pmd) (!(pmd_val(pmd) & PMD_TABLE_BIT))
#define pmd_table(pmd) ((pmd_val(pmd) & PMD_TYPE_MASK) == \
PMD_TYPE_TABLE)
@@ -394,7 +400,7 @@ static inline phys_addr_t pmd_page_paddr(pmd_t pmd)
#define pmd_ERROR(pmd) __pmd_error(__FILE__, __LINE__, pmd_val(pmd))
#define pud_none(pud) (!pud_val(pud))
-#define pud_bad(pud) (!(pud_val(pud) & 2))
+#define pud_bad(pud) (!(pud_val(pud) & PUD_TABLE_BIT))
#define pud_present(pud) (pud_val(pud))
static inline void set_pud(pud_t *pudp, pud_t pud)
@@ -526,18 +532,31 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
}
#ifdef CONFIG_ARM64_HW_AFDBM
+#define __HAVE_ARCH_PTEP_SET_ACCESS_FLAGS
+extern int ptep_set_access_flags(struct vm_area_struct *vma,
+ unsigned long address, pte_t *ptep,
+ pte_t entry, int dirty);
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#define __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
+static inline int pmdp_set_access_flags(struct vm_area_struct *vma,
+ unsigned long address, pmd_t *pmdp,
+ pmd_t entry, int dirty)
+{
+ return ptep_set_access_flags(vma, address, (pte_t *)pmdp, pmd_pte(entry), dirty);
+}
+#endif
+
/*
* Atomic pte/pmd modifications.
*/
#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
-static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
- unsigned long address,
- pte_t *ptep)
+static inline int __ptep_test_and_clear_young(pte_t *ptep)
{
pteval_t pteval;
unsigned int tmp, res;
- asm volatile("// ptep_test_and_clear_young\n"
+ asm volatile("// __ptep_test_and_clear_young\n"
" prfm pstl1strm, %2\n"
"1: ldxr %0, %2\n"
" ubfx %w3, %w0, %5, #1 // extract PTE_AF (young)\n"
@@ -550,6 +569,13 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
return res;
}
+static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
+ unsigned long address,
+ pte_t *ptep)
+{
+ return __ptep_test_and_clear_young(ptep);
+}
+
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
static inline int pmdp_test_and_clear_young(struct vm_area_struct *vma,
@@ -578,9 +604,9 @@ static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
}
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-#define __HAVE_ARCH_PMDP_GET_AND_CLEAR
-static inline pmd_t pmdp_get_and_clear(struct mm_struct *mm,
- unsigned long address, pmd_t *pmdp)
+#define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR
+static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
+ unsigned long address, pmd_t *pmdp)
{
return pte_pmd(ptep_get_and_clear(mm, address, (pte_t *)pmdp));
}
diff --git a/arch/arm64/include/asm/smp.h b/arch/arm64/include/asm/smp.h
index 817a067ba058..433e50405274 100644
--- a/arch/arm64/include/asm/smp.h
+++ b/arch/arm64/include/asm/smp.h
@@ -113,6 +113,17 @@ static inline void update_cpu_boot_status(int val)
dsb(ishst);
}
+/*
+ * The calling secondary CPU has detected a serious configuration mismatch,
+ * which calls for a kernel panic. Update the boot status and park the calling
+ * CPU.
+ */
+static inline void cpu_panic_kernel(void)
+{
+ update_cpu_boot_status(CPU_PANIC_KERNEL);
+ cpu_park_loop();
+}
+
#endif /* ifndef __ASSEMBLY__ */
#endif /* ifndef __ASM_SMP_H */
diff --git a/arch/arm64/include/asm/stage2_pgtable-nopmd.h b/arch/arm64/include/asm/stage2_pgtable-nopmd.h
new file mode 100644
index 000000000000..2656a0fd05a6
--- /dev/null
+++ b/arch/arm64/include/asm/stage2_pgtable-nopmd.h
@@ -0,0 +1,42 @@
+/*
+ * Copyright (C) 2016 - ARM Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM64_S2_PGTABLE_NOPMD_H_
+#define __ARM64_S2_PGTABLE_NOPMD_H_
+
+#include <asm/stage2_pgtable-nopud.h>
+
+#define __S2_PGTABLE_PMD_FOLDED
+
+#define S2_PMD_SHIFT S2_PUD_SHIFT
+#define S2_PTRS_PER_PMD 1
+#define S2_PMD_SIZE (1UL << S2_PMD_SHIFT)
+#define S2_PMD_MASK (~(S2_PMD_SIZE-1))
+
+#define stage2_pud_none(pud) (0)
+#define stage2_pud_present(pud) (1)
+#define stage2_pud_clear(pud) do { } while (0)
+#define stage2_pud_populate(pud, pmd) do { } while (0)
+#define stage2_pmd_offset(pud, address) ((pmd_t *)(pud))
+
+#define stage2_pmd_free(pmd) do { } while (0)
+
+#define stage2_pmd_addr_end(addr, end) (end)
+
+#define stage2_pud_huge(pud) (0)
+#define stage2_pmd_table_empty(pmdp) (0)
+
+#endif
diff --git a/arch/arm64/include/asm/stage2_pgtable-nopud.h b/arch/arm64/include/asm/stage2_pgtable-nopud.h
new file mode 100644
index 000000000000..5ee87b54ebf3
--- /dev/null
+++ b/arch/arm64/include/asm/stage2_pgtable-nopud.h
@@ -0,0 +1,39 @@
+/*
+ * Copyright (C) 2016 - ARM Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM64_S2_PGTABLE_NOPUD_H_
+#define __ARM64_S2_PGTABLE_NOPUD_H_
+
+#define __S2_PGTABLE_PUD_FOLDED
+
+#define S2_PUD_SHIFT S2_PGDIR_SHIFT
+#define S2_PTRS_PER_PUD 1
+#define S2_PUD_SIZE (_AC(1, UL) << S2_PUD_SHIFT)
+#define S2_PUD_MASK (~(S2_PUD_SIZE-1))
+
+#define stage2_pgd_none(pgd) (0)
+#define stage2_pgd_present(pgd) (1)
+#define stage2_pgd_clear(pgd) do { } while (0)
+#define stage2_pgd_populate(pgd, pud) do { } while (0)
+
+#define stage2_pud_offset(pgd, address) ((pud_t *)(pgd))
+
+#define stage2_pud_free(x) do { } while (0)
+
+#define stage2_pud_addr_end(addr, end) (end)
+#define stage2_pud_table_empty(pmdp) (0)
+
+#endif
diff --git a/arch/arm64/include/asm/stage2_pgtable.h b/arch/arm64/include/asm/stage2_pgtable.h
new file mode 100644
index 000000000000..8b68099348e5
--- /dev/null
+++ b/arch/arm64/include/asm/stage2_pgtable.h
@@ -0,0 +1,142 @@
+/*
+ * Copyright (C) 2016 - ARM Ltd
+ *
+ * stage2 page table helpers
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ARM64_S2_PGTABLE_H_
+#define __ARM64_S2_PGTABLE_H_
+
+#include <asm/pgtable.h>
+
+/*
+ * The hardware supports concatenation of up to 16 tables at stage2 entry level
+ * and we use the feature whenever possible.
+ *
+ * Now, the minimum number of bits resolved at any level is (PAGE_SHIFT - 3).
+ * On arm64, the smallest PAGE_SIZE supported is 4k, which means
+ * (PAGE_SHIFT - 3) > 4 holds for all page sizes.
+ * This implies that the total number of page table levels expected by the
+ * hardware at stage2 is the number of levels required for (KVM_PHYS_SHIFT - 4)
+ * in a normal translation (e.g., stage1), since we cannot have another level in
+ * the range (KVM_PHYS_SHIFT, KVM_PHYS_SHIFT - 4).
+ */
+#define STAGE2_PGTABLE_LEVELS ARM64_HW_PGTABLE_LEVELS(KVM_PHYS_SHIFT - 4)
+
+/*
+ * With all the supported VA_BITS values and a 40-bit guest IPA, the following condition
+ * is always true:
+ *
+ * STAGE2_PGTABLE_LEVELS <= CONFIG_PGTABLE_LEVELS
+ *
+ * We base our stage-2 page table walker helpers on this assumption and
+ * fall back to using the host version of the helper wherever possible.
+ * i.e., if a particular level (e.g., the PUD) is not folded at stage2, we fall
+ * back to using the host version, since it is guaranteed not to be folded at
+ * the host level.
+ *
+ * If the condition breaks in the future, we can rearrange the host level
+ * definitions and reuse them for stage2. Till then...
+ */
+#if STAGE2_PGTABLE_LEVELS > CONFIG_PGTABLE_LEVELS
+#error "Unsupported combination of guest IPA and host VA_BITS."
+#endif
+
+/* S2_PGDIR_SHIFT is the size mapped by top-level stage2 entry */
+#define S2_PGDIR_SHIFT ARM64_HW_PGTABLE_LEVEL_SHIFT(4 - STAGE2_PGTABLE_LEVELS)
+#define S2_PGDIR_SIZE (_AC(1, UL) << S2_PGDIR_SHIFT)
+#define S2_PGDIR_MASK (~(S2_PGDIR_SIZE - 1))
+
+/*
+ * The number of PTRS across all concatenated stage2 tables is given by the
+ * number of bits resolved at the initial level.
+ */
+#define PTRS_PER_S2_PGD (1 << (KVM_PHYS_SHIFT - S2_PGDIR_SHIFT))
+
+/*
+ * KVM_MMU_CACHE_MIN_PAGES is the number of stage2 page table translation
+ * levels in addition to the PGD.
+ */
+#define KVM_MMU_CACHE_MIN_PAGES (STAGE2_PGTABLE_LEVELS - 1)
+
+
+#if STAGE2_PGTABLE_LEVELS > 3
+
+#define S2_PUD_SHIFT ARM64_HW_PGTABLE_LEVEL_SHIFT(1)
+#define S2_PUD_SIZE (_AC(1, UL) << S2_PUD_SHIFT)
+#define S2_PUD_MASK (~(S2_PUD_SIZE - 1))
+
+#define stage2_pgd_none(pgd) pgd_none(pgd)
+#define stage2_pgd_clear(pgd) pgd_clear(pgd)
+#define stage2_pgd_present(pgd) pgd_present(pgd)
+#define stage2_pgd_populate(pgd, pud) pgd_populate(NULL, pgd, pud)
+#define stage2_pud_offset(pgd, address) pud_offset(pgd, address)
+#define stage2_pud_free(pud) pud_free(NULL, pud)
+
+#define stage2_pud_table_empty(pudp) kvm_page_empty(pudp)
+
+static inline phys_addr_t stage2_pud_addr_end(phys_addr_t addr, phys_addr_t end)
+{
+ phys_addr_t boundary = (addr + S2_PUD_SIZE) & S2_PUD_MASK;
+
+ return (boundary - 1 < end - 1) ? boundary : end;
+}
+
+#endif /* STAGE2_PGTABLE_LEVELS > 3 */
+
+
+#if STAGE2_PGTABLE_LEVELS > 2
+
+#define S2_PMD_SHIFT ARM64_HW_PGTABLE_LEVEL_SHIFT(2)
+#define S2_PMD_SIZE (_AC(1, UL) << S2_PMD_SHIFT)
+#define S2_PMD_MASK (~(S2_PMD_SIZE - 1))
+
+#define stage2_pud_none(pud) pud_none(pud)
+#define stage2_pud_clear(pud) pud_clear(pud)
+#define stage2_pud_present(pud) pud_present(pud)
+#define stage2_pud_populate(pud, pmd) pud_populate(NULL, pud, pmd)
+#define stage2_pmd_offset(pud, address) pmd_offset(pud, address)
+#define stage2_pmd_free(pmd) pmd_free(NULL, pmd)
+
+#define stage2_pud_huge(pud) pud_huge(pud)
+#define stage2_pmd_table_empty(pmdp) kvm_page_empty(pmdp)
+
+static inline phys_addr_t stage2_pmd_addr_end(phys_addr_t addr, phys_addr_t end)
+{
+ phys_addr_t boundary = (addr + S2_PMD_SIZE) & S2_PMD_MASK;
+
+ return (boundary - 1 < end - 1) ? boundary : end;
+}
+
+#endif /* STAGE2_PGTABLE_LEVELS > 2 */
+
+#define stage2_pte_table_empty(ptep) kvm_page_empty(ptep)
+
+#if STAGE2_PGTABLE_LEVELS == 2
+#include <asm/stage2_pgtable-nopmd.h>
+#elif STAGE2_PGTABLE_LEVELS == 3
+#include <asm/stage2_pgtable-nopud.h>
+#endif
+
+
+#define stage2_pgd_index(addr) (((addr) >> S2_PGDIR_SHIFT) & (PTRS_PER_S2_PGD - 1))
+
+static inline phys_addr_t stage2_pgd_addr_end(phys_addr_t addr, phys_addr_t end)
+{
+ phys_addr_t boundary = (addr + S2_PGDIR_SIZE) & S2_PGDIR_MASK;
+
+ return (boundary - 1 < end - 1) ? boundary : end;
+}
+
+#endif /* __ARM64_S2_PGTABLE_H_ */
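
As a worked example of the level arithmetic described in the comments above (illustrative figures only, assuming KVM_PHYS_SHIFT is 40, 4 KB pages, and the usual ARM64_HW_PGTABLE_LEVELS()/ARM64_HW_PGTABLE_LEVEL_SHIFT() definitions from pgtable-hwdef.h):

/*
 * Sketch of the arithmetic, not compiled code:
 *   STAGE2_PGTABLE_LEVELS = ARM64_HW_PGTABLE_LEVELS(40 - 4)
 *                         = (36 - 4) / (12 - 3)              = 3 levels
 *   S2_PGDIR_SHIFT        = ARM64_HW_PGTABLE_LEVEL_SHIFT(4 - 3)
 *                         = (12 - 3) * (4 - 1) + 3           = 30
 *   PTRS_PER_S2_PGD       = 1 << (40 - 30)                   = 1024 entries,
 *                           i.e. two concatenated 4 KB tables at the entry level.
 */
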
diff --git a/arch/arm64/include/asm/suspend.h b/arch/arm64/include/asm/suspend.h
index 59a5b0f1e81c..024d623f662e 100644
--- a/arch/arm64/include/asm/suspend.h
+++ b/arch/arm64/include/asm/suspend.h
@@ -1,7 +1,8 @@
#ifndef __ASM_SUSPEND_H
#define __ASM_SUSPEND_H
-#define NR_CTX_REGS 11
+#define NR_CTX_REGS 10
+#define NR_CALLEE_SAVED_REGS 12
/*
* struct cpu_suspend_ctx must be 16-byte aligned since it is allocated on
@@ -16,11 +17,34 @@ struct cpu_suspend_ctx {
u64 sp;
} __aligned(16);
-struct sleep_save_sp {
- phys_addr_t *save_ptr_stash;
- phys_addr_t save_ptr_stash_phys;
+/*
+ * Memory to save the cpu state is allocated on the stack by
+ * __cpu_suspend_enter()'s caller, and populated by __cpu_suspend_enter().
+ * This data must survive until cpu_resume() is called.
+ *
+ * This struct describes the size and the layout of the saved cpu state.
+ * The layout of the callee_saved_regs is defined by the implementation
+ * of __cpu_suspend_enter() and cpu_resume(). This struct must be passed
+ * in by the caller as __cpu_suspend_enter()'s stack-frame is gone once it
+ * returns, and the data would be subsequently corrupted by the call to the
+ * finisher.
+ */
+struct sleep_stack_data {
+ struct cpu_suspend_ctx system_regs;
+ unsigned long callee_saved_regs[NR_CALLEE_SAVED_REGS];
};
+extern unsigned long *sleep_save_stash;
+
extern int cpu_suspend(unsigned long arg, int (*fn)(unsigned long));
extern void cpu_resume(void);
+int __cpu_suspend_enter(struct sleep_stack_data *state);
+void __cpu_suspend_exit(void);
+void _cpu_resume(void);
+
+int swsusp_arch_suspend(void);
+int swsusp_arch_resume(void);
+int arch_hibernation_header_save(void *addr, unsigned int max_size);
+int arch_hibernation_header_restore(void *addr);
+
#endif
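
A minimal sketch of the setjmp-style calling pattern the comment above describes; the caller keeps the sleep_stack_data on its own stack and only releases it once resume is complete (illustrative only, assuming __cpu_suspend_enter() returns non-zero on the initial save path and zero when resumed; error handling and cache maintenance omitted):

/* Sketch: a suspend path using the new enter/exit helpers. */
static int example_suspend(unsigned long arg, int (*finisher)(unsigned long))
{
	struct sleep_stack_data state;
	int ret = 0;

	if (__cpu_suspend_enter(&state)) {
		/* context saved in 'state'; hand over to the finisher */
		ret = finisher(arg);
	} else {
		/* woken up: cpu_resume() restored us from 'state' */
		__cpu_suspend_exit();
	}
	return ret;
}
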
diff --git a/arch/arm64/include/asm/sysreg.h b/arch/arm64/include/asm/sysreg.h
index 12874164b0ae..751e901c8d37 100644
--- a/arch/arm64/include/asm/sysreg.h
+++ b/arch/arm64/include/asm/sysreg.h
@@ -86,10 +86,21 @@
#define SET_PSTATE_UAO(x) __inst_arm(0xd5000000 | REG_PSTATE_UAO_IMM |\
(!!x)<<8 | 0x1f)
-/* SCTLR_EL1 */
-#define SCTLR_EL1_CP15BEN (0x1 << 5)
-#define SCTLR_EL1_SED (0x1 << 8)
-#define SCTLR_EL1_SPAN (0x1 << 23)
+/* Common SCTLR_ELx flags. */
+#define SCTLR_ELx_EE (1 << 25)
+#define SCTLR_ELx_I (1 << 12)
+#define SCTLR_ELx_SA (1 << 3)
+#define SCTLR_ELx_C (1 << 2)
+#define SCTLR_ELx_A (1 << 1)
+#define SCTLR_ELx_M 1
+
+#define SCTLR_ELx_FLAGS (SCTLR_ELx_M | SCTLR_ELx_A | SCTLR_ELx_C | \
+ SCTLR_ELx_SA | SCTLR_ELx_I)
+
+/* SCTLR_EL1 specific flags. */
+#define SCTLR_EL1_SPAN (1 << 23)
+#define SCTLR_EL1_SED (1 << 8)
+#define SCTLR_EL1_CP15BEN (1 << 5)
/* id_aa64isar0 */
@@ -115,6 +126,7 @@
#define ID_AA64PFR0_ASIMD_SUPPORTED 0x0
#define ID_AA64PFR0_EL1_64BIT_ONLY 0x1
#define ID_AA64PFR0_EL0_64BIT_ONLY 0x1
+#define ID_AA64PFR0_EL0_32BIT_64BIT 0x2
/* id_aa64mmfr0 */
#define ID_AA64MMFR0_TGRAN4_SHIFT 28
@@ -145,7 +157,11 @@
#define ID_AA64MMFR1_VMIDBITS_16 2
/* id_aa64mmfr2 */
+#define ID_AA64MMFR2_LVA_SHIFT 16
+#define ID_AA64MMFR2_IESB_SHIFT 12
+#define ID_AA64MMFR2_LSM_SHIFT 8
#define ID_AA64MMFR2_UAO_SHIFT 4
+#define ID_AA64MMFR2_CNP_SHIFT 0
/* id_aa64dfr0 */
#define ID_AA64DFR0_CTX_CMPS_SHIFT 28
diff --git a/arch/arm64/include/asm/topology.h b/arch/arm64/include/asm/topology.h
index a3e9d6fdbf21..8b57339823e9 100644
--- a/arch/arm64/include/asm/topology.h
+++ b/arch/arm64/include/asm/topology.h
@@ -22,6 +22,16 @@ void init_cpu_topology(void);
void store_cpu_topology(unsigned int cpuid);
const struct cpumask *cpu_coregroup_mask(int cpu);
+#ifdef CONFIG_NUMA
+
+struct pci_bus;
+int pcibus_to_node(struct pci_bus *bus);
+#define cpumask_of_pcibus(bus) (pcibus_to_node(bus) == -1 ? \
+ cpu_all_mask : \
+ cpumask_of_node(pcibus_to_node(bus)))
+
+#endif /* CONFIG_NUMA */
+
#include <asm-generic/topology.h>
#endif /* _ASM_ARM_TOPOLOGY_H */
diff --git a/arch/arm64/include/asm/virt.h b/arch/arm64/include/asm/virt.h
index 9f22dd607958..dcbcf8dcbefb 100644
--- a/arch/arm64/include/asm/virt.h
+++ b/arch/arm64/include/asm/virt.h
@@ -18,6 +18,22 @@
#ifndef __ASM__VIRT_H
#define __ASM__VIRT_H
+/*
+ * The arm64 hcall implementation uses x0 to specify the hcall type. A value
+ * less than 0xfff indicates a special hcall, such as get/set vector.
+ * Any other value is used as a pointer to the function to call.
+ */
+
+/* HVC_GET_VECTORS - Return the value of the vbar_el2 register. */
+#define HVC_GET_VECTORS 0
+
+/*
+ * HVC_SET_VECTORS - Set the value of the vbar_el2 register.
+ *
+ * @x1: Physical address of the new vector table.
+ */
+#define HVC_SET_VECTORS 1
+
#define BOOT_CPU_MODE_EL1 (0xe11)
#define BOOT_CPU_MODE_EL2 (0xe12)
@@ -60,6 +76,12 @@ static inline bool is_kernel_in_hyp_mode(void)
return el == CurrentEL_EL2;
}
+#ifdef CONFIG_ARM64_VHE
+extern void verify_cpu_run_el(void);
+#else
+static inline void verify_cpu_run_el(void) {}
+#endif
+
/* The section containing the hypervisor text */
extern char __hyp_text_start[];
extern char __hyp_text_end[];
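
As an illustration of the hcall convention documented above -- type in x0, argument in x1 -- this is roughly how a caller at EL1 could install a new EL2 vector table (sketch only; the helper name is hypothetical, and the real entry points live in the hyp-stub):

/* Sketch: issue HVC_SET_VECTORS following the x0/x1 convention above. */
static inline void example_set_hyp_vectors(unsigned long vectors_pa)
{
	register unsigned long x0 asm("x0") = HVC_SET_VECTORS;
	register unsigned long x1 asm("x1") = vectors_pa;

	asm volatile("hvc #0" : "+r" (x0) : "r" (x1) : "memory");
}
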
diff --git a/arch/arm64/include/uapi/asm/unistd.h b/arch/arm64/include/uapi/asm/unistd.h
index 1caadc24e3fe..043d17a21342 100644
--- a/arch/arm64/include/uapi/asm/unistd.h
+++ b/arch/arm64/include/uapi/asm/unistd.h
@@ -13,4 +13,7 @@
* You should have received a copy of the GNU General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
+
+#define __ARCH_WANT_RENAMEAT
+
#include <asm-generic/unistd.h>
diff --git a/arch/arm64/kernel/Makefile b/arch/arm64/kernel/Makefile
index 3793003e16a2..2173149d8954 100644
--- a/arch/arm64/kernel/Makefile
+++ b/arch/arm64/kernel/Makefile
@@ -45,6 +45,7 @@ arm64-obj-$(CONFIG_ACPI) += acpi.o
arm64-obj-$(CONFIG_ARM64_ACPI_PARKING_PROTOCOL) += acpi_parking_protocol.o
arm64-obj-$(CONFIG_PARAVIRT) += paravirt.o
arm64-obj-$(CONFIG_RANDOMIZE_BASE) += kaslr.o
+arm64-obj-$(CONFIG_HIBERNATION) += hibernate.o hibernate-asm.o
obj-y += $(arm64-obj-y) vdso/
obj-m += $(arm64-obj-m)
diff --git a/arch/arm64/kernel/acpi.c b/arch/arm64/kernel/acpi.c
index d1ce8e2f98b9..3e4f1a45b125 100644
--- a/arch/arm64/kernel/acpi.c
+++ b/arch/arm64/kernel/acpi.c
@@ -42,6 +42,7 @@ int acpi_pci_disabled = 1; /* skip ACPI PCI scan and IRQ initialization */
EXPORT_SYMBOL(acpi_pci_disabled);
static bool param_acpi_off __initdata;
+static bool param_acpi_on __initdata;
static bool param_acpi_force __initdata;
static int __init parse_acpi(char *arg)
@@ -52,6 +53,8 @@ static int __init parse_acpi(char *arg)
/* "acpi=off" disables both ACPI table parsing and interpreter */
if (strcmp(arg, "off") == 0)
param_acpi_off = true;
+ else if (strcmp(arg, "on") == 0) /* prefer ACPI over DT */
+ param_acpi_on = true;
else if (strcmp(arg, "force") == 0) /* force ACPI to be enabled */
param_acpi_force = true;
else
@@ -66,12 +69,24 @@ static int __init dt_scan_depth1_nodes(unsigned long node,
void *data)
{
/*
- * Return 1 as soon as we encounter a node at depth 1 that is
- * not the /chosen node.
+ * Ignore anything not directly under the root node; we'll
+ * catch its parent instead.
*/
- if (depth == 1 && (strcmp(uname, "chosen") != 0))
- return 1;
- return 0;
+ if (depth != 1)
+ return 0;
+
+ if (strcmp(uname, "chosen") == 0)
+ return 0;
+
+ if (strcmp(uname, "hypervisor") == 0 &&
+ of_flat_dt_is_compatible(node, "xen,xen"))
+ return 0;
+
+ /*
+ * This node at depth 1 is neither a chosen node nor a xen node,
+ * which we do not expect.
+ */
+ return 1;
}
/*
@@ -184,11 +199,13 @@ void __init acpi_boot_table_init(void)
/*
* Enable ACPI instead of device tree unless
* - ACPI has been disabled explicitly (acpi=off), or
- * - the device tree is not empty (it has more than just a /chosen node)
- * and ACPI has not been force enabled (acpi=force)
+ * - the device tree is not empty (it has more than just a /chosen node,
+ * and a /hypervisor node when running on Xen)
+ * and ACPI has not been [force] enabled (acpi=on|force)
*/
if (param_acpi_off ||
- (!param_acpi_force && of_scan_flat_dt(dt_scan_depth1_nodes, NULL)))
+ (!param_acpi_on && !param_acpi_force &&
+ of_scan_flat_dt(dt_scan_depth1_nodes, NULL)))
return;
/*
diff --git a/arch/arm64/kernel/asm-offsets.c b/arch/arm64/kernel/asm-offsets.c
index 3ae6b310ac9b..f8e5d47f0880 100644
--- a/arch/arm64/kernel/asm-offsets.c
+++ b/arch/arm64/kernel/asm-offsets.c
@@ -22,6 +22,7 @@
#include <linux/mm.h>
#include <linux/dma-mapping.h>
#include <linux/kvm_host.h>
+#include <linux/suspend.h>
#include <asm/thread_info.h>
#include <asm/memory.h>
#include <asm/smp_plat.h>
@@ -119,11 +120,14 @@ int main(void)
DEFINE(CPU_CTX_SP, offsetof(struct cpu_suspend_ctx, sp));
DEFINE(MPIDR_HASH_MASK, offsetof(struct mpidr_hash, mask));
DEFINE(MPIDR_HASH_SHIFTS, offsetof(struct mpidr_hash, shift_aff));
- DEFINE(SLEEP_SAVE_SP_SZ, sizeof(struct sleep_save_sp));
- DEFINE(SLEEP_SAVE_SP_PHYS, offsetof(struct sleep_save_sp, save_ptr_stash_phys));
- DEFINE(SLEEP_SAVE_SP_VIRT, offsetof(struct sleep_save_sp, save_ptr_stash));
+ DEFINE(SLEEP_STACK_DATA_SYSTEM_REGS, offsetof(struct sleep_stack_data, system_regs));
+ DEFINE(SLEEP_STACK_DATA_CALLEE_REGS, offsetof(struct sleep_stack_data, callee_saved_regs));
#endif
DEFINE(ARM_SMCCC_RES_X0_OFFS, offsetof(struct arm_smccc_res, a0));
DEFINE(ARM_SMCCC_RES_X2_OFFS, offsetof(struct arm_smccc_res, a2));
+ BLANK();
+ DEFINE(HIBERN_PBE_ORIG, offsetof(struct pbe, orig_address));
+ DEFINE(HIBERN_PBE_ADDR, offsetof(struct pbe, address));
+ DEFINE(HIBERN_PBE_NEXT, offsetof(struct pbe, next));
return 0;
}
diff --git a/arch/arm64/kernel/cpu_errata.c b/arch/arm64/kernel/cpu_errata.c
index 06afd04e02c0..d42789499f17 100644
--- a/arch/arm64/kernel/cpu_errata.c
+++ b/arch/arm64/kernel/cpu_errata.c
@@ -22,14 +22,16 @@
#include <asm/cpufeature.h>
static bool __maybe_unused
-is_affected_midr_range(const struct arm64_cpu_capabilities *entry)
+is_affected_midr_range(const struct arm64_cpu_capabilities *entry, int scope)
{
+ WARN_ON(scope != SCOPE_LOCAL_CPU || preemptible());
return MIDR_IS_CPU_MODEL_RANGE(read_cpuid_id(), entry->midr_model,
entry->midr_range_min,
entry->midr_range_max);
}
#define MIDR_RANGE(model, min, max) \
+ .def_scope = SCOPE_LOCAL_CPU, \
.matches = is_affected_midr_range, \
.midr_model = model, \
.midr_range_min = min, \
@@ -101,6 +103,26 @@ const struct arm64_cpu_capabilities arm64_errata[] = {
}
};
+/*
+ * The CPU errata workarounds are detected and applied at boot time,
+ * and the related information is freed soon after. If the new CPU requires
+ * a workaround that was not detected at boot, fail this CPU.
+ */
+void verify_local_cpu_errata(void)
+{
+ const struct arm64_cpu_capabilities *caps = arm64_errata;
+
+ for (; caps->matches; caps++)
+ if (!cpus_have_cap(caps->capability) &&
+ caps->matches(caps, SCOPE_LOCAL_CPU)) {
+ pr_crit("CPU%d: Requires workaround for %s, not detected"
+ " at boot time\n",
+ smp_processor_id(),
+ caps->desc ? : "an erratum");
+ cpu_die_early();
+ }
+}
+
void check_local_cpu_errata(void)
{
update_cpu_capabilities(arm64_errata, "enabling workaround for");
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 943f5140e0f3..811773d1c1d0 100644
--- a/arch/arm64/kernel/cpufeature.c
+++ b/arch/arm64/kernel/cpufeature.c
@@ -71,7 +71,8 @@ DECLARE_BITMAP(cpu_hwcaps, ARM64_NCAPS);
/* meta feature for alternatives */
static bool __maybe_unused
-cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry);
+cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused);
+
static struct arm64_ftr_bits ftr_id_aa64isar0[] = {
ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, 32, 32, 0),
@@ -130,7 +131,11 @@ static struct arm64_ftr_bits ftr_id_aa64mmfr1[] = {
};
static struct arm64_ftr_bits ftr_id_aa64mmfr2[] = {
+ ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, ID_AA64MMFR2_LVA_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, ID_AA64MMFR2_IESB_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, ID_AA64MMFR2_LSM_SHIFT, 4, 0),
ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, ID_AA64MMFR2_UAO_SHIFT, 4, 0),
+ ARM64_FTR_BITS(FTR_STRICT, FTR_EXACT, ID_AA64MMFR2_CNP_SHIFT, 4, 0),
ARM64_FTR_END,
};
@@ -435,22 +440,26 @@ void __init init_cpu_features(struct cpuinfo_arm64 *info)
init_cpu_ftr_reg(SYS_ID_AA64MMFR2_EL1, info->reg_id_aa64mmfr2);
init_cpu_ftr_reg(SYS_ID_AA64PFR0_EL1, info->reg_id_aa64pfr0);
init_cpu_ftr_reg(SYS_ID_AA64PFR1_EL1, info->reg_id_aa64pfr1);
- init_cpu_ftr_reg(SYS_ID_DFR0_EL1, info->reg_id_dfr0);
- init_cpu_ftr_reg(SYS_ID_ISAR0_EL1, info->reg_id_isar0);
- init_cpu_ftr_reg(SYS_ID_ISAR1_EL1, info->reg_id_isar1);
- init_cpu_ftr_reg(SYS_ID_ISAR2_EL1, info->reg_id_isar2);
- init_cpu_ftr_reg(SYS_ID_ISAR3_EL1, info->reg_id_isar3);
- init_cpu_ftr_reg(SYS_ID_ISAR4_EL1, info->reg_id_isar4);
- init_cpu_ftr_reg(SYS_ID_ISAR5_EL1, info->reg_id_isar5);
- init_cpu_ftr_reg(SYS_ID_MMFR0_EL1, info->reg_id_mmfr0);
- init_cpu_ftr_reg(SYS_ID_MMFR1_EL1, info->reg_id_mmfr1);
- init_cpu_ftr_reg(SYS_ID_MMFR2_EL1, info->reg_id_mmfr2);
- init_cpu_ftr_reg(SYS_ID_MMFR3_EL1, info->reg_id_mmfr3);
- init_cpu_ftr_reg(SYS_ID_PFR0_EL1, info->reg_id_pfr0);
- init_cpu_ftr_reg(SYS_ID_PFR1_EL1, info->reg_id_pfr1);
- init_cpu_ftr_reg(SYS_MVFR0_EL1, info->reg_mvfr0);
- init_cpu_ftr_reg(SYS_MVFR1_EL1, info->reg_mvfr1);
- init_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2);
+
+ if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
+ init_cpu_ftr_reg(SYS_ID_DFR0_EL1, info->reg_id_dfr0);
+ init_cpu_ftr_reg(SYS_ID_ISAR0_EL1, info->reg_id_isar0);
+ init_cpu_ftr_reg(SYS_ID_ISAR1_EL1, info->reg_id_isar1);
+ init_cpu_ftr_reg(SYS_ID_ISAR2_EL1, info->reg_id_isar2);
+ init_cpu_ftr_reg(SYS_ID_ISAR3_EL1, info->reg_id_isar3);
+ init_cpu_ftr_reg(SYS_ID_ISAR4_EL1, info->reg_id_isar4);
+ init_cpu_ftr_reg(SYS_ID_ISAR5_EL1, info->reg_id_isar5);
+ init_cpu_ftr_reg(SYS_ID_MMFR0_EL1, info->reg_id_mmfr0);
+ init_cpu_ftr_reg(SYS_ID_MMFR1_EL1, info->reg_id_mmfr1);
+ init_cpu_ftr_reg(SYS_ID_MMFR2_EL1, info->reg_id_mmfr2);
+ init_cpu_ftr_reg(SYS_ID_MMFR3_EL1, info->reg_id_mmfr3);
+ init_cpu_ftr_reg(SYS_ID_PFR0_EL1, info->reg_id_pfr0);
+ init_cpu_ftr_reg(SYS_ID_PFR1_EL1, info->reg_id_pfr1);
+ init_cpu_ftr_reg(SYS_MVFR0_EL1, info->reg_mvfr0);
+ init_cpu_ftr_reg(SYS_MVFR1_EL1, info->reg_mvfr1);
+ init_cpu_ftr_reg(SYS_MVFR2_EL1, info->reg_mvfr2);
+ }
+
}
static void update_cpu_ftr_reg(struct arm64_ftr_reg *reg, u64 new)
@@ -555,47 +564,51 @@ void update_cpu_features(int cpu,
info->reg_id_aa64pfr1, boot->reg_id_aa64pfr1);
/*
- * If we have AArch32, we care about 32-bit features for compat. These
- * registers should be RES0 otherwise.
+ * If we have AArch32, we care about 32-bit features for compat.
+ * If the system doesn't support AArch32, don't update them.
*/
- taint |= check_update_ftr_reg(SYS_ID_DFR0_EL1, cpu,
+ if (id_aa64pfr0_32bit_el0(read_system_reg(SYS_ID_AA64PFR0_EL1)) &&
+ id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
+
+ taint |= check_update_ftr_reg(SYS_ID_DFR0_EL1, cpu,
info->reg_id_dfr0, boot->reg_id_dfr0);
- taint |= check_update_ftr_reg(SYS_ID_ISAR0_EL1, cpu,
+ taint |= check_update_ftr_reg(SYS_ID_ISAR0_EL1, cpu,
info->reg_id_isar0, boot->reg_id_isar0);
- taint |= check_update_ftr_reg(SYS_ID_ISAR1_EL1, cpu,
+ taint |= check_update_ftr_reg(SYS_ID_ISAR1_EL1, cpu,
info->reg_id_isar1, boot->reg_id_isar1);
- taint |= check_update_ftr_reg(SYS_ID_ISAR2_EL1, cpu,
+ taint |= check_update_ftr_reg(SYS_ID_ISAR2_EL1, cpu,
info->reg_id_isar2, boot->reg_id_isar2);
- taint |= check_update_ftr_reg(SYS_ID_ISAR3_EL1, cpu,
+ taint |= check_update_ftr_reg(SYS_ID_ISAR3_EL1, cpu,
info->reg_id_isar3, boot->reg_id_isar3);
- taint |= check_update_ftr_reg(SYS_ID_ISAR4_EL1, cpu,
+ taint |= check_update_ftr_reg(SYS_ID_ISAR4_EL1, cpu,
info->reg_id_isar4, boot->reg_id_isar4);
- taint |= check_update_ftr_reg(SYS_ID_ISAR5_EL1, cpu,
+ taint |= check_update_ftr_reg(SYS_ID_ISAR5_EL1, cpu,
info->reg_id_isar5, boot->reg_id_isar5);
- /*
- * Regardless of the value of the AuxReg field, the AIFSR, ADFSR, and
- * ACTLR formats could differ across CPUs and therefore would have to
- * be trapped for virtualization anyway.
- */
- taint |= check_update_ftr_reg(SYS_ID_MMFR0_EL1, cpu,
+ /*
+ * Regardless of the value of the AuxReg field, the AIFSR, ADFSR, and
+ * ACTLR formats could differ across CPUs and therefore would have to
+ * be trapped for virtualization anyway.
+ */
+ taint |= check_update_ftr_reg(SYS_ID_MMFR0_EL1, cpu,
info->reg_id_mmfr0, boot->reg_id_mmfr0);
- taint |= check_update_ftr_reg(SYS_ID_MMFR1_EL1, cpu,
+ taint |= check_update_ftr_reg(SYS_ID_MMFR1_EL1, cpu,
info->reg_id_mmfr1, boot->reg_id_mmfr1);
- taint |= check_update_ftr_reg(SYS_ID_MMFR2_EL1, cpu,
+ taint |= check_update_ftr_reg(SYS_ID_MMFR2_EL1, cpu,
info->reg_id_mmfr2, boot->reg_id_mmfr2);
- taint |= check_update_ftr_reg(SYS_ID_MMFR3_EL1, cpu,
+ taint |= check_update_ftr_reg(SYS_ID_MMFR3_EL1, cpu,
info->reg_id_mmfr3, boot->reg_id_mmfr3);
- taint |= check_update_ftr_reg(SYS_ID_PFR0_EL1, cpu,
+ taint |= check_update_ftr_reg(SYS_ID_PFR0_EL1, cpu,
info->reg_id_pfr0, boot->reg_id_pfr0);
- taint |= check_update_ftr_reg(SYS_ID_PFR1_EL1, cpu,
+ taint |= check_update_ftr_reg(SYS_ID_PFR1_EL1, cpu,
info->reg_id_pfr1, boot->reg_id_pfr1);
- taint |= check_update_ftr_reg(SYS_MVFR0_EL1, cpu,
+ taint |= check_update_ftr_reg(SYS_MVFR0_EL1, cpu,
info->reg_mvfr0, boot->reg_mvfr0);
- taint |= check_update_ftr_reg(SYS_MVFR1_EL1, cpu,
+ taint |= check_update_ftr_reg(SYS_MVFR1_EL1, cpu,
info->reg_mvfr1, boot->reg_mvfr1);
- taint |= check_update_ftr_reg(SYS_MVFR2_EL1, cpu,
+ taint |= check_update_ftr_reg(SYS_MVFR2_EL1, cpu,
info->reg_mvfr2, boot->reg_mvfr2);
+ }
/*
* Mismatched CPU features are a recipe for disaster. Don't even
@@ -614,6 +627,49 @@ u64 read_system_reg(u32 id)
return regp->sys_val;
}
+/*
+ * __raw_read_system_reg() - Used by a STARTING cpu before cpuinfo is populated.
+ * Read the system register on the current CPU
+ */
+static u64 __raw_read_system_reg(u32 sys_id)
+{
+ switch (sys_id) {
+ case SYS_ID_PFR0_EL1: return read_cpuid(ID_PFR0_EL1);
+ case SYS_ID_PFR1_EL1: return read_cpuid(ID_PFR1_EL1);
+ case SYS_ID_DFR0_EL1: return read_cpuid(ID_DFR0_EL1);
+ case SYS_ID_MMFR0_EL1: return read_cpuid(ID_MMFR0_EL1);
+ case SYS_ID_MMFR1_EL1: return read_cpuid(ID_MMFR1_EL1);
+ case SYS_ID_MMFR2_EL1: return read_cpuid(ID_MMFR2_EL1);
+ case SYS_ID_MMFR3_EL1: return read_cpuid(ID_MMFR3_EL1);
+ case SYS_ID_ISAR0_EL1: return read_cpuid(ID_ISAR0_EL1);
+ case SYS_ID_ISAR1_EL1: return read_cpuid(ID_ISAR1_EL1);
+ case SYS_ID_ISAR2_EL1: return read_cpuid(ID_ISAR2_EL1);
+ case SYS_ID_ISAR3_EL1: return read_cpuid(ID_ISAR3_EL1);
+ case SYS_ID_ISAR4_EL1: return read_cpuid(ID_ISAR4_EL1);
+ case SYS_ID_ISAR5_EL1: return read_cpuid(ID_ISAR5_EL1);
+ case SYS_MVFR0_EL1: return read_cpuid(MVFR0_EL1);
+ case SYS_MVFR1_EL1: return read_cpuid(MVFR1_EL1);
+ case SYS_MVFR2_EL1: return read_cpuid(MVFR2_EL1);
+
+ case SYS_ID_AA64PFR0_EL1: return read_cpuid(ID_AA64PFR0_EL1);
+ case SYS_ID_AA64PFR1_EL1: return read_cpuid(ID_AA64PFR1_EL1);
+ case SYS_ID_AA64DFR0_EL1: return read_cpuid(ID_AA64DFR0_EL1);
+ case SYS_ID_AA64DFR1_EL1: return read_cpuid(ID_AA64DFR1_EL1);
+ case SYS_ID_AA64MMFR0_EL1: return read_cpuid(ID_AA64MMFR0_EL1);
+ case SYS_ID_AA64MMFR1_EL1: return read_cpuid(ID_AA64MMFR1_EL1);
+ case SYS_ID_AA64MMFR2_EL1: return read_cpuid(ID_AA64MMFR2_EL1);
+ case SYS_ID_AA64ISAR0_EL1: return read_cpuid(ID_AA64ISAR0_EL1);
+ case SYS_ID_AA64ISAR1_EL1: return read_cpuid(ID_AA64ISAR1_EL1);
+
+ case SYS_CNTFRQ_EL0: return read_cpuid(CNTFRQ_EL0);
+ case SYS_CTR_EL0: return read_cpuid(CTR_EL0);
+ case SYS_DCZID_EL0: return read_cpuid(DCZID_EL0);
+ default:
+ BUG();
+ return 0;
+ }
+}
+
#include <linux/irqchip/arm-gic-v3.h>
static bool
@@ -625,19 +681,24 @@ feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry)
}
static bool
-has_cpuid_feature(const struct arm64_cpu_capabilities *entry)
+has_cpuid_feature(const struct arm64_cpu_capabilities *entry, int scope)
{
u64 val;
- val = read_system_reg(entry->sys_reg);
+ WARN_ON(scope == SCOPE_LOCAL_CPU && preemptible());
+ if (scope == SCOPE_SYSTEM)
+ val = read_system_reg(entry->sys_reg);
+ else
+ val = __raw_read_system_reg(entry->sys_reg);
+
return feature_matches(val, entry);
}
-static bool has_useable_gicv3_cpuif(const struct arm64_cpu_capabilities *entry)
+static bool has_useable_gicv3_cpuif(const struct arm64_cpu_capabilities *entry, int scope)
{
bool has_sre;
- if (!has_cpuid_feature(entry))
+ if (!has_cpuid_feature(entry, scope))
return false;
has_sre = gic_enable_sre();
@@ -648,7 +709,7 @@ static bool has_useable_gicv3_cpuif(const struct arm64_cpu_capabilities *entry)
return has_sre;
}
-static bool has_no_hw_prefetch(const struct arm64_cpu_capabilities *entry)
+static bool has_no_hw_prefetch(const struct arm64_cpu_capabilities *entry, int __unused)
{
u32 midr = read_cpuid_id();
u32 rv_min, rv_max;
@@ -660,7 +721,7 @@ static bool has_no_hw_prefetch(const struct arm64_cpu_capabilities *entry)
return MIDR_IS_CPU_MODEL_RANGE(midr, MIDR_THUNDERX, rv_min, rv_max);
}
-static bool runs_at_el2(const struct arm64_cpu_capabilities *entry)
+static bool runs_at_el2(const struct arm64_cpu_capabilities *entry, int __unused)
{
return is_kernel_in_hyp_mode();
}
@@ -669,6 +730,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
{
.desc = "GIC system register CPU interface",
.capability = ARM64_HAS_SYSREG_GIC_CPUIF,
+ .def_scope = SCOPE_SYSTEM,
.matches = has_useable_gicv3_cpuif,
.sys_reg = SYS_ID_AA64PFR0_EL1,
.field_pos = ID_AA64PFR0_GIC_SHIFT,
@@ -679,6 +741,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
{
.desc = "Privileged Access Never",
.capability = ARM64_HAS_PAN,
+ .def_scope = SCOPE_SYSTEM,
.matches = has_cpuid_feature,
.sys_reg = SYS_ID_AA64MMFR1_EL1,
.field_pos = ID_AA64MMFR1_PAN_SHIFT,
@@ -691,6 +754,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
{
.desc = "LSE atomic instructions",
.capability = ARM64_HAS_LSE_ATOMICS,
+ .def_scope = SCOPE_SYSTEM,
.matches = has_cpuid_feature,
.sys_reg = SYS_ID_AA64ISAR0_EL1,
.field_pos = ID_AA64ISAR0_ATOMICS_SHIFT,
@@ -701,12 +765,14 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
{
.desc = "Software prefetching using PRFM",
.capability = ARM64_HAS_NO_HW_PREFETCH,
+ .def_scope = SCOPE_SYSTEM,
.matches = has_no_hw_prefetch,
},
#ifdef CONFIG_ARM64_UAO
{
.desc = "User Access Override",
.capability = ARM64_HAS_UAO,
+ .def_scope = SCOPE_SYSTEM,
.matches = has_cpuid_feature,
.sys_reg = SYS_ID_AA64MMFR2_EL1,
.field_pos = ID_AA64MMFR2_UAO_SHIFT,
@@ -717,20 +783,33 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
#ifdef CONFIG_ARM64_PAN
{
.capability = ARM64_ALT_PAN_NOT_UAO,
+ .def_scope = SCOPE_SYSTEM,
.matches = cpufeature_pan_not_uao,
},
#endif /* CONFIG_ARM64_PAN */
{
.desc = "Virtualization Host Extensions",
.capability = ARM64_HAS_VIRT_HOST_EXTN,
+ .def_scope = SCOPE_SYSTEM,
.matches = runs_at_el2,
},
+ {
+ .desc = "32-bit EL0 Support",
+ .capability = ARM64_HAS_32BIT_EL0,
+ .def_scope = SCOPE_SYSTEM,
+ .matches = has_cpuid_feature,
+ .sys_reg = SYS_ID_AA64PFR0_EL1,
+ .sign = FTR_UNSIGNED,
+ .field_pos = ID_AA64PFR0_EL0_SHIFT,
+ .min_field_value = ID_AA64PFR0_EL0_32BIT_64BIT,
+ },
{},
};
#define HWCAP_CAP(reg, field, s, min_value, type, cap) \
{ \
.desc = #cap, \
+ .def_scope = SCOPE_SYSTEM, \
.matches = has_cpuid_feature, \
.sys_reg = reg, \
.field_pos = field, \
@@ -740,7 +819,7 @@ static const struct arm64_cpu_capabilities arm64_features[] = {
.hwcap = cap, \
}
-static const struct arm64_cpu_capabilities arm64_hwcaps[] = {
+static const struct arm64_cpu_capabilities arm64_elf_hwcaps[] = {
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, FTR_UNSIGNED, 2, CAP_HWCAP, HWCAP_PMULL),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_AES_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_AES),
HWCAP_CAP(SYS_ID_AA64ISAR0_EL1, ID_AA64ISAR0_SHA1_SHIFT, FTR_UNSIGNED, 1, CAP_HWCAP, HWCAP_SHA1),
@@ -751,6 +830,10 @@ static const struct arm64_cpu_capabilities arm64_hwcaps[] = {
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_FP_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_FPHP),
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 0, CAP_HWCAP, HWCAP_ASIMD),
HWCAP_CAP(SYS_ID_AA64PFR0_EL1, ID_AA64PFR0_ASIMD_SHIFT, FTR_SIGNED, 1, CAP_HWCAP, HWCAP_ASIMDHP),
+ {},
+};
+
+static const struct arm64_cpu_capabilities compat_elf_hwcaps[] = {
#ifdef CONFIG_COMPAT
HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_AES_SHIFT, FTR_UNSIGNED, 2, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_PMULL),
HWCAP_CAP(SYS_ID_ISAR5_EL1, ID_ISAR5_AES_SHIFT, FTR_UNSIGNED, 1, CAP_COMPAT_HWCAP2, COMPAT_HWCAP2_AES),
@@ -761,7 +844,7 @@ static const struct arm64_cpu_capabilities arm64_hwcaps[] = {
{},
};
-static void __init cap_set_hwcap(const struct arm64_cpu_capabilities *cap)
+static void __init cap_set_elf_hwcap(const struct arm64_cpu_capabilities *cap)
{
switch (cap->hwcap_type) {
case CAP_HWCAP:
@@ -782,7 +865,7 @@ static void __init cap_set_hwcap(const struct arm64_cpu_capabilities *cap)
}
/* Check if we have a particular HWCAP enabled */
-static bool __maybe_unused cpus_have_hwcap(const struct arm64_cpu_capabilities *cap)
+static bool cpus_have_elf_hwcap(const struct arm64_cpu_capabilities *cap)
{
bool rc;
@@ -806,28 +889,23 @@ static bool __maybe_unused cpus_have_hwcap(const struct arm64_cpu_capabilities *
return rc;
}
-static void __init setup_cpu_hwcaps(void)
+static void __init setup_elf_hwcaps(const struct arm64_cpu_capabilities *hwcaps)
{
- int i;
- const struct arm64_cpu_capabilities *hwcaps = arm64_hwcaps;
-
- for (i = 0; hwcaps[i].matches; i++)
- if (hwcaps[i].matches(&hwcaps[i]))
- cap_set_hwcap(&hwcaps[i]);
+ for (; hwcaps->matches; hwcaps++)
+ if (hwcaps->matches(hwcaps, hwcaps->def_scope))
+ cap_set_elf_hwcap(hwcaps);
}
void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
const char *info)
{
- int i;
-
- for (i = 0; caps[i].matches; i++) {
- if (!caps[i].matches(&caps[i]))
+ for (; caps->matches; caps++) {
+ if (!caps->matches(caps, caps->def_scope))
continue;
- if (!cpus_have_cap(caps[i].capability) && caps[i].desc)
- pr_info("%s %s\n", info, caps[i].desc);
- cpus_set_cap(caps[i].capability);
+ if (!cpus_have_cap(caps->capability) && caps->desc)
+ pr_info("%s %s\n", info, caps->desc);
+ cpus_set_cap(caps->capability);
}
}
@@ -838,11 +916,9 @@ void update_cpu_capabilities(const struct arm64_cpu_capabilities *caps,
static void __init
enable_cpu_capabilities(const struct arm64_cpu_capabilities *caps)
{
- int i;
-
- for (i = 0; caps[i].matches; i++)
- if (caps[i].enable && cpus_have_cap(caps[i].capability))
- on_each_cpu(caps[i].enable, NULL, true);
+ for (; caps->matches; caps++)
+ if (caps->enable && cpus_have_cap(caps->capability))
+ on_each_cpu(caps->enable, NULL, true);
}
/*
@@ -861,54 +937,45 @@ static inline void set_sys_caps_initialised(void)
}
/*
- * __raw_read_system_reg() - Used by a STARTING cpu before cpuinfo is populated.
+ * Check for CPU features that are used in early boot
+ * based on the Boot CPU value.
*/
-static u64 __raw_read_system_reg(u32 sys_id)
+static void check_early_cpu_features(void)
{
- switch (sys_id) {
- case SYS_ID_PFR0_EL1: return read_cpuid(ID_PFR0_EL1);
- case SYS_ID_PFR1_EL1: return read_cpuid(ID_PFR1_EL1);
- case SYS_ID_DFR0_EL1: return read_cpuid(ID_DFR0_EL1);
- case SYS_ID_MMFR0_EL1: return read_cpuid(ID_MMFR0_EL1);
- case SYS_ID_MMFR1_EL1: return read_cpuid(ID_MMFR1_EL1);
- case SYS_ID_MMFR2_EL1: return read_cpuid(ID_MMFR2_EL1);
- case SYS_ID_MMFR3_EL1: return read_cpuid(ID_MMFR3_EL1);
- case SYS_ID_ISAR0_EL1: return read_cpuid(ID_ISAR0_EL1);
- case SYS_ID_ISAR1_EL1: return read_cpuid(ID_ISAR1_EL1);
- case SYS_ID_ISAR2_EL1: return read_cpuid(ID_ISAR2_EL1);
- case SYS_ID_ISAR3_EL1: return read_cpuid(ID_ISAR3_EL1);
- case SYS_ID_ISAR4_EL1: return read_cpuid(ID_ISAR4_EL1);
- case SYS_ID_ISAR5_EL1: return read_cpuid(ID_ISAR4_EL1);
- case SYS_MVFR0_EL1: return read_cpuid(MVFR0_EL1);
- case SYS_MVFR1_EL1: return read_cpuid(MVFR1_EL1);
- case SYS_MVFR2_EL1: return read_cpuid(MVFR2_EL1);
+ verify_cpu_run_el();
+ verify_cpu_asid_bits();
+}
- case SYS_ID_AA64PFR0_EL1: return read_cpuid(ID_AA64PFR0_EL1);
- case SYS_ID_AA64PFR1_EL1: return read_cpuid(ID_AA64PFR0_EL1);
- case SYS_ID_AA64DFR0_EL1: return read_cpuid(ID_AA64DFR0_EL1);
- case SYS_ID_AA64DFR1_EL1: return read_cpuid(ID_AA64DFR0_EL1);
- case SYS_ID_AA64MMFR0_EL1: return read_cpuid(ID_AA64MMFR0_EL1);
- case SYS_ID_AA64MMFR1_EL1: return read_cpuid(ID_AA64MMFR1_EL1);
- case SYS_ID_AA64MMFR2_EL1: return read_cpuid(ID_AA64MMFR2_EL1);
- case SYS_ID_AA64ISAR0_EL1: return read_cpuid(ID_AA64ISAR0_EL1);
- case SYS_ID_AA64ISAR1_EL1: return read_cpuid(ID_AA64ISAR1_EL1);
+static void
+verify_local_elf_hwcaps(const struct arm64_cpu_capabilities *caps)
+{
- case SYS_CNTFRQ_EL0: return read_cpuid(CNTFRQ_EL0);
- case SYS_CTR_EL0: return read_cpuid(CTR_EL0);
- case SYS_DCZID_EL0: return read_cpuid(DCZID_EL0);
- default:
- BUG();
- return 0;
- }
+ for (; caps->matches; caps++)
+ if (cpus_have_elf_hwcap(caps) && !caps->matches(caps, SCOPE_LOCAL_CPU)) {
+ pr_crit("CPU%d: missing HWCAP: %s\n",
+ smp_processor_id(), caps->desc);
+ cpu_die_early();
+ }
}
-/*
- * Check for CPU features that are used in early boot
- * based on the Boot CPU value.
- */
-static void check_early_cpu_features(void)
+static void
+verify_local_cpu_features(const struct arm64_cpu_capabilities *caps)
{
- verify_cpu_asid_bits();
+ for (; caps->matches; caps++) {
+ if (!cpus_have_cap(caps->capability))
+ continue;
+ /*
+ * If the new CPU misses an advertised feature, we cannot proceed
+ * further; park the cpu.
+ */
+ if (!caps->matches(caps, SCOPE_LOCAL_CPU)) {
+ pr_crit("CPU%d: missing feature: %s\n",
+ smp_processor_id(), caps->desc);
+ cpu_die_early();
+ }
+ if (caps->enable)
+ caps->enable(NULL);
+ }
}
/*
@@ -921,8 +988,6 @@ static void check_early_cpu_features(void)
*/
void verify_local_cpu_capabilities(void)
{
- int i;
- const struct arm64_cpu_capabilities *caps;
check_early_cpu_features();
@@ -933,32 +998,11 @@ void verify_local_cpu_capabilities(void)
if (!sys_caps_initialised)
return;
- caps = arm64_features;
- for (i = 0; caps[i].matches; i++) {
- if (!cpus_have_cap(caps[i].capability) || !caps[i].sys_reg)
- continue;
- /*
- * If the new CPU misses an advertised feature, we cannot proceed
- * further, park the cpu.
- */
- if (!feature_matches(__raw_read_system_reg(caps[i].sys_reg), &caps[i])) {
- pr_crit("CPU%d: missing feature: %s\n",
- smp_processor_id(), caps[i].desc);
- cpu_die_early();
- }
- if (caps[i].enable)
- caps[i].enable(NULL);
- }
-
- for (i = 0, caps = arm64_hwcaps; caps[i].matches; i++) {
- if (!cpus_have_hwcap(&caps[i]))
- continue;
- if (!feature_matches(__raw_read_system_reg(caps[i].sys_reg), &caps[i])) {
- pr_crit("CPU%d: missing HWCAP: %s\n",
- smp_processor_id(), caps[i].desc);
- cpu_die_early();
- }
- }
+ verify_local_cpu_errata();
+ verify_local_cpu_features(arm64_features);
+ verify_local_elf_hwcaps(arm64_elf_hwcaps);
+ if (system_supports_32bit_el0())
+ verify_local_elf_hwcaps(compat_elf_hwcaps);
}
static void __init setup_feature_capabilities(void)
@@ -967,6 +1011,24 @@ static void __init setup_feature_capabilities(void)
enable_cpu_capabilities(arm64_features);
}
+/*
+ * Check if the current CPU has a given feature capability.
+ * Should be called from non-preemptible context.
+ */
+bool this_cpu_has_cap(unsigned int cap)
+{
+ const struct arm64_cpu_capabilities *caps;
+
+ if (WARN_ON(preemptible()))
+ return false;
+
+ for (caps = arm64_features; caps->desc; caps++)
+ if (caps->capability == cap && caps->matches)
+ return caps->matches(caps, SCOPE_LOCAL_CPU);
+
+ return false;
+}
+
void __init setup_cpu_features(void)
{
u32 cwg;
@@ -974,7 +1036,10 @@ void __init setup_cpu_features(void)
/* Set the CPU feature capabilies */
setup_feature_capabilities();
- setup_cpu_hwcaps();
+ setup_elf_hwcaps(arm64_elf_hwcaps);
+
+ if (system_supports_32bit_el0())
+ setup_elf_hwcaps(compat_elf_hwcaps);
/* Advertise that we have computed the system capabilities */
set_sys_caps_initialised();
@@ -993,7 +1058,7 @@ void __init setup_cpu_features(void)
}
static bool __maybe_unused
-cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry)
+cpufeature_pan_not_uao(const struct arm64_cpu_capabilities *entry, int __unused)
{
return (cpus_have_cap(ARM64_HAS_PAN) && !cpus_have_cap(ARM64_HAS_UAO));
}
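
The new this_cpu_has_cap() helper is only valid in non-preemptible context, since it matches against the current CPU's own registers; a hedged usage sketch (the capability chosen is just an example):

/* Sketch: query a capability strictly for the CPU we are running on. */
static bool example_this_cpu_has_pan(void)
{
	bool has_pan;

	preempt_disable();
	has_pan = this_cpu_has_cap(ARM64_HAS_PAN);
	preempt_enable();

	return has_pan;
}
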
diff --git a/arch/arm64/kernel/cpuidle.c b/arch/arm64/kernel/cpuidle.c
index 9047cab68fd3..e11857fce05f 100644
--- a/arch/arm64/kernel/cpuidle.c
+++ b/arch/arm64/kernel/cpuidle.c
@@ -19,7 +19,8 @@ int __init arm_cpuidle_init(unsigned int cpu)
{
int ret = -EOPNOTSUPP;
- if (cpu_ops[cpu] && cpu_ops[cpu]->cpu_init_idle)
+ if (cpu_ops[cpu] && cpu_ops[cpu]->cpu_suspend &&
+ cpu_ops[cpu]->cpu_init_idle)
ret = cpu_ops[cpu]->cpu_init_idle(cpu);
return ret;
@@ -36,11 +37,5 @@ int arm_cpuidle_suspend(int index)
{
int cpu = smp_processor_id();
- /*
- * If cpu_ops have not been registered or suspend
- * has not been initialized, cpu_suspend call fails early.
- */
- if (!cpu_ops[cpu] || !cpu_ops[cpu]->cpu_suspend)
- return -EOPNOTSUPP;
return cpu_ops[cpu]->cpu_suspend(index);
}
diff --git a/arch/arm64/kernel/cpuinfo.c b/arch/arm64/kernel/cpuinfo.c
index 84c8684431c7..3808470486f3 100644
--- a/arch/arm64/kernel/cpuinfo.c
+++ b/arch/arm64/kernel/cpuinfo.c
@@ -87,7 +87,8 @@ static const char *const compat_hwcap_str[] = {
"idivt",
"vfpd32",
"lpae",
- "evtstrm"
+ "evtstrm",
+ NULL
};
static const char *const compat_hwcap2_str[] = {
@@ -216,23 +217,26 @@ static void __cpuinfo_store_cpu(struct cpuinfo_arm64 *info)
info->reg_id_aa64pfr0 = read_cpuid(ID_AA64PFR0_EL1);
info->reg_id_aa64pfr1 = read_cpuid(ID_AA64PFR1_EL1);
- info->reg_id_dfr0 = read_cpuid(ID_DFR0_EL1);
- info->reg_id_isar0 = read_cpuid(ID_ISAR0_EL1);
- info->reg_id_isar1 = read_cpuid(ID_ISAR1_EL1);
- info->reg_id_isar2 = read_cpuid(ID_ISAR2_EL1);
- info->reg_id_isar3 = read_cpuid(ID_ISAR3_EL1);
- info->reg_id_isar4 = read_cpuid(ID_ISAR4_EL1);
- info->reg_id_isar5 = read_cpuid(ID_ISAR5_EL1);
- info->reg_id_mmfr0 = read_cpuid(ID_MMFR0_EL1);
- info->reg_id_mmfr1 = read_cpuid(ID_MMFR1_EL1);
- info->reg_id_mmfr2 = read_cpuid(ID_MMFR2_EL1);
- info->reg_id_mmfr3 = read_cpuid(ID_MMFR3_EL1);
- info->reg_id_pfr0 = read_cpuid(ID_PFR0_EL1);
- info->reg_id_pfr1 = read_cpuid(ID_PFR1_EL1);
-
- info->reg_mvfr0 = read_cpuid(MVFR0_EL1);
- info->reg_mvfr1 = read_cpuid(MVFR1_EL1);
- info->reg_mvfr2 = read_cpuid(MVFR2_EL1);
+ /* Update the 32bit ID registers only if AArch32 is implemented */
+ if (id_aa64pfr0_32bit_el0(info->reg_id_aa64pfr0)) {
+ info->reg_id_dfr0 = read_cpuid(ID_DFR0_EL1);
+ info->reg_id_isar0 = read_cpuid(ID_ISAR0_EL1);
+ info->reg_id_isar1 = read_cpuid(ID_ISAR1_EL1);
+ info->reg_id_isar2 = read_cpuid(ID_ISAR2_EL1);
+ info->reg_id_isar3 = read_cpuid(ID_ISAR3_EL1);
+ info->reg_id_isar4 = read_cpuid(ID_ISAR4_EL1);
+ info->reg_id_isar5 = read_cpuid(ID_ISAR5_EL1);
+ info->reg_id_mmfr0 = read_cpuid(ID_MMFR0_EL1);
+ info->reg_id_mmfr1 = read_cpuid(ID_MMFR1_EL1);
+ info->reg_id_mmfr2 = read_cpuid(ID_MMFR2_EL1);
+ info->reg_id_mmfr3 = read_cpuid(ID_MMFR3_EL1);
+ info->reg_id_pfr0 = read_cpuid(ID_PFR0_EL1);
+ info->reg_id_pfr1 = read_cpuid(ID_PFR1_EL1);
+
+ info->reg_mvfr0 = read_cpuid(MVFR0_EL1);
+ info->reg_mvfr1 = read_cpuid(MVFR1_EL1);
+ info->reg_mvfr2 = read_cpuid(MVFR2_EL1);
+ }
cpuinfo_detect_icache_policy(info);
diff --git a/arch/arm64/kernel/debug-monitors.c b/arch/arm64/kernel/debug-monitors.c
index c45f2968bc8c..4fbf3c54275c 100644
--- a/arch/arm64/kernel/debug-monitors.c
+++ b/arch/arm64/kernel/debug-monitors.c
@@ -135,9 +135,8 @@ static void clear_os_lock(void *unused)
static int os_lock_notify(struct notifier_block *self,
unsigned long action, void *data)
{
- int cpu = (unsigned long)data;
if ((action & ~CPU_TASKS_FROZEN) == CPU_ONLINE)
- smp_call_function_single(cpu, clear_os_lock, NULL, 1);
+ clear_os_lock(NULL);
return NOTIFY_OK;
}
diff --git a/arch/arm64/kernel/efi-entry.S b/arch/arm64/kernel/efi-entry.S
index cae3112f7791..e88c064b845c 100644
--- a/arch/arm64/kernel/efi-entry.S
+++ b/arch/arm64/kernel/efi-entry.S
@@ -62,7 +62,7 @@ ENTRY(entry)
*/
mov x20, x0 // DTB address
ldr x0, [sp, #16] // relocated _text address
- movz x21, #:abs_g0:stext_offset
+ ldr w21, =stext_offset
add x21, x0, x21
/*
diff --git a/arch/arm64/kernel/efi.c b/arch/arm64/kernel/efi.c
index b6abc852f2a1..78f52488f9ff 100644
--- a/arch/arm64/kernel/efi.c
+++ b/arch/arm64/kernel/efi.c
@@ -17,22 +17,51 @@
#include <asm/efi.h>
-int __init efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md)
+/*
+ * Only regions of type EFI_RUNTIME_SERVICES_CODE need to be
+ * executable, everything else can be mapped with the XN bits
+ * set. Also take the new (optional) RO/XP bits into account.
+ */
+static __init pteval_t create_mapping_protection(efi_memory_desc_t *md)
{
- pteval_t prot_val;
+ u64 attr = md->attribute;
+ u32 type = md->type;
- /*
- * Only regions of type EFI_RUNTIME_SERVICES_CODE need to be
- * executable, everything else can be mapped with the XN bits
- * set.
- */
- if ((md->attribute & EFI_MEMORY_WB) == 0)
- prot_val = PROT_DEVICE_nGnRE;
- else if (md->type == EFI_RUNTIME_SERVICES_CODE ||
- !PAGE_ALIGNED(md->phys_addr))
- prot_val = pgprot_val(PAGE_KERNEL_EXEC);
- else
- prot_val = pgprot_val(PAGE_KERNEL);
+ if (type == EFI_MEMORY_MAPPED_IO)
+ return PROT_DEVICE_nGnRE;
+
+ if (WARN_ONCE(!PAGE_ALIGNED(md->phys_addr),
+ "UEFI Runtime regions are not aligned to 64 KB -- buggy firmware?"))
+ /*
+ * If the region is not aligned to the page size of the OS, we
+ * can not use strict permissions, since that would also affect
+ * the mapping attributes of the adjacent regions.
+ */
+ return pgprot_val(PAGE_KERNEL_EXEC);
+
+ /* R-- */
+ if ((attr & (EFI_MEMORY_XP | EFI_MEMORY_RO)) ==
+ (EFI_MEMORY_XP | EFI_MEMORY_RO))
+ return pgprot_val(PAGE_KERNEL_RO);
+
+ /* R-X */
+ if (attr & EFI_MEMORY_RO)
+ return pgprot_val(PAGE_KERNEL_ROX);
+
+ /* RW- */
+ if (attr & EFI_MEMORY_XP || type != EFI_RUNTIME_SERVICES_CODE)
+ return pgprot_val(PAGE_KERNEL);
+
+ /* RWX */
+ return pgprot_val(PAGE_KERNEL_EXEC);
+}
+
+/* we will fill this structure from the stub, so don't put it in .bss */
+struct screen_info screen_info __section(.data);
+
+int __init efi_create_mapping(struct mm_struct *mm, efi_memory_desc_t *md)
+{
+ pteval_t prot_val = create_mapping_protection(md);
create_pgd_mapping(mm, md->phys_addr, md->virt_addr,
md->num_pages << EFI_PAGE_SHIFT,
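
Condensed, the decision tree in create_mapping_protection() above resolves a
UEFI region to a protection as follows (first match wins; "runtime code"
means EFI_RUNTIME_SERVICES_CODE):

	region / attributes                       ->  protection
	EFI_MEMORY_MAPPED_IO                      ->  PROT_DEVICE_nGnRE
	phys_addr not page aligned (warned)       ->  PAGE_KERNEL_EXEC  (RWX)
	EFI_MEMORY_XP and EFI_MEMORY_RO           ->  PAGE_KERNEL_RO    (R--)
	EFI_MEMORY_RO only                        ->  PAGE_KERNEL_ROX   (R-X)
	EFI_MEMORY_XP, or not runtime code        ->  PAGE_KERNEL       (RW-)
	otherwise (runtime code, no RO/XP hints)  ->  PAGE_KERNEL_EXEC  (RWX)
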
diff --git a/arch/arm64/kernel/head.S b/arch/arm64/kernel/head.S
index 85da0f599cd6..2c6e598a94dc 100644
--- a/arch/arm64/kernel/head.S
+++ b/arch/arm64/kernel/head.S
@@ -25,6 +25,7 @@
#include <linux/irqchip/arm-gic-v3.h>
#include <asm/assembler.h>
+#include <asm/boot.h>
#include <asm/ptrace.h>
#include <asm/asm-offsets.h>
#include <asm/cache.h>
@@ -51,9 +52,6 @@
#error TEXT_OFFSET must be less than 2MB
#endif
-#define KERNEL_START _text
-#define KERNEL_END _end
-
/*
* Kernel startup entry point.
* ---------------------------
@@ -102,8 +100,6 @@ _head:
#endif
#ifdef CONFIG_EFI
- .globl __efistub_stext_offset
- .set __efistub_stext_offset, stext - _head
.align 3
pe_header:
.ascii "PE"
@@ -123,11 +119,11 @@ optional_header:
.short 0x20b // PE32+ format
.byte 0x02 // MajorLinkerVersion
.byte 0x14 // MinorLinkerVersion
- .long _end - stext // SizeOfCode
+ .long _end - efi_header_end // SizeOfCode
.long 0 // SizeOfInitializedData
.long 0 // SizeOfUninitializedData
.long __efistub_entry - _head // AddressOfEntryPoint
- .long __efistub_stext_offset // BaseOfCode
+ .long efi_header_end - _head // BaseOfCode
extra_header_fields:
.quad 0 // ImageBase
@@ -144,7 +140,7 @@ extra_header_fields:
.long _end - _head // SizeOfImage
// Everything before the kernel image is considered part of the header
- .long __efistub_stext_offset // SizeOfHeaders
+ .long efi_header_end - _head // SizeOfHeaders
.long 0 // CheckSum
.short 0xa // Subsystem (EFI application)
.short 0 // DllCharacteristics
@@ -188,10 +184,10 @@ section_table:
.byte 0
.byte 0
.byte 0 // end of 0 padding of section name
- .long _end - stext // VirtualSize
- .long __efistub_stext_offset // VirtualAddress
- .long _edata - stext // SizeOfRawData
- .long __efistub_stext_offset // PointerToRawData
+ .long _end - efi_header_end // VirtualSize
+ .long efi_header_end - _head // VirtualAddress
+ .long _edata - efi_header_end // SizeOfRawData
+ .long efi_header_end - _head // PointerToRawData
.long 0 // PointerToRelocations (0 for executables)
.long 0 // PointerToLineNumbers (0 for executables)
@@ -200,20 +196,23 @@ section_table:
.long 0xe0500020 // Characteristics (section flags)
/*
- * EFI will load stext onwards at the 4k section alignment
+ * EFI will load .text onwards at the 4k section alignment
* described in the PE/COFF header. To ensure that instruction
* sequences using an adrp and a :lo12: immediate will function
- * correctly at this alignment, we must ensure that stext is
+ * correctly at this alignment, we must ensure that .text is
* placed at a 4k boundary in the Image to begin with.
*/
.align 12
+efi_header_end:
#endif
+ __INIT
+
ENTRY(stext)
bl preserve_boot_args
bl el2_setup // Drop to EL1, w20=cpu_boot_mode
- mov x23, xzr // KASLR offset, defaults to 0
adrp x24, __PHYS_OFFSET
+ and x23, x24, MIN_KIMG_ALIGN - 1 // KASLR offset, defaults to 0
bl set_cpu_boot_mode_flag
bl __create_page_tables // x25=TTBR0, x26=TTBR1
/*
@@ -222,13 +221,11 @@ ENTRY(stext)
* On return, the CPU will be ready for the MMU to be turned on and
* the TCR will have been set.
*/
- ldr x27, 0f // address to jump to after
+ bl __cpu_setup // initialise processor
+ adr_l x27, __primary_switch // address to jump to after
// MMU has been enabled
- adr_l lr, __enable_mmu // return (PIC) address
- b __cpu_setup // initialise processor
+ b __enable_mmu
ENDPROC(stext)
- .align 3
-0: .quad __mmap_switched - (_head - TEXT_OFFSET) + KIMAGE_VADDR
/*
* Preserve the arguments passed by the bootloader in x0 .. x3
@@ -338,7 +335,7 @@ __create_page_tables:
cmp x0, x6
b.lo 1b
- ldr x7, =SWAPPER_MM_MMUFLAGS
+ mov x7, SWAPPER_MM_MMUFLAGS
/*
* Create the identity mapping.
@@ -394,12 +391,13 @@ __create_page_tables:
* Map the kernel image (starting with PHYS_OFFSET).
*/
mov x0, x26 // swapper_pg_dir
- ldr x5, =KIMAGE_VADDR
+ mov_q x5, KIMAGE_VADDR + TEXT_OFFSET // compile time __va(_text)
add x5, x5, x23 // add KASLR displacement
create_pgd_entry x0, x5, x3, x6
- ldr w6, kernel_img_size
- add x6, x6, x5
- mov x3, x24 // phys offset
+ adrp x6, _end // runtime __pa(_end)
+ adrp x3, _text // runtime __pa(_text)
+ sub x6, x6, x3 // _end - _text
+ add x6, x6, x5 // runtime __va(_end)
create_block_map x0, x7, x3, x5, x6
/*
@@ -414,16 +412,13 @@ __create_page_tables:
ret x28
ENDPROC(__create_page_tables)
-
-kernel_img_size:
- .long _end - (_head - TEXT_OFFSET)
.ltorg
/*
* The following fragment of code is executed with the MMU enabled.
*/
.set initial_sp, init_thread_union + THREAD_START_SP
-__mmap_switched:
+__primary_switched:
mov x28, lr // preserve LR
adr_l x8, vectors // load VBAR_EL1 with virtual
msr vbar_el1, x8 // vector table address
@@ -437,44 +432,6 @@ __mmap_switched:
bl __pi_memset
dsb ishst // Make zero page visible to PTW
-#ifdef CONFIG_RELOCATABLE
-
- /*
- * Iterate over each entry in the relocation table, and apply the
- * relocations in place.
- */
- adr_l x8, __dynsym_start // start of symbol table
- adr_l x9, __reloc_start // start of reloc table
- adr_l x10, __reloc_end // end of reloc table
-
-0: cmp x9, x10
- b.hs 2f
- ldp x11, x12, [x9], #24
- ldr x13, [x9, #-8]
- cmp w12, #R_AARCH64_RELATIVE
- b.ne 1f
- add x13, x13, x23 // relocate
- str x13, [x11, x23]
- b 0b
-
-1: cmp w12, #R_AARCH64_ABS64
- b.ne 0b
- add x12, x12, x12, lsl #1 // symtab offset: 24x top word
- add x12, x8, x12, lsr #(32 - 3) // ... shifted into bottom word
- ldrsh w14, [x12, #6] // Elf64_Sym::st_shndx
- ldr x15, [x12, #8] // Elf64_Sym::st_value
- cmp w14, #-0xf // SHN_ABS (0xfff1) ?
- add x14, x15, x23 // relocate
- csel x15, x14, x15, ne
- add x15, x13, x15
- str x15, [x11, x23]
- b 0b
-
-2: adr_l x8, kimage_vaddr // make relocated kimage_vaddr
- dc cvac, x8 // value visible to secondaries
- dsb sy // with MMU off
-#endif
-
adr_l sp, initial_sp, x4
mov x4, sp
and x4, x4, #~(THREAD_SIZE - 1)
@@ -490,17 +447,19 @@ __mmap_switched:
bl kasan_early_init
#endif
#ifdef CONFIG_RANDOMIZE_BASE
- cbnz x23, 0f // already running randomized?
+ tst x23, ~(MIN_KIMG_ALIGN - 1) // already running randomized?
+ b.ne 0f
mov x0, x21 // pass FDT address in x0
+ mov x1, x23 // pass modulo offset in x1
bl kaslr_early_init // parse FDT for KASLR options
cbz x0, 0f // KASLR disabled? just proceed
- mov x23, x0 // record KASLR offset
+ orr x23, x23, x0 // record KASLR offset
ret x28 // we must enable KASLR, return
// to __enable_mmu()
0:
#endif
b start_kernel
-ENDPROC(__mmap_switched)
+ENDPROC(__primary_switched)
/*
* end early head section, begin head code that is also used for
@@ -650,7 +609,7 @@ ENDPROC(el2_setup)
* Sets the __boot_cpu_mode flag depending on the CPU boot mode passed
* in x20. See arch/arm64/include/asm/virt.h for more info.
*/
-ENTRY(set_cpu_boot_mode_flag)
+set_cpu_boot_mode_flag:
adr_l x1, __boot_cpu_mode
cmp w20, #BOOT_CPU_MODE_EL2
b.ne 1f
@@ -683,7 +642,7 @@ ENTRY(secondary_holding_pen)
bl el2_setup // Drop to EL1, w20=cpu_boot_mode
bl set_cpu_boot_mode_flag
mrs x0, mpidr_el1
- ldr x1, =MPIDR_HWID_BITMASK
+ mov_q x1, MPIDR_HWID_BITMASK
and x0, x0, x1
adr_l x3, secondary_holding_pen_release
pen: ldr x4, [x3]
@@ -703,7 +662,7 @@ ENTRY(secondary_entry)
b secondary_startup
ENDPROC(secondary_entry)
-ENTRY(secondary_startup)
+secondary_startup:
/*
* Common entry point for secondary CPUs.
*/
@@ -711,14 +670,11 @@ ENTRY(secondary_startup)
adrp x26, swapper_pg_dir
bl __cpu_setup // initialise processor
- ldr x8, kimage_vaddr
- ldr w9, 0f
- sub x27, x8, w9, sxtw // address to jump to after enabling the MMU
+ adr_l x27, __secondary_switch // address to jump to after enabling the MMU
b __enable_mmu
ENDPROC(secondary_startup)
-0: .long (_text - TEXT_OFFSET) - __secondary_switched
-ENTRY(__secondary_switched)
+__secondary_switched:
adr_l x5, vectors
msr vbar_el1, x5
isb
@@ -768,7 +724,7 @@ ENTRY(__early_cpu_boot_status)
* If it isn't, park the CPU
*/
.section ".idmap.text", "ax"
-__enable_mmu:
+ENTRY(__enable_mmu)
mrs x22, sctlr_el1 // preserve old SCTLR_EL1 value
mrs x1, ID_AA64MMFR0_EL1
ubfx x2, x1, #ID_AA64MMFR0_TGRAN_SHIFT, 4
@@ -806,7 +762,6 @@ __enable_mmu:
ic iallu // flush instructions fetched
dsb nsh // via old mapping
isb
- add x27, x27, x23 // relocated __mmap_switched
#endif
br x27
ENDPROC(__enable_mmu)
@@ -819,3 +774,53 @@ __no_granule_support:
wfi
b 1b
ENDPROC(__no_granule_support)
+
+__primary_switch:
+#ifdef CONFIG_RELOCATABLE
+ /*
+ * Iterate over each entry in the relocation table, and apply the
+ * relocations in place.
+ */
+ ldr w8, =__dynsym_offset // offset to symbol table
+ ldr w9, =__rela_offset // offset to reloc table
+ ldr w10, =__rela_size // size of reloc table
+
+ mov_q x11, KIMAGE_VADDR // default virtual offset
+ add x11, x11, x23 // actual virtual offset
+ add x8, x8, x11 // __va(.dynsym)
+ add x9, x9, x11 // __va(.rela)
+ add x10, x9, x10 // __va(.rela) + sizeof(.rela)
+
+0: cmp x9, x10
+ b.hs 2f
+ ldp x11, x12, [x9], #24
+ ldr x13, [x9, #-8]
+ cmp w12, #R_AARCH64_RELATIVE
+ b.ne 1f
+ add x13, x13, x23 // relocate
+ str x13, [x11, x23]
+ b 0b
+
+1: cmp w12, #R_AARCH64_ABS64
+ b.ne 0b
+ add x12, x12, x12, lsl #1 // symtab offset: 24x top word
+ add x12, x8, x12, lsr #(32 - 3) // ... shifted into bottom word
+ ldrsh w14, [x12, #6] // Elf64_Sym::st_shndx
+ ldr x15, [x12, #8] // Elf64_Sym::st_value
+ cmp w14, #-0xf // SHN_ABS (0xfff1) ?
+ add x14, x15, x23 // relocate
+ csel x15, x14, x15, ne
+ add x15, x13, x15
+ str x15, [x11, x23]
+ b 0b
+
+2:
+#endif
+ ldr x8, =__primary_switched
+ br x8
+ENDPROC(__primary_switch)
+
+__secondary_switch:
+ ldr x8, =__secondary_switched
+ br x8
+ENDPROC(__secondary_switch)
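
For reference, the relocation loop added to __primary_switch above is the
logic sketched below in C, moved out of __mmap_switched so that it now runs
with the MMU on, immediately before the branch to the relocated
__primary_switched. The sketch is illustrative (the function name is not from
the patch); it mirrors the assembly: R_AARCH64_RELATIVE entries get the KASLR
displacement added to their addend, and R_AARCH64_ABS64 entries additionally
relocate the referenced symbol unless it is SHN_ABS.

#include <elf.h>	/* Elf64_Rela, Elf64_Sym, ELF64_R_*, R_AARCH64_* */

static void apply_relocations(Elf64_Rela *rela, Elf64_Rela *rela_end,
			      Elf64_Sym *symtab, unsigned long offset)
{
	for (; rela < rela_end; rela++) {
		/* x11/x13 in the assembly: the place to patch and the addend */
		unsigned long *place = (unsigned long *)(rela->r_offset + offset);
		unsigned long val = rela->r_addend;

		if (ELF64_R_TYPE(rela->r_info) == R_AARCH64_RELATIVE) {
			*place = val + offset;
		} else if (ELF64_R_TYPE(rela->r_info) == R_AARCH64_ABS64) {
			Elf64_Sym *sym = &symtab[ELF64_R_SYM(rela->r_info)];
			unsigned long sval = sym->st_value;

			if (sym->st_shndx != SHN_ABS)
				sval += offset;		/* relocate the symbol too */
			*place = val + sval;
		}
	}
}
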
diff --git a/arch/arm64/kernel/hibernate-asm.S b/arch/arm64/kernel/hibernate-asm.S
new file mode 100644
index 000000000000..46f29b6560ec
--- /dev/null
+++ b/arch/arm64/kernel/hibernate-asm.S
@@ -0,0 +1,176 @@
+/*
+ * Hibernate low-level support
+ *
+ * Copyright (C) 2016 ARM Ltd.
+ * Author: James Morse <james.morse@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+#include <linux/linkage.h>
+#include <linux/errno.h>
+
+#include <asm/asm-offsets.h>
+#include <asm/assembler.h>
+#include <asm/cputype.h>
+#include <asm/memory.h>
+#include <asm/page.h>
+#include <asm/virt.h>
+
+/*
+ * To prevent the possibility of old and new partial table walks being visible
+ * in the tlb, switch the ttbr to a zero page when we invalidate the old
+ * records. D4.7.1 'General TLB maintenance requirements' in ARM DDI 0487A.i
+ * Even switching to our copied tables will cause a changed output address at
+ * each stage of the walk.
+ */
+.macro break_before_make_ttbr_switch zero_page, page_table
+ msr ttbr1_el1, \zero_page
+ isb
+ tlbi vmalle1is
+ dsb ish
+ msr ttbr1_el1, \page_table
+ isb
+.endm
+
+
+/*
+ * Resume from hibernate
+ *
+ * Loads temporary page tables then restores the memory image.
+ * Finally branches to cpu_resume() to restore the state saved by
+ * swsusp_arch_suspend().
+ *
+ * Because this code has to be copied to a 'safe' page, it can't call out to
+ * other functions by PC-relative address. Also remember that it may be
+ * mid-way through over-writing other functions. For this reason it contains
+ * code from flush_icache_range() and uses the copy_page() macro.
+ *
+ * This 'safe' page is mapped via ttbr0, and executed from there. This function
+ * switches to a copy of the linear map in ttbr1, performs the restore, then
+ * switches ttbr1 to the original kernel's swapper_pg_dir.
+ *
+ * All of memory gets written to, including code. We need to clean the kernel
+ * text to the Point of Coherence (PoC) before secondary cores can be booted.
+ * Because the kernel modules and executable pages mapped to user space are
+ * also written as data, we clean all pages we touch to the Point of
+ * Unification (PoU).
+ *
+ * x0: physical address of temporary page tables
+ * x1: physical address of swapper page tables
+ * x2: address of cpu_resume
+ * x3: linear map address of restore_pblist in the current kernel
+ * x4: physical address of __hyp_stub_vectors, or 0
+ * x5: physical address of a zero page that remains zero after resume
+ */
+.pushsection ".hibernate_exit.text", "ax"
+ENTRY(swsusp_arch_suspend_exit)
+ /*
+ * We execute from ttbr0, change ttbr1 to our copied linear map tables
+ * with a break-before-make via the zero page
+ */
+ break_before_make_ttbr_switch x5, x0
+
+ mov x21, x1
+ mov x30, x2
+ mov x24, x4
+ mov x25, x5
+
+ /* walk the restore_pblist and use copy_page() to over-write memory */
+ mov x19, x3
+
+1: ldr x10, [x19, #HIBERN_PBE_ORIG]
+ mov x0, x10
+ ldr x1, [x19, #HIBERN_PBE_ADDR]
+
+ copy_page x0, x1, x2, x3, x4, x5, x6, x7, x8, x9
+
+ add x1, x10, #PAGE_SIZE
+ /* Clean the copied page to PoU - based on flush_icache_range() */
+ dcache_line_size x2, x3
+ sub x3, x2, #1
+ bic x4, x10, x3
+2: dc cvau, x4 /* clean D line / unified line */
+ add x4, x4, x2
+ cmp x4, x1
+ b.lo 2b
+
+ ldr x19, [x19, #HIBERN_PBE_NEXT]
+ cbnz x19, 1b
+ dsb ish /* wait for PoU cleaning to finish */
+
+ /* switch to the restored kernels page tables */
+ break_before_make_ttbr_switch x25, x21
+
+ ic ialluis
+ dsb ish
+ isb
+
+ cbz x24, 3f /* Do we need to re-initialise EL2? */
+ hvc #0
+3: ret
+
+ .ltorg
+ENDPROC(swsusp_arch_suspend_exit)
+
+/*
+ * Restore the hyp stub.
+ * This must be done before the hibernate page is unmapped by _cpu_resume(),
+ * but happens before any of the hyp-stub's code is cleaned to PoC.
+ *
+ * x24: The physical address of __hyp_stub_vectors
+ */
+el1_sync:
+ msr vbar_el2, x24
+ eret
+ENDPROC(el1_sync)
+
+.macro invalid_vector label
+\label:
+ b \label
+ENDPROC(\label)
+.endm
+
+ invalid_vector el2_sync_invalid
+ invalid_vector el2_irq_invalid
+ invalid_vector el2_fiq_invalid
+ invalid_vector el2_error_invalid
+ invalid_vector el1_sync_invalid
+ invalid_vector el1_irq_invalid
+ invalid_vector el1_fiq_invalid
+ invalid_vector el1_error_invalid
+
+/* el2 vectors - switch el2 here while we restore the memory image. */
+ .align 11
+ENTRY(hibernate_el2_vectors)
+ ventry el2_sync_invalid // Synchronous EL2t
+ ventry el2_irq_invalid // IRQ EL2t
+ ventry el2_fiq_invalid // FIQ EL2t
+ ventry el2_error_invalid // Error EL2t
+
+ ventry el2_sync_invalid // Synchronous EL2h
+ ventry el2_irq_invalid // IRQ EL2h
+ ventry el2_fiq_invalid // FIQ EL2h
+ ventry el2_error_invalid // Error EL2h
+
+ ventry el1_sync // Synchronous 64-bit EL1
+ ventry el1_irq_invalid // IRQ 64-bit EL1
+ ventry el1_fiq_invalid // FIQ 64-bit EL1
+ ventry el1_error_invalid // Error 64-bit EL1
+
+ ventry el1_sync_invalid // Synchronous 32-bit EL1
+ ventry el1_irq_invalid // IRQ 32-bit EL1
+ ventry el1_fiq_invalid // FIQ 32-bit EL1
+ ventry el1_error_invalid // Error 32-bit EL1
+END(hibernate_el2_vectors)
+
+.popsection
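
The restore loop in swsusp_arch_suspend_exit() above walks the restore_pblist
built by the hibernate core, copying every saved page back over its original
location and cleaning it to the PoU. A rough C equivalent is sketched below
(illustrative only -- the real code must run from the safe page with no
out-of-line calls, which is why it is hand-written assembly;
flush_icache_range() stands in for the open-coded "dc cvau" loop plus the
later "ic ialluis"):

#include <linux/string.h>
#include <linux/suspend.h>	/* struct pbe, restore_pblist */
#include <asm/cacheflush.h>
#include <asm/page.h>

static void restore_image_pages(void)
{
	struct pbe *pbe;

	for (pbe = restore_pblist; pbe; pbe = pbe->next) {
		/* pbe->address is the saved copy, pbe->orig_address the target */
		memcpy(pbe->orig_address, pbe->address, PAGE_SIZE);
		flush_icache_range((unsigned long)pbe->orig_address,
				   (unsigned long)pbe->orig_address + PAGE_SIZE);
	}
}
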
diff --git a/arch/arm64/kernel/hibernate.c b/arch/arm64/kernel/hibernate.c
new file mode 100644
index 000000000000..f8df75d740f4
--- /dev/null
+++ b/arch/arm64/kernel/hibernate.c
@@ -0,0 +1,487 @@
+/*
+ * Hibernate support specific for ARM64
+ *
+ * Derived from work on ARM hibernation support by:
+ *
+ * Ubuntu project, hibernation support for mach-dove
+ * Copyright (C) 2010 Nokia Corporation (Hiroshi Doyu)
+ * Copyright (C) 2010 Texas Instruments, Inc. (Teerth Reddy et al.)
+ * https://lkml.org/lkml/2010/6/18/4
+ * https://lists.linux-foundation.org/pipermail/linux-pm/2010-June/027422.html
+ * https://patchwork.kernel.org/patch/96442/
+ *
+ * Copyright (C) 2006 Rafael J. Wysocki <rjw@sisk.pl>
+ *
+ * License terms: GNU General Public License (GPL) version 2
+ */
+#define pr_fmt(x) "hibernate: " x
+#include <linux/kvm_host.h>
+#include <linux/mm.h>
+#include <linux/notifier.h>
+#include <linux/pm.h>
+#include <linux/sched.h>
+#include <linux/suspend.h>
+#include <linux/utsname.h>
+#include <linux/version.h>
+
+#include <asm/barrier.h>
+#include <asm/cacheflush.h>
+#include <asm/irqflags.h>
+#include <asm/memory.h>
+#include <asm/mmu_context.h>
+#include <asm/pgalloc.h>
+#include <asm/pgtable.h>
+#include <asm/pgtable-hwdef.h>
+#include <asm/sections.h>
+#include <asm/suspend.h>
+#include <asm/virt.h>
+
+/*
+ * Hibernate core relies on this value being 0 on resume, and marks it
+ * __nosavedata assuming it will keep the resume kernel's '0' value. This
+ * doesn't happen with KASLR.
+ *
+ * defined as "__visible int in_suspend __nosavedata" in
+ * kernel/power/hibernate.c
+ */
+extern int in_suspend;
+
+/* Find a symbol's alias in the linear map */
+#define LMADDR(x) phys_to_virt(virt_to_phys(x))
+
+/* Do we need to reset el2? */
+#define el2_reset_needed() (is_hyp_mode_available() && !is_kernel_in_hyp_mode())
+
+/*
+ * Start/end of the hibernate exit code, this must be copied to a 'safe'
+ * location in memory, and executed from there.
+ */
+extern char __hibernate_exit_text_start[], __hibernate_exit_text_end[];
+
+/* temporary el2 vectors in the __hibernate_exit_text section. */
+extern char hibernate_el2_vectors[];
+
+/* hyp-stub vectors, used to restore el2 during resume from hibernate. */
+extern char __hyp_stub_vectors[];
+
+/*
+ * Values that may not change over hibernate/resume. We put the build number
+ * and date in here so that we guarantee not to resume with a different
+ * kernel.
+ */
+struct arch_hibernate_hdr_invariants {
+ char uts_version[__NEW_UTS_LEN + 1];
+};
+
+/* These values need to be known across a hibernate/restore. */
+static struct arch_hibernate_hdr {
+ struct arch_hibernate_hdr_invariants invariants;
+
+ /* These are needed to find the relocated kernel if built with kaslr */
+ phys_addr_t ttbr1_el1;
+ void (*reenter_kernel)(void);
+
+ /*
+ * We need to know where the __hyp_stub_vectors are after restore to
+ * re-configure el2.
+ */
+ phys_addr_t __hyp_stub_vectors;
+} resume_hdr;
+
+static inline void arch_hdr_invariants(struct arch_hibernate_hdr_invariants *i)
+{
+ memset(i, 0, sizeof(*i));
+ memcpy(i->uts_version, init_utsname()->version, sizeof(i->uts_version));
+}
+
+int pfn_is_nosave(unsigned long pfn)
+{
+ unsigned long nosave_begin_pfn = virt_to_pfn(&__nosave_begin);
+ unsigned long nosave_end_pfn = virt_to_pfn(&__nosave_end - 1);
+
+ return (pfn >= nosave_begin_pfn) && (pfn <= nosave_end_pfn);
+}
+
+void notrace save_processor_state(void)
+{
+ WARN_ON(num_online_cpus() != 1);
+}
+
+void notrace restore_processor_state(void)
+{
+}
+
+int arch_hibernation_header_save(void *addr, unsigned int max_size)
+{
+ struct arch_hibernate_hdr *hdr = addr;
+
+ if (max_size < sizeof(*hdr))
+ return -EOVERFLOW;
+
+ arch_hdr_invariants(&hdr->invariants);
+ hdr->ttbr1_el1 = virt_to_phys(swapper_pg_dir);
+ hdr->reenter_kernel = _cpu_resume;
+
+ /* We can't use __hyp_get_vectors() because kvm may still be loaded */
+ if (el2_reset_needed())
+ hdr->__hyp_stub_vectors = virt_to_phys(__hyp_stub_vectors);
+ else
+ hdr->__hyp_stub_vectors = 0;
+
+ return 0;
+}
+EXPORT_SYMBOL(arch_hibernation_header_save);
+
+int arch_hibernation_header_restore(void *addr)
+{
+ struct arch_hibernate_hdr_invariants invariants;
+ struct arch_hibernate_hdr *hdr = addr;
+
+ arch_hdr_invariants(&invariants);
+ if (memcmp(&hdr->invariants, &invariants, sizeof(invariants))) {
+ pr_crit("Hibernate image not generated by this kernel!\n");
+ return -EINVAL;
+ }
+
+ resume_hdr = *hdr;
+
+ return 0;
+}
+EXPORT_SYMBOL(arch_hibernation_header_restore);
+
+/*
+ * Copies length bytes, starting at src_start, into a new page, performs
+ * cache maintenance, then maps it at the specified low address as
+ * executable.
+ *
+ * This is used by hibernate to copy the code it needs to execute when
+ * overwriting the kernel text. This function generates a new set of page
+ * tables, which it loads into ttbr0.
+ *
+ * Length is provided as we probably only want 4K of data, even on a 64K
+ * page system.
+ */
+static int create_safe_exec_page(void *src_start, size_t length,
+ unsigned long dst_addr,
+ phys_addr_t *phys_dst_addr,
+ void *(*allocator)(gfp_t mask),
+ gfp_t mask)
+{
+ int rc = 0;
+ pgd_t *pgd;
+ pud_t *pud;
+ pmd_t *pmd;
+ pte_t *pte;
+ unsigned long dst = (unsigned long)allocator(mask);
+
+ if (!dst) {
+ rc = -ENOMEM;
+ goto out;
+ }
+
+ memcpy((void *)dst, src_start, length);
+ flush_icache_range(dst, dst + length);
+
+ pgd = pgd_offset_raw(allocator(mask), dst_addr);
+ if (pgd_none(*pgd)) {
+ pud = allocator(mask);
+ if (!pud) {
+ rc = -ENOMEM;
+ goto out;
+ }
+ pgd_populate(&init_mm, pgd, pud);
+ }
+
+ pud = pud_offset(pgd, dst_addr);
+ if (pud_none(*pud)) {
+ pmd = allocator(mask);
+ if (!pmd) {
+ rc = -ENOMEM;
+ goto out;
+ }
+ pud_populate(&init_mm, pud, pmd);
+ }
+
+ pmd = pmd_offset(pud, dst_addr);
+ if (pmd_none(*pmd)) {
+ pte = allocator(mask);
+ if (!pte) {
+ rc = -ENOMEM;
+ goto out;
+ }
+ pmd_populate_kernel(&init_mm, pmd, pte);
+ }
+
+ pte = pte_offset_kernel(pmd, dst_addr);
+ set_pte(pte, __pte(virt_to_phys((void *)dst) |
+ pgprot_val(PAGE_KERNEL_EXEC)));
+
+ /* Load our new page tables */
+ asm volatile("msr ttbr0_el1, %0;"
+ "isb;"
+ "tlbi vmalle1is;"
+ "dsb ish;"
+ "isb" : : "r"(virt_to_phys(pgd)));
+
+ *phys_dst_addr = virt_to_phys((void *)dst);
+
+out:
+ return rc;
+}
+
+
+int swsusp_arch_suspend(void)
+{
+ int ret = 0;
+ unsigned long flags;
+ struct sleep_stack_data state;
+
+ local_dbg_save(flags);
+
+ if (__cpu_suspend_enter(&state)) {
+ ret = swsusp_save();
+ } else {
+ /* Clean kernel to PoC for secondary core startup */
+ __flush_dcache_area(LMADDR(KERNEL_START), KERNEL_END - KERNEL_START);
+
+ /*
+ * Tell the hibernation core that we've just restored
+ * the memory
+ */
+ in_suspend = 0;
+
+ __cpu_suspend_exit();
+ }
+
+ local_dbg_restore(flags);
+
+ return ret;
+}
+
+static int copy_pte(pmd_t *dst_pmd, pmd_t *src_pmd, unsigned long start,
+ unsigned long end)
+{
+ pte_t *src_pte;
+ pte_t *dst_pte;
+ unsigned long addr = start;
+
+ dst_pte = (pte_t *)get_safe_page(GFP_ATOMIC);
+ if (!dst_pte)
+ return -ENOMEM;
+ pmd_populate_kernel(&init_mm, dst_pmd, dst_pte);
+ dst_pte = pte_offset_kernel(dst_pmd, start);
+
+ src_pte = pte_offset_kernel(src_pmd, start);
+ do {
+ if (!pte_none(*src_pte))
+ /*
+ * Resume will overwrite areas that may be marked
+ * read only (code, rodata). Clear the RDONLY bit from
+ * the temporary mappings we use during restore.
+ */
+ set_pte(dst_pte, __pte(pte_val(*src_pte) & ~PTE_RDONLY));
+ } while (dst_pte++, src_pte++, addr += PAGE_SIZE, addr != end);
+
+ return 0;
+}
+
+static int copy_pmd(pud_t *dst_pud, pud_t *src_pud, unsigned long start,
+ unsigned long end)
+{
+ pmd_t *src_pmd;
+ pmd_t *dst_pmd;
+ unsigned long next;
+ unsigned long addr = start;
+
+ if (pud_none(*dst_pud)) {
+ dst_pmd = (pmd_t *)get_safe_page(GFP_ATOMIC);
+ if (!dst_pmd)
+ return -ENOMEM;
+ pud_populate(&init_mm, dst_pud, dst_pmd);
+ }
+ dst_pmd = pmd_offset(dst_pud, start);
+
+ src_pmd = pmd_offset(src_pud, start);
+ do {
+ next = pmd_addr_end(addr, end);
+ if (pmd_none(*src_pmd))
+ continue;
+ if (pmd_table(*src_pmd)) {
+ if (copy_pte(dst_pmd, src_pmd, addr, next))
+ return -ENOMEM;
+ } else {
+ set_pmd(dst_pmd,
+ __pmd(pmd_val(*src_pmd) & ~PMD_SECT_RDONLY));
+ }
+ } while (dst_pmd++, src_pmd++, addr = next, addr != end);
+
+ return 0;
+}
+
+static int copy_pud(pgd_t *dst_pgd, pgd_t *src_pgd, unsigned long start,
+ unsigned long end)
+{
+ pud_t *dst_pud;
+ pud_t *src_pud;
+ unsigned long next;
+ unsigned long addr = start;
+
+ if (pgd_none(*dst_pgd)) {
+ dst_pud = (pud_t *)get_safe_page(GFP_ATOMIC);
+ if (!dst_pud)
+ return -ENOMEM;
+ pgd_populate(&init_mm, dst_pgd, dst_pud);
+ }
+ dst_pud = pud_offset(dst_pgd, start);
+
+ src_pud = pud_offset(src_pgd, start);
+ do {
+ next = pud_addr_end(addr, end);
+ if (pud_none(*src_pud))
+ continue;
+ if (pud_table(*(src_pud))) {
+ if (copy_pmd(dst_pud, src_pud, addr, next))
+ return -ENOMEM;
+ } else {
+ set_pud(dst_pud,
+ __pud(pud_val(*src_pud) & ~PMD_SECT_RDONLY));
+ }
+ } while (dst_pud++, src_pud++, addr = next, addr != end);
+
+ return 0;
+}
+
+static int copy_page_tables(pgd_t *dst_pgd, unsigned long start,
+ unsigned long end)
+{
+ unsigned long next;
+ unsigned long addr = start;
+ pgd_t *src_pgd = pgd_offset_k(start);
+
+ dst_pgd = pgd_offset_raw(dst_pgd, start);
+ do {
+ next = pgd_addr_end(addr, end);
+ if (pgd_none(*src_pgd))
+ continue;
+ if (copy_pud(dst_pgd, src_pgd, addr, next))
+ return -ENOMEM;
+ } while (dst_pgd++, src_pgd++, addr = next, addr != end);
+
+ return 0;
+}
+
+/*
+ * Set up, then resume from the hibernate image using swsusp_arch_suspend_exit().
+ *
+ * Memory allocated by get_safe_page() will be dealt with by the hibernate
+ * code; we don't need to free it here.
+ */
+int swsusp_arch_resume(void)
+{
+ int rc = 0;
+ void *zero_page;
+ size_t exit_size;
+ pgd_t *tmp_pg_dir;
+ void *lm_restore_pblist;
+ phys_addr_t phys_hibernate_exit;
+ void __noreturn (*hibernate_exit)(phys_addr_t, phys_addr_t, void *,
+ void *, phys_addr_t, phys_addr_t);
+
+ /*
+ * Locate the exit code in the bottom-but-one page, so that *NULL
+ * still has disastrous effects.
+ */
+ hibernate_exit = (void *)PAGE_SIZE;
+ exit_size = __hibernate_exit_text_end - __hibernate_exit_text_start;
+ /*
+ * Copy swsusp_arch_suspend_exit() to a safe page. This will generate
+ * a new set of ttbr0 page tables and load them.
+ */
+ rc = create_safe_exec_page(__hibernate_exit_text_start, exit_size,
+ (unsigned long)hibernate_exit,
+ &phys_hibernate_exit,
+ (void *)get_safe_page, GFP_ATOMIC);
+ if (rc) {
+ pr_err("Failed to create safe executable page for hibernate_exit code.");
+ goto out;
+ }
+
+ /*
+ * The hibernate exit text contains a set of el2 vectors, that will
+ * be executed at el2 with the mmu off in order to reload hyp-stub.
+ */
+ __flush_dcache_area(hibernate_exit, exit_size);
+
+ /*
+ * Restoring the memory image will overwrite the ttbr1 page tables.
+ * Create a second copy of just the linear map, and use this when
+ * restoring.
+ */
+ tmp_pg_dir = (pgd_t *)get_safe_page(GFP_ATOMIC);
+ if (!tmp_pg_dir) {
+ pr_err("Failed to allocate memory for temporary page tables.");
+ rc = -ENOMEM;
+ goto out;
+ }
+ rc = copy_page_tables(tmp_pg_dir, PAGE_OFFSET, 0);
+ if (rc)
+ goto out;
+
+ /*
+ * Since we only copied the linear map, we need to find restore_pblist's
+ * linear map address.
+ */
+ lm_restore_pblist = LMADDR(restore_pblist);
+
+ /*
+ * KASLR will cause the el2 vectors to be in a different location in
+ * the resumed kernel. Load hibernate's temporary copy into el2.
+ *
+ * We can skip this step if we booted at EL1, or are running with VHE.
+ */
+ if (el2_reset_needed()) {
+ phys_addr_t el2_vectors = phys_hibernate_exit; /* base */
+ el2_vectors += hibernate_el2_vectors -
+ __hibernate_exit_text_start; /* offset */
+
+ __hyp_set_vectors(el2_vectors);
+ }
+
+ /*
+ * We need a zero page that is zero before & after resume in order
+ * to break before make on the ttbr1 page tables.
+ */
+ zero_page = (void *)get_safe_page(GFP_ATOMIC);
+
+ hibernate_exit(virt_to_phys(tmp_pg_dir), resume_hdr.ttbr1_el1,
+ resume_hdr.reenter_kernel, lm_restore_pblist,
+ resume_hdr.__hyp_stub_vectors, virt_to_phys(zero_page));
+
+out:
+ return rc;
+}
+
+static int check_boot_cpu_online_pm_callback(struct notifier_block *nb,
+ unsigned long action, void *ptr)
+{
+ if (action == PM_HIBERNATION_PREPARE &&
+ cpumask_first(cpu_online_mask) != 0) {
+ pr_warn("CPU0 is offline.\n");
+ return notifier_from_errno(-ENODEV);
+ }
+
+ return NOTIFY_OK;
+}
+
+static int __init check_boot_cpu_online_init(void)
+{
+ /*
+ * Set this pm_notifier callback with a lower priority than
+ * cpu_hotplug_pm_callback, so that cpu_hotplug_pm_callback will be
+ * called earlier to disable cpu hotplug before the cpu online check.
+ */
+ pm_notifier(check_boot_cpu_online_pm_callback, -INT_MAX);
+
+ return 0;
+}
+core_initcall(check_boot_cpu_online_init);
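
A note on the LMADDR() macro used above: virt_to_phys() accepts both
kernel-image and linear-map virtual addresses, while phys_to_virt() always
returns a linear-map address, so composing the two yields the linear alias of
any kernel symbol -- which is what the copied ttbr1 tables cover while the
image is being restored. Expanded (helper name illustrative):

#include <asm/memory.h>

static inline void *linear_map_alias(void *addr)
{
	/* same as LMADDR(addr): round-trip through the physical address */
	return phys_to_virt(virt_to_phys(addr));
}
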
diff --git a/arch/arm64/kernel/hw_breakpoint.c b/arch/arm64/kernel/hw_breakpoint.c
index b45c95d34b83..ce21aa88263f 100644
--- a/arch/arm64/kernel/hw_breakpoint.c
+++ b/arch/arm64/kernel/hw_breakpoint.c
@@ -616,7 +616,7 @@ static int breakpoint_handler(unsigned long unused, unsigned int esr,
perf_bp_event(bp, regs);
/* Do we need to handle the stepping? */
- if (!bp->overflow_handler)
+ if (is_default_overflow_handler(bp))
step = 1;
unlock:
rcu_read_unlock();
@@ -712,7 +712,7 @@ static int watchpoint_handler(unsigned long addr, unsigned int esr,
perf_bp_event(wp, regs);
/* Do we need to handle the stepping? */
- if (!wp->overflow_handler)
+ if (is_default_overflow_handler(wp))
step = 1;
unlock:
@@ -886,9 +886,11 @@ static int hw_breakpoint_reset_notify(struct notifier_block *self,
unsigned long action,
void *hcpu)
{
- int cpu = (long)hcpu;
- if ((action & ~CPU_TASKS_FROZEN) == CPU_ONLINE)
- smp_call_function_single(cpu, hw_breakpoint_reset, NULL, 1);
+ if ((action & ~CPU_TASKS_FROZEN) == CPU_ONLINE) {
+ local_irq_disable();
+ hw_breakpoint_reset(NULL);
+ local_irq_enable();
+ }
return NOTIFY_OK;
}
diff --git a/arch/arm64/kernel/hyp-stub.S b/arch/arm64/kernel/hyp-stub.S
index a272f335c289..8727f4490772 100644
--- a/arch/arm64/kernel/hyp-stub.S
+++ b/arch/arm64/kernel/hyp-stub.S
@@ -22,6 +22,8 @@
#include <linux/irqchip/arm-gic-v3.h>
#include <asm/assembler.h>
+#include <asm/kvm_arm.h>
+#include <asm/kvm_asm.h>
#include <asm/ptrace.h>
#include <asm/virt.h>
@@ -53,15 +55,26 @@ ENDPROC(__hyp_stub_vectors)
.align 11
el1_sync:
- mrs x1, esr_el2
- lsr x1, x1, #26
- cmp x1, #0x16
- b.ne 2f // Not an HVC trap
- cbz x0, 1f
- msr vbar_el2, x0 // Set vbar_el2
- b 2f
-1: mrs x0, vbar_el2 // Return vbar_el2
-2: eret
+ mrs x30, esr_el2
+ lsr x30, x30, #ESR_ELx_EC_SHIFT
+
+ cmp x30, #ESR_ELx_EC_HVC64
+ b.ne 9f // Not an HVC trap
+
+ cmp x0, #HVC_GET_VECTORS
+ b.ne 1f
+ mrs x0, vbar_el2
+ b 9f
+
+1: cmp x0, #HVC_SET_VECTORS
+ b.ne 2f
+ msr vbar_el2, x1
+ b 9f
+
+ /* Someone called kvm_call_hyp() against the hyp-stub... */
+2: mov x0, #ARM_EXCEPTION_HYP_GONE
+
+9: eret
ENDPROC(el1_sync)
.macro invalid_vector label
@@ -101,10 +114,18 @@ ENDPROC(\label)
*/
ENTRY(__hyp_get_vectors)
- mov x0, xzr
- // fall through
-ENTRY(__hyp_set_vectors)
+ str lr, [sp, #-16]!
+ mov x0, #HVC_GET_VECTORS
hvc #0
+ ldr lr, [sp], #16
ret
ENDPROC(__hyp_get_vectors)
+
+ENTRY(__hyp_set_vectors)
+ str lr, [sp, #-16]!
+ mov x1, x0
+ mov x0, #HVC_SET_VECTORS
+ hvc #0
+ ldr lr, [sp], #16
+ ret
ENDPROC(__hyp_set_vectors)
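
The rewritten stub above replaces the old "x0 is either 0 or the new vectors"
convention with an explicit command interface: x0 carries HVC_GET_VECTORS or
HVC_SET_VECTORS (the new VBAR_EL2 value moving to x1), and any other hyp call
issued while only the stub is installed returns ARM_EXCEPTION_HYP_GONE. C
callers keep using the existing wrappers, as the hibernate resume path does; a
hedged sketch of that pattern (the function name is illustrative):

#include <asm/virt.h>

static void install_el2_vectors(phys_addr_t vectors_pa)
{
	/* Only meaningful if we booted at EL2 and are not already running there (VHE) */
	if (is_hyp_mode_available() && !is_kernel_in_hyp_mode())
		__hyp_set_vectors(vectors_pa);
}
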
diff --git a/arch/arm64/kernel/image.h b/arch/arm64/kernel/image.h
index 5e360ce88f10..c7fcb232fe47 100644
--- a/arch/arm64/kernel/image.h
+++ b/arch/arm64/kernel/image.h
@@ -73,6 +73,8 @@
#ifdef CONFIG_EFI
+__efistub_stext_offset = stext - _text;
+
/*
* Prevent the symbol aliases below from being emitted into the kallsyms
* table, by forcing them to be absolute symbols (which are conveniently
@@ -112,6 +114,7 @@ __efistub___memset = KALLSYMS_HIDE(__pi_memset);
__efistub__text = KALLSYMS_HIDE(_text);
__efistub__end = KALLSYMS_HIDE(_end);
__efistub__edata = KALLSYMS_HIDE(_edata);
+__efistub_screen_info = KALLSYMS_HIDE(screen_info);
#endif
diff --git a/arch/arm64/kernel/insn.c b/arch/arm64/kernel/insn.c
index 7371455160e5..368c08290dd8 100644
--- a/arch/arm64/kernel/insn.c
+++ b/arch/arm64/kernel/insn.c
@@ -96,7 +96,7 @@ static void __kprobes *patch_map(void *addr, int fixmap)
if (module && IS_ENABLED(CONFIG_DEBUG_SET_MODULE_RONX))
page = vmalloc_to_page(addr);
else if (!module && IS_ENABLED(CONFIG_DEBUG_RODATA))
- page = virt_to_page(addr);
+ page = pfn_to_page(PHYS_PFN(__pa(addr)));
else
return addr;
diff --git a/arch/arm64/kernel/kaslr.c b/arch/arm64/kernel/kaslr.c
index 582983920054..b05469173ba5 100644
--- a/arch/arm64/kernel/kaslr.c
+++ b/arch/arm64/kernel/kaslr.c
@@ -74,7 +74,7 @@ extern void *__init __fixmap_remap_fdt(phys_addr_t dt_phys, int *size,
* containing function pointers) to be reinitialized, and zero-initialized
* .bss variables will be reset to 0.
*/
-u64 __init kaslr_early_init(u64 dt_phys)
+u64 __init kaslr_early_init(u64 dt_phys, u64 modulo_offset)
{
void *fdt;
u64 seed, offset, mask, module_range;
@@ -132,8 +132,8 @@ u64 __init kaslr_early_init(u64 dt_phys)
* boundary (for 4KB/16KB/64KB granule kernels, respectively). If this
* happens, increase the KASLR offset by the size of the kernel image.
*/
- if ((((u64)_text + offset) >> SWAPPER_TABLE_SHIFT) !=
- (((u64)_end + offset) >> SWAPPER_TABLE_SHIFT))
+ if ((((u64)_text + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT) !=
+ (((u64)_end + offset + modulo_offset) >> SWAPPER_TABLE_SHIFT))
offset = (offset + (u64)(_end - _text)) & mask;
if (IS_ENABLED(CONFIG_KASAN))
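
To see why the modulo offset matters here, the standalone check below uses
illustrative numbers (assuming a 4 KB granule, where SWAPPER_TABLE_SHIFT is 30,
i.e. a 1 GB boundary): the randomized offset alone keeps _text and _end inside
the same table, but once the sub-2MB physical misalignment -- which head.S now
folds into the runtime address -- is added, the image would straddle the
boundary, so the offset has to be bumped by the image size.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const int shift = 30;				/* assumed SWAPPER_TABLE_SHIFT */
	uint64_t text   = 0xffff000008080000ULL;	/* link-time _text (illustrative) */
	uint64_t end    = text + 0x1000000;		/* 16 MB image */
	uint64_t offset = 0x36e00000;			/* random offset, 2 MB aligned */
	uint64_t modulo = 0x00180000;			/* physical misalignment < 2 MB */

	printf("without modulo: %s\n",
	       ((text + offset) >> shift) != ((end + offset) >> shift) ?
	       "bump offset" : "keep offset");		/* prints "keep offset" */
	printf("with modulo:    %s\n",
	       ((text + offset + modulo) >> shift) !=
	       ((end + offset + modulo) >> shift) ?
	       "bump offset" : "keep offset");		/* prints "bump offset" */
	return 0;
}
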
diff --git a/arch/arm64/kernel/pci.c b/arch/arm64/kernel/pci.c
index c72de668e1d4..3c4e308b40a0 100644
--- a/arch/arm64/kernel/pci.c
+++ b/arch/arm64/kernel/pci.c
@@ -74,6 +74,16 @@ int raw_pci_write(unsigned int domain, unsigned int bus,
return -ENXIO;
}
+#ifdef CONFIG_NUMA
+
+int pcibus_to_node(struct pci_bus *bus)
+{
+ return dev_to_node(&bus->dev);
+}
+EXPORT_SYMBOL(pcibus_to_node);
+
+#endif
+
#ifdef CONFIG_ACPI
/* Root bridge scanning */
struct pci_bus *pci_acpi_scan_root(struct acpi_pci_root *root)
diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
index ff4665462a02..713ca824f266 100644
--- a/arch/arm64/kernel/perf_callchain.c
+++ b/arch/arm64/kernel/perf_callchain.c
@@ -31,7 +31,7 @@ struct frame_tail {
*/
static struct frame_tail __user *
user_backtrace(struct frame_tail __user *tail,
- struct perf_callchain_entry *entry)
+ struct perf_callchain_entry_ctx *entry)
{
struct frame_tail buftail;
unsigned long err;
@@ -76,7 +76,7 @@ struct compat_frame_tail {
static struct compat_frame_tail __user *
compat_user_backtrace(struct compat_frame_tail __user *tail,
- struct perf_callchain_entry *entry)
+ struct perf_callchain_entry_ctx *entry)
{
struct compat_frame_tail buftail;
unsigned long err;
@@ -106,7 +106,7 @@ compat_user_backtrace(struct compat_frame_tail __user *tail,
}
#endif /* CONFIG_COMPAT */
-void perf_callchain_user(struct perf_callchain_entry *entry,
+void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
struct pt_regs *regs)
{
if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
@@ -122,7 +122,7 @@ void perf_callchain_user(struct perf_callchain_entry *entry,
tail = (struct frame_tail __user *)regs->regs[29];
- while (entry->nr < PERF_MAX_STACK_DEPTH &&
+ while (entry->nr < entry->max_stack &&
tail && !((unsigned long)tail & 0xf))
tail = user_backtrace(tail, entry);
} else {
@@ -132,7 +132,7 @@ void perf_callchain_user(struct perf_callchain_entry *entry,
tail = (struct compat_frame_tail __user *)regs->compat_fp - 1;
- while ((entry->nr < PERF_MAX_STACK_DEPTH) &&
+ while ((entry->nr < entry->max_stack) &&
tail && !((unsigned long)tail & 0x3))
tail = compat_user_backtrace(tail, entry);
#endif
@@ -146,12 +146,12 @@ void perf_callchain_user(struct perf_callchain_entry *entry,
*/
static int callchain_trace(struct stackframe *frame, void *data)
{
- struct perf_callchain_entry *entry = data;
+ struct perf_callchain_entry_ctx *entry = data;
perf_callchain_store(entry, frame->pc);
return 0;
}
-void perf_callchain_kernel(struct perf_callchain_entry *entry,
+void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
struct pt_regs *regs)
{
struct stackframe frame;
diff --git a/arch/arm64/kernel/perf_event.c b/arch/arm64/kernel/perf_event.c
index f419a7c075a4..838ccf123307 100644
--- a/arch/arm64/kernel/perf_event.c
+++ b/arch/arm64/kernel/perf_event.c
@@ -21,6 +21,7 @@
#include <asm/irq_regs.h>
#include <asm/perf_event.h>
+#include <asm/sysreg.h>
#include <asm/virt.h>
#include <linux/of.h>
@@ -33,43 +34,43 @@
*/
/* Required events. */
-#define ARMV8_PMUV3_PERFCTR_PMNC_SW_INCR 0x00
-#define ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL 0x03
-#define ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS 0x04
-#define ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED 0x10
-#define ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES 0x11
-#define ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED 0x12
+#define ARMV8_PMUV3_PERFCTR_SW_INCR 0x00
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL 0x03
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE 0x04
+#define ARMV8_PMUV3_PERFCTR_BR_MIS_PRED 0x10
+#define ARMV8_PMUV3_PERFCTR_CPU_CYCLES 0x11
+#define ARMV8_PMUV3_PERFCTR_BR_PRED 0x12
/* At least one of the following is required. */
-#define ARMV8_PMUV3_PERFCTR_INSTR_EXECUTED 0x08
-#define ARMV8_PMUV3_PERFCTR_OP_SPEC 0x1B
+#define ARMV8_PMUV3_PERFCTR_INST_RETIRED 0x08
+#define ARMV8_PMUV3_PERFCTR_INST_SPEC 0x1B
/* Common architectural events. */
-#define ARMV8_PMUV3_PERFCTR_MEM_READ 0x06
-#define ARMV8_PMUV3_PERFCTR_MEM_WRITE 0x07
+#define ARMV8_PMUV3_PERFCTR_LD_RETIRED 0x06
+#define ARMV8_PMUV3_PERFCTR_ST_RETIRED 0x07
#define ARMV8_PMUV3_PERFCTR_EXC_TAKEN 0x09
-#define ARMV8_PMUV3_PERFCTR_EXC_EXECUTED 0x0A
-#define ARMV8_PMUV3_PERFCTR_CID_WRITE 0x0B
-#define ARMV8_PMUV3_PERFCTR_PC_WRITE 0x0C
-#define ARMV8_PMUV3_PERFCTR_PC_IMM_BRANCH 0x0D
-#define ARMV8_PMUV3_PERFCTR_PC_PROC_RETURN 0x0E
-#define ARMV8_PMUV3_PERFCTR_MEM_UNALIGNED_ACCESS 0x0F
-#define ARMV8_PMUV3_PERFCTR_TTBR_WRITE 0x1C
+#define ARMV8_PMUV3_PERFCTR_EXC_RETURN 0x0A
+#define ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED 0x0B
+#define ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED 0x0C
+#define ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED 0x0D
+#define ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED 0x0E
+#define ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED 0x0F
+#define ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED 0x1C
#define ARMV8_PMUV3_PERFCTR_CHAIN 0x1E
#define ARMV8_PMUV3_PERFCTR_BR_RETIRED 0x21
/* Common microarchitectural events. */
-#define ARMV8_PMUV3_PERFCTR_L1_ICACHE_REFILL 0x01
-#define ARMV8_PMUV3_PERFCTR_ITLB_REFILL 0x02
-#define ARMV8_PMUV3_PERFCTR_DTLB_REFILL 0x05
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL 0x01
+#define ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL 0x02
+#define ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL 0x05
#define ARMV8_PMUV3_PERFCTR_MEM_ACCESS 0x13
-#define ARMV8_PMUV3_PERFCTR_L1_ICACHE_ACCESS 0x14
-#define ARMV8_PMUV3_PERFCTR_L1_DCACHE_WB 0x15
-#define ARMV8_PMUV3_PERFCTR_L2_CACHE_ACCESS 0x16
-#define ARMV8_PMUV3_PERFCTR_L2_CACHE_REFILL 0x17
-#define ARMV8_PMUV3_PERFCTR_L2_CACHE_WB 0x18
+#define ARMV8_PMUV3_PERFCTR_L1I_CACHE 0x14
+#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB 0x15
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE 0x16
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL 0x17
+#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB 0x18
#define ARMV8_PMUV3_PERFCTR_BUS_ACCESS 0x19
-#define ARMV8_PMUV3_PERFCTR_MEM_ERROR 0x1A
+#define ARMV8_PMUV3_PERFCTR_MEMORY_ERROR 0x1A
#define ARMV8_PMUV3_PERFCTR_BUS_CYCLES 0x1D
#define ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE 0x1F
#define ARMV8_PMUV3_PERFCTR_L2D_CACHE_ALLOCATE 0x20
@@ -85,89 +86,182 @@
#define ARMV8_PMUV3_PERFCTR_L3D_CACHE 0x2B
#define ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB 0x2C
#define ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL 0x2D
-#define ARMV8_PMUV3_PERFCTR_L21_TLB_REFILL 0x2E
+#define ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL 0x2E
#define ARMV8_PMUV3_PERFCTR_L2D_TLB 0x2F
-#define ARMV8_PMUV3_PERFCTR_L21_TLB 0x30
-
-/* ARMv8 implementation defined event types. */
-#define ARMV8_IMPDEF_PERFCTR_L1_DCACHE_ACCESS_LD 0x40
-#define ARMV8_IMPDEF_PERFCTR_L1_DCACHE_ACCESS_ST 0x41
-#define ARMV8_IMPDEF_PERFCTR_L1_DCACHE_REFILL_LD 0x42
-#define ARMV8_IMPDEF_PERFCTR_L1_DCACHE_REFILL_ST 0x43
-#define ARMV8_IMPDEF_PERFCTR_DTLB_REFILL_LD 0x4C
-#define ARMV8_IMPDEF_PERFCTR_DTLB_REFILL_ST 0x4D
-#define ARMV8_IMPDEF_PERFCTR_DTLB_ACCESS_LD 0x4E
-#define ARMV8_IMPDEF_PERFCTR_DTLB_ACCESS_ST 0x4F
+#define ARMV8_PMUV3_PERFCTR_L2I_TLB 0x30
+
+/* ARMv8 recommended implementation defined event types */
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD 0x40
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR 0x41
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD 0x42
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR 0x43
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_INNER 0x44
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_OUTER 0x45
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_VICTIM 0x46
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WB_CLEAN 0x47
+#define ARMV8_IMPDEF_PERFCTR_L1D_CACHE_INVAL 0x48
+
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD 0x4C
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR 0x4D
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD 0x4E
+#define ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR 0x4F
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_RD 0x50
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WR 0x51
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_RD 0x52
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_REFILL_WR 0x53
+
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_VICTIM 0x56
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_WB_CLEAN 0x57
+#define ARMV8_IMPDEF_PERFCTR_L2D_CACHE_INVAL 0x58
+
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_RD 0x5C
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_REFILL_WR 0x5D
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_RD 0x5E
+#define ARMV8_IMPDEF_PERFCTR_L2D_TLB_WR 0x5F
+
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD 0x60
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR 0x61
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_SHARED 0x62
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NOT_SHARED 0x63
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_NORMAL 0x64
+#define ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_PERIPH 0x65
+
+#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_RD 0x66
+#define ARMV8_IMPDEF_PERFCTR_MEM_ACCESS_WR 0x67
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LD_SPEC 0x68
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_ST_SPEC 0x69
+#define ARMV8_IMPDEF_PERFCTR_UNALIGNED_LDST_SPEC 0x6A
+
+#define ARMV8_IMPDEF_PERFCTR_LDREX_SPEC 0x6C
+#define ARMV8_IMPDEF_PERFCTR_STREX_PASS_SPEC 0x6D
+#define ARMV8_IMPDEF_PERFCTR_STREX_FAIL_SPEC 0x6E
+#define ARMV8_IMPDEF_PERFCTR_STREX_SPEC 0x6F
+#define ARMV8_IMPDEF_PERFCTR_LD_SPEC 0x70
+#define ARMV8_IMPDEF_PERFCTR_ST_SPEC 0x71
+#define ARMV8_IMPDEF_PERFCTR_LDST_SPEC 0x72
+#define ARMV8_IMPDEF_PERFCTR_DP_SPEC 0x73
+#define ARMV8_IMPDEF_PERFCTR_ASE_SPEC 0x74
+#define ARMV8_IMPDEF_PERFCTR_VFP_SPEC 0x75
+#define ARMV8_IMPDEF_PERFCTR_PC_WRITE_SPEC 0x76
+#define ARMV8_IMPDEF_PERFCTR_CRYPTO_SPEC 0x77
+#define ARMV8_IMPDEF_PERFCTR_BR_IMMED_SPEC 0x78
+#define ARMV8_IMPDEF_PERFCTR_BR_RETURN_SPEC 0x79
+#define ARMV8_IMPDEF_PERFCTR_BR_INDIRECT_SPEC 0x7A
+
+#define ARMV8_IMPDEF_PERFCTR_ISB_SPEC 0x7C
+#define ARMV8_IMPDEF_PERFCTR_DSB_SPEC 0x7D
+#define ARMV8_IMPDEF_PERFCTR_DMB_SPEC 0x7E
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_UNDEF 0x81
+#define ARMV8_IMPDEF_PERFCTR_EXC_SVC 0x82
+#define ARMV8_IMPDEF_PERFCTR_EXC_PABORT 0x83
+#define ARMV8_IMPDEF_PERFCTR_EXC_DABORT 0x84
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_IRQ 0x86
+#define ARMV8_IMPDEF_PERFCTR_EXC_FIQ 0x87
+#define ARMV8_IMPDEF_PERFCTR_EXC_SMC 0x88
+
+#define ARMV8_IMPDEF_PERFCTR_EXC_HVC 0x8A
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_PABORT 0x8B
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_DABORT 0x8C
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_OTHER 0x8D
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_IRQ 0x8E
+#define ARMV8_IMPDEF_PERFCTR_EXC_TRAP_FIQ 0x8F
+#define ARMV8_IMPDEF_PERFCTR_RC_LD_SPEC 0x90
+#define ARMV8_IMPDEF_PERFCTR_RC_ST_SPEC 0x91
+
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_RD 0xA0
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WR 0xA1
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_RD 0xA2
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_REFILL_WR 0xA3
+
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_VICTIM 0xA6
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_WB_CLEAN 0xA7
+#define ARMV8_IMPDEF_PERFCTR_L3D_CACHE_INVAL 0xA8
/* ARMv8 Cortex-A53 specific event types. */
-#define ARMV8_A53_PERFCTR_PREFETCH_LINEFILL 0xC2
+#define ARMV8_A53_PERFCTR_PREF_LINEFILL 0xC2
/* ARMv8 Cavium ThunderX specific event types. */
-#define ARMV8_THUNDER_PERFCTR_L1_DCACHE_MISS_ST 0xE9
-#define ARMV8_THUNDER_PERFCTR_L1_DCACHE_PREF_ACCESS 0xEA
-#define ARMV8_THUNDER_PERFCTR_L1_DCACHE_PREF_MISS 0xEB
-#define ARMV8_THUNDER_PERFCTR_L1_ICACHE_PREF_ACCESS 0xEC
-#define ARMV8_THUNDER_PERFCTR_L1_ICACHE_PREF_MISS 0xED
+#define ARMV8_THUNDER_PERFCTR_L1D_CACHE_MISS_ST 0xE9
+#define ARMV8_THUNDER_PERFCTR_L1D_CACHE_PREF_ACCESS 0xEA
+#define ARMV8_THUNDER_PERFCTR_L1D_CACHE_PREF_MISS 0xEB
+#define ARMV8_THUNDER_PERFCTR_L1I_CACHE_PREF_ACCESS 0xEC
+#define ARMV8_THUNDER_PERFCTR_L1I_CACHE_PREF_MISS 0xED
/* PMUv3 HW events mapping. */
static const unsigned armv8_pmuv3_perf_map[PERF_COUNT_HW_MAX] = {
PERF_MAP_ALL_UNSUPPORTED,
- [PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES,
- [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INSTR_EXECUTED,
- [PERF_COUNT_HW_CACHE_REFERENCES] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS,
- [PERF_COUNT_HW_CACHE_MISSES] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL,
- [PERF_COUNT_HW_BRANCH_MISSES] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
+ [PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CPU_CYCLES,
+ [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INST_RETIRED,
+ [PERF_COUNT_HW_CACHE_REFERENCES] = ARMV8_PMUV3_PERFCTR_L1D_CACHE,
+ [PERF_COUNT_HW_CACHE_MISSES] = ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL,
+ [PERF_COUNT_HW_BRANCH_MISSES] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
};
/* ARM Cortex-A53 HW events mapping. */
static const unsigned armv8_a53_perf_map[PERF_COUNT_HW_MAX] = {
PERF_MAP_ALL_UNSUPPORTED,
- [PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES,
- [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INSTR_EXECUTED,
- [PERF_COUNT_HW_CACHE_REFERENCES] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS,
- [PERF_COUNT_HW_CACHE_MISSES] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL,
- [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_PC_WRITE,
- [PERF_COUNT_HW_BRANCH_MISSES] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
+ [PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CPU_CYCLES,
+ [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INST_RETIRED,
+ [PERF_COUNT_HW_CACHE_REFERENCES] = ARMV8_PMUV3_PERFCTR_L1D_CACHE,
+ [PERF_COUNT_HW_CACHE_MISSES] = ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL,
+ [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED,
+ [PERF_COUNT_HW_BRANCH_MISSES] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
[PERF_COUNT_HW_BUS_CYCLES] = ARMV8_PMUV3_PERFCTR_BUS_CYCLES,
};
/* ARM Cortex-A57 and Cortex-A72 events mapping. */
static const unsigned armv8_a57_perf_map[PERF_COUNT_HW_MAX] = {
PERF_MAP_ALL_UNSUPPORTED,
- [PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES,
- [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INSTR_EXECUTED,
- [PERF_COUNT_HW_CACHE_REFERENCES] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS,
- [PERF_COUNT_HW_CACHE_MISSES] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL,
- [PERF_COUNT_HW_BRANCH_MISSES] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
+ [PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CPU_CYCLES,
+ [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INST_RETIRED,
+ [PERF_COUNT_HW_CACHE_REFERENCES] = ARMV8_PMUV3_PERFCTR_L1D_CACHE,
+ [PERF_COUNT_HW_CACHE_MISSES] = ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL,
+ [PERF_COUNT_HW_BRANCH_MISSES] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
[PERF_COUNT_HW_BUS_CYCLES] = ARMV8_PMUV3_PERFCTR_BUS_CYCLES,
};
static const unsigned armv8_thunder_perf_map[PERF_COUNT_HW_MAX] = {
PERF_MAP_ALL_UNSUPPORTED,
- [PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES,
- [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INSTR_EXECUTED,
- [PERF_COUNT_HW_CACHE_REFERENCES] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS,
- [PERF_COUNT_HW_CACHE_MISSES] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL,
- [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_PC_WRITE,
- [PERF_COUNT_HW_BRANCH_MISSES] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
+ [PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CPU_CYCLES,
+ [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INST_RETIRED,
+ [PERF_COUNT_HW_CACHE_REFERENCES] = ARMV8_PMUV3_PERFCTR_L1D_CACHE,
+ [PERF_COUNT_HW_CACHE_MISSES] = ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL,
+ [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED,
+ [PERF_COUNT_HW_BRANCH_MISSES] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
[PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = ARMV8_PMUV3_PERFCTR_STALL_FRONTEND,
[PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = ARMV8_PMUV3_PERFCTR_STALL_BACKEND,
};
+/* Broadcom Vulcan events mapping */
+static const unsigned armv8_vulcan_perf_map[PERF_COUNT_HW_MAX] = {
+ PERF_MAP_ALL_UNSUPPORTED,
+ [PERF_COUNT_HW_CPU_CYCLES] = ARMV8_PMUV3_PERFCTR_CPU_CYCLES,
+ [PERF_COUNT_HW_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_INST_RETIRED,
+ [PERF_COUNT_HW_CACHE_REFERENCES] = ARMV8_PMUV3_PERFCTR_L1D_CACHE,
+ [PERF_COUNT_HW_CACHE_MISSES] = ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL,
+ [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = ARMV8_PMUV3_PERFCTR_BR_RETIRED,
+ [PERF_COUNT_HW_BRANCH_MISSES] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
+ [PERF_COUNT_HW_BUS_CYCLES] = ARMV8_PMUV3_PERFCTR_BUS_CYCLES,
+ [PERF_COUNT_HW_STALLED_CYCLES_FRONTEND] = ARMV8_PMUV3_PERFCTR_STALL_FRONTEND,
+ [PERF_COUNT_HW_STALLED_CYCLES_BACKEND] = ARMV8_PMUV3_PERFCTR_STALL_BACKEND,
+};
+
static const unsigned armv8_pmuv3_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
PERF_CACHE_MAP_ALL_UNSUPPORTED,
- [C(L1D)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS,
- [C(L1D)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL,
- [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS,
- [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL,
+ [C(L1D)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1D_CACHE,
+ [C(L1D)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL,
+ [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1D_CACHE,
+ [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL,
- [C(BPU)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED,
- [C(BPU)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
- [C(BPU)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED,
- [C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
+ [C(BPU)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_BR_PRED,
+ [C(BPU)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
+ [C(BPU)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_BR_PRED,
+ [C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
};
static const unsigned armv8_a53_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
@@ -175,21 +269,21 @@ static const unsigned armv8_a53_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
PERF_CACHE_MAP_ALL_UNSUPPORTED,
- [C(L1D)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS,
- [C(L1D)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL,
- [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS,
- [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL,
- [C(L1D)][C(OP_PREFETCH)][C(RESULT_MISS)] = ARMV8_A53_PERFCTR_PREFETCH_LINEFILL,
+ [C(L1D)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1D_CACHE,
+ [C(L1D)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL,
+ [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1D_CACHE,
+ [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL,
+ [C(L1D)][C(OP_PREFETCH)][C(RESULT_MISS)] = ARMV8_A53_PERFCTR_PREF_LINEFILL,
- [C(L1I)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1_ICACHE_ACCESS,
- [C(L1I)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1_ICACHE_REFILL,
+ [C(L1I)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1I_CACHE,
+ [C(L1I)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL,
- [C(ITLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_ITLB_REFILL,
+ [C(ITLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL,
- [C(BPU)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED,
- [C(BPU)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
- [C(BPU)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED,
- [C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
+ [C(BPU)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_BR_PRED,
+ [C(BPU)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
+ [C(BPU)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_BR_PRED,
+ [C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
};
static const unsigned armv8_a57_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
@@ -197,23 +291,23 @@ static const unsigned armv8_a57_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
PERF_CACHE_MAP_ALL_UNSUPPORTED,
- [C(L1D)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1_DCACHE_ACCESS_LD,
- [C(L1D)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1_DCACHE_REFILL_LD,
- [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1_DCACHE_ACCESS_ST,
- [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1_DCACHE_REFILL_ST,
+ [C(L1D)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD,
+ [C(L1D)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD,
+ [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR,
+ [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR,
- [C(L1I)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1_ICACHE_ACCESS,
- [C(L1I)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1_ICACHE_REFILL,
+ [C(L1I)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1I_CACHE,
+ [C(L1I)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL,
- [C(DTLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_DTLB_REFILL_LD,
- [C(DTLB)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_DTLB_REFILL_ST,
+ [C(DTLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD,
+ [C(DTLB)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR,
- [C(ITLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_ITLB_REFILL,
+ [C(ITLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL,
- [C(BPU)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED,
- [C(BPU)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
- [C(BPU)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED,
- [C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
+ [C(BPU)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_BR_PRED,
+ [C(BPU)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
+ [C(BPU)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_BR_PRED,
+ [C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
};
static const unsigned armv8_thunder_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
@@ -221,67 +315,108 @@ static const unsigned armv8_thunder_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_RESULT_MAX] = {
PERF_CACHE_MAP_ALL_UNSUPPORTED,
- [C(L1D)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1_DCACHE_ACCESS_LD,
- [C(L1D)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1_DCACHE_REFILL_LD,
- [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1_DCACHE_ACCESS_ST,
- [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_THUNDER_PERFCTR_L1_DCACHE_MISS_ST,
- [C(L1D)][C(OP_PREFETCH)][C(RESULT_ACCESS)] = ARMV8_THUNDER_PERFCTR_L1_DCACHE_PREF_ACCESS,
- [C(L1D)][C(OP_PREFETCH)][C(RESULT_MISS)] = ARMV8_THUNDER_PERFCTR_L1_DCACHE_PREF_MISS,
-
- [C(L1I)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1_ICACHE_ACCESS,
- [C(L1I)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1_ICACHE_REFILL,
- [C(L1I)][C(OP_PREFETCH)][C(RESULT_ACCESS)] = ARMV8_THUNDER_PERFCTR_L1_ICACHE_PREF_ACCESS,
- [C(L1I)][C(OP_PREFETCH)][C(RESULT_MISS)] = ARMV8_THUNDER_PERFCTR_L1_ICACHE_PREF_MISS,
-
- [C(DTLB)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_DTLB_ACCESS_LD,
- [C(DTLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_DTLB_REFILL_LD,
- [C(DTLB)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_DTLB_ACCESS_ST,
- [C(DTLB)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_DTLB_REFILL_ST,
-
- [C(ITLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_ITLB_REFILL,
-
- [C(BPU)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED,
- [C(BPU)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
- [C(BPU)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED,
- [C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED,
+ [C(L1D)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD,
+ [C(L1D)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD,
+ [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR,
+ [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_THUNDER_PERFCTR_L1D_CACHE_MISS_ST,
+ [C(L1D)][C(OP_PREFETCH)][C(RESULT_ACCESS)] = ARMV8_THUNDER_PERFCTR_L1D_CACHE_PREF_ACCESS,
+ [C(L1D)][C(OP_PREFETCH)][C(RESULT_MISS)] = ARMV8_THUNDER_PERFCTR_L1D_CACHE_PREF_MISS,
+
+ [C(L1I)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1I_CACHE,
+ [C(L1I)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL,
+ [C(L1I)][C(OP_PREFETCH)][C(RESULT_ACCESS)] = ARMV8_THUNDER_PERFCTR_L1I_CACHE_PREF_ACCESS,
+ [C(L1I)][C(OP_PREFETCH)][C(RESULT_MISS)] = ARMV8_THUNDER_PERFCTR_L1I_CACHE_PREF_MISS,
+
+ [C(DTLB)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD,
+ [C(DTLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD,
+ [C(DTLB)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR,
+ [C(DTLB)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR,
+
+ [C(ITLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL,
+
+ [C(BPU)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_BR_PRED,
+ [C(BPU)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
+ [C(BPU)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_BR_PRED,
+ [C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
+};
+
+static const unsigned armv8_vulcan_perf_cache_map[PERF_COUNT_HW_CACHE_MAX]
+ [PERF_COUNT_HW_CACHE_OP_MAX]
+ [PERF_COUNT_HW_CACHE_RESULT_MAX] = {
+ PERF_CACHE_MAP_ALL_UNSUPPORTED,
+
+ [C(L1D)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_RD,
+ [C(L1D)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_RD,
+ [C(L1D)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_WR,
+ [C(L1D)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_CACHE_REFILL_WR,
+
+ [C(L1I)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1I_CACHE,
+ [C(L1I)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL,
+
+ [C(ITLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL,
+ [C(ITLB)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_L1I_TLB,
+
+ [C(DTLB)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_RD,
+ [C(DTLB)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_WR,
+ [C(DTLB)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_RD,
+ [C(DTLB)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_IMPDEF_PERFCTR_L1D_TLB_REFILL_WR,
+
+ [C(BPU)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_BR_PRED,
+ [C(BPU)][C(OP_READ)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
+ [C(BPU)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_PMUV3_PERFCTR_BR_PRED,
+ [C(BPU)][C(OP_WRITE)][C(RESULT_MISS)] = ARMV8_PMUV3_PERFCTR_BR_MIS_PRED,
+
+ [C(NODE)][C(OP_READ)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_RD,
+ [C(NODE)][C(OP_WRITE)][C(RESULT_ACCESS)] = ARMV8_IMPDEF_PERFCTR_BUS_ACCESS_WR,
};
+static ssize_t
+armv8pmu_events_sysfs_show(struct device *dev,
+ struct device_attribute *attr, char *page)
+{
+ struct perf_pmu_events_attr *pmu_attr;
+
+ pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr);
+
+ return sprintf(page, "event=0x%03llx\n", pmu_attr->id);
+}
+
#define ARMV8_EVENT_ATTR_RESOLVE(m) #m
#define ARMV8_EVENT_ATTR(name, config) \
- PMU_EVENT_ATTR_STRING(name, armv8_event_attr_##name, \
- "event=" ARMV8_EVENT_ATTR_RESOLVE(config))
-
-ARMV8_EVENT_ATTR(sw_incr, ARMV8_PMUV3_PERFCTR_PMNC_SW_INCR);
-ARMV8_EVENT_ATTR(l1i_cache_refill, ARMV8_PMUV3_PERFCTR_L1_ICACHE_REFILL);
-ARMV8_EVENT_ATTR(l1i_tlb_refill, ARMV8_PMUV3_PERFCTR_ITLB_REFILL);
-ARMV8_EVENT_ATTR(l1d_cache_refill, ARMV8_PMUV3_PERFCTR_L1_DCACHE_REFILL);
-ARMV8_EVENT_ATTR(l1d_cache, ARMV8_PMUV3_PERFCTR_L1_DCACHE_ACCESS);
-ARMV8_EVENT_ATTR(l1d_tlb_refill, ARMV8_PMUV3_PERFCTR_DTLB_REFILL);
-ARMV8_EVENT_ATTR(ld_retired, ARMV8_PMUV3_PERFCTR_MEM_READ);
-ARMV8_EVENT_ATTR(st_retired, ARMV8_PMUV3_PERFCTR_MEM_WRITE);
-ARMV8_EVENT_ATTR(inst_retired, ARMV8_PMUV3_PERFCTR_INSTR_EXECUTED);
+ PMU_EVENT_ATTR(name, armv8_event_attr_##name, \
+ config, armv8pmu_events_sysfs_show)
+
+ARMV8_EVENT_ATTR(sw_incr, ARMV8_PMUV3_PERFCTR_SW_INCR);
+ARMV8_EVENT_ATTR(l1i_cache_refill, ARMV8_PMUV3_PERFCTR_L1I_CACHE_REFILL);
+ARMV8_EVENT_ATTR(l1i_tlb_refill, ARMV8_PMUV3_PERFCTR_L1I_TLB_REFILL);
+ARMV8_EVENT_ATTR(l1d_cache_refill, ARMV8_PMUV3_PERFCTR_L1D_CACHE_REFILL);
+ARMV8_EVENT_ATTR(l1d_cache, ARMV8_PMUV3_PERFCTR_L1D_CACHE);
+ARMV8_EVENT_ATTR(l1d_tlb_refill, ARMV8_PMUV3_PERFCTR_L1D_TLB_REFILL);
+ARMV8_EVENT_ATTR(ld_retired, ARMV8_PMUV3_PERFCTR_LD_RETIRED);
+ARMV8_EVENT_ATTR(st_retired, ARMV8_PMUV3_PERFCTR_ST_RETIRED);
+ARMV8_EVENT_ATTR(inst_retired, ARMV8_PMUV3_PERFCTR_INST_RETIRED);
ARMV8_EVENT_ATTR(exc_taken, ARMV8_PMUV3_PERFCTR_EXC_TAKEN);
-ARMV8_EVENT_ATTR(exc_return, ARMV8_PMUV3_PERFCTR_EXC_EXECUTED);
-ARMV8_EVENT_ATTR(cid_write_retired, ARMV8_PMUV3_PERFCTR_CID_WRITE);
-ARMV8_EVENT_ATTR(pc_write_retired, ARMV8_PMUV3_PERFCTR_PC_WRITE);
-ARMV8_EVENT_ATTR(br_immed_retired, ARMV8_PMUV3_PERFCTR_PC_IMM_BRANCH);
-ARMV8_EVENT_ATTR(br_return_retired, ARMV8_PMUV3_PERFCTR_PC_PROC_RETURN);
-ARMV8_EVENT_ATTR(unaligned_ldst_retired, ARMV8_PMUV3_PERFCTR_MEM_UNALIGNED_ACCESS);
-ARMV8_EVENT_ATTR(br_mis_pred, ARMV8_PMUV3_PERFCTR_PC_BRANCH_MIS_PRED);
-ARMV8_EVENT_ATTR(cpu_cycles, ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES);
-ARMV8_EVENT_ATTR(br_pred, ARMV8_PMUV3_PERFCTR_PC_BRANCH_PRED);
+ARMV8_EVENT_ATTR(exc_return, ARMV8_PMUV3_PERFCTR_EXC_RETURN);
+ARMV8_EVENT_ATTR(cid_write_retired, ARMV8_PMUV3_PERFCTR_CID_WRITE_RETIRED);
+ARMV8_EVENT_ATTR(pc_write_retired, ARMV8_PMUV3_PERFCTR_PC_WRITE_RETIRED);
+ARMV8_EVENT_ATTR(br_immed_retired, ARMV8_PMUV3_PERFCTR_BR_IMMED_RETIRED);
+ARMV8_EVENT_ATTR(br_return_retired, ARMV8_PMUV3_PERFCTR_BR_RETURN_RETIRED);
+ARMV8_EVENT_ATTR(unaligned_ldst_retired, ARMV8_PMUV3_PERFCTR_UNALIGNED_LDST_RETIRED);
+ARMV8_EVENT_ATTR(br_mis_pred, ARMV8_PMUV3_PERFCTR_BR_MIS_PRED);
+ARMV8_EVENT_ATTR(cpu_cycles, ARMV8_PMUV3_PERFCTR_CPU_CYCLES);
+ARMV8_EVENT_ATTR(br_pred, ARMV8_PMUV3_PERFCTR_BR_PRED);
ARMV8_EVENT_ATTR(mem_access, ARMV8_PMUV3_PERFCTR_MEM_ACCESS);
-ARMV8_EVENT_ATTR(l1i_cache, ARMV8_PMUV3_PERFCTR_L1_ICACHE_ACCESS);
-ARMV8_EVENT_ATTR(l1d_cache_wb, ARMV8_PMUV3_PERFCTR_L1_DCACHE_WB);
-ARMV8_EVENT_ATTR(l2d_cache, ARMV8_PMUV3_PERFCTR_L2_CACHE_ACCESS);
-ARMV8_EVENT_ATTR(l2d_cache_refill, ARMV8_PMUV3_PERFCTR_L2_CACHE_REFILL);
-ARMV8_EVENT_ATTR(l2d_cache_wb, ARMV8_PMUV3_PERFCTR_L2_CACHE_WB);
+ARMV8_EVENT_ATTR(l1i_cache, ARMV8_PMUV3_PERFCTR_L1I_CACHE);
+ARMV8_EVENT_ATTR(l1d_cache_wb, ARMV8_PMUV3_PERFCTR_L1D_CACHE_WB);
+ARMV8_EVENT_ATTR(l2d_cache, ARMV8_PMUV3_PERFCTR_L2D_CACHE);
+ARMV8_EVENT_ATTR(l2d_cache_refill, ARMV8_PMUV3_PERFCTR_L2D_CACHE_REFILL);
+ARMV8_EVENT_ATTR(l2d_cache_wb, ARMV8_PMUV3_PERFCTR_L2D_CACHE_WB);
ARMV8_EVENT_ATTR(bus_access, ARMV8_PMUV3_PERFCTR_BUS_ACCESS);
-ARMV8_EVENT_ATTR(memory_error, ARMV8_PMUV3_PERFCTR_MEM_ERROR);
-ARMV8_EVENT_ATTR(inst_spec, ARMV8_PMUV3_PERFCTR_OP_SPEC);
-ARMV8_EVENT_ATTR(ttbr_write_retired, ARMV8_PMUV3_PERFCTR_TTBR_WRITE);
+ARMV8_EVENT_ATTR(memory_error, ARMV8_PMUV3_PERFCTR_MEMORY_ERROR);
+ARMV8_EVENT_ATTR(inst_spec, ARMV8_PMUV3_PERFCTR_INST_SPEC);
+ARMV8_EVENT_ATTR(ttbr_write_retired, ARMV8_PMUV3_PERFCTR_TTBR_WRITE_RETIRED);
ARMV8_EVENT_ATTR(bus_cycles, ARMV8_PMUV3_PERFCTR_BUS_CYCLES);
-ARMV8_EVENT_ATTR(chain, ARMV8_PMUV3_PERFCTR_CHAIN);
+/* Don't expose the chain event in /sys, since it's useless in isolation */
ARMV8_EVENT_ATTR(l1d_cache_allocate, ARMV8_PMUV3_PERFCTR_L1D_CACHE_ALLOCATE);
ARMV8_EVENT_ATTR(l2d_cache_allocate, ARMV8_PMUV3_PERFCTR_L2D_CACHE_ALLOCATE);
ARMV8_EVENT_ATTR(br_retired, ARMV8_PMUV3_PERFCTR_BR_RETIRED);
@@ -297,9 +432,9 @@ ARMV8_EVENT_ATTR(l3d_cache_refill, ARMV8_PMUV3_PERFCTR_L3D_CACHE_REFILL);
ARMV8_EVENT_ATTR(l3d_cache, ARMV8_PMUV3_PERFCTR_L3D_CACHE);
ARMV8_EVENT_ATTR(l3d_cache_wb, ARMV8_PMUV3_PERFCTR_L3D_CACHE_WB);
ARMV8_EVENT_ATTR(l2d_tlb_refill, ARMV8_PMUV3_PERFCTR_L2D_TLB_REFILL);
-ARMV8_EVENT_ATTR(l21_tlb_refill, ARMV8_PMUV3_PERFCTR_L21_TLB_REFILL);
+ARMV8_EVENT_ATTR(l2i_tlb_refill, ARMV8_PMUV3_PERFCTR_L2I_TLB_REFILL);
ARMV8_EVENT_ATTR(l2d_tlb, ARMV8_PMUV3_PERFCTR_L2D_TLB);
-ARMV8_EVENT_ATTR(l21_tlb, ARMV8_PMUV3_PERFCTR_L21_TLB);
+ARMV8_EVENT_ATTR(l2i_tlb, ARMV8_PMUV3_PERFCTR_L2I_TLB);
static struct attribute *armv8_pmuv3_event_attrs[] = {
&armv8_event_attr_sw_incr.attr.attr,
@@ -332,7 +467,6 @@ static struct attribute *armv8_pmuv3_event_attrs[] = {
&armv8_event_attr_inst_spec.attr.attr,
&armv8_event_attr_ttbr_write_retired.attr.attr,
&armv8_event_attr_bus_cycles.attr.attr,
- &armv8_event_attr_chain.attr.attr,
&armv8_event_attr_l1d_cache_allocate.attr.attr,
&armv8_event_attr_l2d_cache_allocate.attr.attr,
&armv8_event_attr_br_retired.attr.attr,
@@ -348,15 +482,33 @@ static struct attribute *armv8_pmuv3_event_attrs[] = {
&armv8_event_attr_l3d_cache.attr.attr,
&armv8_event_attr_l3d_cache_wb.attr.attr,
&armv8_event_attr_l2d_tlb_refill.attr.attr,
- &armv8_event_attr_l21_tlb_refill.attr.attr,
+ &armv8_event_attr_l2i_tlb_refill.attr.attr,
&armv8_event_attr_l2d_tlb.attr.attr,
- &armv8_event_attr_l21_tlb.attr.attr,
+ &armv8_event_attr_l2i_tlb.attr.attr,
NULL,
};
+static umode_t
+armv8pmu_event_attr_is_visible(struct kobject *kobj,
+ struct attribute *attr, int unused)
+{
+ struct device *dev = kobj_to_dev(kobj);
+ struct pmu *pmu = dev_get_drvdata(dev);
+ struct arm_pmu *cpu_pmu = container_of(pmu, struct arm_pmu, pmu);
+ struct perf_pmu_events_attr *pmu_attr;
+
+ pmu_attr = container_of(attr, struct perf_pmu_events_attr, attr.attr);
+
+ if (test_bit(pmu_attr->id, cpu_pmu->pmceid_bitmap))
+ return attr->mode;
+
+ return 0;
+}
+
static struct attribute_group armv8_pmuv3_events_attr_group = {
.name = "events",
.attrs = armv8_pmuv3_event_attrs,
+ .is_visible = armv8pmu_event_attr_is_visible,
};
PMU_FORMAT_ATTR(event, "config:0-9");
@@ -397,16 +549,14 @@ static const struct attribute_group *armv8_pmuv3_attr_groups[] = {
static inline u32 armv8pmu_pmcr_read(void)
{
- u32 val;
- asm volatile("mrs %0, pmcr_el0" : "=r" (val));
- return val;
+ return read_sysreg(pmcr_el0);
}
static inline void armv8pmu_pmcr_write(u32 val)
{
val &= ARMV8_PMU_PMCR_MASK;
isb();
- asm volatile("msr pmcr_el0, %0" :: "r" (val));
+ write_sysreg(val, pmcr_el0);
}
static inline int armv8pmu_has_overflowed(u32 pmovsr)
@@ -428,7 +578,7 @@ static inline int armv8pmu_counter_has_overflowed(u32 pmnc, int idx)
static inline int armv8pmu_select_counter(int idx)
{
u32 counter = ARMV8_IDX_TO_COUNTER(idx);
- asm volatile("msr pmselr_el0, %0" :: "r" (counter));
+ write_sysreg(counter, pmselr_el0);
isb();
return idx;
@@ -445,9 +595,9 @@ static inline u32 armv8pmu_read_counter(struct perf_event *event)
pr_err("CPU%u reading wrong counter %d\n",
smp_processor_id(), idx);
else if (idx == ARMV8_IDX_CYCLE_COUNTER)
- asm volatile("mrs %0, pmccntr_el0" : "=r" (value));
+ value = read_sysreg(pmccntr_el0);
else if (armv8pmu_select_counter(idx) == idx)
- asm volatile("mrs %0, pmxevcntr_el0" : "=r" (value));
+ value = read_sysreg(pmxevcntr_el0);
return value;
}
@@ -469,47 +619,47 @@ static inline void armv8pmu_write_counter(struct perf_event *event, u32 value)
*/
u64 value64 = 0xffffffff00000000ULL | value;
- asm volatile("msr pmccntr_el0, %0" :: "r" (value64));
+ write_sysreg(value64, pmccntr_el0);
} else if (armv8pmu_select_counter(idx) == idx)
- asm volatile("msr pmxevcntr_el0, %0" :: "r" (value));
+ write_sysreg(value, pmxevcntr_el0);
}
static inline void armv8pmu_write_evtype(int idx, u32 val)
{
if (armv8pmu_select_counter(idx) == idx) {
val &= ARMV8_PMU_EVTYPE_MASK;
- asm volatile("msr pmxevtyper_el0, %0" :: "r" (val));
+ write_sysreg(val, pmxevtyper_el0);
}
}
static inline int armv8pmu_enable_counter(int idx)
{
u32 counter = ARMV8_IDX_TO_COUNTER(idx);
- asm volatile("msr pmcntenset_el0, %0" :: "r" (BIT(counter)));
+ write_sysreg(BIT(counter), pmcntenset_el0);
return idx;
}
static inline int armv8pmu_disable_counter(int idx)
{
u32 counter = ARMV8_IDX_TO_COUNTER(idx);
- asm volatile("msr pmcntenclr_el0, %0" :: "r" (BIT(counter)));
+ write_sysreg(BIT(counter), pmcntenclr_el0);
return idx;
}
static inline int armv8pmu_enable_intens(int idx)
{
u32 counter = ARMV8_IDX_TO_COUNTER(idx);
- asm volatile("msr pmintenset_el1, %0" :: "r" (BIT(counter)));
+ write_sysreg(BIT(counter), pmintenset_el1);
return idx;
}
static inline int armv8pmu_disable_intens(int idx)
{
u32 counter = ARMV8_IDX_TO_COUNTER(idx);
- asm volatile("msr pmintenclr_el1, %0" :: "r" (BIT(counter)));
+ write_sysreg(BIT(counter), pmintenclr_el1);
isb();
/* Clear the overflow flag in case an interrupt is pending. */
- asm volatile("msr pmovsclr_el0, %0" :: "r" (BIT(counter)));
+ write_sysreg(BIT(counter), pmovsclr_el0);
isb();
return idx;
@@ -520,11 +670,11 @@ static inline u32 armv8pmu_getreset_flags(void)
u32 value;
/* Read */
- asm volatile("mrs %0, pmovsclr_el0" : "=r" (value));
+ value = read_sysreg(pmovsclr_el0);
/* Write to clear flags */
value &= ARMV8_PMU_OVSR_MASK;
- asm volatile("msr pmovsclr_el0, %0" :: "r" (value));
+ write_sysreg(value, pmovsclr_el0);
return value;
}
@@ -685,7 +835,7 @@ static int armv8pmu_get_event_idx(struct pmu_hw_events *cpuc,
unsigned long evtype = hwc->config_base & ARMV8_PMU_EVTYPE_EVENT;
/* Always place a cycle counter into the cycle counter. */
- if (evtype == ARMV8_PMUV3_PERFCTR_CLOCK_CYCLES) {
+ if (evtype == ARMV8_PMUV3_PERFCTR_CPU_CYCLES) {
if (test_and_set_bit(ARMV8_IDX_CYCLE_COUNTER, cpuc->used_mask))
return -EAGAIN;
@@ -781,22 +931,38 @@ static int armv8_thunder_map_event(struct perf_event *event)
ARMV8_PMU_EVTYPE_EVENT);
}
-static void armv8pmu_read_num_pmnc_events(void *info)
+static int armv8_vulcan_map_event(struct perf_event *event)
+{
+ return armpmu_map_event(event, &armv8_vulcan_perf_map,
+ &armv8_vulcan_perf_cache_map,
+ ARMV8_PMU_EVTYPE_EVENT);
+}
+
+static void __armv8pmu_probe_pmu(void *info)
{
- int *nb_cnt = info;
+ struct arm_pmu *cpu_pmu = info;
+ u32 pmceid[2];
/* Read the nb of CNTx counters supported from PMNC */
- *nb_cnt = (armv8pmu_pmcr_read() >> ARMV8_PMU_PMCR_N_SHIFT) & ARMV8_PMU_PMCR_N_MASK;
+ cpu_pmu->num_events = (armv8pmu_pmcr_read() >> ARMV8_PMU_PMCR_N_SHIFT)
+ & ARMV8_PMU_PMCR_N_MASK;
/* Add the CPU cycles counter */
- *nb_cnt += 1;
+ cpu_pmu->num_events += 1;
+
+ pmceid[0] = read_sysreg(pmceid0_el0);
+ pmceid[1] = read_sysreg(pmceid1_el0);
+
+ bitmap_from_u32array(cpu_pmu->pmceid_bitmap,
+ ARMV8_PMUV3_MAX_COMMON_EVENTS, pmceid,
+ ARRAY_SIZE(pmceid));
}
-static int armv8pmu_probe_num_events(struct arm_pmu *arm_pmu)
+static int armv8pmu_probe_pmu(struct arm_pmu *cpu_pmu)
{
- return smp_call_function_any(&arm_pmu->supported_cpus,
- armv8pmu_read_num_pmnc_events,
- &arm_pmu->num_events, 1);
+ return smp_call_function_any(&cpu_pmu->supported_cpus,
+ __armv8pmu_probe_pmu,
+ cpu_pmu, 1);
}
static void armv8_pmu_init(struct arm_pmu *cpu_pmu)
@@ -819,7 +985,8 @@ static int armv8_pmuv3_init(struct arm_pmu *cpu_pmu)
armv8_pmu_init(cpu_pmu);
cpu_pmu->name = "armv8_pmuv3";
cpu_pmu->map_event = armv8_pmuv3_map_event;
- return armv8pmu_probe_num_events(cpu_pmu);
+ cpu_pmu->pmu.attr_groups = armv8_pmuv3_attr_groups;
+ return armv8pmu_probe_pmu(cpu_pmu);
}
static int armv8_a53_pmu_init(struct arm_pmu *cpu_pmu)
@@ -828,7 +995,7 @@ static int armv8_a53_pmu_init(struct arm_pmu *cpu_pmu)
cpu_pmu->name = "armv8_cortex_a53";
cpu_pmu->map_event = armv8_a53_map_event;
cpu_pmu->pmu.attr_groups = armv8_pmuv3_attr_groups;
- return armv8pmu_probe_num_events(cpu_pmu);
+ return armv8pmu_probe_pmu(cpu_pmu);
}
static int armv8_a57_pmu_init(struct arm_pmu *cpu_pmu)
@@ -837,7 +1004,7 @@ static int armv8_a57_pmu_init(struct arm_pmu *cpu_pmu)
cpu_pmu->name = "armv8_cortex_a57";
cpu_pmu->map_event = armv8_a57_map_event;
cpu_pmu->pmu.attr_groups = armv8_pmuv3_attr_groups;
- return armv8pmu_probe_num_events(cpu_pmu);
+ return armv8pmu_probe_pmu(cpu_pmu);
}
static int armv8_a72_pmu_init(struct arm_pmu *cpu_pmu)
@@ -846,7 +1013,7 @@ static int armv8_a72_pmu_init(struct arm_pmu *cpu_pmu)
cpu_pmu->name = "armv8_cortex_a72";
cpu_pmu->map_event = armv8_a57_map_event;
cpu_pmu->pmu.attr_groups = armv8_pmuv3_attr_groups;
- return armv8pmu_probe_num_events(cpu_pmu);
+ return armv8pmu_probe_pmu(cpu_pmu);
}
static int armv8_thunder_pmu_init(struct arm_pmu *cpu_pmu)
@@ -855,7 +1022,16 @@ static int armv8_thunder_pmu_init(struct arm_pmu *cpu_pmu)
cpu_pmu->name = "armv8_cavium_thunder";
cpu_pmu->map_event = armv8_thunder_map_event;
cpu_pmu->pmu.attr_groups = armv8_pmuv3_attr_groups;
- return armv8pmu_probe_num_events(cpu_pmu);
+ return armv8pmu_probe_pmu(cpu_pmu);
+}
+
+static int armv8_vulcan_pmu_init(struct arm_pmu *cpu_pmu)
+{
+ armv8_pmu_init(cpu_pmu);
+ cpu_pmu->name = "armv8_brcm_vulcan";
+ cpu_pmu->map_event = armv8_vulcan_map_event;
+ cpu_pmu->pmu.attr_groups = armv8_pmuv3_attr_groups;
+ return armv8pmu_probe_pmu(cpu_pmu);
}
static const struct of_device_id armv8_pmu_of_device_ids[] = {
@@ -864,6 +1040,7 @@ static const struct of_device_id armv8_pmu_of_device_ids[] = {
{.compatible = "arm,cortex-a57-pmu", .data = armv8_a57_pmu_init},
{.compatible = "arm,cortex-a72-pmu", .data = armv8_a72_pmu_init},
{.compatible = "cavium,thunder-pmu", .data = armv8_thunder_pmu_init},
+ {.compatible = "brcm,vulcan-pmu", .data = armv8_vulcan_pmu_init},
{},
};
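The probe path added in this file reads PMCEID0_EL0/PMCEID1_EL0 into a per-PMU bitmap, and the new is_visible hook only exposes sysfs event attributes whose IDs are set in that bitmap. A minimal stand-alone C sketch of that filtering idea follows; the register values are invented and this is not the kernel code itself.

#include <stdint.h>
#include <stdio.h>

/* Fold the two 32-bit common-event ID registers into one 64-bit map. */
static int event_is_implemented(uint32_t pmceid0, uint32_t pmceid1, unsigned int event)
{
	uint64_t map = ((uint64_t)pmceid1 << 32) | pmceid0;

	return event < 64 && ((map >> event) & 1);
}

int main(void)
{
	uint32_t pmceid0 = 0x000209ffu;	/* hypothetical PMCEID0_EL0 value */
	uint32_t pmceid1 = 0x00000000u;	/* hypothetical PMCEID1_EL0 value */

	/* 0x11 is the architectural CPU_CYCLES event number. */
	printf("cpu_cycles visible: %d\n",
	       event_is_implemented(pmceid0, pmceid1, 0x11));
	return 0;
}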
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index 80624829db61..6cd2612236dc 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -200,13 +200,6 @@ void show_regs(struct pt_regs * regs)
__show_regs(regs);
}
-/*
- * Free current thread data structures etc..
- */
-void exit_thread(void)
-{
-}
-
static void tls_thread_flush(void)
{
asm ("msr tpidr_el0, xzr");
@@ -265,9 +258,6 @@ int copy_thread(unsigned long clone_flags, unsigned long stack_start,
if (stack_start) {
if (is_compat_thread(task_thread_info(p)))
childregs->compat_sp = stack_start;
- /* 16-byte aligned stack mandatory on AArch64 */
- else if (stack_start & 15)
- return -EINVAL;
else
childregs->sp = stack_start;
}
@@ -382,13 +372,14 @@ unsigned long arch_align_stack(unsigned long sp)
return sp & ~0xf;
}
-static unsigned long randomize_base(unsigned long base)
-{
- unsigned long range_end = base + (STACK_RND_MASK << PAGE_SHIFT) + 1;
- return randomize_range(base, range_end, 0) ? : base;
-}
-
unsigned long arch_randomize_brk(struct mm_struct *mm)
{
- return randomize_base(mm->brk);
+ unsigned long range_end = mm->brk;
+
+ if (is_compat_task())
+ range_end += 0x02000000;
+ else
+ range_end += 0x40000000;
+
+ return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
}
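The rewritten arch_randomize_brk() above draws the heap base from a window of 32 MiB (0x02000000) above mm->brk for compat tasks and 1 GiB (0x40000000) for native tasks, falling back to mm->brk itself when randomize_range() returns 0. A rough user-space sketch of that selection, with alignment ignored and addresses made up:

#include <stdio.h>
#include <stdlib.h>

static unsigned long pick_brk(unsigned long brk, int is_compat)
{
	unsigned long window = is_compat ? 0x02000000UL : 0x40000000UL;
	unsigned long offset = (unsigned long)rand() % window;

	return offset ? brk + offset : brk;	/* mirrors the "?: mm->brk" fallback */
}

int main(void)
{
	printf("native brk: 0x%lx\n", pick_brk(0x555555560000UL, 0));
	printf("compat brk: 0x%lx\n", pick_brk(0x08060000UL, 1));
	return 0;
}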
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 9dc67769b6a4..3279defabaa2 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -53,6 +53,7 @@
#include <asm/cpufeature.h>
#include <asm/cpu_ops.h>
#include <asm/kasan.h>
+#include <asm/numa.h>
#include <asm/sections.h>
#include <asm/setup.h>
#include <asm/smp_plat.h>
@@ -175,7 +176,6 @@ static void __init smp_build_mpidr_hash(void)
*/
if (mpidr_hash_size() > 4 * num_possible_cpus())
pr_warn("Large number of MPIDR hash buckets detected\n");
- __flush_dcache_area(&mpidr_hash, sizeof(struct mpidr_hash));
}
static void __init setup_machine_fdt(phys_addr_t dt_phys)
@@ -224,69 +224,6 @@ static void __init request_standard_resources(void)
}
}
-#ifdef CONFIG_BLK_DEV_INITRD
-/*
- * Relocate initrd if it is not completely within the linear mapping.
- * This would be the case if mem= cuts out all or part of it.
- */
-static void __init relocate_initrd(void)
-{
- phys_addr_t orig_start = __virt_to_phys(initrd_start);
- phys_addr_t orig_end = __virt_to_phys(initrd_end);
- phys_addr_t ram_end = memblock_end_of_DRAM();
- phys_addr_t new_start;
- unsigned long size, to_free = 0;
- void *dest;
-
- if (orig_end <= ram_end)
- return;
-
- /*
- * Any of the original initrd which overlaps the linear map should
- * be freed after relocating.
- */
- if (orig_start < ram_end)
- to_free = ram_end - orig_start;
-
- size = orig_end - orig_start;
- if (!size)
- return;
-
- /* initrd needs to be relocated completely inside linear mapping */
- new_start = memblock_find_in_range(0, PFN_PHYS(max_pfn),
- size, PAGE_SIZE);
- if (!new_start)
- panic("Cannot relocate initrd of size %ld\n", size);
- memblock_reserve(new_start, size);
-
- initrd_start = __phys_to_virt(new_start);
- initrd_end = initrd_start + size;
-
- pr_info("Moving initrd from [%llx-%llx] to [%llx-%llx]\n",
- orig_start, orig_start + size - 1,
- new_start, new_start + size - 1);
-
- dest = (void *)initrd_start;
-
- if (to_free) {
- memcpy(dest, (void *)__phys_to_virt(orig_start), to_free);
- dest += to_free;
- }
-
- copy_from_early_mem(dest, orig_start + to_free, size - to_free);
-
- if (to_free) {
- pr_info("Freeing original RAMDISK from [%llx-%llx]\n",
- orig_start, orig_start + to_free - 1);
- memblock_free(orig_start, to_free);
- }
-}
-#else
-static inline void __init relocate_initrd(void)
-{
-}
-#endif
-
u64 __cpu_logical_map[NR_CPUS] = { [0 ... NR_CPUS-1] = INVALID_HWID };
void __init setup_arch(char **cmdline_p)
@@ -327,7 +264,11 @@ void __init setup_arch(char **cmdline_p)
acpi_boot_table_init();
paging_init();
- relocate_initrd();
+
+ if (acpi_disabled)
+ unflatten_device_tree();
+
+ bootmem_init();
kasan_init();
@@ -335,12 +276,11 @@ void __init setup_arch(char **cmdline_p)
early_ioremap_reset();
- if (acpi_disabled) {
- unflatten_device_tree();
+ if (acpi_disabled)
psci_dt_init();
- } else {
+ else
psci_acpi_init();
- }
+
xen_early_init();
cpu_read_bootcpu_ops();
@@ -379,6 +319,9 @@ static int __init topology_init(void)
{
int i;
+ for_each_online_node(i)
+ register_one_node(i);
+
for_each_possible_cpu(i) {
struct cpu *cpu = &per_cpu(cpu_data.cpu, i);
cpu->hotpluggable = 1;
diff --git a/arch/arm64/kernel/sleep.S b/arch/arm64/kernel/sleep.S
index fd10eb663868..9a3aec97ac09 100644
--- a/arch/arm64/kernel/sleep.S
+++ b/arch/arm64/kernel/sleep.S
@@ -49,39 +49,32 @@
orr \dst, \dst, \mask // dst|=(aff3>>rs3)
.endm
/*
- * Save CPU state for a suspend and execute the suspend finisher.
- * On success it will return 0 through cpu_resume - ie through a CPU
- * soft/hard reboot from the reset vector.
- * On failure it returns the suspend finisher return value or force
- * -EOPNOTSUPP if the finisher erroneously returns 0 (the suspend finisher
- * is not allowed to return, if it does this must be considered failure).
- * It saves callee registers, and allocates space on the kernel stack
- * to save the CPU specific registers + some other data for resume.
+ * Save CPU state in the provided sleep_stack_data area, and publish its
+ * location for cpu_resume()'s use in sleep_save_stash.
*
- * x0 = suspend finisher argument
- * x1 = suspend finisher function pointer
+ * cpu_resume() will restore this saved state, and return. Because the
+ * link-register is saved and restored, it will appear to return from this
+ * function. So that the caller can tell the suspend/resume paths apart,
+ * __cpu_suspend_enter() will always return a non-zero value, whereas the
+ * path through cpu_resume() will return 0.
+ *
+ * x0 = struct sleep_stack_data area
*/
ENTRY(__cpu_suspend_enter)
- stp x29, lr, [sp, #-96]!
- stp x19, x20, [sp,#16]
- stp x21, x22, [sp,#32]
- stp x23, x24, [sp,#48]
- stp x25, x26, [sp,#64]
- stp x27, x28, [sp,#80]
- /*
- * Stash suspend finisher and its argument in x20 and x19
- */
- mov x19, x0
- mov x20, x1
+ stp x29, lr, [x0, #SLEEP_STACK_DATA_CALLEE_REGS]
+ stp x19, x20, [x0,#SLEEP_STACK_DATA_CALLEE_REGS+16]
+ stp x21, x22, [x0,#SLEEP_STACK_DATA_CALLEE_REGS+32]
+ stp x23, x24, [x0,#SLEEP_STACK_DATA_CALLEE_REGS+48]
+ stp x25, x26, [x0,#SLEEP_STACK_DATA_CALLEE_REGS+64]
+ stp x27, x28, [x0,#SLEEP_STACK_DATA_CALLEE_REGS+80]
+
+ /* save the sp in cpu_suspend_ctx */
mov x2, sp
- sub sp, sp, #CPU_SUSPEND_SZ // allocate cpu_suspend_ctx
- mov x0, sp
- /*
- * x0 now points to struct cpu_suspend_ctx allocated on the stack
- */
- str x2, [x0, #CPU_CTX_SP]
- ldr x1, =sleep_save_sp
- ldr x1, [x1, #SLEEP_SAVE_SP_VIRT]
+ str x2, [x0, #SLEEP_STACK_DATA_SYSTEM_REGS + CPU_CTX_SP]
+
+ /* find the mpidr_hash */
+ ldr x1, =sleep_save_stash
+ ldr x1, [x1]
mrs x7, mpidr_el1
ldr x9, =mpidr_hash
ldr x10, [x9, #MPIDR_HASH_MASK]
@@ -93,74 +86,28 @@ ENTRY(__cpu_suspend_enter)
ldp w5, w6, [x9, #(MPIDR_HASH_SHIFTS + 8)]
compute_mpidr_hash x8, x3, x4, x5, x6, x7, x10
add x1, x1, x8, lsl #3
- bl __cpu_suspend_save
- /*
- * Grab suspend finisher in x20 and its argument in x19
- */
- mov x0, x19
- mov x1, x20
- /*
- * We are ready for power down, fire off the suspend finisher
- * in x1, with argument in x0
- */
- blr x1
- /*
- * Never gets here, unless suspend finisher fails.
- * Successful cpu_suspend should return from cpu_resume, returning
- * through this code path is considered an error
- * If the return value is set to 0 force x0 = -EOPNOTSUPP
- * to make sure a proper error condition is propagated
- */
- cmp x0, #0
- mov x3, #-EOPNOTSUPP
- csel x0, x3, x0, eq
- add sp, sp, #CPU_SUSPEND_SZ // rewind stack pointer
- ldp x19, x20, [sp, #16]
- ldp x21, x22, [sp, #32]
- ldp x23, x24, [sp, #48]
- ldp x25, x26, [sp, #64]
- ldp x27, x28, [sp, #80]
- ldp x29, lr, [sp], #96
+
+ str x0, [x1]
+ add x0, x0, #SLEEP_STACK_DATA_SYSTEM_REGS
+ stp x29, lr, [sp, #-16]!
+ bl cpu_do_suspend
+ ldp x29, lr, [sp], #16
+ mov x0, #1
ret
ENDPROC(__cpu_suspend_enter)
.ltorg
-/*
- * x0 must contain the sctlr value retrieved from restored context
- */
- .pushsection ".idmap.text", "ax"
-ENTRY(cpu_resume_mmu)
- ldr x3, =cpu_resume_after_mmu
- msr sctlr_el1, x0 // restore sctlr_el1
- isb
- /*
- * Invalidate the local I-cache so that any instructions fetched
- * speculatively from the PoC are discarded, since they may have
- * been dynamically patched at the PoU.
- */
- ic iallu
- dsb nsh
- isb
- br x3 // global jump to virtual address
-ENDPROC(cpu_resume_mmu)
- .popsection
-cpu_resume_after_mmu:
-#ifdef CONFIG_KASAN
- mov x0, sp
- bl kasan_unpoison_remaining_stack
-#endif
- mov x0, #0 // return zero on success
- ldp x19, x20, [sp, #16]
- ldp x21, x22, [sp, #32]
- ldp x23, x24, [sp, #48]
- ldp x25, x26, [sp, #64]
- ldp x27, x28, [sp, #80]
- ldp x29, lr, [sp], #96
- ret
-ENDPROC(cpu_resume_after_mmu)
-
ENTRY(cpu_resume)
bl el2_setup // if in EL2 drop to EL1 cleanly
+ /* enable the MMU early - so we can access sleep_save_stash by va */
+ adr_l lr, __enable_mmu /* __cpu_setup will return here */
+ ldr x27, =_cpu_resume /* __enable_mmu will branch here */
+ adrp x25, idmap_pg_dir
+ adrp x26, swapper_pg_dir
+ b __cpu_setup
+ENDPROC(cpu_resume)
+
+ENTRY(_cpu_resume)
mrs x1, mpidr_el1
adrp x8, mpidr_hash
add x8, x8, #:lo12:mpidr_hash // x8 = struct mpidr_hash phys address
@@ -170,20 +117,32 @@ ENTRY(cpu_resume)
ldp w5, w6, [x8, #(MPIDR_HASH_SHIFTS + 8)]
compute_mpidr_hash x7, x3, x4, x5, x6, x1, x2
/* x7 contains hash index, let's use it to grab context pointer */
- ldr_l x0, sleep_save_sp + SLEEP_SAVE_SP_PHYS
+ ldr_l x0, sleep_save_stash
ldr x0, [x0, x7, lsl #3]
+ add x29, x0, #SLEEP_STACK_DATA_CALLEE_REGS
+ add x0, x0, #SLEEP_STACK_DATA_SYSTEM_REGS
/* load sp from context */
ldr x2, [x0, #CPU_CTX_SP]
- /* load physical address of identity map page table in x1 */
- adrp x1, idmap_pg_dir
mov sp, x2
/* save thread_info */
and x2, x2, #~(THREAD_SIZE - 1)
msr sp_el0, x2
/*
- * cpu_do_resume expects x0 to contain context physical address
- * pointer and x1 to contain physical address of 1:1 page tables
+ * cpu_do_resume expects x0 to contain context address pointer
*/
- bl cpu_do_resume // PC relative jump, MMU off
- b cpu_resume_mmu // Resume MMU, never returns
-ENDPROC(cpu_resume)
+ bl cpu_do_resume
+
+#ifdef CONFIG_KASAN
+ mov x0, sp
+ bl kasan_unpoison_remaining_stack
+#endif
+
+ ldp x19, x20, [x29, #16]
+ ldp x21, x22, [x29, #32]
+ ldp x23, x24, [x29, #48]
+ ldp x25, x26, [x29, #64]
+ ldp x27, x28, [x29, #80]
+ ldp x29, lr, [x29]
+ mov x0, #0
+ ret
+ENDPROC(_cpu_resume)
diff --git a/arch/arm64/kernel/smp.c b/arch/arm64/kernel/smp.c
index b2d5f4ee9a1c..678e0842cb3b 100644
--- a/arch/arm64/kernel/smp.c
+++ b/arch/arm64/kernel/smp.c
@@ -45,6 +45,7 @@
#include <asm/cputype.h>
#include <asm/cpu_ops.h>
#include <asm/mmu_context.h>
+#include <asm/numa.h>
#include <asm/pgtable.h>
#include <asm/pgalloc.h>
#include <asm/processor.h>
@@ -75,6 +76,43 @@ enum ipi_msg_type {
IPI_WAKEUP
};
+#ifdef CONFIG_ARM64_VHE
+
+/* Whether the boot CPU is running in HYP mode or not */
+static bool boot_cpu_hyp_mode;
+
+static inline void save_boot_cpu_run_el(void)
+{
+ boot_cpu_hyp_mode = is_kernel_in_hyp_mode();
+}
+
+static inline bool is_boot_cpu_in_hyp_mode(void)
+{
+ return boot_cpu_hyp_mode;
+}
+
+/*
+ * Verify that a secondary CPU is running the kernel at the same
+ * EL as that of the boot CPU.
+ */
+void verify_cpu_run_el(void)
+{
+ bool in_el2 = is_kernel_in_hyp_mode();
+ bool boot_cpu_el2 = is_boot_cpu_in_hyp_mode();
+
+ if (in_el2 ^ boot_cpu_el2) {
+ pr_crit("CPU%d: mismatched Exception Level(EL%d) with boot CPU(EL%d)\n",
+ smp_processor_id(),
+ in_el2 ? 2 : 1,
+ boot_cpu_el2 ? 2 : 1);
+ cpu_panic_kernel();
+ }
+}
+
+#else
+static inline void save_boot_cpu_run_el(void) {}
+#endif
+
#ifdef CONFIG_HOTPLUG_CPU
static int op_cpu_kill(unsigned int cpu);
#else
@@ -166,6 +204,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *idle)
static void smp_store_cpu_info(unsigned int cpuid)
{
store_cpu_topology(cpuid);
+ numa_store_cpu_info(cpuid);
}
/*
@@ -225,8 +264,6 @@ asmlinkage void secondary_start_kernel(void)
pr_info("CPU%u: Booted secondary processor [%08x]\n",
cpu, read_cpuid_id());
update_cpu_boot_status(CPU_BOOT_SUCCESS);
- /* Make sure the status update is visible before we complete */
- smp_wmb();
set_cpu_online(cpu, true);
complete(&cpu_running);
@@ -401,6 +438,7 @@ void __init smp_cpus_done(unsigned int max_cpus)
void __init smp_prepare_boot_cpu(void)
{
cpuinfo_store_boot_cpu();
+ save_boot_cpu_run_el();
set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
}
@@ -595,6 +633,8 @@ static void __init of_parse_and_init_cpus(void)
pr_debug("cpu logical map 0x%llx\n", hwid);
cpu_logical_map(cpu_count) = hwid;
+
+ early_map_cpu_to_node(cpu_count, of_node_to_nid(dn));
next:
cpu_count++;
}
@@ -647,33 +687,18 @@ void __init smp_init_cpus(void)
void __init smp_prepare_cpus(unsigned int max_cpus)
{
int err;
- unsigned int cpu, ncores = num_possible_cpus();
+ unsigned int cpu;
init_cpu_topology();
smp_store_cpu_info(smp_processor_id());
/*
- * are we trying to boot more cores than exist?
- */
- if (max_cpus > ncores)
- max_cpus = ncores;
-
- /* Don't bother if we're effectively UP */
- if (max_cpus <= 1)
- return;
-
- /*
* Initialise the present map (which describes the set of CPUs
* actually populated at the present time) and release the
* secondaries from the bootloader.
- *
- * Make sure we online at most (max_cpus - 1) additional CPUs.
*/
- max_cpus--;
for_each_possible_cpu(cpu) {
- if (max_cpus == 0)
- break;
if (cpu == smp_processor_id())
continue;
@@ -686,7 +711,6 @@ void __init smp_prepare_cpus(unsigned int max_cpus)
continue;
set_cpu_present(cpu, true);
- max_cpus--;
}
}
@@ -763,21 +787,11 @@ void arch_irq_work_raise(void)
}
#endif
-static DEFINE_RAW_SPINLOCK(stop_lock);
-
/*
* ipi_cpu_stop - handle IPI from smp_send_stop()
*/
static void ipi_cpu_stop(unsigned int cpu)
{
- if (system_state == SYSTEM_BOOTING ||
- system_state == SYSTEM_RUNNING) {
- raw_spin_lock(&stop_lock);
- pr_crit("CPU%u: stopping\n", cpu);
- dump_stack();
- raw_spin_unlock(&stop_lock);
- }
-
set_cpu_online(cpu, false);
local_irq_disable();
@@ -872,6 +886,9 @@ void smp_send_stop(void)
cpumask_copy(&mask, cpu_online_mask);
cpumask_clear_cpu(smp_processor_id(), &mask);
+ if (system_state == SYSTEM_BOOTING ||
+ system_state == SYSTEM_RUNNING)
+ pr_crit("SMP: stopping secondary CPUs\n");
smp_cross_call(&mask, IPI_CPU_STOP);
}
@@ -881,7 +898,8 @@ void smp_send_stop(void)
udelay(1);
if (num_online_cpus() > 1)
- pr_warning("SMP: failed to stop secondary CPUs\n");
+ pr_warning("SMP: failed to stop secondary CPUs %*pbl\n",
+ cpumask_pr_args(cpu_online_mask));
}
/*
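The verify_cpu_run_el() check added above reduces to an exclusive-or of two booleans: the exception level the boot CPU came up in versus the one this secondary is running at. A tiny illustrative sketch with invented values:

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
	bool boot_cpu_el2 = true;	/* boot CPU entered the kernel at EL2 */
	bool this_cpu_el2 = false;	/* this secondary came up at EL1 */

	if (boot_cpu_el2 ^ this_cpu_el2)
		printf("mismatched exception level: the kernel would panic this CPU\n");
	return 0;
}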
diff --git a/arch/arm64/kernel/suspend.c b/arch/arm64/kernel/suspend.c
index 66055392f445..b616e365cee3 100644
--- a/arch/arm64/kernel/suspend.c
+++ b/arch/arm64/kernel/suspend.c
@@ -10,30 +10,11 @@
#include <asm/suspend.h>
#include <asm/tlbflush.h>
-extern int __cpu_suspend_enter(unsigned long arg, int (*fn)(unsigned long));
/*
- * This is called by __cpu_suspend_enter() to save the state, and do whatever
- * flushing is required to ensure that when the CPU goes to sleep we have
- * the necessary data available when the caches are not searched.
- *
- * ptr: CPU context virtual address
- * save_ptr: address of the location where the context physical address
- * must be saved
+ * This is allocated by cpu_suspend_init(), and used to store a pointer to
+ * the 'struct sleep_stack_data' that contains a particular CPU's state.
*/
-void notrace __cpu_suspend_save(struct cpu_suspend_ctx *ptr,
- phys_addr_t *save_ptr)
-{
- *save_ptr = virt_to_phys(ptr);
-
- cpu_do_suspend(ptr);
- /*
- * Only flush the context that must be retrieved with the MMU
- * off. VA primitives ensure the flush is applied to all
- * cache levels so context is pushed to DRAM.
- */
- __flush_dcache_area(ptr, sizeof(*ptr));
- __flush_dcache_area(save_ptr, sizeof(*save_ptr));
-}
+unsigned long *sleep_save_stash;
/*
* This hook is provided so that cpu_suspend code can restore HW
@@ -51,6 +32,30 @@ void __init cpu_suspend_set_dbg_restorer(void (*hw_bp_restore)(void *))
hw_breakpoint_restore = hw_bp_restore;
}
+void notrace __cpu_suspend_exit(void)
+{
+ /*
+ * We are resuming from reset with the idmap active in TTBR0_EL1.
+ * We must uninstall the idmap and restore the expected MMU
+ * state before we can possibly return to userspace.
+ */
+ cpu_uninstall_idmap();
+
+ /*
+ * Restore per-cpu offset before any kernel
+ * subsystem relying on it has a chance to run.
+ */
+ set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
+
+ /*
+ * Restore HW breakpoint registers to sane values
+ * before debug exceptions are possibly reenabled
+ * through local_dbg_restore.
+ */
+ if (hw_breakpoint_restore)
+ hw_breakpoint_restore(NULL);
+}
+
/*
* cpu_suspend
*
@@ -60,8 +65,9 @@ void __init cpu_suspend_set_dbg_restorer(void (*hw_bp_restore)(void *))
*/
int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
{
- int ret;
+ int ret = 0;
unsigned long flags;
+ struct sleep_stack_data state;
/*
* From this point debug exceptions are disabled to prevent
@@ -77,34 +83,21 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
*/
pause_graph_tracing();
- /*
- * mm context saved on the stack, it will be restored when
- * the cpu comes out of reset through the identity mapped
- * page tables, so that the thread address space is properly
- * set-up on function return.
- */
- ret = __cpu_suspend_enter(arg, fn);
- if (ret == 0) {
- /*
- * We are resuming from reset with the idmap active in TTBR0_EL1.
- * We must uninstall the idmap and restore the expected MMU
- * state before we can possibly return to userspace.
- */
- cpu_uninstall_idmap();
+ if (__cpu_suspend_enter(&state)) {
+ /* Call the suspend finisher */
+ ret = fn(arg);
/*
- * Restore per-cpu offset before any kernel
- * subsystem relying on it has a chance to run.
+ * Never gets here, unless the suspend finisher fails.
+ * Successful cpu_suspend() should return from cpu_resume(),
+ * returning through this code path is considered an error.
+ * If the return value is set to 0, force ret = -EOPNOTSUPP
+ * to make sure a proper error condition is propagated.
*/
- set_my_cpu_offset(per_cpu_offset(smp_processor_id()));
-
- /*
- * Restore HW breakpoint registers to sane values
- * before debug exceptions are possibly reenabled
- * through local_dbg_restore.
- */
- if (hw_breakpoint_restore)
- hw_breakpoint_restore(NULL);
+ if (!ret)
+ ret = -EOPNOTSUPP;
+ } else {
+ __cpu_suspend_exit();
}
unpause_graph_tracing();
@@ -119,22 +112,15 @@ int cpu_suspend(unsigned long arg, int (*fn)(unsigned long))
return ret;
}
-struct sleep_save_sp sleep_save_sp;
-
static int __init cpu_suspend_init(void)
{
- void *ctx_ptr;
-
/* ctx_ptr is an array of physical addresses */
- ctx_ptr = kcalloc(mpidr_hash_size(), sizeof(phys_addr_t), GFP_KERNEL);
+ sleep_save_stash = kcalloc(mpidr_hash_size(), sizeof(*sleep_save_stash),
+ GFP_KERNEL);
- if (WARN_ON(!ctx_ptr))
+ if (WARN_ON(!sleep_save_stash))
return -ENOMEM;
- sleep_save_sp.save_ptr_stash = ctx_ptr;
- sleep_save_sp.save_ptr_stash_phys = virt_to_phys(ctx_ptr);
- __flush_dcache_area(&sleep_save_sp, sizeof(struct sleep_save_sp));
-
return 0;
}
early_initcall(cpu_suspend_init);
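The reworked cpu_suspend() above relies on __cpu_suspend_enter() "returning twice": a non-zero value on the way into suspend (after which the finisher runs and must not return), and zero when the CPU comes back through cpu_resume(). A user-space analogy with setjmp/longjmp, purely illustrative and with the return-value polarity inverted relative to the kernel code:

#include <setjmp.h>
#include <stdio.h>

static jmp_buf resume_ctx;

static void fake_cpu_resume(void)
{
	longjmp(resume_ctx, 1);		/* "wake up" and take the second return */
}

int main(void)
{
	if (setjmp(resume_ctx) == 0) {
		/* first return: analogous to __cpu_suspend_enter() returning non-zero */
		printf("suspend path: a real finisher would power the CPU down here\n");
		fake_cpu_resume();
	} else {
		/* second return: analogous to the zero return via cpu_resume() */
		printf("resume path: ret = 0\n");
	}
	return 0;
}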
diff --git a/arch/arm64/kernel/sys.c b/arch/arm64/kernel/sys.c
index 75151aaf1a52..26fe8ea93ea2 100644
--- a/arch/arm64/kernel/sys.c
+++ b/arch/arm64/kernel/sys.c
@@ -25,6 +25,7 @@
#include <linux/sched.h>
#include <linux/slab.h>
#include <linux/syscalls.h>
+#include <asm/cpufeature.h>
asmlinkage long sys_mmap(unsigned long addr, unsigned long len,
unsigned long prot, unsigned long flags,
@@ -36,11 +37,20 @@ asmlinkage long sys_mmap(unsigned long addr, unsigned long len,
return sys_mmap_pgoff(addr, len, prot, flags, fd, off >> PAGE_SHIFT);
}
+SYSCALL_DEFINE1(arm64_personality, unsigned int, personality)
+{
+ if (personality(personality) == PER_LINUX32 &&
+ !system_supports_32bit_el0())
+ return -EINVAL;
+ return sys_personality(personality);
+}
+
/*
* Wrappers to pass the pt_regs argument.
*/
asmlinkage long sys_rt_sigreturn_wrapper(void);
#define sys_rt_sigreturn sys_rt_sigreturn_wrapper
+#define sys_personality sys_arm64_personality
#undef __SYSCALL
#define __SYSCALL(nr, sym) [nr] = sym,
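With the wrapper added above, a PER_LINUX32 personality request is refused when the CPU cannot run 32-bit EL0 code. A small user-space probe of that behaviour; the outcome depends on the hardware it runs on, so both branches are shown:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/personality.h>

int main(void)
{
	if (personality(PER_LINUX32) == -1)
		printf("PER_LINUX32 rejected: %s\n", strerror(errno));
	else
		printf("PER_LINUX32 accepted: 32-bit EL0 is supported\n");
	return 0;
}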
diff --git a/arch/arm64/kernel/vdso.c b/arch/arm64/kernel/vdso.c
index 97bc68f4c689..9fefb005812a 100644
--- a/arch/arm64/kernel/vdso.c
+++ b/arch/arm64/kernel/vdso.c
@@ -95,7 +95,8 @@ int aarch32_setup_vectors_page(struct linux_binprm *bprm, int uses_interp)
};
void *ret;
- down_write(&mm->mmap_sem);
+ if (down_write_killable(&mm->mmap_sem))
+ return -EINTR;
current->mm->context.vdso = (void *)addr;
/* Map vectors page at the high address. */
@@ -131,11 +132,11 @@ static int __init vdso_init(void)
return -ENOMEM;
/* Grab the vDSO data page. */
- vdso_pagelist[0] = virt_to_page(vdso_data);
+ vdso_pagelist[0] = pfn_to_page(PHYS_PFN(__pa(vdso_data)));
/* Grab the vDSO code pages. */
for (i = 0; i < vdso_pages; i++)
- vdso_pagelist[i + 1] = virt_to_page(&vdso_start + i * PAGE_SIZE);
+ vdso_pagelist[i + 1] = pfn_to_page(PHYS_PFN(__pa(&vdso_start)) + i);
/* Populate the special mapping structures */
vdso_spec[0] = (struct vm_special_mapping) {
@@ -163,7 +164,8 @@ int arch_setup_additional_pages(struct linux_binprm *bprm,
/* Be sure to map the data page */
vdso_mapping_len = vdso_text_len + PAGE_SIZE;
- down_write(&mm->mmap_sem);
+ if (down_write_killable(&mm->mmap_sem))
+ return -EINTR;
vdso_base = get_unmapped_area(NULL, 0, vdso_mapping_len, 0, 0);
if (IS_ERR_VALUE(vdso_base)) {
ret = ERR_PTR(vdso_base);
diff --git a/arch/arm64/kernel/vmlinux.lds.S b/arch/arm64/kernel/vmlinux.lds.S
index 5a1939a74ff3..435e820e898d 100644
--- a/arch/arm64/kernel/vmlinux.lds.S
+++ b/arch/arm64/kernel/vmlinux.lds.S
@@ -46,6 +46,16 @@ jiffies = jiffies_64;
*(.idmap.text) \
VMLINUX_SYMBOL(__idmap_text_end) = .;
+#ifdef CONFIG_HIBERNATION
+#define HIBERNATE_TEXT \
+ . = ALIGN(SZ_4K); \
+ VMLINUX_SYMBOL(__hibernate_exit_text_start) = .;\
+ *(.hibernate_exit.text) \
+ VMLINUX_SYMBOL(__hibernate_exit_text_end) = .;
+#else
+#define HIBERNATE_TEXT
+#endif
+
/*
* The size of the PE/COFF section that covers the kernel image, which
* runs from stext to _edata, must be a round multiple of the PE/COFF
@@ -63,14 +73,19 @@ PECOFF_FILE_ALIGNMENT = 0x200;
#endif
#if defined(CONFIG_DEBUG_ALIGN_RODATA)
-#define ALIGN_DEBUG_RO . = ALIGN(1<<SECTION_SHIFT);
-#define ALIGN_DEBUG_RO_MIN(min) ALIGN_DEBUG_RO
-#elif defined(CONFIG_DEBUG_RODATA)
-#define ALIGN_DEBUG_RO . = ALIGN(1<<PAGE_SHIFT);
-#define ALIGN_DEBUG_RO_MIN(min) ALIGN_DEBUG_RO
+/*
+ * 4 KB granule: 1 level 2 entry
+ * 16 KB granule: 128 level 3 entries, with contiguous bit
+ * 64 KB granule: 32 level 3 entries, with contiguous bit
+ */
+#define SEGMENT_ALIGN SZ_2M
#else
-#define ALIGN_DEBUG_RO
-#define ALIGN_DEBUG_RO_MIN(min) . = ALIGN(min);
+/*
+ * 4 KB granule: 16 level 3 entries, with contiguous bit
+ * 16 KB granule: 4 level 3 entries, without contiguous bit
+ * 64 KB granule: 1 level 3 entry
+ */
+#define SEGMENT_ALIGN SZ_64K
#endif
SECTIONS
@@ -96,7 +111,6 @@ SECTIONS
_text = .;
HEAD_TEXT
}
- ALIGN_DEBUG_RO_MIN(PAGE_SIZE)
.text : { /* Real text segment */
_stext = .; /* Text and read-only data */
__exception_text_start = .;
@@ -109,18 +123,19 @@ SECTIONS
LOCK_TEXT
HYPERVISOR_TEXT
IDMAP_TEXT
+ HIBERNATE_TEXT
*(.fixup)
*(.gnu.warning)
. = ALIGN(16);
*(.got) /* Global offset table */
}
- ALIGN_DEBUG_RO_MIN(PAGE_SIZE)
+ . = ALIGN(SEGMENT_ALIGN);
RO_DATA(PAGE_SIZE) /* everything from this point to */
EXCEPTION_TABLE(8) /* _etext will be marked RO NX */
NOTES
- ALIGN_DEBUG_RO_MIN(PAGE_SIZE)
+ . = ALIGN(SEGMENT_ALIGN);
_etext = .; /* End of text and rodata section */
__init_begin = .;
@@ -154,12 +169,9 @@ SECTIONS
*(.altinstr_replacement)
}
.rela : ALIGN(8) {
- __reloc_start = .;
*(.rela .rela*)
- __reloc_end = .;
}
.dynsym : ALIGN(8) {
- __dynsym_start = .;
*(.dynsym)
}
.dynstr : {
@@ -169,7 +181,11 @@ SECTIONS
*(.hash)
}
- . = ALIGN(PAGE_SIZE);
+ __rela_offset = ADDR(.rela) - KIMAGE_VADDR;
+ __rela_size = SIZEOF(.rela);
+ __dynsym_offset = ADDR(.dynsym) - KIMAGE_VADDR;
+
+ . = ALIGN(SEGMENT_ALIGN);
__init_end = .;
_data = .;
@@ -201,6 +217,10 @@ ASSERT(__hyp_idmap_text_end - (__hyp_idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
"HYP init code too big or misaligned")
ASSERT(__idmap_text_end - (__idmap_text_start & ~(SZ_4K - 1)) <= SZ_4K,
"ID map text too big or misaligned")
+#ifdef CONFIG_HIBERNATION
+ASSERT(__hibernate_exit_text_end - (__hibernate_exit_text_start & ~(SZ_4K - 1))
+ <= SZ_4K, "Hibernate exit text too big or misaligned")
+#endif
/*
* If padding is applied before .head.text, virt<->phys conversions will fail.
diff --git a/arch/arm64/kvm/Kconfig b/arch/arm64/kvm/Kconfig
index de7450df7629..c4f26ef91e77 100644
--- a/arch/arm64/kvm/Kconfig
+++ b/arch/arm64/kvm/Kconfig
@@ -22,7 +22,6 @@ config KVM_ARM_VGIC_V3
config KVM
bool "Kernel-based Virtual Machine (KVM) support"
depends on OF
- depends on !ARM64_16K_PAGES
select MMU_NOTIFIER
select PREEMPT_NOTIFIERS
select ANON_INODES
@@ -55,6 +54,13 @@ config KVM_ARM_PMU
Adds support for a virtual Performance Monitoring Unit (PMU) in
virtual machines.
+config KVM_NEW_VGIC
+ bool "New VGIC implementation"
+ depends on KVM
+ default y
+ ---help---
+ Uses the new VGIC implementation.
+
source drivers/vhost/Kconfig
endif # VIRTUALIZATION
diff --git a/arch/arm64/kvm/Makefile b/arch/arm64/kvm/Makefile
index 122cff482ac4..a7a958ca29d5 100644
--- a/arch/arm64/kvm/Makefile
+++ b/arch/arm64/kvm/Makefile
@@ -20,10 +20,22 @@ kvm-$(CONFIG_KVM_ARM_HOST) += emulate.o inject_fault.o regmap.o
kvm-$(CONFIG_KVM_ARM_HOST) += hyp.o hyp-init.o handle_exit.o
kvm-$(CONFIG_KVM_ARM_HOST) += guest.o debug.o reset.o sys_regs.o sys_regs_generic_v8.o
+ifeq ($(CONFIG_KVM_NEW_VGIC),y)
+kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic.o
+kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-init.o
+kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-irqfd.o
+kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-v2.o
+kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-v3.o
+kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-mmio.o
+kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-mmio-v2.o
+kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-mmio-v3.o
+kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic/vgic-kvm-device.o
+else
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v2.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v2-emul.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3.o
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/vgic-v3-emul.o
+endif
kvm-$(CONFIG_KVM_ARM_HOST) += $(KVM)/arm/arch_timer.o
kvm-$(CONFIG_KVM_ARM_PMU) += $(KVM)/arm/pmu.o
diff --git a/arch/arm64/kvm/handle_exit.c b/arch/arm64/kvm/handle_exit.c
index eba89e42f0ed..3246c4aba5b1 100644
--- a/arch/arm64/kvm/handle_exit.c
+++ b/arch/arm64/kvm/handle_exit.c
@@ -186,6 +186,13 @@ int handle_exit(struct kvm_vcpu *vcpu, struct kvm_run *run,
exit_handler = kvm_get_exit_handler(vcpu);
return exit_handler(vcpu, run);
+ case ARM_EXCEPTION_HYP_GONE:
+ /*
+ * EL2 has been reset to the hyp-stub. This happens when a guest
+ * is pre-empted by kvm_reboot()'s shutdown call.
+ */
+ run->exit_reason = KVM_EXIT_FAIL_ENTRY;
+ return 0;
default:
kvm_pr_unimpl("Unsupported exception type: %d",
exception_index);
diff --git a/arch/arm64/kvm/hyp-init.S b/arch/arm64/kvm/hyp-init.S
index 7d8747c6427c..a873a6d8be90 100644
--- a/arch/arm64/kvm/hyp-init.S
+++ b/arch/arm64/kvm/hyp-init.S
@@ -21,6 +21,7 @@
#include <asm/kvm_arm.h>
#include <asm/kvm_mmu.h>
#include <asm/pgtable-hwdef.h>
+#include <asm/sysreg.h>
.text
.pushsection .hyp.idmap.text, "ax"
@@ -103,8 +104,8 @@ __do_hyp_init:
dsb sy
mrs x4, sctlr_el2
- and x4, x4, #SCTLR_EL2_EE // preserve endianness of EL2
- ldr x5, =SCTLR_EL2_FLAGS
+ and x4, x4, #SCTLR_ELx_EE // preserve endianness of EL2
+ ldr x5, =SCTLR_ELx_FLAGS
orr x4, x4, x5
msr sctlr_el2, x4
isb
@@ -138,6 +139,49 @@ merged:
eret
ENDPROC(__kvm_hyp_init)
+ /*
+ * Reset kvm back to the hyp stub. This is the trampoline dance in
+ * reverse. If kvm used an extended idmap, __extended_idmap_trampoline
+ * calls this code directly in the idmap. In this case switching to the
+ * boot tables is a no-op.
+ *
+ * x0: HYP boot pgd
+ * x1: HYP phys_idmap_start
+ */
+ENTRY(__kvm_hyp_reset)
+ /* We're in trampoline code in VA, switch back to boot page tables */
+ msr ttbr0_el2, x0
+ isb
+
+ /* Ensure the PA branch doesn't find a stale tlb entry or stale code. */
+ ic iallu
+ tlbi alle2
+ dsb sy
+ isb
+
+ /* Branch into PA space */
+ adr x0, 1f
+ bfi x1, x0, #0, #PAGE_SHIFT
+ br x1
+
+ /* We're now in idmap, disable MMU */
+1: mrs x0, sctlr_el2
+ ldr x1, =SCTLR_ELx_FLAGS
+ bic x0, x0, x1 // Clear SCTLR_ELx.M and other flags
+ msr sctlr_el2, x0
+ isb
+
+ /* Invalidate the old TLBs */
+ tlbi alle2
+ dsb sy
+
+ /* Install stub vectors */
+ adr_l x0, __hyp_stub_vectors
+ msr vbar_el2, x0
+
+ eret
+ENDPROC(__kvm_hyp_reset)
+
.ltorg
.popsection
diff --git a/arch/arm64/kvm/hyp.S b/arch/arm64/kvm/hyp.S
index 48f19a37b3df..7ce931565151 100644
--- a/arch/arm64/kvm/hyp.S
+++ b/arch/arm64/kvm/hyp.S
@@ -35,16 +35,21 @@
* in Hyp mode (see init_hyp_mode in arch/arm/kvm/arm.c). Return values are
* passed in x0.
*
- * A function pointer with a value of 0 has a special meaning, and is
- * used to implement __hyp_get_vectors in the same way as in
+ * A function pointer with a value less than 0xfff has a special meaning,
+ * and is used to implement __hyp_get_vectors in the same way as in
* arch/arm64/kernel/hyp_stub.S.
+ * HVC behaves as a 'bl' call and will clobber lr.
*/
ENTRY(__kvm_call_hyp)
-alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
+alternative_if_not ARM64_HAS_VIRT_HOST_EXTN
+ str lr, [sp, #-16]!
hvc #0
+ ldr lr, [sp], #16
ret
alternative_else
b __vhe_hyp_call
nop
+ nop
+ nop
alternative_endif
ENDPROC(__kvm_call_hyp)
diff --git a/arch/arm64/kvm/hyp/entry.S b/arch/arm64/kvm/hyp/entry.S
index ce9e5e5f28cf..70254a65bd5b 100644
--- a/arch/arm64/kvm/hyp/entry.S
+++ b/arch/arm64/kvm/hyp/entry.S
@@ -164,3 +164,22 @@ alternative_endif
eret
ENDPROC(__fpsimd_guest_restore)
+
+/*
+ * When using the extended idmap, we don't have a trampoline page we can use
+ * while we switch page tables during __kvm_hyp_reset. Accessing the idmap
+ * directly would be ideal, but if we're using the extended idmap then the
+ * idmap is located above HYP_PAGE_OFFSET, and the address will be masked by
+ * kvm_call_hyp using kern_hyp_va.
+ *
+ * x0: HYP boot pgd
+ * x1: HYP phys_idmap_start
+ */
+ENTRY(__extended_idmap_trampoline)
+ mov x4, x1
+ adr_l x3, __kvm_hyp_reset
+
+ /* insert __kvm_hyp_reset()'s offset into phys_idmap_start */
+ bfi x4, x3, #0, #PAGE_SHIFT
+ br x4
+ENDPROC(__extended_idmap_trampoline)
diff --git a/arch/arm64/kvm/hyp/hyp-entry.S b/arch/arm64/kvm/hyp/hyp-entry.S
index 3488894397ff..2d87f36d5cb4 100644
--- a/arch/arm64/kvm/hyp/hyp-entry.S
+++ b/arch/arm64/kvm/hyp/hyp-entry.S
@@ -42,19 +42,17 @@
* Shuffle the parameters before calling the function
* pointed to in x0. Assumes parameters in x[1,2,3].
*/
- sub sp, sp, #16
- str lr, [sp]
mov lr, x0
mov x0, x1
mov x1, x2
mov x2, x3
blr lr
- ldr lr, [sp]
- add sp, sp, #16
.endm
ENTRY(__vhe_hyp_call)
+ str lr, [sp, #-16]!
do_el2_call
+ ldr lr, [sp], #16
/*
* We used to rely on having an exception return to get
* an implicit isb. In the E2H case, we don't have it anymore.
@@ -84,8 +82,8 @@ alternative_endif
/* Here, we're pretty sure the host called HVC. */
restore_x0_to_x3
- /* Check for __hyp_get_vectors */
- cbnz x0, 1f
+ cmp x0, #HVC_GET_VECTORS
+ b.ne 1f
mrs x0, vbar_el2
b 2f
diff --git a/arch/arm64/kvm/hyp/s2-setup.c b/arch/arm64/kvm/hyp/s2-setup.c
index bcbe761a5a3d..b81f4091c909 100644
--- a/arch/arm64/kvm/hyp/s2-setup.c
+++ b/arch/arm64/kvm/hyp/s2-setup.c
@@ -66,6 +66,14 @@ u32 __hyp_text __init_stage2_translation(void)
val |= 64 - (parange > 40 ? 40 : parange);
/*
+ * Check the availability of Hardware Access Flag / Dirty Bit
+ * Management in ID_AA64MMFR1_EL1 and enable the feature in VTCR_EL2.
+ */
+ tmp = (read_sysreg(id_aa64mmfr1_el1) >> ID_AA64MMFR1_HADBS_SHIFT) & 0xf;
+ if (IS_ENABLED(CONFIG_ARM64_HW_AFDBM) && tmp)
+ val |= VTCR_EL2_HA;
+
+ /*
* Read the VMIDBits bits from ID_AA64MMFR1_EL1 and set the VS
* bit in VTCR_EL2.
*/
diff --git a/arch/arm64/kvm/inject_fault.c b/arch/arm64/kvm/inject_fault.c
index 4d1ac81870d2..e9e0e6db73f6 100644
--- a/arch/arm64/kvm/inject_fault.c
+++ b/arch/arm64/kvm/inject_fault.c
@@ -162,7 +162,7 @@ static void inject_abt64(struct kvm_vcpu *vcpu, bool is_iabt, unsigned long addr
esr |= (ESR_ELx_EC_IABT_CUR << ESR_ELx_EC_SHIFT);
if (!is_iabt)
- esr |= ESR_ELx_EC_DABT_LOW;
+ esr |= ESR_ELx_EC_DABT_LOW << ESR_ELx_EC_SHIFT;
vcpu_sys_reg(vcpu, ESR_EL1) = esr | ESR_ELx_FSC_EXTABT;
}
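The one-line fix above matters because ESR_ELx encodes the exception class in bits [31:26]; OR-ing an unshifted EC value would corrupt the ISS field instead of setting the class. A worked example using the architectural constants (shift 26, Data Abort from a lower EL = 0x24):

#include <stdint.h>
#include <stdio.h>

#define EC_SHIFT	26
#define EC_DABT_LOW	0x24u	/* Data Abort taken from a lower EL */

int main(void)
{
	uint32_t esr_wrong = EC_DABT_LOW;		/* lands in the ISS bits */
	uint32_t esr_right = EC_DABT_LOW << EC_SHIFT;	/* sets the EC field */

	printf("unshifted: 0x%08x   shifted: 0x%08x\n", esr_wrong, esr_right);
	return 0;
}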
diff --git a/arch/arm64/kvm/reset.c b/arch/arm64/kvm/reset.c
index 9677bf069bcc..b1ad730e1567 100644
--- a/arch/arm64/kvm/reset.c
+++ b/arch/arm64/kvm/reset.c
@@ -29,7 +29,9 @@
#include <asm/cputype.h>
#include <asm/ptrace.h>
#include <asm/kvm_arm.h>
+#include <asm/kvm_asm.h>
#include <asm/kvm_coproc.h>
+#include <asm/kvm_mmu.h>
/*
* ARMv8 Reset Values
@@ -130,3 +132,31 @@ int kvm_reset_vcpu(struct kvm_vcpu *vcpu)
/* Reset timer */
return kvm_timer_vcpu_reset(vcpu, cpu_vtimer_irq);
}
+
+extern char __hyp_idmap_text_start[];
+
+unsigned long kvm_hyp_reset_entry(void)
+{
+ if (!__kvm_cpu_uses_extended_idmap()) {
+ unsigned long offset;
+
+ /*
+ * Find the address of __kvm_hyp_reset() in the trampoline page.
+ * This is present in the running page tables, and the boot page
+ * tables, so we call the code here to start the trampoline
+ * dance in reverse.
+ */
+ offset = (unsigned long)__kvm_hyp_reset
+ - ((unsigned long)__hyp_idmap_text_start & PAGE_MASK);
+
+ return TRAMPOLINE_VA + offset;
+ } else {
+ /*
+ * KVM is running with merged page tables, which don't have the
+ * trampoline page mapped. We know the idmap is still mapped,
+ * but can't be called into directly. Use
+ * __extended_idmap_trampoline to do the call.
+ */
+ return (unsigned long)kvm_ksym_ref(__extended_idmap_trampoline);
+ }
+}
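kvm_hyp_reset_entry() above rebases __kvm_hyp_reset into the trampoline mapping by taking its offset from the page-aligned start of the hyp idmap text and adding that to TRAMPOLINE_VA. A small sketch of the arithmetic with invented addresses and an assumed 4 KiB page size:

#include <stdint.h>
#include <stdio.h>

#define PAGE_MASK	(~0xfffULL)	/* assumes 4 KiB pages */

int main(void)
{
	uint64_t hyp_idmap_text_start = 0xffff000008efa800ULL;	/* hypothetical */
	uint64_t kvm_hyp_reset        = 0xffff000008efa940ULL;	/* hypothetical */
	uint64_t trampoline_va        = 0x0000000040000000ULL;	/* hypothetical */

	uint64_t offset = kvm_hyp_reset - (hyp_idmap_text_start & PAGE_MASK);

	printf("reset entry in trampoline mapping: 0x%llx\n",
	       (unsigned long long)(trampoline_va + offset));
	return 0;
}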
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index 57f57fde5722..54bb209cae8e 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -4,6 +4,7 @@ obj-y := dma-mapping.o extable.o fault.o init.o \
context.o proc.o pageattr.o
obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
obj-$(CONFIG_ARM64_PTDUMP) += dump.o
+obj-$(CONFIG_NUMA) += numa.o
obj-$(CONFIG_KASAN) += kasan_init.o
KASAN_SANITIZE_kasan_init.o := n
diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
index 6df07069a025..50ff9ba3a236 100644
--- a/arch/arm64/mm/cache.S
+++ b/arch/arm64/mm/cache.S
@@ -24,8 +24,6 @@
#include <asm/cpufeature.h>
#include <asm/alternative.h>
-#include "proc-macros.S"
-
/*
* flush_icache_range(start,end)
*
diff --git a/arch/arm64/mm/context.c b/arch/arm64/mm/context.c
index c90c3c5f46af..b7b397802088 100644
--- a/arch/arm64/mm/context.c
+++ b/arch/arm64/mm/context.c
@@ -75,8 +75,7 @@ void verify_cpu_asid_bits(void)
*/
pr_crit("CPU%d: smaller ASID size(%u) than boot CPU (%u)\n",
smp_processor_id(), asid, asid_bits);
- update_cpu_boot_status(CPU_PANIC_KERNEL);
- cpu_park_loop();
+ cpu_panic_kernel();
}
}
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index a6e757cbab77..c566ec83719f 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -562,8 +562,8 @@ static void *__iommu_alloc_attrs(struct device *dev, size_t size,
struct page **pages;
pgprot_t prot = __get_dma_pgprot(attrs, PAGE_KERNEL, coherent);
- pages = iommu_dma_alloc(dev, iosize, gfp, ioprot, handle,
- flush_page);
+ pages = iommu_dma_alloc(dev, iosize, gfp, attrs, ioprot,
+ handle, flush_page);
if (!pages)
return NULL;
@@ -804,57 +804,24 @@ struct iommu_dma_notifier_data {
static LIST_HEAD(iommu_dma_masters);
static DEFINE_MUTEX(iommu_dma_notifier_lock);
-/*
- * Temporarily "borrow" a domain feature flag to to tell if we had to resort
- * to creating our own domain here, in case we need to clean it up again.
- */
-#define __IOMMU_DOMAIN_FAKE_DEFAULT (1U << 31)
-
static bool do_iommu_attach(struct device *dev, const struct iommu_ops *ops,
u64 dma_base, u64 size)
{
struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
/*
- * Best case: The device is either part of a group which was
- * already attached to a domain in a previous call, or it's
- * been put in a default DMA domain by the IOMMU core.
+ * If the IOMMU driver has the DMA domain support that we require,
+ * then the IOMMU core will have already configured a group for this
+ * device, and allocated the default domain for that group.
*/
- if (!domain) {
- /*
- * Urgh. The IOMMU core isn't going to do default domains
- * for non-PCI devices anyway, until it has some means of
- * abstracting the entirely implementation-specific
- * sideband data/SoC topology/unicorn dust that may or
- * may not differentiate upstream masters.
- * So until then, HORRIBLE HACKS!
- */
- domain = ops->domain_alloc(IOMMU_DOMAIN_DMA);
- if (!domain)
- goto out_no_domain;
-
- domain->ops = ops;
- domain->type = IOMMU_DOMAIN_DMA | __IOMMU_DOMAIN_FAKE_DEFAULT;
-
- if (iommu_attach_device(domain, dev))
- goto out_put_domain;
+ if (!domain || iommu_dma_init_domain(domain, dma_base, size)) {
+ pr_warn("Failed to set up IOMMU for device %s; retaining platform DMA ops\n",
+ dev_name(dev));
+ return false;
}
- if (iommu_dma_init_domain(domain, dma_base, size))
- goto out_detach;
-
dev->archdata.dma_ops = &iommu_dma_ops;
return true;
-
-out_detach:
- iommu_detach_device(domain, dev);
-out_put_domain:
- if (domain->type & __IOMMU_DOMAIN_FAKE_DEFAULT)
- iommu_domain_free(domain);
-out_no_domain:
- pr_warn("Failed to set up IOMMU for device %s; retaining platform DMA ops\n",
- dev_name(dev));
- return false;
}
static void queue_iommu_attach(struct device *dev, const struct iommu_ops *ops,
@@ -933,6 +900,10 @@ static int __init __iommu_dma_init(void)
ret = register_iommu_dma_ops_notifier(&platform_bus_type);
if (!ret)
ret = register_iommu_dma_ops_notifier(&amba_bustype);
+#ifdef CONFIG_PCI
+ if (!ret)
+ ret = register_iommu_dma_ops_notifier(&pci_bus_type);
+#endif
/* handle devices queued before this arch_initcall */
if (!ret)
@@ -967,11 +938,8 @@ void arch_teardown_dma_ops(struct device *dev)
{
struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
- if (domain) {
+ if (WARN_ON(domain))
iommu_detach_device(domain, dev);
- if (domain->type & __IOMMU_DOMAIN_FAKE_DEFAULT)
- iommu_domain_free(domain);
- }
dev->archdata.dma_ops = NULL;
}
@@ -979,13 +947,13 @@ void arch_teardown_dma_ops(struct device *dev)
#else
static void __iommu_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
- struct iommu_ops *iommu)
+ const struct iommu_ops *iommu)
{ }
#endif /* CONFIG_IOMMU_DMA */
void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
- struct iommu_ops *iommu, bool coherent)
+ const struct iommu_ops *iommu, bool coherent)
{
if (!dev->archdata.dma_ops)
dev->archdata.dma_ops = &swiotlb_dma_ops;
diff --git a/arch/arm64/mm/dump.c b/arch/arm64/mm/dump.c
index f9271cb2f5e3..8404190fe2bd 100644
--- a/arch/arm64/mm/dump.c
+++ b/arch/arm64/mm/dump.c
@@ -23,6 +23,7 @@
#include <linux/seq_file.h>
#include <asm/fixmap.h>
+#include <asm/kasan.h>
#include <asm/memory.h>
#include <asm/pgtable.h>
#include <asm/pgtable-hwdef.h>
@@ -32,37 +33,25 @@ struct addr_marker {
const char *name;
};
-enum address_markers_idx {
- MODULES_START_NR = 0,
- MODULES_END_NR,
- VMALLOC_START_NR,
- VMALLOC_END_NR,
-#ifdef CONFIG_SPARSEMEM_VMEMMAP
- VMEMMAP_START_NR,
- VMEMMAP_END_NR,
+static const struct addr_marker address_markers[] = {
+#ifdef CONFIG_KASAN
+ { KASAN_SHADOW_START, "Kasan shadow start" },
+ { KASAN_SHADOW_END, "Kasan shadow end" },
#endif
- FIXADDR_START_NR,
- FIXADDR_END_NR,
- PCI_START_NR,
- PCI_END_NR,
- KERNEL_SPACE_NR,
-};
-
-static struct addr_marker address_markers[] = {
- { MODULES_VADDR, "Modules start" },
- { MODULES_END, "Modules end" },
- { VMALLOC_START, "vmalloc() Area" },
- { VMALLOC_END, "vmalloc() End" },
+ { MODULES_VADDR, "Modules start" },
+ { MODULES_END, "Modules end" },
+ { VMALLOC_START, "vmalloc() Area" },
+ { VMALLOC_END, "vmalloc() End" },
+ { FIXADDR_START, "Fixmap start" },
+ { FIXADDR_TOP, "Fixmap end" },
+ { PCI_IO_START, "PCI I/O start" },
+ { PCI_IO_END, "PCI I/O end" },
#ifdef CONFIG_SPARSEMEM_VMEMMAP
- { 0, "vmemmap start" },
- { 0, "vmemmap end" },
+ { VMEMMAP_START, "vmemmap start" },
+ { VMEMMAP_START + VMEMMAP_SIZE, "vmemmap end" },
#endif
- { FIXADDR_START, "Fixmap start" },
- { FIXADDR_TOP, "Fixmap end" },
- { PCI_IO_START, "PCI I/O start" },
- { PCI_IO_END, "PCI I/O end" },
- { PAGE_OFFSET, "Linear Mapping" },
- { -1, NULL },
+ { PAGE_OFFSET, "Linear Mapping" },
+ { -1, NULL },
};
/*
@@ -347,13 +336,6 @@ static int ptdump_init(void)
for (j = 0; j < pg_level[i].num; j++)
pg_level[i].mask |= pg_level[i].bits[j].mask;
-#ifdef CONFIG_SPARSEMEM_VMEMMAP
- address_markers[VMEMMAP_START_NR].start_address =
- (unsigned long)virt_to_page(PAGE_OFFSET);
- address_markers[VMEMMAP_END_NR].start_address =
- (unsigned long)virt_to_page(high_memory);
-#endif
-
pe = debugfs_create_file("kernel_page_tables", 0400, NULL, NULL,
&ptdump_fops);
return pe ? 0 : -ENOMEM;
diff --git a/arch/arm64/mm/fault.c b/arch/arm64/mm/fault.c
index 95df28bc875f..5954881a35ac 100644
--- a/arch/arm64/mm/fault.c
+++ b/arch/arm64/mm/fault.c
@@ -81,6 +81,56 @@ void show_pte(struct mm_struct *mm, unsigned long addr)
printk("\n");
}
+#ifdef CONFIG_ARM64_HW_AFDBM
+/*
+ * This function sets the access flags (dirty, accessed), as well as write
+ * permission, and only to a more permissive setting.
+ *
+ * It needs to cope with hardware update of the accessed/dirty state by other
+ * agents in the system and can safely skip the __sync_icache_dcache() call as,
+ * like set_pte_at(), the PTE is never changed from no-exec to exec here.
+ *
+ * Returns whether or not the PTE actually changed.
+ */
+int ptep_set_access_flags(struct vm_area_struct *vma,
+ unsigned long address, pte_t *ptep,
+ pte_t entry, int dirty)
+{
+ pteval_t old_pteval;
+ unsigned int tmp;
+
+ if (pte_same(*ptep, entry))
+ return 0;
+
+ /* only preserve the access flags and write permission */
+ pte_val(entry) &= PTE_AF | PTE_WRITE | PTE_DIRTY;
+
+ /*
+ * PTE_RDONLY is cleared by default in the asm below, so set it in
+ * back if necessary (read-only or clean PTE).
+ */
+ if (!pte_write(entry) || !dirty)
+ pte_val(entry) |= PTE_RDONLY;
+
+ /*
+ * Setting the flags must be done atomically to avoid racing with the
+ * hardware update of the access/dirty state.
+ */
+ asm volatile("// ptep_set_access_flags\n"
+ " prfm pstl1strm, %2\n"
+ "1: ldxr %0, %2\n"
+ " and %0, %0, %3 // clear PTE_RDONLY\n"
+ " orr %0, %0, %4 // set flags\n"
+ " stxr %w1, %0, %2\n"
+ " cbnz %w1, 1b\n"
+ : "=&r" (old_pteval), "=&r" (tmp), "+Q" (pte_val(*ptep))
+ : "L" (~PTE_RDONLY), "r" (pte_val(entry)));
+
+ flush_tlb_fix_spurious_fault(vma, address);
+ return 1;
+}
+#endif
+
/*
* The kernel tried to access some page that wasn't present.
*/
@@ -212,10 +262,6 @@ static int __kprobes do_page_fault(unsigned long addr, unsigned int esr,
tsk = current;
mm = tsk->mm;
- /* Enable interrupts if they were enabled in the parent context. */
- if (interrupts_enabled(regs))
- local_irq_enable();
-
/*
* If we're in an interrupt or have no user context, we must not take
* the fault.
@@ -555,20 +601,33 @@ asmlinkage int __exception do_debug_exception(unsigned long addr,
{
const struct fault_info *inf = debug_fault_info + DBG_ESR_EVT(esr);
struct siginfo info;
+ int rv;
- if (!inf->fn(addr, esr, regs))
- return 1;
+ /*
+ * Tell lockdep we disabled irqs in entry.S. Do nothing if they were
+ * already disabled to preserve the last enabled/disabled addresses.
+ */
+ if (interrupts_enabled(regs))
+ trace_hardirqs_off();
- pr_alert("Unhandled debug exception: %s (0x%08x) at 0x%016lx\n",
- inf->name, esr, addr);
+ if (!inf->fn(addr, esr, regs)) {
+ rv = 1;
+ } else {
+ pr_alert("Unhandled debug exception: %s (0x%08x) at 0x%016lx\n",
+ inf->name, esr, addr);
+
+ info.si_signo = inf->sig;
+ info.si_errno = 0;
+ info.si_code = inf->code;
+ info.si_addr = (void __user *)addr;
+ arm64_notify_die("", regs, &info, 0);
+ rv = 0;
+ }
- info.si_signo = inf->sig;
- info.si_errno = 0;
- info.si_code = inf->code;
- info.si_addr = (void __user *)addr;
- arm64_notify_die("", regs, &info, 0);
+ if (interrupts_enabled(regs))
+ trace_hardirqs_on();
- return 0;
+ return rv;
}
#ifdef CONFIG_ARM64_PAN
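
The LDXR/STXR sequence added in ptep_set_access_flags() above is a retry loop: clear PTE_RDONLY, OR in the pre-masked flags, and start over if the PTE changed underneath. A rough user-space analogue of the same pattern using C11 atomics instead of exclusives; the PTE bit positions are made up and this is not the kernel code.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define FAKE_PTE_RDONLY (UINT64_C(1) << 7)      /* illustrative bit layout */
#define FAKE_PTE_AF     (UINT64_C(1) << 10)
#define FAKE_PTE_WRITE  (UINT64_C(1) << 51)

/* Atomically clear RDONLY and set flags, retrying on concurrent updates. */
static void set_access_flags(_Atomic uint64_t *pte, uint64_t flags)
{
        uint64_t old = atomic_load(pte);
        uint64_t new;

        do {
                new = (old & ~FAKE_PTE_RDONLY) | flags;
        } while (!atomic_compare_exchange_weak(pte, &old, new));
}

int main(void)
{
        _Atomic uint64_t pte = FAKE_PTE_RDONLY;  /* clean, read-only entry */

        set_access_flags(&pte, FAKE_PTE_AF | FAKE_PTE_WRITE);
        printf("pte = %#llx\n", (unsigned long long)atomic_load(&pte));
        return 0;
}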
diff --git a/arch/arm64/mm/hugetlbpage.c b/arch/arm64/mm/hugetlbpage.c
index 589fd28e1fb5..aa8aee7d6929 100644
--- a/arch/arm64/mm/hugetlbpage.c
+++ b/arch/arm64/mm/hugetlbpage.c
@@ -307,6 +307,7 @@ static __init int setup_hugepagesz(char *opt)
} else if (ps == PUD_SIZE) {
hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
} else {
+ hugetlb_bad_size();
pr_err("hugepagesz: Unsupported page size %lu K\n", ps >> 10);
return 0;
}
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index ea989d83ea9b..d45f8627012c 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -40,6 +40,7 @@
#include <asm/kasan.h>
#include <asm/kernel-pgtable.h>
#include <asm/memory.h>
+#include <asm/numa.h>
#include <asm/sections.h>
#include <asm/setup.h>
#include <asm/sizes.h>
@@ -86,6 +87,21 @@ static phys_addr_t __init max_zone_dma_phys(void)
return min(offset + (1ULL << 32), memblock_end_of_DRAM());
}
+#ifdef CONFIG_NUMA
+
+static void __init zone_sizes_init(unsigned long min, unsigned long max)
+{
+ unsigned long max_zone_pfns[MAX_NR_ZONES] = {0};
+
+ if (IS_ENABLED(CONFIG_ZONE_DMA))
+ max_zone_pfns[ZONE_DMA] = PFN_DOWN(max_zone_dma_phys());
+ max_zone_pfns[ZONE_NORMAL] = max;
+
+ free_area_init_nodes(max_zone_pfns);
+}
+
+#else
+
static void __init zone_sizes_init(unsigned long min, unsigned long max)
{
struct memblock_region *reg;
@@ -126,6 +142,8 @@ static void __init zone_sizes_init(unsigned long min, unsigned long max)
free_area_init_node(0, zone_size, min, zhole_size);
}
+#endif /* CONFIG_NUMA */
+
#ifdef CONFIG_HAVE_ARCH_PFN_VALID
int pfn_valid(unsigned long pfn)
{
@@ -142,10 +160,15 @@ static void __init arm64_memory_present(void)
static void __init arm64_memory_present(void)
{
struct memblock_region *reg;
+ int nid = 0;
- for_each_memblock(memory, reg)
- memory_present(0, memblock_region_memory_base_pfn(reg),
- memblock_region_memory_end_pfn(reg));
+ for_each_memblock(memory, reg) {
+#ifdef CONFIG_NUMA
+ nid = reg->nid;
+#endif
+ memory_present(nid, memblock_region_memory_base_pfn(reg),
+ memblock_region_memory_end_pfn(reg));
+ }
}
#endif
@@ -190,8 +213,12 @@ void __init arm64_memblock_init(void)
*/
memblock_remove(max_t(u64, memstart_addr + linear_region_size, __pa(_end)),
ULLONG_MAX);
- if (memblock_end_of_DRAM() > linear_region_size)
- memblock_remove(0, memblock_end_of_DRAM() - linear_region_size);
+ if (memstart_addr + linear_region_size < memblock_end_of_DRAM()) {
+ /* ensure that memstart_addr remains sufficiently aligned */
+ memstart_addr = round_up(memblock_end_of_DRAM() - linear_region_size,
+ ARM64_MEMSTART_ALIGN);
+ memblock_remove(0, memstart_addr);
+ }
/*
* Apply the memory limit if it was set. Since the kernel may be loaded
@@ -203,6 +230,35 @@ void __init arm64_memblock_init(void)
memblock_add(__pa(_text), (u64)(_end - _text));
}
+ if (IS_ENABLED(CONFIG_BLK_DEV_INITRD) && initrd_start) {
+ /*
+ * Add back the memory we just removed if it results in the
+ * initrd to become inaccessible via the linear mapping.
+ * Otherwise, this is a no-op
+ */
+ u64 base = initrd_start & PAGE_MASK;
+ u64 size = PAGE_ALIGN(initrd_end) - base;
+
+ /*
+ * We can only add back the initrd memory if we don't end up
+ * with more memory than we can address via the linear mapping.
+ * It is up to the bootloader to position the kernel and the
+ * initrd reasonably close to each other (i.e., within 32 GB of
+ * each other) so that all granule/#levels combinations can
+ * always access both.
+ */
+ if (WARN(base < memblock_start_of_DRAM() ||
+ base + size > memblock_start_of_DRAM() +
+ linear_region_size,
+ "initrd not fully accessible via the linear mapping -- please check your bootloader ...\n")) {
+ initrd_start = 0;
+ } else {
+ memblock_remove(base, size); /* clear MEMBLOCK_ flags */
+ memblock_add(base, size);
+ memblock_reserve(base, size);
+ }
+ }
+
if (IS_ENABLED(CONFIG_RANDOMIZE_BASE)) {
extern u16 memstart_offset_seed;
u64 range = linear_region_size -
@@ -245,7 +301,6 @@ void __init arm64_memblock_init(void)
dma_contiguous_reserve(arm64_dma_phys_limit);
memblock_allow_resize();
- memblock_dump_all();
}
void __init bootmem_init(void)
@@ -257,6 +312,9 @@ void __init bootmem_init(void)
early_memtest(min << PAGE_SHIFT, max << PAGE_SHIFT);
+ max_pfn = max_low_pfn = max;
+
+ arm64_numa_init();
/*
* Sparsemem tries to allocate bootmem in memory_present(), so must be
* done after the fixed reservations.
@@ -267,7 +325,7 @@ void __init bootmem_init(void)
zone_sizes_init(min, max);
high_memory = __va((max << PAGE_SHIFT) - 1) + 1;
- max_pfn = max_low_pfn = max;
+ memblock_dump_all();
}
#ifndef CONFIG_SPARSEMEM_VMEMMAP
@@ -371,26 +429,27 @@ void __init mem_init(void)
MLM(MODULES_VADDR, MODULES_END));
pr_cont(" vmalloc : 0x%16lx - 0x%16lx (%6ld GB)\n",
MLG(VMALLOC_START, VMALLOC_END));
- pr_cont(" .text : 0x%p" " - 0x%p" " (%6ld KB)\n"
- " .rodata : 0x%p" " - 0x%p" " (%6ld KB)\n"
- " .init : 0x%p" " - 0x%p" " (%6ld KB)\n"
- " .data : 0x%p" " - 0x%p" " (%6ld KB)\n",
- MLK_ROUNDUP(_text, __start_rodata),
- MLK_ROUNDUP(__start_rodata, _etext),
- MLK_ROUNDUP(__init_begin, __init_end),
+ pr_cont(" .text : 0x%p" " - 0x%p" " (%6ld KB)\n",
+ MLK_ROUNDUP(_text, __start_rodata));
+ pr_cont(" .rodata : 0x%p" " - 0x%p" " (%6ld KB)\n",
+ MLK_ROUNDUP(__start_rodata, _etext));
+ pr_cont(" .init : 0x%p" " - 0x%p" " (%6ld KB)\n",
+ MLK_ROUNDUP(__init_begin, __init_end));
+ pr_cont(" .data : 0x%p" " - 0x%p" " (%6ld KB)\n",
MLK_ROUNDUP(_sdata, _edata));
-#ifdef CONFIG_SPARSEMEM_VMEMMAP
- pr_cont(" vmemmap : 0x%16lx - 0x%16lx (%6ld GB maximum)\n"
- " 0x%16lx - 0x%16lx (%6ld MB actual)\n",
- MLG(VMEMMAP_START,
- VMEMMAP_START + VMEMMAP_SIZE),
- MLM((unsigned long)phys_to_page(memblock_start_of_DRAM()),
- (unsigned long)virt_to_page(high_memory)));
-#endif
+ pr_cont(" .bss : 0x%p" " - 0x%p" " (%6ld KB)\n",
+ MLK_ROUNDUP(__bss_start, __bss_stop));
pr_cont(" fixed : 0x%16lx - 0x%16lx (%6ld KB)\n",
MLK(FIXADDR_START, FIXADDR_TOP));
pr_cont(" PCI I/O : 0x%16lx - 0x%16lx (%6ld MB)\n",
MLM(PCI_IO_START, PCI_IO_END));
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+ pr_cont(" vmemmap : 0x%16lx - 0x%16lx (%6ld GB maximum)\n",
+ MLG(VMEMMAP_START, VMEMMAP_START + VMEMMAP_SIZE));
+ pr_cont(" 0x%16lx - 0x%16lx (%6ld MB actual)\n",
+ MLM((unsigned long)phys_to_page(memblock_start_of_DRAM()),
+ (unsigned long)virt_to_page(high_memory)));
+#endif
pr_cont(" memory : 0x%16lx - 0x%16lx (%6ld MB)\n",
MLM(__phys_to_virt(memblock_start_of_DRAM()),
(unsigned long)high_memory));
@@ -407,6 +466,12 @@ void __init mem_init(void)
BUILD_BUG_ON(TASK_SIZE_32 > TASK_SIZE_64);
#endif
+ /*
+ * Make sure we chose the upper bound of sizeof(struct page)
+ * correctly.
+ */
+ BUILD_BUG_ON(sizeof(struct page) > (1 << STRUCT_PAGE_MAX_SHIFT));
+
if (PAGE_SIZE >= 16384 && get_num_physpages() <= 128) {
extern int sysctl_overcommit_memory;
/*
@@ -419,7 +484,8 @@ void __init mem_init(void)
void free_initmem(void)
{
- free_initmem_default(0);
+ free_reserved_area(__va(__pa(__init_begin)), __va(__pa(__init_end)),
+ 0, "unused kernel");
fixup_init();
}
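
The memstart_addr adjustment above depends on round_up() keeping the new base aligned before the memory below it is removed. A stand-alone sketch of that calculation using one common round_up formulation; the alignment and the DRAM/linear-region sizes are invented for the example.

#include <stdio.h>
#include <stdint.h>

/* Round x up to the next multiple of the power-of-two alignment a. */
#define round_up(x, a)  (((x) + (a) - 1) & ~((uint64_t)(a) - 1))

#define MEMSTART_ALIGN  (UINT64_C(1) << 30)     /* assumed: 1 GiB */

int main(void)
{
        uint64_t end_of_dram        = 0x85000000;  /* hypothetical */
        uint64_t linear_region_size = 0x40000000;  /* hypothetical: 1 GiB */
        uint64_t memstart;

        /* everything below memstart would then be dropped from memblock */
        memstart = round_up(end_of_dram - linear_region_size, MEMSTART_ALIGN);
        printf("memstart_addr = %#llx\n", (unsigned long long)memstart);
        return 0;
}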
diff --git a/arch/arm64/mm/mm.h b/arch/arm64/mm/mm.h
index ef47d99b5cbc..71fe98985455 100644
--- a/arch/arm64/mm/mm.h
+++ b/arch/arm64/mm/mm.h
@@ -1,3 +1,2 @@
-extern void __init bootmem_init(void);
void fixup_init(void);
diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
index 232f787a088a..01c171723bb3 100644
--- a/arch/arm64/mm/mmap.c
+++ b/arch/arm64/mm/mmap.c
@@ -95,8 +95,6 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
mm->get_unmapped_area = arch_get_unmapped_area_topdown;
}
}
-EXPORT_SYMBOL_GPL(arch_pick_mmap_layout);
-
/*
* You really shouldn't be using read() or write() on /dev/mem. This might go
diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index f3e5c74233f3..0f85a46c3e18 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -385,7 +385,7 @@ static void create_mapping_late(phys_addr_t phys, unsigned long virt,
static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end)
{
- unsigned long kernel_start = __pa(_stext);
+ unsigned long kernel_start = __pa(_text);
unsigned long kernel_end = __pa(_etext);
/*
@@ -417,7 +417,7 @@ static void __init __map_memblock(pgd_t *pgd, phys_addr_t start, phys_addr_t end
early_pgtable_alloc);
/*
- * Map the linear alias of the [_stext, _etext) interval as
+ * Map the linear alias of the [_text, _etext) interval as
* read-only/non-executable. This makes the contents of the
* region accessible to subsystems such as hibernate, but
* protects it from inadvertent modification or execution.
@@ -449,8 +449,8 @@ void mark_rodata_ro(void)
{
unsigned long section_size;
- section_size = (unsigned long)__start_rodata - (unsigned long)_stext;
- create_mapping_late(__pa(_stext), (unsigned long)_stext,
+ section_size = (unsigned long)__start_rodata - (unsigned long)_text;
+ create_mapping_late(__pa(_text), (unsigned long)_text,
section_size, PAGE_KERNEL_ROX);
/*
* mark .rodata as read only. Use _etext rather than __end_rodata to
@@ -471,8 +471,8 @@ void fixup_init(void)
unmap_kernel_range((u64)__init_begin, (u64)(__init_end - __init_begin));
}
-static void __init map_kernel_chunk(pgd_t *pgd, void *va_start, void *va_end,
- pgprot_t prot, struct vm_struct *vma)
+static void __init map_kernel_segment(pgd_t *pgd, void *va_start, void *va_end,
+ pgprot_t prot, struct vm_struct *vma)
{
phys_addr_t pa_start = __pa(va_start);
unsigned long size = va_end - va_start;
@@ -499,11 +499,11 @@ static void __init map_kernel(pgd_t *pgd)
{
static struct vm_struct vmlinux_text, vmlinux_rodata, vmlinux_init, vmlinux_data;
- map_kernel_chunk(pgd, _stext, __start_rodata, PAGE_KERNEL_EXEC, &vmlinux_text);
- map_kernel_chunk(pgd, __start_rodata, _etext, PAGE_KERNEL, &vmlinux_rodata);
- map_kernel_chunk(pgd, __init_begin, __init_end, PAGE_KERNEL_EXEC,
- &vmlinux_init);
- map_kernel_chunk(pgd, _data, _end, PAGE_KERNEL, &vmlinux_data);
+ map_kernel_segment(pgd, _text, __start_rodata, PAGE_KERNEL_EXEC, &vmlinux_text);
+ map_kernel_segment(pgd, __start_rodata, _etext, PAGE_KERNEL, &vmlinux_rodata);
+ map_kernel_segment(pgd, __init_begin, __init_end, PAGE_KERNEL_EXEC,
+ &vmlinux_init);
+ map_kernel_segment(pgd, _data, _end, PAGE_KERNEL, &vmlinux_data);
if (!pgd_val(*pgd_offset_raw(pgd, FIXADDR_START))) {
/*
@@ -564,8 +564,6 @@ void __init paging_init(void)
*/
memblock_free(__pa(swapper_pg_dir) + PAGE_SIZE,
SWAPPER_DIR_SIZE - PAGE_SIZE);
-
- bootmem_init();
}
/*
diff --git a/arch/arm64/mm/numa.c b/arch/arm64/mm/numa.c
new file mode 100644
index 000000000000..98dc1047f2a2
--- /dev/null
+++ b/arch/arm64/mm/numa.c
@@ -0,0 +1,396 @@
+/*
+ * NUMA support, based on the x86 implementation.
+ *
+ * Copyright (C) 2015 Cavium Inc.
+ * Author: Ganapatrao Kulkarni <gkulkarni@cavium.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/bootmem.h>
+#include <linux/memblock.h>
+#include <linux/module.h>
+#include <linux/of.h>
+
+struct pglist_data *node_data[MAX_NUMNODES] __read_mostly;
+EXPORT_SYMBOL(node_data);
+nodemask_t numa_nodes_parsed __initdata;
+static int cpu_to_node_map[NR_CPUS] = { [0 ... NR_CPUS-1] = NUMA_NO_NODE };
+
+static int numa_distance_cnt;
+static u8 *numa_distance;
+static int numa_off;
+
+static __init int numa_parse_early_param(char *opt)
+{
+ if (!opt)
+ return -EINVAL;
+ if (!strncmp(opt, "off", 3)) {
+ pr_info("%s\n", "NUMA turned off");
+ numa_off = 1;
+ }
+ return 0;
+}
+early_param("numa", numa_parse_early_param);
+
+cpumask_var_t node_to_cpumask_map[MAX_NUMNODES];
+EXPORT_SYMBOL(node_to_cpumask_map);
+
+#ifdef CONFIG_DEBUG_PER_CPU_MAPS
+
+/*
+ * Returns a pointer to the bitmask of CPUs on Node 'node'.
+ */
+const struct cpumask *cpumask_of_node(int node)
+{
+ if (WARN_ON(node >= nr_node_ids))
+ return cpu_none_mask;
+
+ if (WARN_ON(node_to_cpumask_map[node] == NULL))
+ return cpu_online_mask;
+
+ return node_to_cpumask_map[node];
+}
+EXPORT_SYMBOL(cpumask_of_node);
+
+#endif
+
+static void map_cpu_to_node(unsigned int cpu, int nid)
+{
+ set_cpu_numa_node(cpu, nid);
+ if (nid >= 0)
+ cpumask_set_cpu(cpu, node_to_cpumask_map[nid]);
+}
+
+void numa_clear_node(unsigned int cpu)
+{
+ int nid = cpu_to_node(cpu);
+
+ if (nid >= 0)
+ cpumask_clear_cpu(cpu, node_to_cpumask_map[nid]);
+ set_cpu_numa_node(cpu, NUMA_NO_NODE);
+}
+
+/*
+ * Allocate node_to_cpumask_map based on number of available nodes
+ * Requires node_possible_map to be valid.
+ *
+ * Note: cpumask_of_node() is not valid until after this is done.
+ * (Use CONFIG_DEBUG_PER_CPU_MAPS to check this.)
+ */
+static void __init setup_node_to_cpumask_map(void)
+{
+ unsigned int cpu;
+ int node;
+
+ /* setup nr_node_ids if not done yet */
+ if (nr_node_ids == MAX_NUMNODES)
+ setup_nr_node_ids();
+
+ /* allocate and clear the mapping */
+ for (node = 0; node < nr_node_ids; node++) {
+ alloc_bootmem_cpumask_var(&node_to_cpumask_map[node]);
+ cpumask_clear(node_to_cpumask_map[node]);
+ }
+
+ for_each_possible_cpu(cpu)
+ set_cpu_numa_node(cpu, NUMA_NO_NODE);
+
+ /* cpumask_of_node() will now work */
+ pr_debug("NUMA: Node to cpumask map for %d nodes\n", nr_node_ids);
+}
+
+/*
+ * Set the cpu to node and mem mapping
+ */
+void numa_store_cpu_info(unsigned int cpu)
+{
+ map_cpu_to_node(cpu, numa_off ? 0 : cpu_to_node_map[cpu]);
+}
+
+void __init early_map_cpu_to_node(unsigned int cpu, int nid)
+{
+ /* fallback to node 0 */
+ if (nid < 0 || nid >= MAX_NUMNODES)
+ nid = 0;
+
+ cpu_to_node_map[cpu] = nid;
+}
+
+/**
+ * numa_add_memblk - Set node id to memblk
+ * @nid: NUMA node ID of the new memblk
+ * @start: Start address of the new memblk
+ * @size: Size of the new memblk
+ *
+ * RETURNS:
+ * 0 on success, -errno on failure.
+ */
+int __init numa_add_memblk(int nid, u64 start, u64 size)
+{
+ int ret;
+
+ ret = memblock_set_node(start, size, &memblock.memory, nid);
+ if (ret < 0) {
+ pr_err("NUMA: memblock [0x%llx - 0x%llx] failed to add on node %d\n",
+ start, (start + size - 1), nid);
+ return ret;
+ }
+
+ node_set(nid, numa_nodes_parsed);
+ pr_info("NUMA: Adding memblock [0x%llx - 0x%llx] on node %d\n",
+ start, (start + size - 1), nid);
+ return ret;
+}
+
+/**
+ * Initialize NODE_DATA for a node on the local memory
+ */
+static void __init setup_node_data(int nid, u64 start_pfn, u64 end_pfn)
+{
+ const size_t nd_size = roundup(sizeof(pg_data_t), SMP_CACHE_BYTES);
+ u64 nd_pa;
+ void *nd;
+ int tnid;
+
+ pr_info("NUMA: Initmem setup node %d [mem %#010Lx-%#010Lx]\n",
+ nid, start_pfn << PAGE_SHIFT,
+ (end_pfn << PAGE_SHIFT) - 1);
+
+ nd_pa = memblock_alloc_try_nid(nd_size, SMP_CACHE_BYTES, nid);
+ nd = __va(nd_pa);
+
+ /* report and initialize */
+ pr_info("NUMA: NODE_DATA [mem %#010Lx-%#010Lx]\n",
+ nd_pa, nd_pa + nd_size - 1);
+ tnid = early_pfn_to_nid(nd_pa >> PAGE_SHIFT);
+ if (tnid != nid)
+ pr_info("NUMA: NODE_DATA(%d) on node %d\n", nid, tnid);
+
+ node_data[nid] = nd;
+ memset(NODE_DATA(nid), 0, sizeof(pg_data_t));
+ NODE_DATA(nid)->node_id = nid;
+ NODE_DATA(nid)->node_start_pfn = start_pfn;
+ NODE_DATA(nid)->node_spanned_pages = end_pfn - start_pfn;
+}
+
+/**
+ * numa_free_distance
+ *
+ * The current table is freed.
+ */
+void __init numa_free_distance(void)
+{
+ size_t size;
+
+ if (!numa_distance)
+ return;
+
+ size = numa_distance_cnt * numa_distance_cnt *
+ sizeof(numa_distance[0]);
+
+ memblock_free(__pa(numa_distance), size);
+ numa_distance_cnt = 0;
+ numa_distance = NULL;
+}
+
+/**
+ *
+ * Create a new NUMA distance table.
+ *
+ */
+static int __init numa_alloc_distance(void)
+{
+ size_t size;
+ u64 phys;
+ int i, j;
+
+ size = nr_node_ids * nr_node_ids * sizeof(numa_distance[0]);
+ phys = memblock_find_in_range(0, PFN_PHYS(max_pfn),
+ size, PAGE_SIZE);
+ if (WARN_ON(!phys))
+ return -ENOMEM;
+
+ memblock_reserve(phys, size);
+
+ numa_distance = __va(phys);
+ numa_distance_cnt = nr_node_ids;
+
+ /* fill with the default distances */
+ for (i = 0; i < numa_distance_cnt; i++)
+ for (j = 0; j < numa_distance_cnt; j++)
+ numa_distance[i * numa_distance_cnt + j] = i == j ?
+ LOCAL_DISTANCE : REMOTE_DISTANCE;
+
+ pr_debug("NUMA: Initialized distance table, cnt=%d\n",
+ numa_distance_cnt);
+
+ return 0;
+}
+
+/**
+ * numa_set_distance - Set inter node NUMA distance from node to node.
+ * @from: the 'from' node to set distance
+ * @to: the 'to' node to set distance
+ * @distance: NUMA distance
+ *
+ * Set the distance from node @from to @to to @distance.
+ * If distance table doesn't exist, a warning is printed.
+ *
+ * If @from or @to is higher than the highest known node or lower than zero
+ * or @distance doesn't make sense, the call is ignored.
+ *
+ */
+void __init numa_set_distance(int from, int to, int distance)
+{
+ if (!numa_distance) {
+ pr_warn_once("NUMA: Warning: distance table not allocated yet\n");
+ return;
+ }
+
+ if (from >= numa_distance_cnt || to >= numa_distance_cnt ||
+ from < 0 || to < 0) {
+ pr_warn_once("NUMA: Warning: node ids are out of bound, from=%d to=%d distance=%d\n",
+ from, to, distance);
+ return;
+ }
+
+ if ((u8)distance != distance ||
+ (from == to && distance != LOCAL_DISTANCE)) {
+ pr_warn_once("NUMA: Warning: invalid distance parameter, from=%d to=%d distance=%d\n",
+ from, to, distance);
+ return;
+ }
+
+ numa_distance[from * numa_distance_cnt + to] = distance;
+}
+
+/**
+ * Return NUMA distance @from to @to
+ */
+int __node_distance(int from, int to)
+{
+ if (from >= numa_distance_cnt || to >= numa_distance_cnt)
+ return from == to ? LOCAL_DISTANCE : REMOTE_DISTANCE;
+ return numa_distance[from * numa_distance_cnt + to];
+}
+EXPORT_SYMBOL(__node_distance);
+
+static int __init numa_register_nodes(void)
+{
+ int nid;
+ struct memblock_region *mblk;
+
+ /* Check that valid nid is set to memblks */
+ for_each_memblock(memory, mblk)
+ if (mblk->nid == NUMA_NO_NODE || mblk->nid >= MAX_NUMNODES) {
+ pr_warn("NUMA: Warning: invalid memblk node %d [mem %#010Lx-%#010Lx]\n",
+ mblk->nid, mblk->base,
+ mblk->base + mblk->size - 1);
+ return -EINVAL;
+ }
+
+ /* Finally register nodes. */
+ for_each_node_mask(nid, numa_nodes_parsed) {
+ unsigned long start_pfn, end_pfn;
+
+ get_pfn_range_for_nid(nid, &start_pfn, &end_pfn);
+ setup_node_data(nid, start_pfn, end_pfn);
+ node_set_online(nid);
+ }
+
+ /* Setup online nodes to actual nodes*/
+ node_possible_map = numa_nodes_parsed;
+
+ return 0;
+}
+
+static int __init numa_init(int (*init_func)(void))
+{
+ int ret;
+
+ nodes_clear(numa_nodes_parsed);
+ nodes_clear(node_possible_map);
+ nodes_clear(node_online_map);
+ numa_free_distance();
+
+ ret = numa_alloc_distance();
+ if (ret < 0)
+ return ret;
+
+ ret = init_func();
+ if (ret < 0)
+ return ret;
+
+ if (nodes_empty(numa_nodes_parsed))
+ return -EINVAL;
+
+ ret = numa_register_nodes();
+ if (ret < 0)
+ return ret;
+
+ setup_node_to_cpumask_map();
+
+ /* init boot processor */
+ cpu_to_node_map[0] = 0;
+ map_cpu_to_node(0, 0);
+
+ return 0;
+}
+
+/**
+ * dummy_numa_init - Fallback dummy NUMA init
+ *
+ * Used if there's no underlying NUMA architecture, NUMA initialization
+ * fails, or NUMA is disabled on the command line.
+ *
+ * Must online at least one node (node 0) and add memory blocks that cover all
+ * allowed memory. It is unlikely that this function fails.
+ */
+static int __init dummy_numa_init(void)
+{
+ int ret;
+ struct memblock_region *mblk;
+
+ pr_info("%s\n", "No NUMA configuration found");
+ pr_info("NUMA: Faking a node at [mem %#018Lx-%#018Lx]\n",
+ 0LLU, PFN_PHYS(max_pfn) - 1);
+
+ for_each_memblock(memory, mblk) {
+ ret = numa_add_memblk(0, mblk->base, mblk->size);
+ if (!ret)
+ continue;
+
+ pr_err("NUMA init failed\n");
+ return ret;
+ }
+
+ numa_off = 1;
+ return 0;
+}
+
+/**
+ * arm64_numa_init - Initialize NUMA
+ *
+ * Try each configured NUMA initialization method until one succeeds. The
+ * last fallback is dummy single node config encompassing whole memory.
+ */
+void __init arm64_numa_init(void)
+{
+ if (!numa_off) {
+ if (!numa_init(of_numa_init))
+ return;
+ }
+
+ numa_init(dummy_numa_init);
+}
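
numa_alloc_distance() and numa_set_distance() above keep the node distance matrix as a flat byte array indexed by from * cnt + to, seeded with LOCAL_DISTANCE on the diagonal and REMOTE_DISTANCE elsewhere. A stand-alone illustration of that layout; the 10/20 values follow the usual convention and the override entry is invented.

#include <stdio.h>
#include <stdlib.h>

#define LOCAL_DISTANCE  10
#define REMOTE_DISTANCE 20

int main(void)
{
        int cnt = 3;                        /* pretend three nodes were parsed */
        unsigned char *dist = malloc((size_t)cnt * cnt);

        if (!dist)
                return 1;

        /* default distances: local on the diagonal, remote everywhere else */
        for (int i = 0; i < cnt; i++)
                for (int j = 0; j < cnt; j++)
                        dist[i * cnt + j] = (i == j) ? LOCAL_DISTANCE : REMOTE_DISTANCE;

        dist[0 * cnt + 2] = 30;             /* e.g. a firmware-provided override */
        dist[2 * cnt + 0] = 30;

        for (int i = 0; i < cnt; i++) {
                for (int j = 0; j < cnt; j++)
                        printf("%4d", dist[i * cnt + j]);
                printf("\n");
        }
        free(dist);
        return 0;
}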
diff --git a/arch/arm64/mm/proc-macros.S b/arch/arm64/mm/proc-macros.S
deleted file mode 100644
index e6a30e1268a8..000000000000
--- a/arch/arm64/mm/proc-macros.S
+++ /dev/null
@@ -1,98 +0,0 @@
-/*
- * Based on arch/arm/mm/proc-macros.S
- *
- * Copyright (C) 2012 ARM Ltd.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program. If not, see <http://www.gnu.org/licenses/>.
- */
-
-#include <asm/asm-offsets.h>
-#include <asm/thread_info.h>
-
-/*
- * vma_vm_mm - get mm pointer from vma pointer (vma->vm_mm)
- */
- .macro vma_vm_mm, rd, rn
- ldr \rd, [\rn, #VMA_VM_MM]
- .endm
-
-/*
- * mmid - get context id from mm pointer (mm->context.id)
- */
- .macro mmid, rd, rn
- ldr \rd, [\rn, #MM_CONTEXT_ID]
- .endm
-
-/*
- * dcache_line_size - get the minimum D-cache line size from the CTR register.
- */
- .macro dcache_line_size, reg, tmp
- mrs \tmp, ctr_el0 // read CTR
- ubfm \tmp, \tmp, #16, #19 // cache line size encoding
- mov \reg, #4 // bytes per word
- lsl \reg, \reg, \tmp // actual cache line size
- .endm
-
-/*
- * icache_line_size - get the minimum I-cache line size from the CTR register.
- */
- .macro icache_line_size, reg, tmp
- mrs \tmp, ctr_el0 // read CTR
- and \tmp, \tmp, #0xf // cache line size encoding
- mov \reg, #4 // bytes per word
- lsl \reg, \reg, \tmp // actual cache line size
- .endm
-
-/*
- * tcr_set_idmap_t0sz - update TCR.T0SZ so that we can load the ID map
- */
- .macro tcr_set_idmap_t0sz, valreg, tmpreg
-#ifndef CONFIG_ARM64_VA_BITS_48
- ldr_l \tmpreg, idmap_t0sz
- bfi \valreg, \tmpreg, #TCR_T0SZ_OFFSET, #TCR_TxSZ_WIDTH
-#endif
- .endm
-
-/*
- * Macro to perform a data cache maintenance for the interval
- * [kaddr, kaddr + size)
- *
- * op: operation passed to dc instruction
- * domain: domain used in dsb instruciton
- * kaddr: starting virtual address of the region
- * size: size of the region
- * Corrupts: kaddr, size, tmp1, tmp2
- */
- .macro dcache_by_line_op op, domain, kaddr, size, tmp1, tmp2
- dcache_line_size \tmp1, \tmp2
- add \size, \kaddr, \size
- sub \tmp2, \tmp1, #1
- bic \kaddr, \kaddr, \tmp2
-9998: dc \op, \kaddr
- add \kaddr, \kaddr, \tmp1
- cmp \kaddr, \size
- b.lo 9998b
- dsb \domain
- .endm
-
-/*
- * reset_pmuserenr_el0 - reset PMUSERENR_EL0 if PMUv3 present
- */
- .macro reset_pmuserenr_el0, tmpreg
- mrs \tmpreg, id_aa64dfr0_el1 // Check ID_AA64DFR0_EL1 PMUVer
- sbfx \tmpreg, \tmpreg, #8, #4
- cmp \tmpreg, #1 // Skip if no PMU present
- b.lt 9000f
- msr pmuserenr_el0, xzr // Disable PMU access from EL0
-9000:
- .endm
diff --git a/arch/arm64/mm/proc.S b/arch/arm64/mm/proc.S
index 543f5198005a..c4317879b938 100644
--- a/arch/arm64/mm/proc.S
+++ b/arch/arm64/mm/proc.S
@@ -23,13 +23,11 @@
#include <asm/assembler.h>
#include <asm/asm-offsets.h>
#include <asm/hwcap.h>
-#include <asm/pgtable-hwdef.h>
#include <asm/pgtable.h>
+#include <asm/pgtable-hwdef.h>
#include <asm/cpufeature.h>
#include <asm/alternative.h>
-#include "proc-macros.S"
-
#ifdef CONFIG_ARM64_64K_PAGES
#define TCR_TG_FLAGS TCR_TG0_64K | TCR_TG1_64K
#elif defined(CONFIG_ARM64_16K_PAGES)
@@ -66,62 +64,50 @@ ENTRY(cpu_do_suspend)
mrs x2, tpidr_el0
mrs x3, tpidrro_el0
mrs x4, contextidr_el1
- mrs x5, mair_el1
- mrs x6, cpacr_el1
- mrs x7, ttbr1_el1
- mrs x8, tcr_el1
- mrs x9, vbar_el1
- mrs x10, mdscr_el1
- mrs x11, oslsr_el1
- mrs x12, sctlr_el1
+ mrs x5, cpacr_el1
+ mrs x6, tcr_el1
+ mrs x7, vbar_el1
+ mrs x8, mdscr_el1
+ mrs x9, oslsr_el1
+ mrs x10, sctlr_el1
stp x2, x3, [x0]
- stp x4, x5, [x0, #16]
- stp x6, x7, [x0, #32]
- stp x8, x9, [x0, #48]
- stp x10, x11, [x0, #64]
- str x12, [x0, #80]
+ stp x4, xzr, [x0, #16]
+ stp x5, x6, [x0, #32]
+ stp x7, x8, [x0, #48]
+ stp x9, x10, [x0, #64]
ret
ENDPROC(cpu_do_suspend)
/**
* cpu_do_resume - restore CPU register context
*
- * x0: Physical address of context pointer
- * x1: ttbr0_el1 to be restored
- *
- * Returns:
- * sctlr_el1 value in x0
+ * x0: Address of context pointer
*/
ENTRY(cpu_do_resume)
- /*
- * Invalidate local tlb entries before turning on MMU
- */
- tlbi vmalle1
ldp x2, x3, [x0]
ldp x4, x5, [x0, #16]
- ldp x6, x7, [x0, #32]
- ldp x8, x9, [x0, #48]
- ldp x10, x11, [x0, #64]
- ldr x12, [x0, #80]
+ ldp x6, x8, [x0, #32]
+ ldp x9, x10, [x0, #48]
+ ldp x11, x12, [x0, #64]
msr tpidr_el0, x2
msr tpidrro_el0, x3
msr contextidr_el1, x4
- msr mair_el1, x5
msr cpacr_el1, x6
- msr ttbr0_el1, x1
- msr ttbr1_el1, x7
- tcr_set_idmap_t0sz x8, x7
+
+ /* Don't change t0sz here, mask those bits when restoring */
+ mrs x5, tcr_el1
+ bfi x8, x5, TCR_T0SZ_OFFSET, TCR_TxSZ_WIDTH
+
msr tcr_el1, x8
msr vbar_el1, x9
msr mdscr_el1, x10
+ msr sctlr_el1, x12
/*
* Restore oslsr_el1 by writing oslar_el1
*/
ubfx x11, x11, #1, #1
msr oslar_el1, x11
reset_pmuserenr_el0 x0 // Disable PMU access from EL0
- mov x0, x12
- dsb nsh // Make sure local tlb invalidation completed
isb
ret
ENDPROC(cpu_do_resume)
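
The bfi added to cpu_do_resume() above splices the live T0SZ field into the TCR value being restored, so a saved image cannot change it. A C model of that bit-field insert; the offset and width below are taken as assumptions standing in for TCR_T0SZ_OFFSET and TCR_TxSZ_WIDTH.

#include <stdio.h>
#include <stdint.h>

#define T0SZ_OFFSET 0        /* assumed field position */
#define T0SZ_WIDTH  6        /* assumed field width */

/* Equivalent of "bfi dst, src, #lsb, #width": insert src[width-1:0] at lsb. */
static uint64_t bfi(uint64_t dst, uint64_t src, unsigned int lsb, unsigned int width)
{
        uint64_t mask = ((UINT64_C(1) << width) - 1) << lsb;

        return (dst & ~mask) | ((src << lsb) & mask);
}

int main(void)
{
        uint64_t saved_tcr = 0x00000032b5593519;   /* hypothetical saved value */
        uint64_t live_tcr  = 0x00000032b5593510;   /* hypothetical current value */

        /* keep everything from the saved TCR except its T0SZ field */
        printf("restored tcr = %#llx\n",
               (unsigned long long)bfi(saved_tcr, live_tcr, T0SZ_OFFSET, T0SZ_WIDTH));
        return 0;
}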
diff --git a/arch/arm64/net/bpf_jit_comp.c b/arch/arm64/net/bpf_jit_comp.c
index a34420a5df9a..49ba37e4bfc0 100644
--- a/arch/arm64/net/bpf_jit_comp.c
+++ b/arch/arm64/net/bpf_jit_comp.c
@@ -31,8 +31,8 @@
int bpf_jit_enable __read_mostly;
-#define TMP_REG_1 (MAX_BPF_REG + 0)
-#define TMP_REG_2 (MAX_BPF_REG + 1)
+#define TMP_REG_1 (MAX_BPF_JIT_REG + 0)
+#define TMP_REG_2 (MAX_BPF_JIT_REG + 1)
/* Map BPF registers to A64 registers */
static const int bpf2a64[] = {
@@ -51,15 +51,16 @@ static const int bpf2a64[] = {
[BPF_REG_9] = A64_R(22),
/* read-only frame pointer to access stack */
[BPF_REG_FP] = A64_R(25),
- /* temporary register for internal BPF JIT */
- [TMP_REG_1] = A64_R(23),
- [TMP_REG_2] = A64_R(24),
+ /* temporary registers for internal BPF JIT */
+ [TMP_REG_1] = A64_R(10),
+ [TMP_REG_2] = A64_R(11),
+ /* temporary register for blinding constants */
+ [BPF_REG_AX] = A64_R(9),
};
struct jit_ctx {
const struct bpf_prog *prog;
int idx;
- int tmp_used;
int epilogue_offset;
int *offset;
u32 *image;
@@ -152,8 +153,6 @@ static void build_prologue(struct jit_ctx *ctx)
const u8 r8 = bpf2a64[BPF_REG_8];
const u8 r9 = bpf2a64[BPF_REG_9];
const u8 fp = bpf2a64[BPF_REG_FP];
- const u8 tmp1 = bpf2a64[TMP_REG_1];
- const u8 tmp2 = bpf2a64[TMP_REG_2];
/*
* BPF prog stack layout
@@ -165,7 +164,7 @@ static void build_prologue(struct jit_ctx *ctx)
* | ... | callee saved registers
* +-----+
* | | x25/x26
- * BPF fp register => -80:+-----+ <= (BPF_FP)
+ * BPF fp register => -64:+-----+ <= (BPF_FP)
* | |
* | ... | BPF prog stack
* | |
@@ -187,8 +186,6 @@ static void build_prologue(struct jit_ctx *ctx)
/* Save callee-saved register */
emit(A64_PUSH(r6, r7, A64_SP), ctx);
emit(A64_PUSH(r8, r9, A64_SP), ctx);
- if (ctx->tmp_used)
- emit(A64_PUSH(tmp1, tmp2, A64_SP), ctx);
/* Save fp (x25) and x26. SP requires 16 bytes alignment */
emit(A64_PUSH(fp, A64_R(26), A64_SP), ctx);
@@ -208,8 +205,6 @@ static void build_epilogue(struct jit_ctx *ctx)
const u8 r8 = bpf2a64[BPF_REG_8];
const u8 r9 = bpf2a64[BPF_REG_9];
const u8 fp = bpf2a64[BPF_REG_FP];
- const u8 tmp1 = bpf2a64[TMP_REG_1];
- const u8 tmp2 = bpf2a64[TMP_REG_2];
/* We're done with BPF stack */
emit(A64_ADD_I(1, A64_SP, A64_SP, STACK_SIZE), ctx);
@@ -218,8 +213,6 @@ static void build_epilogue(struct jit_ctx *ctx)
emit(A64_POP(fp, A64_R(26), A64_SP), ctx);
/* Restore callee-saved register */
- if (ctx->tmp_used)
- emit(A64_POP(tmp1, tmp2, A64_SP), ctx);
emit(A64_POP(r8, r9, A64_SP), ctx);
emit(A64_POP(r6, r7, A64_SP), ctx);
@@ -315,7 +308,6 @@ static int build_insn(const struct bpf_insn *insn, struct jit_ctx *ctx)
emit(A64_UDIV(is64, dst, dst, src), ctx);
break;
case BPF_MOD:
- ctx->tmp_used = 1;
emit(A64_UDIV(is64, tmp, dst, src), ctx);
emit(A64_MUL(is64, tmp, tmp, src), ctx);
emit(A64_SUB(is64, dst, dst, tmp), ctx);
@@ -388,49 +380,41 @@ emit_bswap_uxt:
/* dst = dst OP imm */
case BPF_ALU | BPF_ADD | BPF_K:
case BPF_ALU64 | BPF_ADD | BPF_K:
- ctx->tmp_used = 1;
emit_a64_mov_i(is64, tmp, imm, ctx);
emit(A64_ADD(is64, dst, dst, tmp), ctx);
break;
case BPF_ALU | BPF_SUB | BPF_K:
case BPF_ALU64 | BPF_SUB | BPF_K:
- ctx->tmp_used = 1;
emit_a64_mov_i(is64, tmp, imm, ctx);
emit(A64_SUB(is64, dst, dst, tmp), ctx);
break;
case BPF_ALU | BPF_AND | BPF_K:
case BPF_ALU64 | BPF_AND | BPF_K:
- ctx->tmp_used = 1;
emit_a64_mov_i(is64, tmp, imm, ctx);
emit(A64_AND(is64, dst, dst, tmp), ctx);
break;
case BPF_ALU | BPF_OR | BPF_K:
case BPF_ALU64 | BPF_OR | BPF_K:
- ctx->tmp_used = 1;
emit_a64_mov_i(is64, tmp, imm, ctx);
emit(A64_ORR(is64, dst, dst, tmp), ctx);
break;
case BPF_ALU | BPF_XOR | BPF_K:
case BPF_ALU64 | BPF_XOR | BPF_K:
- ctx->tmp_used = 1;
emit_a64_mov_i(is64, tmp, imm, ctx);
emit(A64_EOR(is64, dst, dst, tmp), ctx);
break;
case BPF_ALU | BPF_MUL | BPF_K:
case BPF_ALU64 | BPF_MUL | BPF_K:
- ctx->tmp_used = 1;
emit_a64_mov_i(is64, tmp, imm, ctx);
emit(A64_MUL(is64, dst, dst, tmp), ctx);
break;
case BPF_ALU | BPF_DIV | BPF_K:
case BPF_ALU64 | BPF_DIV | BPF_K:
- ctx->tmp_used = 1;
emit_a64_mov_i(is64, tmp, imm, ctx);
emit(A64_UDIV(is64, dst, dst, tmp), ctx);
break;
case BPF_ALU | BPF_MOD | BPF_K:
case BPF_ALU64 | BPF_MOD | BPF_K:
- ctx->tmp_used = 1;
emit_a64_mov_i(is64, tmp2, imm, ctx);
emit(A64_UDIV(is64, tmp, dst, tmp2), ctx);
emit(A64_MUL(is64, tmp, tmp, tmp2), ctx);
@@ -476,6 +460,7 @@ emit_cond_jmp:
case BPF_JGE:
jmp_cond = A64_COND_CS;
break;
+ case BPF_JSET:
case BPF_JNE:
jmp_cond = A64_COND_NE;
break;
@@ -500,12 +485,10 @@ emit_cond_jmp:
case BPF_JMP | BPF_JNE | BPF_K:
case BPF_JMP | BPF_JSGT | BPF_K:
case BPF_JMP | BPF_JSGE | BPF_K:
- ctx->tmp_used = 1;
emit_a64_mov_i(1, tmp, imm, ctx);
emit(A64_CMP(1, dst, tmp), ctx);
goto emit_cond_jmp;
case BPF_JMP | BPF_JSET | BPF_K:
- ctx->tmp_used = 1;
emit_a64_mov_i(1, tmp, imm, ctx);
emit(A64_TST(1, dst, tmp), ctx);
goto emit_cond_jmp;
@@ -515,7 +498,6 @@ emit_cond_jmp:
const u8 r0 = bpf2a64[BPF_REG_0];
const u64 func = (u64)__bpf_call_base + imm;
- ctx->tmp_used = 1;
emit_a64_mov_i64(tmp, func, ctx);
emit(A64_PUSH(A64_FP, A64_LR, A64_SP), ctx);
emit(A64_MOV(1, A64_FP, A64_SP), ctx);
@@ -561,7 +543,6 @@ emit_cond_jmp:
case BPF_LDX | BPF_MEM | BPF_H:
case BPF_LDX | BPF_MEM | BPF_B:
case BPF_LDX | BPF_MEM | BPF_DW:
- ctx->tmp_used = 1;
emit_a64_mov_i(1, tmp, off, ctx);
switch (BPF_SIZE(code)) {
case BPF_W:
@@ -585,7 +566,6 @@ emit_cond_jmp:
case BPF_ST | BPF_MEM | BPF_B:
case BPF_ST | BPF_MEM | BPF_DW:
/* Load imm to a register then store it */
- ctx->tmp_used = 1;
emit_a64_mov_i(1, tmp2, off, ctx);
emit_a64_mov_i(1, tmp, imm, ctx);
switch (BPF_SIZE(code)) {
@@ -609,7 +589,6 @@ emit_cond_jmp:
case BPF_STX | BPF_MEM | BPF_H:
case BPF_STX | BPF_MEM | BPF_B:
case BPF_STX | BPF_MEM | BPF_DW:
- ctx->tmp_used = 1;
emit_a64_mov_i(1, tmp, off, ctx);
switch (BPF_SIZE(code)) {
case BPF_W:
@@ -761,31 +740,45 @@ void bpf_jit_compile(struct bpf_prog *prog)
/* Nothing to do here. We support Internal BPF. */
}
-void bpf_int_jit_compile(struct bpf_prog *prog)
+struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
{
+ struct bpf_prog *tmp, *orig_prog = prog;
struct bpf_binary_header *header;
+ bool tmp_blinded = false;
struct jit_ctx ctx;
int image_size;
u8 *image_ptr;
if (!bpf_jit_enable)
- return;
+ return orig_prog;
- if (!prog || !prog->len)
- return;
+ tmp = bpf_jit_blind_constants(prog);
+ /* If blinding was requested and we failed during blinding,
+ * we must fall back to the interpreter.
+ */
+ if (IS_ERR(tmp))
+ return orig_prog;
+ if (tmp != prog) {
+ tmp_blinded = true;
+ prog = tmp;
+ }
memset(&ctx, 0, sizeof(ctx));
ctx.prog = prog;
ctx.offset = kcalloc(prog->len, sizeof(int), GFP_KERNEL);
- if (ctx.offset == NULL)
- return;
+ if (ctx.offset == NULL) {
+ prog = orig_prog;
+ goto out;
+ }
/* 1. Initial fake pass to compute ctx->idx. */
- /* Fake pass to fill in ctx->offset and ctx->tmp_used. */
- if (build_body(&ctx))
- goto out;
+ /* Fake pass to fill in ctx->offset. */
+ if (build_body(&ctx)) {
+ prog = orig_prog;
+ goto out_off;
+ }
build_prologue(&ctx);
@@ -796,8 +789,10 @@ void bpf_int_jit_compile(struct bpf_prog *prog)
image_size = sizeof(u32) * ctx.idx;
header = bpf_jit_binary_alloc(image_size, &image_ptr,
sizeof(u32), jit_fill_hole);
- if (header == NULL)
- goto out;
+ if (header == NULL) {
+ prog = orig_prog;
+ goto out_off;
+ }
/* 2. Now, the actual pass. */
@@ -808,7 +803,8 @@ void bpf_int_jit_compile(struct bpf_prog *prog)
if (build_body(&ctx)) {
bpf_jit_binary_free(header);
- goto out;
+ prog = orig_prog;
+ goto out_off;
}
build_epilogue(&ctx);
@@ -816,7 +812,8 @@ void bpf_int_jit_compile(struct bpf_prog *prog)
/* 3. Extra pass to validate JITed code. */
if (validate_code(&ctx)) {
bpf_jit_binary_free(header);
- goto out;
+ prog = orig_prog;
+ goto out_off;
}
/* And we're done. */
@@ -828,8 +825,14 @@ void bpf_int_jit_compile(struct bpf_prog *prog)
set_memory_ro((unsigned long)header, header->pages);
prog->bpf_func = (void *)ctx.image;
prog->jited = 1;
-out:
+
+out_off:
kfree(ctx.offset);
+out:
+ if (tmp_blinded)
+ bpf_jit_prog_release_other(prog, prog == orig_prog ?
+ tmp : orig_prog);
+ return prog;
}
void bpf_jit_free(struct bpf_prog *prog)
diff --git a/arch/avr32/Kconfig b/arch/avr32/Kconfig
index b6878eb64884..7e75d45e20cd 100644
--- a/arch/avr32/Kconfig
+++ b/arch/avr32/Kconfig
@@ -4,6 +4,7 @@ config AVR32
# that we usually don't need on AVR32.
select EXPERT
select HAVE_CLK
+ select HAVE_EXIT_THREAD
select HAVE_OPROFILE
select HAVE_KPROBES
select VIRT_TO_BUS
@@ -17,6 +18,7 @@ config AVR32
select GENERIC_CLOCKEVENTS
select HAVE_MOD_ARCH_SPECIFIC
select MODULES_USE_ELF_RELA
+ select HAVE_NMI
help
AVR32 is a high-performance 32-bit RISC microprocessor core,
designed for cost-sensitive embedded applications, with particular
@@ -74,7 +76,7 @@ config PLATFORM_AT32AP
select SUBARCH_AVR32B
select MMU
select PERFORMANCE_COUNTERS
- select ARCH_REQUIRE_GPIOLIB
+ select GPIOLIB
select GENERIC_ALLOCATOR
select HAVE_FB_ATMEL
diff --git a/arch/avr32/include/asm/addrspace.h b/arch/avr32/include/asm/addrspace.h
index 366794858ec7..5a47a7979648 100644
--- a/arch/avr32/include/asm/addrspace.h
+++ b/arch/avr32/include/asm/addrspace.h
@@ -1,5 +1,5 @@
/*
- * Defitions for the address spaces of the AVR32 CPUs. Heavily based on
+ * Definitions for the address spaces of the AVR32 CPUs. Heavily based on
* include/asm-sh/addrspace.h
*
* Copyright (C) 2004-2006 Atmel Corporation
diff --git a/arch/avr32/kernel/process.c b/arch/avr32/kernel/process.c
index 42a53e740a7e..68e5b9dac059 100644
--- a/arch/avr32/kernel/process.c
+++ b/arch/avr32/kernel/process.c
@@ -62,9 +62,9 @@ void machine_restart(char *cmd)
/*
* Free current thread data structures etc
*/
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
{
- ocd_disable(current);
+ ocd_disable(tsk);
}
void flush_thread(void)
diff --git a/arch/avr32/mach-at32ap/at32ap700x.c b/arch/avr32/mach-at32ap/at32ap700x.c
index bf445aa48282..00d6dcc1d9b6 100644
--- a/arch/avr32/mach-at32ap/at32ap700x.c
+++ b/arch/avr32/mach-at32ap/at32ap700x.c
@@ -1365,8 +1365,8 @@ at32_add_device_mci(unsigned int id, struct mci_platform_data *data)
slave->dma_dev = &dw_dmac0_device.dev;
slave->src_id = 0;
slave->dst_id = 1;
- slave->src_master = 1;
- slave->dst_master = 0;
+ slave->m_master = 1;
+ slave->p_master = 0;
data->dma_slave = slave;
data->dma_filter = at32_mci_dma_filter;
@@ -2061,16 +2061,16 @@ at32_add_device_ac97c(unsigned int id, struct ac97c_platform_data *data,
if (flags & AC97C_CAPTURE) {
rx_dws->dma_dev = &dw_dmac0_device.dev;
rx_dws->src_id = 3;
- rx_dws->src_master = 0;
- rx_dws->dst_master = 1;
+ rx_dws->m_master = 0;
+ rx_dws->p_master = 1;
}
/* Check if DMA slave interface for playback should be configured. */
if (flags & AC97C_PLAYBACK) {
tx_dws->dma_dev = &dw_dmac0_device.dev;
tx_dws->dst_id = 4;
- tx_dws->src_master = 0;
- tx_dws->dst_master = 1;
+ tx_dws->m_master = 0;
+ tx_dws->p_master = 1;
}
if (platform_device_add_data(pdev, data,
@@ -2141,8 +2141,8 @@ at32_add_device_abdac(unsigned int id, struct atmel_abdac_pdata *data)
dws->dma_dev = &dw_dmac0_device.dev;
dws->dst_id = 2;
- dws->src_master = 0;
- dws->dst_master = 1;
+ dws->m_master = 0;
+ dws->p_master = 1;
if (platform_device_add_data(pdev, data,
sizeof(struct atmel_abdac_pdata)))
diff --git a/arch/blackfin/Kconfig b/arch/blackfin/Kconfig
index a63c12259e77..28c63fea786d 100644
--- a/arch/blackfin/Kconfig
+++ b/arch/blackfin/Kconfig
@@ -40,6 +40,7 @@ config BLACKFIN
select HAVE_MOD_ARCH_SPECIFIC
select MODULES_USE_ELF_RELA
select HAVE_DEBUG_STACKOVERFLOW
+ select HAVE_NMI
config GENERIC_CSUM
def_bool y
diff --git a/arch/blackfin/include/asm/processor.h b/arch/blackfin/include/asm/processor.h
index 7acd46653df3..0c265aba94ad 100644
--- a/arch/blackfin/include/asm/processor.h
+++ b/arch/blackfin/include/asm/processor.h
@@ -76,13 +76,6 @@ static inline void release_thread(struct task_struct *dead_task)
}
/*
- * Free current thread data structures etc..
- */
-static inline void exit_thread(void)
-{
-}
-
-/*
* Return saved PC of a blocked thread.
*/
#define thread_saved_pc(tsk) (tsk->thread.pc)
diff --git a/arch/blackfin/lib/udivsi3.S b/arch/blackfin/lib/udivsi3.S
index 748a6a2e8c17..90bfa809b392 100644
--- a/arch/blackfin/lib/udivsi3.S
+++ b/arch/blackfin/lib/udivsi3.S
@@ -154,7 +154,7 @@ ENTRY(___udivsi3)
CC = R7 < 0; /* Check quotient(AQ) */
/* If AQ==0, we'll sub divisor */
IF CC R5 = R1; /* and if AQ==1, we'll add it. */
- R3 = R3 + R5; /* Add/sub divsor to partial remainder */
+ R3 = R3 + R5; /* Add/sub divisor to partial remainder */
R7 = R3 ^ R1; /* Generate next quotient bit */
R5 = R7 >> 31; /* Get AQ */
diff --git a/arch/blackfin/mach-bf609/include/mach/defBF60x_base.h b/arch/blackfin/mach-bf609/include/mach/defBF60x_base.h
index 35caa7bc192c..3933e912cacd 100644
--- a/arch/blackfin/mach-bf609/include/mach/defBF60x_base.h
+++ b/arch/blackfin/mach-bf609/include/mach/defBF60x_base.h
@@ -2689,7 +2689,7 @@
#define L2CTL0_STAT 0xFFCA3010 /* L2CTL0 L2 Status Register */
#define L2CTL0_RPCR 0xFFCA3014 /* L2CTL0 L2 Read Priority Count Register */
#define L2CTL0_WPCR 0xFFCA3018 /* L2CTL0 L2 Write Priority Count Register */
-#define L2CTL0_RFA 0xFFCA3024 /* L2CTL0 L2 Refresh Address Regsiter */
+#define L2CTL0_RFA 0xFFCA3024 /* L2CTL0 L2 Refresh Address Register */
#define L2CTL0_ERRADDR0 0xFFCA3040 /* L2CTL0 L2 Bank 0 ECC Error Address Register */
#define L2CTL0_ERRADDR1 0xFFCA3044 /* L2CTL0 L2 Bank 1 ECC Error Address Register */
#define L2CTL0_ERRADDR2 0xFFCA3048 /* L2CTL0 L2 Bank 2 ECC Error Address Register */
diff --git a/arch/c6x/include/asm/clock.h b/arch/c6x/include/asm/clock.h
index bcf42b2b4b1e..e2f818a7a1d1 100644
--- a/arch/c6x/include/asm/clock.h
+++ b/arch/c6x/include/asm/clock.h
@@ -101,7 +101,7 @@ struct clk {
#define CLK_PLL BIT(2) /* PLL-derived clock */
#define PRE_PLL BIT(3) /* source is before PLL mult/div */
#define FIXED_DIV_PLL BIT(4) /* fixed divisor from PLL */
-#define FIXED_RATE_PLL BIT(5) /* fixed ouput rate PLL */
+#define FIXED_RATE_PLL BIT(5) /* fixed output rate PLL */
#define MAX_PLL_SYSCLKS 16
diff --git a/arch/c6x/include/uapi/asm/unistd.h b/arch/c6x/include/uapi/asm/unistd.h
index e7d09a614d10..12d73d9d81f5 100644
--- a/arch/c6x/include/uapi/asm/unistd.h
+++ b/arch/c6x/include/uapi/asm/unistd.h
@@ -14,6 +14,7 @@
* more details.
*/
+#define __ARCH_WANT_RENAMEAT
#define __ARCH_WANT_SYS_CLONE
/* Use the standard ABI for syscalls. */
diff --git a/arch/c6x/kernel/process.c b/arch/c6x/kernel/process.c
index 3ae9f5a166a0..0ee7686a78f3 100644
--- a/arch/c6x/kernel/process.c
+++ b/arch/c6x/kernel/process.c
@@ -82,10 +82,6 @@ void flush_thread(void)
{
}
-void exit_thread(void)
-{
-}
-
/*
* Do necessary setup to start up a newly executed thread.
*/
diff --git a/arch/c6x/platforms/cache.c b/arch/c6x/platforms/cache.c
index 46fd2d530271..ec3c887c79ec 100644
--- a/arch/c6x/platforms/cache.c
+++ b/arch/c6x/platforms/cache.c
@@ -145,7 +145,7 @@ loop:
spin_lock_irqsave(&cache_lock, flags);
/*
- * If another cache operation is occuring
+ * If another cache operation is occurring
*/
if (unlikely(imcr_get(wc_reg))) {
spin_unlock_irqrestore(&cache_lock, flags);
diff --git a/arch/cris/Kconfig b/arch/cris/Kconfig
index e086f9e93728..deba2662b9f3 100644
--- a/arch/cris/Kconfig
+++ b/arch/cris/Kconfig
@@ -59,9 +59,10 @@ config CRIS
select GENERIC_IOMAP
select MODULES_USE_ELF_RELA
select CLONE_BACKWARDS2
+ select HAVE_EXIT_THREAD if ETRAX_ARCH_V32
select OLD_SIGSUSPEND
select OLD_SIGACTION
- select ARCH_REQUIRE_GPIOLIB
+ select GPIOLIB
select IRQ_DOMAIN if ETRAX_ARCH_V32
select OF if ETRAX_ARCH_V32
select OF_EARLY_FLATTREE if ETRAX_ARCH_V32
@@ -69,6 +70,7 @@ config CRIS
select GENERIC_CLOCKEVENTS if ETRAX_ARCH_V32
select GENERIC_SCHED_CLOCK if ETRAX_ARCH_V32
select HAVE_DEBUG_BUGVERBOSE if ETRAX_ARCH_V32
+ select HAVE_NMI
config HZ
int
diff --git a/arch/cris/arch-v10/drivers/axisflashmap.c b/arch/cris/arch-v10/drivers/axisflashmap.c
index a4bbdfd37bd8..60d57c590032 100644
--- a/arch/cris/arch-v10/drivers/axisflashmap.c
+++ b/arch/cris/arch-v10/drivers/axisflashmap.c
@@ -212,7 +212,7 @@ static struct mtd_info *probe_cs(struct map_info *map_cs)
/*
* Probe each chip select individually for flash chips. If there are chips on
* both cse0 and cse1, the mtd_info structs will be concatenated to one struct
- * so that MTD partitions can cross chip boundries.
+ * so that MTD partitions can cross chip boundaries.
*
* The only known restriction to how you can mount your chips is that each
* chip select must hold similar flash chips. But you need external hardware
diff --git a/arch/cris/arch-v10/kernel/process.c b/arch/cris/arch-v10/kernel/process.c
index 02b783457be0..96e5afef6b47 100644
--- a/arch/cris/arch-v10/kernel/process.c
+++ b/arch/cris/arch-v10/kernel/process.c
@@ -35,15 +35,6 @@ void default_idle(void)
local_irq_enable();
}
-/*
- * Free current thread data structures etc..
- */
-
-void exit_thread(void)
-{
- /* Nothing needs to be done. */
-}
-
/* if the watchdog is enabled, we can simply disable interrupts and go
* into an eternal loop, and the watchdog will reset the CPU after 0.1s
* if on the other hand the watchdog wasn't enabled, we just enable it and wait
diff --git a/arch/cris/arch-v32/drivers/axisflashmap.c b/arch/cris/arch-v32/drivers/axisflashmap.c
index c6309a182f46..bd10d3ba0949 100644
--- a/arch/cris/arch-v32/drivers/axisflashmap.c
+++ b/arch/cris/arch-v32/drivers/axisflashmap.c
@@ -246,7 +246,7 @@ static struct mtd_info *probe_cs(struct map_info *map_cs)
/*
* Probe each chip select individually for flash chips. If there are chips on
* both cse0 and cse1, the mtd_info structs will be concatenated to one struct
- * so that MTD partitions can cross chip boundries.
+ * so that MTD partitions can cross chip boundaries.
*
* The only known restriction to how you can mount your chips is that each
* chip select must hold similar flash chips. But you need external hardware
diff --git a/arch/cris/arch-v32/drivers/cryptocop.c b/arch/cris/arch-v32/drivers/cryptocop.c
index 617645d21b20..2081d8b45f06 100644
--- a/arch/cris/arch-v32/drivers/cryptocop.c
+++ b/arch/cris/arch-v32/drivers/cryptocop.c
@@ -525,7 +525,7 @@ static int setup_cipher_iv_desc(struct cryptocop_tfrm_ctx *tc, struct cryptocop_
return 0;
}
-/* Map the ouput length of the transform to operation output starting on the inject index. */
+/* Map the output length of the transform to operation output starting on the inject index. */
static int create_input_descriptors(struct cryptocop_operation *operation, struct cryptocop_tfrm_ctx *tc, struct cryptocop_dma_desc **id, int alloc_flag)
{
int err = 0;
diff --git a/arch/cris/arch-v32/drivers/mach-a3/nandflash.c b/arch/cris/arch-v32/drivers/mach-a3/nandflash.c
index 5aa3f5162310..3f646c787e58 100644
--- a/arch/cris/arch-v32/drivers/mach-a3/nandflash.c
+++ b/arch/cris/arch-v32/drivers/mach-a3/nandflash.c
@@ -157,6 +157,7 @@ struct mtd_info *__init crisv32_nand_flash_probe(void)
/* 20 us command delay time */
this->chip_delay = 20;
this->ecc.mode = NAND_ECC_SOFT;
+ this->ecc.algo = NAND_ECC_HAMMING;
/* Enable the following for a flash based bad block table */
/* this->bbt_options = NAND_BBT_USE_FLASH; */
diff --git a/arch/cris/arch-v32/drivers/mach-fs/nandflash.c b/arch/cris/arch-v32/drivers/mach-fs/nandflash.c
index a7c17b0f172a..a74540514bdb 100644
--- a/arch/cris/arch-v32/drivers/mach-fs/nandflash.c
+++ b/arch/cris/arch-v32/drivers/mach-fs/nandflash.c
@@ -148,6 +148,7 @@ struct mtd_info *__init crisv32_nand_flash_probe(void)
/* 20 us command delay time */
this->chip_delay = 20;
this->ecc.mode = NAND_ECC_SOFT;
+ this->ecc.algo = NAND_ECC_HAMMING;
/* Enable the following for a flash based bad block table */
/* this->bbt_options = NAND_BBT_USE_FLASH; */
diff --git a/arch/cris/arch-v32/kernel/process.c b/arch/cris/arch-v32/kernel/process.c
index c7ce784a393c..4d1afa9f9fd3 100644
--- a/arch/cris/arch-v32/kernel/process.c
+++ b/arch/cris/arch-v32/kernel/process.c
@@ -33,9 +33,9 @@ void default_idle(void)
*/
extern void deconfigure_bp(long pid);
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
{
- deconfigure_bp(current->pid);
+ deconfigure_bp(tsk->pid);
}
/*
diff --git a/arch/cris/arch-v32/mach-a3/dram_init.S b/arch/cris/arch-v32/mach-a3/dram_init.S
index ec8648be32d3..5c4f24dce94c 100644
--- a/arch/cris/arch-v32/mach-a3/dram_init.S
+++ b/arch/cris/arch-v32/mach-a3/dram_init.S
@@ -11,7 +11,7 @@
*/
/* Just to be certain the config file is included, we include it here
- * explicitely instead of depending on it being included in the file that
+ * explicitly instead of depending on it being included in the file that
* uses this code.
*/
diff --git a/arch/cris/arch-v32/mach-fs/dram_init.S b/arch/cris/arch-v32/mach-fs/dram_init.S
index 6fbad336527b..d3ce2eb04cb1 100644
--- a/arch/cris/arch-v32/mach-fs/dram_init.S
+++ b/arch/cris/arch-v32/mach-fs/dram_init.S
@@ -11,7 +11,7 @@
*/
/* Just to be certain the config file is included, we include it here
- * explicitely instead of depending on it being included in the file that
+ * explicitly instead of depending on it being included in the file that
* uses this code.
*/
diff --git a/arch/frv/include/asm/processor.h b/arch/frv/include/asm/processor.h
index ae8d423e79d9..73f0a79ad8e6 100644
--- a/arch/frv/include/asm/processor.h
+++ b/arch/frv/include/asm/processor.h
@@ -97,13 +97,6 @@ extern asmlinkage void *restore_user_regs(const struct user_context *target, ...
#define forget_segments() do { } while (0)
/*
- * Free current thread data structures etc..
- */
-static inline void exit_thread(void)
-{
-}
-
-/*
* Return saved PC of a blocked thread.
*/
extern unsigned long thread_saved_pc(struct task_struct *tsk);
diff --git a/arch/h8300/Kconfig b/arch/h8300/Kconfig
index 986ea84caaed..3ae852507e57 100644
--- a/arch/h8300/Kconfig
+++ b/arch/h8300/Kconfig
@@ -20,6 +20,8 @@ config H8300
select HAVE_KERNEL_GZIP
select HAVE_KERNEL_LZO
select HAVE_ARCH_KGDB
+ select HAVE_ARCH_HASH
+ select CPU_NO_EFFICIENT_FFS
config RWSEM_GENERIC_SPINLOCK
def_bool y
diff --git a/arch/h8300/boot/compressed/Makefile b/arch/h8300/boot/compressed/Makefile
index 7643633f1330..613bfe6f5272 100644
--- a/arch/h8300/boot/compressed/Makefile
+++ b/arch/h8300/boot/compressed/Makefile
@@ -23,7 +23,6 @@ LDFLAGS_vmlinux := -Ttext $(IMAGE_OFFSET) -estartup -T $(obj)/vmlinux.lds \
$(obj)/vmlinux: $(OBJECTS) $(obj)/piggy.o $(LIBGCC) FORCE
$(call if_changed,ld)
- @:
$(obj)/vmlinux.bin: vmlinux FORCE
$(call if_changed,objcopy)
diff --git a/arch/h8300/include/asm/hash.h b/arch/h8300/include/asm/hash.h
new file mode 100644
index 000000000000..04cfbd2bd850
--- /dev/null
+++ b/arch/h8300/include/asm/hash.h
@@ -0,0 +1,53 @@
+#ifndef _ASM_HASH_H
+#define _ASM_HASH_H
+
+/*
+ * The later H8SX models have a 32x32-bit multiply, but the H8/300H
+ * and H8S have only 16x16->32. Since it's tolerably compact, this is
+ * basically an inlined version of the __mulsi3 code. Since the inputs
+ * are not expected to be small, it's also simplified by skipping the
+ * early-out checks.
+ *
+ * (Since neither CPU has any multi-bit shift instructions, a
+ * shift-and-add version is a non-starter.)
+ *
+ * TODO: come up with an arch-specific version of the hashing in fs/namei.c,
+ * since that is heavily dependent on rotates. Which, as mentioned, suck
+ * horribly on H8.
+ */
+
+#if defined(CONFIG_CPU_H300H) || defined(CONFIG_CPU_H8S)
+
+#define HAVE_ARCH__HASH_32 1
+
+/*
+ * Multiply by k = 0x61C88647. Fitting this into three registers requires
+ * one extra instruction, but reducing register pressure will probably
+ * make that back and then some.
+ *
+ * GCC asm note: %e1 is the high half of operand %1, while %f1 is the
+ * low half. So if %1 is er4, then %e1 is e4 and %f1 is r4.
+ *
+ * This has been designed to modify x in place, since that's the most
+ * common usage, but preserve k, since hash_64() makes two calls in
+ * quick succession.
+ */
+static inline u32 __attribute_const__ __hash_32(u32 x)
+{
+ u32 temp;
+
+ asm( "mov.w %e1,%f0"
+ "\n mulxu.w %f2,%0" /* klow * xhigh */
+ "\n mov.w %f0,%e1" /* The extra instruction */
+ "\n mov.w %f1,%f0"
+ "\n mulxu.w %e2,%0" /* khigh * xlow */
+ "\n add.w %e1,%f0"
+ "\n mulxu.w %f2,%1" /* klow * xlow */
+ "\n add.w %f0,%e1"
+ : "=&r" (temp), "=r" (x)
+ : "%r" (GOLDEN_RATIO_32), "1" (x));
+ return x;
+}
+
+#endif
+#endif /* _ASM_HASH_H */
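
For readers without H8 assembly at hand, the same partial-product trick can be sketched in portable C (illustrative only, not part of the patch; the function name is made up). Only the low 32 bits of the product are kept, so the khigh*xhigh term disappears and two 16x16->32 multiplies plus one shifted add are enough:

#include <stdint.h>

/* Sketch of the decomposition the asm performs:
 * x * k mod 2^32 == klow*xlow + ((klow*xhigh + khigh*xlow) << 16) */
static uint32_t hash32_by_parts(uint32_t x, uint32_t k)
{
	uint32_t xlo = x & 0xffff, xhi = x >> 16;
	uint32_t klo = k & 0xffff, khi = k >> 16;
	uint32_t lo  = klo * xlo;		/* full 32-bit partial product */
	uint32_t mid = klo * xhi + khi * xlo;	/* only its low 16 bits survive */

	return lo + (mid << 16);
}

hash32_by_parts(x, 0x61C88647) should match the asm above for any 32-bit x.
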
diff --git a/arch/h8300/include/asm/processor.h b/arch/h8300/include/asm/processor.h
index 54e3fd83c336..111df7397ac7 100644
--- a/arch/h8300/include/asm/processor.h
+++ b/arch/h8300/include/asm/processor.h
@@ -111,13 +111,6 @@ static inline void release_thread(struct task_struct *dead_task)
}
/*
- * Free current thread data structures etc..
- */
-static inline void exit_thread(void)
-{
-}
-
-/*
* Return saved PC of a blocked thread.
*/
unsigned long thread_saved_pc(struct task_struct *tsk);
diff --git a/arch/h8300/include/uapi/asm/unistd.h b/arch/h8300/include/uapi/asm/unistd.h
index 7a2eb698def3..7dd20ef7625a 100644
--- a/arch/h8300/include/uapi/asm/unistd.h
+++ b/arch/h8300/include/uapi/asm/unistd.h
@@ -1,3 +1,5 @@
#define __ARCH_NOMMU
+#define __ARCH_WANT_RENAMEAT
+
#include <asm-generic/unistd.h>
diff --git a/arch/hexagon/include/asm/hexagon_vm.h b/arch/hexagon/include/asm/hexagon_vm.h
index 1f6918b428de..e8990c9a6e99 100644
--- a/arch/hexagon/include/asm/hexagon_vm.h
+++ b/arch/hexagon/include/asm/hexagon_vm.h
@@ -237,7 +237,7 @@ static inline long __vmintop_clear(long i)
/*
* The initial program gets to find a system environment descriptor
- * on its stack when it begins exection. The first word is a version
+ * on its stack when it begins execution. The first word is a version
* code to indicate what is there. Zero means nothing more.
*/
diff --git a/arch/hexagon/include/asm/vm_mmu.h b/arch/hexagon/include/asm/vm_mmu.h
index 096537d8f4c5..6fc29d9d570b 100644
--- a/arch/hexagon/include/asm/vm_mmu.h
+++ b/arch/hexagon/include/asm/vm_mmu.h
@@ -78,7 +78,7 @@
#define __HEXAGON_C_WB_L2 0x7 /* Write-back, with L2 */
/*
- * This can be overriden, but we're defaulting to the most aggressive
+ * This can be overridden, but we're defaulting to the most aggressive
* cache policy, the better to find bugs sooner.
*/
diff --git a/arch/hexagon/include/uapi/asm/unistd.h b/arch/hexagon/include/uapi/asm/unistd.h
index ffee405d6803..21517600432b 100644
--- a/arch/hexagon/include/uapi/asm/unistd.h
+++ b/arch/hexagon/include/uapi/asm/unistd.h
@@ -27,6 +27,7 @@
*/
#define sys_mmap2 sys_mmap_pgoff
+#define __ARCH_WANT_RENAMEAT
#define __ARCH_WANT_SYS_EXECVE
#define __ARCH_WANT_SYS_CLONE
#define __ARCH_WANT_SYS_VFORK
diff --git a/arch/hexagon/kernel/kgdb.c b/arch/hexagon/kernel/kgdb.c
index 038580cc5abf..62dece3ad827 100644
--- a/arch/hexagon/kernel/kgdb.c
+++ b/arch/hexagon/kernel/kgdb.c
@@ -236,9 +236,9 @@ static struct notifier_block kgdb_notifier = {
};
/**
- * kgdb_arch_init - Perform any architecture specific initalization.
+ * kgdb_arch_init - Perform any architecture specific initialization.
*
- * This function will handle the initalization of any architecture
+ * This function will handle the initialization of any architecture
* specific callbacks.
*/
int kgdb_arch_init(void)
diff --git a/arch/hexagon/kernel/process.c b/arch/hexagon/kernel/process.c
index a9ebd471823a..d9edfd3fc52a 100644
--- a/arch/hexagon/kernel/process.c
+++ b/arch/hexagon/kernel/process.c
@@ -137,13 +137,6 @@ void release_thread(struct task_struct *dead_task)
}
/*
- * Free any architecture-specific thread data structures, etc.
- */
-void exit_thread(void)
-{
-}
-
-/*
* Some archs flush debug and FPU info here
*/
void flush_thread(void)
diff --git a/arch/hexagon/kernel/vdso.c b/arch/hexagon/kernel/vdso.c
index 0bf5a87e4d0a..3ea968415539 100644
--- a/arch/hexagon/kernel/vdso.c
+++ b/arch/hexagon/kernel/vdso.c
@@ -65,7 +65,8 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
unsigned long vdso_base;
struct mm_struct *mm = current->mm;
- down_write(&mm->mmap_sem);
+ if (down_write_killable(&mm->mmap_sem))
+ return -EINTR;
/* Try to get it loaded right near ld.so/glibc. */
vdso_base = STACK_TOP;
diff --git a/arch/hexagon/kernel/vm_ops.S b/arch/hexagon/kernel/vm_ops.S
index 9fb77b3f6cf2..58f2b92a37ed 100644
--- a/arch/hexagon/kernel/vm_ops.S
+++ b/arch/hexagon/kernel/vm_ops.S
@@ -26,7 +26,7 @@
* could be, and perhaps some day will be, handled as in-line
* macros, but for tracing/debugging it's handy to have
* a single point of invocation for each of them.
- * Conveniently, they take paramters and return values
+ * Conveniently, they take parameters and return values
* consistent with the ABI calling convention.
*/
diff --git a/arch/hexagon/lib/memcpy.S b/arch/hexagon/lib/memcpy.S
index 81c561c4b4d6..a46093a8800a 100644
--- a/arch/hexagon/lib/memcpy.S
+++ b/arch/hexagon/lib/memcpy.S
@@ -39,7 +39,7 @@
* DJH 10/14/09 Version 1.3 added special loop for aligned case, was
* overreading bloated codesize back up to 892
* DJH 4/20/10 Version 1.4 fixed Ldword_loop_epilog loop to prevent loads
- * occuring if only 1 left outstanding, fixes bug
+ * occurring if only 1 left outstanding, fixes bug
* # 3888, corrected for all alignments. Peeled off
* 1 32byte chunk from kernel loop and extended 8byte
* loop at end to solve all combinations and prevent
diff --git a/arch/ia64/Kconfig b/arch/ia64/Kconfig
index b534ebab36ea..f80758cb7157 100644
--- a/arch/ia64/Kconfig
+++ b/arch/ia64/Kconfig
@@ -18,6 +18,7 @@ config IA64
select ACPI_SYSTEM_POWER_STATES_SUPPORT if ACPI
select ARCH_MIGHT_HAVE_ACPI_PDC if ACPI
select HAVE_UNSTABLE_SCHED_CLOCK
+ select HAVE_EXIT_THREAD
select HAVE_IDE
select HAVE_OPROFILE
select HAVE_KPROBES
diff --git a/arch/ia64/Makefile b/arch/ia64/Makefile
index 970d0bd99621..c100d780f1eb 100644
--- a/arch/ia64/Makefile
+++ b/arch/ia64/Makefile
@@ -95,8 +95,8 @@ define archhelp
echo '* unwcheck - Check vmlinux for invalid unwind info'
endef
-archprepare: make_nr_irqs_h FORCE
-PHONY += make_nr_irqs_h FORCE
+archprepare: make_nr_irqs_h
+PHONY += make_nr_irqs_h
-make_nr_irqs_h: FORCE
+make_nr_irqs_h:
$(Q)$(MAKE) $(build)=arch/ia64/kernel include/generated/nr-irqs.h
diff --git a/arch/ia64/hp/sim/simserial.c b/arch/ia64/hp/sim/simserial.c
index e70cadec7ce6..21fd50def270 100644
--- a/arch/ia64/hp/sim/simserial.c
+++ b/arch/ia64/hp/sim/simserial.c
@@ -300,7 +300,7 @@ static int rs_ioctl(struct tty_struct *tty, unsigned int cmd, unsigned long arg)
if ((cmd != TIOCGSERIAL) && (cmd != TIOCSSERIAL) &&
(cmd != TIOCSERCONFIG) && (cmd != TIOCSERGSTRUCT) &&
(cmd != TIOCMIWAIT)) {
- if (tty->flags & (1 << TTY_IO_ERROR))
+ if (tty_io_error(tty))
return -EIO;
}
diff --git a/arch/ia64/include/asm/iommu.h b/arch/ia64/include/asm/iommu.h
index 105c93b00b1b..1d1212901ae7 100644
--- a/arch/ia64/include/asm/iommu.h
+++ b/arch/ia64/include/asm/iommu.h
@@ -1,7 +1,6 @@
#ifndef _ASM_IA64_IOMMU_H
#define _ASM_IA64_IOMMU_H 1
-#define cpu_has_x2apic 0
/* 10 seconds */
#define DMAR_OPERATION_TIMEOUT (((cycles_t) local_cpu_data->itc_freq)*10)
diff --git a/arch/ia64/include/asm/rwsem.h b/arch/ia64/include/asm/rwsem.h
index ce112472bdd6..8b23e070b844 100644
--- a/arch/ia64/include/asm/rwsem.h
+++ b/arch/ia64/include/asm/rwsem.h
@@ -49,8 +49,8 @@ __down_read (struct rw_semaphore *sem)
/*
* lock for writing
*/
-static inline void
-__down_write (struct rw_semaphore *sem)
+static inline long
+___down_write (struct rw_semaphore *sem)
{
long old, new;
@@ -59,10 +59,26 @@ __down_write (struct rw_semaphore *sem)
new = old + RWSEM_ACTIVE_WRITE_BIAS;
} while (cmpxchg_acq(&sem->count, old, new) != old);
- if (old != 0)
+ return old;
+}
+
+static inline void
+__down_write (struct rw_semaphore *sem)
+{
+ if (___down_write(sem))
rwsem_down_write_failed(sem);
}
+static inline int
+__down_write_killable (struct rw_semaphore *sem)
+{
+ if (___down_write(sem))
+ if (IS_ERR(rwsem_down_write_failed_killable(sem)))
+ return -EINTR;
+
+ return 0;
+}
+
/*
* unlock after reading
*/
diff --git a/arch/ia64/include/asm/sn/ioc3.h b/arch/ia64/include/asm/sn/ioc3.h
index 95ed6cc83cf1..6eaa3cc1e919 100644
--- a/arch/ia64/include/asm/sn/ioc3.h
+++ b/arch/ia64/include/asm/sn/ioc3.h
@@ -131,7 +131,7 @@ struct ioc3 {
#define SSCR_PAUSE_STATE 0x40000000 /* set when PAUSE takes effect*/
#define SSCR_RESET 0x80000000 /* reset DMA channels */
-/* all producer/comsumer pointers are the same bitfield */
+/* all producer/consumer pointers are the same bitfield */
#define PROD_CONS_PTR_4K 0x00000ff8 /* for 4K buffers */
#define PROD_CONS_PTR_1K 0x000003f8 /* for 1K buffers */
#define PROD_CONS_PTR_OFF 3
diff --git a/arch/ia64/include/asm/sn/shubio.h b/arch/ia64/include/asm/sn/shubio.h
index ecb8a49476b6..8a1ec139f977 100644
--- a/arch/ia64/include/asm/sn/shubio.h
+++ b/arch/ia64/include/asm/sn/shubio.h
@@ -1385,7 +1385,7 @@ typedef union ii_ibcr_u {
* respones are captured until IXSS[VALID] is cleared by setting the *
* appropriate bit in IECLR. Every time a spurious read response is *
* detected, the SPUR_RD bit of the PRB corresponding to the incoming *
- * message's SIDN field is set. This always happens, regarless of *
+ * message's SIDN field is set. This always happens, regardless of *
* whether a header is captured. The programmer should check *
* IXSM[SIDN] to determine which widget sent the spurious response, *
* because there may be more than one SPUR_RD bit set in the PRB *
@@ -2997,7 +2997,7 @@ typedef union ii_ippr_u {
/*
* Values for field imsgtype
*/
-#define IIO_ICRB_IMSGT_XTALK 0 /* Incoming Meessage from Xtalk */
+#define IIO_ICRB_IMSGT_XTALK 0 /* Incoming message from Xtalk */
#define IIO_ICRB_IMSGT_BTE 1 /* Incoming message from BTE */
#define IIO_ICRB_IMSGT_SN1NET 2 /* Incoming message from SN1 net */
#define IIO_ICRB_IMSGT_CRB 3 /* Incoming message from CRB ??? */
diff --git a/arch/ia64/kernel/efi.c b/arch/ia64/kernel/efi.c
index 300dac3702f1..3b7a60e40e8a 100644
--- a/arch/ia64/kernel/efi.c
+++ b/arch/ia64/kernel/efi.c
@@ -531,8 +531,6 @@ efi_init (void)
efi.systab->hdr.revision >> 16,
efi.systab->hdr.revision & 0xffff, vendor);
- set_bit(EFI_SYSTEM_TABLES, &efi.flags);
-
palo_phys = EFI_INVALID_TABLE_ADDR;
if (efi_config_init(arch_tables) != 0)
@@ -966,7 +964,7 @@ efi_uart_console_only(void)
/*
* Look for the first granule aligned memory descriptor memory
* that is big enough to hold EFI memory map. Make sure this
- * descriptor is atleast granule sized so it does not get trimmed
+ * descriptor is at least granule sized so it does not get trimmed
*/
struct kern_memdesc *
find_memmap_space (void)
diff --git a/arch/ia64/kernel/mca.c b/arch/ia64/kernel/mca.c
index 2889412e03eb..07a4e32ae96a 100644
--- a/arch/ia64/kernel/mca.c
+++ b/arch/ia64/kernel/mca.c
@@ -1904,13 +1904,10 @@ static int mca_cpu_callback(struct notifier_block *nfb,
unsigned long action,
void *hcpu)
{
- int hotcpu = (unsigned long) hcpu;
-
switch (action) {
case CPU_ONLINE:
case CPU_ONLINE_FROZEN:
- smp_call_function_single(hotcpu, ia64_mca_cmc_vector_adjust,
- NULL, 0);
+ ia64_mca_cmc_vector_adjust(NULL);
break;
}
return NOTIFY_OK;
diff --git a/arch/ia64/kernel/perfmon.c b/arch/ia64/kernel/perfmon.c
index 9cd607b06964..2436ad5f92c1 100644
--- a/arch/ia64/kernel/perfmon.c
+++ b/arch/ia64/kernel/perfmon.c
@@ -4542,8 +4542,8 @@ pfm_context_unload(pfm_context_t *ctx, void *arg, int count, struct pt_regs *reg
/*
- * called only from exit_thread(): task == current
- * we come here only if current has a context attached (loaded or masked)
+ * called only from exit_thread()
+ * we come here only if the task has a context attached (loaded or masked)
*/
void
pfm_exit_thread(struct task_struct *task)
diff --git a/arch/ia64/kernel/process.c b/arch/ia64/kernel/process.c
index b51514957620..aae6c4dc7ae7 100644
--- a/arch/ia64/kernel/process.c
+++ b/arch/ia64/kernel/process.c
@@ -570,22 +570,22 @@ flush_thread (void)
}
/*
- * Clean up state associated with current thread. This is called when
+ * Clean up state associated with a thread. This is called when
* the thread calls exit().
*/
void
-exit_thread (void)
+exit_thread (struct task_struct *tsk)
{
- ia64_drop_fpu(current);
+ ia64_drop_fpu(tsk);
#ifdef CONFIG_PERFMON
/* if needed, stop monitoring and flush state to perfmon context */
- if (current->thread.pfm_context)
- pfm_exit_thread(current);
+ if (tsk->thread.pfm_context)
+ pfm_exit_thread(tsk);
/* free debug register resources */
- if (current->thread.flags & IA64_THREAD_DBG_VALID)
- pfm_release_debug_registers(current);
+ if (tsk->thread.flags & IA64_THREAD_DBG_VALID)
+ pfm_release_debug_registers(tsk);
#endif
}
diff --git a/arch/ia64/kernel/traps.c b/arch/ia64/kernel/traps.c
index 6f7d4a4dcf24..77edd68c5161 100644
--- a/arch/ia64/kernel/traps.c
+++ b/arch/ia64/kernel/traps.c
@@ -548,6 +548,7 @@ ia64_fault (unsigned long vector, unsigned long isr, unsigned long ifa,
return;
}
switch (vector) {
+ default:
case 29:
siginfo.si_code = TRAP_HWBKPT;
#ifdef CONFIG_ITANIUM
diff --git a/arch/ia64/kernel/unaligned.c b/arch/ia64/kernel/unaligned.c
index e7ae6088350a..7f0d31656b4d 100644
--- a/arch/ia64/kernel/unaligned.c
+++ b/arch/ia64/kernel/unaligned.c
@@ -1378,6 +1378,7 @@ ia64_handle_unaligned (unsigned long ifa, struct pt_regs *regs)
* extract the instruction from the bundle given the slot number
*/
switch (ipsr->ri) {
+ default:
case 0: u.l = (bundle[0] >> 5); break;
case 1: u.l = (bundle[0] >> 46) | (bundle[1] << 18); break;
case 2: u.l = (bundle[1] >> 23); break;
diff --git a/arch/ia64/lib/idiv32.S b/arch/ia64/lib/idiv32.S
index 2ac28bf0a662..c91b5b0129ff 100644
--- a/arch/ia64/lib/idiv32.S
+++ b/arch/ia64/lib/idiv32.S
@@ -11,7 +11,7 @@
*
* For more details on the theory behind these algorithms, see "IA-64
* and Elementary Functions" by Peter Markstein; HP Professional Books
- * (http://www.hp.com/go/retailbooks/)
+ * (http://www.goodreads.com/book/show/2019887.Ia_64_and_Elementary_Functions)
*/
#include <asm/asmmacro.h>
diff --git a/arch/ia64/lib/idiv64.S b/arch/ia64/lib/idiv64.S
index f69bd2b0987a..627573c4ceb1 100644
--- a/arch/ia64/lib/idiv64.S
+++ b/arch/ia64/lib/idiv64.S
@@ -11,7 +11,7 @@
*
* For more details on the theory behind these algorithms, see "IA-64
* and Elementary Functions" by Peter Markstein; HP Professional Books
- * (http://www.hp.com/go/retailbooks/)
+ * (http://www.goodreads.com/book/show/2019887.Ia_64_and_Elementary_Functions)
*/
#include <asm/asmmacro.h>
diff --git a/arch/ia64/sn/kernel/io_acpi_init.c b/arch/ia64/sn/kernel/io_acpi_init.c
index 231234c8d113..c31fe637b0b4 100644
--- a/arch/ia64/sn/kernel/io_acpi_init.c
+++ b/arch/ia64/sn/kernel/io_acpi_init.c
@@ -426,7 +426,6 @@ sn_acpi_get_pcidev_info(struct pci_dev *dev, struct pcidev_info **pcidev_info,
void
sn_acpi_slot_fixup(struct pci_dev *dev)
{
- void __iomem *addr;
struct pcidev_info *pcidev_info = NULL;
struct sn_irq_info *sn_irq_info = NULL;
struct resource *res;
diff --git a/arch/ia64/sn/kernel/io_init.c b/arch/ia64/sn/kernel/io_init.c
index c15a41e2d1f2..d63809a6adfa 100644
--- a/arch/ia64/sn/kernel/io_init.c
+++ b/arch/ia64/sn/kernel/io_init.c
@@ -151,7 +151,7 @@ sn_io_slot_fixup(struct pci_dev *dev)
{
int idx;
struct resource *res;
- unsigned long addr, size;
+ unsigned long size;
struct pcidev_info *pcidev_info;
struct sn_irq_info *sn_irq_info;
int status;
@@ -186,7 +186,7 @@ sn_io_slot_fixup(struct pci_dev *dev)
continue;
res->start = pcidev_info->pdi_pio_mapped_addr[idx];
- res->end = addr + size;
+ res->end = res->start + size;
/*
* if it's already in the device structure, remove it before
diff --git a/arch/ia64/sn/kernel/sn2/sn2_smp.c b/arch/ia64/sn/kernel/sn2/sn2_smp.c
index f9c8d9fc5939..c98dc965fe82 100644
--- a/arch/ia64/sn/kernel/sn2/sn2_smp.c
+++ b/arch/ia64/sn/kernel/sn2/sn2_smp.c
@@ -54,7 +54,7 @@ sn2_ptc_deadlock_recovery_core(volatile unsigned long *, unsigned long,
volatile unsigned long *, unsigned long,
volatile unsigned long *, unsigned long);
void
-sn2_ptc_deadlock_recovery(short *, short, short, int,
+sn2_ptc_deadlock_recovery(nodemask_t, short, short, int,
volatile unsigned long *, unsigned long,
volatile unsigned long *, unsigned long);
@@ -169,7 +169,7 @@ sn2_global_tlb_purge(struct mm_struct *mm, unsigned long start,
int use_cpu_ptcga;
volatile unsigned long *ptc0, *ptc1;
unsigned long itc, itc2, flags, data0 = 0, data1 = 0, rr_value, old_rr = 0;
- short nasids[MAX_NUMNODES], nix;
+ short nix;
nodemask_t nodes_flushed;
int active, max_active, deadlock, flush_opt = sn2_flush_opt;
@@ -218,9 +218,7 @@ sn2_global_tlb_purge(struct mm_struct *mm, unsigned long start,
}
itc = ia64_get_itc();
- nix = 0;
- for_each_node_mask(cnode, nodes_flushed)
- nasids[nix++] = cnodeid_to_nasid(cnode);
+ nix = nodes_weight(nodes_flushed);
rr_value = (mm->context << 3) | REGION_NUMBER(start);
@@ -270,8 +268,10 @@ sn2_global_tlb_purge(struct mm_struct *mm, unsigned long start,
data0 = (data0 & ~SH2_PTC_ADDR_MASK) | (start & SH2_PTC_ADDR_MASK);
deadlock = 0;
active = 0;
- for (ibegin = 0, i = 0; i < nix; i++) {
- nasid = nasids[i];
+ ibegin = 0;
+ i = 0;
+ for_each_node_mask(cnode, nodes_flushed) {
+ nasid = cnodeid_to_nasid(cnode);
if (use_cpu_ptcga && unlikely(nasid == mynasid)) {
ia64_ptcga(start, nbits << 2);
ia64_srlz_i();
@@ -286,13 +286,14 @@ sn2_global_tlb_purge(struct mm_struct *mm, unsigned long start,
if ((deadlock = wait_piowc())) {
if (flush_opt == 1)
goto done;
- sn2_ptc_deadlock_recovery(nasids, ibegin, i, mynasid, ptc0, data0, ptc1, data1);
+ sn2_ptc_deadlock_recovery(nodes_flushed, ibegin, i, mynasid, ptc0, data0, ptc1, data1);
if (reset_max_active_on_deadlock())
max_active = 1;
}
active = 0;
ibegin = i + 1;
}
+ i++;
}
start += (1UL << nbits);
} while (start < end);
@@ -327,11 +328,12 @@ done:
*/
void
-sn2_ptc_deadlock_recovery(short *nasids, short ib, short ie, int mynasid,
+sn2_ptc_deadlock_recovery(nodemask_t nodes, short ib, short ie, int mynasid,
volatile unsigned long *ptc0, unsigned long data0,
volatile unsigned long *ptc1, unsigned long data1)
{
short nasid, i;
+ int cnode;
unsigned long *piows, zeroval, n;
__this_cpu_inc(ptcstats.deadlocks);
@@ -339,17 +341,26 @@ sn2_ptc_deadlock_recovery(short *nasids, short ib, short ie, int mynasid,
piows = (unsigned long *) pda->pio_write_status_addr;
zeroval = pda->pio_write_status_val;
+ i = 0;
+ for_each_node_mask(cnode, nodes) {
+ if (i < ib)
+ goto next;
+
+ if (i > ie)
+ break;
- for (i=ib; i <= ie; i++) {
- nasid = nasids[i];
+ nasid = cnodeid_to_nasid(cnode);
if (local_node_uses_ptc_ga(is_shub1()) && nasid == mynasid)
- continue;
+ goto next;
+
ptc0 = CHANGE_NASID(nasid, ptc0);
if (ptc1)
ptc1 = CHANGE_NASID(nasid, ptc1);
n = sn2_ptc_deadlock_recovery_core(ptc0, data0, ptc1, data1, piows, zeroval);
__this_cpu_add(ptcstats.deadlocks2, n);
+next:
+ i++;
}
}
diff --git a/arch/m32r/Kconfig b/arch/m32r/Kconfig
index c82b29253991..3cc8498fe0fe 100644
--- a/arch/m32r/Kconfig
+++ b/arch/m32r/Kconfig
@@ -17,6 +17,7 @@ config M32R
select ARCH_USES_GETTIMEOFFSET
select MODULES_USE_ELF_RELA
select HAVE_DEBUG_STACKOVERFLOW
+ select CPU_NO_EFFICIENT_FFS
config SBUS
bool
diff --git a/arch/m32r/boot/compressed/Makefile b/arch/m32r/boot/compressed/Makefile
index 01729c2979ba..0606a727aab2 100644
--- a/arch/m32r/boot/compressed/Makefile
+++ b/arch/m32r/boot/compressed/Makefile
@@ -19,7 +19,6 @@ LDFLAGS_vmlinux := -T
$(obj)/vmlinux: $(obj)/vmlinux.lds $(OBJECTS) $(obj)/piggy.o FORCE
$(call if_changed,ld)
- @:
$(obj)/vmlinux.bin: vmlinux FORCE
$(call if_changed,objcopy)
diff --git a/arch/m32r/kernel/process.c b/arch/m32r/kernel/process.c
index e69221d581d5..a88b1f01e91f 100644
--- a/arch/m32r/kernel/process.c
+++ b/arch/m32r/kernel/process.c
@@ -101,15 +101,6 @@ void show_regs(struct pt_regs * regs)
#endif
}
-/*
- * Free current thread data structures etc..
- */
-void exit_thread(void)
-{
- /* Nothing to do. */
- DPRINTK("pid = %d\n", current->pid);
-}
-
void flush_thread(void)
{
DPRINTK("pid = %d\n", current->pid);
diff --git a/arch/m32r/kernel/smp.c b/arch/m32r/kernel/smp.c
index 62d6961e7f2b..564052e3d3a0 100644
--- a/arch/m32r/kernel/smp.c
+++ b/arch/m32r/kernel/smp.c
@@ -164,6 +164,7 @@ void smp_flush_cache_all(void)
spin_unlock(&flushcache_lock);
preempt_enable();
}
+EXPORT_SYMBOL(smp_flush_cache_all);
void smp_flush_cache_all_interrupt(void)
{
diff --git a/arch/m68k/Kconfig.cpu b/arch/m68k/Kconfig.cpu
index 0dfcf1281e9c..967260f2eb1c 100644
--- a/arch/m68k/Kconfig.cpu
+++ b/arch/m68k/Kconfig.cpu
@@ -22,11 +22,11 @@ config M68KCLASSIC
config COLDFIRE
bool "Coldfire CPU family support"
- select ARCH_REQUIRE_GPIOLIB
select ARCH_HAVE_CUSTOM_GPIO_H
select CPU_HAS_NO_BITFIELDS
select CPU_HAS_NO_MULDIV64
select GENERIC_CSUM
+ select GPIOLIB
select HAVE_CLK
endchoice
@@ -40,6 +40,8 @@ config M68000
select CPU_HAS_NO_MULDIV64
select CPU_HAS_NO_UNALIGNED
select GENERIC_CSUM
+ select CPU_NO_EFFICIENT_FFS
+ select HAVE_ARCH_HASH
help
The Freescale (was Motorola) 68000 CPU is the first generation of
the well known M68K family of processors. The CPU core as well as
@@ -51,6 +53,7 @@ config MCPU32
bool
select CPU_HAS_NO_BITFIELDS
select CPU_HAS_NO_UNALIGNED
+ select CPU_NO_EFFICIENT_FFS
help
The Freescale (was then Motorola) CPU32 is a CPU core that is
based on the 68020 processor. For the most part it is used in
@@ -130,6 +133,7 @@ config M5206
depends on !MMU
select COLDFIRE_SW_A7
select HAVE_MBAR
+ select CPU_NO_EFFICIENT_FFS
help
Motorola ColdFire 5206 processor support.
@@ -138,6 +142,7 @@ config M5206e
depends on !MMU
select COLDFIRE_SW_A7
select HAVE_MBAR
+ select CPU_NO_EFFICIENT_FFS
help
Motorola ColdFire 5206e processor support.
@@ -163,6 +168,7 @@ config M5249
depends on !MMU
select COLDFIRE_SW_A7
select HAVE_MBAR
+ select CPU_NO_EFFICIENT_FFS
help
Motorola ColdFire 5249 processor support.
@@ -171,6 +177,7 @@ config M525x
depends on !MMU
select COLDFIRE_SW_A7
select HAVE_MBAR
+ select CPU_NO_EFFICIENT_FFS
help
Freescale (Motorola) Coldfire 5251/5253 processor support.
@@ -189,6 +196,7 @@ config M5272
depends on !MMU
select COLDFIRE_SW_A7
select HAVE_MBAR
+ select CPU_NO_EFFICIENT_FFS
help
Motorola ColdFire 5272 processor support.
@@ -217,6 +225,7 @@ config M5307
select COLDFIRE_SW_A7
select HAVE_CACHE_CB
select HAVE_MBAR
+ select CPU_NO_EFFICIENT_FFS
help
Motorola ColdFire 5307 processor support.
@@ -242,6 +251,7 @@ config M5407
select COLDFIRE_SW_A7
select HAVE_CACHE_CB
select HAVE_MBAR
+ select CPU_NO_EFFICIENT_FFS
help
Motorola ColdFire 5407 processor support.
@@ -251,6 +261,7 @@ config M547x
select MMU_COLDFIRE if MMU
select HAVE_CACHE_CB
select HAVE_MBAR
+ select CPU_NO_EFFICIENT_FFS
help
Freescale ColdFire 5470/5471/5472/5473/5474/5475 processor support.
@@ -260,6 +271,7 @@ config M548x
select M54xx
select HAVE_CACHE_CB
select HAVE_MBAR
+ select CPU_NO_EFFICIENT_FFS
help
Freescale ColdFire 5480/5481/5482/5483/5484/5485 processor support.
diff --git a/arch/m68k/bvme6000/rtc.c b/arch/m68k/bvme6000/rtc.c
index cf12a17dc289..f7984f44ff0f 100644
--- a/arch/m68k/bvme6000/rtc.c
+++ b/arch/m68k/bvme6000/rtc.c
@@ -15,7 +15,7 @@
#include <linux/init.h>
#include <linux/poll.h>
#include <linux/module.h>
-#include <linux/mc146818rtc.h> /* For struct rtc_time and ioctls, etc */
+#include <linux/rtc.h> /* For struct rtc_time and ioctls, etc */
#include <linux/bcd.h>
#include <asm/bvme6000hw.h>
diff --git a/arch/m68k/include/asm/hash.h b/arch/m68k/include/asm/hash.h
new file mode 100644
index 000000000000..6407af84a994
--- /dev/null
+++ b/arch/m68k/include/asm/hash.h
@@ -0,0 +1,59 @@
+#ifndef _ASM_HASH_H
+#define _ASM_HASH_H
+
+/*
+ * If CONFIG_M68000=y (original mc68000/010), this file is #included
+ * to work around the lack of a MULU.L instruction.
+ */
+
+#define HAVE_ARCH__HASH_32 1
+/*
+ * While it would be legal to substitute a different hash operation
+ * entirely, let's keep it simple and just use an optimized multiply
+ * by GOLDEN_RATIO_32 = 0x61C88647.
+ *
+ * The best way to do that appears to be to multiply by 0x8647 with
+ * shifts and adds, and use mulu.w to multiply the high half by 0x61C8.
+ *
+ * Because the 68000 has multi-cycle shifts, this addition chain is
+ * chosen to minimise the shift distances.
+ *
+ * Despite every attempt to spoon-feed it simple operations, GCC
+ * 6.1.1 doggedly insists on doing annoying things like converting
+ * "lsl.l #2,<reg>" (12 cycles) to two adds (8+8 cycles).
+ *
+ * It also likes to notice two shifts in a row, like "a = x << 2" and
+ * "a <<= 7", and convert that to "a = x << 9". But shifts longer
+ * than 8 bits are extra-slow on m68k, so that's a lose.
+ *
+ * Since the 68000 is a very simple in-order processor with no
+ * instruction scheduling effects on execution time, we can safely
+ * take it out of GCC's hands and write one big asm() block.
+ *
+ * Without calling overhead, this operation is 30 bytes (14 instructions
+ * plus one immediate constant) and 166 cycles.
+ *
+ * (Because %2 is fetched twice, it can't be postincrement, and thus it
+ * can't be a fully general "g" or "m". Register is preferred, but
+ * offsettable memory or immediate will work.)
+ */
+static inline u32 __attribute_const__ __hash_32(u32 x)
+{
+ u32 a, b;
+
+ asm( "move.l %2,%0" /* a = x * 0x0001 */
+ "\n lsl.l #2,%0" /* a = x * 0x0004 */
+ "\n move.l %0,%1"
+ "\n lsl.l #7,%0" /* a = x * 0x0200 */
+ "\n add.l %2,%0" /* a = x * 0x0201 */
+ "\n add.l %0,%1" /* b = x * 0x0205 */
+ "\n add.l %0,%0" /* a = x * 0x0402 */
+ "\n add.l %0,%1" /* b = x * 0x0607 */
+ "\n lsl.l #5,%0" /* a = x * 0x8040 */
+ : "=&d,d" (a), "=&r,r" (b)
+ : "r,roi?" (x)); /* a+b = x*0x8647 */
+
+ return ((u16)(x*0x61c8) << 16) + a + b;
+}
+
+#endif /* _ASM_HASH_H */
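
The shape of the computation is easier to follow in plain C (an illustrative sketch, not part of the patch; the function name is made up): the low half of the constant, 0x8647, comes from the shift/add chain, and the high half, 0x61C8, from the single mulu.w:

#include <stdint.h>

/* Sketch of the split performed above:
 * x * 0x61C88647 mod 2^32 == ((u16)(x * 0x61C8) << 16) + x * 0x8647 */
static uint32_t hash32_split(uint32_t x)
{
	uint32_t low  = x * 0x8647;		/* the shifts and adds build this */
	uint32_t high = (uint16_t)(x * 0x61C8);	/* mulu.w keeps only 16 bits */

	return (high << 16) + low;
}
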
diff --git a/arch/m68k/include/asm/processor.h b/arch/m68k/include/asm/processor.h
index 20dda1d4b860..a6ce2ec8d693 100644
--- a/arch/m68k/include/asm/processor.h
+++ b/arch/m68k/include/asm/processor.h
@@ -153,13 +153,6 @@ static inline void release_thread(struct task_struct *dead_task)
{
}
-/*
- * Free current thread data structures etc..
- */
-static inline void exit_thread(void)
-{
-}
-
extern unsigned long thread_saved_pc(struct task_struct *tsk);
unsigned long get_wchan(struct task_struct *p);
diff --git a/arch/m68k/mvme16x/rtc.c b/arch/m68k/mvme16x/rtc.c
index 1755e2f7137d..1cdc73268188 100644
--- a/arch/m68k/mvme16x/rtc.c
+++ b/arch/m68k/mvme16x/rtc.c
@@ -14,7 +14,7 @@
#include <linux/fcntl.h>
#include <linux/init.h>
#include <linux/poll.h>
-#include <linux/mc146818rtc.h> /* For struct rtc_time and ioctls, etc */
+#include <linux/rtc.h> /* For struct rtc_time and ioctls, etc */
#include <linux/bcd.h>
#include <asm/mvme16xhw.h>
diff --git a/arch/metag/Kconfig b/arch/metag/Kconfig
index a0fa88da3e31..5b7a45d99cfb 100644
--- a/arch/metag/Kconfig
+++ b/arch/metag/Kconfig
@@ -11,6 +11,7 @@ config METAG
select HAVE_DEBUG_KMEMLEAK
select HAVE_DEBUG_STACKOVERFLOW
select HAVE_DYNAMIC_FTRACE
+ select HAVE_EXIT_THREAD
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_TRACER
select HAVE_KERNEL_BZIP2
@@ -29,6 +30,7 @@ config METAG
select OF
select OF_EARLY_FLATTREE
select SPARSE_IRQ
+ select CPU_NO_EFFICIENT_FFS
config STACKTRACE_SUPPORT
def_bool y
diff --git a/arch/metag/Kconfig.soc b/arch/metag/Kconfig.soc
index 973640f46752..50f979c2b02d 100644
--- a/arch/metag/Kconfig.soc
+++ b/arch/metag/Kconfig.soc
@@ -16,7 +16,6 @@ config META21_FPGA
config SOC_TZ1090
bool "Toumaz Xenif TZ1090 SoC (Comet)"
- select ARCH_WANT_OPTIONAL_GPIOLIB
select IMGPDC_IRQ
select METAG_LNKGET_AROUND_CACHE
select METAG_META21
diff --git a/arch/metag/include/asm/atomic_lnkget.h b/arch/metag/include/asm/atomic_lnkget.h
index a62581815624..88fa25fae8bd 100644
--- a/arch/metag/include/asm/atomic_lnkget.h
+++ b/arch/metag/include/asm/atomic_lnkget.h
@@ -61,7 +61,7 @@ static inline int atomic_##op##_return(int i, atomic_t *v) \
" CMPT %0, #HI(0x02000000)\n" \
" BNZ 1b\n" \
: "=&d" (temp), "=&da" (result) \
- : "da" (&v->counter), "bd" (i) \
+ : "da" (&v->counter), "br" (i) \
: "cc"); \
\
smp_mb(); \
diff --git a/arch/metag/include/asm/metag_regs.h b/arch/metag/include/asm/metag_regs.h
index acf4b8e6e9d1..40c3f679c5b8 100644
--- a/arch/metag/include/asm/metag_regs.h
+++ b/arch/metag/include/asm/metag_regs.h
@@ -1165,7 +1165,7 @@
#define TXSTATUS_IPTOGGLE_BIT 0x80000000 /* Prev PToggle of TXPRIVEXT */
#define TXSTATUS_ISTATE_BIT 0x40000000 /* IState bit */
#define TXSTATUS_IWAIT_BIT 0x20000000 /* wait indefinitely in decision step*/
-#define TXSTATUS_IEXCEPT_BIT 0x10000000 /* Indicate an exception occured */
+#define TXSTATUS_IEXCEPT_BIT 0x10000000 /* Indicate an exception occurred */
#define TXSTATUS_IRPCOUNT_BITS 0x0E000000 /* Number of 'dirty' date entries*/
#define TXSTATUS_IRPCOUNT_S 25
#define TXSTATUS_IRQSTAT_BITS 0x0000F000 /* IRQEnc bits, trigger or interrupts */
diff --git a/arch/metag/include/asm/processor.h b/arch/metag/include/asm/processor.h
index 0838ca699764..a0333ebcac35 100644
--- a/arch/metag/include/asm/processor.h
+++ b/arch/metag/include/asm/processor.h
@@ -134,8 +134,6 @@ static inline void release_thread(struct task_struct *dead_task)
#define copy_segments(tsk, mm) do { } while (0)
#define release_segments(mm) do { } while (0)
-extern void exit_thread(void);
-
/*
* Return saved PC of a blocked thread.
*/
diff --git a/arch/metag/include/asm/tbx.h b/arch/metag/include/asm/tbx.h
index 703b9cb0ac5c..5cd2a6c86223 100644
--- a/arch/metag/include/asm/tbx.h
+++ b/arch/metag/include/asm/tbx.h
@@ -668,7 +668,7 @@ typedef union _tbires_tag_ {
State.Sig.TrigMask will indicate the bits set within TXMASKI at
the time of the handler call that have all been cleared to prevent
- nested interrupt occuring immediately.
+ nested interrupt occurring immediately.
State.Sig.SaveMask is a bit-mask which will be set to Zero when a trigger
occurs at background level and TBICTX_CRIT_BIT and optionally
@@ -1083,7 +1083,7 @@ TBIRES __TBINestInts( TBIRES State, void *pExt, int NoNestMask );
/* This routine causes the TBICTX structure specified in State.Sig.pCtx to
be restored. This implies that execution will not return to the caller.
The State.Sig.TrigMask field will be restored during the context switch
- such that any immediately occuring interrupts occur in the context of the
+ such that any immediately occurring interrupts occur in the context of the
newly specified task. The State.Sig.SaveMask parameter is ignored. */
void __TBIASyncResume( TBIRES State );
@@ -1305,7 +1305,7 @@ extern const char __TBISigNames[];
/*
* Calculate linear PC value from real PC and Minim mode control, the LSB of
- * the result returned indicates if address compression has occured.
+ * the result returned indicates if address compression has occurred.
*/
#ifndef __ASSEMBLY__
#define METAG_LINPC( PCVal ) (\
diff --git a/arch/metag/include/uapi/asm/unistd.h b/arch/metag/include/uapi/asm/unistd.h
index b80b8e899d22..459b6ec15848 100644
--- a/arch/metag/include/uapi/asm/unistd.h
+++ b/arch/metag/include/uapi/asm/unistd.h
@@ -7,6 +7,8 @@
* (at your option) any later version.
*/
+#define __ARCH_WANT_RENAMEAT
+
/* Use the standard ABI for syscalls. */
#include <asm-generic/unistd.h>
diff --git a/arch/metag/kernel/ftrace.c b/arch/metag/kernel/ftrace.c
index ac8c039b0318..f7b23d300881 100644
--- a/arch/metag/kernel/ftrace.c
+++ b/arch/metag/kernel/ftrace.c
@@ -115,7 +115,6 @@ int ftrace_make_call(struct dyn_ftrace *rec, unsigned long addr)
return ftrace_modify_code(ip, old, new);
}
-/* run from kstop_machine */
int __init ftrace_dyn_arch_init(void)
{
return 0;
diff --git a/arch/metag/kernel/perf/perf_event.c b/arch/metag/kernel/perf/perf_event.c
index 2478ec6d23c9..33a365f924be 100644
--- a/arch/metag/kernel/perf/perf_event.c
+++ b/arch/metag/kernel/perf/perf_event.c
@@ -618,6 +618,8 @@ static void metag_pmu_enable_counter(struct hw_perf_event *event, int idx)
/* Check for a core internal or performance channel event. */
if (tmp) {
+ /* PERF_ICORE/PERF_CHAN only exist since Meta2 */
+#ifdef METAC_2_1
void *perf_addr;
/*
@@ -640,6 +642,7 @@ static void metag_pmu_enable_counter(struct hw_perf_event *event, int idx)
if (perf_addr)
metag_out32((config & 0x0f), perf_addr);
+#endif
/*
* Now we use the high nibble as the performance event to
diff --git a/arch/metag/kernel/perf_callchain.c b/arch/metag/kernel/perf_callchain.c
index 315633461a94..3e8e048040df 100644
--- a/arch/metag/kernel/perf_callchain.c
+++ b/arch/metag/kernel/perf_callchain.c
@@ -29,7 +29,7 @@ static bool is_valid_call(unsigned long calladdr)
static struct metag_frame __user *
user_backtrace(struct metag_frame __user *user_frame,
- struct perf_callchain_entry *entry)
+ struct perf_callchain_entry_ctx *entry)
{
struct metag_frame frame;
unsigned long calladdr;
@@ -56,7 +56,7 @@ user_backtrace(struct metag_frame __user *user_frame,
}
void
-perf_callchain_user(struct perf_callchain_entry *entry, struct pt_regs *regs)
+perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
{
unsigned long sp = regs->ctx.AX[0].U0;
struct metag_frame __user *frame;
@@ -65,7 +65,7 @@ perf_callchain_user(struct perf_callchain_entry *entry, struct pt_regs *regs)
--frame;
- while ((entry->nr < PERF_MAX_STACK_DEPTH) && frame)
+ while ((entry->nr < entry->max_stack) && frame)
frame = user_backtrace(frame, entry);
}
@@ -78,13 +78,13 @@ static int
callchain_trace(struct stackframe *fr,
void *data)
{
- struct perf_callchain_entry *entry = data;
+ struct perf_callchain_entry_ctx *entry = data;
perf_callchain_store(entry, fr->pc);
return 0;
}
void
-perf_callchain_kernel(struct perf_callchain_entry *entry, struct pt_regs *regs)
+perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
{
struct stackframe fr;
diff --git a/arch/metag/kernel/process.c b/arch/metag/kernel/process.c
index 7f546183a0f0..35062796edf2 100644
--- a/arch/metag/kernel/process.c
+++ b/arch/metag/kernel/process.c
@@ -345,10 +345,10 @@ void flush_thread(void)
/*
* Free current thread data structures etc.
*/
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
{
- clear_fpu(&current->thread);
- clear_dsp(&current->thread);
+ clear_fpu(&tsk->thread);
+ clear_dsp(&tsk->thread);
}
/* TODO: figure out how to unwind the kernel stack here to figure out
diff --git a/arch/metag/mm/hugetlbpage.c b/arch/metag/mm/hugetlbpage.c
index b38700ae4e84..db1b7da91e4f 100644
--- a/arch/metag/mm/hugetlbpage.c
+++ b/arch/metag/mm/hugetlbpage.c
@@ -239,6 +239,7 @@ static __init int setup_hugepagesz(char *opt)
if (ps == (1 << HPAGE_SHIFT)) {
hugetlb_add_hstate(HPAGE_SHIFT - PAGE_SHIFT);
} else {
+ hugetlb_bad_size();
pr_err("hugepagesz: Unsupported page size %lu M\n",
ps >> 20);
return 0;
diff --git a/arch/metag/tbx/tbipcx.S b/arch/metag/tbx/tbipcx.S
index de0626fdad25..163c79ac913b 100644
--- a/arch/metag/tbx/tbipcx.S
+++ b/arch/metag/tbx/tbipcx.S
@@ -15,7 +15,7 @@
#include <asm/tbx.h>
/* BEGIN HACK */
-/* define these for now while doing inital conversion to GAS
+/* define these for now while doing initial conversion to GAS
will fix properly later */
/* Signal identifiers always have the TBID_SIGNAL_BIT set and contain the
diff --git a/arch/metag/tbx/tbisoft.S b/arch/metag/tbx/tbisoft.S
index 0346fe8a53b1..b04f50df8d91 100644
--- a/arch/metag/tbx/tbisoft.S
+++ b/arch/metag/tbx/tbisoft.S
@@ -56,7 +56,7 @@ ___TBIJumpX:
/*
* TBIRES __TBISwitch( TBIRES Switch, PTBICTX *rpSaveCtx )
*
- * Software syncronous context switch between soft threads, save only the
+ * Software synchronous context switch between soft threads, save only the
* registers which are actually valid on call entry.
*
* A0FrP, D0RtP, D0.5, D0.6, D0.7 - Saved on stack
@@ -76,7 +76,7 @@ $LSwitchStart:
SETL [A0StP+#8++],D0FrT,D1RtP
/*
* Save current frame state - we save all regs because we don't want
- * uninitialised crap in the TBICTX structure that the asyncronous resumption
+ * uninitialised crap in the TBICTX structure that the asynchronous resumption
* of a thread will restore.
*/
MOVT D1Re0,#HI($LSwitchExit) /* ASync resume point here */
@@ -117,7 +117,7 @@ $LSwitchExit:
* This routine causes the TBICTX structure specified in State.Sig.pCtx to
* be restored. This implies that execution will not return to the caller.
* The State.Sig.TrigMask field will be ored into TXMASKI during the
- * context switch such that any immediately occuring interrupts occur in
+ * context switch such that any immediately occurring interrupts occur in
* the context of the newly specified task. The State.Sig.SaveMask parameter
* is ignored.
*/
diff --git a/arch/microblaze/Kconfig b/arch/microblaze/Kconfig
index 3d793b55f60c..636e0720fb20 100644
--- a/arch/microblaze/Kconfig
+++ b/arch/microblaze/Kconfig
@@ -16,6 +16,7 @@ config MICROBLAZE
select GENERIC_IRQ_SHOW
select GENERIC_PCI_IOMAP
select GENERIC_SCHED_CLOCK
+ select HAVE_ARCH_HASH
select HAVE_ARCH_KGDB
select HAVE_DEBUG_KMEMLEAK
select HAVE_DMA_API_DEBUG
@@ -32,6 +33,7 @@ config MICROBLAZE
select OF_EARLY_FLATTREE
select TRACING_SUPPORT
select VIRT_TO_BUS
+ select CPU_NO_EFFICIENT_FFS
config SWAP
def_bool n
diff --git a/arch/microblaze/include/asm/hash.h b/arch/microblaze/include/asm/hash.h
new file mode 100644
index 000000000000..753513ae8cb0
--- /dev/null
+++ b/arch/microblaze/include/asm/hash.h
@@ -0,0 +1,81 @@
+#ifndef _ASM_HASH_H
+#define _ASM_HASH_H
+
+/*
+ * Fortunately, most people who want to run Linux on Microblaze enable
+ * both multiplier and barrel shifter, but omitting them is technically
+ * a supported configuration.
+ *
+ * With just a barrel shifter, we can implement an efficient constant
+ * multiply using shifts and adds. GCC can find a 9-step solution, but
+ * this 6-step solution was found by Yevgen Voronenko's implementation
+ * of the Hcub algorithm at http://spiral.ece.cmu.edu/mcm/gen.html.
+ *
+ * That software is really not designed for a single multiplier this large,
+ * but if you run it enough times with different seeds, it'll find several
+ * 6-shift, 6-add sequences for computing x * 0x61C88647. They are all
+ * c = (x << 19) + x;
+ * a = (x << 9) + c;
+ * b = (x << 23) + a;
+ * return (a<<11) + (b<<6) + (c<<3) - b;
+ * with variations on the order of the final add.
+ *
+ * Without even a shifter, it's hopeless; any hash function will suck.
+ */
+
+#if CONFIG_XILINX_MICROBLAZE0_USE_HW_MUL == 0
+
+#define HAVE_ARCH__HASH_32 1
+
+/* Multiply by GOLDEN_RATIO_32 = 0x61C88647 */
+static inline u32 __attribute_const__ __hash_32(u32 a)
+{
+#if CONFIG_XILINX_MICROBLAZE0_USE_BARREL
+ unsigned int b, c;
+
+ /* Phase 1: Compute three intermediate values */
+ b = a << 23;
+ c = (a << 19) + a;
+ a = (a << 9) + c;
+ b += a;
+
+ /* Phase 2: Compute (a << 11) + (b << 6) + (c << 3) - b */
+ a <<= 5;
+ a += b; /* (a << 5) + b */
+ a <<= 3;
+ a += c; /* (a << 8) + (b << 3) + c */
+ a <<= 3;
+ return a - b; /* (a << 11) + (b << 6) + (c << 3) - b */
+#else
+ /*
+ * "This is really going to hurt."
+ *
+ * Without a barrel shifter, left shifts are implemented as
+ * repeated additions, and the best we can do is an optimal
+ * addition-subtraction chain. This one is not known to be
+ * optimal, but at 37 steps, it's decent for a 31-bit multiplier.
+ *
+ * Question: given its size (37*4 = 148 bytes per instance),
+ * and slowness, is this worth having inline?
+ */
+ unsigned int b, c, d;
+
+ b = a << 4; /* 4 */
+ c = b << 1; /* 1 5 */
+ b += a; /* 1 6 */
+ c += b; /* 1 7 */
+ c <<= 3; /* 3 10 */
+ c -= a; /* 1 11 */
+ d = c << 7; /* 7 18 */
+ d += b; /* 1 19 */
+ d <<= 8; /* 8 27 */
+ d += a; /* 1 28 */
+ d <<= 1; /* 1 29 */
+ d += b; /* 1 30 */
+ d <<= 6; /* 6 36 */
+ return d + c; /* 1 37 total instructions */
+#endif
+}
+
+#endif /* !CONFIG_XILINX_MICROBLAZE0_USE_HW_MUL */
+#endif /* _ASM_HASH_H */
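
The factorisation quoted in the comment can be checked with a small stand-alone C program (illustrative only, not part of the patch; names are made up):

#include <assert.h>
#include <stdint.h>

/* The 6-shift, 6-add chain from the comment, written in its quoted order
 * rather than the register-pressure-friendly order used above. */
static uint32_t hash32_chain(uint32_t x)
{
	uint32_t c = (x << 19) + x;
	uint32_t a = (x << 9) + c;
	uint32_t b = (x << 23) + a;

	return (a << 11) + (b << 6) + (c << 3) - b;
}

int main(void)
{
	uint32_t x;

	for (x = 1; x < 0x100000; x += 7)	/* spot-check against the plain multiply */
		assert(hash32_chain(x) == x * 0x61C88647u);
	return 0;
}
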
diff --git a/arch/microblaze/include/asm/processor.h b/arch/microblaze/include/asm/processor.h
index 497a988d79c2..c38d0dd91134 100644
--- a/arch/microblaze/include/asm/processor.h
+++ b/arch/microblaze/include/asm/processor.h
@@ -70,11 +70,6 @@ static inline void release_thread(struct task_struct *dead_task)
{
}
-/* Free all resources held by a thread. */
-static inline void exit_thread(void)
-{
-}
-
extern unsigned long thread_saved_pc(struct task_struct *t);
extern unsigned long get_wchan(struct task_struct *p);
@@ -127,11 +122,6 @@ static inline void release_thread(struct task_struct *dead_task)
{
}
-/* Free current thread data structures etc. */
-static inline void exit_thread(void)
-{
-}
-
/* Return saved (kernel) PC of a blocked thread. */
# define thread_saved_pc(tsk) \
((tsk)->thread.regs ? (tsk)->thread.regs->r15 : 0)
diff --git a/arch/microblaze/include/asm/unistd.h b/arch/microblaze/include/asm/unistd.h
index 76ed17b56fea..805ae5d712e8 100644
--- a/arch/microblaze/include/asm/unistd.h
+++ b/arch/microblaze/include/asm/unistd.h
@@ -38,6 +38,6 @@
#endif /* __ASSEMBLY__ */
-#define __NR_syscalls 389
+#define __NR_syscalls 392
#endif /* _ASM_MICROBLAZE_UNISTD_H */
diff --git a/arch/microblaze/include/uapi/asm/unistd.h b/arch/microblaze/include/uapi/asm/unistd.h
index 32850c73be09..a8bd3fa28bc7 100644
--- a/arch/microblaze/include/uapi/asm/unistd.h
+++ b/arch/microblaze/include/uapi/asm/unistd.h
@@ -404,5 +404,8 @@
#define __NR_memfd_create 386
#define __NR_bpf 387
#define __NR_execveat 388
+#define __NR_userfaultfd 389
+#define __NR_membarrier 390
+#define __NR_mlock2 391
#endif /* _UAPI_ASM_MICROBLAZE_UNISTD_H */
diff --git a/arch/microblaze/kernel/syscall_table.S b/arch/microblaze/kernel/syscall_table.S
index 29c8568ec55c..6b3dd99126d7 100644
--- a/arch/microblaze/kernel/syscall_table.S
+++ b/arch/microblaze/kernel/syscall_table.S
@@ -389,3 +389,6 @@ ENTRY(sys_call_table)
.long sys_memfd_create
.long sys_bpf
.long sys_execveat
+ .long sys_userfaultfd
+ .long sys_membarrier /* 390 */
+ .long sys_mlock2
diff --git a/arch/microblaze/pci/pci-common.c b/arch/microblaze/pci/pci-common.c
index 35654be3f1c0..14cba600da7a 100644
--- a/arch/microblaze/pci/pci-common.c
+++ b/arch/microblaze/pci/pci-common.c
@@ -48,6 +48,8 @@ static int global_phb_number; /* Global phb counter */
resource_size_t isa_mem_base;
unsigned long isa_io_base;
+EXPORT_SYMBOL(isa_io_base);
+
static int pci_bus_count;
struct pci_controller *pcibios_alloc_controller(struct device_node *dev)
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 2018c2b0e078..ac91939b9b75 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -15,7 +15,7 @@ config MIPS
select HAVE_ARCH_KGDB
select HAVE_ARCH_SECCOMP_FILTER
select HAVE_ARCH_TRACEHOOK
- select HAVE_BPF_JIT if !CPU_MICROMIPS
+ select HAVE_CBPF_JIT if !CPU_MICROMIPS
select HAVE_FUNCTION_TRACER
select HAVE_DYNAMIC_FTRACE
select HAVE_FTRACE_MCOUNT_RECORD
@@ -48,6 +48,7 @@ config MIPS
select GENERIC_SCHED_CLOCK if !CAVIUM_OCTEON_SOC
select GENERIC_CMOS_UPDATE
select HAVE_MOD_ARCH_SPECIFIC
+ select HAVE_NMI
select VIRT_TO_BUS
select MODULES_USE_ELF_REL if MODULES
select MODULES_USE_ELF_RELA if MODULES && 64BIT
@@ -62,6 +63,7 @@ config MIPS
select HAVE_IRQ_TIME_ACCOUNTING
select GENERIC_TIME_VSYSCALL
select ARCH_CLOCKSOURCE_DATA
+ select HANDLE_DOMAIN_IRQ
menu "Machine selection"
@@ -79,7 +81,7 @@ config MIPS_ALCHEMY
select SYS_HAS_CPU_MIPS32_R1
select SYS_SUPPORTS_32BIT_KERNEL
select SYS_SUPPORTS_APM_EMULATION
- select ARCH_REQUIRE_GPIOLIB
+ select GPIOLIB
select SYS_SUPPORTS_ZBOOT
select COMMON_CLK
@@ -98,7 +100,7 @@ config AR7
select SYS_SUPPORTS_LITTLE_ENDIAN
select SYS_SUPPORTS_MIPS16
select SYS_SUPPORTS_ZBOOT_UART16550
- select ARCH_REQUIRE_GPIOLIB
+ select GPIOLIB
select VLYNQ
select HAVE_CLK
help
@@ -122,11 +124,11 @@ config ATH25
config ATH79
bool "Atheros AR71XX/AR724X/AR913X based boards"
select ARCH_HAS_RESET_CONTROLLER
- select ARCH_REQUIRE_GPIOLIB
select BOOT_RAW
select CEVT_R4K
select CSRC_R4K
select DMA_NONCOHERENT
+ select GPIOLIB
select HAVE_CLK
select COMMON_CLK
select CLKDEV_LOOKUP
@@ -137,7 +139,7 @@ config ATH79
select SYS_SUPPORTS_32BIT_KERNEL
select SYS_SUPPORTS_BIG_ENDIAN
select SYS_SUPPORTS_MIPS16
- select SYS_SUPPORTS_ZBOOT
+ select SYS_SUPPORTS_ZBOOT_UART_PROM
select USE_OF
help
Support for the Atheros AR71XX/AR724X/AR913X SoCs.
@@ -170,7 +172,6 @@ config BMIPS_GENERIC
select USB_EHCI_BIG_ENDIAN_MMIO if CPU_BIG_ENDIAN
select USB_OHCI_BIG_ENDIAN_DESC if CPU_BIG_ENDIAN
select USB_OHCI_BIG_ENDIAN_MMIO if CPU_BIG_ENDIAN
- select ARCH_WANT_OPTIONAL_GPIOLIB
help
Build a generic DT-based kernel image that boots on select
BCM33xx cable modem chips, BCM63xx DSL chips, and BCM7xxx set-top
@@ -179,7 +180,6 @@ config BMIPS_GENERIC
config BCM47XX
bool "Broadcom BCM47XX based boards"
- select ARCH_WANT_OPTIONAL_GPIOLIB
select BOOT_RAW
select CEVT_R4K
select CSRC_R4K
@@ -196,6 +196,7 @@ config BCM47XX
select GPIOLIB
select LEDS_GPIO_REGISTER
select BCM47XX_NVRAM
+ select BCM47XX_SPROM
help
Support for BCM47XX based boards
@@ -211,7 +212,7 @@ config BCM63XX
select SYS_SUPPORTS_BIG_ENDIAN
select SYS_HAS_EARLY_PRINTK
select SWAP_IO_SPACE
- select ARCH_REQUIRE_GPIOLIB
+ select GPIOLIB
select HAVE_CLK
select MIPS_L1_CACHE_SHIFT_4
help
@@ -305,7 +306,7 @@ config MACH_INGENIC
select SYS_SUPPORTS_ZBOOT_UART16550
select DMA_NONCOHERENT
select IRQ_MIPS_CPU
- select ARCH_REQUIRE_GPIOLIB
+ select GPIOLIB
select COMMON_CLK
select GENERIC_IRQ_CHIP
select BUILTIN_DTB
@@ -325,7 +326,7 @@ config LANTIQ
select SYS_SUPPORTS_MIPS16
select SYS_SUPPORTS_MULTITHREADING
select SYS_HAS_EARLY_PRINTK
- select ARCH_REQUIRE_GPIOLIB
+ select GPIOLIB
select SWAP_IO_SPACE
select BOOT_RAW
select CLKDEV_LOOKUP
@@ -377,7 +378,6 @@ config MACH_LOONGSON64
config MACH_PISTACHIO
bool "IMG Pistachio SoC based boards"
- select ARCH_REQUIRE_GPIOLIB
select BOOT_ELF32
select BOOT_RAW
select CEVT_R4K
@@ -385,6 +385,7 @@ config MACH_PISTACHIO
select COMMON_CLK
select CSRC_R4K
select DMA_MAYBE_COHERENT
+ select GPIOLIB
select IRQ_MIPS_CPU
select LIBFDT
select MFD_SYSCON
@@ -397,6 +398,7 @@ config MACH_PISTACHIO
select SYS_SUPPORTS_LITTLE_ENDIAN
select SYS_SUPPORTS_MIPS_CPS
select SYS_SUPPORTS_MULTITHREADING
+ select SYS_SUPPORTS_RELOCATABLE
select SYS_SUPPORTS_ZBOOT
select SYS_HAS_EARLY_PRINTK
select USE_GENERIC_EARLY_PRINTK_8250
@@ -406,13 +408,13 @@ config MACH_PISTACHIO
config MACH_XILFPGA
bool "MIPSfpga Xilinx based boards"
- select ARCH_REQUIRE_GPIOLIB
select BOOT_ELF32
select BOOT_RAW
select BUILTIN_DTB
select CEVT_R4K
select COMMON_CLK
select CSRC_R4K
+ select GPIOLIB
select IRQ_MIPS_CPU
select LIBFDT
select MIPS_CPU_SCACHE
@@ -473,6 +475,7 @@ config MIPS_MALTA
select SYS_SUPPORTS_MULTITHREADING
select SYS_SUPPORTS_SMARTMIPS
select SYS_SUPPORTS_ZBOOT
+ select SYS_SUPPORTS_RELOCATABLE
select USE_OF
select ZONE_DMA32 if 64BIT
select BUILTIN_DTB
@@ -507,6 +510,7 @@ config MIPS_SEAD3
select MIPS_MSC
select SYS_HAS_CPU_MIPS32_R1
select SYS_HAS_CPU_MIPS32_R2
+ select SYS_HAS_CPU_MIPS32_R6
select SYS_HAS_CPU_MIPS64_R1
select SYS_HAS_EARLY_PRINTK
select SYS_SUPPORTS_32BIT_KERNEL
@@ -516,6 +520,7 @@ config MIPS_SEAD3
select SYS_SUPPORTS_SMARTMIPS
select SYS_SUPPORTS_MICROMIPS
select SYS_SUPPORTS_MIPS16
+ select SYS_SUPPORTS_RELOCATABLE
select USB_EHCI_BIG_ENDIAN_DESC
select USB_EHCI_BIG_ENDIAN_MMIO
select USE_OF
@@ -536,7 +541,7 @@ config MACH_VR41XX
select CSRC_R4K
select SYS_HAS_CPU_VR41XX
select SYS_SUPPORTS_MIPS16
- select ARCH_REQUIRE_GPIOLIB
+ select GPIOLIB
config NXP_STB220
bool "NXP STB220 board"
@@ -856,7 +861,7 @@ config MIKROTIK_RB532
select SYS_SUPPORTS_LITTLE_ENDIAN
select SWAP_IO_SPACE
select BOOT_RAW
- select ARCH_REQUIRE_GPIOLIB
+ select GPIOLIB
select MIPS_L1_CACHE_SHIFT_4
help
Support the Mikrotik(tm) RouterBoard 532 series,
@@ -879,7 +884,7 @@ config CAVIUM_OCTEON_SOC
select HW_HAS_PCI
select ZONE_DMA32
select HOLES_IN_ZONE
- select ARCH_REQUIRE_GPIOLIB
+ select GPIOLIB
select LIBFDT
select USE_OF
select ARCH_SPARSEMEM_ENABLE
@@ -937,7 +942,7 @@ config NLM_XLP_BOARD
select SYS_SUPPORTS_32BIT_KERNEL
select SYS_SUPPORTS_64BIT_KERNEL
select ARCH_PHYS_ADDR_T_64BIT
- select ARCH_REQUIRE_GPIOLIB
+ select GPIOLIB
select SYS_SUPPORTS_BIG_ENDIAN
select SYS_SUPPORTS_LITTLE_ENDIAN
select SYS_SUPPORTS_HIGHMEM
@@ -1077,7 +1082,7 @@ config MIPS_CLOCK_VSYSCALL
def_bool CSRC_R4K || CLKSRC_MIPS_GIC
config GPIO_TXX9
- select ARCH_REQUIRE_GPIOLIB
+ select GPIOLIB
bool
config FW_CFE
@@ -1155,6 +1160,13 @@ config ISA_DMA_API
config HOLES_IN_ZONE
bool
+config SYS_SUPPORTS_RELOCATABLE
+ bool
+ help
+ Selected if the platform supports relocating the kernel.
+ The platform must provide plat_get_fdt() if it selects CONFIG_USE_OF
+ to allow access to the command line and entropy sources.
+
#
# Endianness selection. Sufficiently obscure so many users don't know what to
# answer,so we try hard to limit the available choices. Also the use of a
@@ -1342,11 +1354,30 @@ config CPU_LOONGSON3
select CPU_SUPPORTS_HUGEPAGES
select WEAK_ORDERING
select WEAK_REORDERING_BEYOND_LLSC
- select ARCH_REQUIRE_GPIOLIB
+ select MIPS_PGD_C0_CONTEXT
+ select GPIOLIB
help
The Loongson 3 processor implements the MIPS64R2 instruction
set with many extensions.
+config LOONGSON3_ENHANCEMENT
+ bool "New Loongson 3 CPU Enhancements"
+ default n
+ select CPU_MIPSR2
+ select CPU_HAS_PREFETCH
+ depends on CPU_LOONGSON3
+ help
+ New Loongson 3 CPUs (since Loongson-3A R2, as opposed to Loongson-3A
+ R1, Loongson-3B R1 and Loongson-3B R2) have many enhancements, such as
+ FTLB, L1-VCache, EI/DI/Wait/Prefetch instructions, DSP/DSPv2 ASE, User
+ Local register, Read-Inhibit/Execute-Inhibit, SFB (Store Fill Buffer),
+ Fast TLB refill support, etc.
+
+ This option enables those enhancements which are not probed at run
+ time. If you want a generic kernel to run on all Loongson 3 machines,
+ please say 'N' here. If you want a high-performance kernel to run on
+ new Loongson 3 machines only, please say 'Y' here.
+
config CPU_LOONGSON2E
bool "Loongson 2E"
depends on SYS_HAS_CPU_LOONGSON2E
@@ -1362,7 +1393,7 @@ config CPU_LOONGSON2F
bool "Loongson 2F"
depends on SYS_HAS_CPU_LOONGSON2F
select CPU_LOONGSON2
- select ARCH_REQUIRE_GPIOLIB
+ select GPIOLIB
help
The Loongson 2F processor implements the MIPS III instruction set
with many extensions.
@@ -1375,6 +1406,8 @@ config CPU_LOONGSON1B
bool "Loongson 1B"
depends on SYS_HAS_CPU_LOONGSON1B
select CPU_LOONGSON1
+ select ARCH_WANT_OPTIONAL_GPIOLIB
+ select LEDS_GPIO_REGISTER
help
The Loongson 1B is a 32-bit SoC, which implements the MIPS32
release 2 instruction set.
@@ -1673,6 +1706,7 @@ config CPU_XLP
select CPU_HAS_PREFETCH
select CPU_MIPSR2
select CPU_SUPPORTS_HUGEPAGES
+ select MIPS_ASID_BITS_VARIABLE
help
Netlogic Microsystems XLP processors.
endchoice
@@ -1798,6 +1832,7 @@ config CPU_BMIPS4380
select MIPS_L1_CACHE_SHIFT_6
select SYS_SUPPORTS_SMP
select SYS_SUPPORTS_HOTPLUG_CPU
+ select CPU_HAS_RIXI
config CPU_BMIPS5000
bool
@@ -1805,10 +1840,12 @@ config CPU_BMIPS5000
select MIPS_L1_CACHE_SHIFT_7
select SYS_SUPPORTS_SMP
select SYS_SUPPORTS_HOTPLUG_CPU
+ select CPU_HAS_RIXI
config SYS_HAS_CPU_LOONGSON3
bool
select CPU_SUPPORTS_CPUFREQ
+ select CPU_HAS_RIXI
config SYS_HAS_CPU_LOONGSON2E
bool
@@ -1961,11 +1998,15 @@ config CPU_MIPSR1
config CPU_MIPSR2
bool
default y if CPU_MIPS32_R2 || CPU_MIPS64_R2 || CPU_CAVIUM_OCTEON
+ select CPU_HAS_RIXI
select MIPS_SPRAM
config CPU_MIPSR6
bool
default y if CPU_MIPS32_R6 || CPU_MIPS64_R6
+ select CPU_HAS_RIXI
+ select HAVE_ARCH_BITREVERSE
+ select MIPS_ASID_BITS_VARIABLE
select MIPS_SPRAM
config EVA
@@ -1999,7 +2040,7 @@ config MIPS_PGD_C0_CONTEXT
#
config HARDWARE_WATCHPOINTS
bool
- default y if CPU_MIPSR1 || CPU_MIPSR2
+ default y if CPU_MIPSR1 || CPU_MIPSR2 || CPU_MIPSR6
menu "Kernel type"
@@ -2042,6 +2083,16 @@ config KVM_GUEST_TIMER_FREQ
emulation when determining guest CPU Frequency. Instead, the guest's
timer frequency is specified directly.
+config MIPS_VA_BITS_48
+ bool "48 bits virtual memory"
+ depends on 64BIT
+ help
+ Support a maximum of at least 48 bits of application virtual memory.
+ Default is 40 bits or less, depending on the CPU.
+ This option results in a small memory overhead for page tables.
+ This option is only supported with 16k and 64k page sizes.
+ If unsure, say N.
+
choice
prompt "Kernel page size"
default PAGE_SIZE_4KB
@@ -2049,6 +2100,7 @@ choice
config PAGE_SIZE_4KB
bool "4kB"
depends on !CPU_LOONGSON2 && !CPU_LOONGSON3
+ depends on !MIPS_VA_BITS_48
help
This option select the standard 4kB Linux page size. On some
R3000-family processors this is the only available page size. Using
@@ -2058,6 +2110,7 @@ config PAGE_SIZE_4KB
config PAGE_SIZE_8KB
bool "8kB"
depends on CPU_R8000 || CPU_CAVIUM_OCTEON
+ depends on !MIPS_VA_BITS_48
help
Using 8kB page size will result in higher performance kernel at
the price of higher memory consumption. This option is available
@@ -2076,6 +2129,7 @@ config PAGE_SIZE_16KB
config PAGE_SIZE_32KB
bool "32kB"
depends on CPU_CAVIUM_OCTEON
+ depends on !MIPS_VA_BITS_48
help
Using 32kB page size will result in higher performance kernel at
the price of higher memory consumption. This option is available
@@ -2280,7 +2334,7 @@ config MIPS_CMP
config MIPS_CPS
bool "MIPS Coherent Processing System support"
- depends on SYS_SUPPORTS_MIPS_CPS && !CPU_MIPSR6
+ depends on SYS_SUPPORTS_MIPS_CPS
select MIPS_CM
select MIPS_CPC
select MIPS_CPS_PM if HOTPLUG_CPU
@@ -2371,6 +2425,9 @@ config CPU_HAS_WB
config XKS01
bool
+config CPU_HAS_RIXI
+ bool
+
#
# Vectored interrupt mode is an R2 feature
#
@@ -2401,6 +2458,21 @@ config CPU_R4000_WORKAROUNDS
config CPU_R4400_WORKAROUNDS
bool
+config MIPS_ASID_SHIFT
+ int
+ default 6 if CPU_R3000 || CPU_TX39XX
+ default 4 if CPU_R8000
+ default 0
+
+config MIPS_ASID_BITS
+ int
+ default 0 if MIPS_ASID_BITS_VARIABLE
+ default 6 if CPU_R3000 || CPU_TX39XX
+ default 8
+
+config MIPS_ASID_BITS_VARIABLE
+ bool
+
#
# - Highmem only makes sense for the 32-bit kernel.
# - The current highmem code will only work properly on physically indexed
@@ -2470,6 +2542,61 @@ config NUMA
config SYS_SUPPORTS_NUMA
bool
+config RELOCATABLE
+ bool "Relocatable kernel"
+ depends on SYS_SUPPORTS_RELOCATABLE && (CPU_MIPS32_R2 || CPU_MIPS64_R2 || CPU_MIPS32_R6 || CPU_MIPS64_R6)
+ help
+ This builds a kernel image that retains relocation information
+ so it can be loaded someplace besides the default 1MB.
+ The relocations make the kernel binary about 15% larger,
+ but are discarded at runtime.
+
+config RELOCATION_TABLE_SIZE
+ hex "Relocation table size"
+ depends on RELOCATABLE
+ range 0x0 0x01000000
+ default "0x00100000"
+ ---help---
+ A table of relocation data will be appended to the kernel binary
+ and parsed at boot to fix up the relocated kernel.
+
+ This option allows the amount of space reserved for the table to be
+ adjusted, although the default of 1Mb should be ok in most cases.
+
+ The build will fail and a valid size will be suggested if this is too small.
+
+ If unsure, leave at the default value.
+
+config RANDOMIZE_BASE
+ bool "Randomize the address of the kernel image"
+ depends on RELOCATABLE
+ ---help---
+ Randomizes the physical and virtual address at which the
+ kernel image is loaded, as a security feature that
+ deters exploit attempts relying on knowledge of the location
+ of kernel internals.
+
+ Entropy is generated using any coprocessor 0 registers available.
+
+ The kernel will be offset by up to RANDOMIZE_BASE_MAX_OFFSET.
+
+ If unsure, say N.
+
+config RANDOMIZE_BASE_MAX_OFFSET
+ hex "Maximum kASLR offset" if EXPERT
+ depends on RANDOMIZE_BASE
+ range 0x0 0x40000000 if EVA || 64BIT
+ range 0x0 0x08000000
+ default "0x01000000"
+ ---help---
+ When kASLR is active, this provides the maximum offset that will
+ be applied to the kernel image. It should be set according to the
+ amount of physical RAM available in the target system minus
+ PHYSICAL_START and must be a power of 2.
+
+ This is limited by the size of KSEG0, 256Mb on 32-bit or 1Gb with
+ EVA or 64-bit. The default is 16Mb.
+
config NODES_SHIFT
int
default "6"
@@ -2477,7 +2604,7 @@ config NODES_SHIFT
config HW_PERF_EVENTS
bool "Enable hardware performance counter support for perf events"
- depends on PERF_EVENTS && OPROFILE=n && (CPU_MIPS32 || CPU_MIPS64 || CPU_R10000 || CPU_SB1 || CPU_CAVIUM_OCTEON || CPU_XLP || CPU_LOONGSON3)
+ depends on PERF_EVENTS && !OPROFILE && (CPU_MIPS32 || CPU_MIPS64 || CPU_R10000 || CPU_SB1 || CPU_CAVIUM_OCTEON || CPU_XLP || CPU_LOONGSON3)
default y
help
Enable hardware performance counter support for perf events. If
@@ -2810,6 +2937,10 @@ choice
config MIPS_CMDLINE_FROM_BOOTLOADER
bool "Bootloader kernel arguments if available"
+
+ config MIPS_CMDLINE_BUILTIN_EXTEND
+ depends on CMDLINE_BOOL
+ bool "Extend builtin kernel arguments with bootloader arguments"
endchoice
endmenu
@@ -2987,6 +3118,7 @@ config MIPS32_N32
config BINFMT_ELF32
bool
default y if MIPS32_O32 || MIPS32_N32
+ select ELFCORE
endmenu
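The RANDOMIZE_BASE_MAX_OFFSET help above requires the offset ceiling to be a power of 2; as a rough, stand-alone illustration of why, the sketch below masks a raw entropy word with (max - 1) to obtain a bounded, aligned offset without a divide. The 64 KiB alignment and the stand-in entropy value are assumptions made for the example; this is not the kernel's actual relocation code.

/*
 * Illustrative sketch only: derive a kASLR offset bounded by a
 * power-of-two maximum (cf. RANDOMIZE_BASE_MAX_OFFSET above).
 */
#include <stdint.h>
#include <stdio.h>

#define KASLR_MAX_OFFSET 0x01000000u    /* 16 MB, the Kconfig default */
#define KASLR_ALIGN      0x00010000u    /* assumed 64 KiB alignment   */

static uint32_t pick_kaslr_offset(uint32_t entropy)
{
        /* Masking works only because the maximum is a power of 2 */
        uint32_t off = entropy & (KASLR_MAX_OFFSET - 1);

        /* Keep the relocated image suitably aligned */
        return off & ~(KASLR_ALIGN - 1);
}

int main(void)
{
        uint32_t fake_entropy = 0x12345678u; /* stand-in for a CP0 counter read */

        printf("offset = 0x%08x\n",
               (unsigned int)pick_kaslr_offset(fake_entropy));
        return 0;
}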
diff --git a/arch/mips/Makefile b/arch/mips/Makefile
index e78d60dbdffd..efd7a9dc93c4 100644
--- a/arch/mips/Makefile
+++ b/arch/mips/Makefile
@@ -12,6 +12,9 @@
# for "archclean" cleaning up for this architecture.
#
+archscripts: scripts_basic
+ $(Q)$(MAKE) $(build)=arch/mips/boot/tools relocs
+
KBUILD_DEFCONFIG := ip22_defconfig
#
@@ -93,6 +96,10 @@ LDFLAGS_vmlinux += -G 0 -static -n -nostdlib
KBUILD_AFLAGS_MODULE += -mlong-calls
KBUILD_CFLAGS_MODULE += -mlong-calls
+ifeq ($(CONFIG_RELOCATABLE),y)
+LDFLAGS_vmlinux += --emit-relocs
+endif
+
#
# pass -msoft-float to GAS if it supports it. However on newer binutils
# (specifically newer than 2.24.51.20140728) we then also need to explicitly
@@ -193,6 +200,8 @@ ifeq ($(CONFIG_CPU_HAS_MSA),y)
toolchain-msa := $(call cc-option-yn,$(mips-cflags) -mhard-float -mfp64 -Wa$(comma)-mmsa)
cflags-$(toolchain-msa) += -DTOOLCHAIN_SUPPORTS_MSA
endif
+toolchain-virt := $(call cc-option-yn,$(mips-cflags) -mvirt)
+cflags-$(toolchain-virt) += -DTOOLCHAIN_SUPPORTS_VIRT
cflags-$(CONFIG_MIPS_COMPACT_BRANCHES_NEVER) += -mcompact-branches=never
cflags-$(CONFIG_MIPS_COMPACT_BRANCHES_OPTIMAL) += -mcompact-branches=optimal
@@ -310,6 +319,10 @@ rom.bin rom.sw: vmlinux
$(bootvars-y) $@
endif
+CMD_RELOCS = arch/mips/boot/tools/relocs
+quiet_cmd_relocs = RELOCS $<
+ cmd_relocs = $(CMD_RELOCS) $<
+
#
# Some machines like the Indy need 32-bit ELF binaries for booting purposes.
# Other need ECOFF, so we build a 32-bit ELF binary for them which we then
@@ -318,6 +331,11 @@ endif
quiet_cmd_32 = OBJCOPY $@
cmd_32 = $(OBJCOPY) -O $(32bit-bfd) $(OBJCOPYFLAGS) $< $@
vmlinux.32: vmlinux
+ifeq ($(CONFIG_RELOCATABLE)$(CONFIG_64BIT),yy)
+# Currently, objcopy fails to handle the relocations in the ELF64 binary,
+# so the relocs tool must be run here to remove them first.
+ $(call cmd,relocs)
+endif
$(call cmd,32)
#
@@ -333,6 +351,9 @@ all: $(all-y)
# boot
$(boot-y): $(vmlinux-32) FORCE
+ifeq ($(CONFIG_RELOCATABLE)$(CONFIG_32BIT),yy)
+ $(call cmd,relocs)
+endif
$(Q)$(MAKE) $(build)=arch/mips/boot VMLINUX=$(vmlinux-32) \
$(bootvars-y) arch/mips/boot/$@
@@ -385,6 +406,7 @@ endif
archclean:
$(Q)$(MAKE) $(clean)=arch/mips/boot
$(Q)$(MAKE) $(clean)=arch/mips/boot/compressed
+ $(Q)$(MAKE) $(clean)=arch/mips/boot/tools
$(Q)$(MAKE) $(clean)=arch/mips/lasat
define archhelp
diff --git a/arch/mips/alchemy/Kconfig b/arch/mips/alchemy/Kconfig
index 7fa24881b708..88b4d6a792c1 100644
--- a/arch/mips/alchemy/Kconfig
+++ b/arch/mips/alchemy/Kconfig
@@ -20,7 +20,7 @@ config MIPS_MTX1
config MIPS_DB1XXX
bool "Alchemy DB1XXX / PB1XXX boards"
- select ARCH_REQUIRE_GPIOLIB
+ select GPIOLIB
select HW_HAS_PCI
select SYS_SUPPORTS_LITTLE_ENDIAN
select SYS_HAS_EARLY_PRINTK
diff --git a/arch/mips/alchemy/common/clock.c b/arch/mips/alchemy/common/clock.c
index bd34f4093cd9..7ba7ea0a22f8 100644
--- a/arch/mips/alchemy/common/clock.c
+++ b/arch/mips/alchemy/common/clock.c
@@ -1043,8 +1043,7 @@ static int __init alchemy_clk_init(void)
/* Root of the Alchemy clock tree: external 12MHz crystal osc */
c = clk_register_fixed_rate(NULL, ALCHEMY_ROOT_CLK, NULL,
- CLK_IS_ROOT,
- ALCHEMY_ROOTCLK_RATE);
+ 0, ALCHEMY_ROOTCLK_RATE);
ERRCK(c)
/* CPU core clock */
diff --git a/arch/mips/ath79/Kconfig b/arch/mips/ath79/Kconfig
index 13c04cf54afa..dfc60209dc63 100644
--- a/arch/mips/ath79/Kconfig
+++ b/arch/mips/ath79/Kconfig
@@ -71,18 +71,6 @@ config ATH79_MACH_UBNT_XM
Say 'Y' here if you want your kernel to support the
Ubiquiti Networks XM (rev 1.0) board.
-choice
- prompt "Build a DTB in the kernel"
- optional
- help
- Select a devicetree that should be built into the kernel.
-
- config DTB_TL_WR1043ND_V1
- bool "TL-WR1043ND Version 1"
- select BUILTIN_DTB
- select SOC_AR913X
-endchoice
-
endmenu
config SOC_AR71XX
diff --git a/arch/mips/ath79/clock.c b/arch/mips/ath79/clock.c
index 618dfd735eed..2e7378467c5c 100644
--- a/arch/mips/ath79/clock.c
+++ b/arch/mips/ath79/clock.c
@@ -18,17 +18,21 @@
#include <linux/clk.h>
#include <linux/clkdev.h>
#include <linux/clk-provider.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <dt-bindings/clock/ath79-clk.h>
#include <asm/div64.h>
#include <asm/mach-ath79/ath79.h>
#include <asm/mach-ath79/ar71xx_regs.h>
#include "common.h"
+#include "machtypes.h"
#define AR71XX_BASE_FREQ 40000000
#define AR724X_BASE_FREQ 40000000
-static struct clk *clks[3];
+static struct clk *clks[ATH79_CLK_END];
static struct clk_onecell_data clk_data = {
.clks = clks,
.clk_num = ARRAY_SIZE(clks),
@@ -40,7 +44,7 @@ static struct clk *__init ath79_add_sys_clkdev(
struct clk *clk;
int err;
- clk = clk_register_fixed_rate(NULL, id, NULL, CLK_IS_ROOT, rate);
+ clk = clk_register_fixed_rate(NULL, id, NULL, 0, rate);
if (!clk)
panic("failed to allocate %s clock structure", id);
@@ -78,107 +82,139 @@ static void __init ar71xx_clocks_init(void)
ahb_rate = cpu_rate / div;
ath79_add_sys_clkdev("ref", ref_rate);
- clks[0] = ath79_add_sys_clkdev("cpu", cpu_rate);
- clks[1] = ath79_add_sys_clkdev("ddr", ddr_rate);
- clks[2] = ath79_add_sys_clkdev("ahb", ahb_rate);
+ clks[ATH79_CLK_CPU] = ath79_add_sys_clkdev("cpu", cpu_rate);
+ clks[ATH79_CLK_DDR] = ath79_add_sys_clkdev("ddr", ddr_rate);
+ clks[ATH79_CLK_AHB] = ath79_add_sys_clkdev("ahb", ahb_rate);
clk_add_alias("wdt", NULL, "ahb", NULL);
clk_add_alias("uart", NULL, "ahb", NULL);
}
-static void __init ar724x_clocks_init(void)
+static struct clk * __init ath79_reg_ffclk(const char *name,
+ const char *parent_name, unsigned int mult, unsigned int div)
{
- unsigned long ref_rate;
- unsigned long cpu_rate;
- unsigned long ddr_rate;
- unsigned long ahb_rate;
- u32 pll;
- u32 freq;
- u32 div;
+ struct clk *clk;
- ref_rate = AR724X_BASE_FREQ;
- pll = ath79_pll_rr(AR724X_PLL_REG_CPU_CONFIG);
+ clk = clk_register_fixed_factor(NULL, name, parent_name, 0, mult, div);
+ if (!clk)
+ panic("failed to allocate %s clock structure", name);
- div = ((pll >> AR724X_PLL_FB_SHIFT) & AR724X_PLL_FB_MASK);
- freq = div * ref_rate;
+ return clk;
+}
+static void __init ar724x_clk_init(struct clk *ref_clk, void __iomem *pll_base)
+{
+ u32 pll;
+ u32 mult, div, ddr_div, ahb_div;
+
+ pll = __raw_readl(pll_base + AR724X_PLL_REG_CPU_CONFIG);
+
+ mult = ((pll >> AR724X_PLL_FB_SHIFT) & AR724X_PLL_FB_MASK);
div = ((pll >> AR724X_PLL_REF_DIV_SHIFT) & AR724X_PLL_REF_DIV_MASK) * 2;
- freq /= div;
- cpu_rate = freq;
+ ddr_div = ((pll >> AR724X_DDR_DIV_SHIFT) & AR724X_DDR_DIV_MASK) + 1;
+ ahb_div = (((pll >> AR724X_AHB_DIV_SHIFT) & AR724X_AHB_DIV_MASK) + 1) * 2;
- div = ((pll >> AR724X_DDR_DIV_SHIFT) & AR724X_DDR_DIV_MASK) + 1;
- ddr_rate = freq / div;
+ clks[ATH79_CLK_CPU] = ath79_reg_ffclk("cpu", "ref", mult, div);
+ clks[ATH79_CLK_DDR] = ath79_reg_ffclk("ddr", "ref", mult, div * ddr_div);
+ clks[ATH79_CLK_AHB] = ath79_reg_ffclk("ahb", "ref", mult, div * ahb_div);
+}
- div = (((pll >> AR724X_AHB_DIV_SHIFT) & AR724X_AHB_DIV_MASK) + 1) * 2;
- ahb_rate = cpu_rate / div;
+static void __init ar724x_clocks_init(void)
+{
+ struct clk *ref_clk;
- ath79_add_sys_clkdev("ref", ref_rate);
- clks[0] = ath79_add_sys_clkdev("cpu", cpu_rate);
- clks[1] = ath79_add_sys_clkdev("ddr", ddr_rate);
- clks[2] = ath79_add_sys_clkdev("ahb", ahb_rate);
+ ref_clk = ath79_add_sys_clkdev("ref", AR724X_BASE_FREQ);
+
+ ar724x_clk_init(ref_clk, ath79_pll_base);
+
+ /* just to keep plat_time_init() in arch/mips/ath79/setup.c happy */
+ clk_register_clkdev(clks[ATH79_CLK_CPU], "cpu", NULL);
+ clk_register_clkdev(clks[ATH79_CLK_DDR], "ddr", NULL);
+ clk_register_clkdev(clks[ATH79_CLK_AHB], "ahb", NULL);
clk_add_alias("wdt", NULL, "ahb", NULL);
clk_add_alias("uart", NULL, "ahb", NULL);
}
-static void __init ar933x_clocks_init(void)
+static void __init ar9330_clk_init(struct clk *ref_clk, void __iomem *pll_base)
{
- unsigned long ref_rate;
- unsigned long cpu_rate;
- unsigned long ddr_rate;
- unsigned long ahb_rate;
u32 clock_ctrl;
- u32 cpu_config;
- u32 freq;
- u32 t;
+ u32 ref_div;
+ u32 ninit_mul;
+ u32 out_div;
- t = ath79_reset_rr(AR933X_RESET_REG_BOOTSTRAP);
- if (t & AR933X_BOOTSTRAP_REF_CLK_40)
- ref_rate = (40 * 1000 * 1000);
- else
- ref_rate = (25 * 1000 * 1000);
+ u32 cpu_div;
+ u32 ddr_div;
+ u32 ahb_div;
- clock_ctrl = ath79_pll_rr(AR933X_PLL_CLOCK_CTRL_REG);
+ clock_ctrl = __raw_readl(pll_base + AR933X_PLL_CLOCK_CTRL_REG);
if (clock_ctrl & AR933X_PLL_CLOCK_CTRL_BYPASS) {
- cpu_rate = ref_rate;
- ahb_rate = ref_rate;
- ddr_rate = ref_rate;
+ ref_div = 1;
+ ninit_mul = 1;
+ out_div = 1;
+
+ cpu_div = 1;
+ ddr_div = 1;
+ ahb_div = 1;
} else {
- cpu_config = ath79_pll_rr(AR933X_PLL_CPU_CONFIG_REG);
+ u32 cpu_config;
+ u32 t;
+
+ cpu_config = __raw_readl(pll_base + AR933X_PLL_CPU_CONFIG_REG);
t = (cpu_config >> AR933X_PLL_CPU_CONFIG_REFDIV_SHIFT) &
AR933X_PLL_CPU_CONFIG_REFDIV_MASK;
- freq = ref_rate / t;
+ ref_div = t;
- t = (cpu_config >> AR933X_PLL_CPU_CONFIG_NINT_SHIFT) &
+ ninit_mul = (cpu_config >> AR933X_PLL_CPU_CONFIG_NINT_SHIFT) &
AR933X_PLL_CPU_CONFIG_NINT_MASK;
- freq *= t;
t = (cpu_config >> AR933X_PLL_CPU_CONFIG_OUTDIV_SHIFT) &
AR933X_PLL_CPU_CONFIG_OUTDIV_MASK;
if (t == 0)
t = 1;
- freq >>= t;
+ out_div = (1 << t);
- t = ((clock_ctrl >> AR933X_PLL_CLOCK_CTRL_CPU_DIV_SHIFT) &
+ cpu_div = ((clock_ctrl >> AR933X_PLL_CLOCK_CTRL_CPU_DIV_SHIFT) &
AR933X_PLL_CLOCK_CTRL_CPU_DIV_MASK) + 1;
- cpu_rate = freq / t;
- t = ((clock_ctrl >> AR933X_PLL_CLOCK_CTRL_DDR_DIV_SHIFT) &
+ ddr_div = ((clock_ctrl >> AR933X_PLL_CLOCK_CTRL_DDR_DIV_SHIFT) &
AR933X_PLL_CLOCK_CTRL_DDR_DIV_MASK) + 1;
- ddr_rate = freq / t;
- t = ((clock_ctrl >> AR933X_PLL_CLOCK_CTRL_AHB_DIV_SHIFT) &
+ ahb_div = ((clock_ctrl >> AR933X_PLL_CLOCK_CTRL_AHB_DIV_SHIFT) &
AR933X_PLL_CLOCK_CTRL_AHB_DIV_MASK) + 1;
- ahb_rate = freq / t;
}
- ath79_add_sys_clkdev("ref", ref_rate);
- clks[0] = ath79_add_sys_clkdev("cpu", cpu_rate);
- clks[1] = ath79_add_sys_clkdev("ddr", ddr_rate);
- clks[2] = ath79_add_sys_clkdev("ahb", ahb_rate);
+ clks[ATH79_CLK_CPU] = ath79_reg_ffclk("cpu", "ref",
+ ninit_mul, ref_div * out_div * cpu_div);
+ clks[ATH79_CLK_DDR] = ath79_reg_ffclk("ddr", "ref",
+ ninit_mul, ref_div * out_div * ddr_div);
+ clks[ATH79_CLK_AHB] = ath79_reg_ffclk("ahb", "ref",
+ ninit_mul, ref_div * out_div * ahb_div);
+}
+
+static void __init ar933x_clocks_init(void)
+{
+ struct clk *ref_clk;
+ unsigned long ref_rate;
+ u32 t;
+
+ t = ath79_reset_rr(AR933X_RESET_REG_BOOTSTRAP);
+ if (t & AR933X_BOOTSTRAP_REF_CLK_40)
+ ref_rate = (40 * 1000 * 1000);
+ else
+ ref_rate = (25 * 1000 * 1000);
+
+ ref_clk = ath79_add_sys_clkdev("ref", ref_rate);
+
+ ar9330_clk_init(ref_clk, ath79_pll_base);
+
+ /* just to keep plat_time_init() in arch/mips/ath79/setup.c happy */
+ clk_register_clkdev(clks[ATH79_CLK_CPU], "cpu", NULL);
+ clk_register_clkdev(clks[ATH79_CLK_DDR], "ddr", NULL);
+ clk_register_clkdev(clks[ATH79_CLK_AHB], "ahb", NULL);
clk_add_alias("wdt", NULL, "ahb", NULL);
clk_add_alias("uart", NULL, "ref", NULL);
@@ -310,9 +346,9 @@ static void __init ar934x_clocks_init(void)
ahb_rate = cpu_pll / (postdiv + 1);
ath79_add_sys_clkdev("ref", ref_rate);
- clks[0] = ath79_add_sys_clkdev("cpu", cpu_rate);
- clks[1] = ath79_add_sys_clkdev("ddr", ddr_rate);
- clks[2] = ath79_add_sys_clkdev("ahb", ahb_rate);
+ clks[ATH79_CLK_CPU] = ath79_add_sys_clkdev("cpu", cpu_rate);
+ clks[ATH79_CLK_DDR] = ath79_add_sys_clkdev("ddr", ddr_rate);
+ clks[ATH79_CLK_AHB] = ath79_add_sys_clkdev("ahb", ahb_rate);
clk_add_alias("wdt", NULL, "ref", NULL);
clk_add_alias("uart", NULL, "ref", NULL);
@@ -397,9 +433,9 @@ static void __init qca955x_clocks_init(void)
ahb_rate = cpu_pll / (postdiv + 1);
ath79_add_sys_clkdev("ref", ref_rate);
- clks[0] = ath79_add_sys_clkdev("cpu", cpu_rate);
- clks[1] = ath79_add_sys_clkdev("ddr", ddr_rate);
- clks[2] = ath79_add_sys_clkdev("ahb", ahb_rate);
+ clks[ATH79_CLK_CPU] = ath79_add_sys_clkdev("cpu", cpu_rate);
+ clks[ATH79_CLK_DDR] = ath79_add_sys_clkdev("ddr", ddr_rate);
+ clks[ATH79_CLK_AHB] = ath79_add_sys_clkdev("ahb", ahb_rate);
clk_add_alias("wdt", NULL, "ref", NULL);
clk_add_alias("uart", NULL, "ref", NULL);
@@ -419,8 +455,6 @@ void __init ath79_clocks_init(void)
qca955x_clocks_init();
else
BUG();
-
- of_clk_init(NULL);
}
unsigned long __init
@@ -447,8 +481,49 @@ static void __init ath79_clocks_init_dt(struct device_node *np)
CLK_OF_DECLARE(ar7100, "qca,ar7100-pll", ath79_clocks_init_dt);
CLK_OF_DECLARE(ar7240, "qca,ar7240-pll", ath79_clocks_init_dt);
-CLK_OF_DECLARE(ar9130, "qca,ar9130-pll", ath79_clocks_init_dt);
-CLK_OF_DECLARE(ar9330, "qca,ar9330-pll", ath79_clocks_init_dt);
CLK_OF_DECLARE(ar9340, "qca,ar9340-pll", ath79_clocks_init_dt);
CLK_OF_DECLARE(ar9550, "qca,qca9550-pll", ath79_clocks_init_dt);
+
+static void __init ath79_clocks_init_dt_ng(struct device_node *np)
+{
+ struct clk *ref_clk;
+ void __iomem *pll_base;
+ const char *dnfn = of_node_full_name(np);
+
+ ref_clk = of_clk_get(np, 0);
+ if (IS_ERR(ref_clk)) {
+ pr_err("%s: of_clk_get failed\n", dnfn);
+ goto err;
+ }
+
+ pll_base = of_iomap(np, 0);
+ if (!pll_base) {
+ pr_err("%s: can't map pll registers\n", dnfn);
+ goto err_clk;
+ }
+
+ if (of_device_is_compatible(np, "qca,ar9130-pll"))
+ ar724x_clk_init(ref_clk, pll_base);
+ else if (of_device_is_compatible(np, "qca,ar9330-pll"))
+ ar9330_clk_init(ref_clk, pll_base);
+ else {
+ pr_err("%s: could not find any appropriate clk_init()\n", dnfn);
+ goto err_clk;
+ }
+
+ if (of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data)) {
+ pr_err("%s: could not register clk provider\n", dnfn);
+ goto err_clk;
+ }
+
+ return;
+
+err_clk:
+ clk_put(ref_clk);
+
+err:
+ return;
+}
+CLK_OF_DECLARE(ar9130_clk, "qca,ar9130-pll", ath79_clocks_init_dt_ng);
+CLK_OF_DECLARE(ar9330_clk, "qca,ar9330-pll", ath79_clocks_init_dt_ng);
#endif
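The AR724x/AR9330 conversion above replaces pre-computed rates with fixed-factor clocks, where each child rate is parent_rate * mult / div. A minimal sketch of that arithmetic, using made-up register field values rather than real PLL reads:

/*
 * Fixed-factor clock arithmetic as used by the conversion above:
 * rate = parent_rate * mult / div, with DDR and AHB taking extra
 * post-dividers.  The mult/div values are example numbers only.
 */
#include <stdio.h>

static unsigned long ffclk_rate(unsigned long parent, unsigned int mult,
                                unsigned int div)
{
        return (unsigned long)(((unsigned long long)parent * mult) / div);
}

int main(void)
{
        unsigned long ref = 40000000;          /* AR724X_BASE_FREQ */
        unsigned int mult = 20, div = 2;       /* assumed feedback / ref-div */
        unsigned int ddr_div = 1, ahb_div = 2; /* assumed post-dividers */

        printf("cpu = %lu Hz\n", ffclk_rate(ref, mult, div));
        printf("ddr = %lu Hz\n", ffclk_rate(ref, mult, div * ddr_div));
        printf("ahb = %lu Hz\n", ffclk_rate(ref, mult, div * ahb_div));
        return 0;
}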
diff --git a/arch/mips/ath79/common.c b/arch/mips/ath79/common.c
index 3cedd1f95e0f..d071a3a0f876 100644
--- a/arch/mips/ath79/common.c
+++ b/arch/mips/ath79/common.c
@@ -46,12 +46,12 @@ void ath79_ddr_ctrl_init(void)
{
ath79_ddr_base = ioremap_nocache(AR71XX_DDR_CTRL_BASE,
AR71XX_DDR_CTRL_SIZE);
- if (soc_is_ar71xx() || soc_is_ar934x()) {
- ath79_ddr_wb_flush_base = ath79_ddr_base + 0x9c;
- ath79_ddr_pci_win_base = ath79_ddr_base + 0x7c;
- } else {
+ if (soc_is_ar913x() || soc_is_ar724x() || soc_is_ar933x()) {
ath79_ddr_wb_flush_base = ath79_ddr_base + 0x7c;
ath79_ddr_pci_win_base = 0;
+ } else {
+ ath79_ddr_wb_flush_base = ath79_ddr_base + 0x9c;
+ ath79_ddr_pci_win_base = ath79_ddr_base + 0x7c;
}
}
EXPORT_SYMBOL_GPL(ath79_ddr_ctrl_init);
@@ -76,14 +76,14 @@ void ath79_ddr_set_pci_windows(void)
{
BUG_ON(!ath79_ddr_pci_win_base);
- __raw_writel(AR71XX_PCI_WIN0_OFFS, ath79_ddr_pci_win_base + 0);
- __raw_writel(AR71XX_PCI_WIN1_OFFS, ath79_ddr_pci_win_base + 1);
- __raw_writel(AR71XX_PCI_WIN2_OFFS, ath79_ddr_pci_win_base + 2);
- __raw_writel(AR71XX_PCI_WIN3_OFFS, ath79_ddr_pci_win_base + 3);
- __raw_writel(AR71XX_PCI_WIN4_OFFS, ath79_ddr_pci_win_base + 4);
- __raw_writel(AR71XX_PCI_WIN5_OFFS, ath79_ddr_pci_win_base + 5);
- __raw_writel(AR71XX_PCI_WIN6_OFFS, ath79_ddr_pci_win_base + 6);
- __raw_writel(AR71XX_PCI_WIN7_OFFS, ath79_ddr_pci_win_base + 7);
+ __raw_writel(AR71XX_PCI_WIN0_OFFS, ath79_ddr_pci_win_base + 0x0);
+ __raw_writel(AR71XX_PCI_WIN1_OFFS, ath79_ddr_pci_win_base + 0x4);
+ __raw_writel(AR71XX_PCI_WIN2_OFFS, ath79_ddr_pci_win_base + 0x8);
+ __raw_writel(AR71XX_PCI_WIN3_OFFS, ath79_ddr_pci_win_base + 0xc);
+ __raw_writel(AR71XX_PCI_WIN4_OFFS, ath79_ddr_pci_win_base + 0x10);
+ __raw_writel(AR71XX_PCI_WIN5_OFFS, ath79_ddr_pci_win_base + 0x14);
+ __raw_writel(AR71XX_PCI_WIN6_OFFS, ath79_ddr_pci_win_base + 0x18);
+ __raw_writel(AR71XX_PCI_WIN7_OFFS, ath79_ddr_pci_win_base + 0x1c);
}
EXPORT_SYMBOL_GPL(ath79_ddr_set_pci_windows);
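The PCI window hunk above switches from register indices (+0 .. +7) to byte offsets (0x0 .. 0x1c) because the base is byte-addressed, so consecutive 32-bit registers sit 4 bytes apart. A small user-space sketch of that addressing, with the register block simulated by an array:

/*
 * With a byte-addressed base, window register i lives at base + 4*i;
 * adding plain "+ i" would only advance i bytes.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint32_t regs[8] = { 0 };
        uint8_t *base = (uint8_t *)regs;  /* byte-addressed register base */
        unsigned int i;

        for (i = 0; i < 8; i++)
                *(uint32_t *)(base + 4 * i) = 0x100 + i; /* program window i */

        for (i = 0; i < 8; i++)
                printf("win%u @ +0x%02x = 0x%x\n", i, 4 * i, regs[i]);
        return 0;
}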
diff --git a/arch/mips/ath79/early_printk.c b/arch/mips/ath79/early_printk.c
index b955fafc58ba..d1adc59af5bf 100644
--- a/arch/mips/ath79/early_printk.c
+++ b/arch/mips/ath79/early_printk.c
@@ -31,13 +31,15 @@ static inline void prom_putchar_wait(void __iomem *reg, u32 mask, u32 val)
} while (1);
}
+#define BOTH_EMPTY (UART_LSR_TEMT | UART_LSR_THRE)
+
static void prom_putchar_ar71xx(unsigned char ch)
{
void __iomem *base = (void __iomem *)(KSEG1ADDR(AR71XX_UART_BASE));
- prom_putchar_wait(base + UART_LSR * 4, UART_LSR_THRE, UART_LSR_THRE);
+ prom_putchar_wait(base + UART_LSR * 4, BOTH_EMPTY, BOTH_EMPTY);
__raw_writel(ch, base + UART_TX * 4);
- prom_putchar_wait(base + UART_LSR * 4, UART_LSR_THRE, UART_LSR_THRE);
+ prom_putchar_wait(base + UART_LSR * 4, BOTH_EMPTY, BOTH_EMPTY);
}
static void prom_putchar_ar933x(unsigned char ch)
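The early-printk change above waits on BOTH_EMPTY (THRE | TEMT) instead of THRE alone, i.e. until the holding register is empty and the transmit shifter has drained. A stand-alone sketch of that polling, with the line status register simulated by a variable:

/*
 * Poll until both THRE (holding register empty) and TEMT (transmitter
 * idle) are set before writing the next character.
 */
#include <stdint.h>
#include <stdio.h>

#define UART_LSR_THRE 0x20
#define UART_LSR_TEMT 0x40
#define BOTH_EMPTY    (UART_LSR_TEMT | UART_LSR_THRE)

static volatile uint32_t fake_lsr = UART_LSR_THRE; /* FIFO empty, still shifting */

static void wait_both_empty(void)
{
        while ((fake_lsr & BOTH_EMPTY) != BOTH_EMPTY)
                fake_lsr |= UART_LSR_TEMT; /* simulate the shifter draining */
}

int main(void)
{
        wait_both_empty();
        printf("LSR = 0x%02x, safe to write the next character\n",
               (unsigned int)fake_lsr);
        return 0;
}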
diff --git a/arch/mips/ath79/setup.c b/arch/mips/ath79/setup.c
index be451ee4a5ea..7adab180e0ca 100644
--- a/arch/mips/ath79/setup.c
+++ b/arch/mips/ath79/setup.c
@@ -17,6 +17,7 @@
#include <linux/bootmem.h>
#include <linux/err.h>
#include <linux/clk.h>
+#include <linux/clk-provider.h>
#include <linux/of_platform.h>
#include <linux/of_fdt.h>
@@ -203,26 +204,57 @@ void __init plat_mem_setup(void)
fdt_start = fw_getenvl("fdt_start");
if (fdt_start)
__dt_setup_arch((void *)KSEG0ADDR(fdt_start));
-#ifdef CONFIG_BUILTIN_DTB
- else
- __dt_setup_arch(__dtb_start);
-#endif
+ else if (fw_arg0 == -2)
+ __dt_setup_arch((void *)KSEG0ADDR(fw_arg1));
- ath79_reset_base = ioremap_nocache(AR71XX_RESET_BASE,
- AR71XX_RESET_SIZE);
- ath79_pll_base = ioremap_nocache(AR71XX_PLL_BASE,
- AR71XX_PLL_SIZE);
- ath79_detect_sys_type();
- ath79_ddr_ctrl_init();
+ if (mips_machtype != ATH79_MACH_GENERIC_OF) {
+ ath79_reset_base = ioremap_nocache(AR71XX_RESET_BASE,
+ AR71XX_RESET_SIZE);
+ ath79_pll_base = ioremap_nocache(AR71XX_PLL_BASE,
+ AR71XX_PLL_SIZE);
+ ath79_detect_sys_type();
+ ath79_ddr_ctrl_init();
- if (mips_machtype != ATH79_MACH_GENERIC_OF)
detect_memory_region(0, ATH79_MEM_SIZE_MIN, ATH79_MEM_SIZE_MAX);
- _machine_restart = ath79_restart;
+ /* OF machines should use the reset driver */
+ _machine_restart = ath79_restart;
+ }
+
_machine_halt = ath79_halt;
pm_power_off = ath79_halt;
}
+static void __init ath79_of_plat_time_init(void)
+{
+ struct device_node *np;
+ struct clk *clk;
+ unsigned long cpu_clk_rate;
+
+ of_clk_init(NULL);
+
+ np = of_get_cpu_node(0, NULL);
+ if (!np) {
+ pr_err("Failed to get CPU node\n");
+ return;
+ }
+
+ clk = of_clk_get(np, 0);
+ if (IS_ERR(clk)) {
+ pr_err("Failed to get CPU clock: %ld\n", PTR_ERR(clk));
+ return;
+ }
+
+ cpu_clk_rate = clk_get_rate(clk);
+
+ pr_info("CPU clock: %lu.%03lu MHz\n",
+ cpu_clk_rate / 1000000, (cpu_clk_rate / 1000) % 1000);
+
+ mips_hpt_frequency = cpu_clk_rate / 2;
+
+ clk_put(clk);
+}
+
void __init plat_time_init(void)
{
unsigned long cpu_clk_rate;
@@ -230,6 +262,11 @@ void __init plat_time_init(void)
unsigned long ddr_clk_rate;
unsigned long ref_clk_rate;
+ if (IS_ENABLED(CONFIG_OF) && mips_machtype == ATH79_MACH_GENERIC_OF) {
+ ath79_of_plat_time_init();
+ return;
+ }
+
ath79_clocks_init();
cpu_clk_rate = ath79_get_sys_clk_rate("cpu");
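ath79_of_plat_time_init() above derives mips_hpt_frequency as half the CPU clock and prints the rate in MHz using integer arithmetic only. A quick stand-alone check of that formatting, using an arbitrary example rate:

/* Reproduce the MHz formatting and the /2 timer frequency from above. */
#include <stdio.h>

int main(void)
{
        unsigned long cpu_clk_rate = 400000000UL; /* example: 400 MHz */
        unsigned long hpt = cpu_clk_rate / 2;

        printf("CPU clock: %lu.%03lu MHz\n",
               cpu_clk_rate / 1000000, (cpu_clk_rate / 1000) % 1000);
        printf("mips_hpt_frequency: %lu Hz\n", hpt);
        return 0;
}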
diff --git a/arch/mips/bcm47xx/Makefile b/arch/mips/bcm47xx/Makefile
index 66bea4ecf449..6d8615074075 100644
--- a/arch/mips/bcm47xx/Makefile
+++ b/arch/mips/bcm47xx/Makefile
@@ -3,5 +3,5 @@
# under Linux.
#
-obj-y += irq.o prom.o serial.o setup.o time.o sprom.o
+obj-y += irq.o prom.o serial.o setup.o time.o
obj-y += board.o buttons.o leds.o workarounds.o
diff --git a/arch/mips/bcm47xx/bcm47xx_private.h b/arch/mips/bcm47xx/bcm47xx_private.h
index 41796befa9df..0367ac7286fe 100644
--- a/arch/mips/bcm47xx/bcm47xx_private.h
+++ b/arch/mips/bcm47xx/bcm47xx_private.h
@@ -10,9 +10,6 @@
/* prom.c */
void __init bcm47xx_prom_highmem_init(void);
-/* sprom.c */
-void bcm47xx_sprom_register_fallbacks(void);
-
/* buttons.c */
int __init bcm47xx_buttons_register(void);
diff --git a/arch/mips/bcm47xx/setup.c b/arch/mips/bcm47xx/setup.c
index c807e32d6d81..6054d49e608e 100644
--- a/arch/mips/bcm47xx/setup.c
+++ b/arch/mips/bcm47xx/setup.c
@@ -28,6 +28,7 @@
#include "bcm47xx_private.h"
+#include <linux/bcm47xx_sprom.h>
#include <linux/export.h>
#include <linux/types.h>
#include <linux/ethtool.h>
@@ -151,7 +152,6 @@ void __init plat_mem_setup(void)
pr_info("Using bcma bus\n");
#ifdef CONFIG_BCM47XX_BCMA
bcm47xx_bus_type = BCM47XX_BUS_TYPE_BCMA;
- bcm47xx_sprom_register_fallbacks();
bcm47xx_register_bcma();
bcm47xx_set_system_type(bcm47xx_bus.bcma.bus.chipinfo.id);
#ifdef CONFIG_HIGHMEM
diff --git a/arch/mips/bmips/Kconfig b/arch/mips/bmips/Kconfig
index e2c4fd682c74..264328d528c7 100644
--- a/arch/mips/bmips/Kconfig
+++ b/arch/mips/bmips/Kconfig
@@ -21,6 +21,10 @@ config DT_BCM93384WVG_VIPER
bool "BCM93384WVG Viper CPU (EXPERIMENTAL)"
select BUILTIN_DTB
+config DT_BCM96358NB4SER
+ bool "BCM96358NB4SER"
+ select BUILTIN_DTB
+
config DT_BCM96368MVWG
bool "BCM96368MVWG"
select BUILTIN_DTB
diff --git a/arch/mips/bmips/setup.c b/arch/mips/bmips/setup.c
index 35535284b39e..f146d1219bde 100644
--- a/arch/mips/bmips/setup.c
+++ b/arch/mips/bmips/setup.c
@@ -95,6 +95,15 @@ static void bcm6328_quirks(void)
bcm63xx_fixup_cpu1();
}
+static void bcm6358_quirks(void)
+{
+ /*
+ * BCM6358 needs special handling for its shared TLB, so
+ * disable SMP for now
+ */
+ bmips_smp_enabled = 0;
+}
+
static void bcm6368_quirks(void)
{
bcm63xx_fixup_cpu1();
@@ -104,13 +113,16 @@ static const struct bmips_quirk bmips_quirk_list[] = {
{ "brcm,bcm3384-viper", &bcm3384_viper_quirks },
{ "brcm,bcm33843-viper", &bcm3384_viper_quirks },
{ "brcm,bcm6328", &bcm6328_quirks },
+ { "brcm,bcm6358", &bcm6358_quirks },
{ "brcm,bcm6368", &bcm6368_quirks },
{ "brcm,bcm63168", &bcm6368_quirks },
+ { "brcm,bcm63268", &bcm6368_quirks },
{ },
};
void __init prom_init(void)
{
+ bmips_cpu_setup();
register_bmips_smp_ops();
}
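The bmips_quirk_list additions above extend a simple table of { compatible, callback } pairs whose matching entries run during early setup. A stand-alone sketch of that pattern, matching a fake root compatible with strcmp rather than the kernel's DT helpers:

/* Table-driven quirks keyed by devicetree compatible strings. */
#include <stdio.h>
#include <string.h>

struct quirk {
        const char *compatible;
        void (*fixup)(void);
};

static void bcm6358_quirk(void) { puts("disable SMP (shared TLB)"); }
static void bcm6368_quirk(void) { puts("fix up CPU1 boot"); }

static const struct quirk quirk_list[] = {
        { "brcm,bcm6358",  bcm6358_quirk },
        { "brcm,bcm6368",  bcm6368_quirk },
        { "brcm,bcm63268", bcm6368_quirk },
        { NULL, NULL },
};

int main(void)
{
        const char *machine = "brcm,bcm6358"; /* example root compatible */
        const struct quirk *q;

        for (q = quirk_list; q->compatible; q++)
                if (strcmp(q->compatible, machine) == 0)
                        q->fixup();
        return 0;
}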
diff --git a/arch/mips/boot/compressed/Makefile b/arch/mips/boot/compressed/Makefile
index 309d2ad67e4d..90aca95fe314 100644
--- a/arch/mips/boot/compressed/Makefile
+++ b/arch/mips/boot/compressed/Makefile
@@ -37,8 +37,13 @@ vmlinuzobjs-$(CONFIG_DEBUG_ZBOOT) += $(obj)/dbg.o
vmlinuzobjs-$(CONFIG_SYS_SUPPORTS_ZBOOT_UART16550) += $(obj)/uart-16550.o
vmlinuzobjs-$(CONFIG_SYS_SUPPORTS_ZBOOT_UART_PROM) += $(obj)/uart-prom.o
vmlinuzobjs-$(CONFIG_MIPS_ALCHEMY) += $(obj)/uart-alchemy.o
+vmlinuzobjs-$(CONFIG_ATH79) += $(obj)/uart-ath79.o
endif
+extra-y += uart-ath79.c
+$(obj)/uart-ath79.c: $(srctree)/arch/mips/ath79/early_printk.c
+ $(call cmd,shipped)
+
vmlinuzobjs-$(CONFIG_KERNEL_XZ) += $(obj)/ashldi3.o $(obj)/bswapsi.o
extra-y += ashldi3.c bswapsi.c
diff --git a/arch/mips/boot/dts/brcm/Makefile b/arch/mips/boot/dts/brcm/Makefile
index eabeb603e805..fda9d387cc08 100644
--- a/arch/mips/boot/dts/brcm/Makefile
+++ b/arch/mips/boot/dts/brcm/Makefile
@@ -1,5 +1,6 @@
dtb-$(CONFIG_DT_BCM93384WVG) += bcm93384wvg.dtb
dtb-$(CONFIG_DT_BCM93384WVG_VIPER) += bcm93384wvg_viper.dtb
+dtb-$(CONFIG_DT_BCM96358NB4SER) += bcm96358nb4ser.dtb
dtb-$(CONFIG_DT_BCM96368MVWG) += bcm96368mvwg.dtb
dtb-$(CONFIG_DT_BCM9EJTAGPRB) += bcm9ejtagprb.dtb
dtb-$(CONFIG_DT_BCM97125CBMB) += bcm97125cbmb.dtb
@@ -14,6 +15,7 @@ dtb-$(CONFIG_DT_BCM97435SVMB) += bcm97435svmb.dtb
dtb-$(CONFIG_DT_NONE) += \
bcm93384wvg.dtb \
bcm93384wvg_viper.dtb \
+ bcm96358nb4ser.dtb \
bcm96368mvwg.dtb \
bcm9ejtagprb.dtb \
bcm97125cbmb.dtb \
diff --git a/arch/mips/boot/dts/brcm/bcm6328.dtsi b/arch/mips/boot/dts/brcm/bcm6328.dtsi
index 9d19236f53e7..5633b9d90f55 100644
--- a/arch/mips/boot/dts/brcm/bcm6328.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm6328.dtsi
@@ -23,7 +23,7 @@
};
clocks {
- periph_clk: periph_clk {
+ periph_clk: periph-clk {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <50000000>;
@@ -31,11 +31,11 @@
};
aliases {
- leds0 = &leds0;
- uart0 = &uart0;
+ serial0 = &uart0;
+ serial1 = &uart1;
};
- cpu_intc: cpu_intc {
+ cpu_intc: interrupt-controller {
#address-cells = <0>;
compatible = "mti,cpu-interrupt-controller";
@@ -50,16 +50,16 @@
compatible = "simple-bus";
ranges;
- periph_intc: periph_intc@10000020 {
- compatible = "brcm,bcm3380-l2-intc";
- reg = <0x10000024 0x4 0x1000002c 0x4>,
- <0x10000020 0x4 0x10000028 0x4>;
+ periph_intc: interrupt-controller@10000020 {
+ compatible = "brcm,bcm6345-l1-intc";
+ reg = <0x10000020 0x10>,
+ <0x10000030 0x10>;
interrupt-controller;
#interrupt-cells = <1>;
interrupt-parent = <&cpu_intc>;
- interrupts = <2>;
+ interrupts = <2>, <3>;
};
uart0: serial@10000100 {
@@ -71,13 +71,22 @@
status = "disabled";
};
- timer: timer@10000040 {
+ uart1: serial@10000120 {
+ compatible = "brcm,bcm6345-uart";
+ reg = <0x10000120 0x18>;
+ interrupt-parent = <&periph_intc>;
+ interrupts = <39>;
+ clocks = <&periph_clk>;
+ status = "disabled";
+ };
+
+ timer: syscon@10000040 {
compatible = "syscon";
reg = <0x10000040 0x2c>;
native-endian;
};
- reboot {
+ reboot: syscon-reboot@10000068 {
compatible = "syscon-reboot";
regmap = <&timer>;
offset = <0x28>;
@@ -91,5 +100,24 @@
reg = <0x10000800 0x24>;
status = "disabled";
};
+
+ ehci: usb@10002500 {
+ compatible = "brcm,bcm6328-ehci", "generic-ehci";
+ reg = <0x10002500 0x100>;
+ big-endian;
+ interrupt-parent = <&periph_intc>;
+ interrupts = <42>;
+ status = "disabled";
+ };
+
+ ohci: usb@10002600 {
+ compatible = "brcm,bcm6328-ohci", "generic-ohci";
+ reg = <0x10002600 0x100>;
+ big-endian;
+ no-big-frame-no;
+ interrupt-parent = <&periph_intc>;
+ interrupts = <41>;
+ status = "disabled";
+ };
};
};
diff --git a/arch/mips/boot/dts/brcm/bcm6358.dtsi b/arch/mips/boot/dts/brcm/bcm6358.dtsi
new file mode 100644
index 000000000000..f9d8d392162b
--- /dev/null
+++ b/arch/mips/boot/dts/brcm/bcm6358.dtsi
@@ -0,0 +1,130 @@
+/ {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "brcm,bcm6358";
+
+ cpus {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ mips-hpt-frequency = <150000000>;
+
+ cpu@0 {
+ compatible = "brcm,bmips4350";
+ device_type = "cpu";
+ reg = <0>;
+ };
+
+ cpu@1 {
+ compatible = "brcm,bmips4350";
+ device_type = "cpu";
+ reg = <1>;
+ };
+ };
+
+ clocks {
+ periph_clk: periph-clk {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ clock-frequency = <50000000>;
+ };
+ };
+
+ aliases {
+ serial0 = &uart0;
+ serial1 = &uart1;
+ };
+
+ cpu_intc: interrupt-controller {
+ #address-cells = <0>;
+ compatible = "mti,cpu-interrupt-controller";
+
+ interrupt-controller;
+ #interrupt-cells = <1>;
+ };
+
+ ubus {
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+ compatible = "simple-bus";
+ ranges;
+
+ periph_cntl: syscon@fffe0000 {
+ compatible = "syscon";
+ reg = <0xfffe0000 0xc>;
+ native-endian;
+ };
+
+ reboot: syscon-reboot@fffe0008 {
+ compatible = "syscon-reboot";
+ regmap = <&periph_cntl>;
+ offset = <0x8>;
+ mask = <0x1>;
+ };
+
+ periph_intc: interrupt-controller@fffe000c {
+ compatible = "brcm,bcm6345-l1-intc";
+ reg = <0xfffe000c 0x8>,
+ <0xfffe0038 0x8>;
+
+ interrupt-controller;
+ #interrupt-cells = <1>;
+
+ interrupt-parent = <&cpu_intc>;
+ interrupts = <2>, <3>;
+ };
+
+ leds0: led-controller@fffe00d0 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ compatible = "brcm,bcm6358-leds";
+ reg = <0xfffe00d0 0x8>;
+
+ status = "disabled";
+ };
+
+ uart0: serial@fffe0100 {
+ compatible = "brcm,bcm6345-uart";
+ reg = <0xfffe0100 0x18>;
+
+ interrupt-parent = <&periph_intc>;
+ interrupts = <2>;
+
+ clocks = <&periph_clk>;
+
+ status = "disabled";
+ };
+
+ uart1: serial@fffe0120 {
+ compatible = "brcm,bcm6345-uart";
+ reg = <0xfffe0120 0x18>;
+
+ interrupt-parent = <&periph_intc>;
+ interrupts = <3>;
+
+ clocks = <&periph_clk>;
+
+ status = "disabled";
+ };
+
+ ehci: usb@fffe1300 {
+ compatible = "brcm,bcm6358-ehci", "generic-ehci";
+ reg = <0xfffe1300 0x100>;
+ big-endian;
+ interrupt-parent = <&periph_intc>;
+ interrupts = <10>;
+ status = "disabled";
+ };
+
+ ohci: usb@fffe1400 {
+ compatible = "brcm,bcm6358-ohci", "generic-ohci";
+ reg = <0xfffe1400 0x100>;
+ big-endian;
+ no-big-frame-no;
+ interrupt-parent = <&periph_intc>;
+ interrupts = <5>;
+ status = "disabled";
+ };
+ };
+};
diff --git a/arch/mips/boot/dts/brcm/bcm6368.dtsi b/arch/mips/boot/dts/brcm/bcm6368.dtsi
index 1f6b9b5cddb4..d0e3a70b32e2 100644
--- a/arch/mips/boot/dts/brcm/bcm6368.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm6368.dtsi
@@ -20,11 +20,10 @@
device_type = "cpu";
reg = <1>;
};
-
};
clocks {
- periph_clk: periph_clk {
+ periph_clk: periph-clk {
compatible = "fixed-clock";
#clock-cells = <0>;
clock-frequency = <50000000>;
@@ -32,11 +31,11 @@
};
aliases {
- leds0 = &leds0;
- uart0 = &uart0;
+ serial0 = &uart0;
+ serial1 = &uart1;
};
- cpu_intc: cpu_intc {
+ cpu_intc: interrupt-controller {
#address-cells = <0>;
compatible = "mti,cpu-interrupt-controller";
@@ -64,16 +63,16 @@
mask = <0x1>;
};
- periph_intc: periph_intc@10000020 {
- compatible = "brcm,bcm3380-l2-intc";
- reg = <0x10000024 0x4 0x1000002c 0x4>,
- <0x10000020 0x4 0x10000028 0x4>;
+ periph_intc: interrupt-controller@10000020 {
+ compatible = "brcm,bcm6345-l1-intc";
+ reg = <0x10000020 0x10>,
+ <0x10000030 0x10>;
interrupt-controller;
#interrupt-cells = <1>;
interrupt-parent = <&cpu_intc>;
- interrupts = <2>;
+ interrupts = <2>, <3>;
};
leds0: led-controller@100000d0 {
@@ -93,7 +92,16 @@
status = "disabled";
};
- ehci0: usb@10001500 {
+ uart1: serial@10000120 {
+ compatible = "brcm,bcm6345-uart";
+ reg = <0x10000120 0x18>;
+ interrupt-parent = <&periph_intc>;
+ interrupts = <3>;
+ clocks = <&periph_clk>;
+ status = "disabled";
+ };
+
+ ehci: usb@10001500 {
compatible = "brcm,bcm6368-ehci", "generic-ehci";
reg = <0x10001500 0x100>;
big-endian;
@@ -102,7 +110,7 @@
status = "disabled";
};
- ohci0: usb@10001600 {
+ ohci: usb@10001600 {
compatible = "brcm,bcm6368-ohci", "generic-ohci";
reg = <0x10001600 0x100>;
big-endian;
diff --git a/arch/mips/boot/dts/brcm/bcm7125.dtsi b/arch/mips/boot/dts/brcm/bcm7125.dtsi
index 3ae16053a0c9..550e1d9e3ee0 100644
--- a/arch/mips/boot/dts/brcm/bcm7125.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm7125.dtsi
@@ -85,14 +85,15 @@
compatible = "brcm,bcm7120-l2-intc";
reg = <0x406780 0x8>;
- brcm,int-map-mask = <0x44>;
+ brcm,int-map-mask = <0x44>, <0xf000000>;
brcm,int-fwd-mask = <0x70000>;
interrupt-controller;
#interrupt-cells = <1>;
interrupt-parent = <&periph_intc>;
- interrupts = <18>;
+ interrupts = <18>, <19>;
+ interrupt-names = "upg_main", "upg_bsc";
};
sun_top_ctrl: syscon@404000 {
@@ -118,6 +119,70 @@
status = "disabled";
};
+ uart1: serial@406b40 {
+ compatible = "ns16550a";
+ reg = <0x406b40 0x20>;
+ reg-io-width = <0x4>;
+ reg-shift = <0x2>;
+ native-endian;
+ interrupt-parent = <&periph_intc>;
+ interrupts = <64>;
+ clocks = <&uart_clk>;
+ status = "disabled";
+ };
+
+ uart2: serial@406b80 {
+ compatible = "ns16550a";
+ reg = <0x406b80 0x20>;
+ reg-io-width = <0x4>;
+ reg-shift = <0x2>;
+ native-endian;
+ interrupt-parent = <&periph_intc>;
+ interrupts = <65>;
+ clocks = <&uart_clk>;
+ status = "disabled";
+ };
+
+ bsca: i2c@406200 {
+ clock-frequency = <390000>;
+ compatible = "brcm,brcmstb-i2c";
+ interrupt-parent = <&upg_irq0_intc>;
+ reg = <0x406200 0x58>;
+ interrupts = <24>;
+ interrupt-names = "upg_bsca";
+ status = "disabled";
+ };
+
+ bscb: i2c@406280 {
+ clock-frequency = <390000>;
+ compatible = "brcm,brcmstb-i2c";
+ interrupt-parent = <&upg_irq0_intc>;
+ reg = <0x406280 0x58>;
+ interrupts = <25>;
+ interrupt-names = "upg_bscb";
+ status = "disabled";
+ };
+
+ bscc: i2c@406300 {
+ clock-frequency = <390000>;
+ compatible = "brcm,brcmstb-i2c";
+ interrupt-parent = <&upg_irq0_intc>;
+ reg = <0x406300 0x58>;
+ interrupts = <26>;
+ interrupt-names = "upg_bscc";
+ status = "disabled";
+ };
+
+ bscd: i2c@406380 {
+ clock-frequency = <390000>;
+ compatible = "brcm,brcmstb-i2c";
+ interrupt-parent = <&upg_irq0_intc>;
+ reg = <0x406380 0x58>;
+ interrupts = <27>;
+ interrupt-names = "upg_bscd";
+ status = "disabled";
+ };
+
ehci0: usb@488300 {
compatible = "brcm,bcm7125-ehci", "generic-ehci";
reg = <0x488300 0x100>;
diff --git a/arch/mips/boot/dts/brcm/bcm7346.dtsi b/arch/mips/boot/dts/brcm/bcm7346.dtsi
index be7991917d29..ec959061d52e 100644
--- a/arch/mips/boot/dts/brcm/bcm7346.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm7346.dtsi
@@ -24,8 +24,6 @@
aliases {
uart0 = &uart0;
- uart1 = &uart1;
- uart2 = &uart2;
};
cpu_intc: cpu_intc {
@@ -323,8 +321,6 @@
interrupts = <40>;
#address-cells = <1>;
#size-cells = <0>;
- brcm,broken-ncq;
- brcm,broken-phy;
status = "disabled";
sata0: sata-port@0 {
@@ -338,7 +334,7 @@
};
};
- sata_phy: sata-phy@1800000 {
+ sata_phy: sata-phy@180100 {
compatible = "brcm,bcm7425-sata-phy", "brcm,phy-sata3";
reg = <0x180100 0x0eff>;
reg-names = "phy";
diff --git a/arch/mips/boot/dts/brcm/bcm7358.dtsi b/arch/mips/boot/dts/brcm/bcm7358.dtsi
index 060805be619a..ca57fb5eb122 100644
--- a/arch/mips/boot/dts/brcm/bcm7358.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm7358.dtsi
@@ -18,8 +18,6 @@
aliases {
uart0 = &uart0;
- uart1 = &uart1;
- uart2 = &uart2;
};
cpu_intc: cpu_intc {
diff --git a/arch/mips/boot/dts/brcm/bcm7360.dtsi b/arch/mips/boot/dts/brcm/bcm7360.dtsi
index bcdb09bfe07b..1c0c3d438c7a 100644
--- a/arch/mips/boot/dts/brcm/bcm7360.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm7360.dtsi
@@ -18,8 +18,6 @@
aliases {
uart0 = &uart0;
- uart1 = &uart1;
- uart2 = &uart2;
};
cpu_intc: cpu_intc {
@@ -241,5 +239,45 @@
interrupts = <66>;
status = "disabled";
};
+
+ sata: sata@181000 {
+ compatible = "brcm,bcm7425-ahci", "brcm,sata3-ahci";
+ reg-names = "ahci", "top-ctrl";
+ reg = <0x181000 0xa9c>, <0x180020 0x1c>;
+ interrupt-parent = <&periph_intc>;
+ interrupts = <86>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "disabled";
+
+ sata0: sata-port@0 {
+ reg = <0>;
+ phys = <&sata_phy0>;
+ };
+
+ sata1: sata-port@1 {
+ reg = <1>;
+ phys = <&sata_phy1>;
+ };
+ };
+
+ sata_phy: sata-phy@180100 {
+ compatible = "brcm,bcm7425-sata-phy", "brcm,phy-sata3";
+ reg = <0x180100 0x0eff>;
+ reg-names = "phy";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "disabled";
+
+ sata_phy0: sata-phy@0 {
+ reg = <0>;
+ #phy-cells = <0>;
+ };
+
+ sata_phy1: sata-phy@1 {
+ reg = <1>;
+ #phy-cells = <0>;
+ };
+ };
};
};
diff --git a/arch/mips/boot/dts/brcm/bcm7362.dtsi b/arch/mips/boot/dts/brcm/bcm7362.dtsi
index d3b1b762e6c3..6b4713add4b8 100644
--- a/arch/mips/boot/dts/brcm/bcm7362.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm7362.dtsi
@@ -24,8 +24,6 @@
aliases {
uart0 = &uart0;
- uart1 = &uart1;
- uart2 = &uart2;
};
cpu_intc: cpu_intc {
@@ -246,8 +244,6 @@
interrupts = <86>;
#address-cells = <1>;
#size-cells = <0>;
- brcm,broken-ncq;
- brcm,broken-phy;
status = "disabled";
sata0: sata-port@0 {
@@ -261,7 +257,7 @@
};
};
- sata_phy: sata-phy@1800000 {
+ sata_phy: sata-phy@180100 {
compatible = "brcm,bcm7425-sata-phy", "brcm,phy-sata3";
reg = <0x180100 0x0eff>;
reg-names = "phy";
diff --git a/arch/mips/boot/dts/brcm/bcm7420.dtsi b/arch/mips/boot/dts/brcm/bcm7420.dtsi
index 3302a1b8a5c9..0586bf662571 100644
--- a/arch/mips/boot/dts/brcm/bcm7420.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm7420.dtsi
@@ -86,14 +86,15 @@
compatible = "brcm,bcm7120-l2-intc";
reg = <0x406780 0x8>;
- brcm,int-map-mask = <0x44>;
+ brcm,int-map-mask = <0x44>, <0x1f000000>;
brcm,int-fwd-mask = <0x70000>;
interrupt-controller;
#interrupt-cells = <1>;
interrupt-parent = <&periph_intc>;
- interrupts = <18>;
+ interrupts = <18>, <19>;
+ interrupt-names = "upg_main", "upg_bsc";
};
sun_top_ctrl: syscon@404000 {
@@ -118,6 +119,78 @@
status = "disabled";
};
+ uart1: serial@406b40 {
+ compatible = "ns16550a";
+ reg = <0x406b40 0x20>;
+ reg-io-width = <0x4>;
+ reg-shift = <0x2>;
+ interrupt-parent = <&periph_intc>;
+ interrupts = <64>;
+ clocks = <&uart_clk>;
+ status = "disabled";
+ };
+
+ uart2: serial@406b80 {
+ compatible = "ns16550a";
+ reg = <0x406b80 0x20>;
+ reg-io-width = <0x4>;
+ reg-shift = <0x2>;
+ interrupt-parent = <&periph_intc>;
+ interrupts = <65>;
+ clocks = <&uart_clk>;
+ status = "disabled";
+ };
+
+ bsca: i2c@406200 {
+ clock-frequency = <390000>;
+ compatible = "brcm,brcmstb-i2c";
+ interrupt-parent = <&upg_irq0_intc>;
+ reg = <0x406200 0x58>;
+ interrupts = <24>;
+ interrupt-names = "upg_bsca";
+ status = "disabled";
+ };
+
+ bscb: i2c@406280 {
+ clock-frequency = <390000>;
+ compatible = "brcm,brcmstb-i2c";
+ interrupt-parent = <&upg_irq0_intc>;
+ reg = <0x406280 0x58>;
+ interrupts = <25>;
+ interrupt-names = "upg_bscb";
+ status = "disabled";
+ };
+
+ bscc: i2c@406300 {
+ clock-frequency = <390000>;
+ compatible = "brcm,brcmstb-i2c";
+ interrupt-parent = <&upg_irq0_intc>;
+ reg = <0x406300 0x58>;
+ interrupts = <26>;
+ interrupt-names = "upg_bscc";
+ status = "disabled";
+ };
+
+ bscd: i2c@406380 {
+ clock-frequency = <390000>;
+ compatible = "brcm,brcmstb-i2c";
+ interrupt-parent = <&upg_irq0_intc>;
+ reg = <0x406380 0x58>;
+ interrupts = <27>;
+ interrupt-names = "upg_bscd";
+ status = "disabled";
+ };
+
+ bsce: i2c@406800 {
+ clock-frequency = <390000>;
+ compatible = "brcm,brcmstb-i2c";
+ interrupt-parent = <&upg_irq0_intc>;
+ reg = <0x406800 0x58>;
+ interrupts = <28>;
+ interrupt-names = "upg_bsce";
+ status = "disabled";
+ };
+
enet0: ethernet@468000 {
phy-mode = "internal";
phy-handle = <&phy1>;
diff --git a/arch/mips/boot/dts/brcm/bcm7425.dtsi b/arch/mips/boot/dts/brcm/bcm7425.dtsi
index 15b27aae15a9..c1c15edaf829 100644
--- a/arch/mips/boot/dts/brcm/bcm7425.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm7425.dtsi
@@ -87,14 +87,32 @@
compatible = "brcm,bcm7120-l2-intc";
reg = <0x406780 0x8>;
- brcm,int-map-mask = <0x44>;
+ brcm,int-map-mask = <0x44>, <0x7000000>;
brcm,int-fwd-mask = <0x70000>;
interrupt-controller;
#interrupt-cells = <1>;
interrupt-parent = <&periph_intc>;
- interrupts = <55>;
+ interrupts = <55>, <53>;
+ interrupt-names = "upg_main", "upg_bsc";
+ };
+
+ upg_aon_irq0_intc: upg_aon_irq0_intc@409480 {
+ compatible = "brcm,bcm7120-l2-intc";
+ reg = <0x409480 0x8>;
+
+ brcm,int-map-mask = <0x40>, <0x18000000>, <0x100000>;
+ brcm,int-fwd-mask = <0>;
+ brcm,irq-can-wake;
+
+ interrupt-controller;
+ #interrupt-cells = <1>;
+
+ interrupt-parent = <&periph_intc>;
+ interrupts = <56>, <54>, <59>;
+ interrupt-names = "upg_main_aon", "upg_bsc_aon",
+ "upg_spi";
};
sun_top_ctrl: syscon@404000 {
@@ -119,6 +137,78 @@
status = "disabled";
};
+ uart1: serial@406b40 {
+ compatible = "ns16550a";
+ reg = <0x406b40 0x20>;
+ reg-io-width = <0x4>;
+ reg-shift = <0x2>;
+ interrupt-parent = <&periph_intc>;
+ interrupts = <62>;
+ clocks = <&uart_clk>;
+ status = "disabled";
+ };
+
+ uart2: serial@406b80 {
+ compatible = "ns16550a";
+ reg = <0x406b80 0x20>;
+ reg-io-width = <0x4>;
+ reg-shift = <0x2>;
+ interrupt-parent = <&periph_intc>;
+ interrupts = <63>;
+ clocks = <&uart_clk>;
+ status = "disabled";
+ };
+
+ bsca: i2c@409180 {
+ clock-frequency = <390000>;
+ compatible = "brcm,brcmstb-i2c";
+ interrupt-parent = <&upg_aon_irq0_intc>;
+ reg = <0x409180 0x58>;
+ interrupts = <27>;
+ interrupt-names = "upg_bsca";
+ status = "disabled";
+ };
+
+ bscb: i2c@409400 {
+ clock-frequency = <390000>;
+ compatible = "brcm,brcmstb-i2c";
+ interrupt-parent = <&upg_aon_irq0_intc>;
+ reg = <0x409400 0x58>;
+ interrupts = <28>;
+ interrupt-names = "upg_bscb";
+ status = "disabled";
+ };
+
+ bscc: i2c@406200 {
+ clock-frequency = <390000>;
+ compatible = "brcm,brcmstb-i2c";
+ interrupt-parent = <&upg_irq0_intc>;
+ reg = <0x406200 0x58>;
+ interrupts = <24>;
+ interrupt-names = "upg_bscc";
+ status = "disabled";
+ };
+
+ bscd: i2c@406280 {
+ clock-frequency = <390000>;
+ compatible = "brcm,brcmstb-i2c";
+ interrupt-parent = <&upg_irq0_intc>;
+ reg = <0x406280 0x58>;
+ interrupts = <25>;
+ interrupt-names = "upg_bscd";
+ status = "disabled";
+ };
+
+ bsce: i2c@406300 {
+ clock-frequency = <390000>;
+ compatible = "brcm,brcmstb-i2c";
+ interrupt-parent = <&upg_irq0_intc>;
+ reg = <0x406300 0x58>;
+ interrupts = <26>;
+ interrupt-names = "upg_bsce";
+ status = "disabled";
+ };
+
enet0: ethernet@b80000 {
phy-mode = "internal";
phy-handle = <&phy1>;
@@ -227,11 +317,9 @@
reg-names = "ahci", "top-ctrl";
reg = <0x181000 0xa9c>, <0x180020 0x1c>;
interrupt-parent = <&periph_intc>;
- interrupts = <40>;
+ interrupts = <41>;
#address-cells = <1>;
#size-cells = <0>;
- brcm,broken-ncq;
- brcm,broken-phy;
status = "disabled";
sata0: sata-port@0 {
@@ -245,7 +333,7 @@
};
};
- sata_phy: sata-phy@1800000 {
+ sata_phy: sata-phy@180100 {
compatible = "brcm,bcm7425-sata-phy", "brcm,phy-sata3";
reg = <0x180100 0x0eff>;
reg-names = "phy";
diff --git a/arch/mips/boot/dts/brcm/bcm7435.dtsi b/arch/mips/boot/dts/brcm/bcm7435.dtsi
index 56035e5b7008..a874d3a0e2ee 100644
--- a/arch/mips/boot/dts/brcm/bcm7435.dtsi
+++ b/arch/mips/boot/dts/brcm/bcm7435.dtsi
@@ -7,7 +7,7 @@
#address-cells = <1>;
#size-cells = <0>;
- mips-hpt-frequency = <163125000>;
+ mips-hpt-frequency = <175625000>;
cpu@0 {
compatible = "brcm,bmips5200";
@@ -63,13 +63,14 @@
periph_intc: periph_intc@41b500 {
compatible = "brcm,bcm7038-l1-intc";
- reg = <0x41b500 0x40>, <0x41b600 0x40>;
+ reg = <0x41b500 0x40>, <0x41b600 0x40>,
+ <0x41b700 0x40>, <0x41b800 0x40>;
interrupt-controller;
#interrupt-cells = <1>;
interrupt-parent = <&cpu_intc>;
- interrupts = <2>, <3>;
+ interrupts = <2>, <3>, <2>, <3>;
};
sun_l2_intc: sun_l2_intc@403000 {
@@ -101,14 +102,32 @@
compatible = "brcm,bcm7120-l2-intc";
reg = <0x406780 0x8>;
- brcm,int-map-mask = <0x44>;
+ brcm,int-map-mask = <0x44>, <0x7000000>;
brcm,int-fwd-mask = <0x70000>;
interrupt-controller;
#interrupt-cells = <1>;
interrupt-parent = <&periph_intc>;
- interrupts = <60>;
+ interrupts = <60>, <58>;
+ interrupt-names = "upg_main", "upg_bsc";
+ };
+
+ upg_aon_irq0_intc: upg_aon_irq0_intc@409480 {
+ compatible = "brcm,bcm7120-l2-intc";
+ reg = <0x409480 0x8>;
+
+ brcm,int-map-mask = <0x40>, <0x18000000>, <0x100000>;
+ brcm,int-fwd-mask = <0>;
+ brcm,irq-can-wake;
+
+ interrupt-controller;
+ #interrupt-cells = <1>;
+
+ interrupt-parent = <&periph_intc>;
+ interrupts = <61>, <59>, <64>;
+ interrupt-names = "upg_main_aon", "upg_bsc_aon",
+ "upg_spi";
};
sun_top_ctrl: syscon@404000 {
@@ -133,6 +152,78 @@
status = "disabled";
};
+ uart1: serial@406b40 {
+ compatible = "ns16550a";
+ reg = <0x406b40 0x20>;
+ reg-io-width = <0x4>;
+ reg-shift = <0x2>;
+ interrupt-parent = <&periph_intc>;
+ interrupts = <67>;
+ clocks = <&uart_clk>;
+ status = "disabled";
+ };
+
+ uart2: serial@406b80 {
+ compatible = "ns16550a";
+ reg = <0x406b80 0x20>;
+ reg-io-width = <0x4>;
+ reg-shift = <0x2>;
+ interrupt-parent = <&periph_intc>;
+ interrupts = <68>;
+ clocks = <&uart_clk>;
+ status = "disabled";
+ };
+
+ bsca: i2c@406300 {
+ clock-frequency = <390000>;
+ compatible = "brcm,brcmstb-i2c";
+ interrupt-parent = <&upg_irq0_intc>;
+ reg = <0x406300 0x58>;
+ interrupts = <26>;
+ interrupt-names = "upg_bsca";
+ status = "disabled";
+ };
+
+ bscb: i2c@409400 {
+ clock-frequency = <390000>;
+ compatible = "brcm,brcmstb-i2c";
+ interrupt-parent = <&upg_aon_irq0_intc>;
+ reg = <0x409400 0x58>;
+ interrupts = <28>;
+ interrupt-names = "upg_bscb";
+ status = "disabled";
+ };
+
+ bscc: i2c@406200 {
+ clock-frequency = <390000>;
+ compatible = "brcm,brcmstb-i2c";
+ interrupt-parent = <&upg_irq0_intc>;
+ reg = <0x406200 0x58>;
+ interrupts = <24>;
+ interrupt-names = "upg_bscc";
+ status = "disabled";
+ };
+
+ bscd: i2c@406280 {
+ clock-frequency = <390000>;
+ compatible = "brcm,brcmstb-i2c";
+ interrupt-parent = <&upg_irq0_intc>;
+ reg = <0x406280 0x58>;
+ interrupts = <25>;
+ interrupt-names = "upg_bscd";
+ status = "disabled";
+ };
+
+ bsce: i2c@409180 {
+ clock-frequency = <390000>;
+ compatible = "brcm,brcmstb-i2c";
+ interrupt-parent = <&upg_aon_irq0_intc>;
+ reg = <0x409180 0x58>;
+ interrupts = <27>;
+ interrupt-names = "upg_bsce";
+ status = "disabled";
+ };
+
enet0: ethernet@b80000 {
phy-mode = "internal";
phy-handle = <&phy1>;
@@ -235,5 +326,45 @@
interrupts = <78>;
status = "disabled";
};
+
+ sata: sata@181000 {
+ compatible = "brcm,bcm7425-ahci", "brcm,sata3-ahci";
+ reg-names = "ahci", "top-ctrl";
+ reg = <0x181000 0xa9c>, <0x180020 0x1c>;
+ interrupt-parent = <&periph_intc>;
+ interrupts = <45>;
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "disabled";
+
+ sata0: sata-port@0 {
+ reg = <0>;
+ phys = <&sata_phy0>;
+ };
+
+ sata1: sata-port@1 {
+ reg = <1>;
+ phys = <&sata_phy1>;
+ };
+ };
+
+ sata_phy: sata-phy@180100 {
+ compatible = "brcm,bcm7425-sata-phy", "brcm,phy-sata3";
+ reg = <0x180100 0x0eff>;
+ reg-names = "phy";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ status = "disabled";
+
+ sata_phy0: sata-phy@0 {
+ reg = <0>;
+ #phy-cells = <0>;
+ };
+
+ sata_phy1: sata-phy@1 {
+ reg = <1>;
+ #phy-cells = <0>;
+ };
+ };
};
};
diff --git a/arch/mips/boot/dts/brcm/bcm96358nb4ser.dts b/arch/mips/boot/dts/brcm/bcm96358nb4ser.dts
new file mode 100644
index 000000000000..f412117972e6
--- /dev/null
+++ b/arch/mips/boot/dts/brcm/bcm96358nb4ser.dts
@@ -0,0 +1,46 @@
+/dts-v1/;
+
+/include/ "bcm6358.dtsi"
+
+/ {
+ compatible = "sfr,nb4-ser", "brcm,bcm6358";
+ model = "SFR Neufbox 4 (Sercomm)";
+
+ memory@0 {
+ device_type = "memory";
+ reg = <0x00000000 0x02000000>;
+ };
+
+ chosen {
+ stdout-path = &uart0;
+ };
+};
+
+&leds0 {
+ status = "ok";
+
+ led@0 {
+ reg = <0>;
+ active-low;
+ label = "nb4-ser:white:alarm";
+ };
+ led@2 {
+ reg = <2>;
+ active-low;
+ label = "nb4-ser:white:tv";
+ };
+ led@3 {
+ reg = <3>;
+ active-low;
+ label = "nb4-ser:white:tel";
+ };
+ led@4 {
+ reg = <4>;
+ active-low;
+ label = "nb4-ser:white:adsl";
+ };
+};
+
+&uart0 {
+ status = "okay";
+};
diff --git a/arch/mips/boot/dts/brcm/bcm96368mvwg.dts b/arch/mips/boot/dts/brcm/bcm96368mvwg.dts
index 0e890c28fe5c..8c71c6845730 100644
--- a/arch/mips/boot/dts/brcm/bcm96368mvwg.dts
+++ b/arch/mips/boot/dts/brcm/bcm96368mvwg.dts
@@ -22,10 +22,10 @@
};
/* FIXME: need to set up USB_CTRL registers first */
-&ehci0 {
+&ehci {
status = "disabled";
};
-&ohci0 {
+&ohci {
status = "disabled";
};
diff --git a/arch/mips/boot/dts/brcm/bcm97125cbmb.dts b/arch/mips/boot/dts/brcm/bcm97125cbmb.dts
index e046b1109eab..f2449d147c6d 100644
--- a/arch/mips/boot/dts/brcm/bcm97125cbmb.dts
+++ b/arch/mips/boot/dts/brcm/bcm97125cbmb.dts
@@ -21,6 +21,30 @@
status = "okay";
};
+&uart1 {
+ status = "okay";
+};
+
+&uart2 {
+ status = "okay";
+};
+
+&bsca {
+ status = "okay";
+};
+
+&bscb {
+ status = "okay";
+};
+
+&bscc {
+ status = "okay";
+};
+
+&bscd {
+ status = "okay";
+};
+
/* FIXME: USB is wonky; disable it for now */
&ehci0 {
status = "disabled";
diff --git a/arch/mips/boot/dts/brcm/bcm97360svmb.dts b/arch/mips/boot/dts/brcm/bcm97360svmb.dts
index d48462e091f1..73124be9548a 100644
--- a/arch/mips/boot/dts/brcm/bcm97360svmb.dts
+++ b/arch/mips/boot/dts/brcm/bcm97360svmb.dts
@@ -56,3 +56,11 @@
&ohci0 {
status = "okay";
};
+
+&sata {
+ status = "okay";
+};
+
+&sata_phy {
+ status = "okay";
+};
diff --git a/arch/mips/boot/dts/brcm/bcm97420c.dts b/arch/mips/boot/dts/brcm/bcm97420c.dts
index 67fe1f3a3891..600d57abee05 100644
--- a/arch/mips/boot/dts/brcm/bcm97420c.dts
+++ b/arch/mips/boot/dts/brcm/bcm97420c.dts
@@ -23,6 +23,34 @@
status = "okay";
};
+&uart1 {
+ status = "okay";
+};
+
+&uart2 {
+ status = "okay";
+};
+
+&bsca {
+ status = "okay";
+};
+
+&bscb {
+ status = "okay";
+};
+
+&bscc {
+ status = "okay";
+};
+
+&bscd {
+ status = "okay";
+};
+
+&bsce {
+ status = "okay";
+};
+
/* FIXME: MAC driver comes up but cannot attach to PHY */
&enet0 {
status = "disabled";
diff --git a/arch/mips/boot/dts/brcm/bcm97425svmb.dts b/arch/mips/boot/dts/brcm/bcm97425svmb.dts
index 689c68a4f9c8..119c714805cb 100644
--- a/arch/mips/boot/dts/brcm/bcm97425svmb.dts
+++ b/arch/mips/boot/dts/brcm/bcm97425svmb.dts
@@ -23,6 +23,34 @@
status = "okay";
};
+&uart1 {
+ status = "okay";
+};
+
+&uart2 {
+ status = "okay";
+};
+
+&bsca {
+ status = "okay";
+};
+
+&bscb {
+ status = "okay";
+};
+
+&bscc {
+ status = "okay";
+};
+
+&bscd {
+ status = "okay";
+};
+
+&bsce {
+ status = "okay";
+};
+
&enet0 {
status = "okay";
};
diff --git a/arch/mips/boot/dts/brcm/bcm97435svmb.dts b/arch/mips/boot/dts/brcm/bcm97435svmb.dts
index 1df088183523..43e3ba27f07b 100644
--- a/arch/mips/boot/dts/brcm/bcm97435svmb.dts
+++ b/arch/mips/boot/dts/brcm/bcm97435svmb.dts
@@ -14,7 +14,7 @@
};
chosen {
- bootargs = "console=ttyS0,115200 maxcpus=1";
+ bootargs = "console=ttyS0,115200";
stdout-path = &uart0;
};
};
@@ -23,6 +23,34 @@
status = "okay";
};
+&uart1 {
+ status = "okay";
+};
+
+&uart2 {
+ status = "okay";
+};
+
+&bsca {
+ status = "okay";
+};
+
+&bscb {
+ status = "okay";
+};
+
+&bscc {
+ status = "okay";
+};
+
+&bscd {
+ status = "okay";
+};
+
+&bsce {
+ status = "okay";
+};
+
&enet0 {
status = "okay";
};
@@ -58,3 +86,11 @@
&ohci3 {
status = "okay";
};
+
+&sata {
+ status = "okay";
+};
+
+&sata_phy {
+ status = "okay";
+};
diff --git a/arch/mips/boot/dts/cavium-octeon/dlink_dsr-1000n.dts b/arch/mips/boot/dts/cavium-octeon/dlink_dsr-1000n.dts
new file mode 100644
index 000000000000..d6bc994f736f
--- /dev/null
+++ b/arch/mips/boot/dts/cavium-octeon/dlink_dsr-1000n.dts
@@ -0,0 +1,78 @@
+/*
+ * Device tree source for D-Link DSR-1000N.
+ *
+ * Written by: Aaro Koskinen <aaro.koskinen@iki.fi>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+/include/ "octeon_3xxx.dtsi"
+
+/ {
+ model = "dlink,dsr-1000n";
+
+ soc@0 {
+ smi0: mdio@1180000001800 {
+ phy8: ethernet-phy@8 {
+ reg = <8>;
+ compatible = "ethernet-phy-ieee802.3-c22";
+ };
+ };
+
+ pip: pip@11800a0000000 {
+ interface@0 {
+ ethernet@0 {
+ fixed-link {
+ speed = <1000>;
+ full-duplex;
+ };
+ };
+ ethernet@1 {
+ fixed-link {
+ speed = <1000>;
+ full-duplex;
+ };
+ };
+ ethernet@2 {
+ phy-handle = <&phy8>;
+ };
+ };
+ };
+
+ twsi0: i2c@1180000001000 {
+ rtc@68 {
+ compatible = "dallas,ds1337";
+ reg = <0x68>;
+ };
+ };
+
+ uart0: serial@1180000000800 {
+ clock-frequency = <500000000>;
+ };
+
+ usbn: usbn@1180068000000 {
+ refclk-frequency = <12000000>;
+ refclk-type = "crystal";
+ };
+ };
+
+ leds {
+ compatible = "gpio-leds";
+
+ usb1 {
+ label = "usb1";
+ gpios = <&gpio 9 1>; /* Active low */
+ };
+
+ usb2 {
+ label = "usb2";
+ gpios = <&gpio 10 1>; /* Active low */
+ };
+ };
+
+ aliases {
+ pip = &pip;
+ };
+};
diff --git a/arch/mips/boot/dts/cavium-octeon/octeon_3xxx.dts b/arch/mips/boot/dts/cavium-octeon/octeon_3xxx.dts
index 9c48e0586ba7..de61f02d3ef6 100644
--- a/arch/mips/boot/dts/cavium-octeon/octeon_3xxx.dts
+++ b/arch/mips/boot/dts/cavium-octeon/octeon_3xxx.dts
@@ -1,4 +1,3 @@
-/dts-v1/;
/*
* OCTEON 3XXX, 5XXX, 63XX device tree skeleton.
*
@@ -6,56 +5,12 @@
* use. Because of this, it contains a super-set of the available
* devices and properties.
*/
-/ {
- compatible = "cavium,octeon-3860";
- #address-cells = <2>;
- #size-cells = <2>;
- interrupt-parent = <&ciu>;
-
- soc@0 {
- compatible = "simple-bus";
- #address-cells = <2>;
- #size-cells = <2>;
- ranges; /* Direct mapping */
-
- ciu: interrupt-controller@1070000000000 {
- compatible = "cavium,octeon-3860-ciu";
- interrupt-controller;
- /* Interrupts are specified by two parts:
- * 1) Controller register (0 or 1)
- * 2) Bit within the register (0..63)
- */
- #interrupt-cells = <2>;
- reg = <0x10700 0x00000000 0x0 0x7000>;
- };
- gpio: gpio-controller@1070000000800 {
- #gpio-cells = <2>;
- compatible = "cavium,octeon-3860-gpio";
- reg = <0x10700 0x00000800 0x0 0x100>;
- gpio-controller;
- /* Interrupts are specified by two parts:
- * 1) GPIO pin number (0..15)
- * 2) Triggering (1 - edge rising
- * 2 - edge falling
- * 4 - level active high
- * 8 - level active low)
- */
- interrupt-controller;
- #interrupt-cells = <2>;
- /* The GPIO pin connect to 16 consecutive CUI bits */
- interrupts = <0 16>, <0 17>, <0 18>, <0 19>,
- <0 20>, <0 21>, <0 22>, <0 23>,
- <0 24>, <0 25>, <0 26>, <0 27>,
- <0 28>, <0 29>, <0 30>, <0 31>;
- };
+/include/ "octeon_3xxx.dtsi"
+/ {
+ soc@0 {
smi0: mdio@1180000001800 {
- compatible = "cavium,octeon-3860-mdio";
- #address-cells = <1>;
- #size-cells = <0>;
- reg = <0x11800 0x00001800 0x0 0x40>;
-
phy0: ethernet-phy@0 {
compatible = "marvell,88e1118";
marvell,reg-init =
@@ -220,35 +175,16 @@
};
pip: pip@11800a0000000 {
- compatible = "cavium,octeon-3860-pip";
- #address-cells = <1>;
- #size-cells = <0>;
- reg = <0x11800 0xa0000000 0x0 0x2000>;
-
interface@0 {
- compatible = "cavium,octeon-3860-pip-interface";
- #address-cells = <1>;
- #size-cells = <0>;
- reg = <0>; /* interface */
-
ethernet@0 {
- compatible = "cavium,octeon-3860-pip-port";
- reg = <0x0>; /* Port */
- local-mac-address = [ 00 00 00 00 00 00 ];
phy-handle = <&phy2>;
cavium,alt-phy-handle = <&phy100>;
};
ethernet@1 {
- compatible = "cavium,octeon-3860-pip-port";
- reg = <0x1>; /* Port */
- local-mac-address = [ 00 00 00 00 00 00 ];
phy-handle = <&phy3>;
cavium,alt-phy-handle = <&phy101>;
};
ethernet@2 {
- compatible = "cavium,octeon-3860-pip-port";
- reg = <0x2>; /* Port */
- local-mac-address = [ 00 00 00 00 00 00 ];
phy-handle = <&phy4>;
cavium,alt-phy-handle = <&phy102>;
};
@@ -322,11 +258,6 @@
};
interface@1 {
- compatible = "cavium,octeon-3860-pip-interface";
- #address-cells = <1>;
- #size-cells = <0>;
- reg = <1>; /* interface */
-
ethernet@0 {
compatible = "cavium,octeon-3860-pip-port";
reg = <0x0>; /* Port */
@@ -355,13 +286,6 @@
};
twsi0: i2c@1180000001000 {
- #address-cells = <1>;
- #size-cells = <0>;
- compatible = "cavium,octeon-3860-twsi";
- reg = <0x11800 0x00001000 0x0 0x200>;
- interrupts = <0 45>;
- clock-frequency = <100000>;
-
rtc@68 {
compatible = "dallas,ds1337";
reg = <0x68>;
@@ -381,15 +305,6 @@
clock-frequency = <100000>;
};
- uart0: serial@1180000000800 {
- compatible = "cavium,octeon-3860-uart","ns16550";
- reg = <0x11800 0x00000800 0x0 0x400>;
- clock-frequency = <0>;
- current-speed = <115200>;
- reg-shift = <3>;
- interrupts = <0 34>;
- };
-
uart1: serial@1180000000c00 {
compatible = "cavium,octeon-3860-uart","ns16550";
reg = <0x11800 0x00000c00 0x0 0x400>;
@@ -409,98 +324,6 @@
};
bootbus: bootbus@1180000000000 {
- compatible = "cavium,octeon-3860-bootbus";
- reg = <0x11800 0x00000000 0x0 0x200>;
- /* The chip select number and offset */
- #address-cells = <2>;
- /* The size of the chip select region */
- #size-cells = <1>;
- ranges = <0 0 0x0 0x1f400000 0xc00000>,
- <1 0 0x10000 0x30000000 0>,
- <2 0 0x10000 0x40000000 0>,
- <3 0 0x10000 0x50000000 0>,
- <4 0 0x0 0x1d020000 0x10000>,
- <5 0 0x0 0x1d040000 0x10000>,
- <6 0 0x0 0x1d050000 0x10000>,
- <7 0 0x10000 0x90000000 0>;
-
- cavium,cs-config@0 {
- compatible = "cavium,octeon-3860-bootbus-config";
- cavium,cs-index = <0>;
- cavium,t-adr = <20>;
- cavium,t-ce = <60>;
- cavium,t-oe = <60>;
- cavium,t-we = <45>;
- cavium,t-rd-hld = <35>;
- cavium,t-wr-hld = <45>;
- cavium,t-pause = <0>;
- cavium,t-wait = <0>;
- cavium,t-page = <35>;
- cavium,t-rd-dly = <0>;
-
- cavium,pages = <0>;
- cavium,bus-width = <8>;
- };
- cavium,cs-config@4 {
- compatible = "cavium,octeon-3860-bootbus-config";
- cavium,cs-index = <4>;
- cavium,t-adr = <320>;
- cavium,t-ce = <320>;
- cavium,t-oe = <320>;
- cavium,t-we = <320>;
- cavium,t-rd-hld = <320>;
- cavium,t-wr-hld = <320>;
- cavium,t-pause = <320>;
- cavium,t-wait = <320>;
- cavium,t-page = <320>;
- cavium,t-rd-dly = <0>;
-
- cavium,pages = <0>;
- cavium,bus-width = <8>;
- };
- cavium,cs-config@5 {
- compatible = "cavium,octeon-3860-bootbus-config";
- cavium,cs-index = <5>;
- cavium,t-adr = <5>;
- cavium,t-ce = <300>;
- cavium,t-oe = <125>;
- cavium,t-we = <150>;
- cavium,t-rd-hld = <100>;
- cavium,t-wr-hld = <30>;
- cavium,t-pause = <0>;
- cavium,t-wait = <30>;
- cavium,t-page = <320>;
- cavium,t-rd-dly = <0>;
-
- cavium,pages = <0>;
- cavium,bus-width = <16>;
- };
- cavium,cs-config@6 {
- compatible = "cavium,octeon-3860-bootbus-config";
- cavium,cs-index = <6>;
- cavium,t-adr = <5>;
- cavium,t-ce = <300>;
- cavium,t-oe = <270>;
- cavium,t-we = <150>;
- cavium,t-rd-hld = <100>;
- cavium,t-wr-hld = <70>;
- cavium,t-pause = <0>;
- cavium,t-wait = <0>;
- cavium,t-page = <320>;
- cavium,t-rd-dly = <0>;
-
- cavium,pages = <0>;
- cavium,wait-mode;
- cavium,bus-width = <16>;
- };
-
- flash0: nor@0,0 {
- compatible = "cfi-flash";
- reg = <0 0 0x800000>;
- #address-cells = <1>;
- #size-cells = <1>;
- };
-
led0: led-display@4,0 {
compatible = "avago,hdsp-253x";
reg = <4 0x20 0x20>, <4 0 0x20>;
@@ -515,17 +338,6 @@
};
};
- dma0: dma-engine@1180000000100 {
- compatible = "cavium,octeon-5750-bootbus-dma";
- reg = <0x11800 0x00000100 0x0 0x8>;
- interrupts = <0 63>;
- };
- dma1: dma-engine@1180000000108 {
- compatible = "cavium,octeon-5750-bootbus-dma";
- reg = <0x11800 0x00000108 0x0 0x8>;
- interrupts = <0 63>;
- };
-
uctl: uctl@118006f000000 {
compatible = "cavium,octeon-6335-uctl";
reg = <0x11800 0x6f000000 0x0 0x100>;
@@ -552,21 +364,10 @@
};
usbn: usbn@1180068000000 {
- compatible = "cavium,octeon-5750-usbn";
- reg = <0x11800 0x68000000 0x0 0x1000>;
- ranges; /* Direct mapping */
- #address-cells = <2>;
- #size-cells = <2>;
/* 12MHz, 24MHz and 48MHz allowed */
refclk-frequency = <12000000>;
/* Either "crystal" or "external" */
refclk-type = "crystal";
-
- usbc@16f0010000000 {
- compatible = "cavium,octeon-5750-usbc";
- reg = <0x16f00 0x10000000 0x0 0x80000>;
- interrupts = <0 56>;
- };
};
};
diff --git a/arch/mips/boot/dts/cavium-octeon/octeon_3xxx.dtsi b/arch/mips/boot/dts/cavium-octeon/octeon_3xxx.dtsi
new file mode 100644
index 000000000000..5302148e05a3
--- /dev/null
+++ b/arch/mips/boot/dts/cavium-octeon/octeon_3xxx.dtsi
@@ -0,0 +1,231 @@
+/* OCTEON 3XXX DTS common parts. */
+
+/dts-v1/;
+
+/ {
+ compatible = "cavium,octeon-3860";
+ #address-cells = <2>;
+ #size-cells = <2>;
+ interrupt-parent = <&ciu>;
+
+ soc@0 {
+ compatible = "simple-bus";
+ #address-cells = <2>;
+ #size-cells = <2>;
+ ranges; /* Direct mapping */
+
+ ciu: interrupt-controller@1070000000000 {
+ compatible = "cavium,octeon-3860-ciu";
+ interrupt-controller;
+ /* Interrupts are specified by two parts:
+ * 1) Controller register (0 or 1)
+ * 2) Bit within the register (0..63)
+ */
+ #interrupt-cells = <2>;
+ reg = <0x10700 0x00000000 0x0 0x7000>;
+ };
+
+ gpio: gpio-controller@1070000000800 {
+ #gpio-cells = <2>;
+ compatible = "cavium,octeon-3860-gpio";
+ reg = <0x10700 0x00000800 0x0 0x100>;
+ gpio-controller;
+ /* Interrupts are specified by two parts:
+ * 1) GPIO pin number (0..15)
+ * 2) Triggering (1 - edge rising
+ * 2 - edge falling
+ * 4 - level active high
+ * 8 - level active low)
+ */
+ interrupt-controller;
+ #interrupt-cells = <2>;
+			/* The GPIO pins connect to 16 consecutive CIU bits */
+ interrupts = <0 16>, <0 17>, <0 18>, <0 19>,
+ <0 20>, <0 21>, <0 22>, <0 23>,
+ <0 24>, <0 25>, <0 26>, <0 27>,
+ <0 28>, <0 29>, <0 30>, <0 31>;
+ };
+
+ smi0: mdio@1180000001800 {
+ compatible = "cavium,octeon-3860-mdio";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0x11800 0x00001800 0x0 0x40>;
+ };
+
+ pip: pip@11800a0000000 {
+ compatible = "cavium,octeon-3860-pip";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0x11800 0xa0000000 0x0 0x2000>;
+
+ interface@0 {
+ compatible = "cavium,octeon-3860-pip-interface";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <0>; /* interface */
+
+ ethernet@0 {
+ compatible = "cavium,octeon-3860-pip-port";
+ reg = <0x0>; /* Port */
+ local-mac-address = [ 00 00 00 00 00 00 ];
+ };
+ ethernet@1 {
+ compatible = "cavium,octeon-3860-pip-port";
+ reg = <0x1>; /* Port */
+ local-mac-address = [ 00 00 00 00 00 00 ];
+ };
+ ethernet@2 {
+ compatible = "cavium,octeon-3860-pip-port";
+ reg = <0x2>; /* Port */
+ local-mac-address = [ 00 00 00 00 00 00 ];
+ };
+ };
+
+ interface@1 {
+ compatible = "cavium,octeon-3860-pip-interface";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ reg = <1>; /* interface */
+ };
+ };
+
+ twsi0: i2c@1180000001000 {
+ #address-cells = <1>;
+ #size-cells = <0>;
+ compatible = "cavium,octeon-3860-twsi";
+ reg = <0x11800 0x00001000 0x0 0x200>;
+ interrupts = <0 45>;
+ clock-frequency = <100000>;
+ };
+
+ uart0: serial@1180000000800 {
+ compatible = "cavium,octeon-3860-uart","ns16550";
+ reg = <0x11800 0x00000800 0x0 0x400>;
+ clock-frequency = <0>;
+ current-speed = <115200>;
+ reg-shift = <3>;
+ interrupts = <0 34>;
+ };
+
+ bootbus: bootbus@1180000000000 {
+ compatible = "cavium,octeon-3860-bootbus";
+ reg = <0x11800 0x00000000 0x0 0x200>;
+ /* The chip select number and offset */
+ #address-cells = <2>;
+ /* The size of the chip select region */
+ #size-cells = <1>;
+ ranges = <0 0 0x0 0x1f400000 0xc00000>,
+ <1 0 0x10000 0x30000000 0>,
+ <2 0 0x10000 0x40000000 0>,
+ <3 0 0x10000 0x50000000 0>,
+ <4 0 0x0 0x1d020000 0x10000>,
+ <5 0 0x0 0x1d040000 0x10000>,
+ <6 0 0x0 0x1d050000 0x10000>,
+ <7 0 0x10000 0x90000000 0>;
+
+ cavium,cs-config@0 {
+ compatible = "cavium,octeon-3860-bootbus-config";
+ cavium,cs-index = <0>;
+ cavium,t-adr = <20>;
+ cavium,t-ce = <60>;
+ cavium,t-oe = <60>;
+ cavium,t-we = <45>;
+ cavium,t-rd-hld = <35>;
+ cavium,t-wr-hld = <45>;
+ cavium,t-pause = <0>;
+ cavium,t-wait = <0>;
+ cavium,t-page = <35>;
+ cavium,t-rd-dly = <0>;
+
+ cavium,pages = <0>;
+ cavium,bus-width = <8>;
+ };
+ cavium,cs-config@4 {
+ compatible = "cavium,octeon-3860-bootbus-config";
+ cavium,cs-index = <4>;
+ cavium,t-adr = <320>;
+ cavium,t-ce = <320>;
+ cavium,t-oe = <320>;
+ cavium,t-we = <320>;
+ cavium,t-rd-hld = <320>;
+ cavium,t-wr-hld = <320>;
+ cavium,t-pause = <320>;
+ cavium,t-wait = <320>;
+ cavium,t-page = <320>;
+ cavium,t-rd-dly = <0>;
+
+ cavium,pages = <0>;
+ cavium,bus-width = <8>;
+ };
+ cavium,cs-config@5 {
+ compatible = "cavium,octeon-3860-bootbus-config";
+ cavium,cs-index = <5>;
+ cavium,t-adr = <5>;
+ cavium,t-ce = <300>;
+ cavium,t-oe = <125>;
+ cavium,t-we = <150>;
+ cavium,t-rd-hld = <100>;
+ cavium,t-wr-hld = <30>;
+ cavium,t-pause = <0>;
+ cavium,t-wait = <30>;
+ cavium,t-page = <320>;
+ cavium,t-rd-dly = <0>;
+
+ cavium,pages = <0>;
+ cavium,bus-width = <16>;
+ };
+ cavium,cs-config@6 {
+ compatible = "cavium,octeon-3860-bootbus-config";
+ cavium,cs-index = <6>;
+ cavium,t-adr = <5>;
+ cavium,t-ce = <300>;
+ cavium,t-oe = <270>;
+ cavium,t-we = <150>;
+ cavium,t-rd-hld = <100>;
+ cavium,t-wr-hld = <70>;
+ cavium,t-pause = <0>;
+ cavium,t-wait = <0>;
+ cavium,t-page = <320>;
+ cavium,t-rd-dly = <0>;
+
+ cavium,pages = <0>;
+ cavium,wait-mode;
+ cavium,bus-width = <16>;
+ };
+
+ flash0: nor@0,0 {
+ compatible = "cfi-flash";
+ reg = <0 0 0x800000>;
+ #address-cells = <1>;
+ #size-cells = <1>;
+ };
+ };
+
+ dma0: dma-engine@1180000000100 {
+ compatible = "cavium,octeon-5750-bootbus-dma";
+ reg = <0x11800 0x00000100 0x0 0x8>;
+ interrupts = <0 63>;
+ };
+
+ dma1: dma-engine@1180000000108 {
+ compatible = "cavium,octeon-5750-bootbus-dma";
+ reg = <0x11800 0x00000108 0x0 0x8>;
+ interrupts = <0 63>;
+ };
+
+ usbn: usbn@1180068000000 {
+ compatible = "cavium,octeon-5750-usbn";
+ reg = <0x11800 0x68000000 0x0 0x1000>;
+ ranges; /* Direct mapping */
+ #address-cells = <2>;
+ #size-cells = <2>;
+
+ usbc@16f0010000000 {
+ compatible = "cavium,octeon-5750-usbc";
+ reg = <0x16f00 0x10000000 0x0 0x80000>;
+ interrupts = <0 56>;
+ };
+ };
+ };
+};
diff --git a/arch/mips/boot/dts/cavium-octeon/ubnt_e100.dts b/arch/mips/boot/dts/cavium-octeon/ubnt_e100.dts
new file mode 100644
index 000000000000..243e5dc444fb
--- /dev/null
+++ b/arch/mips/boot/dts/cavium-octeon/ubnt_e100.dts
@@ -0,0 +1,59 @@
+/*
+ * Device tree source for EdgeRouter Lite.
+ *
+ * Written by: Aaro Koskinen <aaro.koskinen@iki.fi>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+/include/ "octeon_3xxx.dtsi"
+
+/ {
+ model = "ubnt,e100";
+
+ soc@0 {
+ smi0: mdio@1180000001800 {
+ phy5: ethernet-phy@5 {
+ reg = <5>;
+ compatible = "ethernet-phy-ieee802.3-c22";
+ };
+ phy6: ethernet-phy@6 {
+ reg = <6>;
+ compatible = "ethernet-phy-ieee802.3-c22";
+ };
+ phy7: ethernet-phy@7 {
+ reg = <7>;
+ compatible = "ethernet-phy-ieee802.3-c22";
+ };
+ };
+
+ pip: pip@11800a0000000 {
+ interface@0 {
+ ethernet@0 {
+ phy-handle = <&phy7>;
+ };
+ ethernet@1 {
+ phy-handle = <&phy6>;
+ };
+ ethernet@2 {
+ phy-handle = <&phy5>;
+ };
+ };
+ };
+
+ uart0: serial@1180000000800 {
+ clock-frequency = <500000000>;
+ };
+
+ usbn: usbn@1180068000000 {
+ refclk-frequency = <12000000>;
+ refclk-type = "crystal";
+ };
+ };
+
+ aliases {
+ pip = &pip;
+ };
+};
diff --git a/arch/mips/boot/dts/ingenic/jz4740.dtsi b/arch/mips/boot/dts/ingenic/jz4740.dtsi
index 8b2437cd019f..f6ae6ed9c4b1 100644
--- a/arch/mips/boot/dts/ingenic/jz4740.dtsi
+++ b/arch/mips/boot/dts/ingenic/jz4740.dtsi
@@ -5,7 +5,7 @@
#size-cells = <1>;
compatible = "ingenic,jz4740";
- cpuintc: interrupt-controller@0 {
+ cpuintc: interrupt-controller {
#address-cells = <0>;
#interrupt-cells = <1>;
interrupt-controller;
@@ -65,4 +65,18 @@
clocks = <&ext>, <&cgu JZ4740_CLK_UART1>;
clock-names = "baud", "module";
};
+
+ uhc: uhc@13030000 {
+ compatible = "ingenic,jz4740-ohci", "generic-ohci";
+ reg = <0x13030000 0x1000>;
+
+ clocks = <&cgu JZ4740_CLK_UHC>;
+ assigned-clocks = <&cgu JZ4740_CLK_UHC>;
+ assigned-clock-rates = <48000000>;
+
+ interrupt-parent = <&intc>;
+ interrupts = <3>;
+
+ status = "disabled";
+ };
};
diff --git a/arch/mips/boot/dts/lantiq/easy50712.dts b/arch/mips/boot/dts/lantiq/easy50712.dts
index 143b8a37b5e4..b59962585dde 100644
--- a/arch/mips/boot/dts/lantiq/easy50712.dts
+++ b/arch/mips/boot/dts/lantiq/easy50712.dts
@@ -52,7 +52,7 @@
};
gpio: pinmux@E100B10 {
- compatible = "lantiq,pinctrl-xway";
+ compatible = "lantiq,danube-pinctrl";
pinctrl-names = "default";
pinctrl-0 = <&state_default>;
diff --git a/arch/mips/boot/dts/pic32/pic32mzda-clk.dtsi b/arch/mips/boot/dts/pic32/pic32mzda-clk.dtsi
deleted file mode 100644
index ef1335012f43..000000000000
--- a/arch/mips/boot/dts/pic32/pic32mzda-clk.dtsi
+++ /dev/null
@@ -1,236 +0,0 @@
-/*
- * Device Tree Source for PIC32MZDA clock data
- *
- * Purna Chandra Mandal <purna.mandal@microchip.com>
- * Copyright (C) 2015 Microchip Technology Inc. All rights reserved.
- *
- * Licensed under GPLv2 or later.
- */
-
-/* all fixed rate clocks */
-
-/ {
- POSC:posc_clk { /* On-chip primary oscillator */
- #clock-cells = <0>;
- compatible = "fixed-clock";
- clock-frequency = <24000000>;
- };
-
- FRC:frc_clk { /* internal FRC oscillator */
- #clock-cells = <0>;
- compatible = "fixed-clock";
- clock-frequency = <8000000>;
- };
-
- BFRC:bfrc_clk { /* internal backup FRC oscillator */
- #clock-cells = <0>;
- compatible = "fixed-clock";
- clock-frequency = <8000000>;
- };
-
- LPRC:lprc_clk { /* internal low-power FRC oscillator */
- #clock-cells = <0>;
- compatible = "fixed-clock";
- clock-frequency = <32000>;
- };
-
- /* UPLL provides clock to USBCORE */
- UPLL:usb_phy_clk {
- #clock-cells = <0>;
- compatible = "fixed-clock";
- clock-frequency = <24000000>;
- clock-output-names = "usbphy_clk";
- };
-
- TxCKI:txcki_clk { /* external clock input on TxCLKI pin */
- #clock-cells = <0>;
- compatible = "fixed-clock";
- clock-frequency = <4000000>;
- status = "disabled";
- };
-
- /* external clock input on REFCLKIx pin */
- REFIx:refix_clk {
- #clock-cells = <0>;
- compatible = "fixed-clock";
- clock-frequency = <24000000>;
- status = "disabled";
- };
-
- /* PIC32 specific clks */
- pic32_clktree {
- #address-cells = <1>;
- #size-cells = <1>;
- reg = <0x1f801200 0x200>;
- compatible = "microchip,pic32mzda-clk";
- ranges = <0 0x1f801200 0x200>;
-
- /* secondary oscillator; external input on SOSCI pin */
- SOSC:sosc_clk@0 {
- #clock-cells = <0>;
- compatible = "microchip,pic32mzda-sosc";
- clock-frequency = <32768>;
- reg = <0x000 0x10>, /* enable reg */
- <0x1d0 0x10>; /* status reg */
- microchip,bit-mask = <0x02>; /* enable mask */
- microchip,status-bit-mask = <0x10>; /* status-mask*/
- };
-
- FRCDIV:frcdiv_clk {
- #clock-cells = <0>;
- compatible = "microchip,pic32mzda-frcdivclk";
- clocks = <&FRC>;
- clock-output-names = "frcdiv_clk";
- };
-
- /* System PLL clock */
- SYSPLL:spll_clk@020 {
- #clock-cells = <0>;
- compatible = "microchip,pic32mzda-syspll";
- reg = <0x020 0x10>, /* SPLL register */
- <0x1d0 0x10>; /* CLKSTAT register */
- clocks = <&POSC>, <&FRC>;
- clock-output-names = "sys_pll";
- microchip,status-bit-mask = <0x80>; /* SPLLRDY */
- };
-
- /* system clock; mux with postdiv & slew */
- SYSCLK:sys_clk@1c0 {
- #clock-cells = <0>;
- compatible = "microchip,pic32mzda-sysclk-v2";
- reg = <0x1c0 0x04>; /* SLEWCON */
- clocks = <&FRCDIV>, <&SYSPLL>, <&POSC>, <&SOSC>,
- <&LPRC>, <&FRCDIV>;
- microchip,clock-indices = <0>, <1>, <2>, <4>,
- <5>, <7>;
- clock-output-names = "sys_clk";
- };
-
- /* Peripheral bus1 clock */
- PBCLK1:pb1_clk@140 {
- reg = <0x140 0x10>;
- #clock-cells = <0>;
- compatible = "microchip,pic32mzda-pbclk";
- clocks = <&SYSCLK>;
- clock-output-names = "pb1_clk";
- /* used by system modules, not gateable */
- microchip,ignore-unused;
- };
-
- /* Peripheral bus2 clock */
- PBCLK2:pb2_clk@150 {
- reg = <0x150 0x10>;
- #clock-cells = <0>;
- compatible = "microchip,pic32mzda-pbclk";
- clocks = <&SYSCLK>;
- clock-output-names = "pb2_clk";
- /* avoid gating even if unused */
- microchip,ignore-unused;
- };
-
- /* Peripheral bus3 clock */
- PBCLK3:pb3_clk@160 {
- reg = <0x160 0x10>;
- #clock-cells = <0>;
- compatible = "microchip,pic32mzda-pbclk";
- clocks = <&SYSCLK>;
- clock-output-names = "pb3_clk";
- };
-
- /* Peripheral bus4 clock(I/O ports, GPIO) */
- PBCLK4:pb4_clk@170 {
- reg = <0x170 0x10>;
- #clock-cells = <0>;
- compatible = "microchip,pic32mzda-pbclk";
- clocks = <&SYSCLK>;
- clock-output-names = "pb4_clk";
- };
-
- /* Peripheral bus clock */
- PBCLK5:pb5_clk@180 {
- reg = <0x180 0x10>;
- #clock-cells = <0>;
- compatible = "microchip,pic32mzda-pbclk";
- clocks = <&SYSCLK>;
- clock-output-names = "pb5_clk";
- };
-
- /* Peripheral Bus6 clock; */
- PBCLK6:pb6_clk@190 {
- reg = <0x190 0x10>;
- compatible = "microchip,pic32mzda-pbclk";
- clocks = <&SYSCLK>;
- #clock-cells = <0>;
- };
-
- /* Peripheral bus7 clock */
- PBCLK7:pb7_clk@1a0 {
- reg = <0x1a0 0x10>;
- #clock-cells = <0>;
- compatible = "microchip,pic32mzda-pbclk";
- /* CPU is driven by this clock; so named */
- clock-output-names = "cpu_clk";
- clocks = <&SYSCLK>;
- };
-
- /* Reference Oscillator clock for SPI/I2S */
- REFCLKO1:refo1_clk@80 {
- reg = <0x080 0x20>;
- #clock-cells = <0>;
- compatible = "microchip,pic32mzda-refoclk";
- clocks = <&SYSCLK>, <&PBCLK1>, <&POSC>, <&FRC>, <&LPRC>,
- <&SOSC>, <&SYSPLL>, <&REFIx>, <&BFRC>;
- microchip,clock-indices = <0>, <1>, <2>, <3>, <4>,
- <5>, <7>, <8>, <9>;
- clock-output-names = "refo1_clk";
- };
-
- /* Reference Oscillator clock for SQI */
- REFCLKO2:refo2_clk@a0 {
- reg = <0x0a0 0x20>;
- #clock-cells = <0>;
- compatible = "microchip,pic32mzda-refoclk";
- clocks = <&SYSCLK>, <&PBCLK1>, <&POSC>, <&FRC>, <&LPRC>,
- <&SOSC>, <&SYSPLL>, <&REFIx>, <&BFRC>;
- microchip,clock-indices = <0>, <1>, <2>, <3>, <4>,
- <5>, <7>, <8>, <9>;
- clock-output-names = "refo2_clk";
- };
-
- /* Reference Oscillator clock, ADC */
- REFCLKO3:refo3_clk@c0 {
- reg = <0x0c0 0x20>;
- compatible = "microchip,pic32mzda-refoclk";
- clocks = <&SYSCLK>, <&PBCLK1>, <&POSC>, <&FRC>, <&LPRC>,
- <&SOSC>, <&SYSPLL>, <&REFIx>, <&BFRC>;
- microchip,clock-indices = <0>, <1>, <2>, <3>, <4>,
- <5>, <7>, <8>, <9>;
- #clock-cells = <0>;
- clock-output-names = "refo3_clk";
- };
-
- /* Reference Oscillator clock */
- REFCLKO4:refo4_clk@e0 {
- reg = <0x0e0 0x20>;
- compatible = "microchip,pic32mzda-refoclk";
- clocks = <&SYSCLK>, <&PBCLK1>, <&POSC>, <&FRC>, <&LPRC>,
- <&SOSC>, <&SYSPLL>, <&REFIx>, <&BFRC>;
- microchip,clock-indices = <0>, <1>, <2>, <3>, <4>,
- <5>, <7>, <8>, <9>;
- #clock-cells = <0>;
- clock-output-names = "refo4_clk";
- };
-
- /* Reference Oscillator clock, LCD */
- REFCLKO5:refo5_clk@100 {
- reg = <0x100 0x20>;
- compatible = "microchip,pic32mzda-refoclk";
- clocks = <&SYSCLK>,<&PBCLK1>,<&POSC>,<&FRC>,<&LPRC>,
- <&SOSC>,<&SYSPLL>,<&REFIx>,<&BFRC>;
- microchip,clock-indices = <0>, <1>, <2>, <3>, <4>,
- <5>, <7>, <8>, <9>;
- #clock-cells = <0>;
- clock-output-names = "refo5_clk";
- };
- };
-};
diff --git a/arch/mips/boot/dts/pic32/pic32mzda.dtsi b/arch/mips/boot/dts/pic32/pic32mzda.dtsi
index ad9e3318c2ce..5353a639c4fb 100644
--- a/arch/mips/boot/dts/pic32/pic32mzda.dtsi
+++ b/arch/mips/boot/dts/pic32/pic32mzda.dtsi
@@ -6,11 +6,9 @@
* published by the Free Software Foundation.
*
*/
-
+#include <dt-bindings/clock/microchip,pic32-clock.h>
#include <dt-bindings/interrupt-controller/irq.h>
-#include "pic32mzda-clk.dtsi"
-
/ {
#address-cells = <1>;
#size-cells = <1>;
@@ -50,6 +48,29 @@
interrupts = <0 IRQ_TYPE_EDGE_RISING>;
};
+ /* external clock input on TxCLKI pin */
+ txcki: txcki_clk {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <4000000>;
+ status = "disabled";
+ };
+
+ /* external input on REFCLKIx pin */
+ refix: refix_clk {
+ #clock-cells = <0>;
+ compatible = "fixed-clock";
+ clock-frequency = <24000000>;
+ status = "disabled";
+ };
+
+ rootclk: clock-controller@1f801200 {
+ compatible = "microchip,pic32mzda-clk";
+ reg = <0x1f801200 0x200>;
+ #clock-cells = <1>;
+ microchip,pic32mzda-sosc;
+ };
+
evic: interrupt-controller@1f810000 {
compatible = "microchip,pic32mzda-evic";
interrupt-controller;
@@ -63,7 +84,7 @@
#size-cells = <1>;
compatible = "microchip,pic32mzda-pinctrl";
reg = <0x1f801400 0x400>;
- clocks = <&PBCLK1>;
+ clocks = <&rootclk PB1CLK>;
};
/* PORTA */
@@ -75,7 +96,7 @@
gpio-controller;
interrupt-controller;
#interrupt-cells = <2>;
- clocks = <&PBCLK4>;
+ clocks = <&rootclk PB4CLK>;
microchip,gpio-bank = <0>;
gpio-ranges = <&pic32_pinctrl 0 0 16>;
};
@@ -89,7 +110,7 @@
gpio-controller;
interrupt-controller;
#interrupt-cells = <2>;
- clocks = <&PBCLK4>;
+ clocks = <&rootclk PB4CLK>;
microchip,gpio-bank = <1>;
gpio-ranges = <&pic32_pinctrl 0 16 16>;
};
@@ -103,7 +124,7 @@
gpio-controller;
interrupt-controller;
#interrupt-cells = <2>;
- clocks = <&PBCLK4>;
+ clocks = <&rootclk PB4CLK>;
microchip,gpio-bank = <2>;
gpio-ranges = <&pic32_pinctrl 0 32 16>;
};
@@ -117,7 +138,7 @@
gpio-controller;
interrupt-controller;
#interrupt-cells = <2>;
- clocks = <&PBCLK4>;
+ clocks = <&rootclk PB4CLK>;
microchip,gpio-bank = <3>;
gpio-ranges = <&pic32_pinctrl 0 48 16>;
};
@@ -131,7 +152,7 @@
gpio-controller;
interrupt-controller;
#interrupt-cells = <2>;
- clocks = <&PBCLK4>;
+ clocks = <&rootclk PB4CLK>;
microchip,gpio-bank = <4>;
gpio-ranges = <&pic32_pinctrl 0 64 16>;
};
@@ -145,7 +166,7 @@
gpio-controller;
interrupt-controller;
#interrupt-cells = <2>;
- clocks = <&PBCLK4>;
+ clocks = <&rootclk PB4CLK>;
microchip,gpio-bank = <5>;
gpio-ranges = <&pic32_pinctrl 0 80 16>;
};
@@ -159,7 +180,7 @@
gpio-controller;
interrupt-controller;
#interrupt-cells = <2>;
- clocks = <&PBCLK4>;
+ clocks = <&rootclk PB4CLK>;
microchip,gpio-bank = <6>;
gpio-ranges = <&pic32_pinctrl 0 96 16>;
};
@@ -173,7 +194,7 @@
gpio-controller;
interrupt-controller;
#interrupt-cells = <2>;
- clocks = <&PBCLK4>;
+ clocks = <&rootclk PB4CLK>;
microchip,gpio-bank = <7>;
gpio-ranges = <&pic32_pinctrl 0 112 16>;
};
@@ -189,7 +210,7 @@
gpio-controller;
interrupt-controller;
#interrupt-cells = <2>;
- clocks = <&PBCLK4>;
+ clocks = <&rootclk PB4CLK>;
microchip,gpio-bank = <8>;
gpio-ranges = <&pic32_pinctrl 0 128 16>;
};
@@ -203,7 +224,7 @@
gpio-controller;
interrupt-controller;
#interrupt-cells = <2>;
- clocks = <&PBCLK4>;
+ clocks = <&rootclk PB4CLK>;
microchip,gpio-bank = <9>;
gpio-ranges = <&pic32_pinctrl 0 144 16>;
};
@@ -212,7 +233,7 @@
compatible = "microchip,pic32mzda-sdhci";
reg = <0x1f8ec000 0x100>;
interrupts = <191 IRQ_TYPE_LEVEL_HIGH>;
- clocks = <&REFCLKO4>, <&PBCLK5>;
+ clocks = <&rootclk REF4CLK>, <&rootclk PB5CLK>;
clock-names = "base_clk", "sys_clk";
bus-width = <4>;
cap-sd-highspeed;
@@ -225,7 +246,7 @@
interrupts = <112 IRQ_TYPE_LEVEL_HIGH>,
<113 IRQ_TYPE_LEVEL_HIGH>,
<114 IRQ_TYPE_LEVEL_HIGH>;
- clocks = <&PBCLK2>;
+ clocks = <&rootclk PB2CLK>;
status = "disabled";
};
@@ -235,7 +256,7 @@
interrupts = <145 IRQ_TYPE_LEVEL_HIGH>,
<146 IRQ_TYPE_LEVEL_HIGH>,
<147 IRQ_TYPE_LEVEL_HIGH>;
- clocks = <&PBCLK2>;
+ clocks = <&rootclk PB2CLK>;
status = "disabled";
};
@@ -245,7 +266,7 @@
interrupts = <157 IRQ_TYPE_LEVEL_HIGH>,
<158 IRQ_TYPE_LEVEL_HIGH>,
<159 IRQ_TYPE_LEVEL_HIGH>;
- clocks = <&PBCLK2>;
+ clocks = <&rootclk PB2CLK>;
status = "disabled";
};
@@ -255,7 +276,7 @@
interrupts = <170 IRQ_TYPE_LEVEL_HIGH>,
<171 IRQ_TYPE_LEVEL_HIGH>,
<172 IRQ_TYPE_LEVEL_HIGH>;
- clocks = <&PBCLK2>;
+ clocks = <&rootclk PB2CLK>;
status = "disabled";
};
@@ -265,7 +286,7 @@
interrupts = <179 IRQ_TYPE_LEVEL_HIGH>,
<180 IRQ_TYPE_LEVEL_HIGH>,
<181 IRQ_TYPE_LEVEL_HIGH>;
- clocks = <&PBCLK2>;
+ clocks = <&rootclk PB2CLK>;
status = "disabled";
};
@@ -275,7 +296,7 @@
interrupts = <188 IRQ_TYPE_LEVEL_HIGH>,
<189 IRQ_TYPE_LEVEL_HIGH>,
<190 IRQ_TYPE_LEVEL_HIGH>;
- clocks = <&PBCLK2>;
+ clocks = <&rootclk PB2CLK>;
status = "disabled";
};
};
diff --git a/arch/mips/boot/dts/pic32/pic32mzda_sk.dts b/arch/mips/boot/dts/pic32/pic32mzda_sk.dts
index 5d434a50e85b..fc740102852e 100644
--- a/arch/mips/boot/dts/pic32/pic32mzda_sk.dts
+++ b/arch/mips/boot/dts/pic32/pic32mzda_sk.dts
@@ -95,8 +95,9 @@
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_sdhc1>;
status = "okay";
- assigned-clocks = <&REFCLKO2>,<&REFCLKO4>,<&REFCLKO5>;
- assigned-clock-rates = <50000000>,<25000000>,<40000000>;
+ assigned-clocks = <&rootclk REF2CLK>, <&rootclk REF4CLK>,
+ <&rootclk REF5CLK>;
+ assigned-clock-rates = <50000000>, <25000000>, <40000000>;
};
&pic32_pinctrl {
diff --git a/arch/mips/boot/dts/qca/Makefile b/arch/mips/boot/dts/qca/Makefile
index 2d61455d585d..63a9ddf048c9 100644
--- a/arch/mips/boot/dts/qca/Makefile
+++ b/arch/mips/boot/dts/qca/Makefile
@@ -1,8 +1,9 @@
# All DTBs
dtb-$(CONFIG_ATH79) += ar9132_tl_wr1043nd_v1.dtb
-
-# Select a DTB to build in the kernel
-obj-$(CONFIG_DTB_TL_WR1043ND_V1) += ar9132_tl_wr1043nd_v1.dtb.o
+dtb-$(CONFIG_ATH79) += ar9331_dpt_module.dtb
+dtb-$(CONFIG_ATH79) += ar9331_dragino_ms14.dtb
+dtb-$(CONFIG_ATH79) += ar9331_omega.dtb
+dtb-$(CONFIG_ATH79) += ar9331_tl_mr3020.dtb
# Force kbuild to make empty built-in.o if necessary
obj- += dummy.o
diff --git a/arch/mips/boot/dts/qca/ar9132.dtsi b/arch/mips/boot/dts/qca/ar9132.dtsi
index 3c2ed9ee5b2f..302f0a8d2988 100644
--- a/arch/mips/boot/dts/qca/ar9132.dtsi
+++ b/arch/mips/boot/dts/qca/ar9132.dtsi
@@ -1,3 +1,5 @@
+#include <dt-bindings/clock/ath79-clk.h>
+
/ {
compatible = "qca,ar9132";
@@ -11,6 +13,7 @@
cpu@0 {
device_type = "cpu";
compatible = "mips,mips24Kc";
+ clocks = <&pll ATH79_CLK_CPU>;
reg = <0>;
};
};
@@ -52,12 +55,12 @@
#qca,ddr-wb-channel-cells = <1>;
};
- uart@18020000 {
+ uart: uart@18020000 {
compatible = "ns8250";
reg = <0x18020000 0x20>;
interrupts = <3>;
- clocks = <&pll 2>;
+ clocks = <&pll ATH79_CLK_AHB>;
clock-names = "uart";
reg-io-width = <4>;
@@ -94,13 +97,13 @@
clock-output-names = "cpu", "ddr", "ahb";
};
- wdt@18060008 {
+ wdt: wdt@18060008 {
compatible = "qca,ar7130-wdt";
reg = <0x18060008 0x8>;
interrupts = <4>;
- clocks = <&pll 2>;
+ clocks = <&pll ATH79_CLK_AHB>;
clock-names = "wdt";
};
@@ -125,7 +128,7 @@
};
};
- usb@1b000100 {
+ usb: usb@1b000100 {
compatible = "qca,ar7100-ehci", "generic-ehci";
reg = <0x1b000100 0x100>;
@@ -140,11 +143,11 @@
status = "disabled";
};
- spi@1f000000 {
+ spi: spi@1f000000 {
compatible = "qca,ar9132-spi", "qca,ar7100-spi";
reg = <0x1f000000 0x10>;
- clocks = <&pll 2>;
+ clocks = <&pll ATH79_CLK_AHB>;
clock-names = "ahb";
status = "disabled";
diff --git a/arch/mips/boot/dts/qca/ar9132_tl_wr1043nd_v1.dts b/arch/mips/boot/dts/qca/ar9132_tl_wr1043nd_v1.dts
index 4f1540e5f963..3c3b7ce5737b 100644
--- a/arch/mips/boot/dts/qca/ar9132_tl_wr1043nd_v1.dts
+++ b/arch/mips/boot/dts/qca/ar9132_tl_wr1043nd_v1.dts
@@ -9,10 +9,6 @@
compatible = "tplink,tl-wr1043nd-v1", "qca,ar9132";
model = "TP-Link TL-WR1043ND Version 1";
- alias {
- serial0 = "/ahb/apb/uart@18020000";
- };
-
memory@0 {
device_type = "memory";
reg = <0x0 0x2000000>;
@@ -24,55 +20,6 @@
clock-frequency = <40000000>;
};
- ahb {
- apb {
- uart@18020000 {
- status = "okay";
- };
-
- pll-controller@18050000 {
- clocks = <&extosc>;
- };
- };
-
- usb@1b000100 {
- status = "okay";
- };
-
- spi@1f000000 {
- status = "okay";
- num-cs = <1>;
-
- flash@0 {
- #address-cells = <1>;
- #size-cells = <1>;
- compatible = "s25sl064a";
- reg = <0>;
- spi-max-frequency = <25000000>;
-
- partition@0 {
- label = "u-boot";
- reg = <0x000000 0x020000>;
- };
-
- partition@1 {
- label = "firmware";
- reg = <0x020000 0x7D0000>;
- };
-
- partition@2 {
- label = "art";
- reg = <0x7F0000 0x010000>;
- read-only;
- };
- };
- };
- };
-
- usb-phy {
- status = "okay";
- };
-
gpio-keys {
compatible = "gpio-keys-polled";
#address-cells = <1>;
@@ -118,3 +65,48 @@
};
};
};
+
+&uart {
+ status = "okay";
+};
+
+&pll {
+ clocks = <&extosc>;
+};
+
+&usb {
+ status = "okay";
+};
+
+&usb_phy {
+ status = "okay";
+};
+
+&spi {
+ status = "okay";
+ num-cs = <1>;
+
+ flash@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "s25sl064a";
+ reg = <0>;
+ spi-max-frequency = <25000000>;
+
+ partition@0 {
+ label = "u-boot";
+ reg = <0x000000 0x020000>;
+ };
+
+ partition@1 {
+ label = "firmware";
+ reg = <0x020000 0x7D0000>;
+ };
+
+ partition@2 {
+ label = "art";
+ reg = <0x7F0000 0x010000>;
+ read-only;
+ };
+ };
+};
diff --git a/arch/mips/boot/dts/qca/ar9331.dtsi b/arch/mips/boot/dts/qca/ar9331.dtsi
new file mode 100644
index 000000000000..cf47ed4d8569
--- /dev/null
+++ b/arch/mips/boot/dts/qca/ar9331.dtsi
@@ -0,0 +1,155 @@
+#include <dt-bindings/clock/ath79-clk.h>
+
+/ {
+ compatible = "qca,ar9331";
+
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+ cpus {
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ cpu@0 {
+ device_type = "cpu";
+ compatible = "mips,mips24Kc";
+ clocks = <&pll ATH79_CLK_CPU>;
+ reg = <0>;
+ };
+ };
+
+ cpuintc: interrupt-controller {
+ compatible = "qca,ar7100-cpu-intc";
+
+ interrupt-controller;
+ #interrupt-cells = <1>;
+
+ qca,ddr-wb-channel-interrupts = <2>, <3>;
+ qca,ddr-wb-channels = <&ddr_ctrl 3>, <&ddr_ctrl 2>;
+ };
+
+ ref: ref {
+ compatible = "fixed-clock";
+ #clock-cells = <0>;
+ };
+
+ ahb {
+ compatible = "simple-bus";
+ ranges;
+
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+ interrupt-parent = <&cpuintc>;
+
+ apb {
+ compatible = "simple-bus";
+ ranges;
+
+ #address-cells = <1>;
+ #size-cells = <1>;
+
+ interrupt-parent = <&miscintc>;
+
+ ddr_ctrl: memory-controller@18000000 {
+ compatible = "qca,ar7240-ddr-controller";
+ reg = <0x18000000 0x100>;
+
+ #qca,ddr-wb-channel-cells = <1>;
+ };
+
+ uart: uart@18020000 {
+ compatible = "qca,ar9330-uart";
+ reg = <0x18020000 0x14>;
+
+ interrupts = <3>;
+
+ clocks = <&ref>;
+ clock-names = "uart";
+
+ status = "disabled";
+ };
+
+ gpio: gpio@18040000 {
+ compatible = "qca,ar7100-gpio";
+ reg = <0x18040000 0x34>;
+ interrupts = <2>;
+
+ ngpios = <30>;
+
+ gpio-controller;
+ #gpio-cells = <2>;
+
+ interrupt-controller;
+ #interrupt-cells = <2>;
+
+ status = "disabled";
+ };
+
+ pll: pll-controller@18050000 {
+ compatible = "qca,ar9330-pll";
+ reg = <0x18050000 0x100>;
+
+ clocks = <&ref>;
+ clock-names = "ref";
+
+ #clock-cells = <1>;
+ };
+
+ miscintc: interrupt-controller@18060010 {
+ compatible = "qca,ar7240-misc-intc";
+ reg = <0x18060010 0x4>;
+
+ interrupt-parent = <&cpuintc>;
+ interrupts = <6>;
+
+ interrupt-controller;
+ #interrupt-cells = <1>;
+ };
+
+ rst: reset-controller@1806001c {
+ compatible = "qca,ar7100-reset";
+ reg = <0x1806001c 0x4>;
+
+ #reset-cells = <1>;
+ };
+ };
+
+ usb: usb@1b000100 {
+ compatible = "chipidea,usb2";
+ reg = <0x1b000000 0x200>;
+
+ interrupts = <3>;
+ resets = <&rst 5>;
+
+ phy-names = "usb-phy";
+ phys = <&usb_phy>;
+
+ status = "disabled";
+ };
+
+ spi: spi@1f000000 {
+ compatible = "qca,ar7100-spi";
+ reg = <0x1f000000 0x10>;
+
+ clocks = <&pll ATH79_CLK_AHB>;
+ clock-names = "ahb";
+
+ #address-cells = <1>;
+ #size-cells = <0>;
+
+ status = "disabled";
+ };
+ };
+
+ usb_phy: usb-phy {
+ compatible = "qca,ar7100-usb-phy";
+
+ reset-names = "usb-phy", "usb-suspend-override";
+ resets = <&rst 4>, <&rst 3>;
+
+ #phy-cells = <0>;
+
+ status = "disabled";
+ };
+};
diff --git a/arch/mips/boot/dts/qca/ar9331_dpt_module.dts b/arch/mips/boot/dts/qca/ar9331_dpt_module.dts
new file mode 100644
index 000000000000..98e74500e79d
--- /dev/null
+++ b/arch/mips/boot/dts/qca/ar9331_dpt_module.dts
@@ -0,0 +1,78 @@
+/dts-v1/;
+
+#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/input/input.h>
+
+#include "ar9331.dtsi"
+
+/ {
+ model = "DPTechnics DPT-Module";
+ compatible = "dptechnics,dpt-module";
+
+ aliases {
+ serial0 = &uart;
+ };
+
+ memory@0 {
+ device_type = "memory";
+ reg = <0x0 0x4000000>;
+ };
+
+ leds {
+ compatible = "gpio-leds";
+
+ system {
+ label = "dpt-module:green:system";
+ gpios = <&gpio 27 GPIO_ACTIVE_LOW>;
+ default-state = "off";
+ };
+ };
+
+ gpio-keys-polled {
+ compatible = "gpio-keys-polled";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ poll-interval = <100>;
+
+ button@0 {
+ label = "reset";
+ linux,code = <KEY_RESTART>;
+ gpios = <&gpio 11 GPIO_ACTIVE_LOW>;
+ };
+ };
+};
+
+&ref {
+ clock-frequency = <25000000>;
+};
+
+&uart {
+ status = "okay";
+};
+
+&gpio {
+ status = "okay";
+};
+
+&usb {
+ dr_mode = "host";
+ status = "okay";
+};
+
+&usb_phy {
+ status = "okay";
+};
+
+&spi {
+ num-chipselects = <1>;
+ status = "okay";
+
+ /* Winbond 25Q128FVSG SPI flash */
+ spiflash: w25q128@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "winbond,w25q128", "jedec,spi-nor";
+ spi-max-frequency = <104000000>;
+ reg = <0>;
+ };
+};
diff --git a/arch/mips/boot/dts/qca/ar9331_dragino_ms14.dts b/arch/mips/boot/dts/qca/ar9331_dragino_ms14.dts
new file mode 100644
index 000000000000..56f832076a69
--- /dev/null
+++ b/arch/mips/boot/dts/qca/ar9331_dragino_ms14.dts
@@ -0,0 +1,102 @@
+/dts-v1/;
+
+#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/input/input.h>
+
+#include "ar9331.dtsi"
+
+/ {
+ model = "Dragino MS14 (Dragino 2)";
+ compatible = "dragino,ms14";
+
+ aliases {
+ serial0 = &uart;
+ };
+
+ memory@0 {
+ device_type = "memory";
+ reg = <0x0 0x4000000>;
+ };
+
+ leds {
+ compatible = "gpio-leds";
+
+ wlan {
+ label = "dragino2:red:wlan";
+ gpios = <&gpio 0 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+
+ lan {
+ label = "dragino2:red:lan";
+ gpios = <&gpio 13 GPIO_ACTIVE_LOW>;
+ default-state = "off";
+ };
+
+ wan {
+ label = "dragino2:red:wan";
+ gpios = <&gpio 17 GPIO_ACTIVE_LOW>;
+ default-state = "off";
+ };
+
+ system {
+ label = "dragino2:red:system";
+ gpios = <&gpio 28 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+ };
+
+ gpio-keys-polled {
+ compatible = "gpio-keys-polled";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ poll-interval = <100>;
+
+ button@0 {
+ label = "jumpstart";
+ linux,code = <KEY_WPS_BUTTON>;
+ gpios = <&gpio 11 GPIO_ACTIVE_LOW>;
+ };
+
+ button@1 {
+ label = "reset";
+ linux,code = <KEY_RESTART>;
+ gpios = <&gpio 12 GPIO_ACTIVE_LOW>;
+ };
+ };
+};
+
+&ref {
+ clock-frequency = <25000000>;
+};
+
+&uart {
+ status = "okay";
+};
+
+&gpio {
+ status = "okay";
+};
+
+&usb {
+ dr_mode = "host";
+ status = "okay";
+};
+
+&usb_phy {
+ status = "okay";
+};
+
+&spi {
+ num-chipselects = <1>;
+ status = "okay";
+
+ /* Winbond 25Q128BVFG SPI flash */
+ spiflash: w25q128@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "winbond,w25q128", "jedec,spi-nor";
+ spi-max-frequency = <104000000>;
+ reg = <0>;
+ };
+};
diff --git a/arch/mips/boot/dts/qca/ar9331_omega.dts b/arch/mips/boot/dts/qca/ar9331_omega.dts
new file mode 100644
index 000000000000..b2be3b04479d
--- /dev/null
+++ b/arch/mips/boot/dts/qca/ar9331_omega.dts
@@ -0,0 +1,78 @@
+/dts-v1/;
+
+#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/input/input.h>
+
+#include "ar9331.dtsi"
+
+/ {
+ model = "Onion Omega";
+ compatible = "onion,omega";
+
+ aliases {
+ serial0 = &uart;
+ };
+
+ memory@0 {
+ device_type = "memory";
+ reg = <0x0 0x4000000>;
+ };
+
+ leds {
+ compatible = "gpio-leds";
+
+ system {
+ label = "onion:amber:system";
+ gpios = <&gpio 27 GPIO_ACTIVE_LOW>;
+ default-state = "off";
+ };
+ };
+
+ gpio-keys-polled {
+ compatible = "gpio-keys-polled";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ poll-interval = <100>;
+
+ button@0 {
+ label = "reset";
+ linux,code = <KEY_RESTART>;
+ gpios = <&gpio 11 GPIO_ACTIVE_HIGH>;
+ };
+ };
+};
+
+&ref {
+ clock-frequency = <25000000>;
+};
+
+&uart {
+ status = "okay";
+};
+
+&gpio {
+ status = "okay";
+};
+
+&usb {
+ dr_mode = "host";
+ status = "okay";
+};
+
+&usb_phy {
+ status = "okay";
+};
+
+&spi {
+ num-chipselects = <1>;
+ status = "okay";
+
+ /* Winbond 25Q128FVSG SPI flash */
+ spiflash: w25q128@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "winbond,w25q128", "jedec,spi-nor";
+ spi-max-frequency = <104000000>;
+ reg = <0>;
+ };
+};
diff --git a/arch/mips/boot/dts/qca/ar9331_tl_mr3020.dts b/arch/mips/boot/dts/qca/ar9331_tl_mr3020.dts
new file mode 100644
index 000000000000..919cf3b854a5
--- /dev/null
+++ b/arch/mips/boot/dts/qca/ar9331_tl_mr3020.dts
@@ -0,0 +1,118 @@
+/dts-v1/;
+
+#include <dt-bindings/gpio/gpio.h>
+#include <dt-bindings/input/input.h>
+
+#include "ar9331.dtsi"
+
+/ {
+ model = "TP-Link TL-MR3020";
+ compatible = "tplink,tl-mr3020";
+
+ aliases {
+ serial0 = &uart;
+ };
+
+ memory@0 {
+ device_type = "memory";
+ reg = <0x0 0x2000000>;
+ };
+
+ leds {
+ compatible = "gpio-leds";
+
+ wlan {
+ label = "tp-link:green:wlan";
+ gpios = <&gpio 0 GPIO_ACTIVE_HIGH>;
+ default-state = "off";
+ };
+
+ lan {
+ label = "tp-link:green:lan";
+ gpios = <&gpio 17 GPIO_ACTIVE_LOW>;
+ default-state = "off";
+ };
+
+ wps {
+ label = "tp-link:green:wps";
+ gpios = <&gpio 26 GPIO_ACTIVE_LOW>;
+ default-state = "off";
+ };
+
+ led3g {
+ label = "tp-link:green:3g";
+ gpios = <&gpio 27 GPIO_ACTIVE_LOW>;
+ default-state = "off";
+ };
+ };
+
+ gpio-keys-polled {
+ compatible = "gpio-keys-polled";
+ #address-cells = <1>;
+ #size-cells = <0>;
+ poll-interval = <100>;
+
+ button@0 {
+ label = "wps";
+ linux,code = <KEY_WPS_BUTTON>;
+ gpios = <&gpio 11 GPIO_ACTIVE_HIGH>;
+ };
+
+ button@1 {
+ label = "sw1";
+ linux,code = <BTN_0>;
+ gpios = <&gpio 18 GPIO_ACTIVE_HIGH>;
+ };
+
+ button@2 {
+ label = "sw2";
+ linux,code = <BTN_1>;
+ gpios = <&gpio 20 GPIO_ACTIVE_HIGH>;
+ };
+ };
+
+ reg_usb_vbus: reg_usb_vbus {
+ compatible = "regulator-fixed";
+ regulator-name = "usb_vbus";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ gpio = <&gpio 8 GPIO_ACTIVE_HIGH>;
+ enable-active-high;
+ };
+};
+
+&ref {
+ clock-frequency = <25000000>;
+};
+
+&uart {
+ status = "okay";
+};
+
+&gpio {
+ status = "okay";
+};
+
+&usb {
+ dr_mode = "host";
+ vbus-supply = <&reg_usb_vbus>;
+ status = "okay";
+};
+
+&usb_phy {
+ status = "okay";
+};
+
+&spi {
+ num-chipselects = <1>;
+ status = "okay";
+
+ /* Spansion S25FL032PIF SPI flash */
+ spiflash: s25sl032p@0 {
+ #address-cells = <1>;
+ #size-cells = <1>;
+ compatible = "spansion,s25sl032p", "jedec,spi-nor";
+ spi-max-frequency = <104000000>;
+ reg = <0>;
+ };
+};
diff --git a/arch/mips/boot/dts/ralink/mt7620a.dtsi b/arch/mips/boot/dts/ralink/mt7620a.dtsi
index 08bf24fefe9f..793c0c7ca921 100644
--- a/arch/mips/boot/dts/ralink/mt7620a.dtsi
+++ b/arch/mips/boot/dts/ralink/mt7620a.dtsi
@@ -9,7 +9,7 @@
};
};
- cpuintc: cpuintc@0 {
+ cpuintc: cpuintc {
#address-cells = <0>;
#interrupt-cells = <1>;
interrupt-controller;
diff --git a/arch/mips/boot/dts/ralink/rt2880.dtsi b/arch/mips/boot/dts/ralink/rt2880.dtsi
index 182afde2f2e1..fb2faef0ab79 100644
--- a/arch/mips/boot/dts/ralink/rt2880.dtsi
+++ b/arch/mips/boot/dts/ralink/rt2880.dtsi
@@ -9,7 +9,7 @@
};
};
- cpuintc: cpuintc@0 {
+ cpuintc: cpuintc {
#address-cells = <0>;
#interrupt-cells = <1>;
interrupt-controller;
diff --git a/arch/mips/boot/dts/ralink/rt3050.dtsi b/arch/mips/boot/dts/ralink/rt3050.dtsi
index e3203d414fee..d3cb57f985da 100644
--- a/arch/mips/boot/dts/ralink/rt3050.dtsi
+++ b/arch/mips/boot/dts/ralink/rt3050.dtsi
@@ -9,7 +9,7 @@
};
};
- cpuintc: cpuintc@0 {
+ cpuintc: cpuintc {
#address-cells = <0>;
#interrupt-cells = <1>;
interrupt-controller;
diff --git a/arch/mips/boot/dts/ralink/rt3883.dtsi b/arch/mips/boot/dts/ralink/rt3883.dtsi
index 3b131dd0d5ac..3d6fc9afdaf6 100644
--- a/arch/mips/boot/dts/ralink/rt3883.dtsi
+++ b/arch/mips/boot/dts/ralink/rt3883.dtsi
@@ -9,7 +9,7 @@
};
};
- cpuintc: cpuintc@0 {
+ cpuintc: cpuintc {
#address-cells = <0>;
#interrupt-cells = <1>;
interrupt-controller;
diff --git a/arch/mips/boot/dts/xilfpga/nexys4ddr.dts b/arch/mips/boot/dts/xilfpga/nexys4ddr.dts
index 686ebd11386d..48d21127c3f3 100644
--- a/arch/mips/boot/dts/xilfpga/nexys4ddr.dts
+++ b/arch/mips/boot/dts/xilfpga/nexys4ddr.dts
@@ -10,7 +10,7 @@
reg = <0x0 0x08000000>;
};
- cpuintc: interrupt-controller@0 {
+ cpuintc: interrupt-controller {
#address-cells = <0>;
#interrupt-cells = <1>;
interrupt-controller;
diff --git a/arch/mips/boot/tools/.gitignore b/arch/mips/boot/tools/.gitignore
new file mode 100644
index 000000000000..be0ed065249b
--- /dev/null
+++ b/arch/mips/boot/tools/.gitignore
@@ -0,0 +1 @@
+relocs
diff --git a/arch/mips/boot/tools/Makefile b/arch/mips/boot/tools/Makefile
new file mode 100644
index 000000000000..d232a68f6c8a
--- /dev/null
+++ b/arch/mips/boot/tools/Makefile
@@ -0,0 +1,8 @@
+
+hostprogs-y += relocs
+relocs-objs += relocs_32.o
+relocs-objs += relocs_64.o
+relocs-objs += relocs_main.o
+PHONY += relocs
+relocs: $(obj)/relocs
+ @:
diff --git a/arch/mips/boot/tools/relocs.c b/arch/mips/boot/tools/relocs.c
new file mode 100644
index 000000000000..b9cbf78527e8
--- /dev/null
+++ b/arch/mips/boot/tools/relocs.c
@@ -0,0 +1,680 @@
+/* This is included from relocs_32/64.c */
+
+#define ElfW(type) _ElfW(ELF_BITS, type)
+#define _ElfW(bits, type) __ElfW(bits, type)
+#define __ElfW(bits, type) Elf##bits##_##type
+
+#define Elf_Addr ElfW(Addr)
+#define Elf_Ehdr ElfW(Ehdr)
+#define Elf_Phdr ElfW(Phdr)
+#define Elf_Shdr ElfW(Shdr)
+#define Elf_Sym ElfW(Sym)
+
+static Elf_Ehdr ehdr;
+
+struct relocs {
+ uint32_t *offset;
+ unsigned long count;
+ unsigned long size;
+};
+
+static struct relocs relocs;
+
+struct section {
+ Elf_Shdr shdr;
+ struct section *link;
+ Elf_Sym *symtab;
+ Elf_Rel *reltab;
+ char *strtab;
+ long shdr_offset;
+};
+static struct section *secs;
+
+static const char * const regex_sym_kernel = {
+/* Symbols matching these regex's should never be relocated */
+ "^(__crc_)",
+};
+
+static regex_t sym_regex_c;
+
+static int regex_skip_reloc(const char *sym_name)
+{
+ return !regexec(&sym_regex_c, sym_name, 0, NULL, 0);
+}
+
+static void regex_init(void)
+{
+ char errbuf[128];
+ int err;
+
+ err = regcomp(&sym_regex_c, regex_sym_kernel,
+ REG_EXTENDED|REG_NOSUB);
+
+ if (err) {
+ regerror(err, &sym_regex_c, errbuf, sizeof(errbuf));
+ die("%s", errbuf);
+ }
+}
+
+static const char *rel_type(unsigned type)
+{
+ static const char * const type_name[] = {
+#define REL_TYPE(X)[X] = #X
+ REL_TYPE(R_MIPS_NONE),
+ REL_TYPE(R_MIPS_16),
+ REL_TYPE(R_MIPS_32),
+ REL_TYPE(R_MIPS_REL32),
+ REL_TYPE(R_MIPS_26),
+ REL_TYPE(R_MIPS_HI16),
+ REL_TYPE(R_MIPS_LO16),
+ REL_TYPE(R_MIPS_GPREL16),
+ REL_TYPE(R_MIPS_LITERAL),
+ REL_TYPE(R_MIPS_GOT16),
+ REL_TYPE(R_MIPS_PC16),
+ REL_TYPE(R_MIPS_CALL16),
+ REL_TYPE(R_MIPS_GPREL32),
+ REL_TYPE(R_MIPS_64),
+ REL_TYPE(R_MIPS_HIGHER),
+ REL_TYPE(R_MIPS_HIGHEST),
+ REL_TYPE(R_MIPS_PC21_S2),
+ REL_TYPE(R_MIPS_PC26_S2),
+#undef REL_TYPE
+ };
+ const char *name = "unknown type rel type name";
+
+ if (type < ARRAY_SIZE(type_name) && type_name[type])
+ name = type_name[type];
+ return name;
+}
+
+static const char *sec_name(unsigned shndx)
+{
+ const char *sec_strtab;
+ const char *name;
+
+ sec_strtab = secs[ehdr.e_shstrndx].strtab;
+ if (shndx < ehdr.e_shnum)
+ name = sec_strtab + secs[shndx].shdr.sh_name;
+ else if (shndx == SHN_ABS)
+ name = "ABSOLUTE";
+ else if (shndx == SHN_COMMON)
+ name = "COMMON";
+ else
+ name = "<noname>";
+ return name;
+}
+
+static struct section *sec_lookup(const char *secname)
+{
+ int i;
+
+ for (i = 0; i < ehdr.e_shnum; i++)
+ if (strcmp(secname, sec_name(i)) == 0)
+ return &secs[i];
+
+ return NULL;
+}
+
+static const char *sym_name(const char *sym_strtab, Elf_Sym *sym)
+{
+ const char *name;
+
+ if (sym->st_name)
+ name = sym_strtab + sym->st_name;
+ else
+ name = sec_name(sym->st_shndx);
+ return name;
+}
+
+#if BYTE_ORDER == LITTLE_ENDIAN
+#define le16_to_cpu(val) (val)
+#define le32_to_cpu(val) (val)
+#define le64_to_cpu(val) (val)
+#define be16_to_cpu(val) bswap_16(val)
+#define be32_to_cpu(val) bswap_32(val)
+#define be64_to_cpu(val) bswap_64(val)
+
+#define cpu_to_le16(val) (val)
+#define cpu_to_le32(val) (val)
+#define cpu_to_le64(val) (val)
+#define cpu_to_be16(val) bswap_16(val)
+#define cpu_to_be32(val) bswap_32(val)
+#define cpu_to_be64(val) bswap_64(val)
+#endif
+#if BYTE_ORDER == BIG_ENDIAN
+#define le16_to_cpu(val) bswap_16(val)
+#define le32_to_cpu(val) bswap_32(val)
+#define le64_to_cpu(val) bswap_64(val)
+#define be16_to_cpu(val) (val)
+#define be32_to_cpu(val) (val)
+#define be64_to_cpu(val) (val)
+
+#define cpu_to_le16(val) bswap_16(val)
+#define cpu_to_le32(val) bswap_32(val)
+#define cpu_to_le64(val) bswap_64(val)
+#define cpu_to_be16(val) (val)
+#define cpu_to_be32(val) (val)
+#define cpu_to_be64(val) (val)
+#endif
+
+static uint16_t elf16_to_cpu(uint16_t val)
+{
+ if (ehdr.e_ident[EI_DATA] == ELFDATA2LSB)
+ return le16_to_cpu(val);
+ else
+ return be16_to_cpu(val);
+}
+
+static uint32_t elf32_to_cpu(uint32_t val)
+{
+ if (ehdr.e_ident[EI_DATA] == ELFDATA2LSB)
+ return le32_to_cpu(val);
+ else
+ return be32_to_cpu(val);
+}
+
+static uint32_t cpu_to_elf32(uint32_t val)
+{
+ if (ehdr.e_ident[EI_DATA] == ELFDATA2LSB)
+ return cpu_to_le32(val);
+ else
+ return cpu_to_be32(val);
+}
+
+#define elf_half_to_cpu(x) elf16_to_cpu(x)
+#define elf_word_to_cpu(x) elf32_to_cpu(x)
+
+#if ELF_BITS == 64
+static uint64_t elf64_to_cpu(uint64_t val)
+{
+ if (ehdr.e_ident[EI_DATA] == ELFDATA2LSB)
+ return le64_to_cpu(val);
+ else
+ return be64_to_cpu(val);
+}
+#define elf_addr_to_cpu(x) elf64_to_cpu(x)
+#define elf_off_to_cpu(x) elf64_to_cpu(x)
+#define elf_xword_to_cpu(x) elf64_to_cpu(x)
+#else
+#define elf_addr_to_cpu(x) elf32_to_cpu(x)
+#define elf_off_to_cpu(x) elf32_to_cpu(x)
+#define elf_xword_to_cpu(x) elf32_to_cpu(x)
+#endif
+
+static void read_ehdr(FILE *fp)
+{
+ if (fread(&ehdr, sizeof(ehdr), 1, fp) != 1)
+ die("Cannot read ELF header: %s\n", strerror(errno));
+
+ if (memcmp(ehdr.e_ident, ELFMAG, SELFMAG) != 0)
+ die("No ELF magic\n");
+
+ if (ehdr.e_ident[EI_CLASS] != ELF_CLASS)
+ die("Not a %d bit executable\n", ELF_BITS);
+
+ if ((ehdr.e_ident[EI_DATA] != ELFDATA2LSB) &&
+ (ehdr.e_ident[EI_DATA] != ELFDATA2MSB))
+ die("Unknown ELF Endianness\n");
+
+ if (ehdr.e_ident[EI_VERSION] != EV_CURRENT)
+ die("Unknown ELF version\n");
+
+ /* Convert the fields to native endian */
+ ehdr.e_type = elf_half_to_cpu(ehdr.e_type);
+ ehdr.e_machine = elf_half_to_cpu(ehdr.e_machine);
+ ehdr.e_version = elf_word_to_cpu(ehdr.e_version);
+ ehdr.e_entry = elf_addr_to_cpu(ehdr.e_entry);
+ ehdr.e_phoff = elf_off_to_cpu(ehdr.e_phoff);
+ ehdr.e_shoff = elf_off_to_cpu(ehdr.e_shoff);
+ ehdr.e_flags = elf_word_to_cpu(ehdr.e_flags);
+ ehdr.e_ehsize = elf_half_to_cpu(ehdr.e_ehsize);
+ ehdr.e_phentsize = elf_half_to_cpu(ehdr.e_phentsize);
+ ehdr.e_phnum = elf_half_to_cpu(ehdr.e_phnum);
+ ehdr.e_shentsize = elf_half_to_cpu(ehdr.e_shentsize);
+ ehdr.e_shnum = elf_half_to_cpu(ehdr.e_shnum);
+ ehdr.e_shstrndx = elf_half_to_cpu(ehdr.e_shstrndx);
+
+ if ((ehdr.e_type != ET_EXEC) && (ehdr.e_type != ET_DYN))
+ die("Unsupported ELF header type\n");
+
+ if (ehdr.e_machine != ELF_MACHINE)
+ die("Not for %s\n", ELF_MACHINE_NAME);
+
+ if (ehdr.e_version != EV_CURRENT)
+ die("Unknown ELF version\n");
+
+ if (ehdr.e_ehsize != sizeof(Elf_Ehdr))
+ die("Bad Elf header size\n");
+
+ if (ehdr.e_phentsize != sizeof(Elf_Phdr))
+ die("Bad program header entry\n");
+
+ if (ehdr.e_shentsize != sizeof(Elf_Shdr))
+ die("Bad section header entry\n");
+
+ if (ehdr.e_shstrndx >= ehdr.e_shnum)
+ die("String table index out of bounds\n");
+}
+
+static void read_shdrs(FILE *fp)
+{
+ int i;
+ Elf_Shdr shdr;
+
+ secs = calloc(ehdr.e_shnum, sizeof(struct section));
+ if (!secs)
+ die("Unable to allocate %d section headers\n", ehdr.e_shnum);
+
+ if (fseek(fp, ehdr.e_shoff, SEEK_SET) < 0)
+ die("Seek to %d failed: %s\n", ehdr.e_shoff, strerror(errno));
+
+ for (i = 0; i < ehdr.e_shnum; i++) {
+ struct section *sec = &secs[i];
+
+ sec->shdr_offset = ftell(fp);
+ if (fread(&shdr, sizeof(shdr), 1, fp) != 1)
+ die("Cannot read ELF section headers %d/%d: %s\n",
+ i, ehdr.e_shnum, strerror(errno));
+ sec->shdr.sh_name = elf_word_to_cpu(shdr.sh_name);
+ sec->shdr.sh_type = elf_word_to_cpu(shdr.sh_type);
+ sec->shdr.sh_flags = elf_xword_to_cpu(shdr.sh_flags);
+ sec->shdr.sh_addr = elf_addr_to_cpu(shdr.sh_addr);
+ sec->shdr.sh_offset = elf_off_to_cpu(shdr.sh_offset);
+ sec->shdr.sh_size = elf_xword_to_cpu(shdr.sh_size);
+ sec->shdr.sh_link = elf_word_to_cpu(shdr.sh_link);
+ sec->shdr.sh_info = elf_word_to_cpu(shdr.sh_info);
+ sec->shdr.sh_addralign = elf_xword_to_cpu(shdr.sh_addralign);
+ sec->shdr.sh_entsize = elf_xword_to_cpu(shdr.sh_entsize);
+ if (sec->shdr.sh_link < ehdr.e_shnum)
+ sec->link = &secs[sec->shdr.sh_link];
+ }
+}
+
+static void read_strtabs(FILE *fp)
+{
+ int i;
+
+ for (i = 0; i < ehdr.e_shnum; i++) {
+ struct section *sec = &secs[i];
+
+ if (sec->shdr.sh_type != SHT_STRTAB)
+ continue;
+
+ sec->strtab = malloc(sec->shdr.sh_size);
+ if (!sec->strtab)
+ die("malloc of %d bytes for strtab failed\n",
+ sec->shdr.sh_size);
+
+ if (fseek(fp, sec->shdr.sh_offset, SEEK_SET) < 0)
+ die("Seek to %d failed: %s\n",
+ sec->shdr.sh_offset, strerror(errno));
+
+ if (fread(sec->strtab, 1, sec->shdr.sh_size, fp) !=
+ sec->shdr.sh_size)
+ die("Cannot read symbol table: %s\n", strerror(errno));
+ }
+}
+
+static void read_symtabs(FILE *fp)
+{
+ int i, j;
+
+ for (i = 0; i < ehdr.e_shnum; i++) {
+ struct section *sec = &secs[i];
+ if (sec->shdr.sh_type != SHT_SYMTAB)
+ continue;
+
+ sec->symtab = malloc(sec->shdr.sh_size);
+ if (!sec->symtab)
+ die("malloc of %d bytes for symtab failed\n",
+ sec->shdr.sh_size);
+
+ if (fseek(fp, sec->shdr.sh_offset, SEEK_SET) < 0)
+ die("Seek to %d failed: %s\n",
+ sec->shdr.sh_offset, strerror(errno));
+
+ if (fread(sec->symtab, 1, sec->shdr.sh_size, fp) !=
+ sec->shdr.sh_size)
+ die("Cannot read symbol table: %s\n", strerror(errno));
+
+ for (j = 0; j < sec->shdr.sh_size/sizeof(Elf_Sym); j++) {
+ Elf_Sym *sym = &sec->symtab[j];
+
+ sym->st_name = elf_word_to_cpu(sym->st_name);
+ sym->st_value = elf_addr_to_cpu(sym->st_value);
+ sym->st_size = elf_xword_to_cpu(sym->st_size);
+ sym->st_shndx = elf_half_to_cpu(sym->st_shndx);
+ }
+ }
+}
+
+static void read_relocs(FILE *fp)
+{
+ static unsigned long base = 0;
+ int i, j;
+
+ if (!base) {
+ struct section *sec = sec_lookup(".text");
+
+ if (!sec)
+ die("Could not find .text section\n");
+
+ base = sec->shdr.sh_addr;
+ }
+
+ for (i = 0; i < ehdr.e_shnum; i++) {
+ struct section *sec = &secs[i];
+
+ if (sec->shdr.sh_type != SHT_REL_TYPE)
+ continue;
+
+ sec->reltab = malloc(sec->shdr.sh_size);
+ if (!sec->reltab)
+ die("malloc of %d bytes for relocs failed\n",
+ sec->shdr.sh_size);
+
+ if (fseek(fp, sec->shdr.sh_offset, SEEK_SET) < 0)
+ die("Seek to %d failed: %s\n",
+ sec->shdr.sh_offset, strerror(errno));
+
+ if (fread(sec->reltab, 1, sec->shdr.sh_size, fp) !=
+ sec->shdr.sh_size)
+ die("Cannot read symbol table: %s\n", strerror(errno));
+
+ for (j = 0; j < sec->shdr.sh_size/sizeof(Elf_Rel); j++) {
+ Elf_Rel *rel = &sec->reltab[j];
+
+ rel->r_offset = elf_addr_to_cpu(rel->r_offset);
+ /* Set offset into kernel image */
+ rel->r_offset -= base;
+#if (ELF_BITS == 32)
+ rel->r_info = elf_xword_to_cpu(rel->r_info);
+#else
+ /* Convert MIPS64 RELA format - only the symbol
+ * index needs converting to native endianness
+ */
+ rel->r_info = rel->r_info;
+ ELF_R_SYM(rel->r_info) = elf32_to_cpu(ELF_R_SYM(rel->r_info));
+#endif
+#if (SHT_REL_TYPE == SHT_RELA)
+ rel->r_addend = elf_xword_to_cpu(rel->r_addend);
+#endif
+ }
+ }
+}
+
+static void remove_relocs(FILE *fp)
+{
+ int i;
+ Elf_Shdr shdr;
+
+ for (i = 0; i < ehdr.e_shnum; i++) {
+ struct section *sec = &secs[i];
+
+ if (sec->shdr.sh_type != SHT_REL_TYPE)
+ continue;
+
+ if (fseek(fp, sec->shdr_offset, SEEK_SET) < 0)
+ die("Seek to %d failed: %s\n",
+ sec->shdr_offset, strerror(errno));
+
+ if (fread(&shdr, sizeof(shdr), 1, fp) != 1)
+ die("Cannot read ELF section headers %d/%d: %s\n",
+ i, ehdr.e_shnum, strerror(errno));
+
+ /* Set relocation section size to 0, effectively removing it.
+ * This is necessary due to lack of support for relocations
+ * in objcopy when creating 32bit elf from 64bit elf.
+ */
+ shdr.sh_size = 0;
+
+ if (fseek(fp, sec->shdr_offset, SEEK_SET) < 0)
+ die("Seek to %d failed: %s\n",
+ sec->shdr_offset, strerror(errno));
+
+ if (fwrite(&shdr, sizeof(shdr), 1, fp) != 1)
+ die("Cannot write ELF section headers %d/%d: %s\n",
+ i, ehdr.e_shnum, strerror(errno));
+ }
+}
+
+static void add_reloc(struct relocs *r, uint32_t offset, unsigned type)
+{
+ /* Relocation representation in binary table:
+ * |76543210|76543210|76543210|76543210|
+ * | Type | offset from _text >> 2 |
+ */
+ offset >>= 2;
+ if (offset > 0x00FFFFFF)
+ die("Kernel image exceeds maximum size for relocation!\n");
+
+ offset = (offset & 0x00FFFFFF) | ((type & 0xFF) << 24);
+
+ if (r->count == r->size) {
+ unsigned long newsize = r->size + 50000;
+ void *mem = realloc(r->offset, newsize * sizeof(r->offset[0]));
+
+ if (!mem)
+ die("realloc failed\n");
+
+ r->offset = mem;
+ r->size = newsize;
+ }
+ r->offset[r->count++] = offset;
+}
+
+static void walk_relocs(int (*process)(struct section *sec, Elf_Rel *rel,
+ Elf_Sym *sym, const char *symname))
+{
+ int i;
+
+ /* Walk through the relocations */
+ for (i = 0; i < ehdr.e_shnum; i++) {
+ char *sym_strtab;
+ Elf_Sym *sh_symtab;
+ struct section *sec_applies, *sec_symtab;
+ int j;
+ struct section *sec = &secs[i];
+
+ if (sec->shdr.sh_type != SHT_REL_TYPE)
+ continue;
+
+ sec_symtab = sec->link;
+ sec_applies = &secs[sec->shdr.sh_info];
+ if (!(sec_applies->shdr.sh_flags & SHF_ALLOC))
+ continue;
+
+ sh_symtab = sec_symtab->symtab;
+ sym_strtab = sec_symtab->link->strtab;
+ for (j = 0; j < sec->shdr.sh_size/sizeof(Elf_Rel); j++) {
+ Elf_Rel *rel = &sec->reltab[j];
+ Elf_Sym *sym = &sh_symtab[ELF_R_SYM(rel->r_info)];
+ const char *symname = sym_name(sym_strtab, sym);
+
+ process(sec, rel, sym, symname);
+ }
+ }
+}
+
+static int do_reloc(struct section *sec, Elf_Rel *rel, Elf_Sym *sym,
+ const char *symname)
+{
+ unsigned r_type = ELF_R_TYPE(rel->r_info);
+ unsigned bind = ELF_ST_BIND(sym->st_info);
+
+ if ((bind == STB_WEAK) && (sym->st_value == 0)) {
+ /* Don't relocate weak symbols without a target */
+ return 0;
+ }
+
+ if (regex_skip_reloc(symname))
+ return 0;
+
+ switch (r_type) {
+ case R_MIPS_NONE:
+ case R_MIPS_REL32:
+ case R_MIPS_PC16:
+ case R_MIPS_PC21_S2:
+ case R_MIPS_PC26_S2:
+ /*
+ * NONE can be ignored and PC relative relocations don't
+ * need to be adjusted.
+ */
+ case R_MIPS_HIGHEST:
+ case R_MIPS_HIGHER:
+ /* We support relocating within the same 4Gb segment only,
+ * thus leaving the top 32bits unchanged
+ */
+ case R_MIPS_LO16:
+ /* We support relocating by 64k jumps only
+ * thus leaving the bottom 16bits unchanged
+ */
+ break;
+
+ case R_MIPS_64:
+ case R_MIPS_32:
+ case R_MIPS_26:
+ case R_MIPS_HI16:
+ add_reloc(&relocs, rel->r_offset, r_type);
+ break;
+
+ default:
+ die("Unsupported relocation type: %s (%d)\n",
+ rel_type(r_type), r_type);
+ break;
+ }
+
+ return 0;
+}
+
+static int write_reloc_as_bin(uint32_t v, FILE *f)
+{
+ unsigned char buf[4];
+
+ v = cpu_to_elf32(v);
+
+ memcpy(buf, &v, sizeof(uint32_t));
+ return fwrite(buf, 1, 4, f);
+}
+
+static int write_reloc_as_text(uint32_t v, FILE *f)
+{
+ int res;
+
+ res = fprintf(f, "\t.long 0x%08"PRIx32"\n", v);
+ if (res < 0)
+ return res;
+ else
+ return sizeof(uint32_t);
+}
+
+static void emit_relocs(int as_text, int as_bin, FILE *outf)
+{
+ int i;
+ int (*write_reloc)(uint32_t, FILE *) = write_reloc_as_bin;
+ int size = 0;
+ int size_reserved;
+ struct section *sec_reloc;
+
+ sec_reloc = sec_lookup(".data.reloc");
+ if (!sec_reloc)
+ die("Could not find relocation section\n");
+
+ size_reserved = sec_reloc->shdr.sh_size;
+
+ /* Collect up the relocations */
+ walk_relocs(do_reloc);
+
+ /* Print the relocations */
+ if (as_text) {
+		/* Print the relocations in a form
+		 * that gas will like.
+		 */
+ printf(".section \".data.reloc\",\"a\"\n");
+ printf(".balign 4\n");
+ /* Output text to stdout */
+ write_reloc = write_reloc_as_text;
+ outf = stdout;
+ } else if (as_bin) {
+ /* Output raw binary to stdout */
+ outf = stdout;
+ } else {
+ /* Seek to offset of the relocation section.
+ * Each relocation is then written into the
+ * vmlinux kernel image.
+ */
+ if (fseek(outf, sec_reloc->shdr.sh_offset, SEEK_SET) < 0) {
+ die("Seek to %d failed: %s\n",
+ sec_reloc->shdr.sh_offset, strerror(errno));
+ }
+ }
+
+ for (i = 0; i < relocs.count; i++)
+ size += write_reloc(relocs.offset[i], outf);
+
+ /* Print a stop, but only if we've actually written some relocs */
+ if (size)
+ size += write_reloc(0, outf);
+
+ if (size > size_reserved)
+ /* Die, but suggest a value for CONFIG_RELOCATION_TABLE_SIZE
+ * which will fix this problem and allow a bit of headroom
+ * if more kernel features are enabled
+ */
+ die("Relocations overflow available space!\n" \
+ "Please adjust CONFIG_RELOCATION_TABLE_SIZE " \
+ "to at least 0x%08x\n", (size + 0x1000) & ~0xFFF);
+}
+
+/*
+ * As an aid to debugging problems with different linkers
+ * print summary information about the relocs.
+ * Since different linkers tend to emit the sections in
+ * different orders we use the section names in the output.
+ */
+static int do_reloc_info(struct section *sec, Elf_Rel *rel, ElfW(Sym) *sym,
+ const char *symname)
+{
+ printf("%16s 0x%08x %16s %40s %16s\n",
+ sec_name(sec->shdr.sh_info),
+ (unsigned int)rel->r_offset,
+ rel_type(ELF_R_TYPE(rel->r_info)),
+ symname,
+ sec_name(sym->st_shndx));
+ return 0;
+}
+
+static void print_reloc_info(void)
+{
+ printf("%16s %10s %16s %40s %16s\n",
+ "reloc section",
+ "offset",
+ "reloc type",
+ "symbol",
+ "symbol section");
+ walk_relocs(do_reloc_info);
+}
+
+#if ELF_BITS == 64
+# define process process_64
+#else
+# define process process_32
+#endif
+
+void process(FILE *fp, int as_text, int as_bin,
+ int show_reloc_info, int keep_relocs)
+{
+ regex_init();
+ read_ehdr(fp);
+ read_shdrs(fp);
+ read_strtabs(fp);
+ read_symtabs(fp);
+ read_relocs(fp);
+ if (show_reloc_info) {
+ print_reloc_info();
+ return;
+ }
+ emit_relocs(as_text, as_bin, fp);
+ if (!keep_relocs)
+ remove_relocs(fp);
+}
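
The table built by add_reloc() and written out by emit_relocs() above packs each relocation into one 32-bit word: the relocation type in bits 31..24 and (offset from _text) >> 2 in bits 23..0, with a zero word terminating the table. A minimal decoding sketch of that format follows; it is an illustrative helper only, not the kernel's own relocation code, and it assumes the entries have already been byte-swapped to host order where needed.

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /*
     * Walk a packed relocation table in the format produced by
     * add_reloc()/emit_relocs(): type in the top 8 bits,
     * (offset from _text) >> 2 in the low 24 bits, zero-terminated.
     */
    static void dump_packed_relocs(const uint32_t *table, uint64_t text_base)
    {
    	const uint32_t *entry;

    	for (entry = table; *entry != 0; entry++) {
    		unsigned int type = *entry >> 24;
    		uint64_t addr = text_base +
    			((uint64_t)(*entry & 0x00FFFFFF) << 2);

    		printf("reloc type %-2u at 0x%016" PRIx64 "\n", type, addr);
    	}
    }

With --text, emit_relocs() prints the same words as .long directives for a ".data.reloc" section on stdout; with --bin it streams the raw words to stdout; otherwise it seeks to the .data.reloc section and writes them back into vmlinux.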
diff --git a/arch/mips/boot/tools/relocs.h b/arch/mips/boot/tools/relocs.h
new file mode 100644
index 000000000000..3cf676f49e18
--- /dev/null
+++ b/arch/mips/boot/tools/relocs.h
@@ -0,0 +1,45 @@
+#ifndef RELOCS_H
+#define RELOCS_H
+
+#include <stdio.h>
+#include <stdarg.h>
+#include <stdlib.h>
+#include <stdint.h>
+#include <inttypes.h>
+#include <string.h>
+#include <errno.h>
+#include <unistd.h>
+#include <elf.h>
+#include <byteswap.h>
+#define USE_BSD
+#include <endian.h>
+#include <regex.h>
+
+void die(char *fmt, ...);
+
+/*
+ * Introduced for MIPSr6
+ */
+#ifndef R_MIPS_PC21_S2
+#define R_MIPS_PC21_S2 60
+#endif
+
+#ifndef R_MIPS_PC26_S2
+#define R_MIPS_PC26_S2 61
+#endif
+
+#define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
+
+enum symtype {
+ S_ABS,
+ S_REL,
+ S_SEG,
+ S_LIN,
+ S_NSYMTYPES
+};
+
+void process_32(FILE *fp, int as_text, int as_bin,
+ int show_reloc_info, int keep_relocs);
+void process_64(FILE *fp, int as_text, int as_bin,
+ int show_reloc_info, int keep_relocs);
+#endif /* RELOCS_H */
diff --git a/arch/mips/boot/tools/relocs_32.c b/arch/mips/boot/tools/relocs_32.c
new file mode 100644
index 000000000000..915bdc07f5ed
--- /dev/null
+++ b/arch/mips/boot/tools/relocs_32.c
@@ -0,0 +1,17 @@
+#include "relocs.h"
+
+#define ELF_BITS 32
+
+#define ELF_MACHINE EM_MIPS
+#define ELF_MACHINE_NAME "MIPS"
+#define SHT_REL_TYPE SHT_REL
+#define Elf_Rel ElfW(Rel)
+
+#define ELF_CLASS ELFCLASS32
+#define ELF_R_SYM(val) ELF32_R_SYM(val)
+#define ELF_R_TYPE(val) ELF32_R_TYPE(val)
+#define ELF_ST_TYPE(o) ELF32_ST_TYPE(o)
+#define ELF_ST_BIND(o) ELF32_ST_BIND(o)
+#define ELF_ST_VISIBILITY(o) ELF32_ST_VISIBILITY(o)
+
+#include "relocs.c"
diff --git a/arch/mips/boot/tools/relocs_64.c b/arch/mips/boot/tools/relocs_64.c
new file mode 100644
index 000000000000..b671b5e2dcd8
--- /dev/null
+++ b/arch/mips/boot/tools/relocs_64.c
@@ -0,0 +1,27 @@
+#include "relocs.h"
+
+#define ELF_BITS 64
+
+#define ELF_MACHINE EM_MIPS
+#define ELF_MACHINE_NAME "MIPS64"
+#define SHT_REL_TYPE SHT_RELA
+#define Elf_Rel Elf64_Rela
+
+typedef uint8_t Elf64_Byte;
+
+typedef struct {
+ Elf64_Word r_sym; /* Symbol index. */
+ Elf64_Byte r_ssym; /* Special symbol. */
+ Elf64_Byte r_type3; /* Third relocation. */
+ Elf64_Byte r_type2; /* Second relocation. */
+ Elf64_Byte r_type; /* First relocation. */
+} Elf64_Mips_Rela;
+
+#define ELF_CLASS ELFCLASS64
+#define ELF_R_SYM(val) (((Elf64_Mips_Rela *)(&val))->r_sym)
+#define ELF_R_TYPE(val) (((Elf64_Mips_Rela *)(&val))->r_type)
+#define ELF_ST_TYPE(o) ELF64_ST_TYPE(o)
+#define ELF_ST_BIND(o) ELF64_ST_BIND(o)
+#define ELF_ST_VISIBILITY(o) ELF64_ST_VISIBILITY(o)
+
+#include "relocs.c"
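
The Elf64_Mips_Rela layout above is why ELF_R_SYM()/ELF_R_TYPE() cast the address of r_info instead of using the generic ELF64_R_SYM()/ELF64_R_TYPE() macros: MIPS64 packs a 32-bit symbol index plus four one-byte fields (special symbol and up to three relocation types) into the single 64-bit r_info word. A small sketch of that split, using the Elf64_Mips_Rela definition above (hypothetical helper, not part of the tool):

    /*
     * Split a raw MIPS64 r_info word into its parts. The symbol index is
     * stored in the file's own byte order; read_relocs() converts it with
     * elf32_to_cpu(). The byte-sized type fields need no swapping.
     */
    static void split_mips64_r_info(Elf64_Xword r_info,
    				uint32_t *sym, unsigned int *first_type)
    {
    	const Elf64_Mips_Rela *m = (const Elf64_Mips_Rela *)&r_info;

    	*sym = m->r_sym;		/* 32-bit symbol index */
    	*first_type = m->r_type;	/* first R_MIPS_* relocation type */
    }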
diff --git a/arch/mips/boot/tools/relocs_main.c b/arch/mips/boot/tools/relocs_main.c
new file mode 100644
index 000000000000..d8fe2343b8d0
--- /dev/null
+++ b/arch/mips/boot/tools/relocs_main.c
@@ -0,0 +1,84 @@
+
+#include <stdio.h>
+#include <stdint.h>
+#include <stdarg.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+#include <endian.h>
+#include <elf.h>
+
+#include "relocs.h"
+
+void die(char *fmt, ...)
+{
+ va_list ap;
+
+ va_start(ap, fmt);
+ vfprintf(stderr, fmt, ap);
+ va_end(ap);
+ exit(1);
+}
+
+static void usage(void)
+{
+ die("relocs [--reloc-info|--text|--bin|--keep] vmlinux\n");
+}
+
+int main(int argc, char **argv)
+{
+ int show_reloc_info, as_text, as_bin, keep_relocs;
+ const char *fname;
+ FILE *fp;
+ int i;
+ unsigned char e_ident[EI_NIDENT];
+
+ show_reloc_info = 0;
+ as_text = 0;
+ as_bin = 0;
+ keep_relocs = 0;
+ fname = NULL;
+ for (i = 1; i < argc; i++) {
+ char *arg = argv[i];
+
+ if (*arg == '-') {
+ if (strcmp(arg, "--reloc-info") == 0) {
+ show_reloc_info = 1;
+ continue;
+ }
+ if (strcmp(arg, "--text") == 0) {
+ as_text = 1;
+ continue;
+ }
+ if (strcmp(arg, "--bin") == 0) {
+ as_bin = 1;
+ continue;
+ }
+ if (strcmp(arg, "--keep") == 0) {
+ keep_relocs = 1;
+ continue;
+ }
+ } else if (!fname) {
+ fname = arg;
+ continue;
+ }
+ usage();
+ }
+ if (!fname)
+ usage();
+
+ fp = fopen(fname, "r+");
+ if (!fp)
+ die("Cannot open %s: %s\n", fname, strerror(errno));
+
+ if (fread(&e_ident, 1, EI_NIDENT, fp) != EI_NIDENT)
+ die("Cannot read %s: %s", fname, strerror(errno));
+
+ rewind(fp);
+ if (e_ident[EI_CLASS] == ELFCLASS64)
+ process_64(fp, as_text, as_bin, show_reloc_info, keep_relocs);
+ else
+ process_32(fp, as_text, as_bin, show_reloc_info, keep_relocs);
+ fclose(fp);
+ return 0;
+}
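main() above reads e_ident and branches purely on EI_CLASS, leaving the rest of the header validation to process_32()/process_64(). If an up-front sanity check were wanted, it could be as small as the following sketch; this helper is not part of the tool as posted.

#include <elf.h>
#include <string.h>

/* Hypothetical pre-check: confirm the ELF magic before trusting
 * e_ident[EI_CLASS]; it would be called right after the fread(). */
static int looks_like_elf(const unsigned char e_ident[EI_NIDENT])
{
	return memcmp(e_ident, ELFMAG, SELFMAG) == 0;
}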
diff --git a/arch/mips/cavium-octeon/csrc-octeon.c b/arch/mips/cavium-octeon/csrc-octeon.c
index 1882e6475dd0..23c2344a3552 100644
--- a/arch/mips/cavium-octeon/csrc-octeon.c
+++ b/arch/mips/cavium-octeon/csrc-octeon.c
@@ -19,6 +19,7 @@
#include <asm/octeon/cvmx-ipd-defs.h>
#include <asm/octeon/cvmx-mio-defs.h>
#include <asm/octeon/cvmx-rst-defs.h>
+#include <asm/octeon/cvmx-fpa-defs.h>
static u64 f;
static u64 rdiv;
@@ -65,9 +66,13 @@ void __init octeon_setup_delays(void)
*/
void octeon_init_cvmcount(void)
{
+ u64 clk_reg;
unsigned long flags;
unsigned loops = 2;
+ clk_reg = octeon_has_feature(OCTEON_FEATURE_FPA3) ?
+ CVMX_FPA_CLK_COUNT : CVMX_IPD_CLK_COUNT;
+
/* Clobber loops so GCC will not unroll the following while loop. */
asm("" : "+r" (loops));
@@ -77,18 +82,18 @@ void octeon_init_cvmcount(void)
* which should give more deterministic timing.
*/
while (loops--) {
- u64 ipd_clk_count = cvmx_read_csr(CVMX_IPD_CLK_COUNT);
+ u64 clk_count = cvmx_read_csr(clk_reg);
if (rdiv != 0) {
- ipd_clk_count *= rdiv;
+ clk_count *= rdiv;
if (f != 0) {
asm("dmultu\t%[cnt],%[f]\n\t"
"mfhi\t%[cnt]"
- : [cnt] "+r" (ipd_clk_count)
+ : [cnt] "+r" (clk_count)
: [f] "r" (f)
: "hi", "lo");
}
}
- write_c0_cvmcount(ipd_clk_count);
+ write_c0_cvmcount(clk_count);
}
local_irq_restore(flags);
}
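The dmultu/mfhi pair retained above computes the upper 64 bits of the 128-bit product clk_count * f, i.e. the rescaled clock value after the earlier multiplication by rdiv. In portable C the same operation looks like the sketch below (GCC/Clang provide unsigned __int128 on 64-bit hosts); this is an illustration, not kernel code.

#include <stdint.h>

/* High half of a 64x64 -> 128 bit multiply, as done by dmultu + mfhi. */
static uint64_t mul_hi64(uint64_t a, uint64_t b)
{
	return (uint64_t)(((unsigned __int128)a * b) >> 64);
}

/* In octeon_init_cvmcount() terms: after clk_count *= rdiv, the asm
 * block is equivalent to clk_count = mul_hi64(clk_count, f). */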
diff --git a/arch/mips/cavium-octeon/executive/cvmx-helper.c b/arch/mips/cavium-octeon/executive/cvmx-helper.c
index 376701f41cc2..ff26d0217b87 100644
--- a/arch/mips/cavium-octeon/executive/cvmx-helper.c
+++ b/arch/mips/cavium-octeon/executive/cvmx-helper.c
@@ -87,6 +87,8 @@ int cvmx_helper_get_number_of_interfaces(void)
return 9;
if (OCTEON_IS_MODEL(OCTEON_CN56XX) || OCTEON_IS_MODEL(OCTEON_CN52XX))
return 4;
+ if (OCTEON_IS_MODEL(OCTEON_CN7XXX))
+ return 5;
else
return 3;
}
@@ -260,6 +262,41 @@ static cvmx_helper_interface_mode_t __cvmx_get_mode_octeon2(int interface)
}
/**
+ * @INTERNAL
+ * Return interface mode for CN7XXX.
+ */
+static cvmx_helper_interface_mode_t __cvmx_get_mode_cn7xxx(int interface)
+{
+ union cvmx_gmxx_inf_mode mode;
+
+ mode.u64 = cvmx_read_csr(CVMX_GMXX_INF_MODE(interface));
+
+ switch (interface) {
+ case 0:
+ case 1:
+ switch (mode.cn68xx.mode) {
+ case 0:
+ return CVMX_HELPER_INTERFACE_MODE_DISABLED;
+ case 1:
+ case 2:
+ return CVMX_HELPER_INTERFACE_MODE_SGMII;
+ case 3:
+ return CVMX_HELPER_INTERFACE_MODE_XAUI;
+ default:
+ return CVMX_HELPER_INTERFACE_MODE_SGMII;
+ }
+ case 2:
+ return CVMX_HELPER_INTERFACE_MODE_NPI;
+ case 3:
+ return CVMX_HELPER_INTERFACE_MODE_LOOP;
+ case 4:
+ return CVMX_HELPER_INTERFACE_MODE_RGMII;
+ default:
+ return CVMX_HELPER_INTERFACE_MODE_DISABLED;
+ }
+}
+
+/**
* Get the operating mode of an interface. Depending on the Octeon
* chip and configuration, this function returns an enumeration
* of the type of packet I/O supported by an interface.
@@ -278,6 +315,12 @@ cvmx_helper_interface_mode_t cvmx_helper_interface_get_mode(int interface)
return CVMX_HELPER_INTERFACE_MODE_DISABLED;
/*
+ * OCTEON III models
+ */
+ if (OCTEON_IS_MODEL(OCTEON_CN7XXX))
+ return __cvmx_get_mode_cn7xxx(interface);
+
+ /*
* Octeon II models
*/
if (OCTEON_IS_MODEL(OCTEON_CN6XXX) || OCTEON_IS_MODEL(OCTEON_CNF71XX))
diff --git a/arch/mips/cavium-octeon/executive/cvmx-sysinfo.c b/arch/mips/cavium-octeon/executive/cvmx-sysinfo.c
index 3d17fac29359..cc1b1d2a6fa1 100644
--- a/arch/mips/cavium-octeon/executive/cvmx-sysinfo.c
+++ b/arch/mips/cavium-octeon/executive/cvmx-sysinfo.c
@@ -32,86 +32,22 @@
#include <linux/module.h>
#include <asm/octeon/cvmx.h>
-#include <asm/octeon/cvmx-spinlock.h>
#include <asm/octeon/cvmx-sysinfo.h>
-/**
+/*
* This structure defines the private state maintained by sysinfo module.
- *
*/
-static struct {
- struct cvmx_sysinfo sysinfo; /* system information */
- cvmx_spinlock_t lock; /* mutex spinlock */
-
-} state = {
- .lock = CVMX_SPINLOCK_UNLOCKED_INITIALIZER
-};
-
+static struct cvmx_sysinfo sysinfo; /* system information */
/*
- * Global variables that define the min/max of the memory region set
- * up for 32 bit userspace access.
- */
-uint64_t linux_mem32_min;
-uint64_t linux_mem32_max;
-uint64_t linux_mem32_wired;
-uint64_t linux_mem32_offset;
-
-/**
- * This function returns the application information as obtained
+ * Returns the application information as obtained
* by the bootloader. This provides the core mask of the cores
* running the same application image, as well as the physical
* memory regions available to the core.
- *
- * Returns Pointer to the boot information structure
- *
*/
struct cvmx_sysinfo *cvmx_sysinfo_get(void)
{
- return &(state.sysinfo);
+ return &sysinfo;
}
EXPORT_SYMBOL(cvmx_sysinfo_get);
-/**
- * This function is used in non-simple executive environments (such as
- * Linux kernel, u-boot, etc.) to configure the minimal fields that
- * are required to use simple executive files directly.
- *
- * Locking (if required) must be handled outside of this
- * function
- *
- * @phy_mem_desc_ptr:
- * Pointer to global physical memory descriptor
- * (bootmem descriptor) @board_type: Octeon board
- * type enumeration
- *
- * @board_rev_major:
- * Board major revision
- * @board_rev_minor:
- * Board minor revision
- * @cpu_clock_hz:
- * CPU clock freqency in hertz
- *
- * Returns 0: Failure
- * 1: success
- */
-int cvmx_sysinfo_minimal_initialize(void *phy_mem_desc_ptr,
- uint16_t board_type,
- uint8_t board_rev_major,
- uint8_t board_rev_minor,
- uint32_t cpu_clock_hz)
-{
-
- /* The sysinfo structure was already initialized */
- if (state.sysinfo.board_type)
- return 0;
-
- memset(&(state.sysinfo), 0x0, sizeof(state.sysinfo));
- state.sysinfo.phy_mem_desc_ptr = phy_mem_desc_ptr;
- state.sysinfo.board_type = board_type;
- state.sysinfo.board_rev_major = board_rev_major;
- state.sysinfo.board_rev_minor = board_rev_minor;
- state.sysinfo.cpu_clock_hz = cpu_clock_hz;
-
- return 1;
-}
diff --git a/arch/mips/cavium-octeon/executive/octeon-model.c b/arch/mips/cavium-octeon/executive/octeon-model.c
index b2104bd9ab3b..d08a2bce653c 100644
--- a/arch/mips/cavium-octeon/executive/octeon-model.c
+++ b/arch/mips/cavium-octeon/executive/octeon-model.c
@@ -71,11 +71,11 @@ static const char *__init octeon_model_get_string_buffer(uint32_t chip_id,
uint32_t fuse_data = 0;
fus3.u64 = 0;
- if (!OCTEON_IS_MODEL(OCTEON_CN6XXX))
+ if (OCTEON_IS_MODEL(OCTEON_CN3XXX) || OCTEON_IS_MODEL(OCTEON_CN5XXX))
fus3.u64 = cvmx_read_csr(CVMX_L2D_FUS3);
fus_dat2.u64 = cvmx_read_csr(CVMX_MIO_FUS_DAT2);
fus_dat3.u64 = cvmx_read_csr(CVMX_MIO_FUS_DAT3);
- num_cores = cvmx_pop(cvmx_read_csr(CVMX_CIU_FUSE));
+ num_cores = cvmx_octeon_num_cores();
/* Make sure the non existent devices look disabled */
switch ((chip_id >> 8) & 0xff) {
@@ -121,6 +121,15 @@ static const char *__init octeon_model_get_string_buffer(uint32_t chip_id,
* later.
*/
switch (num_cores) {
+ case 48:
+ core_model = "90";
+ break;
+ case 44:
+ core_model = "88";
+ break;
+ case 40:
+ core_model = "85";
+ break;
case 32:
core_model = "80";
break;
@@ -297,7 +306,7 @@ static const char *__init octeon_model_get_string_buffer(uint32_t chip_id,
if (fus_dat3.s.nozip)
suffix = "SCP";
- if (fus_dat3.s.bar2_en)
+ if (fus_dat3.cn56xx.bar2_en)
suffix = "NSPB2";
}
if (fus3.cn56xx.crip_1024k)
@@ -369,6 +378,73 @@ static const char *__init octeon_model_get_string_buffer(uint32_t chip_id,
else
suffix = "AAP";
break;
+ case 0x94: /* CNF71XX */
+ family = "F71";
+ if (fus_dat3.cnf71xx.nozip)
+ suffix = "SCP";
+ else
+ suffix = "AAP";
+ break;
+ case 0x95: /* CN78XX */
+ if (num_cores == 6) /* Other core counts match generic */
+ core_model = "35";
+ if (OCTEON_IS_MODEL(OCTEON_CN76XX))
+ family = "76";
+ else
+ family = "78";
+ if (fus_dat3.cn78xx.l2c_crip == 2)
+ family = "77";
+ if (fus_dat3.cn78xx.nozip
+ && fus_dat3.cn78xx.nodfa_dte
+ && fus_dat3.cn78xx.nohna_dte) {
+ if (fus_dat3.cn78xx.nozip &&
+ !fus_dat2.cn78xx.raid_en &&
+ fus_dat3.cn78xx.nohna_dte) {
+ suffix = "CP";
+ } else {
+ suffix = "SCP";
+ }
+ } else if (fus_dat2.cn78xx.raid_en == 0)
+ suffix = "HCP";
+ else
+ suffix = "AAP";
+ break;
+ case 0x96: /* CN70XX */
+ family = "70";
+ if (cvmx_read_csr(CVMX_MIO_FUS_PDF) & (0x1ULL << 32))
+ family = "71";
+ if (fus_dat2.cn70xx.nocrypto)
+ suffix = "CP";
+ else if (fus_dat3.cn70xx.nodfa_dte)
+ suffix = "SCP";
+ else
+ suffix = "AAP";
+ break;
+ case 0x97: /* CN73XX */
+ if (num_cores == 6) /* Other core counts match generic */
+ core_model = "35";
+ family = "73";
+ if (fus_dat3.cn73xx.l2c_crip == 2)
+ family = "72";
+ if (fus_dat3.cn73xx.nozip
+ && fus_dat3.cn73xx.nodfa_dte
+ && fus_dat3.cn73xx.nohna_dte) {
+ if (!fus_dat2.cn73xx.raid_en)
+ suffix = "CP";
+ else
+ suffix = "SCP";
+ } else
+ suffix = "AAP";
+ break;
+ case 0x98: /* CN75XX */
+ family = "F75";
+ if (fus_dat3.cn78xx.nozip
+ && fus_dat3.cn78xx.nodfa_dte
+ && fus_dat3.cn78xx.nohna_dte)
+ suffix = "SCP";
+ else
+ suffix = "AAP";
+ break;
default:
family = "XX";
core_model = "XX";
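The new cases extend the switch on the family byte, (chip_id >> 8) & 0xff, to the OCTEON III parts. A standalone restatement of just the family selection added here (the fuse registers then adjust it, e.g. 78 to 77/76 or 70 to 71, as the full function shows):

#include <stdint.h>

static const char *octeon3_base_family(uint32_t chip_id)
{
	switch ((chip_id >> 8) & 0xff) {
	case 0x94: return "F71";	/* CNF71XX */
	case 0x95: return "78";		/* CN78XX; fuses may select 76/77 */
	case 0x96: return "70";		/* CN70XX; fuses may select 71 */
	case 0x97: return "73";		/* CN73XX; fuses may select 72 */
	case 0x98: return "F75";	/* CN75XX */
	default:   return "XX";
	}
}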
diff --git a/arch/mips/cavium-octeon/octeon-irq.c b/arch/mips/cavium-octeon/octeon-irq.c
index 4f9eb0576884..368eb490354c 100644
--- a/arch/mips/cavium-octeon/octeon-irq.c
+++ b/arch/mips/cavium-octeon/octeon-irq.c
@@ -3,7 +3,7 @@
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
- * Copyright (C) 2004-2014 Cavium, Inc.
+ * Copyright (C) 2004-2016 Cavium, Inc.
*/
#include <linux/of_address.h>
@@ -19,16 +19,53 @@
#include <asm/octeon/octeon.h>
#include <asm/octeon/cvmx-ciu2-defs.h>
+#include <asm/octeon/cvmx-ciu3-defs.h>
static DEFINE_PER_CPU(unsigned long, octeon_irq_ciu0_en_mirror);
static DEFINE_PER_CPU(unsigned long, octeon_irq_ciu1_en_mirror);
static DEFINE_PER_CPU(raw_spinlock_t, octeon_irq_ciu_spinlock);
+static DEFINE_PER_CPU(unsigned int, octeon_irq_ciu3_idt_ip2);
+
+static DEFINE_PER_CPU(unsigned int, octeon_irq_ciu3_idt_ip3);
+static DEFINE_PER_CPU(struct octeon_ciu3_info *, octeon_ciu3_info);
+#define CIU3_MBOX_PER_CORE 10
+
+/*
+ * The 8 most significant bits of the intsn identify the interrupt major block.
+ * Each major block might use its own interrupt domain. Thus 256 domains are
+ * needed.
+ */
+#define MAX_CIU3_DOMAINS 256
+
+typedef irq_hw_number_t (*octeon_ciu3_intsn2hw_t)(struct irq_domain *, unsigned int);
+
+/* Information for each ciu3 in the system */
+struct octeon_ciu3_info {
+ u64 ciu3_addr;
+ int node;
+ struct irq_domain *domain[MAX_CIU3_DOMAINS];
+ octeon_ciu3_intsn2hw_t intsn2hw[MAX_CIU3_DOMAINS];
+};
+
+/* Each ciu3 in the system uses its own data (one ciu3 per node) */
+static struct octeon_ciu3_info *octeon_ciu3_info_per_node[4];
struct octeon_irq_ciu_domain_data {
int num_sum; /* number of sum registers (2 or 3). */
};
-static __read_mostly u8 octeon_irq_ciu_to_irq[8][64];
+/* Register offsets from ciu3_addr */
+#define CIU3_CONST 0x220
+#define CIU3_IDT_CTL(_idt) ((_idt) * 8 + 0x110000)
+#define CIU3_IDT_PP(_idt, _idx) ((_idt) * 32 + (_idx) * 8 + 0x120000)
+#define CIU3_IDT_IO(_idt) ((_idt) * 8 + 0x130000)
+#define CIU3_DEST_PP_INT(_pp_ip) ((_pp_ip) * 8 + 0x200000)
+#define CIU3_DEST_IO_INT(_io) ((_io) * 8 + 0x210000)
+#define CIU3_ISC_CTL(_intsn) ((_intsn) * 8 + 0x80000000)
+#define CIU3_ISC_W1C(_intsn) ((_intsn) * 8 + 0x90000000)
+#define CIU3_ISC_W1S(_intsn) ((_intsn) * 8 + 0xa0000000)
+
+static __read_mostly int octeon_irq_ciu_to_irq[8][64];
struct octeon_ciu_chip_data {
union {
@@ -39,10 +76,11 @@ struct octeon_ciu_chip_data {
struct { /* only used for ciu/ciu2 */
u8 line;
u8 bit;
- u8 gpio_line;
};
};
+ int gpio_line;
int current_cpu; /* Next CPU expected to take this irq */
+ int ciu_node; /* NUMA node number of the CIU */
};
struct octeon_core_chip_data {
@@ -626,6 +664,18 @@ static void octeon_irq_ciu_enable_all_v2(struct irq_data *data)
}
}
+static int octeon_irq_ciu_set_type(struct irq_data *data, unsigned int t)
+{
+ irqd_set_trigger_type(data, t);
+
+ if (t & IRQ_TYPE_EDGE_BOTH)
+ irq_set_handler_locked(data, handle_edge_irq);
+ else
+ irq_set_handler_locked(data, handle_level_irq);
+
+ return IRQ_SET_MASK_OK;
+}
+
static void octeon_irq_gpio_setup(struct irq_data *data)
{
union cvmx_gpio_bit_cfgx cfg;
@@ -663,7 +713,7 @@ static int octeon_irq_ciu_gpio_set_type(struct irq_data *data, unsigned int t)
irqd_set_trigger_type(data, t);
octeon_irq_gpio_setup(data);
- if (irqd_get_trigger_type(data) & IRQ_TYPE_EDGE_BOTH)
+ if (t & IRQ_TYPE_EDGE_BOTH)
irq_set_handler_locked(data, handle_edge_irq);
else
irq_set_handler_locked(data, handle_level_irq);
@@ -863,6 +913,16 @@ static int octeon_irq_ciu_set_affinity_sum2(struct irq_data *data,
}
#endif
+static unsigned int edge_startup(struct irq_data *data)
+{
+ /* ack any pending edge-irq at startup, so there is
+ * an _edge_ to fire on when the event reappears.
+ */
+ data->chip->irq_ack(data);
+ data->chip->irq_enable(data);
+ return 0;
+}
+
/*
* Newer octeon chips have support for lockless CIU operation.
*/
@@ -1158,16 +1218,6 @@ static struct irq_chip *octeon_irq_ciu_chip;
static struct irq_chip *octeon_irq_ciu_chip_edge;
static struct irq_chip *octeon_irq_gpio_chip;
-static bool octeon_irq_virq_in_range(unsigned int virq)
-{
- /* We cannot let it overflow the mapping array. */
- if (virq < (1ul << 8 * sizeof(octeon_irq_ciu_to_irq[0][0])))
- return true;
-
- WARN_ONCE(true, "virq out of range %u.\n", virq);
- return false;
-}
-
static int octeon_irq_ciu_map(struct irq_domain *d,
unsigned int virq, irq_hw_number_t hw)
{
@@ -1176,13 +1226,6 @@ static int octeon_irq_ciu_map(struct irq_domain *d,
unsigned int bit = hw & 63;
struct octeon_irq_ciu_domain_data *dd = d->host_data;
- if (!octeon_irq_virq_in_range(virq))
- return -EINVAL;
-
- /* Don't map irq if it is reserved for GPIO. */
- if (line == 0 && bit >= 16 && bit <32)
- return 0;
-
if (line >= dd->num_sum || octeon_irq_ciu_to_irq[line][bit] != 0)
return -EINVAL;
@@ -1215,9 +1258,6 @@ static int octeon_irq_gpio_map(struct irq_domain *d,
unsigned int line, bit;
int r;
- if (!octeon_irq_virq_in_range(virq))
- return -EINVAL;
-
line = (hw + gpiod->base_hwirq) >> 6;
bit = (hw + gpiod->base_hwirq) & 63;
if (line > ARRAY_SIZE(octeon_irq_ciu_to_irq) ||
@@ -1899,9 +1939,6 @@ static int octeon_irq_ciu2_map(struct irq_domain *d,
unsigned int line = hw >> 6;
unsigned int bit = hw & 63;
- if (!octeon_irq_virq_in_range(virq))
- return -EINVAL;
-
/*
* Don't map irq if it is reserved for GPIO.
* (Line 7 are the GPIO lines.)
@@ -2294,10 +2331,598 @@ static int __init octeon_irq_init_cib(struct device_node *ciu_node,
return 0;
}
+int octeon_irq_ciu3_xlat(struct irq_domain *d,
+ struct device_node *node,
+ const u32 *intspec,
+ unsigned int intsize,
+ unsigned long *out_hwirq,
+ unsigned int *out_type)
+{
+ struct octeon_ciu3_info *ciu3_info = d->host_data;
+ unsigned int hwirq, type, intsn_major;
+ union cvmx_ciu3_iscx_ctl isc;
+
+ if (intsize < 2)
+ return -EINVAL;
+ hwirq = intspec[0];
+ type = intspec[1];
+
+ if (hwirq >= (1 << 20))
+ return -EINVAL;
+
+ intsn_major = hwirq >> 12;
+ switch (intsn_major) {
+ case 0x04: /* Software handled separately. */
+ return -EINVAL;
+ default:
+ break;
+ }
+
+ isc.u64 = cvmx_read_csr(ciu3_info->ciu3_addr + CIU3_ISC_CTL(hwirq));
+ if (!isc.s.imp)
+ return -EINVAL;
+
+ switch (type) {
+ case 4: /* official value for level triggering. */
+ *out_type = IRQ_TYPE_LEVEL_HIGH;
+ break;
+ case 0: /* unofficial value, but we might as well let it work. */
+ case 1: /* official value for edge triggering. */
+ *out_type = IRQ_TYPE_EDGE_RISING;
+ break;
+ default: /* Nothing else is acceptable. */
+ return -EINVAL;
+ }
+
+ *out_hwirq = hwirq;
+
+ return 0;
+}
+
+void octeon_irq_ciu3_enable(struct irq_data *data)
+{
+ int cpu;
+ union cvmx_ciu3_iscx_ctl isc_ctl;
+ union cvmx_ciu3_iscx_w1c isc_w1c;
+ u64 isc_ctl_addr;
+
+ struct octeon_ciu_chip_data *cd;
+
+ cpu = next_cpu_for_irq(data);
+
+ cd = irq_data_get_irq_chip_data(data);
+
+ isc_w1c.u64 = 0;
+ isc_w1c.s.en = 1;
+ cvmx_write_csr(cd->ciu3_addr + CIU3_ISC_W1C(cd->intsn), isc_w1c.u64);
+
+ isc_ctl_addr = cd->ciu3_addr + CIU3_ISC_CTL(cd->intsn);
+ isc_ctl.u64 = 0;
+ isc_ctl.s.en = 1;
+ isc_ctl.s.idt = per_cpu(octeon_irq_ciu3_idt_ip2, cpu);
+ cvmx_write_csr(isc_ctl_addr, isc_ctl.u64);
+ cvmx_read_csr(isc_ctl_addr);
+}
+
+void octeon_irq_ciu3_disable(struct irq_data *data)
+{
+ u64 isc_ctl_addr;
+ union cvmx_ciu3_iscx_w1c isc_w1c;
+
+ struct octeon_ciu_chip_data *cd;
+
+ cd = irq_data_get_irq_chip_data(data);
+
+ isc_w1c.u64 = 0;
+ isc_w1c.s.en = 1;
+
+ isc_ctl_addr = cd->ciu3_addr + CIU3_ISC_CTL(cd->intsn);
+ cvmx_write_csr(cd->ciu3_addr + CIU3_ISC_W1C(cd->intsn), isc_w1c.u64);
+ cvmx_write_csr(isc_ctl_addr, 0);
+ cvmx_read_csr(isc_ctl_addr);
+}
+
+void octeon_irq_ciu3_ack(struct irq_data *data)
+{
+ u64 isc_w1c_addr;
+ union cvmx_ciu3_iscx_w1c isc_w1c;
+ struct octeon_ciu_chip_data *cd;
+ u32 trigger_type = irqd_get_trigger_type(data);
+
+ /*
+ * We use a single irq_chip, so we have to do nothing to ack a
+ * level interrupt.
+ */
+ if (!(trigger_type & IRQ_TYPE_EDGE_BOTH))
+ return;
+
+ cd = irq_data_get_irq_chip_data(data);
+
+ isc_w1c.u64 = 0;
+ isc_w1c.s.raw = 1;
+
+ isc_w1c_addr = cd->ciu3_addr + CIU3_ISC_W1C(cd->intsn);
+ cvmx_write_csr(isc_w1c_addr, isc_w1c.u64);
+ cvmx_read_csr(isc_w1c_addr);
+}
+
+void octeon_irq_ciu3_mask(struct irq_data *data)
+{
+ union cvmx_ciu3_iscx_w1c isc_w1c;
+ u64 isc_w1c_addr;
+ struct octeon_ciu_chip_data *cd;
+
+ cd = irq_data_get_irq_chip_data(data);
+
+ isc_w1c.u64 = 0;
+ isc_w1c.s.en = 1;
+
+ isc_w1c_addr = cd->ciu3_addr + CIU3_ISC_W1C(cd->intsn);
+ cvmx_write_csr(isc_w1c_addr, isc_w1c.u64);
+ cvmx_read_csr(isc_w1c_addr);
+}
+
+void octeon_irq_ciu3_mask_ack(struct irq_data *data)
+{
+ union cvmx_ciu3_iscx_w1c isc_w1c;
+ u64 isc_w1c_addr;
+ struct octeon_ciu_chip_data *cd;
+ u32 trigger_type = irqd_get_trigger_type(data);
+
+ cd = irq_data_get_irq_chip_data(data);
+
+ isc_w1c.u64 = 0;
+ isc_w1c.s.en = 1;
+
+ /*
+ * We use a single irq_chip, so only ack an edge (!level)
+ * interrupt.
+ */
+ if (trigger_type & IRQ_TYPE_EDGE_BOTH)
+ isc_w1c.s.raw = 1;
+
+ isc_w1c_addr = cd->ciu3_addr + CIU3_ISC_W1C(cd->intsn);
+ cvmx_write_csr(isc_w1c_addr, isc_w1c.u64);
+ cvmx_read_csr(isc_w1c_addr);
+}
+
+#ifdef CONFIG_SMP
+int octeon_irq_ciu3_set_affinity(struct irq_data *data,
+ const struct cpumask *dest, bool force)
+{
+ union cvmx_ciu3_iscx_ctl isc_ctl;
+ union cvmx_ciu3_iscx_w1c isc_w1c;
+ u64 isc_ctl_addr;
+ int cpu;
+ bool enable_one = !irqd_irq_disabled(data) && !irqd_irq_masked(data);
+ struct octeon_ciu_chip_data *cd = irq_data_get_irq_chip_data(data);
+
+ if (!cpumask_subset(dest, cpumask_of_node(cd->ciu_node)))
+ return -EINVAL;
+
+ if (!enable_one)
+ return IRQ_SET_MASK_OK;
+
+ cd = irq_data_get_irq_chip_data(data);
+ cpu = cpumask_first(dest);
+ if (cpu >= nr_cpu_ids)
+ cpu = smp_processor_id();
+ cd->current_cpu = cpu;
+
+ isc_w1c.u64 = 0;
+ isc_w1c.s.en = 1;
+ cvmx_write_csr(cd->ciu3_addr + CIU3_ISC_W1C(cd->intsn), isc_w1c.u64);
+
+ isc_ctl_addr = cd->ciu3_addr + CIU3_ISC_CTL(cd->intsn);
+ isc_ctl.u64 = 0;
+ isc_ctl.s.en = 1;
+ isc_ctl.s.idt = per_cpu(octeon_irq_ciu3_idt_ip2, cpu);
+ cvmx_write_csr(isc_ctl_addr, isc_ctl.u64);
+ cvmx_read_csr(isc_ctl_addr);
+
+ return IRQ_SET_MASK_OK;
+}
+#endif
+
+static struct irq_chip octeon_irq_chip_ciu3 = {
+ .name = "CIU3",
+ .irq_startup = edge_startup,
+ .irq_enable = octeon_irq_ciu3_enable,
+ .irq_disable = octeon_irq_ciu3_disable,
+ .irq_ack = octeon_irq_ciu3_ack,
+ .irq_mask = octeon_irq_ciu3_mask,
+ .irq_mask_ack = octeon_irq_ciu3_mask_ack,
+ .irq_unmask = octeon_irq_ciu3_enable,
+ .irq_set_type = octeon_irq_ciu_set_type,
+#ifdef CONFIG_SMP
+ .irq_set_affinity = octeon_irq_ciu3_set_affinity,
+ .irq_cpu_offline = octeon_irq_cpu_offline_ciu,
+#endif
+};
+
+int octeon_irq_ciu3_mapx(struct irq_domain *d, unsigned int virq,
+ irq_hw_number_t hw, struct irq_chip *chip)
+{
+ struct octeon_ciu3_info *ciu3_info = d->host_data;
+ struct octeon_ciu_chip_data *cd = kzalloc_node(sizeof(*cd), GFP_KERNEL,
+ ciu3_info->node);
+ if (!cd)
+ return -ENOMEM;
+ cd->intsn = hw;
+ cd->current_cpu = -1;
+ cd->ciu3_addr = ciu3_info->ciu3_addr;
+ cd->ciu_node = ciu3_info->node;
+ irq_set_chip_and_handler(virq, chip, handle_edge_irq);
+ irq_set_chip_data(virq, cd);
+
+ return 0;
+}
+
+static int octeon_irq_ciu3_map(struct irq_domain *d,
+ unsigned int virq, irq_hw_number_t hw)
+{
+ return octeon_irq_ciu3_mapx(d, virq, hw, &octeon_irq_chip_ciu3);
+}
+
+static struct irq_domain_ops octeon_dflt_domain_ciu3_ops = {
+ .map = octeon_irq_ciu3_map,
+ .unmap = octeon_irq_free_cd,
+ .xlate = octeon_irq_ciu3_xlat,
+};
+
+static void octeon_irq_ciu3_ip2(void)
+{
+ union cvmx_ciu3_destx_pp_int dest_pp_int;
+ struct octeon_ciu3_info *ciu3_info;
+ u64 ciu3_addr;
+
+ ciu3_info = __this_cpu_read(octeon_ciu3_info);
+ ciu3_addr = ciu3_info->ciu3_addr;
+
+ dest_pp_int.u64 = cvmx_read_csr(ciu3_addr + CIU3_DEST_PP_INT(3 * cvmx_get_local_core_num()));
+
+ if (likely(dest_pp_int.s.intr)) {
+ irq_hw_number_t intsn = dest_pp_int.s.intsn;
+ irq_hw_number_t hw;
+ struct irq_domain *domain;
+ /* Get the domain to use from the major block */
+ int block = intsn >> 12;
+ int ret;
+
+ domain = ciu3_info->domain[block];
+ if (ciu3_info->intsn2hw[block])
+ hw = ciu3_info->intsn2hw[block](domain, intsn);
+ else
+ hw = intsn;
+
+ ret = handle_domain_irq(domain, hw, NULL);
+ if (ret < 0) {
+ union cvmx_ciu3_iscx_w1c isc_w1c;
+ u64 isc_w1c_addr = ciu3_addr + CIU3_ISC_W1C(intsn);
+
+ isc_w1c.u64 = 0;
+ isc_w1c.s.en = 1;
+ cvmx_write_csr(isc_w1c_addr, isc_w1c.u64);
+ cvmx_read_csr(isc_w1c_addr);
+ spurious_interrupt();
+ }
+ } else {
+ spurious_interrupt();
+ }
+}
+
+/*
+ * 10 mbox per core starting from zero.
+ * Base mbox is core * 10
+ */
+static unsigned int octeon_irq_ciu3_base_mbox_intsn(int core)
+{
+ /* SW (mbox) are 0x04 in bits 12..19 */
+ return 0x04000 + CIU3_MBOX_PER_CORE * core;
+}
+
+static unsigned int octeon_irq_ciu3_mbox_intsn_for_core(int core, unsigned int mbox)
+{
+ return octeon_irq_ciu3_base_mbox_intsn(core) + mbox;
+}
+
+static unsigned int octeon_irq_ciu3_mbox_intsn_for_cpu(int cpu, unsigned int mbox)
+{
+ int local_core = octeon_coreid_for_cpu(cpu) & 0x3f;
+
+ return octeon_irq_ciu3_mbox_intsn_for_core(local_core, mbox);
+}
+
+static void octeon_irq_ciu3_mbox(void)
+{
+ union cvmx_ciu3_destx_pp_int dest_pp_int;
+ struct octeon_ciu3_info *ciu3_info;
+ u64 ciu3_addr;
+ int core = cvmx_get_local_core_num();
+
+ ciu3_info = __this_cpu_read(octeon_ciu3_info);
+ ciu3_addr = ciu3_info->ciu3_addr;
+
+ dest_pp_int.u64 = cvmx_read_csr(ciu3_addr + CIU3_DEST_PP_INT(1 + 3 * core));
+
+ if (likely(dest_pp_int.s.intr)) {
+ irq_hw_number_t intsn = dest_pp_int.s.intsn;
+ int mbox = intsn - octeon_irq_ciu3_base_mbox_intsn(core);
+
+ if (likely(mbox >= 0 && mbox < CIU3_MBOX_PER_CORE)) {
+ do_IRQ(mbox + OCTEON_IRQ_MBOX0);
+ } else {
+ union cvmx_ciu3_iscx_w1c isc_w1c;
+ u64 isc_w1c_addr = ciu3_addr + CIU3_ISC_W1C(intsn);
+
+ isc_w1c.u64 = 0;
+ isc_w1c.s.en = 1;
+ cvmx_write_csr(isc_w1c_addr, isc_w1c.u64);
+ cvmx_read_csr(isc_w1c_addr);
+ spurious_interrupt();
+ }
+ } else {
+ spurious_interrupt();
+ }
+}
+
+void octeon_ciu3_mbox_send(int cpu, unsigned int mbox)
+{
+ struct octeon_ciu3_info *ciu3_info;
+ unsigned int intsn;
+ union cvmx_ciu3_iscx_w1s isc_w1s;
+ u64 isc_w1s_addr;
+
+ if (WARN_ON_ONCE(mbox >= CIU3_MBOX_PER_CORE))
+ return;
+
+ intsn = octeon_irq_ciu3_mbox_intsn_for_cpu(cpu, mbox);
+ ciu3_info = per_cpu(octeon_ciu3_info, cpu);
+ isc_w1s_addr = ciu3_info->ciu3_addr + CIU3_ISC_W1S(intsn);
+
+ isc_w1s.u64 = 0;
+ isc_w1s.s.raw = 1;
+
+ cvmx_write_csr(isc_w1s_addr, isc_w1s.u64);
+ cvmx_read_csr(isc_w1s_addr);
+}
+
+static void octeon_irq_ciu3_mbox_set_enable(struct irq_data *data, int cpu, bool en)
+{
+ struct octeon_ciu3_info *ciu3_info;
+ unsigned int intsn;
+ u64 isc_ctl_addr, isc_w1c_addr;
+ union cvmx_ciu3_iscx_ctl isc_ctl;
+ unsigned int mbox = data->irq - OCTEON_IRQ_MBOX0;
+
+ intsn = octeon_irq_ciu3_mbox_intsn_for_cpu(cpu, mbox);
+ ciu3_info = per_cpu(octeon_ciu3_info, cpu);
+ isc_w1c_addr = ciu3_info->ciu3_addr + CIU3_ISC_W1C(intsn);
+ isc_ctl_addr = ciu3_info->ciu3_addr + CIU3_ISC_CTL(intsn);
+
+ isc_ctl.u64 = 0;
+ isc_ctl.s.en = 1;
+
+ cvmx_write_csr(isc_w1c_addr, isc_ctl.u64);
+ cvmx_write_csr(isc_ctl_addr, 0);
+ if (en) {
+ unsigned int idt = per_cpu(octeon_irq_ciu3_idt_ip3, cpu);
+
+ isc_ctl.u64 = 0;
+ isc_ctl.s.en = 1;
+ isc_ctl.s.idt = idt;
+ cvmx_write_csr(isc_ctl_addr, isc_ctl.u64);
+ }
+ cvmx_read_csr(isc_ctl_addr);
+}
+
+static void octeon_irq_ciu3_mbox_enable(struct irq_data *data)
+{
+ int cpu;
+ unsigned int mbox = data->irq - OCTEON_IRQ_MBOX0;
+
+ WARN_ON(mbox >= CIU3_MBOX_PER_CORE);
+
+ for_each_online_cpu(cpu)
+ octeon_irq_ciu3_mbox_set_enable(data, cpu, true);
+}
+
+static void octeon_irq_ciu3_mbox_disable(struct irq_data *data)
+{
+ int cpu;
+ unsigned int mbox = data->irq - OCTEON_IRQ_MBOX0;
+
+ WARN_ON(mbox >= CIU3_MBOX_PER_CORE);
+
+ for_each_online_cpu(cpu)
+ octeon_irq_ciu3_mbox_set_enable(data, cpu, false);
+}
+
+static void octeon_irq_ciu3_mbox_ack(struct irq_data *data)
+{
+ struct octeon_ciu3_info *ciu3_info;
+ unsigned int intsn;
+ u64 isc_w1c_addr;
+ union cvmx_ciu3_iscx_w1c isc_w1c;
+ unsigned int mbox = data->irq - OCTEON_IRQ_MBOX0;
+
+ intsn = octeon_irq_ciu3_mbox_intsn_for_core(cvmx_get_local_core_num(), mbox);
+
+ isc_w1c.u64 = 0;
+ isc_w1c.s.raw = 1;
+
+ ciu3_info = __this_cpu_read(octeon_ciu3_info);
+ isc_w1c_addr = ciu3_info->ciu3_addr + CIU3_ISC_W1C(intsn);
+ cvmx_write_csr(isc_w1c_addr, isc_w1c.u64);
+ cvmx_read_csr(isc_w1c_addr);
+}
+
+static void octeon_irq_ciu3_mbox_cpu_online(struct irq_data *data)
+{
+ octeon_irq_ciu3_mbox_set_enable(data, smp_processor_id(), true);
+}
+
+static void octeon_irq_ciu3_mbox_cpu_offline(struct irq_data *data)
+{
+ octeon_irq_ciu3_mbox_set_enable(data, smp_processor_id(), false);
+}
+
+static int octeon_irq_ciu3_alloc_resources(struct octeon_ciu3_info *ciu3_info)
+{
+ u64 b = ciu3_info->ciu3_addr;
+ int idt_ip2, idt_ip3, idt_ip4;
+ int unused_idt2;
+ int core = cvmx_get_local_core_num();
+ int i;
+
+ __this_cpu_write(octeon_ciu3_info, ciu3_info);
+
+ /*
+ * 4 idt per core starting from 1 because zero is reserved.
+ * Base idt per core is 4 * core + 1
+ */
+ idt_ip2 = core * 4 + 1;
+ idt_ip3 = core * 4 + 2;
+ idt_ip4 = core * 4 + 3;
+ unused_idt2 = core * 4 + 4;
+ __this_cpu_write(octeon_irq_ciu3_idt_ip2, idt_ip2);
+ __this_cpu_write(octeon_irq_ciu3_idt_ip3, idt_ip3);
+
+ /* ip2 interrupts for this CPU */
+ cvmx_write_csr(b + CIU3_IDT_CTL(idt_ip2), 0);
+ cvmx_write_csr(b + CIU3_IDT_PP(idt_ip2, 0), 1ull << core);
+ cvmx_write_csr(b + CIU3_IDT_IO(idt_ip2), 0);
+
+ /* ip3 interrupts for this CPU */
+ cvmx_write_csr(b + CIU3_IDT_CTL(idt_ip3), 1);
+ cvmx_write_csr(b + CIU3_IDT_PP(idt_ip3, 0), 1ull << core);
+ cvmx_write_csr(b + CIU3_IDT_IO(idt_ip3), 0);
+
+ /* ip4 interrupts for this CPU */
+ cvmx_write_csr(b + CIU3_IDT_CTL(idt_ip4), 2);
+ cvmx_write_csr(b + CIU3_IDT_PP(idt_ip4, 0), 0);
+ cvmx_write_csr(b + CIU3_IDT_IO(idt_ip4), 0);
+
+ cvmx_write_csr(b + CIU3_IDT_CTL(unused_idt2), 0);
+ cvmx_write_csr(b + CIU3_IDT_PP(unused_idt2, 0), 0);
+ cvmx_write_csr(b + CIU3_IDT_IO(unused_idt2), 0);
+
+ for (i = 0; i < CIU3_MBOX_PER_CORE; i++) {
+ unsigned int intsn = octeon_irq_ciu3_mbox_intsn_for_core(core, i);
+
+ cvmx_write_csr(b + CIU3_ISC_W1C(intsn), 2);
+ cvmx_write_csr(b + CIU3_ISC_CTL(intsn), 0);
+ }
+
+ return 0;
+}
+
+static void octeon_irq_setup_secondary_ciu3(void)
+{
+ struct octeon_ciu3_info *ciu3_info;
+
+ ciu3_info = octeon_ciu3_info_per_node[cvmx_get_node_num()];
+ octeon_irq_ciu3_alloc_resources(ciu3_info);
+ irq_cpu_online();
+
+ /* Enable the CIU lines */
+ set_c0_status(STATUSF_IP3 | STATUSF_IP2);
+ if (octeon_irq_use_ip4)
+ set_c0_status(STATUSF_IP4);
+ else
+ clear_c0_status(STATUSF_IP4);
+}
+
+static struct irq_chip octeon_irq_chip_ciu3_mbox = {
+ .name = "CIU3-M",
+ .irq_enable = octeon_irq_ciu3_mbox_enable,
+ .irq_disable = octeon_irq_ciu3_mbox_disable,
+ .irq_ack = octeon_irq_ciu3_mbox_ack,
+
+ .irq_cpu_online = octeon_irq_ciu3_mbox_cpu_online,
+ .irq_cpu_offline = octeon_irq_ciu3_mbox_cpu_offline,
+ .flags = IRQCHIP_ONOFFLINE_ENABLED,
+};
+
+static int __init octeon_irq_init_ciu3(struct device_node *ciu_node,
+ struct device_node *parent)
+{
+ int i;
+ int node;
+ struct irq_domain *domain;
+ struct octeon_ciu3_info *ciu3_info;
+ const __be32 *zero_addr;
+ u64 base_addr;
+ union cvmx_ciu3_const consts;
+
+ node = 0; /* of_node_to_nid(ciu_node); */
+ ciu3_info = kzalloc_node(sizeof(*ciu3_info), GFP_KERNEL, node);
+
+ if (!ciu3_info)
+ return -ENOMEM;
+
+ zero_addr = of_get_address(ciu_node, 0, NULL, NULL);
+ if (WARN_ON(!zero_addr))
+ return -EINVAL;
+
+ base_addr = of_translate_address(ciu_node, zero_addr);
+ base_addr = (u64)phys_to_virt(base_addr);
+
+ ciu3_info->ciu3_addr = base_addr;
+ ciu3_info->node = node;
+
+ consts.u64 = cvmx_read_csr(base_addr + CIU3_CONST);
+
+ octeon_irq_setup_secondary = octeon_irq_setup_secondary_ciu3;
+
+ octeon_irq_ip2 = octeon_irq_ciu3_ip2;
+ octeon_irq_ip3 = octeon_irq_ciu3_mbox;
+ octeon_irq_ip4 = octeon_irq_ip4_mask;
+
+ if (node == cvmx_get_node_num()) {
+ /* Mips internal */
+ octeon_irq_init_core();
+
+ /* Only do per CPU things if it is the CIU of the boot node. */
+ i = irq_alloc_descs_from(OCTEON_IRQ_MBOX0, 8, node);
+ WARN_ON(i < 0);
+
+ for (i = 0; i < 8; i++)
+ irq_set_chip_and_handler(i + OCTEON_IRQ_MBOX0,
+ &octeon_irq_chip_ciu3_mbox, handle_percpu_irq);
+ }
+
+ /*
+ * Initialize all domains to use the default domain. Specific major
+ * blocks will overwrite the default domain as needed.
+ */
+ domain = irq_domain_add_tree(ciu_node, &octeon_dflt_domain_ciu3_ops,
+ ciu3_info);
+ for (i = 0; i < MAX_CIU3_DOMAINS; i++)
+ ciu3_info->domain[i] = domain;
+
+ octeon_ciu3_info_per_node[node] = ciu3_info;
+
+ if (node == cvmx_get_node_num()) {
+ /* Only do per CPU things if it is the CIU of the boot node. */
+ octeon_irq_ciu3_alloc_resources(ciu3_info);
+ if (node == 0)
+ irq_set_default_host(domain);
+
+ octeon_irq_use_ip4 = false;
+ /* Enable the CIU lines */
+ set_c0_status(STATUSF_IP2 | STATUSF_IP3);
+ clear_c0_status(STATUSF_IP4);
+ }
+
+ return 0;
+}
+
static struct of_device_id ciu_types[] __initdata = {
{.compatible = "cavium,octeon-3860-ciu", .data = octeon_irq_init_ciu},
{.compatible = "cavium,octeon-3860-gpio", .data = octeon_irq_init_gpio},
{.compatible = "cavium,octeon-6880-ciu2", .data = octeon_irq_init_ciu2},
+ {.compatible = "cavium,octeon-7890-ciu3", .data = octeon_irq_init_ciu3},
{.compatible = "cavium,octeon-7130-cib", .data = octeon_irq_init_cib},
{}
};
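The CIU3 support added above leans on three small numbering rules that are easy to lose in the volume of code: the top 8 bits of a 20-bit intsn pick the major block (hence 256 possible irq domains), mailbox intsns sit in block 0x04 at ten per core, and each core owns four IDT entries starting at index core * 4 + 1. The sketch below is plain user-space C, not kernel code, and just reproduces that arithmetic.

#include <stdio.h>

#define CIU3_MBOX_PER_CORE 10

static unsigned int intsn_major_block(unsigned int intsn)
{
	return intsn >> 12;		/* 256 possible major blocks/domains */
}

static unsigned int mbox_intsn(int core, unsigned int mbox)
{
	return 0x04000 + CIU3_MBOX_PER_CORE * core + mbox;
}

static void core_idts(int core, int *ip2, int *ip3, int *ip4)
{
	*ip2 = core * 4 + 1;
	*ip3 = core * 4 + 2;
	*ip4 = core * 4 + 3;		/* core * 4 + 4 is left unused */
}

int main(void)
{
	int ip2, ip3, ip4;

	core_idts(2, &ip2, &ip3, &ip4);
	printf("core 2: mbox0 intsn %#x, idt ip2=%d ip3=%d ip4=%d\n",
	       mbox_intsn(2, 0), ip2, ip3, ip4);
	printf("major block of that intsn: %#x\n",
	       intsn_major_block(mbox_intsn(2, 0)));
	return 0;
}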
diff --git a/arch/mips/cavium-octeon/octeon-platform.c b/arch/mips/cavium-octeon/octeon-platform.c
index d113c8ded6e2..7aeafedff94e 100644
--- a/arch/mips/cavium-octeon/octeon-platform.c
+++ b/arch/mips/cavium-octeon/octeon-platform.c
@@ -13,6 +13,7 @@
#include <linux/i2c.h>
#include <linux/usb.h>
#include <linux/dma-mapping.h>
+#include <linux/etherdevice.h>
#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/slab.h>
@@ -525,10 +526,17 @@ static void __init octeon_fdt_set_phy(int eth, int phy_addr)
static void __init octeon_fdt_set_mac_addr(int n, u64 *pmac)
{
+ const u8 *old_mac;
+ int old_len;
u8 new_mac[6];
u64 mac = *pmac;
int r;
+ old_mac = fdt_getprop(initial_boot_params, n, "local-mac-address",
+ &old_len);
+ if (!old_mac || old_len != 6 || is_valid_ether_addr(old_mac))
+ return;
+
new_mac[0] = (mac >> 40) & 0xff;
new_mac[1] = (mac >> 32) & 0xff;
new_mac[2] = (mac >> 24) & 0xff;
@@ -560,7 +568,7 @@ static void __init octeon_fdt_rm_ethernet(int node)
fdt_nop_node(initial_boot_params, node);
}
-static void __init octeon_fdt_pip_port(int iface, int i, int p, int max, u64 *pmac)
+static void __init octeon_fdt_pip_port(int iface, int i, int p, int max)
{
char name_buffer[20];
int eth;
@@ -583,10 +591,9 @@ static void __init octeon_fdt_pip_port(int iface, int i, int p, int max, u64 *pm
phy_addr = cvmx_helper_board_get_mii_address(ipd_port);
octeon_fdt_set_phy(eth, phy_addr);
- octeon_fdt_set_mac_addr(eth, pmac);
}
-static void __init octeon_fdt_pip_iface(int pip, int idx, u64 *pmac)
+static void __init octeon_fdt_pip_iface(int pip, int idx)
{
char name_buffer[20];
int iface;
@@ -602,7 +609,73 @@ static void __init octeon_fdt_pip_iface(int pip, int idx, u64 *pmac)
count = cvmx_helper_ports_on_interface(idx);
for (p = 0; p < 16; p++)
- octeon_fdt_pip_port(iface, idx, p, count - 1, pmac);
+ octeon_fdt_pip_port(iface, idx, p, count - 1);
+}
+
+void __init octeon_fill_mac_addresses(void)
+{
+ const char *alias_prop;
+ char name_buffer[20];
+ u64 mac_addr_base;
+ int aliases;
+ int pip;
+ int i;
+
+ aliases = fdt_path_offset(initial_boot_params, "/aliases");
+ if (aliases < 0)
+ return;
+
+ mac_addr_base =
+ ((octeon_bootinfo->mac_addr_base[0] & 0xffull)) << 40 |
+ ((octeon_bootinfo->mac_addr_base[1] & 0xffull)) << 32 |
+ ((octeon_bootinfo->mac_addr_base[2] & 0xffull)) << 24 |
+ ((octeon_bootinfo->mac_addr_base[3] & 0xffull)) << 16 |
+ ((octeon_bootinfo->mac_addr_base[4] & 0xffull)) << 8 |
+ (octeon_bootinfo->mac_addr_base[5] & 0xffull);
+
+ for (i = 0; i < 2; i++) {
+ int mgmt;
+
+ snprintf(name_buffer, sizeof(name_buffer), "mix%d", i);
+ alias_prop = fdt_getprop(initial_boot_params, aliases,
+ name_buffer, NULL);
+ if (!alias_prop)
+ continue;
+ mgmt = fdt_path_offset(initial_boot_params, alias_prop);
+ if (mgmt < 0)
+ continue;
+ octeon_fdt_set_mac_addr(mgmt, &mac_addr_base);
+ }
+
+ alias_prop = fdt_getprop(initial_boot_params, aliases, "pip", NULL);
+ if (!alias_prop)
+ return;
+
+ pip = fdt_path_offset(initial_boot_params, alias_prop);
+ if (pip < 0)
+ return;
+
+ for (i = 0; i <= 4; i++) {
+ int iface;
+ int p;
+
+ snprintf(name_buffer, sizeof(name_buffer), "interface@%d", i);
+ iface = fdt_subnode_offset(initial_boot_params, pip,
+ name_buffer);
+ if (iface < 0)
+ continue;
+ for (p = 0; p < 16; p++) {
+ int eth;
+
+ snprintf(name_buffer, sizeof(name_buffer),
+ "ethernet@%x", p);
+ eth = fdt_subnode_offset(initial_boot_params, iface,
+ name_buffer);
+ if (eth < 0)
+ continue;
+ octeon_fdt_set_mac_addr(eth, &mac_addr_base);
+ }
+ }
}
int __init octeon_prune_device_tree(void)
@@ -612,7 +685,6 @@ int __init octeon_prune_device_tree(void)
const char *alias_prop;
char name_buffer[20];
int aliases;
- u64 mac_addr_base;
if (fdt_check_header(initial_boot_params))
panic("Corrupt Device Tree.");
@@ -623,15 +695,6 @@ int __init octeon_prune_device_tree(void)
return -EINVAL;
}
-
- mac_addr_base =
- ((octeon_bootinfo->mac_addr_base[0] & 0xffull)) << 40 |
- ((octeon_bootinfo->mac_addr_base[1] & 0xffull)) << 32 |
- ((octeon_bootinfo->mac_addr_base[2] & 0xffull)) << 24 |
- ((octeon_bootinfo->mac_addr_base[3] & 0xffull)) << 16 |
- ((octeon_bootinfo->mac_addr_base[4] & 0xffull)) << 8 |
- (octeon_bootinfo->mac_addr_base[5] & 0xffull);
-
if (OCTEON_IS_MODEL(OCTEON_CN52XX) || OCTEON_IS_MODEL(OCTEON_CN63XX))
max_port = 2;
else if (OCTEON_IS_MODEL(OCTEON_CN56XX) || OCTEON_IS_MODEL(OCTEON_CN68XX))
@@ -660,7 +723,6 @@ int __init octeon_prune_device_tree(void)
} else {
int phy_addr = cvmx_helper_board_get_mii_address(CVMX_HELPER_BOARD_MGMT_IPD_PORT + i);
octeon_fdt_set_phy(mgmt, phy_addr);
- octeon_fdt_set_mac_addr(mgmt, &mac_addr_base);
}
}
}
@@ -670,7 +732,7 @@ int __init octeon_prune_device_tree(void)
int pip = fdt_path_offset(initial_boot_params, pip_path);
if (pip >= 0)
for (i = 0; i <= 4; i++)
- octeon_fdt_pip_iface(pip, i, &mac_addr_base);
+ octeon_fdt_pip_iface(pip, i);
}
/* I2C */
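octeon_fill_mac_addresses() packs the six bootinfo MAC bytes into a u64, most significant byte first, and octeon_fdt_set_mac_addr() unpacks that value back into a byte array before patching the FDT. The pair of conversions, reduced to standalone user-space C; the +1 below only illustrates stepping through a block of addresses, and how the real code advances the base is internal to octeon_fdt_set_mac_addr(), which is only partly shown in this diff.

#include <stdint.h>
#include <stdio.h>

static uint64_t mac_to_u64(const uint8_t mac[6])
{
	uint64_t v = 0;
	int i;

	for (i = 0; i < 6; i++)
		v = (v << 8) | mac[i];
	return v;
}

static void u64_to_mac(uint64_t v, uint8_t mac[6])
{
	int i;

	/* Mirrors new_mac[0] = (mac >> 40) & 0xff, etc., above. */
	for (i = 0; i < 6; i++)
		mac[i] = (v >> (8 * (5 - i))) & 0xff;
}

int main(void)
{
	const uint8_t base[6] = { 0x00, 0x0f, 0xb7, 0x01, 0x02, 0x03 };	/* arbitrary example base */
	uint8_t out[6];

	u64_to_mac(mac_to_u64(base) + 1, out);
	printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
	       out[0], out[1], out[2], out[3], out[4], out[5]);
	return 0;
}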
diff --git a/arch/mips/cavium-octeon/setup.c b/arch/mips/cavium-octeon/setup.c
index cd7101fb6227..64f852b063a8 100644
--- a/arch/mips/cavium-octeon/setup.c
+++ b/arch/mips/cavium-octeon/setup.c
@@ -43,8 +43,6 @@
#include <asm/octeon/cvmx-mio-defs.h>
#include <asm/octeon/cvmx-rst-defs.h>
-extern struct plat_smp_ops octeon_smp_ops;
-
#ifdef CONFIG_PCI
extern void pci_console_init(const char *arg);
#endif
@@ -466,15 +464,25 @@ static void octeon_halt(void)
static char __read_mostly octeon_system_type[80];
-static int __init init_octeon_system_type(void)
+static void __init init_octeon_system_type(void)
{
- snprintf(octeon_system_type, sizeof(octeon_system_type), "%s (%s)",
- cvmx_board_type_to_string(octeon_bootinfo->board_type),
- octeon_model_get_string(read_c0_prid()));
+ char const *board_type;
+
+ board_type = cvmx_board_type_to_string(octeon_bootinfo->board_type);
+ if (board_type == NULL) {
+ struct device_node *root;
+ int ret;
+
+ root = of_find_node_by_path("/");
+ ret = of_property_read_string(root, "model", &board_type);
+ of_node_put(root);
+ if (ret)
+ board_type = "Unsupported Board";
+ }
- return 0;
+ snprintf(octeon_system_type, sizeof(octeon_system_type), "%s (%s)",
+ board_type, octeon_model_get_string(read_c0_prid()));
}
-early_initcall(init_octeon_system_type);
/**
* Return a string representing the system type
@@ -492,8 +500,6 @@ const char *get_system_type(void)
void octeon_user_io_init(void)
{
union octeon_cvmemctl cvmmemctl;
- union cvmx_iob_fau_timeout fau_timeout;
- union cvmx_pow_nw_tim nm_tim;
/* Get the current settings for CP0_CVMMEMCTL_REG */
cvmmemctl.u64 = read_c0_cvmmemctl();
@@ -595,17 +601,27 @@ void octeon_user_io_init(void)
CONFIG_CAVIUM_OCTEON_CVMSEG_SIZE,
CONFIG_CAVIUM_OCTEON_CVMSEG_SIZE * 128);
- /* Set a default for the hardware timeouts */
- fau_timeout.u64 = 0;
- fau_timeout.s.tout_val = 0xfff;
- /* Disable tagwait FAU timeout */
- fau_timeout.s.tout_enb = 0;
- cvmx_write_csr(CVMX_IOB_FAU_TIMEOUT, fau_timeout.u64);
+ if (octeon_has_feature(OCTEON_FEATURE_FAU)) {
+ union cvmx_iob_fau_timeout fau_timeout;
+
+ /* Set a default for the hardware timeouts */
+ fau_timeout.u64 = 0;
+ fau_timeout.s.tout_val = 0xfff;
+ /* Disable tagwait FAU timeout */
+ fau_timeout.s.tout_enb = 0;
+ cvmx_write_csr(CVMX_IOB_FAU_TIMEOUT, fau_timeout.u64);
+ }
- nm_tim.u64 = 0;
- /* 4096 cycles */
- nm_tim.s.nw_tim = 3;
- cvmx_write_csr(CVMX_POW_NW_TIM, nm_tim.u64);
+ if ((!OCTEON_IS_MODEL(OCTEON_CN68XX) &&
+ !OCTEON_IS_MODEL(OCTEON_CN7XXX)) ||
+ OCTEON_IS_MODEL(OCTEON_CN70XX)) {
+ union cvmx_pow_nw_tim nm_tim;
+
+ nm_tim.u64 = 0;
+ /* 4096 cycles */
+ nm_tim.s.nw_tim = 3;
+ cvmx_write_csr(CVMX_POW_NW_TIM, nm_tim.u64);
+ }
write_octeon_c0_icacheerr(0);
write_c0_derraddr1(0);
@@ -637,9 +653,22 @@ void __init prom_init(void)
sysinfo = cvmx_sysinfo_get();
memset(sysinfo, 0, sizeof(*sysinfo));
sysinfo->system_dram_size = octeon_bootinfo->dram_size << 20;
- sysinfo->phy_mem_desc_ptr =
- cvmx_phys_to_ptr(octeon_bootinfo->phy_mem_desc_addr);
- sysinfo->core_mask = octeon_bootinfo->core_mask;
+ sysinfo->phy_mem_desc_addr = (u64)phys_to_virt(octeon_bootinfo->phy_mem_desc_addr);
+
+ if ((octeon_bootinfo->major_version > 1) ||
+ (octeon_bootinfo->major_version == 1 &&
+ octeon_bootinfo->minor_version >= 4))
+ cvmx_coremask_copy(&sysinfo->core_mask,
+ &octeon_bootinfo->ext_core_mask);
+ else
+ cvmx_coremask_set64(&sysinfo->core_mask,
+ octeon_bootinfo->core_mask);
+
+ /* Some broken u-boot pass garbage in upper bits, clear them out */
+ if (!OCTEON_IS_MODEL(OCTEON_CN78XX))
+ for (i = 512; i < 1024; i++)
+ cvmx_coremask_clear_core(&sysinfo->core_mask, i);
+
sysinfo->exception_base_addr = octeon_bootinfo->exception_base_addr;
sysinfo->cpu_clock_hz = octeon_bootinfo->eclock_hz;
sysinfo->dram_data_rate_hz = octeon_bootinfo->dclock_hz * 2;
@@ -867,7 +896,7 @@ void __init prom_init(void)
#endif
octeon_user_io_init();
- register_smp_ops(&octeon_smp_ops);
+ octeon_setup_smp();
}
/* Exclude a single page from the regions obtained in plat_mem_setup. */
@@ -1079,6 +1108,7 @@ void __init prom_free_prom_memory(void)
}
}
+void __init octeon_fill_mac_addresses(void);
int octeon_prune_device_tree(void);
extern const char __appended_dtb;
@@ -1088,11 +1118,13 @@ void __init device_tree_init(void)
{
const void *fdt;
bool do_prune;
+ bool fill_mac;
#ifdef CONFIG_MIPS_ELF_APPENDED_DTB
if (!fdt_check_header(&__appended_dtb)) {
fdt = &__appended_dtb;
do_prune = false;
+ fill_mac = true;
pr_info("Using appended Device Tree.\n");
} else
#endif
@@ -1101,13 +1133,16 @@ void __init device_tree_init(void)
if (fdt_check_header(fdt))
panic("Corrupt Device Tree passed to kernel.");
do_prune = false;
+ fill_mac = false;
pr_info("Using passed Device Tree.\n");
} else if (OCTEON_IS_MODEL(OCTEON_CN68XX)) {
fdt = &__dtb_octeon_68xx_begin;
do_prune = true;
+ fill_mac = true;
} else {
fdt = &__dtb_octeon_3xxx_begin;
do_prune = true;
+ fill_mac = true;
}
initial_boot_params = (void *)fdt;
@@ -1116,7 +1151,10 @@ void __init device_tree_init(void)
octeon_prune_device_tree();
pr_info("Using internal Device Tree.\n");
}
+ if (fill_mac)
+ octeon_fill_mac_addresses();
unflatten_and_copy_device_tree();
+ init_octeon_system_type();
}
static int __initdata disable_octeon_edac_p;
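prom_init() above trusts the extended core mask only from bootinfo version 1.4 onward; the major/minor comparison it open-codes is the usual "at least this version" predicate, sketched here as a hypothetical helper that is not in the kernel source.

static int bootinfo_at_least(unsigned int major, unsigned int minor,
			     unsigned int want_major, unsigned int want_minor)
{
	return major > want_major ||
	       (major == want_major && minor >= want_minor);
}

/* use_ext_core_mask = bootinfo_at_least(major_version, minor_version, 1, 4); */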
diff --git a/arch/mips/cavium-octeon/smp.c b/arch/mips/cavium-octeon/smp.c
index 42412ba0f3bf..33aab89259f3 100644
--- a/arch/mips/cavium-octeon/smp.c
+++ b/arch/mips/cavium-octeon/smp.c
@@ -30,25 +30,55 @@ uint64_t octeon_bootloader_entry_addr;
EXPORT_SYMBOL(octeon_bootloader_entry_addr);
#endif
+static void octeon_icache_flush(void)
+{
+ asm volatile ("synci 0($0)\n");
+}
+
+static void (*octeon_message_functions[8])(void) = {
+ scheduler_ipi,
+ generic_smp_call_function_interrupt,
+ octeon_icache_flush,
+};
+
static irqreturn_t mailbox_interrupt(int irq, void *dev_id)
{
- const int coreid = cvmx_get_core_num();
- uint64_t action;
+ u64 mbox_clrx = CVMX_CIU_MBOX_CLRX(cvmx_get_core_num());
+ u64 action;
+ int i;
+
+ /*
+ * Make sure the function array initialization remains
+ * correct.
+ */
+ BUILD_BUG_ON(SMP_RESCHEDULE_YOURSELF != (1 << 0));
+ BUILD_BUG_ON(SMP_CALL_FUNCTION != (1 << 1));
+ BUILD_BUG_ON(SMP_ICACHE_FLUSH != (1 << 2));
+
+ /*
+ * Load the mailbox register to figure out what we're supposed
+ * to do.
+ */
+ action = cvmx_read_csr(mbox_clrx);
- /* Load the mailbox register to figure out what we're supposed to do */
- action = cvmx_read_csr(CVMX_CIU_MBOX_CLRX(coreid)) & 0xffff;
+ if (OCTEON_IS_MODEL(OCTEON_CN68XX))
+ action &= 0xff;
+ else
+ action &= 0xffff;
/* Clear the mailbox to clear the interrupt */
- cvmx_write_csr(CVMX_CIU_MBOX_CLRX(coreid), action);
+ cvmx_write_csr(mbox_clrx, action);
- if (action & SMP_CALL_FUNCTION)
- generic_smp_call_function_interrupt();
- if (action & SMP_RESCHEDULE_YOURSELF)
- scheduler_ipi();
+ for (i = 0; i < ARRAY_SIZE(octeon_message_functions) && action;) {
+ if (action & 1) {
+ void (*fn)(void) = octeon_message_functions[i];
- /* Check if we've been told to flush the icache */
- if (action & SMP_ICACHE_FLUSH)
- asm volatile ("synci 0($0)\n");
+ if (fn)
+ fn();
+ }
+ action >>= 1;
+ i++;
+ }
return IRQ_HANDLED;
}
@@ -97,13 +127,15 @@ static void octeon_smp_hotplug_setup(void)
#endif
}
-static void octeon_smp_setup(void)
+static void __init octeon_smp_setup(void)
{
const int coreid = cvmx_get_core_num();
int cpus;
int id;
- int core_mask = octeon_get_boot_coremask();
+ struct cvmx_sysinfo *sysinfo = cvmx_sysinfo_get();
+
#ifdef CONFIG_HOTPLUG_CPU
+ int core_mask = octeon_get_boot_coremask();
unsigned int num_cores = cvmx_octeon_num_cores();
#endif
@@ -119,7 +151,7 @@ static void octeon_smp_setup(void)
/* The present CPUs get the lowest CPU numbers. */
cpus = 1;
for (id = 0; id < NR_CPUS; id++) {
- if ((id != coreid) && (core_mask & (1 << id))) {
+ if ((id != coreid) && cvmx_coremask_is_core_set(&sysinfo->core_mask, id)) {
set_cpu_possible(cpus, true);
set_cpu_present(cpus, true);
__cpu_number_map[id] = cpus;
@@ -196,7 +228,7 @@ static void octeon_init_secondary(void)
* Callout to firmware before smp_init
*
*/
-void octeon_prepare_cpus(unsigned int max_cpus)
+static void __init octeon_prepare_cpus(unsigned int max_cpus)
{
/*
* Only the low order mailbox bits are used for IPIs, leave
@@ -242,7 +274,7 @@ static int octeon_cpu_disable(void)
cpumask_clear_cpu(cpu, &cpu_callin_map);
octeon_fixup_irqs();
- flush_cache_all();
+ __flush_cache_all();
local_flush_tlb_all();
return 0;
@@ -352,7 +384,7 @@ static int octeon_cpu_callback(struct notifier_block *nfb,
{
unsigned int cpu = (unsigned long)hcpu;
- switch (action) {
+ switch (action & ~CPU_TASKS_FROZEN) {
case CPU_UP_PREPARE:
octeon_update_boot_vector(cpu);
break;
@@ -388,3 +420,92 @@ struct plat_smp_ops octeon_smp_ops = {
.cpu_die = octeon_cpu_die,
#endif
};
+
+static irqreturn_t octeon_78xx_reched_interrupt(int irq, void *dev_id)
+{
+ scheduler_ipi();
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t octeon_78xx_call_function_interrupt(int irq, void *dev_id)
+{
+ generic_smp_call_function_interrupt();
+ return IRQ_HANDLED;
+}
+
+static irqreturn_t octeon_78xx_icache_flush_interrupt(int irq, void *dev_id)
+{
+ octeon_icache_flush();
+ return IRQ_HANDLED;
+}
+
+/*
+ * Callout to firmware before smp_init
+ */
+static void octeon_78xx_prepare_cpus(unsigned int max_cpus)
+{
+ if (request_irq(OCTEON_IRQ_MBOX0 + 0,
+ octeon_78xx_reched_interrupt,
+ IRQF_PERCPU | IRQF_NO_THREAD, "Scheduler",
+ octeon_78xx_reched_interrupt)) {
+ panic("Cannot request_irq for SchedulerIPI");
+ }
+ if (request_irq(OCTEON_IRQ_MBOX0 + 1,
+ octeon_78xx_call_function_interrupt,
+ IRQF_PERCPU | IRQF_NO_THREAD, "SMP-Call",
+ octeon_78xx_call_function_interrupt)) {
+ panic("Cannot request_irq for SMP-Call");
+ }
+ if (request_irq(OCTEON_IRQ_MBOX0 + 2,
+ octeon_78xx_icache_flush_interrupt,
+ IRQF_PERCPU | IRQF_NO_THREAD, "ICache-Flush",
+ octeon_78xx_icache_flush_interrupt)) {
+ panic("Cannot request_irq for ICache-Flush");
+ }
+}
+
+static void octeon_78xx_send_ipi_single(int cpu, unsigned int action)
+{
+ int i;
+
+ for (i = 0; i < 8; i++) {
+ if (action & 1)
+ octeon_ciu3_mbox_send(cpu, i);
+ action >>= 1;
+ }
+}
+
+static void octeon_78xx_send_ipi_mask(const struct cpumask *mask,
+ unsigned int action)
+{
+ unsigned int cpu;
+
+ for_each_cpu(cpu, mask)
+ octeon_78xx_send_ipi_single(cpu, action);
+}
+
+static struct plat_smp_ops octeon_78xx_smp_ops = {
+ .send_ipi_single = octeon_78xx_send_ipi_single,
+ .send_ipi_mask = octeon_78xx_send_ipi_mask,
+ .init_secondary = octeon_init_secondary,
+ .smp_finish = octeon_smp_finish,
+ .boot_secondary = octeon_boot_secondary,
+ .smp_setup = octeon_smp_setup,
+ .prepare_cpus = octeon_78xx_prepare_cpus,
+#ifdef CONFIG_HOTPLUG_CPU
+ .cpu_disable = octeon_cpu_disable,
+ .cpu_die = octeon_cpu_die,
+#endif
+};
+
+void __init octeon_setup_smp(void)
+{
+ struct plat_smp_ops *ops;
+
+ if (octeon_has_feature(OCTEON_FEATURE_CIU3))
+ ops = &octeon_78xx_smp_ops;
+ else
+ ops = &octeon_smp_ops;
+
+ register_smp_ops(ops);
+}
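The reworked mailbox_interrupt() walks the mailbox word bit by bit and calls the handler registered at that bit's index; the BUILD_BUG_ON()s pin SMP_RESCHEDULE_YOURSELF, SMP_CALL_FUNCTION and SMP_ICACHE_FLUSH to bits 0, 1 and 2 so that indexing stays valid. The dispatch loop, reduced to a standalone C sketch:

#include <stddef.h>
#include <stdint.h>

typedef void (*ipi_fn)(void);

/* Call handlers[i] for every set bit i in action, stopping early once
 * no bits remain, exactly as the in-kernel loop above does. */
static void dispatch_mailbox(uint64_t action, ipi_fn handlers[], size_t n)
{
	size_t i;

	for (i = 0; i < n && action; i++, action >>= 1)
		if ((action & 1) && handlers[i])
			handlers[i]();
}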
diff --git a/arch/mips/configs/bcm47xx_defconfig b/arch/mips/configs/bcm47xx_defconfig
index 0db4eb319e0a..fad8e964f14c 100644
--- a/arch/mips/configs/bcm47xx_defconfig
+++ b/arch/mips/configs/bcm47xx_defconfig
@@ -23,7 +23,6 @@ CONFIG_IP_MROUTE=y
CONFIG_IP_MROUTE_MULTIPLE_TABLES=y
CONFIG_SYN_COOKIES=y
CONFIG_TCP_CONG_ADVANCED=y
-CONFIG_IPV6_PRIVACY=y
CONFIG_IPV6_MULTIPLE_TABLES=y
CONFIG_IPV6_SUBTREES=y
CONFIG_IPV6_MROUTE=y
diff --git a/arch/mips/configs/bcm63xx_defconfig b/arch/mips/configs/bcm63xx_defconfig
index 3fec26410f34..5599a9f1e3c6 100644
--- a/arch/mips/configs/bcm63xx_defconfig
+++ b/arch/mips/configs/bcm63xx_defconfig
@@ -44,6 +44,7 @@ CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
# CONFIG_STANDALONE is not set
# CONFIG_PREVENT_FIRMWARE_BUILD is not set
CONFIG_MTD=y
+CONFIG_MTD_BCM63XX_PARTS=y
CONFIG_MTD_CFI=y
CONFIG_MTD_CFI_INTELEXT=y
CONFIG_MTD_CFI_AMDSTD=y
diff --git a/arch/mips/configs/bigsur_defconfig b/arch/mips/configs/bigsur_defconfig
index e070dac071c8..d20b09d77b53 100644
--- a/arch/mips/configs/bigsur_defconfig
+++ b/arch/mips/configs/bigsur_defconfig
@@ -62,7 +62,6 @@ CONFIG_INET_XFRM_MODE_TRANSPORT=m
CONFIG_INET_XFRM_MODE_TUNNEL=m
# CONFIG_INET_LRO is not set
CONFIG_TCP_MD5SIG=y
-CONFIG_IPV6_PRIVACY=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_ROUTE_INFO=y
CONFIG_IPV6_OPTIMISTIC_DAD=y
diff --git a/arch/mips/configs/bmips_be_defconfig b/arch/mips/configs/bmips_be_defconfig
index 24dcb90b0f64..acf7785c4cdb 100644
--- a/arch/mips/configs/bmips_be_defconfig
+++ b/arch/mips/configs/bmips_be_defconfig
@@ -36,6 +36,7 @@ CONFIG_DEVTMPFS_MOUNT=y
CONFIG_PRINTK_TIME=y
CONFIG_BRCMSTB_GISB_ARB=y
CONFIG_MTD=y
+CONFIG_MTD_BCM63XX_PARTS=y
CONFIG_MTD_CFI=y
CONFIG_MTD_CFI_INTELEXT=y
CONFIG_MTD_CFI_AMDSTD=y
diff --git a/arch/mips/configs/cavium_octeon_defconfig b/arch/mips/configs/cavium_octeon_defconfig
index e57058d4ec22..dcac308cec39 100644
--- a/arch/mips/configs/cavium_octeon_defconfig
+++ b/arch/mips/configs/cavium_octeon_defconfig
@@ -119,14 +119,16 @@ CONFIG_SPI=y
CONFIG_SPI_OCTEON=y
# CONFIG_HWMON is not set
CONFIG_WATCHDOG=y
-# CONFIG_USB_SUPPORT is not set
-CONFIG_USB_EHCI_BIG_ENDIAN_MMIO=y
-CONFIG_USB_OHCI_BIG_ENDIAN_MMIO=y
-CONFIG_USB_OHCI_LITTLE_ENDIAN=y
+CONFIG_USB=m
+CONFIG_USB_EHCI_HCD=m
+CONFIG_USB_EHCI_HCD_PLATFORM=m
+CONFIG_USB_OHCI_HCD=m
+CONFIG_USB_OHCI_HCD_PLATFORM=m
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_DS1307=y
CONFIG_STAGING=y
CONFIG_OCTEON_ETHERNET=y
+CONFIG_OCTEON_USB=m
# CONFIG_IOMMU_SUPPORT is not set
CONFIG_EXT4_FS=y
CONFIG_EXT4_FS_POSIX_ACL=y
@@ -152,6 +154,9 @@ CONFIG_SECURITY=y
CONFIG_SECURITY_NETWORK=y
CONFIG_CRYPTO_CBC=y
CONFIG_CRYPTO_HMAC=y
-CONFIG_CRYPTO_MD5=y
+CONFIG_CRYPTO_MD5_OCTEON=y
+CONFIG_CRYPTO_SHA1_OCTEON=m
+CONFIG_CRYPTO_SHA256_OCTEON=m
+CONFIG_CRYPTO_SHA512_OCTEON=m
CONFIG_CRYPTO_DES=y
# CONFIG_CRYPTO_ANSI_CPRNG is not set
diff --git a/arch/mips/configs/db1xxx_defconfig b/arch/mips/configs/db1xxx_defconfig
index 3bdb72a70364..f0c8971030c4 100644
--- a/arch/mips/configs/db1xxx_defconfig
+++ b/arch/mips/configs/db1xxx_defconfig
@@ -18,7 +18,6 @@ CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_CPUACCT=y
-CONFIG_RESOURCE_COUNTERS=y
CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y
CONFIG_MEMCG_KMEM=y
diff --git a/arch/mips/configs/decstation_defconfig b/arch/mips/configs/decstation_defconfig
index ebc011c51e5a..2b6cb41d5715 100644
--- a/arch/mips/configs/decstation_defconfig
+++ b/arch/mips/configs/decstation_defconfig
@@ -30,7 +30,6 @@ CONFIG_INET_XFRM_MODE_TRANSPORT=m
CONFIG_INET_XFRM_MODE_TUNNEL=m
CONFIG_INET_XFRM_MODE_BEET=m
CONFIG_TCP_MD5SIG=y
-CONFIG_IPV6_PRIVACY=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_ROUTE_INFO=y
CONFIG_INET6_AH=m
diff --git a/arch/mips/configs/ip22_defconfig b/arch/mips/configs/ip22_defconfig
index 6ba9ce9fcdd5..5d83ff755547 100644
--- a/arch/mips/configs/ip22_defconfig
+++ b/arch/mips/configs/ip22_defconfig
@@ -48,7 +48,6 @@ CONFIG_INET_XFRM_MODE_TUNNEL=m
CONFIG_INET_XFRM_MODE_BEET=m
# CONFIG_INET_LRO is not set
CONFIG_TCP_MD5SIG=y
-CONFIG_IPV6_PRIVACY=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_ROUTE_INFO=y
CONFIG_IPV6_OPTIMISTIC_DAD=y
diff --git a/arch/mips/configs/ip27_defconfig b/arch/mips/configs/ip27_defconfig
index 77e9f505f5e4..2b74aee320a1 100644
--- a/arch/mips/configs/ip27_defconfig
+++ b/arch/mips/configs/ip27_defconfig
@@ -43,7 +43,6 @@ CONFIG_INET_XFRM_MODE_TUNNEL=m
CONFIG_INET_XFRM_MODE_BEET=m
CONFIG_TCP_MD5SIG=y
CONFIG_IPV6=y
-CONFIG_IPV6_PRIVACY=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_ROUTE_INFO=y
CONFIG_IPV6_OPTIMISTIC_DAD=y
diff --git a/arch/mips/configs/jazz_defconfig b/arch/mips/configs/jazz_defconfig
index a5e85e1ee5de..3019fce63cd3 100644
--- a/arch/mips/configs/jazz_defconfig
+++ b/arch/mips/configs/jazz_defconfig
@@ -34,7 +34,6 @@ CONFIG_IP_PIMSM_V2=y
CONFIG_INET_XFRM_MODE_TRANSPORT=m
CONFIG_INET_XFRM_MODE_TUNNEL=m
CONFIG_TCP_MD5SIG=y
-CONFIG_IPV6_PRIVACY=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_ROUTE_INFO=y
CONFIG_INET6_AH=m
diff --git a/arch/mips/configs/lemote2f_defconfig b/arch/mips/configs/lemote2f_defconfig
index d1f198b072a0..5da76e0e120f 100644
--- a/arch/mips/configs/lemote2f_defconfig
+++ b/arch/mips/configs/lemote2f_defconfig
@@ -71,7 +71,6 @@ CONFIG_TCP_CONG_ADVANCED=y
CONFIG_TCP_CONG_BIC=y
CONFIG_DEFAULT_BIC=y
CONFIG_TCP_MD5SIG=y
-CONFIG_IPV6_PRIVACY=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_TUNNEL=m
CONFIG_IPV6_MULTIPLE_TABLES=y
diff --git a/arch/mips/configs/ls1b_defconfig b/arch/mips/configs/loongson1b_defconfig
index 1b2cc1fb26a1..c442f27685f4 100644
--- a/arch/mips/configs/ls1b_defconfig
+++ b/arch/mips/configs/loongson1b_defconfig
@@ -1,19 +1,17 @@
CONFIG_MACH_LOONGSON32=y
CONFIG_PREEMPT=y
# CONFIG_SECCOMP is not set
-CONFIG_EXPERIMENTAL=y
# CONFIG_LOCALVERSION_AUTO is not set
+CONFIG_KERNEL_XZ=y
CONFIG_SYSVIPC=y
+CONFIG_HIGH_RES_TIMERS=y
CONFIG_BSD_PROCESS_ACCT=y
CONFIG_BSD_PROCESS_ACCT_V3=y
-CONFIG_HIGH_RES_TIMERS=y
CONFIG_IKCONFIG=y
CONFIG_IKCONFIG_PROC=y
CONFIG_LOG_BUF_SHIFT=16
CONFIG_NAMESPACES=y
-CONFIG_BLK_DEV_INITRD=y
-CONFIG_RD_BZIP2=y
-CONFIG_RD_LZMA=y
+CONFIG_CC_OPTIMIZE_FOR_SIZE=y
CONFIG_EXPERT=y
CONFIG_PERF_EVENTS=y
# CONFIG_COMPAT_BRK is not set
@@ -41,6 +39,12 @@ CONFIG_UEVENT_HELPER_PATH="/sbin/hotplug"
CONFIG_DEVTMPFS=y
CONFIG_DEVTMPFS_MOUNT=y
# CONFIG_STANDALONE is not set
+CONFIG_MTD=y
+CONFIG_MTD_CMDLINE_PARTS=y
+CONFIG_MTD_BLOCK=y
+CONFIG_MTD_NAND=y
+CONFIG_MTD_NAND_LOONGSON1=y
+CONFIG_MTD_UBI=y
CONFIG_BLK_DEV_LOOP=y
CONFIG_SCSI=m
# CONFIG_SCSI_PROC_FS is not set
@@ -48,7 +52,6 @@ CONFIG_BLK_DEV_SD=m
# CONFIG_SCSI_LOWLEVEL is not set
CONFIG_NETDEVICES=y
# CONFIG_NET_VENDOR_BROADCOM is not set
-# CONFIG_NET_VENDOR_CHELSIO is not set
# CONFIG_NET_VENDOR_INTEL is not set
# CONFIG_NET_VENDOR_MARVELL is not set
# CONFIG_NET_VENDOR_MICREL is not set
@@ -56,7 +59,6 @@ CONFIG_NETDEVICES=y
# CONFIG_NET_VENDOR_SEEQ is not set
# CONFIG_NET_VENDOR_SMSC is not set
CONFIG_STMMAC_ETH=y
-CONFIG_STMMAC_DA=y
# CONFIG_NET_VENDOR_WIZNET is not set
# CONFIG_WLAN is not set
CONFIG_INPUT_EVDEV=y
@@ -69,18 +71,25 @@ CONFIG_LEGACY_PTY_COUNT=8
CONFIG_SERIAL_8250=y
CONFIG_SERIAL_8250_CONSOLE=y
# CONFIG_HW_RANDOM is not set
+CONFIG_GPIOLIB=y
+CONFIG_GPIO_LOONGSON1=y
# CONFIG_HWMON is not set
# CONFIG_VGA_CONSOLE is not set
-CONFIG_USB_HID=m
CONFIG_HID_GENERIC=m
+CONFIG_USB_HID=m
CONFIG_USB=y
CONFIG_USB_ANNOUNCE_NEW_DEVICES=y
CONFIG_USB_EHCI_HCD=y
-CONFIG_USB_EHCI_HCD_PLATFORM=y
# CONFIG_USB_EHCI_TT_NEWSCHED is not set
+CONFIG_USB_EHCI_HCD_PLATFORM=y
CONFIG_USB_STORAGE=m
CONFIG_USB_SERIAL=m
CONFIG_USB_SERIAL_PL2303=m
+CONFIG_NEW_LEDS=y
+CONFIG_LEDS_CLASS=y
+CONFIG_LEDS_GPIO=y
+CONFIG_LEDS_TRIGGERS=y
+CONFIG_LEDS_TRIGGER_HEARTBEAT=y
CONFIG_RTC_CLASS=y
CONFIG_RTC_DRV_LOONGSON1=y
# CONFIG_IOMMU_SUPPORT is not set
@@ -96,15 +105,21 @@ CONFIG_VFAT_FS=y
CONFIG_PROC_KCORE=y
CONFIG_TMPFS=y
CONFIG_TMPFS_POSIX_ACL=y
-# CONFIG_MISC_FILESYSTEMS is not set
+CONFIG_UBIFS_FS=y
+CONFIG_UBIFS_FS_ADVANCED_COMPR=y
+CONFIG_UBIFS_ATIME_SUPPORT=y
CONFIG_NFS_FS=y
CONFIG_ROOT_NFS=y
CONFIG_NLS_CODEPAGE_437=m
CONFIG_NLS_ISO8859_1=m
+CONFIG_DYNAMIC_DEBUG=y
# CONFIG_ENABLE_WARN_DEPRECATED is not set
# CONFIG_ENABLE_MUST_CHECK is not set
+CONFIG_DEBUG_FS=y
CONFIG_MAGIC_SYSRQ=y
# CONFIG_SCHED_DEBUG is not set
# CONFIG_DEBUG_PREEMPT is not set
# CONFIG_FTRACE is not set
# CONFIG_EARLY_PRINTK is not set
+# CONFIG_CRYPTO_ECHAINIV is not set
+# CONFIG_CRYPTO_HW is not set
diff --git a/arch/mips/configs/loongson3_defconfig b/arch/mips/configs/loongson3_defconfig
index f8bf915c6d6b..7f95c4b3ab2c 100644
--- a/arch/mips/configs/loongson3_defconfig
+++ b/arch/mips/configs/loongson3_defconfig
@@ -25,7 +25,6 @@ CONFIG_TASK_XACCT=y
CONFIG_TASK_IO_ACCOUNTING=y
CONFIG_LOG_BUF_SHIFT=14
CONFIG_CPUSETS=y
-CONFIG_RESOURCE_COUNTERS=y
CONFIG_MEMCG=y
CONFIG_MEMCG_SWAP=y
CONFIG_BLK_CGROUP=y
diff --git a/arch/mips/configs/mtx1_defconfig b/arch/mips/configs/mtx1_defconfig
index 9b6926d6bb32..f3f60056bc27 100644
--- a/arch/mips/configs/mtx1_defconfig
+++ b/arch/mips/configs/mtx1_defconfig
@@ -51,7 +51,6 @@ CONFIG_INET_IPCOMP=m
CONFIG_INET_XFRM_MODE_TRANSPORT=m
CONFIG_INET_XFRM_MODE_TUNNEL=m
CONFIG_INET_XFRM_MODE_BEET=m
-CONFIG_IPV6_PRIVACY=y
CONFIG_INET6_AH=m
CONFIG_INET6_ESP=m
CONFIG_INET6_IPCOMP=m
diff --git a/arch/mips/configs/nlm_xlp_defconfig b/arch/mips/configs/nlm_xlp_defconfig
index b3d1d37f85ea..b496c25fced6 100644
--- a/arch/mips/configs/nlm_xlp_defconfig
+++ b/arch/mips/configs/nlm_xlp_defconfig
@@ -95,7 +95,6 @@ CONFIG_TCP_CONG_YEAH=m
CONFIG_TCP_CONG_ILLINOIS=m
CONFIG_TCP_MD5SIG=y
CONFIG_IPV6=y
-CONFIG_IPV6_PRIVACY=y
CONFIG_INET6_AH=m
CONFIG_INET6_ESP=m
CONFIG_INET6_IPCOMP=m
diff --git a/arch/mips/configs/nlm_xlr_defconfig b/arch/mips/configs/nlm_xlr_defconfig
index 3d8016d6cf3e..8e99ad807a57 100644
--- a/arch/mips/configs/nlm_xlr_defconfig
+++ b/arch/mips/configs/nlm_xlr_defconfig
@@ -75,7 +75,6 @@ CONFIG_TCP_CONG_YEAH=m
CONFIG_TCP_CONG_ILLINOIS=m
CONFIG_TCP_MD5SIG=y
CONFIG_IPV6=y
-CONFIG_IPV6_PRIVACY=y
CONFIG_INET6_AH=m
CONFIG_INET6_ESP=m
CONFIG_INET6_IPCOMP=m
diff --git a/arch/mips/configs/rm200_defconfig b/arch/mips/configs/rm200_defconfig
index 82db4e3e4cf1..c2b4e3f33a73 100644
--- a/arch/mips/configs/rm200_defconfig
+++ b/arch/mips/configs/rm200_defconfig
@@ -37,7 +37,6 @@ CONFIG_INET_XFRM_MODE_TRANSPORT=m
CONFIG_INET_XFRM_MODE_TUNNEL=m
CONFIG_INET_XFRM_MODE_BEET=m
CONFIG_TCP_MD5SIG=y
-CONFIG_IPV6_PRIVACY=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_ROUTE_INFO=y
CONFIG_INET6_AH=m
diff --git a/arch/mips/dec/setup.c b/arch/mips/dec/setup.c
index a0b8943c8f11..1c3bf9fe926f 100644
--- a/arch/mips/dec/setup.c
+++ b/arch/mips/dec/setup.c
@@ -60,6 +60,7 @@ EXPORT_SYMBOL(dec_kn_slot_size);
int dec_tc_bus;
DEFINE_SPINLOCK(ioasic_ssr_lock);
+EXPORT_SYMBOL(ioasic_ssr_lock);
volatile u32 *ioasic_base;
diff --git a/arch/mips/include/asm/Kbuild b/arch/mips/include/asm/Kbuild
index c7fe4d01e79c..9740066cc631 100644
--- a/arch/mips/include/asm/Kbuild
+++ b/arch/mips/include/asm/Kbuild
@@ -1,5 +1,6 @@
# MIPS headers
generic-$(CONFIG_GENERIC_CSUM) += checksum.h
+generic-y += clkdev.h
generic-y += cputime.h
generic-y += current.h
generic-y += dma-contiguous.h
diff --git a/arch/mips/include/asm/asmmacro.h b/arch/mips/include/asm/asmmacro.h
index 867f924b05c7..56584a659183 100644
--- a/arch/mips/include/asm/asmmacro.h
+++ b/arch/mips/include/asm/asmmacro.h
@@ -19,6 +19,28 @@
#include <asm/asmmacro-64.h>
#endif
+/*
+ * Helper macros for generating raw instruction encodings.
+ */
+#ifdef CONFIG_CPU_MICROMIPS
+ .macro insn32_if_mm enc
+ .insn
+ .hword ((\enc) >> 16)
+ .hword ((\enc) & 0xffff)
+ .endm
+
+ .macro insn_if_mips enc
+ .endm
+#else
+ .macro insn32_if_mm enc
+ .endm
+
+ .macro insn_if_mips enc
+ .insn
+ .word (\enc)
+ .endm
+#endif
+
#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
.macro local_irq_enable reg=t0
ei
@@ -235,6 +257,7 @@
.macro ld_b wd, off, base
.set push
.set mips32r2
+ .set fp=64
.set msa
ld.b $w\wd, \off(\base)
.set pop
@@ -243,6 +266,7 @@
.macro ld_h wd, off, base
.set push
.set mips32r2
+ .set fp=64
.set msa
ld.h $w\wd, \off(\base)
.set pop
@@ -251,6 +275,7 @@
.macro ld_w wd, off, base
.set push
.set mips32r2
+ .set fp=64
.set msa
ld.w $w\wd, \off(\base)
.set pop
@@ -268,6 +293,7 @@
.macro st_b wd, off, base
.set push
.set mips32r2
+ .set fp=64
.set msa
st.b $w\wd, \off(\base)
.set pop
@@ -276,6 +302,7 @@
.macro st_h wd, off, base
.set push
.set mips32r2
+ .set fp=64
.set msa
st.h $w\wd, \off(\base)
.set pop
@@ -284,6 +311,7 @@
.macro st_w wd, off, base
.set push
.set mips32r2
+ .set fp=64
.set msa
st.w $w\wd, \off(\base)
.set pop
@@ -298,21 +326,21 @@
.set pop
.endm
- .macro copy_u_w ws, n
+ .macro copy_s_w ws, n
.set push
.set mips32r2
.set fp=64
.set msa
- copy_u.w $1, $w\ws[\n]
+ copy_s.w $1, $w\ws[\n]
.set pop
.endm
- .macro copy_u_d ws, n
+ .macro copy_s_d ws, n
.set push
.set mips64r2
.set fp=64
.set msa
- copy_u.d $1, $w\ws[\n]
+ copy_s.d $1, $w\ws[\n]
.set pop
.endm
@@ -335,38 +363,6 @@
.endm
#else
-#ifdef CONFIG_CPU_MICROMIPS
-#define CFC_MSA_INSN 0x587e0056
-#define CTC_MSA_INSN 0x583e0816
-#define LDB_MSA_INSN 0x58000807
-#define LDH_MSA_INSN 0x58000817
-#define LDW_MSA_INSN 0x58000827
-#define LDD_MSA_INSN 0x58000837
-#define STB_MSA_INSN 0x5800080f
-#define STH_MSA_INSN 0x5800081f
-#define STW_MSA_INSN 0x5800082f
-#define STD_MSA_INSN 0x5800083f
-#define COPY_UW_MSA_INSN 0x58f00056
-#define COPY_UD_MSA_INSN 0x58f80056
-#define INSERT_W_MSA_INSN 0x59300816
-#define INSERT_D_MSA_INSN 0x59380816
-#else
-#define CFC_MSA_INSN 0x787e0059
-#define CTC_MSA_INSN 0x783e0819
-#define LDB_MSA_INSN 0x78000820
-#define LDH_MSA_INSN 0x78000821
-#define LDW_MSA_INSN 0x78000822
-#define LDD_MSA_INSN 0x78000823
-#define STB_MSA_INSN 0x78000824
-#define STH_MSA_INSN 0x78000825
-#define STW_MSA_INSN 0x78000826
-#define STD_MSA_INSN 0x78000827
-#define COPY_UW_MSA_INSN 0x78f00059
-#define COPY_UD_MSA_INSN 0x78f80059
-#define INSERT_W_MSA_INSN 0x79300819
-#define INSERT_D_MSA_INSN 0x79380819
-#endif
-
/*
* Temporary until all toolchains in use include MSA support.
*/
@@ -374,8 +370,8 @@
.set push
.set noat
SET_HARDFLOAT
- .insn
- .word CFC_MSA_INSN | (\cs << 11)
+ insn_if_mips 0x787e0059 | (\cs << 11)
+ insn32_if_mm 0x587e0056 | (\cs << 11)
move \rd, $1
.set pop
.endm
@@ -385,7 +381,8 @@
.set noat
SET_HARDFLOAT
move $1, \rs
- .word CTC_MSA_INSN | (\cd << 6)
+ insn_if_mips 0x783e0819 | (\cd << 6)
+ insn32_if_mm 0x583e0816 | (\cd << 6)
.set pop
.endm
@@ -393,8 +390,9 @@
.set push
.set noat
SET_HARDFLOAT
- addu $1, \base, \off
- .word LDB_MSA_INSN | (\wd << 6)
+ PTR_ADDU $1, \base, \off
+ insn_if_mips 0x78000820 | (\wd << 6)
+ insn32_if_mm 0x58000807 | (\wd << 6)
.set pop
.endm
@@ -402,8 +400,9 @@
.set push
.set noat
SET_HARDFLOAT
- addu $1, \base, \off
- .word LDH_MSA_INSN | (\wd << 6)
+ PTR_ADDU $1, \base, \off
+ insn_if_mips 0x78000821 | (\wd << 6)
+ insn32_if_mm 0x58000817 | (\wd << 6)
.set pop
.endm
@@ -411,8 +410,9 @@
.set push
.set noat
SET_HARDFLOAT
- addu $1, \base, \off
- .word LDW_MSA_INSN | (\wd << 6)
+ PTR_ADDU $1, \base, \off
+ insn_if_mips 0x78000822 | (\wd << 6)
+ insn32_if_mm 0x58000827 | (\wd << 6)
.set pop
.endm
@@ -420,8 +420,9 @@
.set push
.set noat
SET_HARDFLOAT
- addu $1, \base, \off
- .word LDD_MSA_INSN | (\wd << 6)
+ PTR_ADDU $1, \base, \off
+ insn_if_mips 0x78000823 | (\wd << 6)
+ insn32_if_mm 0x58000837 | (\wd << 6)
.set pop
.endm
@@ -429,8 +430,9 @@
.set push
.set noat
SET_HARDFLOAT
- addu $1, \base, \off
- .word STB_MSA_INSN | (\wd << 6)
+ PTR_ADDU $1, \base, \off
+ insn_if_mips 0x78000824 | (\wd << 6)
+ insn32_if_mm 0x5800080f | (\wd << 6)
.set pop
.endm
@@ -438,8 +440,9 @@
.set push
.set noat
SET_HARDFLOAT
- addu $1, \base, \off
- .word STH_MSA_INSN | (\wd << 6)
+ PTR_ADDU $1, \base, \off
+ insn_if_mips 0x78000825 | (\wd << 6)
+ insn32_if_mm 0x5800081f | (\wd << 6)
.set pop
.endm
@@ -447,8 +450,9 @@
.set push
.set noat
SET_HARDFLOAT
- addu $1, \base, \off
- .word STW_MSA_INSN | (\wd << 6)
+ PTR_ADDU $1, \base, \off
+ insn_if_mips 0x78000826 | (\wd << 6)
+ insn32_if_mm 0x5800082f | (\wd << 6)
.set pop
.endm
@@ -456,26 +460,27 @@
.set push
.set noat
SET_HARDFLOAT
- addu $1, \base, \off
- .word STD_MSA_INSN | (\wd << 6)
+ PTR_ADDU $1, \base, \off
+ insn_if_mips 0x78000827 | (\wd << 6)
+ insn32_if_mm 0x5800083f | (\wd << 6)
.set pop
.endm
- .macro copy_u_w ws, n
+ .macro copy_s_w ws, n
.set push
.set noat
SET_HARDFLOAT
- .insn
- .word COPY_UW_MSA_INSN | (\n << 16) | (\ws << 11)
+ insn_if_mips 0x78b00059 | (\n << 16) | (\ws << 11)
+ insn32_if_mm 0x58b00056 | (\n << 16) | (\ws << 11)
.set pop
.endm
- .macro copy_u_d ws, n
+ .macro copy_s_d ws, n
.set push
.set noat
SET_HARDFLOAT
- .insn
- .word COPY_UD_MSA_INSN | (\n << 16) | (\ws << 11)
+ insn_if_mips 0x78b80059 | (\n << 16) | (\ws << 11)
+ insn32_if_mm 0x58b80056 | (\n << 16) | (\ws << 11)
.set pop
.endm
@@ -483,7 +488,8 @@
.set push
.set noat
SET_HARDFLOAT
- .word INSERT_W_MSA_INSN | (\n << 16) | (\wd << 6)
+ insn_if_mips 0x79300819 | (\n << 16) | (\wd << 6)
+ insn32_if_mm 0x59300816 | (\n << 16) | (\wd << 6)
.set pop
.endm
@@ -491,46 +497,58 @@
.set push
.set noat
SET_HARDFLOAT
- .word INSERT_D_MSA_INSN | (\n << 16) | (\wd << 6)
+ insn_if_mips 0x79380819 | (\n << 16) | (\wd << 6)
+ insn32_if_mm 0x59380816 | (\n << 16) | (\wd << 6)
.set pop
.endm
#endif
+#ifdef TOOLCHAIN_SUPPORTS_MSA
+#define FPR_BASE_OFFS THREAD_FPR0
+#define FPR_BASE $1
+#else
+#define FPR_BASE_OFFS 0
+#define FPR_BASE \thread
+#endif
+
.macro msa_save_all thread
- st_d 0, THREAD_FPR0, \thread
- st_d 1, THREAD_FPR1, \thread
- st_d 2, THREAD_FPR2, \thread
- st_d 3, THREAD_FPR3, \thread
- st_d 4, THREAD_FPR4, \thread
- st_d 5, THREAD_FPR5, \thread
- st_d 6, THREAD_FPR6, \thread
- st_d 7, THREAD_FPR7, \thread
- st_d 8, THREAD_FPR8, \thread
- st_d 9, THREAD_FPR9, \thread
- st_d 10, THREAD_FPR10, \thread
- st_d 11, THREAD_FPR11, \thread
- st_d 12, THREAD_FPR12, \thread
- st_d 13, THREAD_FPR13, \thread
- st_d 14, THREAD_FPR14, \thread
- st_d 15, THREAD_FPR15, \thread
- st_d 16, THREAD_FPR16, \thread
- st_d 17, THREAD_FPR17, \thread
- st_d 18, THREAD_FPR18, \thread
- st_d 19, THREAD_FPR19, \thread
- st_d 20, THREAD_FPR20, \thread
- st_d 21, THREAD_FPR21, \thread
- st_d 22, THREAD_FPR22, \thread
- st_d 23, THREAD_FPR23, \thread
- st_d 24, THREAD_FPR24, \thread
- st_d 25, THREAD_FPR25, \thread
- st_d 26, THREAD_FPR26, \thread
- st_d 27, THREAD_FPR27, \thread
- st_d 28, THREAD_FPR28, \thread
- st_d 29, THREAD_FPR29, \thread
- st_d 30, THREAD_FPR30, \thread
- st_d 31, THREAD_FPR31, \thread
.set push
.set noat
+#ifdef TOOLCHAIN_SUPPORTS_MSA
+ PTR_ADDU FPR_BASE, \thread, FPR_BASE_OFFS
+#endif
+ st_d 0, THREAD_FPR0 - FPR_BASE_OFFS, FPR_BASE
+ st_d 1, THREAD_FPR1 - FPR_BASE_OFFS, FPR_BASE
+ st_d 2, THREAD_FPR2 - FPR_BASE_OFFS, FPR_BASE
+ st_d 3, THREAD_FPR3 - FPR_BASE_OFFS, FPR_BASE
+ st_d 4, THREAD_FPR4 - FPR_BASE_OFFS, FPR_BASE
+ st_d 5, THREAD_FPR5 - FPR_BASE_OFFS, FPR_BASE
+ st_d 6, THREAD_FPR6 - FPR_BASE_OFFS, FPR_BASE
+ st_d 7, THREAD_FPR7 - FPR_BASE_OFFS, FPR_BASE
+ st_d 8, THREAD_FPR8 - FPR_BASE_OFFS, FPR_BASE
+ st_d 9, THREAD_FPR9 - FPR_BASE_OFFS, FPR_BASE
+ st_d 10, THREAD_FPR10 - FPR_BASE_OFFS, FPR_BASE
+ st_d 11, THREAD_FPR11 - FPR_BASE_OFFS, FPR_BASE
+ st_d 12, THREAD_FPR12 - FPR_BASE_OFFS, FPR_BASE
+ st_d 13, THREAD_FPR13 - FPR_BASE_OFFS, FPR_BASE
+ st_d 14, THREAD_FPR14 - FPR_BASE_OFFS, FPR_BASE
+ st_d 15, THREAD_FPR15 - FPR_BASE_OFFS, FPR_BASE
+ st_d 16, THREAD_FPR16 - FPR_BASE_OFFS, FPR_BASE
+ st_d 17, THREAD_FPR17 - FPR_BASE_OFFS, FPR_BASE
+ st_d 18, THREAD_FPR18 - FPR_BASE_OFFS, FPR_BASE
+ st_d 19, THREAD_FPR19 - FPR_BASE_OFFS, FPR_BASE
+ st_d 20, THREAD_FPR20 - FPR_BASE_OFFS, FPR_BASE
+ st_d 21, THREAD_FPR21 - FPR_BASE_OFFS, FPR_BASE
+ st_d 22, THREAD_FPR22 - FPR_BASE_OFFS, FPR_BASE
+ st_d 23, THREAD_FPR23 - FPR_BASE_OFFS, FPR_BASE
+ st_d 24, THREAD_FPR24 - FPR_BASE_OFFS, FPR_BASE
+ st_d 25, THREAD_FPR25 - FPR_BASE_OFFS, FPR_BASE
+ st_d 26, THREAD_FPR26 - FPR_BASE_OFFS, FPR_BASE
+ st_d 27, THREAD_FPR27 - FPR_BASE_OFFS, FPR_BASE
+ st_d 28, THREAD_FPR28 - FPR_BASE_OFFS, FPR_BASE
+ st_d 29, THREAD_FPR29 - FPR_BASE_OFFS, FPR_BASE
+ st_d 30, THREAD_FPR30 - FPR_BASE_OFFS, FPR_BASE
+ st_d 31, THREAD_FPR31 - FPR_BASE_OFFS, FPR_BASE
SET_HARDFLOAT
_cfcmsa $1, MSA_CSR
sw $1, THREAD_MSA_CSR(\thread)
@@ -543,40 +561,46 @@
SET_HARDFLOAT
lw $1, THREAD_MSA_CSR(\thread)
_ctcmsa MSA_CSR, $1
- .set pop
- ld_d 0, THREAD_FPR0, \thread
- ld_d 1, THREAD_FPR1, \thread
- ld_d 2, THREAD_FPR2, \thread
- ld_d 3, THREAD_FPR3, \thread
- ld_d 4, THREAD_FPR4, \thread
- ld_d 5, THREAD_FPR5, \thread
- ld_d 6, THREAD_FPR6, \thread
- ld_d 7, THREAD_FPR7, \thread
- ld_d 8, THREAD_FPR8, \thread
- ld_d 9, THREAD_FPR9, \thread
- ld_d 10, THREAD_FPR10, \thread
- ld_d 11, THREAD_FPR11, \thread
- ld_d 12, THREAD_FPR12, \thread
- ld_d 13, THREAD_FPR13, \thread
- ld_d 14, THREAD_FPR14, \thread
- ld_d 15, THREAD_FPR15, \thread
- ld_d 16, THREAD_FPR16, \thread
- ld_d 17, THREAD_FPR17, \thread
- ld_d 18, THREAD_FPR18, \thread
- ld_d 19, THREAD_FPR19, \thread
- ld_d 20, THREAD_FPR20, \thread
- ld_d 21, THREAD_FPR21, \thread
- ld_d 22, THREAD_FPR22, \thread
- ld_d 23, THREAD_FPR23, \thread
- ld_d 24, THREAD_FPR24, \thread
- ld_d 25, THREAD_FPR25, \thread
- ld_d 26, THREAD_FPR26, \thread
- ld_d 27, THREAD_FPR27, \thread
- ld_d 28, THREAD_FPR28, \thread
- ld_d 29, THREAD_FPR29, \thread
- ld_d 30, THREAD_FPR30, \thread
- ld_d 31, THREAD_FPR31, \thread
- .endm
+#ifdef TOOLCHAIN_SUPPORTS_MSA
+ PTR_ADDU FPR_BASE, \thread, FPR_BASE_OFFS
+#endif
+ ld_d 0, THREAD_FPR0 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 1, THREAD_FPR1 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 2, THREAD_FPR2 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 3, THREAD_FPR3 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 4, THREAD_FPR4 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 5, THREAD_FPR5 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 6, THREAD_FPR6 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 7, THREAD_FPR7 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 8, THREAD_FPR8 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 9, THREAD_FPR9 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 10, THREAD_FPR10 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 11, THREAD_FPR11 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 12, THREAD_FPR12 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 13, THREAD_FPR13 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 14, THREAD_FPR14 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 15, THREAD_FPR15 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 16, THREAD_FPR16 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 17, THREAD_FPR17 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 18, THREAD_FPR18 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 19, THREAD_FPR19 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 20, THREAD_FPR20 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 21, THREAD_FPR21 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 22, THREAD_FPR22 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 23, THREAD_FPR23 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 24, THREAD_FPR24 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 25, THREAD_FPR25 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 26, THREAD_FPR26 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 27, THREAD_FPR27 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 28, THREAD_FPR28 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 29, THREAD_FPR29 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 30, THREAD_FPR30 - FPR_BASE_OFFS, FPR_BASE
+ ld_d 31, THREAD_FPR31 - FPR_BASE_OFFS, FPR_BASE
+ .set pop
+ .endm
+
+#undef FPR_BASE_OFFS
+#undef FPR_BASE
.macro msa_init_upper wd
#ifdef CONFIG_64BIT
diff --git a/arch/mips/include/asm/bitops.h b/arch/mips/include/asm/bitops.h
index ce9666cf1499..fa57cef12a46 100644
--- a/arch/mips/include/asm/bitops.h
+++ b/arch/mips/include/asm/bitops.h
@@ -19,25 +19,10 @@
#include <asm/byteorder.h> /* sigh ... */
#include <asm/compiler.h>
#include <asm/cpu-features.h>
+#include <asm/llsc.h>
#include <asm/sgidefs.h>
#include <asm/war.h>
-#if _MIPS_SZLONG == 32
-#define SZLONG_LOG 5
-#define SZLONG_MASK 31UL
-#define __LL "ll "
-#define __SC "sc "
-#define __INS "ins "
-#define __EXT "ext "
-#elif _MIPS_SZLONG == 64
-#define SZLONG_LOG 6
-#define SZLONG_MASK 63UL
-#define __LL "lld "
-#define __SC "scd "
-#define __INS "dins "
-#define __EXT "dext "
-#endif
-
/*
* These are the "slower" versions of the functions and are in bitops.c.
* These functions call raw_local_irq_{save,restore}().
diff --git a/arch/mips/include/asm/bitrev.h b/arch/mips/include/asm/bitrev.h
new file mode 100644
index 000000000000..bc739a404ae3
--- /dev/null
+++ b/arch/mips/include/asm/bitrev.h
@@ -0,0 +1,30 @@
+#ifndef __MIPS_ASM_BITREV_H__
+#define __MIPS_ASM_BITREV_H__
+
+#include <linux/swab.h>
+
+static __always_inline __attribute_const__ u32 __arch_bitrev32(u32 x)
+{
+ u32 ret;
+
+ asm("bitswap %0, %1" : "=r"(ret) : "r"(__swab32(x)));
+ return ret;
+}
+
+static __always_inline __attribute_const__ u16 __arch_bitrev16(u16 x)
+{
+ u16 ret;
+
+ asm("bitswap %0, %1" : "=r"(ret) : "r"(__swab16(x)));
+ return ret;
+}
+
+static __always_inline __attribute_const__ u8 __arch_bitrev8(u8 x)
+{
+ u8 ret;
+
+ asm("bitswap %0, %1" : "=r"(ret) : "r"(x));
+ return ret;
+}
+
+#endif /* __MIPS_ASM_BITREV_H__ */
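
The new __arch_bitrev32() above leans on an identity worth spelling out: the MIPS R6 bitswap instruction reverses the bits within each byte, and __swab32() reverses the byte order, so composing the two reverses all 32 bits. A quick host-side C sketch of that identity, using naive reference helpers that merely model the instruction and the byte swap (none of this is kernel code):

#include <stdint.h>
#include <stdio.h>

static uint8_t rev8(uint8_t b)			/* reverse the bits of one byte */
{
	uint8_t r = 0;
	for (int i = 0; i < 8; i++)
		r |= ((b >> i) & 1) << (7 - i);
	return r;
}

static uint32_t bitswap32(uint32_t x)		/* models the R6 BITSWAP op */
{
	return rev8(x) | rev8(x >> 8) << 8 |
	       (uint32_t)rev8(x >> 16) << 16 | (uint32_t)rev8(x >> 24) << 24;
}

static uint32_t swab32(uint32_t x)		/* models __swab32() */
{
	return x << 24 | (x & 0xff00) << 8 | ((x >> 8) & 0xff00) | x >> 24;
}

static uint32_t bitrev32_ref(uint32_t x)	/* naive full 32-bit reversal */
{
	uint32_t r = 0;
	for (int i = 0; i < 32; i++)
		r |= ((x >> i) & 1) << (31 - i);
	return r;
}

int main(void)
{
	uint32_t x = 0x12345678;
	/* prints the same value twice: 1e6a2c48 1e6a2c48 */
	printf("%08x %08x\n", bitswap32(swab32(x)), bitrev32_ref(x));
	return 0;
}

The 16-bit variant follows the same pattern, while __arch_bitrev8() needs no byte swap at all.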
diff --git a/arch/mips/include/asm/bmips.h b/arch/mips/include/asm/bmips.h
index 6d25ad33ec78..a92aee7b977a 100644
--- a/arch/mips/include/asm/bmips.h
+++ b/arch/mips/include/asm/bmips.h
@@ -88,6 +88,7 @@ extern unsigned long bmips_tp1_irqs;
extern void bmips_ebase_setup(void);
extern asmlinkage void plat_wired_tlb_setup(void);
+extern void bmips_cpu_setup(void);
static inline unsigned long bmips_read_zscm_reg(unsigned int offset)
{
diff --git a/arch/mips/include/asm/bootinfo.h b/arch/mips/include/asm/bootinfo.h
index b603804caac5..9f67033961a6 100644
--- a/arch/mips/include/asm/bootinfo.h
+++ b/arch/mips/include/asm/bootinfo.h
@@ -144,4 +144,22 @@ static inline void plat_swiotlb_setup(void) {}
#endif /* CONFIG_SWIOTLB */
+#ifdef CONFIG_USE_OF
+/**
+ * plat_get_fdt() - Return a pointer to the platform's device tree blob
+ *
+ * This function provides a platform independent API to get a pointer to the
+ * flattened device tree blob. The interface between bootloader and kernel
+ * is not consistent across platforms so it is necessary to provide this
+ * API such that common startup code can locate the FDT.
+ *
+ * This is used by the KASLR code to get command line arguments and random
+ * seed from the device tree. Any platform wishing to use KASLR should
+ * provide this API and select SYS_SUPPORTS_RELOCATABLE.
+ *
+ * Return: Pointer to the flattened device tree blob.
+ */
+extern void *plat_get_fdt(void);
+#endif /* CONFIG_USE_OF */
+
#endif /* _ASM_BOOTINFO_H */
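
The plat_get_fdt() hook documented above only fixes the prototype; how a platform locates the blob is left to its own setup code. A minimal sketch of what a board port might provide, assuming its early boot code has already stashed the bootloader's DTB pointer somewhere (the fw_passed_dtb symbol and the file location are illustrative assumptions, not taken from this patch):

/* arch/mips/<board>/setup.c -- hypothetical example */
#include <asm/bootinfo.h>

extern unsigned long fw_passed_dtb;	/* assumed: saved by early boot/head code */

void *plat_get_fdt(void)
{
	/* Hand the relocation/KASLR code whatever DTB the bootloader passed in. */
	return (void *)fw_passed_dtb;
}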
diff --git a/arch/mips/include/asm/cacheflush.h b/arch/mips/include/asm/cacheflush.h
index 723229f4cf27..34ed22ec6c33 100644
--- a/arch/mips/include/asm/cacheflush.h
+++ b/arch/mips/include/asm/cacheflush.h
@@ -51,7 +51,6 @@ extern void (*flush_cache_range)(struct vm_area_struct *vma,
unsigned long start, unsigned long end);
extern void (*flush_cache_page)(struct vm_area_struct *vma, unsigned long page, unsigned long pfn);
extern void __flush_dcache_page(struct page *page);
-extern void __flush_icache_page(struct vm_area_struct *vma, struct page *page);
#define ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE 1
static inline void flush_dcache_page(struct page *page)
@@ -77,11 +76,6 @@ static inline void flush_anon_page(struct vm_area_struct *vma,
static inline void flush_icache_page(struct vm_area_struct *vma,
struct page *page)
{
- if (!cpu_has_ic_fills_f_dc && (vma->vm_flags & VM_EXEC) &&
- Page_dcache_dirty(page)) {
- __flush_icache_page(vma, page);
- ClearPageDcacheDirty(page);
- }
}
extern void (*flush_icache_range)(unsigned long start, unsigned long end);
@@ -132,6 +126,7 @@ static inline void kunmap_noncoherent(void)
static inline void flush_kernel_dcache_page(struct page *page)
{
BUG_ON(cpu_has_dc_aliases && PageHighMem(page));
+ flush_dcache_page(page);
}
/*
diff --git a/arch/mips/include/asm/cacheops.h b/arch/mips/include/asm/cacheops.h
index c3212ff26723..8031fbc6b69a 100644
--- a/arch/mips/include/asm/cacheops.h
+++ b/arch/mips/include/asm/cacheops.h
@@ -21,6 +21,7 @@
#define Cache_I 0x00
#define Cache_D 0x01
#define Cache_T 0x02
+#define Cache_V 0x02 /* Loongson-3 */
#define Cache_S 0x03
#define Index_Writeback_Inv 0x00
@@ -107,4 +108,9 @@
*/
#define Hit_Invalidate_I_Loongson2 (Cache_I | 0x00)
+/*
+ * Loongson3-specific cacheops
+ */
+#define Index_Writeback_Inv_V (Cache_V | Index_Writeback_Inv)
+
#endif /* __ASM_CACHEOPS_H */
diff --git a/arch/mips/include/asm/clkdev.h b/arch/mips/include/asm/clkdev.h
deleted file mode 100644
index 1b3ad7b09dc1..000000000000
--- a/arch/mips/include/asm/clkdev.h
+++ /dev/null
@@ -1,27 +0,0 @@
-/*
- * based on arch/arm/include/asm/clkdev.h
- *
- * Copyright (C) 2008 Russell King.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * Helper for the clk API to assist looking up a struct clk.
- */
-#ifndef __ASM_CLKDEV_H
-#define __ASM_CLKDEV_H
-
-#include <linux/slab.h>
-
-#ifndef CONFIG_COMMON_CLK
-#define __clk_get(clk) ({ 1; })
-#define __clk_put(clk) do { } while (0)
-#endif
-
-static inline struct clk_lookup_alloc *__clkdev_alloc(size_t size)
-{
- return kzalloc(size, GFP_KERNEL);
-}
-
-#endif
diff --git a/arch/mips/include/asm/cpu-features.h b/arch/mips/include/asm/cpu-features.h
index eeec8c8e2da2..e961c8a7ea66 100644
--- a/arch/mips/include/asm/cpu-features.h
+++ b/arch/mips/include/asm/cpu-features.h
@@ -35,6 +35,9 @@
#ifndef cpu_has_htw
#define cpu_has_htw (cpu_data[0].options & MIPS_CPU_HTW)
#endif
+#ifndef cpu_has_ldpte
+#define cpu_has_ldpte (cpu_data[0].options & MIPS_CPU_LDPTE)
+#endif
#ifndef cpu_has_rixiex
#define cpu_has_rixiex (cpu_data[0].options & MIPS_CPU_RIXIEX)
#endif
@@ -117,6 +120,21 @@
#ifndef kernel_uses_llsc
#define kernel_uses_llsc cpu_has_llsc
#endif
+#ifndef cpu_has_guestctl0ext
+#define cpu_has_guestctl0ext (cpu_data[0].options & MIPS_CPU_GUESTCTL0EXT)
+#endif
+#ifndef cpu_has_guestctl1
+#define cpu_has_guestctl1 (cpu_data[0].options & MIPS_CPU_GUESTCTL1)
+#endif
+#ifndef cpu_has_guestctl2
+#define cpu_has_guestctl2 (cpu_data[0].options & MIPS_CPU_GUESTCTL2)
+#endif
+#ifndef cpu_has_guestid
+#define cpu_has_guestid (cpu_data[0].options & MIPS_CPU_GUESTID)
+#endif
+#ifndef cpu_has_drg
+#define cpu_has_drg (cpu_data[0].options & MIPS_CPU_DRG)
+#endif
#ifndef cpu_has_mips16
#define cpu_has_mips16 (cpu_data[0].ases & MIPS_ASE_MIPS16)
#endif
@@ -142,8 +160,14 @@
# endif
#endif
+#ifndef cpu_has_lpa
+#define cpu_has_lpa (cpu_data[0].options & MIPS_CPU_LPA)
+#endif
+#ifndef cpu_has_mvh
+#define cpu_has_mvh (cpu_data[0].options & MIPS_CPU_MVH)
+#endif
#ifndef cpu_has_xpa
-#define cpu_has_xpa (cpu_data[0].options & MIPS_CPU_XPA)
+#define cpu_has_xpa (cpu_has_lpa && cpu_has_mvh)
#endif
#ifndef cpu_has_vtag_icache
#define cpu_has_vtag_icache (cpu_data[0].icache.flags & MIPS_CACHE_VTAG)
@@ -180,6 +204,16 @@
#endif
#endif
+/* __builtin_constant_p(cpu_has_mips_r) && cpu_has_mips_r */
+#if !((defined(cpu_has_mips32r1) && cpu_has_mips32r1) || \
+ (defined(cpu_has_mips32r2) && cpu_has_mips32r2) || \
+ (defined(cpu_has_mips32r6) && cpu_has_mips32r6) || \
+ (defined(cpu_has_mips64r1) && cpu_has_mips64r1) || \
+ (defined(cpu_has_mips64r2) && cpu_has_mips64r2) || \
+ (defined(cpu_has_mips64r6) && cpu_has_mips64r6))
+#define CPU_NO_EFFICIENT_FFS 1
+#endif
+
#ifndef cpu_has_mips_1
# define cpu_has_mips_1 (!cpu_has_mips_r6)
#endif
@@ -307,10 +341,18 @@
#define cpu_has_dsp2 (cpu_data[0].ases & MIPS_ASE_DSP2P)
#endif
+#ifndef cpu_has_dsp3
+#define cpu_has_dsp3 (cpu_data[0].ases & MIPS_ASE_DSP3)
+#endif
+
#ifndef cpu_has_mipsmt
#define cpu_has_mipsmt (cpu_data[0].ases & MIPS_ASE_MIPSMT)
#endif
+#ifndef cpu_has_vp
+#define cpu_has_vp (cpu_data[0].options & MIPS_CPU_VP)
+#endif
+
#ifndef cpu_has_userlocal
#define cpu_has_userlocal (cpu_data[0].options & MIPS_CPU_ULRI)
#endif
@@ -421,4 +463,107 @@
#define cpu_has_nan_2008 (cpu_data[0].options & MIPS_CPU_NAN_2008)
#endif
+#ifndef cpu_has_ebase_wg
+# define cpu_has_ebase_wg (cpu_data[0].options & MIPS_CPU_EBASE_WG)
+#endif
+
+#ifndef cpu_has_badinstr
+# define cpu_has_badinstr (cpu_data[0].options & MIPS_CPU_BADINSTR)
+#endif
+
+#ifndef cpu_has_badinstrp
+# define cpu_has_badinstrp (cpu_data[0].options & MIPS_CPU_BADINSTRP)
+#endif
+
+#ifndef cpu_has_contextconfig
+# define cpu_has_contextconfig (cpu_data[0].options & MIPS_CPU_CTXTC)
+#endif
+
+#ifndef cpu_has_perf
+# define cpu_has_perf (cpu_data[0].options & MIPS_CPU_PERF)
+#endif
+
+/*
+ * Guest capabilities
+ */
+#ifndef cpu_guest_has_conf1
+#define cpu_guest_has_conf1 (cpu_data[0].guest.conf & (1 << 1))
+#endif
+#ifndef cpu_guest_has_conf2
+#define cpu_guest_has_conf2 (cpu_data[0].guest.conf & (1 << 2))
+#endif
+#ifndef cpu_guest_has_conf3
+#define cpu_guest_has_conf3 (cpu_data[0].guest.conf & (1 << 3))
+#endif
+#ifndef cpu_guest_has_conf4
+#define cpu_guest_has_conf4 (cpu_data[0].guest.conf & (1 << 4))
+#endif
+#ifndef cpu_guest_has_conf5
+#define cpu_guest_has_conf5 (cpu_data[0].guest.conf & (1 << 5))
+#endif
+#ifndef cpu_guest_has_conf6
+#define cpu_guest_has_conf6 (cpu_data[0].guest.conf & (1 << 6))
+#endif
+#ifndef cpu_guest_has_conf7
+#define cpu_guest_has_conf7 (cpu_data[0].guest.conf & (1 << 7))
+#endif
+#ifndef cpu_guest_has_fpu
+#define cpu_guest_has_fpu (cpu_data[0].guest.options & MIPS_CPU_FPU)
+#endif
+#ifndef cpu_guest_has_watch
+#define cpu_guest_has_watch (cpu_data[0].guest.options & MIPS_CPU_WATCH)
+#endif
+#ifndef cpu_guest_has_contextconfig
+#define cpu_guest_has_contextconfig (cpu_data[0].guest.options & MIPS_CPU_CTXTC)
+#endif
+#ifndef cpu_guest_has_segments
+#define cpu_guest_has_segments (cpu_data[0].guest.options & MIPS_CPU_SEGMENTS)
+#endif
+#ifndef cpu_guest_has_badinstr
+#define cpu_guest_has_badinstr (cpu_data[0].guest.options & MIPS_CPU_BADINSTR)
+#endif
+#ifndef cpu_guest_has_badinstrp
+#define cpu_guest_has_badinstrp (cpu_data[0].guest.options & MIPS_CPU_BADINSTRP)
+#endif
+#ifndef cpu_guest_has_htw
+#define cpu_guest_has_htw (cpu_data[0].guest.options & MIPS_CPU_HTW)
+#endif
+#ifndef cpu_guest_has_msa
+#define cpu_guest_has_msa (cpu_data[0].guest.ases & MIPS_ASE_MSA)
+#endif
+#ifndef cpu_guest_has_kscr
+#define cpu_guest_has_kscr(n) (cpu_data[0].guest.kscratch_mask & (1u << (n)))
+#endif
+#ifndef cpu_guest_has_rw_llb
+#define cpu_guest_has_rw_llb (cpu_has_mips_r6 || (cpu_data[0].guest.options & MIPS_CPU_RW_LLB))
+#endif
+#ifndef cpu_guest_has_perf
+#define cpu_guest_has_perf (cpu_data[0].guest.options & MIPS_CPU_PERF)
+#endif
+#ifndef cpu_guest_has_maar
+#define cpu_guest_has_maar (cpu_data[0].guest.options & MIPS_CPU_MAAR)
+#endif
+
+/*
+ * Guest dynamic capabilities
+ */
+#ifndef cpu_guest_has_dyn_fpu
+#define cpu_guest_has_dyn_fpu (cpu_data[0].guest.options_dyn & MIPS_CPU_FPU)
+#endif
+#ifndef cpu_guest_has_dyn_watch
+#define cpu_guest_has_dyn_watch (cpu_data[0].guest.options_dyn & MIPS_CPU_WATCH)
+#endif
+#ifndef cpu_guest_has_dyn_contextconfig
+#define cpu_guest_has_dyn_contextconfig (cpu_data[0].guest.options_dyn & MIPS_CPU_CTXTC)
+#endif
+#ifndef cpu_guest_has_dyn_perf
+#define cpu_guest_has_dyn_perf (cpu_data[0].guest.options_dyn & MIPS_CPU_PERF)
+#endif
+#ifndef cpu_guest_has_dyn_msa
+#define cpu_guest_has_dyn_msa (cpu_data[0].guest.ases_dyn & MIPS_ASE_MSA)
+#endif
+#ifndef cpu_guest_has_dyn_maar
+#define cpu_guest_has_dyn_maar (cpu_data[0].guest.options_dyn & MIPS_CPU_MAAR)
+#endif
+
#endif /* __ASM_CPU_FEATURES_H */
diff --git a/arch/mips/include/asm/cpu-info.h b/arch/mips/include/asm/cpu-info.h
index af12c1f9f1a8..edbe2734a1bf 100644
--- a/arch/mips/include/asm/cpu-info.h
+++ b/arch/mips/include/asm/cpu-info.h
@@ -28,6 +28,15 @@ struct cache_desc {
unsigned char flags; /* Flags describing cache properties */
};
+struct guest_info {
+ unsigned long ases;
+ unsigned long ases_dyn;
+ unsigned long long options;
+ unsigned long long options_dyn;
+ u8 conf;
+ u8 kscratch_mask;
+};
+
/*
* Flag definitions
*/
@@ -40,6 +49,9 @@ struct cache_desc {
struct cpuinfo_mips {
unsigned long asid_cache;
+#ifdef CONFIG_MIPS_ASID_BITS_VARIABLE
+ unsigned long asid_mask;
+#endif
/*
* Capability and feature descriptor structure for MIPS CPU
@@ -60,6 +72,7 @@ struct cpuinfo_mips {
int tlbsizeftlbways;
struct cache_desc icache; /* Primary I-cache */
struct cache_desc dcache; /* Primary D or combined I/D cache */
+ struct cache_desc vcache; /* Victim cache, between pcache and scache */
struct cache_desc scache; /* Secondary cache */
struct cache_desc tcache; /* Tertiary/split secondary cache */
int srsets; /* Shadow register sets */
@@ -68,7 +81,7 @@ struct cpuinfo_mips {
#ifdef CONFIG_64BIT
int vmbits; /* Virtual memory size in bits */
#endif
-#ifdef CONFIG_MIPS_MT_SMP
+#if defined(CONFIG_MIPS_MT_SMP) || defined(CONFIG_CPU_MIPSR6)
/*
* There is not necessarily a 1:1 mapping of VPE num to CPU number
* in particular on multi-core systems.
@@ -91,6 +104,11 @@ struct cpuinfo_mips {
* htw_start/htw_stop calls
*/
unsigned int htw_seq;
+
+ /* VZ & Guest features */
+ struct guest_info guest;
+ unsigned int gtoffset_mask;
+ unsigned int guestid_mask;
} __attribute__((aligned(SMP_CACHE_BYTES)));
extern struct cpuinfo_mips cpu_data[];
@@ -125,10 +143,31 @@ struct proc_cpuinfo_notifier_args {
unsigned long n;
};
-#ifdef CONFIG_MIPS_MT_SMP
+#if defined(CONFIG_MIPS_MT_SMP) || defined(CONFIG_CPU_MIPSR6)
# define cpu_vpe_id(cpuinfo) ((cpuinfo)->vpe_id)
#else
# define cpu_vpe_id(cpuinfo) ({ (void)cpuinfo; 0; })
#endif
+static inline unsigned long cpu_asid_inc(void)
+{
+ return 1 << CONFIG_MIPS_ASID_SHIFT;
+}
+
+static inline unsigned long cpu_asid_mask(struct cpuinfo_mips *cpuinfo)
+{
+#ifdef CONFIG_MIPS_ASID_BITS_VARIABLE
+ return cpuinfo->asid_mask;
+#endif
+ return ((1 << CONFIG_MIPS_ASID_BITS) - 1) << CONFIG_MIPS_ASID_SHIFT;
+}
+
+static inline void set_cpu_asid_mask(struct cpuinfo_mips *cpuinfo,
+ unsigned long asid_mask)
+{
+#ifdef CONFIG_MIPS_ASID_BITS_VARIABLE
+ cpuinfo->asid_mask = asid_mask;
+#endif
+}
+
#endif /* __ASM_CPU_INFO_H */
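
Since the ASID mask can now differ per CPU, callers are expected to go through these accessors rather than a compile-time ASID_MASK constant. Roughly how they are meant to be used (illustrative only, not the kernel's actual ASID-allocation logic):

/* hypothetical helper: advance an ASID while staying inside the field */
static unsigned long next_asid_sketch(unsigned int cpu, unsigned long asid)
{
	unsigned long mask = cpu_asid_mask(&cpu_data[cpu]);	/* per-CPU where variable */

	return (asid + cpu_asid_inc()) & mask;
}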
diff --git a/arch/mips/include/asm/cpu-type.h b/arch/mips/include/asm/cpu-type.h
index abee2bfd10dc..fbe1881f28fc 100644
--- a/arch/mips/include/asm/cpu-type.h
+++ b/arch/mips/include/asm/cpu-type.h
@@ -77,8 +77,13 @@ static inline int __pure __get_cpu_type(const int cpu_type)
*/
#endif
+#ifdef CONFIG_SYS_HAS_CPU_MIPS32_R6
+ case CPU_M6250:
+#endif
+
#ifdef CONFIG_SYS_HAS_CPU_MIPS64_R6
case CPU_I6400:
+ case CPU_P6600:
#endif
#ifdef CONFIG_SYS_HAS_CPU_R3000
diff --git a/arch/mips/include/asm/cpu.h b/arch/mips/include/asm/cpu.h
index a97ca97285ec..f672df8b26d0 100644
--- a/arch/mips/include/asm/cpu.h
+++ b/arch/mips/include/asm/cpu.h
@@ -42,6 +42,7 @@
#define PRID_COMP_LEXRA 0x0b0000
#define PRID_COMP_NETLOGIC 0x0c0000
#define PRID_COMP_CAVIUM 0x0d0000
+#define PRID_COMP_LOONGSON 0x140000
#define PRID_COMP_INGENIC_D0 0xd00000 /* JZ4740, JZ4750 */
#define PRID_COMP_INGENIC_D1 0xd10000 /* JZ4770, JZ4775 */
#define PRID_COMP_INGENIC_E1 0xe10000 /* JZ4780 */
@@ -118,9 +119,11 @@
#define PRID_IMP_INTERAPTIV_MP 0xa100
#define PRID_IMP_PROAPTIV_UP 0xa200
#define PRID_IMP_PROAPTIV_MP 0xa300
+#define PRID_IMP_P6600 0xa400
#define PRID_IMP_M5150 0xa700
#define PRID_IMP_P5600 0xa800
#define PRID_IMP_I6400 0xa900
+#define PRID_IMP_M6250 0xab00
/*
* These are the PRID's for when 23:16 == PRID_COMP_SIBYTE
@@ -169,6 +172,8 @@
#define PRID_IMP_CAVIUM_CNF71XX 0x9400
#define PRID_IMP_CAVIUM_CN78XX 0x9500
#define PRID_IMP_CAVIUM_CN70XX 0x9600
+#define PRID_IMP_CAVIUM_CN73XX 0x9700
+#define PRID_IMP_CAVIUM_CNF75XX 0x9800
/*
* These are the PRID's for when 23:16 == PRID_COMP_INGENIC_*
@@ -237,9 +242,10 @@
#define PRID_REV_LOONGSON1B 0x0020
#define PRID_REV_LOONGSON2E 0x0002
#define PRID_REV_LOONGSON2F 0x0003
-#define PRID_REV_LOONGSON3A 0x0005
+#define PRID_REV_LOONGSON3A_R1 0x0005
#define PRID_REV_LOONGSON3B_R1 0x0006
#define PRID_REV_LOONGSON3B_R2 0x0007
+#define PRID_REV_LOONGSON3A_R2 0x0008
/*
* Older processors used to encode processor version and revision in two
@@ -307,8 +313,8 @@ enum cpu_type_enum {
CPU_4KC, CPU_4KEC, CPU_4KSC, CPU_24K, CPU_34K, CPU_1004K, CPU_74K,
CPU_ALCHEMY, CPU_PR4450, CPU_BMIPS32, CPU_BMIPS3300, CPU_BMIPS4350,
CPU_BMIPS4380, CPU_BMIPS5000, CPU_JZRISC, CPU_LOONGSON1, CPU_M14KC,
- CPU_M14KEC, CPU_INTERAPTIV, CPU_P5600, CPU_PROAPTIV, CPU_1074K, CPU_M5150,
- CPU_I6400,
+ CPU_M14KEC, CPU_INTERAPTIV, CPU_P5600, CPU_PROAPTIV, CPU_1074K,
+ CPU_M5150, CPU_I6400, CPU_P6600, CPU_M6250,
/*
* MIPS64 class processors
@@ -346,48 +352,68 @@ enum cpu_type_enum {
MIPS_CPU_ISA_M64R6)
/*
+ * Private version of BIT_ULL() to escape include file recursion hell.
+ * We soon will have to switch to another mechanism that will work with
+ * more than 64 bits anyway.
+ */
+#define MBIT_ULL(bit) (1ULL << (bit))
+
+/*
* CPU Option encodings
*/
-#define MIPS_CPU_TLB 0x00000001ull /* CPU has TLB */
-#define MIPS_CPU_4KEX 0x00000002ull /* "R4K" exception model */
-#define MIPS_CPU_3K_CACHE 0x00000004ull /* R3000-style caches */
-#define MIPS_CPU_4K_CACHE 0x00000008ull /* R4000-style caches */
-#define MIPS_CPU_TX39_CACHE 0x00000010ull /* TX3900-style caches */
-#define MIPS_CPU_FPU 0x00000020ull /* CPU has FPU */
-#define MIPS_CPU_32FPR 0x00000040ull /* 32 dbl. prec. FP registers */
-#define MIPS_CPU_COUNTER 0x00000080ull /* Cycle count/compare */
-#define MIPS_CPU_WATCH 0x00000100ull /* watchpoint registers */
-#define MIPS_CPU_DIVEC 0x00000200ull /* dedicated interrupt vector */
-#define MIPS_CPU_VCE 0x00000400ull /* virt. coherence conflict possible */
-#define MIPS_CPU_CACHE_CDEX_P 0x00000800ull /* Create_Dirty_Exclusive CACHE op */
-#define MIPS_CPU_CACHE_CDEX_S 0x00001000ull /* ... same for seconary cache ... */
-#define MIPS_CPU_MCHECK 0x00002000ull /* Machine check exception */
-#define MIPS_CPU_EJTAG 0x00004000ull /* EJTAG exception */
-#define MIPS_CPU_NOFPUEX 0x00008000ull /* no FPU exception */
-#define MIPS_CPU_LLSC 0x00010000ull /* CPU has ll/sc instructions */
-#define MIPS_CPU_INCLUSIVE_CACHES 0x00020000ull /* P-cache subset enforced */
-#define MIPS_CPU_PREFETCH 0x00040000ull /* CPU has usable prefetch */
-#define MIPS_CPU_VINT 0x00080000ull /* CPU supports MIPSR2 vectored interrupts */
-#define MIPS_CPU_VEIC 0x00100000ull /* CPU supports MIPSR2 external interrupt controller mode */
-#define MIPS_CPU_ULRI 0x00200000ull /* CPU has ULRI feature */
-#define MIPS_CPU_PCI 0x00400000ull /* CPU has Perf Ctr Int indicator */
-#define MIPS_CPU_RIXI 0x00800000ull /* CPU has TLB Read/eXec Inhibit */
-#define MIPS_CPU_MICROMIPS 0x01000000ull /* CPU has microMIPS capability */
-#define MIPS_CPU_TLBINV 0x02000000ull /* CPU supports TLBINV/F */
-#define MIPS_CPU_SEGMENTS 0x04000000ull /* CPU supports Segmentation Control registers */
-#define MIPS_CPU_EVA 0x80000000ull /* CPU supports Enhanced Virtual Addressing */
-#define MIPS_CPU_HTW 0x100000000ull /* CPU support Hardware Page Table Walker */
-#define MIPS_CPU_RIXIEX 0x200000000ull /* CPU has unique exception codes for {Read, Execute}-Inhibit exceptions */
-#define MIPS_CPU_MAAR 0x400000000ull /* MAAR(I) registers are present */
-#define MIPS_CPU_FRE 0x800000000ull /* FRE & UFE bits implemented */
-#define MIPS_CPU_RW_LLB 0x1000000000ull /* LLADDR/LLB writes are allowed */
-#define MIPS_CPU_XPA 0x2000000000ull /* CPU supports Extended Physical Addressing */
-#define MIPS_CPU_CDMM 0x4000000000ull /* CPU has Common Device Memory Map */
-#define MIPS_CPU_BP_GHIST 0x8000000000ull /* R12K+ Branch Prediction Global History */
-#define MIPS_CPU_SP 0x10000000000ull /* Small (1KB) page support */
-#define MIPS_CPU_FTLB 0x20000000000ull /* CPU has Fixed-page-size TLB */
-#define MIPS_CPU_NAN_LEGACY 0x40000000000ull /* Legacy NaN implemented */
-#define MIPS_CPU_NAN_2008 0x80000000000ull /* 2008 NaN implemented */
+#define MIPS_CPU_TLB MBIT_ULL( 0) /* CPU has TLB */
+#define MIPS_CPU_4KEX MBIT_ULL( 1) /* "R4K" exception model */
+#define MIPS_CPU_3K_CACHE MBIT_ULL( 2) /* R3000-style caches */
+#define MIPS_CPU_4K_CACHE MBIT_ULL( 3) /* R4000-style caches */
+#define MIPS_CPU_TX39_CACHE MBIT_ULL( 4) /* TX3900-style caches */
+#define MIPS_CPU_FPU MBIT_ULL( 5) /* CPU has FPU */
+#define MIPS_CPU_32FPR MBIT_ULL( 6) /* 32 dbl. prec. FP registers */
+#define MIPS_CPU_COUNTER MBIT_ULL( 7) /* Cycle count/compare */
+#define MIPS_CPU_WATCH MBIT_ULL( 8) /* watchpoint registers */
+#define MIPS_CPU_DIVEC MBIT_ULL( 9) /* dedicated interrupt vector */
+#define MIPS_CPU_VCE MBIT_ULL(10) /* virt. coherence conflict possible */
+#define MIPS_CPU_CACHE_CDEX_P MBIT_ULL(11) /* Create_Dirty_Exclusive CACHE op */
+#define MIPS_CPU_CACHE_CDEX_S MBIT_ULL(12) /* ... same for seconary cache ... */
+#define MIPS_CPU_MCHECK MBIT_ULL(13) /* Machine check exception */
+#define MIPS_CPU_EJTAG MBIT_ULL(14) /* EJTAG exception */
+#define MIPS_CPU_NOFPUEX MBIT_ULL(15) /* no FPU exception */
+#define MIPS_CPU_LLSC MBIT_ULL(16) /* CPU has ll/sc instructions */
+#define MIPS_CPU_INCLUSIVE_CACHES MBIT_ULL(17) /* P-cache subset enforced */
+#define MIPS_CPU_PREFETCH MBIT_ULL(18) /* CPU has usable prefetch */
+#define MIPS_CPU_VINT MBIT_ULL(19) /* CPU supports MIPSR2 vectored interrupts */
+#define MIPS_CPU_VEIC MBIT_ULL(20) /* CPU supports MIPSR2 external interrupt controller mode */
+#define MIPS_CPU_ULRI MBIT_ULL(21) /* CPU has ULRI feature */
+#define MIPS_CPU_PCI MBIT_ULL(22) /* CPU has Perf Ctr Int indicator */
+#define MIPS_CPU_RIXI MBIT_ULL(23) /* CPU has TLB Read/eXec Inhibit */
+#define MIPS_CPU_MICROMIPS MBIT_ULL(24) /* CPU has microMIPS capability */
+#define MIPS_CPU_TLBINV MBIT_ULL(25) /* CPU supports TLBINV/F */
+#define MIPS_CPU_SEGMENTS MBIT_ULL(26) /* CPU supports Segmentation Control registers */
+#define MIPS_CPU_EVA MBIT_ULL(27) /* CPU supports Enhanced Virtual Addressing */
+#define MIPS_CPU_HTW MBIT_ULL(28) /* CPU support Hardware Page Table Walker */
+#define MIPS_CPU_RIXIEX MBIT_ULL(29) /* CPU has unique exception codes for {Read, Execute}-Inhibit exceptions */
+#define MIPS_CPU_MAAR MBIT_ULL(30) /* MAAR(I) registers are present */
+#define MIPS_CPU_FRE MBIT_ULL(31) /* FRE & UFE bits implemented */
+#define MIPS_CPU_RW_LLB MBIT_ULL(32) /* LLADDR/LLB writes are allowed */
+#define MIPS_CPU_LPA MBIT_ULL(33) /* CPU supports Large Physical Addressing */
+#define MIPS_CPU_CDMM MBIT_ULL(34) /* CPU has Common Device Memory Map */
+#define MIPS_CPU_BP_GHIST MBIT_ULL(35) /* R12K+ Branch Prediction Global History */
+#define MIPS_CPU_SP MBIT_ULL(36) /* Small (1KB) page support */
+#define MIPS_CPU_FTLB MBIT_ULL(37) /* CPU has Fixed-page-size TLB */
+#define MIPS_CPU_NAN_LEGACY MBIT_ULL(38) /* Legacy NaN implemented */
+#define MIPS_CPU_NAN_2008 MBIT_ULL(39) /* 2008 NaN implemented */
+#define MIPS_CPU_VP MBIT_ULL(40) /* MIPSr6 Virtual Processors (multi-threading) */
+#define MIPS_CPU_LDPTE MBIT_ULL(41) /* CPU has ldpte/lddir instructions */
+#define MIPS_CPU_MVH MBIT_ULL(42) /* CPU supports MFHC0/MTHC0 */
+#define MIPS_CPU_EBASE_WG MBIT_ULL(43) /* CPU has EBase.WG */
+#define MIPS_CPU_BADINSTR MBIT_ULL(44) /* CPU has BadInstr register */
+#define MIPS_CPU_BADINSTRP MBIT_ULL(45) /* CPU has BadInstrP register */
+#define MIPS_CPU_CTXTC MBIT_ULL(46) /* CPU has [X]ConfigContext registers */
+#define MIPS_CPU_PERF MBIT_ULL(47) /* CPU has MIPS performance counters */
+#define MIPS_CPU_GUESTCTL0EXT MBIT_ULL(48) /* CPU has VZ GuestCtl0Ext register */
+#define MIPS_CPU_GUESTCTL1 MBIT_ULL(49) /* CPU has VZ GuestCtl1 register */
+#define MIPS_CPU_GUESTCTL2 MBIT_ULL(50) /* CPU has VZ GuestCtl2 register */
+#define MIPS_CPU_GUESTID MBIT_ULL(51) /* CPU uses VZ ASE GuestID feature */
+#define MIPS_CPU_DRG MBIT_ULL(52) /* CPU has VZ Direct Root to Guest (DRG) */
/*
* CPU ASE encodings
@@ -401,5 +427,6 @@ enum cpu_type_enum {
#define MIPS_ASE_DSP2P 0x00000040 /* Signal Processing ASE Rev 2 */
#define MIPS_ASE_VZ 0x00000080 /* Virtualization ASE */
#define MIPS_ASE_MSA 0x00000100 /* MIPS SIMD Architecture */
+#define MIPS_ASE_DSP3 0x00000200 /* Signal Processing ASE Rev 3*/
#endif /* _ASM_CPU_H */
diff --git a/arch/mips/include/asm/elf.h b/arch/mips/include/asm/elf.h
index e090fc388e02..f5f45717968e 100644
--- a/arch/mips/include/asm/elf.h
+++ b/arch/mips/include/asm/elf.h
@@ -111,6 +111,11 @@
#define R_MIPS_CALLHI16 30
#define R_MIPS_CALLLO16 31
/*
+ * Introduced for MIPSr6.
+ */
+#define R_MIPS_PC21_S2 60
+#define R_MIPS_PC26_S2 61
+/*
* This range is reserved for vendor specific relocations.
*/
#define R_MIPS_LOVENDOR 100
@@ -170,16 +175,14 @@
#define SHF_MIPS_NAMES 0x02000000
#define SHF_MIPS_NODUPES 0x01000000
-#ifndef ELF_ARCH
-/* ELF register definitions */
-#define ELF_NGREG 45
-#define ELF_NFPREG 33
-
-typedef unsigned long elf_greg_t;
-typedef elf_greg_t elf_gregset_t[ELF_NGREG];
-
-typedef double elf_fpreg_t;
-typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
+#define MIPS_ABI_FP_ANY 0 /* FP ABI doesn't matter */
+#define MIPS_ABI_FP_DOUBLE 1 /* -mdouble-float */
+#define MIPS_ABI_FP_SINGLE 2 /* -msingle-float */
+#define MIPS_ABI_FP_SOFT 3 /* -msoft-float */
+#define MIPS_ABI_FP_OLD_64 4 /* -mips32r2 -mfp64 */
+#define MIPS_ABI_FP_XX 5 /* -mfpxx */
+#define MIPS_ABI_FP_64 6 /* -mips32r2 -mfp64 */
+#define MIPS_ABI_FP_64A 7 /* -mips32r2 -mfp64 -mno-odd-spreg */
struct mips_elf_abiflags_v0 {
uint16_t version; /* Version of flags structure */
@@ -196,16 +199,54 @@ struct mips_elf_abiflags_v0 {
uint32_t flags2;
};
-#define MIPS_ABI_FP_ANY 0 /* FP ABI doesn't matter */
-#define MIPS_ABI_FP_DOUBLE 1 /* -mdouble-float */
-#define MIPS_ABI_FP_SINGLE 2 /* -msingle-float */
-#define MIPS_ABI_FP_SOFT 3 /* -msoft-float */
-#define MIPS_ABI_FP_OLD_64 4 /* -mips32r2 -mfp64 */
-#define MIPS_ABI_FP_XX 5 /* -mfpxx */
-#define MIPS_ABI_FP_64 6 /* -mips32r2 -mfp64 */
-#define MIPS_ABI_FP_64A 7 /* -mips32r2 -mfp64 -mno-odd-spreg */
+#ifndef ELF_ARCH
+/* ELF register definitions */
+#define ELF_NGREG 45
+#define ELF_NFPREG 33
+
+typedef unsigned long elf_greg_t;
+typedef elf_greg_t elf_gregset_t[ELF_NGREG];
+
+typedef double elf_fpreg_t;
+typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
#ifdef CONFIG_32BIT
+/*
+ * This is used to ensure we don't load something for the wrong architecture.
+ */
+#define elf_check_arch elfo32_check_arch
+
+/*
+ * These are used to set parameters in the core dumps.
+ */
+#define ELF_CLASS ELFCLASS32
+
+#endif /* CONFIG_32BIT */
+
+#ifdef CONFIG_64BIT
+/*
+ * This is used to ensure we don't load something for the wrong architecture.
+ */
+#define elf_check_arch elfn64_check_arch
+
+/*
+ * These are used to set parameters in the core dumps.
+ */
+#define ELF_CLASS ELFCLASS64
+
+#endif /* CONFIG_64BIT */
+
+/*
+ * These are used to set parameters in the core dumps.
+ */
+#ifdef __MIPSEB__
+#define ELF_DATA ELFDATA2MSB
+#elif defined(__MIPSEL__)
+#define ELF_DATA ELFDATA2LSB
+#endif
+#define ELF_ARCH EM_MIPS
+
+#endif /* !defined(ELF_ARCH) */
/*
* In order to be sure that we don't attempt to execute an O32 binary which
@@ -219,10 +260,15 @@ struct mips_elf_abiflags_v0 {
# define __MIPS_O32_FP64_MUST_BE_ZERO EF_MIPS_FP64
#endif
+#define mips_elf_check_machine(x) ((x)->e_machine == EM_MIPS)
+
+#define vmcore_elf32_check_arch mips_elf_check_machine
+#define vmcore_elf64_check_arch mips_elf_check_machine
+
/*
- * This is used to ensure we don't load something for the wrong architecture.
+ * Return non-zero if HDR identifies an o32 ELF binary.
*/
-#define elf_check_arch(hdr) \
+#define elfo32_check_arch(hdr) \
({ \
int __res = 1; \
struct elfhdr *__h = (hdr); \
@@ -243,17 +289,9 @@ struct mips_elf_abiflags_v0 {
})
/*
- * These are used to set parameters in the core dumps.
- */
-#define ELF_CLASS ELFCLASS32
-
-#endif /* CONFIG_32BIT */
-
-#ifdef CONFIG_64BIT
-/*
- * This is used to ensure we don't load something for the wrong architecture.
+ * Return non-zero if HDR identifies an n64 ELF binary.
*/
-#define elf_check_arch(hdr) \
+#define elfn64_check_arch(hdr) \
({ \
int __res = 1; \
struct elfhdr *__h = (hdr); \
@@ -267,28 +305,23 @@ struct mips_elf_abiflags_v0 {
})
/*
- * These are used to set parameters in the core dumps.
+ * Return non-zero if HDR identifies an n32 ELF binary.
*/
-#define ELF_CLASS ELFCLASS64
-
-#endif /* CONFIG_64BIT */
-
-/*
- * These are used to set parameters in the core dumps.
- */
-#ifdef __MIPSEB__
-#define ELF_DATA ELFDATA2MSB
-#elif defined(__MIPSEL__)
-#define ELF_DATA ELFDATA2LSB
-#endif
-#define ELF_ARCH EM_MIPS
-
-#endif /* !defined(ELF_ARCH) */
-
-#define mips_elf_check_machine(x) ((x)->e_machine == EM_MIPS)
-
-#define vmcore_elf32_check_arch mips_elf_check_machine
-#define vmcore_elf64_check_arch mips_elf_check_machine
+#define elfn32_check_arch(hdr) \
+({ \
+ int __res = 1; \
+ struct elfhdr *__h = (hdr); \
+ \
+ if (!mips_elf_check_machine(__h)) \
+ __res = 0; \
+ if (__h->e_ident[EI_CLASS] != ELFCLASS32) \
+ __res = 0; \
+ if (((__h->e_flags & EF_MIPS_ABI2) == 0) || \
+ ((__h->e_flags & EF_MIPS_ABI) != 0)) \
+ __res = 0; \
+ \
+ __res; \
+})
struct mips_abi;
@@ -300,17 +333,16 @@ extern struct mips_abi mips_abi_n32;
#define SET_PERSONALITY2(ex, state) \
do { \
- if (personality(current->personality) != PER_LINUX) \
- set_personality(PER_LINUX); \
- \
clear_thread_flag(TIF_HYBRID_FPREGS); \
set_thread_flag(TIF_32BIT_FPREGS); \
\
- mips_set_personality_fp(state); \
- \
current->thread.abi = &mips_abi; \
\
+ mips_set_personality_fp(state); \
mips_set_personality_nan(state); \
+ \
+ if (personality(current->personality) != PER_LINUX) \
+ set_personality(PER_LINUX); \
} while (0)
#endif /* CONFIG_32BIT */
@@ -321,6 +353,7 @@ do { \
#define __SET_PERSONALITY32_N32() \
do { \
set_thread_flag(TIF_32BIT_ADDR); \
+ \
current->thread.abi = &mips_abi_n32; \
} while (0)
#else
@@ -336,9 +369,9 @@ do { \
clear_thread_flag(TIF_HYBRID_FPREGS); \
set_thread_flag(TIF_32BIT_FPREGS); \
\
- mips_set_personality_fp(state); \
- \
current->thread.abi = &mips_abi_32; \
+ \
+ mips_set_personality_fp(state); \
} while (0)
#else
#define __SET_PERSONALITY32_O32(ex, state) \
diff --git a/arch/mips/include/asm/hazards.h b/arch/mips/include/asm/hazards.h
index 7b99efd31074..e0fecf206f2c 100644
--- a/arch/mips/include/asm/hazards.h
+++ b/arch/mips/include/asm/hazards.h
@@ -22,7 +22,8 @@
/*
* TLB hazards
*/
-#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6) && !defined(CONFIG_CPU_CAVIUM_OCTEON)
+#if (defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)) && \
+ !defined(CONFIG_CPU_CAVIUM_OCTEON) && !defined(CONFIG_LOONGSON3_ENHANCEMENT)
/*
* MIPSR2 defines ehb for hazard avoidance
@@ -57,8 +58,8 @@
* address of a label as argument to inline assembler. Gas otoh has the
* annoying difference between la and dla which are only usable for 32-bit
* rsp. 64-bit code, so can't be used without conditional compilation.
- * The alterantive is switching the assembler to 64-bit code which happens
- * to work right even for 32-bit code ...
+ * The alternative is switching the assembler to 64-bit code which happens
+ * to work right even for 32-bit code...
*/
#define instruction_hazard() \
do { \
@@ -132,8 +133,8 @@ do { \
* address of a label as argument to inline assembler. Gas otoh has the
* annoying difference between la and dla which are only usable for 32-bit
* rsp. 64-bit code, so can't be used without conditional compilation.
- * The alterantive is switching the assembler to 64-bit code which happens
- * to work right even for 32-bit code ...
+ * The alternative is switching the assembler to 64-bit code which happens
+ * to work right even for 32-bit code...
*/
#define __instruction_hazard() \
do { \
@@ -155,8 +156,8 @@ do { \
} while (0)
#elif defined(CONFIG_MIPS_ALCHEMY) || defined(CONFIG_CPU_CAVIUM_OCTEON) || \
- defined(CONFIG_CPU_LOONGSON2) || defined(CONFIG_CPU_R10000) || \
- defined(CONFIG_CPU_R5500) || defined(CONFIG_CPU_XLR)
+ defined(CONFIG_CPU_LOONGSON2) || defined(CONFIG_LOONGSON3_ENHANCEMENT) || \
+ defined(CONFIG_CPU_R10000) || defined(CONFIG_CPU_R5500) || defined(CONFIG_CPU_XLR)
/*
* R10000 rocks - all hazards handled in hardware, so this becomes a nobrainer.
diff --git a/arch/mips/include/asm/highmem.h b/arch/mips/include/asm/highmem.h
index 01880b34a209..64f2500d891b 100644
--- a/arch/mips/include/asm/highmem.h
+++ b/arch/mips/include/asm/highmem.h
@@ -19,8 +19,10 @@
#ifdef __KERNEL__
+#include <linux/bug.h>
#include <linux/interrupt.h>
#include <linux/uaccess.h>
+#include <asm/cpu-features.h>
#include <asm/kmap_types.h>
/* undef for production */
@@ -50,7 +52,7 @@ extern void *kmap_atomic(struct page *page);
extern void __kunmap_atomic(void *kvaddr);
extern void *kmap_atomic_pfn(unsigned long pfn);
-#define flush_cache_kmaps() flush_cache_all()
+#define flush_cache_kmaps() BUG_ON(cpu_has_dc_aliases)
extern void kmap_init(void);
diff --git a/arch/mips/include/asm/io.h b/arch/mips/include/asm/io.h
index 2b4dc7ad53b8..ecabc00c1e66 100644
--- a/arch/mips/include/asm/io.h
+++ b/arch/mips/include/asm/io.h
@@ -304,10 +304,10 @@ static inline void iounmap(const volatile void __iomem *addr)
#undef __IS_KSEG1
}
-#ifdef CONFIG_CPU_CAVIUM_OCTEON
-#define war_octeon_io_reorder_wmb() wmb()
+#if defined(CONFIG_CPU_CAVIUM_OCTEON) || defined(CONFIG_LOONGSON3_ENHANCEMENT)
+#define war_io_reorder_wmb() wmb()
#else
-#define war_octeon_io_reorder_wmb() do { } while (0)
+#define war_io_reorder_wmb() do { } while (0)
#endif
#define __BUILD_MEMORY_SINGLE(pfx, bwlq, type, irq) \
@@ -318,7 +318,7 @@ static inline void pfx##write##bwlq(type val, \
volatile type *__mem; \
type __val; \
\
- war_octeon_io_reorder_wmb(); \
+ war_io_reorder_wmb(); \
\
__mem = (void *)__swizzle_addr_##bwlq((unsigned long)(mem)); \
\
@@ -387,7 +387,7 @@ static inline void pfx##out##bwlq##p(type val, unsigned long port) \
volatile type *__addr; \
type __val; \
\
- war_octeon_io_reorder_wmb(); \
+ war_io_reorder_wmb(); \
\
__addr = (void *)__swizzle_addr_##bwlq(mips_io_port_base + port); \
\
diff --git a/arch/mips/include/asm/irq_regs.h b/arch/mips/include/asm/irq_regs.h
index 33bd2a06de57..8c48d6dd1d78 100644
--- a/arch/mips/include/asm/irq_regs.h
+++ b/arch/mips/include/asm/irq_regs.h
@@ -18,4 +18,14 @@ static inline struct pt_regs *get_irq_regs(void)
return current_thread_info()->regs;
}
+static inline struct pt_regs *set_irq_regs(struct pt_regs *new_regs)
+{
+ struct pt_regs *old_regs;
+
+ old_regs = get_irq_regs();
+ current_thread_info()->regs = new_regs;
+
+ return old_regs;
+}
+
#endif /* __ASM_IRQ_REGS_H */
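
The set_irq_regs() helper added above follows the same calling pattern as on other architectures: an interrupt entry path saves the previous value, installs the new trap frame for the duration of the handler, and restores the old one on the way out, which matters for nested interrupts. A sketch of that pattern, with a made-up dispatch function name:

/* hypothetical interrupt dispatch path */
asmlinkage void plat_irq_dispatch_sketch(struct pt_regs *regs)
{
	struct pt_regs *old_regs = set_irq_regs(regs);

	/* ... decode and handle the pending interrupt(s) here ... */

	set_irq_regs(old_regs);		/* restore for the interrupted context */
}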
diff --git a/arch/mips/include/asm/irqflags.h b/arch/mips/include/asm/irqflags.h
index 65c351e328cc..9d3610be2323 100644
--- a/arch/mips/include/asm/irqflags.h
+++ b/arch/mips/include/asm/irqflags.h
@@ -41,7 +41,12 @@ static inline unsigned long arch_local_irq_save(void)
" .set push \n"
" .set reorder \n"
" .set noat \n"
+#if defined(CONFIG_CPU_LOONGSON3)
+ " mfc0 %[flags], $12 \n"
+ " di \n"
+#else
" di %[flags] \n"
+#endif
" andi %[flags], 1 \n"
" " __stringify(__irq_disable_hazard) " \n"
" .set pop \n"
diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index f6b12790716c..6733ac575da4 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -122,6 +122,7 @@ struct kvm_vcpu_stat {
u32 flush_dcache_exits;
u32 halt_successful_poll;
u32 halt_attempted_poll;
+ u32 halt_poll_invalid;
u32 halt_wakeup;
};
@@ -311,17 +312,18 @@ enum emulation_result {
#define MIPS3_PG_FRAME 0x3fffffc0
#define VPN2_MASK 0xffffe000
+#define KVM_ENTRYHI_ASID MIPS_ENTRYHI_ASID
#define TLB_IS_GLOBAL(x) (((x).tlb_lo0 & MIPS3_PG_G) && \
((x).tlb_lo1 & MIPS3_PG_G))
#define TLB_VPN2(x) ((x).tlb_hi & VPN2_MASK)
-#define TLB_ASID(x) ((x).tlb_hi & ASID_MASK)
+#define TLB_ASID(x) ((x).tlb_hi & KVM_ENTRYHI_ASID)
#define TLB_IS_VALID(x, va) (((va) & (1 << PAGE_SHIFT)) \
? ((x).tlb_lo1 & MIPS3_PG_V) \
: ((x).tlb_lo0 & MIPS3_PG_V))
#define TLB_HI_VPN2_HIT(x, y) ((TLB_VPN2(x) & ~(x).tlb_mask) == \
((y) & VPN2_MASK & ~(x).tlb_mask))
#define TLB_HI_ASID_HIT(x, y) (TLB_IS_GLOBAL(x) || \
- TLB_ASID(x) == ((y) & ASID_MASK))
+ TLB_ASID(x) == ((y) & KVM_ENTRYHI_ASID))
struct kvm_mips_tlb {
long tlb_mask;
@@ -747,7 +749,7 @@ extern enum emulation_result kvm_mips_complete_mmio_load(struct kvm_vcpu *vcpu,
uint32_t kvm_mips_read_count(struct kvm_vcpu *vcpu);
void kvm_mips_write_count(struct kvm_vcpu *vcpu, uint32_t count);
-void kvm_mips_write_compare(struct kvm_vcpu *vcpu, uint32_t compare);
+void kvm_mips_write_compare(struct kvm_vcpu *vcpu, uint32_t compare, bool ack);
void kvm_mips_init_count(struct kvm_vcpu *vcpu);
int kvm_mips_set_count_ctl(struct kvm_vcpu *vcpu, s64 count_ctl);
int kvm_mips_set_count_resume(struct kvm_vcpu *vcpu, s64 count_resume);
@@ -812,5 +814,6 @@ static inline void kvm_arch_vcpu_uninit(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
#endif /* __MIPS_KVM_HOST_H__ */
diff --git a/arch/mips/include/asm/llsc.h b/arch/mips/include/asm/llsc.h
new file mode 100644
index 000000000000..c6d17d171147
--- /dev/null
+++ b/arch/mips/include/asm/llsc.h
@@ -0,0 +1,28 @@
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Macros for 32/64-bit neutral inline assembler
+ */
+
+#ifndef __ASM_LLSC_H
+#define __ASM_LLSC_H
+
+#if _MIPS_SZLONG == 32
+#define SZLONG_LOG 5
+#define SZLONG_MASK 31UL
+#define __LL "ll "
+#define __SC "sc "
+#define __INS "ins "
+#define __EXT "ext "
+#elif _MIPS_SZLONG == 64
+#define SZLONG_LOG 6
+#define SZLONG_MASK 63UL
+#define __LL "lld "
+#define __SC "scd "
+#define __INS "dins "
+#define __EXT "dext "
+#endif
+
+#endif /* __ASM_LLSC_H */
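
The __LL/__SC (and __INS/__EXT) strings moved into this header are meant to be pasted into inline assembly so the same source emits ll/sc when longs are 32-bit and lld/scd when they are 64-bit. A rough sketch of the retry loop they typically end up in, simplified from the kernel's bitops (the real versions add .set directives, barrier variants and tighter memory constraints):

static inline void set_bit_sketch(unsigned long nr, volatile unsigned long *addr)
{
	volatile unsigned long *m = addr + (nr >> SZLONG_LOG);	/* word holding the bit */
	unsigned long mask = 1UL << (nr & SZLONG_MASK);
	unsigned long temp;

	__asm__ __volatile__(
	"1:	" __LL	"%0, %1		\n"	/* load-linked the word		*/
	"	or	%0, %2		\n"	/* set the requested bit	*/
	"	" __SC	"%0, %1		\n"	/* store-conditional		*/
	"	beqz	%0, 1b		\n"	/* reservation lost: retry	*/
	: "=&r" (temp), "+m" (*m)
	: "ir" (mask)
	: "memory");
}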
diff --git a/arch/mips/include/asm/mach-au1x00/au1xxx_dbdma.h b/arch/mips/include/asm/mach-au1x00/au1xxx_dbdma.h
index ca8077afac4a..456ddba152c4 100644
--- a/arch/mips/include/asm/mach-au1x00/au1xxx_dbdma.h
+++ b/arch/mips/include/asm/mach-au1x00/au1xxx_dbdma.h
@@ -100,7 +100,7 @@ typedef volatile struct au1xxx_ddma_desc {
u32 dscr_nxtptr; /* Next descriptor pointer (mostly) */
/*
* First 32 bytes are HW specific!!!
- * Lets have some SW data following -- make sure it's 32 bytes.
+ * Let's have some SW data following -- make sure it's 32 bytes.
*/
u32 sw_status;
u32 sw_context;
diff --git a/arch/mips/include/asm/mach-au1x00/gpio-au1300.h b/arch/mips/include/asm/mach-au1x00/gpio-au1300.h
index ce02894271c6..d607d643b973 100644
--- a/arch/mips/include/asm/mach-au1x00/gpio-au1300.h
+++ b/arch/mips/include/asm/mach-au1x00/gpio-au1300.h
@@ -140,7 +140,7 @@ static inline int au1300_gpio_getinitlvl(unsigned int gpio)
* Cases 1 and 3 are intended for boards which want to provide their own
* GPIO namespace and -operations (i.e. for example you have 8 GPIOs
* which are in part provided by spare Au1300 GPIO pins and in part by
-* an external FPGA but you still want them to be accssible in linux
+* an external FPGA but you still want them to be accessible in linux
* as gpio0-7. The board can of course use the alchemy_gpioX_* functions
* as required).
*/
diff --git a/arch/mips/include/asm/mach-bcm63xx/bcm63xx_dev_enet.h b/arch/mips/include/asm/mach-bcm63xx/bcm63xx_dev_enet.h
index 466fc85899f4..c4e856f27040 100644
--- a/arch/mips/include/asm/mach-bcm63xx/bcm63xx_dev_enet.h
+++ b/arch/mips/include/asm/mach-bcm63xx/bcm63xx_dev_enet.h
@@ -22,7 +22,7 @@ struct bcm63xx_enet_platform_data {
int has_phy_interrupt;
int phy_interrupt;
- /* if has_phy, use autonegociated pause parameters or force
+ /* if has_phy, use autonegotiated pause parameters or force
* them */
int pause_auto;
int pause_rx;
diff --git a/arch/mips/include/asm/mach-bmips/cpu-feature-overrides.h b/arch/mips/include/asm/mach-bmips/cpu-feature-overrides.h
new file mode 100644
index 000000000000..fa0583e1ce0d
--- /dev/null
+++ b/arch/mips/include/asm/mach-bmips/cpu-feature-overrides.h
@@ -0,0 +1,14 @@
+#ifndef __ASM_MACH_BMIPS_CPU_FEATURE_OVERRIDES_H
+#define __ASM_MACH_BMIPS_CPU_FEATURE_OVERRIDES_H
+
+/* Invariants across all BMIPS processors */
+#define cpu_has_vtag_icache 0
+#define cpu_icache_snoops_remote_store 1
+
+/* Processor ISA compatibility is MIPS32R1 */
+#define cpu_has_mips32r1 1
+#define cpu_has_mips32r2 0
+#define cpu_has_mips64r1 0
+#define cpu_has_mips64r2 0
+
+#endif /* __ASM_MACH_BMIPS_CPU_FEATURE_OVERRIDES_H */
diff --git a/arch/mips/include/asm/mach-bmips/ioremap.h b/arch/mips/include/asm/mach-bmips/ioremap.h
new file mode 100644
index 000000000000..29c7a7bb7080
--- /dev/null
+++ b/arch/mips/include/asm/mach-bmips/ioremap.h
@@ -0,0 +1,33 @@
+#ifndef __ASM_MACH_BMIPS_IOREMAP_H
+#define __ASM_MACH_BMIPS_IOREMAP_H
+
+#include <linux/types.h>
+
+static inline phys_addr_t fixup_bigphys_addr(phys_addr_t phys_addr, phys_addr_t size)
+{
+ return phys_addr;
+}
+
+static inline int is_bmips_internal_registers(phys_addr_t offset)
+{
+ if (offset >= 0xfff80000)
+ return 1;
+
+ return 0;
+}
+
+static inline void __iomem *plat_ioremap(phys_addr_t offset, unsigned long size,
+ unsigned long flags)
+{
+ if (is_bmips_internal_registers(offset))
+ return (void __iomem *)offset;
+
+ return NULL;
+}
+
+static inline int plat_iounmap(const volatile void __iomem *addr)
+{
+ return is_bmips_internal_registers((unsigned long)addr);
+}
+
+#endif /* __ASM_MACH_BMIPS_IOREMAP_H */
diff --git a/arch/mips/include/asm/mach-ip27/dma-coherence.h b/arch/mips/include/asm/mach-ip27/dma-coherence.h
index 1daa64412569..04d862020ac9 100644
--- a/arch/mips/include/asm/mach-ip27/dma-coherence.h
+++ b/arch/mips/include/asm/mach-ip27/dma-coherence.h
@@ -64,7 +64,7 @@ static inline void plat_post_dma_flush(struct device *dev)
static inline int plat_device_is_coherent(struct device *dev)
{
- return 1; /* IP27 non-cohernet mode is unsupported */
+ return 1; /* IP27 non-coherent mode is unsupported */
}
#endif /* __ASM_MACH_IP27_DMA_COHERENCE_H */
diff --git a/arch/mips/include/asm/mach-ip32/dma-coherence.h b/arch/mips/include/asm/mach-ip32/dma-coherence.h
index 0a0b0e2ced60..7bdf212587a0 100644
--- a/arch/mips/include/asm/mach-ip32/dma-coherence.h
+++ b/arch/mips/include/asm/mach-ip32/dma-coherence.h
@@ -86,7 +86,7 @@ static inline void plat_post_dma_flush(struct device *dev)
static inline int plat_device_is_coherent(struct device *dev)
{
- return 0; /* IP32 is non-cohernet */
+ return 0; /* IP32 is non-coherent */
}
#endif /* __ASM_MACH_IP32_DMA_COHERENCE_H */
diff --git a/arch/mips/include/asm/mach-jz4740/jz4740_nand.h b/arch/mips/include/asm/mach-jz4740/jz4740_nand.h
index 398733e3e2cf..7f7b0fc554da 100644
--- a/arch/mips/include/asm/mach-jz4740/jz4740_nand.h
+++ b/arch/mips/include/asm/mach-jz4740/jz4740_nand.h
@@ -27,7 +27,7 @@ struct jz_nand_platform_data {
unsigned char banks[JZ_NAND_NUM_BANKS];
- void (*ident_callback)(struct platform_device *, struct nand_chip *,
+ void (*ident_callback)(struct platform_device *, struct mtd_info *,
struct mtd_partition **, int *num_partitions);
};
diff --git a/arch/mips/include/asm/mach-jz4740/platform.h b/arch/mips/include/asm/mach-jz4740/platform.h
index 32cfbe6a191b..073b8bfbb3b3 100644
--- a/arch/mips/include/asm/mach-jz4740/platform.h
+++ b/arch/mips/include/asm/mach-jz4740/platform.h
@@ -19,7 +19,6 @@
#include <linux/platform_device.h>
-extern struct platform_device jz4740_usb_ohci_device;
extern struct platform_device jz4740_udc_device;
extern struct platform_device jz4740_udc_xceiv_device;
extern struct platform_device jz4740_mmc_device;
diff --git a/arch/mips/include/asm/mach-lantiq/falcon/lantiq_soc.h b/arch/mips/include/asm/mach-lantiq/falcon/lantiq_soc.h
index 98d6a2f14aaf..8e9b022c3594 100644
--- a/arch/mips/include/asm/mach-lantiq/falcon/lantiq_soc.h
+++ b/arch/mips/include/asm/mach-lantiq/falcon/lantiq_soc.h
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2010 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2010 John Crispin <john@phrozen.org>
*/
#ifndef _LTQ_FALCON_H__
@@ -22,7 +22,7 @@
/*
* during early_printk no ioremap is possible at this early stage
- * lets use KSEG1 instead
+ * let's use KSEG1 instead
*/
#define LTQ_ASC0_BASE_ADDR 0x1E100C00
#define LTQ_EARLY_ASC KSEG1ADDR(LTQ_ASC0_BASE_ADDR)
diff --git a/arch/mips/include/asm/mach-lantiq/lantiq.h b/arch/mips/include/asm/mach-lantiq/lantiq.h
index 4e5ae6523cb4..8064d7a4b33d 100644
--- a/arch/mips/include/asm/mach-lantiq/lantiq.h
+++ b/arch/mips/include/asm/mach-lantiq/lantiq.h
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2010 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2010 John Crispin <john@phrozen.org>
*/
#ifndef _LANTIQ_H__
#define _LANTIQ_H__
diff --git a/arch/mips/include/asm/mach-lantiq/lantiq_platform.h b/arch/mips/include/asm/mach-lantiq/lantiq_platform.h
index e23bf7c9a2d0..17d2fdcdaef4 100644
--- a/arch/mips/include/asm/mach-lantiq/lantiq_platform.h
+++ b/arch/mips/include/asm/mach-lantiq/lantiq_platform.h
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2010 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2010 John Crispin <john@phrozen.org>
*/
#ifndef _LANTIQ_PLATFORM_H__
diff --git a/arch/mips/include/asm/mach-lantiq/xway/irq.h b/arch/mips/include/asm/mach-lantiq/xway/irq.h
index a1471d2dd0d2..83e5f03cccb5 100644
--- a/arch/mips/include/asm/mach-lantiq/xway/irq.h
+++ b/arch/mips/include/asm/mach-lantiq/xway/irq.h
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2010 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2010 John Crispin <john@phrozen.org>
*/
#ifndef __LANTIQ_IRQ_H
diff --git a/arch/mips/include/asm/mach-lantiq/xway/lantiq_irq.h b/arch/mips/include/asm/mach-lantiq/xway/lantiq_irq.h
index 5eadfe582529..141076325307 100644
--- a/arch/mips/include/asm/mach-lantiq/xway/lantiq_irq.h
+++ b/arch/mips/include/asm/mach-lantiq/xway/lantiq_irq.h
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2010 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2010 John Crispin <john@phrozen.org>
*/
#ifndef _LANTIQ_XWAY_IRQ_H__
diff --git a/arch/mips/include/asm/mach-lantiq/xway/lantiq_soc.h b/arch/mips/include/asm/mach-lantiq/xway/lantiq_soc.h
index dd6005b75e0c..17b41bb5991f 100644
--- a/arch/mips/include/asm/mach-lantiq/xway/lantiq_soc.h
+++ b/arch/mips/include/asm/mach-lantiq/xway/lantiq_soc.h
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2010 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2010 John Crispin <john@phrozen.org>
*/
#ifndef _LTQ_XWAY_H__
@@ -75,7 +75,7 @@ extern __iomem void *ltq_cgu_membase;
/*
* during early_printk no ioremap is possible
- * lets use KSEG1 instead
+ * let's use KSEG1 instead
*/
#define LTQ_ASC1_BASE_ADDR 0x1E100C00
#define LTQ_EARLY_ASC KSEG1ADDR(LTQ_ASC1_BASE_ADDR)
diff --git a/arch/mips/include/asm/mach-lantiq/xway/xway_dma.h b/arch/mips/include/asm/mach-lantiq/xway/xway_dma.h
index 5f8693d5ab12..4901833498f7 100644
--- a/arch/mips/include/asm/mach-lantiq/xway/xway_dma.h
+++ b/arch/mips/include/asm/mach-lantiq/xway/xway_dma.h
@@ -12,7 +12,7 @@
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
*
- * Copyright (C) 2011 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2011 John Crispin <john@phrozen.org>
*/
#ifndef LTQ_DMA_H__
diff --git a/arch/mips/include/asm/mach-loongson32/cpufreq.h b/arch/mips/include/asm/mach-loongson32/cpufreq.h
index 6843fa1a608d..2f1ecb081223 100644
--- a/arch/mips/include/asm/mach-loongson32/cpufreq.h
+++ b/arch/mips/include/asm/mach-loongson32/cpufreq.h
@@ -9,7 +9,6 @@
* option) any later version.
*/
-
#ifndef __ASM_MACH_LOONGSON32_CPUFREQ_H
#define __ASM_MACH_LOONGSON32_CPUFREQ_H
diff --git a/arch/mips/include/asm/mach-loongson32/dma.h b/arch/mips/include/asm/mach-loongson32/dma.h
new file mode 100644
index 000000000000..ad1dec743ccc
--- /dev/null
+++ b/arch/mips/include/asm/mach-loongson32/dma.h
@@ -0,0 +1,25 @@
+/*
+ * Copyright (c) 2015 Zhang, Keguang <keguang.zhang@gmail.com>
+ *
+ * Loongson 1 NAND platform support.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifndef __ASM_MACH_LOONGSON32_DMA_H
+#define __ASM_MACH_LOONGSON32_DMA_H
+
+#define LS1X_DMA_CHANNEL0 0
+#define LS1X_DMA_CHANNEL1 1
+#define LS1X_DMA_CHANNEL2 2
+
+struct plat_ls1x_dma {
+ int nr_channels;
+};
+
+extern struct plat_ls1x_dma ls1b_dma_pdata;
+
+#endif /* __ASM_MACH_LOONGSON32_DMA_H */
diff --git a/arch/mips/include/asm/mach-loongson32/irq.h b/arch/mips/include/asm/mach-loongson32/irq.h
index 0d35b994e8d2..c1c744197de4 100644
--- a/arch/mips/include/asm/mach-loongson32/irq.h
+++ b/arch/mips/include/asm/mach-loongson32/irq.h
@@ -9,7 +9,6 @@
* option) any later version.
*/
-
#ifndef __ASM_MACH_LOONGSON32_IRQ_H
#define __ASM_MACH_LOONGSON32_IRQ_H
diff --git a/arch/mips/include/asm/mach-loongson32/loongson1.h b/arch/mips/include/asm/mach-loongson32/loongson1.h
index 12aa129aad80..978f6df8970a 100644
--- a/arch/mips/include/asm/mach-loongson32/loongson1.h
+++ b/arch/mips/include/asm/mach-loongson32/loongson1.h
@@ -9,7 +9,6 @@
* option) any later version.
*/
-
#ifndef __ASM_MACH_LOONGSON32_LOONGSON1_H
#define __ASM_MACH_LOONGSON32_LOONGSON1_H
@@ -18,6 +17,9 @@
/* Loongson 1 Register Bases */
#define LS1X_MUX_BASE 0x1fd00420
#define LS1X_INTC_BASE 0x1fd01040
+#define LS1X_GPIO0_BASE 0x1fd010c0
+#define LS1X_GPIO1_BASE 0x1fd010c4
+#define LS1X_DMAC_BASE 0x1fd01160
#define LS1X_EHCI_BASE 0x1fe00000
#define LS1X_OHCI_BASE 0x1fe08000
#define LS1X_GMAC0_BASE 0x1fe10000
diff --git a/arch/mips/include/asm/mach-loongson32/nand.h b/arch/mips/include/asm/mach-loongson32/nand.h
new file mode 100644
index 000000000000..e274912e9de1
--- /dev/null
+++ b/arch/mips/include/asm/mach-loongson32/nand.h
@@ -0,0 +1,30 @@
+/*
+ * Copyright (c) 2015 Zhang, Keguang <keguang.zhang@gmail.com>
+ *
+ * Loongson 1 NAND platform support.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; either version 2 of the License, or (at your
+ * option) any later version.
+ */
+
+#ifndef __ASM_MACH_LOONGSON32_NAND_H
+#define __ASM_MACH_LOONGSON32_NAND_H
+
+#include <linux/dmaengine.h>
+#include <linux/mtd/partitions.h>
+
+struct plat_ls1x_nand {
+ struct mtd_partition *parts;
+ unsigned int nr_parts;
+
+ int hold_cycle;
+ int wait_cycle;
+};
+
+extern struct plat_ls1x_nand ls1b_nand_pdata;
+
+bool ls1x_dma_filter_fn(struct dma_chan *chan, void *param);
+
+#endif /* __ASM_MACH_LOONGSON32_NAND_H */
diff --git a/arch/mips/include/asm/mach-loongson32/platform.h b/arch/mips/include/asm/mach-loongson32/platform.h
index c32f03f3f72c..672531aa9bef 100644
--- a/arch/mips/include/asm/mach-loongson32/platform.h
+++ b/arch/mips/include/asm/mach-loongson32/platform.h
@@ -7,20 +7,28 @@
* option) any later version.
*/
-
#ifndef __ASM_MACH_LOONGSON32_PLATFORM_H
#define __ASM_MACH_LOONGSON32_PLATFORM_H
#include <linux/platform_device.h>
+#include <dma.h>
+#include <nand.h>
+
extern struct platform_device ls1x_uart_pdev;
extern struct platform_device ls1x_cpufreq_pdev;
+extern struct platform_device ls1x_dma_pdev;
extern struct platform_device ls1x_eth0_pdev;
extern struct platform_device ls1x_eth1_pdev;
extern struct platform_device ls1x_ehci_pdev;
+extern struct platform_device ls1x_gpio0_pdev;
+extern struct platform_device ls1x_gpio1_pdev;
+extern struct platform_device ls1x_nand_pdev;
extern struct platform_device ls1x_rtc_pdev;
-extern void __init ls1x_clk_init(void);
-extern void __init ls1x_serial_setup(struct platform_device *pdev);
+void __init ls1x_clk_init(void);
+void __init ls1x_dma_set_platdata(struct plat_ls1x_dma *pdata);
+void __init ls1x_nand_set_platdata(struct plat_ls1x_nand *pdata);
+void __init ls1x_serial_set_uartclk(struct platform_device *pdev);
#endif /* __ASM_MACH_LOONGSON32_PLATFORM_H */
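The reworked platform.h above now pulls in dma.h and nand.h and exposes ls1x_dma_set_platdata()/ls1x_nand_set_platdata() next to the new ls1x_dma_pdev, ls1x_gpio*_pdev and ls1x_nand_pdev devices. A hedged sketch of how a Loongson 1 board file might use them; the partition layout, timing values and function name are invented for illustration and are not taken from this patch:

#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/sizes.h>
#include <linux/mtd/partitions.h>
#include <platform.h>		/* pulls in dma.h and nand.h as above */

/* illustrative partition table -- names, offsets and sizes are made up */
static struct mtd_partition board_nand_parts[] = {
	{ .name = "kernel", .offset = 0,                  .size = 8 * SZ_1M },
	{ .name = "rootfs", .offset = MTDPART_OFS_APPEND, .size = MTDPART_SIZ_FULL },
};

static struct plat_ls1x_nand board_nand_pdata = {
	.parts      = board_nand_parts,
	.nr_parts   = ARRAY_SIZE(board_nand_parts),
	.hold_cycle = 0x2,	/* timing values are assumptions, not from the patch */
	.wait_cycle = 0xc,
};

static struct plat_ls1x_dma board_dma_pdata = {
	.nr_channels = 3,	/* LS1X_DMA_CHANNEL0..2 */
};

static void __init board_setup(void)
{
	ls1x_dma_set_platdata(&board_dma_pdata);
	ls1x_nand_set_platdata(&board_nand_pdata);
	/* ls1x_dma_pdev, ls1x_nand_pdev, ls1x_gpio*_pdev etc. are then
	 * registered through the board's usual platform_add_devices() list */
}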
diff --git a/arch/mips/include/asm/mach-loongson32/regs-clk.h b/arch/mips/include/asm/mach-loongson32/regs-clk.h
index 1f5a715ac841..4d56fc38f0c4 100644
--- a/arch/mips/include/asm/mach-loongson32/regs-clk.h
+++ b/arch/mips/include/asm/mach-loongson32/regs-clk.h
@@ -19,18 +19,18 @@
#define LS1X_CLK_PLL_DIV LS1X_CLK_REG(0x4)
/* Clock PLL Divisor Register Bits */
-#define DIV_DC_EN (0x1 << 31)
-#define DIV_DC_RST (0x1 << 30)
-#define DIV_CPU_EN (0x1 << 25)
-#define DIV_CPU_RST (0x1 << 24)
-#define DIV_DDR_EN (0x1 << 19)
-#define DIV_DDR_RST (0x1 << 18)
-#define RST_DC_EN (0x1 << 5)
-#define RST_DC (0x1 << 4)
-#define RST_DDR_EN (0x1 << 3)
-#define RST_DDR (0x1 << 2)
-#define RST_CPU_EN (0x1 << 1)
-#define RST_CPU 0x1
+#define DIV_DC_EN BIT(31)
+#define DIV_DC_RST BIT(30)
+#define DIV_CPU_EN BIT(25)
+#define DIV_CPU_RST BIT(24)
+#define DIV_DDR_EN BIT(19)
+#define DIV_DDR_RST BIT(18)
+#define RST_DC_EN BIT(5)
+#define RST_DC BIT(4)
+#define RST_DDR_EN BIT(3)
+#define RST_DDR BIT(2)
+#define RST_CPU_EN BIT(1)
+#define RST_CPU BIT(0)
#define DIV_DC_SHIFT 26
#define DIV_CPU_SHIFT 20
diff --git a/arch/mips/include/asm/mach-loongson32/regs-mux.h b/arch/mips/include/asm/mach-loongson32/regs-mux.h
index 8302d92f2da2..7c394f93cb9e 100644
--- a/arch/mips/include/asm/mach-loongson32/regs-mux.h
+++ b/arch/mips/include/asm/mach-loongson32/regs-mux.h
@@ -19,49 +19,49 @@
#define LS1X_MUX_CTRL1 LS1X_MUX_REG(0x4)
/* MUX CTRL0 Register Bits */
-#define UART0_USE_PWM23 (0x1 << 28)
-#define UART0_USE_PWM01 (0x1 << 27)
-#define UART1_USE_LCD0_5_6_11 (0x1 << 26)
-#define I2C2_USE_CAN1 (0x1 << 25)
-#define I2C1_USE_CAN0 (0x1 << 24)
-#define NAND3_USE_UART5 (0x1 << 23)
-#define NAND3_USE_UART4 (0x1 << 22)
-#define NAND3_USE_UART1_DAT (0x1 << 21)
-#define NAND3_USE_UART1_CTS (0x1 << 20)
-#define NAND3_USE_PWM23 (0x1 << 19)
-#define NAND3_USE_PWM01 (0x1 << 18)
-#define NAND2_USE_UART5 (0x1 << 17)
-#define NAND2_USE_UART4 (0x1 << 16)
-#define NAND2_USE_UART1_DAT (0x1 << 15)
-#define NAND2_USE_UART1_CTS (0x1 << 14)
-#define NAND2_USE_PWM23 (0x1 << 13)
-#define NAND2_USE_PWM01 (0x1 << 12)
-#define NAND1_USE_UART5 (0x1 << 11)
-#define NAND1_USE_UART4 (0x1 << 10)
-#define NAND1_USE_UART1_DAT (0x1 << 9)
-#define NAND1_USE_UART1_CTS (0x1 << 8)
-#define NAND1_USE_PWM23 (0x1 << 7)
-#define NAND1_USE_PWM01 (0x1 << 6)
-#define GMAC1_USE_UART1 (0x1 << 4)
-#define GMAC1_USE_UART0 (0x1 << 3)
-#define LCD_USE_UART0_DAT (0x1 << 2)
-#define LCD_USE_UART15 (0x1 << 1)
-#define LCD_USE_UART0 0x1
+#define UART0_USE_PWM23 BIT(28)
+#define UART0_USE_PWM01 BIT(27)
+#define UART1_USE_LCD0_5_6_11 BIT(26)
+#define I2C2_USE_CAN1 BIT(25)
+#define I2C1_USE_CAN0 BIT(24)
+#define NAND3_USE_UART5 BIT(23)
+#define NAND3_USE_UART4 BIT(22)
+#define NAND3_USE_UART1_DAT BIT(21)
+#define NAND3_USE_UART1_CTS BIT(20)
+#define NAND3_USE_PWM23 BIT(19)
+#define NAND3_USE_PWM01 BIT(18)
+#define NAND2_USE_UART5 BIT(17)
+#define NAND2_USE_UART4 BIT(16)
+#define NAND2_USE_UART1_DAT BIT(15)
+#define NAND2_USE_UART1_CTS BIT(14)
+#define NAND2_USE_PWM23 BIT(13)
+#define NAND2_USE_PWM01 BIT(12)
+#define NAND1_USE_UART5 BIT(11)
+#define NAND1_USE_UART4 BIT(10)
+#define NAND1_USE_UART1_DAT BIT(9)
+#define NAND1_USE_UART1_CTS BIT(8)
+#define NAND1_USE_PWM23 BIT(7)
+#define NAND1_USE_PWM01 BIT(6)
+#define GMAC1_USE_UART1 BIT(4)
+#define GMAC1_USE_UART0 BIT(3)
+#define LCD_USE_UART0_DAT BIT(2)
+#define LCD_USE_UART15 BIT(1)
+#define LCD_USE_UART0 BIT(0)
/* MUX CTRL1 Register Bits */
-#define USB_RESET (0x1 << 31)
-#define SPI1_CS_USE_PWM01 (0x1 << 24)
-#define SPI1_USE_CAN (0x1 << 23)
-#define DISABLE_DDR_CONFSPACE (0x1 << 20)
-#define DDR32TO16EN (0x1 << 16)
-#define GMAC1_SHUT (0x1 << 13)
-#define GMAC0_SHUT (0x1 << 12)
-#define USB_SHUT (0x1 << 11)
-#define UART1_3_USE_CAN1 (0x1 << 5)
-#define UART1_2_USE_CAN0 (0x1 << 4)
-#define GMAC1_USE_TXCLK (0x1 << 3)
-#define GMAC0_USE_TXCLK (0x1 << 2)
-#define GMAC1_USE_PWM23 (0x1 << 1)
-#define GMAC0_USE_PWM01 0x1
+#define USB_RESET BIT(31)
+#define SPI1_CS_USE_PWM01 BIT(24)
+#define SPI1_USE_CAN BIT(23)
+#define DISABLE_DDR_CONFSPACE BIT(20)
+#define DDR32TO16EN BIT(16)
+#define GMAC1_SHUT BIT(13)
+#define GMAC0_SHUT BIT(12)
+#define USB_SHUT BIT(11)
+#define UART1_3_USE_CAN1 BIT(5)
+#define UART1_2_USE_CAN0 BIT(4)
+#define GMAC1_USE_TXCLK BIT(3)
+#define GMAC0_USE_TXCLK BIT(2)
+#define GMAC1_USE_PWM23 BIT(1)
+#define GMAC0_USE_PWM01 BIT(0)
#endif /* __ASM_MACH_LOONGSON32_REGS_MUX_H */
diff --git a/arch/mips/include/asm/mach-loongson32/regs-pwm.h b/arch/mips/include/asm/mach-loongson32/regs-pwm.h
index 69f174ed13a4..4119600ce79a 100644
--- a/arch/mips/include/asm/mach-loongson32/regs-pwm.h
+++ b/arch/mips/include/asm/mach-loongson32/regs-pwm.h
@@ -19,11 +19,11 @@
#define PWM_CTRL 0xc
/* PWM Control Register Bits */
-#define CNT_RST (0x1 << 7)
-#define INT_SR (0x1 << 6)
-#define INT_EN (0x1 << 5)
-#define PWM_SINGLE (0x1 << 4)
-#define PWM_OE (0x1 << 3)
-#define CNT_EN 0x1
+#define CNT_RST BIT(7)
+#define INT_SR BIT(6)
+#define INT_EN BIT(5)
+#define PWM_SINGLE BIT(4)
+#define PWM_OE BIT(3)
+#define CNT_EN BIT(0)
#endif /* __ASM_MACH_LOONGSON32_REGS_PWM_H */
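The regs-clk.h, regs-mux.h and regs-pwm.h hunks above are mechanical conversions: each open-coded (0x1 << n) mask becomes BIT(n). With the kernel's BIT(n) defined as (1UL << (n)), every value is unchanged; the only subtlety is that BIT(31) is an unsigned long shift rather than a shift into the sign bit of an int. A small stand-alone check, with BIT() re-declared locally for illustration:

#include <assert.h>

/* local stand-in for the kernel's BIT() from <linux/bits.h> */
#define BIT(n)	(1UL << (n))

int main(void)
{
	/* regs-pwm.h: CNT_RST was (0x1 << 7), now BIT(7) -- same value */
	assert(BIT(7) == (0x1 << 7));
	/* regs-clk.h: DIV_DC_EN was (0x1 << 31); BIT(31) keeps the shift
	 * in unsigned long territory */
	assert(BIT(31) == 0x80000000UL);
	return 0;
}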
diff --git a/arch/mips/include/asm/mach-loongson64/cpu-feature-overrides.h b/arch/mips/include/asm/mach-loongson64/cpu-feature-overrides.h
index 98963c2c7be4..89328a3d44d8 100644
--- a/arch/mips/include/asm/mach-loongson64/cpu-feature-overrides.h
+++ b/arch/mips/include/asm/mach-loongson64/cpu-feature-overrides.h
@@ -16,11 +16,6 @@
#ifndef __ASM_MACH_LOONGSON64_CPU_FEATURE_OVERRIDES_H
#define __ASM_MACH_LOONGSON64_CPU_FEATURE_OVERRIDES_H
-#define cpu_dcache_line_size() 32
-#define cpu_icache_line_size() 32
-#define cpu_scache_line_size() 32
-
-
#define cpu_has_32fpr 1
#define cpu_has_3k_cache 0
#define cpu_has_4k_cache 1
@@ -31,24 +26,17 @@
#define cpu_has_counter 1
#define cpu_has_dc_aliases (PAGE_SIZE < 0x4000)
#define cpu_has_divec 0
-#define cpu_has_dsp 0
-#define cpu_has_dsp2 0
#define cpu_has_ejtag 0
-#define cpu_has_ic_fills_f_dc 0
#define cpu_has_inclusive_pcaches 1
#define cpu_has_llsc 1
#define cpu_has_mcheck 0
#define cpu_has_mdmx 0
#define cpu_has_mips16 0
-#define cpu_has_mips32r2 0
#define cpu_has_mips3d 0
-#define cpu_has_mips64r2 0
#define cpu_has_mipsmt 0
-#define cpu_has_prefetch 0
#define cpu_has_smartmips 0
#define cpu_has_tlb 1
#define cpu_has_tx39_cache 0
-#define cpu_has_userlocal 0
#define cpu_has_vce 0
#define cpu_has_veic 0
#define cpu_has_vint 0
@@ -56,6 +44,10 @@
#define cpu_has_watch 1
#define cpu_has_local_ebase 0
-#define cpu_has_wsbh IS_ENABLED(CONFIG_CPU_LOONGSON3)
+#ifdef CONFIG_CPU_LOONGSON3
+#define cpu_has_wsbh 1
+#define cpu_has_ic_fills_f_dc 1
+#define cpu_hwrena_impl_bits 0xc0000000
+#endif
#endif /* __ASM_MACH_LOONGSON64_CPU_FEATURE_OVERRIDES_H */
diff --git a/arch/mips/include/asm/mach-loongson64/kernel-entry-init.h b/arch/mips/include/asm/mach-loongson64/kernel-entry-init.h
index 3f2f84f6c401..8393bc548987 100644
--- a/arch/mips/include/asm/mach-loongson64/kernel-entry-init.h
+++ b/arch/mips/include/asm/mach-loongson64/kernel-entry-init.h
@@ -23,8 +23,15 @@
or t0, (0x1 << 7)
mtc0 t0, $16, 3
/* Set ELPA on LOONGSON3 pagegrain */
- li t0, (0x1 << 29)
+ mfc0 t0, $5, 1
+ or t0, (0x1 << 29)
mtc0 t0, $5, 1
+#ifdef CONFIG_LOONGSON3_ENHANCEMENT
+ /* Enable STFill Buffer */
+ mfc0 t0, $16, 6
+ or t0, 0x100
+ mtc0 t0, $16, 6
+#endif
_ehb
.set pop
#endif
@@ -42,8 +49,15 @@
or t0, (0x1 << 7)
mtc0 t0, $16, 3
/* Set ELPA on LOONGSON3 pagegrain */
- li t0, (0x1 << 29)
+ mfc0 t0, $5, 1
+ or t0, (0x1 << 29)
mtc0 t0, $5, 1
+#ifdef CONFIG_LOONGSON3_ENHANCEMENT
+ /* Enable STFill Buffer */
+ mfc0 t0, $16, 6
+ or t0, 0x100
+ mtc0 t0, $16, 6
+#endif
_ehb
.set pop
#endif
diff --git a/arch/mips/include/asm/mach-loongson64/loongson_hwmon.h b/arch/mips/include/asm/mach-loongson64/loongson_hwmon.h
index 4431fc54a36c..74230d0ca98b 100644
--- a/arch/mips/include/asm/mach-loongson64/loongson_hwmon.h
+++ b/arch/mips/include/asm/mach-loongson64/loongson_hwmon.h
@@ -24,7 +24,7 @@ struct temp_range {
u8 level;
};
-#define CONSTANT_SPEED_POLICY 0 /* at constent speed */
+#define CONSTANT_SPEED_POLICY 0 /* at constant speed */
#define STEP_SPEED_POLICY 1 /* use up/down arrays to describe policy */
#define KERNEL_HELPER_POLICY 2 /* kernel as a helper to fan control */
diff --git a/arch/mips/include/asm/mach-malta/kernel-entry-init.h b/arch/mips/include/asm/mach-malta/kernel-entry-init.h
index 0cf8622db27f..ab03eb3fadac 100644
--- a/arch/mips/include/asm/mach-malta/kernel-entry-init.h
+++ b/arch/mips/include/asm/mach-malta/kernel-entry-init.h
@@ -56,7 +56,7 @@
(0 << MIPS_SEGCFG_PA_SHIFT) | \
(1 << MIPS_SEGCFG_EU_SHIFT)) << 16)
or t0, t2
- mtc0 t0, $5, 2
+ mtc0 t0, CP0_SEGCTL0
/* SegCtl1 */
li t0, ((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) | \
@@ -67,7 +67,7 @@
(0 << MIPS_SEGCFG_PA_SHIFT) | \
(1 << MIPS_SEGCFG_EU_SHIFT)) << 16)
ins t0, t1, 16, 3
- mtc0 t0, $5, 3
+ mtc0 t0, CP0_SEGCTL1
/* SegCtl2 */
li t0, ((MIPS_SEGCFG_MUSUK << MIPS_SEGCFG_AM_SHIFT) | \
@@ -77,7 +77,7 @@
(4 << MIPS_SEGCFG_PA_SHIFT) | \
(1 << MIPS_SEGCFG_EU_SHIFT)) << 16)
or t0, t2
- mtc0 t0, $5, 4
+ mtc0 t0, CP0_SEGCTL2
jal mips_ihb
mfc0 t0, $16, 5
diff --git a/arch/mips/include/asm/mach-ralink/mt7620.h b/arch/mips/include/asm/mach-ralink/mt7620.h
index 455d406e8ddf..a73350b07fdf 100644
--- a/arch/mips/include/asm/mach-ralink/mt7620.h
+++ b/arch/mips/include/asm/mach-ralink/mt7620.h
@@ -7,7 +7,7 @@
*
* Copyright (C) 2008-2011 Gabor Juhos <juhosg@openwrt.org>
* Copyright (C) 2008 Imre Kaloz <kaloz@openwrt.org>
- * Copyright (C) 2013 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2013 John Crispin <john@phrozen.org>
*/
#ifndef _MT7620_REGS_H_
@@ -72,6 +72,7 @@
#define SYSCFG0_DRAM_TYPE_SDRAM 0
#define SYSCFG0_DRAM_TYPE_DDR1 1
#define SYSCFG0_DRAM_TYPE_DDR2 2
+#define SYSCFG0_DRAM_TYPE_UNKNOWN 3
#define SYSCFG0_DRAM_TYPE_DDR2_MT7628 0
#define SYSCFG0_DRAM_TYPE_DDR1_MT7628 1
diff --git a/arch/mips/include/asm/mach-ralink/mt7621.h b/arch/mips/include/asm/mach-ralink/mt7621.h
index 610b61e3f9df..a672e06fa5fd 100644
--- a/arch/mips/include/asm/mach-ralink/mt7621.h
+++ b/arch/mips/include/asm/mach-ralink/mt7621.h
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2015 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2015 John Crispin <john@phrozen.org>
*/
#ifndef _MT7621_REGS_H_
diff --git a/arch/mips/include/asm/mach-ralink/pinmux.h b/arch/mips/include/asm/mach-ralink/pinmux.h
index be106cb2e26d..ba8ac331af0c 100644
--- a/arch/mips/include/asm/mach-ralink/pinmux.h
+++ b/arch/mips/include/asm/mach-ralink/pinmux.h
@@ -3,7 +3,7 @@
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
- * Copyright (C) 2012 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2012 John Crispin <john@phrozen.org>
*/
#ifndef _RT288X_PINMUX_H__
diff --git a/arch/mips/include/asm/mach-ralink/ralink_regs.h b/arch/mips/include/asm/mach-ralink/ralink_regs.h
index 4c9fba68c8b2..9df1a53bcb36 100644
--- a/arch/mips/include/asm/mach-ralink/ralink_regs.h
+++ b/arch/mips/include/asm/mach-ralink/ralink_regs.h
@@ -1,7 +1,7 @@
/*
* Ralink SoC register definitions
*
- * Copyright (C) 2013 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2013 John Crispin <john@phrozen.org>
* Copyright (C) 2008-2010 Gabor Juhos <juhosg@openwrt.org>
* Copyright (C) 2008 Imre Kaloz <kaloz@openwrt.org>
*
diff --git a/arch/mips/include/asm/mach-ralink/rt288x.h b/arch/mips/include/asm/mach-ralink/rt288x.h
index 03ad716acb42..25ae1042d57b 100644
--- a/arch/mips/include/asm/mach-ralink/rt288x.h
+++ b/arch/mips/include/asm/mach-ralink/rt288x.h
@@ -7,7 +7,7 @@
*
* Copyright (C) 2008-2011 Gabor Juhos <juhosg@openwrt.org>
* Copyright (C) 2008 Imre Kaloz <kaloz@openwrt.org>
- * Copyright (C) 2013 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2013 John Crispin <john@phrozen.org>
*/
#ifndef _RT288X_REGS_H_
diff --git a/arch/mips/include/asm/mach-ralink/rt305x.h b/arch/mips/include/asm/mach-ralink/rt305x.h
index 2eea79331a14..ac2d65c04b5f 100644
--- a/arch/mips/include/asm/mach-ralink/rt305x.h
+++ b/arch/mips/include/asm/mach-ralink/rt305x.h
@@ -7,7 +7,7 @@
*
* Copyright (C) 2008-2011 Gabor Juhos <juhosg@openwrt.org>
* Copyright (C) 2008 Imre Kaloz <kaloz@openwrt.org>
- * Copyright (C) 2013 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2013 John Crispin <john@phrozen.org>
*/
#ifndef _RT305X_REGS_H_
diff --git a/arch/mips/include/asm/mips-cm.h b/arch/mips/include/asm/mips-cm.h
index d4635391c36a..9411a4c0bdad 100644
--- a/arch/mips/include/asm/mips-cm.h
+++ b/arch/mips/include/asm/mips-cm.h
@@ -208,6 +208,7 @@ BUILD_CM_RW(l2_config, MIPS_CM_GCB_OFS + 0x130)
BUILD_CM_RW(sys_config2, MIPS_CM_GCB_OFS + 0x150)
BUILD_CM_RW(l2_pft_control, MIPS_CM_GCB_OFS + 0x300)
BUILD_CM_RW(l2_pft_control_b, MIPS_CM_GCB_OFS + 0x308)
+BUILD_CM_RW(bev_base, MIPS_CM_GCB_OFS + 0x680)
/* Core Local & Core Other register accessor functions */
BUILD_CM_Cx_RW(reset_release, 0x00)
@@ -290,8 +291,8 @@ BUILD_CM_Cx_R_(tcid_8_priority, 0x80)
#define CM_GCR_GIC_BASE_GICEN_MSK (_ULCAST_(0x1) << 0)
/* GCR_CPC_BASE register fields */
-#define CM_GCR_CPC_BASE_CPCBASE_SHF 17
-#define CM_GCR_CPC_BASE_CPCBASE_MSK (_ULCAST_(0x7fff) << 17)
+#define CM_GCR_CPC_BASE_CPCBASE_SHF 15
+#define CM_GCR_CPC_BASE_CPCBASE_MSK (_ULCAST_(0x1ffff) << 15)
#define CM_GCR_CPC_BASE_CPCEN_SHF 0
#define CM_GCR_CPC_BASE_CPCEN_MSK (_ULCAST_(0x1) << 0)
@@ -461,7 +462,10 @@ static inline unsigned int mips_cm_max_vp_width(void)
if (mips_cm_revision() >= CM_REV_CM3)
return read_gcr_sys_config2() & CM_GCR_SYS_CONFIG2_MAXVPW_MSK;
- return smp_num_siblings;
+ if (config_enabled(CONFIG_SMP))
+ return smp_num_siblings;
+
+ return 1;
}
/**
@@ -505,7 +509,7 @@ extern void mips_cm_unlock_other(void);
#else /* !CONFIG_MIPS_CM */
-static inline void mips_cm_lock_other(unsigned int core) { }
+static inline void mips_cm_lock_other(unsigned int core, unsigned int vp) { }
static inline void mips_cm_unlock_other(void) { }
#endif /* !CONFIG_MIPS_CM */
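Among the mips-cm.h changes above, the GCR_CPC_BASE field is widened from bits 31:17 to 31:15, so the Cluster Power Controller block can now be placed on a 32 KiB boundary. A hedged sketch of decoding the register with the updated mask (constants copied from the hunk; the sample value is made up):

#include <stdint.h>
#include <stdio.h>

#define _ULCAST_(x)			(unsigned long)(x)
#define CM_GCR_CPC_BASE_CPCBASE_MSK	(_ULCAST_(0x1ffff) << 15)
#define CM_GCR_CPC_BASE_CPCEN_MSK	(_ULCAST_(0x1) << 0)

int main(void)
{
	uint32_t gcr_cpc_base = 0x1bde0001;	/* made-up register value */

	/* the base occupies bits 31:15 in place, so masking is enough */
	unsigned long base = gcr_cpc_base & CM_GCR_CPC_BASE_CPCBASE_MSK;
	int enabled = gcr_cpc_base & CM_GCR_CPC_BASE_CPCEN_MSK;

	printf("CPC base 0x%lx, enabled %d\n", base, enabled);
	return 0;
}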
diff --git a/arch/mips/include/asm/mips-cpc.h b/arch/mips/include/asm/mips-cpc.h
index e09035239e53..8c519f9827a3 100644
--- a/arch/mips/include/asm/mips-cpc.h
+++ b/arch/mips/include/asm/mips-cpc.h
@@ -106,6 +106,9 @@ BUILD_CPC_R_(revision, MIPS_CPC_GCB_OFS + 0x20)
BUILD_CPC_Cx_RW(cmd, 0x00)
BUILD_CPC_Cx_RW(stat_conf, 0x08)
BUILD_CPC_Cx_RW(other, 0x10)
+BUILD_CPC_Cx_RW(vp_stop, 0x20)
+BUILD_CPC_Cx_RW(vp_run, 0x28)
+BUILD_CPC_Cx_RW(vp_running, 0x30)
/* CPC_Cx_CMD register fields */
#define CPC_Cx_CMD_SHF 0
diff --git a/arch/mips/include/asm/mips_mt.h b/arch/mips/include/asm/mips_mt.h
index f6ba004a7711..aa4cca060e0a 100644
--- a/arch/mips/include/asm/mips_mt.h
+++ b/arch/mips/include/asm/mips_mt.h
@@ -1,5 +1,5 @@
/*
- * Definitions and decalrations for MIPS MT support that are common between
+ * Definitions and declarations for MIPS MT support that are common between
* the VSMP, and AP/SP kernel models.
*/
#ifndef __ASM_MIPS_MT_H
diff --git a/arch/mips/include/asm/mipsregs.h b/arch/mips/include/asm/mipsregs.h
index 3ad19ad04d8a..e1ca65c62f6a 100644
--- a/arch/mips/include/asm/mipsregs.h
+++ b/arch/mips/include/asm/mipsregs.h
@@ -48,6 +48,9 @@
#define CP0_CONF $3
#define CP0_CONTEXT $4
#define CP0_PAGEMASK $5
+#define CP0_SEGCTL0 $5, 2
+#define CP0_SEGCTL1 $5, 3
+#define CP0_SEGCTL2 $5, 4
#define CP0_WIRED $6
#define CP0_INFO $7
#define CP0_HWRENA $7, 0
@@ -55,8 +58,14 @@
#define CP0_BADINSTR $8, 1
#define CP0_COUNT $9
#define CP0_ENTRYHI $10
+#define CP0_GUESTCTL1 $10, 4
+#define CP0_GUESTCTL2 $10, 5
+#define CP0_GUESTCTL3 $10, 6
#define CP0_COMPARE $11
+#define CP0_GUESTCTL0EXT $11, 4
#define CP0_STATUS $12
+#define CP0_GUESTCTL0 $12, 6
+#define CP0_GTOFFSET $12, 7
#define CP0_CAUSE $13
#define CP0_EPC $14
#define CP0_PRID $15
@@ -229,6 +238,8 @@
/* MIPS32/64 EntryHI bit definitions */
#define MIPS_ENTRYHI_EHINV (_ULCAST_(1) << 10)
+#define MIPS_ENTRYHI_ASIDX (_ULCAST_(0x3) << 8)
+#define MIPS_ENTRYHI_ASID (_ULCAST_(0xff) << 0)
/*
* R4x00 interrupt enable / cause bits
@@ -390,6 +401,8 @@
#define CAUSEF_IP7 (_ULCAST_(1) << 15)
#define CAUSEB_FDCI 21
#define CAUSEF_FDCI (_ULCAST_(1) << 21)
+#define CAUSEB_WP 22
+#define CAUSEF_WP (_ULCAST_(1) << 22)
#define CAUSEB_IV 23
#define CAUSEF_IV (_ULCAST_(1) << 23)
#define CAUSEB_PCI 26
@@ -611,7 +624,8 @@
#define MIPS_CONF4_MMUEXTDEF_MMUSIZEEXT (_ULCAST_(1) << 14)
#define MIPS_CONF4_MMUEXTDEF_FTLBSIZEEXT (_ULCAST_(2) << 14)
#define MIPS_CONF4_MMUEXTDEF_VTLBSIZEEXT (_ULCAST_(3) << 14)
-#define MIPS_CONF4_KSCREXIST (_ULCAST_(255) << 16)
+#define MIPS_CONF4_KSCREXIST_SHIFT (16)
+#define MIPS_CONF4_KSCREXIST (_ULCAST_(255) << MIPS_CONF4_KSCREXIST_SHIFT)
#define MIPS_CONF4_VTLBSIZEEXT_SHIFT (24)
#define MIPS_CONF4_VTLBSIZEEXT (_ULCAST_(15) << MIPS_CONF4_VTLBSIZEEXT_SHIFT)
#define MIPS_CONF4_AE (_ULCAST_(1) << 28)
@@ -623,6 +637,7 @@
#define MIPS_CONF5_MRP (_ULCAST_(1) << 3)
#define MIPS_CONF5_LLB (_ULCAST_(1) << 4)
#define MIPS_CONF5_MVH (_ULCAST_(1) << 5)
+#define MIPS_CONF5_VP (_ULCAST_(1) << 7)
#define MIPS_CONF5_FRE (_ULCAST_(1) << 8)
#define MIPS_CONF5_UFE (_ULCAST_(1) << 9)
#define MIPS_CONF5_MSAEN (_ULCAST_(1) << 27)
@@ -633,6 +648,8 @@
#define MIPS_CONF6_SYND (_ULCAST_(1) << 13)
/* proAptiv FTLB on/off bit */
#define MIPS_CONF6_FTLBEN (_ULCAST_(1) << 15)
+/* Loongson-3 FTLB on/off bit */
+#define MIPS_CONF6_FTLBDIS (_ULCAST_(1) << 22)
/* FTLB probability bits */
#define MIPS_CONF6_FTLBP_SHIFT (16)
@@ -645,12 +662,38 @@
/* FTLB probability bits for R6 */
#define MIPS_CONF7_FTLBP_SHIFT (18)
+/* WatchLo* register definitions */
+#define MIPS_WATCHLO_IRW (_ULCAST_(0x7) << 0)
+
+/* WatchHi* register definitions */
+#define MIPS_WATCHHI_M (_ULCAST_(1) << 31)
+#define MIPS_WATCHHI_G (_ULCAST_(1) << 30)
+#define MIPS_WATCHHI_WM (_ULCAST_(0x3) << 28)
+#define MIPS_WATCHHI_WM_R_RVA (_ULCAST_(0) << 28)
+#define MIPS_WATCHHI_WM_R_GPA (_ULCAST_(1) << 28)
+#define MIPS_WATCHHI_WM_G_GVA (_ULCAST_(2) << 28)
+#define MIPS_WATCHHI_EAS (_ULCAST_(0x3) << 24)
+#define MIPS_WATCHHI_ASID (_ULCAST_(0xff) << 16)
+#define MIPS_WATCHHI_MASK (_ULCAST_(0x1ff) << 3)
+#define MIPS_WATCHHI_I (_ULCAST_(1) << 2)
+#define MIPS_WATCHHI_R (_ULCAST_(1) << 1)
+#define MIPS_WATCHHI_W (_ULCAST_(1) << 0)
+#define MIPS_WATCHHI_IRW (_ULCAST_(0x7) << 0)
+
/* MAAR bit definitions */
#define MIPS_MAAR_ADDR ((BIT_ULL(BITS_PER_LONG - 12) - 1) << 12)
#define MIPS_MAAR_ADDR_SHIFT 12
#define MIPS_MAAR_S (_ULCAST_(1) << 1)
#define MIPS_MAAR_V (_ULCAST_(1) << 0)
+/* EBase bit definitions */
+#define MIPS_EBASE_CPUNUM_SHIFT 0
+#define MIPS_EBASE_CPUNUM (_ULCAST_(0x3ff) << 0)
+#define MIPS_EBASE_WG_SHIFT 11
+#define MIPS_EBASE_WG (_ULCAST_(1) << 11)
+#define MIPS_EBASE_BASE_SHIFT 12
+#define MIPS_EBASE_BASE (~_ULCAST_((1 << MIPS_EBASE_BASE_SHIFT) - 1))
+
/* CMGCRBase bit definitions */
#define MIPS_CMGCRB_BASE 11
#define MIPS_CMGCRF_BASE (~_ULCAST_((1 << MIPS_CMGCRB_BASE) - 1))
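The new EBase field definitions above (used together with the read_c0_ebase_64() accessor added later in this file) let code split an EBase value into the exception vector base and the CPU number without magic constants. An illustrative decode, with the constants copied from the hunk and an invented register value:

#include <stdio.h>

#define _ULCAST_(x)		(unsigned long)(x)
#define MIPS_EBASE_CPUNUM	(_ULCAST_(0x3ff) << 0)
#define MIPS_EBASE_BASE_SHIFT	12
#define MIPS_EBASE_BASE		(~_ULCAST_((1 << MIPS_EBASE_BASE_SHIFT) - 1))

int main(void)
{
	unsigned long ebase = 0x80100003;	/* made up: base 0x80100000, CPU 3 */

	printf("exception base 0x%lx, cpu %lu\n",
	       ebase & MIPS_EBASE_BASE, ebase & MIPS_EBASE_CPUNUM);
	return 0;
}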
@@ -686,6 +729,8 @@
#define MIPS_PWFIELD_PTEI_SHIFT 0
#define MIPS_PWFIELD_PTEI_MASK 0x0000003f
+#define MIPS_PWSIZE_PS_SHIFT 30
+#define MIPS_PWSIZE_PS_MASK 0x40000000
#define MIPS_PWSIZE_GDW_SHIFT 24
#define MIPS_PWSIZE_GDW_MASK 0x3f000000
#define MIPS_PWSIZE_UDW_SHIFT 18
@@ -699,6 +744,12 @@
#define MIPS_PWCTL_PWEN_SHIFT 31
#define MIPS_PWCTL_PWEN_MASK 0x80000000
+#define MIPS_PWCTL_XK_SHIFT 28
+#define MIPS_PWCTL_XK_MASK 0x10000000
+#define MIPS_PWCTL_XS_SHIFT 27
+#define MIPS_PWCTL_XS_MASK 0x08000000
+#define MIPS_PWCTL_XU_SHIFT 26
+#define MIPS_PWCTL_XU_MASK 0x04000000
#define MIPS_PWCTL_DPH_SHIFT 7
#define MIPS_PWCTL_DPH_MASK 0x00000080
#define MIPS_PWCTL_HUGEPG_SHIFT 6
@@ -706,6 +757,94 @@
#define MIPS_PWCTL_PSN_SHIFT 0
#define MIPS_PWCTL_PSN_MASK 0x0000003f
+/* GuestCtl0 fields */
+#define MIPS_GCTL0_GM_SHIFT 31
+#define MIPS_GCTL0_GM (_ULCAST_(1) << MIPS_GCTL0_GM_SHIFT)
+#define MIPS_GCTL0_RI_SHIFT 30
+#define MIPS_GCTL0_RI (_ULCAST_(1) << MIPS_GCTL0_RI_SHIFT)
+#define MIPS_GCTL0_MC_SHIFT 29
+#define MIPS_GCTL0_MC (_ULCAST_(1) << MIPS_GCTL0_MC_SHIFT)
+#define MIPS_GCTL0_CP0_SHIFT 28
+#define MIPS_GCTL0_CP0 (_ULCAST_(1) << MIPS_GCTL0_CP0_SHIFT)
+#define MIPS_GCTL0_AT_SHIFT 26
+#define MIPS_GCTL0_AT (_ULCAST_(0x3) << MIPS_GCTL0_AT_SHIFT)
+#define MIPS_GCTL0_GT_SHIFT 25
+#define MIPS_GCTL0_GT (_ULCAST_(1) << MIPS_GCTL0_GT_SHIFT)
+#define MIPS_GCTL0_CG_SHIFT 24
+#define MIPS_GCTL0_CG (_ULCAST_(1) << MIPS_GCTL0_CG_SHIFT)
+#define MIPS_GCTL0_CF_SHIFT 23
+#define MIPS_GCTL0_CF (_ULCAST_(1) << MIPS_GCTL0_CF_SHIFT)
+#define MIPS_GCTL0_G1_SHIFT 22
+#define MIPS_GCTL0_G1 (_ULCAST_(1) << MIPS_GCTL0_G1_SHIFT)
+#define MIPS_GCTL0_G0E_SHIFT 19
+#define MIPS_GCTL0_G0E (_ULCAST_(1) << MIPS_GCTL0_G0E_SHIFT)
+#define MIPS_GCTL0_PT_SHIFT 18
+#define MIPS_GCTL0_PT (_ULCAST_(1) << MIPS_GCTL0_PT_SHIFT)
+#define MIPS_GCTL0_RAD_SHIFT 9
+#define MIPS_GCTL0_RAD (_ULCAST_(1) << MIPS_GCTL0_RAD_SHIFT)
+#define MIPS_GCTL0_DRG_SHIFT 8
+#define MIPS_GCTL0_DRG (_ULCAST_(1) << MIPS_GCTL0_DRG_SHIFT)
+#define MIPS_GCTL0_G2_SHIFT 7
+#define MIPS_GCTL0_G2 (_ULCAST_(1) << MIPS_GCTL0_G2_SHIFT)
+#define MIPS_GCTL0_GEXC_SHIFT 2
+#define MIPS_GCTL0_GEXC (_ULCAST_(0x1f) << MIPS_GCTL0_GEXC_SHIFT)
+#define MIPS_GCTL0_SFC2_SHIFT 1
+#define MIPS_GCTL0_SFC2 (_ULCAST_(1) << MIPS_GCTL0_SFC2_SHIFT)
+#define MIPS_GCTL0_SFC1_SHIFT 0
+#define MIPS_GCTL0_SFC1 (_ULCAST_(1) << MIPS_GCTL0_SFC1_SHIFT)
+
+/* GuestCtl0.AT Guest address translation control */
+#define MIPS_GCTL0_AT_ROOT 1 /* Guest MMU under Root control */
+#define MIPS_GCTL0_AT_GUEST 3 /* Guest MMU under Guest control */
+
+/* GuestCtl0.GExcCode Hypervisor exception cause codes */
+#define MIPS_GCTL0_GEXC_GPSI 0 /* Guest Privileged Sensitive Instruction */
+#define MIPS_GCTL0_GEXC_GSFC 1 /* Guest Software Field Change */
+#define MIPS_GCTL0_GEXC_HC 2 /* Hypercall */
+#define MIPS_GCTL0_GEXC_GRR 3 /* Guest Reserved Instruction Redirect */
+#define MIPS_GCTL0_GEXC_GVA 8 /* Guest Virtual Address available */
+#define MIPS_GCTL0_GEXC_GHFC 9 /* Guest Hardware Field Change */
+#define MIPS_GCTL0_GEXC_GPA 10 /* Guest Physical Address available */
+
+/* GuestCtl0Ext fields */
+#define MIPS_GCTL0EXT_RPW_SHIFT 8
+#define MIPS_GCTL0EXT_RPW (_ULCAST_(0x3) << MIPS_GCTL0EXT_RPW_SHIFT)
+#define MIPS_GCTL0EXT_NCC_SHIFT 6
+#define MIPS_GCTL0EXT_NCC (_ULCAST_(0x3) << MIPS_GCTL0EXT_NCC_SHIFT)
+#define MIPS_GCTL0EXT_CGI_SHIFT 4
+#define MIPS_GCTL0EXT_CGI (_ULCAST_(1) << MIPS_GCTL0EXT_CGI_SHIFT)
+#define MIPS_GCTL0EXT_FCD_SHIFT 3
+#define MIPS_GCTL0EXT_FCD (_ULCAST_(1) << MIPS_GCTL0EXT_FCD_SHIFT)
+#define MIPS_GCTL0EXT_OG_SHIFT 2
+#define MIPS_GCTL0EXT_OG (_ULCAST_(1) << MIPS_GCTL0EXT_OG_SHIFT)
+#define MIPS_GCTL0EXT_BG_SHIFT 1
+#define MIPS_GCTL0EXT_BG (_ULCAST_(1) << MIPS_GCTL0EXT_BG_SHIFT)
+#define MIPS_GCTL0EXT_MG_SHIFT 0
+#define MIPS_GCTL0EXT_MG (_ULCAST_(1) << MIPS_GCTL0EXT_MG_SHIFT)
+
+/* GuestCtl0Ext.RPW Root page walk configuration */
+#define MIPS_GCTL0EXT_RPW_BOTH 0 /* Root PW for GPA->RPA and RVA->RPA */
+#define MIPS_GCTL0EXT_RPW_GPA 2 /* Root PW for GPA->RPA */
+#define MIPS_GCTL0EXT_RPW_RVA 3 /* Root PW for RVA->RPA */
+
+/* GuestCtl0Ext.NCC Nested cache coherency attributes */
+#define MIPS_GCTL0EXT_NCC_IND 0 /* Guest CCA independent of Root CCA */
+#define MIPS_GCTL0EXT_NCC_MOD 1 /* Guest CCA modified by Root CCA */
+
+/* GuestCtl1 fields */
+#define MIPS_GCTL1_ID_SHIFT 0
+#define MIPS_GCTL1_ID_WIDTH 8
+#define MIPS_GCTL1_ID (_ULCAST_(0xff) << MIPS_GCTL1_ID_SHIFT)
+#define MIPS_GCTL1_RID_SHIFT 16
+#define MIPS_GCTL1_RID_WIDTH 8
+#define MIPS_GCTL1_RID (_ULCAST_(0xff) << MIPS_GCTL1_RID_SHIFT)
+#define MIPS_GCTL1_EID_SHIFT 24
+#define MIPS_GCTL1_EID_WIDTH 8
+#define MIPS_GCTL1_EID (_ULCAST_(0xff) << MIPS_GCTL1_EID_SHIFT)
+
+/* GuestID reserved for root context */
+#define MIPS_GCTL1_ROOT_GUESTID 0
+
/* CDMMBase register bit definitions */
#define MIPS_CDMMBASE_SIZE_SHIFT 0
#define MIPS_CDMMBASE_SIZE (_ULCAST_(511) << MIPS_CDMMBASE_SIZE_SHIFT)
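The GuestCtl0/GuestCtl0Ext/GuestCtl1 definitions above follow the usual _SHIFT plus mask pairing, so hypervisor code can extract multi-bit fields with a mask-and-shift. An illustrative decode of GuestCtl0.GExcCode, using only the constants shown in the hunk and a made-up register value:

#include <stdio.h>

#define _ULCAST_(x)		(unsigned long)(x)
#define MIPS_GCTL0_GEXC_SHIFT	2
#define MIPS_GCTL0_GEXC		(_ULCAST_(0x1f) << MIPS_GCTL0_GEXC_SHIFT)
#define MIPS_GCTL0_GEXC_GPSI	0	/* Guest Privileged Sensitive Instruction */
#define MIPS_GCTL0_GEXC_HC	2	/* Hypercall */

int main(void)
{
	unsigned long guestctl0 = 0x80000008;	/* made up: GM set, GExcCode = 2 */
	unsigned long gexccode = (guestctl0 & MIPS_GCTL0_GEXC) >> MIPS_GCTL0_GEXC_SHIFT;

	if (gexccode == MIPS_GCTL0_GEXC_HC)
		printf("hypervisor exception: hypercall\n");
	return 0;
}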
@@ -757,6 +896,15 @@
/* Disable Branch Return Cache */
#define R10K_DIAG_D_BRC (_ULCAST_(1) << 22)
+/* Flush ITLB */
+#define LOONGSON_DIAG_ITLB (_ULCAST_(1) << 2)
+/* Flush DTLB */
+#define LOONGSON_DIAG_DTLB (_ULCAST_(1) << 3)
+/* Flush VTLB */
+#define LOONGSON_DIAG_VTLB (_ULCAST_(1) << 12)
+/* Flush FTLB */
+#define LOONGSON_DIAG_FTLB (_ULCAST_(1) << 13)
+
/*
* Coprocessor 1 (FPU) register names
*/
@@ -909,6 +1057,33 @@ static inline int mm_insn_16bit(u16 insn)
}
/*
+ * Helper macros for generating raw instruction encodings in inline asm.
+ */
+#ifdef CONFIG_CPU_MICROMIPS
+#define _ASM_INSN16_IF_MM(_enc) \
+ ".insn\n\t" \
+ ".hword (" #_enc ")\n\t"
+#define _ASM_INSN32_IF_MM(_enc) \
+ ".insn\n\t" \
+ ".hword ((" #_enc ") >> 16)\n\t" \
+ ".hword ((" #_enc ") & 0xffff)\n\t"
+#else
+#define _ASM_INSN_IF_MIPS(_enc) \
+ ".insn\n\t" \
+ ".word (" #_enc ")\n\t"
+#endif
+
+#ifndef _ASM_INSN16_IF_MM
+#define _ASM_INSN16_IF_MM(_enc)
+#endif
+#ifndef _ASM_INSN32_IF_MM
+#define _ASM_INSN32_IF_MM(_enc)
+#endif
+#ifndef _ASM_INSN_IF_MIPS
+#define _ASM_INSN_IF_MIPS(_enc)
+#endif
+
+/*
* TLB Invalidate Flush
*/
static inline void tlbinvf(void)
@@ -916,7 +1091,9 @@ static inline void tlbinvf(void)
__asm__ __volatile__(
".set push\n\t"
".set noreorder\n\t"
- ".word 0x42000004\n\t" /* tlbinvf */
+ "# tlbinvf\n\t"
+ _ASM_INSN_IF_MIPS(0x42000004)
+ _ASM_INSN32_IF_MM(0x0000537c)
".set pop");
}
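The tlbinvf() change above swaps a bare .word for the new _ASM_INSN_IF_MIPS()/_ASM_INSN32_IF_MM() helpers, so a single C function emits the classic MIPS32 encoding on a normal build and the 32-bit microMIPS encoding when CONFIG_CPU_MICROMIPS is set. Roughly, the two expansions the compiler ends up seeing look like the sketch below (preprocessed output reconstructed from the macros in this hunk, not additional patch content):

static inline void tlbinvf_mips32(void)
{
	/* CONFIG_CPU_MICROMIPS unset: _ASM_INSN_IF_MIPS() is live,
	 * _ASM_INSN32_IF_MM() expands to nothing */
	__asm__ __volatile__(
		".set push\n\t"
		".set noreorder\n\t"
		"# tlbinvf\n\t"
		".insn\n\t"
		".word (0x42000004)\n\t"
		".set pop");
}

static inline void tlbinvf_micromips(void)
{
	/* CONFIG_CPU_MICROMIPS set: the 32-bit microMIPS encoding is
	 * emitted as two halfwords instead */
	__asm__ __volatile__(
		".set push\n\t"
		".set noreorder\n\t"
		"# tlbinvf\n\t"
		".insn\n\t"
		".hword ((0x0000537c) >> 16)\n\t"
		".hword ((0x0000537c) & 0xffff)\n\t"
		".set pop");
}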
@@ -1137,9 +1314,9 @@ do { \
" .set push \n" \
" .set noat \n" \
" .set mips32r2 \n" \
- " .insn \n" \
" # mfhc0 $1, %1 \n" \
- " .word (0x40410000 | ((%1 & 0x1f) << 11)) \n" \
+ _ASM_INSN_IF_MIPS(0x40410000 | ((%1 & 0x1f) << 11)) \
+ _ASM_INSN32_IF_MM(0x002000f4 | ((%1 & 0x1f) << 16)) \
" move %0, $1 \n" \
" .set pop \n" \
: "=r" (__res) \
@@ -1155,8 +1332,8 @@ do { \
" .set mips32r2 \n" \
" move $1, %0 \n" \
" # mthc0 $1, %1 \n" \
- " .insn \n" \
- " .word (0x40c10000 | ((%1 & 0x1f) << 11)) \n" \
+ _ASM_INSN_IF_MIPS(0x40c10000 | ((%1 & 0x1f) << 11)) \
+ _ASM_INSN32_IF_MM(0x002002f4 | ((%1 & 0x1f) << 16)) \
" .set pop \n" \
: \
: "r" (value), "i" (register)); \
@@ -1186,9 +1363,15 @@ do { \
#define read_c0_context() __read_ulong_c0_register($4, 0)
#define write_c0_context(val) __write_ulong_c0_register($4, 0, val)
+#define read_c0_contextconfig() __read_32bit_c0_register($4, 1)
+#define write_c0_contextconfig(val) __write_32bit_c0_register($4, 1, val)
+
#define read_c0_userlocal() __read_ulong_c0_register($4, 2)
#define write_c0_userlocal(val) __write_ulong_c0_register($4, 2, val)
+#define read_c0_xcontextconfig() __read_ulong_c0_register($4, 3)
+#define write_c0_xcontextconfig(val) __write_ulong_c0_register($4, 3, val)
+
#define read_c0_pagemask() __read_32bit_c0_register($5, 0)
#define write_c0_pagemask(val) __write_32bit_c0_register($5, 0, val)
@@ -1206,6 +1389,9 @@ do { \
#define read_c0_badvaddr() __read_ulong_c0_register($8, 0)
#define write_c0_badvaddr(val) __write_ulong_c0_register($8, 0, val)
+#define read_c0_badinstr() __read_32bit_c0_register($8, 1)
+#define read_c0_badinstrp() __read_32bit_c0_register($8, 2)
+
#define read_c0_count() __read_32bit_c0_register($9, 0)
#define write_c0_count(val) __write_32bit_c0_register($9, 0, val)
@@ -1218,9 +1404,21 @@ do { \
#define read_c0_entryhi() __read_ulong_c0_register($10, 0)
#define write_c0_entryhi(val) __write_ulong_c0_register($10, 0, val)
+#define read_c0_guestctl1() __read_32bit_c0_register($10, 4)
+#define write_c0_guestctl1(val) __write_32bit_c0_register($10, 4, val)
+
+#define read_c0_guestctl2() __read_32bit_c0_register($10, 5)
+#define write_c0_guestctl2(val) __write_32bit_c0_register($10, 5, val)
+
+#define read_c0_guestctl3() __read_32bit_c0_register($10, 6)
+#define write_c0_guestctl3(val) __write_32bit_c0_register($10, 6, val)
+
#define read_c0_compare() __read_32bit_c0_register($11, 0)
#define write_c0_compare(val) __write_32bit_c0_register($11, 0, val)
+#define read_c0_guestctl0ext() __read_32bit_c0_register($11, 4)
+#define write_c0_guestctl0ext(val) __write_32bit_c0_register($11, 4, val)
+
#define read_c0_compare2() __read_32bit_c0_register($11, 6) /* pnx8550 */
#define write_c0_compare2(val) __write_32bit_c0_register($11, 6, val)
@@ -1231,6 +1429,12 @@ do { \
#define write_c0_status(val) __write_32bit_c0_register($12, 0, val)
+#define read_c0_guestctl0() __read_32bit_c0_register($12, 6)
+#define write_c0_guestctl0(val) __write_32bit_c0_register($12, 6, val)
+
+#define read_c0_gtoffset() __read_32bit_c0_register($12, 7)
+#define write_c0_gtoffset(val) __write_32bit_c0_register($12, 7, val)
+
#define read_c0_cause() __read_32bit_c0_register($13, 0)
#define write_c0_cause(val) __write_32bit_c0_register($13, 0, val)
@@ -1416,6 +1620,9 @@ do { \
#define read_c0_ebase() __read_32bit_c0_register($15, 1)
#define write_c0_ebase(val) __write_32bit_c0_register($15, 1, val)
+#define read_c0_ebase_64() __read_64bit_c0_register($15, 1)
+#define write_c0_ebase_64(val) __write_64bit_c0_register($15, 1, val)
+
#define read_c0_cdmmbase() __read_ulong_c0_register($15, 2)
#define write_c0_cdmmbase(val) __write_ulong_c0_register($15, 2, val)
@@ -1442,6 +1649,12 @@ do { \
#define read_c0_pwctl() __read_32bit_c0_register($6, 6)
#define write_c0_pwctl(val) __write_32bit_c0_register($6, 6, val)
+#define read_c0_pgd() __read_64bit_c0_register($9, 7)
+#define write_c0_pgd(val) __write_64bit_c0_register($9, 7, val)
+
+#define read_c0_kpgd() __read_64bit_c0_register($31, 7)
+#define write_c0_kpgd(val) __write_64bit_c0_register($31, 7, val)
+
/* Cavium OCTEON (cnMIPS) */
#define read_c0_cvmcount() __read_ulong_c0_register($9, 6)
#define write_c0_cvmcount(val) __write_ulong_c0_register($9, 6, val)
@@ -1507,6 +1720,321 @@ do { \
#define write_c0_brcm_sleepcount(val) __write_32bit_c0_register($22, 7, val)
/*
+ * Macros to access the guest system control coprocessor
+ */
+
+#ifdef TOOLCHAIN_SUPPORTS_VIRT
+
+#define __read_32bit_gc0_register(source, sel) \
+({ int __res; \
+ __asm__ __volatile__( \
+ ".set\tpush\n\t" \
+ ".set\tmips32r2\n\t" \
+ ".set\tvirt\n\t" \
+ "mfgc0\t%0, $%1, %2\n\t" \
+ ".set\tpop" \
+ : "=r" (__res) \
+ : "i" (source), "i" (sel)); \
+ __res; \
+})
+
+#define __read_64bit_gc0_register(source, sel) \
+({ unsigned long long __res; \
+ __asm__ __volatile__( \
+ ".set\tpush\n\t" \
+ ".set\tmips64r2\n\t" \
+ ".set\tvirt\n\t" \
+ "dmfgc0\t%0, $%1, %2\n\t" \
+ ".set\tpop" \
+ : "=r" (__res) \
+ : "i" (source), "i" (sel)); \
+ __res; \
+})
+
+#define __write_32bit_gc0_register(register, sel, value) \
+do { \
+ __asm__ __volatile__( \
+ ".set\tpush\n\t" \
+ ".set\tmips32r2\n\t" \
+ ".set\tvirt\n\t" \
+ "mtgc0\t%z0, $%1, %2\n\t" \
+ ".set\tpop" \
+ : : "Jr" ((unsigned int)(value)), \
+ "i" (register), "i" (sel)); \
+} while (0)
+
+#define __write_64bit_gc0_register(register, sel, value) \
+do { \
+ __asm__ __volatile__( \
+ ".set\tpush\n\t" \
+ ".set\tmips64r2\n\t" \
+ ".set\tvirt\n\t" \
+ "dmtgc0\t%z0, $%1, %2\n\t" \
+ ".set\tpop" \
+ : : "Jr" (value), \
+ "i" (register), "i" (sel)); \
+} while (0)
+
+#else /* TOOLCHAIN_SUPPORTS_VIRT */
+
+#define __read_32bit_gc0_register(source, sel) \
+({ int __res; \
+ __asm__ __volatile__( \
+ ".set\tpush\n\t" \
+ ".set\tnoat\n\t" \
+ "# mfgc0\t$1, $%1, %2\n\t" \
+ _ASM_INSN_IF_MIPS(0x40610000 | %1 << 11 | %2) \
+ _ASM_INSN32_IF_MM(0x002004fc | %1 << 16 | %2 << 11) \
+ "move\t%0, $1\n\t" \
+ ".set\tpop" \
+ : "=r" (__res) \
+ : "i" (source), "i" (sel)); \
+ __res; \
+})
+
+#define __read_64bit_gc0_register(source, sel) \
+({ unsigned long long __res; \
+ __asm__ __volatile__( \
+ ".set\tpush\n\t" \
+ ".set\tnoat\n\t" \
+ "# dmfgc0\t$1, $%1, %2\n\t" \
+ _ASM_INSN_IF_MIPS(0x40610100 | %1 << 11 | %2) \
+ _ASM_INSN32_IF_MM(0x582004fc | %1 << 16 | %2 << 11) \
+ "move\t%0, $1\n\t" \
+ ".set\tpop" \
+ : "=r" (__res) \
+ : "i" (source), "i" (sel)); \
+ __res; \
+})
+
+#define __write_32bit_gc0_register(register, sel, value) \
+do { \
+ __asm__ __volatile__( \
+ ".set\tpush\n\t" \
+ ".set\tnoat\n\t" \
+ "move\t$1, %z0\n\t" \
+ "# mtgc0\t$1, $%1, %2\n\t" \
+ _ASM_INSN_IF_MIPS(0x40610200 | %1 << 11 | %2) \
+ _ASM_INSN32_IF_MM(0x002006fc | %1 << 16 | %2 << 11) \
+ ".set\tpop" \
+ : : "Jr" ((unsigned int)(value)), \
+ "i" (register), "i" (sel)); \
+} while (0)
+
+#define __write_64bit_gc0_register(register, sel, value) \
+do { \
+ __asm__ __volatile__( \
+ ".set\tpush\n\t" \
+ ".set\tnoat\n\t" \
+ "move\t$1, %z0\n\t" \
+ "# dmtgc0\t$1, $%1, %2\n\t" \
+ _ASM_INSN_IF_MIPS(0x40610300 | %1 << 11 | %2) \
+ _ASM_INSN32_IF_MM(0x582006fc | %1 << 16 | %2 << 11) \
+ ".set\tpop" \
+ : : "Jr" (value), \
+ "i" (register), "i" (sel)); \
+} while (0)
+
+#endif /* !TOOLCHAIN_SUPPORTS_VIRT */
+
+#define __read_ulong_gc0_register(reg, sel) \
+ ((sizeof(unsigned long) == 4) ? \
+ (unsigned long) __read_32bit_gc0_register(reg, sel) : \
+ (unsigned long) __read_64bit_gc0_register(reg, sel))
+
+#define __write_ulong_gc0_register(reg, sel, val) \
+do { \
+ if (sizeof(unsigned long) == 4) \
+ __write_32bit_gc0_register(reg, sel, val); \
+ else \
+ __write_64bit_gc0_register(reg, sel, val); \
+} while (0)
+
+#define read_gc0_index() __read_32bit_gc0_register(0, 0)
+#define write_gc0_index(val) __write_32bit_gc0_register(0, 0, val)
+
+#define read_gc0_entrylo0() __read_ulong_gc0_register(2, 0)
+#define write_gc0_entrylo0(val) __write_ulong_gc0_register(2, 0, val)
+
+#define read_gc0_entrylo1() __read_ulong_gc0_register(3, 0)
+#define write_gc0_entrylo1(val) __write_ulong_gc0_register(3, 0, val)
+
+#define read_gc0_context() __read_ulong_gc0_register(4, 0)
+#define write_gc0_context(val) __write_ulong_gc0_register(4, 0, val)
+
+#define read_gc0_contextconfig() __read_32bit_gc0_register(4, 1)
+#define write_gc0_contextconfig(val) __write_32bit_gc0_register(4, 1, val)
+
+#define read_gc0_userlocal() __read_ulong_gc0_register(4, 2)
+#define write_gc0_userlocal(val) __write_ulong_gc0_register(4, 2, val)
+
+#define read_gc0_xcontextconfig() __read_ulong_gc0_register(4, 3)
+#define write_gc0_xcontextconfig(val) __write_ulong_gc0_register(4, 3, val)
+
+#define read_gc0_pagemask() __read_32bit_gc0_register(5, 0)
+#define write_gc0_pagemask(val) __write_32bit_gc0_register(5, 0, val)
+
+#define read_gc0_pagegrain() __read_32bit_gc0_register(5, 1)
+#define write_gc0_pagegrain(val) __write_32bit_gc0_register(5, 1, val)
+
+#define read_gc0_segctl0() __read_ulong_gc0_register(5, 2)
+#define write_gc0_segctl0(val) __write_ulong_gc0_register(5, 2, val)
+
+#define read_gc0_segctl1() __read_ulong_gc0_register(5, 3)
+#define write_gc0_segctl1(val) __write_ulong_gc0_register(5, 3, val)
+
+#define read_gc0_segctl2() __read_ulong_gc0_register(5, 4)
+#define write_gc0_segctl2(val) __write_ulong_gc0_register(5, 4, val)
+
+#define read_gc0_pwbase() __read_ulong_gc0_register(5, 5)
+#define write_gc0_pwbase(val) __write_ulong_gc0_register(5, 5, val)
+
+#define read_gc0_pwfield() __read_ulong_gc0_register(5, 6)
+#define write_gc0_pwfield(val) __write_ulong_gc0_register(5, 6, val)
+
+#define read_gc0_pwsize() __read_ulong_gc0_register(5, 7)
+#define write_gc0_pwsize(val) __write_ulong_gc0_register(5, 7, val)
+
+#define read_gc0_wired() __read_32bit_gc0_register(6, 0)
+#define write_gc0_wired(val) __write_32bit_gc0_register(6, 0, val)
+
+#define read_gc0_pwctl() __read_32bit_gc0_register(6, 6)
+#define write_gc0_pwctl(val) __write_32bit_gc0_register(6, 6, val)
+
+#define read_gc0_hwrena() __read_32bit_gc0_register(7, 0)
+#define write_gc0_hwrena(val) __write_32bit_gc0_register(7, 0, val)
+
+#define read_gc0_badvaddr() __read_ulong_gc0_register(8, 0)
+#define write_gc0_badvaddr(val) __write_ulong_gc0_register(8, 0, val)
+
+#define read_gc0_badinstr() __read_32bit_gc0_register(8, 1)
+#define write_gc0_badinstr(val) __write_32bit_gc0_register(8, 1, val)
+
+#define read_gc0_badinstrp() __read_32bit_gc0_register(8, 2)
+#define write_gc0_badinstrp(val) __write_32bit_gc0_register(8, 2, val)
+
+#define read_gc0_count() __read_32bit_gc0_register(9, 0)
+
+#define read_gc0_entryhi() __read_ulong_gc0_register(10, 0)
+#define write_gc0_entryhi(val) __write_ulong_gc0_register(10, 0, val)
+
+#define read_gc0_compare() __read_32bit_gc0_register(11, 0)
+#define write_gc0_compare(val) __write_32bit_gc0_register(11, 0, val)
+
+#define read_gc0_status() __read_32bit_gc0_register(12, 0)
+#define write_gc0_status(val) __write_32bit_gc0_register(12, 0, val)
+
+#define read_gc0_intctl() __read_32bit_gc0_register(12, 1)
+#define write_gc0_intctl(val) __write_32bit_gc0_register(12, 1, val)
+
+#define read_gc0_cause() __read_32bit_gc0_register(13, 0)
+#define write_gc0_cause(val) __write_32bit_gc0_register(13, 0, val)
+
+#define read_gc0_epc() __read_ulong_gc0_register(14, 0)
+#define write_gc0_epc(val) __write_ulong_gc0_register(14, 0, val)
+
+#define read_gc0_ebase() __read_32bit_gc0_register(15, 1)
+#define write_gc0_ebase(val) __write_32bit_gc0_register(15, 1, val)
+
+#define read_gc0_ebase_64() __read_64bit_gc0_register(15, 1)
+#define write_gc0_ebase_64(val) __write_64bit_gc0_register(15, 1, val)
+
+#define read_gc0_config() __read_32bit_gc0_register(16, 0)
+#define read_gc0_config1() __read_32bit_gc0_register(16, 1)
+#define read_gc0_config2() __read_32bit_gc0_register(16, 2)
+#define read_gc0_config3() __read_32bit_gc0_register(16, 3)
+#define read_gc0_config4() __read_32bit_gc0_register(16, 4)
+#define read_gc0_config5() __read_32bit_gc0_register(16, 5)
+#define read_gc0_config6() __read_32bit_gc0_register(16, 6)
+#define read_gc0_config7() __read_32bit_gc0_register(16, 7)
+#define write_gc0_config(val) __write_32bit_gc0_register(16, 0, val)
+#define write_gc0_config1(val) __write_32bit_gc0_register(16, 1, val)
+#define write_gc0_config2(val) __write_32bit_gc0_register(16, 2, val)
+#define write_gc0_config3(val) __write_32bit_gc0_register(16, 3, val)
+#define write_gc0_config4(val) __write_32bit_gc0_register(16, 4, val)
+#define write_gc0_config5(val) __write_32bit_gc0_register(16, 5, val)
+#define write_gc0_config6(val) __write_32bit_gc0_register(16, 6, val)
+#define write_gc0_config7(val) __write_32bit_gc0_register(16, 7, val)
+
+#define read_gc0_watchlo0() __read_ulong_gc0_register(18, 0)
+#define read_gc0_watchlo1() __read_ulong_gc0_register(18, 1)
+#define read_gc0_watchlo2() __read_ulong_gc0_register(18, 2)
+#define read_gc0_watchlo3() __read_ulong_gc0_register(18, 3)
+#define read_gc0_watchlo4() __read_ulong_gc0_register(18, 4)
+#define read_gc0_watchlo5() __read_ulong_gc0_register(18, 5)
+#define read_gc0_watchlo6() __read_ulong_gc0_register(18, 6)
+#define read_gc0_watchlo7() __read_ulong_gc0_register(18, 7)
+#define write_gc0_watchlo0(val) __write_ulong_gc0_register(18, 0, val)
+#define write_gc0_watchlo1(val) __write_ulong_gc0_register(18, 1, val)
+#define write_gc0_watchlo2(val) __write_ulong_gc0_register(18, 2, val)
+#define write_gc0_watchlo3(val) __write_ulong_gc0_register(18, 3, val)
+#define write_gc0_watchlo4(val) __write_ulong_gc0_register(18, 4, val)
+#define write_gc0_watchlo5(val) __write_ulong_gc0_register(18, 5, val)
+#define write_gc0_watchlo6(val) __write_ulong_gc0_register(18, 6, val)
+#define write_gc0_watchlo7(val) __write_ulong_gc0_register(18, 7, val)
+
+#define read_gc0_watchhi0() __read_32bit_gc0_register(19, 0)
+#define read_gc0_watchhi1() __read_32bit_gc0_register(19, 1)
+#define read_gc0_watchhi2() __read_32bit_gc0_register(19, 2)
+#define read_gc0_watchhi3() __read_32bit_gc0_register(19, 3)
+#define read_gc0_watchhi4() __read_32bit_gc0_register(19, 4)
+#define read_gc0_watchhi5() __read_32bit_gc0_register(19, 5)
+#define read_gc0_watchhi6() __read_32bit_gc0_register(19, 6)
+#define read_gc0_watchhi7() __read_32bit_gc0_register(19, 7)
+#define write_gc0_watchhi0(val) __write_32bit_gc0_register(19, 0, val)
+#define write_gc0_watchhi1(val) __write_32bit_gc0_register(19, 1, val)
+#define write_gc0_watchhi2(val) __write_32bit_gc0_register(19, 2, val)
+#define write_gc0_watchhi3(val) __write_32bit_gc0_register(19, 3, val)
+#define write_gc0_watchhi4(val) __write_32bit_gc0_register(19, 4, val)
+#define write_gc0_watchhi5(val) __write_32bit_gc0_register(19, 5, val)
+#define write_gc0_watchhi6(val) __write_32bit_gc0_register(19, 6, val)
+#define write_gc0_watchhi7(val) __write_32bit_gc0_register(19, 7, val)
+
+#define read_gc0_xcontext() __read_ulong_gc0_register(20, 0)
+#define write_gc0_xcontext(val) __write_ulong_gc0_register(20, 0, val)
+
+#define read_gc0_perfctrl0() __read_32bit_gc0_register(25, 0)
+#define write_gc0_perfctrl0(val) __write_32bit_gc0_register(25, 0, val)
+#define read_gc0_perfcntr0() __read_32bit_gc0_register(25, 1)
+#define write_gc0_perfcntr0(val) __write_32bit_gc0_register(25, 1, val)
+#define read_gc0_perfcntr0_64() __read_64bit_gc0_register(25, 1)
+#define write_gc0_perfcntr0_64(val) __write_64bit_gc0_register(25, 1, val)
+#define read_gc0_perfctrl1() __read_32bit_gc0_register(25, 2)
+#define write_gc0_perfctrl1(val) __write_32bit_gc0_register(25, 2, val)
+#define read_gc0_perfcntr1() __read_32bit_gc0_register(25, 3)
+#define write_gc0_perfcntr1(val) __write_32bit_gc0_register(25, 3, val)
+#define read_gc0_perfcntr1_64() __read_64bit_gc0_register(25, 3)
+#define write_gc0_perfcntr1_64(val) __write_64bit_gc0_register(25, 3, val)
+#define read_gc0_perfctrl2() __read_32bit_gc0_register(25, 4)
+#define write_gc0_perfctrl2(val) __write_32bit_gc0_register(25, 4, val)
+#define read_gc0_perfcntr2() __read_32bit_gc0_register(25, 5)
+#define write_gc0_perfcntr2(val) __write_32bit_gc0_register(25, 5, val)
+#define read_gc0_perfcntr2_64() __read_64bit_gc0_register(25, 5)
+#define write_gc0_perfcntr2_64(val) __write_64bit_gc0_register(25, 5, val)
+#define read_gc0_perfctrl3() __read_32bit_gc0_register(25, 6)
+#define write_gc0_perfctrl3(val) __write_32bit_gc0_register(25, 6, val)
+#define read_gc0_perfcntr3() __read_32bit_gc0_register(25, 7)
+#define write_gc0_perfcntr3(val) __write_32bit_gc0_register(25, 7, val)
+#define read_gc0_perfcntr3_64() __read_64bit_gc0_register(25, 7)
+#define write_gc0_perfcntr3_64(val) __write_64bit_gc0_register(25, 7, val)
+
+#define read_gc0_errorepc() __read_ulong_gc0_register(30, 0)
+#define write_gc0_errorepc(val) __write_ulong_gc0_register(30, 0, val)
+
+#define read_gc0_kscratch1() __read_ulong_gc0_register(31, 2)
+#define read_gc0_kscratch2() __read_ulong_gc0_register(31, 3)
+#define read_gc0_kscratch3() __read_ulong_gc0_register(31, 4)
+#define read_gc0_kscratch4() __read_ulong_gc0_register(31, 5)
+#define read_gc0_kscratch5() __read_ulong_gc0_register(31, 6)
+#define read_gc0_kscratch6() __read_ulong_gc0_register(31, 7)
+#define write_gc0_kscratch1(val) __write_ulong_gc0_register(31, 2, val)
+#define write_gc0_kscratch2(val) __write_ulong_gc0_register(31, 3, val)
+#define write_gc0_kscratch3(val) __write_ulong_gc0_register(31, 4, val)
+#define write_gc0_kscratch4(val) __write_ulong_gc0_register(31, 5, val)
+#define write_gc0_kscratch5(val) __write_ulong_gc0_register(31, 6, val)
+#define write_gc0_kscratch6(val) __write_ulong_gc0_register(31, 7, val)
+
+/*
* Macros to access the floating point coprocessor control registers
*/
#define _read_32bit_cp1_register(source, gas_hardfloat) \
@@ -1762,7 +2290,6 @@ do { \
#else
-#ifdef CONFIG_CPU_MICROMIPS
#define rddsp(mask) \
({ \
unsigned int __res; \
@@ -1771,8 +2298,8 @@ do { \
" .set push \n" \
" .set noat \n" \
" # rddsp $1, %x1 \n" \
- " .hword ((0x0020067c | (%x1 << 14)) >> 16) \n" \
- " .hword ((0x0020067c | (%x1 << 14)) & 0xffff) \n" \
+ _ASM_INSN_IF_MIPS(0x7c000cb8 | (%x1 << 16)) \
+ _ASM_INSN32_IF_MM(0x0020067c | (%x1 << 14)) \
" move %0, $1 \n" \
" .set pop \n" \
: "=r" (__res) \
@@ -1787,22 +2314,22 @@ do { \
" .set noat \n" \
" move $1, %0 \n" \
" # wrdsp $1, %x1 \n" \
- " .hword ((0x0020167c | (%x1 << 14)) >> 16) \n" \
- " .hword ((0x0020167c | (%x1 << 14)) & 0xffff) \n" \
+ _ASM_INSN_IF_MIPS(0x7c2004f8 | (%x1 << 11)) \
+ _ASM_INSN32_IF_MM(0x0020167c | (%x1 << 14)) \
" .set pop \n" \
: \
: "r" (val), "i" (mask)); \
} while (0)
-#define _umips_dsp_mfxxx(ins) \
+#define _dsp_mfxxx(ins) \
({ \
unsigned long __treg; \
\
__asm__ __volatile__( \
" .set push \n" \
" .set noat \n" \
- " .hword 0x0001 \n" \
- " .hword %x1 \n" \
+ _ASM_INSN_IF_MIPS(0x00000810 | %X1) \
+ _ASM_INSN32_IF_MM(0x0001007c | %x1) \
" move %0, $1 \n" \
" .set pop \n" \
: "=r" (__treg) \
@@ -1810,101 +2337,28 @@ do { \
__treg; \
})
-#define _umips_dsp_mtxxx(val, ins) \
+#define _dsp_mtxxx(val, ins) \
do { \
__asm__ __volatile__( \
" .set push \n" \
" .set noat \n" \
" move $1, %0 \n" \
- " .hword 0x0001 \n" \
- " .hword %x1 \n" \
+ _ASM_INSN_IF_MIPS(0x00200011 | %X1) \
+ _ASM_INSN32_IF_MM(0x0001207c | %x1) \
" .set pop \n" \
: \
: "r" (val), "i" (ins)); \
} while (0)
-#define _umips_dsp_mflo(reg) _umips_dsp_mfxxx((reg << 14) | 0x107c)
-#define _umips_dsp_mfhi(reg) _umips_dsp_mfxxx((reg << 14) | 0x007c)
-
-#define _umips_dsp_mtlo(val, reg) _umips_dsp_mtxxx(val, ((reg << 14) | 0x307c))
-#define _umips_dsp_mthi(val, reg) _umips_dsp_mtxxx(val, ((reg << 14) | 0x207c))
-
-#define mflo0() _umips_dsp_mflo(0)
-#define mflo1() _umips_dsp_mflo(1)
-#define mflo2() _umips_dsp_mflo(2)
-#define mflo3() _umips_dsp_mflo(3)
-
-#define mfhi0() _umips_dsp_mfhi(0)
-#define mfhi1() _umips_dsp_mfhi(1)
-#define mfhi2() _umips_dsp_mfhi(2)
-#define mfhi3() _umips_dsp_mfhi(3)
+#ifdef CONFIG_CPU_MICROMIPS
-#define mtlo0(x) _umips_dsp_mtlo(x, 0)
-#define mtlo1(x) _umips_dsp_mtlo(x, 1)
-#define mtlo2(x) _umips_dsp_mtlo(x, 2)
-#define mtlo3(x) _umips_dsp_mtlo(x, 3)
+#define _dsp_mflo(reg) _dsp_mfxxx((reg << 14) | 0x1000)
+#define _dsp_mfhi(reg) _dsp_mfxxx((reg << 14) | 0x0000)
-#define mthi0(x) _umips_dsp_mthi(x, 0)
-#define mthi1(x) _umips_dsp_mthi(x, 1)
-#define mthi2(x) _umips_dsp_mthi(x, 2)
-#define mthi3(x) _umips_dsp_mthi(x, 3)
+#define _dsp_mtlo(val, reg) _dsp_mtxxx(val, ((reg << 14) | 0x1000))
+#define _dsp_mthi(val, reg) _dsp_mtxxx(val, ((reg << 14) | 0x0000))
#else /* !CONFIG_CPU_MICROMIPS */
-#define rddsp(mask) \
-({ \
- unsigned int __res; \
- \
- __asm__ __volatile__( \
- " .set push \n" \
- " .set noat \n" \
- " # rddsp $1, %x1 \n" \
- " .word 0x7c000cb8 | (%x1 << 16) \n" \
- " move %0, $1 \n" \
- " .set pop \n" \
- : "=r" (__res) \
- : "i" (mask)); \
- __res; \
-})
-
-#define wrdsp(val, mask) \
-do { \
- __asm__ __volatile__( \
- " .set push \n" \
- " .set noat \n" \
- " move $1, %0 \n" \
- " # wrdsp $1, %x1 \n" \
- " .word 0x7c2004f8 | (%x1 << 11) \n" \
- " .set pop \n" \
- : \
- : "r" (val), "i" (mask)); \
-} while (0)
-
-#define _dsp_mfxxx(ins) \
-({ \
- unsigned long __treg; \
- \
- __asm__ __volatile__( \
- " .set push \n" \
- " .set noat \n" \
- " .word (0x00000810 | %1) \n" \
- " move %0, $1 \n" \
- " .set pop \n" \
- : "=r" (__treg) \
- : "i" (ins)); \
- __treg; \
-})
-
-#define _dsp_mtxxx(val, ins) \
-do { \
- __asm__ __volatile__( \
- " .set push \n" \
- " .set noat \n" \
- " move $1, %0 \n" \
- " .word (0x00200011 | %1) \n" \
- " .set pop \n" \
- : \
- : "r" (val), "i" (ins)); \
-} while (0)
#define _dsp_mflo(reg) _dsp_mfxxx((reg << 21) | 0x0002)
#define _dsp_mfhi(reg) _dsp_mfxxx((reg << 21) | 0x0000)
@@ -1912,6 +2366,8 @@ do { \
#define _dsp_mtlo(val, reg) _dsp_mtxxx(val, ((reg << 11) | 0x0002))
#define _dsp_mthi(val, reg) _dsp_mtxxx(val, ((reg << 11) | 0x0000))
+#endif /* CONFIG_CPU_MICROMIPS */
+
#define mflo0() _dsp_mflo(0)
#define mflo1() _dsp_mflo(1)
#define mflo2() _dsp_mflo(2)
@@ -1932,7 +2388,6 @@ do { \
#define mthi2(x) _dsp_mthi(x, 2)
#define mthi3(x) _dsp_mthi(x, 3)
-#endif /* CONFIG_CPU_MICROMIPS */
#endif
/*
@@ -2001,47 +2456,164 @@ static inline void tlb_write_random(void)
".set reorder");
}
+#ifdef TOOLCHAIN_SUPPORTS_VIRT
+
/*
- * Manipulate bits in a c0 register.
+ * Guest TLB operations.
+ *
+ * It is the responsibility of the caller to take care of any TLB hazards.
*/
-#define __BUILD_SET_C0(name) \
+static inline void guest_tlb_probe(void)
+{
+ __asm__ __volatile__(
+ ".set push\n\t"
+ ".set noreorder\n\t"
+ ".set virt\n\t"
+ "tlbgp\n\t"
+ ".set pop");
+}
+
+static inline void guest_tlb_read(void)
+{
+ __asm__ __volatile__(
+ ".set push\n\t"
+ ".set noreorder\n\t"
+ ".set virt\n\t"
+ "tlbgr\n\t"
+ ".set pop");
+}
+
+static inline void guest_tlb_write_indexed(void)
+{
+ __asm__ __volatile__(
+ ".set push\n\t"
+ ".set noreorder\n\t"
+ ".set virt\n\t"
+ "tlbgwi\n\t"
+ ".set pop");
+}
+
+static inline void guest_tlb_write_random(void)
+{
+ __asm__ __volatile__(
+ ".set push\n\t"
+ ".set noreorder\n\t"
+ ".set virt\n\t"
+ "tlbgwr\n\t"
+ ".set pop");
+}
+
+/*
+ * Guest TLB Invalidate Flush
+ */
+static inline void guest_tlbinvf(void)
+{
+ __asm__ __volatile__(
+ ".set push\n\t"
+ ".set noreorder\n\t"
+ ".set virt\n\t"
+ "tlbginvf\n\t"
+ ".set pop");
+}
+
+#else /* TOOLCHAIN_SUPPORTS_VIRT */
+
+/*
+ * Guest TLB operations.
+ *
+ * It is the responsibility of the caller to take care of any TLB hazards.
+ */
+static inline void guest_tlb_probe(void)
+{
+ __asm__ __volatile__(
+ "# tlbgp\n\t"
+ _ASM_INSN_IF_MIPS(0x42000010)
+ _ASM_INSN32_IF_MM(0x0000017c));
+}
+
+static inline void guest_tlb_read(void)
+{
+ __asm__ __volatile__(
+ "# tlbgr\n\t"
+ _ASM_INSN_IF_MIPS(0x42000009)
+ _ASM_INSN32_IF_MM(0x0000117c));
+}
+
+static inline void guest_tlb_write_indexed(void)
+{
+ __asm__ __volatile__(
+ "# tlbgwi\n\t"
+ _ASM_INSN_IF_MIPS(0x4200000a)
+ _ASM_INSN32_IF_MM(0x0000217c));
+}
+
+static inline void guest_tlb_write_random(void)
+{
+ __asm__ __volatile__(
+ "# tlbgwr\n\t"
+ _ASM_INSN_IF_MIPS(0x4200000e)
+ _ASM_INSN32_IF_MM(0x0000317c));
+}
+
+/*
+ * Guest TLB Invalidate Flush
+ */
+static inline void guest_tlbinvf(void)
+{
+ __asm__ __volatile__(
+ "# tlbginvf\n\t"
+ _ASM_INSN_IF_MIPS(0x4200000c)
+ _ASM_INSN32_IF_MM(0x0000517c));
+}
+
+#endif /* !TOOLCHAIN_SUPPORTS_VIRT */
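
As a hedged usage note, a guest TLB probe is normally bracketed by a write of the guest EntryHi and a read back of the guest Index; the gc0 accessors and tlb_probe_hazard() used below are assumed to come from the surrounding MIPS headers rather than from this hunk.

/* Sketch: look up a guest TLB entry by guest EntryHi. write_gc0_entryhi(),
 * read_gc0_index() and tlb_probe_hazard() are assumed to exist elsewhere;
 * any further hazard handling stays with the caller, as noted above. */
static inline int guest_tlb_lookup(unsigned long entryhi)
{
        write_gc0_entryhi(entryhi);
        guest_tlb_probe();
        tlb_probe_hazard();
        return read_gc0_index();        /* negative when nothing matched */
}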
+
+/*
+ * Manipulate bits in a register.
+ */
+#define __BUILD_SET_COMMON(name) \
static inline unsigned int \
-set_c0_##name(unsigned int set) \
+set_##name(unsigned int set) \
{ \
unsigned int res, new; \
\
- res = read_c0_##name(); \
+ res = read_##name(); \
new = res | set; \
- write_c0_##name(new); \
+ write_##name(new); \
\
return res; \
} \
\
static inline unsigned int \
-clear_c0_##name(unsigned int clear) \
+clear_##name(unsigned int clear) \
{ \
unsigned int res, new; \
\
- res = read_c0_##name(); \
+ res = read_##name(); \
new = res & ~clear; \
- write_c0_##name(new); \
+ write_##name(new); \
\
return res; \
} \
\
static inline unsigned int \
-change_c0_##name(unsigned int change, unsigned int val) \
+change_##name(unsigned int change, unsigned int val) \
{ \
unsigned int res, new; \
\
- res = read_c0_##name(); \
+ res = read_##name(); \
new = res & ~change; \
new |= (val & change); \
- write_c0_##name(new); \
+ write_##name(new); \
\
return res; \
}
+/*
+ * Manipulate bits in a c0 register.
+ */
+#define __BUILD_SET_C0(name) __BUILD_SET_COMMON(c0_##name)
+
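
For reference, a simplified sketch of what __BUILD_SET_C0(status) expands to and how such helpers are commonly called; ST0_IE is only an illustrative bit choice, and read_c0_status()/write_c0_status() are assumed from earlier in this header.

/* Sketch: the set_ helper generated by __BUILD_SET_C0(status), simplified. */
static inline unsigned int set_c0_status(unsigned int set)
{
        unsigned int res = read_c0_status();

        write_c0_status(res | set);
        return res;             /* old value, so the caller can restore it */
}

/* Typical caller pattern (ST0_IE chosen purely for illustration). */
static void example_with_interrupts_enabled(void)
{
        unsigned int old_status = set_c0_status(ST0_IE);

        /* ... work that needs interrupts enabled ... */
        write_c0_status(old_status);
}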
__BUILD_SET_C0(status)
__BUILD_SET_C0(cause)
__BUILD_SET_C0(config)
@@ -2050,6 +2622,11 @@ __BUILD_SET_C0(intcontrol)
__BUILD_SET_C0(intctl)
__BUILD_SET_C0(srsmap)
__BUILD_SET_C0(pagegrain)
+__BUILD_SET_C0(guestctl0)
+__BUILD_SET_C0(guestctl0ext)
+__BUILD_SET_C0(guestctl1)
+__BUILD_SET_C0(guestctl2)
+__BUILD_SET_C0(guestctl3)
__BUILD_SET_C0(brcm_config_0)
__BUILD_SET_C0(brcm_bus_pll)
__BUILD_SET_C0(brcm_reset)
@@ -2059,12 +2636,21 @@ __BUILD_SET_C0(brcm_config)
__BUILD_SET_C0(brcm_mode)
/*
+ * Manipulate bits in a guest c0 register.
+ */
+#define __BUILD_SET_GC0(name) __BUILD_SET_COMMON(gc0_##name)
+
+__BUILD_SET_GC0(status)
+__BUILD_SET_GC0(cause)
+__BUILD_SET_GC0(ebase)
+
+/*
* Return low 10 bits of ebase.
* Note that under KVM (MIPSVZ) this returns vcpu id.
*/
static inline unsigned int get_ebase_cpunum(void)
{
- return read_c0_ebase() & 0x3ff;
+ return read_c0_ebase() & MIPS_EBASE_CPUNUM;
}
#endif /* !__ASSEMBLY__ */
diff --git a/arch/mips/include/asm/mmu_context.h b/arch/mips/include/asm/mmu_context.h
index 45914b59824c..fc57e135cb0a 100644
--- a/arch/mips/include/asm/mmu_context.h
+++ b/arch/mips/include/asm/mmu_context.h
@@ -65,37 +65,32 @@ extern unsigned long pgd_current[];
back_to_back_c0_hazard(); \
TLBMISS_HANDLER_SETUP_PGD(swapper_pg_dir)
#endif /* CONFIG_MIPS_PGD_C0_CONTEXT*/
-#if defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)
-#define ASID_INC 0x40
-#define ASID_MASK 0xfc0
-
-#elif defined(CONFIG_CPU_R8000)
-
-#define ASID_INC 0x10
-#define ASID_MASK 0xff0
-
-#else /* FIXME: not correct for R6000 */
+/*
+ * All upper bits not used by the hardware are treated as a
+ * software ASID extension.
+ */
+static unsigned long asid_version_mask(unsigned int cpu)
+{
+ unsigned long asid_mask = cpu_asid_mask(&cpu_data[cpu]);
-#define ASID_INC 0x1
-#define ASID_MASK 0xff
+ return ~(asid_mask | (asid_mask - 1));
+}
-#endif
+static unsigned long asid_first_version(unsigned int cpu)
+{
+ return ~asid_version_mask(cpu) + 1;
+}
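
A standalone sketch of the mask arithmetic above, assuming an 8-bit hardware ASID (mask 0xff) on a 64-bit kernel; the printed values show where the software version field begins.

#include <stdio.h>

int main(void)
{
        unsigned long asid_mask = 0xff;         /* cpu_asid_mask() for this example */
        unsigned long version_mask = ~(asid_mask | (asid_mask - 1));
        unsigned long first_version = ~version_mask + 1;

        printf("version mask  %#lx\n", version_mask);   /* 0xffffffffffffff00 */
        printf("first version %#lx\n", first_version);  /* 0x100 */
        return 0;
}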
#define cpu_context(cpu, mm) ((mm)->context.asid[cpu])
-#define cpu_asid(cpu, mm) (cpu_context((cpu), (mm)) & ASID_MASK)
#define asid_cache(cpu) (cpu_data[cpu].asid_cache)
+#define cpu_asid(cpu, mm) \
+ (cpu_context((cpu), (mm)) & cpu_asid_mask(&cpu_data[cpu]))
static inline void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
{
}
-/*
- * All unused by hardware upper bits will be considered
- * as a software asid extension.
- */
-#define ASID_VERSION_MASK ((unsigned long)~(ASID_MASK|(ASID_MASK-1)))
-#define ASID_FIRST_VERSION ((unsigned long)(~ASID_VERSION_MASK) + 1)
/* Normal, classic MIPS get_new_mmu_context */
static inline void
@@ -104,7 +99,7 @@ get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
extern void kvm_local_flush_tlb_all(void);
unsigned long asid = asid_cache(cpu);
- if (! ((asid += ASID_INC) & ASID_MASK) ) {
+ if (!((asid += cpu_asid_inc()) & cpu_asid_mask(&cpu_data[cpu]))) {
if (cpu_has_vtag_icache)
flush_icache_all();
#ifdef CONFIG_KVM
@@ -113,7 +108,7 @@ get_new_mmu_context(struct mm_struct *mm, unsigned long cpu)
local_flush_tlb_all(); /* start new asid cycle */
#endif
if (!asid) /* fix version if needed */
- asid = ASID_FIRST_VERSION;
+ asid = asid_first_version(cpu);
}
cpu_context(cpu, mm) = asid_cache(cpu) = asid;
@@ -145,7 +140,7 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
htw_stop();
/* Check if our ASID is of an older version and thus invalid */
- if ((cpu_context(cpu, next) ^ asid_cache(cpu)) & ASID_VERSION_MASK)
+ if ((cpu_context(cpu, next) ^ asid_cache(cpu)) & asid_version_mask(cpu))
get_new_mmu_context(next, cpu);
write_c0_entryhi(cpu_asid(cpu, next));
TLBMISS_HANDLER_SETUP_PGD(next->pgd);
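
The staleness test in switch_mm() compares only the version bits; a standalone sketch with the same 8-bit example ASID, showing a context from an older generation being detected.

#include <stdio.h>

int main(void)
{
        unsigned long version_mask = ~0xfful;   /* asid_version_mask() for an 8-bit ASID */
        unsigned long cache = 0x203;            /* asid_cache(): current generation 0x200 */
        unsigned long mm_ctx = 0x1a7;           /* cpu_context(): generation 0x100 */

        if ((mm_ctx ^ cache) & version_mask)
                printf("stale: allocate a new ASID for this mm\n");
        else
                printf("still valid: reuse ASID %#lx\n", mm_ctx & 0xff);
        return 0;
}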
diff --git a/arch/mips/include/asm/msa.h b/arch/mips/include/asm/msa.h
index bbb85fe21642..ddf496cb2a2a 100644
--- a/arch/mips/include/asm/msa.h
+++ b/arch/mips/include/asm/msa.h
@@ -147,6 +147,19 @@ static inline void restore_msa(struct task_struct *t)
_restore_msa(t);
}
+static inline void init_msa_upper(void)
+{
+ /*
+ * Check cpu_has_msa only if it's a constant. This will allow the
+ * compiler to optimise out code for CPUs without MSA without adding
+ * an extra redundant check for CPUs with MSA.
+ */
+ if (__builtin_constant_p(cpu_has_msa) && !cpu_has_msa)
+ return;
+
+ _init_msa_upper();
+}
+
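
The __builtin_constant_p() guard above lets the compiler delete the call entirely on configurations where the feature macro folds to 0, at no extra cost when it is a runtime test; a standalone sketch of the pattern with a made-up feature flag.

#include <stdio.h>

#define HAVE_FEATURE 0                  /* this configuration: never present */

static void _init_feature(void) { puts("initialising feature"); }

static inline void init_feature(void)
{
        /* Only skip when the flag is a compile-time constant zero. */
        if (__builtin_constant_p(HAVE_FEATURE) && !HAVE_FEATURE)
                return;
        _init_feature();
}

int main(void)
{
        init_feature();                 /* compiles away entirely here */
        return 0;
}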
#ifdef TOOLCHAIN_SUPPORTS_MSA
#define __BUILD_MSA_CTL_REG(name, cs) \
@@ -179,13 +192,6 @@ static inline void write_msa_##name(unsigned int val) \
* allow compilation with toolchains that do not support MSA. Once all
* toolchains in use support MSA these can be removed.
*/
-#ifdef CONFIG_CPU_MICROMIPS
-#define CFC_MSA_INSN 0x587e0056
-#define CTC_MSA_INSN 0x583e0816
-#else
-#define CFC_MSA_INSN 0x787e0059
-#define CTC_MSA_INSN 0x783e0819
-#endif
#define __BUILD_MSA_CTL_REG(name, cs) \
static inline unsigned int read_msa_##name(void) \
@@ -194,11 +200,12 @@ static inline unsigned int read_msa_##name(void) \
__asm__ __volatile__( \
" .set push\n" \
" .set noat\n" \
- " .insn\n" \
- " .word %1 | (" #cs " << 11)\n" \
+ " # cfcmsa $1, $%1\n" \
+ _ASM_INSN_IF_MIPS(0x787e0059 | %1 << 11) \
+ _ASM_INSN32_IF_MM(0x587e0056 | %1 << 11) \
" move %0, $1\n" \
" .set pop\n" \
- : "=r"(reg) : "i"(CFC_MSA_INSN)); \
+ : "=r"(reg) : "i"(cs)); \
return reg; \
} \
\
@@ -208,10 +215,11 @@ static inline void write_msa_##name(unsigned int val) \
" .set push\n" \
" .set noat\n" \
" move $1, %0\n" \
- " .insn\n" \
- " .word %1 | (" #cs " << 6)\n" \
+ " # ctcmsa $%1, $1\n" \
+ _ASM_INSN_IF_MIPS(0x783e0819 | %1 << 6) \
+ _ASM_INSN32_IF_MM(0x583e0816 | %1 << 6) \
" .set pop\n" \
- : : "r"(val), "i"(CTC_MSA_INSN)); \
+ : : "r"(val), "i"(cs)); \
}
#endif /* !TOOLCHAIN_SUPPORTS_MSA */
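
Callers of the generated accessors are unaffected by this rework; a minimal sketch, assuming the csr (MSACSR) control register is instantiated with this macro as in mainline and that its cause field occupies bits 12..17 (an assumption here).

/* Sketch: clear the assumed MSACSR cause bits via the generated accessors. */
static inline void msa_clear_cause_bits(void)
{
        unsigned int csr = read_msa_csr();

        write_msa_csr(csr & ~0x3f000);  /* 0x3f000 = assumed cause field mask */
}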
diff --git a/arch/mips/include/asm/octeon/cvmx-bootinfo.h b/arch/mips/include/asm/octeon/cvmx-bootinfo.h
index d92cf59bdae6..62787765575e 100644
--- a/arch/mips/include/asm/octeon/cvmx-bootinfo.h
+++ b/arch/mips/include/asm/octeon/cvmx-bootinfo.h
@@ -32,6 +32,8 @@
#ifndef __CVMX_BOOTINFO_H__
#define __CVMX_BOOTINFO_H__
+#include "cvmx-coremask.h"
+
/*
* Current major and minor versions of the CVMX bootinfo block that is
* passed from the bootloader to the application. This is versioned
@@ -39,7 +41,7 @@
* versions.
*/
#define CVMX_BOOTINFO_MAJ_VER 1
-#define CVMX_BOOTINFO_MIN_VER 3
+#define CVMX_BOOTINFO_MIN_VER 4
#if (CVMX_BOOTINFO_MAJ_VER == 1)
#define CVMX_BOOTINFO_OCTEON_SERIAL_LEN 20
@@ -124,6 +126,13 @@ struct cvmx_bootinfo {
*/
uint64_t fdt_addr;
#endif
+#if (CVMX_BOOTINFO_MIN_VER >= 4)
+ /*
+ * Coremask used for processors with more than 32 cores
+ * or with OCI. This replaces core_mask.
+ */
+ struct cvmx_coremask ext_core_mask;
+#endif
#else /* __BIG_ENDIAN */
/*
* Little-Endian: When the CPU mode is switched to
@@ -177,6 +186,9 @@ struct cvmx_bootinfo {
#if (CVMX_BOOTINFO_MIN_VER >= 3)
uint64_t fdt_addr;
#endif
+#if (CVMX_BOOTINFO_MIN_VER >= 4)
+ struct cvmx_coremask ext_core_mask;
+#endif
#endif
};
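
Because the block is versioned, consumers should gate access to ext_core_mask on the minor version they were handed; a minimal sketch, assuming the minor_version and legacy core_mask fields declared earlier in this structure.

/* Sketch: prefer the extended coremask from a new enough bootinfo block,
 * otherwise widen the legacy 32-bit mask. Field names are assumptions
 * based on the structure above. */
static void bootinfo_get_coremask(const struct cvmx_bootinfo *bi,
                                  struct cvmx_coremask *out)
{
        if (bi->minor_version >= 4)
                cvmx_coremask_copy(out, &bi->ext_core_mask);
        else
                cvmx_coremask_set64(out, bi->core_mask);
}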
@@ -388,7 +400,7 @@ static inline const char *cvmx_board_type_to_string(enum
ENUM_BRD_TYPE_CASE(CVMX_BOARD_TYPE_KONTRON_S1901)
ENUM_BRD_TYPE_CASE(CVMX_BOARD_TYPE_CUST_PRIVATE_MAX)
}
- return "Unsupported Board";
+ return NULL;
}
#define ENUM_CHIP_TYPE_CASE(x) \
diff --git a/arch/mips/include/asm/octeon/cvmx-ciu3-defs.h b/arch/mips/include/asm/octeon/cvmx-ciu3-defs.h
new file mode 100644
index 000000000000..547f778f5b05
--- /dev/null
+++ b/arch/mips/include/asm/octeon/cvmx-ciu3-defs.h
@@ -0,0 +1,353 @@
+/*
+ * Copyright (c) 2003-2016 Cavium Inc.
+ *
+ * This file is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License, Version 2, as
+ * published by the Free Software Foundation.
+ *
+ * This file is distributed in the hope that it will be useful, but
+ * AS-IS and WITHOUT ANY WARRANTY; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, TITLE, or
+ * NONINFRINGEMENT. See the GNU General Public License for more
+ * details.
+ *
+ */
+
+#ifndef __CVMX_CIU3_DEFS_H__
+#define __CVMX_CIU3_DEFS_H__
+
+#define CVMX_CIU3_FUSE CVMX_ADD_IO_SEG(0x00010100000001A0ull)
+#define CVMX_CIU3_BIST CVMX_ADD_IO_SEG(0x00010100000001C0ull)
+#define CVMX_CIU3_CONST CVMX_ADD_IO_SEG(0x0001010000000220ull)
+#define CVMX_CIU3_CTL CVMX_ADD_IO_SEG(0x00010100000000E0ull)
+#define CVMX_CIU3_DESTX_IO_INT(offset) (CVMX_ADD_IO_SEG(0x0001010000210000ull) + ((offset) & 7) * 8)
+#define CVMX_CIU3_DESTX_PP_INT(offset) (CVMX_ADD_IO_SEG(0x0001010000200000ull) + ((offset) & 255) * 8)
+#define CVMX_CIU3_GSTOP CVMX_ADD_IO_SEG(0x0001010000000140ull)
+#define CVMX_CIU3_IDTX_CTL(offset) (CVMX_ADD_IO_SEG(0x0001010000110000ull) + ((offset) & 255) * 8)
+#define CVMX_CIU3_IDTX_IO(offset) (CVMX_ADD_IO_SEG(0x0001010000130000ull) + ((offset) & 255) * 8)
+#define CVMX_CIU3_IDTX_PPX(offset, block_id) (CVMX_ADD_IO_SEG(0x0001010000120000ull) + ((block_id) & 255) * 0x20ull)
+#define CVMX_CIU3_INTR_RAM_ECC_CTL CVMX_ADD_IO_SEG(0x0001010000000260ull)
+#define CVMX_CIU3_INTR_RAM_ECC_ST CVMX_ADD_IO_SEG(0x0001010000000280ull)
+#define CVMX_CIU3_INTR_READY CVMX_ADD_IO_SEG(0x00010100000002A0ull)
+#define CVMX_CIU3_INTR_SLOWDOWN CVMX_ADD_IO_SEG(0x0001010000000240ull)
+#define CVMX_CIU3_ISCX_CTL(offset) (CVMX_ADD_IO_SEG(0x0001010080000000ull) + ((offset) & 1048575) * 8)
+#define CVMX_CIU3_ISCX_W1C(offset) (CVMX_ADD_IO_SEG(0x0001010090000000ull) + ((offset) & 1048575) * 8)
+#define CVMX_CIU3_ISCX_W1S(offset) (CVMX_ADD_IO_SEG(0x00010100A0000000ull) + ((offset) & 1048575) * 8)
+#define CVMX_CIU3_NMI CVMX_ADD_IO_SEG(0x0001010000000160ull)
+#define CVMX_CIU3_SISCX(offset) (CVMX_ADD_IO_SEG(0x0001010000220000ull) + ((offset) & 255) * 8)
+#define CVMX_CIU3_TIMX(offset) (CVMX_ADD_IO_SEG(0x0001010000010000ull) + ((offset) & 15) * 8)
+
+union cvmx_ciu3_bist {
+ uint64_t u64;
+ struct cvmx_ciu3_bist_s {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_9_63 : 55;
+ uint64_t bist : 9;
+#else
+ uint64_t bist : 9;
+ uint64_t reserved_9_63 : 55;
+#endif
+ } s;
+};
+
+union cvmx_ciu3_const {
+ uint64_t u64;
+ struct cvmx_ciu3_const_s {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t dests_io : 16;
+ uint64_t pintsn : 16;
+ uint64_t dests_pp : 16;
+ uint64_t idt : 16;
+#else
+ uint64_t idt : 16;
+ uint64_t dests_pp : 16;
+ uint64_t pintsn : 16;
+ uint64_t dests_io : 16;
+#endif
+ } s;
+};
+
+union cvmx_ciu3_ctl {
+ uint64_t u64;
+ struct cvmx_ciu3_ctl_s {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_5_63 : 59;
+ uint64_t mcd_sel : 2;
+ uint64_t iscmem_le : 1;
+ uint64_t seq_dis : 1;
+ uint64_t cclk_dis : 1;
+#else
+ uint64_t cclk_dis : 1;
+ uint64_t seq_dis : 1;
+ uint64_t iscmem_le : 1;
+ uint64_t mcd_sel : 2;
+ uint64_t reserved_5_63 : 59;
+#endif
+ } s;
+};
+
+union cvmx_ciu3_destx_io_int {
+ uint64_t u64;
+ struct cvmx_ciu3_destx_io_int_s {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_52_63 : 12;
+ uint64_t intsn : 20;
+ uint64_t reserved_10_31 : 22;
+ uint64_t intidt : 8;
+ uint64_t newint : 1;
+ uint64_t intr : 1;
+#else
+ uint64_t intr : 1;
+ uint64_t newint : 1;
+ uint64_t intidt : 8;
+ uint64_t reserved_10_31 : 22;
+ uint64_t intsn : 20;
+ uint64_t reserved_52_63 : 12;
+#endif
+ } s;
+};
+
+union cvmx_ciu3_destx_pp_int {
+ uint64_t u64;
+ struct cvmx_ciu3_destx_pp_int_s {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_52_63 : 12;
+ uint64_t intsn : 20;
+ uint64_t reserved_10_31 : 22;
+ uint64_t intidt : 8;
+ uint64_t newint : 1;
+ uint64_t intr : 1;
+#else
+ uint64_t intr : 1;
+ uint64_t newint : 1;
+ uint64_t intidt : 8;
+ uint64_t reserved_10_31 : 22;
+ uint64_t intsn : 20;
+ uint64_t reserved_52_63 : 12;
+#endif
+ } s;
+};
+
+union cvmx_ciu3_gstop {
+ uint64_t u64;
+ struct cvmx_ciu3_gstop_s {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_1_63 : 63;
+ uint64_t gstop : 1;
+#else
+ uint64_t gstop : 1;
+ uint64_t reserved_1_63 : 63;
+#endif
+ } s;
+};
+
+union cvmx_ciu3_idtx_ctl {
+ uint64_t u64;
+ struct cvmx_ciu3_idtx_ctl_s {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_52_63 : 12;
+ uint64_t intsn : 20;
+ uint64_t reserved_4_31 : 28;
+ uint64_t intr : 1;
+ uint64_t newint : 1;
+ uint64_t ip_num : 2;
+#else
+ uint64_t ip_num : 2;
+ uint64_t newint : 1;
+ uint64_t intr : 1;
+ uint64_t reserved_4_31 : 28;
+ uint64_t intsn : 20;
+ uint64_t reserved_52_63 : 12;
+#endif
+ } s;
+};
+
+union cvmx_ciu3_idtx_io {
+ uint64_t u64;
+ struct cvmx_ciu3_idtx_io_s {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_5_63 : 59;
+ uint64_t io : 5;
+#else
+ uint64_t io : 5;
+ uint64_t reserved_5_63 : 59;
+#endif
+ } s;
+};
+
+union cvmx_ciu3_idtx_ppx {
+ uint64_t u64;
+ struct cvmx_ciu3_idtx_ppx_s {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_48_63 : 16;
+ uint64_t pp : 48;
+#else
+ uint64_t pp : 48;
+ uint64_t reserved_48_63 : 16;
+#endif
+ } s;
+};
+
+union cvmx_ciu3_intr_ram_ecc_ctl {
+ uint64_t u64;
+ struct cvmx_ciu3_intr_ram_ecc_ctl_s {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_3_63 : 61;
+ uint64_t flip_synd : 2;
+ uint64_t ecc_ena : 1;
+#else
+ uint64_t ecc_ena : 1;
+ uint64_t flip_synd : 2;
+ uint64_t reserved_3_63 : 61;
+#endif
+ } s;
+};
+
+union cvmx_ciu3_intr_ram_ecc_st {
+ uint64_t u64;
+ struct cvmx_ciu3_intr_ram_ecc_st_s {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_52_63 : 12;
+ uint64_t addr : 20;
+ uint64_t reserved_6_31 : 26;
+ uint64_t sisc_dbe : 1;
+ uint64_t sisc_sbe : 1;
+ uint64_t idt_dbe : 1;
+ uint64_t idt_sbe : 1;
+ uint64_t isc_dbe : 1;
+ uint64_t isc_sbe : 1;
+#else
+ uint64_t isc_sbe : 1;
+ uint64_t isc_dbe : 1;
+ uint64_t idt_sbe : 1;
+ uint64_t idt_dbe : 1;
+ uint64_t sisc_sbe : 1;
+ uint64_t sisc_dbe : 1;
+ uint64_t reserved_6_31 : 26;
+ uint64_t addr : 20;
+ uint64_t reserved_52_63 : 12;
+#endif
+ } s;
+};
+
+union cvmx_ciu3_intr_ready {
+ uint64_t u64;
+ struct cvmx_ciu3_intr_ready_s {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_46_63 : 18;
+ uint64_t index : 14;
+ uint64_t reserved_1_31 : 31;
+ uint64_t ready : 1;
+#else
+ uint64_t ready : 1;
+ uint64_t reserved_1_31 : 31;
+ uint64_t index : 14;
+ uint64_t reserved_46_63 : 18;
+#endif
+ } s;
+};
+
+union cvmx_ciu3_intr_slowdown {
+ uint64_t u64;
+ struct cvmx_ciu3_intr_slowdown_s {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_3_63 : 61;
+ uint64_t ctl : 3;
+#else
+ uint64_t ctl : 3;
+ uint64_t reserved_3_63 : 61;
+#endif
+ } s;
+};
+
+union cvmx_ciu3_iscx_ctl {
+ uint64_t u64;
+ struct cvmx_ciu3_iscx_ctl_s {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_24_63 : 40;
+ uint64_t idt : 8;
+ uint64_t imp : 1;
+ uint64_t reserved_2_14 : 13;
+ uint64_t en : 1;
+ uint64_t raw : 1;
+#else
+ uint64_t raw : 1;
+ uint64_t en : 1;
+ uint64_t reserved_2_14 : 13;
+ uint64_t imp : 1;
+ uint64_t idt : 8;
+ uint64_t reserved_24_63 : 40;
+#endif
+ } s;
+};
+
+union cvmx_ciu3_iscx_w1c {
+ uint64_t u64;
+ struct cvmx_ciu3_iscx_w1c_s {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_2_63 : 62;
+ uint64_t en : 1;
+ uint64_t raw : 1;
+#else
+ uint64_t raw : 1;
+ uint64_t en : 1;
+ uint64_t reserved_2_63 : 62;
+#endif
+ } s;
+};
+
+union cvmx_ciu3_iscx_w1s {
+ uint64_t u64;
+ struct cvmx_ciu3_iscx_w1s_s {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_2_63 : 62;
+ uint64_t en : 1;
+ uint64_t raw : 1;
+#else
+ uint64_t raw : 1;
+ uint64_t en : 1;
+ uint64_t reserved_2_63 : 62;
+#endif
+ } s;
+};
+
+union cvmx_ciu3_nmi {
+ uint64_t u64;
+ struct cvmx_ciu3_nmi_s {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_48_63 : 16;
+ uint64_t nmi : 48;
+#else
+ uint64_t nmi : 48;
+ uint64_t reserved_48_63 : 16;
+#endif
+ } s;
+};
+
+union cvmx_ciu3_siscx {
+ uint64_t u64;
+ struct cvmx_ciu3_siscx_s {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t en : 64;
+#else
+ uint64_t en : 64;
+#endif
+ } s;
+};
+
+union cvmx_ciu3_timx {
+ uint64_t u64;
+ struct cvmx_ciu3_timx_s {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_37_63 : 27;
+ uint64_t one_shot : 1;
+ uint64_t len : 36;
+#else
+ uint64_t len : 36;
+ uint64_t one_shot : 1;
+ uint64_t reserved_37_63 : 27;
+#endif
+ } s;
+};
+
+#endif
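
Each CSR macro above folds a bounded unit index into a fixed IO-segment base; a standalone sketch of the arithmetic for CVMX_CIU3_IDTX_CTL(), with the CVMX_ADD_IO_SEG() wrapping left out.

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        uint64_t base = 0x0001010000110000ull;  /* CIU3_IDT(0..255)_CTL base */
        unsigned int idt = 3;

        printf("CIU3_IDT(3)_CTL at %#llx\n",
               (unsigned long long)(base + (idt & 255) * 8));   /* ...110018 */
        return 0;
}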
diff --git a/arch/mips/include/asm/octeon/cvmx-cmd-queue.h b/arch/mips/include/asm/octeon/cvmx-cmd-queue.h
index 8d05d9069823..a07a36f7d814 100644
--- a/arch/mips/include/asm/octeon/cvmx-cmd-queue.h
+++ b/arch/mips/include/asm/octeon/cvmx-cmd-queue.h
@@ -146,7 +146,7 @@ typedef struct {
* This structure contains the global state of all command queues.
* It is stored in a bootmem named block and shared by all
 * applications running on Octeon. Tickets are stored in a different
- * cahce line that queue information to reduce the contention on the
+ * cache line than queue information to reduce the contention on the
* ll/sc used to get a ticket. If this is not the case, the update
* of queue state causes the ll/sc to fail quite often.
*/
diff --git a/arch/mips/include/asm/octeon/cvmx-coremask.h b/arch/mips/include/asm/octeon/cvmx-coremask.h
new file mode 100644
index 000000000000..097dc096db84
--- /dev/null
+++ b/arch/mips/include/asm/octeon/cvmx-coremask.h
@@ -0,0 +1,89 @@
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (c) 2016 Cavium Inc. (support@cavium.com).
+ *
+ */
+
+/*
+ * Module to support operations on bitmap of cores. Coremask can be used to
+ * select a specific core, a group of cores, or all available cores, for
+ * initialization and differentiation of roles within a single shared binary
+ * executable image.
+ *
+ * The core numbers used in this file are the same value as what is found in
+ * the COP0_EBASE register and the rdhwr 0 instruction.
+ *
+ * For the CN78XX and other multi-node environments the core numbers are not
+ * contiguous. The core numbers for the CN78XX are as follows:
+ *
+ * Node 0: Cores 0 - 47
+ * Node 1: Cores 128 - 175
+ * Node 2: Cores 256 - 303
+ * Node 3: Cores 384 - 431
+ *
+ */
+
+#ifndef __CVMX_COREMASK_H__
+#define __CVMX_COREMASK_H__
+
+#define CVMX_MIPS_MAX_CORES 1024
+/* bits per holder */
+#define CVMX_COREMASK_ELTSZ 64
+
+/* cvmx_coremask_t's size in u64 */
+#define CVMX_COREMASK_BMPSZ (CVMX_MIPS_MAX_CORES / CVMX_COREMASK_ELTSZ)
+
+
+/* cvmx_coremask_t */
+struct cvmx_coremask {
+ u64 coremask_bitmap[CVMX_COREMASK_BMPSZ];
+};
+
+/*
+ * Is ``core'' set in the coremask?
+ */
+static inline bool cvmx_coremask_is_core_set(const struct cvmx_coremask *pcm,
+ int core)
+{
+ int n, i;
+
+ n = core % CVMX_COREMASK_ELTSZ;
+ i = core / CVMX_COREMASK_ELTSZ;
+
+ return (pcm->coremask_bitmap[i] & ((u64)1 << n)) != 0;
+}
+
+/*
+ * Make a copy of a coremask
+ */
+static inline void cvmx_coremask_copy(struct cvmx_coremask *dest,
+ const struct cvmx_coremask *src)
+{
+ memcpy(dest, src, sizeof(*dest));
+}
+
+/*
+ * Set the lower 64-bit of the coremask.
+ */
+static inline void cvmx_coremask_set64(struct cvmx_coremask *pcm,
+ uint64_t coremask_64)
+{
+ pcm->coremask_bitmap[0] = coremask_64;
+}
+
+/*
+ * Clear ``core'' from the coremask.
+ */
+static inline void cvmx_coremask_clear_core(struct cvmx_coremask *pcm, int core)
+{
+ int n, i;
+
+ n = core % CVMX_COREMASK_ELTSZ;
+ i = core / CVMX_COREMASK_ELTSZ;
+ pcm->coremask_bitmap[i] &= ~(1ull << n);
+}
+
+#endif /* __CVMX_COREMASK_H__ */
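
The helpers above index the bitmap as 16 64-bit words; a standalone sketch of the word/bit arithmetic for core 130, which is node 1, local core 2 in the CN78XX numbering.

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
        uint64_t bitmap[1024 / 64];             /* CVMX_COREMASK_BMPSZ words */
        int core = 130, i = core / 64, n = core % 64;

        memset(bitmap, 0, sizeof(bitmap));
        bitmap[i] |= (uint64_t)1 << n;          /* the "set core" operation */
        printf("word %d, bit %d, value %#llx\n",
               i, n, (unsigned long long)bitmap[i]);    /* word 2, bit 2, 0x4 */
        return 0;
}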
diff --git a/arch/mips/include/asm/octeon/cvmx-fpa-defs.h b/arch/mips/include/asm/octeon/cvmx-fpa-defs.h
index 1d79e3c7040d..887ff8e1f715 100644
--- a/arch/mips/include/asm/octeon/cvmx-fpa-defs.h
+++ b/arch/mips/include/asm/octeon/cvmx-fpa-defs.h
@@ -66,6 +66,7 @@
#define CVMX_FPA_WART_CTL (CVMX_ADD_IO_SEG(0x00011800280000D8ull))
#define CVMX_FPA_WART_STATUS (CVMX_ADD_IO_SEG(0x00011800280000E0ull))
#define CVMX_FPA_WQE_THRESHOLD (CVMX_ADD_IO_SEG(0x0001180028000468ull))
+#define CVMX_FPA_CLK_COUNT (CVMX_ADD_IO_SEG(0x00012800000000F0ull))
union cvmx_fpa_addr_range_error {
uint64_t u64;
diff --git a/arch/mips/include/asm/octeon/cvmx-helper-board.h b/arch/mips/include/asm/octeon/cvmx-helper-board.h
index 893320375aef..cda93aee712c 100644
--- a/arch/mips/include/asm/octeon/cvmx-helper-board.h
+++ b/arch/mips/include/asm/octeon/cvmx-helper-board.h
@@ -94,7 +94,7 @@ extern int cvmx_helper_board_get_mii_address(int ipd_port);
* @phy_addr: The address of the PHY to program
* @link_flags:
* Flags to control autonegotiation. Bit 0 is autonegotiation
- * enable/disable to maintain backware compatibility.
+ * enable/disable to maintain backward compatibility.
* @link_info: Link speed to program. If the speed is zero and autonegotiation
* is enabled, all possible negotiation speeds are advertised.
*
diff --git a/arch/mips/include/asm/octeon/cvmx-ipd.h b/arch/mips/include/asm/octeon/cvmx-ipd.h
index e13490ebbb27..cbdc14b77435 100644
--- a/arch/mips/include/asm/octeon/cvmx-ipd.h
+++ b/arch/mips/include/asm/octeon/cvmx-ipd.h
@@ -39,7 +39,7 @@
enum cvmx_ipd_mode {
CVMX_IPD_OPC_MODE_STT = 0LL, /* All blocks DRAM, not cached in L2 */
- CVMX_IPD_OPC_MODE_STF = 1LL, /* All bloccks into L2 */
+ CVMX_IPD_OPC_MODE_STF = 1LL, /* All blocks into L2 */
CVMX_IPD_OPC_MODE_STF1_STT = 2LL, /* 1st block L2, rest DRAM */
CVMX_IPD_OPC_MODE_STF2_STT = 3LL /* 1st, 2nd blocks L2, rest DRAM */
};
diff --git a/arch/mips/include/asm/octeon/cvmx-mio-defs.h b/arch/mips/include/asm/octeon/cvmx-mio-defs.h
index bb0ae338a460..5196c04eee41 100644
--- a/arch/mips/include/asm/octeon/cvmx-mio-defs.h
+++ b/arch/mips/include/asm/octeon/cvmx-mio-defs.h
@@ -1481,7 +1481,9 @@ union cvmx_mio_fus_dat2 {
uint64_t u64;
struct cvmx_mio_fus_dat2_s {
#ifdef __BIG_ENDIAN_BITFIELD
- uint64_t reserved_48_63:16;
+ uint64_t reserved_59_63:5;
+ uint64_t run_platform:3;
+ uint64_t gbl_pwr_throttle:8;
uint64_t fus118:1;
uint64_t rom_info:10;
uint64_t power_limit:2;
@@ -1513,7 +1515,9 @@ union cvmx_mio_fus_dat2 {
uint64_t power_limit:2;
uint64_t rom_info:10;
uint64_t fus118:1;
- uint64_t reserved_48_63:16;
+ uint64_t gbl_pwr_throttle:8;
+ uint64_t run_platform:3;
+ uint64_t reserved_59_63:5;
#endif
} s;
struct cvmx_mio_fus_dat2_cn30xx {
@@ -1837,50 +1841,192 @@ union cvmx_mio_fus_dat2 {
#endif
} cn68xx;
struct cvmx_mio_fus_dat2_cn68xx cn68xxp1;
+ struct cvmx_mio_fus_dat2_cn70xx {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_48_63:16;
+ uint64_t fus118:1;
+ uint64_t rom_info:10;
+ uint64_t power_limit:2;
+ uint64_t dorm_crypto:1;
+ uint64_t fus318:1;
+ uint64_t raid_en:1;
+ uint64_t reserved_31_29:3;
+ uint64_t nodfa_cp2:1;
+ uint64_t nomul:1;
+ uint64_t nocrypto:1;
+ uint64_t reserved_25_24:2;
+ uint64_t chip_id:8;
+ uint64_t reserved_15_0:16;
+#else
+ uint64_t reserved_15_0:16;
+ uint64_t chip_id:8;
+ uint64_t reserved_25_24:2;
+ uint64_t nocrypto:1;
+ uint64_t nomul:1;
+ uint64_t nodfa_cp2:1;
+ uint64_t reserved_31_29:3;
+ uint64_t raid_en:1;
+ uint64_t fus318:1;
+ uint64_t dorm_crypto:1;
+ uint64_t power_limit:2;
+ uint64_t rom_info:10;
+ uint64_t fus118:1;
+ uint64_t reserved_48_63:16;
+#endif
+ } cn70xx;
+ struct cvmx_mio_fus_dat2_cn70xx cn70xxp1;
+ struct cvmx_mio_fus_dat2_cn73xx {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_59_63:5;
+ uint64_t run_platform:3;
+ uint64_t gbl_pwr_throttle:8;
+ uint64_t fus118:1;
+ uint64_t rom_info:10;
+ uint64_t power_limit:2;
+ uint64_t dorm_crypto:1;
+ uint64_t fus318:1;
+ uint64_t raid_en:1;
+ uint64_t reserved_31_29:3;
+ uint64_t nodfa_cp2:1;
+ uint64_t nomul:1;
+ uint64_t nocrypto:1;
+ uint64_t reserved_25_24:2;
+ uint64_t chip_id:8;
+ uint64_t reserved_15_0:16;
+#else
+ uint64_t reserved_15_0:16;
+ uint64_t chip_id:8;
+ uint64_t reserved_25_24:2;
+ uint64_t nocrypto:1;
+ uint64_t nomul:1;
+ uint64_t nodfa_cp2:1;
+ uint64_t reserved_31_29:3;
+ uint64_t raid_en:1;
+ uint64_t fus318:1;
+ uint64_t dorm_crypto:1;
+ uint64_t power_limit:2;
+ uint64_t rom_info:10;
+ uint64_t fus118:1;
+ uint64_t gbl_pwr_throttle:8;
+ uint64_t run_platform:3;
+ uint64_t reserved_59_63:5;
+#endif
+ } cn73xx;
+ struct cvmx_mio_fus_dat2_cn78xx {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_59_63:5;
+ uint64_t run_platform:3;
+ uint64_t reserved_48_55:8;
+ uint64_t fus118:1;
+ uint64_t rom_info:10;
+ uint64_t power_limit:2;
+ uint64_t dorm_crypto:1;
+ uint64_t fus318:1;
+ uint64_t raid_en:1;
+ uint64_t reserved_31_29:3;
+ uint64_t nodfa_cp2:1;
+ uint64_t nomul:1;
+ uint64_t nocrypto:1;
+ uint64_t reserved_25_24:2;
+ uint64_t chip_id:8;
+ uint64_t reserved_0_15:16;
+#else
+ uint64_t reserved_0_15:16;
+ uint64_t chip_id:8;
+ uint64_t reserved_25_24:2;
+ uint64_t nocrypto:1;
+ uint64_t nomul:1;
+ uint64_t nodfa_cp2:1;
+ uint64_t reserved_31_29:3;
+ uint64_t raid_en:1;
+ uint64_t fus318:1;
+ uint64_t dorm_crypto:1;
+ uint64_t power_limit:2;
+ uint64_t rom_info:10;
+ uint64_t fus118:1;
+ uint64_t reserved_48_55:8;
+ uint64_t run_platform:3;
+ uint64_t reserved_59_63:5;
+#endif
+ } cn78xx;
+ struct cvmx_mio_fus_dat2_cn78xxp2 {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t reserved_59_63:5;
+ uint64_t run_platform:3;
+ uint64_t gbl_pwr_throttle:8;
+ uint64_t fus118:1;
+ uint64_t rom_info:10;
+ uint64_t power_limit:2;
+ uint64_t dorm_crypto:1;
+ uint64_t fus318:1;
+ uint64_t raid_en:1;
+ uint64_t reserved_31_29:3;
+ uint64_t nodfa_cp2:1;
+ uint64_t nomul:1;
+ uint64_t nocrypto:1;
+ uint64_t reserved_25_24:2;
+ uint64_t chip_id:8;
+ uint64_t reserved_0_15:16;
+#else
+ uint64_t reserved_0_15:16;
+ uint64_t chip_id:8;
+ uint64_t reserved_25_24:2;
+ uint64_t nocrypto:1;
+ uint64_t nomul:1;
+ uint64_t nodfa_cp2:1;
+ uint64_t reserved_31_29:3;
+ uint64_t raid_en:1;
+ uint64_t fus318:1;
+ uint64_t dorm_crypto:1;
+ uint64_t power_limit:2;
+ uint64_t rom_info:10;
+ uint64_t fus118:1;
+ uint64_t gbl_pwr_throttle:8;
+ uint64_t run_platform:3;
+ uint64_t reserved_59_63:5;
+#endif
+ } cn78xxp2;
struct cvmx_mio_fus_dat2_cn61xx cnf71xx;
+ struct cvmx_mio_fus_dat2_cn73xx cnf75xx;
};
union cvmx_mio_fus_dat3 {
uint64_t u64;
struct cvmx_mio_fus_dat3_s {
#ifdef __BIG_ENDIAN_BITFIELD
- uint64_t reserved_58_63:6;
+ uint64_t ema0:6;
uint64_t pll_ctl:10;
uint64_t dfa_info_dte:3;
uint64_t dfa_info_clm:4;
- uint64_t reserved_40_40:1;
- uint64_t ema:2;
+ uint64_t pll_alt_matrix:1;
+ uint64_t reserved_38_39:2;
uint64_t efus_lck_rsv:1;
uint64_t efus_lck_man:1;
uint64_t pll_half_dis:1;
uint64_t l2c_crip:3;
- uint64_t pll_div4:1;
- uint64_t reserved_29_30:2;
- uint64_t bar2_en:1;
+ uint64_t reserved_28_31:4;
uint64_t efus_lck:1;
uint64_t efus_ign:1;
uint64_t nozip:1;
uint64_t nodfa_dte:1;
- uint64_t icache:24;
+ uint64_t reserved_0_23:24;
#else
- uint64_t icache:24;
+ uint64_t reserved_0_23:24;
uint64_t nodfa_dte:1;
uint64_t nozip:1;
uint64_t efus_ign:1;
uint64_t efus_lck:1;
- uint64_t bar2_en:1;
- uint64_t reserved_29_30:2;
- uint64_t pll_div4:1;
+ uint64_t reserved_28_31:4;
uint64_t l2c_crip:3;
uint64_t pll_half_dis:1;
uint64_t efus_lck_man:1;
uint64_t efus_lck_rsv:1;
- uint64_t ema:2;
- uint64_t reserved_40_40:1;
+ uint64_t reserved_38_39:2;
+ uint64_t pll_alt_matrix:1;
uint64_t dfa_info_clm:4;
uint64_t dfa_info_dte:3;
uint64_t pll_ctl:10;
- uint64_t reserved_58_63:6;
+ uint64_t ema0:6;
#endif
} s;
struct cvmx_mio_fus_dat3_cn30xx {
@@ -2022,7 +2168,239 @@ union cvmx_mio_fus_dat3 {
struct cvmx_mio_fus_dat3_cn61xx cn66xx;
struct cvmx_mio_fus_dat3_cn61xx cn68xx;
struct cvmx_mio_fus_dat3_cn61xx cn68xxp1;
+ struct cvmx_mio_fus_dat3_cn70xx {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t ema0:6;
+ uint64_t pll_ctl:10;
+ uint64_t dfa_info_dte:3;
+ uint64_t dfa_info_clm:4;
+ uint64_t pll_alt_matrix:1;
+ uint64_t pll_bwadj_denom:2;
+ uint64_t efus_lck_rsv:1;
+ uint64_t efus_lck_man:1;
+ uint64_t pll_half_dis:1;
+ uint64_t l2c_crip:3;
+ uint64_t use_int_refclk:1;
+ uint64_t zip_info:2;
+ uint64_t bar2_sz_conf:1;
+ uint64_t efus_lck:1;
+ uint64_t efus_ign:1;
+ uint64_t nozip:1;
+ uint64_t nodfa_dte:1;
+ uint64_t ema1:6;
+ uint64_t reserved_0_17:18;
+#else
+ uint64_t reserved_0_17:18;
+ uint64_t ema1:6;
+ uint64_t nodfa_dte:1;
+ uint64_t nozip:1;
+ uint64_t efus_ign:1;
+ uint64_t efus_lck:1;
+ uint64_t bar2_sz_conf:1;
+ uint64_t zip_info:2;
+ uint64_t use_int_refclk:1;
+ uint64_t l2c_crip:3;
+ uint64_t pll_half_dis:1;
+ uint64_t efus_lck_man:1;
+ uint64_t efus_lck_rsv:1;
+ uint64_t pll_bwadj_denom:2;
+ uint64_t pll_alt_matrix:1;
+ uint64_t dfa_info_clm:4;
+ uint64_t dfa_info_dte:3;
+ uint64_t pll_ctl:10;
+ uint64_t ema0:6;
+#endif
+ } cn70xx;
+ struct cvmx_mio_fus_dat3_cn70xxp1 {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t ema0:6;
+ uint64_t pll_ctl:10;
+ uint64_t dfa_info_dte:3;
+ uint64_t dfa_info_clm:4;
+ uint64_t reserved_38_40:3;
+ uint64_t efus_lck_rsv:1;
+ uint64_t efus_lck_man:1;
+ uint64_t pll_half_dis:1;
+ uint64_t l2c_crip:3;
+ uint64_t reserved_31_31:1;
+ uint64_t zip_info:2;
+ uint64_t bar2_sz_conf:1;
+ uint64_t efus_lck:1;
+ uint64_t efus_ign:1;
+ uint64_t nozip:1;
+ uint64_t nodfa_dte:1;
+ uint64_t ema1:6;
+ uint64_t reserved_0_17:18;
+#else
+ uint64_t reserved_0_17:18;
+ uint64_t ema1:6;
+ uint64_t nodfa_dte:1;
+ uint64_t nozip:1;
+ uint64_t efus_ign:1;
+ uint64_t efus_lck:1;
+ uint64_t bar2_sz_conf:1;
+ uint64_t zip_info:2;
+ uint64_t reserved_31_31:1;
+ uint64_t l2c_crip:3;
+ uint64_t pll_half_dis:1;
+ uint64_t efus_lck_man:1;
+ uint64_t efus_lck_rsv:1;
+ uint64_t reserved_38_40:3;
+ uint64_t dfa_info_clm:4;
+ uint64_t dfa_info_dte:3;
+ uint64_t pll_ctl:10;
+ uint64_t ema0:6;
+#endif
+ } cn70xxp1;
+ struct cvmx_mio_fus_dat3_cn73xx {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t ema0:6;
+ uint64_t pll_ctl:10;
+ uint64_t dfa_info_dte:3;
+ uint64_t dfa_info_clm:4;
+ uint64_t pll_alt_matrix:1;
+ uint64_t pll_bwadj_denom:2;
+ uint64_t efus_lck_rsv:1;
+ uint64_t efus_lck_man:1;
+ uint64_t pll_half_dis:1;
+ uint64_t l2c_crip:3;
+ uint64_t use_int_refclk:1;
+ uint64_t zip_info:2;
+ uint64_t bar2_sz_conf:1;
+ uint64_t efus_lck:1;
+ uint64_t efus_ign:1;
+ uint64_t nozip:1;
+ uint64_t nodfa_dte:1;
+ uint64_t ema1:6;
+ uint64_t nohna_dte:1;
+ uint64_t hna_info_dte:3;
+ uint64_t hna_info_clm:4;
+ uint64_t reserved_9_9:1;
+ uint64_t core_pll_mul:5;
+ uint64_t pnr_pll_mul:4;
+#else
+ uint64_t pnr_pll_mul:4;
+ uint64_t core_pll_mul:5;
+ uint64_t reserved_9_9:1;
+ uint64_t hna_info_clm:4;
+ uint64_t hna_info_dte:3;
+ uint64_t nohna_dte:1;
+ uint64_t ema1:6;
+ uint64_t nodfa_dte:1;
+ uint64_t nozip:1;
+ uint64_t efus_ign:1;
+ uint64_t efus_lck:1;
+ uint64_t bar2_sz_conf:1;
+ uint64_t zip_info:2;
+ uint64_t use_int_refclk:1;
+ uint64_t l2c_crip:3;
+ uint64_t pll_half_dis:1;
+ uint64_t efus_lck_man:1;
+ uint64_t efus_lck_rsv:1;
+ uint64_t pll_bwadj_denom:2;
+ uint64_t pll_alt_matrix:1;
+ uint64_t dfa_info_clm:4;
+ uint64_t dfa_info_dte:3;
+ uint64_t pll_ctl:10;
+ uint64_t ema0:6;
+#endif
+ } cn73xx;
+ struct cvmx_mio_fus_dat3_cn78xx {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t ema0:6;
+ uint64_t pll_ctl:10;
+ uint64_t dfa_info_dte:3;
+ uint64_t dfa_info_clm:4;
+ uint64_t reserved_38_40:3;
+ uint64_t efus_lck_rsv:1;
+ uint64_t efus_lck_man:1;
+ uint64_t pll_half_dis:1;
+ uint64_t l2c_crip:3;
+ uint64_t reserved_31_31:1;
+ uint64_t zip_info:2;
+ uint64_t bar2_sz_conf:1;
+ uint64_t efus_lck:1;
+ uint64_t efus_ign:1;
+ uint64_t nozip:1;
+ uint64_t nodfa_dte:1;
+ uint64_t ema1:6;
+ uint64_t nohna_dte:1;
+ uint64_t hna_info_dte:3;
+ uint64_t hna_info_clm:4;
+ uint64_t reserved_0_9:10;
+#else
+ uint64_t reserved_0_9:10;
+ uint64_t hna_info_clm:4;
+ uint64_t hna_info_dte:3;
+ uint64_t nohna_dte:1;
+ uint64_t ema1:6;
+ uint64_t nodfa_dte:1;
+ uint64_t nozip:1;
+ uint64_t efus_ign:1;
+ uint64_t efus_lck:1;
+ uint64_t bar2_sz_conf:1;
+ uint64_t zip_info:2;
+ uint64_t reserved_31_31:1;
+ uint64_t l2c_crip:3;
+ uint64_t pll_half_dis:1;
+ uint64_t efus_lck_man:1;
+ uint64_t efus_lck_rsv:1;
+ uint64_t reserved_38_40:3;
+ uint64_t dfa_info_clm:4;
+ uint64_t dfa_info_dte:3;
+ uint64_t pll_ctl:10;
+ uint64_t ema0:6;
+#endif
+ } cn78xx;
+ struct cvmx_mio_fus_dat3_cn73xx cn78xxp2;
struct cvmx_mio_fus_dat3_cn61xx cnf71xx;
+ struct cvmx_mio_fus_dat3_cnf75xx {
+#ifdef __BIG_ENDIAN_BITFIELD
+ uint64_t ema0:6;
+ uint64_t pll_ctl:10;
+ uint64_t dfa_info_dte:3;
+ uint64_t dfa_info_clm:4;
+ uint64_t pll_alt_matrix:1;
+ uint64_t pll_bwadj_denom:2;
+ uint64_t efus_lck_rsv:1;
+ uint64_t efus_lck_man:1;
+ uint64_t pll_half_dis:1;
+ uint64_t l2c_crip:3;
+ uint64_t use_int_refclk:1;
+ uint64_t zip_info:2;
+ uint64_t bar2_sz_conf:1;
+ uint64_t efus_lck:1;
+ uint64_t efus_ign:1;
+ uint64_t nozip:1;
+ uint64_t nodfa_dte:1;
+ uint64_t ema1:6;
+ uint64_t reserved_9_17:9;
+ uint64_t core_pll_mul:5;
+ uint64_t pnr_pll_mul:4;
+#else
+ uint64_t pnr_pll_mul:4;
+ uint64_t core_pll_mul:5;
+ uint64_t reserved_9_17:9;
+ uint64_t ema1:6;
+ uint64_t nodfa_dte:1;
+ uint64_t nozip:1;
+ uint64_t efus_ign:1;
+ uint64_t efus_lck:1;
+ uint64_t bar2_sz_conf:1;
+ uint64_t zip_info:2;
+ uint64_t use_int_refclk:1;
+ uint64_t l2c_crip:3;
+ uint64_t pll_half_dis:1;
+ uint64_t efus_lck_man:1;
+ uint64_t efus_lck_rsv:1;
+ uint64_t pll_bwadj_denom:2;
+ uint64_t pll_alt_matrix:1;
+ uint64_t dfa_info_clm:4;
+ uint64_t dfa_info_dte:3;
+ uint64_t pll_ctl:10;
+ uint64_t ema0:6;
+#endif
+ } cnf75xx;
};
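
All of the per-model views alias the same 64-bit word, so decoding a fuse value is an assignment to u64 followed by reads of the model's bitfields; a small illustrative sketch (printf stands in for the kernel's printk helpers).

/* Sketch: decode a CN73XX fuse word through the union above. */
static void show_cn73xx_fuses(uint64_t raw)
{
        union cvmx_mio_fus_dat3 fus;

        fus.u64 = raw;
        printf("zip fused off: %u, core pll mul fuse: %u\n",
               (unsigned int)fus.cn73xx.nozip,
               (unsigned int)fus.cn73xx.core_pll_mul);
}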
union cvmx_mio_fus_ema {
diff --git a/arch/mips/include/asm/octeon/cvmx-pow.h b/arch/mips/include/asm/octeon/cvmx-pow.h
index 51531563f8dc..410bb70e5aac 100644
--- a/arch/mips/include/asm/octeon/cvmx-pow.h
+++ b/arch/mips/include/asm/octeon/cvmx-pow.h
@@ -2051,7 +2051,7 @@ static inline void cvmx_pow_tag_sw_desched(uint32_t tag,
}
/**
- * Descchedules the current work queue entry.
+ * Deschedules the current work queue entry.
*
* @no_sched: no schedule flag value to be set on the work queue
* entry. If this is set the entry will not be
diff --git a/arch/mips/include/asm/octeon/cvmx-sysinfo.h b/arch/mips/include/asm/octeon/cvmx-sysinfo.h
index 2131197422e5..c6c3ee39c69d 100644
--- a/arch/mips/include/asm/octeon/cvmx-sysinfo.h
+++ b/arch/mips/include/asm/octeon/cvmx-sysinfo.h
@@ -4,7 +4,7 @@
* Contact: support@caviumnetworks.com
* This file is part of the OCTEON SDK
*
- * Copyright (c) 2003-2008 Cavium Networks
+ * Copyright (c) 2003-2016 Cavium, Inc.
*
* This file is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License, Version 2, as
@@ -32,6 +32,8 @@
#ifndef __CVMX_SYSINFO_H__
#define __CVMX_SYSINFO_H__
+#include "cvmx-coremask.h"
+
#define OCTEON_SERIAL_LEN 20
/**
* Structure describing application specific information.
@@ -50,8 +52,7 @@ struct cvmx_sysinfo {
uint64_t system_dram_size;
/* ptr to memory descriptor block */
- void *phy_mem_desc_ptr;
-
+ uint64_t phy_mem_desc_addr;
/* Application image specific variables */
/* stack top address (virtual) */
@@ -63,7 +64,7 @@ struct cvmx_sysinfo {
/* heap size in bytes */
uint32_t heap_size;
/* coremask defining cores running application */
- uint32_t core_mask;
+ struct cvmx_coremask core_mask;
/* Deprecated, use cvmx_coremask_first_core() to select init core */
uint32_t init_core;
@@ -121,32 +122,4 @@ struct cvmx_sysinfo {
extern struct cvmx_sysinfo *cvmx_sysinfo_get(void);
-/**
- * This function is used in non-simple executive environments (such as
- * Linux kernel, u-boot, etc.) to configure the minimal fields that
- * are required to use simple executive files directly.
- *
- * Locking (if required) must be handled outside of this
- * function
- *
- * @phy_mem_desc_ptr: Pointer to global physical memory descriptor
- * (bootmem descriptor) @board_type: Octeon board
- * type enumeration
- *
- * @board_rev_major:
- * Board major revision
- * @board_rev_minor:
- * Board minor revision
- * @cpu_clock_hz:
- * CPU clock freqency in hertz
- *
- * Returns 0: Failure
- * 1: success
- */
-extern int cvmx_sysinfo_minimal_initialize(void *phy_mem_desc_ptr,
- uint16_t board_type,
- uint8_t board_rev_major,
- uint8_t board_rev_minor,
- uint32_t cpu_clock_hz);
-
#endif /* __CVMX_SYSINFO_H__ */
diff --git a/arch/mips/include/asm/octeon/cvmx.h b/arch/mips/include/asm/octeon/cvmx.h
index 3e982e0c397e..2530e8731c8a 100644
--- a/arch/mips/include/asm/octeon/cvmx.h
+++ b/arch/mips/include/asm/octeon/cvmx.h
@@ -57,6 +57,7 @@ enum cvmx_mips_space {
#include <asm/octeon/cvmx-sysinfo.h>
#include <asm/octeon/cvmx-ciu-defs.h>
+#include <asm/octeon/cvmx-ciu3-defs.h>
#include <asm/octeon/cvmx-gpio-defs.h>
#include <asm/octeon/cvmx-iob-defs.h>
#include <asm/octeon/cvmx-ipd-defs.h>
@@ -341,6 +342,21 @@ static inline unsigned int cvmx_get_core_num(void)
return core_num;
}
+/* Maximum # of bits to define core in node */
+#define CVMX_NODE_NO_SHIFT 7
+#define CVMX_NODE_MASK 0x3
+static inline unsigned int cvmx_get_node_num(void)
+{
+ unsigned int core_num = cvmx_get_core_num();
+
+ return (core_num >> CVMX_NODE_NO_SHIFT) & CVMX_NODE_MASK;
+}
+
+static inline unsigned int cvmx_get_local_core_num(void)
+{
+ return cvmx_get_core_num() & ((1 << CVMX_NODE_NO_SHIFT) - 1);
+}
+
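
The node and local core numbers are carved out of the global core number, 7 bits of local core and 2 bits of node; a standalone sketch of the split for core 130.

#include <stdio.h>

int main(void)
{
        unsigned int core_num = 130;                    /* as read from EBase */
        unsigned int node  = (core_num >> 7) & 0x3;     /* CVMX_NODE_NO_SHIFT/MASK */
        unsigned int local = core_num & ((1 << 7) - 1);

        printf("core %u -> node %u, local core %u\n", core_num, node, local);
        return 0;       /* prints: core 130 -> node 1, local core 2 */
}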
/**
* Returns the number of bits set in the provided value.
* Simple wrapper for POP instruction.
@@ -448,8 +464,15 @@ static inline uint64_t cvmx_get_cycle_global(void)
/* Return the number of cores available in the chip */
static inline uint32_t cvmx_octeon_num_cores(void)
{
- uint32_t ciu_fuse = (uint32_t) cvmx_read_csr(CVMX_CIU_FUSE) & 0xffff;
- return cvmx_pop(ciu_fuse);
+ u64 ciu_fuse_reg;
+ u64 ciu_fuse;
+
+ if (OCTEON_IS_OCTEON3() && !OCTEON_IS_MODEL(OCTEON_CN70XX))
+ ciu_fuse_reg = CVMX_CIU3_FUSE;
+ else
+ ciu_fuse_reg = CVMX_CIU_FUSE;
+ ciu_fuse = cvmx_read_csr(ciu_fuse_reg);
+ return cvmx_dpop(ciu_fuse);
}
#endif /* __CVMX_H__ */
diff --git a/arch/mips/include/asm/octeon/octeon-feature.h b/arch/mips/include/asm/octeon/octeon-feature.h
index 3ed10a8d7865..a19ca3b2775c 100644
--- a/arch/mips/include/asm/octeon/octeon-feature.h
+++ b/arch/mips/include/asm/octeon/octeon-feature.h
@@ -81,6 +81,10 @@ enum octeon_feature {
OCTEON_FEATURE_HFA,
OCTEON_FEATURE_DFM,
OCTEON_FEATURE_CIU2,
+ OCTEON_FEATURE_CIU3,
+ /* Octeon has FPA first seen on 78XX */
+ OCTEON_FEATURE_FPA3,
+ OCTEON_FEATURE_FAU,
OCTEON_MAX_FEATURE
};
@@ -110,7 +114,7 @@ static inline int octeon_has_crypto(void)
* Returns Non zero if the feature exists. Zero if the feature does not
* exist.
*/
-static inline int octeon_has_feature(enum octeon_feature feature)
+static inline bool octeon_has_feature(enum octeon_feature feature)
{
switch (feature) {
case OCTEON_FEATURE_SAAD:
@@ -122,7 +126,7 @@ static inline int octeon_has_feature(enum octeon_feature feature)
fus_2.u64 = cvmx_read_csr(CVMX_MIO_FUS_DAT2);
return !fus_2.s.nocrypto && !fus_2.s.nomul && fus_2.s.dorm_crypto;
} else {
- return 0;
+ return false;
}
case OCTEON_FEATURE_PCIE:
@@ -190,11 +194,20 @@ static inline int octeon_has_feature(enum octeon_feature feature)
case OCTEON_FEATURE_CIU2:
return OCTEON_IS_MODEL(OCTEON_CN68XX);
+ case OCTEON_FEATURE_CIU3:
+ case OCTEON_FEATURE_FPA3:
+ return OCTEON_IS_MODEL(OCTEON_CN78XX)
+ || OCTEON_IS_MODEL(OCTEON_CNF75XX)
+ || OCTEON_IS_MODEL(OCTEON_CN73XX);
+ case OCTEON_FEATURE_FAU:
+ return !(OCTEON_IS_MODEL(OCTEON_CN78XX)
+ || OCTEON_IS_MODEL(OCTEON_CNF75XX)
+ || OCTEON_IS_MODEL(OCTEON_CN73XX));
default:
break;
}
- return 0;
+ return false;
}
#endif /* __OCTEON_FEATURE_H__ */
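
A typical caller simply branches on the feature test; a minimal sketch of how the new CIU3 feature might be consumed, with both setup functions being placeholders rather than real kernel symbols.

/* Sketch: pick an interrupt-controller setup path from the feature test.
 * Both setup functions are placeholders for illustration only. */
static void octeon_pick_irq_init(void)
{
        if (octeon_has_feature(OCTEON_FEATURE_CIU3))
                setup_ciu3_interrupts();        /* 73xx/75xx/78xx-class parts */
        else
                setup_legacy_ciu_interrupts();  /* older CIU/CIU2 parts */
}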
diff --git a/arch/mips/include/asm/octeon/octeon-model.h b/arch/mips/include/asm/octeon/octeon-model.h
index 92b377e36dac..6c68517c2770 100644
--- a/arch/mips/include/asm/octeon/octeon-model.h
+++ b/arch/mips/include/asm/octeon/octeon-model.h
@@ -74,7 +74,12 @@
* CN7XXX models with new revision encoding
*/
+#define OCTEON_CNF75XX_PASS1_0 0x000d9800
+#define OCTEON_CNF75XX (OCTEON_CNF75XX_PASS1_0 | OM_IGNORE_REVISION)
+#define OCTEON_CNF75XX_PASS1_X (OCTEON_CNF75XX_PASS1_0 | OM_IGNORE_MINOR_REVISION)
+
#define OCTEON_CN73XX_PASS1_0 0x000d9700
+#define OCTEON_CN73XX_PASS1_1 0x000d9701
#define OCTEON_CN73XX (OCTEON_CN73XX_PASS1_0 | OM_IGNORE_REVISION)
#define OCTEON_CN73XX_PASS1_X (OCTEON_CN73XX_PASS1_0 | \
OM_IGNORE_MINOR_REVISION)
diff --git a/arch/mips/include/asm/octeon/octeon.h b/arch/mips/include/asm/octeon/octeon.h
index de9f74ee5dd0..07c0516ef4d5 100644
--- a/arch/mips/include/asm/octeon/octeon.h
+++ b/arch/mips/include/asm/octeon/octeon.h
@@ -299,6 +299,31 @@ static inline void octeon_npi_write32(uint64_t address, uint32_t val)
cvmx_read64_uint32(address ^ 4);
}
+#ifdef CONFIG_SMP
+void octeon_setup_smp(void);
+#else
+static inline void octeon_setup_smp(void) {}
+#endif
+
+struct irq_domain;
+struct device_node;
+struct irq_data;
+struct irq_chip;
+void octeon_ciu3_mbox_send(int cpu, unsigned int mbox);
+int octeon_irq_ciu3_xlat(struct irq_domain *d,
+ struct device_node *node,
+ const u32 *intspec,
+ unsigned int intsize,
+ unsigned long *out_hwirq,
+ unsigned int *out_type);
+void octeon_irq_ciu3_enable(struct irq_data *data);
+void octeon_irq_ciu3_disable(struct irq_data *data);
+void octeon_irq_ciu3_ack(struct irq_data *data);
+void octeon_irq_ciu3_mask(struct irq_data *data);
+void octeon_irq_ciu3_mask_ack(struct irq_data *data);
+int octeon_irq_ciu3_mapx(struct irq_domain *d, unsigned int virq,
+ irq_hw_number_t hw, struct irq_chip *chip);
+
/* Octeon multiplier save/restore routines from octeon_switch.S */
void octeon_mult_save(void);
void octeon_mult_restore(void);
diff --git a/arch/mips/include/asm/pci.h b/arch/mips/include/asm/pci.h
index 8c16fb7b8fdb..86b239d9d75d 100644
--- a/arch/mips/include/asm/pci.h
+++ b/arch/mips/include/asm/pci.h
@@ -43,8 +43,6 @@ struct pci_controller {
and XFree86. Eventually will be removed. */
unsigned int need_domain_info;
- int iommu;
-
/* Optional access methods for reading/writing the bus number
of the PCI controller */
int (*get_busno)(void);
@@ -106,11 +104,11 @@ static inline void pci_resource_to_user(const struct pci_dev *dev, int bar,
struct pci_dev;
/*
- * The PCI address space does equal the physical memory address space. The
- * networking and block device layers use this boolean for bounce buffer
- * decisions. This is set if any hose does not have an IOMMU.
+ * The PCI address space does equal the physical memory address space.
+ * The networking and block device layers use this boolean for bounce
+ * buffer decisions.
*/
-extern unsigned int PCI_DMA_BUS_IS_PHYS;
+#define PCI_DMA_BUS_IS_PHYS (1)
#ifdef CONFIG_PCI_DOMAINS
#define pci_domain_nr(bus) ((struct pci_controller *)(bus)->sysdata)->index
diff --git a/arch/mips/include/asm/pgtable-32.h b/arch/mips/include/asm/pgtable-32.h
index 832e2167d00f..d21f3da7bdb6 100644
--- a/arch/mips/include/asm/pgtable-32.h
+++ b/arch/mips/include/asm/pgtable-32.h
@@ -103,8 +103,8 @@ static inline void pmd_clear(pmd_t *pmdp)
pmd_val(*pmdp) = ((unsigned long) invalid_pte_table);
}
-#if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
-#define pte_page(x) pfn_to_page(pte_pfn(x))
+#if defined(CONFIG_XPA)
+
#define pte_pfn(x) (((unsigned long)((x).pte_high >> _PFN_SHIFT)) | (unsigned long)((x).pte_low << _PAGE_PRESENT_SHIFT))
static inline pte_t
pfn_pte(unsigned long pfn, pgprot_t prot)
@@ -118,9 +118,21 @@ pfn_pte(unsigned long pfn, pgprot_t prot)
return pte;
}
-#else
+#elif defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
-#define pte_page(x) pfn_to_page(pte_pfn(x))
+#define pte_pfn(x) ((unsigned long)((x).pte_high >> 6))
+
+static inline pte_t pfn_pte(unsigned long pfn, pgprot_t prot)
+{
+ pte_t pte;
+
+ pte.pte_high = (pfn << 6) | (pgprot_val(prot) & 0x3f);
+ pte.pte_low = pgprot_val(prot);
+
+ return pte;
+}
+
+#else
#ifdef CONFIG_CPU_VR41XX
#define pte_pfn(x) ((unsigned long)((x).pte >> (PAGE_SHIFT + 2)))
@@ -131,6 +143,8 @@ pfn_pte(unsigned long pfn, pgprot_t prot)
#endif
#endif /* defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32) */
+#define pte_page(x) pfn_to_page(pte_pfn(x))
+
#define __pgd_offset(address) pgd_index(address)
#define __pud_offset(address) (((address) >> PUD_SHIFT) & (PTRS_PER_PUD-1))
#define __pmd_offset(address) (((address) >> PMD_SHIFT) & (PTRS_PER_PMD-1))
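
In the non-XPA 36-bit case above, pte_high carries the PFN shifted up by 6 with the low six protection bits packed underneath; a standalone sketch of the round trip.

#include <stdio.h>

int main(void)
{
        unsigned long pfn = 0x12345, prot = 0x2b;       /* low 6 prot bits only */
        unsigned int pte_high = (pfn << 6) | (prot & 0x3f);

        printf("pte_high %#x -> pfn %#lx\n",
               pte_high, (unsigned long)(pte_high >> 6));       /* pfn 0x12345 */
        return 0;
}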
@@ -166,7 +180,7 @@ pfn_pte(unsigned long pfn, pgprot_t prot)
#else
-#if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
+#if defined(CONFIG_XPA)
/* Swap entries must have VALID and GLOBAL bits cleared. */
#define __swp_type(x) (((x).val >> 4) & 0x1f)
@@ -175,6 +189,15 @@ pfn_pte(unsigned long pfn, pgprot_t prot)
#define __pte_to_swp_entry(pte) ((swp_entry_t) { (pte).pte_high })
#define __swp_entry_to_pte(x) ((pte_t) { 0, (x).val })
+#elif defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
+
+/* Swap entries must have VALID and GLOBAL bits cleared. */
+#define __swp_type(x) (((x).val >> 2) & 0x1f)
+#define __swp_offset(x) ((x).val >> 7)
+#define __swp_entry(type, offset) ((swp_entry_t) { ((type) << 2) | ((offset) << 7) })
+#define __pte_to_swp_entry(pte) ((swp_entry_t) { (pte).pte_high })
+#define __swp_entry_to_pte(x) ((pte_t) { 0, (x).val })
+
#else
/*
* Constraints:
diff --git a/arch/mips/include/asm/pgtable-64.h b/arch/mips/include/asm/pgtable-64.h
index cf661a2fb141..514cbc0a6a67 100644
--- a/arch/mips/include/asm/pgtable-64.h
+++ b/arch/mips/include/asm/pgtable-64.h
@@ -17,7 +17,7 @@
#include <asm/cachectl.h>
#include <asm/fixmap.h>
-#ifdef CONFIG_PAGE_SIZE_64KB
+#if defined(CONFIG_PAGE_SIZE_64KB) && !defined(CONFIG_MIPS_VA_BITS_48)
#include <asm-generic/pgtable-nopmd.h>
#else
#include <asm-generic/pgtable-nopud.h>
@@ -90,7 +90,11 @@
#define PTE_ORDER 0
#endif
#ifdef CONFIG_PAGE_SIZE_16KB
-#define PGD_ORDER 0
+#ifdef CONFIG_MIPS_VA_BITS_48
+#define PGD_ORDER 1
+#else
+#define PGD_ORDER 0
+#endif
#define PUD_ORDER aieeee_attempt_to_allocate_pud
#define PMD_ORDER 0
#define PTE_ORDER 0
@@ -104,7 +108,11 @@
#ifdef CONFIG_PAGE_SIZE_64KB
#define PGD_ORDER 0
#define PUD_ORDER aieeee_attempt_to_allocate_pud
+#ifdef CONFIG_MIPS_VA_BITS_48
+#define PMD_ORDER 0
+#else
#define PMD_ORDER aieeee_attempt_to_allocate_pmd
+#endif
#define PTE_ORDER 0
#endif
@@ -114,11 +122,7 @@
#endif
#define PTRS_PER_PTE ((PAGE_SIZE << PTE_ORDER) / sizeof(pte_t))
-#if PGDIR_SIZE >= TASK_SIZE64
-#define USER_PTRS_PER_PGD (1)
-#else
-#define USER_PTRS_PER_PGD (TASK_SIZE64 / PGDIR_SIZE)
-#endif
+#define USER_PTRS_PER_PGD ((TASK_SIZE64 / PGDIR_SIZE)?(TASK_SIZE64 / PGDIR_SIZE):1)
#define FIRST_USER_ADDRESS 0UL
/*
diff --git a/arch/mips/include/asm/pgtable-bits.h b/arch/mips/include/asm/pgtable-bits.h
index 97b313882678..f88a48cd68b2 100644
--- a/arch/mips/include/asm/pgtable-bits.h
+++ b/arch/mips/include/asm/pgtable-bits.h
@@ -32,149 +32,132 @@
* unpredictable things. The code (when it is written) to deal with
* this problem will be in the update_mmu_cache() code for the r4k.
*/
-#if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
+#if defined(CONFIG_XPA)
/*
- * The following bits are implemented by the TLB hardware
+ * Page table bit offsets used for 64 bit physical addressing on
+ * MIPS32r5 with XPA.
*/
-#define _PAGE_NO_EXEC_SHIFT 0
-#define _PAGE_NO_EXEC (1 << _PAGE_NO_EXEC_SHIFT)
-#define _PAGE_NO_READ_SHIFT (_PAGE_NO_EXEC_SHIFT + 1)
-#define _PAGE_NO_READ (1 << _PAGE_NO_READ_SHIFT)
-#define _PAGE_GLOBAL_SHIFT (_PAGE_NO_READ_SHIFT + 1)
-#define _PAGE_GLOBAL (1 << _PAGE_GLOBAL_SHIFT)
-#define _PAGE_VALID_SHIFT (_PAGE_GLOBAL_SHIFT + 1)
-#define _PAGE_VALID (1 << _PAGE_VALID_SHIFT)
-#define _PAGE_DIRTY_SHIFT (_PAGE_VALID_SHIFT + 1)
-#define _PAGE_DIRTY (1 << _PAGE_DIRTY_SHIFT)
-#define _CACHE_SHIFT (_PAGE_DIRTY_SHIFT + 1)
-#define _CACHE_MASK (7 << _CACHE_SHIFT)
-
-/*
- * The following bits are implemented in software
- */
-#define _PAGE_PRESENT_SHIFT (24)
-#define _PAGE_PRESENT (1 << _PAGE_PRESENT_SHIFT)
-#define _PAGE_READ_SHIFT (_PAGE_PRESENT_SHIFT + 1)
-#define _PAGE_READ (1 << _PAGE_READ_SHIFT)
-#define _PAGE_WRITE_SHIFT (_PAGE_READ_SHIFT + 1)
-#define _PAGE_WRITE (1 << _PAGE_WRITE_SHIFT)
-#define _PAGE_ACCESSED_SHIFT (_PAGE_WRITE_SHIFT + 1)
-#define _PAGE_ACCESSED (1 << _PAGE_ACCESSED_SHIFT)
-#define _PAGE_MODIFIED_SHIFT (_PAGE_ACCESSED_SHIFT + 1)
-#define _PAGE_MODIFIED (1 << _PAGE_MODIFIED_SHIFT)
-
-#define _PFN_SHIFT (PAGE_SHIFT - 12 + _CACHE_SHIFT + 3)
+enum pgtable_bits {
+ /* Used by TLB hardware (placed in EntryLo*) */
+ _PAGE_NO_EXEC_SHIFT,
+ _PAGE_NO_READ_SHIFT,
+ _PAGE_GLOBAL_SHIFT,
+ _PAGE_VALID_SHIFT,
+ _PAGE_DIRTY_SHIFT,
+ _CACHE_SHIFT,
+
+ /* Used only by software (masked out before writing EntryLo*) */
+ _PAGE_PRESENT_SHIFT = 24,
+ _PAGE_WRITE_SHIFT,
+ _PAGE_ACCESSED_SHIFT,
+ _PAGE_MODIFIED_SHIFT,
+};
/*
* Bits for extended EntryLo0/EntryLo1 registers
*/
#define _PFNX_MASK 0xffffff
-#elif defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)
+#elif defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
/*
- * The following bits are implemented in software
+ * Page table bit offsets used for 36 bit physical addressing on MIPS32,
+ * for example with Alchemy or Netlogic XLP/XLR.
*/
-#define _PAGE_PRESENT_SHIFT (0)
-#define _PAGE_PRESENT (1 << _PAGE_PRESENT_SHIFT)
-#define _PAGE_READ_SHIFT (_PAGE_PRESENT_SHIFT + 1)
-#define _PAGE_READ (1 << _PAGE_READ_SHIFT)
-#define _PAGE_WRITE_SHIFT (_PAGE_READ_SHIFT + 1)
-#define _PAGE_WRITE (1 << _PAGE_WRITE_SHIFT)
-#define _PAGE_ACCESSED_SHIFT (_PAGE_WRITE_SHIFT + 1)
-#define _PAGE_ACCESSED (1 << _PAGE_ACCESSED_SHIFT)
-#define _PAGE_MODIFIED_SHIFT (_PAGE_ACCESSED_SHIFT + 1)
-#define _PAGE_MODIFIED (1 << _PAGE_MODIFIED_SHIFT)
+enum pgtable_bits {
+ /* Used by TLB hardware (placed in EntryLo*) */
+ _PAGE_GLOBAL_SHIFT,
+ _PAGE_VALID_SHIFT,
+ _PAGE_DIRTY_SHIFT,
+ _CACHE_SHIFT,
+
+ /* Used only by software (masked out before writing EntryLo*) */
+ _PAGE_PRESENT_SHIFT = _CACHE_SHIFT + 3,
+ _PAGE_NO_READ_SHIFT,
+ _PAGE_WRITE_SHIFT,
+ _PAGE_ACCESSED_SHIFT,
+ _PAGE_MODIFIED_SHIFT,
+};
-/*
- * The following bits are implemented by the TLB hardware
- */
-#define _PAGE_GLOBAL_SHIFT (_PAGE_MODIFIED_SHIFT + 4)
-#define _PAGE_GLOBAL (1 << _PAGE_GLOBAL_SHIFT)
-#define _PAGE_VALID_SHIFT (_PAGE_GLOBAL_SHIFT + 1)
-#define _PAGE_VALID (1 << _PAGE_VALID_SHIFT)
-#define _PAGE_DIRTY_SHIFT (_PAGE_VALID_SHIFT + 1)
-#define _PAGE_DIRTY (1 << _PAGE_DIRTY_SHIFT)
-#define _CACHE_UNCACHED_SHIFT (_PAGE_DIRTY_SHIFT + 1)
-#define _CACHE_UNCACHED (1 << _CACHE_UNCACHED_SHIFT)
-#define _CACHE_MASK _CACHE_UNCACHED
+#elif defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)
-#define _PFN_SHIFT PAGE_SHIFT
+/* Page table bits used for r3k systems */
+enum pgtable_bits {
+ /* Used only by software (writes to EntryLo ignored) */
+ _PAGE_PRESENT_SHIFT,
+ _PAGE_NO_READ_SHIFT,
+ _PAGE_WRITE_SHIFT,
+ _PAGE_ACCESSED_SHIFT,
+ _PAGE_MODIFIED_SHIFT,
+
+ /* Used by TLB hardware (placed in EntryLo) */
+ _PAGE_GLOBAL_SHIFT = 8,
+ _PAGE_VALID_SHIFT,
+ _PAGE_DIRTY_SHIFT,
+ _CACHE_UNCACHED_SHIFT,
+};
#else
-/*
- * Below are the "Normal" R4K cases
- */
-/*
- * The following bits are implemented in software
- */
-#define _PAGE_PRESENT_SHIFT 0
+/* Page table bits used for r4k systems */
+enum pgtable_bits {
+ /* Used only by software (masked out before writing EntryLo*) */
+ _PAGE_PRESENT_SHIFT,
+#if !defined(CONFIG_CPU_HAS_RIXI)
+ _PAGE_NO_READ_SHIFT,
+#endif
+ _PAGE_WRITE_SHIFT,
+ _PAGE_ACCESSED_SHIFT,
+ _PAGE_MODIFIED_SHIFT,
+#if defined(CONFIG_64BIT) && defined(CONFIG_MIPS_HUGE_TLB_SUPPORT)
+ _PAGE_HUGE_SHIFT,
+#endif
+
+ /* Used by TLB hardware (placed in EntryLo*) */
+#if defined(CONFIG_CPU_HAS_RIXI)
+ _PAGE_NO_EXEC_SHIFT,
+ _PAGE_NO_READ_SHIFT,
+#endif
+ _PAGE_GLOBAL_SHIFT,
+ _PAGE_VALID_SHIFT,
+ _PAGE_DIRTY_SHIFT,
+ _CACHE_SHIFT,
+};
+
+#endif /* defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32) */
+
+/* Used only by software */
#define _PAGE_PRESENT (1 << _PAGE_PRESENT_SHIFT)
-/* R2 or later cores check for RI/XI support to determine _PAGE_READ */
-#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
-#define _PAGE_WRITE_SHIFT (_PAGE_PRESENT_SHIFT + 1)
-#define _PAGE_WRITE (1 << _PAGE_WRITE_SHIFT)
-#else
-#define _PAGE_READ_SHIFT (_PAGE_PRESENT_SHIFT + 1)
-#define _PAGE_READ (1 << _PAGE_READ_SHIFT)
-#define _PAGE_WRITE_SHIFT (_PAGE_READ_SHIFT + 1)
#define _PAGE_WRITE (1 << _PAGE_WRITE_SHIFT)
-#endif
-#define _PAGE_ACCESSED_SHIFT (_PAGE_WRITE_SHIFT + 1)
#define _PAGE_ACCESSED (1 << _PAGE_ACCESSED_SHIFT)
-#define _PAGE_MODIFIED_SHIFT (_PAGE_ACCESSED_SHIFT + 1)
#define _PAGE_MODIFIED (1 << _PAGE_MODIFIED_SHIFT)
-
#if defined(CONFIG_64BIT) && defined(CONFIG_MIPS_HUGE_TLB_SUPPORT)
-/* Huge TLB page */
-#define _PAGE_HUGE_SHIFT (_PAGE_MODIFIED_SHIFT + 1)
-#define _PAGE_HUGE (1 << _PAGE_HUGE_SHIFT)
-#endif /* CONFIG_64BIT && CONFIG_MIPS_HUGE_TLB_SUPPORT */
-
-#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
-/* XI - page cannot be executed */
-#ifdef _PAGE_HUGE_SHIFT
-#define _PAGE_NO_EXEC_SHIFT (_PAGE_HUGE_SHIFT + 1)
-#else
-#define _PAGE_NO_EXEC_SHIFT (_PAGE_MODIFIED_SHIFT + 1)
+# define _PAGE_HUGE (1 << _PAGE_HUGE_SHIFT)
#endif
-#define _PAGE_NO_EXEC (cpu_has_rixi ? (1 << _PAGE_NO_EXEC_SHIFT) : 0)
-
-/* RI - page cannot be read */
-#define _PAGE_READ_SHIFT (_PAGE_NO_EXEC_SHIFT + 1)
-#define _PAGE_READ (cpu_has_rixi ? 0 : (1 << _PAGE_READ_SHIFT))
-#define _PAGE_NO_READ_SHIFT _PAGE_READ_SHIFT
-#define _PAGE_NO_READ (cpu_has_rixi ? (1 << _PAGE_READ_SHIFT) : 0)
-#endif /* defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6) */
-
-#if defined(_PAGE_NO_READ_SHIFT)
-#define _PAGE_GLOBAL_SHIFT (_PAGE_NO_READ_SHIFT + 1)
-#elif defined(_PAGE_HUGE_SHIFT)
-#define _PAGE_GLOBAL_SHIFT (_PAGE_HUGE_SHIFT + 1)
-#else
-#define _PAGE_GLOBAL_SHIFT (_PAGE_MODIFIED_SHIFT + 1)
+
+/* Used by TLB hardware (placed in EntryLo*) */
+#if defined(CONFIG_XPA)
+# define _PAGE_NO_EXEC (1 << _PAGE_NO_EXEC_SHIFT)
+#elif defined(CONFIG_CPU_HAS_RIXI)
+# define _PAGE_NO_EXEC (cpu_has_rixi ? (1 << _PAGE_NO_EXEC_SHIFT) : 0)
#endif
+#define _PAGE_NO_READ (1 << _PAGE_NO_READ_SHIFT)
#define _PAGE_GLOBAL (1 << _PAGE_GLOBAL_SHIFT)
-
-#define _PAGE_VALID_SHIFT (_PAGE_GLOBAL_SHIFT + 1)
#define _PAGE_VALID (1 << _PAGE_VALID_SHIFT)
-#define _PAGE_DIRTY_SHIFT (_PAGE_VALID_SHIFT + 1)
#define _PAGE_DIRTY (1 << _PAGE_DIRTY_SHIFT)
-#define _CACHE_SHIFT (_PAGE_DIRTY_SHIFT + 1)
-#define _CACHE_MASK (7 << _CACHE_SHIFT)
-
-#define _PFN_SHIFT (PAGE_SHIFT - 12 + _CACHE_SHIFT + 3)
-
-#endif /* defined(CONFIG_PHYS_ADDR_T_64BIT && defined(CONFIG_CPU_MIPS32) */
+#if defined(CONFIG_CPU_R3000) || defined(CONFIG_CPU_TX39XX)
+# define _CACHE_UNCACHED (1 << _CACHE_UNCACHED_SHIFT)
+# define _CACHE_MASK _CACHE_UNCACHED
+# define _PFN_SHIFT PAGE_SHIFT
+#else
+# define _CACHE_MASK (7 << _CACHE_SHIFT)
+# define _PFN_SHIFT (PAGE_SHIFT - 12 + _CACHE_SHIFT + 3)
+#endif
#ifndef _PAGE_NO_EXEC
#define _PAGE_NO_EXEC 0
#endif
-#ifndef _PAGE_NO_READ
-#define _PAGE_NO_READ 0
-#endif
#define _PAGE_SILENT_READ _PAGE_VALID
#define _PAGE_SILENT_WRITE _PAGE_DIRTY
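
To see how the enum-based layout resolves, here is a standalone mirror of the r4k case without RIXI and without huge pages (names prefixed so they do not clash with the real definitions): the software bits land at positions 0..4 and the hardware bits follow immediately.

/* Standalone mirror of the r4k, non-RIXI, non-huge layout above. */
enum {
        X_PRESENT,      /* 0 */
        X_NO_READ,      /* 1 */
        X_WRITE,        /* 2 */
        X_ACCESSED,     /* 3 */
        X_MODIFIED,     /* 4 */
        X_GLOBAL,       /* 5 */
        X_VALID,        /* 6 */
        X_DIRTY,        /* 7 */
        X_CACHE,        /* 8: start of the 3-bit cache field */
};

#define X_PAGE_GLOBAL   (1 << X_GLOBAL)         /* 0x20 */
#define X_PAGE_DIRTY    (1 << X_DIRTY)          /* 0x80 */
#define X_CACHE_MASK    (7 << X_CACHE)          /* 0x700 */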
@@ -191,14 +174,13 @@
*/
-#ifndef __ASSEMBLY__
/*
* pte_to_entrylo converts a page table entry (PTE) into a Mips
* entrylo0/1 value.
*/
static inline uint64_t pte_to_entrylo(unsigned long pte_val)
{
-#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
+#ifdef CONFIG_CPU_HAS_RIXI
if (cpu_has_rixi) {
int sa;
#ifdef CONFIG_32BIT
@@ -218,7 +200,6 @@ static inline uint64_t pte_to_entrylo(unsigned long pte_val)
return pte_val >> _PAGE_GLOBAL_SHIFT;
}
-#endif
/*
* Cache attributes
@@ -274,7 +255,7 @@ static inline uint64_t pte_to_entrylo(unsigned long pte_val)
#define _CACHE_UNCACHED_ACCELERATED (7<<_CACHE_SHIFT)
#endif
-#define __READABLE (_PAGE_SILENT_READ | _PAGE_READ | _PAGE_ACCESSED)
+#define __READABLE (_PAGE_SILENT_READ | _PAGE_ACCESSED)
#define __WRITEABLE (_PAGE_SILENT_WRITE | _PAGE_WRITE | _PAGE_MODIFIED)
#define _PAGE_CHG_MASK (_PAGE_ACCESSED | _PAGE_MODIFIED | \
diff --git a/arch/mips/include/asm/pgtable.h b/arch/mips/include/asm/pgtable.h
index 9a4fe0133ff1..a6b611f1da43 100644
--- a/arch/mips/include/asm/pgtable.h
+++ b/arch/mips/include/asm/pgtable.h
@@ -23,18 +23,19 @@
struct mm_struct;
struct vm_area_struct;
-#define PAGE_NONE __pgprot(_PAGE_PRESENT | _CACHE_CACHABLE_NONCOHERENT)
-#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_WRITE | _PAGE_READ | \
+#define PAGE_NONE __pgprot(_PAGE_PRESENT | _PAGE_NO_READ | \
+ _CACHE_CACHABLE_NONCOHERENT)
+#define PAGE_SHARED __pgprot(_PAGE_PRESENT | _PAGE_WRITE | \
_page_cachable_default)
-#define PAGE_COPY __pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_NO_EXEC | \
+#define PAGE_COPY __pgprot(_PAGE_PRESENT | _PAGE_NO_EXEC | \
_page_cachable_default)
-#define PAGE_READONLY __pgprot(_PAGE_PRESENT | _PAGE_READ | \
+#define PAGE_READONLY __pgprot(_PAGE_PRESENT | \
_page_cachable_default)
#define PAGE_KERNEL __pgprot(_PAGE_PRESENT | __READABLE | __WRITEABLE | \
_PAGE_GLOBAL | _page_cachable_default)
#define PAGE_KERNEL_NC __pgprot(_PAGE_PRESENT | __READABLE | __WRITEABLE | \
_PAGE_GLOBAL | _CACHE_CACHABLE_NONCOHERENT)
-#define PAGE_USERIO __pgprot(_PAGE_PRESENT | _PAGE_READ | _PAGE_WRITE | \
+#define PAGE_USERIO __pgprot(_PAGE_PRESENT | _PAGE_WRITE | \
_page_cachable_default)
#define PAGE_KERNEL_UNCACHED __pgprot(_PAGE_PRESENT | __READABLE | \
__WRITEABLE | _PAGE_GLOBAL | _CACHE_UNCACHED)
@@ -127,10 +128,19 @@ do { \
} \
} while(0)
+static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, pte_t pteval);
+
#if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
-#define pte_none(pte) (!(((pte).pte_high) & ~_PAGE_GLOBAL))
+#ifdef CONFIG_XPA
+# define pte_none(pte) (!(((pte).pte_high) & ~_PAGE_GLOBAL))
+#else
+# define pte_none(pte) (!(((pte).pte_low | (pte).pte_high) & ~_PAGE_GLOBAL))
+#endif
+
#define pte_present(pte) ((pte).pte_low & _PAGE_PRESENT)
+#define pte_no_exec(pte) ((pte).pte_low & _PAGE_NO_EXEC)
static inline void set_pte(pte_t *ptep, pte_t pte)
{
@@ -138,17 +148,23 @@ static inline void set_pte(pte_t *ptep, pte_t pte)
smp_wmb();
ptep->pte_low = pte.pte_low;
+#ifdef CONFIG_XPA
if (pte.pte_high & _PAGE_GLOBAL) {
+#else
+ if (pte.pte_low & _PAGE_GLOBAL) {
+#endif
pte_t *buddy = ptep_buddy(ptep);
/*
* Make sure the buddy is global too (if it's !none,
* it better already be global)
*/
- if (pte_none(*buddy))
+ if (pte_none(*buddy)) {
+ if (!config_enabled(CONFIG_XPA))
+ buddy->pte_low |= _PAGE_GLOBAL;
buddy->pte_high |= _PAGE_GLOBAL;
+ }
}
}
-#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
{
@@ -156,8 +172,13 @@ static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *pt
htw_stop();
/* Preserve global status for the pair */
- if (ptep_buddy(ptep)->pte_high & _PAGE_GLOBAL)
- null.pte_high = _PAGE_GLOBAL;
+ if (config_enabled(CONFIG_XPA)) {
+ if (ptep_buddy(ptep)->pte_high & _PAGE_GLOBAL)
+ null.pte_high = _PAGE_GLOBAL;
+ } else {
+ if (ptep_buddy(ptep)->pte_low & _PAGE_GLOBAL)
+ null.pte_low = null.pte_high = _PAGE_GLOBAL;
+ }
set_pte_at(mm, addr, ptep, null);
htw_start();
@@ -166,6 +187,7 @@ static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *pt
#define pte_none(pte) (!(pte_val(pte) & ~_PAGE_GLOBAL))
#define pte_present(pte) (pte_val(pte) & _PAGE_PRESENT)
+#define pte_no_exec(pte) (pte_val(pte) & _PAGE_NO_EXEC)
/*
* Certain architectures need to do special things when pte's
@@ -187,30 +209,42 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
* For SMP, multiple CPUs can race, so we need to do
* this atomically.
*/
-#ifdef CONFIG_64BIT
-#define LL_INSN "lld"
-#define SC_INSN "scd"
-#else /* CONFIG_32BIT */
-#define LL_INSN "ll"
-#define SC_INSN "sc"
-#endif
unsigned long page_global = _PAGE_GLOBAL;
unsigned long tmp;
- __asm__ __volatile__ (
- " .set push\n"
- " .set noreorder\n"
- "1: " LL_INSN " %[tmp], %[buddy]\n"
- " bnez %[tmp], 2f\n"
- " or %[tmp], %[tmp], %[global]\n"
- " " SC_INSN " %[tmp], %[buddy]\n"
- " beqz %[tmp], 1b\n"
- " nop\n"
- "2:\n"
- " .set pop"
- : [buddy] "+m" (buddy->pte),
- [tmp] "=&r" (tmp)
+ if (kernel_uses_llsc && R10000_LLSC_WAR) {
+ __asm__ __volatile__ (
+ " .set arch=r4000 \n"
+ " .set push \n"
+ " .set noreorder \n"
+ "1:" __LL "%[tmp], %[buddy] \n"
+ " bnez %[tmp], 2f \n"
+ " or %[tmp], %[tmp], %[global] \n"
+ __SC "%[tmp], %[buddy] \n"
+ " beqzl %[tmp], 1b \n"
+ " nop \n"
+ "2: \n"
+ " .set pop \n"
+ " .set mips0 \n"
+ : [buddy] "+m" (buddy->pte), [tmp] "=&r" (tmp)
+ : [global] "r" (page_global));
+ } else if (kernel_uses_llsc) {
+ __asm__ __volatile__ (
+ " .set "MIPS_ISA_ARCH_LEVEL" \n"
+ " .set push \n"
+ " .set noreorder \n"
+ "1:" __LL "%[tmp], %[buddy] \n"
+ " bnez %[tmp], 2f \n"
+ " or %[tmp], %[tmp], %[global] \n"
+ __SC "%[tmp], %[buddy] \n"
+ " beqz %[tmp], 1b \n"
+ " nop \n"
+ "2: \n"
+ " .set pop \n"
+ " .set mips0 \n"
+ : [buddy] "+m" (buddy->pte), [tmp] "=&r" (tmp)
: [global] "r" (page_global));
+ }
#else /* !CONFIG_SMP */
if (pte_none(*buddy))
pte_val(*buddy) = pte_val(*buddy) | _PAGE_GLOBAL;
@@ -218,7 +252,6 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
}
#endif
}
-#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
{
@@ -234,6 +267,22 @@ static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *pt
}
#endif
+static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, pte_t pteval)
+{
+ extern void __update_cache(unsigned long address, pte_t pte);
+
+ if (!pte_present(pteval))
+ goto cache_sync_done;
+
+ if (pte_present(*ptep) && (pte_pfn(*ptep) == pte_pfn(pteval)))
+ goto cache_sync_done;
+
+ __update_cache(addr, pteval);
+cache_sync_done:
+ set_pte(ptep, pteval);
+}
+
/*
* (pmds are folded into puds so this doesn't get actually called,
* but the define is needed for a generic inline function.)
@@ -270,6 +319,8 @@ static inline int pte_young(pte_t pte) { return pte.pte_low & _PAGE_ACCESSED; }
static inline pte_t pte_wrprotect(pte_t pte)
{
pte.pte_low &= ~_PAGE_WRITE;
+ if (!config_enabled(CONFIG_XPA))
+ pte.pte_low &= ~_PAGE_SILENT_WRITE;
pte.pte_high &= ~_PAGE_SILENT_WRITE;
return pte;
}
@@ -277,6 +328,8 @@ static inline pte_t pte_wrprotect(pte_t pte)
static inline pte_t pte_mkclean(pte_t pte)
{
pte.pte_low &= ~_PAGE_MODIFIED;
+ if (!config_enabled(CONFIG_XPA))
+ pte.pte_low &= ~_PAGE_SILENT_WRITE;
pte.pte_high &= ~_PAGE_SILENT_WRITE;
return pte;
}
@@ -284,6 +337,8 @@ static inline pte_t pte_mkclean(pte_t pte)
static inline pte_t pte_mkold(pte_t pte)
{
pte.pte_low &= ~_PAGE_ACCESSED;
+ if (!config_enabled(CONFIG_XPA))
+ pte.pte_low &= ~_PAGE_SILENT_READ;
pte.pte_high &= ~_PAGE_SILENT_READ;
return pte;
}
@@ -291,24 +346,33 @@ static inline pte_t pte_mkold(pte_t pte)
static inline pte_t pte_mkwrite(pte_t pte)
{
pte.pte_low |= _PAGE_WRITE;
- if (pte.pte_low & _PAGE_MODIFIED)
+ if (pte.pte_low & _PAGE_MODIFIED) {
+ if (!config_enabled(CONFIG_XPA))
+ pte.pte_low |= _PAGE_SILENT_WRITE;
pte.pte_high |= _PAGE_SILENT_WRITE;
+ }
return pte;
}
static inline pte_t pte_mkdirty(pte_t pte)
{
pte.pte_low |= _PAGE_MODIFIED;
- if (pte.pte_low & _PAGE_WRITE)
+ if (pte.pte_low & _PAGE_WRITE) {
+ if (!config_enabled(CONFIG_XPA))
+ pte.pte_low |= _PAGE_SILENT_WRITE;
pte.pte_high |= _PAGE_SILENT_WRITE;
+ }
return pte;
}
static inline pte_t pte_mkyoung(pte_t pte)
{
pte.pte_low |= _PAGE_ACCESSED;
- if (pte.pte_low & _PAGE_READ)
+ if (!(pte.pte_low & _PAGE_NO_READ)) {
+ if (!config_enabled(CONFIG_XPA))
+ pte.pte_low |= _PAGE_SILENT_READ;
pte.pte_high |= _PAGE_SILENT_READ;
+ }
return pte;
}
#else
@@ -353,13 +417,8 @@ static inline pte_t pte_mkdirty(pte_t pte)
static inline pte_t pte_mkyoung(pte_t pte)
{
pte_val(pte) |= _PAGE_ACCESSED;
-#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
if (!(pte_val(pte) & _PAGE_NO_READ))
pte_val(pte) |= _PAGE_SILENT_READ;
- else
-#endif
- if (pte_val(pte) & _PAGE_READ)
- pte_val(pte) |= _PAGE_SILENT_READ;
return pte;
}
@@ -411,7 +470,7 @@ static inline pgprot_t pgprot_writecombine(pgprot_t _prot)
*/
#define mk_pte(page, pgprot) pfn_pte(page_to_pfn(page), (pgprot))
-#if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
+#if defined(CONFIG_XPA)
static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
pte.pte_low &= (_PAGE_MODIFIED | _PAGE_ACCESSED | _PFNX_MASK);
@@ -420,6 +479,15 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
pte.pte_high |= pgprot_val(newprot) & ~_PFN_MASK;
return pte;
}
+#elif defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
+static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+{
+ pte.pte_low &= _PAGE_CHG_MASK;
+ pte.pte_high &= (_PFN_MASK | _CACHE_MASK);
+ pte.pte_low |= pgprot_val(newprot);
+ pte.pte_high |= pgprot_val(newprot) & ~(_PFN_MASK | _CACHE_MASK);
+ return pte;
+}
#else
static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
{
@@ -430,15 +498,12 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
extern void __update_tlb(struct vm_area_struct *vma, unsigned long address,
pte_t pte);
-extern void __update_cache(struct vm_area_struct *vma, unsigned long address,
- pte_t pte);
static inline void update_mmu_cache(struct vm_area_struct *vma,
unsigned long address, pte_t *ptep)
{
pte_t pte = *ptep;
__update_tlb(vma, address, pte);
- __update_cache(vma, address, pte);
}
static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
@@ -468,6 +533,7 @@ static inline int io_remap_pfn_range(struct vm_area_struct *vma,
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#define has_transparent_hugepage has_transparent_hugepage
extern int has_transparent_hugepage(void);
static inline int pmd_trans_huge(pmd_t pmd)
@@ -542,13 +608,8 @@ static inline pmd_t pmd_mkyoung(pmd_t pmd)
{
pmd_val(pmd) |= _PAGE_ACCESSED;
-#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
if (!(pmd_val(pmd) & _PAGE_NO_READ))
pmd_val(pmd) |= _PAGE_SILENT_READ;
- else
-#endif
- if (pmd_val(pmd) & _PAGE_READ)
- pmd_val(pmd) |= _PAGE_SILENT_READ;
return pmd;
}
diff --git a/arch/mips/include/asm/processor.h b/arch/mips/include/asm/processor.h
index 041153f5cf93..7e78b6208d7d 100644
--- a/arch/mips/include/asm/processor.h
+++ b/arch/mips/include/asm/processor.h
@@ -63,7 +63,11 @@ extern unsigned int vced_count, vcei_count;
* 8192EB ...
*/
#define TASK_SIZE32 0x7fff8000UL
-#define TASK_SIZE64 0x10000000000UL
+#ifdef CONFIG_MIPS_VA_BITS_48
+#define TASK_SIZE64 (0x1UL << ((cpu_data[0].vmbits>48)?48:cpu_data[0].vmbits))
+#else
+#define TASK_SIZE64 0x10000000000UL
+#endif
#define TASK_SIZE (test_thread_flag(TIF_32BIT_ADDR) ? TASK_SIZE32 : TASK_SIZE64)
#define STACK_TOP_MAX TASK_SIZE64
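
A quick check of the new arithmetic: 0x10000000000UL is 1UL << 40, so a CPU whose cpu_data[0].vmbits reports 40 virtual address bits still ends up with the old fixed limit, while anything reporting 48 or more bits is clamped to 1UL << 48.
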
@@ -355,6 +359,10 @@ extern unsigned long thread_saved_pc(struct task_struct *tsk);
*/
extern void start_thread(struct pt_regs * regs, unsigned long pc, unsigned long sp);
+static inline void flush_thread(void)
+{
+}
+
unsigned long get_wchan(struct task_struct *p);
#define __KSTK_TOS(tsk) ((unsigned long)task_stack_page(tsk) + \
diff --git a/arch/mips/include/asm/seccomp.h b/arch/mips/include/asm/seccomp.h
index 1d8a2e2c75c1..684fb3a12ed3 100644
--- a/arch/mips/include/asm/seccomp.h
+++ b/arch/mips/include/asm/seccomp.h
@@ -2,27 +2,32 @@
#include <linux/unistd.h>
-/*
- * Kludge alert:
- *
- * The generic seccomp code currently allows only a single compat ABI. Until
- * this is fixed we priorize O32 as the compat ABI over N32.
- */
-#ifdef CONFIG_MIPS32_O32
-
-#define __NR_seccomp_read_32 4003
-#define __NR_seccomp_write_32 4004
-#define __NR_seccomp_exit_32 4001
-#define __NR_seccomp_sigreturn_32 4193 /* rt_sigreturn */
-
-#elif defined(CONFIG_MIPS32_N32)
-
-#define __NR_seccomp_read_32 6000
-#define __NR_seccomp_write_32 6001
-#define __NR_seccomp_exit_32 6058
-#define __NR_seccomp_sigreturn_32 6211 /* rt_sigreturn */
-
-#endif /* CONFIG_MIPS32_O32 */
+#ifdef CONFIG_COMPAT
+static inline const int *get_compat_mode1_syscalls(void)
+{
+ static const int syscalls_O32[] = {
+ __NR_O32_Linux + 3, __NR_O32_Linux + 4,
+ __NR_O32_Linux + 1, __NR_O32_Linux + 193,
+ 0, /* null terminated */
+ };
+ static const int syscalls_N32[] = {
+ __NR_N32_Linux + 0, __NR_N32_Linux + 1,
+ __NR_N32_Linux + 58, __NR_N32_Linux + 211,
+ 0, /* null terminated */
+ };
+
+ if (config_enabled(CONFIG_MIPS32_O32) && test_thread_flag(TIF_32BIT_REGS))
+ return syscalls_O32;
+
+ if (config_enabled(CONFIG_MIPS32_N32))
+ return syscalls_N32;
+
+ BUG();
+}
+
+#define get_compat_mode1_syscalls get_compat_mode1_syscalls
+
+#endif /* CONFIG_COMPAT */
#include <asm-generic/seccomp.h>
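
The offsets above reproduce the removed constants: __NR_O32_Linux is 4000, so 4000 + 3 = 4003 (read), 4000 + 4 = 4004 (write), 4000 + 1 = 4001 (exit) and 4000 + 193 = 4193 (rt_sigreturn); likewise __NR_N32_Linux is 6000, giving 6000, 6001, 6058 and 6211 for the N32 table.
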
diff --git a/arch/mips/include/asm/sgi/hpc3.h b/arch/mips/include/asm/sgi/hpc3.h
index 4a9c99050c13..c0e3dc0293a7 100644
--- a/arch/mips/include/asm/sgi/hpc3.h
+++ b/arch/mips/include/asm/sgi/hpc3.h
@@ -39,7 +39,7 @@ struct hpc3_pbus_dmacregs {
volatile u32 pbdma_dptr; /* pbus dma channel desc ptr */
u32 _unused0[0x1000/4 - 2]; /* padding */
volatile u32 pbdma_ctrl; /* pbus dma channel control register has
- * copletely different meaning for read
+ * completely different meaning for read
* compared with write */
/* read */
#define HPC3_PDMACTRL_INT 0x00000001 /* interrupt (cleared after read) */
diff --git a/arch/mips/include/asm/sibyte/bcm1480_regs.h b/arch/mips/include/asm/sibyte/bcm1480_regs.h
index ec0dacf6f0cb..32a84837b8fa 100644
--- a/arch/mips/include/asm/sibyte/bcm1480_regs.h
+++ b/arch/mips/include/asm/sibyte/bcm1480_regs.h
@@ -415,8 +415,8 @@
(cpu)*BCM1480_IMR_ALIAS_MAILBOX_SPACING)
#define A_BCM1480_IMR_ALIAS_MAILBOX_REGISTER(cpu, reg) (A_BCM1480_IMR_ALIAS_MAILBOX(cpu)+(reg))
-#define R_BCM1480_IMR_ALIAS_MAILBOX_0 0x0000 /* 0x0x0 */
-#define R_BCM1480_IMR_ALIAS_MAILBOX_0_SET 0x0008 /* 0x0x8 */
+#define R_BCM1480_IMR_ALIAS_MAILBOX_0 0x0000
+#define R_BCM1480_IMR_ALIAS_MAILBOX_0_SET 0x0008
/*
* these macros work together to build the address of a mailbox
diff --git a/arch/mips/include/asm/signal.h b/arch/mips/include/asm/signal.h
index 003e273eff4c..2292373ff11a 100644
--- a/arch/mips/include/asm/signal.h
+++ b/arch/mips/include/asm/signal.h
@@ -11,11 +11,17 @@
#include <uapi/asm/signal.h>
+#ifdef CONFIG_MIPS32_COMPAT
+extern struct mips_abi mips_abi_32;
-#ifdef CONFIG_TRAD_SIGNALS
-#define sig_uses_siginfo(ka) ((ka)->sa.sa_flags & SA_SIGINFO)
+#define sig_uses_siginfo(ka, abi) \
+ ((abi != &mips_abi_32) ? 1 : \
+ ((ka)->sa.sa_flags & SA_SIGINFO))
#else
-#define sig_uses_siginfo(ka) (1)
+#define sig_uses_siginfo(ka, abi) \
+ (config_enabled(CONFIG_64BIT) ? 1 : \
+ (config_enabled(CONFIG_TRAD_SIGNALS) ? \
+ ((ka)->sa.sa_flags & SA_SIGINFO) : 1) )
#endif
#include <asm/sigcontext.h>
diff --git a/arch/mips/include/asm/smp-cps.h b/arch/mips/include/asm/smp-cps.h
index 326c16ebd589..2ae1f61a4a95 100644
--- a/arch/mips/include/asm/smp-cps.h
+++ b/arch/mips/include/asm/smp-cps.h
@@ -29,7 +29,7 @@ extern struct core_boot_config *mips_cps_core_bootcfg;
extern void mips_cps_core_entry(void);
extern void mips_cps_core_init(void);
-extern struct vpe_boot_config *mips_cps_boot_vpes(void);
+extern void mips_cps_boot_vpes(struct core_boot_config *cfg, unsigned vpe);
extern void mips_cps_pm_save(void);
extern void mips_cps_pm_restore(void);
diff --git a/arch/mips/include/asm/switch_to.h b/arch/mips/include/asm/switch_to.h
index 28b5d84a5022..ebb5c0f2f90d 100644
--- a/arch/mips/include/asm/switch_to.h
+++ b/arch/mips/include/asm/switch_to.h
@@ -105,7 +105,7 @@ do { \
__clear_software_ll_bit(); \
if (cpu_has_userlocal) \
write_c0_userlocal(task_thread_info(next)->tp_value); \
- __restore_watch(); \
+ __restore_watch(next); \
(last) = resume(prev, next, task_thread_info(next)); \
} while (0)
diff --git a/arch/mips/include/asm/uasm.h b/arch/mips/include/asm/uasm.h
index fc1cdd25fcda..b6ecfeee4dbe 100644
--- a/arch/mips/include/asm/uasm.h
+++ b/arch/mips/include/asm/uasm.h
@@ -171,7 +171,8 @@ Ip_u2u1(_wsbh);
Ip_u3u1u2(_xor);
Ip_u2u1u3(_xori);
Ip_u2u1(_yield);
-
+Ip_u1u2(_ldpte);
+Ip_u2u1u3(_lddir);
/* Handle labels. */
struct uasm_label {
diff --git a/arch/mips/include/asm/watch.h b/arch/mips/include/asm/watch.h
index 20126ec79359..6ffe3eadf105 100644
--- a/arch/mips/include/asm/watch.h
+++ b/arch/mips/include/asm/watch.h
@@ -12,21 +12,21 @@
#include <asm/mipsregs.h>
-void mips_install_watch_registers(void);
+void mips_install_watch_registers(struct task_struct *t);
void mips_read_watch_registers(void);
void mips_clear_watch_registers(void);
void mips_probe_watch_registers(struct cpuinfo_mips *c);
#ifdef CONFIG_HARDWARE_WATCHPOINTS
-#define __restore_watch() do { \
+#define __restore_watch(task) do { \
if (unlikely(test_bit(TIF_LOAD_WATCH, \
- &current_thread_info()->flags))) { \
- mips_install_watch_registers(); \
+ &task_thread_info(task)->flags))) { \
+ mips_install_watch_registers(task); \
} \
} while (0)
#else
-#define __restore_watch() do {} while (0)
+#define __restore_watch(task) do {} while (0)
#endif
#endif /* _ASM_WATCH_H */
diff --git a/arch/mips/include/uapi/asm/inst.h b/arch/mips/include/uapi/asm/inst.h
index ddea53e3a9bb..8051f9aa1379 100644
--- a/arch/mips/include/uapi/asm/inst.h
+++ b/arch/mips/include/uapi/asm/inst.h
@@ -167,6 +167,7 @@ enum cop1_sdw_func {
fceill_op = 0x0a, ffloorl_op = 0x0b,
fround_op = 0x0c, ftrunc_op = 0x0d,
fceil_op = 0x0e, ffloor_op = 0x0f,
+ fsel_op = 0x10,
fmovc_op = 0x11, fmovz_op = 0x12,
fmovn_op = 0x13, fseleqz_op = 0x14,
frecip_op = 0x15, frsqrt_op = 0x16,
@@ -204,6 +205,16 @@ enum mad_func {
};
/*
+ * func field for page table walker (Loongson-3).
+ */
+enum ptw_func {
+ lwdir_op = 0x00,
+ lwpte_op = 0x01,
+ lddir_op = 0x02,
+ ldpte_op = 0x03,
+};
+
+/*
* func field for special3 lx opcodes (Cavium Octeon).
*/
enum lx_func {
diff --git a/arch/mips/include/uapi/asm/siginfo.h b/arch/mips/include/uapi/asm/siginfo.h
index cc49dc240d67..8069cf766603 100644
--- a/arch/mips/include/uapi/asm/siginfo.h
+++ b/arch/mips/include/uapi/asm/siginfo.h
@@ -28,7 +28,7 @@
#define __ARCH_SIGSYS
-#include <uapi/asm-generic/siginfo.h>
+#include <asm-generic/siginfo.h>
/* We can't use generic siginfo_t, because our si_code and si_errno are swapped */
typedef struct siginfo {
@@ -42,13 +42,13 @@ typedef struct siginfo {
/* kill() */
struct {
- pid_t _pid; /* sender's pid */
+ __kernel_pid_t _pid; /* sender's pid */
__ARCH_SI_UID_T _uid; /* sender's uid */
} _kill;
/* POSIX.1b timers */
struct {
- timer_t _tid; /* timer id */
+ __kernel_timer_t _tid; /* timer id */
int _overrun; /* overrun count */
char _pad[sizeof( __ARCH_SI_UID_T) - sizeof(int)];
sigval_t _sigval; /* same as below */
@@ -57,26 +57,26 @@ typedef struct siginfo {
/* POSIX.1b signals */
struct {
- pid_t _pid; /* sender's pid */
+ __kernel_pid_t _pid; /* sender's pid */
__ARCH_SI_UID_T _uid; /* sender's uid */
sigval_t _sigval;
} _rt;
/* SIGCHLD */
struct {
- pid_t _pid; /* which child */
+ __kernel_pid_t _pid; /* which child */
__ARCH_SI_UID_T _uid; /* sender's uid */
int _status; /* exit code */
- clock_t _utime;
- clock_t _stime;
+ __kernel_clock_t _utime;
+ __kernel_clock_t _stime;
} _sigchld;
/* IRIX SIGCHLD */
struct {
- pid_t _pid; /* which child */
- clock_t _utime;
+ __kernel_pid_t _pid; /* which child */
+ __kernel_clock_t _utime;
int _status; /* exit code */
- clock_t _stime;
+ __kernel_clock_t _stime;
} _irix_sigchld;
/* SIGILL, SIGFPE, SIGSEGV, SIGBUS */
@@ -123,6 +123,4 @@ typedef struct siginfo {
#define SI_TIMER __SI_CODE(__SI_TIMER, -3) /* sent by timer expiration */
#define SI_MESGQ __SI_CODE(__SI_MESGQ, -4) /* sent by real time mesq state change */
-#include <asm-generic/siginfo.h>
-
#endif /* _UAPI_ASM_SIGINFO_H */
diff --git a/arch/mips/jz4740/board-qi_lb60.c b/arch/mips/jz4740/board-qi_lb60.c
index 934b15b5b575..258fd03c9ef5 100644
--- a/arch/mips/jz4740/board-qi_lb60.c
+++ b/arch/mips/jz4740/board-qi_lb60.c
@@ -39,8 +39,6 @@
#include "clock.h"
-static bool is_avt2;
-
/* GPIOs */
#define QI_LB60_GPIO_SD_CD JZ_GPIO_PORTD(0)
#define QI_LB60_GPIO_SD_VCC_EN_N JZ_GPIO_PORTD(2)
@@ -50,20 +48,6 @@ static bool is_avt2;
#define QI_LB60_GPIO_KEYIN8 JZ_GPIO_PORTD(26)
/* NAND */
-static struct nand_ecclayout qi_lb60_ecclayout_1gb = {
- .eccbytes = 36,
- .eccpos = {
- 6, 7, 8, 9, 10, 11, 12, 13,
- 14, 15, 16, 17, 18, 19, 20, 21,
- 22, 23, 24, 25, 26, 27, 28, 29,
- 30, 31, 32, 33, 34, 35, 36, 37,
- 38, 39, 40, 41
- },
- .oobfree = {
- { .offset = 2, .length = 4 },
- { .offset = 42, .length = 22 }
- },
-};
/* Early prototypes of the QI LB60 had only 1GB of NAND.
* In order to support these devices as well the partition and ecc layout is
@@ -86,25 +70,6 @@ static struct mtd_partition qi_lb60_partitions_1gb[] = {
},
};
-static struct nand_ecclayout qi_lb60_ecclayout_2gb = {
- .eccbytes = 72,
- .eccpos = {
- 12, 13, 14, 15, 16, 17, 18, 19,
- 20, 21, 22, 23, 24, 25, 26, 27,
- 28, 29, 30, 31, 32, 33, 34, 35,
- 36, 37, 38, 39, 40, 41, 42, 43,
- 44, 45, 46, 47, 48, 49, 50, 51,
- 52, 53, 54, 55, 56, 57, 58, 59,
- 60, 61, 62, 63, 64, 65, 66, 67,
- 68, 69, 70, 71, 72, 73, 74, 75,
- 76, 77, 78, 79, 80, 81, 82, 83
- },
- .oobfree = {
- { .offset = 2, .length = 10 },
- { .offset = 84, .length = 44 },
- },
-};
-
static struct mtd_partition qi_lb60_partitions_2gb[] = {
{
.name = "NAND BOOT partition",
@@ -123,19 +88,67 @@ static struct mtd_partition qi_lb60_partitions_2gb[] = {
},
};
+static int qi_lb60_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section)
+ return -ERANGE;
+
+ oobregion->length = 36;
+ oobregion->offset = 6;
+
+ if (mtd->oobsize == 128) {
+ oobregion->length *= 2;
+ oobregion->offset *= 2;
+ }
+
+ return 0;
+}
+
+static int qi_lb60_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ int eccbytes = 36, eccoff = 6;
+
+ if (section > 1)
+ return -ERANGE;
+
+ if (mtd->oobsize == 128) {
+ eccbytes *= 2;
+ eccoff *= 2;
+ }
+
+ if (!section) {
+ oobregion->offset = 2;
+ oobregion->length = eccoff - 2;
+ } else {
+ oobregion->offset = eccoff + eccbytes;
+ oobregion->length = mtd->oobsize - oobregion->offset;
+ }
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops qi_lb60_ooblayout_ops = {
+ .ecc = qi_lb60_ooblayout_ecc,
+ .free = qi_lb60_ooblayout_free,
+};
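
Assuming the 1 GB parts expose a 64-byte OOB area and the 2 GB parts a 128-byte one, the two callbacks reproduce the removed static layouts: at 64 bytes the ECC region is bytes 6-41 with free regions 2-5 and 42-63 (qi_lb60_ecclayout_1gb), and at 128 bytes the doubled offsets give an ECC region of bytes 12-83 with free regions 2-11 and 84-127 (qi_lb60_ecclayout_2gb).
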
+
static void qi_lb60_nand_ident(struct platform_device *pdev,
- struct nand_chip *chip, struct mtd_partition **partitions,
+ struct mtd_info *mtd, struct mtd_partition **partitions,
int *num_partitions)
{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+
if (chip->page_shift == 12) {
- chip->ecc.layout = &qi_lb60_ecclayout_2gb;
*partitions = qi_lb60_partitions_2gb;
*num_partitions = ARRAY_SIZE(qi_lb60_partitions_2gb);
} else {
- chip->ecc.layout = &qi_lb60_ecclayout_1gb;
*partitions = qi_lb60_partitions_1gb;
*num_partitions = ARRAY_SIZE(qi_lb60_partitions_1gb);
}
+
+ mtd_set_ooblayout(mtd, &qi_lb60_ooblayout_ops);
}
static struct jz_nand_platform_data qi_lb60_nand_pdata = {
@@ -367,43 +380,12 @@ static struct jz4740_mmc_platform_data qi_lb60_mmc_pdata = {
.power_active_low = 1,
};
-/* OHCI */
-static struct regulator_consumer_supply avt2_usb_regulator_consumer =
- REGULATOR_SUPPLY("vbus", "jz4740-ohci");
-
-static struct regulator_init_data avt2_usb_regulator_init_data = {
- .num_consumer_supplies = 1,
- .consumer_supplies = &avt2_usb_regulator_consumer,
- .constraints = {
- .name = "USB power",
- .min_uV = 5000000,
- .max_uV = 5000000,
- .valid_modes_mask = REGULATOR_MODE_NORMAL,
- .valid_ops_mask = REGULATOR_CHANGE_STATUS,
- },
-};
-
-static struct fixed_voltage_config avt2_usb_regulator_data = {
- .supply_name = "USB power",
- .microvolts = 5000000,
- .gpio = JZ_GPIO_PORTB(17),
- .init_data = &avt2_usb_regulator_init_data,
-};
-
-static struct platform_device avt2_usb_regulator_device = {
- .name = "reg-fixed-voltage",
- .id = -1,
- .dev = {
- .platform_data = &avt2_usb_regulator_data,
- }
-};
-
+/* beeper */
static struct pwm_lookup qi_lb60_pwm_lookup[] = {
PWM_LOOKUP("jz4740-pwm", 4, "pwm-beeper", NULL, 0,
PWM_POLARITY_NORMAL),
};
-/* beeper */
static struct platform_device qi_lb60_pwm_beeper = {
.name = "pwm-beeper",
.id = -1,
@@ -487,11 +469,6 @@ static int __init qi_lb60_init_platform_devices(void)
spi_register_board_info(qi_lb60_spi_board_info,
ARRAY_SIZE(qi_lb60_spi_board_info));
- if (is_avt2) {
- platform_device_register(&avt2_usb_regulator_device);
- platform_device_register(&jz4740_usb_ohci_device);
- }
-
pwm_add_table(qi_lb60_pwm_lookup, ARRAY_SIZE(qi_lb60_pwm_lookup));
return platform_add_devices(jz_platform_devices,
@@ -499,19 +476,9 @@ static int __init qi_lb60_init_platform_devices(void)
}
-static __init int board_avt2(char *str)
-{
- qi_lb60_mmc_pdata.card_detect_active_low = 1;
- is_avt2 = true;
-
- return 1;
-}
-__setup("avt2", board_avt2);
-
static int __init qi_lb60_board_setup(void)
{
- printk(KERN_INFO "Qi Hardware JZ4740 QI %s setup\n",
- is_avt2 ? "AVT2" : "LB60");
+ printk(KERN_INFO "Qi Hardware JZ4740 QI LB60 setup\n");
board_gpio_setup();
diff --git a/arch/mips/jz4740/platform.c b/arch/mips/jz4740/platform.c
index e8a463b9b663..2f1dab35c061 100644
--- a/arch/mips/jz4740/platform.c
+++ b/arch/mips/jz4740/platform.c
@@ -32,31 +32,6 @@
#include "clock.h"
-/* OHCI controller */
-static struct resource jz4740_usb_ohci_resources[] = {
- {
- .start = JZ4740_UHC_BASE_ADDR,
- .end = JZ4740_UHC_BASE_ADDR + 0x1000 - 1,
- .flags = IORESOURCE_MEM,
- },
- {
- .start = JZ4740_IRQ_UHC,
- .end = JZ4740_IRQ_UHC,
- .flags = IORESOURCE_IRQ,
- },
-};
-
-struct platform_device jz4740_usb_ohci_device = {
- .name = "jz4740-ohci",
- .id = -1,
- .dev = {
- .dma_mask = &jz4740_usb_ohci_device.dev.coherent_dma_mask,
- .coherent_dma_mask = DMA_BIT_MASK(32),
- },
- .num_resources = ARRAY_SIZE(jz4740_usb_ohci_resources),
- .resource = jz4740_usb_ohci_resources,
-};
-
/* USB Device Controller */
struct platform_device jz4740_udc_xceiv_device = {
.name = "usb_phy_generic",
diff --git a/arch/mips/kernel/Makefile b/arch/mips/kernel/Makefile
index b0988fd62fcc..e6053d07072f 100644
--- a/arch/mips/kernel/Makefile
+++ b/arch/mips/kernel/Makefile
@@ -44,7 +44,7 @@ obj-$(CONFIG_CPU_CAVIUM_OCTEON) += r4k_fpu.o octeon_switch.o
obj-$(CONFIG_SMP) += smp.o
obj-$(CONFIG_SMP_UP) += smp-up.o
-obj-$(CONFIG_CPU_BMIPS) += smp-bmips.o bmips_vec.o
+obj-$(CONFIG_CPU_BMIPS) += smp-bmips.o bmips_vec.o bmips_5xxx_init.o
obj-$(CONFIG_MIPS_MT) += mips-mt.o
obj-$(CONFIG_MIPS_MT_FPAFF) += mips-mt-fpaff.o
@@ -83,6 +83,8 @@ obj-$(CONFIG_I8253) += i8253.o
obj-$(CONFIG_GPIO_TXX9) += gpio_txx9.o
+obj-$(CONFIG_RELOCATABLE) += relocate.o
+
obj-$(CONFIG_KEXEC) += machine_kexec.o relocate_kernel.o crash.o
obj-$(CONFIG_CRASH_DUMP) += crash_dump.o
obj-$(CONFIG_EARLY_PRINTK) += early_printk.o
diff --git a/arch/mips/kernel/asm-offsets.c b/arch/mips/kernel/asm-offsets.c
index 154e2039ea5e..1ea973b2abb1 100644
--- a/arch/mips/kernel/asm-offsets.c
+++ b/arch/mips/kernel/asm-offsets.c
@@ -14,6 +14,7 @@
#include <linux/mm.h>
#include <linux/kbuild.h>
#include <linux/suspend.h>
+#include <asm/cpu-info.h>
#include <asm/pm.h>
#include <asm/ptrace.h>
#include <asm/processor.h>
@@ -338,6 +339,15 @@ void output_pm_defines(void)
}
#endif
+void output_cpuinfo_defines(void)
+{
+ COMMENT(" MIPS cpuinfo offsets. ");
+ DEFINE(CPUINFO_SIZE, sizeof(struct cpuinfo_mips));
+#ifdef CONFIG_MIPS_ASID_BITS_VARIABLE
+ OFFSET(CPUINFO_ASID_MASK, cpuinfo_mips, asid_mask);
+#endif
+}
+
void output_kvm_defines(void)
{
COMMENT(" KVM/MIPS Specfic offsets. ");
diff --git a/arch/mips/kernel/binfmt_elfn32.c b/arch/mips/kernel/binfmt_elfn32.c
index 1b992c6e3d8e..58ad63d7eb42 100644
--- a/arch/mips/kernel/binfmt_elfn32.c
+++ b/arch/mips/kernel/binfmt_elfn32.c
@@ -30,21 +30,7 @@ typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
/*
* This is used to ensure we don't load something for the wrong architecture.
*/
-#define elf_check_arch(hdr) \
-({ \
- int __res = 1; \
- struct elfhdr *__h = (hdr); \
- \
- if (!mips_elf_check_machine(__h)) \
- __res = 0; \
- if (__h->e_ident[EI_CLASS] != ELFCLASS32) \
- __res = 0; \
- if (((__h->e_flags & EF_MIPS_ABI2) == 0) || \
- ((__h->e_flags & EF_MIPS_ABI) != 0)) \
- __res = 0; \
- \
- __res; \
-})
+#define elf_check_arch elfn32_check_arch
#define TASK32_SIZE 0x7fff8000UL
#undef ELF_ET_DYN_BASE
diff --git a/arch/mips/kernel/binfmt_elfo32.c b/arch/mips/kernel/binfmt_elfo32.c
index abd3affe5fb3..49fb881481f7 100644
--- a/arch/mips/kernel/binfmt_elfo32.c
+++ b/arch/mips/kernel/binfmt_elfo32.c
@@ -28,39 +28,9 @@ typedef double elf_fpreg_t;
typedef elf_fpreg_t elf_fpregset_t[ELF_NFPREG];
/*
- * In order to be sure that we don't attempt to execute an O32 binary which
- * requires 64 bit FP (FR=1) on a system which does not support it we refuse
- * to execute any binary which has bits specified by the following macro set
- * in its ELF header flags.
- */
-#ifdef CONFIG_MIPS_O32_FP64_SUPPORT
-# define __MIPS_O32_FP64_MUST_BE_ZERO 0
-#else
-# define __MIPS_O32_FP64_MUST_BE_ZERO EF_MIPS_FP64
-#endif
-
-/*
* This is used to ensure we don't load something for the wrong architecture.
*/
-#define elf_check_arch(hdr) \
-({ \
- int __res = 1; \
- struct elfhdr *__h = (hdr); \
- \
- if (!mips_elf_check_machine(__h)) \
- __res = 0; \
- if (__h->e_ident[EI_CLASS] != ELFCLASS32) \
- __res = 0; \
- if ((__h->e_flags & EF_MIPS_ABI2) != 0) \
- __res = 0; \
- if (((__h->e_flags & EF_MIPS_ABI) != 0) && \
- ((__h->e_flags & EF_MIPS_ABI) != EF_MIPS_ABI_O32)) \
- __res = 0; \
- if (__h->e_flags & __MIPS_O32_FP64_MUST_BE_ZERO) \
- __res = 0; \
- \
- __res; \
-})
+#define elf_check_arch elfo32_check_arch
#ifdef CONFIG_KVM_GUEST
#define TASK32_SIZE 0x3fff8000UL
diff --git a/arch/mips/kernel/bmips_5xxx_init.S b/arch/mips/kernel/bmips_5xxx_init.S
new file mode 100644
index 000000000000..adaa82e00f2b
--- /dev/null
+++ b/arch/mips/kernel/bmips_5xxx_init.S
@@ -0,0 +1,753 @@
+
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2011-2012 by Broadcom Corporation
+ *
+ * Init for bmips 5000.
+ * Used to init second core in dual core 5000's.
+ */
+
+#include <linux/init.h>
+
+#include <asm/asm.h>
+#include <asm/asmmacro.h>
+#include <asm/cacheops.h>
+#include <asm/regdef.h>
+#include <asm/mipsregs.h>
+#include <asm/stackframe.h>
+#include <asm/addrspace.h>
+#include <asm/hazards.h>
+#include <asm/bmips.h>
+
+#ifdef CONFIG_CPU_BMIPS5000
+
+
+#define cacheop(kva, size, linesize, op) \
+ .set noreorder ; \
+ addu t1, kva, size ; \
+ subu t2, linesize, 1 ; \
+ not t2 ; \
+ and t0, kva, t2 ; \
+ addiu t1, t1, -1 ; \
+ and t1, t2 ; \
+9: cache op, 0(t0) ; \
+ bne t0, t1, 9b ; \
+ addu t0, linesize ; \
+ .set reorder ;
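
Read as C, the cacheop() macro walks the line-aligned range covering [kva, kva + size) and issues one cache operation per line, including the line holding the last byte. A rough equivalent sketch, with cache_line_op() as a hypothetical stand-in for the MIPS cache instruction, would be:

	/* hypothetical stand-in for the "cache op, 0(addr)" instruction */
	extern void cache_line_op(unsigned int op, unsigned long addr);

	static void cacheop_c(unsigned long kva, unsigned long size,
			      unsigned long linesize, unsigned int op)
	{
		unsigned long addr = kva & ~(linesize - 1);
		unsigned long last = (kva + size - 1) & ~(linesize - 1);

		do {
			cache_line_op(op, addr);	/* one op per cache line */
			addr += linesize;
		} while (addr <= last);
	}
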
+
+
+
+#define IS_SHIFT 22
+#define IL_SHIFT 19
+#define IA_SHIFT 16
+#define DS_SHIFT 13
+#define DL_SHIFT 10
+#define DA_SHIFT 7
+#define IS_MASK 7
+#define IL_MASK 7
+#define IA_MASK 7
+#define DS_MASK 7
+#define DL_MASK 7
+#define DA_MASK 7
+#define ICE_MASK 0x80000000
+#define DCE_MASK 0x40000000
+
+#define CP0_BRCM_CONFIG0 $22, 0
+#define CP0_BRCM_MODE $22, 1
+#define CP0_CONFIG_K0_MASK 7
+
+#define CP0_ICACHE_TAG_LO $28
+#define CP0_ICACHE_DATA_LO $28, 1
+#define CP0_DCACHE_TAG_LO $28, 2
+#define CP0_D_SEC_CACHE_DATA_LO $28, 3
+#define CP0_ICACHE_TAG_HI $29
+#define CP0_ICACHE_DATA_HI $29, 1
+#define CP0_DCACHE_TAG_HI $29, 2
+
+#define CP0_BRCM_MODE_Luc_MASK (1 << 11)
+#define CP0_BRCM_CONFIG0_CWF_MASK (1 << 20)
+#define CP0_BRCM_CONFIG0_TSE_MASK (1 << 19)
+#define CP0_BRCM_MODE_SET_MASK (1 << 7)
+#define CP0_BRCM_MODE_ClkRATIO_MASK (7 << 4)
+#define CP0_BRCM_MODE_BrPRED_MASK (3 << 24)
+#define CP0_BRCM_MODE_BrPRED_SHIFT 24
+#define CP0_BRCM_MODE_BrHIST_MASK (0x1f << 20)
+#define CP0_BRCM_MODE_BrHIST_SHIFT 20
+
+/* ZSC L2 Cache Register Access Register Definitions */
+#define BRCM_ZSC_ALL_REGS_SELECT 0x7 << 24
+
+#define BRCM_ZSC_CONFIG_REG 0 << 3
+#define BRCM_ZSC_REQ_BUFFER_REG 2 << 3
+#define BRCM_ZSC_RBUS_ADDR_MAPPING_REG0 4 << 3
+#define BRCM_ZSC_RBUS_ADDR_MAPPING_REG1 6 << 3
+#define BRCM_ZSC_RBUS_ADDR_MAPPING_REG2 8 << 3
+
+#define BRCM_ZSC_SCB0_ADDR_MAPPING_REG0 0xa << 3
+#define BRCM_ZSC_SCB0_ADDR_MAPPING_REG1 0xc << 3
+
+#define BRCM_ZSC_SCB1_ADDR_MAPPING_REG0 0xe << 3
+#define BRCM_ZSC_SCB1_ADDR_MAPPING_REG1 0x10 << 3
+
+#define BRCM_ZSC_CONFIG_LMB1En 1 << (15)
+#define BRCM_ZSC_CONFIG_LMB0En 1 << (14)
+
+/* branch prediction values */
+
+#define BRCM_BrPRED_ALL_TAKEN (0x0)
+#define BRCM_BrPRED_ALL_NOT_TAKEN (0x1)
+#define BRCM_BrPRED_BHT_ENABLE (0x2)
+#define BRCM_BrPRED_PREDICT_BACKWARD (0x3)
+
+
+
+.align 2
+/*
+ * Function: size_i_cache
+ * Arguments: None
+ * Returns: v0 = i cache size, v1 = I cache line size
+ * Description: compute the I-cache size and I-cache line size
+ * Trashes: v0, v1, a0, t0
+ *
+ * pseudo code:
+ *
+ */
+
+LEAF(size_i_cache)
+ .set noreorder
+
+ mfc0 a0, CP0_CONFIG, 1
+ move t0, a0
+
+ /*
+ * Determine sets per way: IS
+ *
+ * This field contains the number of sets (i.e., indices) per way of
+ * the instruction cache:
+ * i) 0x0: 64, ii) 0x1: 128, iii) 0x2: 256, iv) 0x3: 512, v) 0x4: 1k
+ * vi) 0x5 - 0x7: Reserved.
+ */
+
+ srl a0, a0, IS_SHIFT
+ and a0, a0, IS_MASK
+
+ /* sets per way = (64<<IS) */
+
+ li v0, 0x40
+ sllv v0, v0, a0
+
+ /*
+ * Determine line size
+ *
+ * This field contains the line size of the instruction cache:
+ * i) 0x0: No I-cache present, ii) 0x3: 16 bytes, iii) 0x4: 32 bytes,
+ * iv) 0x5: 64 bytes, v) the rest: Reserved.
+ */
+
+ move a0, t0
+
+ srl a0, a0, IL_SHIFT
+ and a0, a0, IL_MASK
+
+ beqz a0, no_i_cache
+ nop
+
+ /* line size = 2 ^ (IL+1) */
+
+ addi a0, a0, 1
+ li v1, 1
+ sll v1, v1, a0
+
+ /* v0 now has the sets per way; multiply it by the line size
+ * to get the set size
+ */
+
+ sll v0, v0, a0
+
+ /*
+ * Determine set associativity
+ *
+ * This field contains the set associativity of the instruction cache.
+ * i) 0x0: Direct mapped, ii) 0x1: 2-way, iii) 0x2: 3-way, iv) 0x3:
+ * 4-way, v) 0x4 - 0x7: Reserved.
+ */
+
+ move a0, t0
+
+ srl a0, a0, IA_SHIFT
+ and a0, a0, IA_MASK
+ addi a0, a0, 0x1
+
+ /* v0 has the set size, multiply it by
+ * set associativity, to get the cache size
+ */
+
+ multu v0, a0 /*multu is interlocked, so no need to insert nops */
+ mflo v0
+ b 1f
+ nop
+
+no_i_cache:
+ move v0, zero
+ move v1, zero
+1:
+ jr ra
+ nop
+ .set reorder
+
+END(size_i_cache)
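
As a worked example, assume Config1 reports IS = 2, IL = 4 and IA = 3: sets per way = 64 << 2 = 256, line size = 2^(4+1) = 32 bytes, associativity = 3 + 1 = 4 ways, so the routine returns v0 = 256 * 32 * 4 = 32768 (a 32 KB I-cache) and v1 = 32.
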
+
+/*
+ * Function: size_d_cache
+ * Arguments: None
+ * Returns: v0 = d cache size, v1 = d cache line size
+ * Description: compute the D-cache size and D-cache line size.
+ * Trashes: v0, v1, a0, t0
+ *
+ */
+
+LEAF(size_d_cache)
+ .set noreorder
+
+ mfc0 a0, CP0_CONFIG, 1
+ move t0, a0
+
+ /*
+ * Determine sets per way: DS
+ *
+ * This field contains the number of sets (i.e., indices) per way of
+ * the data cache:
+ * i) 0x0: 64, ii) 0x1: 128, iii) 0x2: 256, iv) 0x3: 512, v) 0x4: 1k
+ * vi) 0x5 - 0x7: Reserved.
+ */
+
+ srl a0, a0, DS_SHIFT
+ and a0, a0, DS_MASK
+
+ /* sets per way = (64<<DS) */
+
+ li v0, 0x40
+ sllv v0, v0, a0
+
+ /*
+ * Determine line size
+ *
+ * This field contains the line size of the data cache:
+ * i) 0x0: No D-cache present, ii) 0x3: 16 bytes, iii) 0x4: 32 bytes,
+ * iv) 0x5: 64 bytes, v) the rest: Reserved.
+ */
+ move a0, t0
+
+ srl a0, a0, DL_SHIFT
+ and a0, a0, DL_MASK
+
+ beqz a0, no_d_cache
+ nop
+
+ /* line size = 2 ^ (DL+1) */
+
+ addi a0, a0, 1
+ li v1, 1
+ sll v1, v1, a0
+
+ /* v0 now has the sets per way; multiply it by the line size
+ * to get the set size
+ */
+
+ sll v0, v0, a0
+
+ /* determine set associativity
+ *
+ * This field contains the set associativity of the instruction cache.
+ * i) 0x0: Direct mapped, ii) 0x1: 2-way, iii) 0x2: 3-way, iv) 0x3:
+ * 4-way, v) 0x4 - 0x7: Reserved.
+ */
+
+ move a0, t0
+
+ srl a0, a0, DA_SHIFT
+ and a0, a0, DA_MASK
+ addi a0, a0, 0x1
+
+ /* v0 has the set size, multiply it by
+ * set associativity, to get the cache size
+ */
+
+ multu v0, a0 /*multu is interlocked, so no need to insert nops */
+ mflo v0
+
+ b 1f
+ nop
+
+no_d_cache:
+ move v0, zero
+ move v1, zero
+1:
+ jr ra
+ nop
+ .set reorder
+
+END(size_d_cache)
+
+
+/*
+ * Function: enable_ID
+ * Arguments: None
+ * Returns: None
+ * Description: Enable I and D caches, initialize I and D-caches, also set
+ * hardware delay for d-cache (TP0).
+ * Trashes: t0
+ *
+ */
+ .global enable_ID
+ .ent enable_ID
+ .set noreorder
+enable_ID:
+ mfc0 t0, CP0_BRCM_CONFIG0
+ or t0, t0, (ICE_MASK | DCE_MASK)
+ mtc0 t0, CP0_BRCM_CONFIG0
+ jr ra
+ nop
+
+ .end enable_ID
+ .set reorder
+
+
+/*
+ * Function: l1_init
+ * Arguments: None
+ * Returns: None
+ * Description: Enable I and D caches, and initialize I and D-caches
+ * Trashes: a0, v0, v1, t0, t1, t2, t8
+ *
+ */
+ .globl l1_init
+ .ent l1_init
+ .set noreorder
+l1_init:
+
+ /* save return address */
+ move t8, ra
+
+
+ /* initialize I and D cache Data and Tag registers. */
+ mtc0 zero, CP0_ICACHE_TAG_LO
+ mtc0 zero, CP0_ICACHE_TAG_HI
+ mtc0 zero, CP0_ICACHE_DATA_LO
+ mtc0 zero, CP0_ICACHE_DATA_HI
+ mtc0 zero, CP0_DCACHE_TAG_LO
+ mtc0 zero, CP0_DCACHE_TAG_HI
+
+ /* Enable Caches before Clearing. If the caches are disabled
+ * then the cache operations to clear the cache will be ignored
+ */
+
+ jal enable_ID
+ nop
+
+ jal size_i_cache /* v0 = i-cache size, v1 = i-cache line size */
+ nop
+
+ /* run uncached in kseg 1 */
+ la k0, 1f
+ lui k1, 0x2000
+ or k0, k1, k0
+ jr k0
+ nop
+1:
+
+ /*
+ * set K0 cache mode
+ */
+
+ mfc0 t0, CP0_CONFIG
+ and t0, t0, ~CP0_CONFIG_K0_MASK
+ or t0, t0, 3 /* Write Back mode */
+ mtc0 t0, CP0_CONFIG
+
+ /*
+ * Initialize instruction cache.
+ */
+
+ li a0, KSEG0
+ cacheop(a0, v0, v1, Index_Store_Tag_I)
+
+ /*
+ * Now we can run from I-$, kseg 0
+ */
+ la k0, 1f
+ lui k1, 0x2000
+ or k0, k1, k0
+ xor k0, k1, k0
+ jr k0
+ nop
+1:
+ /*
+ * Initialize data cache.
+ */
+
+ jal size_d_cache /* v0 = d-cache size, v1 = d-cache line size */
+ nop
+
+
+ li a0, KSEG0
+ cacheop(a0, v0, v1, Index_Store_Tag_D)
+
+ jr t8
+ nop
+
+ .end l1_init
+ .set reorder
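
The address arithmetic in l1_init relies on the fixed MIPS32 segment map: ORing a KSEG0 address with 0x20000000 produces its uncached KSEG1 alias (for example 0x80001234 becomes 0xa0001234), letting the code run uncached while it initializes the I-cache, and the later or/xor pair with the same constant returns execution to the cached KSEG0 alias.
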
+
+
+/*
+ * Function: set_other_config
+ * Arguments: none
+ * Returns: None
+ * Description: initialize the remaining configuration to defaults.
+ * Trashes: t0, t1
+ *
+ * pseudo code:
+ *
+ */
+LEAF(set_other_config)
+ .set noreorder
+
+ /* enable Bus error for I-fetch */
+ mfc0 t0, CP0_CACHEERR, 0
+ li t1, 0x4
+ or t0, t1
+ mtc0 t0, CP0_CACHEERR, 0
+
+ /* enable Bus error for Load */
+ mfc0 t0, CP0_CACHEERR, 1
+ li t1, 0x4
+ or t0, t1
+ mtc0 t0, CP0_CACHEERR, 1
+
+ /* enable Bus Error for Store */
+ mfc0 t0, CP0_CACHEERR, 2
+ li t1, 0x4
+ or t0, t1
+ mtc0 t0, CP0_CACHEERR, 2
+
+ jr ra
+ nop
+ .set reorder
+END(set_other_config)
+
+/*
+ * Function: set_branch_pred
+ * Arguments: none
+ * Returns: None
+ * Description:
+ * Trashes: t0, t1
+ *
+ * pseudo code:
+ *
+ */
+
+LEAF(set_branch_pred)
+ .set noreorder
+ mfc0 t0, CP0_BRCM_MODE
+ li t1, ~(CP0_BRCM_MODE_BrPRED_MASK | CP0_BRCM_MODE_BrHIST_MASK )
+ and t0, t0, t1
+
+ /* enable Branch prediction */
+ li t1, BRCM_BrPRED_BHT_ENABLE
+ sll t1, CP0_BRCM_MODE_BrPRED_SHIFT
+ or t0, t0, t1
+
+ /* set history count to 8 */
+ li t1, 8
+ sll t1, CP0_BRCM_MODE_BrHIST_SHIFT
+ or t0, t0, t1
+
+ mtc0 t0, CP0_BRCM_MODE
+ jr ra
+ nop
+ .set reorder
+END(set_branch_pred)
+
+
+/*
+ * Function: set_luc
+ * Arguments: set link uncached.
+ * Returns: None
+ * Description:
+ * Trashes: t0, t1
+ *
+ */
+LEAF(set_luc)
+ .set noreorder
+ mfc0 t0, CP0_BRCM_MODE
+ li t1, ~(CP0_BRCM_MODE_Luc_MASK)
+ and t0, t0, t1
+
+ /* set Luc */
+ ori t0, t0, CP0_BRCM_MODE_Luc_MASK
+
+ mtc0 t0, CP0_BRCM_MODE
+ jr ra
+ nop
+ .set reorder
+END(set_luc)
+
+/*
+ * Function: set_cwf_tse
+ * Arguments: set CWF and TSE bits
+ * Returns: None
+ * Description:
+ * Trashes: t0, t1
+ *
+ */
+LEAF(set_cwf_tse)
+ .set noreorder
+ mfc0 t0, CP0_BRCM_CONFIG0
+ li t1, (CP0_BRCM_CONFIG0_CWF_MASK | CP0_BRCM_CONFIG0_TSE_MASK)
+ or t0, t0, t1
+
+ mtc0 t0, CP0_BRCM_CONFIG0
+ jr ra
+ nop
+ .set reorder
+END(set_cwf_tse)
+
+/*
+ * Function: set_clock_ratio
+ * Arguments: set clock ratio specified by a0
+ * Returns: None
+ * Description:
+ * Trashes: v0, v1, a0, a1
+ *
+ * pseudo code:
+ *
+ */
+LEAF(set_clock_ratio)
+ .set noreorder
+
+ mfc0 t0, CP0_BRCM_MODE
+ li t1, ~(CP0_BRCM_MODE_SET_MASK | CP0_BRCM_MODE_ClkRATIO_MASK)
+ and t0, t0, t1
+ li t1, CP0_BRCM_MODE_SET_MASK
+ or t0, t0, t1
+ or t0, t0, a0
+ mtc0 t0, CP0_BRCM_MODE
+ jr ra
+ nop
+ .set reorder
+END(set_clock_ratio)
+/*
+ * Function: set_zephyr
+ * Arguments: None
+ * Returns: None
+ * Description: Set any zephyr bits
+ * Trashes: t0 & t1
+ *
+ */
+LEAF(set_zephyr)
+ .set noreorder
+
+ /* enable read/write of CP0 #22 sel. 8 */
+ li t0, 0x5a455048
+ .word 0x4088b00f /* mtc0 t0, $22, 15 */
+
+ .word 0x4008b008 /* mfc0 t0, $22, 8 */
+ li t1, 0x09008000 /* turn off pref, jtb */
+ or t0, t0, t1
+ .word 0x4088b008 /* mtc0 t0, $22, 8 */
+ sync
+
+ /* disable read/write of CP0 #22 sel 8 */
+ li t0, 0x0
+ .word 0x4088b00f /* mtc0 t0, $22, 15 */
+
+
+ jr ra
+ nop
+ .set reorder
+
+END(set_zephyr)
+
+
+/*
+ * Function: set_llmb
+ * Arguments: a0=0 disable llmb, a0=1 enables llmb
+ * Returns: None
+ * Description:
+ * Trashes: t0, t1, t2
+ *
+ * pseudo code:
+ *
+ */
+LEAF(set_llmb)
+ .set noreorder
+
+ li t2, 0x90000000 | BRCM_ZSC_ALL_REGS_SELECT | BRCM_ZSC_CONFIG_REG
+ sync
+ cache 0x7, 0x0(t2)
+ sync
+ mfc0 t0, CP0_D_SEC_CACHE_DATA_LO
+ li t1, ~(BRCM_ZSC_CONFIG_LMB1En | BRCM_ZSC_CONFIG_LMB0En)
+ and t0, t0, t1
+
+ beqz a0, svlmb
+ nop
+
+enable_lmb:
+ li t1, (BRCM_ZSC_CONFIG_LMB1En | BRCM_ZSC_CONFIG_LMB0En)
+ or t0, t0, t1
+
+svlmb:
+ mtc0 t0, CP0_D_SEC_CACHE_DATA_LO
+ sync
+ cache 0xb, 0x0(t2)
+ sync
+
+ jr ra
+ nop
+ .set reorder
+
+END(set_llmb)
+/*
+ * Function: core_init
+ * Arguments: none
+ * Returns: None
+ * Description: initialize core related configuration
+ * Trashes: v0,v1,a0,a1,t8
+ *
+ * pseudo code:
+ *
+ */
+ .globl core_init
+ .ent core_init
+ .set noreorder
+core_init:
+ move t8, ra
+
+ /* set Zephyr bits. */
+ bal set_zephyr
+ nop
+
+#if ENABLE_FPU==1
+ /* initialize the Floating point unit (both TPs) */
+ bal init_fpu
+ nop
+#endif
+
+ /* set low latency memory bus */
+ li a0, 1
+ bal set_llmb
+ nop
+
+ /* set branch prediction (TP0 only) */
+ bal set_branch_pred
+ nop
+
+ /* set link uncached */
+ bal set_luc
+ nop
+
+ /* set CWF and TSE */
+ bal set_cwf_tse
+ nop
+
+ /*
+ * set the clock ratio by writing 1 to 'set'
+ * and 0 to ClkRatio (TP0 only)
+ */
+ li a0, 0
+ bal set_clock_ratio
+ nop
+
+ /* set other configuration to defaults */
+ bal set_other_config
+ nop
+
+ move ra, t8
+ jr ra
+ nop
+
+ .set reorder
+ .end core_init
+
+/*
+ * Function: clear_jump_target_buffer
+ * Arguments: None
+ * Returns: None
+ * Description:
+ * Trashes: t0, t1, t2
+ *
+ */
+#define RESET_CALL_RETURN_STACK_THIS_THREAD (0x06<<16)
+#define RESET_JUMP_TARGET_BUFFER_THIS_THREAD (0x04<<16)
+#define JTB_CS_CNTL_MASK (0xFF<<16)
+
+ .globl clear_jump_target_buffer
+ .ent clear_jump_target_buffer
+ .set noreorder
+clear_jump_target_buffer:
+
+ mfc0 t0, $22, 2
+ nop
+ nop
+
+ li t1, ~JTB_CS_CNTL_MASK
+ and t0, t0, t1
+ li t2, RESET_CALL_RETURN_STACK_THIS_THREAD
+ or t0, t0, t2
+ mtc0 t0, $22, 2
+ nop
+ nop
+
+ and t0, t0, t1
+ li t2, RESET_JUMP_TARGET_BUFFER_THIS_THREAD
+ or t0, t0, t2
+ mtc0 t0, $22, 2
+ nop
+ nop
+ jr ra
+ nop
+
+ .end clear_jump_target_buffer
+ .set reorder
+/*
+ * Function: bmips_5xxx_init
+ * Arguments: None
+ * Returns: None
+ * Description: Initialize the L1 caches and core configuration, and
+ * clear the jump target buffer
+ * Trashes: v0, v1, t0, t1, t2, t5, t7, t8
+ *
+ */
+ .globl bmips_5xxx_init
+ .ent bmips_5xxx_init
+ .set noreorder
+bmips_5xxx_init:
+
+ /* save return address and A0 */
+ move t7, ra
+ move t5, a0
+
+ jal l1_init
+ nop
+
+ jal core_init
+ nop
+
+ jal clear_jump_target_buffer
+ nop
+
+ mtc0 zero, CP0_CAUSE
+
+ move a0, t5
+ jr t7
+ nop
+
+ .end bmips_5xxx_init
+ .set reorder
+
+
+#endif
diff --git a/arch/mips/kernel/bmips_vec.S b/arch/mips/kernel/bmips_vec.S
index 86495072a922..921a5fa55da6 100644
--- a/arch/mips/kernel/bmips_vec.S
+++ b/arch/mips/kernel/bmips_vec.S
@@ -88,12 +88,13 @@ NESTED(bmips_reset_nmi_vec, PT_SIZE, sp)
li k1, (1 << 19)
mfc0 k0, CP0_STATUS
and k0, k1
- beqz k0, bmips_smp_entry
+ beqz k0, soft_reset
#if defined(CONFIG_CPU_BMIPS5000)
mfc0 k0, CP0_PRID
li k1, PRID_IMP_BMIPS5000
- andi k0, 0xff00
+ /* mask with PRID_IMP_BMIPS5000 to cover both variants */
+ andi k0, PRID_IMP_BMIPS5000
bne k0, k1, 1f
/* if we're not on core 0, this must be the SMP boot signal */
@@ -125,13 +126,48 @@ NESTED(bmips_reset_nmi_vec, PT_SIZE, sp)
.set arch=r4000
eret
+#ifdef CONFIG_SMP
+soft_reset:
+
+#if defined(CONFIG_CPU_BMIPS5000)
+ mfc0 k0, CP0_PRID
+ andi k0, 0xff00
+ li k1, PRID_IMP_BMIPS5200
+ bne k0, k1, bmips_smp_entry
+
+ /* if running on TP 1, jump to bmips_smp_entry */
+ mfc0 k0, $22
+ li k1, (1 << 24)
+ and k1, k0
+ bnez k1, bmips_smp_entry
+ nop
+
+ /*
+ * Running on TP0, and this cannot be core 0 (the boot core).
+ * Check for soft reset, which indicates a warm boot.
+ */
+ mfc0 k0, $12
+ li k1, (1 << 20)
+ and k0, k1
+ beqz k0, bmips_smp_entry
+
+ /*
+ * Warm boot.
+ * Cache init is only done on TP0
+ */
+ la k0, bmips_5xxx_init
+ jalr k0
+ nop
+
+ b bmips_smp_entry
+ nop
+#endif
+
/***********************************************************************
* CPU1 reset vector (used for the initial boot only)
* This is still part of bmips_reset_nmi_vec().
***********************************************************************/
-#ifdef CONFIG_SMP
-
bmips_smp_entry:
/* set up CP0 STATUS; enable FPU */
@@ -166,10 +202,12 @@ bmips_smp_entry:
2:
#endif /* CONFIG_CPU_BMIPS4350 || CONFIG_CPU_BMIPS4380 */
#if defined(CONFIG_CPU_BMIPS5000)
- /* set exception vector base */
+ /* mask with PRID_IMP_BMIPS5000 to cover both variants */
li k1, PRID_IMP_BMIPS5000
+ andi k0, PRID_IMP_BMIPS5000
bne k0, k1, 3f
+ /* set exception vector base */
la k0, ebase
lw k0, 0(k0)
mtc0 k0, $15, 1
@@ -263,6 +301,8 @@ LEAF(bmips_enable_xks01)
#endif /* CONFIG_CPU_BMIPS4380 */
#if defined(CONFIG_CPU_BMIPS5000)
li t1, PRID_IMP_BMIPS5000
+ /* mask with PRID_IMP_BMIPS5000 to cover both variants */
+ andi t2, PRID_IMP_BMIPS5000
bne t2, t1, 2f
mfc0 t0, $22, 5
diff --git a/arch/mips/kernel/branch.c b/arch/mips/kernel/branch.c
index d8f9b357b222..6dc3f1fdaccc 100644
--- a/arch/mips/kernel/branch.c
+++ b/arch/mips/kernel/branch.c
@@ -481,7 +481,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
/*
* OK we are here either because we hit a NAL
* instruction or because we are emulating an
- * old bltzal{,l} one. Lets figure out what the
+ * old bltzal{,l} one. Let's figure out what the
* case really is.
*/
if (!insn.i_format.rs) {
@@ -515,7 +515,7 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
/*
* OK we are here either because we hit a BAL
* instruction or because we are emulating an
- * old bgezal{,l} one. Lets figure out what the
+ * old bgezal{,l} one. Let's figure out what the
* case really is.
*/
if (!insn.i_format.rs) {
@@ -688,21 +688,9 @@ int __compute_return_epc_for_insn(struct pt_regs *regs,
}
lose_fpu(1); /* Save FPU state for the emulator. */
reg = insn.i_format.rt;
- bit = 0;
- switch (insn.i_format.rs) {
- case bc1eqz_op:
- /* Test bit 0 */
- if (get_fpr32(&current->thread.fpu.fpr[reg], 0)
- & 0x1)
- bit = 1;
- break;
- case bc1nez_op:
- /* Test bit 0 */
- if (!(get_fpr32(&current->thread.fpu.fpr[reg], 0)
- & 0x1))
- bit = 1;
- break;
- }
+ bit = get_fpr32(&current->thread.fpu.fpr[reg], 0) & 0x1;
+ if (insn.i_format.rs == bc1eqz_op)
+ bit = !bit;
own_fpu(1);
if (bit)
epc = epc + 4 +
diff --git a/arch/mips/kernel/cevt-r4k.c b/arch/mips/kernel/cevt-r4k.c
index 8dfe6a6e1480..e4c21bbf9422 100644
--- a/arch/mips/kernel/cevt-r4k.c
+++ b/arch/mips/kernel/cevt-r4k.c
@@ -28,6 +28,83 @@ static int mips_next_event(unsigned long delta,
return res;
}
+/**
+ * calculate_min_delta() - Calculate a good minimum delta for mips_next_event().
+ *
+ * Running under virtualisation can introduce overhead into mips_next_event() in
+ * the form of hypervisor emulation of CP0_Count/CP0_Compare registers,
+ * potentially with an unnatural frequency, which makes a fixed min_delta_ns
+ * value inappropriate as it may be too small.
+ *
+ * It can also introduce occasional latency from the guest being descheduled.
+ *
+ * This function calculates a good minimum delta based roughly on the 75th
+ * percentile of the time taken to do the mips_next_event() sequence, in order
+ * to handle potentially higher overhead while also eliminating outliers due to
+ * unpredictable hypervisor latency (which can be handled by retries).
+ *
+ * Return: An appropriate minimum delta for the clock event device.
+ */
+static unsigned int calculate_min_delta(void)
+{
+ unsigned int cnt, i, j, k, l;
+ unsigned int buf1[4], buf2[3];
+ unsigned int min_delta;
+
+ /*
+ * Calculate the median of 5 75th percentiles of 5 samples of how long
+ * it takes to set CP0_Compare = CP0_Count + delta.
+ */
+ for (i = 0; i < 5; ++i) {
+ for (j = 0; j < 5; ++j) {
+ /*
+ * This is like the code in mips_next_event(), and
+ * directly measures the borderline "safe" delta.
+ */
+ cnt = read_c0_count();
+ write_c0_compare(cnt);
+ cnt = read_c0_count() - cnt;
+
+ /* Sorted insert into buf1 */
+ for (k = 0; k < j; ++k) {
+ if (cnt < buf1[k]) {
+ l = min_t(unsigned int,
+ j, ARRAY_SIZE(buf1) - 1);
+ for (; l > k; --l)
+ buf1[l] = buf1[l - 1];
+ break;
+ }
+ }
+ if (k < ARRAY_SIZE(buf1))
+ buf1[k] = cnt;
+ }
+
+ /* Sorted insert of 75th percentile into buf2 */
+ for (k = 0; k < i; ++k) {
+ if (buf1[ARRAY_SIZE(buf1) - 1] < buf2[k]) {
+ l = min_t(unsigned int,
+ i, ARRAY_SIZE(buf2) - 1);
+ for (; l > k; --l)
+ buf2[l] = buf2[l - 1];
+ break;
+ }
+ }
+ if (k < ARRAY_SIZE(buf2))
+ buf2[k] = buf1[ARRAY_SIZE(buf1) - 1];
+ }
+
+ /* Use 2 * median of 75th percentiles */
+ min_delta = buf2[ARRAY_SIZE(buf2) - 1] * 2;
+
+ /* Don't go too low */
+ if (min_delta < 0x300)
+ min_delta = 0x300;
+
+ pr_debug("%s: median 75th percentile=%#x, min_delta=%#x\n",
+ __func__, buf2[ARRAY_SIZE(buf2) - 1], min_delta);
+ return min_delta;
+}
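
The sorted inserts above keep only the smallest samples in place; an equivalent userspace sketch (with sample() as a hypothetical stand-in for the CP0 Count/Compare timing probe) shows the 75th-percentile-of-5, median-of-5 structure more directly:

	#include <stdlib.h>

	static int cmp_uint(const void *a, const void *b)
	{
		unsigned int x = *(const unsigned int *)a;
		unsigned int y = *(const unsigned int *)b;

		return (x > y) - (x < y);
	}

	static unsigned int min_delta_sketch(unsigned int (*sample)(void))
	{
		unsigned int buf[5], pct75[5], min_delta;
		int i, j;

		for (i = 0; i < 5; i++) {
			for (j = 0; j < 5; j++)
				buf[j] = sample();
			qsort(buf, 5, sizeof(buf[0]), cmp_uint);
			pct75[i] = buf[3];	/* 4th smallest of 5 ~ 75th percentile */
		}
		qsort(pct75, 5, sizeof(pct75[0]), cmp_uint);
		min_delta = pct75[2] * 2;	/* 2 * median of the percentiles */

		return min_delta < 0x300 ? 0x300 : min_delta;	/* don't go too low */
	}
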
+
DEFINE_PER_CPU(struct clock_event_device, mips_clockevent_device);
int cp0_timer_irq_installed;
@@ -177,7 +254,7 @@ int r4k_clockevent_init(void)
{
unsigned int cpu = smp_processor_id();
struct clock_event_device *cd;
- unsigned int irq;
+ unsigned int irq, min_delta;
if (!cpu_has_counter || !mips_hpt_frequency)
return -ENXIO;
@@ -203,7 +280,8 @@ int r4k_clockevent_init(void)
/* Calculate the min / max delta */
cd->max_delta_ns = clockevent_delta2ns(0x7fffffff, cd);
- cd->min_delta_ns = clockevent_delta2ns(0x300, cd);
+ min_delta = calculate_min_delta();
+ cd->min_delta_ns = clockevent_delta2ns(min_delta, cd);
cd->rating = 300;
cd->irq = irq;
diff --git a/arch/mips/kernel/cps-vec.S b/arch/mips/kernel/cps-vec.S
index ac81edd44563..59476a607add 100644
--- a/arch/mips/kernel/cps-vec.S
+++ b/arch/mips/kernel/cps-vec.S
@@ -18,9 +18,12 @@
#include <asm/mipsmtregs.h>
#include <asm/pm.h>
+#define GCR_CPC_BASE_OFS 0x0088
#define GCR_CL_COHERENCE_OFS 0x2008
#define GCR_CL_ID_OFS 0x2028
+#define CPC_CL_VC_RUN_OFS 0x2028
+
.extern mips_cm_base
.set noreorder
@@ -60,6 +63,37 @@
nop
.endm
+ /*
+ * Set dest to non-zero if the core supports MIPSr6 multithreading
+ * (ie. VPs), else zero. If MIPSr6 multithreading is not supported then
+ * branch to nomt.
+ */
+ .macro has_vp dest, nomt
+ mfc0 \dest, CP0_CONFIG, 1
+ bgez \dest, \nomt
+ mfc0 \dest, CP0_CONFIG, 2
+ bgez \dest, \nomt
+ mfc0 \dest, CP0_CONFIG, 3
+ bgez \dest, \nomt
+ mfc0 \dest, CP0_CONFIG, 4
+ bgez \dest, \nomt
+ mfc0 \dest, CP0_CONFIG, 5
+ andi \dest, \dest, MIPS_CONF5_VP
+ beqz \dest, \nomt
+ nop
+ .endm
+
+ /* Calculate an uncached address for the CM GCRs */
+ .macro cmgcrb dest
+ .set push
+ .set noat
+ MFC0 $1, CP0_CMGCRBASE
+ PTR_SLL $1, $1, 4
+ PTR_LI \dest, UNCAC_BASE
+ PTR_ADDU \dest, \dest, $1
+ .set pop
+ .endm
+
.section .text.cps-vec
.balign 0x1000
@@ -90,120 +124,64 @@ not_nmi:
li t0, ST0_CU1 | ST0_CU0 | ST0_BEV | STATUS_BITDEPS
mtc0 t0, CP0_STATUS
- /*
- * Clear the bits used to index the caches. Note that the architecture
- * dictates that writing to any of TagLo or TagHi selects 0 or 2 should
- * be valid for all MIPS32 CPUs, even those for which said writes are
- * unnecessary.
- */
- mtc0 zero, CP0_TAGLO, 0
- mtc0 zero, CP0_TAGHI, 0
- mtc0 zero, CP0_TAGLO, 2
- mtc0 zero, CP0_TAGHI, 2
- ehb
-
- /* Primary cache configuration is indicated by Config1 */
- mfc0 v0, CP0_CONFIG, 1
-
- /* Detect I-cache line size */
- _EXT t0, v0, MIPS_CONF1_IL_SHF, MIPS_CONF1_IL_SZ
- beqz t0, icache_done
- li t1, 2
- sllv t0, t1, t0
-
- /* Detect I-cache size */
- _EXT t1, v0, MIPS_CONF1_IS_SHF, MIPS_CONF1_IS_SZ
- xori t2, t1, 0x7
- beqz t2, 1f
- li t3, 32
- addiu t1, t1, 1
- sllv t1, t3, t1
-1: /* At this point t1 == I-cache sets per way */
- _EXT t2, v0, MIPS_CONF1_IA_SHF, MIPS_CONF1_IA_SZ
- addiu t2, t2, 1
- mul t1, t1, t0
- mul t1, t1, t2
-
- li a0, CKSEG0
- PTR_ADD a1, a0, t1
-1: cache Index_Store_Tag_I, 0(a0)
- PTR_ADD a0, a0, t0
- bne a0, a1, 1b
+ /* Skip cache & coherence setup if we're already coherent */
+ cmgcrb v1
+ lw s7, GCR_CL_COHERENCE_OFS(v1)
+ bnez s7, 1f
nop
-icache_done:
- /* Detect D-cache line size */
- _EXT t0, v0, MIPS_CONF1_DL_SHF, MIPS_CONF1_DL_SZ
- beqz t0, dcache_done
- li t1, 2
- sllv t0, t1, t0
-
- /* Detect D-cache size */
- _EXT t1, v0, MIPS_CONF1_DS_SHF, MIPS_CONF1_DS_SZ
- xori t2, t1, 0x7
- beqz t2, 1f
- li t3, 32
- addiu t1, t1, 1
- sllv t1, t3, t1
-1: /* At this point t1 == D-cache sets per way */
- _EXT t2, v0, MIPS_CONF1_DA_SHF, MIPS_CONF1_DA_SZ
- addiu t2, t2, 1
- mul t1, t1, t0
- mul t1, t1, t2
+ /* Initialize the L1 caches */
+ jal mips_cps_cache_init
+ nop
- li a0, CKSEG0
- PTR_ADDU a1, a0, t1
- PTR_SUBU a1, a1, t0
-1: cache Index_Store_Tag_D, 0(a0)
- bne a0, a1, 1b
- PTR_ADD a0, a0, t0
-dcache_done:
+ /* Enter the coherent domain */
+ li t0, 0xff
+ sw t0, GCR_CL_COHERENCE_OFS(v1)
+ ehb
/* Set Kseg0 CCA to that in s0 */
- mfc0 t0, CP0_CONFIG
+1: mfc0 t0, CP0_CONFIG
ori t0, 0x7
xori t0, 0x7
or t0, t0, s0
mtc0 t0, CP0_CONFIG
ehb
- /* Calculate an uncached address for the CM GCRs */
- MFC0 v1, CP0_CMGCRBASE
- PTR_SLL v1, v1, 4
- PTR_LI t0, UNCAC_BASE
- PTR_ADDU v1, v1, t0
-
- /* Enter the coherent domain */
- li t0, 0xff
- sw t0, GCR_CL_COHERENCE_OFS(v1)
- ehb
-
/* Jump to kseg0 */
PTR_LA t0, 1f
jr t0
nop
/*
- * We're up, cached & coherent. Perform any further required core-level
- * initialisation.
+ * We're up, cached & coherent. Perform any EVA initialization necessary
+ * before we access memory.
*/
-1: jal mips_cps_core_init
+1: eva_init
+
+ /* Retrieve boot configuration pointers */
+ jal mips_cps_get_bootcfg
+ nop
+
+ /* Skip core-level init if we started up coherent */
+ bnez s7, 1f
nop
- /* Do any EVA initialization if necessary */
- eva_init
+ /* Perform any further required core-level initialisation */
+ jal mips_cps_core_init
+ nop
/*
* Boot any other VPEs within this core that should be online, and
* deactivate this VPE if it should be offline.
*/
+ move a1, t9
jal mips_cps_boot_vpes
- nop
+ move a0, v0
/* Off we go! */
- PTR_L t1, VPEBOOTCFG_PC(v0)
- PTR_L gp, VPEBOOTCFG_GP(v0)
- PTR_L sp, VPEBOOTCFG_SP(v0)
+1: PTR_L t1, VPEBOOTCFG_PC(v1)
+ PTR_L gp, VPEBOOTCFG_GP(v1)
+ PTR_L sp, VPEBOOTCFG_SP(v1)
jr t1
nop
END(mips_cps_core_entry)
@@ -245,7 +223,6 @@ LEAF(excep_intex)
.org 0x480
LEAF(excep_ejtag)
- DUMP_EXCEP("EJTAG")
PTR_LA k0, ejtag_debug_handler
jr k0
nop
@@ -323,22 +300,35 @@ LEAF(mips_cps_core_init)
nop
END(mips_cps_core_init)
-LEAF(mips_cps_boot_vpes)
- /* Retrieve CM base address */
- PTR_LA t0, mips_cm_base
- PTR_L t0, 0(t0)
-
+/**
+ * mips_cps_get_bootcfg() - retrieve boot configuration pointers
+ *
+ * Returns: pointer to struct core_boot_config in v0, pointer to
+ * struct vpe_boot_config in v1, VPE ID in t9
+ */
+LEAF(mips_cps_get_bootcfg)
/* Calculate a pointer to this cores struct core_boot_config */
+ cmgcrb t0
lw t0, GCR_CL_ID_OFS(t0)
li t1, COREBOOTCFG_SIZE
mul t0, t0, t1
PTR_LA t1, mips_cps_core_bootcfg
PTR_L t1, 0(t1)
- PTR_ADDU t0, t0, t1
+ PTR_ADDU v0, t0, t1
/* Calculate this VPEs ID. If the core doesn't support MT use 0 */
li t9, 0
-#ifdef CONFIG_MIPS_MT_SMP
+#if defined(CONFIG_CPU_MIPSR6)
+ has_vp ta2, 1f
+
+ /*
+ * Assume non-contiguous numbering. Perhaps some day we'll need
+ * to handle contiguous VP numbering, but no such systems yet
+ * exist.
+ */
+ mfc0 t9, $3, 1
+ andi t9, t9, 0xff
+#elif defined(CONFIG_MIPS_MT_SMP)
has_mt ta2, 1f
/* Find the number of VPEs present in the core */
@@ -362,22 +352,43 @@ LEAF(mips_cps_boot_vpes)
1:	/* Calculate a pointer to this VPE's struct vpe_boot_config */
li t1, VPEBOOTCFG_SIZE
- mul v0, t9, t1
- PTR_L ta3, COREBOOTCFG_VPECONFIG(t0)
- PTR_ADDU v0, v0, ta3
+ mul v1, t9, t1
+ PTR_L ta3, COREBOOTCFG_VPECONFIG(v0)
+ PTR_ADDU v1, v1, ta3
-#ifdef CONFIG_MIPS_MT_SMP
-
- /* If the core doesn't support MT then return */
- bnez ta2, 1f
- nop
jr ra
nop
+ END(mips_cps_get_bootcfg)
+
+LEAF(mips_cps_boot_vpes)
+ PTR_L ta2, COREBOOTCFG_VPEMASK(a0)
+ PTR_L ta3, COREBOOTCFG_VPECONFIG(a0)
+
+#if defined(CONFIG_CPU_MIPSR6)
+
+ has_vp t0, 5f
+
+ /* Find base address of CPC */
+ cmgcrb t3
+ PTR_L t1, GCR_CPC_BASE_OFS(t3)
+ PTR_LI t2, ~0x7fff
+ and t1, t1, t2
+ PTR_LI t2, UNCAC_BASE
+ PTR_ADD t1, t1, t2
+
+ /* Set VC_RUN to the VPE mask */
+ PTR_S ta2, CPC_CL_VC_RUN_OFS(t1)
+ ehb
+
+#elif defined(CONFIG_MIPS_MT)
.set push
.set mt
-1: /* Enter VPE configuration state */
+ /* If the core doesn't support MT then return */
+ has_mt t0, 5f
+
+ /* Enter VPE configuration state */
dvpe
PTR_LA t1, 1f
jr.hb t1
@@ -388,7 +399,6 @@ LEAF(mips_cps_boot_vpes)
ehb
/* Loop through each VPE */
- PTR_L ta2, COREBOOTCFG_VPEMASK(t0)
move t8, ta2
li ta1, 0
@@ -431,6 +441,21 @@ LEAF(mips_cps_boot_vpes)
mfc0 t0, CP0_CONFIG
mttc0 t0, CP0_CONFIG
+ /*
+ * Copy the EVA config from this VPE if the CPU supports it.
+ * CONFIG3 must exist to be running MT startup - just read it.
+ */
+ mfc0 t0, CP0_CONFIG, 3
+ and t0, t0, MIPS_CONF3_SC
+ beqz t0, 3f
+ nop
+ mfc0 t0, CP0_SEGCTL0
+ mttc0 t0, CP0_SEGCTL0
+ mfc0 t0, CP0_SEGCTL1
+ mttc0 t0, CP0_SEGCTL1
+ mfc0 t0, CP0_SEGCTL2
+ mttc0 t0, CP0_SEGCTL2
+3:
/* Ensure no software interrupts are pending */
mttc0 zero, CP0_CAUSE
mttc0 zero, CP0_STATUS
@@ -465,7 +490,7 @@ LEAF(mips_cps_boot_vpes)
/* Check whether this VPE is meant to be running */
li t0, 1
- sll t0, t0, t9
+ sll t0, t0, a1
and t0, t0, t8
bnez t0, 2f
nop
@@ -482,10 +507,84 @@ LEAF(mips_cps_boot_vpes)
#endif /* CONFIG_MIPS_MT_SMP */
/* Return */
- jr ra
+5: jr ra
nop
END(mips_cps_boot_vpes)
+LEAF(mips_cps_cache_init)
+ /*
+ * Clear the bits used to index the caches. Note that the architecture
+ * dictates that writing to any of TagLo or TagHi selects 0 or 2 should
+ * be valid for all MIPS32 CPUs, even those for which said writes are
+ * unnecessary.
+ */
+ mtc0 zero, CP0_TAGLO, 0
+ mtc0 zero, CP0_TAGHI, 0
+ mtc0 zero, CP0_TAGLO, 2
+ mtc0 zero, CP0_TAGHI, 2
+ ehb
+
+ /* Primary cache configuration is indicated by Config1 */
+ mfc0 v0, CP0_CONFIG, 1
+
+ /* Detect I-cache line size */
+ _EXT t0, v0, MIPS_CONF1_IL_SHF, MIPS_CONF1_IL_SZ
+ beqz t0, icache_done
+ li t1, 2
+ sllv t0, t1, t0
+
+ /* Detect I-cache size */
+ _EXT t1, v0, MIPS_CONF1_IS_SHF, MIPS_CONF1_IS_SZ
+ xori t2, t1, 0x7
+ beqz t2, 1f
+ li t3, 32
+ addiu t1, t1, 1
+ sllv t1, t3, t1
+1: /* At this point t1 == I-cache sets per way */
+ _EXT t2, v0, MIPS_CONF1_IA_SHF, MIPS_CONF1_IA_SZ
+ addiu t2, t2, 1
+ mul t1, t1, t0
+ mul t1, t1, t2
+
+ li a0, CKSEG0
+ PTR_ADD a1, a0, t1
+1: cache Index_Store_Tag_I, 0(a0)
+ PTR_ADD a0, a0, t0
+ bne a0, a1, 1b
+ nop
+icache_done:
+
+ /* Detect D-cache line size */
+ _EXT t0, v0, MIPS_CONF1_DL_SHF, MIPS_CONF1_DL_SZ
+ beqz t0, dcache_done
+ li t1, 2
+ sllv t0, t1, t0
+
+ /* Detect D-cache size */
+ _EXT t1, v0, MIPS_CONF1_DS_SHF, MIPS_CONF1_DS_SZ
+ xori t2, t1, 0x7
+ beqz t2, 1f
+ li t3, 32
+ addiu t1, t1, 1
+ sllv t1, t3, t1
+1: /* At this point t1 == D-cache sets per way */
+ _EXT t2, v0, MIPS_CONF1_DA_SHF, MIPS_CONF1_DA_SZ
+ addiu t2, t2, 1
+ mul t1, t1, t0
+ mul t1, t1, t2
+
+ li a0, CKSEG0
+ PTR_ADDU a1, a0, t1
+ PTR_SUBU a1, a1, t0
+1: cache Index_Store_Tag_D, 0(a0)
+ bne a0, a1, 1b
+ PTR_ADD a0, a0, t0
+dcache_done:
+
+ jr ra
+ nop
+ END(mips_cps_cache_init)
+
#if defined(CONFIG_MIPS_CPS_PM) && defined(CONFIG_CPU_PM)
	/* Calculate a pointer to this CPU's struct mips_static_suspend_state */
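
The new mips_cps_cache_init routine above derives the L1 cache geometry from Config1: the line size is 2 << IL, the sets per way are 64 << IS (with the encoding 7 meaning 32), and the associativity is IA/DA + 1; their product gives the range covered by the Index_Store_Tag loop. A minimal standalone C sketch of the same arithmetic follows; the field positions are written out here as assumptions mirroring the MIPS_CONF1_* macros rather than taken from asm/mipsregs.h.

#include <stdint.h>
#include <stdio.h>

/* Assumed Config1 field positions (the kernel uses MIPS_CONF1_* for these) */
#define CONF1_IA(c)	(((c) >> 16) & 0x7)	/* I-cache ways - 1 */
#define CONF1_IL(c)	(((c) >> 19) & 0x7)	/* I-cache line size code */
#define CONF1_IS(c)	(((c) >> 22) & 0x7)	/* I-cache sets-per-way code */

/* Total I-cache size in bytes, 0 if no I-cache is present */
static unsigned int icache_bytes(uint32_t config1)
{
	unsigned int line, sets, ways;

	if (!CONF1_IL(config1))
		return 0;			/* IL == 0: no I-cache */

	line = 2u << CONF1_IL(config1);		/* e.g. IL == 4 -> 32 bytes */
	sets = (CONF1_IS(config1) == 7) ? 32	/* special encoding */
					: 64u << CONF1_IS(config1);
	ways = CONF1_IA(config1) + 1;

	return line * sets * ways;
}

int main(void)
{
	/* Hypothetical Config1: IL=4 (32B lines), IS=2 (256 sets), IA=3 (4-way) */
	uint32_t config1 = (4u << 19) | (2u << 22) | (3u << 16);

	printf("I-cache: %u bytes\n", icache_bytes(config1));	/* 32*256*4 = 32768 */
	return 0;
}
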
diff --git a/arch/mips/kernel/cpu-probe.c b/arch/mips/kernel/cpu-probe.c
index b725b713b9f8..a88d44247cc8 100644
--- a/arch/mips/kernel/cpu-probe.c
+++ b/arch/mips/kernel/cpu-probe.c
@@ -539,6 +539,7 @@ static int set_ftlb_enable(struct cpuinfo_mips *c, int enable)
switch (c->cputype) {
case CPU_PROAPTIV:
case CPU_P5600:
+ case CPU_P6600:
/* proAptiv & related cores use Config6 to enable the FTLB */
config = read_c0_config6();
/* Clear the old probability value */
@@ -561,6 +562,19 @@ static int set_ftlb_enable(struct cpuinfo_mips *c, int enable)
write_c0_config7(config | (calculate_ftlb_probability(c)
<< MIPS_CONF7_FTLBP_SHIFT));
break;
+ case CPU_LOONGSON3:
+ /* Flush ITLB, DTLB, VTLB and FTLB */
+ write_c0_diag(LOONGSON_DIAG_ITLB | LOONGSON_DIAG_DTLB |
+ LOONGSON_DIAG_VTLB | LOONGSON_DIAG_FTLB);
+ /* Loongson-3 cores use Config6 to enable the FTLB */
+ config = read_c0_config6();
+ if (enable)
+ /* Enable FTLB */
+ write_c0_config6(config & ~MIPS_CONF6_FTLBDIS);
+ else
+ /* Disable FTLB */
+ write_c0_config6(config | MIPS_CONF6_FTLBDIS);
+ break;
default:
return 1;
}
@@ -634,6 +648,8 @@ static inline unsigned int decode_config1(struct cpuinfo_mips *c)
if (config1 & MIPS_CONF1_MD)
c->ases |= MIPS_ASE_MDMX;
+ if (config1 & MIPS_CONF1_PC)
+ c->options |= MIPS_CPU_PERF;
if (config1 & MIPS_CONF1_WR)
c->options |= MIPS_CPU_WATCH;
if (config1 & MIPS_CONF1_CA)
@@ -673,18 +689,25 @@ static inline unsigned int decode_config3(struct cpuinfo_mips *c)
if (config3 & MIPS_CONF3_SM) {
c->ases |= MIPS_ASE_SMARTMIPS;
- c->options |= MIPS_CPU_RIXI;
+ c->options |= MIPS_CPU_RIXI | MIPS_CPU_CTXTC;
}
if (config3 & MIPS_CONF3_RXI)
c->options |= MIPS_CPU_RIXI;
+ if (config3 & MIPS_CONF3_CTXTC)
+ c->options |= MIPS_CPU_CTXTC;
if (config3 & MIPS_CONF3_DSP)
c->ases |= MIPS_ASE_DSP;
- if (config3 & MIPS_CONF3_DSP2P)
+ if (config3 & MIPS_CONF3_DSP2P) {
c->ases |= MIPS_ASE_DSP2P;
+ if (cpu_has_mips_r6)
+ c->ases |= MIPS_ASE_DSP3;
+ }
if (config3 & MIPS_CONF3_VINT)
c->options |= MIPS_CPU_VINT;
if (config3 & MIPS_CONF3_VEIC)
c->options |= MIPS_CPU_VEIC;
+ if (config3 & MIPS_CONF3_LPA)
+ c->options |= MIPS_CPU_LPA;
if (config3 & MIPS_CONF3_MT)
c->ases |= MIPS_ASE_MIPSMT;
if (config3 & MIPS_CONF3_ULRI)
@@ -695,6 +718,10 @@ static inline unsigned int decode_config3(struct cpuinfo_mips *c)
c->ases |= MIPS_ASE_VZ;
if (config3 & MIPS_CONF3_SC)
c->options |= MIPS_CPU_SEGMENTS;
+ if (config3 & MIPS_CONF3_BI)
+ c->options |= MIPS_CPU_BADINSTR;
+ if (config3 & MIPS_CONF3_BP)
+ c->options |= MIPS_CPU_BADINSTRP;
if (config3 & MIPS_CONF3_MSA)
c->ases |= MIPS_ASE_MSA;
if (config3 & MIPS_CONF3_PW) {
@@ -715,6 +742,7 @@ static inline unsigned int decode_config4(struct cpuinfo_mips *c)
unsigned int newcf4;
unsigned int mmuextdef;
unsigned int ftlb_page = MIPS_CONF4_FTLBPAGESIZE;
+ unsigned long asid_mask;
config4 = read_c0_config4();
@@ -773,7 +801,20 @@ static inline unsigned int decode_config4(struct cpuinfo_mips *c)
}
}
- c->kscratch_mask = (config4 >> 16) & 0xff;
+ c->kscratch_mask = (config4 & MIPS_CONF4_KSCREXIST)
+ >> MIPS_CONF4_KSCREXIST_SHIFT;
+
+ asid_mask = MIPS_ENTRYHI_ASID;
+ if (config4 & MIPS_CONF4_AE)
+ asid_mask |= MIPS_ENTRYHI_ASIDX;
+ set_cpu_asid_mask(c, asid_mask);
+
+ /*
+ * Warn if the computed ASID mask doesn't match the mask the kernel
+ * is built for. This may indicate either a serious problem or an
+ * easy optimisation opportunity, but either way should be addressed.
+ */
+ WARN_ON(asid_mask != cpu_asid_mask(c));
return config4 & MIPS_CONF_M;
}
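
The decode_config4() change above widens the ASID mask when Config4.AE reports extended 10-bit ASIDs, then warns if the result differs from the mask the kernel was built for. A small sketch of just the mask computation; the EntryHi bit positions and the AE bit are stated here as assumptions mirroring MIPS_ENTRYHI_ASID/ASIDX and MIPS_CONF4_AE.

#include <stdint.h>

/* Assumed bit layouts, mirroring MIPS_ENTRYHI_ASID/ASIDX and MIPS_CONF4_AE */
#define ENTRYHI_ASID	0x00fful	/* EntryHi bits 7:0 */
#define ENTRYHI_ASIDX	0x0300ul	/* EntryHi bits 9:8, valid when AE=1 */
#define CONF4_AE	(1u << 28)	/* Config4.AE */

static unsigned long compute_asid_mask(uint32_t config4)
{
	unsigned long mask = ENTRYHI_ASID;	/* classic 8-bit ASID */

	if (config4 & CONF4_AE)
		mask |= ENTRYHI_ASIDX;		/* extended 10-bit ASID */

	return mask;				/* 0xff or 0x3ff */
}
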
@@ -792,10 +833,10 @@ static inline unsigned int decode_config5(struct cpuinfo_mips *c)
c->options |= MIPS_CPU_MAAR;
if (config5 & MIPS_CONF5_LLB)
c->options |= MIPS_CPU_RW_LLB;
-#ifdef CONFIG_XPA
if (config5 & MIPS_CONF5_MVH)
- c->options |= MIPS_CPU_XPA;
-#endif
+ c->options |= MIPS_CPU_MVH;
+ if (cpu_has_mips_r6 && (config5 & MIPS_CONF5_VP))
+ c->options |= MIPS_CPU_VP;
return config5 & MIPS_CONF_M;
}
@@ -826,17 +867,43 @@ static void decode_configs(struct cpuinfo_mips *c)
if (ok)
ok = decode_config5(c);
- mips_probe_watch_registers(c);
-
- if (cpu_has_rixi) {
- /* Enable the RIXI exceptions */
- set_c0_pagegrain(PG_IEC);
- back_to_back_c0_hazard();
- /* Verify the IEC bit is set */
- if (read_c0_pagegrain() & PG_IEC)
- c->options |= MIPS_CPU_RIXIEX;
+ /* Probe the EBase.WG bit */
+ if (cpu_has_mips_r2_r6) {
+ u64 ebase;
+ unsigned int status;
+
+ /* {read,write}_c0_ebase_64() may be UNDEFINED prior to r6 */
+ ebase = cpu_has_mips64r6 ? read_c0_ebase_64()
+ : (s32)read_c0_ebase();
+ if (ebase & MIPS_EBASE_WG) {
+ /* WG bit already set, we can avoid the clumsy probe */
+ c->options |= MIPS_CPU_EBASE_WG;
+ } else {
+			/* It's UNDEFINED to change EBase while BEV=0 */
+ status = read_c0_status();
+ write_c0_status(status | ST0_BEV);
+ irq_enable_hazard();
+ /*
+ * On pre-r6 cores, this may well clobber the upper bits
+ * of EBase. This is hard to avoid without potentially
+ * hitting UNDEFINED dm*c0 behaviour if EBase is 32-bit.
+ */
+ if (cpu_has_mips64r6)
+ write_c0_ebase_64(ebase | MIPS_EBASE_WG);
+ else
+ write_c0_ebase(ebase | MIPS_EBASE_WG);
+ back_to_back_c0_hazard();
+ /* Restore BEV */
+ write_c0_status(status);
+ if (read_c0_ebase() & MIPS_EBASE_WG) {
+ c->options |= MIPS_CPU_EBASE_WG;
+ write_c0_ebase(ebase);
+ }
+ }
}
+ mips_probe_watch_registers(c);
+
#ifndef CONFIG_MIPS_CPS
if (cpu_has_mips_r2_r6) {
c->core = get_ebase_cpunum();
@@ -846,6 +913,235 @@ static void decode_configs(struct cpuinfo_mips *c)
#endif
}
+/*
+ * Probe for certain guest capabilities by writing config bits and reading back.
+ * Finally write back the original value.
+ */
+#define probe_gc0_config(name, maxconf, bits) \
+do { \
+ unsigned int tmp; \
+ tmp = read_gc0_##name(); \
+ write_gc0_##name(tmp | (bits)); \
+ back_to_back_c0_hazard(); \
+ maxconf = read_gc0_##name(); \
+ write_gc0_##name(tmp); \
+} while (0)
+
+/*
+ * Probe for dynamic guest capabilities by changing certain config bits and
+ * reading back to see if they change. Finally write back the original value.
+ */
+#define probe_gc0_config_dyn(name, maxconf, dynconf, bits) \
+do { \
+ maxconf = read_gc0_##name(); \
+ write_gc0_##name(maxconf ^ (bits)); \
+ back_to_back_c0_hazard(); \
+ dynconf = maxconf ^ read_gc0_##name(); \
+ write_gc0_##name(maxconf); \
+ maxconf |= dynconf; \
+} while (0)
+
+static inline unsigned int decode_guest_config0(struct cpuinfo_mips *c)
+{
+ unsigned int config0;
+
+ probe_gc0_config(config, config0, MIPS_CONF_M);
+
+ if (config0 & MIPS_CONF_M)
+ c->guest.conf |= BIT(1);
+ return config0 & MIPS_CONF_M;
+}
+
+static inline unsigned int decode_guest_config1(struct cpuinfo_mips *c)
+{
+ unsigned int config1, config1_dyn;
+
+ probe_gc0_config_dyn(config1, config1, config1_dyn,
+ MIPS_CONF_M | MIPS_CONF1_PC | MIPS_CONF1_WR |
+ MIPS_CONF1_FP);
+
+ if (config1 & MIPS_CONF1_FP)
+ c->guest.options |= MIPS_CPU_FPU;
+ if (config1_dyn & MIPS_CONF1_FP)
+ c->guest.options_dyn |= MIPS_CPU_FPU;
+
+ if (config1 & MIPS_CONF1_WR)
+ c->guest.options |= MIPS_CPU_WATCH;
+ if (config1_dyn & MIPS_CONF1_WR)
+ c->guest.options_dyn |= MIPS_CPU_WATCH;
+
+ if (config1 & MIPS_CONF1_PC)
+ c->guest.options |= MIPS_CPU_PERF;
+ if (config1_dyn & MIPS_CONF1_PC)
+ c->guest.options_dyn |= MIPS_CPU_PERF;
+
+ if (config1 & MIPS_CONF_M)
+ c->guest.conf |= BIT(2);
+ return config1 & MIPS_CONF_M;
+}
+
+static inline unsigned int decode_guest_config2(struct cpuinfo_mips *c)
+{
+ unsigned int config2;
+
+ probe_gc0_config(config2, config2, MIPS_CONF_M);
+
+ if (config2 & MIPS_CONF_M)
+ c->guest.conf |= BIT(3);
+ return config2 & MIPS_CONF_M;
+}
+
+static inline unsigned int decode_guest_config3(struct cpuinfo_mips *c)
+{
+ unsigned int config3, config3_dyn;
+
+ probe_gc0_config_dyn(config3, config3, config3_dyn,
+ MIPS_CONF_M | MIPS_CONF3_MSA | MIPS_CONF3_CTXTC);
+
+ if (config3 & MIPS_CONF3_CTXTC)
+ c->guest.options |= MIPS_CPU_CTXTC;
+ if (config3_dyn & MIPS_CONF3_CTXTC)
+ c->guest.options_dyn |= MIPS_CPU_CTXTC;
+
+ if (config3 & MIPS_CONF3_PW)
+ c->guest.options |= MIPS_CPU_HTW;
+
+ if (config3 & MIPS_CONF3_SC)
+ c->guest.options |= MIPS_CPU_SEGMENTS;
+
+ if (config3 & MIPS_CONF3_BI)
+ c->guest.options |= MIPS_CPU_BADINSTR;
+ if (config3 & MIPS_CONF3_BP)
+ c->guest.options |= MIPS_CPU_BADINSTRP;
+
+ if (config3 & MIPS_CONF3_MSA)
+ c->guest.ases |= MIPS_ASE_MSA;
+ if (config3_dyn & MIPS_CONF3_MSA)
+ c->guest.ases_dyn |= MIPS_ASE_MSA;
+
+ if (config3 & MIPS_CONF_M)
+ c->guest.conf |= BIT(4);
+ return config3 & MIPS_CONF_M;
+}
+
+static inline unsigned int decode_guest_config4(struct cpuinfo_mips *c)
+{
+ unsigned int config4;
+
+ probe_gc0_config(config4, config4,
+ MIPS_CONF_M | MIPS_CONF4_KSCREXIST);
+
+ c->guest.kscratch_mask = (config4 & MIPS_CONF4_KSCREXIST)
+ >> MIPS_CONF4_KSCREXIST_SHIFT;
+
+ if (config4 & MIPS_CONF_M)
+ c->guest.conf |= BIT(5);
+ return config4 & MIPS_CONF_M;
+}
+
+static inline unsigned int decode_guest_config5(struct cpuinfo_mips *c)
+{
+ unsigned int config5, config5_dyn;
+
+ probe_gc0_config_dyn(config5, config5, config5_dyn,
+ MIPS_CONF_M | MIPS_CONF5_MRP);
+
+ if (config5 & MIPS_CONF5_MRP)
+ c->guest.options |= MIPS_CPU_MAAR;
+ if (config5_dyn & MIPS_CONF5_MRP)
+ c->guest.options_dyn |= MIPS_CPU_MAAR;
+
+ if (config5 & MIPS_CONF5_LLB)
+ c->guest.options |= MIPS_CPU_RW_LLB;
+
+ if (config5 & MIPS_CONF_M)
+ c->guest.conf |= BIT(6);
+ return config5 & MIPS_CONF_M;
+}
+
+static inline void decode_guest_configs(struct cpuinfo_mips *c)
+{
+ unsigned int ok;
+
+ ok = decode_guest_config0(c);
+ if (ok)
+ ok = decode_guest_config1(c);
+ if (ok)
+ ok = decode_guest_config2(c);
+ if (ok)
+ ok = decode_guest_config3(c);
+ if (ok)
+ ok = decode_guest_config4(c);
+ if (ok)
+ decode_guest_config5(c);
+}
+
+static inline void cpu_probe_guestctl0(struct cpuinfo_mips *c)
+{
+ unsigned int guestctl0, temp;
+
+ guestctl0 = read_c0_guestctl0();
+
+ if (guestctl0 & MIPS_GCTL0_G0E)
+ c->options |= MIPS_CPU_GUESTCTL0EXT;
+ if (guestctl0 & MIPS_GCTL0_G1)
+ c->options |= MIPS_CPU_GUESTCTL1;
+ if (guestctl0 & MIPS_GCTL0_G2)
+ c->options |= MIPS_CPU_GUESTCTL2;
+ if (!(guestctl0 & MIPS_GCTL0_RAD)) {
+ c->options |= MIPS_CPU_GUESTID;
+
+ /*
+ * Probe for Direct Root to Guest (DRG). Set GuestCtl1.RID = 0
+ * first, otherwise all data accesses will be fully virtualised
+ * as if they were performed by guest mode.
+ */
+ write_c0_guestctl1(0);
+ tlbw_use_hazard();
+
+ write_c0_guestctl0(guestctl0 | MIPS_GCTL0_DRG);
+ back_to_back_c0_hazard();
+ temp = read_c0_guestctl0();
+
+ if (temp & MIPS_GCTL0_DRG) {
+ write_c0_guestctl0(guestctl0);
+ c->options |= MIPS_CPU_DRG;
+ }
+ }
+}
+
+static inline void cpu_probe_guestctl1(struct cpuinfo_mips *c)
+{
+ if (cpu_has_guestid) {
+ /* determine the number of bits of GuestID available */
+ write_c0_guestctl1(MIPS_GCTL1_ID);
+ back_to_back_c0_hazard();
+ c->guestid_mask = (read_c0_guestctl1() & MIPS_GCTL1_ID)
+ >> MIPS_GCTL1_ID_SHIFT;
+ write_c0_guestctl1(0);
+ }
+}
+
+static inline void cpu_probe_gtoffset(struct cpuinfo_mips *c)
+{
+ /* determine the number of bits of GTOffset available */
+ write_c0_gtoffset(0xffffffff);
+ back_to_back_c0_hazard();
+ c->gtoffset_mask = read_c0_gtoffset();
+ write_c0_gtoffset(0);
+}
+
+static inline void cpu_probe_vz(struct cpuinfo_mips *c)
+{
+ cpu_probe_guestctl0(c);
+ if (cpu_has_guestctl1)
+ cpu_probe_guestctl1(c);
+
+ cpu_probe_gtoffset(c);
+
+ decode_guest_configs(c);
+}
+
#define R4K_OPTS (MIPS_CPU_TLB | MIPS_CPU_4KEX | MIPS_CPU_4K_CACHE \
| MIPS_CPU_COUNTER)
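
The probe_gc0_config()/probe_gc0_config_dyn() macros added above discover which guest Config bits are implemented (and which toggle dynamically) by writing a candidate value, reading it back, and restoring the original. The same idea as a plain C helper, a sketch only: the read/write callbacks are hypothetical stand-ins for the generated read_gc0_xxx()/write_gc0_xxx() accessors, and the hazard barrier is elided.

#include <stdint.h>

/* Hypothetical register accessors; the kernel generates these per register */
typedef uint32_t (*reg_read_fn)(void);
typedef void (*reg_write_fn)(uint32_t);

/*
 * Return the subset of @probe_bits that sticks when written, i.e. the bits
 * this (guest) config register actually implements.  The original value is
 * restored before returning, so the probe is side-effect free.  A hazard
 * barrier would sit between the write and the read-back on real hardware.
 */
static uint32_t probe_writable_bits(reg_read_fn rd, reg_write_fn wr,
				    uint32_t probe_bits)
{
	uint32_t orig = rd();
	uint32_t seen;

	wr(orig | probe_bits);
	seen = rd();
	wr(orig);

	return seen & probe_bits;
}
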
@@ -1172,7 +1468,7 @@ static inline void cpu_probe_legacy(struct cpuinfo_mips *c, unsigned int cpu)
set_isa(c, MIPS_CPU_ISA_III);
c->fpu_msk31 |= FPU_CSR_CONDX;
break;
- case PRID_REV_LOONGSON3A:
+ case PRID_REV_LOONGSON3A_R1:
c->cputype = CPU_LOONGSON3;
__cpu_name[cpu] = "ICT Loongson-3";
set_elf_platform(cpu, "loongson3a");
@@ -1314,6 +1610,10 @@ static inline void cpu_probe_mips(struct cpuinfo_mips *c, unsigned int cpu)
c->cputype = CPU_P5600;
__cpu_name[cpu] = "MIPS P5600";
break;
+ case PRID_IMP_P6600:
+ c->cputype = CPU_P6600;
+ __cpu_name[cpu] = "MIPS P6600";
+ break;
case PRID_IMP_I6400:
c->cputype = CPU_I6400;
__cpu_name[cpu] = "MIPS I6400";
@@ -1322,6 +1622,10 @@ static inline void cpu_probe_mips(struct cpuinfo_mips *c, unsigned int cpu)
c->cputype = CPU_M5150;
__cpu_name[cpu] = "MIPS M5150";
break;
+ case PRID_IMP_M6250:
+ c->cputype = CPU_M6250;
+ __cpu_name[cpu] = "MIPS M6250";
+ break;
}
decode_configs(c);
@@ -1435,6 +1739,7 @@ static inline void cpu_probe_broadcom(struct cpuinfo_mips *c, unsigned int cpu)
c->cputype = CPU_BMIPS4380;
__cpu_name[cpu] = "Broadcom BMIPS4380";
set_elf_platform(cpu, "bmips4380");
+ c->options |= MIPS_CPU_RIXI;
} else {
c->cputype = CPU_BMIPS4350;
__cpu_name[cpu] = "Broadcom BMIPS4350";
@@ -1445,9 +1750,12 @@ static inline void cpu_probe_broadcom(struct cpuinfo_mips *c, unsigned int cpu)
case PRID_IMP_BMIPS5000:
case PRID_IMP_BMIPS5200:
c->cputype = CPU_BMIPS5000;
- __cpu_name[cpu] = "Broadcom BMIPS5000";
+ if ((c->processor_id & PRID_IMP_MASK) == PRID_IMP_BMIPS5200)
+ __cpu_name[cpu] = "Broadcom BMIPS5200";
+ else
+ __cpu_name[cpu] = "Broadcom BMIPS5000";
set_elf_platform(cpu, "bmips5000");
- c->options |= MIPS_CPU_ULRI;
+ c->options |= MIPS_CPU_ULRI | MIPS_CPU_RIXI;
break;
}
}
@@ -1481,6 +1789,8 @@ platform:
set_elf_platform(cpu, "octeon2");
break;
case PRID_IMP_CAVIUM_CN70XX:
+ case PRID_IMP_CAVIUM_CN73XX:
+ case PRID_IMP_CAVIUM_CNF75XX:
case PRID_IMP_CAVIUM_CN78XX:
c->cputype = CPU_CAVIUM_OCTEON3;
__cpu_name[cpu] = "Cavium Octeon III";
@@ -1493,6 +1803,29 @@ platform:
}
}
+static inline void cpu_probe_loongson(struct cpuinfo_mips *c, unsigned int cpu)
+{
+ switch (c->processor_id & PRID_IMP_MASK) {
+ case PRID_IMP_LOONGSON_64: /* Loongson-2/3 */
+ switch (c->processor_id & PRID_REV_MASK) {
+ case PRID_REV_LOONGSON3A_R2:
+ c->cputype = CPU_LOONGSON3;
+ __cpu_name[cpu] = "ICT Loongson-3";
+ set_elf_platform(cpu, "loongson3a");
+ set_isa(c, MIPS_CPU_ISA_M64R2);
+ break;
+ }
+
+ decode_configs(c);
+ c->options |= MIPS_CPU_TLBINV | MIPS_CPU_LDPTE;
+ c->writecombine = _CACHE_UNCACHED_ACCELERATED;
+ break;
+ default:
+ panic("Unknown Loongson Processor ID!");
+ break;
+ }
+}
+
static inline void cpu_probe_ingenic(struct cpuinfo_mips *c, unsigned int cpu)
{
decode_configs(c);
@@ -1640,6 +1973,9 @@ void cpu_probe(void)
case PRID_COMP_CAVIUM:
cpu_probe_cavium(c, cpu);
break;
+ case PRID_COMP_LOONGSON:
+ cpu_probe_loongson(c, cpu);
+ break;
case PRID_COMP_INGENIC_D0:
case PRID_COMP_INGENIC_D1:
case PRID_COMP_INGENIC_E1:
@@ -1660,6 +1996,15 @@ void cpu_probe(void)
*/
BUG_ON(current_cpu_type() != c->cputype);
+ if (cpu_has_rixi) {
+ /* Enable the RIXI exceptions */
+ set_c0_pagegrain(PG_IEC);
+ back_to_back_c0_hazard();
+ /* Verify the IEC bit is set */
+ if (read_c0_pagegrain() & PG_IEC)
+ c->options |= MIPS_CPU_RIXIEX;
+ }
+
if (mips_fpu_disabled)
c->options &= ~MIPS_CPU_FPU;
@@ -1699,6 +2044,9 @@ void cpu_probe(void)
elf_hwcap |= HWCAP_MIPS_MSA;
}
+ if (cpu_has_vz)
+ cpu_probe_vz(c);
+
cpu_probe_vmbits(c);
#ifdef CONFIG_64BIT
diff --git a/arch/mips/kernel/crash.c b/arch/mips/kernel/crash.c
index d434d5d5ae6e..610f0f3bdb34 100644
--- a/arch/mips/kernel/crash.c
+++ b/arch/mips/kernel/crash.c
@@ -14,12 +14,22 @@ static int crashing_cpu = -1;
static cpumask_t cpus_in_crash = CPU_MASK_NONE;
#ifdef CONFIG_SMP
-static void crash_shutdown_secondary(void *ignore)
+static void crash_shutdown_secondary(void *passed_regs)
{
- struct pt_regs *regs;
+ struct pt_regs *regs = passed_regs;
int cpu = smp_processor_id();
- regs = task_pt_regs(current);
+ /*
+ * If we are passed registers, use those. Otherwise get the
+ * regs from the last interrupt, which should be correct, as
+ * we are in an interrupt. But if the regs are not there,
+ * pull them from the top of the stack. They are probably
+ * wrong, but we need something to keep from crashing again.
+ */
+ if (!regs)
+ regs = get_irq_regs();
+ if (!regs)
+ regs = task_pt_regs(current);
if (!cpu_online(cpu))
return;
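
crash_shutdown_secondary() now prefers the register set passed along with the IPI, then the current interrupt frame, and only then the (possibly stale) registers saved on the task stack. The fallback order in isolation, as a sketch assuming the usual kernel ptrace/irq_regs helpers:

/*
 * Sketch of the fallback order used above (assumes kernel context for
 * get_irq_regs(), task_pt_regs() and current); not a drop-in replacement.
 */
static struct pt_regs *pick_crash_regs(struct pt_regs *passed_regs)
{
	struct pt_regs *regs = passed_regs;

	if (!regs)
		regs = get_irq_regs();		/* frame from interrupt entry */
	if (!regs)
		regs = task_pt_regs(current);	/* last resort, may be stale */

	return regs;
}
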
diff --git a/arch/mips/kernel/elf.c b/arch/mips/kernel/elf.c
index c3c234dc0c07..891f5ee63983 100644
--- a/arch/mips/kernel/elf.c
+++ b/arch/mips/kernel/elf.c
@@ -88,7 +88,7 @@ int arch_elf_pt_proc(void *_ehdr, void *_phdr, struct file *elf,
elf32 = ehdr->e32.e_ident[EI_CLASS] == ELFCLASS32;
flags = elf32 ? ehdr->e32.e_flags : ehdr->e64.e_flags;
- /* Lets see if this is an O32 ELF */
+ /* Let's see if this is an O32 ELF */
if (elf32) {
if (flags & EF_MIPS_FP64) {
/*
diff --git a/arch/mips/kernel/genex.S b/arch/mips/kernel/genex.S
index baa7b6fc0a60..17326a90d53c 100644
--- a/arch/mips/kernel/genex.S
+++ b/arch/mips/kernel/genex.S
@@ -130,7 +130,7 @@ LEAF(__r4k_wait)
/* end of rollback region (the region size must be power of two) */
1:
jr ra
- nop
+ nop
.set pop
END(__r4k_wait)
@@ -172,7 +172,7 @@ NESTED(handle_int, PT_SIZE, sp)
mfc0 k0, CP0_EPC
.set noreorder
j k0
- rfe
+ rfe
#else
and k0, ST0_IE
bnez k0, 1f
@@ -189,7 +189,7 @@ NESTED(handle_int, PT_SIZE, sp)
LONG_L s0, TI_REGS($28)
LONG_S sp, TI_REGS($28)
PTR_LA ra, ret_from_irq
- PTR_LA v0, plat_irq_dispatch
+ PTR_LA v0, plat_irq_dispatch
jr v0
#ifdef CONFIG_CPU_MICROMIPS
nop
@@ -292,7 +292,7 @@ ejtag_return:
MFC0 k0, CP0_DESAVE
.set mips32
deret
- .set pop
+ .set pop
END(ejtag_debug_handler)
/*
@@ -329,10 +329,10 @@ NESTED(nmi_handler, PT_SIZE, sp)
* Clear BEV - required for page fault exception handler to work
*/
mfc0 k0, CP0_STATUS
- ori k0, k0, ST0_EXL
+ ori k0, k0, ST0_EXL
li k1, ~(ST0_BEV | ST0_ERL)
- and k0, k0, k1
- mtc0 k0, CP0_STATUS
+ and k0, k0, k1
+ mtc0 k0, CP0_STATUS
_ehb
SAVE_ALL
move a0, sp
@@ -396,7 +396,7 @@ NESTED(nmi_handler, PT_SIZE, sp)
.macro __BUILD_count exception
LONG_L t0,exception_count_\exception
- LONG_ADDIU t0, 1
+ LONG_ADDIU t0, 1
LONG_S t0,exception_count_\exception
.comm exception_count\exception, 8, 8
.endm
@@ -455,10 +455,10 @@ NESTED(nmi_handler, PT_SIZE, sp)
.set noreorder
/* check if TLB contains a entry for EPC */
MFC0 k1, CP0_ENTRYHI
- andi k1, 0xff /* ASID_MASK */
+ andi k1, MIPS_ENTRYHI_ASID | MIPS_ENTRYHI_ASIDX
MFC0 k0, CP0_EPC
- PTR_SRL k0, _PAGE_SHIFT + 1
- PTR_SLL k0, _PAGE_SHIFT + 1
+ PTR_SRL k0, _PAGE_SHIFT + 1
+ PTR_SLL k0, _PAGE_SHIFT + 1
or k1, k0
MTC0 k1, CP0_ENTRYHI
mtc0_tlbw_hazard
@@ -478,27 +478,27 @@ NESTED(nmi_handler, PT_SIZE, sp)
/* microMIPS: 0x007d6b3c: rdhwr v1,$29 */
MFC0 k1, CP0_EPC
#if defined(CONFIG_CPU_MICROMIPS) || defined(CONFIG_CPU_MIPS32_R2) || defined(CONFIG_CPU_MIPS64_R2)
- and k0, k1, 1
- beqz k0, 1f
- xor k1, k0
- lhu k0, (k1)
- lhu k1, 2(k1)
- ins k1, k0, 16, 16
- lui k0, 0x007d
- b docheck
- ori k0, 0x6b3c
+ and k0, k1, 1
+ beqz k0, 1f
+ xor k1, k0
+ lhu k0, (k1)
+ lhu k1, 2(k1)
+ ins k1, k0, 16, 16
+ lui k0, 0x007d
+ b docheck
+ ori k0, 0x6b3c
1:
- lui k0, 0x7c03
- lw k1, (k1)
- ori k0, 0xe83b
+ lui k0, 0x7c03
+ lw k1, (k1)
+ ori k0, 0xe83b
#else
- andi k0, k1, 1
- bnez k0, handle_ri
- lui k0, 0x7c03
- lw k1, (k1)
- ori k0, 0xe83b
+ andi k0, k1, 1
+ bnez k0, handle_ri
+ lui k0, 0x7c03
+ lw k1, (k1)
+ ori k0, 0xe83b
#endif
- .set reorder
+ .set reorder
docheck:
bne k0, k1, handle_ri /* if not ours */
diff --git a/arch/mips/kernel/head.S b/arch/mips/kernel/head.S
index 4e4cc5b9a771..56e8fede3fd8 100644
--- a/arch/mips/kernel/head.S
+++ b/arch/mips/kernel/head.S
@@ -21,7 +21,6 @@
#include <asm/asmmacro.h>
#include <asm/irqflags.h>
#include <asm/regdef.h>
-#include <asm/pgtable-bits.h>
#include <asm/mipsregs.h>
#include <asm/stackframe.h>
@@ -132,7 +131,27 @@ not_found:
set_saved_sp sp, t0, t1
PTR_SUBU sp, 4 * SZREG # init stack pointer
+#ifdef CONFIG_RELOCATABLE
+ /* Copy kernel and apply the relocations */
+ jal relocate_kernel
+
+ /* Repoint the sp into the new kernel image */
+ PTR_LI sp, _THREAD_SIZE - 32 - PT_SIZE
+ PTR_ADDU sp, $28
+ set_saved_sp sp, t0, t1
+ PTR_SUBU sp, 4 * SZREG # init stack pointer
+
+ /*
+ * relocate_kernel returns the entry point either
+ * in the relocated kernel or the original if for
+ * some reason relocation failed - jump there now
+ * with instruction hazard barrier because of the
+ * newly sync'd icache.
+ */
+ jr.hb v0
+#else
j start_kernel
+#endif
END(kernel_entry)
#ifdef CONFIG_SMP
diff --git a/arch/mips/kernel/idle.c b/arch/mips/kernel/idle.c
index 46794d64c0bf..60ab4c44d305 100644
--- a/arch/mips/kernel/idle.c
+++ b/arch/mips/kernel/idle.c
@@ -181,6 +181,11 @@ void __init check_wait(void)
case CPU_XLP:
cpu_wait = r4k_wait;
break;
+ case CPU_LOONGSON3:
+ if ((c->processor_id & PRID_REV_MASK) >= PRID_REV_LOONGSON3A_R2)
+ cpu_wait = r4k_wait;
+ break;
+
case CPU_BMIPS5000:
cpu_wait = r4k_wait_irqoff;
break;
diff --git a/arch/mips/kernel/irq.c b/arch/mips/kernel/irq.c
index 8eb5af805964..f25f7eab7307 100644
--- a/arch/mips/kernel/irq.c
+++ b/arch/mips/kernel/irq.c
@@ -54,6 +54,9 @@ void __init init_IRQ(void)
for (i = 0; i < NR_IRQS; i++)
irq_set_noprobe(i);
+ if (cpu_has_veic)
+ clear_c0_status(ST0_IM);
+
arch_init_irq();
}
diff --git a/arch/mips/kernel/mips-r2-to-r6-emul.c b/arch/mips/kernel/mips-r2-to-r6-emul.c
index 3fff89ae760b..7ff2a557f4aa 100644
--- a/arch/mips/kernel/mips-r2-to-r6-emul.c
+++ b/arch/mips/kernel/mips-r2-to-r6-emul.c
@@ -28,6 +28,7 @@
#include <asm/inst.h>
#include <asm/mips-r2-to-r6-emul.h>
#include <asm/local.h>
+#include <asm/mipsregs.h>
#include <asm/ptrace.h>
#include <asm/uaccess.h>
@@ -1251,10 +1252,10 @@ fpu_emul:
" j 10b\n"
" .previous\n"
" .section __ex_table,\"a\"\n"
- " .word 1b,8b\n"
- " .word 2b,8b\n"
- " .word 3b,8b\n"
- " .word 4b,8b\n"
+ STR(PTR) " 1b,8b\n"
+ STR(PTR) " 2b,8b\n"
+ STR(PTR) " 3b,8b\n"
+ STR(PTR) " 4b,8b\n"
" .previous\n"
" .set pop\n"
: "+&r"(rt), "=&r"(rs),
@@ -1326,10 +1327,10 @@ fpu_emul:
" j 10b\n"
" .previous\n"
" .section __ex_table,\"a\"\n"
- " .word 1b,8b\n"
- " .word 2b,8b\n"
- " .word 3b,8b\n"
- " .word 4b,8b\n"
+ STR(PTR) " 1b,8b\n"
+ STR(PTR) " 2b,8b\n"
+ STR(PTR) " 3b,8b\n"
+ STR(PTR) " 4b,8b\n"
" .previous\n"
" .set pop\n"
: "+&r"(rt), "=&r"(rs),
@@ -1397,10 +1398,10 @@ fpu_emul:
" j 9b\n"
" .previous\n"
" .section __ex_table,\"a\"\n"
- " .word 1b,8b\n"
- " .word 2b,8b\n"
- " .word 3b,8b\n"
- " .word 4b,8b\n"
+ STR(PTR) " 1b,8b\n"
+ STR(PTR) " 2b,8b\n"
+ STR(PTR) " 3b,8b\n"
+ STR(PTR) " 4b,8b\n"
" .previous\n"
" .set pop\n"
: "+&r"(rt), "=&r"(rs),
@@ -1467,10 +1468,10 @@ fpu_emul:
" j 9b\n"
" .previous\n"
" .section __ex_table,\"a\"\n"
- " .word 1b,8b\n"
- " .word 2b,8b\n"
- " .word 3b,8b\n"
- " .word 4b,8b\n"
+ STR(PTR) " 1b,8b\n"
+ STR(PTR) " 2b,8b\n"
+ STR(PTR) " 3b,8b\n"
+ STR(PTR) " 4b,8b\n"
" .previous\n"
" .set pop\n"
: "+&r"(rt), "=&r"(rs),
@@ -1582,14 +1583,14 @@ fpu_emul:
" j 9b\n"
" .previous\n"
" .section __ex_table,\"a\"\n"
- " .word 1b,8b\n"
- " .word 2b,8b\n"
- " .word 3b,8b\n"
- " .word 4b,8b\n"
- " .word 5b,8b\n"
- " .word 6b,8b\n"
- " .word 7b,8b\n"
- " .word 0b,8b\n"
+ STR(PTR) " 1b,8b\n"
+ STR(PTR) " 2b,8b\n"
+ STR(PTR) " 3b,8b\n"
+ STR(PTR) " 4b,8b\n"
+ STR(PTR) " 5b,8b\n"
+ STR(PTR) " 6b,8b\n"
+ STR(PTR) " 7b,8b\n"
+ STR(PTR) " 0b,8b\n"
" .previous\n"
" .set pop\n"
: "+&r"(rt), "=&r"(rs),
@@ -1701,14 +1702,14 @@ fpu_emul:
" j 9b\n"
" .previous\n"
" .section __ex_table,\"a\"\n"
- " .word 1b,8b\n"
- " .word 2b,8b\n"
- " .word 3b,8b\n"
- " .word 4b,8b\n"
- " .word 5b,8b\n"
- " .word 6b,8b\n"
- " .word 7b,8b\n"
- " .word 0b,8b\n"
+ STR(PTR) " 1b,8b\n"
+ STR(PTR) " 2b,8b\n"
+ STR(PTR) " 3b,8b\n"
+ STR(PTR) " 4b,8b\n"
+ STR(PTR) " 5b,8b\n"
+ STR(PTR) " 6b,8b\n"
+ STR(PTR) " 7b,8b\n"
+ STR(PTR) " 0b,8b\n"
" .previous\n"
" .set pop\n"
: "+&r"(rt), "=&r"(rs),
@@ -1820,14 +1821,14 @@ fpu_emul:
" j 9b\n"
" .previous\n"
" .section __ex_table,\"a\"\n"
- " .word 1b,8b\n"
- " .word 2b,8b\n"
- " .word 3b,8b\n"
- " .word 4b,8b\n"
- " .word 5b,8b\n"
- " .word 6b,8b\n"
- " .word 7b,8b\n"
- " .word 0b,8b\n"
+ STR(PTR) " 1b,8b\n"
+ STR(PTR) " 2b,8b\n"
+ STR(PTR) " 3b,8b\n"
+ STR(PTR) " 4b,8b\n"
+ STR(PTR) " 5b,8b\n"
+ STR(PTR) " 6b,8b\n"
+ STR(PTR) " 7b,8b\n"
+ STR(PTR) " 0b,8b\n"
" .previous\n"
" .set pop\n"
: "+&r"(rt), "=&r"(rs),
@@ -1938,14 +1939,14 @@ fpu_emul:
" j 9b\n"
" .previous\n"
" .section __ex_table,\"a\"\n"
- " .word 1b,8b\n"
- " .word 2b,8b\n"
- " .word 3b,8b\n"
- " .word 4b,8b\n"
- " .word 5b,8b\n"
- " .word 6b,8b\n"
- " .word 7b,8b\n"
- " .word 0b,8b\n"
+ STR(PTR) " 1b,8b\n"
+ STR(PTR) " 2b,8b\n"
+ STR(PTR) " 3b,8b\n"
+ STR(PTR) " 4b,8b\n"
+ STR(PTR) " 5b,8b\n"
+ STR(PTR) " 6b,8b\n"
+ STR(PTR) " 7b,8b\n"
+ STR(PTR) " 0b,8b\n"
" .previous\n"
" .set pop\n"
: "+&r"(rt), "=&r"(rs),
@@ -2000,7 +2001,7 @@ fpu_emul:
"j 2b\n"
".previous\n"
".section __ex_table,\"a\"\n"
- ".word 1b, 3b\n"
+ STR(PTR) " 1b,3b\n"
".previous\n"
: "=&r"(res), "+&r"(err)
: "r"(vaddr), "i"(SIGSEGV)
@@ -2058,7 +2059,7 @@ fpu_emul:
"j 2b\n"
".previous\n"
".section __ex_table,\"a\"\n"
- ".word 1b, 3b\n"
+ STR(PTR) " 1b,3b\n"
".previous\n"
: "+&r"(res), "+&r"(err)
: "r"(vaddr), "i"(SIGSEGV));
@@ -2119,7 +2120,7 @@ fpu_emul:
"j 2b\n"
".previous\n"
".section __ex_table,\"a\"\n"
- ".word 1b, 3b\n"
+ STR(PTR) " 1b,3b\n"
".previous\n"
: "=&r"(res), "+&r"(err)
: "r"(vaddr), "i"(SIGSEGV)
@@ -2182,7 +2183,7 @@ fpu_emul:
"j 2b\n"
".previous\n"
".section __ex_table,\"a\"\n"
- ".word 1b, 3b\n"
+ STR(PTR) " 1b,3b\n"
".previous\n"
: "+&r"(res), "+&r"(err)
: "r"(vaddr), "i"(SIGSEGV));
@@ -2201,7 +2202,7 @@ fpu_emul:
}
/*
- * Lets not return to userland just yet. It's constly and
+ * Let's not return to userland just yet. It's costly and
* it's likely we have more R2 instructions to emulate
*/
if (!err && (pass++ < MIPS_R2_EMUL_TOTAL_PASS)) {
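
The mips-r2-to-r6-emul.c hunks above replace hard-coded .word exception-table entries with STR(PTR), so each fixup entry is pointer-sized and the table stays valid on 64-bit kernels. A hedged sketch of the pattern for one guarded user load; the labels, operands and fixup body are illustrative, and STR/PTR come from the kernel's asm headers (hence the <asm/mipsregs.h> include added above).

/*
 * Sketch only: one guarded user load whose exception-table entry is emitted
 * with STR(PTR) so it matches the kernel's pointer size.  Assumes kernel
 * context (STR/PTR from the asm headers, SIGSEGV, GCC extended inline asm).
 */
#define LOAD_USER_WORD(res, err, vaddr)				\
	__asm__ __volatile__(					\
	"1:	lw	%0, 0(%2)\n"				\
	"2:\n"							\
	"	.section .fixup,\"ax\"\n"			\
	"3:	li	%1, %3\n"				\
	"	j	2b\n"					\
	"	.previous\n"					\
	"	.section __ex_table,\"a\"\n"			\
	STR(PTR) "	1b, 3b\n"	/* pointer-sized, not .word */	\
	"	.previous\n"					\
	: "=&r"(res), "+&r"(err)				\
	: "r"(vaddr), "i"(SIGSEGV))
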
diff --git a/arch/mips/kernel/module-rela.c b/arch/mips/kernel/module-rela.c
index 9083d63b765c..781168834456 100644
--- a/arch/mips/kernel/module-rela.c
+++ b/arch/mips/kernel/module-rela.c
@@ -16,6 +16,7 @@
* Copyright (C) 2001 Rusty Russell.
* Copyright (C) 2003, 2004 Ralf Baechle (ralf@linux-mips.org)
* Copyright (C) 2005 Thiemo Seufer
+ * Copyright (C) 2015 Imagination Technologies Ltd.
*/
#include <linux/elf.h>
@@ -35,15 +36,13 @@ static int apply_r_mips_32_rela(struct module *me, u32 *location, Elf_Addr v)
static int apply_r_mips_26_rela(struct module *me, u32 *location, Elf_Addr v)
{
if (v % 4) {
- pr_err("module %s: dangerous R_MIPS_26 RELArelocation\n",
+ pr_err("module %s: dangerous R_MIPS_26 RELA relocation\n",
me->name);
return -ENOEXEC;
}
if ((v & 0xf0000000) != (((unsigned long)location + 4) & 0xf0000000)) {
- printk(KERN_ERR
- "module %s: relocation overflow\n",
- me->name);
+ pr_err("module %s: relocation overflow\n", me->name);
return -ENOEXEC;
}
@@ -67,6 +66,48 @@ static int apply_r_mips_lo16_rela(struct module *me, u32 *location, Elf_Addr v)
return 0;
}
+static int apply_r_mips_pc_rela(struct module *me, u32 *location, Elf_Addr v,
+ unsigned bits)
+{
+ unsigned long mask = GENMASK(bits - 1, 0);
+ unsigned long se_bits;
+ long offset;
+
+ if (v % 4) {
+ pr_err("module %s: dangerous R_MIPS_PC%u RELA relocation\n",
+ me->name, bits);
+ return -ENOEXEC;
+ }
+
+ offset = ((long)v - (long)location) >> 2;
+
+	/* check the bits from the sign bit onwards are identical - i.e. we didn't overflow */
+ se_bits = (offset & BIT(bits - 1)) ? ~0ul : 0;
+ if ((offset & ~mask) != (se_bits & ~mask)) {
+ pr_err("module %s: relocation overflow\n", me->name);
+ return -ENOEXEC;
+ }
+
+ *location = (*location & ~mask) | (offset & mask);
+
+ return 0;
+}
+
+static int apply_r_mips_pc16_rela(struct module *me, u32 *location, Elf_Addr v)
+{
+ return apply_r_mips_pc_rela(me, location, v, 16);
+}
+
+static int apply_r_mips_pc21_rela(struct module *me, u32 *location, Elf_Addr v)
+{
+ return apply_r_mips_pc_rela(me, location, v, 21);
+}
+
+static int apply_r_mips_pc26_rela(struct module *me, u32 *location, Elf_Addr v)
+{
+ return apply_r_mips_pc_rela(me, location, v, 26);
+}
+
static int apply_r_mips_64_rela(struct module *me, u32 *location, Elf_Addr v)
{
*(Elf_Addr *)location = v;
@@ -99,9 +140,12 @@ static int (*reloc_handlers_rela[]) (struct module *me, u32 *location,
[R_MIPS_26] = apply_r_mips_26_rela,
[R_MIPS_HI16] = apply_r_mips_hi16_rela,
[R_MIPS_LO16] = apply_r_mips_lo16_rela,
+ [R_MIPS_PC16] = apply_r_mips_pc16_rela,
[R_MIPS_64] = apply_r_mips_64_rela,
[R_MIPS_HIGHER] = apply_r_mips_higher_rela,
- [R_MIPS_HIGHEST] = apply_r_mips_highest_rela
+ [R_MIPS_HIGHEST] = apply_r_mips_highest_rela,
+ [R_MIPS_PC21_S2] = apply_r_mips_pc21_rela,
+ [R_MIPS_PC26_S2] = apply_r_mips_pc26_rela,
};
int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,
@@ -126,11 +170,11 @@ int apply_relocate_add(Elf_Shdr *sechdrs, const char *strtab,
/* This is the symbol it is referring to */
sym = (Elf_Sym *)sechdrs[symindex].sh_addr
+ ELF_MIPS_R_SYM(rel[i]);
- if (IS_ERR_VALUE(sym->st_value)) {
+ if (sym->st_value >= -MAX_ERRNO) {
/* Ignore unresolved weak symbol */
if (ELF_ST_BIND(sym->st_info) == STB_WEAK)
continue;
- printk(KERN_WARNING "%s: Unknown symbol %s\n",
+ pr_warn("%s: Unknown symbol %s\n",
me->name, strtab + sym->st_name);
return -ENOENT;
}
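
apply_r_mips_pc_rela() patches an N-bit PC-relative branch field: the word offset is (v - location) >> 2 and the overflow check requires every bit above the field's sign bit to equal that sign bit. A standalone sketch of the same computation with a hypothetical instruction word and addresses:

#include <stdint.h>
#include <stdio.h>

/* Patch an N-bit PC-relative word-offset field; returns 0 on success */
static int patch_pc_rel(uint32_t *insn, uintptr_t location, uintptr_t target,
			unsigned int bits)
{
	unsigned long mask = (1ul << bits) - 1;		/* GENMASK(bits-1, 0) */
	long offset = ((long)target - (long)location) >> 2;
	unsigned long se = (offset & (1ul << (bits - 1))) ? ~0ul : 0;

	/* all bits above the sign bit must equal the sign bit */
	if (((unsigned long)offset & ~mask) != (se & ~mask))
		return -1;				/* would overflow */

	*insn = (*insn & ~mask) | ((unsigned long)offset & mask);
	return 0;
}

int main(void)
{
	uint32_t insn = 0x10400000;	/* hypothetical beqz v0 with zero offset */

	/* branch forward 0x100 bytes: offset field becomes 0x40 words */
	if (!patch_pc_rel(&insn, 0x80001000, 0x80001100, 16))
		printf("patched: 0x%08x\n", insn);	/* 0x10400040 */
	return 0;
}
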
diff --git a/arch/mips/kernel/module.c b/arch/mips/kernel/module.c
index f9b2936d598d..79850e376ef6 100644
--- a/arch/mips/kernel/module.c
+++ b/arch/mips/kernel/module.c
@@ -73,8 +73,7 @@ static int apply_r_mips_26_rel(struct module *me, u32 *location, Elf_Addr v)
}
if ((v & 0xf0000000) != (((unsigned long)location + 4) & 0xf0000000)) {
- printk(KERN_ERR
- "module %s: relocation overflow\n",
+ pr_err("module %s: relocation overflow\n",
me->name);
return -ENOEXEC;
}
@@ -183,13 +182,62 @@ out_danger:
return -ENOEXEC;
}
+static int apply_r_mips_pc_rel(struct module *me, u32 *location, Elf_Addr v,
+ unsigned bits)
+{
+ unsigned long mask = GENMASK(bits - 1, 0);
+ unsigned long se_bits;
+ long offset;
+
+ if (v % 4) {
+ pr_err("module %s: dangerous R_MIPS_PC%u REL relocation\n",
+ me->name, bits);
+ return -ENOEXEC;
+ }
+
+ /* retrieve & sign extend implicit addend */
+ offset = *location & mask;
+ offset |= (offset & BIT(bits - 1)) ? ~mask : 0;
+
+ offset += ((long)v - (long)location) >> 2;
+
+	/* check the bits from the sign bit onwards are identical - i.e. we didn't overflow */
+ se_bits = (offset & BIT(bits - 1)) ? ~0ul : 0;
+ if ((offset & ~mask) != (se_bits & ~mask)) {
+ pr_err("module %s: relocation overflow\n", me->name);
+ return -ENOEXEC;
+ }
+
+ *location = (*location & ~mask) | (offset & mask);
+
+ return 0;
+}
+
+static int apply_r_mips_pc16_rel(struct module *me, u32 *location, Elf_Addr v)
+{
+ return apply_r_mips_pc_rel(me, location, v, 16);
+}
+
+static int apply_r_mips_pc21_rel(struct module *me, u32 *location, Elf_Addr v)
+{
+ return apply_r_mips_pc_rel(me, location, v, 21);
+}
+
+static int apply_r_mips_pc26_rel(struct module *me, u32 *location, Elf_Addr v)
+{
+ return apply_r_mips_pc_rel(me, location, v, 26);
+}
+
static int (*reloc_handlers_rel[]) (struct module *me, u32 *location,
Elf_Addr v) = {
[R_MIPS_NONE] = apply_r_mips_none,
[R_MIPS_32] = apply_r_mips_32_rel,
[R_MIPS_26] = apply_r_mips_26_rel,
[R_MIPS_HI16] = apply_r_mips_hi16_rel,
- [R_MIPS_LO16] = apply_r_mips_lo16_rel
+ [R_MIPS_LO16] = apply_r_mips_lo16_rel,
+ [R_MIPS_PC16] = apply_r_mips_pc16_rel,
+ [R_MIPS_PC21_S2] = apply_r_mips_pc21_rel,
+ [R_MIPS_PC26_S2] = apply_r_mips_pc26_rel,
};
int apply_relocate(Elf_Shdr *sechdrs, const char *strtab,
@@ -215,12 +263,12 @@ int apply_relocate(Elf_Shdr *sechdrs, const char *strtab,
/* This is the symbol it is referring to */
sym = (Elf_Sym *)sechdrs[symindex].sh_addr
+ ELF_MIPS_R_SYM(rel[i]);
- if (IS_ERR_VALUE(sym->st_value)) {
+ if (sym->st_value >= -MAX_ERRNO) {
/* Ignore unresolved weak symbol */
if (ELF_ST_BIND(sym->st_info) == STB_WEAK)
continue;
- printk(KERN_WARNING "%s: Unknown symbol %s\n",
- me->name, strtab + sym->st_name);
+ pr_warn("%s: Unknown symbol %s\n",
+ me->name, strtab + sym->st_name);
return -ENOENT;
}
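
The REL flavour in module.c differs from the RELA one in that the addend lives in the instruction itself, so it must be extracted from the low bits of the field and sign-extended before the PC-relative delta is added. Just that extraction step, as a sketch with a made-up encoding:

#include <stdint.h>
#include <stdio.h>

/* Sign-extend the low @bits bits of @insn - the implicit REL addend */
static long rel_implicit_addend(uint32_t insn, unsigned int bits)
{
	unsigned long mask = (1ul << bits) - 1;
	long addend = insn & mask;

	if (addend & (1ul << (bits - 1)))	/* negative field value? */
		addend |= ~mask;		/* extend the sign bit upward */

	return addend;
}

int main(void)
{
	/* hypothetical 16-bit field holding 0xfffc: a short backward branch */
	printf("%ld\n", rel_implicit_addend(0x1040fffc, 16));	/* prints -4 */
	return 0;
}
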
diff --git a/arch/mips/kernel/perf_event.c b/arch/mips/kernel/perf_event.c
index c1cf9c6c3f77..d64056e0bb56 100644
--- a/arch/mips/kernel/perf_event.c
+++ b/arch/mips/kernel/perf_event.c
@@ -25,8 +25,8 @@
* the user stack callchains, we will add it here.
*/
-static void save_raw_perf_callchain(struct perf_callchain_entry *entry,
- unsigned long reg29)
+static void save_raw_perf_callchain(struct perf_callchain_entry_ctx *entry,
+ unsigned long reg29)
{
unsigned long *sp = (unsigned long *)reg29;
unsigned long addr;
@@ -35,14 +35,14 @@ static void save_raw_perf_callchain(struct perf_callchain_entry *entry,
addr = *sp++;
if (__kernel_text_address(addr)) {
perf_callchain_store(entry, addr);
- if (entry->nr >= PERF_MAX_STACK_DEPTH)
+ if (entry->nr >= entry->max_stack)
break;
}
}
}
-void perf_callchain_kernel(struct perf_callchain_entry *entry,
- struct pt_regs *regs)
+void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
+ struct pt_regs *regs)
{
unsigned long sp = regs->regs[29];
#ifdef CONFIG_KALLSYMS
@@ -59,7 +59,7 @@ void perf_callchain_kernel(struct perf_callchain_entry *entry,
}
do {
perf_callchain_store(entry, pc);
- if (entry->nr >= PERF_MAX_STACK_DEPTH)
+ if (entry->nr >= entry->max_stack)
break;
pc = unwind_stack(current, &sp, pc, &ra);
} while (pc);
diff --git a/arch/mips/kernel/perf_event_mipsxx.c b/arch/mips/kernel/perf_event_mipsxx.c
index 9bc1191b1ab0..d3ba9f4105b5 100644
--- a/arch/mips/kernel/perf_event_mipsxx.c
+++ b/arch/mips/kernel/perf_event_mipsxx.c
@@ -101,8 +101,6 @@ struct mips_pmu {
static struct mips_pmu mipspmu;
-#define M_CONFIG1_PC (1 << 4)
-
#define M_PERFCTL_EXL (1 << 0)
#define M_PERFCTL_KERNEL (1 << 1)
#define M_PERFCTL_SUPERVISOR (1 << 2)
@@ -754,7 +752,7 @@ static void handle_associated_event(struct cpu_hw_events *cpuc,
static int __n_counters(void)
{
- if (!(read_c0_config1() & M_CONFIG1_PC))
+ if (!cpu_has_perf)
return 0;
if (!(read_c0_perfctrl0() & M_PERFCTL_MORE))
return 1;
@@ -825,6 +823,16 @@ static const struct mips_perf_event mipsxxcore_event_map2
[PERF_COUNT_HW_BRANCH_MISSES] = { 0x27, CNTR_ODD, T },
};
+static const struct mips_perf_event i6400_event_map[PERF_COUNT_HW_MAX] = {
+ [PERF_COUNT_HW_CPU_CYCLES] = { 0x00, CNTR_EVEN | CNTR_ODD },
+ [PERF_COUNT_HW_INSTRUCTIONS] = { 0x01, CNTR_EVEN | CNTR_ODD },
+ /* These only count dcache, not icache */
+ [PERF_COUNT_HW_CACHE_REFERENCES] = { 0x45, CNTR_EVEN | CNTR_ODD },
+ [PERF_COUNT_HW_CACHE_MISSES] = { 0x48, CNTR_EVEN | CNTR_ODD },
+ [PERF_COUNT_HW_BRANCH_INSTRUCTIONS] = { 0x15, CNTR_EVEN | CNTR_ODD },
+ [PERF_COUNT_HW_BRANCH_MISSES] = { 0x16, CNTR_EVEN | CNTR_ODD },
+};
+
static const struct mips_perf_event loongson3_event_map[PERF_COUNT_HW_MAX] = {
[PERF_COUNT_HW_CPU_CYCLES] = { 0x00, CNTR_EVEN },
[PERF_COUNT_HW_INSTRUCTIONS] = { 0x00, CNTR_ODD },
@@ -1015,6 +1023,46 @@ static const struct mips_perf_event mipsxxcore_cache_map2
},
};
+static const struct mips_perf_event i6400_cache_map
+ [PERF_COUNT_HW_CACHE_MAX]
+ [PERF_COUNT_HW_CACHE_OP_MAX]
+ [PERF_COUNT_HW_CACHE_RESULT_MAX] = {
+[C(L1D)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = { 0x46, CNTR_EVEN | CNTR_ODD },
+ [C(RESULT_MISS)] = { 0x49, CNTR_EVEN | CNTR_ODD },
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = { 0x47, CNTR_EVEN | CNTR_ODD },
+ [C(RESULT_MISS)] = { 0x4a, CNTR_EVEN | CNTR_ODD },
+ },
+},
+[C(L1I)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = { 0x84, CNTR_EVEN | CNTR_ODD },
+ [C(RESULT_MISS)] = { 0x85, CNTR_EVEN | CNTR_ODD },
+ },
+},
+[C(DTLB)] = {
+ /* Can't distinguish read & write */
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = { 0x40, CNTR_EVEN | CNTR_ODD },
+ [C(RESULT_MISS)] = { 0x41, CNTR_EVEN | CNTR_ODD },
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = { 0x40, CNTR_EVEN | CNTR_ODD },
+ [C(RESULT_MISS)] = { 0x41, CNTR_EVEN | CNTR_ODD },
+ },
+},
+[C(BPU)] = {
+ /* Conditional branches / mispredicted */
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = { 0x15, CNTR_EVEN | CNTR_ODD },
+ [C(RESULT_MISS)] = { 0x16, CNTR_EVEN | CNTR_ODD },
+ },
+},
+};
+
static const struct mips_perf_event loongson3_cache_map
[PERF_COUNT_HW_CACHE_MAX]
[PERF_COUNT_HW_CACHE_OP_MAX]
@@ -1556,6 +1604,7 @@ static const struct mips_perf_event *mipsxx_pmu_map_raw_event(u64 config)
#endif
break;
case CPU_P5600:
+ case CPU_P6600:
case CPU_I6400:
/* 8-bit event numbers */
raw_id = config & 0x1ff;
@@ -1718,11 +1767,16 @@ init_hw_perf_events(void)
mipspmu.general_event_map = &mipsxxcore_event_map2;
mipspmu.cache_event_map = &mipsxxcore_cache_map2;
break;
- case CPU_I6400:
- mipspmu.name = "mips/I6400";
+ case CPU_P6600:
+ mipspmu.name = "mips/P6600";
mipspmu.general_event_map = &mipsxxcore_event_map2;
mipspmu.cache_event_map = &mipsxxcore_cache_map2;
break;
+ case CPU_I6400:
+ mipspmu.name = "mips/I6400";
+ mipspmu.general_event_map = &i6400_event_map;
+ mipspmu.cache_event_map = &i6400_cache_map;
+ break;
case CPU_1004K:
mipspmu.name = "mips/1004K";
mipspmu.general_event_map = &mipsxxcore_event_map;
diff --git a/arch/mips/kernel/pm-cps.c b/arch/mips/kernel/pm-cps.c
index fa3f9ebad8f4..adda3ffb9b78 100644
--- a/arch/mips/kernel/pm-cps.c
+++ b/arch/mips/kernel/pm-cps.c
@@ -224,11 +224,18 @@ static void __init cps_gen_cache_routine(u32 **pp, struct uasm_label **pl,
uasm_build_label(pl, *pp, lbl);
/* Generate the cache ops */
- for (i = 0; i < unroll_lines; i++)
- uasm_i_cache(pp, op, i * cache->linesz, t0);
+ for (i = 0; i < unroll_lines; i++) {
+ if (cpu_has_mips_r6) {
+ uasm_i_cache(pp, op, 0, t0);
+ uasm_i_addiu(pp, t0, t0, cache->linesz);
+ } else {
+ uasm_i_cache(pp, op, i * cache->linesz, t0);
+ }
+ }
- /* Update the base address */
- uasm_i_addiu(pp, t0, t0, unroll_lines * cache->linesz);
+ if (!cpu_has_mips_r6)
+ /* Update the base address */
+ uasm_i_addiu(pp, t0, t0, unroll_lines * cache->linesz);
/* Loop if we haven't reached the end address yet */
uasm_il_bne(pp, pr, t0, t1, lbl);
diff --git a/arch/mips/kernel/pm.c b/arch/mips/kernel/pm.c
index fefdf39d3df3..dc814892133c 100644
--- a/arch/mips/kernel/pm.c
+++ b/arch/mips/kernel/pm.c
@@ -56,7 +56,7 @@ static void mips_cpu_restore(void)
write_c0_userlocal(current_thread_info()->tp_value);
/* Restore watch registers */
- __restore_watch();
+ __restore_watch(current);
}
/**
diff --git a/arch/mips/kernel/proc.c b/arch/mips/kernel/proc.c
index 298b2b773d12..97dc01b03631 100644
--- a/arch/mips/kernel/proc.c
+++ b/arch/mips/kernel/proc.c
@@ -114,6 +114,7 @@ static int show_cpuinfo(struct seq_file *m, void *v)
if (cpu_has_smartmips) seq_printf(m, "%s", " smartmips");
if (cpu_has_dsp) seq_printf(m, "%s", " dsp");
if (cpu_has_dsp2) seq_printf(m, "%s", " dsp2");
+ if (cpu_has_dsp3) seq_printf(m, "%s", " dsp3");
if (cpu_has_mipsmt) seq_printf(m, "%s", " mt");
if (cpu_has_mmips) seq_printf(m, "%s", " micromips");
if (cpu_has_vz) seq_printf(m, "%s", " vz");
diff --git a/arch/mips/kernel/process.c b/arch/mips/kernel/process.c
index 92880cee449e..813ed7829c61 100644
--- a/arch/mips/kernel/process.c
+++ b/arch/mips/kernel/process.c
@@ -73,14 +73,6 @@ void start_thread(struct pt_regs * regs, unsigned long pc, unsigned long sp)
regs->regs[29] = sp;
}
-void exit_thread(void)
-{
-}
-
-void flush_thread(void)
-{
-}
-
int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
{
/*
@@ -353,7 +345,7 @@ static int get_frame_info(struct mips_frame_info *info)
return 0;
if (info->pc_offset < 0) /* leaf */
return 1;
- /* prologue seems boggus... */
+ /* prologue seems bogus... */
err:
return -1;
}
@@ -455,7 +447,7 @@ unsigned long notrace unwind_stack_by_address(unsigned long stack_page,
*sp + sizeof(*regs) <= stack_page + THREAD_SIZE - 32) {
regs = (struct pt_regs *)*sp;
pc = regs->cp0_epc;
- if (__kernel_text_address(pc)) {
+ if (!user_mode(regs) && __kernel_text_address(pc)) {
*sp = regs->regs[29];
*ra = regs->regs[31];
return pc;
@@ -580,11 +572,19 @@ int mips_get_process_fp_mode(struct task_struct *task)
return value;
}
+static void prepare_for_fp_mode_switch(void *info)
+{
+ struct mm_struct *mm = info;
+
+ if (current->mm == mm)
+ lose_fpu(1);
+}
+
int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
{
const unsigned int known_bits = PR_FP_MODE_FR | PR_FP_MODE_FRE;
- unsigned long switch_count;
struct task_struct *t;
+ int max_users;
/* Check the value is valid */
if (value & ~known_bits)
@@ -601,6 +601,9 @@ int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
if (!(value & PR_FP_MODE_FR) && cpu_has_fpu && cpu_has_mips_r6)
return -EOPNOTSUPP;
+ /* Proceed with the mode switch */
+ preempt_disable();
+
/* Save FP & vector context, then disable FPU & MSA */
if (task->signal == current->signal)
lose_fpu(1);
@@ -610,31 +613,17 @@ int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
smp_mb__after_atomic();
/*
- * If there are multiple online CPUs then wait until all threads whose
- * FP mode is about to change have been context switched. This approach
- * allows us to only worry about whether an FP mode switch is in
- * progress when FP is first used in a tasks time slice. Pretty much all
- * of the mode switch overhead can thus be confined to cases where mode
- * switches are actually occurring. That is, to here. However for the
- * thread performing the mode switch it may take a while...
+ * If there are multiple online CPUs then force any which are running
+ * threads in this process to lose their FPU context, which they can't
+ * regain until fp_mode_switching is cleared later.
*/
if (num_online_cpus() > 1) {
- spin_lock_irq(&task->sighand->siglock);
-
- for_each_thread(task, t) {
- if (t == current)
- continue;
+ /* No need to send an IPI for the local CPU */
+ max_users = (task->mm == current->mm) ? 1 : 0;
- switch_count = t->nvcsw + t->nivcsw;
-
- do {
- spin_unlock_irq(&task->sighand->siglock);
- cond_resched();
- spin_lock_irq(&task->sighand->siglock);
- } while ((t->nvcsw + t->nivcsw) == switch_count);
- }
-
- spin_unlock_irq(&task->sighand->siglock);
+ if (atomic_read(&current->mm->mm_users) > max_users)
+ smp_call_function(prepare_for_fp_mode_switch,
+ (void *)current->mm, 1);
}
/*
@@ -659,6 +648,7 @@ int mips_set_process_fp_mode(struct task_struct *task, unsigned int value)
/* Allow threads to use FP again */
atomic_set(&task->mm->context.fp_mode_switching, 0);
+ preempt_enable();
return 0;
}
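
mips_set_process_fp_mode() now forces any other CPU running a thread of this mm to drop its live FPU context via an IPI instead of waiting for every thread to context switch. The cross-call in isolation, as a sketch: prepare_for_fp_mode_switch() is the callback added above, and the mm_users threshold simply skips the IPI when only the local CPU can be using the mm.

/*
 * Sketch (kernel context assumed): make every other CPU that might be
 * running a thread of @mm discard its live FPU state.  The local CPU has
 * already done lose_fpu(1), so it is excluded from the mm_users threshold.
 */
static void force_remote_fpu_drop(struct mm_struct *mm)
{
	int max_users = (mm == current->mm) ? 1 : 0;

	if (atomic_read(&mm->mm_users) > max_users)
		smp_call_function(prepare_for_fp_mode_switch, mm, 1 /* wait */);
}
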
diff --git a/arch/mips/kernel/ptrace.c b/arch/mips/kernel/ptrace.c
index a5279b2f3198..0dcf69194473 100644
--- a/arch/mips/kernel/ptrace.c
+++ b/arch/mips/kernel/ptrace.c
@@ -57,8 +57,7 @@ static void init_fp_ctx(struct task_struct *target)
/* Begin with data registers set to all 1s... */
memset(&target->thread.fpu.fpr, ~0, sizeof(target->thread.fpu.fpr));
- /* ...and FCSR zeroed */
- target->thread.fpu.fcr31 = 0;
+ /* FCSR has been preset by `mips_set_personality_nan'. */
/*
* Record that the target has "used" math, such that the context
@@ -80,6 +79,22 @@ void ptrace_disable(struct task_struct *child)
}
/*
+ * Poke at FCSR according to its mask. Don't set the cause bits as
+ * this is currently not handled correctly in FP context restoration
+ * and will cause an oops if a corresponding enable bit is set.
+ */
+static void ptrace_setfcr31(struct task_struct *child, u32 value)
+{
+ u32 fcr31;
+ u32 mask;
+
+ value &= ~FPU_CSR_ALL_X;
+ fcr31 = child->thread.fpu.fcr31;
+ mask = boot_cpu_data.fpu_msk31;
+ child->thread.fpu.fcr31 = (value & ~mask) | (fcr31 & mask);
+}
+
+/*
* Read a general register set. We always use the 64-bit format, even
* for 32-bit kernels and for 32-bit processes on a 64-bit kernel.
* Registers are sign extended to fill the available space.
@@ -159,9 +174,7 @@ int ptrace_setfpregs(struct task_struct *child, __u32 __user *data)
{
union fpureg *fregs;
u64 fpr_val;
- u32 fcr31;
u32 value;
- u32 mask;
int i;
if (!access_ok(VERIFY_READ, data, 33 * 8))
@@ -176,9 +189,7 @@ int ptrace_setfpregs(struct task_struct *child, __u32 __user *data)
}
__get_user(value, data + 64);
- fcr31 = child->thread.fpu.fcr31;
- mask = boot_cpu_data.fpu_msk31;
- child->thread.fpu.fcr31 = (value & ~mask) | (fcr31 & mask);
+ ptrace_setfcr31(child, value);
/* FIR may not be written. */
@@ -210,7 +221,8 @@ int ptrace_get_watch_regs(struct task_struct *child,
for (i = 0; i < boot_cpu_data.watch_reg_use_cnt; i++) {
__put_user(child->thread.watch.mips3264.watchlo[i],
&addr->WATCH_STYLE.watchlo[i]);
- __put_user(child->thread.watch.mips3264.watchhi[i] & 0xfff,
+ __put_user(child->thread.watch.mips3264.watchhi[i] &
+ (MIPS_WATCHHI_MASK | MIPS_WATCHHI_IRW),
&addr->WATCH_STYLE.watchhi[i]);
__put_user(boot_cpu_data.watch_reg_masks[i],
&addr->WATCH_STYLE.watch_masks[i]);
@@ -252,12 +264,12 @@ int ptrace_set_watch_regs(struct task_struct *child,
}
#endif
__get_user(ht[i], &addr->WATCH_STYLE.watchhi[i]);
- if (ht[i] & ~0xff8)
+ if (ht[i] & ~MIPS_WATCHHI_MASK)
return -EINVAL;
}
/* Install them. */
for (i = 0; i < boot_cpu_data.watch_reg_use_cnt; i++) {
- if (lt[i] & 7)
+ if (lt[i] & MIPS_WATCHLO_IRW)
watch_active = 1;
child->thread.watch.mips3264.watchlo[i] = lt[i];
/* Set the G bit. */
@@ -805,7 +817,7 @@ long arch_ptrace(struct task_struct *child, long request,
break;
#endif
case FPC_CSR:
- child->thread.fpu.fcr31 = data & ~FPU_CSR_ALL_X;
+ ptrace_setfcr31(child, data);
break;
case DSP_BASE ... DSP_BASE + 5: {
dspreg_t *dregs;
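
The ptrace_setfcr31() helper added earlier in this ptrace.c diff only lets a tracer modify the FCSR bits outside the CPU's read-only mask, and it strips the cause bits so a restored enable bit cannot raise an immediate FP exception. The merge itself, as a tiny sketch in which the read-only mask and cause bits are passed in as parameters:

#include <stdint.h>

/*
 * Sketch: merge a user-supplied FCSR value with the existing one, keeping
 * the read-only bits (@ro_mask, boot_cpu_data.fpu_msk31 in the kernel)
 * unchanged and discarding the cause bits (@cause_bits, FPU_CSR_ALL_X).
 */
static uint32_t merge_fcr31(uint32_t old, uint32_t user_val,
			    uint32_t ro_mask, uint32_t cause_bits)
{
	user_val &= ~cause_bits;
	return (user_val & ~ro_mask) | (old & ro_mask);
}
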
diff --git a/arch/mips/kernel/r4k_fpu.S b/arch/mips/kernel/r4k_fpu.S
index 17732f876eff..56d86b09c917 100644
--- a/arch/mips/kernel/r4k_fpu.S
+++ b/arch/mips/kernel/r4k_fpu.S
@@ -244,17 +244,17 @@ LEAF(\name)
.set push
.set noat
#ifdef CONFIG_64BIT
- copy_u_d \wr, 1
+ copy_s_d \wr, 1
EX sd $1, \off(\base)
#elif defined(CONFIG_CPU_LITTLE_ENDIAN)
- copy_u_w \wr, 2
+ copy_s_w \wr, 2
EX sw $1, \off(\base)
- copy_u_w \wr, 3
+ copy_s_w \wr, 3
EX sw $1, (\off+4)(\base)
#else /* CONFIG_CPU_BIG_ENDIAN */
- copy_u_w \wr, 2
+ copy_s_w \wr, 2
EX sw $1, (\off+4)(\base)
- copy_u_w \wr, 3
+ copy_s_w \wr, 3
EX sw $1, \off(\base)
#endif
.set pop
diff --git a/arch/mips/kernel/r4k_switch.S b/arch/mips/kernel/r4k_switch.S
index 92cd0516ecf5..2f0a3b223c97 100644
--- a/arch/mips/kernel/r4k_switch.S
+++ b/arch/mips/kernel/r4k_switch.S
@@ -15,7 +15,6 @@
#include <asm/fpregdef.h>
#include <asm/mipsregs.h>
#include <asm/asm-offsets.h>
-#include <asm/pgtable-bits.h>
#include <asm/regdef.h>
#include <asm/stackframe.h>
#include <asm/thread_info.h>
diff --git a/arch/mips/kernel/relocate.c b/arch/mips/kernel/relocate.c
new file mode 100644
index 000000000000..ca1cc30c0891
--- /dev/null
+++ b/arch/mips/kernel/relocate.c
@@ -0,0 +1,386 @@
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Support for Kernel relocation at boot time
+ *
+ * Copyright (C) 2015, Imagination Technologies Ltd.
+ * Authors: Matt Redfearn (matt.redfearn@imgtec.com)
+ */
+#include <asm/bootinfo.h>
+#include <asm/cacheflush.h>
+#include <asm/fw/fw.h>
+#include <asm/sections.h>
+#include <asm/setup.h>
+#include <asm/timex.h>
+#include <linux/elf.h>
+#include <linux/kernel.h>
+#include <linux/libfdt.h>
+#include <linux/of_fdt.h>
+#include <linux/sched.h>
+#include <linux/start_kernel.h>
+#include <linux/string.h>
+#include <linux/printk.h>
+
+#define RELOCATED(x) ((void *)((long)x + offset))
+
+extern u32 _relocation_start[]; /* End kernel image / start relocation table */
+extern u32 _relocation_end[]; /* End relocation table */
+
+extern long __start___ex_table; /* Start exception table */
+extern long __stop___ex_table; /* End exception table */
+
+static inline u32 __init get_synci_step(void)
+{
+ u32 res;
+
+ __asm__("rdhwr %0, $1" : "=r" (res));
+
+ return res;
+}
+
+static void __init sync_icache(void *kbase, unsigned long kernel_length)
+{
+ void *kend = kbase + kernel_length;
+ u32 step = get_synci_step();
+
+ do {
+ __asm__ __volatile__(
+ "synci 0(%0)"
+ : /* no output */
+ : "r" (kbase));
+
+ kbase += step;
+ } while (kbase < kend);
+
+ /* Completion barrier */
+ __sync();
+}
+
+static int __init apply_r_mips_64_rel(u32 *loc_orig, u32 *loc_new, long offset)
+{
+ *(u64 *)loc_new += offset;
+
+ return 0;
+}
+
+static int __init apply_r_mips_32_rel(u32 *loc_orig, u32 *loc_new, long offset)
+{
+ *loc_new += offset;
+
+ return 0;
+}
+
+static int __init apply_r_mips_26_rel(u32 *loc_orig, u32 *loc_new, long offset)
+{
+ unsigned long target_addr = (*loc_orig) & 0x03ffffff;
+
+ if (offset % 4) {
+ pr_err("Dangerous R_MIPS_26 REL relocation\n");
+ return -ENOEXEC;
+ }
+
+ /* Original target address */
+ target_addr <<= 2;
+ target_addr += (unsigned long)loc_orig & ~0x03ffffff;
+
+ /* Get the new target address */
+ target_addr += offset;
+
+ if ((target_addr & 0xf0000000) != ((unsigned long)loc_new & 0xf0000000)) {
+ pr_err("R_MIPS_26 REL relocation overflow\n");
+ return -ENOEXEC;
+ }
+
+ target_addr -= (unsigned long)loc_new & ~0x03ffffff;
+ target_addr >>= 2;
+
+ *loc_new = (*loc_new & ~0x03ffffff) | (target_addr & 0x03ffffff);
+
+ return 0;
+}
+
+
+static int __init apply_r_mips_hi16_rel(u32 *loc_orig, u32 *loc_new, long offset)
+{
+ unsigned long insn = *loc_orig;
+ unsigned long target = (insn & 0xffff) << 16; /* high 16bits of target */
+
+ target += offset;
+
+ *loc_new = (insn & ~0xffff) | ((target >> 16) & 0xffff);
+ return 0;
+}
+
+static int (*reloc_handlers_rel[]) (u32 *, u32 *, long) __initdata = {
+ [R_MIPS_64] = apply_r_mips_64_rel,
+ [R_MIPS_32] = apply_r_mips_32_rel,
+ [R_MIPS_26] = apply_r_mips_26_rel,
+ [R_MIPS_HI16] = apply_r_mips_hi16_rel,
+};
+
+int __init do_relocations(void *kbase_old, void *kbase_new, long offset)
+{
+ u32 *r;
+ u32 *loc_orig;
+ u32 *loc_new;
+ int type;
+ int res;
+
+ for (r = _relocation_start; r < _relocation_end; r++) {
+ /* Sentinel for last relocation */
+ if (*r == 0)
+ break;
+
+ type = (*r >> 24) & 0xff;
+ loc_orig = (void *)(kbase_old + ((*r & 0x00ffffff) << 2));
+ loc_new = RELOCATED(loc_orig);
+
+ if (reloc_handlers_rel[type] == NULL) {
+ /* Unsupported relocation */
+ pr_err("Unhandled relocation type %d at 0x%pK\n",
+ type, loc_orig);
+ return -ENOEXEC;
+ }
+
+ res = reloc_handlers_rel[type](loc_orig, loc_new, offset);
+ if (res)
+ return res;
+ }
+
+ return 0;
+}
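
Each entry of the table walked by do_relocations() packs the relocation type into the top byte and the word index of the location to patch into the low 24 bits, with a zero entry terminating the table. A tiny decoder sketch with a made-up entry:

#include <stdint.h>
#include <stdio.h>

struct reloc {
	unsigned int type;	/* R_MIPS_* relocation type */
	unsigned long offset;	/* byte offset of the word to patch */
};

/* Decode one packed entry; returns 0 for the terminating sentinel */
static int decode_reloc_entry(uint32_t entry, struct reloc *out)
{
	if (!entry)
		return 0;				/* end of table */

	out->type = (entry >> 24) & 0xff;
	out->offset = (entry & 0x00ffffff) << 2;	/* stored as a word index */
	return 1;
}

int main(void)
{
	struct reloc r;

	/* hypothetical entry: type 2 (R_MIPS_32) at word index 0x1234 */
	if (decode_reloc_entry((2u << 24) | 0x1234, &r))
		printf("type %u at +0x%lx\n", r.type, r.offset);	/* +0x48d0 */
	return 0;
}
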
+
+/*
+ * The exception table is filled in by the relocs tool after vmlinux is linked.
+ * It must be relocated separately since there will not be any relocation
+ * information for it filled in by the linker.
+ */
+static int __init relocate_exception_table(long offset)
+{
+ unsigned long *etable_start, *etable_end, *e;
+
+ etable_start = RELOCATED(&__start___ex_table);
+ etable_end = RELOCATED(&__stop___ex_table);
+
+ for (e = etable_start; e < etable_end; e++)
+ *e += offset;
+
+ return 0;
+}
+
+#ifdef CONFIG_RANDOMIZE_BASE
+
+static inline __init unsigned long rotate_xor(unsigned long hash,
+ const void *area, size_t size)
+{
+ size_t i;
+ unsigned long *ptr = (unsigned long *)area;
+
+ for (i = 0; i < size / sizeof(hash); i++) {
+ /* Rotate by odd number of bits and XOR. */
+ hash = (hash << ((sizeof(hash) * 8) - 7)) | (hash >> 7);
+ hash ^= ptr[i];
+ }
+
+ return hash;
+}
+
+static inline __init unsigned long get_random_boot(void)
+{
+ unsigned long entropy = random_get_entropy();
+ unsigned long hash = 0;
+
+ /* Attempt to create a simple but unpredictable starting entropy. */
+ hash = rotate_xor(hash, linux_banner, strlen(linux_banner));
+
+ /* Add in any runtime entropy we can get */
+ hash = rotate_xor(hash, &entropy, sizeof(entropy));
+
+#if defined(CONFIG_USE_OF)
+ /* Get any additional entropy passed in device tree */
+ {
+ int node, len;
+ u64 *prop;
+
+ node = fdt_path_offset(initial_boot_params, "/chosen");
+ if (node >= 0) {
+ prop = fdt_getprop_w(initial_boot_params, node,
+ "kaslr-seed", &len);
+ if (prop && (len == sizeof(u64)))
+ hash = rotate_xor(hash, prop, sizeof(*prop));
+ }
+ }
+#endif /* CONFIG_USE_OF */
+
+ return hash;
+}
+
+static inline __init bool kaslr_disabled(void)
+{
+ char *str;
+
+#if defined(CONFIG_CMDLINE_BOOL)
+ const char *builtin_cmdline = CONFIG_CMDLINE;
+
+ str = strstr(builtin_cmdline, "nokaslr");
+ if (str == builtin_cmdline ||
+ (str > builtin_cmdline && *(str - 1) == ' '))
+ return true;
+#endif
+ str = strstr(arcs_cmdline, "nokaslr");
+ if (str == arcs_cmdline || (str > arcs_cmdline && *(str - 1) == ' '))
+ return true;
+
+ return false;
+}
+
+static inline void __init *determine_relocation_address(void)
+{
+ /* Choose a new address for the kernel */
+ unsigned long kernel_length;
+ void *dest = &_text;
+ unsigned long offset;
+
+ if (kaslr_disabled())
+ return dest;
+
+ kernel_length = (long)_end - (long)(&_text);
+
+ offset = get_random_boot() << 16;
+ offset &= (CONFIG_RANDOMIZE_BASE_MAX_OFFSET - 1);
+ if (offset < kernel_length)
+ offset += ALIGN(kernel_length, 0xffff);
+
+ return RELOCATED(dest);
+}
+
+#else
+
+static inline void __init *determine_relocation_address(void)
+{
+ /*
+ * Choose a new address for the kernel
+ * For now we'll hard code the destination
+ */
+ return (void *)0xffffffff81000000;
+}
+
+#endif
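
With CONFIG_RANDOMIZE_BASE, determine_relocation_address() turns the boot-time hash into a 64 KiB-aligned offset, clamps it below CONFIG_RANDOMIZE_BASE_MAX_OFFSET, and pushes it past the end of the image so the relocated kernel cannot overlap the original. The offset arithmetic on its own, as a sketch with hypothetical sizes; the rounding here is to the next 64 KiB boundary.

#include <stdint.h>
#include <stdio.h>

/* Sketch of the offset selection; the sizes and max offset are made up */
static unsigned long pick_kaslr_offset(unsigned long random,
				       unsigned long kernel_length,
				       unsigned long max_offset)
{
	unsigned long offset;

	offset = random << 16;			/* keep 64 KiB alignment */
	offset &= max_offset - 1;		/* max_offset is a power of two */
	if (offset < kernel_length)		/* don't overlap the old image */
		offset += (kernel_length + 0xffff) & ~0xfffful;

	return offset;
}

int main(void)
{
	/* e.g. 8 MiB kernel, 256 MiB window, small random value */
	printf("0x%lx\n", pick_kaslr_offset(0x3, 8 << 20, 256 << 20));	/* 0x830000 */
	return 0;
}
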
+
+static inline int __init relocation_addr_valid(void *loc_new)
+{
+ if ((unsigned long)loc_new & 0x0000ffff) {
+ /* Inappropriately aligned new location */
+ return 0;
+ }
+ if ((unsigned long)loc_new < (unsigned long)&_end) {
+ /* New location overlaps original kernel */
+ return 0;
+ }
+ return 1;
+}
+
+void *__init relocate_kernel(void)
+{
+ void *loc_new;
+ unsigned long kernel_length;
+ unsigned long bss_length;
+ long offset = 0;
+ int res = 1;
+ /* Default to original kernel entry point */
+ void *kernel_entry = start_kernel;
+
+ /* Get the command line */
+ fw_init_cmdline();
+#if defined(CONFIG_USE_OF)
+ /* Deal with the device tree */
+ early_init_dt_scan(plat_get_fdt());
+ if (boot_command_line[0]) {
+ /* Boot command line was passed in device tree */
+ strlcpy(arcs_cmdline, boot_command_line, COMMAND_LINE_SIZE);
+ }
+#endif /* CONFIG_USE_OF */
+
+ kernel_length = (long)(&_relocation_start) - (long)(&_text);
+ bss_length = (long)&__bss_stop - (long)&__bss_start;
+
+ loc_new = determine_relocation_address();
+
+ /* Sanity check relocation address */
+ if (relocation_addr_valid(loc_new))
+ offset = (unsigned long)loc_new - (unsigned long)(&_text);
+
+ /* Reset the command line now so we don't end up with a duplicate */
+ arcs_cmdline[0] = '\0';
+
+ if (offset) {
+		/* Copy the kernel to its new location */
+ memcpy(loc_new, &_text, kernel_length);
+
+ /* Perform relocations on the new kernel */
+ res = do_relocations(&_text, loc_new, offset);
+ if (res < 0)
+ goto out;
+
+ /* Sync the caches ready for execution of new kernel */
+ sync_icache(loc_new, kernel_length);
+
+ res = relocate_exception_table(offset);
+ if (res < 0)
+ goto out;
+
+ /*
+		 * The original .bss has already been cleared, and some
+		 * variables, such as command line parameters, have been
+		 * stored to it, so make a copy in the new location.
+ */
+ memcpy(RELOCATED(&__bss_start), &__bss_start, bss_length);
+
+ /* The current thread is now within the relocated image */
+ __current_thread_info = RELOCATED(&init_thread_union);
+
+ /* Return the new kernel's entry point */
+ kernel_entry = RELOCATED(start_kernel);
+ }
+out:
+ return kernel_entry;
+}
+
+/*
+ * Show relocation information on panic.
+ */
+void show_kernel_relocation(const char *level)
+{
+ unsigned long offset;
+
+ offset = __pa_symbol(_text) - __pa_symbol(VMLINUX_LOAD_ADDRESS);
+
+ if (IS_ENABLED(CONFIG_RELOCATABLE) && offset > 0) {
+ printk(level);
+ pr_cont("Kernel relocated by 0x%pK\n", (void *)offset);
+ pr_cont(" .text @ 0x%pK\n", _text);
+ pr_cont(" .data @ 0x%pK\n", _sdata);
+ pr_cont(" .bss @ 0x%pK\n", __bss_start);
+ }
+}
+
+static int kernel_location_notifier_fn(struct notifier_block *self,
+ unsigned long v, void *p)
+{
+ show_kernel_relocation(KERN_EMERG);
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block kernel_location_notifier = {
+ .notifier_call = kernel_location_notifier_fn
+};
+
+static int __init register_kernel_offset_dumper(void)
+{
+ atomic_notifier_chain_register(&panic_notifier_list,
+ &kernel_location_notifier);
+ return 0;
+}
+__initcall(register_kernel_offset_dumper);
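
[Editorial note] The new relocate.c above derives the KASLR offset by hashing weak entropy sources with rotate_xor() and then shaping the result (shift left by 16 to keep 64 KiB alignment, mask against the configured maximum, and step past the image length so the copy cannot overlap the running kernel). The following user-space sketch only illustrates that arithmetic: it substitutes clock_gettime() for random_get_entropy(), a fixed banner string for linux_banner, and an invented 256 MB maximum in place of CONFIG_RANDOMIZE_BASE_MAX_OFFSET. It is not kernel code.

/* Illustrative sketch of the KASLR offset derivation in relocate.c.
 * Entropy sources and the maximum offset are stand-ins for the real ones. */
#include <stdio.h>
#include <string.h>
#include <time.h>

#define DEMO_MAX_OFFSET (256UL << 20)              /* stand-in for CONFIG_RANDOMIZE_BASE_MAX_OFFSET */
#define DEMO_ALIGN64K(x) (((x) + 0xffffUL) & ~0xffffUL) /* round up to a 64 KiB boundary */

static unsigned long rotate_xor(unsigned long hash, const void *area, size_t size)
{
	const unsigned long *ptr = area;
	size_t i;

	for (i = 0; i < size / sizeof(hash); i++) {
		/* Rotate by an odd number of bits and XOR, as in relocate.c */
		hash = (hash << ((sizeof(hash) * 8) - 7)) | (hash >> 7);
		hash ^= ptr[i];
	}
	return hash;
}

int main(void)
{
	const char banner[] = "Linux version x.y (demo banner string) ............";
	struct timespec ts;
	unsigned long hash = 0, entropy, offset;
	unsigned long kernel_length = 0x800000;    /* pretend image size: 8 MiB */

	clock_gettime(CLOCK_MONOTONIC, &ts);       /* stand-in for random_get_entropy() */
	entropy = (unsigned long)ts.tv_nsec;

	hash = rotate_xor(hash, banner, strlen(banner));
	hash = rotate_xor(hash, &entropy, sizeof(entropy));

	/* Same shaping as determine_relocation_address(): 64 KiB-aligned
	 * offset, bounded, and pushed past the image so the relocated copy
	 * cannot overlap the kernel that is performing the copy. */
	offset = hash << 16;
	offset &= DEMO_MAX_OFFSET - 1;
	if (offset < kernel_length)
		offset += DEMO_ALIGN64K(kernel_length);

	printf("demo relocation offset: 0x%lx\n", offset);
	return 0;
}
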
diff --git a/arch/mips/kernel/scall32-o32.S b/arch/mips/kernel/scall32-o32.S
index d01fe53a6638..c8e43e0c4066 100644
--- a/arch/mips/kernel/scall32-o32.S
+++ b/arch/mips/kernel/scall32-o32.S
@@ -35,7 +35,6 @@ NESTED(handle_sys, PT_SIZE, sp)
lw t1, PT_EPC(sp) # skip syscall on return
- subu v0, v0, __NR_O32_Linux # check syscall number
addiu t1, 4 # skip to next instruction
sw t1, PT_EPC(sp)
@@ -89,6 +88,7 @@ loads_done:
and t0, t1
bnez t0, syscall_trace_entry # -> yes
syscall_common:
+ subu v0, v0, __NR_O32_Linux # check syscall number
sltiu t0, v0, __NR_O32_Linux_syscalls + 1
beqz t0, illegal_syscall
@@ -118,24 +118,23 @@ o32_syscall_exit:
syscall_trace_entry:
SAVE_STATIC
- move s0, v0
move a0, sp
/*
* syscall number is in v0 unless we called syscall(__NR_###)
* where the real syscall number is in a0
*/
- addiu a1, v0, __NR_O32_Linux
- bnez v0, 1f /* __NR_syscall at offset 0 */
+ move a1, v0
+ subu t2, v0, __NR_O32_Linux
+ bnez t2, 1f /* __NR_syscall at offset 0 */
lw a1, PT_R4(sp)
1: jal syscall_trace_enter
bltz v0, 1f # seccomp failed? Skip syscall
- move v0, s0 # restore syscall
-
RESTORE_STATIC
+ lw v0, PT_R2(sp) # Restore syscall (maybe modified)
lw a0, PT_R4(sp) # Restore argument registers
lw a1, PT_R5(sp)
lw a2, PT_R6(sp)
diff --git a/arch/mips/kernel/scall64-64.S b/arch/mips/kernel/scall64-64.S
index 6b73ecc02597..e6ede125059f 100644
--- a/arch/mips/kernel/scall64-64.S
+++ b/arch/mips/kernel/scall64-64.S
@@ -82,15 +82,14 @@ n64_syscall_exit:
syscall_trace_entry:
SAVE_STATIC
- move s0, v0
move a0, sp
move a1, v0
jal syscall_trace_enter
bltz v0, 1f # seccomp failed? Skip syscall
- move v0, s0
RESTORE_STATIC
+ ld v0, PT_R2(sp) # Restore syscall (maybe modified)
ld a0, PT_R4(sp) # Restore argument registers
ld a1, PT_R5(sp)
ld a2, PT_R6(sp)
diff --git a/arch/mips/kernel/scall64-n32.S b/arch/mips/kernel/scall64-n32.S
index 71f99d5f7a06..9c0b387d6427 100644
--- a/arch/mips/kernel/scall64-n32.S
+++ b/arch/mips/kernel/scall64-n32.S
@@ -42,9 +42,6 @@ NESTED(handle_sysn32, PT_SIZE, sp)
#endif
beqz t0, not_n32_scall
- dsll t0, v0, 3 # offset into table
- ld t2, (sysn32_call_table - (__NR_N32_Linux * 8))(t0)
-
sd a3, PT_R26(sp) # save a3 for syscall restarting
li t1, _TIF_WORK_SYSCALL_ENTRY
@@ -53,6 +50,9 @@ NESTED(handle_sysn32, PT_SIZE, sp)
bnez t0, n32_syscall_trace_entry
syscall_common:
+ dsll t0, v0, 3 # offset into table
+ ld t2, (sysn32_call_table - (__NR_N32_Linux * 8))(t0)
+
jalr t2 # Do The Real Thing (TM)
li t0, -EMAXERRNO - 1 # error?
@@ -71,21 +71,25 @@ syscall_common:
n32_syscall_trace_entry:
SAVE_STATIC
- move s0, t2
move a0, sp
move a1, v0
jal syscall_trace_enter
bltz v0, 1f # seccomp failed? Skip syscall
- move t2, s0
RESTORE_STATIC
+ ld v0, PT_R2(sp) # Restore syscall (maybe modified)
ld a0, PT_R4(sp) # Restore argument registers
ld a1, PT_R5(sp)
ld a2, PT_R6(sp)
ld a3, PT_R7(sp)
ld a4, PT_R8(sp)
ld a5, PT_R9(sp)
+
+ dsubu t2, v0, __NR_N32_Linux # check (new) syscall number
+ sltiu t0, t2, __NR_N32_Linux_syscalls + 1
+ beqz t0, not_n32_scall
+
j syscall_common
1: j syscall_exit
diff --git a/arch/mips/kernel/scall64-o32.S b/arch/mips/kernel/scall64-o32.S
index 91b43eea2d5a..f4f28b1580de 100644
--- a/arch/mips/kernel/scall64-o32.S
+++ b/arch/mips/kernel/scall64-o32.S
@@ -52,9 +52,6 @@ NESTED(handle_sys, PT_SIZE, sp)
sll a2, a2, 0
sll a3, a3, 0
- dsll t0, v0, 3 # offset into table
- ld t2, (sys32_call_table - (__NR_O32_Linux * 8))(t0)
-
sd a3, PT_R26(sp) # save a3 for syscall restarting
/*
@@ -88,6 +85,9 @@ loads_done:
bnez t0, trace_a_syscall
syscall_common:
+ dsll t0, v0, 3 # offset into table
+ ld t2, (sys32_call_table - (__NR_O32_Linux * 8))(t0)
+
jalr t2 # Do The Real Thing (TM)
li t0, -EMAXERRNO - 1 # error?
@@ -112,7 +112,6 @@ trace_a_syscall:
sd a6, PT_R10(sp)
sd a7, PT_R11(sp) # For indirect syscalls
- move s0, t2 # Save syscall pointer
move a0, sp
/*
* absolute syscall number is in v0 unless we called syscall(__NR_###)
@@ -133,8 +132,8 @@ trace_a_syscall:
bltz v0, 1f # seccomp failed? Skip syscall
- move t2, s0
RESTORE_STATIC
+ ld v0, PT_R2(sp) # Restore syscall (maybe modified)
ld a0, PT_R4(sp) # Restore argument registers
ld a1, PT_R5(sp)
ld a2, PT_R6(sp)
@@ -143,6 +142,11 @@ trace_a_syscall:
ld a5, PT_R9(sp)
ld a6, PT_R10(sp)
ld a7, PT_R11(sp) # For indirect syscalls
+
+ dsubu t0, v0, __NR_O32_Linux # check (new) syscall number
+ sltiu t0, t0, __NR_O32_Linux_syscalls + 1
+ beqz t0, not_o32_scall
+
j syscall_common
1: j syscall_exit
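
[Editorial note] The common thread of the scall*.S hunks above is that the syscall number is reloaded from the saved pt_regs after syscall_trace_enter() (so a seccomp or ptrace tracer may rewrite it) and then re-checked against the table bounds before being used as an index. A rough C illustration of that "reload then re-validate" pattern, equivalent to the subu/sltiu (or dsubu/sltiu) pairs, is shown below; the base number, table size and handlers are invented for the example.

/* Illustration only: the syscall base, table size and handlers are made up. */
#include <stdio.h>

#define DEMO_NR_BASE      4000UL   /* stands in for __NR_O32_Linux etc. */
#define DEMO_NR_SYSCALLS  3        /* stands in for __NR_*_syscalls */

static long demo_sys_a(void) { return 1; }
static long demo_sys_b(void) { return 2; }
static long demo_sys_c(void) { return 3; }
static long demo_sys_d(void) { return 4; }

static long (*const demo_table[DEMO_NR_SYSCALLS + 1])(void) = {
	demo_sys_a, demo_sys_b, demo_sys_c, demo_sys_d,
};

static long demo_dispatch(unsigned long nr_from_pt_regs)
{
	/* subu/sltiu equivalent: rebase, then one unsigned compare
	 * rejects both too-small and too-large numbers. */
	unsigned long idx = nr_from_pt_regs - DEMO_NR_BASE;

	if (idx > DEMO_NR_SYSCALLS)
		return -1;      /* illegal_syscall / not_*_scall path */
	return demo_table[idx]();
}

int main(void)
{
	printf("%ld\n", demo_dispatch(4001));  /* in range */
	printf("%ld\n", demo_dispatch(9999));  /* tracer wrote nonsense */
	return 0;
}
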
diff --git a/arch/mips/kernel/setup.c b/arch/mips/kernel/setup.c
index 4f607341a793..ef408a03e818 100644
--- a/arch/mips/kernel/setup.c
+++ b/arch/mips/kernel/setup.c
@@ -26,6 +26,7 @@
#include <linux/sizes.h>
#include <linux/device.h>
#include <linux/dma-contiguous.h>
+#include <linux/decompress/generic.h>
#include <asm/addrspace.h>
#include <asm/bootinfo.h>
@@ -52,13 +53,6 @@ struct screen_info screen_info;
#endif
/*
- * Despite it's name this variable is even if we don't have PCI
- */
-unsigned int PCI_DMA_BUS_IS_PHYS;
-
-EXPORT_SYMBOL(PCI_DMA_BUS_IS_PHYS);
-
-/*
* Setup information
*
* These are initialized so they are in the .data section
@@ -250,6 +244,35 @@ disable:
return 0;
}
+/* In some conditions (e.g. big endian bootloader with a little endian
+ kernel), the initrd might appear byte swapped. Try to detect this and
+ byte swap it if needed. */
+static void __init maybe_bswap_initrd(void)
+{
+#if defined(CONFIG_CPU_CAVIUM_OCTEON)
+ u64 buf;
+
+ /* Check for CPIO signature */
+ if (!memcmp((void *)initrd_start, "070701", 6))
+ return;
+
+ /* Check for compressed initrd */
+ if (decompress_method((unsigned char *)initrd_start, 8, NULL))
+ return;
+
+ /* Try again with a byte swapped header */
+ buf = swab64p((u64 *)initrd_start);
+ if (!memcmp(&buf, "070701", 6) ||
+ decompress_method((unsigned char *)(&buf), 8, NULL)) {
+ unsigned long i;
+
+ pr_info("Byteswapped initrd detected\n");
+ for (i = initrd_start; i < ALIGN(initrd_end, 8); i += 8)
+ swab64s((u64 *)i);
+ }
+#endif
+}
+
static void __init finalize_initrd(void)
{
unsigned long size = initrd_end - initrd_start;
@@ -263,6 +286,8 @@ static void __init finalize_initrd(void)
goto disable;
}
+ maybe_bswap_initrd();
+
reserve_bootmem(__pa(initrd_start), size, BOOTMEM_DEFAULT);
initrd_below_start_ok = 1;
@@ -469,6 +494,29 @@ static void __init bootmem_init(void)
*/
reserve_bootmem(PFN_PHYS(mapstart), bootmap_size, BOOTMEM_DEFAULT);
+#ifdef CONFIG_RELOCATABLE
+ /*
+ * The kernel reserves all memory below its _end symbol as bootmem,
+ * but the kernel may now be at a much higher address. The memory
+ * between the original and new locations may be returned to the system.
+ */
+ if (__pa_symbol(_text) > __pa_symbol(VMLINUX_LOAD_ADDRESS)) {
+ unsigned long offset;
+ extern void show_kernel_relocation(const char *level);
+
+ offset = __pa_symbol(_text) - __pa_symbol(VMLINUX_LOAD_ADDRESS);
+ free_bootmem(__pa_symbol(VMLINUX_LOAD_ADDRESS), offset);
+
+#if defined(CONFIG_DEBUG_KERNEL) && defined(CONFIG_DEBUG_INFO)
+ /*
+		 * This information is necessary when debugging the kernel,
+		 * but is a security vulnerability otherwise!
+ */
+ show_kernel_relocation(KERN_INFO);
+#endif
+ }
+#endif
+
/*
* Reserve initrd memory if needed.
*/
@@ -624,6 +672,8 @@ static void __init request_crashkernel(struct resource *res)
#define USE_PROM_CMDLINE IS_ENABLED(CONFIG_MIPS_CMDLINE_FROM_BOOTLOADER)
#define USE_DTB_CMDLINE IS_ENABLED(CONFIG_MIPS_CMDLINE_FROM_DTB)
#define EXTEND_WITH_PROM IS_ENABLED(CONFIG_MIPS_CMDLINE_DTB_EXTEND)
+#define BUILTIN_EXTEND_WITH_PROM \
+ IS_ENABLED(CONFIG_MIPS_CMDLINE_BUILTIN_EXTEND)
static void __init arch_mem_init(char **cmdline_p)
{
@@ -657,15 +707,23 @@ static void __init arch_mem_init(char **cmdline_p)
strlcpy(boot_command_line, arcs_cmdline, COMMAND_LINE_SIZE);
if (EXTEND_WITH_PROM && arcs_cmdline[0]) {
- strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
+ if (boot_command_line[0])
+ strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
strlcat(boot_command_line, arcs_cmdline, COMMAND_LINE_SIZE);
}
#if defined(CONFIG_CMDLINE_BOOL)
if (builtin_cmdline[0]) {
- strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
+ if (boot_command_line[0])
+ strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
strlcat(boot_command_line, builtin_cmdline, COMMAND_LINE_SIZE);
}
+
+ if (BUILTIN_EXTEND_WITH_PROM && arcs_cmdline[0]) {
+ if (boot_command_line[0])
+ strlcat(boot_command_line, " ", COMMAND_LINE_SIZE);
+ strlcat(boot_command_line, arcs_cmdline, COMMAND_LINE_SIZE);
+ }
#endif
#endif
strlcpy(command_line, boot_command_line, COMMAND_LINE_SIZE);
@@ -706,6 +764,9 @@ static void __init arch_mem_init(char **cmdline_p)
for_each_memblock(reserved, reg)
if (reg->size != 0)
reserve_bootmem(reg->base, reg->size, BOOTMEM_DEFAULT);
+
+ reserve_bootmem_region(__pa_symbol(&__nosave_begin),
+ __pa_symbol(&__nosave_end)); /* Reserve for hibernation */
}
static void __init resource_init(void)
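
[Editorial note] The maybe_bswap_initrd() helper added to setup.c above decides whether the initrd was handed over in the wrong byte order by probing for a recognisable header in both orders. A self-contained sketch of that detection follows; it uses the newc cpio magic "070701" and __builtin_bswap64 in place of the kernel's swab64p/swab64s, and omits the decompress_method() probe.

/* Stand-alone sketch of the byte-order probe in maybe_bswap_initrd().
 * Demo data only; not kernel code. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

static int looks_like_cpio(const void *buf)
{
	return memcmp(buf, "070701", 6) == 0;   /* newc cpio magic */
}

static void maybe_bswap_image(uint64_t *img, size_t words)
{
	uint64_t first = __builtin_bswap64(img[0]);
	size_t i;

	if (looks_like_cpio(img))
		return;                         /* already native order */

	if (!looks_like_cpio(&first))
		return;                         /* unrecognised, leave alone */

	printf("Byteswapped initrd detected\n");
	for (i = 0; i < words; i++)             /* swap the whole image */
		img[i] = __builtin_bswap64(img[i]);
}

int main(void)
{
	/* Build an image whose header arrives byte swapped. */
	uint64_t image[2];
	uint64_t native;

	memcpy(&native, "07070100", 8);
	image[0] = __builtin_bswap64(native);
	image[1] = 0;

	maybe_bswap_image(image, 2);
	printf("magic now: %.6s\n", (char *)image);
	return 0;
}
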
diff --git a/arch/mips/kernel/signal.c b/arch/mips/kernel/signal.c
index bf792e2839a6..ae4231452115 100644
--- a/arch/mips/kernel/signal.c
+++ b/arch/mips/kernel/signal.c
@@ -195,6 +195,9 @@ static int restore_msa_extcontext(void __user *buf, unsigned int size)
unsigned int csr;
int i, err;
+ if (!config_enabled(CONFIG_CPU_HAS_MSA))
+ return SIGSYS;
+
if (size != sizeof(*msa))
return -EINVAL;
@@ -398,8 +401,8 @@ int protected_restore_fp_context(void __user *sc)
}
fp_done:
- if (used & USED_EXTCONTEXT)
- err |= restore_extcontext(sc_to_extcontext(sc));
+ if (!err && (used & USED_EXTCONTEXT))
+ err = restore_extcontext(sc_to_extcontext(sc));
return err ?: sig;
}
@@ -767,15 +770,7 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
sigset_t *oldset = sigmask_to_save();
int ret;
struct mips_abi *abi = current->thread.abi;
-#ifdef CONFIG_CPU_MICROMIPS
- void *vdso;
- unsigned long tmp = (unsigned long)current->mm->context.vdso;
-
- set_isa16_mode(tmp);
- vdso = (void *)tmp;
-#else
void *vdso = current->mm->context.vdso;
-#endif
if (regs->regs[0]) {
switch(regs->regs[2]) {
@@ -798,7 +793,7 @@ static void handle_signal(struct ksignal *ksig, struct pt_regs *regs)
regs->regs[0] = 0; /* Don't deal with this again. */
}
- if (sig_uses_siginfo(&ksig->ka))
+ if (sig_uses_siginfo(&ksig->ka, abi))
ret = abi->setup_rt_frame(vdso + abi->vdso->off_rt_sigreturn,
ksig, regs, oldset);
else
diff --git a/arch/mips/kernel/signal32.c b/arch/mips/kernel/signal32.c
index 4909639aa35b..78c8349d151c 100644
--- a/arch/mips/kernel/signal32.c
+++ b/arch/mips/kernel/signal32.c
@@ -227,6 +227,12 @@ int copy_siginfo_to_user32(compat_siginfo_t __user *to, const siginfo_t *from)
err |= __put_user(from->si_uid, &to->si_uid);
err |= __put_user(from->si_int, &to->si_int);
break;
+ case __SI_SYS >> 16:
+ err |= __copy_to_user(&to->si_call_addr, &from->si_call_addr,
+ sizeof(compat_uptr_t));
+ err |= __put_user(from->si_syscall, &to->si_syscall);
+ err |= __put_user(from->si_arch, &to->si_arch);
+ break;
}
}
return err;
diff --git a/arch/mips/kernel/smp-bmips.c b/arch/mips/kernel/smp-bmips.c
index 78cf8c2f1de0..e02addc0307f 100644
--- a/arch/mips/kernel/smp-bmips.c
+++ b/arch/mips/kernel/smp-bmips.c
@@ -243,6 +243,7 @@ static void bmips_init_secondary(void)
break;
case CPU_BMIPS5000:
write_c0_brcm_action(ACTION_CLR_IPI(smp_processor_id(), 0));
+ current_cpu_data.core = (read_c0_brcm_config() >> 25) & 3;
break;
}
}
@@ -565,3 +566,90 @@ asmlinkage void __weak plat_wired_tlb_setup(void)
* once the wired entries are present.
*/
}
+
+void __init bmips_cpu_setup(void)
+{
+ void __iomem __maybe_unused *cbr = BMIPS_GET_CBR();
+ u32 __maybe_unused cfg;
+
+ switch (current_cpu_type()) {
+ case CPU_BMIPS3300:
+ /* Set BIU to async mode */
+ set_c0_brcm_bus_pll(BIT(22));
+ __sync();
+
+ /* put the BIU back in sync mode */
+ clear_c0_brcm_bus_pll(BIT(22));
+
+ /* clear BHTD to enable branch history table */
+ clear_c0_brcm_reset(BIT(16));
+
+ /* Flush and enable RAC */
+ cfg = __raw_readl(cbr + BMIPS_RAC_CONFIG);
+ __raw_writel(cfg | 0x100, BMIPS_RAC_CONFIG);
+ __raw_readl(cbr + BMIPS_RAC_CONFIG);
+
+ cfg = __raw_readl(cbr + BMIPS_RAC_CONFIG);
+ __raw_writel(cfg | 0xf, BMIPS_RAC_CONFIG);
+ __raw_readl(cbr + BMIPS_RAC_CONFIG);
+
+ cfg = __raw_readl(cbr + BMIPS_RAC_ADDRESS_RANGE);
+ __raw_writel(cfg | 0x0fff0000, cbr + BMIPS_RAC_ADDRESS_RANGE);
+ __raw_readl(cbr + BMIPS_RAC_ADDRESS_RANGE);
+ break;
+
+ case CPU_BMIPS4380:
+ /* CBG workaround for early BMIPS4380 CPUs */
+ switch (read_c0_prid()) {
+ case 0x2a040:
+ case 0x2a042:
+ case 0x2a044:
+ case 0x2a060:
+ cfg = __raw_readl(cbr + BMIPS_L2_CONFIG);
+ __raw_writel(cfg & ~0x07000000, cbr + BMIPS_L2_CONFIG);
+ __raw_readl(cbr + BMIPS_L2_CONFIG);
+ }
+
+ /* clear BHTD to enable branch history table */
+ clear_c0_brcm_config_0(BIT(21));
+
+ /* XI/ROTR enable */
+ set_c0_brcm_config_0(BIT(23));
+ set_c0_brcm_cmt_ctrl(BIT(15));
+ break;
+
+ case CPU_BMIPS5000:
+ /* enable RDHWR, BRDHWR */
+ set_c0_brcm_config(BIT(17) | BIT(21));
+
+ /* Disable JTB */
+ __asm__ __volatile__(
+ " .set noreorder\n"
+ " li $8, 0x5a455048\n"
+ " .word 0x4088b00f\n" /* mtc0 t0, $22, 15 */
+ " .word 0x4008b008\n" /* mfc0 t0, $22, 8 */
+ " li $9, 0x00008000\n"
+ " or $8, $8, $9\n"
+ " .word 0x4088b008\n" /* mtc0 t0, $22, 8 */
+ " sync\n"
+ " li $8, 0x0\n"
+ " .word 0x4088b00f\n" /* mtc0 t0, $22, 15 */
+ " .set reorder\n"
+ : : : "$8", "$9");
+
+ /* XI enable */
+ set_c0_brcm_config(BIT(27));
+
+ /* enable MIPS32R2 ROR instruction for XI TLB handlers */
+ __asm__ __volatile__(
+ " li $8, 0x5a455048\n"
+ " .word 0x4088b00f\n" /* mtc0 $8, $22, 15 */
+ " nop; nop; nop\n"
+ " .word 0x4008b008\n" /* mfc0 $8, $22, 8 */
+ " lui $9, 0x0100\n"
+ " or $8, $9\n"
+ " .word 0x4088b008\n" /* mtc0 $8, $22, 8 */
+ : : : "$8", "$9");
+ break;
+ }
+}
diff --git a/arch/mips/kernel/smp-cps.c b/arch/mips/kernel/smp-cps.c
index 253e1409338c..4ed36f288d64 100644
--- a/arch/mips/kernel/smp-cps.c
+++ b/arch/mips/kernel/smp-cps.c
@@ -27,15 +27,27 @@
#include <asm/time.h>
#include <asm/uasm.h>
+static bool threads_disabled;
static DECLARE_BITMAP(core_power, NR_CPUS);
struct core_boot_config *mips_cps_core_bootcfg;
+static int __init setup_nothreads(char *s)
+{
+ threads_disabled = true;
+ return 0;
+}
+early_param("nothreads", setup_nothreads);
+
static unsigned core_vpe_count(unsigned core)
{
unsigned cfg;
- if (!config_enabled(CONFIG_MIPS_MT_SMP) || !cpu_has_mipsmt)
+ if (threads_disabled)
+ return 1;
+
+ if ((!config_enabled(CONFIG_MIPS_MT_SMP) || !cpu_has_mipsmt)
+ && (!config_enabled(CONFIG_CPU_MIPSR6) || !cpu_has_vp))
return 1;
mips_cm_lock_other(core, 0);
@@ -47,11 +59,12 @@ static unsigned core_vpe_count(unsigned core)
static void __init cps_smp_setup(void)
{
unsigned int ncores, nvpes, core_vpes;
+ unsigned long core_entry;
int c, v;
/* Detect & record VPE topology */
ncores = mips_cm_numcores();
- pr_info("VPE topology ");
+ pr_info("%s topology ", cpu_has_mips_r6 ? "VP" : "VPE");
for (c = nvpes = 0; c < ncores; c++) {
core_vpes = core_vpe_count(c);
pr_cont("%c%u", c ? ',' : '{', core_vpes);
@@ -62,7 +75,7 @@ static void __init cps_smp_setup(void)
for (v = 0; v < min_t(int, core_vpes, NR_CPUS - nvpes); v++) {
cpu_data[nvpes + v].core = c;
-#ifdef CONFIG_MIPS_MT_SMP
+#if defined(CONFIG_MIPS_MT_SMP) || defined(CONFIG_CPU_MIPSR6)
cpu_data[nvpes + v].vpe_id = v;
#endif
}
@@ -91,6 +104,11 @@ static void __init cps_smp_setup(void)
/* Make core 0 coherent with everything */
write_gcr_cl_coherence(0xff);
+ if (mips_cm_revision() >= CM_REV_CM3) {
+ core_entry = CKSEG1ADDR((unsigned long)mips_cps_core_entry);
+ write_gcr_bev_base(core_entry);
+ }
+
#ifdef CONFIG_MIPS_MT_FPAFF
/* If we have an FPU, enroll ourselves in the FPU-full mask */
if (cpu_has_fpu)
@@ -213,6 +231,18 @@ static void boot_core(unsigned core)
if (mips_cpc_present()) {
/* Reset the core */
mips_cpc_lock_other(core);
+
+ if (mips_cm_revision() >= CM_REV_CM3) {
+ /* Run VP0 following the reset */
+ write_cpc_co_vp_run(0x1);
+
+ /*
+ * Ensure that the VP_RUN register is written before the
+ * core leaves reset.
+ */
+ wmb();
+ }
+
write_cpc_co_cmd(CPC_Cx_CMD_RESET);
timeout = 100;
@@ -250,7 +280,10 @@ static void boot_core(unsigned core)
static void remote_vpe_boot(void *dummy)
{
- mips_cps_boot_vpes();
+ unsigned core = current_cpu_data.core;
+ struct core_boot_config *core_cfg = &mips_cps_core_bootcfg[core];
+
+ mips_cps_boot_vpes(core_cfg, cpu_vpe_id(&current_cpu_data));
}
static void cps_boot_secondary(int cpu, struct task_struct *idle)
@@ -259,6 +292,7 @@ static void cps_boot_secondary(int cpu, struct task_struct *idle)
unsigned vpe_id = cpu_vpe_id(&cpu_data[cpu]);
struct core_boot_config *core_cfg = &mips_cps_core_bootcfg[core];
struct vpe_boot_config *vpe_cfg = &core_cfg->vpe_config[vpe_id];
+ unsigned long core_entry;
unsigned int remote;
int err;
@@ -276,6 +310,13 @@ static void cps_boot_secondary(int cpu, struct task_struct *idle)
goto out;
}
+ if (cpu_has_vp) {
+ mips_cm_lock_other(core, vpe_id);
+ core_entry = CKSEG1ADDR((unsigned long)mips_cps_core_entry);
+ write_gcr_co_reset_base(core_entry);
+ mips_cm_unlock_other();
+ }
+
if (core != current_cpu_data.core) {
/* Boot a VPE on another powered up core */
for (remote = 0; remote < NR_CPUS; remote++) {
@@ -293,10 +334,10 @@ static void cps_boot_secondary(int cpu, struct task_struct *idle)
goto out;
}
- BUG_ON(!cpu_has_mipsmt);
+ BUG_ON(!cpu_has_mipsmt && !cpu_has_vp);
/* Boot a VPE on this core */
- mips_cps_boot_vpes();
+ mips_cps_boot_vpes(core_cfg, vpe_id);
out:
preempt_enable();
}
@@ -307,8 +348,23 @@ static void cps_init_secondary(void)
if (cpu_has_mipsmt)
dmt();
- change_c0_status(ST0_IM, STATUSF_IP2 | STATUSF_IP3 | STATUSF_IP4 |
- STATUSF_IP5 | STATUSF_IP6 | STATUSF_IP7);
+ if (mips_cm_revision() >= CM_REV_CM3) {
+ unsigned ident = gic_read_local_vp_id();
+
+ /*
+ * Ensure that our calculation of the VP ID matches up with
+ * what the GIC reports, otherwise we'll have configured
+ * interrupts incorrectly.
+ */
+ BUG_ON(ident != mips_cm_vp_id(smp_processor_id()));
+ }
+
+ if (cpu_has_veic)
+ clear_c0_status(ST0_IM);
+ else
+ change_c0_status(ST0_IM, STATUSF_IP2 | STATUSF_IP3 |
+ STATUSF_IP4 | STATUSF_IP5 |
+ STATUSF_IP6 | STATUSF_IP7);
}
static void cps_smp_finish(void)
diff --git a/arch/mips/kernel/smp.c b/arch/mips/kernel/smp.c
index 27cb638f0824..f9d01e953acb 100644
--- a/arch/mips/kernel/smp.c
+++ b/arch/mips/kernel/smp.c
@@ -243,18 +243,6 @@ static int __init mips_smp_ipi_init(void)
struct irq_domain *ipidomain;
struct device_node *node;
- /*
- * In some cases like qemu-malta, it is desired to try SMP with
- * a single core. Qemu-malta has no GIC, so an attempt to set any IPIs
- * would cause a BUG_ON() to be triggered since there's no ipidomain.
- *
- * Since for a single core system IPIs aren't required really, skip the
- * initialisation which should generally keep any such configurations
- * happy and only fail hard when trying to truely run SMP.
- */
- if (cpumask_weight(cpu_possible_mask) == 1)
- return 0;
-
node = of_irq_find_parent(of_root);
ipidomain = irq_find_matching_host(node, DOMAIN_BUS_IPI);
@@ -266,7 +254,17 @@ static int __init mips_smp_ipi_init(void)
if (node && !ipidomain)
ipidomain = irq_find_matching_host(NULL, DOMAIN_BUS_IPI);
- BUG_ON(!ipidomain);
+ /*
+ * There are systems which only use IPI domains some of the time,
+ * depending upon configuration we don't know until runtime. An
+ * example is Malta where we may compile in support for GIC & the
+ * MT ASE, but run on a system which has multiple VPEs in a single
+ * core and doesn't include a GIC. Until all IPI implementations
+ * have been converted to use IPI domains the best we can do here
+ * is to return & hope some other code sets up the IPIs.
+ */
+ if (!ipidomain)
+ return 0;
call_virq = irq_reserve_ipi(ipidomain, cpu_possible_mask);
BUG_ON(!call_virq);
diff --git a/arch/mips/kernel/spram.c b/arch/mips/kernel/spram.c
index 8489c88f9932..d6e6cf75114d 100644
--- a/arch/mips/kernel/spram.c
+++ b/arch/mips/kernel/spram.c
@@ -210,6 +210,7 @@ void spram_config(void)
case CPU_P5600:
case CPU_QEMU_GENERIC:
case CPU_I6400:
+ case CPU_P6600:
config0 = read_c0_config();
/* FIXME: addresses are Malta specific */
if (config0 & (1<<24)) {
diff --git a/arch/mips/kernel/traps.c b/arch/mips/kernel/traps.c
index ae0c89d23ad7..4a1712b5abdf 100644
--- a/arch/mips/kernel/traps.c
+++ b/arch/mips/kernel/traps.c
@@ -145,7 +145,7 @@ static void show_backtrace(struct task_struct *task, const struct pt_regs *regs)
if (!task)
task = current;
- if (raw_show_trace || !__kernel_text_address(pc)) {
+ if (raw_show_trace || user_mode(regs) || !__kernel_text_address(pc)) {
show_raw_backtrace(sp);
return;
}
@@ -399,11 +399,8 @@ void __noreturn die(const char *str, struct pt_regs *regs)
if (in_interrupt())
panic("Fatal exception in interrupt");
- if (panic_on_oops) {
- printk(KERN_EMERG "Fatal exception: panic in 5 seconds");
- ssleep(5);
+ if (panic_on_oops)
panic("Fatal exception");
- }
if (regs && kexec_should_crash(current))
crash_kexec(regs);
@@ -1249,7 +1246,7 @@ static int enable_restore_fp_context(int msa)
err = init_fpu();
if (msa && !err) {
enable_msa();
- _init_msa_upper();
+ init_msa_upper();
set_thread_flag(TIF_USEDMSA);
set_thread_flag(TIF_MSA_CTX_LIVE);
}
@@ -1312,7 +1309,7 @@ static int enable_restore_fp_context(int msa)
*/
prior_msa = test_and_set_thread_flag(TIF_MSA_CTX_LIVE);
if (!prior_msa && was_fpu_owner) {
- _init_msa_upper();
+ init_msa_upper();
goto out;
}
@@ -1329,7 +1326,7 @@ static int enable_restore_fp_context(int msa)
* of each vector register such that it cannot see data left
* behind by another task.
*/
- _init_msa_upper();
+ init_msa_upper();
} else {
/* We need to restore the vector context. */
restore_msa(current);
@@ -1356,7 +1353,6 @@ asmlinkage void do_cpu(struct pt_regs *regs)
unsigned long fcr31;
unsigned int cpid;
int status, err;
- unsigned long __maybe_unused flags;
int sig;
prev_state = exception_enter();
@@ -1501,16 +1497,13 @@ asmlinkage void do_watch(struct pt_regs *regs)
{
siginfo_t info = { .si_signo = SIGTRAP, .si_code = TRAP_HWBKPT };
enum ctx_state prev_state;
- u32 cause;
prev_state = exception_enter();
/*
* Clear WP (bit 22) bit of cause register so we don't loop
* forever.
*/
- cause = read_c0_cause();
- cause &= ~(1 << 22);
- write_c0_cause(cause);
+ clear_c0_cause(CAUSEF_WP);
/*
* If the current thread has the watch registers loaded, save
@@ -1647,6 +1640,7 @@ static inline void parity_protection_init(void)
case CPU_P5600:
case CPU_QEMU_GENERIC:
case CPU_I6400:
+ case CPU_P6600:
{
#define ERRCTL_PE 0x80000000
#define ERRCTL_L2P 0x00800000
@@ -1777,7 +1771,8 @@ asmlinkage void do_ftlb(void)
/* For the moment, report the problem and hang. */
if ((cpu_has_mips_r2_r6) &&
- ((current_cpu_data.processor_id & 0xff0000) == PRID_COMP_MIPS)) {
+ (((current_cpu_data.processor_id & 0xff0000) == PRID_COMP_MIPS) ||
+ ((current_cpu_data.processor_id & 0xff0000) == PRID_COMP_LOONGSON))) {
pr_err("FTLB error exception, cp0_ecc=0x%08x:\n",
read_c0_ecc());
pr_err("cp0_errorepc == %0*lx\n", field, read_c0_errorepc());
@@ -2119,6 +2114,13 @@ void per_cpu_trap_init(bool is_boot_cpu)
* o read IntCtl.IPFDC to determine the fast debug channel interrupt
*/
if (cpu_has_mips_r2_r6) {
+ /*
+ * We shouldn't trust a secondary core has a sane EBASE register
+ * so use the one calculated by the boot CPU.
+ */
+ if (!is_boot_cpu)
+ write_c0_ebase(ebase);
+
cp0_compare_irq_shift = CAUSEB_TI - CAUSEB_IP;
cp0_compare_irq = (read_c0_intctl() >> INTCTLB_IPTI) & 7;
cp0_perfcount_irq = (read_c0_intctl() >> INTCTLB_IPPCI) & 7;
@@ -2134,7 +2136,7 @@ void per_cpu_trap_init(bool is_boot_cpu)
}
if (!cpu_data[cpu].asid_cache)
- cpu_data[cpu].asid_cache = ASID_FIRST_VERSION;
+ cpu_data[cpu].asid_cache = asid_first_version(cpu);
atomic_inc(&init_mm.mm_count);
current->active_mm = &init_mm;
diff --git a/arch/mips/kernel/unaligned.c b/arch/mips/kernel/unaligned.c
index 5c62065cbf22..28b3af73a17b 100644
--- a/arch/mips/kernel/unaligned.c
+++ b/arch/mips/kernel/unaligned.c
@@ -1191,6 +1191,7 @@ static void emulate_load_store_insn(struct pt_regs *regs,
case ldc1_op:
case swc1_op:
case sdc1_op:
+ case cop1x_op:
die_if_kernel("Unaligned FP access in kernel code", regs);
BUG_ON(!used_math());
diff --git a/arch/mips/kernel/vdso.c b/arch/mips/kernel/vdso.c
index 975e99759bab..54e1663ce639 100644
--- a/arch/mips/kernel/vdso.c
+++ b/arch/mips/kernel/vdso.c
@@ -104,7 +104,8 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
struct resource gic_res;
int ret;
- down_write(&mm->mmap_sem);
+ if (down_write_killable(&mm->mmap_sem))
+ return -EINTR;
/*
* Determine total area size. This includes the VDSO data itself, the
diff --git a/arch/mips/kernel/vmlinux.lds.S b/arch/mips/kernel/vmlinux.lds.S
index 54d653ee17e1..a82c178d0bb9 100644
--- a/arch/mips/kernel/vmlinux.lds.S
+++ b/arch/mips/kernel/vmlinux.lds.S
@@ -136,6 +136,27 @@ SECTIONS
#ifdef CONFIG_SMP
PERCPU_SECTION(1 << CONFIG_MIPS_L1_CACHE_SHIFT)
#endif
+
+#ifdef CONFIG_RELOCATABLE
+ . = ALIGN(4);
+
+ .data.reloc : {
+ _relocation_start = .;
+ /*
+ * Space for relocation table
+ * This needs to be filled so that the
+ * relocs tool can overwrite the content.
+ * An invalid value is left at the start of the
+ * section to abort relocation if the table
+ * has not been filled in.
+ */
+ LONG(0xFFFFFFFF);
+ FILL(0);
+ . += CONFIG_RELOCATION_TABLE_SIZE - 4;
+ _relocation_end = .;
+ }
+#endif
+
#ifdef CONFIG_MIPS_RAW_APPENDED_DTB
__appended_dtb = .;
/* leave space for appended DTB */
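
[Editorial note] The .data.reloc section added above reserves CONFIG_RELOCATION_TABLE_SIZE bytes and deliberately begins with 0xFFFFFFFF, so an image whose table was never populated by the relocs tool can be recognised and relocation refused. The sketch below only illustrates such a sentinel check; the table layout, entry handling and names are assumptions for the example, not the actual relocate.c parser.

/* Hypothetical sentinel check, consistent with the linker-script comment
 * above: a table still starting with 0xFFFFFFFF was never filled in. */
#include <stdio.h>
#include <stdint.h>
#include <errno.h>

static int apply_relocations(const uint32_t *table, size_t entries)
{
	size_t i;

	if (entries == 0 || table[0] == 0xFFFFFFFFu)
		return -ENOEXEC;        /* unfilled table: refuse to relocate */

	for (i = 0; i < entries && table[i]; i++)
		printf("would process relocation entry 0x%08x\n", table[i]);

	return 0;
}

int main(void)
{
	const uint32_t unfilled[] = { 0xFFFFFFFFu, 0 };
	const uint32_t filled[]   = { 0x1000, 0x1008, 0 };

	printf("unfilled -> %d\n", apply_relocations(unfilled, 2));
	printf("filled   -> %d\n", apply_relocations(filled, 3));
	return 0;
}
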
diff --git a/arch/mips/kernel/watch.c b/arch/mips/kernel/watch.c
index 2a03abb5bd2c..19fcab7348b1 100644
--- a/arch/mips/kernel/watch.c
+++ b/arch/mips/kernel/watch.c
@@ -15,10 +15,9 @@
* Install the watch registers for the current thread. A maximum of
* four registers are installed although the machine may have more.
*/
-void mips_install_watch_registers(void)
+void mips_install_watch_registers(struct task_struct *t)
{
- struct mips3264_watch_reg_state *watches =
- &current->thread.watch.mips3264;
+ struct mips3264_watch_reg_state *watches = &t->thread.watch.mips3264;
switch (current_cpu_data.watch_reg_use_cnt) {
default:
BUG();
@@ -26,16 +25,20 @@ void mips_install_watch_registers(void)
write_c0_watchlo3(watches->watchlo[3]);
/* Write 1 to the I, R, and W bits to clear them, and
1 to G so all ASIDs are trapped. */
- write_c0_watchhi3(0x40000007 | watches->watchhi[3]);
+ write_c0_watchhi3(MIPS_WATCHHI_G | MIPS_WATCHHI_IRW |
+ watches->watchhi[3]);
case 3:
write_c0_watchlo2(watches->watchlo[2]);
- write_c0_watchhi2(0x40000007 | watches->watchhi[2]);
+ write_c0_watchhi2(MIPS_WATCHHI_G | MIPS_WATCHHI_IRW |
+ watches->watchhi[2]);
case 2:
write_c0_watchlo1(watches->watchlo[1]);
- write_c0_watchhi1(0x40000007 | watches->watchhi[1]);
+ write_c0_watchhi1(MIPS_WATCHHI_G | MIPS_WATCHHI_IRW |
+ watches->watchhi[1]);
case 1:
write_c0_watchlo0(watches->watchlo[0]);
- write_c0_watchhi0(0x40000007 | watches->watchhi[0]);
+ write_c0_watchhi0(MIPS_WATCHHI_G | MIPS_WATCHHI_IRW |
+ watches->watchhi[0]);
}
}
@@ -52,22 +55,26 @@ void mips_read_watch_registers(void)
default:
BUG();
case 4:
- watches->watchhi[3] = (read_c0_watchhi3() & 0x0fff);
+ watches->watchhi[3] = (read_c0_watchhi3() &
+ (MIPS_WATCHHI_MASK | MIPS_WATCHHI_IRW));
case 3:
- watches->watchhi[2] = (read_c0_watchhi2() & 0x0fff);
+ watches->watchhi[2] = (read_c0_watchhi2() &
+ (MIPS_WATCHHI_MASK | MIPS_WATCHHI_IRW));
case 2:
- watches->watchhi[1] = (read_c0_watchhi1() & 0x0fff);
+ watches->watchhi[1] = (read_c0_watchhi1() &
+ (MIPS_WATCHHI_MASK | MIPS_WATCHHI_IRW));
case 1:
- watches->watchhi[0] = (read_c0_watchhi0() & 0x0fff);
+ watches->watchhi[0] = (read_c0_watchhi0() &
+ (MIPS_WATCHHI_MASK | MIPS_WATCHHI_IRW));
}
if (current_cpu_data.watch_reg_use_cnt == 1 &&
- (watches->watchhi[0] & 7) == 0) {
+ (watches->watchhi[0] & MIPS_WATCHHI_IRW) == 0) {
/* Pathological case of release 1 architecture that
* doesn't set the condition bits. We assume that
* since we got here, the watch condition was met and
* signal that the conditions requested in watchlo
* were met. */
- watches->watchhi[0] |= (watches->watchlo[0] & 7);
+ watches->watchhi[0] |= (watches->watchlo[0] & MIPS_WATCHHI_IRW);
}
}
@@ -110,86 +117,86 @@ void mips_probe_watch_registers(struct cpuinfo_mips *c)
* Check which of the I,R and W bits are supported, then
* disable the register.
*/
- write_c0_watchlo0(7);
+ write_c0_watchlo0(MIPS_WATCHLO_IRW);
back_to_back_c0_hazard();
t = read_c0_watchlo0();
write_c0_watchlo0(0);
- c->watch_reg_masks[0] = t & 7;
+ c->watch_reg_masks[0] = t & MIPS_WATCHLO_IRW;
/* Write the mask bits and read them back to determine which
* can be used. */
c->watch_reg_count = 1;
c->watch_reg_use_cnt = 1;
t = read_c0_watchhi0();
- write_c0_watchhi0(t | 0xff8);
+ write_c0_watchhi0(t | MIPS_WATCHHI_MASK);
back_to_back_c0_hazard();
t = read_c0_watchhi0();
- c->watch_reg_masks[0] |= (t & 0xff8);
- if ((t & 0x80000000) == 0)
+ c->watch_reg_masks[0] |= (t & MIPS_WATCHHI_MASK);
+ if ((t & MIPS_WATCHHI_M) == 0)
return;
- write_c0_watchlo1(7);
+ write_c0_watchlo1(MIPS_WATCHLO_IRW);
back_to_back_c0_hazard();
t = read_c0_watchlo1();
write_c0_watchlo1(0);
- c->watch_reg_masks[1] = t & 7;
+ c->watch_reg_masks[1] = t & MIPS_WATCHLO_IRW;
c->watch_reg_count = 2;
c->watch_reg_use_cnt = 2;
t = read_c0_watchhi1();
- write_c0_watchhi1(t | 0xff8);
+ write_c0_watchhi1(t | MIPS_WATCHHI_MASK);
back_to_back_c0_hazard();
t = read_c0_watchhi1();
- c->watch_reg_masks[1] |= (t & 0xff8);
- if ((t & 0x80000000) == 0)
+ c->watch_reg_masks[1] |= (t & MIPS_WATCHHI_MASK);
+ if ((t & MIPS_WATCHHI_M) == 0)
return;
- write_c0_watchlo2(7);
+ write_c0_watchlo2(MIPS_WATCHLO_IRW);
back_to_back_c0_hazard();
t = read_c0_watchlo2();
write_c0_watchlo2(0);
- c->watch_reg_masks[2] = t & 7;
+ c->watch_reg_masks[2] = t & MIPS_WATCHLO_IRW;
c->watch_reg_count = 3;
c->watch_reg_use_cnt = 3;
t = read_c0_watchhi2();
- write_c0_watchhi2(t | 0xff8);
+ write_c0_watchhi2(t | MIPS_WATCHHI_MASK);
back_to_back_c0_hazard();
t = read_c0_watchhi2();
- c->watch_reg_masks[2] |= (t & 0xff8);
- if ((t & 0x80000000) == 0)
+ c->watch_reg_masks[2] |= (t & MIPS_WATCHHI_MASK);
+ if ((t & MIPS_WATCHHI_M) == 0)
return;
- write_c0_watchlo3(7);
+ write_c0_watchlo3(MIPS_WATCHLO_IRW);
back_to_back_c0_hazard();
t = read_c0_watchlo3();
write_c0_watchlo3(0);
- c->watch_reg_masks[3] = t & 7;
+ c->watch_reg_masks[3] = t & MIPS_WATCHLO_IRW;
c->watch_reg_count = 4;
c->watch_reg_use_cnt = 4;
t = read_c0_watchhi3();
- write_c0_watchhi3(t | 0xff8);
+ write_c0_watchhi3(t | MIPS_WATCHHI_MASK);
back_to_back_c0_hazard();
t = read_c0_watchhi3();
- c->watch_reg_masks[3] |= (t & 0xff8);
- if ((t & 0x80000000) == 0)
+ c->watch_reg_masks[3] |= (t & MIPS_WATCHHI_MASK);
+ if ((t & MIPS_WATCHHI_M) == 0)
return;
/* We use at most 4, but probe and report up to 8. */
c->watch_reg_count = 5;
t = read_c0_watchhi4();
- if ((t & 0x80000000) == 0)
+ if ((t & MIPS_WATCHHI_M) == 0)
return;
c->watch_reg_count = 6;
t = read_c0_watchhi5();
- if ((t & 0x80000000) == 0)
+ if ((t & MIPS_WATCHHI_M) == 0)
return;
c->watch_reg_count = 7;
t = read_c0_watchhi6();
- if ((t & 0x80000000) == 0)
+ if ((t & MIPS_WATCHHI_M) == 0)
return;
c->watch_reg_count = 8;
diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
index b37954cc880d..396df6eb0a12 100644
--- a/arch/mips/kvm/emulate.c
+++ b/arch/mips/kvm/emulate.c
@@ -302,12 +302,31 @@ static inline ktime_t kvm_mips_count_time(struct kvm_vcpu *vcpu)
*/
static uint32_t kvm_mips_read_count_running(struct kvm_vcpu *vcpu, ktime_t now)
{
- ktime_t expires;
+ struct mips_coproc *cop0 = vcpu->arch.cop0;
+ ktime_t expires, threshold;
+ uint32_t count, compare;
int running;
- /* Is the hrtimer pending? */
+ /* Calculate the biased and scaled guest CP0_Count */
+ count = vcpu->arch.count_bias + kvm_mips_ktime_to_count(vcpu, now);
+ compare = kvm_read_c0_guest_compare(cop0);
+
+ /*
+ * Find whether CP0_Count has reached the closest timer interrupt. If
+ * not, we shouldn't inject it.
+ */
+ if ((int32_t)(count - compare) < 0)
+ return count;
+
+ /*
+ * The CP0_Count we're going to return has already reached the closest
+ * timer interrupt. Quickly check if it really is a new interrupt by
+ * looking at whether the interval until the hrtimer expiry time is
+ * less than 1/4 of the timer period.
+ */
expires = hrtimer_get_expires(&vcpu->arch.comparecount_timer);
- if (ktime_compare(now, expires) >= 0) {
+ threshold = ktime_add_ns(now, vcpu->arch.count_period / 4);
+ if (ktime_before(expires, threshold)) {
/*
* Cancel it while we handle it so there's no chance of
* interference with the timeout handler.
@@ -329,8 +348,7 @@ static uint32_t kvm_mips_read_count_running(struct kvm_vcpu *vcpu, ktime_t now)
}
}
- /* Return the biased and scaled guest CP0_Count */
- return vcpu->arch.count_bias + kvm_mips_ktime_to_count(vcpu, now);
+ return count;
}
/**
@@ -420,32 +438,6 @@ static void kvm_mips_resume_hrtimer(struct kvm_vcpu *vcpu,
}
/**
- * kvm_mips_update_hrtimer() - Update next expiry time of hrtimer.
- * @vcpu: Virtual CPU.
- *
- * Recalculates and updates the expiry time of the hrtimer. This can be used
- * after timer parameters have been altered which do not depend on the time that
- * the change occurs (in those cases kvm_mips_freeze_hrtimer() and
- * kvm_mips_resume_hrtimer() are used directly).
- *
- * It is guaranteed that no timer interrupts will be lost in the process.
- *
- * Assumes !kvm_mips_count_disabled(@vcpu) (guest CP0_Count timer is running).
- */
-static void kvm_mips_update_hrtimer(struct kvm_vcpu *vcpu)
-{
- ktime_t now;
- uint32_t count;
-
- /*
- * freeze_hrtimer takes care of a timer interrupts <= count, and
- * resume_hrtimer the hrtimer takes care of a timer interrupts > count.
- */
- now = kvm_mips_freeze_hrtimer(vcpu, &count);
- kvm_mips_resume_hrtimer(vcpu, now, count);
-}
-
-/**
* kvm_mips_write_count() - Modify the count and update timer.
* @vcpu: Virtual CPU.
* @count: Guest CP0_Count value to set.
@@ -540,23 +532,42 @@ int kvm_mips_set_count_hz(struct kvm_vcpu *vcpu, s64 count_hz)
* kvm_mips_write_compare() - Modify compare and update timer.
* @vcpu: Virtual CPU.
* @compare: New CP0_Compare value.
+ * @ack: Whether to acknowledge timer interrupt.
*
* Update CP0_Compare to a new value and update the timeout.
+ * If @ack, atomically acknowledge any pending timer interrupt, otherwise ensure
+ * any pending timer interrupt is preserved.
*/
-void kvm_mips_write_compare(struct kvm_vcpu *vcpu, uint32_t compare)
+void kvm_mips_write_compare(struct kvm_vcpu *vcpu, uint32_t compare, bool ack)
{
struct mips_coproc *cop0 = vcpu->arch.cop0;
+ int dc;
+ u32 old_compare = kvm_read_c0_guest_compare(cop0);
+ ktime_t now;
+ uint32_t count;
/* if unchanged, must just be an ack */
- if (kvm_read_c0_guest_compare(cop0) == compare)
+ if (old_compare == compare) {
+ if (!ack)
+ return;
+ kvm_mips_callbacks->dequeue_timer_int(vcpu);
+ kvm_write_c0_guest_compare(cop0, compare);
return;
+ }
+
+ /* freeze_hrtimer() takes care of timer interrupts <= count */
+ dc = kvm_mips_count_disabled(vcpu);
+ if (!dc)
+ now = kvm_mips_freeze_hrtimer(vcpu, &count);
+
+ if (ack)
+ kvm_mips_callbacks->dequeue_timer_int(vcpu);
- /* Update compare */
kvm_write_c0_guest_compare(cop0, compare);
- /* Update timeout if count enabled */
- if (!kvm_mips_count_disabled(vcpu))
- kvm_mips_update_hrtimer(vcpu);
+ /* resume_hrtimer() takes care of timer interrupts > count */
+ if (!dc)
+ kvm_mips_resume_hrtimer(vcpu, now, count);
}
/**
@@ -1068,15 +1079,15 @@ enum emulation_result kvm_mips_emulate_CP0(uint32_t inst, uint32_t *opc,
kvm_read_c0_guest_ebase(cop0));
} else if (rd == MIPS_CP0_TLB_HI && sel == 0) {
uint32_t nasid =
- vcpu->arch.gprs[rt] & ASID_MASK;
+ vcpu->arch.gprs[rt] & KVM_ENTRYHI_ASID;
if ((KSEGX(vcpu->arch.gprs[rt]) != CKSEG0) &&
((kvm_read_c0_guest_entryhi(cop0) &
- ASID_MASK) != nasid)) {
+ KVM_ENTRYHI_ASID) != nasid)) {
kvm_debug("MTCz, change ASID from %#lx to %#lx\n",
kvm_read_c0_guest_entryhi(cop0)
- & ASID_MASK,
+ & KVM_ENTRYHI_ASID,
vcpu->arch.gprs[rt]
- & ASID_MASK);
+ & KVM_ENTRYHI_ASID);
/* Blow away the shadow host TLBs */
kvm_mips_flush_host_tlb(1);
@@ -1095,9 +1106,9 @@ enum emulation_result kvm_mips_emulate_CP0(uint32_t inst, uint32_t *opc,
/* If we are writing to COMPARE */
/* Clear pending timer interrupt, if any */
- kvm_mips_callbacks->dequeue_timer_int(vcpu);
kvm_mips_write_compare(vcpu,
- vcpu->arch.gprs[rt]);
+ vcpu->arch.gprs[rt],
+ true);
} else if ((rd == MIPS_CP0_STATUS) && (sel == 0)) {
unsigned int old_val, val, change;
@@ -1620,7 +1631,7 @@ enum emulation_result kvm_mips_emulate_cache(uint32_t inst, uint32_t *opc,
*/
index = kvm_mips_guest_tlb_lookup(vcpu, (va & VPN2_MASK) |
(kvm_read_c0_guest_entryhi
- (cop0) & ASID_MASK));
+ (cop0) & KVM_ENTRYHI_ASID));
if (index < 0) {
vcpu->arch.host_cp0_entryhi = (va & VPN2_MASK);
@@ -1786,7 +1797,7 @@ enum emulation_result kvm_mips_emulate_tlbmiss_ld(unsigned long cause,
struct mips_coproc *cop0 = vcpu->arch.cop0;
struct kvm_vcpu_arch *arch = &vcpu->arch;
unsigned long entryhi = (vcpu->arch. host_cp0_badvaddr & VPN2_MASK) |
- (kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
+ (kvm_read_c0_guest_entryhi(cop0) & KVM_ENTRYHI_ASID);
if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
/* save old pc */
@@ -1833,7 +1844,7 @@ enum emulation_result kvm_mips_emulate_tlbinv_ld(unsigned long cause,
struct kvm_vcpu_arch *arch = &vcpu->arch;
unsigned long entryhi =
(vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
- (kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
+ (kvm_read_c0_guest_entryhi(cop0) & KVM_ENTRYHI_ASID);
if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
/* save old pc */
@@ -1878,7 +1889,7 @@ enum emulation_result kvm_mips_emulate_tlbmiss_st(unsigned long cause,
struct mips_coproc *cop0 = vcpu->arch.cop0;
struct kvm_vcpu_arch *arch = &vcpu->arch;
unsigned long entryhi = (vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
- (kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
+ (kvm_read_c0_guest_entryhi(cop0) & KVM_ENTRYHI_ASID);
if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
/* save old pc */
@@ -1922,7 +1933,7 @@ enum emulation_result kvm_mips_emulate_tlbinv_st(unsigned long cause,
struct mips_coproc *cop0 = vcpu->arch.cop0;
struct kvm_vcpu_arch *arch = &vcpu->arch;
unsigned long entryhi = (vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
- (kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
+ (kvm_read_c0_guest_entryhi(cop0) & KVM_ENTRYHI_ASID);
if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
/* save old pc */
@@ -1967,7 +1978,7 @@ enum emulation_result kvm_mips_handle_tlbmod(unsigned long cause, uint32_t *opc,
#ifdef DEBUG
struct mips_coproc *cop0 = vcpu->arch.cop0;
unsigned long entryhi = (vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
- (kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
+ (kvm_read_c0_guest_entryhi(cop0) & KVM_ENTRYHI_ASID);
int index;
/* If address not in the guest TLB, then we are in trouble */
@@ -1994,7 +2005,7 @@ enum emulation_result kvm_mips_emulate_tlbmod(unsigned long cause,
{
struct mips_coproc *cop0 = vcpu->arch.cop0;
unsigned long entryhi = (vcpu->arch.host_cp0_badvaddr & VPN2_MASK) |
- (kvm_read_c0_guest_entryhi(cop0) & ASID_MASK);
+ (kvm_read_c0_guest_entryhi(cop0) & KVM_ENTRYHI_ASID);
struct kvm_vcpu_arch *arch = &vcpu->arch;
if ((kvm_read_c0_guest_status(cop0) & ST0_EXL) == 0) {
@@ -2569,7 +2580,8 @@ enum emulation_result kvm_mips_handle_tlbmiss(unsigned long cause,
*/
index = kvm_mips_guest_tlb_lookup(vcpu,
(va & VPN2_MASK) |
- (kvm_read_c0_guest_entryhi(vcpu->arch.cop0) & ASID_MASK));
+ (kvm_read_c0_guest_entryhi(vcpu->arch.cop0) &
+ KVM_ENTRYHI_ASID));
if (index < 0) {
if (exccode == EXCCODE_TLBL) {
er = kvm_mips_emulate_tlbmiss_ld(cause, opc, run, vcpu);
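
[Editorial note] The reworked kvm_mips_read_count_running() above makes two decisions: a wrap-safe signed comparison of the biased CP0_Count against CP0_Compare, and a check that the hrtimer really is about to expire (within a quarter of the timer period) before it is cancelled and the interrupt injected. The sketch below demonstrates only that arithmetic, with invented numbers standing in for ktime values and the real bias/period.

/* Arithmetic sketch of the checks in kvm_mips_read_count_running().
 * Values are invented; not KVM code. */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

static bool count_reached_compare(uint32_t count, uint32_t compare)
{
	/* Signed difference handles CP0_Count wrapping past 2^32. */
	return (int32_t)(count - compare) >= 0;
}

static bool expiry_is_imminent(int64_t expires_ns, int64_t now_ns,
			       int64_t period_ns)
{
	/* Mirrors ktime_before(expires, now + period / 4). */
	return expires_ns < now_ns + period_ns / 4;
}

int main(void)
{
	/* Count just wrapped past Compare: 0x00000005 vs 0xFFFFFFF0. */
	printf("reached: %d\n", count_reached_compare(0x00000005u, 0xFFFFFFF0u));

	/* Timer fires in 10 us, period is 10 ms -> treat as a new interrupt. */
	printf("imminent: %d\n", expiry_is_imminent(1000010000LL, 1000000000LL,
						    10000000LL));
	return 0;
}
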
diff --git a/arch/mips/kvm/locore.S b/arch/mips/kvm/locore.S
index 81687ab1b523..3ef03009de5f 100644
--- a/arch/mips/kvm/locore.S
+++ b/arch/mips/kvm/locore.S
@@ -32,7 +32,6 @@
EXPORT(x);
/* Overload, Danger Will Robinson!! */
-#define PT_HOST_ASID PT_BVADDR
#define PT_HOST_USERLOCAL PT_EPC
#define CP0_DDATA_LO $28,3
@@ -49,45 +48,18 @@
* a1: vcpu
*/
.set noreorder
- .set noat
FEXPORT(__kvm_mips_vcpu_run)
/* k0/k1 not being used in host kernel context */
INT_ADDIU k1, sp, -PT_SIZE
- LONG_S $0, PT_R0(k1)
- LONG_S $1, PT_R1(k1)
- LONG_S $2, PT_R2(k1)
- LONG_S $3, PT_R3(k1)
-
- LONG_S $4, PT_R4(k1)
- LONG_S $5, PT_R5(k1)
- LONG_S $6, PT_R6(k1)
- LONG_S $7, PT_R7(k1)
-
- LONG_S $8, PT_R8(k1)
- LONG_S $9, PT_R9(k1)
- LONG_S $10, PT_R10(k1)
- LONG_S $11, PT_R11(k1)
- LONG_S $12, PT_R12(k1)
- LONG_S $13, PT_R13(k1)
- LONG_S $14, PT_R14(k1)
- LONG_S $15, PT_R15(k1)
LONG_S $16, PT_R16(k1)
LONG_S $17, PT_R17(k1)
-
LONG_S $18, PT_R18(k1)
LONG_S $19, PT_R19(k1)
LONG_S $20, PT_R20(k1)
LONG_S $21, PT_R21(k1)
LONG_S $22, PT_R22(k1)
LONG_S $23, PT_R23(k1)
- LONG_S $24, PT_R24(k1)
- LONG_S $25, PT_R25(k1)
-
- /*
- * XXXKYMA k0/k1 not saved, not being used if we got here through
- * an ioctl()
- */
LONG_S $28, PT_R28(k1)
LONG_S $29, PT_R29(k1)
@@ -104,11 +76,6 @@ FEXPORT(__kvm_mips_vcpu_run)
mfc0 v0, CP0_STATUS
LONG_S v0, PT_STATUS(k1)
- /* Save host ASID, shove it into the BVADDR location */
- mfc0 v1, CP0_ENTRYHI
- andi v1, 0xff
- LONG_S v1, PT_HOST_ASID(k1)
-
/* Save DDATA_LO, will be used to store pointer to vcpu */
mfc0 v1, CP0_DDATA_LO
LONG_S v1, PT_HOST_USERLOCAL(k1)
@@ -170,13 +137,21 @@ FEXPORT(__kvm_mips_load_asid)
INT_SLL t2, t2, 2 /* x4 */
REG_ADDU t3, t1, t2
LONG_L k0, (t3)
- andi k0, k0, 0xff
+#ifdef CONFIG_MIPS_ASID_BITS_VARIABLE
+ li t3, CPUINFO_SIZE/4
+ mul t2, t2, t3 /* x sizeof(struct cpuinfo_mips)/4 */
+ LONG_L t2, (cpu_data + CPUINFO_ASID_MASK)(t2)
+ and k0, k0, t2
+#else
+ andi k0, k0, MIPS_ENTRYHI_ASID
+#endif
mtc0 k0, CP0_ENTRYHI
ehb
/* Disable RDHWR access */
mtc0 zero, CP0_HWRENA
+ .set noat
/* Now load up the Guest Context from VCPU */
LONG_L $1, VCPU_R1(k1)
LONG_L $2, VCPU_R2(k1)
@@ -288,6 +263,8 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
LONG_S $30, VCPU_R30(k1)
LONG_S $31, VCPU_R31(k1)
+ .set at
+
/* We need to save hi/lo and restore them on the way out */
mfhi t0
LONG_S t0, VCPU_HI(k1)
@@ -339,9 +316,7 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
/* load up the host EBASE */
mfc0 v0, CP0_STATUS
- .set at
or k0, v0, ST0_BEV
- .set noat
mtc0 k0, CP0_STATUS
ehb
@@ -353,7 +328,6 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
* If FPU is enabled, save FCR31 and clear it so that later ctc1's don't
* trigger FPE for pending exceptions.
*/
- .set at
and v1, v0, ST0_CU1
beqz v1, 1f
nop
@@ -363,7 +337,6 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
sw t0, VCPU_FCR31(k1)
ctc1 zero,fcr31
.set pop
- .set noat
1:
#ifdef CONFIG_CPU_HAS_MSA
@@ -386,10 +359,8 @@ NESTED (MIPSX(GuestException), CALLFRAME_SIZ, ra)
#endif
/* Now that the new EBASE has been loaded, unset BEV and KSU_USER */
- .set at
and v0, v0, ~(ST0_EXL | KSU_USER | ST0_IE)
or v0, v0, ST0_CU0
- .set noat
mtc0 v0, CP0_STATUS
ehb
@@ -456,18 +427,14 @@ __kvm_mips_return_to_guest:
/* Switch EBASE back to the one used by KVM */
mfc0 v1, CP0_STATUS
- .set at
or k0, v1, ST0_BEV
- .set noat
mtc0 k0, CP0_STATUS
ehb
mtc0 t0, CP0_EBASE
/* Setup status register for running guest in UM */
- .set at
or v1, v1, (ST0_EXL | KSU_USER | ST0_IE)
and v1, v1, ~(ST0_CU0 | ST0_MX)
- .set noat
mtc0 v1, CP0_STATUS
ehb
@@ -489,13 +456,21 @@ __kvm_mips_return_to_guest:
INT_SLL t2, t2, 2 /* x4 */
REG_ADDU t3, t1, t2
LONG_L k0, (t3)
- andi k0, k0, 0xff
+#ifdef CONFIG_MIPS_ASID_BITS_VARIABLE
+ li t3, CPUINFO_SIZE/4
+ mul t2, t2, t3 /* x sizeof(struct cpuinfo_mips)/4 */
+ LONG_L t2, (cpu_data + CPUINFO_ASID_MASK)(t2)
+ and k0, k0, t2
+#else
+ andi k0, k0, MIPS_ENTRYHI_ASID
+#endif
mtc0 k0, CP0_ENTRYHI
ehb
/* Disable RDHWR access */
mtc0 zero, CP0_HWRENA
+ .set noat
/* load the guest context from VCPU and return */
LONG_L $0, VCPU_R0(k1)
LONG_L $1, VCPU_R1(k1)
@@ -541,6 +516,7 @@ FEXPORT(__kvm_mips_skip_guest_restore)
LONG_L k1, VCPU_R27(k1)
eret
+ .set at
__kvm_mips_return_to_host:
/* EBASE is already pointing to Linux */
@@ -551,16 +527,6 @@ __kvm_mips_return_to_host:
LONG_L k0, PT_HOST_USERLOCAL(k1)
mtc0 k0, CP0_DDATA_LO
- /* Restore host ASID */
- LONG_L k0, PT_HOST_ASID(sp)
- andi k0, 0xff
- mtc0 k0,CP0_ENTRYHI
- ehb
-
- /* Load context saved on the host stack */
- LONG_L $0, PT_R0(k1)
- LONG_L $1, PT_R1(k1)
-
/*
* r2/v0 is the return code, shift it down by 2 (arithmetic)
* to recover the err code
@@ -568,19 +534,7 @@ __kvm_mips_return_to_host:
INT_SRA k0, v0, 2
move $2, k0
- LONG_L $3, PT_R3(k1)
- LONG_L $4, PT_R4(k1)
- LONG_L $5, PT_R5(k1)
- LONG_L $6, PT_R6(k1)
- LONG_L $7, PT_R7(k1)
- LONG_L $8, PT_R8(k1)
- LONG_L $9, PT_R9(k1)
- LONG_L $10, PT_R10(k1)
- LONG_L $11, PT_R11(k1)
- LONG_L $12, PT_R12(k1)
- LONG_L $13, PT_R13(k1)
- LONG_L $14, PT_R14(k1)
- LONG_L $15, PT_R15(k1)
+ /* Load context saved on the host stack */
LONG_L $16, PT_R16(k1)
LONG_L $17, PT_R17(k1)
LONG_L $18, PT_R18(k1)
@@ -589,10 +543,6 @@ __kvm_mips_return_to_host:
LONG_L $21, PT_R21(k1)
LONG_L $22, PT_R22(k1)
LONG_L $23, PT_R23(k1)
- LONG_L $24, PT_R24(k1)
- LONG_L $25, PT_R25(k1)
-
- /* Host k0/k1 were not saved */
LONG_L $28, PT_R28(k1)
LONG_L $29, PT_R29(k1)
diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
index 70ef1a43c114..dc052fb5c7a2 100644
--- a/arch/mips/kvm/mips.c
+++ b/arch/mips/kvm/mips.c
@@ -56,6 +56,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
{ "flush_dcache", VCPU_STAT(flush_dcache_exits), KVM_STAT_VCPU },
{ "halt_successful_poll", VCPU_STAT(halt_successful_poll), KVM_STAT_VCPU },
{ "halt_attempted_poll", VCPU_STAT(halt_attempted_poll), KVM_STAT_VCPU },
+ { "halt_poll_invalid", VCPU_STAT(halt_poll_invalid), KVM_STAT_VCPU },
{ "halt_wakeup", VCPU_STAT(halt_wakeup), KVM_STAT_VCPU },
{NULL}
};
@@ -1079,7 +1080,8 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
r = KVM_COALESCED_MMIO_PAGE_OFFSET;
break;
case KVM_CAP_MIPS_FPU:
- r = !!cpu_has_fpu;
+ /* We don't handle systems with inconsistent cpu_has_fpu */
+ r = !!raw_cpu_has_fpu;
break;
case KVM_CAP_MIPS_MSA:
/*
@@ -1555,8 +1557,10 @@ void kvm_lose_fpu(struct kvm_vcpu *vcpu)
/* Disable MSA & FPU */
disable_msa();
- if (vcpu->arch.fpu_inuse & KVM_MIPS_FPU_FPU)
+ if (vcpu->arch.fpu_inuse & KVM_MIPS_FPU_FPU) {
clear_c0_status(ST0_CU1 | ST0_FR);
+ disable_fpu_hazard();
+ }
vcpu->arch.fpu_inuse &= ~(KVM_MIPS_FPU_FPU | KVM_MIPS_FPU_MSA);
} else if (vcpu->arch.fpu_inuse & KVM_MIPS_FPU_FPU) {
set_c0_status(ST0_CU1);
@@ -1567,6 +1571,7 @@ void kvm_lose_fpu(struct kvm_vcpu *vcpu)
/* Disable FPU */
clear_c0_status(ST0_CU1 | ST0_FR);
+ disable_fpu_hazard();
}
preempt_enable();
}
diff --git a/arch/mips/kvm/tlb.c b/arch/mips/kvm/tlb.c
index e0e1d0a611fc..ed021ae7867a 100644
--- a/arch/mips/kvm/tlb.c
+++ b/arch/mips/kvm/tlb.c
@@ -49,12 +49,18 @@ EXPORT_SYMBOL_GPL(kvm_mips_is_error_pfn);
uint32_t kvm_mips_get_kernel_asid(struct kvm_vcpu *vcpu)
{
- return vcpu->arch.guest_kernel_asid[smp_processor_id()] & ASID_MASK;
+ int cpu = smp_processor_id();
+
+ return vcpu->arch.guest_kernel_asid[cpu] &
+ cpu_asid_mask(&cpu_data[cpu]);
}
uint32_t kvm_mips_get_user_asid(struct kvm_vcpu *vcpu)
{
- return vcpu->arch.guest_user_asid[smp_processor_id()] & ASID_MASK;
+ int cpu = smp_processor_id();
+
+ return vcpu->arch.guest_user_asid[cpu] &
+ cpu_asid_mask(&cpu_data[cpu]);
}
inline uint32_t kvm_mips_get_commpage_asid(struct kvm_vcpu *vcpu)
@@ -78,7 +84,8 @@ void kvm_mips_dump_host_tlbs(void)
old_pagemask = read_c0_pagemask();
kvm_info("HOST TLBs:\n");
- kvm_info("ASID: %#lx\n", read_c0_entryhi() & ASID_MASK);
+ kvm_info("ASID: %#lx\n", read_c0_entryhi() &
+ cpu_asid_mask(&current_cpu_data));
for (i = 0; i < current_cpu_data.tlbsize; i++) {
write_c0_index(i);
@@ -268,6 +275,7 @@ int kvm_mips_handle_kseg0_tlb_fault(unsigned long badvaddr,
int even;
struct kvm *kvm = vcpu->kvm;
const int flush_dcache_mask = 0;
+ int ret;
if (KVM_GUEST_KSEGX(badvaddr) != KVM_GUEST_KSEG0) {
kvm_err("%s: Invalid BadVaddr: %#lx\n", __func__, badvaddr);
@@ -299,14 +307,18 @@ int kvm_mips_handle_kseg0_tlb_fault(unsigned long badvaddr,
pfn1 = kvm->arch.guest_pmap[gfn];
}
- entryhi = (vaddr | kvm_mips_get_kernel_asid(vcpu));
entrylo0 = mips3_paddr_to_tlbpfn(pfn0 << PAGE_SHIFT) | (0x3 << 3) |
(1 << 2) | (0x1 << 1);
entrylo1 = mips3_paddr_to_tlbpfn(pfn1 << PAGE_SHIFT) | (0x3 << 3) |
(1 << 2) | (0x1 << 1);
- return kvm_mips_host_tlb_write(vcpu, entryhi, entrylo0, entrylo1,
- flush_dcache_mask);
+ preempt_disable();
+ entryhi = (vaddr | kvm_mips_get_kernel_asid(vcpu));
+ ret = kvm_mips_host_tlb_write(vcpu, entryhi, entrylo0, entrylo1,
+ flush_dcache_mask);
+ preempt_enable();
+
+ return ret;
}
EXPORT_SYMBOL_GPL(kvm_mips_handle_kseg0_tlb_fault);
@@ -361,6 +373,7 @@ int kvm_mips_handle_mapped_seg_tlb_fault(struct kvm_vcpu *vcpu,
unsigned long entryhi = 0, entrylo0 = 0, entrylo1 = 0;
struct kvm *kvm = vcpu->kvm;
kvm_pfn_t pfn0, pfn1;
+ int ret;
if ((tlb->tlb_hi & VPN2_MASK) == 0) {
pfn0 = 0;
@@ -387,9 +400,6 @@ int kvm_mips_handle_mapped_seg_tlb_fault(struct kvm_vcpu *vcpu,
*hpa1 = pfn1 << PAGE_SHIFT;
/* Get attributes from the Guest TLB */
- entryhi = (tlb->tlb_hi & VPN2_MASK) | (KVM_GUEST_KERNEL_MODE(vcpu) ?
- kvm_mips_get_kernel_asid(vcpu) :
- kvm_mips_get_user_asid(vcpu));
entrylo0 = mips3_paddr_to_tlbpfn(pfn0 << PAGE_SHIFT) | (0x3 << 3) |
(tlb->tlb_lo0 & MIPS3_PG_D) | (tlb->tlb_lo0 & MIPS3_PG_V);
entrylo1 = mips3_paddr_to_tlbpfn(pfn1 << PAGE_SHIFT) | (0x3 << 3) |
@@ -398,8 +408,15 @@ int kvm_mips_handle_mapped_seg_tlb_fault(struct kvm_vcpu *vcpu,
kvm_debug("@ %#lx tlb_lo0: 0x%08lx tlb_lo1: 0x%08lx\n", vcpu->arch.pc,
tlb->tlb_lo0, tlb->tlb_lo1);
- return kvm_mips_host_tlb_write(vcpu, entryhi, entrylo0, entrylo1,
- tlb->tlb_mask);
+ preempt_disable();
+ entryhi = (tlb->tlb_hi & VPN2_MASK) | (KVM_GUEST_KERNEL_MODE(vcpu) ?
+ kvm_mips_get_kernel_asid(vcpu) :
+ kvm_mips_get_user_asid(vcpu));
+ ret = kvm_mips_host_tlb_write(vcpu, entryhi, entrylo0, entrylo1,
+ tlb->tlb_mask);
+ preempt_enable();
+
+ return ret;
}
EXPORT_SYMBOL_GPL(kvm_mips_handle_mapped_seg_tlb_fault);
@@ -564,15 +581,15 @@ void kvm_get_new_mmu_context(struct mm_struct *mm, unsigned long cpu,
{
unsigned long asid = asid_cache(cpu);
- asid += ASID_INC;
- if (!(asid & ASID_MASK)) {
+ asid += cpu_asid_inc();
+ if (!(asid & cpu_asid_mask(&cpu_data[cpu]))) {
if (cpu_has_vtag_icache)
flush_icache_all();
kvm_local_flush_tlb_all(); /* start new asid cycle */
if (!asid) /* fix version if needed */
- asid = ASID_FIRST_VERSION;
+ asid = asid_first_version(cpu);
}
cpu_context(cpu, mm) = asid_cache(cpu) = asid;
@@ -627,6 +644,7 @@ static void kvm_mips_migrate_count(struct kvm_vcpu *vcpu)
/* Restore ASID once we are scheduled back after preemption */
void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
{
+ unsigned long asid_mask = cpu_asid_mask(&cpu_data[cpu]);
unsigned long flags;
int newasid = 0;
@@ -637,7 +655,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
local_irq_save(flags);
if ((vcpu->arch.guest_kernel_asid[cpu] ^ asid_cache(cpu)) &
- ASID_VERSION_MASK) {
+ asid_version_mask(cpu)) {
kvm_get_new_mmu_context(&vcpu->arch.guest_kernel_mm, cpu, vcpu);
vcpu->arch.guest_kernel_asid[cpu] =
vcpu->arch.guest_kernel_mm.context.asid[cpu];
@@ -672,7 +690,7 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
*/
if (current->flags & PF_VCPU) {
write_c0_entryhi(vcpu->arch.
- preempt_entryhi & ASID_MASK);
+ preempt_entryhi & asid_mask);
ehb();
}
} else {
@@ -687,11 +705,11 @@ void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
if (KVM_GUEST_KERNEL_MODE(vcpu))
write_c0_entryhi(vcpu->arch.
guest_kernel_asid[cpu] &
- ASID_MASK);
+ asid_mask);
else
write_c0_entryhi(vcpu->arch.
guest_user_asid[cpu] &
- ASID_MASK);
+ asid_mask);
ehb();
}
}
@@ -721,7 +739,7 @@ void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
kvm_mips_callbacks->vcpu_get_regs(vcpu);
if (((cpu_context(cpu, current->mm) ^ asid_cache(cpu)) &
- ASID_VERSION_MASK)) {
+ asid_version_mask(cpu))) {
kvm_debug("%s: Dropping MMU Context: %#lx\n", __func__,
cpu_context(cpu, current->mm));
drop_mmu_context(current->mm, cpu);
@@ -748,7 +766,8 @@ uint32_t kvm_get_inst(uint32_t *opc, struct kvm_vcpu *vcpu)
inst = *(opc);
} else {
vpn2 = (unsigned long) opc & VPN2_MASK;
- asid = kvm_read_c0_guest_entryhi(cop0) & ASID_MASK;
+ asid = kvm_read_c0_guest_entryhi(cop0) &
+ KVM_ENTRYHI_ASID;
index = kvm_mips_guest_tlb_lookup(vcpu, vpn2 | asid);
if (index < 0) {
kvm_err("%s: get_user_failed for %p, vcpu: %p, ASID: %#lx\n",
diff --git a/arch/mips/kvm/trap_emul.c b/arch/mips/kvm/trap_emul.c
index c4038d2a724c..6ba0fafcecbc 100644
--- a/arch/mips/kvm/trap_emul.c
+++ b/arch/mips/kvm/trap_emul.c
@@ -505,7 +505,8 @@ static int kvm_trap_emul_vcpu_setup(struct kvm_vcpu *vcpu)
kvm_write_c0_guest_intctl(cop0, 0xFC000000);
/* Put in vcpu id as CPUNum into Ebase Reg to handle SMP Guests */
- kvm_write_c0_guest_ebase(cop0, KVM_GUEST_KSEG0 | (vcpu_id & 0xFF));
+ kvm_write_c0_guest_ebase(cop0, KVM_GUEST_KSEG0 |
+ (vcpu_id & MIPS_EBASE_CPUNUM));
return 0;
}
@@ -546,7 +547,7 @@ static int kvm_trap_emul_set_one_reg(struct kvm_vcpu *vcpu,
kvm_mips_write_count(vcpu, v);
break;
case KVM_REG_MIPS_CP0_COMPARE:
- kvm_mips_write_compare(vcpu, v);
+ kvm_mips_write_compare(vcpu, v, false);
break;
case KVM_REG_MIPS_CP0_CAUSE:
/*
diff --git a/arch/mips/lantiq/Kconfig b/arch/mips/lantiq/Kconfig
index e10d33342b30..177769dbb0e8 100644
--- a/arch/mips/lantiq/Kconfig
+++ b/arch/mips/lantiq/Kconfig
@@ -25,7 +25,17 @@ config SOC_FALCON
endchoice
choice
- prompt "Devicetree"
+ prompt "Built-in device tree"
+ help
+ Legacy bootloaders do not pass a DTB pointer to the kernel, so
+ if a "wrapper" is not being used, the kernel will need to include
+ a device tree that matches the target board.
+
+ The builtin DTB will only be used if the firmware does not supply
+ a valid DTB.
+
+config LANTIQ_DT_NONE
+ bool "None"
config DT_EASY50712
bool "Easy50712"
diff --git a/arch/mips/lantiq/Makefile b/arch/mips/lantiq/Makefile
index 690257ab86d6..2718652e7466 100644
--- a/arch/mips/lantiq/Makefile
+++ b/arch/mips/lantiq/Makefile
@@ -1,4 +1,4 @@
-# Copyright (C) 2010 John Crispin <blogic@openwrt.org>
+# Copyright (C) 2010 John Crispin <john@phrozen.org>
#
# This program is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License version 2 as published
diff --git a/arch/mips/lantiq/clk.c b/arch/mips/lantiq/clk.c
index a0706fd4ce0a..149f0513c4f5 100644
--- a/arch/mips/lantiq/clk.c
+++ b/arch/mips/lantiq/clk.c
@@ -4,7 +4,7 @@
* by the Free Software Foundation.
*
* Copyright (C) 2010 Thomas Langer <thomas.langer@lantiq.com>
- * Copyright (C) 2010 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2010 John Crispin <john@phrozen.org>
*/
#include <linux/io.h>
#include <linux/export.h>
diff --git a/arch/mips/lantiq/clk.h b/arch/mips/lantiq/clk.h
index 7376ce817eda..e806e048ffc2 100644
--- a/arch/mips/lantiq/clk.h
+++ b/arch/mips/lantiq/clk.h
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2010 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2010 John Crispin <john@phrozen.org>
*/
#ifndef _LTQ_CLK_H__
diff --git a/arch/mips/lantiq/early_printk.c b/arch/mips/lantiq/early_printk.c
index 9b28d0940ef4..44bccaee822b 100644
--- a/arch/mips/lantiq/early_printk.c
+++ b/arch/mips/lantiq/early_printk.c
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2010 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2010 John Crispin <john@phrozen.org>
*/
#include <linux/cpu.h>
diff --git a/arch/mips/lantiq/falcon/prom.c b/arch/mips/lantiq/falcon/prom.c
index aa9497947859..75315c0a9fc3 100644
--- a/arch/mips/lantiq/falcon/prom.c
+++ b/arch/mips/lantiq/falcon/prom.c
@@ -4,7 +4,7 @@
* by the Free Software Foundation.
*
* Copyright (C) 2012 Thomas Langer <thomas.langer@lantiq.com>
- * Copyright (C) 2012 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2012 John Crispin <john@phrozen.org>
*/
#include <linux/kernel.h>
diff --git a/arch/mips/lantiq/falcon/reset.c b/arch/mips/lantiq/falcon/reset.c
index 568248253426..7a535d72f541 100644
--- a/arch/mips/lantiq/falcon/reset.c
+++ b/arch/mips/lantiq/falcon/reset.c
@@ -4,7 +4,7 @@
* by the Free Software Foundation.
*
* Copyright (C) 2012 Thomas Langer <thomas.langer@lantiq.com>
- * Copyright (C) 2012 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2012 John Crispin <john@phrozen.org>
*/
#include <linux/init.h>
diff --git a/arch/mips/lantiq/falcon/sysctrl.c b/arch/mips/lantiq/falcon/sysctrl.c
index 7edcd4946fc1..2a1b3021589c 100644
--- a/arch/mips/lantiq/falcon/sysctrl.c
+++ b/arch/mips/lantiq/falcon/sysctrl.c
@@ -4,7 +4,7 @@
* by the Free Software Foundation.
*
* Copyright (C) 2011 Thomas Langer <thomas.langer@lantiq.com>
- * Copyright (C) 2011 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2011 John Crispin <john@phrozen.org>
*/
#include <linux/ioport.h>
diff --git a/arch/mips/lantiq/irq.c b/arch/mips/lantiq/irq.c
index 2e7f60c9fc5d..ff17669e30a3 100644
--- a/arch/mips/lantiq/irq.c
+++ b/arch/mips/lantiq/irq.c
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2010 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2010 John Crispin <john@phrozen.org>
* Copyright (C) 2010 Thomas Langer <thomas.langer@lantiq.com>
*/
diff --git a/arch/mips/lantiq/prom.c b/arch/mips/lantiq/prom.c
index 297bcaa6b5d3..5f693ac77a0d 100644
--- a/arch/mips/lantiq/prom.c
+++ b/arch/mips/lantiq/prom.c
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2010 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2010 John Crispin <john@phrozen.org>
*/
#include <linux/export.h>
@@ -65,6 +65,8 @@ static void __init prom_init_cmdline(void)
void __init plat_mem_setup(void)
{
+ void *dtb;
+
ioport_resource.start = IOPORT_RESOURCE_START;
ioport_resource.end = IOPORT_RESOURCE_END;
iomem_resource.start = IOMEM_RESOURCE_START;
@@ -72,11 +74,18 @@ void __init plat_mem_setup(void)
set_io_port_base((unsigned long) KSEG1);
+ if (fw_arg0 == -2) /* UHI interface */
+ dtb = (void *)fw_arg1;
+ else if (__dtb_start != __dtb_end)
+ dtb = (void *)__dtb_start;
+ else
+ panic("no dtb found");
+
/*
- * Load the builtin devicetree. This causes the chosen node to be
+ * Load the devicetree. This causes the chosen node to be
* parsed resulting in our memory appearing
*/
- __dt_setup_arch(__dtb_start);
+ __dt_setup_arch(dtb);
}
void __init device_tree_init(void)
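plat_mem_setup() now prefers a device tree handed over by the bootloader via the UHI convention (fw_arg0 == -2 with the DTB pointer in fw_arg1) and only falls back to the DTB linked into the image, panicking if neither is available. A rough userspace sketch of that selection order, with the firmware arguments and the __dtb_start/__dtb_end span replaced by dummies:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Dummy stand-ins: fw_arg0/fw_arg1 come from the bootloader, and
 * dtb_start/dtb_end model the __dtb_start/__dtb_end span linked
 * into the kernel image. */
static long      fw_arg0 = -2;          /* -2 marks the UHI boot protocol  */
static uintptr_t fw_arg1;               /* UHI passes the DTB pointer here */
static char      builtin_dtb[64];
static char     *dtb_start = builtin_dtb;
static char     *dtb_end   = builtin_dtb + sizeof(builtin_dtb);

static void *select_dtb(void)
{
	if (fw_arg0 == -2)              /* a firmware-supplied DTB wins        */
		return (void *)fw_arg1;
	if (dtb_start != dtb_end)       /* otherwise fall back to the built-in */
		return dtb_start;
	fprintf(stderr, "no dtb found\n");  /* the kernel panics at this point */
	exit(1);
}

int main(void)
{
	static char uhi_dtb[64];

	fw_arg1 = (uintptr_t)uhi_dtb;
	printf("selected dtb: %p (UHI copy at %p)\n",
	       select_dtb(), (void *)uhi_dtb);
	return 0;
}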
diff --git a/arch/mips/lantiq/prom.h b/arch/mips/lantiq/prom.h
index bfd2d58c1d69..4b6576c50250 100644
--- a/arch/mips/lantiq/prom.h
+++ b/arch/mips/lantiq/prom.h
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2010 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2010 John Crispin <john@phrozen.org>
*/
#ifndef _LTQ_PROM_H__
diff --git a/arch/mips/lantiq/xway/clk.c b/arch/mips/lantiq/xway/clk.c
index 07f6d5b0b65e..41fc30d8ef89 100644
--- a/arch/mips/lantiq/xway/clk.c
+++ b/arch/mips/lantiq/xway/clk.c
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2010 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2010 John Crispin <john@phrozen.org>
* Copyright (C) 2013-2015 Lantiq Beteiligungs-GmbH & Co.KG
*/
diff --git a/arch/mips/lantiq/xway/dcdc.c b/arch/mips/lantiq/xway/dcdc.c
index ae8e930f5283..08f7abaadfe5 100644
--- a/arch/mips/lantiq/xway/dcdc.c
+++ b/arch/mips/lantiq/xway/dcdc.c
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2012 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2012 John Crispin <john@phrozen.org>
* Copyright (C) 2010 Sameer Ahmad, Lantiq GmbH
*/
diff --git a/arch/mips/lantiq/xway/dma.c b/arch/mips/lantiq/xway/dma.c
index 34a116e840d8..cef811755123 100644
--- a/arch/mips/lantiq/xway/dma.c
+++ b/arch/mips/lantiq/xway/dma.c
@@ -12,7 +12,7 @@
* along with this program; if not, write to the Free Software
* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307, USA.
*
- * Copyright (C) 2011 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2011 John Crispin <john@phrozen.org>
*/
#include <linux/init.h>
diff --git a/arch/mips/lantiq/xway/gptu.c b/arch/mips/lantiq/xway/gptu.c
index f1492b2db017..0f1bbea1a816 100644
--- a/arch/mips/lantiq/xway/gptu.c
+++ b/arch/mips/lantiq/xway/gptu.c
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2012 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2012 John Crispin <john@phrozen.org>
* Copyright (C) 2012 Lantiq GmbH
*/
diff --git a/arch/mips/lantiq/xway/prom.c b/arch/mips/lantiq/xway/prom.c
index 8f6e02f1e965..9475b2510adb 100644
--- a/arch/mips/lantiq/xway/prom.c
+++ b/arch/mips/lantiq/xway/prom.c
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2010 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2010 John Crispin <john@phrozen.org>
* Copyright (C) 2013-2015 Lantiq Beteiligungs-GmbH & Co.KG
*/
diff --git a/arch/mips/lantiq/xway/reset.c b/arch/mips/lantiq/xway/reset.c
index bc29bb349e94..83fd65d76e81 100644
--- a/arch/mips/lantiq/xway/reset.c
+++ b/arch/mips/lantiq/xway/reset.c
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2010 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2010 John Crispin <john@phrozen.org>
* Copyright (C) 2013-2015 Lantiq Beteiligungs-GmbH & Co.KG
*/
@@ -258,7 +258,7 @@ static int ltq_reset_device(struct reset_controller_dev *rcdev,
return ltq_deassert_device(rcdev, id);
}
-static struct reset_control_ops reset_ops = {
+static const struct reset_control_ops reset_ops = {
.reset = ltq_reset_device,
.assert = ltq_assert_device,
.deassert = ltq_deassert_device,
diff --git a/arch/mips/lantiq/xway/sysctrl.c b/arch/mips/lantiq/xway/sysctrl.c
index 80554e8f6037..236193b5210b 100644
--- a/arch/mips/lantiq/xway/sysctrl.c
+++ b/arch/mips/lantiq/xway/sysctrl.c
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2011-2012 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2011-2012 John Crispin <john@phrozen.org>
* Copyright (C) 2013-2015 Lantiq Beteiligungs-GmbH & Co.KG
*/
diff --git a/arch/mips/lantiq/xway/vmmc.c b/arch/mips/lantiq/xway/vmmc.c
index d001bc38908a..4625495f9230 100644
--- a/arch/mips/lantiq/xway/vmmc.c
+++ b/arch/mips/lantiq/xway/vmmc.c
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2012 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2012 John Crispin <john@phrozen.org>
*/
#include <linux/module.h>
diff --git a/arch/mips/lantiq/xway/xrx200_phy_fw.c b/arch/mips/lantiq/xway/xrx200_phy_fw.c
index 199094a40c15..71e518c1e7e7 100644
--- a/arch/mips/lantiq/xway/xrx200_phy_fw.c
+++ b/arch/mips/lantiq/xway/xrx200_phy_fw.c
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2012 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2012 John Crispin <john@phrozen.org>
*/
#include <linux/delay.h>
@@ -112,6 +112,6 @@ static struct platform_driver xway_phy_driver = {
module_platform_driver(xway_phy_driver);
-MODULE_AUTHOR("John Crispin <blogic@openwrt.org>");
+MODULE_AUTHOR("John Crispin <john@phrozen.org>");
MODULE_DESCRIPTION("Lantiq XRX200 PHY Firmware Loader");
MODULE_LICENSE("GPL");
diff --git a/arch/mips/lasat/picvue_proc.c b/arch/mips/lasat/picvue_proc.c
index b42095880667..27533c109f92 100644
--- a/arch/mips/lasat/picvue_proc.c
+++ b/arch/mips/lasat/picvue_proc.c
@@ -43,7 +43,7 @@ static int pvc_line_proc_show(struct seq_file *m, void *v)
{
int lineno = *(int *)m->private;
- if (lineno < 0 || lineno > PVC_NLINES) {
+ if (lineno < 0 || lineno >= PVC_NLINES) {
printk(KERN_WARNING "proc_read_line: invalid lineno %d\n", lineno);
return 0;
}
@@ -67,7 +67,7 @@ static ssize_t pvc_line_proc_write(struct file *file, const char __user *buf,
char kbuf[PVC_LINELEN];
size_t len;
- BUG_ON(lineno < 0 || lineno > PVC_NLINES);
+ BUG_ON(lineno < 0 || lineno >= PVC_NLINES);
len = min(count, sizeof(kbuf) - 1);
if (copy_from_user(kbuf, buf, len))
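Both checks above were off by one: with PVC_NLINES display lines the valid indices are 0 through PVC_NLINES - 1, so the old ">" comparison let the out-of-range value PVC_NLINES through. A minimal illustration of the corrected predicate (the PVC_NLINES value below is just an example):

#include <stdio.h>

#define PVC_NLINES 2   /* example value: a two-line display */

/* Valid indices are 0 .. PVC_NLINES - 1, hence ">=" rather than ">". */
static int lineno_valid(int lineno)
{
	return lineno >= 0 && lineno < PVC_NLINES;
}

int main(void)
{
	for (int lineno = -1; lineno <= PVC_NLINES; lineno++)
		printf("lineno %d -> %s\n", lineno,
		       lineno_valid(lineno) ? "ok" : "rejected");
	return 0;
}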
diff --git a/arch/mips/lib/ashldi3.c b/arch/mips/lib/ashldi3.c
index beb80f316095..927dc94a030f 100644
--- a/arch/mips/lib/ashldi3.c
+++ b/arch/mips/lib/ashldi3.c
@@ -2,7 +2,7 @@
#include "libgcc.h"
-long long __ashldi3(long long u, word_type b)
+long long notrace __ashldi3(long long u, word_type b)
{
DWunion uu, w;
word_type bm;
diff --git a/arch/mips/lib/ashrdi3.c b/arch/mips/lib/ashrdi3.c
index c884a912b660..9fdf1a598428 100644
--- a/arch/mips/lib/ashrdi3.c
+++ b/arch/mips/lib/ashrdi3.c
@@ -2,7 +2,7 @@
#include "libgcc.h"
-long long __ashrdi3(long long u, word_type b)
+long long notrace __ashrdi3(long long u, word_type b)
{
DWunion uu, w;
word_type bm;
diff --git a/arch/mips/lib/bswapdi.c b/arch/mips/lib/bswapdi.c
index 77e5f9c1f005..e3e77aa52c95 100644
--- a/arch/mips/lib/bswapdi.c
+++ b/arch/mips/lib/bswapdi.c
@@ -1,6 +1,6 @@
#include <linux/module.h>
-unsigned long long __bswapdi2(unsigned long long u)
+unsigned long long notrace __bswapdi2(unsigned long long u)
{
return (((u) & 0xff00000000000000ull) >> 56) |
(((u) & 0x00ff000000000000ull) >> 40) |
diff --git a/arch/mips/lib/bswapsi.c b/arch/mips/lib/bswapsi.c
index 2b302ff121d2..530a8afe6fda 100644
--- a/arch/mips/lib/bswapsi.c
+++ b/arch/mips/lib/bswapsi.c
@@ -1,6 +1,6 @@
#include <linux/module.h>
-unsigned int __bswapsi2(unsigned int u)
+unsigned int notrace __bswapsi2(unsigned int u)
{
return (((u) & 0xff000000) >> 24) |
(((u) & 0x00ff0000) >> 8) |
diff --git a/arch/mips/lib/cmpdi2.c b/arch/mips/lib/cmpdi2.c
index 8c1306437ed1..06857da96993 100644
--- a/arch/mips/lib/cmpdi2.c
+++ b/arch/mips/lib/cmpdi2.c
@@ -2,7 +2,7 @@
#include "libgcc.h"
-word_type __cmpdi2(long long a, long long b)
+word_type notrace __cmpdi2(long long a, long long b)
{
const DWunion au = {
.ll = a
diff --git a/arch/mips/lib/dump_tlb.c b/arch/mips/lib/dump_tlb.c
index 92a37319efbe..0f80b936e75e 100644
--- a/arch/mips/lib/dump_tlb.c
+++ b/arch/mips/lib/dump_tlb.c
@@ -19,6 +19,8 @@ void dump_tlb_regs(void)
pr_info("Index : %0x\n", read_c0_index());
pr_info("PageMask : %0x\n", read_c0_pagemask());
+ if (cpu_has_guestid)
+ pr_info("GuestCtl1: %0x\n", read_c0_guestctl1());
pr_info("EntryHi : %0*lx\n", field, read_c0_entryhi());
pr_info("EntryLo0 : %0*lx\n", field, read_c0_entrylo0());
pr_info("EntryLo1 : %0*lx\n", field, read_c0_entrylo1());
@@ -72,7 +74,10 @@ static void dump_tlb(int first, int last)
{
unsigned long s_entryhi, entryhi, asid;
unsigned long long entrylo0, entrylo1, pa;
- unsigned int s_index, s_pagemask, pagemask, c0, c1, i;
+ unsigned int s_index, s_pagemask, s_guestctl1 = 0;
+ unsigned int pagemask, guestctl1 = 0, c0, c1, i;
+ unsigned long asidmask = cpu_asid_mask(&current_cpu_data);
+ int asidwidth = DIV_ROUND_UP(ilog2(asidmask) + 1, 4);
#ifdef CONFIG_32BIT
bool xpa = cpu_has_xpa && (read_c0_pagegrain() & PG_ELPA);
int pwidth = xpa ? 11 : 8;
@@ -86,7 +91,9 @@ static void dump_tlb(int first, int last)
s_pagemask = read_c0_pagemask();
s_entryhi = read_c0_entryhi();
s_index = read_c0_index();
- asid = s_entryhi & 0xff;
+ asid = s_entryhi & asidmask;
+ if (cpu_has_guestid)
+ s_guestctl1 = read_c0_guestctl1();
for (i = first; i <= last; i++) {
write_c0_index(i);
@@ -97,6 +104,8 @@ static void dump_tlb(int first, int last)
entryhi = read_c0_entryhi();
entrylo0 = read_c0_entrylo0();
entrylo1 = read_c0_entrylo1();
+ if (cpu_has_guestid)
+ guestctl1 = read_c0_guestctl1();
/* EHINV bit marks entire entry as invalid */
if (cpu_has_tlbinv && entryhi & MIPS_ENTRYHI_EHINV)
@@ -115,7 +124,7 @@ static void dump_tlb(int first, int last)
* due to duplicate TLB entry.
*/
if (!((entrylo0 | entrylo1) & ENTRYLO_G) &&
- (entryhi & 0xff) != asid)
+ (entryhi & asidmask) != asid)
continue;
/*
@@ -126,15 +135,19 @@ static void dump_tlb(int first, int last)
c0 = (entrylo0 & ENTRYLO_C) >> ENTRYLO_C_SHIFT;
c1 = (entrylo1 & ENTRYLO_C) >> ENTRYLO_C_SHIFT;
- printk("va=%0*lx asid=%02lx\n",
+ printk("va=%0*lx asid=%0*lx",
vwidth, (entryhi & ~0x1fffUL),
- entryhi & 0xff);
+ asidwidth, entryhi & asidmask);
+ if (cpu_has_guestid)
+ printk(" gid=%02lx",
+ (guestctl1 & MIPS_GCTL1_RID)
+ >> MIPS_GCTL1_RID_SHIFT);
/* RI/XI are in awkward places, so mask them off separately */
pa = entrylo0 & ~(MIPS_ENTRYLO_RI | MIPS_ENTRYLO_XI);
if (xpa)
pa |= (unsigned long long)readx_c0_entrylo0() << 30;
pa = (pa << 6) & PAGE_MASK;
- printk("\t[");
+ printk("\n\t[");
if (cpu_has_rixi)
printk("ri=%d xi=%d ",
(entrylo0 & MIPS_ENTRYLO_RI) ? 1 : 0,
@@ -164,6 +177,8 @@ static void dump_tlb(int first, int last)
write_c0_entryhi(s_entryhi);
write_c0_index(s_index);
write_c0_pagemask(s_pagemask);
+ if (cpu_has_guestid)
+ write_c0_guestctl1(s_guestctl1);
}
void dump_tlb_all(void)
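dump_tlb() above now prints the ASID with a width derived from the runtime mask instead of a hard-coded %02lx, using DIV_ROUND_UP(ilog2(asidmask) + 1, 4) hex digits. A quick standalone check of that arithmetic (literal masks stand in for cpu_asid_mask()):

#include <stdio.h>

/* Hex digits needed to print an ASID under the given mask:
 * ceil((index of highest set bit + 1) / 4), matching the
 * DIV_ROUND_UP() expression above.  Assumes a small, non-zero mask. */
static int asid_width(unsigned long mask)
{
	int bits = 0;

	while (mask >> bits)
		bits++;                 /* ilog2(mask) + 1 */
	return (bits + 3) / 4;
}

int main(void)
{
	printf("mask 0xff  -> %d digits\n", asid_width(0xff));    /* 2 */
	printf("mask 0x3ff -> %d digits\n", asid_width(0x3ff));   /* 3 */
	return 0;
}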
diff --git a/arch/mips/lib/lshrdi3.c b/arch/mips/lib/lshrdi3.c
index dcf8d6810b7c..364547449c65 100644
--- a/arch/mips/lib/lshrdi3.c
+++ b/arch/mips/lib/lshrdi3.c
@@ -2,7 +2,7 @@
#include "libgcc.h"
-long long __lshrdi3(long long u, word_type b)
+long long notrace __lshrdi3(long long u, word_type b)
{
DWunion uu, w;
word_type bm;
diff --git a/arch/mips/lib/memcpy.S b/arch/mips/lib/memcpy.S
index 9245e1705e69..6c303a94a196 100644
--- a/arch/mips/lib/memcpy.S
+++ b/arch/mips/lib/memcpy.S
@@ -256,7 +256,7 @@
/*
* Macro to build the __copy_user common code
- * Arguements:
+ * Arguments:
* mode : LEGACY_MODE or EVA_MODE
* from : Source operand. USEROP or KERNELOP
* to : Destination operand. USEROP or KERNELOP
diff --git a/arch/mips/lib/memset.S b/arch/mips/lib/memset.S
index 8f0019a2e5c8..18a1ccd4d134 100644
--- a/arch/mips/lib/memset.S
+++ b/arch/mips/lib/memset.S
@@ -228,10 +228,12 @@
.hidden __memset
.endif
+#ifdef CONFIG_CPU_MIPSR6
.Lbyte_fixup\@:
PTR_SUBU a2, $0, t0
jr ra
PTR_ADDIU a2, 1
+#endif /* CONFIG_CPU_MIPSR6 */
.Lfirst_fixup\@:
jr ra
diff --git a/arch/mips/lib/r3k_dump_tlb.c b/arch/mips/lib/r3k_dump_tlb.c
index cfcbb5218b59..744f4a7bc49d 100644
--- a/arch/mips/lib/r3k_dump_tlb.c
+++ b/arch/mips/lib/r3k_dump_tlb.c
@@ -29,9 +29,10 @@ static void dump_tlb(int first, int last)
{
int i;
unsigned int asid;
- unsigned long entryhi, entrylo0;
+ unsigned long entryhi, entrylo0, asid_mask;
- asid = read_c0_entryhi() & ASID_MASK;
+ asid_mask = cpu_asid_mask(&current_cpu_data);
+ asid = read_c0_entryhi() & asid_mask;
for (i = first; i <= last; i++) {
write_c0_index(i<<8);
@@ -46,7 +47,7 @@ static void dump_tlb(int first, int last)
/* Unused entries have a virtual address of KSEG0. */
if ((entryhi & PAGE_MASK) != KSEG0 &&
(entrylo0 & R3K_ENTRYLO_G ||
- (entryhi & ASID_MASK) == asid)) {
+ (entryhi & asid_mask) == asid)) {
/*
* Only print entries in use
*/
@@ -55,7 +56,7 @@ static void dump_tlb(int first, int last)
printk("va=%08lx asid=%08lx"
" [pa=%06lx n=%d d=%d v=%d g=%d]",
entryhi & PAGE_MASK,
- entryhi & ASID_MASK,
+ entryhi & asid_mask,
entrylo0 & PAGE_MASK,
(entrylo0 & R3K_ENTRYLO_N) ? 1 : 0,
(entrylo0 & R3K_ENTRYLO_D) ? 1 : 0,
diff --git a/arch/mips/lib/ucmpdi2.c b/arch/mips/lib/ucmpdi2.c
index bb4cb2f828ea..bd599f58234c 100644
--- a/arch/mips/lib/ucmpdi2.c
+++ b/arch/mips/lib/ucmpdi2.c
@@ -2,7 +2,7 @@
#include "libgcc.h"
-word_type __ucmpdi2(unsigned long long a, unsigned long long b)
+word_type notrace __ucmpdi2(unsigned long long a, unsigned long long b)
{
const DWunion au = {.ll = a};
const DWunion bu = {.ll = b};
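The notrace annotations added across these lib/ helpers (__ashldi3, __ashrdi3, __bswapdi2, __bswapsi2, __cmpdi2, __lshrdi3, __ucmpdi2) matter because the compiler emits calls to them for 64-bit shifts and compares on 32-bit kernels; if the function tracer instrumented them, tracing code that itself needs such an operation could recurse. For orientation, a plain-C sketch of what the __ashldi3 helper computes from the two 32-bit halves (types and signature simplified, no kernel headers):

#include <stdio.h>

/* Shift a 64-bit value left using only 32-bit halves, mirroring the
 * libgcc-style __ashldi3 helper these files provide. */
static long long ashldi3(long long u, int b)
{
	unsigned int low  = (unsigned int)u;
	unsigned int high = (unsigned int)((unsigned long long)u >> 32);
	unsigned int wlow, whigh;
	int bm = 32 - b;

	if (b == 0)
		return u;
	if (bm <= 0) {                     /* shifting past the low word */
		wlow  = 0;
		whigh = low << -bm;
	} else {
		wlow  = low << b;
		whigh = (high << b) | (low >> bm);
	}
	return (long long)(((unsigned long long)whigh << 32) | wlow);
}

int main(void)
{
	unsigned long long v = 0x0123456789abcdefULL;

	for (int b = 0; b < 64; b += 13)
		printf("b=%2d ok=%d\n", b,
		       (unsigned long long)ashldi3((long long)v, b) == v << b);
	return 0;
}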
diff --git a/arch/mips/loongson32/common/platform.c b/arch/mips/loongson32/common/platform.c
index ddf1d4cbf31e..f2c714d8fb60 100644
--- a/arch/mips/loongson32/common/platform.c
+++ b/arch/mips/loongson32/common/platform.c
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2011 Zhang, Keguang <keguang.zhang@gmail.com>
+ * Copyright (c) 2011-2016 Zhang, Keguang <keguang.zhang@gmail.com>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
@@ -10,14 +10,17 @@
#include <linux/clk.h>
#include <linux/dma-mapping.h>
#include <linux/err.h>
+#include <linux/mtd/partitions.h>
+#include <linux/sizes.h>
#include <linux/phy.h>
#include <linux/serial_8250.h>
#include <linux/stmmac.h>
#include <linux/usb/ehci_pdriver.h>
-#include <asm-generic/sizes.h>
-#include <cpufreq.h>
#include <loongson1.h>
+#include <cpufreq.h>
+#include <dma.h>
+#include <nand.h>
/* 8250/16550 compatible UART */
#define LS1X_UART(_id) \
@@ -45,7 +48,7 @@ struct platform_device ls1x_uart_pdev = {
},
};
-void __init ls1x_serial_setup(struct platform_device *pdev)
+void __init ls1x_serial_set_uartclk(struct platform_device *pdev)
{
struct clk *clk;
struct plat_serial8250_port *p;
@@ -77,6 +80,42 @@ struct platform_device ls1x_cpufreq_pdev = {
},
};
+/* DMA */
+static struct resource ls1x_dma_resources[] = {
+ [0] = {
+ .start = LS1X_DMAC_BASE,
+ .end = LS1X_DMAC_BASE + SZ_4 - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ .start = LS1X_DMA0_IRQ,
+ .end = LS1X_DMA0_IRQ,
+ .flags = IORESOURCE_IRQ,
+ },
+ [2] = {
+ .start = LS1X_DMA1_IRQ,
+ .end = LS1X_DMA1_IRQ,
+ .flags = IORESOURCE_IRQ,
+ },
+ [3] = {
+ .start = LS1X_DMA2_IRQ,
+ .end = LS1X_DMA2_IRQ,
+ .flags = IORESOURCE_IRQ,
+ },
+};
+
+struct platform_device ls1x_dma_pdev = {
+ .name = "ls1x-dma",
+ .id = -1,
+ .num_resources = ARRAY_SIZE(ls1x_dma_resources),
+ .resource = ls1x_dma_resources,
+};
+
+void __init ls1x_dma_set_platdata(struct plat_ls1x_dma *pdata)
+{
+ ls1x_dma_pdev.dev.platform_data = pdata;
+}
+
/* Synopsys Ethernet GMAC */
static struct stmmac_mdio_bus_data ls1x_mdio_bus_data = {
.phy_mask = 0,
@@ -198,6 +237,64 @@ struct platform_device ls1x_eth1_pdev = {
},
};
+/* GPIO */
+static struct resource ls1x_gpio0_resources[] = {
+ [0] = {
+ .start = LS1X_GPIO0_BASE,
+ .end = LS1X_GPIO0_BASE + SZ_4 - 1,
+ .flags = IORESOURCE_MEM,
+ },
+};
+
+struct platform_device ls1x_gpio0_pdev = {
+ .name = "ls1x-gpio",
+ .id = 0,
+ .num_resources = ARRAY_SIZE(ls1x_gpio0_resources),
+ .resource = ls1x_gpio0_resources,
+};
+
+static struct resource ls1x_gpio1_resources[] = {
+ [0] = {
+ .start = LS1X_GPIO1_BASE,
+ .end = LS1X_GPIO1_BASE + SZ_4 - 1,
+ .flags = IORESOURCE_MEM,
+ },
+};
+
+struct platform_device ls1x_gpio1_pdev = {
+ .name = "ls1x-gpio",
+ .id = 1,
+ .num_resources = ARRAY_SIZE(ls1x_gpio1_resources),
+ .resource = ls1x_gpio1_resources,
+};
+
+/* NAND Flash */
+static struct resource ls1x_nand_resources[] = {
+ [0] = {
+ .start = LS1X_NAND_BASE,
+ .end = LS1X_NAND_BASE + SZ_32 - 1,
+ .flags = IORESOURCE_MEM,
+ },
+ [1] = {
+ /* DMA channel 0 is dedicated to NAND */
+ .start = LS1X_DMA_CHANNEL0,
+ .end = LS1X_DMA_CHANNEL0,
+ .flags = IORESOURCE_DMA,
+ },
+};
+
+struct platform_device ls1x_nand_pdev = {
+ .name = "ls1x-nand",
+ .id = -1,
+ .num_resources = ARRAY_SIZE(ls1x_nand_resources),
+ .resource = ls1x_nand_resources,
+};
+
+void __init ls1x_nand_set_platdata(struct plat_ls1x_nand *pdata)
+{
+ ls1x_nand_pdev.dev.platform_data = pdata;
+}
+
/* USB EHCI */
static u64 ls1x_ehci_dmamask = DMA_BIT_MASK(32);
diff --git a/arch/mips/loongson32/common/reset.c b/arch/mips/loongson32/common/reset.c
index c41e4ca56ab4..8a1d9cc5a134 100644
--- a/arch/mips/loongson32/common/reset.c
+++ b/arch/mips/loongson32/common/reset.c
@@ -9,12 +9,13 @@
#include <linux/io.h>
#include <linux/pm.h>
+#include <linux/sizes.h>
#include <asm/idle.h>
#include <asm/reboot.h>
#include <loongson1.h>
-static void __iomem *wdt_base;
+static void __iomem *wdt_reg_base;
static void ls1x_halt(void)
{
@@ -26,9 +27,9 @@ static void ls1x_halt(void)
static void ls1x_restart(char *command)
{
- __raw_writel(0x1, wdt_base + WDT_EN);
- __raw_writel(0x1, wdt_base + WDT_TIMER);
- __raw_writel(0x1, wdt_base + WDT_SET);
+ __raw_writel(0x1, wdt_reg_base + WDT_EN);
+ __raw_writel(0x1, wdt_reg_base + WDT_TIMER);
+ __raw_writel(0x1, wdt_reg_base + WDT_SET);
ls1x_halt();
}
@@ -40,8 +41,8 @@ static void ls1x_power_off(void)
static int __init ls1x_reboot_setup(void)
{
- wdt_base = ioremap_nocache(LS1X_WDT_BASE, 0x0f);
- if (!wdt_base)
+ wdt_reg_base = ioremap_nocache(LS1X_WDT_BASE, (SZ_4 + SZ_8));
+ if (!wdt_reg_base)
panic("Failed to remap watchdog registers");
_machine_restart = ls1x_restart;
diff --git a/arch/mips/loongson32/common/time.c b/arch/mips/loongson32/common/time.c
index 0996b025eeef..ff224f0020e5 100644
--- a/arch/mips/loongson32/common/time.c
+++ b/arch/mips/loongson32/common/time.c
@@ -9,6 +9,7 @@
#include <linux/clk.h>
#include <linux/interrupt.h>
+#include <linux/sizes.h>
#include <asm/time.h>
#include <loongson1.h>
@@ -35,25 +36,25 @@
DEFINE_RAW_SPINLOCK(ls1x_timer_lock);
-static void __iomem *timer_base;
+static void __iomem *timer_reg_base;
static uint32_t ls1x_jiffies_per_tick;
static inline void ls1x_pwmtimer_set_period(uint32_t period)
{
- __raw_writel(period, timer_base + PWM_HRC);
- __raw_writel(period, timer_base + PWM_LRC);
+ __raw_writel(period, timer_reg_base + PWM_HRC);
+ __raw_writel(period, timer_reg_base + PWM_LRC);
}
static inline void ls1x_pwmtimer_restart(void)
{
- __raw_writel(0x0, timer_base + PWM_CNT);
- __raw_writel(INT_EN | CNT_EN, timer_base + PWM_CTRL);
+ __raw_writel(0x0, timer_reg_base + PWM_CNT);
+ __raw_writel(INT_EN | CNT_EN, timer_reg_base + PWM_CTRL);
}
void __init ls1x_pwmtimer_init(void)
{
- timer_base = ioremap(LS1X_TIMER_BASE, 0xf);
- if (!timer_base)
+ timer_reg_base = ioremap_nocache(LS1X_TIMER_BASE, SZ_16);
+ if (!timer_reg_base)
panic("Failed to remap timer registers");
ls1x_jiffies_per_tick = DIV_ROUND_CLOSEST(mips_hpt_frequency, HZ);
@@ -86,7 +87,7 @@ static cycle_t ls1x_clocksource_read(struct clocksource *cs)
*/
jifs = jiffies;
/* read the count */
- count = __raw_readl(timer_base + PWM_CNT);
+ count = __raw_readl(timer_reg_base + PWM_CNT);
/*
* It's possible for count to appear to go the wrong way for this
@@ -131,7 +132,7 @@ static int ls1x_clockevent_set_state_periodic(struct clock_event_device *cd)
raw_spin_lock(&ls1x_timer_lock);
ls1x_pwmtimer_set_period(ls1x_jiffies_per_tick);
ls1x_pwmtimer_restart();
- __raw_writel(INT_EN | CNT_EN, timer_base + PWM_CTRL);
+ __raw_writel(INT_EN | CNT_EN, timer_reg_base + PWM_CTRL);
raw_spin_unlock(&ls1x_timer_lock);
return 0;
@@ -140,7 +141,7 @@ static int ls1x_clockevent_set_state_periodic(struct clock_event_device *cd)
static int ls1x_clockevent_tick_resume(struct clock_event_device *cd)
{
raw_spin_lock(&ls1x_timer_lock);
- __raw_writel(INT_EN | CNT_EN, timer_base + PWM_CTRL);
+ __raw_writel(INT_EN | CNT_EN, timer_reg_base + PWM_CTRL);
raw_spin_unlock(&ls1x_timer_lock);
return 0;
@@ -149,8 +150,8 @@ static int ls1x_clockevent_tick_resume(struct clock_event_device *cd)
static int ls1x_clockevent_set_state_shutdown(struct clock_event_device *cd)
{
raw_spin_lock(&ls1x_timer_lock);
- __raw_writel(__raw_readl(timer_base + PWM_CTRL) & ~CNT_EN,
- timer_base + PWM_CTRL);
+ __raw_writel(__raw_readl(timer_reg_base + PWM_CTRL) & ~CNT_EN,
+ timer_reg_base + PWM_CTRL);
raw_spin_unlock(&ls1x_timer_lock);
return 0;
@@ -220,7 +221,7 @@ void __init plat_time_init(void)
#ifdef CONFIG_CEVT_CSRC_LS1X
/* setup LS1X PWM timer */
- clk = clk_get(NULL, "ls1x_pwmtimer");
+ clk = clk_get(NULL, "ls1x-pwmtimer");
if (IS_ERR(clk))
panic("unable to get timer clock, err=%ld", PTR_ERR(clk));
diff --git a/arch/mips/loongson32/ls1b/board.c b/arch/mips/loongson32/ls1b/board.c
index 58daeea25739..38a1d404be1b 100644
--- a/arch/mips/loongson32/ls1b/board.c
+++ b/arch/mips/loongson32/ls1b/board.c
@@ -1,5 +1,5 @@
/*
- * Copyright (c) 2011 Zhang, Keguang <keguang.zhang@gmail.com>
+ * Copyright (c) 2011-2016 Zhang, Keguang <keguang.zhang@gmail.com>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
@@ -7,26 +7,83 @@
* option) any later version.
*/
+#include <linux/leds.h>
+#include <linux/mtd/partitions.h>
+#include <linux/sizes.h>
+
+#include <loongson1.h>
+#include <dma.h>
+#include <nand.h>
#include <platform.h>
+struct plat_ls1x_dma ls1x_dma_pdata = {
+ .nr_channels = 3,
+};
+
+static struct mtd_partition ls1x_nand_parts[] = {
+ {
+ .name = "kernel",
+ .offset = 0,
+ .size = SZ_16M,
+ },
+ {
+ .name = "rootfs",
+ .offset = MTDPART_OFS_APPEND,
+ .size = MTDPART_SIZ_FULL,
+ },
+};
+
+struct plat_ls1x_nand ls1x_nand_pdata = {
+ .parts = ls1x_nand_parts,
+ .nr_parts = ARRAY_SIZE(ls1x_nand_parts),
+ .hold_cycle = 0x2,
+ .wait_cycle = 0xc,
+};
+
+static const struct gpio_led ls1x_gpio_leds[] __initconst = {
+ {
+ .name = "LED9",
+ .default_trigger = "heartbeat",
+ .gpio = 38,
+ .active_low = 1,
+ .default_state = LEDS_GPIO_DEFSTATE_OFF,
+ }, {
+ .name = "LED6",
+ .default_trigger = "nand-disk",
+ .gpio = 39,
+ .active_low = 1,
+ .default_state = LEDS_GPIO_DEFSTATE_OFF,
+ },
+};
+
+static const struct gpio_led_platform_data ls1x_led_pdata __initconst = {
+ .num_leds = ARRAY_SIZE(ls1x_gpio_leds),
+ .leds = ls1x_gpio_leds,
+};
+
static struct platform_device *ls1b_platform_devices[] __initdata = {
&ls1x_uart_pdev,
&ls1x_cpufreq_pdev,
+ &ls1x_dma_pdev,
&ls1x_eth0_pdev,
&ls1x_eth1_pdev,
&ls1x_ehci_pdev,
+ &ls1x_gpio0_pdev,
+ &ls1x_gpio1_pdev,
+ &ls1x_nand_pdev,
&ls1x_rtc_pdev,
};
static int __init ls1b_platform_init(void)
{
- int err;
+ ls1x_serial_set_uartclk(&ls1x_uart_pdev);
+ ls1x_dma_set_platdata(&ls1x_dma_pdata);
+ ls1x_nand_set_platdata(&ls1x_nand_pdata);
- ls1x_serial_setup(&ls1x_uart_pdev);
+ gpio_led_register_device(-1, &ls1x_led_pdata);
- err = platform_add_devices(ls1b_platform_devices,
+ return platform_add_devices(ls1b_platform_devices,
ARRAY_SIZE(ls1b_platform_devices));
- return err;
}
arch_initcall(ls1b_platform_init);
diff --git a/arch/mips/loongson64/Platform b/arch/mips/loongson64/Platform
index 85d808924c94..0fce4608aa88 100644
--- a/arch/mips/loongson64/Platform
+++ b/arch/mips/loongson64/Platform
@@ -31,7 +31,7 @@ cflags-$(CONFIG_CPU_LOONGSON3) += -Wa,--trap
# can't easily be used safely within the kbuild framework.
#
ifeq ($(call cc-ifversion, -ge, 0409, y), y)
- ifeq ($(call ld-ifversion, -ge, 22500000, y), y)
+ ifeq ($(call ld-ifversion, -ge, 225000000, y), y)
cflags-$(CONFIG_CPU_LOONGSON3) += \
$(call cc-option,-march=loongson3a -U_MIPS_ISA -D_MIPS_ISA=_MIPS_ISA_MIPS64)
else
diff --git a/arch/mips/loongson64/common/env.c b/arch/mips/loongson64/common/env.c
index d6d07ad56180..57d590ac8004 100644
--- a/arch/mips/loongson64/common/env.c
+++ b/arch/mips/loongson64/common/env.c
@@ -105,6 +105,10 @@ void __init prom_init_env(void)
loongson_chiptemp[1] = 0x900010001fe0019c;
loongson_chiptemp[2] = 0x900020001fe0019c;
loongson_chiptemp[3] = 0x900030001fe0019c;
+ loongson_freqctrl[0] = 0x900000001fe001d0;
+ loongson_freqctrl[1] = 0x900010001fe001d0;
+ loongson_freqctrl[2] = 0x900020001fe001d0;
+ loongson_freqctrl[3] = 0x900030001fe001d0;
loongson_sysconf.ht_control_base = 0x90000EFDFB000000;
loongson_sysconf.workarounds = WORKAROUND_CPUFREQ;
} else if (ecpu->cputype == Loongson_3B) {
@@ -187,7 +191,8 @@ void __init prom_init_env(void)
case PRID_REV_LOONGSON2F:
cpu_clock_freq = 797000000;
break;
- case PRID_REV_LOONGSON3A:
+ case PRID_REV_LOONGSON3A_R1:
+ case PRID_REV_LOONGSON3A_R2:
cpu_clock_freq = 900000000;
break;
case PRID_REV_LOONGSON3B_R1:
diff --git a/arch/mips/loongson64/loongson-3/Makefile b/arch/mips/loongson64/loongson-3/Makefile
index 622fead5ebc9..44bc1482158b 100644
--- a/arch/mips/loongson64/loongson-3/Makefile
+++ b/arch/mips/loongson64/loongson-3/Makefile
@@ -1,7 +1,7 @@
#
# Makefile for Loongson-3 family machines
#
-obj-y += irq.o cop2-ex.o platform.o
+obj-y += irq.o cop2-ex.o platform.o acpi_init.o
obj-$(CONFIG_SMP) += smp.o
diff --git a/drivers/platform/mips/acpi_init.c b/arch/mips/loongson64/loongson-3/acpi_init.c
index dbdad79ead8f..dbdad79ead8f 100644
--- a/drivers/platform/mips/acpi_init.c
+++ b/arch/mips/loongson64/loongson-3/acpi_init.c
diff --git a/arch/mips/loongson64/loongson-3/hpet.c b/arch/mips/loongson64/loongson-3/hpet.c
index a2631a52ca99..249039af66c4 100644
--- a/arch/mips/loongson64/loongson-3/hpet.c
+++ b/arch/mips/loongson64/loongson-3/hpet.c
@@ -212,7 +212,7 @@ static void hpet_setup(void)
/* set hpet base address */
smbus_write(SMBUS_PCI_REGB4, HPET_ADDR);
- /* enable decodeing of access to HPET MMIO*/
+ /* enable decoding of access to HPET MMIO*/
smbus_enable(SMBUS_PCI_REG40, (1 << 28));
/* HPET irq enable */
diff --git a/arch/mips/loongson64/loongson-3/irq.c b/arch/mips/loongson64/loongson-3/irq.c
index 0f75b6b3d218..8e7649088353 100644
--- a/arch/mips/loongson64/loongson-3/irq.c
+++ b/arch/mips/loongson64/loongson-3/irq.c
@@ -24,19 +24,21 @@ static void ht_irqdispatch(void)
}
}
+#define UNUSED_IPS (CAUSEF_IP5 | CAUSEF_IP4 | CAUSEF_IP1 | CAUSEF_IP0)
+
void mach_irq_dispatch(unsigned int pending)
{
if (pending & CAUSEF_IP7)
do_IRQ(LOONGSON_TIMER_IRQ);
#if defined(CONFIG_SMP)
- else if (pending & CAUSEF_IP6)
+ if (pending & CAUSEF_IP6)
loongson3_ipi_interrupt(NULL);
#endif
- else if (pending & CAUSEF_IP3)
+ if (pending & CAUSEF_IP3)
ht_irqdispatch();
- else if (pending & CAUSEF_IP2)
+ if (pending & CAUSEF_IP2)
do_IRQ(LOONGSON_UART_IRQ);
- else {
+ if (pending & UNUSED_IPS) {
pr_err("%s : spurious interrupt\n", __func__);
spurious_interrupt();
}
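The rewritten dispatcher services every pending cause bit in a single pass instead of stopping at the first match of the old else-if chain, and only the genuinely unused IP lines are reported as spurious. A toy version of the pattern with dummy handlers (the bit positions are stand-ins for the CAUSEF_IP* masks):

#include <stdio.h>

/* Bit positions are stand-ins for the CAUSEF_IP* masks. */
#define IP2 (1u << 2)
#define IP3 (1u << 3)
#define IP6 (1u << 6)
#define IP7 (1u << 7)
#define UNUSED_IPS ((1u << 5) | (1u << 4) | (1u << 1) | (1u << 0))

static void dispatch(unsigned int pending)
{
	/* Independent ifs: several sources raised in the same exception
	 * are all serviced, instead of only the highest-priority one. */
	if (pending & IP7)
		printf("  timer irq\n");
	if (pending & IP6)
		printf("  ipi\n");
	if (pending & IP3)
		printf("  ht irq\n");
	if (pending & IP2)
		printf("  uart irq\n");
	if (pending & UNUSED_IPS)
		printf("  spurious interrupt\n");
}

int main(void)
{
	printf("timer + uart pending:\n");
	dispatch(IP7 | IP2);
	printf("only an unused line pending:\n");
	dispatch(1u << 0);
	return 0;
}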
diff --git a/arch/mips/loongson64/loongson-3/numa.c b/arch/mips/loongson64/loongson-3/numa.c
index 6f9e010cec4d..282c5a8c2fcd 100644
--- a/arch/mips/loongson64/loongson-3/numa.c
+++ b/arch/mips/loongson64/loongson-3/numa.c
@@ -213,10 +213,10 @@ static void __init node_mem_init(unsigned int node)
BOOTMEM_DEFAULT);
if (node == 0 && node_end_pfn(0) >= (0xffffffff >> PAGE_SHIFT)) {
- /* Reserve 0xff800000~0xffffffff for RS780E integrated GPU */
+ /* Reserve 0xfe000000~0xffffffff for RS780E integrated GPU */
reserve_bootmem_node(NODE_DATA(node),
- (node_addrspace_offset | 0xff800000),
- 8 << 20, BOOTMEM_DEFAULT);
+ (node_addrspace_offset | 0xfe000000),
+ 32 << 20, BOOTMEM_DEFAULT);
}
sparse_memory_present_with_active_regions(node);
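The reservation for the RS780E integrated GPU grows from 8 MB to 32 MB and starts correspondingly lower, so both windows still run up to the top of the node's low 4 GB:

    0x100000000 - 0xff800000 = 0x00800000  (8 << 20,  the old reservation)
    0x100000000 - 0xfe000000 = 0x02000000  (32 << 20, the new reservation)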
diff --git a/arch/mips/loongson64/loongson-3/smp.c b/arch/mips/loongson64/loongson-3/smp.c
index 509832a9836c..e59759af63d9 100644
--- a/arch/mips/loongson64/loongson-3/smp.c
+++ b/arch/mips/loongson64/loongson-3/smp.c
@@ -421,7 +421,6 @@ static int loongson3_cpu_disable(void)
local_irq_save(flags);
fixup_irqs();
local_irq_restore(flags);
- flush_cache_all();
local_flush_tlb_all();
return 0;
@@ -440,7 +439,7 @@ static void loongson3_cpu_die(unsigned int cpu)
* flush all L1 entries at first. Then, another core (usually Core 0) can
* safely disable the clock of the target core. loongson3_play_dead() is
* called via CKSEG1 (uncached and unmmaped) */
-static void loongson3a_play_dead(int *state_addr)
+static void loongson3a_r1_play_dead(int *state_addr)
{
register int val;
register long cpuid, core, node, count;
@@ -502,6 +501,89 @@ static void loongson3a_play_dead(int *state_addr)
: "a1");
}
+static void loongson3a_r2_play_dead(int *state_addr)
+{
+ register int val;
+ register long cpuid, core, node, count;
+ register void *addr, *base, *initfunc;
+
+ __asm__ __volatile__(
+ " .set push \n"
+ " .set noreorder \n"
+ " li %[addr], 0x80000000 \n" /* KSEG0 */
+ "1: cache 0, 0(%[addr]) \n" /* flush L1 ICache */
+ " cache 0, 1(%[addr]) \n"
+ " cache 0, 2(%[addr]) \n"
+ " cache 0, 3(%[addr]) \n"
+ " cache 1, 0(%[addr]) \n" /* flush L1 DCache */
+ " cache 1, 1(%[addr]) \n"
+ " cache 1, 2(%[addr]) \n"
+ " cache 1, 3(%[addr]) \n"
+ " addiu %[sets], %[sets], -1 \n"
+ " bnez %[sets], 1b \n"
+ " addiu %[addr], %[addr], 0x40 \n"
+ " li %[addr], 0x80000000 \n" /* KSEG0 */
+ "2: cache 2, 0(%[addr]) \n" /* flush L1 VCache */
+ " cache 2, 1(%[addr]) \n"
+ " cache 2, 2(%[addr]) \n"
+ " cache 2, 3(%[addr]) \n"
+ " cache 2, 4(%[addr]) \n"
+ " cache 2, 5(%[addr]) \n"
+ " cache 2, 6(%[addr]) \n"
+ " cache 2, 7(%[addr]) \n"
+ " cache 2, 8(%[addr]) \n"
+ " cache 2, 9(%[addr]) \n"
+ " cache 2, 10(%[addr]) \n"
+ " cache 2, 11(%[addr]) \n"
+ " cache 2, 12(%[addr]) \n"
+ " cache 2, 13(%[addr]) \n"
+ " cache 2, 14(%[addr]) \n"
+ " cache 2, 15(%[addr]) \n"
+ " addiu %[vsets], %[vsets], -1 \n"
+ " bnez %[vsets], 2b \n"
+ " addiu %[addr], %[addr], 0x40 \n"
+ " li %[val], 0x7 \n" /* *state_addr = CPU_DEAD; */
+ " sw %[val], (%[state_addr]) \n"
+ " sync \n"
+ " cache 21, (%[state_addr]) \n" /* flush entry of *state_addr */
+ " .set pop \n"
+ : [addr] "=&r" (addr), [val] "=&r" (val)
+ : [state_addr] "r" (state_addr),
+ [sets] "r" (cpu_data[smp_processor_id()].dcache.sets),
+ [vsets] "r" (cpu_data[smp_processor_id()].vcache.sets));
+
+ __asm__ __volatile__(
+ " .set push \n"
+ " .set noreorder \n"
+ " .set mips64 \n"
+ " mfc0 %[cpuid], $15, 1 \n"
+ " andi %[cpuid], 0x3ff \n"
+ " dli %[base], 0x900000003ff01000 \n"
+ " andi %[core], %[cpuid], 0x3 \n"
+ " sll %[core], 8 \n" /* get core id */
+ " or %[base], %[base], %[core] \n"
+ " andi %[node], %[cpuid], 0xc \n"
+ " dsll %[node], 42 \n" /* get node id */
+ " or %[base], %[base], %[node] \n"
+ "1: li %[count], 0x100 \n" /* wait for init loop */
+ "2: bnez %[count], 2b \n" /* limit mailbox access */
+ " addiu %[count], -1 \n"
+ " ld %[initfunc], 0x20(%[base]) \n" /* get PC via mailbox */
+ " beqz %[initfunc], 1b \n"
+ " nop \n"
+ " ld $sp, 0x28(%[base]) \n" /* get SP via mailbox */
+ " ld $gp, 0x30(%[base]) \n" /* get GP via mailbox */
+ " ld $a1, 0x38(%[base]) \n"
+ " jr %[initfunc] \n" /* jump to initial PC */
+ " nop \n"
+ " .set pop \n"
+ : [core] "=&r" (core), [node] "=&r" (node),
+ [base] "=&r" (base), [cpuid] "=&r" (cpuid),
+ [count] "=&r" (count), [initfunc] "=&r" (initfunc)
+ : /* No Input */
+ : "a1");
+}
+
static void loongson3b_play_dead(int *state_addr)
{
register int val;
@@ -573,13 +655,18 @@ void play_dead(void)
void (*play_dead_at_ckseg1)(int *);
idle_task_exit();
- switch (loongson_sysconf.cputype) {
- case Loongson_3A:
+ switch (read_c0_prid() & PRID_REV_MASK) {
+ case PRID_REV_LOONGSON3A_R1:
default:
play_dead_at_ckseg1 =
- (void *)CKSEG1ADDR((unsigned long)loongson3a_play_dead);
+ (void *)CKSEG1ADDR((unsigned long)loongson3a_r1_play_dead);
+ break;
+ case PRID_REV_LOONGSON3A_R2:
+ play_dead_at_ckseg1 =
+ (void *)CKSEG1ADDR((unsigned long)loongson3a_r2_play_dead);
break;
- case Loongson_3B:
+ case PRID_REV_LOONGSON3B_R1:
+ case PRID_REV_LOONGSON3B_R2:
play_dead_at_ckseg1 =
(void *)CKSEG1ADDR((unsigned long)loongson3b_play_dead);
break;
@@ -594,9 +681,9 @@ void loongson3_disable_clock(int cpu)
uint64_t core_id = cpu_data[cpu].core;
uint64_t package_id = cpu_data[cpu].package;
- if (loongson_sysconf.cputype == Loongson_3A) {
+ if ((read_c0_prid() & PRID_REV_MASK) == PRID_REV_LOONGSON3A_R1) {
LOONGSON_CHIPCFG(package_id) &= ~(1 << (12 + core_id));
- } else if (loongson_sysconf.cputype == Loongson_3B) {
+ } else {
if (!(loongson_sysconf.workarounds & WORKAROUND_CPUHOTPLUG))
LOONGSON_FREQCTRL(package_id) &= ~(1 << (core_id * 4 + 3));
}
@@ -607,9 +694,9 @@ void loongson3_enable_clock(int cpu)
uint64_t core_id = cpu_data[cpu].core;
uint64_t package_id = cpu_data[cpu].package;
- if (loongson_sysconf.cputype == Loongson_3A) {
+ if ((read_c0_prid() & PRID_REV_MASK) == PRID_REV_LOONGSON3A_R1) {
LOONGSON_CHIPCFG(package_id) |= 1 << (12 + core_id);
- } else if (loongson_sysconf.cputype == Loongson_3B) {
+ } else {
if (!(loongson_sysconf.workarounds & WORKAROUND_CPUHOTPLUG))
LOONGSON_FREQCTRL(package_id) |= 1 << (core_id * 4 + 3);
}
diff --git a/arch/mips/math-emu/Makefile b/arch/mips/math-emu/Makefile
index a19641d3ac23..e9bbc2a6526f 100644
--- a/arch/mips/math-emu/Makefile
+++ b/arch/mips/math-emu/Makefile
@@ -4,9 +4,9 @@
obj-y += cp1emu.o ieee754dp.o ieee754sp.o ieee754.o \
dp_div.o dp_mul.o dp_sub.o dp_add.o dp_fsp.o dp_cmp.o dp_simple.o \
- dp_tint.o dp_fint.o dp_maddf.o dp_msubf.o dp_2008class.o dp_fmin.o dp_fmax.o \
+ dp_tint.o dp_fint.o dp_maddf.o dp_2008class.o dp_fmin.o dp_fmax.o \
sp_div.o sp_mul.o sp_sub.o sp_add.o sp_fdp.o sp_cmp.o sp_simple.o \
- sp_tint.o sp_fint.o sp_maddf.o sp_msubf.o sp_2008class.o sp_fmin.o sp_fmax.o \
+ sp_tint.o sp_fint.o sp_maddf.o sp_2008class.o sp_fmin.o sp_fmax.o \
dsemul.o
lib-y += ieee754d.o \
diff --git a/arch/mips/math-emu/cp1emu.c b/arch/mips/math-emu/cp1emu.c
index cdfd44ffa51c..d96e912b9d44 100644
--- a/arch/mips/math-emu/cp1emu.c
+++ b/arch/mips/math-emu/cp1emu.c
@@ -445,9 +445,11 @@ static int isBranchInstr(struct pt_regs *regs, struct mm_decoded_insn dec_insn,
case spec_op:
switch (insn.r_format.func) {
case jalr_op:
- regs->regs[insn.r_format.rd] =
- regs->cp0_epc + dec_insn.pc_inc +
- dec_insn.next_pc_inc;
+ if (insn.r_format.rd != 0) {
+ regs->regs[insn.r_format.rd] =
+ regs->cp0_epc + dec_insn.pc_inc +
+ dec_insn.next_pc_inc;
+ }
/* Fall through */
case jr_op:
/* For R6, JR already emulated in jalr_op */
@@ -973,9 +975,10 @@ static int cop1Emulate(struct pt_regs *xcp, struct mips_fpu_struct *ctx,
struct mm_decoded_insn dec_insn, void *__user *fault_addr)
{
unsigned long contpc = xcp->cp0_epc + dec_insn.pc_inc;
- unsigned int cond, cbit;
+ unsigned int cond, cbit, bit0;
mips_instruction ir;
int likely, pc_inc;
+ union fpureg *fpr;
u32 __user *wva;
u64 __user *dva;
u32 wval;
@@ -1187,14 +1190,14 @@ emul:
return SIGILL;
cond = likely = 0;
+ fpr = &current->thread.fpu.fpr[MIPSInst_RT(ir)];
+ bit0 = get_fpr32(fpr, 0) & 0x1;
switch (MIPSInst_RS(ir)) {
case bc1eqz_op:
- if (get_fpr32(&current->thread.fpu.fpr[MIPSInst_RT(ir)], 0) & 0x1)
- cond = 1;
+ cond = bit0 == 0;
break;
case bc1nez_op:
- if (!(get_fpr32(&current->thread.fpu.fpr[MIPSInst_RT(ir)], 0) & 0x1))
- cond = 1;
+ cond = bit0 != 0;
break;
}
goto branch_common;
@@ -1674,7 +1677,7 @@ static int fpu_emu(struct pt_regs *xcp, struct mips_fpu_struct *ctx,
union ieee754sp(*b) (union ieee754sp, union ieee754sp);
union ieee754sp(*u) (union ieee754sp);
} handler;
- union ieee754sp fs, ft;
+ union ieee754sp fd, fs, ft;
switch (MIPSInst_FUNC(ir)) {
/* binary ops */
@@ -1945,6 +1948,17 @@ copcsr:
rfmt = w_fmt;
goto copcsr;
+ case fsel_op:
+ if (!cpu_has_mips_r6)
+ return SIGILL;
+
+ SPFROMREG(fd, MIPSInst_FD(ir));
+ if (fd.bits & 0x1)
+ SPFROMREG(rv.s, MIPSInst_FT(ir));
+ else
+ SPFROMREG(rv.s, MIPSInst_FS(ir));
+ break;
+
case fcvtl_op:
if (!cpu_has_mips_3_4_5_64_r2_r6)
return SIGILL;
@@ -1993,7 +2007,7 @@ copcsr:
}
case d_fmt: {
- union ieee754dp fs, ft;
+ union ieee754dp fd, fs, ft;
union {
union ieee754dp(*b) (union ieee754dp, union ieee754dp);
union ieee754dp(*u) (union ieee754dp);
@@ -2243,6 +2257,17 @@ dcopuop:
rfmt = w_fmt;
goto copcsr;
+ case fsel_op:
+ if (!cpu_has_mips_r6)
+ return SIGILL;
+
+ DPFROMREG(fd, MIPSInst_FD(ir));
+ if (fd.bits & 0x1)
+ DPFROMREG(rv.d, MIPSInst_FT(ir));
+ else
+ DPFROMREG(rv.d, MIPSInst_FS(ir));
+ break;
+
case fcvtl_op:
if (!cpu_has_mips_3_4_5_64_r2_r6)
return SIGILL;
diff --git a/arch/mips/math-emu/dp_maddf.c b/arch/mips/math-emu/dp_maddf.c
index 119eda9fa1ea..4a2d03c72959 100644
--- a/arch/mips/math-emu/dp_maddf.c
+++ b/arch/mips/math-emu/dp_maddf.c
@@ -14,8 +14,12 @@
#include "ieee754dp.h"
-union ieee754dp ieee754dp_maddf(union ieee754dp z, union ieee754dp x,
- union ieee754dp y)
+enum maddf_flags {
+ maddf_negate_product = 1 << 0,
+};
+
+static union ieee754dp _dp_maddf(union ieee754dp z, union ieee754dp x,
+ union ieee754dp y, enum maddf_flags flags)
{
int re;
int rs;
@@ -32,16 +36,15 @@ union ieee754dp ieee754dp_maddf(union ieee754dp z, union ieee754dp x,
COMPXDP;
COMPYDP;
-
- u64 zm; int ze; int zs __maybe_unused; int zc;
+ COMPZDP;
EXPLODEXDP;
EXPLODEYDP;
- EXPLODEDP(z, zc, zs, ze, zm)
+ EXPLODEZDP;
FLUSHXDP;
FLUSHYDP;
- FLUSHDP(z, zc, zs, ze, zm);
+ FLUSHZDP;
ieee754_clearcx();
@@ -50,7 +53,7 @@ union ieee754dp ieee754dp_maddf(union ieee754dp z, union ieee754dp x,
ieee754_setcx(IEEE754_INVALID_OPERATION);
return ieee754dp_nanxcpt(z);
case IEEE754_CLASS_DNORM:
- DPDNORMx(zm, ze);
+ DPDNORMZ;
/* QNAN is handled separately below */
}
@@ -154,13 +157,15 @@ union ieee754dp ieee754dp_maddf(union ieee754dp z, union ieee754dp x,
re = xe + ye;
rs = xs ^ ys;
+ if (flags & maddf_negate_product)
+ rs ^= 1;
/* shunt to top of word */
xm <<= 64 - (DP_FBITS + 1);
ym <<= 64 - (DP_FBITS + 1);
/*
- * Multiply 32 bits xm, ym to give high 32 bits rm with stickness.
+ * Multiply 64 bits xm, ym to give high 64 bits rm with stickness.
*/
/* 32 * 32 => 64 */
@@ -198,7 +203,7 @@ union ieee754dp ieee754dp_maddf(union ieee754dp z, union ieee754dp x,
if ((s64) rm < 0) {
rm = (rm >> (64 - (DP_FBITS + 1 + 3))) |
((rm << (DP_FBITS + 1 + 3)) != 0);
- re++;
+ re++;
} else {
rm = (rm >> (64 - (DP_FBITS + 1 + 3 + 1))) |
((rm << (DP_FBITS + 1 + 3 + 1)) != 0);
@@ -263,3 +268,15 @@ union ieee754dp ieee754dp_maddf(union ieee754dp z, union ieee754dp x,
return ieee754dp_format(zs, ze, zm);
}
+
+union ieee754dp ieee754dp_maddf(union ieee754dp z, union ieee754dp x,
+ union ieee754dp y)
+{
+ return _dp_maddf(z, x, y, 0);
+}
+
+union ieee754dp ieee754dp_msubf(union ieee754dp z, union ieee754dp x,
+ union ieee754dp y)
+{
+ return _dp_maddf(z, x, y, maddf_negate_product);
+}
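dp_msubf.c (deleted below) duplicated almost all of the multiply-add path; MSUBF.fmt computes fd - fs*ft, which is just MADDF with the sign of the product flipped, hence the maddf_negate_product flag and the extra "rs ^= 1" above. A quick native-double check of the algebra (the emulator itself is soft-float, so this only demonstrates the identity, not the rounding behaviour):

#include <stdio.h>

/* maddf: z + (x * y), optionally with the product negated. */
static double maddf(double z, double x, double y, int negate_product)
{
	double p = x * y;

	if (negate_product)
		p = -p;          /* same effect as flipping rs, the product sign */
	return z + p;
}

int main(void)
{
	double z = 10.0, x = 3.0, y = 4.0;

	printf("maddf : %g (expect %g)\n", maddf(z, x, y, 0), z + x * y);
	printf("msubf : %g (expect %g)\n", maddf(z, x, y, 1), z - x * y);
	return 0;
}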
diff --git a/arch/mips/math-emu/dp_msubf.c b/arch/mips/math-emu/dp_msubf.c
deleted file mode 100644
index 12241262f856..000000000000
--- a/arch/mips/math-emu/dp_msubf.c
+++ /dev/null
@@ -1,269 +0,0 @@
-/*
- * IEEE754 floating point arithmetic
- * double precision: MSUB.f (Fused Multiply Subtract)
- * MSUBF.fmt: FPR[fd] = FPR[fd] - (FPR[fs] x FPR[ft])
- *
- * MIPS floating point support
- * Copyright (C) 2015 Imagination Technologies, Ltd.
- * Author: Markos Chandras <markos.chandras@imgtec.com>
- *
- * This program is free software; you can distribute it and/or modify it
- * under the terms of the GNU General Public License as published by the
- * Free Software Foundation; version 2 of the License.
- */
-
-#include "ieee754dp.h"
-
-union ieee754dp ieee754dp_msubf(union ieee754dp z, union ieee754dp x,
- union ieee754dp y)
-{
- int re;
- int rs;
- u64 rm;
- unsigned lxm;
- unsigned hxm;
- unsigned lym;
- unsigned hym;
- u64 lrm;
- u64 hrm;
- u64 t;
- u64 at;
- int s;
-
- COMPXDP;
- COMPYDP;
-
- u64 zm; int ze; int zs __maybe_unused; int zc;
-
- EXPLODEXDP;
- EXPLODEYDP;
- EXPLODEDP(z, zc, zs, ze, zm)
-
- FLUSHXDP;
- FLUSHYDP;
- FLUSHDP(z, zc, zs, ze, zm);
-
- ieee754_clearcx();
-
- switch (zc) {
- case IEEE754_CLASS_SNAN:
- ieee754_setcx(IEEE754_INVALID_OPERATION);
- return ieee754dp_nanxcpt(z);
- case IEEE754_CLASS_DNORM:
- DPDNORMx(zm, ze);
- /* QNAN is handled separately below */
- }
-
- switch (CLPAIR(xc, yc)) {
- case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_SNAN):
- case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_SNAN):
- case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_SNAN):
- case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_SNAN):
- case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_SNAN):
- return ieee754dp_nanxcpt(y);
-
- case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_SNAN):
- case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_QNAN):
- case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_ZERO):
- case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_NORM):
- case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_DNORM):
- case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_INF):
- return ieee754dp_nanxcpt(x);
-
- case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_QNAN):
- case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_QNAN):
- case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_QNAN):
- case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_QNAN):
- return y;
-
- case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_QNAN):
- case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_ZERO):
- case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_NORM):
- case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_DNORM):
- case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_INF):
- return x;
-
-
- /*
- * Infinity handling
- */
- case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_ZERO):
- case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_INF):
- if (zc == IEEE754_CLASS_QNAN)
- return z;
- ieee754_setcx(IEEE754_INVALID_OPERATION);
- return ieee754dp_indef();
-
- case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_INF):
- case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_INF):
- case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_NORM):
- case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_DNORM):
- case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF):
- if (zc == IEEE754_CLASS_QNAN)
- return z;
- return ieee754dp_inf(xs ^ ys);
-
- case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO):
- case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_NORM):
- case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_DNORM):
- case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_ZERO):
- case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_ZERO):
- if (zc == IEEE754_CLASS_INF)
- return ieee754dp_inf(zs);
- /* Multiplication is 0 so just return z */
- return z;
-
- case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_DNORM):
- DPDNORMX;
-
- case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_DNORM):
- if (zc == IEEE754_CLASS_QNAN)
- return z;
- else if (zc == IEEE754_CLASS_INF)
- return ieee754dp_inf(zs);
- DPDNORMY;
- break;
-
- case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_NORM):
- if (zc == IEEE754_CLASS_QNAN)
- return z;
- else if (zc == IEEE754_CLASS_INF)
- return ieee754dp_inf(zs);
- DPDNORMX;
- break;
-
- case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_NORM):
- if (zc == IEEE754_CLASS_QNAN)
- return z;
- else if (zc == IEEE754_CLASS_INF)
- return ieee754dp_inf(zs);
- /* fall through to real computations */
- }
-
- /* Finally get to do some computation */
-
- /*
- * Do the multiplication bit first
- *
- * rm = xm * ym, re = xe + ye basically
- *
- * At this point xm and ym should have been normalized.
- */
- assert(xm & DP_HIDDEN_BIT);
- assert(ym & DP_HIDDEN_BIT);
-
- re = xe + ye;
- rs = xs ^ ys;
-
- /* shunt to top of word */
- xm <<= 64 - (DP_FBITS + 1);
- ym <<= 64 - (DP_FBITS + 1);
-
- /*
- * Multiply 32 bits xm, ym to give high 32 bits rm with stickness.
- */
-
- /* 32 * 32 => 64 */
-#define DPXMULT(x, y) ((u64)(x) * (u64)y)
-
- lxm = xm;
- hxm = xm >> 32;
- lym = ym;
- hym = ym >> 32;
-
- lrm = DPXMULT(lxm, lym);
- hrm = DPXMULT(hxm, hym);
-
- t = DPXMULT(lxm, hym);
-
- at = lrm + (t << 32);
- hrm += at < lrm;
- lrm = at;
-
- hrm = hrm + (t >> 32);
-
- t = DPXMULT(hxm, lym);
-
- at = lrm + (t << 32);
- hrm += at < lrm;
- lrm = at;
-
- hrm = hrm + (t >> 32);
-
- rm = hrm | (lrm != 0);
-
- /*
- * Sticky shift down to normal rounding precision.
- */
- if ((s64) rm < 0) {
- rm = (rm >> (64 - (DP_FBITS + 1 + 3))) |
- ((rm << (DP_FBITS + 1 + 3)) != 0);
- re++;
- } else {
- rm = (rm >> (64 - (DP_FBITS + 1 + 3 + 1))) |
- ((rm << (DP_FBITS + 1 + 3 + 1)) != 0);
- }
- assert(rm & (DP_HIDDEN_BIT << 3));
-
- /* And now the subtraction */
-
- /* flip sign of r and handle as add */
- rs ^= 1;
-
- assert(zm & DP_HIDDEN_BIT);
-
- /*
- * Provide guard,round and stick bit space.
- */
- zm <<= 3;
-
- if (ze > re) {
- /*
- * Have to shift y fraction right to align.
- */
- s = ze - re;
- rm = XDPSRS(rm, s);
- re += s;
- } else if (re > ze) {
- /*
- * Have to shift x fraction right to align.
- */
- s = re - ze;
- zm = XDPSRS(zm, s);
- ze += s;
- }
- assert(ze == re);
- assert(ze <= DP_EMAX);
-
- if (zs == rs) {
- /*
- * Generate 28 bit result of adding two 27 bit numbers
- * leaving result in xm, xs and xe.
- */
- zm = zm + rm;
-
- if (zm >> (DP_FBITS + 1 + 3)) { /* carry out */
- zm = XDPSRS1(zm);
- ze++;
- }
- } else {
- if (zm >= rm) {
- zm = zm - rm;
- } else {
- zm = rm - zm;
- zs = rs;
- }
- if (zm == 0)
- return ieee754dp_zero(ieee754_csr.rm == FPU_CSR_RD);
-
- /*
- * Normalize to rounding precision.
- */
- while ((zm >> (DP_FBITS + 3)) == 0) {
- zm <<= 1;
- ze--;
- }
- }
-
- return ieee754dp_format(zs, ze, zm);
-}
diff --git a/arch/mips/math-emu/dp_mul.c b/arch/mips/math-emu/dp_mul.c
index d0901f03fa19..87d0b44b0614 100644
--- a/arch/mips/math-emu/dp_mul.c
+++ b/arch/mips/math-emu/dp_mul.c
@@ -125,7 +125,7 @@ union ieee754dp ieee754dp_mul(union ieee754dp x, union ieee754dp y)
ym <<= 64 - (DP_FBITS + 1);
/*
- * Multiply 32 bits xm, ym to give high 32 bits rm with stickness.
+ * Multiply 64 bits xm, ym to give high 64 bits rm with stickness.
*/
/* 32 * 32 => 64 */
@@ -163,7 +163,7 @@ union ieee754dp ieee754dp_mul(union ieee754dp x, union ieee754dp y)
if ((s64) rm < 0) {
rm = (rm >> (64 - (DP_FBITS + 1 + 3))) |
((rm << (DP_FBITS + 1 + 3)) != 0);
- re++;
+ re++;
} else {
rm = (rm >> (64 - (DP_FBITS + 1 + 3 + 1))) |
((rm << (DP_FBITS + 1 + 3 + 1)) != 0);
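The comment fix above reflects what the code has always done: form the 128-bit product of two 64-bit mantissas from four 32x32 partial products, keep the high 64 bits, and OR a sticky bit into bit 0 whenever any discarded low bit was set. A standalone version of that multiplication, checked against a compiler-provided 128-bit multiply (GCC/Clang __int128, which the MIPS code naturally cannot use):

#include <stdio.h>
#include <stdint.h>

/* High 64 bits of a 64x64 multiply, with bit 0 also set ("sticky")
 * whenever the discarded low 64 bits are non-zero. */
static uint64_t mul64_hi_sticky(uint64_t xm, uint64_t ym)
{
	uint32_t lxm = xm, hxm = xm >> 32;
	uint32_t lym = ym, hym = ym >> 32;
	uint64_t lrm, hrm, t, at;

	lrm = (uint64_t)lxm * lym;            /* low  x low  */
	hrm = (uint64_t)hxm * hym;            /* high x high */

	t = (uint64_t)lxm * hym;              /* first cross term, carried in */
	at = lrm + (t << 32);
	hrm += at < lrm;
	lrm = at;
	hrm += t >> 32;

	t = (uint64_t)hxm * lym;              /* second cross term */
	at = lrm + (t << 32);
	hrm += at < lrm;
	lrm = at;
	hrm += t >> 32;

	return hrm | (lrm != 0);
}

int main(void)
{
	uint64_t x = 0x9abcdef012345678ULL, y = 0xfedcba9876543210ULL;
	unsigned __int128 full = (unsigned __int128)x * y;
	uint64_t hi = (uint64_t)(full >> 64), lo = (uint64_t)full;

	printf("ok=%d\n", mul64_hi_sticky(x, y) == (hi | (lo != 0)));
	return 0;
}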
diff --git a/arch/mips/math-emu/dsemul.c b/arch/mips/math-emu/dsemul.c
index 46b964d2b79c..47074887e64c 100644
--- a/arch/mips/math-emu/dsemul.c
+++ b/arch/mips/math-emu/dsemul.c
@@ -8,7 +8,7 @@
#include "ieee754.h"
/*
- * Emulate the arbritrary instruction ir at xcp->cp0_epc. Required when
+ * Emulate the arbitrary instruction ir at xcp->cp0_epc. Required when
* we have to emulate the instruction in a COP1 branch delay slot. Do
* not change cp0_epc due to the instruction
*
@@ -60,7 +60,7 @@ int mips_dsemul(struct pt_regs *regs, mips_instruction ir, unsigned long cpc)
unsigned int rs;
s32 v;
- rs = (((insn.mm_a_format.rs + 0x1e) & 0xf) + 2);
+ rs = (((insn.mm_a_format.rs + 0xe) & 0xf) + 2);
v = regs->cp0_epc & ~3;
v += insn.mm_a_format.simmediate << 2;
regs->regs[rs] = (long)v;
@@ -88,7 +88,7 @@ int mips_dsemul(struct pt_regs *regs, mips_instruction ir, unsigned long cpc)
fr = (struct emuframe __user *)
((regs->regs[29] - sizeof(struct emuframe)) & ~0x7);
- /* Verify that the stack pointer is not competely insane */
+ /* Verify that the stack pointer is not completely insane */
if (unlikely(!access_ok(VERIFY_WRITE, fr, sizeof(struct emuframe))))
return SIGBUS;
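In the ADDIUPC decode above, 0x1e and 0xe are congruent mod 16, so for a 3-bit rs field the mapping itself is unchanged; the formula appears to implement the usual microMIPS 3-bit GPR encoding, sending encodings 0 and 1 to $16/$17 and 2 through 7 to themselves. A two-liner to print that table (purely the arithmetic from the line above):

#include <stdio.h>

int main(void)
{
	/* The 3-bit rs field selects a full GPR number via the formula
	 * used in mips_dsemul(). */
	for (unsigned rs = 0; rs < 8; rs++)
		printf("rs=%u -> $%u\n", rs, ((rs + 0xe) & 0xf) + 2);
	return 0;
}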
diff --git a/arch/mips/math-emu/ieee754dp.c b/arch/mips/math-emu/ieee754dp.c
index 47d26c805eac..465a0342ed4c 100644
--- a/arch/mips/math-emu/ieee754dp.c
+++ b/arch/mips/math-emu/ieee754dp.c
@@ -54,10 +54,13 @@ union ieee754dp __cold ieee754dp_nanxcpt(union ieee754dp r)
assert(ieee754dp_issnan(r));
ieee754_setcx(IEEE754_INVALID_OPERATION);
- if (ieee754_csr.nan2008)
+ if (ieee754_csr.nan2008) {
DPMANT(r) |= DP_MBIT(DP_FBITS - 1);
- else
- r = ieee754dp_indef();
+ } else {
+ DPMANT(r) &= ~DP_MBIT(DP_FBITS - 1);
+ if (!ieee754dp_isnan(r))
+ DPMANT(r) |= DP_MBIT(DP_FBITS - 2);
+ }
return r;
}
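nanxcpt() quiets a signalling NaN. In 2008 mode it simply sets the quiet bit (the mantissa MSB); in legacy mode, where the convention is inverted, it now clears that bit instead, and if that empties the mantissa (which would turn the value into an infinity) it sets the next bit down so the result stays a NaN. A rough bit-level sketch for doubles, under my reading of the legacy encoding:

#include <stdint.h>
#include <stdio.h>

#define DP_FBITS   52
#define DP_MBIT(n) (1ULL << (n))
#define MANT_MASK  (DP_MBIT(DP_FBITS) - 1)

/* Quiet an sNaN bit pattern under the legacy (non-2008) MIPS convention:
 * clear the mantissa MSB; if no mantissa bit is left (the pattern would
 * now read as infinity), set the next bit down to keep it a NaN. */
static uint64_t quiet_legacy(uint64_t bits)
{
	bits &= ~DP_MBIT(DP_FBITS - 1);
	if ((bits & MANT_MASK) == 0)
		bits |= DP_MBIT(DP_FBITS - 2);
	return bits;
}

int main(void)
{
	/* Exponent all ones, mantissa MSB set: signalling under the legacy
	 * convention, and the awkward case since the rest is zero. */
	uint64_t snan = 0x7ff8000000000000ULL;
	uint64_t q = quiet_legacy(snan);

	printf("in  %016llx\nout %016llx (still a NaN: %d)\n",
	       (unsigned long long)snan, (unsigned long long)q,
	       (q & MANT_MASK) != 0);
	return 0;
}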
diff --git a/arch/mips/math-emu/ieee754dp.h b/arch/mips/math-emu/ieee754dp.h
index e2babd98fee3..9ba023004eb6 100644
--- a/arch/mips/math-emu/ieee754dp.h
+++ b/arch/mips/math-emu/ieee754dp.h
@@ -60,6 +60,7 @@ static inline int ieee754dp_finite(union ieee754dp x)
while ((m >> DP_FBITS) == 0) { m <<= 1; e--; }
#define DPDNORMX DPDNORMx(xm, xe)
#define DPDNORMY DPDNORMx(ym, ye)
+#define DPDNORMZ DPDNORMx(zm, ze)
static inline union ieee754dp builddp(int s, int bx, u64 m)
{
diff --git a/arch/mips/math-emu/ieee754int.h b/arch/mips/math-emu/ieee754int.h
index ed7bb277b3e0..8bc2f6963324 100644
--- a/arch/mips/math-emu/ieee754int.h
+++ b/arch/mips/math-emu/ieee754int.h
@@ -55,6 +55,9 @@ static inline int ieee754_class_nan(int xc)
#define COMPYSP \
unsigned ym; int ye; int ys; int yc
+#define COMPZSP \
+ unsigned zm; int ze; int zs; int zc
+
#define EXPLODESP(v, vc, vs, ve, vm) \
{ \
vs = SPSIGN(v); \
@@ -81,6 +84,7 @@ static inline int ieee754_class_nan(int xc)
}
#define EXPLODEXSP EXPLODESP(x, xc, xs, xe, xm)
#define EXPLODEYSP EXPLODESP(y, yc, ys, ye, ym)
+#define EXPLODEZSP EXPLODESP(z, zc, zs, ze, zm)
#define COMPXDP \
@@ -89,6 +93,9 @@ static inline int ieee754_class_nan(int xc)
#define COMPYDP \
u64 ym; int ye; int ys; int yc
+#define COMPZDP \
+ u64 zm; int ze; int zs; int zc
+
#define EXPLODEDP(v, vc, vs, ve, vm) \
{ \
vm = DPMANT(v); \
@@ -115,6 +122,7 @@ static inline int ieee754_class_nan(int xc)
}
#define EXPLODEXDP EXPLODEDP(x, xc, xs, xe, xm)
#define EXPLODEYDP EXPLODEDP(y, yc, ys, ye, ym)
+#define EXPLODEZDP EXPLODEDP(z, zc, zs, ze, zm)
#define FLUSHDP(v, vc, vs, ve, vm) \
if (vc==IEEE754_CLASS_DNORM) { \
@@ -140,7 +148,9 @@ static inline int ieee754_class_nan(int xc)
#define FLUSHXDP FLUSHDP(x, xc, xs, xe, xm)
#define FLUSHYDP FLUSHDP(y, yc, ys, ye, ym)
+#define FLUSHZDP FLUSHDP(z, zc, zs, ze, zm)
#define FLUSHXSP FLUSHSP(x, xc, xs, xe, xm)
#define FLUSHYSP FLUSHSP(y, yc, ys, ye, ym)
+#define FLUSHZSP FLUSHSP(z, zc, zs, ze, zm)
#endif /* __IEEE754INT_H */
diff --git a/arch/mips/math-emu/ieee754sp.c b/arch/mips/math-emu/ieee754sp.c
index e0b2c450b963..260e68965907 100644
--- a/arch/mips/math-emu/ieee754sp.c
+++ b/arch/mips/math-emu/ieee754sp.c
@@ -54,10 +54,13 @@ union ieee754sp __cold ieee754sp_nanxcpt(union ieee754sp r)
assert(ieee754sp_issnan(r));
ieee754_setcx(IEEE754_INVALID_OPERATION);
- if (ieee754_csr.nan2008)
+ if (ieee754_csr.nan2008) {
SPMANT(r) |= SP_MBIT(SP_FBITS - 1);
- else
- r = ieee754sp_indef();
+ } else {
+ SPMANT(r) &= ~SP_MBIT(SP_FBITS - 1);
+ if (!ieee754sp_isnan(r))
+ SPMANT(r) |= SP_MBIT(SP_FBITS - 2);
+ }
return r;
}
@@ -138,7 +141,8 @@ union ieee754sp ieee754sp_format(int sn, int xe, unsigned xm)
} else {
/* sticky right shift es bits
*/
- SPXSRSXn(es);
+ xm = XSPSRS(xm, es);
+ xe += es;
assert((xm & (SP_HIDDEN_BIT << 3)) == 0);
assert(xe == SP_EMIN);
}
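The old SPXSRSXn()/SPXSRSYn() macros bumped the exponent as a hidden side effect inside the expression; the replacement XSPSRS()/XSPSRS1() helpers, defined in the ieee754sp.h hunk that follows, are pure, with the exponent adjustment written out at each call site (the sp_add.c and sp_maddf.c hunks further down follow the same pattern). The shift itself is a "sticky" right shift: bits shifted out are ORed into bit 0 so rounding still sees them. A standalone check (SP_FBITS is 23 for single precision):

#include <stdio.h>

#define SP_FBITS 23

/* Sticky right shift: saturate very large shifts to 1, otherwise OR all
 * shifted-out bits into bit 0 of the result. */
#define XSPSRS(v, rs) \
	((rs) > (SP_FBITS + 3) ? 1 \
			       : ((v) >> (rs)) | (((v) << (32 - (rs))) != 0))

int main(void)
{
	unsigned m = 0x00c00005;   /* low bits would vanish in a plain shift */

	printf("plain  >> 4 : %#x\n", m >> 4);
	printf("sticky >> 4 : %#x\n", (unsigned)XSPSRS(m, 4));
	printf("sticky >> 30: %#x\n", (unsigned)XSPSRS(m, 30));
	return 0;
}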
diff --git a/arch/mips/math-emu/ieee754sp.h b/arch/mips/math-emu/ieee754sp.h
index 374a3f00a589..8476067075fe 100644
--- a/arch/mips/math-emu/ieee754sp.h
+++ b/arch/mips/math-emu/ieee754sp.h
@@ -46,25 +46,24 @@ static inline int ieee754sp_finite(union ieee754sp x)
}
/* 3bit extended single precision sticky right shift */
-#define SPXSRSXn(rs) \
- (xe += rs, \
- xm = (rs > (SP_FBITS+3))?1:((xm) >> (rs)) | ((xm) << (32-(rs)) != 0))
+#define XSPSRS(v, rs) \
+ ((rs > (SP_FBITS+3))?1:((v) >> (rs)) | ((v) << (32-(rs)) != 0))
-#define SPXSRSX1() \
- (xe++, (xm = (xm >> 1) | (xm & 1)))
+#define XSPSRS1(m) \
+ ((m >> 1) | (m & 1))
-#define SPXSRSYn(rs) \
- (ye+=rs, \
- ym = (rs > (SP_FBITS+3))?1:((ym) >> (rs)) | ((ym) << (32-(rs)) != 0))
+#define SPXSRSX1() \
+ (xe++, (xm = XSPSRS1(xm)))
#define SPXSRSY1() \
- (ye++, (ym = (ym >> 1) | (ym & 1)))
+ (ye++, (ym = XSPSRS1(ym)))
/* convert denormal to normalized with extended exponent */
#define SPDNORMx(m,e) \
while ((m >> SP_FBITS) == 0) { m <<= 1; e--; }
#define SPDNORMX SPDNORMx(xm, xe)
#define SPDNORMY SPDNORMx(ym, ye)
+#define SPDNORMZ SPDNORMx(zm, ze)
static inline union ieee754sp buildsp(int s, int bx, unsigned m)
{
diff --git a/arch/mips/math-emu/sp_add.c b/arch/mips/math-emu/sp_add.c
index f1c87b07d3b4..c55c0c00bca8 100644
--- a/arch/mips/math-emu/sp_add.c
+++ b/arch/mips/math-emu/sp_add.c
@@ -132,13 +132,15 @@ union ieee754sp ieee754sp_add(union ieee754sp x, union ieee754sp y)
* Have to shift y fraction right to align.
*/
s = xe - ye;
- SPXSRSYn(s);
+ ym = XSPSRS(ym, s);
+ ye += s;
} else if (ye > xe) {
/*
* Have to shift x fraction right to align.
*/
s = ye - xe;
- SPXSRSXn(s);
+ xm = XSPSRS(xm, s);
+ xe += s;
}
assert(xe == ye);
assert(xe <= SP_EMAX);
diff --git a/arch/mips/math-emu/sp_maddf.c b/arch/mips/math-emu/sp_maddf.c
index dd1dd83e34eb..a8cd8b4f235e 100644
--- a/arch/mips/math-emu/sp_maddf.c
+++ b/arch/mips/math-emu/sp_maddf.c
@@ -14,8 +14,12 @@
#include "ieee754sp.h"
-union ieee754sp ieee754sp_maddf(union ieee754sp z, union ieee754sp x,
- union ieee754sp y)
+enum maddf_flags {
+ maddf_negate_product = 1 << 0,
+};
+
+static union ieee754sp _sp_maddf(union ieee754sp z, union ieee754sp x,
+ union ieee754sp y, enum maddf_flags flags)
{
int re;
int rs;
@@ -32,15 +36,15 @@ union ieee754sp ieee754sp_maddf(union ieee754sp z, union ieee754sp x,
COMPXSP;
COMPYSP;
- u32 zm; int ze; int zs __maybe_unused; int zc;
+ COMPZSP;
EXPLODEXSP;
EXPLODEYSP;
- EXPLODESP(z, zc, zs, ze, zm)
+ EXPLODEZSP;
FLUSHXSP;
FLUSHYSP;
- FLUSHSP(z, zc, zs, ze, zm);
+ FLUSHZSP;
ieee754_clearcx();
@@ -49,7 +53,7 @@ union ieee754sp ieee754sp_maddf(union ieee754sp z, union ieee754sp x,
ieee754_setcx(IEEE754_INVALID_OPERATION);
return ieee754sp_nanxcpt(z);
case IEEE754_CLASS_DNORM:
- SPDNORMx(zm, ze);
+ SPDNORMZ;
/* QNAN is handled separately below */
}
@@ -154,6 +158,8 @@ union ieee754sp ieee754sp_maddf(union ieee754sp z, union ieee754sp x,
re = xe + ye;
rs = xs ^ ys;
+ if (flags & maddf_negate_product)
+ rs ^= 1;
/* shunt to top of word */
xm <<= 32 - (SP_FBITS + 1);
@@ -208,16 +214,18 @@ union ieee754sp ieee754sp_maddf(union ieee754sp z, union ieee754sp x,
if (ze > re) {
/*
- * Have to shift y fraction right to align.
+ * Have to shift r fraction right to align.
*/
s = ze - re;
- SPXSRSYn(s);
+ rm = XSPSRS(rm, s);
+ re += s;
} else if (re > ze) {
/*
- * Have to shift x fraction right to align.
+ * Have to shift z fraction right to align.
*/
s = re - ze;
- SPXSRSYn(s);
+ zm = XSPSRS(zm, s);
+ ze += s;
}
assert(ze == re);
assert(ze <= SP_EMAX);
@@ -230,7 +238,8 @@ union ieee754sp ieee754sp_maddf(union ieee754sp z, union ieee754sp x,
zm = zm + rm;
if (zm >> (SP_FBITS + 1 + 3)) { /* carry out */
- SPXSRSX1();
+ zm = XSPSRS1(zm);
+ ze++;
}
} else {
if (zm >= rm) {
@@ -253,3 +262,15 @@ union ieee754sp ieee754sp_maddf(union ieee754sp z, union ieee754sp x,
}
return ieee754sp_format(zs, ze, zm);
}
+
+union ieee754sp ieee754sp_maddf(union ieee754sp z, union ieee754sp x,
+ union ieee754sp y)
+{
+ return _sp_maddf(z, x, y, 0);
+}
+
+union ieee754sp ieee754sp_msubf(union ieee754sp z, union ieee754sp x,
+ union ieee754sp y)
+{
+ return _sp_maddf(z, x, y, maddf_negate_product);
+}
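
The two exported entry points now share one implementation: MSUBF is MADDF with the product's sign flipped before the accumulate, i.e. z - x*y = z + (-(x*y)), which is what the maddf_negate_product flag (rs ^= 1) does. A simplified illustration with plain floats; it ignores the fused, single-rounding nature of the real operation and the helper names are made up:

    #include <stdio.h>

    enum maddf_flags {
        maddf_negate_product = 1 << 0,
    };

    /* Illustrative only: the emulator works on decomposed sign/exponent/mantissa. */
    static float toy_maddf(float z, float x, float y, int flags)
    {
        float p = x * y;

        if (flags & maddf_negate_product)
            p = -p;                         /* corresponds to rs ^= 1 above */
        return z + p;
    }

    int main(void)
    {
        printf("maddf: %g\n", toy_maddf(10.0f, 2.0f, 3.0f, 0));                     /* 16 */
        printf("msubf: %g\n", toy_maddf(10.0f, 2.0f, 3.0f, maddf_negate_product));  /*  4 */
        return 0;
    }
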
diff --git a/arch/mips/math-emu/sp_msubf.c b/arch/mips/math-emu/sp_msubf.c
deleted file mode 100644
index 81c38b980d69..000000000000
--- a/arch/mips/math-emu/sp_msubf.c
+++ /dev/null
@@ -1,258 +0,0 @@
-/*
- * IEEE754 floating point arithmetic
- * single precision: MSUB.f (Fused Multiply Subtract)
- * MSUBF.fmt: FPR[fd] = FPR[fd] - (FPR[fs] x FPR[ft])
- *
- * MIPS floating point support
- * Copyright (C) 2015 Imagination Technologies, Ltd.
- * Author: Markos Chandras <markos.chandras@imgtec.com>
- *
- * This program is free software; you can distribute it and/or modify it
- * under the terms of the GNU General Public License as published by the
- * Free Software Foundation; version 2 of the License.
- */
-
-#include "ieee754sp.h"
-
-union ieee754sp ieee754sp_msubf(union ieee754sp z, union ieee754sp x,
- union ieee754sp y)
-{
- int re;
- int rs;
- unsigned rm;
- unsigned short lxm;
- unsigned short hxm;
- unsigned short lym;
- unsigned short hym;
- unsigned lrm;
- unsigned hrm;
- unsigned t;
- unsigned at;
- int s;
-
- COMPXSP;
- COMPYSP;
- u32 zm; int ze; int zs __maybe_unused; int zc;
-
- EXPLODEXSP;
- EXPLODEYSP;
- EXPLODESP(z, zc, zs, ze, zm)
-
- FLUSHXSP;
- FLUSHYSP;
- FLUSHSP(z, zc, zs, ze, zm);
-
- ieee754_clearcx();
-
- switch (zc) {
- case IEEE754_CLASS_SNAN:
- ieee754_setcx(IEEE754_INVALID_OPERATION);
- return ieee754sp_nanxcpt(z);
- case IEEE754_CLASS_DNORM:
- SPDNORMx(zm, ze);
- /* QNAN is handled separately below */
- }
-
- switch (CLPAIR(xc, yc)) {
- case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_SNAN):
- case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_SNAN):
- case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_SNAN):
- case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_SNAN):
- case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_SNAN):
- return ieee754sp_nanxcpt(y);
-
- case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_SNAN):
- case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_QNAN):
- case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_ZERO):
- case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_NORM):
- case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_DNORM):
- case CLPAIR(IEEE754_CLASS_SNAN, IEEE754_CLASS_INF):
- return ieee754sp_nanxcpt(x);
-
- case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_QNAN):
- case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_QNAN):
- case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_QNAN):
- case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_QNAN):
- return y;
-
- case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_QNAN):
- case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_ZERO):
- case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_NORM):
- case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_DNORM):
- case CLPAIR(IEEE754_CLASS_QNAN, IEEE754_CLASS_INF):
- return x;
-
- /*
- * Infinity handling
- */
- case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_ZERO):
- case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_INF):
- if (zc == IEEE754_CLASS_QNAN)
- return z;
- ieee754_setcx(IEEE754_INVALID_OPERATION);
- return ieee754sp_indef();
-
- case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_INF):
- case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_INF):
- case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_NORM):
- case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_DNORM):
- case CLPAIR(IEEE754_CLASS_INF, IEEE754_CLASS_INF):
- if (zc == IEEE754_CLASS_QNAN)
- return z;
- return ieee754sp_inf(xs ^ ys);
-
- case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_ZERO):
- case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_NORM):
- case CLPAIR(IEEE754_CLASS_ZERO, IEEE754_CLASS_DNORM):
- case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_ZERO):
- case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_ZERO):
- if (zc == IEEE754_CLASS_INF)
- return ieee754sp_inf(zs);
- /* Multiplication is 0 so just return z */
- return z;
-
- case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_DNORM):
- SPDNORMX;
-
- case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_DNORM):
- if (zc == IEEE754_CLASS_QNAN)
- return z;
- else if (zc == IEEE754_CLASS_INF)
- return ieee754sp_inf(zs);
- SPDNORMY;
- break;
-
- case CLPAIR(IEEE754_CLASS_DNORM, IEEE754_CLASS_NORM):
- if (zc == IEEE754_CLASS_QNAN)
- return z;
- else if (zc == IEEE754_CLASS_INF)
- return ieee754sp_inf(zs);
- SPDNORMX;
- break;
-
- case CLPAIR(IEEE754_CLASS_NORM, IEEE754_CLASS_NORM):
- if (zc == IEEE754_CLASS_QNAN)
- return z;
- else if (zc == IEEE754_CLASS_INF)
- return ieee754sp_inf(zs);
- /* fall through to real compuation */
- }
-
- /* Finally get to do some computation */
-
- /*
- * Do the multiplication bit first
- *
- * rm = xm * ym, re = xe + ye basically
- *
- * At this point xm and ym should have been normalized.
- */
-
- /* rm = xm * ym, re = xe+ye basically */
- assert(xm & SP_HIDDEN_BIT);
- assert(ym & SP_HIDDEN_BIT);
-
- re = xe + ye;
- rs = xs ^ ys;
-
- /* shunt to top of word */
- xm <<= 32 - (SP_FBITS + 1);
- ym <<= 32 - (SP_FBITS + 1);
-
- /*
- * Multiply 32 bits xm, ym to give high 32 bits rm with stickness.
- */
- lxm = xm & 0xffff;
- hxm = xm >> 16;
- lym = ym & 0xffff;
- hym = ym >> 16;
-
- lrm = lxm * lym; /* 16 * 16 => 32 */
- hrm = hxm * hym; /* 16 * 16 => 32 */
-
- t = lxm * hym; /* 16 * 16 => 32 */
- at = lrm + (t << 16);
- hrm += at < lrm;
- lrm = at;
- hrm = hrm + (t >> 16);
-
- t = hxm * lym; /* 16 * 16 => 32 */
- at = lrm + (t << 16);
- hrm += at < lrm;
- lrm = at;
- hrm = hrm + (t >> 16);
-
- rm = hrm | (lrm != 0);
-
- /*
- * Sticky shift down to normal rounding precision.
- */
- if ((int) rm < 0) {
- rm = (rm >> (32 - (SP_FBITS + 1 + 3))) |
- ((rm << (SP_FBITS + 1 + 3)) != 0);
- re++;
- } else {
- rm = (rm >> (32 - (SP_FBITS + 1 + 3 + 1))) |
- ((rm << (SP_FBITS + 1 + 3 + 1)) != 0);
- }
- assert(rm & (SP_HIDDEN_BIT << 3));
-
- /* And now the subtraction */
-
- /* Flip sign of r and handle as add */
- rs ^= 1;
-
- assert(zm & SP_HIDDEN_BIT);
-
- /*
- * Provide guard,round and stick bit space.
- */
- zm <<= 3;
-
- if (ze > re) {
- /*
- * Have to shift y fraction right to align.
- */
- s = ze - re;
- SPXSRSYn(s);
- } else if (re > ze) {
- /*
- * Have to shift x fraction right to align.
- */
- s = re - ze;
- SPXSRSYn(s);
- }
- assert(ze == re);
- assert(ze <= SP_EMAX);
-
- if (zs == rs) {
- /*
- * Generate 28 bit result of adding two 27 bit numbers
- * leaving result in zm, zs and ze.
- */
- zm = zm + rm;
-
- if (zm >> (SP_FBITS + 1 + 3)) { /* carry out */
- SPXSRSX1(); /* shift preserving sticky */
- }
- } else {
- if (zm >= rm) {
- zm = zm - rm;
- } else {
- zm = rm - zm;
- zs = rs;
- }
- if (zm == 0)
- return ieee754sp_zero(ieee754_csr.rm == FPU_CSR_RD);
-
- /*
- * Normalize in extended single precision
- */
- while ((zm >> (SP_MBITS + 3)) == 0) {
- zm <<= 1;
- ze--;
- }
-
- }
- return ieee754sp_format(zs, ze, zm);
-}
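
The multiplication that survives in sp_maddf.c builds the 64-bit product of the two mantissas from four 16x16 partial products, keeps the high word, and ORs any non-zero low bits into a sticky bit. The same arithmetic on its own, checked against a native 64-bit multiply:

    #include <stdint.h>
    #include <stdio.h>

    /* High 32 bits of xm*ym, with any non-zero low bits ORed into bit 0. */
    static uint32_t mul32_high_sticky(uint32_t xm, uint32_t ym)
    {
        uint32_t lxm = xm & 0xffff, hxm = xm >> 16;
        uint32_t lym = ym & 0xffff, hym = ym >> 16;
        uint32_t lrm = lxm * lym;           /* 16 x 16 => 32 */
        uint32_t hrm = hxm * hym;
        uint32_t t, at;

        t = lxm * hym;
        at = lrm + (t << 16);
        hrm += at < lrm;                    /* carry out of the low word */
        lrm = at;
        hrm += t >> 16;

        t = hxm * lym;
        at = lrm + (t << 16);
        hrm += at < lrm;
        lrm = at;
        hrm += t >> 16;

        return hrm | (lrm != 0);            /* fold the low half into the sticky bit */
    }

    int main(void)
    {
        uint32_t x = 0x80000001, y = 0xc0000003;
        uint64_t p = (uint64_t)x * y;
        uint32_t want = (uint32_t)(p >> 32) | ((uint32_t)p != 0);

        printf("%08x %08x\n", mul32_high_sticky(x, y), want);  /* both 60000003 */
        return 0;
    }
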
diff --git a/arch/mips/math-emu/sp_sub.c b/arch/mips/math-emu/sp_sub.c
index ec5f937a8b3e..dc998ed47295 100644
--- a/arch/mips/math-emu/sp_sub.c
+++ b/arch/mips/math-emu/sp_sub.c
@@ -134,13 +134,15 @@ union ieee754sp ieee754sp_sub(union ieee754sp x, union ieee754sp y)
* have to shift y fraction right to align
*/
s = xe - ye;
- SPXSRSYn(s);
+ ym = XSPSRS(ym, s);
+ ye += s;
} else if (ye > xe) {
/*
* have to shift x fraction right to align
*/
s = ye - xe;
- SPXSRSXn(s);
+ xm = XSPSRS(xm, s);
+ xe += s;
}
assert(xe == ye);
assert(xe <= SP_EMAX);
diff --git a/arch/mips/mm/c-r4k.c b/arch/mips/mm/c-r4k.c
index caac3d747a90..ef7f925dd1b0 100644
--- a/arch/mips/mm/c-r4k.c
+++ b/arch/mips/mm/c-r4k.c
@@ -77,6 +77,7 @@ static inline void r4k_on_each_cpu(void (*func) (void *info), void *info)
*/
static unsigned long icache_size __read_mostly;
static unsigned long dcache_size __read_mostly;
+static unsigned long vcache_size __read_mostly;
static unsigned long scache_size __read_mostly;
/*
@@ -447,6 +448,11 @@ static inline void local_r4k___flush_cache_all(void * args)
r4k_blast_scache();
break;
+ case CPU_BMIPS5000:
+ r4k_blast_scache();
+ __sync();
+ break;
+
default:
r4k_blast_dcache();
r4k_blast_icache();
@@ -492,7 +498,14 @@ static inline void local_r4k_flush_cache_range(void * args)
if (!(has_valid_asid(vma->vm_mm)))
return;
- r4k_blast_dcache();
+ /*
+ * If dcache can alias, we must blast it since mapping is changing.
+ * If executable, we must ensure any dirty lines are written back far
+ * enough to be visible to icache.
+ */
+ if (cpu_has_dc_aliases || (exec && !cpu_has_ic_fills_f_dc))
+ r4k_blast_dcache();
+ /* If executable, blast stale lines from icache */
if (exec)
r4k_blast_icache();
}
@@ -502,7 +515,7 @@ static void r4k_flush_cache_range(struct vm_area_struct *vma,
{
int exec = vma->vm_flags & VM_EXEC;
- if (cpu_has_dc_aliases || (exec && !cpu_has_ic_fills_f_dc))
+ if (cpu_has_dc_aliases || exec)
r4k_on_each_cpu(local_r4k_flush_cache_range, vma);
}
@@ -1148,6 +1161,8 @@ static void probe_pcache(void)
c->dcache.ways *
c->dcache.linesz;
c->dcache.waybit = 0;
+ if ((prid & PRID_REV_MASK) >= PRID_REV_LOONGSON3A_R2)
+ c->options |= MIPS_CPU_PREFETCH;
break;
case CPU_CAVIUM_OCTEON3:
@@ -1278,6 +1293,8 @@ static void probe_pcache(void)
case CPU_M5150:
case CPU_QEMU_GENERIC:
case CPU_I6400:
+ case CPU_P6600:
+ case CPU_M6250:
if (!(read_c0_config7() & MIPS_CONF7_IAR) &&
(c->icache.waysize > PAGE_SIZE))
c->icache.flags |= MIPS_CACHE_ALIASES;
@@ -1304,7 +1321,14 @@ static void probe_pcache(void)
break;
case CPU_ALCHEMY:
+ case CPU_I6400:
+ c->icache.flags |= MIPS_CACHE_IC_F_DC;
+ break;
+
+ case CPU_BMIPS5000:
c->icache.flags |= MIPS_CACHE_IC_F_DC;
+ /* Cache aliases are handled in hardware; allow HIGHMEM */
+ c->dcache.flags &= ~MIPS_CACHE_ALIASES;
break;
case CPU_LOONGSON2:
@@ -1328,6 +1352,31 @@ static void probe_pcache(void)
c->dcache.linesz);
}
+static void probe_vcache(void)
+{
+ struct cpuinfo_mips *c = &current_cpu_data;
+ unsigned int config2, lsize;
+
+ if (current_cpu_type() != CPU_LOONGSON3)
+ return;
+
+ config2 = read_c0_config2();
+ if ((lsize = ((config2 >> 20) & 15)))
+ c->vcache.linesz = 2 << lsize;
+ else
+ c->vcache.linesz = lsize;
+
+ c->vcache.sets = 64 << ((config2 >> 24) & 15);
+ c->vcache.ways = 1 + ((config2 >> 16) & 15);
+
+ vcache_size = c->vcache.sets * c->vcache.ways * c->vcache.linesz;
+
+ c->vcache.waybit = 0;
+
+ pr_info("Unified victim cache %ldkB %s, linesize %d bytes.\n",
+ vcache_size >> 10, way_string[c->vcache.ways], c->vcache.linesz);
+}
+
/*
* If you even _breathe_ on this function, look at the gcc output and make sure
* it does not pop things on and off the stack for the cache sizing loop that
@@ -1650,6 +1699,7 @@ void r4k_cache_init(void)
struct cpuinfo_mips *c = &current_cpu_data;
probe_pcache();
+ probe_vcache();
setup_scache();
r4k_blast_dcache_page_setup();
@@ -1671,7 +1721,7 @@ void r4k_cache_init(void)
* This code supports virtually indexed processors and will be
* unnecessarily inefficient on physically indexed processors.
*/
- if (c->dcache.linesz)
+ if (c->dcache.linesz && cpu_has_dc_aliases)
shm_align_mask = max_t( unsigned long,
c->dcache.sets * c->dcache.linesz - 1,
PAGE_SIZE - 1);
@@ -1744,12 +1794,24 @@ void r4k_cache_init(void)
flush_icache_range = (void *)b5k_instruction_hazard;
local_flush_icache_range = (void *)b5k_instruction_hazard;
- /* Cache aliases are handled in hardware; allow HIGHMEM */
- current_cpu_data.dcache.flags &= ~MIPS_CACHE_ALIASES;
/* Optimization: an L2 flush implicitly flushes the L1 */
current_cpu_data.options |= MIPS_CPU_INCLUSIVE_CACHES;
break;
+ case CPU_LOONGSON3:
+ /* Loongson-3 maintains cache coherency by hardware */
+ __flush_cache_all = cache_noop;
+ __flush_cache_vmap = cache_noop;
+ __flush_cache_vunmap = cache_noop;
+ __flush_kernel_vmap_range = (void *)cache_noop;
+ flush_cache_mm = (void *)cache_noop;
+ flush_cache_page = (void *)cache_noop;
+ flush_cache_range = (void *)cache_noop;
+ flush_cache_sigtramp = (void *)cache_noop;
+ flush_icache_all = (void *)cache_noop;
+ flush_data_cache_page = (void *)cache_noop;
+ local_flush_data_cache_page = (void *)cache_noop;
+ break;
}
}
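
probe_vcache() above derives the Loongson-3 victim cache geometry from three 4-bit fields of Config2: line size 2 << ((config2 >> 20) & 15), sets 64 << ((config2 >> 24) & 15), and ways 1 + ((config2 >> 16) & 15). A quick decode of a made-up register value:

    #include <stdio.h>

    /* Decode victim cache geometry the way probe_vcache() does. */
    static void decode_vcache(unsigned int config2)
    {
        unsigned int lsize = (config2 >> 20) & 15;
        unsigned int linesz = lsize ? 2u << lsize : 0;
        unsigned int sets = 64u << ((config2 >> 24) & 15);
        unsigned int ways = 1 + ((config2 >> 16) & 15);

        printf("victim cache: %u sets x %u ways x %u byte lines = %u KiB\n",
               sets, ways, linesz, (sets * ways * linesz) >> 10);
    }

    int main(void)
    {
        /* Hypothetical Config2 value: sets code 4, line code 5, ways code 15. */
        decode_vcache((4u << 24) | (5u << 20) | (15u << 16));  /* 1024 x 16 x 64 = 1024 KiB */
        return 0;
    }
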
diff --git a/arch/mips/mm/cache.c b/arch/mips/mm/cache.c
index 3f159caf6dbc..bf04c6c479a4 100644
--- a/arch/mips/mm/cache.c
+++ b/arch/mips/mm/cache.c
@@ -16,6 +16,7 @@
#include <linux/mm.h>
#include <asm/cacheflush.h>
+#include <asm/highmem.h>
#include <asm/processor.h>
#include <asm/cpu.h>
#include <asm/cpu-features.h>
@@ -83,8 +84,6 @@ void __flush_dcache_page(struct page *page)
struct address_space *mapping = page_mapping(page);
unsigned long addr;
- if (PageHighMem(page))
- return;
if (mapping && !mapping_mapped(mapping)) {
SetPageDcacheDirty(page);
return;
@@ -95,8 +94,15 @@ void __flush_dcache_page(struct page *page)
* case is for exec env/arg pages and those are %99 certainly going to
* get faulted into the tlb (and thus flushed) anyways.
*/
- addr = (unsigned long) page_address(page);
+ if (PageHighMem(page))
+ addr = (unsigned long)kmap_atomic(page);
+ else
+ addr = (unsigned long)page_address(page);
+
flush_data_cache_page(addr);
+
+ if (PageHighMem(page))
+ __kunmap_atomic((void *)addr);
}
EXPORT_SYMBOL(__flush_dcache_page);
@@ -119,33 +125,28 @@ void __flush_anon_page(struct page *page, unsigned long vmaddr)
EXPORT_SYMBOL(__flush_anon_page);
-void __flush_icache_page(struct vm_area_struct *vma, struct page *page)
-{
- unsigned long addr;
-
- if (PageHighMem(page))
- return;
-
- addr = (unsigned long) page_address(page);
- flush_data_cache_page(addr);
-}
-EXPORT_SYMBOL_GPL(__flush_icache_page);
-
-void __update_cache(struct vm_area_struct *vma, unsigned long address,
- pte_t pte)
+void __update_cache(unsigned long address, pte_t pte)
{
struct page *page;
unsigned long pfn, addr;
- int exec = (vma->vm_flags & VM_EXEC) && !cpu_has_ic_fills_f_dc;
+ int exec = !pte_no_exec(pte) && !cpu_has_ic_fills_f_dc;
pfn = pte_pfn(pte);
if (unlikely(!pfn_valid(pfn)))
return;
page = pfn_to_page(pfn);
- if (page_mapping(page) && Page_dcache_dirty(page)) {
- addr = (unsigned long) page_address(page);
+ if (Page_dcache_dirty(page)) {
+ if (PageHighMem(page))
+ addr = (unsigned long)kmap_atomic(page);
+ else
+ addr = (unsigned long)page_address(page);
+
if (exec || pages_do_alias(addr, address & PAGE_MASK))
flush_data_cache_page(addr);
+
+ if (PageHighMem(page))
+ __kunmap_atomic((void *)addr);
+
ClearPageDcacheDirty(page);
}
}
diff --git a/arch/mips/mm/dma-default.c b/arch/mips/mm/dma-default.c
index 730d394ce5f0..cb557d28cb21 100644
--- a/arch/mips/mm/dma-default.c
+++ b/arch/mips/mm/dma-default.c
@@ -88,19 +88,20 @@ static gfp_t massage_gfp_flags(const struct device *dev, gfp_t gfp)
else
#endif
#if defined(CONFIG_ZONE_DMA32) && defined(CONFIG_ZONE_DMA)
- if (dev->coherent_dma_mask < DMA_BIT_MASK(32))
+ if (dev == NULL || dev->coherent_dma_mask < DMA_BIT_MASK(32))
dma_flag = __GFP_DMA;
else if (dev->coherent_dma_mask < DMA_BIT_MASK(64))
dma_flag = __GFP_DMA32;
else
#endif
#if defined(CONFIG_ZONE_DMA32) && !defined(CONFIG_ZONE_DMA)
- if (dev->coherent_dma_mask < DMA_BIT_MASK(64))
+ if (dev == NULL || dev->coherent_dma_mask < DMA_BIT_MASK(64))
dma_flag = __GFP_DMA32;
else
#endif
#if defined(CONFIG_ZONE_DMA) && !defined(CONFIG_ZONE_DMA32)
- if (dev->coherent_dma_mask < DMA_BIT_MASK(sizeof(phys_addr_t) * 8))
+ if (dev == NULL ||
+ dev->coherent_dma_mask < DMA_BIT_MASK(sizeof(phys_addr_t) * 8))
dma_flag = __GFP_DMA;
else
#endif
diff --git a/arch/mips/mm/init.c b/arch/mips/mm/init.c
index 7e5fa0938c21..9b58eb5fd0d5 100644
--- a/arch/mips/mm/init.c
+++ b/arch/mips/mm/init.c
@@ -98,8 +98,10 @@ static void *__kmap_pgprot(struct page *page, unsigned long addr, pgprot_t prot)
idx += in_interrupt() ? FIX_N_COLOURS : 0;
vaddr = __fix_to_virt(FIX_CMAP_END - idx);
pte = mk_pte(page, prot);
-#if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
+#if defined(CONFIG_XPA)
entrylo = pte_to_entrylo(pte.pte_high);
+#elif defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
+ entrylo = pte.pte_high;
#else
entrylo = pte_to_entrylo(pte_val(pte));
#endif
@@ -110,9 +112,11 @@ static void *__kmap_pgprot(struct page *page, unsigned long addr, pgprot_t prot)
write_c0_entrylo0(entrylo);
write_c0_entrylo1(entrylo);
#ifdef CONFIG_XPA
- entrylo = (pte.pte_low & _PFNX_MASK);
- writex_c0_entrylo0(entrylo);
- writex_c0_entrylo1(entrylo);
+ if (cpu_has_xpa) {
+ entrylo = (pte.pte_low & _PFNX_MASK);
+ writex_c0_entrylo0(entrylo);
+ writex_c0_entrylo1(entrylo);
+ }
#endif
tlbidx = read_c0_wired();
write_c0_wired(tlbidx + 1);
@@ -196,7 +200,7 @@ void copy_to_user_page(struct vm_area_struct *vma,
if (cpu_has_dc_aliases)
SetPageDcacheDirty(page);
}
- if ((vma->vm_flags & VM_EXEC) && !cpu_has_ic_fills_f_dc)
+ if (vma->vm_flags & VM_EXEC)
flush_cache_page(vma, vaddr, page_to_pfn(page));
}
diff --git a/arch/mips/mm/page.c b/arch/mips/mm/page.c
index 885d73ffd6fb..c41953ca6605 100644
--- a/arch/mips/mm/page.c
+++ b/arch/mips/mm/page.c
@@ -188,6 +188,15 @@ static void set_prefetch_parameters(void)
}
break;
+ case CPU_LOONGSON3:
+ /* Loongson-3 only supports Pref_Load/Pref_Store. */
+ pref_bias_clear_store = 128;
+ pref_bias_copy_load = 128;
+ pref_bias_copy_store = 128;
+ pref_src_mode = Pref_Load;
+ pref_dst_mode = Pref_Store;
+ break;
+
default:
pref_bias_clear_store = 128;
pref_bias_copy_load = 256;
diff --git a/arch/mips/mm/sc-mips.c b/arch/mips/mm/sc-mips.c
index 91dec32c77b7..286a4d5a1884 100644
--- a/arch/mips/mm/sc-mips.c
+++ b/arch/mips/mm/sc-mips.c
@@ -141,6 +141,7 @@ static inline int mips_sc_is_activated(struct cpuinfo_mips *c)
case CPU_P5600:
case CPU_BMIPS5000:
case CPU_QEMU_GENERIC:
+ case CPU_P6600:
if (config2 & (1 << 12))
return 0;
}
diff --git a/arch/mips/mm/tlb-r3k.c b/arch/mips/mm/tlb-r3k.c
index b4f366f7c0f5..1290b995695d 100644
--- a/arch/mips/mm/tlb-r3k.c
+++ b/arch/mips/mm/tlb-r3k.c
@@ -43,7 +43,7 @@ static void local_flush_tlb_from(int entry)
{
unsigned long old_ctx;
- old_ctx = read_c0_entryhi() & ASID_MASK;
+ old_ctx = read_c0_entryhi() & cpu_asid_mask(&current_cpu_data);
write_c0_entrylo0(0);
while (entry < current_cpu_data.tlbsize) {
write_c0_index(entry << 8);
@@ -81,6 +81,7 @@ void local_flush_tlb_mm(struct mm_struct *mm)
void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
unsigned long end)
{
+ unsigned long asid_mask = cpu_asid_mask(&current_cpu_data);
struct mm_struct *mm = vma->vm_mm;
int cpu = smp_processor_id();
@@ -89,13 +90,13 @@ void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
#ifdef DEBUG_TLB
printk("[tlbrange<%lu,0x%08lx,0x%08lx>]",
- cpu_context(cpu, mm) & ASID_MASK, start, end);
+ cpu_context(cpu, mm) & asid_mask, start, end);
#endif
local_irq_save(flags);
size = (end - start + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
if (size <= current_cpu_data.tlbsize) {
- int oldpid = read_c0_entryhi() & ASID_MASK;
- int newpid = cpu_context(cpu, mm) & ASID_MASK;
+ int oldpid = read_c0_entryhi() & asid_mask;
+ int newpid = cpu_context(cpu, mm) & asid_mask;
start &= PAGE_MASK;
end += PAGE_SIZE - 1;
@@ -159,6 +160,7 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
{
+ unsigned long asid_mask = cpu_asid_mask(&current_cpu_data);
int cpu = smp_processor_id();
if (cpu_context(cpu, vma->vm_mm) != 0) {
@@ -168,10 +170,10 @@ void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
#ifdef DEBUG_TLB
printk("[tlbpage<%lu,0x%08lx>]", cpu_context(cpu, vma->vm_mm), page);
#endif
- newpid = cpu_context(cpu, vma->vm_mm) & ASID_MASK;
+ newpid = cpu_context(cpu, vma->vm_mm) & asid_mask;
page &= PAGE_MASK;
local_irq_save(flags);
- oldpid = read_c0_entryhi() & ASID_MASK;
+ oldpid = read_c0_entryhi() & asid_mask;
write_c0_entryhi(page | newpid);
BARRIER;
tlb_probe();
@@ -190,6 +192,7 @@ finish:
void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte)
{
+ unsigned long asid_mask = cpu_asid_mask(&current_cpu_data);
unsigned long flags;
int idx, pid;
@@ -199,10 +202,10 @@ void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte)
if (current->active_mm != vma->vm_mm)
return;
- pid = read_c0_entryhi() & ASID_MASK;
+ pid = read_c0_entryhi() & asid_mask;
#ifdef DEBUG_TLB
- if ((pid != (cpu_context(cpu, vma->vm_mm) & ASID_MASK)) || (cpu_context(cpu, vma->vm_mm) == 0)) {
+ if ((pid != (cpu_context(cpu, vma->vm_mm) & asid_mask)) || (cpu_context(cpu, vma->vm_mm) == 0)) {
printk("update_mmu_cache: Wheee, bogus tlbpid mmpid=%lu tlbpid=%d\n",
(cpu_context(cpu, vma->vm_mm)), pid);
}
@@ -228,6 +231,7 @@ void __update_tlb(struct vm_area_struct *vma, unsigned long address, pte_t pte)
void add_wired_entry(unsigned long entrylo0, unsigned long entrylo1,
unsigned long entryhi, unsigned long pagemask)
{
+ unsigned long asid_mask = cpu_asid_mask(&current_cpu_data);
unsigned long flags;
unsigned long old_ctx;
static unsigned long wired = 0;
@@ -243,7 +247,7 @@ void add_wired_entry(unsigned long entrylo0, unsigned long entrylo1,
local_irq_save(flags);
/* Save old context and create impossible VPN2 value */
- old_ctx = read_c0_entryhi() & ASID_MASK;
+ old_ctx = read_c0_entryhi() & asid_mask;
old_pagemask = read_c0_pagemask();
w = read_c0_wired();
write_c0_wired(w + 1);
@@ -266,7 +270,7 @@ void add_wired_entry(unsigned long entrylo0, unsigned long entrylo1,
#endif
local_irq_save(flags);
- old_ctx = read_c0_entryhi() & ASID_MASK;
+ old_ctx = read_c0_entryhi() & asid_mask;
write_c0_entrylo0(entrylo0);
write_c0_entryhi(entryhi);
write_c0_index(wired);
diff --git a/arch/mips/mm/tlb-r4k.c b/arch/mips/mm/tlb-r4k.c
index c17d7627f872..e8b335c16295 100644
--- a/arch/mips/mm/tlb-r4k.c
+++ b/arch/mips/mm/tlb-r4k.c
@@ -28,25 +28,28 @@
extern void build_tlb_refill_handler(void);
/*
- * LOONGSON2/3 has a 4 entry itlb which is a subset of dtlb,
- * unfortunately, itlb is not totally transparent to software.
+ * LOONGSON-2 has a 4 entry itlb which is a subset of jtlb, LOONGSON-3 has
+ * a 4 entry itlb and a 4 entry dtlb which are subsets of jtlb. Unfortunately,
+ * itlb/dtlb are not totally transparent to software.
*/
-static inline void flush_itlb(void)
+static inline void flush_micro_tlb(void)
{
switch (current_cpu_type()) {
case CPU_LOONGSON2:
+ write_c0_diag(LOONGSON_DIAG_ITLB);
+ break;
case CPU_LOONGSON3:
- write_c0_diag(4);
+ write_c0_diag(LOONGSON_DIAG_ITLB | LOONGSON_DIAG_DTLB);
break;
default:
break;
}
}
-static inline void flush_itlb_vm(struct vm_area_struct *vma)
+static inline void flush_micro_tlb_vm(struct vm_area_struct *vma)
{
if (vma->vm_flags & VM_EXEC)
- flush_itlb();
+ flush_micro_tlb();
}
void local_flush_tlb_all(void)
@@ -93,7 +96,7 @@ void local_flush_tlb_all(void)
tlbw_use_hazard();
write_c0_entryhi(old_ctx);
htw_start();
- flush_itlb();
+ flush_micro_tlb();
local_irq_restore(flags);
}
EXPORT_SYMBOL(local_flush_tlb_all);
@@ -159,7 +162,7 @@ void local_flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
} else {
drop_mmu_context(mm, cpu);
}
- flush_itlb();
+ flush_micro_tlb();
local_irq_restore(flags);
}
}
@@ -205,7 +208,7 @@ void local_flush_tlb_kernel_range(unsigned long start, unsigned long end)
} else {
local_flush_tlb_all();
}
- flush_itlb();
+ flush_micro_tlb();
local_irq_restore(flags);
}
@@ -240,7 +243,7 @@ void local_flush_tlb_page(struct vm_area_struct *vma, unsigned long page)
finish:
write_c0_entryhi(oldpid);
htw_start();
- flush_itlb_vm(vma);
+ flush_micro_tlb_vm(vma);
local_irq_restore(flags);
}
}
@@ -274,7 +277,7 @@ void local_flush_tlb_one(unsigned long page)
}
write_c0_entryhi(oldpid);
htw_start();
- flush_itlb();
+ flush_micro_tlb();
local_irq_restore(flags);
}
@@ -301,7 +304,7 @@ void __update_tlb(struct vm_area_struct * vma, unsigned long address, pte_t pte)
local_irq_save(flags);
htw_stop();
- pid = read_c0_entryhi() & ASID_MASK;
+ pid = read_c0_entryhi() & cpu_asid_mask(&current_cpu_data);
address &= (PAGE_MASK << 1);
write_c0_entryhi(address | pid);
pgdp = pgd_offset(vma->vm_mm, address);
@@ -336,10 +339,12 @@ void __update_tlb(struct vm_area_struct * vma, unsigned long address, pte_t pte)
#if defined(CONFIG_PHYS_ADDR_T_64BIT) && defined(CONFIG_CPU_MIPS32)
#ifdef CONFIG_XPA
write_c0_entrylo0(pte_to_entrylo(ptep->pte_high));
- writex_c0_entrylo0(ptep->pte_low & _PFNX_MASK);
+ if (cpu_has_xpa)
+ writex_c0_entrylo0(ptep->pte_low & _PFNX_MASK);
ptep++;
write_c0_entrylo1(pte_to_entrylo(ptep->pte_high));
- writex_c0_entrylo1(ptep->pte_low & _PFNX_MASK);
+ if (cpu_has_xpa)
+ writex_c0_entrylo1(ptep->pte_low & _PFNX_MASK);
#else
write_c0_entrylo0(ptep->pte_high);
ptep++;
@@ -357,7 +362,7 @@ void __update_tlb(struct vm_area_struct * vma, unsigned long address, pte_t pte)
}
tlbw_use_hazard();
htw_start();
- flush_itlb_vm(vma);
+ flush_micro_tlb_vm(vma);
local_irq_restore(flags);
}
@@ -400,19 +405,20 @@ void add_wired_entry(unsigned long entrylo0, unsigned long entrylo1,
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-int __init has_transparent_hugepage(void)
+int has_transparent_hugepage(void)
{
- unsigned int mask;
- unsigned long flags;
+ static unsigned int mask = -1;
- local_irq_save(flags);
- write_c0_pagemask(PM_HUGE_MASK);
- back_to_back_c0_hazard();
- mask = read_c0_pagemask();
- write_c0_pagemask(PM_DEFAULT_MASK);
-
- local_irq_restore(flags);
+ if (mask == -1) { /* first call comes during __init */
+ unsigned long flags;
+ local_irq_save(flags);
+ write_c0_pagemask(PM_HUGE_MASK);
+ back_to_back_c0_hazard();
+ mask = read_c0_pagemask();
+ write_c0_pagemask(PM_DEFAULT_MASK);
+ local_irq_restore(flags);
+ }
return mask == PM_HUGE_MASK;
}
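
has_transparent_hugepage() loses its __init annotation but still only performs the privileged PageMask probe once, on the first call, caching the answer in a static variable. The pattern in miniature, with the CP0 register dance replaced by a stand-in probe:

    #include <stdio.h>

    static unsigned int expensive_probe(void)
    {
        puts("probing hardware...");        /* stands in for the PageMask write/read */
        return 42;
    }

    static int has_feature(void)
    {
        static unsigned int mask = -1;      /* -1 means "not probed yet" */

        if (mask == -1)                     /* only the first call does the real work */
            mask = expensive_probe();
        return mask == 42;
    }

    int main(void)
    {
        printf("%d %d\n", has_feature(), has_feature());  /* probes only once */
        return 0;
    }
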
diff --git a/arch/mips/mm/tlb-r8k.c b/arch/mips/mm/tlb-r8k.c
index 138a2ec7cc6b..e86e2e55ad3e 100644
--- a/arch/mips/mm/tlb-r8k.c
+++ b/arch/mips/mm/tlb-r8k.c
@@ -194,7 +194,7 @@ void __update_tlb(struct vm_area_struct * vma, unsigned long address, pte_t pte)
if (current->active_mm != vma->vm_mm)
return;
- pid = read_c0_entryhi() & ASID_MASK;
+ pid = read_c0_entryhi() & cpu_asid_mask(&current_cpu_data);
local_irq_save(flags);
address &= PAGE_MASK;
diff --git a/arch/mips/mm/tlbex.c b/arch/mips/mm/tlbex.c
index 84c6e3fda84a..4004b659ce50 100644
--- a/arch/mips/mm/tlbex.c
+++ b/arch/mips/mm/tlbex.c
@@ -234,20 +234,16 @@ static void output_pgtable_bits_defines(void)
pr_debug("\n");
pr_define("_PAGE_PRESENT_SHIFT %d\n", _PAGE_PRESENT_SHIFT);
- pr_define("_PAGE_READ_SHIFT %d\n", _PAGE_READ_SHIFT);
+ pr_define("_PAGE_NO_READ_SHIFT %d\n", _PAGE_NO_READ_SHIFT);
pr_define("_PAGE_WRITE_SHIFT %d\n", _PAGE_WRITE_SHIFT);
pr_define("_PAGE_ACCESSED_SHIFT %d\n", _PAGE_ACCESSED_SHIFT);
pr_define("_PAGE_MODIFIED_SHIFT %d\n", _PAGE_MODIFIED_SHIFT);
#ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
pr_define("_PAGE_HUGE_SHIFT %d\n", _PAGE_HUGE_SHIFT);
#endif
-#if defined(CONFIG_CPU_MIPSR2) || defined(CONFIG_CPU_MIPSR6)
- if (cpu_has_rixi) {
#ifdef _PAGE_NO_EXEC_SHIFT
+ if (cpu_has_rixi)
pr_define("_PAGE_NO_EXEC_SHIFT %d\n", _PAGE_NO_EXEC_SHIFT);
- pr_define("_PAGE_NO_READ_SHIFT %d\n", _PAGE_NO_READ_SHIFT);
-#endif
- }
#endif
pr_define("_PAGE_GLOBAL_SHIFT %d\n", _PAGE_GLOBAL_SHIFT);
pr_define("_PAGE_VALID_SHIFT %d\n", _PAGE_VALID_SHIFT);
@@ -284,7 +280,12 @@ static inline void dump_handler(const char *symbol, const u32 *handler, int coun
#define C0_ENTRYLO1 3, 0
#define C0_CONTEXT 4, 0
#define C0_PAGEMASK 5, 0
+#define C0_PWBASE 5, 5
+#define C0_PWFIELD 5, 6
+#define C0_PWSIZE 5, 7
+#define C0_PWCTL 6, 6
#define C0_BADVADDR 8, 0
+#define C0_PGD 9, 7
#define C0_ENTRYHI 10, 0
#define C0_EPC 14, 0
#define C0_XCONTEXT 20, 0
@@ -630,6 +631,11 @@ static void build_tlb_write_entry(u32 **p, struct uasm_label **l,
static __maybe_unused void build_convert_pte_to_entrylo(u32 **p,
unsigned int reg)
{
+ if (_PAGE_GLOBAL_SHIFT == 0) {
+ /* pte_t is already in EntryLo format */
+ return;
+ }
+
if (cpu_has_rixi && _PAGE_NO_EXEC) {
if (fill_includes_sw_bits) {
UASM_i_ROTR(p, reg, reg, ilog2(_PAGE_GLOBAL));
@@ -808,7 +814,10 @@ build_get_pmde64(u32 **p, struct uasm_label **l, struct uasm_reloc **r,
if (pgd_reg != -1) {
/* pgd is in pgd_reg */
- UASM_i_MFC0(p, ptr, c0_kscratch(), pgd_reg);
+ if (cpu_has_ldpte)
+ UASM_i_MFC0(p, ptr, C0_PWBASE);
+ else
+ UASM_i_MFC0(p, ptr, c0_kscratch(), pgd_reg);
} else {
#if defined(CONFIG_MIPS_PGD_C0_CONTEXT)
/*
@@ -1007,39 +1016,40 @@ static void build_get_ptep(u32 **p, unsigned int tmp, unsigned int ptr)
static void build_update_entries(u32 **p, unsigned int tmp, unsigned int ptep)
{
- /*
- * 64bit address support (36bit on a 32bit CPU) in a 32bit
- * Kernel is a special case. Only a few CPUs use it.
- */
- if (config_enabled(CONFIG_PHYS_ADDR_T_64BIT) && !cpu_has_64bits) {
- int pte_off_even = sizeof(pte_t) / 2;
- int pte_off_odd = pte_off_even + sizeof(pte_t);
-#ifdef CONFIG_XPA
- const int scratch = 1; /* Our extra working register */
+ int pte_off_even = 0;
+ int pte_off_odd = sizeof(pte_t);
- uasm_i_addu(p, scratch, 0, ptep);
+#if defined(CONFIG_CPU_MIPS32) && defined(CONFIG_PHYS_ADDR_T_64BIT)
+ /* The low 32 bits of EntryLo is stored in pte_high */
+ pte_off_even += offsetof(pte_t, pte_high);
+ pte_off_odd += offsetof(pte_t, pte_high);
#endif
+
+ if (config_enabled(CONFIG_XPA)) {
uasm_i_lw(p, tmp, pte_off_even, ptep); /* even pte */
- uasm_i_lw(p, ptep, pte_off_odd, ptep); /* odd pte */
UASM_i_ROTR(p, tmp, tmp, ilog2(_PAGE_GLOBAL));
- UASM_i_ROTR(p, ptep, ptep, ilog2(_PAGE_GLOBAL));
UASM_i_MTC0(p, tmp, C0_ENTRYLO0);
- UASM_i_MTC0(p, ptep, C0_ENTRYLO1);
-#ifdef CONFIG_XPA
- uasm_i_lw(p, tmp, 0, scratch);
- uasm_i_lw(p, ptep, sizeof(pte_t), scratch);
- uasm_i_lui(p, scratch, 0xff);
- uasm_i_ori(p, scratch, scratch, 0xffff);
- uasm_i_and(p, tmp, scratch, tmp);
- uasm_i_and(p, ptep, scratch, ptep);
- uasm_i_mthc0(p, tmp, C0_ENTRYLO0);
- uasm_i_mthc0(p, ptep, C0_ENTRYLO1);
-#endif
+
+ if (cpu_has_xpa && !mips_xpa_disabled) {
+ uasm_i_lw(p, tmp, 0, ptep);
+ uasm_i_ext(p, tmp, tmp, 0, 24);
+ uasm_i_mthc0(p, tmp, C0_ENTRYLO0);
+ }
+
+ uasm_i_lw(p, tmp, pte_off_odd, ptep); /* odd pte */
+ UASM_i_ROTR(p, tmp, tmp, ilog2(_PAGE_GLOBAL));
+ UASM_i_MTC0(p, tmp, C0_ENTRYLO1);
+
+ if (cpu_has_xpa && !mips_xpa_disabled) {
+ uasm_i_lw(p, tmp, sizeof(pte_t), ptep);
+ uasm_i_ext(p, tmp, tmp, 0, 24);
+ uasm_i_mthc0(p, tmp, C0_ENTRYLO1);
+ }
return;
}
- UASM_i_LW(p, tmp, 0, ptep); /* get even pte */
- UASM_i_LW(p, ptep, sizeof(pte_t), ptep); /* get odd pte */
+ UASM_i_LW(p, tmp, pte_off_even, ptep); /* get even pte */
+ UASM_i_LW(p, ptep, pte_off_odd, ptep); /* get odd pte */
if (r45k_bvahwbug())
build_tlb_probe_entry(p);
build_convert_pte_to_entrylo(p, tmp);
@@ -1421,6 +1431,108 @@ static void build_r4000_tlb_refill_handler(void)
dump_handler("r4000_tlb_refill", (u32 *)ebase, 64);
}
+static void setup_pw(void)
+{
+ unsigned long pgd_i, pgd_w;
+#ifndef __PAGETABLE_PMD_FOLDED
+ unsigned long pmd_i, pmd_w;
+#endif
+ unsigned long pt_i, pt_w;
+ unsigned long pte_i, pte_w;
+#ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
+ unsigned long psn;
+
+ psn = ilog2(_PAGE_HUGE); /* bit used to indicate huge page */
+#endif
+ pgd_i = PGDIR_SHIFT; /* 1st level PGD */
+#ifndef __PAGETABLE_PMD_FOLDED
+ pgd_w = PGDIR_SHIFT - PMD_SHIFT + PGD_ORDER;
+
+ pmd_i = PMD_SHIFT; /* 2nd level PMD */
+ pmd_w = PMD_SHIFT - PAGE_SHIFT;
+#else
+ pgd_w = PGDIR_SHIFT - PAGE_SHIFT + PGD_ORDER;
+#endif
+
+ pt_i = PAGE_SHIFT; /* 3rd level PTE */
+ pt_w = PAGE_SHIFT - 3;
+
+ pte_i = ilog2(_PAGE_GLOBAL);
+ pte_w = 0;
+
+#ifndef __PAGETABLE_PMD_FOLDED
+ write_c0_pwfield(pgd_i << 24 | pmd_i << 12 | pt_i << 6 | pte_i);
+ write_c0_pwsize(1 << 30 | pgd_w << 24 | pmd_w << 12 | pt_w << 6 | pte_w);
+#else
+ write_c0_pwfield(pgd_i << 24 | pt_i << 6 | pte_i);
+ write_c0_pwsize(1 << 30 | pgd_w << 24 | pt_w << 6 | pte_w);
+#endif
+
+#ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
+ write_c0_pwctl(1 << 6 | psn);
+#endif
+ write_c0_kpgd(swapper_pg_dir);
+ kscratch_used_mask |= (1 << 7); /* KScratch6 is used for KPGD */
+}
+
+static void build_loongson3_tlb_refill_handler(void)
+{
+ u32 *p = tlb_handler;
+ struct uasm_label *l = labels;
+ struct uasm_reloc *r = relocs;
+
+ memset(labels, 0, sizeof(labels));
+ memset(relocs, 0, sizeof(relocs));
+ memset(tlb_handler, 0, sizeof(tlb_handler));
+
+ if (check_for_high_segbits) {
+ uasm_i_dmfc0(&p, K0, C0_BADVADDR);
+ uasm_i_dsrl_safe(&p, K1, K0, PGDIR_SHIFT + PGD_ORDER + PAGE_SHIFT - 3);
+ uasm_il_beqz(&p, &r, K1, label_vmalloc);
+ uasm_i_nop(&p);
+
+ uasm_il_bgez(&p, &r, K0, label_large_segbits_fault);
+ uasm_i_nop(&p);
+ uasm_l_vmalloc(&l, p);
+ }
+
+ uasm_i_dmfc0(&p, K1, C0_PGD);
+
+ uasm_i_lddir(&p, K0, K1, 3); /* global page dir */
+#ifndef __PAGETABLE_PMD_FOLDED
+ uasm_i_lddir(&p, K1, K0, 1); /* middle page dir */
+#endif
+ uasm_i_ldpte(&p, K1, 0); /* even */
+ uasm_i_ldpte(&p, K1, 1); /* odd */
+ uasm_i_tlbwr(&p);
+
+ /* restore page mask */
+ if (PM_DEFAULT_MASK >> 16) {
+ uasm_i_lui(&p, K0, PM_DEFAULT_MASK >> 16);
+ uasm_i_ori(&p, K0, K0, PM_DEFAULT_MASK & 0xffff);
+ uasm_i_mtc0(&p, K0, C0_PAGEMASK);
+ } else if (PM_DEFAULT_MASK) {
+ uasm_i_ori(&p, K0, 0, PM_DEFAULT_MASK);
+ uasm_i_mtc0(&p, K0, C0_PAGEMASK);
+ } else {
+ uasm_i_mtc0(&p, 0, C0_PAGEMASK);
+ }
+
+ uasm_i_eret(&p);
+
+ if (check_for_high_segbits) {
+ uasm_l_large_segbits_fault(&l, p);
+ UASM_i_LA(&p, K1, (unsigned long)tlb_do_page_fault_0);
+ uasm_i_jr(&p, K1);
+ uasm_i_nop(&p);
+ }
+
+ uasm_resolve_relocs(relocs, labels);
+ memcpy((void *)(ebase + 0x80), tlb_handler, 0x80);
+ local_flush_icache_range(ebase + 0x80, ebase + 0x100);
+ dump_handler("loongson3_tlb_refill", (u32 *)(ebase + 0x80), 32);
+}
+
extern u32 handle_tlbl[], handle_tlbl_end[];
extern u32 handle_tlbs[], handle_tlbs_end[];
extern u32 handle_tlbm[], handle_tlbm_end[];
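
setup_pw() above programs the hardware page walker with the bit index of each page-table level (PWField) and the width of each index (PWSize), packed as GDI/GDW << 24, MDI/MDW << 12, PTI/PTW << 6 and the PTE field in the low bits. A stand-alone packing of the same registers for an assumed 3-level, 16KB-page, 8-byte-PTE layout; the shift values below are illustrative, not taken from any particular kernel configuration:

    #include <stdio.h>

    int main(void)
    {
        /* Assumed layout: 16KB pages, 8-byte PTEs, 3 levels, PGD_ORDER = 0. */
        unsigned long page_shift = 14;
        unsigned long pt_i  = page_shift,   pt_w  = page_shift - 3;
        unsigned long pmd_i = pt_i + pt_w,  pmd_w = pt_w;
        unsigned long pgd_i = pmd_i + pt_w, pgd_w = pt_w;
        unsigned long pte_i = 0;            /* would be ilog2(_PAGE_GLOBAL) in the kernel */
        unsigned long pte_w = 0;

        unsigned long pwfield = pgd_i << 24 | pmd_i << 12 | pt_i << 6 | pte_i;
        unsigned long pwsize  = 1ul << 30 | pgd_w << 24 | pmd_w << 12 | pt_w << 6 | pte_w;

        printf("PWField = 0x%08lx\nPWSize  = 0x%08lx\n", pwfield, pwsize);
        return 0;
    }
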
@@ -1468,7 +1580,10 @@ static void build_setup_pgd(void)
} else {
/* PGD in c0_KScratch */
uasm_i_jr(&p, 31);
- UASM_i_MTC0(&p, a0, c0_kscratch(), pgd_reg);
+ if (cpu_has_ldpte)
+ UASM_i_MTC0(&p, a0, C0_PWBASE);
+ else
+ UASM_i_MTC0(&p, a0, c0_kscratch(), pgd_reg);
}
#else
#ifdef CONFIG_SMP
@@ -1523,19 +1638,19 @@ iPTE_LW(u32 **p, unsigned int pte, unsigned int ptr)
static void
iPTE_SW(u32 **p, struct uasm_reloc **r, unsigned int pte, unsigned int ptr,
- unsigned int mode)
+ unsigned int mode, unsigned int scratch)
{
-#ifdef CONFIG_PHYS_ADDR_T_64BIT
unsigned int hwmode = mode & (_PAGE_VALID | _PAGE_DIRTY);
+ unsigned int swmode = mode & ~hwmode;
- if (!cpu_has_64bits) {
- const int scratch = 1; /* Our extra working register */
-
- uasm_i_lui(p, scratch, (mode >> 16));
+ if (config_enabled(CONFIG_XPA) && !cpu_has_64bits) {
+ uasm_i_lui(p, scratch, swmode >> 16);
uasm_i_or(p, pte, pte, scratch);
- } else
-#endif
- uasm_i_ori(p, pte, pte, mode);
+ BUG_ON(swmode & 0xffff);
+ } else {
+ uasm_i_ori(p, pte, pte, mode);
+ }
+
#ifdef CONFIG_SMP
# ifdef CONFIG_PHYS_ADDR_T_64BIT
if (cpu_has_64bits)
@@ -1554,6 +1669,7 @@ iPTE_SW(u32 **p, struct uasm_reloc **r, unsigned int pte, unsigned int ptr,
/* no uasm_i_nop needed */
uasm_i_ll(p, pte, sizeof(pte_t) / 2, ptr);
uasm_i_ori(p, pte, pte, hwmode);
+ BUG_ON(hwmode & ~0xffff);
uasm_i_sc(p, pte, sizeof(pte_t) / 2, ptr);
uasm_il_beqz(p, r, pte, label_smp_pgtable_change);
/* no uasm_i_nop needed */
@@ -1575,6 +1691,7 @@ iPTE_SW(u32 **p, struct uasm_reloc **r, unsigned int pte, unsigned int ptr,
if (!cpu_has_64bits) {
uasm_i_lw(p, pte, sizeof(pte_t) / 2, ptr);
uasm_i_ori(p, pte, pte, hwmode);
+ BUG_ON(hwmode & ~0xffff);
uasm_i_sw(p, pte, sizeof(pte_t) / 2, ptr);
uasm_i_lw(p, pte, 0, ptr);
}
@@ -1615,9 +1732,8 @@ build_pte_present(u32 **p, struct uasm_reloc **r,
cur = t;
}
uasm_i_andi(p, t, cur,
- (_PAGE_PRESENT | _PAGE_READ) >> _PAGE_PRESENT_SHIFT);
- uasm_i_xori(p, t, t,
- (_PAGE_PRESENT | _PAGE_READ) >> _PAGE_PRESENT_SHIFT);
+ (_PAGE_PRESENT | _PAGE_NO_READ) >> _PAGE_PRESENT_SHIFT);
+ uasm_i_xori(p, t, t, _PAGE_PRESENT >> _PAGE_PRESENT_SHIFT);
uasm_il_bnez(p, r, t, lid);
if (pte == t)
/* You lose the SMP race :-(*/
@@ -1628,11 +1744,11 @@ build_pte_present(u32 **p, struct uasm_reloc **r,
/* Make PTE valid, store result in PTR. */
static void
build_make_valid(u32 **p, struct uasm_reloc **r, unsigned int pte,
- unsigned int ptr)
+ unsigned int ptr, unsigned int scratch)
{
unsigned int mode = _PAGE_VALID | _PAGE_ACCESSED;
- iPTE_SW(p, r, pte, ptr, mode);
+ iPTE_SW(p, r, pte, ptr, mode, scratch);
}
/*
@@ -1668,12 +1784,12 @@ build_pte_writable(u32 **p, struct uasm_reloc **r,
*/
static void
build_make_write(u32 **p, struct uasm_reloc **r, unsigned int pte,
- unsigned int ptr)
+ unsigned int ptr, unsigned int scratch)
{
unsigned int mode = (_PAGE_ACCESSED | _PAGE_MODIFIED | _PAGE_VALID
| _PAGE_DIRTY);
- iPTE_SW(p, r, pte, ptr, mode);
+ iPTE_SW(p, r, pte, ptr, mode, scratch);
}
/*
@@ -1778,7 +1894,7 @@ static void build_r3000_tlb_load_handler(void)
build_r3000_tlbchange_handler_head(&p, K0, K1);
build_pte_present(&p, &r, K0, K1, -1, label_nopage_tlbl);
uasm_i_nop(&p); /* load delay */
- build_make_valid(&p, &r, K0, K1);
+ build_make_valid(&p, &r, K0, K1, -1);
build_r3000_tlb_reload_write(&p, &l, &r, K0, K1);
uasm_l_nopage_tlbl(&l, p);
@@ -1809,7 +1925,7 @@ static void build_r3000_tlb_store_handler(void)
build_r3000_tlbchange_handler_head(&p, K0, K1);
build_pte_writable(&p, &r, K0, K1, -1, label_nopage_tlbs);
uasm_i_nop(&p); /* load delay */
- build_make_write(&p, &r, K0, K1);
+ build_make_write(&p, &r, K0, K1, -1);
build_r3000_tlb_reload_write(&p, &l, &r, K0, K1);
uasm_l_nopage_tlbs(&l, p);
@@ -1840,7 +1956,7 @@ static void build_r3000_tlb_modify_handler(void)
build_r3000_tlbchange_handler_head(&p, K0, K1);
build_pte_modifiable(&p, &r, K0, K1, -1, label_nopage_tlbm);
uasm_i_nop(&p); /* load delay */
- build_make_write(&p, &r, K0, K1);
+ build_make_write(&p, &r, K0, K1, -1);
build_r3000_pte_reload_tlbwi(&p, K0, K1);
uasm_l_nopage_tlbm(&l, p);
@@ -2008,7 +2124,7 @@ static void build_r4000_tlb_load_handler(void)
}
uasm_l_tlbl_goaround1(&l, p);
}
- build_make_valid(&p, &r, wr.r1, wr.r2);
+ build_make_valid(&p, &r, wr.r1, wr.r2, wr.r3);
build_r4000_tlbchange_handler_tail(&p, &l, &r, wr.r1, wr.r2);
#ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
@@ -2122,7 +2238,7 @@ static void build_r4000_tlb_store_handler(void)
build_pte_writable(&p, &r, wr.r1, wr.r2, wr.r3, label_nopage_tlbs);
if (m4kc_tlbp_war())
build_tlb_probe_entry(&p);
- build_make_write(&p, &r, wr.r1, wr.r2);
+ build_make_write(&p, &r, wr.r1, wr.r2, wr.r3);
build_r4000_tlbchange_handler_tail(&p, &l, &r, wr.r1, wr.r2);
#ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
@@ -2178,7 +2294,7 @@ static void build_r4000_tlb_modify_handler(void)
if (m4kc_tlbp_war())
build_tlb_probe_entry(&p);
/* Present and writable bits set, set accessed and dirty bits. */
- build_make_write(&p, &r, wr.r1, wr.r2);
+ build_make_write(&p, &r, wr.r1, wr.r2, wr.r3);
build_r4000_tlbchange_handler_tail(&p, &l, &r, wr.r1, wr.r2);
#ifdef CONFIG_MIPS_HUGE_TLB_SUPPORT
@@ -2245,8 +2361,9 @@ static void print_htw_config(void)
(config & MIPS_PWFIELD_PTEI_MASK) >> MIPS_PWFIELD_PTEI_SHIFT);
config = read_c0_pwsize();
- pr_debug("PWSize (0x%0*lx): GDW: 0x%02lx UDW: 0x%02lx MDW: 0x%02lx PTW: 0x%02lx PTEW: 0x%02lx\n",
+ pr_debug("PWSize (0x%0*lx): PS: 0x%lx GDW: 0x%02lx UDW: 0x%02lx MDW: 0x%02lx PTW: 0x%02lx PTEW: 0x%02lx\n",
field, config,
+ (config & MIPS_PWSIZE_PS_MASK) >> MIPS_PWSIZE_PS_SHIFT,
(config & MIPS_PWSIZE_GDW_MASK) >> MIPS_PWSIZE_GDW_SHIFT,
(config & MIPS_PWSIZE_UDW_MASK) >> MIPS_PWSIZE_UDW_SHIFT,
(config & MIPS_PWSIZE_MDW_MASK) >> MIPS_PWSIZE_MDW_SHIFT,
@@ -2254,9 +2371,12 @@ static void print_htw_config(void)
(config & MIPS_PWSIZE_PTEW_MASK) >> MIPS_PWSIZE_PTEW_SHIFT);
pwctl = read_c0_pwctl();
- pr_debug("PWCtl (0x%x): PWEn: 0x%x DPH: 0x%x HugePg: 0x%x Psn: 0x%x\n",
+ pr_debug("PWCtl (0x%x): PWEn: 0x%x XK: 0x%x XS: 0x%x XU: 0x%x DPH: 0x%x HugePg: 0x%x Psn: 0x%x\n",
pwctl,
(pwctl & MIPS_PWCTL_PWEN_MASK) >> MIPS_PWCTL_PWEN_SHIFT,
+ (pwctl & MIPS_PWCTL_XK_MASK) >> MIPS_PWCTL_XK_SHIFT,
+ (pwctl & MIPS_PWCTL_XS_MASK) >> MIPS_PWCTL_XS_SHIFT,
+ (pwctl & MIPS_PWCTL_XU_MASK) >> MIPS_PWCTL_XU_SHIFT,
(pwctl & MIPS_PWCTL_DPH_MASK) >> MIPS_PWCTL_DPH_SHIFT,
(pwctl & MIPS_PWCTL_HUGEPG_MASK) >> MIPS_PWCTL_HUGEPG_SHIFT,
(pwctl & MIPS_PWCTL_PSN_MASK) >> MIPS_PWCTL_PSN_SHIFT);
@@ -2311,17 +2431,25 @@ static void config_htw_params(void)
if (CONFIG_PGTABLE_LEVELS >= 3)
pwsize |= ilog2(PTRS_PER_PMD) << MIPS_PWSIZE_MDW_SHIFT;
- /* If XPA has been enabled, PTEs are 64-bit in size. */
- if (config_enabled(CONFIG_64BITS) || (read_c0_pagegrain() & PG_ELPA))
- pwsize |= 1;
+ /* Set pointer size to size of directory pointers */
+ if (config_enabled(CONFIG_64BIT))
+ pwsize |= MIPS_PWSIZE_PS_MASK;
+ /* PTEs may be multiple pointers long (e.g. with XPA) */
+ pwsize |= ((PTE_T_LOG2 - PGD_T_LOG2) << MIPS_PWSIZE_PTEW_SHIFT)
+ & MIPS_PWSIZE_PTEW_MASK;
write_c0_pwsize(pwsize);
/* Make sure everything is set before we enable the HTW */
back_to_back_c0_hazard();
- /* Enable HTW and disable the rest of the pwctl fields */
+ /*
+ * Enable HTW (and only for XUSeg on 64-bit), and disable the rest of
+ * the pwctl fields.
+ */
config = 1 << MIPS_PWCTL_PWEN_SHIFT;
+ if (config_enabled(CONFIG_64BIT))
+ config |= MIPS_PWCTL_XU_MASK;
write_c0_pwctl(config);
pr_info("Hardware Page Table Walker enabled\n");
@@ -2394,6 +2522,9 @@ void build_tlb_refill_handler(void)
*/
static int run_once = 0;
+ if (config_enabled(CONFIG_XPA) && !cpu_has_rixi)
+ panic("Kernels supporting XPA currently require CPUs with RIXI");
+
output_pgtable_bits_defines();
check_pabits();
@@ -2437,13 +2568,18 @@ void build_tlb_refill_handler(void)
break;
default:
+ if (cpu_has_ldpte)
+ setup_pw();
+
if (!run_once) {
scratch_reg = allocate_kscratch();
build_setup_pgd();
build_r4000_tlb_load_handler();
build_r4000_tlb_store_handler();
build_r4000_tlb_modify_handler();
- if (!cpu_has_local_ebase)
+ if (cpu_has_ldpte)
+ build_loongson3_tlb_refill_handler();
+ else if (!cpu_has_local_ebase)
build_r4000_tlb_refill_handler();
flush_tlb_handlers();
run_once++;
diff --git a/arch/mips/mm/uasm-mips.c b/arch/mips/mm/uasm-mips.c
index b4a837893562..9c2220a45189 100644
--- a/arch/mips/mm/uasm-mips.c
+++ b/arch/mips/mm/uasm-mips.c
@@ -153,6 +153,8 @@ static struct insn insn_table[] = {
{ insn_xori, M(xori_op, 0, 0, 0, 0, 0), RS | RT | UIMM },
{ insn_xor, M(spec_op, 0, 0, 0, 0, xor_op), RS | RT | RD },
{ insn_yield, M(spec3_op, 0, 0, 0, 0, yield_op), RS | RD },
+ { insn_ldpte, M(lwc2_op, 0, 0, 0, ldpte_op, mult_op), RS | RD },
+ { insn_lddir, M(lwc2_op, 0, 0, 0, lddir_op, mult_op), RS | RT | RD },
{ insn_invalid, 0, 0 }
};
diff --git a/arch/mips/mm/uasm.c b/arch/mips/mm/uasm.c
index 319051c34343..ad718debc35a 100644
--- a/arch/mips/mm/uasm.c
+++ b/arch/mips/mm/uasm.c
@@ -60,6 +60,7 @@ enum opcode {
insn_sltiu, insn_sltu, insn_sra, insn_srl, insn_srlv, insn_subu,
insn_sw, insn_sync, insn_syscall, insn_tlbp, insn_tlbr, insn_tlbwi,
insn_tlbwr, insn_wait, insn_wsbh, insn_xor, insn_xori, insn_yield,
+ insn_lddir, insn_ldpte,
};
struct insn {
@@ -335,6 +336,8 @@ I_u1u2s3(_bbit0);
I_u1u2s3(_bbit1);
I_u3u1u2(_lwx)
I_u3u1u2(_ldx)
+I_u1u2(_ldpte)
+I_u2u1u3(_lddir)
#ifdef CONFIG_CPU_CAVIUM_OCTEON
#include <asm/octeon/octeon.h>
diff --git a/arch/mips/mti-malta/malta-setup.c b/arch/mips/mti-malta/malta-setup.c
index 4740c82fb97a..33d5ff5069e5 100644
--- a/arch/mips/mti-malta/malta-setup.c
+++ b/arch/mips/mti-malta/malta-setup.c
@@ -248,10 +248,15 @@ static void __init bonito_quirks_setup(void)
#endif
}
+void __init *plat_get_fdt(void)
+{
+ return (void *)__dtb_start;
+}
+
void __init plat_mem_setup(void)
{
unsigned int i;
- void *fdt = __dtb_start;
+ void *fdt = plat_get_fdt();
fdt = malta_dt_shim(fdt);
__dt_setup_arch(fdt);
diff --git a/arch/mips/mti-malta/malta-time.c b/arch/mips/mti-malta/malta-time.c
index b7bf721eabf5..7407da04f8d6 100644
--- a/arch/mips/mti-malta/malta-time.c
+++ b/arch/mips/mti-malta/malta-time.c
@@ -21,6 +21,7 @@
#include <linux/i8253.h>
#include <linux/init.h>
#include <linux/kernel_stat.h>
+#include <linux/math64.h>
#include <linux/sched.h>
#include <linux/spinlock.h>
#include <linux/interrupt.h>
@@ -72,6 +73,8 @@ static void __init estimate_frequencies(void)
{
unsigned long flags;
unsigned int count, start;
+ unsigned char secs1, secs2, ctrl;
+ int secs;
cycle_t giccount = 0, gicstart = 0;
#if defined(CONFIG_KVM_GUEST) && CONFIG_KVM_GUEST_TIMER_FREQ
@@ -81,32 +84,51 @@ static void __init estimate_frequencies(void)
local_irq_save(flags);
- /* Start counter exactly on falling edge of update flag. */
+ if (gic_present)
+ gic_start_count();
+
+ /*
+ * Read counters exactly on rising edge of update flag.
+ * This helps get an accurate reading under virtualisation.
+ */
while (CMOS_READ(RTC_REG_A) & RTC_UIP);
while (!(CMOS_READ(RTC_REG_A) & RTC_UIP));
-
- /* Initialize counters. */
start = read_c0_count();
- if (gic_present) {
- gic_start_count();
+ if (gic_present)
gicstart = gic_read_count();
- }
- /* Read counter exactly on falling edge of update flag. */
+ /* Wait for falling edge before reading RTC. */
while (CMOS_READ(RTC_REG_A) & RTC_UIP);
- while (!(CMOS_READ(RTC_REG_A) & RTC_UIP));
+ secs1 = CMOS_READ(RTC_SECONDS);
+ /* Read counters again exactly on rising edge of update flag. */
+ while (!(CMOS_READ(RTC_REG_A) & RTC_UIP));
count = read_c0_count();
if (gic_present)
giccount = gic_read_count();
+ /* Wait for falling edge before reading RTC again. */
+ while (CMOS_READ(RTC_REG_A) & RTC_UIP);
+ secs2 = CMOS_READ(RTC_SECONDS);
+
+ ctrl = CMOS_READ(RTC_CONTROL);
+
local_irq_restore(flags);
+ if (!(ctrl & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
+ secs1 = bcd2bin(secs1);
+ secs2 = bcd2bin(secs2);
+ }
+ secs = secs2 - secs1;
+ if (secs < 1)
+ secs += 60;
+
count -= start;
+ count /= secs;
mips_hpt_frequency = count;
if (gic_present) {
- giccount -= gicstart;
+ giccount = div_u64(giccount - gicstart, secs);
gic_frequency = giccount;
}
}
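
The reworked estimate_frequencies() no longer assumes exactly one RTC second elapses between the two counter reads: it samples the RTC seconds register at each edge, converts from BCD when needed, handles the minute wrap, and divides the count delta by the elapsed seconds. The arithmetic on its own, with made-up sample values and a local reimplementation of bcd2bin():

    #include <stdio.h>

    static unsigned char bcd2bin(unsigned char val)
    {
        return (val & 0x0f) + (val >> 4) * 10;
    }

    int main(void)
    {
        /* Hypothetical samples: counter and BCD RTC seconds at two update edges. */
        unsigned int start = 100000000, count = 700000000;
        unsigned char secs1 = 0x58, secs2 = 0x01;   /* 58s -> 01s, minute wrapped */
        int secs;

        secs = bcd2bin(secs2) - bcd2bin(secs1);
        if (secs < 1)
            secs += 60;                             /* crossed a minute boundary */

        printf("elapsed %ds, CPU counter ~%u Hz\n", secs, (count - start) / secs);
        return 0;
    }
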
diff --git a/arch/mips/mti-sead3/sead3-setup.c b/arch/mips/mti-sead3/sead3-setup.c
index e43f4801a245..9f2f9b2b23ce 100644
--- a/arch/mips/mti-sead3/sead3-setup.c
+++ b/arch/mips/mti-sead3/sead3-setup.c
@@ -83,6 +83,11 @@ static void __init parse_memsize_param(void)
}
}
+void __init *plat_get_fdt(void)
+{
+ return (void *)__dtb_start;
+}
+
void __init plat_mem_setup(void)
{
/* allow command line/bootloader env to override memory size in DT */
diff --git a/arch/mips/netlogic/common/reset.S b/arch/mips/netlogic/common/reset.S
index edbab9b8691f..c474981a6c0d 100644
--- a/arch/mips/netlogic/common/reset.S
+++ b/arch/mips/netlogic/common/reset.S
@@ -50,7 +50,6 @@
#include <asm/netlogic/xlp-hal/sys.h>
#include <asm/netlogic/xlp-hal/cpucontrol.h>
-#define CP0_EBASE $15
#define SYS_CPU_COHERENT_BASE CKSEG1ADDR(XLP_DEFAULT_IO_BASE) + \
XLP_IO_SYS_OFFSET(0) + XLP_IO_PCI_HDRSZ + \
SYS_CPU_NONCOHERENT_MODE * 4
@@ -92,7 +91,7 @@
* registers. On XLPII CPUs, usual cache instructions work.
*/
.macro xlp_flush_l1_dcache
- mfc0 t0, CP0_EBASE, 0
+ mfc0 t0, CP0_PRID
andi t0, t0, PRID_IMP_MASK
slt t1, t0, 0x1200
beqz t1, 15f
@@ -171,7 +170,7 @@ FEXPORT(nlm_reset_entry)
nop
1: /* Entry point on core wakeup */
- mfc0 t0, CP0_EBASE, 0 /* processor ID */
+ mfc0 t0, CP0_PRID /* processor ID */
andi t0, PRID_IMP_MASK
li t1, 0x1500 /* XLP 9xx */
beq t0, t1, 2f /* does not need to set coherent */
@@ -182,8 +181,8 @@ FEXPORT(nlm_reset_entry)
nop
/* set bit in SYS coherent register for the core */
- mfc0 t0, CP0_EBASE, 1
- mfc0 t1, CP0_EBASE, 1
+ mfc0 t0, CP0_EBASE
+ mfc0 t1, CP0_EBASE
srl t1, 5
andi t1, 0x3 /* t1 <- node */
li t2, 0x40000
@@ -232,7 +231,7 @@ EXPORT(nlm_boot_siblings)
* NOTE: All GPR contents are lost after the mtcr above!
*/
- mfc0 v0, CP0_EBASE, 1
+ mfc0 v0, CP0_EBASE
andi v0, 0x3ff /* v0 <- node/core */
/*
diff --git a/arch/mips/netlogic/common/smpboot.S b/arch/mips/netlogic/common/smpboot.S
index 805355b0bd05..f0cc4c9de2bb 100644
--- a/arch/mips/netlogic/common/smpboot.S
+++ b/arch/mips/netlogic/common/smpboot.S
@@ -48,8 +48,6 @@
#include <asm/netlogic/xlp-hal/sys.h>
#include <asm/netlogic/xlp-hal/cpucontrol.h>
-#define CP0_EBASE $15
-
.set noreorder
.set noat
.set arch=xlr /* for mfcr/mtcr, XLR is sufficient */
@@ -86,7 +84,7 @@ NESTED(nlm_boot_secondary_cpus, 16, sp)
PTR_L gp, 0(t1)
/* a0 has the processor id */
- mfc0 a0, CP0_EBASE, 1
+ mfc0 a0, CP0_EBASE
andi a0, 0x3ff /* a0 <- node/core */
PTR_LA t0, nlm_early_init_secondary
jalr t0
diff --git a/arch/mips/netlogic/xlp/nlm_hal.c b/arch/mips/netlogic/xlp/nlm_hal.c
index 80ec929747c3..25ee69489e5e 100644
--- a/arch/mips/netlogic/xlp/nlm_hal.c
+++ b/arch/mips/netlogic/xlp/nlm_hal.c
@@ -58,7 +58,7 @@ void nlm_node_init(int node)
nodep->coremask = 1; /* node 0, boot cpu */
nodep->sysbase = nlm_get_sys_regbase(node);
nodep->picbase = nlm_get_pic_regbase(node);
- nodep->ebase = read_c0_ebase() & (~((1 << 12) - 1));
+ nodep->ebase = read_c0_ebase() & MIPS_EBASE_BASE;
if (cpu_is_xlp9xx())
nodep->socbus = xlp9xx_get_socbus(node);
else
diff --git a/arch/mips/netlogic/xlr/setup.c b/arch/mips/netlogic/xlr/setup.c
index d118b9aa7647..72ceddc9a03f 100644
--- a/arch/mips/netlogic/xlr/setup.c
+++ b/arch/mips/netlogic/xlr/setup.c
@@ -168,7 +168,7 @@ static void nlm_init_node(void)
nodep = nlm_current_node();
nodep->picbase = nlm_mmio_base(NETLOGIC_IO_PIC_OFFSET);
- nodep->ebase = read_c0_ebase() & (~((1 << 12) - 1));
+ nodep->ebase = read_c0_ebase() & MIPS_EBASE_BASE;
spin_lock_init(&nodep->piclock);
}
diff --git a/arch/mips/oprofile/common.c b/arch/mips/oprofile/common.c
index 3c9ec3ddca84..2f33992f6dff 100644
--- a/arch/mips/oprofile/common.c
+++ b/arch/mips/oprofile/common.c
@@ -77,7 +77,7 @@ int __init oprofile_arch_init(struct oprofile_operations *ops)
struct op_mips_model *lmodel = NULL;
int res;
- switch (current_cpu_type()) {
+ switch (boot_cpu_type()) {
case CPU_5KC:
case CPU_M14KC:
case CPU_M14KEC:
diff --git a/arch/mips/oprofile/op_impl.h b/arch/mips/oprofile/op_impl.h
index 7c2da27ece04..a4e758a39af4 100644
--- a/arch/mips/oprofile/op_impl.h
+++ b/arch/mips/oprofile/op_impl.h
@@ -24,7 +24,7 @@ struct op_counter_config {
unsigned long unit_mask;
};
-/* Per-architecture configury and hooks. */
+/* Per-architecture configuration and hooks. */
struct op_mips_model {
void (*reg_setup) (struct op_counter_config *);
void (*cpu_setup) (void *dummy);
diff --git a/arch/mips/oprofile/op_model_mipsxx.c b/arch/mips/oprofile/op_model_mipsxx.c
index 8f988a61b7a8..45cb27469fba 100644
--- a/arch/mips/oprofile/op_model_mipsxx.c
+++ b/arch/mips/oprofile/op_model_mipsxx.c
@@ -269,11 +269,9 @@ static int mipsxx_perfcount_handler(void)
return handled;
}
-#define M_CONFIG1_PC (1 << 4)
-
static inline int __n_counters(void)
{
- if (!(read_c0_config1() & M_CONFIG1_PC))
+ if (!cpu_has_perf)
return 0;
if (!(read_c0_perfctrl0() & M_PERFCTL_MORE))
return 1;
diff --git a/arch/mips/pci/fixup-lantiq.c b/arch/mips/pci/fixup-lantiq.c
index c2ce41ea61d7..2b5427d3f35c 100644
--- a/arch/mips/pci/fixup-lantiq.c
+++ b/arch/mips/pci/fixup-lantiq.c
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2012 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2012 John Crispin <john@phrozen.org>
*/
#include <linux/of_irq.h>
diff --git a/arch/mips/pci/ops-bridge.c b/arch/mips/pci/ops-bridge.c
index 438319465cb4..57e1463fcd02 100644
--- a/arch/mips/pci/ops-bridge.c
+++ b/arch/mips/pci/ops-bridge.c
@@ -33,9 +33,9 @@ static u32 emulate_ioc3_cfg(int where, int size)
* The Bridge ASIC supports both type 0 and type 1 access. Type 1 is
* not really documented, so right now I can't write code which uses it.
* Therefore we use type 0 accesses for now even though they won't work
- * correcly for PCI-to-PCI bridges.
+ * correctly for PCI-to-PCI bridges.
*
- * The function is complicated by the ultimate brokeness of the IOC3 chip
+ * The function is complicated by the ultimate brokenness of the IOC3 chip
* which is used in SGI systems. The IOC3 can only handle 32-bit PCI
* accesses and does only decode parts of it's address space.
*/
diff --git a/arch/mips/pci/ops-lantiq.c b/arch/mips/pci/ops-lantiq.c
index e5738ee26f4f..f51e10899cc2 100644
--- a/arch/mips/pci/ops-lantiq.c
+++ b/arch/mips/pci/ops-lantiq.c
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2010 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2010 John Crispin <john@phrozen.org>
*/
#include <linux/types.h>
diff --git a/arch/mips/pci/pci-alchemy.c b/arch/mips/pci/pci-alchemy.c
index 28952637a862..c8994c156e2d 100644
--- a/arch/mips/pci/pci-alchemy.c
+++ b/arch/mips/pci/pci-alchemy.c
@@ -76,7 +76,7 @@ static void mod_wired_entry(int entry, unsigned long entrylo0,
unsigned long old_ctx;
/* Save old context and create impossible VPN2 value */
- old_ctx = read_c0_entryhi() & 0xff;
+ old_ctx = read_c0_entryhi() & MIPS_ENTRYHI_ASID;
old_pagemask = read_c0_pagemask();
write_c0_index(entry);
write_c0_pagemask(pagemask);
diff --git a/arch/mips/pci/pci-ip32.c b/arch/mips/pci/pci-ip32.c
index b1e061f7fdc7..7ae89d0c7099 100644
--- a/arch/mips/pci/pci-ip32.c
+++ b/arch/mips/pci/pci-ip32.c
@@ -116,7 +116,6 @@ static struct pci_controller mace_pci_controller = {
.pci_ops = &mace_pci_ops,
.mem_resource = &mace_pci_mem_resource,
.io_resource = &mace_pci_io_resource,
- .iommu = 0,
.mem_offset = MACE_PCI_MEM_OFFSET,
.io_offset = 0,
.io_map_base = CKSEG1ADDR(MACEPCI_LOW_IO),
diff --git a/arch/mips/pci/pci-lantiq.c b/arch/mips/pci/pci-lantiq.c
index 6a15dbd085aa..b9deab17ccf2 100644
--- a/arch/mips/pci/pci-lantiq.c
+++ b/arch/mips/pci/pci-lantiq.c
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2010 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2010 John Crispin <john@phrozen.org>
*/
#include <linux/types.h>
diff --git a/arch/mips/pci/pci-lantiq.h b/arch/mips/pci/pci-lantiq.h
index 66bf6cd6be3c..0cc71253a497 100644
--- a/arch/mips/pci/pci-lantiq.h
+++ b/arch/mips/pci/pci-lantiq.h
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2010 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2010 John Crispin <john@phrozen.org>
*/
#ifndef _LTQ_PCI_H__
diff --git a/arch/mips/pci/pci-mt7620.c b/arch/mips/pci/pci-mt7620.c
index 1ae932c2d78b..6ce816201699 100644
--- a/arch/mips/pci/pci-mt7620.c
+++ b/arch/mips/pci/pci-mt7620.c
@@ -2,7 +2,7 @@
* Ralink MT7620A SoC PCI support
*
* Copyright (C) 2007-2013 Bruce Chang (Mediatek)
- * Copyright (C) 2013-2016 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2013-2016 John Crispin <john@phrozen.org>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 as published
diff --git a/arch/mips/pci/pci-rt2880.c b/arch/mips/pci/pci-rt2880.c
index a245cad4372a..f2a1050168d9 100644
--- a/arch/mips/pci/pci-rt2880.c
+++ b/arch/mips/pci/pci-rt2880.c
@@ -1,7 +1,7 @@
/*
* Ralink RT288x SoC PCI register definitions
*
- * Copyright (C) 2009 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2009 John Crispin <john@phrozen.org>
* Copyright (C) 2009 Gabor Juhos <juhosg@openwrt.org>
*
* Parts of this file are based on Ralink's 2.6.21 BSP
diff --git a/arch/mips/pci/pci.c b/arch/mips/pci/pci.c
index b8a0bf5766f2..f1b11f0dea2d 100644
--- a/arch/mips/pci/pci.c
+++ b/arch/mips/pci/pci.c
@@ -83,9 +83,6 @@ static void pcibios_scanbus(struct pci_controller *hose)
LIST_HEAD(resources);
struct pci_bus *bus;
- if (!hose->iommu)
- PCI_DMA_BUS_IS_PHYS = 1;
-
if (hose->get_busno && pci_has_flag(PCI_PROBE_ONLY))
next_busno = (*hose->get_busno)();
diff --git a/arch/mips/pic32/Kconfig b/arch/mips/pic32/Kconfig
index 1985971b9890..527d37da05ac 100644
--- a/arch/mips/pic32/Kconfig
+++ b/arch/mips/pic32/Kconfig
@@ -14,7 +14,7 @@ config PIC32MZDA
select SYS_HAS_EARLY_PRINTK
select SYS_SUPPORTS_32BIT_KERNEL
select SYS_SUPPORTS_LITTLE_ENDIAN
- select ARCH_REQUIRE_GPIOLIB
+ select GPIOLIB
select COMMON_CLK
select CLKDEV_LOOKUP
select LIBFDT
diff --git a/arch/mips/pic32/pic32mzda/time.c b/arch/mips/pic32/pic32mzda/time.c
index ca6a62bb10db..62a0a78b6c64 100644
--- a/arch/mips/pic32/pic32mzda/time.c
+++ b/arch/mips/pic32/pic32mzda/time.c
@@ -11,13 +11,12 @@
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
* for more details.
*/
-#include <linux/clk.h>
#include <linux/clk-provider.h>
#include <linux/clocksource.h>
#include <linux/init.h>
+#include <linux/irqdomain.h>
#include <linux/of.h>
#include <linux/of_irq.h>
-#include <linux/irqdomain.h>
#include <asm/time.h>
@@ -58,16 +57,12 @@ unsigned int get_c0_compare_int(void)
void __init plat_time_init(void)
{
- struct clk *clk;
+ unsigned long rate = pic32_get_pbclk(7);
of_clk_init(NULL);
- clk = clk_get_sys("cpu_clk", NULL);
- if (IS_ERR(clk))
- panic("unable to get CPU clock, err=%ld", PTR_ERR(clk));
- clk_prepare_enable(clk);
- pr_info("CPU Clock: %ldMHz\n", clk_get_rate(clk) / 1000000);
- mips_hpt_frequency = clk_get_rate(clk) / 2;
+ pr_info("CPU Clock: %ldMHz\n", rate / 1000000);
+ mips_hpt_frequency = rate / 2;
clocksource_probe();
}
diff --git a/arch/mips/pistachio/init.c b/arch/mips/pistachio/init.c
index 96ba2cc9ad3e..ab79828230ab 100644
--- a/arch/mips/pistachio/init.c
+++ b/arch/mips/pistachio/init.c
@@ -2,6 +2,7 @@
* Pistachio platform setup
*
* Copyright (C) 2014 Google, Inc.
+ * Copyright (C) 2016 Imagination Technologies
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
@@ -9,6 +10,7 @@
*/
#include <linux/init.h>
+#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/of_address.h>
#include <linux/of_fdt.h>
@@ -24,9 +26,38 @@
#include <asm/smp-ops.h>
#include <asm/traps.h>
+/*
+ * Core revision register decoding
+ * Bits 23 to 20: Major rev
+ * Bits 15 to 8: Minor rev
+ * Bits 7 to 0: Maintenance rev
+ */
+#define PISTACHIO_CORE_REV_REG 0xB81483D0
+#define PISTACHIO_CORE_REV_A1 0x00100006
+#define PISTACHIO_CORE_REV_B0 0x00100106
+
const char *get_system_type(void)
{
- return "IMG Pistachio SoC";
+ u32 core_rev;
+ const char *sys_type;
+
+ core_rev = __raw_readl((const void *)PISTACHIO_CORE_REV_REG);
+
+ switch (core_rev) {
+ case PISTACHIO_CORE_REV_B0:
+ sys_type = "IMG Pistachio SoC (B0)";
+ break;
+
+ case PISTACHIO_CORE_REV_A1:
+ sys_type = "IMG Pistachio SoC (A1)";
+ break;
+
+ default:
+ sys_type = "IMG Pistachio SoC";
+ break;
+ }
+
+ return sys_type;
}
static void __init plat_setup_iocoherency(void)
@@ -52,12 +83,16 @@ static void __init plat_setup_iocoherency(void)
}
}
-void __init plat_mem_setup(void)
+void __init *plat_get_fdt(void)
{
if (fw_arg0 != -2)
panic("Device-tree not present");
+ return (void *)fw_arg1;
+}
- __dt_setup_arch((void *)fw_arg1);
+void __init plat_mem_setup(void)
+{
+ __dt_setup_arch(plat_get_fdt());
plat_setup_iocoherency();
}
@@ -109,6 +144,8 @@ void __init prom_init(void)
mips_cm_probe();
mips_cpc_probe();
register_cps_smp_ops();
+
+ pr_info("SoC Type: %s\n", get_system_type());
}
void __init prom_free_prom_memory(void)
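
The revision handling added above compares the whole register value against known constants rather than decoding individual fields. For reference, a minimal stand-alone sketch of the field layout described in the comment (major revision in bits 23:20, minor in bits 15:8, maintenance in bits 7:0); the helper names are illustrative only and not part of the patch:

#include <stdint.h>
#include <stdio.h>

/* Field extraction per the comment in the patch: bits 23:20 major,
 * bits 15:8 minor, bits 7:0 maintenance. */
static unsigned int rev_major(uint32_t rev)       { return (rev >> 20) & 0xf; }
static unsigned int rev_minor(uint32_t rev)       { return (rev >> 8) & 0xff; }
static unsigned int rev_maintenance(uint32_t rev) { return rev & 0xff; }

int main(void)
{
	/* Values taken from the patch: A1 = 0x00100006, B0 = 0x00100106 */
	uint32_t revs[] = { 0x00100006, 0x00100106 };

	for (unsigned int i = 0; i < 2; i++)
		printf("rev %#010x -> major %u, minor %u, maint %u\n",
		       revs[i], rev_major(revs[i]), rev_minor(revs[i]),
		       rev_maintenance(revs[i]));
	return 0;
}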
diff --git a/arch/mips/pmcs-msp71xx/msp_setup.c b/arch/mips/pmcs-msp71xx/msp_setup.c
index 9d293b3e9130..a63b73610fd4 100644
--- a/arch/mips/pmcs-msp71xx/msp_setup.c
+++ b/arch/mips/pmcs-msp71xx/msp_setup.c
@@ -118,7 +118,7 @@ void msp_restart(char *command)
/* No chip-specific reset code, just jump to the ROM reset vector */
set_c0_status(ST0_BEV | ST0_ERL);
change_c0_config(CONF_CM_CMASK, CONF_CM_UNCACHED);
- flush_cache_all();
+ __flush_cache_all();
write_c0_wired(0);
__asm__ __volatile__("jr\t%0"::"r"(0xbfc00000));
diff --git a/arch/mips/pnx833x/common/setup.c b/arch/mips/pnx833x/common/setup.c
index 99b4d94236cc..8a7443b2535e 100644
--- a/arch/mips/pnx833x/common/setup.c
+++ b/arch/mips/pnx833x/common/setup.c
@@ -38,9 +38,6 @@ extern void pnx833x_machine_power_off(void);
int __init plat_mem_setup(void)
{
- /* fake pci bus to avoid bounce buffers */
- PCI_DMA_BUS_IS_PHYS = 1;
-
/* set mips clock to 320MHz */
#if defined(CONFIG_SOC_PNX8335)
PNX8335_WRITEFIELD(0x17, CLOCK_PLL_CPU_CTL, FREQ);
diff --git a/arch/mips/ralink/Makefile b/arch/mips/ralink/Makefile
index 0d1795a0321e..fe3471533820 100644
--- a/arch/mips/ralink/Makefile
+++ b/arch/mips/ralink/Makefile
@@ -4,7 +4,7 @@
# Makefile for the Ralink common stuff
#
# Copyright (C) 2009-2011 Gabor Juhos <juhosg@openwrt.org>
-# Copyright (C) 2013 John Crispin <blogic@openwrt.org>
+# Copyright (C) 2013 John Crispin <john@phrozen.org>
obj-y := prom.o of.o reset.o
diff --git a/arch/mips/ralink/bootrom.c b/arch/mips/ralink/bootrom.c
index 5403468394fb..e1fa5972a81d 100644
--- a/arch/mips/ralink/bootrom.c
+++ b/arch/mips/ralink/bootrom.c
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2013 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2013 John Crispin <john@phrozen.org>
*/
#include <linux/debugfs.h>
diff --git a/arch/mips/ralink/cevt-rt3352.c b/arch/mips/ralink/cevt-rt3352.c
index e46f91f971c5..3ad0b0794f7d 100644
--- a/arch/mips/ralink/cevt-rt3352.c
+++ b/arch/mips/ralink/cevt-rt3352.c
@@ -3,7 +3,7 @@
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*
- * Copyright (C) 2013 by John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2013 by John Crispin <john@phrozen.org>
*/
#include <linux/clockchips.h>
diff --git a/arch/mips/ralink/clk.c b/arch/mips/ralink/clk.c
index 25c4a61779f1..ebaa7cc0e995 100644
--- a/arch/mips/ralink/clk.c
+++ b/arch/mips/ralink/clk.c
@@ -4,7 +4,7 @@
* by the Free Software Foundation.
*
* Copyright (C) 2011 Gabor Juhos <juhosg@openwrt.org>
- * Copyright (C) 2013 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2013 John Crispin <john@phrozen.org>
*/
#include <linux/kernel.h>
diff --git a/arch/mips/ralink/common.h b/arch/mips/ralink/common.h
index 8e7d8e618fb9..b8245d0940d6 100644
--- a/arch/mips/ralink/common.h
+++ b/arch/mips/ralink/common.h
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2013 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2013 John Crispin <john@phrozen.org>
*/
#ifndef _RALINK_COMMON_H__
diff --git a/arch/mips/ralink/ill_acc.c b/arch/mips/ralink/ill_acc.c
index e10d10b9e82a..765d5ba98fa2 100644
--- a/arch/mips/ralink/ill_acc.c
+++ b/arch/mips/ralink/ill_acc.c
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2013 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2013 John Crispin <john@phrozen.org>
*/
#include <linux/interrupt.h>
diff --git a/arch/mips/ralink/irq-gic.c b/arch/mips/ralink/irq-gic.c
index 50d6c55ab1de..2058280450b5 100644
--- a/arch/mips/ralink/irq-gic.c
+++ b/arch/mips/ralink/irq-gic.c
@@ -4,7 +4,7 @@
* by the Free Software Foundation.
*
* Copyright (C) 2015 Nikolay Martynov <mar.kolya@gmail.com>
- * Copyright (C) 2015 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2015 John Crispin <john@phrozen.org>
*/
#include <linux/init.h>
diff --git a/arch/mips/ralink/irq.c b/arch/mips/ralink/irq.c
index 4cf77f358395..4911c1445f1a 100644
--- a/arch/mips/ralink/irq.c
+++ b/arch/mips/ralink/irq.c
@@ -4,7 +4,7 @@
* by the Free Software Foundation.
*
* Copyright (C) 2009 Gabor Juhos <juhosg@openwrt.org>
- * Copyright (C) 2013 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2013 John Crispin <john@phrozen.org>
*/
#include <linux/io.h>
diff --git a/arch/mips/ralink/mt7620.c b/arch/mips/ralink/mt7620.c
index 0d3d1a97895f..d40edda0ca3b 100644
--- a/arch/mips/ralink/mt7620.c
+++ b/arch/mips/ralink/mt7620.c
@@ -7,7 +7,7 @@
*
* Copyright (C) 2008-2011 Gabor Juhos <juhosg@openwrt.org>
* Copyright (C) 2008 Imre Kaloz <kaloz@openwrt.org>
- * Copyright (C) 2013 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2013 John Crispin <john@phrozen.org>
*/
#include <linux/kernel.h>
@@ -188,6 +188,41 @@ static struct rt2880_pmx_func gpio_grp_mt7628[] = {
FUNC("gpio", 0, 11, 1),
};
+static struct rt2880_pmx_func p4led_kn_grp_mt7628[] = {
+ FUNC("jtag", 3, 30, 1),
+ FUNC("util", 2, 30, 1),
+ FUNC("gpio", 1, 30, 1),
+ FUNC("p4led_kn", 0, 30, 1),
+};
+
+static struct rt2880_pmx_func p3led_kn_grp_mt7628[] = {
+ FUNC("jtag", 3, 31, 1),
+ FUNC("util", 2, 31, 1),
+ FUNC("gpio", 1, 31, 1),
+ FUNC("p3led_kn", 0, 31, 1),
+};
+
+static struct rt2880_pmx_func p2led_kn_grp_mt7628[] = {
+ FUNC("jtag", 3, 32, 1),
+ FUNC("util", 2, 32, 1),
+ FUNC("gpio", 1, 32, 1),
+ FUNC("p2led_kn", 0, 32, 1),
+};
+
+static struct rt2880_pmx_func p1led_kn_grp_mt7628[] = {
+ FUNC("jtag", 3, 33, 1),
+ FUNC("util", 2, 33, 1),
+ FUNC("gpio", 1, 33, 1),
+ FUNC("p1led_kn", 0, 33, 1),
+};
+
+static struct rt2880_pmx_func p0led_kn_grp_mt7628[] = {
+ FUNC("jtag", 3, 34, 1),
+ FUNC("rsvd", 2, 34, 1),
+ FUNC("gpio", 1, 34, 1),
+ FUNC("p0led_kn", 0, 34, 1),
+};
+
static struct rt2880_pmx_func wled_kn_grp_mt7628[] = {
FUNC("rsvd", 3, 35, 1),
FUNC("rsvd", 2, 35, 1),
@@ -195,16 +230,61 @@ static struct rt2880_pmx_func wled_kn_grp_mt7628[] = {
FUNC("wled_kn", 0, 35, 1),
};
+static struct rt2880_pmx_func p4led_an_grp_mt7628[] = {
+ FUNC("jtag", 3, 39, 1),
+ FUNC("util", 2, 39, 1),
+ FUNC("gpio", 1, 39, 1),
+ FUNC("p4led_an", 0, 39, 1),
+};
+
+static struct rt2880_pmx_func p3led_an_grp_mt7628[] = {
+ FUNC("jtag", 3, 40, 1),
+ FUNC("util", 2, 40, 1),
+ FUNC("gpio", 1, 40, 1),
+ FUNC("p3led_an", 0, 40, 1),
+};
+
+static struct rt2880_pmx_func p2led_an_grp_mt7628[] = {
+ FUNC("jtag", 3, 41, 1),
+ FUNC("util", 2, 41, 1),
+ FUNC("gpio", 1, 41, 1),
+ FUNC("p2led_an", 0, 41, 1),
+};
+
+static struct rt2880_pmx_func p1led_an_grp_mt7628[] = {
+ FUNC("jtag", 3, 42, 1),
+ FUNC("util", 2, 42, 1),
+ FUNC("gpio", 1, 42, 1),
+ FUNC("p1led_an", 0, 42, 1),
+};
+
+static struct rt2880_pmx_func p0led_an_grp_mt7628[] = {
+ FUNC("jtag", 3, 43, 1),
+ FUNC("rsvd", 2, 43, 1),
+ FUNC("gpio", 1, 43, 1),
+ FUNC("p0led_an", 0, 43, 1),
+};
+
static struct rt2880_pmx_func wled_an_grp_mt7628[] = {
- FUNC("rsvd", 3, 35, 1),
- FUNC("rsvd", 2, 35, 1),
- FUNC("gpio", 1, 35, 1),
- FUNC("wled_an", 0, 35, 1),
+ FUNC("rsvd", 3, 44, 1),
+ FUNC("rsvd", 2, 44, 1),
+ FUNC("gpio", 1, 44, 1),
+ FUNC("wled_an", 0, 44, 1),
};
#define MT7628_GPIO_MODE_MASK 0x3
+#define MT7628_GPIO_MODE_P4LED_KN 58
+#define MT7628_GPIO_MODE_P3LED_KN 56
+#define MT7628_GPIO_MODE_P2LED_KN 54
+#define MT7628_GPIO_MODE_P1LED_KN 52
+#define MT7628_GPIO_MODE_P0LED_KN 50
#define MT7628_GPIO_MODE_WLED_KN 48
+#define MT7628_GPIO_MODE_P4LED_AN 42
+#define MT7628_GPIO_MODE_P3LED_AN 40
+#define MT7628_GPIO_MODE_P2LED_AN 38
+#define MT7628_GPIO_MODE_P1LED_AN 36
+#define MT7628_GPIO_MODE_P0LED_AN 34
#define MT7628_GPIO_MODE_WLED_AN 32
#define MT7628_GPIO_MODE_PWM1 30
#define MT7628_GPIO_MODE_PWM0 28
@@ -223,9 +303,9 @@ static struct rt2880_pmx_func wled_an_grp_mt7628[] = {
#define MT7628_GPIO_MODE_GPIO 0
static struct rt2880_pmx_group mt7628an_pinmux_data[] = {
- GRP_G("pmw1", pwm1_grp_mt7628, MT7628_GPIO_MODE_MASK,
+ GRP_G("pwm1", pwm1_grp_mt7628, MT7628_GPIO_MODE_MASK,
1, MT7628_GPIO_MODE_PWM1),
- GRP_G("pmw0", pwm0_grp_mt7628, MT7628_GPIO_MODE_MASK,
+ GRP_G("pwm0", pwm0_grp_mt7628, MT7628_GPIO_MODE_MASK,
1, MT7628_GPIO_MODE_PWM0),
GRP_G("uart2", uart2_grp_mt7628, MT7628_GPIO_MODE_MASK,
1, MT7628_GPIO_MODE_UART2),
@@ -251,8 +331,28 @@ static struct rt2880_pmx_group mt7628an_pinmux_data[] = {
1, MT7628_GPIO_MODE_GPIO),
GRP_G("wled_an", wled_an_grp_mt7628, MT7628_GPIO_MODE_MASK,
1, MT7628_GPIO_MODE_WLED_AN),
+ GRP_G("p0led_an", p0led_an_grp_mt7628, MT7628_GPIO_MODE_MASK,
+ 1, MT7628_GPIO_MODE_P0LED_AN),
+ GRP_G("p1led_an", p1led_an_grp_mt7628, MT7628_GPIO_MODE_MASK,
+ 1, MT7628_GPIO_MODE_P1LED_AN),
+ GRP_G("p2led_an", p2led_an_grp_mt7628, MT7628_GPIO_MODE_MASK,
+ 1, MT7628_GPIO_MODE_P2LED_AN),
+ GRP_G("p3led_an", p3led_an_grp_mt7628, MT7628_GPIO_MODE_MASK,
+ 1, MT7628_GPIO_MODE_P3LED_AN),
+ GRP_G("p4led_an", p4led_an_grp_mt7628, MT7628_GPIO_MODE_MASK,
+ 1, MT7628_GPIO_MODE_P4LED_AN),
GRP_G("wled_kn", wled_kn_grp_mt7628, MT7628_GPIO_MODE_MASK,
1, MT7628_GPIO_MODE_WLED_KN),
+ GRP_G("p0led_kn", p0led_kn_grp_mt7628, MT7628_GPIO_MODE_MASK,
+ 1, MT7628_GPIO_MODE_P0LED_KN),
+ GRP_G("p1led_kn", p1led_kn_grp_mt7628, MT7628_GPIO_MODE_MASK,
+ 1, MT7628_GPIO_MODE_P1LED_KN),
+ GRP_G("p2led_kn", p2led_kn_grp_mt7628, MT7628_GPIO_MODE_MASK,
+ 1, MT7628_GPIO_MODE_P2LED_KN),
+ GRP_G("p3led_kn", p3led_kn_grp_mt7628, MT7628_GPIO_MODE_MASK,
+ 1, MT7628_GPIO_MODE_P3LED_KN),
+ GRP_G("p4led_kn", p4led_kn_grp_mt7628, MT7628_GPIO_MODE_MASK,
+ 1, MT7628_GPIO_MODE_P4LED_KN),
{ 0 }
};
@@ -581,11 +681,14 @@ void prom_soc_init(struct ralink_soc_info *soc_info)
(rev & CHIP_REV_ECO_MASK));
cfg0 = __raw_readl(sysc + SYSC_REG_SYSTEM_CONFIG0);
- if (is_mt76x8())
+ if (is_mt76x8()) {
dram_type = cfg0 & DRAM_TYPE_MT7628_MASK;
- else
+ } else {
dram_type = (cfg0 >> SYSCFG0_DRAM_TYPE_SHIFT) &
SYSCFG0_DRAM_TYPE_MASK;
+ if (dram_type == SYSCFG0_DRAM_TYPE_UNKNOWN)
+ dram_type = SYSCFG0_DRAM_TYPE_SDRAM;
+ }
soc_info->mem_base = MT7620_DRAM_BASE;
if (is_mt76x8())
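
The new *led_kn/*led_an groups follow the same pattern as the existing pinmux entries: each group is a two-bit field (MT7628_GPIO_MODE_MASK is 0x3) at the bit offset named by the matching MT7628_GPIO_MODE_* constant, and the field value selects one of the group's four FUNC() entries. A small stand-alone sketch of that decoding for the p0led_an group; the flat array of 32-bit mode words is an assumption made for illustration, not taken from the patch:

#include <stdint.h>
#include <stdio.h>

#define MT7628_GPIO_MODE_MASK		0x3
#define MT7628_GPIO_MODE_P0LED_AN	34

/* The four FUNC() names of the p0led_an group, indexed by field value
 * (0 = p0led_an, 1 = gpio, 2 = rsvd, 3 = jtag, as in the patch). */
static const char * const p0led_an_funcs[] = { "p0led_an", "gpio", "rsvd", "jtag" };

static const char *decode_p0led_an(const uint32_t *mode_regs)
{
	unsigned int off = MT7628_GPIO_MODE_P0LED_AN;
	uint32_t field = (mode_regs[off / 32] >> (off % 32)) &
			 MT7628_GPIO_MODE_MASK;

	return p0led_an_funcs[field];
}

int main(void)
{
	/* Hypothetical register contents: field value 1 ("gpio") at bit
	 * offset 34, i.e. bit 2 of the second mode word. */
	uint32_t mode_regs[2] = { 0, 1u << 2 };

	printf("p0led_an pins muxed as: %s\n", decode_p0led_an(mode_regs));
	return 0;
}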
diff --git a/arch/mips/ralink/mt7621.c b/arch/mips/ralink/mt7621.c
index e9b9fa3e1e51..a45bbbe97ac5 100644
--- a/arch/mips/ralink/mt7621.c
+++ b/arch/mips/ralink/mt7621.c
@@ -4,7 +4,7 @@
* by the Free Software Foundation.
*
* Copyright (C) 2015 Nikolay Martynov <mar.kolya@gmail.com>
- * Copyright (C) 2015 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2015 John Crispin <john@phrozen.org>
*/
#include <linux/kernel.h>
diff --git a/arch/mips/ralink/of.c b/arch/mips/ralink/of.c
index f9eda5d8f82c..0aa67a2d0ae6 100644
--- a/arch/mips/ralink/of.c
+++ b/arch/mips/ralink/of.c
@@ -5,7 +5,7 @@
*
* Copyright (C) 2008 Imre Kaloz <kaloz@openwrt.org>
* Copyright (C) 2008-2009 Gabor Juhos <juhosg@openwrt.org>
- * Copyright (C) 2013 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2013 John Crispin <john@phrozen.org>
*/
#include <linux/io.h>
diff --git a/arch/mips/ralink/prom.c b/arch/mips/ralink/prom.c
index 39a9142f71be..5a73c5e14221 100644
--- a/arch/mips/ralink/prom.c
+++ b/arch/mips/ralink/prom.c
@@ -5,7 +5,7 @@
*
* Copyright (C) 2009 Gabor Juhos <juhosg@openwrt.org>
* Copyright (C) 2010 Joonas Lahtinen <joonas.lahtinen@gmail.com>
- * Copyright (C) 2013 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2013 John Crispin <john@phrozen.org>
*/
#include <linux/string.h>
diff --git a/arch/mips/ralink/reset.c b/arch/mips/ralink/reset.c
index ee117c4bc4a3..64543d66e76b 100644
--- a/arch/mips/ralink/reset.c
+++ b/arch/mips/ralink/reset.c
@@ -5,7 +5,7 @@
*
* Copyright (C) 2008-2009 Gabor Juhos <juhosg@openwrt.org>
* Copyright (C) 2008 Imre Kaloz <kaloz@openwrt.org>
- * Copyright (C) 2013 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2013 John Crispin <john@phrozen.org>
*/
#include <linux/pm.h>
@@ -61,7 +61,7 @@ static int ralink_reset_device(struct reset_controller_dev *rcdev,
return ralink_deassert_device(rcdev, id);
}
-static struct reset_control_ops reset_ops = {
+static const struct reset_control_ops reset_ops = {
.reset = ralink_reset_device,
.assert = ralink_assert_device,
.deassert = ralink_deassert_device,
diff --git a/arch/mips/ralink/rt288x.c b/arch/mips/ralink/rt288x.c
index 3c84166ebcb7..285796e6d75c 100644
--- a/arch/mips/ralink/rt288x.c
+++ b/arch/mips/ralink/rt288x.c
@@ -7,7 +7,7 @@
*
* Copyright (C) 2008-2011 Gabor Juhos <juhosg@openwrt.org>
* Copyright (C) 2008 Imre Kaloz <kaloz@openwrt.org>
- * Copyright (C) 2013 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2013 John Crispin <john@phrozen.org>
*/
#include <linux/kernel.h>
diff --git a/arch/mips/ralink/rt305x.c b/arch/mips/ralink/rt305x.c
index d7c4ba43a428..c8a28c4bf29e 100644
--- a/arch/mips/ralink/rt305x.c
+++ b/arch/mips/ralink/rt305x.c
@@ -7,7 +7,7 @@
*
* Copyright (C) 2008-2011 Gabor Juhos <juhosg@openwrt.org>
* Copyright (C) 2008 Imre Kaloz <kaloz@openwrt.org>
- * Copyright (C) 2013 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2013 John Crispin <john@phrozen.org>
*/
#include <linux/kernel.h>
diff --git a/arch/mips/ralink/rt3883.c b/arch/mips/ralink/rt3883.c
index fafec947b27d..4cef9162bd9b 100644
--- a/arch/mips/ralink/rt3883.c
+++ b/arch/mips/ralink/rt3883.c
@@ -7,7 +7,7 @@
*
* Copyright (C) 2008 Imre Kaloz <kaloz@openwrt.org>
* Copyright (C) 2008-2011 Gabor Juhos <juhosg@openwrt.org>
- * Copyright (C) 2013 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2013 John Crispin <john@phrozen.org>
*/
#include <linux/kernel.h>
diff --git a/arch/mips/ralink/timer-gic.c b/arch/mips/ralink/timer-gic.c
index 5b4f186bcf95..069771dbec42 100644
--- a/arch/mips/ralink/timer-gic.c
+++ b/arch/mips/ralink/timer-gic.c
@@ -4,7 +4,7 @@
* by the Free Software Foundation.
*
* Copyright (C) 2015 Nikolay Martynov <mar.kolya@gmail.com>
- * Copyright (C) 2015 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2015 John Crispin <john@phrozen.org>
*/
#include <linux/init.h>
diff --git a/arch/mips/ralink/timer.c b/arch/mips/ralink/timer.c
index 82c72a15bf75..b0343ff336c5 100644
--- a/arch/mips/ralink/timer.c
+++ b/arch/mips/ralink/timer.c
@@ -3,7 +3,7 @@
* under the terms of the GNU General Public License version 2 as published
* by the Free Software Foundation.
*
- * Copyright (C) 2013 John Crispin <blogic@openwrt.org>
+ * Copyright (C) 2013 John Crispin <john@phrozen.org>
*/
#include <linux/module.h>
@@ -180,5 +180,5 @@ static struct platform_driver rt_timer_driver = {
module_platform_driver(rt_timer_driver);
MODULE_DESCRIPTION("Ralink RT2880 timer");
-MODULE_AUTHOR("John Crispin <blogic@openwrt.org");
+MODULE_AUTHOR("John Crispin <john@phrozen.org");
MODULE_LICENSE("GPL");
diff --git a/arch/mips/sgi-ip27/ip27-hubio.c b/arch/mips/sgi-ip27/ip27-hubio.c
index 328ceb3c86ec..2abe016a0ffc 100644
--- a/arch/mips/sgi-ip27/ip27-hubio.c
+++ b/arch/mips/sgi-ip27/ip27-hubio.c
@@ -105,7 +105,7 @@ static void hub_setup_prb(nasid_t nasid, int prbnum, int credits)
prb.iprb_ff = force_fire_and_forget ? 1 : 0;
/*
- * Set the appropriate number of PIO cresits for the widget.
+ * Set the appropriate number of PIO credits for the widget.
*/
prb.iprb_xtalkctr = credits;
diff --git a/arch/mips/sgi-ip27/ip27-nmi.c b/arch/mips/sgi-ip27/ip27-nmi.c
index a2358b44420c..cfceaea92724 100644
--- a/arch/mips/sgi-ip27/ip27-nmi.c
+++ b/arch/mips/sgi-ip27/ip27-nmi.c
@@ -23,7 +23,7 @@ typedef unsigned long machreg_t;
static arch_spinlock_t nmi_lock = __ARCH_SPIN_LOCK_UNLOCKED;
/*
- * Lets see what else we need to do here. Set up sp, gp?
+ * Let's see what else we need to do here. Set up sp, gp?
*/
void nmi_dump(void)
{
diff --git a/arch/mips/sgi-ip27/ip27-xtalk.c b/arch/mips/sgi-ip27/ip27-xtalk.c
index 20f582a2137a..4fe5678ba74d 100644
--- a/arch/mips/sgi-ip27/ip27-xtalk.c
+++ b/arch/mips/sgi-ip27/ip27-xtalk.c
@@ -67,7 +67,7 @@ static int xbow_probe(nasid_t nasid)
return -ENODEV;
/*
- * Okay, here's a xbow. Lets arbitrate and find
+ * Okay, here's a xbow. Let's arbitrate and find
* out if we should initialize it. Set enabled
* hub connected at highest or lowest widget as
* master.
diff --git a/arch/mips/sibyte/Kconfig b/arch/mips/sibyte/Kconfig
index cb9a095f5c5e..707b88441567 100644
--- a/arch/mips/sibyte/Kconfig
+++ b/arch/mips/sibyte/Kconfig
@@ -143,7 +143,8 @@ config SIBYTE_CFE_CONSOLE
config SIBYTE_BUS_WATCHER
bool "Support for Bus Watcher statistics"
depends on SIBYTE_SB1xxx_SOC && \
- (SIBYTE_BCM112X || SIBYTE_SB1250)
+ (SIBYTE_BCM112X || SIBYTE_SB1250 || \
+ SIBYTE_BCM1x55 || SIBYTE_BCM1x80)
help
Handle and keep statistics on the bus error interrupts (COR_ECC,
BAD_ECC, IO_BUS).
diff --git a/arch/mips/sni/rm200.c b/arch/mips/sni/rm200.c
index a046b302623e..160b88000b4b 100644
--- a/arch/mips/sni/rm200.c
+++ b/arch/mips/sni/rm200.c
@@ -263,7 +263,7 @@ spurious_8259A_irq:
static int spurious_irq_mask;
/*
* At this point we can be sure the IRQ is spurious,
- * lets ACK and report it. [once per IRQ]
+ * let's ACK and report it. [once per IRQ]
*/
if (!(spurious_irq_mask & irqmask)) {
printk(KERN_DEBUG
diff --git a/arch/mips/vdso/Makefile b/arch/mips/vdso/Makefile
index ee3617c0c5e2..3b4538ec0102 100644
--- a/arch/mips/vdso/Makefile
+++ b/arch/mips/vdso/Makefile
@@ -5,10 +5,12 @@ obj-vdso-y := elf.o gettimeofday.o sigreturn.o
ccflags-vdso := \
$(filter -I%,$(KBUILD_CFLAGS)) \
$(filter -E%,$(KBUILD_CFLAGS)) \
+ $(filter -mmicromips,$(KBUILD_CFLAGS)) \
$(filter -march=%,$(KBUILD_CFLAGS))
cflags-vdso := $(ccflags-vdso) \
$(filter -W%,$(filter-out -Wa$(comma)%,$(KBUILD_CFLAGS))) \
- -O2 -g -fPIC -fno-common -fno-builtin -G 0 -DDISABLE_BRANCH_PROFILING \
+ -O2 -g -fPIC -fno-strict-aliasing -fno-common -fno-builtin -G 0 \
+ -DDISABLE_BRANCH_PROFILING \
$(call cc-option, -fno-stack-protector)
aflags-vdso := $(ccflags-vdso) \
$(filter -I%,$(KBUILD_CFLAGS)) \
@@ -50,13 +52,17 @@ quiet_cmd_vdsold = VDSO $@
cmd_vdsold = $(CC) $(c_flags) $(VDSO_LDFLAGS) \
-Wl,-T $(filter %.lds,$^) $(filter %.o,$^) -o $@
+# Strip rule for the raw .so files
+$(obj)/%.so.raw: OBJCOPYFLAGS := -S
+$(obj)/%.so.raw: $(obj)/%.so.dbg.raw FORCE
+ $(call if_changed,objcopy)
+
hostprogs-y := genvdso
quiet_cmd_genvdso = GENVDSO $@
define cmd_genvdso
- cp $< $(<:%.dbg=%) && \
- $(OBJCOPY) -S $< $(<:%.dbg=%) && \
- $(obj)/genvdso $< $(<:%.dbg=%) $@ $(VDSO_NAME)
+ $(foreach file,$(filter %.raw,$^),cp $(file) $(file:%.raw=%) &&) \
+ $(obj)/genvdso $(<:%.raw=%) $(<:%.dbg.raw=%) $@ $(VDSO_NAME)
endef
#
@@ -66,7 +72,10 @@ endef
native-abi := $(filter -mabi=%,$(KBUILD_CFLAGS))
targets += $(obj-vdso-y)
-targets += vdso.lds vdso.so.dbg vdso.so vdso-image.c
+targets += vdso.lds
+targets += vdso.so.dbg.raw vdso.so.raw
+targets += vdso.so.dbg vdso.so
+targets += vdso-image.c
obj-vdso := $(obj-vdso-y:%.o=$(obj)/%.o)
@@ -75,10 +84,11 @@ $(obj-vdso): KBUILD_AFLAGS := $(aflags-vdso) $(native-abi)
$(obj)/vdso.lds: KBUILD_CPPFLAGS := $(native-abi)
-$(obj)/vdso.so.dbg: $(obj)/vdso.lds $(obj-vdso) FORCE
+$(obj)/vdso.so.dbg.raw: $(obj)/vdso.lds $(obj-vdso) FORCE
$(call if_changed,vdsold)
-$(obj)/vdso-image.c: $(obj)/vdso.so.dbg $(obj)/genvdso FORCE
+$(obj)/vdso-image.c: $(obj)/vdso.so.dbg.raw $(obj)/vdso.so.raw \
+ $(obj)/genvdso FORCE
$(call if_changed,genvdso)
obj-y += vdso-image.o
@@ -89,7 +99,10 @@ obj-y += vdso-image.o
# Define these outside the ifdef to ensure they are picked up by clean.
targets += $(obj-vdso-y:%.o=%-o32.o)
-targets += vdso-o32.lds vdso-o32.so.dbg vdso-o32.so vdso-o32-image.c
+targets += vdso-o32.lds
+targets += vdso-o32.so.dbg.raw vdso-o32.so.raw
+targets += vdso-o32.so.dbg vdso-o32.so
+targets += vdso-o32-image.c
ifdef CONFIG_MIPS32_O32
@@ -109,11 +122,12 @@ $(obj)/vdso-o32.lds: KBUILD_CPPFLAGS := -mabi=32
$(obj)/vdso-o32.lds: $(src)/vdso.lds.S FORCE
$(call if_changed_dep,cpp_lds_S)
-$(obj)/vdso-o32.so.dbg: $(obj)/vdso-o32.lds $(obj-vdso-o32) FORCE
+$(obj)/vdso-o32.so.dbg.raw: $(obj)/vdso-o32.lds $(obj-vdso-o32) FORCE
$(call if_changed,vdsold)
$(obj)/vdso-o32-image.c: VDSO_NAME := o32
-$(obj)/vdso-o32-image.c: $(obj)/vdso-o32.so.dbg $(obj)/genvdso FORCE
+$(obj)/vdso-o32-image.c: $(obj)/vdso-o32.so.dbg.raw $(obj)/vdso-o32.so.raw \
+ $(obj)/genvdso FORCE
$(call if_changed,genvdso)
obj-y += vdso-o32-image.o
@@ -125,7 +139,10 @@ endif
#
targets += $(obj-vdso-y:%.o=%-n32.o)
-targets += vdso-n32.lds vdso-n32.so.dbg vdso-n32.so vdso-n32-image.c
+targets += vdso-n32.lds
+targets += vdso-n32.so.dbg.raw vdso-n32.so.raw
+targets += vdso-n32.so.dbg vdso-n32.so
+targets += vdso-n32-image.c
ifdef CONFIG_MIPS32_N32
@@ -145,11 +162,12 @@ $(obj)/vdso-n32.lds: KBUILD_CPPFLAGS := -mabi=n32
$(obj)/vdso-n32.lds: $(src)/vdso.lds.S FORCE
$(call if_changed_dep,cpp_lds_S)
-$(obj)/vdso-n32.so.dbg: $(obj)/vdso-n32.lds $(obj-vdso-n32) FORCE
+$(obj)/vdso-n32.so.dbg.raw: $(obj)/vdso-n32.lds $(obj-vdso-n32) FORCE
$(call if_changed,vdsold)
$(obj)/vdso-n32-image.c: VDSO_NAME := n32
-$(obj)/vdso-n32-image.c: $(obj)/vdso-n32.so.dbg $(obj)/genvdso FORCE
+$(obj)/vdso-n32-image.c: $(obj)/vdso-n32.so.dbg.raw $(obj)/vdso-n32.so.raw \
+ $(obj)/genvdso FORCE
$(call if_changed,genvdso)
obj-y += vdso-n32-image.o
diff --git a/arch/mips/vr41xx/common/cmu.c b/arch/mips/vr41xx/common/cmu.c
index 05302bfdd114..89bac9885695 100644
--- a/arch/mips/vr41xx/common/cmu.c
+++ b/arch/mips/vr41xx/common/cmu.c
@@ -3,7 +3,7 @@
*
* Copyright (C) 2001-2002 MontaVista Software Inc.
* Author: Yoichi Yuasa <source@mvista.com>
- * Copuright (C) 2003-2005 Yoichi Yuasa <yuasa@linux-mips.org>
+ * Copyright (C) 2003-2005 Yoichi Yuasa <yuasa@linux-mips.org>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
diff --git a/arch/mips/vr41xx/common/pmu.c b/arch/mips/vr41xx/common/pmu.c
index d7f755833c3f..39a0db3e2b34 100644
--- a/arch/mips/vr41xx/common/pmu.c
+++ b/arch/mips/vr41xx/common/pmu.c
@@ -73,7 +73,7 @@ static inline void software_reset(void)
default:
set_c0_status(ST0_BEV | ST0_ERL);
change_c0_config(CONF_CM_CMASK, CONF_CM_UNCACHED);
- flush_cache_all();
+ __flush_cache_all();
write_c0_wired(0);
__asm__("jr %0"::"r"(0xbfc00000));
break;
diff --git a/arch/mn10300/Kconfig b/arch/mn10300/Kconfig
index 06ddb5501ab1..9627e81a6cbb 100644
--- a/arch/mn10300/Kconfig
+++ b/arch/mn10300/Kconfig
@@ -1,5 +1,6 @@
config MN10300
def_bool y
+ select HAVE_EXIT_THREAD
select HAVE_OPROFILE
select HAVE_UID16
select GENERIC_IRQ_SHOW
diff --git a/arch/mn10300/boot/compressed/Makefile b/arch/mn10300/boot/compressed/Makefile
index 08a95e171685..5f56f9de1061 100644
--- a/arch/mn10300/boot/compressed/Makefile
+++ b/arch/mn10300/boot/compressed/Makefile
@@ -8,7 +8,6 @@ LDFLAGS_vmlinux := -Ttext $(CONFIG_KERNEL_ZIMAGE_BASE_ADDRESS) -e startup_32
$(obj)/vmlinux: $(obj)/head.o $(obj)/misc.o $(obj)/piggy.o FORCE
$(call if_changed,ld)
- @:
$(obj)/vmlinux.bin: vmlinux FORCE
$(call if_changed,objcopy)
diff --git a/arch/mn10300/configs/asb2364_defconfig b/arch/mn10300/configs/asb2364_defconfig
index fbb96ae3122a..cd0a6cb17dee 100644
--- a/arch/mn10300/configs/asb2364_defconfig
+++ b/arch/mn10300/configs/asb2364_defconfig
@@ -11,7 +11,6 @@ CONFIG_CGROUPS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
-CONFIG_RESOURCE_COUNTERS=y
CONFIG_RELAY=y
# CONFIG_CC_OPTIMIZE_FOR_SIZE is not set
CONFIG_EXPERT=y
diff --git a/arch/mn10300/include/asm/fpu.h b/arch/mn10300/include/asm/fpu.h
index 738ff72659d5..a47e995d45f3 100644
--- a/arch/mn10300/include/asm/fpu.h
+++ b/arch/mn10300/include/asm/fpu.h
@@ -76,11 +76,9 @@ static inline void unlazy_fpu(struct task_struct *tsk)
preempt_enable();
}
-static inline void exit_fpu(void)
+static inline void exit_fpu(struct task_struct *tsk)
{
#ifdef CONFIG_LAZY_SAVE_FPU
- struct task_struct *tsk = current;
-
preempt_disable();
if (fpu_state_owner == tsk)
fpu_state_owner = NULL;
@@ -123,7 +121,7 @@ static inline void fpu_init_state(void) {}
static inline void fpu_save(struct fpu_state_struct *s) {}
static inline void fpu_kill_state(struct task_struct *tsk) {}
static inline void unlazy_fpu(struct task_struct *tsk) {}
-static inline void exit_fpu(void) {}
+static inline void exit_fpu(struct task_struct *tsk) {}
static inline void flush_fpu(void) {}
static inline int fpu_setup_sigcontext(struct fpucontext *buf) { return 0; }
static inline int fpu_restore_sigcontext(struct fpucontext *buf) { return 0; }
diff --git a/arch/mn10300/kernel/process.c b/arch/mn10300/kernel/process.c
index 3707da583d05..cbede4e88dee 100644
--- a/arch/mn10300/kernel/process.c
+++ b/arch/mn10300/kernel/process.c
@@ -103,9 +103,9 @@ void show_regs(struct pt_regs *regs)
/*
* free current thread data structures etc..
*/
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
{
- exit_fpu();
+ exit_fpu(tsk);
}
void flush_thread(void)
diff --git a/arch/nios2/Kconfig b/arch/nios2/Kconfig
index 437555424bda..51a56c8b04b4 100644
--- a/arch/nios2/Kconfig
+++ b/arch/nios2/Kconfig
@@ -1,6 +1,5 @@
config NIOS2
def_bool y
- select ARCH_WANT_OPTIONAL_GPIOLIB
select CLKSRC_OF
select GENERIC_ATOMIC64
select GENERIC_CLOCKEVENTS
@@ -16,6 +15,7 @@ config NIOS2
select SOC_BUS
select SPARSE_IRQ
select USB_ARCH_HAS_HCD if USB_SUPPORT
+ select CPU_NO_EFFICIENT_FFS
config GENERIC_CSUM
def_bool y
diff --git a/arch/nios2/Makefile b/arch/nios2/Makefile
index 2328f82ba2a8..e74afc12d516 100644
--- a/arch/nios2/Makefile
+++ b/arch/nios2/Makefile
@@ -20,7 +20,7 @@ UTS_SYSNAME = Linux
export MMU
-LIBGCC := $(shell $(CC) $(KBUILD_CFLAGS) -print-libgcc-file-name)
+LIBGCC := $(shell $(CC) $(KBUILD_CFLAGS) $(KCFLAGS) -print-libgcc-file-name)
KBUILD_CFLAGS += -pipe -D__linux__ -D__ELF__
KBUILD_CFLAGS += $(if $(CONFIG_NIOS2_HW_MUL_SUPPORT),-mhw-mul,-mno-hw-mul)
@@ -53,7 +53,7 @@ all: vmImage
archclean:
$(Q)$(MAKE) $(clean)=$(nios2-boot)
-%.dtb:
+%.dtb: | scripts
$(Q)$(MAKE) $(build)=$(nios2-boot) $(nios2-boot)/$@
dtbs:
diff --git a/arch/nios2/boot/compressed/Makefile b/arch/nios2/boot/compressed/Makefile
index 5b0fb346d888..d5921c9a9726 100644
--- a/arch/nios2/boot/compressed/Makefile
+++ b/arch/nios2/boot/compressed/Makefile
@@ -11,7 +11,6 @@ LDFLAGS_vmlinux := -T
$(obj)/vmlinux: $(obj)/vmlinux.lds $(OBJECTS) $(obj)/piggy.o FORCE
$(call if_changed,ld)
- @:
LDFLAGS_piggy.o := -r --format binary --oformat elf32-littlenios2 -T
diff --git a/arch/nios2/include/asm/io.h b/arch/nios2/include/asm/io.h
index c5a62da22cd2..ce072ba0f8dd 100644
--- a/arch/nios2/include/asm/io.h
+++ b/arch/nios2/include/asm/io.h
@@ -50,7 +50,6 @@ static inline void iounmap(void __iomem *addr)
/* Pages to physical address... */
#define page_to_phys(page) virt_to_phys(page_to_virt(page))
-#define page_to_bus(page) page_to_virt(page)
/* Macros used for converting between virtual and physical mappings. */
#define phys_to_virt(vaddr) \
diff --git a/arch/nios2/include/asm/page.h b/arch/nios2/include/asm/page.h
index 4b32d6fd9d98..c1683f51ad0f 100644
--- a/arch/nios2/include/asm/page.h
+++ b/arch/nios2/include/asm/page.h
@@ -84,7 +84,7 @@ extern struct page *mem_map;
((void *)((unsigned long)(x) + PAGE_OFFSET - PHYS_OFFSET))
#define page_to_virt(page) \
- ((((page) - mem_map) << PAGE_SHIFT) + PAGE_OFFSET)
+ ((void *)(((page) - mem_map) << PAGE_SHIFT) + PAGE_OFFSET)
# define pfn_to_kaddr(pfn) __va((pfn) << PAGE_SHIFT)
# define pfn_valid(pfn) ((pfn) >= ARCH_PFN_OFFSET && \
diff --git a/arch/nios2/include/asm/pgtable.h b/arch/nios2/include/asm/pgtable.h
index a213e8c9aad0..298393c3cb42 100644
--- a/arch/nios2/include/asm/pgtable.h
+++ b/arch/nios2/include/asm/pgtable.h
@@ -209,7 +209,7 @@ static inline void set_pte(pte_t *ptep, pte_t pteval)
static inline void set_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t pteval)
{
- unsigned long paddr = page_to_virt(pte_page(pteval));
+ unsigned long paddr = (unsigned long)page_to_virt(pte_page(pteval));
flush_dcache_range(paddr, paddr + PAGE_SIZE);
set_pte(ptep, pteval);
diff --git a/arch/nios2/include/asm/processor.h b/arch/nios2/include/asm/processor.h
index c2ba45c159c7..1c953f0cadbf 100644
--- a/arch/nios2/include/asm/processor.h
+++ b/arch/nios2/include/asm/processor.h
@@ -75,11 +75,6 @@ static inline void release_thread(struct task_struct *dead_task)
{
}
-/* Free current thread data structures etc.. */
-static inline void exit_thread(void)
-{
-}
-
/* Return saved PC of a blocked thread. */
#define thread_saved_pc(tsk) ((tsk)->thread.kregs->ea)
diff --git a/arch/nios2/include/uapi/asm/unistd.h b/arch/nios2/include/uapi/asm/unistd.h
index c4bf79510461..51a32c71ce2b 100644
--- a/arch/nios2/include/uapi/asm/unistd.h
+++ b/arch/nios2/include/uapi/asm/unistd.h
@@ -17,6 +17,8 @@
#define sys_mmap2 sys_mmap_pgoff
+#define __ARCH_WANT_RENAMEAT
+
/* Use the standard ABI for syscalls */
#include <asm-generic/unistd.h>
diff --git a/arch/openrisc/Kconfig b/arch/openrisc/Kconfig
index e118c02cc79a..142cb057c41b 100644
--- a/arch/openrisc/Kconfig
+++ b/arch/openrisc/Kconfig
@@ -25,6 +25,7 @@ config OPENRISC
select MODULES_USE_ELF_RELA
select HAVE_DEBUG_STACKOVERFLOW
select OR1K_PIC
+ select CPU_NO_EFFICIENT_FFS if !OPENRISC_HAVE_INST_FF1
config MMU
def_bool y
diff --git a/arch/openrisc/include/asm/page.h b/arch/openrisc/include/asm/page.h
index e613d3673034..35bcb7cd2cde 100644
--- a/arch/openrisc/include/asm/page.h
+++ b/arch/openrisc/include/asm/page.h
@@ -81,8 +81,6 @@ typedef struct page *pgtable_t;
#define virt_to_page(addr) \
(mem_map + (((unsigned long)(addr)-PAGE_OFFSET) >> PAGE_SHIFT))
-#define page_to_virt(page) \
- ((((page) - mem_map) << PAGE_SHIFT) + PAGE_OFFSET)
#define page_to_phys(page) ((dma_addr_t)page_to_pfn(page) << PAGE_SHIFT)
diff --git a/arch/openrisc/include/asm/processor.h b/arch/openrisc/include/asm/processor.h
index 4d235e3d2534..70334c9f7d24 100644
--- a/arch/openrisc/include/asm/processor.h
+++ b/arch/openrisc/include/asm/processor.h
@@ -85,15 +85,6 @@ void release_thread(struct task_struct *);
unsigned long get_wchan(struct task_struct *p);
/*
- * Free current thread data structures etc..
- */
-
-extern inline void exit_thread(void)
-{
- /* Nothing needs to be done. */
-}
-
-/*
* Return saved PC of a blocked thread. For now, this is the "user" PC
*/
extern unsigned long thread_saved_pc(struct task_struct *t);
diff --git a/arch/openrisc/include/uapi/asm/unistd.h b/arch/openrisc/include/uapi/asm/unistd.h
index ce40b71df006..471905bd7745 100644
--- a/arch/openrisc/include/uapi/asm/unistd.h
+++ b/arch/openrisc/include/uapi/asm/unistd.h
@@ -20,6 +20,7 @@
#define sys_mmap2 sys_mmap_pgoff
+#define __ARCH_WANT_RENAMEAT
#define __ARCH_WANT_SYS_FORK
#define __ARCH_WANT_SYS_CLONE
diff --git a/arch/parisc/Kconfig b/arch/parisc/Kconfig
index 88cfaa8af78e..dc117385ce2e 100644
--- a/arch/parisc/Kconfig
+++ b/arch/parisc/Kconfig
@@ -6,6 +6,7 @@ config PARISC
select HAVE_OPROFILE
select HAVE_FUNCTION_TRACER
select HAVE_FUNCTION_GRAPH_TRACER
+ select HAVE_SYSCALL_TRACEPOINTS
select ARCH_WANT_FRAME_POINTERS
select RTC_CLASS
select RTC_DRV_GENERIC
@@ -31,7 +32,10 @@ config PARISC
select HAVE_DEBUG_STACKOVERFLOW
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_SECCOMP_FILTER
+ select HAVE_ARCH_TRACEHOOK
+ select HAVE_UNSTABLE_SCHED_CLOCK if (SMP || !64BIT)
select ARCH_NO_COHERENT_DMA_MMAP
+ select CPU_NO_EFFICIENT_FFS
help
The PA-RISC microprocessor is designed by Hewlett-Packard and used
diff --git a/arch/parisc/include/asm/cmpxchg.h b/arch/parisc/include/asm/cmpxchg.h
index 0a90b965cccb..7ada30900807 100644
--- a/arch/parisc/include/asm/cmpxchg.h
+++ b/arch/parisc/include/asm/cmpxchg.h
@@ -52,8 +52,7 @@ extern void __cmpxchg_called_with_bad_pointer(void);
/* __cmpxchg_u32/u64 defined in arch/parisc/lib/bitops.c */
extern unsigned long __cmpxchg_u32(volatile unsigned int *m, unsigned int old,
unsigned int new_);
-extern unsigned long __cmpxchg_u64(volatile unsigned long *ptr,
- unsigned long old, unsigned long new_);
+extern u64 __cmpxchg_u64(volatile u64 *ptr, u64 old, u64 new_);
/* don't worry...optimizer will get rid of most of this */
static inline unsigned long
@@ -61,7 +60,7 @@ __cmpxchg(volatile void *ptr, unsigned long old, unsigned long new_, int size)
{
switch (size) {
#ifdef CONFIG_64BIT
- case 8: return __cmpxchg_u64((unsigned long *)ptr, old, new_);
+ case 8: return __cmpxchg_u64((u64 *)ptr, old, new_);
#endif
case 4: return __cmpxchg_u32((unsigned int *)ptr,
(unsigned int)old, (unsigned int)new_);
@@ -86,7 +85,7 @@ static inline unsigned long __cmpxchg_local(volatile void *ptr,
{
switch (size) {
#ifdef CONFIG_64BIT
- case 8: return __cmpxchg_u64((unsigned long *)ptr, old, new_);
+ case 8: return __cmpxchg_u64((u64 *)ptr, old, new_);
#endif
case 4: return __cmpxchg_u32(ptr, old, new_);
default:
@@ -111,4 +110,6 @@ static inline unsigned long __cmpxchg_local(volatile void *ptr,
#define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))
#endif
+#define cmpxchg64(ptr, o, n) __cmpxchg_u64(ptr, o, n)
+
#endif /* _ASM_PARISC_CMPXCHG_H_ */
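
The new cmpxchg64() wrapper keeps the usual compare-and-exchange contract of the __cmpxchg_u64() helper it maps to: the previous value at the location is returned, and the new value is only stored when that previous value equals the expected one. A minimal single-threaded model of that return-value contract (no locking or atomicity, just the semantics):

#include <stdint.h>
#include <stdio.h>

/* Model of the cmpxchg64() contract: return the previous value and
 * store new_ only when the previous value equals old. */
static uint64_t cmpxchg64_model(uint64_t *ptr, uint64_t old, uint64_t new_)
{
	uint64_t prev = *ptr;

	if (prev == old)
		*ptr = new_;
	return prev;
}

int main(void)
{
	uint64_t v = 5, prev;

	prev = cmpxchg64_model(&v, 5, 7);	/* succeeds: v becomes 7 */
	printf("prev=%llu v=%llu\n", (unsigned long long)prev,
	       (unsigned long long)v);

	prev = cmpxchg64_model(&v, 5, 9);	/* fails: v stays 7 */
	printf("prev=%llu v=%llu\n", (unsigned long long)prev,
	       (unsigned long long)v);
	return 0;
}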
diff --git a/arch/parisc/include/asm/eisa_eeprom.h b/arch/parisc/include/asm/eisa_eeprom.h
index 8ce8b85ca588..5637ac962f8e 100644
--- a/arch/parisc/include/asm/eisa_eeprom.h
+++ b/arch/parisc/include/asm/eisa_eeprom.h
@@ -99,7 +99,7 @@ struct eeprom_eisa_slot_info
#define HPEE_MEMORY_DECODE_24BITS 0x04
#define HPEE_MEMORY_DECODE_32BITS 0x08
/* byte 2 and 3 are a 16bit LE value
- * containging the memory size in kilobytes */
+ * containing the memory size in kilobytes */
/* byte 4,5,6 are a 24bit LE value
* containing the memory base address */
@@ -135,7 +135,7 @@ struct eeprom_eisa_slot_info
#define HPEE_PORT_SHARED 0x40
#define HPEE_PORT_MORE 0x80
/* byte 1 and 2 is a 16bit LE value
- * conating the start port number */
+ * containing the start port number */
#define HPEE_PORT_INIT_MAX_LEN 60 /* in bytes here */
/* port init entry byte 0 */
diff --git a/arch/parisc/include/asm/ftrace.h b/arch/parisc/include/asm/ftrace.h
index 24cd81d58d70..d635c6b0269d 100644
--- a/arch/parisc/include/asm/ftrace.h
+++ b/arch/parisc/include/asm/ftrace.h
@@ -6,6 +6,8 @@ extern void mcount(void);
#define MCOUNT_INSN_SIZE 4
+extern unsigned long sys_call_table[];
+
extern unsigned long return_address(unsigned int);
#define ftrace_return_address(n) return_address(n)
diff --git a/arch/parisc/include/asm/futex.h b/arch/parisc/include/asm/futex.h
index 49df14805a9b..ac8bd586ace8 100644
--- a/arch/parisc/include/asm/futex.h
+++ b/arch/parisc/include/asm/futex.h
@@ -35,70 +35,57 @@ static inline int
futex_atomic_op_inuser (int encoded_op, u32 __user *uaddr)
{
unsigned long int flags;
- u32 val;
int op = (encoded_op >> 28) & 7;
int cmp = (encoded_op >> 24) & 15;
int oparg = (encoded_op << 8) >> 20;
int cmparg = (encoded_op << 20) >> 20;
- int oldval = 0, ret;
+ int oldval, ret;
+ u32 tmp;
+
if (encoded_op & (FUTEX_OP_OPARG_SHIFT << 28))
oparg = 1 << oparg;
if (!access_ok(VERIFY_WRITE, uaddr, sizeof(*uaddr)))
return -EFAULT;
+ _futex_spin_lock_irqsave(uaddr, &flags);
pagefault_disable();
- _futex_spin_lock_irqsave(uaddr, &flags);
+ ret = -EFAULT;
+ if (unlikely(get_user(oldval, uaddr) != 0))
+ goto out_pagefault_enable;
+
+ ret = 0;
+ tmp = oldval;
switch (op) {
case FUTEX_OP_SET:
- /* *(int *)UADDR2 = OPARG; */
- ret = get_user(oldval, uaddr);
- if (!ret)
- ret = put_user(oparg, uaddr);
+ tmp = oparg;
break;
case FUTEX_OP_ADD:
- /* *(int *)UADDR2 += OPARG; */
- ret = get_user(oldval, uaddr);
- if (!ret) {
- val = oldval + oparg;
- ret = put_user(val, uaddr);
- }
+ tmp += oparg;
break;
case FUTEX_OP_OR:
- /* *(int *)UADDR2 |= OPARG; */
- ret = get_user(oldval, uaddr);
- if (!ret) {
- val = oldval | oparg;
- ret = put_user(val, uaddr);
- }
+ tmp |= oparg;
break;
case FUTEX_OP_ANDN:
- /* *(int *)UADDR2 &= ~OPARG; */
- ret = get_user(oldval, uaddr);
- if (!ret) {
- val = oldval & ~oparg;
- ret = put_user(val, uaddr);
- }
+ tmp &= ~oparg;
break;
case FUTEX_OP_XOR:
- /* *(int *)UADDR2 ^= OPARG; */
- ret = get_user(oldval, uaddr);
- if (!ret) {
- val = oldval ^ oparg;
- ret = put_user(val, uaddr);
- }
+ tmp ^= oparg;
break;
default:
ret = -ENOSYS;
}
- _futex_spin_unlock_irqrestore(uaddr, &flags);
+ if (ret == 0 && unlikely(put_user(tmp, uaddr) != 0))
+ ret = -EFAULT;
+out_pagefault_enable:
pagefault_enable();
+ _futex_spin_unlock_irqrestore(uaddr, &flags);
- if (!ret) {
+ if (ret == 0) {
switch (cmp) {
case FUTEX_OP_CMP_EQ: ret = (oldval == cmparg); break;
case FUTEX_OP_CMP_NE: ret = (oldval != cmparg); break;
@@ -112,12 +99,10 @@ futex_atomic_op_inuser (int encoded_op, u32 __user *uaddr)
return ret;
}
-/* Non-atomic version */
static inline int
futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
u32 oldval, u32 newval)
{
- int ret;
u32 val;
unsigned long flags;
@@ -137,17 +122,20 @@ futex_atomic_cmpxchg_inatomic(u32 *uval, u32 __user *uaddr,
*/
_futex_spin_lock_irqsave(uaddr, &flags);
+ if (unlikely(get_user(val, uaddr) != 0)) {
+ _futex_spin_unlock_irqrestore(uaddr, &flags);
+ return -EFAULT;
+ }
- ret = get_user(val, uaddr);
-
- if (!ret && val == oldval)
- ret = put_user(newval, uaddr);
+ if (val == oldval && unlikely(put_user(newval, uaddr) != 0)) {
+ _futex_spin_unlock_irqrestore(uaddr, &flags);
+ return -EFAULT;
+ }
*uval = val;
-
_futex_spin_unlock_irqrestore(uaddr, &flags);
- return ret;
+ return 0;
}
#endif /*__KERNEL__*/
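
The rework above keeps the existing unpacking of encoded_op (operation in bits 31:28, comparison in bits 27:24, oparg and cmparg as sign-extended 12-bit fields) and only changes how the read-modify-write is carried out under the futex spinlock. A stand-alone sketch of just that decoding step, using the same shift arithmetic as the code above:

#include <stdio.h>

/* Same unpacking as futex_atomic_op_inuser(); the paired shifts
 * sign-extend the 12-bit oparg and cmparg fields. */
static void decode_futex_op(int encoded_op)
{
	int op = (encoded_op >> 28) & 7;
	int cmp = (encoded_op >> 24) & 15;
	int oparg = (encoded_op << 8) >> 20;
	int cmparg = (encoded_op << 20) >> 20;

	printf("op=%d cmp=%d oparg=%d cmparg=%d\n", op, cmp, oparg, cmparg);
}

int main(void)
{
	/* FUTEX_OP_ADD (1) with oparg 1, FUTEX_OP_CMP_EQ (0) with cmparg 0 */
	decode_futex_op((1 << 28) | (0 << 24) | (1 << 12) | 0);
	return 0;
}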
diff --git a/arch/parisc/include/asm/ldcw.h b/arch/parisc/include/asm/ldcw.h
index 8121aa6db2ff..8be707e1b6c7 100644
--- a/arch/parisc/include/asm/ldcw.h
+++ b/arch/parisc/include/asm/ldcw.h
@@ -40,7 +40,7 @@
memory to indicate to the compiler that the assembly code reads
or writes to items other than those listed in the input and output
operands. This may pessimize the code somewhat but __ldcw is
- usually used within code blocks surrounded by memory barriors. */
+ usually used within code blocks surrounded by memory barriers. */
#define __ldcw(a) ({ \
unsigned __ret; \
__asm__ __volatile__(__LDCW " 0(%1),%0" \
diff --git a/arch/parisc/include/asm/syscall.h b/arch/parisc/include/asm/syscall.h
index 637ce8d6f375..5e0b4e6bd99d 100644
--- a/arch/parisc/include/asm/syscall.h
+++ b/arch/parisc/include/asm/syscall.h
@@ -8,6 +8,8 @@
#include <linux/err.h>
#include <asm/ptrace.h>
+#define NR_syscalls (__NR_Linux_syscalls)
+
static inline long syscall_get_nr(struct task_struct *tsk,
struct pt_regs *regs)
{
@@ -33,12 +35,19 @@ static inline void syscall_get_arguments(struct task_struct *tsk,
args[1] = regs->gr[25];
case 1:
args[0] = regs->gr[26];
+ case 0:
break;
default:
BUG();
}
}
+static inline long syscall_get_return_value(struct task_struct *task,
+ struct pt_regs *regs)
+{
+ return regs->gr[28];
+}
+
static inline void syscall_set_return_value(struct task_struct *task,
struct pt_regs *regs,
int error, long val)
diff --git a/arch/parisc/include/asm/thread_info.h b/arch/parisc/include/asm/thread_info.h
index e96e693fd58c..7581330ea35b 100644
--- a/arch/parisc/include/asm/thread_info.h
+++ b/arch/parisc/include/asm/thread_info.h
@@ -55,6 +55,7 @@ struct thread_info {
#define TIF_SINGLESTEP 9 /* single stepping? */
#define TIF_BLOCKSTEP 10 /* branch stepping? */
#define TIF_SECCOMP 11 /* secure computing */
+#define TIF_SYSCALL_TRACEPOINT 12 /* syscall tracepoint instrumentation */
#define _TIF_SYSCALL_TRACE (1 << TIF_SYSCALL_TRACE)
#define _TIF_SIGPENDING (1 << TIF_SIGPENDING)
@@ -66,12 +67,13 @@ struct thread_info {
#define _TIF_SINGLESTEP (1 << TIF_SINGLESTEP)
#define _TIF_BLOCKSTEP (1 << TIF_BLOCKSTEP)
#define _TIF_SECCOMP (1 << TIF_SECCOMP)
+#define _TIF_SYSCALL_TRACEPOINT (1 << TIF_SYSCALL_TRACEPOINT)
#define _TIF_USER_WORK_MASK (_TIF_SIGPENDING | _TIF_NOTIFY_RESUME | \
_TIF_NEED_RESCHED)
#define _TIF_SYSCALL_TRACE_MASK (_TIF_SYSCALL_TRACE | _TIF_SINGLESTEP | \
_TIF_BLOCKSTEP | _TIF_SYSCALL_AUDIT | \
- _TIF_SECCOMP)
+ _TIF_SECCOMP | _TIF_SYSCALL_TRACEPOINT)
#ifdef CONFIG_64BIT
# ifdef CONFIG_COMPAT
diff --git a/arch/parisc/include/asm/uaccess.h b/arch/parisc/include/asm/uaccess.h
index 7955e43f3f3f..0f59fd9ca205 100644
--- a/arch/parisc/include/asm/uaccess.h
+++ b/arch/parisc/include/asm/uaccess.h
@@ -40,14 +40,10 @@ static inline long access_ok(int type, const void __user * addr,
#define get_user __get_user
#if !defined(CONFIG_64BIT)
-#define LDD_KERNEL(ptr) BUILD_BUG()
-#define LDD_USER(ptr) BUILD_BUG()
-#define STD_KERNEL(x, ptr) __put_kernel_asm64(x, ptr)
+#define LDD_USER(ptr) __get_user_asm64(ptr)
#define STD_USER(x, ptr) __put_user_asm64(x, ptr)
#else
-#define LDD_KERNEL(ptr) __get_kernel_asm("ldd", ptr)
#define LDD_USER(ptr) __get_user_asm("ldd", ptr)
-#define STD_KERNEL(x, ptr) __put_kernel_asm("std", x, ptr)
#define STD_USER(x, ptr) __put_user_asm("std", x, ptr)
#endif
@@ -80,70 +76,70 @@ struct exception_data {
unsigned long fault_addr;
};
+/*
+ * load_sr2() preloads the space register %%sr2 - based on the value of
+ * get_fs() - with either a value of 0 to access kernel space (KERNEL_DS which
+ * is 0), or with the current value of %%sr3 to access user space (USER_DS)
+ * memory. The following __get_user_asm() and __put_user_asm() functions have
+ * %%sr2 hard-coded to access the requested memory.
+ */
+#define load_sr2() \
+ __asm__(" or,= %0,%%r0,%%r0\n\t" \
+ " mfsp %%sr3,%0\n\t" \
+ " mtsp %0,%%sr2\n\t" \
+ : : "r"(get_fs()) : )
+
#define __get_user(x, ptr) \
({ \
register long __gu_err __asm__ ("r8") = 0; \
register long __gu_val __asm__ ("r9") = 0; \
\
- if (segment_eq(get_fs(), KERNEL_DS)) { \
- switch (sizeof(*(ptr))) { \
- case 1: __get_kernel_asm("ldb", ptr); break; \
- case 2: __get_kernel_asm("ldh", ptr); break; \
- case 4: __get_kernel_asm("ldw", ptr); break; \
- case 8: LDD_KERNEL(ptr); break; \
- default: BUILD_BUG(); break; \
- } \
- } \
- else { \
- switch (sizeof(*(ptr))) { \
+ load_sr2(); \
+ switch (sizeof(*(ptr))) { \
case 1: __get_user_asm("ldb", ptr); break; \
case 2: __get_user_asm("ldh", ptr); break; \
case 4: __get_user_asm("ldw", ptr); break; \
case 8: LDD_USER(ptr); break; \
default: BUILD_BUG(); break; \
- } \
} \
\
(x) = (__force __typeof__(*(ptr))) __gu_val; \
__gu_err; \
})
-#define __get_kernel_asm(ldx, ptr) \
- __asm__("\n1:\t" ldx "\t0(%2),%0\n\t" \
+#define __get_user_asm(ldx, ptr) \
+ __asm__("\n1:\t" ldx "\t0(%%sr2,%2),%0\n\t" \
ASM_EXCEPTIONTABLE_ENTRY(1b, fixup_get_user_skip_1)\
: "=r"(__gu_val), "=r"(__gu_err) \
: "r"(ptr), "1"(__gu_err) \
: "r1");
-#define __get_user_asm(ldx, ptr) \
- __asm__("\n1:\t" ldx "\t0(%%sr3,%2),%0\n\t" \
- ASM_EXCEPTIONTABLE_ENTRY(1b, fixup_get_user_skip_1)\
- : "=r"(__gu_val), "=r"(__gu_err) \
+#if !defined(CONFIG_64BIT)
+
+#define __get_user_asm64(ptr) \
+ __asm__("\n1:\tldw 0(%%sr2,%2),%0" \
+ "\n2:\tldw 4(%%sr2,%2),%R0\n\t" \
+ ASM_EXCEPTIONTABLE_ENTRY(1b, fixup_get_user_skip_2)\
+ ASM_EXCEPTIONTABLE_ENTRY(2b, fixup_get_user_skip_1)\
+ : "=r"(__gu_val), "=r"(__gu_err) \
: "r"(ptr), "1"(__gu_err) \
: "r1");
+#endif /* !defined(CONFIG_64BIT) */
+
+
#define __put_user(x, ptr) \
({ \
register long __pu_err __asm__ ("r8") = 0; \
__typeof__(*(ptr)) __x = (__typeof__(*(ptr)))(x); \
\
- if (segment_eq(get_fs(), KERNEL_DS)) { \
- switch (sizeof(*(ptr))) { \
- case 1: __put_kernel_asm("stb", __x, ptr); break; \
- case 2: __put_kernel_asm("sth", __x, ptr); break; \
- case 4: __put_kernel_asm("stw", __x, ptr); break; \
- case 8: STD_KERNEL(__x, ptr); break; \
- default: BUILD_BUG(); break; \
- } \
- } \
- else { \
- switch (sizeof(*(ptr))) { \
+ load_sr2(); \
+ switch (sizeof(*(ptr))) { \
case 1: __put_user_asm("stb", __x, ptr); break; \
case 2: __put_user_asm("sth", __x, ptr); break; \
case 4: __put_user_asm("stw", __x, ptr); break; \
case 8: STD_USER(__x, ptr); break; \
default: BUILD_BUG(); break; \
- } \
} \
\
__pu_err; \
@@ -159,17 +155,9 @@ struct exception_data {
* r8/r9 are already listed as err/val.
*/
-#define __put_kernel_asm(stx, x, ptr) \
- __asm__ __volatile__ ( \
- "\n1:\t" stx "\t%2,0(%1)\n\t" \
- ASM_EXCEPTIONTABLE_ENTRY(1b, fixup_put_user_skip_1)\
- : "=r"(__pu_err) \
- : "r"(ptr), "r"(x), "0"(__pu_err) \
- : "r1")
-
#define __put_user_asm(stx, x, ptr) \
__asm__ __volatile__ ( \
- "\n1:\t" stx "\t%2,0(%%sr3,%1)\n\t" \
+ "\n1:\t" stx "\t%2,0(%%sr2,%1)\n\t" \
ASM_EXCEPTIONTABLE_ENTRY(1b, fixup_put_user_skip_1)\
: "=r"(__pu_err) \
: "r"(ptr), "r"(x), "0"(__pu_err) \
@@ -178,21 +166,10 @@ struct exception_data {
#if !defined(CONFIG_64BIT)
-#define __put_kernel_asm64(__val, ptr) do { \
- __asm__ __volatile__ ( \
- "\n1:\tstw %2,0(%1)" \
- "\n2:\tstw %R2,4(%1)\n\t" \
- ASM_EXCEPTIONTABLE_ENTRY(1b, fixup_put_user_skip_2)\
- ASM_EXCEPTIONTABLE_ENTRY(2b, fixup_put_user_skip_1)\
- : "=r"(__pu_err) \
- : "r"(ptr), "r"(__val), "0"(__pu_err) \
- : "r1"); \
-} while (0)
-
#define __put_user_asm64(__val, ptr) do { \
__asm__ __volatile__ ( \
- "\n1:\tstw %2,0(%%sr3,%1)" \
- "\n2:\tstw %R2,4(%%sr3,%1)\n\t" \
+ "\n1:\tstw %2,0(%%sr2,%1)" \
+ "\n2:\tstw %R2,4(%%sr2,%1)\n\t" \
ASM_EXCEPTIONTABLE_ENTRY(1b, fixup_put_user_skip_2)\
ASM_EXCEPTIONTABLE_ENTRY(2b, fixup_put_user_skip_1)\
: "=r"(__pu_err) \
diff --git a/arch/parisc/include/uapi/asm/pdc.h b/arch/parisc/include/uapi/asm/pdc.h
index 702498f7705b..0609ff117f67 100644
--- a/arch/parisc/include/uapi/asm/pdc.h
+++ b/arch/parisc/include/uapi/asm/pdc.h
@@ -59,7 +59,7 @@
#define PDC_MODEL_GET_BOOT__OP 8 /* returns boot test options */
#define PDC_MODEL_SET_BOOT__OP 9 /* set boot test options */
-#define PA89_INSTRUCTION_SET 0x4 /* capatibilies returned */
+#define PA89_INSTRUCTION_SET 0x4 /* capabilities returned */
#define PA90_INSTRUCTION_SET 0x8
#define PDC_CACHE 5 /* return/set cache (& TLB) info*/
diff --git a/arch/parisc/include/uapi/asm/ptrace.h b/arch/parisc/include/uapi/asm/ptrace.h
index c4fa6c8b9ad9..02ce2eb99a7f 100644
--- a/arch/parisc/include/uapi/asm/ptrace.h
+++ b/arch/parisc/include/uapi/asm/ptrace.h
@@ -13,6 +13,11 @@
* N.B. gdb/strace care about the size and offsets within this
* structure. If you change things, you may break object compatibility
* for those applications.
+ *
+ * Please do NOT use this structure for future programs, but use
+ * user_regs_struct (see below) instead.
+ *
+ * It can be accessed through PTRACE_PEEKUSR/PTRACE_POKEUSR only.
*/
struct pt_regs {
@@ -33,6 +38,45 @@ struct pt_regs {
unsigned long ipsw; /* CR22 */
};
+/**
+ * struct user_regs_struct - User general purpose registers
+ *
+ * This is the user-visible general purpose register state structure
+ * which is used to define the elf_gregset_t.
+ *
+ * It can be accessed through PTRACE_GETREGSET with NT_PRSTATUS
+ * and through PTRACE_GETREGS.
+ */
+struct user_regs_struct {
+ unsigned long gr[32]; /* PSW is in gr[0] */
+ unsigned long sr[8];
+ unsigned long iaoq[2];
+ unsigned long iasq[2];
+ unsigned long sar; /* CR11 */
+ unsigned long iir; /* CR19 */
+ unsigned long isr; /* CR20 */
+ unsigned long ior; /* CR21 */
+ unsigned long ipsw; /* CR22 */
+ unsigned long cr0;
+ unsigned long cr24, cr25, cr26, cr27, cr28, cr29, cr30, cr31;
+ unsigned long cr8, cr9, cr12, cr13, cr10, cr15;
+ unsigned long _pad[80-64]; /* pad to ELF_NGREG (80) */
+};
+
+/**
+ * struct user_fp_struct - User floating point registers
+ *
+ * This is the user-visible floating point register state structure.
+ * It uses the same layout and size as elf_fpregset_t.
+ *
+ * It can be accessed through PTRACE_GETREGSET with NT_PRFPREG
+ * and through PTRACE_GETFPREGS.
+ */
+struct user_fp_struct {
+ __u64 fr[32];
+};
+
+
/*
* The numbers chosen here are somewhat arbitrary but absolutely MUST
* not overlap with any of the number assigned in <linux/ptrace.h>.
@@ -43,5 +87,9 @@ struct pt_regs {
*/
#define PTRACE_SINGLEBLOCK 12 /* resume execution until next branch */
+#define PTRACE_GETREGS 18
+#define PTRACE_SETREGS 19
+#define PTRACE_GETFPREGS 14
+#define PTRACE_SETFPREGS 15
#endif /* _UAPI_PARISC_PTRACE_H */
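
The new PTRACE_GETREGS/PTRACE_SETREGS and PTRACE_GETFPREGS/PTRACE_SETFPREGS requests expose the structures above directly, alongside the regset path via PTRACE_GETREGSET. A minimal user-space tracer sketch for the new request; the request number (18) and register layout are copied from this header, while the local names are specific to the sketch and it is only meaningful on a parisc kernel carrying this patch:

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>

#define PARISC_PTRACE_GETREGS	18	/* value from the header above */

struct parisc_user_regs {		/* mirrors user_regs_struct above */
	unsigned long gr[32], sr[8], iaoq[2], iasq[2];
	unsigned long sar, iir, isr, ior, ipsw;
	unsigned long cr0;
	unsigned long cr24, cr25, cr26, cr27, cr28, cr29, cr30, cr31;
	unsigned long cr8, cr9, cr12, cr13, cr10, cr15;
	unsigned long _pad[80 - 64];
};

int main(void)
{
	pid_t child = fork();

	if (child < 0)
		return 1;

	if (child == 0) {
		ptrace(PTRACE_TRACEME, 0, NULL, NULL);
		raise(SIGSTOP);			/* hand control to the tracer */
		_exit(0);
	}

	waitpid(child, NULL, 0);		/* child is now ptrace-stopped */

	struct parisc_user_regs regs;

	if (ptrace(PARISC_PTRACE_GETREGS, child, NULL, &regs) == 0)
		printf("child iaoq[0]=%#lx sp(gr30)=%#lx\n",
		       regs.iaoq[0], regs.gr[30]);
	else
		perror("PTRACE_GETREGS");

	ptrace(PTRACE_CONT, child, NULL, NULL);
	waitpid(child, NULL, 0);
	return 0;
}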
diff --git a/arch/parisc/include/uapi/asm/unistd.h b/arch/parisc/include/uapi/asm/unistd.h
index cc0ce92c93c7..a9b9407f38f7 100644
--- a/arch/parisc/include/uapi/asm/unistd.h
+++ b/arch/parisc/include/uapi/asm/unistd.h
@@ -102,7 +102,7 @@
#define __NR_uselib (__NR_Linux + 86)
#define __NR_swapon (__NR_Linux + 87)
#define __NR_reboot (__NR_Linux + 88)
-#define __NR_mmap2 (__NR_Linux + 89)
+#define __NR_mmap2 (__NR_Linux + 89)
#define __NR_mmap (__NR_Linux + 90)
#define __NR_munmap (__NR_Linux + 91)
#define __NR_truncate (__NR_Linux + 92)
@@ -114,7 +114,7 @@
#define __NR_recv (__NR_Linux + 98)
#define __NR_statfs (__NR_Linux + 99)
#define __NR_fstatfs (__NR_Linux + 100)
-#define __NR_stat64 (__NR_Linux + 101)
+#define __NR_stat64 (__NR_Linux + 101)
/* #define __NR_socketcall (__NR_Linux + 102) */
#define __NR_syslog (__NR_Linux + 103)
#define __NR_setitimer (__NR_Linux + 104)
@@ -140,17 +140,17 @@
#define __NR_adjtimex (__NR_Linux + 124)
#define __NR_mprotect (__NR_Linux + 125)
#define __NR_sigprocmask (__NR_Linux + 126)
-#define __NR_create_module (__NR_Linux + 127)
+#define __NR_create_module (__NR_Linux + 127) /* not used */
#define __NR_init_module (__NR_Linux + 128)
#define __NR_delete_module (__NR_Linux + 129)
-#define __NR_get_kernel_syms (__NR_Linux + 130)
+#define __NR_get_kernel_syms (__NR_Linux + 130) /* not used */
#define __NR_quotactl (__NR_Linux + 131)
#define __NR_getpgid (__NR_Linux + 132)
#define __NR_fchdir (__NR_Linux + 133)
#define __NR_bdflush (__NR_Linux + 134)
#define __NR_sysfs (__NR_Linux + 135)
#define __NR_personality (__NR_Linux + 136)
-#define __NR_afs_syscall (__NR_Linux + 137) /* Syscall for Andrew File System */
+#define __NR_afs_syscall (__NR_Linux + 137) /* not used */
#define __NR_setfsuid (__NR_Linux + 138)
#define __NR_setfsgid (__NR_Linux + 139)
#define __NR__llseek (__NR_Linux + 140)
@@ -180,9 +180,9 @@
#define __NR_setresuid (__NR_Linux + 164)
#define __NR_getresuid (__NR_Linux + 165)
#define __NR_sigaltstack (__NR_Linux + 166)
-#define __NR_query_module (__NR_Linux + 167)
+#define __NR_query_module (__NR_Linux + 167) /* not used */
#define __NR_poll (__NR_Linux + 168)
-#define __NR_nfsservctl (__NR_Linux + 169)
+#define __NR_nfsservctl (__NR_Linux + 169) /* not used */
#define __NR_setresgid (__NR_Linux + 170)
#define __NR_getresgid (__NR_Linux + 171)
#define __NR_prctl (__NR_Linux + 172)
@@ -209,18 +209,16 @@
#define __NR_shmdt (__NR_Linux + 193)
#define __NR_shmget (__NR_Linux + 194)
#define __NR_shmctl (__NR_Linux + 195)
-
-#define __NR_getpmsg (__NR_Linux + 196) /* Somebody *wants* streams? */
-#define __NR_putpmsg (__NR_Linux + 197)
-
+#define __NR_getpmsg (__NR_Linux + 196) /* not used */
+#define __NR_putpmsg (__NR_Linux + 197) /* not used */
#define __NR_lstat64 (__NR_Linux + 198)
#define __NR_truncate64 (__NR_Linux + 199)
#define __NR_ftruncate64 (__NR_Linux + 200)
#define __NR_getdents64 (__NR_Linux + 201)
#define __NR_fcntl64 (__NR_Linux + 202)
-#define __NR_attrctl (__NR_Linux + 203)
-#define __NR_acl_get (__NR_Linux + 204)
-#define __NR_acl_set (__NR_Linux + 205)
+#define __NR_attrctl (__NR_Linux + 203) /* not used */
+#define __NR_acl_get (__NR_Linux + 204) /* not used */
+#define __NR_acl_set (__NR_Linux + 205) /* not used */
#define __NR_gettid (__NR_Linux + 206)
#define __NR_readahead (__NR_Linux + 207)
#define __NR_tkill (__NR_Linux + 208)
@@ -228,8 +226,8 @@
#define __NR_futex (__NR_Linux + 210)
#define __NR_sched_setaffinity (__NR_Linux + 211)
#define __NR_sched_getaffinity (__NR_Linux + 212)
-#define __NR_set_thread_area (__NR_Linux + 213)
-#define __NR_get_thread_area (__NR_Linux + 214)
+#define __NR_set_thread_area (__NR_Linux + 213) /* not used */
+#define __NR_get_thread_area (__NR_Linux + 214) /* not used */
#define __NR_io_setup (__NR_Linux + 215)
#define __NR_io_destroy (__NR_Linux + 216)
#define __NR_io_getevents (__NR_Linux + 217)
@@ -278,7 +276,7 @@
#define __NR_mbind (__NR_Linux + 260)
#define __NR_get_mempolicy (__NR_Linux + 261)
#define __NR_set_mempolicy (__NR_Linux + 262)
-#define __NR_vserver (__NR_Linux + 263)
+#define __NR_vserver (__NR_Linux + 263) /* not used */
#define __NR_add_key (__NR_Linux + 264)
#define __NR_request_key (__NR_Linux + 265)
#define __NR_keyctl (__NR_Linux + 266)
@@ -318,7 +316,7 @@
#define __NR_kexec_load (__NR_Linux + 300)
#define __NR_utimensat (__NR_Linux + 301)
#define __NR_signalfd (__NR_Linux + 302)
-#define __NR_timerfd (__NR_Linux + 303)
+#define __NR_timerfd (__NR_Linux + 303) /* not used */
#define __NR_eventfd (__NR_Linux + 304)
#define __NR_fallocate (__NR_Linux + 305)
#define __NR_timerfd_create (__NR_Linux + 306)
diff --git a/arch/parisc/kernel/entry.S b/arch/parisc/kernel/entry.S
index 39127d3e70e5..baa3d9d6e971 100644
--- a/arch/parisc/kernel/entry.S
+++ b/arch/parisc/kernel/entry.S
@@ -667,7 +667,7 @@
* boundary
*/
- .text
+ .section .text.hot
.align 2048
ENTRY(fault_vector_20)
@@ -2019,6 +2019,7 @@ ftrace_stub:
.procend
ENDPROC(mcount)
+#ifdef CONFIG_FUNCTION_GRAPH_TRACER
.align 8
.globl return_to_handler
.type return_to_handler, @function
@@ -2040,11 +2041,17 @@ parisc_return_to_handler:
#endif
/* call ftrace_return_to_handler(0) */
+ .import ftrace_return_to_handler,code
+ load32 ftrace_return_to_handler,%ret0
+ load32 .Lftrace_ret,%r2
#ifdef CONFIG_64BIT
ldo -16(%sp),%ret1 /* Reference param save area */
+ bve (%ret0)
+#else
+ bv %r0(%ret0)
#endif
- BL ftrace_return_to_handler,%r2
ldi 0,%r26
+.Lftrace_ret:
copy %ret0,%rp
/* restore original return values */
@@ -2062,6 +2069,8 @@ parisc_return_to_handler:
.procend
ENDPROC(return_to_handler)
+#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
+
#endif /* CONFIG_FUNCTION_TRACER */
#ifdef CONFIG_IRQSTACKS
diff --git a/arch/parisc/kernel/ftrace.c b/arch/parisc/kernel/ftrace.c
index b13f9ec6f294..a828a0adf52c 100644
--- a/arch/parisc/kernel/ftrace.c
+++ b/arch/parisc/kernel/ftrace.c
@@ -18,12 +18,15 @@
#include <asm/ftrace.h>
+#define __hot __attribute__ ((__section__ (".text.hot")))
+
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
/*
* Hook the return address and push it in the stack of return addrs
* in current thread info.
*/
-static void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr)
+static void __hot prepare_ftrace_return(unsigned long *parent,
+ unsigned long self_addr)
{
unsigned long old;
struct ftrace_graph_ent trace;
@@ -53,7 +56,7 @@ static void prepare_ftrace_return(unsigned long *parent, unsigned long self_addr
}
#endif /* CONFIG_FUNCTION_GRAPH_TRACER */
-void notrace ftrace_function_trampoline(unsigned long parent,
+void notrace __hot ftrace_function_trampoline(unsigned long parent,
unsigned long self_addr,
unsigned long org_sp_gr3)
{
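
The hunks above move the return hook into .text.hot and wire the graph-tracer return path through ftrace_return_to_handler(). As a rough user-space sketch of the underlying idea only, not the kernel implementation: the tracer saves the real return address, diverts the return through a trampoline, and the trampoline hands control back afterwards. All names below are hypothetical.

#include <stdio.h>

typedef void (*retfn_t)(void);

static retfn_t saved_return;	/* one-deep "stack of return addrs" */

static void real_caller_continuation(void)
{
	puts("back in the real caller");
}

static void return_trampoline(void)
{
	/* the traced function "returned" here: report the exit, then go back */
	puts("trampoline: function exit seen by the tracer");
	saved_return();
}

static void prepare_return_hook(retfn_t *parent)
{
	saved_return = *parent;		/* remember the real return address */
	*parent = return_trampoline;	/* divert the return path */
}

int main(void)
{
	retfn_t return_slot = real_caller_continuation;

	prepare_return_hook(&return_slot);
	return_slot();			/* the "traced function" returns */
	return 0;
}
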
diff --git a/arch/parisc/kernel/process.c b/arch/parisc/kernel/process.c
index 809905a811ed..40639439d8b3 100644
--- a/arch/parisc/kernel/process.c
+++ b/arch/parisc/kernel/process.c
@@ -144,13 +144,6 @@ void machine_power_off(void)
void (*pm_power_off)(void) = machine_power_off;
EXPORT_SYMBOL(pm_power_off);
-/*
- * Free current thread data structures etc..
- */
-void exit_thread(void)
-{
-}
-
void flush_thread(void)
{
/* Only needs to handle fpu stuff or perf monitors.
diff --git a/arch/parisc/kernel/ptrace.c b/arch/parisc/kernel/ptrace.c
index 8fb81a391599..b5458b37fc5b 100644
--- a/arch/parisc/kernel/ptrace.c
+++ b/arch/parisc/kernel/ptrace.c
@@ -4,18 +4,20 @@
* Copyright (C) 2000 Hewlett-Packard Co, Linuxcare Inc.
* Copyright (C) 2000 Matthew Wilcox <matthew@wil.cx>
* Copyright (C) 2000 David Huggins-Daines <dhd@debian.org>
- * Copyright (C) 2008 Helge Deller <deller@gmx.de>
+ * Copyright (C) 2008-2016 Helge Deller <deller@gmx.de>
*/
#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/mm.h>
#include <linux/smp.h>
+#include <linux/elf.h>
#include <linux/errno.h>
#include <linux/ptrace.h>
#include <linux/tracehook.h>
#include <linux/user.h>
#include <linux/personality.h>
+#include <linux/regset.h>
#include <linux/security.h>
#include <linux/seccomp.h>
#include <linux/compat.h>
@@ -30,6 +32,17 @@
/* PSW bits we allow the debugger to modify */
#define USER_PSW_BITS (PSW_N | PSW_B | PSW_V | PSW_CB)
+#define CREATE_TRACE_POINTS
+#include <trace/events/syscalls.h>
+
+/*
+ * These are our native regset flavors.
+ */
+enum parisc_regset {
+ REGSET_GENERAL,
+ REGSET_FP
+};
+
/*
* Called by kernel/ptrace.c when detaching..
*
@@ -114,6 +127,7 @@ void user_enable_block_step(struct task_struct *task)
long arch_ptrace(struct task_struct *child, long request,
unsigned long addr, unsigned long data)
{
+ unsigned long __user *datap = (unsigned long __user *)data;
unsigned long tmp;
long ret = -EIO;
@@ -126,7 +140,7 @@ long arch_ptrace(struct task_struct *child, long request,
addr >= sizeof(struct pt_regs))
break;
tmp = *(unsigned long *) ((char *) task_regs(child) + addr);
- ret = put_user(tmp, (unsigned long __user *) data);
+ ret = put_user(tmp, datap);
break;
/* Write the word at location addr in the USER area. This will need
@@ -165,6 +179,34 @@ long arch_ptrace(struct task_struct *child, long request,
}
break;
+ case PTRACE_GETREGS: /* Get all gp regs from the child. */
+ return copy_regset_to_user(child,
+ task_user_regset_view(current),
+ REGSET_GENERAL,
+ 0, sizeof(struct user_regs_struct),
+ datap);
+
+ case PTRACE_SETREGS: /* Set all gp regs in the child. */
+ return copy_regset_from_user(child,
+ task_user_regset_view(current),
+ REGSET_GENERAL,
+ 0, sizeof(struct user_regs_struct),
+ datap);
+
+ case PTRACE_GETFPREGS: /* Get the child FPU state. */
+ return copy_regset_to_user(child,
+ task_user_regset_view(current),
+ REGSET_FP,
+ 0, sizeof(struct user_fp_struct),
+ datap);
+
+ case PTRACE_SETFPREGS: /* Set the child FPU state. */
+ return copy_regset_from_user(child,
+ task_user_regset_view(current),
+ REGSET_FP,
+ 0, sizeof(struct user_fp_struct),
+ datap);
+
default:
ret = ptrace_request(child, request, addr, data);
break;
@@ -283,6 +325,10 @@ long do_syscall_trace_enter(struct pt_regs *regs)
regs->gr[20] = -1UL;
goto out;
}
+#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
+ if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
+ trace_sys_enter(regs, regs->gr[20]);
+#endif
#ifdef CONFIG_64BIT
if (!is_compat_task())
@@ -311,6 +357,324 @@ void do_syscall_trace_exit(struct pt_regs *regs)
audit_syscall_exit(regs);
+#ifdef CONFIG_HAVE_SYSCALL_TRACEPOINTS
+ if (unlikely(test_thread_flag(TIF_SYSCALL_TRACEPOINT)))
+ trace_sys_exit(regs, regs->gr[20]);
+#endif
+
if (stepping || test_thread_flag(TIF_SYSCALL_TRACE))
tracehook_report_syscall_exit(regs, stepping);
}
+
+
+/*
+ * regset functions.
+ */
+
+static int fpr_get(struct task_struct *target,
+ const struct user_regset *regset,
+ unsigned int pos, unsigned int count,
+ void *kbuf, void __user *ubuf)
+{
+ struct pt_regs *regs = task_regs(target);
+ __u64 *k = kbuf;
+ __u64 __user *u = ubuf;
+ __u64 reg;
+
+ pos /= sizeof(reg);
+ count /= sizeof(reg);
+
+ if (kbuf)
+ for (; count > 0 && pos < ELF_NFPREG; --count)
+ *k++ = regs->fr[pos++];
+ else
+ for (; count > 0 && pos < ELF_NFPREG; --count)
+ if (__put_user(regs->fr[pos++], u++))
+ return -EFAULT;
+
+ kbuf = k;
+ ubuf = u;
+ pos *= sizeof(reg);
+ count *= sizeof(reg);
+ return user_regset_copyout_zero(&pos, &count, &kbuf, &ubuf,
+ ELF_NFPREG * sizeof(reg), -1);
+}
+
+static int fpr_set(struct task_struct *target,
+ const struct user_regset *regset,
+ unsigned int pos, unsigned int count,
+ const void *kbuf, const void __user *ubuf)
+{
+ struct pt_regs *regs = task_regs(target);
+ const __u64 *k = kbuf;
+ const __u64 __user *u = ubuf;
+ __u64 reg;
+
+ pos /= sizeof(reg);
+ count /= sizeof(reg);
+
+ if (kbuf)
+ for (; count > 0 && pos < ELF_NFPREG; --count)
+ regs->fr[pos++] = *k++;
+ else
+ for (; count > 0 && pos < ELF_NFPREG; --count) {
+ if (__get_user(reg, u++))
+ return -EFAULT;
+ regs->fr[pos++] = reg;
+ }
+
+ kbuf = k;
+ ubuf = u;
+ pos *= sizeof(reg);
+ count *= sizeof(reg);
+ return user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf,
+ ELF_NFPREG * sizeof(reg), -1);
+}
+
+#define RI(reg) (offsetof(struct user_regs_struct,reg) / sizeof(long))
+
+static unsigned long get_reg(struct pt_regs *regs, int num)
+{
+ switch (num) {
+ case RI(gr[0]) ... RI(gr[31]): return regs->gr[num - RI(gr[0])];
+ case RI(sr[0]) ... RI(sr[7]): return regs->sr[num - RI(sr[0])];
+ case RI(iasq[0]): return regs->iasq[0];
+ case RI(iasq[1]): return regs->iasq[1];
+ case RI(iaoq[0]): return regs->iaoq[0];
+ case RI(iaoq[1]): return regs->iaoq[1];
+ case RI(sar): return regs->sar;
+ case RI(iir): return regs->iir;
+ case RI(isr): return regs->isr;
+ case RI(ior): return regs->ior;
+ case RI(ipsw): return regs->ipsw;
+ case RI(cr27): return regs->cr27;
+ case RI(cr0): return mfctl(0);
+ case RI(cr24): return mfctl(24);
+ case RI(cr25): return mfctl(25);
+ case RI(cr26): return mfctl(26);
+ case RI(cr28): return mfctl(28);
+ case RI(cr29): return mfctl(29);
+ case RI(cr30): return mfctl(30);
+ case RI(cr31): return mfctl(31);
+ case RI(cr8): return mfctl(8);
+ case RI(cr9): return mfctl(9);
+ case RI(cr12): return mfctl(12);
+ case RI(cr13): return mfctl(13);
+ case RI(cr10): return mfctl(10);
+ case RI(cr15): return mfctl(15);
+ default: return 0;
+ }
+}
+
+static void set_reg(struct pt_regs *regs, int num, unsigned long val)
+{
+ switch (num) {
+ case RI(gr[0]): /*
+ * PSW is in gr[0].
+ * Allow writing to Nullify, Divide-step-correction,
+ * and carry/borrow bits.
+ * BEWARE, if you set N, and then single step, it won't
+ * stop on the nullified instruction.
+ */
+ val &= USER_PSW_BITS;
+ regs->gr[0] &= ~USER_PSW_BITS;
+ regs->gr[0] |= val;
+ return;
+ case RI(gr[1]) ... RI(gr[31]):
+ regs->gr[num - RI(gr[0])] = val;
+ return;
+ case RI(iaoq[0]):
+ case RI(iaoq[1]):
+ regs->iaoq[num - RI(iaoq[0])] = val;
+ return;
+ case RI(sar): regs->sar = val;
+ return;
+ default: return;
+#if 0
+ /* do not allow to change any of the following registers (yet) */
+ case RI(sr[0]) ... RI(sr[7]): return regs->sr[num - RI(sr[0])];
+ case RI(iasq[0]): return regs->iasq[0];
+ case RI(iasq[1]): return regs->iasq[1];
+ case RI(iir): return regs->iir;
+ case RI(isr): return regs->isr;
+ case RI(ior): return regs->ior;
+ case RI(ipsw): return regs->ipsw;
+ case RI(cr27): return regs->cr27;
+ case cr0, cr24, cr25, cr26, cr27, cr28, cr29, cr30, cr31;
+ case cr8, cr9, cr12, cr13, cr10, cr15;
+#endif
+ }
+}
+
+static int gpr_get(struct task_struct *target,
+ const struct user_regset *regset,
+ unsigned int pos, unsigned int count,
+ void *kbuf, void __user *ubuf)
+{
+ struct pt_regs *regs = task_regs(target);
+ unsigned long *k = kbuf;
+ unsigned long __user *u = ubuf;
+ unsigned long reg;
+
+ pos /= sizeof(reg);
+ count /= sizeof(reg);
+
+ if (kbuf)
+ for (; count > 0 && pos < ELF_NGREG; --count)
+ *k++ = get_reg(regs, pos++);
+ else
+ for (; count > 0 && pos < ELF_NGREG; --count)
+ if (__put_user(get_reg(regs, pos++), u++))
+ return -EFAULT;
+ kbuf = k;
+ ubuf = u;
+ pos *= sizeof(reg);
+ count *= sizeof(reg);
+ return user_regset_copyout_zero(&pos, &count, &kbuf, &ubuf,
+ ELF_NGREG * sizeof(reg), -1);
+}
+
+static int gpr_set(struct task_struct *target,
+ const struct user_regset *regset,
+ unsigned int pos, unsigned int count,
+ const void *kbuf, const void __user *ubuf)
+{
+ struct pt_regs *regs = task_regs(target);
+ const unsigned long *k = kbuf;
+ const unsigned long __user *u = ubuf;
+ unsigned long reg;
+
+ pos /= sizeof(reg);
+ count /= sizeof(reg);
+
+ if (kbuf)
+ for (; count > 0 && pos < ELF_NGREG; --count)
+ set_reg(regs, pos++, *k++);
+ else
+ for (; count > 0 && pos < ELF_NGREG; --count) {
+ if (__get_user(reg, u++))
+ return -EFAULT;
+ set_reg(regs, pos++, reg);
+ }
+
+ kbuf = k;
+ ubuf = u;
+ pos *= sizeof(reg);
+ count *= sizeof(reg);
+ return user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf,
+ ELF_NGREG * sizeof(reg), -1);
+}
+
+static const struct user_regset native_regsets[] = {
+ [REGSET_GENERAL] = {
+ .core_note_type = NT_PRSTATUS, .n = ELF_NGREG,
+ .size = sizeof(long), .align = sizeof(long),
+ .get = gpr_get, .set = gpr_set
+ },
+ [REGSET_FP] = {
+ .core_note_type = NT_PRFPREG, .n = ELF_NFPREG,
+ .size = sizeof(__u64), .align = sizeof(__u64),
+ .get = fpr_get, .set = fpr_set
+ }
+};
+
+static const struct user_regset_view user_parisc_native_view = {
+ .name = "parisc", .e_machine = ELF_ARCH, .ei_osabi = ELFOSABI_LINUX,
+ .regsets = native_regsets, .n = ARRAY_SIZE(native_regsets)
+};
+
+#ifdef CONFIG_64BIT
+#include <linux/compat.h>
+
+static int gpr32_get(struct task_struct *target,
+ const struct user_regset *regset,
+ unsigned int pos, unsigned int count,
+ void *kbuf, void __user *ubuf)
+{
+ struct pt_regs *regs = task_regs(target);
+ compat_ulong_t *k = kbuf;
+ compat_ulong_t __user *u = ubuf;
+ compat_ulong_t reg;
+
+ pos /= sizeof(reg);
+ count /= sizeof(reg);
+
+ if (kbuf)
+ for (; count > 0 && pos < ELF_NGREG; --count)
+ *k++ = get_reg(regs, pos++);
+ else
+ for (; count > 0 && pos < ELF_NGREG; --count)
+ if (__put_user((compat_ulong_t) get_reg(regs, pos++), u++))
+ return -EFAULT;
+
+ kbuf = k;
+ ubuf = u;
+ pos *= sizeof(reg);
+ count *= sizeof(reg);
+ return user_regset_copyout_zero(&pos, &count, &kbuf, &ubuf,
+ ELF_NGREG * sizeof(reg), -1);
+}
+
+static int gpr32_set(struct task_struct *target,
+ const struct user_regset *regset,
+ unsigned int pos, unsigned int count,
+ const void *kbuf, const void __user *ubuf)
+{
+ struct pt_regs *regs = task_regs(target);
+ const compat_ulong_t *k = kbuf;
+ const compat_ulong_t __user *u = ubuf;
+ compat_ulong_t reg;
+
+ pos /= sizeof(reg);
+ count /= sizeof(reg);
+
+ if (kbuf)
+ for (; count > 0 && pos < ELF_NGREG; --count)
+ set_reg(regs, pos++, *k++);
+ else
+ for (; count > 0 && pos < ELF_NGREG; --count) {
+ if (__get_user(reg, u++))
+ return -EFAULT;
+ set_reg(regs, pos++, reg);
+ }
+
+ kbuf = k;
+ ubuf = u;
+ pos *= sizeof(reg);
+ count *= sizeof(reg);
+ return user_regset_copyin_ignore(&pos, &count, &kbuf, &ubuf,
+ ELF_NGREG * sizeof(reg), -1);
+}
+
+/*
+ * These are the regset flavors matching the 32bit native set.
+ */
+static const struct user_regset compat_regsets[] = {
+ [REGSET_GENERAL] = {
+ .core_note_type = NT_PRSTATUS, .n = ELF_NGREG,
+ .size = sizeof(compat_long_t), .align = sizeof(compat_long_t),
+ .get = gpr32_get, .set = gpr32_set
+ },
+ [REGSET_FP] = {
+ .core_note_type = NT_PRFPREG, .n = ELF_NFPREG,
+ .size = sizeof(__u64), .align = sizeof(__u64),
+ .get = fpr_get, .set = fpr_set
+ }
+};
+
+static const struct user_regset_view user_parisc_compat_view = {
+ .name = "parisc", .e_machine = EM_PARISC, .ei_osabi = ELFOSABI_LINUX,
+ .regsets = compat_regsets, .n = ARRAY_SIZE(compat_regsets)
+};
+#endif /* CONFIG_64BIT */
+
+const struct user_regset_view *task_user_regset_view(struct task_struct *task)
+{
+ BUILD_BUG_ON(sizeof(struct user_regs_struct)/sizeof(long) != ELF_NGREG);
+ BUILD_BUG_ON(sizeof(struct user_fp_struct)/sizeof(__u64) != ELF_NFPREG);
+#ifdef CONFIG_64BIT
+ if (is_compat_task())
+ return &user_parisc_compat_view;
+#endif
+ return &user_parisc_native_view;
+}
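
The new PTRACE_GETREGS/SETREGS/GETFPREGS/SETFPREGS requests are backed by the regset views added above. A hedged sketch of how a tracer might consume PTRACE_GETREGS follows; that struct user_regs_struct is reachable via <sys/user.h> and that unsupported kernels fail with EIO are assumptions for the example, not taken from this patch.

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>		/* assumed to provide struct user_regs_struct */
#include <sys/wait.h>

int main(void)
{
	pid_t child = fork();

	if (child == 0) {
		ptrace(PTRACE_TRACEME, 0, NULL, NULL);
		raise(SIGSTOP);			/* wait for the tracer */
		execlp("true", "true", (char *)NULL);
		_exit(1);
	}

	waitpid(child, NULL, 0);		/* child is now stopped */

	struct user_regs_struct regs;
	if (ptrace(PTRACE_GETREGS, child, NULL, &regs) == -1) {
		perror("PTRACE_GETREGS");	/* kernels without this support fail here */
		return 1;
	}
	printf("fetched %zu bytes of general registers\n", sizeof(regs));

	ptrace(PTRACE_DETACH, child, NULL, NULL);
	waitpid(child, NULL, 0);
	return 0;
}
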
diff --git a/arch/parisc/kernel/syscall.S b/arch/parisc/kernel/syscall.S
index 57b4836b7ecd..d03422e5f188 100644
--- a/arch/parisc/kernel/syscall.S
+++ b/arch/parisc/kernel/syscall.S
@@ -912,6 +912,7 @@ END(lws_table)
.align 8
ENTRY(sys_call_table)
+ .export sys_call_table,data
#include "syscall_table.S"
END(sys_call_table)
diff --git a/arch/parisc/kernel/time.c b/arch/parisc/kernel/time.c
index 400acac0a304..58dd6801f5be 100644
--- a/arch/parisc/kernel/time.c
+++ b/arch/parisc/kernel/time.c
@@ -38,6 +38,18 @@
static unsigned long clocktick __read_mostly; /* timer cycles per tick */
+#ifndef CONFIG_64BIT
+/*
+ * The processor-internal cycle counter (Control Register 16) is used as time
+ * source for the sched_clock() function. This register is 64bit wide on a
+ * 64-bit kernel and 32bit on a 32-bit kernel. Since sched_clock() always
+ * requires a 64bit counter we emulate on the 32-bit kernel the higher 32bits
+ * with a per-cpu variable which we increase every time the counter
+ * wraps-around (which happens every ~4 secounds).
+ */
+static DEFINE_PER_CPU(unsigned long, cr16_high_32_bits);
+#endif
+
/*
* We keep time on PA-RISC Linux by using the Interval Timer which is
* a pair of registers; one is read-only and one is write-only; both
@@ -108,6 +120,12 @@ irqreturn_t __irq_entry timer_interrupt(int irq, void *dev_id)
*/
mtctl(next_tick, 16);
+#if !defined(CONFIG_64BIT)
+ /* check for overflow on a 32bit kernel (every ~4 seconds). */
+ if (unlikely(next_tick < now))
+ this_cpu_inc(cr16_high_32_bits);
+#endif
+
/* Skip one clocktick on purpose if we missed next_tick.
* The new CR16 must be "later" than current CR16 otherwise
* itimer would not fire until CR16 wrapped - e.g 4 seconds
@@ -219,6 +237,12 @@ void __init start_cpu_itimer(void)
unsigned int cpu = smp_processor_id();
unsigned long next_tick = mfctl(16) + clocktick;
+#if defined(CONFIG_HAVE_UNSTABLE_SCHED_CLOCK) && defined(CONFIG_64BIT)
+ /* With multiple 64bit CPUs online, the cr16s are not synchronized. */
+ if (cpu != 0)
+ clear_sched_clock_stable();
+#endif
+
mtctl(next_tick, 16); /* kick off Interval Timer (CR16) */
per_cpu(cpu_data, cpu).it_value = next_tick;
@@ -246,15 +270,52 @@ void read_persistent_clock(struct timespec *ts)
}
}
+
+/*
+ * sched_clock() framework
+ */
+
+static u32 cyc2ns_mul __read_mostly;
+static u32 cyc2ns_shift __read_mostly;
+
+u64 sched_clock(void)
+{
+ u64 now;
+
+ /* Get current cycle counter (Control Register 16). */
+#ifdef CONFIG_64BIT
+ now = mfctl(16);
+#else
+ now = mfctl(16) + (((u64) this_cpu_read(cr16_high_32_bits)) << 32);
+#endif
+
+ /* return the value in ns (cycles_2_ns) */
+ return mul_u64_u32_shr(now, cyc2ns_mul, cyc2ns_shift);
+}
+
+
+/*
+ * timer interrupt and sched_clock() initialization
+ */
+
void __init time_init(void)
{
unsigned long current_cr16_khz;
+ current_cr16_khz = PAGE0->mem_10msec/10; /* kHz */
clocktick = (100 * PAGE0->mem_10msec) / HZ;
+ /* calculate mult/shift values for cr16 */
+ clocks_calc_mult_shift(&cyc2ns_mul, &cyc2ns_shift, current_cr16_khz,
+ NSEC_PER_MSEC, 0);
+
+#if defined(CONFIG_HAVE_UNSTABLE_SCHED_CLOCK) && defined(CONFIG_64BIT)
+ /* At bootup only one 64bit CPU is online and cr16 is "stable" */
+ set_sched_clock_stable();
+#endif
+
start_cpu_itimer(); /* get CPU 0 started */
/* register at clocksource framework */
- current_cr16_khz = PAGE0->mem_10msec/10; /* kHz */
clocksource_register_khz(&clocksource_cr16, current_cr16_khz);
}
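
sched_clock() above converts CR16 cycles to nanoseconds with a precomputed multiplier and shift via mul_u64_u32_shr(now, cyc2ns_mul, cyc2ns_shift). The stand-alone sketch below shows the same cycles-to-ns idea; the factor derivation is a simplified stand-in for clocks_calc_mult_shift(), and the 250 MHz counter frequency is made up for the demo.

#include <stdio.h>
#include <stdint.h>

/*
 * Pick mult/shift so that cycles * mult >> shift ~= cycles * to / from.
 * Simplified stand-in for clocks_calc_mult_shift(); assumes to/from < 2^31.
 */
static void calc_mult_shift(uint32_t *mult, uint32_t *shift,
			    uint64_t from, uint64_t to)
{
	uint32_t sft;
	uint64_t m = 0;

	for (sft = 32; sft > 0; sft--) {
		m = ((to << sft) + from / 2) / from;
		if (m <= 0xffffffffULL)
			break;
	}
	*mult = (uint32_t)m;
	*shift = sft;
}

/* same math as mul_u64_u32_shr(): full 64x32 multiply, then shift down */
static uint64_t cycles_to_ns(uint64_t cycles, uint32_t mult, uint32_t shift)
{
	return (uint64_t)(((__uint128_t)cycles * mult) >> shift);
}

int main(void)
{
	uint64_t cr16_khz = 250000;	/* hypothetical 250 MHz cycle counter */
	uint32_t mult, shift;

	calc_mult_shift(&mult, &shift, cr16_khz, 1000000 /* NSEC_PER_MSEC */);

	/* one second worth of cycles should convert to ~1e9 ns */
	printf("mult=%u shift=%u -> %llu ns\n", mult, shift,
	       (unsigned long long)cycles_to_ns(cr16_khz * 1000, mult, shift));
	return 0;
}
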
diff --git a/arch/parisc/lib/bitops.c b/arch/parisc/lib/bitops.c
index 187118841af1..8e45b0a97abf 100644
--- a/arch/parisc/lib/bitops.c
+++ b/arch/parisc/lib/bitops.c
@@ -55,11 +55,10 @@ unsigned long __xchg8(char x, char *ptr)
}
-#ifdef CONFIG_64BIT
-unsigned long __cmpxchg_u64(volatile unsigned long *ptr, unsigned long old, unsigned long new)
+u64 __cmpxchg_u64(volatile u64 *ptr, u64 old, u64 new)
{
unsigned long flags;
- unsigned long prev;
+ u64 prev;
_atomic_spin_lock_irqsave(ptr, flags);
if ((prev = *ptr) == old)
@@ -67,7 +66,6 @@ unsigned long __cmpxchg_u64(volatile unsigned long *ptr, unsigned long old, unsi
_atomic_spin_unlock_irqrestore(ptr, flags);
return prev;
}
-#endif
unsigned long __cmpxchg_u32(volatile unsigned int *ptr, unsigned int old, unsigned int new)
{
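
__cmpxchg_u64() above emulates a 64-bit compare-and-swap under the hashed atomic spinlock. A user-space sketch of the same pattern, with a pthread mutex standing in for _atomic_spin_lock_irqsave(), might look like this:

#include <stdio.h>
#include <stdint.h>
#include <pthread.h>

static pthread_mutex_t emu_lock = PTHREAD_MUTEX_INITIALIZER;

static uint64_t emu_cmpxchg_u64(volatile uint64_t *ptr, uint64_t old, uint64_t new)
{
	uint64_t prev;

	pthread_mutex_lock(&emu_lock);		/* stand-in for the hashed spinlock */
	prev = *ptr;
	if (prev == old)
		*ptr = new;			/* only store on a match */
	pthread_mutex_unlock(&emu_lock);
	return prev;				/* caller compares prev with old */
}

int main(void)
{
	volatile uint64_t v = 1;

	printf("cmpxchg(1->2): prev=%llu now=%llu\n",
	       (unsigned long long)emu_cmpxchg_u64(&v, 1, 2),
	       (unsigned long long)v);
	printf("cmpxchg(1->3): prev=%llu now=%llu\n",
	       (unsigned long long)emu_cmpxchg_u64(&v, 1, 3),
	       (unsigned long long)v);
	return 0;
}
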
diff --git a/arch/parisc/math-emu/fpudispatch.c b/arch/parisc/math-emu/fpudispatch.c
index 673b73e8420d..18df1237c93c 100644
--- a/arch/parisc/math-emu/fpudispatch.c
+++ b/arch/parisc/math-emu/fpudispatch.c
@@ -184,7 +184,7 @@ static void parisc_linux_get_fpu_type(u_int fpregs[])
/*
* this routine will decode the excepting floating point instruction and
- * call the approiate emulation routine.
+ * call the appropriate emulation routine.
* It is called by decode_fpu with the following parameters:
* fpudispatch(current_ir, unimplemented_code, 0, &Fpu_register)
* where current_ir is the instruction to be emulated,
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index 7cd32c038286..01f7464d9fea 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -116,6 +116,8 @@ config PPC
select GENERIC_ATOMIC64 if PPC32
select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
select HAVE_PERF_EVENTS
+ select HAVE_PERF_REGS
+ select HAVE_PERF_USER_STACK_DUMP
select HAVE_REGS_AND_STACK_ACCESS_API
select HAVE_HW_BREAKPOINT if PERF_EVENTS && PPC_BOOK3S_64
select ARCH_WANT_IPC_PARSE_VERSION
@@ -126,7 +128,7 @@ config PPC
select IRQ_FORCED_THREADING
select HAVE_RCU_TABLE_FREE if SMP
select HAVE_SYSCALL_TRACEPOINTS
- select HAVE_BPF_JIT
+ select HAVE_CBPF_JIT
select HAVE_ARCH_JUMP_LABEL
select ARCH_HAVE_NMI_SAFE_CMPXCHG
select ARCH_HAS_GCOV_PROFILE_ALL
@@ -153,6 +155,7 @@ config PPC
select NO_BOOTMEM
select HAVE_GENERIC_RCU_GUP
select HAVE_PERF_EVENTS_NMI if PPC64
+ select HAVE_NMI if PERF_EVENTS
select EDAC_SUPPORT
select EDAC_ATOMIC_SCRUB
select ARCH_HAS_DMA_SET_COHERENT_MASK
@@ -160,6 +163,7 @@ config PPC
select HAVE_ARCH_SECCOMP_FILTER
select ARCH_HAS_UBSAN_SANITIZE_ALL
select ARCH_SUPPORTS_DEFERRED_STRUCT_PAGE_INIT
+ select HAVE_LIVEPATCH if HAVE_DYNAMIC_FTRACE_WITH_REGS
config GENERIC_CSUM
def_bool CPU_LITTLE_ENDIAN
@@ -605,9 +609,9 @@ endchoice
config FORCE_MAX_ZONEORDER
int "Maximum zone order"
- range 9 64 if PPC64 && PPC_64K_PAGES
+ range 8 9 if PPC64 && PPC_64K_PAGES
default "9" if PPC64 && PPC_64K_PAGES
- range 13 64 if PPC64 && !PPC_64K_PAGES
+ range 9 13 if PPC64 && !PPC_64K_PAGES
default "13" if PPC64 && !PPC_64K_PAGES
range 9 64 if PPC32 && PPC_16K_PAGES
default "9" if PPC32 && PPC_16K_PAGES
@@ -794,7 +798,6 @@ config 4xx_SOC
config FSL_LBC
bool "Freescale Local Bus support"
- depends on FSL_SOC
help
Enables reporting of errors from the Freescale local bus
controller. Also contains some common code used by
@@ -1107,3 +1110,5 @@ config PPC_LIB_RHEAP
bool
source "arch/powerpc/kvm/Kconfig"
+
+source "kernel/livepatch/Kconfig"
diff --git a/arch/powerpc/Kconfig.debug b/arch/powerpc/Kconfig.debug
index 638f9ce740f5..d3fcf7e64e3a 100644
--- a/arch/powerpc/Kconfig.debug
+++ b/arch/powerpc/Kconfig.debug
@@ -19,14 +19,6 @@ config PPC_WERROR
depends on !PPC_DISABLE_WERROR
default y
-config STRICT_MM_TYPECHECKS
- bool "Do extra type checking on mm types"
- default n
- help
- This option turns on extra type checking for some mm related types.
-
- If you don't know what this means, say N.
-
config PRINT_STACK_DEPTH
int "Stack depth to print" if DEBUG_KERNEL
default 64
diff --git a/arch/powerpc/boot/Makefile b/arch/powerpc/boot/Makefile
index 61165101342c..8fe78a3efc92 100644
--- a/arch/powerpc/boot/Makefile
+++ b/arch/powerpc/boot/Makefile
@@ -362,9 +362,6 @@ $(obj)/cuImage.initrd.%: vmlinux $(obj)/%.dtb $(wrapperbits)
$(obj)/cuImage.%: vmlinux $(obj)/%.dtb $(wrapperbits)
$(call if_changed,wrap,cuboot-$*,,$(obj)/$*.dtb)
-$(obj)/cuImage.%: vmlinux $(obj)/fsl/%.dtb $(wrapperbits)
- $(call if_changed,wrap,cuboot-$*,,$(obj)/fsl/$*.dtb)
-
$(obj)/simpleImage.initrd.%: vmlinux $(obj)/%.dtb $(wrapperbits)
$(call if_changed,wrap,simpleboot-$*,,$(obj)/$*.dtb,$(obj)/ramdisk.image.gz)
@@ -381,6 +378,9 @@ $(obj)/treeImage.%: vmlinux $(obj)/%.dtb $(wrapperbits)
$(obj)/%.dtb: $(src)/dts/%.dts FORCE
$(call if_changed_dep,dtc)
+$(obj)/%.dtb: $(src)/dts/fsl/%.dts FORCE
+ $(call if_changed_dep,dtc)
+
# If there isn't a platform selected then just strip the vmlinux.
ifeq (,$(image-y))
image-y := vmlinux.strip
diff --git a/arch/powerpc/boot/dts/canyonlands.dts b/arch/powerpc/boot/dts/canyonlands.dts
index 3dc75deafbb3..549c24c4c388 100644
--- a/arch/powerpc/boot/dts/canyonlands.dts
+++ b/arch/powerpc/boot/dts/canyonlands.dts
@@ -190,12 +190,21 @@
/* DMA */ 0x2 &UIC0 0xc 0x4>;
};
+ AHBDMA: dma@bffd0800 {
+ compatible = "snps,dma-spear1340";
+ reg = <4 0xbffd0800 0x400>;
+ interrupt-parent = <&UIC3>;
+ interrupts = <0x5 0x4>;
+ #dma-cells = <3>;
+ };
+
SATA0: sata@bffd1000 {
compatible = "amcc,sata-460ex";
- reg = <4 0xbffd1000 0x800 4 0xbffd0800 0x400>;
+ reg = <4 0xbffd1000 0x800>;
interrupt-parent = <&UIC3>;
- interrupts = <0x0 0x4 /* SATA */
- 0x5 0x4>; /* AHBDMA */
+ interrupts = <0x0 0x4>;
+ dmas = <&AHBDMA 0 1 0>;
+ dma-names = "sata-dma";
};
POB0: opb {
diff --git a/arch/powerpc/boot/dts/fsl/gef_ppc9a.dts b/arch/powerpc/boot/dts/fsl/gef_ppc9a.dts
index 0424fc2bd0e0..c88d4ef9e4f7 100644
--- a/arch/powerpc/boot/dts/fsl/gef_ppc9a.dts
+++ b/arch/powerpc/boot/dts/fsl/gef_ppc9a.dts
@@ -211,6 +211,10 @@
0x0 0x00400000>;
};
};
+
+ pci1: pcie@fef09000 {
+ status = "disabled";
+ };
};
/include/ "mpc8641si-post.dtsi"
diff --git a/arch/powerpc/boot/dts/fsl/gef_sbc310.dts b/arch/powerpc/boot/dts/fsl/gef_sbc310.dts
index 84b3d38f880e..838515798cce 100644
--- a/arch/powerpc/boot/dts/fsl/gef_sbc310.dts
+++ b/arch/powerpc/boot/dts/fsl/gef_sbc310.dts
@@ -24,10 +24,6 @@
model = "GEF_SBC310";
compatible = "gef,sbc310";
- aliases {
- pci1 = &pci1;
- };
-
memory {
device_type = "memory";
reg = <0x0 0x40000000>; // set by uboot
@@ -223,29 +219,11 @@
};
pci1: pcie@fef09000 {
- compatible = "fsl,mpc8641-pcie";
- device_type = "pci";
- #size-cells = <2>;
- #address-cells = <3>;
reg = <0xfef09000 0x1000>;
- bus-range = <0x0 0xff>;
ranges = <0x02000000 0x0 0xc0000000 0xc0000000 0x0 0x20000000
0x01000000 0x0 0x00000000 0xfe400000 0x0 0x00400000>;
- clock-frequency = <100000000>;
- interrupts = <0x19 0x2 0 0>;
- interrupt-map-mask = <0xf800 0x0 0x0 0x7>;
- interrupt-map = <
- 0x0000 0x0 0x0 0x1 &mpic 0x4 0x2
- 0x0000 0x0 0x0 0x2 &mpic 0x5 0x2
- 0x0000 0x0 0x0 0x3 &mpic 0x6 0x2
- 0x0000 0x0 0x0 0x4 &mpic 0x7 0x2
- >;
pcie@0 {
- reg = <0 0 0 0 0>;
- #size-cells = <2>;
- #address-cells = <3>;
- device_type = "pci";
ranges = <0x02000000 0x0 0xc0000000
0x02000000 0x0 0xc0000000
0x0 0x20000000
diff --git a/arch/powerpc/boot/dts/fsl/gef_sbc610.dts b/arch/powerpc/boot/dts/fsl/gef_sbc610.dts
index 974446acce23..ff423ab424f2 100644
--- a/arch/powerpc/boot/dts/fsl/gef_sbc610.dts
+++ b/arch/powerpc/boot/dts/fsl/gef_sbc610.dts
@@ -209,6 +209,10 @@
0x0 0x00400000>;
};
};
+
+ pci1: pcie@fef09000 {
+ status = "disabled";
+ };
};
/include/ "mpc8641si-post.dtsi"
diff --git a/arch/powerpc/boot/dts/fsl/mpc8641_hpcn.dts b/arch/powerpc/boot/dts/fsl/mpc8641_hpcn.dts
index 554001f2e96a..11bea3e6a43f 100644
--- a/arch/powerpc/boot/dts/fsl/mpc8641_hpcn.dts
+++ b/arch/powerpc/boot/dts/fsl/mpc8641_hpcn.dts
@@ -15,10 +15,6 @@
model = "MPC8641HPCN";
compatible = "fsl,mpc8641hpcn";
- aliases {
- pci1 = &pci1;
- };
-
memory {
device_type = "memory";
reg = <0x00000000 0x40000000>; // 1G at 0x0
@@ -359,29 +355,11 @@
};
pci1: pcie@ffe09000 {
- compatible = "fsl,mpc8641-pcie";
- device_type = "pci";
- #size-cells = <2>;
- #address-cells = <3>;
reg = <0xffe09000 0x1000>;
- bus-range = <0 0xff>;
ranges = <0x02000000 0x0 0xa0000000 0xa0000000 0x0 0x20000000
0x01000000 0x0 0x00000000 0xffc10000 0x0 0x00010000>;
- clock-frequency = <100000000>;
- interrupts = <25 2 0 0>;
- interrupt-map-mask = <0xf800 0 0 7>;
- interrupt-map = <
- /* IDSEL 0x0 */
- 0x0000 0 0 1 &mpic 4 1
- 0x0000 0 0 2 &mpic 5 1
- 0x0000 0 0 3 &mpic 6 1
- 0x0000 0 0 4 &mpic 7 1
- >;
+
pcie@0 {
- reg = <0 0 0 0 0>;
- #size-cells = <2>;
- #address-cells = <3>;
- device_type = "pci";
ranges = <0x02000000 0x0 0xa0000000
0x02000000 0x0 0xa0000000
0x0 0x20000000
diff --git a/arch/powerpc/boot/dts/fsl/mpc8641_hpcn_36b.dts b/arch/powerpc/boot/dts/fsl/mpc8641_hpcn_36b.dts
index fec58671a6d6..7ff62046a9ea 100644
--- a/arch/powerpc/boot/dts/fsl/mpc8641_hpcn_36b.dts
+++ b/arch/powerpc/boot/dts/fsl/mpc8641_hpcn_36b.dts
@@ -17,10 +17,6 @@
#address-cells = <2>;
#size-cells = <2>;
- aliases {
- pci1 = &pci1;
- };
-
memory {
device_type = "memory";
reg = <0x0 0x00000000 0x0 0x40000000>; // 1G at 0x0
@@ -326,29 +322,11 @@
};
pci1: pcie@fffe09000 {
- compatible = "fsl,mpc8641-pcie";
- device_type = "pci";
- #size-cells = <2>;
- #address-cells = <3>;
reg = <0x0f 0xffe09000 0x0 0x1000>;
- bus-range = <0x0 0xff>;
ranges = <0x02000000 0x0 0xe0000000 0x0c 0x20000000 0x0 0x20000000
0x01000000 0x0 0x00000000 0x0f 0xffc10000 0x0 0x00010000>;
- clock-frequency = <100000000>;
- interrupts = <25 2 0 0>;
- interrupt-map-mask = <0xf800 0 0 7>;
- interrupt-map = <
- /* IDSEL 0x0 */
- 0x0000 0 0 1 &mpic 4 1
- 0x0000 0 0 2 &mpic 5 1
- 0x0000 0 0 3 &mpic 6 1
- 0x0000 0 0 4 &mpic 7 1
- >;
+
pcie@0 {
- reg = <0 0 0 0 0>;
- #size-cells = <2>;
- #address-cells = <3>;
- device_type = "pci";
ranges = <0x02000000 0x0 0xe0000000
0x02000000 0x0 0xe0000000
0x0 0x20000000
diff --git a/arch/powerpc/boot/dts/fsl/mpc8641si-post.dtsi b/arch/powerpc/boot/dts/fsl/mpc8641si-post.dtsi
index 70889d8e8850..eeb7c65d5f22 100644
--- a/arch/powerpc/boot/dts/fsl/mpc8641si-post.dtsi
+++ b/arch/powerpc/boot/dts/fsl/mpc8641si-post.dtsi
@@ -102,19 +102,46 @@
bus-range = <0x0 0xff>;
clock-frequency = <100000000>;
interrupts = <24 2 0 0>;
- interrupt-map-mask = <0xf800 0x0 0x0 0x7>;
- interrupt-map = <
- 0x0000 0x0 0x0 0x1 &mpic 0x0 0x1
- 0x0000 0x0 0x0 0x2 &mpic 0x1 0x1
- 0x0000 0x0 0x0 0x3 &mpic 0x2 0x1
- 0x0000 0x0 0x0 0x4 &mpic 0x3 0x1
- >;
+ pcie@0 {
+ reg = <0 0 0 0 0>;
+ #interrupt-cells = <1>;
+ #size-cells = <2>;
+ #address-cells = <3>;
+ device_type = "pci";
+ interrupts = <24 2 0 0>;
+ interrupt-map-mask = <0xf800 0x0 0x0 0x7>;
+ interrupt-map = <
+ 0x0000 0x0 0x0 0x1 &mpic 0x0 0x1 0x0 0x0
+ 0x0000 0x0 0x0 0x2 &mpic 0x1 0x1 0x0 0x0
+ 0x0000 0x0 0x0 0x3 &mpic 0x2 0x1 0x0 0x0
+ 0x0000 0x0 0x0 0x4 &mpic 0x3 0x1 0x0 0x0
+ >;
+ };
+};
+
+&pci1 {
+ compatible = "fsl,mpc8641-pcie";
+ device_type = "pci";
+ #size-cells = <2>;
+ #address-cells = <3>;
+ bus-range = <0x0 0xff>;
+ clock-frequency = <100000000>;
+ interrupts = <25 2 0 0>;
pcie@0 {
reg = <0 0 0 0 0>;
+ #interrupt-cells = <1>;
#size-cells = <2>;
#address-cells = <3>;
device_type = "pci";
+ interrupts = <25 2 0 0>;
+ interrupt-map-mask = <0xf800 0x0 0x0 0x7>;
+ interrupt-map = <
+ 0x0000 0x0 0x0 0x1 &mpic 0x4 0x1 0x0 0x0
+ 0x0000 0x0 0x0 0x2 &mpic 0x5 0x1 0x0 0x0
+ 0x0000 0x0 0x0 0x3 &mpic 0x6 0x1 0x0 0x0
+ 0x0000 0x0 0x0 0x4 &mpic 0x7 0x1 0x0 0x0
+ >;
};
};
diff --git a/arch/powerpc/boot/dts/fsl/mpc8641si-pre.dtsi b/arch/powerpc/boot/dts/fsl/mpc8641si-pre.dtsi
index 9e03328561d3..7c6db6f7c12e 100644
--- a/arch/powerpc/boot/dts/fsl/mpc8641si-pre.dtsi
+++ b/arch/powerpc/boot/dts/fsl/mpc8641si-pre.dtsi
@@ -25,6 +25,7 @@
serial0 = &serial0;
serial1 = &serial1;
pci0 = &pci0;
+ pci1 = &pci1;
};
cpus {
diff --git a/arch/powerpc/boot/dts/fsl/sbc8641d.dts b/arch/powerpc/boot/dts/fsl/sbc8641d.dts
index 0a9733cd418d..75870a124903 100644
--- a/arch/powerpc/boot/dts/fsl/sbc8641d.dts
+++ b/arch/powerpc/boot/dts/fsl/sbc8641d.dts
@@ -19,10 +19,6 @@
model = "SBC8641D";
compatible = "wind,sbc8641";
- aliases {
- pci1 = &pci1;
- };
-
memory {
device_type = "memory";
reg = <0x00000000 0x20000000>; // 512M at 0x0
@@ -165,30 +161,11 @@
};
pci1: pcie@f8009000 {
- compatible = "fsl,mpc8641-pcie";
- device_type = "pci";
- #size-cells = <2>;
- #address-cells = <3>;
reg = <0xf8009000 0x1000>;
- bus-range = <0 0xff>;
ranges = <0x02000000 0x0 0xa0000000 0xa0000000 0x0 0x20000000
0x01000000 0x0 0x00000000 0xe3000000 0x0 0x00100000>;
- clock-frequency = <100000000>;
- interrupts = <25 2 0 0>;
- interrupt-map-mask = <0xf800 0 0 7>;
- interrupt-map = <
- /* IDSEL 0x0 */
- 0x0000 0 0 1 &mpic 4 1
- 0x0000 0 0 2 &mpic 5 1
- 0x0000 0 0 3 &mpic 6 1
- 0x0000 0 0 4 &mpic 7 1
- >;
pcie@0 {
- reg = <0 0 0 0 0>;
- #size-cells = <2>;
- #address-cells = <3>;
- device_type = "pci";
ranges = <0x02000000 0x0 0xa0000000
0x02000000 0x0 0xa0000000
0x0 0x20000000
diff --git a/arch/powerpc/boot/dts/fsl/t1023si-post.dtsi b/arch/powerpc/boot/dts/fsl/t1023si-post.dtsi
index 99e421df79d4..6e0b4892a740 100644
--- a/arch/powerpc/boot/dts/fsl/t1023si-post.dtsi
+++ b/arch/powerpc/boot/dts/fsl/t1023si-post.dtsi
@@ -263,7 +263,7 @@
};
rcpm: global-utilities@e2000 {
- compatible = "fsl,t1023-rcpm", "fsl,qoriq-rcpm-2.0";
+ compatible = "fsl,t1023-rcpm", "fsl,qoriq-rcpm-2.1";
reg = <0xe2000 0x1000>;
};
diff --git a/arch/powerpc/boot/dts/fsl/t1040si-post.dtsi b/arch/powerpc/boot/dts/fsl/t1040si-post.dtsi
index e0f4da554774..507649ece0a1 100644
--- a/arch/powerpc/boot/dts/fsl/t1040si-post.dtsi
+++ b/arch/powerpc/boot/dts/fsl/t1040si-post.dtsi
@@ -472,7 +472,7 @@
};
rcpm: global-utilities@e2000 {
- compatible = "fsl,t1040-rcpm", "fsl,qoriq-rcpm-2.0";
+ compatible = "fsl,t1040-rcpm", "fsl,qoriq-rcpm-2.1";
reg = <0xe2000 0x1000>;
};
diff --git a/arch/powerpc/boot/dts/fsl/t104xrdb.dtsi b/arch/powerpc/boot/dts/fsl/t104xrdb.dtsi
index 72691ef102ee..7c4afdb44b46 100644
--- a/arch/powerpc/boot/dts/fsl/t104xrdb.dtsi
+++ b/arch/powerpc/boot/dts/fsl/t104xrdb.dtsi
@@ -109,7 +109,7 @@
flash@0 {
#address-cells = <1>;
#size-cells = <1>;
- compatible = "micron,n25q512a", "jedec,spi-nor";
+ compatible = "micron,n25q512ax3", "jedec,spi-nor";
reg = <0>;
spi-max-frequency = <10000000>; /* input clock */
};
diff --git a/arch/powerpc/boot/dts/fsl/t208xrdb.dtsi b/arch/powerpc/boot/dts/fsl/t208xrdb.dtsi
index dc9326875778..ff87e67c70da 100644
--- a/arch/powerpc/boot/dts/fsl/t208xrdb.dtsi
+++ b/arch/powerpc/boot/dts/fsl/t208xrdb.dtsi
@@ -113,7 +113,7 @@
flash@0 {
#address-cells = <1>;
#size-cells = <1>;
- compatible = "micron,n25q512a", "jedec,spi-nor";
+ compatible = "micron,n25q512ax3", "jedec,spi-nor";
reg = <0>;
spi-max-frequency = <10000000>; /* input clock */
};
diff --git a/arch/powerpc/include/asm/book3s/32/hash.h b/arch/powerpc/include/asm/book3s/32/hash.h
index 264b754d65b0..880db13a2e9f 100644
--- a/arch/powerpc/include/asm/book3s/32/hash.h
+++ b/arch/powerpc/include/asm/book3s/32/hash.h
@@ -39,8 +39,5 @@
#define _PMD_PRESENT_MASK (PAGE_MASK)
#define _PMD_BAD (~PAGE_MASK)
-/* Hash table based platforms need atomic updates of the linux PTE */
-#define PTE_ATOMIC_UPDATES 1
-
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_BOOK3S_32_HASH_H */
diff --git a/arch/powerpc/include/asm/book3s/32/mmu-hash.h b/arch/powerpc/include/asm/book3s/32/mmu-hash.h
index 16f513e5cbd7..b82e063494dd 100644
--- a/arch/powerpc/include/asm/book3s/32/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/32/mmu-hash.h
@@ -1,5 +1,5 @@
-#ifndef _ASM_POWERPC_MMU_HASH32_H_
-#define _ASM_POWERPC_MMU_HASH32_H_
+#ifndef _ASM_POWERPC_BOOK3S_32_MMU_HASH_H_
+#define _ASM_POWERPC_BOOK3S_32_MMU_HASH_H_
/*
* 32-bit hash table MMU support
*/
@@ -90,4 +90,4 @@ typedef struct {
#define mmu_virtual_psize MMU_PAGE_4K
#define mmu_linear_psize MMU_PAGE_256M
-#endif /* _ASM_POWERPC_MMU_HASH32_H_ */
+#endif /* _ASM_POWERPC_BOOK3S_32_MMU_HASH_H_ */
diff --git a/arch/powerpc/include/asm/book3s/32/pgalloc.h b/arch/powerpc/include/asm/book3s/32/pgalloc.h
new file mode 100644
index 000000000000..a2350194fc76
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/32/pgalloc.h
@@ -0,0 +1,109 @@
+#ifndef _ASM_POWERPC_BOOK3S_32_PGALLOC_H
+#define _ASM_POWERPC_BOOK3S_32_PGALLOC_H
+
+#include <linux/threads.h>
+
+/* For 32-bit, all levels of page tables are just drawn from get_free_page() */
+#define MAX_PGTABLE_INDEX_SIZE 0
+
+extern void __bad_pte(pmd_t *pmd);
+
+extern pgd_t *pgd_alloc(struct mm_struct *mm);
+extern void pgd_free(struct mm_struct *mm, pgd_t *pgd);
+
+/*
+ * We don't have any real pmds, and this code never triggers because
+ * the pgd will always be present.
+ */
+/* #define pmd_alloc_one(mm,address) ({ BUG(); ((pmd_t *)2); }) */
+#define pmd_free(mm, x) do { } while (0)
+#define __pmd_free_tlb(tlb,x,a) do { } while (0)
+/* #define pgd_populate(mm, pmd, pte) BUG() */
+
+#ifndef CONFIG_BOOKE
+
+static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp,
+ pte_t *pte)
+{
+ *pmdp = __pmd(__pa(pte) | _PMD_PRESENT);
+}
+
+static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmdp,
+ pgtable_t pte_page)
+{
+ *pmdp = __pmd((page_to_pfn(pte_page) << PAGE_SHIFT) | _PMD_PRESENT);
+}
+
+#define pmd_pgtable(pmd) pmd_page(pmd)
+#else
+
+static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmdp,
+ pte_t *pte)
+{
+ *pmdp = __pmd((unsigned long)pte | _PMD_PRESENT);
+}
+
+static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmdp,
+ pgtable_t pte_page)
+{
+ *pmdp = __pmd((unsigned long)lowmem_page_address(pte_page) | _PMD_PRESENT);
+}
+
+#define pmd_pgtable(pmd) pmd_page(pmd)
+#endif
+
+extern pte_t *pte_alloc_one_kernel(struct mm_struct *mm, unsigned long addr);
+extern pgtable_t pte_alloc_one(struct mm_struct *mm, unsigned long addr);
+
+static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
+{
+ free_page((unsigned long)pte);
+}
+
+static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
+{
+ pgtable_page_dtor(ptepage);
+ __free_page(ptepage);
+}
+
+static inline void pgtable_free(void *table, unsigned index_size)
+{
+ BUG_ON(index_size); /* 32-bit doesn't use this */
+ free_page((unsigned long)table);
+}
+
+#define check_pgt_cache() do { } while (0)
+
+#ifdef CONFIG_SMP
+static inline void pgtable_free_tlb(struct mmu_gather *tlb,
+ void *table, int shift)
+{
+ unsigned long pgf = (unsigned long)table;
+ BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
+ pgf |= shift;
+ tlb_remove_table(tlb, (void *)pgf);
+}
+
+static inline void __tlb_remove_table(void *_table)
+{
+ void *table = (void *)((unsigned long)_table & ~MAX_PGTABLE_INDEX_SIZE);
+ unsigned shift = (unsigned long)_table & MAX_PGTABLE_INDEX_SIZE;
+
+ pgtable_free(table, shift);
+}
+#else
+static inline void pgtable_free_tlb(struct mmu_gather *tlb,
+ void *table, int shift)
+{
+ pgtable_free(table, shift);
+}
+#endif
+
+static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
+ unsigned long address)
+{
+ tlb_flush_pgtable(tlb, address);
+ pgtable_page_dtor(table);
+ pgtable_free_tlb(tlb, page_address(table), 0);
+}
+#endif /* _ASM_POWERPC_BOOK3S_32_PGALLOC_H */
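
pgtable_free_tlb()/__tlb_remove_table() above pack a small index into the low bits of a page-aligned table pointer and mask it off again on the other side. Below is a stand-alone illustration of that packing trick; the mask value is hypothetical and does not correspond to MAX_PGTABLE_INDEX_SIZE.

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

#define INDEX_MASK	0xfUL		/* low bits are free thanks to alignment */

static void *pack(void *table, unsigned int index)
{
	return (void *)((uintptr_t)table | (index & INDEX_MASK));
}

static void unpack(void *packed, void **table, unsigned int *index)
{
	*table = (void *)((uintptr_t)packed & ~INDEX_MASK);
	*index = (uintptr_t)packed & INDEX_MASK;
}

int main(void)
{
	void *page;

	if (posix_memalign(&page, 4096, 4096))	/* page-aligned allocation */
		return 1;

	void *packed = pack(page, 3);
	void *table;
	unsigned int index;

	unpack(packed, &table, &index);
	printf("table recovered: %d, index: %u\n", table == page, index);
	free(page);
	return 0;
}
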
diff --git a/arch/powerpc/include/asm/book3s/64/hash-4k.h b/arch/powerpc/include/asm/book3s/64/hash-4k.h
index 5f08a0832238..1af837c561ba 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-4k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-4k.h
@@ -5,58 +5,31 @@
* for each page table entry. The PMD and PGD level use a 32b record for
* each entry by assuming that each entry is page aligned.
*/
-#define PTE_INDEX_SIZE 9
-#define PMD_INDEX_SIZE 7
-#define PUD_INDEX_SIZE 9
-#define PGD_INDEX_SIZE 9
+#define H_PTE_INDEX_SIZE 9
+#define H_PMD_INDEX_SIZE 7
+#define H_PUD_INDEX_SIZE 9
+#define H_PGD_INDEX_SIZE 9
#ifndef __ASSEMBLY__
-#define PTE_TABLE_SIZE (sizeof(pte_t) << PTE_INDEX_SIZE)
-#define PMD_TABLE_SIZE (sizeof(pmd_t) << PMD_INDEX_SIZE)
-#define PUD_TABLE_SIZE (sizeof(pud_t) << PUD_INDEX_SIZE)
-#define PGD_TABLE_SIZE (sizeof(pgd_t) << PGD_INDEX_SIZE)
-#endif /* __ASSEMBLY__ */
-
-#define PTRS_PER_PTE (1 << PTE_INDEX_SIZE)
-#define PTRS_PER_PMD (1 << PMD_INDEX_SIZE)
-#define PTRS_PER_PUD (1 << PUD_INDEX_SIZE)
-#define PTRS_PER_PGD (1 << PGD_INDEX_SIZE)
-
-/* PMD_SHIFT determines what a second-level page table entry can map */
-#define PMD_SHIFT (PAGE_SHIFT + PTE_INDEX_SIZE)
-#define PMD_SIZE (1UL << PMD_SHIFT)
-#define PMD_MASK (~(PMD_SIZE-1))
+#define H_PTE_TABLE_SIZE (sizeof(pte_t) << H_PTE_INDEX_SIZE)
+#define H_PMD_TABLE_SIZE (sizeof(pmd_t) << H_PMD_INDEX_SIZE)
+#define H_PUD_TABLE_SIZE (sizeof(pud_t) << H_PUD_INDEX_SIZE)
+#define H_PGD_TABLE_SIZE (sizeof(pgd_t) << H_PGD_INDEX_SIZE)
/* With 4k base page size, hugepage PTEs go at the PMD level */
#define MIN_HUGEPTE_SHIFT PMD_SHIFT
-/* PUD_SHIFT determines what a third-level page table entry can map */
-#define PUD_SHIFT (PMD_SHIFT + PMD_INDEX_SIZE)
-#define PUD_SIZE (1UL << PUD_SHIFT)
-#define PUD_MASK (~(PUD_SIZE-1))
-
-/* PGDIR_SHIFT determines what a fourth-level page table entry can map */
-#define PGDIR_SHIFT (PUD_SHIFT + PUD_INDEX_SIZE)
-#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
-#define PGDIR_MASK (~(PGDIR_SIZE-1))
-
-/* Bits to mask out from a PMD to get to the PTE page */
-#define PMD_MASKED_BITS 0
-/* Bits to mask out from a PUD to get to the PMD page */
-#define PUD_MASKED_BITS 0
-/* Bits to mask out from a PGD to get to the PUD page */
-#define PGD_MASKED_BITS 0
-
/* PTE flags to conserve for HPTE identification */
-#define _PAGE_HPTEFLAGS (_PAGE_BUSY | _PAGE_HASHPTE | \
- _PAGE_F_SECOND | _PAGE_F_GIX)
-
-/* shift to put page number into pte */
-#define PTE_RPN_SHIFT (12)
-#define PTE_RPN_SIZE (45) /* gives 57-bit real addresses */
-
-#define _PAGE_4K_PFN 0
-#ifndef __ASSEMBLY__
+#define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_HASHPTE | \
+ H_PAGE_F_SECOND | H_PAGE_F_GIX)
+/*
+ * Not supported by 4k linux page size
+ */
+#define H_PAGE_4K_PFN 0x0
+#define H_PAGE_THP_HUGE 0x0
+#define H_PAGE_COMBO 0x0
+#define H_PTE_FRAG_NR 0
+#define H_PTE_FRAG_SIZE_SHIFT 0
/*
* On all 4K setups, remap_4k_pfn() equates to remap_pfn_range()
*/
@@ -64,37 +37,76 @@
remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE, (prot))
#ifdef CONFIG_HUGETLB_PAGE
-/*
- * For 4k page size, we support explicit hugepage via hugepd
- */
-static inline int pmd_huge(pmd_t pmd)
+static inline int hash__hugepd_ok(hugepd_t hpd)
+{
+ /*
+ * if it is not a pte and has the hugepd shift mask
+ * set, then it is a hugepd directory pointer
+ */
+ if (!(hpd.pd & _PAGE_PTE) &&
+ ((hpd.pd & HUGEPD_SHIFT_MASK) != 0))
+ return true;
+ return false;
+}
+#endif
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+
+static inline char *get_hpte_slot_array(pmd_t *pmdp)
+{
+ BUG();
+ return NULL;
+}
+
+static inline unsigned int hpte_valid(unsigned char *hpte_slot_array, int index)
{
+ BUG();
return 0;
}
-static inline int pud_huge(pud_t pud)
+static inline unsigned int hpte_hash_index(unsigned char *hpte_slot_array,
+ int index)
{
+ BUG();
return 0;
}
-static inline int pgd_huge(pgd_t pgd)
+static inline void mark_hpte_slot_valid(unsigned char *hpte_slot_array,
+ unsigned int index, unsigned int hidx)
+{
+ BUG();
+}
+
+static inline int hash__pmd_trans_huge(pmd_t pmd)
{
return 0;
}
-#define pgd_huge pgd_huge
-static inline int hugepd_ok(hugepd_t hpd)
+static inline int hash__pmd_same(pmd_t pmd_a, pmd_t pmd_b)
{
- /*
- * if it is not a pte and have hugepd shift mask
- * set, then it is a hugepd directory pointer
- */
- if (!(hpd.pd & _PAGE_PTE) &&
- ((hpd.pd & HUGEPD_SHIFT_MASK) != 0))
- return true;
- return false;
+ BUG();
+ return 0;
}
-#define is_hugepd(hpd) (hugepd_ok(hpd))
+
+static inline pmd_t hash__pmd_mkhuge(pmd_t pmd)
+{
+ BUG();
+ return pmd;
+}
+
+extern unsigned long hash__pmd_hugepage_update(struct mm_struct *mm,
+ unsigned long addr, pmd_t *pmdp,
+ unsigned long clr, unsigned long set);
+extern pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma,
+ unsigned long address, pmd_t *pmdp);
+extern void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
+ pgtable_t pgtable);
+extern pgtable_t hash__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
+extern void hash__pmdp_huge_split_prepare(struct vm_area_struct *vma,
+ unsigned long address, pmd_t *pmdp);
+extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
+ unsigned long addr, pmd_t *pmdp);
+extern int hash__has_transparent_hugepage(void);
#endif
#endif /* !__ASSEMBLY__ */
diff --git a/arch/powerpc/include/asm/book3s/64/hash-64k.h b/arch/powerpc/include/asm/book3s/64/hash-64k.h
index 0a7956a80a08..5aae4f530c21 100644
--- a/arch/powerpc/include/asm/book3s/64/hash-64k.h
+++ b/arch/powerpc/include/asm/book3s/64/hash-64k.h
@@ -1,73 +1,44 @@
#ifndef _ASM_POWERPC_BOOK3S_64_HASH_64K_H
#define _ASM_POWERPC_BOOK3S_64_HASH_64K_H
-#define PTE_INDEX_SIZE 8
-#define PMD_INDEX_SIZE 5
-#define PUD_INDEX_SIZE 5
-#define PGD_INDEX_SIZE 12
-
-#define PTRS_PER_PTE (1 << PTE_INDEX_SIZE)
-#define PTRS_PER_PMD (1 << PMD_INDEX_SIZE)
-#define PTRS_PER_PUD (1 << PUD_INDEX_SIZE)
-#define PTRS_PER_PGD (1 << PGD_INDEX_SIZE)
+#define H_PTE_INDEX_SIZE 8
+#define H_PMD_INDEX_SIZE 5
+#define H_PUD_INDEX_SIZE 5
+#define H_PGD_INDEX_SIZE 12
/* With 4k base page size, hugepage PTEs go at the PMD level */
#define MIN_HUGEPTE_SHIFT PAGE_SHIFT
-/* PMD_SHIFT determines what a second-level page table entry can map */
-#define PMD_SHIFT (PAGE_SHIFT + PTE_INDEX_SIZE)
-#define PMD_SIZE (1UL << PMD_SHIFT)
-#define PMD_MASK (~(PMD_SIZE-1))
-
-/* PUD_SHIFT determines what a third-level page table entry can map */
-#define PUD_SHIFT (PMD_SHIFT + PMD_INDEX_SIZE)
-#define PUD_SIZE (1UL << PUD_SHIFT)
-#define PUD_MASK (~(PUD_SIZE-1))
-
-/* PGDIR_SHIFT determines what a fourth-level page table entry can map */
-#define PGDIR_SHIFT (PUD_SHIFT + PUD_INDEX_SIZE)
-#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
-#define PGDIR_MASK (~(PGDIR_SIZE-1))
-
-#define _PAGE_COMBO 0x00001000 /* this is a combo 4k page */
-#define _PAGE_4K_PFN 0x00002000 /* PFN is for a single 4k page */
+#define H_PAGE_COMBO 0x00001000 /* this is a combo 4k page */
+#define H_PAGE_4K_PFN 0x00002000 /* PFN is for a single 4k page */
/*
- * Used to track subpage group valid if _PAGE_COMBO is set
- * This overloads _PAGE_F_GIX and _PAGE_F_SECOND
+ * We need to differentiate between an explicit huge page and a THP huge
+ * page, since a THP huge page also needs to track real subpage details
*/
-#define _PAGE_COMBO_VALID (_PAGE_F_GIX | _PAGE_F_SECOND)
+#define H_PAGE_THP_HUGE H_PAGE_4K_PFN
-/* PTE flags to conserve for HPTE identification */
-#define _PAGE_HPTEFLAGS (_PAGE_BUSY | _PAGE_F_SECOND | \
- _PAGE_F_GIX | _PAGE_HASHPTE | _PAGE_COMBO)
-
-/* Shift to put page number into pte.
- *
- * That gives us a max RPN of 41 bits, which means a max of 57 bits
- * of addressable physical space, or 53 bits for the special 4k PFNs.
+/*
+ * Used to track subpage group valid if H_PAGE_COMBO is set
+ * This overloads H_PAGE_F_GIX and H_PAGE_F_SECOND
*/
-#define PTE_RPN_SHIFT (16)
-#define PTE_RPN_SIZE (41)
+#define H_PAGE_COMBO_VALID (H_PAGE_F_GIX | H_PAGE_F_SECOND)
+/* PTE flags to conserve for HPTE identification */
+#define _PAGE_HPTEFLAGS (H_PAGE_BUSY | H_PAGE_F_SECOND | \
+ H_PAGE_F_GIX | H_PAGE_HASHPTE | H_PAGE_COMBO)
/*
* we support 16 fragments per PTE page of 64K size.
*/
-#define PTE_FRAG_NR 16
+#define H_PTE_FRAG_NR 16
/*
* We use a 2K PTE page fragment and another 2K for storing
* real_pte_t hash index
*/
-#define PTE_FRAG_SIZE_SHIFT 12
+#define H_PTE_FRAG_SIZE_SHIFT 12
#define PTE_FRAG_SIZE (1UL << PTE_FRAG_SIZE_SHIFT)
-/* Bits to mask out from a PMD to get to the PTE page */
-#define PMD_MASKED_BITS 0xc0000000000000ffUL
-/* Bits to mask out from a PUD to get to the PMD page */
-#define PUD_MASKED_BITS 0xc0000000000000ffUL
-/* Bits to mask out from a PGD to get to the PUD page */
-#define PGD_MASKED_BITS 0xc0000000000000ffUL
-
#ifndef __ASSEMBLY__
+#include <asm/errno.h>
/*
* With 64K pages on hash table, we have a special PTE format that
@@ -83,9 +54,9 @@ static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep)
rpte.pte = pte;
rpte.hidx = 0;
- if (pte_val(pte) & _PAGE_COMBO) {
+ if (pte_val(pte) & H_PAGE_COMBO) {
/*
- * Make sure we order the hidx load against the _PAGE_COMBO
+ * Make sure we order the hidx load against the H_PAGE_COMBO
* check. The store side ordering is done in __hash_page_4K
*/
smp_rmb();
@@ -97,9 +68,9 @@ static inline real_pte_t __real_pte(pte_t pte, pte_t *ptep)
static inline unsigned long __rpte_to_hidx(real_pte_t rpte, unsigned long index)
{
- if ((pte_val(rpte.pte) & _PAGE_COMBO))
+ if ((pte_val(rpte.pte) & H_PAGE_COMBO))
return (rpte.hidx >> (index<<2)) & 0xf;
- return (pte_val(rpte.pte) >> _PAGE_F_GIX_SHIFT) & 0xf;
+ return (pte_val(rpte.pte) >> H_PAGE_F_GIX_SHIFT) & 0xf;
}
#define __rpte_to_pte(r) ((r).pte)
@@ -122,79 +93,32 @@ extern bool __rpte_sub_valid(real_pte_t rpte, unsigned long index);
#define pte_iterate_hashed_end() } while(0); } } while(0)
#define pte_pagesize_index(mm, addr, pte) \
- (((pte) & _PAGE_COMBO)? MMU_PAGE_4K: MMU_PAGE_64K)
-
-#define remap_4k_pfn(vma, addr, pfn, prot) \
- (WARN_ON(((pfn) >= (1UL << PTE_RPN_SIZE))) ? -EINVAL : \
- remap_pfn_range((vma), (addr), (pfn), PAGE_SIZE, \
- __pgprot(pgprot_val((prot)) | _PAGE_4K_PFN)))
-
-#define PTE_TABLE_SIZE PTE_FRAG_SIZE
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-#define PMD_TABLE_SIZE ((sizeof(pmd_t) << PMD_INDEX_SIZE) + (sizeof(unsigned long) << PMD_INDEX_SIZE))
-#else
-#define PMD_TABLE_SIZE (sizeof(pmd_t) << PMD_INDEX_SIZE)
-#endif
-#define PUD_TABLE_SIZE (sizeof(pud_t) << PUD_INDEX_SIZE)
-#define PGD_TABLE_SIZE (sizeof(pgd_t) << PGD_INDEX_SIZE)
-
-#ifdef CONFIG_HUGETLB_PAGE
-/*
- * We have PGD_INDEX_SIZ = 12 and PTE_INDEX_SIZE = 8, so that we can have
- * 16GB hugepage pte in PGD and 16MB hugepage pte at PMD;
- *
- * Defined in such a way that we can optimize away code block at build time
- * if CONFIG_HUGETLB_PAGE=n.
- */
-static inline int pmd_huge(pmd_t pmd)
-{
- /*
- * leaf pte for huge page
- */
- return !!(pmd_val(pmd) & _PAGE_PTE);
-}
-
-static inline int pud_huge(pud_t pud)
-{
- /*
- * leaf pte for huge page
- */
- return !!(pud_val(pud) & _PAGE_PTE);
-}
+ (((pte) & H_PAGE_COMBO)? MMU_PAGE_4K: MMU_PAGE_64K)
-static inline int pgd_huge(pgd_t pgd)
+extern int remap_pfn_range(struct vm_area_struct *, unsigned long addr,
+ unsigned long pfn, unsigned long size, pgprot_t);
+static inline int hash__remap_4k_pfn(struct vm_area_struct *vma, unsigned long addr,
+ unsigned long pfn, pgprot_t prot)
{
- /*
- * leaf pte for huge page
- */
- return !!(pgd_val(pgd) & _PAGE_PTE);
+ if (pfn > (PTE_RPN_MASK >> PAGE_SHIFT)) {
+ WARN(1, "remap_4k_pfn called with wrong pfn value\n");
+ return -EINVAL;
+ }
+ return remap_pfn_range(vma, addr, pfn, PAGE_SIZE,
+ __pgprot(pgprot_val(prot) | H_PAGE_4K_PFN));
}
-#define pgd_huge pgd_huge
-#ifdef CONFIG_DEBUG_VM
-extern int hugepd_ok(hugepd_t hpd);
-#define is_hugepd(hpd) (hugepd_ok(hpd))
+#define H_PTE_TABLE_SIZE PTE_FRAG_SIZE
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+#define H_PMD_TABLE_SIZE ((sizeof(pmd_t) << PMD_INDEX_SIZE) + \
+ (sizeof(unsigned long) << PMD_INDEX_SIZE))
#else
-/*
- * With 64k page size, we have hugepage ptes in the pgd and pmd entries. We don't
- * need to setup hugepage directory for them. Our pte and page directory format
- * enable us to have this enabled.
- */
-static inline int hugepd_ok(hugepd_t hpd)
-{
- return 0;
-}
-#define is_hugepd(pdep) 0
-#endif /* CONFIG_DEBUG_VM */
-
-#endif /* CONFIG_HUGETLB_PAGE */
+#define H_PMD_TABLE_SIZE (sizeof(pmd_t) << PMD_INDEX_SIZE)
+#endif
+#define H_PUD_TABLE_SIZE (sizeof(pud_t) << PUD_INDEX_SIZE)
+#define H_PGD_TABLE_SIZE (sizeof(pgd_t) << PGD_INDEX_SIZE)
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-extern unsigned long pmd_hugepage_update(struct mm_struct *mm,
- unsigned long addr,
- pmd_t *pmdp,
- unsigned long clr,
- unsigned long set);
static inline char *get_hpte_slot_array(pmd_t *pmdp)
{
/*
@@ -253,50 +177,35 @@ static inline void mark_hpte_slot_valid(unsigned char *hpte_slot_array,
* that for explicit huge pages.
*
*/
-static inline int pmd_trans_huge(pmd_t pmd)
+static inline int hash__pmd_trans_huge(pmd_t pmd)
{
- return !!((pmd_val(pmd) & (_PAGE_PTE | _PAGE_THP_HUGE)) ==
- (_PAGE_PTE | _PAGE_THP_HUGE));
+ return !!((pmd_val(pmd) & (_PAGE_PTE | H_PAGE_THP_HUGE)) ==
+ (_PAGE_PTE | H_PAGE_THP_HUGE));
}
-static inline int pmd_large(pmd_t pmd)
+static inline int hash__pmd_same(pmd_t pmd_a, pmd_t pmd_b)
{
- return !!(pmd_val(pmd) & _PAGE_PTE);
+ return (((pmd_raw(pmd_a) ^ pmd_raw(pmd_b)) & ~cpu_to_be64(_PAGE_HPTEFLAGS)) == 0);
}
-static inline pmd_t pmd_mknotpresent(pmd_t pmd)
+static inline pmd_t hash__pmd_mkhuge(pmd_t pmd)
{
- return __pmd(pmd_val(pmd) & ~_PAGE_PRESENT);
-}
-
-#define __HAVE_ARCH_PMD_SAME
-static inline int pmd_same(pmd_t pmd_a, pmd_t pmd_b)
-{
- return (((pmd_val(pmd_a) ^ pmd_val(pmd_b)) & ~_PAGE_HPTEFLAGS) == 0);
-}
-
-static inline int __pmdp_test_and_clear_young(struct mm_struct *mm,
- unsigned long addr, pmd_t *pmdp)
-{
- unsigned long old;
-
- if ((pmd_val(*pmdp) & (_PAGE_ACCESSED | _PAGE_HASHPTE)) == 0)
- return 0;
- old = pmd_hugepage_update(mm, addr, pmdp, _PAGE_ACCESSED, 0);
- return ((old & _PAGE_ACCESSED) != 0);
-}
-
-#define __HAVE_ARCH_PMDP_SET_WRPROTECT
-static inline void pmdp_set_wrprotect(struct mm_struct *mm, unsigned long addr,
- pmd_t *pmdp)
-{
-
- if ((pmd_val(*pmdp) & _PAGE_RW) == 0)
- return;
-
- pmd_hugepage_update(mm, addr, pmdp, _PAGE_RW, 0);
+ return __pmd(pmd_val(pmd) | (_PAGE_PTE | H_PAGE_THP_HUGE));
}
+extern unsigned long hash__pmd_hugepage_update(struct mm_struct *mm,
+ unsigned long addr, pmd_t *pmdp,
+ unsigned long clr, unsigned long set);
+extern pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma,
+ unsigned long address, pmd_t *pmdp);
+extern void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
+ pgtable_t pgtable);
+extern pgtable_t hash__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
+extern void hash__pmdp_huge_split_prepare(struct vm_area_struct *vma,
+ unsigned long address, pmd_t *pmdp);
+extern pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
+ unsigned long addr, pmd_t *pmdp);
+extern int hash__has_transparent_hugepage(void);
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
#endif /* __ASSEMBLY__ */
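
__rpte_to_hidx() above extracts a 4-bit hash slot index per 4K subpage from the second half of the PTE fragment with "(rpte.hidx >> (index << 2)) & 0xf". A small demo of that packing follows; the helper names are invented for the example.

#include <stdio.h>
#include <stdint.h>

static uint64_t set_subpage_hidx(uint64_t hidx_word, unsigned int index,
				 unsigned int hidx)
{
	unsigned int bit = index << 2;			/* 4 bits per subpage */

	hidx_word &= ~(0xfULL << bit);			/* clear the old slot */
	return hidx_word | ((uint64_t)(hidx & 0xf) << bit);
}

static unsigned int get_subpage_hidx(uint64_t hidx_word, unsigned int index)
{
	return (hidx_word >> (index << 2)) & 0xf;	/* same shift as __rpte_to_hidx() */
}

int main(void)
{
	uint64_t hidx_word = 0;

	hidx_word = set_subpage_hidx(hidx_word, 0, 0x3);
	hidx_word = set_subpage_hidx(hidx_word, 15, 0xa);

	printf("subpage 0 -> %#x, subpage 15 -> %#x\n",
	       get_subpage_hidx(hidx_word, 0),
	       get_subpage_hidx(hidx_word, 15));
	return 0;
}
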
diff --git a/arch/powerpc/include/asm/book3s/64/hash.h b/arch/powerpc/include/asm/book3s/64/hash.h
index d0ee6fcef823..f61cad3de4e6 100644
--- a/arch/powerpc/include/asm/book3s/64/hash.h
+++ b/arch/powerpc/include/asm/book3s/64/hash.h
@@ -13,48 +13,12 @@
* We could create separate kernel read-only if we used the 3 PP bits
* combinations that newer processors provide but we currently don't.
*/
-#define _PAGE_BIT_SWAP_TYPE 0
-
-#define _PAGE_EXEC 0x00001 /* execute permission */
-#define _PAGE_RW 0x00002 /* read & write access allowed */
-#define _PAGE_READ 0x00004 /* read access allowed */
-#define _PAGE_USER 0x00008 /* page may be accessed by userspace */
-#define _PAGE_GUARDED 0x00010 /* G: guarded (side-effect) page */
-/* M (memory coherence) is always set in the HPTE, so we don't need it here */
-#define _PAGE_COHERENT 0x0
-#define _PAGE_NO_CACHE 0x00020 /* I: cache inhibit */
-#define _PAGE_WRITETHRU 0x00040 /* W: cache write-through */
-#define _PAGE_DIRTY 0x00080 /* C: page changed */
-#define _PAGE_ACCESSED 0x00100 /* R: page referenced */
-#define _PAGE_SPECIAL 0x00400 /* software: special page */
-#define _PAGE_BUSY 0x00800 /* software: PTE & hash are busy */
-
-#ifdef CONFIG_MEM_SOFT_DIRTY
-#define _PAGE_SOFT_DIRTY 0x200 /* software: software dirty tracking */
-#else
-#define _PAGE_SOFT_DIRTY 0x000
-#endif
-
-#define _PAGE_F_GIX_SHIFT 57
-#define _PAGE_F_GIX (7ul << 57) /* HPTE index within HPTEG */
-#define _PAGE_F_SECOND (1ul << 60) /* HPTE is in 2ndary HPTEG */
-#define _PAGE_HASHPTE (1ul << 61) /* PTE has associated HPTE */
-#define _PAGE_PTE (1ul << 62) /* distinguishes PTEs from pointers */
-#define _PAGE_PRESENT (1ul << 63) /* pte contains a translation */
-
-/*
- * We need to differentiate between explicit huge page and THP huge
- * page, since THP huge page also need to track real subpage details
- */
-#define _PAGE_THP_HUGE _PAGE_4K_PFN
-
-/*
- * set of bits not changed in pmd_modify.
- */
-#define _HPAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
- _PAGE_ACCESSED | _PAGE_THP_HUGE | _PAGE_PTE | \
- _PAGE_SOFT_DIRTY)
-
+#define H_PAGE_BUSY 0x00800 /* software: PTE & hash are busy */
+#define H_PTE_NONE_MASK _PAGE_HPTEFLAGS
+#define H_PAGE_F_GIX_SHIFT 57
+#define H_PAGE_F_GIX (7ul << 57) /* HPTE index within HPTEG */
+#define H_PAGE_F_SECOND (1ul << 60) /* HPTE is in 2ndary HPTEG */
+#define H_PAGE_HASHPTE (1ul << 61) /* PTE has associated HPTE */
#ifdef CONFIG_PPC_64K_PAGES
#include <asm/book3s/64/hash-64k.h>
@@ -65,29 +29,33 @@
/*
* Size of EA range mapped by our pagetables.
*/
-#define PGTABLE_EADDR_SIZE (PTE_INDEX_SIZE + PMD_INDEX_SIZE + \
- PUD_INDEX_SIZE + PGD_INDEX_SIZE + PAGE_SHIFT)
-#define PGTABLE_RANGE (ASM_CONST(1) << PGTABLE_EADDR_SIZE)
+#define H_PGTABLE_EADDR_SIZE (H_PTE_INDEX_SIZE + H_PMD_INDEX_SIZE + \
+ H_PUD_INDEX_SIZE + H_PGD_INDEX_SIZE + PAGE_SHIFT)
+#define H_PGTABLE_RANGE (ASM_CONST(1) << H_PGTABLE_EADDR_SIZE)
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-#define PMD_CACHE_INDEX (PMD_INDEX_SIZE + 1)
+/*
+ * Only with hash do we need to use the second half of the pmd page table
+ * to store a pointer to the deposited pgtable_t
+ */
+#define H_PMD_CACHE_INDEX (H_PMD_INDEX_SIZE + 1)
#else
-#define PMD_CACHE_INDEX PMD_INDEX_SIZE
+#define H_PMD_CACHE_INDEX H_PMD_INDEX_SIZE
#endif
/*
* Define the address range of the kernel non-linear virtual area
*/
-#define KERN_VIRT_START ASM_CONST(0xD000000000000000)
-#define KERN_VIRT_SIZE ASM_CONST(0x0000100000000000)
+#define H_KERN_VIRT_START ASM_CONST(0xD000000000000000)
+#define H_KERN_VIRT_SIZE ASM_CONST(0x0000100000000000)
/*
* The vmalloc space starts at the beginning of that region, and
* occupies half of it on hash CPUs and a quarter of it on Book3E
* (we keep a quarter for the virtual memmap)
*/
-#define VMALLOC_START KERN_VIRT_START
-#define VMALLOC_SIZE (KERN_VIRT_SIZE >> 1)
-#define VMALLOC_END (VMALLOC_START + VMALLOC_SIZE)
+#define H_VMALLOC_START H_KERN_VIRT_START
+#define H_VMALLOC_SIZE (H_KERN_VIRT_SIZE >> 1)
+#define H_VMALLOC_END (H_VMALLOC_START + H_VMALLOC_SIZE)
/*
* Region IDs
@@ -96,7 +64,7 @@
#define REGION_MASK (0xfUL << REGION_SHIFT)
#define REGION_ID(ea) (((unsigned long)(ea)) >> REGION_SHIFT)
-#define VMALLOC_REGION_ID (REGION_ID(VMALLOC_START))
+#define VMALLOC_REGION_ID (REGION_ID(H_VMALLOC_START))
#define KERNEL_REGION_ID (REGION_ID(PAGE_OFFSET))
#define VMEMMAP_REGION_ID (0xfUL) /* Server only */
#define USER_REGION_ID (0UL)
@@ -105,381 +73,97 @@
* Defines the address of the vmemap area, in its own region on
* hash table CPUs.
*/
-#define VMEMMAP_BASE (VMEMMAP_REGION_ID << REGION_SHIFT)
+#define H_VMEMMAP_BASE (VMEMMAP_REGION_ID << REGION_SHIFT)
#ifdef CONFIG_PPC_MM_SLICES
#define HAVE_ARCH_UNMAPPED_AREA
#define HAVE_ARCH_UNMAPPED_AREA_TOPDOWN
#endif /* CONFIG_PPC_MM_SLICES */
-/* No separate kernel read-only */
-#define _PAGE_KERNEL_RW (_PAGE_RW | _PAGE_DIRTY) /* user access blocked by key */
-#define _PAGE_KERNEL_RO _PAGE_KERNEL_RW
-#define _PAGE_KERNEL_RWX (_PAGE_DIRTY | _PAGE_RW | _PAGE_EXEC)
-
-/* Strong Access Ordering */
-#define _PAGE_SAO (_PAGE_WRITETHRU | _PAGE_NO_CACHE | _PAGE_COHERENT)
-
-/* No page size encoding in the linux PTE */
-#define _PAGE_PSIZE 0
/* PTEIDX nibble */
#define _PTEIDX_SECONDARY 0x8
#define _PTEIDX_GROUP_IX 0x7
-/* Hash table based platforms need atomic updates of the linux PTE */
-#define PTE_ATOMIC_UPDATES 1
-#define _PTE_NONE_MASK _PAGE_HPTEFLAGS
-/*
- * The mask convered by the RPN must be a ULL on 32-bit platforms with
- * 64-bit PTEs
- */
-#define PTE_RPN_MASK (((1UL << PTE_RPN_SIZE) - 1) << PTE_RPN_SHIFT)
-/*
- * _PAGE_CHG_MASK masks of bits that are to be preserved across
- * pgprot changes
- */
-#define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
- _PAGE_ACCESSED | _PAGE_SPECIAL | _PAGE_PTE | \
- _PAGE_SOFT_DIRTY)
-/*
- * Mask of bits returned by pte_pgprot()
- */
-#define PAGE_PROT_BITS (_PAGE_GUARDED | _PAGE_COHERENT | _PAGE_NO_CACHE | \
- _PAGE_WRITETHRU | _PAGE_4K_PFN | \
- _PAGE_USER | _PAGE_ACCESSED | \
- _PAGE_RW | _PAGE_DIRTY | _PAGE_EXEC | \
- _PAGE_SOFT_DIRTY)
-/*
- * We define 2 sets of base prot bits, one for basic pages (ie,
- * cacheable kernel and user pages) and one for non cacheable
- * pages. We always set _PAGE_COHERENT when SMP is enabled or
- * the processor might need it for DMA coherency.
- */
-#define _PAGE_BASE_NC (_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_PSIZE)
-#define _PAGE_BASE (_PAGE_BASE_NC | _PAGE_COHERENT)
-
-/* Permission masks used to generate the __P and __S table,
- *
- * Note:__pgprot is defined in arch/powerpc/include/asm/page.h
- *
- * Write permissions imply read permissions for now (we could make write-only
- * pages on BookE but we don't bother for now). Execute permission control is
- * possible on platforms that define _PAGE_EXEC
- *
- * Note due to the way vm flags are laid out, the bits are XWR
- */
-#define PAGE_NONE __pgprot(_PAGE_BASE)
-#define PAGE_SHARED __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW)
-#define PAGE_SHARED_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_RW | \
- _PAGE_EXEC)
-#define PAGE_COPY __pgprot(_PAGE_BASE | _PAGE_USER )
-#define PAGE_COPY_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
-#define PAGE_READONLY __pgprot(_PAGE_BASE | _PAGE_USER )
-#define PAGE_READONLY_X __pgprot(_PAGE_BASE | _PAGE_USER | _PAGE_EXEC)
-
-#define __P000 PAGE_NONE
-#define __P001 PAGE_READONLY
-#define __P010 PAGE_COPY
-#define __P011 PAGE_COPY
-#define __P100 PAGE_READONLY_X
-#define __P101 PAGE_READONLY_X
-#define __P110 PAGE_COPY_X
-#define __P111 PAGE_COPY_X
-
-#define __S000 PAGE_NONE
-#define __S001 PAGE_READONLY
-#define __S010 PAGE_SHARED
-#define __S011 PAGE_SHARED
-#define __S100 PAGE_READONLY_X
-#define __S101 PAGE_READONLY_X
-#define __S110 PAGE_SHARED_X
-#define __S111 PAGE_SHARED_X
-
-/* Permission masks used for kernel mappings */
-#define PAGE_KERNEL __pgprot(_PAGE_BASE | _PAGE_KERNEL_RW)
-#define PAGE_KERNEL_NC __pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
- _PAGE_NO_CACHE)
-#define PAGE_KERNEL_NCG __pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
- _PAGE_NO_CACHE | _PAGE_GUARDED)
-#define PAGE_KERNEL_X __pgprot(_PAGE_BASE | _PAGE_KERNEL_RWX)
-#define PAGE_KERNEL_RO __pgprot(_PAGE_BASE | _PAGE_KERNEL_RO)
-#define PAGE_KERNEL_ROX __pgprot(_PAGE_BASE | _PAGE_KERNEL_ROX)
-
-/* Protection used for kernel text. We want the debuggers to be able to
- * set breakpoints anywhere, so don't write protect the kernel text
- * on platforms where such control is possible.
- */
-#if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) ||\
- defined(CONFIG_KPROBES) || defined(CONFIG_DYNAMIC_FTRACE)
-#define PAGE_KERNEL_TEXT PAGE_KERNEL_X
-#else
-#define PAGE_KERNEL_TEXT PAGE_KERNEL_ROX
-#endif
-
-/* Make modules code happy. We don't set RO yet */
-#define PAGE_KERNEL_EXEC PAGE_KERNEL_X
-#define PAGE_AGP (PAGE_KERNEL_NC)
-
-#define PMD_BAD_BITS (PTE_TABLE_SIZE-1)
-#define PUD_BAD_BITS (PMD_TABLE_SIZE-1)
+#define H_PMD_BAD_BITS (PTE_TABLE_SIZE-1)
+#define H_PUD_BAD_BITS (PMD_TABLE_SIZE-1)
#ifndef __ASSEMBLY__
-#define pmd_bad(pmd) (pmd_val(pmd) & PMD_BAD_BITS)
-#define pmd_page_vaddr(pmd) __va(pmd_val(pmd) & ~PMD_MASKED_BITS)
-
-#define pud_bad(pud) (pud_val(pud) & PUD_BAD_BITS)
-#define pud_page_vaddr(pud) __va(pud_val(pud) & ~PUD_MASKED_BITS)
-
-/* Pointers in the page table tree are physical addresses */
-#define __pgtable_ptr_val(ptr) __pa(ptr)
-
-#define pgd_index(address) (((address) >> (PGDIR_SHIFT)) & (PTRS_PER_PGD - 1))
-#define pud_index(address) (((address) >> (PUD_SHIFT)) & (PTRS_PER_PUD - 1))
-#define pmd_index(address) (((address) >> (PMD_SHIFT)) & (PTRS_PER_PMD - 1))
-#define pte_index(address) (((address) >> (PAGE_SHIFT)) & (PTRS_PER_PTE - 1))
+#define hash__pmd_bad(pmd) (pmd_val(pmd) & H_PMD_BAD_BITS)
+#define hash__pud_bad(pud) (pud_val(pud) & H_PUD_BAD_BITS)
+static inline int hash__pgd_bad(pgd_t pgd)
+{
+ return (pgd_val(pgd) == 0);
+}
extern void hpte_need_flush(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, unsigned long pte, int huge);
extern unsigned long htab_convert_pte_flags(unsigned long pteflags);
/* Atomic PTE updates */
-static inline unsigned long pte_update(struct mm_struct *mm,
- unsigned long addr,
- pte_t *ptep, unsigned long clr,
- unsigned long set,
- int huge)
+static inline unsigned long hash__pte_update(struct mm_struct *mm,
+ unsigned long addr,
+ pte_t *ptep, unsigned long clr,
+ unsigned long set,
+ int huge)
{
- unsigned long old, tmp;
+ __be64 old_be, tmp_be;
+ unsigned long old;
__asm__ __volatile__(
"1: ldarx %0,0,%3 # pte_update\n\
- andi. %1,%0,%6\n\
+ and. %1,%0,%6\n\
bne- 1b \n\
andc %1,%0,%4 \n\
or %1,%1,%7\n\
stdcx. %1,0,%3 \n\
bne- 1b"
- : "=&r" (old), "=&r" (tmp), "=m" (*ptep)
- : "r" (ptep), "r" (clr), "m" (*ptep), "i" (_PAGE_BUSY), "r" (set)
+ : "=&r" (old_be), "=&r" (tmp_be), "=m" (*ptep)
+ : "r" (ptep), "r" (cpu_to_be64(clr)), "m" (*ptep),
+ "r" (cpu_to_be64(H_PAGE_BUSY)), "r" (cpu_to_be64(set))
: "cc" );
/* huge pages use the old page table lock */
if (!huge)
assert_pte_locked(mm, addr);
- if (old & _PAGE_HASHPTE)
+ old = be64_to_cpu(old_be);
+ if (old & H_PAGE_HASHPTE)
hpte_need_flush(mm, addr, ptep, old, huge);
return old;
}
-static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
- unsigned long addr, pte_t *ptep)
-{
- unsigned long old;
-
- if ((pte_val(*ptep) & (_PAGE_ACCESSED | _PAGE_HASHPTE)) == 0)
- return 0;
- old = pte_update(mm, addr, ptep, _PAGE_ACCESSED, 0, 0);
- return (old & _PAGE_ACCESSED) != 0;
-}
-#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
-#define ptep_test_and_clear_young(__vma, __addr, __ptep) \
-({ \
- int __r; \
- __r = __ptep_test_and_clear_young((__vma)->vm_mm, __addr, __ptep); \
- __r; \
-})
-
-#define __HAVE_ARCH_PTEP_SET_WRPROTECT
-static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
- pte_t *ptep)
-{
-
- if ((pte_val(*ptep) & _PAGE_RW) == 0)
- return;
-
- pte_update(mm, addr, ptep, _PAGE_RW, 0, 0);
-}
-
-static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
- unsigned long addr, pte_t *ptep)
-{
- if ((pte_val(*ptep) & _PAGE_RW) == 0)
- return;
-
- pte_update(mm, addr, ptep, _PAGE_RW, 0, 1);
-}
-
-/*
- * We currently remove entries from the hashtable regardless of whether
- * the entry was young or dirty. The generic routines only flush if the
- * entry was young or dirty which is not good enough.
- *
- * We should be more intelligent about this but for the moment we override
- * these functions and force a tlb flush unconditionally
- */
-#define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
-#define ptep_clear_flush_young(__vma, __address, __ptep) \
-({ \
- int __young = __ptep_test_and_clear_young((__vma)->vm_mm, __address, \
- __ptep); \
- __young; \
-})
-
-#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
-static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
- unsigned long addr, pte_t *ptep)
-{
- unsigned long old = pte_update(mm, addr, ptep, ~0UL, 0, 0);
- return __pte(old);
-}
-
-static inline void pte_clear(struct mm_struct *mm, unsigned long addr,
- pte_t * ptep)
-{
- pte_update(mm, addr, ptep, ~0UL, 0, 0);
-}
-
-
/* Set the dirty and/or accessed bits atomically in a linux PTE, this
* function doesn't need to flush the hash entry
*/
-static inline void __ptep_set_access_flags(pte_t *ptep, pte_t entry)
+static inline void hash__ptep_set_access_flags(pte_t *ptep, pte_t entry)
{
- unsigned long bits = pte_val(entry) &
- (_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_RW | _PAGE_EXEC |
- _PAGE_SOFT_DIRTY);
+ __be64 old, tmp, val, mask;
- unsigned long old, tmp;
+ mask = cpu_to_be64(_PAGE_DIRTY | _PAGE_ACCESSED | _PAGE_READ | _PAGE_WRITE |
+ _PAGE_EXEC | _PAGE_SOFT_DIRTY);
+
+ val = pte_raw(entry) & mask;
__asm__ __volatile__(
"1: ldarx %0,0,%4\n\
- andi. %1,%0,%6\n\
+ and. %1,%0,%6\n\
bne- 1b \n\
or %0,%3,%0\n\
stdcx. %0,0,%4\n\
bne- 1b"
:"=&r" (old), "=&r" (tmp), "=m" (*ptep)
- :"r" (bits), "r" (ptep), "m" (*ptep), "i" (_PAGE_BUSY)
+ :"r" (val), "r" (ptep), "m" (*ptep), "r" (cpu_to_be64(H_PAGE_BUSY))
:"cc");
}
-static inline int pgd_bad(pgd_t pgd)
-{
- return (pgd_val(pgd) == 0);
-}
-
-#define __HAVE_ARCH_PTE_SAME
-#define pte_same(A,B) (((pte_val(A) ^ pte_val(B)) & ~_PAGE_HPTEFLAGS) == 0)
-static inline unsigned long pgd_page_vaddr(pgd_t pgd)
-{
- return (unsigned long)__va(pgd_val(pgd) & ~PGD_MASKED_BITS);
-}
-
-
-/* Generic accessors to PTE bits */
-static inline int pte_write(pte_t pte) { return !!(pte_val(pte) & _PAGE_RW);}
-static inline int pte_dirty(pte_t pte) { return !!(pte_val(pte) & _PAGE_DIRTY); }
-static inline int pte_young(pte_t pte) { return !!(pte_val(pte) & _PAGE_ACCESSED); }
-static inline int pte_special(pte_t pte) { return !!(pte_val(pte) & _PAGE_SPECIAL); }
-static inline int pte_none(pte_t pte) { return (pte_val(pte) & ~_PTE_NONE_MASK) == 0; }
-static inline pgprot_t pte_pgprot(pte_t pte) { return __pgprot(pte_val(pte) & PAGE_PROT_BITS); }
-
-#ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
-static inline bool pte_soft_dirty(pte_t pte)
-{
- return !!(pte_val(pte) & _PAGE_SOFT_DIRTY);
-}
-static inline pte_t pte_mksoft_dirty(pte_t pte)
-{
- return __pte(pte_val(pte) | _PAGE_SOFT_DIRTY);
-}
-
-static inline pte_t pte_clear_soft_dirty(pte_t pte)
-{
- return __pte(pte_val(pte) & ~_PAGE_SOFT_DIRTY);
-}
-#endif /* CONFIG_HAVE_ARCH_SOFT_DIRTY */
-
-#ifdef CONFIG_NUMA_BALANCING
-/*
- * These work without NUMA balancing but the kernel does not care. See the
- * comment in include/asm-generic/pgtable.h . On powerpc, this will only
- * work for user pages and always return true for kernel pages.
- */
-static inline int pte_protnone(pte_t pte)
-{
- return (pte_val(pte) &
- (_PAGE_PRESENT | _PAGE_USER)) == _PAGE_PRESENT;
-}
-#endif /* CONFIG_NUMA_BALANCING */
-
-static inline int pte_present(pte_t pte)
-{
- return !!(pte_val(pte) & _PAGE_PRESENT);
-}
-
-/* Conversion functions: convert a page and protection to a page entry,
- * and a page entry and page directory to the page they refer to.
- *
- * Even if PTEs can be unsigned long long, a PFN is always an unsigned
- * long for now.
- */
-static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot)
-{
- return __pte((((pte_basic_t)(pfn) << PTE_RPN_SHIFT) & PTE_RPN_MASK) |
- pgprot_val(pgprot));
-}
-
-static inline unsigned long pte_pfn(pte_t pte)
-{
- return (pte_val(pte) & PTE_RPN_MASK) >> PTE_RPN_SHIFT;
-}
-
-/* Generic modifiers for PTE bits */
-static inline pte_t pte_wrprotect(pte_t pte)
-{
- return __pte(pte_val(pte) & ~_PAGE_RW);
-}
-
-static inline pte_t pte_mkclean(pte_t pte)
-{
- return __pte(pte_val(pte) & ~_PAGE_DIRTY);
-}
-
-static inline pte_t pte_mkold(pte_t pte)
-{
- return __pte(pte_val(pte) & ~_PAGE_ACCESSED);
-}
-
-static inline pte_t pte_mkwrite(pte_t pte)
-{
- return __pte(pte_val(pte) | _PAGE_RW);
-}
-
-static inline pte_t pte_mkdirty(pte_t pte)
+static inline int hash__pte_same(pte_t pte_a, pte_t pte_b)
{
- return __pte(pte_val(pte) | _PAGE_DIRTY | _PAGE_SOFT_DIRTY);
+ return (((pte_raw(pte_a) ^ pte_raw(pte_b)) & ~cpu_to_be64(_PAGE_HPTEFLAGS)) == 0);
}
-static inline pte_t pte_mkyoung(pte_t pte)
+static inline int hash__pte_none(pte_t pte)
{
- return __pte(pte_val(pte) | _PAGE_ACCESSED);
-}
-
-static inline pte_t pte_mkspecial(pte_t pte)
-{
- return __pte(pte_val(pte) | _PAGE_SPECIAL);
-}
-
-static inline pte_t pte_mkhuge(pte_t pte)
-{
- return pte;
-}
-
-static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
-{
- return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
+ return (pte_val(pte) & ~H_PTE_NONE_MASK) == 0;
}
/* This low level function performs the actual PTE insertion
@@ -487,8 +171,8 @@ static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
 * a horrible mess that I'm not going to try to clean up now but
* I'm keeping it in one place rather than spread around
*/
-static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
- pte_t *ptep, pte_t pte, int percpu)
+static inline void hash__set_pte_at(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, pte_t pte, int percpu)
{
/*
* Anything else just stores the PTE normally. That covers all 64-bit
@@ -497,53 +181,6 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
*ptep = pte;
}
-/*
- * Macro to mark a page protection value as "uncacheable".
- */
-
-#define _PAGE_CACHE_CTL (_PAGE_COHERENT | _PAGE_GUARDED | _PAGE_NO_CACHE | \
- _PAGE_WRITETHRU)
-
-#define pgprot_noncached pgprot_noncached
-static inline pgprot_t pgprot_noncached(pgprot_t prot)
-{
- return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |
- _PAGE_NO_CACHE | _PAGE_GUARDED);
-}
-
-#define pgprot_noncached_wc pgprot_noncached_wc
-static inline pgprot_t pgprot_noncached_wc(pgprot_t prot)
-{
- return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |
- _PAGE_NO_CACHE);
-}
-
-#define pgprot_cached pgprot_cached
-static inline pgprot_t pgprot_cached(pgprot_t prot)
-{
- return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |
- _PAGE_COHERENT);
-}
-
-#define pgprot_cached_wthru pgprot_cached_wthru
-static inline pgprot_t pgprot_cached_wthru(pgprot_t prot)
-{
- return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |
- _PAGE_COHERENT | _PAGE_WRITETHRU);
-}
-
-#define pgprot_cached_noncoherent pgprot_cached_noncoherent
-static inline pgprot_t pgprot_cached_noncoherent(pgprot_t prot)
-{
- return __pgprot(pgprot_val(prot) & ~_PAGE_CACHE_CTL);
-}
-
-#define pgprot_writecombine pgprot_writecombine
-static inline pgprot_t pgprot_writecombine(pgprot_t prot)
-{
- return pgprot_noncached_wc(prot);
-}
-
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
extern void hpte_do_hugepage_flush(struct mm_struct *mm, unsigned long addr,
pmd_t *pmdp, unsigned long old_pmd);
@@ -556,6 +193,14 @@ static inline void hpte_do_hugepage_flush(struct mm_struct *mm,
}
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
+
+extern int hash__map_kernel_page(unsigned long ea, unsigned long pa,
+ unsigned long flags);
+extern int __meminit hash__vmemmap_create_mapping(unsigned long start,
+ unsigned long page_size,
+ unsigned long phys);
+extern void hash__vmemmap_remove_mapping(unsigned long start,
+ unsigned long page_size);
#endif /* !__ASSEMBLY__ */
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_BOOK3S_64_HASH_H */
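
A minimal userspace sketch of the byte-order handling that hash__pte_update() gains in the hunk above; this is not part of the patch and not kernel code. The PTE image in memory is big-endian, so the clear/set masks are converted with cpu_to_be64 before the read-modify-write and only the returned old value is converted back. glibc's <endian.h> helpers stand in for cpu_to_be64/be64_to_cpu, and the ldarx/stdcx. retry loop is deliberately omitted.

/* model_pte_update.c - illustrative only */
#include <endian.h>
#include <stdint.h>
#include <stdio.h>

static uint64_t model_pte_update(uint64_t *ptep, uint64_t clr, uint64_t set)
{
        uint64_t old_be = *ptep;                 /* raw big-endian image */
        uint64_t new_be = (old_be & ~htobe64(clr)) | htobe64(set);

        *ptep = new_be;                          /* kernel: stdcx. with retry */
        return be64toh(old_be);                  /* old value, CPU endian */
}

int main(void)
{
        uint64_t pte = htobe64(0x00000000deadb00fULL);
        uint64_t old = model_pte_update(&pte, 0xfULL, 0x100ULL);

        printf("old=%llx new=%llx\n", (unsigned long long)old,
               (unsigned long long)be64toh(pte));
        return 0;
}
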
diff --git a/arch/powerpc/include/asm/book3s/64/hugetlb-radix.h b/arch/powerpc/include/asm/book3s/64/hugetlb-radix.h
new file mode 100644
index 000000000000..60f47649306f
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/64/hugetlb-radix.h
@@ -0,0 +1,14 @@
+#ifndef _ASM_POWERPC_BOOK3S_64_HUGETLB_RADIX_H
+#define _ASM_POWERPC_BOOK3S_64_HUGETLB_RADIX_H
+/*
+ * For radix we want generic code to handle hugetlb. But if we want
+ * both hash and radix to be enabled together, we need to work around the
+ * limitations.
+ */
+void radix__flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
+void radix__local_flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
+extern unsigned long
+radix__hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
+ unsigned long len, unsigned long pgoff,
+ unsigned long flags);
+#endif
diff --git a/arch/powerpc/include/asm/book3s/64/mmu-hash.h b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
index 0cea4807e26f..290157e8d5b2 100644
--- a/arch/powerpc/include/asm/book3s/64/mmu-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/mmu-hash.h
@@ -1,5 +1,5 @@
-#ifndef _ASM_POWERPC_MMU_HASH64_H_
-#define _ASM_POWERPC_MMU_HASH64_H_
+#ifndef _ASM_POWERPC_BOOK3S_64_MMU_HASH_H_
+#define _ASM_POWERPC_BOOK3S_64_MMU_HASH_H_
/*
* PowerPC64 memory management structures
*
@@ -78,6 +78,10 @@
#define HPTE_V_SECONDARY ASM_CONST(0x0000000000000002)
#define HPTE_V_VALID ASM_CONST(0x0000000000000001)
+/*
+ * ISA 3.0 has a different HPTE format.
+ */
+#define HPTE_R_3_0_SSIZE_SHIFT 58
#define HPTE_R_PP0 ASM_CONST(0x8000000000000000)
#define HPTE_R_TS ASM_CONST(0x4000000000000000)
#define HPTE_R_KEY_HI ASM_CONST(0x3000000000000000)
@@ -115,6 +119,7 @@
#define POWER7_TLB_SETS 128 /* # sets in POWER7 TLB */
#define POWER8_TLB_SETS 512 /* # sets in POWER8 TLB */
#define POWER9_TLB_SETS_HASH 256 /* # sets in POWER9 TLB Hash mode */
+#define POWER9_TLB_SETS_RADIX 128 /* # sets in POWER9 TLB Radix mode */
#ifndef __ASSEMBLY__
@@ -127,24 +132,6 @@ extern struct hash_pte *htab_address;
extern unsigned long htab_size_bytes;
extern unsigned long htab_hash_mask;
-/*
- * Page size definition
- *
- * shift : is the "PAGE_SHIFT" value for that page size
- * sllp : is a bit mask with the value of SLB L || LP to be or'ed
- * directly to a slbmte "vsid" value
- * penc : is the HPTE encoding mask for the "LP" field:
- *
- */
-struct mmu_psize_def
-{
- unsigned int shift; /* number of bits */
- int penc[MMU_PAGE_COUNT]; /* HPTE encoding */
- unsigned int tlbiel; /* tlbiel supported for that page size */
- unsigned long avpnm; /* bits to mask out in AVPN in the HPTE */
- unsigned long sllp; /* SLB L||LP (exact mask to use in slbmte) */
-};
-extern struct mmu_psize_def mmu_psize_defs[MMU_PAGE_COUNT];
static inline int shift_to_mmu_psize(unsigned int shift)
{
@@ -210,11 +197,6 @@ static inline int segment_shift(int ssize)
/*
* The current system page and segment sizes
*/
-extern int mmu_linear_psize;
-extern int mmu_virtual_psize;
-extern int mmu_vmalloc_psize;
-extern int mmu_vmemmap_psize;
-extern int mmu_io_psize;
extern int mmu_kernel_ssize;
extern int mmu_highuser_ssize;
extern u16 mmu_slb_size;
@@ -247,7 +229,8 @@ static inline unsigned long hpte_encode_avpn(unsigned long vpn, int psize,
*/
v = (vpn >> (23 - VPN_SHIFT)) & ~(mmu_psize_defs[psize].avpnm);
v <<= HPTE_V_AVPN_SHIFT;
- v |= ((unsigned long) ssize) << HPTE_V_SSIZE_SHIFT;
+ if (!cpu_has_feature(CPU_FTR_ARCH_300))
+ v |= ((unsigned long) ssize) << HPTE_V_SSIZE_SHIFT;
return v;
}
@@ -271,8 +254,12 @@ static inline unsigned long hpte_encode_v(unsigned long vpn, int base_psize,
* aligned for the requested page size
*/
static inline unsigned long hpte_encode_r(unsigned long pa, int base_psize,
- int actual_psize)
+ int actual_psize, int ssize)
{
+
+ if (cpu_has_feature(CPU_FTR_ARCH_300))
+ pa |= ((unsigned long) ssize) << HPTE_R_3_0_SSIZE_SHIFT;
+
/* A 4K page needs no special encoding */
if (actual_psize == MMU_PAGE_4K)
return pa & HPTE_R_RPN;
@@ -476,7 +463,7 @@ extern void slb_set_size(u16 size);
add rt,rt,rx
/* 4 bits per slice and we have one slice per 1TB */
-#define SLICE_ARRAY_SIZE (PGTABLE_RANGE >> 41)
+#define SLICE_ARRAY_SIZE (H_PGTABLE_RANGE >> 41)
#ifndef __ASSEMBLY__
@@ -512,38 +499,6 @@ static inline void subpage_prot_free(struct mm_struct *mm) {}
static inline void subpage_prot_init_new_context(struct mm_struct *mm) { }
#endif /* CONFIG_PPC_SUBPAGE_PROT */
-typedef unsigned long mm_context_id_t;
-struct spinlock;
-
-typedef struct {
- mm_context_id_t id;
- u16 user_psize; /* page size index */
-
-#ifdef CONFIG_PPC_MM_SLICES
- u64 low_slices_psize; /* SLB page size encodings */
- unsigned char high_slices_psize[SLICE_ARRAY_SIZE];
-#else
- u16 sllp; /* SLB page size encoding */
-#endif
- unsigned long vdso_base;
-#ifdef CONFIG_PPC_SUBPAGE_PROT
- struct subpage_prot_table spt;
-#endif /* CONFIG_PPC_SUBPAGE_PROT */
-#ifdef CONFIG_PPC_ICSWX
- struct spinlock *cop_lockp; /* guard acop and cop_pid */
- unsigned long acop; /* mask of enabled coprocessor types */
- unsigned int cop_pid; /* pid value used with coprocessors */
-#endif /* CONFIG_PPC_ICSWX */
-#ifdef CONFIG_PPC_64K_PAGES
- /* for 4K PTE fragment support */
- void *pte_frag;
-#endif
-#ifdef CONFIG_SPAPR_TCE_IOMMU
- struct list_head iommu_group_mem_list;
-#endif
-} mm_context_t;
-
-
#if 0
/*
* The code below is equivalent to this function for arguments
@@ -579,7 +534,7 @@ static inline unsigned long get_vsid(unsigned long context, unsigned long ea,
/*
* Bad address. We return VSID 0 for that
*/
- if ((ea & ~REGION_MASK) >= PGTABLE_RANGE)
+ if ((ea & ~REGION_MASK) >= H_PGTABLE_RANGE)
return 0;
if (ssize == MMU_SEGSIZE_256M)
@@ -613,4 +568,4 @@ unsigned htab_shift_for_mem_size(unsigned long mem_size);
#endif /* __ASSEMBLY__ */
-#endif /* _ASM_POWERPC_MMU_HASH64_H_ */
+#endif /* _ASM_POWERPC_BOOK3S_64_MMU_HASH_H_ */
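
A standalone sketch of the ISA 3.0 change visible in this hunk, not kernel code: pre-POWER9 CPUs carry the segment size in the first HPTE doubleword, while ISA 3.0 moves it to the second doubleword at HPTE_R_3_0_SSIZE_SHIFT (58). HPTE_V_SSIZE_SHIFT is assumed to be 62 here purely for illustration.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define HPTE_V_SSIZE_SHIFT      62      /* assumption for this sketch */
#define HPTE_R_3_0_SSIZE_SHIFT  58      /* from the hunk above */

static void place_ssize(uint64_t ssize, bool arch_300, uint64_t *v, uint64_t *r)
{
        if (arch_300)
                *r |= ssize << HPTE_R_3_0_SSIZE_SHIFT;  /* ISA 3.0: in "r" */
        else
                *v |= ssize << HPTE_V_SSIZE_SHIFT;      /* older CPUs: in "v" */
}

int main(void)
{
        uint64_t v = 0, r = 0;

        place_ssize(1, true, &v, &r);   /* e.g. 1TB segments on POWER9 */
        printf("v=%016llx r=%016llx\n",
               (unsigned long long)v, (unsigned long long)r);
        return 0;
}
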
diff --git a/arch/powerpc/include/asm/book3s/64/mmu.h b/arch/powerpc/include/asm/book3s/64/mmu.h
new file mode 100644
index 000000000000..5854263d4d6e
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/64/mmu.h
@@ -0,0 +1,137 @@
+#ifndef _ASM_POWERPC_BOOK3S_64_MMU_H_
+#define _ASM_POWERPC_BOOK3S_64_MMU_H_
+
+#ifndef __ASSEMBLY__
+/*
+ * Page size definition
+ *
+ * shift : is the "PAGE_SHIFT" value for that page size
+ * sllp : is a bit mask with the value of SLB L || LP to be or'ed
+ * directly to a slbmte "vsid" value
+ * penc : is the HPTE encoding mask for the "LP" field:
+ *
+ */
+struct mmu_psize_def {
+ unsigned int shift; /* number of bits */
+ int penc[MMU_PAGE_COUNT]; /* HPTE encoding */
+ unsigned int tlbiel; /* tlbiel supported for that page size */
+ unsigned long avpnm; /* bits to mask out in AVPN in the HPTE */
+ union {
+ unsigned long sllp; /* SLB L||LP (exact mask to use in slbmte) */
+ unsigned long ap; /* Ap encoding used by PowerISA 3.0 */
+ };
+};
+extern struct mmu_psize_def mmu_psize_defs[MMU_PAGE_COUNT];
+
+#define radix_enabled() mmu_has_feature(MMU_FTR_RADIX)
+
+#endif /* __ASSEMBLY__ */
+
+/* 64-bit classic hash table MMU */
+#include <asm/book3s/64/mmu-hash.h>
+
+#ifndef __ASSEMBLY__
+/*
+ * ISA 3.0 partition and process table entry format
+ */
+struct prtb_entry {
+ __be64 prtb0;
+ __be64 prtb1;
+};
+extern struct prtb_entry *process_tb;
+
+struct patb_entry {
+ __be64 patb0;
+ __be64 patb1;
+};
+extern struct patb_entry *partition_tb;
+
+#define PATB_HR (1UL << 63)
+#define PATB_GR (1UL << 63)
+#define RPDB_MASK 0x0ffffffffffff00fUL
+#define RPDB_SHIFT (1UL << 8)
+/*
+ * Limit the process table to a PAGE_SIZE table. This
+ * also limits the max pid we can support:
+ * MAX_USER_CONTEXT * 16 bytes of space.
+ */
+#define PRTB_SIZE_SHIFT (CONTEXT_BITS + 4)
+/*
+ * Power9 currently only supports a 64K partition table size.
+ */
+#define PATB_SIZE_SHIFT 16
+
+typedef unsigned long mm_context_id_t;
+struct spinlock;
+
+typedef struct {
+ mm_context_id_t id;
+ u16 user_psize; /* page size index */
+
+#ifdef CONFIG_PPC_MM_SLICES
+ u64 low_slices_psize; /* SLB page size encodings */
+ unsigned char high_slices_psize[SLICE_ARRAY_SIZE];
+#else
+ u16 sllp; /* SLB page size encoding */
+#endif
+ unsigned long vdso_base;
+#ifdef CONFIG_PPC_SUBPAGE_PROT
+ struct subpage_prot_table spt;
+#endif /* CONFIG_PPC_SUBPAGE_PROT */
+#ifdef CONFIG_PPC_ICSWX
+ struct spinlock *cop_lockp; /* guard acop and cop_pid */
+ unsigned long acop; /* mask of enabled coprocessor types */
+ unsigned int cop_pid; /* pid value used with coprocessors */
+#endif /* CONFIG_PPC_ICSWX */
+#ifdef CONFIG_PPC_64K_PAGES
+ /* for 4K PTE fragment support */
+ void *pte_frag;
+#endif
+#ifdef CONFIG_SPAPR_TCE_IOMMU
+ struct list_head iommu_group_mem_list;
+#endif
+} mm_context_t;
+
+/*
+ * The current system page and segment sizes
+ */
+extern int mmu_linear_psize;
+extern int mmu_virtual_psize;
+extern int mmu_vmalloc_psize;
+extern int mmu_vmemmap_psize;
+extern int mmu_io_psize;
+
+/* MMU initialization */
+extern void radix_init_native(void);
+extern void hash__early_init_mmu(void);
+extern void radix__early_init_mmu(void);
+static inline void early_init_mmu(void)
+{
+ if (radix_enabled())
+ return radix__early_init_mmu();
+ return hash__early_init_mmu();
+}
+extern void hash__early_init_mmu_secondary(void);
+extern void radix__early_init_mmu_secondary(void);
+static inline void early_init_mmu_secondary(void)
+{
+ if (radix_enabled())
+ return radix__early_init_mmu_secondary();
+ return hash__early_init_mmu_secondary();
+}
+
+extern void hash__setup_initial_memory_limit(phys_addr_t first_memblock_base,
+ phys_addr_t first_memblock_size);
+extern void radix__setup_initial_memory_limit(phys_addr_t first_memblock_base,
+ phys_addr_t first_memblock_size);
+static inline void setup_initial_memory_limit(phys_addr_t first_memblock_base,
+ phys_addr_t first_memblock_size)
+{
+ if (radix_enabled())
+ return radix__setup_initial_memory_limit(first_memblock_base,
+ first_memblock_size);
+ return hash__setup_initial_memory_limit(first_memblock_base,
+ first_memblock_size);
+}
+#endif /* __ASSEMBLY__ */
+#endif /* _ASM_POWERPC_BOOK3S_64_MMU_H_ */
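
The header above leans on a single dispatch idiom: a generic helper checks radix_enabled() once and forwards to the radix__ or hash__ backend. Below is a compilable toy version of that idiom with stand-in names, not real kernel symbols.

#include <stdbool.h>
#include <stdio.h>

static bool radix_enabled_flag;  /* stands in for mmu_has_feature(MMU_FTR_RADIX) */

static void hash_backend_init(void)  { puts("hash backend init"); }
static void radix_backend_init(void) { puts("radix backend init"); }

static void generic_early_init_mmu(void)
{
        if (radix_enabled_flag) {
                radix_backend_init();   /* radix__ variant */
                return;
        }
        hash_backend_init();            /* hash__ variant */
}

int main(void)
{
        radix_enabled_flag = true;      /* pretend the radix MMU feature bit is set */
        generic_early_init_mmu();
        return 0;
}
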
diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h b/arch/powerpc/include/asm/book3s/64/pgalloc.h
new file mode 100644
index 000000000000..488279edb1f0
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
@@ -0,0 +1,207 @@
+#ifndef _ASM_POWERPC_BOOK3S_64_PGALLOC_H
+#define _ASM_POWERPC_BOOK3S_64_PGALLOC_H
+/*
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/slab.h>
+#include <linux/cpumask.h>
+#include <linux/percpu.h>
+
+struct vmemmap_backing {
+ struct vmemmap_backing *list;
+ unsigned long phys;
+ unsigned long virt_addr;
+};
+extern struct vmemmap_backing *vmemmap_list;
+
+/*
+ * Functions that deal with pagetables that could be at any level of
+ * the table need to be passed an "index_size" so they know how to
+ * handle allocation. For PTE pages (which are linked to a struct
+ * page for now, and drawn from the main get_free_pages() pool), the
+ * allocation size will be (2^index_size * sizeof(pointer)) and
+ * allocations are drawn from the kmem_cache in PGT_CACHE(index_size).
+ *
+ * The maximum index size needs to be big enough to allow any
+ * pagetable sizes we need, but small enough to fit in the low bits of
+ * any page table pointer. In other words all pagetables, even tiny
+ * ones, must be aligned to allow at least enough low 0 bits to
+ * contain this value. This value is also used as a mask, so it must
+ * be one less than a power of two.
+ */
+#define MAX_PGTABLE_INDEX_SIZE 0xf
+
+extern struct kmem_cache *pgtable_cache[];
+#define PGT_CACHE(shift) ({ \
+ BUG_ON(!(shift)); \
+ pgtable_cache[(shift) - 1]; \
+ })
+
+#define PGALLOC_GFP GFP_KERNEL | __GFP_NOTRACK | __GFP_REPEAT | __GFP_ZERO
+
+extern pte_t *pte_fragment_alloc(struct mm_struct *, unsigned long, int);
+extern void pte_fragment_free(unsigned long *, int);
+extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift);
+#ifdef CONFIG_SMP
+extern void __tlb_remove_table(void *_table);
+#endif
+
+static inline pgd_t *radix__pgd_alloc(struct mm_struct *mm)
+{
+#ifdef CONFIG_PPC_64K_PAGES
+ return (pgd_t *)__get_free_page(PGALLOC_GFP);
+#else
+ struct page *page;
+ page = alloc_pages(PGALLOC_GFP, 4);
+ if (!page)
+ return NULL;
+ return (pgd_t *) page_address(page);
+#endif
+}
+
+static inline void radix__pgd_free(struct mm_struct *mm, pgd_t *pgd)
+{
+#ifdef CONFIG_PPC_64K_PAGES
+ free_page((unsigned long)pgd);
+#else
+ free_pages((unsigned long)pgd, 4);
+#endif
+}
+
+static inline pgd_t *pgd_alloc(struct mm_struct *mm)
+{
+ if (radix_enabled())
+ return radix__pgd_alloc(mm);
+ return kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE), GFP_KERNEL);
+}
+
+static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
+{
+ if (radix_enabled())
+ return radix__pgd_free(mm, pgd);
+ kmem_cache_free(PGT_CACHE(PGD_INDEX_SIZE), pgd);
+}
+
+static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgd, pud_t *pud)
+{
+ pgd_set(pgd, __pgtable_ptr_val(pud) | PGD_VAL_BITS);
+}
+
+static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
+{
+ return kmem_cache_alloc(PGT_CACHE(PUD_INDEX_SIZE),
+ GFP_KERNEL|__GFP_REPEAT);
+}
+
+static inline void pud_free(struct mm_struct *mm, pud_t *pud)
+{
+ kmem_cache_free(PGT_CACHE(PUD_INDEX_SIZE), pud);
+}
+
+static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
+{
+ pud_set(pud, __pgtable_ptr_val(pmd) | PUD_VAL_BITS);
+}
+
+static inline void __pud_free_tlb(struct mmu_gather *tlb, pud_t *pud,
+ unsigned long address)
+{
+ pgtable_free_tlb(tlb, pud, PUD_INDEX_SIZE);
+}
+
+static inline pmd_t *pmd_alloc_one(struct mm_struct *mm, unsigned long addr)
+{
+ return kmem_cache_alloc(PGT_CACHE(PMD_CACHE_INDEX),
+ GFP_KERNEL|__GFP_REPEAT);
+}
+
+static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
+{
+ kmem_cache_free(PGT_CACHE(PMD_CACHE_INDEX), pmd);
+}
+
+static inline void __pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd,
+ unsigned long address)
+{
+ return pgtable_free_tlb(tlb, pmd, PMD_CACHE_INDEX);
+}
+
+static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
+ pte_t *pte)
+{
+ pmd_set(pmd, __pgtable_ptr_val(pte) | PMD_VAL_BITS);
+}
+
+static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
+ pgtable_t pte_page)
+{
+ pmd_set(pmd, __pgtable_ptr_val(pte_page) | PMD_VAL_BITS);
+}
+
+static inline pgtable_t pmd_pgtable(pmd_t pmd)
+{
+ return (pgtable_t)pmd_page_vaddr(pmd);
+}
+
+#ifdef CONFIG_PPC_4K_PAGES
+static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
+ unsigned long address)
+{
+ return (pte_t *)__get_free_page(GFP_KERNEL | __GFP_REPEAT | __GFP_ZERO);
+}
+
+static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
+ unsigned long address)
+{
+ struct page *page;
+ pte_t *pte;
+
+ pte = pte_alloc_one_kernel(mm, address);
+ if (!pte)
+ return NULL;
+ page = virt_to_page(pte);
+ if (!pgtable_page_ctor(page)) {
+ __free_page(page);
+ return NULL;
+ }
+ return pte;
+}
+#else /* if CONFIG_PPC_64K_PAGES */
+
+static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
+ unsigned long address)
+{
+ return (pte_t *)pte_fragment_alloc(mm, address, 1);
+}
+
+static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
+ unsigned long address)
+{
+ return (pgtable_t)pte_fragment_alloc(mm, address, 0);
+}
+#endif
+
+static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
+{
+ pte_fragment_free((unsigned long *)pte, 1);
+}
+
+static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
+{
+ pte_fragment_free((unsigned long *)ptepage, 0);
+}
+
+static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
+ unsigned long address)
+{
+ tlb_flush_pgtable(tlb, address);
+ pgtable_free_tlb(tlb, table, 0);
+}
+
+#define check_pgt_cache() do { } while (0)
+
+#endif /* _ASM_POWERPC_BOOK3S_64_PGALLOC_H */
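
A userspace illustration of the "index size in the low pointer bits" trick described in the comment above; the helper names and alignment are made up for the example. The point is only why MAX_PGTABLE_INDEX_SIZE must be one less than a power of two, so the same value can double as a mask.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_PGTABLE_INDEX_SIZE 0xf

static void *encode_table(void *table, unsigned int index_size)
{
        assert(index_size <= MAX_PGTABLE_INDEX_SIZE);
        assert(((uintptr_t)table & MAX_PGTABLE_INDEX_SIZE) == 0);
        return (void *)((uintptr_t)table | index_size);
}

int main(void)
{
        void *table = aligned_alloc(256, 4096);  /* plenty of low zero bits */
        if (!table)
                return 1;

        void *cookie = encode_table(table, 9);
        unsigned int index = (uintptr_t)cookie & MAX_PGTABLE_INDEX_SIZE;
        void *ptr = (void *)((uintptr_t)cookie & ~(uintptr_t)MAX_PGTABLE_INDEX_SIZE);

        printf("index=%u, pointer recovered: %s\n", index,
               ptr == table ? "yes" : "no");
        free(table);
        return 0;
}
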
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable-4k.h b/arch/powerpc/include/asm/book3s/64/pgtable-4k.h
new file mode 100644
index 000000000000..71e9abced493
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/64/pgtable-4k.h
@@ -0,0 +1,53 @@
+#ifndef _ASM_POWERPC_BOOK3S_64_PGTABLE_4K_H
+#define _ASM_POWERPC_BOOK3S_64_PGTABLE_4K_H
+/*
+ * hash 4k can't share hugetlb and also doesn't support THP
+ */
+#ifndef __ASSEMBLY__
+#ifdef CONFIG_HUGETLB_PAGE
+static inline int pmd_huge(pmd_t pmd)
+{
+ /*
+ * leaf pte for huge page
+ */
+ if (radix_enabled())
+ return !!(pmd_val(pmd) & _PAGE_PTE);
+ return 0;
+}
+
+static inline int pud_huge(pud_t pud)
+{
+ /*
+ * leaf pte for huge page
+ */
+ if (radix_enabled())
+ return !!(pud_val(pud) & _PAGE_PTE);
+ return 0;
+}
+
+static inline int pgd_huge(pgd_t pgd)
+{
+ /*
+ * leaf pte for huge page
+ */
+ if (radix_enabled())
+ return !!(pgd_val(pgd) & _PAGE_PTE);
+ return 0;
+}
+#define pgd_huge pgd_huge
+/*
+ * With radix, we have hugepage ptes in the pud and pmd entries. We don't
+ * need to set up a hugepage directory for them. Our pte and page directory
+ * formats allow this.
+ */
+static inline int hugepd_ok(hugepd_t hpd)
+{
+ if (radix_enabled())
+ return 0;
+ return hash__hugepd_ok(hpd);
+}
+#define is_hugepd(hpd) (hugepd_ok(hpd))
+#endif /* CONFIG_HUGETLB_PAGE */
+#endif /* __ASSEMBLY__ */
+
+#endif /*_ASM_POWERPC_BOOK3S_64_PGTABLE_4K_H */
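
A tiny model of the checks above, not kernel code: with radix a huge entry at any of these levels is simply a leaf PTE marked with _PAGE_PTE (bit 62 in this series), whereas hash with 4K pages never has a leaf at these levels.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define _PAGE_PTE (1ULL << 62)          /* value from the common pgtable.h below */

static bool leaf_huge(bool radix, uint64_t entry)
{
        if (radix)
                return (entry & _PAGE_PTE) != 0;
        return false;                   /* hash 4K: no leaf at PMD/PUD/PGD */
}

int main(void)
{
        uint64_t entry = _PAGE_PTE | 0x1000;

        printf("radix: %d, hash-4k: %d\n",
               leaf_huge(true, entry), leaf_huge(false, entry));
        return 0;
}
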
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable-64k.h b/arch/powerpc/include/asm/book3s/64/pgtable-64k.h
new file mode 100644
index 000000000000..cb2d0a5fa3f8
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/64/pgtable-64k.h
@@ -0,0 +1,64 @@
+#ifndef _ASM_POWERPC_BOOK3S_64_PGTABLE_64K_H
+#define _ASM_POWERPC_BOOK3S_64_PGTABLE_64K_H
+
+#ifndef __ASSEMBLY__
+#ifdef CONFIG_HUGETLB_PAGE
+/*
+ * We have PGD_INDEX_SIZE = 12 and PTE_INDEX_SIZE = 8, so that we can have
+ * a 16GB hugepage pte at the PGD level and a 16MB hugepage pte at the PMD level.
+ *
+ * Defined in such a way that the code block can be optimized away at build
+ * time if CONFIG_HUGETLB_PAGE=n.
+ */
+static inline int pmd_huge(pmd_t pmd)
+{
+ /*
+ * leaf pte for huge page
+ */
+ return !!(pmd_val(pmd) & _PAGE_PTE);
+}
+
+static inline int pud_huge(pud_t pud)
+{
+ /*
+ * leaf pte for huge page
+ */
+ return !!(pud_val(pud) & _PAGE_PTE);
+}
+
+static inline int pgd_huge(pgd_t pgd)
+{
+ /*
+ * leaf pte for huge page
+ */
+ return !!(pgd_val(pgd) & _PAGE_PTE);
+}
+#define pgd_huge pgd_huge
+
+#ifdef CONFIG_DEBUG_VM
+extern int hugepd_ok(hugepd_t hpd);
+#define is_hugepd(hpd) (hugepd_ok(hpd))
+#else
+/*
+ * With 64k page size, we have hugepage ptes in the pgd and pmd entries. We don't
+ * need to set up a hugepage directory for them. Our pte and page directory
+ * formats allow this.
+ */
+static inline int hugepd_ok(hugepd_t hpd)
+{
+ return 0;
+}
+#define is_hugepd(pdep) 0
+#endif /* CONFIG_DEBUG_VM */
+
+#endif /* CONFIG_HUGETLB_PAGE */
+
+static inline int remap_4k_pfn(struct vm_area_struct *vma, unsigned long addr,
+ unsigned long pfn, pgprot_t prot)
+{
+ if (radix_enabled())
+ BUG();
+ return hash__remap_4k_pfn(vma, addr, pfn, prot);
+}
+#endif /* __ASSEMBLY__ */
+#endif /*_ASM_POWERPC_BOOK3S_64_PGTABLE_64K_H */
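
The sizes in the comment at the top of this file follow from simple shift arithmetic; here is a quick standalone check, where a PAGE_SHIFT of 16 (64K pages) is an assumption of the example rather than something stated in this hunk.

#include <stdio.h>

int main(void)
{
        const unsigned int page_shift = 16;      /* 64K pages (assumption) */
        const unsigned int pte_index_size = 8;   /* from the comment above */

        unsigned long long pmd_leaf = 1ULL << (page_shift + pte_index_size);
        unsigned long long pgd_leaf = 1ULL << 34;        /* 16GB */

        printf("PMD-level leaf maps %llu MB\n", pmd_leaf >> 20);          /* 16 */
        printf("a 16GB leaf needs a level shift of 34: %llu GB\n", pgd_leaf >> 30);
        return 0;
}
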
diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
index 77d3ce05798e..88a5ecaa157b 100644
--- a/arch/powerpc/include/asm/book3s/64/pgtable.h
+++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
@@ -1,13 +1,247 @@
#ifndef _ASM_POWERPC_BOOK3S_64_PGTABLE_H_
#define _ASM_POWERPC_BOOK3S_64_PGTABLE_H_
+
+/*
+ * Common bits between hash and radix page tables
+ */
+#define _PAGE_BIT_SWAP_TYPE 0
+
+#define _PAGE_EXEC 0x00001 /* execute permission */
+#define _PAGE_WRITE 0x00002 /* write access allowed */
+#define _PAGE_READ 0x00004 /* read access allowed */
+#define _PAGE_RW (_PAGE_READ | _PAGE_WRITE)
+#define _PAGE_RWX (_PAGE_READ | _PAGE_WRITE | _PAGE_EXEC)
+#define _PAGE_PRIVILEGED 0x00008 /* kernel access only */
+#define _PAGE_SAO 0x00010 /* Strong access order */
+#define _PAGE_NON_IDEMPOTENT 0x00020 /* non idempotent memory */
+#define _PAGE_TOLERANT 0x00030 /* tolerant memory, cache inhibited */
+#define _PAGE_DIRTY 0x00080 /* C: page changed */
+#define _PAGE_ACCESSED 0x00100 /* R: page referenced */
/*
- * This file contains the functions and defines necessary to modify and use
- * the ppc64 hashed page table.
+ * Software bits
*/
+#define _RPAGE_SW0 0x2000000000000000UL
+#define _RPAGE_SW1 0x00800
+#define _RPAGE_SW2 0x00400
+#define _RPAGE_SW3 0x00200
+#ifdef CONFIG_MEM_SOFT_DIRTY
+#define _PAGE_SOFT_DIRTY _RPAGE_SW3 /* software: software dirty tracking */
+#else
+#define _PAGE_SOFT_DIRTY 0x00000
+#endif
+#define _PAGE_SPECIAL _RPAGE_SW2 /* software: special page */
+
+
+#define _PAGE_PTE (1ul << 62) /* distinguishes PTEs from pointers */
+#define _PAGE_PRESENT (1ul << 63) /* pte contains a translation */
+/*
+ * Drivers request cache-inhibited pte mappings using _PAGE_NO_CACHE.
+ * Instead of fixing all of them, add an alternate define which
+ * maps to a CI pte mapping.
+ */
+#define _PAGE_NO_CACHE _PAGE_TOLERANT
+/*
+ * We support a 57-bit real address in the pte. Clear everything above bit 57
+ * and everything below PAGE_SHIFT.
+ */
+#define PTE_RPN_MASK (((1UL << 57) - 1) & (PAGE_MASK))
+/*
+ * Set of bits not changed in pmd_modify. Even though we have hash-specific bits
+ * in here, on radix we expect them to be zero.
+ */
+#define _HPAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
+ _PAGE_ACCESSED | H_PAGE_THP_HUGE | _PAGE_PTE | \
+ _PAGE_SOFT_DIRTY)
+/*
+ * user access blocked by key
+ */
+#define _PAGE_KERNEL_RW (_PAGE_PRIVILEGED | _PAGE_RW | _PAGE_DIRTY)
+#define _PAGE_KERNEL_RO (_PAGE_PRIVILEGED | _PAGE_READ)
+#define _PAGE_KERNEL_RWX (_PAGE_PRIVILEGED | _PAGE_DIRTY | \
+ _PAGE_RW | _PAGE_EXEC)
+/*
+ * No page size encoding in the linux PTE
+ */
+#define _PAGE_PSIZE 0
+/*
+ * _PAGE_CHG_MASK masks of bits that are to be preserved across
+ * pgprot changes
+ */
+#define _PAGE_CHG_MASK (PTE_RPN_MASK | _PAGE_HPTEFLAGS | _PAGE_DIRTY | \
+ _PAGE_ACCESSED | _PAGE_SPECIAL | _PAGE_PTE | \
+ _PAGE_SOFT_DIRTY)
+/*
+ * Mask of bits returned by pte_pgprot()
+ */
+#define PAGE_PROT_BITS (_PAGE_SAO | _PAGE_NON_IDEMPOTENT | _PAGE_TOLERANT | \
+ H_PAGE_4K_PFN | _PAGE_PRIVILEGED | _PAGE_ACCESSED | \
+ _PAGE_READ | _PAGE_WRITE | _PAGE_DIRTY | _PAGE_EXEC | \
+ _PAGE_SOFT_DIRTY)
+/*
+ * We define 2 sets of base prot bits, one for basic pages (ie,
+ * cacheable kernel and user pages) and one for non cacheable
+ * pages. We always set _PAGE_COHERENT when SMP is enabled or
+ * the processor might need it for DMA coherency.
+ */
+#define _PAGE_BASE_NC (_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_PSIZE)
+#define _PAGE_BASE (_PAGE_BASE_NC)
+
+/* Permission masks used to generate the __P and __S table,
+ *
+ * Note:__pgprot is defined in arch/powerpc/include/asm/page.h
+ *
+ * Write permissions imply read permissions for now (we could make write-only
+ * pages on BookE but we don't bother for now). Execute permission control is
+ * possible on platforms that define _PAGE_EXEC
+ *
+ * Note due to the way vm flags are laid out, the bits are XWR
+ */
+#define PAGE_NONE __pgprot(_PAGE_BASE | _PAGE_PRIVILEGED)
+#define PAGE_SHARED __pgprot(_PAGE_BASE | _PAGE_RW)
+#define PAGE_SHARED_X __pgprot(_PAGE_BASE | _PAGE_RW | _PAGE_EXEC)
+#define PAGE_COPY __pgprot(_PAGE_BASE | _PAGE_READ)
+#define PAGE_COPY_X __pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_EXEC)
+#define PAGE_READONLY __pgprot(_PAGE_BASE | _PAGE_READ)
+#define PAGE_READONLY_X __pgprot(_PAGE_BASE | _PAGE_READ | _PAGE_EXEC)
+
+#define __P000 PAGE_NONE
+#define __P001 PAGE_READONLY
+#define __P010 PAGE_COPY
+#define __P011 PAGE_COPY
+#define __P100 PAGE_READONLY_X
+#define __P101 PAGE_READONLY_X
+#define __P110 PAGE_COPY_X
+#define __P111 PAGE_COPY_X
+
+#define __S000 PAGE_NONE
+#define __S001 PAGE_READONLY
+#define __S010 PAGE_SHARED
+#define __S011 PAGE_SHARED
+#define __S100 PAGE_READONLY_X
+#define __S101 PAGE_READONLY_X
+#define __S110 PAGE_SHARED_X
+#define __S111 PAGE_SHARED_X
+
+/* Permission masks used for kernel mappings */
+#define PAGE_KERNEL __pgprot(_PAGE_BASE | _PAGE_KERNEL_RW)
+#define PAGE_KERNEL_NC __pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
+ _PAGE_TOLERANT)
+#define PAGE_KERNEL_NCG __pgprot(_PAGE_BASE_NC | _PAGE_KERNEL_RW | \
+ _PAGE_NON_IDEMPOTENT)
+#define PAGE_KERNEL_X __pgprot(_PAGE_BASE | _PAGE_KERNEL_RWX)
+#define PAGE_KERNEL_RO __pgprot(_PAGE_BASE | _PAGE_KERNEL_RO)
+#define PAGE_KERNEL_ROX __pgprot(_PAGE_BASE | _PAGE_KERNEL_ROX)
+
+/*
+ * Protection used for kernel text. We want the debuggers to be able to
+ * set breakpoints anywhere, so don't write protect the kernel text
+ * on platforms where such control is possible.
+ */
+#if defined(CONFIG_KGDB) || defined(CONFIG_XMON) || defined(CONFIG_BDI_SWITCH) || \
+ defined(CONFIG_KPROBES) || defined(CONFIG_DYNAMIC_FTRACE)
+#define PAGE_KERNEL_TEXT PAGE_KERNEL_X
+#else
+#define PAGE_KERNEL_TEXT PAGE_KERNEL_ROX
+#endif
+
+/* Make modules code happy. We don't set RO yet */
+#define PAGE_KERNEL_EXEC PAGE_KERNEL_X
+#define PAGE_AGP (PAGE_KERNEL_NC)
+
+#ifndef __ASSEMBLY__
+/*
+ * page table defines
+ */
+extern unsigned long __pte_index_size;
+extern unsigned long __pmd_index_size;
+extern unsigned long __pud_index_size;
+extern unsigned long __pgd_index_size;
+extern unsigned long __pmd_cache_index;
+#define PTE_INDEX_SIZE __pte_index_size
+#define PMD_INDEX_SIZE __pmd_index_size
+#define PUD_INDEX_SIZE __pud_index_size
+#define PGD_INDEX_SIZE __pgd_index_size
+#define PMD_CACHE_INDEX __pmd_cache_index
+/*
+ * Because of the use of pte fragments and THP, the size of the page tables
+ * is not always derived from the index sizes above.
+ */
+extern unsigned long __pte_table_size;
+extern unsigned long __pmd_table_size;
+extern unsigned long __pud_table_size;
+extern unsigned long __pgd_table_size;
+#define PTE_TABLE_SIZE __pte_table_size
+#define PMD_TABLE_SIZE __pmd_table_size
+#define PUD_TABLE_SIZE __pud_table_size
+#define PGD_TABLE_SIZE __pgd_table_size
+
+extern unsigned long __pmd_val_bits;
+extern unsigned long __pud_val_bits;
+extern unsigned long __pgd_val_bits;
+#define PMD_VAL_BITS __pmd_val_bits
+#define PUD_VAL_BITS __pud_val_bits
+#define PGD_VAL_BITS __pgd_val_bits
+
+extern unsigned long __pte_frag_nr;
+#define PTE_FRAG_NR __pte_frag_nr
+extern unsigned long __pte_frag_size_shift;
+#define PTE_FRAG_SIZE_SHIFT __pte_frag_size_shift
+#define PTE_FRAG_SIZE (1UL << PTE_FRAG_SIZE_SHIFT)
+/*
+ * Pgtable size used by swapper, init in asm code
+ */
+#define MAX_PGD_TABLE_SIZE (sizeof(pgd_t) << RADIX_PGD_INDEX_SIZE)
+
+#define PTRS_PER_PTE (1 << PTE_INDEX_SIZE)
+#define PTRS_PER_PMD (1 << PMD_INDEX_SIZE)
+#define PTRS_PER_PUD (1 << PUD_INDEX_SIZE)
+#define PTRS_PER_PGD (1 << PGD_INDEX_SIZE)
+
+/* PMD_SHIFT determines what a second-level page table entry can map */
+#define PMD_SHIFT (PAGE_SHIFT + PTE_INDEX_SIZE)
+#define PMD_SIZE (1UL << PMD_SHIFT)
+#define PMD_MASK (~(PMD_SIZE-1))
+
+/* PUD_SHIFT determines what a third-level page table entry can map */
+#define PUD_SHIFT (PMD_SHIFT + PMD_INDEX_SIZE)
+#define PUD_SIZE (1UL << PUD_SHIFT)
+#define PUD_MASK (~(PUD_SIZE-1))
+
+/* PGDIR_SHIFT determines what a fourth-level page table entry can map */
+#define PGDIR_SHIFT (PUD_SHIFT + PUD_INDEX_SIZE)
+#define PGDIR_SIZE (1UL << PGDIR_SHIFT)
+#define PGDIR_MASK (~(PGDIR_SIZE-1))
+
+/* Bits to mask out from a PMD to get to the PTE page */
+#define PMD_MASKED_BITS 0xc0000000000000ffUL
+/* Bits to mask out from a PUD to get to the PMD page */
+#define PUD_MASKED_BITS 0xc0000000000000ffUL
+/* Bits to mask out from a PGD to get to the PUD page */
+#define PGD_MASKED_BITS 0xc0000000000000ffUL
+
+extern unsigned long __vmalloc_start;
+extern unsigned long __vmalloc_end;
+#define VMALLOC_START __vmalloc_start
+#define VMALLOC_END __vmalloc_end
+
+extern unsigned long __kernel_virt_start;
+extern unsigned long __kernel_virt_size;
+#define KERN_VIRT_START __kernel_virt_start
+#define KERN_VIRT_SIZE __kernel_virt_size
+extern struct page *vmemmap;
+extern unsigned long ioremap_bot;
+#endif /* __ASSEMBLY__ */
#include <asm/book3s/64/hash.h>
-#include <asm/barrier.h>
+#include <asm/book3s/64/radix.h>
+#ifdef CONFIG_PPC_64K_PAGES
+#include <asm/book3s/64/pgtable-64k.h>
+#else
+#include <asm/book3s/64/pgtable-4k.h>
+#endif
+
+#include <asm/barrier.h>
/*
* The second half of the kernel virtual space is used for IO mappings,
* it's itself carved into the PIO region (ISA and PHB IO space) and
@@ -26,8 +260,6 @@
#define IOREMAP_BASE (PHB_IO_END)
#define IOREMAP_END (KERN_VIRT_START + KERN_VIRT_SIZE)
-#define vmemmap ((struct page *)VMEMMAP_BASE)
-
/* Advertise special mapping type for AGP */
#define HAVE_PAGE_AGP
@@ -45,7 +277,7 @@
#define __real_pte(e,p) ((real_pte_t){(e)})
#define __rpte_to_pte(r) ((r).pte)
-#define __rpte_to_hidx(r,index) (pte_val(__rpte_to_pte(r)) >>_PAGE_F_GIX_SHIFT)
+#define __rpte_to_hidx(r,index) (pte_val(__rpte_to_pte(r)) >> H_PAGE_F_GIX_SHIFT)
#define pte_iterate_hashed_subpages(rpte, psize, va, index, shift) \
do { \
@@ -62,6 +294,327 @@
#endif /* __real_pte */
+static inline unsigned long pte_update(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, unsigned long clr,
+ unsigned long set, int huge)
+{
+ if (radix_enabled())
+ return radix__pte_update(mm, addr, ptep, clr, set, huge);
+ return hash__pte_update(mm, addr, ptep, clr, set, huge);
+}
+/*
+ * For hash, even if we have _PAGE_ACCESSED = 0, we do a pte_update.
+ * We currently remove entries from the hashtable regardless of whether
+ * the entry was young or dirty.
+ *
+ * We should be more intelligent about this but for the moment we override
+ * these functions and force a tlb flush unconditionally.
+ * For radix: H_PAGE_HASHPTE should be zero. Hence we can use the same
+ * function for both hash and radix.
+ */
+static inline int __ptep_test_and_clear_young(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep)
+{
+ unsigned long old;
+
+ if ((pte_val(*ptep) & (_PAGE_ACCESSED | H_PAGE_HASHPTE)) == 0)
+ return 0;
+ old = pte_update(mm, addr, ptep, _PAGE_ACCESSED, 0, 0);
+ return (old & _PAGE_ACCESSED) != 0;
+}
+
+#define __HAVE_ARCH_PTEP_TEST_AND_CLEAR_YOUNG
+#define ptep_test_and_clear_young(__vma, __addr, __ptep) \
+({ \
+ int __r; \
+ __r = __ptep_test_and_clear_young((__vma)->vm_mm, __addr, __ptep); \
+ __r; \
+})
+
+#define __HAVE_ARCH_PTEP_SET_WRPROTECT
+static inline void ptep_set_wrprotect(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep)
+{
+
+ if ((pte_val(*ptep) & _PAGE_WRITE) == 0)
+ return;
+
+ pte_update(mm, addr, ptep, _PAGE_WRITE, 0, 0);
+}
+
+static inline void huge_ptep_set_wrprotect(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep)
+{
+ if ((pte_val(*ptep) & _PAGE_WRITE) == 0)
+ return;
+
+ pte_update(mm, addr, ptep, _PAGE_WRITE, 0, 1);
+}
+
+#define __HAVE_ARCH_PTEP_GET_AND_CLEAR
+static inline pte_t ptep_get_and_clear(struct mm_struct *mm,
+ unsigned long addr, pte_t *ptep)
+{
+ unsigned long old = pte_update(mm, addr, ptep, ~0UL, 0, 0);
+ return __pte(old);
+}
+
+static inline void pte_clear(struct mm_struct *mm, unsigned long addr,
+ pte_t * ptep)
+{
+ pte_update(mm, addr, ptep, ~0UL, 0, 0);
+}
+static inline int pte_write(pte_t pte) { return !!(pte_val(pte) & _PAGE_WRITE);}
+static inline int pte_dirty(pte_t pte) { return !!(pte_val(pte) & _PAGE_DIRTY); }
+static inline int pte_young(pte_t pte) { return !!(pte_val(pte) & _PAGE_ACCESSED); }
+static inline int pte_special(pte_t pte) { return !!(pte_val(pte) & _PAGE_SPECIAL); }
+static inline pgprot_t pte_pgprot(pte_t pte) { return __pgprot(pte_val(pte) & PAGE_PROT_BITS); }
+
+#ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
+static inline bool pte_soft_dirty(pte_t pte)
+{
+ return !!(pte_val(pte) & _PAGE_SOFT_DIRTY);
+}
+static inline pte_t pte_mksoft_dirty(pte_t pte)
+{
+ return __pte(pte_val(pte) | _PAGE_SOFT_DIRTY);
+}
+
+static inline pte_t pte_clear_soft_dirty(pte_t pte)
+{
+ return __pte(pte_val(pte) & ~_PAGE_SOFT_DIRTY);
+}
+#endif /* CONFIG_HAVE_ARCH_SOFT_DIRTY */
+
+#ifdef CONFIG_NUMA_BALANCING
+/*
+ * These work without NUMA balancing but the kernel does not care. See the
+ * comment in include/asm-generic/pgtable.h . On powerpc, this will only
+ * work for user pages and always return true for kernel pages.
+ */
+static inline int pte_protnone(pte_t pte)
+{
+ return (pte_val(pte) & (_PAGE_PRESENT | _PAGE_PRIVILEGED)) ==
+ (_PAGE_PRESENT | _PAGE_PRIVILEGED);
+}
+#endif /* CONFIG_NUMA_BALANCING */
+
+static inline int pte_present(pte_t pte)
+{
+ return !!(pte_val(pte) & _PAGE_PRESENT);
+}
+/*
+ * Conversion functions: convert a page and protection to a page entry,
+ * and a page entry and page directory to the page they refer to.
+ *
+ * Even if PTEs can be unsigned long long, a PFN is always an unsigned
+ * long for now.
+ */
+static inline pte_t pfn_pte(unsigned long pfn, pgprot_t pgprot)
+{
+ return __pte((((pte_basic_t)(pfn) << PAGE_SHIFT) & PTE_RPN_MASK) |
+ pgprot_val(pgprot));
+}
+
+static inline unsigned long pte_pfn(pte_t pte)
+{
+ return (pte_val(pte) & PTE_RPN_MASK) >> PAGE_SHIFT;
+}
+
+/* Generic modifiers for PTE bits */
+static inline pte_t pte_wrprotect(pte_t pte)
+{
+ return __pte(pte_val(pte) & ~_PAGE_WRITE);
+}
+
+static inline pte_t pte_mkclean(pte_t pte)
+{
+ return __pte(pte_val(pte) & ~_PAGE_DIRTY);
+}
+
+static inline pte_t pte_mkold(pte_t pte)
+{
+ return __pte(pte_val(pte) & ~_PAGE_ACCESSED);
+}
+
+static inline pte_t pte_mkwrite(pte_t pte)
+{
+ /*
+ * write implies read, hence set both
+ */
+ return __pte(pte_val(pte) | _PAGE_RW);
+}
+
+static inline pte_t pte_mkdirty(pte_t pte)
+{
+ return __pte(pte_val(pte) | _PAGE_DIRTY | _PAGE_SOFT_DIRTY);
+}
+
+static inline pte_t pte_mkyoung(pte_t pte)
+{
+ return __pte(pte_val(pte) | _PAGE_ACCESSED);
+}
+
+static inline pte_t pte_mkspecial(pte_t pte)
+{
+ return __pte(pte_val(pte) | _PAGE_SPECIAL);
+}
+
+static inline pte_t pte_mkhuge(pte_t pte)
+{
+ return pte;
+}
+
+static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
+{
+ /* FIXME!! check whether this needs to be a conditional */
+ return __pte((pte_val(pte) & _PAGE_CHG_MASK) | pgprot_val(newprot));
+}
+
+static inline bool pte_user(pte_t pte)
+{
+ return !(pte_val(pte) & _PAGE_PRIVILEGED);
+}
+
+/* Encode and de-code a swap entry */
+#define MAX_SWAPFILES_CHECK() do { \
+ BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS); \
+ /* \
+ * Don't have overlapping bits with _PAGE_HPTEFLAGS \
+ * We filter HPTEFLAGS on set_pte. \
+ */ \
+ BUILD_BUG_ON(_PAGE_HPTEFLAGS & (0x1f << _PAGE_BIT_SWAP_TYPE)); \
+ BUILD_BUG_ON(_PAGE_HPTEFLAGS & _PAGE_SWP_SOFT_DIRTY); \
+ } while (0)
+/*
+ * on pte we don't need to handle RADIX_TREE_EXCEPTIONAL_SHIFT;
+ */
+#define SWP_TYPE_BITS 5
+#define __swp_type(x) (((x).val >> _PAGE_BIT_SWAP_TYPE) \
+ & ((1UL << SWP_TYPE_BITS) - 1))
+#define __swp_offset(x) (((x).val & PTE_RPN_MASK) >> PAGE_SHIFT)
+#define __swp_entry(type, offset) ((swp_entry_t) { \
+ ((type) << _PAGE_BIT_SWAP_TYPE) \
+ | (((offset) << PAGE_SHIFT) & PTE_RPN_MASK)})
+/*
+ * swp_entry_t must be independent of pte bits. We build a swp_entry_t from
+ * swap type and offset we get from swap and convert that to pte to find a
+ * matching pte in linux page table.
+ * Clear bits not found in swap entries here.
+ */
+#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val((pte)) & ~_PAGE_PTE })
+#define __swp_entry_to_pte(x) __pte((x).val | _PAGE_PTE)
+
+#ifdef CONFIG_MEM_SOFT_DIRTY
+#define _PAGE_SWP_SOFT_DIRTY (1UL << (SWP_TYPE_BITS + _PAGE_BIT_SWAP_TYPE))
+#else
+#define _PAGE_SWP_SOFT_DIRTY 0UL
+#endif /* CONFIG_MEM_SOFT_DIRTY */
+
+#ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
+static inline pte_t pte_swp_mksoft_dirty(pte_t pte)
+{
+ return __pte(pte_val(pte) | _PAGE_SWP_SOFT_DIRTY);
+}
+static inline bool pte_swp_soft_dirty(pte_t pte)
+{
+ return !!(pte_val(pte) & _PAGE_SWP_SOFT_DIRTY);
+}
+static inline pte_t pte_swp_clear_soft_dirty(pte_t pte)
+{
+ return __pte(pte_val(pte) & ~_PAGE_SWP_SOFT_DIRTY);
+}
+#endif /* CONFIG_HAVE_ARCH_SOFT_DIRTY */
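
A standalone model of the swap-entry layout defined above, not kernel code: five type bits sit at _PAGE_BIT_SWAP_TYPE (bit 0) and the offset is stored shifted by PAGE_SHIFT under PTE_RPN_MASK. A PAGE_SHIFT of 16 (64K pages) is assumed for the example.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT              16      /* assumption: 64K pages */
#define PAGE_MASK               (~(((uint64_t)1 << PAGE_SHIFT) - 1))
#define PTE_RPN_MASK            ((((uint64_t)1 << 57) - 1) & PAGE_MASK)
#define _PAGE_BIT_SWAP_TYPE     0
#define SWP_TYPE_BITS           5

static uint64_t swp_entry(unsigned int type, uint64_t offset)
{
        return ((uint64_t)type << _PAGE_BIT_SWAP_TYPE) |
               ((offset << PAGE_SHIFT) & PTE_RPN_MASK);
}

int main(void)
{
        uint64_t e = swp_entry(3, 0x12345);

        unsigned int type = (e >> _PAGE_BIT_SWAP_TYPE) &
                            ((1U << SWP_TYPE_BITS) - 1);
        uint64_t offset = (e & PTE_RPN_MASK) >> PAGE_SHIFT;

        printf("type=%u offset=0x%llx\n", type, (unsigned long long)offset);
        return 0;
}
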
+
+static inline bool check_pte_access(unsigned long access, unsigned long ptev)
+{
+ /*
+ * This checks for the _PAGE_RWX and _PAGE_PRESENT bits
+ */
+ if (access & ~ptev)
+ return false;
+ /*
+ * This checks for access to privileged space
+ */
+ if ((access & _PAGE_PRIVILEGED) != (ptev & _PAGE_PRIVILEGED))
+ return false;
+
+ return true;
+}
+/*
+ * Generic functions with hash/radix callbacks
+ */
+
+static inline void __ptep_set_access_flags(pte_t *ptep, pte_t entry)
+{
+ if (radix_enabled())
+ return radix__ptep_set_access_flags(ptep, entry);
+ return hash__ptep_set_access_flags(ptep, entry);
+}
+
+#define __HAVE_ARCH_PTE_SAME
+static inline int pte_same(pte_t pte_a, pte_t pte_b)
+{
+ if (radix_enabled())
+ return radix__pte_same(pte_a, pte_b);
+ return hash__pte_same(pte_a, pte_b);
+}
+
+static inline int pte_none(pte_t pte)
+{
+ if (radix_enabled())
+ return radix__pte_none(pte);
+ return hash__pte_none(pte);
+}
+
+static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, pte_t pte, int percpu)
+{
+ if (radix_enabled())
+ return radix__set_pte_at(mm, addr, ptep, pte, percpu);
+ return hash__set_pte_at(mm, addr, ptep, pte, percpu);
+}
+
+#define _PAGE_CACHE_CTL (_PAGE_NON_IDEMPOTENT | _PAGE_TOLERANT)
+
+#define pgprot_noncached pgprot_noncached
+static inline pgprot_t pgprot_noncached(pgprot_t prot)
+{
+ return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |
+ _PAGE_NON_IDEMPOTENT);
+}
+
+#define pgprot_noncached_wc pgprot_noncached_wc
+static inline pgprot_t pgprot_noncached_wc(pgprot_t prot)
+{
+ return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL) |
+ _PAGE_TOLERANT);
+}
+
+#define pgprot_cached pgprot_cached
+static inline pgprot_t pgprot_cached(pgprot_t prot)
+{
+ return __pgprot((pgprot_val(prot) & ~_PAGE_CACHE_CTL));
+}
+
+#define pgprot_writecombine pgprot_writecombine
+static inline pgprot_t pgprot_writecombine(pgprot_t prot)
+{
+ return pgprot_noncached_wc(prot);
+}
+/*
+ * check whether a pte mapping has the cache-inhibited property
+ */
+static inline bool pte_ci(pte_t pte)
+{
+ unsigned long pte_v = pte_val(pte);
+
+ if (((pte_v & _PAGE_CACHE_CTL) == _PAGE_TOLERANT) ||
+ ((pte_v & _PAGE_CACHE_CTL) == _PAGE_NON_IDEMPOTENT))
+ return true;
+ return false;
+}
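
A small standalone check of the cache-control encoding used above, not kernel code. The bit values are the ones introduced earlier in this patch (_PAGE_NON_IDEMPOTENT = 0x20, _PAGE_TOLERANT = 0x30), and _PAGE_ACCESSED = 0x100 stands in for an otherwise cacheable protection value.

#include <stdbool.h>
#include <stdio.h>

#define _PAGE_ACCESSED          0x00100
#define _PAGE_NON_IDEMPOTENT    0x00020
#define _PAGE_TOLERANT          0x00030
#define _PAGE_CACHE_CTL         (_PAGE_NON_IDEMPOTENT | _PAGE_TOLERANT)

static unsigned long prot_noncached(unsigned long prot)
{
        return (prot & ~_PAGE_CACHE_CTL) | _PAGE_NON_IDEMPOTENT;
}

static unsigned long prot_noncached_wc(unsigned long prot)
{
        return (prot & ~_PAGE_CACHE_CTL) | _PAGE_TOLERANT;
}

static bool pte_ci_model(unsigned long pte_v)
{
        unsigned long ctl = pte_v & _PAGE_CACHE_CTL;

        return ctl == _PAGE_TOLERANT || ctl == _PAGE_NON_IDEMPOTENT;
}

int main(void)
{
        unsigned long prot = _PAGE_ACCESSED;

        printf("noncached    ci? %d\n", pte_ci_model(prot_noncached(prot)));
        printf("noncached_wc ci? %d\n", pte_ci_model(prot_noncached_wc(prot)));
        printf("cacheable    ci? %d\n", pte_ci_model(prot));
        return 0;
}
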
+
static inline void pmd_set(pmd_t *pmdp, unsigned long val)
{
*pmdp = __pmd(val);
@@ -75,6 +628,13 @@ static inline void pmd_clear(pmd_t *pmdp)
#define pmd_none(pmd) (!pmd_val(pmd))
#define pmd_present(pmd) (!pmd_none(pmd))
+static inline int pmd_bad(pmd_t pmd)
+{
+ if (radix_enabled())
+ return radix__pmd_bad(pmd);
+ return hash__pmd_bad(pmd);
+}
+
static inline void pud_set(pud_t *pudp, unsigned long val)
{
*pudp = __pud(val);
@@ -100,6 +660,15 @@ static inline pud_t pte_pud(pte_t pte)
return __pud(pte_val(pte));
}
#define pud_write(pud) pte_write(pud_pte(pud))
+
+static inline int pud_bad(pud_t pud)
+{
+ if (radix_enabled())
+ return radix__pud_bad(pud);
+ return hash__pud_bad(pud);
+}
+
+
#define pgd_write(pgd) pte_write(pgd_pte(pgd))
static inline void pgd_set(pgd_t *pgdp, unsigned long val)
{
@@ -124,8 +693,27 @@ static inline pgd_t pte_pgd(pte_t pte)
return __pgd(pte_val(pte));
}
+static inline int pgd_bad(pgd_t pgd)
+{
+ if (radix_enabled())
+ return radix__pgd_bad(pgd);
+ return hash__pgd_bad(pgd);
+}
+
extern struct page *pgd_page(pgd_t pgd);
+/* Pointers in the page table tree are physical addresses */
+#define __pgtable_ptr_val(ptr) __pa(ptr)
+
+#define pmd_page_vaddr(pmd) __va(pmd_val(pmd) & ~PMD_MASKED_BITS)
+#define pud_page_vaddr(pud) __va(pud_val(pud) & ~PUD_MASKED_BITS)
+#define pgd_page_vaddr(pgd) __va(pgd_val(pgd) & ~PGD_MASKED_BITS)
+
+#define pgd_index(address) (((address) >> (PGDIR_SHIFT)) & (PTRS_PER_PGD - 1))
+#define pud_index(address) (((address) >> (PUD_SHIFT)) & (PTRS_PER_PUD - 1))
+#define pmd_index(address) (((address) >> (PMD_SHIFT)) & (PTRS_PER_PMD - 1))
+#define pte_index(address) (((address) >> (PAGE_SHIFT)) & (PTRS_PER_PTE - 1))
+
/*
* Find an entry in a page-table-directory. We combine the address region
* (the high order N bits) and the pgd portion of the address.
@@ -156,73 +744,42 @@ extern struct page *pgd_page(pgd_t pgd);
#define pgd_ERROR(e) \
pr_err("%s:%d: bad pgd %08lx.\n", __FILE__, __LINE__, pgd_val(e))
-/* Encode and de-code a swap entry */
-#define MAX_SWAPFILES_CHECK() do { \
- BUILD_BUG_ON(MAX_SWAPFILES_SHIFT > SWP_TYPE_BITS); \
- /* \
- * Don't have overlapping bits with _PAGE_HPTEFLAGS \
- * We filter HPTEFLAGS on set_pte. \
- */ \
- BUILD_BUG_ON(_PAGE_HPTEFLAGS & (0x1f << _PAGE_BIT_SWAP_TYPE)); \
- BUILD_BUG_ON(_PAGE_HPTEFLAGS & _PAGE_SWP_SOFT_DIRTY); \
- } while (0)
-/*
- * on pte we don't need handle RADIX_TREE_EXCEPTIONAL_SHIFT;
- */
-#define SWP_TYPE_BITS 5
-#define __swp_type(x) (((x).val >> _PAGE_BIT_SWAP_TYPE) \
- & ((1UL << SWP_TYPE_BITS) - 1))
-#define __swp_offset(x) (((x).val & PTE_RPN_MASK) >> PTE_RPN_SHIFT)
-#define __swp_entry(type, offset) ((swp_entry_t) { \
- ((type) << _PAGE_BIT_SWAP_TYPE) \
- | (((offset) << PTE_RPN_SHIFT) & PTE_RPN_MASK)})
-/*
- * swp_entry_t must be independent of pte bits. We build a swp_entry_t from
- * swap type and offset we get from swap and convert that to pte to find a
- * matching pte in linux page table.
- * Clear bits not found in swap entries here.
- */
-#define __pte_to_swp_entry(pte) ((swp_entry_t) { pte_val((pte)) & ~_PAGE_PTE })
-#define __swp_entry_to_pte(x) __pte((x).val | _PAGE_PTE)
-
-#ifdef CONFIG_MEM_SOFT_DIRTY
-#define _PAGE_SWP_SOFT_DIRTY (1UL << (SWP_TYPE_BITS + _PAGE_BIT_SWAP_TYPE))
-#else
-#define _PAGE_SWP_SOFT_DIRTY 0UL
-#endif /* CONFIG_MEM_SOFT_DIRTY */
+void pgtable_cache_add(unsigned shift, void (*ctor)(void *));
+void pgtable_cache_init(void);
-#ifdef CONFIG_HAVE_ARCH_SOFT_DIRTY
-static inline pte_t pte_swp_mksoft_dirty(pte_t pte)
+static inline int map_kernel_page(unsigned long ea, unsigned long pa,
+ unsigned long flags)
{
- return __pte(pte_val(pte) | _PAGE_SWP_SOFT_DIRTY);
+ if (radix_enabled()) {
+#if defined(CONFIG_PPC_RADIX_MMU) && defined(DEBUG_VM)
+ unsigned long page_size = 1 << mmu_psize_defs[mmu_io_psize].shift;
+ WARN((page_size != PAGE_SIZE), "I/O page size != PAGE_SIZE");
+#endif
+ return radix__map_kernel_page(ea, pa, __pgprot(flags), PAGE_SIZE);
+ }
+ return hash__map_kernel_page(ea, pa, flags);
}
-static inline bool pte_swp_soft_dirty(pte_t pte)
+
+static inline int __meminit vmemmap_create_mapping(unsigned long start,
+ unsigned long page_size,
+ unsigned long phys)
{
- return !!(pte_val(pte) & _PAGE_SWP_SOFT_DIRTY);
+ if (radix_enabled())
+ return radix__vmemmap_create_mapping(start, page_size, phys);
+ return hash__vmemmap_create_mapping(start, page_size, phys);
}
-static inline pte_t pte_swp_clear_soft_dirty(pte_t pte)
+
+#ifdef CONFIG_MEMORY_HOTPLUG
+static inline void vmemmap_remove_mapping(unsigned long start,
+ unsigned long page_size)
{
- return __pte(pte_val(pte) & ~_PAGE_SWP_SOFT_DIRTY);
+ if (radix_enabled())
+ return radix__vmemmap_remove_mapping(start, page_size);
+ return hash__vmemmap_remove_mapping(start, page_size);
}
-#endif /* CONFIG_HAVE_ARCH_SOFT_DIRTY */
-
-void pgtable_cache_add(unsigned shift, void (*ctor)(void *));
-void pgtable_cache_init(void);
-
+#endif
struct page *realmode_pfn_to_page(unsigned long pfn);
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-extern pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot);
-extern pmd_t mk_pmd(struct page *page, pgprot_t pgprot);
-extern pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot);
-extern void set_pmd_at(struct mm_struct *mm, unsigned long addr,
- pmd_t *pmdp, pmd_t pmd);
-extern void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
- pmd_t *pmd);
-extern int has_transparent_hugepage(void);
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
-
-
static inline pte_t pmd_pte(pmd_t pmd)
{
return __pte(pmd_val(pmd));
@@ -237,7 +794,6 @@ static inline pte_t *pmdp_ptep(pmd_t *pmd)
{
return (pte_t *)pmd;
}
-
#define pmd_pfn(pmd) pte_pfn(pmd_pte(pmd))
#define pmd_dirty(pmd) pte_dirty(pmd_pte(pmd))
#define pmd_young(pmd) pte_young(pmd_pte(pmd))
@@ -264,9 +820,87 @@ static inline int pmd_protnone(pmd_t pmd)
#define __HAVE_ARCH_PMD_WRITE
#define pmd_write(pmd) pte_write(pmd_pte(pmd))
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+extern pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot);
+extern pmd_t mk_pmd(struct page *page, pgprot_t pgprot);
+extern pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot);
+extern void set_pmd_at(struct mm_struct *mm, unsigned long addr,
+ pmd_t *pmdp, pmd_t pmd);
+extern void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
+ pmd_t *pmd);
+extern int hash__has_transparent_hugepage(void);
+static inline int has_transparent_hugepage(void)
+{
+ if (radix_enabled())
+ return radix__has_transparent_hugepage();
+ return hash__has_transparent_hugepage();
+}
+#define has_transparent_hugepage has_transparent_hugepage
+
+static inline unsigned long
+pmd_hugepage_update(struct mm_struct *mm, unsigned long addr, pmd_t *pmdp,
+ unsigned long clr, unsigned long set)
+{
+ if (radix_enabled())
+ return radix__pmd_hugepage_update(mm, addr, pmdp, clr, set);
+ return hash__pmd_hugepage_update(mm, addr, pmdp, clr, set);
+}
+
+static inline int pmd_large(pmd_t pmd)
+{
+ return !!(pmd_val(pmd) & _PAGE_PTE);
+}
+
+static inline pmd_t pmd_mknotpresent(pmd_t pmd)
+{
+ return __pmd(pmd_val(pmd) & ~_PAGE_PRESENT);
+}
+/*
+ * For radix, H_PAGE_HASHPTE should always be zero, so the check
+ * below works for radix too.
+ */
+static inline int __pmdp_test_and_clear_young(struct mm_struct *mm,
+ unsigned long addr, pmd_t *pmdp)
+{
+ unsigned long old;
+
+ if ((pmd_val(*pmdp) & (_PAGE_ACCESSED | H_PAGE_HASHPTE)) == 0)
+ return 0;
+ old = pmd_hugepage_update(mm, addr, pmdp, _PAGE_ACCESSED, 0);
+ return ((old & _PAGE_ACCESSED) != 0);
+}
+
+#define __HAVE_ARCH_PMDP_SET_WRPROTECT
+static inline void pmdp_set_wrprotect(struct mm_struct *mm, unsigned long addr,
+ pmd_t *pmdp)
+{
+
+ if ((pmd_val(*pmdp) & _PAGE_WRITE) == 0)
+ return;
+
+ pmd_hugepage_update(mm, addr, pmdp, _PAGE_WRITE, 0);
+}
+
+static inline int pmd_trans_huge(pmd_t pmd)
+{
+ if (radix_enabled())
+ return radix__pmd_trans_huge(pmd);
+ return hash__pmd_trans_huge(pmd);
+}
+
+#define __HAVE_ARCH_PMD_SAME
+static inline int pmd_same(pmd_t pmd_a, pmd_t pmd_b)
+{
+ if (radix_enabled())
+ return radix__pmd_same(pmd_a, pmd_b);
+ return hash__pmd_same(pmd_a, pmd_b);
+}
+
static inline pmd_t pmd_mkhuge(pmd_t pmd)
{
- return __pmd(pmd_val(pmd) | (_PAGE_PTE | _PAGE_THP_HUGE));
+ if (radix_enabled())
+ return radix__pmd_mkhuge(pmd);
+ return hash__pmd_mkhuge(pmd);
}
#define __HAVE_ARCH_PMDP_SET_ACCESS_FLAGS
@@ -277,37 +911,63 @@ extern int pmdp_set_access_flags(struct vm_area_struct *vma,
#define __HAVE_ARCH_PMDP_TEST_AND_CLEAR_YOUNG
extern int pmdp_test_and_clear_young(struct vm_area_struct *vma,
unsigned long address, pmd_t *pmdp);
-#define __HAVE_ARCH_PMDP_CLEAR_YOUNG_FLUSH
-extern int pmdp_clear_flush_young(struct vm_area_struct *vma,
- unsigned long address, pmd_t *pmdp);
#define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR
-extern pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
- unsigned long addr, pmd_t *pmdp);
+static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
+ unsigned long addr, pmd_t *pmdp)
+{
+ if (radix_enabled())
+ return radix__pmdp_huge_get_and_clear(mm, addr, pmdp);
+ return hash__pmdp_huge_get_and_clear(mm, addr, pmdp);
+}
-extern pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
- unsigned long address, pmd_t *pmdp);
+static inline pmd_t pmdp_collapse_flush(struct vm_area_struct *vma,
+ unsigned long address, pmd_t *pmdp)
+{
+ if (radix_enabled())
+ return radix__pmdp_collapse_flush(vma, address, pmdp);
+ return hash__pmdp_collapse_flush(vma, address, pmdp);
+}
#define pmdp_collapse_flush pmdp_collapse_flush
#define __HAVE_ARCH_PGTABLE_DEPOSIT
-extern void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
- pgtable_t pgtable);
+static inline void pgtable_trans_huge_deposit(struct mm_struct *mm,
+ pmd_t *pmdp, pgtable_t pgtable)
+{
+ if (radix_enabled())
+ return radix__pgtable_trans_huge_deposit(mm, pmdp, pgtable);
+ return hash__pgtable_trans_huge_deposit(mm, pmdp, pgtable);
+}
+
#define __HAVE_ARCH_PGTABLE_WITHDRAW
-extern pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
+static inline pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm,
+ pmd_t *pmdp)
+{
+ if (radix_enabled())
+ return radix__pgtable_trans_huge_withdraw(mm, pmdp);
+ return hash__pgtable_trans_huge_withdraw(mm, pmdp);
+}
#define __HAVE_ARCH_PMDP_INVALIDATE
extern void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
pmd_t *pmdp);
#define __HAVE_ARCH_PMDP_HUGE_SPLIT_PREPARE
-extern void pmdp_huge_split_prepare(struct vm_area_struct *vma,
- unsigned long address, pmd_t *pmdp);
+static inline void pmdp_huge_split_prepare(struct vm_area_struct *vma,
+ unsigned long address, pmd_t *pmdp)
+{
+ if (radix_enabled())
+ return radix__pmdp_huge_split_prepare(vma, address, pmdp);
+ return hash__pmdp_huge_split_prepare(vma, address, pmdp);
+}
#define pmd_move_must_withdraw pmd_move_must_withdraw
struct spinlock;
static inline int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
struct spinlock *old_pmd_ptl)
{
+ if (radix_enabled())
+ return false;
/*
* Archs like ppc64 use pgtable to store per pmd
* specific information. So when we switch the pmd,
@@ -315,5 +975,6 @@ static inline int pmd_move_must_withdraw(struct spinlock *new_pmd_ptl,
*/
return true;
}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_BOOK3S_64_PGTABLE_H_ */
diff --git a/arch/powerpc/include/asm/book3s/64/radix-4k.h b/arch/powerpc/include/asm/book3s/64/radix-4k.h
new file mode 100644
index 000000000000..7c3b1fe1619e
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/64/radix-4k.h
@@ -0,0 +1,12 @@
+#ifndef _ASM_POWERPC_PGTABLE_RADIX_4K_H
+#define _ASM_POWERPC_PGTABLE_RADIX_4K_H
+
+/*
+ * For 4K page size, the supported index sizes are 13/9/9/9
+ */
+#define RADIX_PTE_INDEX_SIZE 9 /* 2MB huge page */
+#define RADIX_PMD_INDEX_SIZE 9 /* 1G huge page */
+#define RADIX_PUD_INDEX_SIZE 9
+#define RADIX_PGD_INDEX_SIZE 13
+
+#endif /* _ASM_POWERPC_PGTABLE_RADIX_4K_H */
diff --git a/arch/powerpc/include/asm/book3s/64/radix-64k.h b/arch/powerpc/include/asm/book3s/64/radix-64k.h
new file mode 100644
index 000000000000..82dc355f0b45
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/64/radix-64k.h
@@ -0,0 +1,12 @@
+#ifndef _ASM_POWERPC_PGTABLE_RADIX_64K_H
+#define _ASM_POWERPC_PGTABLE_RADIX_64K_H
+
+/*
+ * For 64K page size, the supported index sizes are 13/9/9/5
+ */
+#define RADIX_PTE_INDEX_SIZE 5 /* 2MB huge page */
+#define RADIX_PMD_INDEX_SIZE 9 /* 1G huge page */
+#define RADIX_PUD_INDEX_SIZE 9
+#define RADIX_PGD_INDEX_SIZE 13
+
+#endif /* _ASM_POWERPC_PGTABLE_RADIX_64K_H */
diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
new file mode 100644
index 000000000000..937d4e247ac3
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/64/radix.h
@@ -0,0 +1,232 @@
+#ifndef _ASM_POWERPC_PGTABLE_RADIX_H
+#define _ASM_POWERPC_PGTABLE_RADIX_H
+
+#ifndef __ASSEMBLY__
+#include <asm/cmpxchg.h>
+#endif
+
+#ifdef CONFIG_PPC_64K_PAGES
+#include <asm/book3s/64/radix-64k.h>
+#else
+#include <asm/book3s/64/radix-4k.h>
+#endif
+
+/* An empty PTE can still have a R or C writeback */
+#define RADIX_PTE_NONE_MASK (_PAGE_DIRTY | _PAGE_ACCESSED)
+
+/* Bits to set in a RPMD/RPUD/RPGD */
+#define RADIX_PMD_VAL_BITS (0x8000000000000000UL | RADIX_PTE_INDEX_SIZE)
+#define RADIX_PUD_VAL_BITS (0x8000000000000000UL | RADIX_PMD_INDEX_SIZE)
+#define RADIX_PGD_VAL_BITS (0x8000000000000000UL | RADIX_PUD_INDEX_SIZE)
+
+/* Don't have anything in the reserved bits and leaf bits */
+#define RADIX_PMD_BAD_BITS 0x60000000000000e0UL
+#define RADIX_PUD_BAD_BITS 0x60000000000000e0UL
+#define RADIX_PGD_BAD_BITS 0x60000000000000e0UL
+
+/*
+ * Size of EA range mapped by our pagetables.
+ */
+#define RADIX_PGTABLE_EADDR_SIZE (RADIX_PTE_INDEX_SIZE + RADIX_PMD_INDEX_SIZE + \
+ RADIX_PUD_INDEX_SIZE + RADIX_PGD_INDEX_SIZE + PAGE_SHIFT)
+#define RADIX_PGTABLE_RANGE (ASM_CONST(1) << RADIX_PGTABLE_EADDR_SIZE)
+
+/*
+ * We support a 52-bit address space. Use the top bit for the kernel
+ * virtual mapping, and make sure the kernel fits in the top
+ * quadrant.
+ *
+ * +------------------+
+ * +------------------+ Kernel virtual map (0xc008000000000000)
+ * | |
+ * | |
+ * | |
+ * 0b11......+------------------+ Kernel linear map (0xc....)
+ * | |
+ * | 2 quadrant |
+ * | |
+ * 0b10......+------------------+
+ * | |
+ * | 1 quadrant |
+ * | |
+ * 0b01......+------------------+
+ * | |
+ * | 0 quadrant |
+ * | |
+ * 0b00......+------------------+
+ *
+ *
+ * 3rd quadrant expanded:
+ * +------------------------------+
+ * | |
+ * | |
+ * | |
+ * +------------------------------+ Kernel IO map end (0xc010000000000000)
+ * | |
+ * | |
+ * | 1/2 of virtual map |
+ * | |
+ * | |
+ * +------------------------------+ Kernel IO map start
+ * | |
+ * | 1/4 of virtual map |
+ * | |
+ * +------------------------------+ Kernel vmemmap start
+ * | |
+ * | 1/4 of virtual map |
+ * | |
+ * +------------------------------+ Kernel virt start (0xc008000000000000)
+ * | |
+ * | |
+ * | |
+ * +------------------------------+ Kernel linear (0xc.....)
+ */
+
+#define RADIX_KERN_VIRT_START ASM_CONST(0xc008000000000000)
+#define RADIX_KERN_VIRT_SIZE ASM_CONST(0x0008000000000000)
+
+/*
+ * The vmalloc space starts at the beginning of that region, and
+ * occupies a quarter of it in the radix configuration.
+ * (we keep a quarter for the virtual memmap)
+ */
+#define RADIX_VMALLOC_START RADIX_KERN_VIRT_START
+#define RADIX_VMALLOC_SIZE (RADIX_KERN_VIRT_SIZE >> 2)
+#define RADIX_VMALLOC_END (RADIX_VMALLOC_START + RADIX_VMALLOC_SIZE)
+/*
+ * Defines the address of the vmemmap area, in its own region on
+ * hash table CPUs.
+ */
+#define RADIX_VMEMMAP_BASE (RADIX_VMALLOC_END)
+
+#ifndef __ASSEMBLY__
+#define RADIX_PTE_TABLE_SIZE (sizeof(pte_t) << RADIX_PTE_INDEX_SIZE)
+#define RADIX_PMD_TABLE_SIZE (sizeof(pmd_t) << RADIX_PMD_INDEX_SIZE)
+#define RADIX_PUD_TABLE_SIZE (sizeof(pud_t) << RADIX_PUD_INDEX_SIZE)
+#define RADIX_PGD_TABLE_SIZE (sizeof(pgd_t) << RADIX_PGD_INDEX_SIZE)
+
+static inline unsigned long radix__pte_update(struct mm_struct *mm,
+ unsigned long addr,
+ pte_t *ptep, unsigned long clr,
+ unsigned long set,
+ int huge)
+{
+ pte_t pte;
+ unsigned long old_pte, new_pte;
+
+ do {
+ pte = READ_ONCE(*ptep);
+ old_pte = pte_val(pte);
+ new_pte = (old_pte | set) & ~clr;
+
+ } while (!pte_xchg(ptep, __pte(old_pte), __pte(new_pte)));
+
+ /* We already do a sync in cmpxchg, is ptesync needed? */
+ asm volatile("ptesync" : : : "memory");
+ /* huge pages use the old page table lock */
+ if (!huge)
+ assert_pte_locked(mm, addr);
+
+ return old_pte;
+}
+
+/*
+ * Set the dirty and/or accessed bits atomically in a linux PTE, this
+ * function doesn't need to invalidate tlb.
+ */
+static inline void radix__ptep_set_access_flags(pte_t *ptep, pte_t entry)
+{
+ pte_t pte;
+ unsigned long old_pte, new_pte;
+ unsigned long set = pte_val(entry) & (_PAGE_DIRTY | _PAGE_ACCESSED |
+ _PAGE_RW | _PAGE_EXEC);
+ do {
+ pte = READ_ONCE(*ptep);
+ old_pte = pte_val(pte);
+ new_pte = old_pte | set;
+
+ } while (!pte_xchg(ptep, __pte(old_pte), __pte(new_pte)));
+
+ /* We already do a sync in cmpxchg, is ptesync needed? */
+ asm volatile("ptesync" : : : "memory");
+}
+
+static inline int radix__pte_same(pte_t pte_a, pte_t pte_b)
+{
+ return ((pte_raw(pte_a) ^ pte_raw(pte_b)) == 0);
+}
+
+static inline int radix__pte_none(pte_t pte)
+{
+ return (pte_val(pte) & ~RADIX_PTE_NONE_MASK) == 0;
+}
+
+static inline void radix__set_pte_at(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, pte_t pte, int percpu)
+{
+ *ptep = pte;
+ asm volatile("ptesync" : : : "memory");
+}
+
+static inline int radix__pmd_bad(pmd_t pmd)
+{
+ return !!(pmd_val(pmd) & RADIX_PMD_BAD_BITS);
+}
+
+static inline int radix__pmd_same(pmd_t pmd_a, pmd_t pmd_b)
+{
+ return ((pmd_raw(pmd_a) ^ pmd_raw(pmd_b)) == 0);
+}
+
+static inline int radix__pud_bad(pud_t pud)
+{
+ return !!(pud_val(pud) & RADIX_PUD_BAD_BITS);
+}
+
+
+static inline int radix__pgd_bad(pgd_t pgd)
+{
+ return !!(pgd_val(pgd) & RADIX_PGD_BAD_BITS);
+}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+
+static inline int radix__pmd_trans_huge(pmd_t pmd)
+{
+ return !!(pmd_val(pmd) & _PAGE_PTE);
+}
+
+static inline pmd_t radix__pmd_mkhuge(pmd_t pmd)
+{
+ return __pmd(pmd_val(pmd) | _PAGE_PTE);
+}
+static inline void radix__pmdp_huge_split_prepare(struct vm_area_struct *vma,
+ unsigned long address, pmd_t *pmdp)
+{
+ /* Nothing to do for radix. */
+ return;
+}
+
+extern unsigned long radix__pmd_hugepage_update(struct mm_struct *mm, unsigned long addr,
+ pmd_t *pmdp, unsigned long clr,
+ unsigned long set);
+extern pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma,
+ unsigned long address, pmd_t *pmdp);
+extern void radix__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
+ pgtable_t pgtable);
+extern pgtable_t radix__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp);
+extern pmd_t radix__pmdp_huge_get_and_clear(struct mm_struct *mm,
+ unsigned long addr, pmd_t *pmdp);
+extern int radix__has_transparent_hugepage(void);
+#endif
+
+extern int __meminit radix__vmemmap_create_mapping(unsigned long start,
+ unsigned long page_size,
+ unsigned long phys);
+extern void radix__vmemmap_remove_mapping(unsigned long start,
+ unsigned long page_size);
+
+extern int radix__map_kernel_page(unsigned long ea, unsigned long pa,
+ pgprot_t flags, unsigned int psz);
+#endif /* __ASSEMBLY__ */
+#endif
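
[Editorial aside, not part of the patch: the index sizes introduced in radix-4k.h and radix-64k.h are chosen so that RADIX_PGTABLE_EADDR_SIZE (PTE + PMD + PUD + PGD index sizes plus PAGE_SHIFT) always comes out to the 52-bit address space described above. A minimal stand-alone C11 sketch of that arithmetic is shown below; the constants are copied from the two headers, and PAGE_SHIFT values of 12 (4K) and 16 (64K) are assumed.]

/* Stand-alone sanity check (not kernel code): both radix configs
 * map a 52-bit effective-address range.
 */
#define RADIX_EADDR_BITS(pte, pmd, pud, pgd, page_shift) \
	((pte) + (pmd) + (pud) + (pgd) + (page_shift))

_Static_assert(RADIX_EADDR_BITS(9, 9, 9, 13, 12) == 52,
	       "4K config: 9 + 9 + 9 + 13 + 12 must equal 52");
_Static_assert(RADIX_EADDR_BITS(5, 9, 9, 13, 16) == 52,
	       "64K config: 5 + 9 + 9 + 13 + 16 must equal 52");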
diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
index 1b753f96b374..f12ddf5e8de5 100644
--- a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
@@ -1,8 +1,6 @@
#ifndef _ASM_POWERPC_BOOK3S_64_TLBFLUSH_HASH_H
#define _ASM_POWERPC_BOOK3S_64_TLBFLUSH_HASH_H
-#define MMU_NO_CONTEXT 0
-
/*
* TLB flushing for 64-bit hash-MMU CPUs
*/
@@ -29,14 +27,21 @@ extern void __flush_tlb_pending(struct ppc64_tlb_batch *batch);
static inline void arch_enter_lazy_mmu_mode(void)
{
- struct ppc64_tlb_batch *batch = this_cpu_ptr(&ppc64_tlb_batch);
+ struct ppc64_tlb_batch *batch;
+ if (radix_enabled())
+ return;
+ batch = this_cpu_ptr(&ppc64_tlb_batch);
batch->active = 1;
}
static inline void arch_leave_lazy_mmu_mode(void)
{
- struct ppc64_tlb_batch *batch = this_cpu_ptr(&ppc64_tlb_batch);
+ struct ppc64_tlb_batch *batch;
+
+ if (radix_enabled())
+ return;
+ batch = this_cpu_ptr(&ppc64_tlb_batch);
if (batch->index)
__flush_tlb_pending(batch);
@@ -52,40 +57,42 @@ extern void flush_hash_range(unsigned long number, int local);
extern void flush_hash_hugepage(unsigned long vsid, unsigned long addr,
pmd_t *pmdp, unsigned int psize, int ssize,
unsigned long flags);
-
-static inline void local_flush_tlb_mm(struct mm_struct *mm)
+static inline void hash__local_flush_tlb_mm(struct mm_struct *mm)
{
}
-static inline void flush_tlb_mm(struct mm_struct *mm)
+static inline void hash__flush_tlb_mm(struct mm_struct *mm)
{
}
-static inline void local_flush_tlb_page(struct vm_area_struct *vma,
- unsigned long vmaddr)
+static inline void hash__local_flush_tlb_page(struct vm_area_struct *vma,
+ unsigned long vmaddr)
{
}
-static inline void flush_tlb_page(struct vm_area_struct *vma,
- unsigned long vmaddr)
+static inline void hash__flush_tlb_page(struct vm_area_struct *vma,
+ unsigned long vmaddr)
{
}
-static inline void flush_tlb_page_nohash(struct vm_area_struct *vma,
- unsigned long vmaddr)
+static inline void hash__flush_tlb_page_nohash(struct vm_area_struct *vma,
+ unsigned long vmaddr)
{
}
-static inline void flush_tlb_range(struct vm_area_struct *vma,
- unsigned long start, unsigned long end)
+static inline void hash__flush_tlb_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end)
{
}
-static inline void flush_tlb_kernel_range(unsigned long start,
- unsigned long end)
+static inline void hash__flush_tlb_kernel_range(unsigned long start,
+ unsigned long end)
{
}
+
+struct mmu_gather;
+extern void hash__tlb_flush(struct mmu_gather *tlb);
/* Private function for use by PCI IO mapping code */
extern void __flush_hash_table_range(struct mm_struct *mm, unsigned long start,
unsigned long end);
diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h b/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
new file mode 100644
index 000000000000..13ef38828dfe
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush-radix.h
@@ -0,0 +1,33 @@
+#ifndef _ASM_POWERPC_TLBFLUSH_RADIX_H
+#define _ASM_POWERPC_TLBFLUSH_RADIX_H
+
+struct vm_area_struct;
+struct mm_struct;
+struct mmu_gather;
+
+static inline int mmu_get_ap(int psize)
+{
+ return mmu_psize_defs[psize].ap;
+}
+
+extern void radix__flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end);
+extern void radix__flush_tlb_kernel_range(unsigned long start, unsigned long end);
+
+extern void radix__local_flush_tlb_mm(struct mm_struct *mm);
+extern void radix__local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
+extern void radix___local_flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
+ unsigned long ap, int nid);
+extern void radix__tlb_flush(struct mmu_gather *tlb);
+#ifdef CONFIG_SMP
+extern void radix__flush_tlb_mm(struct mm_struct *mm);
+extern void radix__flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr);
+extern void radix___flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
+ unsigned long ap, int nid);
+#else
+#define radix__flush_tlb_mm(mm) radix__local_flush_tlb_mm(mm)
+#define radix__flush_tlb_page(vma,addr) radix__local_flush_tlb_page(vma,addr)
+#define radix___flush_tlb_page(mm,addr,p,i) radix___local_flush_tlb_page(mm,addr,p,i)
+#endif
+
+#endif
diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush.h b/arch/powerpc/include/asm/book3s/64/tlbflush.h
new file mode 100644
index 000000000000..d98424ae356c
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/64/tlbflush.h
@@ -0,0 +1,76 @@
+#ifndef _ASM_POWERPC_BOOK3S_64_TLBFLUSH_H
+#define _ASM_POWERPC_BOOK3S_64_TLBFLUSH_H
+
+#define MMU_NO_CONTEXT ~0UL
+
+
+#include <asm/book3s/64/tlbflush-hash.h>
+#include <asm/book3s/64/tlbflush-radix.h>
+
+static inline void flush_tlb_range(struct vm_area_struct *vma,
+ unsigned long start, unsigned long end)
+{
+ if (radix_enabled())
+ return radix__flush_tlb_range(vma, start, end);
+ return hash__flush_tlb_range(vma, start, end);
+}
+
+static inline void flush_tlb_kernel_range(unsigned long start,
+ unsigned long end)
+{
+ if (radix_enabled())
+ return radix__flush_tlb_kernel_range(start, end);
+ return hash__flush_tlb_kernel_range(start, end);
+}
+
+static inline void local_flush_tlb_mm(struct mm_struct *mm)
+{
+ if (radix_enabled())
+ return radix__local_flush_tlb_mm(mm);
+ return hash__local_flush_tlb_mm(mm);
+}
+
+static inline void local_flush_tlb_page(struct vm_area_struct *vma,
+ unsigned long vmaddr)
+{
+ if (radix_enabled())
+ return radix__local_flush_tlb_page(vma, vmaddr);
+ return hash__local_flush_tlb_page(vma, vmaddr);
+}
+
+static inline void flush_tlb_page_nohash(struct vm_area_struct *vma,
+ unsigned long vmaddr)
+{
+ if (radix_enabled())
+ return radix__flush_tlb_page(vma, vmaddr);
+ return hash__flush_tlb_page_nohash(vma, vmaddr);
+}
+
+static inline void tlb_flush(struct mmu_gather *tlb)
+{
+ if (radix_enabled())
+ return radix__tlb_flush(tlb);
+ return hash__tlb_flush(tlb);
+}
+
+#ifdef CONFIG_SMP
+static inline void flush_tlb_mm(struct mm_struct *mm)
+{
+ if (radix_enabled())
+ return radix__flush_tlb_mm(mm);
+ return hash__flush_tlb_mm(mm);
+}
+
+static inline void flush_tlb_page(struct vm_area_struct *vma,
+ unsigned long vmaddr)
+{
+ if (radix_enabled())
+ return radix__flush_tlb_page(vma, vmaddr);
+ return hash__flush_tlb_page(vma, vmaddr);
+}
+#else
+#define flush_tlb_mm(mm) local_flush_tlb_mm(mm)
+#define flush_tlb_page(vma, addr) local_flush_tlb_page(vma, addr)
+#endif /* CONFIG_SMP */
+
+#endif /* _ASM_POWERPC_BOOK3S_64_TLBFLUSH_H */
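
[Editorial aside, not part of the patch: the wrappers in this new tlbflush.h all follow the same shape, a generic entry point tests radix_enabled() and calls either the radix__ or the hash__ backend, so callers never see which MMU is active. The following is a minimal user-space sketch of that dispatch shape; the function names, the mmu_radix flag and main() are invented for illustration and do not exist in the kernel.]

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-ins for the two backends. */
static void hash__flush_tlb_mm(int mm)  { printf("hash flush mm %d\n", mm); }
static void radix__flush_tlb_mm(int mm) { printf("radix flush mm %d\n", mm); }

/* Stand-in for the MMU feature test; the kernel checks MMU_FTR_RADIX. */
static bool mmu_radix;
static bool radix_enabled(void) { return mmu_radix; }

/* Generic entry point: callers never care which backend runs. */
static void flush_tlb_mm(int mm)
{
	if (radix_enabled()) {
		radix__flush_tlb_mm(mm);
		return;
	}
	hash__flush_tlb_mm(mm);
}

int main(void)
{
	flush_tlb_mm(1);	/* hash path */
	mmu_radix = true;
	flush_tlb_mm(2);	/* radix path */
	return 0;
}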
diff --git a/arch/powerpc/include/asm/book3s/pgalloc.h b/arch/powerpc/include/asm/book3s/pgalloc.h
new file mode 100644
index 000000000000..54f591e9572e
--- /dev/null
+++ b/arch/powerpc/include/asm/book3s/pgalloc.h
@@ -0,0 +1,19 @@
+#ifndef _ASM_POWERPC_BOOK3S_PGALLOC_H
+#define _ASM_POWERPC_BOOK3S_PGALLOC_H
+
+#include <linux/mm.h>
+
+extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
+static inline void tlb_flush_pgtable(struct mmu_gather *tlb,
+ unsigned long address)
+{
+
+}
+
+#ifdef CONFIG_PPC64
+#include <asm/book3s/64/pgalloc.h>
+#else
+#include <asm/book3s/32/pgalloc.h>
+#endif
+
+#endif /* _ASM_POWERPC_BOOK3S_PGALLOC_H */
diff --git a/arch/powerpc/include/asm/hugetlb.h b/arch/powerpc/include/asm/hugetlb.h
index 42814f0567cc..e2d9f4996e5c 100644
--- a/arch/powerpc/include/asm/hugetlb.h
+++ b/arch/powerpc/include/asm/hugetlb.h
@@ -8,6 +8,8 @@
extern struct kmem_cache *hugepte_cache;
#ifdef CONFIG_PPC_BOOK3S_64
+
+#include <asm/book3s/64/hugetlb-radix.h>
/*
* This should work for other subarchs too. But right now we use the
* new format only for 64bit book3s
@@ -31,7 +33,19 @@ static inline unsigned int hugepd_shift(hugepd_t hpd)
{
return mmu_psize_to_shift(hugepd_mmu_psize(hpd));
}
+static inline void flush_hugetlb_page(struct vm_area_struct *vma,
+ unsigned long vmaddr)
+{
+ if (radix_enabled())
+ return radix__flush_hugetlb_page(vma, vmaddr);
+}
+static inline void __local_flush_hugetlb_page(struct vm_area_struct *vma,
+ unsigned long vmaddr)
+{
+ if (radix_enabled())
+ return radix__local_flush_hugetlb_page(vma, vmaddr);
+}
#else
static inline pte_t *hugepd_page(hugepd_t hpd)
diff --git a/arch/powerpc/include/asm/kvm_book3s_64.h b/arch/powerpc/include/asm/kvm_book3s_64.h
index 7529aab068f5..1f4497fb5b83 100644
--- a/arch/powerpc/include/asm/kvm_book3s_64.h
+++ b/arch/powerpc/include/asm/kvm_book3s_64.h
@@ -276,19 +276,24 @@ static inline unsigned long hpte_make_readonly(unsigned long ptel)
return ptel;
}
-static inline int hpte_cache_flags_ok(unsigned long ptel, unsigned long io_type)
+static inline bool hpte_cache_flags_ok(unsigned long hptel, bool is_ci)
{
- unsigned int wimg = ptel & HPTE_R_WIMG;
+ unsigned int wimg = hptel & HPTE_R_WIMG;
/* Handle SAO */
if (wimg == (HPTE_R_W | HPTE_R_I | HPTE_R_M) &&
cpu_has_feature(CPU_FTR_ARCH_206))
wimg = HPTE_R_M;
- if (!io_type)
+ if (!is_ci)
return wimg == HPTE_R_M;
-
- return (wimg & (HPTE_R_W | HPTE_R_I)) == io_type;
+ /*

+ * If the host mapping is cache inhibited, make sure the hptel is
+ * also cache inhibited.
+ */
+ if (wimg & HPTE_R_W) /* FIXME!! is this ok for all guests? */
+ return false;
+ return !!(wimg & HPTE_R_I);
}
/*
@@ -305,9 +310,9 @@ static inline pte_t kvmppc_read_update_linux_pte(pte_t *ptep, int writing)
*/
old_pte = READ_ONCE(*ptep);
/*
- * wait until _PAGE_BUSY is clear then set it atomically
+ * wait until H_PAGE_BUSY is clear then set it atomically
*/
- if (unlikely(pte_val(old_pte) & _PAGE_BUSY)) {
+ if (unlikely(pte_val(old_pte) & H_PAGE_BUSY)) {
cpu_relax();
continue;
}
@@ -319,27 +324,12 @@ static inline pte_t kvmppc_read_update_linux_pte(pte_t *ptep, int writing)
if (writing && pte_write(old_pte))
new_pte = pte_mkdirty(new_pte);
- if (pte_val(old_pte) == __cmpxchg_u64((unsigned long *)ptep,
- pte_val(old_pte),
- pte_val(new_pte))) {
+ if (pte_xchg(ptep, old_pte, new_pte))
break;
- }
}
return new_pte;
}
-
-/* Return HPTE cache control bits corresponding to Linux pte bits */
-static inline unsigned long hpte_cache_bits(unsigned long pte_val)
-{
-#if _PAGE_NO_CACHE == HPTE_R_I && _PAGE_WRITETHRU == HPTE_R_W
- return pte_val & (HPTE_R_W | HPTE_R_I);
-#else
- return ((pte_val & _PAGE_NO_CACHE) ? HPTE_R_I : 0) +
- ((pte_val & _PAGE_WRITETHRU) ? HPTE_R_W : 0);
-#endif
-}
-
static inline bool hpte_read_permission(unsigned long pp, unsigned long key)
{
if (key)
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index d7b343170453..ec35af34a3fb 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -40,6 +40,9 @@
#define KVM_MAX_VCORES NR_CPUS
#define KVM_USER_MEM_SLOTS 512
+#include <asm/cputhreads.h>
+#define KVM_MAX_VCPU_ID (threads_per_subcore * KVM_MAX_VCORES)
+
#ifdef CONFIG_KVM_MMIO
#define KVM_COALESCED_MMIO_PAGE_OFFSET 1
#endif
@@ -113,6 +116,7 @@ struct kvm_vcpu_stat {
u32 ext_intr_exits;
u32 halt_successful_poll;
u32 halt_attempted_poll;
+ u32 halt_poll_invalid;
u32 halt_wakeup;
u32 dbell_exits;
u32 gdbell_exits;
@@ -724,5 +728,6 @@ static inline void kvm_arch_sched_in(struct kvm_vcpu *vcpu, int cpu) {}
static inline void kvm_arch_exit(void) {}
static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
#endif /* __POWERPC_KVM_HOST_H__ */
diff --git a/arch/powerpc/include/asm/livepatch.h b/arch/powerpc/include/asm/livepatch.h
new file mode 100644
index 000000000000..a402f7f94896
--- /dev/null
+++ b/arch/powerpc/include/asm/livepatch.h
@@ -0,0 +1,62 @@
+/*
+ * livepatch.h - powerpc-specific Kernel Live Patching Core
+ *
+ * Copyright (C) 2015-2016, SUSE, IBM Corp.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version 2
+ * of the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef _ASM_POWERPC_LIVEPATCH_H
+#define _ASM_POWERPC_LIVEPATCH_H
+
+#include <linux/module.h>
+#include <linux/ftrace.h>
+
+#ifdef CONFIG_LIVEPATCH
+static inline int klp_check_compiler_support(void)
+{
+ return 0;
+}
+
+static inline int klp_write_module_reloc(struct module *mod, unsigned long
+ type, unsigned long loc, unsigned long value)
+{
+ /* This requires infrastructure changes; we need the loadinfos. */
+ return -ENOSYS;
+}
+
+static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long ip)
+{
+ regs->nip = ip;
+}
+
+#define klp_get_ftrace_location klp_get_ftrace_location
+static inline unsigned long klp_get_ftrace_location(unsigned long faddr)
+{
+ /*
+ * Live patch works only with -mprofile-kernel on PPC. In this case,
+ * the ftrace location is always within the first 16 bytes.
+ */
+ return ftrace_location_range(faddr, faddr + 16);
+}
+
+static inline void klp_init_thread_info(struct thread_info *ti)
+{
+ /* + 1 to account for STACK_END_MAGIC */
+ ti->livepatch_sp = (unsigned long *)(ti + 1) + 1;
+}
+#else
+static void klp_init_thread_info(struct thread_info *ti) { }
+#endif /* CONFIG_LIVEPATCH */
+
+#endif /* _ASM_POWERPC_LIVEPATCH_H */
diff --git a/arch/powerpc/include/asm/machdep.h b/arch/powerpc/include/asm/machdep.h
index fd22442d30a9..6bdcd0da9e21 100644
--- a/arch/powerpc/include/asm/machdep.h
+++ b/arch/powerpc/include/asm/machdep.h
@@ -256,6 +256,7 @@ struct machdep_calls {
#ifdef CONFIG_ARCH_RANDOM
int (*get_random_seed)(unsigned long *v);
#endif
+ int (*update_partition_table)(u64);
};
extern void e500_idle(void);
diff --git a/arch/powerpc/include/asm/mmu.h b/arch/powerpc/include/asm/mmu.h
index 8ca1c983bf6c..e53ebebff474 100644
--- a/arch/powerpc/include/asm/mmu.h
+++ b/arch/powerpc/include/asm/mmu.h
@@ -88,6 +88,11 @@
*/
#define MMU_FTR_1T_SEGMENT ASM_CONST(0x40000000)
+/*
+ * Radix page table available
+ */
+#define MMU_FTR_RADIX ASM_CONST(0x80000000)
+
/* MMU feature bit sets for various CPUs */
#define MMU_FTRS_DEFAULT_HPTE_ARCH_V2 \
MMU_FTR_HPTE_TABLE | MMU_FTR_PPCAS_ARCH_V2
@@ -110,9 +115,25 @@
DECLARE_PER_CPU(int, next_tlbcam_idx);
#endif
+enum {
+ MMU_FTRS_POSSIBLE = MMU_FTR_HPTE_TABLE | MMU_FTR_TYPE_8xx |
+ MMU_FTR_TYPE_40x | MMU_FTR_TYPE_44x | MMU_FTR_TYPE_FSL_E |
+ MMU_FTR_TYPE_47x | MMU_FTR_USE_HIGH_BATS | MMU_FTR_BIG_PHYS |
+ MMU_FTR_USE_TLBIVAX_BCAST | MMU_FTR_USE_TLBILX |
+ MMU_FTR_LOCK_BCAST_INVAL | MMU_FTR_NEED_DTLB_SW_LRU |
+ MMU_FTR_USE_TLBRSRV | MMU_FTR_USE_PAIRED_MAS |
+ MMU_FTR_NO_SLBIE_B | MMU_FTR_16M_PAGE | MMU_FTR_TLBIEL |
+ MMU_FTR_LOCKLESS_TLBIE | MMU_FTR_CI_LARGE_PAGE |
+ MMU_FTR_1T_SEGMENT |
+#ifdef CONFIG_PPC_RADIX_MMU
+ MMU_FTR_RADIX |
+#endif
+ 0,
+};
+
static inline int mmu_has_feature(unsigned long feature)
{
- return (cur_cpu_spec->mmu_features & feature);
+ return (MMU_FTRS_POSSIBLE & cur_cpu_spec->mmu_features & feature);
}
static inline void mmu_clear_feature(unsigned long feature)
@@ -122,13 +143,6 @@ static inline void mmu_clear_feature(unsigned long feature)
extern unsigned int __start___mmu_ftr_fixup, __stop___mmu_ftr_fixup;
-/* MMU initialization */
-extern void early_init_mmu(void);
-extern void early_init_mmu_secondary(void);
-
-extern void setup_initial_memory_limit(phys_addr_t first_memblock_base,
- phys_addr_t first_memblock_size);
-
#ifdef CONFIG_PPC64
/* This is our real memory area size on ppc64 server, on embedded, we
* make it match the size our of bolted TLB area
@@ -181,10 +195,20 @@ static inline void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
#define MMU_PAGE_COUNT 15
-#if defined(CONFIG_PPC_STD_MMU_64)
-/* 64-bit classic hash table MMU */
-#include <asm/book3s/64/mmu-hash.h>
-#elif defined(CONFIG_PPC_STD_MMU_32)
+#ifdef CONFIG_PPC_BOOK3S_64
+#include <asm/book3s/64/mmu.h>
+#else /* CONFIG_PPC_BOOK3S_64 */
+
+#ifndef __ASSEMBLY__
+/* MMU initialization */
+extern void early_init_mmu(void);
+extern void early_init_mmu_secondary(void);
+extern void setup_initial_memory_limit(phys_addr_t first_memblock_base,
+ phys_addr_t first_memblock_size);
+#endif /* __ASSEMBLY__ */
+#endif
+
+#if defined(CONFIG_PPC_STD_MMU_32)
/* 32-bit classic hash table MMU */
#include <asm/book3s/32/mmu-hash.h>
#elif defined(CONFIG_40x)
@@ -201,6 +225,9 @@ static inline void assert_pte_locked(struct mm_struct *mm, unsigned long addr)
# include <asm/mmu-8xx.h>
#endif
+#ifndef radix_enabled
+#define radix_enabled() (0)
+#endif
#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_MMU_H_ */
diff --git a/arch/powerpc/include/asm/mmu_context.h b/arch/powerpc/include/asm/mmu_context.h
index 4eaab40e3ade..9d2cd0c36ec2 100644
--- a/arch/powerpc/include/asm/mmu_context.h
+++ b/arch/powerpc/include/asm/mmu_context.h
@@ -33,16 +33,27 @@ extern long mm_iommu_ua_to_hpa(struct mm_iommu_table_group_mem_t *mem,
extern long mm_iommu_mapped_inc(struct mm_iommu_table_group_mem_t *mem);
extern void mm_iommu_mapped_dec(struct mm_iommu_table_group_mem_t *mem);
#endif
-
-extern void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next);
extern void switch_slb(struct task_struct *tsk, struct mm_struct *mm);
extern void set_context(unsigned long id, pgd_t *pgd);
#ifdef CONFIG_PPC_BOOK3S_64
+extern void radix__switch_mmu_context(struct mm_struct *prev,
+ struct mm_struct *next);
+static inline void switch_mmu_context(struct mm_struct *prev,
+ struct mm_struct *next,
+ struct task_struct *tsk)
+{
+ if (radix_enabled())
+ return radix__switch_mmu_context(prev, next);
+ return switch_slb(tsk, next);
+}
+
extern int __init_new_context(void);
extern void __destroy_context(int context_id);
static inline void mmu_context_init(void) { }
#else
+extern void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next,
+ struct task_struct *tsk);
extern unsigned long __init_new_context(void);
extern void __destroy_context(unsigned long context_id);
extern void mmu_context_init(void);
@@ -88,17 +99,11 @@ static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
if (cpu_has_feature(CPU_FTR_ALTIVEC))
asm volatile ("dssall");
#endif /* CONFIG_ALTIVEC */
-
- /* The actual HW switching method differs between the various
- * sub architectures.
+ /*
+ * The actual HW switching method differs between the various
+ * sub-architectures. Out of line for now.
*/
-#ifdef CONFIG_PPC_STD_MMU_64
- switch_slb(tsk, next);
-#else
- /* Out of line for now */
- switch_mmu_context(prev, next);
-#endif
-
+ switch_mmu_context(prev, next, tsk);
}
#define deactivate_mm(tsk,mm) do { } while (0)
diff --git a/arch/powerpc/include/asm/pgalloc-32.h b/arch/powerpc/include/asm/nohash/32/pgalloc.h
index 76d6b9e0c8a9..76d6b9e0c8a9 100644
--- a/arch/powerpc/include/asm/pgalloc-32.h
+++ b/arch/powerpc/include/asm/nohash/32/pgalloc.h
diff --git a/arch/powerpc/include/asm/pgalloc-64.h b/arch/powerpc/include/asm/nohash/64/pgalloc.h
index 8d5fc3ac43da..0c12a3bfe2ab 100644
--- a/arch/powerpc/include/asm/pgalloc-64.h
+++ b/arch/powerpc/include/asm/nohash/64/pgalloc.h
@@ -53,7 +53,7 @@ static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
#ifndef CONFIG_PPC_64K_PAGES
-#define pgd_populate(MM, PGD, PUD) pgd_set(PGD, __pgtable_ptr_val(PUD))
+#define pgd_populate(MM, PGD, PUD) pgd_set(PGD, (unsigned long)PUD)
static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
{
@@ -68,19 +68,19 @@ static inline void pud_free(struct mm_struct *mm, pud_t *pud)
static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
{
- pud_set(pud, __pgtable_ptr_val(pmd));
+ pud_set(pud, (unsigned long)pmd);
}
static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
pte_t *pte)
{
- pmd_set(pmd, __pgtable_ptr_val(pte));
+ pmd_set(pmd, (unsigned long)pte);
}
static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
pgtable_t pte_page)
{
- pmd_set(pmd, __pgtable_ptr_val(page_address(pte_page)));
+ pmd_set(pmd, (unsigned long)page_address(pte_page));
}
#define pmd_pgtable(pmd) pmd_page(pmd)
@@ -119,119 +119,65 @@ static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
__free_page(ptepage);
}
-static inline void pgtable_free(void *table, unsigned index_size)
-{
- if (!index_size)
- free_page((unsigned long)table);
- else {
- BUG_ON(index_size > MAX_PGTABLE_INDEX_SIZE);
- kmem_cache_free(PGT_CACHE(index_size), table);
- }
-}
-
+extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift);
#ifdef CONFIG_SMP
-static inline void pgtable_free_tlb(struct mmu_gather *tlb,
- void *table, int shift)
-{
- unsigned long pgf = (unsigned long)table;
- BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
- pgf |= shift;
- tlb_remove_table(tlb, (void *)pgf);
-}
-
-static inline void __tlb_remove_table(void *_table)
-{
- void *table = (void *)((unsigned long)_table & ~MAX_PGTABLE_INDEX_SIZE);
- unsigned shift = (unsigned long)_table & MAX_PGTABLE_INDEX_SIZE;
-
- pgtable_free(table, shift);
-}
-#else /* !CONFIG_SMP */
-static inline void pgtable_free_tlb(struct mmu_gather *tlb,
- void *table, int shift)
-{
- pgtable_free(table, shift);
-}
-#endif /* CONFIG_SMP */
-
+extern void __tlb_remove_table(void *_table);
+#endif
static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
unsigned long address)
{
tlb_flush_pgtable(tlb, address);
- pgtable_page_dtor(table);
pgtable_free_tlb(tlb, page_address(table), 0);
}
#else /* if CONFIG_PPC_64K_PAGES */
-extern pte_t *page_table_alloc(struct mm_struct *, unsigned long, int);
-extern void page_table_free(struct mm_struct *, unsigned long *, int);
+extern pte_t *pte_fragment_alloc(struct mm_struct *, unsigned long, int);
+extern void pte_fragment_free(unsigned long *, int);
extern void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift);
#ifdef CONFIG_SMP
extern void __tlb_remove_table(void *_table);
#endif
-#ifndef __PAGETABLE_PUD_FOLDED
-/* book3s 64 is 4 level page table */
-static inline void pgd_populate(struct mm_struct *mm, pgd_t *pgd, pud_t *pud)
-{
- pgd_set(pgd, __pgtable_ptr_val(pud));
-}
-
-static inline pud_t *pud_alloc_one(struct mm_struct *mm, unsigned long addr)
-{
- return kmem_cache_alloc(PGT_CACHE(PUD_INDEX_SIZE),
- GFP_KERNEL|__GFP_REPEAT);
-}
-
-static inline void pud_free(struct mm_struct *mm, pud_t *pud)
-{
- kmem_cache_free(PGT_CACHE(PUD_INDEX_SIZE), pud);
-}
-#endif
-
-static inline void pud_populate(struct mm_struct *mm, pud_t *pud, pmd_t *pmd)
-{
- pud_set(pud, __pgtable_ptr_val(pmd));
-}
+#define pud_populate(mm, pud, pmd) pud_set(pud, (unsigned long)pmd)
static inline void pmd_populate_kernel(struct mm_struct *mm, pmd_t *pmd,
pte_t *pte)
{
- pmd_set(pmd, __pgtable_ptr_val(pte));
+ pmd_set(pmd, (unsigned long)pte);
}
static inline void pmd_populate(struct mm_struct *mm, pmd_t *pmd,
pgtable_t pte_page)
{
- pmd_set(pmd, __pgtable_ptr_val(pte_page));
+ pmd_set(pmd, (unsigned long)pte_page);
}
static inline pgtable_t pmd_pgtable(pmd_t pmd)
{
- return (pgtable_t)pmd_page_vaddr(pmd);
+ return (pgtable_t)(pmd_val(pmd) & ~PMD_MASKED_BITS);
}
static inline pte_t *pte_alloc_one_kernel(struct mm_struct *mm,
unsigned long address)
{
- return (pte_t *)page_table_alloc(mm, address, 1);
+ return (pte_t *)pte_fragment_alloc(mm, address, 1);
}
static inline pgtable_t pte_alloc_one(struct mm_struct *mm,
unsigned long address)
{
- return (pgtable_t)page_table_alloc(mm, address, 0);
+ return (pgtable_t)pte_fragment_alloc(mm, address, 0);
}
static inline void pte_free_kernel(struct mm_struct *mm, pte_t *pte)
{
- page_table_free(mm, (unsigned long *)pte, 1);
+ pte_fragment_free((unsigned long *)pte, 1);
}
static inline void pte_free(struct mm_struct *mm, pgtable_t ptepage)
{
- page_table_free(mm, (unsigned long *)ptepage, 0);
+ pte_fragment_free((unsigned long *)ptepage, 0);
}
static inline void __pte_free_tlb(struct mmu_gather *tlb, pgtable_t table,
@@ -255,11 +201,11 @@ static inline void pmd_free(struct mm_struct *mm, pmd_t *pmd)
#define __pmd_free_tlb(tlb, pmd, addr) \
pgtable_free_tlb(tlb, pmd, PMD_CACHE_INDEX)
-#ifndef __PAGETABLE_PUD_FOLDED
+#ifndef CONFIG_PPC_64K_PAGES
#define __pud_free_tlb(tlb, pud, addr) \
pgtable_free_tlb(tlb, pud, PUD_INDEX_SIZE)
-#endif /* __PAGETABLE_PUD_FOLDED */
+#endif /* CONFIG_PPC_64K_PAGES */
#define check_pgt_cache() do { } while (0)
diff --git a/arch/powerpc/include/asm/nohash/64/pgtable.h b/arch/powerpc/include/asm/nohash/64/pgtable.h
index 10debb93c4a4..d4d808cf905e 100644
--- a/arch/powerpc/include/asm/nohash/64/pgtable.h
+++ b/arch/powerpc/include/asm/nohash/64/pgtable.h
@@ -108,9 +108,6 @@
#ifndef __ASSEMBLY__
/* pte_clear moved to later in this file */
-/* Pointers in the page table tree are virtual addresses */
-#define __pgtable_ptr_val(ptr) ((unsigned long)(ptr))
-
#define PMD_BAD_BITS (PTE_TABLE_SIZE-1)
#define PUD_BAD_BITS (PMD_TABLE_SIZE-1)
@@ -362,6 +359,13 @@ static inline void __ptep_set_access_flags(pte_t *ptep, pte_t entry)
void pgtable_cache_add(unsigned shift, void (*ctor)(void *));
void pgtable_cache_init(void);
+extern int map_kernel_page(unsigned long ea, unsigned long pa,
+ unsigned long flags);
+extern int __meminit vmemmap_create_mapping(unsigned long start,
+ unsigned long page_size,
+ unsigned long phys);
+extern void vmemmap_remove_mapping(unsigned long start,
+ unsigned long page_size);
#endif /* __ASSEMBLY__ */
#endif /* _ASM_POWERPC_NOHASH_64_PGTABLE_H */
diff --git a/arch/powerpc/include/asm/nohash/pgalloc.h b/arch/powerpc/include/asm/nohash/pgalloc.h
new file mode 100644
index 000000000000..b39ec956d71e
--- /dev/null
+++ b/arch/powerpc/include/asm/nohash/pgalloc.h
@@ -0,0 +1,23 @@
+#ifndef _ASM_POWERPC_NOHASH_PGALLOC_H
+#define _ASM_POWERPC_NOHASH_PGALLOC_H
+
+#include <linux/mm.h>
+
+extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
+#ifdef CONFIG_PPC64
+extern void tlb_flush_pgtable(struct mmu_gather *tlb, unsigned long address);
+#else
+/* 44x etc which is BOOKE not BOOK3E */
+static inline void tlb_flush_pgtable(struct mmu_gather *tlb,
+ unsigned long address)
+{
+
+}
+#endif /* CONFIG_PPC64 */
+
+#ifdef CONFIG_PPC64
+#include <asm/nohash/64/pgalloc.h>
+#else
+#include <asm/nohash/32/pgalloc.h>
+#endif
+#endif /* _ASM_POWERPC_NOHASH_PGALLOC_H */
diff --git a/arch/powerpc/include/asm/opal-api.h b/arch/powerpc/include/asm/opal-api.h
index f8faaaeeca1e..9bb8ddf0be37 100644
--- a/arch/powerpc/include/asm/opal-api.h
+++ b/arch/powerpc/include/asm/opal-api.h
@@ -368,16 +368,16 @@ enum OpalLPCAddressType {
};
enum opal_msg_type {
- OPAL_MSG_ASYNC_COMP = 0, /* params[0] = token, params[1] = rc,
+ OPAL_MSG_ASYNC_COMP = 0, /* params[0] = token, params[1] = rc,
* additional params function-specific
*/
- OPAL_MSG_MEM_ERR,
- OPAL_MSG_EPOW,
- OPAL_MSG_SHUTDOWN, /* params[0] = 1 reboot, 0 shutdown */
- OPAL_MSG_HMI_EVT,
- OPAL_MSG_DPO,
- OPAL_MSG_PRD,
- OPAL_MSG_OCC,
+ OPAL_MSG_MEM_ERR = 1,
+ OPAL_MSG_EPOW = 2,
+ OPAL_MSG_SHUTDOWN = 3, /* params[0] = 1 reboot, 0 shutdown */
+ OPAL_MSG_HMI_EVT = 4,
+ OPAL_MSG_DPO = 5,
+ OPAL_MSG_PRD = 6,
+ OPAL_MSG_OCC = 7,
OPAL_MSG_TYPE_MAX,
};
diff --git a/arch/powerpc/include/asm/page.h b/arch/powerpc/include/asm/page.h
index ab3d8977bacd..51db3a37bced 100644
--- a/arch/powerpc/include/asm/page.h
+++ b/arch/powerpc/include/asm/page.h
@@ -288,7 +288,11 @@ extern long long virt_phys_offset;
#ifndef __ASSEMBLY__
+#ifdef CONFIG_PPC_BOOK3S_64
+#include <asm/pgtable-be-types.h>
+#else
#include <asm/pgtable-types.h>
+#endif
typedef struct { signed long pd; } hugepd_t;
@@ -312,12 +316,20 @@ void arch_free_page(struct page *page, int order);
#endif
struct vm_area_struct;
-
+#ifdef CONFIG_PPC_BOOK3S_64
+/*
+ * For Book3S 64 with 4K and 64K Linux page sizes
+ * we want to use pointers, because the page table
+ * actually stores the pfn.
+ */
+typedef pte_t *pgtable_t;
+#else
#if defined(CONFIG_PPC_64K_PAGES) && defined(CONFIG_PPC64)
typedef pte_t *pgtable_t;
#else
typedef struct page *pgtable_t;
#endif
+#endif
#include <asm-generic/memory_model.h>
#endif /* __ASSEMBLY__ */
diff --git a/arch/powerpc/include/asm/page_64.h b/arch/powerpc/include/asm/page_64.h
index d908a46d05c0..dd5f0712afa2 100644
--- a/arch/powerpc/include/asm/page_64.h
+++ b/arch/powerpc/include/asm/page_64.h
@@ -93,7 +93,7 @@ extern u64 ppc64_pft_size;
#define SLICE_LOW_TOP (0x100000000ul)
#define SLICE_NUM_LOW (SLICE_LOW_TOP >> SLICE_LOW_SHIFT)
-#define SLICE_NUM_HIGH (PGTABLE_RANGE >> SLICE_HIGH_SHIFT)
+#define SLICE_NUM_HIGH (H_PGTABLE_RANGE >> SLICE_HIGH_SHIFT)
#define GET_LOW_SLICE_INDEX(addr) ((addr) >> SLICE_LOW_SHIFT)
#define GET_HIGH_SLICE_INDEX(addr) ((addr) >> SLICE_HIGH_SHIFT)
@@ -128,8 +128,6 @@ extern void slice_set_user_psize(struct mm_struct *mm, unsigned int psize);
extern void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
unsigned long len, unsigned int psize);
-#define slice_mm_new_context(mm) ((mm)->context.id == MMU_NO_CONTEXT)
-
#endif /* __ASSEMBLY__ */
#else
#define slice_init()
@@ -151,7 +149,6 @@ do { \
#define slice_set_range_psize(mm, start, len, psize) \
slice_set_user_psize((mm), (psize))
-#define slice_mm_new_context(mm) 1
#endif /* CONFIG_PPC_MM_SLICES */
#ifdef CONFIG_HUGETLB_PAGE
diff --git a/arch/powerpc/include/asm/pci-bridge.h b/arch/powerpc/include/asm/pci-bridge.h
index f5056e3394b4..467c0b05b6fb 100644
--- a/arch/powerpc/include/asm/pci-bridge.h
+++ b/arch/powerpc/include/asm/pci-bridge.h
@@ -17,33 +17,34 @@ struct device_node;
* PCI controller operations
*/
struct pci_controller_ops {
- void (*dma_dev_setup)(struct pci_dev *dev);
+ void (*dma_dev_setup)(struct pci_dev *pdev);
void (*dma_bus_setup)(struct pci_bus *bus);
- int (*probe_mode)(struct pci_bus *);
+ int (*probe_mode)(struct pci_bus *bus);
/* Called when pci_enable_device() is called. Returns true to
* allow assignment/enabling of the device. */
- bool (*enable_device_hook)(struct pci_dev *);
+ bool (*enable_device_hook)(struct pci_dev *pdev);
- void (*disable_device)(struct pci_dev *);
+ void (*disable_device)(struct pci_dev *pdev);
- void (*release_device)(struct pci_dev *);
+ void (*release_device)(struct pci_dev *pdev);
/* Called during PCI resource reassignment */
- resource_size_t (*window_alignment)(struct pci_bus *, unsigned long type);
- void (*reset_secondary_bus)(struct pci_dev *dev);
+ resource_size_t (*window_alignment)(struct pci_bus *bus,
+ unsigned long type);
+ void (*reset_secondary_bus)(struct pci_dev *pdev);
#ifdef CONFIG_PCI_MSI
- int (*setup_msi_irqs)(struct pci_dev *dev,
+ int (*setup_msi_irqs)(struct pci_dev *pdev,
int nvec, int type);
- void (*teardown_msi_irqs)(struct pci_dev *dev);
+ void (*teardown_msi_irqs)(struct pci_dev *pdev);
#endif
- int (*dma_set_mask)(struct pci_dev *dev, u64 dma_mask);
- u64 (*dma_get_required_mask)(struct pci_dev *dev);
+ int (*dma_set_mask)(struct pci_dev *pdev, u64 dma_mask);
+ u64 (*dma_get_required_mask)(struct pci_dev *pdev);
- void (*shutdown)(struct pci_controller *);
+ void (*shutdown)(struct pci_controller *hose);
};
/*
@@ -208,14 +209,14 @@ struct pci_dn {
#ifdef CONFIG_EEH
struct eeh_dev *edev; /* eeh device */
#endif
-#define IODA_INVALID_PE (-1)
+#define IODA_INVALID_PE 0xFFFFFFFF
#ifdef CONFIG_PPC_POWERNV
- int pe_number;
+ unsigned int pe_number;
int vf_index; /* VF index in the PF */
#ifdef CONFIG_PCI_IOV
u16 vfs_expanded; /* number of VFs IOV BAR expanded */
u16 num_vfs; /* number of VFs enabled*/
- int *pe_num_map; /* PE# for the first VF PE or array */
+ unsigned int *pe_num_map; /* PE# for the first VF PE or array */
bool m64_single_mode; /* Use M64 BAR in Single Mode */
#define IODA_INVALID_M64 (-1)
int (*m64_map)[PCI_SRIOV_NUM_BARS];
@@ -234,7 +235,9 @@ extern struct pci_dn *pci_get_pdn_by_devfn(struct pci_bus *bus,
extern struct pci_dn *pci_get_pdn(struct pci_dev *pdev);
extern struct pci_dn *add_dev_pci_data(struct pci_dev *pdev);
extern void remove_dev_pci_data(struct pci_dev *pdev);
-extern void *update_dn_pci_info(struct device_node *dn, void *data);
+extern struct pci_dn *pci_add_device_node_info(struct pci_controller *hose,
+ struct device_node *dn);
+extern void pci_remove_device_node_info(struct device_node *dn);
static inline int pci_device_from_OF_node(struct device_node *np,
u8 *bus, u8 *devfn)
@@ -256,13 +259,13 @@ static inline struct eeh_dev *pdn_to_eeh_dev(struct pci_dn *pdn)
#endif
/** Find the bus corresponding to the indicated device node */
-extern struct pci_bus *pcibios_find_pci_bus(struct device_node *dn);
+extern struct pci_bus *pci_find_bus_by_node(struct device_node *dn);
/** Remove all of the PCI devices under this bus */
-extern void pcibios_remove_pci_devices(struct pci_bus *bus);
+extern void pci_hp_remove_devices(struct pci_bus *bus);
/** Discover new pci devices under this bus, and add them */
-extern void pcibios_add_pci_devices(struct pci_bus *bus);
+extern void pci_hp_add_devices(struct pci_bus *bus);
extern void isa_bridge_find_early(struct pci_controller *hose);
diff --git a/arch/powerpc/include/asm/pgalloc.h b/arch/powerpc/include/asm/pgalloc.h
index fc3ee06eab87..0413457ba11d 100644
--- a/arch/powerpc/include/asm/pgalloc.h
+++ b/arch/powerpc/include/asm/pgalloc.h
@@ -1,25 +1,12 @@
#ifndef _ASM_POWERPC_PGALLOC_H
#define _ASM_POWERPC_PGALLOC_H
-#ifdef __KERNEL__
#include <linux/mm.h>
-#ifdef CONFIG_PPC_BOOK3E
-extern void tlb_flush_pgtable(struct mmu_gather *tlb, unsigned long address);
-#else /* CONFIG_PPC_BOOK3E */
-static inline void tlb_flush_pgtable(struct mmu_gather *tlb,
- unsigned long address)
-{
-}
-#endif /* !CONFIG_PPC_BOOK3E */
-
-extern void tlb_remove_table(struct mmu_gather *tlb, void *table);
-
-#ifdef CONFIG_PPC64
-#include <asm/pgalloc-64.h>
+#ifdef CONFIG_PPC_BOOK3S
+#include <asm/book3s/pgalloc.h>
#else
-#include <asm/pgalloc-32.h>
+#include <asm/nohash/pgalloc.h>
#endif
-#endif /* __KERNEL__ */
#endif /* _ASM_POWERPC_PGALLOC_H */
diff --git a/arch/powerpc/include/asm/pgtable-be-types.h b/arch/powerpc/include/asm/pgtable-be-types.h
new file mode 100644
index 000000000000..e2bf208605b1
--- /dev/null
+++ b/arch/powerpc/include/asm/pgtable-be-types.h
@@ -0,0 +1,92 @@
+#ifndef _ASM_POWERPC_PGTABLE_BE_TYPES_H
+#define _ASM_POWERPC_PGTABLE_BE_TYPES_H
+
+#include <asm/cmpxchg.h>
+
+/* PTE level */
+typedef struct { __be64 pte; } pte_t;
+#define __pte(x) ((pte_t) { cpu_to_be64(x) })
+static inline unsigned long pte_val(pte_t x)
+{
+ return be64_to_cpu(x.pte);
+}
+
+static inline __be64 pte_raw(pte_t x)
+{
+ return x.pte;
+}
+
+/* PMD level */
+#ifdef CONFIG_PPC64
+typedef struct { __be64 pmd; } pmd_t;
+#define __pmd(x) ((pmd_t) { cpu_to_be64(x) })
+static inline unsigned long pmd_val(pmd_t x)
+{
+ return be64_to_cpu(x.pmd);
+}
+
+static inline __be64 pmd_raw(pmd_t x)
+{
+ return x.pmd;
+}
+
+/*
+ * 64 bit hash always use 4 level table. Everybody else use 4 level
+ * only for 4K page size.
+ */
+#if defined(CONFIG_PPC_BOOK3S_64) || !defined(CONFIG_PPC_64K_PAGES)
+typedef struct { __be64 pud; } pud_t;
+#define __pud(x) ((pud_t) { cpu_to_be64(x) })
+static inline unsigned long pud_val(pud_t x)
+{
+ return be64_to_cpu(x.pud);
+}
+#endif /* CONFIG_PPC_BOOK3S_64 || !CONFIG_PPC_64K_PAGES */
+#endif /* CONFIG_PPC64 */
+
+/* PGD level */
+typedef struct { __be64 pgd; } pgd_t;
+#define __pgd(x) ((pgd_t) { cpu_to_be64(x) })
+static inline unsigned long pgd_val(pgd_t x)
+{
+ return be64_to_cpu(x.pgd);
+}
+
+/* Page protection bits */
+typedef struct { unsigned long pgprot; } pgprot_t;
+#define pgprot_val(x) ((x).pgprot)
+#define __pgprot(x) ((pgprot_t) { (x) })
+
+/*
+ * With hash config 64k pages additionally define a bigger "real PTE" type that
+ * gathers the "second half" part of the PTE for pseudo 64k pages
+ */
+#if defined(CONFIG_PPC_64K_PAGES) && defined(CONFIG_PPC_STD_MMU_64)
+typedef struct { pte_t pte; unsigned long hidx; } real_pte_t;
+#else
+typedef struct { pte_t pte; } real_pte_t;
+#endif
+
+static inline bool pte_xchg(pte_t *ptep, pte_t old, pte_t new)
+{
+ unsigned long *p = (unsigned long *)ptep;
+ __be64 prev;
+
+ prev = (__force __be64)__cmpxchg_u64(p, (__force unsigned long)pte_raw(old),
+ (__force unsigned long)pte_raw(new));
+
+ return pte_raw(old) == prev;
+}
+
+static inline bool pmd_xchg(pmd_t *pmdp, pmd_t old, pmd_t new)
+{
+ unsigned long *p = (unsigned long *)pmdp;
+ __be64 prev;
+
+ prev = (__force __be64)__cmpxchg_u64(p, (__force unsigned long)pmd_raw(old),
+ (__force unsigned long)pmd_raw(new));
+
+ return pmd_raw(old) == prev;
+}
+
+#endif /* _ASM_POWERPC_PGTABLE_BE_TYPES_H */
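
[Editorial aside, not part of the patch: pte_xchg() above compares and swaps the raw big-endian image of the PTE, and callers such as radix__pte_update() wrap it in a read/modify/retry loop. Below is a rough user-space sketch of that pattern using GCC atomic builtins and a byte swap to stand in for cpu_to_be64 on a little-endian host; the _sketch names are invented and this is not the kernel implementation.]

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Store the PTE image big-endian, as Book3S 64 does. Assumes an LE host. */
static inline uint64_t cpu_to_be64_sketch(uint64_t x) { return __builtin_bswap64(x); }
static inline uint64_t be64_to_cpu_sketch(uint64_t x) { return __builtin_bswap64(x); }

/* Compare-and-swap on the raw (big-endian) image, like pte_xchg(). */
static bool pte_xchg_sketch(uint64_t *ptep, uint64_t old_raw, uint64_t new_raw)
{
	return __atomic_compare_exchange_n(ptep, &old_raw, new_raw, false,
					   __ATOMIC_SEQ_CST, __ATOMIC_SEQ_CST);
}

/* Read/modify/retry loop, like radix__pte_update(): clear and set bits. */
static uint64_t pte_update_sketch(uint64_t *ptep, uint64_t clr, uint64_t set)
{
	uint64_t old_val, new_val, raw;

	do {
		raw = __atomic_load_n(ptep, __ATOMIC_RELAXED);
		old_val = be64_to_cpu_sketch(raw);
		new_val = (old_val | set) & ~clr;
	} while (!pte_xchg_sketch(ptep, raw, cpu_to_be64_sketch(new_val)));

	return old_val;
}

int main(void)
{
	uint64_t pte = cpu_to_be64_sketch(0x100);	/* some starting bits */

	pte_update_sketch(&pte, 0x100, 0x2);		/* clear 0x100, set 0x2 */
	printf("pte is now %#llx\n",
	       (unsigned long long)be64_to_cpu_sketch(pte));
	return 0;
}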
diff --git a/arch/powerpc/include/asm/pgtable-types.h b/arch/powerpc/include/asm/pgtable-types.h
index 43140f8b0592..e7f4f3e0fcde 100644
--- a/arch/powerpc/include/asm/pgtable-types.h
+++ b/arch/powerpc/include/asm/pgtable-types.h
@@ -1,9 +1,6 @@
#ifndef _ASM_POWERPC_PGTABLE_TYPES_H
#define _ASM_POWERPC_PGTABLE_TYPES_H
-#ifdef CONFIG_STRICT_MM_TYPECHECKS
-/* These are used to make use of C type-checking. */
-
/* PTE level */
typedef struct { pte_basic_t pte; } pte_t;
#define __pte(x) ((pte_t) { (x) })
@@ -48,49 +45,6 @@ typedef struct { unsigned long pgprot; } pgprot_t;
#define pgprot_val(x) ((x).pgprot)
#define __pgprot(x) ((pgprot_t) { (x) })
-#else
-
-/*
- * .. while these make it easier on the compiler
- */
-
-typedef pte_basic_t pte_t;
-#define __pte(x) (x)
-static inline pte_basic_t pte_val(pte_t pte)
-{
- return pte;
-}
-
-#ifdef CONFIG_PPC64
-typedef unsigned long pmd_t;
-#define __pmd(x) (x)
-static inline unsigned long pmd_val(pmd_t pmd)
-{
- return pmd;
-}
-
-#if defined(CONFIG_PPC_BOOK3S_64) || !defined(CONFIG_PPC_64K_PAGES)
-typedef unsigned long pud_t;
-#define __pud(x) (x)
-static inline unsigned long pud_val(pud_t pud)
-{
- return pud;
-}
-#endif /* CONFIG_PPC_BOOK3S_64 || !CONFIG_PPC_64K_PAGES */
-#endif /* CONFIG_PPC64 */
-
-typedef unsigned long pgd_t;
-#define __pgd(x) (x)
-static inline unsigned long pgd_val(pgd_t pgd)
-{
- return pgd;
-}
-
-typedef unsigned long pgprot_t;
-#define pgprot_val(x) (x)
-#define __pgprot(x) (x)
-
-#endif /* CONFIG_STRICT_MM_TYPECHECKS */
/*
* With hash config 64k pages additionally define a bigger "real PTE" type that
* gathers the "second half" part of the PTE for pseudo 64k pages
@@ -100,4 +54,16 @@ typedef struct { pte_t pte; unsigned long hidx; } real_pte_t;
#else
typedef struct { pte_t pte; } real_pte_t;
#endif
+
+#ifdef CONFIG_PPC_STD_MMU_64
+#include <asm/cmpxchg.h>
+
+static inline bool pte_xchg(pte_t *ptep, pte_t old, pte_t new)
+{
+ unsigned long *p = (unsigned long *)ptep;
+
+ return pte_val(old) == __cmpxchg_u64(p, pte_val(old), pte_val(new));
+}
+#endif
+
#endif /* _ASM_POWERPC_PGTABLE_TYPES_H */
diff --git a/arch/powerpc/include/asm/pgtable.h b/arch/powerpc/include/asm/pgtable.h
index 47897a30982d..ee09e99097f0 100644
--- a/arch/powerpc/include/asm/pgtable.h
+++ b/arch/powerpc/include/asm/pgtable.h
@@ -65,7 +65,6 @@ extern int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
struct page **pages, int *nr);
#ifndef CONFIG_TRANSPARENT_HUGEPAGE
#define pmd_large(pmd) 0
-#define has_transparent_hugepage() 0
#endif
pte_t *__find_linux_pte_or_hugepte(pgd_t *pgdir, unsigned long ea,
bool *is_thp, unsigned *shift);
diff --git a/arch/powerpc/include/asm/ppc-opcode.h b/arch/powerpc/include/asm/ppc-opcode.h
index 7ab04fc59e24..1d035c1cc889 100644
--- a/arch/powerpc/include/asm/ppc-opcode.h
+++ b/arch/powerpc/include/asm/ppc-opcode.h
@@ -131,6 +131,7 @@
/* sorted alphabetically */
#define PPC_INST_BHRBE 0x7c00025c
#define PPC_INST_CLRBHRB 0x7c00035c
+#define PPC_INST_CP_ABORT 0x7c00068c
#define PPC_INST_DCBA 0x7c0005ec
#define PPC_INST_DCBA_MASK 0xfc0007fe
#define PPC_INST_DCBAL 0x7c2005ec
@@ -285,6 +286,7 @@
#endif
/* Deal with instructions that older assemblers aren't aware of */
+#define PPC_CP_ABORT stringify_in_c(.long PPC_INST_CP_ABORT)
#define PPC_DCBAL(a, b) stringify_in_c(.long PPC_INST_DCBAL | \
__PPC_RA(a) | __PPC_RB(b))
#define PPC_DCBZL(a, b) stringify_in_c(.long PPC_INST_DCBZL | \
diff --git a/arch/powerpc/include/asm/ppc-pci.h b/arch/powerpc/include/asm/ppc-pci.h
index ca0c5bff7849..8753e4eb9ab5 100644
--- a/arch/powerpc/include/asm/ppc-pci.h
+++ b/arch/powerpc/include/asm/ppc-pci.h
@@ -33,9 +33,9 @@ extern struct pci_dev *isa_bridge_pcidev; /* may be NULL if no ISA bus */
struct device_node;
struct pci_dn;
-typedef void *(*traverse_func)(struct device_node *me, void *data);
-void *traverse_pci_devices(struct device_node *start, traverse_func pre,
- void *data);
+void *pci_traverse_device_nodes(struct device_node *start,
+ void *(*fn)(struct device_node *, void *),
+ void *data);
void *traverse_pci_dn(struct pci_dn *root,
void *(*fn)(struct pci_dn *, void *),
void *data);
diff --git a/arch/powerpc/include/asm/ppc_asm.h b/arch/powerpc/include/asm/ppc_asm.h
index 499d9f89435a..2b31632376a5 100644
--- a/arch/powerpc/include/asm/ppc_asm.h
+++ b/arch/powerpc/include/asm/ppc_asm.h
@@ -427,7 +427,10 @@ END_FTR_SECTION_IFCLR(CPU_FTR_601)
li r4,1024; \
mtctr r4; \
lis r4,KERNELBASE@h; \
+ .machine push; \
+ .machine "power4"; \
0: tlbie r4; \
+ .machine pop; \
addi r4,r4,0x1000; \
bdnz 0b
#endif
diff --git a/arch/powerpc/include/asm/pte-common.h b/arch/powerpc/include/asm/pte-common.h
index 1ec67b043065..2eeaf80d41b7 100644
--- a/arch/powerpc/include/asm/pte-common.h
+++ b/arch/powerpc/include/asm/pte-common.h
@@ -76,6 +76,16 @@
*/
#ifndef __ASSEMBLY__
extern unsigned long bad_call_to_PMD_PAGE_SIZE(void);
+
+/*
+ * Don't just check for any non zero bits in __PAGE_USER, since for book3e
+ * and PTE_64BIT, PAGE_KERNEL_X contains _PAGE_BAP_SR which is also in
+ * _PAGE_USER. Need to explicitly match _PAGE_BAP_UR bit in that case too.
+ */
+static inline bool pte_user(pte_t pte)
+{
+ return (pte_val(pte) & _PAGE_USER) == _PAGE_USER;
+}
#endif /* __ASSEMBLY__ */
/* Location of the PFN in the PTE. Most 32-bit platforms use the same
@@ -184,13 +194,6 @@ extern unsigned long bad_call_to_PMD_PAGE_SIZE(void);
/* Make modules code happy. We don't set RO yet */
#define PAGE_KERNEL_EXEC PAGE_KERNEL_X
-/*
- * Don't just check for any non zero bits in __PAGE_USER, since for book3e
- * and PTE_64BIT, PAGE_KERNEL_X contains _PAGE_BAP_SR which is also in
- * _PAGE_USER. Need to explicitly match _PAGE_BAP_UR bit in that case too.
- */
-#define pte_user(val) ((val & _PAGE_USER) == _PAGE_USER)
-
/* Advertise special mapping type for AGP */
#define PAGE_AGP (PAGE_KERNEL_NC)
#define HAVE_PAGE_AGP
@@ -198,3 +201,12 @@ extern unsigned long bad_call_to_PMD_PAGE_SIZE(void);
/* Advertise support for _PAGE_SPECIAL */
#define __HAVE_ARCH_PTE_SPECIAL
+#ifndef _PAGE_READ
+/* if not defined, we should not find _PAGE_WRITE too */
+#define _PAGE_READ 0
+#define _PAGE_WRITE _PAGE_RW
+#endif
+
+#ifndef H_PAGE_4K_PFN
+#define H_PAGE_4K_PFN 0
+#endif
diff --git a/arch/powerpc/include/asm/reg.h b/arch/powerpc/include/asm/reg.h
index f5f4c66bbbc9..c1e82e968506 100644
--- a/arch/powerpc/include/asm/reg.h
+++ b/arch/powerpc/include/asm/reg.h
@@ -347,6 +347,7 @@
#define LPCR_LPES_SH 2
#define LPCR_RMI 0x00000002 /* real mode is cache inhibit */
#define LPCR_HDICE 0x00000001 /* Hyp Decr enable (HV,PR,EE) */
+#define LPCR_UPRT 0x00400000 /* Use Process Table (ISA 3) */
#ifndef SPRN_LPID
#define SPRN_LPID 0x13F /* Logical Partition Identifier */
#endif
@@ -587,6 +588,7 @@
#define SPRN_PIR 0x3FF /* Processor Identification Register */
#endif
#define SPRN_TIR 0x1BE /* Thread Identification Register */
+#define SPRN_PTCR 0x1D0 /* Partition table control Register */
#define SPRN_PSPB 0x09F /* Problem State Priority Boost reg */
#define SPRN_PTEHI 0x3D5 /* 981 7450 PTE HI word (S/W TLB load) */
#define SPRN_PTELO 0x3D6 /* 982 7450 PTE LO word (S/W TLB load) */
@@ -1182,6 +1184,7 @@
#define PVR_970GX 0x0045
#define PVR_POWER7p 0x004A
#define PVR_POWER8E 0x004B
+#define PVR_POWER8NVL 0x004C
#define PVR_POWER8 0x004D
#define PVR_BE 0x0070
#define PVR_PA6T 0x0090
diff --git a/arch/powerpc/include/asm/thread_info.h b/arch/powerpc/include/asm/thread_info.h
index 7efee4a3240b..8febc3f66d53 100644
--- a/arch/powerpc/include/asm/thread_info.h
+++ b/arch/powerpc/include/asm/thread_info.h
@@ -43,7 +43,9 @@ struct thread_info {
int preempt_count; /* 0 => preemptable,
<0 => BUG */
unsigned long local_flags; /* private flags for thread */
-
+#ifdef CONFIG_LIVEPATCH
+ unsigned long *livepatch_sp;
+#endif
/* low level flags - has atomic operations done on it */
unsigned long flags ____cacheline_aligned_in_smp;
};
diff --git a/arch/powerpc/include/asm/tlbflush.h b/arch/powerpc/include/asm/tlbflush.h
index 9f77f85e3e99..1b38eea28e5a 100644
--- a/arch/powerpc/include/asm/tlbflush.h
+++ b/arch/powerpc/include/asm/tlbflush.h
@@ -58,6 +58,7 @@ extern void __flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
#elif defined(CONFIG_PPC_STD_MMU_32)
+#define MMU_NO_CONTEXT (0)
/*
* TLB flushing for "classic" hash-MMU 32-bit CPUs, 6xx, 7xx, 7xxx
*/
@@ -78,7 +79,7 @@ static inline void local_flush_tlb_mm(struct mm_struct *mm)
}
#elif defined(CONFIG_PPC_STD_MMU_64)
-#include <asm/book3s/64/tlbflush-hash.h>
+#include <asm/book3s/64/tlbflush.h>
#else
#error Unsupported MMU type
#endif
diff --git a/arch/powerpc/include/uapi/asm/perf_regs.h b/arch/powerpc/include/uapi/asm/perf_regs.h
new file mode 100644
index 000000000000..6a93209748a1
--- /dev/null
+++ b/arch/powerpc/include/uapi/asm/perf_regs.h
@@ -0,0 +1,50 @@
+#ifndef _UAPI_ASM_POWERPC_PERF_REGS_H
+#define _UAPI_ASM_POWERPC_PERF_REGS_H
+
+enum perf_event_powerpc_regs {
+ PERF_REG_POWERPC_R0,
+ PERF_REG_POWERPC_R1,
+ PERF_REG_POWERPC_R2,
+ PERF_REG_POWERPC_R3,
+ PERF_REG_POWERPC_R4,
+ PERF_REG_POWERPC_R5,
+ PERF_REG_POWERPC_R6,
+ PERF_REG_POWERPC_R7,
+ PERF_REG_POWERPC_R8,
+ PERF_REG_POWERPC_R9,
+ PERF_REG_POWERPC_R10,
+ PERF_REG_POWERPC_R11,
+ PERF_REG_POWERPC_R12,
+ PERF_REG_POWERPC_R13,
+ PERF_REG_POWERPC_R14,
+ PERF_REG_POWERPC_R15,
+ PERF_REG_POWERPC_R16,
+ PERF_REG_POWERPC_R17,
+ PERF_REG_POWERPC_R18,
+ PERF_REG_POWERPC_R19,
+ PERF_REG_POWERPC_R20,
+ PERF_REG_POWERPC_R21,
+ PERF_REG_POWERPC_R22,
+ PERF_REG_POWERPC_R23,
+ PERF_REG_POWERPC_R24,
+ PERF_REG_POWERPC_R25,
+ PERF_REG_POWERPC_R26,
+ PERF_REG_POWERPC_R27,
+ PERF_REG_POWERPC_R28,
+ PERF_REG_POWERPC_R29,
+ PERF_REG_POWERPC_R30,
+ PERF_REG_POWERPC_R31,
+ PERF_REG_POWERPC_NIP,
+ PERF_REG_POWERPC_MSR,
+ PERF_REG_POWERPC_ORIG_R3,
+ PERF_REG_POWERPC_CTR,
+ PERF_REG_POWERPC_LINK,
+ PERF_REG_POWERPC_XER,
+ PERF_REG_POWERPC_CCR,
+ PERF_REG_POWERPC_SOFTE,
+ PERF_REG_POWERPC_TRAP,
+ PERF_REG_POWERPC_DAR,
+ PERF_REG_POWERPC_DSISR,
+ PERF_REG_POWERPC_MAX,
+};
+#endif /* _UAPI_ASM_POWERPC_PERF_REGS_H */
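
A sketch of how a profiler might consume this enum through the standard perf_event_open() interface, assuming the rest of the powerpc perf_regs support added elsewhere in this series is in place; the helper name is made up:

#include <linux/perf_event.h>
#include <asm/perf_regs.h>

static void example_request_all_regs(struct perf_event_attr *attr)
{
	/* request one bit per register defined above in each sample */
	attr->sample_type |= PERF_SAMPLE_REGS_USER;
	attr->sample_regs_user = (1ULL << PERF_REG_POWERPC_MAX) - 1;
}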
diff --git a/arch/powerpc/kernel/asm-offsets.c b/arch/powerpc/kernel/asm-offsets.c
index 0d0183d3180a..9ea09551a2cd 100644
--- a/arch/powerpc/kernel/asm-offsets.c
+++ b/arch/powerpc/kernel/asm-offsets.c
@@ -86,6 +86,10 @@ int main(void)
DEFINE(KSP_LIMIT, offsetof(struct thread_struct, ksp_limit));
#endif /* CONFIG_PPC64 */
+#ifdef CONFIG_LIVEPATCH
+ DEFINE(TI_livepatch_sp, offsetof(struct thread_info, livepatch_sp));
+#endif
+
DEFINE(KSP, offsetof(struct thread_struct, ksp));
DEFINE(PT_REGS, offsetof(struct thread_struct, regs));
#ifdef CONFIG_BOOKE
@@ -434,7 +438,11 @@ int main(void)
DEFINE(BUG_ENTRY_SIZE, sizeof(struct bug_entry));
#endif
+#ifdef MAX_PGD_TABLE_SIZE
+ DEFINE(PGD_TABLE_SIZE, MAX_PGD_TABLE_SIZE);
+#else
DEFINE(PGD_TABLE_SIZE, PGD_TABLE_SIZE);
+#endif
DEFINE(PTE_SIZE, sizeof(pte_t));
#ifdef CONFIG_KVM
diff --git a/arch/powerpc/kernel/btext.c b/arch/powerpc/kernel/btext.c
index 41c011cb6070..8275858a434d 100644
--- a/arch/powerpc/kernel/btext.c
+++ b/arch/powerpc/kernel/btext.c
@@ -162,7 +162,7 @@ void btext_map(void)
offset = ((unsigned long) dispDeviceBase) - base;
size = dispDeviceRowBytes * dispDeviceRect[3] + offset
+ dispDeviceRect[0];
- vbase = __ioremap(base, size, _PAGE_NO_CACHE);
+ vbase = __ioremap(base, size, pgprot_val(pgprot_noncached_wc(__pgprot(0))));
if (vbase == 0)
return;
logicalDisplayBase = vbase + offset;
diff --git a/arch/powerpc/kernel/cputable.c b/arch/powerpc/kernel/cputable.c
index 6c662b8de90d..eeeacf6235a3 100644
--- a/arch/powerpc/kernel/cputable.c
+++ b/arch/powerpc/kernel/cputable.c
@@ -63,7 +63,6 @@ extern void __setup_cpu_745x(unsigned long offset, struct cpu_spec* spec);
extern void __setup_cpu_ppc970(unsigned long offset, struct cpu_spec* spec);
extern void __setup_cpu_ppc970MP(unsigned long offset, struct cpu_spec* spec);
extern void __setup_cpu_pa6t(unsigned long offset, struct cpu_spec* spec);
-extern void __setup_cpu_a2(unsigned long offset, struct cpu_spec* spec);
extern void __restore_cpu_pa6t(void);
extern void __restore_cpu_ppc970(void);
extern void __setup_cpu_power7(unsigned long offset, struct cpu_spec* spec);
@@ -72,7 +71,6 @@ extern void __setup_cpu_power8(unsigned long offset, struct cpu_spec* spec);
extern void __restore_cpu_power8(void);
extern void __setup_cpu_power9(unsigned long offset, struct cpu_spec* spec);
extern void __restore_cpu_power9(void);
-extern void __restore_cpu_a2(void);
extern void __flush_tlb_power7(unsigned int action);
extern void __flush_tlb_power8(unsigned int action);
extern void __flush_tlb_power9(unsigned int action);
diff --git a/arch/powerpc/kernel/eeh.c b/arch/powerpc/kernel/eeh.c
index 6544017eb90b..c9bc78e9c610 100644
--- a/arch/powerpc/kernel/eeh.c
+++ b/arch/powerpc/kernel/eeh.c
@@ -48,7 +48,7 @@
/** Overview:
- * EEH, or "Extended Error Handling" is a PCI bridge technology for
+ * EEH, or "Enhanced Error Handling" is a PCI bridge technology for
* dealing with PCI bus errors that can't be dealt with within the
* usual PCI framework, except by check-stopping the CPU. Systems
* that are designed for high-availability/reliability cannot afford
@@ -1068,7 +1068,7 @@ void eeh_add_device_early(struct pci_dn *pdn)
struct pci_controller *phb;
struct eeh_dev *edev = pdn_to_eeh_dev(pdn);
- if (!edev || !eeh_enabled())
+ if (!edev)
return;
if (!eeh_has_flag(EEH_PROBE_MODE_DEVTREE))
@@ -1336,14 +1336,11 @@ static int eeh_pe_change_owner(struct eeh_pe *pe)
id->subdevice != pdev->subsystem_device)
continue;
- goto reset;
+ return eeh_pe_reset_and_recover(pe);
}
}
return eeh_unfreeze_pe(pe, true);
-
-reset:
- return eeh_pe_reset_and_recover(pe);
}
/**
diff --git a/arch/powerpc/kernel/eeh_driver.c b/arch/powerpc/kernel/eeh_driver.c
index fb6207d2c604..2714a3b81d24 100644
--- a/arch/powerpc/kernel/eeh_driver.c
+++ b/arch/powerpc/kernel/eeh_driver.c
@@ -171,6 +171,16 @@ static void *eeh_dev_save_state(void *data, void *userdata)
if (!edev)
return NULL;
+ /*
+ * The config space on some adapters can't be accessed
+ * while frozen; touching it would cause a fenced PHB.
+ * So we don't save their config space content here and
+ * will instead restore the initial config space saved
+ * when the EEH device was created.
+ */
+ if (edev->pe && (edev->pe->state & EEH_PE_CFG_RESTRICTED))
+ return NULL;
+
pdev = eeh_dev_to_pci_dev(edev);
if (!pdev)
return NULL;
@@ -312,6 +322,19 @@ static void *eeh_dev_restore_state(void *data, void *userdata)
if (!edev)
return NULL;
+ /*
+ * The config space content wasn't saved above because
+ * the config space is blocked on some adapters. For
+ * those, restore the initial config space that was
+ * saved when the EEH device was created.
+ */
+ if (edev->pe && (edev->pe->state & EEH_PE_CFG_RESTRICTED)) {
+ if (list_is_last(&edev->list, &edev->pe->edevs))
+ eeh_pe_restore_bars(edev->pe);
+
+ return NULL;
+ }
+
pdev = eeh_dev_to_pci_dev(edev);
if (!pdev)
return NULL;
@@ -552,7 +575,7 @@ static int eeh_clear_pe_frozen_state(struct eeh_pe *pe,
int eeh_pe_reset_and_recover(struct eeh_pe *pe)
{
- int result, ret;
+ int ret;
/* Bail if the PE is being recovered */
if (pe->state & EEH_PE_RECOVERING)
@@ -564,9 +587,6 @@ int eeh_pe_reset_and_recover(struct eeh_pe *pe)
/* Save states */
eeh_pe_dev_traverse(pe, eeh_dev_save_state, NULL);
- /* Report error */
- eeh_pe_dev_traverse(pe, eeh_report_error, &result);
-
/* Issue reset */
ret = eeh_reset_pe(pe);
if (ret) {
@@ -581,15 +601,9 @@ int eeh_pe_reset_and_recover(struct eeh_pe *pe)
return ret;
}
- /* Notify completion of reset */
- eeh_pe_dev_traverse(pe, eeh_report_reset, &result);
-
/* Restore device state */
eeh_pe_dev_traverse(pe, eeh_dev_restore_state, NULL);
- /* Resume */
- eeh_pe_dev_traverse(pe, eeh_report_resume, NULL);
-
/* Clear recovery mode */
eeh_pe_state_clear(pe, EEH_PE_RECOVERING);
@@ -621,7 +635,7 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
* We don't remove the corresponding PE instances because
* we need the information afterwords. The attached EEH
* devices are expected to be attached soon when calling
- * into pcibios_add_pci_devices().
+ * into pci_hp_add_devices().
*/
eeh_pe_state_mark(pe, EEH_PE_KEEP);
if (bus) {
@@ -630,7 +644,7 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
} else {
eeh_pe_state_clear(pe, EEH_PE_PRI_BUS);
pci_lock_rescan_remove();
- pcibios_remove_pci_devices(bus);
+ pci_hp_remove_devices(bus);
pci_unlock_rescan_remove();
}
} else if (frozen_bus) {
@@ -681,7 +695,7 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
if (pe->type & EEH_PE_VF)
eeh_add_virt_device(edev, NULL);
else
- pcibios_add_pci_devices(bus);
+ pci_hp_add_devices(bus);
} else if (frozen_bus && rmv_data->removed) {
pr_info("EEH: Sleep 5s ahead of partial hotplug\n");
ssleep(5);
@@ -691,7 +705,7 @@ static int eeh_reset_device(struct eeh_pe *pe, struct pci_bus *bus,
if (pe->type & EEH_PE_VF)
eeh_add_virt_device(edev, NULL);
else
- pcibios_add_pci_devices(frozen_bus);
+ pci_hp_add_devices(frozen_bus);
}
eeh_pe_state_clear(pe, EEH_PE_KEEP);
@@ -896,7 +910,7 @@ perm_error:
eeh_pe_dev_mode_mark(pe, EEH_DEV_REMOVED);
pci_lock_rescan_remove();
- pcibios_remove_pci_devices(frozen_bus);
+ pci_hp_remove_devices(frozen_bus);
pci_unlock_rescan_remove();
}
}
@@ -981,7 +995,7 @@ static void eeh_handle_special_event(void)
bus = eeh_pe_bus_get(phb_pe);
eeh_pe_dev_traverse(pe,
eeh_report_failure, NULL);
- pcibios_remove_pci_devices(bus);
+ pci_hp_remove_devices(bus);
}
pci_unlock_rescan_remove();
}
diff --git a/arch/powerpc/kernel/eeh_event.c b/arch/powerpc/kernel/eeh_event.c
index 4eefb6e34dbb..82e7327e3cd0 100644
--- a/arch/powerpc/kernel/eeh_event.c
+++ b/arch/powerpc/kernel/eeh_event.c
@@ -36,7 +36,7 @@
static DEFINE_SPINLOCK(eeh_eventlist_lock);
static struct semaphore eeh_eventlist_sem;
-LIST_HEAD(eeh_eventlist);
+static LIST_HEAD(eeh_eventlist);
/**
* eeh_event_handler - Dispatch EEH events.
diff --git a/arch/powerpc/kernel/eeh_pe.c b/arch/powerpc/kernel/eeh_pe.c
index eea48d8baf49..f0520da85759 100644
--- a/arch/powerpc/kernel/eeh_pe.c
+++ b/arch/powerpc/kernel/eeh_pe.c
@@ -249,7 +249,7 @@ static void *__eeh_pe_get(void *data, void *flag)
} else {
if (edev->pe_config_addr &&
(edev->pe_config_addr == pe->addr))
- return pe;
+ return pe;
}
/* Try BDF address */
diff --git a/arch/powerpc/kernel/entry_64.S b/arch/powerpc/kernel/entry_64.S
index 9916d150b28c..73e461a3dfbb 100644
--- a/arch/powerpc/kernel/entry_64.S
+++ b/arch/powerpc/kernel/entry_64.S
@@ -20,6 +20,7 @@
#include <linux/errno.h>
#include <linux/err.h>
+#include <linux/magic.h>
#include <asm/unistd.h>
#include <asm/processor.h>
#include <asm/page.h>
@@ -36,6 +37,7 @@
#include <asm/hw_irq.h>
#include <asm/context_tracking.h>
#include <asm/tm.h>
+#include <asm/ppc-opcode.h>
/*
* System calls.
@@ -508,6 +510,14 @@ BEGIN_FTR_SECTION
ldarx r6,0,r1
END_FTR_SECTION_IFSET(CPU_FTR_STCX_CHECKS_ADDRESS)
+BEGIN_FTR_SECTION
+/*
+ * A cp_abort (copy paste abort) here ensures that when context switching, a
+ * copy from one process can't leak into the paste of another.
+ */
+ PPC_CP_ABORT
+END_FTR_SECTION_IFSET(CPU_FTR_ARCH_300)
+
#ifdef CONFIG_PPC_BOOK3S
/* Cancel all explict user streams as they will have no use after context
* switch and will stop the HW from creating streams itself
@@ -519,7 +529,10 @@ END_FTR_SECTION_IFSET(CPU_FTR_STCX_CHECKS_ADDRESS)
std r6,PACACURRENT(r13) /* Set new 'current' */
ld r8,KSP(r4) /* new stack pointer */
-#ifdef CONFIG_PPC_BOOK3S
+#ifdef CONFIG_PPC_STD_MMU_64
+BEGIN_MMU_FTR_SECTION
+ b 2f
+END_MMU_FTR_SECTION_IFSET(MMU_FTR_RADIX)
BEGIN_FTR_SECTION
clrrdi r6,r8,28 /* get its ESID */
clrrdi r9,r1,28 /* get current sp ESID */
@@ -565,7 +578,7 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
slbmte r7,r0
isync
2:
-#endif /* !CONFIG_PPC_BOOK3S */
+#endif /* CONFIG_PPC_STD_MMU_64 */
CURRENT_THREAD_INFO(r7, r8) /* base of new stack */
/* Note: this uses SWITCH_FRAME_SIZE rather than INT_FRAME_SIZE
@@ -1248,6 +1261,9 @@ _GLOBAL(ftrace_caller)
addi r3,r3,function_trace_op@toc@l
ld r5,0(r3)
+#ifdef CONFIG_LIVEPATCH
+ mr r14,r7 /* remember old NIP */
+#endif
/* Calculate ip from nip-4 into r3 for call below */
subi r3, r7, MCOUNT_INSN_SIZE
@@ -1272,6 +1288,9 @@ ftrace_call:
/* Load ctr with the possibly modified NIP */
ld r3, _NIP(r1)
mtctr r3
+#ifdef CONFIG_LIVEPATCH
+ cmpd r14,r3 /* has NIP been altered? */
+#endif
/* Restore gprs */
REST_8GPRS(0,r1)
@@ -1289,6 +1308,11 @@ ftrace_call:
ld r0, LRSAVE(r1)
mtlr r0
+#ifdef CONFIG_LIVEPATCH
+ /* Based on the cmpd above, if the NIP was altered handle livepatch */
+ bne- livepatch_handler
+#endif
+
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
stdu r1, -112(r1)
.globl ftrace_graph_call
@@ -1305,6 +1329,91 @@ _GLOBAL(ftrace_graph_stub)
_GLOBAL(ftrace_stub)
blr
+
+#ifdef CONFIG_LIVEPATCH
+ /*
+ * This function runs in the mcount context, between two functions. As
+ * such it can only clobber registers which are volatile and used in
+ * function linkage.
+ *
+ * We get here when a function A, calls another function B, but B has
+ * been live patched with a new function C.
+ *
+ * On entry:
+ * - we have no stack frame and can not allocate one
+ * - LR points back to the original caller (in A)
+ * - CTR holds the new NIP in C
+ * - r0 & r12 are free
+ *
+ * r0 can't be used as the base register for a DS-form load or store, so
+ * we temporarily shuffle r1 (stack pointer) into r0 and then put it back.
+ */
+livepatch_handler:
+ CURRENT_THREAD_INFO(r12, r1)
+
+ /* Save stack pointer into r0 */
+ mr r0, r1
+
+ /* Allocate 3 x 8 bytes */
+ ld r1, TI_livepatch_sp(r12)
+ addi r1, r1, 24
+ std r1, TI_livepatch_sp(r12)
+
+ /* Save toc & real LR on livepatch stack */
+ std r2, -24(r1)
+ mflr r12
+ std r12, -16(r1)
+
+ /* Store stack end marker */
+ lis r12, STACK_END_MAGIC@h
+ ori r12, r12, STACK_END_MAGIC@l
+ std r12, -8(r1)
+
+ /* Restore real stack pointer */
+ mr r1, r0
+
+ /* Put ctr in r12 for global entry and branch there */
+ mfctr r12
+ bctrl
+
+ /*
+ * Now we are returning from the patched function to the original
+ * caller A. We are free to use r0 and r12, and we can use r2 until we
+ * restore it.
+ */
+
+ CURRENT_THREAD_INFO(r12, r1)
+
+ /* Save stack pointer into r0 */
+ mr r0, r1
+
+ ld r1, TI_livepatch_sp(r12)
+
+ /* Check stack marker hasn't been trashed */
+ lis r2, STACK_END_MAGIC@h
+ ori r2, r2, STACK_END_MAGIC@l
+ ld r12, -8(r1)
+1: tdne r12, r2
+ EMIT_BUG_ENTRY 1b, __FILE__, __LINE__ - 1, 0
+
+ /* Restore LR & toc from livepatch stack */
+ ld r12, -16(r1)
+ mtlr r12
+ ld r2, -24(r1)
+
+ /* Pop livepatch stack frame */
+ CURRENT_THREAD_INFO(r12, r0)
+ subi r1, r1, 24
+ std r1, TI_livepatch_sp(r12)
+
+ /* Restore real stack pointer */
+ mr r1, r0
+
+ /* Return to original caller of live patched function */
+ blr
+#endif
+
+
#else
_GLOBAL_TOC(_mcount)
/* Taken from output of objdump from lib64/glibc */
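
For orientation, the 3 x 8 byte frame that livepatch_handler pushes onto the per-thread livepatch stack can be pictured in C roughly as below; this is only a sketch of the layout the assembly maintains relative to the updated TI_livepatch_sp value, not a structure that exists in the tree:

#include <linux/magic.h>		/* STACK_END_MAGIC */

struct lp_return_frame {		/* illustrative name */
	unsigned long toc;		/* saved r2, stored at sp - 24 */
	unsigned long lr;		/* real return address, at sp - 16 */
	unsigned long magic;		/* STACK_END_MAGIC, at sp - 8 */
};

/*
 * push: ti->livepatch_sp += sizeof(struct lp_return_frame), fill the frame;
 * pop:  trap if the magic was trashed, restore LR and r2, subtract 24 again.
 */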
diff --git a/arch/powerpc/kernel/exceptions-64s.S b/arch/powerpc/kernel/exceptions-64s.S
index 7716cebf4b8e..4c9440629128 100644
--- a/arch/powerpc/kernel/exceptions-64s.S
+++ b/arch/powerpc/kernel/exceptions-64s.S
@@ -189,7 +189,7 @@ END_FTR_SECTION_IFSET(CPU_FTR_HVMODE | CPU_FTR_ARCH_206)
#endif /* CONFIG_PPC_P7_NAP */
EXCEPTION_PROLOG_0(PACA_EXMC)
BEGIN_FTR_SECTION
- b machine_check_pSeries_early
+ b machine_check_powernv_early
FTR_SECTION_ELSE
b machine_check_pSeries_0
ALT_FTR_SECTION_END_IFSET(CPU_FTR_HVMODE)
@@ -209,11 +209,6 @@ data_access_slb_pSeries:
EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST, 0x380)
std r3,PACA_EXSLB+EX_R3(r13)
mfspr r3,SPRN_DAR
-#ifdef __DISABLED__
- /* Keep that around for when we re-implement dynamic VSIDs */
- cmpdi r3,0
- bge slb_miss_user_pseries
-#endif /* __DISABLED__ */
mfspr r12,SPRN_SRR1
#ifndef CONFIG_RELOCATABLE
b slb_miss_realmode
@@ -240,11 +235,6 @@ instruction_access_slb_pSeries:
EXCEPTION_PROLOG_1(PACA_EXSLB, KVMTEST, 0x480)
std r3,PACA_EXSLB+EX_R3(r13)
mfspr r3,SPRN_SRR0 /* SRR0 is faulting address */
-#ifdef __DISABLED__
- /* Keep that around for when we re-implement dynamic VSIDs */
- cmpdi r3,0
- bge slb_miss_user_pseries
-#endif /* __DISABLED__ */
mfspr r12,SPRN_SRR1
#ifndef CONFIG_RELOCATABLE
b slb_miss_realmode
@@ -443,7 +433,7 @@ denorm_exception_hv:
.align 7
/* moved from 0x200 */
-machine_check_pSeries_early:
+machine_check_powernv_early:
BEGIN_FTR_SECTION
EXCEPTION_PROLOG_1(PACA_EXMC, NOTEST, 0x200)
/*
@@ -709,34 +699,6 @@ system_reset_fwnmi:
#endif /* CONFIG_PPC_PSERIES */
-#ifdef __DISABLED__
-/*
- * This is used for when the SLB miss handler has to go virtual,
- * which doesn't happen for now anymore but will once we re-implement
- * dynamic VSIDs for shared page tables
- */
-slb_miss_user_pseries:
- std r10,PACA_EXGEN+EX_R10(r13)
- std r11,PACA_EXGEN+EX_R11(r13)
- std r12,PACA_EXGEN+EX_R12(r13)
- GET_SCRATCH0(r10)
- ld r11,PACA_EXSLB+EX_R9(r13)
- ld r12,PACA_EXSLB+EX_R3(r13)
- std r10,PACA_EXGEN+EX_R13(r13)
- std r11,PACA_EXGEN+EX_R9(r13)
- std r12,PACA_EXGEN+EX_R3(r13)
- clrrdi r12,r13,32
- mfmsr r10
- mfspr r11,SRR0 /* save SRR0 */
- ori r12,r12,slb_miss_user_common@l /* virt addr of handler */
- ori r10,r10,MSR_IR|MSR_DR|MSR_RI
- mtspr SRR0,r12
- mfspr r12,SRR1 /* and SRR1 */
- mtspr SRR1,r10
- rfid
- b . /* prevent spec. execution */
-#endif /* __DISABLED__ */
-
#ifdef CONFIG_KVM_BOOK3S_64_HANDLER
kvmppc_skip_interrupt:
/*
@@ -764,11 +726,10 @@ kvmppc_skip_Hinterrupt:
#endif
/*
- * Code from here down to __end_handlers is invoked from the
- * exception prologs above. Because the prologs assemble the
- * addresses of these handlers using the LOAD_HANDLER macro,
- * which uses an ori instruction, these handlers must be in
- * the first 64k of the kernel image.
+ * Ensure that any handlers that get invoked from the exception prologs
+ * above are below the first 64KB (0x10000) of the kernel image because
+ * the prologs assemble the addresses of these handlers using the
+ * LOAD_HANDLER macro, which uses an ori instruction.
*/
/*** Common interrupt handlers ***/
@@ -953,11 +914,6 @@ hv_facility_unavailable_relon_trampoline:
#endif
STD_RELON_EXCEPTION_PSERIES(0x5700, 0x1700, altivec_assist)
- /* Other future vectors */
- .align 7
- .globl __end_interrupts
-__end_interrupts:
-
.align 7
system_call_entry:
b system_call_common
@@ -983,7 +939,13 @@ data_access_common:
ld r3,PACA_EXGEN+EX_DAR(r13)
lwz r4,PACA_EXGEN+EX_DSISR(r13)
li r5,0x300
+ std r3,_DAR(r1)
+ std r4,_DSISR(r1)
+BEGIN_MMU_FTR_SECTION
b do_hash_page /* Try to handle as hpte fault */
+MMU_FTR_SECTION_ELSE
+ b handle_page_fault
+ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_RADIX)
.align 7
.globl h_data_storage_common
@@ -1008,73 +970,15 @@ instruction_access_common:
ld r3,_NIP(r1)
andis. r4,r12,0x5820
li r5,0x400
+ std r3,_DAR(r1)
+ std r4,_DSISR(r1)
+BEGIN_MMU_FTR_SECTION
b do_hash_page /* Try to handle as hpte fault */
-
- STD_EXCEPTION_COMMON(0xe20, h_instr_storage, unknown_exception)
-
-/*
- * Here is the common SLB miss user that is used when going to virtual
- * mode for SLB misses, that is currently not used
- */
-#ifdef __DISABLED__
- .align 7
- .globl slb_miss_user_common
-slb_miss_user_common:
- mflr r10
- std r3,PACA_EXGEN+EX_DAR(r13)
- stw r9,PACA_EXGEN+EX_CCR(r13)
- std r10,PACA_EXGEN+EX_LR(r13)
- std r11,PACA_EXGEN+EX_SRR0(r13)
- bl slb_allocate_user
-
- ld r10,PACA_EXGEN+EX_LR(r13)
- ld r3,PACA_EXGEN+EX_R3(r13)
- lwz r9,PACA_EXGEN+EX_CCR(r13)
- ld r11,PACA_EXGEN+EX_SRR0(r13)
- mtlr r10
- beq- slb_miss_fault
-
- andi. r10,r12,MSR_RI /* check for unrecoverable exception */
- beq- unrecov_user_slb
- mfmsr r10
-
-.machine push
-.machine "power4"
- mtcrf 0x80,r9
-.machine pop
-
- clrrdi r10,r10,2 /* clear RI before setting SRR0/1 */
- mtmsrd r10,1
-
- mtspr SRR0,r11
- mtspr SRR1,r12
-
- ld r9,PACA_EXGEN+EX_R9(r13)
- ld r10,PACA_EXGEN+EX_R10(r13)
- ld r11,PACA_EXGEN+EX_R11(r13)
- ld r12,PACA_EXGEN+EX_R12(r13)
- ld r13,PACA_EXGEN+EX_R13(r13)
- rfid
- b .
-
-slb_miss_fault:
- EXCEPTION_PROLOG_COMMON(0x380, PACA_EXGEN)
- ld r4,PACA_EXGEN+EX_DAR(r13)
- li r5,0
- std r4,_DAR(r1)
- std r5,_DSISR(r1)
+MMU_FTR_SECTION_ELSE
b handle_page_fault
+ALT_MMU_FTR_SECTION_END_IFCLR(MMU_FTR_RADIX)
-unrecov_user_slb:
- EXCEPTION_PROLOG_COMMON(0x4200, PACA_EXGEN)
- RECONCILE_IRQ_STATE(r10, r11)
- bl save_nvgprs
-1: addi r3,r1,STACK_FRAME_OVERHEAD
- bl unrecoverable_exception
- b 1b
-
-#endif /* __DISABLED__ */
-
+ STD_EXCEPTION_COMMON(0xe20, h_instr_storage, unknown_exception)
/*
* Machine check is different because we use a different
@@ -1230,10 +1134,6 @@ END_FTR_SECTION_IFSET(CPU_FTR_VSX)
STD_EXCEPTION_COMMON(0xf60, facility_unavailable, facility_unavailable_exception)
STD_EXCEPTION_COMMON(0xf80, hv_facility_unavailable, facility_unavailable_exception)
- .align 7
- .globl __end_handlers
-__end_handlers:
-
/* Equivalents to the above handlers for relocation-on interrupt vectors */
STD_RELON_EXCEPTION_HV_OOL(0xe40, emulation_assist)
MASKABLE_RELON_EXCEPTION_HV_OOL(0xe80, h_doorbell)
@@ -1244,6 +1144,17 @@ __end_handlers:
STD_RELON_EXCEPTION_PSERIES_OOL(0xf60, facility_unavailable)
STD_RELON_EXCEPTION_HV_OOL(0xf80, hv_facility_unavailable)
+ /*
+ * The __end_interrupts marker must be past the out-of-line (OOL)
+ * handlers, so that they are copied to real address 0x100 when running
+ * a relocatable kernel. This ensures they can be reached from the short
+ * trampoline handlers (like 0x4f00, 0x4f20, etc.) which branch
+ * directly, without using LOAD_HANDLER().
+ */
+ .align 7
+ .globl __end_interrupts
+__end_interrupts:
+
#if defined(CONFIG_PPC_PSERIES) || defined(CONFIG_PPC_POWERNV)
/*
* Data area reserved for FWNMI option.
@@ -1476,8 +1387,11 @@ slb_miss_realmode:
stw r9,PACA_EXSLB+EX_CCR(r13) /* save CR in exc. frame */
std r10,PACA_EXSLB+EX_LR(r13) /* save LR */
+#ifdef CONFIG_PPC_STD_MMU_64
+BEGIN_MMU_FTR_SECTION
bl slb_allocate_realmode
-
+END_MMU_FTR_SECTION_IFCLR(MMU_FTR_RADIX)
+#endif
/* All done -- return from exception. */
ld r10,PACA_EXSLB+EX_LR(r13)
@@ -1485,7 +1399,9 @@ slb_miss_realmode:
lwz r9,PACA_EXSLB+EX_CCR(r13) /* get saved CR */
mtlr r10
-
+BEGIN_MMU_FTR_SECTION
+ b 2f
+END_MMU_FTR_SECTION_IFSET(MMU_FTR_RADIX)
andi. r10,r12,MSR_RI /* check for unrecoverable exception */
beq- 2f
@@ -1536,9 +1452,7 @@ power4_fixup_nap:
*/
.align 7
do_hash_page:
- std r3,_DAR(r1)
- std r4,_DSISR(r1)
-
+#ifdef CONFIG_PPC_STD_MMU_64
andis. r0,r4,0xa410 /* weird error? */
bne- handle_page_fault /* if not, try to insert a HPTE */
andis. r0,r4,DSISR_DABRMATCH@h
@@ -1566,6 +1480,7 @@ do_hash_page:
/* Error */
blt- 13f
+#endif /* CONFIG_PPC_STD_MMU_64 */
/* Here we have a page fault that hash_page can't handle. */
handle_page_fault:
@@ -1592,6 +1507,7 @@ handle_dabr_fault:
12: b ret_from_except_lite
+#ifdef CONFIG_PPC_STD_MMU_64
/* We have a page fault that hash_page could handle but HV refused
* the PTE insertion
*/
@@ -1601,6 +1517,7 @@ handle_dabr_fault:
ld r4,_DAR(r1)
bl low_hash_fault
b ret_from_except
+#endif
/*
* We come here as a result of a DSI at a point where we don't want
diff --git a/arch/powerpc/kernel/ftrace.c b/arch/powerpc/kernel/ftrace.c
index 9dac18dabd03..1123a4d8d8dd 100644
--- a/arch/powerpc/kernel/ftrace.c
+++ b/arch/powerpc/kernel/ftrace.c
@@ -607,3 +607,13 @@ unsigned long __init arch_syscall_addr(int nr)
return sys_call_table[nr*2];
}
#endif /* CONFIG_FTRACE_SYSCALLS && CONFIG_PPC64 */
+
+#if defined(CONFIG_PPC64) && (!defined(_CALL_ELF) || _CALL_ELF != 2)
+char *arch_ftrace_match_adjust(char *str, const char *search)
+{
+ if (str[0] == '.' && search[0] != '.')
+ return str + 1;
+ else
+ return str;
+}
+#endif /* defined(CONFIG_PPC64) && (!defined(_CALL_ELF) || _CALL_ELF != 2) */
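
The adjustment above only matters on ELFv1, where text symbols carry a leading dot; a small hedged example of the intended behaviour (the symbol and patterns are made up):

static void example_match_adjust(void)
{
	char sym[] = ".schedule";

	/* pattern without the dot: the symbol's leading dot is skipped */
	arch_ftrace_match_adjust(sym, "sched*");	/* returns sym + 1 */

	/* pattern with the dot: the symbol is returned unchanged */
	arch_ftrace_match_adjust(sym, ".sched*");	/* returns sym */
}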
diff --git a/arch/powerpc/kernel/head_64.S b/arch/powerpc/kernel/head_64.S
index 4286775cbde9..2d14774af6b4 100644
--- a/arch/powerpc/kernel/head_64.S
+++ b/arch/powerpc/kernel/head_64.S
@@ -973,13 +973,16 @@ start_here_common:
* This stuff goes at the beginning of the bss, which is page-aligned.
*/
.section ".bss"
+/*
+ * The pgd directory should be aligned to PGD_TABLE_SIZE, which is 64K here.
+ * We will need to find a better way to fix this.
+ */
+ .align 16
- .align PAGE_SHIFT
+ .globl swapper_pg_dir
+swapper_pg_dir:
+ .space PGD_TABLE_SIZE
.globl empty_zero_page
empty_zero_page:
.space PAGE_SIZE
-
- .globl swapper_pg_dir
-swapper_pg_dir:
- .space PGD_TABLE_SIZE
diff --git a/arch/powerpc/kernel/ibmebus.c b/arch/powerpc/kernel/ibmebus.c
index ac86c53e2542..a89f4f7a66bd 100644
--- a/arch/powerpc/kernel/ibmebus.c
+++ b/arch/powerpc/kernel/ibmebus.c
@@ -408,7 +408,7 @@ static ssize_t modalias_show(struct device *dev,
return len+1;
}
-struct device_attribute ibmebus_bus_device_attrs[] = {
+static struct device_attribute ibmebus_bus_device_attrs[] = {
__ATTR_RO(devspec),
__ATTR_RO(name),
__ATTR_RO(modalias),
diff --git a/arch/powerpc/kernel/irq.c b/arch/powerpc/kernel/irq.c
index 290559df1e8b..3cb46a3b1de7 100644
--- a/arch/powerpc/kernel/irq.c
+++ b/arch/powerpc/kernel/irq.c
@@ -66,6 +66,7 @@
#include <asm/udbg.h>
#include <asm/smp.h>
#include <asm/debug.h>
+#include <asm/livepatch.h>
#ifdef CONFIG_PPC64
#include <asm/paca.h>
@@ -607,10 +608,12 @@ void irq_ctx_init(void)
memset((void *)softirq_ctx[i], 0, THREAD_SIZE);
tp = softirq_ctx[i];
tp->cpu = i;
+ klp_init_thread_info(tp);
memset((void *)hardirq_ctx[i], 0, THREAD_SIZE);
tp = hardirq_ctx[i];
tp->cpu = i;
+ klp_init_thread_info(tp);
}
}
diff --git a/arch/powerpc/kernel/isa-bridge.c b/arch/powerpc/kernel/isa-bridge.c
index 0f1997097960..ae1316106e2b 100644
--- a/arch/powerpc/kernel/isa-bridge.c
+++ b/arch/powerpc/kernel/isa-bridge.c
@@ -109,14 +109,14 @@ static void pci_process_ISA_OF_ranges(struct device_node *isa_node,
size = 0x10000;
__ioremap_at(phb_io_base_phys, (void *)ISA_IO_BASE,
- size, _PAGE_NO_CACHE|_PAGE_GUARDED);
+ size, pgprot_val(pgprot_noncached(__pgprot(0))));
return;
inval_range:
printk(KERN_ERR "no ISA IO ranges or unexpected isa range, "
"mapping 64k\n");
__ioremap_at(phb_io_base_phys, (void *)ISA_IO_BASE,
- 0x10000, _PAGE_NO_CACHE|_PAGE_GUARDED);
+ 0x10000, pgprot_val(pgprot_noncached(__pgprot(0))));
}
diff --git a/arch/powerpc/kernel/machine_kexec.c b/arch/powerpc/kernel/machine_kexec.c
index 015ae55c1868..2694d078741d 100644
--- a/arch/powerpc/kernel/machine_kexec.c
+++ b/arch/powerpc/kernel/machine_kexec.c
@@ -228,17 +228,12 @@ static struct property memory_limit_prop = {
static void __init export_crashk_values(struct device_node *node)
{
- struct property *prop;
-
/* There might be existing crash kernel properties, but we can't
* be sure what's in them, so remove them. */
- prop = of_find_property(node, "linux,crashkernel-base", NULL);
- if (prop)
- of_remove_property(node, prop);
-
- prop = of_find_property(node, "linux,crashkernel-size", NULL);
- if (prop)
- of_remove_property(node, prop);
+ of_remove_property(node, of_find_property(node,
+ "linux,crashkernel-base", NULL));
+ of_remove_property(node, of_find_property(node,
+ "linux,crashkernel-size", NULL));
if (crashk_res.start != 0) {
crashk_base = cpu_to_be_ulong(crashk_res.start),
@@ -258,16 +253,13 @@ static void __init export_crashk_values(struct device_node *node)
static int __init kexec_setup(void)
{
struct device_node *node;
- struct property *prop;
node = of_find_node_by_path("/chosen");
if (!node)
return -ENOENT;
/* remove any stale properties so ours can be found */
- prop = of_find_property(node, kernel_end_prop.name, NULL);
- if (prop)
- of_remove_property(node, prop);
+ of_remove_property(node, of_find_property(node, kernel_end_prop.name, NULL));
/* information needed by userspace when using default_machine_kexec */
kernel_end = cpu_to_be_ulong(__pa(_end));
diff --git a/arch/powerpc/kernel/machine_kexec_64.c b/arch/powerpc/kernel/machine_kexec_64.c
index 0fbd75d185d7..b8c202d63ecb 100644
--- a/arch/powerpc/kernel/machine_kexec_64.c
+++ b/arch/powerpc/kernel/machine_kexec_64.c
@@ -76,6 +76,7 @@ int default_machine_kexec_prepare(struct kimage *image)
* end of the blocked region (begin >= high). Use the
* boolean identity !(a || b) === (!a && !b).
*/
+#ifdef CONFIG_PPC_STD_MMU_64
if (htab_address) {
low = __pa(htab_address);
high = low + htab_size_bytes;
@@ -88,6 +89,7 @@ int default_machine_kexec_prepare(struct kimage *image)
return -ETXTBSY;
}
}
+#endif /* CONFIG_PPC_STD_MMU_64 */
/* We also should not overwrite the tce tables */
for_each_node_by_type(node, "pci") {
@@ -381,7 +383,7 @@ void default_machine_kexec(struct kimage *image)
/* NOTREACHED */
}
-#ifndef CONFIG_PPC_BOOK3E
+#ifdef CONFIG_PPC_STD_MMU_64
/* Values we need to export to the second kernel via the device tree. */
static unsigned long htab_base;
static unsigned long htab_size;
@@ -401,7 +403,6 @@ static struct property htab_size_prop = {
static int __init export_htab_values(void)
{
struct device_node *node;
- struct property *prop;
/* On machines with no htab htab_address is NULL */
if (!htab_address)
@@ -412,12 +413,8 @@ static int __init export_htab_values(void)
return -ENODEV;
/* remove any stale propertys so ours can be found */
- prop = of_find_property(node, htab_base_prop.name, NULL);
- if (prop)
- of_remove_property(node, prop);
- prop = of_find_property(node, htab_size_prop.name, NULL);
- if (prop)
- of_remove_property(node, prop);
+ of_remove_property(node, of_find_property(node, htab_base_prop.name, NULL));
+ of_remove_property(node, of_find_property(node, htab_size_prop.name, NULL));
htab_base = cpu_to_be64(__pa(htab_address));
of_add_property(node, &htab_base_prop);
@@ -428,4 +425,4 @@ static int __init export_htab_values(void)
return 0;
}
late_initcall(export_htab_values);
-#endif /* !CONFIG_PPC_BOOK3E */
+#endif /* CONFIG_PPC_STD_MMU_64 */
diff --git a/arch/powerpc/kernel/mce.c b/arch/powerpc/kernel/mce.c
index b2eb4686bd8f..ef267fd9dd22 100644
--- a/arch/powerpc/kernel/mce.c
+++ b/arch/powerpc/kernel/mce.c
@@ -37,7 +37,7 @@ static DEFINE_PER_CPU(int, mce_queue_count);
static DEFINE_PER_CPU(struct machine_check_event[MAX_MC_EVT], mce_event_queue);
static void machine_check_process_queued_event(struct irq_work *work);
-struct irq_work mce_event_process_work = {
+static struct irq_work mce_event_process_work = {
.func = machine_check_process_queued_event,
};
@@ -284,7 +284,7 @@ void machine_check_print_event_info(struct machine_check_event *evt)
printk("%s Effective address: %016llx\n",
level, evt->u.ue_error.effective_address);
if (evt->u.ue_error.physical_address_provided)
- printk("%s Physial address: %016llx\n",
+ printk("%s Physical address: %016llx\n",
level, evt->u.ue_error.physical_address);
break;
case MCE_ERROR_TYPE_SLB:
diff --git a/arch/powerpc/kernel/mce_power.c b/arch/powerpc/kernel/mce_power.c
index ee62b197502d..7353991c4ece 100644
--- a/arch/powerpc/kernel/mce_power.c
+++ b/arch/powerpc/kernel/mce_power.c
@@ -72,11 +72,15 @@ void __flush_tlb_power8(unsigned int action)
void __flush_tlb_power9(unsigned int action)
{
+ if (radix_enabled())
+ flush_tlb_206(POWER9_TLB_SETS_RADIX, action);
+
flush_tlb_206(POWER9_TLB_SETS_HASH, action);
}
/* flush SLBs and reload */
+#ifdef CONFIG_PPC_STD_MMU_64
static void flush_and_reload_slb(void)
{
struct slb_shadow *slb;
@@ -110,6 +114,7 @@ static void flush_and_reload_slb(void)
asm volatile("slbmte %0,%1" : : "r" (rs), "r" (rb));
}
}
+#endif
static long mce_handle_derror(uint64_t dsisr, uint64_t slb_error_bits)
{
@@ -120,6 +125,7 @@ static long mce_handle_derror(uint64_t dsisr, uint64_t slb_error_bits)
* reset the error bits whenever we handle them so that at the end
* we can check whether we handled all of them or not.
* */
+#ifdef CONFIG_PPC_STD_MMU_64
if (dsisr & slb_error_bits) {
flush_and_reload_slb();
/* reset error bits */
@@ -131,6 +137,7 @@ static long mce_handle_derror(uint64_t dsisr, uint64_t slb_error_bits)
/* reset error bits */
dsisr &= ~P7_DSISR_MC_TLB_MULTIHIT_MFTLB;
}
+#endif
/* Any other errors we don't understand? */
if (dsisr & 0xffffffffUL)
handled = 0;
@@ -150,6 +157,7 @@ static long mce_handle_common_ierror(uint64_t srr1)
switch (P7_SRR1_MC_IFETCH(srr1)) {
case 0:
break;
+#ifdef CONFIG_PPC_STD_MMU_64
case P7_SRR1_MC_IFETCH_SLB_PARITY:
case P7_SRR1_MC_IFETCH_SLB_MULTIHIT:
/* flush and reload SLBs for SLB errors. */
@@ -162,6 +170,7 @@ static long mce_handle_common_ierror(uint64_t srr1)
handled = 1;
}
break;
+#endif
default:
break;
}
@@ -175,10 +184,12 @@ static long mce_handle_ierror_p7(uint64_t srr1)
handled = mce_handle_common_ierror(srr1);
+#ifdef CONFIG_PPC_STD_MMU_64
if (P7_SRR1_MC_IFETCH(srr1) == P7_SRR1_MC_IFETCH_SLB_BOTH) {
flush_and_reload_slb();
handled = 1;
}
+#endif
return handled;
}
@@ -321,10 +332,12 @@ static long mce_handle_ierror_p8(uint64_t srr1)
handled = mce_handle_common_ierror(srr1);
+#ifdef CONFIG_PPC_STD_MMU_64
if (P7_SRR1_MC_IFETCH(srr1) == P8_SRR1_MC_IFETCH_ERAT_MULTIHIT) {
flush_and_reload_slb();
handled = 1;
}
+#endif
return handled;
}
diff --git a/arch/powerpc/kernel/misc_32.S b/arch/powerpc/kernel/misc_32.S
index bf5160fbf9d8..285ca8c6cc2e 100644
--- a/arch/powerpc/kernel/misc_32.S
+++ b/arch/powerpc/kernel/misc_32.S
@@ -599,12 +599,6 @@ _GLOBAL(__bswapdi2)
mr r4,r10
blr
-_GLOBAL(abs)
- srawi r4,r3,31
- xor r3,r3,r4
- sub r3,r3,r4
- blr
-
#ifdef CONFIG_SMP
_GLOBAL(start_secondary_resume)
/* Reset stack */
diff --git a/arch/powerpc/kernel/nvram_64.c b/arch/powerpc/kernel/nvram_64.c
index 0cab9e8c3794..856f9a7944cd 100644
--- a/arch/powerpc/kernel/nvram_64.c
+++ b/arch/powerpc/kernel/nvram_64.c
@@ -15,8 +15,6 @@
* parsing code.
*/
-#include <linux/module.h>
-
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/fs.h>
@@ -1231,12 +1229,4 @@ static int __init nvram_init(void)
return rc;
}
-
-static void __exit nvram_cleanup(void)
-{
- misc_deregister( &nvram_dev );
-}
-
-module_init(nvram_init);
-module_exit(nvram_cleanup);
-MODULE_LICENSE("GPL");
+device_initcall(nvram_init);
diff --git a/arch/powerpc/kernel/pci-hotplug.c b/arch/powerpc/kernel/pci-hotplug.c
index 59c436189f46..2d71269e7dc1 100644
--- a/arch/powerpc/kernel/pci-hotplug.c
+++ b/arch/powerpc/kernel/pci-hotplug.c
@@ -21,6 +21,35 @@
#include <asm/firmware.h>
#include <asm/eeh.h>
+static struct pci_bus *find_bus_among_children(struct pci_bus *bus,
+ struct device_node *dn)
+{
+ struct pci_bus *child = NULL;
+ struct pci_bus *tmp;
+
+ if (pci_bus_to_OF_node(bus) == dn)
+ return bus;
+
+ list_for_each_entry(tmp, &bus->children, node) {
+ child = find_bus_among_children(tmp, dn);
+ if (child)
+ break;
+ }
+
+ return child;
+}
+
+struct pci_bus *pci_find_bus_by_node(struct device_node *dn)
+{
+ struct pci_dn *pdn = PCI_DN(dn);
+
+ if (!pdn || !pdn->phb || !pdn->phb->bus)
+ return NULL;
+
+ return find_bus_among_children(pdn->phb->bus, dn);
+}
+EXPORT_SYMBOL_GPL(pci_find_bus_by_node);
+
/**
* pcibios_release_device - release PCI device
* @dev: PCI device
@@ -38,20 +67,20 @@ void pcibios_release_device(struct pci_dev *dev)
}
/**
- * pcibios_remove_pci_devices - remove all devices under this bus
+ * pci_hp_remove_devices - remove all devices under this bus
* @bus: the indicated PCI bus
*
* Remove all of the PCI devices under this bus both from the
* linux pci device tree, and from the powerpc EEH address cache.
*/
-void pcibios_remove_pci_devices(struct pci_bus *bus)
+void pci_hp_remove_devices(struct pci_bus *bus)
{
struct pci_dev *dev, *tmp;
struct pci_bus *child_bus;
/* First go down child busses */
list_for_each_entry(child_bus, &bus->children, node)
- pcibios_remove_pci_devices(child_bus);
+ pci_hp_remove_devices(child_bus);
pr_debug("PCI: Removing devices on bus %04x:%02x\n",
pci_domain_nr(bus), bus->number);
@@ -60,11 +89,10 @@ void pcibios_remove_pci_devices(struct pci_bus *bus)
pci_stop_and_remove_bus_device(dev);
}
}
-
-EXPORT_SYMBOL_GPL(pcibios_remove_pci_devices);
+EXPORT_SYMBOL_GPL(pci_hp_remove_devices);
/**
- * pcibios_add_pci_devices - adds new pci devices to bus
+ * pci_hp_add_devices - adds new pci devices to bus
* @bus: the indicated PCI bus
*
* This routine will find and fixup new pci devices under
@@ -74,7 +102,7 @@ EXPORT_SYMBOL_GPL(pcibios_remove_pci_devices);
* is how this routine differs from other, similar pcibios
* routines.)
*/
-void pcibios_add_pci_devices(struct pci_bus * bus)
+void pci_hp_add_devices(struct pci_bus *bus)
{
int slotno, mode, pass, max;
struct pci_dev *dev;
@@ -92,7 +120,8 @@ void pcibios_add_pci_devices(struct pci_bus * bus)
if (mode == PCI_PROBE_DEVTREE) {
/* use ofdt-based probe */
of_rescan_bus(dn, bus);
- } else if (mode == PCI_PROBE_NORMAL) {
+ } else if (mode == PCI_PROBE_NORMAL &&
+ dn->child && PCI_DN(dn->child)) {
/*
* Use legacy probe. In the partial hotplug case, we
* probably have grandchildren devices unplugged. So
@@ -114,4 +143,4 @@ void pcibios_add_pci_devices(struct pci_bus * bus)
}
pcibios_finish_adding_to_bus(bus);
}
-EXPORT_SYMBOL_GPL(pcibios_add_pci_devices);
+EXPORT_SYMBOL_GPL(pci_hp_add_devices);
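
A hedged sketch of how a hotplug path could combine the new pci_find_bus_by_node() with the renamed helpers; the function name is invented and error handling is minimal:

static int example_rescan_slot(struct device_node *dn)
{
	struct pci_bus *bus = pci_find_bus_by_node(dn);

	if (!bus)
		return -ENODEV;

	pci_lock_rescan_remove();
	pci_hp_remove_devices(bus);	/* drop whatever is stale */
	pci_hp_add_devices(bus);	/* probe what is present now */
	pci_unlock_rescan_remove();

	return 0;
}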
diff --git a/arch/powerpc/kernel/pci_64.c b/arch/powerpc/kernel/pci_64.c
index 60bb187cb46a..3759df52bd67 100644
--- a/arch/powerpc/kernel/pci_64.c
+++ b/arch/powerpc/kernel/pci_64.c
@@ -38,7 +38,7 @@
* ISA drivers use hard coded offsets. If no ISA bus exists nothing
* is mapped on the first 64K of IO space
*/
-unsigned long pci_io_base = ISA_IO_BASE;
+unsigned long pci_io_base;
EXPORT_SYMBOL(pci_io_base);
static int __init pcibios_init(void)
@@ -47,6 +47,7 @@ static int __init pcibios_init(void)
printk(KERN_INFO "PCI: Probing PCI hardware\n");
+ pci_io_base = ISA_IO_BASE;
/* For now, override phys_mem_access_prot. If we need it,g
* later, we may move that initialization to each ppc_md
*/
@@ -159,7 +160,7 @@ static int pcibios_map_phb_io_space(struct pci_controller *hose)
/* Establish the mapping */
if (__ioremap_at(phys_page, area->addr, size_page,
- _PAGE_NO_CACHE | _PAGE_GUARDED) == NULL)
+ pgprot_val(pgprot_noncached(__pgprot(0)))) == NULL)
return -ENOMEM;
/* Fixup hose IO resource */
diff --git a/arch/powerpc/kernel/pci_dn.c b/arch/powerpc/kernel/pci_dn.c
index 38102cb9baa9..ecdccce78719 100644
--- a/arch/powerpc/kernel/pci_dn.c
+++ b/arch/powerpc/kernel/pci_dn.c
@@ -282,13 +282,9 @@ void remove_dev_pci_data(struct pci_dev *pdev)
#endif /* CONFIG_PCI_IOV */
}
-/*
- * Traverse_func that inits the PCI fields of the device node.
- * NOTE: this *must* be done before read/write config to the device.
- */
-void *update_dn_pci_info(struct device_node *dn, void *data)
+struct pci_dn *pci_add_device_node_info(struct pci_controller *hose,
+ struct device_node *dn)
{
- struct pci_controller *phb = data;
const __be32 *type = of_get_property(dn, "ibm,pci-config-space-type", NULL);
const __be32 *regs;
struct device_node *parent;
@@ -299,7 +295,7 @@ void *update_dn_pci_info(struct device_node *dn, void *data)
return NULL;
dn->data = pdn;
pdn->node = dn;
- pdn->phb = phb;
+ pdn->phb = hose;
#ifdef CONFIG_PPC_POWERNV
pdn->pe_number = IODA_INVALID_PE;
#endif
@@ -331,8 +327,32 @@ void *update_dn_pci_info(struct device_node *dn, void *data)
if (pdn->parent)
list_add_tail(&pdn->list, &pdn->parent->child_list);
- return NULL;
+ return pdn;
}
+EXPORT_SYMBOL_GPL(pci_add_device_node_info);
+
+void pci_remove_device_node_info(struct device_node *dn)
+{
+ struct pci_dn *pdn = dn ? PCI_DN(dn) : NULL;
+#ifdef CONFIG_EEH
+ struct eeh_dev *edev = pdn_to_eeh_dev(pdn);
+
+ if (edev)
+ edev->pdn = NULL;
+#endif
+
+ if (!pdn)
+ return;
+
+ WARN_ON(!list_empty(&pdn->child_list));
+ list_del(&pdn->list);
+ if (pdn->parent)
+ of_node_put(pdn->parent->node);
+
+ dn->data = NULL;
+ kfree(pdn);
+}
+EXPORT_SYMBOL_GPL(pci_remove_device_node_info);
/*
* Traverse a device tree stopping each PCI device in the tree.
@@ -352,8 +372,9 @@ void *update_dn_pci_info(struct device_node *dn, void *data)
* one of these nodes we also assume its siblings are non-pci for
* performance.
*/
-void *traverse_pci_devices(struct device_node *start, traverse_func pre,
- void *data)
+void *pci_traverse_device_nodes(struct device_node *start,
+ void *(*fn)(struct device_node *, void *),
+ void *data)
{
struct device_node *dn, *nextdn;
void *ret;
@@ -368,8 +389,11 @@ void *traverse_pci_devices(struct device_node *start, traverse_func pre,
if (classp)
class = of_read_number(classp, 1);
- if (pre && ((ret = pre(dn, data)) != NULL))
- return ret;
+ if (fn) {
+ ret = fn(dn, data);
+ if (ret)
+ return ret;
+ }
/* If we are a PCI bridge, go down */
if (dn->child && ((class >> 8) == PCI_CLASS_BRIDGE_PCI ||
@@ -391,6 +415,7 @@ void *traverse_pci_devices(struct device_node *start, traverse_func pre,
}
return NULL;
}
+EXPORT_SYMBOL_GPL(pci_traverse_device_nodes);
static struct pci_dn *pci_dn_next_one(struct pci_dn *root,
struct pci_dn *pdn)
@@ -432,6 +457,18 @@ void *traverse_pci_dn(struct pci_dn *root,
return NULL;
}
+static void *add_pdn(struct device_node *dn, void *data)
+{
+ struct pci_controller *hose = data;
+ struct pci_dn *pdn;
+
+ pdn = pci_add_device_node_info(hose, dn);
+ if (!pdn)
+ return ERR_PTR(-ENOMEM);
+
+ return NULL;
+}
+
/**
* pci_devs_phb_init_dynamic - setup pci devices under this PHB
* phb: pci-to-host bridge (top-level bridge connecting to cpu)
@@ -446,8 +483,7 @@ void pci_devs_phb_init_dynamic(struct pci_controller *phb)
struct pci_dn *pdn;
/* PHB nodes themselves must not match */
- update_dn_pci_info(dn, phb);
- pdn = dn->data;
+ pdn = pci_add_device_node_info(phb, dn);
if (pdn) {
pdn->devfn = pdn->busno = -1;
pdn->vendor_id = pdn->device_id = pdn->class_code = 0;
@@ -456,7 +492,7 @@ void pci_devs_phb_init_dynamic(struct pci_controller *phb)
}
/* Update dn->phb ptrs for new phb and children devices */
- traverse_pci_devices(dn, update_dn_pci_info, phb);
+ pci_traverse_device_nodes(dn, add_pdn, phb);
}
/**
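
Besides add_pdn() above, any caller can walk the tree with its own callback; a minimal sketch that counts PCI device nodes under a PHB (the callback and wrapper are made up for the example):

static void *example_count_node(struct device_node *dn, void *data)
{
	int *count = data;

	(*count)++;
	return NULL;			/* NULL keeps the traversal going */
}

static int example_count_pci_nodes(struct device_node *phb_node)
{
	int count = 0;

	pci_traverse_device_nodes(phb_node, example_count_node, &count);
	return count;
}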
diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index b8500b4ac7fe..e2f12cbcade9 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -38,6 +38,7 @@
#include <linux/random.h>
#include <linux/hw_breakpoint.h>
#include <linux/uaccess.h>
+#include <linux/elf-randomize.h>
#include <asm/pgtable.h>
#include <asm/io.h>
@@ -55,6 +56,9 @@
#include <asm/firmware.h>
#endif
#include <asm/code-patching.h>
+#include <asm/exec.h>
+#include <asm/livepatch.h>
+
#include <linux/kprobes.h>
#include <linux/kdebug.h>
@@ -1075,7 +1079,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
}
#endif /* CONFIG_PPC64 */
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_STD_MMU_64
batch = this_cpu_ptr(&ppc64_tlb_batch);
if (batch->active) {
current_thread_info()->local_flags |= _TLF_LAZY_MMU;
@@ -1083,7 +1087,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
__flush_tlb_pending(batch);
batch->active = 0;
}
-#endif /* CONFIG_PPC_BOOK3S_64 */
+#endif /* CONFIG_PPC_STD_MMU_64 */
#ifdef CONFIG_PPC_ADV_DEBUG_REGS
switch_booke_debug_regs(&new->thread.debug);
@@ -1129,7 +1133,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
last = _switch(old_thread, new_thread);
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_STD_MMU_64
if (current_thread_info()->local_flags & _TLF_LAZY_MMU) {
current_thread_info()->local_flags &= ~_TLF_LAZY_MMU;
batch = this_cpu_ptr(&ppc64_tlb_batch);
@@ -1138,8 +1142,7 @@ struct task_struct *__switch_to(struct task_struct *prev,
if (current_thread_info()->task->thread.regs)
restore_math(current_thread_info()->task->thread.regs);
-
-#endif /* CONFIG_PPC_BOOK3S_64 */
+#endif /* CONFIG_PPC_STD_MMU_64 */
return last;
}
@@ -1326,10 +1329,6 @@ void show_regs(struct pt_regs * regs)
show_instructions(regs);
}
-void exit_thread(void)
-{
-}
-
void flush_thread(void)
{
#ifdef CONFIG_HAVE_HW_BREAKPOINT
@@ -1374,6 +1373,9 @@ static void setup_ksp_vsid(struct task_struct *p, unsigned long sp)
unsigned long sp_vsid;
unsigned long llp = mmu_psize_defs[mmu_linear_psize].sllp;
+ if (radix_enabled())
+ return;
+
if (mmu_has_feature(MMU_FTR_1T_SEGMENT))
sp_vsid = get_kernel_vsid(sp, MMU_SEGSIZE_1T)
<< SLB_VSID_SHIFT_1T;
@@ -1400,13 +1402,15 @@ int copy_thread(unsigned long clone_flags, unsigned long usp,
extern void ret_from_kernel_thread(void);
void (*f)(void);
unsigned long sp = (unsigned long)task_stack_page(p) + THREAD_SIZE;
+ struct thread_info *ti = task_thread_info(p);
+
+ klp_init_thread_info(ti);
/* Copy registers */
sp -= sizeof(struct pt_regs);
childregs = (struct pt_regs *) sp;
if (unlikely(p->flags & PF_KTHREAD)) {
/* kernel thread */
- struct thread_info *ti = (void *)task_stack_page(p);
memset(childregs, 0, sizeof(struct pt_regs));
childregs->gpr[1] = sp + sizeof(struct pt_regs);
/* function */
@@ -1920,7 +1924,8 @@ unsigned long arch_randomize_brk(struct mm_struct *mm)
* the heap, we can put it above 1TB so it is backed by a 1TB
* segment. Otherwise the heap will be in the bottom 1TB
* which always uses 256MB segments and this may result in a
- * performance penalty.
+ * performance penalty. We don't need to worry about radix. For
+ * radix, mmu_highuser_ssize remains unchanged from 256MB.
*/
if (!is_32bit_task() && (mmu_highuser_ssize == MMU_SEGSIZE_1T))
base = max_t(unsigned long, mm->brk, 1UL << SID_SHIFT_1T);
diff --git a/arch/powerpc/kernel/prom.c b/arch/powerpc/kernel/prom.c
index a15fe1d4e84a..946e34ffeae9 100644
--- a/arch/powerpc/kernel/prom.c
+++ b/arch/powerpc/kernel/prom.c
@@ -34,6 +34,7 @@
#include <linux/of.h>
#include <linux/of_fdt.h>
#include <linux/libfdt.h>
+#include <linux/cpu.h>
#include <asm/prom.h>
#include <asm/rtas.h>
@@ -167,6 +168,7 @@ static struct ibm_pa_feature {
*/
{CPU_FTR_TM_COMP, 0, 0,
PPC_FEATURE2_HTM_COMP|PPC_FEATURE2_HTM_NOSC_COMP, 22, 0, 0},
+ {0, MMU_FTR_RADIX, 0, 0, 40, 0, 0},
};
static void __init scan_features(unsigned long node, const unsigned char *ftrs,
diff --git a/arch/powerpc/kernel/rtasd.c b/arch/powerpc/kernel/rtasd.c
index aa610ce8742f..c638e2487a9c 100644
--- a/arch/powerpc/kernel/rtasd.c
+++ b/arch/powerpc/kernel/rtasd.c
@@ -442,7 +442,7 @@ static void do_event_scan(void)
}
static void rtas_event_scan(struct work_struct *w);
-DECLARE_DELAYED_WORK(event_scan_work, rtas_event_scan);
+static DECLARE_DELAYED_WORK(event_scan_work, rtas_event_scan);
/*
* Delay should be at least one second since some machines have problems if
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index 44c8d03558ac..8ca79b7503d8 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -128,9 +128,7 @@ void machine_restart(char *cmd)
machine_shutdown();
if (ppc_md.restart)
ppc_md.restart(cmd);
-#ifdef CONFIG_SMP
smp_send_stop();
-#endif
printk(KERN_EMERG "System Halted, OK to turn off power\n");
local_irq_disable();
while (1) ;
@@ -141,9 +139,7 @@ void machine_power_off(void)
machine_shutdown();
if (pm_power_off)
pm_power_off();
-#ifdef CONFIG_SMP
smp_send_stop();
-#endif
printk(KERN_EMERG "System Halted, OK to turn off power\n");
local_irq_disable();
while (1) ;
@@ -159,9 +155,7 @@ void machine_halt(void)
machine_shutdown();
if (ppc_md.halt)
ppc_md.halt();
-#ifdef CONFIG_SMP
smp_send_stop();
-#endif
printk(KERN_EMERG "System Halted, OK to turn off power\n");
local_irq_disable();
while (1) ;
diff --git a/arch/powerpc/kernel/setup_64.c b/arch/powerpc/kernel/setup_64.c
index f98be8383a39..96d4a2b23d0f 100644
--- a/arch/powerpc/kernel/setup_64.c
+++ b/arch/powerpc/kernel/setup_64.c
@@ -69,6 +69,7 @@
#include <asm/kvm_ppc.h>
#include <asm/hugetlb.h>
#include <asm/epapr_hcalls.h>
+#include <asm/livepatch.h>
#ifdef DEBUG
#define DBG(fmt...) udbg_printf(fmt)
@@ -667,16 +668,16 @@ static void __init emergency_stack_init(void)
limit = min(safe_stack_limit(), ppc64_rma_size);
for_each_possible_cpu(i) {
- unsigned long sp;
- sp = memblock_alloc_base(THREAD_SIZE, THREAD_SIZE, limit);
- sp += THREAD_SIZE;
- paca[i].emergency_sp = __va(sp);
+ struct thread_info *ti;
+ ti = __va(memblock_alloc_base(THREAD_SIZE, THREAD_SIZE, limit));
+ klp_init_thread_info(ti);
+ paca[i].emergency_sp = (void *)ti + THREAD_SIZE;
#ifdef CONFIG_PPC_BOOK3S_64
/* emergency stack for machine check exception handling. */
- sp = memblock_alloc_base(THREAD_SIZE, THREAD_SIZE, limit);
- sp += THREAD_SIZE;
- paca[i].mc_emergency_sp = __va(sp);
+ ti = __va(memblock_alloc_base(THREAD_SIZE, THREAD_SIZE, limit));
+ klp_init_thread_info(ti);
+ paca[i].mc_emergency_sp = (void *)ti + THREAD_SIZE;
#endif
}
}
@@ -700,6 +701,8 @@ void __init setup_arch(char **cmdline_p)
if (ppc_md.panic)
setup_panic();
+ klp_init_thread_info(&init_thread_info);
+
init_mm.start_code = (unsigned long)_stext;
init_mm.end_code = (unsigned long) _etext;
init_mm.end_data = (unsigned long) _edata;
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 8cac1eb41466..55c924b65f71 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -565,7 +565,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *tidle)
smp_ops->give_timebase();
/* Wait until cpu puts itself in the online & active maps */
- while (!cpu_online(cpu) || !cpu_active(cpu))
+ while (!cpu_online(cpu))
cpu_relax();
return 0;
diff --git a/arch/powerpc/kernel/swsusp.c b/arch/powerpc/kernel/swsusp.c
index 6669b1752512..6ae9bd5086a4 100644
--- a/arch/powerpc/kernel/swsusp.c
+++ b/arch/powerpc/kernel/swsusp.c
@@ -31,6 +31,6 @@ void save_processor_state(void)
void restore_processor_state(void)
{
#ifdef CONFIG_PPC32
- switch_mmu_context(current->active_mm, current->active_mm);
+ switch_mmu_context(current->active_mm, current->active_mm, NULL);
#endif
}
diff --git a/arch/powerpc/kernel/time.c b/arch/powerpc/kernel/time.c
index 81b0900a39ee..3ed9a5a21d77 100644
--- a/arch/powerpc/kernel/time.c
+++ b/arch/powerpc/kernel/time.c
@@ -55,6 +55,7 @@
#include <linux/delay.h>
#include <linux/irq_work.h>
#include <linux/clk-provider.h>
+#include <linux/suspend.h>
#include <asm/trace.h>
#include <asm/io.h>
diff --git a/arch/powerpc/kernel/vdso.c b/arch/powerpc/kernel/vdso.c
index def1b8b5e6c1..6767605ea8da 100644
--- a/arch/powerpc/kernel/vdso.c
+++ b/arch/powerpc/kernel/vdso.c
@@ -195,7 +195,8 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
* and end up putting it elsewhere.
* Add enough to the size so that the result can be aligned.
*/
- down_write(&mm->mmap_sem);
+ if (down_write_killable(&mm->mmap_sem))
+ return -EINTR;
vdso_base = get_unmapped_area(NULL, vdso_base,
(vdso_pages << PAGE_SHIFT) +
((VDSO_ALIGNMENT - 1) & PAGE_MASK),
diff --git a/arch/powerpc/kernel/vio.c b/arch/powerpc/kernel/vio.c
index 5f8dcdaa2820..8d7358f3a273 100644
--- a/arch/powerpc/kernel/vio.c
+++ b/arch/powerpc/kernel/vio.c
@@ -87,7 +87,7 @@ struct vio_cmo_dev_entry {
* @curr: bytes currently allocated
* @high: high water mark for IO data usage
*/
-struct vio_cmo {
+static struct vio_cmo {
spinlock_t lock;
struct delayed_work balance_q;
struct list_head device_list;
@@ -615,7 +615,7 @@ static u64 vio_dma_get_required_mask(struct device *dev)
return dma_iommu_ops.get_required_mask(dev);
}
-struct dma_map_ops vio_dma_mapping_ops = {
+static struct dma_map_ops vio_dma_mapping_ops = {
.alloc = vio_dma_iommu_alloc_coherent,
.free = vio_dma_iommu_free_coherent,
.mmap = dma_direct_mmap_coherent,
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index b34220d2aa42..47018fcbf7d6 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -54,6 +54,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
{ "queue_intr", VCPU_STAT(queue_intr) },
{ "halt_successful_poll", VCPU_STAT(halt_successful_poll), },
{ "halt_attempted_poll", VCPU_STAT(halt_attempted_poll), },
+ { "halt_poll_invalid", VCPU_STAT(halt_poll_invalid) },
{ "halt_wakeup", VCPU_STAT(halt_wakeup) },
{ "pf_storage", VCPU_STAT(pf_storage) },
{ "sp_storage", VCPU_STAT(sp_storage) },
diff --git a/arch/powerpc/kvm/book3s_64_mmu_hv.c b/arch/powerpc/kvm/book3s_64_mmu_hv.c
index c7b78d8336b2..05f09ae82587 100644
--- a/arch/powerpc/kvm/book3s_64_mmu_hv.c
+++ b/arch/powerpc/kvm/book3s_64_mmu_hv.c
@@ -447,7 +447,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
struct revmap_entry *rev;
struct page *page, *pages[1];
long index, ret, npages;
- unsigned long is_io;
+ bool is_ci;
unsigned int writing, write_ok;
struct vm_area_struct *vma;
unsigned long rcbits;
@@ -503,7 +503,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
smp_rmb();
ret = -EFAULT;
- is_io = 0;
+ is_ci = false;
pfn = 0;
page = NULL;
pte_size = PAGE_SIZE;
@@ -521,7 +521,7 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
pfn = vma->vm_pgoff +
((hva - vma->vm_start) >> PAGE_SHIFT);
pte_size = psize;
- is_io = hpte_cache_bits(pgprot_val(vma->vm_page_prot));
+ is_ci = pte_ci(__pte((pgprot_val(vma->vm_page_prot))));
write_ok = vma->vm_flags & VM_WRITE;
}
up_read(&current->mm->mmap_sem);
@@ -558,10 +558,9 @@ int kvmppc_book3s_hv_page_fault(struct kvm_run *run, struct kvm_vcpu *vcpu,
goto out_put;
/* Check WIMG vs. the actual page we're accessing */
- if (!hpte_cache_flags_ok(r, is_io)) {
- if (is_io)
+ if (!hpte_cache_flags_ok(r, is_ci)) {
+ if (is_ci)
goto out_put;
-
/*
* Allow guest to map emulated device memory as
* uncacheable, but actually make it cacheable.
diff --git a/arch/powerpc/kvm/book3s_hv.c b/arch/powerpc/kvm/book3s_hv.c
index 84fb4fcfaa41..e20beae5ca7a 100644
--- a/arch/powerpc/kvm/book3s_hv.c
+++ b/arch/powerpc/kvm/book3s_hv.c
@@ -27,6 +27,7 @@
#include <linux/export.h>
#include <linux/fs.h>
#include <linux/anon_inodes.h>
+#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/spinlock.h>
#include <linux/page-flags.h>
@@ -3271,6 +3272,12 @@ static int kvmppc_core_check_processor_compat_hv(void)
if (!cpu_has_feature(CPU_FTR_HVMODE) ||
!cpu_has_feature(CPU_FTR_ARCH_206))
return -EIO;
+ /*
+ * Disable KVM for Power9 until the required bits are merged.
+ */
+ if (cpu_has_feature(CPU_FTR_ARCH_300))
+ return -EIO;
+
return 0;
}
diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
index 4cb8db05f3e5..99b4e9d5dd23 100644
--- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
+++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
@@ -175,7 +175,7 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
unsigned long g_ptel;
struct kvm_memory_slot *memslot;
unsigned hpage_shift;
- unsigned long is_io;
+ bool is_ci;
unsigned long *rmap;
pte_t *ptep;
unsigned int writing;
@@ -199,7 +199,7 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
gfn = gpa >> PAGE_SHIFT;
memslot = __gfn_to_memslot(kvm_memslots_raw(kvm), gfn);
pa = 0;
- is_io = ~0ul;
+ is_ci = false;
rmap = NULL;
if (!(memslot && !(memslot->flags & KVM_MEMSLOT_INVALID))) {
/* Emulated MMIO - mark this with key=31 */
@@ -250,7 +250,7 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
if (writing && !pte_write(pte))
/* make the actual HPTE be read-only */
ptel = hpte_make_readonly(ptel);
- is_io = hpte_cache_bits(pte_val(pte));
+ is_ci = pte_ci(pte);
pa = pte_pfn(pte) << PAGE_SHIFT;
pa |= hva & (host_pte_size - 1);
pa |= gpa & ~PAGE_MASK;
@@ -267,9 +267,9 @@ long kvmppc_do_h_enter(struct kvm *kvm, unsigned long flags,
else
pteh |= HPTE_V_ABSENT;
- /* Check WIMG */
- if (is_io != ~0ul && !hpte_cache_flags_ok(ptel, is_io)) {
- if (is_io)
+ /* If we had a host PTE mapping then check WIMG */
+ if (ptep && !hpte_cache_flags_ok(ptel, is_ci)) {
+ if (is_ci)
return H_PARAMETER;
/*
* Allow guest to map emulated device memory as
diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
index 95bceca8f40e..8e4f64f0b774 100644
--- a/arch/powerpc/kvm/book3s_pr.c
+++ b/arch/powerpc/kvm/book3s_pr.c
@@ -882,6 +882,24 @@ void kvmppc_set_fscr(struct kvm_vcpu *vcpu, u64 fscr)
}
#endif
+static void kvmppc_setup_debug(struct kvm_vcpu *vcpu)
+{
+ if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) {
+ u64 msr = kvmppc_get_msr(vcpu);
+
+ kvmppc_set_msr(vcpu, msr | MSR_SE);
+ }
+}
+
+static void kvmppc_clear_debug(struct kvm_vcpu *vcpu)
+{
+ if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) {
+ u64 msr = kvmppc_get_msr(vcpu);
+
+ kvmppc_set_msr(vcpu, msr & ~MSR_SE);
+ }
+}
+
int kvmppc_handle_exit_pr(struct kvm_run *run, struct kvm_vcpu *vcpu,
unsigned int exit_nr)
{
@@ -1207,10 +1225,18 @@ program_interrupt:
break;
#endif
case BOOK3S_INTERRUPT_MACHINE_CHECK:
- case BOOK3S_INTERRUPT_TRACE:
kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
r = RESUME_GUEST;
break;
+ case BOOK3S_INTERRUPT_TRACE:
+ if (vcpu->guest_debug & KVM_GUESTDBG_SINGLESTEP) {
+ run->exit_reason = KVM_EXIT_DEBUG;
+ r = RESUME_HOST;
+ } else {
+ kvmppc_book3s_queue_irqprio(vcpu, exit_nr);
+ r = RESUME_GUEST;
+ }
+ break;
default:
{
ulong shadow_srr1 = vcpu->arch.shadow_srr1;
@@ -1479,6 +1505,8 @@ static int kvmppc_vcpu_run_pr(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
goto out;
}
+ kvmppc_setup_debug(vcpu);
+
/*
* Interrupts could be timers for the guest which we have to inject
* again, so let's postpone them until we're in the guest and if we
@@ -1501,6 +1529,8 @@ static int kvmppc_vcpu_run_pr(struct kvm_run *kvm_run, struct kvm_vcpu *vcpu)
ret = __kvmppc_vcpu_run(kvm_run, vcpu);
+ kvmppc_clear_debug(vcpu);
+
/* No need for kvm_guest_exit. It's done in handle_exit.
We also get here with interrupts enabled. */
@@ -1683,7 +1713,11 @@ static void kvmppc_core_destroy_vm_pr(struct kvm *kvm)
static int kvmppc_core_check_processor_compat_pr(void)
{
- /* we are always compatible */
+ /*
+ * Disable KVM for Power9 until the required bits are merged.
+ */
+ if (cpu_has_feature(CPU_FTR_ARCH_300))
+ return -EIO;
return 0;
}
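The MSR_SE plumbing above only comes into play once userspace arms single-stepping through the standard KVM debug ioctls. A minimal userspace sketch of that flow (helper name and setup are ours; the vcpu fd and the mmap'd kvm_run are assumed to already exist, error handling trimmed):

#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Step the guest by one instruction and report whether we got a debug exit. */
static int single_step_once(int vcpu_fd, struct kvm_run *run)
{
	struct kvm_guest_debug dbg = {
		.control = KVM_GUESTDBG_ENABLE | KVM_GUESTDBG_SINGLESTEP,
	};

	if (ioctl(vcpu_fd, KVM_SET_GUEST_DEBUG, &dbg) < 0)
		return -1;
	if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
		return -1;

	/* With the change above, the trace interrupt surfaces as KVM_EXIT_DEBUG. */
	return run->exit_reason == KVM_EXIT_DEBUG ? 0 : -1;
}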
diff --git a/arch/powerpc/kvm/book3s_xics.c b/arch/powerpc/kvm/book3s_xics.c
index 46871d554057..a75ba38a2d81 100644
--- a/arch/powerpc/kvm/book3s_xics.c
+++ b/arch/powerpc/kvm/book3s_xics.c
@@ -92,7 +92,7 @@ static int ics_deliver_irq(struct kvmppc_xics *xics, u32 irq, u32 level)
* we are the only setter, thus concurrent access is undefined
* to begin with.
*/
- if (level == 1 || level == KVM_INTERRUPT_SET_LEVEL)
+ if ((level == 1 && state->lsi) || level == KVM_INTERRUPT_SET_LEVEL)
state->asserted = 1;
else if (level == 0 || level == KVM_INTERRUPT_UNSET) {
state->asserted = 0;
@@ -280,7 +280,7 @@ static inline bool icp_try_update(struct kvmppc_icp *icp,
if (!success)
goto bail;
- XICS_DBG("UPD [%04x] - C:%02x M:%02x PP: %02x PI:%06x R:%d O:%d\n",
+ XICS_DBG("UPD [%04lx] - C:%02x M:%02x PP: %02x PI:%06x R:%d O:%d\n",
icp->server_num,
old.cppr, old.mfrr, old.pending_pri, old.xisr,
old.need_resend, old.out_ee);
@@ -336,7 +336,7 @@ static bool icp_try_to_deliver(struct kvmppc_icp *icp, u32 irq, u8 priority,
union kvmppc_icp_state old_state, new_state;
bool success;
- XICS_DBG("try deliver %#x(P:%#x) to server %#x\n", irq, priority,
+ XICS_DBG("try deliver %#x(P:%#x) to server %#lx\n", irq, priority,
icp->server_num);
do {
@@ -1174,9 +1174,11 @@ static int xics_get_source(struct kvmppc_xics *xics, long irq, u64 addr)
prio = irqp->saved_priority;
}
val |= prio << KVM_XICS_PRIORITY_SHIFT;
- if (irqp->asserted)
- val |= KVM_XICS_LEVEL_SENSITIVE | KVM_XICS_PENDING;
- else if (irqp->masked_pending || irqp->resend)
+ if (irqp->lsi) {
+ val |= KVM_XICS_LEVEL_SENSITIVE;
+ if (irqp->asserted)
+ val |= KVM_XICS_PENDING;
+ } else if (irqp->masked_pending || irqp->resend)
val |= KVM_XICS_PENDING;
ret = 0;
}
@@ -1228,9 +1230,13 @@ static int xics_set_source(struct kvmppc_xics *xics, long irq, u64 addr)
irqp->priority = prio;
irqp->resend = 0;
irqp->masked_pending = 0;
+ irqp->lsi = 0;
irqp->asserted = 0;
- if ((val & KVM_XICS_PENDING) && (val & KVM_XICS_LEVEL_SENSITIVE))
- irqp->asserted = 1;
+ if (val & KVM_XICS_LEVEL_SENSITIVE) {
+ irqp->lsi = 1;
+ if (val & KVM_XICS_PENDING)
+ irqp->asserted = 1;
+ }
irqp->exists = 1;
arch_spin_unlock(&ics->lock);
local_irq_restore(flags);
@@ -1249,11 +1255,10 @@ int kvm_set_irq(struct kvm *kvm, int irq_source_id, u32 irq, int level,
return ics_deliver_irq(xics, irq, level);
}
-int kvm_set_msi(struct kvm_kernel_irq_routing_entry *irq_entry, struct kvm *kvm,
- int irq_source_id, int level, bool line_status)
+int kvm_arch_set_irq_inatomic(struct kvm_kernel_irq_routing_entry *irq_entry,
+ struct kvm *kvm, int irq_source_id,
+ int level, bool line_status)
{
- if (!level)
- return -1;
return kvm_set_irq(kvm, irq_source_id, irq_entry->gsi,
level, line_status);
}
diff --git a/arch/powerpc/kvm/book3s_xics.h b/arch/powerpc/kvm/book3s_xics.h
index 56ea44f9867f..a46b954055c4 100644
--- a/arch/powerpc/kvm/book3s_xics.h
+++ b/arch/powerpc/kvm/book3s_xics.h
@@ -39,6 +39,7 @@ struct ics_irq_state {
u8 saved_priority;
u8 resend;
u8 masked_pending;
+ u8 lsi; /* level-sensitive interrupt */
u8 asserted; /* Only for LSI */
u8 exists;
};
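The new lsi flag changes what the saved per-source state word means: a level-sensitive source always reports KVM_XICS_LEVEL_SENSITIVE and KVM_XICS_PENDING mirrors the line level, while for an MSI source KVM_XICS_PENDING means a masked or not-yet-resent message. A purely illustrative userspace decode of that convention (helper name is ours; the KVM_XICS_* macros come from the powerpc KVM uapi headers):

#include <stdio.h>
#include <stdbool.h>
#include <linux/kvm.h>	/* pulls in the powerpc KVM_XICS_* definitions */

/* Decode one source state word as saved/restored by xics_get/set_source(). */
static void describe_xics_source(unsigned long long val)
{
	bool lsi = val & KVM_XICS_LEVEL_SENSITIVE;
	bool pending = val & KVM_XICS_PENDING;
	unsigned int prio = (val >> KVM_XICS_PRIORITY_SHIFT) & 0xff;

	if (lsi)
		printf("LSI, priority %u, line %s\n", prio,
		       pending ? "asserted" : "deasserted");
	else
		printf("MSI, priority %u, %s\n", prio,
		       pending ? "message pending" : "idle");
}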
diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
index 4d66f44a1657..4afae695899a 100644
--- a/arch/powerpc/kvm/booke.c
+++ b/arch/powerpc/kvm/booke.c
@@ -64,6 +64,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
{ "ext_intr", VCPU_STAT(ext_intr_exits) },
{ "halt_successful_poll", VCPU_STAT(halt_successful_poll) },
{ "halt_attempted_poll", VCPU_STAT(halt_attempted_poll) },
+ { "halt_poll_invalid", VCPU_STAT(halt_poll_invalid) },
{ "halt_wakeup", VCPU_STAT(halt_wakeup) },
{ "doorbell", VCPU_STAT(dbell_exits) },
{ "guest doorbell", VCPU_STAT(gdbell_exits) },
diff --git a/arch/powerpc/kvm/powerpc.c b/arch/powerpc/kvm/powerpc.c
index 6a68730774ee..02416fea7653 100644
--- a/arch/powerpc/kvm/powerpc.c
+++ b/arch/powerpc/kvm/powerpc.c
@@ -800,9 +800,9 @@ static void kvmppc_complete_mmio_load(struct kvm_vcpu *vcpu,
}
}
-int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
- unsigned int rt, unsigned int bytes,
- int is_default_endian)
+static int __kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
+ unsigned int rt, unsigned int bytes,
+ int is_default_endian, int sign_extend)
{
int idx, ret;
bool host_swabbed;
@@ -827,7 +827,7 @@ int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
vcpu->arch.mmio_host_swabbed = host_swabbed;
vcpu->mmio_needed = 1;
vcpu->mmio_is_write = 0;
- vcpu->arch.mmio_sign_extend = 0;
+ vcpu->arch.mmio_sign_extend = sign_extend;
idx = srcu_read_lock(&vcpu->kvm->srcu);
@@ -844,6 +844,13 @@ int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
return EMULATE_DO_MMIO;
}
+
+int kvmppc_handle_load(struct kvm_run *run, struct kvm_vcpu *vcpu,
+ unsigned int rt, unsigned int bytes,
+ int is_default_endian)
+{
+ return __kvmppc_handle_load(run, vcpu, rt, bytes, is_default_endian, 0);
+}
EXPORT_SYMBOL_GPL(kvmppc_handle_load);
/* Same as above, but sign extends */
@@ -851,12 +858,7 @@ int kvmppc_handle_loads(struct kvm_run *run, struct kvm_vcpu *vcpu,
unsigned int rt, unsigned int bytes,
int is_default_endian)
{
- int r;
-
- vcpu->arch.mmio_sign_extend = 1;
- r = kvmppc_handle_load(run, vcpu, rt, bytes, is_default_endian);
-
- return r;
+ return __kvmppc_handle_load(run, vcpu, rt, bytes, is_default_endian, 1);
}
int kvmppc_handle_store(struct kvm_run *run, struct kvm_vcpu *vcpu,
diff --git a/arch/powerpc/lib/copy_32.S b/arch/powerpc/lib/copy_32.S
index c44df2dbedd5..99f37f24185c 100644
--- a/arch/powerpc/lib/copy_32.S
+++ b/arch/powerpc/lib/copy_32.S
@@ -217,7 +217,7 @@ _GLOBAL(memcpy)
bdnz 40b
65: blr
-_GLOBAL(generic_memcpy)
+generic_memcpy:
srwi. r7,r5,3
addi r6,r3,-4
addi r4,r4,-4
diff --git a/arch/powerpc/lib/sstep.c b/arch/powerpc/lib/sstep.c
index dc885b30f7a6..3362299b1859 100644
--- a/arch/powerpc/lib/sstep.c
+++ b/arch/powerpc/lib/sstep.c
@@ -925,6 +925,7 @@ int __kprobes analyse_instr(struct instruction_op *op, struct pt_regs *regs,
}
}
#endif
+ break; /* illegal instruction */
case 31:
switch ((instr >> 1) & 0x3ff) {
@@ -1818,9 +1819,11 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
case 4:
__get_user_asmx(val, op.ea, err, "lwarx");
break;
+#ifdef __powerpc64__
case 8:
__get_user_asmx(val, op.ea, err, "ldarx");
break;
+#endif
default:
return 0;
}
@@ -1841,9 +1844,11 @@ int __kprobes emulate_step(struct pt_regs *regs, unsigned int instr)
case 4:
__put_user_asmx(op.val, op.ea, err, "stwcx.", cr);
break;
+#ifdef __powerpc64__
case 8:
__put_user_asmx(op.val, op.ea, err, "stdcx.", cr);
break;
+#endif
default:
return 0;
}
diff --git a/arch/powerpc/lib/xor_vmx.c b/arch/powerpc/lib/xor_vmx.c
index 07f49f1568e5..f9de69a04e88 100644
--- a/arch/powerpc/lib/xor_vmx.c
+++ b/arch/powerpc/lib/xor_vmx.c
@@ -17,7 +17,17 @@
*
* Author: Anton Blanchard <anton@au.ibm.com>
*/
+
+/*
+ * Sparse (as at v0.5.0) gets very, very confused by this file.
+ * Make it a bit simpler for it.
+ */
+#if !defined(__CHECKER__)
#include <altivec.h>
+#else
+#define vec_xor(a, b) a ^ b
+#define vector __attribute__((vector_size(16)))
+#endif
#include <linux/preempt.h>
#include <linux/export.h>
diff --git a/arch/powerpc/mm/Makefile b/arch/powerpc/mm/Makefile
index adfee3f1aeb9..f2cea6d5e764 100644
--- a/arch/powerpc/mm/Makefile
+++ b/arch/powerpc/mm/Makefile
@@ -13,10 +13,11 @@ obj-$(CONFIG_PPC_MMU_NOHASH) += mmu_context_nohash.o tlb_nohash.o \
tlb_nohash_low.o
obj-$(CONFIG_PPC_BOOK3E) += tlb_low_$(CONFIG_WORD_SIZE)e.o
hash64-$(CONFIG_PPC_NATIVE) := hash_native_64.o
-obj-$(CONFIG_PPC_STD_MMU_64) += hash_utils_64.o slb_low.o slb.o $(hash64-y)
-obj-$(CONFIG_PPC_STD_MMU_32) += ppc_mmu_32.o hash_low_32.o
-obj-$(CONFIG_PPC_STD_MMU) += tlb_hash$(CONFIG_WORD_SIZE).o \
- mmu_context_hash$(CONFIG_WORD_SIZE).o
+obj-$(CONFIG_PPC_BOOK3E_64) += pgtable-book3e.o
+obj-$(CONFIG_PPC_STD_MMU_64) += pgtable-hash64.o hash_utils_64.o slb_low.o slb.o $(hash64-y) mmu_context_book3s64.o pgtable-book3s64.o
+obj-$(CONFIG_PPC_RADIX_MMU) += pgtable-radix.o tlb-radix.o
+obj-$(CONFIG_PPC_STD_MMU_32) += ppc_mmu_32.o hash_low_32.o mmu_context_hash32.o
+obj-$(CONFIG_PPC_STD_MMU) += tlb_hash$(CONFIG_WORD_SIZE).o
ifeq ($(CONFIG_PPC_STD_MMU_64),y)
obj-$(CONFIG_PPC_4K_PAGES) += hash64_4k.o
obj-$(CONFIG_PPC_64K_PAGES) += hash64_64k.o
@@ -33,6 +34,7 @@ obj-$(CONFIG_PPC_MM_SLICES) += slice.o
obj-y += hugetlbpage.o
ifeq ($(CONFIG_HUGETLB_PAGE),y)
obj-$(CONFIG_PPC_STD_MMU_64) += hugetlbpage-hash64.o
+obj-$(CONFIG_PPC_RADIX_MMU) += hugetlbpage-radix.o
obj-$(CONFIG_PPC_BOOK3E_MMU) += hugetlbpage-book3e.o
endif
obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += hugepage-hash64.o
diff --git a/arch/powerpc/mm/fsl_booke_mmu.c b/arch/powerpc/mm/fsl_booke_mmu.c
index a1b2713f6e96..139dec421e57 100644
--- a/arch/powerpc/mm/fsl_booke_mmu.c
+++ b/arch/powerpc/mm/fsl_booke_mmu.c
@@ -135,7 +135,7 @@ static void settlbcam(int index, unsigned long virt, phys_addr_t phys,
TLBCAM[index].MAS7 = (u64)phys >> 32;
/* Below is unlikely -- only for large user pages or similar */
- if (pte_user(flags)) {
+ if (pte_user(__pte(flags))) {
TLBCAM[index].MAS3 |= MAS3_UX | MAS3_UR;
TLBCAM[index].MAS3 |= ((flags & _PAGE_RW) ? MAS3_UW : 0);
}
diff --git a/arch/powerpc/mm/hash64_4k.c b/arch/powerpc/mm/hash64_4k.c
index 47d1b26effc6..6333b273d2d5 100644
--- a/arch/powerpc/mm/hash64_4k.c
+++ b/arch/powerpc/mm/hash64_4k.c
@@ -34,21 +34,21 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
old_pte = pte_val(pte);
/* If PTE busy, retry the access */
- if (unlikely(old_pte & _PAGE_BUSY))
+ if (unlikely(old_pte & H_PAGE_BUSY))
return 0;
/* If PTE permissions don't match, take page fault */
- if (unlikely(access & ~old_pte))
+ if (unlikely(!check_pte_access(access, old_pte)))
return 1;
/*
* Try to lock the PTE, add ACCESSED and DIRTY if it was
* a write access. Since this is 4K insert of 64K page size
- * also add _PAGE_COMBO
+ * also add H_PAGE_COMBO
*/
- new_pte = old_pte | _PAGE_BUSY | _PAGE_ACCESSED;
- if (access & _PAGE_RW)
+ new_pte = old_pte | H_PAGE_BUSY | _PAGE_ACCESSED;
+ if (access & _PAGE_WRITE)
new_pte |= _PAGE_DIRTY;
- } while (old_pte != __cmpxchg_u64((unsigned long *)ptep,
- old_pte, new_pte));
+ } while (!pte_xchg(ptep, __pte(old_pte), __pte(new_pte)));
+
/*
* PP bits. _PAGE_USER is already PP bit 0x2, so we only
* need to add in 0x1 if it's a read-only user page
@@ -60,22 +60,22 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
rflags = hash_page_do_lazy_icache(rflags, __pte(old_pte), trap);
vpn = hpt_vpn(ea, vsid, ssize);
- if (unlikely(old_pte & _PAGE_HASHPTE)) {
+ if (unlikely(old_pte & H_PAGE_HASHPTE)) {
/*
* There MIGHT be an HPTE for this pte
*/
hash = hpt_hash(vpn, shift, ssize);
- if (old_pte & _PAGE_F_SECOND)
+ if (old_pte & H_PAGE_F_SECOND)
hash = ~hash;
slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
- slot += (old_pte & _PAGE_F_GIX) >> _PAGE_F_GIX_SHIFT;
+ slot += (old_pte & H_PAGE_F_GIX) >> H_PAGE_F_GIX_SHIFT;
if (ppc_md.hpte_updatepp(slot, rflags, vpn, MMU_PAGE_4K,
MMU_PAGE_4K, ssize, flags) == -1)
old_pte &= ~_PAGE_HPTEFLAGS;
}
- if (likely(!(old_pte & _PAGE_HASHPTE))) {
+ if (likely(!(old_pte & H_PAGE_HASHPTE))) {
pa = pte_pfn(__pte(old_pte)) << PAGE_SHIFT;
hash = hpt_hash(vpn, shift, ssize);
@@ -115,9 +115,10 @@ repeat:
MMU_PAGE_4K, MMU_PAGE_4K, old_pte);
return -1;
}
- new_pte = (new_pte & ~_PAGE_HPTEFLAGS) | _PAGE_HASHPTE;
- new_pte |= (slot << _PAGE_F_GIX_SHIFT) & (_PAGE_F_SECOND | _PAGE_F_GIX);
+ new_pte = (new_pte & ~_PAGE_HPTEFLAGS) | H_PAGE_HASHPTE;
+ new_pte |= (slot << H_PAGE_F_GIX_SHIFT) &
+ (H_PAGE_F_SECOND | H_PAGE_F_GIX);
}
- *ptep = __pte(new_pte & ~_PAGE_BUSY);
+ *ptep = __pte(new_pte & ~H_PAGE_BUSY);
return 0;
}
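The pte_xchg() loop above replaces the open-coded __cmpxchg_u64() on the raw PTE word. Conceptually the helper is just a compare-and-swap that reports whether this CPU won the update; a rough sketch of what it boils down to (the real definition lives in the book3s/64 pgtable headers):

/*
 * Sketch only: try to replace 'old' with 'new' in *ptep and return true
 * if the swap succeeded, i.e. nobody raced us and changed the PTE first.
 */
static inline bool pte_xchg(pte_t *ptep, pte_t old, pte_t new)
{
	unsigned long *p = (unsigned long *)ptep;

	return pte_val(old) == __cmpxchg_u64(p, pte_val(old), pte_val(new));
}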
diff --git a/arch/powerpc/mm/hash64_64k.c b/arch/powerpc/mm/hash64_64k.c
index b2d659cf51c6..16644e1f4e6b 100644
--- a/arch/powerpc/mm/hash64_64k.c
+++ b/arch/powerpc/mm/hash64_64k.c
@@ -23,7 +23,7 @@ bool __rpte_sub_valid(real_pte_t rpte, unsigned long index)
unsigned long g_idx;
unsigned long ptev = pte_val(rpte.pte);
- g_idx = (ptev & _PAGE_COMBO_VALID) >> _PAGE_F_GIX_SHIFT;
+ g_idx = (ptev & H_PAGE_COMBO_VALID) >> H_PAGE_F_GIX_SHIFT;
index = index >> 2;
if (g_idx & (0x1 << index))
return true;
@@ -37,12 +37,12 @@ static unsigned long mark_subptegroup_valid(unsigned long ptev, unsigned long in
{
unsigned long g_idx;
- if (!(ptev & _PAGE_COMBO))
+ if (!(ptev & H_PAGE_COMBO))
return ptev;
index = index >> 2;
g_idx = 0x1 << index;
- return ptev | (g_idx << _PAGE_F_GIX_SHIFT);
+ return ptev | (g_idx << H_PAGE_F_GIX_SHIFT);
}
int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
@@ -66,21 +66,21 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
old_pte = pte_val(pte);
/* If PTE busy, retry the access */
- if (unlikely(old_pte & _PAGE_BUSY))
+ if (unlikely(old_pte & H_PAGE_BUSY))
return 0;
/* If PTE permissions don't match, take page fault */
- if (unlikely(access & ~old_pte))
+ if (unlikely(!check_pte_access(access, old_pte)))
return 1;
/*
* Try to lock the PTE, add ACCESSED and DIRTY if it was
* a write access. Since this is 4K insert of 64K page size
- * also add _PAGE_COMBO
+ * also add H_PAGE_COMBO
*/
- new_pte = old_pte | _PAGE_BUSY | _PAGE_ACCESSED | _PAGE_COMBO;
- if (access & _PAGE_RW)
+ new_pte = old_pte | H_PAGE_BUSY | _PAGE_ACCESSED | H_PAGE_COMBO;
+ if (access & _PAGE_WRITE)
new_pte |= _PAGE_DIRTY;
- } while (old_pte != __cmpxchg_u64((unsigned long *)ptep,
- old_pte, new_pte));
+ } while (!pte_xchg(ptep, __pte(old_pte), __pte(new_pte)));
+
/*
* Handle the subpage protection bits
*/
@@ -103,21 +103,21 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
/*
*None of the sub 4k page is hashed
*/
- if (!(old_pte & _PAGE_HASHPTE))
+ if (!(old_pte & H_PAGE_HASHPTE))
goto htab_insert_hpte;
/*
* Check if the pte was already inserted into the hash table
* as a 64k HW page, and invalidate the 64k HPTE if so.
*/
- if (!(old_pte & _PAGE_COMBO)) {
+ if (!(old_pte & H_PAGE_COMBO)) {
flush_hash_page(vpn, rpte, MMU_PAGE_64K, ssize, flags);
/*
* clear the old slot details from the old and new pte.
* On hash insert failure we use old pte value and we don't
* want slot information there if we have a insert failure.
*/
- old_pte &= ~(_PAGE_HASHPTE | _PAGE_F_GIX | _PAGE_F_SECOND);
- new_pte &= ~(_PAGE_HASHPTE | _PAGE_F_GIX | _PAGE_F_SECOND);
+ old_pte &= ~(H_PAGE_HASHPTE | H_PAGE_F_GIX | H_PAGE_F_SECOND);
+ new_pte &= ~(H_PAGE_HASHPTE | H_PAGE_F_GIX | H_PAGE_F_SECOND);
goto htab_insert_hpte;
}
/*
@@ -143,15 +143,15 @@ int __hash_page_4K(unsigned long ea, unsigned long access, unsigned long vsid,
if (ret == -1)
goto htab_insert_hpte;
- *ptep = __pte(new_pte & ~_PAGE_BUSY);
+ *ptep = __pte(new_pte & ~H_PAGE_BUSY);
return 0;
}
htab_insert_hpte:
/*
- * handle _PAGE_4K_PFN case
+ * handle H_PAGE_4K_PFN case
*/
- if (old_pte & _PAGE_4K_PFN) {
+ if (old_pte & H_PAGE_4K_PFN) {
/*
* All the sub 4k page have the same
* physical address.
@@ -199,20 +199,20 @@ repeat:
}
/*
* Insert slot number & secondary bit in PTE second half,
- * clear _PAGE_BUSY and set appropriate HPTE slot bit
- * Since we have _PAGE_BUSY set on ptep, we can be sure
+ * clear H_PAGE_BUSY and set appropriate HPTE slot bit
+ * Since we have H_PAGE_BUSY set on ptep, we can be sure
* nobody is undating hidx.
*/
hidxp = (unsigned long *)(ptep + PTRS_PER_PTE);
rpte.hidx &= ~(0xfUL << (subpg_index << 2));
*hidxp = rpte.hidx | (slot << (subpg_index << 2));
new_pte = mark_subptegroup_valid(new_pte, subpg_index);
- new_pte |= _PAGE_HASHPTE;
+ new_pte |= H_PAGE_HASHPTE;
/*
* check __real_pte for details on matching smp_rmb()
*/
smp_wmb();
- *ptep = __pte(new_pte & ~_PAGE_BUSY);
+ *ptep = __pte(new_pte & ~H_PAGE_BUSY);
return 0;
}
@@ -220,7 +220,6 @@ int __hash_page_64K(unsigned long ea, unsigned long access,
unsigned long vsid, pte_t *ptep, unsigned long trap,
unsigned long flags, int ssize)
{
-
unsigned long hpte_group;
unsigned long rflags, pa;
unsigned long old_pte, new_pte;
@@ -235,27 +234,26 @@ int __hash_page_64K(unsigned long ea, unsigned long access,
old_pte = pte_val(pte);
/* If PTE busy, retry the access */
- if (unlikely(old_pte & _PAGE_BUSY))
+ if (unlikely(old_pte & H_PAGE_BUSY))
return 0;
/* If PTE permissions don't match, take page fault */
- if (unlikely(access & ~old_pte))
+ if (unlikely(!check_pte_access(access, old_pte)))
return 1;
/*
* Check if PTE has the cache-inhibit bit set
* If so, bail out and refault as a 4k page
*/
if (!mmu_has_feature(MMU_FTR_CI_LARGE_PAGE) &&
- unlikely(old_pte & _PAGE_NO_CACHE))
+ unlikely(pte_ci(pte)))
return 0;
/*
* Try to lock the PTE, add ACCESSED and DIRTY if it was
* a write access.
*/
- new_pte = old_pte | _PAGE_BUSY | _PAGE_ACCESSED;
- if (access & _PAGE_RW)
+ new_pte = old_pte | H_PAGE_BUSY | _PAGE_ACCESSED;
+ if (access & _PAGE_WRITE)
new_pte |= _PAGE_DIRTY;
- } while (old_pte != __cmpxchg_u64((unsigned long *)ptep,
- old_pte, new_pte));
+ } while (!pte_xchg(ptep, __pte(old_pte), __pte(new_pte)));
rflags = htab_convert_pte_flags(new_pte);
@@ -264,22 +262,22 @@ int __hash_page_64K(unsigned long ea, unsigned long access,
rflags = hash_page_do_lazy_icache(rflags, __pte(old_pte), trap);
vpn = hpt_vpn(ea, vsid, ssize);
- if (unlikely(old_pte & _PAGE_HASHPTE)) {
+ if (unlikely(old_pte & H_PAGE_HASHPTE)) {
/*
* There MIGHT be an HPTE for this pte
*/
hash = hpt_hash(vpn, shift, ssize);
- if (old_pte & _PAGE_F_SECOND)
+ if (old_pte & H_PAGE_F_SECOND)
hash = ~hash;
slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
- slot += (old_pte & _PAGE_F_GIX) >> _PAGE_F_GIX_SHIFT;
+ slot += (old_pte & H_PAGE_F_GIX) >> H_PAGE_F_GIX_SHIFT;
if (ppc_md.hpte_updatepp(slot, rflags, vpn, MMU_PAGE_64K,
MMU_PAGE_64K, ssize, flags) == -1)
old_pte &= ~_PAGE_HPTEFLAGS;
}
- if (likely(!(old_pte & _PAGE_HASHPTE))) {
+ if (likely(!(old_pte & H_PAGE_HASHPTE))) {
pa = pte_pfn(__pte(old_pte)) << PAGE_SHIFT;
hash = hpt_hash(vpn, shift, ssize);
@@ -319,9 +317,10 @@ repeat:
MMU_PAGE_64K, MMU_PAGE_64K, old_pte);
return -1;
}
- new_pte = (new_pte & ~_PAGE_HPTEFLAGS) | _PAGE_HASHPTE;
- new_pte |= (slot << _PAGE_F_GIX_SHIFT) & (_PAGE_F_SECOND | _PAGE_F_GIX);
+ new_pte = (new_pte & ~_PAGE_HPTEFLAGS) | H_PAGE_HASHPTE;
+ new_pte |= (slot << H_PAGE_F_GIX_SHIFT) &
+ (H_PAGE_F_SECOND | H_PAGE_F_GIX);
}
- *ptep = __pte(new_pte & ~_PAGE_BUSY);
+ *ptep = __pte(new_pte & ~H_PAGE_BUSY);
return 0;
}
diff --git a/arch/powerpc/mm/hash_native_64.c b/arch/powerpc/mm/hash_native_64.c
index 8eaac81347fd..d873f6507f72 100644
--- a/arch/powerpc/mm/hash_native_64.c
+++ b/arch/powerpc/mm/hash_native_64.c
@@ -221,7 +221,7 @@ static long native_hpte_insert(unsigned long hpte_group, unsigned long vpn,
return -1;
hpte_v = hpte_encode_v(vpn, psize, apsize, ssize) | vflags | HPTE_V_VALID;
- hpte_r = hpte_encode_r(pa, psize, apsize) | rflags;
+ hpte_r = hpte_encode_r(pa, psize, apsize, ssize) | rflags;
if (!(vflags & HPTE_V_BOLTED)) {
DBG_LOW(" i=%x hpte_v=%016lx, hpte_r=%016lx\n",
@@ -719,6 +719,12 @@ static void native_flush_hash_range(unsigned long number, int local)
local_irq_restore(flags);
}
+static int native_update_partition_table(u64 patb1)
+{
+ partition_tb->patb1 = cpu_to_be64(patb1);
+ return 0;
+}
+
void __init hpte_init_native(void)
{
ppc_md.hpte_invalidate = native_hpte_invalidate;
@@ -729,4 +735,7 @@ void __init hpte_init_native(void)
ppc_md.hpte_clear_all = native_hpte_clear;
ppc_md.flush_hash_range = native_flush_hash_range;
ppc_md.hugepage_invalidate = native_hugepage_invalidate;
+
+ if (cpu_has_feature(CPU_FTR_ARCH_300))
+ ppc_md.update_partition_table = native_update_partition_table;
}
diff --git a/arch/powerpc/mm/hash_utils_64.c b/arch/powerpc/mm/hash_utils_64.c
index 7635b1c6b5da..59268969a0bc 100644
--- a/arch/powerpc/mm/hash_utils_64.c
+++ b/arch/powerpc/mm/hash_utils_64.c
@@ -167,16 +167,22 @@ unsigned long htab_convert_pte_flags(unsigned long pteflags)
if ((pteflags & _PAGE_EXEC) == 0)
rflags |= HPTE_R_N;
/*
- * PP bits:
+ * PPP bits:
* Linux uses slb key 0 for kernel and 1 for user.
- * kernel areas are mapped with PP=00
- * and there is no kernel RO (_PAGE_KERNEL_RO).
- * User area is mapped with PP=0x2 for read/write
- * or PP=0x3 for read-only (including writeable but clean pages).
+ * kernel RW areas are mapped with PPP=0b000
+ * User area is mapped with PPP=0b010 for read/write
+ * or PPP=0b011 for read-only (including writeable but clean pages).
*/
- if (pteflags & _PAGE_USER) {
- rflags |= 0x2;
- if (!((pteflags & _PAGE_RW) && (pteflags & _PAGE_DIRTY)))
+ if (pteflags & _PAGE_PRIVILEGED) {
+ /*
+ * Kernel read-only mapped with PPP bits 0b110
+ */
+ if (!(pteflags & _PAGE_WRITE))
+ rflags |= (HPTE_R_PP0 | 0x2);
+ } else {
+ if (pteflags & _PAGE_RWX)
+ rflags |= 0x2;
+ if (!((pteflags & _PAGE_WRITE) && (pteflags & _PAGE_DIRTY)))
rflags |= 0x1;
}
/*
@@ -186,12 +192,13 @@ unsigned long htab_convert_pte_flags(unsigned long pteflags)
/*
* Add in WIG bits
*/
- if (pteflags & _PAGE_WRITETHRU)
- rflags |= HPTE_R_W;
- if (pteflags & _PAGE_NO_CACHE)
+
+ if ((pteflags & _PAGE_CACHE_CTL) == _PAGE_TOLERANT)
rflags |= HPTE_R_I;
- if (pteflags & _PAGE_GUARDED)
- rflags |= HPTE_R_G;
+ if ((pteflags & _PAGE_CACHE_CTL) == _PAGE_NON_IDEMPOTENT)
+ rflags |= (HPTE_R_I | HPTE_R_G);
+ if ((pteflags & _PAGE_CACHE_CTL) == _PAGE_SAO)
+ rflags |= (HPTE_R_I | HPTE_R_W);
return rflags;
}
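Reading the new PPP logic off the code above, the concrete encodings work out to:

	kernel read/write  (_PAGE_PRIVILEGED, _PAGE_WRITE)        -> PPP = 0b000
	kernel read-only   (_PAGE_PRIVILEGED, no _PAGE_WRITE)     -> PPP = 0b110  (HPTE_R_PP0 | 0x2)
	user read/write    (_PAGE_WRITE and _PAGE_DIRTY)          -> PPP = 0b010
	user read-only     (including writeable but clean pages)  -> PPP = 0b011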
@@ -669,6 +676,41 @@ int remove_section_mapping(unsigned long start, unsigned long end)
}
#endif /* CONFIG_MEMORY_HOTPLUG */
+static void __init hash_init_partition_table(phys_addr_t hash_table,
+ unsigned long pteg_count)
+{
+ unsigned long ps_field;
+ unsigned long htab_size;
+ unsigned long patb_size = 1UL << PATB_SIZE_SHIFT;
+
+ /*
+ * slb llp encoding for the page size used in VPM real mode.
+ * We can ignore that for lpid 0
+ */
+ ps_field = 0;
+ htab_size = __ilog2(pteg_count) - 11;
+
+ BUILD_BUG_ON_MSG((PATB_SIZE_SHIFT > 24), "Partition table size too large.");
+ partition_tb = __va(memblock_alloc_base(patb_size, patb_size,
+ MEMBLOCK_ALLOC_ANYWHERE));
+
+ /* Initialize the Partition Table with no entries */
+ memset((void *)partition_tb, 0, patb_size);
+ partition_tb->patb0 = cpu_to_be64(ps_field | hash_table | htab_size);
+ /*
+ * FIXME!! This should be done via update_partition table
+ * For now UPRT is 0 for us.
+ */
+ partition_tb->patb1 = 0;
+ DBG("Partition table %p\n", partition_tb);
+ /*
+ * Update the partition table control register,
+ * 64K size.
+ */
+ mtspr(SPRN_PTCR, __pa(partition_tb) | (PATB_SIZE_SHIFT - 12));
+
+}
+
static void __init htab_initialize(void)
{
unsigned long table;
@@ -737,8 +779,11 @@ static void __init htab_initialize(void)
/* Initialize the HPT with no entries */
memset((void *)table, 0, htab_size_bytes);
- /* Set SDR1 */
- mtspr(SPRN_SDR1, _SDR1);
+ if (!cpu_has_feature(CPU_FTR_ARCH_300))
+ /* Set SDR1 */
+ mtspr(SPRN_SDR1, _SDR1);
+ else
+ hash_init_partition_table(table, pteg_count);
}
prot = pgprot_val(PAGE_KERNEL);
@@ -823,8 +868,38 @@ static void __init htab_initialize(void)
#undef KB
#undef MB
-void __init early_init_mmu(void)
+void __init hash__early_init_mmu(void)
{
+ /*
+ * initialize page table size
+ */
+ __pte_frag_nr = H_PTE_FRAG_NR;
+ __pte_frag_size_shift = H_PTE_FRAG_SIZE_SHIFT;
+
+ __pte_index_size = H_PTE_INDEX_SIZE;
+ __pmd_index_size = H_PMD_INDEX_SIZE;
+ __pud_index_size = H_PUD_INDEX_SIZE;
+ __pgd_index_size = H_PGD_INDEX_SIZE;
+ __pmd_cache_index = H_PMD_CACHE_INDEX;
+ __pte_table_size = H_PTE_TABLE_SIZE;
+ __pmd_table_size = H_PMD_TABLE_SIZE;
+ __pud_table_size = H_PUD_TABLE_SIZE;
+ __pgd_table_size = H_PGD_TABLE_SIZE;
+ /*
+ * 4K uses the hugepd format, so for hash set them to
+ * zero
+ */
+ __pmd_val_bits = 0;
+ __pud_val_bits = 0;
+ __pgd_val_bits = 0;
+
+ __kernel_virt_start = H_KERN_VIRT_START;
+ __kernel_virt_size = H_KERN_VIRT_SIZE;
+ __vmalloc_start = H_VMALLOC_START;
+ __vmalloc_end = H_VMALLOC_END;
+ vmemmap = (struct page *)H_VMEMMAP_BASE;
+ ioremap_bot = IOREMAP_BASE;
+
/* Initialize the MMU Hash table and create the linear mapping
* of memory. Has to be done before SLB initialization as this is
* currently where the page size encoding is obtained.
@@ -836,12 +911,16 @@ void __init early_init_mmu(void)
}
#ifdef CONFIG_SMP
-void early_init_mmu_secondary(void)
+void hash__early_init_mmu_secondary(void)
{
/* Initialize hash table for that CPU */
- if (!firmware_has_feature(FW_FEATURE_LPAR))
- mtspr(SPRN_SDR1, _SDR1);
-
+ if (!firmware_has_feature(FW_FEATURE_LPAR)) {
+ if (!cpu_has_feature(CPU_FTR_ARCH_300))
+ mtspr(SPRN_SDR1, _SDR1);
+ else
+ mtspr(SPRN_PTCR,
+ __pa(partition_tb) | (PATB_SIZE_SHIFT - 12));
+ }
/* Initialize SLB */
slb_initialize();
}
@@ -920,7 +999,7 @@ void demote_segment_4k(struct mm_struct *mm, unsigned long addr)
* Userspace sets the subpage permissions using the subpage_prot system call.
*
* Result is 0: full permissions, _PAGE_RW: read-only,
- * _PAGE_USER or _PAGE_USER|_PAGE_RW: no access.
+ * _PAGE_RWX: no access.
*/
static int subpage_protection(struct mm_struct *mm, unsigned long ea)
{
@@ -946,8 +1025,13 @@ static int subpage_protection(struct mm_struct *mm, unsigned long ea)
/* extract 2-bit bitfield for this 4k subpage */
spp >>= 30 - 2 * ((ea >> 12) & 0xf);
- /* turn 0,1,2,3 into combination of _PAGE_USER and _PAGE_RW */
- spp = ((spp & 2) ? _PAGE_USER : 0) | ((spp & 1) ? _PAGE_RW : 0);
+ /*
+ * 0 -> full permission
+ * 1 -> read only
+ * 2 -> no access.
+ * We return the flags that need to be cleared.
+ */
+ spp = ((spp & 2) ? _PAGE_RWX : 0) | ((spp & 1) ? _PAGE_WRITE : 0);
return spp;
}
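Working through the new return convention, subpage_protection() now hands back the access bits that must be stripped for this 4k subpage, so the 2-bit field decodes as:

	spp = 0 -> 0                         (full permissions)
	spp = 1 -> _PAGE_WRITE               (read-only)
	spp = 2 -> _PAGE_RWX                 (no access)
	spp = 3 -> _PAGE_RWX | _PAGE_WRITE   (no access)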
@@ -1084,7 +1168,7 @@ int hash_page_mm(struct mm_struct *mm, unsigned long ea,
/* Pre-check access permissions (will be re-checked atomically
* in __hash_page_XX but this pre-check is a fast path
*/
- if (access & ~pte_val(*ptep)) {
+ if (!check_pte_access(access, pte_val(*ptep))) {
DBG_LOW(" no access !\n");
rc = 1;
goto bail;
@@ -1122,8 +1206,8 @@ int hash_page_mm(struct mm_struct *mm, unsigned long ea,
#endif
/* Do actual hashing */
#ifdef CONFIG_PPC_64K_PAGES
- /* If _PAGE_4K_PFN is set, make sure this is a 4k segment */
- if ((pte_val(*ptep) & _PAGE_4K_PFN) && psize == MMU_PAGE_64K) {
+ /* If H_PAGE_4K_PFN is set, make sure this is a 4k segment */
+ if ((pte_val(*ptep) & H_PAGE_4K_PFN) && psize == MMU_PAGE_64K) {
demote_segment_4k(mm, ea);
psize = MMU_PAGE_4K;
}
@@ -1131,8 +1215,7 @@ int hash_page_mm(struct mm_struct *mm, unsigned long ea,
/* If this PTE is non-cacheable and we have restrictions on
* using non cacheable large pages, then we switch to 4k
*/
- if (mmu_ci_restrictions && psize == MMU_PAGE_64K &&
- (pte_val(*ptep) & _PAGE_NO_CACHE)) {
+ if (mmu_ci_restrictions && psize == MMU_PAGE_64K && pte_ci(*ptep)) {
if (user_region) {
demote_segment_4k(mm, ea);
psize = MMU_PAGE_4K;
@@ -1209,7 +1292,7 @@ EXPORT_SYMBOL_GPL(hash_page);
int __hash_page(unsigned long ea, unsigned long msr, unsigned long trap,
unsigned long dsisr)
{
- unsigned long access = _PAGE_PRESENT;
+ unsigned long access = _PAGE_PRESENT | _PAGE_READ;
unsigned long flags = 0;
struct mm_struct *mm = current->mm;
@@ -1220,14 +1303,18 @@ int __hash_page(unsigned long ea, unsigned long msr, unsigned long trap,
flags |= HPTE_NOHPTE_UPDATE;
if (dsisr & DSISR_ISSTORE)
- access |= _PAGE_RW;
+ access |= _PAGE_WRITE;
/*
- * We need to set the _PAGE_USER bit if MSR_PR is set or if we are
- * accessing a userspace segment (even from the kernel). We assume
- * kernel addresses always have the high bit set.
+ * We set _PAGE_PRIVILEGED only when
+ * kernel mode accesses kernel space.
+ *
+ * _PAGE_PRIVILEGED is NOT set
+ * 1) when kernel mode accesses user space
+ * 2) when user space accesses kernel space.
*/
+ access |= _PAGE_PRIVILEGED;
if ((msr & MSR_PR) || (REGION_ID(ea) == USER_REGION_ID))
- access |= _PAGE_USER;
+ access &= ~_PAGE_PRIVILEGED;
if (trap == 0x400)
access |= _PAGE_EXEC;
@@ -1235,6 +1322,30 @@ int __hash_page(unsigned long ea, unsigned long msr, unsigned long trap,
return hash_page_mm(mm, ea, access, trap, flags);
}
+#ifdef CONFIG_PPC_MM_SLICES
+static bool should_hash_preload(struct mm_struct *mm, unsigned long ea)
+{
+ int psize = get_slice_psize(mm, ea);
+
+ /* We only prefault standard pages for now */
+ if (unlikely(psize != mm->context.user_psize))
+ return false;
+
+ /*
+ * Don't prefault if subpage protection is enabled for the EA.
+ */
+ if (unlikely((psize == MMU_PAGE_4K) && subpage_protection(mm, ea)))
+ return false;
+
+ return true;
+}
+#else
+static bool should_hash_preload(struct mm_struct *mm, unsigned long ea)
+{
+ return true;
+}
+#endif
+
void hash_preload(struct mm_struct *mm, unsigned long ea,
unsigned long access, unsigned long trap)
{
@@ -1247,11 +1358,8 @@ void hash_preload(struct mm_struct *mm, unsigned long ea,
BUG_ON(REGION_ID(ea) != USER_REGION_ID);
-#ifdef CONFIG_PPC_MM_SLICES
- /* We only prefault standard pages for now */
- if (unlikely(get_slice_psize(mm, ea) != mm->context.user_psize))
+ if (!should_hash_preload(mm, ea))
return;
-#endif
DBG_LOW("hash_preload(mm=%p, mm->pgdir=%p, ea=%016lx, access=%lx,"
" trap=%lx\n", mm, mm->pgd, ea, access, trap);
@@ -1282,13 +1390,13 @@ void hash_preload(struct mm_struct *mm, unsigned long ea,
WARN_ON(hugepage_shift);
#ifdef CONFIG_PPC_64K_PAGES
- /* If either _PAGE_4K_PFN or _PAGE_NO_CACHE is set (and we are on
+ /* If either H_PAGE_4K_PFN is set or the pte is cache-inhibited (and we are on
* a 64K kernel), then we don't preload, hash_page() will take
* care of it once we actually try to access the page.
* That way we don't have to duplicate all of the logic for segment
* page size demotion here
*/
- if (pte_val(*ptep) & (_PAGE_4K_PFN | _PAGE_NO_CACHE))
+ if ((pte_val(*ptep) & H_PAGE_4K_PFN) || pte_ci(*ptep))
goto out_exit;
#endif /* CONFIG_PPC_64K_PAGES */
@@ -1570,7 +1678,7 @@ void __kernel_map_pages(struct page *page, int numpages, int enable)
}
#endif /* CONFIG_DEBUG_PAGEALLOC */
-void setup_initial_memory_limit(phys_addr_t first_memblock_base,
+void hash__setup_initial_memory_limit(phys_addr_t first_memblock_base,
phys_addr_t first_memblock_size)
{
/* We don't currently support the first MEMBLOCK not mapping 0
diff --git a/arch/powerpc/mm/hugepage-hash64.c b/arch/powerpc/mm/hugepage-hash64.c
index eb2accdd76fd..ba3fc229468a 100644
--- a/arch/powerpc/mm/hugepage-hash64.c
+++ b/arch/powerpc/mm/hugepage-hash64.c
@@ -37,20 +37,20 @@ int __hash_page_thp(unsigned long ea, unsigned long access, unsigned long vsid,
old_pmd = pmd_val(pmd);
/* If PMD busy, retry the access */
- if (unlikely(old_pmd & _PAGE_BUSY))
+ if (unlikely(old_pmd & H_PAGE_BUSY))
return 0;
/* If PMD permissions don't match, take page fault */
- if (unlikely(access & ~old_pmd))
+ if (unlikely(!check_pte_access(access, old_pmd)))
return 1;
/*
* Try to lock the PTE, add ACCESSED and DIRTY if it was
* a write access
*/
- new_pmd = old_pmd | _PAGE_BUSY | _PAGE_ACCESSED;
- if (access & _PAGE_RW)
+ new_pmd = old_pmd | H_PAGE_BUSY | _PAGE_ACCESSED;
+ if (access & _PAGE_WRITE)
new_pmd |= _PAGE_DIRTY;
- } while (old_pmd != __cmpxchg_u64((unsigned long *)pmdp,
- old_pmd, new_pmd));
+ } while (!pmd_xchg(pmdp, __pmd(old_pmd), __pmd(new_pmd)));
+
rflags = htab_convert_pte_flags(new_pmd);
#if 0
@@ -78,7 +78,7 @@ int __hash_page_thp(unsigned long ea, unsigned long access, unsigned long vsid,
* base page size. This is because demote_segment won't flush
* hash page table entries.
*/
- if ((old_pmd & _PAGE_HASHPTE) && !(old_pmd & _PAGE_COMBO)) {
+ if ((old_pmd & H_PAGE_HASHPTE) && !(old_pmd & H_PAGE_COMBO)) {
flush_hash_hugepage(vsid, ea, pmdp, MMU_PAGE_64K,
ssize, flags);
/*
@@ -125,7 +125,7 @@ int __hash_page_thp(unsigned long ea, unsigned long access, unsigned long vsid,
hash = hpt_hash(vpn, shift, ssize);
/* insert new entry */
pa = pmd_pfn(__pmd(old_pmd)) << PAGE_SHIFT;
- new_pmd |= _PAGE_HASHPTE;
+ new_pmd |= H_PAGE_HASHPTE;
repeat:
hpte_group = ((hash & htab_hash_mask) * HPTES_PER_GROUP) & ~0x7UL;
@@ -169,17 +169,17 @@ repeat:
mark_hpte_slot_valid(hpte_slot_array, index, slot);
}
/*
- * Mark the pte with _PAGE_COMBO, if we are trying to hash it with
+ * Mark the pte with H_PAGE_COMBO, if we are trying to hash it with
* base page size 4k.
*/
if (psize == MMU_PAGE_4K)
- new_pmd |= _PAGE_COMBO;
+ new_pmd |= H_PAGE_COMBO;
/*
* The hpte valid is stored in the pgtable whose address is in the
* second half of the PMD. Order this against clearing of the busy bit in
* huge pmd.
*/
smp_wmb();
- *pmdp = __pmd(new_pmd & ~_PAGE_BUSY);
+ *pmdp = __pmd(new_pmd & ~H_PAGE_BUSY);
return 0;
}
diff --git a/arch/powerpc/mm/hugetlbpage-hash64.c b/arch/powerpc/mm/hugetlbpage-hash64.c
index 8555fce902fe..3058560b6121 100644
--- a/arch/powerpc/mm/hugetlbpage-hash64.c
+++ b/arch/powerpc/mm/hugetlbpage-hash64.c
@@ -47,18 +47,19 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
do {
old_pte = pte_val(*ptep);
/* If PTE busy, retry the access */
- if (unlikely(old_pte & _PAGE_BUSY))
+ if (unlikely(old_pte & H_PAGE_BUSY))
return 0;
/* If PTE permissions don't match, take page fault */
- if (unlikely(access & ~old_pte))
+ if (unlikely(!check_pte_access(access, old_pte)))
return 1;
+
/* Try to lock the PTE, add ACCESSED and DIRTY if it was
* a write access */
- new_pte = old_pte | _PAGE_BUSY | _PAGE_ACCESSED;
- if (access & _PAGE_RW)
+ new_pte = old_pte | H_PAGE_BUSY | _PAGE_ACCESSED;
+ if (access & _PAGE_WRITE)
new_pte |= _PAGE_DIRTY;
- } while(old_pte != __cmpxchg_u64((unsigned long *)ptep,
- old_pte, new_pte));
+ } while(!pte_xchg(ptep, __pte(old_pte), __pte(new_pte)));
+
rflags = htab_convert_pte_flags(new_pte);
sz = ((1UL) << shift);
@@ -68,28 +69,28 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
rflags = hash_page_do_lazy_icache(rflags, __pte(old_pte), trap);
/* Check if pte already has an hpte (case 2) */
- if (unlikely(old_pte & _PAGE_HASHPTE)) {
+ if (unlikely(old_pte & H_PAGE_HASHPTE)) {
/* There MIGHT be an HPTE for this pte */
unsigned long hash, slot;
hash = hpt_hash(vpn, shift, ssize);
- if (old_pte & _PAGE_F_SECOND)
+ if (old_pte & H_PAGE_F_SECOND)
hash = ~hash;
slot = (hash & htab_hash_mask) * HPTES_PER_GROUP;
- slot += (old_pte & _PAGE_F_GIX) >> _PAGE_F_GIX_SHIFT;
+ slot += (old_pte & H_PAGE_F_GIX) >> H_PAGE_F_GIX_SHIFT;
if (ppc_md.hpte_updatepp(slot, rflags, vpn, mmu_psize,
mmu_psize, ssize, flags) == -1)
old_pte &= ~_PAGE_HPTEFLAGS;
}
- if (likely(!(old_pte & _PAGE_HASHPTE))) {
+ if (likely(!(old_pte & H_PAGE_HASHPTE))) {
unsigned long hash = hpt_hash(vpn, shift, ssize);
pa = pte_pfn(__pte(old_pte)) << PAGE_SHIFT;
/* clear HPTE slot informations in new PTE */
- new_pte = (new_pte & ~_PAGE_HPTEFLAGS) | _PAGE_HASHPTE;
+ new_pte = (new_pte & ~_PAGE_HPTEFLAGS) | H_PAGE_HASHPTE;
slot = hpte_insert_repeating(hash, vpn, pa, rflags, 0,
mmu_psize, ssize);
@@ -105,14 +106,14 @@ int __hash_page_huge(unsigned long ea, unsigned long access, unsigned long vsid,
return -1;
}
- new_pte |= (slot << _PAGE_F_GIX_SHIFT) &
- (_PAGE_F_SECOND | _PAGE_F_GIX);
+ new_pte |= (slot << H_PAGE_F_GIX_SHIFT) &
+ (H_PAGE_F_SECOND | H_PAGE_F_GIX);
}
/*
* No need to use ldarx/stdcx here
*/
- *ptep = __pte(new_pte & ~_PAGE_BUSY);
+ *ptep = __pte(new_pte & ~H_PAGE_BUSY);
return 0;
}
diff --git a/arch/powerpc/mm/hugetlbpage-radix.c b/arch/powerpc/mm/hugetlbpage-radix.c
new file mode 100644
index 000000000000..1e11559e1aac
--- /dev/null
+++ b/arch/powerpc/mm/hugetlbpage-radix.c
@@ -0,0 +1,87 @@
+#include <linux/mm.h>
+#include <linux/hugetlb.h>
+#include <asm/pgtable.h>
+#include <asm/pgalloc.h>
+#include <asm/cacheflush.h>
+#include <asm/machdep.h>
+#include <asm/mman.h>
+
+void radix__flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
+{
+ unsigned long ap, shift;
+ struct hstate *hstate = hstate_file(vma->vm_file);
+
+ shift = huge_page_shift(hstate);
+ if (shift == mmu_psize_defs[MMU_PAGE_2M].shift)
+ ap = mmu_get_ap(MMU_PAGE_2M);
+ else if (shift == mmu_psize_defs[MMU_PAGE_1G].shift)
+ ap = mmu_get_ap(MMU_PAGE_1G);
+ else {
+ WARN(1, "Wrong huge page shift\n");
+ return;
+ }
+ radix___flush_tlb_page(vma->vm_mm, vmaddr, ap, 0);
+}
+
+void radix__local_flush_hugetlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
+{
+ unsigned long ap, shift;
+ struct hstate *hstate = hstate_file(vma->vm_file);
+
+ shift = huge_page_shift(hstate);
+ if (shift == mmu_psize_defs[MMU_PAGE_2M].shift)
+ ap = mmu_get_ap(MMU_PAGE_2M);
+ else if (shift == mmu_psize_defs[MMU_PAGE_1G].shift)
+ ap = mmu_get_ap(MMU_PAGE_1G);
+ else {
+ WARN(1, "Wrong huge page shift\n");
+ return;
+ }
+ radix___local_flush_tlb_page(vma->vm_mm, vmaddr, ap, 0);
+}
+
+/*
+ * A variant of hugetlb_get_unmapped_area that does a topdown search.
+ * FIXME!! should we do as x86 does or non hugetlb area does ?
+ * ie, use topdown or not based on mmap_is_legacy check ?
+ */
+unsigned long
+radix__hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
+ unsigned long len, unsigned long pgoff,
+ unsigned long flags)
+{
+ struct mm_struct *mm = current->mm;
+ struct vm_area_struct *vma;
+ struct hstate *h = hstate_file(file);
+ struct vm_unmapped_area_info info;
+
+ if (len & ~huge_page_mask(h))
+ return -EINVAL;
+ if (len > TASK_SIZE)
+ return -ENOMEM;
+
+ if (flags & MAP_FIXED) {
+ if (prepare_hugepage_range(file, addr, len))
+ return -EINVAL;
+ return addr;
+ }
+
+ if (addr) {
+ addr = ALIGN(addr, huge_page_size(h));
+ vma = find_vma(mm, addr);
+ if (TASK_SIZE - len >= addr &&
+ (!vma || addr + len <= vma->vm_start))
+ return addr;
+ }
+ /*
+ * We are always doing a topdown search here. Slice code
+ * does that too.
+ */
+ info.flags = VM_UNMAPPED_AREA_TOPDOWN;
+ info.length = len;
+ info.low_limit = PAGE_SIZE;
+ info.high_limit = current->mm->mmap_base;
+ info.align_mask = PAGE_MASK & ~huge_page_mask(h);
+ info.align_offset = 0;
+ return vm_unmapped_area(&info);
+}
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index d991b9e80dbb..5aac1a3f86cd 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -711,6 +711,9 @@ unsigned long hugetlb_get_unmapped_area(struct file *file, unsigned long addr,
struct hstate *hstate = hstate_file(file);
int mmu_psize = shift_to_mmu_psize(huge_page_shift(hstate));
+ if (radix_enabled())
+ return radix__hugetlb_get_unmapped_area(file, addr, len,
+ pgoff, flags);
return slice_get_unmapped_area(addr, len, flags, mmu_psize, 1);
}
#endif
@@ -719,14 +722,14 @@ unsigned long vma_mmu_pagesize(struct vm_area_struct *vma)
{
#ifdef CONFIG_PPC_MM_SLICES
unsigned int psize = get_slice_psize(vma->vm_mm, vma->vm_start);
-
- return 1UL << mmu_psize_to_shift(psize);
-#else
+ /* With radix we don't use slices, so derive it from the vma */
+ if (!radix_enabled())
+ return 1UL << mmu_psize_to_shift(psize);
+#endif
if (!is_vm_hugetlb_page(vma))
return PAGE_SIZE;
return huge_page_size(hstate_vma(vma));
-#endif
}
static inline bool is_power_of_4(unsigned long x)
@@ -772,8 +775,10 @@ static int __init hugepage_setup_sz(char *str)
size = memparse(str, &str);
- if (add_huge_page_size(size) != 0)
- printk(KERN_WARNING "Invalid huge page size specified(%llu)\n", size);
+ if (add_huge_page_size(size) != 0) {
+ hugetlb_bad_size();
+ pr_err("Invalid huge page size specified(%llu)\n", size);
+ }
return 1;
}
@@ -823,7 +828,7 @@ static int __init hugetlbpage_init(void)
{
int psize;
- if (!mmu_has_feature(MMU_FTR_16M_PAGE))
+ if (!radix_enabled() && !mmu_has_feature(MMU_FTR_16M_PAGE))
return -ENODEV;
for (psize = 0; psize < MMU_PAGE_COUNT; ++psize) {
@@ -863,6 +868,9 @@ static int __init hugetlbpage_init(void)
HPAGE_SHIFT = mmu_psize_defs[MMU_PAGE_16M].shift;
else if (mmu_psize_defs[MMU_PAGE_1M].shift)
HPAGE_SHIFT = mmu_psize_defs[MMU_PAGE_1M].shift;
+ else if (mmu_psize_defs[MMU_PAGE_2M].shift)
+ HPAGE_SHIFT = mmu_psize_defs[MMU_PAGE_2M].shift;
+
return 0;
}
@@ -1003,9 +1011,9 @@ int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
end = pte_end;
pte = READ_ONCE(*ptep);
- mask = _PAGE_PRESENT | _PAGE_USER;
+ mask = _PAGE_PRESENT | _PAGE_READ;
if (write)
- mask |= _PAGE_RW;
+ mask |= _PAGE_WRITE;
if ((pte_val(pte) & mask) != mask)
return 0;
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index ba655666186d..33709bdb0419 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -66,11 +66,11 @@
#include "mmu_decl.h"
#ifdef CONFIG_PPC_STD_MMU_64
-#if PGTABLE_RANGE > USER_VSID_RANGE
+#if H_PGTABLE_RANGE > USER_VSID_RANGE
#warning Limited user VSID range means pagetable space is wasted
#endif
-#if (TASK_SIZE_USER64 < PGTABLE_RANGE) && (TASK_SIZE_USER64 < USER_VSID_RANGE)
+#if (TASK_SIZE_USER64 < H_PGTABLE_RANGE) && (TASK_SIZE_USER64 < USER_VSID_RANGE)
#warning TASK_SIZE is smaller than it needs to be.
#endif
#endif /* CONFIG_PPC_STD_MMU_64 */
@@ -189,75 +189,6 @@ static int __meminit vmemmap_populated(unsigned long start, int page_size)
return 0;
}
-/* On hash-based CPUs, the vmemmap is bolted in the hash table.
- *
- * On Book3E CPUs, the vmemmap is currently mapped in the top half of
- * the vmalloc space using normal page tables, though the size of
- * pages encoded in the PTEs can be different
- */
-
-#ifdef CONFIG_PPC_BOOK3E
-static int __meminit vmemmap_create_mapping(unsigned long start,
- unsigned long page_size,
- unsigned long phys)
-{
- /* Create a PTE encoding without page size */
- unsigned long i, flags = _PAGE_PRESENT | _PAGE_ACCESSED |
- _PAGE_KERNEL_RW;
-
- /* PTEs only contain page size encodings up to 32M */
- BUG_ON(mmu_psize_defs[mmu_vmemmap_psize].enc > 0xf);
-
- /* Encode the size in the PTE */
- flags |= mmu_psize_defs[mmu_vmemmap_psize].enc << 8;
-
- /* For each PTE for that area, map things. Note that we don't
- * increment phys because all PTEs are of the large size and
- * thus must have the low bits clear
- */
- for (i = 0; i < page_size; i += PAGE_SIZE)
- BUG_ON(map_kernel_page(start + i, phys, flags));
-
- return 0;
-}
-
-#ifdef CONFIG_MEMORY_HOTPLUG
-static void vmemmap_remove_mapping(unsigned long start,
- unsigned long page_size)
-{
-}
-#endif
-#else /* CONFIG_PPC_BOOK3E */
-static int __meminit vmemmap_create_mapping(unsigned long start,
- unsigned long page_size,
- unsigned long phys)
-{
- int rc = htab_bolt_mapping(start, start + page_size, phys,
- pgprot_val(PAGE_KERNEL),
- mmu_vmemmap_psize, mmu_kernel_ssize);
- if (rc < 0) {
- int rc2 = htab_remove_mapping(start, start + page_size,
- mmu_vmemmap_psize,
- mmu_kernel_ssize);
- BUG_ON(rc2 && (rc2 != -ENOENT));
- }
- return rc;
-}
-
-#ifdef CONFIG_MEMORY_HOTPLUG
-static void vmemmap_remove_mapping(unsigned long start,
- unsigned long page_size)
-{
- int rc = htab_remove_mapping(start, start + page_size,
- mmu_vmemmap_psize,
- mmu_kernel_ssize);
- BUG_ON((rc < 0) && (rc != -ENOENT));
- WARN_ON(rc == -ENOENT);
-}
-#endif
-
-#endif /* CONFIG_PPC_BOOK3E */
-
struct vmemmap_backing *vmemmap_list;
static struct vmemmap_backing *next;
static int num_left;
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index ac79dbde1015..2fd57fa48429 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -68,12 +68,15 @@ pte_t *kmap_pte;
EXPORT_SYMBOL(kmap_pte);
pgprot_t kmap_prot;
EXPORT_SYMBOL(kmap_prot);
+#define TOP_ZONE ZONE_HIGHMEM
static inline pte_t *virt_to_kpte(unsigned long vaddr)
{
return pte_offset_kernel(pmd_offset(pud_offset(pgd_offset_k(vaddr),
vaddr), vaddr), vaddr);
}
+#else
+#define TOP_ZONE ZONE_NORMAL
#endif
int page_is_ram(unsigned long pfn)
@@ -267,14 +270,9 @@ void __init limit_zone_pfn(enum zone_type zone, unsigned long pfn_limit)
*/
int dma_pfn_limit_to_zone(u64 pfn_limit)
{
- enum zone_type top_zone = ZONE_NORMAL;
int i;
-#ifdef CONFIG_HIGHMEM
- top_zone = ZONE_HIGHMEM;
-#endif
-
- for (i = top_zone; i >= 0; i--) {
+ for (i = TOP_ZONE; i >= 0; i--) {
if (max_zone_pfns[i] <= pfn_limit)
return i;
}
@@ -289,7 +287,6 @@ void __init paging_init(void)
{
unsigned long long total_ram = memblock_phys_mem_size();
phys_addr_t top_of_ram = memblock_end_of_DRAM();
- enum zone_type top_zone;
#ifdef CONFIG_PPC32
unsigned long v = __fix_to_virt(__end_of_fixed_addresses - 1);
@@ -313,13 +310,9 @@ void __init paging_init(void)
(long int)((top_of_ram - total_ram) >> 20));
#ifdef CONFIG_HIGHMEM
- top_zone = ZONE_HIGHMEM;
limit_zone_pfn(ZONE_NORMAL, lowmem_end_addr >> PAGE_SHIFT);
-#else
- top_zone = ZONE_NORMAL;
#endif
-
- limit_zone_pfn(top_zone, top_of_ram >> PAGE_SHIFT);
+ limit_zone_pfn(TOP_ZONE, top_of_ram >> PAGE_SHIFT);
zone_limits_final = true;
free_area_init_nodes(max_zone_pfns);
@@ -498,7 +491,10 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
* We don't need to worry about _PAGE_PRESENT here because we are
* called with either mm->page_table_lock held or ptl lock held
*/
- unsigned long access = 0, trap;
+ unsigned long access, trap;
+
+ if (radix_enabled())
+ return;
/* We only want HPTEs for linux PTEs that have _PAGE_ACCESSED set */
if (!pte_young(*ptep) || address >= TASK_SIZE)
@@ -511,13 +507,19 @@ void update_mmu_cache(struct vm_area_struct *vma, unsigned long address,
*
* We also avoid filling the hash if not coming from a fault
*/
- if (current->thread.regs == NULL)
- return;
- trap = TRAP(current->thread.regs);
- if (trap == 0x400)
- access |= _PAGE_EXEC;
- else if (trap != 0x300)
+
+ trap = current->thread.regs ? TRAP(current->thread.regs) : 0UL;
+ switch (trap) {
+ case 0x300:
+ access = 0UL;
+ break;
+ case 0x400:
+ access = _PAGE_EXEC;
+ break;
+ default:
return;
+ }
+
hash_preload(vma->vm_mm, address, access, trap);
#endif /* CONFIG_PPC_STD_MMU */
#if (defined(CONFIG_PPC_BOOK3E_64) || defined(CONFIG_PPC_FSL_BOOK3E)) \
diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
index 4087705ba90f..2f1e44362198 100644
--- a/arch/powerpc/mm/mmap.c
+++ b/arch/powerpc/mm/mmap.c
@@ -26,6 +26,9 @@
#include <linux/mm.h>
#include <linux/random.h>
#include <linux/sched.h>
+#include <linux/elf-randomize.h>
+#include <linux/security.h>
+#include <linux/mman.h>
/*
* Top of mmap area (just below the process stack).
@@ -78,6 +81,111 @@ static inline unsigned long mmap_base(unsigned long rnd)
return PAGE_ALIGN(TASK_SIZE - gap - rnd);
}
+#ifdef CONFIG_PPC_RADIX_MMU
+/*
+ * Same as the generic code and used only for radix; we don't need to change
+ * the generic behaviour, but we have to duplicate it because hash selects
+ * HAVE_ARCH_UNMAPPED_AREA.
+ */
+static unsigned long
+radix__arch_get_unmapped_area(struct file *filp, unsigned long addr,
+ unsigned long len, unsigned long pgoff,
+ unsigned long flags)
+{
+ struct mm_struct *mm = current->mm;
+ struct vm_area_struct *vma;
+ struct vm_unmapped_area_info info;
+
+ if (len > TASK_SIZE - mmap_min_addr)
+ return -ENOMEM;
+
+ if (flags & MAP_FIXED)
+ return addr;
+
+ if (addr) {
+ addr = PAGE_ALIGN(addr);
+ vma = find_vma(mm, addr);
+ if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
+ (!vma || addr + len <= vma->vm_start))
+ return addr;
+ }
+
+ info.flags = 0;
+ info.length = len;
+ info.low_limit = mm->mmap_base;
+ info.high_limit = TASK_SIZE;
+ info.align_mask = 0;
+ return vm_unmapped_area(&info);
+}
+
+static unsigned long
+radix__arch_get_unmapped_area_topdown(struct file *filp,
+ const unsigned long addr0,
+ const unsigned long len,
+ const unsigned long pgoff,
+ const unsigned long flags)
+{
+ struct vm_area_struct *vma;
+ struct mm_struct *mm = current->mm;
+ unsigned long addr = addr0;
+ struct vm_unmapped_area_info info;
+
+ /* requested length too big for entire address space */
+ if (len > TASK_SIZE - mmap_min_addr)
+ return -ENOMEM;
+
+ if (flags & MAP_FIXED)
+ return addr;
+
+ /* requesting a specific address */
+ if (addr) {
+ addr = PAGE_ALIGN(addr);
+ vma = find_vma(mm, addr);
+ if (TASK_SIZE - len >= addr && addr >= mmap_min_addr &&
+ (!vma || addr + len <= vma->vm_start))
+ return addr;
+ }
+
+ info.flags = VM_UNMAPPED_AREA_TOPDOWN;
+ info.length = len;
+ info.low_limit = max(PAGE_SIZE, mmap_min_addr);
+ info.high_limit = mm->mmap_base;
+ info.align_mask = 0;
+ addr = vm_unmapped_area(&info);
+
+ /*
+ * A failed mmap() very likely causes application failure,
+ * so fall back to the bottom-up function here. This scenario
+ * can happen with large stack limits and large mmap()
+ * allocations.
+ */
+ if (addr & ~PAGE_MASK) {
+ VM_BUG_ON(addr != -ENOMEM);
+ info.flags = 0;
+ info.low_limit = TASK_UNMAPPED_BASE;
+ info.high_limit = TASK_SIZE;
+ addr = vm_unmapped_area(&info);
+ }
+
+ return addr;
+}
+
+static void radix__arch_pick_mmap_layout(struct mm_struct *mm,
+ unsigned long random_factor)
+{
+ if (mmap_is_legacy()) {
+ mm->mmap_base = TASK_UNMAPPED_BASE;
+ mm->get_unmapped_area = radix__arch_get_unmapped_area;
+ } else {
+ mm->mmap_base = mmap_base(random_factor);
+ mm->get_unmapped_area = radix__arch_get_unmapped_area_topdown;
+ }
+}
+#else
+/* dummy */
+extern void radix__arch_pick_mmap_layout(struct mm_struct *mm,
+ unsigned long random_factor);
+#endif
/*
* This function, called very early during the creation of a new
* process VM image, sets up which VM layout function to use:
@@ -89,6 +197,8 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
if (current->flags & PF_RANDOMIZE)
random_factor = arch_mmap_rnd();
+ if (radix_enabled())
+ return radix__arch_pick_mmap_layout(mm, random_factor);
/*
* Fall back to the standard layout if the personality
* bit is set, or if the expected stack growth is unlimited:
diff --git a/arch/powerpc/mm/mmu_context_hash64.c b/arch/powerpc/mm/mmu_context_book3s64.c
index 9ca6fe16cb29..227b2a6c4544 100644
--- a/arch/powerpc/mm/mmu_context_hash64.c
+++ b/arch/powerpc/mm/mmu_context_book3s64.c
@@ -58,6 +58,17 @@ again:
return index;
}
EXPORT_SYMBOL_GPL(__init_new_context);
+static int radix__init_new_context(struct mm_struct *mm, int index)
+{
+ unsigned long rts_field;
+
+ /*
+ * Set the process table entry.
+ */
+ rts_field = 3ull << PPC_BITLSHIFT(2);
+ process_tb[index].prtb0 = cpu_to_be64(rts_field | __pa(mm->pgd) | RADIX_PGD_INDEX_SIZE);
+ return 0;
+}
int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
{
@@ -67,13 +78,27 @@ int init_new_context(struct task_struct *tsk, struct mm_struct *mm)
if (index < 0)
return index;
- /* The old code would re-promote on fork, we don't do that
- * when using slices as it could cause problem promoting slices
- * that have been forced down to 4K
- */
- if (slice_mm_new_context(mm))
- slice_set_user_psize(mm, mmu_virtual_psize);
- subpage_prot_init_new_context(mm);
+ if (radix_enabled()) {
+ radix__init_new_context(mm, index);
+ } else {
+
+ /* The old code would re-promote on fork, we don't do that
+ * when using slices as it could cause problem promoting slices
+ * that have been forced down to 4K
+ *
+ * For book3s we have MMU_NO_CONTEXT set to be ~0. Hence check
+ * explicitly against context.id == 0. This ensures that we
+ * properly initialize context slice details for newly allocated
+ * mm's (which will have id == 0) and don't alter context slice
+ * inherited via fork (which will have id != 0).
+ *
+ * We should not be calling init_new_context() on init_mm. Hence a
+ * check against 0 is ok.
+ */
+ if (mm->context.id == 0)
+ slice_set_user_psize(mm, mmu_virtual_psize);
+ subpage_prot_init_new_context(mm);
+ }
mm->context.id = index;
#ifdef CONFIG_PPC_ICSWX
mm->context.cop_lockp = kmalloc(sizeof(spinlock_t), GFP_KERNEL);
@@ -144,8 +169,19 @@ void destroy_context(struct mm_struct *mm)
mm->context.cop_lockp = NULL;
#endif /* CONFIG_PPC_ICSWX */
+ if (radix_enabled())
+ process_tb[mm->context.id].prtb1 = 0;
+ else
+ subpage_prot_free(mm);
destroy_pagetable_page(mm);
__destroy_context(mm->context.id);
- subpage_prot_free(mm);
mm->context.id = MMU_NO_CONTEXT;
}
+
+#ifdef CONFIG_PPC_RADIX_MMU
+void radix__switch_mmu_context(struct mm_struct *prev, struct mm_struct *next)
+{
+ mtspr(SPRN_PID, next->context.id);
+ asm volatile("isync": : :"memory");
+}
+#endif
diff --git a/arch/powerpc/mm/mmu_context_nohash.c b/arch/powerpc/mm/mmu_context_nohash.c
index 986afbc22c76..7d95bc402dba 100644
--- a/arch/powerpc/mm/mmu_context_nohash.c
+++ b/arch/powerpc/mm/mmu_context_nohash.c
@@ -226,7 +226,8 @@ static void context_check_map(void)
static void context_check_map(void) { }
#endif
-void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next)
+void switch_mmu_context(struct mm_struct *prev, struct mm_struct *next,
+ struct task_struct *tsk)
{
unsigned int i, id, cpu = smp_processor_id();
unsigned long *map;
@@ -334,8 +335,7 @@ int init_new_context(struct task_struct *t, struct mm_struct *mm)
mm->context.active = 0;
#ifdef CONFIG_PPC_MM_SLICES
- if (slice_mm_new_context(mm))
- slice_set_user_psize(mm, mmu_virtual_psize);
+ slice_set_user_psize(mm, mmu_virtual_psize);
#endif
return 0;
diff --git a/arch/powerpc/mm/mmu_decl.h b/arch/powerpc/mm/mmu_decl.h
index bfb7c0bcabd5..6af65327c993 100644
--- a/arch/powerpc/mm/mmu_decl.h
+++ b/arch/powerpc/mm/mmu_decl.h
@@ -108,11 +108,6 @@ extern unsigned long Hash_size, Hash_mask;
#endif /* CONFIG_PPC32 */
-#ifdef CONFIG_PPC64
-extern int map_kernel_page(unsigned long ea, unsigned long pa,
- unsigned long flags);
-#endif /* CONFIG_PPC64 */
-
extern unsigned long ioremap_bot;
extern unsigned long __max_low_memory;
extern phys_addr_t __initial_memory_limit_addr;
diff --git a/arch/powerpc/mm/pgtable-book3e.c b/arch/powerpc/mm/pgtable-book3e.c
new file mode 100644
index 000000000000..a2298930f990
--- /dev/null
+++ b/arch/powerpc/mm/pgtable-book3e.c
@@ -0,0 +1,122 @@
+/*
+ * Copyright 2005, Paul Mackerras, IBM Corporation.
+ * Copyright 2009, Benjamin Herrenschmidt, IBM Corporation.
+ * Copyright 2015-2016, Aneesh Kumar K.V, IBM Corporation.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/sched.h>
+#include <linux/memblock.h>
+#include <asm/pgalloc.h>
+#include <asm/tlb.h>
+#include <asm/dma.h>
+
+#include "mmu_decl.h"
+
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+/*
+ * On Book3E CPUs, the vmemmap is currently mapped in the top half of
+ * the vmalloc space using normal page tables, though the size of
+ * pages encoded in the PTEs can be different
+ */
+int __meminit vmemmap_create_mapping(unsigned long start,
+ unsigned long page_size,
+ unsigned long phys)
+{
+ /* Create a PTE encoding without page size */
+ unsigned long i, flags = _PAGE_PRESENT | _PAGE_ACCESSED |
+ _PAGE_KERNEL_RW;
+
+ /* PTEs only contain page size encodings up to 32M */
+ BUG_ON(mmu_psize_defs[mmu_vmemmap_psize].enc > 0xf);
+
+ /* Encode the size in the PTE */
+ flags |= mmu_psize_defs[mmu_vmemmap_psize].enc << 8;
+
+ /* For each PTE for that area, map things. Note that we don't
+ * increment phys because all PTEs are of the large size and
+ * thus must have the low bits clear
+ */
+ for (i = 0; i < page_size; i += PAGE_SIZE)
+ BUG_ON(map_kernel_page(start + i, phys, flags));
+
+ return 0;
+}
+
+#ifdef CONFIG_MEMORY_HOTPLUG
+void vmemmap_remove_mapping(unsigned long start,
+ unsigned long page_size)
+{
+}
+#endif
+#endif /* CONFIG_SPARSEMEM_VMEMMAP */
+
+static __ref void *early_alloc_pgtable(unsigned long size)
+{
+ void *pt;
+
+ pt = __va(memblock_alloc_base(size, size, __pa(MAX_DMA_ADDRESS)));
+ memset(pt, 0, size);
+
+ return pt;
+}
+
+/*
+ * map_kernel_page currently only called by __ioremap
+ * map_kernel_page adds an entry to the ioremap page table
+ * and adds an entry to the HPT, possibly bolting it
+ */
+int map_kernel_page(unsigned long ea, unsigned long pa, unsigned long flags)
+{
+ pgd_t *pgdp;
+ pud_t *pudp;
+ pmd_t *pmdp;
+ pte_t *ptep;
+
+ BUILD_BUG_ON(TASK_SIZE_USER64 > PGTABLE_RANGE);
+ if (slab_is_available()) {
+ pgdp = pgd_offset_k(ea);
+ pudp = pud_alloc(&init_mm, pgdp, ea);
+ if (!pudp)
+ return -ENOMEM;
+ pmdp = pmd_alloc(&init_mm, pudp, ea);
+ if (!pmdp)
+ return -ENOMEM;
+ ptep = pte_alloc_kernel(pmdp, ea);
+ if (!ptep)
+ return -ENOMEM;
+ set_pte_at(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT,
+ __pgprot(flags)));
+ } else {
+ pgdp = pgd_offset_k(ea);
+#ifndef __PAGETABLE_PUD_FOLDED
+ if (pgd_none(*pgdp)) {
+ pudp = early_alloc_pgtable(PUD_TABLE_SIZE);
+ BUG_ON(pudp == NULL);
+ pgd_populate(&init_mm, pgdp, pudp);
+ }
+#endif /* !__PAGETABLE_PUD_FOLDED */
+ pudp = pud_offset(pgdp, ea);
+ if (pud_none(*pudp)) {
+ pmdp = early_alloc_pgtable(PMD_TABLE_SIZE);
+ BUG_ON(pmdp == NULL);
+ pud_populate(&init_mm, pudp, pmdp);
+ }
+ pmdp = pmd_offset(pudp, ea);
+ if (!pmd_present(*pmdp)) {
+ ptep = early_alloc_pgtable(PAGE_SIZE);
+ BUG_ON(ptep == NULL);
+ pmd_populate_kernel(&init_mm, pmdp, ptep);
+ }
+ ptep = pte_offset_kernel(pmdp, ea);
+ set_pte_at(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT,
+ __pgprot(flags)));
+ }
+
+ smp_wmb();
+ return 0;
+}
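
map_kernel_page() above is a four-level walk: any level that is still empty gets a zeroed table (from the slab once it is up, from memblock via early_alloc_pgtable() before that), and the leaf PTE is installed last, followed by a smp_wmb(). A minimal stand-alone model of that walk in plain user-space C, not kernel code; the 9-bits-per-level index layout, the entry format and all constants are illustrative assumptions:

#include <stdint.h>
#include <stdlib.h>
#include <stdio.h>

#define ENTRIES 512                      /* 9 bits of index per level */
typedef uint64_t entry_t;                /* next-level pointer or leaf PTE */

static entry_t *alloc_table(void)
{
        /* zeroed table, like early_alloc_pgtable() in the patch */
        entry_t *t = calloc(ENTRIES, sizeof(entry_t));
        if (!t)
                abort();
        return t;
}

static entry_t *descend(entry_t *table, unsigned int idx)
{
        if (!table[idx])                 /* level not populated yet */
                table[idx] = (entry_t)(uintptr_t)alloc_table();
        return (entry_t *)(uintptr_t)table[idx];
}

static void map_page(entry_t *pgd, uint64_t ea, uint64_t pa, uint64_t flags)
{
        entry_t *pud = descend(pgd, (ea >> 39) & 0x1ff);
        entry_t *pmd = descend(pud, (ea >> 30) & 0x1ff);
        entry_t *pte = descend(pmd, (ea >> 21) & 0x1ff);

        pte[(ea >> 12) & 0x1ff] = pa | flags;   /* install the leaf entry */
}

int main(void)
{
        entry_t *pgd = alloc_table();

        map_page(pgd, 0x0000700000001000ULL, 0x20000000ULL, 0x3);
        printf("leaf installed\n");
        return 0;
}
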
diff --git a/arch/powerpc/mm/pgtable-book3s64.c b/arch/powerpc/mm/pgtable-book3s64.c
new file mode 100644
index 000000000000..eb4451144746
--- /dev/null
+++ b/arch/powerpc/mm/pgtable-book3s64.c
@@ -0,0 +1,118 @@
+/*
+ * Copyright 2015-2016, Aneesh Kumar K.V, IBM Corporation.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/sched.h>
+#include <asm/pgalloc.h>
+#include <asm/tlb.h>
+
+#include "mmu_decl.h"
+#include <trace/events/thp.h>
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+/*
+ * This is called when relaxing access to a hugepage. It's also called in the page
+ * fault path when we don't hit any of the major fault cases, i.e. a minor
+ * update of _PAGE_ACCESSED, _PAGE_DIRTY, etc. The generic code will have
+ * handled those two for us; we additionally deal with missing execute
+ * permission here on some processors
+ */
+int pmdp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
+ pmd_t *pmdp, pmd_t entry, int dirty)
+{
+ int changed;
+#ifdef CONFIG_DEBUG_VM
+ WARN_ON(!pmd_trans_huge(*pmdp));
+ assert_spin_locked(&vma->vm_mm->page_table_lock);
+#endif
+ changed = !pmd_same(*(pmdp), entry);
+ if (changed) {
+ __ptep_set_access_flags(pmdp_ptep(pmdp), pmd_pte(entry));
+ /*
+ * Since we are not supporting SW TLB systems, we don't
+ * have anything similar to flush_tlb_page_nohash()
+ */
+ }
+ return changed;
+}
+
+int pmdp_test_and_clear_young(struct vm_area_struct *vma,
+ unsigned long address, pmd_t *pmdp)
+{
+ return __pmdp_test_and_clear_young(vma->vm_mm, address, pmdp);
+}
+/*
+ * set a new huge pmd. We should not be called for updating
+ * an existing pmd entry. That should go via pmd_hugepage_update.
+ */
+void set_pmd_at(struct mm_struct *mm, unsigned long addr,
+ pmd_t *pmdp, pmd_t pmd)
+{
+#ifdef CONFIG_DEBUG_VM
+ WARN_ON(pte_present(pmd_pte(*pmdp)) && !pte_protnone(pmd_pte(*pmdp)));
+ assert_spin_locked(&mm->page_table_lock);
+ WARN_ON(!pmd_trans_huge(pmd));
+#endif
+ trace_hugepage_set_pmd(addr, pmd_val(pmd));
+ return set_pte_at(mm, addr, pmdp_ptep(pmdp), pmd_pte(pmd));
+}
+/*
+ * We use this to invalidate a pmdp entry before switching from a
+ * hugepte to regular pmd entry.
+ */
+void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
+ pmd_t *pmdp)
+{
+ pmd_hugepage_update(vma->vm_mm, address, pmdp, _PAGE_PRESENT, 0);
+ flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
+ /*
+ * This ensures that generic code that relies on IRQ disabling
+ * to prevent a parallel THP split works as expected.
+ */
+ kick_all_cpus_sync();
+}
+
+static pmd_t pmd_set_protbits(pmd_t pmd, pgprot_t pgprot)
+{
+ return __pmd(pmd_val(pmd) | pgprot_val(pgprot));
+}
+
+pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot)
+{
+ unsigned long pmdv;
+
+ pmdv = (pfn << PAGE_SHIFT) & PTE_RPN_MASK;
+ return pmd_set_protbits(__pmd(pmdv), pgprot);
+}
+
+pmd_t mk_pmd(struct page *page, pgprot_t pgprot)
+{
+ return pfn_pmd(page_to_pfn(page), pgprot);
+}
+
+pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
+{
+ unsigned long pmdv;
+
+ pmdv = pmd_val(pmd);
+ pmdv &= _HPAGE_CHG_MASK;
+ return pmd_set_protbits(__pmd(pmdv), newprot);
+}
+
+/*
+ * This is called at the end of handling a user page fault, when the
+ * fault has been handled by updating a HUGE PMD entry in the linux page tables.
+ * We use it to preload an HPTE into the hash table corresponding to
+ * the updated linux HUGE PMD entry.
+ */
+void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
+ pmd_t *pmd)
+{
+ return;
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
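
pfn_pmd() above packs the PFN into the RPN field and ORs in the protection bits, and pmd_modify() keeps only the bits covered by _HPAGE_CHG_MASK (the PFN plus the bits that must survive a protection change) before applying the new protection. A stand-alone sketch of that mask-and-or pattern; the bit positions and mask values below are made-up stand-ins, not the real book3s64 layout:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT       12
#define RPN_MASK         0x0000fffffffff000ULL   /* hypothetical PFN field   */
#define HPAGE_CHG_MASK   (RPN_MASK | 0x380ULL)   /* bits preserved on modify */

typedef uint64_t pmd_t_sketch;

static pmd_t_sketch pfn_pmd_sketch(uint64_t pfn, uint64_t prot)
{
        /* place the PFN in the RPN field, then OR in the protection bits */
        return ((pfn << PAGE_SHIFT) & RPN_MASK) | prot;
}

static pmd_t_sketch pmd_modify_sketch(pmd_t_sketch pmd, uint64_t newprot)
{
        /* keep the PFN and "change mask" bits, replace everything else */
        return (pmd & HPAGE_CHG_MASK) | newprot;
}

int main(void)
{
        pmd_t_sketch pmd = pfn_pmd_sketch(0x1234, 0x5);  /* made-up prot bits */

        pmd = pmd_modify_sketch(pmd, 0x1);               /* new made-up prot */
        printf("pmd = %#llx\n", (unsigned long long)pmd);
        return 0;
}
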
diff --git a/arch/powerpc/mm/pgtable-hash64.c b/arch/powerpc/mm/pgtable-hash64.c
new file mode 100644
index 000000000000..c23e286a6b8f
--- /dev/null
+++ b/arch/powerpc/mm/pgtable-hash64.c
@@ -0,0 +1,342 @@
+/*
+ * Copyright 2005, Paul Mackerras, IBM Corporation.
+ * Copyright 2009, Benjamin Herrenschmidt, IBM Corporation.
+ * Copyright 2015-2016, Aneesh Kumar K.V, IBM Corporation.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/sched.h>
+#include <asm/pgalloc.h>
+#include <asm/tlb.h>
+
+#include "mmu_decl.h"
+
+#define CREATE_TRACE_POINTS
+#include <trace/events/thp.h>
+
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+/*
+ * On hash-based CPUs, the vmemmap is bolted in the hash table.
+ *
+ */
+int __meminit hash__vmemmap_create_mapping(unsigned long start,
+ unsigned long page_size,
+ unsigned long phys)
+{
+ int rc = htab_bolt_mapping(start, start + page_size, phys,
+ pgprot_val(PAGE_KERNEL),
+ mmu_vmemmap_psize, mmu_kernel_ssize);
+ if (rc < 0) {
+ int rc2 = htab_remove_mapping(start, start + page_size,
+ mmu_vmemmap_psize,
+ mmu_kernel_ssize);
+ BUG_ON(rc2 && (rc2 != -ENOENT));
+ }
+ return rc;
+}
+
+#ifdef CONFIG_MEMORY_HOTPLUG
+void hash__vmemmap_remove_mapping(unsigned long start,
+ unsigned long page_size)
+{
+ int rc = htab_remove_mapping(start, start + page_size,
+ mmu_vmemmap_psize,
+ mmu_kernel_ssize);
+ BUG_ON((rc < 0) && (rc != -ENOENT));
+ WARN_ON(rc == -ENOENT);
+}
+#endif
+#endif /* CONFIG_SPARSEMEM_VMEMMAP */
+
+/*
+ * hash__map_kernel_page is currently only called by __ioremap;
+ * it adds an entry to the ioremap page table
+ * and adds an entry to the HPT, possibly bolting it
+ */
+int hash__map_kernel_page(unsigned long ea, unsigned long pa, unsigned long flags)
+{
+ pgd_t *pgdp;
+ pud_t *pudp;
+ pmd_t *pmdp;
+ pte_t *ptep;
+
+ BUILD_BUG_ON(TASK_SIZE_USER64 > H_PGTABLE_RANGE);
+ if (slab_is_available()) {
+ pgdp = pgd_offset_k(ea);
+ pudp = pud_alloc(&init_mm, pgdp, ea);
+ if (!pudp)
+ return -ENOMEM;
+ pmdp = pmd_alloc(&init_mm, pudp, ea);
+ if (!pmdp)
+ return -ENOMEM;
+ ptep = pte_alloc_kernel(pmdp, ea);
+ if (!ptep)
+ return -ENOMEM;
+ set_pte_at(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT,
+ __pgprot(flags)));
+ } else {
+ /*
+ * If the mm subsystem is not fully up, we cannot create a
+ * linux page table entry for this mapping. Simply bolt an
+ * entry in the hardware page table.
+ *
+ */
+ if (htab_bolt_mapping(ea, ea + PAGE_SIZE, pa, flags,
+ mmu_io_psize, mmu_kernel_ssize)) {
+ printk(KERN_ERR "Failed to do bolted mapping IO "
+ "memory at %016lx !\n", pa);
+ return -ENOMEM;
+ }
+ }
+
+ smp_wmb();
+ return 0;
+}
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+
+unsigned long hash__pmd_hugepage_update(struct mm_struct *mm, unsigned long addr,
+ pmd_t *pmdp, unsigned long clr,
+ unsigned long set)
+{
+ __be64 old_be, tmp;
+ unsigned long old;
+
+#ifdef CONFIG_DEBUG_VM
+ WARN_ON(!pmd_trans_huge(*pmdp));
+ assert_spin_locked(&mm->page_table_lock);
+#endif
+
+ __asm__ __volatile__(
+ "1: ldarx %0,0,%3\n\
+ and. %1,%0,%6\n\
+ bne- 1b \n\
+ andc %1,%0,%4 \n\
+ or %1,%1,%7\n\
+ stdcx. %1,0,%3 \n\
+ bne- 1b"
+ : "=&r" (old_be), "=&r" (tmp), "=m" (*pmdp)
+ : "r" (pmdp), "r" (cpu_to_be64(clr)), "m" (*pmdp),
+ "r" (cpu_to_be64(H_PAGE_BUSY)), "r" (cpu_to_be64(set))
+ : "cc" );
+
+ old = be64_to_cpu(old_be);
+
+ trace_hugepage_update(addr, old, clr, set);
+ if (old & H_PAGE_HASHPTE)
+ hpte_do_hugepage_flush(mm, addr, pmdp, old);
+ return old;
+}
+
+pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
+ pmd_t *pmdp)
+{
+ pmd_t pmd;
+
+ VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+ VM_BUG_ON(pmd_trans_huge(*pmdp));
+
+ pmd = *pmdp;
+ pmd_clear(pmdp);
+ /*
+ * Wait for all pending hash_page to finish. This is needed
+ * in case of subpage collapse. When we collapse normal pages
+ * to hugepage, we first clear the pmd, then invalidate all
+ * the PTE entries. The assumption here is that any low level
+ * page fault will see a none pmd and take the slow path that
+ * will wait on mmap_sem. But we could very well be in a
+ * hash_page with local ptep pointer value. Such a hash page
+ * can result in adding new HPTE entries for normal subpages.
+ * That means we could be modifying the page content as we
+ * copy them to a huge page. So wait for parallel hash_page
+ * to finish before invalidating HPTE entries. We can do this
+ * by sending an IPI to all the cpus and executing a dummy
+ * function there.
+ */
+ kick_all_cpus_sync();
+ /*
+ * Now invalidate the hpte entries in the range
+ * covered by the pmd. This makes sure we take a
+ * fault and find the pmd as none, which will
+ * result in a major fault that takes mmap_sem and
+ * hence waits for the collapse to complete. Without this,
+ * the __collapse_huge_page_copy can result in copying
+ * the old content.
+ */
+ flush_tlb_pmd_range(vma->vm_mm, &pmd, address);
+ return pmd;
+}
+
+/*
+ * We want to put the pgtable in pmd and use pgtable for tracking
+ * the base page size hptes
+ */
+void hash__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
+ pgtable_t pgtable)
+{
+ pgtable_t *pgtable_slot;
+ assert_spin_locked(&mm->page_table_lock);
+ /*
+ * we store the pgtable in the second half of PMD
+ */
+ pgtable_slot = (pgtable_t *)pmdp + PTRS_PER_PMD;
+ *pgtable_slot = pgtable;
+ /*
+ * Expose the deposited pgtable to other cpus before we set
+ * the hugepage PTE at the pmd level; the hash fault code looks
+ * at the deposited pgtable to store hash index values.
+ */
+ smp_wmb();
+}
+
+pgtable_t hash__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
+{
+ pgtable_t pgtable;
+ pgtable_t *pgtable_slot;
+
+ assert_spin_locked(&mm->page_table_lock);
+ pgtable_slot = (pgtable_t *)pmdp + PTRS_PER_PMD;
+ pgtable = *pgtable_slot;
+ /*
+ * Once we withdraw, mark the entry NULL.
+ */
+ *pgtable_slot = NULL;
+ /*
+ * We store HPTE information in the deposited PTE fragment.
+ * zero out the content on withdraw.
+ */
+ memset(pgtable, 0, PTE_FRAG_SIZE);
+ return pgtable;
+}
+
+void hash__pmdp_huge_split_prepare(struct vm_area_struct *vma,
+ unsigned long address, pmd_t *pmdp)
+{
+ VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+ VM_BUG_ON(REGION_ID(address) != USER_REGION_ID);
+
+ /*
+ * We can't mark the pmd none here, because that will cause a race
+ * against exit_mmap. We need to continue marking the pmd TRANS HUGE
+ * while we split, but at the same time we want the rest of the ppc64
+ * code not to insert a hash pte for this, because we will be modifying
+ * the deposited pgtable in the caller of this function. Hence set
+ * _PAGE_PRIVILEGED so that we move the fault handling to a higher
+ * level function, which will serialize against the ptl.
+ * We need to flush existing hash pte entries here even though
+ * the translation is still valid, because we will withdraw the
+ * pgtable_t after this.
+ */
+ pmd_hugepage_update(vma->vm_mm, address, pmdp, 0, _PAGE_PRIVILEGED);
+}
+
+/*
+ * A linux hugepage PMD was changed and the corresponding hash table entries
+ * need to be flushed.
+ */
+void hpte_do_hugepage_flush(struct mm_struct *mm, unsigned long addr,
+ pmd_t *pmdp, unsigned long old_pmd)
+{
+ int ssize;
+ unsigned int psize;
+ unsigned long vsid;
+ unsigned long flags = 0;
+ const struct cpumask *tmp;
+
+ /* get the base page size, vsid and segment size */
+#ifdef CONFIG_DEBUG_VM
+ psize = get_slice_psize(mm, addr);
+ BUG_ON(psize == MMU_PAGE_16M);
+#endif
+ if (old_pmd & H_PAGE_COMBO)
+ psize = MMU_PAGE_4K;
+ else
+ psize = MMU_PAGE_64K;
+
+ if (!is_kernel_addr(addr)) {
+ ssize = user_segment_size(addr);
+ vsid = get_vsid(mm->context.id, addr, ssize);
+ WARN_ON(vsid == 0);
+ } else {
+ vsid = get_kernel_vsid(addr, mmu_kernel_ssize);
+ ssize = mmu_kernel_ssize;
+ }
+
+ tmp = cpumask_of(smp_processor_id());
+ if (cpumask_equal(mm_cpumask(mm), tmp))
+ flags |= HPTE_LOCAL_UPDATE;
+
+ return flush_hash_hugepage(vsid, addr, pmdp, psize, ssize, flags);
+}
+
+pmd_t hash__pmdp_huge_get_and_clear(struct mm_struct *mm,
+ unsigned long addr, pmd_t *pmdp)
+{
+ pmd_t old_pmd;
+ pgtable_t pgtable;
+ unsigned long old;
+ pgtable_t *pgtable_slot;
+
+ old = pmd_hugepage_update(mm, addr, pmdp, ~0UL, 0);
+ old_pmd = __pmd(old);
+ /*
+ * We have pmd == none and we are holding page_table_lock.
+ * So we can safely go and clear the pgtable hash
+ * index info.
+ */
+ pgtable_slot = (pgtable_t *)pmdp + PTRS_PER_PMD;
+ pgtable = *pgtable_slot;
+ /*
+ * Zero out the old valid and hash index details so that
+ * a later hash fault doesn't look at stale values.
+ */
+ memset(pgtable, 0, PTE_FRAG_SIZE);
+ /*
+ * Serialize against find_linux_pte_or_hugepte which does lock-less
+ * lookup in page tables with local interrupts disabled. For huge pages
+ * it casts pmd_t to pte_t. Since format of pte_t is different from
+ * pmd_t we want to prevent transit from pmd pointing to page table
+ * to pmd pointing to huge page (and back) while interrupts are disabled.
+ * We clear pmd to possibly replace it with page table pointer in
+ * different code paths. So make sure we wait for the parallel
+ * find_linux_pte_or_hugepte to finish.
+ */
+ kick_all_cpus_sync();
+ return old_pmd;
+}
+
+int hash__has_transparent_hugepage(void)
+{
+
+ if (!mmu_has_feature(MMU_FTR_16M_PAGE))
+ return 0;
+ /*
+ * We support THP only if PMD_SIZE is 16MB.
+ */
+ if (mmu_psize_defs[MMU_PAGE_16M].shift != PMD_SHIFT)
+ return 0;
+ /*
+ * We need to make sure that we support a 16MB hugepage in a segment
+ * with base page size 64K or 4K. We only enable THP with a PAGE_SIZE
+ * of 64K.
+ */
+ /*
+ * If we have 64K HPTE, we will be using that by default
+ */
+ if (mmu_psize_defs[MMU_PAGE_64K].shift &&
+ (mmu_psize_defs[MMU_PAGE_64K].penc[MMU_PAGE_16M] == -1))
+ return 0;
+ /*
+ * Ok we only have 4K HPTE
+ */
+ if (mmu_psize_defs[MMU_PAGE_4K].penc[MMU_PAGE_16M] == -1)
+ return 0;
+
+ return 1;
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
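
hash__has_transparent_hugepage() above boils down to two requirements: the PMD must map exactly one 16M page, and the MMU must have an HPTE encoding (penc) for 16M pages under the active base page size (64K preferred, 4K otherwise). A stand-alone sketch of that decision, with the mmu_has_feature(MMU_FTR_16M_PAGE) check omitted and a made-up mmu_psize_defs-style table:

#include <stdio.h>

enum { MMU_PAGE_4K, MMU_PAGE_64K, MMU_PAGE_16M, MMU_PAGE_COUNT };

struct psize_def {
        int shift;                        /* 0 => size not supported        */
        int penc[MMU_PAGE_COUNT];         /* -1 => no HPTE encoding for it  */
};

static int has_thp_sketch(const struct psize_def *defs, int pmd_shift)
{
        /* THP only if the PMD maps exactly one 16M page */
        if (defs[MMU_PAGE_16M].shift != pmd_shift)
                return 0;
        /* with 64K HPTEs we need a 16M encoding under a 64K base page */
        if (defs[MMU_PAGE_64K].shift &&
            defs[MMU_PAGE_64K].penc[MMU_PAGE_16M] == -1)
                return 0;
        /* otherwise fall back to the 4K HPTE encoding */
        if (defs[MMU_PAGE_4K].penc[MMU_PAGE_16M] == -1)
                return 0;
        return 1;
}

int main(void)
{
        /* illustrative table: 16M == PMD size and encodings exist */
        struct psize_def defs[MMU_PAGE_COUNT] = {
                [MMU_PAGE_4K]  = { 12, {  0, -1,  1 } },
                [MMU_PAGE_64K] = { 16, { -1,  0,  2 } },
                [MMU_PAGE_16M] = { 24, { -1, -1,  0 } },
        };

        printf("THP usable: %d\n", has_thp_sketch(defs, 24));
        return 0;
}
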
diff --git a/arch/powerpc/mm/pgtable-radix.c b/arch/powerpc/mm/pgtable-radix.c
new file mode 100644
index 000000000000..18b2c11604fa
--- /dev/null
+++ b/arch/powerpc/mm/pgtable-radix.c
@@ -0,0 +1,526 @@
+/*
+ * Page table handling routines for radix page table.
+ *
+ * Copyright 2015-2016, Aneesh Kumar K.V, IBM Corporation.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+#include <linux/sched.h>
+#include <linux/memblock.h>
+#include <linux/of_fdt.h>
+
+#include <asm/pgtable.h>
+#include <asm/pgalloc.h>
+#include <asm/dma.h>
+#include <asm/machdep.h>
+#include <asm/mmu.h>
+#include <asm/firmware.h>
+
+#include <trace/events/thp.h>
+
+static int native_update_partition_table(u64 patb1)
+{
+ partition_tb->patb1 = cpu_to_be64(patb1);
+ return 0;
+}
+
+static __ref void *early_alloc_pgtable(unsigned long size)
+{
+ void *pt;
+
+ pt = __va(memblock_alloc_base(size, size, MEMBLOCK_ALLOC_ANYWHERE));
+ memset(pt, 0, size);
+
+ return pt;
+}
+
+int radix__map_kernel_page(unsigned long ea, unsigned long pa,
+ pgprot_t flags,
+ unsigned int map_page_size)
+{
+ pgd_t *pgdp;
+ pud_t *pudp;
+ pmd_t *pmdp;
+ pte_t *ptep;
+ /*
+ * Make sure task size is correct as per the max addr
+ */
+ BUILD_BUG_ON(TASK_SIZE_USER64 > RADIX_PGTABLE_RANGE);
+ if (slab_is_available()) {
+ pgdp = pgd_offset_k(ea);
+ pudp = pud_alloc(&init_mm, pgdp, ea);
+ if (!pudp)
+ return -ENOMEM;
+ if (map_page_size == PUD_SIZE) {
+ ptep = (pte_t *)pudp;
+ goto set_the_pte;
+ }
+ pmdp = pmd_alloc(&init_mm, pudp, ea);
+ if (!pmdp)
+ return -ENOMEM;
+ if (map_page_size == PMD_SIZE) {
+ ptep = (pte_t *)pmdp;
+ goto set_the_pte;
+ }
+ ptep = pte_alloc_kernel(pmdp, ea);
+ if (!ptep)
+ return -ENOMEM;
+ } else {
+ pgdp = pgd_offset_k(ea);
+ if (pgd_none(*pgdp)) {
+ pudp = early_alloc_pgtable(PUD_TABLE_SIZE);
+ BUG_ON(pudp == NULL);
+ pgd_populate(&init_mm, pgdp, pudp);
+ }
+ pudp = pud_offset(pgdp, ea);
+ if (map_page_size == PUD_SIZE) {
+ ptep = (pte_t *)pudp;
+ goto set_the_pte;
+ }
+ if (pud_none(*pudp)) {
+ pmdp = early_alloc_pgtable(PMD_TABLE_SIZE);
+ BUG_ON(pmdp == NULL);
+ pud_populate(&init_mm, pudp, pmdp);
+ }
+ pmdp = pmd_offset(pudp, ea);
+ if (map_page_size == PMD_SIZE) {
+ ptep = (pte_t *)pmdp;
+ goto set_the_pte;
+ }
+ if (!pmd_present(*pmdp)) {
+ ptep = early_alloc_pgtable(PAGE_SIZE);
+ BUG_ON(ptep == NULL);
+ pmd_populate_kernel(&init_mm, pmdp, ptep);
+ }
+ ptep = pte_offset_kernel(pmdp, ea);
+ }
+
+set_the_pte:
+ set_pte_at(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT, flags));
+ smp_wmb();
+ return 0;
+}
+
+static void __init radix_init_pgtable(void)
+{
+ int loop_count;
+ u64 base, end, start_addr;
+ unsigned long rts_field;
+ struct memblock_region *reg;
+ unsigned long linear_page_size;
+
+ /* We don't support slb for radix */
+ mmu_slb_size = 0;
+ /*
+ * Create the linear mapping, using standard page size for now
+ */
+ loop_count = 0;
+ for_each_memblock(memory, reg) {
+
+ start_addr = reg->base;
+
+redo:
+ if (loop_count < 1 && mmu_psize_defs[MMU_PAGE_1G].shift)
+ linear_page_size = PUD_SIZE;
+ else if (loop_count < 2 && mmu_psize_defs[MMU_PAGE_2M].shift)
+ linear_page_size = PMD_SIZE;
+ else
+ linear_page_size = PAGE_SIZE;
+
+ base = _ALIGN_UP(start_addr, linear_page_size);
+ end = _ALIGN_DOWN(reg->base + reg->size, linear_page_size);
+
+ pr_info("Mapping range 0x%lx - 0x%lx with 0x%lx\n",
+ (unsigned long)base, (unsigned long)end,
+ linear_page_size);
+
+ while (base < end) {
+ radix__map_kernel_page((unsigned long)__va(base),
+ base, PAGE_KERNEL_X,
+ linear_page_size);
+ base += linear_page_size;
+ }
+ /*
+ * map the rest using lower page size
+ */
+ if (end < reg->base + reg->size) {
+ start_addr = end;
+ loop_count++;
+ goto redo;
+ }
+ }
+ /*
+ * Allocate Partition table and process table for the
+ * host.
+ */
+ BUILD_BUG_ON_MSG((PRTB_SIZE_SHIFT > 23), "Process table size too large.");
+ process_tb = early_alloc_pgtable(1UL << PRTB_SIZE_SHIFT);
+ /*
+ * Fill in the process table.
+ * we support 52 bits, hence 52-28 = 24, 11000
+ */
+ rts_field = 3ull << PPC_BITLSHIFT(2);
+ process_tb->prtb0 = cpu_to_be64(rts_field | __pa(init_mm.pgd) | RADIX_PGD_INDEX_SIZE);
+ /*
+ * Fill in the partition table. We are supposed to use the effective
+ * address of the process table here, but our linear mapping also
+ * enables us to use the physical address.
+ */
+ ppc_md.update_partition_table(__pa(process_tb) | (PRTB_SIZE_SHIFT - 12) | PATB_GR);
+ pr_info("Process table %p and radix root for kernel: %p\n", process_tb, init_mm.pgd);
+}
+
+static void __init radix_init_partition_table(void)
+{
+ unsigned long rts_field;
+ /*
+ * we support 52 bits, hence 52-28 = 24, 11000
+ */
+ rts_field = 3ull << PPC_BITLSHIFT(2);
+
+ BUILD_BUG_ON_MSG((PATB_SIZE_SHIFT > 24), "Partition table size too large.");
+ partition_tb = early_alloc_pgtable(1UL << PATB_SIZE_SHIFT);
+ partition_tb->patb0 = cpu_to_be64(rts_field | __pa(init_mm.pgd) |
+ RADIX_PGD_INDEX_SIZE | PATB_HR);
+ printk("Partition table %p\n", partition_tb);
+
+ memblock_set_current_limit(MEMBLOCK_ALLOC_ANYWHERE);
+ /*
+ * update partition table control register,
+ * 64 K size.
+ */
+ mtspr(SPRN_PTCR, __pa(partition_tb) | (PATB_SIZE_SHIFT - 12));
+}
+
+void __init radix_init_native(void)
+{
+ ppc_md.update_partition_table = native_update_partition_table;
+}
+
+static int __init get_idx_from_shift(unsigned int shift)
+{
+ int idx = -1;
+
+ switch (shift) {
+ case 0xc:
+ idx = MMU_PAGE_4K;
+ break;
+ case 0x10:
+ idx = MMU_PAGE_64K;
+ break;
+ case 0x15:
+ idx = MMU_PAGE_2M;
+ break;
+ case 0x1e:
+ idx = MMU_PAGE_1G;
+ break;
+ }
+ return idx;
+}
+
+static int __init radix_dt_scan_page_sizes(unsigned long node,
+ const char *uname, int depth,
+ void *data)
+{
+ int size = 0;
+ int shift, idx;
+ unsigned int ap;
+ const __be32 *prop;
+ const char *type = of_get_flat_dt_prop(node, "device_type", NULL);
+
+ /* We are scanning "cpu" nodes only */
+ if (type == NULL || strcmp(type, "cpu") != 0)
+ return 0;
+
+ prop = of_get_flat_dt_prop(node, "ibm,processor-radix-AP-encodings", &size);
+ if (!prop)
+ return 0;
+
+ pr_info("Page sizes from device-tree:\n");
+ for (; size >= 4; size -= 4, ++prop) {
+
+ struct mmu_psize_def *def;
+
+ /* top 3 bits are the AP encoding */
+ shift = be32_to_cpu(prop[0]) & ~(0xe << 28);
+ ap = be32_to_cpu(prop[0]) >> 29;
+ pr_info("Page size sift = %d AP=0x%x\n", shift, ap);
+
+ idx = get_idx_from_shift(shift);
+ if (idx < 0)
+ continue;
+
+ def = &mmu_psize_defs[idx];
+ def->shift = shift;
+ def->ap = ap;
+ }
+
+ /* needed ? */
+ cur_cpu_spec->mmu_features &= ~MMU_FTR_NO_SLBIE_B;
+ return 1;
+}
+
+static void __init radix_init_page_sizes(void)
+{
+ int rc;
+
+ /*
+ * Try to find the available page sizes in the device-tree
+ */
+ rc = of_scan_flat_dt(radix_dt_scan_page_sizes, NULL);
+ if (rc != 0) /* Found */
+ goto found;
+ /*
+ * let's assume we have page 4k and 64k support
+ */
+ mmu_psize_defs[MMU_PAGE_4K].shift = 12;
+ mmu_psize_defs[MMU_PAGE_4K].ap = 0x0;
+
+ mmu_psize_defs[MMU_PAGE_64K].shift = 16;
+ mmu_psize_defs[MMU_PAGE_64K].ap = 0x5;
+found:
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+ if (mmu_psize_defs[MMU_PAGE_2M].shift) {
+ /*
+ * map vmemmap using 2M if available
+ */
+ mmu_vmemmap_psize = MMU_PAGE_2M;
+ }
+#endif /* CONFIG_SPARSEMEM_VMEMMAP */
+ return;
+}
+
+void __init radix__early_init_mmu(void)
+{
+ unsigned long lpcr;
+ /*
+ * setup LPCR UPRT based on mmu_features
+ */
+ lpcr = mfspr(SPRN_LPCR);
+ mtspr(SPRN_LPCR, lpcr | LPCR_UPRT);
+
+#ifdef CONFIG_PPC_64K_PAGES
+ /* PAGE_SIZE mappings */
+ mmu_virtual_psize = MMU_PAGE_64K;
+#else
+ mmu_virtual_psize = MMU_PAGE_4K;
+#endif
+
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+ /* vmemmap mapping */
+ mmu_vmemmap_psize = mmu_virtual_psize;
+#endif
+ /*
+ * initialize page table size
+ */
+ __pte_index_size = RADIX_PTE_INDEX_SIZE;
+ __pmd_index_size = RADIX_PMD_INDEX_SIZE;
+ __pud_index_size = RADIX_PUD_INDEX_SIZE;
+ __pgd_index_size = RADIX_PGD_INDEX_SIZE;
+ __pmd_cache_index = RADIX_PMD_INDEX_SIZE;
+ __pte_table_size = RADIX_PTE_TABLE_SIZE;
+ __pmd_table_size = RADIX_PMD_TABLE_SIZE;
+ __pud_table_size = RADIX_PUD_TABLE_SIZE;
+ __pgd_table_size = RADIX_PGD_TABLE_SIZE;
+
+ __pmd_val_bits = RADIX_PMD_VAL_BITS;
+ __pud_val_bits = RADIX_PUD_VAL_BITS;
+ __pgd_val_bits = RADIX_PGD_VAL_BITS;
+
+ __kernel_virt_start = RADIX_KERN_VIRT_START;
+ __kernel_virt_size = RADIX_KERN_VIRT_SIZE;
+ __vmalloc_start = RADIX_VMALLOC_START;
+ __vmalloc_end = RADIX_VMALLOC_END;
+ vmemmap = (struct page *)RADIX_VMEMMAP_BASE;
+ ioremap_bot = IOREMAP_BASE;
+ /*
+ * For now radix also uses the same frag size
+ */
+ __pte_frag_nr = H_PTE_FRAG_NR;
+ __pte_frag_size_shift = H_PTE_FRAG_SIZE_SHIFT;
+
+ radix_init_page_sizes();
+ if (!firmware_has_feature(FW_FEATURE_LPAR))
+ radix_init_partition_table();
+
+ radix_init_pgtable();
+}
+
+void radix__early_init_mmu_secondary(void)
+{
+ unsigned long lpcr;
+ /*
+ * setup LPCR UPRT based on mmu_features
+ */
+ lpcr = mfspr(SPRN_LPCR);
+ mtspr(SPRN_LPCR, lpcr | LPCR_UPRT);
+ /*
+ * update partition table control register, 64 K size.
+ */
+ if (!firmware_has_feature(FW_FEATURE_LPAR))
+ mtspr(SPRN_PTCR,
+ __pa(partition_tb) | (PATB_SIZE_SHIFT - 12));
+}
+
+void radix__setup_initial_memory_limit(phys_addr_t first_memblock_base,
+ phys_addr_t first_memblock_size)
+{
+ /* We don't currently support the first MEMBLOCK not mapping 0
+ * physical on those processors
+ */
+ BUG_ON(first_memblock_base != 0);
+ /*
+ * We limit the allocations that depend on ppc64_rma_size
+ * to first_memblock_size. We also clamp it to 1GB to
+ * avoid some funky things such as RTAS bugs.
+ *
+ * On radix config we really don't have a limitation
+ * on real mode access. But keeping it as above works
+ * well enough.
+ */
+ ppc64_rma_size = min_t(u64, first_memblock_size, 0x40000000);
+ /*
+ * Finally limit subsequent allocations. We really don't want
+ * to limit the memblock allocations to rma_size. FIXME!! should
+ * we even limit at all ?
+ */
+ memblock_set_current_limit(first_memblock_base + first_memblock_size);
+}
+
+#ifdef CONFIG_SPARSEMEM_VMEMMAP
+int __meminit radix__vmemmap_create_mapping(unsigned long start,
+ unsigned long page_size,
+ unsigned long phys)
+{
+ /* Create a PTE encoding */
+ unsigned long flags = _PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_KERNEL_RW;
+
+ BUG_ON(radix__map_kernel_page(start, phys, __pgprot(flags), page_size));
+ return 0;
+}
+
+#ifdef CONFIG_MEMORY_HOTPLUG
+void radix__vmemmap_remove_mapping(unsigned long start, unsigned long page_size)
+{
+ /* FIXME!! intel does more. We should free page tables mapping vmemmap ? */
+}
+#endif
+#endif
+
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
+
+unsigned long radix__pmd_hugepage_update(struct mm_struct *mm, unsigned long addr,
+ pmd_t *pmdp, unsigned long clr,
+ unsigned long set)
+{
+ unsigned long old;
+
+#ifdef CONFIG_DEBUG_VM
+ WARN_ON(!radix__pmd_trans_huge(*pmdp));
+ assert_spin_locked(&mm->page_table_lock);
+#endif
+
+ old = radix__pte_update(mm, addr, (pte_t *)pmdp, clr, set, 1);
+ trace_hugepage_update(addr, old, clr, set);
+
+ return old;
+}
+
+pmd_t radix__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
+ pmd_t *pmdp)
+{
+ pmd_t pmd;
+
+ VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+ VM_BUG_ON(radix__pmd_trans_huge(*pmdp));
+ /*
+ * khugepaged calls this for normal pmd
+ */
+ pmd = *pmdp;
+ pmd_clear(pmdp);
+ /*FIXME!! Verify whether we need this kick below */
+ kick_all_cpus_sync();
+ flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
+ return pmd;
+}
+
+/*
+ * For us pgtable_t is pte_t *. In order to save the deposited
+ * page table, we consider the allocated page table as a list
+ * head. On withdraw we need to make sure we zero out the used
+ * list_head memory area.
+ */
+void radix__pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
+ pgtable_t pgtable)
+{
+ struct list_head *lh = (struct list_head *) pgtable;
+
+ assert_spin_locked(pmd_lockptr(mm, pmdp));
+
+ /* FIFO */
+ if (!pmd_huge_pte(mm, pmdp))
+ INIT_LIST_HEAD(lh);
+ else
+ list_add(lh, (struct list_head *) pmd_huge_pte(mm, pmdp));
+ pmd_huge_pte(mm, pmdp) = pgtable;
+}
+
+pgtable_t radix__pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
+{
+ pte_t *ptep;
+ pgtable_t pgtable;
+ struct list_head *lh;
+
+ assert_spin_locked(pmd_lockptr(mm, pmdp));
+
+ /* FIFO */
+ pgtable = pmd_huge_pte(mm, pmdp);
+ lh = (struct list_head *) pgtable;
+ if (list_empty(lh))
+ pmd_huge_pte(mm, pmdp) = NULL;
+ else {
+ pmd_huge_pte(mm, pmdp) = (pgtable_t) lh->next;
+ list_del(lh);
+ }
+ ptep = (pte_t *) pgtable;
+ *ptep = __pte(0);
+ ptep++;
+ *ptep = __pte(0);
+ return pgtable;
+}
+
+pmd_t radix__pmdp_huge_get_and_clear(struct mm_struct *mm,
+ unsigned long addr, pmd_t *pmdp)
+{
+ pmd_t old_pmd;
+ unsigned long old;
+
+ old = radix__pmd_hugepage_update(mm, addr, pmdp, ~0UL, 0);
+ old_pmd = __pmd(old);
+ /*
+ * Serialize against find_linux_pte_or_hugepte which does lock-less
+ * lookup in page tables with local interrupts disabled. For huge pages
+ * it casts pmd_t to pte_t. Since format of pte_t is different from
+ * pmd_t we want to prevent transit from pmd pointing to page table
+ * to pmd pointing to huge page (and back) while interrupts are disabled.
+ * We clear pmd to possibly replace it with page table pointer in
+ * different code paths. So make sure we wait for the parallel
+ * find_linux_pte_or_hugepte to finish.
+ */
+ kick_all_cpus_sync();
+ return old_pmd;
+}
+
+int radix__has_transparent_hugepage(void)
+{
+ /* For radix 2M at PMD level means thp */
+ if (mmu_psize_defs[MMU_PAGE_2M].shift == PMD_SHIFT)
+ return 1;
+ return 0;
+}
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
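
radix_dt_scan_page_sizes() above decodes the "ibm,processor-radix-AP-encodings" property one 32-bit cell at a time: the top 3 bits carry the AP value and the remaining bits the page shift. A stand-alone sketch of that decoding with made-up sample cells (the 4K/64K AP values mirror the defaults in radix_init_page_sizes() above; the 2M/1G cells are assumptions for illustration):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
        /* sample cells: (ap << 29) | shift */
        uint32_t cells[] = {
                (0u << 29) | 12,     /* 4K  */
                (5u << 29) | 16,     /* 64K */
                (1u << 29) | 21,     /* 2M  */
                (2u << 29) | 30,     /* 1G  */
        };
        unsigned int i;

        for (i = 0; i < sizeof(cells) / sizeof(cells[0]); i++) {
                uint32_t shift = cells[i] & ~(0xeu << 28); /* low 29 bits */
                uint32_t ap    = cells[i] >> 29;           /* top 3 bits  */

                printf("page shift = %u, AP = 0x%x\n", shift, ap);
        }
        return 0;
}
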
diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
index de37ff445362..88a307504b5a 100644
--- a/arch/powerpc/mm/pgtable.c
+++ b/arch/powerpc/mm/pgtable.c
@@ -38,14 +38,25 @@ static inline int is_exec_fault(void)
/* We only try to do i/d cache coherency on stuff that looks like
* reasonably "normal" PTEs. We currently require a PTE to be present
- * and we avoid _PAGE_SPECIAL and _PAGE_NO_CACHE. We also only do that
+ * and we avoid _PAGE_SPECIAL and cache inhibited pte. We also only do that
* on userspace PTEs
*/
static inline int pte_looks_normal(pte_t pte)
{
+
+#if defined(CONFIG_PPC_BOOK3S_64)
+ if ((pte_val(pte) & (_PAGE_PRESENT | _PAGE_SPECIAL)) == _PAGE_PRESENT) {
+ if (pte_ci(pte))
+ return 0;
+ if (pte_user(pte))
+ return 1;
+ }
+ return 0;
+#else
return (pte_val(pte) &
- (_PAGE_PRESENT | _PAGE_SPECIAL | _PAGE_NO_CACHE | _PAGE_USER)) ==
- (_PAGE_PRESENT | _PAGE_USER);
+ (_PAGE_PRESENT | _PAGE_SPECIAL | _PAGE_NO_CACHE | _PAGE_USER)) ==
+ (_PAGE_PRESENT | _PAGE_USER);
+#endif
}
static struct page *maybe_pte_to_page(pte_t pte)
@@ -71,6 +82,9 @@ static struct page *maybe_pte_to_page(pte_t pte)
static pte_t set_pte_filter(pte_t pte)
{
+ if (radix_enabled())
+ return pte;
+
pte = __pte(pte_val(pte) & ~_PAGE_HPTEFLAGS);
if (pte_looks_normal(pte) && !(cpu_has_feature(CPU_FTR_COHERENT_ICACHE) ||
cpu_has_feature(CPU_FTR_NOEXECUTE))) {
@@ -177,8 +191,8 @@ void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
* _PAGE_PRESENT, but we can be sure that it is not in hpte.
* Hence we can use set_pte_at for them.
*/
- VM_WARN_ON((pte_val(*ptep) & (_PAGE_PRESENT | _PAGE_USER)) ==
- (_PAGE_PRESENT | _PAGE_USER));
+ VM_WARN_ON(pte_present(*ptep) && !pte_protnone(*ptep));
+
/*
* Add the pte bit when tryint set a pte
*/
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index 347106080bb1..e009e0604a8a 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -55,104 +55,63 @@
#include "mmu_decl.h"
-#define CREATE_TRACE_POINTS
-#include <trace/events/thp.h>
-
-/* Some sanity checking */
-#if TASK_SIZE_USER64 > PGTABLE_RANGE
-#error TASK_SIZE_USER64 exceeds pagetable range
-#endif
-
#ifdef CONFIG_PPC_STD_MMU_64
#if TASK_SIZE_USER64 > (1UL << (ESID_BITS + SID_SHIFT))
#error TASK_SIZE_USER64 exceeds user VSID range
#endif
#endif
-unsigned long ioremap_bot = IOREMAP_BASE;
-
-#ifdef CONFIG_PPC_MMU_NOHASH
-static __ref void *early_alloc_pgtable(unsigned long size)
-{
- void *pt;
-
- pt = __va(memblock_alloc_base(size, size, __pa(MAX_DMA_ADDRESS)));
- memset(pt, 0, size);
-
- return pt;
-}
-#endif /* CONFIG_PPC_MMU_NOHASH */
-
+#ifdef CONFIG_PPC_BOOK3S_64
/*
- * map_kernel_page currently only called by __ioremap
- * map_kernel_page adds an entry to the ioremap page table
- * and adds an entry to the HPT, possibly bolting it
+ * partition table and process table for ISA 3.0
*/
-int map_kernel_page(unsigned long ea, unsigned long pa, unsigned long flags)
-{
- pgd_t *pgdp;
- pud_t *pudp;
- pmd_t *pmdp;
- pte_t *ptep;
-
- if (slab_is_available()) {
- pgdp = pgd_offset_k(ea);
- pudp = pud_alloc(&init_mm, pgdp, ea);
- if (!pudp)
- return -ENOMEM;
- pmdp = pmd_alloc(&init_mm, pudp, ea);
- if (!pmdp)
- return -ENOMEM;
- ptep = pte_alloc_kernel(pmdp, ea);
- if (!ptep)
- return -ENOMEM;
- set_pte_at(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT,
- __pgprot(flags)));
- } else {
-#ifdef CONFIG_PPC_MMU_NOHASH
- pgdp = pgd_offset_k(ea);
-#ifdef PUD_TABLE_SIZE
- if (pgd_none(*pgdp)) {
- pudp = early_alloc_pgtable(PUD_TABLE_SIZE);
- BUG_ON(pudp == NULL);
- pgd_populate(&init_mm, pgdp, pudp);
- }
-#endif /* PUD_TABLE_SIZE */
- pudp = pud_offset(pgdp, ea);
- if (pud_none(*pudp)) {
- pmdp = early_alloc_pgtable(PMD_TABLE_SIZE);
- BUG_ON(pmdp == NULL);
- pud_populate(&init_mm, pudp, pmdp);
- }
- pmdp = pmd_offset(pudp, ea);
- if (!pmd_present(*pmdp)) {
- ptep = early_alloc_pgtable(PAGE_SIZE);
- BUG_ON(ptep == NULL);
- pmd_populate_kernel(&init_mm, pmdp, ptep);
- }
- ptep = pte_offset_kernel(pmdp, ea);
- set_pte_at(&init_mm, ea, ptep, pfn_pte(pa >> PAGE_SHIFT,
- __pgprot(flags)));
-#else /* CONFIG_PPC_MMU_NOHASH */
- /*
- * If the mm subsystem is not fully up, we cannot create a
- * linux page table entry for this mapping. Simply bolt an
- * entry in the hardware page table.
- *
- */
- if (htab_bolt_mapping(ea, ea + PAGE_SIZE, pa, flags,
- mmu_io_psize, mmu_kernel_ssize)) {
- printk(KERN_ERR "Failed to do bolted mapping IO "
- "memory at %016lx !\n", pa);
- return -ENOMEM;
- }
-#endif /* !CONFIG_PPC_MMU_NOHASH */
- }
-
- smp_wmb();
- return 0;
-}
-
+struct prtb_entry *process_tb;
+struct patb_entry *partition_tb;
+/*
+ * page table size
+ */
+unsigned long __pte_index_size;
+EXPORT_SYMBOL(__pte_index_size);
+unsigned long __pmd_index_size;
+EXPORT_SYMBOL(__pmd_index_size);
+unsigned long __pud_index_size;
+EXPORT_SYMBOL(__pud_index_size);
+unsigned long __pgd_index_size;
+EXPORT_SYMBOL(__pgd_index_size);
+unsigned long __pmd_cache_index;
+EXPORT_SYMBOL(__pmd_cache_index);
+unsigned long __pte_table_size;
+EXPORT_SYMBOL(__pte_table_size);
+unsigned long __pmd_table_size;
+EXPORT_SYMBOL(__pmd_table_size);
+unsigned long __pud_table_size;
+EXPORT_SYMBOL(__pud_table_size);
+unsigned long __pgd_table_size;
+EXPORT_SYMBOL(__pgd_table_size);
+unsigned long __pmd_val_bits;
+EXPORT_SYMBOL(__pmd_val_bits);
+unsigned long __pud_val_bits;
+EXPORT_SYMBOL(__pud_val_bits);
+unsigned long __pgd_val_bits;
+EXPORT_SYMBOL(__pgd_val_bits);
+unsigned long __kernel_virt_start;
+EXPORT_SYMBOL(__kernel_virt_start);
+unsigned long __kernel_virt_size;
+EXPORT_SYMBOL(__kernel_virt_size);
+unsigned long __vmalloc_start;
+EXPORT_SYMBOL(__vmalloc_start);
+unsigned long __vmalloc_end;
+EXPORT_SYMBOL(__vmalloc_end);
+struct page *vmemmap;
+EXPORT_SYMBOL(vmemmap);
+unsigned long __pte_frag_nr;
+EXPORT_SYMBOL(__pte_frag_nr);
+unsigned long __pte_frag_size_shift;
+EXPORT_SYMBOL(__pte_frag_size_shift);
+unsigned long ioremap_bot;
+#else /* !CONFIG_PPC_BOOK3S_64 */
+unsigned long ioremap_bot = IOREMAP_BASE;
+#endif
/**
* __ioremap_at - Low level function to establish the page tables
@@ -167,12 +126,8 @@ void __iomem * __ioremap_at(phys_addr_t pa, void *ea, unsigned long size,
if ((flags & _PAGE_PRESENT) == 0)
flags |= pgprot_val(PAGE_KERNEL);
- /* Non-cacheable page cannot be coherent */
- if (flags & _PAGE_NO_CACHE)
- flags &= ~_PAGE_COHERENT;
-
/* We don't support the 4K PFN hack with ioremap */
- if (flags & _PAGE_4K_PFN)
+ if (flags & H_PAGE_4K_PFN)
return NULL;
WARN_ON(pa & ~PAGE_MASK);
@@ -253,7 +208,7 @@ void __iomem * __ioremap(phys_addr_t addr, unsigned long size,
void __iomem * ioremap(phys_addr_t addr, unsigned long size)
{
- unsigned long flags = _PAGE_NO_CACHE | _PAGE_GUARDED;
+ unsigned long flags = pgprot_val(pgprot_noncached(__pgprot(0)));
void *caller = __builtin_return_address(0);
if (ppc_md.ioremap)
@@ -263,7 +218,7 @@ void __iomem * ioremap(phys_addr_t addr, unsigned long size)
void __iomem * ioremap_wc(phys_addr_t addr, unsigned long size)
{
- unsigned long flags = _PAGE_NO_CACHE;
+ unsigned long flags = pgprot_val(pgprot_noncached_wc(__pgprot(0)));
void *caller = __builtin_return_address(0);
if (ppc_md.ioremap)
@@ -277,11 +232,20 @@ void __iomem * ioremap_prot(phys_addr_t addr, unsigned long size,
void *caller = __builtin_return_address(0);
/* writeable implies dirty for kernel addresses */
- if (flags & _PAGE_RW)
+ if (flags & _PAGE_WRITE)
flags |= _PAGE_DIRTY;
- /* we don't want to let _PAGE_USER and _PAGE_EXEC leak out */
- flags &= ~(_PAGE_USER | _PAGE_EXEC);
+ /* we don't want to let _PAGE_EXEC leak out */
+ flags &= ~_PAGE_EXEC;
+ /*
+ * Force kernel mapping.
+ */
+#if defined(CONFIG_PPC_BOOK3S_64)
+ flags |= _PAGE_PRIVILEGED;
+#else
+ flags &= ~_PAGE_USER;
+#endif
+
#ifdef _PAGE_BAP_SR
/* _PAGE_USER contains _PAGE_BAP_SR on BookE using the new PTE format
@@ -411,7 +375,7 @@ static pte_t *__alloc_for_cache(struct mm_struct *mm, int kernel)
return (pte_t *)ret;
}
-pte_t *page_table_alloc(struct mm_struct *mm, unsigned long vmaddr, int kernel)
+pte_t *pte_fragment_alloc(struct mm_struct *mm, unsigned long vmaddr, int kernel)
{
pte_t *pte;
@@ -421,8 +385,9 @@ pte_t *page_table_alloc(struct mm_struct *mm, unsigned long vmaddr, int kernel)
return __alloc_for_cache(mm, kernel);
}
+#endif /* CONFIG_PPC_64K_PAGES */
-void page_table_free(struct mm_struct *mm, unsigned long *table, int kernel)
+void pte_fragment_free(unsigned long *table, int kernel)
{
struct page *page = virt_to_page(table);
if (put_page_testzero(page)) {
@@ -433,15 +398,6 @@ void page_table_free(struct mm_struct *mm, unsigned long *table, int kernel)
}
#ifdef CONFIG_SMP
-static void page_table_free_rcu(void *table)
-{
- struct page *page = virt_to_page(table);
- if (put_page_testzero(page)) {
- pgtable_page_dtor(page);
- free_hot_cold_page(page, 0);
- }
-}
-
void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
{
unsigned long pgf = (unsigned long)table;
@@ -458,7 +414,7 @@ void __tlb_remove_table(void *_table)
if (!shift)
/* PTE page needs special handling */
- page_table_free_rcu(table);
+ pte_fragment_free(table, 0);
else {
BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
kmem_cache_free(PGT_CACHE(shift), table);
@@ -469,385 +425,10 @@ void pgtable_free_tlb(struct mmu_gather *tlb, void *table, int shift)
{
if (!shift) {
/* PTE page needs special handling */
- struct page *page = virt_to_page(table);
- if (put_page_testzero(page)) {
- pgtable_page_dtor(page);
- free_hot_cold_page(page, 0);
- }
+ pte_fragment_free(table, 0);
} else {
BUG_ON(shift > MAX_PGTABLE_INDEX_SIZE);
kmem_cache_free(PGT_CACHE(shift), table);
}
}
#endif
-#endif /* CONFIG_PPC_64K_PAGES */
-
-#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-
-/*
- * This is called when relaxing access to a hugepage. It's also called in the page
- * fault path when we don't hit any of the major fault cases, ie, a minor
- * update of _PAGE_ACCESSED, _PAGE_DIRTY, etc... The generic code will have
- * handled those two for us, we additionally deal with missing execute
- * permission here on some processors
- */
-int pmdp_set_access_flags(struct vm_area_struct *vma, unsigned long address,
- pmd_t *pmdp, pmd_t entry, int dirty)
-{
- int changed;
-#ifdef CONFIG_DEBUG_VM
- WARN_ON(!pmd_trans_huge(*pmdp));
- assert_spin_locked(&vma->vm_mm->page_table_lock);
-#endif
- changed = !pmd_same(*(pmdp), entry);
- if (changed) {
- __ptep_set_access_flags(pmdp_ptep(pmdp), pmd_pte(entry));
- /*
- * Since we are not supporting SW TLB systems, we don't
- * have any thing similar to flush_tlb_page_nohash()
- */
- }
- return changed;
-}
-
-unsigned long pmd_hugepage_update(struct mm_struct *mm, unsigned long addr,
- pmd_t *pmdp, unsigned long clr,
- unsigned long set)
-{
-
- unsigned long old, tmp;
-
-#ifdef CONFIG_DEBUG_VM
- WARN_ON(!pmd_trans_huge(*pmdp));
- assert_spin_locked(&mm->page_table_lock);
-#endif
-
-#ifdef PTE_ATOMIC_UPDATES
- __asm__ __volatile__(
- "1: ldarx %0,0,%3\n\
- andi. %1,%0,%6\n\
- bne- 1b \n\
- andc %1,%0,%4 \n\
- or %1,%1,%7\n\
- stdcx. %1,0,%3 \n\
- bne- 1b"
- : "=&r" (old), "=&r" (tmp), "=m" (*pmdp)
- : "r" (pmdp), "r" (clr), "m" (*pmdp), "i" (_PAGE_BUSY), "r" (set)
- : "cc" );
-#else
- old = pmd_val(*pmdp);
- *pmdp = __pmd((old & ~clr) | set);
-#endif
- trace_hugepage_update(addr, old, clr, set);
- if (old & _PAGE_HASHPTE)
- hpte_do_hugepage_flush(mm, addr, pmdp, old);
- return old;
-}
-
-pmd_t pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
- pmd_t *pmdp)
-{
- pmd_t pmd;
-
- VM_BUG_ON(address & ~HPAGE_PMD_MASK);
- VM_BUG_ON(pmd_trans_huge(*pmdp));
-
- pmd = *pmdp;
- pmd_clear(pmdp);
- /*
- * Wait for all pending hash_page to finish. This is needed
- * in case of subpage collapse. When we collapse normal pages
- * to hugepage, we first clear the pmd, then invalidate all
- * the PTE entries. The assumption here is that any low level
- * page fault will see a none pmd and take the slow path that
- * will wait on mmap_sem. But we could very well be in a
- * hash_page with local ptep pointer value. Such a hash page
- * can result in adding new HPTE entries for normal subpages.
- * That means we could be modifying the page content as we
- * copy them to a huge page. So wait for parallel hash_page
- * to finish before invalidating HPTE entries. We can do this
- * by sending an IPI to all the cpus and executing a dummy
- * function there.
- */
- kick_all_cpus_sync();
- /*
- * Now invalidate the hpte entries in the range
- * covered by pmd. This make sure we take a
- * fault and will find the pmd as none, which will
- * result in a major fault which takes mmap_sem and
- * hence wait for collapse to complete. Without this
- * the __collapse_huge_page_copy can result in copying
- * the old content.
- */
- flush_tlb_pmd_range(vma->vm_mm, &pmd, address);
- return pmd;
-}
-
-int pmdp_test_and_clear_young(struct vm_area_struct *vma,
- unsigned long address, pmd_t *pmdp)
-{
- return __pmdp_test_and_clear_young(vma->vm_mm, address, pmdp);
-}
-
-/*
- * We currently remove entries from the hashtable regardless of whether
- * the entry was young or dirty. The generic routines only flush if the
- * entry was young or dirty which is not good enough.
- *
- * We should be more intelligent about this but for the moment we override
- * these functions and force a tlb flush unconditionally
- */
-int pmdp_clear_flush_young(struct vm_area_struct *vma,
- unsigned long address, pmd_t *pmdp)
-{
- return __pmdp_test_and_clear_young(vma->vm_mm, address, pmdp);
-}
-
-/*
- * We want to put the pgtable in pmd and use pgtable for tracking
- * the base page size hptes
- */
-void pgtable_trans_huge_deposit(struct mm_struct *mm, pmd_t *pmdp,
- pgtable_t pgtable)
-{
- pgtable_t *pgtable_slot;
- assert_spin_locked(&mm->page_table_lock);
- /*
- * we store the pgtable in the second half of PMD
- */
- pgtable_slot = (pgtable_t *)pmdp + PTRS_PER_PMD;
- *pgtable_slot = pgtable;
- /*
- * expose the deposited pgtable to other cpus.
- * before we set the hugepage PTE at pmd level
- * hash fault code looks at the deposted pgtable
- * to store hash index values.
- */
- smp_wmb();
-}
-
-pgtable_t pgtable_trans_huge_withdraw(struct mm_struct *mm, pmd_t *pmdp)
-{
- pgtable_t pgtable;
- pgtable_t *pgtable_slot;
-
- assert_spin_locked(&mm->page_table_lock);
- pgtable_slot = (pgtable_t *)pmdp + PTRS_PER_PMD;
- pgtable = *pgtable_slot;
- /*
- * Once we withdraw, mark the entry NULL.
- */
- *pgtable_slot = NULL;
- /*
- * We store HPTE information in the deposited PTE fragment.
- * zero out the content on withdraw.
- */
- memset(pgtable, 0, PTE_FRAG_SIZE);
- return pgtable;
-}
-
-void pmdp_huge_split_prepare(struct vm_area_struct *vma,
- unsigned long address, pmd_t *pmdp)
-{
- VM_BUG_ON(address & ~HPAGE_PMD_MASK);
- VM_BUG_ON(REGION_ID(address) != USER_REGION_ID);
-
- /*
- * We can't mark the pmd none here, because that will cause a race
- * against exit_mmap. We need to continue mark pmd TRANS HUGE, while
- * we spilt, but at the same time we wan't rest of the ppc64 code
- * not to insert hash pte on this, because we will be modifying
- * the deposited pgtable in the caller of this function. Hence
- * clear the _PAGE_USER so that we move the fault handling to
- * higher level function and that will serialize against ptl.
- * We need to flush existing hash pte entries here even though,
- * the translation is still valid, because we will withdraw
- * pgtable_t after this.
- */
- pmd_hugepage_update(vma->vm_mm, address, pmdp, _PAGE_USER, 0);
-}
-
-
-/*
- * set a new huge pmd. We should not be called for updating
- * an existing pmd entry. That should go via pmd_hugepage_update.
- */
-void set_pmd_at(struct mm_struct *mm, unsigned long addr,
- pmd_t *pmdp, pmd_t pmd)
-{
-#ifdef CONFIG_DEBUG_VM
- WARN_ON((pmd_val(*pmdp) & (_PAGE_PRESENT | _PAGE_USER)) ==
- (_PAGE_PRESENT | _PAGE_USER));
- assert_spin_locked(&mm->page_table_lock);
- WARN_ON(!pmd_trans_huge(pmd));
-#endif
- trace_hugepage_set_pmd(addr, pmd_val(pmd));
- return set_pte_at(mm, addr, pmdp_ptep(pmdp), pmd_pte(pmd));
-}
-
-/*
- * We use this to invalidate a pmdp entry before switching from a
- * hugepte to regular pmd entry.
- */
-void pmdp_invalidate(struct vm_area_struct *vma, unsigned long address,
- pmd_t *pmdp)
-{
- pmd_hugepage_update(vma->vm_mm, address, pmdp, _PAGE_PRESENT, 0);
-
- /*
- * This ensures that generic code that rely on IRQ disabling
- * to prevent a parallel THP split work as expected.
- */
- kick_all_cpus_sync();
-}
-
-/*
- * A linux hugepage PMD was changed and the corresponding hash table entries
- * neesd to be flushed.
- */
-void hpte_do_hugepage_flush(struct mm_struct *mm, unsigned long addr,
- pmd_t *pmdp, unsigned long old_pmd)
-{
- int ssize;
- unsigned int psize;
- unsigned long vsid;
- unsigned long flags = 0;
- const struct cpumask *tmp;
-
- /* get the base page size,vsid and segment size */
-#ifdef CONFIG_DEBUG_VM
- psize = get_slice_psize(mm, addr);
- BUG_ON(psize == MMU_PAGE_16M);
-#endif
- if (old_pmd & _PAGE_COMBO)
- psize = MMU_PAGE_4K;
- else
- psize = MMU_PAGE_64K;
-
- if (!is_kernel_addr(addr)) {
- ssize = user_segment_size(addr);
- vsid = get_vsid(mm->context.id, addr, ssize);
- WARN_ON(vsid == 0);
- } else {
- vsid = get_kernel_vsid(addr, mmu_kernel_ssize);
- ssize = mmu_kernel_ssize;
- }
-
- tmp = cpumask_of(smp_processor_id());
- if (cpumask_equal(mm_cpumask(mm), tmp))
- flags |= HPTE_LOCAL_UPDATE;
-
- return flush_hash_hugepage(vsid, addr, pmdp, psize, ssize, flags);
-}
-
-static pmd_t pmd_set_protbits(pmd_t pmd, pgprot_t pgprot)
-{
- return __pmd(pmd_val(pmd) | pgprot_val(pgprot));
-}
-
-pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot)
-{
- unsigned long pmdv;
-
- pmdv = (pfn << PTE_RPN_SHIFT) & PTE_RPN_MASK;
- return pmd_set_protbits(__pmd(pmdv), pgprot);
-}
-
-pmd_t mk_pmd(struct page *page, pgprot_t pgprot)
-{
- return pfn_pmd(page_to_pfn(page), pgprot);
-}
-
-pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
-{
- unsigned long pmdv;
-
- pmdv = pmd_val(pmd);
- pmdv &= _HPAGE_CHG_MASK;
- return pmd_set_protbits(__pmd(pmdv), newprot);
-}
-
-/*
- * This is called at the end of handling a user page fault, when the
- * fault has been handled by updating a HUGE PMD entry in the linux page tables.
- * We use it to preload an HPTE into the hash table corresponding to
- * the updated linux HUGE PMD entry.
- */
-void update_mmu_cache_pmd(struct vm_area_struct *vma, unsigned long addr,
- pmd_t *pmd)
-{
- return;
-}
-
-pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
- unsigned long addr, pmd_t *pmdp)
-{
- pmd_t old_pmd;
- pgtable_t pgtable;
- unsigned long old;
- pgtable_t *pgtable_slot;
-
- old = pmd_hugepage_update(mm, addr, pmdp, ~0UL, 0);
- old_pmd = __pmd(old);
- /*
- * We have pmd == none and we are holding page_table_lock.
- * So we can safely go and clear the pgtable hash
- * index info.
- */
- pgtable_slot = (pgtable_t *)pmdp + PTRS_PER_PMD;
- pgtable = *pgtable_slot;
- /*
- * Let's zero out old valid and hash index details
- * hash fault look at them.
- */
- memset(pgtable, 0, PTE_FRAG_SIZE);
- /*
- * Serialize against find_linux_pte_or_hugepte which does lock-less
- * lookup in page tables with local interrupts disabled. For huge pages
- * it casts pmd_t to pte_t. Since format of pte_t is different from
- * pmd_t we want to prevent transit from pmd pointing to page table
- * to pmd pointing to huge page (and back) while interrupts are disabled.
- * We clear pmd to possibly replace it with page table pointer in
- * different code paths. So make sure we wait for the parallel
- * find_linux_pte_or_hugepage to finish.
- */
- kick_all_cpus_sync();
- return old_pmd;
-}
-
-int has_transparent_hugepage(void)
-{
-
- BUILD_BUG_ON_MSG((PMD_SHIFT - PAGE_SHIFT) >= MAX_ORDER,
- "hugepages can't be allocated by the buddy allocator");
-
- BUILD_BUG_ON_MSG((PMD_SHIFT - PAGE_SHIFT) < 2,
- "We need more than 2 pages to do deferred thp split");
-
- if (!mmu_has_feature(MMU_FTR_16M_PAGE))
- return 0;
- /*
- * We support THP only if PMD_SIZE is 16MB.
- */
- if (mmu_psize_defs[MMU_PAGE_16M].shift != PMD_SHIFT)
- return 0;
- /*
- * We need to make sure that we support 16MB hugepage in a segement
- * with base page size 64K or 4K. We only enable THP with a PAGE_SIZE
- * of 64K.
- */
- /*
- * If we have 64K HPTE, we will be using that by default
- */
- if (mmu_psize_defs[MMU_PAGE_64K].shift &&
- (mmu_psize_defs[MMU_PAGE_64K].penc[MMU_PAGE_16M] == -1))
- return 0;
- /*
- * Ok we only have 4K HPTE
- */
- if (mmu_psize_defs[MMU_PAGE_4K].penc[MMU_PAGE_16M] == -1)
- return 0;
-
- return 1;
-}
-#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
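
The block of __pte_index_size/__pmd_table_size/... exports added above turns what used to be compile-time page-table geometry into run-time data that the hash or radix early-init code fills in (radix__early_init_mmu() earlier in this series does exactly that). A minimal sketch of the idea; the numeric values are illustrative, not the real RADIX_* or hash constants:

#include <stdio.h>

/* formerly compile-time constants, now globals filled in by MMU early init */
static unsigned long pte_index_size;
static unsigned long pmd_index_size;

static void early_init_mmu_sketch(int radix)
{
        if (radix) {
                pte_index_size = 5;      /* e.g. a radix geometry, made up */
                pmd_index_size = 9;
        } else {
                pte_index_size = 8;      /* e.g. a hash geometry, made up  */
                pmd_index_size = 10;
        }
}

int main(void)
{
        early_init_mmu_sketch(1);
        printf("PTRS_PER_PTE = %lu\n", 1UL << pte_index_size);
        printf("PTRS_PER_PMD = %lu\n", 1UL << pmd_index_size);
        return 0;
}
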
diff --git a/arch/powerpc/mm/slb.c b/arch/powerpc/mm/slb.c
index 825b6873391f..48fc28bab544 100644
--- a/arch/powerpc/mm/slb.c
+++ b/arch/powerpc/mm/slb.c
@@ -32,7 +32,6 @@ enum slb_index {
};
extern void slb_allocate_realmode(unsigned long ea);
-extern void slb_allocate_user(unsigned long ea);
static void slb_allocate(unsigned long ea)
{
diff --git a/arch/powerpc/mm/slb_low.S b/arch/powerpc/mm/slb_low.S
index 736d18b3cefd..dfdb90cb4403 100644
--- a/arch/powerpc/mm/slb_low.S
+++ b/arch/powerpc/mm/slb_low.S
@@ -35,7 +35,7 @@ _GLOBAL(slb_allocate_realmode)
* check for bad kernel/user address
* (ea & ~REGION_MASK) >= PGTABLE_RANGE
*/
- rldicr. r9,r3,4,(63 - PGTABLE_EADDR_SIZE - 4)
+ rldicr. r9,r3,4,(63 - H_PGTABLE_EADDR_SIZE - 4)
bne- 8f
srdi r9,r3,60 /* get region */
@@ -91,7 +91,7 @@ slb_miss_kernel_load_vmemmap:
* can be demoted from 64K -> 4K dynamically on some machines
*/
clrldi r11,r10,48
- cmpldi r11,(VMALLOC_SIZE >> 28) - 1
+ cmpldi r11,(H_VMALLOC_SIZE >> 28) - 1
bgt 5f
lhz r11,PACAVMALLOCSLLP(r13)
b 6f
@@ -179,56 +179,6 @@ END_MMU_FTR_SECTION_IFSET(MMU_FTR_1T_SEGMENT)
li r11,SLB_VSID_USER /* flags don't much matter */
b slb_finish_load
-#ifdef __DISABLED__
-
-/* void slb_allocate_user(unsigned long ea);
- *
- * Create an SLB entry for the given EA (user or kernel).
- * r3 = faulting address, r13 = PACA
- * r9, r10, r11 are clobbered by this function
- * No other registers are examined or changed.
- *
- * It is called with translation enabled in order to be able to walk the
- * page tables. This is not currently used.
- */
-_GLOBAL(slb_allocate_user)
- /* r3 = faulting address */
- srdi r10,r3,28 /* get esid */
-
- crset 4*cr7+lt /* set "user" flag for later */
-
- /* check if we fit in the range covered by the pagetables*/
- srdi. r9,r3,PGTABLE_EADDR_SIZE
- crnot 4*cr0+eq,4*cr0+eq
- beqlr
-
- /* now we need to get to the page tables in order to get the page
- * size encoding from the PMD. In the future, we'll be able to deal
- * with 1T segments too by getting the encoding from the PGD instead
- */
- ld r9,PACAPGDIR(r13)
- cmpldi cr0,r9,0
- beqlr
- rlwinm r11,r10,8,25,28
- ldx r9,r9,r11 /* get pgd_t */
- cmpldi cr0,r9,0
- beqlr
- rlwinm r11,r10,3,17,28
- ldx r9,r9,r11 /* get pmd_t */
- cmpldi cr0,r9,0
- beqlr
-
- /* build vsid flags */
- andi. r11,r9,SLB_VSID_LLP
- ori r11,r11,SLB_VSID_USER
-
- /* get context to calculate proto-VSID */
- ld r9,PACACONTEXTID(r13)
- /* fall through slb_finish_load */
-
-#endif /* __DISABLED__ */
-
-
/*
* Finish loading of an SLB entry and return
*
diff --git a/arch/powerpc/mm/slice.c b/arch/powerpc/mm/slice.c
index 42954f0b47ac..2b27458902ee 100644
--- a/arch/powerpc/mm/slice.c
+++ b/arch/powerpc/mm/slice.c
@@ -37,8 +37,8 @@
#include <asm/hugetlb.h>
/* some sanity checks */
-#if (PGTABLE_RANGE >> 43) > SLICE_MASK_SIZE
-#error PGTABLE_RANGE exceeds slice_mask high_slices size
+#if (H_PGTABLE_RANGE >> 43) > SLICE_MASK_SIZE
+#error H_PGTABLE_RANGE exceeds slice_mask high_slices size
#endif
static DEFINE_SPINLOCK(slice_convert_lock);
@@ -395,6 +395,7 @@ unsigned long slice_get_unmapped_area(unsigned long addr, unsigned long len,
/* Sanity checks */
BUG_ON(mm->task_size == 0);
+ VM_BUG_ON(radix_enabled());
slice_dbg("slice_get_unmapped_area(mm=%p, psize=%d...\n", mm, psize);
slice_dbg(" addr=%lx, len=%lx, flags=%lx, topdown=%d\n",
@@ -568,6 +569,16 @@ unsigned int get_slice_psize(struct mm_struct *mm, unsigned long addr)
unsigned char *hpsizes;
int index, mask_index;
+ /*
+ * Radix doesn't use slices, but can get enabled along with MMU_SLICE
+ */
+ if (radix_enabled()) {
+#ifdef CONFIG_PPC_64K_PAGES
+ return MMU_PAGE_64K;
+#else
+ return MMU_PAGE_4K;
+#endif
+ }
if (addr < SLICE_LOW_TOP) {
u64 lpsizes;
lpsizes = mm->context.low_slices_psize;
@@ -605,6 +616,7 @@ void slice_set_user_psize(struct mm_struct *mm, unsigned int psize)
slice_dbg("slice_set_user_psize(mm=%p, psize=%d)\n", mm, psize);
+ VM_BUG_ON(radix_enabled());
spin_lock_irqsave(&slice_convert_lock, flags);
old_psize = mm->context.user_psize;
@@ -649,6 +661,7 @@ void slice_set_range_psize(struct mm_struct *mm, unsigned long start,
{
struct slice_mask mask = slice_range_to_mask(start, len);
+ VM_BUG_ON(radix_enabled());
slice_convert(mm, mask, psize);
}
@@ -678,6 +691,9 @@ int is_hugepage_only_range(struct mm_struct *mm, unsigned long addr,
struct slice_mask mask, available;
unsigned int psize = mm->context.user_psize;
+ if (radix_enabled())
+ return 0;
+
mask = slice_range_to_mask(addr, len);
available = slice_mask_for_size(mm, psize);
#ifdef CONFIG_PPC_64K_PAGES
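
The radix_enabled() guards added to the slice code above all follow the same shape: when the radix MMU is active, the per-slice page-size bookkeeping is bypassed and a fixed base page size is reported. A tiny stand-alone sketch of that guard pattern (names and sizes are illustrative):

#include <stdio.h>

enum page_size { PSIZE_4K, PSIZE_64K };

static int radix_active;                     /* stand-in for radix_enabled() */

static enum page_size slice_psize_lookup(unsigned long addr)
{
        (void)addr;
        return PSIZE_4K;                     /* pretend the slice map said 4K */
}

static enum page_size get_psize(unsigned long addr)
{
        if (radix_active)                    /* radix: no slices, fixed size */
                return PSIZE_64K;
        return slice_psize_lookup(addr);
}

int main(void)
{
        radix_active = 1;
        printf("psize index = %d\n", get_psize(0x10000000UL));
        return 0;
}
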
diff --git a/arch/powerpc/mm/tlb-radix.c b/arch/powerpc/mm/tlb-radix.c
new file mode 100644
index 000000000000..0fdaf93a3e09
--- /dev/null
+++ b/arch/powerpc/mm/tlb-radix.c
@@ -0,0 +1,251 @@
+/*
+ * TLB flush routines for radix kernels.
+ *
+ * Copyright 2015-2016, Aneesh Kumar K.V, IBM Corporation.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/mm.h>
+#include <linux/hugetlb.h>
+#include <linux/memblock.h>
+
+#include <asm/tlb.h>
+#include <asm/tlbflush.h>
+
+static DEFINE_RAW_SPINLOCK(native_tlbie_lock);
+
+static inline void __tlbiel_pid(unsigned long pid, int set)
+{
+ unsigned long rb, rs, ric, prs, r;
+
+ rb = PPC_BIT(53); /* IS = 1 */
+ rb |= set << PPC_BITLSHIFT(51);
+ rs = ((unsigned long)pid) << PPC_BITLSHIFT(31);
+ prs = 1; /* process scoped */
+ r = 1; /* radix format */
+ ric = 2; /* invalidate all the caches */
+
+ asm volatile("ptesync": : :"memory");
+ asm volatile(".long 0x7c000224 | (%0 << 11) | (%1 << 16) |"
+ "(%2 << 17) | (%3 << 18) | (%4 << 21)"
+ : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(rs) : "memory");
+ asm volatile("ptesync": : :"memory");
+}
+
+/*
+ * We use 128 sets in radix mode and 256 sets in hpt mode.
+ */
+static inline void _tlbiel_pid(unsigned long pid)
+{
+ int set;
+
+ for (set = 0; set < POWER9_TLB_SETS_RADIX ; set++) {
+ __tlbiel_pid(pid, set);
+ }
+ return;
+}
+
+static inline void _tlbie_pid(unsigned long pid)
+{
+ unsigned long rb, rs, ric, prs, r;
+
+ rb = PPC_BIT(53); /* IS = 1 */
+ rs = pid << PPC_BITLSHIFT(31);
+ prs = 1; /* process scoped */
+ r = 1; /* radix format */
+ ric = 2; /* invalidate all the caches */
+
+ asm volatile("ptesync": : :"memory");
+ asm volatile(".long 0x7c000264 | (%0 << 11) | (%1 << 16) |"
+ "(%2 << 17) | (%3 << 18) | (%4 << 21)"
+ : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(rs) : "memory");
+ asm volatile("eieio; tlbsync; ptesync": : :"memory");
+}
+
+static inline void _tlbiel_va(unsigned long va, unsigned long pid,
+ unsigned long ap)
+{
+ unsigned long rb, rs, ric, prs, r;
+
+ rb = va & ~(PPC_BITMASK(52, 63));
+ rb |= ap << PPC_BITLSHIFT(58);
+ rs = pid << PPC_BITLSHIFT(31);
+ prs = 1; /* process scoped */
+ r = 1; /* radix format */
+ ric = 0; /* no cluster flush yet */
+
+ asm volatile("ptesync": : :"memory");
+ asm volatile(".long 0x7c000224 | (%0 << 11) | (%1 << 16) |"
+ "(%2 << 17) | (%3 << 18) | (%4 << 21)"
+ : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(rs) : "memory");
+ asm volatile("ptesync": : :"memory");
+}
+
+static inline void _tlbie_va(unsigned long va, unsigned long pid,
+ unsigned long ap)
+{
+ unsigned long rb, rs, ric, prs, r;
+
+ rb = va & ~(PPC_BITMASK(52, 63));
+ rb |= ap << PPC_BITLSHIFT(58);
+ rs = pid << PPC_BITLSHIFT(31);
+ prs = 1; /* process scoped */
+ r = 1; /* radix format */
+ ric = 0; /* no cluster flush yet */
+
+ asm volatile("ptesync": : :"memory");
+ asm volatile(".long 0x7c000264 | (%0 << 11) | (%1 << 16) |"
+ "(%2 << 17) | (%3 << 18) | (%4 << 21)"
+ : : "r"(rb), "i"(r), "i"(prs), "i"(ric), "r"(rs) : "memory");
+ asm volatile("eieio; tlbsync; ptesync": : :"memory");
+}
+
+/*
+ * Base TLB flushing operations:
+ *
+ * - flush_tlb_mm(mm) flushes the specified mm context TLB's
+ * - flush_tlb_page(vma, vmaddr) flushes one page
+ * - flush_tlb_range(vma, start, end) flushes a range of pages
+ * - flush_tlb_kernel_range(start, end) flushes kernel pages
+ *
+ * - local_* variants of page and mm only apply to the current
+ * processor
+ */
+void radix__local_flush_tlb_mm(struct mm_struct *mm)
+{
+ unsigned int pid;
+
+ preempt_disable();
+ pid = mm->context.id;
+ if (pid != MMU_NO_CONTEXT)
+ _tlbiel_pid(pid);
+ preempt_enable();
+}
+EXPORT_SYMBOL(radix__local_flush_tlb_mm);
+
+void radix___local_flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
+ unsigned long ap, int nid)
+{
+ unsigned int pid;
+
+ preempt_disable();
+ pid = mm ? mm->context.id : 0;
+ if (pid != MMU_NO_CONTEXT)
+ _tlbiel_va(vmaddr, pid, ap);
+ preempt_enable();
+}
+
+void radix__local_flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
+{
+#ifdef CONFIG_HUGETLB_PAGE
+ /* need the return fix for nohash.c */
+ if (vma && is_vm_hugetlb_page(vma))
+ return __local_flush_hugetlb_page(vma, vmaddr);
+#endif
+ radix___local_flush_tlb_page(vma ? vma->vm_mm : NULL, vmaddr,
+ mmu_get_ap(mmu_virtual_psize), 0);
+}
+EXPORT_SYMBOL(radix__local_flush_tlb_page);
+
+#ifdef CONFIG_SMP
+static int mm_is_core_local(struct mm_struct *mm)
+{
+ return cpumask_subset(mm_cpumask(mm),
+ topology_sibling_cpumask(smp_processor_id()));
+}
+
+void radix__flush_tlb_mm(struct mm_struct *mm)
+{
+ unsigned int pid;
+
+ preempt_disable();
+ pid = mm->context.id;
+ if (unlikely(pid == MMU_NO_CONTEXT))
+ goto no_context;
+
+ if (!mm_is_core_local(mm)) {
+ int lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);
+
+ if (lock_tlbie)
+ raw_spin_lock(&native_tlbie_lock);
+ _tlbie_pid(pid);
+ if (lock_tlbie)
+ raw_spin_unlock(&native_tlbie_lock);
+ } else
+ _tlbiel_pid(pid);
+no_context:
+ preempt_enable();
+}
+EXPORT_SYMBOL(radix__flush_tlb_mm);
+
+void radix___flush_tlb_page(struct mm_struct *mm, unsigned long vmaddr,
+ unsigned long ap, int nid)
+{
+ unsigned int pid;
+
+ preempt_disable();
+ pid = mm ? mm->context.id : 0;
+ if (unlikely(pid == MMU_NO_CONTEXT))
+ goto bail;
+ if (!mm_is_core_local(mm)) {
+ int lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);
+
+ if (lock_tlbie)
+ raw_spin_lock(&native_tlbie_lock);
+ _tlbie_va(vmaddr, pid, ap);
+ if (lock_tlbie)
+ raw_spin_unlock(&native_tlbie_lock);
+ } else
+ _tlbiel_va(vmaddr, pid, ap);
+bail:
+ preempt_enable();
+}
+
+void radix__flush_tlb_page(struct vm_area_struct *vma, unsigned long vmaddr)
+{
+#ifdef CONFIG_HUGETLB_PAGE
+ if (vma && is_vm_hugetlb_page(vma))
+ return flush_hugetlb_page(vma, vmaddr);
+#endif
+ radix___flush_tlb_page(vma ? vma->vm_mm : NULL, vmaddr,
+ mmu_get_ap(mmu_virtual_psize), 0);
+}
+EXPORT_SYMBOL(radix__flush_tlb_page);
+
+#endif /* CONFIG_SMP */
+
+void radix__flush_tlb_kernel_range(unsigned long start, unsigned long end)
+{
+ int lock_tlbie = !mmu_has_feature(MMU_FTR_LOCKLESS_TLBIE);
+
+ if (lock_tlbie)
+ raw_spin_lock(&native_tlbie_lock);
+ _tlbie_pid(0);
+ if (lock_tlbie)
+ raw_spin_unlock(&native_tlbie_lock);
+}
+EXPORT_SYMBOL(radix__flush_tlb_kernel_range);
+
+/*
+ * Currently, for range flushing, we just do a full mm flush, because
+ * we use this in code paths where we don't track the page size.
+ */
+void radix__flush_tlb_range(struct vm_area_struct *vma, unsigned long start,
+ unsigned long end)
+
+{
+ struct mm_struct *mm = vma->vm_mm;
+ radix__flush_tlb_mm(mm);
+}
+EXPORT_SYMBOL(radix__flush_tlb_range);
+
+
+void radix__tlb_flush(struct mmu_gather *tlb)
+{
+ struct mm_struct *mm = tlb->mm;
+ radix__flush_tlb_mm(mm);
+}
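The ".long 0x7c000224 | ..." and ".long 0x7c000264 | ..." sequences above hand-encode the ISA 3.0 tlbiel and tlbie instructions, since assemblers of the day do not accept the new operand forms. As a reading aid only (the helper below is made up for this note, it is not part of the patch), the field packing amounts to:

/*
 * Illustration of the raw-opcode packing used above.  rb_gpr and rs_gpr
 * are the GPR numbers substituted by the "r" constraints; r, prs and ric
 * are the immediate mode bits.
 */
static inline unsigned int tlbiel_encode(unsigned int rb_gpr, unsigned int rs_gpr,
					 unsigned int ric, unsigned int prs,
					 unsigned int r)
{
	return 0x7c000224u |		/* tlbiel opcode; 0x7c000264 for tlbie */
	       (rb_gpr << 11) |		/* RB: IS/set/EA bits live in this GPR */
	       (r      << 16) |		/* R = 1: radix format */
	       (prs    << 17) |		/* PRS = 1: process scoped */
	       (ric    << 18) |		/* RIC: which caches to invalidate */
	       (rs_gpr << 21);		/* RS: the PID lives in this GPR */
}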
diff --git a/arch/powerpc/mm/tlb_hash64.c b/arch/powerpc/mm/tlb_hash64.c
index f7b80391bee7..4517aa43a8b1 100644
--- a/arch/powerpc/mm/tlb_hash64.c
+++ b/arch/powerpc/mm/tlb_hash64.c
@@ -155,7 +155,7 @@ void __flush_tlb_pending(struct ppc64_tlb_batch *batch)
batch->index = 0;
}
-void tlb_flush(struct mmu_gather *tlb)
+void hash__tlb_flush(struct mmu_gather *tlb)
{
struct ppc64_tlb_batch *tlbbatch = &get_cpu_var(ppc64_tlb_batch);
@@ -218,7 +218,7 @@ void __flush_hash_table_range(struct mm_struct *mm, unsigned long start,
pte = pte_val(*ptep);
if (is_thp)
trace_hugepage_invalidate(start, pte);
- if (!(pte & _PAGE_HASHPTE))
+ if (!(pte & H_PAGE_HASHPTE))
continue;
if (unlikely(is_thp))
hpte_do_hugepage_flush(mm, start, (pmd_t *)ptep, pte);
@@ -248,7 +248,7 @@ void flush_tlb_pmd_range(struct mm_struct *mm, pmd_t *pmd, unsigned long addr)
start_pte = pte_offset_map(pmd, addr);
for (pte = start_pte; pte < start_pte + PTRS_PER_PTE; pte++) {
unsigned long pteval = pte_val(*pte);
- if (pteval & _PAGE_HASHPTE)
+ if (pteval & H_PAGE_HASHPTE)
hpte_need_flush(mm, addr, pte, pteval, 0);
addr += PAGE_SIZE;
}
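With tlb_flush() renamed to hash__tlb_flush(), the generic entry point is expected to dispatch on the active MMU mode, matching the radix_enabled() checks added elsewhere in this series. A sketch of that dispatch (assumed wrapper, not quoted from the headers):

/* Sketch: runtime selection between the hash and radix back ends. */
static inline void tlb_flush(struct mmu_gather *tlb)
{
	if (radix_enabled())
		radix__tlb_flush(tlb);
	else
		hash__tlb_flush(tlb);
}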
diff --git a/arch/powerpc/perf/Makefile b/arch/powerpc/perf/Makefile
index f9c083a5652a..77b6394a7c50 100644
--- a/arch/powerpc/perf/Makefile
+++ b/arch/powerpc/perf/Makefile
@@ -1,6 +1,6 @@
subdir-ccflags-$(CONFIG_PPC_WERROR) := -Werror
-obj-$(CONFIG_PERF_EVENTS) += callchain.o
+obj-$(CONFIG_PERF_EVENTS) += callchain.o perf_regs.o
obj-$(CONFIG_PPC_PERF_CTRS) += core-book3s.o bhrb.o
obj64-$(CONFIG_PPC_PERF_CTRS) += power4-pmu.o ppc970-pmu.o power5-pmu.o \
diff --git a/arch/powerpc/perf/callchain.c b/arch/powerpc/perf/callchain.c
index e04a6752b399..0fc26714780a 100644
--- a/arch/powerpc/perf/callchain.c
+++ b/arch/powerpc/perf/callchain.c
@@ -47,7 +47,7 @@ static int valid_next_sp(unsigned long sp, unsigned long prev_sp)
}
void
-perf_callchain_kernel(struct perf_callchain_entry *entry, struct pt_regs *regs)
+perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
{
unsigned long sp, next_sp;
unsigned long next_ip;
@@ -76,7 +76,7 @@ perf_callchain_kernel(struct perf_callchain_entry *entry, struct pt_regs *regs)
next_ip = regs->nip;
lr = regs->link;
level = 0;
- perf_callchain_store(entry, PERF_CONTEXT_KERNEL);
+ perf_callchain_store_context(entry, PERF_CONTEXT_KERNEL);
} else {
if (level == 0)
@@ -137,7 +137,7 @@ static int read_user_stack_slow(void __user *ptr, void *buf, int nb)
offset = addr & ((1UL << shift) - 1);
pte = READ_ONCE(*ptep);
- if (!pte_present(pte) || !(pte_val(pte) & _PAGE_USER))
+ if (!pte_present(pte) || !pte_user(pte))
goto err_out;
pfn = pte_pfn(pte);
if (!page_is_ram(pfn))
@@ -232,7 +232,7 @@ static int sane_signal_64_frame(unsigned long sp)
puc == (unsigned long) &sf->uc;
}
-static void perf_callchain_user_64(struct perf_callchain_entry *entry,
+static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
struct pt_regs *regs)
{
unsigned long sp, next_sp;
@@ -247,7 +247,7 @@ static void perf_callchain_user_64(struct perf_callchain_entry *entry,
sp = regs->gpr[1];
perf_callchain_store(entry, next_ip);
- while (entry->nr < PERF_MAX_STACK_DEPTH) {
+ while (entry->nr < entry->max_stack) {
fp = (unsigned long __user *) sp;
if (!valid_user_sp(sp, 1) || read_user_stack_64(fp, &next_sp))
return;
@@ -274,7 +274,7 @@ static void perf_callchain_user_64(struct perf_callchain_entry *entry,
read_user_stack_64(&uregs[PT_R1], &sp))
return;
level = 0;
- perf_callchain_store(entry, PERF_CONTEXT_USER);
+ perf_callchain_store_context(entry, PERF_CONTEXT_USER);
perf_callchain_store(entry, next_ip);
continue;
}
@@ -319,7 +319,7 @@ static int read_user_stack_32(unsigned int __user *ptr, unsigned int *ret)
return rc;
}
-static inline void perf_callchain_user_64(struct perf_callchain_entry *entry,
+static inline void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
struct pt_regs *regs)
{
}
@@ -439,7 +439,7 @@ static unsigned int __user *signal_frame_32_regs(unsigned int sp,
return mctx->mc_gregs;
}
-static void perf_callchain_user_32(struct perf_callchain_entry *entry,
+static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
struct pt_regs *regs)
{
unsigned int sp, next_sp;
@@ -453,7 +453,7 @@ static void perf_callchain_user_32(struct perf_callchain_entry *entry,
sp = regs->gpr[1];
perf_callchain_store(entry, next_ip);
- while (entry->nr < PERF_MAX_STACK_DEPTH) {
+ while (entry->nr < entry->max_stack) {
fp = (unsigned int __user *) (unsigned long) sp;
if (!valid_user_sp(sp, 0) || read_user_stack_32(fp, &next_sp))
return;
@@ -473,7 +473,7 @@ static void perf_callchain_user_32(struct perf_callchain_entry *entry,
read_user_stack_32(&uregs[PT_R1], &sp))
return;
level = 0;
- perf_callchain_store(entry, PERF_CONTEXT_USER);
+ perf_callchain_store_context(entry, PERF_CONTEXT_USER);
perf_callchain_store(entry, next_ip);
continue;
}
@@ -487,7 +487,7 @@ static void perf_callchain_user_32(struct perf_callchain_entry *entry,
}
void
-perf_callchain_user(struct perf_callchain_entry *entry, struct pt_regs *regs)
+perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
{
if (current_is_64bit())
perf_callchain_user_64(entry, regs);
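The perf_callchain_entry_ctx conversion replaces the global PERF_MAX_STACK_DEPTH cap with the per-event entry->max_stack, so the 32-bit and 64-bit walkers share the same bounded loop shape. Reduced to essentials (sketch only; read_user_stack stands in for the width-specific helpers above):

	/* Sketch of the bounded user stack walk. */
	while (entry->nr < entry->max_stack) {
		if (read_user_stack(fp, &next_sp))	/* stop on a bad frame */
			return;
		perf_callchain_store(entry, next_ip);
		sp = next_sp;
	}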
diff --git a/arch/powerpc/perf/perf_regs.c b/arch/powerpc/perf/perf_regs.c
new file mode 100644
index 000000000000..d24a8a3668fa
--- /dev/null
+++ b/arch/powerpc/perf/perf_regs.c
@@ -0,0 +1,104 @@
+/*
+ * Copyright 2016 Anju T, IBM Corporation.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/sched.h>
+#include <linux/perf_event.h>
+#include <linux/bug.h>
+#include <linux/stddef.h>
+#include <asm/ptrace.h>
+#include <asm/perf_regs.h>
+
+#define PT_REGS_OFFSET(id, r) [id] = offsetof(struct pt_regs, r)
+
+#define REG_RESERVED (~((1ULL << PERF_REG_POWERPC_MAX) - 1))
+
+static unsigned int pt_regs_offset[PERF_REG_POWERPC_MAX] = {
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R0, gpr[0]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R1, gpr[1]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R2, gpr[2]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R3, gpr[3]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R4, gpr[4]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R5, gpr[5]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R6, gpr[6]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R7, gpr[7]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R8, gpr[8]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R9, gpr[9]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R10, gpr[10]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R11, gpr[11]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R12, gpr[12]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R13, gpr[13]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R14, gpr[14]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R15, gpr[15]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R16, gpr[16]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R17, gpr[17]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R18, gpr[18]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R19, gpr[19]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R20, gpr[20]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R21, gpr[21]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R22, gpr[22]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R23, gpr[23]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R24, gpr[24]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R25, gpr[25]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R26, gpr[26]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R27, gpr[27]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R28, gpr[28]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R29, gpr[29]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R30, gpr[30]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_R31, gpr[31]),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_NIP, nip),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_MSR, msr),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_ORIG_R3, orig_gpr3),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_CTR, ctr),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_LINK, link),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_XER, xer),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_CCR, ccr),
+#ifdef CONFIG_PPC64
+ PT_REGS_OFFSET(PERF_REG_POWERPC_SOFTE, softe),
+#else
+ PT_REGS_OFFSET(PERF_REG_POWERPC_SOFTE, mq),
+#endif
+ PT_REGS_OFFSET(PERF_REG_POWERPC_TRAP, trap),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_DAR, dar),
+ PT_REGS_OFFSET(PERF_REG_POWERPC_DSISR, dsisr),
+};
+
+u64 perf_reg_value(struct pt_regs *regs, int idx)
+{
+ if (WARN_ON_ONCE(idx >= PERF_REG_POWERPC_MAX))
+ return 0;
+
+ return regs_get_register(regs, pt_regs_offset[idx]);
+}
+
+int perf_reg_validate(u64 mask)
+{
+ if (!mask || mask & REG_RESERVED)
+ return -EINVAL;
+ return 0;
+}
+
+u64 perf_reg_abi(struct task_struct *task)
+{
+#ifdef CONFIG_PPC64
+ if (!test_tsk_thread_flag(task, TIF_32BIT))
+ return PERF_SAMPLE_REGS_ABI_64;
+ else
+#endif
+ return PERF_SAMPLE_REGS_ABI_32;
+}
+
+void perf_get_regs_user(struct perf_regs *regs_user,
+ struct pt_regs *regs,
+ struct pt_regs *regs_user_copy)
+{
+ regs_user->regs = task_pt_regs(current);
+ regs_user->abi = perf_reg_abi(current);
+}
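perf_reg_value() maps a PERF_REG_POWERPC_* index to a byte offset into struct pt_regs and reads it through regs_get_register(). A hypothetical caller (not part of the patch) that pulls two registers out of a sample would look like:

/* Hypothetical example of consuming the offset table above. */
static void sample_ip_and_lr(struct pt_regs *regs, u64 *ip, u64 *lr)
{
	*ip = perf_reg_value(regs, PERF_REG_POWERPC_NIP);	/* offsetof(struct pt_regs, nip) */
	*lr = perf_reg_value(regs, PERF_REG_POWERPC_LINK);	/* offsetof(struct pt_regs, link) */
}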
diff --git a/arch/powerpc/perf/power8-events-list.h b/arch/powerpc/perf/power8-events-list.h
index 741b77edd03e..3a2e6e8ebb92 100644
--- a/arch/powerpc/perf/power8-events-list.h
+++ b/arch/powerpc/perf/power8-events-list.h
@@ -49,3 +49,43 @@ EVENT(PM_L3_PREF_ALL, 0x4e052)
EVENT(PM_DTLB_MISS, 0x300fc)
/* ITLB Reloaded */
EVENT(PM_ITLB_MISS, 0x400fc)
+/* Run_Instructions */
+EVENT(PM_RUN_INST_CMPL, 0x500fa)
+/* Alternate event code for PM_RUN_INST_CMPL */
+EVENT(PM_RUN_INST_CMPL_ALT, 0x400fa)
+/* Run_cycles */
+EVENT(PM_RUN_CYC, 0x600f4)
+/* Alternate event code for Run_cycles */
+EVENT(PM_RUN_CYC_ALT, 0x200f4)
+/* Marked store completed */
+EVENT(PM_MRK_ST_CMPL, 0x10134)
+/* Alternate event code for Marked store completed */
+EVENT(PM_MRK_ST_CMPL_ALT, 0x301e2)
+/* Marked two path branch */
+EVENT(PM_BR_MRK_2PATH, 0x10138)
+/* Alternate event code for PM_BR_MRK_2PATH */
+EVENT(PM_BR_MRK_2PATH_ALT, 0x40138)
+/* L3 castouts in Mepf state */
+EVENT(PM_L3_CO_MEPF, 0x18082)
+/* Alternate event code for PM_L3_CO_MEPF */
+EVENT(PM_L3_CO_MEPF_ALT, 0x3e05e)
+/* Data cache was reloaded from a location other than L2 due to a marked load */
+EVENT(PM_MRK_DATA_FROM_L2MISS, 0x1d14e)
+/* Alternate event code for PM_MRK_DATA_FROM_L2MISS */
+EVENT(PM_MRK_DATA_FROM_L2MISS_ALT, 0x401e8)
+/* Alternate event code for PM_CMPLU_STALL */
+EVENT(PM_CMPLU_STALL_ALT, 0x1e054)
+/* Two path branch */
+EVENT(PM_BR_2PATH, 0x20036)
+/* Alternate event code for PM_BR_2PATH */
+EVENT(PM_BR_2PATH_ALT, 0x40036)
+/* # PPC Dispatched */
+EVENT(PM_INST_DISP, 0x200f2)
+/* Alternate event code for PM_INST_DISP */
+EVENT(PM_INST_DISP_ALT, 0x300f2)
+/* Marked filter Match */
+EVENT(PM_MRK_FILT_MATCH, 0x2013c)
+/* Alternate event code for PM_MRK_FILT_MATCH */
+EVENT(PM_MRK_FILT_MATCH_ALT, 0x3012e)
+/* Alternate event code for PM_LD_MISS_L1 */
+EVENT(PM_LD_MISS_L1_ALT, 0x400f0)
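The new PM_*_ALT names exist so power8-pmu.c can drop its raw hex codes (see the next diff). The list is written to be expanded with different EVENT() definitions; the usual enum-generating expansion looks like this (shown as an illustration of the pattern, not a hunk of this patch):

/* Expand the event list into named integer constants. */
#define EVENT(_name, _code)	_name = _code,

enum {
#include "power8-events-list.h"
};

#undef EVENT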
diff --git a/arch/powerpc/perf/power8-pmu.c b/arch/powerpc/perf/power8-pmu.c
index 690d9186a855..7cf3b4378192 100644
--- a/arch/powerpc/perf/power8-pmu.c
+++ b/arch/powerpc/perf/power8-pmu.c
@@ -274,7 +274,8 @@ static int power8_get_constraint(u64 event, unsigned long *maskp, unsigned long
/* Ignore Linux defined bits when checking event below */
base_event = event & ~EVENT_LINUX_MASK;
- if (pmc >= 5 && base_event != 0x500fa && base_event != 0x600f4)
+ if (pmc >= 5 && base_event != PM_RUN_INST_CMPL &&
+ base_event != PM_RUN_CYC)
return -1;
mask |= CNST_PMC_MASK(pmc);
@@ -488,17 +489,17 @@ static int power8_compute_mmcr(u64 event[], int n_ev,
/* Table of alternatives, sorted by column 0 */
static const unsigned int event_alternatives[][MAX_ALT] = {
- { 0x10134, 0x301e2 }, /* PM_MRK_ST_CMPL */
- { 0x10138, 0x40138 }, /* PM_BR_MRK_2PATH */
- { 0x18082, 0x3e05e }, /* PM_L3_CO_MEPF */
- { 0x1d14e, 0x401e8 }, /* PM_MRK_DATA_FROM_L2MISS */
- { 0x1e054, 0x4000a }, /* PM_CMPLU_STALL */
- { 0x20036, 0x40036 }, /* PM_BR_2PATH */
- { 0x200f2, 0x300f2 }, /* PM_INST_DISP */
- { 0x200f4, 0x600f4 }, /* PM_RUN_CYC */
- { 0x2013c, 0x3012e }, /* PM_MRK_FILT_MATCH */
- { 0x3e054, 0x400f0 }, /* PM_LD_MISS_L1 */
- { 0x400fa, 0x500fa }, /* PM_RUN_INST_CMPL */
+ { PM_MRK_ST_CMPL, PM_MRK_ST_CMPL_ALT },
+ { PM_BR_MRK_2PATH, PM_BR_MRK_2PATH_ALT },
+ { PM_L3_CO_MEPF, PM_L3_CO_MEPF_ALT },
+ { PM_MRK_DATA_FROM_L2MISS, PM_MRK_DATA_FROM_L2MISS_ALT },
+ { PM_CMPLU_STALL_ALT, PM_CMPLU_STALL },
+ { PM_BR_2PATH, PM_BR_2PATH_ALT },
+ { PM_INST_DISP, PM_INST_DISP_ALT },
+ { PM_RUN_CYC_ALT, PM_RUN_CYC },
+ { PM_MRK_FILT_MATCH, PM_MRK_FILT_MATCH_ALT },
+ { PM_LD_MISS_L1, PM_LD_MISS_L1_ALT },
+ { PM_RUN_INST_CMPL_ALT, PM_RUN_INST_CMPL },
};
/*
@@ -546,17 +547,17 @@ static int power8_get_alternatives(u64 event, unsigned int flags, u64 alt[])
j = num_alt;
for (i = 0; i < num_alt; ++i) {
switch (alt[i]) {
- case 0x1e: /* PM_CYC */
- alt[j++] = 0x600f4; /* PM_RUN_CYC */
+ case PM_CYC:
+ alt[j++] = PM_RUN_CYC;
break;
- case 0x600f4: /* PM_RUN_CYC */
- alt[j++] = 0x1e;
+ case PM_RUN_CYC:
+ alt[j++] = PM_CYC;
break;
- case 0x2: /* PM_PPC_CMPL */
- alt[j++] = 0x500fa; /* PM_RUN_INST_CMPL */
+ case PM_INST_CMPL:
+ alt[j++] = PM_RUN_INST_CMPL;
break;
- case 0x500fa: /* PM_RUN_INST_CMPL */
- alt[j++] = 0x2; /* PM_PPC_CMPL */
+ case PM_RUN_INST_CMPL:
+ alt[j++] = PM_INST_CMPL;
break;
}
}
diff --git a/arch/powerpc/platforms/52xx/mpc52xx_gpt.c b/arch/powerpc/platforms/52xx/mpc52xx_gpt.c
index 3048e34db6d8..22645a7c6b8a 100644
--- a/arch/powerpc/platforms/52xx/mpc52xx_gpt.c
+++ b/arch/powerpc/platforms/52xx/mpc52xx_gpt.c
@@ -278,14 +278,9 @@ mpc52xx_gpt_irq_setup(struct mpc52xx_gpt_priv *gpt, struct device_node *node)
* GPIOLIB hooks
*/
#if defined(CONFIG_GPIOLIB)
-static inline struct mpc52xx_gpt_priv *gc_to_mpc52xx_gpt(struct gpio_chip *gc)
-{
- return container_of(gc, struct mpc52xx_gpt_priv, gc);
-}
-
static int mpc52xx_gpt_gpio_get(struct gpio_chip *gc, unsigned int gpio)
{
- struct mpc52xx_gpt_priv *gpt = gc_to_mpc52xx_gpt(gc);
+ struct mpc52xx_gpt_priv *gpt = gpiochip_get_data(gc);
return (in_be32(&gpt->regs->status) >> 8) & 1;
}
@@ -293,7 +288,7 @@ static int mpc52xx_gpt_gpio_get(struct gpio_chip *gc, unsigned int gpio)
static void
mpc52xx_gpt_gpio_set(struct gpio_chip *gc, unsigned int gpio, int v)
{
- struct mpc52xx_gpt_priv *gpt = gc_to_mpc52xx_gpt(gc);
+ struct mpc52xx_gpt_priv *gpt = gpiochip_get_data(gc);
unsigned long flags;
u32 r;
@@ -307,7 +302,7 @@ mpc52xx_gpt_gpio_set(struct gpio_chip *gc, unsigned int gpio, int v)
static int mpc52xx_gpt_gpio_dir_in(struct gpio_chip *gc, unsigned int gpio)
{
- struct mpc52xx_gpt_priv *gpt = gc_to_mpc52xx_gpt(gc);
+ struct mpc52xx_gpt_priv *gpt = gpiochip_get_data(gc);
unsigned long flags;
dev_dbg(gpt->dev, "%s: gpio:%d\n", __func__, gpio);
@@ -354,9 +349,9 @@ mpc52xx_gpt_gpio_setup(struct mpc52xx_gpt_priv *gpt, struct device_node *node)
clrsetbits_be32(&gpt->regs->mode, MPC52xx_GPT_MODE_MS_MASK,
MPC52xx_GPT_MODE_MS_GPIO);
- rc = gpiochip_add(&gpt->gc);
+ rc = gpiochip_add_data(&gpt->gc, gpt);
if (rc)
- dev_err(gpt->dev, "gpiochip_add() failed; rc=%i\n", rc);
+ dev_err(gpt->dev, "gpiochip_add_data() failed; rc=%i\n", rc);
dev_dbg(gpt->dev, "%s() complete.\n", __func__);
}
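This conversion, like the mcu_mpc8349emitx one below, follows the standard gpiochip data pattern: hand the driver's private structure to gpiochip_add_data() at registration and recover it with gpiochip_get_data() in the callbacks, instead of using container_of(). A generic sketch (struct foo and its fields are placeholders):

/* Sketch of the gpiochip_add_data()/gpiochip_get_data() pairing. */
struct foo {
	struct gpio_chip gc;
	void __iomem *regs;	/* driver private state */
};

static int foo_gpio_get(struct gpio_chip *gc, unsigned int offset)
{
	struct foo *priv = gpiochip_get_data(gc);	/* replaces container_of(gc, ...) */

	return !!(readl(priv->regs) & BIT(offset));
}

static int foo_register(struct foo *priv)
{
	return gpiochip_add_data(&priv->gc, priv);	/* priv retrievable in callbacks */
}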
diff --git a/arch/powerpc/platforms/83xx/mcu_mpc8349emitx.c b/arch/powerpc/platforms/83xx/mcu_mpc8349emitx.c
index 15e8021ddef9..dbcd0303afed 100644
--- a/arch/powerpc/platforms/83xx/mcu_mpc8349emitx.c
+++ b/arch/powerpc/platforms/83xx/mcu_mpc8349emitx.c
@@ -16,7 +16,7 @@
#include <linux/device.h>
#include <linux/mutex.h>
#include <linux/i2c.h>
-#include <linux/gpio.h>
+#include <linux/gpio/driver.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
#include <linux/slab.h>
@@ -99,7 +99,7 @@ static void mcu_power_off(void)
static void mcu_gpio_set(struct gpio_chip *gc, unsigned int gpio, int val)
{
- struct mcu *mcu = container_of(gc, struct mcu, gc);
+ struct mcu *mcu = gpiochip_get_data(gc);
u8 bit = 1 << (4 + gpio);
mutex_lock(&mcu->lock);
@@ -136,7 +136,7 @@ static int mcu_gpiochip_add(struct mcu *mcu)
gc->direction_output = mcu_gpio_dir_out;
gc->of_node = np;
- return gpiochip_add(gc);
+ return gpiochip_add_data(gc, mcu);
}
static int mcu_gpiochip_remove(struct mcu *mcu)
diff --git a/arch/powerpc/platforms/Kconfig.cputype b/arch/powerpc/platforms/Kconfig.cputype
index 142dff5e96d6..77e9b8d591fb 100644
--- a/arch/powerpc/platforms/Kconfig.cputype
+++ b/arch/powerpc/platforms/Kconfig.cputype
@@ -72,7 +72,7 @@ config PPC_BOOK3S_64
select PPC_FPU
select PPC_HAVE_PMU_SUPPORT
select SYS_SUPPORTS_HUGETLBFS
- select HAVE_ARCH_TRANSPARENT_HUGEPAGE if PPC_64K_PAGES
+ select HAVE_ARCH_TRANSPARENT_HUGEPAGE
select ARCH_SUPPORTS_NUMA_BALANCING
select IRQ_WORK
@@ -331,6 +331,15 @@ config PPC_STD_MMU_64
def_bool y
depends on PPC_STD_MMU && PPC64
+config PPC_RADIX_MMU
+ bool "Radix MMU Support"
+ depends on PPC_BOOK3S_64
+ default y
+ help
+	  Enable support for the Power ISA 3.0 Radix style MMU. Currently this
+	  is only implemented by IBM Power9 CPUs; if you don't have one of
+	  them, you can probably disable this.
+
config PPC_MMU_NOHASH
def_bool y
depends on !PPC_STD_MMU
diff --git a/arch/powerpc/platforms/cell/spu_base.c b/arch/powerpc/platforms/cell/spu_base.c
index f7af74f83693..3cbe38fad609 100644
--- a/arch/powerpc/platforms/cell/spu_base.c
+++ b/arch/powerpc/platforms/cell/spu_base.c
@@ -24,7 +24,7 @@
#include <linux/interrupt.h>
#include <linux/list.h>
-#include <linux/module.h>
+#include <linux/init.h>
#include <linux/ptrace.h>
#include <linux/slab.h>
#include <linux/wait.h>
@@ -197,7 +197,7 @@ static int __spu_trap_data_map(struct spu *spu, unsigned long ea, u64 dsisr)
(REGION_ID(ea) != USER_REGION_ID)) {
spin_unlock(&spu->register_lock);
- ret = hash_page(ea, _PAGE_PRESENT, 0x300, dsisr);
+ ret = hash_page(ea, _PAGE_PRESENT | _PAGE_READ, 0x300, dsisr);
spin_lock(&spu->register_lock);
if (!ret) {
@@ -805,7 +805,4 @@ static int __init init_spu_base(void)
out:
return ret;
}
-module_init(init_spu_base);
-
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Arnd Bergmann <arndb@de.ibm.com>");
+device_initcall(init_spu_base);
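Dropping module_init()/MODULE_* reflects that spu_base can only be built in; device_initcall() registers the function for boot-time initialisation and needs no teardown hook. In general terms the change has this shape (sketch, placeholder names):

/* Sketch: built-in-only initialisation after removing module support. */
static int __init my_subsys_init(void)
{
	/* probe hardware, register children, ... */
	return 0;
}
device_initcall(my_subsys_init);	/* was module_init(); no module_exit() pair needed */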
diff --git a/arch/powerpc/platforms/cell/spufs/coredump.c b/arch/powerpc/platforms/cell/spufs/coredump.c
index be6212ddbf06..84fb984f29c1 100644
--- a/arch/powerpc/platforms/cell/spufs/coredump.c
+++ b/arch/powerpc/platforms/cell/spufs/coredump.c
@@ -137,6 +137,7 @@ static int spufs_arch_write_note(struct spu_context *ctx, int i,
char *name;
char fullname[80], *buf;
struct elf_note en;
+ size_t skip;
buf = (void *)get_zeroed_page(GFP_KERNEL);
if (!buf)
@@ -171,8 +172,8 @@ static int spufs_arch_write_note(struct spu_context *ctx, int i,
if (rc < 0)
goto out;
- if (!dump_skip(cprm,
- roundup(cprm->written - total + sz, 4) - cprm->written))
+ skip = roundup(cprm->file->f_pos - total + sz, 4) - cprm->file->f_pos;
+ if (!dump_skip(cprm, skip))
goto Eio;
out:
free_page((unsigned long)buf);
diff --git a/arch/powerpc/platforms/cell/spufs/fault.c b/arch/powerpc/platforms/cell/spufs/fault.c
index d98f845ac777..e29e4d5afa2d 100644
--- a/arch/powerpc/platforms/cell/spufs/fault.c
+++ b/arch/powerpc/platforms/cell/spufs/fault.c
@@ -141,8 +141,8 @@ int spufs_handle_class1(struct spu_context *ctx)
/* we must not hold the lock when entering copro_handle_mm_fault */
spu_release(ctx);
- access = (_PAGE_PRESENT | _PAGE_USER);
- access |= (dsisr & MFC_DSISR_ACCESS_PUT) ? _PAGE_RW : 0UL;
+ access = (_PAGE_PRESENT | _PAGE_READ);
+ access |= (dsisr & MFC_DSISR_ACCESS_PUT) ? _PAGE_WRITE : 0UL;
local_irq_save(flags);
ret = hash_page(ea, access, 0x300, dsisr);
local_irq_restore(flags);
diff --git a/arch/powerpc/platforms/cell/spufs/inode.c b/arch/powerpc/platforms/cell/spufs/inode.c
index 6ca5f0525e57..5be15cff758d 100644
--- a/arch/powerpc/platforms/cell/spufs/inode.c
+++ b/arch/powerpc/platforms/cell/spufs/inode.c
@@ -238,7 +238,7 @@ const struct file_operations spufs_context_fops = {
.release = spufs_dir_close,
.llseek = dcache_dir_lseek,
.read = generic_read_dir,
- .iterate = dcache_readdir,
+ .iterate_shared = dcache_readdir,
.fsync = noop_fsync,
};
EXPORT_SYMBOL_GPL(spufs_context_fops);
diff --git a/arch/powerpc/platforms/powernv/eeh-powernv.c b/arch/powerpc/platforms/powernv/eeh-powernv.c
index 950b3e539057..9226df11bf39 100644
--- a/arch/powerpc/platforms/powernv/eeh-powernv.c
+++ b/arch/powerpc/platforms/powernv/eeh-powernv.c
@@ -75,7 +75,7 @@ static int pnv_eeh_init(void)
* and P7IOC separately. So we should regard
* PE#0 as valid for PHB3 and P7IOC.
*/
- if (phb->ioda.reserved_pe != 0)
+ if (phb->ioda.reserved_pe_idx != 0)
eeh_add_flag(EEH_VALID_PE_ZERO);
break;
@@ -1009,8 +1009,9 @@ static int pnv_eeh_reset_vf_pe(struct eeh_pe *pe, int option)
static int pnv_eeh_reset(struct eeh_pe *pe, int option)
{
struct pci_controller *hose = pe->phb;
+ struct pnv_phb *phb;
struct pci_bus *bus;
- int ret;
+ int64_t rc;
/*
* For PHB reset, we always have complete reset. For those PEs whose
@@ -1026,45 +1027,39 @@ static int pnv_eeh_reset(struct eeh_pe *pe, int option)
* reset. The side effect is that EEH core has to clear the frozen
* state explicitly after BAR restore.
*/
- if (pe->type & EEH_PE_PHB) {
- ret = pnv_eeh_phb_reset(hose, option);
- } else {
- struct pnv_phb *phb;
- s64 rc;
+ if (pe->type & EEH_PE_PHB)
+ return pnv_eeh_phb_reset(hose, option);
- /*
- * The frozen PE might be caused by PAPR error injection
- * registers, which are expected to be cleared after hitting
- * frozen PE as stated in the hardware spec. Unfortunately,
- * that's not true on P7IOC. So we have to clear it manually
- * to avoid recursive EEH errors during recovery.
- */
- phb = hose->private_data;
- if (phb->model == PNV_PHB_MODEL_P7IOC &&
- (option == EEH_RESET_HOT ||
- option == EEH_RESET_FUNDAMENTAL)) {
- rc = opal_pci_reset(phb->opal_id,
- OPAL_RESET_PHB_ERROR,
- OPAL_ASSERT_RESET);
- if (rc != OPAL_SUCCESS) {
- pr_warn("%s: Failure %lld clearing "
- "error injection registers\n",
- __func__, rc);
- return -EIO;
- }
+ /*
+ * The frozen PE might be caused by PAPR error injection
+ * registers, which are expected to be cleared after hitting
+ * frozen PE as stated in the hardware spec. Unfortunately,
+ * that's not true on P7IOC. So we have to clear it manually
+ * to avoid recursive EEH errors during recovery.
+ */
+ phb = hose->private_data;
+ if (phb->model == PNV_PHB_MODEL_P7IOC &&
+ (option == EEH_RESET_HOT ||
+ option == EEH_RESET_FUNDAMENTAL)) {
+ rc = opal_pci_reset(phb->opal_id,
+ OPAL_RESET_PHB_ERROR,
+ OPAL_ASSERT_RESET);
+ if (rc != OPAL_SUCCESS) {
+ pr_warn("%s: Failure %lld clearing error injection registers\n",
+ __func__, rc);
+ return -EIO;
}
-
- bus = eeh_pe_bus_get(pe);
- if (pe->type & EEH_PE_VF)
- ret = pnv_eeh_reset_vf_pe(pe, option);
- else if (pci_is_root_bus(bus) ||
- pci_is_root_bus(bus->parent))
- ret = pnv_eeh_root_reset(hose, option);
- else
- ret = pnv_eeh_bridge_reset(bus->self, option);
}
- return ret;
+ bus = eeh_pe_bus_get(pe);
+ if (pe->type & EEH_PE_VF)
+ return pnv_eeh_reset_vf_pe(pe, option);
+
+ if (pci_is_root_bus(bus) ||
+ pci_is_root_bus(bus->parent))
+ return pnv_eeh_root_reset(hose, option);
+
+ return pnv_eeh_bridge_reset(bus->self, option);
}
/**
diff --git a/arch/powerpc/platforms/powernv/npu-dma.c b/arch/powerpc/platforms/powernv/npu-dma.c
index 7229acd9bb3a..0459e100b4e7 100644
--- a/arch/powerpc/platforms/powernv/npu-dma.c
+++ b/arch/powerpc/platforms/powernv/npu-dma.c
@@ -12,6 +12,7 @@
#include <linux/export.h>
#include <linux/pci.h>
#include <linux/memblock.h>
+#include <linux/iommu.h>
#include <asm/iommu.h>
#include <asm/pnv-pci.h>
@@ -25,8 +26,6 @@
* Other types of TCE cache invalidation are not functional in the
* hardware.
*/
-#define TCE_KILL_INVAL_ALL PPC_BIT(0)
-
static struct pci_dev *get_pci_dev(struct device_node *dn)
{
return PCI_DN(dn)->pcidev;
@@ -138,22 +137,17 @@ static struct pnv_ioda_pe *get_gpu_pci_dev_and_pe(struct pnv_ioda_pe *npe,
struct pnv_ioda_pe *pe;
struct pci_dn *pdn;
- if (npe->flags & PNV_IODA_PE_PEER) {
- pe = npe->peers[0];
- pdev = pe->pdev;
- } else {
- pdev = pnv_pci_get_gpu_dev(npe->pdev);
- if (!pdev)
- return NULL;
+ pdev = pnv_pci_get_gpu_dev(npe->pdev);
+ if (!pdev)
+ return NULL;
- pdn = pci_get_pdn(pdev);
- if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
- return NULL;
+ pdn = pci_get_pdn(pdev);
+ if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
+ return NULL;
- hose = pci_bus_to_host(pdev->bus);
- phb = hose->private_data;
- pe = &phb->ioda.pe_array[pdn->pe_number];
- }
+ hose = pci_bus_to_host(pdev->bus);
+ phb = hose->private_data;
+ pe = &phb->ioda.pe_array[pdn->pe_number];
if (gpdev)
*gpdev = pdev;
@@ -161,92 +155,70 @@ static struct pnv_ioda_pe *get_gpu_pci_dev_and_pe(struct pnv_ioda_pe *npe,
return pe;
}
-void pnv_npu_tce_invalidate_entire(struct pnv_ioda_pe *npe)
+long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
+ struct iommu_table *tbl)
{
struct pnv_phb *phb = npe->phb;
+ int64_t rc;
+ const unsigned long size = tbl->it_indirect_levels ?
+ tbl->it_level_size : tbl->it_size;
+ const __u64 start_addr = tbl->it_offset << tbl->it_page_shift;
+ const __u64 win_size = tbl->it_size << tbl->it_page_shift;
+
+ pe_info(npe, "Setting up window %llx..%llx pg=%lx\n",
+ start_addr, start_addr + win_size - 1,
+ IOMMU_PAGE_SIZE(tbl));
+
+ rc = opal_pci_map_pe_dma_window(phb->opal_id,
+ npe->pe_number,
+ npe->pe_number,
+ tbl->it_indirect_levels + 1,
+ __pa(tbl->it_base),
+ size << 3,
+ IOMMU_PAGE_SIZE(tbl));
+ if (rc) {
+ pe_err(npe, "Failed to configure TCE table, err %lld\n", rc);
+ return rc;
+ }
+ pnv_pci_ioda2_tce_invalidate_entire(phb, false);
- if (WARN_ON(phb->type != PNV_PHB_NPU ||
- !phb->ioda.tce_inval_reg ||
- !(npe->flags & PNV_IODA_PE_DEV)))
- return;
+ /* Add the table to the list so its TCE cache will get invalidated */
+ pnv_pci_link_table_and_group(phb->hose->node, num,
+ tbl, &npe->table_group);
- mb(); /* Ensure previous TCE table stores are visible */
- __raw_writeq(cpu_to_be64(TCE_KILL_INVAL_ALL),
- phb->ioda.tce_inval_reg);
+ return 0;
}
-void pnv_npu_tce_invalidate(struct pnv_ioda_pe *npe,
- struct iommu_table *tbl,
- unsigned long index,
- unsigned long npages,
- bool rm)
+long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num)
{
struct pnv_phb *phb = npe->phb;
+ int64_t rc;
- /* We can only invalidate the whole cache on NPU */
- unsigned long val = TCE_KILL_INVAL_ALL;
-
- if (WARN_ON(phb->type != PNV_PHB_NPU ||
- !phb->ioda.tce_inval_reg ||
- !(npe->flags & PNV_IODA_PE_DEV)))
- return;
-
- mb(); /* Ensure previous TCE table stores are visible */
- if (rm)
- __raw_rm_writeq(cpu_to_be64(val),
- (__be64 __iomem *) phb->ioda.tce_inval_reg_phys);
- else
- __raw_writeq(cpu_to_be64(val),
- phb->ioda.tce_inval_reg);
-}
-
-void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe)
-{
- struct pnv_ioda_pe *gpe;
- struct pci_dev *gpdev;
- int i, avail = -1;
-
- if (!npe->pdev || !(npe->flags & PNV_IODA_PE_DEV))
- return;
-
- gpe = get_gpu_pci_dev_and_pe(npe, &gpdev);
- if (!gpe)
- return;
-
- for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
- /* Nothing to do if the PE is already connected. */
- if (gpe->peers[i] == npe)
- return;
+ pe_info(npe, "Removing DMA window\n");
- if (!gpe->peers[i])
- avail = i;
+ rc = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
+ npe->pe_number,
+ 0/* levels */, 0/* table address */,
+ 0/* table size */, 0/* page size */);
+ if (rc) {
+ pe_err(npe, "Unmapping failed, ret = %lld\n", rc);
+ return rc;
}
+ pnv_pci_ioda2_tce_invalidate_entire(phb, false);
- if (WARN_ON(avail < 0))
- return;
-
- gpe->peers[avail] = npe;
- gpe->flags |= PNV_IODA_PE_PEER;
+ pnv_pci_unlink_table_and_group(npe->table_group.tables[num],
+ &npe->table_group);
- /*
- * We assume that the NPU devices only have a single peer PE
- * (the GPU PCIe device PE).
- */
- npe->peers[0] = gpe;
- npe->flags |= PNV_IODA_PE_PEER;
+ return 0;
}
/*
- * For the NPU we want to point the TCE table at the same table as the
- * real PCI device.
+ * Enables 32 bit DMA on NPU.
*/
-static void pnv_npu_disable_bypass(struct pnv_ioda_pe *npe)
+static void pnv_npu_dma_set_32(struct pnv_ioda_pe *npe)
{
- struct pnv_phb *phb = npe->phb;
struct pci_dev *gpdev;
struct pnv_ioda_pe *gpe;
- void *addr;
- unsigned int size;
int64_t rc;
/*
@@ -260,14 +232,7 @@ static void pnv_npu_disable_bypass(struct pnv_ioda_pe *npe)
if (!gpe)
return;
- addr = (void *)gpe->table_group.tables[0]->it_base;
- size = gpe->table_group.tables[0]->it_size << 3;
- rc = opal_pci_map_pe_dma_window(phb->opal_id, npe->pe_number,
- npe->pe_number, 1, __pa(addr),
- size, 0x1000);
- if (rc != OPAL_SUCCESS)
- pr_warn("%s: Error %lld setting DMA window on PHB#%d-PE#%d\n",
- __func__, rc, phb->hose->global_number, npe->pe_number);
+ rc = pnv_npu_set_window(npe, 0, gpe->table_group.tables[0]);
/*
* We don't initialise npu_pe->tce32_table as we always use
@@ -277,72 +242,120 @@ static void pnv_npu_disable_bypass(struct pnv_ioda_pe *npe)
}
/*
- * Enable/disable bypass mode on the NPU. The NPU only supports one
+ * Enables bypass mode on the NPU. The NPU only supports one
* window per link, so bypass needs to be explicitly enabled or
 * disabled. Unlike on a PHB3, bypass and non-bypass modes can't be
* active at the same time.
*/
-int pnv_npu_dma_set_bypass(struct pnv_ioda_pe *npe, bool enable)
+static int pnv_npu_dma_set_bypass(struct pnv_ioda_pe *npe)
{
struct pnv_phb *phb = npe->phb;
int64_t rc = 0;
+ phys_addr_t top = memblock_end_of_DRAM();
if (phb->type != PNV_PHB_NPU || !npe->pdev)
return -EINVAL;
- if (enable) {
- /* Enable the bypass window */
- phys_addr_t top = memblock_end_of_DRAM();
-
- npe->tce_bypass_base = 0;
- top = roundup_pow_of_two(top);
- dev_info(&npe->pdev->dev, "Enabling bypass for PE %d\n",
- npe->pe_number);
- rc = opal_pci_map_pe_dma_window_real(phb->opal_id,
- npe->pe_number, npe->pe_number,
- npe->tce_bypass_base, top);
- } else {
- /*
- * Disable the bypass window by replacing it with the
- * TCE32 window.
- */
- pnv_npu_disable_bypass(npe);
- }
+ rc = pnv_npu_unset_window(npe, 0);
+ if (rc != OPAL_SUCCESS)
+ return rc;
+
+ /* Enable the bypass window */
+
+ top = roundup_pow_of_two(top);
+ dev_info(&npe->pdev->dev, "Enabling bypass for PE %d\n",
+ npe->pe_number);
+ rc = opal_pci_map_pe_dma_window_real(phb->opal_id,
+ npe->pe_number, npe->pe_number,
+ 0 /* bypass base */, top);
+
+ if (rc == OPAL_SUCCESS)
+ pnv_pci_ioda2_tce_invalidate_entire(phb, false);
return rc;
}
-int pnv_npu_dma_set_mask(struct pci_dev *npdev, u64 dma_mask)
+void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass)
{
- struct pci_controller *hose = pci_bus_to_host(npdev->bus);
- struct pnv_phb *phb = hose->private_data;
- struct pci_dn *pdn = pci_get_pdn(npdev);
- struct pnv_ioda_pe *npe, *gpe;
- struct pci_dev *gpdev;
- uint64_t top;
- bool bypass = false;
+ int i;
+ struct pnv_phb *phb;
+ struct pci_dn *pdn;
+ struct pnv_ioda_pe *npe;
+ struct pci_dev *npdev;
- if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
- return -ENXIO;
+ for (i = 0; ; ++i) {
+ npdev = pnv_pci_get_npu_dev(gpdev, i);
- /* We only do bypass if it's enabled on the linked device */
- npe = &phb->ioda.pe_array[pdn->pe_number];
- gpe = get_gpu_pci_dev_and_pe(npe, &gpdev);
- if (!gpe)
- return -ENODEV;
+ if (!npdev)
+ break;
+
+ pdn = pci_get_pdn(npdev);
+ if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
+ return;
+
+ phb = pci_bus_to_host(npdev->bus)->private_data;
+
+ /* We only do bypass if it's enabled on the linked device */
+ npe = &phb->ioda.pe_array[pdn->pe_number];
+
+ if (bypass) {
+ dev_info(&npdev->dev,
+ "Using 64-bit DMA iommu bypass\n");
+ pnv_npu_dma_set_bypass(npe);
+ } else {
+ dev_info(&npdev->dev, "Using 32-bit DMA via iommu\n");
+ pnv_npu_dma_set_32(npe);
+ }
+ }
+}
- if (gpe->tce_bypass_enabled) {
- top = gpe->tce_bypass_base + memblock_end_of_DRAM() - 1;
- bypass = (dma_mask >= top);
+/* Switch ownership from platform code to external user (e.g. VFIO) */
+void pnv_npu_take_ownership(struct pnv_ioda_pe *npe)
+{
+ struct pnv_phb *phb = npe->phb;
+ int64_t rc;
+
+ /*
+ * Note: NPU has just a single TVE in the hardware which means that
+ * while used by the kernel, it can have either 32bit window or
+ * DMA bypass but never both. So we deconfigure 32bit window only
+ * if it was enabled at the moment of ownership change.
+ */
+ if (npe->table_group.tables[0]) {
+ pnv_npu_unset_window(npe, 0);
+ return;
}
- if (bypass)
- dev_info(&npdev->dev, "Using 64-bit DMA iommu bypass\n");
- else
- dev_info(&npdev->dev, "Using 32-bit DMA via iommu\n");
+ /* Disable bypass */
+ rc = opal_pci_map_pe_dma_window_real(phb->opal_id,
+ npe->pe_number, npe->pe_number,
+ 0 /* bypass base */, 0);
+ if (rc) {
+ pe_err(npe, "Failed to disable bypass, err %lld\n", rc);
+ return;
+ }
+ pnv_pci_ioda2_tce_invalidate_entire(npe->phb, false);
+}
- pnv_npu_dma_set_bypass(npe, bypass);
- *npdev->dev.dma_mask = dma_mask;
+struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe)
+{
+ struct pnv_phb *phb = npe->phb;
+ struct pci_bus *pbus = phb->hose->bus;
+ struct pci_dev *npdev, *gpdev = NULL, *gptmp;
+ struct pnv_ioda_pe *gpe = get_gpu_pci_dev_and_pe(npe, &gpdev);
- return 0;
+ if (!gpe || !gpdev)
+ return NULL;
+
+ list_for_each_entry(npdev, &pbus->devices, bus_list) {
+ gptmp = pnv_pci_get_gpu_dev(npdev);
+
+ if (gptmp != gpdev)
+ continue;
+
+ pe_info(gpe, "Attached NPU %s\n", dev_name(&npdev->dev));
+ iommu_group_add_device(gpe->table_group.group, &npdev->dev);
+ }
+
+ return gpe;
}
diff --git a/arch/powerpc/platforms/powernv/opal-hmi.c b/arch/powerpc/platforms/powernv/opal-hmi.c
index d000f4e21981..c0a8201cb4d9 100644
--- a/arch/powerpc/platforms/powernv/opal-hmi.c
+++ b/arch/powerpc/platforms/powernv/opal-hmi.c
@@ -150,15 +150,17 @@ static void print_nx_checkstop_reason(const char *level,
static void print_checkstop_reason(const char *level,
struct OpalHMIEvent *hmi_evt)
{
- switch (hmi_evt->u.xstop_error.xstop_type) {
+ uint8_t type = hmi_evt->u.xstop_error.xstop_type;
+ switch (type) {
case CHECKSTOP_TYPE_CORE:
print_core_checkstop_reason(level, hmi_evt);
break;
case CHECKSTOP_TYPE_NX:
print_nx_checkstop_reason(level, hmi_evt);
break;
- case CHECKSTOP_TYPE_UNKNOWN:
- printk("%s Unknown Malfunction Alert.\n", level);
+ default:
+ printk("%s Unknown Malfunction Alert of type %d\n",
+ level, type);
break;
}
}
diff --git a/arch/powerpc/platforms/powernv/pci-ioda.c b/arch/powerpc/platforms/powernv/pci-ioda.c
index c5baaf3cc4e5..3a5ea8236db8 100644
--- a/arch/powerpc/platforms/powernv/pci-ioda.c
+++ b/arch/powerpc/platforms/powernv/pci-ioda.c
@@ -48,15 +48,16 @@
#include "powernv.h"
#include "pci.h"
-/* 256M DMA window, 4K TCE pages, 8 bytes TCE */
-#define TCE32_TABLE_SIZE ((0x10000000 / 0x1000) * 8)
+#define PNV_IODA1_M64_NUM 16 /* Number of M64 BARs */
+#define PNV_IODA1_M64_SEGS 8 /* Segments per M64 BAR */
+#define PNV_IODA1_DMA32_SEGSIZE 0x10000000
#define POWERNV_IOMMU_DEFAULT_LEVELS 1
#define POWERNV_IOMMU_MAX_LEVELS 5
static void pnv_pci_ioda2_table_free_pages(struct iommu_table *tbl);
-static void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
+void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
const char *fmt, ...)
{
struct va_format vaf;
@@ -87,13 +88,6 @@ static void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
va_end(args);
}
-#define pe_err(pe, fmt, ...) \
- pe_level_printk(pe, KERN_ERR, fmt, ##__VA_ARGS__)
-#define pe_warn(pe, fmt, ...) \
- pe_level_printk(pe, KERN_WARNING, fmt, ##__VA_ARGS__)
-#define pe_info(pe, fmt, ...) \
- pe_level_printk(pe, KERN_INFO, fmt, ##__VA_ARGS__)
-
static bool pnv_iommu_bypass_disabled __read_mostly;
static int __init iommu_setup(char *str)
@@ -122,9 +116,17 @@ static inline bool pnv_pci_is_mem_pref_64(unsigned long flags)
(IORESOURCE_MEM_64 | IORESOURCE_PREFETCH));
}
+static struct pnv_ioda_pe *pnv_ioda_init_pe(struct pnv_phb *phb, int pe_no)
+{
+ phb->ioda.pe_array[pe_no].phb = phb;
+ phb->ioda.pe_array[pe_no].pe_number = pe_no;
+
+ return &phb->ioda.pe_array[pe_no];
+}
+
static void pnv_ioda_reserve_pe(struct pnv_phb *phb, int pe_no)
{
- if (!(pe_no >= 0 && pe_no < phb->ioda.total_pe)) {
+ if (!(pe_no >= 0 && pe_no < phb->ioda.total_pe_num)) {
pr_warn("%s: Invalid PE %d on PHB#%x\n",
__func__, pe_no, phb->hose->global_number);
return;
@@ -134,32 +136,31 @@ static void pnv_ioda_reserve_pe(struct pnv_phb *phb, int pe_no)
pr_debug("%s: PE %d was reserved on PHB#%x\n",
__func__, pe_no, phb->hose->global_number);
- phb->ioda.pe_array[pe_no].phb = phb;
- phb->ioda.pe_array[pe_no].pe_number = pe_no;
+ pnv_ioda_init_pe(phb, pe_no);
}
-static int pnv_ioda_alloc_pe(struct pnv_phb *phb)
+static struct pnv_ioda_pe *pnv_ioda_alloc_pe(struct pnv_phb *phb)
{
unsigned long pe;
do {
pe = find_next_zero_bit(phb->ioda.pe_alloc,
- phb->ioda.total_pe, 0);
- if (pe >= phb->ioda.total_pe)
- return IODA_INVALID_PE;
+ phb->ioda.total_pe_num, 0);
+ if (pe >= phb->ioda.total_pe_num)
+ return NULL;
} while(test_and_set_bit(pe, phb->ioda.pe_alloc));
- phb->ioda.pe_array[pe].phb = phb;
- phb->ioda.pe_array[pe].pe_number = pe;
- return pe;
+ return pnv_ioda_init_pe(phb, pe);
}
-static void pnv_ioda_free_pe(struct pnv_phb *phb, int pe)
+static void pnv_ioda_free_pe(struct pnv_ioda_pe *pe)
{
- WARN_ON(phb->ioda.pe_array[pe].pdev);
+ struct pnv_phb *phb = pe->phb;
+
+ WARN_ON(pe->pdev);
- memset(&phb->ioda.pe_array[pe], 0, sizeof(struct pnv_ioda_pe));
- clear_bit(pe, phb->ioda.pe_alloc);
+ memset(pe, 0, sizeof(struct pnv_ioda_pe));
+ clear_bit(pe->pe_number, phb->ioda.pe_alloc);
}
/* The default M64 BAR is shared by all PEs */
@@ -199,13 +200,13 @@ static int pnv_ioda2_init_m64(struct pnv_phb *phb)
 * expected to be 0 or the last supported PE#.
*/
r = &phb->hose->mem_resources[1];
- if (phb->ioda.reserved_pe == 0)
+ if (phb->ioda.reserved_pe_idx == 0)
r->start += phb->ioda.m64_segsize;
- else if (phb->ioda.reserved_pe == (phb->ioda.total_pe - 1))
+ else if (phb->ioda.reserved_pe_idx == (phb->ioda.total_pe_num - 1))
r->end -= phb->ioda.m64_segsize;
else
pr_warn(" Cannot strip M64 segment for reserved PE#%d\n",
- phb->ioda.reserved_pe);
+ phb->ioda.reserved_pe_idx);
return 0;
@@ -219,7 +220,7 @@ fail:
return -EIO;
}
-static void pnv_ioda2_reserve_dev_m64_pe(struct pci_dev *pdev,
+static void pnv_ioda_reserve_dev_m64_pe(struct pci_dev *pdev,
unsigned long *pe_bitmap)
{
struct pci_controller *hose = pci_bus_to_host(pdev->bus);
@@ -246,22 +247,80 @@ static void pnv_ioda2_reserve_dev_m64_pe(struct pci_dev *pdev,
}
}
-static void pnv_ioda2_reserve_m64_pe(struct pci_bus *bus,
- unsigned long *pe_bitmap,
- bool all)
+static int pnv_ioda1_init_m64(struct pnv_phb *phb)
+{
+ struct resource *r;
+ int index;
+
+ /*
+ * There are 16 M64 BARs, each of which has 8 segments. So
+ * there are as many M64 segments as the maximum number of
+ * PEs, which is 128.
+ */
+ for (index = 0; index < PNV_IODA1_M64_NUM; index++) {
+ unsigned long base, segsz = phb->ioda.m64_segsize;
+ int64_t rc;
+
+ base = phb->ioda.m64_base +
+ index * PNV_IODA1_M64_SEGS * segsz;
+ rc = opal_pci_set_phb_mem_window(phb->opal_id,
+ OPAL_M64_WINDOW_TYPE, index, base, 0,
+ PNV_IODA1_M64_SEGS * segsz);
+ if (rc != OPAL_SUCCESS) {
+ pr_warn(" Error %lld setting M64 PHB#%d-BAR#%d\n",
+ rc, phb->hose->global_number, index);
+ goto fail;
+ }
+
+ rc = opal_pci_phb_mmio_enable(phb->opal_id,
+ OPAL_M64_WINDOW_TYPE, index,
+ OPAL_ENABLE_M64_SPLIT);
+ if (rc != OPAL_SUCCESS) {
+ pr_warn(" Error %lld enabling M64 PHB#%d-BAR#%d\n",
+ rc, phb->hose->global_number, index);
+ goto fail;
+ }
+ }
+
+ /*
+ * Exclude the segment used by the reserved PE, which
+ * is expected to be 0 or last supported PE#.
+ */
+ r = &phb->hose->mem_resources[1];
+ if (phb->ioda.reserved_pe_idx == 0)
+ r->start += phb->ioda.m64_segsize;
+ else if (phb->ioda.reserved_pe_idx == (phb->ioda.total_pe_num - 1))
+ r->end -= phb->ioda.m64_segsize;
+ else
+ WARN(1, "Wrong reserved PE#%d on PHB#%d\n",
+ phb->ioda.reserved_pe_idx, phb->hose->global_number);
+
+ return 0;
+
+fail:
+ for ( ; index >= 0; index--)
+ opal_pci_phb_mmio_enable(phb->opal_id,
+ OPAL_M64_WINDOW_TYPE, index, OPAL_DISABLE_M64);
+
+ return -EIO;
+}
+
+static void pnv_ioda_reserve_m64_pe(struct pci_bus *bus,
+ unsigned long *pe_bitmap,
+ bool all)
{
struct pci_dev *pdev;
list_for_each_entry(pdev, &bus->devices, bus_list) {
- pnv_ioda2_reserve_dev_m64_pe(pdev, pe_bitmap);
+ pnv_ioda_reserve_dev_m64_pe(pdev, pe_bitmap);
if (all && pdev->subordinate)
- pnv_ioda2_reserve_m64_pe(pdev->subordinate,
- pe_bitmap, all);
+ pnv_ioda_reserve_m64_pe(pdev->subordinate,
+ pe_bitmap, all);
}
}
-static int pnv_ioda2_pick_m64_pe(struct pci_bus *bus, bool all)
+static struct pnv_ioda_pe *pnv_ioda_pick_m64_pe(struct pci_bus *bus, bool all)
{
struct pci_controller *hose = pci_bus_to_host(bus);
struct pnv_phb *phb = hose->private_data;
@@ -271,28 +330,28 @@ static int pnv_ioda2_pick_m64_pe(struct pci_bus *bus, bool all)
/* Root bus shouldn't use M64 */
if (pci_is_root_bus(bus))
- return IODA_INVALID_PE;
+ return NULL;
/* Allocate bitmap */
- size = _ALIGN_UP(phb->ioda.total_pe / 8, sizeof(unsigned long));
+ size = _ALIGN_UP(phb->ioda.total_pe_num / 8, sizeof(unsigned long));
pe_alloc = kzalloc(size, GFP_KERNEL);
if (!pe_alloc) {
pr_warn("%s: Out of memory !\n",
__func__);
- return IODA_INVALID_PE;
+ return NULL;
}
/* Figure out reserved PE numbers by the PE */
- pnv_ioda2_reserve_m64_pe(bus, pe_alloc, all);
+ pnv_ioda_reserve_m64_pe(bus, pe_alloc, all);
/*
* the current bus might not own M64 window and that's all
* contributed by its child buses. For the case, we needn't
* pick M64 dependent PE#.
*/
- if (bitmap_empty(pe_alloc, phb->ioda.total_pe)) {
+ if (bitmap_empty(pe_alloc, phb->ioda.total_pe_num)) {
kfree(pe_alloc);
- return IODA_INVALID_PE;
+ return NULL;
}
/*
@@ -301,10 +360,11 @@ static int pnv_ioda2_pick_m64_pe(struct pci_bus *bus, bool all)
*/
master_pe = NULL;
i = -1;
- while ((i = find_next_bit(pe_alloc, phb->ioda.total_pe, i + 1)) <
- phb->ioda.total_pe) {
+ while ((i = find_next_bit(pe_alloc, phb->ioda.total_pe_num, i + 1)) <
+ phb->ioda.total_pe_num) {
pe = &phb->ioda.pe_array[i];
+ phb->ioda.m64_segmap[pe->pe_number] = pe->pe_number;
if (!master_pe) {
pe->flags |= PNV_IODA_PE_MASTER;
INIT_LIST_HEAD(&pe->slaves);
@@ -314,10 +374,30 @@ static int pnv_ioda2_pick_m64_pe(struct pci_bus *bus, bool all)
pe->master = master_pe;
list_add_tail(&pe->list, &master_pe->slaves);
}
+
+ /*
+ * P7IOC supports M64DT, which helps mapping M64 segment
+ * to one particular PE#. However, PHB3 has fixed mapping
+ * between M64 segment and PE#. In order to have same logic
+ * for P7IOC and PHB3, we enforce fixed mapping between M64
+ * segment and PE# on P7IOC.
+ */
+ if (phb->type == PNV_PHB_IODA1) {
+ int64_t rc;
+
+ rc = opal_pci_map_pe_mmio_window(phb->opal_id,
+ pe->pe_number, OPAL_M64_WINDOW_TYPE,
+ pe->pe_number / PNV_IODA1_M64_SEGS,
+ pe->pe_number % PNV_IODA1_M64_SEGS);
+ if (rc != OPAL_SUCCESS)
+ pr_warn("%s: Error %lld mapping M64 for PHB#%d-PE#%d\n",
+ __func__, rc, phb->hose->global_number,
+ pe->pe_number);
+ }
}
kfree(pe_alloc);
- return master_pe->pe_number;
+ return master_pe;
}
static void __init pnv_ioda_parse_m64_window(struct pnv_phb *phb)
@@ -328,8 +408,7 @@ static void __init pnv_ioda_parse_m64_window(struct pnv_phb *phb)
const u32 *r;
u64 pci_addr;
- /* FIXME: Support M64 for P7IOC */
- if (phb->type != PNV_PHB_IODA2) {
+ if (phb->type != PNV_PHB_IODA1 && phb->type != PNV_PHB_IODA2) {
pr_info(" Not support M64 window\n");
return;
}
@@ -355,7 +434,7 @@ static void __init pnv_ioda_parse_m64_window(struct pnv_phb *phb)
hose->mem_offset[1] = res->start - pci_addr;
phb->ioda.m64_size = resource_size(res);
- phb->ioda.m64_segsize = phb->ioda.m64_size / phb->ioda.total_pe;
+ phb->ioda.m64_segsize = phb->ioda.m64_size / phb->ioda.total_pe_num;
phb->ioda.m64_base = pci_addr;
pr_info(" MEM64 0x%016llx..0x%016llx -> 0x%016llx\n",
@@ -363,9 +442,12 @@ static void __init pnv_ioda_parse_m64_window(struct pnv_phb *phb)
/* Use last M64 BAR to cover M64 window */
phb->ioda.m64_bar_idx = 15;
- phb->init_m64 = pnv_ioda2_init_m64;
- phb->reserve_m64_pe = pnv_ioda2_reserve_m64_pe;
- phb->pick_m64_pe = pnv_ioda2_pick_m64_pe;
+ if (phb->type == PNV_PHB_IODA1)
+ phb->init_m64 = pnv_ioda1_init_m64;
+ else
+ phb->init_m64 = pnv_ioda2_init_m64;
+ phb->reserve_m64_pe = pnv_ioda_reserve_m64_pe;
+ phb->pick_m64_pe = pnv_ioda_pick_m64_pe;
}
static void pnv_ioda_freeze_pe(struct pnv_phb *phb, int pe_no)
@@ -456,7 +538,7 @@ static int pnv_ioda_get_pe_state(struct pnv_phb *phb, int pe_no)
s64 rc;
/* Sanity check on PE number */
- if (pe_no < 0 || pe_no >= phb->ioda.total_pe)
+ if (pe_no < 0 || pe_no >= phb->ioda.total_pe_num)
return OPAL_EEH_STOPPED_PERM_UNAVAIL;
/*
@@ -808,44 +890,6 @@ out:
return 0;
}
-static void pnv_ioda_link_pe_by_weight(struct pnv_phb *phb,
- struct pnv_ioda_pe *pe)
-{
- struct pnv_ioda_pe *lpe;
-
- list_for_each_entry(lpe, &phb->ioda.pe_dma_list, dma_link) {
- if (lpe->dma_weight < pe->dma_weight) {
- list_add_tail(&pe->dma_link, &lpe->dma_link);
- return;
- }
- }
- list_add_tail(&pe->dma_link, &phb->ioda.pe_dma_list);
-}
-
-static unsigned int pnv_ioda_dma_weight(struct pci_dev *dev)
-{
- /* This is quite simplistic. The "base" weight of a device
- * is 10. 0 means no DMA is to be accounted for it.
- */
-
- /* If it's a bridge, no DMA */
- if (dev->hdr_type != PCI_HEADER_TYPE_NORMAL)
- return 0;
-
- /* Reduce the weight of slow USB controllers */
- if (dev->class == PCI_CLASS_SERIAL_USB_UHCI ||
- dev->class == PCI_CLASS_SERIAL_USB_OHCI ||
- dev->class == PCI_CLASS_SERIAL_USB_EHCI)
- return 3;
-
- /* Increase the weight of RAID (includes Obsidian) */
- if ((dev->class >> 8) == PCI_CLASS_STORAGE_RAID)
- return 15;
-
- /* Default */
- return 10;
-}
-
#ifdef CONFIG_PCI_IOV
static int pnv_pci_vf_resource_shift(struct pci_dev *dev, int offset)
{
@@ -919,7 +963,6 @@ static struct pnv_ioda_pe *pnv_ioda_setup_dev_PE(struct pci_dev *dev)
struct pnv_phb *phb = hose->private_data;
struct pci_dn *pdn = pci_get_pdn(dev);
struct pnv_ioda_pe *pe;
- int pe_num;
if (!pdn) {
pr_err("%s: Device tree node not associated properly\n",
@@ -929,8 +972,8 @@ static struct pnv_ioda_pe *pnv_ioda_setup_dev_PE(struct pci_dev *dev)
if (pdn->pe_number != IODA_INVALID_PE)
return NULL;
- pe_num = pnv_ioda_alloc_pe(phb);
- if (pe_num == IODA_INVALID_PE) {
+ pe = pnv_ioda_alloc_pe(phb);
+ if (!pe) {
pr_warning("%s: Not enough PE# available, disabling device\n",
pci_name(dev));
return NULL;
@@ -943,14 +986,12 @@ static struct pnv_ioda_pe *pnv_ioda_setup_dev_PE(struct pci_dev *dev)
*
* At some point we want to remove the PDN completely anyways
*/
- pe = &phb->ioda.pe_array[pe_num];
pci_dev_get(dev);
pdn->pcidev = dev;
- pdn->pe_number = pe_num;
+ pdn->pe_number = pe->pe_number;
pe->flags = PNV_IODA_PE_DEV;
pe->pdev = dev;
pe->pbus = NULL;
- pe->tce32_seg = -1;
pe->mve_number = -1;
pe->rid = dev->bus->number << 8 | pdn->devfn;
@@ -958,23 +999,15 @@ static struct pnv_ioda_pe *pnv_ioda_setup_dev_PE(struct pci_dev *dev)
if (pnv_ioda_configure_pe(phb, pe)) {
/* XXX What do we do here ? */
- if (pe_num)
- pnv_ioda_free_pe(phb, pe_num);
+ pnv_ioda_free_pe(pe);
pdn->pe_number = IODA_INVALID_PE;
pe->pdev = NULL;
pci_dev_put(dev);
return NULL;
}
- /* Assign a DMA weight to the device */
- pe->dma_weight = pnv_ioda_dma_weight(dev);
- if (pe->dma_weight != 0) {
- phb->ioda.dma_weight += pe->dma_weight;
- phb->ioda.dma_pe_count++;
- }
-
- /* Link the PE */
- pnv_ioda_link_pe_by_weight(phb, pe);
+ /* Put PE to the list */
+ list_add_tail(&pe->list, &phb->ioda.pe_list);
return pe;
}
@@ -993,7 +1026,6 @@ static void pnv_ioda_setup_same_PE(struct pci_bus *bus, struct pnv_ioda_pe *pe)
}
pdn->pcidev = dev;
pdn->pe_number = pe->pe_number;
- pe->dma_weight += pnv_ioda_dma_weight(dev);
if ((pe->flags & PNV_IODA_PE_BUS_ALL) && dev->subordinate)
pnv_ioda_setup_same_PE(dev->subordinate, pe);
}
@@ -1005,49 +1037,44 @@ static void pnv_ioda_setup_same_PE(struct pci_bus *bus, struct pnv_ioda_pe *pe)
* subordinate PCI devices and buses. The second type of PE is normally
 * originated by PCIe-to-PCI bridges or PLX switch downstream ports.
*/
-static void pnv_ioda_setup_bus_PE(struct pci_bus *bus, bool all)
+static struct pnv_ioda_pe *pnv_ioda_setup_bus_PE(struct pci_bus *bus, bool all)
{
struct pci_controller *hose = pci_bus_to_host(bus);
struct pnv_phb *phb = hose->private_data;
- struct pnv_ioda_pe *pe;
- int pe_num = IODA_INVALID_PE;
+ struct pnv_ioda_pe *pe = NULL;
/* Check if PE is determined by M64 */
if (phb->pick_m64_pe)
- pe_num = phb->pick_m64_pe(bus, all);
+ pe = phb->pick_m64_pe(bus, all);
/* The PE number isn't pinned by M64 */
- if (pe_num == IODA_INVALID_PE)
- pe_num = pnv_ioda_alloc_pe(phb);
+ if (!pe)
+ pe = pnv_ioda_alloc_pe(phb);
- if (pe_num == IODA_INVALID_PE) {
+ if (!pe) {
pr_warning("%s: Not enough PE# available for PCI bus %04x:%02x\n",
__func__, pci_domain_nr(bus), bus->number);
- return;
+ return NULL;
}
- pe = &phb->ioda.pe_array[pe_num];
pe->flags |= (all ? PNV_IODA_PE_BUS_ALL : PNV_IODA_PE_BUS);
pe->pbus = bus;
pe->pdev = NULL;
- pe->tce32_seg = -1;
pe->mve_number = -1;
pe->rid = bus->busn_res.start << 8;
- pe->dma_weight = 0;
if (all)
pe_info(pe, "Secondary bus %d..%d associated with PE#%d\n",
- bus->busn_res.start, bus->busn_res.end, pe_num);
+ bus->busn_res.start, bus->busn_res.end, pe->pe_number);
else
pe_info(pe, "Secondary bus %d associated with PE#%d\n",
- bus->busn_res.start, pe_num);
+ bus->busn_res.start, pe->pe_number);
if (pnv_ioda_configure_pe(phb, pe)) {
/* XXX What do we do here ? */
- if (pe_num)
- pnv_ioda_free_pe(phb, pe_num);
+ pnv_ioda_free_pe(pe);
pe->pbus = NULL;
- return;
+ return NULL;
}
/* Associate it with all child devices */
@@ -1056,16 +1083,7 @@ static void pnv_ioda_setup_bus_PE(struct pci_bus *bus, bool all)
/* Put PE to the list */
list_add_tail(&pe->list, &phb->ioda.pe_list);
- /* Account for one DMA PE if at least one DMA capable device exist
- * below the bridge
- */
- if (pe->dma_weight != 0) {
- phb->ioda.dma_weight += pe->dma_weight;
- phb->ioda.dma_pe_count++;
- }
-
- /* Link the PE */
- pnv_ioda_link_pe_by_weight(phb, pe);
+ return pe;
}
static struct pnv_ioda_pe *pnv_ioda_setup_npu_PE(struct pci_dev *npu_pdev)
@@ -1088,7 +1106,7 @@ static struct pnv_ioda_pe *pnv_ioda_setup_npu_PE(struct pci_dev *npu_pdev)
* same GPU get assigned the same PE.
*/
gpu_pdev = pnv_pci_get_gpu_dev(npu_pdev);
- for (pe_num = 0; pe_num < phb->ioda.total_pe; pe_num++) {
+ for (pe_num = 0; pe_num < phb->ioda.total_pe_num; pe_num++) {
pe = &phb->ioda.pe_array[pe_num];
if (!pe->pdev)
continue;
@@ -1106,7 +1124,6 @@ static struct pnv_ioda_pe *pnv_ioda_setup_npu_PE(struct pci_dev *npu_pdev)
rid = npu_pdev->bus->number << 8 | npu_pdn->devfn;
npu_pdn->pcidev = npu_pdev;
npu_pdn->pe_number = pe_num;
- pe->dma_weight += pnv_ioda_dma_weight(npu_pdev);
phb->ioda.pe_rmap[rid] = pe->pe_number;
/* Map the PE to this link */
@@ -1378,7 +1395,7 @@ static void pnv_ioda_release_vf_PE(struct pci_dev *pdev)
pnv_ioda_deconfigure_pe(phb, pe);
- pnv_ioda_free_pe(phb, pe->pe_number);
+ pnv_ioda_free_pe(pe);
}
}
@@ -1387,6 +1404,7 @@ void pnv_pci_sriov_disable(struct pci_dev *pdev)
struct pci_bus *bus;
struct pci_controller *hose;
struct pnv_phb *phb;
+ struct pnv_ioda_pe *pe;
struct pci_dn *pdn;
struct pci_sriov *iov;
u16 num_vfs, i;
@@ -1411,8 +1429,11 @@ void pnv_pci_sriov_disable(struct pci_dev *pdev)
/* Release PE numbers */
if (pdn->m64_single_mode) {
for (i = 0; i < num_vfs; i++) {
- if (pdn->pe_num_map[i] != IODA_INVALID_PE)
- pnv_ioda_free_pe(phb, pdn->pe_num_map[i]);
+ if (pdn->pe_num_map[i] == IODA_INVALID_PE)
+ continue;
+
+ pe = &phb->ioda.pe_array[pdn->pe_num_map[i]];
+ pnv_ioda_free_pe(pe);
}
} else
bitmap_clear(phb->ioda.pe_alloc, *pdn->pe_num_map, num_vfs);
@@ -1454,7 +1475,6 @@ static void pnv_ioda_setup_vf_PE(struct pci_dev *pdev, u16 num_vfs)
pe->flags = PNV_IODA_PE_VF;
pe->pbus = NULL;
pe->parent_dev = pdev;
- pe->tce32_seg = -1;
pe->mve_number = -1;
pe->rid = (pci_iov_virtfn_bus(pdev, vf_index) << 8) |
pci_iov_virtfn_devfn(pdev, vf_index);
@@ -1466,8 +1486,7 @@ static void pnv_ioda_setup_vf_PE(struct pci_dev *pdev, u16 num_vfs)
if (pnv_ioda_configure_pe(phb, pe)) {
/* XXX What do we do here ? */
- if (pe_num)
- pnv_ioda_free_pe(phb, pe_num);
+ pnv_ioda_free_pe(pe);
pe->pdev = NULL;
continue;
}
@@ -1486,6 +1505,7 @@ int pnv_pci_sriov_enable(struct pci_dev *pdev, u16 num_vfs)
struct pci_bus *bus;
struct pci_controller *hose;
struct pnv_phb *phb;
+ struct pnv_ioda_pe *pe;
struct pci_dn *pdn;
int ret;
u16 i;
@@ -1528,18 +1548,20 @@ int pnv_pci_sriov_enable(struct pci_dev *pdev, u16 num_vfs)
/* Calculate available PE for required VFs */
if (pdn->m64_single_mode) {
for (i = 0; i < num_vfs; i++) {
- pdn->pe_num_map[i] = pnv_ioda_alloc_pe(phb);
- if (pdn->pe_num_map[i] == IODA_INVALID_PE) {
+ pe = pnv_ioda_alloc_pe(phb);
+ if (!pe) {
ret = -EBUSY;
goto m64_failed;
}
+
+ pdn->pe_num_map[i] = pe->pe_number;
}
} else {
mutex_lock(&phb->ioda.pe_alloc_mutex);
*pdn->pe_num_map = bitmap_find_next_zero_area(
- phb->ioda.pe_alloc, phb->ioda.total_pe,
+ phb->ioda.pe_alloc, phb->ioda.total_pe_num,
0, num_vfs, 0);
- if (*pdn->pe_num_map >= phb->ioda.total_pe) {
+ if (*pdn->pe_num_map >= phb->ioda.total_pe_num) {
mutex_unlock(&phb->ioda.pe_alloc_mutex);
dev_info(&pdev->dev, "Failed to enable VF%d\n", num_vfs);
kfree(pdn->pe_num_map);
@@ -1577,8 +1599,11 @@ int pnv_pci_sriov_enable(struct pci_dev *pdev, u16 num_vfs)
m64_failed:
if (pdn->m64_single_mode) {
for (i = 0; i < num_vfs; i++) {
- if (pdn->pe_num_map[i] != IODA_INVALID_PE)
- pnv_ioda_free_pe(phb, pdn->pe_num_map[i]);
+ if (pdn->pe_num_map[i] == IODA_INVALID_PE)
+ continue;
+
+ pe = &phb->ioda.pe_array[pdn->pe_num_map[i]];
+ pnv_ioda_free_pe(pe);
}
} else
bitmap_clear(phb->ioda.pe_alloc, *pdn->pe_num_map, num_vfs);
@@ -1640,8 +1665,6 @@ static int pnv_pci_ioda_dma_set_mask(struct pci_dev *pdev, u64 dma_mask)
struct pnv_ioda_pe *pe;
uint64_t top;
bool bypass = false;
- struct pci_dev *linked_npu_dev;
- int i;
if (WARN_ON(!pdn || pdn->pe_number == IODA_INVALID_PE))
		return -ENODEV;
@@ -1662,15 +1685,7 @@ static int pnv_pci_ioda_dma_set_mask(struct pci_dev *pdev, u64 dma_mask)
*pdev->dev.dma_mask = dma_mask;
/* Update peer npu devices */
- if (pe->flags & PNV_IODA_PE_PEER)
- for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
- if (!pe->peers[i])
- continue;
-
- linked_npu_dev = pe->peers[i]->pdev;
- if (dma_get_mask(&linked_npu_dev->dev) != dma_mask)
- dma_set_mask(&linked_npu_dev->dev, dma_mask);
- }
+ pnv_npu_try_dma_set_bypass(pdev, bypass);
return 0;
}
@@ -1811,28 +1826,34 @@ static struct iommu_table_ops pnv_ioda1_iommu_ops = {
.get = pnv_tce_get,
};
-static inline void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_ioda_pe *pe)
+#define TCE_KILL_INVAL_ALL PPC_BIT(0)
+#define TCE_KILL_INVAL_PE PPC_BIT(1)
+#define TCE_KILL_INVAL_TCE PPC_BIT(2)
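/*
 * Editorial note, not part of the patch: PPC_BIT() uses IBM MSB-0 bit
 * numbering, i.e. PPC_BIT(n) == 1UL << (63 - n) on ppc64.  So
 * TCE_KILL_INVAL_PE is 1UL << 62 (the old open-coded 0x4ull << 60) and
 * TCE_KILL_INVAL_TCE is 1UL << 61 (the old 0x2ull << 60); the constants
 * only gain names here, their values are unchanged.
 */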
+
+void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool rm)
+{
+ const unsigned long val = TCE_KILL_INVAL_ALL;
+
+ mb(); /* Ensure previous TCE table stores are visible */
+ if (rm)
+ __raw_rm_writeq(cpu_to_be64(val),
+ (__be64 __iomem *)
+ phb->ioda.tce_inval_reg_phys);
+ else
+ __raw_writeq(cpu_to_be64(val), phb->ioda.tce_inval_reg);
+}
+
+static inline void pnv_pci_ioda2_tce_invalidate_pe(struct pnv_ioda_pe *pe)
{
/* 01xb - invalidate TCEs that match the specified PE# */
- unsigned long val = (0x4ull << 60) | (pe->pe_number & 0xFF);
+ unsigned long val = TCE_KILL_INVAL_PE | (pe->pe_number & 0xFF);
struct pnv_phb *phb = pe->phb;
- struct pnv_ioda_pe *npe;
- int i;
if (!phb->ioda.tce_inval_reg)
return;
mb(); /* Ensure above stores are visible */
__raw_writeq(cpu_to_be64(val), phb->ioda.tce_inval_reg);
-
- if (pe->flags & PNV_IODA_PE_PEER)
- for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
- npe = pe->peers[i];
- if (!npe || npe->phb->type != PNV_PHB_NPU)
- continue;
-
- pnv_npu_tce_invalidate_entire(npe);
- }
}
static void pnv_pci_ioda2_do_tce_invalidate(unsigned pe_number, bool rm,
@@ -1842,7 +1863,7 @@ static void pnv_pci_ioda2_do_tce_invalidate(unsigned pe_number, bool rm,
unsigned long start, end, inc;
/* We'll invalidate DMA address in PE scope */
- start = 0x2ull << 60;
+ start = TCE_KILL_INVAL_TCE;
start |= (pe_number & 0xFF);
end = start;
@@ -1867,28 +1888,24 @@ static void pnv_pci_ioda2_tce_invalidate(struct iommu_table *tbl,
struct iommu_table_group_link *tgl;
list_for_each_entry_rcu(tgl, &tbl->it_group_list, next) {
- struct pnv_ioda_pe *npe;
struct pnv_ioda_pe *pe = container_of(tgl->table_group,
struct pnv_ioda_pe, table_group);
__be64 __iomem *invalidate = rm ?
(__be64 __iomem *)pe->phb->ioda.tce_inval_reg_phys :
pe->phb->ioda.tce_inval_reg;
- int i;
+ if (pe->phb->type == PNV_PHB_NPU) {
+ /*
+ * The NVLink hardware does not support TCE kill
+ * per TCE entry so we have to invalidate
+ * the entire cache for it.
+ */
+ pnv_pci_ioda2_tce_invalidate_entire(pe->phb, rm);
+ continue;
+ }
pnv_pci_ioda2_do_tce_invalidate(pe->pe_number, rm,
invalidate, tbl->it_page_shift,
index, npages);
-
- if (pe->flags & PNV_IODA_PE_PEER)
- /* Invalidate PEs using the same TCE table */
- for (i = 0; i < PNV_IODA_MAX_PEER_PES; i++) {
- npe = pe->peers[i];
- if (!npe || npe->phb->type != PNV_PHB_NPU)
- continue;
-
- pnv_npu_tce_invalidate(npe, tbl, index,
- npages, rm);
- }
}
}
@@ -1945,56 +1962,140 @@ static struct iommu_table_ops pnv_ioda2_iommu_ops = {
.free = pnv_ioda2_table_free,
};
-static void pnv_pci_ioda_setup_dma_pe(struct pnv_phb *phb,
- struct pnv_ioda_pe *pe, unsigned int base,
- unsigned int segs)
+static int pnv_pci_ioda_dev_dma_weight(struct pci_dev *dev, void *data)
+{
+ unsigned int *weight = (unsigned int *)data;
+
+	/* This is quite simplistic. The "base" weight of a device
+	 * is 10; a weight of 0 means no DMA is accounted for the device.
+ */
+ if (dev->hdr_type != PCI_HEADER_TYPE_NORMAL)
+ return 0;
+
+ if (dev->class == PCI_CLASS_SERIAL_USB_UHCI ||
+ dev->class == PCI_CLASS_SERIAL_USB_OHCI ||
+ dev->class == PCI_CLASS_SERIAL_USB_EHCI)
+ *weight += 3;
+ else if ((dev->class >> 8) == PCI_CLASS_STORAGE_RAID)
+ *weight += 15;
+ else
+ *weight += 10;
+
+ return 0;
+}
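/*
 * Worked example (editorial): a PE holding one RAID adapter and two
 * ordinary endpoints accumulates a weight of 15 + 10 + 10 = 35, while a
 * lone USB EHCI controller contributes only 3.
 */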
+
+static unsigned int pnv_pci_ioda_pe_dma_weight(struct pnv_ioda_pe *pe)
+{
+ unsigned int weight = 0;
+
+ /* SRIOV VF has same DMA32 weight as its PF */
+#ifdef CONFIG_PCI_IOV
+ if ((pe->flags & PNV_IODA_PE_VF) && pe->parent_dev) {
+ pnv_pci_ioda_dev_dma_weight(pe->parent_dev, &weight);
+ return weight;
+ }
+#endif
+
+ if ((pe->flags & PNV_IODA_PE_DEV) && pe->pdev) {
+ pnv_pci_ioda_dev_dma_weight(pe->pdev, &weight);
+ } else if ((pe->flags & PNV_IODA_PE_BUS) && pe->pbus) {
+ struct pci_dev *pdev;
+
+ list_for_each_entry(pdev, &pe->pbus->devices, bus_list)
+ pnv_pci_ioda_dev_dma_weight(pdev, &weight);
+ } else if ((pe->flags & PNV_IODA_PE_BUS_ALL) && pe->pbus) {
+ pci_walk_bus(pe->pbus, pnv_pci_ioda_dev_dma_weight, &weight);
+ }
+
+ return weight;
+}
+
+static void pnv_pci_ioda1_setup_dma_pe(struct pnv_phb *phb,
+ struct pnv_ioda_pe *pe)
{
struct page *tce_mem = NULL;
struct iommu_table *tbl;
- unsigned int i;
+ unsigned int weight, total_weight = 0;
+ unsigned int tce32_segsz, base, segs, avail, i;
int64_t rc;
void *addr;
/* XXX FIXME: Handle 64-bit only DMA devices */
/* XXX FIXME: Provide 64-bit DMA facilities & non-4K TCE tables etc.. */
/* XXX FIXME: Allocate multi-level tables on PHB3 */
+ weight = pnv_pci_ioda_pe_dma_weight(pe);
+ if (!weight)
+ return;
- /* We shouldn't already have a 32-bit DMA associated */
- if (WARN_ON(pe->tce32_seg >= 0))
+ pci_walk_bus(phb->hose->bus, pnv_pci_ioda_dev_dma_weight,
+ &total_weight);
+ segs = (weight * phb->ioda.dma32_count) / total_weight;
+ if (!segs)
+ segs = 1;
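	/*
	 * Worked example (hypothetical numbers): with dma32_count = 16,
	 * total_weight = 80 and weight = 25, this PE is granted
	 * (25 * 16) / 80 = 5 contiguous DMA32 segments.
	 */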
+
+ /*
+	 * Allocate contiguous DMA32 segments. We begin with the expected
+	 * number of segments and, on each further attempt, decrease the
+	 * request by one segment until an allocation succeeds.
+ */
+ do {
+ for (base = 0; base <= phb->ioda.dma32_count - segs; base++) {
+ for (avail = 0, i = base; i < base + segs; i++) {
+ if (phb->ioda.dma32_segmap[i] ==
+ IODA_INVALID_PE)
+ avail++;
+ }
+
+ if (avail == segs)
+ goto found;
+ }
+ } while (--segs);
+
+ if (!segs) {
+ pe_warn(pe, "No available DMA32 segments\n");
return;
+ }
+found:
tbl = pnv_pci_table_alloc(phb->hose->node);
iommu_register_group(&pe->table_group, phb->hose->global_number,
pe->pe_number);
pnv_pci_link_table_and_group(phb->hose->node, 0, tbl, &pe->table_group);
/* Grab a 32-bit TCE table */
- pe->tce32_seg = base;
+ pe_info(pe, "DMA weight %d (%d), assigned (%d) %d DMA32 segments\n",
+ weight, total_weight, base, segs);
pe_info(pe, " Setting up 32-bit TCE table at %08x..%08x\n",
- (base << 28), ((base + segs) << 28) - 1);
+ base * PNV_IODA1_DMA32_SEGSIZE,
+ (base + segs) * PNV_IODA1_DMA32_SEGSIZE - 1);
/* XXX Currently, we allocate one big contiguous table for the
* TCEs. We only really need one chunk per 256M of TCE space
* (ie per segment) but that's an optimization for later, it
* requires some added smarts with our get/put_tce implementation
+ *
+ * Each TCE page is 4KB in size and each TCE entry occupies 8
+ * bytes
*/
+ tce32_segsz = PNV_IODA1_DMA32_SEGSIZE >> (IOMMU_PAGE_SHIFT_4K - 3);
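	/*
	 * Worked arithmetic (editorial): PNV_IODA1_DMA32_SEGSIZE is 256MB
	 * (the old code's "base << 28"), so one segment covers
	 * 256MB / 4KB = 65536 TCEs; at 8 bytes each that is 512KB of table
	 * per segment, i.e. SEGSIZE >> (IOMMU_PAGE_SHIFT_4K - 3).
	 */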
tce_mem = alloc_pages_node(phb->hose->node, GFP_KERNEL,
- get_order(TCE32_TABLE_SIZE * segs));
+ get_order(tce32_segsz * segs));
if (!tce_mem) {
pe_err(pe, " Failed to allocate a 32-bit TCE memory\n");
goto fail;
}
addr = page_address(tce_mem);
- memset(addr, 0, TCE32_TABLE_SIZE * segs);
+ memset(addr, 0, tce32_segsz * segs);
/* Configure HW */
for (i = 0; i < segs; i++) {
rc = opal_pci_map_pe_dma_window(phb->opal_id,
pe->pe_number,
base + i, 1,
- __pa(addr) + TCE32_TABLE_SIZE * i,
- TCE32_TABLE_SIZE, 0x1000);
+ __pa(addr) + tce32_segsz * i,
+ tce32_segsz, IOMMU_PAGE_SIZE_4K);
if (rc) {
pe_err(pe, " Failed to configure 32-bit TCE table,"
" err %ld\n", rc);
@@ -2002,9 +2103,14 @@ static void pnv_pci_ioda_setup_dma_pe(struct pnv_phb *phb,
}
}
+ /* Setup DMA32 segment mapping */
+ for (i = base; i < base + segs; i++)
+ phb->ioda.dma32_segmap[i] = pe->pe_number;
+
/* Setup linux iommu table */
- pnv_pci_setup_iommu_table(tbl, addr, TCE32_TABLE_SIZE * segs,
- base << 28, IOMMU_PAGE_SHIFT_4K);
+ pnv_pci_setup_iommu_table(tbl, addr, tce32_segsz * segs,
+ base * PNV_IODA1_DMA32_SEGSIZE,
+ IOMMU_PAGE_SHIFT_4K);
/* OPAL variant of P7IOC SW invalidated TCEs */
if (phb->ioda.tce_inval_reg)
@@ -2031,10 +2137,8 @@ static void pnv_pci_ioda_setup_dma_pe(struct pnv_phb *phb,
return;
fail:
/* XXX Failure: Try to fallback to 64-bit only ? */
- if (pe->tce32_seg >= 0)
- pe->tce32_seg = -1;
if (tce_mem)
- __free_pages(tce_mem, get_order(TCE32_TABLE_SIZE * segs));
+ __free_pages(tce_mem, get_order(tce32_segsz * segs));
if (tbl) {
pnv_pci_unlink_table_and_group(tbl, &pe->table_group);
iommu_free_table(tbl, "pnv");
@@ -2075,7 +2179,7 @@ static long pnv_pci_ioda2_set_window(struct iommu_table_group *table_group,
pnv_pci_link_table_and_group(phb->hose->node, num,
tbl, &pe->table_group);
- pnv_pci_ioda2_tce_invalidate_entire(pe);
+ pnv_pci_ioda2_tce_invalidate_pe(pe);
return 0;
}
@@ -2219,7 +2323,7 @@ static long pnv_pci_ioda2_unset_window(struct iommu_table_group *table_group,
if (ret)
pe_warn(pe, "Unmapping failed, ret = %ld\n", ret);
else
- pnv_pci_ioda2_tce_invalidate_entire(pe);
+ pnv_pci_ioda2_tce_invalidate_pe(pe);
pnv_pci_unlink_table_and_group(table_group->tables[num], table_group);
@@ -2288,6 +2392,116 @@ static struct iommu_table_group_ops pnv_pci_ioda2_ops = {
.take_ownership = pnv_ioda2_take_ownership,
.release_ownership = pnv_ioda2_release_ownership,
};
+
+static int gpe_table_group_to_npe_cb(struct device *dev, void *opaque)
+{
+ struct pci_controller *hose;
+ struct pnv_phb *phb;
+ struct pnv_ioda_pe **ptmppe = opaque;
+ struct pci_dev *pdev = container_of(dev, struct pci_dev, dev);
+ struct pci_dn *pdn = pci_get_pdn(pdev);
+
+ if (!pdn || pdn->pe_number == IODA_INVALID_PE)
+ return 0;
+
+ hose = pci_bus_to_host(pdev->bus);
+ phb = hose->private_data;
+ if (phb->type != PNV_PHB_NPU)
+ return 0;
+
+ *ptmppe = &phb->ioda.pe_array[pdn->pe_number];
+
+ return 1;
+}
+
+/*
+ * This returns PE of associated NPU.
+ * This assumes that NPU is in the same IOMMU group with GPU and there is
+ * no other PEs.
+ */
+static struct pnv_ioda_pe *gpe_table_group_to_npe(
+ struct iommu_table_group *table_group)
+{
+ struct pnv_ioda_pe *npe = NULL;
+ int ret = iommu_group_for_each_dev(table_group->group, &npe,
+ gpe_table_group_to_npe_cb);
+
+ BUG_ON(!ret || !npe);
+
+ return npe;
+}
+
+static long pnv_pci_ioda2_npu_set_window(struct iommu_table_group *table_group,
+ int num, struct iommu_table *tbl)
+{
+ long ret = pnv_pci_ioda2_set_window(table_group, num, tbl);
+
+ if (ret)
+ return ret;
+
+ ret = pnv_npu_set_window(gpe_table_group_to_npe(table_group), num, tbl);
+ if (ret)
+ pnv_pci_ioda2_unset_window(table_group, num);
+
+ return ret;
+}
+
+static long pnv_pci_ioda2_npu_unset_window(
+ struct iommu_table_group *table_group,
+ int num)
+{
+ long ret = pnv_pci_ioda2_unset_window(table_group, num);
+
+ if (ret)
+ return ret;
+
+ return pnv_npu_unset_window(gpe_table_group_to_npe(table_group), num);
+}
+
+static void pnv_ioda2_npu_take_ownership(struct iommu_table_group *table_group)
+{
+ /*
+ * Detach NPU first as pnv_ioda2_take_ownership() will destroy
+ * the iommu_table if 32bit DMA is enabled.
+ */
+ pnv_npu_take_ownership(gpe_table_group_to_npe(table_group));
+ pnv_ioda2_take_ownership(table_group);
+}
+
+static struct iommu_table_group_ops pnv_pci_ioda2_npu_ops = {
+ .get_table_size = pnv_pci_ioda2_get_table_size,
+ .create_table = pnv_pci_ioda2_create_table,
+ .set_window = pnv_pci_ioda2_npu_set_window,
+ .unset_window = pnv_pci_ioda2_npu_unset_window,
+ .take_ownership = pnv_ioda2_npu_take_ownership,
+ .release_ownership = pnv_ioda2_release_ownership,
+};
+
+static void pnv_pci_ioda_setup_iommu_api(void)
+{
+ struct pci_controller *hose, *tmp;
+ struct pnv_phb *phb;
+ struct pnv_ioda_pe *pe, *gpe;
+
+ /*
+ * Now we have all PHBs discovered, time to add NPU devices to
+ * the corresponding IOMMU groups.
+ */
+ list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
+ phb = hose->private_data;
+
+ if (phb->type != PNV_PHB_NPU)
+ continue;
+
+ list_for_each_entry(pe, &phb->ioda.pe_list, list) {
+ gpe = pnv_pci_npu_setup_iommu(pe);
+ if (gpe)
+ gpe->table_group.ops = &pnv_pci_ioda2_npu_ops;
+ }
+ }
+}
+#else /* !CONFIG_IOMMU_API */
+static void pnv_pci_ioda_setup_iommu_api(void) { };
#endif
static void pnv_pci_ioda_setup_opal_tce_kill(struct pnv_phb *phb)
@@ -2443,10 +2657,6 @@ static void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb,
{
int64_t rc;
- /* We shouldn't already have a 32-bit DMA associated */
- if (WARN_ON(pe->tce32_seg >= 0))
- return;
-
/* TVE #1 is selected by PCI address bit 59 */
pe->tce_bypass_base = 1ull << 59;
@@ -2454,7 +2664,6 @@ static void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb,
pe->pe_number);
/* The PE will reserve all possible 32-bits space */
- pe->tce32_seg = 0;
pe_info(pe, "Setting up 32-bit TCE table at 0..%08x\n",
phb->ioda.m32_pci_base);
@@ -2470,11 +2679,8 @@ static void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb,
#endif
rc = pnv_pci_ioda2_setup_default_config(pe);
- if (rc) {
- if (pe->tce32_seg >= 0)
- pe->tce32_seg = -1;
+ if (rc)
return;
- }
if (pe->flags & PNV_IODA_PE_DEV)
iommu_add_device(&pe->pdev->dev);
@@ -2485,47 +2691,24 @@ static void pnv_pci_ioda2_setup_dma_pe(struct pnv_phb *phb,
static void pnv_ioda_setup_dma(struct pnv_phb *phb)
{
struct pci_controller *hose = phb->hose;
- unsigned int residual, remaining, segs, tw, base;
struct pnv_ioda_pe *pe;
+ unsigned int weight;
/* If we have more PE# than segments available, hand out one
* per PE until we run out and let the rest fail. If not,
* then we assign at least one segment per PE, plus more based
* on the amount of devices under that PE
*/
- if (phb->ioda.dma_pe_count > phb->ioda.tce32_count)
- residual = 0;
- else
- residual = phb->ioda.tce32_count -
- phb->ioda.dma_pe_count;
-
- pr_info("PCI: Domain %04x has %ld available 32-bit DMA segments\n",
- hose->global_number, phb->ioda.tce32_count);
- pr_info("PCI: %d PE# for a total weight of %d\n",
- phb->ioda.dma_pe_count, phb->ioda.dma_weight);
+ pr_info("PCI: Domain %04x has %d available 32-bit DMA segments\n",
+ hose->global_number, phb->ioda.dma32_count);
pnv_pci_ioda_setup_opal_tce_kill(phb);
- /* Walk our PE list and configure their DMA segments, hand them
- * out one base segment plus any residual segments based on
- * weight
- */
- remaining = phb->ioda.tce32_count;
- tw = phb->ioda.dma_weight;
- base = 0;
- list_for_each_entry(pe, &phb->ioda.pe_dma_list, dma_link) {
- if (!pe->dma_weight)
- continue;
- if (!remaining) {
- pe_warn(pe, "No DMA32 resources available\n");
+ /* Walk our PE list and configure their DMA segments */
+ list_for_each_entry(pe, &phb->ioda.pe_list, list) {
+ weight = pnv_pci_ioda_pe_dma_weight(pe);
+ if (!weight)
continue;
- }
- segs = 1;
- if (residual) {
- segs += ((pe->dma_weight * residual) + (tw / 2)) / tw;
- if (segs > remaining)
- segs = remaining;
- }
/*
* For IODA2 compliant PHB3, we needn't care about the weight.
@@ -2533,12 +2716,9 @@ static void pnv_ioda_setup_dma(struct pnv_phb *phb)
* the specific PE.
*/
if (phb->type == PNV_PHB_IODA1) {
- pe_info(pe, "DMA weight %d, assigned %d DMA32 segments\n",
- pe->dma_weight, segs);
- pnv_pci_ioda_setup_dma_pe(phb, pe, base, segs);
+ pnv_pci_ioda1_setup_dma_pe(phb, pe);
} else if (phb->type == PNV_PHB_IODA2) {
pe_info(pe, "Assign DMA32 space\n");
- segs = 0;
pnv_pci_ioda2_setup_dma_pe(phb, pe);
} else if (phb->type == PNV_PHB_NPU) {
/*
@@ -2548,9 +2728,6 @@ static void pnv_ioda_setup_dma(struct pnv_phb *phb)
* as the PHB3 TVT.
*/
}
-
- remaining -= segs;
- base += segs;
}
}
@@ -2858,7 +3035,7 @@ static void pnv_pci_ioda_fixup_iov_resources(struct pci_dev *pdev)
pdn->m64_single_mode = false;
total_vfs = pci_sriov_get_totalvfs(pdev);
- mul = phb->ioda.total_pe;
+ mul = phb->ioda.total_pe_num;
total_vf_bar_sz = 0;
for (i = 0; i < PCI_SRIOV_NUM_BARS; i++) {
@@ -2929,19 +3106,72 @@ truncate_iov:
}
#endif /* CONFIG_PCI_IOV */
+static void pnv_ioda_setup_pe_res(struct pnv_ioda_pe *pe,
+ struct resource *res)
+{
+ struct pnv_phb *phb = pe->phb;
+ struct pci_bus_region region;
+ int index;
+ int64_t rc;
+
+ if (!res || !res->flags || res->start > res->end)
+ return;
+
+ if (res->flags & IORESOURCE_IO) {
+ region.start = res->start - phb->ioda.io_pci_base;
+ region.end = res->end - phb->ioda.io_pci_base;
+ index = region.start / phb->ioda.io_segsize;
+
+ while (index < phb->ioda.total_pe_num &&
+ region.start <= region.end) {
+ phb->ioda.io_segmap[index] = pe->pe_number;
+ rc = opal_pci_map_pe_mmio_window(phb->opal_id,
+ pe->pe_number, OPAL_IO_WINDOW_TYPE, 0, index);
+ if (rc != OPAL_SUCCESS) {
+ pr_err("%s: Error %lld mapping IO segment#%d to PE#%d\n",
+ __func__, rc, index, pe->pe_number);
+ break;
+ }
+
+ region.start += phb->ioda.io_segsize;
+ index++;
+ }
+ } else if ((res->flags & IORESOURCE_MEM) &&
+ !pnv_pci_is_mem_pref_64(res->flags)) {
+ region.start = res->start -
+ phb->hose->mem_offset[0] -
+ phb->ioda.m32_pci_base;
+ region.end = res->end -
+ phb->hose->mem_offset[0] -
+ phb->ioda.m32_pci_base;
+ index = region.start / phb->ioda.m32_segsize;
+
+ while (index < phb->ioda.total_pe_num &&
+ region.start <= region.end) {
+ phb->ioda.m32_segmap[index] = pe->pe_number;
+ rc = opal_pci_map_pe_mmio_window(phb->opal_id,
+ pe->pe_number, OPAL_M32_WINDOW_TYPE, 0, index);
+ if (rc != OPAL_SUCCESS) {
+ pr_err("%s: Error %lld mapping M32 segment#%d to PE#%d",
+ __func__, rc, index, pe->pe_number);
+ break;
+ }
+
+ region.start += phb->ioda.m32_segsize;
+ index++;
+ }
+ }
+}
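/*
 * Worked example (hypothetical sizes): with io_segsize = 0x10000, an IO
 * window at PHB offset 0x20000-0x3ffff starts at index 2, so the loop
 * above maps segments 2 and 3 to this PE before region.start passes
 * region.end.
 */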
+
/*
 * This function is supposed to be called per PE, from top to
 * bottom. So the I/O or MMIO segment assigned to a parent PE
 * could be overridden by its child PEs if necessary.
*/
-static void pnv_ioda_setup_pe_seg(struct pci_controller *hose,
- struct pnv_ioda_pe *pe)
+static void pnv_ioda_setup_pe_seg(struct pnv_ioda_pe *pe)
{
- struct pnv_phb *phb = hose->private_data;
- struct pci_bus_region region;
- struct resource *res;
- int i, index;
- int rc;
+ struct pci_dev *pdev;
+ int i;
/*
* NOTE: We only care PCI bus based PE for now. For PCI
@@ -2950,57 +3180,20 @@ static void pnv_ioda_setup_pe_seg(struct pci_controller *hose,
*/
BUG_ON(!(pe->flags & (PNV_IODA_PE_BUS | PNV_IODA_PE_BUS_ALL)));
- pci_bus_for_each_resource(pe->pbus, res, i) {
- if (!res || !res->flags ||
- res->start > res->end)
- continue;
+ list_for_each_entry(pdev, &pe->pbus->devices, bus_list) {
+ for (i = 0; i <= PCI_ROM_RESOURCE; i++)
+ pnv_ioda_setup_pe_res(pe, &pdev->resource[i]);
- if (res->flags & IORESOURCE_IO) {
- region.start = res->start - phb->ioda.io_pci_base;
- region.end = res->end - phb->ioda.io_pci_base;
- index = region.start / phb->ioda.io_segsize;
-
- while (index < phb->ioda.total_pe &&
- region.start <= region.end) {
- phb->ioda.io_segmap[index] = pe->pe_number;
- rc = opal_pci_map_pe_mmio_window(phb->opal_id,
- pe->pe_number, OPAL_IO_WINDOW_TYPE, 0, index);
- if (rc != OPAL_SUCCESS) {
- pr_err("%s: OPAL error %d when mapping IO "
- "segment #%d to PE#%d\n",
- __func__, rc, index, pe->pe_number);
- break;
- }
-
- region.start += phb->ioda.io_segsize;
- index++;
- }
- } else if ((res->flags & IORESOURCE_MEM) &&
- !pnv_pci_is_mem_pref_64(res->flags)) {
- region.start = res->start -
- hose->mem_offset[0] -
- phb->ioda.m32_pci_base;
- region.end = res->end -
- hose->mem_offset[0] -
- phb->ioda.m32_pci_base;
- index = region.start / phb->ioda.m32_segsize;
-
- while (index < phb->ioda.total_pe &&
- region.start <= region.end) {
- phb->ioda.m32_segmap[index] = pe->pe_number;
- rc = opal_pci_map_pe_mmio_window(phb->opal_id,
- pe->pe_number, OPAL_M32_WINDOW_TYPE, 0, index);
- if (rc != OPAL_SUCCESS) {
- pr_err("%s: OPAL error %d when mapping M32 "
- "segment#%d to PE#%d",
- __func__, rc, index, pe->pe_number);
- break;
- }
-
- region.start += phb->ioda.m32_segsize;
- index++;
- }
- }
+ /*
+ * If the PE contains all subordinate PCI buses, the
+ * windows of the child bridges should be mapped to
+ * the PE as well.
+ */
+ if (!(pe->flags & PNV_IODA_PE_BUS_ALL) || !pci_is_bridge(pdev))
+ continue;
+ for (i = 0; i < PCI_BRIDGE_RESOURCE_NUM; i++)
+ pnv_ioda_setup_pe_res(pe,
+ &pdev->resource[PCI_BRIDGE_RESOURCES + i]);
}
}
@@ -3018,7 +3211,7 @@ static void pnv_pci_ioda_setup_seg(void)
continue;
list_for_each_entry(pe, &phb->ioda.pe_list, list) {
- pnv_ioda_setup_pe_seg(hose, pe);
+ pnv_ioda_setup_pe_seg(pe);
}
}
}
@@ -3035,6 +3228,8 @@ static void pnv_pci_ioda_setup_DMA(void)
phb = hose->private_data;
phb->initialized = 1;
}
+
+ pnv_pci_ioda_setup_iommu_api();
}
static void pnv_pci_ioda_create_dbgfs(void)
@@ -3056,27 +3251,6 @@ static void pnv_pci_ioda_create_dbgfs(void)
#endif /* CONFIG_DEBUG_FS */
}
-static void pnv_npu_ioda_fixup(void)
-{
- bool enable_bypass;
- struct pci_controller *hose, *tmp;
- struct pnv_phb *phb;
- struct pnv_ioda_pe *pe;
-
- list_for_each_entry_safe(hose, tmp, &hose_list, list_node) {
- phb = hose->private_data;
- if (phb->type != PNV_PHB_NPU)
- continue;
-
- list_for_each_entry(pe, &phb->ioda.pe_dma_list, dma_link) {
- enable_bypass = dma_get_mask(&pe->pdev->dev) ==
- DMA_BIT_MASK(64);
- pnv_npu_init_dma_pe(pe);
- pnv_npu_dma_set_bypass(pe, enable_bypass);
- }
- }
-}
-
static void pnv_pci_ioda_fixup(void)
{
pnv_pci_ioda_setup_PEs();
@@ -3089,9 +3263,6 @@ static void pnv_pci_ioda_fixup(void)
eeh_init();
eeh_addr_cache_build();
#endif
-
- /* Link NPU IODA tables to their PCI devices. */
- pnv_npu_ioda_fixup();
}
/*
@@ -3195,12 +3366,6 @@ static bool pnv_pci_enable_device_hook(struct pci_dev *dev)
return true;
}
-static u32 pnv_ioda_bdfn_to_pe(struct pnv_phb *phb, struct pci_bus *bus,
- u32 devfn)
-{
- return phb->ioda.pe_rmap[(bus->number << 8) | devfn];
-}
-
static void pnv_pci_ioda_shutdown(struct pci_controller *hose)
{
struct pnv_phb *phb = hose->private_data;
@@ -3210,31 +3375,39 @@ static void pnv_pci_ioda_shutdown(struct pci_controller *hose)
}
static const struct pci_controller_ops pnv_pci_ioda_controller_ops = {
- .dma_dev_setup = pnv_pci_dma_dev_setup,
- .dma_bus_setup = pnv_pci_dma_bus_setup,
+ .dma_dev_setup = pnv_pci_dma_dev_setup,
+ .dma_bus_setup = pnv_pci_dma_bus_setup,
#ifdef CONFIG_PCI_MSI
- .setup_msi_irqs = pnv_setup_msi_irqs,
- .teardown_msi_irqs = pnv_teardown_msi_irqs,
+ .setup_msi_irqs = pnv_setup_msi_irqs,
+ .teardown_msi_irqs = pnv_teardown_msi_irqs,
#endif
- .enable_device_hook = pnv_pci_enable_device_hook,
- .window_alignment = pnv_pci_window_alignment,
- .reset_secondary_bus = pnv_pci_reset_secondary_bus,
- .dma_set_mask = pnv_pci_ioda_dma_set_mask,
- .dma_get_required_mask = pnv_pci_ioda_dma_get_required_mask,
- .shutdown = pnv_pci_ioda_shutdown,
+ .enable_device_hook = pnv_pci_enable_device_hook,
+ .window_alignment = pnv_pci_window_alignment,
+ .reset_secondary_bus = pnv_pci_reset_secondary_bus,
+ .dma_set_mask = pnv_pci_ioda_dma_set_mask,
+ .dma_get_required_mask = pnv_pci_ioda_dma_get_required_mask,
+ .shutdown = pnv_pci_ioda_shutdown,
};
+static int pnv_npu_dma_set_mask(struct pci_dev *npdev, u64 dma_mask)
+{
+ dev_err_once(&npdev->dev,
+ "%s operation unsupported for NVLink devices\n",
+ __func__);
+ return -EPERM;
+}
+
static const struct pci_controller_ops pnv_npu_ioda_controller_ops = {
- .dma_dev_setup = pnv_pci_dma_dev_setup,
+ .dma_dev_setup = pnv_pci_dma_dev_setup,
#ifdef CONFIG_PCI_MSI
- .setup_msi_irqs = pnv_setup_msi_irqs,
- .teardown_msi_irqs = pnv_teardown_msi_irqs,
+ .setup_msi_irqs = pnv_setup_msi_irqs,
+ .teardown_msi_irqs = pnv_teardown_msi_irqs,
#endif
- .enable_device_hook = pnv_pci_enable_device_hook,
- .window_alignment = pnv_pci_window_alignment,
- .reset_secondary_bus = pnv_pci_reset_secondary_bus,
- .dma_set_mask = pnv_npu_dma_set_mask,
- .shutdown = pnv_pci_ioda_shutdown,
+ .enable_device_hook = pnv_pci_enable_device_hook,
+ .window_alignment = pnv_pci_window_alignment,
+ .reset_secondary_bus = pnv_pci_reset_secondary_bus,
+ .dma_set_mask = pnv_npu_dma_set_mask,
+ .shutdown = pnv_pci_ioda_shutdown,
};
static void __init pnv_pci_init_ioda_phb(struct device_node *np,
@@ -3242,10 +3415,12 @@ static void __init pnv_pci_init_ioda_phb(struct device_node *np,
{
struct pci_controller *hose;
struct pnv_phb *phb;
- unsigned long size, m32map_off, pemap_off, iomap_off = 0;
+ unsigned long size, m64map_off, m32map_off, pemap_off;
+ unsigned long iomap_off = 0, dma32map_off = 0;
const __be64 *prop64;
const __be32 *prop32;
int len;
+ unsigned int segno;
u64 phb_id;
void *aux;
long rc;
@@ -3306,13 +3481,13 @@ static void __init pnv_pci_init_ioda_phb(struct device_node *np,
pr_err(" Failed to map registers !\n");
/* Initialize more IODA stuff */
- phb->ioda.total_pe = 1;
+ phb->ioda.total_pe_num = 1;
prop32 = of_get_property(np, "ibm,opal-num-pes", NULL);
if (prop32)
- phb->ioda.total_pe = be32_to_cpup(prop32);
+ phb->ioda.total_pe_num = be32_to_cpup(prop32);
prop32 = of_get_property(np, "ibm,opal-reserved-pe", NULL);
if (prop32)
- phb->ioda.reserved_pe = be32_to_cpup(prop32);
+ phb->ioda.reserved_pe_idx = be32_to_cpup(prop32);
/* Parse 64-bit MMIO range */
pnv_ioda_parse_m64_window(phb);
@@ -3321,36 +3496,58 @@ static void __init pnv_pci_init_ioda_phb(struct device_node *np,
/* FW Has already off top 64k of M32 space (MSI space) */
phb->ioda.m32_size += 0x10000;
- phb->ioda.m32_segsize = phb->ioda.m32_size / phb->ioda.total_pe;
+ phb->ioda.m32_segsize = phb->ioda.m32_size / phb->ioda.total_pe_num;
phb->ioda.m32_pci_base = hose->mem_resources[0].start - hose->mem_offset[0];
phb->ioda.io_size = hose->pci_io_size;
- phb->ioda.io_segsize = phb->ioda.io_size / phb->ioda.total_pe;
+ phb->ioda.io_segsize = phb->ioda.io_size / phb->ioda.total_pe_num;
phb->ioda.io_pci_base = 0; /* XXX calculate this ? */
+ /* Calculate how many 32-bit TCE segments we have */
+ phb->ioda.dma32_count = phb->ioda.m32_pci_base /
+ PNV_IODA1_DMA32_SEGSIZE;
+
/* Allocate aux data & arrays. We don't have IO ports on PHB3 */
- size = _ALIGN_UP(phb->ioda.total_pe / 8, sizeof(unsigned long));
+ size = _ALIGN_UP(max_t(unsigned, phb->ioda.total_pe_num, 8) / 8,
+ sizeof(unsigned long));
+ m64map_off = size;
+ size += phb->ioda.total_pe_num * sizeof(phb->ioda.m64_segmap[0]);
m32map_off = size;
- size += phb->ioda.total_pe * sizeof(phb->ioda.m32_segmap[0]);
+ size += phb->ioda.total_pe_num * sizeof(phb->ioda.m32_segmap[0]);
if (phb->type == PNV_PHB_IODA1) {
iomap_off = size;
- size += phb->ioda.total_pe * sizeof(phb->ioda.io_segmap[0]);
+ size += phb->ioda.total_pe_num * sizeof(phb->ioda.io_segmap[0]);
+ dma32map_off = size;
+ size += phb->ioda.dma32_count *
+ sizeof(phb->ioda.dma32_segmap[0]);
}
pemap_off = size;
- size += phb->ioda.total_pe * sizeof(struct pnv_ioda_pe);
+ size += phb->ioda.total_pe_num * sizeof(struct pnv_ioda_pe);
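	/*
	 * Sketch of the resulting single "aux" allocation (IODA1 case,
	 * offsets as computed above): [pe_alloc bitmap][m64_segmap]
	 * [m32_segmap][io_segmap][dma32_segmap][pe_array], each *_off
	 * recording where its array begins.
	 */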
aux = memblock_virt_alloc(size, 0);
phb->ioda.pe_alloc = aux;
+ phb->ioda.m64_segmap = aux + m64map_off;
phb->ioda.m32_segmap = aux + m32map_off;
- if (phb->type == PNV_PHB_IODA1)
+ for (segno = 0; segno < phb->ioda.total_pe_num; segno++) {
+ phb->ioda.m64_segmap[segno] = IODA_INVALID_PE;
+ phb->ioda.m32_segmap[segno] = IODA_INVALID_PE;
+ }
+ if (phb->type == PNV_PHB_IODA1) {
phb->ioda.io_segmap = aux + iomap_off;
+ for (segno = 0; segno < phb->ioda.total_pe_num; segno++)
+ phb->ioda.io_segmap[segno] = IODA_INVALID_PE;
+
+ phb->ioda.dma32_segmap = aux + dma32map_off;
+ for (segno = 0; segno < phb->ioda.dma32_count; segno++)
+ phb->ioda.dma32_segmap[segno] = IODA_INVALID_PE;
+ }
phb->ioda.pe_array = aux + pemap_off;
- set_bit(phb->ioda.reserved_pe, phb->ioda.pe_alloc);
+ set_bit(phb->ioda.reserved_pe_idx, phb->ioda.pe_alloc);
- INIT_LIST_HEAD(&phb->ioda.pe_dma_list);
INIT_LIST_HEAD(&phb->ioda.pe_list);
mutex_init(&phb->ioda.pe_list_mutex);
/* Calculate how many 32-bit TCE segments we have */
- phb->ioda.tce32_count = phb->ioda.m32_pci_base >> 28;
+ phb->ioda.dma32_count = phb->ioda.m32_pci_base /
+ PNV_IODA1_DMA32_SEGSIZE;
#if 0 /* We should really do that ... */
rc = opal_pci_set_phb_mem_window(opal->phb_id,
@@ -3362,7 +3559,7 @@ static void __init pnv_pci_init_ioda_phb(struct device_node *np,
#endif
pr_info(" %03d (%03d) PE's M32: 0x%x [segment=0x%x]\n",
- phb->ioda.total_pe, phb->ioda.reserved_pe,
+ phb->ioda.total_pe_num, phb->ioda.reserved_pe_idx,
phb->ioda.m32_size, phb->ioda.m32_segsize);
if (phb->ioda.m64_size)
pr_info(" M64: 0x%lx [segment=0x%lx]\n",
@@ -3377,12 +3574,6 @@ static void __init pnv_pci_init_ioda_phb(struct device_node *np,
phb->freeze_pe = pnv_ioda_freeze_pe;
phb->unfreeze_pe = pnv_ioda_unfreeze_pe;
- /* Setup RID -> PE mapping function */
- phb->bdfn_to_pe = pnv_ioda_bdfn_to_pe;
-
- /* Setup TCEs */
- phb->dma_dev_setup = pnv_pci_ioda_dma_dev_setup;
-
/* Setup MSI support */
pnv_pci_init_ioda_msis(phb);
@@ -3395,10 +3586,12 @@ static void __init pnv_pci_init_ioda_phb(struct device_node *np,
*/
ppc_md.pcibios_fixup = pnv_pci_ioda_fixup;
- if (phb->type == PNV_PHB_NPU)
+ if (phb->type == PNV_PHB_NPU) {
hose->controller_ops = pnv_npu_ioda_controller_ops;
- else
+ } else {
+ phb->dma_dev_setup = pnv_pci_ioda_dma_dev_setup;
hose->controller_ops = pnv_pci_ioda_controller_ops;
+ }
#ifdef CONFIG_PCI_IOV
ppc_md.pcibios_fixup_sriov = pnv_pci_ioda_fixup_iov_resources;
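The pe_rmap[] table declared in struct pnv_phb (see the pci.h hunk further
down) is a plain reverse map from a 16-bit requester ID to a PE number, and
the RID values written earlier in this patch are formed as
bus->number << 8 | devfn. A minimal standalone sketch of that indexing, with
invented bus/devfn values, might look like:

    #include <stdio.h>

    /* Illustrative only: a 64K-entry RID -> PE# reverse map, like pe_rmap[0x10000]. */
    static unsigned int pe_rmap[0x10000];

    /* 8-bit bus number, 5-bit device, 3-bit function packed into one RID */
    static unsigned int rid(unsigned int bus, unsigned int devfn)
    {
            return (bus << 8) | devfn;
    }

    int main(void)
    {
            pe_rmap[rid(0x01, 0x08)] = 4;   /* pretend device 01:01.0 sits in PE#4 */
            printf("PE for 01:01.0 is %u\n", pe_rmap[rid(0x01, 0x08)]);
            return 0;
    }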
diff --git a/arch/powerpc/platforms/powernv/pci.c b/arch/powerpc/platforms/powernv/pci.c
index 73c8dc2a353f..1d92bd93bcd9 100644
--- a/arch/powerpc/platforms/powernv/pci.c
+++ b/arch/powerpc/platforms/powernv/pci.c
@@ -39,9 +39,6 @@
/* Delay in usec */
#define PCI_RESET_DELAY_US 3000000
-#define cfg_dbg(fmt...) do { } while(0)
-//#define cfg_dbg(fmt...) printk(fmt)
-
#ifdef CONFIG_PCI_MSI
int pnv_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type)
{
@@ -370,7 +367,7 @@ static void pnv_pci_config_check_eeh(struct pci_dn *pdn)
struct pnv_phb *phb = pdn->phb->private_data;
u8 fstate;
__be16 pcierr;
- int pe_no;
+ unsigned int pe_no;
s64 rc;
/*
@@ -380,7 +377,7 @@ static void pnv_pci_config_check_eeh(struct pci_dn *pdn)
*/
pe_no = pdn->pe_number;
if (pe_no == IODA_INVALID_PE) {
- pe_no = phb->ioda.reserved_pe;
+ pe_no = phb->ioda.reserved_pe_idx;
}
/*
@@ -402,8 +399,8 @@ static void pnv_pci_config_check_eeh(struct pci_dn *pdn)
}
}
- cfg_dbg(" -> EEH check, bdfn=%04x PE#%d fstate=%x\n",
- (pdn->busno << 8) | (pdn->devfn), pe_no, fstate);
+ pr_devel(" -> EEH check, bdfn=%04x PE#%d fstate=%x\n",
+ (pdn->busno << 8) | (pdn->devfn), pe_no, fstate);
/* Clear the frozen state if applicable */
if (fstate == OPAL_EEH_STOPPED_MMIO_FREEZE ||
@@ -451,8 +448,8 @@ int pnv_pci_cfg_read(struct pci_dn *pdn,
return PCIBIOS_FUNC_NOT_SUPPORTED;
}
- cfg_dbg("%s: bus: %x devfn: %x +%x/%x -> %08x\n",
- __func__, pdn->busno, pdn->devfn, where, size, *val);
+ pr_devel("%s: bus: %x devfn: %x +%x/%x -> %08x\n",
+ __func__, pdn->busno, pdn->devfn, where, size, *val);
return PCIBIOS_SUCCESSFUL;
}
@@ -462,8 +459,8 @@ int pnv_pci_cfg_write(struct pci_dn *pdn,
struct pnv_phb *phb = pdn->phb->private_data;
u32 bdfn = (pdn->busno << 8) | pdn->devfn;
- cfg_dbg("%s: bus: %x devfn: %x +%x/%x -> %08x\n",
- pdn->busno, pdn->devfn, where, size, val);
+ pr_devel("%s: bus: %x devfn: %x +%x/%x -> %08x\n",
+ __func__, pdn->busno, pdn->devfn, where, size, val);
switch (size) {
case 1:
opal_pci_config_write_byte(phb->opal_id, bdfn, where, val);
diff --git a/arch/powerpc/platforms/powernv/pci.h b/arch/powerpc/platforms/powernv/pci.h
index 3f814f382b2e..7dee25e304db 100644
--- a/arch/powerpc/platforms/powernv/pci.h
+++ b/arch/powerpc/platforms/powernv/pci.h
@@ -24,7 +24,6 @@ enum pnv_phb_model {
#define PNV_IODA_PE_MASTER (1 << 3) /* Master PE in compound case */
#define PNV_IODA_PE_SLAVE (1 << 4) /* Slave PE in compound case */
#define PNV_IODA_PE_VF (1 << 5) /* PE for one VF */
-#define PNV_IODA_PE_PEER (1 << 6) /* PE has peers */
/* Data associated with a PE, including IOMMU tracking etc.. */
struct pnv_phb;
@@ -32,9 +31,6 @@ struct pnv_ioda_pe {
unsigned long flags;
struct pnv_phb *phb;
-#define PNV_IODA_MAX_PEER_PES 8
- struct pnv_ioda_pe *peers[PNV_IODA_MAX_PEER_PES];
-
/* A PE can be associated with a single device or an
* entire bus (& children). In the former case, pdev
* is populated, in the later case, pbus is.
@@ -53,14 +49,7 @@ struct pnv_ioda_pe {
/* PE number */
unsigned int pe_number;
- /* "Weight" assigned to the PE for the sake of DMA resource
- * allocations
- */
- unsigned int dma_weight;
-
/* "Base" iommu table, ie, 4K TCEs, 32-bit DMA */
- int tce32_seg;
- int tce32_segcount;
struct iommu_table_group table_group;
/* 64-bit TCE bypass region */
@@ -78,7 +67,6 @@ struct pnv_ioda_pe {
struct list_head slaves;
/* Link in list of PE#s */
- struct list_head dma_link;
struct list_head list;
};
@@ -110,19 +98,18 @@ struct pnv_phb {
unsigned int is_64, struct msi_msg *msg);
void (*dma_dev_setup)(struct pnv_phb *phb, struct pci_dev *pdev);
void (*fixup_phb)(struct pci_controller *hose);
- u32 (*bdfn_to_pe)(struct pnv_phb *phb, struct pci_bus *bus, u32 devfn);
int (*init_m64)(struct pnv_phb *phb);
void (*reserve_m64_pe)(struct pci_bus *bus,
unsigned long *pe_bitmap, bool all);
- int (*pick_m64_pe)(struct pci_bus *bus, bool all);
+ struct pnv_ioda_pe *(*pick_m64_pe)(struct pci_bus *bus, bool all);
int (*get_pe_state)(struct pnv_phb *phb, int pe_no);
void (*freeze_pe)(struct pnv_phb *phb, int pe_no);
int (*unfreeze_pe)(struct pnv_phb *phb, int pe_no, int opt);
struct {
/* Global bridge info */
- unsigned int total_pe;
- unsigned int reserved_pe;
+ unsigned int total_pe_num;
+ unsigned int reserved_pe_idx;
/* 32-bit MMIO window */
unsigned int m32_size;
@@ -141,15 +128,19 @@ struct pnv_phb {
unsigned int io_segsize;
unsigned int io_pci_base;
- /* PE allocation bitmap */
- unsigned long *pe_alloc;
- /* PE allocation mutex */
+ /* PE allocation */
struct mutex pe_alloc_mutex;
+ unsigned long *pe_alloc;
+ struct pnv_ioda_pe *pe_array;
/* M32 & IO segment maps */
+ unsigned int *m64_segmap;
unsigned int *m32_segmap;
unsigned int *io_segmap;
- struct pnv_ioda_pe *pe_array;
+
+ /* DMA32 segment maps - IODA1 only */
+ unsigned int dma32_count;
+ unsigned int *dma32_segmap;
/* IRQ chip */
int irq_chip_init;
@@ -167,20 +158,6 @@ struct pnv_phb {
*/
unsigned char pe_rmap[0x10000];
- /* 32-bit TCE tables allocation */
- unsigned long tce32_count;
-
- /* Total "weight" for the sake of DMA resources
- * allocation
- */
- unsigned int dma_weight;
- unsigned int dma_pe_count;
-
- /* Sorted list of used PE's, sorted at
- * boot for resource allocation purposes
- */
- struct list_head pe_dma_list;
-
/* TCE cache invalidate registers (physical and
* remapped)
*/
@@ -236,16 +213,23 @@ extern void pnv_pci_dma_bus_setup(struct pci_bus *bus);
extern int pnv_setup_msi_irqs(struct pci_dev *pdev, int nvec, int type);
extern void pnv_teardown_msi_irqs(struct pci_dev *pdev);
+extern void pe_level_printk(const struct pnv_ioda_pe *pe, const char *level,
+ const char *fmt, ...);
+#define pe_err(pe, fmt, ...) \
+ pe_level_printk(pe, KERN_ERR, fmt, ##__VA_ARGS__)
+#define pe_warn(pe, fmt, ...) \
+ pe_level_printk(pe, KERN_WARNING, fmt, ##__VA_ARGS__)
+#define pe_info(pe, fmt, ...) \
+ pe_level_printk(pe, KERN_INFO, fmt, ##__VA_ARGS__)
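/*
 * Usage of these helpers follows the dev_err()/dev_warn()/dev_info()
 * pattern, e.g. pe_warn(pe, "No available DMA32 segments\n") as used in
 * pnv_pci_ioda1_setup_dma_pe() earlier in this patch.
 */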
+
/* Nvlink functions */
-extern void pnv_npu_tce_invalidate_entire(struct pnv_ioda_pe *npe);
-extern void pnv_npu_tce_invalidate(struct pnv_ioda_pe *npe,
- struct iommu_table *tbl,
- unsigned long index,
- unsigned long npages,
- bool rm);
-extern void pnv_npu_init_dma_pe(struct pnv_ioda_pe *npe);
-extern void pnv_npu_setup_dma_pe(struct pnv_ioda_pe *npe);
-extern int pnv_npu_dma_set_bypass(struct pnv_ioda_pe *npe, bool enabled);
-extern int pnv_npu_dma_set_mask(struct pci_dev *npdev, u64 dma_mask);
+extern void pnv_npu_try_dma_set_bypass(struct pci_dev *gpdev, bool bypass);
+extern void pnv_pci_ioda2_tce_invalidate_entire(struct pnv_phb *phb, bool rm);
+extern struct pnv_ioda_pe *pnv_pci_npu_setup_iommu(struct pnv_ioda_pe *npe);
+extern long pnv_npu_set_window(struct pnv_ioda_pe *npe, int num,
+ struct iommu_table *tbl);
+extern long pnv_npu_unset_window(struct pnv_ioda_pe *npe, int num);
+extern void pnv_npu_take_ownership(struct pnv_ioda_pe *npe);
+extern void pnv_npu_release_ownership(struct pnv_ioda_pe *npe);
#endif /* __POWERNV_PCI_H */
diff --git a/arch/powerpc/platforms/powernv/setup.c b/arch/powerpc/platforms/powernv/setup.c
index 1acb0c72d923..ee6430bedcc3 100644
--- a/arch/powerpc/platforms/powernv/setup.c
+++ b/arch/powerpc/platforms/powernv/setup.c
@@ -273,7 +273,10 @@ static int __init pnv_probe(void)
if (!of_flat_dt_is_compatible(root, "ibm,powernv"))
return 0;
- hpte_init_native();
+ if (IS_ENABLED(CONFIG_PPC_RADIX_MMU) && radix_enabled())
+ radix_init_native();
+ else if (IS_ENABLED(CONFIG_PPC_STD_MMU_64))
+ hpte_init_native();
if (firmware_has_feature(FW_FEATURE_OPAL))
pnv_setup_machdep_opal();
diff --git a/arch/powerpc/platforms/ps3/htab.c b/arch/powerpc/platforms/ps3/htab.c
index 2f95d33cf34a..c9a3e677192a 100644
--- a/arch/powerpc/platforms/ps3/htab.c
+++ b/arch/powerpc/platforms/ps3/htab.c
@@ -63,7 +63,7 @@ static long ps3_hpte_insert(unsigned long hpte_group, unsigned long vpn,
vflags &= ~HPTE_V_SECONDARY;
hpte_v = hpte_encode_v(vpn, psize, apsize, ssize) | vflags | HPTE_V_VALID;
- hpte_r = hpte_encode_r(ps3_mm_phys_to_lpar(pa), psize, apsize) | rflags;
+ hpte_r = hpte_encode_r(ps3_mm_phys_to_lpar(pa), psize, apsize, ssize) | rflags;
spin_lock_irqsave(&ps3_htab_lock, flags);
diff --git a/arch/powerpc/platforms/ps3/spu.c b/arch/powerpc/platforms/ps3/spu.c
index a0bca05e26b0..492b2575e0d2 100644
--- a/arch/powerpc/platforms/ps3/spu.c
+++ b/arch/powerpc/platforms/ps3/spu.c
@@ -205,7 +205,7 @@ static void spu_unmap(struct spu *spu)
static int __init setup_areas(struct spu *spu)
{
struct table {char* name; unsigned long addr; unsigned long size;};
- static const unsigned long shadow_flags = _PAGE_NO_CACHE | 3;
+ unsigned long shadow_flags = pgprot_val(pgprot_noncached_wc(PAGE_KERNEL_RO));
spu_pdata(spu)->shadow = __ioremap(spu_pdata(spu)->shadow_addr,
sizeof(struct spe_shadow),
@@ -216,7 +216,7 @@ static int __init setup_areas(struct spu *spu)
}
spu->local_store = (__force void *)ioremap_prot(spu->local_store_phys,
- LS_SIZE, _PAGE_NO_CACHE);
+ LS_SIZE, pgprot_val(pgprot_noncached_wc(__pgprot(0))));
if (!spu->local_store) {
pr_debug("%s:%d: ioremap local_store failed\n",
diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
index e9ff44cd5d86..2ce138542083 100644
--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
+++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
@@ -116,6 +116,155 @@ static struct property *dlpar_clone_drconf_property(struct device_node *dn)
return new_prop;
}
+static void dlpar_update_drconf_property(struct device_node *dn,
+ struct property *prop)
+{
+ struct of_drconf_cell *lmbs;
+ u32 num_lmbs, *p;
+ int i;
+
+ /* Convert the property back to BE */
+ p = prop->value;
+ num_lmbs = *p;
+ *p = cpu_to_be32(*p);
+ p++;
+
+ lmbs = (struct of_drconf_cell *)p;
+ for (i = 0; i < num_lmbs; i++) {
+ lmbs[i].base_addr = cpu_to_be64(lmbs[i].base_addr);
+ lmbs[i].drc_index = cpu_to_be32(lmbs[i].drc_index);
+ lmbs[i].flags = cpu_to_be32(lmbs[i].flags);
+ }
+
+ rtas_hp_event = true;
+ of_update_property(dn, prop);
+ rtas_hp_event = false;
+}
+
+static int dlpar_update_device_tree_lmb(struct of_drconf_cell *lmb)
+{
+ struct device_node *dn;
+ struct property *prop;
+ struct of_drconf_cell *lmbs;
+ u32 *p, num_lmbs;
+ int i;
+
+ dn = of_find_node_by_path("/ibm,dynamic-reconfiguration-memory");
+ if (!dn)
+ return -ENODEV;
+
+ prop = dlpar_clone_drconf_property(dn);
+ if (!prop) {
+ of_node_put(dn);
+ return -ENODEV;
+ }
+
+ p = prop->value;
+ num_lmbs = *p++;
+ lmbs = (struct of_drconf_cell *)p;
+
+ for (i = 0; i < num_lmbs; i++) {
+ if (lmbs[i].drc_index == lmb->drc_index) {
+ lmbs[i].flags = lmb->flags;
+ lmbs[i].aa_index = lmb->aa_index;
+
+ dlpar_update_drconf_property(dn, prop);
+ break;
+ }
+ }
+
+ of_node_put(dn);
+ return 0;
+}
+
+static u32 lookup_lmb_associativity_index(struct of_drconf_cell *lmb)
+{
+ struct device_node *parent, *lmb_node, *dr_node;
+ const u32 *lmb_assoc;
+ const u32 *assoc_arrays;
+ u32 aa_index;
+ int aa_arrays, aa_array_entries, aa_array_sz;
+ int i;
+
+ parent = of_find_node_by_path("/");
+ if (!parent)
+ return -ENODEV;
+
+ lmb_node = dlpar_configure_connector(cpu_to_be32(lmb->drc_index),
+ parent);
+ of_node_put(parent);
+ if (!lmb_node)
+ return -EINVAL;
+
+ lmb_assoc = of_get_property(lmb_node, "ibm,associativity", NULL);
+ if (!lmb_assoc) {
+ dlpar_free_cc_nodes(lmb_node);
+ return -ENODEV;
+ }
+
+ dr_node = of_find_node_by_path("/ibm,dynamic-reconfiguration-memory");
+ if (!dr_node) {
+ dlpar_free_cc_nodes(lmb_node);
+ return -ENODEV;
+ }
+
+ assoc_arrays = of_get_property(dr_node,
+ "ibm,associativity-lookup-arrays",
+ NULL);
+ of_node_put(dr_node);
+ if (!assoc_arrays) {
+ dlpar_free_cc_nodes(lmb_node);
+ return -ENODEV;
+ }
+
+ /* The ibm,associativity-lookup-arrays property is defined to be
+ * a 32-bit value specifying the number of associativity arrays
+	 * followed by a 32-bit value specifying the number of entries per
+ * array, followed by the associativity arrays.
+ */
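	/*
	 * Illustrative layout (values invented): for 2 arrays of 4 entries
	 * the property reads [2][4][a0 a1 a2 a3][b0 b1 b2 b3]; array i
	 * starts at cell i * aa_array_entries + 2, which is what the loop
	 * below compares against lmb_assoc[1..aa_array_entries].
	 */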
+ aa_arrays = be32_to_cpu(assoc_arrays[0]);
+ aa_array_entries = be32_to_cpu(assoc_arrays[1]);
+ aa_array_sz = aa_array_entries * sizeof(u32);
+
+ aa_index = -1;
+ for (i = 0; i < aa_arrays; i++) {
+ int indx = (i * aa_array_entries) + 2;
+
+ if (memcmp(&assoc_arrays[indx], &lmb_assoc[1], aa_array_sz))
+ continue;
+
+ aa_index = i;
+ break;
+ }
+
+ dlpar_free_cc_nodes(lmb_node);
+ return aa_index;
+}
+
+static int dlpar_add_device_tree_lmb(struct of_drconf_cell *lmb)
+{
+ int aa_index;
+
+ lmb->flags |= DRCONF_MEM_ASSIGNED;
+
+ aa_index = lookup_lmb_associativity_index(lmb);
+ if (aa_index < 0) {
+ pr_err("Couldn't find associativity index for drc index %x\n",
+ lmb->drc_index);
+ return aa_index;
+ }
+
+ lmb->aa_index = aa_index;
+ return dlpar_update_device_tree_lmb(lmb);
+}
+
+static int dlpar_remove_device_tree_lmb(struct of_drconf_cell *lmb)
+{
+ lmb->flags &= ~DRCONF_MEM_ASSIGNED;
+ lmb->aa_index = 0xffffffff;
+ return dlpar_update_device_tree_lmb(lmb);
+}
+
static struct memory_block *lmb_to_memblock(struct of_drconf_cell *lmb)
{
unsigned long section_nr;
@@ -243,8 +392,8 @@ static int dlpar_remove_lmb(struct of_drconf_cell *lmb)
memblock_remove(lmb->base_addr, block_sz);
dlpar_release_drc(lmb->drc_index);
+ dlpar_remove_device_tree_lmb(lmb);
- lmb->flags &= ~DRCONF_MEM_ASSIGNED;
return 0;
}
@@ -384,43 +533,32 @@ static int dlpar_memory_remove_by_index(u32 drc_index, struct property *prop)
#endif /* CONFIG_MEMORY_HOTREMOVE */
-static int dlpar_add_lmb(struct of_drconf_cell *lmb)
+static int dlpar_add_lmb_memory(struct of_drconf_cell *lmb)
{
struct memory_block *mem_block;
unsigned long block_sz;
int nid, rc;
- if (lmb->flags & DRCONF_MEM_ASSIGNED)
- return -EINVAL;
-
block_sz = memory_block_size_bytes();
- rc = dlpar_acquire_drc(lmb->drc_index);
- if (rc)
- return rc;
-
/* Find the node id for this address */
nid = memory_add_physaddr_to_nid(lmb->base_addr);
/* Add the memory */
rc = add_memory(nid, lmb->base_addr, block_sz);
- if (rc) {
- dlpar_release_drc(lmb->drc_index);
+ if (rc)
return rc;
- }
/* Register this block of memory */
rc = memblock_add(lmb->base_addr, block_sz);
if (rc) {
remove_memory(nid, lmb->base_addr, block_sz);
- dlpar_release_drc(lmb->drc_index);
return rc;
}
mem_block = lmb_to_memblock(lmb);
if (!mem_block) {
remove_memory(nid, lmb->base_addr, block_sz);
- dlpar_release_drc(lmb->drc_index);
return -EINVAL;
}
@@ -428,7 +566,6 @@ static int dlpar_add_lmb(struct of_drconf_cell *lmb)
put_device(&mem_block->dev);
if (rc) {
remove_memory(nid, lmb->base_addr, block_sz);
- dlpar_release_drc(lmb->drc_index);
return rc;
}
@@ -436,6 +573,34 @@ static int dlpar_add_lmb(struct of_drconf_cell *lmb)
return 0;
}
+static int dlpar_add_lmb(struct of_drconf_cell *lmb)
+{
+ int rc;
+
+ if (lmb->flags & DRCONF_MEM_ASSIGNED)
+ return -EINVAL;
+
+ rc = dlpar_acquire_drc(lmb->drc_index);
+ if (rc)
+ return rc;
+
+ rc = dlpar_add_device_tree_lmb(lmb);
+ if (rc) {
+ pr_err("Couldn't update device tree for drc index %x\n",
+ lmb->drc_index);
+ dlpar_release_drc(lmb->drc_index);
+ return rc;
+ }
+
+ rc = dlpar_add_lmb_memory(lmb);
+ if (rc) {
+ dlpar_remove_device_tree_lmb(lmb);
+ dlpar_release_drc(lmb->drc_index);
+ }
+
+ return rc;
+}
+
static int dlpar_memory_add_by_count(u32 lmbs_to_add, struct property *prop)
{
struct of_drconf_cell *lmbs;
@@ -536,31 +701,6 @@ static int dlpar_memory_add_by_index(u32 drc_index, struct property *prop)
return rc;
}
-static void dlpar_update_drconf_property(struct device_node *dn,
- struct property *prop)
-{
- struct of_drconf_cell *lmbs;
- u32 num_lmbs, *p;
- int i;
-
- /* Convert the property back to BE */
- p = prop->value;
- num_lmbs = *p;
- *p = cpu_to_be32(*p);
- p++;
-
- lmbs = (struct of_drconf_cell *)p;
- for (i = 0; i < num_lmbs; i++) {
- lmbs[i].base_addr = cpu_to_be64(lmbs[i].base_addr);
- lmbs[i].drc_index = cpu_to_be32(lmbs[i].drc_index);
- lmbs[i].flags = cpu_to_be32(lmbs[i].flags);
- }
-
- rtas_hp_event = true;
- of_update_property(dn, prop);
- rtas_hp_event = false;
-}
-
int dlpar_memory(struct pseries_hp_errorlog *hp_elog)
{
struct device_node *dn;
@@ -608,10 +748,7 @@ int dlpar_memory(struct pseries_hp_errorlog *hp_elog)
break;
}
- if (rc)
- dlpar_free_drconf_property(prop);
- else
- dlpar_update_drconf_property(dn, prop);
+ dlpar_free_drconf_property(prop);
dlpar_memory_out:
of_node_put(dn);
diff --git a/arch/powerpc/platforms/pseries/iommu.c b/arch/powerpc/platforms/pseries/iommu.c
index bd98ce2be17b..b7dfc1359d01 100644
--- a/arch/powerpc/platforms/pseries/iommu.c
+++ b/arch/powerpc/platforms/pseries/iommu.c
@@ -912,7 +912,8 @@ machine_arch_initcall(pseries, find_existing_ddw_windows);
static int query_ddw(struct pci_dev *dev, const u32 *ddw_avail,
struct ddw_query_response *query)
{
- struct eeh_dev *edev;
+ struct device_node *dn;
+ struct pci_dn *pdn;
u32 cfg_addr;
u64 buid;
int ret;
@@ -923,11 +924,10 @@ static int query_ddw(struct pci_dev *dev, const u32 *ddw_avail,
* Retrieve them from the pci device, not the node with the
* dma-window property
*/
- edev = pci_dev_to_eeh_dev(dev);
- cfg_addr = edev->config_addr;
- if (edev->pe_config_addr)
- cfg_addr = edev->pe_config_addr;
- buid = edev->phb->buid;
+ dn = pci_device_to_OF_node(dev);
+ pdn = PCI_DN(dn);
+ buid = pdn->phb->buid;
+ cfg_addr = (pdn->busno << 8) | pdn->devfn;
ret = rtas_call(ddw_avail[0], 3, 5, (u32 *)query,
cfg_addr, BUID_HI(buid), BUID_LO(buid));
@@ -941,7 +941,8 @@ static int create_ddw(struct pci_dev *dev, const u32 *ddw_avail,
struct ddw_create_response *create, int page_shift,
int window_shift)
{
- struct eeh_dev *edev;
+ struct device_node *dn;
+ struct pci_dn *pdn;
u32 cfg_addr;
u64 buid;
int ret;
@@ -952,11 +953,10 @@ static int create_ddw(struct pci_dev *dev, const u32 *ddw_avail,
* Retrieve them from the pci device, not the node with the
* dma-window property
*/
- edev = pci_dev_to_eeh_dev(dev);
- cfg_addr = edev->config_addr;
- if (edev->pe_config_addr)
- cfg_addr = edev->pe_config_addr;
- buid = edev->phb->buid;
+ dn = pci_device_to_OF_node(dev);
+ pdn = PCI_DN(dn);
+ buid = pdn->phb->buid;
+ cfg_addr = (pdn->busno << 8) | pdn->devfn;
do {
/* extra outputs are LIOBN and dma-addr (hi, lo) */
diff --git a/arch/powerpc/platforms/pseries/lpar.c b/arch/powerpc/platforms/pseries/lpar.c
index 2415a0d31f8f..7f6100d91b4b 100644
--- a/arch/powerpc/platforms/pseries/lpar.c
+++ b/arch/powerpc/platforms/pseries/lpar.c
@@ -89,18 +89,21 @@ void vpa_init(int cpu)
"%lx failed with %ld\n", cpu, hwcpu, addr, ret);
return;
}
+
+#ifdef CONFIG_PPC_STD_MMU_64
/*
* PAPR says this feature is SLB-Buffer but firmware never
* reports that. All SPLPAR support SLB shadow buffer.
*/
- addr = __pa(paca[cpu].slb_shadow_ptr);
- if (firmware_has_feature(FW_FEATURE_SPLPAR)) {
+ if (!radix_enabled() && firmware_has_feature(FW_FEATURE_SPLPAR)) {
+ addr = __pa(paca[cpu].slb_shadow_ptr);
ret = register_slb_shadow(hwcpu, addr);
if (ret)
pr_err("WARNING: SLB shadow buffer registration for "
"cpu %d (hw %d) of area %lx failed with %ld\n",
cpu, hwcpu, addr, ret);
}
+#endif /* CONFIG_PPC_STD_MMU_64 */
/*
* Register dispatch trace log, if one has been allocated.
@@ -123,6 +126,8 @@ void vpa_init(int cpu)
}
}
+#ifdef CONFIG_PPC_STD_MMU_64
+
static long pSeries_lpar_hpte_insert(unsigned long hpte_group,
unsigned long vpn, unsigned long pa,
unsigned long rflags, unsigned long vflags,
@@ -139,7 +144,7 @@ static long pSeries_lpar_hpte_insert(unsigned long hpte_group,
hpte_group, vpn, pa, rflags, vflags, psize);
hpte_v = hpte_encode_v(vpn, psize, apsize, ssize) | vflags | HPTE_V_VALID;
- hpte_r = hpte_encode_r(pa, psize, apsize) | rflags;
+ hpte_r = hpte_encode_r(pa, psize, apsize, ssize) | rflags;
if (!(vflags & HPTE_V_BOLTED))
pr_devel(" hpte_v=%016lx, hpte_r=%016lx\n", hpte_v, hpte_r);
@@ -152,10 +157,6 @@ static long pSeries_lpar_hpte_insert(unsigned long hpte_group,
/* Exact = 0 */
flags = 0;
- /* Make pHyp happy */
- if ((rflags & _PAGE_NO_CACHE) && !(rflags & _PAGE_WRITETHRU))
- hpte_r &= ~HPTE_R_M;
-
if (firmware_has_feature(FW_FEATURE_XCMO) && !(hpte_r & HPTE_R_N))
flags |= H_COALESCE_CAND;
@@ -659,6 +660,8 @@ static void pSeries_set_page_state(struct page *page, int order,
void arch_free_page(struct page *page, int order)
{
+ if (radix_enabled())
+ return;
if (!cmo_free_hint_flag || !firmware_has_feature(FW_FEATURE_CMO))
return;
@@ -666,7 +669,8 @@ void arch_free_page(struct page *page, int order)
}
EXPORT_SYMBOL(arch_free_page);
-#endif
+#endif /* CONFIG_PPC_SMLPAR */
+#endif /* CONFIG_PPC_STD_MMU_64 */
#ifdef CONFIG_TRACEPOINTS
#ifdef HAVE_JUMP_LABEL
diff --git a/arch/powerpc/platforms/pseries/lparcfg.c b/arch/powerpc/platforms/pseries/lparcfg.c
index c9fecf09b8fa..afa05a2cb702 100644
--- a/arch/powerpc/platforms/pseries/lparcfg.c
+++ b/arch/powerpc/platforms/pseries/lparcfg.c
@@ -484,8 +484,9 @@ static int pseries_lparcfg_data(struct seq_file *m, void *v)
seq_printf(m, "shared_processor_mode=%d\n",
lppaca_shared_proc(get_lppaca()));
+#ifdef CONFIG_PPC_STD_MMU_64
seq_printf(m, "slb_size=%d\n", mmu_slb_size);
-
+#endif
parse_em_data(m);
return 0;
diff --git a/arch/powerpc/platforms/pseries/mobility.c b/arch/powerpc/platforms/pseries/mobility.c
index ceb18d349459..a560a98bcf3b 100644
--- a/arch/powerpc/platforms/pseries/mobility.c
+++ b/arch/powerpc/platforms/pseries/mobility.c
@@ -191,8 +191,8 @@ static int update_dt_node(__be32 phandle, s32 scope)
break;
case 0x80000000:
- prop = of_find_property(dn, prop_name, NULL);
- of_remove_property(dn, prop);
+ of_remove_property(dn, of_find_property(dn,
+ prop_name, NULL));
prop = NULL;
break;
diff --git a/arch/powerpc/platforms/pseries/msi.c b/arch/powerpc/platforms/pseries/msi.c
index 272e9ec1ab54..543a6386f3eb 100644
--- a/arch/powerpc/platforms/pseries/msi.c
+++ b/arch/powerpc/platforms/pseries/msi.c
@@ -305,7 +305,7 @@ static int msi_quota_for_device(struct pci_dev *dev, int request)
memset(&counts, 0, sizeof(struct msi_counts));
/* Work out how many devices we have below this PE */
- traverse_pci_devices(pe_dn, count_non_bridge_devices, &counts);
+ pci_traverse_device_nodes(pe_dn, count_non_bridge_devices, &counts);
if (counts.num_devices == 0) {
pr_err("rtas_msi: found 0 devices under PE for %s\n",
@@ -320,7 +320,7 @@ static int msi_quota_for_device(struct pci_dev *dev, int request)
/* else, we have some more calculating to do */
counts.requestor = pci_device_to_OF_node(dev);
counts.request = request;
- traverse_pci_devices(pe_dn, count_spare_msis, &counts);
+ pci_traverse_device_nodes(pe_dn, count_spare_msis, &counts);
/* If the quota isn't an integer multiple of the total, we can
* use the remainder as spare MSIs for anyone that wants them. */
diff --git a/arch/powerpc/platforms/pseries/pci_dlpar.c b/arch/powerpc/platforms/pseries/pci_dlpar.c
index 5d4a3df59d0c..906dbaa97fe2 100644
--- a/arch/powerpc/platforms/pseries/pci_dlpar.c
+++ b/arch/powerpc/platforms/pseries/pci_dlpar.c
@@ -34,38 +34,6 @@
#include "pseries.h"
-static struct pci_bus *
-find_bus_among_children(struct pci_bus *bus,
- struct device_node *dn)
-{
- struct pci_bus *child = NULL;
- struct pci_bus *tmp;
- struct device_node *busdn;
-
- busdn = pci_bus_to_OF_node(bus);
- if (busdn == dn)
- return bus;
-
- list_for_each_entry(tmp, &bus->children, node) {
- child = find_bus_among_children(tmp, dn);
- if (child)
- break;
- };
- return child;
-}
-
-struct pci_bus *
-pcibios_find_pci_bus(struct device_node *dn)
-{
- struct pci_dn *pdn = dn->data;
-
- if (!pdn || !pdn->phb || !pdn->phb->bus)
- return NULL;
-
- return find_bus_among_children(pdn->phb->bus, dn);
-}
-EXPORT_SYMBOL_GPL(pcibios_find_pci_bus);
-
struct pci_controller *init_phb_dynamic(struct device_node *dn)
{
struct pci_controller *phb;
diff --git a/arch/powerpc/platforms/pseries/reconfig.c b/arch/powerpc/platforms/pseries/reconfig.c
index 7c7fcc042549..cc66c49f07aa 100644
--- a/arch/powerpc/platforms/pseries/reconfig.c
+++ b/arch/powerpc/platforms/pseries/reconfig.c
@@ -303,7 +303,6 @@ static int do_remove_property(char *buf, size_t bufsize)
{
struct device_node *np;
char *tmp;
- struct property *prop;
buf = parse_node(buf, bufsize, &np);
if (!np)
@@ -316,9 +315,7 @@ static int do_remove_property(char *buf, size_t bufsize)
if (strlen(buf) == 0)
return -EINVAL;
- prop = of_find_property(np, buf, NULL);
-
- return of_remove_property(np, prop);
+ return of_remove_property(np, of_find_property(np, buf, NULL));
}
static int do_update_property(char *buf, size_t bufsize)
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index 6e944fc6e5f9..9883bc7ea007 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -235,6 +235,8 @@ static void __init pseries_discover_pic(void)
for_each_node_by_name(np, "interrupt-controller") {
typep = of_get_property(np, "compatible", NULL);
+ if (!typep)
+ continue;
if (strstr(typep, "open-pic")) {
pSeries_mpic_node = of_node_get(np);
ppc_md.init_IRQ = pseries_mpic_init_IRQ;
@@ -265,7 +267,7 @@ static int pci_dn_reconfig_notifier(struct notifier_block *nb, unsigned long act
pdn = parent ? PCI_DN(parent) : NULL;
if (pdn) {
/* Create pdn and EEH device */
- update_dn_pci_info(np, pdn->phb);
+ pci_add_device_node_info(pdn->phb, np);
eeh_dev_init(PCI_DN(np), pdn->phb);
}
diff --git a/arch/powerpc/sysdev/axonram.c b/arch/powerpc/sysdev/axonram.c
index 0d112b94d91d..ff75d70f7285 100644
--- a/arch/powerpc/sysdev/axonram.c
+++ b/arch/powerpc/sysdev/axonram.c
@@ -143,7 +143,7 @@ axon_ram_make_request(struct request_queue *queue, struct bio *bio)
*/
static long
axon_ram_direct_access(struct block_device *device, sector_t sector,
- void __pmem **kaddr, pfn_t *pfn)
+ void __pmem **kaddr, pfn_t *pfn, long size)
{
struct axon_ram_bank *bank = device->bd_disk->private_data;
loff_t offset = (loff_t)sector << AXON_RAM_SECTOR_SHIFT;
diff --git a/arch/powerpc/sysdev/cpm1.c b/arch/powerpc/sysdev/cpm1.c
index 8ed65365be50..6c110994d902 100644
--- a/arch/powerpc/sysdev/cpm1.c
+++ b/arch/powerpc/sysdev/cpm1.c
@@ -532,15 +532,9 @@ struct cpm1_gpio16_chip {
u16 cpdata;
};
-static inline struct cpm1_gpio16_chip *
-to_cpm1_gpio16_chip(struct of_mm_gpio_chip *mm_gc)
-{
- return container_of(mm_gc, struct cpm1_gpio16_chip, mm_gc);
-}
-
static void cpm1_gpio16_save_regs(struct of_mm_gpio_chip *mm_gc)
{
- struct cpm1_gpio16_chip *cpm1_gc = to_cpm1_gpio16_chip(mm_gc);
+ struct cpm1_gpio16_chip *cpm1_gc = gpiochip_get_data(&mm_gc->gc);
struct cpm_ioport16 __iomem *iop = mm_gc->regs;
cpm1_gc->cpdata = in_be16(&iop->dat);
@@ -560,7 +554,7 @@ static int cpm1_gpio16_get(struct gpio_chip *gc, unsigned int gpio)
static void __cpm1_gpio16_set(struct of_mm_gpio_chip *mm_gc, u16 pin_mask,
int value)
{
- struct cpm1_gpio16_chip *cpm1_gc = to_cpm1_gpio16_chip(mm_gc);
+ struct cpm1_gpio16_chip *cpm1_gc = gpiochip_get_data(&mm_gc->gc);
struct cpm_ioport16 __iomem *iop = mm_gc->regs;
if (value)
@@ -574,7 +568,7 @@ static void __cpm1_gpio16_set(struct of_mm_gpio_chip *mm_gc, u16 pin_mask,
static void cpm1_gpio16_set(struct gpio_chip *gc, unsigned int gpio, int value)
{
struct of_mm_gpio_chip *mm_gc = to_of_mm_gpio_chip(gc);
- struct cpm1_gpio16_chip *cpm1_gc = to_cpm1_gpio16_chip(mm_gc);
+ struct cpm1_gpio16_chip *cpm1_gc = gpiochip_get_data(&mm_gc->gc);
unsigned long flags;
u16 pin_mask = 1 << (15 - gpio);
@@ -588,7 +582,7 @@ static void cpm1_gpio16_set(struct gpio_chip *gc, unsigned int gpio, int value)
static int cpm1_gpio16_dir_out(struct gpio_chip *gc, unsigned int gpio, int val)
{
struct of_mm_gpio_chip *mm_gc = to_of_mm_gpio_chip(gc);
- struct cpm1_gpio16_chip *cpm1_gc = to_cpm1_gpio16_chip(mm_gc);
+ struct cpm1_gpio16_chip *cpm1_gc = gpiochip_get_data(&mm_gc->gc);
struct cpm_ioport16 __iomem *iop = mm_gc->regs;
unsigned long flags;
u16 pin_mask = 1 << (15 - gpio);
@@ -606,7 +600,7 @@ static int cpm1_gpio16_dir_out(struct gpio_chip *gc, unsigned int gpio, int val)
static int cpm1_gpio16_dir_in(struct gpio_chip *gc, unsigned int gpio)
{
struct of_mm_gpio_chip *mm_gc = to_of_mm_gpio_chip(gc);
- struct cpm1_gpio16_chip *cpm1_gc = to_cpm1_gpio16_chip(mm_gc);
+ struct cpm1_gpio16_chip *cpm1_gc = gpiochip_get_data(&mm_gc->gc);
struct cpm_ioport16 __iomem *iop = mm_gc->regs;
unsigned long flags;
u16 pin_mask = 1 << (15 - gpio);
@@ -642,7 +636,7 @@ int cpm1_gpiochip_add16(struct device_node *np)
gc->get = cpm1_gpio16_get;
gc->set = cpm1_gpio16_set;
- return of_mm_gpiochip_add(np, mm_gc);
+ return of_mm_gpiochip_add_data(np, mm_gc, cpm1_gc);
}
struct cpm1_gpio32_chip {
@@ -653,15 +647,9 @@ struct cpm1_gpio32_chip {
u32 cpdata;
};
-static inline struct cpm1_gpio32_chip *
-to_cpm1_gpio32_chip(struct of_mm_gpio_chip *mm_gc)
-{
- return container_of(mm_gc, struct cpm1_gpio32_chip, mm_gc);
-}
-
static void cpm1_gpio32_save_regs(struct of_mm_gpio_chip *mm_gc)
{
- struct cpm1_gpio32_chip *cpm1_gc = to_cpm1_gpio32_chip(mm_gc);
+ struct cpm1_gpio32_chip *cpm1_gc = gpiochip_get_data(&mm_gc->gc);
struct cpm_ioport32b __iomem *iop = mm_gc->regs;
cpm1_gc->cpdata = in_be32(&iop->dat);
@@ -681,7 +669,7 @@ static int cpm1_gpio32_get(struct gpio_chip *gc, unsigned int gpio)
static void __cpm1_gpio32_set(struct of_mm_gpio_chip *mm_gc, u32 pin_mask,
int value)
{
- struct cpm1_gpio32_chip *cpm1_gc = to_cpm1_gpio32_chip(mm_gc);
+ struct cpm1_gpio32_chip *cpm1_gc = gpiochip_get_data(&mm_gc->gc);
struct cpm_ioport32b __iomem *iop = mm_gc->regs;
if (value)
@@ -695,7 +683,7 @@ static void __cpm1_gpio32_set(struct of_mm_gpio_chip *mm_gc, u32 pin_mask,
static void cpm1_gpio32_set(struct gpio_chip *gc, unsigned int gpio, int value)
{
struct of_mm_gpio_chip *mm_gc = to_of_mm_gpio_chip(gc);
- struct cpm1_gpio32_chip *cpm1_gc = to_cpm1_gpio32_chip(mm_gc);
+ struct cpm1_gpio32_chip *cpm1_gc = gpiochip_get_data(&mm_gc->gc);
unsigned long flags;
u32 pin_mask = 1 << (31 - gpio);
@@ -709,7 +697,7 @@ static void cpm1_gpio32_set(struct gpio_chip *gc, unsigned int gpio, int value)
static int cpm1_gpio32_dir_out(struct gpio_chip *gc, unsigned int gpio, int val)
{
struct of_mm_gpio_chip *mm_gc = to_of_mm_gpio_chip(gc);
- struct cpm1_gpio32_chip *cpm1_gc = to_cpm1_gpio32_chip(mm_gc);
+ struct cpm1_gpio32_chip *cpm1_gc = gpiochip_get_data(&mm_gc->gc);
struct cpm_ioport32b __iomem *iop = mm_gc->regs;
unsigned long flags;
u32 pin_mask = 1 << (31 - gpio);
@@ -727,7 +715,7 @@ static int cpm1_gpio32_dir_out(struct gpio_chip *gc, unsigned int gpio, int val)
static int cpm1_gpio32_dir_in(struct gpio_chip *gc, unsigned int gpio)
{
struct of_mm_gpio_chip *mm_gc = to_of_mm_gpio_chip(gc);
- struct cpm1_gpio32_chip *cpm1_gc = to_cpm1_gpio32_chip(mm_gc);
+ struct cpm1_gpio32_chip *cpm1_gc = gpiochip_get_data(&mm_gc->gc);
struct cpm_ioport32b __iomem *iop = mm_gc->regs;
unsigned long flags;
u32 pin_mask = 1 << (31 - gpio);
@@ -763,7 +751,7 @@ int cpm1_gpiochip_add32(struct device_node *np)
gc->get = cpm1_gpio32_get;
gc->set = cpm1_gpio32_set;
- return of_mm_gpiochip_add(np, mm_gc);
+ return of_mm_gpiochip_add_data(np, mm_gc, cpm1_gc);
}
static int cpm_init_par_io(void)
diff --git a/arch/powerpc/sysdev/cpm_common.c b/arch/powerpc/sysdev/cpm_common.c
index 9d32465eddb1..0ac12e5fd8ab 100644
--- a/arch/powerpc/sysdev/cpm_common.c
+++ b/arch/powerpc/sysdev/cpm_common.c
@@ -80,15 +80,9 @@ struct cpm2_gpio32_chip {
u32 cpdata;
};
-static inline struct cpm2_gpio32_chip *
-to_cpm2_gpio32_chip(struct of_mm_gpio_chip *mm_gc)
-{
- return container_of(mm_gc, struct cpm2_gpio32_chip, mm_gc);
-}
-
static void cpm2_gpio32_save_regs(struct of_mm_gpio_chip *mm_gc)
{
- struct cpm2_gpio32_chip *cpm2_gc = to_cpm2_gpio32_chip(mm_gc);
+ struct cpm2_gpio32_chip *cpm2_gc = gpiochip_get_data(&mm_gc->gc);
struct cpm2_ioports __iomem *iop = mm_gc->regs;
cpm2_gc->cpdata = in_be32(&iop->dat);
@@ -108,7 +102,7 @@ static int cpm2_gpio32_get(struct gpio_chip *gc, unsigned int gpio)
static void __cpm2_gpio32_set(struct of_mm_gpio_chip *mm_gc, u32 pin_mask,
int value)
{
- struct cpm2_gpio32_chip *cpm2_gc = to_cpm2_gpio32_chip(mm_gc);
+ struct cpm2_gpio32_chip *cpm2_gc = gpiochip_get_data(&mm_gc->gc);
struct cpm2_ioports __iomem *iop = mm_gc->regs;
if (value)
@@ -122,7 +116,7 @@ static void __cpm2_gpio32_set(struct of_mm_gpio_chip *mm_gc, u32 pin_mask,
static void cpm2_gpio32_set(struct gpio_chip *gc, unsigned int gpio, int value)
{
struct of_mm_gpio_chip *mm_gc = to_of_mm_gpio_chip(gc);
- struct cpm2_gpio32_chip *cpm2_gc = to_cpm2_gpio32_chip(mm_gc);
+ struct cpm2_gpio32_chip *cpm2_gc = gpiochip_get_data(gc);
unsigned long flags;
u32 pin_mask = 1 << (31 - gpio);
@@ -136,7 +130,7 @@ static void cpm2_gpio32_set(struct gpio_chip *gc, unsigned int gpio, int value)
static int cpm2_gpio32_dir_out(struct gpio_chip *gc, unsigned int gpio, int val)
{
struct of_mm_gpio_chip *mm_gc = to_of_mm_gpio_chip(gc);
- struct cpm2_gpio32_chip *cpm2_gc = to_cpm2_gpio32_chip(mm_gc);
+ struct cpm2_gpio32_chip *cpm2_gc = gpiochip_get_data(gc);
struct cpm2_ioports __iomem *iop = mm_gc->regs;
unsigned long flags;
u32 pin_mask = 1 << (31 - gpio);
@@ -154,7 +148,7 @@ static int cpm2_gpio32_dir_out(struct gpio_chip *gc, unsigned int gpio, int val)
static int cpm2_gpio32_dir_in(struct gpio_chip *gc, unsigned int gpio)
{
struct of_mm_gpio_chip *mm_gc = to_of_mm_gpio_chip(gc);
- struct cpm2_gpio32_chip *cpm2_gc = to_cpm2_gpio32_chip(mm_gc);
+ struct cpm2_gpio32_chip *cpm2_gc = gpiochip_get_data(gc);
struct cpm2_ioports __iomem *iop = mm_gc->regs;
unsigned long flags;
u32 pin_mask = 1 << (31 - gpio);
@@ -190,6 +184,6 @@ int cpm2_gpiochip_add32(struct device_node *np)
gc->get = cpm2_gpio32_get;
gc->set = cpm2_gpio32_set;
- return of_mm_gpiochip_add(np, mm_gc);
+ return of_mm_gpiochip_add_data(np, mm_gc, cpm2_gc);
}
#endif /* CONFIG_CPM2 || CONFIG_8xx_GPIO */
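For reference, the cpm1, cpm2 and (further down) ppc4xx/simple_gpio hunks all apply the same conversion: per-chip state that used to be recovered with container_of() on the embedded of_mm_gpio_chip is now handed to of_mm_gpiochip_add_data() at registration time and fetched back with gpiochip_get_data(). A minimal sketch of that pattern, not part of the patch and using hypothetical names (my_gpio_chip, my_save_regs, my_gpiochip_add):

#include <linux/gpio/driver.h>
#include <linux/of_gpio.h>
#include <asm/io.h>

struct my_gpio_chip {
	struct of_mm_gpio_chip mm_gc;	/* embedded memory-mapped chip */
	u32 shadow;			/* cached copy of the data register */
};

static void my_save_regs(struct of_mm_gpio_chip *mm_gc)
{
	/* new style: no container_of(), the data was attached at add time */
	struct my_gpio_chip *chip = gpiochip_get_data(&mm_gc->gc);

	chip->shadow = in_be32(mm_gc->regs);
}

static int my_gpiochip_add(struct device_node *np, struct my_gpio_chip *chip)
{
	chip->mm_gc.save_regs = my_save_regs;
	/* the third argument is what gpiochip_get_data() returns later */
	return of_mm_gpiochip_add_data(np, &chip->mm_gc, chip);
}

Whether the getter is written as gpiochip_get_data(gc) or gpiochip_get_data(&mm_gc->gc), as the different hunks do, makes no difference: mm_gc is obtained from the same gpio_chip, so both expressions name the same structure.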
diff --git a/arch/powerpc/sysdev/fsl_pci.c b/arch/powerpc/sysdev/fsl_pci.c
index 85729f49764f..0ef9df49f0f2 100644
--- a/arch/powerpc/sysdev/fsl_pci.c
+++ b/arch/powerpc/sysdev/fsl_pci.c
@@ -37,6 +37,7 @@
#include <asm/pci-bridge.h>
#include <asm/ppc-pci.h>
#include <asm/machdep.h>
+#include <asm/mpc85xx.h>
#include <asm/disassemble.h>
#include <asm/ppc-opcode.h>
#include <sysdev/fsl_soc.h>
@@ -527,6 +528,8 @@ int fsl_add_bridge(struct platform_device *pdev, int is_primary)
u8 hdr_type, progif;
struct device_node *dev;
struct ccsr_pci __iomem *pci;
+ u16 temp;
+ u32 svr = mfspr(SPRN_SVR);
dev = pdev->dev.of_node;
@@ -596,6 +599,27 @@ int fsl_add_bridge(struct platform_device *pdev, int is_primary)
PPC_INDIRECT_TYPE_SURPRESS_PRIMARY_BUS;
if (fsl_pcie_check_link(hose))
hose->indirect_type |= PPC_INDIRECT_TYPE_NO_PCIE_LINK;
+ } else {
+ /*
+ * Set PBFR (PCI Bus Function Register)[10] = 1 to
+ * disable combining of requests that cross a
+ * cacheline boundary into one burst transaction.
+ * PCI-X operation is not affected.
+ * Fixes erratum PCI 5 on MPC8548.
+ */
+#define PCI_BUS_FUNCTION 0x44
+#define PCI_BUS_FUNCTION_MDS 0x400 /* Master disable streaming */
+ if (((SVR_SOC_VER(svr) == SVR_8543) ||
+ (SVR_SOC_VER(svr) == SVR_8545) ||
+ (SVR_SOC_VER(svr) == SVR_8547) ||
+ (SVR_SOC_VER(svr) == SVR_8548)) &&
+ !early_find_capability(hose, 0, 0, PCI_CAP_ID_PCIX)) {
+ early_read_config_word(hose, 0, 0,
+ PCI_BUS_FUNCTION, &temp);
+ temp |= PCI_BUS_FUNCTION_MDS;
+ early_write_config_word(hose, 0, 0,
+ PCI_BUS_FUNCTION, temp);
+ }
}
printk(KERN_INFO "Found FSL PCI host bridge at 0x%016llx. "
diff --git a/arch/powerpc/sysdev/mpic.c b/arch/powerpc/sysdev/mpic.c
index afe3c7cd395d..7de45b2df366 100644
--- a/arch/powerpc/sysdev/mpic.c
+++ b/arch/powerpc/sysdev/mpic.c
@@ -2004,8 +2004,15 @@ static struct syscore_ops mpic_syscore_ops = {
static int mpic_init_sys(void)
{
+ int rc;
+
register_syscore_ops(&mpic_syscore_ops);
- subsys_system_register(&mpic_subsys, NULL);
+ rc = subsys_system_register(&mpic_subsys, NULL);
+ if (rc) {
+ unregister_syscore_ops(&mpic_syscore_ops);
+ pr_err("mpic: Failed to register subsystem!\n");
+ return rc;
+ }
return 0;
}
diff --git a/arch/powerpc/sysdev/ppc4xx_gpio.c b/arch/powerpc/sysdev/ppc4xx_gpio.c
index d7a7ef135b9f..5382d04dd872 100644
--- a/arch/powerpc/sysdev/ppc4xx_gpio.c
+++ b/arch/powerpc/sysdev/ppc4xx_gpio.c
@@ -27,7 +27,7 @@
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
-#include <linux/gpio.h>
+#include <linux/gpio/driver.h>
#include <linux/types.h>
#include <linux/slab.h>
@@ -67,12 +67,6 @@ struct ppc4xx_gpio_chip {
* There are a maximum of 32 gpios in each gpio controller.
*/
-static inline struct ppc4xx_gpio_chip *
-to_ppc4xx_gpiochip(struct of_mm_gpio_chip *mm_gc)
-{
- return container_of(mm_gc, struct ppc4xx_gpio_chip, mm_gc);
-}
-
static int ppc4xx_gpio_get(struct gpio_chip *gc, unsigned int gpio)
{
struct of_mm_gpio_chip *mm_gc = to_of_mm_gpio_chip(gc);
@@ -96,8 +90,7 @@ __ppc4xx_gpio_set(struct gpio_chip *gc, unsigned int gpio, int val)
static void
ppc4xx_gpio_set(struct gpio_chip *gc, unsigned int gpio, int val)
{
- struct of_mm_gpio_chip *mm_gc = to_of_mm_gpio_chip(gc);
- struct ppc4xx_gpio_chip *chip = to_ppc4xx_gpiochip(mm_gc);
+ struct ppc4xx_gpio_chip *chip = gpiochip_get_data(gc);
unsigned long flags;
spin_lock_irqsave(&chip->lock, flags);
@@ -112,7 +105,7 @@ ppc4xx_gpio_set(struct gpio_chip *gc, unsigned int gpio, int val)
static int ppc4xx_gpio_dir_in(struct gpio_chip *gc, unsigned int gpio)
{
struct of_mm_gpio_chip *mm_gc = to_of_mm_gpio_chip(gc);
- struct ppc4xx_gpio_chip *chip = to_ppc4xx_gpiochip(mm_gc);
+ struct ppc4xx_gpio_chip *chip = gpiochip_get_data(gc);
struct ppc4xx_gpio __iomem *regs = mm_gc->regs;
unsigned long flags;
@@ -142,7 +135,7 @@ static int
ppc4xx_gpio_dir_out(struct gpio_chip *gc, unsigned int gpio, int val)
{
struct of_mm_gpio_chip *mm_gc = to_of_mm_gpio_chip(gc);
- struct ppc4xx_gpio_chip *chip = to_ppc4xx_gpiochip(mm_gc);
+ struct ppc4xx_gpio_chip *chip = gpiochip_get_data(gc);
struct ppc4xx_gpio __iomem *regs = mm_gc->regs;
unsigned long flags;
@@ -200,7 +193,7 @@ static int __init ppc4xx_add_gpiochips(void)
gc->get = ppc4xx_gpio_get;
gc->set = ppc4xx_gpio_set;
- ret = of_mm_gpiochip_add(np, mm_gc);
+ ret = of_mm_gpiochip_add_data(np, mm_gc, ppc4xx_gc);
if (ret)
goto err;
continue;
diff --git a/arch/powerpc/sysdev/simple_gpio.c b/arch/powerpc/sysdev/simple_gpio.c
index 56ce8ca3281b..ef470b470b04 100644
--- a/arch/powerpc/sysdev/simple_gpio.c
+++ b/arch/powerpc/sysdev/simple_gpio.c
@@ -19,7 +19,7 @@
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_gpio.h>
-#include <linux/gpio.h>
+#include <linux/gpio/driver.h>
#include <linux/slab.h>
#include <asm/prom.h>
#include "simple_gpio.h"
@@ -32,11 +32,6 @@ struct u8_gpio_chip {
u8 data;
};
-static struct u8_gpio_chip *to_u8_gpio_chip(struct of_mm_gpio_chip *mm_gc)
-{
- return container_of(mm_gc, struct u8_gpio_chip, mm_gc);
-}
-
static u8 u8_pin2mask(unsigned int pin)
{
return 1 << (8 - 1 - pin);
@@ -52,7 +47,7 @@ static int u8_gpio_get(struct gpio_chip *gc, unsigned int gpio)
static void u8_gpio_set(struct gpio_chip *gc, unsigned int gpio, int val)
{
struct of_mm_gpio_chip *mm_gc = to_of_mm_gpio_chip(gc);
- struct u8_gpio_chip *u8_gc = to_u8_gpio_chip(mm_gc);
+ struct u8_gpio_chip *u8_gc = gpiochip_get_data(gc);
unsigned long flags;
spin_lock_irqsave(&u8_gc->lock, flags);
@@ -80,7 +75,7 @@ static int u8_gpio_dir_out(struct gpio_chip *gc, unsigned int gpio, int val)
static void u8_gpio_save_regs(struct of_mm_gpio_chip *mm_gc)
{
- struct u8_gpio_chip *u8_gc = to_u8_gpio_chip(mm_gc);
+ struct u8_gpio_chip *u8_gc = gpiochip_get_data(&mm_gc->gc);
u8_gc->data = in_8(mm_gc->regs);
}
@@ -108,7 +103,7 @@ static int __init u8_simple_gpiochip_add(struct device_node *np)
gc->get = u8_gpio_get;
gc->set = u8_gpio_set;
- ret = of_mm_gpiochip_add(np, mm_gc);
+ ret = of_mm_gpiochip_add_data(np, mm_gc, u8_gc);
if (ret)
goto err;
return 0;
diff --git a/arch/powerpc/xmon/Makefile b/arch/powerpc/xmon/Makefile
index 436062dbb6e2..0b2f771593eb 100644
--- a/arch/powerpc/xmon/Makefile
+++ b/arch/powerpc/xmon/Makefile
@@ -7,7 +7,7 @@ UBSAN_SANITIZE := n
ccflags-$(CONFIG_PPC64) := $(NO_MINIMAL_TOC)
-obj-y += xmon.o nonstdio.o
+obj-y += xmon.o nonstdio.o spr_access.o
ifdef CONFIG_XMON_DISASSEMBLY
obj-y += ppc-dis.o ppc-opc.o
diff --git a/arch/powerpc/xmon/spr_access.S b/arch/powerpc/xmon/spr_access.S
new file mode 100644
index 000000000000..84ad74213c83
--- /dev/null
+++ b/arch/powerpc/xmon/spr_access.S
@@ -0,0 +1,45 @@
+#include <asm/ppc_asm.h>
+
+/* unsigned long xmon_mfspr(sprn, default_value) */
+_GLOBAL(xmon_mfspr)
+ ld r5, .Lmfspr_table@got(r2)
+ b xmon_mxspr
+
+/* void xmon_mtspr(sprn, new_value) */
+_GLOBAL(xmon_mtspr)
+ ld r5, .Lmtspr_table@got(r2)
+ b xmon_mxspr
+
+/*
+ * r3 = sprn
+ * r4 = default or new value
+ * r5 = table base
+ */
+xmon_mxspr:
+ /*
+ * To index into the table of mxsprs we need:
+ * i = (sprn & 0x3ff) * 8
+ * or using rwlinm:
+ * i = (sprn << 3) & (0x3ff << 3)
+ */
+ rlwinm r3, r3, 3, 0x3ff << 3
+ add r5, r5, r3
+ mtctr r5
+ mr r3, r4 /* put default_value in r3 for mfspr */
+ bctr
+
+.Lmfspr_table:
+ spr = 0
+ .rept 1024
+ mfspr r3, spr
+ blr
+ spr = spr + 1
+ .endr
+
+.Lmtspr_table:
+ spr = 0
+ .rept 1024
+ mtspr spr, r4
+ blr
+ spr = spr + 1
+ .endr
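As a side note on the table indexing in xmon_mxspr above: each table slot holds two 4-byte instructions (mfspr or mtspr, then blr), so a slot's byte offset is (sprn & 0x3ff) * 8, which the rlwinm form computes as a single shift-and-mask. A small standalone C check of that equivalence, illustrative only and with made-up names (slot_offset, slot_offset_rlwinm), no kernel dependencies:

#include <assert.h>
#include <stdio.h>

static unsigned int slot_offset(unsigned int sprn)
{
	return (sprn & 0x3ff) * 8;		/* 2 instructions x 4 bytes each */
}

static unsigned int slot_offset_rlwinm(unsigned int sprn)
{
	return (sprn << 3) & (0x3ff << 3);	/* same value, shift-and-mask form */
}

int main(void)
{
	unsigned int sprn;

	for (sprn = 0; sprn < 1024; sprn++)
		assert(slot_offset(sprn) == slot_offset_rlwinm(sprn));
	printf("offset of SPR 26 (SRR0): %u bytes\n", slot_offset(26));
	return 0;
}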
diff --git a/arch/powerpc/xmon/xmon.c b/arch/powerpc/xmon/xmon.c
index 942796fa4767..c5e155108be5 100644
--- a/arch/powerpc/xmon/xmon.c
+++ b/arch/powerpc/xmon/xmon.c
@@ -86,6 +86,7 @@ static char tmpstr[128];
static long bus_error_jmp[JMP_BUF_LEN];
static int catch_memory_errors;
+static int catch_spr_faults;
static long *xmon_fault_jmp[NR_CPUS];
/* Breakpoint stuff */
@@ -147,7 +148,7 @@ void getstring(char *, int);
static void flush_input(void);
static int inchar(void);
static void take_input(char *);
-static unsigned long read_spr(int);
+static int read_spr(int, unsigned long *);
static void write_spr(int, unsigned long);
static void super_regs(void);
static void remove_bpts(void);
@@ -250,6 +251,9 @@ Commands:\n\
sdi # disassemble spu local store for spu # (in hex)\n"
#endif
" S print special registers\n\
+ Sa print all SPRs\n\
+ Sr # read SPR #\n\
+ Sw #v write v to SPR #\n\
t print backtrace\n\
x exit monitor and recover\n\
X exit monitor and don't recover\n"
@@ -442,6 +446,12 @@ static int xmon_core(struct pt_regs *regs, int fromipi)
#ifdef CONFIG_SMP
cpu = smp_processor_id();
if (cpumask_test_cpu(cpu, &cpus_in_xmon)) {
+ /*
+ * We catch SPR read/write faults here because the 0x700, 0xf60
+ * etc. handlers don't call debugger_fault_handler().
+ */
+ if (catch_spr_faults)
+ longjmp(bus_error_jmp, 1);
get_output_lock();
excprint(regs);
printf("cpu 0x%x: Exception %lx %s in xmon, "
@@ -1635,89 +1645,87 @@ static void cacheflush(void)
catch_memory_errors = 0;
}
-static unsigned long
-read_spr(int n)
+extern unsigned long xmon_mfspr(int spr, unsigned long default_value);
+extern void xmon_mtspr(int spr, unsigned long value);
+
+static int
+read_spr(int n, unsigned long *vp)
{
- unsigned int instrs[2];
- unsigned long (*code)(void);
unsigned long ret = -1UL;
-#ifdef CONFIG_PPC64
- unsigned long opd[3];
-
- opd[0] = (unsigned long)instrs;
- opd[1] = 0;
- opd[2] = 0;
- code = (unsigned long (*)(void)) opd;
-#else
- code = (unsigned long (*)(void)) instrs;
-#endif
-
- /* mfspr r3,n; blr */
- instrs[0] = 0x7c6002a6 + ((n & 0x1F) << 16) + ((n & 0x3e0) << 6);
- instrs[1] = 0x4e800020;
- store_inst(instrs);
- store_inst(instrs+1);
+ int ok = 0;
if (setjmp(bus_error_jmp) == 0) {
- catch_memory_errors = 1;
+ catch_spr_faults = 1;
sync();
- ret = code();
+ ret = xmon_mfspr(n, *vp);
sync();
- /* wait a little while to see if we get a machine check */
- __delay(200);
- n = size;
+ *vp = ret;
+ ok = 1;
}
+ catch_spr_faults = 0;
- return ret;
+ return ok;
}
static void
write_spr(int n, unsigned long val)
{
- unsigned int instrs[2];
- unsigned long (*code)(unsigned long);
-#ifdef CONFIG_PPC64
- unsigned long opd[3];
-
- opd[0] = (unsigned long)instrs;
- opd[1] = 0;
- opd[2] = 0;
- code = (unsigned long (*)(unsigned long)) opd;
-#else
- code = (unsigned long (*)(unsigned long)) instrs;
-#endif
-
- instrs[0] = 0x7c6003a6 + ((n & 0x1F) << 16) + ((n & 0x3e0) << 6);
- instrs[1] = 0x4e800020;
- store_inst(instrs);
- store_inst(instrs+1);
-
if (setjmp(bus_error_jmp) == 0) {
- catch_memory_errors = 1;
+ catch_spr_faults = 1;
sync();
- code(val);
+ xmon_mtspr(n, val);
sync();
- /* wait a little while to see if we get a machine check */
- __delay(200);
- n = size;
+ } else {
+ printf("SPR 0x%03x (%4d) Faulted during write\n", n, n);
}
+ catch_spr_faults = 0;
}
static unsigned long regno;
extern char exc_prolog;
extern char dec_exc;
+static void dump_one_spr(int spr, bool show_unimplemented)
+{
+ unsigned long val;
+
+ val = 0xdeadbeef;
+ if (!read_spr(spr, &val)) {
+ printf("SPR 0x%03x (%4d) Faulted during read\n", spr, spr);
+ return;
+ }
+
+ if (val == 0xdeadbeef) {
+ /* Looks like read was a nop, confirm */
+ val = 0x0badcafe;
+ if (!read_spr(spr, &val)) {
+ printf("SPR 0x%03x (%4d) Faulted during read\n", spr, spr);
+ return;
+ }
+
+ if (val == 0x0badcafe) {
+ if (show_unimplemented)
+ printf("SPR 0x%03x (%4d) Unimplemented\n", spr, spr);
+ return;
+ }
+ }
+
+ printf("SPR 0x%03x (%4d) = 0x%lx\n", spr, spr, val);
+}
+
static void super_regs(void)
{
int cmd;
- unsigned long val;
+ int spr;
cmd = skipbl();
- if (cmd == '\n') {
+
+ switch (cmd) {
+ case '\n': {
unsigned long sp, toc;
asm("mr %0,1" : "=r" (sp) :);
asm("mr %0,2" : "=r" (toc) :);
@@ -1730,21 +1738,29 @@ static void super_regs(void)
mfspr(SPRN_DEC), mfspr(SPRN_SPRG2));
printf("sp = "REG" sprg3= "REG"\n", sp, mfspr(SPRN_SPRG3));
printf("toc = "REG" dar = "REG"\n", toc, mfspr(SPRN_DAR));
-
return;
}
-
- scanhex(&regno);
- switch (cmd) {
- case 'w':
- val = read_spr(regno);
+ case 'w': {
+ unsigned long val;
+ scanhex(&regno);
+ val = 0;
+ read_spr(regno, &val);
scanhex(&val);
write_spr(regno, val);
- /* fall through */
+ dump_one_spr(regno, true);
+ break;
+ }
case 'r':
- printf("spr %lx = %lx\n", regno, read_spr(regno));
+ scanhex(&regno);
+ dump_one_spr(regno, true);
+ break;
+ case 'a':
+ /* dump ALL SPRs */
+ for (spr = 1; spr < 1024; ++spr)
+ dump_one_spr(spr, false);
break;
}
+
scannl();
}
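The rewritten read_spr()/write_spr() above rely on a setjmp()/longjmp() dance: catch_spr_faults is set, the SPR access is attempted, and if it traps, xmon_core() longjmps back (see the comment added near the top of xmon_core()). A rough userspace analogue of that fault-catching pattern, using SIGSEGV in place of a machine exception; this is purely illustrative, with hypothetical names (try_read, fault_handler), and is not how xmon itself recovers:

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static sigjmp_buf fault_jmp;
static volatile sig_atomic_t catch_faults;

static void fault_handler(int sig)
{
	if (catch_faults)
		siglongjmp(fault_jmp, 1);	/* back into try_read() */
	_exit(1);			/* unexpected fault: give up */
}

static int try_read(const volatile int *p, int *val)
{
	int ok = 0;

	if (sigsetjmp(fault_jmp, 1) == 0) {
		catch_faults = 1;
		*val = *p;	/* may fault; the handler jumps back here */
		ok = 1;
	}
	catch_faults = 0;
	return ok;
}

int main(void)
{
	int v = 0, x = 42;
	int ok;

	signal(SIGSEGV, fault_handler);

	ok = try_read(&x, &v);
	printf("good pointer: ok=%d val=%d\n", ok, v);

	ok = try_read((const volatile int *)8, &v);
	printf("bad pointer:  ok=%d\n", ok);
	return 0;
}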
@@ -2913,7 +2929,7 @@ static void xmon_print_symbol(unsigned long address, const char *mid,
printf("%s", after);
}
-#ifdef CONFIG_PPC_BOOK3S_64
+#ifdef CONFIG_PPC_STD_MMU_64
void dump_segments(void)
{
int i;
diff --git a/arch/s390/Kconfig b/arch/s390/Kconfig
index bf24ab188921..a8c259059adf 100644
--- a/arch/s390/Kconfig
+++ b/arch/s390/Kconfig
@@ -107,6 +107,7 @@ config S390
select ARCH_SUPPORTS_NUMA_BALANCING
select ARCH_USE_BUILTIN_BSWAP
select ARCH_USE_CMPXCHG_LOCKREF
+ select ARCH_WANTS_DYNAMIC_TASK_STRUCT
select ARCH_WANTS_PROT_NUMA_PROT_NONE
select ARCH_WANT_IPC_PARSE_VERSION
select BUILDTIME_EXTABLE_SORT
@@ -122,17 +123,19 @@ config S390
select HAVE_ARCH_AUDITSYSCALL
select HAVE_ARCH_EARLY_PFN_TO_NID
select HAVE_ARCH_JUMP_LABEL
+ select CPU_NO_EFFICIENT_FFS if !HAVE_MARCH_Z9_109_FEATURES
select HAVE_ARCH_SECCOMP_FILTER
select HAVE_ARCH_SOFT_DIRTY
select HAVE_ARCH_TRACEHOOK
select HAVE_ARCH_TRANSPARENT_HUGEPAGE
- select HAVE_BPF_JIT if PACK_STACK && HAVE_MARCH_Z196_FEATURES
+ select HAVE_EBPF_JIT if PACK_STACK && HAVE_MARCH_Z196_FEATURES
select HAVE_CMPXCHG_DOUBLE
select HAVE_CMPXCHG_LOCAL
select HAVE_DEBUG_KMEMLEAK
select HAVE_DMA_API_DEBUG
select HAVE_DYNAMIC_FTRACE
select HAVE_DYNAMIC_FTRACE_WITH_REGS
+ select HAVE_EXIT_THREAD
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_GRAPH_TRACER
select HAVE_FUNCTION_TRACER
@@ -164,6 +167,7 @@ config S390
select TTY
select VIRT_CPU_ACCOUNTING
select VIRT_TO_BUS
+ select HAVE_NMI
config SCHED_OMIT_FRAME_POINTER
@@ -210,7 +214,7 @@ config HAVE_MARCH_Z13_FEATURES
choice
prompt "Processor type"
- default MARCH_Z900
+ default MARCH_Z196
config MARCH_Z900
bool "IBM zSeries model z800 and z900"
diff --git a/arch/s390/boot/compressed/Makefile b/arch/s390/boot/compressed/Makefile
index fac6ac9790fa..1dd210347e12 100644
--- a/arch/s390/boot/compressed/Makefile
+++ b/arch/s390/boot/compressed/Makefile
@@ -22,7 +22,6 @@ OBJECTS += $(obj)/head.o $(obj)/misc.o $(obj)/piggy.o
LDFLAGS_vmlinux := --oformat $(LD_BFD) -e startup -T
$(obj)/vmlinux: $(obj)/vmlinux.lds $(OBJECTS)
$(call if_changed,ld)
- @:
sed-sizes := -e 's/^\([0-9a-fA-F]*\) . \(__bss_start\|_end\)$$/\#define SZ\2 0x\1/p'
diff --git a/arch/s390/crypto/aes_s390.c b/arch/s390/crypto/aes_s390.c
index 48e1a2d3e318..7554a8bb2adc 100644
--- a/arch/s390/crypto/aes_s390.c
+++ b/arch/s390/crypto/aes_s390.c
@@ -28,7 +28,7 @@
#include <linux/init.h>
#include <linux/spinlock.h>
#include <crypto/xts.h>
-#include "crypt_s390.h"
+#include <asm/cpacf.h>
#define AES_KEYLEN_128 1
#define AES_KEYLEN_192 2
@@ -145,16 +145,16 @@ static void aes_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
switch (sctx->key_len) {
case 16:
- crypt_s390_km(KM_AES_128_ENCRYPT, &sctx->key, out, in,
- AES_BLOCK_SIZE);
+ cpacf_km(CPACF_KM_AES_128_ENC, &sctx->key, out, in,
+ AES_BLOCK_SIZE);
break;
case 24:
- crypt_s390_km(KM_AES_192_ENCRYPT, &sctx->key, out, in,
- AES_BLOCK_SIZE);
+ cpacf_km(CPACF_KM_AES_192_ENC, &sctx->key, out, in,
+ AES_BLOCK_SIZE);
break;
case 32:
- crypt_s390_km(KM_AES_256_ENCRYPT, &sctx->key, out, in,
- AES_BLOCK_SIZE);
+ cpacf_km(CPACF_KM_AES_256_ENC, &sctx->key, out, in,
+ AES_BLOCK_SIZE);
break;
}
}
@@ -170,16 +170,16 @@ static void aes_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
switch (sctx->key_len) {
case 16:
- crypt_s390_km(KM_AES_128_DECRYPT, &sctx->key, out, in,
- AES_BLOCK_SIZE);
+ cpacf_km(CPACF_KM_AES_128_DEC, &sctx->key, out, in,
+ AES_BLOCK_SIZE);
break;
case 24:
- crypt_s390_km(KM_AES_192_DECRYPT, &sctx->key, out, in,
- AES_BLOCK_SIZE);
+ cpacf_km(CPACF_KM_AES_192_DEC, &sctx->key, out, in,
+ AES_BLOCK_SIZE);
break;
case 32:
- crypt_s390_km(KM_AES_256_DECRYPT, &sctx->key, out, in,
- AES_BLOCK_SIZE);
+ cpacf_km(CPACF_KM_AES_256_DEC, &sctx->key, out, in,
+ AES_BLOCK_SIZE);
break;
}
}
@@ -212,7 +212,7 @@ static void fallback_exit_cip(struct crypto_tfm *tfm)
static struct crypto_alg aes_alg = {
.cra_name = "aes",
.cra_driver_name = "aes-s390",
- .cra_priority = CRYPT_S390_PRIORITY,
+ .cra_priority = 300,
.cra_flags = CRYPTO_ALG_TYPE_CIPHER |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = AES_BLOCK_SIZE,
@@ -298,16 +298,16 @@ static int ecb_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
switch (key_len) {
case 16:
- sctx->enc = KM_AES_128_ENCRYPT;
- sctx->dec = KM_AES_128_DECRYPT;
+ sctx->enc = CPACF_KM_AES_128_ENC;
+ sctx->dec = CPACF_KM_AES_128_DEC;
break;
case 24:
- sctx->enc = KM_AES_192_ENCRYPT;
- sctx->dec = KM_AES_192_DECRYPT;
+ sctx->enc = CPACF_KM_AES_192_ENC;
+ sctx->dec = CPACF_KM_AES_192_DEC;
break;
case 32:
- sctx->enc = KM_AES_256_ENCRYPT;
- sctx->dec = KM_AES_256_DECRYPT;
+ sctx->enc = CPACF_KM_AES_256_ENC;
+ sctx->dec = CPACF_KM_AES_256_DEC;
break;
}
@@ -326,7 +326,7 @@ static int ecb_aes_crypt(struct blkcipher_desc *desc, long func, void *param,
u8 *out = walk->dst.virt.addr;
u8 *in = walk->src.virt.addr;
- ret = crypt_s390_km(func, param, out, in, n);
+ ret = cpacf_km(func, param, out, in, n);
if (ret < 0 || ret != n)
return -EIO;
@@ -393,7 +393,7 @@ static void fallback_exit_blk(struct crypto_tfm *tfm)
static struct crypto_alg ecb_aes_alg = {
.cra_name = "ecb(aes)",
.cra_driver_name = "ecb-aes-s390",
- .cra_priority = CRYPT_S390_COMPOSITE_PRIORITY,
+ .cra_priority = 400, /* combo: aes + ecb */
.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = AES_BLOCK_SIZE,
@@ -427,16 +427,16 @@ static int cbc_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
switch (key_len) {
case 16:
- sctx->enc = KMC_AES_128_ENCRYPT;
- sctx->dec = KMC_AES_128_DECRYPT;
+ sctx->enc = CPACF_KMC_AES_128_ENC;
+ sctx->dec = CPACF_KMC_AES_128_DEC;
break;
case 24:
- sctx->enc = KMC_AES_192_ENCRYPT;
- sctx->dec = KMC_AES_192_DECRYPT;
+ sctx->enc = CPACF_KMC_AES_192_ENC;
+ sctx->dec = CPACF_KMC_AES_192_DEC;
break;
case 32:
- sctx->enc = KMC_AES_256_ENCRYPT;
- sctx->dec = KMC_AES_256_DECRYPT;
+ sctx->enc = CPACF_KMC_AES_256_ENC;
+ sctx->dec = CPACF_KMC_AES_256_DEC;
break;
}
@@ -465,7 +465,7 @@ static int cbc_aes_crypt(struct blkcipher_desc *desc, long func,
u8 *out = walk->dst.virt.addr;
u8 *in = walk->src.virt.addr;
- ret = crypt_s390_kmc(func, &param, out, in, n);
+ ret = cpacf_kmc(func, &param, out, in, n);
if (ret < 0 || ret != n)
return -EIO;
@@ -509,7 +509,7 @@ static int cbc_aes_decrypt(struct blkcipher_desc *desc,
static struct crypto_alg cbc_aes_alg = {
.cra_name = "cbc(aes)",
.cra_driver_name = "cbc-aes-s390",
- .cra_priority = CRYPT_S390_COMPOSITE_PRIORITY,
+ .cra_priority = 400, /* combo: aes + cbc */
.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = AES_BLOCK_SIZE,
@@ -596,8 +596,8 @@ static int xts_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
switch (key_len) {
case 32:
- xts_ctx->enc = KM_XTS_128_ENCRYPT;
- xts_ctx->dec = KM_XTS_128_DECRYPT;
+ xts_ctx->enc = CPACF_KM_XTS_128_ENC;
+ xts_ctx->dec = CPACF_KM_XTS_128_DEC;
memcpy(xts_ctx->key + 16, in_key, 16);
memcpy(xts_ctx->pcc_key + 16, in_key + 16, 16);
break;
@@ -607,8 +607,8 @@ static int xts_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
xts_fallback_setkey(tfm, in_key, key_len);
break;
case 64:
- xts_ctx->enc = KM_XTS_256_ENCRYPT;
- xts_ctx->dec = KM_XTS_256_DECRYPT;
+ xts_ctx->enc = CPACF_KM_XTS_256_ENC;
+ xts_ctx->dec = CPACF_KM_XTS_256_DEC;
memcpy(xts_ctx->key, in_key, 32);
memcpy(xts_ctx->pcc_key, in_key + 32, 32);
break;
@@ -643,7 +643,8 @@ static int xts_aes_crypt(struct blkcipher_desc *desc, long func,
memset(pcc_param.xts, 0, sizeof(pcc_param.xts));
memcpy(pcc_param.tweak, walk->iv, sizeof(pcc_param.tweak));
memcpy(pcc_param.key, xts_ctx->pcc_key, 32);
- ret = crypt_s390_pcc(func, &pcc_param.key[offset]);
+ /* remove decipher modifier bit from 'func' and call PCC */
+ ret = cpacf_pcc(func & 0x7f, &pcc_param.key[offset]);
if (ret < 0)
return -EIO;
@@ -655,7 +656,7 @@ static int xts_aes_crypt(struct blkcipher_desc *desc, long func,
out = walk->dst.virt.addr;
in = walk->src.virt.addr;
- ret = crypt_s390_km(func, &xts_param.key[offset], out, in, n);
+ ret = cpacf_km(func, &xts_param.key[offset], out, in, n);
if (ret < 0 || ret != n)
return -EIO;
@@ -721,7 +722,7 @@ static void xts_fallback_exit(struct crypto_tfm *tfm)
static struct crypto_alg xts_aes_alg = {
.cra_name = "xts(aes)",
.cra_driver_name = "xts-aes-s390",
- .cra_priority = CRYPT_S390_COMPOSITE_PRIORITY,
+ .cra_priority = 400, /* combo: aes + xts */
.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER |
CRYPTO_ALG_NEED_FALLBACK,
.cra_blocksize = AES_BLOCK_SIZE,
@@ -751,16 +752,16 @@ static int ctr_aes_set_key(struct crypto_tfm *tfm, const u8 *in_key,
switch (key_len) {
case 16:
- sctx->enc = KMCTR_AES_128_ENCRYPT;
- sctx->dec = KMCTR_AES_128_DECRYPT;
+ sctx->enc = CPACF_KMCTR_AES_128_ENC;
+ sctx->dec = CPACF_KMCTR_AES_128_DEC;
break;
case 24:
- sctx->enc = KMCTR_AES_192_ENCRYPT;
- sctx->dec = KMCTR_AES_192_DECRYPT;
+ sctx->enc = CPACF_KMCTR_AES_192_ENC;
+ sctx->dec = CPACF_KMCTR_AES_192_DEC;
break;
case 32:
- sctx->enc = KMCTR_AES_256_ENCRYPT;
- sctx->dec = KMCTR_AES_256_DECRYPT;
+ sctx->enc = CPACF_KMCTR_AES_256_ENC;
+ sctx->dec = CPACF_KMCTR_AES_256_DEC;
break;
}
@@ -804,8 +805,7 @@ static int ctr_aes_crypt(struct blkcipher_desc *desc, long func,
n = __ctrblk_init(ctrptr, nbytes);
else
n = AES_BLOCK_SIZE;
- ret = crypt_s390_kmctr(func, sctx->key, out, in,
- n, ctrptr);
+ ret = cpacf_kmctr(func, sctx->key, out, in, n, ctrptr);
if (ret < 0 || ret != n) {
if (ctrptr == ctrblk)
spin_unlock(&ctrblk_lock);
@@ -837,8 +837,8 @@ static int ctr_aes_crypt(struct blkcipher_desc *desc, long func,
if (nbytes) {
out = walk->dst.virt.addr;
in = walk->src.virt.addr;
- ret = crypt_s390_kmctr(func, sctx->key, buf, in,
- AES_BLOCK_SIZE, ctrbuf);
+ ret = cpacf_kmctr(func, sctx->key, buf, in,
+ AES_BLOCK_SIZE, ctrbuf);
if (ret < 0 || ret != AES_BLOCK_SIZE)
return -EIO;
memcpy(out, buf, nbytes);
@@ -875,7 +875,7 @@ static int ctr_aes_decrypt(struct blkcipher_desc *desc,
static struct crypto_alg ctr_aes_alg = {
.cra_name = "ctr(aes)",
.cra_driver_name = "ctr-aes-s390",
- .cra_priority = CRYPT_S390_COMPOSITE_PRIORITY,
+ .cra_priority = 400, /* combo: aes + ctr */
.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
.cra_blocksize = 1,
.cra_ctxsize = sizeof(struct s390_aes_ctx),
@@ -899,11 +899,11 @@ static int __init aes_s390_init(void)
{
int ret;
- if (crypt_s390_func_available(KM_AES_128_ENCRYPT, CRYPT_S390_MSA))
+ if (cpacf_query(CPACF_KM, CPACF_KM_AES_128_ENC))
keylen_flag |= AES_KEYLEN_128;
- if (crypt_s390_func_available(KM_AES_192_ENCRYPT, CRYPT_S390_MSA))
+ if (cpacf_query(CPACF_KM, CPACF_KM_AES_192_ENC))
keylen_flag |= AES_KEYLEN_192;
- if (crypt_s390_func_available(KM_AES_256_ENCRYPT, CRYPT_S390_MSA))
+ if (cpacf_query(CPACF_KM, CPACF_KM_AES_256_ENC))
keylen_flag |= AES_KEYLEN_256;
if (!keylen_flag)
@@ -926,22 +926,17 @@ static int __init aes_s390_init(void)
if (ret)
goto cbc_aes_err;
- if (crypt_s390_func_available(KM_XTS_128_ENCRYPT,
- CRYPT_S390_MSA | CRYPT_S390_MSA4) &&
- crypt_s390_func_available(KM_XTS_256_ENCRYPT,
- CRYPT_S390_MSA | CRYPT_S390_MSA4)) {
+ if (cpacf_query(CPACF_KM, CPACF_KM_XTS_128_ENC) &&
+ cpacf_query(CPACF_KM, CPACF_KM_XTS_256_ENC)) {
ret = crypto_register_alg(&xts_aes_alg);
if (ret)
goto xts_aes_err;
xts_aes_alg_reg = 1;
}
- if (crypt_s390_func_available(KMCTR_AES_128_ENCRYPT,
- CRYPT_S390_MSA | CRYPT_S390_MSA4) &&
- crypt_s390_func_available(KMCTR_AES_192_ENCRYPT,
- CRYPT_S390_MSA | CRYPT_S390_MSA4) &&
- crypt_s390_func_available(KMCTR_AES_256_ENCRYPT,
- CRYPT_S390_MSA | CRYPT_S390_MSA4)) {
+ if (cpacf_query(CPACF_KMCTR, CPACF_KMCTR_AES_128_ENC) &&
+ cpacf_query(CPACF_KMCTR, CPACF_KMCTR_AES_192_ENC) &&
+ cpacf_query(CPACF_KMCTR, CPACF_KMCTR_AES_256_ENC)) {
ctrblk = (u8 *) __get_free_page(GFP_KERNEL);
if (!ctrblk) {
ret = -ENOMEM;
diff --git a/arch/s390/crypto/crypt_s390.h b/arch/s390/crypto/crypt_s390.h
deleted file mode 100644
index d9c4c313fbc6..000000000000
--- a/arch/s390/crypto/crypt_s390.h
+++ /dev/null
@@ -1,493 +0,0 @@
-/*
- * Cryptographic API.
- *
- * Support for s390 cryptographic instructions.
- *
- * Copyright IBM Corp. 2003, 2015
- * Author(s): Thomas Spatzier
- * Jan Glauber (jan.glauber@de.ibm.com)
- * Harald Freudenberger (freude@de.ibm.com)
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License as published by the Free
- * Software Foundation; either version 2 of the License, or (at your option)
- * any later version.
- *
- */
-#ifndef _CRYPTO_ARCH_S390_CRYPT_S390_H
-#define _CRYPTO_ARCH_S390_CRYPT_S390_H
-
-#include <asm/errno.h>
-#include <asm/facility.h>
-
-#define CRYPT_S390_OP_MASK 0xFF00
-#define CRYPT_S390_FUNC_MASK 0x00FF
-
-#define CRYPT_S390_PRIORITY 300
-#define CRYPT_S390_COMPOSITE_PRIORITY 400
-
-#define CRYPT_S390_MSA 0x1
-#define CRYPT_S390_MSA3 0x2
-#define CRYPT_S390_MSA4 0x4
-#define CRYPT_S390_MSA5 0x8
-
-/* s390 cryptographic operations */
-enum crypt_s390_operations {
- CRYPT_S390_KM = 0x0100,
- CRYPT_S390_KMC = 0x0200,
- CRYPT_S390_KIMD = 0x0300,
- CRYPT_S390_KLMD = 0x0400,
- CRYPT_S390_KMAC = 0x0500,
- CRYPT_S390_KMCTR = 0x0600,
- CRYPT_S390_PPNO = 0x0700
-};
-
-/*
- * function codes for KM (CIPHER MESSAGE) instruction
- * 0x80 is the decipher modifier bit
- */
-enum crypt_s390_km_func {
- KM_QUERY = CRYPT_S390_KM | 0x0,
- KM_DEA_ENCRYPT = CRYPT_S390_KM | 0x1,
- KM_DEA_DECRYPT = CRYPT_S390_KM | 0x1 | 0x80,
- KM_TDEA_128_ENCRYPT = CRYPT_S390_KM | 0x2,
- KM_TDEA_128_DECRYPT = CRYPT_S390_KM | 0x2 | 0x80,
- KM_TDEA_192_ENCRYPT = CRYPT_S390_KM | 0x3,
- KM_TDEA_192_DECRYPT = CRYPT_S390_KM | 0x3 | 0x80,
- KM_AES_128_ENCRYPT = CRYPT_S390_KM | 0x12,
- KM_AES_128_DECRYPT = CRYPT_S390_KM | 0x12 | 0x80,
- KM_AES_192_ENCRYPT = CRYPT_S390_KM | 0x13,
- KM_AES_192_DECRYPT = CRYPT_S390_KM | 0x13 | 0x80,
- KM_AES_256_ENCRYPT = CRYPT_S390_KM | 0x14,
- KM_AES_256_DECRYPT = CRYPT_S390_KM | 0x14 | 0x80,
- KM_XTS_128_ENCRYPT = CRYPT_S390_KM | 0x32,
- KM_XTS_128_DECRYPT = CRYPT_S390_KM | 0x32 | 0x80,
- KM_XTS_256_ENCRYPT = CRYPT_S390_KM | 0x34,
- KM_XTS_256_DECRYPT = CRYPT_S390_KM | 0x34 | 0x80,
-};
-
-/*
- * function codes for KMC (CIPHER MESSAGE WITH CHAINING)
- * instruction
- */
-enum crypt_s390_kmc_func {
- KMC_QUERY = CRYPT_S390_KMC | 0x0,
- KMC_DEA_ENCRYPT = CRYPT_S390_KMC | 0x1,
- KMC_DEA_DECRYPT = CRYPT_S390_KMC | 0x1 | 0x80,
- KMC_TDEA_128_ENCRYPT = CRYPT_S390_KMC | 0x2,
- KMC_TDEA_128_DECRYPT = CRYPT_S390_KMC | 0x2 | 0x80,
- KMC_TDEA_192_ENCRYPT = CRYPT_S390_KMC | 0x3,
- KMC_TDEA_192_DECRYPT = CRYPT_S390_KMC | 0x3 | 0x80,
- KMC_AES_128_ENCRYPT = CRYPT_S390_KMC | 0x12,
- KMC_AES_128_DECRYPT = CRYPT_S390_KMC | 0x12 | 0x80,
- KMC_AES_192_ENCRYPT = CRYPT_S390_KMC | 0x13,
- KMC_AES_192_DECRYPT = CRYPT_S390_KMC | 0x13 | 0x80,
- KMC_AES_256_ENCRYPT = CRYPT_S390_KMC | 0x14,
- KMC_AES_256_DECRYPT = CRYPT_S390_KMC | 0x14 | 0x80,
- KMC_PRNG = CRYPT_S390_KMC | 0x43,
-};
-
-/*
- * function codes for KMCTR (CIPHER MESSAGE WITH COUNTER)
- * instruction
- */
-enum crypt_s390_kmctr_func {
- KMCTR_QUERY = CRYPT_S390_KMCTR | 0x0,
- KMCTR_DEA_ENCRYPT = CRYPT_S390_KMCTR | 0x1,
- KMCTR_DEA_DECRYPT = CRYPT_S390_KMCTR | 0x1 | 0x80,
- KMCTR_TDEA_128_ENCRYPT = CRYPT_S390_KMCTR | 0x2,
- KMCTR_TDEA_128_DECRYPT = CRYPT_S390_KMCTR | 0x2 | 0x80,
- KMCTR_TDEA_192_ENCRYPT = CRYPT_S390_KMCTR | 0x3,
- KMCTR_TDEA_192_DECRYPT = CRYPT_S390_KMCTR | 0x3 | 0x80,
- KMCTR_AES_128_ENCRYPT = CRYPT_S390_KMCTR | 0x12,
- KMCTR_AES_128_DECRYPT = CRYPT_S390_KMCTR | 0x12 | 0x80,
- KMCTR_AES_192_ENCRYPT = CRYPT_S390_KMCTR | 0x13,
- KMCTR_AES_192_DECRYPT = CRYPT_S390_KMCTR | 0x13 | 0x80,
- KMCTR_AES_256_ENCRYPT = CRYPT_S390_KMCTR | 0x14,
- KMCTR_AES_256_DECRYPT = CRYPT_S390_KMCTR | 0x14 | 0x80,
-};
-
-/*
- * function codes for KIMD (COMPUTE INTERMEDIATE MESSAGE DIGEST)
- * instruction
- */
-enum crypt_s390_kimd_func {
- KIMD_QUERY = CRYPT_S390_KIMD | 0,
- KIMD_SHA_1 = CRYPT_S390_KIMD | 1,
- KIMD_SHA_256 = CRYPT_S390_KIMD | 2,
- KIMD_SHA_512 = CRYPT_S390_KIMD | 3,
- KIMD_GHASH = CRYPT_S390_KIMD | 65,
-};
-
-/*
- * function codes for KLMD (COMPUTE LAST MESSAGE DIGEST)
- * instruction
- */
-enum crypt_s390_klmd_func {
- KLMD_QUERY = CRYPT_S390_KLMD | 0,
- KLMD_SHA_1 = CRYPT_S390_KLMD | 1,
- KLMD_SHA_256 = CRYPT_S390_KLMD | 2,
- KLMD_SHA_512 = CRYPT_S390_KLMD | 3,
-};
-
-/*
- * function codes for KMAC (COMPUTE MESSAGE AUTHENTICATION CODE)
- * instruction
- */
-enum crypt_s390_kmac_func {
- KMAC_QUERY = CRYPT_S390_KMAC | 0,
- KMAC_DEA = CRYPT_S390_KMAC | 1,
- KMAC_TDEA_128 = CRYPT_S390_KMAC | 2,
- KMAC_TDEA_192 = CRYPT_S390_KMAC | 3
-};
-
-/*
- * function codes for PPNO (PERFORM PSEUDORANDOM NUMBER
- * OPERATION) instruction
- */
-enum crypt_s390_ppno_func {
- PPNO_QUERY = CRYPT_S390_PPNO | 0,
- PPNO_SHA512_DRNG_GEN = CRYPT_S390_PPNO | 3,
- PPNO_SHA512_DRNG_SEED = CRYPT_S390_PPNO | 0x83
-};
-
-/**
- * crypt_s390_km:
- * @func: the function code passed to KM; see crypt_s390_km_func
- * @param: address of parameter block; see POP for details on each func
- * @dest: address of destination memory area
- * @src: address of source memory area
- * @src_len: length of src operand in bytes
- *
- * Executes the KM (CIPHER MESSAGE) operation of the CPU.
- *
- * Returns -1 for failure, 0 for the query func, number of processed
- * bytes for encryption/decryption funcs
- */
-static inline int crypt_s390_km(long func, void *param,
- u8 *dest, const u8 *src, long src_len)
-{
- register long __func asm("0") = func & CRYPT_S390_FUNC_MASK;
- register void *__param asm("1") = param;
- register const u8 *__src asm("2") = src;
- register long __src_len asm("3") = src_len;
- register u8 *__dest asm("4") = dest;
- int ret;
-
- asm volatile(
- "0: .insn rre,0xb92e0000,%3,%1\n" /* KM opcode */
- "1: brc 1,0b\n" /* handle partial completion */
- " la %0,0\n"
- "2:\n"
- EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
- : "=d" (ret), "+a" (__src), "+d" (__src_len), "+a" (__dest)
- : "d" (__func), "a" (__param), "0" (-1) : "cc", "memory");
- if (ret < 0)
- return ret;
- return (func & CRYPT_S390_FUNC_MASK) ? src_len - __src_len : __src_len;
-}
-
-/**
- * crypt_s390_kmc:
- * @func: the function code passed to KM; see crypt_s390_kmc_func
- * @param: address of parameter block; see POP for details on each func
- * @dest: address of destination memory area
- * @src: address of source memory area
- * @src_len: length of src operand in bytes
- *
- * Executes the KMC (CIPHER MESSAGE WITH CHAINING) operation of the CPU.
- *
- * Returns -1 for failure, 0 for the query func, number of processed
- * bytes for encryption/decryption funcs
- */
-static inline int crypt_s390_kmc(long func, void *param,
- u8 *dest, const u8 *src, long src_len)
-{
- register long __func asm("0") = func & CRYPT_S390_FUNC_MASK;
- register void *__param asm("1") = param;
- register const u8 *__src asm("2") = src;
- register long __src_len asm("3") = src_len;
- register u8 *__dest asm("4") = dest;
- int ret;
-
- asm volatile(
- "0: .insn rre,0xb92f0000,%3,%1\n" /* KMC opcode */
- "1: brc 1,0b\n" /* handle partial completion */
- " la %0,0\n"
- "2:\n"
- EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
- : "=d" (ret), "+a" (__src), "+d" (__src_len), "+a" (__dest)
- : "d" (__func), "a" (__param), "0" (-1) : "cc", "memory");
- if (ret < 0)
- return ret;
- return (func & CRYPT_S390_FUNC_MASK) ? src_len - __src_len : __src_len;
-}
-
-/**
- * crypt_s390_kimd:
- * @func: the function code passed to KM; see crypt_s390_kimd_func
- * @param: address of parameter block; see POP for details on each func
- * @src: address of source memory area
- * @src_len: length of src operand in bytes
- *
- * Executes the KIMD (COMPUTE INTERMEDIATE MESSAGE DIGEST) operation
- * of the CPU.
- *
- * Returns -1 for failure, 0 for the query func, number of processed
- * bytes for digest funcs
- */
-static inline int crypt_s390_kimd(long func, void *param,
- const u8 *src, long src_len)
-{
- register long __func asm("0") = func & CRYPT_S390_FUNC_MASK;
- register void *__param asm("1") = param;
- register const u8 *__src asm("2") = src;
- register long __src_len asm("3") = src_len;
- int ret;
-
- asm volatile(
- "0: .insn rre,0xb93e0000,%1,%1\n" /* KIMD opcode */
- "1: brc 1,0b\n" /* handle partial completion */
- " la %0,0\n"
- "2:\n"
- EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
- : "=d" (ret), "+a" (__src), "+d" (__src_len)
- : "d" (__func), "a" (__param), "0" (-1) : "cc", "memory");
- if (ret < 0)
- return ret;
- return (func & CRYPT_S390_FUNC_MASK) ? src_len - __src_len : __src_len;
-}
-
-/**
- * crypt_s390_klmd:
- * @func: the function code passed to KM; see crypt_s390_klmd_func
- * @param: address of parameter block; see POP for details on each func
- * @src: address of source memory area
- * @src_len: length of src operand in bytes
- *
- * Executes the KLMD (COMPUTE LAST MESSAGE DIGEST) operation of the CPU.
- *
- * Returns -1 for failure, 0 for the query func, number of processed
- * bytes for digest funcs
- */
-static inline int crypt_s390_klmd(long func, void *param,
- const u8 *src, long src_len)
-{
- register long __func asm("0") = func & CRYPT_S390_FUNC_MASK;
- register void *__param asm("1") = param;
- register const u8 *__src asm("2") = src;
- register long __src_len asm("3") = src_len;
- int ret;
-
- asm volatile(
- "0: .insn rre,0xb93f0000,%1,%1\n" /* KLMD opcode */
- "1: brc 1,0b\n" /* handle partial completion */
- " la %0,0\n"
- "2:\n"
- EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
- : "=d" (ret), "+a" (__src), "+d" (__src_len)
- : "d" (__func), "a" (__param), "0" (-1) : "cc", "memory");
- if (ret < 0)
- return ret;
- return (func & CRYPT_S390_FUNC_MASK) ? src_len - __src_len : __src_len;
-}
-
-/**
- * crypt_s390_kmac:
- * @func: the function code passed to KM; see crypt_s390_klmd_func
- * @param: address of parameter block; see POP for details on each func
- * @src: address of source memory area
- * @src_len: length of src operand in bytes
- *
- * Executes the KMAC (COMPUTE MESSAGE AUTHENTICATION CODE) operation
- * of the CPU.
- *
- * Returns -1 for failure, 0 for the query func, number of processed
- * bytes for digest funcs
- */
-static inline int crypt_s390_kmac(long func, void *param,
- const u8 *src, long src_len)
-{
- register long __func asm("0") = func & CRYPT_S390_FUNC_MASK;
- register void *__param asm("1") = param;
- register const u8 *__src asm("2") = src;
- register long __src_len asm("3") = src_len;
- int ret;
-
- asm volatile(
- "0: .insn rre,0xb91e0000,%1,%1\n" /* KLAC opcode */
- "1: brc 1,0b\n" /* handle partial completion */
- " la %0,0\n"
- "2:\n"
- EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
- : "=d" (ret), "+a" (__src), "+d" (__src_len)
- : "d" (__func), "a" (__param), "0" (-1) : "cc", "memory");
- if (ret < 0)
- return ret;
- return (func & CRYPT_S390_FUNC_MASK) ? src_len - __src_len : __src_len;
-}
-
-/**
- * crypt_s390_kmctr:
- * @func: the function code passed to KMCTR; see crypt_s390_kmctr_func
- * @param: address of parameter block; see POP for details on each func
- * @dest: address of destination memory area
- * @src: address of source memory area
- * @src_len: length of src operand in bytes
- * @counter: address of counter value
- *
- * Executes the KMCTR (CIPHER MESSAGE WITH COUNTER) operation of the CPU.
- *
- * Returns -1 for failure, 0 for the query func, number of processed
- * bytes for encryption/decryption funcs
- */
-static inline int crypt_s390_kmctr(long func, void *param, u8 *dest,
- const u8 *src, long src_len, u8 *counter)
-{
- register long __func asm("0") = func & CRYPT_S390_FUNC_MASK;
- register void *__param asm("1") = param;
- register const u8 *__src asm("2") = src;
- register long __src_len asm("3") = src_len;
- register u8 *__dest asm("4") = dest;
- register u8 *__ctr asm("6") = counter;
- int ret = -1;
-
- asm volatile(
- "0: .insn rrf,0xb92d0000,%3,%1,%4,0\n" /* KMCTR opcode */
- "1: brc 1,0b\n" /* handle partial completion */
- " la %0,0\n"
- "2:\n"
- EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
- : "+d" (ret), "+a" (__src), "+d" (__src_len), "+a" (__dest),
- "+a" (__ctr)
- : "d" (__func), "a" (__param) : "cc", "memory");
- if (ret < 0)
- return ret;
- return (func & CRYPT_S390_FUNC_MASK) ? src_len - __src_len : __src_len;
-}
-
-/**
- * crypt_s390_ppno:
- * @func: the function code passed to PPNO; see crypt_s390_ppno_func
- * @param: address of parameter block; see POP for details on each func
- * @dest: address of destination memory area
- * @dest_len: size of destination memory area in bytes
- * @seed: address of seed data
- * @seed_len: size of seed data in bytes
- *
- * Executes the PPNO (PERFORM PSEUDORANDOM NUMBER OPERATION)
- * operation of the CPU.
- *
- * Returns -1 for failure, 0 for the query func, number of random
- * bytes stored in dest buffer for generate function
- */
-static inline int crypt_s390_ppno(long func, void *param,
- u8 *dest, long dest_len,
- const u8 *seed, long seed_len)
-{
- register long __func asm("0") = func & CRYPT_S390_FUNC_MASK;
- register void *__param asm("1") = param; /* param block (240 bytes) */
- register u8 *__dest asm("2") = dest; /* buf for recv random bytes */
- register long __dest_len asm("3") = dest_len; /* requested random bytes */
- register const u8 *__seed asm("4") = seed; /* buf with seed data */
- register long __seed_len asm("5") = seed_len; /* bytes in seed buf */
- int ret = -1;
-
- asm volatile (
- "0: .insn rre,0xb93c0000,%1,%5\n" /* PPNO opcode */
- "1: brc 1,0b\n" /* handle partial completion */
- " la %0,0\n"
- "2:\n"
- EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
- : "+d" (ret), "+a"(__dest), "+d"(__dest_len)
- : "d"(__func), "a"(__param), "a"(__seed), "d"(__seed_len)
- : "cc", "memory");
- if (ret < 0)
- return ret;
- return (func & CRYPT_S390_FUNC_MASK) ? dest_len - __dest_len : 0;
-}
-
-/**
- * crypt_s390_func_available:
- * @func: the function code of the specific function; 0 if op in general
- *
- * Tests if a specific crypto function is implemented on the machine.
- *
- * Returns 1 if func available; 0 if func or op in general not available
- */
-static inline int crypt_s390_func_available(int func,
- unsigned int facility_mask)
-{
- unsigned char status[16];
- int ret;
-
- if (facility_mask & CRYPT_S390_MSA && !test_facility(17))
- return 0;
- if (facility_mask & CRYPT_S390_MSA3 && !test_facility(76))
- return 0;
- if (facility_mask & CRYPT_S390_MSA4 && !test_facility(77))
- return 0;
- if (facility_mask & CRYPT_S390_MSA5 && !test_facility(57))
- return 0;
-
- switch (func & CRYPT_S390_OP_MASK) {
- case CRYPT_S390_KM:
- ret = crypt_s390_km(KM_QUERY, &status, NULL, NULL, 0);
- break;
- case CRYPT_S390_KMC:
- ret = crypt_s390_kmc(KMC_QUERY, &status, NULL, NULL, 0);
- break;
- case CRYPT_S390_KIMD:
- ret = crypt_s390_kimd(KIMD_QUERY, &status, NULL, 0);
- break;
- case CRYPT_S390_KLMD:
- ret = crypt_s390_klmd(KLMD_QUERY, &status, NULL, 0);
- break;
- case CRYPT_S390_KMAC:
- ret = crypt_s390_kmac(KMAC_QUERY, &status, NULL, 0);
- break;
- case CRYPT_S390_KMCTR:
- ret = crypt_s390_kmctr(KMCTR_QUERY, &status,
- NULL, NULL, 0, NULL);
- break;
- case CRYPT_S390_PPNO:
- ret = crypt_s390_ppno(PPNO_QUERY, &status,
- NULL, 0, NULL, 0);
- break;
- default:
- return 0;
- }
- if (ret < 0)
- return 0;
- func &= CRYPT_S390_FUNC_MASK;
- func &= 0x7f; /* mask modifier bit */
- return (status[func >> 3] & (0x80 >> (func & 7))) != 0;
-}
-
-/**
- * crypt_s390_pcc:
- * @func: the function code passed to KM; see crypt_s390_km_func
- * @param: address of parameter block; see POP for details on each func
- *
- * Executes the PCC (PERFORM CRYPTOGRAPHIC COMPUTATION) operation of the CPU.
- *
- * Returns -1 for failure, 0 for success.
- */
-static inline int crypt_s390_pcc(long func, void *param)
-{
- register long __func asm("0") = func & 0x7f; /* encrypt or decrypt */
- register void *__param asm("1") = param;
- int ret = -1;
-
- asm volatile(
- "0: .insn rre,0xb92c0000,0,0\n" /* PCC opcode */
- "1: brc 1,0b\n" /* handle partial completion */
- " la %0,0\n"
- "2:\n"
- EX_TABLE(0b, 2b) EX_TABLE(1b, 2b)
- : "+d" (ret)
- : "d" (__func), "a" (__param) : "cc", "memory");
- return ret;
-}
-
-#endif /* _CRYPTO_ARCH_S390_CRYPT_S390_H */
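The deleted header above encodes each operation as an op value in the high byte plus a function code in the low byte, with 0x80 acting as the decipher modifier, and crypt_s390_func_available() tests the code against the 16-byte status block returned by the query function. A standalone sketch of just that bit arithmetic (func_available is a made-up name; the in-kernel replacement is the cpacf_query() calls visible in the other hunks):

#include <stdio.h>
#include <string.h>

/* Is function code 'func' set in the 128-bit query status bitmap? */
static int func_available(const unsigned char status[16], unsigned int func)
{
	func &= 0x7f;				/* strip the 0x80 decipher modifier */
	return (status[func >> 3] & (0x80 >> (func & 7))) != 0;
}

int main(void)
{
	unsigned char status[16];

	memset(status, 0, sizeof(status));
	status[2] = 0x20;	/* pretend function code 0x12 (AES-128) is installed */

	printf("KM_AES_128_ENCRYPT (0x12): %d\n", func_available(status, 0x12));
	printf("KM_AES_128_DECRYPT (0x92): %d\n", func_available(status, 0x12 | 0x80));
	return 0;
}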
diff --git a/arch/s390/crypto/des_s390.c b/arch/s390/crypto/des_s390.c
index fba1c10a2dd0..697e71a75fc2 100644
--- a/arch/s390/crypto/des_s390.c
+++ b/arch/s390/crypto/des_s390.c
@@ -20,8 +20,7 @@
#include <linux/crypto.h>
#include <crypto/algapi.h>
#include <crypto/des.h>
-
-#include "crypt_s390.h"
+#include <asm/cpacf.h>
#define DES3_KEY_SIZE (3 * DES_KEY_SIZE)
@@ -54,20 +53,20 @@ static void des_encrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{
struct s390_des_ctx *ctx = crypto_tfm_ctx(tfm);
- crypt_s390_km(KM_DEA_ENCRYPT, ctx->key, out, in, DES_BLOCK_SIZE);
+ cpacf_km(CPACF_KM_DEA_ENC, ctx->key, out, in, DES_BLOCK_SIZE);
}
static void des_decrypt(struct crypto_tfm *tfm, u8 *out, const u8 *in)
{
struct s390_des_ctx *ctx = crypto_tfm_ctx(tfm);
- crypt_s390_km(KM_DEA_DECRYPT, ctx->key, out, in, DES_BLOCK_SIZE);
+ cpacf_km(CPACF_KM_DEA_DEC, ctx->key, out, in, DES_BLOCK_SIZE);
}
static struct crypto_alg des_alg = {
.cra_name = "des",
.cra_driver_name = "des-s390",
- .cra_priority = CRYPT_S390_PRIORITY,
+ .cra_priority = 300,
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
.cra_blocksize = DES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct s390_des_ctx),
@@ -95,7 +94,7 @@ static int ecb_desall_crypt(struct blkcipher_desc *desc, long func,
u8 *out = walk->dst.virt.addr;
u8 *in = walk->src.virt.addr;
- ret = crypt_s390_km(func, key, out, in, n);
+ ret = cpacf_km(func, key, out, in, n);
if (ret < 0 || ret != n)
return -EIO;
@@ -128,7 +127,7 @@ static int cbc_desall_crypt(struct blkcipher_desc *desc, long func,
u8 *out = walk->dst.virt.addr;
u8 *in = walk->src.virt.addr;
- ret = crypt_s390_kmc(func, &param, out, in, n);
+ ret = cpacf_kmc(func, &param, out, in, n);
if (ret < 0 || ret != n)
return -EIO;
@@ -149,7 +148,7 @@ static int ecb_des_encrypt(struct blkcipher_desc *desc,
struct blkcipher_walk walk;
blkcipher_walk_init(&walk, dst, src, nbytes);
- return ecb_desall_crypt(desc, KM_DEA_ENCRYPT, ctx->key, &walk);
+ return ecb_desall_crypt(desc, CPACF_KM_DEA_ENC, ctx->key, &walk);
}
static int ecb_des_decrypt(struct blkcipher_desc *desc,
@@ -160,13 +159,13 @@ static int ecb_des_decrypt(struct blkcipher_desc *desc,
struct blkcipher_walk walk;
blkcipher_walk_init(&walk, dst, src, nbytes);
- return ecb_desall_crypt(desc, KM_DEA_DECRYPT, ctx->key, &walk);
+ return ecb_desall_crypt(desc, CPACF_KM_DEA_DEC, ctx->key, &walk);
}
static struct crypto_alg ecb_des_alg = {
.cra_name = "ecb(des)",
.cra_driver_name = "ecb-des-s390",
- .cra_priority = CRYPT_S390_COMPOSITE_PRIORITY,
+ .cra_priority = 400, /* combo: des + ecb */
.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
.cra_blocksize = DES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct s390_des_ctx),
@@ -190,7 +189,7 @@ static int cbc_des_encrypt(struct blkcipher_desc *desc,
struct blkcipher_walk walk;
blkcipher_walk_init(&walk, dst, src, nbytes);
- return cbc_desall_crypt(desc, KMC_DEA_ENCRYPT, &walk);
+ return cbc_desall_crypt(desc, CPACF_KMC_DEA_ENC, &walk);
}
static int cbc_des_decrypt(struct blkcipher_desc *desc,
@@ -200,13 +199,13 @@ static int cbc_des_decrypt(struct blkcipher_desc *desc,
struct blkcipher_walk walk;
blkcipher_walk_init(&walk, dst, src, nbytes);
- return cbc_desall_crypt(desc, KMC_DEA_DECRYPT, &walk);
+ return cbc_desall_crypt(desc, CPACF_KMC_DEA_DEC, &walk);
}
static struct crypto_alg cbc_des_alg = {
.cra_name = "cbc(des)",
.cra_driver_name = "cbc-des-s390",
- .cra_priority = CRYPT_S390_COMPOSITE_PRIORITY,
+ .cra_priority = 400, /* combo: des + cbc */
.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
.cra_blocksize = DES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct s390_des_ctx),
@@ -258,20 +257,20 @@ static void des3_encrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
struct s390_des_ctx *ctx = crypto_tfm_ctx(tfm);
- crypt_s390_km(KM_TDEA_192_ENCRYPT, ctx->key, dst, src, DES_BLOCK_SIZE);
+ cpacf_km(CPACF_KM_TDEA_192_ENC, ctx->key, dst, src, DES_BLOCK_SIZE);
}
static void des3_decrypt(struct crypto_tfm *tfm, u8 *dst, const u8 *src)
{
struct s390_des_ctx *ctx = crypto_tfm_ctx(tfm);
- crypt_s390_km(KM_TDEA_192_DECRYPT, ctx->key, dst, src, DES_BLOCK_SIZE);
+ cpacf_km(CPACF_KM_TDEA_192_DEC, ctx->key, dst, src, DES_BLOCK_SIZE);
}
static struct crypto_alg des3_alg = {
.cra_name = "des3_ede",
.cra_driver_name = "des3_ede-s390",
- .cra_priority = CRYPT_S390_PRIORITY,
+ .cra_priority = 300,
.cra_flags = CRYPTO_ALG_TYPE_CIPHER,
.cra_blocksize = DES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct s390_des_ctx),
@@ -295,7 +294,7 @@ static int ecb_des3_encrypt(struct blkcipher_desc *desc,
struct blkcipher_walk walk;
blkcipher_walk_init(&walk, dst, src, nbytes);
- return ecb_desall_crypt(desc, KM_TDEA_192_ENCRYPT, ctx->key, &walk);
+ return ecb_desall_crypt(desc, CPACF_KM_TDEA_192_ENC, ctx->key, &walk);
}
static int ecb_des3_decrypt(struct blkcipher_desc *desc,
@@ -306,13 +305,13 @@ static int ecb_des3_decrypt(struct blkcipher_desc *desc,
struct blkcipher_walk walk;
blkcipher_walk_init(&walk, dst, src, nbytes);
- return ecb_desall_crypt(desc, KM_TDEA_192_DECRYPT, ctx->key, &walk);
+ return ecb_desall_crypt(desc, CPACF_KM_TDEA_192_DEC, ctx->key, &walk);
}
static struct crypto_alg ecb_des3_alg = {
.cra_name = "ecb(des3_ede)",
.cra_driver_name = "ecb-des3_ede-s390",
- .cra_priority = CRYPT_S390_COMPOSITE_PRIORITY,
+ .cra_priority = 400, /* combo: des3 + ecb */
.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
.cra_blocksize = DES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct s390_des_ctx),
@@ -336,7 +335,7 @@ static int cbc_des3_encrypt(struct blkcipher_desc *desc,
struct blkcipher_walk walk;
blkcipher_walk_init(&walk, dst, src, nbytes);
- return cbc_desall_crypt(desc, KMC_TDEA_192_ENCRYPT, &walk);
+ return cbc_desall_crypt(desc, CPACF_KMC_TDEA_192_ENC, &walk);
}
static int cbc_des3_decrypt(struct blkcipher_desc *desc,
@@ -346,13 +345,13 @@ static int cbc_des3_decrypt(struct blkcipher_desc *desc,
struct blkcipher_walk walk;
blkcipher_walk_init(&walk, dst, src, nbytes);
- return cbc_desall_crypt(desc, KMC_TDEA_192_DECRYPT, &walk);
+ return cbc_desall_crypt(desc, CPACF_KMC_TDEA_192_DEC, &walk);
}
static struct crypto_alg cbc_des3_alg = {
.cra_name = "cbc(des3_ede)",
.cra_driver_name = "cbc-des3_ede-s390",
- .cra_priority = CRYPT_S390_COMPOSITE_PRIORITY,
+ .cra_priority = 400, /* combo: des3 + cbc */
.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
.cra_blocksize = DES_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct s390_des_ctx),
@@ -407,8 +406,7 @@ static int ctr_desall_crypt(struct blkcipher_desc *desc, long func,
n = __ctrblk_init(ctrptr, nbytes);
else
n = DES_BLOCK_SIZE;
- ret = crypt_s390_kmctr(func, ctx->key, out, in,
- n, ctrptr);
+ ret = cpacf_kmctr(func, ctx->key, out, in, n, ctrptr);
if (ret < 0 || ret != n) {
if (ctrptr == ctrblk)
spin_unlock(&ctrblk_lock);
@@ -438,8 +436,8 @@ static int ctr_desall_crypt(struct blkcipher_desc *desc, long func,
if (nbytes) {
out = walk->dst.virt.addr;
in = walk->src.virt.addr;
- ret = crypt_s390_kmctr(func, ctx->key, buf, in,
- DES_BLOCK_SIZE, ctrbuf);
+ ret = cpacf_kmctr(func, ctx->key, buf, in,
+ DES_BLOCK_SIZE, ctrbuf);
if (ret < 0 || ret != DES_BLOCK_SIZE)
return -EIO;
memcpy(out, buf, nbytes);
@@ -458,7 +456,7 @@ static int ctr_des_encrypt(struct blkcipher_desc *desc,
struct blkcipher_walk walk;
blkcipher_walk_init(&walk, dst, src, nbytes);
- return ctr_desall_crypt(desc, KMCTR_DEA_ENCRYPT, ctx, &walk);
+ return ctr_desall_crypt(desc, CPACF_KMCTR_DEA_ENC, ctx, &walk);
}
static int ctr_des_decrypt(struct blkcipher_desc *desc,
@@ -469,13 +467,13 @@ static int ctr_des_decrypt(struct blkcipher_desc *desc,
struct blkcipher_walk walk;
blkcipher_walk_init(&walk, dst, src, nbytes);
- return ctr_desall_crypt(desc, KMCTR_DEA_DECRYPT, ctx, &walk);
+ return ctr_desall_crypt(desc, CPACF_KMCTR_DEA_DEC, ctx, &walk);
}
static struct crypto_alg ctr_des_alg = {
.cra_name = "ctr(des)",
.cra_driver_name = "ctr-des-s390",
- .cra_priority = CRYPT_S390_COMPOSITE_PRIORITY,
+ .cra_priority = 400, /* combo: des + ctr */
.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
.cra_blocksize = 1,
.cra_ctxsize = sizeof(struct s390_des_ctx),
@@ -501,7 +499,7 @@ static int ctr_des3_encrypt(struct blkcipher_desc *desc,
struct blkcipher_walk walk;
blkcipher_walk_init(&walk, dst, src, nbytes);
- return ctr_desall_crypt(desc, KMCTR_TDEA_192_ENCRYPT, ctx, &walk);
+ return ctr_desall_crypt(desc, CPACF_KMCTR_TDEA_192_ENC, ctx, &walk);
}
static int ctr_des3_decrypt(struct blkcipher_desc *desc,
@@ -512,13 +510,13 @@ static int ctr_des3_decrypt(struct blkcipher_desc *desc,
struct blkcipher_walk walk;
blkcipher_walk_init(&walk, dst, src, nbytes);
- return ctr_desall_crypt(desc, KMCTR_TDEA_192_DECRYPT, ctx, &walk);
+ return ctr_desall_crypt(desc, CPACF_KMCTR_TDEA_192_DEC, ctx, &walk);
}
static struct crypto_alg ctr_des3_alg = {
.cra_name = "ctr(des3_ede)",
.cra_driver_name = "ctr-des3_ede-s390",
- .cra_priority = CRYPT_S390_COMPOSITE_PRIORITY,
+ .cra_priority = 400, /* combo: des3 + ctr */
.cra_flags = CRYPTO_ALG_TYPE_BLKCIPHER,
.cra_blocksize = 1,
.cra_ctxsize = sizeof(struct s390_des_ctx),
@@ -540,8 +538,8 @@ static int __init des_s390_init(void)
{
int ret;
- if (!crypt_s390_func_available(KM_DEA_ENCRYPT, CRYPT_S390_MSA) ||
- !crypt_s390_func_available(KM_TDEA_192_ENCRYPT, CRYPT_S390_MSA))
+ if (!cpacf_query(CPACF_KM, CPACF_KM_DEA_ENC) ||
+ !cpacf_query(CPACF_KM, CPACF_KM_TDEA_192_ENC))
return -EOPNOTSUPP;
ret = crypto_register_alg(&des_alg);
@@ -563,10 +561,8 @@ static int __init des_s390_init(void)
if (ret)
goto cbc_des3_err;
- if (crypt_s390_func_available(KMCTR_DEA_ENCRYPT,
- CRYPT_S390_MSA | CRYPT_S390_MSA4) &&
- crypt_s390_func_available(KMCTR_TDEA_192_ENCRYPT,
- CRYPT_S390_MSA | CRYPT_S390_MSA4)) {
+ if (cpacf_query(CPACF_KMCTR, CPACF_KMCTR_DEA_ENC) &&
+ cpacf_query(CPACF_KMCTR, CPACF_KMCTR_TDEA_192_ENC)) {
ret = crypto_register_alg(&ctr_des_alg);
if (ret)
goto ctr_des_err;
diff --git a/arch/s390/crypto/ghash_s390.c b/arch/s390/crypto/ghash_s390.c
index 26e14efd30a7..ab68de72e795 100644
--- a/arch/s390/crypto/ghash_s390.c
+++ b/arch/s390/crypto/ghash_s390.c
@@ -10,8 +10,7 @@
#include <crypto/internal/hash.h>
#include <linux/module.h>
#include <linux/cpufeature.h>
-
-#include "crypt_s390.h"
+#include <asm/cpacf.h>
#define GHASH_BLOCK_SIZE 16
#define GHASH_DIGEST_SIZE 16
@@ -72,8 +71,8 @@ static int ghash_update(struct shash_desc *desc,
src += n;
if (!dctx->bytes) {
- ret = crypt_s390_kimd(KIMD_GHASH, dctx, buf,
- GHASH_BLOCK_SIZE);
+ ret = cpacf_kimd(CPACF_KIMD_GHASH, dctx, buf,
+ GHASH_BLOCK_SIZE);
if (ret != GHASH_BLOCK_SIZE)
return -EIO;
}
@@ -81,7 +80,7 @@ static int ghash_update(struct shash_desc *desc,
n = srclen & ~(GHASH_BLOCK_SIZE - 1);
if (n) {
- ret = crypt_s390_kimd(KIMD_GHASH, dctx, src, n);
+ ret = cpacf_kimd(CPACF_KIMD_GHASH, dctx, src, n);
if (ret != n)
return -EIO;
src += n;
@@ -106,7 +105,7 @@ static int ghash_flush(struct ghash_desc_ctx *dctx)
memset(pos, 0, dctx->bytes);
- ret = crypt_s390_kimd(KIMD_GHASH, dctx, buf, GHASH_BLOCK_SIZE);
+ ret = cpacf_kimd(CPACF_KIMD_GHASH, dctx, buf, GHASH_BLOCK_SIZE);
if (ret != GHASH_BLOCK_SIZE)
return -EIO;
@@ -137,7 +136,7 @@ static struct shash_alg ghash_alg = {
.base = {
.cra_name = "ghash",
.cra_driver_name = "ghash-s390",
- .cra_priority = CRYPT_S390_PRIORITY,
+ .cra_priority = 300,
.cra_flags = CRYPTO_ALG_TYPE_SHASH,
.cra_blocksize = GHASH_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct ghash_ctx),
@@ -147,8 +146,7 @@ static struct shash_alg ghash_alg = {
static int __init ghash_mod_init(void)
{
- if (!crypt_s390_func_available(KIMD_GHASH,
- CRYPT_S390_MSA | CRYPT_S390_MSA4))
+ if (!cpacf_query(CPACF_KIMD, CPACF_KIMD_GHASH))
return -EOPNOTSUPP;
return crypto_register_shash(&ghash_alg);
diff --git a/arch/s390/crypto/prng.c b/arch/s390/crypto/prng.c
index d750cc0dfe30..41527b113f5a 100644
--- a/arch/s390/crypto/prng.c
+++ b/arch/s390/crypto/prng.c
@@ -23,8 +23,7 @@
#include <asm/debug.h>
#include <asm/uaccess.h>
#include <asm/timex.h>
-
-#include "crypt_s390.h"
+#include <asm/cpacf.h>
MODULE_LICENSE("GPL");
MODULE_AUTHOR("IBM Corporation");
@@ -136,8 +135,8 @@ static int generate_entropy(u8 *ebuf, size_t nbytes)
else
h = ebuf;
/* generate sha256 from this page */
- if (crypt_s390_kimd(KIMD_SHA_256, h,
- pg, PAGE_SIZE) != PAGE_SIZE) {
+ if (cpacf_kimd(CPACF_KIMD_SHA_256, h,
+ pg, PAGE_SIZE) != PAGE_SIZE) {
prng_errorflag = PRNG_GEN_ENTROPY_FAILED;
ret = -EIO;
goto out;
@@ -164,9 +163,9 @@ static void prng_tdes_add_entropy(void)
int ret;
for (i = 0; i < 16; i++) {
- ret = crypt_s390_kmc(KMC_PRNG, prng_data->prngws.parm_block,
- (char *)entropy, (char *)entropy,
- sizeof(entropy));
+ ret = cpacf_kmc(CPACF_KMC_PRNG, prng_data->prngws.parm_block,
+ (char *)entropy, (char *)entropy,
+ sizeof(entropy));
BUG_ON(ret < 0 || ret != sizeof(entropy));
memcpy(prng_data->prngws.parm_block, entropy, sizeof(entropy));
}
@@ -311,9 +310,8 @@ static int __init prng_sha512_selftest(void)
memset(&ws, 0, sizeof(ws));
/* initial seed */
- ret = crypt_s390_ppno(PPNO_SHA512_DRNG_SEED,
- &ws, NULL, 0,
- seed, sizeof(seed));
+ ret = cpacf_ppno(CPACF_PPNO_SHA512_DRNG_SEED, &ws, NULL, 0,
+ seed, sizeof(seed));
if (ret < 0) {
pr_err("The prng self test seed operation for the "
"SHA-512 mode failed with rc=%d\n", ret);
@@ -331,18 +329,16 @@ static int __init prng_sha512_selftest(void)
}
/* generate random bytes */
- ret = crypt_s390_ppno(PPNO_SHA512_DRNG_GEN,
- &ws, buf, sizeof(buf),
- NULL, 0);
+ ret = cpacf_ppno(CPACF_PPNO_SHA512_DRNG_GEN,
+ &ws, buf, sizeof(buf), NULL, 0);
if (ret < 0) {
pr_err("The prng self test generate operation for "
"the SHA-512 mode failed with rc=%d\n", ret);
prng_errorflag = PRNG_SELFTEST_FAILED;
return -EIO;
}
- ret = crypt_s390_ppno(PPNO_SHA512_DRNG_GEN,
- &ws, buf, sizeof(buf),
- NULL, 0);
+ ret = cpacf_ppno(CPACF_PPNO_SHA512_DRNG_GEN,
+ &ws, buf, sizeof(buf), NULL, 0);
if (ret < 0) {
pr_err("The prng self test generate operation for "
"the SHA-512 mode failed with rc=%d\n", ret);
@@ -396,9 +392,8 @@ static int __init prng_sha512_instantiate(void)
get_tod_clock_ext(seed + 48);
/* initial seed of the ppno drng */
- ret = crypt_s390_ppno(PPNO_SHA512_DRNG_SEED,
- &prng_data->ppnows, NULL, 0,
- seed, sizeof(seed));
+ ret = cpacf_ppno(CPACF_PPNO_SHA512_DRNG_SEED,
+ &prng_data->ppnows, NULL, 0, seed, sizeof(seed));
if (ret < 0) {
prng_errorflag = PRNG_SEED_FAILED;
ret = -EIO;
@@ -409,11 +404,9 @@ static int __init prng_sha512_instantiate(void)
bytes for the FIPS 140-2 Conditional Self Test */
if (fips_enabled) {
prng_data->prev = prng_data->buf + prng_chunk_size;
- ret = crypt_s390_ppno(PPNO_SHA512_DRNG_GEN,
- &prng_data->ppnows,
- prng_data->prev,
- prng_chunk_size,
- NULL, 0);
+ ret = cpacf_ppno(CPACF_PPNO_SHA512_DRNG_GEN,
+ &prng_data->ppnows,
+ prng_data->prev, prng_chunk_size, NULL, 0);
if (ret < 0 || ret != prng_chunk_size) {
prng_errorflag = PRNG_GEN_FAILED;
ret = -EIO;
@@ -447,9 +440,8 @@ static int prng_sha512_reseed(void)
return ret;
/* do a reseed of the ppno drng with this bytestring */
- ret = crypt_s390_ppno(PPNO_SHA512_DRNG_SEED,
- &prng_data->ppnows, NULL, 0,
- seed, sizeof(seed));
+ ret = cpacf_ppno(CPACF_PPNO_SHA512_DRNG_SEED,
+ &prng_data->ppnows, NULL, 0, seed, sizeof(seed));
if (ret) {
prng_errorflag = PRNG_RESEED_FAILED;
return -EIO;
@@ -471,9 +463,8 @@ static int prng_sha512_generate(u8 *buf, size_t nbytes)
}
/* PPNO generate */
- ret = crypt_s390_ppno(PPNO_SHA512_DRNG_GEN,
- &prng_data->ppnows, buf, nbytes,
- NULL, 0);
+ ret = cpacf_ppno(CPACF_PPNO_SHA512_DRNG_GEN,
+ &prng_data->ppnows, buf, nbytes, NULL, 0);
if (ret < 0 || ret != nbytes) {
prng_errorflag = PRNG_GEN_FAILED;
return -EIO;
@@ -555,8 +546,8 @@ static ssize_t prng_tdes_read(struct file *file, char __user *ubuf,
* Note: you can still get strict X9.17 conformity by setting
* prng_chunk_size to 8 bytes.
*/
- tmp = crypt_s390_kmc(KMC_PRNG, prng_data->prngws.parm_block,
- prng_data->buf, prng_data->buf, n);
+ tmp = cpacf_kmc(CPACF_KMC_PRNG, prng_data->prngws.parm_block,
+ prng_data->buf, prng_data->buf, n);
if (tmp < 0 || tmp != n) {
ret = -EIO;
break;
@@ -815,14 +806,13 @@ static int __init prng_init(void)
int ret;
/* check if the CPU has a PRNG */
- if (!crypt_s390_func_available(KMC_PRNG, CRYPT_S390_MSA))
+ if (!cpacf_query(CPACF_KMC, CPACF_KMC_PRNG))
return -EOPNOTSUPP;
/* choose prng mode */
if (prng_mode != PRNG_MODE_TDES) {
/* check for MSA5 support for PPNO operations */
- if (!crypt_s390_func_available(PPNO_SHA512_DRNG_GEN,
- CRYPT_S390_MSA5)) {
+ if (!cpacf_query(CPACF_PPNO, CPACF_PPNO_SHA512_DRNG_GEN)) {
if (prng_mode == PRNG_MODE_SHA512) {
pr_err("The prng module cannot "
"start in SHA-512 mode\n");
diff --git a/arch/s390/crypto/sha1_s390.c b/arch/s390/crypto/sha1_s390.c
index 9208eadae9f0..5fbf91bbb478 100644
--- a/arch/s390/crypto/sha1_s390.c
+++ b/arch/s390/crypto/sha1_s390.c
@@ -28,8 +28,8 @@
#include <linux/module.h>
#include <linux/cpufeature.h>
#include <crypto/sha.h>
+#include <asm/cpacf.h>
-#include "crypt_s390.h"
#include "sha.h"
static int sha1_init(struct shash_desc *desc)
@@ -42,7 +42,7 @@ static int sha1_init(struct shash_desc *desc)
sctx->state[3] = SHA1_H3;
sctx->state[4] = SHA1_H4;
sctx->count = 0;
- sctx->func = KIMD_SHA_1;
+ sctx->func = CPACF_KIMD_SHA_1;
return 0;
}
@@ -66,7 +66,7 @@ static int sha1_import(struct shash_desc *desc, const void *in)
sctx->count = ictx->count;
memcpy(sctx->state, ictx->state, sizeof(ictx->state));
memcpy(sctx->buf, ictx->buffer, sizeof(ictx->buffer));
- sctx->func = KIMD_SHA_1;
+ sctx->func = CPACF_KIMD_SHA_1;
return 0;
}
@@ -82,7 +82,7 @@ static struct shash_alg alg = {
.base = {
.cra_name = "sha1",
.cra_driver_name= "sha1-s390",
- .cra_priority = CRYPT_S390_PRIORITY,
+ .cra_priority = 300,
.cra_flags = CRYPTO_ALG_TYPE_SHASH,
.cra_blocksize = SHA1_BLOCK_SIZE,
.cra_module = THIS_MODULE,
@@ -91,7 +91,7 @@ static struct shash_alg alg = {
static int __init sha1_s390_init(void)
{
- if (!crypt_s390_func_available(KIMD_SHA_1, CRYPT_S390_MSA))
+ if (!cpacf_query(CPACF_KIMD, CPACF_KIMD_SHA_1))
return -EOPNOTSUPP;
return crypto_register_shash(&alg);
}
diff --git a/arch/s390/crypto/sha256_s390.c b/arch/s390/crypto/sha256_s390.c
index 667888f5c964..10aac0b11988 100644
--- a/arch/s390/crypto/sha256_s390.c
+++ b/arch/s390/crypto/sha256_s390.c
@@ -18,8 +18,8 @@
#include <linux/module.h>
#include <linux/cpufeature.h>
#include <crypto/sha.h>
+#include <asm/cpacf.h>
-#include "crypt_s390.h"
#include "sha.h"
static int sha256_init(struct shash_desc *desc)
@@ -35,7 +35,7 @@ static int sha256_init(struct shash_desc *desc)
sctx->state[6] = SHA256_H6;
sctx->state[7] = SHA256_H7;
sctx->count = 0;
- sctx->func = KIMD_SHA_256;
+ sctx->func = CPACF_KIMD_SHA_256;
return 0;
}
@@ -59,7 +59,7 @@ static int sha256_import(struct shash_desc *desc, const void *in)
sctx->count = ictx->count;
memcpy(sctx->state, ictx->state, sizeof(ictx->state));
memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf));
- sctx->func = KIMD_SHA_256;
+ sctx->func = CPACF_KIMD_SHA_256;
return 0;
}
@@ -75,7 +75,7 @@ static struct shash_alg sha256_alg = {
.base = {
.cra_name = "sha256",
.cra_driver_name= "sha256-s390",
- .cra_priority = CRYPT_S390_PRIORITY,
+ .cra_priority = 300,
.cra_flags = CRYPTO_ALG_TYPE_SHASH,
.cra_blocksize = SHA256_BLOCK_SIZE,
.cra_module = THIS_MODULE,
@@ -95,7 +95,7 @@ static int sha224_init(struct shash_desc *desc)
sctx->state[6] = SHA224_H6;
sctx->state[7] = SHA224_H7;
sctx->count = 0;
- sctx->func = KIMD_SHA_256;
+ sctx->func = CPACF_KIMD_SHA_256;
return 0;
}
@@ -112,7 +112,7 @@ static struct shash_alg sha224_alg = {
.base = {
.cra_name = "sha224",
.cra_driver_name= "sha224-s390",
- .cra_priority = CRYPT_S390_PRIORITY,
+ .cra_priority = 300,
.cra_flags = CRYPTO_ALG_TYPE_SHASH,
.cra_blocksize = SHA224_BLOCK_SIZE,
.cra_module = THIS_MODULE,
@@ -123,7 +123,7 @@ static int __init sha256_s390_init(void)
{
int ret;
- if (!crypt_s390_func_available(KIMD_SHA_256, CRYPT_S390_MSA))
+ if (!cpacf_query(CPACF_KIMD, CPACF_KIMD_SHA_256))
return -EOPNOTSUPP;
ret = crypto_register_shash(&sha256_alg);
if (ret < 0)
diff --git a/arch/s390/crypto/sha512_s390.c b/arch/s390/crypto/sha512_s390.c
index 2ba66b1518f0..ea85757be407 100644
--- a/arch/s390/crypto/sha512_s390.c
+++ b/arch/s390/crypto/sha512_s390.c
@@ -19,9 +19,9 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/cpufeature.h>
+#include <asm/cpacf.h>
#include "sha.h"
-#include "crypt_s390.h"
static int sha512_init(struct shash_desc *desc)
{
@@ -36,7 +36,7 @@ static int sha512_init(struct shash_desc *desc)
*(__u64 *)&ctx->state[12] = 0x1f83d9abfb41bd6bULL;
*(__u64 *)&ctx->state[14] = 0x5be0cd19137e2179ULL;
ctx->count = 0;
- ctx->func = KIMD_SHA_512;
+ ctx->func = CPACF_KIMD_SHA_512;
return 0;
}
@@ -64,7 +64,7 @@ static int sha512_import(struct shash_desc *desc, const void *in)
memcpy(sctx->state, ictx->state, sizeof(ictx->state));
memcpy(sctx->buf, ictx->buf, sizeof(ictx->buf));
- sctx->func = KIMD_SHA_512;
+ sctx->func = CPACF_KIMD_SHA_512;
return 0;
}
@@ -80,7 +80,7 @@ static struct shash_alg sha512_alg = {
.base = {
.cra_name = "sha512",
.cra_driver_name= "sha512-s390",
- .cra_priority = CRYPT_S390_PRIORITY,
+ .cra_priority = 300,
.cra_flags = CRYPTO_ALG_TYPE_SHASH,
.cra_blocksize = SHA512_BLOCK_SIZE,
.cra_module = THIS_MODULE,
@@ -102,7 +102,7 @@ static int sha384_init(struct shash_desc *desc)
*(__u64 *)&ctx->state[12] = 0xdb0c2e0d64f98fa7ULL;
*(__u64 *)&ctx->state[14] = 0x47b5481dbefa4fa4ULL;
ctx->count = 0;
- ctx->func = KIMD_SHA_512;
+ ctx->func = CPACF_KIMD_SHA_512;
return 0;
}
@@ -119,7 +119,7 @@ static struct shash_alg sha384_alg = {
.base = {
.cra_name = "sha384",
.cra_driver_name= "sha384-s390",
- .cra_priority = CRYPT_S390_PRIORITY,
+ .cra_priority = 300,
.cra_flags = CRYPTO_ALG_TYPE_SHASH,
.cra_blocksize = SHA384_BLOCK_SIZE,
.cra_ctxsize = sizeof(struct s390_sha_ctx),
@@ -133,7 +133,7 @@ static int __init init(void)
{
int ret;
- if (!crypt_s390_func_available(KIMD_SHA_512, CRYPT_S390_MSA))
+ if (!cpacf_query(CPACF_KIMD, CPACF_KIMD_SHA_512))
return -EOPNOTSUPP;
if ((ret = crypto_register_shash(&sha512_alg)) < 0)
goto out;
diff --git a/arch/s390/crypto/sha_common.c b/arch/s390/crypto/sha_common.c
index 8620b0ec9c42..8e908166c3ee 100644
--- a/arch/s390/crypto/sha_common.c
+++ b/arch/s390/crypto/sha_common.c
@@ -15,8 +15,8 @@
#include <crypto/internal/hash.h>
#include <linux/module.h>
+#include <asm/cpacf.h>
#include "sha.h"
-#include "crypt_s390.h"
int s390_sha_update(struct shash_desc *desc, const u8 *data, unsigned int len)
{
@@ -35,7 +35,7 @@ int s390_sha_update(struct shash_desc *desc, const u8 *data, unsigned int len)
/* process one stored block */
if (index) {
memcpy(ctx->buf + index, data, bsize - index);
- ret = crypt_s390_kimd(ctx->func, ctx->state, ctx->buf, bsize);
+ ret = cpacf_kimd(ctx->func, ctx->state, ctx->buf, bsize);
if (ret != bsize)
return -EIO;
data += bsize - index;
@@ -45,8 +45,8 @@ int s390_sha_update(struct shash_desc *desc, const u8 *data, unsigned int len)
/* process as many blocks as possible */
if (len >= bsize) {
- ret = crypt_s390_kimd(ctx->func, ctx->state, data,
- len & ~(bsize - 1));
+ ret = cpacf_kimd(ctx->func, ctx->state, data,
+ len & ~(bsize - 1));
if (ret != (len & ~(bsize - 1)))
return -EIO;
data += ret;
@@ -89,7 +89,7 @@ int s390_sha_final(struct shash_desc *desc, u8 *out)
bits = ctx->count * 8;
memcpy(ctx->buf + end - 8, &bits, sizeof(bits));
- ret = crypt_s390_kimd(ctx->func, ctx->state, ctx->buf, end);
+ ret = cpacf_kimd(ctx->func, ctx->state, ctx->buf, end);
if (ret != end)
return -EIO;
diff --git a/arch/s390/include/asm/cpacf.h b/arch/s390/include/asm/cpacf.h
new file mode 100644
index 000000000000..1a82cf26ee11
--- /dev/null
+++ b/arch/s390/include/asm/cpacf.h
@@ -0,0 +1,410 @@
+/*
+ * CP Assist for Cryptographic Functions (CPACF)
+ *
+ * Copyright IBM Corp. 2003, 2016
+ * Author(s): Thomas Spatzier
+ * Jan Glauber
+ * Harald Freudenberger (freude@de.ibm.com)
+ * Martin Schwidefsky <schwidefsky@de.ibm.com>
+ */
+#ifndef _ASM_S390_CPACF_H
+#define _ASM_S390_CPACF_H
+
+#include <asm/facility.h>
+
+/*
+ * Instruction opcodes for the CPACF instructions
+ */
+#define CPACF_KMAC 0xb91e /* MSA */
+#define CPACF_KM 0xb92e /* MSA */
+#define CPACF_KMC 0xb92f /* MSA */
+#define CPACF_KIMD 0xb93e /* MSA */
+#define CPACF_KLMD 0xb93f /* MSA */
+#define CPACF_PCC 0xb92c /* MSA4 */
+#define CPACF_KMCTR 0xb92d /* MSA4 */
+#define CPACF_PPNO 0xb93c /* MSA5 */
+
+/*
+ * Function codes for the KM (CIPHER MESSAGE)
+ * instruction (0x80 is the decipher modifier bit)
+ */
+#define CPACF_KM_QUERY 0x00
+#define CPACF_KM_DEA_ENC 0x01
+#define CPACF_KM_DEA_DEC 0x81
+#define CPACF_KM_TDEA_128_ENC 0x02
+#define CPACF_KM_TDEA_128_DEC 0x82
+#define CPACF_KM_TDEA_192_ENC 0x03
+#define CPACF_KM_TDEA_192_DEC 0x83
+#define CPACF_KM_AES_128_ENC 0x12
+#define CPACF_KM_AES_128_DEC 0x92
+#define CPACF_KM_AES_192_ENC 0x13
+#define CPACF_KM_AES_192_DEC 0x93
+#define CPACF_KM_AES_256_ENC 0x14
+#define CPACF_KM_AES_256_DEC 0x94
+#define CPACF_KM_XTS_128_ENC 0x32
+#define CPACF_KM_XTS_128_DEC 0xb2
+#define CPACF_KM_XTS_256_ENC 0x34
+#define CPACF_KM_XTS_256_DEC 0xb4
+
+/*
+ * Function codes for the KMC (CIPHER MESSAGE WITH CHAINING)
+ * instruction (0x80 is the decipher modifier bit)
+ */
+#define CPACF_KMC_QUERY 0x00
+#define CPACF_KMC_DEA_ENC 0x01
+#define CPACF_KMC_DEA_DEC 0x81
+#define CPACF_KMC_TDEA_128_ENC 0x02
+#define CPACF_KMC_TDEA_128_DEC 0x82
+#define CPACF_KMC_TDEA_192_ENC 0x03
+#define CPACF_KMC_TDEA_192_DEC 0x83
+#define CPACF_KMC_AES_128_ENC 0x12
+#define CPACF_KMC_AES_128_DEC 0x92
+#define CPACF_KMC_AES_192_ENC 0x13
+#define CPACF_KMC_AES_192_DEC 0x93
+#define CPACF_KMC_AES_256_ENC 0x14
+#define CPACF_KMC_AES_256_DEC 0x94
+#define CPACF_KMC_PRNG 0x43
+
+/*
+ * Function codes for the KMCTR (CIPHER MESSAGE WITH COUNTER)
+ * instruction (0x80 is the decipher modifier bit)
+ */
+#define CPACF_KMCTR_QUERY 0x00
+#define CPACF_KMCTR_DEA_ENC 0x01
+#define CPACF_KMCTR_DEA_DEC 0x81
+#define CPACF_KMCTR_TDEA_128_ENC 0x02
+#define CPACF_KMCTR_TDEA_128_DEC 0x82
+#define CPACF_KMCTR_TDEA_192_ENC 0x03
+#define CPACF_KMCTR_TDEA_192_DEC 0x83
+#define CPACF_KMCTR_AES_128_ENC 0x12
+#define CPACF_KMCTR_AES_128_DEC 0x92
+#define CPACF_KMCTR_AES_192_ENC 0x13
+#define CPACF_KMCTR_AES_192_DEC 0x93
+#define CPACF_KMCTR_AES_256_ENC 0x14
+#define CPACF_KMCTR_AES_256_DEC 0x94
+
+/*
+ * Function codes for the KIMD (COMPUTE INTERMEDIATE MESSAGE DIGEST)
+ * instruction
+ */
+#define CPACF_KIMD_QUERY 0x00
+#define CPACF_KIMD_SHA_1 0x01
+#define CPACF_KIMD_SHA_256 0x02
+#define CPACF_KIMD_SHA_512 0x03
+#define CPACF_KIMD_GHASH 0x41
+
+/*
+ * Function codes for the KLMD (COMPUTE LAST MESSAGE DIGEST)
+ * instruction (0x80 is the decipher modifier bit)
+ */
+#define CPACF_KLMD_QUERY 0x00
+#define CPACF_KLMD_SHA_1 0x01
+#define CPACF_KLMD_SHA_256 0x02
+#define CPACF_KLMD_SHA_512 0x03
+
+/*
+ * Function codes for the KMAC (COMPUTE MESSAGE AUTHENTICATION CODE)
+ * instruction
+ */
+#define CPACF_KMAC_QUERY 0x00
+#define CPACF_KMAC_DEA 0x01
+#define CPACF_KMAC_TDEA_128 0x02
+#define CPACF_KMAC_TDEA_192 0x03
+
+/*
+ * Function codes for the PPNO (PERFORM PSEUDORANDOM NUMBER OPERATION)
+ * instruction
+ */
+#define CPACF_PPNO_QUERY 0x00
+#define CPACF_PPNO_SHA512_DRNG_GEN 0x03
+#define CPACF_PPNO_SHA512_DRNG_SEED 0x83
+
+/**
+ * cpacf_query() - check if a specific CPACF function is available
+ * @opcode: the opcode of the crypto instruction
+ * @func: the function code to test for
+ *
+ * Executes the query function for the given crypto instruction @opcode
+ * and checks if @func is available
+ *
+ * Returns 1 if @func is available for @opcode, 0 otherwise
+ */
+static inline void __cpacf_query(unsigned int opcode, unsigned char *status)
+{
+ typedef struct { unsigned char _[16]; } status_type;
+ register unsigned long r0 asm("0") = 0; /* query function */
+ register unsigned long r1 asm("1") = (unsigned long) status;
+
+ asm volatile(
+ /* Parameter registers are ignored, but may not be 0 */
+ "0: .insn rrf,%[opc] << 16,2,2,2,0\n"
+ " brc 1,0b\n" /* handle partial completion */
+ : "=m" (*(status_type *) status)
+ : [fc] "d" (r0), [pba] "a" (r1), [opc] "i" (opcode)
+ : "cc");
+}
+
+static inline int cpacf_query(unsigned int opcode, unsigned int func)
+{
+ unsigned char status[16];
+
+ switch (opcode) {
+ case CPACF_KMAC:
+ case CPACF_KM:
+ case CPACF_KMC:
+ case CPACF_KIMD:
+ case CPACF_KLMD:
+ if (!test_facility(17)) /* check for MSA */
+ return 0;
+ break;
+ case CPACF_PCC:
+ case CPACF_KMCTR:
+ if (!test_facility(77)) /* check for MSA4 */
+ return 0;
+ break;
+ case CPACF_PPNO:
+ if (!test_facility(57)) /* check for MSA5 */
+ return 0;
+ break;
+ default:
+ BUG();
+ }
+ __cpacf_query(opcode, status);
+ return (status[func >> 3] & (0x80 >> (func & 7))) != 0;
+}
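The converted module init routines elsewhere in this patch (des_s390_init(), ghash_mod_init(), the sha init functions, prng_init()) all follow the same pattern: query first, register only on success. A minimal sketch with an illustrative function name, assuming only what the header above provides:

#include <linux/errno.h>
#include <linux/init.h>
#include <asm/cpacf.h>

static int __init example_init(void)
{
	/* refuse to load if the machine cannot do KIMD-SHA-256 */
	if (!cpacf_query(CPACF_KIMD, CPACF_KIMD_SHA_256))
		return -EOPNOTSUPP;
	return 0;
}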
+
+/**
+ * cpacf_km() - executes the KM (CIPHER MESSAGE) instruction
+ * @func: the function code passed to KM; see CPACF_KM_xxx defines
+ * @param: address of parameter block; see POP for details on each func
+ * @dest: address of destination memory area
+ * @src: address of source memory area
+ * @src_len: length of src operand in bytes
+ *
+ * Returns 0 for the query func, number of processed bytes for
+ * encryption/decryption funcs
+ */
+static inline int cpacf_km(long func, void *param,
+ u8 *dest, const u8 *src, long src_len)
+{
+ register unsigned long r0 asm("0") = (unsigned long) func;
+ register unsigned long r1 asm("1") = (unsigned long) param;
+ register unsigned long r2 asm("2") = (unsigned long) src;
+ register unsigned long r3 asm("3") = (unsigned long) src_len;
+ register unsigned long r4 asm("4") = (unsigned long) dest;
+
+ asm volatile(
+ "0: .insn rre,%[opc] << 16,%[dst],%[src]\n"
+ " brc 1,0b\n" /* handle partial completion */
+ : [src] "+a" (r2), [len] "+d" (r3), [dst] "+a" (r4)
+ : [fc] "d" (r0), [pba] "a" (r1), [opc] "i" (CPACF_KM)
+ : "cc", "memory");
+
+ return src_len - r3;
+}
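A hedged sketch of a direct cpacf_km() call. The wrapper name is made up, and the assumption that the KM-AES-128 parameter block is simply the 16-byte key comes from the Principles of Operation, not from the hunks in this patch:

#include <linux/errno.h>
#include <linux/string.h>
#include <linux/types.h>
#include <asm/cpacf.h>

/* ECB-encrypt 'len' bytes (a multiple of 16) from 'in' to 'out'. */
static int example_aes128_ecb_enc(const u8 *key, u8 *out, const u8 *in, long len)
{
	u8 param[16];	/* assumption: parameter block == AES-128 key */

	memcpy(param, key, sizeof(param));
	return cpacf_km(CPACF_KM_AES_128_ENC, param, out, in, len) == len ?
		0 : -EIO;
}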
+
+/**
+ * cpacf_kmc() - executes the KMC (CIPHER MESSAGE WITH CHAINING) instruction
+ * @func: the function code passed to KMC; see CPACF_KMC_xxx defines
+ * @param: address of parameter block; see POP for details on each func
+ * @dest: address of destination memory area
+ * @src: address of source memory area
+ * @src_len: length of src operand in bytes
+ *
+ * Returns 0 for the query func, number of processed bytes for
+ * encryption/decryption funcs
+ */
+static inline int cpacf_kmc(long func, void *param,
+ u8 *dest, const u8 *src, long src_len)
+{
+ register unsigned long r0 asm("0") = (unsigned long) func;
+ register unsigned long r1 asm("1") = (unsigned long) param;
+ register unsigned long r2 asm("2") = (unsigned long) src;
+ register unsigned long r3 asm("3") = (unsigned long) src_len;
+ register unsigned long r4 asm("4") = (unsigned long) dest;
+
+ asm volatile(
+ "0: .insn rre,%[opc] << 16,%[dst],%[src]\n"
+ " brc 1,0b\n" /* handle partial completion */
+ : [src] "+a" (r2), [len] "+d" (r3), [dst] "+a" (r4)
+ : [fc] "d" (r0), [pba] "a" (r1), [opc] "i" (CPACF_KMC)
+ : "cc", "memory");
+
+ return src_len - r3;
+}
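The prng.c hunks earlier in this patch use exactly this entry point for the TDES PRNG; condensed into one illustrative helper (the name and the -EIO mapping are mine):

#include <linux/errno.h>
#include <linux/types.h>
#include <asm/cpacf.h>

/* Cipher 'buf' in place with the CPACF PRNG; 'parm_block' carries the
 * generator state across calls, as in prng_tdes_read(). */
static int example_prng_fill(u8 *parm_block, u8 *buf, long n)
{
	return cpacf_kmc(CPACF_KMC_PRNG, parm_block, buf, buf, n) == n ?
		0 : -EIO;
}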
+
+/**
+ * cpacf_kimd() - executes the KIMD (COMPUTE INTERMEDIATE MESSAGE DIGEST)
+ * instruction
+ * @func: the function code passed to KIMD; see CPACF_KIMD_xxx defines
+ * @param: address of parameter block; see POP for details on each func
+ * @src: address of source memory area
+ * @src_len: length of src operand in bytes
+ *
+ * Returns 0 for the query func, number of processed bytes for digest funcs
+ */
+static inline int cpacf_kimd(long func, void *param,
+ const u8 *src, long src_len)
+{
+ register unsigned long r0 asm("0") = (unsigned long) func;
+ register unsigned long r1 asm("1") = (unsigned long) param;
+ register unsigned long r2 asm("2") = (unsigned long) src;
+ register unsigned long r3 asm("3") = (unsigned long) src_len;
+
+ asm volatile(
+ "0: .insn rre,%[opc] << 16,0,%[src]\n"
+ " brc 1,0b\n" /* handle partial completion */
+ : [src] "+a" (r2), [len] "+d" (r3)
+ : [fc] "d" (r0), [pba] "a" (r1), [opc] "i" (CPACF_KIMD)
+ : "cc", "memory");
+
+ return src_len - r3;
+}
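The digest drivers above reach this through s390_sha_update(); stripped of the partial-block buffering it reduces to the sketch below. The state layout follows sha256_init() in this patch; the helper name is illustrative.

#include <linux/errno.h>
#include <linux/types.h>
#include <crypto/sha.h>
#include <asm/cpacf.h>

/* Feed full 64-byte blocks into KIMD-SHA-256; 'state' is the 32-byte
 * chaining value, seeded with SHA256_H0..SHA256_H7. */
static int example_sha256_blocks(u32 state[8], const u8 *data, long len)
{
	long n = len & ~(SHA256_BLOCK_SIZE - 1L);

	return cpacf_kimd(CPACF_KIMD_SHA_256, state, data, n) == n ? 0 : -EIO;
}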
+
+/**
+ * cpacf_klmd() - executes the KLMD (COMPUTE LAST MESSAGE DIGEST) instruction
+ * @func: the function code passed to KLMD; see CPACF_KLMD_xxx defines
+ * @param: address of parameter block; see POP for details on each func
+ * @src: address of source memory area
+ * @src_len: length of src operand in bytes
+ *
+ * Returns 0 for the query func, number of processed bytes for digest funcs
+ */
+static inline int cpacf_klmd(long func, void *param,
+ const u8 *src, long src_len)
+{
+ register unsigned long r0 asm("0") = (unsigned long) func;
+ register unsigned long r1 asm("1") = (unsigned long) param;
+ register unsigned long r2 asm("2") = (unsigned long) src;
+ register unsigned long r3 asm("3") = (unsigned long) src_len;
+
+ asm volatile(
+ "0: .insn rre,%[opc] << 16,0,%[src]\n"
+ " brc 1,0b\n" /* handle partial completion */
+ : [src] "+a" (r2), [len] "+d" (r3)
+ : [fc] "d" (r0), [pba] "a" (r1), [opc] "i" (CPACF_KLMD)
+ : "cc", "memory");
+
+ return src_len - r3;
+}
+
+/**
+ * cpacf_kmac() - executes the KMAC (COMPUTE MESSAGE AUTHENTICATION CODE)
+ * instruction
+ * @func: the function code passed to KMAC; see CPACF_KMAC_xxx defines
+ * @param: address of parameter block; see POP for details on each func
+ * @src: address of source memory area
+ * @src_len: length of src operand in bytes
+ *
+ * Returns 0 for the query func, number of processed bytes for MAC funcs
+ */
+static inline int cpacf_kmac(long func, void *param,
+ const u8 *src, long src_len)
+{
+ register unsigned long r0 asm("0") = (unsigned long) func;
+ register unsigned long r1 asm("1") = (unsigned long) param;
+ register unsigned long r2 asm("2") = (unsigned long) src;
+ register unsigned long r3 asm("3") = (unsigned long) src_len;
+
+ asm volatile(
+ "0: .insn rre,%[opc] << 16,0,%[src]\n"
+ " brc 1,0b\n" /* handle partial completion */
+ : [src] "+a" (r2), [len] "+d" (r3)
+ : [fc] "d" (r0), [pba] "a" (r1), [opc] "i" (CPACF_KMAC)
+ : "cc", "memory");
+
+ return src_len - r3;
+}
+
+/**
+ * cpacf_kmctr() - executes the KMCTR (CIPHER MESSAGE WITH COUNTER) instruction
+ * @func: the function code passed to KMCTR; see CPACF_KMCTR_xxx defines
+ * @param: address of parameter block; see POP for details on each func
+ * @dest: address of destination memory area
+ * @src: address of source memory area
+ * @src_len: length of src operand in bytes
+ * @counter: address of counter value
+ *
+ * Returns 0 for the query func, number of processed bytes for
+ * encryption/decryption funcs
+ */
+static inline int cpacf_kmctr(long func, void *param, u8 *dest,
+ const u8 *src, long src_len, u8 *counter)
+{
+ register unsigned long r0 asm("0") = (unsigned long) func;
+ register unsigned long r1 asm("1") = (unsigned long) param;
+ register unsigned long r2 asm("2") = (unsigned long) src;
+ register unsigned long r3 asm("3") = (unsigned long) src_len;
+ register unsigned long r4 asm("4") = (unsigned long) dest;
+ register unsigned long r6 asm("6") = (unsigned long) counter;
+
+ asm volatile(
+ "0: .insn rrf,%[opc] << 16,%[dst],%[src],%[ctr],0\n"
+ " brc 1,0b\n" /* handle partial completion */
+ : [src] "+a" (r2), [len] "+d" (r3),
+ [dst] "+a" (r4), [ctr] "+a" (r6)
+ : [fc] "d" (r0), [pba] "a" (r1), [opc] "i" (CPACF_KMCTR)
+ : "cc", "memory");
+
+ return src_len - r3;
+}
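A hedged sketch of a raw KMCTR call. The assumptions (the AES-128 parameter block is the key; the counter is one 16-byte block that the instruction advances in memory) come from the Principles of Operation, not from the hunks in this patch:

#include <linux/errno.h>
#include <linux/types.h>
#include <asm/cpacf.h>

/* CTR-encrypt 'len' bytes (a multiple of 16); 'ctr' is updated in place. */
static int example_aes128_ctr_enc(u8 *key, u8 *ctr, u8 *out,
				  const u8 *in, long len)
{
	return cpacf_kmctr(CPACF_KMCTR_AES_128_ENC, key, out, in, len, ctr) == len ?
		0 : -EIO;
}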
+
+/**
+ * cpacf_ppno() - executes the PPNO (PERFORM PSEUDORANDOM NUMBER OPERATION)
+ * instruction
+ * @func: the function code passed to PPNO; see CPACF_PPNO_xxx defines
+ * @param: address of parameter block; see POP for details on each func
+ * @dest: address of destination memory area
+ * @dest_len: size of destination memory area in bytes
+ * @seed: address of seed data
+ * @seed_len: size of seed data in bytes
+ *
+ * Returns 0 for the query func, number of random bytes stored in
+ * dest buffer for generate function
+ */
+static inline int cpacf_ppno(long func, void *param,
+ u8 *dest, long dest_len,
+ const u8 *seed, long seed_len)
+{
+ register unsigned long r0 asm("0") = (unsigned long) func;
+ register unsigned long r1 asm("1") = (unsigned long) param;
+ register unsigned long r2 asm("2") = (unsigned long) dest;
+ register unsigned long r3 asm("3") = (unsigned long) dest_len;
+ register unsigned long r4 asm("4") = (unsigned long) seed;
+ register unsigned long r5 asm("5") = (unsigned long) seed_len;
+
+ asm volatile (
+ "0: .insn rre,%[opc] << 16,%[dst],%[seed]\n"
+ " brc 1,0b\n" /* handle partial completion */
+ : [dst] "+a" (r2), [dlen] "+d" (r3)
+ : [fc] "d" (r0), [pba] "a" (r1),
+ [seed] "a" (r4), [slen] "d" (r5), [opc] "i" (CPACF_PPNO)
+ : "cc", "memory");
+
+ return dest_len - r3;
+}
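The prng.c hunks earlier in this patch show the real call pattern (seed once, then generate); condensed here, with the size of the SHA-512 DRNG state block taken as an assumption rather than from this header:

#include <linux/errno.h>
#include <linux/types.h>
#include <asm/cpacf.h>

static u8 drng_ws[240];	/* assumed size of the SHA-512 DRNG state block */

static int example_drng(const u8 *seed, long seed_len, u8 *out, long out_len)
{
	if (cpacf_ppno(CPACF_PPNO_SHA512_DRNG_SEED, drng_ws, NULL, 0,
		       seed, seed_len) < 0)
		return -EIO;
	return cpacf_ppno(CPACF_PPNO_SHA512_DRNG_GEN, drng_ws, out, out_len,
			  NULL, 0) == out_len ? 0 : -EIO;
}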
+
+/**
+ * cpacf_pcc() - executes the PCC (PERFORM CRYPTOGRAPHIC COMPUTATION)
+ * instruction
+ * @func: the function code passed to PCC; see CPACF_KM_xxx defines
+ * @param: address of parameter block; see POP for details on each func
+ *
+ * Returns 0.
+ */
+static inline int cpacf_pcc(long func, void *param)
+{
+ register unsigned long r0 asm("0") = (unsigned long) func;
+ register unsigned long r1 asm("1") = (unsigned long) param;
+
+ asm volatile(
+ "0: .insn rre,%[opc] << 16,0,0\n" /* PCC opcode */
+ " brc 1,0b\n" /* handle partial completion */
+ :
+ : [fc] "d" (r0), [pba] "a" (r1), [opc] "i" (CPACF_PCC)
+ : "cc", "memory");
+
+ return 0;
+}
+
+#endif /* _ASM_S390_CPACF_H */
diff --git a/arch/s390/include/asm/fpu/types.h b/arch/s390/include/asm/fpu/types.h
index 14a8b0c14f87..fe937c9b6471 100644
--- a/arch/s390/include/asm/fpu/types.h
+++ b/arch/s390/include/asm/fpu/types.h
@@ -11,11 +11,13 @@
#include <asm/sigcontext.h>
struct fpu {
- __u32 fpc; /* Floating-point control */
+ __u32 fpc; /* Floating-point control */
+ void *regs; /* Pointer to the current save area */
union {
- void *regs;
- freg_t *fprs; /* Floating-point register save area */
- __vector128 *vxrs; /* Vector register save area */
+ /* Floating-point register save area */
+ freg_t fprs[__NUM_FPRS];
+ /* Vector register save area */
+ __vector128 vxrs[__NUM_VXRS];
};
};
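With the register save area now embedded in the union, 'regs' is only an alias into it; the process.c hunk later in this patch wires it up after the raw copy:

	/* from arch_dup_task_struct() further down in this patch */
	memcpy(dst, src, arch_task_struct_size);
	dst->thread.fpu.regs = dst->thread.fpu.fprs;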
diff --git a/arch/s390/include/asm/ftrace.h b/arch/s390/include/asm/ftrace.h
index 836c56290499..64053d9ac3f2 100644
--- a/arch/s390/include/asm/ftrace.h
+++ b/arch/s390/include/asm/ftrace.h
@@ -12,7 +12,9 @@
#ifndef __ASSEMBLY__
-#define ftrace_return_address(n) __builtin_return_address(n)
+unsigned long return_address(int depth);
+
+#define ftrace_return_address(n) return_address(n)
void _mcount(void);
void ftrace_caller(void);
diff --git a/arch/s390/include/asm/kvm_host.h b/arch/s390/include/asm/kvm_host.h
index 6da41fab70fb..37b9017c6a96 100644
--- a/arch/s390/include/asm/kvm_host.h
+++ b/arch/s390/include/asm/kvm_host.h
@@ -38,7 +38,7 @@
*/
#define KVM_NR_IRQCHIPS 1
#define KVM_IRQCHIP_NUM_PINS 4096
-#define KVM_HALT_POLL_NS_DEFAULT 0
+#define KVM_HALT_POLL_NS_DEFAULT 80000
/* s390-specific vcpu->requests bit members */
#define KVM_REQ_ENABLE_IBS 8
@@ -247,6 +247,7 @@ struct kvm_vcpu_stat {
u32 exit_instruction;
u32 halt_successful_poll;
u32 halt_attempted_poll;
+ u32 halt_poll_invalid;
u32 halt_wakeup;
u32 instruction_lctl;
u32 instruction_lctlg;
@@ -544,10 +545,6 @@ struct kvm_vcpu_arch {
struct kvm_s390_local_interrupt local_int;
struct hrtimer ckc_timer;
struct kvm_s390_pgm_info pgm;
- union {
- struct cpuid cpu_id;
- u64 stidp_data;
- };
struct gmap *gmap;
struct kvm_guestdbg_info_arch guestdbg;
unsigned long pfault_token;
@@ -605,7 +602,7 @@ struct kvm_s390_cpu_model {
__u64 fac_mask[S390_ARCH_FAC_LIST_SIZE_U64];
/* facility list requested by guest (in dma page) */
__u64 *fac_list;
- struct cpuid cpu_id;
+ u64 cpuid;
unsigned short ibc;
};
@@ -700,4 +697,6 @@ static inline void kvm_arch_flush_shadow_memslot(struct kvm *kvm,
static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
+void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu);
+
#endif
diff --git a/arch/s390/include/asm/livepatch.h b/arch/s390/include/asm/livepatch.h
index d5427c78b1b3..2c1213785892 100644
--- a/arch/s390/include/asm/livepatch.h
+++ b/arch/s390/include/asm/livepatch.h
@@ -24,13 +24,6 @@ static inline int klp_check_compiler_support(void)
return 0;
}
-static inline int klp_write_module_reloc(struct module *mod, unsigned long
- type, unsigned long loc, unsigned long value)
-{
- /* not supported yet */
- return -ENOSYS;
-}
-
static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long ip)
{
regs->psw.addr = ip;
diff --git a/arch/s390/include/asm/pci.h b/arch/s390/include/asm/pci.h
index 535a46d46d28..0da91c4d30fd 100644
--- a/arch/s390/include/asm/pci.h
+++ b/arch/s390/include/asm/pci.h
@@ -31,20 +31,41 @@ int pci_proc_domain(struct pci_bus *);
#define ZPCI_FC_BLOCKED 0x20
#define ZPCI_FC_DMA_ENABLED 0x10
+#define ZPCI_FMB_DMA_COUNTER_VALID (1 << 23)
+
+struct zpci_fmb_fmt0 {
+ u64 dma_rbytes;
+ u64 dma_wbytes;
+};
+
+struct zpci_fmb_fmt1 {
+ u64 rx_bytes;
+ u64 rx_packets;
+ u64 tx_bytes;
+ u64 tx_packets;
+};
+
+struct zpci_fmb_fmt2 {
+ u64 consumed_work_units;
+ u64 max_work_units;
+};
+
struct zpci_fmb {
- u32 format : 8;
- u32 dma_valid : 1;
- u32 : 23;
+ u32 format : 8;
+ u32 fmt_ind : 24;
u32 samples;
u64 last_update;
- /* hardware counters */
+ /* common counters */
u64 ld_ops;
u64 st_ops;
u64 stb_ops;
u64 rpcit_ops;
- u64 dma_rbytes;
- u64 dma_wbytes;
- u64 pad[2];
+ /* format specific counters */
+ union {
+ struct zpci_fmb_fmt0 fmt0;
+ struct zpci_fmb_fmt1 fmt1;
+ struct zpci_fmb_fmt2 fmt2;
+ };
} __packed __aligned(128);
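A short sketch of how a consumer could pick the right counter set via the new format discriminator; the printing helper is illustrative only:

#include <linux/kernel.h>
#include <asm/pci.h>

static void example_fmb_counters(struct zpci_fmb *fmb)
{
	switch (fmb->format) {
	case 0:
		pr_info("dma rd/wr bytes: %llu/%llu\n",
			(unsigned long long) fmb->fmt0.dma_rbytes,
			(unsigned long long) fmb->fmt0.dma_wbytes);
		break;
	case 1:
		pr_info("rx/tx packets: %llu/%llu\n",
			(unsigned long long) fmb->fmt1.rx_packets,
			(unsigned long long) fmb->fmt1.tx_packets);
		break;
	case 2:
		pr_info("work units used/max: %llu/%llu\n",
			(unsigned long long) fmb->fmt2.consumed_work_units,
			(unsigned long long) fmb->fmt2.max_work_units);
		break;
	}
}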
enum zpci_state {
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index 2f66645587a2..18d2beb89340 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1223,6 +1223,7 @@ static inline int pmd_trans_huge(pmd_t pmd)
return pmd_val(pmd) & _SEGMENT_ENTRY_LARGE;
}
+#define has_transparent_hugepage has_transparent_hugepage
static inline int has_transparent_hugepage(void)
{
return MACHINE_HAS_HPAGE ? 1 : 0;
diff --git a/arch/s390/include/asm/processor.h b/arch/s390/include/asm/processor.h
index 18cdede1aeda..9d4d311d7e52 100644
--- a/arch/s390/include/asm/processor.h
+++ b/arch/s390/include/asm/processor.h
@@ -105,7 +105,6 @@ typedef struct {
* Thread structure
*/
struct thread_struct {
- struct fpu fpu; /* FP and VX register save area */
unsigned int acrs[NUM_ACRS];
unsigned long ksp; /* kernel stack pointer */
mm_segment_t mm_segment;
@@ -120,6 +119,11 @@ struct thread_struct {
/* cpu runtime instrumentation */
struct runtime_instr_cb *ri_cb;
unsigned char trap_tdb[256]; /* Transaction abort diagnose block */
+ /*
+ * Warning: 'fpu' is dynamically-sized. It *MUST* be at
+ * the end.
+ */
+ struct fpu fpu; /* FP and VX register save area */
};
/* Flag to disable transactions. */
@@ -155,10 +159,9 @@ struct stack_frame {
#define ARCH_MIN_TASKALIGN 8
-extern __vector128 init_task_fpu_regs[__NUM_VXRS];
#define INIT_THREAD { \
.ksp = sizeof(init_stack) + (unsigned long) &init_stack, \
- .fpu.regs = (void *)&init_task_fpu_regs, \
+ .fpu.regs = (void *) init_task.thread.fpu.fprs, \
}
/*
diff --git a/arch/s390/include/asm/rwsem.h b/arch/s390/include/asm/rwsem.h
index fead491dfc28..c75e4471e618 100644
--- a/arch/s390/include/asm/rwsem.h
+++ b/arch/s390/include/asm/rwsem.h
@@ -90,7 +90,7 @@ static inline int __down_read_trylock(struct rw_semaphore *sem)
/*
* lock for writing
*/
-static inline void __down_write_nested(struct rw_semaphore *sem, int subclass)
+static inline long ___down_write(struct rw_semaphore *sem)
{
signed long old, new, tmp;
@@ -104,13 +104,23 @@ static inline void __down_write_nested(struct rw_semaphore *sem, int subclass)
: "=&d" (old), "=&d" (new), "=Q" (sem->count)
: "Q" (sem->count), "m" (tmp)
: "cc", "memory");
- if (old != 0)
- rwsem_down_write_failed(sem);
+
+ return old;
}
static inline void __down_write(struct rw_semaphore *sem)
{
- __down_write_nested(sem, 0);
+ if (___down_write(sem))
+ rwsem_down_write_failed(sem);
+}
+
+static inline int __down_write_killable(struct rw_semaphore *sem)
+{
+ if (___down_write(sem))
+ if (IS_ERR(rwsem_down_write_failed_killable(sem)))
+ return -EINTR;
+
+ return 0;
}
/*
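Callers reach the new primitive through the generic down_write_killable(); the vdso.c hunk later in this patch is the first s390 user and boils down to:

	if (down_write_killable(&mm->mmap_sem))
		return -EINTR;	/* interrupted by a fatal signal */
	/* ... modify the address space ... */
	up_write(&mm->mmap_sem);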
diff --git a/arch/s390/include/asm/sclp.h b/arch/s390/include/asm/sclp.h
index bab456be9a4f..e4f6f73afe2f 100644
--- a/arch/s390/include/asm/sclp.h
+++ b/arch/s390/include/asm/sclp.h
@@ -69,9 +69,22 @@ struct sclp_info {
unsigned int max_cores;
unsigned long hsa_size;
unsigned long facilities;
+ unsigned int hmfai;
};
extern struct sclp_info sclp;
+struct zpci_report_error_header {
+ u8 version; /* Interface version byte */
+ u8 action; /* Action qualifier byte
+ * 1: Deconfigure and repair action requested
+ * (OpenCrypto Problem Call Home)
+ * 2: Informational Report
+ * (OpenCrypto Successful Diagnostics Execution)
+ */
+ u16 length; /* Length of Subsequent Data (up to 4K - SCLP header) */
+ u8 data[0]; /* Subsequent Data passed verbatim to SCLP ET 24 */
+} __packed;
+
int sclp_get_core_info(struct sclp_core_info *info);
int sclp_core_configure(u8 core);
int sclp_core_deconfigure(u8 core);
@@ -83,6 +96,7 @@ int sclp_chp_read_info(struct sclp_chp_info *info);
void sclp_get_ipl_info(struct sclp_ipl_info *info);
int sclp_pci_configure(u32 fid);
int sclp_pci_deconfigure(u32 fid);
+int sclp_pci_report(struct zpci_report_error_header *report, u32 fh, u32 fid);
int memcpy_hsa_kernel(void *dest, unsigned long src, size_t count);
int memcpy_hsa_user(void __user *dest, unsigned long src, size_t count);
void sclp_early_detect(void);
diff --git a/arch/s390/include/asm/sigp.h b/arch/s390/include/asm/sigp.h
index ec60cf7fa0a2..1c8f33fca356 100644
--- a/arch/s390/include/asm/sigp.h
+++ b/arch/s390/include/asm/sigp.h
@@ -27,6 +27,7 @@
/* SIGP cpu status bits */
+#define SIGP_STATUS_INVALID_ORDER 0x00000002UL
#define SIGP_STATUS_CHECK_STOP 0x00000010UL
#define SIGP_STATUS_STOPPED 0x00000040UL
#define SIGP_STATUS_EXT_CALL_PENDING 0x00000080UL
diff --git a/arch/s390/include/asm/thread_info.h b/arch/s390/include/asm/thread_info.h
index 2fffc2c27581..f15c0398c363 100644
--- a/arch/s390/include/asm/thread_info.h
+++ b/arch/s390/include/asm/thread_info.h
@@ -62,6 +62,7 @@ static inline struct thread_info *current_thread_info(void)
}
void arch_release_task_struct(struct task_struct *tsk);
+int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src);
#define THREAD_SIZE_ORDER THREAD_ORDER
diff --git a/arch/s390/include/uapi/asm/dasd.h b/arch/s390/include/uapi/asm/dasd.h
index 5812a3b2df9e..1340311dab77 100644
--- a/arch/s390/include/uapi/asm/dasd.h
+++ b/arch/s390/include/uapi/asm/dasd.h
@@ -187,6 +187,36 @@ typedef struct format_data_t {
#define DASD_FMT_INT_INVAL 4 /* invalidate tracks */
#define DASD_FMT_INT_COMPAT 8 /* use OS/390 compatible disk layout */
+/*
+ * struct format_check_t
+ * represents all data necessary to evaluate the format of
+ * different tracks of a dasd
+ */
+typedef struct format_check_t {
+ /* Input */
+ struct format_data_t expect;
+
+ /* Output */
+ unsigned int result; /* Error indication (DASD_FMT_ERR_*) */
+ unsigned int unit; /* Track that is in error */
+ unsigned int rec; /* Record that is in error */
+ unsigned int num_records; /* Records in the track in error */
+ unsigned int blksize; /* Blocksize of first record in error */
+ unsigned int key_length; /* Key length of first record in error */
+} format_check_t;
+
+/* Values returned in format_check_t when a format error is detected: */
+/* Too few records were found on a single track */
+#define DASD_FMT_ERR_TOO_FEW_RECORDS 1
+/* Too many records were found on a single track */
+#define DASD_FMT_ERR_TOO_MANY_RECORDS 2
+/* Blocksize/data-length of a record was wrong */
+#define DASD_FMT_ERR_BLKSIZE 3
+/* A record ID is defined by cylinder, head, and record number (CHR). */
+/* On mismatch, this error is set */
+#define DASD_FMT_ERR_RECORD_ID 4
+/* If key-length was != 0 */
+#define DASD_FMT_ERR_KEY_LENGTH 5
/*
* struct attrib_data_t
@@ -288,6 +318,8 @@ struct dasd_snid_ioctl_data {
/* Get Sense Path Group ID (SNID) data */
#define BIODASDSNID _IOWR(DASD_IOCTL_LETTER, 1, struct dasd_snid_ioctl_data)
+/* Check device format according to format_check_t */
+#define BIODASDCHECKFMT _IOWR(DASD_IOCTL_LETTER, 2, format_check_t)
#define BIODASDSYMMIO _IOWR(DASD_IOCTL_LETTER, 240, dasd_symmio_parms_t)
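A hedged user-space sketch of driving the new BIODASDCHECKFMT ioctl defined just above; the device handling, the use of format_data_t's blksize member and the meaning of result == 0 are assumptions, not taken from this hunk:

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <asm/dasd.h>

/* Returns a DASD_FMT_ERR_* code on mismatch, 0 if the checked tracks
 * match (assumption), or -1 on open/ioctl failure. */
static int example_check_format(const char *dev, unsigned int blksize)
{
	format_check_t cdata = { .expect = { .blksize = blksize } };
	int fd, rc;

	fd = open(dev, O_RDONLY);
	if (fd < 0)
		return -1;
	rc = ioctl(fd, BIODASDCHECKFMT, &cdata);
	close(fd);
	return rc ? -1 : (int) cdata.result;
}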
diff --git a/arch/s390/include/uapi/asm/kvm.h b/arch/s390/include/uapi/asm/kvm.h
index 347fe5afa419..3b8e99ef9d58 100644
--- a/arch/s390/include/uapi/asm/kvm.h
+++ b/arch/s390/include/uapi/asm/kvm.h
@@ -25,6 +25,7 @@
#define KVM_DEV_FLIC_APF_DISABLE_WAIT 5
#define KVM_DEV_FLIC_ADAPTER_REGISTER 6
#define KVM_DEV_FLIC_ADAPTER_MODIFY 7
+#define KVM_DEV_FLIC_CLEAR_IO_IRQ 8
/*
* We can have up to 4*64k pending subchannels + 8 adapter interrupts,
* as well as up to ASYNC_PF_PER_VCPU*KVM_MAX_VCPUS pfault done interrupts.
diff --git a/arch/s390/include/uapi/asm/sie.h b/arch/s390/include/uapi/asm/sie.h
index 5dbaa72baa64..8fb5d4a6dd25 100644
--- a/arch/s390/include/uapi/asm/sie.h
+++ b/arch/s390/include/uapi/asm/sie.h
@@ -16,14 +16,19 @@
{ 0x01, "SIGP sense" }, \
{ 0x02, "SIGP external call" }, \
{ 0x03, "SIGP emergency signal" }, \
+ { 0x04, "SIGP start" }, \
{ 0x05, "SIGP stop" }, \
{ 0x06, "SIGP restart" }, \
{ 0x09, "SIGP stop and store status" }, \
{ 0x0b, "SIGP initial cpu reset" }, \
+ { 0x0c, "SIGP cpu reset" }, \
{ 0x0d, "SIGP set prefix" }, \
{ 0x0e, "SIGP store status at address" }, \
{ 0x12, "SIGP set architecture" }, \
- { 0x15, "SIGP sense running" }
+ { 0x13, "SIGP conditional emergency signal" }, \
+ { 0x15, "SIGP sense running" }, \
+ { 0x16, "SIGP set multithreading" }, \
+ { 0x17, "SIGP store additional status at address" }
#define icpt_prog_codes \
{ 0x0001, "Prog Operation" }, \
diff --git a/arch/s390/kernel/cache.c b/arch/s390/kernel/cache.c
index 8ba32436effe..77a84bd78be2 100644
--- a/arch/s390/kernel/cache.c
+++ b/arch/s390/kernel/cache.c
@@ -72,7 +72,6 @@ void show_cacheinfo(struct seq_file *m)
if (!test_facility(34))
return;
- get_online_cpus();
this_cpu_ci = get_cpu_cacheinfo(cpumask_any(cpu_online_mask));
for (idx = 0; idx < this_cpu_ci->num_leaves; idx++) {
cache = this_cpu_ci->info_list + idx;
@@ -86,7 +85,6 @@ void show_cacheinfo(struct seq_file *m)
seq_printf(m, "associativity=%d", cache->ways_of_associativity);
seq_puts(m, "\n");
}
- put_online_cpus();
}
static inline enum cache_type get_cache_type(struct cache_info *ci, int level)
diff --git a/arch/s390/kernel/crash_dump.c b/arch/s390/kernel/crash_dump.c
index 3986c9f62191..29df8484282b 100644
--- a/arch/s390/kernel/crash_dump.c
+++ b/arch/s390/kernel/crash_dump.c
@@ -173,7 +173,7 @@ int copy_oldmem_kernel(void *dst, void *src, size_t count)
/*
* Copy memory of the old, dumped system to a user space virtual address
*/
-int copy_oldmem_user(void __user *dst, void *src, size_t count)
+static int copy_oldmem_user(void __user *dst, void *src, size_t count)
{
unsigned long from, len;
int rc;
diff --git a/arch/s390/kernel/dumpstack.c b/arch/s390/kernel/dumpstack.c
index 1b6081c0aff9..69f9908ac44c 100644
--- a/arch/s390/kernel/dumpstack.c
+++ b/arch/s390/kernel/dumpstack.c
@@ -89,6 +89,30 @@ void dump_trace(dump_trace_func_t func, void *data, struct task_struct *task,
}
EXPORT_SYMBOL_GPL(dump_trace);
+struct return_address_data {
+ unsigned long address;
+ int depth;
+};
+
+static int __return_address(void *data, unsigned long address)
+{
+ struct return_address_data *rd = data;
+
+ if (rd->depth--)
+ return 0;
+ rd->address = address;
+ return 1;
+}
+
+unsigned long return_address(int depth)
+{
+ struct return_address_data rd = { .depth = depth + 2 };
+
+ dump_trace(__return_address, &rd, NULL, current_stack_pointer());
+ return rd.address;
+}
+EXPORT_SYMBOL_GPL(return_address);
+
static int show_address(void *data, unsigned long address)
{
printk("([<%016lx>] %pSR)\n", address, (void *)address);
diff --git a/arch/s390/kernel/entry.h b/arch/s390/kernel/entry.h
index b7019ab74070..bedd2f55d860 100644
--- a/arch/s390/kernel/entry.h
+++ b/arch/s390/kernel/entry.h
@@ -1,6 +1,7 @@
#ifndef _ENTRY_H
#define _ENTRY_H
+#include <linux/percpu.h>
#include <linux/types.h>
#include <linux/signal.h>
#include <asm/ptrace.h>
@@ -75,4 +76,7 @@ long sys_s390_personality(unsigned int personality);
long sys_s390_runtime_instr(int command, int signum);
long sys_s390_pci_mmio_write(unsigned long, const void __user *, size_t);
long sys_s390_pci_mmio_read(unsigned long, void __user *, size_t);
+
+DECLARE_PER_CPU(u64, mt_cycles[8]);
+
#endif /* _ENTRY_H */
diff --git a/arch/s390/kernel/machine_kexec.c b/arch/s390/kernel/machine_kexec.c
index 2f1b7217c25c..0e64f08d3d69 100644
--- a/arch/s390/kernel/machine_kexec.c
+++ b/arch/s390/kernel/machine_kexec.c
@@ -43,13 +43,13 @@ static int machine_kdump_pm_cb(struct notifier_block *nb, unsigned long action,
switch (action) {
case PM_SUSPEND_PREPARE:
case PM_HIBERNATION_PREPARE:
- if (crashk_res.start)
- crash_map_reserved_pages();
+ if (kexec_crash_image)
+ arch_kexec_unprotect_crashkres();
break;
case PM_POST_SUSPEND:
case PM_POST_HIBERNATION:
- if (crashk_res.start)
- crash_unmap_reserved_pages();
+ if (kexec_crash_image)
+ arch_kexec_protect_crashkres();
break;
default:
return NOTIFY_DONE;
@@ -60,6 +60,8 @@ static int machine_kdump_pm_cb(struct notifier_block *nb, unsigned long action,
static int __init machine_kdump_pm_init(void)
{
pm_notifier(machine_kdump_pm_cb, 0);
+ /* Create initial mapping for crashkernel memory */
+ arch_kexec_unprotect_crashkres();
return 0;
}
arch_initcall(machine_kdump_pm_init);
@@ -146,6 +148,8 @@ static int kdump_csum_valid(struct kimage *image)
#endif
}
+#ifdef CONFIG_CRASH_DUMP
+
/*
* Map or unmap crashkernel memory
*/
@@ -167,21 +171,25 @@ static void crash_map_pages(int enable)
}
/*
- * Map crashkernel memory
+ * Unmap crashkernel memory
*/
-void crash_map_reserved_pages(void)
+void arch_kexec_protect_crashkres(void)
{
- crash_map_pages(1);
+ if (crashk_res.end)
+ crash_map_pages(0);
}
/*
- * Unmap crashkernel memory
+ * Map crashkernel memory
*/
-void crash_unmap_reserved_pages(void)
+void arch_kexec_unprotect_crashkres(void)
{
- crash_map_pages(0);
+ if (crashk_res.end)
+ crash_map_pages(1);
}
+#endif
+
/*
* Give back memory to hypervisor before new kdump is loaded
*/
diff --git a/arch/s390/kernel/module.c b/arch/s390/kernel/module.c
index 7873e171457c..fbc07891f9e7 100644
--- a/arch/s390/kernel/module.c
+++ b/arch/s390/kernel/module.c
@@ -51,6 +51,10 @@ void *module_alloc(unsigned long size)
void module_arch_freeing_init(struct module *mod)
{
+ if (is_livepatch_module(mod) &&
+ mod->state == MODULE_STATE_LIVE)
+ return;
+
vfree(mod->arch.syminfo);
mod->arch.syminfo = NULL;
}
@@ -425,7 +429,5 @@ int module_finalize(const Elf_Ehdr *hdr,
struct module *me)
{
jump_label_apply_nops(me);
- vfree(me->arch.syminfo);
- me->arch.syminfo = NULL;
return 0;
}
diff --git a/arch/s390/kernel/perf_cpum_cf.c b/arch/s390/kernel/perf_cpum_cf.c
index 62f066b5259e..59215c518f37 100644
--- a/arch/s390/kernel/perf_cpum_cf.c
+++ b/arch/s390/kernel/perf_cpum_cf.c
@@ -665,18 +665,21 @@ static struct pmu cpumf_pmu = {
static int cpumf_pmu_notifier(struct notifier_block *self, unsigned long action,
void *hcpu)
{
- unsigned int cpu = (long) hcpu;
int flags;
switch (action & ~CPU_TASKS_FROZEN) {
case CPU_ONLINE:
case CPU_DOWN_FAILED:
flags = PMC_INIT;
- smp_call_function_single(cpu, setup_pmc_cpu, &flags, 1);
+ local_irq_disable();
+ setup_pmc_cpu(&flags);
+ local_irq_enable();
break;
case CPU_DOWN_PREPARE:
flags = PMC_RELEASE;
- smp_call_function_single(cpu, setup_pmc_cpu, &flags, 1);
+ local_irq_disable();
+ setup_pmc_cpu(&flags);
+ local_irq_enable();
break;
default:
break;
diff --git a/arch/s390/kernel/perf_cpum_sf.c b/arch/s390/kernel/perf_cpum_sf.c
index eaab9a7cb3be..a8e832166417 100644
--- a/arch/s390/kernel/perf_cpum_sf.c
+++ b/arch/s390/kernel/perf_cpum_sf.c
@@ -1510,7 +1510,6 @@ static void cpumf_measurement_alert(struct ext_code ext_code,
static int cpumf_pmu_notifier(struct notifier_block *self,
unsigned long action, void *hcpu)
{
- unsigned int cpu = (long) hcpu;
int flags;
/* Ignore the notification if no events are scheduled on the PMU.
@@ -1523,11 +1522,15 @@ static int cpumf_pmu_notifier(struct notifier_block *self,
case CPU_ONLINE:
case CPU_DOWN_FAILED:
flags = PMC_INIT;
- smp_call_function_single(cpu, setup_pmc_cpu, &flags, 1);
+ local_irq_disable();
+ setup_pmc_cpu(&flags);
+ local_irq_enable();
break;
case CPU_DOWN_PREPARE:
flags = PMC_RELEASE;
- smp_call_function_single(cpu, setup_pmc_cpu, &flags, 1);
+ local_irq_disable();
+ setup_pmc_cpu(&flags);
+ local_irq_enable();
break;
default:
break;
diff --git a/arch/s390/kernel/perf_event.c b/arch/s390/kernel/perf_event.c
index c3e4099b60a5..87035fa58bbe 100644
--- a/arch/s390/kernel/perf_event.c
+++ b/arch/s390/kernel/perf_event.c
@@ -224,13 +224,13 @@ arch_initcall(service_level_perf_register);
static int __perf_callchain_kernel(void *data, unsigned long address)
{
- struct perf_callchain_entry *entry = data;
+ struct perf_callchain_entry_ctx *entry = data;
perf_callchain_store(entry, address);
return 0;
}
-void perf_callchain_kernel(struct perf_callchain_entry *entry,
+void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
struct pt_regs *regs)
{
if (user_mode(regs))
diff --git a/arch/s390/kernel/process.c b/arch/s390/kernel/process.c
index 2bba7df4ac51..bba4fa74b321 100644
--- a/arch/s390/kernel/process.c
+++ b/arch/s390/kernel/process.c
@@ -7,6 +7,7 @@
* Denis Joseph Barrow,
*/
+#include <linux/elf-randomize.h>
#include <linux/compiler.h>
#include <linux/cpu.h>
#include <linux/sched.h>
@@ -37,9 +38,6 @@
asmlinkage void ret_from_fork(void) asm ("ret_from_fork");
-/* FPU save area for the init task */
-__vector128 init_task_fpu_regs[__NUM_VXRS] __init_task_data;
-
/*
* Return saved PC of a blocked thread. used in kernel/sched.
* resume in entry.S does not create a new stack frame, it
@@ -70,9 +68,10 @@ extern void kernel_thread_starter(void);
/*
* Free current thread data structures etc..
*/
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
{
- exit_thread_runtime_instr();
+ if (tsk == current)
+ exit_thread_runtime_instr();
}
void flush_thread(void)
@@ -85,35 +84,19 @@ void release_thread(struct task_struct *dead_task)
void arch_release_task_struct(struct task_struct *tsk)
{
- /* Free either the floating-point or the vector register save area */
- kfree(tsk->thread.fpu.regs);
}
int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
{
- size_t fpu_regs_size;
-
- *dst = *src;
-
- /*
- * If the vector extension is available, it is enabled for all tasks,
- * and, thus, the FPU register save area must be allocated accordingly.
- */
- fpu_regs_size = MACHINE_HAS_VX ? sizeof(__vector128) * __NUM_VXRS
- : sizeof(freg_t) * __NUM_FPRS;
- dst->thread.fpu.regs = kzalloc(fpu_regs_size, GFP_KERNEL|__GFP_REPEAT);
- if (!dst->thread.fpu.regs)
- return -ENOMEM;
-
/*
* Save the floating-point or vector register state of the current
* task and set the CIF_FPU flag to lazy restore the FPU register
* state when returning to user space.
*/
save_fpu_regs();
- dst->thread.fpu.fpc = current->thread.fpu.fpc;
- memcpy(dst->thread.fpu.regs, current->thread.fpu.regs, fpu_regs_size);
+ memcpy(dst, src, arch_task_struct_size);
+ dst->thread.fpu.regs = dst->thread.fpu.fprs;
return 0;
}
diff --git a/arch/s390/kernel/processor.c b/arch/s390/kernel/processor.c
index 647128d5b983..de7451065c34 100644
--- a/arch/s390/kernel/processor.c
+++ b/arch/s390/kernel/processor.c
@@ -6,6 +6,7 @@
#define KMSG_COMPONENT "cpu"
#define pr_fmt(fmt) KMSG_COMPONENT ": " fmt
+#include <linux/cpufeature.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/seq_file.h>
@@ -84,7 +85,6 @@ static int show_cpuinfo(struct seq_file *m, void *v)
seq_puts(m, "\n");
show_cacheinfo(m);
}
- get_online_cpus();
if (cpu_online(n)) {
struct cpuid *id = &per_cpu(cpu_id, n);
seq_printf(m, "processor %li: "
@@ -93,23 +93,31 @@ static int show_cpuinfo(struct seq_file *m, void *v)
"machine = %04X\n",
n, id->version, id->ident, id->machine);
}
- put_online_cpus();
return 0;
}
+static inline void *c_update(loff_t *pos)
+{
+ if (*pos)
+ *pos = cpumask_next(*pos - 1, cpu_online_mask);
+ return *pos < nr_cpu_ids ? (void *)*pos + 1 : NULL;
+}
+
static void *c_start(struct seq_file *m, loff_t *pos)
{
- return *pos < nr_cpu_ids ? (void *)((unsigned long) *pos + 1) : NULL;
+ get_online_cpus();
+ return c_update(pos);
}
static void *c_next(struct seq_file *m, void *v, loff_t *pos)
{
++*pos;
- return c_start(m, pos);
+ return c_update(pos);
}
static void c_stop(struct seq_file *m, void *v)
{
+ put_online_cpus();
}
const struct seq_operations cpuinfo_op = {
diff --git a/arch/s390/kernel/setup.c b/arch/s390/kernel/setup.c
index d3f9688f26b5..f31939147ccd 100644
--- a/arch/s390/kernel/setup.c
+++ b/arch/s390/kernel/setup.c
@@ -809,6 +809,22 @@ static void __init setup_randomness(void)
}
/*
+ * Find the correct size for the task_struct. This depends on
+ * the size of the struct fpu at the end of the thread_struct
+ * which is embedded in the task_struct.
+ */
+static void __init setup_task_size(void)
+{
+ int task_size = sizeof(struct task_struct);
+
+ if (!MACHINE_HAS_VX) {
+ task_size -= sizeof(__vector128) * __NUM_VXRS;
+ task_size += sizeof(freg_t) * __NUM_FPRS;
+ }
+ arch_task_struct_size = task_size;
+}
+
+/*
* Setup function called from init/main.c just after the banner
* was printed.
*/
@@ -846,6 +862,7 @@ void __init setup_arch(char **cmdline_p)
os_info_init();
setup_ipl();
+ setup_task_size();
/* Do some memory reservations *before* memory is added to memblock */
reserve_memory_end();
diff --git a/arch/s390/kernel/smp.c b/arch/s390/kernel/smp.c
index 40a6b4f9c36c..7b89a7572100 100644
--- a/arch/s390/kernel/smp.c
+++ b/arch/s390/kernel/smp.c
@@ -832,7 +832,7 @@ int __cpu_up(unsigned int cpu, struct task_struct *tidle)
pcpu_attach_task(pcpu, tidle);
pcpu_start_fn(pcpu, smp_start_secondary, NULL);
/* Wait until cpu puts itself in the online & active maps */
- while (!cpu_online(cpu) || !cpu_active(cpu))
+ while (!cpu_online(cpu))
cpu_relax();
return 0;
}
diff --git a/arch/s390/kernel/vdso.c b/arch/s390/kernel/vdso.c
index 94495cac8be3..5904abf6b1ae 100644
--- a/arch/s390/kernel/vdso.c
+++ b/arch/s390/kernel/vdso.c
@@ -216,7 +216,8 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
* it at vdso_base which is the "natural" base for it, but we might
* fail and end up putting it elsewhere.
*/
- down_write(&mm->mmap_sem);
+ if (down_write_killable(&mm->mmap_sem))
+ return -EINTR;
vdso_base = get_unmapped_area(NULL, 0, vdso_pages << PAGE_SHIFT, 0, 0);
if (IS_ERR_VALUE(vdso_base)) {
rc = vdso_base;
diff --git a/arch/s390/kernel/vtime.c b/arch/s390/kernel/vtime.c
index dafc44f519c3..856e30d8463f 100644
--- a/arch/s390/kernel/vtime.c
+++ b/arch/s390/kernel/vtime.c
@@ -18,6 +18,8 @@
#include <asm/cpu_mf.h>
#include <asm/smp.h>
+#include "entry.h"
+
static void virt_timer_expire(void);
static LIST_HEAD(virt_timer_list);
diff --git a/arch/s390/kvm/Kconfig b/arch/s390/kvm/Kconfig
index 5ea5af3c7db7..b1900239b0ab 100644
--- a/arch/s390/kvm/Kconfig
+++ b/arch/s390/kvm/Kconfig
@@ -28,6 +28,7 @@ config KVM
select HAVE_KVM_IRQCHIP
select HAVE_KVM_IRQFD
select HAVE_KVM_IRQ_ROUTING
+ select HAVE_KVM_INVALID_WAKEUPS
select SRCU
select KVM_VFIO
---help---
diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
index 84efc2ba6a90..5a80af740d3e 100644
--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -977,6 +977,11 @@ no_timer:
void kvm_s390_vcpu_wakeup(struct kvm_vcpu *vcpu)
{
+ /*
+ * We cannot move this into the if, as the CPU might be already
+ * in kvm_vcpu_block without having the waitqueue set (polling)
+ */
+ vcpu->valid_wakeup = true;
if (swait_active(&vcpu->wq)) {
/*
* The vcpu gave up the cpu voluntarily, mark it as a good
@@ -2034,6 +2039,27 @@ static int modify_io_adapter(struct kvm_device *dev,
return ret;
}
+static int clear_io_irq(struct kvm *kvm, struct kvm_device_attr *attr)
+
+{
+ const u64 isc_mask = 0xffUL << 24; /* all iscs set */
+ u32 schid;
+
+ if (attr->flags)
+ return -EINVAL;
+ if (attr->attr != sizeof(schid))
+ return -EINVAL;
+ if (copy_from_user(&schid, (void __user *) attr->addr, sizeof(schid)))
+ return -EFAULT;
+ kfree(kvm_s390_get_io_int(kvm, isc_mask, schid));
+ /*
+ * If userspace is conforming to the architecture, we can have at most
+ * one pending I/O interrupt per subchannel, so this is effectively a
+ * clear all.
+ */
+ return 0;
+}
+
static int flic_set_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
{
int r = 0;
@@ -2067,6 +2093,9 @@ static int flic_set_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
case KVM_DEV_FLIC_ADAPTER_MODIFY:
r = modify_io_adapter(dev, attr);
break;
+ case KVM_DEV_FLIC_CLEAR_IO_IRQ:
+ r = clear_io_irq(dev->kvm, attr);
+ break;
default:
r = -EINVAL;
}
@@ -2074,6 +2103,23 @@ static int flic_set_attr(struct kvm_device *dev, struct kvm_device_attr *attr)
return r;
}
+static int flic_has_attr(struct kvm_device *dev,
+ struct kvm_device_attr *attr)
+{
+ switch (attr->group) {
+ case KVM_DEV_FLIC_GET_ALL_IRQS:
+ case KVM_DEV_FLIC_ENQUEUE:
+ case KVM_DEV_FLIC_CLEAR_IRQS:
+ case KVM_DEV_FLIC_APF_ENABLE:
+ case KVM_DEV_FLIC_APF_DISABLE_WAIT:
+ case KVM_DEV_FLIC_ADAPTER_REGISTER:
+ case KVM_DEV_FLIC_ADAPTER_MODIFY:
+ case KVM_DEV_FLIC_CLEAR_IO_IRQ:
+ return 0;
+ }
+ return -ENXIO;
+}
+
static int flic_create(struct kvm_device *dev, u32 type)
{
if (!dev)
@@ -2095,6 +2141,7 @@ struct kvm_device_ops kvm_flic_ops = {
.name = "kvm-flic",
.get_attr = flic_get_attr,
.set_attr = flic_set_attr,
+ .has_attr = flic_has_attr,
.create = flic_create,
.destroy = flic_destroy,
};
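A hedged user-space sketch of exercising the new KVM_DEV_FLIC_CLEAR_IO_IRQ group with KVM_SET_DEVICE_ATTR on a FLIC device fd (created elsewhere via KVM_CREATE_DEVICE); per clear_io_irq() above, attr carries sizeof(schid) and addr points at the subchannel id:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static int example_flic_clear_io_irq(int flic_fd, uint32_t schid)
{
	struct kvm_device_attr attr;

	memset(&attr, 0, sizeof(attr));	/* flags must be zero */
	attr.group = KVM_DEV_FLIC_CLEAR_IO_IRQ;
	attr.attr = sizeof(schid);
	attr.addr = (uint64_t)(unsigned long) &schid;
	return ioctl(flic_fd, KVM_SET_DEVICE_ATTR, &attr);
}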
diff --git a/arch/s390/kvm/kvm-s390.c b/arch/s390/kvm/kvm-s390.c
index 668c087513e5..6d8ec3ac9dd8 100644
--- a/arch/s390/kvm/kvm-s390.c
+++ b/arch/s390/kvm/kvm-s390.c
@@ -65,6 +65,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
{ "exit_instr_and_program_int", VCPU_STAT(exit_instr_and_program) },
{ "halt_successful_poll", VCPU_STAT(halt_successful_poll) },
{ "halt_attempted_poll", VCPU_STAT(halt_attempted_poll) },
+ { "halt_poll_invalid", VCPU_STAT(halt_poll_invalid) },
{ "halt_wakeup", VCPU_STAT(halt_wakeup) },
{ "instruction_lctlg", VCPU_STAT(instruction_lctlg) },
{ "instruction_lctl", VCPU_STAT(instruction_lctl) },
@@ -118,9 +119,9 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
};
/* upper facilities limit for kvm */
-unsigned long kvm_s390_fac_list_mask[] = {
- 0xffe6fffbfcfdfc40UL,
- 0x005e800000000000UL,
+unsigned long kvm_s390_fac_list_mask[16] = {
+ 0xffe6000000000000UL,
+ 0x005e000000000000UL,
};
unsigned long kvm_s390_fac_list_mask_size(void)
@@ -638,6 +639,7 @@ static int kvm_s390_get_tod(struct kvm *kvm, struct kvm_device_attr *attr)
static int kvm_s390_set_processor(struct kvm *kvm, struct kvm_device_attr *attr)
{
struct kvm_s390_vm_cpu_processor *proc;
+ u16 lowest_ibc, unblocked_ibc;
int ret = 0;
mutex_lock(&kvm->lock);
@@ -652,9 +654,17 @@ static int kvm_s390_set_processor(struct kvm *kvm, struct kvm_device_attr *attr)
}
if (!copy_from_user(proc, (void __user *)attr->addr,
sizeof(*proc))) {
- memcpy(&kvm->arch.model.cpu_id, &proc->cpuid,
- sizeof(struct cpuid));
- kvm->arch.model.ibc = proc->ibc;
+ kvm->arch.model.cpuid = proc->cpuid;
+ lowest_ibc = sclp.ibc >> 16 & 0xfff;
+ unblocked_ibc = sclp.ibc & 0xfff;
+ if (lowest_ibc) {
+ if (proc->ibc > unblocked_ibc)
+ kvm->arch.model.ibc = unblocked_ibc;
+ else if (proc->ibc < lowest_ibc)
+ kvm->arch.model.ibc = lowest_ibc;
+ else
+ kvm->arch.model.ibc = proc->ibc;
+ }
memcpy(kvm->arch.model.fac_list, proc->fac_list,
S390_ARCH_FAC_LIST_SIZE_BYTE);
} else
@@ -687,7 +697,7 @@ static int kvm_s390_get_processor(struct kvm *kvm, struct kvm_device_attr *attr)
ret = -ENOMEM;
goto out;
}
- memcpy(&proc->cpuid, &kvm->arch.model.cpu_id, sizeof(struct cpuid));
+ proc->cpuid = kvm->arch.model.cpuid;
proc->ibc = kvm->arch.model.ibc;
memcpy(&proc->fac_list, kvm->arch.model.fac_list,
S390_ARCH_FAC_LIST_SIZE_BYTE);
@@ -1081,10 +1091,13 @@ static void kvm_s390_set_crycb_format(struct kvm *kvm)
kvm->arch.crypto.crycbd |= CRYCB_FORMAT1;
}
-static void kvm_s390_get_cpu_id(struct cpuid *cpu_id)
+static u64 kvm_s390_get_initial_cpuid(void)
{
- get_cpu_id(cpu_id);
- cpu_id->version = 0xff;
+ struct cpuid cpuid;
+
+ get_cpu_id(&cpuid);
+ cpuid.version = 0xff;
+ return *((u64 *) &cpuid);
}
static void kvm_s390_crypto_init(struct kvm *kvm)
@@ -1175,7 +1188,7 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
memcpy(kvm->arch.model.fac_list, kvm->arch.model.fac_mask,
S390_ARCH_FAC_LIST_SIZE_BYTE);
- kvm_s390_get_cpu_id(&kvm->arch.model.cpu_id);
+ kvm->arch.model.cpuid = kvm_s390_get_initial_cpuid();
kvm->arch.model.ibc = sclp.ibc & 0x0fff;
kvm_s390_crypto_init(kvm);
@@ -1624,7 +1637,6 @@ static void kvm_s390_vcpu_setup_model(struct kvm_vcpu *vcpu)
{
struct kvm_s390_cpu_model *model = &vcpu->kvm->arch.model;
- vcpu->arch.cpu_id = model->cpu_id;
vcpu->arch.sie_block->ibc = model->ibc;
if (test_kvm_facility(vcpu->kvm, 7))
vcpu->arch.sie_block->fac = (u32)(u64) model->fac_list;
@@ -1645,11 +1657,14 @@ int kvm_arch_vcpu_setup(struct kvm_vcpu *vcpu)
kvm_s390_vcpu_setup_model(vcpu);
- vcpu->arch.sie_block->ecb = 6;
+ vcpu->arch.sie_block->ecb = 0x02;
+ if (test_kvm_facility(vcpu->kvm, 9))
+ vcpu->arch.sie_block->ecb |= 0x04;
if (test_kvm_facility(vcpu->kvm, 50) && test_kvm_facility(vcpu->kvm, 73))
vcpu->arch.sie_block->ecb |= 0x10;
- vcpu->arch.sie_block->ecb2 = 8;
+ if (test_kvm_facility(vcpu->kvm, 8))
+ vcpu->arch.sie_block->ecb2 |= 0x08;
vcpu->arch.sie_block->eca = 0xC1002000U;
if (sclp.has_siif)
vcpu->arch.sie_block->eca |= 1;
@@ -2971,13 +2986,31 @@ void kvm_arch_commit_memory_region(struct kvm *kvm,
return;
}
+static inline unsigned long nonhyp_mask(int i)
+{
+ unsigned int nonhyp_fai = (sclp.hmfai << i * 2) >> 30;
+
+ return 0x0000ffffffffffffUL >> (nonhyp_fai << 4);
+}
+
+void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu)
+{
+ vcpu->valid_wakeup = false;
+}
+
static int __init kvm_s390_init(void)
{
+ int i;
+
if (!sclp.has_sief2) {
pr_info("SIE not available\n");
return -ENODEV;
}
+ for (i = 0; i < 16; i++)
+ kvm_s390_fac_list_mask[i] |=
+ S390_lowcore.stfle_fac_list[i] & nonhyp_mask(i);
+
return kvm_init(NULL, sizeof(struct kvm_vcpu), 0, THIS_MODULE);
}
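The IBC handling in kvm_s390_set_processor() above clamps the user-supplied value into the range reported by SCLP: the lowest supported IBC sits in the upper halfword of sclp.ibc, the highest unblocked IBC in the lower halfword. A condensed, standalone sketch of that clamp (the sclp_ibc value below is made up for illustration):

/* Sketch of the IBC clamping done in kvm_s390_set_processor() above. */
#include <stdint.h>
#include <stdio.h>

static uint16_t clamp_ibc(uint16_t requested, uint32_t sclp_ibc)
{
	uint16_t lowest_ibc = (sclp_ibc >> 16) & 0xfff;
	uint16_t unblocked_ibc = sclp_ibc & 0xfff;

	if (!lowest_ibc)		/* no IBC facility: leave the model value alone */
		return 0;
	if (requested > unblocked_ibc)
		return unblocked_ibc;
	if (requested < lowest_ibc)
		return lowest_ibc;
	return requested;
}

int main(void)
{
	/* e.g. machine reports lowest 0x100, unblocked 0x123 */
	printf("0x%x\n", clamp_ibc(0x222, 0x01000123));	/* prints 0x123 */
	return 0;
}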
diff --git a/arch/s390/kvm/priv.c b/arch/s390/kvm/priv.c
index 0a1591d3d25d..95916fa7c670 100644
--- a/arch/s390/kvm/priv.c
+++ b/arch/s390/kvm/priv.c
@@ -439,7 +439,7 @@ static int handle_lpswe(struct kvm_vcpu *vcpu)
static int handle_stidp(struct kvm_vcpu *vcpu)
{
- u64 stidp_data = vcpu->arch.stidp_data;
+ u64 stidp_data = vcpu->kvm->arch.model.cpuid;
u64 operand2;
int rc;
ar_t ar;
@@ -670,8 +670,9 @@ static int handle_pfmf(struct kvm_vcpu *vcpu)
if (vcpu->run->s.regs.gprs[reg1] & PFMF_RESERVED)
return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
- /* Only provide non-quiescing support if the host supports it */
- if (vcpu->run->s.regs.gprs[reg1] & PFMF_NQ && !test_facility(14))
+ /* Only provide non-quiescing support if enabled for the guest */
+ if (vcpu->run->s.regs.gprs[reg1] & PFMF_NQ &&
+ !test_kvm_facility(vcpu->kvm, 14))
return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
/* No support for conditional-SSKE */
@@ -744,7 +745,7 @@ static int handle_essa(struct kvm_vcpu *vcpu)
{
/* entries expected to be 1FF */
int entries = (vcpu->arch.sie_block->cbrlo & ~PAGE_MASK) >> 3;
- unsigned long *cbrlo, cbrle;
+ unsigned long *cbrlo;
struct gmap *gmap;
int i;
@@ -765,17 +766,9 @@ static int handle_essa(struct kvm_vcpu *vcpu)
vcpu->arch.sie_block->cbrlo &= PAGE_MASK; /* reset nceo */
cbrlo = phys_to_virt(vcpu->arch.sie_block->cbrlo);
down_read(&gmap->mm->mmap_sem);
- for (i = 0; i < entries; ++i) {
- cbrle = cbrlo[i];
- if (unlikely(cbrle & ~PAGE_MASK || cbrle < 2 * PAGE_SIZE))
- /* invalid entry */
- break;
- /* try to free backing */
- __gmap_zap(gmap, cbrle);
- }
+ for (i = 0; i < entries; ++i)
+ __gmap_zap(gmap, cbrlo[i]);
up_read(&gmap->mm->mmap_sem);
- if (i < entries)
- return kvm_s390_inject_program_int(vcpu, PGM_SPECIFICATION);
return 0;
}
diff --git a/arch/s390/kvm/sigp.c b/arch/s390/kvm/sigp.c
index 77c22d685c7a..28ea0cab1f1b 100644
--- a/arch/s390/kvm/sigp.c
+++ b/arch/s390/kvm/sigp.c
@@ -240,6 +240,12 @@ static int __sigp_sense_running(struct kvm_vcpu *vcpu,
struct kvm_s390_local_interrupt *li;
int rc;
+ if (!test_kvm_facility(vcpu->kvm, 9)) {
+ *reg &= 0xffffffff00000000UL;
+ *reg |= SIGP_STATUS_INVALID_ORDER;
+ return SIGP_CC_STATUS_STORED;
+ }
+
li = &dst_vcpu->arch.local_int;
if (atomic_read(li->cpuflags) & CPUSTAT_RUNNING) {
/* running */
diff --git a/arch/s390/mm/fault.c b/arch/s390/mm/fault.c
index cce577feab1e..7a3144017301 100644
--- a/arch/s390/mm/fault.c
+++ b/arch/s390/mm/fault.c
@@ -631,6 +631,29 @@ void pfault_fini(void)
static DEFINE_SPINLOCK(pfault_lock);
static LIST_HEAD(pfault_list);
+#define PF_COMPLETE 0x0080
+
+/*
+ * The mechanism of our pfault code: if Linux is running as guest, runs a user
+ * space process and the user space process accesses a page that the host has
+ * paged out, we get a pfault interrupt.
+ *
+ * This allows us, within the guest, to schedule a different process. Without
+ * this mechanism the host would have to suspend the whole virtual cpu until
+ * the page has been paged in.
+ *
+ * So when we get such an interrupt then we set the state of the current task
+ * to uninterruptible and also set the need_resched flag. Both happen within
+ * interrupt context(!). If we later on want to return to user space we
+ * recognize the need_resched flag and then call schedule(). It's not very
+ * obvious how this works...
+ *
+ * Of course we have a lot of additional fun with the completion interrupt (->
+ * host signals that a page of a process has been paged in and the process can
+ * continue to run). This interrupt can arrive on any cpu and, since we have
+ * virtual cpus, actually appear before the interrupt that signals that a page
+ * is missing.
+ */
static void pfault_interrupt(struct ext_code ext_code,
unsigned int param32, unsigned long param64)
{
@@ -639,10 +662,9 @@ static void pfault_interrupt(struct ext_code ext_code,
pid_t pid;
/*
- * Get the external interruption subcode & pfault
- * initial/completion signal bit. VM stores this
- * in the 'cpu address' field associated with the
- * external interrupt.
+ * Get the external interruption subcode & pfault initial/completion
+ * signal bit. VM stores this in the 'cpu address' field associated
+ * with the external interrupt.
*/
subcode = ext_code.subcode;
if ((subcode & 0xff00) != __SUBCODE_MASK)
@@ -658,7 +680,7 @@ static void pfault_interrupt(struct ext_code ext_code,
if (!tsk)
return;
spin_lock(&pfault_lock);
- if (subcode & 0x0080) {
+ if (subcode & PF_COMPLETE) {
/* signal bit is set -> a page has been swapped in by VM */
if (tsk->thread.pfault_wait == 1) {
/* Initial interrupt was faster than the completion
@@ -687,8 +709,7 @@ static void pfault_interrupt(struct ext_code ext_code,
goto out;
if (tsk->thread.pfault_wait == 1) {
/* Already on the list with a reference: put to sleep */
- __set_task_state(tsk, TASK_UNINTERRUPTIBLE);
- set_tsk_need_resched(tsk);
+ goto block;
} else if (tsk->thread.pfault_wait == -1) {
/* Completion interrupt was faster than the initial
* interrupt (pfault_wait == -1). Set pfault_wait
@@ -703,7 +724,11 @@ static void pfault_interrupt(struct ext_code ext_code,
get_task_struct(tsk);
tsk->thread.pfault_wait = 1;
list_add(&tsk->thread.list, &pfault_list);
- __set_task_state(tsk, TASK_UNINTERRUPTIBLE);
+block:
+ /* Since this must be a userspace fault, there
+ * is no kernel task state to trample. Rely on the
+ * return to userspace schedule() to block. */
+ __set_current_state(TASK_UNINTERRUPTIBLE);
set_tsk_need_resched(tsk);
}
}
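The comment block added above describes a small state machine around thread.pfault_wait. A condensed reading of it as a standalone sketch (this is only an interpretation of the comment; the list/refcount handling and the exiting-task edge cases are deliberately omitted):

/* Sketch only: the three pfault_wait states and the transitions described
 * in the comment above. Not the kernel code itself.
 *
 *   0  no pfault outstanding for this task
 *   1  initial interrupt seen, task queued and put to sleep
 *  -1  completion interrupt overtook the initial interrupt
 */
static void pfault_transition(int *pfault_wait, int completion)
{
	if (completion) {
		if (*pfault_wait == 1)
			*pfault_wait = 0;	/* page arrived: wake the task */
		else
			*pfault_wait = -1;	/* remember the early completion */
	} else {
		if (*pfault_wait == -1)
			*pfault_wait = 0;	/* lost the race: do not block at all */
		else
			*pfault_wait = 1;	/* queue the task and block it */
	}
}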
diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index 89cf09e5f168..eb9df2822da1 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -22,6 +22,7 @@
* Started by Ingo Molnar <mingo@elte.hu>
*/
+#include <linux/elf-randomize.h>
#include <linux/personality.h>
#include <linux/mm.h>
#include <linux/mman.h>
diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
index d27fccbad7c1..d48cf25cfe99 100644
--- a/arch/s390/mm/vmem.c
+++ b/arch/s390/mm/vmem.c
@@ -56,7 +56,7 @@ static inline pmd_t *vmem_pmd_alloc(void)
return pmd;
}
-static pte_t __ref *vmem_pte_alloc(unsigned long address)
+static pte_t __ref *vmem_pte_alloc(void)
{
pte_t *pte;
@@ -121,7 +121,7 @@ static int vmem_add_mem(unsigned long start, unsigned long size, int ro)
continue;
}
if (pmd_none(*pm_dir)) {
- pt_dir = vmem_pte_alloc(address);
+ pt_dir = vmem_pte_alloc();
if (!pt_dir)
goto out;
pmd_populate(&init_mm, pm_dir, pt_dir);
@@ -233,7 +233,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
address = (address + PMD_SIZE) & PMD_MASK;
continue;
}
- pt_dir = vmem_pte_alloc(address);
+ pt_dir = vmem_pte_alloc();
if (!pt_dir)
goto out;
pmd_populate(&init_mm, pm_dir, pt_dir);
@@ -370,7 +370,7 @@ void __init vmem_map_init(void)
ro_end = (unsigned long)&_eshared & PAGE_MASK;
for_each_memblock(memory, reg) {
start = reg->base;
- end = reg->base + reg->size - 1;
+ end = reg->base + reg->size;
if (start >= ro_end || end <= ro_start)
vmem_add_mem(start, end - start, 0);
else if (start >= ro_start && end <= ro_end)
diff --git a/arch/s390/net/bpf_jit_comp.c b/arch/s390/net/bpf_jit_comp.c
index 3c0bfc1f2694..9133b0ec000b 100644
--- a/arch/s390/net/bpf_jit_comp.c
+++ b/arch/s390/net/bpf_jit_comp.c
@@ -54,16 +54,17 @@ struct bpf_jit {
#define SEEN_FUNC 16 /* calls C functions */
#define SEEN_TAIL_CALL 32 /* code uses tail calls */
#define SEEN_SKB_CHANGE 64 /* code changes skb data */
+#define SEEN_REG_AX 128 /* code uses constant blinding */
#define SEEN_STACK (SEEN_FUNC | SEEN_MEM | SEEN_SKB)
/*
* s390 registers
*/
-#define REG_W0 (__MAX_BPF_REG+0) /* Work register 1 (even) */
-#define REG_W1 (__MAX_BPF_REG+1) /* Work register 2 (odd) */
-#define REG_SKB_DATA (__MAX_BPF_REG+2) /* SKB data register */
-#define REG_L (__MAX_BPF_REG+3) /* Literal pool register */
-#define REG_15 (__MAX_BPF_REG+4) /* Register 15 */
+#define REG_W0 (MAX_BPF_JIT_REG + 0) /* Work register 1 (even) */
+#define REG_W1 (MAX_BPF_JIT_REG + 1) /* Work register 2 (odd) */
+#define REG_SKB_DATA (MAX_BPF_JIT_REG + 2) /* SKB data register */
+#define REG_L (MAX_BPF_JIT_REG + 3) /* Literal pool register */
+#define REG_15 (MAX_BPF_JIT_REG + 4) /* Register 15 */
#define REG_0 REG_W0 /* Register 0 */
#define REG_1 REG_W1 /* Register 1 */
#define REG_2 BPF_REG_1 /* Register 2 */
@@ -88,6 +89,8 @@ static const int reg2hex[] = {
[BPF_REG_9] = 10,
/* BPF stack pointer */
[BPF_REG_FP] = 13,
+ /* Register for blinding (shared with REG_SKB_DATA) */
+ [BPF_REG_AX] = 12,
/* SKB data pointer */
[REG_SKB_DATA] = 12,
/* Work registers for s390x backend */
@@ -385,7 +388,7 @@ static void save_restore_regs(struct bpf_jit *jit, int op)
/*
* For SKB access %b1 contains the SKB pointer. For "bpf_jit.S"
* we store the SKB header length on the stack and the SKB data
- * pointer in REG_SKB_DATA.
+ * pointer in REG_SKB_DATA if BPF_REG_AX is not used.
*/
static void emit_load_skb_data_hlen(struct bpf_jit *jit)
{
@@ -397,9 +400,10 @@ static void emit_load_skb_data_hlen(struct bpf_jit *jit)
offsetof(struct sk_buff, data_len));
/* stg %w1,ST_OFF_HLEN(%r0,%r15) */
EMIT6_DISP_LH(0xe3000000, 0x0024, REG_W1, REG_0, REG_15, STK_OFF_HLEN);
- /* lg %skb_data,data_off(%b1) */
- EMIT6_DISP_LH(0xe3000000, 0x0004, REG_SKB_DATA, REG_0,
- BPF_REG_1, offsetof(struct sk_buff, data));
+ if (!(jit->seen & SEEN_REG_AX))
+ /* lg %skb_data,data_off(%b1) */
+ EMIT6_DISP_LH(0xe3000000, 0x0004, REG_SKB_DATA, REG_0,
+ BPF_REG_1, offsetof(struct sk_buff, data));
}
/*
@@ -487,6 +491,8 @@ static noinline int bpf_jit_insn(struct bpf_jit *jit, struct bpf_prog *fp, int i
s32 imm = insn->imm;
s16 off = insn->off;
+ if (dst_reg == BPF_REG_AX || src_reg == BPF_REG_AX)
+ jit->seen |= SEEN_REG_AX;
switch (insn->code) {
/*
* BPF_MOV
@@ -1188,7 +1194,7 @@ call_fn:
/*
* Implicit input:
* BPF_REG_6 (R7) : skb pointer
- * REG_SKB_DATA (R12): skb data pointer
+ * REG_SKB_DATA (R12): skb data pointer (if no BPF_REG_AX)
*
* Calculated input:
* BPF_REG_2 (R3) : offset of byte(s) to fetch in skb
@@ -1209,6 +1215,11 @@ call_fn:
/* agfr %b2,%src (%src is s32 here) */
EMIT4(0xb9180000, BPF_REG_2, src_reg);
+ /* Reload REG_SKB_DATA if BPF_REG_AX is used */
+ if (jit->seen & SEEN_REG_AX)
+ /* lg %skb_data,data_off(%b6) */
+ EMIT6_DISP_LH(0xe3000000, 0x0004, REG_SKB_DATA, REG_0,
+ BPF_REG_6, offsetof(struct sk_buff, data));
/* basr %b5,%w1 (%b5 is call saved) */
EMIT2(0x0d00, BPF_REG_5, REG_W1);
@@ -1262,37 +1273,62 @@ void bpf_jit_compile(struct bpf_prog *fp)
/*
* Compile eBPF program "fp"
*/
-void bpf_int_jit_compile(struct bpf_prog *fp)
+struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *fp)
{
+ struct bpf_prog *tmp, *orig_fp = fp;
struct bpf_binary_header *header;
+ bool tmp_blinded = false;
struct bpf_jit jit;
int pass;
if (!bpf_jit_enable)
- return;
+ return orig_fp;
+
+ tmp = bpf_jit_blind_constants(fp);
+ /*
+ * If blinding was requested and we failed during blinding,
+ * we must fall back to the interpreter.
+ */
+ if (IS_ERR(tmp))
+ return orig_fp;
+ if (tmp != fp) {
+ tmp_blinded = true;
+ fp = tmp;
+ }
+
memset(&jit, 0, sizeof(jit));
jit.addrs = kcalloc(fp->len + 1, sizeof(*jit.addrs), GFP_KERNEL);
- if (jit.addrs == NULL)
- return;
+ if (jit.addrs == NULL) {
+ fp = orig_fp;
+ goto out;
+ }
/*
* Three initial passes:
* - 1/2: Determine clobbered registers
* - 3: Calculate program size and addrs array
*/
for (pass = 1; pass <= 3; pass++) {
- if (bpf_jit_prog(&jit, fp))
+ if (bpf_jit_prog(&jit, fp)) {
+ fp = orig_fp;
goto free_addrs;
+ }
}
/*
* Final pass: Allocate and generate program
*/
- if (jit.size >= BPF_SIZE_MAX)
+ if (jit.size >= BPF_SIZE_MAX) {
+ fp = orig_fp;
goto free_addrs;
+ }
header = bpf_jit_binary_alloc(jit.size, &jit.prg_buf, 2, jit_fill_hole);
- if (!header)
+ if (!header) {
+ fp = orig_fp;
goto free_addrs;
- if (bpf_jit_prog(&jit, fp))
+ }
+ if (bpf_jit_prog(&jit, fp)) {
+ fp = orig_fp;
goto free_addrs;
+ }
if (bpf_jit_enable > 1) {
bpf_jit_dump(fp->len, jit.size, pass, jit.prg_buf);
if (jit.prg_buf)
@@ -1305,6 +1341,11 @@ void bpf_int_jit_compile(struct bpf_prog *fp)
}
free_addrs:
kfree(jit.addrs);
+out:
+ if (tmp_blinded)
+ bpf_jit_prog_release_other(fp, fp == orig_fp ?
+ tmp : orig_fp);
+ return fp;
}
/*
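Constant blinding is requested through the net.core.bpf_jit_harden sysctl; the interesting part of the rework above is the ownership dance around the blinded copy. A condensed skeleton of that pattern (kernel context; not the actual s390 implementation, and the code-generation passes are elided):

/* Sketch of the blind-then-JIT fallback pattern used above. */
#include <linux/err.h>
#include <linux/filter.h>

static struct bpf_prog *jit_with_blinding(struct bpf_prog *fp)
{
	struct bpf_prog *tmp, *orig_fp = fp;
	bool tmp_blinded = false;

	tmp = bpf_jit_blind_constants(fp);
	if (IS_ERR(tmp))
		return orig_fp;		/* blinding wanted but failed: stay on the interpreter */
	if (tmp != fp) {
		tmp_blinded = true;	/* work on the blinded copy from here on */
		fp = tmp;
	}

	/* ... run the JIT passes on fp; on any failure set fp = orig_fp ... */

	if (tmp_blinded)
		bpf_jit_prog_release_other(fp, fp == orig_fp ? tmp : orig_fp);
	return fp;
}

Whichever program is not returned gets released, so the caller always ends up owning exactly one of the original and the blinded copy.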
diff --git a/arch/s390/pci/pci_debug.c b/arch/s390/pci/pci_debug.c
index c555de3d12d6..38993b156924 100644
--- a/arch/s390/pci/pci_debug.c
+++ b/arch/s390/pci/pci_debug.c
@@ -1,5 +1,5 @@
/*
- * Copyright IBM Corp. 2012
+ * Copyright IBM Corp. 2012,2015
*
* Author(s):
* Jan Glauber <jang@linux.vnet.ibm.com>
@@ -23,22 +23,45 @@ EXPORT_SYMBOL_GPL(pci_debug_msg_id);
debug_info_t *pci_debug_err_id;
EXPORT_SYMBOL_GPL(pci_debug_err_id);
-static char *pci_perf_names[] = {
- /* hardware counters */
+static char *pci_common_names[] = {
"Load operations",
"Store operations",
"Store block operations",
"Refresh operations",
+};
+
+static char *pci_fmt0_names[] = {
"DMA read bytes",
"DMA write bytes",
};
+static char *pci_fmt1_names[] = {
+ "Received bytes",
+ "Received packets",
+ "Transmitted bytes",
+ "Transmitted packets",
+};
+
+static char *pci_fmt2_names[] = {
+ "Consumed work units",
+ "Maximum work units",
+};
+
static char *pci_sw_names[] = {
"Allocated pages",
"Mapped pages",
"Unmapped pages",
};
+static void pci_fmb_show(struct seq_file *m, char *name[], int length,
+ u64 *data)
+{
+ int i;
+
+ for (i = 0; i < length; i++, data++)
+ seq_printf(m, "%26s:\t%llu\n", name[i], *data);
+}
+
static void pci_sw_counter_show(struct seq_file *m)
{
struct zpci_dev *zdev = m->private;
@@ -53,8 +76,6 @@ static void pci_sw_counter_show(struct seq_file *m)
static int pci_perf_show(struct seq_file *m, void *v)
{
struct zpci_dev *zdev = m->private;
- u64 *stat;
- int i;
if (!zdev)
return 0;
@@ -72,15 +93,27 @@ static int pci_perf_show(struct seq_file *m, void *v)
seq_printf(m, "Samples: %u\n", zdev->fmb->samples);
seq_printf(m, "Last update TOD: %Lx\n", zdev->fmb->last_update);
- /* hardware counters */
- stat = (u64 *) &zdev->fmb->ld_ops;
- for (i = 0; i < 4; i++)
- seq_printf(m, "%26s:\t%llu\n",
- pci_perf_names[i], *(stat + i));
- if (zdev->fmb->dma_valid)
- for (i = 4; i < 6; i++)
- seq_printf(m, "%26s:\t%llu\n",
- pci_perf_names[i], *(stat + i));
+ pci_fmb_show(m, pci_common_names, ARRAY_SIZE(pci_common_names),
+ &zdev->fmb->ld_ops);
+
+ switch (zdev->fmb->format) {
+ case 0:
+ if (!(zdev->fmb->fmt_ind & ZPCI_FMB_DMA_COUNTER_VALID))
+ break;
+ pci_fmb_show(m, pci_fmt0_names, ARRAY_SIZE(pci_fmt0_names),
+ &zdev->fmb->fmt0.dma_rbytes);
+ break;
+ case 1:
+ pci_fmb_show(m, pci_fmt1_names, ARRAY_SIZE(pci_fmt1_names),
+ &zdev->fmb->fmt1.rx_bytes);
+ break;
+ case 2:
+ pci_fmb_show(m, pci_fmt2_names, ARRAY_SIZE(pci_fmt2_names),
+ &zdev->fmb->fmt2.consumed_work_units);
+ break;
+ default:
+ seq_puts(m, "Unknown format\n");
+ }
pci_sw_counter_show(m);
mutex_unlock(&zdev->lock);
diff --git a/arch/s390/pci/pci_sysfs.c b/arch/s390/pci/pci_sysfs.c
index f37a5808883d..ed484dc84d14 100644
--- a/arch/s390/pci/pci_sysfs.c
+++ b/arch/s390/pci/pci_sysfs.c
@@ -12,6 +12,8 @@
#include <linux/stat.h>
#include <linux/pci.h>
+#include <asm/sclp.h>
+
#define zpci_attr(name, fmt, member) \
static ssize_t name##_show(struct device *dev, \
struct device_attribute *attr, char *buf) \
@@ -77,8 +79,29 @@ static ssize_t util_string_read(struct file *filp, struct kobject *kobj,
sizeof(zdev->util_str));
}
static BIN_ATTR_RO(util_string, CLP_UTIL_STR_LEN);
+
+static ssize_t report_error_write(struct file *filp, struct kobject *kobj,
+ struct bin_attribute *attr, char *buf,
+ loff_t off, size_t count)
+{
+ struct zpci_report_error_header *report = (void *) buf;
+ struct device *dev = kobj_to_dev(kobj);
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct zpci_dev *zdev = to_zpci(pdev);
+ int ret;
+
+ if (off || (count < sizeof(*report)))
+ return -EINVAL;
+
+ ret = sclp_pci_report(report, zdev->fh, zdev->fid);
+
+ return ret ? ret : count;
+}
+static BIN_ATTR(report_error, S_IWUSR, NULL, report_error_write, PAGE_SIZE);
+
static struct bin_attribute *zpci_bin_attrs[] = {
&bin_attr_util_string,
+ &bin_attr_report_error,
NULL,
};
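The new attribute appears as a write-only file in the PCI device's sysfs directory. A minimal userspace sketch of feeding it a report (the device path and payload below are placeholders; the payload must start with the zpci_report_error_header layout expected by sclp_pci_report() on the kernel side):

/* Sketch: write an error report to the new report_error attribute. */
#include <fcntl.h>
#include <unistd.h>

static int report_pci_error(const char *buf, size_t len)
{
	/* illustrative device path */
	int fd = open("/sys/bus/pci/devices/0000:00:00.0/report_error", O_WRONLY);
	ssize_t ret;

	if (fd < 0)
		return -1;
	ret = write(fd, buf, len);	/* kernel returns count on success */
	close(fd);
	return ret < 0 ? -1 : 0;
}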
diff --git a/arch/score/Kconfig b/arch/score/Kconfig
index 366e1b599a7b..507d63181389 100644
--- a/arch/score/Kconfig
+++ b/arch/score/Kconfig
@@ -14,6 +14,7 @@ config SCORE
select VIRT_TO_BUS
select MODULES_USE_ELF_REL
select CLONE_BACKWARDS
+ select CPU_NO_EFFICIENT_FFS
choice
prompt "System type"
diff --git a/arch/score/include/uapi/asm/unistd.h b/arch/score/include/uapi/asm/unistd.h
index 9cb4260a5f3e..d4008c339e89 100644
--- a/arch/score/include/uapi/asm/unistd.h
+++ b/arch/score/include/uapi/asm/unistd.h
@@ -1,5 +1,6 @@
#define __ARCH_HAVE_MMU
+#define __ARCH_WANT_RENAMEAT
#define __ARCH_WANT_SYSCALL_NO_AT
#define __ARCH_WANT_SYSCALL_NO_FLAGS
#define __ARCH_WANT_SYSCALL_OFF_T
diff --git a/arch/score/kernel/process.c b/arch/score/kernel/process.c
index a1519ad3d49d..aae9480706c2 100644
--- a/arch/score/kernel/process.c
+++ b/arch/score/kernel/process.c
@@ -56,8 +56,6 @@ void start_thread(struct pt_regs *regs, unsigned long pc, unsigned long sp)
regs->regs[0] = sp;
}
-void exit_thread(void) {}
-
/*
* When a process does an "exec", machine state like FPU and debug
* registers need to be reset. This is a hook function for that.
diff --git a/arch/sh/Kconfig b/arch/sh/Kconfig
index 7ed20fc3fc81..e803a836cb7c 100644
--- a/arch/sh/Kconfig
+++ b/arch/sh/Kconfig
@@ -20,6 +20,7 @@ config SUPERH
select PERF_USE_VMALLOC
select HAVE_DEBUG_KMEMLEAK
select HAVE_KERNEL_GZIP
+ select CPU_NO_EFFICIENT_FFS
select HAVE_KERNEL_BZIP2
select HAVE_KERNEL_LZMA
select HAVE_KERNEL_XZ
@@ -44,6 +45,7 @@ config SUPERH
select OLD_SIGSUSPEND
select OLD_SIGACTION
select HAVE_ARCH_AUDITSYSCALL
+ select HAVE_NMI
help
The SuperH is a RISC processor targeted for use in embedded systems
and consumer electronics; it was also used in the Sega Dreamcast
@@ -71,6 +73,7 @@ config SUPERH32
config SUPERH64
def_bool ARCH = "sh64"
+ select HAVE_EXIT_THREAD
select KALLSYMS
config ARCH_DEFCONFIG
diff --git a/arch/sh/boards/board-sh7757lcr.c b/arch/sh/boards/board-sh7757lcr.c
index 324599bfad14..0104c8199c48 100644
--- a/arch/sh/boards/board-sh7757lcr.c
+++ b/arch/sh/boards/board-sh7757lcr.c
@@ -20,7 +20,6 @@
#include <linux/mfd/tmio.h>
#include <linux/mmc/host.h>
#include <linux/mmc/sh_mmcif.h>
-#include <linux/mmc/sh_mobile_sdhi.h>
#include <linux/sh_eth.h>
#include <linux/sh_intc.h>
#include <linux/usb/renesas_usbhs.h>
diff --git a/arch/sh/boards/mach-ap325rxa/setup.c b/arch/sh/boards/mach-ap325rxa/setup.c
index 62c3b81300ed..de8393cb7313 100644
--- a/arch/sh/boards/mach-ap325rxa/setup.c
+++ b/arch/sh/boards/mach-ap325rxa/setup.c
@@ -15,7 +15,6 @@
#include <linux/interrupt.h>
#include <linux/platform_device.h>
#include <linux/mmc/host.h>
-#include <linux/mmc/sh_mobile_sdhi.h>
#include <linux/mtd/physmap.h>
#include <linux/mtd/sh_flctl.h>
#include <linux/mfd/tmio.h>
diff --git a/arch/sh/boards/mach-ecovec24/setup.c b/arch/sh/boards/mach-ecovec24/setup.c
index a9c0c07386fd..6d612792f6b8 100644
--- a/arch/sh/boards/mach-ecovec24/setup.c
+++ b/arch/sh/boards/mach-ecovec24/setup.c
@@ -13,7 +13,6 @@
#include <linux/platform_device.h>
#include <linux/mmc/host.h>
#include <linux/mmc/sh_mmcif.h>
-#include <linux/mmc/sh_mobile_sdhi.h>
#include <linux/mtd/physmap.h>
#include <linux/mfd/tmio.h>
#include <linux/gpio.h>
diff --git a/arch/sh/boards/mach-kfr2r09/setup.c b/arch/sh/boards/mach-kfr2r09/setup.c
index 6bd9230e64e3..5deb2d82f19f 100644
--- a/arch/sh/boards/mach-kfr2r09/setup.c
+++ b/arch/sh/boards/mach-kfr2r09/setup.c
@@ -11,7 +11,6 @@
#include <linux/platform_device.h>
#include <linux/interrupt.h>
#include <linux/mmc/host.h>
-#include <linux/mmc/sh_mobile_sdhi.h>
#include <linux/mfd/tmio.h>
#include <linux/mtd/physmap.h>
#include <linux/mtd/onenand.h>
diff --git a/arch/sh/boards/mach-migor/setup.c b/arch/sh/boards/mach-migor/setup.c
index 7a04da3efce4..5de60a77eaa1 100644
--- a/arch/sh/boards/mach-migor/setup.c
+++ b/arch/sh/boards/mach-migor/setup.c
@@ -13,7 +13,6 @@
#include <linux/input.h>
#include <linux/input/sh_keysc.h>
#include <linux/mmc/host.h>
-#include <linux/mmc/sh_mobile_sdhi.h>
#include <linux/mtd/physmap.h>
#include <linux/mfd/tmio.h>
#include <linux/mtd/nand.h>
diff --git a/arch/sh/boards/mach-sdk7786/gpio.c b/arch/sh/boards/mach-sdk7786/gpio.c
index f71ce09d4e15..47997010b77a 100644
--- a/arch/sh/boards/mach-sdk7786/gpio.c
+++ b/arch/sh/boards/mach-sdk7786/gpio.c
@@ -9,7 +9,7 @@
*/
#include <linux/init.h>
#include <linux/interrupt.h>
-#include <linux/gpio.h>
+#include <linux/gpio/driver.h>
#include <linux/irq.h>
#include <linux/kernel.h>
#include <linux/spinlock.h>
@@ -44,6 +44,6 @@ static struct gpio_chip usrgpir_gpio_chip = {
static int __init usrgpir_gpio_setup(void)
{
- return gpiochip_add(&usrgpir_gpio_chip);
+ return gpiochip_add_data(&usrgpir_gpio_chip, NULL);
}
device_initcall(usrgpir_gpio_setup);
diff --git a/arch/sh/boards/mach-se/7724/setup.c b/arch/sh/boards/mach-se/7724/setup.c
index e0e1df136642..f1fecd395679 100644
--- a/arch/sh/boards/mach-se/7724/setup.c
+++ b/arch/sh/boards/mach-se/7724/setup.c
@@ -15,7 +15,6 @@
#include <linux/interrupt.h>
#include <linux/platform_device.h>
#include <linux/mmc/host.h>
-#include <linux/mmc/sh_mobile_sdhi.h>
#include <linux/mfd/tmio.h>
#include <linux/mtd/physmap.h>
#include <linux/delay.h>
diff --git a/arch/sh/boards/mach-x3proto/gpio.c b/arch/sh/boards/mach-x3proto/gpio.c
index 1fb2cbee25f2..cea88b0effa2 100644
--- a/arch/sh/boards/mach-x3proto/gpio.c
+++ b/arch/sh/boards/mach-x3proto/gpio.c
@@ -13,7 +13,7 @@
#include <linux/init.h>
#include <linux/interrupt.h>
-#include <linux/gpio.h>
+#include <linux/gpio/driver.h>
#include <linux/irq.h>
#include <linux/kernel.h>
#include <linux/spinlock.h>
@@ -107,7 +107,7 @@ int __init x3proto_gpio_setup(void)
if (unlikely(ilsel < 0))
return ilsel;
- ret = gpiochip_add(&x3proto_gpio_chip);
+ ret = gpiochip_add_data(&x3proto_gpio_chip, NULL);
if (unlikely(ret))
goto err_gpio;
diff --git a/arch/sh/boot/compressed/Makefile b/arch/sh/boot/compressed/Makefile
index 6df826ee7316..c4c47ea9fa94 100644
--- a/arch/sh/boot/compressed/Makefile
+++ b/arch/sh/boot/compressed/Makefile
@@ -55,7 +55,6 @@ $(addprefix $(obj)/,$(lib1funcs-y)): $(obj)/%: $(lib1funcs-dir)/% FORCE
$(obj)/vmlinux: $(OBJECTS) $(obj)/piggy.o $(lib1funcs-obj) FORCE
$(call if_changed,ld)
- @:
$(obj)/vmlinux.bin: vmlinux FORCE
$(call if_changed,objcopy)
diff --git a/arch/sh/boot/romimage/Makefile b/arch/sh/boot/romimage/Makefile
index 2216ee57f251..43c41191de5d 100644
--- a/arch/sh/boot/romimage/Makefile
+++ b/arch/sh/boot/romimage/Makefile
@@ -17,7 +17,6 @@ LDFLAGS_vmlinux := --oformat $(ld-bfd) -Ttext $(load-y) -e romstart \
$(obj)/vmlinux: $(obj)/head.o $(obj-y) $(obj)/piggy.o FORCE
$(call if_changed,ld)
- @:
OBJCOPYFLAGS += -j .empty_zero_page
diff --git a/arch/sh/configs/apsh4ad0a_defconfig b/arch/sh/configs/apsh4ad0a_defconfig
index a8d975793b6d..fe45d2c9b151 100644
--- a/arch/sh/configs/apsh4ad0a_defconfig
+++ b/arch/sh/configs/apsh4ad0a_defconfig
@@ -10,7 +10,6 @@ CONFIG_CGROUPS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
-CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_MEMCG=y
CONFIG_BLK_CGROUP=y
CONFIG_NAMESPACES=y
diff --git a/arch/sh/configs/sdk7786_defconfig b/arch/sh/configs/sdk7786_defconfig
index e7e56a4131b4..36642ec2cb97 100644
--- a/arch/sh/configs/sdk7786_defconfig
+++ b/arch/sh/configs/sdk7786_defconfig
@@ -17,7 +17,6 @@ CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
# CONFIG_PROC_PID_CPUSET is not set
CONFIG_CGROUP_CPUACCT=y
-CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_MEMCG=y
CONFIG_CGROUP_MEMCG_SWAP=y
CONFIG_CGROUP_SCHED=y
diff --git a/arch/sh/configs/se7206_defconfig b/arch/sh/configs/se7206_defconfig
index 6bc30ab9fd18..91853a67ec34 100644
--- a/arch/sh/configs/se7206_defconfig
+++ b/arch/sh/configs/se7206_defconfig
@@ -10,7 +10,6 @@ CONFIG_CGROUPS=y
CONFIG_CGROUP_DEBUG=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
-CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_MEMCG=y
CONFIG_RELAY=y
CONFIG_NAMESPACES=y
diff --git a/arch/sh/configs/shx3_defconfig b/arch/sh/configs/shx3_defconfig
index cd6c519f8fad..4a4269ad5b04 100644
--- a/arch/sh/configs/shx3_defconfig
+++ b/arch/sh/configs/shx3_defconfig
@@ -12,7 +12,6 @@ CONFIG_CGROUPS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CGROUP_CPUACCT=y
-CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_MEMCG=y
CONFIG_RELAY=y
CONFIG_NAMESPACES=y
diff --git a/arch/sh/configs/urquell_defconfig b/arch/sh/configs/urquell_defconfig
index 1e843dbed5f0..01c9a91ee896 100644
--- a/arch/sh/configs/urquell_defconfig
+++ b/arch/sh/configs/urquell_defconfig
@@ -14,7 +14,6 @@ CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
# CONFIG_PROC_PID_CPUSET is not set
CONFIG_CGROUP_CPUACCT=y
-CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_MEMCG=y
CONFIG_CGROUP_MEMCG_SWAP=y
CONFIG_CGROUP_SCHED=y
diff --git a/arch/sh/include/asm/Kbuild b/arch/sh/include/asm/Kbuild
index a319745a7b63..751c3373a92c 100644
--- a/arch/sh/include/asm/Kbuild
+++ b/arch/sh/include/asm/Kbuild
@@ -26,6 +26,7 @@ generic-y += percpu.h
generic-y += poll.h
generic-y += preempt.h
generic-y += resource.h
+generic-y += rwsem.h
generic-y += sembuf.h
generic-y += serial.h
generic-y += shmbuf.h
diff --git a/arch/sh/include/asm/rwsem.h b/arch/sh/include/asm/rwsem.h
deleted file mode 100644
index edab57265293..000000000000
--- a/arch/sh/include/asm/rwsem.h
+++ /dev/null
@@ -1,132 +0,0 @@
-/*
- * include/asm-sh/rwsem.h: R/W semaphores for SH using the stuff
- * in lib/rwsem.c.
- */
-
-#ifndef _ASM_SH_RWSEM_H
-#define _ASM_SH_RWSEM_H
-
-#ifndef _LINUX_RWSEM_H
-#error "please don't include asm/rwsem.h directly, use linux/rwsem.h instead"
-#endif
-
-#ifdef __KERNEL__
-
-#define RWSEM_UNLOCKED_VALUE 0x00000000
-#define RWSEM_ACTIVE_BIAS 0x00000001
-#define RWSEM_ACTIVE_MASK 0x0000ffff
-#define RWSEM_WAITING_BIAS (-0x00010000)
-#define RWSEM_ACTIVE_READ_BIAS RWSEM_ACTIVE_BIAS
-#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
-
-/*
- * lock for reading
- */
-static inline void __down_read(struct rw_semaphore *sem)
-{
- if (atomic_inc_return((atomic_t *)(&sem->count)) > 0)
- smp_wmb();
- else
- rwsem_down_read_failed(sem);
-}
-
-static inline int __down_read_trylock(struct rw_semaphore *sem)
-{
- int tmp;
-
- while ((tmp = sem->count) >= 0) {
- if (tmp == cmpxchg(&sem->count, tmp,
- tmp + RWSEM_ACTIVE_READ_BIAS)) {
- smp_wmb();
- return 1;
- }
- }
- return 0;
-}
-
-/*
- * lock for writing
- */
-static inline void __down_write(struct rw_semaphore *sem)
-{
- int tmp;
-
- tmp = atomic_add_return(RWSEM_ACTIVE_WRITE_BIAS,
- (atomic_t *)(&sem->count));
- if (tmp == RWSEM_ACTIVE_WRITE_BIAS)
- smp_wmb();
- else
- rwsem_down_write_failed(sem);
-}
-
-static inline int __down_write_trylock(struct rw_semaphore *sem)
-{
- int tmp;
-
- tmp = cmpxchg(&sem->count, RWSEM_UNLOCKED_VALUE,
- RWSEM_ACTIVE_WRITE_BIAS);
- smp_wmb();
- return tmp == RWSEM_UNLOCKED_VALUE;
-}
-
-/*
- * unlock after reading
- */
-static inline void __up_read(struct rw_semaphore *sem)
-{
- int tmp;
-
- smp_wmb();
- tmp = atomic_dec_return((atomic_t *)(&sem->count));
- if (tmp < -1 && (tmp & RWSEM_ACTIVE_MASK) == 0)
- rwsem_wake(sem);
-}
-
-/*
- * unlock after writing
- */
-static inline void __up_write(struct rw_semaphore *sem)
-{
- smp_wmb();
- if (atomic_sub_return(RWSEM_ACTIVE_WRITE_BIAS,
- (atomic_t *)(&sem->count)) < 0)
- rwsem_wake(sem);
-}
-
-/*
- * implement atomic add functionality
- */
-static inline void rwsem_atomic_add(int delta, struct rw_semaphore *sem)
-{
- atomic_add(delta, (atomic_t *)(&sem->count));
-}
-
-/*
- * downgrade write lock to read lock
- */
-static inline void __downgrade_write(struct rw_semaphore *sem)
-{
- int tmp;
-
- smp_wmb();
- tmp = atomic_add_return(-RWSEM_WAITING_BIAS, (atomic_t *)(&sem->count));
- if (tmp < 0)
- rwsem_downgrade_wake(sem);
-}
-
-static inline void __down_write_nested(struct rw_semaphore *sem, int subclass)
-{
- __down_write(sem);
-}
-
-/*
- * implement exchange and add functionality
- */
-static inline int rwsem_atomic_update(int delta, struct rw_semaphore *sem)
-{
- smp_mb();
- return atomic_add_return(delta, (atomic_t *)(&sem->count));
-}
-
-#endif /* __KERNEL__ */
-#endif /* _ASM_SH_RWSEM_H */
diff --git a/arch/sh/kernel/perf_callchain.c b/arch/sh/kernel/perf_callchain.c
index cc80b614b5fa..fa2c0cd23eaa 100644
--- a/arch/sh/kernel/perf_callchain.c
+++ b/arch/sh/kernel/perf_callchain.c
@@ -21,7 +21,7 @@ static int callchain_stack(void *data, char *name)
static void callchain_address(void *data, unsigned long addr, int reliable)
{
- struct perf_callchain_entry *entry = data;
+ struct perf_callchain_entry_ctx *entry = data;
if (reliable)
perf_callchain_store(entry, addr);
@@ -33,7 +33,7 @@ static const struct stacktrace_ops callchain_ops = {
};
void
-perf_callchain_kernel(struct perf_callchain_entry *entry, struct pt_regs *regs)
+perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
{
perf_callchain_store(entry, regs->pc);
diff --git a/arch/sh/kernel/process_32.c b/arch/sh/kernel/process_32.c
index 2885fc9d9dcd..ee12e9451874 100644
--- a/arch/sh/kernel/process_32.c
+++ b/arch/sh/kernel/process_32.c
@@ -76,13 +76,6 @@ void start_thread(struct pt_regs *regs, unsigned long new_pc,
}
EXPORT_SYMBOL(start_thread);
-/*
- * Free current thread data structures etc..
- */
-void exit_thread(void)
-{
-}
-
void flush_thread(void)
{
struct task_struct *tsk = current;
diff --git a/arch/sh/kernel/process_64.c b/arch/sh/kernel/process_64.c
index e2062e643341..9d3e9916555d 100644
--- a/arch/sh/kernel/process_64.c
+++ b/arch/sh/kernel/process_64.c
@@ -288,7 +288,7 @@ void show_regs(struct pt_regs *regs)
/*
* Free current thread data structures etc..
*/
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
{
/*
* See arch/sparc/kernel/process.c for the precedent for doing
@@ -307,9 +307,8 @@ void exit_thread(void)
* which it would get safely nulled.
*/
#ifdef CONFIG_SH_FPU
- if (last_task_used_math == current) {
+ if (last_task_used_math == tsk)
last_task_used_math = NULL;
- }
#endif
}
diff --git a/arch/sh/kernel/vsyscall/vsyscall.c b/arch/sh/kernel/vsyscall/vsyscall.c
index ea2aa1393b87..cc0cc5b4ff18 100644
--- a/arch/sh/kernel/vsyscall/vsyscall.c
+++ b/arch/sh/kernel/vsyscall/vsyscall.c
@@ -64,7 +64,9 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
unsigned long addr;
int ret;
- down_write(&mm->mmap_sem);
+ if (down_write_killable(&mm->mmap_sem))
+ return -EINTR;
+
addr = get_unmapped_area(NULL, 0, PAGE_SIZE, 0, 0);
if (IS_ERR_VALUE(addr)) {
ret = addr;
diff --git a/arch/sparc/Kconfig b/arch/sparc/Kconfig
index 57ffaf285c2f..546293d9e6c5 100644
--- a/arch/sparc/Kconfig
+++ b/arch/sparc/Kconfig
@@ -20,8 +20,8 @@ config SPARC
select HAVE_OPROFILE
select HAVE_ARCH_KGDB if !SMP || SPARC64
select HAVE_ARCH_TRACEHOOK
+ select HAVE_EXIT_THREAD
select SYSCTL_EXCEPTION_TRACE
- select ARCH_WANT_OPTIONAL_GPIOLIB
select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
select RTC_CLASS
select RTC_DRV_M48T59
@@ -32,7 +32,7 @@ config SPARC
select ARCH_WANT_IPC_PARSE_VERSION
select GENERIC_PCI_IOMAP
select HAVE_NMI_WATCHDOG if SPARC64
- select HAVE_BPF_JIT
+ select HAVE_CBPF_JIT
select HAVE_DEBUG_BUGVERBOSE
select GENERIC_SMP_IDLE_THREAD
select GENERIC_CLOCKEVENTS
@@ -42,6 +42,7 @@ config SPARC
select ODD_RT_SIGACTION
select OLD_SIGSUSPEND
select ARCH_HAS_SG_CHAIN
+ select CPU_NO_EFFICIENT_FFS
config SPARC32
def_bool !64BIT
@@ -79,6 +80,7 @@ config SPARC64
select NO_BOOTMEM
select HAVE_ARCH_AUDITSYSCALL
select ARCH_SUPPORTS_ATOMIC_RMW
+ select HAVE_NMI
config ARCH_DEFCONFIG
string
diff --git a/arch/sparc/include/asm/Kbuild b/arch/sparc/include/asm/Kbuild
index e928618838bc..6024c26c0585 100644
--- a/arch/sparc/include/asm/Kbuild
+++ b/arch/sparc/include/asm/Kbuild
@@ -16,6 +16,7 @@ generic-y += mm-arch-hooks.h
generic-y += module.h
generic-y += mutex.h
generic-y += preempt.h
+generic-y += rwsem.h
generic-y += serial.h
generic-y += trace_clock.h
generic-y += types.h
diff --git a/arch/sparc/include/asm/head_32.h b/arch/sparc/include/asm/head_32.h
index 5f1dbe315bc8..6fc60fd182c4 100644
--- a/arch/sparc/include/asm/head_32.h
+++ b/arch/sparc/include/asm/head_32.h
@@ -43,10 +43,10 @@
nop;
#ifdef CONFIG_KGDB
-#define KGDB_TRAP(num) \
- b kgdb_trap_low; \
- rd %psr,%l0; \
- nop; \
+#define KGDB_TRAP(num) \
+ mov num, %l7; \
+ b kgdb_trap_low; \
+ rd %psr,%l0; \
nop;
#else
#define KGDB_TRAP(num) \
diff --git a/arch/sparc/include/asm/kgdb.h b/arch/sparc/include/asm/kgdb.h
index 47366af7a589..a6ad7bf84bac 100644
--- a/arch/sparc/include/asm/kgdb.h
+++ b/arch/sparc/include/asm/kgdb.h
@@ -28,10 +28,10 @@ enum regnames {
#define NUMREGBYTES ((GDB_CSR + 1) * 4)
#else
#define NUMREGBYTES ((GDB_Y + 1) * 8)
+#endif
struct pt_regs;
asmlinkage void kgdb_trap(unsigned long trap_level, struct pt_regs *regs);
-#endif
void arch_kgdb_breakpoint(void);
diff --git a/arch/sparc/include/asm/page_32.h b/arch/sparc/include/asm/page_32.h
index f82a1f36b655..0efd0583a8c9 100644
--- a/arch/sparc/include/asm/page_32.h
+++ b/arch/sparc/include/asm/page_32.h
@@ -69,7 +69,6 @@ typedef struct { unsigned long iopgprot; } iopgprot_t;
#define __pte(x) ((pte_t) { (x) } )
#define __iopte(x) ((iopte_t) { (x) } )
-/* #define __pmd(x) ((pmd_t) { (x) } ) */ /* XXX procedure with loop */
#define __pgd(x) ((pgd_t) { (x) } )
#define __ctxd(x) ((ctxd_t) { (x) } )
#define __pgprot(x) ((pgprot_t) { (x) } )
@@ -97,7 +96,6 @@ typedef unsigned long iopgprot_t;
#define __pte(x) (x)
#define __iopte(x) (x)
-/* #define __pmd(x) (x) */ /* XXX later */
#define __pgd(x) (x)
#define __ctxd(x) (x)
#define __pgprot(x) (x)
diff --git a/arch/sparc/include/asm/pgalloc_32.h b/arch/sparc/include/asm/pgalloc_32.h
index a3890da94428..0346c7e62452 100644
--- a/arch/sparc/include/asm/pgalloc_32.h
+++ b/arch/sparc/include/asm/pgalloc_32.h
@@ -29,9 +29,9 @@ static inline void free_pgd_fast(pgd_t *pgd)
static inline void pgd_set(pgd_t * pgdp, pmd_t * pmdp)
{
- unsigned long pa = __nocache_pa((unsigned long)pmdp);
+ unsigned long pa = __nocache_pa(pmdp);
- set_pte((pte_t *)pgdp, (SRMMU_ET_PTD | (pa >> 4)));
+ set_pte((pte_t *)pgdp, __pte((SRMMU_ET_PTD | (pa >> 4))));
}
#define pgd_populate(MM, PGD, PMD) pgd_set(PGD, PMD)
diff --git a/arch/sparc/include/asm/pgtable_32.h b/arch/sparc/include/asm/pgtable_32.h
index 91b963a887b7..ce6f56980aef 100644
--- a/arch/sparc/include/asm/pgtable_32.h
+++ b/arch/sparc/include/asm/pgtable_32.h
@@ -298,7 +298,7 @@ static inline pte_t mk_pte_io(unsigned long page, pgprot_t pgprot, int space)
#define pgprot_noncached pgprot_noncached
static inline pgprot_t pgprot_noncached(pgprot_t prot)
{
- prot &= ~__pgprot(SRMMU_CACHE);
+ pgprot_val(prot) &= ~pgprot_val(__pgprot(SRMMU_CACHE));
return prot;
}
diff --git a/arch/sparc/include/asm/pgtable_64.h b/arch/sparc/include/asm/pgtable_64.h
index f089cfa249f3..e7d82803a48f 100644
--- a/arch/sparc/include/asm/pgtable_64.h
+++ b/arch/sparc/include/asm/pgtable_64.h
@@ -375,7 +375,7 @@ static inline pgprot_t pgprot_noncached(pgprot_t prot)
#define pgprot_noncached pgprot_noncached
#if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
-static inline pte_t pte_mkhuge(pte_t pte)
+static inline unsigned long __pte_huge_mask(void)
{
unsigned long mask;
@@ -390,8 +390,19 @@ static inline pte_t pte_mkhuge(pte_t pte)
: "=r" (mask)
: "i" (_PAGE_SZHUGE_4U), "i" (_PAGE_SZHUGE_4V));
- return __pte(pte_val(pte) | mask);
+ return mask;
+}
+
+static inline pte_t pte_mkhuge(pte_t pte)
+{
+ return __pte(pte_val(pte) | __pte_huge_mask());
+}
+
+static inline bool is_hugetlb_pte(pte_t pte)
+{
+ return !!(pte_val(pte) & __pte_huge_mask());
}
+
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
static inline pmd_t pmd_mkhuge(pmd_t pmd)
{
@@ -403,6 +414,11 @@ static inline pmd_t pmd_mkhuge(pmd_t pmd)
return __pmd(pte_val(pte));
}
#endif
+#else
+static inline bool is_hugetlb_pte(pte_t pte)
+{
+ return false;
+}
#endif
static inline pte_t pte_mkdirty(pte_t pte)
@@ -681,8 +697,6 @@ static inline unsigned long pmd_trans_huge(pmd_t pmd)
return pte_val(pte) & _PAGE_PMD_HUGE;
}
-#define has_transparent_hugepage() 1
-
static inline pmd_t pmd_mkold(pmd_t pmd)
{
pte_t pte = __pte(pmd_val(pmd));
@@ -858,6 +872,19 @@ static inline unsigned long pud_pfn(pud_t pud)
void tlb_batch_add(struct mm_struct *mm, unsigned long vaddr,
pte_t *ptep, pte_t orig, int fullmm);
+static void maybe_tlb_batch_add(struct mm_struct *mm, unsigned long vaddr,
+ pte_t *ptep, pte_t orig, int fullmm)
+{
+ /* It is more efficient to let flush_tlb_kernel_range()
+ * handle init_mm tlb flushes.
+ *
+ * SUN4V NOTE: _PAGE_VALID is the same value in both the SUN4U
+ * and SUN4V pte layout, so this inline test is fine.
+ */
+ if (likely(mm != &init_mm) && pte_accessible(mm, orig))
+ tlb_batch_add(mm, vaddr, ptep, orig, fullmm);
+}
+
#define __HAVE_ARCH_PMDP_HUGE_GET_AND_CLEAR
static inline pmd_t pmdp_huge_get_and_clear(struct mm_struct *mm,
unsigned long addr,
@@ -874,15 +901,7 @@ static inline void __set_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t orig = *ptep;
*ptep = pte;
-
- /* It is more efficient to let flush_tlb_kernel_range()
- * handle init_mm tlb flushes.
- *
- * SUN4V NOTE: _PAGE_VALID is the same value in both the SUN4U
- * and SUN4V pte layout, so this inline test is fine.
- */
- if (likely(mm != &init_mm) && pte_accessible(mm, orig))
- tlb_batch_add(mm, addr, ptep, orig, fullmm);
+ maybe_tlb_batch_add(mm, addr, ptep, orig, fullmm);
}
#define set_pte_at(mm,addr,ptep,pte) \
diff --git a/arch/sparc/include/asm/rwsem.h b/arch/sparc/include/asm/rwsem.h
deleted file mode 100644
index 069bf4d663a1..000000000000
--- a/arch/sparc/include/asm/rwsem.h
+++ /dev/null
@@ -1,124 +0,0 @@
-/*
- * rwsem.h: R/W semaphores implemented using CAS
- *
- * Written by David S. Miller (davem@redhat.com), 2001.
- * Derived from asm-i386/rwsem.h
- */
-#ifndef _SPARC64_RWSEM_H
-#define _SPARC64_RWSEM_H
-
-#ifndef _LINUX_RWSEM_H
-#error "please don't include asm/rwsem.h directly, use linux/rwsem.h instead"
-#endif
-
-#ifdef __KERNEL__
-
-#define RWSEM_UNLOCKED_VALUE 0x00000000L
-#define RWSEM_ACTIVE_BIAS 0x00000001L
-#define RWSEM_ACTIVE_MASK 0xffffffffL
-#define RWSEM_WAITING_BIAS (-RWSEM_ACTIVE_MASK-1)
-#define RWSEM_ACTIVE_READ_BIAS RWSEM_ACTIVE_BIAS
-#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
-
-/*
- * lock for reading
- */
-static inline void __down_read(struct rw_semaphore *sem)
-{
- if (unlikely(atomic64_inc_return((atomic64_t *)(&sem->count)) <= 0L))
- rwsem_down_read_failed(sem);
-}
-
-static inline int __down_read_trylock(struct rw_semaphore *sem)
-{
- long tmp;
-
- while ((tmp = sem->count) >= 0L) {
- if (tmp == cmpxchg(&sem->count, tmp,
- tmp + RWSEM_ACTIVE_READ_BIAS)) {
- return 1;
- }
- }
- return 0;
-}
-
-/*
- * lock for writing
- */
-static inline void __down_write_nested(struct rw_semaphore *sem, int subclass)
-{
- long tmp;
-
- tmp = atomic64_add_return(RWSEM_ACTIVE_WRITE_BIAS,
- (atomic64_t *)(&sem->count));
- if (unlikely(tmp != RWSEM_ACTIVE_WRITE_BIAS))
- rwsem_down_write_failed(sem);
-}
-
-static inline void __down_write(struct rw_semaphore *sem)
-{
- __down_write_nested(sem, 0);
-}
-
-static inline int __down_write_trylock(struct rw_semaphore *sem)
-{
- long tmp;
-
- tmp = cmpxchg(&sem->count, RWSEM_UNLOCKED_VALUE,
- RWSEM_ACTIVE_WRITE_BIAS);
- return tmp == RWSEM_UNLOCKED_VALUE;
-}
-
-/*
- * unlock after reading
- */
-static inline void __up_read(struct rw_semaphore *sem)
-{
- long tmp;
-
- tmp = atomic64_dec_return((atomic64_t *)(&sem->count));
- if (unlikely(tmp < -1L && (tmp & RWSEM_ACTIVE_MASK) == 0L))
- rwsem_wake(sem);
-}
-
-/*
- * unlock after writing
- */
-static inline void __up_write(struct rw_semaphore *sem)
-{
- if (unlikely(atomic64_sub_return(RWSEM_ACTIVE_WRITE_BIAS,
- (atomic64_t *)(&sem->count)) < 0L))
- rwsem_wake(sem);
-}
-
-/*
- * implement atomic add functionality
- */
-static inline void rwsem_atomic_add(long delta, struct rw_semaphore *sem)
-{
- atomic64_add(delta, (atomic64_t *)(&sem->count));
-}
-
-/*
- * downgrade write lock to read lock
- */
-static inline void __downgrade_write(struct rw_semaphore *sem)
-{
- long tmp;
-
- tmp = atomic64_add_return(-RWSEM_WAITING_BIAS, (atomic64_t *)(&sem->count));
- if (tmp < 0L)
- rwsem_downgrade_wake(sem);
-}
-
-/*
- * implement exchange and add functionality
- */
-static inline long rwsem_atomic_update(long delta, struct rw_semaphore *sem)
-{
- return atomic64_add_return(delta, (atomic64_t *)(&sem->count));
-}
-
-#endif /* __KERNEL__ */
-
-#endif /* _SPARC64_RWSEM_H */
diff --git a/arch/sparc/include/asm/tlbflush_64.h b/arch/sparc/include/asm/tlbflush_64.h
index dea1cfa2122b..a8e192e90700 100644
--- a/arch/sparc/include/asm/tlbflush_64.h
+++ b/arch/sparc/include/asm/tlbflush_64.h
@@ -8,6 +8,7 @@
#define TLB_BATCH_NR 192
struct tlb_batch {
+ bool huge;
struct mm_struct *mm;
unsigned long tlb_nr;
unsigned long active;
@@ -16,7 +17,7 @@ struct tlb_batch {
void flush_tsb_kernel_range(unsigned long start, unsigned long end);
void flush_tsb_user(struct tlb_batch *tb);
-void flush_tsb_user_page(struct mm_struct *mm, unsigned long vaddr);
+void flush_tsb_user_page(struct mm_struct *mm, unsigned long vaddr, bool huge);
/* TLB flush operations. */
diff --git a/arch/sparc/kernel/entry.S b/arch/sparc/kernel/entry.S
index 51aa6e86a5f8..07918ab3062e 100644
--- a/arch/sparc/kernel/entry.S
+++ b/arch/sparc/kernel/entry.S
@@ -1225,20 +1225,18 @@ breakpoint_trap:
RESTORE_ALL
#ifdef CONFIG_KGDB
- .align 4
- .globl kgdb_trap_low
- .type kgdb_trap_low,#function
-kgdb_trap_low:
+ ENTRY(kgdb_trap_low)
rd %wim,%l3
SAVE_ALL
wr %l0, PSR_ET, %psr
WRITE_PAUSE
+ mov %l7, %o0 ! trap_level
call kgdb_trap
- add %sp, STACKFRAME_SZ, %o0
+ add %sp, STACKFRAME_SZ, %o1 ! struct pt_regs *regs
RESTORE_ALL
- .size kgdb_trap_low,.-kgdb_trap_low
+ ENDPROC(kgdb_trap_low)
#endif
.align 4
diff --git a/arch/sparc/kernel/kernel.h b/arch/sparc/kernel/kernel.h
index 5057ec2e4af6..c9804551262c 100644
--- a/arch/sparc/kernel/kernel.h
+++ b/arch/sparc/kernel/kernel.h
@@ -127,6 +127,7 @@ extern unsigned int t_nmi[];
extern unsigned int linux_trap_ipi15_sun4d[];
extern unsigned int linux_trap_ipi15_sun4m[];
+extern struct tt_entry trapbase;
extern struct tt_entry trapbase_cpu1;
extern struct tt_entry trapbase_cpu2;
extern struct tt_entry trapbase_cpu3;
diff --git a/arch/sparc/kernel/kgdb_32.c b/arch/sparc/kernel/kgdb_32.c
index dcf210811af4..6e8e318c57be 100644
--- a/arch/sparc/kernel/kgdb_32.c
+++ b/arch/sparc/kernel/kgdb_32.c
@@ -12,7 +12,8 @@
#include <asm/irq.h>
#include <asm/cacheflush.h>
-extern unsigned long trapbase;
+#include "kernel.h"
+#include "entry.h"
void pt_regs_to_gdb_regs(unsigned long *gdb_regs, struct pt_regs *regs)
{
@@ -133,21 +134,19 @@ int kgdb_arch_handle_exception(int e_vector, int signo, int err_code,
return -1;
}
-extern void do_hw_interrupt(struct pt_regs *regs, unsigned long type);
-
-asmlinkage void kgdb_trap(struct pt_regs *regs)
+asmlinkage void kgdb_trap(unsigned long trap_level, struct pt_regs *regs)
{
unsigned long flags;
if (user_mode(regs)) {
- do_hw_interrupt(regs, 0xfd);
+ do_hw_interrupt(regs, trap_level);
return;
}
flushw_all();
local_irq_save(flags);
- kgdb_handle_exception(0x172, SIGTRAP, 0, regs);
+ kgdb_handle_exception(trap_level, SIGTRAP, 0, regs);
local_irq_restore(flags);
}
diff --git a/arch/sparc/kernel/perf_event.c b/arch/sparc/kernel/perf_event.c
index 6596f66ce112..710f3278d448 100644
--- a/arch/sparc/kernel/perf_event.c
+++ b/arch/sparc/kernel/perf_event.c
@@ -1711,7 +1711,7 @@ static int __init init_hw_perf_events(void)
}
pure_initcall(init_hw_perf_events);
-void perf_callchain_kernel(struct perf_callchain_entry *entry,
+void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
struct pt_regs *regs)
{
unsigned long ksp, fp;
@@ -1756,7 +1756,7 @@ void perf_callchain_kernel(struct perf_callchain_entry *entry,
}
}
#endif
- } while (entry->nr < PERF_MAX_STACK_DEPTH);
+ } while (entry->nr < entry->max_stack);
}
static inline int
@@ -1769,7 +1769,7 @@ valid_user_frame(const void __user *fp, unsigned long size)
return (__range_not_ok(fp, size, TASK_SIZE) == 0);
}
-static void perf_callchain_user_64(struct perf_callchain_entry *entry,
+static void perf_callchain_user_64(struct perf_callchain_entry_ctx *entry,
struct pt_regs *regs)
{
unsigned long ufp;
@@ -1790,10 +1790,10 @@ static void perf_callchain_user_64(struct perf_callchain_entry *entry,
pc = sf.callers_pc;
ufp = (unsigned long)sf.fp + STACK_BIAS;
perf_callchain_store(entry, pc);
- } while (entry->nr < PERF_MAX_STACK_DEPTH);
+ } while (entry->nr < entry->max_stack);
}
-static void perf_callchain_user_32(struct perf_callchain_entry *entry,
+static void perf_callchain_user_32(struct perf_callchain_entry_ctx *entry,
struct pt_regs *regs)
{
unsigned long ufp;
@@ -1822,11 +1822,11 @@ static void perf_callchain_user_32(struct perf_callchain_entry *entry,
ufp = (unsigned long)sf.fp;
}
perf_callchain_store(entry, pc);
- } while (entry->nr < PERF_MAX_STACK_DEPTH);
+ } while (entry->nr < entry->max_stack);
}
void
-perf_callchain_user(struct perf_callchain_entry *entry, struct pt_regs *regs)
+perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
{
u64 saved_fault_address = current_thread_info()->fault_address;
u8 saved_fault_code = get_thread_fault_code();
diff --git a/arch/sparc/kernel/process_32.c b/arch/sparc/kernel/process_32.c
index c5113c7ce2fd..b7780a5bef11 100644
--- a/arch/sparc/kernel/process_32.c
+++ b/arch/sparc/kernel/process_32.c
@@ -184,21 +184,21 @@ unsigned long thread_saved_pc(struct task_struct *tsk)
/*
* Free current thread data structures etc..
*/
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
{
#ifndef CONFIG_SMP
- if(last_task_used_math == current) {
+ if (last_task_used_math == tsk) {
#else
- if (test_thread_flag(TIF_USEDFPU)) {
+ if (test_ti_thread_flag(task_thread_info(tsk), TIF_USEDFPU)) {
#endif
/* Keep process from leaving FPU in a bogon state. */
put_psr(get_psr() | PSR_EF);
- fpsave(&current->thread.float_regs[0], &current->thread.fsr,
- &current->thread.fpqueue[0], &current->thread.fpqdepth);
+ fpsave(&tsk->thread.float_regs[0], &tsk->thread.fsr,
+ &tsk->thread.fpqueue[0], &tsk->thread.fpqdepth);
#ifndef CONFIG_SMP
last_task_used_math = NULL;
#else
- clear_thread_flag(TIF_USEDFPU);
+ clear_ti_thread_flag(task_thread_info(tsk), TIF_USEDFPU);
#endif
}
}
diff --git a/arch/sparc/kernel/process_64.c b/arch/sparc/kernel/process_64.c
index c16ef1af1843..fa14402b33f9 100644
--- a/arch/sparc/kernel/process_64.c
+++ b/arch/sparc/kernel/process_64.c
@@ -417,9 +417,9 @@ unsigned long thread_saved_pc(struct task_struct *tsk)
}
/* Free current thread data structures etc.. */
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
{
- struct thread_info *t = current_thread_info();
+ struct thread_info *t = task_thread_info(tsk);
if (t->utraps) {
if (t->utraps[0] < 2)
diff --git a/arch/sparc/kernel/setup_32.c b/arch/sparc/kernel/setup_32.c
index 69d75ff1c25c..c4e65cb3280f 100644
--- a/arch/sparc/kernel/setup_32.c
+++ b/arch/sparc/kernel/setup_32.c
@@ -68,8 +68,6 @@ struct screen_info screen_info = {
* prints out pretty messages and returns.
*/
-extern unsigned long trapbase;
-
/* Pretty sick eh? */
static void prom_sync_me(void)
{
@@ -300,7 +298,7 @@ void __init setup_arch(char **cmdline_p)
int i;
unsigned long highest_paddr;
- sparc_ttable = (struct tt_entry *) &trapbase;
+ sparc_ttable = &trapbase;
/* Initialize PROM console and command line. */
*cmdline_p = prom_getbootargs();
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index 4977800e9770..ba52e6466a82 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -176,17 +176,31 @@ void set_huge_pte_at(struct mm_struct *mm, unsigned long addr,
pte_t *ptep, pte_t entry)
{
int i;
+ pte_t orig[2];
+ unsigned long nptes;
if (!pte_present(*ptep) && pte_present(entry))
mm->context.huge_pte_count++;
addr &= HPAGE_MASK;
- for (i = 0; i < (1 << HUGETLB_PAGE_ORDER); i++) {
- set_pte_at(mm, addr, ptep, entry);
+
+ nptes = 1 << HUGETLB_PAGE_ORDER;
+ orig[0] = *ptep;
+ orig[1] = *(ptep + nptes / 2);
+ for (i = 0; i < nptes; i++) {
+ *ptep = entry;
ptep++;
addr += PAGE_SIZE;
pte_val(entry) += PAGE_SIZE;
}
+
+ /* Issue TLB flush at REAL_HPAGE_SIZE boundaries */
+ addr -= REAL_HPAGE_SIZE;
+ ptep -= nptes / 2;
+ maybe_tlb_batch_add(mm, addr, ptep, orig[1], 0);
+ addr -= REAL_HPAGE_SIZE;
+ ptep -= nptes / 2;
+ maybe_tlb_batch_add(mm, addr, ptep, orig[0], 0);
}
pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
@@ -194,19 +208,28 @@ pte_t huge_ptep_get_and_clear(struct mm_struct *mm, unsigned long addr,
{
pte_t entry;
int i;
+ unsigned long nptes;
entry = *ptep;
if (pte_present(entry))
mm->context.huge_pte_count--;
addr &= HPAGE_MASK;
-
- for (i = 0; i < (1 << HUGETLB_PAGE_ORDER); i++) {
- pte_clear(mm, addr, ptep);
+ nptes = 1 << HUGETLB_PAGE_ORDER;
+ for (i = 0; i < nptes; i++) {
+ *ptep = __pte(0UL);
addr += PAGE_SIZE;
ptep++;
}
+ /* Issue TLB flush at REAL_HPAGE_SIZE boundaries */
+ addr -= REAL_HPAGE_SIZE;
+ ptep -= nptes / 2;
+ maybe_tlb_batch_add(mm, addr, ptep, entry, 0);
+ addr -= REAL_HPAGE_SIZE;
+ ptep -= nptes / 2;
+ maybe_tlb_batch_add(mm, addr, ptep, entry, 0);
+
return entry;
}
diff --git a/arch/sparc/mm/init_64.c b/arch/sparc/mm/init_64.c
index 09e838801e39..652683cb4b4b 100644
--- a/arch/sparc/mm/init_64.c
+++ b/arch/sparc/mm/init_64.c
@@ -324,18 +324,6 @@ static void __update_mmu_tsb_insert(struct mm_struct *mm, unsigned long tsb_inde
tsb_insert(tsb, tag, tte);
}
-#if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
-static inline bool is_hugetlb_pte(pte_t pte)
-{
- if ((tlb_type == hypervisor &&
- (pte_val(pte) & _PAGE_SZALL_4V) == _PAGE_SZHUGE_4V) ||
- (tlb_type != hypervisor &&
- (pte_val(pte) & _PAGE_SZALL_4U) == _PAGE_SZHUGE_4U))
- return true;
- return false;
-}
-#endif
-
void update_mmu_cache(struct vm_area_struct *vma, unsigned long address, pte_t *ptep)
{
struct mm_struct *mm;
diff --git a/arch/sparc/mm/io-unit.c b/arch/sparc/mm/io-unit.c
index f311bf219016..338fb71535de 100644
--- a/arch/sparc/mm/io-unit.c
+++ b/arch/sparc/mm/io-unit.c
@@ -133,7 +133,7 @@ nexti: scan = find_next_zero_bit(iounit->bmap, limit, scan);
vaddr = IOUNIT_DMA_BASE + (scan << PAGE_SHIFT) + (vaddr & ~PAGE_MASK);
for (k = 0; k < npages; k++, iopte = __iopte(iopte_val(iopte) + 0x100), scan++) {
set_bit(scan, iounit->bmap);
- sbus_writel(iopte, &iounit->page_table[scan]);
+ sbus_writel(iopte_val(iopte), &iounit->page_table[scan]);
}
IOD(("%08lx\n", vaddr));
return vaddr;
@@ -228,7 +228,7 @@ static int iounit_map_dma_area(struct device *dev, dma_addr_t *pba, unsigned lon
i = ((addr - IOUNIT_DMA_BASE) >> PAGE_SHIFT);
iopte = iounit->page_table + i;
- sbus_writel(MKIOPTE(__pa(page)), iopte);
+ sbus_writel(iopte_val(MKIOPTE(__pa(page))), iopte);
}
addr += PAGE_SIZE;
va += PAGE_SIZE;
diff --git a/arch/sparc/mm/srmmu.c b/arch/sparc/mm/srmmu.c
index 5cbc96d801ff..c7f2a5295b3a 100644
--- a/arch/sparc/mm/srmmu.c
+++ b/arch/sparc/mm/srmmu.c
@@ -107,17 +107,22 @@ static inline int srmmu_pmd_none(pmd_t pmd)
/* XXX should we hyper_flush_whole_icache here - Anton */
static inline void srmmu_ctxd_set(ctxd_t *ctxp, pgd_t *pgdp)
-{ set_pte((pte_t *)ctxp, (SRMMU_ET_PTD | (__nocache_pa((unsigned long) pgdp) >> 4))); }
+{
+ pte_t pte;
+
+ pte = __pte((SRMMU_ET_PTD | (__nocache_pa(pgdp) >> 4)));
+ set_pte((pte_t *)ctxp, pte);
+}
void pmd_set(pmd_t *pmdp, pte_t *ptep)
{
unsigned long ptp; /* Physical address, shifted right by 4 */
int i;
- ptp = __nocache_pa((unsigned long) ptep) >> 4;
+ ptp = __nocache_pa(ptep) >> 4;
for (i = 0; i < PTRS_PER_PTE/SRMMU_REAL_PTRS_PER_PTE; i++) {
- set_pte((pte_t *)&pmdp->pmdv[i], SRMMU_ET_PTD | ptp);
- ptp += (SRMMU_REAL_PTRS_PER_PTE*sizeof(pte_t) >> 4);
+ set_pte((pte_t *)&pmdp->pmdv[i], __pte(SRMMU_ET_PTD | ptp));
+ ptp += (SRMMU_REAL_PTRS_PER_PTE * sizeof(pte_t) >> 4);
}
}
@@ -128,8 +133,8 @@ void pmd_populate(struct mm_struct *mm, pmd_t *pmdp, struct page *ptep)
ptp = page_to_pfn(ptep) << (PAGE_SHIFT-4); /* watch for overflow */
for (i = 0; i < PTRS_PER_PTE/SRMMU_REAL_PTRS_PER_PTE; i++) {
- set_pte((pte_t *)&pmdp->pmdv[i], SRMMU_ET_PTD | ptp);
- ptp += (SRMMU_REAL_PTRS_PER_PTE*sizeof(pte_t) >> 4);
+ set_pte((pte_t *)&pmdp->pmdv[i], __pte(SRMMU_ET_PTD | ptp));
+ ptp += (SRMMU_REAL_PTRS_PER_PTE * sizeof(pte_t) >> 4);
}
}
@@ -911,7 +916,7 @@ void __init srmmu_paging_init(void)
/* ctx table has to be physically aligned to its size */
srmmu_context_table = __srmmu_get_nocache(num_contexts * sizeof(ctxd_t), num_contexts * sizeof(ctxd_t));
- srmmu_ctx_table_phys = (ctxd_t *)__nocache_pa((unsigned long)srmmu_context_table);
+ srmmu_ctx_table_phys = (ctxd_t *)__nocache_pa(srmmu_context_table);
for (i = 0; i < num_contexts; i++)
srmmu_ctxd_set((ctxd_t *)__nocache_fix(&srmmu_context_table[i]), srmmu_swapper_pg_dir);
diff --git a/arch/sparc/mm/tlb.c b/arch/sparc/mm/tlb.c
index 9df2190c097e..f81cd9736700 100644
--- a/arch/sparc/mm/tlb.c
+++ b/arch/sparc/mm/tlb.c
@@ -67,7 +67,7 @@ void arch_leave_lazy_mmu_mode(void)
}
static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr,
- bool exec)
+ bool exec, bool huge)
{
struct tlb_batch *tb = &get_cpu_var(tlb_batch);
unsigned long nr;
@@ -84,13 +84,21 @@ static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr,
}
if (!tb->active) {
- flush_tsb_user_page(mm, vaddr);
+ flush_tsb_user_page(mm, vaddr, huge);
global_flush_tlb_page(mm, vaddr);
goto out;
}
- if (nr == 0)
+ if (nr == 0) {
tb->mm = mm;
+ tb->huge = huge;
+ }
+
+ if (tb->huge != huge) {
+ flush_tlb_pending();
+ tb->huge = huge;
+ nr = 0;
+ }
tb->vaddrs[nr] = vaddr;
tb->tlb_nr = ++nr;
@@ -104,6 +112,8 @@ out:
void tlb_batch_add(struct mm_struct *mm, unsigned long vaddr,
pte_t *ptep, pte_t orig, int fullmm)
{
+ bool huge = is_hugetlb_pte(orig);
+
if (tlb_type != hypervisor &&
pte_dirty(orig)) {
unsigned long paddr, pfn = pte_pfn(orig);
@@ -129,7 +139,7 @@ void tlb_batch_add(struct mm_struct *mm, unsigned long vaddr,
no_cache_flush:
if (!fullmm)
- tlb_batch_add_one(mm, vaddr, pte_exec(orig));
+ tlb_batch_add_one(mm, vaddr, pte_exec(orig), huge);
}
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -145,7 +155,7 @@ static void tlb_batch_pmd_scan(struct mm_struct *mm, unsigned long vaddr,
if (pte_val(*pte) & _PAGE_VALID) {
bool exec = pte_exec(*pte);
- tlb_batch_add_one(mm, vaddr, exec);
+ tlb_batch_add_one(mm, vaddr, exec, false);
}
pte++;
vaddr += PAGE_SIZE;
@@ -185,8 +195,9 @@ void set_pmd_at(struct mm_struct *mm, unsigned long addr,
pte_t orig_pte = __pte(pmd_val(orig));
bool exec = pte_exec(orig_pte);
- tlb_batch_add_one(mm, addr, exec);
- tlb_batch_add_one(mm, addr + REAL_HPAGE_SIZE, exec);
+ tlb_batch_add_one(mm, addr, exec, true);
+ tlb_batch_add_one(mm, addr + REAL_HPAGE_SIZE, exec,
+ true);
} else {
tlb_batch_pmd_scan(mm, addr, orig);
}
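[Editorial note: the tlb.c hunks above keep base-page and huge-page addresses out of the same flush batch: the batch records the page-size class of its first entry and is drained early whenever an entry of the other class arrives. The following stand-alone sketch models only that bookkeeping; flush_all() is a made-up stand-in for flush_tlb_pending() and the per-page flush paths, not the kernel implementation.]

#include <stdbool.h>
#include <stdio.h>

#define BATCH_MAX 8

struct batch {
	unsigned long vaddrs[BATCH_MAX];
	unsigned long nr;
	bool huge;	/* page-size class of the entries queued so far */
};

static void flush_all(struct batch *b)
{
	if (b->nr)
		printf("flush %lu %s entries\n", b->nr,
		       b->huge ? "huge" : "base");
	b->nr = 0;
}

static void batch_add(struct batch *b, unsigned long vaddr, bool huge)
{
	if (b->nr == 0)
		b->huge = huge;		/* first entry fixes the class */

	if (b->huge != huge) {		/* class changed: drain, restart */
		flush_all(b);
		b->huge = huge;
	}

	b->vaddrs[b->nr++] = vaddr;
	if (b->nr == BATCH_MAX)
		flush_all(b);
}

int main(void)
{
	struct batch b = { .nr = 0 };

	batch_add(&b, 0x1000, false);
	batch_add(&b, 0x2000, false);
	batch_add(&b, 0x400000, true);	/* drains the base-page batch first */
	flush_all(&b);
	return 0;
}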
diff --git a/arch/sparc/mm/tsb.c b/arch/sparc/mm/tsb.c
index a06576683c38..a0604a493a36 100644
--- a/arch/sparc/mm/tsb.c
+++ b/arch/sparc/mm/tsb.c
@@ -76,14 +76,15 @@ void flush_tsb_user(struct tlb_batch *tb)
spin_lock_irqsave(&mm->context.lock, flags);
- base = (unsigned long) mm->context.tsb_block[MM_TSB_BASE].tsb;
- nentries = mm->context.tsb_block[MM_TSB_BASE].tsb_nentries;
- if (tlb_type == cheetah_plus || tlb_type == hypervisor)
- base = __pa(base);
- __flush_tsb_one(tb, PAGE_SHIFT, base, nentries);
-
+ if (!tb->huge) {
+ base = (unsigned long) mm->context.tsb_block[MM_TSB_BASE].tsb;
+ nentries = mm->context.tsb_block[MM_TSB_BASE].tsb_nentries;
+ if (tlb_type == cheetah_plus || tlb_type == hypervisor)
+ base = __pa(base);
+ __flush_tsb_one(tb, PAGE_SHIFT, base, nentries);
+ }
#if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
- if (mm->context.tsb_block[MM_TSB_HUGE].tsb) {
+ if (tb->huge && mm->context.tsb_block[MM_TSB_HUGE].tsb) {
base = (unsigned long) mm->context.tsb_block[MM_TSB_HUGE].tsb;
nentries = mm->context.tsb_block[MM_TSB_HUGE].tsb_nentries;
if (tlb_type == cheetah_plus || tlb_type == hypervisor)
@@ -94,20 +95,21 @@ void flush_tsb_user(struct tlb_batch *tb)
spin_unlock_irqrestore(&mm->context.lock, flags);
}
-void flush_tsb_user_page(struct mm_struct *mm, unsigned long vaddr)
+void flush_tsb_user_page(struct mm_struct *mm, unsigned long vaddr, bool huge)
{
unsigned long nentries, base, flags;
spin_lock_irqsave(&mm->context.lock, flags);
- base = (unsigned long) mm->context.tsb_block[MM_TSB_BASE].tsb;
- nentries = mm->context.tsb_block[MM_TSB_BASE].tsb_nentries;
- if (tlb_type == cheetah_plus || tlb_type == hypervisor)
- base = __pa(base);
- __flush_tsb_one_entry(base, vaddr, PAGE_SHIFT, nentries);
-
+ if (!huge) {
+ base = (unsigned long) mm->context.tsb_block[MM_TSB_BASE].tsb;
+ nentries = mm->context.tsb_block[MM_TSB_BASE].tsb_nentries;
+ if (tlb_type == cheetah_plus || tlb_type == hypervisor)
+ base = __pa(base);
+ __flush_tsb_one_entry(base, vaddr, PAGE_SHIFT, nentries);
+ }
#if defined(CONFIG_HUGETLB_PAGE) || defined(CONFIG_TRANSPARENT_HUGEPAGE)
- if (mm->context.tsb_block[MM_TSB_HUGE].tsb) {
+ if (huge && mm->context.tsb_block[MM_TSB_HUGE].tsb) {
base = (unsigned long) mm->context.tsb_block[MM_TSB_HUGE].tsb;
nentries = mm->context.tsb_block[MM_TSB_HUGE].tsb_nentries;
if (tlb_type == cheetah_plus || tlb_type == hypervisor)
diff --git a/arch/tile/Kconfig b/arch/tile/Kconfig
index 81719302b056..4820a02838ac 100644
--- a/arch/tile/Kconfig
+++ b/arch/tile/Kconfig
@@ -3,47 +3,38 @@
config TILE
def_bool y
- select HAVE_PERF_EVENTS
- select USE_PMC if PERF_EVENTS
- select HAVE_DMA_API_DEBUG
- select HAVE_KVM if !TILEGX
- select GENERIC_FIND_FIRST_BIT
- select SYSCTL_EXCEPTION_TRACE
- select CC_OPTIMIZE_FOR_SIZE
- select HAVE_DEBUG_KMEMLEAK
- select GENERIC_IRQ_PROBE
- select GENERIC_PENDING_IRQ if SMP
- select GENERIC_IRQ_SHOW
- select HAVE_DEBUG_BUGVERBOSE
- select VIRT_TO_BUS
- select SYS_HYPERVISOR
+ select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
select ARCH_HAS_DEBUG_STRICT_USER_COPY_CHECKS
select ARCH_HAS_DEVMEM_IS_ALLOWED
select ARCH_HAVE_NMI_SAFE_CMPXCHG
- select GENERIC_CLOCKEVENTS
- select MODULES_USE_ELF_RELA
- select HAVE_ARCH_TRACEHOOK
- select HAVE_SYSCALL_TRACEPOINTS
- select USER_STACKTRACE_SUPPORT
- select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
- select HAVE_DEBUG_STACKOVERFLOW
select ARCH_WANT_FRAME_POINTERS
- select HAVE_CONTEXT_TRACKING
+ select CC_OPTIMIZE_FOR_SIZE
select EDAC_SUPPORT
+ select GENERIC_CLOCKEVENTS
+ select GENERIC_FIND_FIRST_BIT
+ select GENERIC_IRQ_PROBE
+ select GENERIC_IRQ_SHOW
+ select GENERIC_PENDING_IRQ if SMP
select GENERIC_STRNCPY_FROM_USER
select GENERIC_STRNLEN_USER
select HAVE_ARCH_SECCOMP_FILTER
-
-# FIXME: investigate whether we need/want these options.
-# select HAVE_IOREMAP_PROT
-# select HAVE_OPTPROBES
-# select HAVE_REGS_AND_STACK_ACCESS_API
-# select HAVE_HW_BREAKPOINT
-# select PERF_EVENTS
-# select HAVE_USER_RETURN_NOTIFIER
-# config NO_BOOTMEM
-# config ARCH_SUPPORTS_DEBUG_PAGEALLOC
-# config HUGETLB_PAGE_SIZE_VARIABLE
+ select HAVE_ARCH_TRACEHOOK
+ select HAVE_CONTEXT_TRACKING
+ select HAVE_DEBUG_BUGVERBOSE
+ select HAVE_DEBUG_KMEMLEAK
+ select HAVE_DEBUG_STACKOVERFLOW
+ select HAVE_DMA_API_DEBUG
+ select HAVE_EXIT_THREAD
+ select HAVE_KVM if !TILEGX
+ select HAVE_NMI if USE_PMC
+ select HAVE_PERF_EVENTS
+ select HAVE_SYSCALL_TRACEPOINTS
+ select MODULES_USE_ELF_RELA
+ select SYSCTL_EXCEPTION_TRACE
+ select SYS_HYPERVISOR
+ select USER_STACKTRACE_SUPPORT
+ select USE_PMC if PERF_EVENTS
+ select VIRT_TO_BUS
config MMU
def_bool y
@@ -130,17 +121,17 @@ config HVC_TILE
# 64-bit TILE-Gx toolchain, so force CONFIG_TILEGX on.
config TILEGX
def_bool ARCH != "tilepro"
- select SPARSE_IRQ
+ select ARCH_SUPPORTS_ATOMIC_RMW
select GENERIC_IRQ_LEGACY_ALLOC_HWIRQ
- select HAVE_FUNCTION_TRACER
- select HAVE_FUNCTION_GRAPH_TRACER
+ select HAVE_ARCH_JUMP_LABEL
+ select HAVE_ARCH_KGDB
select HAVE_DYNAMIC_FTRACE
select HAVE_FTRACE_MCOUNT_RECORD
+ select HAVE_FUNCTION_GRAPH_TRACER
+ select HAVE_FUNCTION_TRACER
select HAVE_KPROBES
select HAVE_KRETPROBES
- select HAVE_ARCH_KGDB
- select ARCH_SUPPORTS_ATOMIC_RMW
- select HAVE_ARCH_JUMP_LABEL
+ select SPARSE_IRQ
config TILEPRO
def_bool !TILEGX
diff --git a/arch/tile/configs/tilegx_defconfig b/arch/tile/configs/tilegx_defconfig
index 3f3dfb8b150a..fd122ef45b00 100644
--- a/arch/tile/configs/tilegx_defconfig
+++ b/arch/tile/configs/tilegx_defconfig
@@ -16,7 +16,6 @@ CONFIG_CGROUP_DEBUG=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_CPUACCT=y
-CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_SCHED=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_BLK_CGROUP=y
@@ -89,7 +88,6 @@ CONFIG_TCP_CONG_YEAH=m
CONFIG_TCP_CONG_ILLINOIS=m
CONFIG_TCP_MD5SIG=y
CONFIG_IPV6=y
-CONFIG_IPV6_PRIVACY=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_ROUTE_INFO=y
CONFIG_IPV6_OPTIMISTIC_DAD=y
@@ -221,8 +219,7 @@ CONFIG_NETCONSOLE_DYNAMIC=y
CONFIG_TUN=y
CONFIG_VETH=m
CONFIG_NET_DSA_MV88E6060=y
-CONFIG_NET_DSA_MV88E6131=y
-CONFIG_NET_DSA_MV88E6123=y
+CONFIG_NET_DSA_MV88E6XXX=y
CONFIG_SKY2=y
CONFIG_PTP_1588_CLOCK_TILEGX=y
# CONFIG_WLAN is not set
diff --git a/arch/tile/configs/tilepro_defconfig b/arch/tile/configs/tilepro_defconfig
index ef9e27eb2f50..eb6a55944191 100644
--- a/arch/tile/configs/tilepro_defconfig
+++ b/arch/tile/configs/tilepro_defconfig
@@ -15,7 +15,6 @@ CONFIG_CGROUP_DEBUG=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_CPUACCT=y
-CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_SCHED=y
CONFIG_RT_GROUP_SCHED=y
CONFIG_BLK_CGROUP=y
@@ -85,7 +84,6 @@ CONFIG_TCP_CONG_YEAH=m
CONFIG_TCP_CONG_ILLINOIS=m
CONFIG_TCP_MD5SIG=y
CONFIG_IPV6=y
-CONFIG_IPV6_PRIVACY=y
CONFIG_IPV6_ROUTER_PREF=y
CONFIG_IPV6_ROUTE_INFO=y
CONFIG_IPV6_OPTIMISTIC_DAD=y
@@ -340,8 +338,7 @@ CONFIG_NETCONSOLE_DYNAMIC=y
CONFIG_TUN=y
CONFIG_VETH=m
CONFIG_NET_DSA_MV88E6060=y
-CONFIG_NET_DSA_MV88E6131=y
-CONFIG_NET_DSA_MV88E6123=y
+CONFIG_NET_DSA_MV88E6XXX=y
# CONFIG_NET_VENDOR_3COM is not set
CONFIG_E1000E=y
# CONFIG_WLAN is not set
diff --git a/arch/tile/gxio/mpipe.c b/arch/tile/gxio/mpipe.c
index f102048d9c0e..34de300ab320 100644
--- a/arch/tile/gxio/mpipe.c
+++ b/arch/tile/gxio/mpipe.c
@@ -122,7 +122,7 @@ size_t gxio_mpipe_calc_buffer_stack_bytes(unsigned long buffers)
{
const int BUFFERS_PER_LINE = 12;
- /* Count the number of cachlines. */
+ /* Count the number of cachelines. */
unsigned long lines =
(buffers + BUFFERS_PER_LINE - 1) / BUFFERS_PER_LINE;
diff --git a/arch/tile/include/asm/atomic_64.h b/arch/tile/include/asm/atomic_64.h
index 51cabc26e387..b0531a623653 100644
--- a/arch/tile/include/asm/atomic_64.h
+++ b/arch/tile/include/asm/atomic_64.h
@@ -37,12 +37,25 @@ static inline void atomic_add(int i, atomic_t *v)
__insn_fetchadd4((void *)&v->counter, i);
}
+/*
+ * Note a subtlety of the locking here. We are required to provide a
+ * full memory barrier before and after the operation. However, we
+ * only provide an explicit mb before the operation. After the
+ * operation, we use barrier() to get a full mb for free, because:
+ *
+ * (1) The barrier directive to the compiler prohibits any instructions
+ * being statically hoisted before the barrier;
+ * (2) the microarchitecture will not issue any further instructions
+ * until the fetchadd result is available for the "+ i" add instruction;
+ * (3) the smp_mb before the fetchadd ensures that no other memory
+ * operations are in flight at this point.
+ */
static inline int atomic_add_return(int i, atomic_t *v)
{
int val;
smp_mb(); /* barrier for proper semantics */
val = __insn_fetchadd4((void *)&v->counter, i) + i;
- barrier(); /* the "+ i" above will wait on memory */
+ barrier(); /* equivalent to smp_mb(); see block comment above */
return val;
}
@@ -95,7 +108,7 @@ static inline long atomic64_add_return(long i, atomic64_t *v)
int val;
smp_mb(); /* barrier for proper semantics */
val = __insn_fetchadd((void *)&v->counter, i) + i;
- barrier(); /* the "+ i" above will wait on memory */
+ barrier(); /* equivalent to smp_mb; see atomic_add_return() */
return val;
}
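[Editorial note: the comment block added above explains why the tile implementation can follow the fetchadd with only a compiler barrier() and still meet the full-memory-barrier contract of atomic_*_return(). A portable analogue of that contract is a sequentially consistent fetch-and-add; the sketch below uses the GCC/Clang __atomic builtins purely as an illustration of the required semantics, not the tile intrinsics.]

#include <stdio.h>

static int counter;

/* Behaves like atomic_add_return(): full barrier before and after. */
static int my_atomic_add_return(int i, int *v)
{
	/* __ATOMIC_SEQ_CST provides the ordering that the tile code
	 * builds from smp_mb() + fetchadd + barrier(). */
	return __atomic_add_fetch(v, i, __ATOMIC_SEQ_CST);
}

int main(void)
{
	printf("%d\n", my_atomic_add_return(5, &counter));	/* 5 */
	printf("%d\n", my_atomic_add_return(3, &counter));	/* 8 */
	return 0;
}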
diff --git a/arch/tile/include/asm/pgtable.h b/arch/tile/include/asm/pgtable.h
index 96cecf55522e..2a26cc4fefc2 100644
--- a/arch/tile/include/asm/pgtable.h
+++ b/arch/tile/include/asm/pgtable.h
@@ -487,7 +487,6 @@ static inline pmd_t pmd_modify(pmd_t pmd, pgprot_t newprot)
}
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
-#define has_transparent_hugepage() 1
#define pmd_trans_huge pmd_huge_page
#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
diff --git a/arch/tile/include/uapi/asm/unistd.h b/arch/tile/include/uapi/asm/unistd.h
index 3866397aaf5a..24e9187e85a8 100644
--- a/arch/tile/include/uapi/asm/unistd.h
+++ b/arch/tile/include/uapi/asm/unistd.h
@@ -12,6 +12,7 @@
* more details.
*/
+#define __ARCH_WANT_RENAMEAT
#if !defined(__LP64__) || defined(__SYSCALL_COMPAT)
/* Use the flavor of this syscall that matches the 32-bit API better. */
#define __ARCH_WANT_SYNC_FILE_RANGE2
diff --git a/arch/tile/kernel/pci_gx.c b/arch/tile/kernel/pci_gx.c
index aa2b44cd8fd3..0e7a5d09e023 100644
--- a/arch/tile/kernel/pci_gx.c
+++ b/arch/tile/kernel/pci_gx.c
@@ -40,7 +40,7 @@
#include <arch/sim.h>
/*
- * This file containes the routines to search for PCI buses,
+ * This file contains the routines to search for PCI buses,
* enumerate the buses, and configure any attached devices.
*/
@@ -434,7 +434,7 @@ int __init tile_pci_init(void)
/*
* Now determine which PCIe ports are configured to operate in RC
- * mode. There is a differece in the port configuration capability
+ * mode. There is a difference in the port configuration capability
* between the Gx36 and Gx72 devices.
*
* The Gx36 has configuration capability for each of the 3 PCIe
diff --git a/arch/tile/kernel/perf_event.c b/arch/tile/kernel/perf_event.c
index 8767060d70fb..6394c1ccb68e 100644
--- a/arch/tile/kernel/perf_event.c
+++ b/arch/tile/kernel/perf_event.c
@@ -941,7 +941,7 @@ arch_initcall(init_hw_perf_events);
/*
* Tile specific backtracing code for perf_events.
*/
-static inline void perf_callchain(struct perf_callchain_entry *entry,
+static inline void perf_callchain(struct perf_callchain_entry_ctx *entry,
struct pt_regs *regs)
{
struct KBacktraceIterator kbt;
@@ -992,13 +992,13 @@ static inline void perf_callchain(struct perf_callchain_entry *entry,
}
}
-void perf_callchain_user(struct perf_callchain_entry *entry,
+void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
struct pt_regs *regs)
{
perf_callchain(entry, regs);
}
-void perf_callchain_kernel(struct perf_callchain_entry *entry,
+void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
struct pt_regs *regs)
{
perf_callchain(entry, regs);
diff --git a/arch/tile/kernel/process.c b/arch/tile/kernel/process.c
index b5f30d376ce1..6b705ccc9cc1 100644
--- a/arch/tile/kernel/process.c
+++ b/arch/tile/kernel/process.c
@@ -541,7 +541,7 @@ void flush_thread(void)
/*
* Free current thread data structures etc..
*/
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
{
#ifdef CONFIG_HARDWALL
/*
@@ -550,7 +550,7 @@ void exit_thread(void)
* the last reference to a hardwall fd, it would already have
* been released and deactivated at this point.)
*/
- hardwall_deactivate_all(current);
+ hardwall_deactivate_all(tsk);
#endif
}
diff --git a/arch/tile/kernel/setup.c b/arch/tile/kernel/setup.c
index a992238e9b58..153020abd2f5 100644
--- a/arch/tile/kernel/setup.c
+++ b/arch/tile/kernel/setup.c
@@ -962,9 +962,7 @@ static void __init setup_numa_mapping(void)
cpumask_set_cpu(best_cpu, &node_2_cpu_mask[node]);
cpu_2_node[best_cpu] = node;
cpumask_clear_cpu(best_cpu, &unbound_cpus);
- node = next_node(node, default_nodes);
- if (node == MAX_NUMNODES)
- node = first_node(default_nodes);
+ node = next_node_in(node, default_nodes);
}
/* Print out node assignments and set defaults for disabled cpus */
diff --git a/arch/tile/kernel/unaligned.c b/arch/tile/kernel/unaligned.c
index 0db5f7c9d9e5..9772a3554282 100644
--- a/arch/tile/kernel/unaligned.c
+++ b/arch/tile/kernel/unaligned.c
@@ -188,7 +188,7 @@ static void find_regs(tilegx_bundle_bits bundle, uint64_t *rd, uint64_t *ra,
* Parse fault bundle, find potential used registers and mark
* corresponding bits in reg_map and alias_map. These 2 bit maps
* are used to find the scratch registers and determine if there
- * is register alais.
+ * is register alias.
*/
if (bundle & TILEGX_BUNDLE_MODE_MASK) { /* Y Mode Bundle. */
@@ -1529,7 +1529,7 @@ void do_unaligned(struct pt_regs *regs, int vecnum)
}
- /* Read the bundle casued the exception! */
+ /* Read the bundle caused the exception! */
pc = (tilegx_bundle_bits __user *)(regs->pc);
if (get_user(bundle, pc) != 0) {
/* Probably never be here since pc is valid user address.*/
diff --git a/arch/tile/mm/hugetlbpage.c b/arch/tile/mm/hugetlbpage.c
index e212c64682c5..77ceaa343fce 100644
--- a/arch/tile/mm/hugetlbpage.c
+++ b/arch/tile/mm/hugetlbpage.c
@@ -308,11 +308,16 @@ static bool saw_hugepagesz;
static __init int setup_hugepagesz(char *opt)
{
+ int rc;
+
if (!saw_hugepagesz) {
saw_hugepagesz = true;
memset(huge_shift, 0, sizeof(huge_shift));
}
- return __setup_hugepagesz(memparse(opt, NULL));
+ rc = __setup_hugepagesz(memparse(opt, NULL));
+ if (rc)
+ hugetlb_bad_size();
+ return rc;
}
__setup("hugepagesz=", setup_hugepagesz);
diff --git a/arch/tile/mm/init.c b/arch/tile/mm/init.c
index a0582b7f41d3..adce25462b0d 100644
--- a/arch/tile/mm/init.c
+++ b/arch/tile/mm/init.c
@@ -679,7 +679,7 @@ static void __init init_free_pfn_range(unsigned long start, unsigned long end)
* Hacky direct set to avoid unnecessary
* lock take/release for EVERY page here.
*/
- p->_count.counter = 0;
+ p->_refcount.counter = 0;
p->_mapcount.counter = -1;
}
init_page_count(page);
diff --git a/arch/um/configs/i386_defconfig b/arch/um/configs/i386_defconfig
index a12bf68c9f3a..5636221b8785 100644
--- a/arch/um/configs/i386_defconfig
+++ b/arch/um/configs/i386_defconfig
@@ -17,7 +17,6 @@ CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_CPUACCT=y
-CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_SCHED=y
CONFIG_BLK_CGROUP=y
# CONFIG_PID_NS is not set
diff --git a/arch/um/configs/x86_64_defconfig b/arch/um/configs/x86_64_defconfig
index 3aab117bd553..7a67b7ac1a7e 100644
--- a/arch/um/configs/x86_64_defconfig
+++ b/arch/um/configs/x86_64_defconfig
@@ -15,7 +15,6 @@ CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_CPUACCT=y
-CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_SCHED=y
CONFIG_BLK_CGROUP=y
# CONFIG_PID_NS is not set
diff --git a/arch/um/drivers/net_kern.c b/arch/um/drivers/net_kern.c
index 9ef669d24bb2..2cd5b6874c7b 100644
--- a/arch/um/drivers/net_kern.c
+++ b/arch/um/drivers/net_kern.c
@@ -223,7 +223,7 @@ static int uml_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
if (len == skb->len) {
dev->stats.tx_packets++;
dev->stats.tx_bytes += skb->len;
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
netif_start_queue(dev);
/* this is normally done in the interrupt when tx finishes */
@@ -252,7 +252,7 @@ static void uml_net_set_multicast_list(struct net_device *dev)
static void uml_net_tx_timeout(struct net_device *dev)
{
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
netif_wake_queue(dev);
}
diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
index 39ba20755e03..17e96dc29596 100644
--- a/arch/um/drivers/ubd_kern.c
+++ b/arch/um/drivers/ubd_kern.c
@@ -862,7 +862,7 @@ static int ubd_add(int n, char **error_out)
goto out;
}
ubd_dev->queue->queuedata = ubd_dev;
- blk_queue_flush(ubd_dev->queue, REQ_FLUSH);
+ blk_queue_write_cache(ubd_dev->queue, true, false);
blk_queue_max_segments(ubd_dev->queue, MAX_SG);
err = ubd_disk_register(UBD_MAJOR, ubd_dev->size, n, &ubd_gendisk[n]);
diff --git a/arch/um/include/shared/registers.h b/arch/um/include/shared/registers.h
index f5b76355ad71..a74449b5b0e3 100644
--- a/arch/um/include/shared/registers.h
+++ b/arch/um/include/shared/registers.h
@@ -9,6 +9,8 @@
#include <sysdep/ptrace.h>
#include <sysdep/archsetjmp.h>
+extern int save_i387_registers(int pid, unsigned long *fp_regs);
+extern int restore_i387_registers(int pid, unsigned long *fp_regs);
extern int save_fp_registers(int pid, unsigned long *fp_regs);
extern int restore_fp_registers(int pid, unsigned long *fp_regs);
extern int save_fpx_registers(int pid, unsigned long *fp_regs);
diff --git a/arch/um/kernel/process.c b/arch/um/kernel/process.c
index 48af59aae129..034b42c7ab40 100644
--- a/arch/um/kernel/process.c
+++ b/arch/um/kernel/process.c
@@ -103,10 +103,6 @@ void interrupt_end(void)
tracehook_notify_resume(regs);
}
-void exit_thread(void)
-{
-}
-
int get_current_pid(void)
{
return task_pid_nr(current);
@@ -402,6 +398,6 @@ int elf_core_copy_fpregs(struct task_struct *t, elf_fpregset_t *fpu)
{
int cpu = current_thread_info()->cpu;
- return save_fp_registers(userspace_pid[cpu], (unsigned long *) fpu);
+ return save_i387_registers(userspace_pid[cpu], (unsigned long *) fpu);
}
diff --git a/arch/um/os-Linux/signal.c b/arch/um/os-Linux/signal.c
index 7801666514ed..8acaf4e384c0 100644
--- a/arch/um/os-Linux/signal.c
+++ b/arch/um/os-Linux/signal.c
@@ -29,23 +29,29 @@ void (*sig_info[NSIG])(int, struct siginfo *, struct uml_pt_regs *) = {
static void sig_handler_common(int sig, struct siginfo *si, mcontext_t *mc)
{
- struct uml_pt_regs r;
+ struct uml_pt_regs *r;
int save_errno = errno;
- r.is_user = 0;
+ r = malloc(sizeof(struct uml_pt_regs));
+ if (!r)
+ panic("out of memory");
+
+ r->is_user = 0;
if (sig == SIGSEGV) {
/* For segfaults, we want the data from the sigcontext. */
- get_regs_from_mc(&r, mc);
- GET_FAULTINFO_FROM_MC(r.faultinfo, mc);
+ get_regs_from_mc(r, mc);
+ GET_FAULTINFO_FROM_MC(r->faultinfo, mc);
}
/* enable signals if sig isn't IRQ signal */
if ((sig != SIGIO) && (sig != SIGWINCH) && (sig != SIGALRM))
unblock_signals();
- (*sig_info[sig])(sig, si, &r);
+ (*sig_info[sig])(sig, si, r);
errno = save_errno;
+
+ free(r);
}
/*
@@ -83,11 +89,17 @@ void sig_handler(int sig, struct siginfo *si, mcontext_t *mc)
static void timer_real_alarm_handler(mcontext_t *mc)
{
- struct uml_pt_regs regs;
+ struct uml_pt_regs *regs;
+
+ regs = malloc(sizeof(struct uml_pt_regs));
+ if (!regs)
+ panic("out of memory");
if (mc != NULL)
- get_regs_from_mc(&regs, mc);
- timer_handler(SIGALRM, NULL, &regs);
+ get_regs_from_mc(regs, mc);
+ timer_handler(SIGALRM, NULL, regs);
+
+ free(regs);
}
void timer_alarm_handler(int sig, struct siginfo *unused_si, mcontext_t *mc)
diff --git a/arch/unicore32/boot/Makefile b/arch/unicore32/boot/Makefile
index ec7fb70b412b..828855007b29 100644
--- a/arch/unicore32/boot/Makefile
+++ b/arch/unicore32/boot/Makefile
@@ -31,7 +31,7 @@ $(obj)/uImage: $(obj)/zImage FORCE
$(call if_changed,uimage)
@echo ' Image $@ is ready'
-PHONY += initrd FORCE
+PHONY += initrd
initrd:
@test "$(INITRD)" != "" || \
(echo You must specify INITRD; exit -1)
diff --git a/arch/unicore32/boot/compressed/Makefile b/arch/unicore32/boot/compressed/Makefile
index 96494fb646f7..9aecdd3ddc48 100644
--- a/arch/unicore32/boot/compressed/Makefile
+++ b/arch/unicore32/boot/compressed/Makefile
@@ -54,7 +54,6 @@ LDFLAGS_vmlinux += -T
$(obj)/vmlinux: $(obj)/vmlinux.lds $(obj)/head.o $(obj)/piggy.o \
$(obj)/misc.o FORCE
$(call if_changed,ld)
- @:
# We now have a PIC decompressor implementation. Decompressors running
# from RAM should not define ZTEXTADDR. Decompressors running directly
diff --git a/arch/unicore32/include/uapi/asm/unistd.h b/arch/unicore32/include/uapi/asm/unistd.h
index d4cc4559d848..1f63c476528e 100644
--- a/arch/unicore32/include/uapi/asm/unistd.h
+++ b/arch/unicore32/include/uapi/asm/unistd.h
@@ -10,6 +10,8 @@
* published by the Free Software Foundation.
*/
+#define __ARCH_WANT_RENAMEAT
+
/* Use the standard ABI for syscalls. */
#include <asm-generic/unistd.h>
#define __ARCH_WANT_SYS_CLONE
diff --git a/arch/unicore32/kernel/gpio.c b/arch/unicore32/kernel/gpio.c
index 5ab23794ea17..49347a0e9288 100644
--- a/arch/unicore32/kernel/gpio.c
+++ b/arch/unicore32/kernel/gpio.c
@@ -14,6 +14,8 @@
#include <linux/init.h>
#include <linux/module.h>
+#include <linux/gpio/driver.h>
+/* FIXME: needed for gpio_set_value() - convert to use descriptors or hogs */
#include <linux/gpio.h>
#include <mach/hardware.h>
@@ -118,5 +120,5 @@ void __init puv3_init_gpio(void)
* gpio_set_value(GPO_SET_V2, 1);
*/
#endif
- gpiochip_add(&puv3_gpio_chip);
+ gpiochip_add_data(&puv3_gpio_chip, NULL);
}
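[Editorial note: the gpio.c hunk above converts to gpiochip_add_data(), which lets a driver attach private state to the chip and retrieve it with gpiochip_get_data() in its callbacks; the unicore32 driver simply passes NULL. The fragment below is a hypothetical driver sketch of that pattern (struct foo_gpio and its register layout are made up), assuming the current gpiolib API; it illustrates the registration call, not this driver.]

#include <linux/gpio/driver.h>
#include <linux/bitops.h>
#include <linux/io.h>

struct foo_gpio {
	void __iomem *base;
	struct gpio_chip chip;
};

static int foo_gpio_get(struct gpio_chip *chip, unsigned int offset)
{
	struct foo_gpio *fg = gpiochip_get_data(chip);

	return !!(readl(fg->base) & BIT(offset));
}

static int foo_gpio_register(struct foo_gpio *fg)
{
	fg->chip.get = foo_gpio_get;
	/* ... label, ngpio, direction callbacks ... */
	return gpiochip_add_data(&fg->chip, fg);  /* was gpiochip_add(&fg->chip) */
}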
diff --git a/arch/unicore32/kernel/process.c b/arch/unicore32/kernel/process.c
index b008e9961465..00299c927852 100644
--- a/arch/unicore32/kernel/process.c
+++ b/arch/unicore32/kernel/process.c
@@ -201,13 +201,6 @@ void show_regs(struct pt_regs *regs)
__backtrace();
}
-/*
- * Free current thread data structures etc..
- */
-void exit_thread(void)
-{
-}
-
void flush_thread(void)
{
struct thread_info *thread = current_thread_info();
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 2dc18605831f..0a7b885964ba 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -91,7 +91,7 @@ config X86
select HAVE_ARCH_SOFT_DIRTY if X86_64
select HAVE_ARCH_TRACEHOOK
select HAVE_ARCH_TRANSPARENT_HUGEPAGE
- select HAVE_BPF_JIT if X86_64
+ select HAVE_EBPF_JIT if X86_64
select HAVE_CC_STACKPROTECTOR
select HAVE_CMPXCHG_DOUBLE
select HAVE_CMPXCHG_LOCAL
@@ -105,6 +105,7 @@ config X86
select HAVE_DYNAMIC_FTRACE
select HAVE_DYNAMIC_FTRACE_WITH_REGS
select HAVE_EFFICIENT_UNALIGNED_ACCESS
+ select HAVE_EXIT_THREAD
select HAVE_FENTRY if X86_64
select HAVE_FTRACE_MCOUNT_RECORD
select HAVE_FUNCTION_GRAPH_FP_TEST
@@ -130,6 +131,7 @@ config X86
select HAVE_MEMBLOCK
select HAVE_MEMBLOCK_NODE_MAP
select HAVE_MIXED_BREAKPOINTS_REGS
+ select HAVE_NMI
select HAVE_OPROFILE
select HAVE_OPTPROBES
select HAVE_PCSPKR_PLATFORM
@@ -164,10 +166,6 @@ config INSTRUCTION_DECODER
def_bool y
depends on KPROBES || PERF_EVENTS || UPROBES
-config PERF_EVENTS_INTEL_UNCORE
- def_bool y
- depends on PERF_EVENTS && CPU_SUP_INTEL && PCI
-
config OUTPUT_FORMAT
string
default "elf32-i386" if X86_32
@@ -1046,6 +1044,8 @@ config X86_THERMAL_VECTOR
def_bool y
depends on X86_MCE_INTEL
+source "arch/x86/events/Kconfig"
+
config X86_LEGACY_VM86
bool "Legacy VM86 support"
default n
@@ -1210,15 +1210,6 @@ config MICROCODE_OLD_INTERFACE
def_bool y
depends on MICROCODE
-config PERF_EVENTS_AMD_POWER
- depends on PERF_EVENTS && CPU_SUP_AMD
- tristate "AMD Processor Power Reporting Mechanism"
- ---help---
- Provide power reporting mechanism support for AMD processors.
- Currently, it leverages X86_FEATURE_ACC_POWER
- (CPUID Fn8000_0007_EDX[12]) interface to calculate the
- average power consumption on Family 15h processors.
-
config X86_MSR
tristate "/dev/cpu/*/msr - Model-specific register support"
---help---
@@ -1932,54 +1923,38 @@ config RELOCATABLE
(CONFIG_PHYSICAL_START) is used as the minimum location.
config RANDOMIZE_BASE
- bool "Randomize the address of the kernel image"
+ bool "Randomize the address of the kernel image (KASLR)"
depends on RELOCATABLE
default n
---help---
- Randomizes the physical and virtual address at which the
- kernel image is decompressed, as a security feature that
- deters exploit attempts relying on knowledge of the location
- of kernel internals.
+ In support of Kernel Address Space Layout Randomization (KASLR),
+ this randomizes the physical address at which the kernel image
+ is decompressed and the virtual address where the kernel
+ image is mapped, as a security feature that deters exploit
+ attempts relying on knowledge of the location of kernel
+ code internals.
+
+ The kernel physical and virtual address can be randomized
+ from 16MB up to 1GB on 64-bit and 512MB on 32-bit. (Note that
+ using RANDOMIZE_BASE reduces the memory space available to
+ kernel modules from 1.5GB to 1GB.)
+
+ Entropy is generated using the RDRAND instruction if it is
+ supported. If RDTSC is supported, its value is mixed into
+ the entropy pool as well. If neither RDRAND nor RDTSC are
+ supported, then entropy is read from the i8254 timer.
+
+ Since the kernel is built using 2GB addressing, and
+ PHYSICAL_ALIGN must be at a minimum of 2MB, only 10 bits of
+ entropy is theoretically possible. Currently, with the
+ default value for PHYSICAL_ALIGN and due to page table
+ layouts, 64-bit uses 9 bits of entropy and 32-bit uses 8 bits.
+
+ If CONFIG_HIBERNATE is also enabled, KASLR is disabled at boot
+ time. To enable it, boot with "kaslr" on the kernel command
+ line (which will also disable hibernation).
- Entropy is generated using the RDRAND instruction if it is
- supported. If RDTSC is supported, it is used as well. If
- neither RDRAND nor RDTSC are supported, then randomness is
- read from the i8254 timer.
-
- The kernel will be offset by up to RANDOMIZE_BASE_MAX_OFFSET,
- and aligned according to PHYSICAL_ALIGN. Since the kernel is
- built using 2GiB addressing, and PHYSICAL_ALGIN must be at a
- minimum of 2MiB, only 10 bits of entropy is theoretically
- possible. At best, due to page table layouts, 64-bit can use
- 9 bits of entropy and 32-bit uses 8 bits.
-
- If unsure, say N.
-
-config RANDOMIZE_BASE_MAX_OFFSET
- hex "Maximum kASLR offset allowed" if EXPERT
- depends on RANDOMIZE_BASE
- range 0x0 0x20000000 if X86_32
- default "0x20000000" if X86_32
- range 0x0 0x40000000 if X86_64
- default "0x40000000" if X86_64
- ---help---
- The lesser of RANDOMIZE_BASE_MAX_OFFSET and available physical
- memory is used to determine the maximal offset in bytes that will
- be applied to the kernel when kernel Address Space Layout
- Randomization (kASLR) is active. This must be a multiple of
- PHYSICAL_ALIGN.
-
- On 32-bit this is limited to 512MiB by page table layouts. The
- default is 512MiB.
-
- On 64-bit this is limited by how the kernel fixmap page table is
- positioned, so this cannot be larger than 1GiB currently. Without
- RANDOMIZE_BASE, there is a 512MiB to 1.5GiB split between kernel
- and modules. When RANDOMIZE_BASE_MAX_OFFSET is above 512MiB, the
- modules area will shrink to compensate, up to the current maximum
- 1GiB to 1GiB split. The default is 1GiB.
-
- If unsure, leave at the default value.
+ If unsure, say N.
# Relocation on x86 needs some additional build support
config X86_NEED_RELOCS
diff --git a/arch/x86/Makefile b/arch/x86/Makefile
index 4086abca0b32..6fce7f096b88 100644
--- a/arch/x86/Makefile
+++ b/arch/x86/Makefile
@@ -208,7 +208,8 @@ endif
head-y := arch/x86/kernel/head_$(BITS).o
head-y += arch/x86/kernel/head$(BITS).o
-head-y += arch/x86/kernel/head.o
+head-y += arch/x86/kernel/ebda.o
+head-y += arch/x86/kernel/platform-quirks.o
libs-y += arch/x86/lib/
diff --git a/arch/x86/boot/Makefile b/arch/x86/boot/Makefile
index b1ef9e489084..700a9c6e6159 100644
--- a/arch/x86/boot/Makefile
+++ b/arch/x86/boot/Makefile
@@ -86,16 +86,7 @@ $(obj)/vmlinux.bin: $(obj)/compressed/vmlinux FORCE
SETUP_OBJS = $(addprefix $(obj)/,$(setup-y))
-sed-voffset := -e 's/^\([0-9a-fA-F]*\) [ABCDGRSTVW] \(_text\|_end\)$$/\#define VO_\2 0x\1/p'
-
-quiet_cmd_voffset = VOFFSET $@
- cmd_voffset = $(NM) $< | sed -n $(sed-voffset) > $@
-
-targets += voffset.h
-$(obj)/voffset.h: vmlinux FORCE
- $(call if_changed,voffset)
-
-sed-zoffset := -e 's/^\([0-9a-fA-F]*\) [ABCDGRSTVW] \(startup_32\|startup_64\|efi32_stub_entry\|efi64_stub_entry\|efi_pe_entry\|input_data\|_end\|z_.*\)$$/\#define ZO_\2 0x\1/p'
+sed-zoffset := -e 's/^\([0-9a-fA-F]*\) [ABCDGRSTVW] \(startup_32\|startup_64\|efi32_stub_entry\|efi64_stub_entry\|efi_pe_entry\|input_data\|_end\|_ehead\|_text\|z_.*\)$$/\#define ZO_\2 0x\1/p'
quiet_cmd_zoffset = ZOFFSET $@
cmd_zoffset = $(NM) $< | sed -n $(sed-zoffset) > $@
@@ -106,7 +97,7 @@ $(obj)/zoffset.h: $(obj)/compressed/vmlinux FORCE
AFLAGS_header.o += -I$(obj)
-$(obj)/header.o: $(obj)/voffset.h $(obj)/zoffset.h
+$(obj)/header.o: $(obj)/zoffset.h
LDFLAGS_setup.elf := -T
$(obj)/setup.elf: $(src)/setup.ld $(SETUP_OBJS) FORCE
diff --git a/arch/x86/boot/compressed/Makefile b/arch/x86/boot/compressed/Makefile
index 8774cb23064f..f1356889204e 100644
--- a/arch/x86/boot/compressed/Makefile
+++ b/arch/x86/boot/compressed/Makefile
@@ -57,12 +57,27 @@ LDFLAGS_vmlinux := -T
hostprogs-y := mkpiggy
HOST_EXTRACFLAGS += -I$(srctree)/tools/include
+sed-voffset := -e 's/^\([0-9a-fA-F]*\) [ABCDGRSTVW] \(_text\|__bss_start\|_end\)$$/\#define VO_\2 _AC(0x\1,UL)/p'
+
+quiet_cmd_voffset = VOFFSET $@
+ cmd_voffset = $(NM) $< | sed -n $(sed-voffset) > $@
+
+targets += ../voffset.h
+
+$(obj)/../voffset.h: vmlinux FORCE
+ $(call if_changed,voffset)
+
+$(obj)/misc.o: $(obj)/../voffset.h
+
vmlinux-objs-y := $(obj)/vmlinux.lds $(obj)/head_$(BITS).o $(obj)/misc.o \
- $(obj)/string.o $(obj)/cmdline.o \
+ $(obj)/string.o $(obj)/cmdline.o $(obj)/error.o \
$(obj)/piggy.o $(obj)/cpuflags.o
vmlinux-objs-$(CONFIG_EARLY_PRINTK) += $(obj)/early_serial_console.o
-vmlinux-objs-$(CONFIG_RANDOMIZE_BASE) += $(obj)/aslr.o
+vmlinux-objs-$(CONFIG_RANDOMIZE_BASE) += $(obj)/kaslr.o
+ifdef CONFIG_X86_64
+ vmlinux-objs-$(CONFIG_RANDOMIZE_BASE) += $(obj)/pagetable.o
+endif
$(obj)/eboot.o: KBUILD_CFLAGS += -fshort-wchar -mno-red-zone
@@ -72,7 +87,6 @@ vmlinux-objs-$(CONFIG_EFI_MIXED) += $(obj)/efi_thunk_$(BITS).o
$(obj)/vmlinux: $(vmlinux-objs-y) FORCE
$(call if_changed,ld)
- @:
OBJCOPYFLAGS_vmlinux.bin := -R .comment -S
$(obj)/vmlinux.bin: vmlinux FORCE
@@ -109,10 +123,8 @@ suffix-$(CONFIG_KERNEL_XZ) := xz
suffix-$(CONFIG_KERNEL_LZO) := lzo
suffix-$(CONFIG_KERNEL_LZ4) := lz4
-RUN_SIZE = $(shell $(OBJDUMP) -h vmlinux | \
- $(CONFIG_SHELL) $(srctree)/arch/x86/tools/calc_run_size.sh)
quiet_cmd_mkpiggy = MKPIGGY $@
- cmd_mkpiggy = $(obj)/mkpiggy $< $(RUN_SIZE) > $@ || ( rm -f $@ ; false )
+ cmd_mkpiggy = $(obj)/mkpiggy $< > $@ || ( rm -f $@ ; false )
targets += piggy.S
$(obj)/piggy.S: $(obj)/vmlinux.bin.$(suffix-y) $(obj)/mkpiggy FORCE
diff --git a/arch/x86/boot/compressed/aslr.c b/arch/x86/boot/compressed/aslr.c
deleted file mode 100644
index 6a9b96b4624d..000000000000
--- a/arch/x86/boot/compressed/aslr.c
+++ /dev/null
@@ -1,339 +0,0 @@
-#include "misc.h"
-
-#include <asm/msr.h>
-#include <asm/archrandom.h>
-#include <asm/e820.h>
-
-#include <generated/compile.h>
-#include <linux/module.h>
-#include <linux/uts.h>
-#include <linux/utsname.h>
-#include <generated/utsrelease.h>
-
-/* Simplified build-specific string for starting entropy. */
-static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@"
- LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION;
-
-#define I8254_PORT_CONTROL 0x43
-#define I8254_PORT_COUNTER0 0x40
-#define I8254_CMD_READBACK 0xC0
-#define I8254_SELECT_COUNTER0 0x02
-#define I8254_STATUS_NOTREADY 0x40
-static inline u16 i8254(void)
-{
- u16 status, timer;
-
- do {
- outb(I8254_PORT_CONTROL,
- I8254_CMD_READBACK | I8254_SELECT_COUNTER0);
- status = inb(I8254_PORT_COUNTER0);
- timer = inb(I8254_PORT_COUNTER0);
- timer |= inb(I8254_PORT_COUNTER0) << 8;
- } while (status & I8254_STATUS_NOTREADY);
-
- return timer;
-}
-
-static unsigned long rotate_xor(unsigned long hash, const void *area,
- size_t size)
-{
- size_t i;
- unsigned long *ptr = (unsigned long *)area;
-
- for (i = 0; i < size / sizeof(hash); i++) {
- /* Rotate by odd number of bits and XOR. */
- hash = (hash << ((sizeof(hash) * 8) - 7)) | (hash >> 7);
- hash ^= ptr[i];
- }
-
- return hash;
-}
-
-/* Attempt to create a simple but unpredictable starting entropy. */
-static unsigned long get_random_boot(void)
-{
- unsigned long hash = 0;
-
- hash = rotate_xor(hash, build_str, sizeof(build_str));
- hash = rotate_xor(hash, real_mode, sizeof(*real_mode));
-
- return hash;
-}
-
-static unsigned long get_random_long(void)
-{
-#ifdef CONFIG_X86_64
- const unsigned long mix_const = 0x5d6008cbf3848dd3UL;
-#else
- const unsigned long mix_const = 0x3f39e593UL;
-#endif
- unsigned long raw, random = get_random_boot();
- bool use_i8254 = true;
-
- debug_putstr("KASLR using");
-
- if (has_cpuflag(X86_FEATURE_RDRAND)) {
- debug_putstr(" RDRAND");
- if (rdrand_long(&raw)) {
- random ^= raw;
- use_i8254 = false;
- }
- }
-
- if (has_cpuflag(X86_FEATURE_TSC)) {
- debug_putstr(" RDTSC");
- raw = rdtsc();
-
- random ^= raw;
- use_i8254 = false;
- }
-
- if (use_i8254) {
- debug_putstr(" i8254");
- random ^= i8254();
- }
-
- /* Circular multiply for better bit diffusion */
- asm("mul %3"
- : "=a" (random), "=d" (raw)
- : "a" (random), "rm" (mix_const));
- random += raw;
-
- debug_putstr("...\n");
-
- return random;
-}
-
-struct mem_vector {
- unsigned long start;
- unsigned long size;
-};
-
-#define MEM_AVOID_MAX 5
-static struct mem_vector mem_avoid[MEM_AVOID_MAX];
-
-static bool mem_contains(struct mem_vector *region, struct mem_vector *item)
-{
- /* Item at least partially before region. */
- if (item->start < region->start)
- return false;
- /* Item at least partially after region. */
- if (item->start + item->size > region->start + region->size)
- return false;
- return true;
-}
-
-static bool mem_overlaps(struct mem_vector *one, struct mem_vector *two)
-{
- /* Item one is entirely before item two. */
- if (one->start + one->size <= two->start)
- return false;
- /* Item one is entirely after item two. */
- if (one->start >= two->start + two->size)
- return false;
- return true;
-}
-
-static void mem_avoid_init(unsigned long input, unsigned long input_size,
- unsigned long output, unsigned long output_size)
-{
- u64 initrd_start, initrd_size;
- u64 cmd_line, cmd_line_size;
- unsigned long unsafe, unsafe_len;
- char *ptr;
-
- /*
- * Avoid the region that is unsafe to overlap during
- * decompression (see calculations at top of misc.c).
- */
- unsafe_len = (output_size >> 12) + 32768 + 18;
- unsafe = (unsigned long)input + input_size - unsafe_len;
- mem_avoid[0].start = unsafe;
- mem_avoid[0].size = unsafe_len;
-
- /* Avoid initrd. */
- initrd_start = (u64)real_mode->ext_ramdisk_image << 32;
- initrd_start |= real_mode->hdr.ramdisk_image;
- initrd_size = (u64)real_mode->ext_ramdisk_size << 32;
- initrd_size |= real_mode->hdr.ramdisk_size;
- mem_avoid[1].start = initrd_start;
- mem_avoid[1].size = initrd_size;
-
- /* Avoid kernel command line. */
- cmd_line = (u64)real_mode->ext_cmd_line_ptr << 32;
- cmd_line |= real_mode->hdr.cmd_line_ptr;
- /* Calculate size of cmd_line. */
- ptr = (char *)(unsigned long)cmd_line;
- for (cmd_line_size = 0; ptr[cmd_line_size++]; )
- ;
- mem_avoid[2].start = cmd_line;
- mem_avoid[2].size = cmd_line_size;
-
- /* Avoid heap memory. */
- mem_avoid[3].start = (unsigned long)free_mem_ptr;
- mem_avoid[3].size = BOOT_HEAP_SIZE;
-
- /* Avoid stack memory. */
- mem_avoid[4].start = (unsigned long)free_mem_end_ptr;
- mem_avoid[4].size = BOOT_STACK_SIZE;
-}
-
-/* Does this memory vector overlap a known avoided area? */
-static bool mem_avoid_overlap(struct mem_vector *img)
-{
- int i;
- struct setup_data *ptr;
-
- for (i = 0; i < MEM_AVOID_MAX; i++) {
- if (mem_overlaps(img, &mem_avoid[i]))
- return true;
- }
-
- /* Avoid all entries in the setup_data linked list. */
- ptr = (struct setup_data *)(unsigned long)real_mode->hdr.setup_data;
- while (ptr) {
- struct mem_vector avoid;
-
- avoid.start = (unsigned long)ptr;
- avoid.size = sizeof(*ptr) + ptr->len;
-
- if (mem_overlaps(img, &avoid))
- return true;
-
- ptr = (struct setup_data *)(unsigned long)ptr->next;
- }
-
- return false;
-}
-
-static unsigned long slots[CONFIG_RANDOMIZE_BASE_MAX_OFFSET /
- CONFIG_PHYSICAL_ALIGN];
-static unsigned long slot_max;
-
-static void slots_append(unsigned long addr)
-{
- /* Overflowing the slots list should be impossible. */
- if (slot_max >= CONFIG_RANDOMIZE_BASE_MAX_OFFSET /
- CONFIG_PHYSICAL_ALIGN)
- return;
-
- slots[slot_max++] = addr;
-}
-
-static unsigned long slots_fetch_random(void)
-{
- /* Handle case of no slots stored. */
- if (slot_max == 0)
- return 0;
-
- return slots[get_random_long() % slot_max];
-}
-
-static void process_e820_entry(struct e820entry *entry,
- unsigned long minimum,
- unsigned long image_size)
-{
- struct mem_vector region, img;
-
- /* Skip non-RAM entries. */
- if (entry->type != E820_RAM)
- return;
-
- /* Ignore entries entirely above our maximum. */
- if (entry->addr >= CONFIG_RANDOMIZE_BASE_MAX_OFFSET)
- return;
-
- /* Ignore entries entirely below our minimum. */
- if (entry->addr + entry->size < minimum)
- return;
-
- region.start = entry->addr;
- region.size = entry->size;
-
- /* Potentially raise address to minimum location. */
- if (region.start < minimum)
- region.start = minimum;
-
- /* Potentially raise address to meet alignment requirements. */
- region.start = ALIGN(region.start, CONFIG_PHYSICAL_ALIGN);
-
- /* Did we raise the address above the bounds of this e820 region? */
- if (region.start > entry->addr + entry->size)
- return;
-
- /* Reduce size by any delta from the original address. */
- region.size -= region.start - entry->addr;
-
- /* Reduce maximum size to fit end of image within maximum limit. */
- if (region.start + region.size > CONFIG_RANDOMIZE_BASE_MAX_OFFSET)
- region.size = CONFIG_RANDOMIZE_BASE_MAX_OFFSET - region.start;
-
- /* Walk each aligned slot and check for avoided areas. */
- for (img.start = region.start, img.size = image_size ;
- mem_contains(&region, &img) ;
- img.start += CONFIG_PHYSICAL_ALIGN) {
- if (mem_avoid_overlap(&img))
- continue;
- slots_append(img.start);
- }
-}
-
-static unsigned long find_random_addr(unsigned long minimum,
- unsigned long size)
-{
- int i;
- unsigned long addr;
-
- /* Make sure minimum is aligned. */
- minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN);
-
- /* Verify potential e820 positions, appending to slots list. */
- for (i = 0; i < real_mode->e820_entries; i++) {
- process_e820_entry(&real_mode->e820_map[i], minimum, size);
- }
-
- return slots_fetch_random();
-}
-
-unsigned char *choose_kernel_location(struct boot_params *boot_params,
- unsigned char *input,
- unsigned long input_size,
- unsigned char *output,
- unsigned long output_size)
-{
- unsigned long choice = (unsigned long)output;
- unsigned long random;
-
-#ifdef CONFIG_HIBERNATION
- if (!cmdline_find_option_bool("kaslr")) {
- debug_putstr("KASLR disabled by default...\n");
- goto out;
- }
-#else
- if (cmdline_find_option_bool("nokaslr")) {
- debug_putstr("KASLR disabled by cmdline...\n");
- goto out;
- }
-#endif
-
- boot_params->hdr.loadflags |= KASLR_FLAG;
-
- /* Record the various known unsafe memory ranges. */
- mem_avoid_init((unsigned long)input, input_size,
- (unsigned long)output, output_size);
-
- /* Walk e820 and find a random address. */
- random = find_random_addr(choice, output_size);
- if (!random) {
- debug_putstr("KASLR could not find suitable E820 region...\n");
- goto out;
- }
-
- /* Always enforce the minimum. */
- if (random < choice)
- goto out;
-
- choice = random;
-out:
- return (unsigned char *)choice;
-}
diff --git a/arch/x86/boot/compressed/cmdline.c b/arch/x86/boot/compressed/cmdline.c
index b68e3033e6b9..73ccf63b0f48 100644
--- a/arch/x86/boot/compressed/cmdline.c
+++ b/arch/x86/boot/compressed/cmdline.c
@@ -15,9 +15,9 @@ static inline char rdfs8(addr_t addr)
#include "../cmdline.c"
static unsigned long get_cmd_line_ptr(void)
{
- unsigned long cmd_line_ptr = real_mode->hdr.cmd_line_ptr;
+ unsigned long cmd_line_ptr = boot_params->hdr.cmd_line_ptr;
- cmd_line_ptr |= (u64)real_mode->ext_cmd_line_ptr << 32;
+ cmd_line_ptr |= (u64)boot_params->ext_cmd_line_ptr << 32;
return cmd_line_ptr;
}
diff --git a/arch/x86/boot/compressed/eboot.c b/arch/x86/boot/compressed/eboot.c
index 583d539a4197..52fef606bc54 100644
--- a/arch/x86/boot/compressed/eboot.c
+++ b/arch/x86/boot/compressed/eboot.c
@@ -571,312 +571,6 @@ free_handle:
efi_call_early(free_pool, pci_handle);
}
-static void
-setup_pixel_info(struct screen_info *si, u32 pixels_per_scan_line,
- struct efi_pixel_bitmask pixel_info, int pixel_format)
-{
- if (pixel_format == PIXEL_RGB_RESERVED_8BIT_PER_COLOR) {
- si->lfb_depth = 32;
- si->lfb_linelength = pixels_per_scan_line * 4;
- si->red_size = 8;
- si->red_pos = 0;
- si->green_size = 8;
- si->green_pos = 8;
- si->blue_size = 8;
- si->blue_pos = 16;
- si->rsvd_size = 8;
- si->rsvd_pos = 24;
- } else if (pixel_format == PIXEL_BGR_RESERVED_8BIT_PER_COLOR) {
- si->lfb_depth = 32;
- si->lfb_linelength = pixels_per_scan_line * 4;
- si->red_size = 8;
- si->red_pos = 16;
- si->green_size = 8;
- si->green_pos = 8;
- si->blue_size = 8;
- si->blue_pos = 0;
- si->rsvd_size = 8;
- si->rsvd_pos = 24;
- } else if (pixel_format == PIXEL_BIT_MASK) {
- find_bits(pixel_info.red_mask, &si->red_pos, &si->red_size);
- find_bits(pixel_info.green_mask, &si->green_pos,
- &si->green_size);
- find_bits(pixel_info.blue_mask, &si->blue_pos, &si->blue_size);
- find_bits(pixel_info.reserved_mask, &si->rsvd_pos,
- &si->rsvd_size);
- si->lfb_depth = si->red_size + si->green_size +
- si->blue_size + si->rsvd_size;
- si->lfb_linelength = (pixels_per_scan_line * si->lfb_depth) / 8;
- } else {
- si->lfb_depth = 4;
- si->lfb_linelength = si->lfb_width / 2;
- si->red_size = 0;
- si->red_pos = 0;
- si->green_size = 0;
- si->green_pos = 0;
- si->blue_size = 0;
- si->blue_pos = 0;
- si->rsvd_size = 0;
- si->rsvd_pos = 0;
- }
-}
-
-static efi_status_t
-__gop_query32(struct efi_graphics_output_protocol_32 *gop32,
- struct efi_graphics_output_mode_info **info,
- unsigned long *size, u64 *fb_base)
-{
- struct efi_graphics_output_protocol_mode_32 *mode;
- efi_status_t status;
- unsigned long m;
-
- m = gop32->mode;
- mode = (struct efi_graphics_output_protocol_mode_32 *)m;
-
- status = efi_early->call(gop32->query_mode, gop32,
- mode->mode, size, info);
- if (status != EFI_SUCCESS)
- return status;
-
- *fb_base = mode->frame_buffer_base;
- return status;
-}
-
-static efi_status_t
-setup_gop32(struct screen_info *si, efi_guid_t *proto,
- unsigned long size, void **gop_handle)
-{
- struct efi_graphics_output_protocol_32 *gop32, *first_gop;
- unsigned long nr_gops;
- u16 width, height;
- u32 pixels_per_scan_line;
- u32 ext_lfb_base;
- u64 fb_base;
- struct efi_pixel_bitmask pixel_info;
- int pixel_format;
- efi_status_t status;
- u32 *handles = (u32 *)(unsigned long)gop_handle;
- int i;
-
- first_gop = NULL;
- gop32 = NULL;
-
- nr_gops = size / sizeof(u32);
- for (i = 0; i < nr_gops; i++) {
- struct efi_graphics_output_mode_info *info = NULL;
- efi_guid_t conout_proto = EFI_CONSOLE_OUT_DEVICE_GUID;
- bool conout_found = false;
- void *dummy = NULL;
- u32 h = handles[i];
- u64 current_fb_base;
-
- status = efi_call_early(handle_protocol, h,
- proto, (void **)&gop32);
- if (status != EFI_SUCCESS)
- continue;
-
- status = efi_call_early(handle_protocol, h,
- &conout_proto, &dummy);
- if (status == EFI_SUCCESS)
- conout_found = true;
-
- status = __gop_query32(gop32, &info, &size, &current_fb_base);
- if (status == EFI_SUCCESS && (!first_gop || conout_found)) {
- /*
- * Systems that use the UEFI Console Splitter may
- * provide multiple GOP devices, not all of which are
- * backed by real hardware. The workaround is to search
- * for a GOP implementing the ConOut protocol, and if
- * one isn't found, to just fall back to the first GOP.
- */
- width = info->horizontal_resolution;
- height = info->vertical_resolution;
- pixel_format = info->pixel_format;
- pixel_info = info->pixel_information;
- pixels_per_scan_line = info->pixels_per_scan_line;
- fb_base = current_fb_base;
-
- /*
- * Once we've found a GOP supporting ConOut,
- * don't bother looking any further.
- */
- first_gop = gop32;
- if (conout_found)
- break;
- }
- }
-
- /* Did we find any GOPs? */
- if (!first_gop)
- goto out;
-
- /* EFI framebuffer */
- si->orig_video_isVGA = VIDEO_TYPE_EFI;
-
- si->lfb_width = width;
- si->lfb_height = height;
- si->lfb_base = fb_base;
-
- ext_lfb_base = (u64)(unsigned long)fb_base >> 32;
- if (ext_lfb_base) {
- si->capabilities |= VIDEO_CAPABILITY_64BIT_BASE;
- si->ext_lfb_base = ext_lfb_base;
- }
-
- si->pages = 1;
-
- setup_pixel_info(si, pixels_per_scan_line, pixel_info, pixel_format);
-
- si->lfb_size = si->lfb_linelength * si->lfb_height;
-
- si->capabilities |= VIDEO_CAPABILITY_SKIP_QUIRKS;
-out:
- return status;
-}
-
-static efi_status_t
-__gop_query64(struct efi_graphics_output_protocol_64 *gop64,
- struct efi_graphics_output_mode_info **info,
- unsigned long *size, u64 *fb_base)
-{
- struct efi_graphics_output_protocol_mode_64 *mode;
- efi_status_t status;
- unsigned long m;
-
- m = gop64->mode;
- mode = (struct efi_graphics_output_protocol_mode_64 *)m;
-
- status = efi_early->call(gop64->query_mode, gop64,
- mode->mode, size, info);
- if (status != EFI_SUCCESS)
- return status;
-
- *fb_base = mode->frame_buffer_base;
- return status;
-}
-
-static efi_status_t
-setup_gop64(struct screen_info *si, efi_guid_t *proto,
- unsigned long size, void **gop_handle)
-{
- struct efi_graphics_output_protocol_64 *gop64, *first_gop;
- unsigned long nr_gops;
- u16 width, height;
- u32 pixels_per_scan_line;
- u32 ext_lfb_base;
- u64 fb_base;
- struct efi_pixel_bitmask pixel_info;
- int pixel_format;
- efi_status_t status;
- u64 *handles = (u64 *)(unsigned long)gop_handle;
- int i;
-
- first_gop = NULL;
- gop64 = NULL;
-
- nr_gops = size / sizeof(u64);
- for (i = 0; i < nr_gops; i++) {
- struct efi_graphics_output_mode_info *info = NULL;
- efi_guid_t conout_proto = EFI_CONSOLE_OUT_DEVICE_GUID;
- bool conout_found = false;
- void *dummy = NULL;
- u64 h = handles[i];
- u64 current_fb_base;
-
- status = efi_call_early(handle_protocol, h,
- proto, (void **)&gop64);
- if (status != EFI_SUCCESS)
- continue;
-
- status = efi_call_early(handle_protocol, h,
- &conout_proto, &dummy);
- if (status == EFI_SUCCESS)
- conout_found = true;
-
- status = __gop_query64(gop64, &info, &size, &current_fb_base);
- if (status == EFI_SUCCESS && (!first_gop || conout_found)) {
- /*
- * Systems that use the UEFI Console Splitter may
- * provide multiple GOP devices, not all of which are
- * backed by real hardware. The workaround is to search
- * for a GOP implementing the ConOut protocol, and if
- * one isn't found, to just fall back to the first GOP.
- */
- width = info->horizontal_resolution;
- height = info->vertical_resolution;
- pixel_format = info->pixel_format;
- pixel_info = info->pixel_information;
- pixels_per_scan_line = info->pixels_per_scan_line;
- fb_base = current_fb_base;
-
- /*
- * Once we've found a GOP supporting ConOut,
- * don't bother looking any further.
- */
- first_gop = gop64;
- if (conout_found)
- break;
- }
- }
-
- /* Did we find any GOPs? */
- if (!first_gop)
- goto out;
-
- /* EFI framebuffer */
- si->orig_video_isVGA = VIDEO_TYPE_EFI;
-
- si->lfb_width = width;
- si->lfb_height = height;
- si->lfb_base = fb_base;
-
- ext_lfb_base = (u64)(unsigned long)fb_base >> 32;
- if (ext_lfb_base) {
- si->capabilities |= VIDEO_CAPABILITY_64BIT_BASE;
- si->ext_lfb_base = ext_lfb_base;
- }
-
- si->pages = 1;
-
- setup_pixel_info(si, pixels_per_scan_line, pixel_info, pixel_format);
-
- si->lfb_size = si->lfb_linelength * si->lfb_height;
-
- si->capabilities |= VIDEO_CAPABILITY_SKIP_QUIRKS;
-out:
- return status;
-}
-
-/*
- * See if we have Graphics Output Protocol
- */
-static efi_status_t setup_gop(struct screen_info *si, efi_guid_t *proto,
- unsigned long size)
-{
- efi_status_t status;
- void **gop_handle = NULL;
-
- status = efi_call_early(allocate_pool, EFI_LOADER_DATA,
- size, (void **)&gop_handle);
- if (status != EFI_SUCCESS)
- return status;
-
- status = efi_call_early(locate_handle,
- EFI_LOCATE_BY_PROTOCOL,
- proto, NULL, &size, gop_handle);
- if (status != EFI_SUCCESS)
- goto free_handle;
-
- if (efi_early->is64)
- status = setup_gop64(si, proto, size, gop_handle);
- else
- status = setup_gop32(si, proto, size, gop_handle);
-
-free_handle:
- efi_call_early(free_pool, gop_handle);
- return status;
-}
-
static efi_status_t
setup_uga32(void **uga_handle, unsigned long size, u32 *width, u32 *height)
{
@@ -1038,7 +732,7 @@ void setup_graphics(struct boot_params *boot_params)
EFI_LOCATE_BY_PROTOCOL,
&graphics_proto, NULL, &size, gop_handle);
if (status == EFI_BUFFER_TOO_SMALL)
- status = setup_gop(si, &graphics_proto, size);
+ status = efi_setup_gop(NULL, si, &graphics_proto, size);
if (status != EFI_SUCCESS) {
size = 0;
diff --git a/arch/x86/boot/compressed/eboot.h b/arch/x86/boot/compressed/eboot.h
index d487e727f1ec..c0223f1a89d7 100644
--- a/arch/x86/boot/compressed/eboot.h
+++ b/arch/x86/boot/compressed/eboot.h
@@ -11,80 +11,6 @@
#define DESC_TYPE_CODE_DATA (1 << 0)
-#define EFI_CONSOLE_OUT_DEVICE_GUID \
- EFI_GUID(0xd3b36f2c, 0xd551, 0x11d4, 0x9a, 0x46, 0x0, 0x90, 0x27, \
- 0x3f, 0xc1, 0x4d)
-
-#define PIXEL_RGB_RESERVED_8BIT_PER_COLOR 0
-#define PIXEL_BGR_RESERVED_8BIT_PER_COLOR 1
-#define PIXEL_BIT_MASK 2
-#define PIXEL_BLT_ONLY 3
-#define PIXEL_FORMAT_MAX 4
-
-struct efi_pixel_bitmask {
- u32 red_mask;
- u32 green_mask;
- u32 blue_mask;
- u32 reserved_mask;
-};
-
-struct efi_graphics_output_mode_info {
- u32 version;
- u32 horizontal_resolution;
- u32 vertical_resolution;
- int pixel_format;
- struct efi_pixel_bitmask pixel_information;
- u32 pixels_per_scan_line;
-} __packed;
-
-struct efi_graphics_output_protocol_mode_32 {
- u32 max_mode;
- u32 mode;
- u32 info;
- u32 size_of_info;
- u64 frame_buffer_base;
- u32 frame_buffer_size;
-} __packed;
-
-struct efi_graphics_output_protocol_mode_64 {
- u32 max_mode;
- u32 mode;
- u64 info;
- u64 size_of_info;
- u64 frame_buffer_base;
- u64 frame_buffer_size;
-} __packed;
-
-struct efi_graphics_output_protocol_mode {
- u32 max_mode;
- u32 mode;
- unsigned long info;
- unsigned long size_of_info;
- u64 frame_buffer_base;
- unsigned long frame_buffer_size;
-} __packed;
-
-struct efi_graphics_output_protocol_32 {
- u32 query_mode;
- u32 set_mode;
- u32 blt;
- u32 mode;
-};
-
-struct efi_graphics_output_protocol_64 {
- u64 query_mode;
- u64 set_mode;
- u64 blt;
- u64 mode;
-};
-
-struct efi_graphics_output_protocol {
- void *query_mode;
- unsigned long set_mode;
- unsigned long blt;
- struct efi_graphics_output_protocol_mode *mode;
-};
-
struct efi_uga_draw_protocol_32 {
u32 get_mode;
u32 set_mode;
diff --git a/arch/x86/boot/compressed/error.c b/arch/x86/boot/compressed/error.c
new file mode 100644
index 000000000000..6248740b68b5
--- /dev/null
+++ b/arch/x86/boot/compressed/error.c
@@ -0,0 +1,22 @@
+/*
+ * Callers outside of misc.c need access to the error reporting routines,
+ * but the *_putstr() functions need to stay in misc.c because of how
+ * memcpy() and memmove() are defined for the compressed boot environment.
+ */
+#include "misc.h"
+
+void warn(char *m)
+{
+ error_putstr("\n\n");
+ error_putstr(m);
+ error_putstr("\n\n");
+}
+
+void error(char *m)
+{
+ warn(m);
+ error_putstr(" -- System halted");
+
+ while (1)
+ asm("hlt");
+}
diff --git a/arch/x86/boot/compressed/error.h b/arch/x86/boot/compressed/error.h
new file mode 100644
index 000000000000..2e59dac07f9e
--- /dev/null
+++ b/arch/x86/boot/compressed/error.h
@@ -0,0 +1,7 @@
+#ifndef BOOT_COMPRESSED_ERROR_H
+#define BOOT_COMPRESSED_ERROR_H
+
+void warn(char *m);
+void error(char *m);
+
+#endif /* BOOT_COMPRESSED_ERROR_H */
diff --git a/arch/x86/boot/compressed/head_32.S b/arch/x86/boot/compressed/head_32.S
index 0256064da8da..1038524270e7 100644
--- a/arch/x86/boot/compressed/head_32.S
+++ b/arch/x86/boot/compressed/head_32.S
@@ -176,7 +176,9 @@ preferred_addr:
1:
/* Target address to relocate to for decompression */
- addl $z_extract_offset, %ebx
+ movl BP_init_size(%esi), %eax
+ subl $_end, %eax
+ addl %eax, %ebx
/* Set up the stack */
leal boot_stack_end(%ebx), %esp
@@ -233,24 +235,28 @@ relocated:
2:
/*
- * Do the decompression, and jump to the new kernel..
+ * Do the extraction, and jump to the new kernel..
*/
- /* push arguments for decompress_kernel: */
- pushl $z_run_size /* size of kernel with .bss and .brk */
+ /* push arguments for extract_kernel: */
pushl $z_output_len /* decompressed length, end of relocs */
- leal z_extract_offset_negative(%ebx), %ebp
+
+ movl BP_init_size(%esi), %eax
+ subl $_end, %eax
+ movl %ebx, %ebp
+ subl %eax, %ebp
pushl %ebp /* output address */
+
pushl $z_input_len /* input_len */
leal input_data(%ebx), %eax
pushl %eax /* input_data */
leal boot_heap(%ebx), %eax
pushl %eax /* heap area */
pushl %esi /* real mode pointer */
- call decompress_kernel /* returns kernel location in %eax */
- addl $28, %esp
+ call extract_kernel /* returns kernel location in %eax */
+ addl $24, %esp
/*
- * Jump to the decompressed kernel.
+ * Jump to the extracted kernel.
*/
xorl %ebx, %ebx
jmp *%eax
diff --git a/arch/x86/boot/compressed/head_64.S b/arch/x86/boot/compressed/head_64.S
index 86558a199139..0d80a7ad65cd 100644
--- a/arch/x86/boot/compressed/head_64.S
+++ b/arch/x86/boot/compressed/head_64.S
@@ -110,7 +110,9 @@ ENTRY(startup_32)
1:
/* Target address to relocate to for decompression */
- addl $z_extract_offset, %ebx
+ movl BP_init_size(%esi), %eax
+ subl $_end, %eax
+ addl %eax, %ebx
/*
* Prepare for entering 64 bit mode
@@ -132,7 +134,7 @@ ENTRY(startup_32)
/* Initialize Page tables to 0 */
leal pgtable(%ebx), %edi
xorl %eax, %eax
- movl $((4096*6)/4), %ecx
+ movl $(BOOT_INIT_PGT_SIZE/4), %ecx
rep stosl
/* Build Level 4 */
@@ -338,7 +340,9 @@ preferred_addr:
1:
/* Target address to relocate to for decompression */
- leaq z_extract_offset(%rbp), %rbx
+ movl BP_init_size(%rsi), %ebx
+ subl $_end, %ebx
+ addq %rbp, %rbx
/* Set up the stack */
leaq boot_stack_end(%rbx), %rsp
@@ -408,19 +412,16 @@ relocated:
2:
/*
- * Do the decompression, and jump to the new kernel..
+ * Do the extraction, and jump to the new kernel..
*/
pushq %rsi /* Save the real mode argument */
- movq $z_run_size, %r9 /* size of kernel with .bss and .brk */
- pushq %r9
movq %rsi, %rdi /* real mode address */
leaq boot_heap(%rip), %rsi /* malloc area for uncompression */
leaq input_data(%rip), %rdx /* input_data */
movl $z_input_len, %ecx /* input_len */
movq %rbp, %r8 /* output target address */
movq $z_output_len, %r9 /* decompressed length, end of relocs */
- call decompress_kernel /* returns kernel location in %rax */
- popq %r9
+ call extract_kernel /* returns kernel location in %rax */
popq %rsi
/*
@@ -485,4 +486,4 @@ boot_stack_end:
.section ".pgtable","a",@nobits
.balign 4096
pgtable:
- .fill 6*4096, 1, 0
+ .fill BOOT_PGT_SIZE, 1, 0
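[Editorial note: the head_32.S/head_64.S hunks above replace the link-time z_extract_offset constant with a value computed at run time from boot_params: the relocation target becomes the chosen output address plus (BP_init_size - _end), where _end is the link-time size of the compressed bootstrap image and init_size is the total buffer the decompressor needs. As a rough worked example under those assumptions: with init_size of 64 MiB and an 8 MiB compressed image, the image is copied to output + 56 MiB, i.e. to the tail of the decompression buffer, so in-place extraction does not overwrite compressed data it has not yet consumed.]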
diff --git a/arch/x86/boot/compressed/kaslr.c b/arch/x86/boot/compressed/kaslr.c
new file mode 100644
index 000000000000..cfeb0259ed81
--- /dev/null
+++ b/arch/x86/boot/compressed/kaslr.c
@@ -0,0 +1,510 @@
+/*
+ * kaslr.c
+ *
+ * This contains the routines needed to generate a reasonable level of
+ * entropy to choose a randomized kernel base address offset in support
+ * of Kernel Address Space Layout Randomization (KASLR). Additionally
+ * handles walking the physical memory maps (and tracking memory regions
+ * to avoid) in order to select a physical memory location that can
+ * contain the entire properly aligned running kernel image.
+ *
+ */
+#include "misc.h"
+#include "error.h"
+
+#include <asm/msr.h>
+#include <asm/archrandom.h>
+#include <asm/e820.h>
+
+#include <generated/compile.h>
+#include <linux/module.h>
+#include <linux/uts.h>
+#include <linux/utsname.h>
+#include <generated/utsrelease.h>
+
+/* Simplified build-specific string for starting entropy. */
+static const char build_str[] = UTS_RELEASE " (" LINUX_COMPILE_BY "@"
+ LINUX_COMPILE_HOST ") (" LINUX_COMPILER ") " UTS_VERSION;
+
+#define I8254_PORT_CONTROL 0x43
+#define I8254_PORT_COUNTER0 0x40
+#define I8254_CMD_READBACK 0xC0
+#define I8254_SELECT_COUNTER0 0x02
+#define I8254_STATUS_NOTREADY 0x40
+static inline u16 i8254(void)
+{
+ u16 status, timer;
+
+ do {
+ outb(I8254_PORT_CONTROL,
+ I8254_CMD_READBACK | I8254_SELECT_COUNTER0);
+ status = inb(I8254_PORT_COUNTER0);
+ timer = inb(I8254_PORT_COUNTER0);
+ timer |= inb(I8254_PORT_COUNTER0) << 8;
+ } while (status & I8254_STATUS_NOTREADY);
+
+ return timer;
+}
+
+static unsigned long rotate_xor(unsigned long hash, const void *area,
+ size_t size)
+{
+ size_t i;
+ unsigned long *ptr = (unsigned long *)area;
+
+ for (i = 0; i < size / sizeof(hash); i++) {
+ /* Rotate by odd number of bits and XOR. */
+ hash = (hash << ((sizeof(hash) * 8) - 7)) | (hash >> 7);
+ hash ^= ptr[i];
+ }
+
+ return hash;
+}
+
+/* Attempt to create a simple but unpredictable starting entropy. */
+static unsigned long get_random_boot(void)
+{
+ unsigned long hash = 0;
+
+ hash = rotate_xor(hash, build_str, sizeof(build_str));
+ hash = rotate_xor(hash, boot_params, sizeof(*boot_params));
+
+ return hash;
+}
+
+static unsigned long get_random_long(const char *purpose)
+{
+#ifdef CONFIG_X86_64
+ const unsigned long mix_const = 0x5d6008cbf3848dd3UL;
+#else
+ const unsigned long mix_const = 0x3f39e593UL;
+#endif
+ unsigned long raw, random = get_random_boot();
+ bool use_i8254 = true;
+
+ debug_putstr(purpose);
+ debug_putstr(" KASLR using");
+
+ if (has_cpuflag(X86_FEATURE_RDRAND)) {
+ debug_putstr(" RDRAND");
+ if (rdrand_long(&raw)) {
+ random ^= raw;
+ use_i8254 = false;
+ }
+ }
+
+ if (has_cpuflag(X86_FEATURE_TSC)) {
+ debug_putstr(" RDTSC");
+ raw = rdtsc();
+
+ random ^= raw;
+ use_i8254 = false;
+ }
+
+ if (use_i8254) {
+ debug_putstr(" i8254");
+ random ^= i8254();
+ }
+
+ /* Circular multiply for better bit diffusion */
+ asm("mul %3"
+ : "=a" (random), "=d" (raw)
+ : "a" (random), "rm" (mix_const));
+ random += raw;
+
+ debug_putstr("...\n");
+
+ return random;
+}
+
+struct mem_vector {
+ unsigned long start;
+ unsigned long size;
+};
+
+enum mem_avoid_index {
+ MEM_AVOID_ZO_RANGE = 0,
+ MEM_AVOID_INITRD,
+ MEM_AVOID_CMDLINE,
+ MEM_AVOID_BOOTPARAMS,
+ MEM_AVOID_MAX,
+};
+
+static struct mem_vector mem_avoid[MEM_AVOID_MAX];
+
+static bool mem_contains(struct mem_vector *region, struct mem_vector *item)
+{
+ /* Item at least partially before region. */
+ if (item->start < region->start)
+ return false;
+ /* Item at least partially after region. */
+ if (item->start + item->size > region->start + region->size)
+ return false;
+ return true;
+}
+
+static bool mem_overlaps(struct mem_vector *one, struct mem_vector *two)
+{
+ /* Item one is entirely before item two. */
+ if (one->start + one->size <= two->start)
+ return false;
+ /* Item one is entirely after item two. */
+ if (one->start >= two->start + two->size)
+ return false;
+ return true;
+}
+
+/*
+ * In theory, KASLR can put the kernel anywhere in the range of [16M, 64T).
+ * The mem_avoid array is used to store the ranges that need to be avoided
+ * when KASLR searches for an appropriate random address. We must avoid any
+ * regions that are unsafe to overlap with during decompression, and other
+ * things like the initrd, cmdline and boot_params. This comment seeks to
+ * explain mem_avoid as clearly as possible since incorrect mem_avoid
+ * memory ranges lead to really hard-to-debug boot failures.
+ *
+ * The initrd, cmdline, and boot_params are trivial to identify for
+ * avoiding. They are MEM_AVOID_INITRD, MEM_AVOID_CMDLINE, and
+ * MEM_AVOID_BOOTPARAMS respectively below.
+ *
+ * What is less obvious is how to avoid the range of memory that is used
+ * during decompression (MEM_AVOID_ZO_RANGE below). This range must cover
+ * the compressed kernel (ZO) and its run space, which is used to extract
+ * the uncompressed kernel (VO) and relocs.
+ *
+ * ZO's full run size sits against the end of the decompression buffer, so
+ * we can calculate where text, data, bss, etc of ZO are positioned more
+ * easily.
+ *
+ * For additional background, the decompression calculations can be found
+ * in header.S, and the memory diagram is based on the one found in misc.c.
+ *
+ * The following conditions are already enforced by the image layouts and
+ * associated code:
+ * - input + input_size >= output + output_size
+ * - kernel_total_size <= init_size
+ * - kernel_total_size <= output_size (see Note below)
+ * - output + init_size >= output + output_size
+ *
+ * (Note that kernel_total_size and output_size have no fundamental
+ * relationship, but output_size is passed to choose_random_location
+ * as a maximum of the two. The diagram is showing a case where
+ * kernel_total_size is larger than output_size, but this case is
+ * handled by bumping output_size.)
+ *
+ * The above conditions can be illustrated by a diagram:
+ *
+ * 0 output input input+input_size output+init_size
+ * | | | | |
+ * | | | | |
+ * |-----|--------|--------|--------------|-----------|--|-------------|
+ * | | |
+ * | | |
+ * output+init_size-ZO_INIT_SIZE output+output_size output+kernel_total_size
+ *
+ * [output, output+init_size) is the entire memory range used for
+ * extracting the compressed image.
+ *
+ * [output, output+kernel_total_size) is the range needed for the
+ * uncompressed kernel (VO) and its run size (bss, brk, etc).
+ *
+ * [output, output+output_size) is VO plus relocs (i.e. the entire
+ * uncompressed payload contained by ZO). This is the area of the buffer
+ * written to during decompression.
+ *
+ * [output+init_size-ZO_INIT_SIZE, output+init_size) is the worst-case
+ * range of the copied ZO and decompression code. (i.e. the range of
+ * size ZO_INIT_SIZE, counted backwards from output+init_size.)
+ *
+ * [input, input+input_size) is the original copied compressed image (ZO)
+ * (i.e. it does not include its run size). This range must be avoided
+ * because it contains the data used for decompression.
+ *
+ * [input+input_size, output+init_size) is [_text, _end) for ZO. This
+ * range includes ZO's heap and stack, and must be avoided since the
+ * decompression is performed there.
+ *
+ * Since the above two ranges need to be avoided and they are adjacent,
+ * they can be merged, resulting in: [input, output+init_size) which
+ * becomes the MEM_AVOID_ZO_RANGE below.
+ */
+static void mem_avoid_init(unsigned long input, unsigned long input_size,
+ unsigned long output)
+{
+ unsigned long init_size = boot_params->hdr.init_size;
+ u64 initrd_start, initrd_size;
+ u64 cmd_line, cmd_line_size;
+ char *ptr;
+
+ /*
+ * Avoid the region that is unsafe to overlap during
+ * decompression.
+ */
+ mem_avoid[MEM_AVOID_ZO_RANGE].start = input;
+ mem_avoid[MEM_AVOID_ZO_RANGE].size = (output + init_size) - input;
+ add_identity_map(mem_avoid[MEM_AVOID_ZO_RANGE].start,
+ mem_avoid[MEM_AVOID_ZO_RANGE].size);
+
+ /* Avoid initrd. */
+ initrd_start = (u64)boot_params->ext_ramdisk_image << 32;
+ initrd_start |= boot_params->hdr.ramdisk_image;
+ initrd_size = (u64)boot_params->ext_ramdisk_size << 32;
+ initrd_size |= boot_params->hdr.ramdisk_size;
+ mem_avoid[MEM_AVOID_INITRD].start = initrd_start;
+ mem_avoid[MEM_AVOID_INITRD].size = initrd_size;
+ /* No need to set mapping for initrd, it will be handled in VO. */
+
+ /* Avoid kernel command line. */
+ cmd_line = (u64)boot_params->ext_cmd_line_ptr << 32;
+ cmd_line |= boot_params->hdr.cmd_line_ptr;
+ /* Calculate size of cmd_line. */
+ ptr = (char *)(unsigned long)cmd_line;
+ for (cmd_line_size = 0; ptr[cmd_line_size++]; )
+ ;
+ mem_avoid[MEM_AVOID_CMDLINE].start = cmd_line;
+ mem_avoid[MEM_AVOID_CMDLINE].size = cmd_line_size;
+ add_identity_map(mem_avoid[MEM_AVOID_CMDLINE].start,
+ mem_avoid[MEM_AVOID_CMDLINE].size);
+
+ /* Avoid boot parameters. */
+ mem_avoid[MEM_AVOID_BOOTPARAMS].start = (unsigned long)boot_params;
+ mem_avoid[MEM_AVOID_BOOTPARAMS].size = sizeof(*boot_params);
+ add_identity_map(mem_avoid[MEM_AVOID_BOOTPARAMS].start,
+ mem_avoid[MEM_AVOID_BOOTPARAMS].size);
+
+ /* We don't need to set a mapping for setup_data. */
+
+#ifdef CONFIG_X86_VERBOSE_BOOTUP
+ /* Make sure video RAM can be used. */
+ add_identity_map(0, PMD_SIZE);
+#endif
+}
+
+/*
+ * Does this memory vector overlap a known avoided area? If so, record the
+ * overlap region with the lowest address.
+ */
+static bool mem_avoid_overlap(struct mem_vector *img,
+ struct mem_vector *overlap)
+{
+ int i;
+ struct setup_data *ptr;
+ unsigned long earliest = img->start + img->size;
+ bool is_overlapping = false;
+
+ for (i = 0; i < MEM_AVOID_MAX; i++) {
+ if (mem_overlaps(img, &mem_avoid[i]) &&
+ mem_avoid[i].start < earliest) {
+ *overlap = mem_avoid[i];
+ is_overlapping = true;
+ }
+ }
+
+ /* Avoid all entries in the setup_data linked list. */
+ ptr = (struct setup_data *)(unsigned long)boot_params->hdr.setup_data;
+ while (ptr) {
+ struct mem_vector avoid;
+
+ avoid.start = (unsigned long)ptr;
+ avoid.size = sizeof(*ptr) + ptr->len;
+
+ if (mem_overlaps(img, &avoid) && (avoid.start < earliest)) {
+ *overlap = avoid;
+ is_overlapping = true;
+ }
+
+ ptr = (struct setup_data *)(unsigned long)ptr->next;
+ }
+
+ return is_overlapping;
+}
+
+static unsigned long slots[KERNEL_IMAGE_SIZE / CONFIG_PHYSICAL_ALIGN];
+
+struct slot_area {
+ unsigned long addr;
+ int num;
+};
+
+#define MAX_SLOT_AREA 100
+
+static struct slot_area slot_areas[MAX_SLOT_AREA];
+
+static unsigned long slot_max;
+
+static unsigned long slot_area_index;
+
+static void store_slot_info(struct mem_vector *region, unsigned long image_size)
+{
+ struct slot_area slot_area;
+
+ if (slot_area_index == MAX_SLOT_AREA)
+ return;
+
+ slot_area.addr = region->start;
+ slot_area.num = (region->size - image_size) /
+ CONFIG_PHYSICAL_ALIGN + 1;
+
+ if (slot_area.num > 0) {
+ slot_areas[slot_area_index++] = slot_area;
+ slot_max += slot_area.num;
+ }
+}
+
+static void slots_append(unsigned long addr)
+{
+ /* Overflowing the slots list should be impossible. */
+ if (slot_max >= KERNEL_IMAGE_SIZE / CONFIG_PHYSICAL_ALIGN)
+ return;
+
+ slots[slot_max++] = addr;
+}
+
+static unsigned long slots_fetch_random(void)
+{
+ /* Handle case of no slots stored. */
+ if (slot_max == 0)
+ return 0;
+
+ return slots[get_random_long("Physical") % slot_max];
+}
+
+static void process_e820_entry(struct e820entry *entry,
+ unsigned long minimum,
+ unsigned long image_size)
+{
+ struct mem_vector region, img, overlap;
+
+ /* Skip non-RAM entries. */
+ if (entry->type != E820_RAM)
+ return;
+
+ /* Ignore entries entirely above our maximum. */
+ if (entry->addr >= KERNEL_IMAGE_SIZE)
+ return;
+
+ /* Ignore entries entirely below our minimum. */
+ if (entry->addr + entry->size < minimum)
+ return;
+
+ region.start = entry->addr;
+ region.size = entry->size;
+
+ /* Potentially raise address to minimum location. */
+ if (region.start < minimum)
+ region.start = minimum;
+
+ /* Potentially raise address to meet alignment requirements. */
+ region.start = ALIGN(region.start, CONFIG_PHYSICAL_ALIGN);
+
+ /* Did we raise the address above the bounds of this e820 region? */
+ if (region.start > entry->addr + entry->size)
+ return;
+
+ /* Reduce size by any delta from the original address. */
+ region.size -= region.start - entry->addr;
+
+ /* Reduce maximum size to fit end of image within maximum limit. */
+ if (region.start + region.size > KERNEL_IMAGE_SIZE)
+ region.size = KERNEL_IMAGE_SIZE - region.start;
+
+ /* Walk each aligned slot and check for avoided areas. */
+ for (img.start = region.start, img.size = image_size ;
+ mem_contains(&region, &img) ;
+ img.start += CONFIG_PHYSICAL_ALIGN) {
+ if (mem_avoid_overlap(&img, &overlap))
+ continue;
+ slots_append(img.start);
+ }
+}
+
+static unsigned long find_random_phys_addr(unsigned long minimum,
+ unsigned long image_size)
+{
+ int i;
+ unsigned long addr;
+
+ /* Make sure minimum is aligned. */
+ minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN);
+
+ /* Verify potential e820 positions, appending to slots list. */
+ for (i = 0; i < boot_params->e820_entries; i++) {
+ process_e820_entry(&boot_params->e820_map[i], minimum,
+ image_size);
+ }
+
+ return slots_fetch_random();
+}
+
+static unsigned long find_random_virt_addr(unsigned long minimum,
+ unsigned long image_size)
+{
+ unsigned long slots, random_addr;
+
+ /* Make sure minimum is aligned. */
+ minimum = ALIGN(minimum, CONFIG_PHYSICAL_ALIGN);
+ /* Align image_size for easy slot calculations. */
+ image_size = ALIGN(image_size, CONFIG_PHYSICAL_ALIGN);
+
+ /*
+ * Count how many CONFIG_PHYSICAL_ALIGN-sized slots can hold
+ * image_size within the range from minimum to
+ * KERNEL_IMAGE_SIZE.
+ */
+ slots = (KERNEL_IMAGE_SIZE - minimum - image_size) /
+ CONFIG_PHYSICAL_ALIGN + 1;
+
+ random_addr = get_random_long("Virtual") % slots;
+
+ return random_addr * CONFIG_PHYSICAL_ALIGN + minimum;
+}
+
+/*
+ * Since this function examines addresses much more numerically,
+ * it takes the input and output pointers as 'unsigned long'.
+ */
+unsigned char *choose_random_location(unsigned long input,
+ unsigned long input_size,
+ unsigned long output,
+ unsigned long output_size)
+{
+ unsigned long choice = output;
+ unsigned long random_addr;
+
+#ifdef CONFIG_HIBERNATION
+ if (!cmdline_find_option_bool("kaslr")) {
+ warn("KASLR disabled: 'kaslr' not on cmdline (hibernation selected).");
+ goto out;
+ }
+#else
+ if (cmdline_find_option_bool("nokaslr")) {
+ warn("KASLR disabled: 'nokaslr' on cmdline.");
+ goto out;
+ }
+#endif
+
+ boot_params->hdr.loadflags |= KASLR_FLAG;
+
+ /* Record the various known unsafe memory ranges. */
+ mem_avoid_init(input, input_size, output);
+
+ /* Walk e820 and find a random address. */
+ random_addr = find_random_phys_addr(output, output_size);
+ if (!random_addr) {
+ warn("KASLR disabled: could not find suitable E820 region!");
+ goto out;
+ }
+
+ /* Always enforce the minimum. */
+ if (random_addr < choice)
+ goto out;
+
+ choice = random_addr;
+
+ add_identity_map(choice, output_size);
+
+ /* This actually loads the identity pagetable on x86_64. */
+ finalize_identity_maps();
+out:
+ return (unsigned char *)choice;
+}
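
The entropy mixing above (rotate_xor() plus the circular multiply) is simple enough to exercise outside the boot stub. Below is a minimal userspace sketch of the rotate-and-XOR step only; the sample buffer is an invention standing in for build_str/boot_params, and this is an illustration of the mixing, not the in-kernel code.

#include <stdio.h>
#include <string.h>

/* Fold a buffer into a running hash: rotate by an odd number of bits,
 * then XOR in the next word, mirroring rotate_xor() above. */
static unsigned long rotate_xor_demo(unsigned long hash, const void *area,
				     size_t size)
{
	const unsigned char *p = area;
	unsigned long word;
	size_t i;

	for (i = 0; i + sizeof(word) <= size; i += sizeof(word)) {
		memcpy(&word, p + i, sizeof(word));
		hash = (hash << ((sizeof(hash) * 8) - 7)) | (hash >> 7);
		hash ^= word;
	}
	return hash;
}

int main(void)
{
	/* Made-up stand-in for the build string and boot_params contents. */
	static const char sample[] =
		"4.7.0-rc1 (builder@host) (gcc) #1 SMP example";

	printf("mixed hash: %#lx\n", rotate_xor_demo(0, sample, sizeof(sample)));
	return 0;
}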
diff --git a/arch/x86/boot/compressed/misc.c b/arch/x86/boot/compressed/misc.c
index 79dac1758e7c..f14db4e21654 100644
--- a/arch/x86/boot/compressed/misc.c
+++ b/arch/x86/boot/compressed/misc.c
@@ -1,8 +1,10 @@
/*
* misc.c
*
- * This is a collection of several routines from gzip-1.0.3
- * adapted for Linux.
+ * This is a collection of several routines used to extract the kernel
+ * which includes KASLR relocation, decompression, ELF parsing, and
+ * relocation processing. Additionally included are the screen and serial
+ * output functions and related debugging support functions.
*
* malloc by Hannu Savolainen 1993 and Matthias Urlichs 1994
* puts by Nick Holloway 1993, better puts by Martin Mares 1995
@@ -10,111 +12,37 @@
*/
#include "misc.h"
+#include "error.h"
#include "../string.h"
-
-/* WARNING!!
- * This code is compiled with -fPIC and it is relocated dynamically
- * at run time, but no relocation processing is performed.
- * This means that it is not safe to place pointers in static structures.
- */
+#include "../voffset.h"
/*
- * Getting to provable safe in place decompression is hard.
- * Worst case behaviours need to be analyzed.
- * Background information:
- *
- * The file layout is:
- * magic[2]
- * method[1]
- * flags[1]
- * timestamp[4]
- * extraflags[1]
- * os[1]
- * compressed data blocks[N]
- * crc[4] orig_len[4]
- *
- * resulting in 18 bytes of non compressed data overhead.
- *
- * Files divided into blocks
- * 1 bit (last block flag)
- * 2 bits (block type)
- *
- * 1 block occurs every 32K -1 bytes or when there 50% compression
- * has been achieved. The smallest block type encoding is always used.
- *
- * stored:
- * 32 bits length in bytes.
- *
- * fixed:
- * magic fixed tree.
- * symbols.
- *
- * dynamic:
- * dynamic tree encoding.
- * symbols.
- *
- *
- * The buffer for decompression in place is the length of the
- * uncompressed data, plus a small amount extra to keep the algorithm safe.
- * The compressed data is placed at the end of the buffer. The output
- * pointer is placed at the start of the buffer and the input pointer
- * is placed where the compressed data starts. Problems will occur
- * when the output pointer overruns the input pointer.
- *
- * The output pointer can only overrun the input pointer if the input
- * pointer is moving faster than the output pointer. A condition only
- * triggered by data whose compressed form is larger than the uncompressed
- * form.
- *
- * The worst case at the block level is a growth of the compressed data
- * of 5 bytes per 32767 bytes.
- *
- * The worst case internal to a compressed block is very hard to figure.
- * The worst case can at least be boundined by having one bit that represents
- * 32764 bytes and then all of the rest of the bytes representing the very
- * very last byte.
- *
- * All of which is enough to compute an amount of extra data that is required
- * to be safe. To avoid problems at the block level allocating 5 extra bytes
- * per 32767 bytes of data is sufficient. To avoind problems internal to a
- * block adding an extra 32767 bytes (the worst case uncompressed block size)
- * is sufficient, to ensure that in the worst case the decompressed data for
- * block will stop the byte before the compressed data for a block begins.
- * To avoid problems with the compressed data's meta information an extra 18
- * bytes are needed. Leading to the formula:
- *
- * extra_bytes = (uncompressed_size >> 12) + 32768 + 18 + decompressor_size.
- *
- * Adding 8 bytes per 32K is a bit excessive but much easier to calculate.
- * Adding 32768 instead of 32767 just makes for round numbers.
- * Adding the decompressor_size is necessary as it musht live after all
- * of the data as well. Last I measured the decompressor is about 14K.
- * 10K of actual data and 4K of bss.
- *
+ * WARNING!!
+ * This code is compiled with -fPIC and it is relocated dynamically at
+ * run time, but no relocation processing is performed. This means that
+ * it is not safe to place pointers in static structures.
*/
-/*
- * gzip declarations
- */
+/* Macros used by the included decompressor code below. */
#define STATIC static
-#undef memcpy
-
/*
- * Use a normal definition of memset() from string.c. There are already
+ * Use normal definitions of mem*() from string.c. There are already
* included header files which expect a definition of memset() and by
* the time we define memset macro, it is too late.
*/
+#undef memcpy
#undef memset
#define memzero(s, n) memset((s), 0, (n))
+#define memmove memmove
-
-static void error(char *m);
+/* Functions used by the included decompressor code below. */
+void *memmove(void *dest, const void *src, size_t n);
/*
* This is set up by the setup-routine at boot-time
*/
-struct boot_params *real_mode; /* Pointer to real-mode data */
+struct boot_params *boot_params;
memptr free_mem_ptr;
memptr free_mem_end_ptr;
@@ -146,12 +74,16 @@ static int lines, cols;
#ifdef CONFIG_KERNEL_LZ4
#include "../../../../lib/decompress_unlz4.c"
#endif
+/*
+ * NOTE: When adding a new decompressor, please update the analysis in
+ * ../header.S.
+ */
static void scroll(void)
{
int i;
- memcpy(vidmem, vidmem + cols * 2, (lines - 1) * cols * 2);
+ memmove(vidmem, vidmem + cols * 2, (lines - 1) * cols * 2);
for (i = (lines - 1) * cols * 2; i < lines * cols * 2; i += 2)
vidmem[i] = ' ';
}
@@ -184,12 +116,12 @@ void __putstr(const char *s)
}
}
- if (real_mode->screen_info.orig_video_mode == 0 &&
+ if (boot_params->screen_info.orig_video_mode == 0 &&
lines == 0 && cols == 0)
return;
- x = real_mode->screen_info.orig_x;
- y = real_mode->screen_info.orig_y;
+ x = boot_params->screen_info.orig_x;
+ y = boot_params->screen_info.orig_y;
while ((c = *s++) != '\0') {
if (c == '\n') {
@@ -210,8 +142,8 @@ void __putstr(const char *s)
}
}
- real_mode->screen_info.orig_x = x;
- real_mode->screen_info.orig_y = y;
+ boot_params->screen_info.orig_x = x;
+ boot_params->screen_info.orig_y = y;
pos = (x + cols * y) * 2; /* Update cursor position */
outb(14, vidport);
@@ -237,23 +169,13 @@ void __puthex(unsigned long value)
}
}
-static void error(char *x)
-{
- error_putstr("\n\n");
- error_putstr(x);
- error_putstr("\n\n -- System halted");
-
- while (1)
- asm("hlt");
-}
-
#if CONFIG_X86_NEED_RELOCS
static void handle_relocations(void *output, unsigned long output_len)
{
int *reloc;
unsigned long delta, map, ptr;
unsigned long min_addr = (unsigned long)output;
- unsigned long max_addr = min_addr + output_len;
+ unsigned long max_addr = min_addr + (VO___bss_start - VO__text);
/*
* Calculate the delta between where vmlinux was linked to load
@@ -295,7 +217,7 @@ static void handle_relocations(void *output, unsigned long output_len)
* So we work backwards from the end of the decompressed image.
*/
for (reloc = output + output_len - sizeof(*reloc); *reloc; reloc--) {
- int extended = *reloc;
+ long extended = *reloc;
extended += map;
ptr = (unsigned long)extended;
@@ -372,9 +294,7 @@ static void parse_elf(void *output)
#else
dest = (void *)(phdr->p_paddr);
#endif
- memcpy(dest,
- output + phdr->p_offset,
- phdr->p_filesz);
+ memmove(dest, output + phdr->p_offset, phdr->p_filesz);
break;
default: /* Ignore other PT_* */ break;
}
@@ -383,23 +303,41 @@ static void parse_elf(void *output)
free(phdrs);
}
-asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap,
+/*
+ * The compressed kernel image (ZO) has been moved so that its position
+ * is against the end of the buffer used to hold the uncompressed kernel
+ * image (VO) and the execution environment (.bss, .brk), which makes sure
+ * there is room to do the in-place decompression. (See header.S for the
+ * calculations.)
+ *
+ * |-----compressed kernel image------|
+ * V V
+ * 0 extract_offset +INIT_SIZE
+ * |-----------|---------------|-------------------------|--------|
+ * | | | |
+ * VO__text startup_32 of ZO VO__end ZO__end
+ * ^ ^
+ * |-------uncompressed kernel image---------|
+ *
+ */
+asmlinkage __visible void *extract_kernel(void *rmode, memptr heap,
unsigned char *input_data,
unsigned long input_len,
unsigned char *output,
- unsigned long output_len,
- unsigned long run_size)
+ unsigned long output_len)
{
+ const unsigned long kernel_total_size = VO__end - VO__text;
unsigned char *output_orig = output;
- real_mode = rmode;
+ /* Retain x86 boot parameters pointer passed from startup_32/64. */
+ boot_params = rmode;
- /* Clear it for solely in-kernel use */
- real_mode->hdr.loadflags &= ~KASLR_FLAG;
+ /* Clear flags intended for solely in-kernel use. */
+ boot_params->hdr.loadflags &= ~KASLR_FLAG;
- sanitize_boot_params(real_mode);
+ sanitize_boot_params(boot_params);
- if (real_mode->screen_info.orig_video_mode == 7) {
+ if (boot_params->screen_info.orig_video_mode == 7) {
vidmem = (char *) 0xb0000;
vidport = 0x3b4;
} else {
@@ -407,11 +345,11 @@ asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap,
vidport = 0x3d4;
}
- lines = real_mode->screen_info.orig_video_lines;
- cols = real_mode->screen_info.orig_video_cols;
+ lines = boot_params->screen_info.orig_video_lines;
+ cols = boot_params->screen_info.orig_video_cols;
console_init();
- debug_putstr("early console in decompress_kernel\n");
+ debug_putstr("early console in extract_kernel\n");
free_mem_ptr = heap; /* Heap */
free_mem_end_ptr = heap + BOOT_HEAP_SIZE;
@@ -421,16 +359,16 @@ asmlinkage __visible void *decompress_kernel(void *rmode, memptr heap,
debug_putaddr(input_len);
debug_putaddr(output);
debug_putaddr(output_len);
- debug_putaddr(run_size);
+ debug_putaddr(kernel_total_size);
/*
* The memory hole needed for the kernel is the larger of either
* the entire decompressed kernel plus relocation table, or the
* entire decompressed kernel plus .bss and .brk sections.
*/
- output = choose_kernel_location(real_mode, input_data, input_len, output,
- output_len > run_size ? output_len
- : run_size);
+ output = choose_random_location((unsigned long)input_data, input_len,
+ (unsigned long)output,
+ max(output_len, kernel_total_size));
/* Validate memory location choices. */
if ((unsigned long)output & (MIN_KERNEL_ALIGN - 1))
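
For the choose_random_location() call above, the size handed over is the larger of output_len (VO plus relocs) and kernel_total_size (VO__end - VO__text). A tiny sketch of that choice with invented sizes; the real values come from voffset.h and the compressed payload, not from these constants.

#include <stdio.h>

/* Hypothetical example sizes for illustration only. */
#define EXAMPLE_OUTPUT_LEN        (25UL << 20)	/* VO + relocs */
#define EXAMPLE_KERNEL_TOTAL_SIZE (28UL << 20)	/* VO__end - VO__text */

int main(void)
{
	/* The memory hole must cover whichever footprint is bigger. */
	unsigned long hole = EXAMPLE_OUTPUT_LEN > EXAMPLE_KERNEL_TOTAL_SIZE ?
			     EXAMPLE_OUTPUT_LEN : EXAMPLE_KERNEL_TOTAL_SIZE;

	printf("memory hole needed: %lu MiB\n", hole >> 20);
	return 0;
}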
diff --git a/arch/x86/boot/compressed/misc.h b/arch/x86/boot/compressed/misc.h
index 3783dc3e10b3..b6fec1ff10e4 100644
--- a/arch/x86/boot/compressed/misc.h
+++ b/arch/x86/boot/compressed/misc.h
@@ -32,7 +32,7 @@
/* misc.c */
extern memptr free_mem_ptr;
extern memptr free_mem_end_ptr;
-extern struct boot_params *real_mode; /* Pointer to real-mode data */
+extern struct boot_params *boot_params;
void __putstr(const char *s);
void __puthex(unsigned long value);
#define error_putstr(__x) __putstr(__x)
@@ -66,26 +66,35 @@ int cmdline_find_option_bool(const char *option);
#if CONFIG_RANDOMIZE_BASE
-/* aslr.c */
-unsigned char *choose_kernel_location(struct boot_params *boot_params,
- unsigned char *input,
+/* kaslr.c */
+unsigned char *choose_random_location(unsigned long input_ptr,
unsigned long input_size,
- unsigned char *output,
+ unsigned long output_ptr,
unsigned long output_size);
/* cpuflags.c */
bool has_cpuflag(int flag);
#else
static inline
-unsigned char *choose_kernel_location(struct boot_params *boot_params,
- unsigned char *input,
+unsigned char *choose_random_location(unsigned long input_ptr,
unsigned long input_size,
- unsigned char *output,
+ unsigned long output_ptr,
unsigned long output_size)
{
- return output;
+ return (unsigned char *)output_ptr;
}
#endif
+#ifdef CONFIG_X86_64
+void add_identity_map(unsigned long start, unsigned long size);
+void finalize_identity_maps(void);
+extern unsigned char _pgtable[];
+#else
+static inline void add_identity_map(unsigned long start, unsigned long size)
+{ }
+static inline void finalize_identity_maps(void)
+{ }
+#endif
+
#ifdef CONFIG_EARLY_PRINTK
/* early_serial_console.c */
extern int early_serial_base;
diff --git a/arch/x86/boot/compressed/mkpiggy.c b/arch/x86/boot/compressed/mkpiggy.c
index d8222f213182..72bad2c8debe 100644
--- a/arch/x86/boot/compressed/mkpiggy.c
+++ b/arch/x86/boot/compressed/mkpiggy.c
@@ -18,11 +18,10 @@
*
* H. Peter Anvin <hpa@linux.intel.com>
*
- * ----------------------------------------------------------------------- */
-
-/*
- * Compute the desired load offset from a compressed program; outputs
- * a small assembly wrapper with the appropriate symbols defined.
+ * -----------------------------------------------------------------------
+ *
+ * Outputs a small assembly wrapper with the appropriate symbols defined.
+ *
*/
#include <stdlib.h>
@@ -35,14 +34,11 @@ int main(int argc, char *argv[])
{
uint32_t olen;
long ilen;
- unsigned long offs;
- unsigned long run_size;
FILE *f = NULL;
int retval = 1;
- if (argc < 3) {
- fprintf(stderr, "Usage: %s compressed_file run_size\n",
- argv[0]);
+ if (argc < 2) {
+ fprintf(stderr, "Usage: %s compressed_file\n", argv[0]);
goto bail;
}
@@ -67,29 +63,11 @@ int main(int argc, char *argv[])
ilen = ftell(f);
olen = get_unaligned_le32(&olen);
- /*
- * Now we have the input (compressed) and output (uncompressed)
- * sizes, compute the necessary decompression offset...
- */
-
- offs = (olen > ilen) ? olen - ilen : 0;
- offs += olen >> 12; /* Add 8 bytes for each 32K block */
- offs += 64*1024 + 128; /* Add 64K + 128 bytes slack */
- offs = (offs+4095) & ~4095; /* Round to a 4K boundary */
- run_size = atoi(argv[2]);
-
printf(".section \".rodata..compressed\",\"a\",@progbits\n");
printf(".globl z_input_len\n");
printf("z_input_len = %lu\n", ilen);
printf(".globl z_output_len\n");
printf("z_output_len = %lu\n", (unsigned long)olen);
- printf(".globl z_extract_offset\n");
- printf("z_extract_offset = 0x%lx\n", offs);
- /* z_extract_offset_negative allows simplification of head_32.S */
- printf(".globl z_extract_offset_negative\n");
- printf("z_extract_offset_negative = -0x%lx\n", offs);
- printf(".globl z_run_size\n");
- printf("z_run_size = %lu\n", run_size);
printf(".globl input_data, input_data_end\n");
printf("input_data:\n");
diff --git a/arch/x86/boot/compressed/pagetable.c b/arch/x86/boot/compressed/pagetable.c
new file mode 100644
index 000000000000..34b95df14e69
--- /dev/null
+++ b/arch/x86/boot/compressed/pagetable.c
@@ -0,0 +1,129 @@
+/*
+ * This code is used on x86_64 to create page table identity mappings on
+ * demand by building up a new set of page tables (or appending to the
+ * existing ones), and then switching over to them when ready.
+ */
+
+/*
+ * Since we're dealing with identity mappings, physical and virtual
+ * addresses are the same, so override these defines which are ultimately
+ * used by the headers in misc.h.
+ */
+#define __pa(x) ((unsigned long)(x))
+#define __va(x) ((void *)((unsigned long)(x)))
+
+#include "misc.h"
+
+/* These actually do the work of building the kernel identity maps. */
+#include <asm/init.h>
+#include <asm/pgtable.h>
+#include "../../mm/ident_map.c"
+
+/* Used by pgtable.h asm code to force instruction serialization. */
+unsigned long __force_order;
+
+/* Used to track our page table allocation area. */
+struct alloc_pgt_data {
+ unsigned char *pgt_buf;
+ unsigned long pgt_buf_size;
+ unsigned long pgt_buf_offset;
+};
+
+/*
+ * Allocates a page for page table entries, using struct alloc_pgt_data
+ * above. Besides the local callers, this is used as the allocation
+ * callback in mapping_info below.
+ */
+static void *alloc_pgt_page(void *context)
+{
+ struct alloc_pgt_data *pages = (struct alloc_pgt_data *)context;
+ unsigned char *entry;
+
+ /* Validate there is space available for a new page. */
+ if (pages->pgt_buf_offset >= pages->pgt_buf_size) {
+ debug_putstr("out of pgt_buf in " __FILE__ "!?\n");
+ debug_putaddr(pages->pgt_buf_offset);
+ debug_putaddr(pages->pgt_buf_size);
+ return NULL;
+ }
+
+ entry = pages->pgt_buf + pages->pgt_buf_offset;
+ pages->pgt_buf_offset += PAGE_SIZE;
+
+ return entry;
+}
+
+/* Used to track our allocated page tables. */
+static struct alloc_pgt_data pgt_data;
+
+/* The top level page table entry pointer. */
+static unsigned long level4p;
+
+/* Locates and clears a region for a new top level page table. */
+static void prepare_level4(void)
+{
+ /*
+ * It should be impossible for this not to already be true,
+ * but since calling this a second time would rewind the other
+ * counters, let's just make sure this is reset too.
+ */
+ pgt_data.pgt_buf_offset = 0;
+
+ /*
+ * If we came here via startup_32(), cr3 will be _pgtable already
+ * and we must append to the existing area instead of entirely
+ * overwriting it.
+ */
+ level4p = read_cr3();
+ if (level4p == (unsigned long)_pgtable) {
+ debug_putstr("booted via startup_32()\n");
+ pgt_data.pgt_buf = _pgtable + BOOT_INIT_PGT_SIZE;
+ pgt_data.pgt_buf_size = BOOT_PGT_SIZE - BOOT_INIT_PGT_SIZE;
+ memset(pgt_data.pgt_buf, 0, pgt_data.pgt_buf_size);
+ } else {
+ debug_putstr("booted via startup_64()\n");
+ pgt_data.pgt_buf = _pgtable;
+ pgt_data.pgt_buf_size = BOOT_PGT_SIZE;
+ memset(pgt_data.pgt_buf, 0, pgt_data.pgt_buf_size);
+ level4p = (unsigned long)alloc_pgt_page(&pgt_data);
+ }
+}
+
+/*
+ * Adds the specified range to what will become the new identity mappings.
+ * Once all ranges have been added, the new mapping is activated by calling
+ * finalize_identity_maps() below.
+ */
+void add_identity_map(unsigned long start, unsigned long size)
+{
+ struct x86_mapping_info mapping_info = {
+ .alloc_pgt_page = alloc_pgt_page,
+ .context = &pgt_data,
+ .pmd_flag = __PAGE_KERNEL_LARGE_EXEC,
+ };
+ unsigned long end = start + size;
+
+ /* Make sure we have a top level page table ready to use. */
+ if (!level4p)
+ prepare_level4();
+
+ /* Align boundary to 2M. */
+ start = round_down(start, PMD_SIZE);
+ end = round_up(end, PMD_SIZE);
+ if (start >= end)
+ return;
+
+ /* Build the mapping. */
+ kernel_ident_mapping_init(&mapping_info, (pgd_t *)level4p,
+ start, end);
+}
+
+/*
+ * This switches the page tables to the new level4 that has been built
+ * via calls to add_identity_map() above. If booted via startup_32(),
+ * this is effectively a no-op.
+ */
+void finalize_identity_maps(void)
+{
+ write_cr3(level4p);
+}
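
alloc_pgt_page() above is a simple bump allocator over a fixed buffer (_pgtable). The following is a standalone sketch of the same pattern, assuming a heap buffer and a hypothetical page size in place of the boot-time _pgtable area; names prefixed demo_ are inventions.

#include <stdio.h>
#include <stdlib.h>

#define DEMO_PAGE_SIZE 4096UL

/* Mirrors struct alloc_pgt_data: a fixed pool handed out one page at a time. */
struct demo_pgt_data {
	unsigned char *pgt_buf;
	unsigned long pgt_buf_size;
	unsigned long pgt_buf_offset;
};

static void *demo_alloc_pgt_page(void *context)
{
	struct demo_pgt_data *pages = context;
	unsigned char *entry;

	if (pages->pgt_buf_offset >= pages->pgt_buf_size)
		return NULL;	/* pool exhausted, caller must cope */

	entry = pages->pgt_buf + pages->pgt_buf_offset;
	pages->pgt_buf_offset += DEMO_PAGE_SIZE;
	return entry;
}

int main(void)
{
	struct demo_pgt_data pool = {
		.pgt_buf = calloc(6, DEMO_PAGE_SIZE),
		.pgt_buf_size = 6 * DEMO_PAGE_SIZE,
	};
	void *page;
	int n = 0;

	if (!pool.pgt_buf)
		return 1;

	while ((page = demo_alloc_pgt_page(&pool)) != NULL)
		n++;

	printf("handed out %d pages before the pool ran dry\n", n);
	free(pool.pgt_buf);
	return 0;
}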
diff --git a/arch/x86/boot/compressed/string.c b/arch/x86/boot/compressed/string.c
index 00e788be1db9..cea140ce6b42 100644
--- a/arch/x86/boot/compressed/string.c
+++ b/arch/x86/boot/compressed/string.c
@@ -1,7 +1,16 @@
+/*
+ * This provides an optimized implementation of memcpy, and a simplified
+ * implementation of memset and memmove. These are used here because the
+ * standard kernel runtime versions are not yet available and we don't
+ * trust the gcc built-in implementations as they may do unexpected things
+ * (e.g. FPU ops) in the minimal decompression stub execution environment.
+ */
+#include "error.h"
+
#include "../string.c"
#ifdef CONFIG_X86_32
-void *memcpy(void *dest, const void *src, size_t n)
+static void *__memcpy(void *dest, const void *src, size_t n)
{
int d0, d1, d2;
asm volatile(
@@ -15,7 +24,7 @@ void *memcpy(void *dest, const void *src, size_t n)
return dest;
}
#else
-void *memcpy(void *dest, const void *src, size_t n)
+static void *__memcpy(void *dest, const void *src, size_t n)
{
long d0, d1, d2;
asm volatile(
@@ -39,3 +48,27 @@ void *memset(void *s, int c, size_t n)
ss[i] = c;
return s;
}
+
+void *memmove(void *dest, const void *src, size_t n)
+{
+ unsigned char *d = dest;
+ const unsigned char *s = src;
+
+ if (d <= s || d - s >= n)
+ return __memcpy(dest, src, n);
+
+ while (n-- > 0)
+ d[n] = s[n];
+
+ return dest;
+}
+
+/* Detect and warn about potential overlaps, but handle them with memmove. */
+void *memcpy(void *dest, const void *src, size_t n)
+{
+ if (dest > src && dest - src < n) {
+ warn("Avoiding potentially unsafe overlapping memcpy()!");
+ return memmove(dest, src, n);
+ }
+ return __memcpy(dest, src, n);
+}
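
The memcpy() wrapper above only treats a forward overlap (dest > src && dest - src < n) as unsafe, since the simple copy runs upward through memory. A userspace sketch of the same check follows; safe_copy() and the sample buffer are inventions for illustration, not part of the boot code.

#include <stdio.h>
#include <string.h>

/* Copy that falls back to memmove() when source and destination overlap
 * in the direction the forward copy cannot handle. */
static void *safe_copy(void *dest, const void *src, size_t n)
{
	const unsigned char *s = src;
	unsigned char *d = dest;

	if (d > s && (size_t)(d - s) < n) {
		fprintf(stderr, "warning: overlapping copy, using memmove()\n");
		return memmove(dest, src, n);
	}
	return memcpy(dest, src, n);
}

int main(void)
{
	char buf[] = "0123456789";

	/* Shift the string right by two bytes; the regions overlap. */
	safe_copy(buf + 2, buf, 8);
	printf("%s\n", buf);	/* prints "0101234567" */
	return 0;
}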
diff --git a/arch/x86/boot/compressed/vmlinux.lds.S b/arch/x86/boot/compressed/vmlinux.lds.S
index 34d047c98284..e24e0a0c90c9 100644
--- a/arch/x86/boot/compressed/vmlinux.lds.S
+++ b/arch/x86/boot/compressed/vmlinux.lds.S
@@ -70,5 +70,6 @@ SECTIONS
_epgtable = . ;
}
#endif
+ . = ALIGN(PAGE_SIZE); /* keep ZO size page aligned */
_end = .;
}
diff --git a/arch/x86/boot/early_serial_console.c b/arch/x86/boot/early_serial_console.c
index 45a07684bbab..f0b8d6d93164 100644
--- a/arch/x86/boot/early_serial_console.c
+++ b/arch/x86/boot/early_serial_console.c
@@ -1,3 +1,7 @@
+/*
+ * Serial port routines for use during early boot reporting. This code is
+ * included from both the compressed kernel and the regular kernel.
+ */
#include "boot.h"
#define DEFAULT_SERIAL_PORT 0x3f8 /* ttyS0 */
diff --git a/arch/x86/boot/header.S b/arch/x86/boot/header.S
index 6236b9ec4b76..3dd5be33aaa7 100644
--- a/arch/x86/boot/header.S
+++ b/arch/x86/boot/header.S
@@ -440,13 +440,116 @@ setup_data: .quad 0 # 64-bit physical pointer to
pref_address: .quad LOAD_PHYSICAL_ADDR # preferred load addr
-#define ZO_INIT_SIZE (ZO__end - ZO_startup_32 + ZO_z_extract_offset)
+#
+# Getting to provably safe in-place decompression is hard. Worst case
+# behaviours need to be analyzed. Here let's take the decompression of
+# a gzip-compressed kernel as an example, to illustrate it:
+#
+# The file layout of gzip compressed kernel is:
+#
+# magic[2]
+# method[1]
+# flags[1]
+# timestamp[4]
+# extraflags[1]
+# os[1]
+# compressed data blocks[N]
+# crc[4] orig_len[4]
+#
+# ... resulting in +18 bytes overhead of uncompressed data.
+#
+# (For more information, please refer to RFC 1951 and RFC 1952.)
+#
+# Files divided into blocks
+# 1 bit (last block flag)
+# 2 bits (block type)
+#
+# 1 block occurs every 32K - 1 bytes or when 50% compression
+# has been achieved. The smallest block type encoding is always used.
+#
+# stored:
+# 32 bits length in bytes.
+#
+# fixed:
+# magic fixed tree.
+# symbols.
+#
+# dynamic:
+# dynamic tree encoding.
+# symbols.
+#
+#
+# The buffer for decompression in place is the length of the uncompressed
+# data, plus a small amount extra to keep the algorithm safe. The
+# compressed data is placed at the end of the buffer. The output pointer
+# is placed at the start of the buffer and the input pointer is placed
+# where the compressed data starts. Problems will occur when the output
+# pointer overruns the input pointer.
+#
+# The output pointer can only overrun the input pointer if the input
+# pointer is moving faster than the output pointer. A condition only
+# triggered by data whose compressed form is larger than the uncompressed
+# form.
+#
+# The worst case at the block level is a growth of the compressed data
+# of 5 bytes per 32767 bytes.
+#
+# The worst case internal to a compressed block is very hard to figure.
+# The worst case can at least be bounded by having one bit that represents
+# 32764 bytes and then all of the rest of the bytes representing the very
+# very last byte.
+#
+# All of which is enough to compute an amount of extra data that is required
+# to be safe. To avoid problems at the block level allocating 5 extra bytes
+# per 32767 bytes of data is sufficient. To avoid problems internal to a
+# block adding an extra 32767 bytes (the worst case uncompressed block size)
+# is sufficient, to ensure that in the worst case the decompressed data for
+# a block will stop the byte before the compressed data for a block begins.
+# To avoid problems with the compressed data's meta information an extra 18
+# bytes are needed. Leading to the formula:
+#
+# extra_bytes = (uncompressed_size >> 12) + 32768 + 18
+#
+# Adding 8 bytes per 32K is a bit excessive but much easier to calculate.
+# Adding 32768 instead of 32767 just makes for round numbers.
+#
+# The above analysis is for decompressing a gzip compressed kernel only. Up
+# to now, 6 different decompressors are supported altogether. Among them,
+# xz stores data in chunks and has a maximum chunk size of 64K. Hence the safety
+# margin should be updated to cover all decompressors so that we don't
+# need to deal with each of them separately. Please check
+# the description in lib/decompressor_xxx.c for specific information.
+#
+# extra_bytes = (uncompressed_size >> 12) + 65536 + 128
+
+#define ZO_z_extra_bytes ((ZO_z_output_len >> 12) + 65536 + 128)
+#if ZO_z_output_len > ZO_z_input_len
+# define ZO_z_extract_offset (ZO_z_output_len + ZO_z_extra_bytes - \
+ ZO_z_input_len)
+#else
+# define ZO_z_extract_offset ZO_z_extra_bytes
+#endif
+
+/*
+ * The extract_offset has to be bigger than the ZO head section. Otherwise,
+ * when the head code is running to move ZO to the end of the buffer, it will
+ * overwrite the head code itself.
+ */
+#if (ZO__ehead - ZO_startup_32) > ZO_z_extract_offset
+# define ZO_z_min_extract_offset ((ZO__ehead - ZO_startup_32 + 4095) & ~4095)
+#else
+# define ZO_z_min_extract_offset ((ZO_z_extract_offset + 4095) & ~4095)
+#endif
+
+#define ZO_INIT_SIZE (ZO__end - ZO_startup_32 + ZO_z_min_extract_offset)
+
#define VO_INIT_SIZE (VO__end - VO__text)
#if ZO_INIT_SIZE > VO_INIT_SIZE
-#define INIT_SIZE ZO_INIT_SIZE
+# define INIT_SIZE ZO_INIT_SIZE
#else
-#define INIT_SIZE VO_INIT_SIZE
+# define INIT_SIZE VO_INIT_SIZE
#endif
+
init_size: .long INIT_SIZE # kernel initialization size
handover_offset: .long 0 # Filled in by build.c
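
The extract offset derived above can be worked through with concrete numbers. This sketch uses invented ZO sizes (the real ZO_z_input_len/ZO_z_output_len are emitted into zoffset.h at build time) and ignores the ZO__ehead comparison for brevity.

#include <stdio.h>

/* Hypothetical sizes for illustration only. */
#define EXAMPLE_Z_OUTPUT_LEN (26UL << 20)	/* uncompressed VO + relocs */
#define EXAMPLE_Z_INPUT_LEN  ( 7UL << 20)	/* compressed payload */

int main(void)
{
	/* Safety margin covering all supported decompressors. */
	unsigned long extra = (EXAMPLE_Z_OUTPUT_LEN >> 12) + 65536 + 128;
	unsigned long extract_offset;

	if (EXAMPLE_Z_OUTPUT_LEN > EXAMPLE_Z_INPUT_LEN)
		extract_offset = EXAMPLE_Z_OUTPUT_LEN + extra - EXAMPLE_Z_INPUT_LEN;
	else
		extract_offset = extra;

	/* Round up to a page, as ZO_z_min_extract_offset does. */
	extract_offset = (extract_offset + 4095) & ~4095UL;

	printf("extra_bytes    = %lu\n", extra);
	printf("extract_offset = %lu (0x%lx)\n", extract_offset, extract_offset);
	return 0;
}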
diff --git a/arch/x86/configs/i386_defconfig b/arch/x86/configs/i386_defconfig
index 265901a84f3f..5fa6ee2c2dde 100644
--- a/arch/x86/configs/i386_defconfig
+++ b/arch/x86/configs/i386_defconfig
@@ -17,7 +17,6 @@ CONFIG_CGROUPS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_CPUACCT=y
-CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_SCHED=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_COMPAT_BRK is not set
diff --git a/arch/x86/configs/kvm_guest.config b/arch/x86/configs/kvm_guest.config
index f9affcc3b9f1..9906505c998a 100644
--- a/arch/x86/configs/kvm_guest.config
+++ b/arch/x86/configs/kvm_guest.config
@@ -26,3 +26,6 @@ CONFIG_VIRTIO_NET=y
CONFIG_9P_FS=y
CONFIG_NET_9P=y
CONFIG_NET_9P_VIRTIO=y
+CONFIG_SCSI_LOWLEVEL=y
+CONFIG_SCSI_VIRTIO=y
+CONFIG_VIRTIO_INPUT=y
diff --git a/arch/x86/configs/x86_64_defconfig b/arch/x86/configs/x86_64_defconfig
index 4f404a64681b..d28bdabcc87e 100644
--- a/arch/x86/configs/x86_64_defconfig
+++ b/arch/x86/configs/x86_64_defconfig
@@ -16,7 +16,6 @@ CONFIG_CGROUPS=y
CONFIG_CGROUP_FREEZER=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_CPUACCT=y
-CONFIG_RESOURCE_COUNTERS=y
CONFIG_CGROUP_SCHED=y
CONFIG_BLK_DEV_INITRD=y
# CONFIG_COMPAT_BRK is not set
@@ -173,6 +172,7 @@ CONFIG_TIGON3=y
CONFIG_NET_TULIP=y
CONFIG_E100=y
CONFIG_E1000=y
+CONFIG_E1000E=y
CONFIG_SKY2=y
CONFIG_FORCEDETH=y
CONFIG_8139TOO=y
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 064c7e2bd7c8..5b7fa1471007 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -1477,7 +1477,7 @@ static int __init aesni_init(void)
}
aesni_ctr_enc_tfm = aesni_ctr_enc;
#ifdef CONFIG_AS_AVX
- if (cpu_has_avx) {
+ if (boot_cpu_has(X86_FEATURE_AVX)) {
/* optimize performance of ctr mode encryption transform */
aesni_ctr_enc_tfm = aesni_ctr_enc_avx_tfm;
pr_info("AES CTR mode by8 optimization enabled\n");
diff --git a/arch/x86/crypto/camellia_aesni_avx2_glue.c b/arch/x86/crypto/camellia_aesni_avx2_glue.c
index d84456924563..60907c139c4e 100644
--- a/arch/x86/crypto/camellia_aesni_avx2_glue.c
+++ b/arch/x86/crypto/camellia_aesni_avx2_glue.c
@@ -562,7 +562,10 @@ static int __init camellia_aesni_init(void)
{
const char *feature_name;
- if (!cpu_has_avx2 || !cpu_has_avx || !cpu_has_aes || !cpu_has_osxsave) {
+ if (!boot_cpu_has(X86_FEATURE_AVX) ||
+ !boot_cpu_has(X86_FEATURE_AVX2) ||
+ !boot_cpu_has(X86_FEATURE_AES) ||
+ !boot_cpu_has(X86_FEATURE_OSXSAVE)) {
pr_info("AVX2 or AES-NI instructions are not detected.\n");
return -ENODEV;
}
diff --git a/arch/x86/crypto/camellia_aesni_avx_glue.c b/arch/x86/crypto/camellia_aesni_avx_glue.c
index 93d8f295784e..d96429da88eb 100644
--- a/arch/x86/crypto/camellia_aesni_avx_glue.c
+++ b/arch/x86/crypto/camellia_aesni_avx_glue.c
@@ -554,7 +554,9 @@ static int __init camellia_aesni_init(void)
{
const char *feature_name;
- if (!cpu_has_avx || !cpu_has_aes || !cpu_has_osxsave) {
+ if (!boot_cpu_has(X86_FEATURE_AVX) ||
+ !boot_cpu_has(X86_FEATURE_AES) ||
+ !boot_cpu_has(X86_FEATURE_OSXSAVE)) {
pr_info("AVX or AES-NI instructions are not detected.\n");
return -ENODEV;
}
diff --git a/arch/x86/crypto/chacha20_glue.c b/arch/x86/crypto/chacha20_glue.c
index 8baaff5af0b5..2d5c2e0bd939 100644
--- a/arch/x86/crypto/chacha20_glue.c
+++ b/arch/x86/crypto/chacha20_glue.c
@@ -129,7 +129,8 @@ static int __init chacha20_simd_mod_init(void)
return -ENODEV;
#ifdef CONFIG_AS_AVX2
- chacha20_use_avx2 = cpu_has_avx && cpu_has_avx2 &&
+ chacha20_use_avx2 = boot_cpu_has(X86_FEATURE_AVX) &&
+ boot_cpu_has(X86_FEATURE_AVX2) &&
cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL);
#endif
return crypto_register_alg(&alg);
diff --git a/arch/x86/crypto/poly1305_glue.c b/arch/x86/crypto/poly1305_glue.c
index 4264a3d59589..e32142bc071d 100644
--- a/arch/x86/crypto/poly1305_glue.c
+++ b/arch/x86/crypto/poly1305_glue.c
@@ -179,11 +179,12 @@ static struct shash_alg alg = {
static int __init poly1305_simd_mod_init(void)
{
- if (!cpu_has_xmm2)
+ if (!boot_cpu_has(X86_FEATURE_XMM2))
return -ENODEV;
#ifdef CONFIG_AS_AVX2
- poly1305_use_avx2 = cpu_has_avx && cpu_has_avx2 &&
+ poly1305_use_avx2 = boot_cpu_has(X86_FEATURE_AVX) &&
+ boot_cpu_has(X86_FEATURE_AVX2) &&
cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL);
alg.descsize = sizeof(struct poly1305_simd_desc_ctx);
if (poly1305_use_avx2)
diff --git a/arch/x86/crypto/serpent_avx2_glue.c b/arch/x86/crypto/serpent_avx2_glue.c
index 6d198342e2de..870f6d812a2d 100644
--- a/arch/x86/crypto/serpent_avx2_glue.c
+++ b/arch/x86/crypto/serpent_avx2_glue.c
@@ -538,7 +538,7 @@ static int __init init(void)
{
const char *feature_name;
- if (!cpu_has_avx2 || !cpu_has_osxsave) {
+ if (!boot_cpu_has(X86_FEATURE_AVX2) || !boot_cpu_has(X86_FEATURE_OSXSAVE)) {
pr_info("AVX2 instructions are not detected.\n");
return -ENODEV;
}
diff --git a/arch/x86/crypto/serpent_sse2_glue.c b/arch/x86/crypto/serpent_sse2_glue.c
index 8943407e8917..644f97ab8cac 100644
--- a/arch/x86/crypto/serpent_sse2_glue.c
+++ b/arch/x86/crypto/serpent_sse2_glue.c
@@ -600,7 +600,7 @@ static struct crypto_alg serpent_algs[10] = { {
static int __init serpent_sse2_init(void)
{
- if (!cpu_has_xmm2) {
+ if (!boot_cpu_has(X86_FEATURE_XMM2)) {
printk(KERN_INFO "SSE2 instructions are not detected.\n");
return -ENODEV;
}
diff --git a/arch/x86/crypto/sha-mb/sha1_mb.c b/arch/x86/crypto/sha-mb/sha1_mb.c
index 081255cea1ee..9c5af331a956 100644
--- a/arch/x86/crypto/sha-mb/sha1_mb.c
+++ b/arch/x86/crypto/sha-mb/sha1_mb.c
@@ -102,14 +102,14 @@ static asmlinkage struct job_sha1* (*sha1_job_mgr_submit)(struct sha1_mb_mgr *st
static asmlinkage struct job_sha1* (*sha1_job_mgr_flush)(struct sha1_mb_mgr *state);
static asmlinkage struct job_sha1* (*sha1_job_mgr_get_comp_job)(struct sha1_mb_mgr *state);
-inline void sha1_init_digest(uint32_t *digest)
+static inline void sha1_init_digest(uint32_t *digest)
{
static const uint32_t initial_digest[SHA1_DIGEST_LENGTH] = {SHA1_H0,
SHA1_H1, SHA1_H2, SHA1_H3, SHA1_H4 };
memcpy(digest, initial_digest, sizeof(initial_digest));
}
-inline uint32_t sha1_pad(uint8_t padblock[SHA1_BLOCK_SIZE * 2],
+static inline uint32_t sha1_pad(uint8_t padblock[SHA1_BLOCK_SIZE * 2],
uint32_t total_len)
{
uint32_t i = total_len & (SHA1_BLOCK_SIZE - 1);
diff --git a/arch/x86/crypto/sha-mb/sha1_x8_avx2.S b/arch/x86/crypto/sha-mb/sha1_x8_avx2.S
index 8e1b47792b31..c9dae1cd2919 100644
--- a/arch/x86/crypto/sha-mb/sha1_x8_avx2.S
+++ b/arch/x86/crypto/sha-mb/sha1_x8_avx2.S
@@ -296,7 +296,11 @@ W14 = TMP_
#
ENTRY(sha1_x8_avx2)
- push RSP_SAVE
+ # save callee-saved clobbered registers to comply with C function ABI
+ push %r12
+ push %r13
+ push %r14
+ push %r15
#save rsp
mov %rsp, RSP_SAVE
@@ -446,7 +450,12 @@ lloop:
## Postamble
mov RSP_SAVE, %rsp
- pop RSP_SAVE
+
+ # restore callee-saved clobbered registers
+ pop %r15
+ pop %r14
+ pop %r13
+ pop %r12
ret
ENDPROC(sha1_x8_avx2)
diff --git a/arch/x86/crypto/sha1_ssse3_glue.c b/arch/x86/crypto/sha1_ssse3_glue.c
index dd14616b7739..1024e378a358 100644
--- a/arch/x86/crypto/sha1_ssse3_glue.c
+++ b/arch/x86/crypto/sha1_ssse3_glue.c
@@ -166,7 +166,7 @@ static struct shash_alg sha1_avx_alg = {
static bool avx_usable(void)
{
if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) {
- if (cpu_has_avx)
+ if (boot_cpu_has(X86_FEATURE_AVX))
pr_info("AVX detected but unusable.\n");
return false;
}
diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_ssse3_glue.c
index 5f4d6086dc59..3ae0f43ebd37 100644
--- a/arch/x86/crypto/sha256_ssse3_glue.c
+++ b/arch/x86/crypto/sha256_ssse3_glue.c
@@ -201,7 +201,7 @@ static struct shash_alg sha256_avx_algs[] = { {
static bool avx_usable(void)
{
if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) {
- if (cpu_has_avx)
+ if (boot_cpu_has(X86_FEATURE_AVX))
pr_info("AVX detected but unusable.\n");
return false;
}
diff --git a/arch/x86/crypto/sha512_ssse3_glue.c b/arch/x86/crypto/sha512_ssse3_glue.c
index 34e5083d6f36..0b17c83d027d 100644
--- a/arch/x86/crypto/sha512_ssse3_glue.c
+++ b/arch/x86/crypto/sha512_ssse3_glue.c
@@ -151,7 +151,7 @@ asmlinkage void sha512_transform_avx(u64 *digest, const char *data,
static bool avx_usable(void)
{
if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) {
- if (cpu_has_avx)
+ if (boot_cpu_has(X86_FEATURE_AVX))
pr_info("AVX detected but unusable.\n");
return false;
}
diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index e79d93d44ecd..ec138e538c44 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -191,7 +191,7 @@ long syscall_trace_enter_phase2(struct pt_regs *regs, u32 arch,
long syscall_trace_enter(struct pt_regs *regs)
{
- u32 arch = is_ia32_task() ? AUDIT_ARCH_I386 : AUDIT_ARCH_X86_64;
+ u32 arch = in_ia32_syscall() ? AUDIT_ARCH_I386 : AUDIT_ARCH_X86_64;
unsigned long phase1_result = syscall_trace_enter_phase1(regs, arch);
if (phase1_result == 0)
diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index 10868aa734dc..983e5d3a0d27 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -207,10 +207,7 @@
ENTRY(ret_from_fork)
pushl %eax
call schedule_tail
- GET_THREAD_INFO(%ebp)
popl %eax
- pushl $0x0202 # Reset kernel eflags
- popfl
/* When we fork, we trace the syscall return in the child, too. */
movl %esp, %eax
@@ -221,10 +218,7 @@ END(ret_from_fork)
ENTRY(ret_from_kernel_thread)
pushl %eax
call schedule_tail
- GET_THREAD_INFO(%ebp)
popl %eax
- pushl $0x0202 # Reset kernel eflags
- popfl
movl PT_EBP(%esp), %eax
call *PT_EBX(%esp)
movl $0, PT_EAX(%esp)
@@ -251,7 +245,6 @@ ENDPROC(ret_from_kernel_thread)
ret_from_exception:
preempt_stop(CLBR_ANY)
ret_from_intr:
- GET_THREAD_INFO(%ebp)
#ifdef CONFIG_VM86
movl PT_EFLAGS(%esp), %eax # mix EFLAGS and CS
movb PT_CS(%esp), %al
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 858b555e274b..9ee0da1807ed 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -372,9 +372,6 @@ END(ptregs_\func)
ENTRY(ret_from_fork)
LOCK ; btr $TIF_FORK, TI_flags(%r8)
- pushq $0x0002
- popfq /* reset kernel eflags */
-
call schedule_tail /* rdi: 'prev' task parameter */
testb $3, CS(%rsp) /* from kernel_thread? */
@@ -781,19 +778,25 @@ ENTRY(native_load_gs_index)
pushfq
DISABLE_INTERRUPTS(CLBR_ANY & ~CLBR_RDI)
SWAPGS
-gs_change:
+.Lgs_change:
movl %edi, %gs
-2: mfence /* workaround */
+2: ALTERNATIVE "", "mfence", X86_BUG_SWAPGS_FENCE
SWAPGS
popfq
ret
END(native_load_gs_index)
- _ASM_EXTABLE(gs_change, bad_gs)
+ _ASM_EXTABLE(.Lgs_change, bad_gs)
.section .fixup, "ax"
/* running with kernelgs */
bad_gs:
SWAPGS /* switch back to user gs */
+.macro ZAP_GS
+ /* This can't be a string because the preprocessor needs to see it. */
+ movl $__USER_DS, %eax
+ movl %eax, %gs
+.endm
+ ALTERNATIVE "", "ZAP_GS", X86_BUG_NULL_SEG
xorl %eax, %eax
movl %eax, %gs
jmp 2b
@@ -1019,13 +1022,13 @@ ENTRY(error_entry)
movl %ecx, %eax /* zero extend */
cmpq %rax, RIP+8(%rsp)
je .Lbstep_iret
- cmpq $gs_change, RIP+8(%rsp)
+ cmpq $.Lgs_change, RIP+8(%rsp)
jne .Lerror_entry_done
/*
- * hack: gs_change can fail with user gsbase. If this happens, fix up
+ * hack: .Lgs_change can fail with user gsbase. If this happens, fix up
* gsbase and proceed. We'll fix up the exception and land in
- * gs_change's error handler with kernel gsbase.
+ * .Lgs_change's error handler with kernel gsbase.
*/
jmp .Lerror_entry_from_usermode_swapgs
diff --git a/arch/x86/entry/entry_64_compat.S b/arch/x86/entry/entry_64_compat.S
index 847f2f0c31e5..e1721dafbcb1 100644
--- a/arch/x86/entry/entry_64_compat.S
+++ b/arch/x86/entry/entry_64_compat.S
@@ -72,24 +72,23 @@ ENTRY(entry_SYSENTER_compat)
pushfq /* pt_regs->flags (except IF = 0) */
orl $X86_EFLAGS_IF, (%rsp) /* Fix saved flags */
pushq $__USER32_CS /* pt_regs->cs */
- xorq %r8,%r8
- pushq %r8 /* pt_regs->ip = 0 (placeholder) */
+ pushq $0 /* pt_regs->ip = 0 (placeholder) */
pushq %rax /* pt_regs->orig_ax */
pushq %rdi /* pt_regs->di */
pushq %rsi /* pt_regs->si */
pushq %rdx /* pt_regs->dx */
pushq %rcx /* pt_regs->cx */
pushq $-ENOSYS /* pt_regs->ax */
- pushq %r8 /* pt_regs->r8 = 0 */
- pushq %r8 /* pt_regs->r9 = 0 */
- pushq %r8 /* pt_regs->r10 = 0 */
- pushq %r8 /* pt_regs->r11 = 0 */
+ pushq $0 /* pt_regs->r8 = 0 */
+ pushq $0 /* pt_regs->r9 = 0 */
+ pushq $0 /* pt_regs->r10 = 0 */
+ pushq $0 /* pt_regs->r11 = 0 */
pushq %rbx /* pt_regs->rbx */
pushq %rbp /* pt_regs->rbp (will be overwritten) */
- pushq %r8 /* pt_regs->r12 = 0 */
- pushq %r8 /* pt_regs->r13 = 0 */
- pushq %r8 /* pt_regs->r14 = 0 */
- pushq %r8 /* pt_regs->r15 = 0 */
+ pushq $0 /* pt_regs->r12 = 0 */
+ pushq $0 /* pt_regs->r13 = 0 */
+ pushq $0 /* pt_regs->r14 = 0 */
+ pushq $0 /* pt_regs->r15 = 0 */
cld
/*
@@ -205,17 +204,16 @@ ENTRY(entry_SYSCALL_compat)
pushq %rdx /* pt_regs->dx */
pushq %rbp /* pt_regs->cx (stashed in bp) */
pushq $-ENOSYS /* pt_regs->ax */
- xorq %r8,%r8
- pushq %r8 /* pt_regs->r8 = 0 */
- pushq %r8 /* pt_regs->r9 = 0 */
- pushq %r8 /* pt_regs->r10 = 0 */
- pushq %r8 /* pt_regs->r11 = 0 */
+ pushq $0 /* pt_regs->r8 = 0 */
+ pushq $0 /* pt_regs->r9 = 0 */
+ pushq $0 /* pt_regs->r10 = 0 */
+ pushq $0 /* pt_regs->r11 = 0 */
pushq %rbx /* pt_regs->rbx */
pushq %rbp /* pt_regs->rbp (will be overwritten) */
- pushq %r8 /* pt_regs->r12 = 0 */
- pushq %r8 /* pt_regs->r13 = 0 */
- pushq %r8 /* pt_regs->r14 = 0 */
- pushq %r8 /* pt_regs->r15 = 0 */
+ pushq $0 /* pt_regs->r12 = 0 */
+ pushq $0 /* pt_regs->r13 = 0 */
+ pushq $0 /* pt_regs->r14 = 0 */
+ pushq $0 /* pt_regs->r15 = 0 */
/*
* User mode is traced as though IRQs are on, and SYSENTER
@@ -316,11 +314,10 @@ ENTRY(entry_INT80_compat)
pushq %rdx /* pt_regs->dx */
pushq %rcx /* pt_regs->cx */
pushq $-ENOSYS /* pt_regs->ax */
- xorq %r8,%r8
- pushq %r8 /* pt_regs->r8 = 0 */
- pushq %r8 /* pt_regs->r9 = 0 */
- pushq %r8 /* pt_regs->r10 = 0 */
- pushq %r8 /* pt_regs->r11 = 0 */
+ pushq $0 /* pt_regs->r8 = 0 */
+ pushq $0 /* pt_regs->r9 = 0 */
+ pushq $0 /* pt_regs->r10 = 0 */
+ pushq $0 /* pt_regs->r11 = 0 */
pushq %rbx /* pt_regs->rbx */
pushq %rbp /* pt_regs->rbp */
pushq %r12 /* pt_regs->r12 */
diff --git a/arch/x86/entry/syscalls/syscall_32.tbl b/arch/x86/entry/syscalls/syscall_32.tbl
index b30dd8154cc2..4cddd17153fb 100644
--- a/arch/x86/entry/syscalls/syscall_32.tbl
+++ b/arch/x86/entry/syscalls/syscall_32.tbl
@@ -384,5 +384,5 @@
375 i386 membarrier sys_membarrier
376 i386 mlock2 sys_mlock2
377 i386 copy_file_range sys_copy_file_range
-378 i386 preadv2 sys_preadv2
-379 i386 pwritev2 sys_pwritev2
+378 i386 preadv2 sys_preadv2 compat_sys_preadv2
+379 i386 pwritev2 sys_pwritev2 compat_sys_pwritev2
diff --git a/arch/x86/entry/syscalls/syscall_64.tbl b/arch/x86/entry/syscalls/syscall_64.tbl
index cac6d17ce5db..555263e385c9 100644
--- a/arch/x86/entry/syscalls/syscall_64.tbl
+++ b/arch/x86/entry/syscalls/syscall_64.tbl
@@ -374,3 +374,5 @@
543 x32 io_setup compat_sys_io_setup
544 x32 io_submit compat_sys_io_submit
545 x32 execveat compat_sys_execveat/ptregs
+534 x32 preadv2 compat_sys_preadv2
+535 x32 pwritev2 compat_sys_pwritev2
diff --git a/arch/x86/entry/thunk_64.S b/arch/x86/entry/thunk_64.S
index 98df1fa8825c..027aec4a74df 100644
--- a/arch/x86/entry/thunk_64.S
+++ b/arch/x86/entry/thunk_64.S
@@ -8,16 +8,15 @@
#include <linux/linkage.h>
#include "calling.h"
#include <asm/asm.h>
-#include <asm/frame.h>
/* rdi: arg1 ... normal C conventions. rax is saved/restored. */
.macro THUNK name, func, put_ret_addr_in_rdi=0
.globl \name
.type \name, @function
\name:
- FRAME_BEGIN
+ pushq %rbp
+ movq %rsp, %rbp
- /* this one pushes 9 elems, the next one would be %rIP */
pushq %rdi
pushq %rsi
pushq %rdx
@@ -29,8 +28,8 @@
pushq %r11
.if \put_ret_addr_in_rdi
- /* 9*8(%rsp) is return addr on stack */
- movq 9*8(%rsp), %rdi
+ /* 8(%rbp) is return addr on stack */
+ movq 8(%rbp), %rdi
.endif
call \func
@@ -65,7 +64,7 @@ restore:
popq %rdx
popq %rsi
popq %rdi
- FRAME_END
+ popq %rbp
ret
_ASM_NOKPROBE(restore)
#endif
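
The hand-rolled frame above relies on the standard x86-64 layout: after push %rbp; movq %rsp, %rbp, the caller's return address sits at 8(%rbp). A hedged userspace check of that layout using GCC builtins; it assumes the compiler keeps a frame pointer (e.g. -O0 or -fno-omit-frame-pointer), and show_frame() is an invented helper.

#include <stdio.h>

/* With a conventional frame, [rbp] holds the saved rbp and [rbp+8] the
 * return address, so reading 8(%rbp) recovers the caller's return addr. */
static __attribute__((noinline)) void show_frame(void)
{
	void **fp = __builtin_frame_address(0);		/* current %rbp */
	void *ret_from_builtin = __builtin_return_address(0);
	void *ret_from_frame = fp[1];			/* 8(%rbp) */

	printf("return address via builtin: %p\n", ret_from_builtin);
	printf("return address via 8(rbp):  %p\n", ret_from_frame);
}

int main(void)
{
	show_frame();
	return 0;
}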
diff --git a/arch/x86/entry/vdso/Makefile b/arch/x86/entry/vdso/Makefile
index 6874da5f67fc..253b72eaade6 100644
--- a/arch/x86/entry/vdso/Makefile
+++ b/arch/x86/entry/vdso/Makefile
@@ -193,10 +193,10 @@ vdso_img_insttargets := $(vdso_img_sodbg:%.dbg=install_%)
$(MODLIB)/vdso: FORCE
@mkdir -p $(MODLIB)/vdso
-$(vdso_img_insttargets): install_%: $(obj)/%.dbg $(MODLIB)/vdso FORCE
+$(vdso_img_insttargets): install_%: $(obj)/%.dbg $(MODLIB)/vdso
$(call cmd,vdso_install)
PHONY += vdso_install $(vdso_img_insttargets)
-vdso_install: $(vdso_img_insttargets) FORCE
+vdso_install: $(vdso_img_insttargets)
clean-files := vdso32.so vdso32.so.dbg vdso64* vdso-image-*.c vdsox32.so*
diff --git a/arch/x86/entry/vdso/vclock_gettime.c b/arch/x86/entry/vdso/vclock_gettime.c
index 03c3eb77bfce..2f02d23a05ef 100644
--- a/arch/x86/entry/vdso/vclock_gettime.c
+++ b/arch/x86/entry/vdso/vclock_gettime.c
@@ -13,7 +13,6 @@
#include <uapi/linux/time.h>
#include <asm/vgtod.h>
-#include <asm/hpet.h>
#include <asm/vvar.h>
#include <asm/unistd.h>
#include <asm/msr.h>
@@ -28,16 +27,6 @@ extern int __vdso_clock_gettime(clockid_t clock, struct timespec *ts);
extern int __vdso_gettimeofday(struct timeval *tv, struct timezone *tz);
extern time_t __vdso_time(time_t *t);
-#ifdef CONFIG_HPET_TIMER
-extern u8 hpet_page
- __attribute__((visibility("hidden")));
-
-static notrace cycle_t vread_hpet(void)
-{
- return *(const volatile u32 *)(&hpet_page + HPET_COUNTER);
-}
-#endif
-
#ifdef CONFIG_PARAVIRT_CLOCK
extern u8 pvclock_page
__attribute__((visibility("hidden")));
@@ -195,10 +184,6 @@ notrace static inline u64 vgetsns(int *mode)
if (gtod->vclock_mode == VCLOCK_TSC)
cycles = vread_tsc();
-#ifdef CONFIG_HPET_TIMER
- else if (gtod->vclock_mode == VCLOCK_HPET)
- cycles = vread_hpet();
-#endif
#ifdef CONFIG_PARAVIRT_CLOCK
else if (gtod->vclock_mode == VCLOCK_PVCLOCK)
cycles = vread_pvclock(mode);
diff --git a/arch/x86/entry/vdso/vdso-layout.lds.S b/arch/x86/entry/vdso/vdso-layout.lds.S
index 4158acc17df0..a708aa90b507 100644
--- a/arch/x86/entry/vdso/vdso-layout.lds.S
+++ b/arch/x86/entry/vdso/vdso-layout.lds.S
@@ -25,7 +25,7 @@ SECTIONS
* segment.
*/
- vvar_start = . - 3 * PAGE_SIZE;
+ vvar_start = . - 2 * PAGE_SIZE;
vvar_page = vvar_start;
/* Place all vvars at the offsets in asm/vvar.h. */
@@ -35,8 +35,7 @@ SECTIONS
#undef __VVAR_KERNEL_LDS
#undef EMIT_VVAR
- hpet_page = vvar_start + PAGE_SIZE;
- pvclock_page = vvar_start + 2 * PAGE_SIZE;
+ pvclock_page = vvar_start + PAGE_SIZE;
. = SIZEOF_HEADERS;
diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
index 10f704584922..ab220ac9b3b9 100644
--- a/arch/x86/entry/vdso/vma.c
+++ b/arch/x86/entry/vdso/vma.c
@@ -18,7 +18,6 @@
#include <asm/vdso.h>
#include <asm/vvar.h>
#include <asm/page.h>
-#include <asm/hpet.h>
#include <asm/desc.h>
#include <asm/cpufeature.h>
@@ -129,16 +128,6 @@ static int vvar_fault(const struct vm_special_mapping *sm,
if (sym_offset == image->sym_vvar_page) {
ret = vm_insert_pfn(vma, (unsigned long)vmf->virtual_address,
__pa_symbol(&__vvar_page) >> PAGE_SHIFT);
- } else if (sym_offset == image->sym_hpet_page) {
-#ifdef CONFIG_HPET_TIMER
- if (hpet_address && vclock_was_used(VCLOCK_HPET)) {
- ret = vm_insert_pfn_prot(
- vma,
- (unsigned long)vmf->virtual_address,
- hpet_address >> PAGE_SHIFT,
- pgprot_noncached(PAGE_READONLY));
- }
-#endif
} else if (sym_offset == image->sym_pvclock_page) {
struct pvclock_vsyscall_time_info *pvti =
pvclock_pvti_cpu0_va();
@@ -174,7 +163,8 @@ static int map_vdso(const struct vdso_image *image, bool calculate_addr)
addr = 0;
}
- down_write(&mm->mmap_sem);
+ if (down_write_killable(&mm->mmap_sem))
+ return -EINTR;
addr = get_unmapped_area(NULL, addr,
image->size - image->sym_vvar_start, 0, 0);
diff --git a/arch/x86/events/Kconfig b/arch/x86/events/Kconfig
new file mode 100644
index 000000000000..98397db5ceae
--- /dev/null
+++ b/arch/x86/events/Kconfig
@@ -0,0 +1,36 @@
+menu "Performance monitoring"
+
+config PERF_EVENTS_INTEL_UNCORE
+ tristate "Intel uncore performance events"
+ depends on PERF_EVENTS && CPU_SUP_INTEL && PCI
+ default y
+ ---help---
+ Include support for Intel uncore performance events. These are
+ available on NehalemEX and more modern processors.
+
+config PERF_EVENTS_INTEL_RAPL
+ tristate "Intel rapl performance events"
+ depends on PERF_EVENTS && CPU_SUP_INTEL && PCI
+ default y
+ ---help---
+ Include support for Intel rapl performance events for power
+ monitoring on modern processors.
+
+config PERF_EVENTS_INTEL_CSTATE
+ tristate "Intel cstate performance events"
+ depends on PERF_EVENTS && CPU_SUP_INTEL && PCI
+ default y
+ ---help---
+ Include support for Intel cstate performance events for power
+ monitoring on modern processors.
+
+config PERF_EVENTS_AMD_POWER
+ depends on PERF_EVENTS && CPU_SUP_AMD
+ tristate "AMD Processor Power Reporting Mechanism"
+ ---help---
+ Provide power reporting mechanism support for AMD processors.
+ Currently, it leverages X86_FEATURE_ACC_POWER
+ (CPUID Fn8000_0007_EDX[12]) interface to calculate the
+ average power consumption on Family 15h processors.
+
+endmenu
diff --git a/arch/x86/events/Makefile b/arch/x86/events/Makefile
index f59618a39990..1d392c39fe56 100644
--- a/arch/x86/events/Makefile
+++ b/arch/x86/events/Makefile
@@ -6,9 +6,6 @@ obj-$(CONFIG_X86_LOCAL_APIC) += amd/ibs.o msr.o
ifdef CONFIG_AMD_IOMMU
obj-$(CONFIG_CPU_SUP_AMD) += amd/iommu.o
endif
-obj-$(CONFIG_CPU_SUP_INTEL) += intel/core.o intel/bts.o intel/cqm.o
-obj-$(CONFIG_CPU_SUP_INTEL) += intel/cstate.o intel/ds.o intel/knc.o
-obj-$(CONFIG_CPU_SUP_INTEL) += intel/lbr.o intel/p4.o intel/p6.o intel/pt.o
-obj-$(CONFIG_CPU_SUP_INTEL) += intel/rapl.o msr.o
-obj-$(CONFIG_PERF_EVENTS_INTEL_UNCORE) += intel/uncore.o intel/uncore_nhmex.o
-obj-$(CONFIG_PERF_EVENTS_INTEL_UNCORE) += intel/uncore_snb.o intel/uncore_snbep.o
+
+obj-$(CONFIG_CPU_SUP_INTEL) += msr.o
+obj-$(CONFIG_CPU_SUP_INTEL) += intel/
diff --git a/arch/x86/events/amd/uncore.c b/arch/x86/events/amd/uncore.c
index 3db9569e658c..98ac57381bf9 100644
--- a/arch/x86/events/amd/uncore.c
+++ b/arch/x86/events/amd/uncore.c
@@ -263,6 +263,7 @@ static const struct attribute_group *amd_uncore_attr_groups[] = {
};
static struct pmu amd_nb_pmu = {
+ .task_ctx_nr = perf_invalid_context,
.attr_groups = amd_uncore_attr_groups,
.name = "amd_nb",
.event_init = amd_uncore_event_init,
@@ -274,6 +275,7 @@ static struct pmu amd_nb_pmu = {
};
static struct pmu amd_l2_pmu = {
+ .task_ctx_nr = perf_invalid_context,
.attr_groups = amd_uncore_attr_groups,
.name = "amd_l2",
.event_init = amd_uncore_event_init,
diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 041e442a3e28..33787ee817f0 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -360,6 +360,9 @@ int x86_add_exclusive(unsigned int what)
{
int i;
+ if (x86_pmu.lbr_pt_coexist)
+ return 0;
+
if (!atomic_inc_not_zero(&x86_pmu.lbr_exclusive[what])) {
mutex_lock(&pmc_reserve_mutex);
for (i = 0; i < ARRAY_SIZE(x86_pmu.lbr_exclusive); i++) {
@@ -380,6 +383,9 @@ fail_unlock:
void x86_del_exclusive(unsigned int what)
{
+ if (x86_pmu.lbr_pt_coexist)
+ return;
+
atomic_dec(&x86_pmu.lbr_exclusive[what]);
atomic_dec(&active_events);
}
@@ -1518,7 +1524,7 @@ x86_pmu_notifier(struct notifier_block *self, unsigned long action, void *hcpu)
static void __init pmu_check_apic(void)
{
- if (cpu_has_apic)
+ if (boot_cpu_has(X86_FEATURE_APIC))
return;
x86_pmu.apic = 0;
@@ -2177,7 +2183,7 @@ void arch_perf_update_userpage(struct perf_event *event,
* cap_user_time_zero doesn't make sense when we're using a different
* time base for the records.
*/
- if (event->clock == &local_clock) {
+ if (!event->attr.use_clockid) {
userpg->cap_user_time_zero = 1;
userpg->time_zero = data->cyc2ns_offset;
}
@@ -2196,7 +2202,7 @@ static int backtrace_stack(void *data, char *name)
static int backtrace_address(void *data, unsigned long addr, int reliable)
{
- struct perf_callchain_entry *entry = data;
+ struct perf_callchain_entry_ctx *entry = data;
return perf_callchain_store(entry, addr);
}
@@ -2208,7 +2214,7 @@ static const struct stacktrace_ops backtrace_ops = {
};
void
-perf_callchain_kernel(struct perf_callchain_entry *entry, struct pt_regs *regs)
+perf_callchain_kernel(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
{
if (perf_guest_cbs && perf_guest_cbs->is_in_guest()) {
/* TODO: We don't support guest os callchain now */
@@ -2262,7 +2268,7 @@ static unsigned long get_segment_base(unsigned int segment)
#include <asm/compat.h>
static inline int
-perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry *entry)
+perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *entry)
{
/* 32-bit process in 64-bit kernel. */
unsigned long ss_base, cs_base;
@@ -2277,7 +2283,7 @@ perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry *entry)
fp = compat_ptr(ss_base + regs->bp);
pagefault_disable();
- while (entry->nr < PERF_MAX_STACK_DEPTH) {
+ while (entry->nr < entry->max_stack) {
unsigned long bytes;
frame.next_frame = 0;
frame.return_address = 0;
@@ -2303,14 +2309,14 @@ perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry *entry)
}
#else
static inline int
-perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry *entry)
+perf_callchain_user32(struct pt_regs *regs, struct perf_callchain_entry_ctx *entry)
{
return 0;
}
#endif
void
-perf_callchain_user(struct perf_callchain_entry *entry, struct pt_regs *regs)
+perf_callchain_user(struct perf_callchain_entry_ctx *entry, struct pt_regs *regs)
{
struct stack_frame frame;
const void __user *fp;
@@ -2337,7 +2343,7 @@ perf_callchain_user(struct perf_callchain_entry *entry, struct pt_regs *regs)
return;
pagefault_disable();
- while (entry->nr < PERF_MAX_STACK_DEPTH) {
+ while (entry->nr < entry->max_stack) {
unsigned long bytes;
frame.next_frame = NULL;
frame.return_address = 0;
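
The events/core.c hunks above move the callchain walkers from the global PERF_MAX_STACK_DEPTH limit to the per-request bound carried in perf_callchain_entry_ctx (entry->max_stack), while keeping the same frame-pointer walk over the next_frame/return_address pair. A minimal user-space sketch of that bounded walk (not kernel code; it assumes the program is built with frame pointers, e.g. gcc -O0 or -fno-omit-frame-pointer):

#include <stdint.h>
#include <stdio.h>

struct stack_frame {				/* saved %rbp / return-address pair */
	struct stack_frame *next_frame;
	uintptr_t return_address;
};

/* Walk the frame-pointer chain, storing at most max_stack entries,
 * the same bound the patch takes from entry->max_stack. */
static __attribute__((noinline))
unsigned int collect_callchain(uintptr_t *chain, unsigned int max_stack)
{
	struct stack_frame *fp = __builtin_frame_address(0);
	unsigned int nr = 0;

	while (fp && nr < max_stack) {
		chain[nr++] = fp->return_address;
		if (fp->next_frame <= fp)	/* caller frames must be higher */
			break;
		fp = fp->next_frame;
	}
	return nr;
}

static __attribute__((noinline)) void leaf(void)
{
	uintptr_t chain[3];
	/* max_stack == 3 also keeps the walk within our own frames */
	unsigned int i, nr = collect_callchain(chain, 3);

	for (i = 0; i < nr; i++)
		printf("frame %u: %#lx\n", i, (unsigned long)chain[i]);
}

int main(void)
{
	leaf();
	return 0;
}
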
diff --git a/arch/x86/events/intel/Makefile b/arch/x86/events/intel/Makefile
new file mode 100644
index 000000000000..3660b2cf245a
--- /dev/null
+++ b/arch/x86/events/intel/Makefile
@@ -0,0 +1,9 @@
+obj-$(CONFIG_CPU_SUP_INTEL) += core.o bts.o cqm.o
+obj-$(CONFIG_CPU_SUP_INTEL) += ds.o knc.o
+obj-$(CONFIG_CPU_SUP_INTEL) += lbr.o p4.o p6.o pt.o
+obj-$(CONFIG_PERF_EVENTS_INTEL_RAPL) += intel-rapl.o
+intel-rapl-objs := rapl.o
+obj-$(CONFIG_PERF_EVENTS_INTEL_UNCORE) += intel-uncore.o
+intel-uncore-objs := uncore.o uncore_nhmex.o uncore_snb.o uncore_snbep.o
+obj-$(CONFIG_PERF_EVENTS_INTEL_CSTATE) += intel-cstate.o
+intel-cstate-objs := cstate.o
diff --git a/arch/x86/events/intel/bts.c b/arch/x86/events/intel/bts.c
index b99dc9258c0f..0a6e393a2e62 100644
--- a/arch/x86/events/intel/bts.c
+++ b/arch/x86/events/intel/bts.c
@@ -171,18 +171,6 @@ static void bts_buffer_pad_out(struct bts_phys *phys, unsigned long head)
memset(page_address(phys->page) + index, 0, phys->size - index);
}
-static bool bts_buffer_is_full(struct bts_buffer *buf, struct bts_ctx *bts)
-{
- if (buf->snapshot)
- return false;
-
- if (local_read(&buf->data_size) >= bts->handle.size ||
- bts->handle.size - local_read(&buf->data_size) < BTS_RECORD_SIZE)
- return true;
-
- return false;
-}
-
static void bts_update(struct bts_ctx *bts)
{
int cpu = raw_smp_processor_id();
@@ -213,18 +201,15 @@ static void bts_update(struct bts_ctx *bts)
}
}
+static int
+bts_buffer_reset(struct bts_buffer *buf, struct perf_output_handle *handle);
+
static void __bts_event_start(struct perf_event *event)
{
struct bts_ctx *bts = this_cpu_ptr(&bts_ctx);
struct bts_buffer *buf = perf_get_aux(&bts->handle);
u64 config = 0;
- if (!buf || bts_buffer_is_full(buf, bts))
- return;
-
- event->hw.itrace_started = 1;
- event->hw.state = 0;
-
if (!buf->snapshot)
config |= ARCH_PERFMON_EVENTSEL_INT;
if (!event->attr.exclude_kernel)
@@ -241,16 +226,41 @@ static void __bts_event_start(struct perf_event *event)
wmb();
intel_pmu_enable_bts(config);
+
}
static void bts_event_start(struct perf_event *event, int flags)
{
+ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
struct bts_ctx *bts = this_cpu_ptr(&bts_ctx);
+ struct bts_buffer *buf;
+
+ buf = perf_aux_output_begin(&bts->handle, event);
+ if (!buf)
+ goto fail_stop;
+
+ if (bts_buffer_reset(buf, &bts->handle))
+ goto fail_end_stop;
+
+ bts->ds_back.bts_buffer_base = cpuc->ds->bts_buffer_base;
+ bts->ds_back.bts_absolute_maximum = cpuc->ds->bts_absolute_maximum;
+ bts->ds_back.bts_interrupt_threshold = cpuc->ds->bts_interrupt_threshold;
+
+ event->hw.itrace_started = 1;
+ event->hw.state = 0;
__bts_event_start(event);
/* PMI handler: this counter is running and likely generating PMIs */
ACCESS_ONCE(bts->started) = 1;
+
+ return;
+
+fail_end_stop:
+ perf_aux_output_end(&bts->handle, 0, false);
+
+fail_stop:
+ event->hw.state = PERF_HES_STOPPED;
}
static void __bts_event_stop(struct perf_event *event)
@@ -269,15 +279,32 @@ static void __bts_event_stop(struct perf_event *event)
static void bts_event_stop(struct perf_event *event, int flags)
{
+ struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
struct bts_ctx *bts = this_cpu_ptr(&bts_ctx);
+ struct bts_buffer *buf = perf_get_aux(&bts->handle);
/* PMI handler: don't restart this counter */
ACCESS_ONCE(bts->started) = 0;
__bts_event_stop(event);
- if (flags & PERF_EF_UPDATE)
+ if (flags & PERF_EF_UPDATE) {
bts_update(bts);
+
+ if (buf) {
+ if (buf->snapshot)
+ bts->handle.head =
+ local_xchg(&buf->data_size,
+ buf->nr_pages << PAGE_SHIFT);
+ perf_aux_output_end(&bts->handle, local_xchg(&buf->data_size, 0),
+ !!local_xchg(&buf->lost, 0));
+ }
+
+ cpuc->ds->bts_index = bts->ds_back.bts_buffer_base;
+ cpuc->ds->bts_buffer_base = bts->ds_back.bts_buffer_base;
+ cpuc->ds->bts_absolute_maximum = bts->ds_back.bts_absolute_maximum;
+ cpuc->ds->bts_interrupt_threshold = bts->ds_back.bts_interrupt_threshold;
+ }
}
void intel_bts_enable_local(void)
@@ -417,34 +444,14 @@ int intel_bts_interrupt(void)
static void bts_event_del(struct perf_event *event, int mode)
{
- struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
- struct bts_ctx *bts = this_cpu_ptr(&bts_ctx);
- struct bts_buffer *buf = perf_get_aux(&bts->handle);
-
bts_event_stop(event, PERF_EF_UPDATE);
-
- if (buf) {
- if (buf->snapshot)
- bts->handle.head =
- local_xchg(&buf->data_size,
- buf->nr_pages << PAGE_SHIFT);
- perf_aux_output_end(&bts->handle, local_xchg(&buf->data_size, 0),
- !!local_xchg(&buf->lost, 0));
- }
-
- cpuc->ds->bts_index = bts->ds_back.bts_buffer_base;
- cpuc->ds->bts_buffer_base = bts->ds_back.bts_buffer_base;
- cpuc->ds->bts_absolute_maximum = bts->ds_back.bts_absolute_maximum;
- cpuc->ds->bts_interrupt_threshold = bts->ds_back.bts_interrupt_threshold;
}
static int bts_event_add(struct perf_event *event, int mode)
{
- struct bts_buffer *buf;
struct bts_ctx *bts = this_cpu_ptr(&bts_ctx);
struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
struct hw_perf_event *hwc = &event->hw;
- int ret = -EBUSY;
event->hw.state = PERF_HES_STOPPED;
@@ -454,26 +461,10 @@ static int bts_event_add(struct perf_event *event, int mode)
if (bts->handle.event)
return -EBUSY;
- buf = perf_aux_output_begin(&bts->handle, event);
- if (!buf)
- return -EINVAL;
-
- ret = bts_buffer_reset(buf, &bts->handle);
- if (ret) {
- perf_aux_output_end(&bts->handle, 0, false);
- return ret;
- }
-
- bts->ds_back.bts_buffer_base = cpuc->ds->bts_buffer_base;
- bts->ds_back.bts_absolute_maximum = cpuc->ds->bts_absolute_maximum;
- bts->ds_back.bts_interrupt_threshold = cpuc->ds->bts_interrupt_threshold;
-
if (mode & PERF_EF_START) {
bts_event_start(event, 0);
- if (hwc->state & PERF_HES_STOPPED) {
- bts_event_del(event, 0);
- return -EBUSY;
- }
+ if (hwc->state & PERF_HES_STOPPED)
+ return -EINVAL;
}
return 0;
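
bts_event_start() above now claims the AUX buffer itself via perf_aux_output_begin(), resets it, saves the DS back-up values, and unwinds through the fail_end_stop/fail_stop labels when any step fails; bts_event_add() is left with only the busy-handle check. A plain-C sketch of that claim-in-order, unwind-in-reverse goto pattern (the resource helpers are hypothetical stand-ins, not perf APIs):

#include <stdio.h>
#include <stdlib.h>

struct buffer { char *data; };

static struct buffer *claimed;		/* the "running" buffer, like bts->handle */

/* Hypothetical stand-ins for perf_aux_output_begin()/bts_buffer_reset(). */
static struct buffer *output_begin(void)
{
	return calloc(1, sizeof(struct buffer));
}

static int buffer_reset(struct buffer *buf)
{
	buf->data = malloc(64);
	return buf->data ? 0 : -1;
}

static void output_end(struct buffer *buf)
{
	free(buf->data);
	free(buf);
}

static int event_start(void)
{
	struct buffer *buf = output_begin();

	if (!buf)
		goto fail_stop;		/* nothing claimed yet */

	if (buffer_reset(buf))
		goto fail_end_stop;	/* release what we claimed */

	claimed = buf;			/* event is now running */
	return 0;

fail_end_stop:
	output_end(buf);
fail_stop:
	return -1;			/* leave the event stopped */
}

static void event_stop(void)
{
	if (claimed) {
		output_end(claimed);
		claimed = NULL;
	}
}

int main(void)
{
	if (event_start())
		return 1;
	printf("started\n");
	event_stop();
	return 0;
}
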
diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index a6fd4dbcf820..7c666958a625 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -1465,6 +1465,140 @@ static __initconst const u64 slm_hw_cache_event_ids
},
};
+static struct extra_reg intel_glm_extra_regs[] __read_mostly = {
+ /* must define OFFCORE_RSP_X first, see intel_fixup_er() */
+ INTEL_UEVENT_EXTRA_REG(0x01b7, MSR_OFFCORE_RSP_0, 0x760005ffbfull, RSP_0),
+ INTEL_UEVENT_EXTRA_REG(0x02b7, MSR_OFFCORE_RSP_1, 0x360005ffbfull, RSP_1),
+ EVENT_EXTRA_END
+};
+
+#define GLM_DEMAND_DATA_RD BIT_ULL(0)
+#define GLM_DEMAND_RFO BIT_ULL(1)
+#define GLM_ANY_RESPONSE BIT_ULL(16)
+#define GLM_SNP_NONE_OR_MISS BIT_ULL(33)
+#define GLM_DEMAND_READ GLM_DEMAND_DATA_RD
+#define GLM_DEMAND_WRITE GLM_DEMAND_RFO
+#define GLM_DEMAND_PREFETCH (SNB_PF_DATA_RD|SNB_PF_RFO)
+#define GLM_LLC_ACCESS GLM_ANY_RESPONSE
+#define GLM_SNP_ANY (GLM_SNP_NONE_OR_MISS|SNB_NO_FWD|SNB_HITM)
+#define GLM_LLC_MISS (GLM_SNP_ANY|SNB_NON_DRAM)
+
+static __initconst const u64 glm_hw_cache_event_ids
+ [PERF_COUNT_HW_CACHE_MAX]
+ [PERF_COUNT_HW_CACHE_OP_MAX]
+ [PERF_COUNT_HW_CACHE_RESULT_MAX] = {
+ [C(L1D)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x81d0, /* MEM_UOPS_RETIRED.ALL_LOADS */
+ [C(RESULT_MISS)] = 0x0,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = 0x82d0, /* MEM_UOPS_RETIRED.ALL_STORES */
+ [C(RESULT_MISS)] = 0x0,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = 0x0,
+ [C(RESULT_MISS)] = 0x0,
+ },
+ },
+ [C(L1I)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x0380, /* ICACHE.ACCESSES */
+ [C(RESULT_MISS)] = 0x0280, /* ICACHE.MISSES */
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = 0x0,
+ [C(RESULT_MISS)] = 0x0,
+ },
+ },
+ [C(LL)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x1b7, /* OFFCORE_RESPONSE */
+ [C(RESULT_MISS)] = 0x1b7, /* OFFCORE_RESPONSE */
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = 0x1b7, /* OFFCORE_RESPONSE */
+ [C(RESULT_MISS)] = 0x1b7, /* OFFCORE_RESPONSE */
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = 0x1b7, /* OFFCORE_RESPONSE */
+ [C(RESULT_MISS)] = 0x1b7, /* OFFCORE_RESPONSE */
+ },
+ },
+ [C(DTLB)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x81d0, /* MEM_UOPS_RETIRED.ALL_LOADS */
+ [C(RESULT_MISS)] = 0x0,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = 0x82d0, /* MEM_UOPS_RETIRED.ALL_STORES */
+ [C(RESULT_MISS)] = 0x0,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = 0x0,
+ [C(RESULT_MISS)] = 0x0,
+ },
+ },
+ [C(ITLB)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x00c0, /* INST_RETIRED.ANY_P */
+ [C(RESULT_MISS)] = 0x0481, /* ITLB.MISS */
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+ },
+ [C(BPU)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = 0x00c4, /* BR_INST_RETIRED.ALL_BRANCHES */
+ [C(RESULT_MISS)] = 0x00c5, /* BR_MISP_RETIRED.ALL_BRANCHES */
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = -1,
+ [C(RESULT_MISS)] = -1,
+ },
+ },
+};
+
+static __initconst const u64 glm_hw_cache_extra_regs
+ [PERF_COUNT_HW_CACHE_MAX]
+ [PERF_COUNT_HW_CACHE_OP_MAX]
+ [PERF_COUNT_HW_CACHE_RESULT_MAX] = {
+ [C(LL)] = {
+ [C(OP_READ)] = {
+ [C(RESULT_ACCESS)] = GLM_DEMAND_READ|
+ GLM_LLC_ACCESS,
+ [C(RESULT_MISS)] = GLM_DEMAND_READ|
+ GLM_LLC_MISS,
+ },
+ [C(OP_WRITE)] = {
+ [C(RESULT_ACCESS)] = GLM_DEMAND_WRITE|
+ GLM_LLC_ACCESS,
+ [C(RESULT_MISS)] = GLM_DEMAND_WRITE|
+ GLM_LLC_MISS,
+ },
+ [C(OP_PREFETCH)] = {
+ [C(RESULT_ACCESS)] = GLM_DEMAND_PREFETCH|
+ GLM_LLC_ACCESS,
+ [C(RESULT_MISS)] = GLM_DEMAND_PREFETCH|
+ GLM_LLC_MISS,
+ },
+ },
+};
+
#define KNL_OT_L2_HITE BIT_ULL(19) /* Other Tile L2 Hit */
#define KNL_OT_L2_HITF BIT_ULL(20) /* Other Tile L2 Hit */
#define KNL_MCDRAM_LOCAL BIT_ULL(21)
@@ -3447,7 +3581,7 @@ __init int intel_pmu_init(void)
memcpy(hw_cache_extra_regs, slm_hw_cache_extra_regs,
sizeof(hw_cache_extra_regs));
- intel_pmu_lbr_init_atom();
+ intel_pmu_lbr_init_slm();
x86_pmu.event_constraints = intel_slm_event_constraints;
x86_pmu.pebs_constraints = intel_slm_pebs_event_constraints;
@@ -3456,6 +3590,30 @@ __init int intel_pmu_init(void)
pr_cont("Silvermont events, ");
break;
+ case 92: /* 14nm Atom "Goldmont" */
+ case 95: /* 14nm Atom "Goldmont Denverton" */
+ memcpy(hw_cache_event_ids, glm_hw_cache_event_ids,
+ sizeof(hw_cache_event_ids));
+ memcpy(hw_cache_extra_regs, glm_hw_cache_extra_regs,
+ sizeof(hw_cache_extra_regs));
+
+ intel_pmu_lbr_init_skl();
+
+ x86_pmu.event_constraints = intel_slm_event_constraints;
+ x86_pmu.pebs_constraints = intel_glm_pebs_event_constraints;
+ x86_pmu.extra_regs = intel_glm_extra_regs;
+ /*
+ * It's recommended to use CPU_CLK_UNHALTED.CORE_P + NPEBS
+ * for precise cycles.
+ * :pp is identical to :ppp
+ */
+ x86_pmu.pebs_aliases = NULL;
+ x86_pmu.pebs_prec_dist = true;
+ x86_pmu.lbr_pt_coexist = true;
+ x86_pmu.flags |= PMU_FL_HAS_RSP_1;
+ pr_cont("Goldmont events, ");
+ break;
+
case 37: /* 32nm Westmere */
case 44: /* 32nm Westmere-EP */
case 47: /* 32nm Westmere-EX */
@@ -3708,7 +3866,7 @@ __init int intel_pmu_init(void)
c->idxmsk64 |= (1ULL << x86_pmu.num_counters) - 1;
}
c->idxmsk64 &=
- ~(~0UL << (INTEL_PMC_IDX_FIXED + x86_pmu.num_counters_fixed));
+ ~(~0ULL << (INTEL_PMC_IDX_FIXED + x86_pmu.num_counters_fixed));
c->weight = hweight64(c->idxmsk64);
}
}
diff --git a/arch/x86/events/intel/cstate.c b/arch/x86/events/intel/cstate.c
index 7946c4231169..9ba4e4136a15 100644
--- a/arch/x86/events/intel/cstate.c
+++ b/arch/x86/events/intel/cstate.c
@@ -91,6 +91,8 @@
#include <asm/cpu_device_id.h>
#include "../perf_event.h"
+MODULE_LICENSE("GPL");
+
#define DEFINE_CSTATE_FORMAT_ATTR(_var, _name, _format) \
static ssize_t __cstate_##_var##_show(struct kobject *kobj, \
struct kobj_attribute *attr, \
@@ -106,22 +108,27 @@ static ssize_t cstate_get_attr_cpumask(struct device *dev,
struct device_attribute *attr,
char *buf);
+/* Model -> events mapping */
+struct cstate_model {
+ unsigned long core_events;
+ unsigned long pkg_events;
+ unsigned long quirks;
+};
+
+/* Quirk flags */
+#define SLM_PKG_C6_USE_C7_MSR (1UL << 0)
+
struct perf_cstate_msr {
u64 msr;
struct perf_pmu_events_attr *attr;
- bool (*test)(int idx);
};
/* cstate_core PMU */
-
static struct pmu cstate_core_pmu;
static bool has_cstate_core;
-enum perf_cstate_core_id {
- /*
- * cstate_core events
- */
+enum perf_cstate_core_events {
PERF_CSTATE_CORE_C1_RES = 0,
PERF_CSTATE_CORE_C3_RES,
PERF_CSTATE_CORE_C6_RES,
@@ -130,69 +137,16 @@ enum perf_cstate_core_id {
PERF_CSTATE_CORE_EVENT_MAX,
};
-bool test_core(int idx)
-{
- if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
- boot_cpu_data.x86 != 6)
- return false;
-
- switch (boot_cpu_data.x86_model) {
- case 30: /* 45nm Nehalem */
- case 26: /* 45nm Nehalem-EP */
- case 46: /* 45nm Nehalem-EX */
-
- case 37: /* 32nm Westmere */
- case 44: /* 32nm Westmere-EP */
- case 47: /* 32nm Westmere-EX */
- if (idx == PERF_CSTATE_CORE_C3_RES ||
- idx == PERF_CSTATE_CORE_C6_RES)
- return true;
- break;
- case 42: /* 32nm SandyBridge */
- case 45: /* 32nm SandyBridge-E/EN/EP */
-
- case 58: /* 22nm IvyBridge */
- case 62: /* 22nm IvyBridge-EP/EX */
-
- case 60: /* 22nm Haswell Core */
- case 63: /* 22nm Haswell Server */
- case 69: /* 22nm Haswell ULT */
- case 70: /* 22nm Haswell + GT3e (Intel Iris Pro graphics) */
-
- case 61: /* 14nm Broadwell Core-M */
- case 86: /* 14nm Broadwell Xeon D */
- case 71: /* 14nm Broadwell + GT3e (Intel Iris Pro graphics) */
- case 79: /* 14nm Broadwell Server */
-
- case 78: /* 14nm Skylake Mobile */
- case 94: /* 14nm Skylake Desktop */
- if (idx == PERF_CSTATE_CORE_C3_RES ||
- idx == PERF_CSTATE_CORE_C6_RES ||
- idx == PERF_CSTATE_CORE_C7_RES)
- return true;
- break;
- case 55: /* 22nm Atom "Silvermont" */
- case 77: /* 22nm Atom "Silvermont Avoton/Rangely" */
- case 76: /* 14nm Atom "Airmont" */
- if (idx == PERF_CSTATE_CORE_C1_RES ||
- idx == PERF_CSTATE_CORE_C6_RES)
- return true;
- break;
- }
-
- return false;
-}
-
PMU_EVENT_ATTR_STRING(c1-residency, evattr_cstate_core_c1, "event=0x00");
PMU_EVENT_ATTR_STRING(c3-residency, evattr_cstate_core_c3, "event=0x01");
PMU_EVENT_ATTR_STRING(c6-residency, evattr_cstate_core_c6, "event=0x02");
PMU_EVENT_ATTR_STRING(c7-residency, evattr_cstate_core_c7, "event=0x03");
static struct perf_cstate_msr core_msr[] = {
- [PERF_CSTATE_CORE_C1_RES] = { MSR_CORE_C1_RES, &evattr_cstate_core_c1, test_core, },
- [PERF_CSTATE_CORE_C3_RES] = { MSR_CORE_C3_RESIDENCY, &evattr_cstate_core_c3, test_core, },
- [PERF_CSTATE_CORE_C6_RES] = { MSR_CORE_C6_RESIDENCY, &evattr_cstate_core_c6, test_core, },
- [PERF_CSTATE_CORE_C7_RES] = { MSR_CORE_C7_RESIDENCY, &evattr_cstate_core_c7, test_core, },
+ [PERF_CSTATE_CORE_C1_RES] = { MSR_CORE_C1_RES, &evattr_cstate_core_c1 },
+ [PERF_CSTATE_CORE_C3_RES] = { MSR_CORE_C3_RESIDENCY, &evattr_cstate_core_c3 },
+ [PERF_CSTATE_CORE_C6_RES] = { MSR_CORE_C6_RESIDENCY, &evattr_cstate_core_c6 },
+ [PERF_CSTATE_CORE_C7_RES] = { MSR_CORE_C7_RESIDENCY, &evattr_cstate_core_c7 },
};
static struct attribute *core_events_attrs[PERF_CSTATE_CORE_EVENT_MAX + 1] = {
@@ -234,18 +188,11 @@ static const struct attribute_group *core_attr_groups[] = {
NULL,
};
-/* cstate_core PMU end */
-
-
/* cstate_pkg PMU */
-
static struct pmu cstate_pkg_pmu;
static bool has_cstate_pkg;
-enum perf_cstate_pkg_id {
- /*
- * cstate_pkg events
- */
+enum perf_cstate_pkg_events {
PERF_CSTATE_PKG_C2_RES = 0,
PERF_CSTATE_PKG_C3_RES,
PERF_CSTATE_PKG_C6_RES,
@@ -257,69 +204,6 @@ enum perf_cstate_pkg_id {
PERF_CSTATE_PKG_EVENT_MAX,
};
-bool test_pkg(int idx)
-{
- if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
- boot_cpu_data.x86 != 6)
- return false;
-
- switch (boot_cpu_data.x86_model) {
- case 30: /* 45nm Nehalem */
- case 26: /* 45nm Nehalem-EP */
- case 46: /* 45nm Nehalem-EX */
-
- case 37: /* 32nm Westmere */
- case 44: /* 32nm Westmere-EP */
- case 47: /* 32nm Westmere-EX */
- if (idx == PERF_CSTATE_CORE_C3_RES ||
- idx == PERF_CSTATE_CORE_C6_RES ||
- idx == PERF_CSTATE_CORE_C7_RES)
- return true;
- break;
- case 42: /* 32nm SandyBridge */
- case 45: /* 32nm SandyBridge-E/EN/EP */
-
- case 58: /* 22nm IvyBridge */
- case 62: /* 22nm IvyBridge-EP/EX */
-
- case 60: /* 22nm Haswell Core */
- case 63: /* 22nm Haswell Server */
- case 70: /* 22nm Haswell + GT3e (Intel Iris Pro graphics) */
-
- case 61: /* 14nm Broadwell Core-M */
- case 86: /* 14nm Broadwell Xeon D */
- case 71: /* 14nm Broadwell + GT3e (Intel Iris Pro graphics) */
- case 79: /* 14nm Broadwell Server */
-
- case 78: /* 14nm Skylake Mobile */
- case 94: /* 14nm Skylake Desktop */
- if (idx == PERF_CSTATE_PKG_C2_RES ||
- idx == PERF_CSTATE_PKG_C3_RES ||
- idx == PERF_CSTATE_PKG_C6_RES ||
- idx == PERF_CSTATE_PKG_C7_RES)
- return true;
- break;
- case 55: /* 22nm Atom "Silvermont" */
- case 77: /* 22nm Atom "Silvermont Avoton/Rangely" */
- case 76: /* 14nm Atom "Airmont" */
- if (idx == PERF_CSTATE_CORE_C6_RES)
- return true;
- break;
- case 69: /* 22nm Haswell ULT */
- if (idx == PERF_CSTATE_PKG_C2_RES ||
- idx == PERF_CSTATE_PKG_C3_RES ||
- idx == PERF_CSTATE_PKG_C6_RES ||
- idx == PERF_CSTATE_PKG_C7_RES ||
- idx == PERF_CSTATE_PKG_C8_RES ||
- idx == PERF_CSTATE_PKG_C9_RES ||
- idx == PERF_CSTATE_PKG_C10_RES)
- return true;
- break;
- }
-
- return false;
-}
-
PMU_EVENT_ATTR_STRING(c2-residency, evattr_cstate_pkg_c2, "event=0x00");
PMU_EVENT_ATTR_STRING(c3-residency, evattr_cstate_pkg_c3, "event=0x01");
PMU_EVENT_ATTR_STRING(c6-residency, evattr_cstate_pkg_c6, "event=0x02");
@@ -329,13 +213,13 @@ PMU_EVENT_ATTR_STRING(c9-residency, evattr_cstate_pkg_c9, "event=0x05");
PMU_EVENT_ATTR_STRING(c10-residency, evattr_cstate_pkg_c10, "event=0x06");
static struct perf_cstate_msr pkg_msr[] = {
- [PERF_CSTATE_PKG_C2_RES] = { MSR_PKG_C2_RESIDENCY, &evattr_cstate_pkg_c2, test_pkg, },
- [PERF_CSTATE_PKG_C3_RES] = { MSR_PKG_C3_RESIDENCY, &evattr_cstate_pkg_c3, test_pkg, },
- [PERF_CSTATE_PKG_C6_RES] = { MSR_PKG_C6_RESIDENCY, &evattr_cstate_pkg_c6, test_pkg, },
- [PERF_CSTATE_PKG_C7_RES] = { MSR_PKG_C7_RESIDENCY, &evattr_cstate_pkg_c7, test_pkg, },
- [PERF_CSTATE_PKG_C8_RES] = { MSR_PKG_C8_RESIDENCY, &evattr_cstate_pkg_c8, test_pkg, },
- [PERF_CSTATE_PKG_C9_RES] = { MSR_PKG_C9_RESIDENCY, &evattr_cstate_pkg_c9, test_pkg, },
- [PERF_CSTATE_PKG_C10_RES] = { MSR_PKG_C10_RESIDENCY, &evattr_cstate_pkg_c10, test_pkg, },
+ [PERF_CSTATE_PKG_C2_RES] = { MSR_PKG_C2_RESIDENCY, &evattr_cstate_pkg_c2 },
+ [PERF_CSTATE_PKG_C3_RES] = { MSR_PKG_C3_RESIDENCY, &evattr_cstate_pkg_c3 },
+ [PERF_CSTATE_PKG_C6_RES] = { MSR_PKG_C6_RESIDENCY, &evattr_cstate_pkg_c6 },
+ [PERF_CSTATE_PKG_C7_RES] = { MSR_PKG_C7_RESIDENCY, &evattr_cstate_pkg_c7 },
+ [PERF_CSTATE_PKG_C8_RES] = { MSR_PKG_C8_RESIDENCY, &evattr_cstate_pkg_c8 },
+ [PERF_CSTATE_PKG_C9_RES] = { MSR_PKG_C9_RESIDENCY, &evattr_cstate_pkg_c9 },
+ [PERF_CSTATE_PKG_C10_RES] = { MSR_PKG_C10_RESIDENCY, &evattr_cstate_pkg_c10 },
};
static struct attribute *pkg_events_attrs[PERF_CSTATE_PKG_EVENT_MAX + 1] = {
@@ -366,8 +250,6 @@ static const struct attribute_group *pkg_attr_groups[] = {
NULL,
};
-/* cstate_pkg PMU end*/
-
static ssize_t cstate_get_attr_cpumask(struct device *dev,
struct device_attribute *attr,
char *buf)
@@ -385,7 +267,7 @@ static ssize_t cstate_get_attr_cpumask(struct device *dev,
static int cstate_pmu_event_init(struct perf_event *event)
{
u64 cfg = event->attr.config;
- int ret = 0;
+ int cpu;
if (event->attr.type != event->pmu->type)
return -ENOENT;
@@ -400,26 +282,36 @@ static int cstate_pmu_event_init(struct perf_event *event)
event->attr.sample_period) /* no sampling */
return -EINVAL;
+ if (event->cpu < 0)
+ return -EINVAL;
+
if (event->pmu == &cstate_core_pmu) {
if (cfg >= PERF_CSTATE_CORE_EVENT_MAX)
return -EINVAL;
if (!core_msr[cfg].attr)
return -EINVAL;
event->hw.event_base = core_msr[cfg].msr;
+ cpu = cpumask_any_and(&cstate_core_cpu_mask,
+ topology_sibling_cpumask(event->cpu));
} else if (event->pmu == &cstate_pkg_pmu) {
if (cfg >= PERF_CSTATE_PKG_EVENT_MAX)
return -EINVAL;
if (!pkg_msr[cfg].attr)
return -EINVAL;
event->hw.event_base = pkg_msr[cfg].msr;
- } else
+ cpu = cpumask_any_and(&cstate_pkg_cpu_mask,
+ topology_core_cpumask(event->cpu));
+ } else {
return -ENOENT;
+ }
+
+ if (cpu >= nr_cpu_ids)
+ return -ENODEV;
- /* must be done before validate_group */
+ event->cpu = cpu;
event->hw.config = cfg;
event->hw.idx = -1;
-
- return ret;
+ return 0;
}
static inline u64 cstate_pmu_read_counter(struct perf_event *event)
@@ -469,172 +361,91 @@ static int cstate_pmu_event_add(struct perf_event *event, int mode)
return 0;
}
+/*
+ * Check if exiting cpu is the designated reader. If so migrate the
+ * events when there is a valid target available
+ */
static void cstate_cpu_exit(int cpu)
{
- int i, id, target;
+ unsigned int target;
- /* cpu exit for cstate core */
- if (has_cstate_core) {
- id = topology_core_id(cpu);
- target = -1;
-
- for_each_online_cpu(i) {
- if (i == cpu)
- continue;
- if (id == topology_core_id(i)) {
- target = i;
- break;
- }
- }
- if (cpumask_test_and_clear_cpu(cpu, &cstate_core_cpu_mask) && target >= 0)
+ if (has_cstate_core &&
+ cpumask_test_and_clear_cpu(cpu, &cstate_core_cpu_mask)) {
+
+ target = cpumask_any_but(topology_sibling_cpumask(cpu), cpu);
+ /* Migrate events if there is a valid target */
+ if (target < nr_cpu_ids) {
cpumask_set_cpu(target, &cstate_core_cpu_mask);
- WARN_ON(cpumask_empty(&cstate_core_cpu_mask));
- if (target >= 0)
perf_pmu_migrate_context(&cstate_core_pmu, cpu, target);
+ }
}
- /* cpu exit for cstate pkg */
- if (has_cstate_pkg) {
- id = topology_physical_package_id(cpu);
- target = -1;
-
- for_each_online_cpu(i) {
- if (i == cpu)
- continue;
- if (id == topology_physical_package_id(i)) {
- target = i;
- break;
- }
- }
- if (cpumask_test_and_clear_cpu(cpu, &cstate_pkg_cpu_mask) && target >= 0)
+ if (has_cstate_pkg &&
+ cpumask_test_and_clear_cpu(cpu, &cstate_pkg_cpu_mask)) {
+
+ target = cpumask_any_but(topology_core_cpumask(cpu), cpu);
+ /* Migrate events if there is a valid target */
+ if (target < nr_cpu_ids) {
cpumask_set_cpu(target, &cstate_pkg_cpu_mask);
- WARN_ON(cpumask_empty(&cstate_pkg_cpu_mask));
- if (target >= 0)
perf_pmu_migrate_context(&cstate_pkg_pmu, cpu, target);
+ }
}
}
static void cstate_cpu_init(int cpu)
{
- int i, id;
+ unsigned int target;
- /* cpu init for cstate core */
- if (has_cstate_core) {
- id = topology_core_id(cpu);
- for_each_cpu(i, &cstate_core_cpu_mask) {
- if (id == topology_core_id(i))
- break;
- }
- if (i >= nr_cpu_ids)
- cpumask_set_cpu(cpu, &cstate_core_cpu_mask);
- }
+ /*
+ * If this is the first online thread of that core, set it in
+ * the core cpu mask as the designated reader.
+ */
+ target = cpumask_any_and(&cstate_core_cpu_mask,
+ topology_sibling_cpumask(cpu));
- /* cpu init for cstate pkg */
- if (has_cstate_pkg) {
- id = topology_physical_package_id(cpu);
- for_each_cpu(i, &cstate_pkg_cpu_mask) {
- if (id == topology_physical_package_id(i))
- break;
- }
- if (i >= nr_cpu_ids)
- cpumask_set_cpu(cpu, &cstate_pkg_cpu_mask);
- }
+ if (has_cstate_core && target >= nr_cpu_ids)
+ cpumask_set_cpu(cpu, &cstate_core_cpu_mask);
+
+ /*
+ * If this is the first online thread of that package, set it
+ * in the package cpu mask as the designated reader.
+ */
+ target = cpumask_any_and(&cstate_pkg_cpu_mask,
+ topology_core_cpumask(cpu));
+ if (has_cstate_pkg && target >= nr_cpu_ids)
+ cpumask_set_cpu(cpu, &cstate_pkg_cpu_mask);
}
static int cstate_cpu_notifier(struct notifier_block *self,
- unsigned long action, void *hcpu)
+ unsigned long action, void *hcpu)
{
unsigned int cpu = (long)hcpu;
switch (action & ~CPU_TASKS_FROZEN) {
- case CPU_UP_PREPARE:
- break;
case CPU_STARTING:
cstate_cpu_init(cpu);
break;
- case CPU_UP_CANCELED:
- case CPU_DYING:
- break;
- case CPU_ONLINE:
- case CPU_DEAD:
- break;
case CPU_DOWN_PREPARE:
cstate_cpu_exit(cpu);
break;
default:
break;
}
-
return NOTIFY_OK;
}
-/*
- * Probe the cstate events and insert the available one into sysfs attrs
- * Return false if there is no available events.
- */
-static bool cstate_probe_msr(struct perf_cstate_msr *msr,
- struct attribute **events_attrs,
- int max_event_nr)
-{
- int i, j = 0;
- u64 val;
-
- /* Probe the cstate events. */
- for (i = 0; i < max_event_nr; i++) {
- if (!msr[i].test(i) || rdmsrl_safe(msr[i].msr, &val))
- msr[i].attr = NULL;
- }
-
- /* List remaining events in the sysfs attrs. */
- for (i = 0; i < max_event_nr; i++) {
- if (msr[i].attr)
- events_attrs[j++] = &msr[i].attr->attr.attr;
- }
- events_attrs[j] = NULL;
-
- return (j > 0) ? true : false;
-}
-
-static int __init cstate_init(void)
-{
- /* SLM has different MSR for PKG C6 */
- switch (boot_cpu_data.x86_model) {
- case 55:
- case 76:
- case 77:
- pkg_msr[PERF_CSTATE_PKG_C6_RES].msr = MSR_PKG_C7_RESIDENCY;
- }
-
- if (cstate_probe_msr(core_msr, core_events_attrs, PERF_CSTATE_CORE_EVENT_MAX))
- has_cstate_core = true;
-
- if (cstate_probe_msr(pkg_msr, pkg_events_attrs, PERF_CSTATE_PKG_EVENT_MAX))
- has_cstate_pkg = true;
-
- return (has_cstate_core || has_cstate_pkg) ? 0 : -ENODEV;
-}
-
-static void __init cstate_cpumask_init(void)
-{
- int cpu;
-
- cpu_notifier_register_begin();
-
- for_each_online_cpu(cpu)
- cstate_cpu_init(cpu);
-
- __perf_cpu_notifier(cstate_cpu_notifier);
-
- cpu_notifier_register_done();
-}
+static struct notifier_block cstate_cpu_nb = {
+ .notifier_call = cstate_cpu_notifier,
+ .priority = CPU_PRI_PERF + 1,
+};
static struct pmu cstate_core_pmu = {
.attr_groups = core_attr_groups,
.name = "cstate_core",
.task_ctx_nr = perf_invalid_context,
.event_init = cstate_pmu_event_init,
- .add = cstate_pmu_event_add, /* must have */
- .del = cstate_pmu_event_del, /* must have */
+ .add = cstate_pmu_event_add,
+ .del = cstate_pmu_event_del,
.start = cstate_pmu_event_start,
.stop = cstate_pmu_event_stop,
.read = cstate_pmu_event_update,
@@ -646,49 +457,203 @@ static struct pmu cstate_pkg_pmu = {
.name = "cstate_pkg",
.task_ctx_nr = perf_invalid_context,
.event_init = cstate_pmu_event_init,
- .add = cstate_pmu_event_add, /* must have */
- .del = cstate_pmu_event_del, /* must have */
+ .add = cstate_pmu_event_add,
+ .del = cstate_pmu_event_del,
.start = cstate_pmu_event_start,
.stop = cstate_pmu_event_stop,
.read = cstate_pmu_event_update,
.capabilities = PERF_PMU_CAP_NO_INTERRUPT,
};
-static void __init cstate_pmus_register(void)
+static const struct cstate_model nhm_cstates __initconst = {
+ .core_events = BIT(PERF_CSTATE_CORE_C3_RES) |
+ BIT(PERF_CSTATE_CORE_C6_RES),
+
+ .pkg_events = BIT(PERF_CSTATE_PKG_C3_RES) |
+ BIT(PERF_CSTATE_PKG_C6_RES) |
+ BIT(PERF_CSTATE_PKG_C7_RES),
+};
+
+static const struct cstate_model snb_cstates __initconst = {
+ .core_events = BIT(PERF_CSTATE_CORE_C3_RES) |
+ BIT(PERF_CSTATE_CORE_C6_RES) |
+ BIT(PERF_CSTATE_CORE_C7_RES),
+
+ .pkg_events = BIT(PERF_CSTATE_PKG_C2_RES) |
+ BIT(PERF_CSTATE_PKG_C3_RES) |
+ BIT(PERF_CSTATE_PKG_C6_RES) |
+ BIT(PERF_CSTATE_PKG_C7_RES),
+};
+
+static const struct cstate_model hswult_cstates __initconst = {
+ .core_events = BIT(PERF_CSTATE_CORE_C3_RES) |
+ BIT(PERF_CSTATE_CORE_C6_RES) |
+ BIT(PERF_CSTATE_CORE_C7_RES),
+
+ .pkg_events = BIT(PERF_CSTATE_PKG_C2_RES) |
+ BIT(PERF_CSTATE_PKG_C3_RES) |
+ BIT(PERF_CSTATE_PKG_C6_RES) |
+ BIT(PERF_CSTATE_PKG_C7_RES) |
+ BIT(PERF_CSTATE_PKG_C8_RES) |
+ BIT(PERF_CSTATE_PKG_C9_RES) |
+ BIT(PERF_CSTATE_PKG_C10_RES),
+};
+
+static const struct cstate_model slm_cstates __initconst = {
+ .core_events = BIT(PERF_CSTATE_CORE_C1_RES) |
+ BIT(PERF_CSTATE_CORE_C6_RES),
+
+ .pkg_events = BIT(PERF_CSTATE_PKG_C6_RES),
+ .quirks = SLM_PKG_C6_USE_C7_MSR,
+};
+
+#define X86_CSTATES_MODEL(model, states) \
+ { X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY, (unsigned long) &(states) }
+
+static const struct x86_cpu_id intel_cstates_match[] __initconst = {
+ X86_CSTATES_MODEL(30, nhm_cstates), /* 45nm Nehalem */
+ X86_CSTATES_MODEL(26, nhm_cstates), /* 45nm Nehalem-EP */
+ X86_CSTATES_MODEL(46, nhm_cstates), /* 45nm Nehalem-EX */
+
+ X86_CSTATES_MODEL(37, nhm_cstates), /* 32nm Westmere */
+ X86_CSTATES_MODEL(44, nhm_cstates), /* 32nm Westmere-EP */
+ X86_CSTATES_MODEL(47, nhm_cstates), /* 32nm Westmere-EX */
+
+ X86_CSTATES_MODEL(42, snb_cstates), /* 32nm SandyBridge */
+ X86_CSTATES_MODEL(45, snb_cstates), /* 32nm SandyBridge-E/EN/EP */
+
+ X86_CSTATES_MODEL(58, snb_cstates), /* 22nm IvyBridge */
+ X86_CSTATES_MODEL(62, snb_cstates), /* 22nm IvyBridge-EP/EX */
+
+ X86_CSTATES_MODEL(60, snb_cstates), /* 22nm Haswell Core */
+ X86_CSTATES_MODEL(63, snb_cstates), /* 22nm Haswell Server */
+ X86_CSTATES_MODEL(70, snb_cstates), /* 22nm Haswell + GT3e */
+
+ X86_CSTATES_MODEL(69, hswult_cstates), /* 22nm Haswell ULT */
+
+ X86_CSTATES_MODEL(55, slm_cstates), /* 22nm Atom Silvermont */
+ X86_CSTATES_MODEL(77, slm_cstates), /* 22nm Atom Avoton/Rangely */
+ X86_CSTATES_MODEL(76, slm_cstates), /* 22nm Atom Airmont */
+
+ X86_CSTATES_MODEL(61, snb_cstates), /* 14nm Broadwell Core-M */
+ X86_CSTATES_MODEL(86, snb_cstates), /* 14nm Broadwell Xeon D */
+ X86_CSTATES_MODEL(71, snb_cstates), /* 14nm Broadwell + GT3e */
+ X86_CSTATES_MODEL(79, snb_cstates), /* 14nm Broadwell Server */
+
+ X86_CSTATES_MODEL(78, snb_cstates), /* 14nm Skylake Mobile */
+ X86_CSTATES_MODEL(94, snb_cstates), /* 14nm Skylake Desktop */
+ { },
+};
+MODULE_DEVICE_TABLE(x86cpu, intel_cstates_match);
+
+/*
+ * Probe the cstate events and insert the available one into sysfs attrs
+ * Return false if there are no available events.
+ */
+static bool __init cstate_probe_msr(const unsigned long evmsk, int max,
+ struct perf_cstate_msr *msr,
+ struct attribute **attrs)
{
- int err;
+ bool found = false;
+ unsigned int bit;
+ u64 val;
+
+ for (bit = 0; bit < max; bit++) {
+ if (test_bit(bit, &evmsk) && !rdmsrl_safe(msr[bit].msr, &val)) {
+ *attrs++ = &msr[bit].attr->attr.attr;
+ found = true;
+ } else {
+ msr[bit].attr = NULL;
+ }
+ }
+ *attrs = NULL;
+
+ return found;
+}
+
+static int __init cstate_probe(const struct cstate_model *cm)
+{
+ /* SLM has different MSR for PKG C6 */
+ if (cm->quirks & SLM_PKG_C6_USE_C7_MSR)
+ pkg_msr[PERF_CSTATE_PKG_C6_RES].msr = MSR_PKG_C7_RESIDENCY;
+
+ has_cstate_core = cstate_probe_msr(cm->core_events,
+ PERF_CSTATE_CORE_EVENT_MAX,
+ core_msr, core_events_attrs);
+
+ has_cstate_pkg = cstate_probe_msr(cm->pkg_events,
+ PERF_CSTATE_PKG_EVENT_MAX,
+ pkg_msr, pkg_events_attrs);
+
+ return (has_cstate_core || has_cstate_pkg) ? 0 : -ENODEV;
+}
+
+static inline void cstate_cleanup(void)
+{
+ if (has_cstate_core)
+ perf_pmu_unregister(&cstate_core_pmu);
+
+ if (has_cstate_pkg)
+ perf_pmu_unregister(&cstate_pkg_pmu);
+}
+
+static int __init cstate_init(void)
+{
+ int cpu, err;
+
+ cpu_notifier_register_begin();
+ for_each_online_cpu(cpu)
+ cstate_cpu_init(cpu);
if (has_cstate_core) {
err = perf_pmu_register(&cstate_core_pmu, cstate_core_pmu.name, -1);
- if (WARN_ON(err))
- pr_info("Failed to register PMU %s error %d\n",
- cstate_core_pmu.name, err);
+ if (err) {
+ has_cstate_core = false;
+ pr_info("Failed to register cstate core pmu\n");
+ goto out;
+ }
}
if (has_cstate_pkg) {
err = perf_pmu_register(&cstate_pkg_pmu, cstate_pkg_pmu.name, -1);
- if (WARN_ON(err))
- pr_info("Failed to register PMU %s error %d\n",
- cstate_pkg_pmu.name, err);
+ if (err) {
+ has_cstate_pkg = false;
+ pr_info("Failed to register cstate pkg pmu\n");
+ cstate_cleanup();
+ goto out;
+ }
}
+ __register_cpu_notifier(&cstate_cpu_nb);
+out:
+ cpu_notifier_register_done();
+ return err;
}
static int __init cstate_pmu_init(void)
{
+ const struct x86_cpu_id *id;
int err;
- if (cpu_has_hypervisor)
+ if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
+ return -ENODEV;
+
+ id = x86_match_cpu(intel_cstates_match);
+ if (!id)
return -ENODEV;
- err = cstate_init();
+ err = cstate_probe((const struct cstate_model *) id->driver_data);
if (err)
return err;
- cstate_cpumask_init();
-
- cstate_pmus_register();
-
- return 0;
+ return cstate_init();
}
+module_init(cstate_pmu_init);
-device_initcall(cstate_pmu_init);
+static void __exit cstate_pmu_exit(void)
+{
+ cpu_notifier_register_begin();
+ __unregister_cpu_notifier(&cstate_cpu_nb);
+ cstate_cleanup();
+ cpu_notifier_register_done();
+}
+module_exit(cstate_pmu_exit);
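
The cstate.c rework above replaces the per-model test_core()/test_pkg() switch statements with an x86_cpu_id match table whose driver_data points at a struct cstate_model of core/package event bitmasks (plus quirks), consumed once by cstate_probe(). A small stand-alone sketch of that table-plus-bitmask pattern in plain C (model numbers and event sets taken from the hunk; the lookup is a simplified stand-in for x86_match_cpu()):

#include <stddef.h>
#include <stdio.h>

/* Core C-state residency events, as in the driver. */
enum { CORE_C1 = 0, CORE_C3, CORE_C6, CORE_C7, CORE_EVENT_MAX };

struct cstate_model {
	unsigned int model;		/* CPU model number */
	unsigned long core_events;	/* bitmask of supported events */
};

/* Table-driven replacement for the old per-model switch. */
static const struct cstate_model models[] = {
	{ 55, (1UL << CORE_C1) | (1UL << CORE_C6) },			/* Silvermont */
	{ 60, (1UL << CORE_C3) | (1UL << CORE_C6) | (1UL << CORE_C7) },	/* Haswell */
};

static const struct cstate_model *match_model(unsigned int model)
{
	size_t i;

	for (i = 0; i < sizeof(models) / sizeof(models[0]); i++)
		if (models[i].model == model)
			return &models[i];
	return NULL;
}

int main(void)
{
	const struct cstate_model *cm = match_model(55);
	int bit;

	if (!cm)
		return 1;
	for (bit = 0; bit < CORE_EVENT_MAX; bit++)
		printf("core event %d: %s\n", bit,
		       (cm->core_events & (1UL << bit)) ? "supported" : "not supported");
	return 0;
}
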
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index 8584b90d8e0b..7ce9f3f669e6 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -645,6 +645,12 @@ struct event_constraint intel_slm_pebs_event_constraints[] = {
EVENT_CONSTRAINT_END
};
+struct event_constraint intel_glm_pebs_event_constraints[] = {
+ /* Allow all events as PEBS with no flags */
+ INTEL_ALL_EVENT_CONSTRAINT(0, 0x1),
+ EVENT_CONSTRAINT_END
+};
+
struct event_constraint intel_nehalem_pebs_event_constraints[] = {
INTEL_PLD_CONSTRAINT(0x100b, 0xf), /* MEM_INST_RETIRED.* */
INTEL_FLAGS_EVENT_CONSTRAINT(0x0f, 0xf), /* MEM_UNCORE_RETIRED.* */
diff --git a/arch/x86/events/intel/lbr.c b/arch/x86/events/intel/lbr.c
index 1ca5d1e7d4f2..9e2b40cdb05f 100644
--- a/arch/x86/events/intel/lbr.c
+++ b/arch/x86/events/intel/lbr.c
@@ -14,7 +14,8 @@ enum {
LBR_FORMAT_EIP_FLAGS = 0x03,
LBR_FORMAT_EIP_FLAGS2 = 0x04,
LBR_FORMAT_INFO = 0x05,
- LBR_FORMAT_MAX_KNOWN = LBR_FORMAT_INFO,
+ LBR_FORMAT_TIME = 0x06,
+ LBR_FORMAT_MAX_KNOWN = LBR_FORMAT_TIME,
};
static enum {
@@ -464,6 +465,16 @@ static void intel_pmu_lbr_read_64(struct cpu_hw_events *cpuc)
abort = !!(info & LBR_INFO_ABORT);
cycles = (info & LBR_INFO_CYCLES);
}
+
+ if (lbr_format == LBR_FORMAT_TIME) {
+ mis = !!(from & LBR_FROM_FLAG_MISPRED);
+ pred = !mis;
+ skip = 1;
+ cycles = ((to >> 48) & LBR_INFO_CYCLES);
+
+ to = (u64)((((s64)to) << 16) >> 16);
+ }
+
if (lbr_flags & LBR_EIP_FLAGS) {
mis = !!(from & LBR_FROM_FLAG_MISPRED);
pred = !mis;
@@ -1049,6 +1060,24 @@ void __init intel_pmu_lbr_init_atom(void)
pr_cont("8-deep LBR, ");
}
+/* slm */
+void __init intel_pmu_lbr_init_slm(void)
+{
+ x86_pmu.lbr_nr = 8;
+ x86_pmu.lbr_tos = MSR_LBR_TOS;
+ x86_pmu.lbr_from = MSR_LBR_CORE_FROM;
+ x86_pmu.lbr_to = MSR_LBR_CORE_TO;
+
+ x86_pmu.lbr_sel_mask = LBR_SEL_MASK;
+ x86_pmu.lbr_sel_map = nhm_lbr_sel_map;
+
+ /*
+ * SW branch filter usage:
+ * - compensate for lack of HW filter
+ */
+ pr_cont("8-deep LBR, ");
+}
+
/* Knights Landing */
void intel_pmu_lbr_init_knl(void)
{
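
For LBR_FORMAT_TIME above, the TO register packs the cycle count into bits 63:48 and the branch target into bits 47:0, which the hunk sign-extends back to a canonical address with (u64)((((s64)to) << 16) >> 16). A stand-alone sketch of that unpacking (the sample value is invented; like the kernel idiom it relies on arithmetic right shift of signed values, which gcc and clang provide):

#include <stdint.h>
#include <stdio.h>

/* Split a FORMAT_TIME "to" word into cycles (bits 63:48) and a
 * sign-extended 48-bit target address (bits 47:0). */
static void unpack_to(uint64_t to, uint16_t *cycles, uint64_t *target)
{
	*cycles = (uint16_t)(to >> 48);
	/* move the 48-bit field up to bit 63, then arithmetic-shift it back */
	*target = (uint64_t)(((int64_t)(to << 16)) >> 16);
}

int main(void)
{
	/* invented sample: 0x002a cycles, kernel-style target 0xffffb000c0de */
	uint64_t to = ((uint64_t)0x002a << 48) | 0xffffb000c0deULL;
	uint16_t cycles;
	uint64_t target;

	unpack_to(to, &cycles, &target);
	printf("cycles=%u target=%#llx\n", cycles, (unsigned long long)target);
	return 0;
}
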
diff --git a/arch/x86/events/intel/p4.c b/arch/x86/events/intel/p4.c
index 0a5ede187d9c..eb0533558c2b 100644
--- a/arch/x86/events/intel/p4.c
+++ b/arch/x86/events/intel/p4.c
@@ -826,7 +826,7 @@ static int p4_hw_config(struct perf_event *event)
* Clear bits we reserve to be managed by kernel itself
* and never allowed from a user space
*/
- event->attr.config &= P4_CONFIG_MASK;
+ event->attr.config &= P4_CONFIG_MASK;
rc = p4_validate_raw_event(event);
if (rc)
diff --git a/arch/x86/events/intel/pt.c b/arch/x86/events/intel/pt.c
index 09a77dbc73c9..04bb5fb5a8d7 100644
--- a/arch/x86/events/intel/pt.c
+++ b/arch/x86/events/intel/pt.c
@@ -67,11 +67,13 @@ static struct pt_cap_desc {
PT_CAP(max_subleaf, 0, CR_EAX, 0xffffffff),
PT_CAP(cr3_filtering, 0, CR_EBX, BIT(0)),
PT_CAP(psb_cyc, 0, CR_EBX, BIT(1)),
+ PT_CAP(ip_filtering, 0, CR_EBX, BIT(2)),
PT_CAP(mtc, 0, CR_EBX, BIT(3)),
PT_CAP(topa_output, 0, CR_ECX, BIT(0)),
PT_CAP(topa_multiple_entries, 0, CR_ECX, BIT(1)),
PT_CAP(single_range_output, 0, CR_ECX, BIT(2)),
PT_CAP(payloads_lip, 0, CR_ECX, BIT(31)),
+ PT_CAP(num_address_ranges, 1, CR_EAX, 0x3),
PT_CAP(mtc_periods, 1, CR_EAX, 0xffff0000),
PT_CAP(cycle_thresholds, 1, CR_EBX, 0xffff),
PT_CAP(psb_periods, 1, CR_EBX, 0xffff0000),
@@ -125,9 +127,46 @@ static struct attribute_group pt_format_group = {
.attrs = pt_formats_attr,
};
+static ssize_t
+pt_timing_attr_show(struct device *dev, struct device_attribute *attr,
+ char *page)
+{
+ struct perf_pmu_events_attr *pmu_attr =
+ container_of(attr, struct perf_pmu_events_attr, attr);
+
+ switch (pmu_attr->id) {
+ case 0:
+ return sprintf(page, "%lu\n", pt_pmu.max_nonturbo_ratio);
+ case 1:
+ return sprintf(page, "%u:%u\n",
+ pt_pmu.tsc_art_num,
+ pt_pmu.tsc_art_den);
+ default:
+ break;
+ }
+
+ return -EINVAL;
+}
+
+PMU_EVENT_ATTR(max_nonturbo_ratio, timing_attr_max_nonturbo_ratio, 0,
+ pt_timing_attr_show);
+PMU_EVENT_ATTR(tsc_art_ratio, timing_attr_tsc_art_ratio, 1,
+ pt_timing_attr_show);
+
+static struct attribute *pt_timing_attr[] = {
+ &timing_attr_max_nonturbo_ratio.attr.attr,
+ &timing_attr_tsc_art_ratio.attr.attr,
+ NULL,
+};
+
+static struct attribute_group pt_timing_group = {
+ .attrs = pt_timing_attr,
+};
+
static const struct attribute_group *pt_attr_groups[] = {
&pt_cap_group,
&pt_format_group,
+ &pt_timing_group,
NULL,
};
@@ -140,6 +179,23 @@ static int __init pt_pmu_hw_init(void)
int ret;
long i;
+ rdmsrl(MSR_PLATFORM_INFO, reg);
+ pt_pmu.max_nonturbo_ratio = (reg & 0xff00) >> 8;
+
+ /*
+ * if available, read in TSC to core crystal clock ratio,
+ * otherwise, zero for numerator stands for "not enumerated"
+ * as per SDM
+ */
+ if (boot_cpu_data.cpuid_level >= CPUID_TSC_LEAF) {
+ u32 eax, ebx, ecx, edx;
+
+ cpuid(CPUID_TSC_LEAF, &eax, &ebx, &ecx, &edx);
+
+ pt_pmu.tsc_art_num = ebx;
+ pt_pmu.tsc_art_den = eax;
+ }
+
if (boot_cpu_has(X86_FEATURE_VMX)) {
/*
* Intel SDM, 36.5 "Tracing post-VMXON" says that
@@ -263,6 +319,75 @@ static bool pt_event_valid(struct perf_event *event)
* These all are cpu affine and operate on a local PT
*/
+/* Address ranges and their corresponding msr configuration registers */
+static const struct pt_address_range {
+ unsigned long msr_a;
+ unsigned long msr_b;
+ unsigned int reg_off;
+} pt_address_ranges[] = {
+ {
+ .msr_a = MSR_IA32_RTIT_ADDR0_A,
+ .msr_b = MSR_IA32_RTIT_ADDR0_B,
+ .reg_off = RTIT_CTL_ADDR0_OFFSET,
+ },
+ {
+ .msr_a = MSR_IA32_RTIT_ADDR1_A,
+ .msr_b = MSR_IA32_RTIT_ADDR1_B,
+ .reg_off = RTIT_CTL_ADDR1_OFFSET,
+ },
+ {
+ .msr_a = MSR_IA32_RTIT_ADDR2_A,
+ .msr_b = MSR_IA32_RTIT_ADDR2_B,
+ .reg_off = RTIT_CTL_ADDR2_OFFSET,
+ },
+ {
+ .msr_a = MSR_IA32_RTIT_ADDR3_A,
+ .msr_b = MSR_IA32_RTIT_ADDR3_B,
+ .reg_off = RTIT_CTL_ADDR3_OFFSET,
+ }
+};
+
+static u64 pt_config_filters(struct perf_event *event)
+{
+ struct pt_filters *filters = event->hw.addr_filters;
+ struct pt *pt = this_cpu_ptr(&pt_ctx);
+ unsigned int range = 0;
+ u64 rtit_ctl = 0;
+
+ if (!filters)
+ return 0;
+
+ perf_event_addr_filters_sync(event);
+
+ for (range = 0; range < filters->nr_filters; range++) {
+ struct pt_filter *filter = &filters->filter[range];
+
+ /*
+ * Note, if the range has zero start/end addresses due
+ * to its dynamic object not being loaded yet, we just
+ * go ahead and program zeroed range, which will simply
+ * produce no data. Note^2: if executable code at 0x0
+ * is a concern, we can set up an "invalid" configuration
+ * such as msr_b < msr_a.
+ */
+
+ /* avoid redundant msr writes */
+ if (pt->filters.filter[range].msr_a != filter->msr_a) {
+ wrmsrl(pt_address_ranges[range].msr_a, filter->msr_a);
+ pt->filters.filter[range].msr_a = filter->msr_a;
+ }
+
+ if (pt->filters.filter[range].msr_b != filter->msr_b) {
+ wrmsrl(pt_address_ranges[range].msr_b, filter->msr_b);
+ pt->filters.filter[range].msr_b = filter->msr_b;
+ }
+
+ rtit_ctl |= filter->config << pt_address_ranges[range].reg_off;
+ }
+
+ return rtit_ctl;
+}
+
static void pt_config(struct perf_event *event)
{
u64 reg;
@@ -272,7 +397,8 @@ static void pt_config(struct perf_event *event)
wrmsrl(MSR_IA32_RTIT_STATUS, 0);
}
- reg = RTIT_CTL_TOPA | RTIT_CTL_BRANCH_EN | RTIT_CTL_TRACEEN;
+ reg = pt_config_filters(event);
+ reg |= RTIT_CTL_TOPA | RTIT_CTL_BRANCH_EN | RTIT_CTL_TRACEEN;
if (!event->attr.exclude_kernel)
reg |= RTIT_CTL_OS;
@@ -709,6 +835,7 @@ static int pt_buffer_reset_markers(struct pt_buffer *buf,
/* clear STOP and INT from current entry */
buf->topa_index[buf->stop_pos]->stop = 0;
+ buf->topa_index[buf->stop_pos]->intr = 0;
buf->topa_index[buf->intr_pos]->intr = 0;
/* how many pages till the STOP marker */
@@ -733,6 +860,7 @@ static int pt_buffer_reset_markers(struct pt_buffer *buf,
buf->intr_pos = idx;
buf->topa_index[buf->stop_pos]->stop = 1;
+ buf->topa_index[buf->stop_pos]->intr = 1;
buf->topa_index[buf->intr_pos]->intr = 1;
return 0;
@@ -919,24 +1047,80 @@ static void pt_buffer_free_aux(void *data)
kfree(buf);
}
-/**
- * pt_buffer_is_full() - check if the buffer is full
- * @buf: PT buffer.
- * @pt: Per-cpu pt handle.
- *
- * If the user hasn't read data from the output region that aux_head
- * points to, the buffer is considered full: the user needs to read at
- * least this region and update aux_tail to point past it.
- */
-static bool pt_buffer_is_full(struct pt_buffer *buf, struct pt *pt)
+static int pt_addr_filters_init(struct perf_event *event)
{
- if (buf->snapshot)
- return false;
+ struct pt_filters *filters;
+ int node = event->cpu == -1 ? -1 : cpu_to_node(event->cpu);
+
+ if (!pt_cap_get(PT_CAP_num_address_ranges))
+ return 0;
+
+ filters = kzalloc_node(sizeof(struct pt_filters), GFP_KERNEL, node);
+ if (!filters)
+ return -ENOMEM;
+
+ if (event->parent)
+ memcpy(filters, event->parent->hw.addr_filters,
+ sizeof(*filters));
+
+ event->hw.addr_filters = filters;
+
+ return 0;
+}
+
+static void pt_addr_filters_fini(struct perf_event *event)
+{
+ kfree(event->hw.addr_filters);
+ event->hw.addr_filters = NULL;
+}
+
+static int pt_event_addr_filters_validate(struct list_head *filters)
+{
+ struct perf_addr_filter *filter;
+ int range = 0;
+
+ list_for_each_entry(filter, filters, entry) {
+ /* PT doesn't support single address triggers */
+ if (!filter->range)
+ return -EOPNOTSUPP;
+
+ if (!filter->inode && !kernel_ip(filter->offset))
+ return -EINVAL;
+
+ if (++range > pt_cap_get(PT_CAP_num_address_ranges))
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
+static void pt_event_addr_filters_sync(struct perf_event *event)
+{
+ struct perf_addr_filters_head *head = perf_event_addr_filters(event);
+ unsigned long msr_a, msr_b, *offs = event->addr_filters_offs;
+ struct pt_filters *filters = event->hw.addr_filters;
+ struct perf_addr_filter *filter;
+ int range = 0;
+
+ if (!filters)
+ return;
- if (local_read(&buf->data_size) >= pt->handle.size)
- return true;
+ list_for_each_entry(filter, &head->list, entry) {
+ if (filter->inode && !offs[range]) {
+ msr_a = msr_b = 0;
+ } else {
+ /* apply the offset */
+ msr_a = filter->offset + offs[range];
+ msr_b = filter->size + msr_a;
+ }
+
+ filters->filter[range].msr_a = msr_a;
+ filters->filter[range].msr_b = msr_b;
+ filters->filter[range].config = filter->filter ? 1 : 2;
+ range++;
+ }
- return false;
+ filters->nr_filters = range;
}
/**
@@ -953,7 +1137,7 @@ void intel_pt_interrupt(void)
* after PT has been disabled by pt_event_stop(). Make sure we don't
* do anything (particularly, re-enable) for this event here.
*/
- if (!ACCESS_ONCE(pt->handle_nmi))
+ if (!READ_ONCE(pt->handle_nmi))
return;
/*
@@ -1038,23 +1222,36 @@ EXPORT_SYMBOL_GPL(intel_pt_handle_vmx);
static void pt_event_start(struct perf_event *event, int mode)
{
+ struct hw_perf_event *hwc = &event->hw;
struct pt *pt = this_cpu_ptr(&pt_ctx);
- struct pt_buffer *buf = perf_get_aux(&pt->handle);
+ struct pt_buffer *buf;
if (READ_ONCE(pt->vmx_on))
return;
- if (!buf || pt_buffer_is_full(buf, pt)) {
- event->hw.state = PERF_HES_STOPPED;
- return;
+ buf = perf_aux_output_begin(&pt->handle, event);
+ if (!buf)
+ goto fail_stop;
+
+ pt_buffer_reset_offsets(buf, pt->handle.head);
+ if (!buf->snapshot) {
+ if (pt_buffer_reset_markers(buf, &pt->handle))
+ goto fail_end_stop;
}
- ACCESS_ONCE(pt->handle_nmi) = 1;
- event->hw.state = 0;
+ WRITE_ONCE(pt->handle_nmi, 1);
+ hwc->state = 0;
pt_config_buffer(buf->cur->table, buf->cur_idx,
buf->output_off);
pt_config(event);
+
+ return;
+
+fail_end_stop:
+ perf_aux_output_end(&pt->handle, 0, true);
+fail_stop:
+ hwc->state = PERF_HES_STOPPED;
}
static void pt_event_stop(struct perf_event *event, int mode)
@@ -1065,7 +1262,7 @@ static void pt_event_stop(struct perf_event *event, int mode)
* Protect against the PMI racing with disabling wrmsr,
* see comment in intel_pt_interrupt().
*/
- ACCESS_ONCE(pt->handle_nmi) = 0;
+ WRITE_ONCE(pt->handle_nmi, 0);
pt_config_stop(event);
@@ -1088,19 +1285,7 @@ static void pt_event_stop(struct perf_event *event, int mode)
pt_handle_status(pt);
pt_update_head(pt);
- }
-}
-
-static void pt_event_del(struct perf_event *event, int mode)
-{
- struct pt *pt = this_cpu_ptr(&pt_ctx);
- struct pt_buffer *buf;
- pt_event_stop(event, PERF_EF_UPDATE);
-
- buf = perf_get_aux(&pt->handle);
-
- if (buf) {
if (buf->snapshot)
pt->handle.head =
local_xchg(&buf->data_size,
@@ -1110,9 +1295,13 @@ static void pt_event_del(struct perf_event *event, int mode)
}
}
+static void pt_event_del(struct perf_event *event, int mode)
+{
+ pt_event_stop(event, PERF_EF_UPDATE);
+}
+
static int pt_event_add(struct perf_event *event, int mode)
{
- struct pt_buffer *buf;
struct pt *pt = this_cpu_ptr(&pt_ctx);
struct hw_perf_event *hwc = &event->hw;
int ret = -EBUSY;
@@ -1120,34 +1309,18 @@ static int pt_event_add(struct perf_event *event, int mode)
if (pt->handle.event)
goto fail;
- buf = perf_aux_output_begin(&pt->handle, event);
- ret = -EINVAL;
- if (!buf)
- goto fail_stop;
-
- pt_buffer_reset_offsets(buf, pt->handle.head);
- if (!buf->snapshot) {
- ret = pt_buffer_reset_markers(buf, &pt->handle);
- if (ret)
- goto fail_end_stop;
- }
-
if (mode & PERF_EF_START) {
pt_event_start(event, 0);
- ret = -EBUSY;
+ ret = -EINVAL;
if (hwc->state == PERF_HES_STOPPED)
- goto fail_end_stop;
+ goto fail;
} else {
hwc->state = PERF_HES_STOPPED;
}
- return 0;
-
-fail_end_stop:
- perf_aux_output_end(&pt->handle, 0, true);
-fail_stop:
- hwc->state = PERF_HES_STOPPED;
+ ret = 0;
fail:
+
return ret;
}
@@ -1157,6 +1330,7 @@ static void pt_event_read(struct perf_event *event)
static void pt_event_destroy(struct perf_event *event)
{
+ pt_addr_filters_fini(event);
x86_del_exclusive(x86_lbr_exclusive_pt);
}
@@ -1171,6 +1345,11 @@ static int pt_event_init(struct perf_event *event)
if (x86_add_exclusive(x86_lbr_exclusive_pt))
return -EBUSY;
+ if (pt_addr_filters_init(event)) {
+ x86_del_exclusive(x86_lbr_exclusive_pt);
+ return -ENOMEM;
+ }
+
event->destroy = pt_event_destroy;
return 0;
@@ -1190,7 +1369,7 @@ static __init int pt_init(void)
BUILD_BUG_ON(sizeof(struct topa) > PAGE_SIZE);
- if (!test_cpu_cap(&boot_cpu_data, X86_FEATURE_INTEL_PT))
+ if (!boot_cpu_has(X86_FEATURE_INTEL_PT))
return -ENODEV;
get_online_cpus();
@@ -1224,16 +1403,21 @@ static __init int pt_init(void)
PERF_PMU_CAP_AUX_NO_SG | PERF_PMU_CAP_AUX_SW_DOUBLEBUF;
pt_pmu.pmu.capabilities |= PERF_PMU_CAP_EXCLUSIVE | PERF_PMU_CAP_ITRACE;
- pt_pmu.pmu.attr_groups = pt_attr_groups;
- pt_pmu.pmu.task_ctx_nr = perf_sw_context;
- pt_pmu.pmu.event_init = pt_event_init;
- pt_pmu.pmu.add = pt_event_add;
- pt_pmu.pmu.del = pt_event_del;
- pt_pmu.pmu.start = pt_event_start;
- pt_pmu.pmu.stop = pt_event_stop;
- pt_pmu.pmu.read = pt_event_read;
- pt_pmu.pmu.setup_aux = pt_buffer_setup_aux;
- pt_pmu.pmu.free_aux = pt_buffer_free_aux;
+ pt_pmu.pmu.attr_groups = pt_attr_groups;
+ pt_pmu.pmu.task_ctx_nr = perf_sw_context;
+ pt_pmu.pmu.event_init = pt_event_init;
+ pt_pmu.pmu.add = pt_event_add;
+ pt_pmu.pmu.del = pt_event_del;
+ pt_pmu.pmu.start = pt_event_start;
+ pt_pmu.pmu.stop = pt_event_stop;
+ pt_pmu.pmu.read = pt_event_read;
+ pt_pmu.pmu.setup_aux = pt_buffer_setup_aux;
+ pt_pmu.pmu.free_aux = pt_buffer_free_aux;
+ pt_pmu.pmu.addr_filters_sync = pt_event_addr_filters_sync;
+ pt_pmu.pmu.addr_filters_validate = pt_event_addr_filters_validate;
+ pt_pmu.pmu.nr_addr_filters =
+ pt_cap_get(PT_CAP_num_address_ranges);
+
ret = perf_pmu_register(&pt_pmu.pmu, "intel_pt", -1);
return ret;
diff --git a/arch/x86/events/intel/pt.h b/arch/x86/events/intel/pt.h
index 3abb5f5cccc8..efffa4a09f68 100644
--- a/arch/x86/events/intel/pt.h
+++ b/arch/x86/events/intel/pt.h
@@ -20,6 +20,40 @@
#define __INTEL_PT_H__
/*
+ * PT MSR bit definitions
+ */
+#define RTIT_CTL_TRACEEN BIT(0)
+#define RTIT_CTL_CYCLEACC BIT(1)
+#define RTIT_CTL_OS BIT(2)
+#define RTIT_CTL_USR BIT(3)
+#define RTIT_CTL_CR3EN BIT(7)
+#define RTIT_CTL_TOPA BIT(8)
+#define RTIT_CTL_MTC_EN BIT(9)
+#define RTIT_CTL_TSC_EN BIT(10)
+#define RTIT_CTL_DISRETC BIT(11)
+#define RTIT_CTL_BRANCH_EN BIT(13)
+#define RTIT_CTL_MTC_RANGE_OFFSET 14
+#define RTIT_CTL_MTC_RANGE (0x0full << RTIT_CTL_MTC_RANGE_OFFSET)
+#define RTIT_CTL_CYC_THRESH_OFFSET 19
+#define RTIT_CTL_CYC_THRESH (0x0full << RTIT_CTL_CYC_THRESH_OFFSET)
+#define RTIT_CTL_PSB_FREQ_OFFSET 24
+#define RTIT_CTL_PSB_FREQ (0x0full << RTIT_CTL_PSB_FREQ_OFFSET)
+#define RTIT_CTL_ADDR0_OFFSET 32
+#define RTIT_CTL_ADDR0 (0x0full << RTIT_CTL_ADDR0_OFFSET)
+#define RTIT_CTL_ADDR1_OFFSET 36
+#define RTIT_CTL_ADDR1 (0x0full << RTIT_CTL_ADDR1_OFFSET)
+#define RTIT_CTL_ADDR2_OFFSET 40
+#define RTIT_CTL_ADDR2 (0x0full << RTIT_CTL_ADDR2_OFFSET)
+#define RTIT_CTL_ADDR3_OFFSET 44
+#define RTIT_CTL_ADDR3 (0x0full << RTIT_CTL_ADDR3_OFFSET)
+#define RTIT_STATUS_FILTEREN BIT(0)
+#define RTIT_STATUS_CONTEXTEN BIT(1)
+#define RTIT_STATUS_TRIGGEREN BIT(2)
+#define RTIT_STATUS_BUFFOVF BIT(3)
+#define RTIT_STATUS_ERROR BIT(4)
+#define RTIT_STATUS_STOPPED BIT(5)
+
+/*
* Single-entry ToPA: when this close to region boundary, switch
* buffers to avoid losing data.
*/
@@ -48,15 +82,20 @@ struct topa_entry {
#define PT_CPUID_LEAVES 2
#define PT_CPUID_REGS_NUM 4 /* number of regsters (eax, ebx, ecx, edx) */
+/* TSC to Core Crystal Clock Ratio */
+#define CPUID_TSC_LEAF 0x15
+
enum pt_capabilities {
PT_CAP_max_subleaf = 0,
PT_CAP_cr3_filtering,
PT_CAP_psb_cyc,
+ PT_CAP_ip_filtering,
PT_CAP_mtc,
PT_CAP_topa_output,
PT_CAP_topa_multiple_entries,
PT_CAP_single_range_output,
PT_CAP_payloads_lip,
+ PT_CAP_num_address_ranges,
PT_CAP_mtc_periods,
PT_CAP_cycle_thresholds,
PT_CAP_psb_periods,
@@ -66,6 +105,9 @@ struct pt_pmu {
struct pmu pmu;
u32 caps[PT_CPUID_REGS_NUM * PT_CPUID_LEAVES];
bool vmx;
+ unsigned long max_nonturbo_ratio;
+ unsigned int tsc_art_num;
+ unsigned int tsc_art_den;
};
/**
@@ -104,14 +146,40 @@ struct pt_buffer {
struct topa_entry *topa_index[0];
};
+#define PT_FILTERS_NUM 4
+
+/**
+ * struct pt_filter - IP range filter configuration
+ * @msr_a: range start, goes to RTIT_ADDRn_A
+ * @msr_b: range end, goes to RTIT_ADDRn_B
+ * @config: 4-bit field in RTIT_CTL
+ */
+struct pt_filter {
+ unsigned long msr_a;
+ unsigned long msr_b;
+ unsigned long config;
+};
+
+/**
+ * struct pt_filters - IP range filtering context
+ * @filter: filters defined for this context
+ * @nr_filters: number of defined filters in the @filter array
+ */
+struct pt_filters {
+ struct pt_filter filter[PT_FILTERS_NUM];
+ unsigned int nr_filters;
+};
+
/**
* struct pt - per-cpu pt context
* @handle: perf output handle
+ * @filters: last configured filters
* @handle_nmi: do handle PT PMI on this cpu, there's an active event
* @vmx_on: 1 if VMX is ON on this cpu
*/
struct pt {
struct perf_output_handle handle;
+ struct pt_filters filters;
int handle_nmi;
int vmx_on;
};
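
A minimal sketch (assumption: pt_filters_to_rtit_ctl() is a made-up helper name, not the driver's actual code) of how the 4-bit @config field documented above lands in RTIT_CTL, using the RTIT_CTL_ADDRn_OFFSET definitions from this header — each address range owns the next 4-bit nibble starting at bit 32:

static u64 pt_filters_to_rtit_ctl(const struct pt_filters *filters)
{
	u64 rtit_ctl = 0;
	unsigned int range;

	/* ADDR0 config sits at bits 35:32, ADDR1 at 39:36, and so on */
	for (range = 0; range < filters->nr_filters; range++)
		rtit_ctl |= (u64)filters->filter[range].config
				<< (RTIT_CTL_ADDR0_OFFSET + 4 * range);

	return rtit_ctl;
}
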
diff --git a/arch/x86/events/intel/rapl.c b/arch/x86/events/intel/rapl.c
index 1705c9d75e44..99c4bab123cd 100644
--- a/arch/x86/events/intel/rapl.c
+++ b/arch/x86/events/intel/rapl.c
@@ -27,10 +27,14 @@
* event: rapl_energy_dram
* perf code: 0x3
*
- * dram counter: consumption of the builtin-gpu domain (client only)
+ * gpu counter: consumption of the builtin-gpu domain (client only)
* event: rapl_energy_gpu
* perf code: 0x4
*
+ * psys counter: consumption of the builtin-psys domain (client only)
+ * event: rapl_energy_psys
+ * perf code: 0x5
+ *
* We manage those counters as free running (read-only). They may be
* used simultaneously by other tools, such as turbostat.
*
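
The new psys domain is selected by its perf code like the existing ones; a user-space sketch under two assumptions — the PMU is the "power" PMU this driver registers, and its dynamic type must be read from /sys/bus/event_source/devices/power/type at run time:

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Hypothetical helper: open a system-wide counter for the psys domain
 * (perf code 0x5 in the table above) on CPU 0. */
static int open_energy_psys(int pmu_type)
{
	struct perf_event_attr attr = {
		.type   = pmu_type,
		.size   = sizeof(attr),
		.config = 0x5,			/* rapl_energy_psys */
	};

	return syscall(__NR_perf_event_open, &attr,
		       -1 /* pid: all */, 0 /* cpu 0 */,
		       -1 /* group fd */, 0 /* flags */);
}
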
@@ -53,6 +57,8 @@
#include <asm/cpu_device_id.h>
#include "../perf_event.h"
+MODULE_LICENSE("GPL");
+
/*
* RAPL energy status counters
*/
@@ -64,13 +70,16 @@
#define INTEL_RAPL_RAM 0x3 /* pseudo-encoding */
#define RAPL_IDX_PP1_NRG_STAT 3 /* gpu */
#define INTEL_RAPL_PP1 0x4 /* pseudo-encoding */
+#define RAPL_IDX_PSYS_NRG_STAT 4 /* psys */
+#define INTEL_RAPL_PSYS 0x5 /* pseudo-encoding */
-#define NR_RAPL_DOMAINS 0x4
+#define NR_RAPL_DOMAINS 0x5
static const char *const rapl_domain_names[NR_RAPL_DOMAINS] __initconst = {
"pp0-core",
"package",
"dram",
"pp1-gpu",
+ "psys",
};
/* Clients have PP0, PKG */
@@ -89,6 +98,13 @@ static const char *const rapl_domain_names[NR_RAPL_DOMAINS] __initconst = {
1<<RAPL_IDX_RAM_NRG_STAT|\
1<<RAPL_IDX_PP1_NRG_STAT)
+/* SKL clients have PP0, PKG, RAM, PP1, PSYS */
+#define RAPL_IDX_SKL_CLN (1<<RAPL_IDX_PP0_NRG_STAT|\
+ 1<<RAPL_IDX_PKG_NRG_STAT|\
+ 1<<RAPL_IDX_RAM_NRG_STAT|\
+ 1<<RAPL_IDX_PP1_NRG_STAT|\
+ 1<<RAPL_IDX_PSYS_NRG_STAT)
+
/* Knights Landing has PKG, RAM */
#define RAPL_IDX_KNL (1<<RAPL_IDX_PKG_NRG_STAT|\
1<<RAPL_IDX_RAM_NRG_STAT)
@@ -360,6 +376,10 @@ static int rapl_pmu_event_init(struct perf_event *event)
bit = RAPL_IDX_PP1_NRG_STAT;
msr = MSR_PP1_ENERGY_STATUS;
break;
+ case INTEL_RAPL_PSYS:
+ bit = RAPL_IDX_PSYS_NRG_STAT;
+ msr = MSR_PLATFORM_ENERGY_STATUS;
+ break;
default:
return -EINVAL;
}
@@ -414,11 +434,13 @@ RAPL_EVENT_ATTR_STR(energy-cores, rapl_cores, "event=0x01");
RAPL_EVENT_ATTR_STR(energy-pkg , rapl_pkg, "event=0x02");
RAPL_EVENT_ATTR_STR(energy-ram , rapl_ram, "event=0x03");
RAPL_EVENT_ATTR_STR(energy-gpu , rapl_gpu, "event=0x04");
+RAPL_EVENT_ATTR_STR(energy-psys, rapl_psys, "event=0x05");
RAPL_EVENT_ATTR_STR(energy-cores.unit, rapl_cores_unit, "Joules");
RAPL_EVENT_ATTR_STR(energy-pkg.unit , rapl_pkg_unit, "Joules");
RAPL_EVENT_ATTR_STR(energy-ram.unit , rapl_ram_unit, "Joules");
RAPL_EVENT_ATTR_STR(energy-gpu.unit , rapl_gpu_unit, "Joules");
+RAPL_EVENT_ATTR_STR(energy-psys.unit, rapl_psys_unit, "Joules");
/*
* we compute in 0.23 nJ increments regardless of MSR
@@ -427,6 +449,7 @@ RAPL_EVENT_ATTR_STR(energy-cores.scale, rapl_cores_scale, "2.3283064365386962890
RAPL_EVENT_ATTR_STR(energy-pkg.scale, rapl_pkg_scale, "2.3283064365386962890625e-10");
RAPL_EVENT_ATTR_STR(energy-ram.scale, rapl_ram_scale, "2.3283064365386962890625e-10");
RAPL_EVENT_ATTR_STR(energy-gpu.scale, rapl_gpu_scale, "2.3283064365386962890625e-10");
+RAPL_EVENT_ATTR_STR(energy-psys.scale, rapl_psys_scale, "2.3283064365386962890625e-10");
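
All of the scale strings above encode the same constant: 2.3283064365386962890625e-10 J is exactly 2^-32 J (about 0.233 nJ, the "0.23 nJ increments" mentioned above), so a raw counter delta converts to joules as delta / 2^32 — a delta of 4294967296 is exactly 1 J.
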
static struct attribute *rapl_events_srv_attr[] = {
EVENT_PTR(rapl_cores),
@@ -476,6 +499,27 @@ static struct attribute *rapl_events_hsw_attr[] = {
NULL,
};
+static struct attribute *rapl_events_skl_attr[] = {
+ EVENT_PTR(rapl_cores),
+ EVENT_PTR(rapl_pkg),
+ EVENT_PTR(rapl_gpu),
+ EVENT_PTR(rapl_ram),
+ EVENT_PTR(rapl_psys),
+
+ EVENT_PTR(rapl_cores_unit),
+ EVENT_PTR(rapl_pkg_unit),
+ EVENT_PTR(rapl_gpu_unit),
+ EVENT_PTR(rapl_ram_unit),
+ EVENT_PTR(rapl_psys_unit),
+
+ EVENT_PTR(rapl_cores_scale),
+ EVENT_PTR(rapl_pkg_scale),
+ EVENT_PTR(rapl_gpu_scale),
+ EVENT_PTR(rapl_ram_scale),
+ EVENT_PTR(rapl_psys_scale),
+ NULL,
+};
+
static struct attribute *rapl_events_knl_attr[] = {
EVENT_PTR(rapl_pkg),
EVENT_PTR(rapl_ram),
@@ -592,6 +636,11 @@ static int rapl_cpu_notifier(struct notifier_block *self,
return NOTIFY_OK;
}
+static struct notifier_block rapl_cpu_nb = {
+ .notifier_call = rapl_cpu_notifier,
+ .priority = CPU_PRI_PERF + 1,
+};
+
static int rapl_check_hw_unit(bool apply_quirk)
{
u64 msr_rapl_power_unit_bits;
@@ -660,7 +709,7 @@ static int __init rapl_prepare_cpus(void)
return 0;
}
-static void __init cleanup_rapl_pmus(void)
+static void cleanup_rapl_pmus(void)
{
int i;
@@ -691,52 +740,92 @@ static int __init init_rapl_pmus(void)
return 0;
}
+#define X86_RAPL_MODEL_MATCH(model, init) \
+ { X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY, (unsigned long)&init }
+
+struct intel_rapl_init_fun {
+ bool apply_quirk;
+ int cntr_mask;
+ struct attribute **attrs;
+};
+
+static const struct intel_rapl_init_fun snb_rapl_init __initconst = {
+ .apply_quirk = false,
+ .cntr_mask = RAPL_IDX_CLN,
+ .attrs = rapl_events_cln_attr,
+};
+
+static const struct intel_rapl_init_fun hsx_rapl_init __initconst = {
+ .apply_quirk = true,
+ .cntr_mask = RAPL_IDX_SRV,
+ .attrs = rapl_events_srv_attr,
+};
+
+static const struct intel_rapl_init_fun hsw_rapl_init __initconst = {
+ .apply_quirk = false,
+ .cntr_mask = RAPL_IDX_HSW,
+ .attrs = rapl_events_hsw_attr,
+};
+
+static const struct intel_rapl_init_fun snbep_rapl_init __initconst = {
+ .apply_quirk = false,
+ .cntr_mask = RAPL_IDX_SRV,
+ .attrs = rapl_events_srv_attr,
+};
+
+static const struct intel_rapl_init_fun knl_rapl_init __initconst = {
+ .apply_quirk = true,
+ .cntr_mask = RAPL_IDX_KNL,
+ .attrs = rapl_events_knl_attr,
+};
+
+static const struct intel_rapl_init_fun skl_rapl_init __initconst = {
+ .apply_quirk = false,
+ .cntr_mask = RAPL_IDX_SKL_CLN,
+ .attrs = rapl_events_skl_attr,
+};
+
static const struct x86_cpu_id rapl_cpu_match[] __initconst = {
- [0] = { .vendor = X86_VENDOR_INTEL, .family = 6 },
- [1] = {},
+ X86_RAPL_MODEL_MATCH(42, snb_rapl_init), /* Sandy Bridge */
+ X86_RAPL_MODEL_MATCH(45, snbep_rapl_init), /* Sandy Bridge-EP */
+
+ X86_RAPL_MODEL_MATCH(58, snb_rapl_init), /* Ivy Bridge */
+ X86_RAPL_MODEL_MATCH(62, snbep_rapl_init), /* IvyTown */
+
+ X86_RAPL_MODEL_MATCH(60, hsw_rapl_init), /* Haswell */
+ X86_RAPL_MODEL_MATCH(63, hsx_rapl_init), /* Haswell-Server */
+ X86_RAPL_MODEL_MATCH(69, hsw_rapl_init), /* Haswell-Celeron */
+ X86_RAPL_MODEL_MATCH(70, hsw_rapl_init), /* Haswell GT3e */
+
+ X86_RAPL_MODEL_MATCH(61, hsw_rapl_init), /* Broadwell */
+ X86_RAPL_MODEL_MATCH(71, hsw_rapl_init), /* Broadwell-H */
+ X86_RAPL_MODEL_MATCH(79, hsx_rapl_init), /* Broadwell-Server */
+ X86_RAPL_MODEL_MATCH(86, hsx_rapl_init), /* Broadwell Xeon D */
+
+ X86_RAPL_MODEL_MATCH(87, knl_rapl_init), /* Knights Landing */
+
+ X86_RAPL_MODEL_MATCH(78, skl_rapl_init), /* Skylake */
+ X86_RAPL_MODEL_MATCH(94, skl_rapl_init), /* Skylake H/S */
+ {},
};
+MODULE_DEVICE_TABLE(x86cpu, rapl_cpu_match);
+
static int __init rapl_pmu_init(void)
{
- bool apply_quirk = false;
+ const struct x86_cpu_id *id;
+ struct intel_rapl_init_fun *rapl_init;
+ bool apply_quirk;
int ret;
- if (!x86_match_cpu(rapl_cpu_match))
+ id = x86_match_cpu(rapl_cpu_match);
+ if (!id)
return -ENODEV;
- switch (boot_cpu_data.x86_model) {
- case 42: /* Sandy Bridge */
- case 58: /* Ivy Bridge */
- rapl_cntr_mask = RAPL_IDX_CLN;
- rapl_pmu_events_group.attrs = rapl_events_cln_attr;
- break;
- case 63: /* Haswell-Server */
- case 79: /* Broadwell-Server */
- apply_quirk = true;
- rapl_cntr_mask = RAPL_IDX_SRV;
- rapl_pmu_events_group.attrs = rapl_events_srv_attr;
- break;
- case 60: /* Haswell */
- case 69: /* Haswell-Celeron */
- case 70: /* Haswell GT3e */
- case 61: /* Broadwell */
- case 71: /* Broadwell-H */
- rapl_cntr_mask = RAPL_IDX_HSW;
- rapl_pmu_events_group.attrs = rapl_events_hsw_attr;
- break;
- case 45: /* Sandy Bridge-EP */
- case 62: /* IvyTown */
- rapl_cntr_mask = RAPL_IDX_SRV;
- rapl_pmu_events_group.attrs = rapl_events_srv_attr;
- break;
- case 87: /* Knights Landing */
- apply_quirk = true;
- rapl_cntr_mask = RAPL_IDX_KNL;
- rapl_pmu_events_group.attrs = rapl_events_knl_attr;
- break;
- default:
- return -ENODEV;
- }
+ rapl_init = (struct intel_rapl_init_fun *)id->driver_data;
+ apply_quirk = rapl_init->apply_quirk;
+ rapl_cntr_mask = rapl_init->cntr_mask;
+ rapl_pmu_events_group.attrs = rapl_init->attrs;
ret = rapl_check_hw_unit(apply_quirk);
if (ret)
@@ -756,7 +845,7 @@ static int __init rapl_pmu_init(void)
if (ret)
goto out;
- __perf_cpu_notifier(rapl_cpu_notifier);
+ __register_cpu_notifier(&rapl_cpu_nb);
cpu_notifier_register_done();
rapl_advertise();
return 0;
@@ -767,4 +856,14 @@ out:
cpu_notifier_register_done();
return ret;
}
-device_initcall(rapl_pmu_init);
+module_init(rapl_pmu_init);
+
+static void __exit intel_rapl_exit(void)
+{
+ cpu_notifier_register_begin();
+ __unregister_cpu_notifier(&rapl_cpu_nb);
+ perf_pmu_unregister(&rapl_pmus->pmu);
+ cleanup_rapl_pmus();
+ cpu_notifier_register_done();
+}
+module_exit(intel_rapl_exit);
diff --git a/arch/x86/events/intel/uncore.c b/arch/x86/events/intel/uncore.c
index 7012d18bb293..fce74062d981 100644
--- a/arch/x86/events/intel/uncore.c
+++ b/arch/x86/events/intel/uncore.c
@@ -1,3 +1,4 @@
+#include <asm/cpu_device_id.h>
#include "uncore.h"
static struct intel_uncore_type *empty_uncore[] = { NULL, };
@@ -21,6 +22,8 @@ static struct event_constraint uncore_constraint_fixed =
struct event_constraint uncore_constraint_empty =
EVENT_CONSTRAINT(0, 0, 0);
+MODULE_LICENSE("GPL");
+
static int uncore_pcibus_to_physid(struct pci_bus *bus)
{
struct pci2phy_map *map;
@@ -754,7 +757,7 @@ static void uncore_pmu_unregister(struct intel_uncore_pmu *pmu)
pmu->registered = false;
}
-static void __init __uncore_exit_boxes(struct intel_uncore_type *type, int cpu)
+static void __uncore_exit_boxes(struct intel_uncore_type *type, int cpu)
{
struct intel_uncore_pmu *pmu = type->pmus;
struct intel_uncore_box *box;
@@ -770,7 +773,7 @@ static void __init __uncore_exit_boxes(struct intel_uncore_type *type, int cpu)
}
}
-static void __init uncore_exit_boxes(void *dummy)
+static void uncore_exit_boxes(void *dummy)
{
struct intel_uncore_type **types;
@@ -787,7 +790,7 @@ static void uncore_free_boxes(struct intel_uncore_pmu *pmu)
kfree(pmu->boxes);
}
-static void __init uncore_type_exit(struct intel_uncore_type *type)
+static void uncore_type_exit(struct intel_uncore_type *type)
{
struct intel_uncore_pmu *pmu = type->pmus;
int i;
@@ -804,7 +807,7 @@ static void __init uncore_type_exit(struct intel_uncore_type *type)
type->events_group = NULL;
}
-static void __init uncore_types_exit(struct intel_uncore_type **types)
+static void uncore_types_exit(struct intel_uncore_type **types)
{
for (; *types; types++)
uncore_type_exit(*types);
@@ -888,7 +891,7 @@ static int uncore_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id
return -ENODEV;
pkg = topology_phys_to_logical_pkg(phys_id);
- if (WARN_ON_ONCE(pkg < 0))
+ if (pkg < 0)
return -EINVAL;
if (UNCORE_PCI_DEV_TYPE(id->driver_data) == UNCORE_EXTRA_PCI_DEV) {
@@ -989,46 +992,6 @@ static int __init uncore_pci_init(void)
size_t size;
int ret;
- switch (boot_cpu_data.x86_model) {
- case 45: /* Sandy Bridge-EP */
- ret = snbep_uncore_pci_init();
- break;
- case 62: /* Ivy Bridge-EP */
- ret = ivbep_uncore_pci_init();
- break;
- case 63: /* Haswell-EP */
- ret = hswep_uncore_pci_init();
- break;
- case 79: /* BDX-EP */
- case 86: /* BDX-DE */
- ret = bdx_uncore_pci_init();
- break;
- case 42: /* Sandy Bridge */
- ret = snb_uncore_pci_init();
- break;
- case 58: /* Ivy Bridge */
- ret = ivb_uncore_pci_init();
- break;
- case 60: /* Haswell */
- case 69: /* Haswell Celeron */
- ret = hsw_uncore_pci_init();
- break;
- case 61: /* Broadwell */
- ret = bdw_uncore_pci_init();
- break;
- case 87: /* Knights Landing */
- ret = knl_uncore_pci_init();
- break;
- case 94: /* SkyLake */
- ret = skl_uncore_pci_init();
- break;
- default:
- return -ENODEV;
- }
-
- if (ret)
- return ret;
-
size = max_packages * sizeof(struct pci_extra_dev);
uncore_extra_pci_dev = kzalloc(size, GFP_KERNEL);
if (!uncore_extra_pci_dev) {
@@ -1060,7 +1023,7 @@ err:
return ret;
}
-static void __init uncore_pci_exit(void)
+static void uncore_pci_exit(void)
{
if (pcidrv_registered) {
pcidrv_registered = false;
@@ -1287,46 +1250,6 @@ static int __init uncore_cpu_init(void)
{
int ret;
- switch (boot_cpu_data.x86_model) {
- case 26: /* Nehalem */
- case 30:
- case 37: /* Westmere */
- case 44:
- nhm_uncore_cpu_init();
- break;
- case 42: /* Sandy Bridge */
- case 58: /* Ivy Bridge */
- case 60: /* Haswell */
- case 69: /* Haswell */
- case 70: /* Haswell */
- case 61: /* Broadwell */
- case 71: /* Broadwell */
- snb_uncore_cpu_init();
- break;
- case 45: /* Sandy Bridge-EP */
- snbep_uncore_cpu_init();
- break;
- case 46: /* Nehalem-EX */
- case 47: /* Westmere-EX aka. Xeon E7 */
- nhmex_uncore_cpu_init();
- break;
- case 62: /* Ivy Bridge-EP */
- ivbep_uncore_cpu_init();
- break;
- case 63: /* Haswell-EP */
- hswep_uncore_cpu_init();
- break;
- case 79: /* BDX-EP */
- case 86: /* BDX-DE */
- bdx_uncore_cpu_init();
- break;
- case 87: /* Knights Landing */
- knl_uncore_cpu_init();
- break;
- default:
- return -ENODEV;
- }
-
ret = uncore_types_init(uncore_msr_uncores, true);
if (ret)
goto err;
@@ -1376,20 +1299,123 @@ static int __init uncore_cpumask_init(bool msr)
return 0;
}
+#define X86_UNCORE_MODEL_MATCH(model, init) \
+ { X86_VENDOR_INTEL, 6, model, X86_FEATURE_ANY, (unsigned long)&init }
+
+struct intel_uncore_init_fun {
+ void (*cpu_init)(void);
+ int (*pci_init)(void);
+};
+
+static const struct intel_uncore_init_fun nhm_uncore_init __initconst = {
+ .cpu_init = nhm_uncore_cpu_init,
+};
+
+static const struct intel_uncore_init_fun snb_uncore_init __initconst = {
+ .cpu_init = snb_uncore_cpu_init,
+ .pci_init = snb_uncore_pci_init,
+};
+
+static const struct intel_uncore_init_fun ivb_uncore_init __initconst = {
+ .cpu_init = snb_uncore_cpu_init,
+ .pci_init = ivb_uncore_pci_init,
+};
+
+static const struct intel_uncore_init_fun hsw_uncore_init __initconst = {
+ .cpu_init = snb_uncore_cpu_init,
+ .pci_init = hsw_uncore_pci_init,
+};
+
+static const struct intel_uncore_init_fun bdw_uncore_init __initconst = {
+ .cpu_init = snb_uncore_cpu_init,
+ .pci_init = bdw_uncore_pci_init,
+};
+
+static const struct intel_uncore_init_fun snbep_uncore_init __initconst = {
+ .cpu_init = snbep_uncore_cpu_init,
+ .pci_init = snbep_uncore_pci_init,
+};
+
+static const struct intel_uncore_init_fun nhmex_uncore_init __initconst = {
+ .cpu_init = nhmex_uncore_cpu_init,
+};
+
+static const struct intel_uncore_init_fun ivbep_uncore_init __initconst = {
+ .cpu_init = ivbep_uncore_cpu_init,
+ .pci_init = ivbep_uncore_pci_init,
+};
+
+static const struct intel_uncore_init_fun hswep_uncore_init __initconst = {
+ .cpu_init = hswep_uncore_cpu_init,
+ .pci_init = hswep_uncore_pci_init,
+};
+
+static const struct intel_uncore_init_fun bdx_uncore_init __initconst = {
+ .cpu_init = bdx_uncore_cpu_init,
+ .pci_init = bdx_uncore_pci_init,
+};
+
+static const struct intel_uncore_init_fun knl_uncore_init __initconst = {
+ .cpu_init = knl_uncore_cpu_init,
+ .pci_init = knl_uncore_pci_init,
+};
+
+static const struct intel_uncore_init_fun skl_uncore_init __initconst = {
+ .pci_init = skl_uncore_pci_init,
+};
+
+static const struct x86_cpu_id intel_uncore_match[] __initconst = {
+ X86_UNCORE_MODEL_MATCH(26, nhm_uncore_init), /* Nehalem */
+ X86_UNCORE_MODEL_MATCH(30, nhm_uncore_init),
+ X86_UNCORE_MODEL_MATCH(37, nhm_uncore_init), /* Westmere */
+ X86_UNCORE_MODEL_MATCH(44, nhm_uncore_init),
+ X86_UNCORE_MODEL_MATCH(42, snb_uncore_init), /* Sandy Bridge */
+ X86_UNCORE_MODEL_MATCH(58, ivb_uncore_init), /* Ivy Bridge */
+ X86_UNCORE_MODEL_MATCH(60, hsw_uncore_init), /* Haswell */
+ X86_UNCORE_MODEL_MATCH(69, hsw_uncore_init), /* Haswell Celeron */
+ X86_UNCORE_MODEL_MATCH(70, hsw_uncore_init), /* Haswell */
+ X86_UNCORE_MODEL_MATCH(61, bdw_uncore_init), /* Broadwell */
+ X86_UNCORE_MODEL_MATCH(71, bdw_uncore_init), /* Broadwell */
+ X86_UNCORE_MODEL_MATCH(45, snbep_uncore_init), /* Sandy Bridge-EP */
+ X86_UNCORE_MODEL_MATCH(46, nhmex_uncore_init), /* Nehalem-EX */
+ X86_UNCORE_MODEL_MATCH(47, nhmex_uncore_init), /* Westmere-EX aka. Xeon E7 */
+ X86_UNCORE_MODEL_MATCH(62, ivbep_uncore_init), /* Ivy Bridge-EP */
+ X86_UNCORE_MODEL_MATCH(63, hswep_uncore_init), /* Haswell-EP */
+ X86_UNCORE_MODEL_MATCH(79, bdx_uncore_init), /* BDX-EP */
+ X86_UNCORE_MODEL_MATCH(86, bdx_uncore_init), /* BDX-DE */
+ X86_UNCORE_MODEL_MATCH(87, knl_uncore_init), /* Knights Landing */
+ X86_UNCORE_MODEL_MATCH(94, skl_uncore_init), /* SkyLake */
+ {},
+};
+
+MODULE_DEVICE_TABLE(x86cpu, intel_uncore_match);
+
static int __init intel_uncore_init(void)
{
- int pret, cret, ret;
+ const struct x86_cpu_id *id;
+ struct intel_uncore_init_fun *uncore_init;
+ int pret = 0, cret = 0, ret;
- if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
+ id = x86_match_cpu(intel_uncore_match);
+ if (!id)
return -ENODEV;
- if (cpu_has_hypervisor)
+ if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
return -ENODEV;
max_packages = topology_max_packages();
- pret = uncore_pci_init();
- cret = uncore_cpu_init();
+ uncore_init = (struct intel_uncore_init_fun *)id->driver_data;
+ if (uncore_init->pci_init) {
+ pret = uncore_init->pci_init();
+ if (!pret)
+ pret = uncore_pci_init();
+ }
+
+ if (uncore_init->cpu_init) {
+ uncore_init->cpu_init();
+ cret = uncore_cpu_init();
+ }
if (cret && pret)
return -ENODEV;
@@ -1409,4 +1435,14 @@ err:
cpu_notifier_register_done();
return ret;
}
-device_initcall(intel_uncore_init);
+module_init(intel_uncore_init);
+
+static void __exit intel_uncore_exit(void)
+{
+ cpu_notifier_register_begin();
+ __unregister_cpu_notifier(&uncore_cpu_nb);
+ uncore_types_exit(uncore_msr_uncores);
+ uncore_pci_exit();
+ cpu_notifier_register_done();
+}
+module_exit(intel_uncore_exit);
diff --git a/arch/x86/events/intel/uncore_snbep.c b/arch/x86/events/intel/uncore_snbep.c
index ab2bcaaebe38..b2625867ebd1 100644
--- a/arch/x86/events/intel/uncore_snbep.c
+++ b/arch/x86/events/intel/uncore_snbep.c
@@ -219,6 +219,9 @@
#define KNL_CHA_MSR_PMON_BOX_FILTER_TID 0x1ff
#define KNL_CHA_MSR_PMON_BOX_FILTER_STATE (7 << 18)
#define KNL_CHA_MSR_PMON_BOX_FILTER_OP (0xfffffe2aULL << 32)
+#define KNL_CHA_MSR_PMON_BOX_FILTER_REMOTE_NODE (0x1ULL << 32)
+#define KNL_CHA_MSR_PMON_BOX_FILTER_LOCAL_NODE (0x1ULL << 33)
+#define KNL_CHA_MSR_PMON_BOX_FILTER_NNC (0x1ULL << 37)
/* KNL EDC/MC UCLK */
#define KNL_UCLK_MSR_PMON_CTR0_LOW 0x400
@@ -1902,6 +1905,10 @@ static int knl_cha_hw_config(struct intel_uncore_box *box,
reg1->reg = HSWEP_C0_MSR_PMON_BOX_FILTER0 +
KNL_CHA_MSR_OFFSET * box->pmu->pmu_idx;
reg1->config = event->attr.config1 & knl_cha_filter_mask(idx);
+
+ reg1->config |= KNL_CHA_MSR_PMON_BOX_FILTER_REMOTE_NODE;
+ reg1->config |= KNL_CHA_MSR_PMON_BOX_FILTER_LOCAL_NODE;
+ reg1->config |= KNL_CHA_MSR_PMON_BOX_FILTER_NNC;
reg1->idx = idx;
}
return 0;
diff --git a/arch/x86/events/msr.c b/arch/x86/events/msr.c
index ec863b9a9f78..85ef3c2e80e0 100644
--- a/arch/x86/events/msr.c
+++ b/arch/x86/events/msr.c
@@ -6,6 +6,8 @@ enum perf_msr_id {
PERF_MSR_MPERF = 2,
PERF_MSR_PPERF = 3,
PERF_MSR_SMI = 4,
+ PERF_MSR_PTSC = 5,
+ PERF_MSR_IRPERF = 6,
PERF_MSR_EVENT_MAX,
};
@@ -15,6 +17,16 @@ static bool test_aperfmperf(int idx)
return boot_cpu_has(X86_FEATURE_APERFMPERF);
}
+static bool test_ptsc(int idx)
+{
+ return boot_cpu_has(X86_FEATURE_PTSC);
+}
+
+static bool test_irperf(int idx)
+{
+ return boot_cpu_has(X86_FEATURE_IRPERF);
+}
+
static bool test_intel(int idx)
{
if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
@@ -69,18 +81,22 @@ struct perf_msr {
bool (*test)(int idx);
};
-PMU_EVENT_ATTR_STRING(tsc, evattr_tsc, "event=0x00");
-PMU_EVENT_ATTR_STRING(aperf, evattr_aperf, "event=0x01");
-PMU_EVENT_ATTR_STRING(mperf, evattr_mperf, "event=0x02");
-PMU_EVENT_ATTR_STRING(pperf, evattr_pperf, "event=0x03");
-PMU_EVENT_ATTR_STRING(smi, evattr_smi, "event=0x04");
+PMU_EVENT_ATTR_STRING(tsc, evattr_tsc, "event=0x00");
+PMU_EVENT_ATTR_STRING(aperf, evattr_aperf, "event=0x01");
+PMU_EVENT_ATTR_STRING(mperf, evattr_mperf, "event=0x02");
+PMU_EVENT_ATTR_STRING(pperf, evattr_pperf, "event=0x03");
+PMU_EVENT_ATTR_STRING(smi, evattr_smi, "event=0x04");
+PMU_EVENT_ATTR_STRING(ptsc, evattr_ptsc, "event=0x05");
+PMU_EVENT_ATTR_STRING(irperf, evattr_irperf, "event=0x06");
static struct perf_msr msr[] = {
- [PERF_MSR_TSC] = { 0, &evattr_tsc, NULL, },
- [PERF_MSR_APERF] = { MSR_IA32_APERF, &evattr_aperf, test_aperfmperf, },
- [PERF_MSR_MPERF] = { MSR_IA32_MPERF, &evattr_mperf, test_aperfmperf, },
- [PERF_MSR_PPERF] = { MSR_PPERF, &evattr_pperf, test_intel, },
- [PERF_MSR_SMI] = { MSR_SMI_COUNT, &evattr_smi, test_intel, },
+ [PERF_MSR_TSC] = { 0, &evattr_tsc, NULL, },
+ [PERF_MSR_APERF] = { MSR_IA32_APERF, &evattr_aperf, test_aperfmperf, },
+ [PERF_MSR_MPERF] = { MSR_IA32_MPERF, &evattr_mperf, test_aperfmperf, },
+ [PERF_MSR_PPERF] = { MSR_PPERF, &evattr_pperf, test_intel, },
+ [PERF_MSR_SMI] = { MSR_SMI_COUNT, &evattr_smi, test_intel, },
+ [PERF_MSR_PTSC] = { MSR_F15H_PTSC, &evattr_ptsc, test_ptsc, },
+ [PERF_MSR_IRPERF] = { MSR_F17H_IRPERF, &evattr_irperf, test_irperf, },
};
static struct attribute *events_attrs[PERF_MSR_EVENT_MAX + 1] = {
@@ -166,7 +182,7 @@ again:
if (unlikely(event->hw.event_base == MSR_SMI_COUNT))
delta = sign_extend64(delta, 31);
- local64_add(now - prev, &event->count);
+ local64_add(delta, &event->count);
}
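
The one-line fix matters for the SMI wrap case handled just above: with prev = 0xffffffff and now = 0x3, now - prev is 0xffffffff00000004, and sign_extend64(delta, 31) reduces that to the correct increment of 4; adding now - prev directly would have accumulated the unextended 64-bit value and corrupted the count.
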
static void msr_event_start(struct perf_event *event, int flags)
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index ad4dc7ffffb5..8bd764df815d 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -601,6 +601,7 @@ struct x86_pmu {
u64 lbr_sel_mask; /* LBR_SELECT valid bits */
const int *lbr_sel_map; /* lbr_select mappings */
bool lbr_double_abort; /* duplicated lbr aborts */
+ bool lbr_pt_coexist; /* LBR may coexist with PT */
/*
* Intel PT/LBR/BTS are exclusive
@@ -859,6 +860,8 @@ extern struct event_constraint intel_atom_pebs_event_constraints[];
extern struct event_constraint intel_slm_pebs_event_constraints[];
+extern struct event_constraint intel_glm_pebs_event_constraints[];
+
extern struct event_constraint intel_nehalem_pebs_event_constraints[];
extern struct event_constraint intel_westmere_pebs_event_constraints[];
@@ -907,6 +910,8 @@ void intel_pmu_lbr_init_nhm(void);
void intel_pmu_lbr_init_atom(void);
+void intel_pmu_lbr_init_slm(void);
+
void intel_pmu_lbr_init_snb(void);
void intel_pmu_lbr_init_hsw(void);
diff --git a/arch/x86/ia32/ia32_aout.c b/arch/x86/ia32/ia32_aout.c
index ae6aad1d24f7..cb26f18d43af 100644
--- a/arch/x86/ia32/ia32_aout.c
+++ b/arch/x86/ia32/ia32_aout.c
@@ -116,13 +116,13 @@ static struct linux_binfmt aout_format = {
.min_coredump = PAGE_SIZE
};
-static void set_brk(unsigned long start, unsigned long end)
+static int set_brk(unsigned long start, unsigned long end)
{
start = PAGE_ALIGN(start);
end = PAGE_ALIGN(end);
if (end <= start)
- return;
- vm_brk(start, end - start);
+ return 0;
+ return vm_brk(start, end - start);
}
#ifdef CONFIG_COREDUMP
@@ -321,7 +321,7 @@ static int load_aout_binary(struct linux_binprm *bprm)
error = vm_brk(text_addr & PAGE_MASK, map_size);
- if (error != (text_addr & PAGE_MASK))
+ if (error)
return error;
error = read_code(bprm->file, text_addr, 32,
@@ -349,7 +349,10 @@ static int load_aout_binary(struct linux_binprm *bprm)
#endif
if (!bprm->file->f_op->mmap || (fd_offset & ~PAGE_MASK) != 0) {
- vm_brk(N_TXTADDR(ex), ex.a_text+ex.a_data);
+ error = vm_brk(N_TXTADDR(ex), ex.a_text+ex.a_data);
+ if (error)
+ return error;
+
read_code(bprm->file, N_TXTADDR(ex), fd_offset,
ex.a_text+ex.a_data);
goto beyond_if;
@@ -372,10 +375,13 @@ static int load_aout_binary(struct linux_binprm *bprm)
if (error != N_DATADDR(ex))
return error;
}
+
beyond_if:
- set_binfmt(&aout_format);
+ error = set_brk(current->mm->start_brk, current->mm->brk);
+ if (error)
+ return error;
- set_brk(current->mm->start_brk, current->mm->brk);
+ set_binfmt(&aout_format);
current->mm->start_stack =
(unsigned long)create_aout_tables((char __user *)bprm->p, bprm);
@@ -434,7 +440,9 @@ static int load_aout_library(struct file *file)
error_time = jiffies;
}
#endif
- vm_brk(start_addr, ex.a_text + ex.a_data + ex.a_bss);
+ retval = vm_brk(start_addr, ex.a_text + ex.a_data + ex.a_bss);
+ if (retval)
+ goto out;
read_code(file, start_addr, N_TXTOFF(ex),
ex.a_text + ex.a_data);
@@ -453,9 +461,8 @@ static int load_aout_library(struct file *file)
len = PAGE_ALIGN(ex.a_text + ex.a_data);
bss = ex.a_text + ex.a_data + ex.a_bss;
if (bss > len) {
- error = vm_brk(start_addr + len, bss - len);
- retval = error;
- if (error != start_addr + len)
+ retval = vm_brk(start_addr + len, bss - len);
+ if (retval)
goto out;
}
retval = 0;
diff --git a/arch/x86/ia32/ia32_signal.c b/arch/x86/ia32/ia32_signal.c
index 0552884da18d..2f29f4e407c3 100644
--- a/arch/x86/ia32/ia32_signal.c
+++ b/arch/x86/ia32/ia32_signal.c
@@ -357,7 +357,7 @@ int ia32_setup_rt_frame(int sig, struct ksignal *ksig,
put_user_ex(ptr_to_compat(&frame->uc), &frame->puc);
/* Create the ucontext. */
- if (cpu_has_xsave)
+ if (boot_cpu_has(X86_FEATURE_XSAVE))
put_user_ex(UC_FP_XSTATE, &frame->uc.uc_flags);
else
put_user_ex(0, &frame->uc.uc_flags);
diff --git a/arch/x86/include/asm/alternative.h b/arch/x86/include/asm/alternative.h
index 99afb665a004..e77a6443104f 100644
--- a/arch/x86/include/asm/alternative.h
+++ b/arch/x86/include/asm/alternative.h
@@ -1,11 +1,12 @@
#ifndef _ASM_X86_ALTERNATIVE_H
#define _ASM_X86_ALTERNATIVE_H
+#ifndef __ASSEMBLY__
+
#include <linux/types.h>
#include <linux/stddef.h>
#include <linux/stringify.h>
#include <asm/asm.h>
-#include <asm/ptrace.h>
/*
* Alternative inline assembly for SMP.
@@ -233,36 +234,6 @@ static inline int alternatives_text_reserved(void *start, void *end)
*/
#define ASM_NO_INPUT_CLOBBER(clbr...) "i" (0) : clbr
-struct paravirt_patch_site;
-#ifdef CONFIG_PARAVIRT
-void apply_paravirt(struct paravirt_patch_site *start,
- struct paravirt_patch_site *end);
-#else
-static inline void apply_paravirt(struct paravirt_patch_site *start,
- struct paravirt_patch_site *end)
-{}
-#define __parainstructions NULL
-#define __parainstructions_end NULL
-#endif
-
-extern void *text_poke_early(void *addr, const void *opcode, size_t len);
-
-/*
- * Clear and restore the kernel write-protection flag on the local CPU.
- * Allows the kernel to edit read-only pages.
- * Side-effect: any interrupt handler running between save and restore will have
- * the ability to write to read-only pages.
- *
- * Warning:
- * Code patching in the UP case is safe if NMIs and MCE handlers are stopped and
- * no thread can be preempted in the instructions being modified (no iret to an
- * invalid instruction possible) or if the instructions are changed from a
- * consistent state to another consistent state atomically.
- * On the local CPU you need to be protected again NMI or MCE handlers seeing an
- * inconsistent instruction while you patch.
- */
-extern void *text_poke(void *addr, const void *opcode, size_t len);
-extern int poke_int3_handler(struct pt_regs *regs);
-extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
+#endif /* __ASSEMBLY__ */
#endif /* _ASM_X86_ALTERNATIVE_H */
diff --git a/arch/x86/include/asm/apic.h b/arch/x86/include/asm/apic.h
index 98f25bbafac4..bc27611fa58f 100644
--- a/arch/x86/include/asm/apic.h
+++ b/arch/x86/include/asm/apic.h
@@ -239,10 +239,10 @@ extern void __init check_x2apic(void);
extern void x2apic_setup(void);
static inline int x2apic_enabled(void)
{
- return cpu_has_x2apic && apic_is_x2apic_enabled();
+ return boot_cpu_has(X86_FEATURE_X2APIC) && apic_is_x2apic_enabled();
}
-#define x2apic_supported() (cpu_has_x2apic)
+#define x2apic_supported() (boot_cpu_has(X86_FEATURE_X2APIC))
#else /* !CONFIG_X86_X2APIC */
static inline void check_x2apic(void) { }
static inline void x2apic_setup(void) { }
diff --git a/arch/x86/include/asm/bios_ebda.h b/arch/x86/include/asm/bios_ebda.h
index aa6a3170ab5a..2b00c776f223 100644
--- a/arch/x86/include/asm/bios_ebda.h
+++ b/arch/x86/include/asm/bios_ebda.h
@@ -17,27 +17,6 @@ static inline unsigned int get_bios_ebda(void)
return address; /* 0 means none */
}
-/*
- * Return the sanitized length of the EBDA in bytes, if it exists.
- */
-static inline unsigned int get_bios_ebda_length(void)
-{
- unsigned int address;
- unsigned int length;
-
- address = get_bios_ebda();
- if (!address)
- return 0;
-
- /* EBDA length is byte 0 of the EBDA (stored in KiB) */
- length = *(unsigned char *)phys_to_virt(address);
- length <<= 10;
-
- /* Trim the length if it extends beyond 640KiB */
- length = min_t(unsigned int, (640 * 1024) - address, length);
- return length;
-}
-
void reserve_ebda_region(void);
#ifdef CONFIG_X86_CHECK_BIOS_CORRUPTION
diff --git a/arch/x86/include/asm/boot.h b/arch/x86/include/asm/boot.h
index 6b8d6e8cd449..abd06b19ddd2 100644
--- a/arch/x86/include/asm/boot.h
+++ b/arch/x86/include/asm/boot.h
@@ -12,29 +12,46 @@
/* Minimum kernel alignment, as a power of two */
#ifdef CONFIG_X86_64
-#define MIN_KERNEL_ALIGN_LG2 PMD_SHIFT
+# define MIN_KERNEL_ALIGN_LG2 PMD_SHIFT
#else
-#define MIN_KERNEL_ALIGN_LG2 (PAGE_SHIFT + THREAD_SIZE_ORDER)
+# define MIN_KERNEL_ALIGN_LG2 (PAGE_SHIFT + THREAD_SIZE_ORDER)
#endif
#define MIN_KERNEL_ALIGN (_AC(1, UL) << MIN_KERNEL_ALIGN_LG2)
#if (CONFIG_PHYSICAL_ALIGN & (CONFIG_PHYSICAL_ALIGN-1)) || \
(CONFIG_PHYSICAL_ALIGN < MIN_KERNEL_ALIGN)
-#error "Invalid value for CONFIG_PHYSICAL_ALIGN"
+# error "Invalid value for CONFIG_PHYSICAL_ALIGN"
#endif
#ifdef CONFIG_KERNEL_BZIP2
-#define BOOT_HEAP_SIZE 0x400000
+# define BOOT_HEAP_SIZE 0x400000
#else /* !CONFIG_KERNEL_BZIP2 */
-
-#define BOOT_HEAP_SIZE 0x10000
-
-#endif /* !CONFIG_KERNEL_BZIP2 */
+# define BOOT_HEAP_SIZE 0x10000
+#endif
#ifdef CONFIG_X86_64
-#define BOOT_STACK_SIZE 0x4000
-#else
-#define BOOT_STACK_SIZE 0x1000
+# define BOOT_STACK_SIZE 0x4000
+
+# define BOOT_INIT_PGT_SIZE (6*4096)
+# ifdef CONFIG_RANDOMIZE_BASE
+/*
+ * Assuming all cross the 512GB boundary:
+ * 1 page for level4
+ * (2+2)*4 pages for kernel, param, cmd_line, and randomized kernel
+ * 2 pages for first 2M (video RAM: CONFIG_X86_VERBOSE_BOOTUP).
+ * Total is 19 pages.
+ */
+# ifdef CONFIG_X86_VERBOSE_BOOTUP
+# define BOOT_PGT_SIZE (19*4096)
+# else /* !CONFIG_X86_VERBOSE_BOOTUP */
+# define BOOT_PGT_SIZE (17*4096)
+# endif
+# else /* !CONFIG_RANDOMIZE_BASE */
+# define BOOT_PGT_SIZE BOOT_INIT_PGT_SIZE
+# endif
+
+#else /* !CONFIG_X86_64 */
+# define BOOT_STACK_SIZE 0x1000
#endif
#endif /* _ASM_X86_BOOT_H */
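
The page budget above works out as 1 level-4 page + 4 mapped objects (kernel, param, cmd_line, randomized kernel) × (2+2) pages = 17, plus 2 pages for the first 2M of video RAM when CONFIG_X86_VERBOSE_BOOTUP is enabled — hence the 19*4096 and 17*4096 values of BOOT_PGT_SIZE, and the smaller fixed BOOT_INIT_PGT_SIZE when CONFIG_RANDOMIZE_BASE is off.
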
diff --git a/arch/x86/include/asm/bugs.h b/arch/x86/include/asm/bugs.h
index 08abf639075f..5490bbaf71d5 100644
--- a/arch/x86/include/asm/bugs.h
+++ b/arch/x86/include/asm/bugs.h
@@ -1,8 +1,16 @@
#ifndef _ASM_X86_BUGS_H
#define _ASM_X86_BUGS_H
+#include <asm/processor.h>
+
extern void check_bugs(void);
+#if defined(CONFIG_CPU_SUP_INTEL)
+void check_mpx_erratum(struct cpuinfo_x86 *c);
+#else
+static inline void check_mpx_erratum(struct cpuinfo_x86 *c) {}
+#endif
+
#if defined(CONFIG_CPU_SUP_INTEL) && defined(CONFIG_X86_32)
int ppro_with_ram_bug(void);
#else
diff --git a/arch/x86/include/asm/clocksource.h b/arch/x86/include/asm/clocksource.h
index d194266acb28..eae33c7170c8 100644
--- a/arch/x86/include/asm/clocksource.h
+++ b/arch/x86/include/asm/clocksource.h
@@ -3,11 +3,10 @@
#ifndef _ASM_X86_CLOCKSOURCE_H
#define _ASM_X86_CLOCKSOURCE_H
-#define VCLOCK_NONE 0 /* No vDSO clock available. */
-#define VCLOCK_TSC 1 /* vDSO should use vread_tsc. */
-#define VCLOCK_HPET 2 /* vDSO should use vread_hpet. */
-#define VCLOCK_PVCLOCK 3 /* vDSO should use vread_pvclock. */
-#define VCLOCK_MAX 3
+#define VCLOCK_NONE 0 /* No vDSO clock available. */
+#define VCLOCK_TSC 1 /* vDSO should use vread_tsc. */
+#define VCLOCK_PVCLOCK 2 /* vDSO should use vread_pvclock. */
+#define VCLOCK_MAX 2
struct arch_clocksource_data {
int vclock_mode;
diff --git a/arch/x86/include/asm/compat.h b/arch/x86/include/asm/compat.h
index ebb102e1bbc7..5a3b2c119ed0 100644
--- a/arch/x86/include/asm/compat.h
+++ b/arch/x86/include/asm/compat.h
@@ -307,7 +307,7 @@ static inline void __user *arch_compat_alloc_user_space(long len)
return (void __user *)round_down(sp - len, 16);
}
-static inline bool is_x32_task(void)
+static inline bool in_x32_syscall(void)
{
#ifdef CONFIG_X86_X32_ABI
if (task_pt_regs(current)->orig_ax & __X32_SYSCALL_BIT)
@@ -318,7 +318,7 @@ static inline bool is_x32_task(void)
static inline bool in_compat_syscall(void)
{
- return is_ia32_task() || is_x32_task();
+ return in_ia32_syscall() || in_x32_syscall();
}
#define in_compat_syscall in_compat_syscall /* override the generic impl */
diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index 3636ec06c887..483fb547e3c0 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -27,6 +27,7 @@ enum cpuid_leafs
CPUID_6_EAX,
CPUID_8000_000A_EDX,
CPUID_7_ECX,
+ CPUID_8000_0007_EBX,
};
#ifdef CONFIG_X86_FEATURE_NAMES
@@ -63,9 +64,9 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
(((bit)>>5)==11 && (1UL<<((bit)&31) & REQUIRED_MASK11)) || \
(((bit)>>5)==12 && (1UL<<((bit)&31) & REQUIRED_MASK12)) || \
(((bit)>>5)==13 && (1UL<<((bit)&31) & REQUIRED_MASK13)) || \
- (((bit)>>5)==13 && (1UL<<((bit)&31) & REQUIRED_MASK14)) || \
- (((bit)>>5)==13 && (1UL<<((bit)&31) & REQUIRED_MASK15)) || \
- (((bit)>>5)==14 && (1UL<<((bit)&31) & REQUIRED_MASK16)) )
+ (((bit)>>5)==14 && (1UL<<((bit)&31) & REQUIRED_MASK14)) || \
+ (((bit)>>5)==15 && (1UL<<((bit)&31) & REQUIRED_MASK15)) || \
+ (((bit)>>5)==16 && (1UL<<((bit)&31) & REQUIRED_MASK16)) )
#define DISABLED_MASK_BIT_SET(bit) \
( (((bit)>>5)==0 && (1UL<<((bit)&31) & DISABLED_MASK0 )) || \
@@ -82,9 +83,9 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
(((bit)>>5)==11 && (1UL<<((bit)&31) & DISABLED_MASK11)) || \
(((bit)>>5)==12 && (1UL<<((bit)&31) & DISABLED_MASK12)) || \
(((bit)>>5)==13 && (1UL<<((bit)&31) & DISABLED_MASK13)) || \
- (((bit)>>5)==13 && (1UL<<((bit)&31) & DISABLED_MASK14)) || \
- (((bit)>>5)==13 && (1UL<<((bit)&31) & DISABLED_MASK15)) || \
- (((bit)>>5)==14 && (1UL<<((bit)&31) & DISABLED_MASK16)) )
+ (((bit)>>5)==14 && (1UL<<((bit)&31) & DISABLED_MASK14)) || \
+ (((bit)>>5)==15 && (1UL<<((bit)&31) & DISABLED_MASK15)) || \
+ (((bit)>>5)==16 && (1UL<<((bit)&31) & DISABLED_MASK16)) )
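
A quick check of the corrected word indices with a word-16 feature: X86_FEATURE_PKU is 16*32+3, so (bit>>5) == 16 and 1UL<<(bit&31) == 1<<3, which must be tested against REQUIRED_MASK16/DISABLED_MASK16; with the old indices (13, 13 and 14 for words 14-16) those terms could never match. The disabled-features.h hunk further down fixes the companion problem: DISABLE_PKU/DISABLE_OSPKE are now built with "& 31" so the shift stays inside a 32-bit word, and the #ifdef polarity is corrected so the bits are only set when CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS is off.
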
#define cpu_has(c, bit) \
(__builtin_constant_p(bit) && REQUIRED_MASK_BIT_SET(bit) ? 1 : \
@@ -118,31 +119,6 @@ extern const char * const x86_bug_flags[NBUGINTS*32];
set_bit(bit, (unsigned long *)cpu_caps_set); \
} while (0)
-#define cpu_has_fpu boot_cpu_has(X86_FEATURE_FPU)
-#define cpu_has_pse boot_cpu_has(X86_FEATURE_PSE)
-#define cpu_has_tsc boot_cpu_has(X86_FEATURE_TSC)
-#define cpu_has_pge boot_cpu_has(X86_FEATURE_PGE)
-#define cpu_has_apic boot_cpu_has(X86_FEATURE_APIC)
-#define cpu_has_fxsr boot_cpu_has(X86_FEATURE_FXSR)
-#define cpu_has_xmm boot_cpu_has(X86_FEATURE_XMM)
-#define cpu_has_xmm2 boot_cpu_has(X86_FEATURE_XMM2)
-#define cpu_has_aes boot_cpu_has(X86_FEATURE_AES)
-#define cpu_has_avx boot_cpu_has(X86_FEATURE_AVX)
-#define cpu_has_avx2 boot_cpu_has(X86_FEATURE_AVX2)
-#define cpu_has_clflush boot_cpu_has(X86_FEATURE_CLFLUSH)
-#define cpu_has_gbpages boot_cpu_has(X86_FEATURE_GBPAGES)
-#define cpu_has_arch_perfmon boot_cpu_has(X86_FEATURE_ARCH_PERFMON)
-#define cpu_has_pat boot_cpu_has(X86_FEATURE_PAT)
-#define cpu_has_x2apic boot_cpu_has(X86_FEATURE_X2APIC)
-#define cpu_has_xsave boot_cpu_has(X86_FEATURE_XSAVE)
-#define cpu_has_xsaves boot_cpu_has(X86_FEATURE_XSAVES)
-#define cpu_has_osxsave boot_cpu_has(X86_FEATURE_OSXSAVE)
-#define cpu_has_hypervisor boot_cpu_has(X86_FEATURE_HYPERVISOR)
-/*
- * Do not add any more of those clumsy macros - use static_cpu_has() for
- * fast paths and boot_cpu_has() otherwise!
- */
-
#if defined(CC_HAVE_ASM_GOTO) && defined(CONFIG_X86_FAST_FEATURE_TESTS)
/*
* Static testing of CPU features. Used the same as boot_cpu_has().
diff --git a/arch/x86/include/asm/cpufeatures.h b/arch/x86/include/asm/cpufeatures.h
index 8f9afefd2dc5..4a413485f9eb 100644
--- a/arch/x86/include/asm/cpufeatures.h
+++ b/arch/x86/include/asm/cpufeatures.h
@@ -12,7 +12,7 @@
/*
* Defines x86 CPU feature bits
*/
-#define NCAPINTS 17 /* N 32-bit words worth of info */
+#define NCAPINTS 18 /* N 32-bit words worth of info */
#define NBUGINTS 1 /* N 32-bit bug flags */
/*
@@ -177,6 +177,7 @@
#define X86_FEATURE_PERFCTR_CORE ( 6*32+23) /* core performance counter extensions */
#define X86_FEATURE_PERFCTR_NB ( 6*32+24) /* NB performance counter extensions */
#define X86_FEATURE_BPEXT (6*32+26) /* data breakpoint extension */
+#define X86_FEATURE_PTSC ( 6*32+27) /* performance time-stamp counter */
#define X86_FEATURE_PERFCTR_L2 ( 6*32+28) /* L2 performance counter extensions */
#define X86_FEATURE_MWAITX ( 6*32+29) /* MWAIT extension (MONITORX/MWAITX) */
@@ -250,6 +251,7 @@
/* AMD-defined CPU features, CPUID level 0x80000008 (ebx), word 13 */
#define X86_FEATURE_CLZERO (13*32+0) /* CLZERO instruction */
+#define X86_FEATURE_IRPERF (13*32+1) /* Instructions Retired Count */
/* Thermal and Power Management Leaf, CPUID level 0x00000006 (eax), word 14 */
#define X86_FEATURE_DTHERM (14*32+ 0) /* Digital Thermal Sensor */
@@ -280,6 +282,11 @@
#define X86_FEATURE_PKU (16*32+ 3) /* Protection Keys for Userspace */
#define X86_FEATURE_OSPKE (16*32+ 4) /* OS Protection Keys Enable */
+/* AMD-defined CPU features, CPUID level 0x80000007 (ebx), word 17 */
+#define X86_FEATURE_OVERFLOW_RECOV (17*32+0) /* MCA overflow recovery support */
+#define X86_FEATURE_SUCCOR (17*32+1) /* Uncorrectable error containment and recovery */
+#define X86_FEATURE_SMCA (17*32+3) /* Scalable MCA */
+
/*
* BUG word(s)
*/
@@ -294,6 +301,9 @@
#define X86_BUG_FXSAVE_LEAK X86_BUG(6) /* FXSAVE leaks FOP/FIP/FOP */
#define X86_BUG_CLFLUSH_MONITOR X86_BUG(7) /* AAI65, CLFLUSH required before MONITOR */
#define X86_BUG_SYSRET_SS_ATTRS X86_BUG(8) /* SYSRET doesn't fix up SS attrs */
+#define X86_BUG_NULL_SEG X86_BUG(9) /* Nulling a selector preserves the base */
+#define X86_BUG_SWAPGS_FENCE X86_BUG(10) /* SWAPGS without input dep on GS */
+
#ifdef CONFIG_X86_32
/*
diff --git a/arch/x86/include/asm/disabled-features.h b/arch/x86/include/asm/disabled-features.h
index 39343be7d4f4..911e9358ceb1 100644
--- a/arch/x86/include/asm/disabled-features.h
+++ b/arch/x86/include/asm/disabled-features.h
@@ -29,11 +29,11 @@
#endif /* CONFIG_X86_64 */
#ifdef CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS
-# define DISABLE_PKU (1<<(X86_FEATURE_PKU))
-# define DISABLE_OSPKE (1<<(X86_FEATURE_OSPKE))
-#else
# define DISABLE_PKU 0
# define DISABLE_OSPKE 0
+#else
+# define DISABLE_PKU (1<<(X86_FEATURE_PKU & 31))
+# define DISABLE_OSPKE (1<<(X86_FEATURE_OSPKE & 31))
#endif /* CONFIG_X86_INTEL_MEMORY_PROTECTION_KEYS */
/*
diff --git a/arch/x86/include/asm/efi.h b/arch/x86/include/asm/efi.h
index 53748c45e488..78d1e7467eae 100644
--- a/arch/x86/include/asm/efi.h
+++ b/arch/x86/include/asm/efi.h
@@ -3,6 +3,7 @@
#include <asm/fpu/api.h>
#include <asm/pgtable.h>
+#include <asm/processor-flags.h>
#include <asm/tlb.h>
/*
@@ -28,33 +29,22 @@
#define MAX_CMDLINE_ADDRESS UINT_MAX
-#ifdef CONFIG_X86_32
+#define ARCH_EFI_IRQ_FLAGS_MASK X86_EFLAGS_IF
+#ifdef CONFIG_X86_32
extern unsigned long asmlinkage efi_call_phys(void *, ...);
+#define arch_efi_call_virt_setup() kernel_fpu_begin()
+#define arch_efi_call_virt_teardown() kernel_fpu_end()
+
/*
* Wrap all the virtual calls in a way that forces the parameters on the stack.
*/
-
-/* Use this macro if your virtual returns a non-void value */
-#define efi_call_virt(f, args...) \
+#define arch_efi_call_virt(f, args...) \
({ \
- efi_status_t __s; \
- kernel_fpu_begin(); \
- __s = ((efi_##f##_t __attribute__((regparm(0)))*) \
- efi.systab->runtime->f)(args); \
- kernel_fpu_end(); \
- __s; \
-})
-
-/* Use this macro if your virtual call does not return any value */
-#define __efi_call_virt(f, args...) \
-({ \
- kernel_fpu_begin(); \
((efi_##f##_t __attribute__((regparm(0)))*) \
efi.systab->runtime->f)(args); \
- kernel_fpu_end(); \
})
#define efi_ioremap(addr, size, type, attr) ioremap_cache(addr, size)
@@ -78,10 +68,8 @@ struct efi_scratch {
u64 phys_stack;
} __packed;
-#define efi_call_virt(f, ...) \
+#define arch_efi_call_virt_setup() \
({ \
- efi_status_t __s; \
- \
efi_sync_low_kernel_mappings(); \
preempt_disable(); \
__kernel_fpu_begin(); \
@@ -91,9 +79,13 @@ struct efi_scratch {
write_cr3((unsigned long)efi_scratch.efi_pgt); \
__flush_tlb_all(); \
} \
- \
- __s = efi_call((void *)efi.systab->runtime->f, __VA_ARGS__); \
- \
+})
+
+#define arch_efi_call_virt(f, args...) \
+ efi_call((void *)efi.systab->runtime->f, args) \
+
+#define arch_efi_call_virt_teardown() \
+({ \
if (efi_scratch.use_pgd) { \
write_cr3(efi_scratch.prev_cr3); \
__flush_tlb_all(); \
@@ -101,15 +93,8 @@ struct efi_scratch {
\
__kernel_fpu_end(); \
preempt_enable(); \
- __s; \
})
-/*
- * All X86_64 virt calls return non-void values. Thus, use non-void call for
- * virt calls that would be void on X86_32.
- */
-#define __efi_call_virt(f, args...) efi_call_virt(f, args)
-
extern void __iomem *__init efi_ioremap(unsigned long addr, unsigned long size,
u32 type, u64 attribute);
@@ -180,6 +165,8 @@ static inline bool efi_runtime_supported(void)
extern struct console early_efi_console;
extern void parse_efi_setup(u64 phys_addr, u32 data_len);
+extern void efifb_setup_from_dmi(struct screen_info *si, const char *opt);
+
#ifdef CONFIG_EFI_MIXED
extern void efi_thunk_runtime_setup(void);
extern efi_status_t efi_thunk_set_virtual_address_map(
@@ -225,6 +212,11 @@ __pure const struct efi_config *__efi_early(void);
#define efi_call_early(f, ...) \
__efi_early()->call(__efi_early()->f, __VA_ARGS__);
+#define __efi_call_early(f, ...) \
+ __efi_early()->call((unsigned long)f, __VA_ARGS__);
+
+#define efi_is_64bit() __efi_early()->is64
+
extern bool efi_reboot_required(void);
#else
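
With the setup/call/teardown hooks split out in this header, the generic wrapper (in drivers/firmware/efi/runtime-wrappers.c) can compose them for both the 32- and 64-bit paths; a sketch of the presumed shape, not the exact upstream macro:

#define efi_call_virt(f, args...)					\
({									\
	efi_status_t __s;						\
									\
	arch_efi_call_virt_setup();					\
	__s = arch_efi_call_virt(f, args);				\
	arch_efi_call_virt_teardown();					\
	__s;								\
})
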
diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
index 15340e36ddcb..fea7724141a0 100644
--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -176,7 +176,7 @@ static inline void elf_common_init(struct thread_struct *t,
regs->si = regs->di = regs->bp = 0;
regs->r8 = regs->r9 = regs->r10 = regs->r11 = 0;
regs->r12 = regs->r13 = regs->r14 = regs->r15 = 0;
- t->fs = t->gs = 0;
+ t->fsbase = t->gsbase = 0;
t->fsindex = t->gsindex = 0;
t->ds = t->es = ds;
}
@@ -226,8 +226,8 @@ do { \
(pr_reg)[18] = (regs)->flags; \
(pr_reg)[19] = (regs)->sp; \
(pr_reg)[20] = (regs)->ss; \
- (pr_reg)[21] = current->thread.fs; \
- (pr_reg)[22] = current->thread.gs; \
+ (pr_reg)[21] = current->thread.fsbase; \
+ (pr_reg)[22] = current->thread.gsbase; \
asm("movl %%ds,%0" : "=r" (v)); (pr_reg)[23] = v; \
asm("movl %%es,%0" : "=r" (v)); (pr_reg)[24] = v; \
asm("movl %%fs,%0" : "=r" (v)); (pr_reg)[25] = v; \
diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h
index e6a8613fbfb0..3a106165e03a 100644
--- a/arch/x86/include/asm/hugetlb.h
+++ b/arch/x86/include/asm/hugetlb.h
@@ -4,7 +4,7 @@
#include <asm/page.h>
#include <asm-generic/hugetlb.h>
-#define hugepages_supported() cpu_has_pse
+#define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE)
static inline int is_hugepage_only_range(struct mm_struct *mm,
unsigned long addr,
diff --git a/arch/x86/include/asm/intel_telemetry.h b/arch/x86/include/asm/intel_telemetry.h
index ed65fe701de5..85029b58d0cd 100644
--- a/arch/x86/include/asm/intel_telemetry.h
+++ b/arch/x86/include/asm/intel_telemetry.h
@@ -99,7 +99,7 @@ struct telemetry_core_ops {
int (*reset_events)(void);
};
-int telemetry_set_pltdata(struct telemetry_core_ops *ops,
+int telemetry_set_pltdata(const struct telemetry_core_ops *ops,
struct telemetry_plt_config *pltconfig);
int telemetry_clear_pltdata(void);
diff --git a/arch/x86/include/asm/irq_work.h b/arch/x86/include/asm/irq_work.h
index d0afb05c84fc..f70604125286 100644
--- a/arch/x86/include/asm/irq_work.h
+++ b/arch/x86/include/asm/irq_work.h
@@ -5,7 +5,7 @@
static inline bool arch_irq_work_has_interrupt(void)
{
- return cpu_has_apic;
+ return boot_cpu_has(X86_FEATURE_APIC);
}
#endif /* _ASM_IRQ_WORK_H */
diff --git a/arch/x86/include/asm/kgdb.h b/arch/x86/include/asm/kgdb.h
index 332f98c9111f..22a8537eb780 100644
--- a/arch/x86/include/asm/kgdb.h
+++ b/arch/x86/include/asm/kgdb.h
@@ -6,6 +6,8 @@
* Copyright (C) 2008 Wind River Systems, Inc.
*/
+#include <asm/ptrace.h>
+
/*
* BUFMAX defines the maximum number of characters in inbound/outbound
* buffers; at least NUMREGBYTES*2 are needed for register packets
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index b7e394485a5f..e0fbe7e70dc1 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -562,7 +562,6 @@ struct kvm_vcpu_arch {
struct {
u64 msr_val;
u64 last_steal;
- u64 accum_steal;
struct gfn_to_hva_cache stime;
struct kvm_steal_time steal;
} st;
@@ -774,6 +773,11 @@ struct kvm_arch {
u8 nr_reserved_ioapic_pins;
bool disabled_lapic_found;
+
+ /* Struct members for AVIC */
+ u32 ldr_mode;
+ struct page *avic_logical_id_table_page;
+ struct page *avic_physical_id_table_page;
};
struct kvm_vm_stat {
@@ -804,6 +808,7 @@ struct kvm_vcpu_stat {
u32 halt_exits;
u32 halt_successful_poll;
u32 halt_attempted_poll;
+ u32 halt_poll_invalid;
u32 halt_wakeup;
u32 request_irq_exits;
u32 irq_exits;
@@ -848,6 +853,9 @@ struct kvm_x86_ops {
bool (*cpu_has_high_real_mode_segbase)(void);
void (*cpuid_update)(struct kvm_vcpu *vcpu);
+ int (*vm_init)(struct kvm *kvm);
+ void (*vm_destroy)(struct kvm *kvm);
+
/* Create, but do not attach this VCPU */
struct kvm_vcpu *(*vcpu_create)(struct kvm *kvm, unsigned id);
void (*vcpu_free)(struct kvm_vcpu *vcpu);
@@ -914,7 +922,7 @@ struct kvm_x86_ops {
bool (*get_enable_apicv)(void);
void (*refresh_apicv_exec_ctrl)(struct kvm_vcpu *vcpu);
void (*hwapic_irr_update)(struct kvm_vcpu *vcpu, int max_irr);
- void (*hwapic_isr_update)(struct kvm *kvm, int isr);
+ void (*hwapic_isr_update)(struct kvm_vcpu *vcpu, int isr);
void (*load_eoi_exitmap)(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap);
void (*set_virtual_x2apic_mode)(struct kvm_vcpu *vcpu, bool set);
void (*set_apic_access_page_addr)(struct kvm_vcpu *vcpu, hpa_t hpa);
@@ -990,8 +998,13 @@ struct kvm_x86_ops {
*/
int (*pre_block)(struct kvm_vcpu *vcpu);
void (*post_block)(struct kvm_vcpu *vcpu);
+
+ void (*vcpu_blocking)(struct kvm_vcpu *vcpu);
+ void (*vcpu_unblocking)(struct kvm_vcpu *vcpu);
+
int (*update_pi_irte)(struct kvm *kvm, unsigned int host_irq,
uint32_t guest_irq, bool set);
+ void (*apicv_post_state_restore)(struct kvm_vcpu *vcpu);
};
struct kvm_arch_async_pf {
@@ -1341,7 +1354,18 @@ bool kvm_intr_is_single_vcpu(struct kvm *kvm, struct kvm_lapic_irq *irq,
void kvm_set_msi_irq(struct kvm_kernel_irq_routing_entry *e,
struct kvm_lapic_irq *irq);
-static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu) {}
-static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu) {}
+static inline void kvm_arch_vcpu_blocking(struct kvm_vcpu *vcpu)
+{
+ if (kvm_x86_ops->vcpu_blocking)
+ kvm_x86_ops->vcpu_blocking(vcpu);
+}
+
+static inline void kvm_arch_vcpu_unblocking(struct kvm_vcpu *vcpu)
+{
+ if (kvm_x86_ops->vcpu_unblocking)
+ kvm_x86_ops->vcpu_unblocking(vcpu);
+}
+
+static inline void kvm_arch_vcpu_block_finish(struct kvm_vcpu *vcpu) {}
#endif /* _ASM_X86_KVM_HOST_H */
diff --git a/arch/x86/include/asm/linkage.h b/arch/x86/include/asm/linkage.h
index 79327e9483a3..0ccb26dda126 100644
--- a/arch/x86/include/asm/linkage.h
+++ b/arch/x86/include/asm/linkage.h
@@ -8,40 +8,6 @@
#ifdef CONFIG_X86_32
#define asmlinkage CPP_ASMLINKAGE __attribute__((regparm(0)))
-
-/*
- * Make sure the compiler doesn't do anything stupid with the
- * arguments on the stack - they are owned by the *caller*, not
- * the callee. This just fools gcc into not spilling into them,
- * and keeps it from doing tailcall recursion and/or using the
- * stack slots for temporaries, since they are live and "used"
- * all the way to the end of the function.
- *
- * NOTE! On x86-64, all the arguments are in registers, so this
- * only matters on a 32-bit kernel.
- */
-#define asmlinkage_protect(n, ret, args...) \
- __asmlinkage_protect##n(ret, ##args)
-#define __asmlinkage_protect_n(ret, args...) \
- __asm__ __volatile__ ("" : "=r" (ret) : "0" (ret), ##args)
-#define __asmlinkage_protect0(ret) \
- __asmlinkage_protect_n(ret)
-#define __asmlinkage_protect1(ret, arg1) \
- __asmlinkage_protect_n(ret, "m" (arg1))
-#define __asmlinkage_protect2(ret, arg1, arg2) \
- __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2))
-#define __asmlinkage_protect3(ret, arg1, arg2, arg3) \
- __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3))
-#define __asmlinkage_protect4(ret, arg1, arg2, arg3, arg4) \
- __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \
- "m" (arg4))
-#define __asmlinkage_protect5(ret, arg1, arg2, arg3, arg4, arg5) \
- __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \
- "m" (arg4), "m" (arg5))
-#define __asmlinkage_protect6(ret, arg1, arg2, arg3, arg4, arg5, arg6) \
- __asmlinkage_protect_n(ret, "m" (arg1), "m" (arg2), "m" (arg3), \
- "m" (arg4), "m" (arg5), "m" (arg6))
-
#endif /* CONFIG_X86_32 */
#ifdef __ASSEMBLY__
diff --git a/arch/x86/include/asm/livepatch.h b/arch/x86/include/asm/livepatch.h
index 7e68f9558552..a7f9181f63f3 100644
--- a/arch/x86/include/asm/livepatch.h
+++ b/arch/x86/include/asm/livepatch.h
@@ -32,8 +32,6 @@ static inline int klp_check_compiler_support(void)
#endif
return 0;
}
-int klp_write_module_reloc(struct module *mod, unsigned long type,
- unsigned long loc, unsigned long value);
static inline void klp_arch_set_pc(struct pt_regs *regs, unsigned long ip)
{
diff --git a/arch/x86/include/asm/mce.h b/arch/x86/include/asm/mce.h
index 92b6f651fa4f..8bf766ef0e18 100644
--- a/arch/x86/include/asm/mce.h
+++ b/arch/x86/include/asm/mce.h
@@ -104,13 +104,23 @@
#define MCE_LOG_SIGNATURE "MACHINECHECK"
/* AMD Scalable MCA */
+#define MSR_AMD64_SMCA_MC0_CTL 0xc0002000
+#define MSR_AMD64_SMCA_MC0_STATUS 0xc0002001
+#define MSR_AMD64_SMCA_MC0_ADDR 0xc0002002
#define MSR_AMD64_SMCA_MC0_MISC0 0xc0002003
#define MSR_AMD64_SMCA_MC0_CONFIG 0xc0002004
#define MSR_AMD64_SMCA_MC0_IPID 0xc0002005
+#define MSR_AMD64_SMCA_MC0_DESTAT 0xc0002008
+#define MSR_AMD64_SMCA_MC0_DEADDR 0xc0002009
#define MSR_AMD64_SMCA_MC0_MISC1 0xc000200a
+#define MSR_AMD64_SMCA_MCx_CTL(x) (MSR_AMD64_SMCA_MC0_CTL + 0x10*(x))
+#define MSR_AMD64_SMCA_MCx_STATUS(x) (MSR_AMD64_SMCA_MC0_STATUS + 0x10*(x))
+#define MSR_AMD64_SMCA_MCx_ADDR(x) (MSR_AMD64_SMCA_MC0_ADDR + 0x10*(x))
#define MSR_AMD64_SMCA_MCx_MISC(x) (MSR_AMD64_SMCA_MC0_MISC0 + 0x10*(x))
#define MSR_AMD64_SMCA_MCx_CONFIG(x) (MSR_AMD64_SMCA_MC0_CONFIG + 0x10*(x))
#define MSR_AMD64_SMCA_MCx_IPID(x) (MSR_AMD64_SMCA_MC0_IPID + 0x10*(x))
+#define MSR_AMD64_SMCA_MCx_DESTAT(x) (MSR_AMD64_SMCA_MC0_DESTAT + 0x10*(x))
+#define MSR_AMD64_SMCA_MCx_DEADDR(x) (MSR_AMD64_SMCA_MC0_DEADDR + 0x10*(x))
#define MSR_AMD64_SMCA_MCx_MISCy(x, y) ((MSR_AMD64_SMCA_MC0_MISC1 + y) + (0x10*(x)))
/*
@@ -168,9 +178,18 @@ struct mce_vendor_flags {
__reserved_0 : 61;
};
+
+struct mca_msr_regs {
+ u32 (*ctl) (int bank);
+ u32 (*status) (int bank);
+ u32 (*addr) (int bank);
+ u32 (*misc) (int bank);
+};
+
extern struct mce_vendor_flags mce_flags;
extern struct mca_config mca_cfg;
+extern struct mca_msr_regs msr_ops;
extern void mce_register_decode_chain(struct notifier_block *nb);
extern void mce_unregister_decode_chain(struct notifier_block *nb);
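
The new mca_msr_regs indirection lets MCE code compute a bank's register address without open-coding the legacy vs. Scalable-MCA layouts. A sketch with hypothetical accessor names (real code would also fill in .ctl/.addr/.misc and pick the SMCA variants when mce_flags indicates Scalable MCA):

static u32 legacy_status_reg(int bank) { return MSR_IA32_MCx_STATUS(bank); }
static u32 smca_status_reg(int bank)   { return MSR_AMD64_SMCA_MCx_STATUS(bank); }

struct mca_msr_regs msr_ops = {
	.status = legacy_status_reg,
};

/* callers then stop caring which family they run on: */
/*	rdmsrl(msr_ops.status(bank), status);		*/
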
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 84280029cafd..396348196aa7 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -115,103 +115,12 @@ static inline void destroy_context(struct mm_struct *mm)
destroy_context_ldt(mm);
}
-static inline void switch_mm(struct mm_struct *prev, struct mm_struct *next,
- struct task_struct *tsk)
-{
- unsigned cpu = smp_processor_id();
+extern void switch_mm(struct mm_struct *prev, struct mm_struct *next,
+ struct task_struct *tsk);
- if (likely(prev != next)) {
-#ifdef CONFIG_SMP
- this_cpu_write(cpu_tlbstate.state, TLBSTATE_OK);
- this_cpu_write(cpu_tlbstate.active_mm, next);
-#endif
- cpumask_set_cpu(cpu, mm_cpumask(next));
-
- /*
- * Re-load page tables.
- *
- * This logic has an ordering constraint:
- *
- * CPU 0: Write to a PTE for 'next'
- * CPU 0: load bit 1 in mm_cpumask. if nonzero, send IPI.
- * CPU 1: set bit 1 in next's mm_cpumask
- * CPU 1: load from the PTE that CPU 0 writes (implicit)
- *
- * We need to prevent an outcome in which CPU 1 observes
- * the new PTE value and CPU 0 observes bit 1 clear in
- * mm_cpumask. (If that occurs, then the IPI will never
- * be sent, and CPU 0's TLB will contain a stale entry.)
- *
- * The bad outcome can occur if either CPU's load is
- * reordered before that CPU's store, so both CPUs must
- * execute full barriers to prevent this from happening.
- *
- * Thus, switch_mm needs a full barrier between the
- * store to mm_cpumask and any operation that could load
- * from next->pgd. TLB fills are special and can happen
- * due to instruction fetches or for no reason at all,
- * and neither LOCK nor MFENCE orders them.
- * Fortunately, load_cr3() is serializing and gives the
- * ordering guarantee we need.
- *
- */
- load_cr3(next->pgd);
-
- trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
-
- /* Stop flush ipis for the previous mm */
- cpumask_clear_cpu(cpu, mm_cpumask(prev));
-
- /* Load per-mm CR4 state */
- load_mm_cr4(next);
-
-#ifdef CONFIG_MODIFY_LDT_SYSCALL
- /*
- * Load the LDT, if the LDT is different.
- *
- * It's possible that prev->context.ldt doesn't match
- * the LDT register. This can happen if leave_mm(prev)
- * was called and then modify_ldt changed
- * prev->context.ldt but suppressed an IPI to this CPU.
- * In this case, prev->context.ldt != NULL, because we
- * never set context.ldt to NULL while the mm still
- * exists. That means that next->context.ldt !=
- * prev->context.ldt, because mms never share an LDT.
- */
- if (unlikely(prev->context.ldt != next->context.ldt))
- load_mm_ldt(next);
-#endif
- }
-#ifdef CONFIG_SMP
- else {
- this_cpu_write(cpu_tlbstate.state, TLBSTATE_OK);
- BUG_ON(this_cpu_read(cpu_tlbstate.active_mm) != next);
-
- if (!cpumask_test_cpu(cpu, mm_cpumask(next))) {
- /*
- * On established mms, the mm_cpumask is only changed
- * from irq context, from ptep_clear_flush() while in
- * lazy tlb mode, and here. Irqs are blocked during
- * schedule, protecting us from simultaneous changes.
- */
- cpumask_set_cpu(cpu, mm_cpumask(next));
-
- /*
- * We were in lazy tlb mode and leave_mm disabled
- * tlb flush IPI delivery. We must reload CR3
- * to make sure to use no freed page tables.
- *
- * As above, load_cr3() is serializing and orders TLB
- * fills with respect to the mm_cpumask write.
- */
- load_cr3(next->pgd);
- trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
- load_mm_cr4(next);
- load_mm_ldt(next);
- }
- }
-#endif
-}
+extern void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
+ struct task_struct *tsk);
+#define switch_mm_irqs_off switch_mm_irqs_off
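
switch_mm() itself moves out of line (to arch/x86/mm/tlb.c); the split lets paths that already run with interrupts disabled call switch_mm_irqs_off() directly, while the plain entry point presumably reduces to a thin irq-save wrapper along these lines (a sketch, not part of the file being patched here):

void switch_mm(struct mm_struct *prev, struct mm_struct *next,
	       struct task_struct *tsk)
{
	unsigned long flags;

	local_irq_save(flags);
	switch_mm_irqs_off(prev, next, tsk);
	local_irq_restore(flags);
}
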
#define activate_mm(prev, next) \
do { \
diff --git a/arch/x86/include/asm/msr-index.h b/arch/x86/include/asm/msr-index.h
index 5b3c9a55f51c..5a73a9c62c39 100644
--- a/arch/x86/include/asm/msr-index.h
+++ b/arch/x86/include/asm/msr-index.h
@@ -89,27 +89,16 @@
#define MSR_PEBS_LD_LAT_THRESHOLD 0x000003f6
#define MSR_IA32_RTIT_CTL 0x00000570
-#define RTIT_CTL_TRACEEN BIT(0)
-#define RTIT_CTL_CYCLEACC BIT(1)
-#define RTIT_CTL_OS BIT(2)
-#define RTIT_CTL_USR BIT(3)
-#define RTIT_CTL_CR3EN BIT(7)
-#define RTIT_CTL_TOPA BIT(8)
-#define RTIT_CTL_MTC_EN BIT(9)
-#define RTIT_CTL_TSC_EN BIT(10)
-#define RTIT_CTL_DISRETC BIT(11)
-#define RTIT_CTL_BRANCH_EN BIT(13)
-#define RTIT_CTL_MTC_RANGE_OFFSET 14
-#define RTIT_CTL_MTC_RANGE (0x0full << RTIT_CTL_MTC_RANGE_OFFSET)
-#define RTIT_CTL_CYC_THRESH_OFFSET 19
-#define RTIT_CTL_CYC_THRESH (0x0full << RTIT_CTL_CYC_THRESH_OFFSET)
-#define RTIT_CTL_PSB_FREQ_OFFSET 24
-#define RTIT_CTL_PSB_FREQ (0x0full << RTIT_CTL_PSB_FREQ_OFFSET)
#define MSR_IA32_RTIT_STATUS 0x00000571
-#define RTIT_STATUS_CONTEXTEN BIT(1)
-#define RTIT_STATUS_TRIGGEREN BIT(2)
-#define RTIT_STATUS_ERROR BIT(4)
-#define RTIT_STATUS_STOPPED BIT(5)
+#define MSR_IA32_RTIT_STATUS 0x00000571
+#define MSR_IA32_RTIT_ADDR0_A 0x00000580
+#define MSR_IA32_RTIT_ADDR0_B 0x00000581
+#define MSR_IA32_RTIT_ADDR1_A 0x00000582
+#define MSR_IA32_RTIT_ADDR1_B 0x00000583
+#define MSR_IA32_RTIT_ADDR2_A 0x00000584
+#define MSR_IA32_RTIT_ADDR2_B 0x00000585
+#define MSR_IA32_RTIT_ADDR3_A 0x00000586
+#define MSR_IA32_RTIT_ADDR3_B 0x00000587
#define MSR_IA32_RTIT_CR3_MATCH 0x00000572
#define MSR_IA32_RTIT_OUTPUT_BASE 0x00000560
#define MSR_IA32_RTIT_OUTPUT_MASK 0x00000561
@@ -205,6 +194,8 @@
#define MSR_CONFIG_TDP_CONTROL 0x0000064B
#define MSR_TURBO_ACTIVATION_RATIO 0x0000064C
+#define MSR_PLATFORM_ENERGY_STATUS 0x0000064D
+
#define MSR_PKG_WEIGHTED_CORE_C0_RES 0x00000658
#define MSR_PKG_ANY_CORE_C0_RES 0x00000659
#define MSR_PKG_ANY_GFXE_C0_RES 0x0000065A
@@ -315,6 +306,9 @@
#define MSR_AMD64_IBSOPDATA4 0xc001103d
#define MSR_AMD64_IBS_REG_COUNT_MAX 8 /* includes MSR_AMD64_IBSBRTARGET */
+/* Fam 17h MSRs */
+#define MSR_F17H_IRPERF 0xc00000e9
+
/* Fam 16h MSRs */
#define MSR_F16H_L2I_PERF_CTL 0xc0010230
#define MSR_F16H_L2I_PERF_CTR 0xc0010231
@@ -328,6 +322,7 @@
#define MSR_F15H_PERF_CTR 0xc0010201
#define MSR_F15H_NB_PERF_CTL 0xc0010240
#define MSR_F15H_NB_PERF_CTR 0xc0010241
+#define MSR_F15H_PTSC 0xc0010280
#define MSR_F15H_IC_CFG 0xc0011021
/* Fam 10h MSRs */
diff --git a/arch/x86/include/asm/msr.h b/arch/x86/include/asm/msr.h
index 7a79ee2778b3..7dc1d8fef7fd 100644
--- a/arch/x86/include/asm/msr.h
+++ b/arch/x86/include/asm/msr.h
@@ -84,7 +84,10 @@ static inline unsigned long long native_read_msr(unsigned int msr)
{
DECLARE_ARGS(val, low, high);
- asm volatile("rdmsr" : EAX_EDX_RET(val, low, high) : "c" (msr));
+ asm volatile("1: rdmsr\n"
+ "2:\n"
+ _ASM_EXTABLE_HANDLE(1b, 2b, ex_handler_rdmsr_unsafe)
+ : EAX_EDX_RET(val, low, high) : "c" (msr));
if (msr_tracepoint_active(__tracepoint_read_msr))
do_trace_read_msr(msr, EAX_EDX_VAL(val, low, high), 0);
return EAX_EDX_VAL(val, low, high);
@@ -98,7 +101,10 @@ static inline unsigned long long native_read_msr_safe(unsigned int msr,
asm volatile("2: rdmsr ; xor %[err],%[err]\n"
"1:\n\t"
".section .fixup,\"ax\"\n\t"
- "3: mov %[fault],%[err] ; jmp 1b\n\t"
+ "3: mov %[fault],%[err]\n\t"
+ "xorl %%eax, %%eax\n\t"
+ "xorl %%edx, %%edx\n\t"
+ "jmp 1b\n\t"
".previous\n\t"
_ASM_EXTABLE(2b, 3b)
: [err] "=r" (*err), EAX_EDX_RET(val, low, high)
@@ -108,10 +114,14 @@ static inline unsigned long long native_read_msr_safe(unsigned int msr,
return EAX_EDX_VAL(val, low, high);
}
-static inline void native_write_msr(unsigned int msr,
- unsigned low, unsigned high)
+/* Can be uninlined because referenced by paravirt */
+notrace static inline void native_write_msr(unsigned int msr,
+ unsigned low, unsigned high)
{
- asm volatile("wrmsr" : : "c" (msr), "a"(low), "d" (high) : "memory");
+ asm volatile("1: wrmsr\n"
+ "2:\n"
+ _ASM_EXTABLE_HANDLE(1b, 2b, ex_handler_wrmsr_unsafe)
+ : : "c" (msr), "a"(low), "d" (high) : "memory");
if (msr_tracepoint_active(__tracepoint_read_msr))
do_trace_write_msr(msr, ((u64)high << 32 | low), 0);
}
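
A minimal standalone sketch (not kernel code, all values invented) of the convention the
rdmsr/wrmsr hunks above rely on: the 64-bit MSR value travels as an EDX:EAX pair of
32-bit halves, and the safe read's fixup now clears both halves, so a faulting access
reports 0 together with the error code.

#include <stdint.h>
#include <stdio.h>

static uint64_t model_read_msr_safe(uint32_t msr, int *err)
{
    if (msr != 0x10) {          /* pretend only MSR 0x10 exists in this toy model */
        *err = -5;              /* -EIO, as the paravirt comment documents */
        return 0;               /* low and high both cleared, like the fixup */
    }
    uint32_t low = 0x89abcdef, high = 0x01234567;   /* stand-ins for EAX, EDX */
    *err = 0;
    return ((uint64_t)high << 32) | low;            /* same shape as EAX_EDX_VAL */
}

int main(void)
{
    int err;
    uint64_t v = model_read_msr_safe(0x10, &err);
    printf("msr 0x10:   err=%d val=0x%016llx\n", err, (unsigned long long)v);
    v = model_read_msr_safe(0xdead, &err);
    printf("msr 0xdead: err=%d val=0x%016llx\n", err, (unsigned long long)v);
    return 0;
}
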
diff --git a/arch/x86/include/asm/mtrr.h b/arch/x86/include/asm/mtrr.h
index b94f6f64e23d..dbff1456d215 100644
--- a/arch/x86/include/asm/mtrr.h
+++ b/arch/x86/include/asm/mtrr.h
@@ -24,6 +24,7 @@
#define _ASM_X86_MTRR_H
#include <uapi/asm/mtrr.h>
+#include <asm/pat.h>
/*
@@ -83,9 +84,12 @@ static inline int mtrr_trim_uncached_memory(unsigned long end_pfn)
static inline void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi)
{
}
+static inline void mtrr_bp_init(void)
+{
+ pat_disable("MTRRs disabled, skipping PAT initialization too.");
+}
#define mtrr_ap_init() do {} while (0)
-#define mtrr_bp_init() do {} while (0)
#define set_mtrr_aps_delayed_init() do {} while (0)
#define mtrr_aps_init() do {} while (0)
#define mtrr_bp_restore() do {} while (0)
diff --git a/arch/x86/include/asm/page.h b/arch/x86/include/asm/page.h
index 802dde30c928..cf8f619b305f 100644
--- a/arch/x86/include/asm/page.h
+++ b/arch/x86/include/asm/page.h
@@ -37,7 +37,10 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
alloc_page_vma(GFP_HIGHUSER | __GFP_ZERO | movableflags, vma, vaddr)
#define __HAVE_ARCH_ALLOC_ZEROED_USER_HIGHPAGE
+#ifndef __pa
#define __pa(x) __phys_addr((unsigned long)(x))
+#endif
+
#define __pa_nodebug(x) __phys_addr_nodebug((unsigned long)(x))
/* __pa_symbol should be used for C visible symbols.
This seems to be the official gcc blessed way to do such arithmetic. */
@@ -51,7 +54,9 @@ static inline void copy_user_page(void *to, void *from, unsigned long vaddr,
#define __pa_symbol(x) \
__phys_addr_symbol(__phys_reloc_hide((unsigned long)(x)))
+#ifndef __va
#define __va(x) ((void *)((unsigned long)(x)+PAGE_OFFSET))
+#endif
#define __boot_va(x) __va(x)
#define __boot_pa(x) __pa(x)
diff --git a/arch/x86/include/asm/page_64_types.h b/arch/x86/include/asm/page_64_types.h
index 4928cf0d5af0..d5c2f8b40faa 100644
--- a/arch/x86/include/asm/page_64_types.h
+++ b/arch/x86/include/asm/page_64_types.h
@@ -47,12 +47,10 @@
* are fully set up. If kernel ASLR is configured, it can extend the
* kernel page table mapping, reducing the size of the modules area.
*/
-#define KERNEL_IMAGE_SIZE_DEFAULT (512 * 1024 * 1024)
-#if defined(CONFIG_RANDOMIZE_BASE) && \
- CONFIG_RANDOMIZE_BASE_MAX_OFFSET > KERNEL_IMAGE_SIZE_DEFAULT
-#define KERNEL_IMAGE_SIZE CONFIG_RANDOMIZE_BASE_MAX_OFFSET
+#if defined(CONFIG_RANDOMIZE_BASE)
+#define KERNEL_IMAGE_SIZE (1024 * 1024 * 1024)
#else
-#define KERNEL_IMAGE_SIZE KERNEL_IMAGE_SIZE_DEFAULT
+#define KERNEL_IMAGE_SIZE (512 * 1024 * 1024)
#endif
#endif /* _ASM_X86_PAGE_64_DEFS_H */
diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 601f1b8f9961..2970d22d7766 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -15,17 +15,6 @@
#include <linux/cpumask.h>
#include <asm/frame.h>
-static inline int paravirt_enabled(void)
-{
- return pv_info.paravirt_enabled;
-}
-
-static inline int paravirt_has_feature(unsigned int feature)
-{
- WARN_ON_ONCE(!pv_info.paravirt_enabled);
- return (pv_info.features & feature);
-}
-
static inline void load_sp0(struct tss_struct *tss,
struct thread_struct *thread)
{
@@ -130,21 +119,31 @@ static inline void wbinvd(void)
#define get_kernel_rpl() (pv_info.kernel_rpl)
-static inline u64 paravirt_read_msr(unsigned msr, int *err)
+static inline u64 paravirt_read_msr(unsigned msr)
+{
+ return PVOP_CALL1(u64, pv_cpu_ops.read_msr, msr);
+}
+
+static inline void paravirt_write_msr(unsigned msr,
+ unsigned low, unsigned high)
+{
+ return PVOP_VCALL3(pv_cpu_ops.write_msr, msr, low, high);
+}
+
+static inline u64 paravirt_read_msr_safe(unsigned msr, int *err)
{
- return PVOP_CALL2(u64, pv_cpu_ops.read_msr, msr, err);
+ return PVOP_CALL2(u64, pv_cpu_ops.read_msr_safe, msr, err);
}
-static inline int paravirt_write_msr(unsigned msr, unsigned low, unsigned high)
+static inline int paravirt_write_msr_safe(unsigned msr,
+ unsigned low, unsigned high)
{
- return PVOP_CALL3(int, pv_cpu_ops.write_msr, msr, low, high);
+ return PVOP_CALL3(int, pv_cpu_ops.write_msr_safe, msr, low, high);
}
-/* These should all do BUG_ON(_err), but our headers are too tangled. */
#define rdmsr(msr, val1, val2) \
do { \
- int _err; \
- u64 _l = paravirt_read_msr(msr, &_err); \
+ u64 _l = paravirt_read_msr(msr); \
val1 = (u32)_l; \
val2 = _l >> 32; \
} while (0)
@@ -156,8 +155,7 @@ do { \
#define rdmsrl(msr, val) \
do { \
- int _err; \
- val = paravirt_read_msr(msr, &_err); \
+ val = paravirt_read_msr(msr); \
} while (0)
static inline void wrmsrl(unsigned msr, u64 val)
@@ -165,23 +163,23 @@ static inline void wrmsrl(unsigned msr, u64 val)
wrmsr(msr, (u32)val, (u32)(val>>32));
}
-#define wrmsr_safe(msr, a, b) paravirt_write_msr(msr, a, b)
+#define wrmsr_safe(msr, a, b) paravirt_write_msr_safe(msr, a, b)
/* rdmsr with exception handling */
-#define rdmsr_safe(msr, a, b) \
-({ \
- int _err; \
- u64 _l = paravirt_read_msr(msr, &_err); \
- (*a) = (u32)_l; \
- (*b) = _l >> 32; \
- _err; \
+#define rdmsr_safe(msr, a, b) \
+({ \
+ int _err; \
+ u64 _l = paravirt_read_msr_safe(msr, &_err); \
+ (*a) = (u32)_l; \
+ (*b) = _l >> 32; \
+ _err; \
})
static inline int rdmsrl_safe(unsigned msr, unsigned long long *p)
{
int err;
- *p = paravirt_read_msr(msr, &err);
+ *p = paravirt_read_msr_safe(msr, &err);
return err;
}
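
The rdmsr_safe() macro above uses a GNU C statement expression to fill both 32-bit
halves while the expression as a whole evaluates to the error code. A self-contained
sketch of that pattern, with read_u64_checked() as a made-up stand-in for the
paravirt call:

#include <stdint.h>
#include <stdio.h>

static uint64_t read_u64_checked(unsigned key, int *err)
{
    *err = (key == 42) ? 0 : -5;        /* 0 on success, -EIO style otherwise */
    return (key == 42) ? 0x1122334455667788ull : 0;
}

#define read_pair_safe(key, lo, hi)                 \
({                                                  \
    int _err;                                       \
    uint64_t _l = read_u64_checked((key), &_err);   \
    *(lo) = (uint32_t)_l;                           \
    *(hi) = (uint32_t)(_l >> 32);                   \
    _err;   /* value of the whole statement expression */ \
})

int main(void)
{
    uint32_t lo, hi;
    int err = read_pair_safe(42, &lo, &hi);
    printf("err=%d lo=0x%08x hi=0x%08x\n", err, lo, hi);
    return 0;
}
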
diff --git a/arch/x86/include/asm/paravirt_types.h b/arch/x86/include/asm/paravirt_types.h
index e8c2326478c8..7fa9e7740ba3 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -69,15 +69,9 @@ struct pv_info {
u16 extra_user_64bit_cs; /* __USER_CS if none */
#endif
- int paravirt_enabled;
- unsigned int features; /* valid only if paravirt_enabled is set */
const char *name;
};
-#define paravirt_has(x) paravirt_has_feature(PV_SUPPORTED_##x)
-/* Supported features */
-#define PV_SUPPORTED_RTC (1<<0)
-
struct pv_init_ops {
/*
* Patch may replace one of the defined code sequences with
@@ -155,10 +149,16 @@ struct pv_cpu_ops {
void (*cpuid)(unsigned int *eax, unsigned int *ebx,
unsigned int *ecx, unsigned int *edx);
- /* MSR, PMC and TSR operations.
- err = 0/-EFAULT. wrmsr returns 0/-EFAULT. */
- u64 (*read_msr)(unsigned int msr, int *err);
- int (*write_msr)(unsigned int msr, unsigned low, unsigned high);
+ /* Unsafe MSR operations. These will warn or panic on failure. */
+ u64 (*read_msr)(unsigned int msr);
+ void (*write_msr)(unsigned int msr, unsigned low, unsigned high);
+
+ /*
+ * Safe MSR operations.
+ * read sets err to 0 or -EIO. write returns 0 or -EIO.
+ */
+ u64 (*read_msr_safe)(unsigned int msr, int *err);
+ int (*write_msr_safe)(unsigned int msr, unsigned low, unsigned high);
u64 (*read_pmc)(int counter);
diff --git a/arch/x86/include/asm/pat.h b/arch/x86/include/asm/pat.h
index ca6c228d5e62..0b1ff4c1c14e 100644
--- a/arch/x86/include/asm/pat.h
+++ b/arch/x86/include/asm/pat.h
@@ -5,8 +5,8 @@
#include <asm/pgtable_types.h>
bool pat_enabled(void);
+void pat_disable(const char *reason);
extern void pat_init(void);
-void pat_init_cache_modes(u64);
extern int reserve_memtype(u64 start, u64 end,
enum page_cache_mode req_pcm, enum page_cache_mode *ret_pcm);
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 97f3242e133c..1a27396b6ea0 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -181,9 +181,10 @@ static inline int pmd_trans_huge(pmd_t pmd)
return (pmd_val(pmd) & (_PAGE_PSE|_PAGE_DEVMAP)) == _PAGE_PSE;
}
+#define has_transparent_hugepage has_transparent_hugepage
static inline int has_transparent_hugepage(void)
{
- return cpu_has_pse;
+ return boot_cpu_has(X86_FEATURE_PSE);
}
#ifdef __HAVE_ARCH_PTE_DEVMAP
diff --git a/arch/x86/include/asm/pmc_core.h b/arch/x86/include/asm/pmc_core.h
new file mode 100644
index 000000000000..d4855f11136d
--- /dev/null
+++ b/arch/x86/include/asm/pmc_core.h
@@ -0,0 +1,27 @@
+/*
+ * Intel Core SoC Power Management Controller Header File
+ *
+ * Copyright (c) 2016, Intel Corporation.
+ * All Rights Reserved.
+ *
+ * Authors: Rajneesh Bhardwaj <rajneesh.bhardwaj@intel.com>
+ * Vishwanath Somayaji <vishwanath.somayaji@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ */
+
+#ifndef _ASM_PMC_CORE_H
+#define _ASM_PMC_CORE_H
+
+/* API to read SLP_S0_RESIDENCY counter */
+int intel_pmc_slp_s0_counter_read(u32 *data);
+
+#endif /* _ASM_PMC_CORE_H */
diff --git a/arch/x86/include/asm/processor.h b/arch/x86/include/asm/processor.h
index 9264476f3d57..62c6cc3cc5d3 100644
--- a/arch/x86/include/asm/processor.h
+++ b/arch/x86/include/asm/processor.h
@@ -388,9 +388,16 @@ struct thread_struct {
unsigned long ip;
#endif
#ifdef CONFIG_X86_64
- unsigned long fs;
+ unsigned long fsbase;
+ unsigned long gsbase;
+#else
+ /*
+ * XXX: this could presumably be unsigned short. Alternatively,
+ * 32-bit kernels could be taught to use fsindex instead.
+ */
+ unsigned long fs;
+ unsigned long gs;
#endif
- unsigned long gs;
/* Save middle states of ptrace breakpoints */
struct perf_event *ptrace_bps[HBP_NUM];
@@ -473,8 +480,6 @@ static inline unsigned long current_top_of_stack(void)
#include <asm/paravirt.h>
#else
#define __cpuid native_cpuid
-#define paravirt_enabled() 0
-#define paravirt_has(x) 0
static inline void load_sp0(struct tss_struct *tss,
struct thread_struct *thread)
diff --git a/arch/x86/include/asm/rwsem.h b/arch/x86/include/asm/rwsem.h
index ceec86eb68e9..453744c1d347 100644
--- a/arch/x86/include/asm/rwsem.h
+++ b/arch/x86/include/asm/rwsem.h
@@ -99,26 +99,36 @@ static inline int __down_read_trylock(struct rw_semaphore *sem)
/*
* lock for writing
*/
-static inline void __down_write_nested(struct rw_semaphore *sem, int subclass)
+#define ____down_write(sem, slow_path) \
+({ \
+ long tmp; \
+ struct rw_semaphore* ret; \
+ asm volatile("# beginning down_write\n\t" \
+ LOCK_PREFIX " xadd %1,(%3)\n\t" \
+ /* adds 0xffff0001, returns the old value */ \
+ " test " __ASM_SEL(%w1,%k1) "," __ASM_SEL(%w1,%k1) "\n\t" \
+ /* was the active mask 0 before? */\
+ " jz 1f\n" \
+ " call " slow_path "\n" \
+ "1:\n" \
+ "# ending down_write" \
+ : "+m" (sem->count), "=d" (tmp), "=a" (ret) \
+ : "a" (sem), "1" (RWSEM_ACTIVE_WRITE_BIAS) \
+ : "memory", "cc"); \
+ ret; \
+})
+
+static inline void __down_write(struct rw_semaphore *sem)
{
- long tmp;
- asm volatile("# beginning down_write\n\t"
- LOCK_PREFIX " xadd %1,(%2)\n\t"
- /* adds 0xffff0001, returns the old value */
- " test " __ASM_SEL(%w1,%k1) "," __ASM_SEL(%w1,%k1) "\n\t"
- /* was the active mask 0 before? */
- " jz 1f\n"
- " call call_rwsem_down_write_failed\n"
- "1:\n"
- "# ending down_write"
- : "+m" (sem->count), "=d" (tmp)
- : "a" (sem), "1" (RWSEM_ACTIVE_WRITE_BIAS)
- : "memory", "cc");
+ ____down_write(sem, "call_rwsem_down_write_failed");
}
-static inline void __down_write(struct rw_semaphore *sem)
+static inline int __down_write_killable(struct rw_semaphore *sem)
{
- __down_write_nested(sem, 0);
+ if (IS_ERR(____down_write(sem, "call_rwsem_down_write_failed_killable")))
+ return -EINTR;
+
+ return 0;
}
/*
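
__down_write_killable() above detects a killed waiter by checking whether the slow
path handed back an error value encoded in the returned pointer. A self-contained
model of that error-pointer convention; it mirrors the kernel's ERR_PTR()/IS_ERR()
helpers but is re-implemented here so the example compiles on its own:

#include <stdint.h>
#include <stdio.h>

#define MAX_ERRNO 4095

static inline void *err_ptr(long error)     /* like the kernel's ERR_PTR() */
{
    return (void *)error;
}

static inline int is_err(const void *ptr)   /* like the kernel's IS_ERR() */
{
    return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}

struct sem { int dummy; };

static struct sem *acquire(struct sem *s, int interrupted)
{
    return interrupted ? err_ptr(-4 /* -EINTR */) : s;  /* slow path may fail */
}

int main(void)
{
    struct sem s;
    printf("normal acquire: is_err=%d\n", is_err(acquire(&s, 0)));
    printf("killed acquire: is_err=%d\n", is_err(acquire(&s, 1)));
    return 0;
}
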
diff --git a/arch/x86/include/asm/segment.h b/arch/x86/include/asm/segment.h
index 7d5a1929d76b..1549caa098f0 100644
--- a/arch/x86/include/asm/segment.h
+++ b/arch/x86/include/asm/segment.h
@@ -2,6 +2,7 @@
#define _ASM_X86_SEGMENT_H
#include <linux/const.h>
+#include <asm/alternative.h>
/*
* Constructor for a conventional segment GDT (or LDT) entry.
@@ -207,13 +208,6 @@
#define __USER_CS (GDT_ENTRY_DEFAULT_USER_CS*8 + 3)
#define __PER_CPU_SEG (GDT_ENTRY_PER_CPU*8 + 3)
-/* TLS indexes for 64-bit - hardcoded in arch_prctl(): */
-#define FS_TLS 0
-#define GS_TLS 1
-
-#define GS_TLS_SEL ((GDT_ENTRY_TLS_MIN+GS_TLS)*8 + 3)
-#define FS_TLS_SEL ((GDT_ENTRY_TLS_MIN+FS_TLS)*8 + 3)
-
#endif
#ifndef CONFIG_PARAVIRT
@@ -249,10 +243,13 @@ extern const char early_idt_handler_array[NUM_EXCEPTION_VECTORS][EARLY_IDT_HANDL
#endif
/*
- * Load a segment. Fall back on loading the zero
- * segment if something goes wrong..
+ * Load a segment. Fall back on loading the zero segment if something goes
+ * wrong. This variant assumes that loading zero fully clears the segment.
+ * This is always the case on Intel CPUs and, even on 64-bit AMD CPUs, any
+ * failure to fully clear the cached descriptor is only observable for
+ * FS and GS.
*/
-#define loadsegment(seg, value) \
+#define __loadsegment_simple(seg, value) \
do { \
unsigned short __val = (value); \
\
@@ -269,6 +266,38 @@ do { \
: "+r" (__val) : : "memory"); \
} while (0)
+#define __loadsegment_ss(value) __loadsegment_simple(ss, (value))
+#define __loadsegment_ds(value) __loadsegment_simple(ds, (value))
+#define __loadsegment_es(value) __loadsegment_simple(es, (value))
+
+#ifdef CONFIG_X86_32
+
+/*
+ * On 32-bit systems, the hidden parts of FS and GS are unobservable if
+ * the selector is NULL, so there's no funny business here.
+ */
+#define __loadsegment_fs(value) __loadsegment_simple(fs, (value))
+#define __loadsegment_gs(value) __loadsegment_simple(gs, (value))
+
+#else
+
+static inline void __loadsegment_fs(unsigned short value)
+{
+ asm volatile(" \n"
+ "1: movw %0, %%fs \n"
+ "2: \n"
+
+ _ASM_EXTABLE_HANDLE(1b, 2b, ex_handler_clear_fs)
+
+ : : "rm" (value) : "memory");
+}
+
+/* __loadsegment_gs is intentionally undefined. Use load_gs_index instead. */
+
+#endif
+
+#define loadsegment(seg, value) __loadsegment_ ## seg (value)
+
/*
* Save a segment register away:
*/
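
The new loadsegment() above dispatches on the register name by token pasting, so FS
can get its own exception-handling variant while DS/ES/SS keep the simple path. A
tiny self-contained illustration of that dispatch; the helpers below only print and
are not the real segment loaders:

#include <stdio.h>

static void __load_ds(unsigned short v) { printf("load ds = 0x%x (simple path)\n", (unsigned)v); }
static void __load_fs(unsigned short v) { printf("load fs = 0x%x (fixup path)\n", (unsigned)v); }

#define load_seg(seg, value) __load_ ## seg(value)  /* same shape as loadsegment() */

int main(void)
{
    load_seg(ds, 0x2b);     /* expands to __load_ds(0x2b) */
    load_seg(fs, 0x00);     /* expands to __load_fs(0x00) */
    return 0;
}
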
diff --git a/arch/x86/include/asm/setup.h b/arch/x86/include/asm/setup.h
index 11af24e09c8a..ac1d5da14734 100644
--- a/arch/x86/include/asm/setup.h
+++ b/arch/x86/include/asm/setup.h
@@ -6,6 +6,7 @@
#define COMMAND_LINE_SIZE 2048
#include <linux/linkage.h>
+#include <asm/page_types.h>
#ifdef __i386__
diff --git a/arch/x86/include/asm/svm.h b/arch/x86/include/asm/svm.h
index 6136d99f537b..d0fe23ec7e98 100644
--- a/arch/x86/include/asm/svm.h
+++ b/arch/x86/include/asm/svm.h
@@ -78,7 +78,8 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
u32 exit_int_info;
u32 exit_int_info_err;
u64 nested_ctl;
- u8 reserved_4[16];
+ u64 avic_vapic_bar;
+ u8 reserved_4[8];
u32 event_inj;
u32 event_inj_err;
u64 nested_cr3;
@@ -88,7 +89,11 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
u64 next_rip;
u8 insn_len;
u8 insn_bytes[15];
- u8 reserved_6[800];
+ u64 avic_backing_page; /* Offset 0xe0 */
+ u8 reserved_6[8]; /* Offset 0xe8 */
+ u64 avic_logical_id; /* Offset 0xf0 */
+ u64 avic_physical_id; /* Offset 0xf8 */
+ u8 reserved_7[768];
};
@@ -111,6 +116,9 @@ struct __attribute__ ((__packed__)) vmcb_control_area {
#define V_INTR_MASKING_SHIFT 24
#define V_INTR_MASKING_MASK (1 << V_INTR_MASKING_SHIFT)
+#define AVIC_ENABLE_SHIFT 31
+#define AVIC_ENABLE_MASK (1 << AVIC_ENABLE_SHIFT)
+
#define SVM_INTERRUPT_SHADOW_MASK 1
#define SVM_IOIO_STR_SHIFT 2
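
The new AVIC fields above land at fixed byte offsets inside the packed
vmcb_control_area, which is why each one carries an offset comment. A self-contained
check of that kind of layout bookkeeping with offsetof() on a toy packed struct; the
0xe0/0xf0/0xf8 offsets come from the comments above, everything else is invented:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

struct __attribute__((__packed__)) toy_control_area {
    uint8_t  head[0xe0];            /* everything before the new fields */
    uint64_t avic_backing_page;     /* expected at offset 0xe0 */
    uint8_t  reserved[8];           /* offset 0xe8 */
    uint64_t avic_logical_id;       /* expected at offset 0xf0 */
    uint64_t avic_physical_id;      /* expected at offset 0xf8 */
};

int main(void)
{
    printf("avic_backing_page at 0x%zx\n", offsetof(struct toy_control_area, avic_backing_page));
    printf("avic_logical_id   at 0x%zx\n", offsetof(struct toy_control_area, avic_logical_id));
    printf("avic_physical_id  at 0x%zx\n", offsetof(struct toy_control_area, avic_physical_id));
    return 0;
}
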
diff --git a/arch/x86/include/asm/switch_to.h b/arch/x86/include/asm/switch_to.h
index 751bf4b7bf11..8f321a1b03a1 100644
--- a/arch/x86/include/asm/switch_to.h
+++ b/arch/x86/include/asm/switch_to.h
@@ -39,8 +39,7 @@ do { \
*/ \
unsigned long ebx, ecx, edx, esi, edi; \
\
- asm volatile("pushfl\n\t" /* save flags */ \
- "pushl %%ebp\n\t" /* save EBP */ \
+ asm volatile("pushl %%ebp\n\t" /* save EBP */ \
"movl %%esp,%[prev_sp]\n\t" /* save ESP */ \
"movl %[next_sp],%%esp\n\t" /* restore ESP */ \
"movl $1f,%[prev_ip]\n\t" /* save EIP */ \
@@ -49,7 +48,6 @@ do { \
"jmp __switch_to\n" /* regparm call */ \
"1:\t" \
"popl %%ebp\n\t" /* restore EBP */ \
- "popfl\n" /* restore flags */ \
\
/* output parameters */ \
: [prev_sp] "=m" (prev->thread.sp), \
diff --git a/arch/x86/include/asm/text-patching.h b/arch/x86/include/asm/text-patching.h
new file mode 100644
index 000000000000..90395063383c
--- /dev/null
+++ b/arch/x86/include/asm/text-patching.h
@@ -0,0 +1,40 @@
+#ifndef _ASM_X86_TEXT_PATCHING_H
+#define _ASM_X86_TEXT_PATCHING_H
+
+#include <linux/types.h>
+#include <linux/stddef.h>
+#include <asm/ptrace.h>
+
+struct paravirt_patch_site;
+#ifdef CONFIG_PARAVIRT
+void apply_paravirt(struct paravirt_patch_site *start,
+ struct paravirt_patch_site *end);
+#else
+static inline void apply_paravirt(struct paravirt_patch_site *start,
+ struct paravirt_patch_site *end)
+{}
+#define __parainstructions NULL
+#define __parainstructions_end NULL
+#endif
+
+extern void *text_poke_early(void *addr, const void *opcode, size_t len);
+
+/*
+ * Clear and restore the kernel write-protection flag on the local CPU.
+ * Allows the kernel to edit read-only pages.
+ * Side-effect: any interrupt handler running between save and restore will have
+ * the ability to write to read-only pages.
+ *
+ * Warning:
+ * Code patching in the UP case is safe if NMIs and MCE handlers are stopped and
+ * no thread can be preempted in the instructions being modified (no iret to an
+ * invalid instruction possible) or if the instructions are changed from a
+ * consistent state to another consistent state atomically.
+ * On the local CPU you need to be protected against NMI or MCE handlers seeing an
+ * inconsistent instruction while you patch.
+ */
+extern void *text_poke(void *addr, const void *opcode, size_t len);
+extern int poke_int3_handler(struct pt_regs *regs);
+extern void *text_poke_bp(void *addr, const void *opcode, size_t len, void *handler);
+
+#endif /* _ASM_X86_TEXT_PATCHING_H */
diff --git a/arch/x86/include/asm/thread_info.h b/arch/x86/include/asm/thread_info.h
index ffae84df8a93..30c133ac05cd 100644
--- a/arch/x86/include/asm/thread_info.h
+++ b/arch/x86/include/asm/thread_info.h
@@ -255,7 +255,7 @@ static inline bool test_and_clear_restore_sigmask(void)
return true;
}
-static inline bool is_ia32_task(void)
+static inline bool in_ia32_syscall(void)
{
#ifdef CONFIG_X86_32
return true;
diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index 1fde8d580a5b..4e5be94e079a 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -181,7 +181,7 @@ static inline void __native_flush_tlb_single(unsigned long addr)
static inline void __flush_tlb_all(void)
{
- if (cpu_has_pge)
+ if (static_cpu_has(X86_FEATURE_PGE))
__flush_tlb_global();
else
__flush_tlb();
diff --git a/arch/x86/include/asm/tsc.h b/arch/x86/include/asm/tsc.h
index 174c4212780a..7428697c5b8d 100644
--- a/arch/x86/include/asm/tsc.h
+++ b/arch/x86/include/asm/tsc.h
@@ -22,7 +22,7 @@ extern void disable_TSC(void);
static inline cycles_t get_cycles(void)
{
#ifndef CONFIG_X86_TSC
- if (!cpu_has_tsc)
+ if (!boot_cpu_has(X86_FEATURE_TSC))
return 0;
#endif
diff --git a/arch/x86/include/asm/uaccess.h b/arch/x86/include/asm/uaccess.h
index a969ae607be8..2982387ba817 100644
--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -5,6 +5,7 @@
*/
#include <linux/errno.h>
#include <linux/compiler.h>
+#include <linux/kasan-checks.h>
#include <linux/thread_info.h>
#include <linux/string.h>
#include <asm/asm.h>
@@ -108,9 +109,17 @@ struct exception_table_entry {
#define ARCH_HAS_RELATIVE_EXTABLE
+#define swap_ex_entry_fixup(a, b, tmp, delta) \
+ do { \
+ (a)->fixup = (b)->fixup + (delta); \
+ (b)->fixup = (tmp).fixup - (delta); \
+ (a)->handler = (b)->handler + (delta); \
+ (b)->handler = (tmp).handler - (delta); \
+ } while (0)
+
extern int fixup_exception(struct pt_regs *regs, int trapnr);
extern bool ex_has_fault_handler(unsigned long ip);
-extern int early_fixup_exception(unsigned long *ip);
+extern void early_fixup_exception(struct pt_regs *regs, int trapnr);
/*
* These are the main single-value transfer routines. They automatically
@@ -713,6 +722,8 @@ copy_from_user(void *to, const void __user *from, unsigned long n)
might_fault();
+ kasan_check_write(to, n);
+
/*
* While we would like to have the compiler do the checking for us
* even in the non-constant size case, any false positives there are
@@ -746,6 +757,8 @@ copy_to_user(void __user *to, const void *from, unsigned long n)
{
int sz = __compiletime_object_size(from);
+ kasan_check_read(from, n);
+
might_fault();
/* See the comment in copy_from_user() above. */
diff --git a/arch/x86/include/asm/uaccess_32.h b/arch/x86/include/asm/uaccess_32.h
index 3fe0eac59462..4b32da24faaf 100644
--- a/arch/x86/include/asm/uaccess_32.h
+++ b/arch/x86/include/asm/uaccess_32.h
@@ -33,46 +33,10 @@ unsigned long __must_check __copy_from_user_ll_nocache_nozero
* the specified block with access_ok() before calling this function.
* The caller should also make sure he pins the user space address
* so that we don't result in page fault and sleep.
- *
- * Here we special-case 1, 2 and 4-byte copy_*_user invocations. On a fault
- * we return the initial request size (1, 2 or 4), as copy_*_user should do.
- * If a store crosses a page boundary and gets a fault, the x86 will not write
- * anything, so this is accurate.
*/
-
static __always_inline unsigned long __must_check
__copy_to_user_inatomic(void __user *to, const void *from, unsigned long n)
{
- if (__builtin_constant_p(n)) {
- unsigned long ret;
-
- switch (n) {
- case 1:
- __uaccess_begin();
- __put_user_size(*(u8 *)from, (u8 __user *)to,
- 1, ret, 1);
- __uaccess_end();
- return ret;
- case 2:
- __uaccess_begin();
- __put_user_size(*(u16 *)from, (u16 __user *)to,
- 2, ret, 2);
- __uaccess_end();
- return ret;
- case 4:
- __uaccess_begin();
- __put_user_size(*(u32 *)from, (u32 __user *)to,
- 4, ret, 4);
- __uaccess_end();
- return ret;
- case 8:
- __uaccess_begin();
- __put_user_size(*(u64 *)from, (u64 __user *)to,
- 8, ret, 8);
- __uaccess_end();
- return ret;
- }
- }
return __copy_to_user_ll(to, from, n);
}
@@ -101,32 +65,6 @@ __copy_to_user(void __user *to, const void *from, unsigned long n)
static __always_inline unsigned long
__copy_from_user_inatomic(void *to, const void __user *from, unsigned long n)
{
- /* Avoid zeroing the tail if the copy fails..
- * If 'n' is constant and 1, 2, or 4, we do still zero on a failure,
- * but as the zeroing behaviour is only significant when n is not
- * constant, that shouldn't be a problem.
- */
- if (__builtin_constant_p(n)) {
- unsigned long ret;
-
- switch (n) {
- case 1:
- __uaccess_begin();
- __get_user_size(*(u8 *)to, from, 1, ret, 1);
- __uaccess_end();
- return ret;
- case 2:
- __uaccess_begin();
- __get_user_size(*(u16 *)to, from, 2, ret, 2);
- __uaccess_end();
- return ret;
- case 4:
- __uaccess_begin();
- __get_user_size(*(u32 *)to, from, 4, ret, 4);
- __uaccess_end();
- return ret;
- }
- }
return __copy_from_user_ll_nozero(to, from, n);
}
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index 307698688fa1..2eac2aa3e37f 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -7,6 +7,7 @@
#include <linux/compiler.h>
#include <linux/errno.h>
#include <linux/lockdep.h>
+#include <linux/kasan-checks.h>
#include <asm/alternative.h>
#include <asm/cpufeatures.h>
#include <asm/page.h>
@@ -109,6 +110,7 @@ static __always_inline __must_check
int __copy_from_user(void *dst, const void __user *src, unsigned size)
{
might_fault();
+ kasan_check_write(dst, size);
return __copy_from_user_nocheck(dst, src, size);
}
@@ -175,6 +177,7 @@ static __always_inline __must_check
int __copy_to_user(void __user *dst, const void *src, unsigned size)
{
might_fault();
+ kasan_check_read(src, size);
return __copy_to_user_nocheck(dst, src, size);
}
@@ -242,12 +245,14 @@ int __copy_in_user(void __user *dst, const void __user *src, unsigned size)
static __must_check __always_inline int
__copy_from_user_inatomic(void *dst, const void __user *src, unsigned size)
{
+ kasan_check_write(dst, size);
return __copy_from_user_nocheck(dst, src, size);
}
static __must_check __always_inline int
__copy_to_user_inatomic(void __user *dst, const void *src, unsigned size)
{
+ kasan_check_read(src, size);
return __copy_to_user_nocheck(dst, src, size);
}
@@ -258,6 +263,7 @@ static inline int
__copy_from_user_nocache(void *dst, const void __user *src, unsigned size)
{
might_fault();
+ kasan_check_write(dst, size);
return __copy_user_nocache(dst, src, size, 1);
}
@@ -265,6 +271,7 @@ static inline int
__copy_from_user_inatomic_nocache(void *dst, const void __user *src,
unsigned size)
{
+ kasan_check_write(dst, size);
return __copy_user_nocache(dst, src, size, 0);
}
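
The kasan_check_write()/kasan_check_read() calls added above hand the kernel-side
buffer to the sanitizer before the copy runs, because the copy itself is done by
uninstrumented code. A toy standalone model of that check-before-copy shape;
check_write() is a stand-in assertion, not the KASAN API:

#include <assert.h>
#include <string.h>
#include <stdio.h>

struct buf { char data[16]; size_t len; };

static void check_write(const struct buf *b, size_t n)
{
    assert(n <= sizeof(b->data));   /* toy bounds check in place of KASAN */
}

static void copy_in(struct buf *dst, const char *src, size_t n)
{
    check_write(dst, n);            /* validate before the opaque copy */
    memcpy(dst->data, src, n);      /* stands in for the uninstrumented copy loop */
    dst->len = n;
}

int main(void)
{
    struct buf b;
    copy_in(&b, "hello", 5);
    printf("copied %zu bytes: %.5s\n", b.len, b.data);
    return 0;
}
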
diff --git a/arch/x86/include/asm/uv/bios.h b/arch/x86/include/asm/uv/bios.h
index 71605c7d5c5c..c852590254d5 100644
--- a/arch/x86/include/asm/uv/bios.h
+++ b/arch/x86/include/asm/uv/bios.h
@@ -51,15 +51,66 @@ enum {
BIOS_STATUS_UNAVAIL = -EBUSY
};
+/* Address map parameters */
+struct uv_gam_parameters {
+ u64 mmr_base;
+ u64 gru_base;
+ u8 mmr_shift; /* Convert PNode to MMR space offset */
+ u8 gru_shift; /* Convert PNode to GRU space offset */
+ u8 gpa_shift; /* Size of offset field in GRU phys addr */
+ u8 unused1;
+};
+
+/* UV_TABLE_GAM_RANGE_ENTRY values */
+#define UV_GAM_RANGE_TYPE_UNUSED 0 /* End of table */
+#define UV_GAM_RANGE_TYPE_RAM 1 /* Normal RAM */
+#define UV_GAM_RANGE_TYPE_NVRAM 2 /* Non-volatile memory */
+#define UV_GAM_RANGE_TYPE_NV_WINDOW 3 /* NVMDIMM block window */
+#define UV_GAM_RANGE_TYPE_NV_MAILBOX 4 /* NVMDIMM mailbox */
+#define UV_GAM_RANGE_TYPE_HOLE 5 /* Unused address range */
+#define UV_GAM_RANGE_TYPE_MAX 6
+
+/* The structure stores PA bits 56:26, for 64MB granularity */
+#define UV_GAM_RANGE_SHFT 26 /* 64MB */
+
+struct uv_gam_range_entry {
+ char type; /* Entry type: GAM_RANGE_TYPE_UNUSED, etc. */
+ char unused1;
+ u16 nasid; /* HNasid */
+ u16 sockid; /* Socket ID, high bits of APIC ID */
+ u16 pnode; /* Index to MMR and GRU spaces */
+ u32 pxm; /* ACPI proximity domain number */
+ u32 limit; /* PA bits 56:26 (UV_GAM_RANGE_SHFT) */
+};
+
+#define UV_SYSTAB_SIG "UVST"
+#define UV_SYSTAB_VERSION_1 1 /* UV1/2/3 BIOS version */
+#define UV_SYSTAB_VERSION_UV4 0x400 /* UV4 BIOS base version */
+#define UV_SYSTAB_VERSION_UV4_1 0x401 /* + gpa_shift */
+#define UV_SYSTAB_VERSION_UV4_2 0x402 /* + TYPE_NVRAM/WINDOW/MBOX */
+#define UV_SYSTAB_VERSION_UV4_LATEST UV_SYSTAB_VERSION_UV4_2
+
+#define UV_SYSTAB_TYPE_UNUSED 0 /* End of table (offset == 0) */
+#define UV_SYSTAB_TYPE_GAM_PARAMS 1 /* GAM PARAM conversions */
+#define UV_SYSTAB_TYPE_GAM_RNG_TBL 2 /* GAM entry table */
+#define UV_SYSTAB_TYPE_MAX 3
+
/*
* The UV system table describes specific firmware
* capabilities available to the Linux kernel at runtime.
*/
struct uv_systab {
- char signature[4]; /* must be "UVST" */
+ char signature[4]; /* must be UV_SYSTAB_SIG */
u32 revision; /* distinguish different firmware revs */
u64 function; /* BIOS runtime callback function ptr */
+ u32 size; /* systab size (starting with _VERSION_UV4) */
+ struct {
+ u32 type:8; /* type of entry */
+ u32 offset:24; /* byte offset from struct start to entry */
+ } entry[1]; /* additional entries follow */
};
+extern struct uv_systab *uv_systab;
+/* (... end of definitions from UV BIOS ...) */
enum {
BIOS_FREQ_BASE_PLATFORM = 0,
@@ -99,7 +150,11 @@ extern s64 uv_bios_change_memprotect(u64, u64, enum uv_memprotect);
extern s64 uv_bios_reserved_page_pa(u64, u64 *, u64 *, u64 *);
extern int uv_bios_set_legacy_vga_target(bool decode, int domain, int bus);
+#ifdef CONFIG_EFI
extern void uv_bios_init(void);
+#else
+void uv_bios_init(void) { }
+#endif
extern unsigned long sn_rtc_cycles_per_second;
extern int uv_type;
@@ -107,7 +162,7 @@ extern long sn_partition_id;
extern long sn_coherency_id;
extern long sn_region_size;
extern long system_serial_number;
-#define partition_coherence_id() (sn_coherency_id)
+#define uv_partition_coherence_id() (sn_coherency_id)
extern struct kobject *sgi_uv_kobj; /* /sys/firmware/sgi_uv */
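
The extended uv_systab above is a fixed header followed by packed {type, byte-offset}
entries, with a zero entry ending the list. A simplified standalone walker over that
kind of table; sizes, offsets and type numbers below are invented for the demo:

#include <stdint.h>
#include <stdio.h>

struct systab_entry { uint32_t type : 8, offset : 24; };

struct systab {
    char signature[4];
    uint32_t revision;
    uint32_t size;
    struct systab_entry entry[3];   /* fixed length only for the demo */
};

int main(void)
{
    struct systab st = {
        .signature = { 'U', 'V', 'S', 'T' },
        .revision = 0x401,
        .size = sizeof(st),
        .entry = { { .type = 1, .offset = 40 },     /* e.g. GAM parameters */
                   { .type = 2, .offset = 56 },     /* e.g. GAM range table */
                   { .type = 0, .offset = 0 } },    /* end of table */
    };

    for (int i = 0; st.entry[i].type || st.entry[i].offset; i++)
        printf("entry %d: type=%u payload at byte %u\n",
               i, (unsigned)st.entry[i].type, (unsigned)st.entry[i].offset);
    return 0;
}
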
diff --git a/arch/x86/include/asm/uv/uv_bau.h b/arch/x86/include/asm/uv/uv_bau.h
index fc808b83fccb..cc44d926c17e 100644
--- a/arch/x86/include/asm/uv/uv_bau.h
+++ b/arch/x86/include/asm/uv/uv_bau.h
@@ -598,7 +598,7 @@ struct bau_control {
int timeout_tries;
int ipi_attempts;
int conseccompletes;
- short nobau;
+ bool nobau;
short baudisabled;
short cpu;
short osnode;
diff --git a/arch/x86/include/asm/uv/uv_hub.h b/arch/x86/include/asm/uv/uv_hub.h
index ea7074784cc4..097b80c989c4 100644
--- a/arch/x86/include/asm/uv/uv_hub.h
+++ b/arch/x86/include/asm/uv/uv_hub.h
@@ -16,9 +16,11 @@
#include <linux/percpu.h>
#include <linux/timer.h>
#include <linux/io.h>
+#include <linux/topology.h>
#include <asm/types.h>
#include <asm/percpu.h>
#include <asm/uv/uv_mmrs.h>
+#include <asm/uv/bios.h>
#include <asm/irq_vectors.h>
#include <asm/io_apic.h>
@@ -103,7 +105,6 @@
* processor APICID register.
*/
-
/*
* Maximum number of bricks in all partitions and in all coherency domains.
* This is the total number of bricks accessible in the numalink fabric. It
@@ -127,6 +128,7 @@
*/
#define UV_MAX_NASID_VALUE (UV_MAX_NUMALINK_BLADES * 2)
+/* System Controller Interface Reg info */
struct uv_scir_s {
struct timer_list timer;
unsigned long offset;
@@ -137,71 +139,173 @@ struct uv_scir_s {
unsigned char enabled;
};
+/* GAM (globally addressed memory) range table */
+struct uv_gam_range_s {
+ u32 limit; /* PA bits 56:26 (GAM_RANGE_SHFT) */
+ u16 nasid; /* node's global physical address */
+ s8 base; /* entry index of node's base addr */
+ u8 reserved;
+};
+
/*
* The following defines attributes of the HUB chip. These attributes are
- * frequently referenced and are kept in the per-cpu data areas of each cpu.
- * They are kept together in a struct to minimize cache misses.
+ * frequently referenced and are kept in a common per hub struct.
+ * After setup, the struct is read only, so it should be readily
+ * available in the L3 cache on the cpu socket for the node.
*/
struct uv_hub_info_s {
unsigned long global_mmr_base;
+ unsigned long global_mmr_shift;
unsigned long gpa_mask;
- unsigned int gnode_extra;
+ unsigned short *socket_to_node;
+ unsigned short *socket_to_pnode;
+ unsigned short *pnode_to_socket;
+ struct uv_gam_range_s *gr_table;
+ unsigned short min_socket;
+ unsigned short min_pnode;
+ unsigned char m_val;
+ unsigned char n_val;
+ unsigned char gr_table_len;
unsigned char hub_revision;
unsigned char apic_pnode_shift;
+ unsigned char gpa_shift;
unsigned char m_shift;
unsigned char n_lshift;
+ unsigned int gnode_extra;
unsigned long gnode_upper;
unsigned long lowmem_remap_top;
unsigned long lowmem_remap_base;
+ unsigned long global_gru_base;
+ unsigned long global_gru_shift;
unsigned short pnode;
unsigned short pnode_mask;
unsigned short coherency_domain_number;
unsigned short numa_blade_id;
- unsigned char blade_processor_id;
- unsigned char m_val;
- unsigned char n_val;
+ unsigned short nr_possible_cpus;
+ unsigned short nr_online_cpus;
+ short memory_nid;
+};
+
+/* CPU specific info with a pointer to the hub common info struct */
+struct uv_cpu_info_s {
+ void *p_uv_hub_info;
+ unsigned char blade_cpu_id;
struct uv_scir_s scir;
};
+DECLARE_PER_CPU(struct uv_cpu_info_s, __uv_cpu_info);
+
+#define uv_cpu_info this_cpu_ptr(&__uv_cpu_info)
+#define uv_cpu_info_per(cpu) (&per_cpu(__uv_cpu_info, cpu))
+
+#define uv_scir_info (&uv_cpu_info->scir)
+#define uv_cpu_scir_info(cpu) (&uv_cpu_info_per(cpu)->scir)
+
+/* Node specific hub common info struct */
+extern void **__uv_hub_info_list;
+static inline struct uv_hub_info_s *uv_hub_info_list(int node)
+{
+ return (struct uv_hub_info_s *)__uv_hub_info_list[node];
+}
+
+static inline struct uv_hub_info_s *_uv_hub_info(void)
+{
+ return (struct uv_hub_info_s *)uv_cpu_info->p_uv_hub_info;
+}
+#define uv_hub_info _uv_hub_info()
-DECLARE_PER_CPU(struct uv_hub_info_s, __uv_hub_info);
-#define uv_hub_info this_cpu_ptr(&__uv_hub_info)
-#define uv_cpu_hub_info(cpu) (&per_cpu(__uv_hub_info, cpu))
+static inline struct uv_hub_info_s *uv_cpu_hub_info(int cpu)
+{
+ return (struct uv_hub_info_s *)uv_cpu_info_per(cpu)->p_uv_hub_info;
+}
+
+#define UV_HUB_INFO_VERSION 0x7150
+extern int uv_hub_info_version(void);
+static inline int uv_hub_info_check(int version)
+{
+ if (uv_hub_info_version() == version)
+ return 0;
+
+ pr_crit("UV: uv_hub_info version(%x) mismatch, expecting(%x)\n",
+ uv_hub_info_version(), version);
+
+ BUG(); /* Catastrophic - cannot continue on unknown UV system */
+}
+#define _uv_hub_info_check() uv_hub_info_check(UV_HUB_INFO_VERSION)
/*
- * Hub revisions less than UV2_HUB_REVISION_BASE are UV1 hubs. All UV2
- * hubs have revision numbers greater than or equal to UV2_HUB_REVISION_BASE.
+ * HUB revision ranges for each UV HUB architecture.
* This is a software convention - NOT the hardware revision numbers in
* the hub chip.
*/
#define UV1_HUB_REVISION_BASE 1
#define UV2_HUB_REVISION_BASE 3
#define UV3_HUB_REVISION_BASE 5
+#define UV4_HUB_REVISION_BASE 7
+#ifdef UV1_HUB_IS_SUPPORTED
static inline int is_uv1_hub(void)
{
return uv_hub_info->hub_revision < UV2_HUB_REVISION_BASE;
}
+#else
+static inline int is_uv1_hub(void)
+{
+ return 0;
+}
+#endif
+#ifdef UV2_HUB_IS_SUPPORTED
static inline int is_uv2_hub(void)
{
return ((uv_hub_info->hub_revision >= UV2_HUB_REVISION_BASE) &&
(uv_hub_info->hub_revision < UV3_HUB_REVISION_BASE));
}
+#else
+static inline int is_uv2_hub(void)
+{
+ return 0;
+}
+#endif
+#ifdef UV3_HUB_IS_SUPPORTED
+static inline int is_uv3_hub(void)
+{
+ return ((uv_hub_info->hub_revision >= UV3_HUB_REVISION_BASE) &&
+ (uv_hub_info->hub_revision < UV4_HUB_REVISION_BASE));
+}
+#else
static inline int is_uv3_hub(void)
{
- return uv_hub_info->hub_revision >= UV3_HUB_REVISION_BASE;
+ return 0;
}
+#endif
-static inline int is_uv_hub(void)
+#ifdef UV4_HUB_IS_SUPPORTED
+static inline int is_uv4_hub(void)
{
- return uv_hub_info->hub_revision;
+ return uv_hub_info->hub_revision >= UV4_HUB_REVISION_BASE;
}
+#else
+static inline int is_uv4_hub(void)
+{
+ return 0;
+}
+#endif
-/* code common to uv2 and uv3 only */
static inline int is_uvx_hub(void)
{
- return uv_hub_info->hub_revision >= UV2_HUB_REVISION_BASE;
+ if (uv_hub_info->hub_revision >= UV2_HUB_REVISION_BASE)
+ return uv_hub_info->hub_revision;
+
+ return 0;
+}
+
+static inline int is_uv_hub(void)
+{
+#ifdef UV1_HUB_IS_SUPPORTED
+ return uv_hub_info->hub_revision;
+#endif
+ return is_uvx_hub();
}
union uvh_apicid {
@@ -243,24 +347,42 @@ union uvh_apicid {
#define UV3_LOCAL_MMR_SIZE (32UL * 1024 * 1024)
#define UV3_GLOBAL_MMR32_SIZE (32UL * 1024 * 1024)
-#define UV_LOCAL_MMR_BASE (is_uv1_hub() ? UV1_LOCAL_MMR_BASE : \
- (is_uv2_hub() ? UV2_LOCAL_MMR_BASE : \
- UV3_LOCAL_MMR_BASE))
-#define UV_GLOBAL_MMR32_BASE (is_uv1_hub() ? UV1_GLOBAL_MMR32_BASE :\
- (is_uv2_hub() ? UV2_GLOBAL_MMR32_BASE :\
- UV3_GLOBAL_MMR32_BASE))
-#define UV_LOCAL_MMR_SIZE (is_uv1_hub() ? UV1_LOCAL_MMR_SIZE : \
- (is_uv2_hub() ? UV2_LOCAL_MMR_SIZE : \
- UV3_LOCAL_MMR_SIZE))
-#define UV_GLOBAL_MMR32_SIZE (is_uv1_hub() ? UV1_GLOBAL_MMR32_SIZE :\
- (is_uv2_hub() ? UV2_GLOBAL_MMR32_SIZE :\
- UV3_GLOBAL_MMR32_SIZE))
+#define UV4_LOCAL_MMR_BASE 0xfa000000UL
+#define UV4_GLOBAL_MMR32_BASE 0xfc000000UL
+#define UV4_LOCAL_MMR_SIZE (32UL * 1024 * 1024)
+#define UV4_GLOBAL_MMR32_SIZE (16UL * 1024 * 1024)
+
+#define UV_LOCAL_MMR_BASE ( \
+ is_uv1_hub() ? UV1_LOCAL_MMR_BASE : \
+ is_uv2_hub() ? UV2_LOCAL_MMR_BASE : \
+ is_uv3_hub() ? UV3_LOCAL_MMR_BASE : \
+ /*is_uv4_hub*/ UV4_LOCAL_MMR_BASE)
+
+#define UV_GLOBAL_MMR32_BASE ( \
+ is_uv1_hub() ? UV1_GLOBAL_MMR32_BASE : \
+ is_uv2_hub() ? UV2_GLOBAL_MMR32_BASE : \
+ is_uv3_hub() ? UV3_GLOBAL_MMR32_BASE : \
+ /*is_uv4_hub*/ UV4_GLOBAL_MMR32_BASE)
+
+#define UV_LOCAL_MMR_SIZE ( \
+ is_uv1_hub() ? UV1_LOCAL_MMR_SIZE : \
+ is_uv2_hub() ? UV2_LOCAL_MMR_SIZE : \
+ is_uv3_hub() ? UV3_LOCAL_MMR_SIZE : \
+ /*is_uv4_hub*/ UV4_LOCAL_MMR_SIZE)
+
+#define UV_GLOBAL_MMR32_SIZE ( \
+ is_uv1_hub() ? UV1_GLOBAL_MMR32_SIZE : \
+ is_uv2_hub() ? UV2_GLOBAL_MMR32_SIZE : \
+ is_uv3_hub() ? UV3_GLOBAL_MMR32_SIZE : \
+ /*is_uv4_hub*/ UV4_GLOBAL_MMR32_SIZE)
+
#define UV_GLOBAL_MMR64_BASE (uv_hub_info->global_mmr_base)
#define UV_GLOBAL_GRU_MMR_BASE 0x4000000
#define UV_GLOBAL_MMR32_PNODE_SHIFT 15
-#define UV_GLOBAL_MMR64_PNODE_SHIFT 26
+#define _UV_GLOBAL_MMR64_PNODE_SHIFT 26
+#define UV_GLOBAL_MMR64_PNODE_SHIFT (uv_hub_info->global_mmr_shift)
#define UV_GLOBAL_MMR32_PNODE_BITS(p) ((p) << (UV_GLOBAL_MMR32_PNODE_SHIFT))
@@ -307,18 +429,74 @@ union uvh_apicid {
* between socket virtual and socket physical addresses.
*/
+/* global bits offset - number of local address bits in gpa for this UV arch */
+static inline unsigned int uv_gpa_shift(void)
+{
+ return uv_hub_info->gpa_shift;
+}
+#define _uv_gpa_shift
+
+/* Find node that has the address range that contains global address */
+static inline struct uv_gam_range_s *uv_gam_range(unsigned long pa)
+{
+ struct uv_gam_range_s *gr = uv_hub_info->gr_table;
+ unsigned long pal = (pa & uv_hub_info->gpa_mask) >> UV_GAM_RANGE_SHFT;
+ int i, num = uv_hub_info->gr_table_len;
+
+ if (gr) {
+ for (i = 0; i < num; i++, gr++) {
+ if (pal < gr->limit)
+ return gr;
+ }
+ }
+ pr_crit("UV: GAM Range for 0x%lx not found at %p!\n", pa, gr);
+ BUG();
+}
+
+/* Return base address of node that contains global address */
+static inline unsigned long uv_gam_range_base(unsigned long pa)
+{
+ struct uv_gam_range_s *gr = uv_gam_range(pa);
+ int base = gr->base;
+
+ if (base < 0)
+ return 0UL;
+
+ return uv_hub_info->gr_table[base].limit;
+}
+
+/* socket phys RAM --> UV global NASID (UV4+) */
+static inline unsigned long uv_soc_phys_ram_to_nasid(unsigned long paddr)
+{
+ return uv_gam_range(paddr)->nasid;
+}
+#define _uv_soc_phys_ram_to_nasid
+
+/* socket virtual --> UV global NASID (UV4+) */
+static inline unsigned long uv_gpa_nasid(void *v)
+{
+ return uv_soc_phys_ram_to_nasid(__pa(v));
+}
+
/* socket phys RAM --> UV global physical address */
static inline unsigned long uv_soc_phys_ram_to_gpa(unsigned long paddr)
{
+ unsigned int m_val = uv_hub_info->m_val;
+
if (paddr < uv_hub_info->lowmem_remap_top)
paddr |= uv_hub_info->lowmem_remap_base;
paddr |= uv_hub_info->gnode_upper;
- paddr = ((paddr << uv_hub_info->m_shift) >> uv_hub_info->m_shift) |
- ((paddr >> uv_hub_info->m_val) << uv_hub_info->n_lshift);
+ if (m_val)
+ paddr = ((paddr << uv_hub_info->m_shift)
+ >> uv_hub_info->m_shift) |
+ ((paddr >> uv_hub_info->m_val)
+ << uv_hub_info->n_lshift);
+ else
+ paddr |= uv_soc_phys_ram_to_nasid(paddr)
+ << uv_hub_info->gpa_shift;
return paddr;
}
-
/* socket virtual --> UV global physical address */
static inline unsigned long uv_gpa(void *v)
{
@@ -338,54 +516,89 @@ static inline unsigned long uv_gpa_to_soc_phys_ram(unsigned long gpa)
unsigned long paddr;
unsigned long remap_base = uv_hub_info->lowmem_remap_base;
unsigned long remap_top = uv_hub_info->lowmem_remap_top;
+ unsigned int m_val = uv_hub_info->m_val;
+
+ if (m_val)
+ gpa = ((gpa << uv_hub_info->m_shift) >> uv_hub_info->m_shift) |
+ ((gpa >> uv_hub_info->n_lshift) << uv_hub_info->m_val);
- gpa = ((gpa << uv_hub_info->m_shift) >> uv_hub_info->m_shift) |
- ((gpa >> uv_hub_info->n_lshift) << uv_hub_info->m_val);
paddr = gpa & uv_hub_info->gpa_mask;
if (paddr >= remap_base && paddr < remap_base + remap_top)
paddr -= remap_base;
return paddr;
}
-
-/* gpa -> pnode */
+/* gpa -> gnode */
static inline unsigned long uv_gpa_to_gnode(unsigned long gpa)
{
- return gpa >> uv_hub_info->n_lshift;
+ unsigned int n_lshift = uv_hub_info->n_lshift;
+
+ if (n_lshift)
+ return gpa >> n_lshift;
+
+ return uv_gam_range(gpa)->nasid >> 1;
}
/* gpa -> pnode */
static inline int uv_gpa_to_pnode(unsigned long gpa)
{
- unsigned long n_mask = (1UL << uv_hub_info->n_val) - 1;
-
- return uv_gpa_to_gnode(gpa) & n_mask;
+ return uv_gpa_to_gnode(gpa) & uv_hub_info->pnode_mask;
}
-/* gpa -> node offset*/
+/* gpa -> node offset */
static inline unsigned long uv_gpa_to_offset(unsigned long gpa)
{
- return (gpa << uv_hub_info->m_shift) >> uv_hub_info->m_shift;
+ unsigned int m_shift = uv_hub_info->m_shift;
+
+ if (m_shift)
+ return (gpa << m_shift) >> m_shift;
+
+ return (gpa & uv_hub_info->gpa_mask) - uv_gam_range_base(gpa);
+}
+
+/* Convert socket to node */
+static inline int _uv_socket_to_node(int socket, unsigned short *s2nid)
+{
+ return s2nid ? s2nid[socket - uv_hub_info->min_socket] : socket;
+}
+
+static inline int uv_socket_to_node(int socket)
+{
+ return _uv_socket_to_node(socket, uv_hub_info->socket_to_node);
}
/* pnode, offset --> socket virtual */
static inline void *uv_pnode_offset_to_vaddr(int pnode, unsigned long offset)
{
- return __va(((unsigned long)pnode << uv_hub_info->m_val) | offset);
-}
+ unsigned int m_val = uv_hub_info->m_val;
+ unsigned long base;
+ unsigned short sockid, node, *p2s;
+ if (m_val)
+ return __va(((unsigned long)pnode << m_val) | offset);
-/*
- * Extract a PNODE from an APICID (full apicid, not processor subset)
- */
+ p2s = uv_hub_info->pnode_to_socket;
+ sockid = p2s ? p2s[pnode - uv_hub_info->min_pnode] : pnode;
+ node = uv_socket_to_node(sockid);
+
+ /* limit address of previous socket is our base, except node 0 is 0 */
+ if (!node)
+ return __va((unsigned long)offset);
+
+ base = (unsigned long)(uv_hub_info->gr_table[node - 1].limit);
+ return __va(base << UV_GAM_RANGE_SHFT | offset);
+}
+
+/* Extract/Convert a PNODE from an APICID (full apicid, not processor subset) */
static inline int uv_apicid_to_pnode(int apicid)
{
- return (apicid >> uv_hub_info->apic_pnode_shift);
+ int pnode = apicid >> uv_hub_info->apic_pnode_shift;
+ unsigned short *s2pn = uv_hub_info->socket_to_pnode;
+
+ return s2pn ? s2pn[pnode - uv_hub_info->min_socket] : pnode;
}
-/*
- * Convert an apicid to the socket number on the blade
- */
+/* Convert an apicid to the socket number on the blade */
static inline int uv_apicid_to_socket(int apicid)
{
if (is_uv1_hub())
@@ -434,16 +647,6 @@ static inline unsigned long uv_read_global_mmr64(int pnode, unsigned long offset
return readq(uv_global_mmr64_address(pnode, offset));
}
-/*
- * Global MMR space addresses when referenced by the GRU. (GRU does
- * NOT use socket addressing).
- */
-static inline unsigned long uv_global_gru_mmr_address(int pnode, unsigned long offset)
-{
- return UV_GLOBAL_GRU_MMR_BASE | offset |
- ((unsigned long)pnode << uv_hub_info->m_val);
-}
-
static inline void uv_write_global_mmr8(int pnode, unsigned long offset, unsigned char val)
{
writeb(val, uv_global_mmr64_address(pnode, offset));
@@ -483,27 +686,23 @@ static inline void uv_write_local_mmr8(unsigned long offset, unsigned char val)
writeb(val, uv_local_mmr_address(offset));
}
-/*
- * Structures and definitions for converting between cpu, node, pnode, and blade
- * numbers.
- */
-struct uv_blade_info {
- unsigned short nr_possible_cpus;
- unsigned short nr_online_cpus;
- unsigned short pnode;
- short memory_nid;
- spinlock_t nmi_lock; /* obsolete, see uv_hub_nmi */
- unsigned long nmi_count; /* obsolete, see uv_hub_nmi */
-};
-extern struct uv_blade_info *uv_blade_info;
-extern short *uv_node_to_blade;
-extern short *uv_cpu_to_blade;
-extern short uv_possible_blades;
-
/* Blade-local cpu number of current cpu. Numbered 0 .. <# cpus on the blade> */
static inline int uv_blade_processor_id(void)
{
- return uv_hub_info->blade_processor_id;
+ return uv_cpu_info->blade_cpu_id;
+}
+
+/* Blade-local cpu number of cpu N. Numbered 0 .. <# cpus on the blade> */
+static inline int uv_cpu_blade_processor_id(int cpu)
+{
+ return uv_cpu_info_per(cpu)->blade_cpu_id;
+}
+#define _uv_cpu_blade_processor_id 1 /* indicate function available */
+
+/* Blade number to Node number (UV1..UV4 is 1:1) */
+static inline int uv_blade_to_node(int blade)
+{
+ return blade;
}
/* Blade number of current cpu. Numbered 0 .. <#blades -1> */
@@ -512,55 +711,60 @@ static inline int uv_numa_blade_id(void)
return uv_hub_info->numa_blade_id;
}
-/* Convert a cpu number to the the UV blade number */
-static inline int uv_cpu_to_blade_id(int cpu)
+/*
+ * Convert linux node number to the UV blade number.
+ * .. Currently for UV1 thru UV4 the node and the blade are identical.
+ * .. If this changes then you MUST check references to this function!
+ */
+static inline int uv_node_to_blade_id(int nid)
{
- return uv_cpu_to_blade[cpu];
+ return nid;
}
-/* Convert linux node number to the UV blade number */
-static inline int uv_node_to_blade_id(int nid)
+/* Convert a cpu number to the UV blade number */
+static inline int uv_cpu_to_blade_id(int cpu)
{
- return uv_node_to_blade[nid];
+ return uv_node_to_blade_id(cpu_to_node(cpu));
}
/* Convert a blade id to the PNODE of the blade */
static inline int uv_blade_to_pnode(int bid)
{
- return uv_blade_info[bid].pnode;
+ return uv_hub_info_list(uv_blade_to_node(bid))->pnode;
}
/* Nid of memory node on blade. -1 if no blade-local memory */
static inline int uv_blade_to_memory_nid(int bid)
{
- return uv_blade_info[bid].memory_nid;
+ return uv_hub_info_list(uv_blade_to_node(bid))->memory_nid;
}
/* Determine the number of possible cpus on a blade */
static inline int uv_blade_nr_possible_cpus(int bid)
{
- return uv_blade_info[bid].nr_possible_cpus;
+ return uv_hub_info_list(uv_blade_to_node(bid))->nr_possible_cpus;
}
/* Determine the number of online cpus on a blade */
static inline int uv_blade_nr_online_cpus(int bid)
{
- return uv_blade_info[bid].nr_online_cpus;
+ return uv_hub_info_list(uv_blade_to_node(bid))->nr_online_cpus;
}
/* Convert a cpu id to the PNODE of the blade containing the cpu */
static inline int uv_cpu_to_pnode(int cpu)
{
- return uv_blade_info[uv_cpu_to_blade_id(cpu)].pnode;
+ return uv_cpu_hub_info(cpu)->pnode;
}
/* Convert a linux node number to the PNODE of the blade */
static inline int uv_node_to_pnode(int nid)
{
- return uv_blade_info[uv_node_to_blade_id(nid)].pnode;
+ return uv_hub_info_list(nid)->pnode;
}
/* Maximum possible number of blades */
+extern short uv_possible_blades;
static inline int uv_num_possible_blades(void)
{
return uv_possible_blades;
@@ -578,9 +782,7 @@ extern void uv_nmi_setup(void);
/* Newer SMM NMI handler, not present in all systems */
#define UVH_NMI_MMRX UVH_EVENT_OCCURRED0
#define UVH_NMI_MMRX_CLEAR UVH_EVENT_OCCURRED0_ALIAS
-#define UVH_NMI_MMRX_SHIFT (is_uv1_hub() ? \
- UV1H_EVENT_OCCURRED0_EXTIO_INT0_SHFT :\
- UVXH_EVENT_OCCURRED0_EXTIO_INT0_SHFT)
+#define UVH_NMI_MMRX_SHIFT UVH_EVENT_OCCURRED0_EXTIO_INT0_SHFT
#define UVH_NMI_MMRX_TYPE "EXTIO_INT0"
/* Non-zero indicates newer SMM NMI handler present */
@@ -622,9 +824,9 @@ DECLARE_PER_CPU(struct uv_cpu_nmi_s, uv_cpu_nmi);
/* Update SCIR state */
static inline void uv_set_scir_bits(unsigned char value)
{
- if (uv_hub_info->scir.state != value) {
- uv_hub_info->scir.state = value;
- uv_write_local_mmr8(uv_hub_info->scir.offset, value);
+ if (uv_scir_info->state != value) {
+ uv_scir_info->state = value;
+ uv_write_local_mmr8(uv_scir_info->offset, value);
}
}
@@ -635,10 +837,10 @@ static inline unsigned long uv_scir_offset(int apicid)
static inline void uv_set_cpu_scir_bits(int cpu, unsigned char value)
{
- if (uv_cpu_hub_info(cpu)->scir.state != value) {
+ if (uv_cpu_scir_info(cpu)->state != value) {
uv_write_global_mmr8(uv_cpu_to_pnode(cpu),
- uv_cpu_hub_info(cpu)->scir.offset, value);
- uv_cpu_hub_info(cpu)->scir.state = value;
+ uv_cpu_scir_info(cpu)->offset, value);
+ uv_cpu_scir_info(cpu)->state = value;
}
}
@@ -666,10 +868,7 @@ static inline void uv_hub_send_ipi(int pnode, int apicid, int vector)
/*
* Get the minimum revision number of the hub chips within the partition.
- * 1 - UV1 rev 1.0 initial silicon
- * 2 - UV1 rev 2.0 production silicon
- * 3 - UV2 rev 1.0 initial silicon
- * 5 - UV3 rev 1.0 initial silicon
+ * (See UVx_HUB_REVISION_BASE above for specific values.)
*/
static inline int uv_get_min_hub_revision_id(void)
{
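
The new uv_gam_range() above resolves a global address by a linear scan over a table
of ascending limits stored in 64MB units: the first entry whose limit exceeds the
address' 64MB index owns the address. A standalone sketch of that lookup with an
invented table:

#include <stdio.h>

#define RANGE_SHIFT 26              /* 64MB granularity, as UV_GAM_RANGE_SHFT above */

struct gam_range { unsigned int limit; unsigned short nasid; };

static const struct gam_range table[] = {
    { .limit = 64,  .nasid = 0 },   /* 0   .. 4GB  -> nasid 0 */
    { .limit = 128, .nasid = 2 },   /* 4GB .. 8GB  -> nasid 2 */
    { .limit = 256, .nasid = 4 },   /* 8GB .. 16GB -> nasid 4 */
};

static int lookup_nasid(unsigned long long pa)
{
    unsigned long long idx = pa >> RANGE_SHIFT;

    for (unsigned int i = 0; i < sizeof(table) / sizeof(table[0]); i++)
        if (idx < table[i].limit)
            return table[i].nasid;
    return -1;                      /* the kernel BUG()s here instead */
}

int main(void)
{
    printf("pa 0x%llx -> nasid %d\n", 0x1000ULL, lookup_nasid(0x1000ULL));
    printf("pa 0x%llx -> nasid %d\n", 0x180000000ULL, lookup_nasid(0x180000000ULL));
    return 0;
}
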
diff --git a/arch/x86/include/asm/uv/uv_mmrs.h b/arch/x86/include/asm/uv/uv_mmrs.h
index ddd8db6b6e70..548d684a7960 100644
--- a/arch/x86/include/asm/uv/uv_mmrs.h
+++ b/arch/x86/include/asm/uv/uv_mmrs.h
@@ -5,7 +5,7 @@
*
* SGI UV MMR definitions
*
- * Copyright (C) 2007-2014 Silicon Graphics, Inc. All rights reserved.
+ * Copyright (C) 2007-2016 Silicon Graphics, Inc. All rights reserved.
*/
#ifndef _ASM_X86_UV_UV_MMRS_H
@@ -18,10 +18,11 @@
* grouped by architecture types.
*
* UVH - definitions common to all UV hub types.
- * UVXH - definitions common to all UV eXtended hub types (currently 2 & 3).
+ * UVXH - definitions common to all UV eXtended hub types (currently 2, 3, 4).
* UV1H - definitions specific to UV type 1 hub.
* UV2H - definitions specific to UV type 2 hub.
* UV3H - definitions specific to UV type 3 hub.
+ * UV4H - definitions specific to UV type 4 hub.
*
* So in general, MMR addresses and structures are identical on all hub types.
* These MMRs are identified as:
@@ -32,19 +33,25 @@
* } s;
* };
*
- * If the MMR exists on all hub types but have different addresses:
+ * If the MMR exists on all hub types but has different addresses,
+ * use a conditional operator to define the value at runtime.
* #define UV1Hxxx a
* #define UV2Hxxx b
* #define UV3Hxxx c
+ * #define UV4Hxxx d
* #define UVHxxx (is_uv1_hub() ? UV1Hxxx :
* (is_uv2_hub() ? UV2Hxxx :
- * UV3Hxxx))
+ * (is_uv3_hub() ? UV3Hxxx :
+ * UV4Hxxx))
*
- * If the MMR exists on all hub types > 1 but have different addresses:
+ * If the MMR exists on all hub types > 1 but has different addresses, the
+ * variation using "UVX" as the prefix exists.
* #define UV2Hxxx b
* #define UV3Hxxx c
- * #define UVXHxxx (is_uv2_hub() ? UV2Hxxx :
- * UV3Hxxx))
+ * #define UV4Hxxx d
+ * #define UVXHxxx (is_uv2_hub() ? UV2Hxxx :
+ * (is_uv3_hub() ? UV3Hxxx :
+ * UV4Hxxx))
*
* union uvh_xxx {
* unsigned long v;
@@ -56,6 +63,8 @@
* } s2;
* struct uv3h_xxx_s { # Full UV3 definition (*)
* } s3;
+ * struct uv4h_xxx_s { # Full UV4 definition (*)
+ * } s4;
* };
* (* - if present and different than the common struct)
*
@@ -73,7 +82,7 @@
* } sn;
* };
*
- * (GEN Flags: mflags_opt= undefs=0 UV23=UVXH)
+ * (GEN Flags: mflags_opt= undefs=function UV234=UVXH)
*/
#define UV_MMR_ENABLE (1UL << 63)
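
A self-contained sketch of the convention the comment block above describes: when a
register sits at different offsets on different hub generations, the generic macro
picks the right per-generation constant at runtime through is_uvN_hub()-style
predicates. The generation flag and offsets below are invented:

#include <stdio.h>

static int generation = 3;      /* the kernel derives this from hub_revision */

static int is_gen2(void) { return generation == 2; }
static int is_gen3(void) { return generation == 3; }

#define GEN2_REG_ADDR 0x438
#define GEN3_REG_ADDR 0x438
#define GEN4_REG_ADDR 0x358

#define REG_ADDR (                      \
    is_gen2() ? GEN2_REG_ADDR :         \
    is_gen3() ? GEN3_REG_ADDR :         \
 /* is_gen4 */  GEN4_REG_ADDR)

int main(void)
{
    printf("register offset for gen %d: %#x\n", generation, REG_ADDR);
    generation = 4;
    printf("register offset for gen %d: %#x\n", generation, REG_ADDR);
    return 0;
}
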
@@ -83,20 +92,36 @@
#define UV2_HUB_PART_NUMBER_X 0x1111
#define UV3_HUB_PART_NUMBER 0x9578
#define UV3_HUB_PART_NUMBER_X 0x4321
+#define UV4_HUB_PART_NUMBER 0x99a1
/* Compat: Indicate which UV Hubs are supported. */
+#define UV1_HUB_IS_SUPPORTED 1
#define UV2_HUB_IS_SUPPORTED 1
#define UV3_HUB_IS_SUPPORTED 1
+#define UV4_HUB_IS_SUPPORTED 1
+
+/* Error function to catch undefined references */
+extern unsigned long uv_undefined(char *str);
/* ========================================================================= */
/* UVH_BAU_DATA_BROADCAST */
/* ========================================================================= */
#define UVH_BAU_DATA_BROADCAST 0x61688UL
-#define UVH_BAU_DATA_BROADCAST_32 0x440
+
+#define UV1H_BAU_DATA_BROADCAST_32 0x440
+#define UV2H_BAU_DATA_BROADCAST_32 0x440
+#define UV3H_BAU_DATA_BROADCAST_32 0x440
+#define UV4H_BAU_DATA_BROADCAST_32 0x360
+#define UVH_BAU_DATA_BROADCAST_32 ( \
+ is_uv1_hub() ? UV1H_BAU_DATA_BROADCAST_32 : \
+ is_uv2_hub() ? UV2H_BAU_DATA_BROADCAST_32 : \
+ is_uv3_hub() ? UV3H_BAU_DATA_BROADCAST_32 : \
+ /*is_uv4_hub*/ UV4H_BAU_DATA_BROADCAST_32)
#define UVH_BAU_DATA_BROADCAST_ENABLE_SHFT 0
#define UVH_BAU_DATA_BROADCAST_ENABLE_MASK 0x0000000000000001UL
+
union uvh_bau_data_broadcast_u {
unsigned long v;
struct uvh_bau_data_broadcast_s {
@@ -109,7 +134,16 @@ union uvh_bau_data_broadcast_u {
/* UVH_BAU_DATA_CONFIG */
/* ========================================================================= */
#define UVH_BAU_DATA_CONFIG 0x61680UL
-#define UVH_BAU_DATA_CONFIG_32 0x438
+
+#define UV1H_BAU_DATA_CONFIG_32 0x438
+#define UV2H_BAU_DATA_CONFIG_32 0x438
+#define UV3H_BAU_DATA_CONFIG_32 0x438
+#define UV4H_BAU_DATA_CONFIG_32 0x358
+#define UVH_BAU_DATA_CONFIG_32 ( \
+ is_uv1_hub() ? UV1H_BAU_DATA_CONFIG_32 : \
+ is_uv2_hub() ? UV2H_BAU_DATA_CONFIG_32 : \
+ is_uv3_hub() ? UV3H_BAU_DATA_CONFIG_32 : \
+ /*is_uv4_hub*/ UV4H_BAU_DATA_CONFIG_32)
#define UVH_BAU_DATA_CONFIG_VECTOR_SHFT 0
#define UVH_BAU_DATA_CONFIG_DM_SHFT 8
@@ -128,6 +162,7 @@ union uvh_bau_data_broadcast_u {
#define UVH_BAU_DATA_CONFIG_M_MASK 0x0000000000010000UL
#define UVH_BAU_DATA_CONFIG_APIC_ID_MASK 0xffffffff00000000UL
+
union uvh_bau_data_config_u {
unsigned long v;
struct uvh_bau_data_config_s {
@@ -266,7 +301,6 @@ union uvh_bau_data_config_u {
#define UV1H_EVENT_OCCURRED0_BAU_DATA_MASK 0x0080000000000000UL
#define UV1H_EVENT_OCCURRED0_POWER_MANAGEMENT_REQ_MASK 0x0100000000000000UL
-#define UVXH_EVENT_OCCURRED0_QP_HCERR_SHFT 1
#define UVXH_EVENT_OCCURRED0_RH_HCERR_SHFT 2
#define UVXH_EVENT_OCCURRED0_LH0_HCERR_SHFT 3
#define UVXH_EVENT_OCCURRED0_LH1_HCERR_SHFT 4
@@ -275,55 +309,11 @@ union uvh_bau_data_config_u {
#define UVXH_EVENT_OCCURRED0_NI0_HCERR_SHFT 7
#define UVXH_EVENT_OCCURRED0_NI1_HCERR_SHFT 8
#define UVXH_EVENT_OCCURRED0_LB_AOERR0_SHFT 9
-#define UVXH_EVENT_OCCURRED0_QP_AOERR0_SHFT 10
#define UVXH_EVENT_OCCURRED0_LH0_AOERR0_SHFT 12
#define UVXH_EVENT_OCCURRED0_LH1_AOERR0_SHFT 13
#define UVXH_EVENT_OCCURRED0_GR0_AOERR0_SHFT 14
#define UVXH_EVENT_OCCURRED0_GR1_AOERR0_SHFT 15
#define UVXH_EVENT_OCCURRED0_XB_AOERR0_SHFT 16
-#define UVXH_EVENT_OCCURRED0_RT_AOERR0_SHFT 17
-#define UVXH_EVENT_OCCURRED0_NI0_AOERR0_SHFT 18
-#define UVXH_EVENT_OCCURRED0_NI1_AOERR0_SHFT 19
-#define UVXH_EVENT_OCCURRED0_LB_AOERR1_SHFT 20
-#define UVXH_EVENT_OCCURRED0_QP_AOERR1_SHFT 21
-#define UVXH_EVENT_OCCURRED0_RH_AOERR1_SHFT 22
-#define UVXH_EVENT_OCCURRED0_LH0_AOERR1_SHFT 23
-#define UVXH_EVENT_OCCURRED0_LH1_AOERR1_SHFT 24
-#define UVXH_EVENT_OCCURRED0_GR0_AOERR1_SHFT 25
-#define UVXH_EVENT_OCCURRED0_GR1_AOERR1_SHFT 26
-#define UVXH_EVENT_OCCURRED0_XB_AOERR1_SHFT 27
-#define UVXH_EVENT_OCCURRED0_RT_AOERR1_SHFT 28
-#define UVXH_EVENT_OCCURRED0_NI0_AOERR1_SHFT 29
-#define UVXH_EVENT_OCCURRED0_NI1_AOERR1_SHFT 30
-#define UVXH_EVENT_OCCURRED0_SYSTEM_SHUTDOWN_INT_SHFT 31
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_0_SHFT 32
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_1_SHFT 33
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_2_SHFT 34
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_3_SHFT 35
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_4_SHFT 36
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_5_SHFT 37
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_6_SHFT 38
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_7_SHFT 39
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_8_SHFT 40
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_9_SHFT 41
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_10_SHFT 42
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_11_SHFT 43
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_12_SHFT 44
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_13_SHFT 45
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_14_SHFT 46
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_15_SHFT 47
-#define UVXH_EVENT_OCCURRED0_L1_NMI_INT_SHFT 48
-#define UVXH_EVENT_OCCURRED0_STOP_CLOCK_SHFT 49
-#define UVXH_EVENT_OCCURRED0_ASIC_TO_L1_SHFT 50
-#define UVXH_EVENT_OCCURRED0_L1_TO_ASIC_SHFT 51
-#define UVXH_EVENT_OCCURRED0_LA_SEQ_TRIGGER_SHFT 52
-#define UVXH_EVENT_OCCURRED0_IPI_INT_SHFT 53
-#define UVXH_EVENT_OCCURRED0_EXTIO_INT0_SHFT 54
-#define UVXH_EVENT_OCCURRED0_EXTIO_INT1_SHFT 55
-#define UVXH_EVENT_OCCURRED0_EXTIO_INT2_SHFT 56
-#define UVXH_EVENT_OCCURRED0_EXTIO_INT3_SHFT 57
-#define UVXH_EVENT_OCCURRED0_PROFILE_INT_SHFT 58
-#define UVXH_EVENT_OCCURRED0_QP_HCERR_MASK 0x0000000000000002UL
#define UVXH_EVENT_OCCURRED0_RH_HCERR_MASK 0x0000000000000004UL
#define UVXH_EVENT_OCCURRED0_LH0_HCERR_MASK 0x0000000000000008UL
#define UVXH_EVENT_OCCURRED0_LH1_HCERR_MASK 0x0000000000000010UL
@@ -332,54 +322,294 @@ union uvh_bau_data_config_u {
#define UVXH_EVENT_OCCURRED0_NI0_HCERR_MASK 0x0000000000000080UL
#define UVXH_EVENT_OCCURRED0_NI1_HCERR_MASK 0x0000000000000100UL
#define UVXH_EVENT_OCCURRED0_LB_AOERR0_MASK 0x0000000000000200UL
-#define UVXH_EVENT_OCCURRED0_QP_AOERR0_MASK 0x0000000000000400UL
#define UVXH_EVENT_OCCURRED0_LH0_AOERR0_MASK 0x0000000000001000UL
#define UVXH_EVENT_OCCURRED0_LH1_AOERR0_MASK 0x0000000000002000UL
#define UVXH_EVENT_OCCURRED0_GR0_AOERR0_MASK 0x0000000000004000UL
#define UVXH_EVENT_OCCURRED0_GR1_AOERR0_MASK 0x0000000000008000UL
#define UVXH_EVENT_OCCURRED0_XB_AOERR0_MASK 0x0000000000010000UL
-#define UVXH_EVENT_OCCURRED0_RT_AOERR0_MASK 0x0000000000020000UL
-#define UVXH_EVENT_OCCURRED0_NI0_AOERR0_MASK 0x0000000000040000UL
-#define UVXH_EVENT_OCCURRED0_NI1_AOERR0_MASK 0x0000000000080000UL
-#define UVXH_EVENT_OCCURRED0_LB_AOERR1_MASK 0x0000000000100000UL
-#define UVXH_EVENT_OCCURRED0_QP_AOERR1_MASK 0x0000000000200000UL
-#define UVXH_EVENT_OCCURRED0_RH_AOERR1_MASK 0x0000000000400000UL
-#define UVXH_EVENT_OCCURRED0_LH0_AOERR1_MASK 0x0000000000800000UL
-#define UVXH_EVENT_OCCURRED0_LH1_AOERR1_MASK 0x0000000001000000UL
-#define UVXH_EVENT_OCCURRED0_GR0_AOERR1_MASK 0x0000000002000000UL
-#define UVXH_EVENT_OCCURRED0_GR1_AOERR1_MASK 0x0000000004000000UL
-#define UVXH_EVENT_OCCURRED0_XB_AOERR1_MASK 0x0000000008000000UL
-#define UVXH_EVENT_OCCURRED0_RT_AOERR1_MASK 0x0000000010000000UL
-#define UVXH_EVENT_OCCURRED0_NI0_AOERR1_MASK 0x0000000020000000UL
-#define UVXH_EVENT_OCCURRED0_NI1_AOERR1_MASK 0x0000000040000000UL
-#define UVXH_EVENT_OCCURRED0_SYSTEM_SHUTDOWN_INT_MASK 0x0000000080000000UL
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_0_MASK 0x0000000100000000UL
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_1_MASK 0x0000000200000000UL
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_2_MASK 0x0000000400000000UL
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_3_MASK 0x0000000800000000UL
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_4_MASK 0x0000001000000000UL
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_5_MASK 0x0000002000000000UL
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_6_MASK 0x0000004000000000UL
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_7_MASK 0x0000008000000000UL
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_8_MASK 0x0000010000000000UL
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_9_MASK 0x0000020000000000UL
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_10_MASK 0x0000040000000000UL
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_11_MASK 0x0000080000000000UL
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_12_MASK 0x0000100000000000UL
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_13_MASK 0x0000200000000000UL
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_14_MASK 0x0000400000000000UL
-#define UVXH_EVENT_OCCURRED0_LB_IRQ_INT_15_MASK 0x0000800000000000UL
-#define UVXH_EVENT_OCCURRED0_L1_NMI_INT_MASK 0x0001000000000000UL
-#define UVXH_EVENT_OCCURRED0_STOP_CLOCK_MASK 0x0002000000000000UL
-#define UVXH_EVENT_OCCURRED0_ASIC_TO_L1_MASK 0x0004000000000000UL
-#define UVXH_EVENT_OCCURRED0_L1_TO_ASIC_MASK 0x0008000000000000UL
-#define UVXH_EVENT_OCCURRED0_LA_SEQ_TRIGGER_MASK 0x0010000000000000UL
-#define UVXH_EVENT_OCCURRED0_IPI_INT_MASK 0x0020000000000000UL
-#define UVXH_EVENT_OCCURRED0_EXTIO_INT0_MASK 0x0040000000000000UL
-#define UVXH_EVENT_OCCURRED0_EXTIO_INT1_MASK 0x0080000000000000UL
-#define UVXH_EVENT_OCCURRED0_EXTIO_INT2_MASK 0x0100000000000000UL
-#define UVXH_EVENT_OCCURRED0_EXTIO_INT3_MASK 0x0200000000000000UL
-#define UVXH_EVENT_OCCURRED0_PROFILE_INT_MASK 0x0400000000000000UL
+
+#define UV2H_EVENT_OCCURRED0_QP_HCERR_SHFT 1
+#define UV2H_EVENT_OCCURRED0_QP_AOERR0_SHFT 10
+#define UV2H_EVENT_OCCURRED0_RT_AOERR0_SHFT 17
+#define UV2H_EVENT_OCCURRED0_NI0_AOERR0_SHFT 18
+#define UV2H_EVENT_OCCURRED0_NI1_AOERR0_SHFT 19
+#define UV2H_EVENT_OCCURRED0_LB_AOERR1_SHFT 20
+#define UV2H_EVENT_OCCURRED0_QP_AOERR1_SHFT 21
+#define UV2H_EVENT_OCCURRED0_RH_AOERR1_SHFT 22
+#define UV2H_EVENT_OCCURRED0_LH0_AOERR1_SHFT 23
+#define UV2H_EVENT_OCCURRED0_LH1_AOERR1_SHFT 24
+#define UV2H_EVENT_OCCURRED0_GR0_AOERR1_SHFT 25
+#define UV2H_EVENT_OCCURRED0_GR1_AOERR1_SHFT 26
+#define UV2H_EVENT_OCCURRED0_XB_AOERR1_SHFT 27
+#define UV2H_EVENT_OCCURRED0_RT_AOERR1_SHFT 28
+#define UV2H_EVENT_OCCURRED0_NI0_AOERR1_SHFT 29
+#define UV2H_EVENT_OCCURRED0_NI1_AOERR1_SHFT 30
+#define UV2H_EVENT_OCCURRED0_SYSTEM_SHUTDOWN_INT_SHFT 31
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_0_SHFT 32
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_1_SHFT 33
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_2_SHFT 34
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_3_SHFT 35
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_4_SHFT 36
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_5_SHFT 37
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_6_SHFT 38
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_7_SHFT 39
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_8_SHFT 40
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_9_SHFT 41
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_10_SHFT 42
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_11_SHFT 43
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_12_SHFT 44
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_13_SHFT 45
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_14_SHFT 46
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_15_SHFT 47
+#define UV2H_EVENT_OCCURRED0_L1_NMI_INT_SHFT 48
+#define UV2H_EVENT_OCCURRED0_STOP_CLOCK_SHFT 49
+#define UV2H_EVENT_OCCURRED0_ASIC_TO_L1_SHFT 50
+#define UV2H_EVENT_OCCURRED0_L1_TO_ASIC_SHFT 51
+#define UV2H_EVENT_OCCURRED0_LA_SEQ_TRIGGER_SHFT 52
+#define UV2H_EVENT_OCCURRED0_IPI_INT_SHFT 53
+#define UV2H_EVENT_OCCURRED0_EXTIO_INT0_SHFT 54
+#define UV2H_EVENT_OCCURRED0_EXTIO_INT1_SHFT 55
+#define UV2H_EVENT_OCCURRED0_EXTIO_INT2_SHFT 56
+#define UV2H_EVENT_OCCURRED0_EXTIO_INT3_SHFT 57
+#define UV2H_EVENT_OCCURRED0_PROFILE_INT_SHFT 58
+#define UV2H_EVENT_OCCURRED0_QP_HCERR_MASK 0x0000000000000002UL
+#define UV2H_EVENT_OCCURRED0_QP_AOERR0_MASK 0x0000000000000400UL
+#define UV2H_EVENT_OCCURRED0_RT_AOERR0_MASK 0x0000000000020000UL
+#define UV2H_EVENT_OCCURRED0_NI0_AOERR0_MASK 0x0000000000040000UL
+#define UV2H_EVENT_OCCURRED0_NI1_AOERR0_MASK 0x0000000000080000UL
+#define UV2H_EVENT_OCCURRED0_LB_AOERR1_MASK 0x0000000000100000UL
+#define UV2H_EVENT_OCCURRED0_QP_AOERR1_MASK 0x0000000000200000UL
+#define UV2H_EVENT_OCCURRED0_RH_AOERR1_MASK 0x0000000000400000UL
+#define UV2H_EVENT_OCCURRED0_LH0_AOERR1_MASK 0x0000000000800000UL
+#define UV2H_EVENT_OCCURRED0_LH1_AOERR1_MASK 0x0000000001000000UL
+#define UV2H_EVENT_OCCURRED0_GR0_AOERR1_MASK 0x0000000002000000UL
+#define UV2H_EVENT_OCCURRED0_GR1_AOERR1_MASK 0x0000000004000000UL
+#define UV2H_EVENT_OCCURRED0_XB_AOERR1_MASK 0x0000000008000000UL
+#define UV2H_EVENT_OCCURRED0_RT_AOERR1_MASK 0x0000000010000000UL
+#define UV2H_EVENT_OCCURRED0_NI0_AOERR1_MASK 0x0000000020000000UL
+#define UV2H_EVENT_OCCURRED0_NI1_AOERR1_MASK 0x0000000040000000UL
+#define UV2H_EVENT_OCCURRED0_SYSTEM_SHUTDOWN_INT_MASK 0x0000000080000000UL
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_0_MASK 0x0000000100000000UL
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_1_MASK 0x0000000200000000UL
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_2_MASK 0x0000000400000000UL
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_3_MASK 0x0000000800000000UL
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_4_MASK 0x0000001000000000UL
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_5_MASK 0x0000002000000000UL
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_6_MASK 0x0000004000000000UL
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_7_MASK 0x0000008000000000UL
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_8_MASK 0x0000010000000000UL
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_9_MASK 0x0000020000000000UL
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_10_MASK 0x0000040000000000UL
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_11_MASK 0x0000080000000000UL
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_12_MASK 0x0000100000000000UL
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_13_MASK 0x0000200000000000UL
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_14_MASK 0x0000400000000000UL
+#define UV2H_EVENT_OCCURRED0_LB_IRQ_INT_15_MASK 0x0000800000000000UL
+#define UV2H_EVENT_OCCURRED0_L1_NMI_INT_MASK 0x0001000000000000UL
+#define UV2H_EVENT_OCCURRED0_STOP_CLOCK_MASK 0x0002000000000000UL
+#define UV2H_EVENT_OCCURRED0_ASIC_TO_L1_MASK 0x0004000000000000UL
+#define UV2H_EVENT_OCCURRED0_L1_TO_ASIC_MASK 0x0008000000000000UL
+#define UV2H_EVENT_OCCURRED0_LA_SEQ_TRIGGER_MASK 0x0010000000000000UL
+#define UV2H_EVENT_OCCURRED0_IPI_INT_MASK 0x0020000000000000UL
+#define UV2H_EVENT_OCCURRED0_EXTIO_INT0_MASK 0x0040000000000000UL
+#define UV2H_EVENT_OCCURRED0_EXTIO_INT1_MASK 0x0080000000000000UL
+#define UV2H_EVENT_OCCURRED0_EXTIO_INT2_MASK 0x0100000000000000UL
+#define UV2H_EVENT_OCCURRED0_EXTIO_INT3_MASK 0x0200000000000000UL
+#define UV2H_EVENT_OCCURRED0_PROFILE_INT_MASK 0x0400000000000000UL
+
+#define UV3H_EVENT_OCCURRED0_QP_HCERR_SHFT 1
+#define UV3H_EVENT_OCCURRED0_QP_AOERR0_SHFT 10
+#define UV3H_EVENT_OCCURRED0_RT_AOERR0_SHFT 17
+#define UV3H_EVENT_OCCURRED0_NI0_AOERR0_SHFT 18
+#define UV3H_EVENT_OCCURRED0_NI1_AOERR0_SHFT 19
+#define UV3H_EVENT_OCCURRED0_LB_AOERR1_SHFT 20
+#define UV3H_EVENT_OCCURRED0_QP_AOERR1_SHFT 21
+#define UV3H_EVENT_OCCURRED0_RH_AOERR1_SHFT 22
+#define UV3H_EVENT_OCCURRED0_LH0_AOERR1_SHFT 23
+#define UV3H_EVENT_OCCURRED0_LH1_AOERR1_SHFT 24
+#define UV3H_EVENT_OCCURRED0_GR0_AOERR1_SHFT 25
+#define UV3H_EVENT_OCCURRED0_GR1_AOERR1_SHFT 26
+#define UV3H_EVENT_OCCURRED0_XB_AOERR1_SHFT 27
+#define UV3H_EVENT_OCCURRED0_RT_AOERR1_SHFT 28
+#define UV3H_EVENT_OCCURRED0_NI0_AOERR1_SHFT 29
+#define UV3H_EVENT_OCCURRED0_NI1_AOERR1_SHFT 30
+#define UV3H_EVENT_OCCURRED0_SYSTEM_SHUTDOWN_INT_SHFT 31
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_0_SHFT 32
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_1_SHFT 33
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_2_SHFT 34
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_3_SHFT 35
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_4_SHFT 36
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_5_SHFT 37
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_6_SHFT 38
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_7_SHFT 39
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_8_SHFT 40
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_9_SHFT 41
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_10_SHFT 42
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_11_SHFT 43
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_12_SHFT 44
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_13_SHFT 45
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_14_SHFT 46
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_15_SHFT 47
+#define UV3H_EVENT_OCCURRED0_L1_NMI_INT_SHFT 48
+#define UV3H_EVENT_OCCURRED0_STOP_CLOCK_SHFT 49
+#define UV3H_EVENT_OCCURRED0_ASIC_TO_L1_SHFT 50
+#define UV3H_EVENT_OCCURRED0_L1_TO_ASIC_SHFT 51
+#define UV3H_EVENT_OCCURRED0_LA_SEQ_TRIGGER_SHFT 52
+#define UV3H_EVENT_OCCURRED0_IPI_INT_SHFT 53
+#define UV3H_EVENT_OCCURRED0_EXTIO_INT0_SHFT 54
+#define UV3H_EVENT_OCCURRED0_EXTIO_INT1_SHFT 55
+#define UV3H_EVENT_OCCURRED0_EXTIO_INT2_SHFT 56
+#define UV3H_EVENT_OCCURRED0_EXTIO_INT3_SHFT 57
+#define UV3H_EVENT_OCCURRED0_PROFILE_INT_SHFT 58
+#define UV3H_EVENT_OCCURRED0_QP_HCERR_MASK 0x0000000000000002UL
+#define UV3H_EVENT_OCCURRED0_QP_AOERR0_MASK 0x0000000000000400UL
+#define UV3H_EVENT_OCCURRED0_RT_AOERR0_MASK 0x0000000000020000UL
+#define UV3H_EVENT_OCCURRED0_NI0_AOERR0_MASK 0x0000000000040000UL
+#define UV3H_EVENT_OCCURRED0_NI1_AOERR0_MASK 0x0000000000080000UL
+#define UV3H_EVENT_OCCURRED0_LB_AOERR1_MASK 0x0000000000100000UL
+#define UV3H_EVENT_OCCURRED0_QP_AOERR1_MASK 0x0000000000200000UL
+#define UV3H_EVENT_OCCURRED0_RH_AOERR1_MASK 0x0000000000400000UL
+#define UV3H_EVENT_OCCURRED0_LH0_AOERR1_MASK 0x0000000000800000UL
+#define UV3H_EVENT_OCCURRED0_LH1_AOERR1_MASK 0x0000000001000000UL
+#define UV3H_EVENT_OCCURRED0_GR0_AOERR1_MASK 0x0000000002000000UL
+#define UV3H_EVENT_OCCURRED0_GR1_AOERR1_MASK 0x0000000004000000UL
+#define UV3H_EVENT_OCCURRED0_XB_AOERR1_MASK 0x0000000008000000UL
+#define UV3H_EVENT_OCCURRED0_RT_AOERR1_MASK 0x0000000010000000UL
+#define UV3H_EVENT_OCCURRED0_NI0_AOERR1_MASK 0x0000000020000000UL
+#define UV3H_EVENT_OCCURRED0_NI1_AOERR1_MASK 0x0000000040000000UL
+#define UV3H_EVENT_OCCURRED0_SYSTEM_SHUTDOWN_INT_MASK 0x0000000080000000UL
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_0_MASK 0x0000000100000000UL
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_1_MASK 0x0000000200000000UL
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_2_MASK 0x0000000400000000UL
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_3_MASK 0x0000000800000000UL
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_4_MASK 0x0000001000000000UL
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_5_MASK 0x0000002000000000UL
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_6_MASK 0x0000004000000000UL
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_7_MASK 0x0000008000000000UL
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_8_MASK 0x0000010000000000UL
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_9_MASK 0x0000020000000000UL
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_10_MASK 0x0000040000000000UL
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_11_MASK 0x0000080000000000UL
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_12_MASK 0x0000100000000000UL
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_13_MASK 0x0000200000000000UL
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_14_MASK 0x0000400000000000UL
+#define UV3H_EVENT_OCCURRED0_LB_IRQ_INT_15_MASK 0x0000800000000000UL
+#define UV3H_EVENT_OCCURRED0_L1_NMI_INT_MASK 0x0001000000000000UL
+#define UV3H_EVENT_OCCURRED0_STOP_CLOCK_MASK 0x0002000000000000UL
+#define UV3H_EVENT_OCCURRED0_ASIC_TO_L1_MASK 0x0004000000000000UL
+#define UV3H_EVENT_OCCURRED0_L1_TO_ASIC_MASK 0x0008000000000000UL
+#define UV3H_EVENT_OCCURRED0_LA_SEQ_TRIGGER_MASK 0x0010000000000000UL
+#define UV3H_EVENT_OCCURRED0_IPI_INT_MASK 0x0020000000000000UL
+#define UV3H_EVENT_OCCURRED0_EXTIO_INT0_MASK 0x0040000000000000UL
+#define UV3H_EVENT_OCCURRED0_EXTIO_INT1_MASK 0x0080000000000000UL
+#define UV3H_EVENT_OCCURRED0_EXTIO_INT2_MASK 0x0100000000000000UL
+#define UV3H_EVENT_OCCURRED0_EXTIO_INT3_MASK 0x0200000000000000UL
+#define UV3H_EVENT_OCCURRED0_PROFILE_INT_MASK 0x0400000000000000UL
+
+#define UV4H_EVENT_OCCURRED0_KT_HCERR_SHFT 1
+#define UV4H_EVENT_OCCURRED0_KT_AOERR0_SHFT 10
+#define UV4H_EVENT_OCCURRED0_RTQ0_AOERR0_SHFT 17
+#define UV4H_EVENT_OCCURRED0_RTQ1_AOERR0_SHFT 18
+#define UV4H_EVENT_OCCURRED0_RTQ2_AOERR0_SHFT 19
+#define UV4H_EVENT_OCCURRED0_RTQ3_AOERR0_SHFT 20
+#define UV4H_EVENT_OCCURRED0_NI0_AOERR0_SHFT 21
+#define UV4H_EVENT_OCCURRED0_NI1_AOERR0_SHFT 22
+#define UV4H_EVENT_OCCURRED0_LB_AOERR1_SHFT 23
+#define UV4H_EVENT_OCCURRED0_KT_AOERR1_SHFT 24
+#define UV4H_EVENT_OCCURRED0_RH_AOERR1_SHFT 25
+#define UV4H_EVENT_OCCURRED0_LH0_AOERR1_SHFT 26
+#define UV4H_EVENT_OCCURRED0_LH1_AOERR1_SHFT 27
+#define UV4H_EVENT_OCCURRED0_GR0_AOERR1_SHFT 28
+#define UV4H_EVENT_OCCURRED0_GR1_AOERR1_SHFT 29
+#define UV4H_EVENT_OCCURRED0_XB_AOERR1_SHFT 30
+#define UV4H_EVENT_OCCURRED0_RTQ0_AOERR1_SHFT 31
+#define UV4H_EVENT_OCCURRED0_RTQ1_AOERR1_SHFT 32
+#define UV4H_EVENT_OCCURRED0_RTQ2_AOERR1_SHFT 33
+#define UV4H_EVENT_OCCURRED0_RTQ3_AOERR1_SHFT 34
+#define UV4H_EVENT_OCCURRED0_NI0_AOERR1_SHFT 35
+#define UV4H_EVENT_OCCURRED0_NI1_AOERR1_SHFT 36
+#define UV4H_EVENT_OCCURRED0_SYSTEM_SHUTDOWN_INT_SHFT 37
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_0_SHFT 38
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_1_SHFT 39
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_2_SHFT 40
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_3_SHFT 41
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_4_SHFT 42
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_5_SHFT 43
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_6_SHFT 44
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_7_SHFT 45
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_8_SHFT 46
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_9_SHFT 47
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_10_SHFT 48
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_11_SHFT 49
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_12_SHFT 50
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_13_SHFT 51
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_14_SHFT 52
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_15_SHFT 53
+#define UV4H_EVENT_OCCURRED0_L1_NMI_INT_SHFT 54
+#define UV4H_EVENT_OCCURRED0_STOP_CLOCK_SHFT 55
+#define UV4H_EVENT_OCCURRED0_ASIC_TO_L1_SHFT 56
+#define UV4H_EVENT_OCCURRED0_L1_TO_ASIC_SHFT 57
+#define UV4H_EVENT_OCCURRED0_LA_SEQ_TRIGGER_SHFT 58
+#define UV4H_EVENT_OCCURRED0_IPI_INT_SHFT 59
+#define UV4H_EVENT_OCCURRED0_EXTIO_INT0_SHFT 60
+#define UV4H_EVENT_OCCURRED0_EXTIO_INT1_SHFT 61
+#define UV4H_EVENT_OCCURRED0_EXTIO_INT2_SHFT 62
+#define UV4H_EVENT_OCCURRED0_EXTIO_INT3_SHFT 63
+#define UV4H_EVENT_OCCURRED0_KT_HCERR_MASK 0x0000000000000002UL
+#define UV4H_EVENT_OCCURRED0_KT_AOERR0_MASK 0x0000000000000400UL
+#define UV4H_EVENT_OCCURRED0_RTQ0_AOERR0_MASK 0x0000000000020000UL
+#define UV4H_EVENT_OCCURRED0_RTQ1_AOERR0_MASK 0x0000000000040000UL
+#define UV4H_EVENT_OCCURRED0_RTQ2_AOERR0_MASK 0x0000000000080000UL
+#define UV4H_EVENT_OCCURRED0_RTQ3_AOERR0_MASK 0x0000000000100000UL
+#define UV4H_EVENT_OCCURRED0_NI0_AOERR0_MASK 0x0000000000200000UL
+#define UV4H_EVENT_OCCURRED0_NI1_AOERR0_MASK 0x0000000000400000UL
+#define UV4H_EVENT_OCCURRED0_LB_AOERR1_MASK 0x0000000000800000UL
+#define UV4H_EVENT_OCCURRED0_KT_AOERR1_MASK 0x0000000001000000UL
+#define UV4H_EVENT_OCCURRED0_RH_AOERR1_MASK 0x0000000002000000UL
+#define UV4H_EVENT_OCCURRED0_LH0_AOERR1_MASK 0x0000000004000000UL
+#define UV4H_EVENT_OCCURRED0_LH1_AOERR1_MASK 0x0000000008000000UL
+#define UV4H_EVENT_OCCURRED0_GR0_AOERR1_MASK 0x0000000010000000UL
+#define UV4H_EVENT_OCCURRED0_GR1_AOERR1_MASK 0x0000000020000000UL
+#define UV4H_EVENT_OCCURRED0_XB_AOERR1_MASK 0x0000000040000000UL
+#define UV4H_EVENT_OCCURRED0_RTQ0_AOERR1_MASK 0x0000000080000000UL
+#define UV4H_EVENT_OCCURRED0_RTQ1_AOERR1_MASK 0x0000000100000000UL
+#define UV4H_EVENT_OCCURRED0_RTQ2_AOERR1_MASK 0x0000000200000000UL
+#define UV4H_EVENT_OCCURRED0_RTQ3_AOERR1_MASK 0x0000000400000000UL
+#define UV4H_EVENT_OCCURRED0_NI0_AOERR1_MASK 0x0000000800000000UL
+#define UV4H_EVENT_OCCURRED0_NI1_AOERR1_MASK 0x0000001000000000UL
+#define UV4H_EVENT_OCCURRED0_SYSTEM_SHUTDOWN_INT_MASK 0x0000002000000000UL
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_0_MASK 0x0000004000000000UL
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_1_MASK 0x0000008000000000UL
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_2_MASK 0x0000010000000000UL
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_3_MASK 0x0000020000000000UL
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_4_MASK 0x0000040000000000UL
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_5_MASK 0x0000080000000000UL
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_6_MASK 0x0000100000000000UL
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_7_MASK 0x0000200000000000UL
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_8_MASK 0x0000400000000000UL
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_9_MASK 0x0000800000000000UL
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_10_MASK 0x0001000000000000UL
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_11_MASK 0x0002000000000000UL
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_12_MASK 0x0004000000000000UL
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_13_MASK 0x0008000000000000UL
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_14_MASK 0x0010000000000000UL
+#define UV4H_EVENT_OCCURRED0_LB_IRQ_INT_15_MASK 0x0020000000000000UL
+#define UV4H_EVENT_OCCURRED0_L1_NMI_INT_MASK 0x0040000000000000UL
+#define UV4H_EVENT_OCCURRED0_STOP_CLOCK_MASK 0x0080000000000000UL
+#define UV4H_EVENT_OCCURRED0_ASIC_TO_L1_MASK 0x0100000000000000UL
+#define UV4H_EVENT_OCCURRED0_L1_TO_ASIC_MASK 0x0200000000000000UL
+#define UV4H_EVENT_OCCURRED0_LA_SEQ_TRIGGER_MASK 0x0400000000000000UL
+#define UV4H_EVENT_OCCURRED0_IPI_INT_MASK 0x0800000000000000UL
+#define UV4H_EVENT_OCCURRED0_EXTIO_INT0_MASK 0x1000000000000000UL
+#define UV4H_EVENT_OCCURRED0_EXTIO_INT1_MASK 0x2000000000000000UL
+#define UV4H_EVENT_OCCURRED0_EXTIO_INT2_MASK 0x4000000000000000UL
+#define UV4H_EVENT_OCCURRED0_EXTIO_INT3_MASK 0x8000000000000000UL
+
+#define UVH_EVENT_OCCURRED0_EXTIO_INT0_SHFT ( \
+ is_uv1_hub() ? UV1H_EVENT_OCCURRED0_EXTIO_INT0_SHFT : \
+ is_uv2_hub() ? UV2H_EVENT_OCCURRED0_EXTIO_INT0_SHFT : \
+ is_uv3_hub() ? UV3H_EVENT_OCCURRED0_EXTIO_INT0_SHFT : \
+ /*is_uv4_hub*/ UV4H_EVENT_OCCURRED0_EXTIO_INT0_SHFT)
union uvh_event_occurred0_u {
unsigned long v;
@@ -391,7 +621,7 @@ union uvh_event_occurred0_u {
} s;
struct uvxh_event_occurred0_s {
unsigned long lb_hcerr:1; /* RW */
- unsigned long qp_hcerr:1; /* RW */
+ unsigned long rsvd_1:1;
unsigned long rh_hcerr:1; /* RW */
unsigned long lh0_hcerr:1; /* RW */
unsigned long lh1_hcerr:1; /* RW */
@@ -400,25 +630,51 @@ union uvh_event_occurred0_u {
unsigned long ni0_hcerr:1; /* RW */
unsigned long ni1_hcerr:1; /* RW */
unsigned long lb_aoerr0:1; /* RW */
- unsigned long qp_aoerr0:1; /* RW */
+ unsigned long rsvd_10:1;
unsigned long rh_aoerr0:1; /* RW */
unsigned long lh0_aoerr0:1; /* RW */
unsigned long lh1_aoerr0:1; /* RW */
unsigned long gr0_aoerr0:1; /* RW */
unsigned long gr1_aoerr0:1; /* RW */
unsigned long xb_aoerr0:1; /* RW */
- unsigned long rt_aoerr0:1; /* RW */
+ unsigned long rsvd_17_63:47;
+ } sx;
+ struct uv4h_event_occurred0_s {
+ unsigned long lb_hcerr:1; /* RW */
+ unsigned long kt_hcerr:1; /* RW */
+ unsigned long rh_hcerr:1; /* RW */
+ unsigned long lh0_hcerr:1; /* RW */
+ unsigned long lh1_hcerr:1; /* RW */
+ unsigned long gr0_hcerr:1; /* RW */
+ unsigned long gr1_hcerr:1; /* RW */
+ unsigned long ni0_hcerr:1; /* RW */
+ unsigned long ni1_hcerr:1; /* RW */
+ unsigned long lb_aoerr0:1; /* RW */
+ unsigned long kt_aoerr0:1; /* RW */
+ unsigned long rh_aoerr0:1; /* RW */
+ unsigned long lh0_aoerr0:1; /* RW */
+ unsigned long lh1_aoerr0:1; /* RW */
+ unsigned long gr0_aoerr0:1; /* RW */
+ unsigned long gr1_aoerr0:1; /* RW */
+ unsigned long xb_aoerr0:1; /* RW */
+ unsigned long rtq0_aoerr0:1; /* RW */
+ unsigned long rtq1_aoerr0:1; /* RW */
+ unsigned long rtq2_aoerr0:1; /* RW */
+ unsigned long rtq3_aoerr0:1; /* RW */
unsigned long ni0_aoerr0:1; /* RW */
unsigned long ni1_aoerr0:1; /* RW */
unsigned long lb_aoerr1:1; /* RW */
- unsigned long qp_aoerr1:1; /* RW */
+ unsigned long kt_aoerr1:1; /* RW */
unsigned long rh_aoerr1:1; /* RW */
unsigned long lh0_aoerr1:1; /* RW */
unsigned long lh1_aoerr1:1; /* RW */
unsigned long gr0_aoerr1:1; /* RW */
unsigned long gr1_aoerr1:1; /* RW */
unsigned long xb_aoerr1:1; /* RW */
- unsigned long rt_aoerr1:1; /* RW */
+ unsigned long rtq0_aoerr1:1; /* RW */
+ unsigned long rtq1_aoerr1:1; /* RW */
+ unsigned long rtq2_aoerr1:1; /* RW */
+ unsigned long rtq3_aoerr1:1; /* RW */
unsigned long ni0_aoerr1:1; /* RW */
unsigned long ni1_aoerr1:1; /* RW */
unsigned long system_shutdown_int:1; /* RW */
@@ -448,9 +704,7 @@ union uvh_event_occurred0_u {
unsigned long extio_int1:1; /* RW */
unsigned long extio_int2:1; /* RW */
unsigned long extio_int3:1; /* RW */
- unsigned long profile_int:1; /* RW */
- unsigned long rsvd_59_63:5;
- } sx;
+ } s4;
};
/* ========================================================================= */
@@ -464,11 +718,21 @@ union uvh_event_occurred0_u {
/* UVH_EXTIO_INT0_BROADCAST */
/* ========================================================================= */
#define UVH_EXTIO_INT0_BROADCAST 0x61448UL
-#define UVH_EXTIO_INT0_BROADCAST_32 0x3f0
+
+#define UV1H_EXTIO_INT0_BROADCAST_32 0x3f0
+#define UV2H_EXTIO_INT0_BROADCAST_32 0x3f0
+#define UV3H_EXTIO_INT0_BROADCAST_32 0x3f0
+#define UV4H_EXTIO_INT0_BROADCAST_32 0x310
+#define UVH_EXTIO_INT0_BROADCAST_32 ( \
+ is_uv1_hub() ? UV1H_EXTIO_INT0_BROADCAST_32 : \
+ is_uv2_hub() ? UV2H_EXTIO_INT0_BROADCAST_32 : \
+ is_uv3_hub() ? UV3H_EXTIO_INT0_BROADCAST_32 : \
+ /*is_uv4_hub*/ UV4H_EXTIO_INT0_BROADCAST_32)
#define UVH_EXTIO_INT0_BROADCAST_ENABLE_SHFT 0
#define UVH_EXTIO_INT0_BROADCAST_ENABLE_MASK 0x0000000000000001UL
+
union uvh_extio_int0_broadcast_u {
unsigned long v;
struct uvh_extio_int0_broadcast_s {
@@ -499,6 +763,7 @@ union uvh_extio_int0_broadcast_u {
#define UVH_GR0_TLB_INT0_CONFIG_M_MASK 0x0000000000010000UL
#define UVH_GR0_TLB_INT0_CONFIG_APIC_ID_MASK 0xffffffff00000000UL
+
union uvh_gr0_tlb_int0_config_u {
unsigned long v;
struct uvh_gr0_tlb_int0_config_s {
@@ -537,6 +802,7 @@ union uvh_gr0_tlb_int0_config_u {
#define UVH_GR0_TLB_INT1_CONFIG_M_MASK 0x0000000000010000UL
#define UVH_GR0_TLB_INT1_CONFIG_APIC_ID_MASK 0xffffffff00000000UL
+
union uvh_gr0_tlb_int1_config_u {
unsigned long v;
struct uvh_gr0_tlb_int1_config_s {
@@ -559,19 +825,18 @@ union uvh_gr0_tlb_int1_config_u {
#define UV1H_GR0_TLB_MMR_CONTROL 0x401080UL
#define UV2H_GR0_TLB_MMR_CONTROL 0xc01080UL
#define UV3H_GR0_TLB_MMR_CONTROL 0xc01080UL
-#define UVH_GR0_TLB_MMR_CONTROL \
- (is_uv1_hub() ? UV1H_GR0_TLB_MMR_CONTROL : \
- (is_uv2_hub() ? UV2H_GR0_TLB_MMR_CONTROL : \
- UV3H_GR0_TLB_MMR_CONTROL))
+#define UV4H_GR0_TLB_MMR_CONTROL 0x601080UL
+#define UVH_GR0_TLB_MMR_CONTROL ( \
+ is_uv1_hub() ? UV1H_GR0_TLB_MMR_CONTROL : \
+ is_uv2_hub() ? UV2H_GR0_TLB_MMR_CONTROL : \
+ is_uv3_hub() ? UV3H_GR0_TLB_MMR_CONTROL : \
+ /*is_uv4_hub*/ UV4H_GR0_TLB_MMR_CONTROL)
#define UVH_GR0_TLB_MMR_CONTROL_INDEX_SHFT 0
-#define UVH_GR0_TLB_MMR_CONTROL_MEM_SEL_SHFT 12
#define UVH_GR0_TLB_MMR_CONTROL_AUTO_VALID_EN_SHFT 16
#define UVH_GR0_TLB_MMR_CONTROL_MMR_HASH_INDEX_EN_SHFT 20
#define UVH_GR0_TLB_MMR_CONTROL_MMR_WRITE_SHFT 30
#define UVH_GR0_TLB_MMR_CONTROL_MMR_READ_SHFT 31
-#define UVH_GR0_TLB_MMR_CONTROL_INDEX_MASK 0x0000000000000fffUL
-#define UVH_GR0_TLB_MMR_CONTROL_MEM_SEL_MASK 0x0000000000003000UL
#define UVH_GR0_TLB_MMR_CONTROL_AUTO_VALID_EN_MASK 0x0000000000010000UL
#define UVH_GR0_TLB_MMR_CONTROL_MMR_HASH_INDEX_EN_MASK 0x0000000000100000UL
#define UVH_GR0_TLB_MMR_CONTROL_MMR_WRITE_MASK 0x0000000040000000UL
@@ -601,14 +866,11 @@ union uvh_gr0_tlb_int1_config_u {
#define UV1H_GR0_TLB_MMR_CONTROL_MMR_INJ_TLBLRUV_MASK 0x1000000000000000UL
#define UVXH_GR0_TLB_MMR_CONTROL_INDEX_SHFT 0
-#define UVXH_GR0_TLB_MMR_CONTROL_MEM_SEL_SHFT 12
#define UVXH_GR0_TLB_MMR_CONTROL_AUTO_VALID_EN_SHFT 16
#define UVXH_GR0_TLB_MMR_CONTROL_MMR_HASH_INDEX_EN_SHFT 20
#define UVXH_GR0_TLB_MMR_CONTROL_MMR_WRITE_SHFT 30
#define UVXH_GR0_TLB_MMR_CONTROL_MMR_READ_SHFT 31
#define UVXH_GR0_TLB_MMR_CONTROL_MMR_OP_DONE_SHFT 32
-#define UVXH_GR0_TLB_MMR_CONTROL_INDEX_MASK 0x0000000000000fffUL
-#define UVXH_GR0_TLB_MMR_CONTROL_MEM_SEL_MASK 0x0000000000003000UL
#define UVXH_GR0_TLB_MMR_CONTROL_AUTO_VALID_EN_MASK 0x0000000000010000UL
#define UVXH_GR0_TLB_MMR_CONTROL_MMR_HASH_INDEX_EN_MASK 0x0000000000100000UL
#define UVXH_GR0_TLB_MMR_CONTROL_MMR_WRITE_MASK 0x0000000040000000UL
@@ -651,12 +913,45 @@ union uvh_gr0_tlb_int1_config_u {
#define UV3H_GR0_TLB_MMR_CONTROL_MMR_READ_MASK 0x0000000080000000UL
#define UV3H_GR0_TLB_MMR_CONTROL_MMR_OP_DONE_MASK 0x0000000100000000UL
+#define UV4H_GR0_TLB_MMR_CONTROL_INDEX_SHFT 0
+#define UV4H_GR0_TLB_MMR_CONTROL_MEM_SEL_SHFT 13
+#define UV4H_GR0_TLB_MMR_CONTROL_AUTO_VALID_EN_SHFT 16
+#define UV4H_GR0_TLB_MMR_CONTROL_MMR_HASH_INDEX_EN_SHFT 20
+#define UV4H_GR0_TLB_MMR_CONTROL_ECC_SEL_SHFT 21
+#define UV4H_GR0_TLB_MMR_CONTROL_MMR_WRITE_SHFT 30
+#define UV4H_GR0_TLB_MMR_CONTROL_MMR_READ_SHFT 31
+#define UV4H_GR0_TLB_MMR_CONTROL_MMR_OP_DONE_SHFT 32
+#define UV4H_GR0_TLB_MMR_CONTROL_PAGE_SIZE_SHFT 59
+#define UV4H_GR0_TLB_MMR_CONTROL_INDEX_MASK 0x0000000000001fffUL
+#define UV4H_GR0_TLB_MMR_CONTROL_MEM_SEL_MASK 0x0000000000006000UL
+#define UV4H_GR0_TLB_MMR_CONTROL_AUTO_VALID_EN_MASK 0x0000000000010000UL
+#define UV4H_GR0_TLB_MMR_CONTROL_MMR_HASH_INDEX_EN_MASK 0x0000000000100000UL
+#define UV4H_GR0_TLB_MMR_CONTROL_ECC_SEL_MASK 0x0000000000200000UL
+#define UV4H_GR0_TLB_MMR_CONTROL_MMR_WRITE_MASK 0x0000000040000000UL
+#define UV4H_GR0_TLB_MMR_CONTROL_MMR_READ_MASK 0x0000000080000000UL
+#define UV4H_GR0_TLB_MMR_CONTROL_MMR_OP_DONE_MASK 0x0000000100000000UL
+#define UV4H_GR0_TLB_MMR_CONTROL_PAGE_SIZE_MASK 0xf800000000000000UL
+
+#define UVH_GR0_TLB_MMR_CONTROL_INDEX_MASK ( \
+ is_uv1_hub() ? UV1H_GR0_TLB_MMR_CONTROL_INDEX_MASK : \
+ is_uv2_hub() ? UV2H_GR0_TLB_MMR_CONTROL_INDEX_MASK : \
+ is_uv3_hub() ? UV3H_GR0_TLB_MMR_CONTROL_INDEX_MASK : \
+ /*is_uv4_hub*/ UV4H_GR0_TLB_MMR_CONTROL_INDEX_MASK)
+#define UVH_GR0_TLB_MMR_CONTROL_MEM_SEL_MASK ( \
+ is_uv1_hub() ? UV1H_GR0_TLB_MMR_CONTROL_MEM_SEL_MASK : \
+ is_uv2_hub() ? UV2H_GR0_TLB_MMR_CONTROL_MEM_SEL_MASK : \
+ is_uv3_hub() ? UV3H_GR0_TLB_MMR_CONTROL_MEM_SEL_MASK : \
+ /*is_uv4_hub*/ UV4H_GR0_TLB_MMR_CONTROL_MEM_SEL_MASK)
+#define UVH_GR0_TLB_MMR_CONTROL_MEM_SEL_SHFT ( \
+ is_uv1_hub() ? UV1H_GR0_TLB_MMR_CONTROL_MEM_SEL_SHFT : \
+ is_uv2_hub() ? UV2H_GR0_TLB_MMR_CONTROL_MEM_SEL_SHFT : \
+ is_uv3_hub() ? UV3H_GR0_TLB_MMR_CONTROL_MEM_SEL_SHFT : \
+ /*is_uv4_hub*/ UV4H_GR0_TLB_MMR_CONTROL_MEM_SEL_SHFT)
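Because UV4 widens the GR0 TLB index field to 13 bits and moves MEM_SEL up to bit 13, the generic INDEX/MEM_SEL constants can no longer be plain compile-time values; they become per-hub selectors as well, and callers have to build the control word through the UVH_ macros rather than hard-coding the UV1-3 layout. A minimal sketch, assuming uv_write_local_mmr() from <asm/uv/uv_hub.h>; the function and its 'index' argument are hypothetical:

#include <asm/uv/uv_hub.h>	/* uv_write_local_mmr() */
#include <asm/uv/uv_mmrs.h>

static void example_select_gr0_tlb_entry(unsigned long index)
{
	unsigned long ctrl = 0;

	ctrl |= index & UVH_GR0_TLB_MMR_CONTROL_INDEX_MASK;	/* 12 bits on UV1-3, 13 bits on UV4 */
	ctrl |= 1UL << UVH_GR0_TLB_MMR_CONTROL_MEM_SEL_SHFT;	/* bit 12 on UV1-3, bit 13 on UV4 */

	uv_write_local_mmr(UVH_GR0_TLB_MMR_CONTROL, ctrl);
}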
+
union uvh_gr0_tlb_mmr_control_u {
unsigned long v;
struct uvh_gr0_tlb_mmr_control_s {
- unsigned long index:12; /* RW */
- unsigned long mem_sel:2; /* RW */
- unsigned long rsvd_14_15:2;
+ unsigned long rsvd_0_15:16;
unsigned long auto_valid_en:1; /* RW */
unsigned long rsvd_17_19:3;
unsigned long mmr_hash_index_en:1; /* RW */
@@ -690,9 +985,7 @@ union uvh_gr0_tlb_mmr_control_u {
unsigned long rsvd_61_63:3;
} s1;
struct uvxh_gr0_tlb_mmr_control_s {
- unsigned long index:12; /* RW */
- unsigned long mem_sel:2; /* RW */
- unsigned long rsvd_14_15:2;
+ unsigned long rsvd_0_15:16;
unsigned long auto_valid_en:1; /* RW */
unsigned long rsvd_17_19:3;
unsigned long mmr_hash_index_en:1; /* RW */
@@ -703,8 +996,7 @@ union uvh_gr0_tlb_mmr_control_u {
unsigned long rsvd_33_47:15;
unsigned long rsvd_48:1;
unsigned long rsvd_49_51:3;
- unsigned long rsvd_52:1;
- unsigned long rsvd_53_63:11;
+ unsigned long rsvd_52_63:12;
} sx;
struct uv2h_gr0_tlb_mmr_control_s {
unsigned long index:12; /* RW */
@@ -741,6 +1033,24 @@ union uvh_gr0_tlb_mmr_control_u {
unsigned long undef_52:1; /* Undefined */
unsigned long rsvd_53_63:11;
} s3;
+ struct uv4h_gr0_tlb_mmr_control_s {
+ unsigned long index:13; /* RW */
+ unsigned long mem_sel:2; /* RW */
+ unsigned long rsvd_15:1;
+ unsigned long auto_valid_en:1; /* RW */
+ unsigned long rsvd_17_19:3;
+ unsigned long mmr_hash_index_en:1; /* RW */
+ unsigned long ecc_sel:1; /* RW */
+ unsigned long rsvd_22_29:8;
+ unsigned long mmr_write:1; /* WP */
+ unsigned long mmr_read:1; /* WP */
+ unsigned long mmr_op_done:1; /* RW */
+ unsigned long rsvd_33_47:15;
+ unsigned long undef_48:1; /* Undefined */
+ unsigned long rsvd_49_51:3;
+ unsigned long rsvd_52_58:7;
+ unsigned long page_size:5; /* RW */
+ } s4;
};
/* ========================================================================= */
@@ -749,19 +1059,14 @@ union uvh_gr0_tlb_mmr_control_u {
#define UV1H_GR0_TLB_MMR_READ_DATA_HI 0x4010a0UL
#define UV2H_GR0_TLB_MMR_READ_DATA_HI 0xc010a0UL
#define UV3H_GR0_TLB_MMR_READ_DATA_HI 0xc010a0UL
-#define UVH_GR0_TLB_MMR_READ_DATA_HI \
- (is_uv1_hub() ? UV1H_GR0_TLB_MMR_READ_DATA_HI : \
- (is_uv2_hub() ? UV2H_GR0_TLB_MMR_READ_DATA_HI : \
- UV3H_GR0_TLB_MMR_READ_DATA_HI))
+#define UV4H_GR0_TLB_MMR_READ_DATA_HI 0x6010a0UL
+#define UVH_GR0_TLB_MMR_READ_DATA_HI ( \
+ is_uv1_hub() ? UV1H_GR0_TLB_MMR_READ_DATA_HI : \
+ is_uv2_hub() ? UV2H_GR0_TLB_MMR_READ_DATA_HI : \
+ is_uv3_hub() ? UV3H_GR0_TLB_MMR_READ_DATA_HI : \
+ /*is_uv4_hub*/ UV4H_GR0_TLB_MMR_READ_DATA_HI)
#define UVH_GR0_TLB_MMR_READ_DATA_HI_PFN_SHFT 0
-#define UVH_GR0_TLB_MMR_READ_DATA_HI_GAA_SHFT 41
-#define UVH_GR0_TLB_MMR_READ_DATA_HI_DIRTY_SHFT 43
-#define UVH_GR0_TLB_MMR_READ_DATA_HI_LARGER_SHFT 44
-#define UVH_GR0_TLB_MMR_READ_DATA_HI_PFN_MASK 0x000001ffffffffffUL
-#define UVH_GR0_TLB_MMR_READ_DATA_HI_GAA_MASK 0x0000060000000000UL
-#define UVH_GR0_TLB_MMR_READ_DATA_HI_DIRTY_MASK 0x0000080000000000UL
-#define UVH_GR0_TLB_MMR_READ_DATA_HI_LARGER_MASK 0x0000100000000000UL
#define UV1H_GR0_TLB_MMR_READ_DATA_HI_PFN_SHFT 0
#define UV1H_GR0_TLB_MMR_READ_DATA_HI_GAA_SHFT 41
@@ -773,13 +1078,6 @@ union uvh_gr0_tlb_mmr_control_u {
#define UV1H_GR0_TLB_MMR_READ_DATA_HI_LARGER_MASK 0x0000100000000000UL
#define UVXH_GR0_TLB_MMR_READ_DATA_HI_PFN_SHFT 0
-#define UVXH_GR0_TLB_MMR_READ_DATA_HI_GAA_SHFT 41
-#define UVXH_GR0_TLB_MMR_READ_DATA_HI_DIRTY_SHFT 43
-#define UVXH_GR0_TLB_MMR_READ_DATA_HI_LARGER_SHFT 44
-#define UVXH_GR0_TLB_MMR_READ_DATA_HI_PFN_MASK 0x000001ffffffffffUL
-#define UVXH_GR0_TLB_MMR_READ_DATA_HI_GAA_MASK 0x0000060000000000UL
-#define UVXH_GR0_TLB_MMR_READ_DATA_HI_DIRTY_MASK 0x0000080000000000UL
-#define UVXH_GR0_TLB_MMR_READ_DATA_HI_LARGER_MASK 0x0000100000000000UL
#define UV2H_GR0_TLB_MMR_READ_DATA_HI_PFN_SHFT 0
#define UV2H_GR0_TLB_MMR_READ_DATA_HI_GAA_SHFT 41
@@ -803,15 +1101,24 @@ union uvh_gr0_tlb_mmr_control_u {
#define UV3H_GR0_TLB_MMR_READ_DATA_HI_AA_EXT_MASK 0x0000200000000000UL
#define UV3H_GR0_TLB_MMR_READ_DATA_HI_WAY_ECC_MASK 0xff80000000000000UL
+#define UV4H_GR0_TLB_MMR_READ_DATA_HI_PFN_SHFT 0
+#define UV4H_GR0_TLB_MMR_READ_DATA_HI_PNID_SHFT 34
+#define UV4H_GR0_TLB_MMR_READ_DATA_HI_GAA_SHFT 49
+#define UV4H_GR0_TLB_MMR_READ_DATA_HI_DIRTY_SHFT 51
+#define UV4H_GR0_TLB_MMR_READ_DATA_HI_LARGER_SHFT 52
+#define UV4H_GR0_TLB_MMR_READ_DATA_HI_AA_EXT_SHFT 53
+#define UV4H_GR0_TLB_MMR_READ_DATA_HI_WAY_ECC_SHFT 55
+#define UV4H_GR0_TLB_MMR_READ_DATA_HI_PFN_MASK 0x00000003ffffffffUL
+#define UV4H_GR0_TLB_MMR_READ_DATA_HI_PNID_MASK 0x0001fffc00000000UL
+#define UV4H_GR0_TLB_MMR_READ_DATA_HI_GAA_MASK 0x0006000000000000UL
+#define UV4H_GR0_TLB_MMR_READ_DATA_HI_DIRTY_MASK 0x0008000000000000UL
+#define UV4H_GR0_TLB_MMR_READ_DATA_HI_LARGER_MASK 0x0010000000000000UL
+#define UV4H_GR0_TLB_MMR_READ_DATA_HI_AA_EXT_MASK 0x0020000000000000UL
+#define UV4H_GR0_TLB_MMR_READ_DATA_HI_WAY_ECC_MASK 0xff80000000000000UL
+
+
union uvh_gr0_tlb_mmr_read_data_hi_u {
unsigned long v;
- struct uvh_gr0_tlb_mmr_read_data_hi_s {
- unsigned long pfn:41; /* RO */
- unsigned long gaa:2; /* RO */
- unsigned long dirty:1; /* RO */
- unsigned long larger:1; /* RO */
- unsigned long rsvd_45_63:19;
- } s;
struct uv1h_gr0_tlb_mmr_read_data_hi_s {
unsigned long pfn:41; /* RO */
unsigned long gaa:2; /* RO */
@@ -819,13 +1126,6 @@ union uvh_gr0_tlb_mmr_read_data_hi_u {
unsigned long larger:1; /* RO */
unsigned long rsvd_45_63:19;
} s1;
- struct uvxh_gr0_tlb_mmr_read_data_hi_s {
- unsigned long pfn:41; /* RO */
- unsigned long gaa:2; /* RO */
- unsigned long dirty:1; /* RO */
- unsigned long larger:1; /* RO */
- unsigned long rsvd_45_63:19;
- } sx;
struct uv2h_gr0_tlb_mmr_read_data_hi_s {
unsigned long pfn:41; /* RO */
unsigned long gaa:2; /* RO */
@@ -842,6 +1142,16 @@ union uvh_gr0_tlb_mmr_read_data_hi_u {
unsigned long undef_46_54:9; /* Undefined */
unsigned long way_ecc:9; /* RO */
} s3;
+ struct uv4h_gr0_tlb_mmr_read_data_hi_s {
+ unsigned long pfn:34; /* RO */
+ unsigned long pnid:15; /* RO */
+ unsigned long gaa:2; /* RO */
+ unsigned long dirty:1; /* RO */
+ unsigned long larger:1; /* RO */
+ unsigned long aa_ext:1; /* RO */
+ unsigned long undef_54:1; /* Undefined */
+ unsigned long way_ecc:9; /* RO */
+ } s4;
};
/* ========================================================================= */
@@ -850,10 +1160,12 @@ union uvh_gr0_tlb_mmr_read_data_hi_u {
#define UV1H_GR0_TLB_MMR_READ_DATA_LO 0x4010a8UL
#define UV2H_GR0_TLB_MMR_READ_DATA_LO 0xc010a8UL
#define UV3H_GR0_TLB_MMR_READ_DATA_LO 0xc010a8UL
-#define UVH_GR0_TLB_MMR_READ_DATA_LO \
- (is_uv1_hub() ? UV1H_GR0_TLB_MMR_READ_DATA_LO : \
- (is_uv2_hub() ? UV2H_GR0_TLB_MMR_READ_DATA_LO : \
- UV3H_GR0_TLB_MMR_READ_DATA_LO))
+#define UV4H_GR0_TLB_MMR_READ_DATA_LO 0x6010a8UL
+#define UVH_GR0_TLB_MMR_READ_DATA_LO ( \
+ is_uv1_hub() ? UV1H_GR0_TLB_MMR_READ_DATA_LO : \
+ is_uv2_hub() ? UV2H_GR0_TLB_MMR_READ_DATA_LO : \
+ is_uv3_hub() ? UV3H_GR0_TLB_MMR_READ_DATA_LO : \
+ /*is_uv4_hub*/ UV4H_GR0_TLB_MMR_READ_DATA_LO)
#define UVH_GR0_TLB_MMR_READ_DATA_LO_VPN_SHFT 0
#define UVH_GR0_TLB_MMR_READ_DATA_LO_ASID_SHFT 39
@@ -890,6 +1202,14 @@ union uvh_gr0_tlb_mmr_read_data_hi_u {
#define UV3H_GR0_TLB_MMR_READ_DATA_LO_ASID_MASK 0x7fffff8000000000UL
#define UV3H_GR0_TLB_MMR_READ_DATA_LO_VALID_MASK 0x8000000000000000UL
+#define UV4H_GR0_TLB_MMR_READ_DATA_LO_VPN_SHFT 0
+#define UV4H_GR0_TLB_MMR_READ_DATA_LO_ASID_SHFT 39
+#define UV4H_GR0_TLB_MMR_READ_DATA_LO_VALID_SHFT 63
+#define UV4H_GR0_TLB_MMR_READ_DATA_LO_VPN_MASK 0x0000007fffffffffUL
+#define UV4H_GR0_TLB_MMR_READ_DATA_LO_ASID_MASK 0x7fffff8000000000UL
+#define UV4H_GR0_TLB_MMR_READ_DATA_LO_VALID_MASK 0x8000000000000000UL
+
+
union uvh_gr0_tlb_mmr_read_data_lo_u {
unsigned long v;
struct uvh_gr0_tlb_mmr_read_data_lo_s {
@@ -917,12 +1237,25 @@ union uvh_gr0_tlb_mmr_read_data_lo_u {
unsigned long asid:24; /* RO */
unsigned long valid:1; /* RO */
} s3;
+ struct uv4h_gr0_tlb_mmr_read_data_lo_s {
+ unsigned long vpn:39; /* RO */
+ unsigned long asid:24; /* RO */
+ unsigned long valid:1; /* RO */
+ } s4;
};
/* ========================================================================= */
/* UVH_GR1_TLB_INT0_CONFIG */
/* ========================================================================= */
-#define UVH_GR1_TLB_INT0_CONFIG 0x61f00UL
+#define UV1H_GR1_TLB_INT0_CONFIG 0x61f00UL
+#define UV2H_GR1_TLB_INT0_CONFIG 0x61f00UL
+#define UV3H_GR1_TLB_INT0_CONFIG 0x61f00UL
+#define UV4H_GR1_TLB_INT0_CONFIG 0x62100UL
+#define UVH_GR1_TLB_INT0_CONFIG ( \
+ is_uv1_hub() ? UV1H_GR1_TLB_INT0_CONFIG : \
+ is_uv2_hub() ? UV2H_GR1_TLB_INT0_CONFIG : \
+ is_uv3_hub() ? UV3H_GR1_TLB_INT0_CONFIG : \
+ /*is_uv4_hub*/ UV4H_GR1_TLB_INT0_CONFIG)
#define UVH_GR1_TLB_INT0_CONFIG_VECTOR_SHFT 0
#define UVH_GR1_TLB_INT0_CONFIG_DM_SHFT 8
@@ -941,6 +1274,7 @@ union uvh_gr0_tlb_mmr_read_data_lo_u {
#define UVH_GR1_TLB_INT0_CONFIG_M_MASK 0x0000000000010000UL
#define UVH_GR1_TLB_INT0_CONFIG_APIC_ID_MASK 0xffffffff00000000UL
+
union uvh_gr1_tlb_int0_config_u {
unsigned long v;
struct uvh_gr1_tlb_int0_config_s {
@@ -960,7 +1294,15 @@ union uvh_gr1_tlb_int0_config_u {
/* ========================================================================= */
/* UVH_GR1_TLB_INT1_CONFIG */
/* ========================================================================= */
-#define UVH_GR1_TLB_INT1_CONFIG 0x61f40UL
+#define UV1H_GR1_TLB_INT1_CONFIG 0x61f40UL
+#define UV2H_GR1_TLB_INT1_CONFIG 0x61f40UL
+#define UV3H_GR1_TLB_INT1_CONFIG 0x61f40UL
+#define UV4H_GR1_TLB_INT1_CONFIG 0x62140UL
+#define UVH_GR1_TLB_INT1_CONFIG ( \
+ is_uv1_hub() ? UV1H_GR1_TLB_INT1_CONFIG : \
+ is_uv2_hub() ? UV2H_GR1_TLB_INT1_CONFIG : \
+ is_uv3_hub() ? UV3H_GR1_TLB_INT1_CONFIG : \
+ /*is_uv4_hub*/ UV4H_GR1_TLB_INT1_CONFIG)
#define UVH_GR1_TLB_INT1_CONFIG_VECTOR_SHFT 0
#define UVH_GR1_TLB_INT1_CONFIG_DM_SHFT 8
@@ -979,6 +1321,7 @@ union uvh_gr1_tlb_int0_config_u {
#define UVH_GR1_TLB_INT1_CONFIG_M_MASK 0x0000000000010000UL
#define UVH_GR1_TLB_INT1_CONFIG_APIC_ID_MASK 0xffffffff00000000UL
+
union uvh_gr1_tlb_int1_config_u {
unsigned long v;
struct uvh_gr1_tlb_int1_config_s {
@@ -1001,19 +1344,18 @@ union uvh_gr1_tlb_int1_config_u {
#define UV1H_GR1_TLB_MMR_CONTROL 0x801080UL
#define UV2H_GR1_TLB_MMR_CONTROL 0x1001080UL
#define UV3H_GR1_TLB_MMR_CONTROL 0x1001080UL
-#define UVH_GR1_TLB_MMR_CONTROL \
- (is_uv1_hub() ? UV1H_GR1_TLB_MMR_CONTROL : \
- (is_uv2_hub() ? UV2H_GR1_TLB_MMR_CONTROL : \
- UV3H_GR1_TLB_MMR_CONTROL))
+#define UV4H_GR1_TLB_MMR_CONTROL 0x701080UL
+#define UVH_GR1_TLB_MMR_CONTROL ( \
+ is_uv1_hub() ? UV1H_GR1_TLB_MMR_CONTROL : \
+ is_uv2_hub() ? UV2H_GR1_TLB_MMR_CONTROL : \
+ is_uv3_hub() ? UV3H_GR1_TLB_MMR_CONTROL : \
+ /*is_uv4_hub*/ UV4H_GR1_TLB_MMR_CONTROL)
#define UVH_GR1_TLB_MMR_CONTROL_INDEX_SHFT 0
-#define UVH_GR1_TLB_MMR_CONTROL_MEM_SEL_SHFT 12
#define UVH_GR1_TLB_MMR_CONTROL_AUTO_VALID_EN_SHFT 16
#define UVH_GR1_TLB_MMR_CONTROL_MMR_HASH_INDEX_EN_SHFT 20
#define UVH_GR1_TLB_MMR_CONTROL_MMR_WRITE_SHFT 30
#define UVH_GR1_TLB_MMR_CONTROL_MMR_READ_SHFT 31
-#define UVH_GR1_TLB_MMR_CONTROL_INDEX_MASK 0x0000000000000fffUL
-#define UVH_GR1_TLB_MMR_CONTROL_MEM_SEL_MASK 0x0000000000003000UL
#define UVH_GR1_TLB_MMR_CONTROL_AUTO_VALID_EN_MASK 0x0000000000010000UL
#define UVH_GR1_TLB_MMR_CONTROL_MMR_HASH_INDEX_EN_MASK 0x0000000000100000UL
#define UVH_GR1_TLB_MMR_CONTROL_MMR_WRITE_MASK 0x0000000040000000UL
@@ -1043,14 +1385,11 @@ union uvh_gr1_tlb_int1_config_u {
#define UV1H_GR1_TLB_MMR_CONTROL_MMR_INJ_TLBLRUV_MASK 0x1000000000000000UL
#define UVXH_GR1_TLB_MMR_CONTROL_INDEX_SHFT 0
-#define UVXH_GR1_TLB_MMR_CONTROL_MEM_SEL_SHFT 12
#define UVXH_GR1_TLB_MMR_CONTROL_AUTO_VALID_EN_SHFT 16
#define UVXH_GR1_TLB_MMR_CONTROL_MMR_HASH_INDEX_EN_SHFT 20
#define UVXH_GR1_TLB_MMR_CONTROL_MMR_WRITE_SHFT 30
#define UVXH_GR1_TLB_MMR_CONTROL_MMR_READ_SHFT 31
#define UVXH_GR1_TLB_MMR_CONTROL_MMR_OP_DONE_SHFT 32
-#define UVXH_GR1_TLB_MMR_CONTROL_INDEX_MASK 0x0000000000000fffUL
-#define UVXH_GR1_TLB_MMR_CONTROL_MEM_SEL_MASK 0x0000000000003000UL
#define UVXH_GR1_TLB_MMR_CONTROL_AUTO_VALID_EN_MASK 0x0000000000010000UL
#define UVXH_GR1_TLB_MMR_CONTROL_MMR_HASH_INDEX_EN_MASK 0x0000000000100000UL
#define UVXH_GR1_TLB_MMR_CONTROL_MMR_WRITE_MASK 0x0000000040000000UL
@@ -1093,12 +1432,30 @@ union uvh_gr1_tlb_int1_config_u {
#define UV3H_GR1_TLB_MMR_CONTROL_MMR_READ_MASK 0x0000000080000000UL
#define UV3H_GR1_TLB_MMR_CONTROL_MMR_OP_DONE_MASK 0x0000000100000000UL
+#define UV4H_GR1_TLB_MMR_CONTROL_INDEX_SHFT 0
+#define UV4H_GR1_TLB_MMR_CONTROL_MEM_SEL_SHFT 13
+#define UV4H_GR1_TLB_MMR_CONTROL_AUTO_VALID_EN_SHFT 16
+#define UV4H_GR1_TLB_MMR_CONTROL_MMR_HASH_INDEX_EN_SHFT 20
+#define UV4H_GR1_TLB_MMR_CONTROL_ECC_SEL_SHFT 21
+#define UV4H_GR1_TLB_MMR_CONTROL_MMR_WRITE_SHFT 30
+#define UV4H_GR1_TLB_MMR_CONTROL_MMR_READ_SHFT 31
+#define UV4H_GR1_TLB_MMR_CONTROL_MMR_OP_DONE_SHFT 32
+#define UV4H_GR1_TLB_MMR_CONTROL_PAGE_SIZE_SHFT 59
+#define UV4H_GR1_TLB_MMR_CONTROL_INDEX_MASK 0x0000000000001fffUL
+#define UV4H_GR1_TLB_MMR_CONTROL_MEM_SEL_MASK 0x0000000000006000UL
+#define UV4H_GR1_TLB_MMR_CONTROL_AUTO_VALID_EN_MASK 0x0000000000010000UL
+#define UV4H_GR1_TLB_MMR_CONTROL_MMR_HASH_INDEX_EN_MASK 0x0000000000100000UL
+#define UV4H_GR1_TLB_MMR_CONTROL_ECC_SEL_MASK 0x0000000000200000UL
+#define UV4H_GR1_TLB_MMR_CONTROL_MMR_WRITE_MASK 0x0000000040000000UL
+#define UV4H_GR1_TLB_MMR_CONTROL_MMR_READ_MASK 0x0000000080000000UL
+#define UV4H_GR1_TLB_MMR_CONTROL_MMR_OP_DONE_MASK 0x0000000100000000UL
+#define UV4H_GR1_TLB_MMR_CONTROL_PAGE_SIZE_MASK 0xf800000000000000UL
+
+
union uvh_gr1_tlb_mmr_control_u {
unsigned long v;
struct uvh_gr1_tlb_mmr_control_s {
- unsigned long index:12; /* RW */
- unsigned long mem_sel:2; /* RW */
- unsigned long rsvd_14_15:2;
+ unsigned long rsvd_0_15:16;
unsigned long auto_valid_en:1; /* RW */
unsigned long rsvd_17_19:3;
unsigned long mmr_hash_index_en:1; /* RW */
@@ -1132,9 +1489,7 @@ union uvh_gr1_tlb_mmr_control_u {
unsigned long rsvd_61_63:3;
} s1;
struct uvxh_gr1_tlb_mmr_control_s {
- unsigned long index:12; /* RW */
- unsigned long mem_sel:2; /* RW */
- unsigned long rsvd_14_15:2;
+ unsigned long rsvd_0_15:16;
unsigned long auto_valid_en:1; /* RW */
unsigned long rsvd_17_19:3;
unsigned long mmr_hash_index_en:1; /* RW */
@@ -1145,8 +1500,7 @@ union uvh_gr1_tlb_mmr_control_u {
unsigned long rsvd_33_47:15;
unsigned long rsvd_48:1;
unsigned long rsvd_49_51:3;
- unsigned long rsvd_52:1;
- unsigned long rsvd_53_63:11;
+ unsigned long rsvd_52_63:12;
} sx;
struct uv2h_gr1_tlb_mmr_control_s {
unsigned long index:12; /* RW */
@@ -1183,6 +1537,24 @@ union uvh_gr1_tlb_mmr_control_u {
unsigned long undef_52:1; /* Undefined */
unsigned long rsvd_53_63:11;
} s3;
+ struct uv4h_gr1_tlb_mmr_control_s {
+ unsigned long index:13; /* RW */
+ unsigned long mem_sel:2; /* RW */
+ unsigned long rsvd_15:1;
+ unsigned long auto_valid_en:1; /* RW */
+ unsigned long rsvd_17_19:3;
+ unsigned long mmr_hash_index_en:1; /* RW */
+ unsigned long ecc_sel:1; /* RW */
+ unsigned long rsvd_22_29:8;
+ unsigned long mmr_write:1; /* WP */
+ unsigned long mmr_read:1; /* WP */
+ unsigned long mmr_op_done:1; /* RW */
+ unsigned long rsvd_33_47:15;
+ unsigned long undef_48:1; /* Undefined */
+ unsigned long rsvd_49_51:3;
+ unsigned long rsvd_52_58:7;
+ unsigned long page_size:5; /* RW */
+ } s4;
};
/* ========================================================================= */
@@ -1191,19 +1563,14 @@ union uvh_gr1_tlb_mmr_control_u {
#define UV1H_GR1_TLB_MMR_READ_DATA_HI 0x8010a0UL
#define UV2H_GR1_TLB_MMR_READ_DATA_HI 0x10010a0UL
#define UV3H_GR1_TLB_MMR_READ_DATA_HI 0x10010a0UL
-#define UVH_GR1_TLB_MMR_READ_DATA_HI \
- (is_uv1_hub() ? UV1H_GR1_TLB_MMR_READ_DATA_HI : \
- (is_uv2_hub() ? UV2H_GR1_TLB_MMR_READ_DATA_HI : \
- UV3H_GR1_TLB_MMR_READ_DATA_HI))
+#define UV4H_GR1_TLB_MMR_READ_DATA_HI 0x7010a0UL
+#define UVH_GR1_TLB_MMR_READ_DATA_HI ( \
+ is_uv1_hub() ? UV1H_GR1_TLB_MMR_READ_DATA_HI : \
+ is_uv2_hub() ? UV2H_GR1_TLB_MMR_READ_DATA_HI : \
+ is_uv3_hub() ? UV3H_GR1_TLB_MMR_READ_DATA_HI : \
+ /*is_uv4_hub*/ UV4H_GR1_TLB_MMR_READ_DATA_HI)
#define UVH_GR1_TLB_MMR_READ_DATA_HI_PFN_SHFT 0
-#define UVH_GR1_TLB_MMR_READ_DATA_HI_GAA_SHFT 41
-#define UVH_GR1_TLB_MMR_READ_DATA_HI_DIRTY_SHFT 43
-#define UVH_GR1_TLB_MMR_READ_DATA_HI_LARGER_SHFT 44
-#define UVH_GR1_TLB_MMR_READ_DATA_HI_PFN_MASK 0x000001ffffffffffUL
-#define UVH_GR1_TLB_MMR_READ_DATA_HI_GAA_MASK 0x0000060000000000UL
-#define UVH_GR1_TLB_MMR_READ_DATA_HI_DIRTY_MASK 0x0000080000000000UL
-#define UVH_GR1_TLB_MMR_READ_DATA_HI_LARGER_MASK 0x0000100000000000UL
#define UV1H_GR1_TLB_MMR_READ_DATA_HI_PFN_SHFT 0
#define UV1H_GR1_TLB_MMR_READ_DATA_HI_GAA_SHFT 41
@@ -1215,13 +1582,6 @@ union uvh_gr1_tlb_mmr_control_u {
#define UV1H_GR1_TLB_MMR_READ_DATA_HI_LARGER_MASK 0x0000100000000000UL
#define UVXH_GR1_TLB_MMR_READ_DATA_HI_PFN_SHFT 0
-#define UVXH_GR1_TLB_MMR_READ_DATA_HI_GAA_SHFT 41
-#define UVXH_GR1_TLB_MMR_READ_DATA_HI_DIRTY_SHFT 43
-#define UVXH_GR1_TLB_MMR_READ_DATA_HI_LARGER_SHFT 44
-#define UVXH_GR1_TLB_MMR_READ_DATA_HI_PFN_MASK 0x000001ffffffffffUL
-#define UVXH_GR1_TLB_MMR_READ_DATA_HI_GAA_MASK 0x0000060000000000UL
-#define UVXH_GR1_TLB_MMR_READ_DATA_HI_DIRTY_MASK 0x0000080000000000UL
-#define UVXH_GR1_TLB_MMR_READ_DATA_HI_LARGER_MASK 0x0000100000000000UL
#define UV2H_GR1_TLB_MMR_READ_DATA_HI_PFN_SHFT 0
#define UV2H_GR1_TLB_MMR_READ_DATA_HI_GAA_SHFT 41
@@ -1245,15 +1605,24 @@ union uvh_gr1_tlb_mmr_control_u {
#define UV3H_GR1_TLB_MMR_READ_DATA_HI_AA_EXT_MASK 0x0000200000000000UL
#define UV3H_GR1_TLB_MMR_READ_DATA_HI_WAY_ECC_MASK 0xff80000000000000UL
+#define UV4H_GR1_TLB_MMR_READ_DATA_HI_PFN_SHFT 0
+#define UV4H_GR1_TLB_MMR_READ_DATA_HI_PNID_SHFT 34
+#define UV4H_GR1_TLB_MMR_READ_DATA_HI_GAA_SHFT 49
+#define UV4H_GR1_TLB_MMR_READ_DATA_HI_DIRTY_SHFT 51
+#define UV4H_GR1_TLB_MMR_READ_DATA_HI_LARGER_SHFT 52
+#define UV4H_GR1_TLB_MMR_READ_DATA_HI_AA_EXT_SHFT 53
+#define UV4H_GR1_TLB_MMR_READ_DATA_HI_WAY_ECC_SHFT 55
+#define UV4H_GR1_TLB_MMR_READ_DATA_HI_PFN_MASK 0x00000003ffffffffUL
+#define UV4H_GR1_TLB_MMR_READ_DATA_HI_PNID_MASK 0x0001fffc00000000UL
+#define UV4H_GR1_TLB_MMR_READ_DATA_HI_GAA_MASK 0x0006000000000000UL
+#define UV4H_GR1_TLB_MMR_READ_DATA_HI_DIRTY_MASK 0x0008000000000000UL
+#define UV4H_GR1_TLB_MMR_READ_DATA_HI_LARGER_MASK 0x0010000000000000UL
+#define UV4H_GR1_TLB_MMR_READ_DATA_HI_AA_EXT_MASK 0x0020000000000000UL
+#define UV4H_GR1_TLB_MMR_READ_DATA_HI_WAY_ECC_MASK 0xff80000000000000UL
+
+
union uvh_gr1_tlb_mmr_read_data_hi_u {
unsigned long v;
- struct uvh_gr1_tlb_mmr_read_data_hi_s {
- unsigned long pfn:41; /* RO */
- unsigned long gaa:2; /* RO */
- unsigned long dirty:1; /* RO */
- unsigned long larger:1; /* RO */
- unsigned long rsvd_45_63:19;
- } s;
struct uv1h_gr1_tlb_mmr_read_data_hi_s {
unsigned long pfn:41; /* RO */
unsigned long gaa:2; /* RO */
@@ -1261,13 +1630,6 @@ union uvh_gr1_tlb_mmr_read_data_hi_u {
unsigned long larger:1; /* RO */
unsigned long rsvd_45_63:19;
} s1;
- struct uvxh_gr1_tlb_mmr_read_data_hi_s {
- unsigned long pfn:41; /* RO */
- unsigned long gaa:2; /* RO */
- unsigned long dirty:1; /* RO */
- unsigned long larger:1; /* RO */
- unsigned long rsvd_45_63:19;
- } sx;
struct uv2h_gr1_tlb_mmr_read_data_hi_s {
unsigned long pfn:41; /* RO */
unsigned long gaa:2; /* RO */
@@ -1284,6 +1646,16 @@ union uvh_gr1_tlb_mmr_read_data_hi_u {
unsigned long undef_46_54:9; /* Undefined */
unsigned long way_ecc:9; /* RO */
} s3;
+ struct uv4h_gr1_tlb_mmr_read_data_hi_s {
+ unsigned long pfn:34; /* RO */
+ unsigned long pnid:15; /* RO */
+ unsigned long gaa:2; /* RO */
+ unsigned long dirty:1; /* RO */
+ unsigned long larger:1; /* RO */
+ unsigned long aa_ext:1; /* RO */
+ unsigned long undef_54:1; /* Undefined */
+ unsigned long way_ecc:9; /* RO */
+ } s4;
};
/* ========================================================================= */
@@ -1292,10 +1664,12 @@ union uvh_gr1_tlb_mmr_read_data_hi_u {
#define UV1H_GR1_TLB_MMR_READ_DATA_LO 0x8010a8UL
#define UV2H_GR1_TLB_MMR_READ_DATA_LO 0x10010a8UL
#define UV3H_GR1_TLB_MMR_READ_DATA_LO 0x10010a8UL
-#define UVH_GR1_TLB_MMR_READ_DATA_LO \
- (is_uv1_hub() ? UV1H_GR1_TLB_MMR_READ_DATA_LO : \
- (is_uv2_hub() ? UV2H_GR1_TLB_MMR_READ_DATA_LO : \
- UV3H_GR1_TLB_MMR_READ_DATA_LO))
+#define UV4H_GR1_TLB_MMR_READ_DATA_LO 0x7010a8UL
+#define UVH_GR1_TLB_MMR_READ_DATA_LO ( \
+ is_uv1_hub() ? UV1H_GR1_TLB_MMR_READ_DATA_LO : \
+ is_uv2_hub() ? UV2H_GR1_TLB_MMR_READ_DATA_LO : \
+ is_uv3_hub() ? UV3H_GR1_TLB_MMR_READ_DATA_LO : \
+ /*is_uv4_hub*/ UV4H_GR1_TLB_MMR_READ_DATA_LO)
#define UVH_GR1_TLB_MMR_READ_DATA_LO_VPN_SHFT 0
#define UVH_GR1_TLB_MMR_READ_DATA_LO_ASID_SHFT 39
@@ -1332,6 +1706,14 @@ union uvh_gr1_tlb_mmr_read_data_hi_u {
#define UV3H_GR1_TLB_MMR_READ_DATA_LO_ASID_MASK 0x7fffff8000000000UL
#define UV3H_GR1_TLB_MMR_READ_DATA_LO_VALID_MASK 0x8000000000000000UL
+#define UV4H_GR1_TLB_MMR_READ_DATA_LO_VPN_SHFT 0
+#define UV4H_GR1_TLB_MMR_READ_DATA_LO_ASID_SHFT 39
+#define UV4H_GR1_TLB_MMR_READ_DATA_LO_VALID_SHFT 63
+#define UV4H_GR1_TLB_MMR_READ_DATA_LO_VPN_MASK 0x0000007fffffffffUL
+#define UV4H_GR1_TLB_MMR_READ_DATA_LO_ASID_MASK 0x7fffff8000000000UL
+#define UV4H_GR1_TLB_MMR_READ_DATA_LO_VALID_MASK 0x8000000000000000UL
+
+
union uvh_gr1_tlb_mmr_read_data_lo_u {
unsigned long v;
struct uvh_gr1_tlb_mmr_read_data_lo_s {
@@ -1359,6 +1741,11 @@ union uvh_gr1_tlb_mmr_read_data_lo_u {
unsigned long asid:24; /* RO */
unsigned long valid:1; /* RO */
} s3;
+ struct uv4h_gr1_tlb_mmr_read_data_lo_s {
+ unsigned long vpn:39; /* RO */
+ unsigned long asid:24; /* RO */
+ unsigned long valid:1; /* RO */
+ } s4;
};
/* ========================================================================= */
@@ -1369,6 +1756,7 @@ union uvh_gr1_tlb_mmr_read_data_lo_u {
#define UVH_INT_CMPB_REAL_TIME_CMPB_SHFT 0
#define UVH_INT_CMPB_REAL_TIME_CMPB_MASK 0x00ffffffffffffffUL
+
union uvh_int_cmpb_u {
unsigned long v;
struct uvh_int_cmpb_s {
@@ -1382,12 +1770,14 @@ union uvh_int_cmpb_u {
/* ========================================================================= */
#define UVH_INT_CMPC 0x22100UL
+
#define UV1H_INT_CMPC_REAL_TIME_CMPC_SHFT 0
#define UV1H_INT_CMPC_REAL_TIME_CMPC_MASK 0x00ffffffffffffffUL
#define UVXH_INT_CMPC_REAL_TIME_CMP_2_SHFT 0
#define UVXH_INT_CMPC_REAL_TIME_CMP_2_MASK 0x00ffffffffffffffUL
+
union uvh_int_cmpc_u {
unsigned long v;
struct uvh_int_cmpc_s {
@@ -1401,12 +1791,14 @@ union uvh_int_cmpc_u {
/* ========================================================================= */
#define UVH_INT_CMPD 0x22180UL
+
#define UV1H_INT_CMPD_REAL_TIME_CMPD_SHFT 0
#define UV1H_INT_CMPD_REAL_TIME_CMPD_MASK 0x00ffffffffffffffUL
#define UVXH_INT_CMPD_REAL_TIME_CMP_3_SHFT 0
#define UVXH_INT_CMPD_REAL_TIME_CMP_3_MASK 0x00ffffffffffffffUL
+
union uvh_int_cmpd_u {
unsigned long v;
struct uvh_int_cmpd_s {
@@ -1419,7 +1811,16 @@ union uvh_int_cmpd_u {
/* UVH_IPI_INT */
/* ========================================================================= */
#define UVH_IPI_INT 0x60500UL
-#define UVH_IPI_INT_32 0x348
+
+#define UV1H_IPI_INT_32 0x348
+#define UV2H_IPI_INT_32 0x348
+#define UV3H_IPI_INT_32 0x348
+#define UV4H_IPI_INT_32 0x268
+#define UVH_IPI_INT_32 ( \
+ is_uv1_hub() ? UV1H_IPI_INT_32 : \
+ is_uv2_hub() ? UV2H_IPI_INT_32 : \
+ is_uv3_hub() ? UV3H_IPI_INT_32 : \
+ /*is_uv4_hub*/ UV4H_IPI_INT_32)
#define UVH_IPI_INT_VECTOR_SHFT 0
#define UVH_IPI_INT_DELIVERY_MODE_SHFT 8
@@ -1432,6 +1833,7 @@ union uvh_int_cmpd_u {
#define UVH_IPI_INT_APIC_ID_MASK 0x0000ffffffff0000UL
#define UVH_IPI_INT_SEND_MASK 0x8000000000000000UL
+
union uvh_ipi_int_u {
unsigned long v;
struct uvh_ipi_int_s {
@@ -1448,103 +1850,269 @@ union uvh_ipi_int_u {
/* ========================================================================= */
/* UVH_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST */
/* ========================================================================= */
-#define UVH_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST 0x320050UL
+#define UV1H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST 0x320050UL
+#define UV2H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST 0x320050UL
+#define UV3H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST 0x320050UL
+#define UV4H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST uv_undefined("UV4H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST")
+#define UVH_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST ( \
+ is_uv1_hub() ? UV1H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST : \
+ is_uv2_hub() ? UV2H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST : \
+ is_uv3_hub() ? UV3H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST)
#define UVH_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST_32 0x9c0
-#define UVH_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST_ADDRESS_SHFT 4
-#define UVH_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST_NODE_ID_SHFT 49
-#define UVH_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST_ADDRESS_MASK 0x000007fffffffff0UL
-#define UVH_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST_NODE_ID_MASK 0x7ffe000000000000UL
+
+#define UV1H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST_ADDRESS_SHFT 4
+#define UV1H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST_NODE_ID_SHFT 49
+#define UV1H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST_ADDRESS_MASK 0x000007fffffffff0UL
+#define UV1H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST_NODE_ID_MASK 0x7ffe000000000000UL
+
+
+#define UV2H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST_ADDRESS_SHFT 4
+#define UV2H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST_NODE_ID_SHFT 49
+#define UV2H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST_ADDRESS_MASK 0x000007fffffffff0UL
+#define UV2H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST_NODE_ID_MASK 0x7ffe000000000000UL
+
+#define UV3H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST_ADDRESS_SHFT 4
+#define UV3H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST_NODE_ID_SHFT 49
+#define UV3H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST_ADDRESS_MASK 0x000007fffffffff0UL
+#define UV3H_LB_BAU_INTD_PAYLOAD_QUEUE_FIRST_NODE_ID_MASK 0x7ffe000000000000UL
+
union uvh_lb_bau_intd_payload_queue_first_u {
unsigned long v;
- struct uvh_lb_bau_intd_payload_queue_first_s {
+ struct uv1h_lb_bau_intd_payload_queue_first_s {
unsigned long rsvd_0_3:4;
unsigned long address:39; /* RW */
unsigned long rsvd_43_48:6;
unsigned long node_id:14; /* RW */
unsigned long rsvd_63:1;
- } s;
+ } s1;
+ struct uv2h_lb_bau_intd_payload_queue_first_s {
+ unsigned long rsvd_0_3:4;
+ unsigned long address:39; /* RW */
+ unsigned long rsvd_43_48:6;
+ unsigned long node_id:14; /* RW */
+ unsigned long rsvd_63:1;
+ } s2;
+ struct uv3h_lb_bau_intd_payload_queue_first_s {
+ unsigned long rsvd_0_3:4;
+ unsigned long address:39; /* RW */
+ unsigned long rsvd_43_48:6;
+ unsigned long node_id:14; /* RW */
+ unsigned long rsvd_63:1;
+ } s3;
};
/* ========================================================================= */
/* UVH_LB_BAU_INTD_PAYLOAD_QUEUE_LAST */
/* ========================================================================= */
-#define UVH_LB_BAU_INTD_PAYLOAD_QUEUE_LAST 0x320060UL
+#define UV1H_LB_BAU_INTD_PAYLOAD_QUEUE_LAST 0x320060UL
+#define UV2H_LB_BAU_INTD_PAYLOAD_QUEUE_LAST 0x320060UL
+#define UV3H_LB_BAU_INTD_PAYLOAD_QUEUE_LAST 0x320060UL
+#define UV4H_LB_BAU_INTD_PAYLOAD_QUEUE_LAST uv_undefined("UV4H_LB_BAU_INTD_PAYLOAD_QUEUE_LAST")
+#define UVH_LB_BAU_INTD_PAYLOAD_QUEUE_LAST ( \
+ is_uv1_hub() ? UV1H_LB_BAU_INTD_PAYLOAD_QUEUE_LAST : \
+ is_uv2_hub() ? UV2H_LB_BAU_INTD_PAYLOAD_QUEUE_LAST : \
+ is_uv3_hub() ? UV3H_LB_BAU_INTD_PAYLOAD_QUEUE_LAST : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_INTD_PAYLOAD_QUEUE_LAST)
#define UVH_LB_BAU_INTD_PAYLOAD_QUEUE_LAST_32 0x9c8
-#define UVH_LB_BAU_INTD_PAYLOAD_QUEUE_LAST_ADDRESS_SHFT 4
-#define UVH_LB_BAU_INTD_PAYLOAD_QUEUE_LAST_ADDRESS_MASK 0x000007fffffffff0UL
+
+#define UV1H_LB_BAU_INTD_PAYLOAD_QUEUE_LAST_ADDRESS_SHFT 4
+#define UV1H_LB_BAU_INTD_PAYLOAD_QUEUE_LAST_ADDRESS_MASK 0x000007fffffffff0UL
+
+
+#define UV2H_LB_BAU_INTD_PAYLOAD_QUEUE_LAST_ADDRESS_SHFT 4
+#define UV2H_LB_BAU_INTD_PAYLOAD_QUEUE_LAST_ADDRESS_MASK 0x000007fffffffff0UL
+
+#define UV3H_LB_BAU_INTD_PAYLOAD_QUEUE_LAST_ADDRESS_SHFT 4
+#define UV3H_LB_BAU_INTD_PAYLOAD_QUEUE_LAST_ADDRESS_MASK 0x000007fffffffff0UL
+
union uvh_lb_bau_intd_payload_queue_last_u {
unsigned long v;
- struct uvh_lb_bau_intd_payload_queue_last_s {
+ struct uv1h_lb_bau_intd_payload_queue_last_s {
unsigned long rsvd_0_3:4;
unsigned long address:39; /* RW */
unsigned long rsvd_43_63:21;
- } s;
+ } s1;
+ struct uv2h_lb_bau_intd_payload_queue_last_s {
+ unsigned long rsvd_0_3:4;
+ unsigned long address:39; /* RW */
+ unsigned long rsvd_43_63:21;
+ } s2;
+ struct uv3h_lb_bau_intd_payload_queue_last_s {
+ unsigned long rsvd_0_3:4;
+ unsigned long address:39; /* RW */
+ unsigned long rsvd_43_63:21;
+ } s3;
};
/* ========================================================================= */
/* UVH_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL */
/* ========================================================================= */
-#define UVH_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL 0x320070UL
+#define UV1H_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL 0x320070UL
+#define UV2H_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL 0x320070UL
+#define UV3H_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL 0x320070UL
+#define UV4H_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL uv_undefined("UV4H_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL")
+#define UVH_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL ( \
+ is_uv1_hub() ? UV1H_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL : \
+ is_uv2_hub() ? UV2H_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL : \
+ is_uv3_hub() ? UV3H_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL)
#define UVH_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL_32 0x9d0
-#define UVH_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL_ADDRESS_SHFT 4
-#define UVH_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL_ADDRESS_MASK 0x000007fffffffff0UL
+
+#define UV1H_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL_ADDRESS_SHFT 4
+#define UV1H_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL_ADDRESS_MASK 0x000007fffffffff0UL
+
+
+#define UV2H_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL_ADDRESS_SHFT 4
+#define UV2H_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL_ADDRESS_MASK 0x000007fffffffff0UL
+
+#define UV3H_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL_ADDRESS_SHFT 4
+#define UV3H_LB_BAU_INTD_PAYLOAD_QUEUE_TAIL_ADDRESS_MASK 0x000007fffffffff0UL
+
union uvh_lb_bau_intd_payload_queue_tail_u {
unsigned long v;
- struct uvh_lb_bau_intd_payload_queue_tail_s {
+ struct uv1h_lb_bau_intd_payload_queue_tail_s {
unsigned long rsvd_0_3:4;
unsigned long address:39; /* RW */
unsigned long rsvd_43_63:21;
- } s;
+ } s1;
+ struct uv2h_lb_bau_intd_payload_queue_tail_s {
+ unsigned long rsvd_0_3:4;
+ unsigned long address:39; /* RW */
+ unsigned long rsvd_43_63:21;
+ } s2;
+ struct uv3h_lb_bau_intd_payload_queue_tail_s {
+ unsigned long rsvd_0_3:4;
+ unsigned long address:39; /* RW */
+ unsigned long rsvd_43_63:21;
+ } s3;
};
/* ========================================================================= */
/* UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE */
/* ========================================================================= */
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE 0x320080UL
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE 0x320080UL
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE 0x320080UL
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE 0x320080UL
+#define UV4H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE uv_undefined("UV4H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE")
+#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE ( \
+ is_uv1_hub() ? UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE : \
+ is_uv2_hub() ? UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE : \
+ is_uv3_hub() ? UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE)
#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_32 0xa68
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_0_SHFT 0
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_1_SHFT 1
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_2_SHFT 2
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_3_SHFT 3
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_4_SHFT 4
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_5_SHFT 5
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_6_SHFT 6
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_7_SHFT 7
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_0_SHFT 8
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_1_SHFT 9
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_2_SHFT 10
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_3_SHFT 11
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_4_SHFT 12
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_5_SHFT 13
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_6_SHFT 14
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_7_SHFT 15
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_0_MASK 0x0000000000000001UL
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_1_MASK 0x0000000000000002UL
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_2_MASK 0x0000000000000004UL
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_3_MASK 0x0000000000000008UL
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_4_MASK 0x0000000000000010UL
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_5_MASK 0x0000000000000020UL
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_6_MASK 0x0000000000000040UL
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_7_MASK 0x0000000000000080UL
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_0_MASK 0x0000000000000100UL
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_1_MASK 0x0000000000000200UL
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_2_MASK 0x0000000000000400UL
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_3_MASK 0x0000000000000800UL
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_4_MASK 0x0000000000001000UL
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_5_MASK 0x0000000000002000UL
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_6_MASK 0x0000000000004000UL
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_7_MASK 0x0000000000008000UL
+
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_0_SHFT 0
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_1_SHFT 1
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_2_SHFT 2
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_3_SHFT 3
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_4_SHFT 4
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_5_SHFT 5
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_6_SHFT 6
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_7_SHFT 7
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_0_SHFT 8
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_1_SHFT 9
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_2_SHFT 10
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_3_SHFT 11
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_4_SHFT 12
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_5_SHFT 13
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_6_SHFT 14
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_7_SHFT 15
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_0_MASK 0x0000000000000001UL
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_1_MASK 0x0000000000000002UL
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_2_MASK 0x0000000000000004UL
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_3_MASK 0x0000000000000008UL
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_4_MASK 0x0000000000000010UL
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_5_MASK 0x0000000000000020UL
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_6_MASK 0x0000000000000040UL
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_7_MASK 0x0000000000000080UL
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_0_MASK 0x0000000000000100UL
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_1_MASK 0x0000000000000200UL
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_2_MASK 0x0000000000000400UL
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_3_MASK 0x0000000000000800UL
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_4_MASK 0x0000000000001000UL
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_5_MASK 0x0000000000002000UL
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_6_MASK 0x0000000000004000UL
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_7_MASK 0x0000000000008000UL
+
+
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_0_SHFT 0
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_1_SHFT 1
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_2_SHFT 2
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_3_SHFT 3
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_4_SHFT 4
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_5_SHFT 5
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_6_SHFT 6
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_7_SHFT 7
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_0_SHFT 8
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_1_SHFT 9
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_2_SHFT 10
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_3_SHFT 11
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_4_SHFT 12
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_5_SHFT 13
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_6_SHFT 14
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_7_SHFT 15
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_0_MASK 0x0000000000000001UL
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_1_MASK 0x0000000000000002UL
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_2_MASK 0x0000000000000004UL
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_3_MASK 0x0000000000000008UL
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_4_MASK 0x0000000000000010UL
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_5_MASK 0x0000000000000020UL
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_6_MASK 0x0000000000000040UL
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_7_MASK 0x0000000000000080UL
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_0_MASK 0x0000000000000100UL
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_1_MASK 0x0000000000000200UL
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_2_MASK 0x0000000000000400UL
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_3_MASK 0x0000000000000800UL
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_4_MASK 0x0000000000001000UL
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_5_MASK 0x0000000000002000UL
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_6_MASK 0x0000000000004000UL
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_7_MASK 0x0000000000008000UL
+
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_0_SHFT 0
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_1_SHFT 1
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_2_SHFT 2
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_3_SHFT 3
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_4_SHFT 4
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_5_SHFT 5
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_6_SHFT 6
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_7_SHFT 7
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_0_SHFT 8
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_1_SHFT 9
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_2_SHFT 10
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_3_SHFT 11
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_4_SHFT 12
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_5_SHFT 13
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_6_SHFT 14
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_7_SHFT 15
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_0_MASK 0x0000000000000001UL
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_1_MASK 0x0000000000000002UL
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_2_MASK 0x0000000000000004UL
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_3_MASK 0x0000000000000008UL
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_4_MASK 0x0000000000000010UL
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_5_MASK 0x0000000000000020UL
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_6_MASK 0x0000000000000040UL
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_PENDING_7_MASK 0x0000000000000080UL
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_0_MASK 0x0000000000000100UL
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_1_MASK 0x0000000000000200UL
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_2_MASK 0x0000000000000400UL
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_3_MASK 0x0000000000000800UL
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_4_MASK 0x0000000000001000UL
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_5_MASK 0x0000000000002000UL
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_6_MASK 0x0000000000004000UL
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_TIMEOUT_7_MASK 0x0000000000008000UL
+
union uvh_lb_bau_intd_software_acknowledge_u {
unsigned long v;
- struct uvh_lb_bau_intd_software_acknowledge_s {
+ struct uv1h_lb_bau_intd_software_acknowledge_s {
unsigned long pending_0:1; /* RW, W1C */
unsigned long pending_1:1; /* RW, W1C */
unsigned long pending_2:1; /* RW, W1C */
@@ -1562,27 +2130,84 @@ union uvh_lb_bau_intd_software_acknowledge_u {
unsigned long timeout_6:1; /* RW, W1C */
unsigned long timeout_7:1; /* RW, W1C */
unsigned long rsvd_16_63:48;
- } s;
+ } s1;
+ struct uv2h_lb_bau_intd_software_acknowledge_s {
+ unsigned long pending_0:1; /* RW */
+ unsigned long pending_1:1; /* RW */
+ unsigned long pending_2:1; /* RW */
+ unsigned long pending_3:1; /* RW */
+ unsigned long pending_4:1; /* RW */
+ unsigned long pending_5:1; /* RW */
+ unsigned long pending_6:1; /* RW */
+ unsigned long pending_7:1; /* RW */
+ unsigned long timeout_0:1; /* RW */
+ unsigned long timeout_1:1; /* RW */
+ unsigned long timeout_2:1; /* RW */
+ unsigned long timeout_3:1; /* RW */
+ unsigned long timeout_4:1; /* RW */
+ unsigned long timeout_5:1; /* RW */
+ unsigned long timeout_6:1; /* RW */
+ unsigned long timeout_7:1; /* RW */
+ unsigned long rsvd_16_63:48;
+ } s2;
+ struct uv3h_lb_bau_intd_software_acknowledge_s {
+ unsigned long pending_0:1; /* RW */
+ unsigned long pending_1:1; /* RW */
+ unsigned long pending_2:1; /* RW */
+ unsigned long pending_3:1; /* RW */
+ unsigned long pending_4:1; /* RW */
+ unsigned long pending_5:1; /* RW */
+ unsigned long pending_6:1; /* RW */
+ unsigned long pending_7:1; /* RW */
+ unsigned long timeout_0:1; /* RW */
+ unsigned long timeout_1:1; /* RW */
+ unsigned long timeout_2:1; /* RW */
+ unsigned long timeout_3:1; /* RW */
+ unsigned long timeout_4:1; /* RW */
+ unsigned long timeout_5:1; /* RW */
+ unsigned long timeout_6:1; /* RW */
+ unsigned long timeout_7:1; /* RW */
+ unsigned long rsvd_16_63:48;
+ } s3;
};
/* ========================================================================= */
/* UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_ALIAS */
/* ========================================================================= */
-#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_ALIAS 0x320088UL
+#define UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_ALIAS 0x320088UL
+#define UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_ALIAS 0x320088UL
+#define UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_ALIAS 0x320088UL
+#define UV4H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_ALIAS uv_undefined("UV4H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_ALIAS")
+#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_ALIAS ( \
+ is_uv1_hub() ? UV1H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_ALIAS : \
+ is_uv2_hub() ? UV2H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_ALIAS : \
+ is_uv3_hub() ? UV3H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_ALIAS : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_ALIAS)
#define UVH_LB_BAU_INTD_SOFTWARE_ACKNOWLEDGE_ALIAS_32 0xa70
/* ========================================================================= */
/* UVH_LB_BAU_MISC_CONTROL */
/* ========================================================================= */
-#define UVH_LB_BAU_MISC_CONTROL 0x320170UL
#define UV1H_LB_BAU_MISC_CONTROL 0x320170UL
#define UV2H_LB_BAU_MISC_CONTROL 0x320170UL
#define UV3H_LB_BAU_MISC_CONTROL 0x320170UL
-#define UVH_LB_BAU_MISC_CONTROL_32 0xa10
-#define UV1H_LB_BAU_MISC_CONTROL_32 0x320170UL
-#define UV2H_LB_BAU_MISC_CONTROL_32 0x320170UL
-#define UV3H_LB_BAU_MISC_CONTROL_32 0x320170UL
+#define UV4H_LB_BAU_MISC_CONTROL 0xc8170UL
+#define UVH_LB_BAU_MISC_CONTROL ( \
+ is_uv1_hub() ? UV1H_LB_BAU_MISC_CONTROL : \
+ is_uv2_hub() ? UV2H_LB_BAU_MISC_CONTROL : \
+ is_uv3_hub() ? UV3H_LB_BAU_MISC_CONTROL : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_MISC_CONTROL)
+
+#define UV1H_LB_BAU_MISC_CONTROL_32 0xa10
+#define UV2H_LB_BAU_MISC_CONTROL_32 0xa10
+#define UV3H_LB_BAU_MISC_CONTROL_32 0xa10
+#define UV4H_LB_BAU_MISC_CONTROL_32 0xa18
+#define UVH_LB_BAU_MISC_CONTROL_32 ( \
+ is_uv1_hub() ? UV1H_LB_BAU_MISC_CONTROL_32 : \
+ is_uv2_hub() ? UV2H_LB_BAU_MISC_CONTROL_32 : \
+ is_uv3_hub() ? UV3H_LB_BAU_MISC_CONTROL_32 : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_MISC_CONTROL_32)
#define UVH_LB_BAU_MISC_CONTROL_REJECTION_DELAY_SHFT 0
#define UVH_LB_BAU_MISC_CONTROL_APIC_MODE_SHFT 8
@@ -1590,8 +2215,6 @@ union uvh_lb_bau_intd_software_acknowledge_u {
#define UVH_LB_BAU_MISC_CONTROL_FORCE_LOCK_NOP_SHFT 10
#define UVH_LB_BAU_MISC_CONTROL_QPI_AGENT_PRESENCE_VECTOR_SHFT 11
#define UVH_LB_BAU_MISC_CONTROL_DESCRIPTOR_FETCH_MODE_SHFT 14
-#define UVH_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_SHFT 15
-#define UVH_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHFT 16
#define UVH_LB_BAU_MISC_CONTROL_ENABLE_DUAL_MAPPING_MODE_SHFT 20
#define UVH_LB_BAU_MISC_CONTROL_VGA_IO_PORT_DECODE_ENABLE_SHFT 21
#define UVH_LB_BAU_MISC_CONTROL_VGA_IO_PORT_16_BIT_DECODE_SHFT 22
@@ -1606,8 +2229,6 @@ union uvh_lb_bau_intd_software_acknowledge_u {
#define UVH_LB_BAU_MISC_CONTROL_FORCE_LOCK_NOP_MASK 0x0000000000000400UL
#define UVH_LB_BAU_MISC_CONTROL_QPI_AGENT_PRESENCE_VECTOR_MASK 0x0000000000003800UL
#define UVH_LB_BAU_MISC_CONTROL_DESCRIPTOR_FETCH_MODE_MASK 0x0000000000004000UL
-#define UVH_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_MASK 0x0000000000008000UL
-#define UVH_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_MASK 0x00000000000f0000UL
#define UVH_LB_BAU_MISC_CONTROL_ENABLE_DUAL_MAPPING_MODE_MASK 0x0000000000100000UL
#define UVH_LB_BAU_MISC_CONTROL_VGA_IO_PORT_DECODE_ENABLE_MASK 0x0000000000200000UL
#define UVH_LB_BAU_MISC_CONTROL_VGA_IO_PORT_16_BIT_DECODE_MASK 0x0000000000400000UL
@@ -1656,8 +2277,6 @@ union uvh_lb_bau_intd_software_acknowledge_u {
#define UVXH_LB_BAU_MISC_CONTROL_FORCE_LOCK_NOP_SHFT 10
#define UVXH_LB_BAU_MISC_CONTROL_QPI_AGENT_PRESENCE_VECTOR_SHFT 11
#define UVXH_LB_BAU_MISC_CONTROL_DESCRIPTOR_FETCH_MODE_SHFT 14
-#define UVXH_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_SHFT 15
-#define UVXH_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHFT 16
#define UVXH_LB_BAU_MISC_CONTROL_ENABLE_DUAL_MAPPING_MODE_SHFT 20
#define UVXH_LB_BAU_MISC_CONTROL_VGA_IO_PORT_DECODE_ENABLE_SHFT 21
#define UVXH_LB_BAU_MISC_CONTROL_VGA_IO_PORT_16_BIT_DECODE_SHFT 22
@@ -1679,8 +2298,6 @@ union uvh_lb_bau_intd_software_acknowledge_u {
#define UVXH_LB_BAU_MISC_CONTROL_FORCE_LOCK_NOP_MASK 0x0000000000000400UL
#define UVXH_LB_BAU_MISC_CONTROL_QPI_AGENT_PRESENCE_VECTOR_MASK 0x0000000000003800UL
#define UVXH_LB_BAU_MISC_CONTROL_DESCRIPTOR_FETCH_MODE_MASK 0x0000000000004000UL
-#define UVXH_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_MASK 0x0000000000008000UL
-#define UVXH_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_MASK 0x00000000000f0000UL
#define UVXH_LB_BAU_MISC_CONTROL_ENABLE_DUAL_MAPPING_MODE_MASK 0x0000000000100000UL
#define UVXH_LB_BAU_MISC_CONTROL_VGA_IO_PORT_DECODE_ENABLE_MASK 0x0000000000200000UL
#define UVXH_LB_BAU_MISC_CONTROL_VGA_IO_PORT_16_BIT_DECODE_MASK 0x0000000000400000UL
@@ -1797,6 +2414,88 @@ union uvh_lb_bau_intd_software_acknowledge_u {
#define UV3H_LB_BAU_MISC_CONTROL_THREAD_KILL_TIMEBASE_MASK 0x00003fc000000000UL
#define UV3H_LB_BAU_MISC_CONTROL_FUN_MASK 0xffff000000000000UL
+#define UV4H_LB_BAU_MISC_CONTROL_REJECTION_DELAY_SHFT 0
+#define UV4H_LB_BAU_MISC_CONTROL_APIC_MODE_SHFT 8
+#define UV4H_LB_BAU_MISC_CONTROL_FORCE_BROADCAST_SHFT 9
+#define UV4H_LB_BAU_MISC_CONTROL_FORCE_LOCK_NOP_SHFT 10
+#define UV4H_LB_BAU_MISC_CONTROL_QPI_AGENT_PRESENCE_VECTOR_SHFT 11
+#define UV4H_LB_BAU_MISC_CONTROL_DESCRIPTOR_FETCH_MODE_SHFT 14
+#define UV4H_LB_BAU_MISC_CONTROL_RESERVED_15_19_SHFT 15
+#define UV4H_LB_BAU_MISC_CONTROL_ENABLE_DUAL_MAPPING_MODE_SHFT 20
+#define UV4H_LB_BAU_MISC_CONTROL_VGA_IO_PORT_DECODE_ENABLE_SHFT 21
+#define UV4H_LB_BAU_MISC_CONTROL_VGA_IO_PORT_16_BIT_DECODE_SHFT 22
+#define UV4H_LB_BAU_MISC_CONTROL_SUPPRESS_DEST_REGISTRATION_SHFT 23
+#define UV4H_LB_BAU_MISC_CONTROL_PROGRAMMED_INITIAL_PRIORITY_SHFT 24
+#define UV4H_LB_BAU_MISC_CONTROL_USE_INCOMING_PRIORITY_SHFT 27
+#define UV4H_LB_BAU_MISC_CONTROL_ENABLE_PROGRAMMED_INITIAL_PRIORITY_SHFT 28
+#define UV4H_LB_BAU_MISC_CONTROL_ENABLE_AUTOMATIC_APIC_MODE_SELECTION_SHFT 29
+#define UV4H_LB_BAU_MISC_CONTROL_APIC_MODE_STATUS_SHFT 30
+#define UV4H_LB_BAU_MISC_CONTROL_SUPPRESS_INTERRUPTS_TO_SELF_SHFT 31
+#define UV4H_LB_BAU_MISC_CONTROL_ENABLE_LOCK_BASED_SYSTEM_FLUSH_SHFT 32
+#define UV4H_LB_BAU_MISC_CONTROL_ENABLE_EXTENDED_SB_STATUS_SHFT 33
+#define UV4H_LB_BAU_MISC_CONTROL_SUPPRESS_INT_PRIO_UDT_TO_SELF_SHFT 34
+#define UV4H_LB_BAU_MISC_CONTROL_USE_LEGACY_DESCRIPTOR_FORMATS_SHFT 35
+#define UV4H_LB_BAU_MISC_CONTROL_SUPPRESS_QUIESCE_MSGS_TO_QPI_SHFT 36
+#define UV4H_LB_BAU_MISC_CONTROL_RESERVED_37_SHFT 37
+#define UV4H_LB_BAU_MISC_CONTROL_THREAD_KILL_TIMEBASE_SHFT 38
+#define UV4H_LB_BAU_MISC_CONTROL_ADDRESS_INTERLEAVE_SELECT_SHFT 46
+#define UV4H_LB_BAU_MISC_CONTROL_FUN_SHFT 48
+#define UV4H_LB_BAU_MISC_CONTROL_REJECTION_DELAY_MASK 0x00000000000000ffUL
+#define UV4H_LB_BAU_MISC_CONTROL_APIC_MODE_MASK 0x0000000000000100UL
+#define UV4H_LB_BAU_MISC_CONTROL_FORCE_BROADCAST_MASK 0x0000000000000200UL
+#define UV4H_LB_BAU_MISC_CONTROL_FORCE_LOCK_NOP_MASK 0x0000000000000400UL
+#define UV4H_LB_BAU_MISC_CONTROL_QPI_AGENT_PRESENCE_VECTOR_MASK 0x0000000000003800UL
+#define UV4H_LB_BAU_MISC_CONTROL_DESCRIPTOR_FETCH_MODE_MASK 0x0000000000004000UL
+#define UV4H_LB_BAU_MISC_CONTROL_RESERVED_15_19_MASK 0x00000000000f8000UL
+#define UV4H_LB_BAU_MISC_CONTROL_ENABLE_DUAL_MAPPING_MODE_MASK 0x0000000000100000UL
+#define UV4H_LB_BAU_MISC_CONTROL_VGA_IO_PORT_DECODE_ENABLE_MASK 0x0000000000200000UL
+#define UV4H_LB_BAU_MISC_CONTROL_VGA_IO_PORT_16_BIT_DECODE_MASK 0x0000000000400000UL
+#define UV4H_LB_BAU_MISC_CONTROL_SUPPRESS_DEST_REGISTRATION_MASK 0x0000000000800000UL
+#define UV4H_LB_BAU_MISC_CONTROL_PROGRAMMED_INITIAL_PRIORITY_MASK 0x0000000007000000UL
+#define UV4H_LB_BAU_MISC_CONTROL_USE_INCOMING_PRIORITY_MASK 0x0000000008000000UL
+#define UV4H_LB_BAU_MISC_CONTROL_ENABLE_PROGRAMMED_INITIAL_PRIORITY_MASK 0x0000000010000000UL
+#define UV4H_LB_BAU_MISC_CONTROL_ENABLE_AUTOMATIC_APIC_MODE_SELECTION_MASK 0x0000000020000000UL
+#define UV4H_LB_BAU_MISC_CONTROL_APIC_MODE_STATUS_MASK 0x0000000040000000UL
+#define UV4H_LB_BAU_MISC_CONTROL_SUPPRESS_INTERRUPTS_TO_SELF_MASK 0x0000000080000000UL
+#define UV4H_LB_BAU_MISC_CONTROL_ENABLE_LOCK_BASED_SYSTEM_FLUSH_MASK 0x0000000100000000UL
+#define UV4H_LB_BAU_MISC_CONTROL_ENABLE_EXTENDED_SB_STATUS_MASK 0x0000000200000000UL
+#define UV4H_LB_BAU_MISC_CONTROL_SUPPRESS_INT_PRIO_UDT_TO_SELF_MASK 0x0000000400000000UL
+#define UV4H_LB_BAU_MISC_CONTROL_USE_LEGACY_DESCRIPTOR_FORMATS_MASK 0x0000000800000000UL
+#define UV4H_LB_BAU_MISC_CONTROL_SUPPRESS_QUIESCE_MSGS_TO_QPI_MASK 0x0000001000000000UL
+#define UV4H_LB_BAU_MISC_CONTROL_RESERVED_37_MASK 0x0000002000000000UL
+#define UV4H_LB_BAU_MISC_CONTROL_THREAD_KILL_TIMEBASE_MASK 0x00003fc000000000UL
+#define UV4H_LB_BAU_MISC_CONTROL_ADDRESS_INTERLEAVE_SELECT_MASK 0x0000400000000000UL
+#define UV4H_LB_BAU_MISC_CONTROL_FUN_MASK 0xffff000000000000UL
+
+#define UV4H_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_MASK \
+ uv_undefined("UV4H_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_MASK")
+#define UVH_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_MASK ( \
+ is_uv1_hub() ? UV1H_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_MASK : \
+ is_uv2_hub() ? UV2H_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_MASK : \
+ is_uv3_hub() ? UV3H_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_MASK : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_MASK)
+#define UV4H_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_SHFT \
+ uv_undefined("UV4H_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_SHFT")
+#define UVH_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_SHFT ( \
+ is_uv1_hub() ? UV1H_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_SHFT : \
+ is_uv2_hub() ? UV2H_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_SHFT : \
+ is_uv3_hub() ? UV3H_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_SHFT : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_SHFT)
+#define UV4H_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_MASK \
+ uv_undefined("UV4H_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_MASK")
+#define UVH_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_MASK ( \
+ is_uv1_hub() ? UV1H_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_MASK : \
+ is_uv2_hub() ? UV2H_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_MASK : \
+ is_uv3_hub() ? UV3H_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_MASK : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_MASK)
+#define UV4H_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHFT \
+ uv_undefined("UV4H_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHFT")
+#define UVH_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHFT ( \
+ is_uv1_hub() ? UV1H_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHFT : \
+ is_uv2_hub() ? UV2H_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHFT : \
+ is_uv3_hub() ? UV3H_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHFT : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_MISC_CONTROL_INTD_SOFT_ACK_TIMEOUT_PERIOD_SHFT)
+
union uvh_lb_bau_misc_control_u {
unsigned long v;
struct uvh_lb_bau_misc_control_s {
@@ -1806,8 +2505,7 @@ union uvh_lb_bau_misc_control_u {
unsigned long force_lock_nop:1; /* RW */
unsigned long qpi_agent_presence_vector:3; /* RW */
unsigned long descriptor_fetch_mode:1; /* RW */
- unsigned long enable_intd_soft_ack_mode:1; /* RW */
- unsigned long intd_soft_ack_timeout_period:4; /* RW */
+ unsigned long rsvd_15_19:5;
unsigned long enable_dual_mapping_mode:1; /* RW */
unsigned long vga_io_port_decode_enable:1; /* RW */
unsigned long vga_io_port_16_bit_decode:1; /* RW */
@@ -1844,8 +2542,7 @@ union uvh_lb_bau_misc_control_u {
unsigned long force_lock_nop:1; /* RW */
unsigned long qpi_agent_presence_vector:3; /* RW */
unsigned long descriptor_fetch_mode:1; /* RW */
- unsigned long enable_intd_soft_ack_mode:1; /* RW */
- unsigned long intd_soft_ack_timeout_period:4; /* RW */
+ unsigned long rsvd_15_19:5;
unsigned long enable_dual_mapping_mode:1; /* RW */
unsigned long vga_io_port_decode_enable:1; /* RW */
unsigned long vga_io_port_16_bit_decode:1; /* RW */
@@ -1918,13 +2615,59 @@ union uvh_lb_bau_misc_control_u {
unsigned long rsvd_46_47:2;
unsigned long fun:16; /* RW */
} s3;
+ struct uv4h_lb_bau_misc_control_s {
+ unsigned long rejection_delay:8; /* RW */
+ unsigned long apic_mode:1; /* RW */
+ unsigned long force_broadcast:1; /* RW */
+ unsigned long force_lock_nop:1; /* RW */
+ unsigned long qpi_agent_presence_vector:3; /* RW */
+ unsigned long descriptor_fetch_mode:1; /* RW */
+ unsigned long rsvd_15_19:5;
+ unsigned long enable_dual_mapping_mode:1; /* RW */
+ unsigned long vga_io_port_decode_enable:1; /* RW */
+ unsigned long vga_io_port_16_bit_decode:1; /* RW */
+ unsigned long suppress_dest_registration:1; /* RW */
+ unsigned long programmed_initial_priority:3; /* RW */
+ unsigned long use_incoming_priority:1; /* RW */
+ unsigned long enable_programmed_initial_priority:1;/* RW */
+ unsigned long enable_automatic_apic_mode_selection:1;/* RW */
+ unsigned long apic_mode_status:1; /* RO */
+ unsigned long suppress_interrupts_to_self:1; /* RW */
+ unsigned long enable_lock_based_system_flush:1;/* RW */
+ unsigned long enable_extended_sb_status:1; /* RW */
+ unsigned long suppress_int_prio_udt_to_self:1;/* RW */
+ unsigned long use_legacy_descriptor_formats:1;/* RW */
+ unsigned long suppress_quiesce_msgs_to_qpi:1; /* RW */
+ unsigned long rsvd_37:1;
+ unsigned long thread_kill_timebase:8; /* RW */
+ unsigned long address_interleave_select:1; /* RW */
+ unsigned long rsvd_47:1;
+ unsigned long fun:16; /* RW */
+ } s4;
};
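
/*
 * Illustrative sketch (hypothetical helper; uv_read_local_mmr(),
 * uv_write_local_mmr() and is_uv4_hub() are assumed from
 * <asm/uv/uv_hub.h>): bits 15..19 of MISC_CONTROL are reserved on UV4
 * and the ENABLE_INTD_SOFT_ACK_MODE selectors above resolve to
 * uv_undefined() there, so a caller is expected to skip UV4 before
 * touching the field.
 */
static void uv_enable_intd_soft_ack(void)
{
	unsigned long mmr;

	if (is_uv4_hub())	/* field does not exist on UV4 */
		return;

	mmr = uv_read_local_mmr(UVH_LB_BAU_MISC_CONTROL);
	mmr |= 1UL << UVH_LB_BAU_MISC_CONTROL_ENABLE_INTD_SOFT_ACK_MODE_SHFT;
	uv_write_local_mmr(UVH_LB_BAU_MISC_CONTROL, mmr);
}
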
/* ========================================================================= */
/* UVH_LB_BAU_SB_ACTIVATION_CONTROL */
/* ========================================================================= */
-#define UVH_LB_BAU_SB_ACTIVATION_CONTROL 0x320020UL
-#define UVH_LB_BAU_SB_ACTIVATION_CONTROL_32 0x9a8
+#define UV1H_LB_BAU_SB_ACTIVATION_CONTROL 0x320020UL
+#define UV2H_LB_BAU_SB_ACTIVATION_CONTROL 0x320020UL
+#define UV3H_LB_BAU_SB_ACTIVATION_CONTROL 0x320020UL
+#define UV4H_LB_BAU_SB_ACTIVATION_CONTROL 0xc8020UL
+#define UVH_LB_BAU_SB_ACTIVATION_CONTROL ( \
+ is_uv1_hub() ? UV1H_LB_BAU_SB_ACTIVATION_CONTROL : \
+ is_uv2_hub() ? UV2H_LB_BAU_SB_ACTIVATION_CONTROL : \
+ is_uv3_hub() ? UV3H_LB_BAU_SB_ACTIVATION_CONTROL : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_SB_ACTIVATION_CONTROL)
+
+#define UV1H_LB_BAU_SB_ACTIVATION_CONTROL_32 0x9a8
+#define UV2H_LB_BAU_SB_ACTIVATION_CONTROL_32 0x9a8
+#define UV3H_LB_BAU_SB_ACTIVATION_CONTROL_32 0x9a8
+#define UV4H_LB_BAU_SB_ACTIVATION_CONTROL_32 0x9c8
+#define UVH_LB_BAU_SB_ACTIVATION_CONTROL_32 ( \
+ is_uv1_hub() ? UV1H_LB_BAU_SB_ACTIVATION_CONTROL_32 : \
+ is_uv2_hub() ? UV2H_LB_BAU_SB_ACTIVATION_CONTROL_32 : \
+ is_uv3_hub() ? UV3H_LB_BAU_SB_ACTIVATION_CONTROL_32 : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_SB_ACTIVATION_CONTROL_32)
#define UVH_LB_BAU_SB_ACTIVATION_CONTROL_INDEX_SHFT 0
#define UVH_LB_BAU_SB_ACTIVATION_CONTROL_PUSH_SHFT 62
@@ -1933,6 +2676,7 @@ union uvh_lb_bau_misc_control_u {
#define UVH_LB_BAU_SB_ACTIVATION_CONTROL_PUSH_MASK 0x4000000000000000UL
#define UVH_LB_BAU_SB_ACTIVATION_CONTROL_INIT_MASK 0x8000000000000000UL
+
union uvh_lb_bau_sb_activation_control_u {
unsigned long v;
struct uvh_lb_bau_sb_activation_control_s {
@@ -1946,12 +2690,30 @@ union uvh_lb_bau_sb_activation_control_u {
/* ========================================================================= */
/* UVH_LB_BAU_SB_ACTIVATION_STATUS_0 */
/* ========================================================================= */
-#define UVH_LB_BAU_SB_ACTIVATION_STATUS_0 0x320030UL
-#define UVH_LB_BAU_SB_ACTIVATION_STATUS_0_32 0x9b0
+#define UV1H_LB_BAU_SB_ACTIVATION_STATUS_0 0x320030UL
+#define UV2H_LB_BAU_SB_ACTIVATION_STATUS_0 0x320030UL
+#define UV3H_LB_BAU_SB_ACTIVATION_STATUS_0 0x320030UL
+#define UV4H_LB_BAU_SB_ACTIVATION_STATUS_0 0xc8030UL
+#define UVH_LB_BAU_SB_ACTIVATION_STATUS_0 ( \
+ is_uv1_hub() ? UV1H_LB_BAU_SB_ACTIVATION_STATUS_0 : \
+ is_uv2_hub() ? UV2H_LB_BAU_SB_ACTIVATION_STATUS_0 : \
+ is_uv3_hub() ? UV3H_LB_BAU_SB_ACTIVATION_STATUS_0 : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_SB_ACTIVATION_STATUS_0)
+
+#define UV1H_LB_BAU_SB_ACTIVATION_STATUS_0_32 0x9b0
+#define UV2H_LB_BAU_SB_ACTIVATION_STATUS_0_32 0x9b0
+#define UV3H_LB_BAU_SB_ACTIVATION_STATUS_0_32 0x9b0
+#define UV4H_LB_BAU_SB_ACTIVATION_STATUS_0_32 0x9d0
+#define UVH_LB_BAU_SB_ACTIVATION_STATUS_0_32 ( \
+ is_uv1_hub() ? UV1H_LB_BAU_SB_ACTIVATION_STATUS_0_32 : \
+ is_uv2_hub() ? UV2H_LB_BAU_SB_ACTIVATION_STATUS_0_32 : \
+ is_uv3_hub() ? UV3H_LB_BAU_SB_ACTIVATION_STATUS_0_32 : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_SB_ACTIVATION_STATUS_0_32)
#define UVH_LB_BAU_SB_ACTIVATION_STATUS_0_STATUS_SHFT 0
#define UVH_LB_BAU_SB_ACTIVATION_STATUS_0_STATUS_MASK 0xffffffffffffffffUL
+
union uvh_lb_bau_sb_activation_status_0_u {
unsigned long v;
struct uvh_lb_bau_sb_activation_status_0_s {
@@ -1962,12 +2724,30 @@ union uvh_lb_bau_sb_activation_status_0_u {
/* ========================================================================= */
/* UVH_LB_BAU_SB_ACTIVATION_STATUS_1 */
/* ========================================================================= */
-#define UVH_LB_BAU_SB_ACTIVATION_STATUS_1 0x320040UL
-#define UVH_LB_BAU_SB_ACTIVATION_STATUS_1_32 0x9b8
+#define UV1H_LB_BAU_SB_ACTIVATION_STATUS_1 0x320040UL
+#define UV2H_LB_BAU_SB_ACTIVATION_STATUS_1 0x320040UL
+#define UV3H_LB_BAU_SB_ACTIVATION_STATUS_1 0x320040UL
+#define UV4H_LB_BAU_SB_ACTIVATION_STATUS_1 0xc8040UL
+#define UVH_LB_BAU_SB_ACTIVATION_STATUS_1 ( \
+ is_uv1_hub() ? UV1H_LB_BAU_SB_ACTIVATION_STATUS_1 : \
+ is_uv2_hub() ? UV2H_LB_BAU_SB_ACTIVATION_STATUS_1 : \
+ is_uv3_hub() ? UV3H_LB_BAU_SB_ACTIVATION_STATUS_1 : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_SB_ACTIVATION_STATUS_1)
+
+#define UV1H_LB_BAU_SB_ACTIVATION_STATUS_1_32 0x9b8
+#define UV2H_LB_BAU_SB_ACTIVATION_STATUS_1_32 0x9b8
+#define UV3H_LB_BAU_SB_ACTIVATION_STATUS_1_32 0x9b8
+#define UV4H_LB_BAU_SB_ACTIVATION_STATUS_1_32 0x9d8
+#define UVH_LB_BAU_SB_ACTIVATION_STATUS_1_32 ( \
+ is_uv1_hub() ? UV1H_LB_BAU_SB_ACTIVATION_STATUS_1_32 : \
+ is_uv2_hub() ? UV2H_LB_BAU_SB_ACTIVATION_STATUS_1_32 : \
+ is_uv3_hub() ? UV3H_LB_BAU_SB_ACTIVATION_STATUS_1_32 : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_SB_ACTIVATION_STATUS_1_32)
#define UVH_LB_BAU_SB_ACTIVATION_STATUS_1_STATUS_SHFT 0
#define UVH_LB_BAU_SB_ACTIVATION_STATUS_1_STATUS_MASK 0xffffffffffffffffUL
+
union uvh_lb_bau_sb_activation_status_1_u {
unsigned long v;
struct uvh_lb_bau_sb_activation_status_1_s {
@@ -1978,23 +2758,55 @@ union uvh_lb_bau_sb_activation_status_1_u {
/* ========================================================================= */
/* UVH_LB_BAU_SB_DESCRIPTOR_BASE */
/* ========================================================================= */
-#define UVH_LB_BAU_SB_DESCRIPTOR_BASE 0x320010UL
-#define UVH_LB_BAU_SB_DESCRIPTOR_BASE_32 0x9a0
+#define UV1H_LB_BAU_SB_DESCRIPTOR_BASE 0x320010UL
+#define UV2H_LB_BAU_SB_DESCRIPTOR_BASE 0x320010UL
+#define UV3H_LB_BAU_SB_DESCRIPTOR_BASE 0x320010UL
+#define UV4H_LB_BAU_SB_DESCRIPTOR_BASE 0xc8010UL
+#define UVH_LB_BAU_SB_DESCRIPTOR_BASE ( \
+ is_uv1_hub() ? UV1H_LB_BAU_SB_DESCRIPTOR_BASE : \
+ is_uv2_hub() ? UV2H_LB_BAU_SB_DESCRIPTOR_BASE : \
+ is_uv3_hub() ? UV3H_LB_BAU_SB_DESCRIPTOR_BASE : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_SB_DESCRIPTOR_BASE)
+
+#define UV1H_LB_BAU_SB_DESCRIPTOR_BASE_32 0x9a0
+#define UV2H_LB_BAU_SB_DESCRIPTOR_BASE_32 0x9a0
+#define UV3H_LB_BAU_SB_DESCRIPTOR_BASE_32 0x9a0
+#define UV4H_LB_BAU_SB_DESCRIPTOR_BASE_32 0x9c0
+#define UVH_LB_BAU_SB_DESCRIPTOR_BASE_32 ( \
+ is_uv1_hub() ? UV1H_LB_BAU_SB_DESCRIPTOR_BASE_32 : \
+ is_uv2_hub() ? UV2H_LB_BAU_SB_DESCRIPTOR_BASE_32 : \
+ is_uv3_hub() ? UV3H_LB_BAU_SB_DESCRIPTOR_BASE_32 : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_SB_DESCRIPTOR_BASE_32)
#define UVH_LB_BAU_SB_DESCRIPTOR_BASE_PAGE_ADDRESS_SHFT 12
#define UVH_LB_BAU_SB_DESCRIPTOR_BASE_NODE_ID_SHFT 49
-#define UVH_LB_BAU_SB_DESCRIPTOR_BASE_PAGE_ADDRESS_MASK 0x000007fffffff000UL
#define UVH_LB_BAU_SB_DESCRIPTOR_BASE_NODE_ID_MASK 0x7ffe000000000000UL
+#define UV1H_LB_BAU_SB_DESCRIPTOR_BASE_PAGE_ADDRESS_MASK 0x000007fffffff000UL
+
+
+#define UV2H_LB_BAU_SB_DESCRIPTOR_BASE_PAGE_ADDRESS_MASK 0x000007fffffff000UL
+
+#define UV3H_LB_BAU_SB_DESCRIPTOR_BASE_PAGE_ADDRESS_MASK 0x000007fffffff000UL
+
+#define UV4H_LB_BAU_SB_DESCRIPTOR_BASE_PAGE_ADDRESS_MASK 0x00003ffffffff000UL
+
+
union uvh_lb_bau_sb_descriptor_base_u {
unsigned long v;
struct uvh_lb_bau_sb_descriptor_base_s {
unsigned long rsvd_0_11:12;
- unsigned long page_address:31; /* RW */
- unsigned long rsvd_43_48:6;
+ unsigned long rsvd_12_48:37;
unsigned long node_id:14; /* RW */
unsigned long rsvd_63:1;
} s;
+ struct uv4h_lb_bau_sb_descriptor_base_s {
+ unsigned long rsvd_0_11:12;
+ unsigned long page_address:34; /* RW */
+ unsigned long rsvd_46_48:3;
+ unsigned long node_id:14; /* RW */
+ unsigned long rsvd_63:1;
+ } s4;
};
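
/*
 * Illustrative sketch (hypothetical helper; uv_read_local_mmr() and
 * is_uv4_hub() are assumed from <asm/uv/uv_hub.h>): UV4 widens
 * page_address to 34 bits, so the common 's' view above no longer
 * carries it and the per-hub PAGE_ADDRESS masks are used instead
 * (UV1..UV3 share the same mask).
 */
static unsigned long uv_bau_desc_base_pa(void)
{
	unsigned long v = uv_read_local_mmr(UVH_LB_BAU_SB_DESCRIPTOR_BASE);

	if (is_uv4_hub())
		return v & UV4H_LB_BAU_SB_DESCRIPTOR_BASE_PAGE_ADDRESS_MASK;
	return v & UV1H_LB_BAU_SB_DESCRIPTOR_BASE_PAGE_ADDRESS_MASK;
}
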
/* ========================================================================= */
@@ -2004,6 +2816,7 @@ union uvh_lb_bau_sb_descriptor_base_u {
#define UV1H_NODE_ID 0x0UL
#define UV2H_NODE_ID 0x0UL
#define UV3H_NODE_ID 0x0UL
+#define UV4H_NODE_ID 0x0UL
#define UVH_NODE_ID_FORCE1_SHFT 0
#define UVH_NODE_ID_MANUFACTURER_SHFT 1
@@ -2080,6 +2893,26 @@ union uvh_lb_bau_sb_descriptor_base_u {
#define UV3H_NODE_ID_NODES_PER_BIT_MASK 0x01fc000000000000UL
#define UV3H_NODE_ID_NI_PORT_MASK 0x3e00000000000000UL
+#define UV4H_NODE_ID_FORCE1_SHFT 0
+#define UV4H_NODE_ID_MANUFACTURER_SHFT 1
+#define UV4H_NODE_ID_PART_NUMBER_SHFT 12
+#define UV4H_NODE_ID_REVISION_SHFT 28
+#define UV4H_NODE_ID_NODE_ID_SHFT 32
+#define UV4H_NODE_ID_ROUTER_SELECT_SHFT 48
+#define UV4H_NODE_ID_RESERVED_2_SHFT 49
+#define UV4H_NODE_ID_NODES_PER_BIT_SHFT 50
+#define UV4H_NODE_ID_NI_PORT_SHFT 57
+#define UV4H_NODE_ID_FORCE1_MASK 0x0000000000000001UL
+#define UV4H_NODE_ID_MANUFACTURER_MASK 0x0000000000000ffeUL
+#define UV4H_NODE_ID_PART_NUMBER_MASK 0x000000000ffff000UL
+#define UV4H_NODE_ID_REVISION_MASK 0x00000000f0000000UL
+#define UV4H_NODE_ID_NODE_ID_MASK 0x00007fff00000000UL
+#define UV4H_NODE_ID_ROUTER_SELECT_MASK 0x0001000000000000UL
+#define UV4H_NODE_ID_RESERVED_2_MASK 0x0002000000000000UL
+#define UV4H_NODE_ID_NODES_PER_BIT_MASK 0x01fc000000000000UL
+#define UV4H_NODE_ID_NI_PORT_MASK 0x3e00000000000000UL
+
+
union uvh_node_id_u {
unsigned long v;
struct uvh_node_id_s {
@@ -2137,17 +2970,40 @@ union uvh_node_id_u {
unsigned long ni_port:5; /* RO */
unsigned long rsvd_62_63:2;
} s3;
+ struct uv4h_node_id_s {
+ unsigned long force1:1; /* RO */
+ unsigned long manufacturer:11; /* RO */
+ unsigned long part_number:16; /* RO */
+ unsigned long revision:4; /* RO */
+ unsigned long node_id:15; /* RW */
+ unsigned long rsvd_47:1;
+ unsigned long router_select:1; /* RO */
+ unsigned long rsvd_49:1;
+ unsigned long nodes_per_bit:7; /* RO */
+ unsigned long ni_port:5; /* RO */
+ unsigned long rsvd_62_63:2;
+ } s4;
};
/* ========================================================================= */
/* UVH_NODE_PRESENT_TABLE */
/* ========================================================================= */
#define UVH_NODE_PRESENT_TABLE 0x1400UL
-#define UVH_NODE_PRESENT_TABLE_DEPTH 16
+
+#define UV1H_NODE_PRESENT_TABLE_DEPTH 16
+#define UV2H_NODE_PRESENT_TABLE_DEPTH 16
+#define UV3H_NODE_PRESENT_TABLE_DEPTH 16
+#define UV4H_NODE_PRESENT_TABLE_DEPTH 4
+#define UVH_NODE_PRESENT_TABLE_DEPTH ( \
+ is_uv1_hub() ? UV1H_NODE_PRESENT_TABLE_DEPTH : \
+ is_uv2_hub() ? UV2H_NODE_PRESENT_TABLE_DEPTH : \
+ is_uv3_hub() ? UV3H_NODE_PRESENT_TABLE_DEPTH : \
+ /*is_uv4_hub*/ UV4H_NODE_PRESENT_TABLE_DEPTH)
#define UVH_NODE_PRESENT_TABLE_NODES_SHFT 0
#define UVH_NODE_PRESENT_TABLE_NODES_MASK 0xffffffffffffffffUL
+
union uvh_node_present_table_u {
unsigned long v;
struct uvh_node_present_table_s {
@@ -2158,7 +3014,15 @@ union uvh_node_present_table_u {
/* ========================================================================= */
/* UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_0_MMR */
/* ========================================================================= */
-#define UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_0_MMR 0x16000c8UL
+#define UV1H_RH_GAM_ALIAS210_OVERLAY_CONFIG_0_MMR 0x16000c8UL
+#define UV2H_RH_GAM_ALIAS210_OVERLAY_CONFIG_0_MMR 0x16000c8UL
+#define UV3H_RH_GAM_ALIAS210_OVERLAY_CONFIG_0_MMR 0x16000c8UL
+#define UV4H_RH_GAM_ALIAS210_OVERLAY_CONFIG_0_MMR 0x4800c8UL
+#define UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_0_MMR ( \
+ is_uv1_hub() ? UV1H_RH_GAM_ALIAS210_OVERLAY_CONFIG_0_MMR : \
+ is_uv2_hub() ? UV2H_RH_GAM_ALIAS210_OVERLAY_CONFIG_0_MMR : \
+ is_uv3_hub() ? UV3H_RH_GAM_ALIAS210_OVERLAY_CONFIG_0_MMR : \
+ /*is_uv4_hub*/ UV4H_RH_GAM_ALIAS210_OVERLAY_CONFIG_0_MMR)
#define UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_0_MMR_BASE_SHFT 24
#define UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_0_MMR_M_ALIAS_SHFT 48
@@ -2167,6 +3031,7 @@ union uvh_node_present_table_u {
#define UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_0_MMR_M_ALIAS_MASK 0x001f000000000000UL
#define UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_0_MMR_ENABLE_MASK 0x8000000000000000UL
+
union uvh_rh_gam_alias210_overlay_config_0_mmr_u {
unsigned long v;
struct uvh_rh_gam_alias210_overlay_config_0_mmr_s {
@@ -2182,7 +3047,15 @@ union uvh_rh_gam_alias210_overlay_config_0_mmr_u {
/* ========================================================================= */
/* UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_1_MMR */
/* ========================================================================= */
-#define UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_1_MMR 0x16000d8UL
+#define UV1H_RH_GAM_ALIAS210_OVERLAY_CONFIG_1_MMR 0x16000d8UL
+#define UV2H_RH_GAM_ALIAS210_OVERLAY_CONFIG_1_MMR 0x16000d8UL
+#define UV3H_RH_GAM_ALIAS210_OVERLAY_CONFIG_1_MMR 0x16000d8UL
+#define UV4H_RH_GAM_ALIAS210_OVERLAY_CONFIG_1_MMR 0x4800d8UL
+#define UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_1_MMR ( \
+ is_uv1_hub() ? UV1H_RH_GAM_ALIAS210_OVERLAY_CONFIG_1_MMR : \
+ is_uv2_hub() ? UV2H_RH_GAM_ALIAS210_OVERLAY_CONFIG_1_MMR : \
+ is_uv3_hub() ? UV3H_RH_GAM_ALIAS210_OVERLAY_CONFIG_1_MMR : \
+ /*is_uv4_hub*/ UV4H_RH_GAM_ALIAS210_OVERLAY_CONFIG_1_MMR)
#define UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_1_MMR_BASE_SHFT 24
#define UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_1_MMR_M_ALIAS_SHFT 48
@@ -2191,6 +3064,7 @@ union uvh_rh_gam_alias210_overlay_config_0_mmr_u {
#define UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_1_MMR_M_ALIAS_MASK 0x001f000000000000UL
#define UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_1_MMR_ENABLE_MASK 0x8000000000000000UL
+
union uvh_rh_gam_alias210_overlay_config_1_mmr_u {
unsigned long v;
struct uvh_rh_gam_alias210_overlay_config_1_mmr_s {
@@ -2206,7 +3080,15 @@ union uvh_rh_gam_alias210_overlay_config_1_mmr_u {
/* ========================================================================= */
/* UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_2_MMR */
/* ========================================================================= */
-#define UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_2_MMR 0x16000e8UL
+#define UV1H_RH_GAM_ALIAS210_OVERLAY_CONFIG_2_MMR 0x16000e8UL
+#define UV2H_RH_GAM_ALIAS210_OVERLAY_CONFIG_2_MMR 0x16000e8UL
+#define UV3H_RH_GAM_ALIAS210_OVERLAY_CONFIG_2_MMR 0x16000e8UL
+#define UV4H_RH_GAM_ALIAS210_OVERLAY_CONFIG_2_MMR 0x4800e8UL
+#define UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_2_MMR ( \
+ is_uv1_hub() ? UV1H_RH_GAM_ALIAS210_OVERLAY_CONFIG_2_MMR : \
+ is_uv2_hub() ? UV2H_RH_GAM_ALIAS210_OVERLAY_CONFIG_2_MMR : \
+ is_uv3_hub() ? UV3H_RH_GAM_ALIAS210_OVERLAY_CONFIG_2_MMR : \
+ /*is_uv4_hub*/ UV4H_RH_GAM_ALIAS210_OVERLAY_CONFIG_2_MMR)
#define UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_2_MMR_BASE_SHFT 24
#define UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_2_MMR_M_ALIAS_SHFT 48
@@ -2215,6 +3097,7 @@ union uvh_rh_gam_alias210_overlay_config_1_mmr_u {
#define UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_2_MMR_M_ALIAS_MASK 0x001f000000000000UL
#define UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_2_MMR_ENABLE_MASK 0x8000000000000000UL
+
union uvh_rh_gam_alias210_overlay_config_2_mmr_u {
unsigned long v;
struct uvh_rh_gam_alias210_overlay_config_2_mmr_s {
@@ -2230,11 +3113,20 @@ union uvh_rh_gam_alias210_overlay_config_2_mmr_u {
/* ========================================================================= */
/* UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_0_MMR */
/* ========================================================================= */
-#define UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_0_MMR 0x16000d0UL
+#define UV1H_RH_GAM_ALIAS210_REDIRECT_CONFIG_0_MMR 0x16000d0UL
+#define UV2H_RH_GAM_ALIAS210_REDIRECT_CONFIG_0_MMR 0x16000d0UL
+#define UV3H_RH_GAM_ALIAS210_REDIRECT_CONFIG_0_MMR 0x16000d0UL
+#define UV4H_RH_GAM_ALIAS210_REDIRECT_CONFIG_0_MMR 0x4800d0UL
+#define UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_0_MMR ( \
+ is_uv1_hub() ? UV1H_RH_GAM_ALIAS210_REDIRECT_CONFIG_0_MMR : \
+ is_uv2_hub() ? UV2H_RH_GAM_ALIAS210_REDIRECT_CONFIG_0_MMR : \
+ is_uv3_hub() ? UV3H_RH_GAM_ALIAS210_REDIRECT_CONFIG_0_MMR : \
+ /*is_uv4_hub*/ UV4H_RH_GAM_ALIAS210_REDIRECT_CONFIG_0_MMR)
#define UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_0_MMR_DEST_BASE_SHFT 24
#define UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_0_MMR_DEST_BASE_MASK 0x00003fffff000000UL
+
union uvh_rh_gam_alias210_redirect_config_0_mmr_u {
unsigned long v;
struct uvh_rh_gam_alias210_redirect_config_0_mmr_s {
@@ -2247,11 +3139,20 @@ union uvh_rh_gam_alias210_redirect_config_0_mmr_u {
/* ========================================================================= */
/* UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_1_MMR */
/* ========================================================================= */
-#define UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_1_MMR 0x16000e0UL
+#define UV1H_RH_GAM_ALIAS210_REDIRECT_CONFIG_1_MMR 0x16000e0UL
+#define UV2H_RH_GAM_ALIAS210_REDIRECT_CONFIG_1_MMR 0x16000e0UL
+#define UV3H_RH_GAM_ALIAS210_REDIRECT_CONFIG_1_MMR 0x16000e0UL
+#define UV4H_RH_GAM_ALIAS210_REDIRECT_CONFIG_1_MMR 0x4800e0UL
+#define UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_1_MMR ( \
+ is_uv1_hub() ? UV1H_RH_GAM_ALIAS210_REDIRECT_CONFIG_1_MMR : \
+ is_uv2_hub() ? UV2H_RH_GAM_ALIAS210_REDIRECT_CONFIG_1_MMR : \
+ is_uv3_hub() ? UV3H_RH_GAM_ALIAS210_REDIRECT_CONFIG_1_MMR : \
+ /*is_uv4_hub*/ UV4H_RH_GAM_ALIAS210_REDIRECT_CONFIG_1_MMR)
#define UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_1_MMR_DEST_BASE_SHFT 24
#define UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_1_MMR_DEST_BASE_MASK 0x00003fffff000000UL
+
union uvh_rh_gam_alias210_redirect_config_1_mmr_u {
unsigned long v;
struct uvh_rh_gam_alias210_redirect_config_1_mmr_s {
@@ -2264,11 +3165,20 @@ union uvh_rh_gam_alias210_redirect_config_1_mmr_u {
/* ========================================================================= */
/* UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_2_MMR */
/* ========================================================================= */
-#define UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_2_MMR 0x16000f0UL
+#define UV1H_RH_GAM_ALIAS210_REDIRECT_CONFIG_2_MMR 0x16000f0UL
+#define UV2H_RH_GAM_ALIAS210_REDIRECT_CONFIG_2_MMR 0x16000f0UL
+#define UV3H_RH_GAM_ALIAS210_REDIRECT_CONFIG_2_MMR 0x16000f0UL
+#define UV4H_RH_GAM_ALIAS210_REDIRECT_CONFIG_2_MMR 0x4800f0UL
+#define UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_2_MMR ( \
+ is_uv1_hub() ? UV1H_RH_GAM_ALIAS210_REDIRECT_CONFIG_2_MMR : \
+ is_uv2_hub() ? UV2H_RH_GAM_ALIAS210_REDIRECT_CONFIG_2_MMR : \
+ is_uv3_hub() ? UV3H_RH_GAM_ALIAS210_REDIRECT_CONFIG_2_MMR : \
+ /*is_uv4_hub*/ UV4H_RH_GAM_ALIAS210_REDIRECT_CONFIG_2_MMR)
#define UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_2_MMR_DEST_BASE_SHFT 24
#define UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_2_MMR_DEST_BASE_MASK 0x00003fffff000000UL
+
union uvh_rh_gam_alias210_redirect_config_2_mmr_u {
unsigned long v;
struct uvh_rh_gam_alias210_redirect_config_2_mmr_s {
@@ -2281,14 +3191,17 @@ union uvh_rh_gam_alias210_redirect_config_2_mmr_u {
/* ========================================================================= */
/* UVH_RH_GAM_CONFIG_MMR */
/* ========================================================================= */
-#define UVH_RH_GAM_CONFIG_MMR 0x1600000UL
#define UV1H_RH_GAM_CONFIG_MMR 0x1600000UL
#define UV2H_RH_GAM_CONFIG_MMR 0x1600000UL
#define UV3H_RH_GAM_CONFIG_MMR 0x1600000UL
+#define UV4H_RH_GAM_CONFIG_MMR 0x480000UL
+#define UVH_RH_GAM_CONFIG_MMR ( \
+ is_uv1_hub() ? UV1H_RH_GAM_CONFIG_MMR : \
+ is_uv2_hub() ? UV2H_RH_GAM_CONFIG_MMR : \
+ is_uv3_hub() ? UV3H_RH_GAM_CONFIG_MMR : \
+ /*is_uv4_hub*/ UV4H_RH_GAM_CONFIG_MMR)
-#define UVH_RH_GAM_CONFIG_MMR_M_SKT_SHFT 0
#define UVH_RH_GAM_CONFIG_MMR_N_SKT_SHFT 6
-#define UVH_RH_GAM_CONFIG_MMR_M_SKT_MASK 0x000000000000003fUL
#define UVH_RH_GAM_CONFIG_MMR_N_SKT_MASK 0x00000000000003c0UL
#define UV1H_RH_GAM_CONFIG_MMR_M_SKT_SHFT 0
@@ -2298,9 +3211,7 @@ union uvh_rh_gam_alias210_redirect_config_2_mmr_u {
#define UV1H_RH_GAM_CONFIG_MMR_N_SKT_MASK 0x00000000000003c0UL
#define UV1H_RH_GAM_CONFIG_MMR_MMIOL_CFG_MASK 0x0000000000001000UL
-#define UVXH_RH_GAM_CONFIG_MMR_M_SKT_SHFT 0
#define UVXH_RH_GAM_CONFIG_MMR_N_SKT_SHFT 6
-#define UVXH_RH_GAM_CONFIG_MMR_M_SKT_MASK 0x000000000000003fUL
#define UVXH_RH_GAM_CONFIG_MMR_N_SKT_MASK 0x00000000000003c0UL
#define UV2H_RH_GAM_CONFIG_MMR_M_SKT_SHFT 0
@@ -2313,10 +3224,14 @@ union uvh_rh_gam_alias210_redirect_config_2_mmr_u {
#define UV3H_RH_GAM_CONFIG_MMR_M_SKT_MASK 0x000000000000003fUL
#define UV3H_RH_GAM_CONFIG_MMR_N_SKT_MASK 0x00000000000003c0UL
+#define UV4H_RH_GAM_CONFIG_MMR_N_SKT_SHFT 6
+#define UV4H_RH_GAM_CONFIG_MMR_N_SKT_MASK 0x00000000000003c0UL
+
+
union uvh_rh_gam_config_mmr_u {
unsigned long v;
struct uvh_rh_gam_config_mmr_s {
- unsigned long m_skt:6; /* RW */
+ unsigned long rsvd_0_5:6;
unsigned long n_skt:4; /* RW */
unsigned long rsvd_10_63:54;
} s;
@@ -2328,7 +3243,7 @@ union uvh_rh_gam_config_mmr_u {
unsigned long rsvd_13_63:51;
} s1;
struct uvxh_rh_gam_config_mmr_s {
- unsigned long m_skt:6; /* RW */
+ unsigned long rsvd_0_5:6;
unsigned long n_skt:4; /* RW */
unsigned long rsvd_10_63:54;
} sx;
@@ -2342,20 +3257,28 @@ union uvh_rh_gam_config_mmr_u {
unsigned long n_skt:4; /* RW */
unsigned long rsvd_10_63:54;
} s3;
+ struct uv4h_rh_gam_config_mmr_s {
+ unsigned long rsvd_0_5:6;
+ unsigned long n_skt:4; /* RW */
+ unsigned long rsvd_10_63:54;
+ } s4;
};
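
/*
 * Illustrative sketch (hypothetical helper; uv_read_local_mmr() is
 * assumed from <asm/uv/uv_hub.h>): n_skt stays common to every hub, so
 * the generic 's' view still works, while m_skt is now reserved in the
 * common layout and must be read through a hub-specific view
 * (s1/s2/s3) where it still exists.
 */
static int uv_gam_n_skt(void)
{
	union uvh_rh_gam_config_mmr_u cfg;

	cfg.v = uv_read_local_mmr(UVH_RH_GAM_CONFIG_MMR);
	return cfg.s.n_skt;
}
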
/* ========================================================================= */
/* UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR */
/* ========================================================================= */
-#define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR 0x1600010UL
#define UV1H_RH_GAM_GRU_OVERLAY_CONFIG_MMR 0x1600010UL
#define UV2H_RH_GAM_GRU_OVERLAY_CONFIG_MMR 0x1600010UL
#define UV3H_RH_GAM_GRU_OVERLAY_CONFIG_MMR 0x1600010UL
+#define UV4H_RH_GAM_GRU_OVERLAY_CONFIG_MMR 0x480010UL
+#define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR ( \
+ is_uv1_hub() ? UV1H_RH_GAM_GRU_OVERLAY_CONFIG_MMR : \
+ is_uv2_hub() ? UV2H_RH_GAM_GRU_OVERLAY_CONFIG_MMR : \
+ is_uv3_hub() ? UV3H_RH_GAM_GRU_OVERLAY_CONFIG_MMR : \
+ /*is_uv4_hub*/ UV4H_RH_GAM_GRU_OVERLAY_CONFIG_MMR)
-#define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_SHFT 28
#define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_N_GRU_SHFT 52
#define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_ENABLE_SHFT 63
-#define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_MASK 0x00003ffff0000000UL
#define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_N_GRU_MASK 0x00f0000000000000UL
#define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_ENABLE_MASK 0x8000000000000000UL
@@ -2368,10 +3291,8 @@ union uvh_rh_gam_config_mmr_u {
#define UV1H_RH_GAM_GRU_OVERLAY_CONFIG_MMR_N_GRU_MASK 0x00f0000000000000UL
#define UV1H_RH_GAM_GRU_OVERLAY_CONFIG_MMR_ENABLE_MASK 0x8000000000000000UL
-#define UVXH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_SHFT 28
#define UVXH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_N_GRU_SHFT 52
#define UVXH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_ENABLE_SHFT 63
-#define UVXH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_MASK 0x00003ffff0000000UL
#define UVXH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_N_GRU_MASK 0x00f0000000000000UL
#define UVXH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_ENABLE_MASK 0x8000000000000000UL
@@ -2391,12 +3312,28 @@ union uvh_rh_gam_config_mmr_u {
#define UV3H_RH_GAM_GRU_OVERLAY_CONFIG_MMR_MODE_MASK 0x4000000000000000UL
#define UV3H_RH_GAM_GRU_OVERLAY_CONFIG_MMR_ENABLE_MASK 0x8000000000000000UL
+#define UV4H_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_SHFT 26
+#define UV4H_RH_GAM_GRU_OVERLAY_CONFIG_MMR_N_GRU_SHFT 52
+#define UV4H_RH_GAM_GRU_OVERLAY_CONFIG_MMR_ENABLE_SHFT 63
+#define UV4H_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_MASK 0x00003ffffc000000UL
+#define UV4H_RH_GAM_GRU_OVERLAY_CONFIG_MMR_N_GRU_MASK 0x00f0000000000000UL
+#define UV4H_RH_GAM_GRU_OVERLAY_CONFIG_MMR_ENABLE_MASK 0x8000000000000000UL
+
+#define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_MASK ( \
+ is_uv1_hub() ? UV1H_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_MASK : \
+ is_uv2_hub() ? UV2H_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_MASK : \
+ is_uv3_hub() ? UV3H_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_MASK : \
+ /*is_uv4_hub*/ UV4H_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_MASK)
+#define UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_SHFT ( \
+ is_uv1_hub() ? UV1H_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_SHFT : \
+ is_uv2_hub() ? UV2H_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_SHFT : \
+ is_uv3_hub() ? UV3H_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_SHFT : \
+ /*is_uv4_hub*/ UV4H_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_SHFT)
+
union uvh_rh_gam_gru_overlay_config_mmr_u {
unsigned long v;
struct uvh_rh_gam_gru_overlay_config_mmr_s {
- unsigned long rsvd_0_27:28;
- unsigned long base:18; /* RW */
- unsigned long rsvd_46_51:6;
+ unsigned long rsvd_0_51:52;
unsigned long n_gru:4; /* RW */
unsigned long rsvd_56_62:7;
unsigned long enable:1; /* RW */
@@ -2412,8 +3349,7 @@ union uvh_rh_gam_gru_overlay_config_mmr_u {
unsigned long enable:1; /* RW */
} s1;
struct uvxh_rh_gam_gru_overlay_config_mmr_s {
- unsigned long rsvd_0_27:28;
- unsigned long base:18; /* RW */
+ unsigned long rsvd_0_45:46;
unsigned long rsvd_46_51:6;
unsigned long n_gru:4; /* RW */
unsigned long rsvd_56_62:7;
@@ -2436,6 +3372,15 @@ union uvh_rh_gam_gru_overlay_config_mmr_u {
unsigned long mode:1; /* RW */
unsigned long enable:1; /* RW */
} s3;
+ struct uv4h_rh_gam_gru_overlay_config_mmr_s {
+ unsigned long rsvd_0_24:25;
+ unsigned long undef_25:1; /* Undefined */
+ unsigned long base:20; /* RW */
+ unsigned long rsvd_46_51:6;
+ unsigned long n_gru:4; /* RW */
+ unsigned long rsvd_56_62:7;
+ unsigned long enable:1; /* RW */
+ } s4;
};
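
/*
 * Illustrative sketch (hypothetical helper; uv_read_local_mmr() is
 * assumed from <asm/uv/uv_hub.h>): the GRU overlay base field moved to
 * bit 26 (20 bits) on UV4, so the run-time BASE_SHFT/BASE_MASK
 * selectors above replace the old fixed UVH_ constants.  The return is
 * the raw base field; the physical base is (base << BASE_SHFT), i.e.
 * v & BASE_MASK.
 */
static unsigned long uv_gru_overlay_base(void)
{
	union uvh_rh_gam_gru_overlay_config_mmr_u gru;

	gru.v = uv_read_local_mmr(UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR);
	if (!gru.s.enable)
		return 0;
	return (gru.v & UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_MASK) >>
		UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_SHFT;
}
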
/* ========================================================================= */
@@ -2443,6 +3388,14 @@ union uvh_rh_gam_gru_overlay_config_mmr_u {
/* ========================================================================= */
#define UV1H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR 0x1600030UL
#define UV2H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR 0x1600030UL
+#define UV3H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR uv_undefined("UV3H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR")
+#define UV4H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR uv_undefined("UV4H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR")
+#define UVH_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR ( \
+ is_uv1_hub() ? UV1H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR : \
+ is_uv2_hub() ? UV2H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR : \
+ is_uv3_hub() ? UV3H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR : \
+ /*is_uv4_hub*/ UV4H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR)
+
#define UV1H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR_BASE_SHFT 30
#define UV1H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR_M_IO_SHFT 46
@@ -2453,6 +3406,7 @@ union uvh_rh_gam_gru_overlay_config_mmr_u {
#define UV1H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR_N_IO_MASK 0x00f0000000000000UL
#define UV1H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR_ENABLE_MASK 0x8000000000000000UL
+
#define UV2H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR_BASE_SHFT 27
#define UV2H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR_M_IO_SHFT 46
#define UV2H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR_N_IO_SHFT 52
@@ -2462,6 +3416,7 @@ union uvh_rh_gam_gru_overlay_config_mmr_u {
#define UV2H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR_N_IO_MASK 0x00f0000000000000UL
#define UV2H_RH_GAM_MMIOH_OVERLAY_CONFIG_MMR_ENABLE_MASK 0x8000000000000000UL
+
union uvh_rh_gam_mmioh_overlay_config_mmr_u {
unsigned long v;
struct uv1h_rh_gam_mmioh_overlay_config_mmr_s {
@@ -2485,10 +3440,15 @@ union uvh_rh_gam_mmioh_overlay_config_mmr_u {
/* ========================================================================= */
/* UVH_RH_GAM_MMR_OVERLAY_CONFIG_MMR */
/* ========================================================================= */
-#define UVH_RH_GAM_MMR_OVERLAY_CONFIG_MMR 0x1600028UL
#define UV1H_RH_GAM_MMR_OVERLAY_CONFIG_MMR 0x1600028UL
#define UV2H_RH_GAM_MMR_OVERLAY_CONFIG_MMR 0x1600028UL
#define UV3H_RH_GAM_MMR_OVERLAY_CONFIG_MMR 0x1600028UL
+#define UV4H_RH_GAM_MMR_OVERLAY_CONFIG_MMR 0x480028UL
+#define UVH_RH_GAM_MMR_OVERLAY_CONFIG_MMR ( \
+ is_uv1_hub() ? UV1H_RH_GAM_MMR_OVERLAY_CONFIG_MMR : \
+ is_uv2_hub() ? UV2H_RH_GAM_MMR_OVERLAY_CONFIG_MMR : \
+ is_uv3_hub() ? UV3H_RH_GAM_MMR_OVERLAY_CONFIG_MMR : \
+ /*is_uv4_hub*/ UV4H_RH_GAM_MMR_OVERLAY_CONFIG_MMR)
#define UVH_RH_GAM_MMR_OVERLAY_CONFIG_MMR_BASE_SHFT 26
#define UVH_RH_GAM_MMR_OVERLAY_CONFIG_MMR_ENABLE_SHFT 63
@@ -2517,6 +3477,12 @@ union uvh_rh_gam_mmioh_overlay_config_mmr_u {
#define UV3H_RH_GAM_MMR_OVERLAY_CONFIG_MMR_BASE_MASK 0x00003ffffc000000UL
#define UV3H_RH_GAM_MMR_OVERLAY_CONFIG_MMR_ENABLE_MASK 0x8000000000000000UL
+#define UV4H_RH_GAM_MMR_OVERLAY_CONFIG_MMR_BASE_SHFT 26
+#define UV4H_RH_GAM_MMR_OVERLAY_CONFIG_MMR_ENABLE_SHFT 63
+#define UV4H_RH_GAM_MMR_OVERLAY_CONFIG_MMR_BASE_MASK 0x00003ffffc000000UL
+#define UV4H_RH_GAM_MMR_OVERLAY_CONFIG_MMR_ENABLE_MASK 0x8000000000000000UL
+
+
union uvh_rh_gam_mmr_overlay_config_mmr_u {
unsigned long v;
struct uvh_rh_gam_mmr_overlay_config_mmr_s {
@@ -2550,16 +3516,31 @@ union uvh_rh_gam_mmr_overlay_config_mmr_u {
unsigned long rsvd_46_62:17;
unsigned long enable:1; /* RW */
} s3;
+ struct uv4h_rh_gam_mmr_overlay_config_mmr_s {
+ unsigned long rsvd_0_25:26;
+ unsigned long base:20; /* RW */
+ unsigned long rsvd_46_62:17;
+ unsigned long enable:1; /* RW */
+ } s4;
};
/* ========================================================================= */
/* UVH_RTC */
/* ========================================================================= */
-#define UVH_RTC 0x340000UL
+#define UV1H_RTC 0x340000UL
+#define UV2H_RTC 0x340000UL
+#define UV3H_RTC 0x340000UL
+#define UV4H_RTC 0xe0000UL
+#define UVH_RTC ( \
+ is_uv1_hub() ? UV1H_RTC : \
+ is_uv2_hub() ? UV2H_RTC : \
+ is_uv3_hub() ? UV3H_RTC : \
+ /*is_uv4_hub*/ UV4H_RTC)
#define UVH_RTC_REAL_TIME_CLOCK_SHFT 0
#define UVH_RTC_REAL_TIME_CLOCK_MASK 0x00ffffffffffffffUL
+
union uvh_rtc_u {
unsigned long v;
struct uvh_rtc_s {
@@ -2590,6 +3571,7 @@ union uvh_rtc_u {
#define UVH_RTC1_INT_CONFIG_M_MASK 0x0000000000010000UL
#define UVH_RTC1_INT_CONFIG_APIC_ID_MASK 0xffffffff00000000UL
+
union uvh_rtc1_int_config_u {
unsigned long v;
struct uvh_rtc1_int_config_s {
@@ -2609,12 +3591,30 @@ union uvh_rtc1_int_config_u {
/* ========================================================================= */
/* UVH_SCRATCH5 */
/* ========================================================================= */
-#define UVH_SCRATCH5 0x2d0200UL
-#define UVH_SCRATCH5_32 0x778
+#define UV1H_SCRATCH5 0x2d0200UL
+#define UV2H_SCRATCH5 0x2d0200UL
+#define UV3H_SCRATCH5 0x2d0200UL
+#define UV4H_SCRATCH5 0xb0200UL
+#define UVH_SCRATCH5 ( \
+ is_uv1_hub() ? UV1H_SCRATCH5 : \
+ is_uv2_hub() ? UV2H_SCRATCH5 : \
+ is_uv3_hub() ? UV3H_SCRATCH5 : \
+ /*is_uv4_hub*/ UV4H_SCRATCH5)
+
+#define UV1H_SCRATCH5_32 0x778
+#define UV2H_SCRATCH5_32 0x778
+#define UV3H_SCRATCH5_32 0x778
+#define UV4H_SCRATCH5_32 0x798
+#define UVH_SCRATCH5_32 ( \
+ is_uv1_hub() ? UV1H_SCRATCH5_32 : \
+ is_uv2_hub() ? UV2H_SCRATCH5_32 : \
+ is_uv3_hub() ? UV3H_SCRATCH5_32 : \
+ /*is_uv4_hub*/ UV4H_SCRATCH5_32)
#define UVH_SCRATCH5_SCRATCH5_SHFT 0
#define UVH_SCRATCH5_SCRATCH5_MASK 0xffffffffffffffffUL
+
union uvh_scratch5_u {
unsigned long v;
struct uvh_scratch5_s {
@@ -2625,14 +3625,39 @@ union uvh_scratch5_u {
/* ========================================================================= */
/* UVH_SCRATCH5_ALIAS */
/* ========================================================================= */
-#define UVH_SCRATCH5_ALIAS 0x2d0208UL
-#define UVH_SCRATCH5_ALIAS_32 0x780
+#define UV1H_SCRATCH5_ALIAS 0x2d0208UL
+#define UV2H_SCRATCH5_ALIAS 0x2d0208UL
+#define UV3H_SCRATCH5_ALIAS 0x2d0208UL
+#define UV4H_SCRATCH5_ALIAS 0xb0208UL
+#define UVH_SCRATCH5_ALIAS ( \
+ is_uv1_hub() ? UV1H_SCRATCH5_ALIAS : \
+ is_uv2_hub() ? UV2H_SCRATCH5_ALIAS : \
+ is_uv3_hub() ? UV3H_SCRATCH5_ALIAS : \
+ /*is_uv4_hub*/ UV4H_SCRATCH5_ALIAS)
+
+#define UV1H_SCRATCH5_ALIAS_32 0x780
+#define UV2H_SCRATCH5_ALIAS_32 0x780
+#define UV3H_SCRATCH5_ALIAS_32 0x780
+#define UV4H_SCRATCH5_ALIAS_32 0x7a0
+#define UVH_SCRATCH5_ALIAS_32 ( \
+ is_uv1_hub() ? UV1H_SCRATCH5_ALIAS_32 : \
+ is_uv2_hub() ? UV2H_SCRATCH5_ALIAS_32 : \
+ is_uv3_hub() ? UV3H_SCRATCH5_ALIAS_32 : \
+ /*is_uv4_hub*/ UV4H_SCRATCH5_ALIAS_32)
/* ========================================================================= */
/* UVH_SCRATCH5_ALIAS_2 */
/* ========================================================================= */
-#define UVH_SCRATCH5_ALIAS_2 0x2d0210UL
+#define UV1H_SCRATCH5_ALIAS_2 0x2d0210UL
+#define UV2H_SCRATCH5_ALIAS_2 0x2d0210UL
+#define UV3H_SCRATCH5_ALIAS_2 0x2d0210UL
+#define UV4H_SCRATCH5_ALIAS_2 0xb0210UL
+#define UVH_SCRATCH5_ALIAS_2 ( \
+ is_uv1_hub() ? UV1H_SCRATCH5_ALIAS_2 : \
+ is_uv2_hub() ? UV2H_SCRATCH5_ALIAS_2 : \
+ is_uv3_hub() ? UV3H_SCRATCH5_ALIAS_2 : \
+ /*is_uv4_hub*/ UV4H_SCRATCH5_ALIAS_2)
#define UVH_SCRATCH5_ALIAS_2_32 0x788
@@ -2640,76 +3665,255 @@ union uvh_scratch5_u {
/* UVXH_EVENT_OCCURRED2 */
/* ========================================================================= */
#define UVXH_EVENT_OCCURRED2 0x70100UL
-#define UVXH_EVENT_OCCURRED2_32 0xb68
-
-#define UVXH_EVENT_OCCURRED2_RTC_0_SHFT 0
-#define UVXH_EVENT_OCCURRED2_RTC_1_SHFT 1
-#define UVXH_EVENT_OCCURRED2_RTC_2_SHFT 2
-#define UVXH_EVENT_OCCURRED2_RTC_3_SHFT 3
-#define UVXH_EVENT_OCCURRED2_RTC_4_SHFT 4
-#define UVXH_EVENT_OCCURRED2_RTC_5_SHFT 5
-#define UVXH_EVENT_OCCURRED2_RTC_6_SHFT 6
-#define UVXH_EVENT_OCCURRED2_RTC_7_SHFT 7
-#define UVXH_EVENT_OCCURRED2_RTC_8_SHFT 8
-#define UVXH_EVENT_OCCURRED2_RTC_9_SHFT 9
-#define UVXH_EVENT_OCCURRED2_RTC_10_SHFT 10
-#define UVXH_EVENT_OCCURRED2_RTC_11_SHFT 11
-#define UVXH_EVENT_OCCURRED2_RTC_12_SHFT 12
-#define UVXH_EVENT_OCCURRED2_RTC_13_SHFT 13
-#define UVXH_EVENT_OCCURRED2_RTC_14_SHFT 14
-#define UVXH_EVENT_OCCURRED2_RTC_15_SHFT 15
-#define UVXH_EVENT_OCCURRED2_RTC_16_SHFT 16
-#define UVXH_EVENT_OCCURRED2_RTC_17_SHFT 17
-#define UVXH_EVENT_OCCURRED2_RTC_18_SHFT 18
-#define UVXH_EVENT_OCCURRED2_RTC_19_SHFT 19
-#define UVXH_EVENT_OCCURRED2_RTC_20_SHFT 20
-#define UVXH_EVENT_OCCURRED2_RTC_21_SHFT 21
-#define UVXH_EVENT_OCCURRED2_RTC_22_SHFT 22
-#define UVXH_EVENT_OCCURRED2_RTC_23_SHFT 23
-#define UVXH_EVENT_OCCURRED2_RTC_24_SHFT 24
-#define UVXH_EVENT_OCCURRED2_RTC_25_SHFT 25
-#define UVXH_EVENT_OCCURRED2_RTC_26_SHFT 26
-#define UVXH_EVENT_OCCURRED2_RTC_27_SHFT 27
-#define UVXH_EVENT_OCCURRED2_RTC_28_SHFT 28
-#define UVXH_EVENT_OCCURRED2_RTC_29_SHFT 29
-#define UVXH_EVENT_OCCURRED2_RTC_30_SHFT 30
-#define UVXH_EVENT_OCCURRED2_RTC_31_SHFT 31
-#define UVXH_EVENT_OCCURRED2_RTC_0_MASK 0x0000000000000001UL
-#define UVXH_EVENT_OCCURRED2_RTC_1_MASK 0x0000000000000002UL
-#define UVXH_EVENT_OCCURRED2_RTC_2_MASK 0x0000000000000004UL
-#define UVXH_EVENT_OCCURRED2_RTC_3_MASK 0x0000000000000008UL
-#define UVXH_EVENT_OCCURRED2_RTC_4_MASK 0x0000000000000010UL
-#define UVXH_EVENT_OCCURRED2_RTC_5_MASK 0x0000000000000020UL
-#define UVXH_EVENT_OCCURRED2_RTC_6_MASK 0x0000000000000040UL
-#define UVXH_EVENT_OCCURRED2_RTC_7_MASK 0x0000000000000080UL
-#define UVXH_EVENT_OCCURRED2_RTC_8_MASK 0x0000000000000100UL
-#define UVXH_EVENT_OCCURRED2_RTC_9_MASK 0x0000000000000200UL
-#define UVXH_EVENT_OCCURRED2_RTC_10_MASK 0x0000000000000400UL
-#define UVXH_EVENT_OCCURRED2_RTC_11_MASK 0x0000000000000800UL
-#define UVXH_EVENT_OCCURRED2_RTC_12_MASK 0x0000000000001000UL
-#define UVXH_EVENT_OCCURRED2_RTC_13_MASK 0x0000000000002000UL
-#define UVXH_EVENT_OCCURRED2_RTC_14_MASK 0x0000000000004000UL
-#define UVXH_EVENT_OCCURRED2_RTC_15_MASK 0x0000000000008000UL
-#define UVXH_EVENT_OCCURRED2_RTC_16_MASK 0x0000000000010000UL
-#define UVXH_EVENT_OCCURRED2_RTC_17_MASK 0x0000000000020000UL
-#define UVXH_EVENT_OCCURRED2_RTC_18_MASK 0x0000000000040000UL
-#define UVXH_EVENT_OCCURRED2_RTC_19_MASK 0x0000000000080000UL
-#define UVXH_EVENT_OCCURRED2_RTC_20_MASK 0x0000000000100000UL
-#define UVXH_EVENT_OCCURRED2_RTC_21_MASK 0x0000000000200000UL
-#define UVXH_EVENT_OCCURRED2_RTC_22_MASK 0x0000000000400000UL
-#define UVXH_EVENT_OCCURRED2_RTC_23_MASK 0x0000000000800000UL
-#define UVXH_EVENT_OCCURRED2_RTC_24_MASK 0x0000000001000000UL
-#define UVXH_EVENT_OCCURRED2_RTC_25_MASK 0x0000000002000000UL
-#define UVXH_EVENT_OCCURRED2_RTC_26_MASK 0x0000000004000000UL
-#define UVXH_EVENT_OCCURRED2_RTC_27_MASK 0x0000000008000000UL
-#define UVXH_EVENT_OCCURRED2_RTC_28_MASK 0x0000000010000000UL
-#define UVXH_EVENT_OCCURRED2_RTC_29_MASK 0x0000000020000000UL
-#define UVXH_EVENT_OCCURRED2_RTC_30_MASK 0x0000000040000000UL
-#define UVXH_EVENT_OCCURRED2_RTC_31_MASK 0x0000000080000000UL
-
-union uvxh_event_occurred2_u {
+
+#define UV2H_EVENT_OCCURRED2_32 0xb68
+#define UV3H_EVENT_OCCURRED2_32 0xb68
+#define UV4H_EVENT_OCCURRED2_32 0x608
+#define UVH_EVENT_OCCURRED2_32 ( \
+ is_uv2_hub() ? UV2H_EVENT_OCCURRED2_32 : \
+ is_uv3_hub() ? UV3H_EVENT_OCCURRED2_32 : \
+ /*is_uv4_hub*/ UV4H_EVENT_OCCURRED2_32)
+
+
+#define UV2H_EVENT_OCCURRED2_RTC_0_SHFT 0
+#define UV2H_EVENT_OCCURRED2_RTC_1_SHFT 1
+#define UV2H_EVENT_OCCURRED2_RTC_2_SHFT 2
+#define UV2H_EVENT_OCCURRED2_RTC_3_SHFT 3
+#define UV2H_EVENT_OCCURRED2_RTC_4_SHFT 4
+#define UV2H_EVENT_OCCURRED2_RTC_5_SHFT 5
+#define UV2H_EVENT_OCCURRED2_RTC_6_SHFT 6
+#define UV2H_EVENT_OCCURRED2_RTC_7_SHFT 7
+#define UV2H_EVENT_OCCURRED2_RTC_8_SHFT 8
+#define UV2H_EVENT_OCCURRED2_RTC_9_SHFT 9
+#define UV2H_EVENT_OCCURRED2_RTC_10_SHFT 10
+#define UV2H_EVENT_OCCURRED2_RTC_11_SHFT 11
+#define UV2H_EVENT_OCCURRED2_RTC_12_SHFT 12
+#define UV2H_EVENT_OCCURRED2_RTC_13_SHFT 13
+#define UV2H_EVENT_OCCURRED2_RTC_14_SHFT 14
+#define UV2H_EVENT_OCCURRED2_RTC_15_SHFT 15
+#define UV2H_EVENT_OCCURRED2_RTC_16_SHFT 16
+#define UV2H_EVENT_OCCURRED2_RTC_17_SHFT 17
+#define UV2H_EVENT_OCCURRED2_RTC_18_SHFT 18
+#define UV2H_EVENT_OCCURRED2_RTC_19_SHFT 19
+#define UV2H_EVENT_OCCURRED2_RTC_20_SHFT 20
+#define UV2H_EVENT_OCCURRED2_RTC_21_SHFT 21
+#define UV2H_EVENT_OCCURRED2_RTC_22_SHFT 22
+#define UV2H_EVENT_OCCURRED2_RTC_23_SHFT 23
+#define UV2H_EVENT_OCCURRED2_RTC_24_SHFT 24
+#define UV2H_EVENT_OCCURRED2_RTC_25_SHFT 25
+#define UV2H_EVENT_OCCURRED2_RTC_26_SHFT 26
+#define UV2H_EVENT_OCCURRED2_RTC_27_SHFT 27
+#define UV2H_EVENT_OCCURRED2_RTC_28_SHFT 28
+#define UV2H_EVENT_OCCURRED2_RTC_29_SHFT 29
+#define UV2H_EVENT_OCCURRED2_RTC_30_SHFT 30
+#define UV2H_EVENT_OCCURRED2_RTC_31_SHFT 31
+#define UV2H_EVENT_OCCURRED2_RTC_0_MASK 0x0000000000000001UL
+#define UV2H_EVENT_OCCURRED2_RTC_1_MASK 0x0000000000000002UL
+#define UV2H_EVENT_OCCURRED2_RTC_2_MASK 0x0000000000000004UL
+#define UV2H_EVENT_OCCURRED2_RTC_3_MASK 0x0000000000000008UL
+#define UV2H_EVENT_OCCURRED2_RTC_4_MASK 0x0000000000000010UL
+#define UV2H_EVENT_OCCURRED2_RTC_5_MASK 0x0000000000000020UL
+#define UV2H_EVENT_OCCURRED2_RTC_6_MASK 0x0000000000000040UL
+#define UV2H_EVENT_OCCURRED2_RTC_7_MASK 0x0000000000000080UL
+#define UV2H_EVENT_OCCURRED2_RTC_8_MASK 0x0000000000000100UL
+#define UV2H_EVENT_OCCURRED2_RTC_9_MASK 0x0000000000000200UL
+#define UV2H_EVENT_OCCURRED2_RTC_10_MASK 0x0000000000000400UL
+#define UV2H_EVENT_OCCURRED2_RTC_11_MASK 0x0000000000000800UL
+#define UV2H_EVENT_OCCURRED2_RTC_12_MASK 0x0000000000001000UL
+#define UV2H_EVENT_OCCURRED2_RTC_13_MASK 0x0000000000002000UL
+#define UV2H_EVENT_OCCURRED2_RTC_14_MASK 0x0000000000004000UL
+#define UV2H_EVENT_OCCURRED2_RTC_15_MASK 0x0000000000008000UL
+#define UV2H_EVENT_OCCURRED2_RTC_16_MASK 0x0000000000010000UL
+#define UV2H_EVENT_OCCURRED2_RTC_17_MASK 0x0000000000020000UL
+#define UV2H_EVENT_OCCURRED2_RTC_18_MASK 0x0000000000040000UL
+#define UV2H_EVENT_OCCURRED2_RTC_19_MASK 0x0000000000080000UL
+#define UV2H_EVENT_OCCURRED2_RTC_20_MASK 0x0000000000100000UL
+#define UV2H_EVENT_OCCURRED2_RTC_21_MASK 0x0000000000200000UL
+#define UV2H_EVENT_OCCURRED2_RTC_22_MASK 0x0000000000400000UL
+#define UV2H_EVENT_OCCURRED2_RTC_23_MASK 0x0000000000800000UL
+#define UV2H_EVENT_OCCURRED2_RTC_24_MASK 0x0000000001000000UL
+#define UV2H_EVENT_OCCURRED2_RTC_25_MASK 0x0000000002000000UL
+#define UV2H_EVENT_OCCURRED2_RTC_26_MASK 0x0000000004000000UL
+#define UV2H_EVENT_OCCURRED2_RTC_27_MASK 0x0000000008000000UL
+#define UV2H_EVENT_OCCURRED2_RTC_28_MASK 0x0000000010000000UL
+#define UV2H_EVENT_OCCURRED2_RTC_29_MASK 0x0000000020000000UL
+#define UV2H_EVENT_OCCURRED2_RTC_30_MASK 0x0000000040000000UL
+#define UV2H_EVENT_OCCURRED2_RTC_31_MASK 0x0000000080000000UL
+
+#define UV3H_EVENT_OCCURRED2_RTC_0_SHFT 0
+#define UV3H_EVENT_OCCURRED2_RTC_1_SHFT 1
+#define UV3H_EVENT_OCCURRED2_RTC_2_SHFT 2
+#define UV3H_EVENT_OCCURRED2_RTC_3_SHFT 3
+#define UV3H_EVENT_OCCURRED2_RTC_4_SHFT 4
+#define UV3H_EVENT_OCCURRED2_RTC_5_SHFT 5
+#define UV3H_EVENT_OCCURRED2_RTC_6_SHFT 6
+#define UV3H_EVENT_OCCURRED2_RTC_7_SHFT 7
+#define UV3H_EVENT_OCCURRED2_RTC_8_SHFT 8
+#define UV3H_EVENT_OCCURRED2_RTC_9_SHFT 9
+#define UV3H_EVENT_OCCURRED2_RTC_10_SHFT 10
+#define UV3H_EVENT_OCCURRED2_RTC_11_SHFT 11
+#define UV3H_EVENT_OCCURRED2_RTC_12_SHFT 12
+#define UV3H_EVENT_OCCURRED2_RTC_13_SHFT 13
+#define UV3H_EVENT_OCCURRED2_RTC_14_SHFT 14
+#define UV3H_EVENT_OCCURRED2_RTC_15_SHFT 15
+#define UV3H_EVENT_OCCURRED2_RTC_16_SHFT 16
+#define UV3H_EVENT_OCCURRED2_RTC_17_SHFT 17
+#define UV3H_EVENT_OCCURRED2_RTC_18_SHFT 18
+#define UV3H_EVENT_OCCURRED2_RTC_19_SHFT 19
+#define UV3H_EVENT_OCCURRED2_RTC_20_SHFT 20
+#define UV3H_EVENT_OCCURRED2_RTC_21_SHFT 21
+#define UV3H_EVENT_OCCURRED2_RTC_22_SHFT 22
+#define UV3H_EVENT_OCCURRED2_RTC_23_SHFT 23
+#define UV3H_EVENT_OCCURRED2_RTC_24_SHFT 24
+#define UV3H_EVENT_OCCURRED2_RTC_25_SHFT 25
+#define UV3H_EVENT_OCCURRED2_RTC_26_SHFT 26
+#define UV3H_EVENT_OCCURRED2_RTC_27_SHFT 27
+#define UV3H_EVENT_OCCURRED2_RTC_28_SHFT 28
+#define UV3H_EVENT_OCCURRED2_RTC_29_SHFT 29
+#define UV3H_EVENT_OCCURRED2_RTC_30_SHFT 30
+#define UV3H_EVENT_OCCURRED2_RTC_31_SHFT 31
+#define UV3H_EVENT_OCCURRED2_RTC_0_MASK 0x0000000000000001UL
+#define UV3H_EVENT_OCCURRED2_RTC_1_MASK 0x0000000000000002UL
+#define UV3H_EVENT_OCCURRED2_RTC_2_MASK 0x0000000000000004UL
+#define UV3H_EVENT_OCCURRED2_RTC_3_MASK 0x0000000000000008UL
+#define UV3H_EVENT_OCCURRED2_RTC_4_MASK 0x0000000000000010UL
+#define UV3H_EVENT_OCCURRED2_RTC_5_MASK 0x0000000000000020UL
+#define UV3H_EVENT_OCCURRED2_RTC_6_MASK 0x0000000000000040UL
+#define UV3H_EVENT_OCCURRED2_RTC_7_MASK 0x0000000000000080UL
+#define UV3H_EVENT_OCCURRED2_RTC_8_MASK 0x0000000000000100UL
+#define UV3H_EVENT_OCCURRED2_RTC_9_MASK 0x0000000000000200UL
+#define UV3H_EVENT_OCCURRED2_RTC_10_MASK 0x0000000000000400UL
+#define UV3H_EVENT_OCCURRED2_RTC_11_MASK 0x0000000000000800UL
+#define UV3H_EVENT_OCCURRED2_RTC_12_MASK 0x0000000000001000UL
+#define UV3H_EVENT_OCCURRED2_RTC_13_MASK 0x0000000000002000UL
+#define UV3H_EVENT_OCCURRED2_RTC_14_MASK 0x0000000000004000UL
+#define UV3H_EVENT_OCCURRED2_RTC_15_MASK 0x0000000000008000UL
+#define UV3H_EVENT_OCCURRED2_RTC_16_MASK 0x0000000000010000UL
+#define UV3H_EVENT_OCCURRED2_RTC_17_MASK 0x0000000000020000UL
+#define UV3H_EVENT_OCCURRED2_RTC_18_MASK 0x0000000000040000UL
+#define UV3H_EVENT_OCCURRED2_RTC_19_MASK 0x0000000000080000UL
+#define UV3H_EVENT_OCCURRED2_RTC_20_MASK 0x0000000000100000UL
+#define UV3H_EVENT_OCCURRED2_RTC_21_MASK 0x0000000000200000UL
+#define UV3H_EVENT_OCCURRED2_RTC_22_MASK 0x0000000000400000UL
+#define UV3H_EVENT_OCCURRED2_RTC_23_MASK 0x0000000000800000UL
+#define UV3H_EVENT_OCCURRED2_RTC_24_MASK 0x0000000001000000UL
+#define UV3H_EVENT_OCCURRED2_RTC_25_MASK 0x0000000002000000UL
+#define UV3H_EVENT_OCCURRED2_RTC_26_MASK 0x0000000004000000UL
+#define UV3H_EVENT_OCCURRED2_RTC_27_MASK 0x0000000008000000UL
+#define UV3H_EVENT_OCCURRED2_RTC_28_MASK 0x0000000010000000UL
+#define UV3H_EVENT_OCCURRED2_RTC_29_MASK 0x0000000020000000UL
+#define UV3H_EVENT_OCCURRED2_RTC_30_MASK 0x0000000040000000UL
+#define UV3H_EVENT_OCCURRED2_RTC_31_MASK 0x0000000080000000UL
+
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT0_SHFT 0
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT1_SHFT 1
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT2_SHFT 2
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT3_SHFT 3
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT4_SHFT 4
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT5_SHFT 5
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT6_SHFT 6
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT7_SHFT 7
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT8_SHFT 8
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT9_SHFT 9
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT10_SHFT 10
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT11_SHFT 11
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT12_SHFT 12
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT13_SHFT 13
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT14_SHFT 14
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT15_SHFT 15
+#define UV4H_EVENT_OCCURRED2_RTC_INTERVAL_INT_SHFT 16
+#define UV4H_EVENT_OCCURRED2_BAU_DASHBOARD_INT_SHFT 17
+#define UV4H_EVENT_OCCURRED2_RTC_0_SHFT 18
+#define UV4H_EVENT_OCCURRED2_RTC_1_SHFT 19
+#define UV4H_EVENT_OCCURRED2_RTC_2_SHFT 20
+#define UV4H_EVENT_OCCURRED2_RTC_3_SHFT 21
+#define UV4H_EVENT_OCCURRED2_RTC_4_SHFT 22
+#define UV4H_EVENT_OCCURRED2_RTC_5_SHFT 23
+#define UV4H_EVENT_OCCURRED2_RTC_6_SHFT 24
+#define UV4H_EVENT_OCCURRED2_RTC_7_SHFT 25
+#define UV4H_EVENT_OCCURRED2_RTC_8_SHFT 26
+#define UV4H_EVENT_OCCURRED2_RTC_9_SHFT 27
+#define UV4H_EVENT_OCCURRED2_RTC_10_SHFT 28
+#define UV4H_EVENT_OCCURRED2_RTC_11_SHFT 29
+#define UV4H_EVENT_OCCURRED2_RTC_12_SHFT 30
+#define UV4H_EVENT_OCCURRED2_RTC_13_SHFT 31
+#define UV4H_EVENT_OCCURRED2_RTC_14_SHFT 32
+#define UV4H_EVENT_OCCURRED2_RTC_15_SHFT 33
+#define UV4H_EVENT_OCCURRED2_RTC_16_SHFT 34
+#define UV4H_EVENT_OCCURRED2_RTC_17_SHFT 35
+#define UV4H_EVENT_OCCURRED2_RTC_18_SHFT 36
+#define UV4H_EVENT_OCCURRED2_RTC_19_SHFT 37
+#define UV4H_EVENT_OCCURRED2_RTC_20_SHFT 38
+#define UV4H_EVENT_OCCURRED2_RTC_21_SHFT 39
+#define UV4H_EVENT_OCCURRED2_RTC_22_SHFT 40
+#define UV4H_EVENT_OCCURRED2_RTC_23_SHFT 41
+#define UV4H_EVENT_OCCURRED2_RTC_24_SHFT 42
+#define UV4H_EVENT_OCCURRED2_RTC_25_SHFT 43
+#define UV4H_EVENT_OCCURRED2_RTC_26_SHFT 44
+#define UV4H_EVENT_OCCURRED2_RTC_27_SHFT 45
+#define UV4H_EVENT_OCCURRED2_RTC_28_SHFT 46
+#define UV4H_EVENT_OCCURRED2_RTC_29_SHFT 47
+#define UV4H_EVENT_OCCURRED2_RTC_30_SHFT 48
+#define UV4H_EVENT_OCCURRED2_RTC_31_SHFT 49
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT0_MASK 0x0000000000000001UL
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT1_MASK 0x0000000000000002UL
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT2_MASK 0x0000000000000004UL
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT3_MASK 0x0000000000000008UL
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT4_MASK 0x0000000000000010UL
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT5_MASK 0x0000000000000020UL
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT6_MASK 0x0000000000000040UL
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT7_MASK 0x0000000000000080UL
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT8_MASK 0x0000000000000100UL
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT9_MASK 0x0000000000000200UL
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT10_MASK 0x0000000000000400UL
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT11_MASK 0x0000000000000800UL
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT12_MASK 0x0000000000001000UL
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT13_MASK 0x0000000000002000UL
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT14_MASK 0x0000000000004000UL
+#define UV4H_EVENT_OCCURRED2_MESSAGE_ACCELERATOR_INT15_MASK 0x0000000000008000UL
+#define UV4H_EVENT_OCCURRED2_RTC_INTERVAL_INT_MASK 0x0000000000010000UL
+#define UV4H_EVENT_OCCURRED2_BAU_DASHBOARD_INT_MASK 0x0000000000020000UL
+#define UV4H_EVENT_OCCURRED2_RTC_0_MASK 0x0000000000040000UL
+#define UV4H_EVENT_OCCURRED2_RTC_1_MASK 0x0000000000080000UL
+#define UV4H_EVENT_OCCURRED2_RTC_2_MASK 0x0000000000100000UL
+#define UV4H_EVENT_OCCURRED2_RTC_3_MASK 0x0000000000200000UL
+#define UV4H_EVENT_OCCURRED2_RTC_4_MASK 0x0000000000400000UL
+#define UV4H_EVENT_OCCURRED2_RTC_5_MASK 0x0000000000800000UL
+#define UV4H_EVENT_OCCURRED2_RTC_6_MASK 0x0000000001000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_7_MASK 0x0000000002000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_8_MASK 0x0000000004000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_9_MASK 0x0000000008000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_10_MASK 0x0000000010000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_11_MASK 0x0000000020000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_12_MASK 0x0000000040000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_13_MASK 0x0000000080000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_14_MASK 0x0000000100000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_15_MASK 0x0000000200000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_16_MASK 0x0000000400000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_17_MASK 0x0000000800000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_18_MASK 0x0000001000000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_19_MASK 0x0000002000000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_20_MASK 0x0000004000000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_21_MASK 0x0000008000000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_22_MASK 0x0000010000000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_23_MASK 0x0000020000000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_24_MASK 0x0000040000000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_25_MASK 0x0000080000000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_26_MASK 0x0000100000000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_27_MASK 0x0000200000000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_28_MASK 0x0000400000000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_29_MASK 0x0000800000000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_30_MASK 0x0001000000000000UL
+#define UV4H_EVENT_OCCURRED2_RTC_31_MASK 0x0002000000000000UL
+
+#define UVXH_EVENT_OCCURRED2_RTC_1_MASK ( \
+ is_uv2_hub() ? UV2H_EVENT_OCCURRED2_RTC_1_MASK : \
+ is_uv3_hub() ? UV3H_EVENT_OCCURRED2_RTC_1_MASK : \
+ /*is_uv4_hub*/ UV4H_EVENT_OCCURRED2_RTC_1_MASK)
+
+union uvh_event_occurred2_u {
unsigned long v;
- struct uvxh_event_occurred2_s {
+ struct uv2h_event_occurred2_s {
unsigned long rtc_0:1; /* RW */
unsigned long rtc_1:1; /* RW */
unsigned long rtc_2:1; /* RW */
@@ -2743,25 +3947,129 @@ union uvxh_event_occurred2_u {
unsigned long rtc_30:1; /* RW */
unsigned long rtc_31:1; /* RW */
unsigned long rsvd_32_63:32;
- } sx;
+ } s2;
+ struct uv3h_event_occurred2_s {
+ unsigned long rtc_0:1; /* RW */
+ unsigned long rtc_1:1; /* RW */
+ unsigned long rtc_2:1; /* RW */
+ unsigned long rtc_3:1; /* RW */
+ unsigned long rtc_4:1; /* RW */
+ unsigned long rtc_5:1; /* RW */
+ unsigned long rtc_6:1; /* RW */
+ unsigned long rtc_7:1; /* RW */
+ unsigned long rtc_8:1; /* RW */
+ unsigned long rtc_9:1; /* RW */
+ unsigned long rtc_10:1; /* RW */
+ unsigned long rtc_11:1; /* RW */
+ unsigned long rtc_12:1; /* RW */
+ unsigned long rtc_13:1; /* RW */
+ unsigned long rtc_14:1; /* RW */
+ unsigned long rtc_15:1; /* RW */
+ unsigned long rtc_16:1; /* RW */
+ unsigned long rtc_17:1; /* RW */
+ unsigned long rtc_18:1; /* RW */
+ unsigned long rtc_19:1; /* RW */
+ unsigned long rtc_20:1; /* RW */
+ unsigned long rtc_21:1; /* RW */
+ unsigned long rtc_22:1; /* RW */
+ unsigned long rtc_23:1; /* RW */
+ unsigned long rtc_24:1; /* RW */
+ unsigned long rtc_25:1; /* RW */
+ unsigned long rtc_26:1; /* RW */
+ unsigned long rtc_27:1; /* RW */
+ unsigned long rtc_28:1; /* RW */
+ unsigned long rtc_29:1; /* RW */
+ unsigned long rtc_30:1; /* RW */
+ unsigned long rtc_31:1; /* RW */
+ unsigned long rsvd_32_63:32;
+ } s3;
+ struct uv4h_event_occurred2_s {
+ unsigned long message_accelerator_int0:1; /* RW */
+ unsigned long message_accelerator_int1:1; /* RW */
+ unsigned long message_accelerator_int2:1; /* RW */
+ unsigned long message_accelerator_int3:1; /* RW */
+ unsigned long message_accelerator_int4:1; /* RW */
+ unsigned long message_accelerator_int5:1; /* RW */
+ unsigned long message_accelerator_int6:1; /* RW */
+ unsigned long message_accelerator_int7:1; /* RW */
+ unsigned long message_accelerator_int8:1; /* RW */
+ unsigned long message_accelerator_int9:1; /* RW */
+ unsigned long message_accelerator_int10:1; /* RW */
+ unsigned long message_accelerator_int11:1; /* RW */
+ unsigned long message_accelerator_int12:1; /* RW */
+ unsigned long message_accelerator_int13:1; /* RW */
+ unsigned long message_accelerator_int14:1; /* RW */
+ unsigned long message_accelerator_int15:1; /* RW */
+ unsigned long rtc_interval_int:1; /* RW */
+ unsigned long bau_dashboard_int:1; /* RW */
+ unsigned long rtc_0:1; /* RW */
+ unsigned long rtc_1:1; /* RW */
+ unsigned long rtc_2:1; /* RW */
+ unsigned long rtc_3:1; /* RW */
+ unsigned long rtc_4:1; /* RW */
+ unsigned long rtc_5:1; /* RW */
+ unsigned long rtc_6:1; /* RW */
+ unsigned long rtc_7:1; /* RW */
+ unsigned long rtc_8:1; /* RW */
+ unsigned long rtc_9:1; /* RW */
+ unsigned long rtc_10:1; /* RW */
+ unsigned long rtc_11:1; /* RW */
+ unsigned long rtc_12:1; /* RW */
+ unsigned long rtc_13:1; /* RW */
+ unsigned long rtc_14:1; /* RW */
+ unsigned long rtc_15:1; /* RW */
+ unsigned long rtc_16:1; /* RW */
+ unsigned long rtc_17:1; /* RW */
+ unsigned long rtc_18:1; /* RW */
+ unsigned long rtc_19:1; /* RW */
+ unsigned long rtc_20:1; /* RW */
+ unsigned long rtc_21:1; /* RW */
+ unsigned long rtc_22:1; /* RW */
+ unsigned long rtc_23:1; /* RW */
+ unsigned long rtc_24:1; /* RW */
+ unsigned long rtc_25:1; /* RW */
+ unsigned long rtc_26:1; /* RW */
+ unsigned long rtc_27:1; /* RW */
+ unsigned long rtc_28:1; /* RW */
+ unsigned long rtc_29:1; /* RW */
+ unsigned long rtc_30:1; /* RW */
+ unsigned long rtc_31:1; /* RW */
+ unsigned long rsvd_50_63:14;
+ } s4;
};
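A brief illustrative sketch (not part of the patch): UV4 inserts the message-accelerator and BAU bits ahead of the RTC bits, so RTC_1 moves from bit 1 to bit 19, and callers have to go through the hub-selecting mask macro rather than a fixed UVXH_* constant. The helper name below is invented; uv_read_local_mmr() comes from <asm/uv/uv_hub.h>.

	/* Sketch, assuming the headers introduced/modified by this patch. */
	#include <asm/uv/uv_hub.h>
	#include <asm/uv/uv_mmrs.h>

	static bool sketch_rtc1_event_pending(void)
	{
		unsigned long v = uv_read_local_mmr(UVXH_EVENT_OCCURRED2);

		/*
		 * UVXH_EVENT_OCCURRED2_RTC_1_MASK resolves per hub type at
		 * run time: bit 1 on UV2/UV3, bit 19 on UV4.
		 */
		return v & UVXH_EVENT_OCCURRED2_RTC_1_MASK;
	}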
/* ========================================================================= */
/* UVXH_EVENT_OCCURRED2_ALIAS */
/* ========================================================================= */
#define UVXH_EVENT_OCCURRED2_ALIAS 0x70108UL
-#define UVXH_EVENT_OCCURRED2_ALIAS_32 0xb70
+
+#define UV2H_EVENT_OCCURRED2_ALIAS_32 0xb70
+#define UV3H_EVENT_OCCURRED2_ALIAS_32 0xb70
+#define UV4H_EVENT_OCCURRED2_ALIAS_32 0x610
+#define UVH_EVENT_OCCURRED2_ALIAS_32 ( \
+ is_uv2_hub() ? UV2H_EVENT_OCCURRED2_ALIAS_32 : \
+ is_uv3_hub() ? UV3H_EVENT_OCCURRED2_ALIAS_32 : \
+ /*is_uv4_hub*/ UV4H_EVENT_OCCURRED2_ALIAS_32)
/* ========================================================================= */
/* UVXH_LB_BAU_SB_ACTIVATION_STATUS_2 */
/* ========================================================================= */
-#define UVXH_LB_BAU_SB_ACTIVATION_STATUS_2 0x320130UL
#define UV2H_LB_BAU_SB_ACTIVATION_STATUS_2 0x320130UL
#define UV3H_LB_BAU_SB_ACTIVATION_STATUS_2 0x320130UL
-#define UVXH_LB_BAU_SB_ACTIVATION_STATUS_2_32 0x9f0
-#define UV2H_LB_BAU_SB_ACTIVATION_STATUS_2_32 0x320130UL
-#define UV3H_LB_BAU_SB_ACTIVATION_STATUS_2_32 0x320130UL
+#define UV4H_LB_BAU_SB_ACTIVATION_STATUS_2 0xc8130UL
+#define UVH_LB_BAU_SB_ACTIVATION_STATUS_2 ( \
+ is_uv2_hub() ? UV2H_LB_BAU_SB_ACTIVATION_STATUS_2 : \
+ is_uv3_hub() ? UV3H_LB_BAU_SB_ACTIVATION_STATUS_2 : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_SB_ACTIVATION_STATUS_2)
+
+#define UV2H_LB_BAU_SB_ACTIVATION_STATUS_2_32 0x9f0
+#define UV3H_LB_BAU_SB_ACTIVATION_STATUS_2_32 0x9f0
+#define UV4H_LB_BAU_SB_ACTIVATION_STATUS_2_32 0xa10
+#define UVH_LB_BAU_SB_ACTIVATION_STATUS_2_32 ( \
+ is_uv2_hub() ? UV2H_LB_BAU_SB_ACTIVATION_STATUS_2_32 : \
+ is_uv3_hub() ? UV3H_LB_BAU_SB_ACTIVATION_STATUS_2_32 : \
+ /*is_uv4_hub*/ UV4H_LB_BAU_SB_ACTIVATION_STATUS_2_32)
#define UVXH_LB_BAU_SB_ACTIVATION_STATUS_2_AUX_ERROR_SHFT 0
#define UVXH_LB_BAU_SB_ACTIVATION_STATUS_2_AUX_ERROR_MASK 0xffffffffffffffffUL
@@ -2772,6 +4080,10 @@ union uvxh_event_occurred2_u {
#define UV3H_LB_BAU_SB_ACTIVATION_STATUS_2_AUX_ERROR_SHFT 0
#define UV3H_LB_BAU_SB_ACTIVATION_STATUS_2_AUX_ERROR_MASK 0xffffffffffffffffUL
+#define UV4H_LB_BAU_SB_ACTIVATION_STATUS_2_AUX_ERROR_SHFT 0
+#define UV4H_LB_BAU_SB_ACTIVATION_STATUS_2_AUX_ERROR_MASK 0xffffffffffffffffUL
+
+
union uvxh_lb_bau_sb_activation_status_2_u {
unsigned long v;
struct uvxh_lb_bau_sb_activation_status_2_s {
@@ -2783,6 +4095,9 @@ union uvxh_lb_bau_sb_activation_status_2_u {
struct uv3h_lb_bau_sb_activation_status_2_s {
unsigned long aux_error:64; /* RW */
} s3;
+ struct uv4h_lb_bau_sb_activation_status_2_s {
+ unsigned long aux_error:64; /* RW */
+ } s4;
};
/* ========================================================================= */
@@ -2823,26 +4138,6 @@ union uv3h_gr0_gam_gr_config_u {
};
/* ========================================================================= */
-/* UV3H_GR1_GAM_GR_CONFIG */
-/* ========================================================================= */
-#define UV3H_GR1_GAM_GR_CONFIG 0x1000028UL
-
-#define UV3H_GR1_GAM_GR_CONFIG_M_SKT_SHFT 0
-#define UV3H_GR1_GAM_GR_CONFIG_SUBSPACE_SHFT 10
-#define UV3H_GR1_GAM_GR_CONFIG_M_SKT_MASK 0x000000000000003fUL
-#define UV3H_GR1_GAM_GR_CONFIG_SUBSPACE_MASK 0x0000000000000400UL
-
-union uv3h_gr1_gam_gr_config_u {
- unsigned long v;
- struct uv3h_gr1_gam_gr_config_s {
- unsigned long m_skt:6; /* RW */
- unsigned long undef_6_9:4; /* Undefined */
- unsigned long subspace:1; /* RW */
- unsigned long reserved:53;
- } s3;
-};
-
-/* ========================================================================= */
/* UV3H_RH_GAM_MMIOH_OVERLAY_CONFIG0_MMR */
/* ========================================================================= */
#define UV3H_RH_GAM_MMIOH_OVERLAY_CONFIG0_MMR 0x1603000UL
@@ -2924,5 +4219,67 @@ union uv3h_rh_gam_mmioh_redirect_config1_mmr_u {
} s3;
};
+/* ========================================================================= */
+/* UV4H_LB_PROC_INTD_QUEUE_FIRST */
+/* ========================================================================= */
+#define UV4H_LB_PROC_INTD_QUEUE_FIRST 0xa4100UL
+
+#define UV4H_LB_PROC_INTD_QUEUE_FIRST_FIRST_PAYLOAD_ADDRESS_SHFT 6
+#define UV4H_LB_PROC_INTD_QUEUE_FIRST_FIRST_PAYLOAD_ADDRESS_MASK 0x00003fffffffffc0UL
+
+union uv4h_lb_proc_intd_queue_first_u {
+ unsigned long v;
+ struct uv4h_lb_proc_intd_queue_first_s {
+ unsigned long undef_0_5:6; /* Undefined */
+ unsigned long first_payload_address:40; /* RW */
+ } s4;
+};
+
+/* ========================================================================= */
+/* UV4H_LB_PROC_INTD_QUEUE_LAST */
+/* ========================================================================= */
+#define UV4H_LB_PROC_INTD_QUEUE_LAST 0xa4108UL
+
+#define UV4H_LB_PROC_INTD_QUEUE_LAST_LAST_PAYLOAD_ADDRESS_SHFT 5
+#define UV4H_LB_PROC_INTD_QUEUE_LAST_LAST_PAYLOAD_ADDRESS_MASK 0x00003fffffffffe0UL
+
+union uv4h_lb_proc_intd_queue_last_u {
+ unsigned long v;
+ struct uv4h_lb_proc_intd_queue_last_s {
+ unsigned long undef_0_4:5; /* Undefined */
+ unsigned long last_payload_address:41; /* RW */
+ } s4;
+};
+
+/* ========================================================================= */
+/* UV4H_LB_PROC_INTD_SOFT_ACK_CLEAR */
+/* ========================================================================= */
+#define UV4H_LB_PROC_INTD_SOFT_ACK_CLEAR 0xa4118UL
+
+#define UV4H_LB_PROC_INTD_SOFT_ACK_CLEAR_SOFT_ACK_PENDING_FLAGS_SHFT 0
+#define UV4H_LB_PROC_INTD_SOFT_ACK_CLEAR_SOFT_ACK_PENDING_FLAGS_MASK 0x00000000000000ffUL
+
+union uv4h_lb_proc_intd_soft_ack_clear_u {
+ unsigned long v;
+ struct uv4h_lb_proc_intd_soft_ack_clear_s {
+ unsigned long soft_ack_pending_flags:8; /* WP */
+ } s4;
+};
+
+/* ========================================================================= */
+/* UV4H_LB_PROC_INTD_SOFT_ACK_PENDING */
+/* ========================================================================= */
+#define UV4H_LB_PROC_INTD_SOFT_ACK_PENDING 0xa4110UL
+
+#define UV4H_LB_PROC_INTD_SOFT_ACK_PENDING_SOFT_ACK_FLAGS_SHFT 0
+#define UV4H_LB_PROC_INTD_SOFT_ACK_PENDING_SOFT_ACK_FLAGS_MASK 0x00000000000000ffUL
+
+union uv4h_lb_proc_intd_soft_ack_pending_u {
+ unsigned long v;
+ struct uv4h_lb_proc_intd_soft_ack_pending_s {
+ unsigned long soft_ack_flags:8; /* RW */
+ } s4;
+};
+
#endif /* _ASM_X86_UV_UV_MMRS_H */
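A short, hedged sketch of the pattern this header now follows (the function is made up for illustration): UVH_* offsets expand to an is_uvX_hub() chain evaluated at run time, and registers that a given hub lacks expand to uv_undefined(), which the x2apic_uv_x.c hunk further down defines to log the access and return ~0.

	/* Sketch only; assumes <asm/uv/uv_hub.h> and the uv_mmrs.h above. */
	static void sketch_write_scratch5(unsigned long val)
	{
		/* UVH_SCRATCH5 picks 0x2d0200UL on UV1-3 and 0xb0200UL on UV4. */
		uv_write_local_mmr(UVH_SCRATCH5, val);

		/*
		 * An MMR that does not exist on this hub type (for example the
		 * MMIOH overlay config on UV3/UV4 above) would resolve to
		 * uv_undefined() instead of a real offset.
		 */
	}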
diff --git a/arch/x86/include/asm/x86_init.h b/arch/x86/include/asm/x86_init.h
index 1ae89a2721d6..4dcdf74dfed8 100644
--- a/arch/x86/include/asm/x86_init.h
+++ b/arch/x86/include/asm/x86_init.h
@@ -142,6 +142,44 @@ struct x86_cpuinit_ops {
struct timespec;
/**
+ * struct x86_legacy_devices - legacy x86 devices
+ *
+ * @pnpbios: this platform can have a PNPBIOS. If this is disabled the platform
+ * is known to never have a PNPBIOS.
+ *
+ * These are devices known to require LPC or ISA bus. The definition of legacy
+ * devices adheres to the ACPI 5.2.9.3 IA-PC Boot Architecture flag
+ * ACPI_FADT_LEGACY_DEVICES. These devices consist of user visible devices on
+ * the LPC or ISA bus. User visible devices are devices that have end-user
+ * accessible connectors (for example, LPT parallel port). Legacy devices on
+ * the LPC bus consist for example of serial and parallel ports, PS/2 keyboard
+ * / mouse, and the floppy disk controller. A system that lacks all known
+ * legacy devices can assume all devices can be detected exclusively via
+ * standard device enumeration mechanisms including the ACPI namespace.
+ *
+ * A system which does not have ACPI_FADT_LEGACY_DEVICES enabled must not
+ * have any of the legacy devices enumerated below present.
+ */
+struct x86_legacy_devices {
+ int pnpbios;
+};
+
+/**
+ * struct x86_legacy_features - legacy x86 features
+ *
+ * @rtc: this device has a CMOS real-time clock present
+ * @ebda_search: it's safe to search for the EBDA signature in the hardware's
+ * low RAM
+ * @devices: legacy x86 devices, refer to struct x86_legacy_devices
+ * documentation for further details.
+ */
+struct x86_legacy_features {
+ int rtc;
+ int ebda_search;
+ struct x86_legacy_devices devices;
+};
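As a hedged illustration of how these flags are meant to be consumed (the function below is hypothetical; the real consumers live elsewhere in this series), platform code checks x86_platform.legacy before registering a legacy device:

	/* Sketch, assuming <asm/x86_init.h> as modified above. */
	#include <linux/errno.h>
	#include <asm/x86_init.h>

	static int __init sketch_register_cmos_rtc(void)
	{
		if (!x86_platform.legacy.rtc)
			return -ENODEV;	/* FADT says there is no CMOS RTC */

		/* ... register the rtc_cmos platform device here ... */
		return 0;
	}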
+
+/**
* struct x86_platform_ops - platform specific runtime functions
* @calibrate_tsc: calibrate TSC
* @get_wallclock: get time from HW clock like RTC etc.
@@ -152,6 +190,14 @@ struct timespec;
* @save_sched_clock_state: save state for sched_clock() on suspend
* @restore_sched_clock_state: restore state for sched_clock() on resume
* @apic_post_init: adjust apic if needed
+ * @legacy: legacy features
+ * @set_legacy_features: override legacy features. Use of this callback
+ * is highly discouraged. You should only need
+ * this if your hardware platform requires further
+ * custom fine-tuning far beyond what may be
+ * possible in x86_early_init_platform_quirks() by
+ * only using the current x86_hardware_subarch
+ * semantics.
*/
struct x86_platform_ops {
unsigned long (*calibrate_tsc)(void);
@@ -165,6 +211,8 @@ struct x86_platform_ops {
void (*save_sched_clock_state)(void);
void (*restore_sched_clock_state)(void);
void (*apic_post_init)(void);
+ struct x86_legacy_features legacy;
+ void (*set_legacy_features)(void);
};
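And the provider side, again only a sketch with hypothetical names: a platform that must override the defaults installs the (discouraged) callback rather than patching x86_platform.legacy at arbitrary points:

	static void sketch_platform_set_legacy_features(void)
	{
		x86_platform.legacy.devices.pnpbios = 0;	/* hypothetical override */
	}

	/* in that platform's early setup code: */
	/* x86_platform.set_legacy_features = sketch_platform_set_legacy_features; */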
struct pci_dev;
@@ -186,6 +234,8 @@ extern struct x86_cpuinit_ops x86_cpuinit;
extern struct x86_platform_ops x86_platform;
extern struct x86_msi_ops x86_msi;
extern struct x86_io_apic_ops x86_io_apic_ops;
+
+extern void x86_early_init_platform_quirks(void);
extern void x86_init_noop(void);
extern void x86_init_uint_noop(unsigned int unused);
diff --git a/arch/x86/include/asm/xor_32.h b/arch/x86/include/asm/xor_32.h
index c54beb44c4c1..635eac543922 100644
--- a/arch/x86/include/asm/xor_32.h
+++ b/arch/x86/include/asm/xor_32.h
@@ -550,7 +550,7 @@ static struct xor_block_template xor_block_pIII_sse = {
#define XOR_TRY_TEMPLATES \
do { \
AVX_XOR_SPEED; \
- if (cpu_has_xmm) { \
+ if (boot_cpu_has(X86_FEATURE_XMM)) { \
xor_speed(&xor_block_pIII_sse); \
xor_speed(&xor_block_sse_pf64); \
} else if (boot_cpu_has(X86_FEATURE_MMX)) { \
diff --git a/arch/x86/include/asm/xor_avx.h b/arch/x86/include/asm/xor_avx.h
index 7c0a517ec751..22a7b1870a31 100644
--- a/arch/x86/include/asm/xor_avx.h
+++ b/arch/x86/include/asm/xor_avx.h
@@ -167,12 +167,12 @@ static struct xor_block_template xor_block_avx = {
#define AVX_XOR_SPEED \
do { \
- if (cpu_has_avx && cpu_has_osxsave) \
+ if (boot_cpu_has(X86_FEATURE_AVX) && boot_cpu_has(X86_FEATURE_OSXSAVE)) \
xor_speed(&xor_block_avx); \
} while (0)
#define AVX_SELECT(FASTEST) \
- (cpu_has_avx && cpu_has_osxsave ? &xor_block_avx : FASTEST)
+ (boot_cpu_has(X86_FEATURE_AVX) && boot_cpu_has(X86_FEATURE_OSXSAVE) ? &xor_block_avx : FASTEST)
#else
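The cpu_has_* conversions in this patch all follow one idiom; a minimal sketch of the replacement pattern (helper name invented):

	/* Sketch: the boot_cpu_has() test that replaces the old cpu_has_* macros. */
	#include <asm/cpufeature.h>

	static bool sketch_avx_xor_usable(void)
	{
		/* Test the boot CPU's capability bitmap directly by feature flag. */
		return boot_cpu_has(X86_FEATURE_AVX) &&
		       boot_cpu_has(X86_FEATURE_OSXSAVE);
	}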
diff --git a/arch/x86/include/uapi/asm/bootparam.h b/arch/x86/include/uapi/asm/bootparam.h
index 329254373479..c18ce67495fa 100644
--- a/arch/x86/include/uapi/asm/bootparam.h
+++ b/arch/x86/include/uapi/asm/bootparam.h
@@ -157,7 +157,46 @@ struct boot_params {
__u8 _pad9[276]; /* 0xeec */
} __attribute__((packed));
-enum {
+/**
+ * enum x86_hardware_subarch - x86 hardware subarchitecture
+ *
+ * The x86 hardware_subarch and hardware_subarch_data were added as of the x86
+ * boot protocol 2.07 to help distinguish and support custom x86 boot
+ * sequences. This enum represents accepted values for the x86
+ * hardware_subarch. Custom x86 boot sequences (not X86_SUBARCH_PC) do not
+ * have or simply *cannot* make use of natural stubs like BIOS or EFI, so the
+ * hardware_subarch can be used on the Linux entry path to revector to a
+ * subarchitecture stub when needed. This subarchitecture stub can be used to
+ * set up Linux boot parameters or for special care to account for nonstandard
+ * handling of page tables.
+ *
+ * This enum should only ever be used by x86 code, and the code that uses
+ * it should be well contained and compartmentalized.
+ *
+ * KVM and Xen HVM do not have a subarch as these are expected to follow
+ * standard x86 boot entries. If there is a genuine need for a "hypervisor" type,
+ * that should be considered separately in the future. Future guest types
+ * should seriously consider working with standard x86 boot stubs such as
+ * the BIOS or EFI boot stubs.
+ *
+ * WARNING: this enum is only used for legacy hacks, for platform features that
+ * are not easily enumerated or discoverable. You should not ever use
+ * this for new features.
+ *
+ * @X86_SUBARCH_PC: Should be used if the hardware is enumerable using standard
+ * PC mechanisms (PCI, ACPI) and doesn't need a special boot flow.
+ * @X86_SUBARCH_LGUEST: Used for x86 hypervisor demo, lguest
+ * @X86_SUBARCH_XEN: Used for Xen guest types which follow the PV boot path,
+ * which start at asm startup_xen() entry point and later jump to the C
+ * xen_start_kernel() entry point. Both domU and dom0 types of guests are
+ * currently supported through this PV boot path.
+ * @X86_SUBARCH_INTEL_MID: Used for Intel MID (Mobile Internet Device) platform
+ * systems which do not have the PCI legacy interfaces.
+ * @X86_SUBARCH_CE4100: Used for the Intel CE media processor (CE4100) SoC
+ * for set-top boxes and media devices; the use of a subarch for CE4100
+ * is more of a hack...
+ */
+enum x86_hardware_subarch {
X86_SUBARCH_PC = 0,
X86_SUBARCH_LGUEST,
X86_SUBARCH_XEN,
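A hedged sketch of the dispatch the kernel-doc above describes (the function is illustrative, not the one added by this series); early boot code branches on boot_params.hdr.hardware_subarch:

	/* Sketch, assuming <asm/setup.h> (boot_params) and the enum above. */
	static void __init sketch_platform_quirks(void)
	{
		switch (boot_params.hdr.hardware_subarch) {
		case X86_SUBARCH_PC:
			break;		/* standard PCI/ACPI enumeration */
		case X86_SUBARCH_INTEL_MID:
		case X86_SUBARCH_CE4100:
			/* no PNPBIOS and no CMOS RTC on these platforms */
			break;
		default:
			break;
		}
	}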
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index cd54147cb365..739c0c594022 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -216,9 +216,9 @@ struct kvm_cpuid_entry2 {
__u32 padding[3];
};
-#define KVM_CPUID_FLAG_SIGNIFCANT_INDEX BIT(0)
-#define KVM_CPUID_FLAG_STATEFUL_FUNC BIT(1)
-#define KVM_CPUID_FLAG_STATE_READ_NEXT BIT(2)
+#define KVM_CPUID_FLAG_SIGNIFCANT_INDEX (1 << 0)
+#define KVM_CPUID_FLAG_STATEFUL_FUNC (1 << 1)
+#define KVM_CPUID_FLAG_STATE_READ_NEXT (1 << 2)
/* for KVM_SET_CPUID2 */
struct kvm_cpuid2 {
diff --git a/arch/x86/include/uapi/asm/svm.h b/arch/x86/include/uapi/asm/svm.h
index 8a4add8e4639..3725e145aa58 100644
--- a/arch/x86/include/uapi/asm/svm.h
+++ b/arch/x86/include/uapi/asm/svm.h
@@ -2,10 +2,12 @@
#define _UAPI__SVM_H
#define SVM_EXIT_READ_CR0 0x000
+#define SVM_EXIT_READ_CR2 0x002
#define SVM_EXIT_READ_CR3 0x003
#define SVM_EXIT_READ_CR4 0x004
#define SVM_EXIT_READ_CR8 0x008
#define SVM_EXIT_WRITE_CR0 0x010
+#define SVM_EXIT_WRITE_CR2 0x012
#define SVM_EXIT_WRITE_CR3 0x013
#define SVM_EXIT_WRITE_CR4 0x014
#define SVM_EXIT_WRITE_CR8 0x018
@@ -73,15 +75,19 @@
#define SVM_EXIT_MWAIT_COND 0x08c
#define SVM_EXIT_XSETBV 0x08d
#define SVM_EXIT_NPF 0x400
+#define SVM_EXIT_AVIC_INCOMPLETE_IPI 0x401
+#define SVM_EXIT_AVIC_UNACCELERATED_ACCESS 0x402
#define SVM_EXIT_ERR -1
#define SVM_EXIT_REASONS \
{ SVM_EXIT_READ_CR0, "read_cr0" }, \
+ { SVM_EXIT_READ_CR2, "read_cr2" }, \
{ SVM_EXIT_READ_CR3, "read_cr3" }, \
{ SVM_EXIT_READ_CR4, "read_cr4" }, \
{ SVM_EXIT_READ_CR8, "read_cr8" }, \
{ SVM_EXIT_WRITE_CR0, "write_cr0" }, \
+ { SVM_EXIT_WRITE_CR2, "write_cr2" }, \
{ SVM_EXIT_WRITE_CR3, "write_cr3" }, \
{ SVM_EXIT_WRITE_CR4, "write_cr4" }, \
{ SVM_EXIT_WRITE_CR8, "write_cr8" }, \
@@ -89,32 +95,66 @@
{ SVM_EXIT_READ_DR1, "read_dr1" }, \
{ SVM_EXIT_READ_DR2, "read_dr2" }, \
{ SVM_EXIT_READ_DR3, "read_dr3" }, \
+ { SVM_EXIT_READ_DR4, "read_dr4" }, \
+ { SVM_EXIT_READ_DR5, "read_dr5" }, \
+ { SVM_EXIT_READ_DR6, "read_dr6" }, \
+ { SVM_EXIT_READ_DR7, "read_dr7" }, \
{ SVM_EXIT_WRITE_DR0, "write_dr0" }, \
{ SVM_EXIT_WRITE_DR1, "write_dr1" }, \
{ SVM_EXIT_WRITE_DR2, "write_dr2" }, \
{ SVM_EXIT_WRITE_DR3, "write_dr3" }, \
+ { SVM_EXIT_WRITE_DR4, "write_dr4" }, \
{ SVM_EXIT_WRITE_DR5, "write_dr5" }, \
+ { SVM_EXIT_WRITE_DR6, "write_dr6" }, \
{ SVM_EXIT_WRITE_DR7, "write_dr7" }, \
+ { SVM_EXIT_EXCP_BASE + DE_VECTOR, "DE excp" }, \
{ SVM_EXIT_EXCP_BASE + DB_VECTOR, "DB excp" }, \
{ SVM_EXIT_EXCP_BASE + BP_VECTOR, "BP excp" }, \
+ { SVM_EXIT_EXCP_BASE + OF_VECTOR, "OF excp" }, \
+ { SVM_EXIT_EXCP_BASE + BR_VECTOR, "BR excp" }, \
{ SVM_EXIT_EXCP_BASE + UD_VECTOR, "UD excp" }, \
- { SVM_EXIT_EXCP_BASE + PF_VECTOR, "PF excp" }, \
{ SVM_EXIT_EXCP_BASE + NM_VECTOR, "NM excp" }, \
+ { SVM_EXIT_EXCP_BASE + DF_VECTOR, "DF excp" }, \
+ { SVM_EXIT_EXCP_BASE + TS_VECTOR, "TS excp" }, \
+ { SVM_EXIT_EXCP_BASE + NP_VECTOR, "NP excp" }, \
+ { SVM_EXIT_EXCP_BASE + SS_VECTOR, "SS excp" }, \
+ { SVM_EXIT_EXCP_BASE + GP_VECTOR, "GP excp" }, \
+ { SVM_EXIT_EXCP_BASE + PF_VECTOR, "PF excp" }, \
+ { SVM_EXIT_EXCP_BASE + MF_VECTOR, "MF excp" }, \
{ SVM_EXIT_EXCP_BASE + AC_VECTOR, "AC excp" }, \
{ SVM_EXIT_EXCP_BASE + MC_VECTOR, "MC excp" }, \
+ { SVM_EXIT_EXCP_BASE + XM_VECTOR, "XF excp" }, \
{ SVM_EXIT_INTR, "interrupt" }, \
{ SVM_EXIT_NMI, "nmi" }, \
{ SVM_EXIT_SMI, "smi" }, \
{ SVM_EXIT_INIT, "init" }, \
{ SVM_EXIT_VINTR, "vintr" }, \
+ { SVM_EXIT_CR0_SEL_WRITE, "cr0_sel_write" }, \
+ { SVM_EXIT_IDTR_READ, "read_idtr" }, \
+ { SVM_EXIT_GDTR_READ, "read_gdtr" }, \
+ { SVM_EXIT_LDTR_READ, "read_ldtr" }, \
+ { SVM_EXIT_TR_READ, "read_rt" }, \
+ { SVM_EXIT_IDTR_WRITE, "write_idtr" }, \
+ { SVM_EXIT_GDTR_WRITE, "write_gdtr" }, \
+ { SVM_EXIT_LDTR_WRITE, "write_ldtr" }, \
+ { SVM_EXIT_TR_WRITE, "write_rt" }, \
+ { SVM_EXIT_RDTSC, "rdtsc" }, \
+ { SVM_EXIT_RDPMC, "rdpmc" }, \
+ { SVM_EXIT_PUSHF, "pushf" }, \
+ { SVM_EXIT_POPF, "popf" }, \
{ SVM_EXIT_CPUID, "cpuid" }, \
+ { SVM_EXIT_RSM, "rsm" }, \
+ { SVM_EXIT_IRET, "iret" }, \
+ { SVM_EXIT_SWINT, "swint" }, \
{ SVM_EXIT_INVD, "invd" }, \
+ { SVM_EXIT_PAUSE, "pause" }, \
{ SVM_EXIT_HLT, "hlt" }, \
{ SVM_EXIT_INVLPG, "invlpg" }, \
{ SVM_EXIT_INVLPGA, "invlpga" }, \
{ SVM_EXIT_IOIO, "io" }, \
{ SVM_EXIT_MSR, "msr" }, \
{ SVM_EXIT_TASK_SWITCH, "task_switch" }, \
+ { SVM_EXIT_FERR_FREEZE, "ferr_freeze" }, \
{ SVM_EXIT_SHUTDOWN, "shutdown" }, \
{ SVM_EXIT_VMRUN, "vmrun" }, \
{ SVM_EXIT_VMMCALL, "hypercall" }, \
@@ -123,11 +163,16 @@
{ SVM_EXIT_STGI, "stgi" }, \
{ SVM_EXIT_CLGI, "clgi" }, \
{ SVM_EXIT_SKINIT, "skinit" }, \
+ { SVM_EXIT_RDTSCP, "rdtscp" }, \
+ { SVM_EXIT_ICEBP, "icebp" }, \
{ SVM_EXIT_WBINVD, "wbinvd" }, \
{ SVM_EXIT_MONITOR, "monitor" }, \
{ SVM_EXIT_MWAIT, "mwait" }, \
{ SVM_EXIT_XSETBV, "xsetbv" }, \
- { SVM_EXIT_NPF, "npf" }
+ { SVM_EXIT_NPF, "npf" }, \
+ { SVM_EXIT_AVIC_INCOMPLETE_IPI, "avic_incomplete_ipi" }, \
+ { SVM_EXIT_AVIC_UNACCELERATED_ACCESS, "avic_unaccelerated_access" }, \
+ { SVM_EXIT_ERR, "invalid_guest_state" }
#endif /* _UAPI__SVM_H */
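A hedged sketch of how the expanded SVM_EXIT_REASONS table is typically consumed: the macro expands to { code, "name" } pairs, so it can initialize a lookup table directly (the helper below is illustrative, not kernel code):

	/* Sketch: decode an SVM exit code using the table above. */
	static const struct {
		unsigned long code;
		const char *name;
	} svm_exit_names[] = { SVM_EXIT_REASONS };

	static const char *sketch_svm_exit_name(unsigned long exit_code)
	{
		unsigned int i;

		for (i = 0; i < sizeof(svm_exit_names) / sizeof(svm_exit_names[0]); i++)
			if (svm_exit_names[i].code == exit_code)
				return svm_exit_names[i].name;
		return "unknown";
	}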
diff --git a/arch/x86/kernel/Makefile b/arch/x86/kernel/Makefile
index 616ebd22ef9a..0503f5bfb18d 100644
--- a/arch/x86/kernel/Makefile
+++ b/arch/x86/kernel/Makefile
@@ -2,7 +2,11 @@
# Makefile for the linux kernel.
#
-extra-y := head_$(BITS).o head$(BITS).o head.o vmlinux.lds
+extra-y := head_$(BITS).o
+extra-y += head$(BITS).o
+extra-y += ebda.o
+extra-y += platform-quirks.o
+extra-y += vmlinux.lds
CPPFLAGS_vmlinux.lds += -U$(UTS_MACHINE)
@@ -79,7 +83,6 @@ obj-$(CONFIG_X86_MPPARSE) += mpparse.o
obj-y += apic/
obj-$(CONFIG_X86_REBOOTFIXUPS) += reboot_fixups_32.o
obj-$(CONFIG_DYNAMIC_FTRACE) += ftrace.o
-obj-$(CONFIG_LIVEPATCH) += livepatch.o
obj-$(CONFIG_FUNCTION_GRAPH_TRACER) += ftrace.o
obj-$(CONFIG_FTRACE_SYSCALLS) += ftrace.o
obj-$(CONFIG_X86_TSC) += trace_clock.o
diff --git a/arch/x86/kernel/acpi/boot.c b/arch/x86/kernel/acpi/boot.c
index 8c2f1ef6ca23..9414f84584e4 100644
--- a/arch/x86/kernel/acpi/boot.c
+++ b/arch/x86/kernel/acpi/boot.c
@@ -136,7 +136,7 @@ static int __init acpi_parse_madt(struct acpi_table_header *table)
{
struct acpi_table_madt *madt = NULL;
- if (!cpu_has_apic)
+ if (!boot_cpu_has(X86_FEATURE_APIC))
return -EINVAL;
madt = (struct acpi_table_madt *)table;
@@ -445,7 +445,6 @@ static void __init acpi_sci_ioapic_setup(u8 bus_irq, u16 polarity, u16 trigger,
polarity = acpi_sci_flags & ACPI_MADT_POLARITY_MASK;
mp_override_legacy_irq(bus_irq, polarity, trigger, gsi);
- acpi_penalize_sci_irq(bus_irq, trigger, polarity);
/*
* stash over-ride to indicate we've been here
@@ -913,6 +912,15 @@ late_initcall(hpet_insert_resource);
static int __init acpi_parse_fadt(struct acpi_table_header *table)
{
+ if (!(acpi_gbl_FADT.boot_flags & ACPI_FADT_LEGACY_DEVICES)) {
+ pr_debug("ACPI: no legacy devices present\n");
+ x86_platform.legacy.devices.pnpbios = 0;
+ }
+
+ if (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC) {
+ pr_debug("ACPI: not registering RTC platform device\n");
+ x86_platform.legacy.rtc = 0;
+ }
#ifdef CONFIG_X86_PM_TIMER
/* detect the location of the ACPI PM Timer */
@@ -951,7 +959,7 @@ static int __init early_acpi_parse_madt_lapic_addr_ovr(void)
{
int count;
- if (!cpu_has_apic)
+ if (!boot_cpu_has(X86_FEATURE_APIC))
return -ENODEV;
/*
@@ -979,7 +987,7 @@ static int __init acpi_parse_madt_lapic_entries(void)
int ret;
struct acpi_subtable_proc madt_proc[2];
- if (!cpu_has_apic)
+ if (!boot_cpu_has(X86_FEATURE_APIC))
return -ENODEV;
/*
@@ -1125,7 +1133,7 @@ static int __init acpi_parse_madt_ioapic_entries(void)
if (acpi_disabled || acpi_noirq)
return -ENODEV;
- if (!cpu_has_apic)
+ if (!boot_cpu_has(X86_FEATURE_APIC))
return -ENODEV;
/*
diff --git a/arch/x86/kernel/alternative.c b/arch/x86/kernel/alternative.c
index 25f909362b7a..5cb272a7a5a3 100644
--- a/arch/x86/kernel/alternative.c
+++ b/arch/x86/kernel/alternative.c
@@ -11,6 +11,7 @@
#include <linux/stop_machine.h>
#include <linux/slab.h>
#include <linux/kdebug.h>
+#include <asm/text-patching.h>
#include <asm/alternative.h>
#include <asm/sections.h>
#include <asm/pgtable.h>
diff --git a/arch/x86/kernel/apic/apic.c b/arch/x86/kernel/apic/apic.c
index d356987a04e9..60078a67d7e3 100644
--- a/arch/x86/kernel/apic/apic.c
+++ b/arch/x86/kernel/apic/apic.c
@@ -607,7 +607,7 @@ static void __init lapic_cal_handler(struct clock_event_device *dev)
long tapic = apic_read(APIC_TMCCT);
unsigned long pm = acpi_pm_read_early();
- if (cpu_has_tsc)
+ if (boot_cpu_has(X86_FEATURE_TSC))
tsc = rdtsc();
switch (lapic_cal_loops++) {
@@ -668,7 +668,7 @@ calibrate_by_pmtimer(long deltapm, long *delta, long *deltatsc)
*delta = (long)res;
/* Correct the tsc counter value */
- if (cpu_has_tsc) {
+ if (boot_cpu_has(X86_FEATURE_TSC)) {
res = (((u64)(*deltatsc)) * pm_100ms);
do_div(res, deltapm);
apic_printk(APIC_VERBOSE, "TSC delta adjusted to "
@@ -760,7 +760,7 @@ static int __init calibrate_APIC_clock(void)
apic_printk(APIC_VERBOSE, "..... calibration result: %u\n",
lapic_timer_frequency);
- if (cpu_has_tsc) {
+ if (boot_cpu_has(X86_FEATURE_TSC)) {
apic_printk(APIC_VERBOSE, "..... CPU clock speed is "
"%ld.%04ld MHz.\n",
(deltatsc / LAPIC_CAL_LOOPS) / (1000000 / HZ),
@@ -1085,7 +1085,7 @@ void lapic_shutdown(void)
{
unsigned long flags;
- if (!cpu_has_apic && !apic_from_smp_config())
+ if (!boot_cpu_has(X86_FEATURE_APIC) && !apic_from_smp_config())
return;
local_irq_save(flags);
@@ -1134,7 +1134,7 @@ void __init init_bsp_APIC(void)
* Don't do the setup now if we have a SMP BIOS as the
* through-I/O-APIC virtual wire mode might be active.
*/
- if (smp_found_config || !cpu_has_apic)
+ if (smp_found_config || !boot_cpu_has(X86_FEATURE_APIC))
return;
/*
@@ -1227,7 +1227,7 @@ void setup_local_APIC(void)
unsigned long long tsc = 0, ntsc;
long long max_loops = cpu_khz ? cpu_khz : 1000000;
- if (cpu_has_tsc)
+ if (boot_cpu_has(X86_FEATURE_TSC))
tsc = rdtsc();
if (disable_apic) {
@@ -1311,7 +1311,7 @@ void setup_local_APIC(void)
break;
}
if (queued) {
- if (cpu_has_tsc && cpu_khz) {
+ if (boot_cpu_has(X86_FEATURE_TSC) && cpu_khz) {
ntsc = rdtsc();
max_loops = (cpu_khz << 10) - (ntsc - tsc);
} else
@@ -1445,7 +1445,7 @@ static void __x2apic_disable(void)
{
u64 msr;
- if (!cpu_has_apic)
+ if (!boot_cpu_has(X86_FEATURE_APIC))
return;
rdmsrl(MSR_IA32_APICBASE, msr);
@@ -1561,7 +1561,7 @@ void __init check_x2apic(void)
pr_info("x2apic: enabled by BIOS, switching to x2apic ops\n");
x2apic_mode = 1;
x2apic_state = X2APIC_ON;
- } else if (!cpu_has_x2apic) {
+ } else if (!boot_cpu_has(X86_FEATURE_X2APIC)) {
x2apic_state = X2APIC_DISABLED;
}
}
@@ -1632,7 +1632,7 @@ void __init enable_IR_x2apic(void)
*/
static int __init detect_init_APIC(void)
{
- if (!cpu_has_apic) {
+ if (!boot_cpu_has(X86_FEATURE_APIC)) {
pr_info("No local APIC present\n");
return -1;
}
@@ -1711,14 +1711,14 @@ static int __init detect_init_APIC(void)
goto no_apic;
case X86_VENDOR_INTEL:
if (boot_cpu_data.x86 == 6 || boot_cpu_data.x86 == 15 ||
- (boot_cpu_data.x86 == 5 && cpu_has_apic))
+ (boot_cpu_data.x86 == 5 && boot_cpu_has(X86_FEATURE_APIC)))
break;
goto no_apic;
default:
goto no_apic;
}
- if (!cpu_has_apic) {
+ if (!boot_cpu_has(X86_FEATURE_APIC)) {
/*
* Over-ride BIOS and try to enable the local APIC only if
* "lapic" specified.
@@ -2233,19 +2233,19 @@ int __init APIC_init_uniprocessor(void)
return -1;
}
#ifdef CONFIG_X86_64
- if (!cpu_has_apic) {
+ if (!boot_cpu_has(X86_FEATURE_APIC)) {
disable_apic = 1;
pr_info("Apic disabled by BIOS\n");
return -1;
}
#else
- if (!smp_found_config && !cpu_has_apic)
+ if (!smp_found_config && !boot_cpu_has(X86_FEATURE_APIC))
return -1;
/*
* Complain if the BIOS pretends there is one.
*/
- if (!cpu_has_apic &&
+ if (!boot_cpu_has(X86_FEATURE_APIC) &&
APIC_INTEGRATED(apic_version[boot_cpu_physical_apicid])) {
pr_err("BIOS bug, local APIC 0x%x not detected!...\n",
boot_cpu_physical_apicid);
@@ -2426,7 +2426,7 @@ static void apic_pm_activate(void)
static int __init init_lapic_sysfs(void)
{
/* XXX: remove suspend/resume procs if !apic_pm_state.active? */
- if (cpu_has_apic)
+ if (boot_cpu_has(X86_FEATURE_APIC))
register_syscore_ops(&lapic_syscore_ops);
return 0;
diff --git a/arch/x86/kernel/apic/apic_noop.c b/arch/x86/kernel/apic/apic_noop.c
index 331a7a07c48f..13d19ed58514 100644
--- a/arch/x86/kernel/apic/apic_noop.c
+++ b/arch/x86/kernel/apic/apic_noop.c
@@ -100,13 +100,13 @@ static void noop_vector_allocation_domain(int cpu, struct cpumask *retmask,
static u32 noop_apic_read(u32 reg)
{
- WARN_ON_ONCE((cpu_has_apic && !disable_apic));
+ WARN_ON_ONCE(boot_cpu_has(X86_FEATURE_APIC) && !disable_apic);
return 0;
}
static void noop_apic_write(u32 reg, u32 v)
{
- WARN_ON_ONCE(cpu_has_apic && !disable_apic);
+ WARN_ON_ONCE(boot_cpu_has(X86_FEATURE_APIC) && !disable_apic);
}
struct apic apic_noop = {
diff --git a/arch/x86/kernel/apic/hw_nmi.c b/arch/x86/kernel/apic/hw_nmi.c
index 045e424fb368..7788ce643bf4 100644
--- a/arch/x86/kernel/apic/hw_nmi.c
+++ b/arch/x86/kernel/apic/hw_nmi.c
@@ -18,7 +18,6 @@
#include <linux/nmi.h>
#include <linux/module.h>
#include <linux/delay.h>
-#include <linux/seq_buf.h>
#ifdef CONFIG_HARDLOCKUP_DETECTOR
u64 hw_nmi_get_sample_period(int watchdog_thresh)
diff --git a/arch/x86/kernel/apic/io_apic.c b/arch/x86/kernel/apic/io_apic.c
index fdb0fbfb1197..84e33ff5a6d5 100644
--- a/arch/x86/kernel/apic/io_apic.c
+++ b/arch/x86/kernel/apic/io_apic.c
@@ -1454,7 +1454,7 @@ void native_disable_io_apic(void)
ioapic_write_entry(ioapic_i8259.apic, ioapic_i8259.pin, entry);
}
- if (cpu_has_apic || apic_from_smp_config())
+ if (boot_cpu_has(X86_FEATURE_APIC) || apic_from_smp_config())
disconnect_bsp_APIC(ioapic_i8259.pin != -1);
}
diff --git a/arch/x86/kernel/apic/ipi.c b/arch/x86/kernel/apic/ipi.c
index 28bde88b0085..2a0f225afebd 100644
--- a/arch/x86/kernel/apic/ipi.c
+++ b/arch/x86/kernel/apic/ipi.c
@@ -230,7 +230,7 @@ int safe_smp_processor_id(void)
{
int apicid, cpuid;
- if (!cpu_has_apic)
+ if (!boot_cpu_has(X86_FEATURE_APIC))
return 0;
apicid = hard_smp_processor_id();
diff --git a/arch/x86/kernel/apic/vector.c b/arch/x86/kernel/apic/vector.c
index ef495511f019..a5e400afc563 100644
--- a/arch/x86/kernel/apic/vector.c
+++ b/arch/x86/kernel/apic/vector.c
@@ -944,7 +944,7 @@ static int __init print_ICs(void)
print_PIC();
/* don't print out if apic is not there */
- if (!cpu_has_apic && !apic_from_smp_config())
+ if (!boot_cpu_has(X86_FEATURE_APIC) && !apic_from_smp_config())
return 0;
print_local_APICs(show_lapic);
diff --git a/arch/x86/kernel/apic/x2apic_uv_x.c b/arch/x86/kernel/apic/x2apic_uv_x.c
index d7ce96a7daca..29003154fafd 100644
--- a/arch/x86/kernel/apic/x2apic_uv_x.c
+++ b/arch/x86/kernel/apic/x2apic_uv_x.c
@@ -48,12 +48,35 @@ static u64 gru_start_paddr, gru_end_paddr;
static u64 gru_dist_base, gru_first_node_paddr = -1LL, gru_last_node_paddr;
static u64 gru_dist_lmask, gru_dist_umask;
static union uvh_apicid uvh_apicid;
+
+/* info derived from CPUID */
+static struct {
+ unsigned int apicid_shift;
+ unsigned int apicid_mask;
+ unsigned int socketid_shift; /* aka pnode_shift for UV1/2/3 */
+ unsigned int pnode_mask;
+ unsigned int gpa_shift;
+} uv_cpuid;
+
int uv_min_hub_revision_id;
EXPORT_SYMBOL_GPL(uv_min_hub_revision_id);
unsigned int uv_apicid_hibits;
EXPORT_SYMBOL_GPL(uv_apicid_hibits);
static struct apic apic_x2apic_uv_x;
+static struct uv_hub_info_s uv_hub_info_node0;
+
+/* Set this to use hardware error handler instead of kernel panic */
+static int disable_uv_undefined_panic = 1;
+unsigned long uv_undefined(char *str)
+{
+ if (likely(!disable_uv_undefined_panic))
+ panic("UV: error: undefined MMR: %s\n", str);
+ else
+ pr_crit("UV: error: undefined MMR: %s\n", str);
+ return ~0ul; /* cause a machine fault */
+}
+EXPORT_SYMBOL(uv_undefined);
static unsigned long __init uv_early_read_mmr(unsigned long addr)
{
@@ -108,21 +131,71 @@ static int __init early_get_pnodeid(void)
case UV3_HUB_PART_NUMBER_X:
uv_min_hub_revision_id += UV3_HUB_REVISION_BASE;
break;
+ case UV4_HUB_PART_NUMBER:
+ uv_min_hub_revision_id += UV4_HUB_REVISION_BASE - 1;
+ break;
}
uv_hub_info->hub_revision = uv_min_hub_revision_id;
- pnode = (node_id.s.node_id >> 1) & ((1 << m_n_config.s.n_skt) - 1);
+ uv_cpuid.pnode_mask = (1 << m_n_config.s.n_skt) - 1;
+ pnode = (node_id.s.node_id >> 1) & uv_cpuid.pnode_mask;
+ uv_cpuid.gpa_shift = 46; /* default unless changed */
+
+ pr_info("UV: rev:%d part#:%x nodeid:%04x n_skt:%d pnmsk:%x pn:%x\n",
+ node_id.s.revision, node_id.s.part_number, node_id.s.node_id,
+ m_n_config.s.n_skt, uv_cpuid.pnode_mask, pnode);
return pnode;
}
-static void __init early_get_apic_pnode_shift(void)
+/* [copied from arch/x86/kernel/cpu/topology.c:detect_extended_topology()] */
+#define SMT_LEVEL 0 /* leaf 0xb SMT level */
+#define INVALID_TYPE 0 /* leaf 0xb sub-leaf types */
+#define SMT_TYPE 1
+#define CORE_TYPE 2
+#define LEAFB_SUBTYPE(ecx) (((ecx) >> 8) & 0xff)
+#define BITS_SHIFT_NEXT_LEVEL(eax) ((eax) & 0x1f)
+
+static void set_x2apic_bits(void)
+{
+ unsigned int eax, ebx, ecx, edx, sub_index;
+ unsigned int sid_shift;
+
+ cpuid(0, &eax, &ebx, &ecx, &edx);
+ if (eax < 0xb) {
+ pr_info("UV: CPU does not have CPUID.11\n");
+ return;
+ }
+ cpuid_count(0xb, SMT_LEVEL, &eax, &ebx, &ecx, &edx);
+ if (ebx == 0 || (LEAFB_SUBTYPE(ecx) != SMT_TYPE)) {
+ pr_info("UV: CPUID.11 not implemented\n");
+ return;
+ }
+ sid_shift = BITS_SHIFT_NEXT_LEVEL(eax);
+ sub_index = 1;
+ do {
+ cpuid_count(0xb, sub_index, &eax, &ebx, &ecx, &edx);
+ if (LEAFB_SUBTYPE(ecx) == CORE_TYPE) {
+ sid_shift = BITS_SHIFT_NEXT_LEVEL(eax);
+ break;
+ }
+ sub_index++;
+ } while (LEAFB_SUBTYPE(ecx) != INVALID_TYPE);
+ uv_cpuid.apicid_shift = 0;
+ uv_cpuid.apicid_mask = (~(-1 << sid_shift));
+ uv_cpuid.socketid_shift = sid_shift;
+}
+
+static void __init early_get_apic_socketid_shift(void)
{
- uvh_apicid.v = uv_early_read_mmr(UVH_APICID);
- if (!uvh_apicid.v)
- /*
- * Old bios, use default value
- */
- uvh_apicid.s.pnode_shift = UV_APIC_PNODE_SHIFT;
+ if (is_uv2_hub() || is_uv3_hub())
+ uvh_apicid.v = uv_early_read_mmr(UVH_APICID);
+
+ set_x2apic_bits();
+
+ pr_info("UV: apicid_shift:%d apicid_mask:0x%x\n",
+ uv_cpuid.apicid_shift, uv_cpuid.apicid_mask);
+ pr_info("UV: socketid_shift:%d pnode_mask:0x%x\n",
+ uv_cpuid.socketid_shift, uv_cpuid.pnode_mask);
}
/*
@@ -150,13 +223,18 @@ static int __init uv_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
if (strncmp(oem_id, "SGI", 3) != 0)
return 0;
+ /* Setup early hub type field in uv_hub_info for Node 0 */
+ uv_cpu_info->p_uv_hub_info = &uv_hub_info_node0;
+
/*
* Determine UV arch type.
* SGI: UV100/1000
* SGI2: UV2000/3000
* SGI3: UV300 (truncated to 4 chars because of different varieties)
+ * SGI4: UV400 (truncated to 4 chars because of different varieties)
*/
uv_hub_info->hub_revision =
+ !strncmp(oem_id, "SGI4", 4) ? UV4_HUB_REVISION_BASE :
!strncmp(oem_id, "SGI3", 4) ? UV3_HUB_REVISION_BASE :
!strcmp(oem_id, "SGI2") ? UV2_HUB_REVISION_BASE :
!strcmp(oem_id, "SGI") ? UV1_HUB_REVISION_BASE : 0;
@@ -165,7 +243,7 @@ static int __init uv_acpi_madt_oem_check(char *oem_id, char *oem_table_id)
goto badbios;
pnodeid = early_get_pnodeid();
- early_get_apic_pnode_shift();
+ early_get_apic_socketid_shift();
x86_platform.is_untracked_pat_range = uv_is_untracked_pat_range;
x86_platform.nmi_init = uv_nmi_init;
@@ -211,17 +289,11 @@ int is_uv_system(void)
}
EXPORT_SYMBOL_GPL(is_uv_system);
-DEFINE_PER_CPU(struct uv_hub_info_s, __uv_hub_info);
-EXPORT_PER_CPU_SYMBOL_GPL(__uv_hub_info);
-
-struct uv_blade_info *uv_blade_info;
-EXPORT_SYMBOL_GPL(uv_blade_info);
-
-short *uv_node_to_blade;
-EXPORT_SYMBOL_GPL(uv_node_to_blade);
+void **__uv_hub_info_list;
+EXPORT_SYMBOL_GPL(__uv_hub_info_list);
-short *uv_cpu_to_blade;
-EXPORT_SYMBOL_GPL(uv_cpu_to_blade);
+DEFINE_PER_CPU(struct uv_cpu_info_s, __uv_cpu_info);
+EXPORT_PER_CPU_SYMBOL_GPL(__uv_cpu_info);
short uv_possible_blades;
EXPORT_SYMBOL_GPL(uv_possible_blades);
@@ -229,6 +301,115 @@ EXPORT_SYMBOL_GPL(uv_possible_blades);
unsigned long sn_rtc_cycles_per_second;
EXPORT_SYMBOL(sn_rtc_cycles_per_second);
+/* the following values are used for the per node hub info struct */
+static __initdata unsigned short *_node_to_pnode;
+static __initdata unsigned short _min_socket, _max_socket;
+static __initdata unsigned short _min_pnode, _max_pnode, _gr_table_len;
+static __initdata struct uv_gam_range_entry *uv_gre_table;
+static __initdata struct uv_gam_parameters *uv_gp_table;
+static __initdata unsigned short *_socket_to_node;
+static __initdata unsigned short *_socket_to_pnode;
+static __initdata unsigned short *_pnode_to_socket;
+static __initdata struct uv_gam_range_s *_gr_table;
+#define SOCK_EMPTY ((unsigned short)~0)
+
+extern int uv_hub_info_version(void)
+{
+ return UV_HUB_INFO_VERSION;
+}
+EXPORT_SYMBOL(uv_hub_info_version);
+
+/* Build GAM range lookup table */
+static __init void build_uv_gr_table(void)
+{
+ struct uv_gam_range_entry *gre = uv_gre_table;
+ struct uv_gam_range_s *grt;
+ unsigned long last_limit = 0, ram_limit = 0;
+ int bytes, i, sid, lsid = -1;
+
+ if (!gre)
+ return;
+
+ bytes = _gr_table_len * sizeof(struct uv_gam_range_s);
+ grt = kzalloc(bytes, GFP_KERNEL);
+ BUG_ON(!grt);
+ _gr_table = grt;
+
+ for (; gre->type != UV_GAM_RANGE_TYPE_UNUSED; gre++) {
+ if (gre->type == UV_GAM_RANGE_TYPE_HOLE) {
+ if (!ram_limit) { /* mark hole between ram/non-ram */
+ ram_limit = last_limit;
+ last_limit = gre->limit;
+ lsid++;
+ continue;
+ }
+ last_limit = gre->limit;
+ pr_info("UV: extra hole in GAM RE table @%d\n",
+ (int)(gre - uv_gre_table));
+ continue;
+ }
+ if (_max_socket < gre->sockid) {
+ pr_err("UV: GAM table sockid(%d) too large(>%d) @%d\n",
+ gre->sockid, _max_socket,
+ (int)(gre - uv_gre_table));
+ continue;
+ }
+ sid = gre->sockid - _min_socket;
+ if (lsid < sid) { /* new range */
+ grt = &_gr_table[sid];
+ grt->base = lsid;
+ grt->nasid = gre->nasid;
+ grt->limit = last_limit = gre->limit;
+ lsid = sid;
+ continue;
+ }
+ if (lsid == sid && !ram_limit) { /* update range */
+ if (grt->limit == last_limit) { /* .. if contiguous */
+ grt->limit = last_limit = gre->limit;
+ continue;
+ }
+ }
+ if (!ram_limit) { /* non-contiguous ram range */
+ grt++;
+ grt->base = sid - 1;
+ grt->nasid = gre->nasid;
+ grt->limit = last_limit = gre->limit;
+ continue;
+ }
+ grt++; /* non-contiguous/non-ram */
+ grt->base = grt - _gr_table; /* base is this entry */
+ grt->nasid = gre->nasid;
+ grt->limit = last_limit = gre->limit;
+ lsid++;
+ }
+
+ /* shorten table if possible */
+ grt++;
+ i = grt - _gr_table;
+ if (i < _gr_table_len) {
+ void *ret;
+
+ bytes = i * sizeof(struct uv_gam_range_s);
+ ret = krealloc(_gr_table, bytes, GFP_KERNEL);
+ if (ret) {
+ _gr_table = ret;
+ _gr_table_len = i;
+ }
+ }
+
+ /* display resultant gam range table */
+ for (i = 0, grt = _gr_table; i < _gr_table_len; i++, grt++) {
+ int gb = grt->base;
+ unsigned long start = gb < 0 ? 0 :
+ (unsigned long)_gr_table[gb].limit << UV_GAM_RANGE_SHFT;
+ unsigned long end =
+ (unsigned long)grt->limit << UV_GAM_RANGE_SHFT;
+
+ pr_info("UV: GAM Range %2d %04x 0x%013lx-0x%013lx (%d)\n",
+ i, grt->nasid, start, end, gb);
+ }
+}
+
static int uv_wakeup_secondary(int phys_apicid, unsigned long start_rip)
{
unsigned long val;
@@ -355,7 +536,6 @@ static unsigned long set_apic_id(unsigned int id)
static unsigned int uv_read_apic_id(void)
{
-
return x2apic_get_apic_id(apic_read(APIC_ID));
}
@@ -430,58 +610,38 @@ static void set_x2apic_extra_bits(int pnode)
__this_cpu_write(x2apic_extra_bits, pnode << uvh_apicid.s.pnode_shift);
}
-/*
- * Called on boot cpu.
- */
-static __init int boot_pnode_to_blade(int pnode)
-{
- int blade;
-
- for (blade = 0; blade < uv_num_possible_blades(); blade++)
- if (pnode == uv_blade_info[blade].pnode)
- return blade;
- BUG();
-}
-
-struct redir_addr {
- unsigned long redirect;
- unsigned long alias;
-};
-
+#define UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_LENGTH 3
#define DEST_SHIFT UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_0_MMR_DEST_BASE_SHFT
-static __initdata struct redir_addr redir_addrs[] = {
- {UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_0_MMR, UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_0_MMR},
- {UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_1_MMR, UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_1_MMR},
- {UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_2_MMR, UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_2_MMR},
-};
-
-static unsigned char get_n_lshift(int m_val)
-{
- union uv3h_gr0_gam_gr_config_u m_gr_config;
-
- if (is_uv1_hub())
- return m_val;
-
- if (is_uv2_hub())
- return m_val == 40 ? 40 : 39;
-
- m_gr_config.v = uv_read_local_mmr(UV3H_GR0_GAM_GR_CONFIG);
- return m_gr_config.s3.m_skt;
-}
-
static __init void get_lowmem_redirect(unsigned long *base, unsigned long *size)
{
union uvh_rh_gam_alias210_overlay_config_2_mmr_u alias;
union uvh_rh_gam_alias210_redirect_config_2_mmr_u redirect;
+ unsigned long m_redirect;
+ unsigned long m_overlay;
int i;
- for (i = 0; i < ARRAY_SIZE(redir_addrs); i++) {
- alias.v = uv_read_local_mmr(redir_addrs[i].alias);
+ for (i = 0; i < UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_LENGTH; i++) {
+ switch (i) {
+ case 0:
+ m_redirect = UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_0_MMR;
+ m_overlay = UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_0_MMR;
+ break;
+ case 1:
+ m_redirect = UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_1_MMR;
+ m_overlay = UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_1_MMR;
+ break;
+ case 2:
+ m_redirect = UVH_RH_GAM_ALIAS210_REDIRECT_CONFIG_2_MMR;
+ m_overlay = UVH_RH_GAM_ALIAS210_OVERLAY_CONFIG_2_MMR;
+ break;
+ }
+ alias.v = uv_read_local_mmr(m_overlay);
if (alias.s.enable && alias.s.base == 0) {
*size = (1UL << alias.s.m_alias);
- redirect.v = uv_read_local_mmr(redir_addrs[i].redirect);
- *base = (unsigned long)redirect.s.dest_base << DEST_SHIFT;
+ redirect.v = uv_read_local_mmr(m_redirect);
+ *base = (unsigned long)redirect.s.dest_base
+ << DEST_SHIFT;
return;
}
}
@@ -544,6 +704,8 @@ static __init void map_gru_high(int max_pnode)
{
union uvh_rh_gam_gru_overlay_config_mmr_u gru;
int shift = UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_SHFT;
+ unsigned long mask = UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR_BASE_MASK;
+ unsigned long base;
gru.v = uv_read_local_mmr(UVH_RH_GAM_GRU_OVERLAY_CONFIG_MMR);
if (!gru.s.enable) {
@@ -555,8 +717,9 @@ static __init void map_gru_high(int max_pnode)
map_gru_distributed(gru.v);
return;
}
- map_high("GRU", gru.s.base, shift, shift, max_pnode, map_wb);
- gru_start_paddr = ((u64)gru.s.base << shift);
+ base = (gru.v & mask) >> shift;
+ map_high("GRU", base, shift, shift, max_pnode, map_wb);
+ gru_start_paddr = ((u64)base << shift);
gru_end_paddr = gru_start_paddr + (1UL << shift) * (max_pnode + 1);
}
@@ -595,6 +758,7 @@ static __initdata struct mmioh_config mmiohs[] = {
},
};
+/* UV3 & UV4 have identical MMIOH overlay configs */
static __init void map_mmioh_high_uv3(int index, int min_pnode, int max_pnode)
{
union uv3h_rh_gam_mmioh_overlay_config0_mmr_u overlay;
@@ -674,7 +838,7 @@ static __init void map_mmioh_high(int min_pnode, int max_pnode)
unsigned long mmr, base;
int shift, enable, m_io, n_io;
- if (is_uv3_hub()) {
+ if (is_uv3_hub() || is_uv4_hub()) {
/* Map both MMIOH Regions */
map_mmioh_high_uv3(0, min_pnode, max_pnode);
map_mmioh_high_uv3(1, min_pnode, max_pnode);
@@ -739,8 +903,8 @@ static __init void uv_rtc_init(void)
*/
static void uv_heartbeat(unsigned long ignored)
{
- struct timer_list *timer = &uv_hub_info->scir.timer;
- unsigned char bits = uv_hub_info->scir.state;
+ struct timer_list *timer = &uv_scir_info->timer;
+ unsigned char bits = uv_scir_info->state;
/* flip heartbeat bit */
bits ^= SCIR_CPU_HEARTBEAT;
@@ -760,14 +924,14 @@ static void uv_heartbeat(unsigned long ignored)
static void uv_heartbeat_enable(int cpu)
{
- while (!uv_cpu_hub_info(cpu)->scir.enabled) {
- struct timer_list *timer = &uv_cpu_hub_info(cpu)->scir.timer;
+ while (!uv_cpu_scir_info(cpu)->enabled) {
+ struct timer_list *timer = &uv_cpu_scir_info(cpu)->timer;
uv_set_cpu_scir_bits(cpu, SCIR_CPU_HEARTBEAT|SCIR_CPU_ACTIVITY);
setup_timer(timer, uv_heartbeat, cpu);
timer->expires = jiffies + SCIR_CPU_HB_INTERVAL;
add_timer_on(timer, cpu);
- uv_cpu_hub_info(cpu)->scir.enabled = 1;
+ uv_cpu_scir_info(cpu)->enabled = 1;
/* also ensure that boot cpu is enabled */
cpu = 0;
@@ -777,9 +941,9 @@ static void uv_heartbeat_enable(int cpu)
#ifdef CONFIG_HOTPLUG_CPU
static void uv_heartbeat_disable(int cpu)
{
- if (uv_cpu_hub_info(cpu)->scir.enabled) {
- uv_cpu_hub_info(cpu)->scir.enabled = 0;
- del_timer(&uv_cpu_hub_info(cpu)->scir.timer);
+ if (uv_cpu_scir_info(cpu)->enabled) {
+ uv_cpu_scir_info(cpu)->enabled = 0;
+ del_timer(&uv_cpu_scir_info(cpu)->timer);
}
uv_set_cpu_scir_bits(cpu, 0xff);
}
@@ -862,155 +1026,475 @@ int uv_set_vga_state(struct pci_dev *pdev, bool decode,
void uv_cpu_init(void)
{
/* CPU 0 initialization will be done via uv_system_init. */
- if (!uv_blade_info)
+ if (smp_processor_id() == 0)
return;
- uv_blade_info[uv_numa_blade_id()].nr_online_cpus++;
+ uv_hub_info->nr_online_cpus++;
if (get_uv_system_type() == UV_NON_UNIQUE_APIC)
set_x2apic_extra_bits(uv_hub_info->pnode);
}
-void __init uv_system_init(void)
+struct mn {
+ unsigned char m_val;
+ unsigned char n_val;
+ unsigned char m_shift;
+ unsigned char n_lshift;
+};
+
+static void get_mn(struct mn *mnp)
{
- union uvh_rh_gam_config_mmr_u m_n_config;
- union uvh_node_id_u node_id;
- unsigned long gnode_upper, lowmem_redir_base, lowmem_redir_size;
- int bytes, nid, cpu, lcpu, pnode, blade, i, j, m_val, n_val;
- int gnode_extra, min_pnode = 999999, max_pnode = -1;
- unsigned long mmr_base, present, paddr;
- unsigned short pnode_mask;
- unsigned char n_lshift;
- char *hub = (is_uv1_hub() ? "UV100/1000" :
- (is_uv2_hub() ? "UV2000/3000" :
- (is_uv3_hub() ? "UV300" : NULL)));
+ union uvh_rh_gam_config_mmr_u m_n_config;
+ union uv3h_gr0_gam_gr_config_u m_gr_config;
- if (!hub) {
- pr_err("UV: Unknown/unsupported UV hub\n");
- return;
+ m_n_config.v = uv_read_local_mmr(UVH_RH_GAM_CONFIG_MMR);
+ mnp->n_val = m_n_config.s.n_skt;
+ if (is_uv4_hub()) {
+ mnp->m_val = 0;
+ mnp->n_lshift = 0;
+ } else if (is_uv3_hub()) {
+ mnp->m_val = m_n_config.s3.m_skt;
+ m_gr_config.v = uv_read_local_mmr(UV3H_GR0_GAM_GR_CONFIG);
+ mnp->n_lshift = m_gr_config.s3.m_skt;
+ } else if (is_uv2_hub()) {
+ mnp->m_val = m_n_config.s2.m_skt;
+ mnp->n_lshift = mnp->m_val == 40 ? 40 : 39;
+ } else if (is_uv1_hub()) {
+ mnp->m_val = m_n_config.s1.m_skt;
+ mnp->n_lshift = mnp->m_val;
}
- pr_info("UV: Found %s hub\n", hub);
+ mnp->m_shift = mnp->m_val ? 64 - mnp->m_val : 0;
+}
- map_low_mmrs();
+void __init uv_init_hub_info(struct uv_hub_info_s *hub_info)
+{
+ struct mn mn = {0}; /* avoid uninitialized warnings */
+ union uvh_node_id_u node_id;
- m_n_config.v = uv_read_local_mmr(UVH_RH_GAM_CONFIG_MMR );
- m_val = m_n_config.s.m_skt;
- n_val = m_n_config.s.n_skt;
- pnode_mask = (1 << n_val) - 1;
- n_lshift = get_n_lshift(m_val);
- mmr_base =
- uv_read_local_mmr(UVH_RH_GAM_MMR_OVERLAY_CONFIG_MMR) &
- ~UV_MMR_ENABLE;
+ get_mn(&mn);
+ hub_info->m_val = mn.m_val;
+ hub_info->n_val = mn.n_val;
+ hub_info->m_shift = mn.m_shift;
+ hub_info->n_lshift = mn.n_lshift ? mn.n_lshift : 0;
+
+ hub_info->hub_revision = uv_hub_info->hub_revision;
+ hub_info->pnode_mask = uv_cpuid.pnode_mask;
+ hub_info->min_pnode = _min_pnode;
+ hub_info->min_socket = _min_socket;
+ hub_info->pnode_to_socket = _pnode_to_socket;
+ hub_info->socket_to_node = _socket_to_node;
+ hub_info->socket_to_pnode = _socket_to_pnode;
+ hub_info->gr_table_len = _gr_table_len;
+ hub_info->gr_table = _gr_table;
+ hub_info->gpa_mask = mn.m_val ?
+ (1UL << (mn.m_val + mn.n_val)) - 1 :
+ (1UL << uv_cpuid.gpa_shift) - 1;
node_id.v = uv_read_local_mmr(UVH_NODE_ID);
- gnode_extra = (node_id.s.node_id & ~((1 << n_val) - 1)) >> 1;
- gnode_upper = ((unsigned long)gnode_extra << m_val);
- pr_info("UV: N:%d M:%d pnode_mask:0x%x gnode_upper/extra:0x%lx/0x%x n_lshift 0x%x\n",
- n_val, m_val, pnode_mask, gnode_upper, gnode_extra,
- n_lshift);
+ hub_info->gnode_extra =
+ (node_id.s.node_id & ~((1 << mn.n_val) - 1)) >> 1;
+
+ hub_info->gnode_upper =
+ ((unsigned long)hub_info->gnode_extra << mn.m_val);
+
+ if (uv_gp_table) {
+ hub_info->global_mmr_base = uv_gp_table->mmr_base;
+ hub_info->global_mmr_shift = uv_gp_table->mmr_shift;
+ hub_info->global_gru_base = uv_gp_table->gru_base;
+ hub_info->global_gru_shift = uv_gp_table->gru_shift;
+ hub_info->gpa_shift = uv_gp_table->gpa_shift;
+ hub_info->gpa_mask = (1UL << hub_info->gpa_shift) - 1;
+ } else {
+ hub_info->global_mmr_base =
+ uv_read_local_mmr(UVH_RH_GAM_MMR_OVERLAY_CONFIG_MMR) &
+ ~UV_MMR_ENABLE;
+ hub_info->global_mmr_shift = _UV_GLOBAL_MMR64_PNODE_SHIFT;
+ }
- pr_info("UV: global MMR base 0x%lx\n", mmr_base);
+ get_lowmem_redirect(
+ &hub_info->lowmem_remap_base, &hub_info->lowmem_remap_top);
- for(i = 0; i < UVH_NODE_PRESENT_TABLE_DEPTH; i++)
- uv_possible_blades +=
- hweight64(uv_read_local_mmr( UVH_NODE_PRESENT_TABLE + i * 8));
+ hub_info->apic_pnode_shift = uv_cpuid.socketid_shift;
- /* uv_num_possible_blades() is really the hub count */
- pr_info("UV: Found %d blades, %d hubs\n",
- is_uv1_hub() ? uv_num_possible_blades() :
- (uv_num_possible_blades() + 1) / 2,
- uv_num_possible_blades());
+ /* show system specific info */
+ pr_info("UV: N:%d M:%d m_shift:%d n_lshift:%d\n",
+ hub_info->n_val, hub_info->m_val,
+ hub_info->m_shift, hub_info->n_lshift);
- bytes = sizeof(struct uv_blade_info) * uv_num_possible_blades();
- uv_blade_info = kzalloc(bytes, GFP_KERNEL);
- BUG_ON(!uv_blade_info);
+ pr_info("UV: gpa_mask/shift:0x%lx/%d pnode_mask:0x%x apic_pns:%d\n",
+ hub_info->gpa_mask, hub_info->gpa_shift,
+ hub_info->pnode_mask, hub_info->apic_pnode_shift);
- for (blade = 0; blade < uv_num_possible_blades(); blade++)
- uv_blade_info[blade].memory_nid = -1;
+ pr_info("UV: mmr_base/shift:0x%lx/%ld gru_base/shift:0x%lx/%ld\n",
+ hub_info->global_mmr_base, hub_info->global_mmr_shift,
+ hub_info->global_gru_base, hub_info->global_gru_shift);
- get_lowmem_redirect(&lowmem_redir_base, &lowmem_redir_size);
+ pr_info("UV: gnode_upper:0x%lx gnode_extra:0x%x\n",
+ hub_info->gnode_upper, hub_info->gnode_extra);
+}
+
+static void __init decode_gam_params(unsigned long ptr)
+{
+ uv_gp_table = (struct uv_gam_parameters *)ptr;
+
+ pr_info("UV: GAM Params...\n");
+ pr_info("UV: mmr_base/shift:0x%llx/%d gru_base/shift:0x%llx/%d gpa_shift:%d\n",
+ uv_gp_table->mmr_base, uv_gp_table->mmr_shift,
+ uv_gp_table->gru_base, uv_gp_table->gru_shift,
+ uv_gp_table->gpa_shift);
+}
+
+static void __init decode_gam_rng_tbl(unsigned long ptr)
+{
+ struct uv_gam_range_entry *gre = (struct uv_gam_range_entry *)ptr;
+ unsigned long lgre = 0;
+ int index = 0;
+ int sock_min = 999999, pnode_min = 99999;
+ int sock_max = -1, pnode_max = -1;
+
+ uv_gre_table = gre;
+ for (; gre->type != UV_GAM_RANGE_TYPE_UNUSED; gre++) {
+ if (!index) {
+ pr_info("UV: GAM Range Table...\n");
+ pr_info("UV: # %20s %14s %5s %4s %5s %3s %2s %3s\n",
+ "Range", "", "Size", "Type", "NASID",
+ "SID", "PN", "PXM");
+ }
+ pr_info(
+ "UV: %2d: 0x%014lx-0x%014lx %5luG %3d %04x %02x %02x %3d\n",
+ index++,
+ (unsigned long)lgre << UV_GAM_RANGE_SHFT,
+ (unsigned long)gre->limit << UV_GAM_RANGE_SHFT,
+ ((unsigned long)(gre->limit - lgre)) >>
+ (30 - UV_GAM_RANGE_SHFT), /* 64M -> 1G */
+ gre->type, gre->nasid, gre->sockid,
+ gre->pnode, gre->pxm);
+
+ lgre = gre->limit;
+ if (sock_min > gre->sockid)
+ sock_min = gre->sockid;
+ if (sock_max < gre->sockid)
+ sock_max = gre->sockid;
+ if (pnode_min > gre->pnode)
+ pnode_min = gre->pnode;
+ if (pnode_max < gre->pnode)
+ pnode_max = gre->pnode;
+ }
+ _min_socket = sock_min;
+ _max_socket = sock_max;
+ _min_pnode = pnode_min;
+ _max_pnode = pnode_max;
+ _gr_table_len = index;
+ pr_info(
+ "UV: GRT: %d entries, sockets(min:%x,max:%x) pnodes(min:%x,max:%x)\n",
+ index, _min_socket, _max_socket, _min_pnode, _max_pnode);
+}
+
+static void __init decode_uv_systab(void)
+{
+ struct uv_systab *st;
+ int i;
+
+ st = uv_systab;
+ if ((!st || st->revision < UV_SYSTAB_VERSION_UV4) && !is_uv4_hub())
+ return;
+ if (st->revision != UV_SYSTAB_VERSION_UV4_LATEST) {
+ pr_crit(
+ "UV: BIOS UVsystab version(%x) mismatch, expecting(%x)\n",
+ st->revision, UV_SYSTAB_VERSION_UV4_LATEST);
+ BUG();
+ }
+
+ for (i = 0; st->entry[i].type != UV_SYSTAB_TYPE_UNUSED; i++) {
+ unsigned long ptr = st->entry[i].offset;
- bytes = sizeof(uv_node_to_blade[0]) * num_possible_nodes();
- uv_node_to_blade = kmalloc(bytes, GFP_KERNEL);
- BUG_ON(!uv_node_to_blade);
- memset(uv_node_to_blade, 255, bytes);
+ if (!ptr)
+ continue;
+
+ ptr = ptr + (unsigned long)st;
+
+ switch (st->entry[i].type) {
+ case UV_SYSTAB_TYPE_GAM_PARAMS:
+ decode_gam_params(ptr);
+ break;
+
+ case UV_SYSTAB_TYPE_GAM_RNG_TBL:
+ decode_gam_rng_tbl(ptr);
+ break;
+ }
+ }
+}
- bytes = sizeof(uv_cpu_to_blade[0]) * num_possible_cpus();
- uv_cpu_to_blade = kmalloc(bytes, GFP_KERNEL);
- BUG_ON(!uv_cpu_to_blade);
- memset(uv_cpu_to_blade, 255, bytes);
+/*
+ * Setup physical blade translations from UVH_NODE_PRESENT_TABLE
+ * .. NB: UVH_NODE_PRESENT_TABLE is going away,
+ * .. being replaced by GAM Range Table
+ */
+static __init void boot_init_possible_blades(struct uv_hub_info_s *hub_info)
+{
+ int i, uv_pb = 0;
- blade = 0;
+ pr_info("UV: NODE_PRESENT_DEPTH = %d\n", UVH_NODE_PRESENT_TABLE_DEPTH);
for (i = 0; i < UVH_NODE_PRESENT_TABLE_DEPTH; i++) {
- present = uv_read_local_mmr(UVH_NODE_PRESENT_TABLE + i * 8);
- for (j = 0; j < 64; j++) {
- if (!test_bit(j, &present))
- continue;
- pnode = (i * 64 + j) & pnode_mask;
- uv_blade_info[blade].pnode = pnode;
- uv_blade_info[blade].nr_possible_cpus = 0;
- uv_blade_info[blade].nr_online_cpus = 0;
- spin_lock_init(&uv_blade_info[blade].nmi_lock);
- min_pnode = min(pnode, min_pnode);
- max_pnode = max(pnode, max_pnode);
- blade++;
+ unsigned long np;
+
+ np = uv_read_local_mmr(UVH_NODE_PRESENT_TABLE + i * 8);
+ if (np)
+ pr_info("UV: NODE_PRESENT(%d) = 0x%016lx\n", i, np);
+
+ uv_pb += hweight64(np);
+ }
+ if (uv_possible_blades != uv_pb)
+ uv_possible_blades = uv_pb;
+}
+
+static void __init build_socket_tables(void)
+{
+ struct uv_gam_range_entry *gre = uv_gre_table;
+ int num, nump;
+ int cpu, i, lnid;
+ int minsock = _min_socket;
+ int maxsock = _max_socket;
+ int minpnode = _min_pnode;
+ int maxpnode = _max_pnode;
+ size_t bytes;
+
+ if (!gre) {
+ if (is_uv1_hub() || is_uv2_hub() || is_uv3_hub()) {
+ pr_info("UV: No UVsystab socket table, ignoring\n");
+ return; /* not required */
}
+ pr_crit(
+ "UV: Error: UVsystab address translations not available!\n");
+ BUG();
+ }
+
+ /* build socket id -> node id, pnode */
+ num = maxsock - minsock + 1;
+ bytes = num * sizeof(_socket_to_node[0]);
+ _socket_to_node = kmalloc(bytes, GFP_KERNEL);
+ _socket_to_pnode = kmalloc(bytes, GFP_KERNEL);
+
+ nump = maxpnode - minpnode + 1;
+ bytes = nump * sizeof(_pnode_to_socket[0]);
+ _pnode_to_socket = kmalloc(bytes, GFP_KERNEL);
+ BUG_ON(!_socket_to_node || !_socket_to_pnode || !_pnode_to_socket);
+
+ for (i = 0; i < num; i++)
+ _socket_to_node[i] = _socket_to_pnode[i] = SOCK_EMPTY;
+
+ for (i = 0; i < nump; i++)
+ _pnode_to_socket[i] = SOCK_EMPTY;
+
+ /* fill in pnode/node/addr conversion list values */
+ pr_info("UV: GAM Building socket/pnode/pxm conversion tables\n");
+ for (; gre->type != UV_GAM_RANGE_TYPE_UNUSED; gre++) {
+ if (gre->type == UV_GAM_RANGE_TYPE_HOLE)
+ continue;
+ i = gre->sockid - minsock;
+ if (_socket_to_pnode[i] != SOCK_EMPTY)
+ continue; /* duplicate */
+ _socket_to_pnode[i] = gre->pnode;
+ _socket_to_node[i] = gre->pxm;
+
+ i = gre->pnode - minpnode;
+ _pnode_to_socket[i] = gre->sockid;
+
+ pr_info(
+ "UV: sid:%02x type:%d nasid:%04x pn:%02x pxm:%2d pn2s:%2x\n",
+ gre->sockid, gre->type, gre->nasid,
+ _socket_to_pnode[gre->sockid - minsock],
+ _socket_to_node[gre->sockid - minsock],
+ _pnode_to_socket[gre->pnode - minpnode]);
}
- uv_bios_init();
+ /* check socket -> node values */
+ lnid = -1;
+ for_each_present_cpu(cpu) {
+ int nid = cpu_to_node(cpu);
+ int apicid, sockid;
+
+ if (lnid == nid)
+ continue;
+ lnid = nid;
+ apicid = per_cpu(x86_cpu_to_apicid, cpu);
+ sockid = apicid >> uv_cpuid.socketid_shift;
+ i = sockid - minsock;
+
+ if (nid != _socket_to_node[i]) {
+ pr_warn(
+ "UV: %02x: type:%d socket:%02x PXM:%02x != node:%2d\n",
+ i, sockid, gre->type, _socket_to_node[i], nid);
+ _socket_to_node[i] = nid;
+ }
+ }
+
+ /* Setup physical blade to pnode translation from GAM Range Table */
+ bytes = num_possible_nodes() * sizeof(_node_to_pnode[0]);
+ _node_to_pnode = kmalloc(bytes, GFP_KERNEL);
+ BUG_ON(!_node_to_pnode);
+
+ for (lnid = 0; lnid < num_possible_nodes(); lnid++) {
+ unsigned short sockid;
+
+ for (sockid = minsock; sockid <= maxsock; sockid++) {
+ if (lnid == _socket_to_node[sockid - minsock]) {
+ _node_to_pnode[lnid] =
+ _socket_to_pnode[sockid - minsock];
+ break;
+ }
+ }
+ if (sockid > maxsock) {
+ pr_err("UV: socket for node %d not found!\n", lnid);
+ BUG();
+ }
+ }
+
+ /*
+ * If socket id == pnode or socket id == node for all nodes,
+ * system runs faster by removing the corresponding conversion table.
+ */
+ pr_info("UV: Checking socket->node/pnode for identity maps\n");
+ if (minsock == 0) {
+ for (i = 0; i < num; i++)
+ if (_socket_to_node[i] == SOCK_EMPTY ||
+ i != _socket_to_node[i])
+ break;
+ if (i >= num) {
+ kfree(_socket_to_node);
+ _socket_to_node = NULL;
+ pr_info("UV: 1:1 socket_to_node table removed\n");
+ }
+ }
+ if (minsock == minpnode) {
+ for (i = 0; i < num; i++)
+ if (_socket_to_pnode[i] != SOCK_EMPTY &&
+ _socket_to_pnode[i] != i + minpnode)
+ break;
+ if (i >= num) {
+ kfree(_socket_to_pnode);
+ _socket_to_pnode = NULL;
+ pr_info("UV: 1:1 socket_to_pnode table removed\n");
+ }
+ }
+}
+
+void __init uv_system_init(void)
+{
+ struct uv_hub_info_s hub_info = {0};
+ int bytes, cpu, nodeid;
+ unsigned short min_pnode = 9999, max_pnode = 0;
+ char *hub = is_uv4_hub() ? "UV400" :
+ is_uv3_hub() ? "UV300" :
+ is_uv2_hub() ? "UV2000/3000" :
+ is_uv1_hub() ? "UV100/1000" : NULL;
+
+ if (!hub) {
+ pr_err("UV: Unknown/unsupported UV hub\n");
+ return;
+ }
+ pr_info("UV: Found %s hub\n", hub);
+
+ map_low_mmrs();
+
+ uv_bios_init(); /* get uv_systab for decoding */
+ decode_uv_systab();
+ build_socket_tables();
+ build_uv_gr_table();
+ uv_init_hub_info(&hub_info);
+ uv_possible_blades = num_possible_nodes();
+ if (!_node_to_pnode)
+ boot_init_possible_blades(&hub_info);
+
+ /* uv_num_possible_blades() is really the hub count */
+ pr_info("UV: Found %d hubs, %d nodes, %d cpus\n",
+ uv_num_possible_blades(),
+ num_possible_nodes(),
+ num_possible_cpus());
+
uv_bios_get_sn_info(0, &uv_type, &sn_partition_id, &sn_coherency_id,
&sn_region_size, &system_serial_number);
+ hub_info.coherency_domain_number = sn_coherency_id;
uv_rtc_init();
- for_each_present_cpu(cpu) {
- int apicid = per_cpu(x86_cpu_to_apicid, cpu);
+ bytes = sizeof(void *) * uv_num_possible_blades();
+ __uv_hub_info_list = kzalloc(bytes, GFP_KERNEL);
+ BUG_ON(!__uv_hub_info_list);
- nid = cpu_to_node(cpu);
- /*
- * apic_pnode_shift must be set before calling uv_apicid_to_pnode();
- */
- uv_cpu_hub_info(cpu)->pnode_mask = pnode_mask;
- uv_cpu_hub_info(cpu)->apic_pnode_shift = uvh_apicid.s.pnode_shift;
- uv_cpu_hub_info(cpu)->hub_revision = uv_hub_info->hub_revision;
+ bytes = sizeof(struct uv_hub_info_s);
+ for_each_node(nodeid) {
+ struct uv_hub_info_s *new_hub;
- uv_cpu_hub_info(cpu)->m_shift = 64 - m_val;
- uv_cpu_hub_info(cpu)->n_lshift = n_lshift;
+ if (__uv_hub_info_list[nodeid]) {
+ pr_err("UV: Node %d UV HUB already initialized!?\n",
+ nodeid);
+ BUG();
+ }
+
+ /* Allocate new per hub info list */
+ new_hub = (nodeid == 0) ?
+ &uv_hub_info_node0 :
+ kzalloc_node(bytes, GFP_KERNEL, nodeid);
+ BUG_ON(!new_hub);
+ __uv_hub_info_list[nodeid] = new_hub;
+ new_hub = uv_hub_info_list(nodeid);
+ BUG_ON(!new_hub);
+ *new_hub = hub_info;
+
+ /* Use information from GAM table if available */
+ if (_node_to_pnode)
+ new_hub->pnode = _node_to_pnode[nodeid];
+ else /* Fill in during cpu loop */
+ new_hub->pnode = 0xffff;
+ new_hub->numa_blade_id = uv_node_to_blade_id(nodeid);
+ new_hub->memory_nid = -1;
+ new_hub->nr_possible_cpus = 0;
+ new_hub->nr_online_cpus = 0;
+ }
+ /* Initialize per cpu info */
+ for_each_possible_cpu(cpu) {
+ int apicid = per_cpu(x86_cpu_to_apicid, cpu);
+ int numa_node_id;
+ unsigned short pnode;
+
+ nodeid = cpu_to_node(cpu);
+ numa_node_id = numa_cpu_node(cpu);
pnode = uv_apicid_to_pnode(apicid);
- blade = boot_pnode_to_blade(pnode);
- lcpu = uv_blade_info[blade].nr_possible_cpus;
- uv_blade_info[blade].nr_possible_cpus++;
-
- /* Any node on the blade, else will contain -1. */
- uv_blade_info[blade].memory_nid = nid;
-
- uv_cpu_hub_info(cpu)->lowmem_remap_base = lowmem_redir_base;
- uv_cpu_hub_info(cpu)->lowmem_remap_top = lowmem_redir_size;
- uv_cpu_hub_info(cpu)->m_val = m_val;
- uv_cpu_hub_info(cpu)->n_val = n_val;
- uv_cpu_hub_info(cpu)->numa_blade_id = blade;
- uv_cpu_hub_info(cpu)->blade_processor_id = lcpu;
- uv_cpu_hub_info(cpu)->pnode = pnode;
- uv_cpu_hub_info(cpu)->gpa_mask = (1UL << (m_val + n_val)) - 1;
- uv_cpu_hub_info(cpu)->gnode_upper = gnode_upper;
- uv_cpu_hub_info(cpu)->gnode_extra = gnode_extra;
- uv_cpu_hub_info(cpu)->global_mmr_base = mmr_base;
- uv_cpu_hub_info(cpu)->coherency_domain_number = sn_coherency_id;
- uv_cpu_hub_info(cpu)->scir.offset = uv_scir_offset(apicid);
- uv_node_to_blade[nid] = blade;
- uv_cpu_to_blade[cpu] = blade;
+
+ uv_cpu_info_per(cpu)->p_uv_hub_info = uv_hub_info_list(nodeid);
+ uv_cpu_info_per(cpu)->blade_cpu_id =
+ uv_cpu_hub_info(cpu)->nr_possible_cpus++;
+ if (uv_cpu_hub_info(cpu)->memory_nid == -1)
+ uv_cpu_hub_info(cpu)->memory_nid = cpu_to_node(cpu);
+ if (nodeid != numa_node_id && /* init memoryless node */
+ uv_hub_info_list(numa_node_id)->pnode == 0xffff)
+ uv_hub_info_list(numa_node_id)->pnode = pnode;
+ else if (uv_cpu_hub_info(cpu)->pnode == 0xffff)
+ uv_cpu_hub_info(cpu)->pnode = pnode;
+ uv_cpu_scir_info(cpu)->offset = uv_scir_offset(apicid);
}
- /* Add blade/pnode info for nodes without cpus */
- for_each_online_node(nid) {
- if (uv_node_to_blade[nid] >= 0)
- continue;
- paddr = node_start_pfn(nid) << PAGE_SHIFT;
- pnode = uv_gpa_to_pnode(uv_soc_phys_ram_to_gpa(paddr));
- blade = boot_pnode_to_blade(pnode);
- uv_node_to_blade[nid] = blade;
+ for_each_node(nodeid) {
+ unsigned short pnode = uv_hub_info_list(nodeid)->pnode;
+
+ /* Add pnode info for pre-GAM list nodes without cpus */
+ if (pnode == 0xffff) {
+ unsigned long paddr;
+
+ paddr = node_start_pfn(nodeid) << PAGE_SHIFT;
+ pnode = uv_gpa_to_pnode(uv_soc_phys_ram_to_gpa(paddr));
+ uv_hub_info_list(nodeid)->pnode = pnode;
+ }
+ min_pnode = min(pnode, min_pnode);
+ max_pnode = max(pnode, max_pnode);
+ pr_info("UV: UVHUB node:%2d pn:%02x nrcpus:%d\n",
+ nodeid,
+ uv_hub_info_list(nodeid)->pnode,
+ uv_hub_info_list(nodeid)->nr_possible_cpus);
}
+ pr_info("UV: min_pnode:%02x max_pnode:%02x\n", min_pnode, max_pnode);
map_gru_high(max_pnode);
map_mmr_high(max_pnode);
map_mmioh_high(min_pnode, max_pnode);
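
The leaf-0xB walk in set_x2apic_bits() above is self-contained enough to try outside the kernel. The sketch below is a user-space re-derivation of the same socket-id shift (assumes x86-64 with GCC/Clang inline asm; the helper and printed labels are ours). The only facts it borrows from the hunk above are the sub-leaf encoding (ECX[15:8] = level type, EAX[4:0] = bits to shift to reach the next level) and that EDX of a 0xB sub-leaf returns this CPU's x2APIC id.

#include <stdio.h>

#define LEAFB_SUBTYPE(ecx)          (((ecx) >> 8) & 0xff)
#define BITS_SHIFT_NEXT_LEVEL(eax)  ((eax) & 0x1f)
#define INVALID_TYPE 0
#define SMT_TYPE     1
#define CORE_TYPE    2

static void cpuid_count(unsigned int leaf, unsigned int sub, unsigned int *a,
                        unsigned int *b, unsigned int *c, unsigned int *d)
{
        asm volatile("cpuid"
                     : "=a" (*a), "=b" (*b), "=c" (*c), "=d" (*d)
                     : "a" (leaf), "c" (sub));
}

int main(void)
{
        unsigned int eax, ebx, ecx, edx, sub, sid_shift;

        cpuid_count(0, 0, &eax, &ebx, &ecx, &edx);
        if (eax < 0xb) {
                puts("no CPUID leaf 0xb on this CPU");
                return 0;
        }
        cpuid_count(0xb, 0, &eax, &ebx, &ecx, &edx);
        if (ebx == 0 || LEAFB_SUBTYPE(ecx) != SMT_TYPE) {
                puts("CPUID leaf 0xb not implemented");
                return 0;
        }
        sid_shift = BITS_SHIFT_NEXT_LEVEL(eax);     /* SMT-level default */

        for (sub = 1; ; sub++) {                    /* prefer the CORE level */
                cpuid_count(0xb, sub, &eax, &ebx, &ecx, &edx);
                if (LEAFB_SUBTYPE(ecx) == CORE_TYPE) {
                        sid_shift = BITS_SHIFT_NEXT_LEVEL(eax);
                        break;
                }
                if (LEAFB_SUBTYPE(ecx) == INVALID_TYPE)
                        break;
        }

        /* the same shift build_socket_tables() applies to x86_cpu_to_apicid */
        printf("x2apic id %#x -> socket id %#x (socketid_shift %u)\n",
               edx, edx >> sid_shift, sid_shift);
        return 0;
}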
diff --git a/arch/x86/kernel/apm_32.c b/arch/x86/kernel/apm_32.c
index 9307f182fe30..c7364bd633e1 100644
--- a/arch/x86/kernel/apm_32.c
+++ b/arch/x86/kernel/apm_32.c
@@ -2267,7 +2267,7 @@ static int __init apm_init(void)
dmi_check_system(apm_dmi_table);
- if (apm_info.bios.version == 0 || paravirt_enabled() || machine_is_olpc()) {
+ if (apm_info.bios.version == 0 || machine_is_olpc()) {
printk(KERN_INFO "apm: BIOS not found.\n");
return -ENODEV;
}
diff --git a/arch/x86/kernel/asm-offsets.c b/arch/x86/kernel/asm-offsets.c
index 5c042466f274..674134e9f5e5 100644
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -80,6 +80,7 @@ void common(void) {
OFFSET(BP_hardware_subarch, boot_params, hdr.hardware_subarch);
OFFSET(BP_version, boot_params, hdr.version);
OFFSET(BP_kernel_alignment, boot_params, hdr.kernel_alignment);
+ OFFSET(BP_init_size, boot_params, hdr.init_size);
OFFSET(BP_pref_address, boot_params, hdr.pref_address);
OFFSET(BP_code32_start, boot_params, hdr.code32_start);
diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
index 7b76eb67a9b3..c343a54bed39 100644
--- a/arch/x86/kernel/cpu/amd.c
+++ b/arch/x86/kernel/cpu/amd.c
@@ -565,14 +565,17 @@ static void early_init_amd(struct cpuinfo_x86 *c)
* can safely set X86_FEATURE_EXTD_APICID unconditionally for families
* after 16h.
*/
- if (cpu_has_apic && c->x86 > 0x16) {
- set_cpu_cap(c, X86_FEATURE_EXTD_APICID);
- } else if (cpu_has_apic && c->x86 >= 0xf) {
- /* check CPU config space for extended APIC ID */
- unsigned int val;
- val = read_pci_config(0, 24, 0, 0x68);
- if ((val & ((1 << 17) | (1 << 18))) == ((1 << 17) | (1 << 18)))
+ if (boot_cpu_has(X86_FEATURE_APIC)) {
+ if (c->x86 > 0x16)
set_cpu_cap(c, X86_FEATURE_EXTD_APICID);
+ else if (c->x86 >= 0xf) {
+ /* check CPU config space for extended APIC ID */
+ unsigned int val;
+
+ val = read_pci_config(0, 24, 0, 0x68);
+ if ((val >> 17 & 0x3) == 0x3)
+ set_cpu_cap(c, X86_FEATURE_EXTD_APICID);
+ }
}
#endif
@@ -628,6 +631,7 @@ static void init_amd_k8(struct cpuinfo_x86 *c)
*/
msr_set_bit(MSR_K7_HWCR, 6);
#endif
+ set_cpu_bug(c, X86_BUG_SWAPGS_FENCE);
}
static void init_amd_gh(struct cpuinfo_x86 *c)
@@ -746,7 +750,7 @@ static void init_amd(struct cpuinfo_x86 *c)
if (c->x86 >= 0xf)
set_cpu_cap(c, X86_FEATURE_K8);
- if (cpu_has_xmm2) {
+ if (cpu_has(c, X86_FEATURE_XMM2)) {
/* MFENCE stops RDTSC speculation */
set_cpu_cap(c, X86_FEATURE_MFENCE_RDTSC);
}
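
The early_init_amd() hunk above swaps the explicit bit-17/18 mask test for "(val >> 17 & 0x3) == 0x3". The two expressions are equivalent (>> binds tighter than &); a throwaway user-space check like the one below, which is ours and not part of the patch, makes that easy to confirm.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* sweep noise bits through positions 15..20 so bits 17 and 18 take
         * every combination with their neighbours set and clear */
        for (uint32_t i = 0; i < (1u << 6); i++) {
                uint32_t val = i << 15;
                int old_test = (val & ((1u << 17) | (1u << 18))) ==
                               ((1u << 17) | (1u << 18));
                int new_test = ((val >> 17) & 0x3) == 0x3;

                assert(old_test == new_test);
        }
        puts("old and new EXTD_APICID tests agree");
        return 0;
}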
diff --git a/arch/x86/kernel/cpu/common.c b/arch/x86/kernel/cpu/common.c
index 8394b3d1f94f..0fe6953f421c 100644
--- a/arch/x86/kernel/cpu/common.c
+++ b/arch/x86/kernel/cpu/common.c
@@ -37,6 +37,7 @@
#include <asm/mtrr.h>
#include <linux/numa.h>
#include <asm/asm.h>
+#include <asm/bugs.h>
#include <asm/cpu.h>
#include <asm/mce.h>
#include <asm/msr.h>
@@ -270,6 +271,8 @@ static inline void squash_the_stupid_serial_number(struct cpuinfo_x86 *c)
static __init int setup_disable_smep(char *arg)
{
setup_clear_cpu_cap(X86_FEATURE_SMEP);
+ /* Check for things that depend on SMEP being enabled: */
+ check_mpx_erratum(&boot_cpu_data);
return 1;
}
__setup("nosmep", setup_disable_smep);
@@ -310,6 +313,10 @@ static bool pku_disabled;
static __always_inline void setup_pku(struct cpuinfo_x86 *c)
{
+ /* check the boot processor, plus compile options for PKU: */
+ if (!cpu_feature_enabled(X86_FEATURE_PKU))
+ return;
+ /* checks the actual processor's cpuid bits: */
if (!cpu_has(c, X86_FEATURE_PKU))
return;
if (pku_disabled)
@@ -430,7 +437,7 @@ void load_percpu_segment(int cpu)
#ifdef CONFIG_X86_32
loadsegment(fs, __KERNEL_PERCPU);
#else
- loadsegment(gs, 0);
+ __loadsegment_simple(gs, 0);
wrmsrl(MSR_GS_BASE, (unsigned long)per_cpu(irq_stack_union.gs_base, cpu));
#endif
load_stack_canary_segment();
@@ -717,6 +724,13 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
}
}
+ if (c->extended_cpuid_level >= 0x80000007) {
+ cpuid(0x80000007, &eax, &ebx, &ecx, &edx);
+
+ c->x86_capability[CPUID_8000_0007_EBX] = ebx;
+ c->x86_power = edx;
+ }
+
if (c->extended_cpuid_level >= 0x80000008) {
cpuid(0x80000008, &eax, &ebx, &ecx, &edx);
@@ -729,9 +743,6 @@ void get_cpu_cap(struct cpuinfo_x86 *c)
c->x86_phys_bits = 36;
#endif
- if (c->extended_cpuid_level >= 0x80000007)
- c->x86_power = cpuid_edx(0x80000007);
-
if (c->extended_cpuid_level >= 0x8000000a)
c->x86_capability[CPUID_8000_000A_EDX] = cpuid_edx(0x8000000a);
@@ -862,30 +873,34 @@ static void detect_nopl(struct cpuinfo_x86 *c)
#else
set_cpu_cap(c, X86_FEATURE_NOPL);
#endif
+}
+static void detect_null_seg_behavior(struct cpuinfo_x86 *c)
+{
+#ifdef CONFIG_X86_64
/*
- * ESPFIX is a strange bug. All real CPUs have it. Paravirt
- * systems that run Linux at CPL > 0 may or may not have the
- * issue, but, even if they have the issue, there's absolutely
- * nothing we can do about it because we can't use the real IRET
- * instruction.
+ * Empirically, writing zero to a segment selector on AMD does
+ * not clear the base, whereas writing zero to a segment
+ * selector on Intel does clear the base. Intel's behavior
+ * allows slightly faster context switches in the common case
+ * where GS is unused by the prev and next threads.
*
- * NB: For the time being, only 32-bit kernels support
- * X86_BUG_ESPFIX as such. 64-bit kernels directly choose
- * whether to apply espfix using paravirt hooks. If any
- * non-paravirt system ever shows up that does *not* have the
- * ESPFIX issue, we can change this.
+ * Since neither vendor documents this anywhere that I can see,
+ * detect it directly instead of hardcoding the choice by
+ * vendor.
+ *
+ * I've designated AMD's behavior as the "bug" because it's
+ * counterintuitive and less friendly.
*/
-#ifdef CONFIG_X86_32
-#ifdef CONFIG_PARAVIRT
- do {
- extern void native_iret(void);
- if (pv_cpu_ops.iret == native_iret)
- set_cpu_bug(c, X86_BUG_ESPFIX);
- } while (0);
-#else
- set_cpu_bug(c, X86_BUG_ESPFIX);
-#endif
+
+ unsigned long old_base, tmp;
+ rdmsrl(MSR_FS_BASE, old_base);
+ wrmsrl(MSR_FS_BASE, 1);
+ loadsegment(fs, 0);
+ rdmsrl(MSR_FS_BASE, tmp);
+ if (tmp != 0)
+ set_cpu_bug(c, X86_BUG_NULL_SEG);
+ wrmsrl(MSR_FS_BASE, old_base);
#endif
}
@@ -921,6 +936,33 @@ static void generic_identify(struct cpuinfo_x86 *c)
get_model_name(c); /* Default name */
detect_nopl(c);
+
+ detect_null_seg_behavior(c);
+
+ /*
+ * ESPFIX is a strange bug. All real CPUs have it. Paravirt
+ * systems that run Linux at CPL > 0 may or may not have the
+ * issue, but, even if they have the issue, there's absolutely
+ * nothing we can do about it because we can't use the real IRET
+ * instruction.
+ *
+ * NB: For the time being, only 32-bit kernels support
+ * X86_BUG_ESPFIX as such. 64-bit kernels directly choose
+ * whether to apply espfix using paravirt hooks. If any
+ * non-paravirt system ever shows up that does *not* have the
+ * ESPFIX issue, we can change this.
+ */
+#ifdef CONFIG_X86_32
+# ifdef CONFIG_PARAVIRT
+ do {
+ extern void native_iret(void);
+ if (pv_cpu_ops.iret == native_iret)
+ set_cpu_bug(c, X86_BUG_ESPFIX);
+ } while (0);
+# else
+ set_cpu_bug(c, X86_BUG_ESPFIX);
+# endif
+#endif
}
static void x86_init_cache_qos(struct cpuinfo_x86 *c)
@@ -1076,12 +1118,12 @@ void enable_sep_cpu(void)
struct tss_struct *tss;
int cpu;
+ if (!boot_cpu_has(X86_FEATURE_SEP))
+ return;
+
cpu = get_cpu();
tss = &per_cpu(cpu_tss, cpu);
- if (!boot_cpu_has(X86_FEATURE_SEP))
- goto out;
-
/*
* We cache MSR_IA32_SYSENTER_CS's value in the TSS's ss1 field --
* see the big comment in struct x86_hw_tss's definition.
@@ -1096,7 +1138,6 @@ void enable_sep_cpu(void)
wrmsr(MSR_IA32_SYSENTER_EIP, (unsigned long)entry_SYSENTER_32, 0);
-out:
put_cpu();
}
#endif
@@ -1528,7 +1569,7 @@ void cpu_init(void)
pr_info("Initializing CPU#%d\n", cpu);
if (cpu_feature_enabled(X86_FEATURE_VME) ||
- cpu_has_tsc ||
+ boot_cpu_has(X86_FEATURE_TSC) ||
boot_cpu_has(X86_FEATURE_DE))
cr4_clear_bits(X86_CR4_VME|X86_CR4_PVI|X86_CR4_TSD|X86_CR4_DE);
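
The get_cpu_cap() change above now reads CPUID 0x80000007 once and keeps both EBX (the flags the MCE changes further down read as bits 0, 1 and 3 for overflow_recov/succor/smca) and EDX (the x86_power word). Below is a minimal user-space sketch of reading the same leaf; it assumes an x86-64 compiler with GNU inline asm, and the invariant-TSC position (EDX bit 8) comes from the architecture manuals, not from this patch.

#include <stdio.h>

static void cpuid(unsigned int leaf, unsigned int *a, unsigned int *b,
                  unsigned int *c, unsigned int *d)
{
        asm volatile("cpuid"
                     : "=a" (*a), "=b" (*b), "=c" (*c), "=d" (*d)
                     : "a" (leaf), "c" (0));
}

int main(void)
{
        unsigned int eax, ebx, ecx, edx, max_ext;

        cpuid(0x80000000, &max_ext, &ebx, &ecx, &edx);
        if (max_ext < 0x80000007) {
                puts("CPUID 0x80000007 not available");
                return 0;
        }
        cpuid(0x80000007, &eax, &ebx, &ecx, &edx);

        /* EBX bits as consumed by the MCE vendor init in this series (AMD) */
        printf("overflow_recov:%u succor:%u smca:%u\n",
               (ebx >> 0) & 1, (ebx >> 1) & 1, (ebx >> 3) & 1);

        /* EDX becomes c->x86_power; bit 8 is the invariant TSC flag */
        printf("x86_power: %#010x (invariant TSC: %u)\n", edx, (edx >> 8) & 1);
        return 0;
}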
diff --git a/arch/x86/kernel/cpu/cyrix.c b/arch/x86/kernel/cpu/cyrix.c
index 6adef9cac23e..bd9dcd6b712d 100644
--- a/arch/x86/kernel/cpu/cyrix.c
+++ b/arch/x86/kernel/cpu/cyrix.c
@@ -333,7 +333,7 @@ static void init_cyrix(struct cpuinfo_x86 *c)
switch (dir0_lsn) {
case 0xd: /* either a 486SLC or DLC w/o DEVID */
dir0_msn = 0;
- p = Cx486_name[(cpu_has_fpu ? 1 : 0)];
+ p = Cx486_name[!!boot_cpu_has(X86_FEATURE_FPU)];
break;
case 0xe: /* a 486S A step */
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 1f7fdb91a818..6e2ffbebbcdb 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -25,6 +25,41 @@
#include <asm/apic.h>
#endif
+/*
+ * Just in case our CPU detection goes bad, or you have a weird system,
+ * allow a way to override the automatic disabling of MPX.
+ */
+static int forcempx;
+
+static int __init forcempx_setup(char *__unused)
+{
+ forcempx = 1;
+
+ return 1;
+}
+__setup("intel-skd-046-workaround=disable", forcempx_setup);
+
+void check_mpx_erratum(struct cpuinfo_x86 *c)
+{
+ if (forcempx)
+ return;
+ /*
+ * Turn off the MPX feature on CPUs where SMEP is not
+ * available or disabled.
+ *
+ * Works around Intel Erratum SKD046: "Branch Instructions
+ * May Initialize MPX Bound Registers Incorrectly".
+ *
+ * This might falsely disable MPX on systems without
+ * SMEP, like Atom processors without SMEP. But there
+ * is no such hardware known at the moment.
+ */
+ if (cpu_has(c, X86_FEATURE_MPX) && !cpu_has(c, X86_FEATURE_SMEP)) {
+ setup_clear_cpu_cap(X86_FEATURE_MPX);
+ pr_warn("x86/mpx: Disabling MPX since SMEP not present\n");
+ }
+}
+
static void early_init_intel(struct cpuinfo_x86 *c)
{
u64 misc_enable;
@@ -152,9 +187,9 @@ static void early_init_intel(struct cpuinfo_x86 *c)
* the TLB when any changes are made to any of the page table entries.
* The operating system must reload CR3 to cause the TLB to be flushed"
*
- * As a result cpu_has_pge() in arch/x86/include/asm/tlbflush.h should
- * be false so that __flush_tlb_all() causes CR3 insted of CR4.PGE
- * to be modified
+ * As a result, boot_cpu_has(X86_FEATURE_PGE) in arch/x86/include/asm/tlbflush.h
+ * should be false so that __flush_tlb_all() causes CR3 instead of CR4.PGE
+ * to be modified.
*/
if (c->x86 == 5 && c->x86_model == 9) {
pr_info("Disabling PGE capability bit\n");
@@ -173,6 +208,8 @@ static void early_init_intel(struct cpuinfo_x86 *c)
if (edx & (1U << 28))
c->x86_coreid_bits = get_count_order((ebx >> 16) & 0xff);
}
+
+ check_mpx_erratum(c);
}
#ifdef CONFIG_X86_32
@@ -233,7 +270,7 @@ static void intel_workarounds(struct cpuinfo_x86 *c)
* The Quark is also family 5, but does not have the same bug.
*/
clear_cpu_bug(c, X86_BUG_F00F);
- if (!paravirt_enabled() && c->x86 == 5 && c->x86_model < 9) {
+ if (c->x86 == 5 && c->x86_model < 9) {
static int f00f_workaround_enabled;
set_cpu_bug(c, X86_BUG_F00F);
@@ -281,7 +318,7 @@ static void intel_workarounds(struct cpuinfo_x86 *c)
* integrated APIC (see 11AP erratum in "Pentium Processor
* Specification Update").
*/
- if (cpu_has_apic && (c->x86<<8 | c->x86_model<<4) == 0x520 &&
+ if (boot_cpu_has(X86_FEATURE_APIC) && (c->x86<<8 | c->x86_model<<4) == 0x520 &&
(c->x86_mask < 0x6 || c->x86_mask == 0xb))
set_cpu_bug(c, X86_BUG_11AP);
@@ -336,7 +373,7 @@ static int intel_num_cpu_cores(struct cpuinfo_x86 *c)
{
unsigned int eax, ebx, ecx, edx;
- if (c->cpuid_level < 4)
+ if (!IS_ENABLED(CONFIG_SMP) || c->cpuid_level < 4)
return 1;
/* Intel has a non-standard dependency on %ecx for this CPUID level. */
@@ -456,7 +493,7 @@ static void init_intel(struct cpuinfo_x86 *c)
set_cpu_cap(c, X86_FEATURE_ARCH_PERFMON);
}
- if (cpu_has_xmm2)
+ if (cpu_has(c, X86_FEATURE_XMM2))
set_cpu_cap(c, X86_FEATURE_LFENCE_RDTSC);
if (boot_cpu_has(X86_FEATURE_DS)) {
@@ -468,7 +505,7 @@ static void init_intel(struct cpuinfo_x86 *c)
set_cpu_cap(c, X86_FEATURE_PEBS);
}
- if (c->x86 == 6 && cpu_has_clflush &&
+ if (c->x86 == 6 && boot_cpu_has(X86_FEATURE_CLFLUSH) &&
(c->x86_model == 29 || c->x86_model == 46 || c->x86_model == 47))
set_cpu_bug(c, X86_BUG_CLFLUSH_MONITOR);
diff --git a/arch/x86/kernel/cpu/mcheck/mce-genpool.c b/arch/x86/kernel/cpu/mcheck/mce-genpool.c
index 2658e2af74ec..93d824ec3120 100644
--- a/arch/x86/kernel/cpu/mcheck/mce-genpool.c
+++ b/arch/x86/kernel/cpu/mcheck/mce-genpool.c
@@ -26,6 +26,52 @@ static struct gen_pool *mce_evt_pool;
static LLIST_HEAD(mce_event_llist);
static char gen_pool_buf[MCE_POOLSZ];
+/*
+ * Compare the record "t" with each of the records on list "l" to see if
+ * an equivalent one is present in the list.
+ */
+static bool is_duplicate_mce_record(struct mce_evt_llist *t, struct mce_evt_llist *l)
+{
+ struct mce_evt_llist *node;
+ struct mce *m1, *m2;
+
+ m1 = &t->mce;
+
+ llist_for_each_entry(node, &l->llnode, llnode) {
+ m2 = &node->mce;
+
+ if (!mce_cmp(m1, m2))
+ return true;
+ }
+ return false;
+}
+
+/*
+ * The system has panicked - we'd like to peruse the list of MCE records
+ * that have been queued, but not seen by anyone yet. The list is in
+ * reverse time order, so we need to reverse it. While doing that we can
+ * also drop duplicate records (these were logged because some banks are
+ * shared between cores or by all threads on a socket).
+ */
+struct llist_node *mce_gen_pool_prepare_records(void)
+{
+ struct llist_node *head;
+ LLIST_HEAD(new_head);
+ struct mce_evt_llist *node, *t;
+
+ head = llist_del_all(&mce_event_llist);
+ if (!head)
+ return NULL;
+
+ /* squeeze out duplicates while reversing order */
+ llist_for_each_entry_safe(node, t, head, llnode) {
+ if (!is_duplicate_mce_record(node, t))
+ llist_add(&node->llnode, &new_head);
+ }
+
+ return new_head.first;
+}
+
void mce_gen_pool_process(void)
{
struct llist_node *head;
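
mce_gen_pool_prepare_records() above takes the pending llist (newest record first), drops any record that still has an equivalent entry further down the list, and pushes the survivors onto a new head, which reverses them into oldest-first order. The toy program below reproduces that pass with an ordinary singly-linked list; the struct and field names are invented, only the reverse-while-deduplicating shape mirrors the kernel function.

#include <stdio.h>

struct rec {
        int bank, status;               /* stand-ins for what mce_cmp() checks */
        struct rec *next;
};

static int same(const struct rec *a, const struct rec *b)
{
        return a->bank == b->bank && a->status == b->status;
}

static int is_dup(const struct rec *r, const struct rec *list)
{
        for (; list; list = list->next)
                if (same(r, list))
                        return 1;
        return 0;
}

/* consume 'head' (newest first), return oldest-first with duplicates dropped */
static struct rec *prepare(struct rec *head)
{
        struct rec *out = NULL, *next;

        for (; head; head = next) {
                next = head->next;
                /* as in the kernel: skip this record if an equivalent one is
                 * still pending further down the (older) part of the list */
                if (is_dup(head, next))
                        continue;
                head->next = out;       /* pushing reverses the order */
                out = head;
        }
        return out;
}

int main(void)
{
        struct rec c = { 1, 0x10, NULL }, b = { 1, 0x10, &c }, a = { 2, 0x20, &b };

        for (struct rec *r = prepare(&a); r; r = r->next)
                printf("bank %d status %#x\n", r->bank, r->status);
        return 0;
}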
diff --git a/arch/x86/kernel/cpu/mcheck/mce-internal.h b/arch/x86/kernel/cpu/mcheck/mce-internal.h
index 547720efd923..cd74a3f00aea 100644
--- a/arch/x86/kernel/cpu/mcheck/mce-internal.h
+++ b/arch/x86/kernel/cpu/mcheck/mce-internal.h
@@ -35,6 +35,7 @@ void mce_gen_pool_process(void);
bool mce_gen_pool_empty(void);
int mce_gen_pool_add(struct mce *mce);
int mce_gen_pool_init(void);
+struct llist_node *mce_gen_pool_prepare_records(void);
extern int (*mce_severity)(struct mce *a, int tolerant, char **msg, bool is_excp);
struct dentry *mce_get_debugfs_dir(void);
@@ -81,3 +82,17 @@ static inline int apei_clear_mce(u64 record_id)
#endif
void mce_inject_log(struct mce *m);
+
+/*
+ * We consider records to be equivalent if bank+status+addr+misc all match.
+ * This is only used when the system is going down because of a fatal error
+ * to avoid cluttering the console log with essentially repeated information.
+ * In normal processing all errors seen are logged.
+ */
+static inline bool mce_cmp(struct mce *m1, struct mce *m2)
+{
+ return m1->bank != m2->bank ||
+ m1->status != m2->status ||
+ m1->addr != m2->addr ||
+ m1->misc != m2->misc;
+}
diff --git a/arch/x86/kernel/cpu/mcheck/mce-severity.c b/arch/x86/kernel/cpu/mcheck/mce-severity.c
index 5119766d9889..631356c8cca4 100644
--- a/arch/x86/kernel/cpu/mcheck/mce-severity.c
+++ b/arch/x86/kernel/cpu/mcheck/mce-severity.c
@@ -204,6 +204,33 @@ static int error_context(struct mce *m)
return IN_KERNEL;
}
+static int mce_severity_amd_smca(struct mce *m, int err_ctx)
+{
+ u32 addr = MSR_AMD64_SMCA_MCx_CONFIG(m->bank);
+ u32 low, high;
+
+ /*
+ * We need to look at the following bits:
+ * - "succor" bit (data poisoning support), and
+ * - TCC bit (Task Context Corrupt)
+ * in MCi_STATUS to determine error severity.
+ */
+ if (!mce_flags.succor)
+ return MCE_PANIC_SEVERITY;
+
+ if (rdmsr_safe(addr, &low, &high))
+ return MCE_PANIC_SEVERITY;
+
+ /* TCC (Task context corrupt). If set and if IN_KERNEL, panic. */
+ if ((low & MCI_CONFIG_MCAX) &&
+ (m->status & MCI_STATUS_TCC) &&
+ (err_ctx == IN_KERNEL))
+ return MCE_PANIC_SEVERITY;
+
+ /* ...otherwise invoke hwpoison handler. */
+ return MCE_AR_SEVERITY;
+}
+
/*
* See AMD Error Scope Hierarchy table in a newer BKDG. For example
* 49125_15h_Models_30h-3Fh_BKDG.pdf, section "RAS Features"
@@ -225,6 +252,9 @@ static int mce_severity_amd(struct mce *m, int tolerant, char **msg, bool is_exc
* to at least kill process to prolong system operation.
*/
if (mce_flags.overflow_recov) {
+ if (mce_flags.smca)
+ return mce_severity_amd_smca(m, ctx);
+
/* software can try to contain */
if (!(m->mcgstatus & MCG_STATUS_RIPV) && (ctx == IN_KERNEL))
return MCE_PANIC_SEVERITY;
diff --git a/arch/x86/kernel/cpu/mcheck/mce.c b/arch/x86/kernel/cpu/mcheck/mce.c
index f0c921b03e42..92e5e37d97bf 100644
--- a/arch/x86/kernel/cpu/mcheck/mce.c
+++ b/arch/x86/kernel/cpu/mcheck/mce.c
@@ -161,7 +161,6 @@ void mce_log(struct mce *mce)
if (!mce_gen_pool_add(mce))
irq_work_queue(&mce_irq_work);
- mce->finished = 0;
wmb();
for (;;) {
entry = mce_log_get_idx_check(mcelog.next);
@@ -194,7 +193,6 @@ void mce_log(struct mce *mce)
mcelog.entry[entry].finished = 1;
wmb();
- mce->finished = 1;
set_bit(0, &mce_need_notify);
}
@@ -224,6 +222,53 @@ void mce_unregister_decode_chain(struct notifier_block *nb)
}
EXPORT_SYMBOL_GPL(mce_unregister_decode_chain);
+static inline u32 ctl_reg(int bank)
+{
+ return MSR_IA32_MCx_CTL(bank);
+}
+
+static inline u32 status_reg(int bank)
+{
+ return MSR_IA32_MCx_STATUS(bank);
+}
+
+static inline u32 addr_reg(int bank)
+{
+ return MSR_IA32_MCx_ADDR(bank);
+}
+
+static inline u32 misc_reg(int bank)
+{
+ return MSR_IA32_MCx_MISC(bank);
+}
+
+static inline u32 smca_ctl_reg(int bank)
+{
+ return MSR_AMD64_SMCA_MCx_CTL(bank);
+}
+
+static inline u32 smca_status_reg(int bank)
+{
+ return MSR_AMD64_SMCA_MCx_STATUS(bank);
+}
+
+static inline u32 smca_addr_reg(int bank)
+{
+ return MSR_AMD64_SMCA_MCx_ADDR(bank);
+}
+
+static inline u32 smca_misc_reg(int bank)
+{
+ return MSR_AMD64_SMCA_MCx_MISC(bank);
+}
+
+struct mca_msr_regs msr_ops = {
+ .ctl = ctl_reg,
+ .status = status_reg,
+ .addr = addr_reg,
+ .misc = misc_reg
+};
+
static void print_mce(struct mce *m)
{
int ret = 0;
@@ -290,7 +335,9 @@ static void wait_for_panic(void)
static void mce_panic(const char *msg, struct mce *final, char *exp)
{
- int i, apei_err = 0;
+ int apei_err = 0;
+ struct llist_node *pending;
+ struct mce_evt_llist *l;
if (!fake_panic) {
/*
@@ -307,11 +354,10 @@ static void mce_panic(const char *msg, struct mce *final, char *exp)
if (atomic_inc_return(&mce_fake_panicked) > 1)
return;
}
+ pending = mce_gen_pool_prepare_records();
/* First print corrected ones that are still unlogged */
- for (i = 0; i < MCE_LOG_LEN; i++) {
- struct mce *m = &mcelog.entry[i];
- if (!(m->status & MCI_STATUS_VAL))
- continue;
+ llist_for_each_entry(l, pending, llnode) {
+ struct mce *m = &l->mce;
if (!(m->status & MCI_STATUS_UC)) {
print_mce(m);
if (!apei_err)
@@ -319,13 +365,11 @@ static void mce_panic(const char *msg, struct mce *final, char *exp)
}
}
/* Now print uncorrected but with the final one last */
- for (i = 0; i < MCE_LOG_LEN; i++) {
- struct mce *m = &mcelog.entry[i];
- if (!(m->status & MCI_STATUS_VAL))
- continue;
+ llist_for_each_entry(l, pending, llnode) {
+ struct mce *m = &l->mce;
if (!(m->status & MCI_STATUS_UC))
continue;
- if (!final || memcmp(m, final, sizeof(struct mce))) {
+ if (!final || mce_cmp(m, final)) {
print_mce(m);
if (!apei_err)
apei_err = apei_write_mce(m);
@@ -356,11 +400,11 @@ static int msr_to_offset(u32 msr)
if (msr == mca_cfg.rip_msr)
return offsetof(struct mce, ip);
- if (msr == MSR_IA32_MCx_STATUS(bank))
+ if (msr == msr_ops.status(bank))
return offsetof(struct mce, status);
- if (msr == MSR_IA32_MCx_ADDR(bank))
+ if (msr == msr_ops.addr(bank))
return offsetof(struct mce, addr);
- if (msr == MSR_IA32_MCx_MISC(bank))
+ if (msr == msr_ops.misc(bank))
return offsetof(struct mce, misc);
if (msr == MSR_IA32_MCG_STATUS)
return offsetof(struct mce, mcgstatus);
@@ -523,9 +567,9 @@ static struct notifier_block mce_srao_nb = {
static void mce_read_aux(struct mce *m, int i)
{
if (m->status & MCI_STATUS_MISCV)
- m->misc = mce_rdmsrl(MSR_IA32_MCx_MISC(i));
+ m->misc = mce_rdmsrl(msr_ops.misc(i));
if (m->status & MCI_STATUS_ADDRV) {
- m->addr = mce_rdmsrl(MSR_IA32_MCx_ADDR(i));
+ m->addr = mce_rdmsrl(msr_ops.addr(i));
/*
* Mask the reported address by the reported granularity.
@@ -607,7 +651,7 @@ bool machine_check_poll(enum mcp_flags flags, mce_banks_t *b)
m.tsc = 0;
barrier();
- m.status = mce_rdmsrl(MSR_IA32_MCx_STATUS(i));
+ m.status = mce_rdmsrl(msr_ops.status(i));
if (!(m.status & MCI_STATUS_VAL))
continue;
@@ -654,7 +698,7 @@ bool machine_check_poll(enum mcp_flags flags, mce_banks_t *b)
/*
* Clear state for this bank.
*/
- mce_wrmsrl(MSR_IA32_MCx_STATUS(i), 0);
+ mce_wrmsrl(msr_ops.status(i), 0);
}
/*
@@ -679,7 +723,7 @@ static int mce_no_way_out(struct mce *m, char **msg, unsigned long *validp,
char *tmp;
for (i = 0; i < mca_cfg.banks; i++) {
- m->status = mce_rdmsrl(MSR_IA32_MCx_STATUS(i));
+ m->status = mce_rdmsrl(msr_ops.status(i));
if (m->status & MCI_STATUS_VAL) {
__set_bit(i, validp);
if (quirk_no_way_out)
@@ -830,9 +874,9 @@ static int mce_start(int *no_way_out)
atomic_add(*no_way_out, &global_nwo);
/*
- * global_nwo should be updated before mce_callin
+ * Rely on the implied barrier below, such that global_nwo
+ * is updated before mce_callin.
*/
- smp_wmb();
order = atomic_inc_return(&mce_callin);
/*
@@ -957,7 +1001,7 @@ static void mce_clear_state(unsigned long *toclear)
for (i = 0; i < mca_cfg.banks; i++) {
if (test_bit(i, toclear))
- mce_wrmsrl(MSR_IA32_MCx_STATUS(i), 0);
+ mce_wrmsrl(msr_ops.status(i), 0);
}
}
@@ -994,11 +1038,12 @@ void do_machine_check(struct pt_regs *regs, long error_code)
int i;
int worst = 0;
int severity;
+
/*
* Establish sequential order between the CPUs entering the machine
* check handler.
*/
- int order;
+ int order = -1;
/*
* If no_way_out gets set, there is no safe way to recover from this
* MCE. If mca_cfg.tolerant is cranked up, we'll try anyway.
@@ -1012,7 +1057,12 @@ void do_machine_check(struct pt_regs *regs, long error_code)
DECLARE_BITMAP(toclear, MAX_NR_BANKS);
DECLARE_BITMAP(valid_banks, MAX_NR_BANKS);
char *msg = "Unknown";
- int lmce = 0;
+
+ /*
+ * MCEs are always local on AMD. The same is determined by MCG_STATUS_LMCES
+ * on Intel.
+ */
+ int lmce = 1;
/* If this CPU is offline, just bail out. */
if (cpu_is_offline(smp_processor_id())) {
@@ -1051,19 +1101,20 @@ void do_machine_check(struct pt_regs *regs, long error_code)
kill_it = 1;
/*
- * Check if this MCE is signaled to only this logical processor
+ * Check if this MCE is signaled to only this logical processor,
+ * on Intel only.
*/
- if (m.mcgstatus & MCG_STATUS_LMCES)
- lmce = 1;
- else {
- /*
- * Go through all the banks in exclusion of the other CPUs.
- * This way we don't report duplicated events on shared banks
- * because the first one to see it will clear it.
- * If this is a Local MCE, then no need to perform rendezvous.
- */
+ if (m.cpuvendor == X86_VENDOR_INTEL)
+ lmce = m.mcgstatus & MCG_STATUS_LMCES;
+
+ /*
+ * Go through all banks in exclusion of the other CPUs. This way we
+ * don't report duplicated events on shared banks because the first one
+ * to see it will clear it. If this is a Local MCE, then no need to
+ * perform rendezvous.
+ */
+ if (!lmce)
order = mce_start(&no_way_out);
- }
for (i = 0; i < cfg->banks; i++) {
__clear_bit(i, toclear);
@@ -1076,7 +1127,7 @@ void do_machine_check(struct pt_regs *regs, long error_code)
m.addr = 0;
m.bank = i;
- m.status = mce_rdmsrl(MSR_IA32_MCx_STATUS(i));
+ m.status = mce_rdmsrl(msr_ops.status(i));
if ((m.status & MCI_STATUS_VAL) == 0)
continue;
@@ -1420,7 +1471,6 @@ static void __mcheck_cpu_init_generic(void)
enum mcp_flags m_fl = 0;
mce_banks_t all_banks;
u64 cap;
- int i;
if (!mca_cfg.bootlog)
m_fl = MCP_DONTLOG;
@@ -1436,14 +1486,19 @@ static void __mcheck_cpu_init_generic(void)
rdmsrl(MSR_IA32_MCG_CAP, cap);
if (cap & MCG_CTL_P)
wrmsr(MSR_IA32_MCG_CTL, 0xffffffff, 0xffffffff);
+}
+
+static void __mcheck_cpu_init_clear_banks(void)
+{
+ int i;
for (i = 0; i < mca_cfg.banks; i++) {
struct mce_bank *b = &mce_banks[i];
if (!b->init)
continue;
- wrmsrl(MSR_IA32_MCx_CTL(i), b->ctl);
- wrmsrl(MSR_IA32_MCx_STATUS(i), 0);
+ wrmsrl(msr_ops.ctl(i), b->ctl);
+ wrmsrl(msr_ops.status(i), 0);
}
}
@@ -1495,7 +1550,7 @@ static int __mcheck_cpu_apply_quirks(struct cpuinfo_x86 *c)
*/
clear_bit(10, (unsigned long *)&mce_banks[4].ctl);
}
- if (c->x86 <= 17 && cfg->bootlog < 0) {
+ if (c->x86 < 17 && cfg->bootlog < 0) {
/*
* Lots of broken BIOS around that don't clear them
* by default and leave crap in there. Don't log:
@@ -1628,11 +1683,19 @@ static void __mcheck_cpu_init_vendor(struct cpuinfo_x86 *c)
break;
case X86_VENDOR_AMD: {
- u32 ebx = cpuid_ebx(0x80000007);
+ mce_flags.overflow_recov = !!cpu_has(c, X86_FEATURE_OVERFLOW_RECOV);
+ mce_flags.succor = !!cpu_has(c, X86_FEATURE_SUCCOR);
+ mce_flags.smca = !!cpu_has(c, X86_FEATURE_SMCA);
- mce_flags.overflow_recov = !!(ebx & BIT(0));
- mce_flags.succor = !!(ebx & BIT(1));
- mce_flags.smca = !!(ebx & BIT(3));
+ /*
+ * Install proper ops for Scalable MCA enabled processors
+ */
+ if (mce_flags.smca) {
+ msr_ops.ctl = smca_ctl_reg;
+ msr_ops.status = smca_status_reg;
+ msr_ops.addr = smca_addr_reg;
+ msr_ops.misc = smca_misc_reg;
+ }
mce_amd_feature_init(c);
break;
@@ -1717,6 +1780,7 @@ void mcheck_cpu_init(struct cpuinfo_x86 *c)
__mcheck_cpu_init_generic();
__mcheck_cpu_init_vendor(c);
+ __mcheck_cpu_init_clear_banks();
__mcheck_cpu_init_timer();
}
@@ -2082,7 +2146,7 @@ static void mce_disable_error_reporting(void)
struct mce_bank *b = &mce_banks[i];
if (b->init)
- wrmsrl(MSR_IA32_MCx_CTL(i), 0);
+ wrmsrl(msr_ops.ctl(i), 0);
}
return;
}
@@ -2121,6 +2185,7 @@ static void mce_syscore_resume(void)
{
__mcheck_cpu_init_generic();
__mcheck_cpu_init_vendor(raw_cpu_ptr(&cpu_info));
+ __mcheck_cpu_init_clear_banks();
}
static struct syscore_ops mce_syscore_ops = {
@@ -2138,6 +2203,7 @@ static void mce_cpu_restart(void *data)
if (!mce_available(raw_cpu_ptr(&cpu_info)))
return;
__mcheck_cpu_init_generic();
+ __mcheck_cpu_init_clear_banks();
__mcheck_cpu_init_timer();
}
@@ -2413,7 +2479,7 @@ static void mce_reenable_cpu(void *h)
struct mce_bank *b = &mce_banks[i];
if (b->init)
- wrmsrl(MSR_IA32_MCx_CTL(i), b->ctl);
+ wrmsrl(msr_ops.ctl(i), b->ctl);
}
}
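
The recurring change in this file is the msr_ops table: instead of open-coding MSR_IA32_MCx_*() everywhere, every reader asks a small struct of function pointers for the register number, and the vendor init swaps in the SMCA variants when CPUID says Scalable MCA is present. A compact stand-alone sketch of that pattern follows; the register constants in it follow the conventional legacy and SMCA layouts but are illustrative, not authoritative values.

#include <stdio.h>

#define MSR_IA32_MCx_CTL(b)          (0x400u + 4 * (b))
#define MSR_IA32_MCx_STATUS(b)       (0x401u + 4 * (b))
#define MSR_AMD64_SMCA_MCx_CTL(b)    (0xc0002000u + 0x10 * (b))
#define MSR_AMD64_SMCA_MCx_STATUS(b) (0xc0002001u + 0x10 * (b))

struct mca_msr_regs {
        unsigned int (*ctl)(int bank);
        unsigned int (*status)(int bank);
};

static unsigned int ctl_reg(int bank)         { return MSR_IA32_MCx_CTL(bank); }
static unsigned int status_reg(int bank)      { return MSR_IA32_MCx_STATUS(bank); }
static unsigned int smca_ctl_reg(int bank)    { return MSR_AMD64_SMCA_MCx_CTL(bank); }
static unsigned int smca_status_reg(int bank) { return MSR_AMD64_SMCA_MCx_STATUS(bank); }

static struct mca_msr_regs msr_ops = { ctl_reg, status_reg };

int main(void)
{
        int smca_supported = 1;         /* pretend CPUID reported SMCA */

        if (smca_supported) {           /* the one-time switch done at init */
                msr_ops.ctl = smca_ctl_reg;
                msr_ops.status = smca_status_reg;
        }

        /* every later reader/writer just asks the ops table for the register */
        printf("bank 2: ctl=%#x status=%#x\n", msr_ops.ctl(2), msr_ops.status(2));
        return 0;
}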
diff --git a/arch/x86/kernel/cpu/mcheck/mce_amd.c b/arch/x86/kernel/cpu/mcheck/mce_amd.c
index 9d656fd436ef..10b0661651e0 100644
--- a/arch/x86/kernel/cpu/mcheck/mce_amd.c
+++ b/arch/x86/kernel/cpu/mcheck/mce_amd.c
@@ -54,14 +54,6 @@
/* Threshold LVT offset is at MSR0xC0000410[15:12] */
#define SMCA_THR_LVT_OFF 0xF000
-/*
- * OS is required to set the MCAX bit to acknowledge that it is now using the
- * new MSR ranges and new registers under each bank. It also means that the OS
- * will configure deferred errors in the new MCx_CONFIG register. If the bit is
- * not set, uncorrectable errors will cause a system panic.
- */
-#define SMCA_MCAX_EN_OFF 0x1
-
static const char * const th_names[] = {
"load_store",
"insn_fetch",
@@ -333,7 +325,7 @@ static u32 get_block_address(u32 current_addr, u32 low, u32 high,
/* Fall back to method we used for older processors: */
switch (block) {
case 0:
- addr = MSR_IA32_MCx_MISC(bank);
+ addr = msr_ops.misc(bank);
break;
case 1:
offset = ((low & MASK_BLKPTR_LO) >> 21);
@@ -351,6 +343,7 @@ prepare_threshold_block(unsigned int bank, unsigned int block, u32 addr,
int offset, u32 misc_high)
{
unsigned int cpu = smp_processor_id();
+ u32 smca_low, smca_high, smca_addr;
struct threshold_block b;
int new;
@@ -369,24 +362,49 @@ prepare_threshold_block(unsigned int bank, unsigned int block, u32 addr,
b.interrupt_enable = 1;
- if (mce_flags.smca) {
- u32 smca_low, smca_high;
- u32 smca_addr = MSR_AMD64_SMCA_MCx_CONFIG(bank);
+ if (!mce_flags.smca) {
+ new = (misc_high & MASK_LVTOFF_HI) >> 20;
+ goto set_offset;
+ }
- if (!rdmsr_safe(smca_addr, &smca_low, &smca_high)) {
- smca_high |= SMCA_MCAX_EN_OFF;
- wrmsr(smca_addr, smca_low, smca_high);
- }
+ smca_addr = MSR_AMD64_SMCA_MCx_CONFIG(bank);
- /* Gather LVT offset for thresholding: */
- if (rdmsr_safe(MSR_CU_DEF_ERR, &smca_low, &smca_high))
- goto out;
+ if (!rdmsr_safe(smca_addr, &smca_low, &smca_high)) {
+ /*
+ * OS is required to set the MCAX bit to acknowledge that it is
+ * now using the new MSR ranges and new registers under each
+ * bank. It also means that the OS will configure deferred
+ * errors in the new MCx_CONFIG register. If the bit is not set,
+ * uncorrectable errors will cause a system panic.
+ *
+ * MCA_CONFIG[MCAX] is bit 32 (bit 0 in the high portion of the MSR).
+ */
+ smca_high |= BIT(0);
- new = (smca_low & SMCA_THR_LVT_OFF) >> 12;
- } else {
- new = (misc_high & MASK_LVTOFF_HI) >> 20;
+ /*
+ * SMCA logs Deferred Error information in MCA_DE{STAT,ADDR}
+ * registers with the option of additionally logging to
+ * MCA_{STATUS,ADDR} if MCA_CONFIG[LogDeferredInMcaStat] is set.
+ *
+ * This bit is usually set by BIOS to retain the old behavior
+ * for OSes that don't use the new registers. Linux supports the
+ * new registers so let's disable that additional logging here.
+ *
+ * MCA_CONFIG[LogDeferredInMcaStat] is bit 34 (bit 2 in the high
+ * portion of the MSR).
+ */
+ smca_high &= ~BIT(2);
+
+ wrmsr(smca_addr, smca_low, smca_high);
}
+ /* Gather LVT offset for thresholding: */
+ if (rdmsr_safe(MSR_CU_DEF_ERR, &smca_low, &smca_high))
+ goto out;
+
+ new = (smca_low & SMCA_THR_LVT_OFF) >> 12;
+
+set_offset:
offset = setup_APIC_mce_threshold(offset, new);
if ((offset == new) && (mce_threshold_vector != amd_threshold_interrupt))
@@ -430,12 +448,23 @@ void mce_amd_feature_init(struct cpuinfo_x86 *c)
deferred_error_interrupt_enable(c);
}
-static void __log_error(unsigned int bank, bool threshold_err, u64 misc)
+static void
+__log_error(unsigned int bank, bool deferred_err, bool threshold_err, u64 misc)
{
+ u32 msr_status = msr_ops.status(bank);
+ u32 msr_addr = msr_ops.addr(bank);
struct mce m;
u64 status;
- rdmsrl(MSR_IA32_MCx_STATUS(bank), status);
+ WARN_ON_ONCE(deferred_err && threshold_err);
+
+ if (deferred_err && mce_flags.smca) {
+ msr_status = MSR_AMD64_SMCA_MCx_DESTAT(bank);
+ msr_addr = MSR_AMD64_SMCA_MCx_DEADDR(bank);
+ }
+
+ rdmsrl(msr_status, status);
+
if (!(status & MCI_STATUS_VAL))
return;
@@ -448,10 +477,11 @@ static void __log_error(unsigned int bank, bool threshold_err, u64 misc)
m.misc = misc;
if (m.status & MCI_STATUS_ADDRV)
- rdmsrl(MSR_IA32_MCx_ADDR(bank), m.addr);
+ rdmsrl(msr_addr, m.addr);
mce_log(&m);
- wrmsrl(MSR_IA32_MCx_STATUS(bank), 0);
+
+ wrmsrl(msr_status, 0);
}
static inline void __smp_deferred_error_interrupt(void)
@@ -479,17 +509,21 @@ asmlinkage __visible void smp_trace_deferred_error_interrupt(void)
/* APIC interrupt handler for deferred errors */
static void amd_deferred_error_interrupt(void)
{
- u64 status;
unsigned int bank;
+ u32 msr_status;
+ u64 status;
for (bank = 0; bank < mca_cfg.banks; ++bank) {
- rdmsrl(MSR_IA32_MCx_STATUS(bank), status);
+ msr_status = (mce_flags.smca) ? MSR_AMD64_SMCA_MCx_DESTAT(bank)
+ : msr_ops.status(bank);
+
+ rdmsrl(msr_status, status);
if (!(status & MCI_STATUS_VAL) ||
!(status & MCI_STATUS_DEFERRED))
continue;
- __log_error(bank, false, 0);
+ __log_error(bank, true, false, 0);
break;
}
}
@@ -544,7 +578,7 @@ static void amd_threshold_interrupt(void)
return;
log:
- __log_error(bank, true, ((u64)high << 32) | low);
+ __log_error(bank, false, true, ((u64)high << 32) | low);
}
/*
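A stand-alone sketch of the bit arithmetic the mce_amd.c hunks above rely on, assuming only what the new comments state: MCA_CONFIG[32] is MCAX, MCA_CONFIG[34] is LogDeferredInMcaStat, and the threshold LVT offset sits in MSR_CU_DEF_ERR[15:12]. The helper names are invented for illustration and are not kernel API.

#include <stdint.h>
#include <stdio.h>

/* Adjust the high half of MCA_CONFIG: bit 32 of the 64-bit MSR is bit 0
 * of the high word, bit 34 is bit 2 of the high word. */
static uint32_t smca_adjust_config_high(uint32_t high)
{
	high |=  (1u << 0);	/* set MCAX: OS acknowledges the new MSR ranges   */
	high &= ~(1u << 2);	/* clear LogDeferredInMcaStat duplicate logging   */
	return high;
}

/* Threshold LVT offset lives in MSR_CU_DEF_ERR[15:12]. */
static unsigned int smca_thr_lvt_off(uint32_t low)
{
	return (low & 0xF000) >> 12;
}

int main(void)
{
	printf("high: %#x\n", smca_adjust_config_high(0x4));	/* prints 0x1 */
	printf("lvt:  %u\n",  smca_thr_lvt_off(0x3000));	/* prints 3   */
	return 0;
}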
diff --git a/arch/x86/kernel/cpu/mcheck/mce_intel.c b/arch/x86/kernel/cpu/mcheck/mce_intel.c
index 1e8bb6c94f14..1defb8ea882c 100644
--- a/arch/x86/kernel/cpu/mcheck/mce_intel.c
+++ b/arch/x86/kernel/cpu/mcheck/mce_intel.c
@@ -84,7 +84,7 @@ static int cmci_supported(int *banks)
*/
if (boot_cpu_data.x86_vendor != X86_VENDOR_INTEL)
return 0;
- if (!cpu_has_apic || lapic_get_maxlvt() < 6)
+ if (!boot_cpu_has(X86_FEATURE_APIC) || lapic_get_maxlvt() < 6)
return 0;
rdmsrl(MSR_IA32_MCG_CAP, cap);
*banks = min_t(unsigned, MAX_NR_BANKS, cap & 0xff);
diff --git a/arch/x86/kernel/cpu/mcheck/therm_throt.c b/arch/x86/kernel/cpu/mcheck/therm_throt.c
index ac780cad3b86..6b9dc4d18ccc 100644
--- a/arch/x86/kernel/cpu/mcheck/therm_throt.c
+++ b/arch/x86/kernel/cpu/mcheck/therm_throt.c
@@ -450,7 +450,7 @@ asmlinkage __visible void smp_trace_thermal_interrupt(struct pt_regs *regs)
/* Thermal monitoring depends on APIC, ACPI and clock modulation */
static int intel_thermal_supported(struct cpuinfo_x86 *c)
{
- if (!cpu_has_apic)
+ if (!boot_cpu_has(X86_FEATURE_APIC))
return 0;
if (!cpu_has(c, X86_FEATURE_ACPI) || !cpu_has(c, X86_FEATURE_ACC))
return 0;
diff --git a/arch/x86/kernel/cpu/microcode/intel.c b/arch/x86/kernel/cpu/microcode/intel.c
index cbb3cf09b065..65cbbcd48fe4 100644
--- a/arch/x86/kernel/cpu/microcode/intel.c
+++ b/arch/x86/kernel/cpu/microcode/intel.c
@@ -422,7 +422,7 @@ static void show_saved_mc(void)
data_size = get_datasize(mc_saved_header);
date = mc_saved_header->date;
- pr_debug("mc_saved[%d]: sig=0x%x, pf=0x%x, rev=0x%x, toal size=0x%x, date = %04x-%02x-%02x\n",
+ pr_debug("mc_saved[%d]: sig=0x%x, pf=0x%x, rev=0x%x, total size=0x%x, date = %04x-%02x-%02x\n",
i, sig, pf, rev, total_size,
date & 0xffff,
date >> 24,
diff --git a/arch/x86/kernel/cpu/mtrr/cyrix.c b/arch/x86/kernel/cpu/mtrr/cyrix.c
index f8c81ba0b465..b1086f79e57e 100644
--- a/arch/x86/kernel/cpu/mtrr/cyrix.c
+++ b/arch/x86/kernel/cpu/mtrr/cyrix.c
@@ -137,7 +137,7 @@ static void prepare_set(void)
u32 cr0;
/* Save value of CR4 and clear Page Global Enable (bit 7) */
- if (cpu_has_pge) {
+ if (boot_cpu_has(X86_FEATURE_PGE)) {
cr4 = __read_cr4();
__write_cr4(cr4 & ~X86_CR4_PGE);
}
@@ -170,7 +170,7 @@ static void post_set(void)
write_cr0(read_cr0() & ~X86_CR0_CD);
/* Restore value of CR4 */
- if (cpu_has_pge)
+ if (boot_cpu_has(X86_FEATURE_PGE))
__write_cr4(cr4);
}
diff --git a/arch/x86/kernel/cpu/mtrr/generic.c b/arch/x86/kernel/cpu/mtrr/generic.c
index 19f57360dfd2..16e37a2581ac 100644
--- a/arch/x86/kernel/cpu/mtrr/generic.c
+++ b/arch/x86/kernel/cpu/mtrr/generic.c
@@ -444,11 +444,24 @@ static void __init print_mtrr_state(void)
pr_debug("TOM2: %016llx aka %lldM\n", mtrr_tom2, mtrr_tom2>>20);
}
+/* PAT setup for BP. We need to go through sync steps here */
+void __init mtrr_bp_pat_init(void)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+ prepare_set();
+
+ pat_init();
+
+ post_set();
+ local_irq_restore(flags);
+}
+
/* Grab all of the MTRR state for this CPU into *state */
bool __init get_mtrr_state(void)
{
struct mtrr_var_range *vrs;
- unsigned long flags;
unsigned lo, dummy;
unsigned int i;
@@ -481,15 +494,6 @@ bool __init get_mtrr_state(void)
mtrr_state_set = 1;
- /* PAT setup for BP. We need to go through sync steps here */
- local_irq_save(flags);
- prepare_set();
-
- pat_init();
-
- post_set();
- local_irq_restore(flags);
-
return !!(mtrr_state.enabled & MTRR_STATE_MTRR_ENABLED);
}
@@ -741,7 +745,7 @@ static void prepare_set(void) __acquires(set_atomicity_lock)
wbinvd();
/* Save value of CR4 and clear Page Global Enable (bit 7) */
- if (cpu_has_pge) {
+ if (boot_cpu_has(X86_FEATURE_PGE)) {
cr4 = __read_cr4();
__write_cr4(cr4 & ~X86_CR4_PGE);
}
@@ -771,7 +775,7 @@ static void post_set(void) __releases(set_atomicity_lock)
write_cr0(read_cr0() & ~X86_CR0_CD);
/* Restore value of CR4 */
- if (cpu_has_pge)
+ if (boot_cpu_has(X86_FEATURE_PGE))
__write_cr4(cr4);
raw_spin_unlock(&set_atomicity_lock);
}
diff --git a/arch/x86/kernel/cpu/mtrr/main.c b/arch/x86/kernel/cpu/mtrr/main.c
index 10f8d4796240..7d393ecdeee6 100644
--- a/arch/x86/kernel/cpu/mtrr/main.c
+++ b/arch/x86/kernel/cpu/mtrr/main.c
@@ -752,6 +752,9 @@ void __init mtrr_bp_init(void)
/* BIOS may override */
__mtrr_enabled = get_mtrr_state();
+ if (mtrr_enabled())
+ mtrr_bp_pat_init();
+
if (mtrr_cleanup(phys_addr)) {
changed_by_mtrr_cleanup = 1;
mtrr_if->set_all();
@@ -759,8 +762,16 @@ void __init mtrr_bp_init(void)
}
}
- if (!mtrr_enabled())
+ if (!mtrr_enabled()) {
pr_info("MTRR: Disabled\n");
+
+ /*
+ * PAT initialization relies on MTRR's rendezvous handler.
+ * Skip PAT init until the handler can initialize both
+ * features independently.
+ */
+ pat_disable("MTRRs disabled, skipping PAT initialization too.");
+ }
}
void mtrr_ap_init(void)
diff --git a/arch/x86/kernel/cpu/mtrr/mtrr.h b/arch/x86/kernel/cpu/mtrr/mtrr.h
index 951884dcc433..6c7ced07d16d 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.h
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.h
@@ -52,6 +52,7 @@ void set_mtrr_prepare_save(struct set_mtrr_context *ctxt);
void fill_mtrr_var_range(unsigned int index,
u32 base_lo, u32 base_hi, u32 mask_lo, u32 mask_hi);
bool get_mtrr_state(void);
+void mtrr_bp_pat_init(void);
extern void set_mtrr_ops(const struct mtrr_ops *ops);
diff --git a/arch/x86/kernel/cpu/vmware.c b/arch/x86/kernel/cpu/vmware.c
index 364e58346897..8cac429b6a1d 100644
--- a/arch/x86/kernel/cpu/vmware.c
+++ b/arch/x86/kernel/cpu/vmware.c
@@ -94,7 +94,7 @@ static void __init vmware_platform_setup(void)
*/
static uint32_t __init vmware_platform(void)
{
- if (cpu_has_hypervisor) {
+ if (boot_cpu_has(X86_FEATURE_HYPERVISOR)) {
unsigned int eax;
unsigned int hyper_vendor_id[3];
diff --git a/arch/x86/kernel/devicetree.c b/arch/x86/kernel/devicetree.c
index 1f4acd68b98b..3fe45f84ced4 100644
--- a/arch/x86/kernel/devicetree.c
+++ b/arch/x86/kernel/devicetree.c
@@ -151,7 +151,7 @@ static void __init dtb_lapic_setup(void)
return;
/* Did the boot loader setup the local APIC ? */
- if (!cpu_has_apic) {
+ if (!boot_cpu_has(X86_FEATURE_APIC)) {
if (apic_force_enable(r.start))
return;
}
diff --git a/arch/x86/kernel/dumpstack.c b/arch/x86/kernel/dumpstack.c
index 8efa57a5f29e..2bb25c3fe2e8 100644
--- a/arch/x86/kernel/dumpstack.c
+++ b/arch/x86/kernel/dumpstack.c
@@ -260,19 +260,12 @@ int __die(const char *str, struct pt_regs *regs, long err)
unsigned long sp;
#endif
printk(KERN_DEFAULT
- "%s: %04lx [#%d] ", str, err & 0xffff, ++die_counter);
-#ifdef CONFIG_PREEMPT
- printk("PREEMPT ");
-#endif
-#ifdef CONFIG_SMP
- printk("SMP ");
-#endif
- if (debug_pagealloc_enabled())
- printk("DEBUG_PAGEALLOC ");
-#ifdef CONFIG_KASAN
- printk("KASAN");
-#endif
- printk("\n");
+ "%s: %04lx [#%d]%s%s%s%s\n", str, err & 0xffff, ++die_counter,
+ IS_ENABLED(CONFIG_PREEMPT) ? " PREEMPT" : "",
+ IS_ENABLED(CONFIG_SMP) ? " SMP" : "",
+ debug_pagealloc_enabled() ? " DEBUG_PAGEALLOC" : "",
+ IS_ENABLED(CONFIG_KASAN) ? " KASAN" : "");
+
if (notify_die(DIE_OOPS, str, regs, err,
current->thread.trap_nr, SIGSEGV) == NOTIFY_STOP)
return 1;
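A minimal stand-alone illustration of the pattern the __die() change uses: compile-time options collapse into one format call via ternaries instead of a chain of #ifdef'ed printk()s. The DEMO_* macros are stand-ins for IS_ENABLED(CONFIG_*) tests, not real kernel options.

#include <stdio.h>

#define DEMO_PREEMPT 1	/* stand-in for IS_ENABLED(CONFIG_PREEMPT) */
#define DEMO_SMP     1	/* stand-in for IS_ENABLED(CONFIG_SMP)     */
#define DEMO_KASAN   0	/* stand-in for IS_ENABLED(CONFIG_KASAN)   */

int main(void)
{
	/* One call; disabled options contribute an empty string. */
	printf("oops: 0000 [#%d]%s%s%s\n", 1,
	       DEMO_PREEMPT ? " PREEMPT" : "",
	       DEMO_SMP     ? " SMP"     : "",
	       DEMO_KASAN   ? " KASAN"   : "");
	return 0;
}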
diff --git a/arch/x86/kernel/head.c b/arch/x86/kernel/ebda.c
index 992f442ca155..afe65dffee80 100644
--- a/arch/x86/kernel/head.c
+++ b/arch/x86/kernel/ebda.c
@@ -38,7 +38,7 @@ void __init reserve_ebda_region(void)
* that the paravirt case can handle memory setup
* correctly, without our help.
*/
- if (paravirt_enabled())
+ if (!x86_platform.legacy.ebda_search)
return;
/* end of low (conventional) memory */
diff --git a/arch/x86/kernel/fpu/bugs.c b/arch/x86/kernel/fpu/bugs.c
index dd9ca9b60ff3..aad34aafc0e0 100644
--- a/arch/x86/kernel/fpu/bugs.c
+++ b/arch/x86/kernel/fpu/bugs.c
@@ -21,11 +21,15 @@ static double __initdata y = 3145727.0;
* We should really only care about bugs here
* anyway. Not features.
*/
-static void __init check_fpu(void)
+void __init fpu__init_check_bugs(void)
{
u32 cr0_saved;
s32 fdiv_bug;
+ /* kernel_fpu_begin/end() relies on patched alternative instructions. */
+ if (!boot_cpu_has(X86_FEATURE_FPU))
+ return;
+
/* We might have CR0::TS set already, clear it: */
cr0_saved = read_cr0();
write_cr0(cr0_saved & ~X86_CR0_TS);
@@ -59,13 +63,3 @@ static void __init check_fpu(void)
pr_warn("Hmm, FPU with FDIV bug\n");
}
}
-
-void __init fpu__init_check_bugs(void)
-{
- /*
- * kernel_fpu_begin/end() in check_fpu() relies on the patched
- * alternative instructions.
- */
- if (cpu_has_fpu)
- check_fpu();
-}
diff --git a/arch/x86/kernel/fpu/core.c b/arch/x86/kernel/fpu/core.c
index 8e37cc8a539a..97027545a72d 100644
--- a/arch/x86/kernel/fpu/core.c
+++ b/arch/x86/kernel/fpu/core.c
@@ -217,14 +217,14 @@ static inline void fpstate_init_fstate(struct fregs_state *fp)
void fpstate_init(union fpregs_state *state)
{
- if (!cpu_has_fpu) {
+ if (!static_cpu_has(X86_FEATURE_FPU)) {
fpstate_init_soft(&state->soft);
return;
}
memset(state, 0, xstate_size);
- if (cpu_has_fxsr)
+ if (static_cpu_has(X86_FEATURE_FXSR))
fpstate_init_fxstate(&state->fxsave);
else
fpstate_init_fstate(&state->fsave);
@@ -237,7 +237,7 @@ int fpu__copy(struct fpu *dst_fpu, struct fpu *src_fpu)
dst_fpu->fpregs_active = 0;
dst_fpu->last_cpu = -1;
- if (!src_fpu->fpstate_active || !cpu_has_fpu)
+ if (!src_fpu->fpstate_active || !static_cpu_has(X86_FEATURE_FPU))
return 0;
WARN_ON_FPU(src_fpu != &current->thread.fpu);
@@ -506,33 +506,6 @@ void fpu__clear(struct fpu *fpu)
* x87 math exception handling:
*/
-static inline unsigned short get_fpu_cwd(struct fpu *fpu)
-{
- if (cpu_has_fxsr) {
- return fpu->state.fxsave.cwd;
- } else {
- return (unsigned short)fpu->state.fsave.cwd;
- }
-}
-
-static inline unsigned short get_fpu_swd(struct fpu *fpu)
-{
- if (cpu_has_fxsr) {
- return fpu->state.fxsave.swd;
- } else {
- return (unsigned short)fpu->state.fsave.swd;
- }
-}
-
-static inline unsigned short get_fpu_mxcsr(struct fpu *fpu)
-{
- if (cpu_has_xmm) {
- return fpu->state.fxsave.mxcsr;
- } else {
- return MXCSR_DEFAULT;
- }
-}
-
int fpu__exception_code(struct fpu *fpu, int trap_nr)
{
int err;
@@ -547,10 +520,15 @@ int fpu__exception_code(struct fpu *fpu, int trap_nr)
* so if this combination doesn't produce any single exception,
* then we have a bad program that isn't synchronizing its FPU usage
* and it will suffer the consequences since we won't be able to
- * fully reproduce the context of the exception
+ * fully reproduce the context of the exception.
*/
- cwd = get_fpu_cwd(fpu);
- swd = get_fpu_swd(fpu);
+ if (boot_cpu_has(X86_FEATURE_FXSR)) {
+ cwd = fpu->state.fxsave.cwd;
+ swd = fpu->state.fxsave.swd;
+ } else {
+ cwd = (unsigned short)fpu->state.fsave.cwd;
+ swd = (unsigned short)fpu->state.fsave.swd;
+ }
err = swd & ~cwd;
} else {
@@ -560,7 +538,11 @@ int fpu__exception_code(struct fpu *fpu, int trap_nr)
* unmasked exception was caught we must mask the exception mask bits
* at 0x1f80, and then use these to mask the exception bits at 0x3f.
*/
- unsigned short mxcsr = get_fpu_mxcsr(fpu);
+ unsigned short mxcsr = MXCSR_DEFAULT;
+
+ if (boot_cpu_has(X86_FEATURE_XMM))
+ mxcsr = fpu->state.fxsave.mxcsr;
+
err = ~(mxcsr >> 7) & mxcsr;
}
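A stand-alone worked example of the MXCSR arithmetic now open-coded in fpu__exception_code(): exception flags occupy bits 0-5 and their mask bits occupy bits 7-12, so shifting the masks down by 7 lines them up with the flags. The register value below is made up for the demo.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t mxcsr = 0x1f80;		/* power-on default: all six exceptions masked */

	mxcsr &= ~(1u << 9);			/* unmask divide-by-zero (ZM, bit 9)  */
	mxcsr |=  (1u << 2);			/* pretend ZE (bit 2) has been raised */

	uint32_t err = ~(mxcsr >> 7) & mxcsr;	/* same expression as the kernel code */

	/* The low six bits name the unmasked, pending exceptions: prints 0x4 (ZE). */
	printf("unmasked+pending: %#x\n", err & 0x3f);
	return 0;
}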
diff --git a/arch/x86/kernel/fpu/init.c b/arch/x86/kernel/fpu/init.c
index 54c86fffbf9f..aacfd7a82cec 100644
--- a/arch/x86/kernel/fpu/init.c
+++ b/arch/x86/kernel/fpu/init.c
@@ -29,22 +29,22 @@ static void fpu__init_cpu_generic(void)
unsigned long cr0;
unsigned long cr4_mask = 0;
- if (cpu_has_fxsr)
+ if (boot_cpu_has(X86_FEATURE_FXSR))
cr4_mask |= X86_CR4_OSFXSR;
- if (cpu_has_xmm)
+ if (boot_cpu_has(X86_FEATURE_XMM))
cr4_mask |= X86_CR4_OSXMMEXCPT;
if (cr4_mask)
cr4_set_bits(cr4_mask);
cr0 = read_cr0();
cr0 &= ~(X86_CR0_TS|X86_CR0_EM); /* clear TS and EM */
- if (!cpu_has_fpu)
+ if (!boot_cpu_has(X86_FEATURE_FPU))
cr0 |= X86_CR0_EM;
write_cr0(cr0);
/* Flush out any pending x87 state: */
#ifdef CONFIG_MATH_EMULATION
- if (!cpu_has_fpu)
+ if (!boot_cpu_has(X86_FEATURE_FPU))
fpstate_init_soft(&current->thread.fpu.state.soft);
else
#endif
@@ -89,7 +89,7 @@ static void fpu__init_system_early_generic(struct cpuinfo_x86 *c)
}
#ifndef CONFIG_MATH_EMULATION
- if (!cpu_has_fpu) {
+ if (!boot_cpu_has(X86_FEATURE_FPU)) {
pr_emerg("x86/fpu: Giving up, no FPU found and no math emulation present\n");
for (;;)
asm volatile("hlt");
@@ -106,7 +106,7 @@ static void __init fpu__init_system_mxcsr(void)
{
unsigned int mask = 0;
- if (cpu_has_fxsr) {
+ if (boot_cpu_has(X86_FEATURE_FXSR)) {
/* Static because GCC does not get 16-byte stack alignment right: */
static struct fxregs_state fxregs __initdata;
@@ -212,7 +212,7 @@ static void __init fpu__init_system_xstate_size_legacy(void)
* fpu__init_system_xstate().
*/
- if (!cpu_has_fpu) {
+ if (!boot_cpu_has(X86_FEATURE_FPU)) {
/*
* Disable xsave as we do not support it if i387
* emulation is enabled.
@@ -221,7 +221,7 @@ static void __init fpu__init_system_xstate_size_legacy(void)
setup_clear_cpu_cap(X86_FEATURE_XSAVEOPT);
xstate_size = sizeof(struct swregs_state);
} else {
- if (cpu_has_fxsr)
+ if (boot_cpu_has(X86_FEATURE_FXSR))
xstate_size = sizeof(struct fxregs_state);
else
xstate_size = sizeof(struct fregs_state);
diff --git a/arch/x86/kernel/fpu/regset.c b/arch/x86/kernel/fpu/regset.c
index 8bd1c003942a..81422dfb152b 100644
--- a/arch/x86/kernel/fpu/regset.c
+++ b/arch/x86/kernel/fpu/regset.c
@@ -21,7 +21,10 @@ int regset_xregset_fpregs_active(struct task_struct *target, const struct user_r
{
struct fpu *target_fpu = &target->thread.fpu;
- return (cpu_has_fxsr && target_fpu->fpstate_active) ? regset->n : 0;
+ if (boot_cpu_has(X86_FEATURE_FXSR) && target_fpu->fpstate_active)
+ return regset->n;
+ else
+ return 0;
}
int xfpregs_get(struct task_struct *target, const struct user_regset *regset,
@@ -30,7 +33,7 @@ int xfpregs_get(struct task_struct *target, const struct user_regset *regset,
{
struct fpu *fpu = &target->thread.fpu;
- if (!cpu_has_fxsr)
+ if (!boot_cpu_has(X86_FEATURE_FXSR))
return -ENODEV;
fpu__activate_fpstate_read(fpu);
@@ -47,7 +50,7 @@ int xfpregs_set(struct task_struct *target, const struct user_regset *regset,
struct fpu *fpu = &target->thread.fpu;
int ret;
- if (!cpu_has_fxsr)
+ if (!boot_cpu_has(X86_FEATURE_FXSR))
return -ENODEV;
fpu__activate_fpstate_write(fpu);
@@ -65,7 +68,7 @@ int xfpregs_set(struct task_struct *target, const struct user_regset *regset,
* update the header bits in the xsave header, indicating the
* presence of FP and SSE state.
*/
- if (cpu_has_xsave)
+ if (boot_cpu_has(X86_FEATURE_XSAVE))
fpu->state.xsave.header.xfeatures |= XFEATURE_MASK_FPSSE;
return ret;
@@ -79,7 +82,7 @@ int xstateregs_get(struct task_struct *target, const struct user_regset *regset,
struct xregs_state *xsave;
int ret;
- if (!cpu_has_xsave)
+ if (!boot_cpu_has(X86_FEATURE_XSAVE))
return -ENODEV;
fpu__activate_fpstate_read(fpu);
@@ -108,7 +111,7 @@ int xstateregs_set(struct task_struct *target, const struct user_regset *regset,
struct xregs_state *xsave;
int ret;
- if (!cpu_has_xsave)
+ if (!boot_cpu_has(X86_FEATURE_XSAVE))
return -ENODEV;
fpu__activate_fpstate_write(fpu);
@@ -275,10 +278,10 @@ int fpregs_get(struct task_struct *target, const struct user_regset *regset,
fpu__activate_fpstate_read(fpu);
- if (!static_cpu_has(X86_FEATURE_FPU))
+ if (!boot_cpu_has(X86_FEATURE_FPU))
return fpregs_soft_get(target, regset, pos, count, kbuf, ubuf);
- if (!cpu_has_fxsr)
+ if (!boot_cpu_has(X86_FEATURE_FXSR))
return user_regset_copyout(&pos, &count, &kbuf, &ubuf,
&fpu->state.fsave, 0,
-1);
@@ -306,10 +309,10 @@ int fpregs_set(struct task_struct *target, const struct user_regset *regset,
fpu__activate_fpstate_write(fpu);
fpstate_sanitize_xstate(fpu);
- if (!static_cpu_has(X86_FEATURE_FPU))
+ if (!boot_cpu_has(X86_FEATURE_FPU))
return fpregs_soft_set(target, regset, pos, count, kbuf, ubuf);
- if (!cpu_has_fxsr)
+ if (!boot_cpu_has(X86_FEATURE_FXSR))
return user_regset_copyin(&pos, &count, &kbuf, &ubuf,
&fpu->state.fsave, 0,
-1);
@@ -325,7 +328,7 @@ int fpregs_set(struct task_struct *target, const struct user_regset *regset,
* update the header bit in the xsave header, indicating the
* presence of FP.
*/
- if (cpu_has_xsave)
+ if (boot_cpu_has(X86_FEATURE_XSAVE))
fpu->state.xsave.header.xfeatures |= XFEATURE_MASK_FP;
return ret;
}
diff --git a/arch/x86/kernel/fpu/xstate.c b/arch/x86/kernel/fpu/xstate.c
index b48ef35b28d4..4ea2a59483c7 100644
--- a/arch/x86/kernel/fpu/xstate.c
+++ b/arch/x86/kernel/fpu/xstate.c
@@ -190,7 +190,7 @@ void fpstate_sanitize_xstate(struct fpu *fpu)
*/
void fpu__init_cpu_xstate(void)
{
- if (!cpu_has_xsave || !xfeatures_mask)
+ if (!boot_cpu_has(X86_FEATURE_XSAVE) || !xfeatures_mask)
return;
cr4_set_bits(X86_CR4_OSXSAVE);
@@ -280,7 +280,7 @@ static void __init setup_xstate_comp(void)
xstate_comp_offsets[0] = 0;
xstate_comp_offsets[1] = offsetof(struct fxregs_state, xmm_space);
- if (!cpu_has_xsaves) {
+ if (!boot_cpu_has(X86_FEATURE_XSAVES)) {
for (i = FIRST_EXTENDED_XFEATURE; i < XFEATURE_MAX; i++) {
if (xfeature_enabled(i)) {
xstate_comp_offsets[i] = xstate_offsets[i];
@@ -316,13 +316,13 @@ static void __init setup_init_fpu_buf(void)
WARN_ON_FPU(!on_boot_cpu);
on_boot_cpu = 0;
- if (!cpu_has_xsave)
+ if (!boot_cpu_has(X86_FEATURE_XSAVE))
return;
setup_xstate_features();
print_xstate_features();
- if (cpu_has_xsaves) {
+ if (boot_cpu_has(X86_FEATURE_XSAVES)) {
init_fpstate.xsave.header.xcomp_bv = (u64)1 << 63 | xfeatures_mask;
init_fpstate.xsave.header.xfeatures = xfeatures_mask;
}
@@ -417,7 +417,7 @@ static int xfeature_size(int xfeature_nr)
*/
static int using_compacted_format(void)
{
- return cpu_has_xsaves;
+ return boot_cpu_has(X86_FEATURE_XSAVES);
}
static void __xstate_dump_leaves(void)
@@ -549,7 +549,7 @@ static unsigned int __init calculate_xstate_size(void)
unsigned int eax, ebx, ecx, edx;
unsigned int calculated_xstate_size;
- if (!cpu_has_xsaves) {
+ if (!boot_cpu_has(X86_FEATURE_XSAVES)) {
/*
* - CPUID function 0DH, sub-function 0:
* EBX enumerates the size (in bytes) required by
@@ -630,7 +630,7 @@ void __init fpu__init_system_xstate(void)
WARN_ON_FPU(!on_boot_cpu);
on_boot_cpu = 0;
- if (!cpu_has_xsave) {
+ if (!boot_cpu_has(X86_FEATURE_XSAVE)) {
pr_info("x86/fpu: Legacy x87 FPU detected.\n");
return;
}
@@ -667,7 +667,7 @@ void __init fpu__init_system_xstate(void)
pr_info("x86/fpu: Enabled xstate features 0x%llx, context size is %d bytes, using '%s' format.\n",
xfeatures_mask,
xstate_size,
- cpu_has_xsaves ? "compacted" : "standard");
+ boot_cpu_has(X86_FEATURE_XSAVES) ? "compacted" : "standard");
}
/*
@@ -678,7 +678,7 @@ void fpu__resume_cpu(void)
/*
* Restore XCR0 on xsave capable CPUs:
*/
- if (cpu_has_xsave)
+ if (boot_cpu_has(X86_FEATURE_XSAVE))
xsetbv(XCR_XFEATURE_ENABLED_MASK, xfeatures_mask);
}
diff --git a/arch/x86/kernel/head32.c b/arch/x86/kernel/head32.c
index 2911ef3a9f1c..d784bb547a9d 100644
--- a/arch/x86/kernel/head32.c
+++ b/arch/x86/kernel/head32.c
@@ -34,6 +34,8 @@ asmlinkage __visible void __init i386_start_kernel(void)
cr4_init_shadow();
sanitize_boot_params(&boot_params);
+ x86_early_init_platform_quirks();
+
/* Call the subarch specific early setup function */
switch (boot_params.hdr.hardware_subarch) {
case X86_SUBARCH_INTEL_MID:
diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
index 1f4422d5c8d0..b72fb0b71dd1 100644
--- a/arch/x86/kernel/head64.c
+++ b/arch/x86/kernel/head64.c
@@ -182,6 +182,7 @@ void __init x86_64_start_reservations(char *real_mode_data)
if (!boot_params.hdr.version)
copy_bootdata(__va(real_mode_data));
+ x86_early_init_platform_quirks();
reserve_ebda_region();
switch (boot_params.hdr.hardware_subarch) {
diff --git a/arch/x86/kernel/head_32.S b/arch/x86/kernel/head_32.S
index af1112980dd4..6f8902b0d151 100644
--- a/arch/x86/kernel/head_32.S
+++ b/arch/x86/kernel/head_32.S
@@ -555,62 +555,53 @@ early_idt_handler_common:
*/
cld
- cmpl $2,(%esp) # X86_TRAP_NMI
- je .Lis_nmi # Ignore NMI
-
- cmpl $2,%ss:early_recursion_flag
- je hlt_loop
incl %ss:early_recursion_flag
- push %eax # 16(%esp)
- push %ecx # 12(%esp)
- push %edx # 8(%esp)
- push %ds # 4(%esp)
- push %es # 0(%esp)
- movl $(__KERNEL_DS),%eax
- movl %eax,%ds
- movl %eax,%es
-
- cmpl $(__KERNEL_CS),32(%esp)
- jne 10f
+ /* The vector number is in pt_regs->gs */
- leal 28(%esp),%eax # Pointer to %eip
- call early_fixup_exception
- andl %eax,%eax
- jnz ex_entry /* found an exception entry */
-
-10:
-#ifdef CONFIG_PRINTK
- xorl %eax,%eax
- movw %ax,2(%esp) /* clean up the segment values on some cpus */
- movw %ax,6(%esp)
- movw %ax,34(%esp)
- leal 40(%esp),%eax
- pushl %eax /* %esp before the exception */
- pushl %ebx
- pushl %ebp
- pushl %esi
- pushl %edi
- movl %cr2,%eax
- pushl %eax
- pushl (20+6*4)(%esp) /* trapno */
- pushl $fault_msg
- call printk
-#endif
- call dump_stack
-hlt_loop:
- hlt
- jmp hlt_loop
-
-ex_entry:
- pop %es
- pop %ds
- pop %edx
- pop %ecx
- pop %eax
- decl %ss:early_recursion_flag
-.Lis_nmi:
- addl $8,%esp /* drop vector number and error code */
+ cld
+ pushl %fs /* pt_regs->fs */
+ movw $0, 2(%esp) /* clear high bits (some CPUs leave garbage) */
+ pushl %es /* pt_regs->es */
+ movw $0, 2(%esp) /* clear high bits (some CPUs leave garbage) */
+ pushl %ds /* pt_regs->ds */
+ movw $0, 2(%esp) /* clear high bits (some CPUs leave garbage) */
+ pushl %eax /* pt_regs->ax */
+ pushl %ebp /* pt_regs->bp */
+ pushl %edi /* pt_regs->di */
+ pushl %esi /* pt_regs->si */
+ pushl %edx /* pt_regs->dx */
+ pushl %ecx /* pt_regs->cx */
+ pushl %ebx /* pt_regs->bx */
+
+ /* Fix up DS and ES */
+ movl $(__KERNEL_DS), %ecx
+ movl %ecx, %ds
+ movl %ecx, %es
+
+ /* Load the vector number into EDX */
+ movl PT_GS(%esp), %edx
+
+ /* Load GS into pt_regs->gs and clear high bits */
+ movw %gs, PT_GS(%esp)
+ movw $0, PT_GS+2(%esp)
+
+ movl %esp, %eax /* args are pt_regs (EAX), trapnr (EDX) */
+ call early_fixup_exception
+
+ popl %ebx /* pt_regs->bx */
+ popl %ecx /* pt_regs->cx */
+ popl %edx /* pt_regs->dx */
+ popl %esi /* pt_regs->si */
+ popl %edi /* pt_regs->di */
+ popl %ebp /* pt_regs->bp */
+ popl %eax /* pt_regs->ax */
+ popl %ds /* pt_regs->ds */
+ popl %es /* pt_regs->es */
+ popl %fs /* pt_regs->fs */
+ popl %gs /* pt_regs->gs */
+ decl %ss:early_recursion_flag
+ addl $4, %esp /* pop pt_regs->orig_ax */
iret
ENDPROC(early_idt_handler_common)
@@ -647,10 +638,14 @@ ignore_int:
popl %eax
#endif
iret
+
+hlt_loop:
+ hlt
+ jmp hlt_loop
ENDPROC(ignore_int)
__INITDATA
.align 4
-early_recursion_flag:
+GLOBAL(early_recursion_flag)
.long 0
__REFDATA
@@ -715,19 +710,6 @@ __INITRODATA
int_msg:
.asciz "Unknown interrupt or fault at: %p %p %p\n"
-fault_msg:
-/* fault info: */
- .ascii "BUG: Int %d: CR2 %p\n"
-/* regs pushed in early_idt_handler: */
- .ascii " EDI %p ESI %p EBP %p EBX %p\n"
- .ascii " ESP %p ES %p DS %p\n"
- .ascii " EDX %p ECX %p EAX %p\n"
-/* fault frame: */
- .ascii " vec %p err %p EIP %p CS %p flg %p\n"
- .ascii "Stack: %p %p %p %p %p %p %p %p\n"
- .ascii " %p %p %p %p %p %p %p %p\n"
- .asciz " %p %p %p %p %p %p %p %p\n"
-
#include "../../x86/xen/xen-head.S"
/*
diff --git a/arch/x86/kernel/head_64.S b/arch/x86/kernel/head_64.S
index 22fbf9df61bb..5df831ef1442 100644
--- a/arch/x86/kernel/head_64.S
+++ b/arch/x86/kernel/head_64.S
@@ -20,6 +20,7 @@
#include <asm/processor-flags.h>
#include <asm/percpu.h>
#include <asm/nops.h>
+#include "../entry/calling.h"
#ifdef CONFIG_PARAVIRT
#include <asm/asm-offsets.h>
@@ -64,6 +65,14 @@ startup_64:
* tables and then reload them.
*/
+ /*
+ * Setup stack for verify_cpu(). "-8" because stack_start is defined
+ * this way, see below. Our best guess is a NULL ptr for stack
+ * termination heuristics and we don't want to break anything which
+ * might depend on it (kgdb, ...).
+ */
+ leaq (__end_init_task - 8)(%rip), %rsp
+
/* Sanitize CPU configuration */
call verify_cpu
@@ -350,90 +359,48 @@ early_idt_handler_common:
*/
cld
- cmpl $2,(%rsp) # X86_TRAP_NMI
- je .Lis_nmi # Ignore NMI
-
- cmpl $2,early_recursion_flag(%rip)
- jz 1f
incl early_recursion_flag(%rip)
- pushq %rax # 64(%rsp)
- pushq %rcx # 56(%rsp)
- pushq %rdx # 48(%rsp)
- pushq %rsi # 40(%rsp)
- pushq %rdi # 32(%rsp)
- pushq %r8 # 24(%rsp)
- pushq %r9 # 16(%rsp)
- pushq %r10 # 8(%rsp)
- pushq %r11 # 0(%rsp)
-
- cmpl $__KERNEL_CS,96(%rsp)
- jne 11f
-
- cmpl $14,72(%rsp) # Page fault?
+ /* The vector number is currently in the pt_regs->di slot. */
+ pushq %rsi /* pt_regs->si */
+ movq 8(%rsp), %rsi /* RSI = vector number */
+ movq %rdi, 8(%rsp) /* pt_regs->di = RDI */
+ pushq %rdx /* pt_regs->dx */
+ pushq %rcx /* pt_regs->cx */
+ pushq %rax /* pt_regs->ax */
+ pushq %r8 /* pt_regs->r8 */
+ pushq %r9 /* pt_regs->r9 */
+ pushq %r10 /* pt_regs->r10 */
+ pushq %r11 /* pt_regs->r11 */
+ pushq %rbx /* pt_regs->bx */
+ pushq %rbp /* pt_regs->bp */
+ pushq %r12 /* pt_regs->r12 */
+ pushq %r13 /* pt_regs->r13 */
+ pushq %r14 /* pt_regs->r14 */
+ pushq %r15 /* pt_regs->r15 */
+
+ cmpq $14,%rsi /* Page fault? */
jnz 10f
- GET_CR2_INTO(%rdi) # can clobber any volatile register if pv
+ GET_CR2_INTO(%rdi) /* Can clobber any volatile register if pv */
call early_make_pgtable
andl %eax,%eax
- jz 20f # All good
+ jz 20f /* All good */
10:
- leaq 88(%rsp),%rdi # Pointer to %rip
+ movq %rsp,%rdi /* RDI = pt_regs; RSI is already trapnr */
call early_fixup_exception
- andl %eax,%eax
- jnz 20f # Found an exception entry
-
-11:
-#ifdef CONFIG_EARLY_PRINTK
- GET_CR2_INTO(%r9) # can clobber any volatile register if pv
- movl 80(%rsp),%r8d # error code
- movl 72(%rsp),%esi # vector number
- movl 96(%rsp),%edx # %cs
- movq 88(%rsp),%rcx # %rip
- xorl %eax,%eax
- leaq early_idt_msg(%rip),%rdi
- call early_printk
- cmpl $2,early_recursion_flag(%rip)
- jz 1f
- call dump_stack
-#ifdef CONFIG_KALLSYMS
- leaq early_idt_ripmsg(%rip),%rdi
- movq 40(%rsp),%rsi # %rip again
- call __print_symbol
-#endif
-#endif /* EARLY_PRINTK */
-1: hlt
- jmp 1b
-
-20: # Exception table entry found or page table generated
- popq %r11
- popq %r10
- popq %r9
- popq %r8
- popq %rdi
- popq %rsi
- popq %rdx
- popq %rcx
- popq %rax
+
+20:
decl early_recursion_flag(%rip)
-.Lis_nmi:
- addq $16,%rsp # drop vector number and error code
- INTERRUPT_RETURN
+ jmp restore_regs_and_iret
ENDPROC(early_idt_handler_common)
__INITDATA
.balign 4
-early_recursion_flag:
+GLOBAL(early_recursion_flag)
.long 0
-#ifdef CONFIG_EARLY_PRINTK
-early_idt_msg:
- .asciz "PANIC: early exception %02lx rip %lx:%lx error %lx cr2 %lx\n"
-early_idt_ripmsg:
- .asciz "RIP %s\n"
-#endif /* CONFIG_EARLY_PRINTK */
-
#define NEXT_PAGE(name) \
.balign PAGE_SIZE; \
GLOBAL(name)
diff --git a/arch/x86/kernel/hpet.c b/arch/x86/kernel/hpet.c
index a1f0e4a5c47e..f112af7aa62e 100644
--- a/arch/x86/kernel/hpet.c
+++ b/arch/x86/kernel/hpet.c
@@ -54,7 +54,7 @@ struct hpet_dev {
char name[10];
};
-inline struct hpet_dev *EVT_TO_HPET_DEV(struct clock_event_device *evtdev)
+static inline struct hpet_dev *EVT_TO_HPET_DEV(struct clock_event_device *evtdev)
{
return container_of(evtdev, struct hpet_dev, evt);
}
@@ -773,7 +773,6 @@ static struct clocksource clocksource_hpet = {
.mask = HPET_MASK,
.flags = CLOCK_SOURCE_IS_CONTINUOUS,
.resume = hpet_resume_counter,
- .archdata = { .vclock_mode = VCLOCK_HPET },
};
static int hpet_clocksource_register(void)
diff --git a/arch/x86/kernel/jump_label.c b/arch/x86/kernel/jump_label.c
index e565e0e4d216..fc25f698d792 100644
--- a/arch/x86/kernel/jump_label.c
+++ b/arch/x86/kernel/jump_label.c
@@ -13,6 +13,7 @@
#include <linux/cpu.h>
#include <asm/kprobes.h>
#include <asm/alternative.h>
+#include <asm/text-patching.h>
#ifdef HAVE_JUMP_LABEL
diff --git a/arch/x86/kernel/kexec-bzimage64.c b/arch/x86/kernel/kexec-bzimage64.c
index 2af478e3fd4e..f2356bda2b05 100644
--- a/arch/x86/kernel/kexec-bzimage64.c
+++ b/arch/x86/kernel/kexec-bzimage64.c
@@ -19,8 +19,7 @@
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/efi.h>
-#include <linux/verify_pefile.h>
-#include <keys/system_keyring.h>
+#include <linux/verification.h>
#include <asm/bootparam.h>
#include <asm/setup.h>
@@ -529,18 +528,9 @@ static int bzImage64_cleanup(void *loader_data)
#ifdef CONFIG_KEXEC_BZIMAGE_VERIFY_SIG
static int bzImage64_verify_sig(const char *kernel, unsigned long kernel_len)
{
- bool trusted;
- int ret;
-
- ret = verify_pefile_signature(kernel, kernel_len,
- system_trusted_keyring,
- VERIFYING_KEXEC_PE_SIGNATURE,
- &trusted);
- if (ret < 0)
- return ret;
- if (!trusted)
- return -EKEYREJECTED;
- return 0;
+ return verify_pefile_signature(kernel, kernel_len,
+ NULL,
+ VERIFYING_KEXEC_PE_SIGNATURE);
}
#endif
diff --git a/arch/x86/kernel/kgdb.c b/arch/x86/kernel/kgdb.c
index 2da6ee9ae69b..04cde527d728 100644
--- a/arch/x86/kernel/kgdb.c
+++ b/arch/x86/kernel/kgdb.c
@@ -45,6 +45,7 @@
#include <linux/uaccess.h>
#include <linux/memory.h>
+#include <asm/text-patching.h>
#include <asm/debugreg.h>
#include <asm/apicdef.h>
#include <asm/apic.h>
diff --git a/arch/x86/kernel/kprobes/core.c b/arch/x86/kernel/kprobes/core.c
index ae703acb85c1..38cf7a741250 100644
--- a/arch/x86/kernel/kprobes/core.c
+++ b/arch/x86/kernel/kprobes/core.c
@@ -51,6 +51,7 @@
#include <linux/ftrace.h>
#include <linux/frame.h>
+#include <asm/text-patching.h>
#include <asm/cacheflush.h>
#include <asm/desc.h>
#include <asm/pgtable.h>
diff --git a/arch/x86/kernel/kprobes/opt.c b/arch/x86/kernel/kprobes/opt.c
index 7b3b9d15c47a..4425f593f0ec 100644
--- a/arch/x86/kernel/kprobes/opt.c
+++ b/arch/x86/kernel/kprobes/opt.c
@@ -29,6 +29,7 @@
#include <linux/kallsyms.h>
#include <linux/ftrace.h>
+#include <asm/text-patching.h>
#include <asm/cacheflush.h>
#include <asm/desc.h>
#include <asm/pgtable.h>
diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
index 807950860fb7..eea2a6f72b31 100644
--- a/arch/x86/kernel/kvm.c
+++ b/arch/x86/kernel/kvm.c
@@ -285,14 +285,6 @@ static void __init paravirt_ops_setup(void)
{
pv_info.name = "KVM";
- /*
- * KVM isn't paravirt in the sense of paravirt_enabled. A KVM
- * guest kernel works like a bare metal kernel with additional
- * features, and paravirt_enabled is about features that are
- * missing.
- */
- pv_info.paravirt_enabled = 0;
-
if (kvm_para_has_feature(KVM_FEATURE_NOP_IO_DELAY))
pv_cpu_ops.io_delay = kvm_io_delay;
@@ -522,7 +514,7 @@ static noinline uint32_t __kvm_cpuid_base(void)
if (boot_cpu_data.cpuid_level < 0)
return 0; /* So we don't blow up on old processors */
- if (cpu_has_hypervisor)
+ if (boot_cpu_has(X86_FEATURE_HYPERVISOR))
return hypervisor_cpuid_base("KVMKVMKVM\0\0\0", 0);
return 0;
diff --git a/arch/x86/kernel/livepatch.c b/arch/x86/kernel/livepatch.c
deleted file mode 100644
index 92fc1a51f994..000000000000
--- a/arch/x86/kernel/livepatch.c
+++ /dev/null
@@ -1,70 +0,0 @@
-/*
- * livepatch.c - x86-specific Kernel Live Patching Core
- *
- * Copyright (C) 2014 Seth Jennings <sjenning@redhat.com>
- * Copyright (C) 2014 SUSE
- *
- * This program is free software; you can redistribute it and/or
- * modify it under the terms of the GNU General Public License
- * as published by the Free Software Foundation; either version 2
- * of the License, or (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, see <http://www.gnu.org/licenses/>.
- */
-
-#include <linux/module.h>
-#include <linux/uaccess.h>
-#include <asm/elf.h>
-#include <asm/livepatch.h>
-
-/**
- * klp_write_module_reloc() - write a relocation in a module
- * @mod: module in which the section to be modified is found
- * @type: ELF relocation type (see asm/elf.h)
- * @loc: address that the relocation should be written to
- * @value: relocation value (sym address + addend)
- *
- * This function writes a relocation to the specified location for
- * a particular module.
- */
-int klp_write_module_reloc(struct module *mod, unsigned long type,
- unsigned long loc, unsigned long value)
-{
- size_t size = 4;
- unsigned long val;
- unsigned long core = (unsigned long)mod->core_layout.base;
- unsigned long core_size = mod->core_layout.size;
-
- switch (type) {
- case R_X86_64_NONE:
- return 0;
- case R_X86_64_64:
- val = value;
- size = 8;
- break;
- case R_X86_64_32:
- val = (u32)value;
- break;
- case R_X86_64_32S:
- val = (s32)value;
- break;
- case R_X86_64_PC32:
- val = (u32)(value - loc);
- break;
- default:
- /* unsupported relocation type */
- return -EINVAL;
- }
-
- if (loc < core || loc >= core + core_size)
- /* loc does not point to any symbol inside the module */
- return -EINVAL;
-
- return probe_kernel_write((void *)loc, &val, size);
-}
diff --git a/arch/x86/kernel/machine_kexec_64.c b/arch/x86/kernel/machine_kexec_64.c
index ba7fbba9831b..5a294e48b185 100644
--- a/arch/x86/kernel/machine_kexec_64.c
+++ b/arch/x86/kernel/machine_kexec_64.c
@@ -538,3 +538,48 @@ overflow:
return -ENOEXEC;
}
#endif /* CONFIG_KEXEC_FILE */
+
+static int
+kexec_mark_range(unsigned long start, unsigned long end, bool protect)
+{
+ struct page *page;
+ unsigned int nr_pages;
+
+ /*
+ * For physical range: [start, end]. We must skip the unassigned
+ * crashk resource with zero-valued "end" member.
+ */
+ if (!end || start > end)
+ return 0;
+
+ page = pfn_to_page(start >> PAGE_SHIFT);
+ nr_pages = (end >> PAGE_SHIFT) - (start >> PAGE_SHIFT) + 1;
+ if (protect)
+ return set_pages_ro(page, nr_pages);
+ else
+ return set_pages_rw(page, nr_pages);
+}
+
+static void kexec_mark_crashkres(bool protect)
+{
+ unsigned long control;
+
+ kexec_mark_range(crashk_low_res.start, crashk_low_res.end, protect);
+
+ /* Don't touch the control code page used in crash_kexec().*/
+ control = PFN_PHYS(page_to_pfn(kexec_crash_image->control_code_page));
+ /* Control code page is located in the 2nd page. */
+ kexec_mark_range(crashk_res.start, control + PAGE_SIZE - 1, protect);
+ control += KEXEC_CONTROL_PAGE_SIZE;
+ kexec_mark_range(control, crashk_res.end, protect);
+}
+
+void arch_kexec_protect_crashkres(void)
+{
+ kexec_mark_crashkres(true);
+}
+
+void arch_kexec_unprotect_crashkres(void)
+{
+ kexec_mark_crashkres(false);
+}
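A stand-alone check of the page arithmetic in kexec_mark_range() above: the physical range is inclusive, hence the "+ 1". A PAGE_SHIFT of 12 (4 KiB pages) is an assumption matching the x86 default, and the function name is illustrative only.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12	/* assumption: 4 KiB pages */

static unsigned int pages_in_range(uint64_t start, uint64_t end)
{
	return (end >> PAGE_SHIFT) - (start >> PAGE_SHIFT) + 1;
}

int main(void)
{
	/* e.g. a 1 MiB crash region starting at 0x1000000 */
	printf("%u pages\n", pages_in_range(0x1000000, 0x10fffff));	/* prints 256 */
	return 0;
}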
diff --git a/arch/x86/kernel/mcount_64.S b/arch/x86/kernel/mcount_64.S
index ed48a9f465f8..61924222a9e1 100644
--- a/arch/x86/kernel/mcount_64.S
+++ b/arch/x86/kernel/mcount_64.S
@@ -182,7 +182,8 @@ GLOBAL(ftrace_graph_call)
jmp ftrace_stub
#endif
-GLOBAL(ftrace_stub)
+/* This is weak to keep gas from relaxing the jumps */
+WEAK(ftrace_stub)
retq
END(ftrace_caller)
diff --git a/arch/x86/kernel/module.c b/arch/x86/kernel/module.c
index 005c03e93fc5..477ae806c2fa 100644
--- a/arch/x86/kernel/module.c
+++ b/arch/x86/kernel/module.c
@@ -31,6 +31,7 @@
#include <linux/jump_label.h>
#include <linux/random.h>
+#include <asm/text-patching.h>
#include <asm/page.h>
#include <asm/pgtable.h>
#include <asm/setup.h>
diff --git a/arch/x86/kernel/paravirt.c b/arch/x86/kernel/paravirt.c
index f08ac28b8136..7b3b3f24c3ea 100644
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -294,7 +294,6 @@ enum paravirt_lazy_mode paravirt_get_lazy_mode(void)
struct pv_info pv_info = {
.name = "bare hardware",
- .paravirt_enabled = 0,
.kernel_rpl = 0,
.shared_kernel_pmd = 1, /* Only used when CONFIG_X86_PAE is set */
@@ -339,8 +338,10 @@ __visible struct pv_cpu_ops pv_cpu_ops = {
.write_cr8 = native_write_cr8,
#endif
.wbinvd = native_wbinvd,
- .read_msr = native_read_msr_safe,
- .write_msr = native_write_msr_safe,
+ .read_msr = native_read_msr,
+ .write_msr = native_write_msr,
+ .read_msr_safe = native_read_msr_safe,
+ .write_msr_safe = native_write_msr_safe,
.read_pmc = native_read_pmc,
.load_tr_desc = native_load_tr_desc,
.set_ldt = native_set_ldt,
diff --git a/arch/x86/kernel/pci-iommu_table.c b/arch/x86/kernel/pci-iommu_table.c
index 35ccf75696eb..f712dfdf1357 100644
--- a/arch/x86/kernel/pci-iommu_table.c
+++ b/arch/x86/kernel/pci-iommu_table.c
@@ -72,7 +72,7 @@ void __init check_iommu_entries(struct iommu_table_entry *start,
}
}
#else
-inline void check_iommu_entries(struct iommu_table_entry *start,
+void __init check_iommu_entries(struct iommu_table_entry *start,
struct iommu_table_entry *finish)
{
}
diff --git a/arch/x86/kernel/platform-quirks.c b/arch/x86/kernel/platform-quirks.c
new file mode 100644
index 000000000000..b2f8a33b36ff
--- /dev/null
+++ b/arch/x86/kernel/platform-quirks.c
@@ -0,0 +1,35 @@
+#include <linux/kernel.h>
+#include <linux/init.h>
+
+#include <asm/setup.h>
+#include <asm/bios_ebda.h>
+
+void __init x86_early_init_platform_quirks(void)
+{
+ x86_platform.legacy.rtc = 1;
+ x86_platform.legacy.ebda_search = 0;
+ x86_platform.legacy.devices.pnpbios = 1;
+
+ switch (boot_params.hdr.hardware_subarch) {
+ case X86_SUBARCH_PC:
+ x86_platform.legacy.ebda_search = 1;
+ break;
+ case X86_SUBARCH_XEN:
+ case X86_SUBARCH_LGUEST:
+ case X86_SUBARCH_INTEL_MID:
+ case X86_SUBARCH_CE4100:
+ x86_platform.legacy.devices.pnpbios = 0;
+ x86_platform.legacy.rtc = 0;
+ break;
+ }
+
+ if (x86_platform.set_legacy_features)
+ x86_platform.set_legacy_features();
+}
+
+#if defined(CONFIG_PNPBIOS)
+bool __init arch_pnpbios_disabled(void)
+{
+ return x86_platform.legacy.devices.pnpbios == 0;
+}
+#endif
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 2915d54e9dd5..96becbbb52e0 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -97,10 +97,9 @@ int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src)
/*
* Free current thread data structures etc..
*/
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
{
- struct task_struct *me = current;
- struct thread_struct *t = &me->thread;
+ struct thread_struct *t = &tsk->thread;
unsigned long *bp = t->io_bitmap_ptr;
struct fpu *fpu = &t->fpu;
diff --git a/arch/x86/kernel/process_64.c b/arch/x86/kernel/process_64.c
index 6cbab31ac23a..6e789ca1f841 100644
--- a/arch/x86/kernel/process_64.c
+++ b/arch/x86/kernel/process_64.c
@@ -136,25 +136,6 @@ void release_thread(struct task_struct *dead_task)
}
}
-static inline void set_32bit_tls(struct task_struct *t, int tls, u32 addr)
-{
- struct user_desc ud = {
- .base_addr = addr,
- .limit = 0xfffff,
- .seg_32bit = 1,
- .limit_in_pages = 1,
- .useable = 1,
- };
- struct desc_struct *desc = t->thread.tls_array;
- desc += tls;
- fill_ldt(desc, &ud);
-}
-
-static inline u32 read_32bit_tls(struct task_struct *t, int tls)
-{
- return get_desc_base(&t->thread.tls_array[tls]);
-}
-
int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
unsigned long arg, struct task_struct *p, unsigned long tls)
{
@@ -169,9 +150,9 @@ int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
p->thread.io_bitmap_ptr = NULL;
savesegment(gs, p->thread.gsindex);
- p->thread.gs = p->thread.gsindex ? 0 : me->thread.gs;
+ p->thread.gsbase = p->thread.gsindex ? 0 : me->thread.gsbase;
savesegment(fs, p->thread.fsindex);
- p->thread.fs = p->thread.fsindex ? 0 : me->thread.fs;
+ p->thread.fsbase = p->thread.fsindex ? 0 : me->thread.fsbase;
savesegment(es, p->thread.es);
savesegment(ds, p->thread.ds);
memset(p->thread.ptrace_bps, 0, sizeof(p->thread.ptrace_bps));
@@ -210,7 +191,7 @@ int copy_thread_tls(unsigned long clone_flags, unsigned long sp,
*/
if (clone_flags & CLONE_SETTLS) {
#ifdef CONFIG_IA32_EMULATION
- if (is_ia32_task())
+ if (in_ia32_syscall())
err = do_set_thread_area(p, -1,
(struct user_desc __user *)tls, 0);
else
@@ -282,7 +263,7 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
struct fpu *next_fpu = &next->fpu;
int cpu = smp_processor_id();
struct tss_struct *tss = &per_cpu(cpu_tss, cpu);
- unsigned fsindex, gsindex;
+ unsigned prev_fsindex, prev_gsindex;
fpu_switch_t fpu_switch;
fpu_switch = switch_fpu_prepare(prev_fpu, next_fpu, cpu);
@@ -292,8 +273,8 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
*
* (e.g. xen_load_tls())
*/
- savesegment(fs, fsindex);
- savesegment(gs, gsindex);
+ savesegment(fs, prev_fsindex);
+ savesegment(gs, prev_gsindex);
/*
* Load TLS before restoring any segments so that segment loads
@@ -336,66 +317,104 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
* Switch FS and GS.
*
* These are even more complicated than DS and ES: they have
- * 64-bit bases are that controlled by arch_prctl. Those bases
- * only differ from the values in the GDT or LDT if the selector
- * is 0.
- *
- * Loading the segment register resets the hidden base part of
- * the register to 0 or the value from the GDT / LDT. If the
- * next base address zero, writing 0 to the segment register is
- * much faster than using wrmsr to explicitly zero the base.
- *
- * The thread_struct.fs and thread_struct.gs values are 0
- * if the fs and gs bases respectively are not overridden
- * from the values implied by fsindex and gsindex. They
- * are nonzero, and store the nonzero base addresses, if
- * the bases are overridden.
- *
- * (fs != 0 && fsindex != 0) || (gs != 0 && gsindex != 0) should
- * be impossible.
- *
- * Therefore we need to reload the segment registers if either
- * the old or new selector is nonzero, and we need to override
- * the base address if next thread expects it to be overridden.
+ * 64-bit bases are that controlled by arch_prctl. The bases
+ * don't necessarily match the selectors, as user code can do
+ * any number of things to cause them to be inconsistent.
*
- * This code is unnecessarily slow in the case where the old and
- * new indexes are zero and the new base is nonzero -- it will
- * unnecessarily write 0 to the selector before writing the new
- * base address.
+ * We don't promise to preserve the bases if the selectors are
+ * nonzero. We also don't promise to preserve the base if the
+ * selector is zero and the base doesn't match whatever was
+ * most recently passed to ARCH_SET_FS/GS. (If/when the
+ * FSGSBASE instructions are enabled, we'll need to offer
+ * stronger guarantees.)
*
- * Note: This all depends on arch_prctl being the only way that
- * user code can override the segment base. Once wrfsbase and
- * wrgsbase are enabled, most of this code will need to change.
+ * As an invariant,
+ * (fsbase != 0 && fsindex != 0) || (gsbase != 0 && gsindex != 0) is
+ * impossible.
*/
- if (unlikely(fsindex | next->fsindex | prev->fs)) {
+ if (next->fsindex) {
+ /* Loading a nonzero value into FS sets the index and base. */
loadsegment(fs, next->fsindex);
-
- /*
- * If user code wrote a nonzero value to FS, then it also
- * cleared the overridden base address.
- *
- * XXX: if user code wrote 0 to FS and cleared the base
- * address itself, we won't notice and we'll incorrectly
- * restore the prior base address next time we reschdule
- * the process.
- */
- if (fsindex)
- prev->fs = 0;
+ } else {
+ if (next->fsbase) {
+ /* Next index is zero but next base is nonzero. */
+ if (prev_fsindex)
+ loadsegment(fs, 0);
+ wrmsrl(MSR_FS_BASE, next->fsbase);
+ } else {
+ /* Next base and index are both zero. */
+ if (static_cpu_has_bug(X86_BUG_NULL_SEG)) {
+ /*
+ * We don't know the previous base and can't
+ * find out without RDMSR. Forcibly clear it.
+ */
+ loadsegment(fs, __USER_DS);
+ loadsegment(fs, 0);
+ } else {
+ /*
+ * If the previous index is zero and ARCH_SET_FS
+ * didn't change the base, then the base is
+ * also zero and we don't need to do anything.
+ */
+ if (prev->fsbase || prev_fsindex)
+ loadsegment(fs, 0);
+ }
+ }
}
- if (next->fs)
- wrmsrl(MSR_FS_BASE, next->fs);
- prev->fsindex = fsindex;
+ /*
+ * Save the old state and preserve the invariant.
+ * NB: if prev_fsindex == 0, then we can't reliably learn the base
+ * without RDMSR because Intel user code can zero it without telling
+ * us and AMD user code can program any 32-bit value without telling
+ * us.
+ */
+ if (prev_fsindex)
+ prev->fsbase = 0;
+ prev->fsindex = prev_fsindex;
- if (unlikely(gsindex | next->gsindex | prev->gs)) {
+ if (next->gsindex) {
+ /* Loading a nonzero value into GS sets the index and base. */
load_gs_index(next->gsindex);
-
- /* This works (and fails) the same way as fsindex above. */
- if (gsindex)
- prev->gs = 0;
+ } else {
+ if (next->gsbase) {
+ /* Next index is zero but next base is nonzero. */
+ if (prev_gsindex)
+ load_gs_index(0);
+ wrmsrl(MSR_KERNEL_GS_BASE, next->gsbase);
+ } else {
+ /* Next base and index are both zero. */
+ if (static_cpu_has_bug(X86_BUG_NULL_SEG)) {
+ /*
+ * We don't know the previous base and can't
+ * find out without RDMSR. Forcibly clear it.
+ *
+ * This contains a pointless SWAPGS pair.
+ * Fixing it would involve an explicit check
+ * for Xen or a new pvop.
+ */
+ load_gs_index(__USER_DS);
+ load_gs_index(0);
+ } else {
+ /*
+ * If the previous index is zero and ARCH_SET_GS
+ * didn't change the base, then the base is
+ * also zero and we don't need to do anything.
+ */
+ if (prev->gsbase || prev_gsindex)
+ load_gs_index(0);
+ }
+ }
}
- if (next->gs)
- wrmsrl(MSR_KERNEL_GS_BASE, next->gs);
- prev->gsindex = gsindex;
+ /*
+ * Save the old state and preserve the invariant.
+ * NB: if prev_gsindex == 0, then we can't reliably learn the base
+ * without RDMSR because Intel user code can zero it without telling
+ * us and AMD user code can program any 32-bit value without telling
+ * us.
+ */
+ if (prev_gsindex)
+ prev->gsbase = 0;
+ prev->gsindex = prev_gsindex;
switch_fpu_finish(next_fpu, fpu_switch);
@@ -513,81 +532,47 @@ long do_arch_prctl(struct task_struct *task, int code, unsigned long addr)
switch (code) {
case ARCH_SET_GS:
- if (addr >= TASK_SIZE_OF(task))
+ if (addr >= TASK_SIZE_MAX)
return -EPERM;
cpu = get_cpu();
- /* handle small bases via the GDT because that's faster to
- switch. */
- if (addr <= 0xffffffff) {
- set_32bit_tls(task, GS_TLS, addr);
- if (doit) {
- load_TLS(&task->thread, cpu);
- load_gs_index(GS_TLS_SEL);
- }
- task->thread.gsindex = GS_TLS_SEL;
- task->thread.gs = 0;
- } else {
- task->thread.gsindex = 0;
- task->thread.gs = addr;
- if (doit) {
- load_gs_index(0);
- ret = wrmsrl_safe(MSR_KERNEL_GS_BASE, addr);
- }
+ task->thread.gsindex = 0;
+ task->thread.gsbase = addr;
+ if (doit) {
+ load_gs_index(0);
+ ret = wrmsrl_safe(MSR_KERNEL_GS_BASE, addr);
}
put_cpu();
break;
case ARCH_SET_FS:
/* Not strictly needed for fs, but do it for symmetry
with gs */
- if (addr >= TASK_SIZE_OF(task))
+ if (addr >= TASK_SIZE_MAX)
return -EPERM;
cpu = get_cpu();
- /* handle small bases via the GDT because that's faster to
- switch. */
- if (addr <= 0xffffffff) {
- set_32bit_tls(task, FS_TLS, addr);
- if (doit) {
- load_TLS(&task->thread, cpu);
- loadsegment(fs, FS_TLS_SEL);
- }
- task->thread.fsindex = FS_TLS_SEL;
- task->thread.fs = 0;
- } else {
- task->thread.fsindex = 0;
- task->thread.fs = addr;
- if (doit) {
- /* set the selector to 0 to not confuse
- __switch_to */
- loadsegment(fs, 0);
- ret = wrmsrl_safe(MSR_FS_BASE, addr);
- }
+ task->thread.fsindex = 0;
+ task->thread.fsbase = addr;
+ if (doit) {
+ /* set the selector to 0 to not confuse __switch_to */
+ loadsegment(fs, 0);
+ ret = wrmsrl_safe(MSR_FS_BASE, addr);
}
put_cpu();
break;
case ARCH_GET_FS: {
unsigned long base;
- if (task->thread.fsindex == FS_TLS_SEL)
- base = read_32bit_tls(task, FS_TLS);
- else if (doit)
+ if (doit)
rdmsrl(MSR_FS_BASE, base);
else
- base = task->thread.fs;
+ base = task->thread.fsbase;
ret = put_user(base, (unsigned long __user *)addr);
break;
}
case ARCH_GET_GS: {
unsigned long base;
- unsigned gsindex;
- if (task->thread.gsindex == GS_TLS_SEL)
- base = read_32bit_tls(task, GS_TLS);
- else if (doit) {
- savesegment(gs, gsindex);
- if (gsindex)
- rdmsrl(MSR_KERNEL_GS_BASE, base);
- else
- base = task->thread.gs;
- } else
- base = task->thread.gs;
+ if (doit)
+ rdmsrl(MSR_KERNEL_GS_BASE, base);
+ else
+ base = task->thread.gsbase;
ret = put_user(base, (unsigned long __user *)addr);
break;
}
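A hypothetical, user-space model of the FS-side decision tree __switch_to() now follows (the GS side mirrors it). It is deliberately simplified — it drops the conditional selector clear before the MSR write — and none of these names are kernel API.

#include <stdbool.h>
#include <stdio.h>

enum fs_action { LOAD_SELECTOR, WRITE_BASE_MSR, CLEAR_SELECTOR, DO_NOTHING };

static enum fs_action next_fs_action(unsigned short next_index,
				     unsigned long next_base,
				     unsigned short prev_index,
				     unsigned long prev_base,
				     bool null_seg_bug)
{
	if (next_index)
		return LOAD_SELECTOR;	/* loading a nonzero selector sets index and base */
	if (next_base)
		return WRITE_BASE_MSR;	/* index 0, nonzero base: write MSR_FS_BASE       */
	if (null_seg_bug)
		return CLEAR_SELECTOR;	/* old base unknown without RDMSR, force it to 0  */
	return (prev_base || prev_index) ? CLEAR_SELECTOR : DO_NOTHING;
}

int main(void)
{
	/* prev and next both ran with index 0 and base 0: nothing to do. */
	printf("%d\n", next_fs_action(0, 0, 0, 0, false) == DO_NOTHING);
	return 0;
}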
diff --git a/arch/x86/kernel/ptrace.c b/arch/x86/kernel/ptrace.c
index 32e9d9cbb884..600edd225e81 100644
--- a/arch/x86/kernel/ptrace.c
+++ b/arch/x86/kernel/ptrace.c
@@ -303,29 +303,11 @@ static int set_segment_reg(struct task_struct *task,
switch (offset) {
case offsetof(struct user_regs_struct,fs):
- /*
- * If this is setting fs as for normal 64-bit use but
- * setting fs_base has implicitly changed it, leave it.
- */
- if ((value == FS_TLS_SEL && task->thread.fsindex == 0 &&
- task->thread.fs != 0) ||
- (value == 0 && task->thread.fsindex == FS_TLS_SEL &&
- task->thread.fs == 0))
- break;
task->thread.fsindex = value;
if (task == current)
loadsegment(fs, task->thread.fsindex);
break;
case offsetof(struct user_regs_struct,gs):
- /*
- * If this is setting gs as for normal 64-bit use but
- * setting gs_base has implicitly changed it, leave it.
- */
- if ((value == GS_TLS_SEL && task->thread.gsindex == 0 &&
- task->thread.gs != 0) ||
- (value == 0 && task->thread.gsindex == GS_TLS_SEL &&
- task->thread.gs == 0))
- break;
task->thread.gsindex = value;
if (task == current)
load_gs_index(task->thread.gsindex);
@@ -410,23 +392,23 @@ static int putreg(struct task_struct *child,
#ifdef CONFIG_X86_64
case offsetof(struct user_regs_struct,fs_base):
- if (value >= TASK_SIZE_OF(child))
+ if (value >= TASK_SIZE_MAX)
return -EIO;
/*
* When changing the segment base, use do_arch_prctl
* to set either thread.fs or thread.fsindex and the
* corresponding GDT slot.
*/
- if (child->thread.fs != value)
+ if (child->thread.fsbase != value)
return do_arch_prctl(child, ARCH_SET_FS, value);
return 0;
case offsetof(struct user_regs_struct,gs_base):
/*
* Exactly the same here as the %fs handling above.
*/
- if (value >= TASK_SIZE_OF(child))
+ if (value >= TASK_SIZE_MAX)
return -EIO;
- if (child->thread.gs != value)
+ if (child->thread.gsbase != value)
return do_arch_prctl(child, ARCH_SET_GS, value);
return 0;
#endif
@@ -453,31 +435,17 @@ static unsigned long getreg(struct task_struct *task, unsigned long offset)
#ifdef CONFIG_X86_64
case offsetof(struct user_regs_struct, fs_base): {
/*
- * do_arch_prctl may have used a GDT slot instead of
- * the MSR. To userland, it appears the same either
- * way, except the %fs segment selector might not be 0.
+ * XXX: This will not behave as expected if called on
+ * current or if fsindex != 0.
*/
- unsigned int seg = task->thread.fsindex;
- if (task->thread.fs != 0)
- return task->thread.fs;
- if (task == current)
- asm("movl %%fs,%0" : "=r" (seg));
- if (seg != FS_TLS_SEL)
- return 0;
- return get_desc_base(&task->thread.tls_array[FS_TLS]);
+ return task->thread.fsbase;
}
case offsetof(struct user_regs_struct, gs_base): {
/*
- * Exactly the same here as the %fs handling above.
+ * XXX: This will not behave as expected if called on
+ * current or if fsindex != 0.
*/
- unsigned int seg = task->thread.gsindex;
- if (task->thread.gs != 0)
- return task->thread.gs;
- if (task == current)
- asm("movl %%gs,%0" : "=r" (seg));
- if (seg != GS_TLS_SEL)
- return 0;
- return get_desc_base(&task->thread.tls_array[GS_TLS]);
+ return task->thread.gsbase;
}
#endif
}
@@ -1266,7 +1234,7 @@ long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
compat_ulong_t caddr, compat_ulong_t cdata)
{
#ifdef CONFIG_X86_X32_ABI
- if (!is_ia32_task())
+ if (!in_ia32_syscall())
return x32_arch_ptrace(child, request, caddr, cdata);
#endif
#ifdef CONFIG_IA32_EMULATION
diff --git a/arch/x86/kernel/reboot.c b/arch/x86/kernel/reboot.c
index ab0adc0fa5db..a9b31eb815f2 100644
--- a/arch/x86/kernel/reboot.c
+++ b/arch/x86/kernel/reboot.c
@@ -535,6 +535,15 @@ static void native_machine_emergency_restart(void)
mode = reboot_mode == REBOOT_WARM ? 0x1234 : 0;
*((unsigned short *)__va(0x472)) = mode;
+ /*
+ * If an EFI capsule has been registered with the firmware then
+ * override the reboot= parameter.
+ */
+ if (efi_capsule_pending(NULL)) {
+ pr_info("EFI capsule is pending, forcing EFI reboot.\n");
+ reboot_type = BOOT_EFI;
+ }
+
for (;;) {
/* Could also try the reset bit in the Hammer NB */
switch (reboot_type) {
diff --git a/arch/x86/kernel/rtc.c b/arch/x86/kernel/rtc.c
index 4af8d063fb36..eceaa082ec3f 100644
--- a/arch/x86/kernel/rtc.c
+++ b/arch/x86/kernel/rtc.c
@@ -14,6 +14,7 @@
#include <asm/time.h>
#include <asm/intel-mid.h>
#include <asm/rtc.h>
+#include <asm/setup.h>
#ifdef CONFIG_X86_32
/*
@@ -185,22 +186,7 @@ static __init int add_rtc_cmos(void)
}
}
#endif
- if (of_have_populated_dt())
- return 0;
-
- /* Intel MID platforms don't have ioport rtc */
- if (intel_mid_identify_cpu())
- return -ENODEV;
-
-#ifdef CONFIG_ACPI
- if (acpi_gbl_FADT.boot_flags & ACPI_FADT_NO_CMOS_RTC) {
- /* This warning can likely go away again in a year or two. */
- pr_info("ACPI: not registering RTC platform device\n");
- return -ENODEV;
- }
-#endif
-
- if (paravirt_enabled() && !paravirt_has(RTC))
+ if (!x86_platform.legacy.rtc)
return -ENODEV;
platform_device_register(&rtc_device);
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 2367ae07eb76..c4e7b3991b60 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -398,6 +398,11 @@ static void __init reserve_initrd(void)
memblock_free(ramdisk_image, ramdisk_end - ramdisk_image);
}
+
+static void __init early_initrd_acpi_init(void)
+{
+ early_acpi_table_init((void *)initrd_start, initrd_end - initrd_start);
+}
#else
static void __init early_reserve_initrd(void)
{
@@ -405,6 +410,9 @@ static void __init early_reserve_initrd(void)
static void __init reserve_initrd(void)
{
}
+static void __init early_initrd_acpi_init(void)
+{
+}
#endif /* CONFIG_BLK_DEV_INITRD */
static void __init parse_setup_data(void)
@@ -1138,9 +1146,7 @@ void __init setup_arch(char **cmdline_p)
reserve_initrd();
-#if defined(CONFIG_ACPI) && defined(CONFIG_BLK_DEV_INITRD)
- acpi_initrd_override((void *)initrd_start, initrd_end - initrd_start);
-#endif
+ early_initrd_acpi_init();
vsmp_init();
diff --git a/arch/x86/kernel/signal.c b/arch/x86/kernel/signal.c
index 548ddf7d6fd2..22cc2f9f8aec 100644
--- a/arch/x86/kernel/signal.c
+++ b/arch/x86/kernel/signal.c
@@ -248,18 +248,17 @@ get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, size_t frame_size,
if (config_enabled(CONFIG_X86_64))
sp -= 128;
- if (!onsigstack) {
- /* This is the X/Open sanctioned signal stack switching. */
- if (ka->sa.sa_flags & SA_ONSTACK) {
- if (current->sas_ss_size)
- sp = current->sas_ss_sp + current->sas_ss_size;
- } else if (config_enabled(CONFIG_X86_32) &&
- (regs->ss & 0xffff) != __USER_DS &&
- !(ka->sa.sa_flags & SA_RESTORER) &&
- ka->sa.sa_restorer) {
- /* This is the legacy signal stack switching. */
- sp = (unsigned long) ka->sa.sa_restorer;
- }
+ /* This is the X/Open sanctioned signal stack switching. */
+ if (ka->sa.sa_flags & SA_ONSTACK) {
+ if (sas_ss_flags(sp) == 0)
+ sp = current->sas_ss_sp + current->sas_ss_size;
+ } else if (config_enabled(CONFIG_X86_32) &&
+ !onsigstack &&
+ (regs->ss & 0xffff) != __USER_DS &&
+ !(ka->sa.sa_flags & SA_RESTORER) &&
+ ka->sa.sa_restorer) {
+ /* This is the legacy signal stack switching. */
+ sp = (unsigned long) ka->sa.sa_restorer;
}
if (fpu->fpstate_active) {
@@ -391,7 +390,7 @@ static int __setup_rt_frame(int sig, struct ksignal *ksig,
put_user_ex(&frame->uc, &frame->puc);
/* Create the ucontext. */
- if (cpu_has_xsave)
+ if (boot_cpu_has(X86_FEATURE_XSAVE))
put_user_ex(UC_FP_XSTATE, &frame->uc.uc_flags);
else
put_user_ex(0, &frame->uc.uc_flags);
@@ -442,7 +441,7 @@ static unsigned long frame_uc_flags(struct pt_regs *regs)
{
unsigned long flags;
- if (cpu_has_xsave)
+ if (boot_cpu_has(X86_FEATURE_XSAVE))
flags = UC_FP_XSTATE | UC_SIGCONTEXT_SS;
else
flags = UC_SIGCONTEXT_SS;
@@ -762,7 +761,7 @@ handle_signal(struct ksignal *ksig, struct pt_regs *regs)
static inline unsigned long get_nr_restart_syscall(const struct pt_regs *regs)
{
#ifdef CONFIG_X86_64
- if (is_ia32_task())
+ if (in_ia32_syscall())
return __NR_ia32_restart_syscall;
#endif
#ifdef CONFIG_X86_X32_ABI
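The get_sigframe() rework above switches to the alternate stack whenever SA_ONSTACK is set and the task is not already on it (sas_ss_flags(sp) == 0), instead of only when no frame has been started there. Roughly the same selection logic, reduced to a user-space sketch with invented helper names:

#include <signal.h>
#include <stdio.h>

/* nonzero if sp already lies within the configured alternate stack */
static int on_alt_stack(unsigned long sp, unsigned long ss_sp, unsigned long ss_size)
{
	return sp > ss_sp && sp - ss_sp <= ss_size;
}

static unsigned long pick_frame_sp(unsigned long sp, int sa_flags,
				   unsigned long ss_sp, unsigned long ss_size)
{
	if ((sa_flags & SA_ONSTACK) && ss_size &&
	    !on_alt_stack(sp, ss_sp, ss_size))
		sp = ss_sp + ss_size;	/* start at the top of the alt stack */
	return sp;
}

int main(void)
{
	printf("%#lx\n", pick_frame_sp(0x7fff0000UL, SA_ONSTACK, 0x600000UL, 0x10000UL));
	return 0;
}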
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index a2065d3b3b39..fafe8b923cac 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -332,6 +332,11 @@ static void __init smp_init_package_map(void)
* primary cores.
*/
ncpus = boot_cpu_data.x86_max_cores;
+ if (!ncpus) {
+ pr_warn("x86_max_cores == zero !?!?");
+ ncpus = 1;
+ }
+
__max_logical_packages = DIV_ROUND_UP(total_cpus, ncpus);
/*
@@ -1231,7 +1236,7 @@ static int __init smp_sanity_check(unsigned max_cpus)
* If we couldn't find a local APIC, then get out of here now!
*/
if (APIC_INTEGRATED(apic_version[boot_cpu_physical_apicid]) &&
- !cpu_has_apic) {
+ !boot_cpu_has(X86_FEATURE_APIC)) {
if (!disable_apic) {
pr_err("BIOS bug, local APIC #%d not detected!...\n",
boot_cpu_physical_apicid);
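smp_init_package_map() above now refuses to divide by a zero x86_max_cores reported by broken CPUID/firmware before computing __max_logical_packages with DIV_ROUND_UP(). The guard in isolation, as a stand-alone sketch:

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	unsigned int total_cpus = 8;
	unsigned int ncpus = 0;		/* pretend CPUID reported zero cores */

	if (!ncpus) {			/* avoid the divide-by-zero below */
		fprintf(stderr, "x86_max_cores == 0, assuming 1\n");
		ncpus = 1;
	}
	printf("max logical packages: %u\n", DIV_ROUND_UP(total_cpus, ncpus));
	return 0;
}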
diff --git a/arch/x86/kernel/sysfb_efi.c b/arch/x86/kernel/sysfb_efi.c
index 5da924bbf0a0..623965e86b65 100644
--- a/arch/x86/kernel/sysfb_efi.c
+++ b/arch/x86/kernel/sysfb_efi.c
@@ -68,6 +68,21 @@ struct efifb_dmi_info efifb_dmi_list[] = {
[M_UNKNOWN] = { NULL, 0, 0, 0, 0, OVERRIDE_NONE }
};
+void efifb_setup_from_dmi(struct screen_info *si, const char *opt)
+{
+ int i;
+
+ for (i = 0; i < M_UNKNOWN; i++) {
+ if (efifb_dmi_list[i].base != 0 &&
+ !strcmp(opt, efifb_dmi_list[i].optname)) {
+ si->lfb_base = efifb_dmi_list[i].base;
+ si->lfb_linelength = efifb_dmi_list[i].stride;
+ si->lfb_width = efifb_dmi_list[i].width;
+ si->lfb_height = efifb_dmi_list[i].height;
+ }
+ }
+}
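/*
 * Illustrative sketch, not part of the patch: efifb_setup_from_dmi() above
 * is a plain linear scan keyed on the quirk's option name, copying the
 * matched geometry into screen_info.  The struct layout and the sample
 * values below are invented for the example.
 */
#include <stdio.h>
#include <string.h>

struct fb_quirk {
	const char *optname;
	unsigned long base;
	unsigned int stride, width, height;
};

static const struct fb_quirk quirks[] = {
	{ "sample-laptop-a", 0x80010000UL, 2048, 1280,  800 },
	{ "sample-laptop-b", 0xc0010000UL, 7168, 1680, 1050 },
};

int main(void)
{
	const char *opt = "sample-laptop-b";
	unsigned int i;

	for (i = 0; i < sizeof(quirks) / sizeof(quirks[0]); i++) {
		if (quirks[i].base && !strcmp(opt, quirks[i].optname))
			printf("%s: base=%#lx %ux%u, stride %u bytes\n", opt,
			       quirks[i].base, quirks[i].width,
			       quirks[i].height, quirks[i].stride);
	}
	return 0;
}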
+
#define choose_value(dmivalue, fwvalue, field, flags) ({ \
typeof(fwvalue) _ret_ = fwvalue; \
if ((flags) & (field)) \
diff --git a/arch/x86/kernel/tboot.c b/arch/x86/kernel/tboot.c
index e72a07f20b05..9b0185fbe3eb 100644
--- a/arch/x86/kernel/tboot.c
+++ b/arch/x86/kernel/tboot.c
@@ -74,12 +74,6 @@ void __init tboot_probe(void)
return;
}
- /* only a natively booted kernel should be using TXT */
- if (paravirt_enabled()) {
- pr_warning("non-0 tboot_addr but pv_ops is enabled\n");
- return;
- }
-
/* Map and check for tboot UUID. */
set_fixmap(FIX_TBOOT_BASE, boot_params.tboot_addr);
tboot = (struct tboot *)fix_to_virt(FIX_TBOOT_BASE);
diff --git a/arch/x86/kernel/tce_64.c b/arch/x86/kernel/tce_64.c
index ab40954e113e..f386bad0984e 100644
--- a/arch/x86/kernel/tce_64.c
+++ b/arch/x86/kernel/tce_64.c
@@ -40,7 +40,7 @@
static inline void flush_tce(void* tceaddr)
{
/* a single tce can't cross a cache line */
- if (cpu_has_clflush)
+ if (boot_cpu_has(X86_FEATURE_CLFLUSH))
clflush(tceaddr);
else
wbinvd();
diff --git a/arch/x86/kernel/tls.c b/arch/x86/kernel/tls.c
index 7fc5e843f247..9692a5e9fdab 100644
--- a/arch/x86/kernel/tls.c
+++ b/arch/x86/kernel/tls.c
@@ -114,6 +114,7 @@ int do_set_thread_area(struct task_struct *p, int idx,
int can_allocate)
{
struct user_desc info;
+ unsigned short __maybe_unused sel, modified_sel;
if (copy_from_user(&info, u_info, sizeof(info)))
return -EFAULT;
@@ -141,6 +142,47 @@ int do_set_thread_area(struct task_struct *p, int idx,
set_tls_desc(p, idx, &info, 1);
+ /*
+ * If DS, ES, FS, or GS points to the modified segment, forcibly
+ * refresh it. Only needed on x86_64 because x86_32 reloads them
+ * on return to user mode.
+ */
+ modified_sel = (idx << 3) | 3;
+
+ if (p == current) {
+#ifdef CONFIG_X86_64
+ savesegment(ds, sel);
+ if (sel == modified_sel)
+ loadsegment(ds, sel);
+
+ savesegment(es, sel);
+ if (sel == modified_sel)
+ loadsegment(es, sel);
+
+ savesegment(fs, sel);
+ if (sel == modified_sel)
+ loadsegment(fs, sel);
+
+ savesegment(gs, sel);
+ if (sel == modified_sel)
+ load_gs_index(sel);
+#endif
+
+#ifdef CONFIG_X86_32_LAZY_GS
+ savesegment(gs, sel);
+ if (sel == modified_sel)
+ loadsegment(gs, sel);
+#endif
+ } else {
+#ifdef CONFIG_X86_64
+ if (p->thread.fsindex == modified_sel)
+ p->thread.fsbase = info.base_addr;
+
+ if (p->thread.gsindex == modified_sel)
+ p->thread.gsbase = info.base_addr;
+#endif
+ }
+
return 0;
}
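do_set_thread_area() above compares each live segment register against the selector that maps to the modified GDT slot, (idx << 3) | 3: the descriptor index in bits 3 and up, TI (bit 2) clear for the GDT, and RPL 3 in the low two bits. A quick stand-alone check of that encoding:

#include <stdio.h>

int main(void)
{
	unsigned short idx = 12;		/* e.g. a TLS GDT slot */
	unsigned short sel = (idx << 3) | 3;	/* GDT (TI=0), RPL=3 */

	printf("selector %#x -> index %u, TI %u, RPL %u\n",
	       sel, sel >> 3, (sel >> 2) & 1, sel & 3);
	return 0;
}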
diff --git a/arch/x86/kernel/traps.c b/arch/x86/kernel/traps.c
index 06cbe25861f1..d1590486204a 100644
--- a/arch/x86/kernel/traps.c
+++ b/arch/x86/kernel/traps.c
@@ -51,6 +51,7 @@
#include <asm/processor.h>
#include <asm/debugreg.h>
#include <linux/atomic.h>
+#include <asm/text-patching.h>
#include <asm/ftrace.h>
#include <asm/traps.h>
#include <asm/desc.h>
diff --git a/arch/x86/kernel/tsc.c b/arch/x86/kernel/tsc.c
index c9c4c7ce3eb2..38ba6de56ede 100644
--- a/arch/x86/kernel/tsc.c
+++ b/arch/x86/kernel/tsc.c
@@ -36,7 +36,7 @@ static int __read_mostly tsc_unstable;
/* native_sched_clock() is called before tsc_init(), so
we must start with the TSC soft disabled to prevent
- erroneous rdtsc usage on !cpu_has_tsc processors */
+ erroneous rdtsc usage on !boot_cpu_has(X86_FEATURE_TSC) processors */
static int __read_mostly tsc_disabled = -1;
static DEFINE_STATIC_KEY_FALSE(__use_tsc);
@@ -834,15 +834,15 @@ int recalibrate_cpu_khz(void)
#ifndef CONFIG_SMP
unsigned long cpu_khz_old = cpu_khz;
- if (cpu_has_tsc) {
- tsc_khz = x86_platform.calibrate_tsc();
- cpu_khz = tsc_khz;
- cpu_data(0).loops_per_jiffy =
- cpufreq_scale(cpu_data(0).loops_per_jiffy,
- cpu_khz_old, cpu_khz);
- return 0;
- } else
+ if (!boot_cpu_has(X86_FEATURE_TSC))
return -ENODEV;
+
+ tsc_khz = x86_platform.calibrate_tsc();
+ cpu_khz = tsc_khz;
+ cpu_data(0).loops_per_jiffy = cpufreq_scale(cpu_data(0).loops_per_jiffy,
+ cpu_khz_old, cpu_khz);
+
+ return 0;
#else
return -ENODEV;
#endif
@@ -922,9 +922,6 @@ static int time_cpufreq_notifier(struct notifier_block *nb, unsigned long val,
struct cpufreq_freqs *freq = data;
unsigned long *lpj;
- if (cpu_has(&cpu_data(freq->cpu), X86_FEATURE_CONSTANT_TSC))
- return 0;
-
lpj = &boot_cpu_data.loops_per_jiffy;
#ifdef CONFIG_SMP
if (!(freq->flags & CPUFREQ_CONST_LOOPS))
@@ -954,9 +951,9 @@ static struct notifier_block time_cpufreq_notifier_block = {
.notifier_call = time_cpufreq_notifier
};
-static int __init cpufreq_tsc(void)
+static int __init cpufreq_register_tsc_scaling(void)
{
- if (!cpu_has_tsc)
+ if (!boot_cpu_has(X86_FEATURE_TSC))
return 0;
if (boot_cpu_has(X86_FEATURE_CONSTANT_TSC))
return 0;
@@ -965,7 +962,7 @@ static int __init cpufreq_tsc(void)
return 0;
}
-core_initcall(cpufreq_tsc);
+core_initcall(cpufreq_register_tsc_scaling);
#endif /* CONFIG_CPU_FREQ */
@@ -1081,7 +1078,7 @@ static void __init check_system_tsc_reliable(void)
*/
int unsynchronized_tsc(void)
{
- if (!cpu_has_tsc || tsc_unstable)
+ if (!boot_cpu_has(X86_FEATURE_TSC) || tsc_unstable)
return 1;
#ifdef CONFIG_SMP
@@ -1205,7 +1202,7 @@ out:
static int __init init_tsc_clocksource(void)
{
- if (!cpu_has_tsc || tsc_disabled > 0 || !tsc_khz)
+ if (!boot_cpu_has(X86_FEATURE_TSC) || tsc_disabled > 0 || !tsc_khz)
return 0;
if (tsc_clocksource_reliable)
@@ -1242,7 +1239,7 @@ void __init tsc_init(void)
u64 lpj;
int cpu;
- if (!cpu_has_tsc) {
+ if (!boot_cpu_has(X86_FEATURE_TSC)) {
setup_clear_cpu_cap(X86_FEATURE_TSC_DEADLINE_TIMER);
return;
}
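recalibrate_cpu_khz() above rescales loops_per_jiffy in proportion to the freshly calibrated frequency; the cpufreq_scale() call effectively computes new = old * new_khz / old_khz with a wide intermediate. A minimal stand-alone version of that scaling (function name and sample numbers are invented):

#include <stdio.h>

static unsigned long scale_lpj(unsigned long old_lpj,
			       unsigned int old_khz, unsigned int new_khz)
{
	/* widen before multiplying so large loops_per_jiffy values don't overflow */
	return (unsigned long)(((unsigned long long)old_lpj * new_khz) / old_khz);
}

int main(void)
{
	unsigned long lpj = 4000000UL;		/* hypothetical loops_per_jiffy */

	printf("rescaled lpj: %lu\n", scale_lpj(lpj, 2000000, 2400000));
	return 0;
}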
diff --git a/arch/x86/kernel/tsc_msr.c b/arch/x86/kernel/tsc_msr.c
index 6aa0f4d9eea6..9911a0620f9a 100644
--- a/arch/x86/kernel/tsc_msr.c
+++ b/arch/x86/kernel/tsc_msr.c
@@ -23,6 +23,7 @@
#include <asm/param.h>
/* CPU reference clock frequency: in KHz */
+#define FREQ_80 80000
#define FREQ_83 83200
#define FREQ_100 99840
#define FREQ_133 133200
@@ -56,6 +57,8 @@ static struct freq_desc freq_desc_tables[] = {
{ 6, 0x37, 1, { FREQ_83, FREQ_100, FREQ_133, FREQ_166, 0, 0, 0, 0 } },
/* ANN */
{ 6, 0x5a, 1, { FREQ_83, FREQ_100, FREQ_133, FREQ_100, 0, 0, 0, 0 } },
+ /* AIRMONT */
+ { 6, 0x4c, 1, { FREQ_83, FREQ_100, FREQ_133, FREQ_166, FREQ_80, 0, 0, 0 } },
};
static int match_cpu(u8 family, u8 model)
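The tsc_msr.c hunk above adds an Airmont row; each freq_desc row is selected by CPU family/model (match_cpu()) and then indexed by the MSR-reported ratio to pick the reference clock. A compact stand-alone version of that two-step lookup; the table carries only the Airmont-style row, and the 166400 value for FREQ_166 is assumed since it is not shown in the hunk:

#include <stdio.h>

struct freq_desc {
	unsigned char family, model;
	unsigned int freqs[8];	/* reference clock in kHz, indexed by the MSR ratio */
};

static const struct freq_desc tables[] = {
	{ 6, 0x4c, { 83200, 99840, 133200, 166400, 80000, 0, 0, 0 } },
};

static unsigned int ref_clock_khz(unsigned char fam, unsigned char mod,
				  unsigned int idx)
{
	unsigned int i;

	for (i = 0; i < sizeof(tables) / sizeof(tables[0]); i++)
		if (tables[i].family == fam && tables[i].model == mod && idx < 8)
			return tables[i].freqs[idx];
	return 0;
}

int main(void)
{
	printf("family 6 model 0x4c, ratio 4 -> %u kHz\n", ref_clock_khz(6, 0x4c, 4));
	return 0;
}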
diff --git a/arch/x86/kernel/uprobes.c b/arch/x86/kernel/uprobes.c
index bf4db6eaec8f..6c1ff31d99ff 100644
--- a/arch/x86/kernel/uprobes.c
+++ b/arch/x86/kernel/uprobes.c
@@ -516,7 +516,7 @@ struct uprobe_xol_ops {
static inline int sizeof_long(void)
{
- return is_ia32_task() ? 4 : 8;
+ return in_ia32_syscall() ? 4 : 8;
}
static int default_pre_xol_op(struct arch_uprobe *auprobe, struct pt_regs *regs)
@@ -578,7 +578,7 @@ static void default_abort_op(struct arch_uprobe *auprobe, struct pt_regs *regs)
riprel_post_xol(auprobe, regs);
}
-static struct uprobe_xol_ops default_xol_ops = {
+static const struct uprobe_xol_ops default_xol_ops = {
.pre_xol = default_pre_xol_op,
.post_xol = default_post_xol_op,
.abort = default_abort_op,
@@ -695,7 +695,7 @@ static void branch_clear_offset(struct arch_uprobe *auprobe, struct insn *insn)
0, insn->immediate.nbytes);
}
-static struct uprobe_xol_ops branch_xol_ops = {
+static const struct uprobe_xol_ops branch_xol_ops = {
.emulate = branch_emulate_op,
.post_xol = branch_post_xol_op,
};
diff --git a/arch/x86/kernel/vmlinux.lds.S b/arch/x86/kernel/vmlinux.lds.S
index 4c941f88d405..9297a002d8e5 100644
--- a/arch/x86/kernel/vmlinux.lds.S
+++ b/arch/x86/kernel/vmlinux.lds.S
@@ -334,7 +334,7 @@ SECTIONS
__brk_limit = .;
}
- . = ALIGN(PAGE_SIZE);
+ . = ALIGN(PAGE_SIZE); /* keep VO_INIT_SIZE page aligned */
_end = .;
STABS_DEBUG
diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
index bbbaa802d13e..769af907f824 100644
--- a/arch/x86/kvm/cpuid.c
+++ b/arch/x86/kvm/cpuid.c
@@ -75,7 +75,7 @@ int kvm_update_cpuid(struct kvm_vcpu *vcpu)
return 0;
/* Update OSXSAVE bit */
- if (cpu_has_xsave && best->function == 0x1) {
+ if (boot_cpu_has(X86_FEATURE_XSAVE) && best->function == 0x1) {
best->ecx &= ~F(OSXSAVE);
if (kvm_read_cr4_bits(vcpu, X86_CR4_OSXSAVE))
best->ecx |= F(OSXSAVE);
diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
index 0f6294376fbd..a2f24af3c999 100644
--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -5110,13 +5110,17 @@ static void fetch_possible_mmx_operand(struct x86_emulate_ctxt *ctxt,
static int fastop(struct x86_emulate_ctxt *ctxt, void (*fop)(struct fastop *))
{
+ register void *__sp asm(_ASM_SP);
ulong flags = (ctxt->eflags & EFLAGS_MASK) | X86_EFLAGS_IF;
+
if (!(ctxt->d & ByteOp))
fop += __ffs(ctxt->dst.bytes) * FASTOP_SIZE;
+
asm("push %[flags]; popf; call *%[fastop]; pushf; pop %[flags]\n"
: "+a"(ctxt->dst.val), "+d"(ctxt->src.val), [flags]"+D"(flags),
- [fastop]"+S"(fop)
+ [fastop]"+S"(fop), "+r"(__sp)
: "c"(ctxt->src2.val));
+
ctxt->eflags = (ctxt->eflags & ~EFLAGS_MASK) | (flags & EFLAGS_MASK);
if (!fop) /* exception is returned in fop variable */
return emulate_de(ctxt);
diff --git a/arch/x86/kvm/ioapic.c b/arch/x86/kvm/ioapic.c
index 9db47090ead0..5f42d038fcb4 100644
--- a/arch/x86/kvm/ioapic.c
+++ b/arch/x86/kvm/ioapic.c
@@ -443,7 +443,7 @@ static void __kvm_ioapic_update_eoi(struct kvm_vcpu *vcpu,
spin_lock(&ioapic->lock);
if (trigger_mode != IOAPIC_LEVEL_TRIG ||
- kvm_apic_get_reg(apic, APIC_SPIV) & APIC_SPIV_DIRECTED_EOI)
+ kvm_lapic_get_reg(apic, APIC_SPIV) & APIC_SPIV_DIRECTED_EOI)
continue;
ASSERT(ent->fields.trig_mode == IOAPIC_LEVEL_TRIG);
diff --git a/arch/x86/kvm/iommu.c b/arch/x86/kvm/iommu.c
index a22a488b4622..3069281904d3 100644
--- a/arch/x86/kvm/iommu.c
+++ b/arch/x86/kvm/iommu.c
@@ -254,7 +254,7 @@ int kvm_iommu_map_guest(struct kvm *kvm)
!iommu_capable(&pci_bus_type, IOMMU_CAP_INTR_REMAP)) {
printk(KERN_WARNING "%s: No interrupt remapping support,"
" disallowing device assignment."
- " Re-enble with \"allow_unsafe_assigned_interrupts=1\""
+ " Re-enable with \"allow_unsafe_assigned_interrupts=1\""
" module option.\n", __func__);
iommu_domain_free(kvm->arch.iommu_domain);
kvm->arch.iommu_domain = NULL;
diff --git a/arch/x86/kvm/irq_comm.c b/arch/x86/kvm/irq_comm.c
index 54ead79e444b..dfb4c6476877 100644
--- a/arch/x86/kvm/irq_comm.c
+++ b/arch/x86/kvm/irq_comm.c
@@ -382,9 +382,6 @@ void kvm_scan_ioapic_routes(struct kvm_vcpu *vcpu,
u32 i, nr_ioapic_pins;
int idx;
- /* kvm->irq_routing must be read after clearing
- * KVM_SCAN_IOAPIC. */
- smp_mb();
idx = srcu_read_lock(&kvm->irq_srcu);
table = srcu_dereference(kvm->irq_routing, &kvm->irq_srcu);
nr_ioapic_pins = min_t(u32, table->nr_rt_entries,
diff --git a/arch/x86/kvm/lapic.c b/arch/x86/kvm/lapic.c
index 1a2da0e5a373..bbb5b283ff63 100644
--- a/arch/x86/kvm/lapic.c
+++ b/arch/x86/kvm/lapic.c
@@ -59,9 +59,8 @@
/* #define apic_debug(fmt,arg...) printk(KERN_WARNING fmt,##arg) */
#define apic_debug(fmt, arg...)
-#define APIC_LVT_NUM 6
/* 14 is the version for Xeon and Pentium 8.4.8*/
-#define APIC_VERSION (0x14UL | ((APIC_LVT_NUM - 1) << 16))
+#define APIC_VERSION (0x14UL | ((KVM_APIC_LVT_NUM - 1) << 16))
#define LAPIC_MMIO_LENGTH (1 << 12)
/* followed define is not in apicdef.h */
#define APIC_SHORT_MASK 0xc0000
@@ -73,14 +72,6 @@
#define APIC_BROADCAST 0xFF
#define X2APIC_BROADCAST 0xFFFFFFFFul
-#define VEC_POS(v) ((v) & (32 - 1))
-#define REG_POS(v) (((v) >> 5) << 4)
-
-static inline void apic_set_reg(struct kvm_lapic *apic, int reg_off, u32 val)
-{
- *((u32 *) (apic->regs + reg_off)) = val;
-}
-
static inline int apic_test_vector(int vec, void *bitmap)
{
return test_bit(VEC_POS(vec), (bitmap) + REG_POS(vec));
@@ -94,11 +85,6 @@ bool kvm_apic_pending_eoi(struct kvm_vcpu *vcpu, int vector)
apic_test_vector(vector, apic->regs + APIC_IRR);
}
-static inline void apic_set_vector(int vec, void *bitmap)
-{
- set_bit(VEC_POS(vec), (bitmap) + REG_POS(vec));
-}
-
static inline void apic_clear_vector(int vec, void *bitmap)
{
clear_bit(VEC_POS(vec), (bitmap) + REG_POS(vec));
@@ -173,7 +159,7 @@ static void recalculate_apic_map(struct kvm *kvm)
continue;
aid = kvm_apic_id(apic);
- ldr = kvm_apic_get_reg(apic, APIC_LDR);
+ ldr = kvm_lapic_get_reg(apic, APIC_LDR);
if (aid < ARRAY_SIZE(new->phys_map))
new->phys_map[aid] = apic;
@@ -182,7 +168,7 @@ static void recalculate_apic_map(struct kvm *kvm)
new->mode |= KVM_APIC_MODE_X2APIC;
} else if (ldr) {
ldr = GET_APIC_LOGICAL_ID(ldr);
- if (kvm_apic_get_reg(apic, APIC_DFR) == APIC_DFR_FLAT)
+ if (kvm_lapic_get_reg(apic, APIC_DFR) == APIC_DFR_FLAT)
new->mode |= KVM_APIC_MODE_XAPIC_FLAT;
else
new->mode |= KVM_APIC_MODE_XAPIC_CLUSTER;
@@ -212,7 +198,7 @@ static inline void apic_set_spiv(struct kvm_lapic *apic, u32 val)
{
bool enabled = val & APIC_SPIV_APIC_ENABLED;
- apic_set_reg(apic, APIC_SPIV, val);
+ kvm_lapic_set_reg(apic, APIC_SPIV, val);
if (enabled != apic->sw_enabled) {
apic->sw_enabled = enabled;
@@ -226,13 +212,13 @@ static inline void apic_set_spiv(struct kvm_lapic *apic, u32 val)
static inline void kvm_apic_set_id(struct kvm_lapic *apic, u8 id)
{
- apic_set_reg(apic, APIC_ID, id << 24);
+ kvm_lapic_set_reg(apic, APIC_ID, id << 24);
recalculate_apic_map(apic->vcpu->kvm);
}
static inline void kvm_apic_set_ldr(struct kvm_lapic *apic, u32 id)
{
- apic_set_reg(apic, APIC_LDR, id);
+ kvm_lapic_set_reg(apic, APIC_LDR, id);
recalculate_apic_map(apic->vcpu->kvm);
}
@@ -240,19 +226,19 @@ static inline void kvm_apic_set_x2apic_id(struct kvm_lapic *apic, u8 id)
{
u32 ldr = ((id >> 4) << 16) | (1 << (id & 0xf));
- apic_set_reg(apic, APIC_ID, id << 24);
- apic_set_reg(apic, APIC_LDR, ldr);
+ kvm_lapic_set_reg(apic, APIC_ID, id << 24);
+ kvm_lapic_set_reg(apic, APIC_LDR, ldr);
recalculate_apic_map(apic->vcpu->kvm);
}
static inline int apic_lvt_enabled(struct kvm_lapic *apic, int lvt_type)
{
- return !(kvm_apic_get_reg(apic, lvt_type) & APIC_LVT_MASKED);
+ return !(kvm_lapic_get_reg(apic, lvt_type) & APIC_LVT_MASKED);
}
static inline int apic_lvt_vector(struct kvm_lapic *apic, int lvt_type)
{
- return kvm_apic_get_reg(apic, lvt_type) & APIC_VECTOR_MASK;
+ return kvm_lapic_get_reg(apic, lvt_type) & APIC_VECTOR_MASK;
}
static inline int apic_lvtt_oneshot(struct kvm_lapic *apic)
@@ -287,10 +273,10 @@ void kvm_apic_set_version(struct kvm_vcpu *vcpu)
feat = kvm_find_cpuid_entry(apic->vcpu, 0x1, 0);
if (feat && (feat->ecx & (1 << (X86_FEATURE_X2APIC & 31))))
v |= APIC_LVR_DIRECTED_EOI;
- apic_set_reg(apic, APIC_LVR, v);
+ kvm_lapic_set_reg(apic, APIC_LVR, v);
}
-static const unsigned int apic_lvt_mask[APIC_LVT_NUM] = {
+static const unsigned int apic_lvt_mask[KVM_APIC_LVT_NUM] = {
LVT_MASK , /* part LVTT mask, timer mode mask added at runtime */
LVT_MASK | APIC_MODE_MASK, /* LVTTHMR */
LVT_MASK | APIC_MODE_MASK, /* LVTPC */
@@ -349,16 +335,6 @@ void kvm_apic_update_irr(struct kvm_vcpu *vcpu, u32 *pir)
}
EXPORT_SYMBOL_GPL(kvm_apic_update_irr);
-static inline void apic_set_irr(int vec, struct kvm_lapic *apic)
-{
- apic_set_vector(vec, apic->regs + APIC_IRR);
- /*
- * irr_pending must be true if any interrupt is pending; set it after
- * APIC_IRR to avoid race with apic_clear_irr
- */
- apic->irr_pending = true;
-}
-
static inline int apic_search_irr(struct kvm_lapic *apic)
{
return find_highest_vector(apic->regs + APIC_IRR);
@@ -416,7 +392,7 @@ static inline void apic_set_isr(int vec, struct kvm_lapic *apic)
* just set SVI.
*/
if (unlikely(vcpu->arch.apicv_active))
- kvm_x86_ops->hwapic_isr_update(vcpu->kvm, vec);
+ kvm_x86_ops->hwapic_isr_update(vcpu, vec);
else {
++apic->isr_count;
BUG_ON(apic->isr_count > MAX_APIC_VECTOR);
@@ -464,7 +440,7 @@ static inline void apic_clear_isr(int vec, struct kvm_lapic *apic)
* and must be left alone.
*/
if (unlikely(vcpu->arch.apicv_active))
- kvm_x86_ops->hwapic_isr_update(vcpu->kvm,
+ kvm_x86_ops->hwapic_isr_update(vcpu,
apic_find_highest_isr(apic));
else {
--apic->isr_count;
@@ -549,8 +525,8 @@ static void apic_update_ppr(struct kvm_lapic *apic)
u32 tpr, isrv, ppr, old_ppr;
int isr;
- old_ppr = kvm_apic_get_reg(apic, APIC_PROCPRI);
- tpr = kvm_apic_get_reg(apic, APIC_TASKPRI);
+ old_ppr = kvm_lapic_get_reg(apic, APIC_PROCPRI);
+ tpr = kvm_lapic_get_reg(apic, APIC_TASKPRI);
isr = apic_find_highest_isr(apic);
isrv = (isr != -1) ? isr : 0;
@@ -563,7 +539,7 @@ static void apic_update_ppr(struct kvm_lapic *apic)
apic, ppr, isr, isrv);
if (old_ppr != ppr) {
- apic_set_reg(apic, APIC_PROCPRI, ppr);
+ kvm_lapic_set_reg(apic, APIC_PROCPRI, ppr);
if (ppr < old_ppr)
kvm_make_request(KVM_REQ_EVENT, apic->vcpu);
}
@@ -571,7 +547,7 @@ static void apic_update_ppr(struct kvm_lapic *apic)
static void apic_set_tpr(struct kvm_lapic *apic, u32 tpr)
{
- apic_set_reg(apic, APIC_TASKPRI, tpr);
+ kvm_lapic_set_reg(apic, APIC_TASKPRI, tpr);
apic_update_ppr(apic);
}
@@ -601,7 +577,7 @@ static bool kvm_apic_match_logical_addr(struct kvm_lapic *apic, u32 mda)
if (kvm_apic_broadcast(apic, mda))
return true;
- logical_id = kvm_apic_get_reg(apic, APIC_LDR);
+ logical_id = kvm_lapic_get_reg(apic, APIC_LDR);
if (apic_x2apic_mode(apic))
return ((logical_id >> 16) == (mda >> 16))
@@ -610,7 +586,7 @@ static bool kvm_apic_match_logical_addr(struct kvm_lapic *apic, u32 mda)
logical_id = GET_APIC_LOGICAL_ID(logical_id);
mda = GET_APIC_DEST_FIELD(mda);
- switch (kvm_apic_get_reg(apic, APIC_DFR)) {
+ switch (kvm_lapic_get_reg(apic, APIC_DFR)) {
case APIC_DFR_FLAT:
return (logical_id & mda) != 0;
case APIC_DFR_CLUSTER:
@@ -618,7 +594,7 @@ static bool kvm_apic_match_logical_addr(struct kvm_lapic *apic, u32 mda)
&& (logical_id & mda & 0xf) != 0;
default:
apic_debug("Bad DFR vcpu %d: %08x\n",
- apic->vcpu->vcpu_id, kvm_apic_get_reg(apic, APIC_DFR));
+ apic->vcpu->vcpu_id, kvm_lapic_get_reg(apic, APIC_DFR));
return false;
}
}
@@ -668,6 +644,7 @@ bool kvm_apic_match_dest(struct kvm_vcpu *vcpu, struct kvm_lapic *source,
return false;
}
}
+EXPORT_SYMBOL_GPL(kvm_apic_match_dest);
int kvm_vector_to_index(u32 vector, u32 dest_vcpus,
const unsigned long *bitmap, u32 bitmap_size)
@@ -921,7 +898,7 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
if (apic_test_vector(vector, apic->regs + APIC_TMR) != !!trig_mode) {
if (trig_mode)
- apic_set_vector(vector, apic->regs + APIC_TMR);
+ kvm_lapic_set_vector(vector, apic->regs + APIC_TMR);
else
apic_clear_vector(vector, apic->regs + APIC_TMR);
}
@@ -929,7 +906,7 @@ static int __apic_accept_irq(struct kvm_lapic *apic, int delivery_mode,
if (vcpu->arch.apicv_active)
kvm_x86_ops->deliver_posted_interrupt(vcpu, vector);
else {
- apic_set_irr(vector, apic);
+ kvm_lapic_set_irr(vector, apic);
kvm_make_request(KVM_REQ_EVENT, vcpu);
kvm_vcpu_kick(vcpu);
@@ -1073,8 +1050,8 @@ EXPORT_SYMBOL_GPL(kvm_apic_set_eoi_accelerated);
static void apic_send_ipi(struct kvm_lapic *apic)
{
- u32 icr_low = kvm_apic_get_reg(apic, APIC_ICR);
- u32 icr_high = kvm_apic_get_reg(apic, APIC_ICR2);
+ u32 icr_low = kvm_lapic_get_reg(apic, APIC_ICR);
+ u32 icr_high = kvm_lapic_get_reg(apic, APIC_ICR2);
struct kvm_lapic_irq irq;
irq.vector = icr_low & APIC_VECTOR_MASK;
@@ -1111,7 +1088,7 @@ static u32 apic_get_tmcct(struct kvm_lapic *apic)
ASSERT(apic != NULL);
/* if initial count is 0, current count should also be 0 */
- if (kvm_apic_get_reg(apic, APIC_TMICT) == 0 ||
+ if (kvm_lapic_get_reg(apic, APIC_TMICT) == 0 ||
apic->lapic_timer.period == 0)
return 0;
@@ -1168,13 +1145,13 @@ static u32 __apic_read(struct kvm_lapic *apic, unsigned int offset)
break;
case APIC_PROCPRI:
apic_update_ppr(apic);
- val = kvm_apic_get_reg(apic, offset);
+ val = kvm_lapic_get_reg(apic, offset);
break;
case APIC_TASKPRI:
report_tpr_access(apic, false);
/* fall thru */
default:
- val = kvm_apic_get_reg(apic, offset);
+ val = kvm_lapic_get_reg(apic, offset);
break;
}
@@ -1186,7 +1163,7 @@ static inline struct kvm_lapic *to_lapic(struct kvm_io_device *dev)
return container_of(dev, struct kvm_lapic, dev);
}
-static int apic_reg_read(struct kvm_lapic *apic, u32 offset, int len,
+int kvm_lapic_reg_read(struct kvm_lapic *apic, u32 offset, int len,
void *data)
{
unsigned char alignment = offset & 0xf;
@@ -1223,6 +1200,7 @@ static int apic_reg_read(struct kvm_lapic *apic, u32 offset, int len,
}
return 0;
}
+EXPORT_SYMBOL_GPL(kvm_lapic_reg_read);
static int apic_mmio_in_range(struct kvm_lapic *apic, gpa_t addr)
{
@@ -1240,7 +1218,7 @@ static int apic_mmio_read(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
if (!apic_mmio_in_range(apic, address))
return -EOPNOTSUPP;
- apic_reg_read(apic, offset, len, data);
+ kvm_lapic_reg_read(apic, offset, len, data);
return 0;
}
@@ -1249,7 +1227,7 @@ static void update_divide_count(struct kvm_lapic *apic)
{
u32 tmp1, tmp2, tdcr;
- tdcr = kvm_apic_get_reg(apic, APIC_TDCR);
+ tdcr = kvm_lapic_get_reg(apic, APIC_TDCR);
tmp1 = tdcr & 0xf;
tmp2 = ((tmp1 & 0x3) | ((tmp1 & 0x8) >> 1)) + 1;
apic->divide_count = 0x1 << (tmp2 & 0x7);
@@ -1260,7 +1238,7 @@ static void update_divide_count(struct kvm_lapic *apic)
static void apic_update_lvtt(struct kvm_lapic *apic)
{
- u32 timer_mode = kvm_apic_get_reg(apic, APIC_LVTT) &
+ u32 timer_mode = kvm_lapic_get_reg(apic, APIC_LVTT) &
apic->lapic_timer.timer_mode_mask;
if (apic->lapic_timer.timer_mode != timer_mode) {
@@ -1296,7 +1274,7 @@ static void apic_timer_expired(struct kvm_lapic *apic)
static bool lapic_timer_int_injected(struct kvm_vcpu *vcpu)
{
struct kvm_lapic *apic = vcpu->arch.apic;
- u32 reg = kvm_apic_get_reg(apic, APIC_LVTT);
+ u32 reg = kvm_lapic_get_reg(apic, APIC_LVTT);
if (kvm_apic_hw_enabled(apic)) {
int vec = reg & APIC_VECTOR_MASK;
@@ -1344,7 +1322,7 @@ static void start_apic_timer(struct kvm_lapic *apic)
if (apic_lvtt_period(apic) || apic_lvtt_oneshot(apic)) {
/* lapic timer in oneshot or periodic mode */
now = apic->lapic_timer.timer.base->get_time();
- apic->lapic_timer.period = (u64)kvm_apic_get_reg(apic, APIC_TMICT)
+ apic->lapic_timer.period = (u64)kvm_lapic_get_reg(apic, APIC_TMICT)
* APIC_BUS_CYCLE_NS * apic->divide_count;
if (!apic->lapic_timer.period)
@@ -1376,7 +1354,7 @@ static void start_apic_timer(struct kvm_lapic *apic)
"timer initial count 0x%x, period %lldns, "
"expire @ 0x%016" PRIx64 ".\n", __func__,
APIC_BUS_CYCLE_NS, ktime_to_ns(now),
- kvm_apic_get_reg(apic, APIC_TMICT),
+ kvm_lapic_get_reg(apic, APIC_TMICT),
apic->lapic_timer.period,
ktime_to_ns(ktime_add_ns(now,
apic->lapic_timer.period)));
@@ -1425,7 +1403,7 @@ static void apic_manage_nmi_watchdog(struct kvm_lapic *apic, u32 lvt0_val)
}
}
-static int apic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
+int kvm_lapic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
{
int ret = 0;
@@ -1457,7 +1435,7 @@ static int apic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
case APIC_DFR:
if (!apic_x2apic_mode(apic)) {
- apic_set_reg(apic, APIC_DFR, val | 0x0FFFFFFF);
+ kvm_lapic_set_reg(apic, APIC_DFR, val | 0x0FFFFFFF);
recalculate_apic_map(apic->vcpu->kvm);
} else
ret = 1;
@@ -1465,17 +1443,17 @@ static int apic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
case APIC_SPIV: {
u32 mask = 0x3ff;
- if (kvm_apic_get_reg(apic, APIC_LVR) & APIC_LVR_DIRECTED_EOI)
+ if (kvm_lapic_get_reg(apic, APIC_LVR) & APIC_LVR_DIRECTED_EOI)
mask |= APIC_SPIV_DIRECTED_EOI;
apic_set_spiv(apic, val & mask);
if (!(val & APIC_SPIV_APIC_ENABLED)) {
int i;
u32 lvt_val;
- for (i = 0; i < APIC_LVT_NUM; i++) {
- lvt_val = kvm_apic_get_reg(apic,
+ for (i = 0; i < KVM_APIC_LVT_NUM; i++) {
+ lvt_val = kvm_lapic_get_reg(apic,
APIC_LVTT + 0x10 * i);
- apic_set_reg(apic, APIC_LVTT + 0x10 * i,
+ kvm_lapic_set_reg(apic, APIC_LVTT + 0x10 * i,
lvt_val | APIC_LVT_MASKED);
}
apic_update_lvtt(apic);
@@ -1486,14 +1464,14 @@ static int apic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
}
case APIC_ICR:
/* No delay here, so we always clear the pending bit */
- apic_set_reg(apic, APIC_ICR, val & ~(1 << 12));
+ kvm_lapic_set_reg(apic, APIC_ICR, val & ~(1 << 12));
apic_send_ipi(apic);
break;
case APIC_ICR2:
if (!apic_x2apic_mode(apic))
val &= 0xff000000;
- apic_set_reg(apic, APIC_ICR2, val);
+ kvm_lapic_set_reg(apic, APIC_ICR2, val);
break;
case APIC_LVT0:
@@ -1507,7 +1485,7 @@ static int apic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
val |= APIC_LVT_MASKED;
val &= apic_lvt_mask[(reg - APIC_LVTT) >> 4];
- apic_set_reg(apic, reg, val);
+ kvm_lapic_set_reg(apic, reg, val);
break;
@@ -1515,7 +1493,7 @@ static int apic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
if (!kvm_apic_sw_enabled(apic))
val |= APIC_LVT_MASKED;
val &= (apic_lvt_mask[0] | apic->lapic_timer.timer_mode_mask);
- apic_set_reg(apic, APIC_LVTT, val);
+ kvm_lapic_set_reg(apic, APIC_LVTT, val);
apic_update_lvtt(apic);
break;
@@ -1524,14 +1502,14 @@ static int apic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
break;
hrtimer_cancel(&apic->lapic_timer.timer);
- apic_set_reg(apic, APIC_TMICT, val);
+ kvm_lapic_set_reg(apic, APIC_TMICT, val);
start_apic_timer(apic);
break;
case APIC_TDCR:
if (val & 4)
apic_debug("KVM_WRITE:TDCR %x\n", val);
- apic_set_reg(apic, APIC_TDCR, val);
+ kvm_lapic_set_reg(apic, APIC_TDCR, val);
update_divide_count(apic);
break;
@@ -1544,7 +1522,7 @@ static int apic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
case APIC_SELF_IPI:
if (apic_x2apic_mode(apic)) {
- apic_reg_write(apic, APIC_ICR, 0x40000 | (val & 0xff));
+ kvm_lapic_reg_write(apic, APIC_ICR, 0x40000 | (val & 0xff));
} else
ret = 1;
break;
@@ -1556,6 +1534,7 @@ static int apic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val)
apic_debug("Local APIC Write to read-only register %x\n", reg);
return ret;
}
+EXPORT_SYMBOL_GPL(kvm_lapic_reg_write);
static int apic_mmio_write(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
gpa_t address, int len, const void *data)
@@ -1585,14 +1564,14 @@ static int apic_mmio_write(struct kvm_vcpu *vcpu, struct kvm_io_device *this,
apic_debug("%s: offset 0x%x with length 0x%x, and value is "
"0x%x\n", __func__, offset, len, val);
- apic_reg_write(apic, offset & 0xff0, val);
+ kvm_lapic_reg_write(apic, offset & 0xff0, val);
return 0;
}
void kvm_lapic_set_eoi(struct kvm_vcpu *vcpu)
{
- apic_reg_write(vcpu->arch.apic, APIC_EOI, 0);
+ kvm_lapic_reg_write(vcpu->arch.apic, APIC_EOI, 0);
}
EXPORT_SYMBOL_GPL(kvm_lapic_set_eoi);
@@ -1604,10 +1583,10 @@ void kvm_apic_write_nodecode(struct kvm_vcpu *vcpu, u32 offset)
/* hw has done the conditional check and inst decode */
offset &= 0xff0;
- apic_reg_read(vcpu->arch.apic, offset, 4, &val);
+ kvm_lapic_reg_read(vcpu->arch.apic, offset, 4, &val);
/* TODO: optimize to just emulate side effect w/o one more write */
- apic_reg_write(vcpu->arch.apic, offset, val);
+ kvm_lapic_reg_write(vcpu->arch.apic, offset, val);
}
EXPORT_SYMBOL_GPL(kvm_apic_write_nodecode);
@@ -1667,14 +1646,14 @@ void kvm_lapic_set_tpr(struct kvm_vcpu *vcpu, unsigned long cr8)
struct kvm_lapic *apic = vcpu->arch.apic;
apic_set_tpr(apic, ((cr8 & 0x0f) << 4)
- | (kvm_apic_get_reg(apic, APIC_TASKPRI) & 4));
+ | (kvm_lapic_get_reg(apic, APIC_TASKPRI) & 4));
}
u64 kvm_lapic_get_cr8(struct kvm_vcpu *vcpu)
{
u64 tpr;
- tpr = (u64) kvm_apic_get_reg(vcpu->arch.apic, APIC_TASKPRI);
+ tpr = (u64) kvm_lapic_get_reg(vcpu->arch.apic, APIC_TASKPRI);
return (tpr & 0xf0) >> 4;
}
@@ -1740,28 +1719,28 @@ void kvm_lapic_reset(struct kvm_vcpu *vcpu, bool init_event)
kvm_apic_set_id(apic, vcpu->vcpu_id);
kvm_apic_set_version(apic->vcpu);
- for (i = 0; i < APIC_LVT_NUM; i++)
- apic_set_reg(apic, APIC_LVTT + 0x10 * i, APIC_LVT_MASKED);
+ for (i = 0; i < KVM_APIC_LVT_NUM; i++)
+ kvm_lapic_set_reg(apic, APIC_LVTT + 0x10 * i, APIC_LVT_MASKED);
apic_update_lvtt(apic);
if (kvm_check_has_quirk(vcpu->kvm, KVM_X86_QUIRK_LINT0_REENABLED))
- apic_set_reg(apic, APIC_LVT0,
+ kvm_lapic_set_reg(apic, APIC_LVT0,
SET_APIC_DELIVERY_MODE(0, APIC_MODE_EXTINT));
- apic_manage_nmi_watchdog(apic, kvm_apic_get_reg(apic, APIC_LVT0));
+ apic_manage_nmi_watchdog(apic, kvm_lapic_get_reg(apic, APIC_LVT0));
- apic_set_reg(apic, APIC_DFR, 0xffffffffU);
+ kvm_lapic_set_reg(apic, APIC_DFR, 0xffffffffU);
apic_set_spiv(apic, 0xff);
- apic_set_reg(apic, APIC_TASKPRI, 0);
+ kvm_lapic_set_reg(apic, APIC_TASKPRI, 0);
if (!apic_x2apic_mode(apic))
kvm_apic_set_ldr(apic, 0);
- apic_set_reg(apic, APIC_ESR, 0);
- apic_set_reg(apic, APIC_ICR, 0);
- apic_set_reg(apic, APIC_ICR2, 0);
- apic_set_reg(apic, APIC_TDCR, 0);
- apic_set_reg(apic, APIC_TMICT, 0);
+ kvm_lapic_set_reg(apic, APIC_ESR, 0);
+ kvm_lapic_set_reg(apic, APIC_ICR, 0);
+ kvm_lapic_set_reg(apic, APIC_ICR2, 0);
+ kvm_lapic_set_reg(apic, APIC_TDCR, 0);
+ kvm_lapic_set_reg(apic, APIC_TMICT, 0);
for (i = 0; i < 8; i++) {
- apic_set_reg(apic, APIC_IRR + 0x10 * i, 0);
- apic_set_reg(apic, APIC_ISR + 0x10 * i, 0);
- apic_set_reg(apic, APIC_TMR + 0x10 * i, 0);
+ kvm_lapic_set_reg(apic, APIC_IRR + 0x10 * i, 0);
+ kvm_lapic_set_reg(apic, APIC_ISR + 0x10 * i, 0);
+ kvm_lapic_set_reg(apic, APIC_TMR + 0x10 * i, 0);
}
apic->irr_pending = vcpu->arch.apicv_active;
apic->isr_count = vcpu->arch.apicv_active ? 1 : 0;
@@ -1806,7 +1785,7 @@ int apic_has_pending_timer(struct kvm_vcpu *vcpu)
int kvm_apic_local_deliver(struct kvm_lapic *apic, int lvt_type)
{
- u32 reg = kvm_apic_get_reg(apic, lvt_type);
+ u32 reg = kvm_lapic_get_reg(apic, lvt_type);
int vector, mode, trig_mode;
if (kvm_apic_hw_enabled(apic) && !(reg & APIC_LVT_MASKED)) {
@@ -1901,14 +1880,14 @@ int kvm_apic_has_interrupt(struct kvm_vcpu *vcpu)
apic_update_ppr(apic);
highest_irr = apic_find_highest_irr(apic);
if ((highest_irr == -1) ||
- ((highest_irr & 0xF0) <= kvm_apic_get_reg(apic, APIC_PROCPRI)))
+ ((highest_irr & 0xF0) <= kvm_lapic_get_reg(apic, APIC_PROCPRI)))
return -1;
return highest_irr;
}
int kvm_apic_accept_pic_intr(struct kvm_vcpu *vcpu)
{
- u32 lvt0 = kvm_apic_get_reg(vcpu->arch.apic, APIC_LVT0);
+ u32 lvt0 = kvm_lapic_get_reg(vcpu->arch.apic, APIC_LVT0);
int r = 0;
if (!kvm_apic_hw_enabled(vcpu->arch.apic))
@@ -1974,7 +1953,7 @@ void kvm_apic_post_state_restore(struct kvm_vcpu *vcpu,
apic_update_ppr(apic);
hrtimer_cancel(&apic->lapic_timer.timer);
apic_update_lvtt(apic);
- apic_manage_nmi_watchdog(apic, kvm_apic_get_reg(apic, APIC_LVT0));
+ apic_manage_nmi_watchdog(apic, kvm_lapic_get_reg(apic, APIC_LVT0));
update_divide_count(apic);
start_apic_timer(apic);
apic->irr_pending = true;
@@ -1982,9 +1961,11 @@ void kvm_apic_post_state_restore(struct kvm_vcpu *vcpu,
1 : count_vectors(apic->regs + APIC_ISR);
apic->highest_isr_cache = -1;
if (vcpu->arch.apicv_active) {
+ if (kvm_x86_ops->apicv_post_state_restore)
+ kvm_x86_ops->apicv_post_state_restore(vcpu);
kvm_x86_ops->hwapic_irr_update(vcpu,
apic_find_highest_irr(apic));
- kvm_x86_ops->hwapic_isr_update(vcpu->kvm,
+ kvm_x86_ops->hwapic_isr_update(vcpu,
apic_find_highest_isr(apic));
}
kvm_make_request(KVM_REQ_EVENT, vcpu);
@@ -2097,7 +2078,7 @@ void kvm_lapic_sync_to_vapic(struct kvm_vcpu *vcpu)
if (!test_bit(KVM_APIC_CHECK_VAPIC, &vcpu->arch.apic_attention))
return;
- tpr = kvm_apic_get_reg(apic, APIC_TASKPRI) & 0xff;
+ tpr = kvm_lapic_get_reg(apic, APIC_TASKPRI) & 0xff;
max_irr = apic_find_highest_irr(apic);
if (max_irr < 0)
max_irr = 0;
@@ -2139,8 +2120,8 @@ int kvm_x2apic_msr_write(struct kvm_vcpu *vcpu, u32 msr, u64 data)
/* if this is ICR write vector before command */
if (reg == APIC_ICR)
- apic_reg_write(apic, APIC_ICR2, (u32)(data >> 32));
- return apic_reg_write(apic, reg, (u32)data);
+ kvm_lapic_reg_write(apic, APIC_ICR2, (u32)(data >> 32));
+ return kvm_lapic_reg_write(apic, reg, (u32)data);
}
int kvm_x2apic_msr_read(struct kvm_vcpu *vcpu, u32 msr, u64 *data)
@@ -2157,10 +2138,10 @@ int kvm_x2apic_msr_read(struct kvm_vcpu *vcpu, u32 msr, u64 *data)
return 1;
}
- if (apic_reg_read(apic, reg, 4, &low))
+ if (kvm_lapic_reg_read(apic, reg, 4, &low))
return 1;
if (reg == APIC_ICR)
- apic_reg_read(apic, APIC_ICR2, 4, &high);
+ kvm_lapic_reg_read(apic, APIC_ICR2, 4, &high);
*data = (((u64)high) << 32) | low;
@@ -2176,8 +2157,8 @@ int kvm_hv_vapic_msr_write(struct kvm_vcpu *vcpu, u32 reg, u64 data)
/* if this is ICR write vector before command */
if (reg == APIC_ICR)
- apic_reg_write(apic, APIC_ICR2, (u32)(data >> 32));
- return apic_reg_write(apic, reg, (u32)data);
+ kvm_lapic_reg_write(apic, APIC_ICR2, (u32)(data >> 32));
+ return kvm_lapic_reg_write(apic, reg, (u32)data);
}
int kvm_hv_vapic_msr_read(struct kvm_vcpu *vcpu, u32 reg, u64 *data)
@@ -2188,10 +2169,10 @@ int kvm_hv_vapic_msr_read(struct kvm_vcpu *vcpu, u32 reg, u64 *data)
if (!lapic_in_kernel(vcpu))
return 1;
- if (apic_reg_read(apic, reg, 4, &low))
+ if (kvm_lapic_reg_read(apic, reg, 4, &low))
return 1;
if (reg == APIC_ICR)
- apic_reg_read(apic, APIC_ICR2, 4, &high);
+ kvm_lapic_reg_read(apic, APIC_ICR2, 4, &high);
*data = (((u64)high) << 32) | low;
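Several of the lapic.c hunks above touch update_divide_count(), whose arithmetic is easy to misread: TDCR encodes the timer divider in bits 0, 1 and 3, and the divide count comes out as a power of two, with the encoding 0xb meaning divide-by-1. A stand-alone check of that formula:

#include <stdio.h>

/* mirrors the arithmetic in update_divide_count() shown earlier */
static unsigned int divide_count(unsigned int tdcr)
{
	unsigned int tmp1 = tdcr & 0xf;
	unsigned int tmp2 = ((tmp1 & 0x3) | ((tmp1 & 0x8) >> 1)) + 1;

	return 1u << (tmp2 & 0x7);
}

int main(void)
{
	/* APIC TDCR encodings: 0x0 = divide by 2, ..., 0xb = divide by 1 */
	printf("0x0 -> %u, 0x3 -> %u, 0xa -> %u, 0xb -> %u\n",
	       divide_count(0x0), divide_count(0x3),
	       divide_count(0xa), divide_count(0xb));
	return 0;
}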
diff --git a/arch/x86/kvm/lapic.h b/arch/x86/kvm/lapic.h
index f71183e502ee..891c6da7d4aa 100644
--- a/arch/x86/kvm/lapic.h
+++ b/arch/x86/kvm/lapic.h
@@ -7,6 +7,10 @@
#define KVM_APIC_INIT 0
#define KVM_APIC_SIPI 1
+#define KVM_APIC_LVT_NUM 6
+
+#define KVM_APIC_SHORT_MASK 0xc0000
+#define KVM_APIC_DEST_MASK 0x800
struct kvm_timer {
struct hrtimer timer;
@@ -59,6 +63,11 @@ void kvm_lapic_set_eoi(struct kvm_vcpu *vcpu);
void kvm_lapic_set_base(struct kvm_vcpu *vcpu, u64 value);
u64 kvm_lapic_get_base(struct kvm_vcpu *vcpu);
void kvm_apic_set_version(struct kvm_vcpu *vcpu);
+int kvm_lapic_reg_write(struct kvm_lapic *apic, u32 reg, u32 val);
+int kvm_lapic_reg_read(struct kvm_lapic *apic, u32 offset, int len,
+ void *data);
+bool kvm_apic_match_dest(struct kvm_vcpu *vcpu, struct kvm_lapic *source,
+ int short_hand, unsigned int dest, int dest_mode);
void __kvm_apic_update_irr(u32 *pir, void *regs);
void kvm_apic_update_irr(struct kvm_vcpu *vcpu, u32 *pir);
@@ -99,9 +108,32 @@ static inline bool kvm_hv_vapic_assist_page_enabled(struct kvm_vcpu *vcpu)
int kvm_lapic_enable_pv_eoi(struct kvm_vcpu *vcpu, u64 data);
void kvm_lapic_init(void);
-static inline u32 kvm_apic_get_reg(struct kvm_lapic *apic, int reg_off)
+#define VEC_POS(v) ((v) & (32 - 1))
+#define REG_POS(v) (((v) >> 5) << 4)
+
+static inline void kvm_lapic_set_vector(int vec, void *bitmap)
+{
+ set_bit(VEC_POS(vec), (bitmap) + REG_POS(vec));
+}
+
+static inline void kvm_lapic_set_irr(int vec, struct kvm_lapic *apic)
+{
+ kvm_lapic_set_vector(vec, apic->regs + APIC_IRR);
+ /*
+ * irr_pending must be true if any interrupt is pending; set it after
+ * APIC_IRR to avoid race with apic_clear_irr
+ */
+ apic->irr_pending = true;
+}
+
+static inline u32 kvm_lapic_get_reg(struct kvm_lapic *apic, int reg_off)
+{
+ return *((u32 *) (apic->regs + reg_off));
+}
+
+static inline void kvm_lapic_set_reg(struct kvm_lapic *apic, int reg_off, u32 val)
{
- return *((u32 *) (apic->regs + reg_off));
+ *((u32 *) (apic->regs + reg_off)) = val;
}
extern struct static_key kvm_no_apic_vcpu;
@@ -169,7 +201,7 @@ static inline int kvm_lapic_latched_init(struct kvm_vcpu *vcpu)
static inline int kvm_apic_id(struct kvm_lapic *apic)
{
- return (kvm_apic_get_reg(apic, APIC_ID) >> 24) & 0xff;
+ return (kvm_lapic_get_reg(apic, APIC_ID) >> 24) & 0xff;
}
bool kvm_apic_pending_eoi(struct kvm_vcpu *vcpu, int vector);
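The accessors moved into lapic.h rely on the VEC_POS()/REG_POS() split shown above: each 32-vector bank of IRR/ISR/TMR occupies 0x10 bytes of register space, so REG_POS() picks the bank offset and VEC_POS() the bit inside it. A quick demonstration of the mapping:

#include <stdio.h>

#define VEC_POS(v)	((v) & (32 - 1))
#define REG_POS(v)	(((v) >> 5) << 4)

int main(void)
{
	unsigned int vecs[] = { 0x20, 0x31, 0xec };
	unsigned int i;

	for (i = 0; i < sizeof(vecs) / sizeof(vecs[0]); i++)
		printf("vector 0x%02x -> register offset 0x%02x, bit %u\n",
		       vecs[i], REG_POS(vecs[i]), VEC_POS(vecs[i]));
	return 0;
}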
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b6f50e8b0a39..24e800116ab4 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1909,18 +1909,17 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
* since it has been deleted from active_mmu_pages but still can be found
 * at hash list.
*
- * for_each_gfn_indirect_valid_sp has skipped that kind of page and
- * kvm_mmu_get_page(), the only user of for_each_gfn_sp(), has skipped
- * all the obsolete pages.
+ * for_each_gfn_valid_sp() has skipped that kind of pages.
*/
-#define for_each_gfn_sp(_kvm, _sp, _gfn) \
+#define for_each_gfn_valid_sp(_kvm, _sp, _gfn) \
hlist_for_each_entry(_sp, \
&(_kvm)->arch.mmu_page_hash[kvm_page_table_hashfn(_gfn)], hash_link) \
- if ((_sp)->gfn != (_gfn)) {} else
+ if ((_sp)->gfn != (_gfn) || is_obsolete_sp((_kvm), (_sp)) \
+ || (_sp)->role.invalid) {} else
#define for_each_gfn_indirect_valid_sp(_kvm, _sp, _gfn) \
- for_each_gfn_sp(_kvm, _sp, _gfn) \
- if ((_sp)->role.direct || (_sp)->role.invalid) {} else
+ for_each_gfn_valid_sp(_kvm, _sp, _gfn) \
+ if ((_sp)->role.direct) {} else
/* @sp->gfn should be write-protected at the call site */
static bool __kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
@@ -1961,6 +1960,11 @@ static void kvm_mmu_audit(struct kvm_vcpu *vcpu, int point) { }
static void mmu_audit_disable(void) { }
#endif
+static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
+{
+ return unlikely(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen);
+}
+
static bool kvm_sync_page(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
struct list_head *invalid_list)
{
@@ -2105,11 +2109,6 @@ static void clear_sp_write_flooding_count(u64 *spte)
__clear_sp_write_flooding_count(sp);
}
-static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
-{
- return unlikely(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen);
-}
-
static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
gfn_t gfn,
gva_t gaddr,
@@ -2136,10 +2135,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
quadrant &= (1 << ((PT32_PT_BITS - PT64_PT_BITS) * level)) - 1;
role.quadrant = quadrant;
}
- for_each_gfn_sp(vcpu->kvm, sp, gfn) {
- if (is_obsolete_sp(vcpu->kvm, sp))
- continue;
-
+ for_each_gfn_valid_sp(vcpu->kvm, sp, gfn) {
if (!need_sync && sp->unsync)
need_sync = true;
@@ -3844,7 +3840,8 @@ reset_tdp_shadow_zero_bits_mask(struct kvm_vcpu *vcpu,
__reset_rsvds_bits_mask(vcpu, &context->shadow_zero_check,
boot_cpu_data.x86_phys_bits,
context->shadow_root_level, false,
- cpu_has_gbpages, true, true);
+ boot_cpu_has(X86_FEATURE_GBPAGES),
+ true, true);
else
__reset_rsvds_bits_mask_ept(&context->shadow_zero_check,
boot_cpu_data.x86_phys_bits,
diff --git a/arch/x86/kvm/mtrr.c b/arch/x86/kvm/mtrr.c
index 3f8c732117ec..c146f3c262c3 100644
--- a/arch/x86/kvm/mtrr.c
+++ b/arch/x86/kvm/mtrr.c
@@ -44,8 +44,6 @@ static bool msr_mtrr_valid(unsigned msr)
case MSR_MTRRdefType:
case MSR_IA32_CR_PAT:
return true;
- case 0x2f8:
- return true;
}
return false;
}
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 31346a3f20a5..1163e8173e5a 100644
--- a/arch/x86/kvm/svm.c
+++ b/arch/x86/kvm/svm.c
@@ -14,6 +14,9 @@
* the COPYING file in the top-level directory.
*
*/
+
+#define pr_fmt(fmt) "SVM: " fmt
+
#include <linux/kvm_host.h>
#include "irq.h"
@@ -32,6 +35,7 @@
#include <linux/trace_events.h>
#include <linux/slab.h>
+#include <asm/apic.h>
#include <asm/perf_event.h>
#include <asm/tlbflush.h>
#include <asm/desc.h>
@@ -68,6 +72,8 @@ MODULE_DEVICE_TABLE(x86cpu, svm_cpu_id);
#define SVM_FEATURE_DECODE_ASSIST (1 << 7)
#define SVM_FEATURE_PAUSE_FILTER (1 << 10)
+#define SVM_AVIC_DOORBELL 0xc001011b
+
#define NESTED_EXIT_HOST 0 /* Exit handled on host level */
#define NESTED_EXIT_DONE 1 /* Exit caused nested vmexit */
#define NESTED_EXIT_CONTINUE 2 /* Further checks needed */
@@ -78,6 +84,18 @@ MODULE_DEVICE_TABLE(x86cpu, svm_cpu_id);
#define TSC_RATIO_MIN 0x0000000000000001ULL
#define TSC_RATIO_MAX 0x000000ffffffffffULL
+#define AVIC_HPA_MASK ~((0xFFFULL << 52) | 0xFFF)
+
+/*
+ * 0xff is broadcast, so the max index allowed for physical APIC ID
+ * table is 0xfe. APIC IDs above 0xff are reserved.
+ */
+#define AVIC_MAX_PHYSICAL_ID_COUNT 255
+
+#define AVIC_UNACCEL_ACCESS_WRITE_MASK 1
+#define AVIC_UNACCEL_ACCESS_OFFSET_MASK 0xFF0
+#define AVIC_UNACCEL_ACCESS_VECTOR_MASK 0xFFFFFFFF
+
static bool erratum_383_found __read_mostly;
static const u32 host_save_user_msrs[] = {
@@ -162,8 +180,21 @@ struct vcpu_svm {
/* cached guest cpuid flags for faster access */
bool nrips_enabled : 1;
+
+ u32 ldr_reg;
+ struct page *avic_backing_page;
+ u64 *avic_physical_id_cache;
+ bool avic_is_running;
};
+#define AVIC_LOGICAL_ID_ENTRY_GUEST_PHYSICAL_ID_MASK (0xFF)
+#define AVIC_LOGICAL_ID_ENTRY_VALID_MASK (1 << 31)
+
+#define AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK (0xFFULL)
+#define AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK (0xFFFFFFFFFFULL << 12)
+#define AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK (1ULL << 62)
+#define AVIC_PHYSICAL_ID_ENTRY_VALID_MASK (1ULL << 63)
+
static DEFINE_PER_CPU(u64, current_tsc_ratio);
#define TSC_RATIO_DEFAULT 0x0100000000ULL
@@ -205,6 +236,10 @@ module_param(npt, int, S_IRUGO);
static int nested = true;
module_param(nested, int, S_IRUGO);
+/* enable / disable AVIC */
+static int avic;
+module_param(avic, int, S_IRUGO);
+
static void svm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0);
static void svm_flush_tlb(struct kvm_vcpu *vcpu);
static void svm_complete_interrupts(struct vcpu_svm *svm);
@@ -228,12 +263,18 @@ enum {
VMCB_SEG, /* CS, DS, SS, ES, CPL */
VMCB_CR2, /* CR2 only */
VMCB_LBR, /* DBGCTL, BR_FROM, BR_TO, LAST_EX_FROM, LAST_EX_TO */
+ VMCB_AVIC, /* AVIC APIC_BAR, AVIC APIC_BACKING_PAGE,
+ * AVIC PHYSICAL_TABLE pointer,
+ * AVIC LOGICAL_TABLE pointer
+ */
VMCB_DIRTY_MAX,
};
/* TPR and CR2 are always written before VMRUN */
#define VMCB_ALWAYS_DIRTY_MASK ((1U << VMCB_INTR) | (1U << VMCB_CR2))
+#define VMCB_AVIC_APIC_BAR_MASK 0xFFFFFFFFFF000ULL
+
static inline void mark_all_dirty(struct vmcb *vmcb)
{
vmcb->control.clean = 0;
@@ -255,6 +296,23 @@ static inline struct vcpu_svm *to_svm(struct kvm_vcpu *vcpu)
return container_of(vcpu, struct vcpu_svm, vcpu);
}
+static inline void avic_update_vapic_bar(struct vcpu_svm *svm, u64 data)
+{
+ svm->vmcb->control.avic_vapic_bar = data & VMCB_AVIC_APIC_BAR_MASK;
+ mark_dirty(svm->vmcb, VMCB_AVIC);
+}
+
+static inline bool avic_vcpu_is_running(struct kvm_vcpu *vcpu)
+{
+ struct vcpu_svm *svm = to_svm(vcpu);
+ u64 *entry = svm->avic_physical_id_cache;
+
+ if (!entry)
+ return false;
+
+ return (READ_ONCE(*entry) & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK);
+}
+
static void recalc_intercepts(struct vcpu_svm *svm)
{
struct vmcb_control_area *c, *h;
@@ -923,6 +981,12 @@ static __init int svm_hardware_setup(void)
} else
kvm_disable_tdp();
+ if (avic && (!npt_enabled || !boot_cpu_has(X86_FEATURE_AVIC)))
+ avic = false;
+
+ if (avic)
+ pr_info("AVIC enabled\n");
+
return 0;
err:
@@ -1000,6 +1064,22 @@ static void svm_adjust_tsc_offset_guest(struct kvm_vcpu *vcpu, s64 adjustment)
mark_dirty(svm->vmcb, VMCB_INTERCEPTS);
}
+static void avic_init_vmcb(struct vcpu_svm *svm)
+{
+ struct vmcb *vmcb = svm->vmcb;
+ struct kvm_arch *vm_data = &svm->vcpu.kvm->arch;
+ phys_addr_t bpa = page_to_phys(svm->avic_backing_page);
+ phys_addr_t lpa = page_to_phys(vm_data->avic_logical_id_table_page);
+ phys_addr_t ppa = page_to_phys(vm_data->avic_physical_id_table_page);
+
+ vmcb->control.avic_backing_page = bpa & AVIC_HPA_MASK;
+ vmcb->control.avic_logical_id = lpa & AVIC_HPA_MASK;
+ vmcb->control.avic_physical_id = ppa & AVIC_HPA_MASK;
+ vmcb->control.avic_physical_id |= AVIC_MAX_PHYSICAL_ID_COUNT;
+ vmcb->control.int_ctl |= AVIC_ENABLE_MASK;
+ svm->vcpu.arch.apicv_active = true;
+}
+
static void init_vmcb(struct vcpu_svm *svm)
{
struct vmcb_control_area *control = &svm->vmcb->control;
@@ -1014,7 +1094,8 @@ static void init_vmcb(struct vcpu_svm *svm)
set_cr_intercept(svm, INTERCEPT_CR0_WRITE);
set_cr_intercept(svm, INTERCEPT_CR3_WRITE);
set_cr_intercept(svm, INTERCEPT_CR4_WRITE);
- set_cr_intercept(svm, INTERCEPT_CR8_WRITE);
+ if (!kvm_vcpu_apicv_active(&svm->vcpu))
+ set_cr_intercept(svm, INTERCEPT_CR8_WRITE);
set_dr_intercepts(svm);
@@ -1110,9 +1191,197 @@ static void init_vmcb(struct vcpu_svm *svm)
set_intercept(svm, INTERCEPT_PAUSE);
}
+ if (avic)
+ avic_init_vmcb(svm);
+
mark_all_dirty(svm->vmcb);
enable_gif(svm);
+
+}
+
+static u64 *avic_get_physical_id_entry(struct kvm_vcpu *vcpu, int index)
+{
+ u64 *avic_physical_id_table;
+ struct kvm_arch *vm_data = &vcpu->kvm->arch;
+
+ if (index >= AVIC_MAX_PHYSICAL_ID_COUNT)
+ return NULL;
+
+ avic_physical_id_table = page_address(vm_data->avic_physical_id_table_page);
+
+ return &avic_physical_id_table[index];
+}
+
+/**
+ * Note:
+ * AVIC hardware walks the nested page table to check permissions,
+ * but does not use the SPA address specified in the leaf page
+ * table entry since it uses the address in the AVIC_BACKING_PAGE pointer
+ * field of the VMCB. Therefore, we set up the
+ * APIC_ACCESS_PAGE_PRIVATE_MEMSLOT (4KB) here.
+ */
+static int avic_init_access_page(struct kvm_vcpu *vcpu)
+{
+ struct kvm *kvm = vcpu->kvm;
+ int ret;
+
+ if (kvm->arch.apic_access_page_done)
+ return 0;
+
+ ret = x86_set_memory_region(kvm,
+ APIC_ACCESS_PAGE_PRIVATE_MEMSLOT,
+ APIC_DEFAULT_PHYS_BASE,
+ PAGE_SIZE);
+ if (ret)
+ return ret;
+
+ kvm->arch.apic_access_page_done = true;
+ return 0;
+}
+
+static int avic_init_backing_page(struct kvm_vcpu *vcpu)
+{
+ int ret;
+ u64 *entry, new_entry;
+ int id = vcpu->vcpu_id;
+ struct vcpu_svm *svm = to_svm(vcpu);
+
+ ret = avic_init_access_page(vcpu);
+ if (ret)
+ return ret;
+
+ if (id >= AVIC_MAX_PHYSICAL_ID_COUNT)
+ return -EINVAL;
+
+ if (!svm->vcpu.arch.apic->regs)
+ return -EINVAL;
+
+ svm->avic_backing_page = virt_to_page(svm->vcpu.arch.apic->regs);
+
+ /* Setting AVIC backing page address in the phy APIC ID table */
+ entry = avic_get_physical_id_entry(vcpu, id);
+ if (!entry)
+ return -EINVAL;
+
+ new_entry = READ_ONCE(*entry);
+ new_entry = (page_to_phys(svm->avic_backing_page) &
+ AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK) |
+ AVIC_PHYSICAL_ID_ENTRY_VALID_MASK;
+ WRITE_ONCE(*entry, new_entry);
+
+ svm->avic_physical_id_cache = entry;
+
+ return 0;
+}
+
+static void avic_vm_destroy(struct kvm *kvm)
+{
+ struct kvm_arch *vm_data = &kvm->arch;
+
+ if (vm_data->avic_logical_id_table_page)
+ __free_page(vm_data->avic_logical_id_table_page);
+ if (vm_data->avic_physical_id_table_page)
+ __free_page(vm_data->avic_physical_id_table_page);
+}
+
+static int avic_vm_init(struct kvm *kvm)
+{
+ int err = -ENOMEM;
+ struct kvm_arch *vm_data = &kvm->arch;
+ struct page *p_page;
+ struct page *l_page;
+
+ if (!avic)
+ return 0;
+
+ /* Allocating physical APIC ID table (4KB) */
+ p_page = alloc_page(GFP_KERNEL);
+ if (!p_page)
+ goto free_avic;
+
+ vm_data->avic_physical_id_table_page = p_page;
+ clear_page(page_address(p_page));
+
+ /* Allocating logical APIC ID table (4KB) */
+ l_page = alloc_page(GFP_KERNEL);
+ if (!l_page)
+ goto free_avic;
+
+ vm_data->avic_logical_id_table_page = l_page;
+ clear_page(page_address(l_page));
+
+ return 0;
+
+free_avic:
+ avic_vm_destroy(kvm);
+ return err;
+}
+
+/**
+ * This function is called during VCPU halt/unhalt.
+ */
+static void avic_set_running(struct kvm_vcpu *vcpu, bool is_run)
+{
+ u64 entry;
+ int h_physical_id = __default_cpu_present_to_apicid(vcpu->cpu);
+ struct vcpu_svm *svm = to_svm(vcpu);
+
+ if (!kvm_vcpu_apicv_active(vcpu))
+ return;
+
+ svm->avic_is_running = is_run;
+
+ /* ID = 0xff (broadcast), ID > 0xff (reserved) */
+ if (WARN_ON(h_physical_id >= AVIC_MAX_PHYSICAL_ID_COUNT))
+ return;
+
+ entry = READ_ONCE(*(svm->avic_physical_id_cache));
+ WARN_ON(is_run == !!(entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK));
+
+ entry &= ~AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
+ if (is_run)
+ entry |= AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
+ WRITE_ONCE(*(svm->avic_physical_id_cache), entry);
+}
+
+static void avic_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
+{
+ u64 entry;
+ /* ID = 0xff (broadcast), ID > 0xff (reserved) */
+ int h_physical_id = __default_cpu_present_to_apicid(cpu);
+ struct vcpu_svm *svm = to_svm(vcpu);
+
+ if (!kvm_vcpu_apicv_active(vcpu))
+ return;
+
+ if (WARN_ON(h_physical_id >= AVIC_MAX_PHYSICAL_ID_COUNT))
+ return;
+
+ entry = READ_ONCE(*(svm->avic_physical_id_cache));
+ WARN_ON(entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK);
+
+ entry &= ~AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK;
+ entry |= (h_physical_id & AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK);
+
+ entry &= ~AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
+ if (svm->avic_is_running)
+ entry |= AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
+
+ WRITE_ONCE(*(svm->avic_physical_id_cache), entry);
+}
+
+static void avic_vcpu_put(struct kvm_vcpu *vcpu)
+{
+ u64 entry;
+ struct vcpu_svm *svm = to_svm(vcpu);
+
+ if (!kvm_vcpu_apicv_active(vcpu))
+ return;
+
+ entry = READ_ONCE(*(svm->avic_physical_id_cache));
+ entry &= ~AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;
+ WRITE_ONCE(*(svm->avic_physical_id_cache), entry);
}
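/*
 * Illustrative sketch, not part of the patch above: how the 64-bit AVIC
 * physical APIC ID table entry manipulated by avic_init_backing_page(),
 * avic_vcpu_load() and avic_set_running() is packed, reusing the
 * AVIC_PHYSICAL_ID_ENTRY_* masks added above.  The backing page address
 * and host APIC ID below are made-up sample values.
 */
#include <stdint.h>
#include <stdio.h>

#define AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK	(0xFFULL)
#define AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK	(0xFFFFFFFFFFULL << 12)
#define AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK		(1ULL << 62)
#define AVIC_PHYSICAL_ID_ENTRY_VALID_MASK		(1ULL << 63)

int main(void)
{
	uint64_t backing_page_pa = 0x123456000ULL;	/* sample 4K-aligned address */
	uint64_t entry = 0;

	entry |= backing_page_pa & AVIC_PHYSICAL_ID_ENTRY_BACKING_PAGE_MASK;
	entry |= 5 & AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK;
	entry |= AVIC_PHYSICAL_ID_ENTRY_VALID_MASK;
	entry |= AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK;

	printf("entry=%#llx host_id=%llu running=%d\n",
	       (unsigned long long)entry,
	       (unsigned long long)(entry & AVIC_PHYSICAL_ID_ENTRY_HOST_PHYSICAL_ID_MASK),
	       !!(entry & AVIC_PHYSICAL_ID_ENTRY_IS_RUNNING_MASK));
	return 0;
}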
static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
@@ -1131,6 +1400,9 @@ static void svm_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
kvm_cpuid(vcpu, &eax, &dummy, &dummy, &dummy);
kvm_register_write(vcpu, VCPU_REGS_RDX, eax);
+
+ if (kvm_vcpu_apicv_active(vcpu) && !init_event)
+ avic_update_vapic_bar(svm, APIC_DEFAULT_PHYS_BASE);
}
static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
@@ -1169,6 +1441,17 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
if (!hsave_page)
goto free_page3;
+ if (avic) {
+ err = avic_init_backing_page(&svm->vcpu);
+ if (err)
+ goto free_page4;
+ }
+
+ /* We initialize this flag to true to make sure that the is_running
+ * bit is set the first time the vcpu is loaded.
+ */
+ svm->avic_is_running = true;
+
svm->nested.hsave = page_address(hsave_page);
svm->msrpm = page_address(msrpm_pages);
@@ -1187,6 +1470,8 @@ static struct kvm_vcpu *svm_create_vcpu(struct kvm *kvm, unsigned int id)
return &svm->vcpu;
+free_page4:
+ __free_page(hsave_page);
free_page3:
__free_pages(nested_msrpm_pages, MSRPM_ALLOC_ORDER);
free_page2:
@@ -1243,6 +1528,8 @@ static void svm_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
/* This assumes that the kernel never uses MSR_TSC_AUX */
if (static_cpu_has(X86_FEATURE_RDTSCP))
wrmsrl(MSR_TSC_AUX, svm->tsc_aux);
+
+ avic_vcpu_load(vcpu, cpu);
}
static void svm_vcpu_put(struct kvm_vcpu *vcpu)
@@ -1250,11 +1537,13 @@ static void svm_vcpu_put(struct kvm_vcpu *vcpu)
struct vcpu_svm *svm = to_svm(vcpu);
int i;
+ avic_vcpu_put(vcpu);
+
++vcpu->stat.host_state_reload;
kvm_load_ldt(svm->host.ldt);
#ifdef CONFIG_X86_64
loadsegment(fs, svm->host.fs);
- wrmsrl(MSR_KERNEL_GS_BASE, current->thread.gs);
+ wrmsrl(MSR_KERNEL_GS_BASE, current->thread.gsbase);
load_gs_index(svm->host.gs);
#else
#ifdef CONFIG_X86_32_LAZY_GS
@@ -1265,6 +1554,16 @@ static void svm_vcpu_put(struct kvm_vcpu *vcpu)
wrmsrl(host_save_user_msrs[i], svm->host_user_msrs[i]);
}
+static void svm_vcpu_blocking(struct kvm_vcpu *vcpu)
+{
+ avic_set_running(vcpu, false);
+}
+
+static void svm_vcpu_unblocking(struct kvm_vcpu *vcpu)
+{
+ avic_set_running(vcpu, true);
+}
+
static unsigned long svm_get_rflags(struct kvm_vcpu *vcpu)
{
return to_svm(vcpu)->vmcb->save.rflags;
@@ -2673,10 +2972,11 @@ static int clgi_interception(struct vcpu_svm *svm)
disable_gif(svm);
/* After a CLGI no interrupts should come */
- svm_clear_vintr(svm);
- svm->vmcb->control.int_ctl &= ~V_IRQ_MASK;
-
- mark_dirty(svm->vmcb, VMCB_INTR);
+ if (!kvm_vcpu_apicv_active(&svm->vcpu)) {
+ svm_clear_vintr(svm);
+ svm->vmcb->control.int_ctl &= ~V_IRQ_MASK;
+ mark_dirty(svm->vmcb, VMCB_INTR);
+ }
return 1;
}
@@ -3212,6 +3512,10 @@ static int svm_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr)
case MSR_VM_IGNNE:
vcpu_unimpl(vcpu, "unimplemented wrmsr: 0x%x data 0x%llx\n", ecx, data);
break;
+ case MSR_IA32_APICBASE:
+ if (kvm_vcpu_apicv_active(vcpu))
+ avic_update_vapic_bar(to_svm(vcpu), data);
+ /* Follow through */
default:
return kvm_set_msr_common(vcpu, msr);
}
@@ -3281,6 +3585,278 @@ static int mwait_interception(struct vcpu_svm *svm)
return nop_interception(svm);
}
+enum avic_ipi_failure_cause {
+ AVIC_IPI_FAILURE_INVALID_INT_TYPE,
+ AVIC_IPI_FAILURE_TARGET_NOT_RUNNING,
+ AVIC_IPI_FAILURE_INVALID_TARGET,
+ AVIC_IPI_FAILURE_INVALID_BACKING_PAGE,
+};
+
+static int avic_incomplete_ipi_interception(struct vcpu_svm *svm)
+{
+ u32 icrh = svm->vmcb->control.exit_info_1 >> 32;
+ u32 icrl = svm->vmcb->control.exit_info_1;
+ u32 id = svm->vmcb->control.exit_info_2 >> 32;
+ u32 index = svm->vmcb->control.exit_info_2 & 0xFF;
+ struct kvm_lapic *apic = svm->vcpu.arch.apic;
+
+ trace_kvm_avic_incomplete_ipi(svm->vcpu.vcpu_id, icrh, icrl, id, index);
+
+ switch (id) {
+ case AVIC_IPI_FAILURE_INVALID_INT_TYPE:
+ /*
+ * AVIC hardware handles the generation of
+ * IPIs when the specified Message Type is Fixed
+ * (also known as fixed delivery mode) and
+ * the Trigger Mode is edge-triggered. The hardware
+ * also supports self and broadcast delivery modes
+ * specified via the Destination Shorthand (DSH)
+ * field of the ICRL. Logical and physical APIC ID
+ * formats are supported. All other IPI types cause
+ * a #VMEXIT, which needs to be emulated.
+ */
+ kvm_lapic_reg_write(apic, APIC_ICR2, icrh);
+ kvm_lapic_reg_write(apic, APIC_ICR, icrl);
+ break;
+ case AVIC_IPI_FAILURE_TARGET_NOT_RUNNING: {
+ int i;
+ struct kvm_vcpu *vcpu;
+ struct kvm *kvm = svm->vcpu.kvm;
+ struct kvm_lapic *apic = svm->vcpu.arch.apic;
+
+ /*
+ * At this point, we expect that the AVIC HW has already
+ * set the appropriate IRR bits on the valid target
+ * vcpus. So, we just need to kick the appropriate vcpu.
+ */
+ kvm_for_each_vcpu(i, vcpu, kvm) {
+ bool m = kvm_apic_match_dest(vcpu, apic,
+ icrl & KVM_APIC_SHORT_MASK,
+ GET_APIC_DEST_FIELD(icrh),
+ icrl & KVM_APIC_DEST_MASK);
+
+ if (m && !avic_vcpu_is_running(vcpu))
+ kvm_vcpu_wake_up(vcpu);
+ }
+ break;
+ }
+ case AVIC_IPI_FAILURE_INVALID_TARGET:
+ break;
+ case AVIC_IPI_FAILURE_INVALID_BACKING_PAGE:
+ WARN_ONCE(1, "Invalid backing page\n");
+ break;
+ default:
+ pr_err("Unknown IPI interception\n");
+ }
+
+ return 1;
+}
+
+static u32 *avic_get_logical_id_entry(struct kvm_vcpu *vcpu, u32 ldr, bool flat)
+{
+ struct kvm_arch *vm_data = &vcpu->kvm->arch;
+ int index;
+ u32 *logical_apic_id_table;
+ int dlid = GET_APIC_LOGICAL_ID(ldr);
+
+ if (!dlid)
+ return NULL;
+
+ if (flat) { /* flat */
+ index = ffs(dlid) - 1;
+ if (index > 7)
+ return NULL;
+ } else { /* cluster */
+ int cluster = (dlid & 0xf0) >> 4;
+ int apic = ffs(dlid & 0x0f) - 1;
+
+ if ((apic < 0) || (apic > 7) ||
+ (cluster >= 0xf))
+ return NULL;
+ index = (cluster << 2) + apic;
+ }
+
+ logical_apic_id_table = (u32 *) page_address(vm_data->avic_logical_id_table_page);
+
+ return &logical_apic_id_table[index];
+}
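/*
 * Illustrative sketch, not part of the patch above: the index computation
 * used by avic_get_logical_id_entry() for the two xAPIC logical modes.
 * Flat mode: one bit per APIC in the 8-bit logical ID, index = bit number.
 * Cluster mode: bits 7:4 select the cluster, bits 3:0 one of 4 APICs in it.
 */
#include <stdio.h>
#include <strings.h>	/* ffs() */

static int logical_index(unsigned int dlid, int flat)
{
	if (!dlid)
		return -1;
	if (flat) {
		int index = ffs(dlid) - 1;

		return index > 7 ? -1 : index;
	} else {
		int cluster = (dlid & 0xf0) >> 4;
		int apic = ffs(dlid & 0x0f) - 1;

		if (apic < 0 || apic > 7 || cluster >= 0xf)
			return -1;
		return (cluster << 2) + apic;
	}
}

int main(void)
{
	printf("flat 0x20 -> %d, cluster 0x32 -> %d\n",
	       logical_index(0x20, 1), logical_index(0x32, 0));
	return 0;
}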
+
+static int avic_ldr_write(struct kvm_vcpu *vcpu, u8 g_physical_id, u32 ldr,
+ bool valid)
+{
+ bool flat;
+ u32 *entry, new_entry;
+
+ flat = kvm_lapic_get_reg(vcpu->arch.apic, APIC_DFR) == APIC_DFR_FLAT;
+ entry = avic_get_logical_id_entry(vcpu, ldr, flat);
+ if (!entry)
+ return -EINVAL;
+
+ new_entry = READ_ONCE(*entry);
+ new_entry &= ~AVIC_LOGICAL_ID_ENTRY_GUEST_PHYSICAL_ID_MASK;
+ new_entry |= (g_physical_id & AVIC_LOGICAL_ID_ENTRY_GUEST_PHYSICAL_ID_MASK);
+ if (valid)
+ new_entry |= AVIC_LOGICAL_ID_ENTRY_VALID_MASK;
+ else
+ new_entry &= ~AVIC_LOGICAL_ID_ENTRY_VALID_MASK;
+ WRITE_ONCE(*entry, new_entry);
+
+ return 0;
+}
+
+static int avic_handle_ldr_update(struct kvm_vcpu *vcpu)
+{
+ int ret;
+ struct vcpu_svm *svm = to_svm(vcpu);
+ u32 ldr = kvm_lapic_get_reg(vcpu->arch.apic, APIC_LDR);
+
+ if (!ldr)
+ return 1;
+
+ ret = avic_ldr_write(vcpu, vcpu->vcpu_id, ldr, true);
+ if (ret && svm->ldr_reg) {
+ avic_ldr_write(vcpu, 0, svm->ldr_reg, false);
+ svm->ldr_reg = 0;
+ } else {
+ svm->ldr_reg = ldr;
+ }
+ return ret;
+}
+
+static int avic_handle_apic_id_update(struct kvm_vcpu *vcpu)
+{
+ u64 *old, *new;
+ struct vcpu_svm *svm = to_svm(vcpu);
+ u32 apic_id_reg = kvm_lapic_get_reg(vcpu->arch.apic, APIC_ID);
+ u32 id = (apic_id_reg >> 24) & 0xff;
+
+ if (vcpu->vcpu_id == id)
+ return 0;
+
+ old = avic_get_physical_id_entry(vcpu, vcpu->vcpu_id);
+ new = avic_get_physical_id_entry(vcpu, id);
+ if (!new || !old)
+ return 1;
+
+ /* We need to move physical_id_entry to new offset */
+ *new = *old;
+ *old = 0ULL;
+ to_svm(vcpu)->avic_physical_id_cache = new;
+
+ /*
+ * Also update the guest physical APIC ID in the logical
+ * APIC ID table entry if the LDR is already set up.
+ */
+ if (svm->ldr_reg)
+ avic_handle_ldr_update(vcpu);
+
+ return 0;
+}
+
+static int avic_handle_dfr_update(struct kvm_vcpu *vcpu)
+{
+ struct vcpu_svm *svm = to_svm(vcpu);
+ struct kvm_arch *vm_data = &vcpu->kvm->arch;
+ u32 dfr = kvm_lapic_get_reg(vcpu->arch.apic, APIC_DFR);
+ u32 mod = (dfr >> 28) & 0xf;
+
+ /*
+ * We assume that all local APICs are using the same type.
+ * If this changes, we need to flush the AVIC logical
+ * APIC ID table.
+ */
+ if (vm_data->ldr_mode == mod)
+ return 0;
+
+ clear_page(page_address(vm_data->avic_logical_id_table_page));
+ vm_data->ldr_mode = mod;
+
+ if (svm->ldr_reg)
+ avic_handle_ldr_update(vcpu);
+ return 0;
+}
+
+static int avic_unaccel_trap_write(struct vcpu_svm *svm)
+{
+ struct kvm_lapic *apic = svm->vcpu.arch.apic;
+ u32 offset = svm->vmcb->control.exit_info_1 &
+ AVIC_UNACCEL_ACCESS_OFFSET_MASK;
+
+ switch (offset) {
+ case APIC_ID:
+ if (avic_handle_apic_id_update(&svm->vcpu))
+ return 0;
+ break;
+ case APIC_LDR:
+ if (avic_handle_ldr_update(&svm->vcpu))
+ return 0;
+ break;
+ case APIC_DFR:
+ avic_handle_dfr_update(&svm->vcpu);
+ break;
+ default:
+ break;
+ }
+
+ kvm_lapic_reg_write(apic, offset, kvm_lapic_get_reg(apic, offset));
+
+ return 1;
+}
+
+static bool is_avic_unaccelerated_access_trap(u32 offset)
+{
+ bool ret = false;
+
+ switch (offset) {
+ case APIC_ID:
+ case APIC_EOI:
+ case APIC_RRR:
+ case APIC_LDR:
+ case APIC_DFR:
+ case APIC_SPIV:
+ case APIC_ESR:
+ case APIC_ICR:
+ case APIC_LVTT:
+ case APIC_LVTTHMR:
+ case APIC_LVTPC:
+ case APIC_LVT0:
+ case APIC_LVT1:
+ case APIC_LVTERR:
+ case APIC_TMICT:
+ case APIC_TDCR:
+ ret = true;
+ break;
+ default:
+ break;
+ }
+ return ret;
+}
+
+static int avic_unaccelerated_access_interception(struct vcpu_svm *svm)
+{
+ int ret = 0;
+ u32 offset = svm->vmcb->control.exit_info_1 &
+ AVIC_UNACCEL_ACCESS_OFFSET_MASK;
+ u32 vector = svm->vmcb->control.exit_info_2 &
+ AVIC_UNACCEL_ACCESS_VECTOR_MASK;
+ bool write = (svm->vmcb->control.exit_info_1 >> 32) &
+ AVIC_UNACCEL_ACCESS_WRITE_MASK;
+ bool trap = is_avic_unaccelerated_access_trap(offset);
+
+ trace_kvm_avic_unaccelerated_access(svm->vcpu.vcpu_id, offset,
+ trap, write, vector);
+ if (trap) {
+ /* Handling Trap */
+ WARN_ONCE(!write, "svm: Handling trap read.\n");
+ ret = avic_unaccel_trap_write(svm);
+ } else {
+ /* Handling Fault */
+ ret = (emulate_instruction(&svm->vcpu, 0) == EMULATE_DONE);
+ }
+
+ return ret;
+}
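
For reference, a small decode sketch of the two EXITINFO words used above. The mask values shown are assumptions mirroring the AVIC_UNACCEL_* definitions: the APIC register offset sits in the low bits of EXITINFO1, the write flag at bit 32, and the vector in the low half of EXITINFO2.

	struct avic_unaccel_info {
		unsigned int offset;	/* APIC register offset, e.g. 0xd0 for APIC_LDR */
		unsigned int vector;
		int write;
	};

	static struct avic_unaccel_info avic_decode_unaccel(unsigned long long exit_info_1,
							    unsigned long long exit_info_2)
	{
		struct avic_unaccel_info info;

		info.offset = exit_info_1 & 0xff0;		/* AVIC_UNACCEL_ACCESS_OFFSET_MASK (assumed) */
		info.vector = exit_info_2 & 0xffffffffULL;	/* AVIC_UNACCEL_ACCESS_VECTOR_MASK (assumed) */
		info.write  = (exit_info_1 >> 32) & 1;		/* AVIC_UNACCEL_ACCESS_WRITE_MASK (assumed) */

		return info;
	}
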
+
static int (*const svm_exit_handlers[])(struct vcpu_svm *svm) = {
[SVM_EXIT_READ_CR0] = cr_interception,
[SVM_EXIT_READ_CR3] = cr_interception,
@@ -3344,6 +3920,8 @@ static int (*const svm_exit_handlers[])(struct vcpu_svm *svm) = {
[SVM_EXIT_XSETBV] = xsetbv_interception,
[SVM_EXIT_NPF] = pf_interception,
[SVM_EXIT_RSM] = emulate_on_interception,
+ [SVM_EXIT_AVIC_INCOMPLETE_IPI] = avic_incomplete_ipi_interception,
+ [SVM_EXIT_AVIC_UNACCELERATED_ACCESS] = avic_unaccelerated_access_interception,
};
static void dump_vmcb(struct kvm_vcpu *vcpu)
@@ -3375,10 +3953,14 @@ static void dump_vmcb(struct kvm_vcpu *vcpu)
pr_err("%-20s%08x\n", "exit_int_info_err:", control->exit_int_info_err);
pr_err("%-20s%lld\n", "nested_ctl:", control->nested_ctl);
pr_err("%-20s%016llx\n", "nested_cr3:", control->nested_cr3);
+ pr_err("%-20s%016llx\n", "avic_vapic_bar:", control->avic_vapic_bar);
pr_err("%-20s%08x\n", "event_inj:", control->event_inj);
pr_err("%-20s%08x\n", "event_inj_err:", control->event_inj_err);
pr_err("%-20s%lld\n", "lbr_ctl:", control->lbr_ctl);
pr_err("%-20s%016llx\n", "next_rip:", control->next_rip);
+ pr_err("%-20s%016llx\n", "avic_backing_page:", control->avic_backing_page);
+ pr_err("%-20s%016llx\n", "avic_logical_id:", control->avic_logical_id);
+ pr_err("%-20s%016llx\n", "avic_physical_id:", control->avic_physical_id);
pr_err("VMCB State Save Area:\n");
pr_err("%-5s s: %04x a: %04x l: %08x b: %016llx\n",
"es:",
@@ -3562,6 +4144,7 @@ static inline void svm_inject_irq(struct vcpu_svm *svm, int irq)
{
struct vmcb_control_area *control;
+ /* The following fields are ignored when AVIC is enabled */
control = &svm->vmcb->control;
control->int_vector = irq;
control->int_ctl &= ~V_INTR_PRIO_MASK;
@@ -3583,11 +4166,17 @@ static void svm_set_irq(struct kvm_vcpu *vcpu)
SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_INTR;
}
+static inline bool svm_nested_virtualize_tpr(struct kvm_vcpu *vcpu)
+{
+ return is_guest_mode(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK);
+}
+
static void update_cr8_intercept(struct kvm_vcpu *vcpu, int tpr, int irr)
{
struct vcpu_svm *svm = to_svm(vcpu);
- if (is_guest_mode(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK))
+ if (svm_nested_virtualize_tpr(vcpu) ||
+ kvm_vcpu_apicv_active(vcpu))
return;
clr_cr_intercept(svm, INTERCEPT_CR8_WRITE);
@@ -3606,11 +4195,28 @@ static void svm_set_virtual_x2apic_mode(struct kvm_vcpu *vcpu, bool set)
static bool svm_get_enable_apicv(void)
{
- return false;
+ return avic;
+}
+
+static void svm_hwapic_irr_update(struct kvm_vcpu *vcpu, int max_irr)
+{
}
+static void svm_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
+{
+}
+
+/* Note: Currently only used by Hyper-V. */
static void svm_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
{
+ struct vcpu_svm *svm = to_svm(vcpu);
+ struct vmcb *vmcb = svm->vmcb;
+
+ if (!avic)
+ return;
+
+ vmcb->control.int_ctl &= ~AVIC_ENABLE_MASK;
+ mark_dirty(vmcb, VMCB_INTR);
}
static void svm_load_eoi_exitmap(struct kvm_vcpu *vcpu, u64 *eoi_exit_bitmap)
@@ -3623,6 +4229,18 @@ static void svm_sync_pir_to_irr(struct kvm_vcpu *vcpu)
return;
}
+static void svm_deliver_avic_intr(struct kvm_vcpu *vcpu, int vec)
+{
+ kvm_lapic_set_irr(vec, vcpu->arch.apic);
+ smp_mb__after_atomic();
+
+ if (avic_vcpu_is_running(vcpu))
+ wrmsrl(SVM_AVIC_DOORBELL,
+ __default_cpu_present_to_apicid(vcpu->cpu));
+ else
+ kvm_vcpu_wake_up(vcpu);
+}
+
static int svm_nmi_allowed(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
@@ -3677,6 +4295,9 @@ static void enable_irq_window(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
+ if (kvm_vcpu_apicv_active(vcpu))
+ return;
+
/*
* In case GIF=0 we can't rely on the CPU to tell us when GIF becomes
* 1, because that's a separate STGI/VMRUN intercept. The next time we
@@ -3728,7 +4349,7 @@ static inline void sync_cr8_to_lapic(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
- if (is_guest_mode(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK))
+ if (svm_nested_virtualize_tpr(vcpu))
return;
if (!is_cr_intercept(svm, INTERCEPT_CR8_WRITE)) {
@@ -3742,7 +4363,8 @@ static inline void sync_lapic_to_cr8(struct kvm_vcpu *vcpu)
struct vcpu_svm *svm = to_svm(vcpu);
u64 cr8;
- if (is_guest_mode(vcpu) && (vcpu->arch.hflags & HF_VINTR_MASK))
+ if (svm_nested_virtualize_tpr(vcpu) ||
+ kvm_vcpu_apicv_active(vcpu))
return;
cr8 = kvm_get_cr8(vcpu);
@@ -4045,14 +4667,26 @@ static u64 svm_get_mt_mask(struct kvm_vcpu *vcpu, gfn_t gfn, bool is_mmio)
static void svm_cpuid_update(struct kvm_vcpu *vcpu)
{
struct vcpu_svm *svm = to_svm(vcpu);
+ struct kvm_cpuid_entry2 *entry;
/* Update nrips enabled cache */
svm->nrips_enabled = !!guest_cpuid_has_nrips(&svm->vcpu);
+
+ if (!kvm_vcpu_apicv_active(vcpu))
+ return;
+
+ entry = kvm_find_cpuid_entry(vcpu, 1, 0);
+ if (entry)
+ entry->ecx &= ~bit(X86_FEATURE_X2APIC);
}
static void svm_set_supported_cpuid(u32 func, struct kvm_cpuid_entry2 *entry)
{
switch (func) {
+ case 0x1:
+ if (avic)
+ entry->ecx &= ~bit(X86_FEATURE_X2APIC);
+ break;
case 0x80000001:
if (nested)
entry->ecx |= (1 << 2); /* Set SVM bit */
@@ -4307,6 +4941,15 @@ static void svm_sched_in(struct kvm_vcpu *vcpu, int cpu)
{
}
+static inline void avic_post_state_restore(struct kvm_vcpu *vcpu)
+{
+ if (avic_handle_apic_id_update(vcpu) != 0)
+ return;
+ if (avic_handle_dfr_update(vcpu) != 0)
+ return;
+ avic_handle_ldr_update(vcpu);
+}
+
static struct kvm_x86_ops svm_x86_ops = {
.cpu_has_kvm_support = has_svm,
.disabled_by_bios = is_disabled,
@@ -4322,9 +4965,14 @@ static struct kvm_x86_ops svm_x86_ops = {
.vcpu_free = svm_free_vcpu,
.vcpu_reset = svm_vcpu_reset,
+ .vm_init = avic_vm_init,
+ .vm_destroy = avic_vm_destroy,
+
.prepare_guest_switch = svm_prepare_guest_switch,
.vcpu_load = svm_vcpu_load,
.vcpu_put = svm_vcpu_put,
+ .vcpu_blocking = svm_vcpu_blocking,
+ .vcpu_unblocking = svm_vcpu_unblocking,
.update_bp_intercept = update_bp_intercept,
.get_msr = svm_get_msr,
@@ -4382,6 +5030,9 @@ static struct kvm_x86_ops svm_x86_ops = {
.refresh_apicv_exec_ctrl = svm_refresh_apicv_exec_ctrl,
.load_eoi_exitmap = svm_load_eoi_exitmap,
.sync_pir_to_irr = svm_sync_pir_to_irr,
+ .hwapic_irr_update = svm_hwapic_irr_update,
+ .hwapic_isr_update = svm_hwapic_isr_update,
+ .apicv_post_state_restore = avic_post_state_restore,
.set_tss_addr = svm_set_tss_addr,
.get_tdp_level = get_npt_level,
@@ -4415,6 +5066,7 @@ static struct kvm_x86_ops svm_x86_ops = {
.sched_in = svm_sched_in,
.pmu_ops = &amd_pmu_ops,
+ .deliver_posted_interrupt = svm_deliver_avic_intr,
};
static int __init svm_init(void)
diff --git a/arch/x86/kvm/trace.h b/arch/x86/kvm/trace.h
index 2f1ea2f61e1f..8de925031b5c 100644
--- a/arch/x86/kvm/trace.h
+++ b/arch/x86/kvm/trace.h
@@ -809,8 +809,7 @@ TRACE_EVENT(kvm_write_tsc_offset,
#define host_clocks \
{VCLOCK_NONE, "none"}, \
- {VCLOCK_TSC, "tsc"}, \
- {VCLOCK_HPET, "hpet"} \
+ {VCLOCK_TSC, "tsc"} \
TRACE_EVENT(kvm_update_master_clock,
TP_PROTO(bool use_master_clock, unsigned int host_clock, bool offset_matched),
@@ -1292,6 +1291,63 @@ TRACE_EVENT(kvm_hv_stimer_cleanup,
__entry->vcpu_id, __entry->timer_index)
);
+/*
+ * Tracepoint for AMD AVIC
+ */
+TRACE_EVENT(kvm_avic_incomplete_ipi,
+ TP_PROTO(u32 vcpu, u32 icrh, u32 icrl, u32 id, u32 index),
+ TP_ARGS(vcpu, icrh, icrl, id, index),
+
+ TP_STRUCT__entry(
+ __field(u32, vcpu)
+ __field(u32, icrh)
+ __field(u32, icrl)
+ __field(u32, id)
+ __field(u32, index)
+ ),
+
+ TP_fast_assign(
+ __entry->vcpu = vcpu;
+ __entry->icrh = icrh;
+ __entry->icrl = icrl;
+ __entry->id = id;
+ __entry->index = index;
+ ),
+
+ TP_printk("vcpu=%u, icrh:icrl=%#010x:%08x, id=%u, index=%u\n",
+ __entry->vcpu, __entry->icrh, __entry->icrl,
+ __entry->id, __entry->index)
+);
+
+TRACE_EVENT(kvm_avic_unaccelerated_access,
+ TP_PROTO(u32 vcpu, u32 offset, bool ft, bool rw, u32 vec),
+ TP_ARGS(vcpu, offset, ft, rw, vec),
+
+ TP_STRUCT__entry(
+ __field(u32, vcpu)
+ __field(u32, offset)
+ __field(bool, ft)
+ __field(bool, rw)
+ __field(u32, vec)
+ ),
+
+ TP_fast_assign(
+ __entry->vcpu = vcpu;
+ __entry->offset = offset;
+ __entry->ft = ft;
+ __entry->rw = rw;
+ __entry->vec = vec;
+ ),
+
+ TP_printk("vcpu=%u, offset=%#x(%s), %s, %s, vec=%#x\n",
+ __entry->vcpu,
+ __entry->offset,
+ __print_symbolic(__entry->offset, kvm_trace_symbol_apic),
+ __entry->ft ? "trap" : "fault",
+ __entry->rw ? "write" : "read",
+ __entry->vec)
+);
+
#endif /* _TRACE_KVM_H */
#undef TRACE_INCLUDE_PATH
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 133679d520af..fb93010beaa4 100644
--- a/arch/x86/kvm/vmx.c
+++ b/arch/x86/kvm/vmx.c
@@ -2418,7 +2418,9 @@ static void vmx_set_msr_bitmap(struct kvm_vcpu *vcpu)
if (is_guest_mode(vcpu))
msr_bitmap = vmx_msr_bitmap_nested;
- else if (vcpu->arch.apic_base & X2APIC_ENABLE) {
+ else if (cpu_has_secondary_exec_ctrls() &&
+ (vmcs_read32(SECONDARY_VM_EXEC_CONTROL) &
+ SECONDARY_EXEC_VIRTUALIZE_X2APIC_MODE)) {
if (is_long_mode(vcpu))
msr_bitmap = vmx_msr_bitmap_longmode_x2apic;
else
@@ -3390,7 +3392,7 @@ static __init int setup_vmcs_config(struct vmcs_config *vmcs_conf)
}
}
- if (cpu_has_xsaves)
+ if (boot_cpu_has(X86_FEATURE_XSAVES))
rdmsrl(MSR_IA32_XSS, host_xss);
return 0;
@@ -4787,6 +4789,19 @@ static void vmx_refresh_apicv_exec_ctrl(struct kvm_vcpu *vcpu)
struct vcpu_vmx *vmx = to_vmx(vcpu);
vmcs_write32(PIN_BASED_VM_EXEC_CONTROL, vmx_pin_based_exec_ctrl(vmx));
+ if (cpu_has_secondary_exec_ctrls()) {
+ if (kvm_vcpu_apicv_active(vcpu))
+ vmcs_set_bits(SECONDARY_VM_EXEC_CONTROL,
+ SECONDARY_EXEC_APIC_REGISTER_VIRT |
+ SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY);
+ else
+ vmcs_clear_bits(SECONDARY_VM_EXEC_CONTROL,
+ SECONDARY_EXEC_APIC_REGISTER_VIRT |
+ SECONDARY_EXEC_VIRTUAL_INTR_DELIVERY);
+ }
+
+ if (cpu_has_vmx_msr_bitmap())
+ vmx_set_msr_bitmap(vcpu);
}
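
A note on the helpers used in this hunk: vmcs_set_bits()/vmcs_clear_bits() are, roughly, plain read-modify-write wrappers over a 32-bit VMCS field, so the code above toggles only the two APICv execution controls and leaves the rest of SECONDARY_VM_EXEC_CONTROL untouched. Sketch of the assumed semantics:

	/* Assumed semantics of the VMCS bit helpers (illustration only). */
	static void vmcs_set_bits_sketch(unsigned long field, u32 mask)
	{
		vmcs_write32(field, vmcs_read32(field) | mask);
	}

	static void vmcs_clear_bits_sketch(unsigned long field, u32 mask)
	{
		vmcs_write32(field, vmcs_read32(field) & ~mask);
	}
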
static u32 vmx_exec_control(struct vcpu_vmx *vmx)
@@ -5050,8 +5065,8 @@ static void vmx_vcpu_reset(struct kvm_vcpu *vcpu, bool init_event)
vmcs_write16(VIRTUAL_PROCESSOR_ID, vmx->vpid);
cr0 = X86_CR0_NW | X86_CR0_CD | X86_CR0_ET;
- vmx_set_cr0(vcpu, cr0); /* enter rmode */
vmx->vcpu.arch.cr0 = cr0;
+ vmx_set_cr0(vcpu, cr0); /* enter rmode */
vmx_set_cr4(vcpu, 0);
vmx_set_efer(vcpu, 0);
vmx_fpu_activate(vcpu);
@@ -6333,23 +6348,20 @@ static __init int hardware_setup(void)
set_bit(0, vmx_vpid_bitmap); /* 0 is reserved for host */
- if (enable_apicv) {
- for (msr = 0x800; msr <= 0x8ff; msr++)
- vmx_disable_intercept_msr_read_x2apic(msr);
-
- /* According SDM, in x2apic mode, the whole id reg is used.
- * But in KVM, it only use the highest eight bits. Need to
- * intercept it */
- vmx_enable_intercept_msr_read_x2apic(0x802);
- /* TMCCT */
- vmx_enable_intercept_msr_read_x2apic(0x839);
- /* TPR */
- vmx_disable_intercept_msr_write_x2apic(0x808);
- /* EOI */
- vmx_disable_intercept_msr_write_x2apic(0x80b);
- /* SELF-IPI */
- vmx_disable_intercept_msr_write_x2apic(0x83f);
- }
+ for (msr = 0x800; msr <= 0x8ff; msr++)
+ vmx_disable_intercept_msr_read_x2apic(msr);
+
+ /* According to the SDM, in x2apic mode the whole id reg is used. But in
+ * KVM, only the highest eight bits are used. Need to intercept it */
+ vmx_enable_intercept_msr_read_x2apic(0x802);
+ /* TMCCT */
+ vmx_enable_intercept_msr_read_x2apic(0x839);
+ /* TPR */
+ vmx_disable_intercept_msr_write_x2apic(0x808);
+ /* EOI */
+ vmx_disable_intercept_msr_write_x2apic(0x80b);
+ /* SELF-IPI */
+ vmx_disable_intercept_msr_write_x2apic(0x83f);
if (enable_ept) {
kvm_mmu_set_mask_ptes(0ull,
@@ -8318,19 +8330,19 @@ static void vmx_set_apic_access_page_addr(struct kvm_vcpu *vcpu, hpa_t hpa)
vmcs_write64(APIC_ACCESS_ADDR, hpa);
}
-static void vmx_hwapic_isr_update(struct kvm *kvm, int isr)
+static void vmx_hwapic_isr_update(struct kvm_vcpu *vcpu, int max_isr)
{
u16 status;
u8 old;
- if (isr == -1)
- isr = 0;
+ if (max_isr == -1)
+ max_isr = 0;
status = vmcs_read16(GUEST_INTR_STATUS);
old = status >> 8;
- if (isr != old) {
+ if (max_isr != old) {
status &= 0xff;
- status |= isr << 8;
+ status |= max_isr << 8;
vmcs_write16(GUEST_INTR_STATUS, status);
}
}
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 9b7798c7b210..c805cf494154 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -161,6 +161,7 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
{ "halt_exits", VCPU_STAT(halt_exits) },
{ "halt_successful_poll", VCPU_STAT(halt_successful_poll) },
{ "halt_attempted_poll", VCPU_STAT(halt_attempted_poll) },
+ { "halt_poll_invalid", VCPU_STAT(halt_poll_invalid) },
{ "halt_wakeup", VCPU_STAT(halt_wakeup) },
{ "hypercalls", VCPU_STAT(hypercalls) },
{ "request_irq", VCPU_STAT(request_irq_exits) },
@@ -2002,22 +2003,8 @@ static void kvmclock_reset(struct kvm_vcpu *vcpu)
vcpu->arch.pv_time_enabled = false;
}
-static void accumulate_steal_time(struct kvm_vcpu *vcpu)
-{
- u64 delta;
-
- if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
- return;
-
- delta = current->sched_info.run_delay - vcpu->arch.st.last_steal;
- vcpu->arch.st.last_steal = current->sched_info.run_delay;
- vcpu->arch.st.accum_steal = delta;
-}
-
static void record_steal_time(struct kvm_vcpu *vcpu)
{
- accumulate_steal_time(vcpu);
-
if (!(vcpu->arch.st.msr_val & KVM_MSR_ENABLED))
return;
@@ -2025,9 +2012,26 @@ static void record_steal_time(struct kvm_vcpu *vcpu)
&vcpu->arch.st.steal, sizeof(struct kvm_steal_time))))
return;
- vcpu->arch.st.steal.steal += vcpu->arch.st.accum_steal;
- vcpu->arch.st.steal.version += 2;
- vcpu->arch.st.accum_steal = 0;
+ if (vcpu->arch.st.steal.version & 1)
+ vcpu->arch.st.steal.version += 1; /* first time write, random junk */
+
+ vcpu->arch.st.steal.version += 1;
+
+ kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
+ &vcpu->arch.st.steal, sizeof(struct kvm_steal_time));
+
+ smp_wmb();
+
+ vcpu->arch.st.steal.steal += current->sched_info.run_delay -
+ vcpu->arch.st.last_steal;
+ vcpu->arch.st.last_steal = current->sched_info.run_delay;
+
+ kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
+ &vcpu->arch.st.steal, sizeof(struct kvm_steal_time));
+
+ smp_wmb();
+
+ vcpu->arch.st.steal.version += 1;
kvm_write_guest_cached(vcpu->kvm, &vcpu->arch.st.stime,
&vcpu->arch.st.steal, sizeof(struct kvm_steal_time));
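
The two version bumps with smp_wmb() in between turn 'version' into a seqcount: an odd value means an update is in flight. A guest-side reader would pair with it roughly as below (a sketch; the struct only mirrors the two fields that matter here, the real layout being struct kvm_steal_time in the kvm_para ABI headers).

	struct steal_time_sketch {
		volatile unsigned long long steal;
		volatile unsigned int version;
	};

	static unsigned long long read_steal_time(const struct steal_time_sketch *st)
	{
		unsigned int v;
		unsigned long long steal;

		do {
			v = st->version;
			/* read barrier here, pairing with the writer's first smp_wmb() */
			steal = st->steal;
			/* read barrier here, pairing with the writer's second smp_wmb() */
		} while ((v & 1) || v != st->version);

		return steal;
	}
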
@@ -2611,7 +2615,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
r = KVM_MAX_MCE_BANKS;
break;
case KVM_CAP_XCRS:
- r = cpu_has_xsave;
+ r = boot_cpu_has(X86_FEATURE_XSAVE);
break;
case KVM_CAP_TSC_CONTROL:
r = kvm_has_tsc_control;
@@ -3094,7 +3098,7 @@ static void load_xsave(struct kvm_vcpu *vcpu, u8 *src)
/* Set XSTATE_BV and possibly XCOMP_BV. */
xsave->header.xfeatures = xstate_bv;
- if (cpu_has_xsaves)
+ if (boot_cpu_has(X86_FEATURE_XSAVES))
xsave->header.xcomp_bv = host_xcr0 | XSTATE_COMPACTION_ENABLED;
/*
@@ -3121,7 +3125,7 @@ static void load_xsave(struct kvm_vcpu *vcpu, u8 *src)
static void kvm_vcpu_ioctl_x86_get_xsave(struct kvm_vcpu *vcpu,
struct kvm_xsave *guest_xsave)
{
- if (cpu_has_xsave) {
+ if (boot_cpu_has(X86_FEATURE_XSAVE)) {
memset(guest_xsave, 0, sizeof(struct kvm_xsave));
fill_xsave((u8 *) guest_xsave->region, vcpu);
} else {
@@ -3139,7 +3143,7 @@ static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
u64 xstate_bv =
*(u64 *)&guest_xsave->region[XSAVE_HDR_OFFSET / sizeof(u32)];
- if (cpu_has_xsave) {
+ if (boot_cpu_has(X86_FEATURE_XSAVE)) {
/*
* Here we allow setting states that are not present in
* CPUID leaf 0xD, index 0, EDX:EAX. This is for compatibility
@@ -3160,7 +3164,7 @@ static int kvm_vcpu_ioctl_x86_set_xsave(struct kvm_vcpu *vcpu,
static void kvm_vcpu_ioctl_x86_get_xcrs(struct kvm_vcpu *vcpu,
struct kvm_xcrs *guest_xcrs)
{
- if (!cpu_has_xsave) {
+ if (!boot_cpu_has(X86_FEATURE_XSAVE)) {
guest_xcrs->nr_xcrs = 0;
return;
}
@@ -3176,7 +3180,7 @@ static int kvm_vcpu_ioctl_x86_set_xcrs(struct kvm_vcpu *vcpu,
{
int i, r = 0;
- if (!cpu_has_xsave)
+ if (!boot_cpu_has(X86_FEATURE_XSAVE))
return -EINVAL;
if (guest_xcrs->nr_xcrs > KVM_MAX_XCRS || guest_xcrs->flags)
@@ -5865,7 +5869,7 @@ int kvm_arch_init(void *opaque)
perf_register_guest_info_callbacks(&kvm_guest_cbs);
- if (cpu_has_xsave)
+ if (boot_cpu_has(X86_FEATURE_XSAVE))
host_xcr0 = xgetbv(XCR_XFEATURE_ENABLED_MASK);
kvm_lapic_init();
@@ -7293,7 +7297,7 @@ int kvm_arch_vcpu_ioctl_set_fpu(struct kvm_vcpu *vcpu, struct kvm_fpu *fpu)
static void fx_init(struct kvm_vcpu *vcpu)
{
fpstate_init(&vcpu->arch.guest_fpu.state);
- if (cpu_has_xsaves)
+ if (boot_cpu_has(X86_FEATURE_XSAVES))
vcpu->arch.guest_fpu.state.xsave.header.xcomp_bv =
host_xcr0 | XSTATE_COMPACTION_ENABLED;
@@ -7752,6 +7756,9 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
kvm_page_track_init(kvm);
kvm_mmu_init_vm(kvm);
+ if (kvm_x86_ops->vm_init)
+ return kvm_x86_ops->vm_init(kvm);
+
return 0;
}
@@ -7873,6 +7880,8 @@ void kvm_arch_destroy_vm(struct kvm *kvm)
x86_set_memory_region(kvm, IDENTITY_PAGETABLE_PRIVATE_MEMSLOT, 0, 0);
x86_set_memory_region(kvm, TSS_PRIVATE_MEMSLOT, 0, 0);
}
+ if (kvm_x86_ops->vm_destroy)
+ kvm_x86_ops->vm_destroy(kvm);
kvm_iommu_unmap_guest(kvm);
kfree(kvm->arch.vpic);
kfree(kvm->arch.vioapic);
@@ -8355,19 +8364,21 @@ bool kvm_arch_has_noncoherent_dma(struct kvm *kvm)
}
EXPORT_SYMBOL_GPL(kvm_arch_has_noncoherent_dma);
+bool kvm_arch_has_irq_bypass(void)
+{
+ return kvm_x86_ops->update_pi_irte != NULL;
+}
+
int kvm_arch_irq_bypass_add_producer(struct irq_bypass_consumer *cons,
struct irq_bypass_producer *prod)
{
struct kvm_kernel_irqfd *irqfd =
container_of(cons, struct kvm_kernel_irqfd, consumer);
- if (kvm_x86_ops->update_pi_irte) {
- irqfd->producer = prod;
- return kvm_x86_ops->update_pi_irte(irqfd->kvm,
- prod->irq, irqfd->gsi, 1);
- }
+ irqfd->producer = prod;
- return -EINVAL;
+ return kvm_x86_ops->update_pi_irte(irqfd->kvm,
+ prod->irq, irqfd->gsi, 1);
}
void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
@@ -8377,11 +8388,6 @@ void kvm_arch_irq_bypass_del_producer(struct irq_bypass_consumer *cons,
struct kvm_kernel_irqfd *irqfd =
container_of(cons, struct kvm_kernel_irqfd, consumer);
- if (!kvm_x86_ops->update_pi_irte) {
- WARN_ON(irqfd->producer != NULL);
- return;
- }
-
WARN_ON(irqfd->producer != prod);
irqfd->producer = NULL;
@@ -8429,3 +8435,5 @@ EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_write_tsc_offset);
EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_ple_window);
EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_pml_full);
EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_pi_irte_update);
+EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_unaccelerated_access);
+EXPORT_TRACEPOINT_SYMBOL_GPL(kvm_avic_incomplete_ipi);
diff --git a/arch/x86/lguest/boot.c b/arch/x86/lguest/boot.c
index fd57d3ae7e16..3847e736702e 100644
--- a/arch/x86/lguest/boot.c
+++ b/arch/x86/lguest/boot.c
@@ -1408,13 +1408,10 @@ __init void lguest_init(void)
{
/* We're under lguest. */
pv_info.name = "lguest";
- /* Paravirt is enabled. */
- pv_info.paravirt_enabled = 1;
/* We're running at privilege level 1, not 0 as normal. */
pv_info.kernel_rpl = 1;
/* Everyone except Xen runs with this set. */
pv_info.shared_kernel_pmd = 1;
- pv_info.features = 0;
/*
* We set up all the lguest overrides for sensitive operations. These
diff --git a/arch/x86/lib/rwsem.S b/arch/x86/lib/rwsem.S
index be110efa0096..bf2c6074efd2 100644
--- a/arch/x86/lib/rwsem.S
+++ b/arch/x86/lib/rwsem.S
@@ -29,8 +29,10 @@
* there is contention on the semaphore.
*
* %eax contains the semaphore pointer on entry. Save the C-clobbered
- * registers (%eax, %edx and %ecx) except %eax whish is either a return
- * value or just clobbered..
+ * registers (%eax, %edx and %ecx) except %eax which is either a return
+ * value or just gets clobbered. Same is true for %edx so make sure GCC
+ * reloads it after the slow path, by making it hold a temporary, for
+ * example see ____down_write().
*/
#define save_common_regs \
@@ -106,6 +108,16 @@ ENTRY(call_rwsem_down_write_failed)
ret
ENDPROC(call_rwsem_down_write_failed)
+ENTRY(call_rwsem_down_write_failed_killable)
+ FRAME_BEGIN
+ save_common_regs
+ movq %rax,%rdi
+ call rwsem_down_write_failed_killable
+ restore_common_regs
+ FRAME_END
+ ret
+ENDPROC(call_rwsem_down_write_failed_killable)
+
ENTRY(call_rwsem_wake)
FRAME_BEGIN
/* do nothing if still outstanding active readers */
diff --git a/arch/x86/lib/usercopy_32.c b/arch/x86/lib/usercopy_32.c
index 91d93b95bd86..b559d9238781 100644
--- a/arch/x86/lib/usercopy_32.c
+++ b/arch/x86/lib/usercopy_32.c
@@ -612,7 +612,7 @@ unsigned long __copy_from_user_ll_nocache(void *to, const void __user *from,
{
stac();
#ifdef CONFIG_X86_INTEL_USERCOPY
- if (n > 64 && cpu_has_xmm2)
+ if (n > 64 && static_cpu_has(X86_FEATURE_XMM2))
n = __copy_user_zeroing_intel_nocache(to, from, n);
else
__copy_user_zeroing(to, from, n);
@@ -629,7 +629,7 @@ unsigned long __copy_from_user_ll_nocache_nozero(void *to, const void __user *fr
{
stac();
#ifdef CONFIG_X86_INTEL_USERCOPY
- if (n > 64 && cpu_has_xmm2)
+ if (n > 64 && static_cpu_has(X86_FEATURE_XMM2))
n = __copy_user_intel_nocache(to, from, n);
else
__copy_user(to, from, n);
diff --git a/arch/x86/mm/Makefile b/arch/x86/mm/Makefile
index f98913258c63..62c0043a5fd5 100644
--- a/arch/x86/mm/Makefile
+++ b/arch/x86/mm/Makefile
@@ -2,7 +2,7 @@
KCOV_INSTRUMENT_tlb.o := n
obj-y := init.o init_$(BITS).o fault.o ioremap.o extable.o pageattr.o mmap.o \
- pat.o pgtable.o physaddr.o gup.o setup_nx.o
+ pat.o pgtable.o physaddr.o gup.o setup_nx.o tlb.o
# Make sure __phys_addr has no stackprotector
nostackp := $(call cc-option, -fno-stack-protector)
@@ -12,7 +12,6 @@ CFLAGS_setup_nx.o := $(nostackp)
CFLAGS_fault.o := -I$(src)/../include/asm/trace
obj-$(CONFIG_X86_PAT) += pat_rbtree.o
-obj-$(CONFIG_SMP) += tlb.o
obj-$(CONFIG_X86_32) += pgtable_32.o iomap_32.o
diff --git a/arch/x86/mm/extable.c b/arch/x86/mm/extable.c
index 82447b3fba38..4bb53b89f3c5 100644
--- a/arch/x86/mm/extable.c
+++ b/arch/x86/mm/extable.c
@@ -1,5 +1,6 @@
#include <linux/module.h>
#include <asm/uaccess.h>
+#include <asm/traps.h>
typedef bool (*ex_handler_t)(const struct exception_table_entry *,
struct pt_regs *, int);
@@ -42,6 +43,43 @@ bool ex_handler_ext(const struct exception_table_entry *fixup,
}
EXPORT_SYMBOL(ex_handler_ext);
+bool ex_handler_rdmsr_unsafe(const struct exception_table_entry *fixup,
+ struct pt_regs *regs, int trapnr)
+{
+ WARN_ONCE(1, "unchecked MSR access error: RDMSR from 0x%x\n",
+ (unsigned int)regs->cx);
+
+ /* Pretend that the read succeeded and returned 0. */
+ regs->ip = ex_fixup_addr(fixup);
+ regs->ax = 0;
+ regs->dx = 0;
+ return true;
+}
+EXPORT_SYMBOL(ex_handler_rdmsr_unsafe);
+
+bool ex_handler_wrmsr_unsafe(const struct exception_table_entry *fixup,
+ struct pt_regs *regs, int trapnr)
+{
+ WARN_ONCE(1, "unchecked MSR access error: WRMSR to 0x%x (tried to write 0x%08x%08x)\n",
+ (unsigned int)regs->cx,
+ (unsigned int)regs->dx, (unsigned int)regs->ax);
+
+ /* Pretend that the write succeeded. */
+ regs->ip = ex_fixup_addr(fixup);
+ return true;
+}
+EXPORT_SYMBOL(ex_handler_wrmsr_unsafe);
+
+bool ex_handler_clear_fs(const struct exception_table_entry *fixup,
+ struct pt_regs *regs, int trapnr)
+{
+ if (static_cpu_has(X86_BUG_NULL_SEG))
+ asm volatile ("mov %0, %%fs" : : "rm" (__USER_DS));
+ asm volatile ("mov %0, %%fs" : : "rm" (0));
+ return ex_handler_default(fixup, regs, trapnr);
+}
+EXPORT_SYMBOL(ex_handler_clear_fs);
+
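
How one of these handlers gets attached to a faulting instruction is not shown in this hunk; the sketch below assumes the _ASM_EXTABLE_HANDLE() macro from <asm/asm.h>, which records a custom fixup handler in the exception table entry. With ex_handler_rdmsr_unsafe() attached, a faulting RDMSR warns once and the wrapper returns 0.

	/* Sketch only: assumes _ASM_EXTABLE_HANDLE() from <asm/asm.h>. */
	static inline unsigned long long rdmsr_zero_on_fault(unsigned int msr)
	{
		unsigned int low, high;

		asm volatile("1: rdmsr\n"
			     "2:\n"
			     _ASM_EXTABLE_HANDLE(1b, 2b, ex_handler_rdmsr_unsafe)
			     : "=a" (low), "=d" (high)
			     : "c" (msr));

		return low | ((unsigned long long)high << 32);
	}
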
bool ex_has_fault_handler(unsigned long ip)
{
const struct exception_table_entry *e;
@@ -82,24 +120,46 @@ int fixup_exception(struct pt_regs *regs, int trapnr)
return handler(e, regs, trapnr);
}
+extern unsigned int early_recursion_flag;
+
/* Restricted version used during very early boot */
-int __init early_fixup_exception(unsigned long *ip)
+void __init early_fixup_exception(struct pt_regs *regs, int trapnr)
{
- const struct exception_table_entry *e;
- unsigned long new_ip;
- ex_handler_t handler;
-
- e = search_exception_tables(*ip);
- if (!e)
- return 0;
-
- new_ip = ex_fixup_addr(e);
- handler = ex_fixup_handler(e);
-
- /* special handling not supported during early boot */
- if (handler != ex_handler_default)
- return 0;
-
- *ip = new_ip;
- return 1;
+ /* Ignore early NMIs. */
+ if (trapnr == X86_TRAP_NMI)
+ return;
+
+ if (early_recursion_flag > 2)
+ goto halt_loop;
+
+ if (regs->cs != __KERNEL_CS)
+ goto fail;
+
+ /*
+ * The full exception fixup machinery is available as soon as
+ * the early IDT is loaded. This means that it is the
+ * responsibility of extable users to either function correctly
+ * when handlers are invoked early or to simply avoid causing
+ * exceptions before they're ready to handle them.
+ *
+ * This is better than filtering which handlers can be used,
+ * because refusing to call a handler here is guaranteed to
+ * result in a hard-to-debug panic.
+ *
+ * Keep in mind that not all vectors actually get here. Early
+ * page faults, for example, are special.
+ */
+ if (fixup_exception(regs, trapnr))
+ return;
+
+fail:
+ early_printk("PANIC: early exception 0x%02x IP %lx:%lx error %lx cr2 0x%lx\n",
+ (unsigned)trapnr, (unsigned long)regs->cs, regs->ip,
+ regs->orig_ax, read_cr2());
+
+ show_regs(regs);
+
+halt_loop:
+ while (true)
+ halt();
}
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 5ce1ed02f7e8..7d1fa7cd2374 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -292,7 +292,7 @@ void vmalloc_sync_all(void)
return;
for (address = VMALLOC_START & PMD_MASK;
- address >= TASK_SIZE && address < FIXADDR_TOP;
+ address >= TASK_SIZE_MAX && address < FIXADDR_TOP;
address += PMD_SIZE) {
struct page *page;
@@ -854,8 +854,13 @@ __bad_area_nosemaphore(struct pt_regs *regs, unsigned long error_code,
return;
}
#endif
- /* Kernel addresses are always protection faults: */
- if (address >= TASK_SIZE)
+
+ /*
+ * To avoid leaking information about the kernel page table
+ * layout, pretend that user-mode accesses to kernel addresses
+ * are always protection faults.
+ */
+ if (address >= TASK_SIZE_MAX)
error_code |= PF_PROT;
if (likely(show_unhandled_signals))
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index 740d7ac03a55..2ae8584b44c7 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -162,9 +162,10 @@ static __init int setup_hugepagesz(char *opt)
unsigned long ps = memparse(opt, &opt);
if (ps == PMD_SIZE) {
hugetlb_add_hstate(PMD_SHIFT - PAGE_SHIFT);
- } else if (ps == PUD_SIZE && cpu_has_gbpages) {
+ } else if (ps == PUD_SIZE && boot_cpu_has(X86_FEATURE_GBPAGES)) {
hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
} else {
+ hugetlb_bad_size();
printk(KERN_ERR "hugepagesz: Unsupported page size %lu M\n",
ps >> 20);
return 0;
@@ -177,7 +178,7 @@ __setup("hugepagesz=", setup_hugepagesz);
static __init int gigantic_pages_init(void)
{
/* With compaction or CMA we can allocate gigantic pages at runtime */
- if (cpu_has_gbpages && !size_to_hstate(1UL << PUD_SHIFT))
+ if (boot_cpu_has(X86_FEATURE_GBPAGES) && !size_to_hstate(1UL << PUD_SHIFT))
hugetlb_add_hstate(PUD_SHIFT - PAGE_SHIFT);
return 0;
}
diff --git a/arch/x86/mm/ident_map.c b/arch/x86/mm/ident_map.c
new file mode 100644
index 000000000000..ec21796ac5fd
--- /dev/null
+++ b/arch/x86/mm/ident_map.c
@@ -0,0 +1,79 @@
+/*
+ * Helper routines for building identity mapping page tables. This is
+ * included by both the compressed kernel and the regular kernel.
+ */
+
+static void ident_pmd_init(unsigned long pmd_flag, pmd_t *pmd_page,
+ unsigned long addr, unsigned long end)
+{
+ addr &= PMD_MASK;
+ for (; addr < end; addr += PMD_SIZE) {
+ pmd_t *pmd = pmd_page + pmd_index(addr);
+
+ if (!pmd_present(*pmd))
+ set_pmd(pmd, __pmd(addr | pmd_flag));
+ }
+}
+
+static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
+ unsigned long addr, unsigned long end)
+{
+ unsigned long next;
+
+ for (; addr < end; addr = next) {
+ pud_t *pud = pud_page + pud_index(addr);
+ pmd_t *pmd;
+
+ next = (addr & PUD_MASK) + PUD_SIZE;
+ if (next > end)
+ next = end;
+
+ if (pud_present(*pud)) {
+ pmd = pmd_offset(pud, 0);
+ ident_pmd_init(info->pmd_flag, pmd, addr, next);
+ continue;
+ }
+ pmd = (pmd_t *)info->alloc_pgt_page(info->context);
+ if (!pmd)
+ return -ENOMEM;
+ ident_pmd_init(info->pmd_flag, pmd, addr, next);
+ set_pud(pud, __pud(__pa(pmd) | _KERNPG_TABLE));
+ }
+
+ return 0;
+}
+
+int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
+ unsigned long addr, unsigned long end)
+{
+ unsigned long next;
+ int result;
+ int off = info->kernel_mapping ? pgd_index(__PAGE_OFFSET) : 0;
+
+ for (; addr < end; addr = next) {
+ pgd_t *pgd = pgd_page + pgd_index(addr) + off;
+ pud_t *pud;
+
+ next = (addr & PGDIR_MASK) + PGDIR_SIZE;
+ if (next > end)
+ next = end;
+
+ if (pgd_present(*pgd)) {
+ pud = pud_offset(pgd, 0);
+ result = ident_pud_init(info, pud, addr, next);
+ if (result)
+ return result;
+ continue;
+ }
+
+ pud = (pud_t *)info->alloc_pgt_page(info->context);
+ if (!pud)
+ return -ENOMEM;
+ result = ident_pud_init(info, pud, addr, next);
+ if (result)
+ return result;
+ set_pgd(pgd, __pgd(__pa(pud) | _KERNPG_TABLE));
+ }
+
+ return 0;
+}
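
A hypothetical caller sketch for the shared helper: both users (the compressed kernel's pagetable builder and kexec) fill a struct x86_mapping_info with an allocator callback and the PMD flags, then pass the target PGD and range. The allocator and its pool below are stand-ins; the field names follow the struct as used above.

	/* Hypothetical page allocator pulling zeroed pages from a caller pool. */
	static void *alloc_pgt_page_sketch(void *context)
	{
		return pgt_pool_get_zeroed_page(context);	/* stand-in helper */
	}

	static int build_ident_map_sketch(pgd_t *pgd, unsigned long start,
					  unsigned long end, void *pool)
	{
		struct x86_mapping_info info = {
			.alloc_pgt_page	= alloc_pgt_page_sketch,
			.context	= pool,
			.pmd_flag	= __PAGE_KERNEL_LARGE_EXEC,
		};

		return kernel_ident_mapping_init(&info, pgd, start, end);
	}
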
diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 9d56f271d519..372aad2b3291 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -157,23 +157,23 @@ static void __init probe_page_size_mask(void)
* This will simplify cpa(), which otherwise needs to support splitting
* large pages into small in interrupt context, etc.
*/
- if (cpu_has_pse && !debug_pagealloc_enabled())
+ if (boot_cpu_has(X86_FEATURE_PSE) && !debug_pagealloc_enabled())
page_size_mask |= 1 << PG_LEVEL_2M;
#endif
/* Enable PSE if available */
- if (cpu_has_pse)
+ if (boot_cpu_has(X86_FEATURE_PSE))
cr4_set_bits_and_update_boot(X86_CR4_PSE);
/* Enable PGE if available */
- if (cpu_has_pge) {
+ if (boot_cpu_has(X86_FEATURE_PGE)) {
cr4_set_bits_and_update_boot(X86_CR4_PGE);
__supported_pte_mask |= _PAGE_GLOBAL;
} else
__supported_pte_mask &= ~_PAGE_GLOBAL;
/* Enable 1 GB linear kernel mappings if available: */
- if (direct_gbpages && cpu_has_gbpages) {
+ if (direct_gbpages && boot_cpu_has(X86_FEATURE_GBPAGES)) {
printk(KERN_INFO "Using GB pages for direct mapping\n");
page_size_mask |= 1 << PG_LEVEL_1G;
} else {
diff --git a/arch/x86/mm/init_32.c b/arch/x86/mm/init_32.c
index bd7a9b9e2e14..84df150ee77e 100644
--- a/arch/x86/mm/init_32.c
+++ b/arch/x86/mm/init_32.c
@@ -284,7 +284,7 @@ kernel_physical_mapping_init(unsigned long start,
*/
mapping_iter = 1;
- if (!cpu_has_pse)
+ if (!boot_cpu_has(X86_FEATURE_PSE))
use_pse = 0;
repeat:
@@ -804,9 +804,6 @@ void __init mem_init(void)
BUILD_BUG_ON(VMALLOC_START >= VMALLOC_END);
#undef high_memory
#undef __FIXADDR_TOP
-#ifdef CONFIG_RANDOMIZE_BASE
- BUILD_BUG_ON(CONFIG_RANDOMIZE_BASE_MAX_OFFSET > KERNEL_IMAGE_SIZE);
-#endif
#ifdef CONFIG_HIGHMEM
BUG_ON(PKMAP_BASE + LAST_PKMAP*PAGE_SIZE > FIXADDR_START);
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 214afda97911..bce2e5d9edd4 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -58,79 +58,7 @@
#include "mm_internal.h"
-static void ident_pmd_init(unsigned long pmd_flag, pmd_t *pmd_page,
- unsigned long addr, unsigned long end)
-{
- addr &= PMD_MASK;
- for (; addr < end; addr += PMD_SIZE) {
- pmd_t *pmd = pmd_page + pmd_index(addr);
-
- if (!pmd_present(*pmd))
- set_pmd(pmd, __pmd(addr | pmd_flag));
- }
-}
-static int ident_pud_init(struct x86_mapping_info *info, pud_t *pud_page,
- unsigned long addr, unsigned long end)
-{
- unsigned long next;
-
- for (; addr < end; addr = next) {
- pud_t *pud = pud_page + pud_index(addr);
- pmd_t *pmd;
-
- next = (addr & PUD_MASK) + PUD_SIZE;
- if (next > end)
- next = end;
-
- if (pud_present(*pud)) {
- pmd = pmd_offset(pud, 0);
- ident_pmd_init(info->pmd_flag, pmd, addr, next);
- continue;
- }
- pmd = (pmd_t *)info->alloc_pgt_page(info->context);
- if (!pmd)
- return -ENOMEM;
- ident_pmd_init(info->pmd_flag, pmd, addr, next);
- set_pud(pud, __pud(__pa(pmd) | _KERNPG_TABLE));
- }
-
- return 0;
-}
-
-int kernel_ident_mapping_init(struct x86_mapping_info *info, pgd_t *pgd_page,
- unsigned long addr, unsigned long end)
-{
- unsigned long next;
- int result;
- int off = info->kernel_mapping ? pgd_index(__PAGE_OFFSET) : 0;
-
- for (; addr < end; addr = next) {
- pgd_t *pgd = pgd_page + pgd_index(addr) + off;
- pud_t *pud;
-
- next = (addr & PGDIR_MASK) + PGDIR_SIZE;
- if (next > end)
- next = end;
-
- if (pgd_present(*pgd)) {
- pud = pud_offset(pgd, 0);
- result = ident_pud_init(info, pud, addr, next);
- if (result)
- return result;
- continue;
- }
-
- pud = (pud_t *)info->alloc_pgt_page(info->context);
- if (!pud)
- return -ENOMEM;
- result = ident_pud_init(info, pud, addr, next);
- if (result)
- return result;
- set_pgd(pgd, __pgd(__pa(pud) | _KERNPG_TABLE));
- }
-
- return 0;
-}
+#include "ident_map.c"
/*
* NOTE: pagetable_init alloc all the fixmap pagetables contiguous on the
@@ -1295,7 +1223,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node)
struct vmem_altmap *altmap = to_vmem_altmap(start);
int err;
- if (cpu_has_pse)
+ if (boot_cpu_has(X86_FEATURE_PSE))
err = vmemmap_populate_hugepages(start, end, node, altmap);
else if (altmap) {
pr_err_once("%s: no cpu support for altmap allocations\n",
@@ -1338,7 +1266,7 @@ void register_page_bootmem_memmap(unsigned long section_nr,
}
get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
- if (!cpu_has_pse) {
+ if (!boot_cpu_has(X86_FEATURE_PSE)) {
next = (addr + PAGE_SIZE) & PAGE_MASK;
pmd = pmd_offset(pud, addr);
if (pmd_none(*pmd))
diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
index 0d8d53d1f5cc..f0894910bdd7 100644
--- a/arch/x86/mm/ioremap.c
+++ b/arch/x86/mm/ioremap.c
@@ -378,7 +378,7 @@ EXPORT_SYMBOL(iounmap);
int __init arch_ioremap_pud_supported(void)
{
#ifdef CONFIG_X86_64
- return cpu_has_gbpages;
+ return boot_cpu_has(X86_FEATURE_GBPAGES);
#else
return 0;
#endif
@@ -386,7 +386,7 @@ int __init arch_ioremap_pud_supported(void)
int __init arch_ioremap_pmd_supported(void)
{
- return cpu_has_pse;
+ return boot_cpu_has(X86_FEATURE_PSE);
}
/*
diff --git a/arch/x86/mm/numa.c b/arch/x86/mm/numa.c
index f70c1ff46125..9c086c57105c 100644
--- a/arch/x86/mm/numa.c
+++ b/arch/x86/mm/numa.c
@@ -617,9 +617,7 @@ static void __init numa_init_array(void)
if (early_cpu_to_node(i) != NUMA_NO_NODE)
continue;
numa_set_node(i, rr);
- rr = next_node(rr, node_online_map);
- if (rr == MAX_NUMNODES)
- rr = first_node(node_online_map);
+ rr = next_node_in(rr, node_online_map);
}
}
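
next_node_in() is the wrap-around form of the two lines removed above, roughly equivalent to this sketch:

	/* Rough equivalent of next_node_in() (illustration only). */
	static int next_node_in_sketch(int n, const nodemask_t *mask)
	{
		int next = next_node(n, *mask);

		if (next == MAX_NUMNODES)
			next = first_node(*mask);
		return next;
	}
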
diff --git a/arch/x86/mm/pageattr.c b/arch/x86/mm/pageattr.c
index 01be9ec3bf79..7a1f7bbf4105 100644
--- a/arch/x86/mm/pageattr.c
+++ b/arch/x86/mm/pageattr.c
@@ -1055,7 +1055,7 @@ static int populate_pud(struct cpa_data *cpa, unsigned long start, pgd_t *pgd,
/*
* Map everything starting from the Gb boundary, possibly with 1G pages
*/
- while (cpu_has_gbpages && end - start >= PUD_SIZE) {
+ while (boot_cpu_has(X86_FEATURE_GBPAGES) && end - start >= PUD_SIZE) {
set_pud(pud, __pud(cpa->pfn << PAGE_SHIFT | _PAGE_PSE |
massage_pgprot(pud_pgprot)));
@@ -1125,8 +1125,14 @@ static int populate_pgd(struct cpa_data *cpa, unsigned long addr)
static int __cpa_process_fault(struct cpa_data *cpa, unsigned long vaddr,
int primary)
{
- if (cpa->pgd)
+ if (cpa->pgd) {
+ /*
+ * Right now, we only execute this code path when mapping
+ * the EFI virtual memory map regions, no other users
+ * provide a ->pgd value. This may change in the future.
+ */
return populate_pgd(cpa, vaddr);
+ }
/*
* Ignore all non primary paths.
@@ -1460,7 +1466,7 @@ static int change_page_attr_set_clr(unsigned long *addr, int numpages,
* error case we fall back to cpa_flush_all (which uses
* WBINVD):
*/
- if (!ret && cpu_has_clflush) {
+ if (!ret && boot_cpu_has(X86_FEATURE_CLFLUSH)) {
if (cpa.flags & (CPA_PAGES_ARRAY | CPA_ARRAY)) {
cpa_flush_array(addr, numpages, cache,
cpa.flags, pages);
diff --git a/arch/x86/mm/pat.c b/arch/x86/mm/pat.c
index faec01e7a17d..fb0604f11eec 100644
--- a/arch/x86/mm/pat.c
+++ b/arch/x86/mm/pat.c
@@ -40,11 +40,22 @@
static bool boot_cpu_done;
static int __read_mostly __pat_enabled = IS_ENABLED(CONFIG_X86_PAT);
+static void init_cache_modes(void);
-static inline void pat_disable(const char *reason)
+void pat_disable(const char *reason)
{
+ if (!__pat_enabled)
+ return;
+
+ if (boot_cpu_done) {
+ WARN_ONCE(1, "x86/PAT: PAT cannot be disabled after initialization\n");
+ return;
+ }
+
__pat_enabled = 0;
pr_info("x86/PAT: %s\n", reason);
+
+ init_cache_modes();
}
static int __init nopat(char *str)
@@ -181,7 +192,7 @@ static enum page_cache_mode pat_get_cache_mode(unsigned pat_val, char *msg)
* configuration.
* Using lower indices is preferred, so we start with highest index.
*/
-void pat_init_cache_modes(u64 pat)
+static void __init_cache_modes(u64 pat)
{
enum page_cache_mode cache;
char pat_msg[33];
@@ -202,14 +213,11 @@ static void pat_bsp_init(u64 pat)
{
u64 tmp_pat;
- if (!cpu_has_pat) {
+ if (!boot_cpu_has(X86_FEATURE_PAT)) {
pat_disable("PAT not supported by CPU.");
return;
}
- if (!pat_enabled())
- goto done;
-
rdmsrl(MSR_IA32_CR_PAT, tmp_pat);
if (!tmp_pat) {
pat_disable("PAT MSR is 0, disabled.");
@@ -218,16 +226,12 @@ static void pat_bsp_init(u64 pat)
wrmsrl(MSR_IA32_CR_PAT, pat);
-done:
- pat_init_cache_modes(pat);
+ __init_cache_modes(pat);
}
static void pat_ap_init(u64 pat)
{
- if (!pat_enabled())
- return;
-
- if (!cpu_has_pat) {
+ if (!boot_cpu_has(X86_FEATURE_PAT)) {
/*
* If this happens we are on a secondary CPU, but switched to
* PAT on the boot CPU. We have no way to undo PAT.
@@ -238,18 +242,32 @@ static void pat_ap_init(u64 pat)
wrmsrl(MSR_IA32_CR_PAT, pat);
}
-void pat_init(void)
+static void init_cache_modes(void)
{
- u64 pat;
- struct cpuinfo_x86 *c = &boot_cpu_data;
+ u64 pat = 0;
+ static int init_cm_done;
- if (!pat_enabled()) {
+ if (init_cm_done)
+ return;
+
+ if (boot_cpu_has(X86_FEATURE_PAT)) {
+ /*
+ * CPU supports PAT. Set PAT table to be consistent with
+ * PAT MSR. This case supports "nopat" boot option, and
+ * virtual machine environments which support PAT without
+ * MTRRs. In particular, Xen has a unique setup for the PAT MSR.
+ *
+ * If the PAT MSR reads back as 0, it is considered invalid and
+ * emulated as no PAT.
+ */
+ rdmsrl(MSR_IA32_CR_PAT, pat);
+ }
+
+ if (!pat) {
/*
* No PAT. Emulate the PAT table that corresponds to the two
- * cache bits, PWT (Write Through) and PCD (Cache Disable). This
- * setup is the same as the BIOS default setup when the system
- * has PAT but the "nopat" boot option has been specified. This
- * emulated PAT table is used when MSR_IA32_CR_PAT returns 0.
+ * cache bits, PWT (Write Through) and PCD (Cache Disable).
+ * This setup is also the same as the BIOS default setup.
*
* PTE encoding:
*
@@ -266,10 +284,36 @@ void pat_init(void)
*/
pat = PAT(0, WB) | PAT(1, WT) | PAT(2, UC_MINUS) | PAT(3, UC) |
PAT(4, WB) | PAT(5, WT) | PAT(6, UC_MINUS) | PAT(7, UC);
+ }
+
+ __init_cache_modes(pat);
+
+ init_cm_done = 1;
+}
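
For reference, the PAT MSR packs eight entries, one byte each, with the memory type in the low three bits of every byte; that is what __init_cache_modes() walks above. A decode sketch (helper name is hypothetical):

	/* Memory type of PAT entry 'idx' (0..7): 0=UC, 1=WC, 4=WT, 5=WP, 6=WB, 7=UC-. */
	static unsigned int pat_entry_type_sketch(unsigned long long pat_msr, unsigned int idx)
	{
		return (pat_msr >> (idx * 8)) & 0x7;
	}
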
+
+/**
+ * pat_init - Initialize PAT MSR and PAT table
+ *
+ * This function initializes PAT MSR and PAT table with an OS-defined value
+ * to enable additional cache attributes, WC and WT.
+ *
+ * This function must be called on all CPUs using the specific sequence of
+ * operations defined in the Intel SDM. mtrr_rendezvous_handler() provides this
+ * procedure for PAT.
+ */
+void pat_init(void)
+{
+ u64 pat;
+ struct cpuinfo_x86 *c = &boot_cpu_data;
+
+ if (!pat_enabled()) {
+ init_cache_modes();
+ return;
+ }
- } else if ((c->x86_vendor == X86_VENDOR_INTEL) &&
- (((c->x86 == 0x6) && (c->x86_model <= 0xd)) ||
- ((c->x86 == 0xf) && (c->x86_model <= 0x6)))) {
+ if ((c->x86_vendor == X86_VENDOR_INTEL) &&
+ (((c->x86 == 0x6) && (c->x86_model <= 0xd)) ||
+ ((c->x86 == 0xf) && (c->x86_model <= 0x6)))) {
/*
* PAT support with the lower four entries. Intel Pentium 2,
* 3, M, and 4 are affected by PAT errata, which makes the
@@ -734,25 +778,6 @@ int phys_mem_access_prot_allowed(struct file *file, unsigned long pfn,
if (file->f_flags & O_DSYNC)
pcm = _PAGE_CACHE_MODE_UC_MINUS;
-#ifdef CONFIG_X86_32
- /*
- * On the PPro and successors, the MTRRs are used to set
- * memory types for physical addresses outside main memory,
- * so blindly setting UC or PWT on those pages is wrong.
- * For Pentiums and earlier, the surround logic should disable
- * caching for the high addresses through the KEN pin, but
- * we maintain the tradition of paranoia in this code.
- */
- if (!pat_enabled() &&
- !(boot_cpu_has(X86_FEATURE_MTRR) ||
- boot_cpu_has(X86_FEATURE_K6_MTRR) ||
- boot_cpu_has(X86_FEATURE_CYRIX_ARR) ||
- boot_cpu_has(X86_FEATURE_CENTAUR_MCR)) &&
- (pfn << PAGE_SHIFT) >= __pa(high_memory)) {
- pcm = _PAGE_CACHE_MODE_UC;
- }
-#endif
-
*vma_prot = __pgprot((pgprot_val(*vma_prot) & ~_PAGE_CACHE_MASK) |
cachemode2protval(pcm));
return 1;
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index fe9b9f776361..5643fd0b1a7d 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -28,6 +28,8 @@
* Implement flush IPI by CALL_FUNCTION_VECTOR, Alex Shi
*/
+#ifdef CONFIG_SMP
+
struct flush_tlb_info {
struct mm_struct *flush_mm;
unsigned long flush_start;
@@ -57,6 +59,118 @@ void leave_mm(int cpu)
}
EXPORT_SYMBOL_GPL(leave_mm);
+#endif /* CONFIG_SMP */
+
+void switch_mm(struct mm_struct *prev, struct mm_struct *next,
+ struct task_struct *tsk)
+{
+ unsigned long flags;
+
+ local_irq_save(flags);
+ switch_mm_irqs_off(prev, next, tsk);
+ local_irq_restore(flags);
+}
+
+void switch_mm_irqs_off(struct mm_struct *prev, struct mm_struct *next,
+ struct task_struct *tsk)
+{
+ unsigned cpu = smp_processor_id();
+
+ if (likely(prev != next)) {
+#ifdef CONFIG_SMP
+ this_cpu_write(cpu_tlbstate.state, TLBSTATE_OK);
+ this_cpu_write(cpu_tlbstate.active_mm, next);
+#endif
+ cpumask_set_cpu(cpu, mm_cpumask(next));
+
+ /*
+ * Re-load page tables.
+ *
+ * This logic has an ordering constraint:
+ *
+ * CPU 0: Write to a PTE for 'next'
+ * CPU 0: load bit 1 in mm_cpumask. if nonzero, send IPI.
+ * CPU 1: set bit 1 in next's mm_cpumask
+ * CPU 1: load from the PTE that CPU 0 writes (implicit)
+ *
+ * We need to prevent an outcome in which CPU 1 observes
+ * the new PTE value and CPU 0 observes bit 1 clear in
+ * mm_cpumask. (If that occurs, then the IPI will never
+ * be sent, and CPU 0's TLB will contain a stale entry.)
+ *
+ * The bad outcome can occur if either CPU's load is
+ * reordered before that CPU's store, so both CPUs must
+ * execute full barriers to prevent this from happening.
+ *
+ * Thus, switch_mm needs a full barrier between the
+ * store to mm_cpumask and any operation that could load
+ * from next->pgd. TLB fills are special and can happen
+ * due to instruction fetches or for no reason at all,
+ * and neither LOCK nor MFENCE orders them.
+ * Fortunately, load_cr3() is serializing and gives the
+ * ordering guarantee we need.
+ *
+ */
+ load_cr3(next->pgd);
+
+ trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
+
+ /* Stop flush ipis for the previous mm */
+ cpumask_clear_cpu(cpu, mm_cpumask(prev));
+
+ /* Load per-mm CR4 state */
+ load_mm_cr4(next);
+
+#ifdef CONFIG_MODIFY_LDT_SYSCALL
+ /*
+ * Load the LDT, if the LDT is different.
+ *
+ * It's possible that prev->context.ldt doesn't match
+ * the LDT register. This can happen if leave_mm(prev)
+ * was called and then modify_ldt changed
+ * prev->context.ldt but suppressed an IPI to this CPU.
+ * In this case, prev->context.ldt != NULL, because we
+ * never set context.ldt to NULL while the mm still
+ * exists. That means that next->context.ldt !=
+ * prev->context.ldt, because mms never share an LDT.
+ */
+ if (unlikely(prev->context.ldt != next->context.ldt))
+ load_mm_ldt(next);
+#endif
+ }
+#ifdef CONFIG_SMP
+ else {
+ this_cpu_write(cpu_tlbstate.state, TLBSTATE_OK);
+ BUG_ON(this_cpu_read(cpu_tlbstate.active_mm) != next);
+
+ if (!cpumask_test_cpu(cpu, mm_cpumask(next))) {
+ /*
+ * On established mms, the mm_cpumask is only changed
+ * from irq context, from ptep_clear_flush() while in
+ * lazy tlb mode, and here. Irqs are blocked during
+ * schedule, protecting us from simultaneous changes.
+ */
+ cpumask_set_cpu(cpu, mm_cpumask(next));
+
+ /*
+ * We were in lazy tlb mode and leave_mm disabled
+ * tlb flush IPI delivery. We must reload CR3
+ * to make sure to use no freed page tables.
+ *
+ * As above, load_cr3() is serializing and orders TLB
+ * fills with respect to the mm_cpumask write.
+ */
+ load_cr3(next->pgd);
+ trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
+ load_mm_cr4(next);
+ load_mm_ldt(next);
+ }
+ }
+#endif
+}
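
The ordering constraint spelled out in the comment above is the classic store-buffering pattern. A standalone C11 sketch of it (illustration only): each side stores, then fences, then loads; with both fences in place at least one side must observe the other's store, which is exactly why the serializing load_cr3() is enough here.

	#include <stdatomic.h>

	static atomic_int pte_published;	/* CPU 0's store: "PTE for next written"  */
	static atomic_int cpumask_bit;		/* CPU 1's store: "bit set in mm_cpumask" */

	static int cpu0_writes_pte(void)	/* run on one thread */
	{
		atomic_store_explicit(&pte_published, 1, memory_order_relaxed);
		atomic_thread_fence(memory_order_seq_cst);	/* the role the full barrier plays */
		return atomic_load_explicit(&cpumask_bit, memory_order_relaxed);
	}

	static int cpu1_switches_mm(void)	/* run on another thread */
	{
		atomic_store_explicit(&cpumask_bit, 1, memory_order_relaxed);
		atomic_thread_fence(memory_order_seq_cst);	/* load_cr3() is serializing here */
		return atomic_load_explicit(&pte_published, memory_order_relaxed);
	}
	/* Without the fences both functions may return 0: the "IPI never sent,
	 * stale TLB entry" outcome. With them, at least one returns 1. */
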
+
+#ifdef CONFIG_SMP
+
/*
* The flush IPI assumes that a thread switch happens in this order:
* [cpu0: the cpu that switches]
@@ -353,3 +467,5 @@ static int __init create_tlb_single_page_flush_ceiling(void)
return 0;
}
late_initcall(create_tlb_single_page_flush_ceiling);
+
+#endif /* CONFIG_SMP */
diff --git a/arch/x86/net/bpf_jit_comp.c b/arch/x86/net/bpf_jit_comp.c
index 4286f3618bd0..fe04a04dab8e 100644
--- a/arch/x86/net/bpf_jit_comp.c
+++ b/arch/x86/net/bpf_jit_comp.c
@@ -110,11 +110,16 @@ static void bpf_flush_icache(void *start, void *end)
((int)K < 0 ? ((int)K >= SKF_LL_OFF ? func##_negative_offset : func) : func##_positive_offset)
/* pick a register outside of BPF range for JIT internal work */
-#define AUX_REG (MAX_BPF_REG + 1)
+#define AUX_REG (MAX_BPF_JIT_REG + 1)
-/* the following table maps BPF registers to x64 registers.
- * x64 register r12 is unused, since if used as base address register
- * in load/store instructions, it always needs an extra byte of encoding
+/* The following table maps BPF registers to x64 registers.
+ *
+ * x64 register r12 is unused, since if used as base address
+ * register in load/store instructions, it always needs an
+ * extra byte of encoding and is callee saved.
+ *
+ * r9 caches skb->len - skb->data_len
+ * r10 caches skb->data, and used for blinding (if enabled)
*/
static const int reg2hex[] = {
[BPF_REG_0] = 0, /* rax */
@@ -128,6 +133,7 @@ static const int reg2hex[] = {
[BPF_REG_8] = 6, /* r14 callee saved */
[BPF_REG_9] = 7, /* r15 callee saved */
[BPF_REG_FP] = 5, /* rbp readonly */
+ [BPF_REG_AX] = 2, /* r10 temp register */
[AUX_REG] = 3, /* r11 temp register */
};
@@ -141,7 +147,8 @@ static bool is_ereg(u32 reg)
BIT(AUX_REG) |
BIT(BPF_REG_7) |
BIT(BPF_REG_8) |
- BIT(BPF_REG_9));
+ BIT(BPF_REG_9) |
+ BIT(BPF_REG_AX));
}
/* add modifiers if 'reg' maps to x64 registers r8..r15 */
@@ -182,6 +189,7 @@ static void jit_fill_hole(void *area, unsigned int size)
struct jit_context {
int cleanup_addr; /* epilogue code offset */
bool seen_ld_abs;
+ bool seen_ax_reg;
};
/* maximum number of bytes emitted while JITing one eBPF insn */
@@ -345,6 +353,7 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
struct bpf_insn *insn = bpf_prog->insnsi;
int insn_cnt = bpf_prog->len;
bool seen_ld_abs = ctx->seen_ld_abs | (oldproglen == 0);
+ bool seen_ax_reg = ctx->seen_ax_reg | (oldproglen == 0);
bool seen_exit = false;
u8 temp[BPF_MAX_INSN_SIZE + BPF_INSN_SAFETY];
int i, cnt = 0;
@@ -367,6 +376,9 @@ static int do_jit(struct bpf_prog *bpf_prog, int *addrs, u8 *image,
int ilen;
u8 *func;
+ if (dst_reg == BPF_REG_AX || src_reg == BPF_REG_AX)
+ ctx->seen_ax_reg = seen_ax_reg = true;
+
switch (insn->code) {
/* ALU */
case BPF_ALU | BPF_ADD | BPF_X:
@@ -1002,6 +1014,10 @@ common_load:
* sk_load_* helpers also use %r10 and %r9d.
* See bpf_jit.S
*/
+ if (seen_ax_reg)
+ /* r10 = skb->data, mov %r10, off32(%rbx) */
+ EMIT3_off32(0x4c, 0x8b, 0x93,
+ offsetof(struct sk_buff, data));
EMIT1_off32(0xE8, jmp_offset); /* call */
break;
@@ -1073,25 +1089,37 @@ void bpf_jit_compile(struct bpf_prog *prog)
{
}
-void bpf_int_jit_compile(struct bpf_prog *prog)
+struct bpf_prog *bpf_int_jit_compile(struct bpf_prog *prog)
{
struct bpf_binary_header *header = NULL;
+ struct bpf_prog *tmp, *orig_prog = prog;
int proglen, oldproglen = 0;
struct jit_context ctx = {};
+ bool tmp_blinded = false;
u8 *image = NULL;
int *addrs;
int pass;
int i;
if (!bpf_jit_enable)
- return;
+ return orig_prog;
- if (!prog || !prog->len)
- return;
+ tmp = bpf_jit_blind_constants(prog);
+ /* If blinding was requested and we failed during blinding,
+ * we must fall back to the interpreter.
+ */
+ if (IS_ERR(tmp))
+ return orig_prog;
+ if (tmp != prog) {
+ tmp_blinded = true;
+ prog = tmp;
+ }
addrs = kmalloc(prog->len * sizeof(*addrs), GFP_KERNEL);
- if (!addrs)
- return;
+ if (!addrs) {
+ prog = orig_prog;
+ goto out;
+ }
/* Before first pass, make a rough estimation of addrs[]
* each bpf instruction is translated to less than 64 bytes
@@ -1113,21 +1141,25 @@ void bpf_int_jit_compile(struct bpf_prog *prog)
image = NULL;
if (header)
bpf_jit_binary_free(header);
- goto out;
+ prog = orig_prog;
+ goto out_addrs;
}
if (image) {
if (proglen != oldproglen) {
pr_err("bpf_jit: proglen=%d != oldproglen=%d\n",
proglen, oldproglen);
- goto out;
+ prog = orig_prog;
+ goto out_addrs;
}
break;
}
if (proglen == oldproglen) {
header = bpf_jit_binary_alloc(proglen, &image,
1, jit_fill_hole);
- if (!header)
- goto out;
+ if (!header) {
+ prog = orig_prog;
+ goto out_addrs;
+ }
}
oldproglen = proglen;
}
@@ -1141,8 +1173,14 @@ void bpf_int_jit_compile(struct bpf_prog *prog)
prog->bpf_func = (void *)image;
prog->jited = 1;
}
-out:
+
+out_addrs:
kfree(addrs);
+out:
+ if (tmp_blinded)
+ bpf_jit_prog_release_other(prog, prog == orig_prog ?
+ tmp : orig_prog);
+ return prog;
}
void bpf_jit_free(struct bpf_prog *fp)
diff --git a/arch/x86/oprofile/nmi_int.c b/arch/x86/oprofile/nmi_int.c
index 0e07e0968c3a..28c04123b6dd 100644
--- a/arch/x86/oprofile/nmi_int.c
+++ b/arch/x86/oprofile/nmi_int.c
@@ -636,7 +636,7 @@ static int __init ppro_init(char **cpu_type)
__u8 cpu_model = boot_cpu_data.x86_model;
struct op_x86_model_spec *spec = &op_ppro_spec; /* default */
- if (force_cpu_type == arch_perfmon && cpu_has_arch_perfmon)
+ if (force_cpu_type == arch_perfmon && boot_cpu_has(X86_FEATURE_ARCH_PERFMON))
return 0;
/*
@@ -700,7 +700,7 @@ int __init op_nmi_init(struct oprofile_operations *ops)
char *cpu_type = NULL;
int ret = 0;
- if (!cpu_has_apic)
+ if (!boot_cpu_has(X86_FEATURE_APIC))
return -ENODEV;
if (force_cpu_type == timer)
@@ -761,7 +761,7 @@ int __init op_nmi_init(struct oprofile_operations *ops)
if (cpu_type)
break;
- if (!cpu_has_arch_perfmon)
+ if (!boot_cpu_has(X86_FEATURE_ARCH_PERFMON))
return -ENODEV;
/* use arch perfmon as fallback */
diff --git a/arch/x86/oprofile/op_model_ppro.c b/arch/x86/oprofile/op_model_ppro.c
index d90528ea5412..350f7096baac 100644
--- a/arch/x86/oprofile/op_model_ppro.c
+++ b/arch/x86/oprofile/op_model_ppro.c
@@ -75,7 +75,7 @@ static void ppro_setup_ctrs(struct op_x86_model_spec const *model,
u64 val;
int i;
- if (cpu_has_arch_perfmon) {
+ if (boot_cpu_has(X86_FEATURE_ARCH_PERFMON)) {
union cpuid10_eax eax;
eax.full = cpuid_eax(0xa);
diff --git a/arch/x86/pci/acpi.c b/arch/x86/pci/acpi.c
index 3cd69832d7f4..b2a4e2a61f6b 100644
--- a/arch/x86/pci/acpi.c
+++ b/arch/x86/pci/acpi.c
@@ -396,7 +396,6 @@ int __init pci_acpi_init(void)
return -ENODEV;
printk(KERN_INFO "PCI: Using ACPI for IRQ routing\n");
- acpi_irq_penalty_init();
pcibios_enable_irq = acpi_pci_irq_enable;
pcibios_disable_irq = acpi_pci_irq_disable;
x86_init.pci.init_irq = x86_init_noop;
diff --git a/arch/x86/pci/common.c b/arch/x86/pci/common.c
index 381a43c40bf7..8196054fedb0 100644
--- a/arch/x86/pci/common.c
+++ b/arch/x86/pci/common.c
@@ -516,7 +516,7 @@ void __init pcibios_set_cache_line_size(void)
int __init pcibios_init(void)
{
- if (!raw_pci_ops) {
+ if (!raw_pci_ops && !raw_pci_ext_ops) {
printk(KERN_WARNING "PCI: System does not support PCI\n");
return 0;
}
diff --git a/arch/x86/pci/fixup.c b/arch/x86/pci/fixup.c
index b7de1929714b..837ea36a837d 100644
--- a/arch/x86/pci/fixup.c
+++ b/arch/x86/pci/fixup.c
@@ -552,9 +552,16 @@ static void twinhead_reserve_killing_zone(struct pci_dev *dev)
}
DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_INTEL, 0x27B9, twinhead_reserve_killing_zone);
+/*
+ * Broadwell EP Home Agent BARs erroneously return non-zero values when read.
+ *
+ * See http://www.intel.com/content/www/us/en/processors/xeon/xeon-e5-v4-spec-update.html
+ * entry BDF2.
+ */
static void pci_bdwep_bar(struct pci_dev *dev)
{
dev->non_compliant_bars = 1;
}
+DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6f60, pci_bdwep_bar);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6fa0, pci_bdwep_bar);
DECLARE_PCI_FIXUP_EARLY(PCI_VENDOR_ID_INTEL, 0x6fc0, pci_bdwep_bar);
diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
index beac4dfdade6..99ddab79215e 100644
--- a/arch/x86/pci/xen.c
+++ b/arch/x86/pci/xen.c
@@ -445,7 +445,7 @@ void __init xen_msi_init(void)
uint32_t eax = cpuid_eax(xen_cpuid_base() + 4);
if (((eax & XEN_HVM_CPUID_X2APIC_VIRT) && x2apic_mode) ||
- ((eax & XEN_HVM_CPUID_APIC_ACCESS_VIRT) && cpu_has_apic))
+ ((eax & XEN_HVM_CPUID_APIC_ACCESS_VIRT) && boot_cpu_has(X86_FEATURE_APIC)))
return;
}
@@ -491,8 +491,11 @@ int __init pci_xen_initial_domain(void)
#endif
__acpi_register_gsi = acpi_register_gsi_xen;
__acpi_unregister_gsi = NULL;
- /* Pre-allocate legacy irqs */
- for (irq = 0; irq < nr_legacy_irqs(); irq++) {
+ /*
+ * Pre-allocate the legacy IRQs. Use NR_IRQS_LEGACY here
+ * because we don't have a PIC and thus nr_legacy_irqs() is zero.
+ */
+ for (irq = 0; irq < NR_IRQS_LEGACY; irq++) {
int trigger, polarity;
if (acpi_get_override_irq(irq, &trigger, &polarity) == -1)
diff --git a/arch/x86/platform/efi/efi.c b/arch/x86/platform/efi/efi.c
index 994a7df84a7b..f93545e7dc54 100644
--- a/arch/x86/platform/efi/efi.c
+++ b/arch/x86/platform/efi/efi.c
@@ -54,10 +54,6 @@
#include <asm/rtc.h>
#include <asm/uv/uv.h>
-#define EFI_DEBUG
-
-struct efi_memory_map memmap;
-
static struct efi efi_phys __initdata;
static efi_system_table_t efi_systab __initdata;
@@ -119,11 +115,10 @@ void efi_get_time(struct timespec *now)
void __init efi_find_mirror(void)
{
- void *p;
+ efi_memory_desc_t *md;
u64 mirror_size = 0, total_size = 0;
- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
- efi_memory_desc_t *md = p;
+ for_each_efi_memory_desc(md) {
unsigned long long start = md->phys_addr;
unsigned long long size = md->num_pages << EFI_PAGE_SHIFT;
@@ -146,10 +141,9 @@ void __init efi_find_mirror(void)
static void __init do_add_efi_memmap(void)
{
- void *p;
+ efi_memory_desc_t *md;
- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
- efi_memory_desc_t *md = p;
+ for_each_efi_memory_desc(md) {
unsigned long long start = md->phys_addr;
unsigned long long size = md->num_pages << EFI_PAGE_SHIFT;
int e820_type;
@@ -209,47 +203,47 @@ int __init efi_memblock_x86_reserve_range(void)
#else
pmap = (e->efi_memmap | ((__u64)e->efi_memmap_hi << 32));
#endif
- memmap.phys_map = pmap;
- memmap.nr_map = e->efi_memmap_size /
+ efi.memmap.phys_map = pmap;
+ efi.memmap.nr_map = e->efi_memmap_size /
e->efi_memdesc_size;
- memmap.desc_size = e->efi_memdesc_size;
- memmap.desc_version = e->efi_memdesc_version;
+ efi.memmap.desc_size = e->efi_memdesc_size;
+ efi.memmap.desc_version = e->efi_memdesc_version;
- memblock_reserve(pmap, memmap.nr_map * memmap.desc_size);
+ WARN(efi.memmap.desc_version != 1,
+ "Unexpected EFI_MEMORY_DESCRIPTOR version %ld",
+ efi.memmap.desc_version);
- efi.memmap = &memmap;
+ memblock_reserve(pmap, efi.memmap.nr_map * efi.memmap.desc_size);
return 0;
}
void __init efi_print_memmap(void)
{
-#ifdef EFI_DEBUG
efi_memory_desc_t *md;
- void *p;
- int i;
+ int i = 0;
- for (p = memmap.map, i = 0;
- p < memmap.map_end;
- p += memmap.desc_size, i++) {
+ for_each_efi_memory_desc(md) {
char buf[64];
- md = p;
pr_info("mem%02u: %s range=[0x%016llx-0x%016llx] (%lluMB)\n",
- i, efi_md_typeattr_format(buf, sizeof(buf), md),
+ i++, efi_md_typeattr_format(buf, sizeof(buf), md),
md->phys_addr,
md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT) - 1,
(md->num_pages >> (20 - EFI_PAGE_SHIFT)));
}
-#endif /* EFI_DEBUG */
}
void __init efi_unmap_memmap(void)
{
+ unsigned long size;
+
clear_bit(EFI_MEMMAP, &efi.flags);
- if (memmap.map) {
- early_memunmap(memmap.map, memmap.nr_map * memmap.desc_size);
- memmap.map = NULL;
+
+ size = efi.memmap.nr_map * efi.memmap.desc_size;
+ if (efi.memmap.map) {
+ early_memunmap(efi.memmap.map, size);
+ efi.memmap.map = NULL;
}
}
@@ -352,8 +346,6 @@ static int __init efi_systab_init(void *phys)
efi.systab->hdr.revision >> 16,
efi.systab->hdr.revision & 0xffff);
- set_bit(EFI_SYSTEM_TABLES, &efi.flags);
-
return 0;
}
@@ -440,17 +432,22 @@ static int __init efi_runtime_init(void)
static int __init efi_memmap_init(void)
{
+ unsigned long addr, size;
+
if (efi_enabled(EFI_PARAVIRT))
return 0;
/* Map the EFI memory map */
- memmap.map = early_memremap((unsigned long)memmap.phys_map,
- memmap.nr_map * memmap.desc_size);
- if (memmap.map == NULL) {
+ size = efi.memmap.nr_map * efi.memmap.desc_size;
+ addr = (unsigned long)efi.memmap.phys_map;
+
+ efi.memmap.map = early_memremap(addr, size);
+ if (efi.memmap.map == NULL) {
pr_err("Could not map the memory map!\n");
return -ENOMEM;
}
- memmap.map_end = memmap.map + (memmap.nr_map * memmap.desc_size);
+
+ efi.memmap.map_end = efi.memmap.map + size;
if (add_efi_memmap)
do_add_efi_memmap();
@@ -552,12 +549,9 @@ void __init efi_set_executable(efi_memory_desc_t *md, bool executable)
void __init runtime_code_page_mkexec(void)
{
efi_memory_desc_t *md;
- void *p;
/* Make EFI runtime service code area executable */
- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
- md = p;
-
+ for_each_efi_memory_desc(md) {
if (md->type != EFI_RUNTIME_SERVICES_CODE)
continue;
@@ -604,12 +598,10 @@ void __init old_map_region(efi_memory_desc_t *md)
/* Merge contiguous regions of the same type and attribute */
static void __init efi_merge_regions(void)
{
- void *p;
efi_memory_desc_t *md, *prev_md = NULL;
- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
+ for_each_efi_memory_desc(md) {
u64 prev_size;
- md = p;
if (!prev_md) {
prev_md = md;
@@ -651,30 +643,31 @@ static void __init get_systab_virt_addr(efi_memory_desc_t *md)
static void __init save_runtime_map(void)
{
#ifdef CONFIG_KEXEC_CORE
+ unsigned long desc_size;
efi_memory_desc_t *md;
- void *tmp, *p, *q = NULL;
+ void *tmp, *q = NULL;
int count = 0;
if (efi_enabled(EFI_OLD_MEMMAP))
return;
- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
- md = p;
+ desc_size = efi.memmap.desc_size;
+ for_each_efi_memory_desc(md) {
if (!(md->attribute & EFI_MEMORY_RUNTIME) ||
(md->type == EFI_BOOT_SERVICES_CODE) ||
(md->type == EFI_BOOT_SERVICES_DATA))
continue;
- tmp = krealloc(q, (count + 1) * memmap.desc_size, GFP_KERNEL);
+ tmp = krealloc(q, (count + 1) * desc_size, GFP_KERNEL);
if (!tmp)
goto out;
q = tmp;
- memcpy(q + count * memmap.desc_size, md, memmap.desc_size);
+ memcpy(q + count * desc_size, md, desc_size);
count++;
}
- efi_runtime_map_setup(q, count, memmap.desc_size);
+ efi_runtime_map_setup(q, count, desc_size);
return;
out:
@@ -714,10 +707,10 @@ static inline void *efi_map_next_entry_reverse(void *entry)
{
/* Initial call */
if (!entry)
- return memmap.map_end - memmap.desc_size;
+ return efi.memmap.map_end - efi.memmap.desc_size;
- entry -= memmap.desc_size;
- if (entry < memmap.map)
+ entry -= efi.memmap.desc_size;
+ if (entry < efi.memmap.map)
return NULL;
return entry;
@@ -759,10 +752,10 @@ static void *efi_map_next_entry(void *entry)
/* Initial call */
if (!entry)
- return memmap.map;
+ return efi.memmap.map;
- entry += memmap.desc_size;
- if (entry >= memmap.map_end)
+ entry += efi.memmap.desc_size;
+ if (entry >= efi.memmap.map_end)
return NULL;
return entry;
@@ -776,8 +769,11 @@ static void * __init efi_map_regions(int *count, int *pg_shift)
{
void *p, *new_memmap = NULL;
unsigned long left = 0;
+ unsigned long desc_size;
efi_memory_desc_t *md;
+ desc_size = efi.memmap.desc_size;
+
p = NULL;
while ((p = efi_map_next_entry(p))) {
md = p;
@@ -792,7 +788,7 @@ static void * __init efi_map_regions(int *count, int *pg_shift)
efi_map_region(md);
get_systab_virt_addr(md);
- if (left < memmap.desc_size) {
+ if (left < desc_size) {
new_memmap = realloc_pages(new_memmap, *pg_shift);
if (!new_memmap)
return NULL;
@@ -801,10 +797,9 @@ static void * __init efi_map_regions(int *count, int *pg_shift)
(*pg_shift)++;
}
- memcpy(new_memmap + (*count * memmap.desc_size), md,
- memmap.desc_size);
+ memcpy(new_memmap + (*count * desc_size), md, desc_size);
- left -= memmap.desc_size;
+ left -= desc_size;
(*count)++;
}
@@ -816,7 +811,6 @@ static void __init kexec_enter_virtual_mode(void)
#ifdef CONFIG_KEXEC_CORE
efi_memory_desc_t *md;
unsigned int num_pages;
- void *p;
efi.systab = NULL;
@@ -840,8 +834,7 @@ static void __init kexec_enter_virtual_mode(void)
* Map efi regions which were passed via setup_data. The virt_addr is a
* fixed addr which was used in first kernel of a kexec boot.
*/
- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
- md = p;
+ for_each_efi_memory_desc(md) {
efi_map_region_fixed(md); /* FIXME: add error handling */
get_systab_virt_addr(md);
}
@@ -850,10 +843,10 @@ static void __init kexec_enter_virtual_mode(void)
BUG_ON(!efi.systab);
- num_pages = ALIGN(memmap.nr_map * memmap.desc_size, PAGE_SIZE);
+ num_pages = ALIGN(efi.memmap.nr_map * efi.memmap.desc_size, PAGE_SIZE);
num_pages >>= PAGE_SHIFT;
- if (efi_setup_page_tables(memmap.phys_map, num_pages)) {
+ if (efi_setup_page_tables(efi.memmap.phys_map, num_pages)) {
clear_bit(EFI_RUNTIME_SERVICES, &efi.flags);
return;
}
@@ -937,16 +930,16 @@ static void __init __efi_enter_virtual_mode(void)
if (efi_is_native()) {
status = phys_efi_set_virtual_address_map(
- memmap.desc_size * count,
- memmap.desc_size,
- memmap.desc_version,
+ efi.memmap.desc_size * count,
+ efi.memmap.desc_size,
+ efi.memmap.desc_version,
(efi_memory_desc_t *)__pa(new_memmap));
} else {
status = efi_thunk_set_virtual_address_map(
efi_phys.set_virtual_address_map,
- memmap.desc_size * count,
- memmap.desc_size,
- memmap.desc_version,
+ efi.memmap.desc_size * count,
+ efi.memmap.desc_size,
+ efi.memmap.desc_version,
(efi_memory_desc_t *)__pa(new_memmap));
}
@@ -1011,13 +1004,11 @@ void __init efi_enter_virtual_mode(void)
u32 efi_mem_type(unsigned long phys_addr)
{
efi_memory_desc_t *md;
- void *p;
if (!efi_enabled(EFI_MEMMAP))
return 0;
- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
- md = p;
+ for_each_efi_memory_desc(md) {
if ((md->phys_addr <= phys_addr) &&
(phys_addr < (md->phys_addr +
(md->num_pages << EFI_PAGE_SHIFT))))
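
The efi.c changes above replace every open-coded walk of the form "for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size)" with the for_each_efi_memory_desc() helper and move the map itself into efi.memmap. The reason the walk strides by desc_size rather than by sizeof(efi_memory_desc_t) is that firmware may report descriptors larger than the C struct, so plain pointer arithmetic on the struct type would skip data. A minimal userspace sketch of that stride-by-descriptor-size pattern follows; the struct layout, sizes and macro name are illustrative stand-ins, not the kernel's:

#include <stdio.h>
#include <stdint.h>
#include <string.h>

struct fake_desc {                      /* stand-in for efi_memory_desc_t */
        uint32_t type;
        uint64_t phys_addr;
        uint64_t num_pages;
};

struct fake_memmap {
        void   *map;                    /* start of the descriptor buffer */
        void   *map_end;                /* one past the last descriptor   */
        size_t  desc_size;              /* firmware stride, >= sizeof(struct fake_desc) */
};

/* Walk by desc_size, not by sizeof(*md): the stride may be larger. */
#define for_each_fake_desc(md, mm)                                        \
        for ((md) = (mm)->map;                                            \
             (char *)(md) + (mm)->desc_size <= (char *)(mm)->map_end;     \
             (md) = (struct fake_desc *)((char *)(md) + (mm)->desc_size))

int main(void)
{
        _Alignas(16) unsigned char buf[3 * 48];   /* three descriptors, 48-byte stride */
        struct fake_memmap mm = { buf, buf + sizeof(buf), 48 };
        struct fake_desc *md;
        unsigned int i = 0;

        for (unsigned int j = 0; j < 3; j++) {
                struct fake_desc d = { .type = 7, .phys_addr = (uint64_t)j << 20,
                                       .num_pages = 16 };
                memcpy(buf + j * mm.desc_size, &d, sizeof(d));
        }

        for_each_fake_desc(md, &mm)
                printf("mem%02u: phys=0x%llx pages=%llu\n", i++,
                       (unsigned long long)md->phys_addr,
                       (unsigned long long)md->num_pages);
        return 0;
}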
diff --git a/arch/x86/platform/efi/efi_64.c b/arch/x86/platform/efi/efi_64.c
index 49e4dd4a1f58..6e7242be1c87 100644
--- a/arch/x86/platform/efi/efi_64.c
+++ b/arch/x86/platform/efi/efi_64.c
@@ -55,14 +55,12 @@ struct efi_scratch efi_scratch;
static void __init early_code_mapping_set_exec(int executable)
{
efi_memory_desc_t *md;
- void *p;
if (!(__supported_pte_mask & _PAGE_NX))
return;
/* Make EFI service code area executable */
- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
- md = p;
+ for_each_efi_memory_desc(md) {
if (md->type == EFI_RUNTIME_SERVICES_CODE ||
md->type == EFI_BOOT_SERVICES_CODE)
efi_set_executable(md, executable);
@@ -253,7 +251,7 @@ int __init efi_setup_page_tables(unsigned long pa_memmap, unsigned num_pages)
* Map all of RAM so that we can access arguments in the 1:1
* mapping when making EFI runtime calls.
*/
- for_each_efi_memory_desc(&memmap, md) {
+ for_each_efi_memory_desc(md) {
if (md->type != EFI_CONVENTIONAL_MEMORY &&
md->type != EFI_LOADER_DATA &&
md->type != EFI_LOADER_CODE)
@@ -398,7 +396,6 @@ void __init efi_runtime_update_mappings(void)
unsigned long pfn;
pgd_t *pgd = efi_pgd;
efi_memory_desc_t *md;
- void *p;
if (efi_enabled(EFI_OLD_MEMMAP)) {
if (__supported_pte_mask & _PAGE_NX)
@@ -409,9 +406,8 @@ void __init efi_runtime_update_mappings(void)
if (!efi_enabled(EFI_NX_PE_DATA))
return;
- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
+ for_each_efi_memory_desc(md) {
unsigned long pf = 0;
- md = p;
if (!(md->attribute & EFI_MEMORY_RUNTIME))
continue;
diff --git a/arch/x86/platform/efi/efi_stub_64.S b/arch/x86/platform/efi/efi_stub_64.S
index 92723aeae0f9..cd95075944ab 100644
--- a/arch/x86/platform/efi/efi_stub_64.S
+++ b/arch/x86/platform/efi/efi_stub_64.S
@@ -11,7 +11,6 @@
#include <asm/msr.h>
#include <asm/processor-flags.h>
#include <asm/page_types.h>
-#include <asm/frame.h>
#define SAVE_XMM \
mov %rsp, %rax; \
@@ -40,10 +39,10 @@
mov (%rsp), %rsp
ENTRY(efi_call)
- FRAME_BEGIN
+ pushq %rbp
+ movq %rsp, %rbp
SAVE_XMM
- mov (%rsp), %rax
- mov 8(%rax), %rax
+ mov 16(%rbp), %rax
subq $48, %rsp
mov %r9, 32(%rsp)
mov %rax, 40(%rsp)
@@ -53,6 +52,6 @@ ENTRY(efi_call)
call *%rdi
addq $48, %rsp
RESTORE_XMM
- FRAME_END
+ popq %rbp
ret
ENDPROC(efi_call)
diff --git a/arch/x86/platform/efi/quirks.c b/arch/x86/platform/efi/quirks.c
index ab50ada1d56e..4480c06cade7 100644
--- a/arch/x86/platform/efi/quirks.c
+++ b/arch/x86/platform/efi/quirks.c
@@ -195,10 +195,9 @@ static bool can_free_region(u64 start, u64 size)
*/
void __init efi_reserve_boot_services(void)
{
- void *p;
+ efi_memory_desc_t *md;
- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
- efi_memory_desc_t *md = p;
+ for_each_efi_memory_desc(md) {
u64 start = md->phys_addr;
u64 size = md->num_pages << EFI_PAGE_SHIFT;
bool already_reserved;
@@ -250,10 +249,9 @@ void __init efi_reserve_boot_services(void)
void __init efi_free_boot_services(void)
{
- void *p;
+ efi_memory_desc_t *md;
- for (p = memmap.map; p < memmap.map_end; p += memmap.desc_size) {
- efi_memory_desc_t *md = p;
+ for_each_efi_memory_desc(md) {
unsigned long long start = md->phys_addr;
unsigned long long size = md->num_pages << EFI_PAGE_SHIFT;
@@ -373,5 +371,5 @@ bool efi_reboot_required(void)
bool efi_poweroff_required(void)
{
- return !!acpi_gbl_reduced_hardware;
+ return acpi_gbl_reduced_hardware || acpi_no_s5;
}
diff --git a/arch/x86/platform/uv/bios_uv.c b/arch/x86/platform/uv/bios_uv.c
index 1584cbed0dce..815fec6e05e2 100644
--- a/arch/x86/platform/uv/bios_uv.c
+++ b/arch/x86/platform/uv/bios_uv.c
@@ -21,19 +21,20 @@
#include <linux/efi.h>
#include <linux/export.h>
+#include <linux/slab.h>
#include <asm/efi.h>
#include <linux/io.h>
#include <asm/uv/bios.h>
#include <asm/uv/uv_hub.h>
-static struct uv_systab uv_systab;
+struct uv_systab *uv_systab;
s64 uv_bios_call(enum uv_bios_cmd which, u64 a1, u64 a2, u64 a3, u64 a4, u64 a5)
{
- struct uv_systab *tab = &uv_systab;
+ struct uv_systab *tab = uv_systab;
s64 ret;
- if (!tab->function)
+ if (!tab || !tab->function)
/*
* BIOS does not support UV systab
*/
@@ -183,34 +184,31 @@ int uv_bios_set_legacy_vga_target(bool decode, int domain, int bus)
}
EXPORT_SYMBOL_GPL(uv_bios_set_legacy_vga_target);
-
#ifdef CONFIG_EFI
void uv_bios_init(void)
{
- struct uv_systab *tab;
-
- if ((efi.uv_systab == EFI_INVALID_TABLE_ADDR) ||
- (efi.uv_systab == (unsigned long)NULL)) {
- printk(KERN_CRIT "No EFI UV System Table.\n");
- uv_systab.function = (unsigned long)NULL;
+ uv_systab = NULL;
+ if ((efi.uv_systab == EFI_INVALID_TABLE_ADDR) || !efi.uv_systab) {
+ pr_crit("UV: UVsystab: missing\n");
return;
}
- tab = (struct uv_systab *)ioremap(efi.uv_systab,
- sizeof(struct uv_systab));
- if (strncmp(tab->signature, "UVST", 4) != 0)
- printk(KERN_ERR "bad signature in UV system table!");
-
- /*
- * Copy table to permanent spot for later use.
- */
- memcpy(&uv_systab, tab, sizeof(struct uv_systab));
- iounmap(tab);
+ uv_systab = ioremap(efi.uv_systab, sizeof(struct uv_systab));
+ if (!uv_systab || strncmp(uv_systab->signature, UV_SYSTAB_SIG, 4)) {
+ pr_err("UV: UVsystab: bad signature!\n");
+ iounmap(uv_systab);
+ return;
+ }
- printk(KERN_INFO "EFI UV System Table Revision %d\n",
- uv_systab.revision);
+ if (uv_systab->revision >= UV_SYSTAB_VERSION_UV4) {
+ iounmap(uv_systab);
+ uv_systab = ioremap(efi.uv_systab, uv_systab->size);
+ if (!uv_systab) {
+ pr_err("UV: UVsystab: ioremap(%d) failed!\n",
+ uv_systab->size);
+ return;
+ }
+ }
+ pr_info("UV: UVsystab: Revision:%x\n", uv_systab->revision);
}
-#else /* !CONFIG_EFI */
-
-void uv_bios_init(void) { }
#endif
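
uv_bios_init() above now keeps a live ioremap() of the UV system table instead of copying it, and for newer table revisions it remaps a second time using the size field read out of the header. A userspace analogue of that "map the fixed-size header first, then remap to the full advertised size" idea, using mmap() on a file, is sketched below; the file name and header layout are invented for illustration, and note that the sketch saves the size into a local before the first mapping is dropped:

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

struct blob_header {                    /* illustrative on-disk header */
        char     signature[4];
        uint32_t size;                  /* total size of the blob, header included */
};

static void *map_blob(const char *path, size_t *len_out)
{
        int fd = open(path, O_RDONLY);
        if (fd < 0)
                return NULL;

        /* Stage 1: map just the header to learn the real size. */
        struct blob_header *hdr = mmap(NULL, sizeof(*hdr), PROT_READ,
                                       MAP_PRIVATE, fd, 0);
        if (hdr == MAP_FAILED) {
                close(fd);
                return NULL;
        }

        size_t full = hdr->size;        /* save before unmapping */
        munmap(hdr, sizeof(*hdr));

        /* Stage 2: map the whole blob now that the size is known. */
        void *blob = mmap(NULL, full, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);
        if (blob == MAP_FAILED)
                return NULL;

        *len_out = full;
        return blob;
}

int main(void)
{
        size_t len;
        void *blob = map_blob("systab.bin", &len);   /* hypothetical input file */

        if (blob) {
                printf("mapped %zu bytes\n", len);
                munmap(blob, len);
        } else {
                perror("map_blob");
        }
        return 0;
}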
diff --git a/arch/x86/platform/uv/tlb_uv.c b/arch/x86/platform/uv/tlb_uv.c
index 3b6ec42718e4..fdb4d42b4ce5 100644
--- a/arch/x86/platform/uv/tlb_uv.c
+++ b/arch/x86/platform/uv/tlb_uv.c
@@ -37,7 +37,7 @@ static int timeout_base_ns[] = {
};
static int timeout_us;
-static int nobau;
+static bool nobau = true;
static int nobau_perm;
static cycles_t congested_cycles;
@@ -106,13 +106,28 @@ static char *stat_description[] = {
"enable: number times use of the BAU was re-enabled"
};
-static int __init
-setup_nobau(char *arg)
+static int __init setup_bau(char *arg)
{
- nobau = 1;
+ int result;
+
+ if (!arg)
+ return -EINVAL;
+
+ result = strtobool(arg, &nobau);
+ if (result)
+ return result;
+
+ /* we need to flip the logic here, so that bau=y sets nobau to false */
+ nobau = !nobau;
+
+ if (!nobau)
+ pr_info("UV BAU Enabled\n");
+ else
+ pr_info("UV BAU Disabled\n");
+
return 0;
}
-early_param("nobau", setup_nobau);
+early_param("bau", setup_bau);
/* base pnode in this partition */
static int uv_base_pnode __read_mostly;
@@ -131,10 +146,10 @@ set_bau_on(void)
pr_info("BAU not initialized; cannot be turned on\n");
return;
}
- nobau = 0;
+ nobau = false;
for_each_present_cpu(cpu) {
bcp = &per_cpu(bau_control, cpu);
- bcp->nobau = 0;
+ bcp->nobau = false;
}
pr_info("BAU turned on\n");
return;
@@ -146,10 +161,10 @@ set_bau_off(void)
int cpu;
struct bau_control *bcp;
- nobau = 1;
+ nobau = true;
for_each_present_cpu(cpu) {
bcp = &per_cpu(bau_control, cpu);
- bcp->nobau = 1;
+ bcp->nobau = true;
}
pr_info("BAU turned off\n");
return;
@@ -1886,7 +1901,7 @@ static void __init init_per_cpu_tunables(void)
bcp = &per_cpu(bau_control, cpu);
bcp->baudisabled = 0;
if (nobau)
- bcp->nobau = 1;
+ bcp->nobau = true;
bcp->statp = &per_cpu(ptcstats, cpu);
/* time interval to catch a hardware stay-busy bug */
bcp->timeout_interval = usec_2_cycles(2*timeout_us);
@@ -2025,7 +2040,8 @@ static int scan_sock(struct socket_desc *sdp, struct uvhub_desc *bdp,
return 1;
}
bcp->uvhub_master = *hmasterp;
- bcp->uvhub_cpu = uv_cpu_hub_info(cpu)->blade_processor_id;
+ bcp->uvhub_cpu = uv_cpu_blade_processor_id(cpu);
+
if (bcp->uvhub_cpu >= MAX_CPUS_PER_UVHUB) {
printk(KERN_EMERG "%d cpus per uvhub invalid\n",
bcp->uvhub_cpu);
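
The BAU change above replaces the old "nobau" early parameter with "bau=", parses it through strtobool() and then inverts the result so that bau=on clears nobau. A small standalone sketch of that parse-then-invert logic; strtobool_like() below is a simplified stand-in for the kernel's strtobool(), which also accepts upper-case forms:

#include <stdio.h>
#include <stdbool.h>
#include <string.h>

/* Simplified stand-in for strtobool(): accepts 1/0, y/n, on/off. */
static int strtobool_like(const char *s, bool *res)
{
        if (!s)
                return -1;
        if (!strcmp(s, "1") || !strcmp(s, "y") || !strcmp(s, "on"))
                *res = true;
        else if (!strcmp(s, "0") || !strcmp(s, "n") || !strcmp(s, "off"))
                *res = false;
        else
                return -1;
        return 0;
}

static bool nobau = true;               /* default: BAU disabled, as in the hunk above */

static int setup_bau(const char *arg)
{
        bool enable;

        if (strtobool_like(arg, &enable))
                return -1;

        /* "bau=on" means enable, so the stored flag is the inverse. */
        nobau = !enable;
        printf("UV BAU %s\n", nobau ? "Disabled" : "Enabled");
        return 0;
}

int main(void)
{
        setup_bau("on");                /* -> Enabled  (nobau = false) */
        setup_bau("off");               /* -> Disabled (nobau = true)  */
        return 0;
}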
diff --git a/arch/x86/platform/uv/uv_sysfs.c b/arch/x86/platform/uv/uv_sysfs.c
index 5d4ba301e776..e9da9ebd924a 100644
--- a/arch/x86/platform/uv/uv_sysfs.c
+++ b/arch/x86/platform/uv/uv_sysfs.c
@@ -34,7 +34,7 @@ static ssize_t partition_id_show(struct kobject *kobj,
static ssize_t coherence_id_show(struct kobject *kobj,
struct kobj_attribute *attr, char *buf)
{
- return snprintf(buf, PAGE_SIZE, "%ld\n", partition_coherence_id());
+ return snprintf(buf, PAGE_SIZE, "%ld\n", uv_partition_coherence_id());
}
static struct kobj_attribute partition_id_attr =
diff --git a/arch/x86/platform/uv/uv_time.c b/arch/x86/platform/uv/uv_time.c
index 2b158a9fa1d7..b333fc45f9ec 100644
--- a/arch/x86/platform/uv/uv_time.c
+++ b/arch/x86/platform/uv/uv_time.c
@@ -165,7 +165,7 @@ static __init int uv_rtc_allocate_timers(void)
for_each_present_cpu(cpu) {
int nid = cpu_to_node(cpu);
int bid = uv_cpu_to_blade_id(cpu);
- int bcpu = uv_cpu_hub_info(cpu)->blade_processor_id;
+ int bcpu = uv_cpu_blade_processor_id(cpu);
struct uv_rtc_timer_head *head = blade_info[bid];
if (!head) {
@@ -226,7 +226,7 @@ static int uv_rtc_set_timer(int cpu, u64 expires)
int pnode = uv_cpu_to_pnode(cpu);
int bid = uv_cpu_to_blade_id(cpu);
struct uv_rtc_timer_head *head = blade_info[bid];
- int bcpu = uv_cpu_hub_info(cpu)->blade_processor_id;
+ int bcpu = uv_cpu_blade_processor_id(cpu);
u64 *t = &head->cpu[bcpu].expires;
unsigned long flags;
int next_cpu;
@@ -262,7 +262,7 @@ static int uv_rtc_unset_timer(int cpu, int force)
int pnode = uv_cpu_to_pnode(cpu);
int bid = uv_cpu_to_blade_id(cpu);
struct uv_rtc_timer_head *head = blade_info[bid];
- int bcpu = uv_cpu_hub_info(cpu)->blade_processor_id;
+ int bcpu = uv_cpu_blade_processor_id(cpu);
u64 *t = &head->cpu[bcpu].expires;
unsigned long flags;
int rc = 0;
diff --git a/arch/x86/power/hibernate_32.c b/arch/x86/power/hibernate_32.c
index 291226b952a9..9f14bd34581d 100644
--- a/arch/x86/power/hibernate_32.c
+++ b/arch/x86/power/hibernate_32.c
@@ -106,7 +106,7 @@ static int resume_physical_mapping_init(pgd_t *pgd_base)
* normal page tables.
* NOTE: We can mark everything as executable here
*/
- if (cpu_has_pse) {
+ if (boot_cpu_has(X86_FEATURE_PSE)) {
set_pmd(pmd, pfn_pmd(pfn, PAGE_KERNEL_LARGE_EXEC));
pfn += PTRS_PER_PTE;
} else {
diff --git a/arch/x86/purgatory/Makefile b/arch/x86/purgatory/Makefile
index 92e3e1d84c1d..12734a96df47 100644
--- a/arch/x86/purgatory/Makefile
+++ b/arch/x86/purgatory/Makefile
@@ -26,7 +26,5 @@ quiet_cmd_bin2c = BIN2C $@
$(obj)/kexec-purgatory.c: $(obj)/purgatory.ro FORCE
$(call if_changed,bin2c)
- @:
-
obj-$(CONFIG_KEXEC_FILE) += kexec-purgatory.o
diff --git a/arch/x86/ras/Kconfig b/arch/x86/ras/Kconfig
index df280da34825..d957d5f21a86 100644
--- a/arch/x86/ras/Kconfig
+++ b/arch/x86/ras/Kconfig
@@ -1,4 +1,4 @@
-config AMD_MCE_INJ
+config MCE_AMD_INJ
tristate "Simple MCE injection interface for AMD processors"
depends on RAS && EDAC_DECODE_MCE && DEBUG_FS && AMD_NB
default n
diff --git a/arch/x86/ras/Makefile b/arch/x86/ras/Makefile
index dd2c98b84037..5f94546db280 100644
--- a/arch/x86/ras/Makefile
+++ b/arch/x86/ras/Makefile
@@ -1,2 +1,2 @@
-obj-$(CONFIG_AMD_MCE_INJ) += mce_amd_inj.o
+obj-$(CONFIG_MCE_AMD_INJ) += mce_amd_inj.o
diff --git a/arch/x86/ras/mce_amd_inj.c b/arch/x86/ras/mce_amd_inj.c
index 9e02dcaef683..e69f4701a076 100644
--- a/arch/x86/ras/mce_amd_inj.c
+++ b/arch/x86/ras/mce_amd_inj.c
@@ -290,14 +290,33 @@ static void do_inject(void)
wrmsr_on_cpu(cpu, MSR_IA32_MCG_STATUS,
(u32)mcg_status, (u32)(mcg_status >> 32));
- wrmsr_on_cpu(cpu, MSR_IA32_MCx_STATUS(b),
- (u32)i_mce.status, (u32)(i_mce.status >> 32));
+ if (boot_cpu_has(X86_FEATURE_SMCA)) {
+ if (inj_type == DFR_INT_INJ) {
+ wrmsr_on_cpu(cpu, MSR_AMD64_SMCA_MCx_DESTAT(b),
+ (u32)i_mce.status, (u32)(i_mce.status >> 32));
+
+ wrmsr_on_cpu(cpu, MSR_AMD64_SMCA_MCx_DEADDR(b),
+ (u32)i_mce.addr, (u32)(i_mce.addr >> 32));
+ } else {
+ wrmsr_on_cpu(cpu, MSR_AMD64_SMCA_MCx_STATUS(b),
+ (u32)i_mce.status, (u32)(i_mce.status >> 32));
+
+ wrmsr_on_cpu(cpu, MSR_AMD64_SMCA_MCx_ADDR(b),
+ (u32)i_mce.addr, (u32)(i_mce.addr >> 32));
+ }
+
+ wrmsr_on_cpu(cpu, MSR_AMD64_SMCA_MCx_MISC(b),
+ (u32)i_mce.misc, (u32)(i_mce.misc >> 32));
+ } else {
+ wrmsr_on_cpu(cpu, MSR_IA32_MCx_STATUS(b),
+ (u32)i_mce.status, (u32)(i_mce.status >> 32));
- wrmsr_on_cpu(cpu, MSR_IA32_MCx_ADDR(b),
- (u32)i_mce.addr, (u32)(i_mce.addr >> 32));
+ wrmsr_on_cpu(cpu, MSR_IA32_MCx_ADDR(b),
+ (u32)i_mce.addr, (u32)(i_mce.addr >> 32));
- wrmsr_on_cpu(cpu, MSR_IA32_MCx_MISC(b),
- (u32)i_mce.misc, (u32)(i_mce.misc >> 32));
+ wrmsr_on_cpu(cpu, MSR_IA32_MCx_MISC(b),
+ (u32)i_mce.misc, (u32)(i_mce.misc >> 32));
+ }
toggle_hw_mce_inject(cpu, false);
diff --git a/arch/x86/realmode/rm/Makefile b/arch/x86/realmode/rm/Makefile
index b95964610ea7..c556c5ae8de5 100644
--- a/arch/x86/realmode/rm/Makefile
+++ b/arch/x86/realmode/rm/Makefile
@@ -59,7 +59,6 @@ OBJCOPYFLAGS_realmode.bin := -O binary
targets += realmode.bin
$(obj)/realmode.bin: $(obj)/realmode.elf $(obj)/realmode.relocs FORCE
$(call if_changed,objcopy)
- @:
quiet_cmd_relocs = RELOCS $@
cmd_relocs = arch/x86/tools/relocs --realmode $< > $@
diff --git a/arch/x86/tools/calc_run_size.sh b/arch/x86/tools/calc_run_size.sh
deleted file mode 100644
index 1a4c17bb3910..000000000000
--- a/arch/x86/tools/calc_run_size.sh
+++ /dev/null
@@ -1,42 +0,0 @@
-#!/bin/sh
-#
-# Calculate the amount of space needed to run the kernel, including room for
-# the .bss and .brk sections.
-#
-# Usage:
-# objdump -h a.out | sh calc_run_size.sh
-
-NUM='\([0-9a-fA-F]*[ \t]*\)'
-OUT=$(sed -n 's/^[ \t0-9]*.b[sr][sk][ \t]*'"$NUM$NUM$NUM$NUM"'.*/\1\4/p')
-if [ -z "$OUT" ] ; then
- echo "Never found .bss or .brk file offset" >&2
- exit 1
-fi
-
-OUT=$(echo ${OUT# })
-sizeA=$(printf "%d" 0x${OUT%% *})
-OUT=${OUT#* }
-offsetA=$(printf "%d" 0x${OUT%% *})
-OUT=${OUT#* }
-sizeB=$(printf "%d" 0x${OUT%% *})
-OUT=${OUT#* }
-offsetB=$(printf "%d" 0x${OUT%% *})
-
-run_size=$(( $offsetA + $sizeA + $sizeB ))
-
-# BFD linker shows the same file offset in ELF.
-if [ "$offsetA" -ne "$offsetB" ] ; then
- # Gold linker shows them as consecutive.
- endB=$(( $offsetB + $sizeB ))
- if [ "$endB" != "$run_size" ] ; then
- printf "sizeA: 0x%x\n" $sizeA >&2
- printf "offsetA: 0x%x\n" $offsetA >&2
- printf "sizeB: 0x%x\n" $sizeB >&2
- printf "offsetB: 0x%x\n" $offsetB >&2
- echo ".bss and .brk are non-contiguous" >&2
- exit 1
- fi
-fi
-
-printf "%d\n" $run_size
-exit 0
diff --git a/arch/x86/um/os-Linux/registers.c b/arch/x86/um/os-Linux/registers.c
index 41bfe84e11ab..00f54a91bb4b 100644
--- a/arch/x86/um/os-Linux/registers.c
+++ b/arch/x86/um/os-Linux/registers.c
@@ -11,21 +11,56 @@
#endif
#include <longjmp.h>
#include <sysdep/ptrace_user.h>
+#include <sys/uio.h>
+#include <asm/sigcontext.h>
+#include <linux/elf.h>
-int save_fp_registers(int pid, unsigned long *fp_regs)
+int have_xstate_support;
+
+int save_i387_registers(int pid, unsigned long *fp_regs)
{
if (ptrace(PTRACE_GETFPREGS, pid, 0, fp_regs) < 0)
return -errno;
return 0;
}
-int restore_fp_registers(int pid, unsigned long *fp_regs)
+int save_fp_registers(int pid, unsigned long *fp_regs)
+{
+ struct iovec iov;
+
+ if (have_xstate_support) {
+ iov.iov_base = fp_regs;
+ iov.iov_len = sizeof(struct _xstate);
+ if (ptrace(PTRACE_GETREGSET, pid, NT_X86_XSTATE, &iov) < 0)
+ return -errno;
+ return 0;
+ } else {
+ return save_i387_registers(pid, fp_regs);
+ }
+}
+
+int restore_i387_registers(int pid, unsigned long *fp_regs)
{
if (ptrace(PTRACE_SETFPREGS, pid, 0, fp_regs) < 0)
return -errno;
return 0;
}
+int restore_fp_registers(int pid, unsigned long *fp_regs)
+{
+ struct iovec iov;
+
+ if (have_xstate_support) {
+ iov.iov_base = fp_regs;
+ iov.iov_len = sizeof(struct _xstate);
+ if (ptrace(PTRACE_SETREGSET, pid, NT_X86_XSTATE, &iov) < 0)
+ return -errno;
+ return 0;
+ } else {
+ return restore_i387_registers(pid, fp_regs);
+ }
+}
+
#ifdef __i386__
int have_fpx_regs = 1;
int save_fpx_registers(int pid, unsigned long *fp_regs)
@@ -85,6 +120,16 @@ int put_fp_registers(int pid, unsigned long *regs)
return restore_fp_registers(pid, regs);
}
+void arch_init_registers(int pid)
+{
+ struct _xstate fp_regs;
+ struct iovec iov;
+
+ iov.iov_base = &fp_regs;
+ iov.iov_len = sizeof(struct _xstate);
+ if (ptrace(PTRACE_GETREGSET, pid, NT_X86_XSTATE, &iov) == 0)
+ have_xstate_support = 1;
+}
#endif
unsigned long get_thread_reg(int reg, jmp_buf *buf)
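
The UML host-side change above probes once for PTRACE_GETREGSET with NT_X86_XSTATE and falls back to the classic i387 path when extended state is not available. Since this code already runs in userspace, the pattern can be shown as a small standalone program against a stopped child; the 4 KiB buffer is an assumed upper bound rather than sizeof(struct _xstate), and the fallback uses NT_PRFPREG instead of PTRACE_GETFPREGS purely to keep the sketch uniform:

#include <stdio.h>
#include <errno.h>
#include <signal.h>
#include <unistd.h>
#include <elf.h>                        /* NT_X86_XSTATE, NT_PRFPREG */
#include <sys/ptrace.h>
#include <sys/uio.h>
#include <sys/wait.h>
#include <sys/types.h>

/* Try the extended register set first, then fall back to the i387 area. */
static int read_fp_state(pid_t pid, void *buf, size_t len)
{
        struct iovec iov = { .iov_base = buf, .iov_len = len };

        if (ptrace(PTRACE_GETREGSET, pid, (void *)NT_X86_XSTATE, &iov) == 0)
                return 0;

        iov.iov_len = len;
        if (ptrace(PTRACE_GETREGSET, pid, (void *)NT_PRFPREG, &iov) == 0)
                return 0;

        return -errno;
}

int main(void)
{
        pid_t pid = fork();

        if (pid == 0) {                 /* child: stop so the parent can peek */
                ptrace(PTRACE_TRACEME, 0, NULL, NULL);
                raise(SIGSTOP);
                _exit(0);
        }

        waitpid(pid, NULL, 0);          /* child is stopped here */

        unsigned char xstate[4096];     /* assumed upper bound for the xstate area */
        if (read_fp_state(pid, xstate, sizeof(xstate)) == 0)
                printf("fetched FP/XSTATE registers of pid %d\n", (int)pid);
        else
                perror("PTRACE_GETREGSET");

        kill(pid, SIGKILL);
        waitpid(pid, NULL, 0);
        return 0;
}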
diff --git a/arch/x86/um/ptrace_32.c b/arch/x86/um/ptrace_32.c
index 47c78d5e5c32..ebd4dd6ef73b 100644
--- a/arch/x86/um/ptrace_32.c
+++ b/arch/x86/um/ptrace_32.c
@@ -194,7 +194,8 @@ static int get_fpregs(struct user_i387_struct __user *buf, struct task_struct *c
int err, n, cpu = ((struct thread_info *) child->stack)->cpu;
struct user_i387_struct fpregs;
- err = save_fp_registers(userspace_pid[cpu], (unsigned long *) &fpregs);
+ err = save_i387_registers(userspace_pid[cpu],
+ (unsigned long *) &fpregs);
if (err)
return err;
@@ -214,7 +215,7 @@ static int set_fpregs(struct user_i387_struct __user *buf, struct task_struct *c
if (n > 0)
return -EFAULT;
- return restore_fp_registers(userspace_pid[cpu],
+ return restore_i387_registers(userspace_pid[cpu],
(unsigned long *) &fpregs);
}
diff --git a/arch/x86/um/ptrace_64.c b/arch/x86/um/ptrace_64.c
index a629694ee750..faab418876ce 100644
--- a/arch/x86/um/ptrace_64.c
+++ b/arch/x86/um/ptrace_64.c
@@ -222,14 +222,14 @@ int is_syscall(unsigned long addr)
static int get_fpregs(struct user_i387_struct __user *buf, struct task_struct *child)
{
int err, n, cpu = ((struct thread_info *) child->stack)->cpu;
- long fpregs[HOST_FP_SIZE];
+ struct user_i387_struct fpregs;
- BUG_ON(sizeof(*buf) != sizeof(fpregs));
- err = save_fp_registers(userspace_pid[cpu], fpregs);
+ err = save_i387_registers(userspace_pid[cpu],
+ (unsigned long *) &fpregs);
if (err)
return err;
- n = copy_to_user(buf, fpregs, sizeof(fpregs));
+ n = copy_to_user(buf, &fpregs, sizeof(fpregs));
if (n > 0)
return -EFAULT;
@@ -239,14 +239,14 @@ static int get_fpregs(struct user_i387_struct __user *buf, struct task_struct *c
static int set_fpregs(struct user_i387_struct __user *buf, struct task_struct *child)
{
int n, cpu = ((struct thread_info *) child->stack)->cpu;
- long fpregs[HOST_FP_SIZE];
+ struct user_i387_struct fpregs;
- BUG_ON(sizeof(*buf) != sizeof(fpregs));
- n = copy_from_user(fpregs, buf, sizeof(fpregs));
+ n = copy_from_user(&fpregs, buf, sizeof(fpregs));
if (n > 0)
return -EFAULT;
- return restore_fp_registers(userspace_pid[cpu], fpregs);
+ return restore_i387_registers(userspace_pid[cpu],
+ (unsigned long *) &fpregs);
}
long subarch_ptrace(struct task_struct *child, long request,
diff --git a/arch/x86/um/shared/sysdep/ptrace_64.h b/arch/x86/um/shared/sysdep/ptrace_64.h
index 919789f1071e..0dc223aa1c2d 100644
--- a/arch/x86/um/shared/sysdep/ptrace_64.h
+++ b/arch/x86/um/shared/sysdep/ptrace_64.h
@@ -57,8 +57,6 @@
#define UPT_SYSCALL_ARG5(r) UPT_R8(r)
#define UPT_SYSCALL_ARG6(r) UPT_R9(r)
-static inline void arch_init_registers(int pid)
-{
-}
+extern void arch_init_registers(int pid);
#endif
diff --git a/arch/x86/um/signal.c b/arch/x86/um/signal.c
index 14fcd01ed992..49e503697022 100644
--- a/arch/x86/um/signal.c
+++ b/arch/x86/um/signal.c
@@ -225,26 +225,16 @@ static int copy_sc_from_user(struct pt_regs *regs,
} else
#endif
{
- struct user_i387_struct fp;
-
- err = copy_from_user(&fp, (void *)sc.fpstate,
- sizeof(struct user_i387_struct));
+ err = copy_from_user(regs->regs.fp, (void *)sc.fpstate,
+ sizeof(struct _xstate));
if (err)
return 1;
-
- err = restore_fp_registers(pid, (unsigned long *) &fp);
- if (err < 0) {
- printk(KERN_ERR "copy_sc_from_user - "
- "restore_fp_registers failed, errno = %d\n",
- -err);
- return 1;
- }
}
return 0;
}
static int copy_sc_to_user(struct sigcontext __user *to,
- struct _fpstate __user *to_fp, struct pt_regs *regs,
+ struct _xstate __user *to_fp, struct pt_regs *regs,
unsigned long mask)
{
struct sigcontext sc;
@@ -310,25 +300,22 @@ static int copy_sc_to_user(struct sigcontext __user *to,
return 1;
}
- err = convert_fxsr_to_user(to_fp, &fpx);
+ err = convert_fxsr_to_user(&to_fp->fpstate, &fpx);
if (err)
return 1;
- err |= __put_user(fpx.swd, &to_fp->status);
- err |= __put_user(X86_FXSR_MAGIC, &to_fp->magic);
+ err |= __put_user(fpx.swd, &to_fp->fpstate.status);
+ err |= __put_user(X86_FXSR_MAGIC, &to_fp->fpstate.magic);
if (err)
return 1;
- if (copy_to_user(&to_fp->_fxsr_env[0], &fpx,
+ if (copy_to_user(&to_fp->fpstate._fxsr_env[0], &fpx,
sizeof(struct user_fxsr_struct)))
return 1;
} else
#endif
{
- struct user_i387_struct fp;
-
- err = save_fp_registers(pid, (unsigned long *) &fp);
- if (copy_to_user(to_fp, &fp, sizeof(struct user_i387_struct)))
+ if (copy_to_user(to_fp, regs->regs.fp, sizeof(struct _xstate)))
return 1;
}
@@ -337,7 +324,7 @@ static int copy_sc_to_user(struct sigcontext __user *to,
#ifdef CONFIG_X86_32
static int copy_ucontext_to_user(struct ucontext __user *uc,
- struct _fpstate __user *fp, sigset_t *set,
+ struct _xstate __user *fp, sigset_t *set,
unsigned long sp)
{
int err = 0;
@@ -353,7 +340,7 @@ struct sigframe
char __user *pretcode;
int sig;
struct sigcontext sc;
- struct _fpstate fpstate;
+ struct _xstate fpstate;
unsigned long extramask[_NSIG_WORDS-1];
char retcode[8];
};
@@ -366,7 +353,7 @@ struct rt_sigframe
void __user *puc;
struct siginfo info;
struct ucontext uc;
- struct _fpstate fpstate;
+ struct _xstate fpstate;
char retcode[8];
};
@@ -495,7 +482,7 @@ struct rt_sigframe
char __user *pretcode;
struct ucontext uc;
struct siginfo info;
- struct _fpstate fpstate;
+ struct _xstate fpstate;
};
int setup_signal_stack_si(unsigned long stack_top, struct ksignal *ksig,
diff --git a/arch/x86/um/user-offsets.c b/arch/x86/um/user-offsets.c
index 470564bbd08e..cb3c22370cf5 100644
--- a/arch/x86/um/user-offsets.c
+++ b/arch/x86/um/user-offsets.c
@@ -50,7 +50,7 @@ void foo(void)
DEFINE(HOST_GS, GS);
DEFINE(HOST_ORIG_AX, ORIG_EAX);
#else
- DEFINE(HOST_FP_SIZE, sizeof(struct _fpstate) / sizeof(unsigned long));
+ DEFINE(HOST_FP_SIZE, sizeof(struct _xstate) / sizeof(unsigned long));
DEFINE_LONGS(HOST_BX, RBX);
DEFINE_LONGS(HOST_CX, RCX);
DEFINE_LONGS(HOST_DI, RDI);
diff --git a/arch/x86/um/vdso/vma.c b/arch/x86/um/vdso/vma.c
index 237c6831e095..6be22f991b59 100644
--- a/arch/x86/um/vdso/vma.c
+++ b/arch/x86/um/vdso/vma.c
@@ -61,7 +61,8 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
if (!vdso_enabled)
return 0;
- down_write(&mm->mmap_sem);
+ if (down_write_killable(&mm->mmap_sem))
+ return -EINTR;
err = install_special_mapping(mm, um_vdso_addr, PAGE_SIZE,
VM_READ|VM_EXEC|
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index 880862c7d9dd..760789ae8562 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -75,7 +75,6 @@
#include <asm/mach_traps.h>
#include <asm/mwait.h>
#include <asm/pci_x86.h>
-#include <asm/pat.h>
#include <asm/cpu.h>
#ifdef CONFIG_ACPI
@@ -1093,6 +1092,26 @@ static int xen_write_msr_safe(unsigned int msr, unsigned low, unsigned high)
return ret;
}
+static u64 xen_read_msr(unsigned int msr)
+{
+ /*
+ * This will silently swallow a #GP from RDMSR. It may be worth
+ * changing that.
+ */
+ int err;
+
+ return xen_read_msr_safe(msr, &err);
+}
+
+static void xen_write_msr(unsigned int msr, unsigned low, unsigned high)
+{
+ /*
+ * This will silently swallow a #GP from WRMSR. It may be worth
+ * changing that.
+ */
+ xen_write_msr_safe(msr, low, high);
+}
+
void xen_setup_shared_info(void)
{
if (!xen_feature(XENFEAT_auto_translated_physmap)) {
@@ -1187,13 +1206,11 @@ static unsigned xen_patch(u8 type, u16 clobbers, void *insnbuf,
}
static const struct pv_info xen_info __initconst = {
- .paravirt_enabled = 1,
.shared_kernel_pmd = 0,
#ifdef CONFIG_X86_64
.extra_user_64bit_cs = FLAT_USER_CS64,
#endif
- .features = 0,
.name = "Xen",
};
@@ -1223,8 +1240,11 @@ static const struct pv_cpu_ops xen_cpu_ops __initconst = {
.wbinvd = native_wbinvd,
- .read_msr = xen_read_msr_safe,
- .write_msr = xen_write_msr_safe,
+ .read_msr = xen_read_msr,
+ .write_msr = xen_write_msr,
+
+ .read_msr_safe = xen_read_msr_safe,
+ .write_msr_safe = xen_write_msr_safe,
.read_pmc = xen_read_pmc,
@@ -1469,10 +1489,10 @@ static void xen_pvh_set_cr_flags(int cpu)
* For BSP, PSE PGE are set in probe_page_size_mask(), for APs
* set them here. For all, OSFXSR OSXMMEXCPT are set in fpu__init_cpu().
*/
- if (cpu_has_pse)
+ if (boot_cpu_has(X86_FEATURE_PSE))
cr4_set_bits_and_update_boot(X86_CR4_PSE);
- if (cpu_has_pge)
+ if (boot_cpu_has(X86_FEATURE_PGE))
cr4_set_bits_and_update_boot(X86_CR4_PGE);
}
@@ -1506,12 +1526,16 @@ static void __init xen_pvh_early_guest_init(void)
}
#endif /* CONFIG_XEN_PVH */
+static void __init xen_dom0_set_legacy_features(void)
+{
+ x86_platform.legacy.rtc = 1;
+}
+
/* First C function to be called on Xen boot */
asmlinkage __visible void __init xen_start_kernel(void)
{
struct physdev_set_iopl set_iopl;
unsigned long initrd_start = 0;
- u64 pat;
int rc;
if (!xen_start_info)
@@ -1527,8 +1551,6 @@ asmlinkage __visible void __init xen_start_kernel(void)
/* Install Xen paravirt ops */
pv_info = xen_info;
- if (xen_initial_domain())
- pv_info.features |= PV_SUPPORTED_RTC;
pv_init_ops = xen_init_ops;
if (!xen_pvh_domain()) {
pv_cpu_ops = xen_cpu_ops;
@@ -1618,13 +1640,6 @@ asmlinkage __visible void __init xen_start_kernel(void)
xen_start_info->nr_pages);
xen_reserve_special_pages();
- /*
- * Modify the cache mode translation tables to match Xen's PAT
- * configuration.
- */
- rdmsrl(MSR_IA32_CR_PAT, pat);
- pat_init_cache_modes(pat);
-
/* keep using Xen gdt for now; no urgent need to change it */
#ifdef CONFIG_X86_32
@@ -1670,6 +1685,7 @@ asmlinkage __visible void __init xen_start_kernel(void)
boot_params.hdr.ramdisk_image = initrd_start;
boot_params.hdr.ramdisk_size = xen_start_info->mod_len;
boot_params.hdr.cmd_line_ptr = __pa(xen_start_info->cmd_line);
+ boot_params.hdr.hardware_subarch = X86_SUBARCH_XEN;
if (!xen_initial_domain()) {
add_preferred_console("xenboot", 0, NULL);
@@ -1687,6 +1703,8 @@ asmlinkage __visible void __init xen_start_kernel(void)
.u.firmware_info.type = XEN_FW_KBD_SHIFT_FLAGS,
};
+ x86_platform.set_legacy_features =
+ xen_dom0_set_legacy_features;
xen_init_vga(info, xen_start_info->console.dom0.info_size);
xen_start_info->console.domU.mfn = 0;
xen_start_info->console.domU.evtchn = 0;
diff --git a/arch/x86/xen/setup.c b/arch/x86/xen/setup.c
index 7ab29518a3b9..e345891450c3 100644
--- a/arch/x86/xen/setup.c
+++ b/arch/x86/xen/setup.c
@@ -393,6 +393,9 @@ static unsigned long __init xen_set_identity_and_remap_chunk(
unsigned long i = 0;
unsigned long n = end_pfn - start_pfn;
+ if (remap_pfn == 0)
+ remap_pfn = nr_pages;
+
while (i < n) {
unsigned long cur_pfn = start_pfn + i;
unsigned long left = n - i;
@@ -438,17 +441,29 @@ static unsigned long __init xen_set_identity_and_remap_chunk(
return remap_pfn;
}
-static void __init xen_set_identity_and_remap(unsigned long nr_pages)
+static unsigned long __init xen_count_remap_pages(
+ unsigned long start_pfn, unsigned long end_pfn, unsigned long nr_pages,
+ unsigned long remap_pages)
+{
+ if (start_pfn >= nr_pages)
+ return remap_pages;
+
+ return remap_pages + min(end_pfn, nr_pages) - start_pfn;
+}
+
+static unsigned long __init xen_foreach_remap_area(unsigned long nr_pages,
+ unsigned long (*func)(unsigned long start_pfn, unsigned long end_pfn,
+ unsigned long nr_pages, unsigned long last_val))
{
phys_addr_t start = 0;
- unsigned long last_pfn = nr_pages;
+ unsigned long ret_val = 0;
const struct e820entry *entry = xen_e820_map;
int i;
/*
* Combine non-RAM regions and gaps until a RAM region (or the
- * end of the map) is reached, then set the 1:1 map and
- * remap the memory in those non-RAM regions.
+ * end of the map) is reached, then call the provided function
+ * to perform its duty on the non-RAM region.
*
* The combined non-RAM regions are rounded to a whole number
* of pages so any partial pages are accessible via the 1:1
@@ -466,14 +481,13 @@ static void __init xen_set_identity_and_remap(unsigned long nr_pages)
end_pfn = PFN_UP(entry->addr);
if (start_pfn < end_pfn)
- last_pfn = xen_set_identity_and_remap_chunk(
- start_pfn, end_pfn, nr_pages,
- last_pfn);
+ ret_val = func(start_pfn, end_pfn, nr_pages,
+ ret_val);
start = end;
}
}
- pr_info("Released %ld page(s)\n", xen_released_pages);
+ return ret_val;
}
/*
@@ -596,35 +610,6 @@ static void __init xen_ignore_unusable(void)
}
}
-static unsigned long __init xen_count_remap_pages(unsigned long max_pfn)
-{
- unsigned long extra = 0;
- unsigned long start_pfn, end_pfn;
- const struct e820entry *entry = xen_e820_map;
- int i;
-
- end_pfn = 0;
- for (i = 0; i < xen_e820_map_entries; i++, entry++) {
- start_pfn = PFN_DOWN(entry->addr);
- /* Adjacent regions on non-page boundaries handling! */
- end_pfn = min(end_pfn, start_pfn);
-
- if (start_pfn >= max_pfn)
- return extra + max_pfn - end_pfn;
-
- /* Add any holes in map to result. */
- extra += start_pfn - end_pfn;
-
- end_pfn = PFN_UP(entry->addr + entry->size);
- end_pfn = min(end_pfn, max_pfn);
-
- if (entry->type != E820_RAM)
- extra += end_pfn - start_pfn;
- }
-
- return extra;
-}
-
bool __init xen_is_e820_reserved(phys_addr_t start, phys_addr_t size)
{
struct e820entry *entry;
@@ -804,7 +789,7 @@ char * __init xen_memory_setup(void)
max_pages = xen_get_max_pages();
/* How many extra pages do we need due to remapping? */
- max_pages += xen_count_remap_pages(max_pfn);
+ max_pages += xen_foreach_remap_area(max_pfn, xen_count_remap_pages);
if (max_pages > max_pfn)
extra_pages += max_pages - max_pfn;
@@ -922,7 +907,9 @@ char * __init xen_memory_setup(void)
* Set identity map on non-RAM pages and prepare remapping the
* underlying RAM.
*/
- xen_set_identity_and_remap(max_pfn);
+ xen_foreach_remap_area(max_pfn, xen_set_identity_and_remap_chunk);
+
+ pr_info("Released %ld page(s)\n", xen_released_pages);
return "Xen";
}
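
The setup.c refactor above folds both "count the pages to remap" and "actually set up the 1:1 map" into a single walker, xen_foreach_remap_area(), which hands every non-RAM region to a callback and threads an accumulator value through the calls. A compact standalone sketch of that callback-fold shape, with an illustrative region layout and a counting callback in the spirit of xen_count_remap_pages():

#include <stdio.h>

struct region {
        unsigned long start_pfn;
        unsigned long end_pfn;
        int ram;                        /* non-zero for RAM regions */
};

typedef unsigned long (*region_fn)(unsigned long start_pfn,
                                   unsigned long end_pfn,
                                   unsigned long nr_pages,
                                   unsigned long last_val);

/* Call func on every non-RAM region, threading last_val through the calls. */
static unsigned long foreach_non_ram(const struct region *map, int n,
                                     unsigned long nr_pages, region_fn func)
{
        unsigned long val = 0;

        for (int i = 0; i < n; i++)
                if (!map[i].ram)
                        val = func(map[i].start_pfn, map[i].end_pfn,
                                   nr_pages, val);
        return val;
}

/* One possible callback: count how many of those pages sit below nr_pages. */
static unsigned long count_remap_pages(unsigned long start_pfn,
                                       unsigned long end_pfn,
                                       unsigned long nr_pages,
                                       unsigned long acc)
{
        if (start_pfn >= nr_pages)
                return acc;
        return acc + (end_pfn < nr_pages ? end_pfn : nr_pages) - start_pfn;
}

int main(void)
{
        struct region map[] = {
                { 0x000, 0x0a0, 1 },    /* RAM      */
                { 0x0a0, 0x100, 0 },    /* reserved */
                { 0x100, 0x800, 1 },    /* RAM      */
                { 0x800, 0x900, 0 },    /* MMIO     */
        };

        unsigned long remap = foreach_non_ram(map, 4, 0x850, count_remap_pages);
        printf("pages to remap: %lu\n", remap);   /* 0x60 + 0x50 = 176 */
        return 0;
}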
diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index a0a4e554c6f1..6deba5bc7e34 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -290,11 +290,11 @@ static int xen_vcpuop_set_next_event(unsigned long delta,
WARN_ON(!clockevent_state_oneshot(evt));
single.timeout_abs_ns = get_abs_timeout(delta);
- single.flags = VCPU_SSHOTTMR_future;
+ /* Get an event anyway, even if the timeout is already expired */
+ single.flags = 0;
ret = HYPERVISOR_vcpu_op(VCPUOP_set_singleshot_timer, cpu, &single);
-
- BUG_ON(ret != 0 && ret != -ETIME);
+ BUG_ON(ret != 0);
return ret;
}
diff --git a/arch/xtensa/Kconfig b/arch/xtensa/Kconfig
index e832d3e9835e..64336f666fb6 100644
--- a/arch/xtensa/Kconfig
+++ b/arch/xtensa/Kconfig
@@ -5,7 +5,6 @@ config XTENSA
def_bool y
select ARCH_WANT_FRAME_POINTERS
select ARCH_WANT_IPC_PARSE_VERSION
- select ARCH_WANT_OPTIONAL_GPIOLIB
select BUILDTIME_EXTABLE_SORT
select CLONE_BACKWARDS
select COMMON_CLK
@@ -15,6 +14,7 @@ config XTENSA
select GENERIC_PCI_IOMAP
select GENERIC_SCHED_CLOCK
select HAVE_DMA_API_DEBUG
+ select HAVE_EXIT_THREAD
select HAVE_FUNCTION_TRACER
select HAVE_FUTEX_CMPXCHG if !MMU
select HAVE_HW_BREAKPOINT if PERF_EVENTS
diff --git a/arch/xtensa/configs/generic_kc705_defconfig b/arch/xtensa/configs/generic_kc705_defconfig
index f4b7b3888da8..d9444f01f4da 100644
--- a/arch/xtensa/configs/generic_kc705_defconfig
+++ b/arch/xtensa/configs/generic_kc705_defconfig
@@ -11,7 +11,6 @@ CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_CPUACCT=y
-CONFIG_RESOURCE_COUNTERS=y
CONFIG_MEMCG=y
CONFIG_NAMESPACES=y
CONFIG_SCHED_AUTOGROUP=y
diff --git a/arch/xtensa/configs/smp_lx200_defconfig b/arch/xtensa/configs/smp_lx200_defconfig
index 22eeacba37cc..61f943c95619 100644
--- a/arch/xtensa/configs/smp_lx200_defconfig
+++ b/arch/xtensa/configs/smp_lx200_defconfig
@@ -11,7 +11,6 @@ CONFIG_CGROUP_FREEZER=y
CONFIG_CGROUP_DEVICE=y
CONFIG_CPUSETS=y
CONFIG_CGROUP_CPUACCT=y
-CONFIG_RESOURCE_COUNTERS=y
CONFIG_MEMCG=y
CONFIG_NAMESPACES=y
CONFIG_SCHED_AUTOGROUP=y
diff --git a/arch/xtensa/include/asm/Kbuild b/arch/xtensa/include/asm/Kbuild
index b56855a1382a..28cf4c5d65ef 100644
--- a/arch/xtensa/include/asm/Kbuild
+++ b/arch/xtensa/include/asm/Kbuild
@@ -22,6 +22,7 @@ generic-y += mm-arch-hooks.h
generic-y += percpu.h
generic-y += preempt.h
generic-y += resource.h
+generic-y += rwsem.h
generic-y += sections.h
generic-y += siginfo.h
generic-y += statfs.h
diff --git a/arch/xtensa/include/asm/rwsem.h b/arch/xtensa/include/asm/rwsem.h
deleted file mode 100644
index 249619e7e7f2..000000000000
--- a/arch/xtensa/include/asm/rwsem.h
+++ /dev/null
@@ -1,131 +0,0 @@
-/*
- * include/asm-xtensa/rwsem.h
- *
- * This file is subject to the terms and conditions of the GNU General Public
- * License. See the file "COPYING" in the main directory of this archive
- * for more details.
- *
- * Largely copied from include/asm-ppc/rwsem.h
- *
- * Copyright (C) 2001 - 2005 Tensilica Inc.
- */
-
-#ifndef _XTENSA_RWSEM_H
-#define _XTENSA_RWSEM_H
-
-#ifndef _LINUX_RWSEM_H
-#error "Please don't include <asm/rwsem.h> directly, use <linux/rwsem.h> instead."
-#endif
-
-#define RWSEM_UNLOCKED_VALUE 0x00000000
-#define RWSEM_ACTIVE_BIAS 0x00000001
-#define RWSEM_ACTIVE_MASK 0x0000ffff
-#define RWSEM_WAITING_BIAS (-0x00010000)
-#define RWSEM_ACTIVE_READ_BIAS RWSEM_ACTIVE_BIAS
-#define RWSEM_ACTIVE_WRITE_BIAS (RWSEM_WAITING_BIAS + RWSEM_ACTIVE_BIAS)
-
-/*
- * lock for reading
- */
-static inline void __down_read(struct rw_semaphore *sem)
-{
- if (atomic_add_return(1,(atomic_t *)(&sem->count)) > 0)
- smp_wmb();
- else
- rwsem_down_read_failed(sem);
-}
-
-static inline int __down_read_trylock(struct rw_semaphore *sem)
-{
- int tmp;
-
- while ((tmp = sem->count) >= 0) {
- if (tmp == cmpxchg(&sem->count, tmp,
- tmp + RWSEM_ACTIVE_READ_BIAS)) {
- smp_wmb();
- return 1;
- }
- }
- return 0;
-}
-
-/*
- * lock for writing
- */
-static inline void __down_write(struct rw_semaphore *sem)
-{
- int tmp;
-
- tmp = atomic_add_return(RWSEM_ACTIVE_WRITE_BIAS,
- (atomic_t *)(&sem->count));
- if (tmp == RWSEM_ACTIVE_WRITE_BIAS)
- smp_wmb();
- else
- rwsem_down_write_failed(sem);
-}
-
-static inline int __down_write_trylock(struct rw_semaphore *sem)
-{
- int tmp;
-
- tmp = cmpxchg(&sem->count, RWSEM_UNLOCKED_VALUE,
- RWSEM_ACTIVE_WRITE_BIAS);
- smp_wmb();
- return tmp == RWSEM_UNLOCKED_VALUE;
-}
-
-/*
- * unlock after reading
- */
-static inline void __up_read(struct rw_semaphore *sem)
-{
- int tmp;
-
- smp_wmb();
- tmp = atomic_sub_return(1,(atomic_t *)(&sem->count));
- if (tmp < -1 && (tmp & RWSEM_ACTIVE_MASK) == 0)
- rwsem_wake(sem);
-}
-
-/*
- * unlock after writing
- */
-static inline void __up_write(struct rw_semaphore *sem)
-{
- smp_wmb();
- if (atomic_sub_return(RWSEM_ACTIVE_WRITE_BIAS,
- (atomic_t *)(&sem->count)) < 0)
- rwsem_wake(sem);
-}
-
-/*
- * implement atomic add functionality
- */
-static inline void rwsem_atomic_add(int delta, struct rw_semaphore *sem)
-{
- atomic_add(delta, (atomic_t *)(&sem->count));
-}
-
-/*
- * downgrade write lock to read lock
- */
-static inline void __downgrade_write(struct rw_semaphore *sem)
-{
- int tmp;
-
- smp_wmb();
- tmp = atomic_add_return(-RWSEM_WAITING_BIAS, (atomic_t *)(&sem->count));
- if (tmp < 0)
- rwsem_downgrade_wake(sem);
-}
-
-/*
- * implement exchange and add functionality
- */
-static inline int rwsem_atomic_update(int delta, struct rw_semaphore *sem)
-{
- smp_mb();
- return atomic_add_return(delta, (atomic_t *)(&sem->count));
-}
-
-#endif /* _XTENSA_RWSEM_H */
diff --git a/arch/xtensa/kernel/perf_event.c b/arch/xtensa/kernel/perf_event.c
index 54f01188c29c..ef90479e0397 100644
--- a/arch/xtensa/kernel/perf_event.c
+++ b/arch/xtensa/kernel/perf_event.c
@@ -323,23 +323,23 @@ static void xtensa_pmu_read(struct perf_event *event)
static int callchain_trace(struct stackframe *frame, void *data)
{
- struct perf_callchain_entry *entry = data;
+ struct perf_callchain_entry_ctx *entry = data;
perf_callchain_store(entry, frame->pc);
return 0;
}
-void perf_callchain_kernel(struct perf_callchain_entry *entry,
+void perf_callchain_kernel(struct perf_callchain_entry_ctx *entry,
struct pt_regs *regs)
{
- xtensa_backtrace_kernel(regs, PERF_MAX_STACK_DEPTH,
+ xtensa_backtrace_kernel(regs, entry->max_stack,
callchain_trace, NULL, entry);
}
-void perf_callchain_user(struct perf_callchain_entry *entry,
+void perf_callchain_user(struct perf_callchain_entry_ctx *entry,
struct pt_regs *regs)
{
- xtensa_backtrace_user(regs, PERF_MAX_STACK_DEPTH,
+ xtensa_backtrace_user(regs, entry->max_stack,
callchain_trace, entry);
}
diff --git a/arch/xtensa/kernel/process.c b/arch/xtensa/kernel/process.c
index 5bbfed81c97b..e0ded48561db 100644
--- a/arch/xtensa/kernel/process.c
+++ b/arch/xtensa/kernel/process.c
@@ -115,10 +115,10 @@ void arch_cpu_idle(void)
/*
* This is called when the thread calls exit().
*/
-void exit_thread(void)
+void exit_thread(struct task_struct *tsk)
{
#if XTENSA_HAVE_COPROCESSORS
- coprocessor_release_all(current_thread_info());
+ coprocessor_release_all(task_thread_info(tsk));
#endif
}
diff --git a/arch/xtensa/platforms/iss/network.c b/arch/xtensa/platforms/iss/network.c
index 976a38594537..66a5d15a9e0e 100644
--- a/arch/xtensa/platforms/iss/network.c
+++ b/arch/xtensa/platforms/iss/network.c
@@ -428,7 +428,7 @@ static int iss_net_start_xmit(struct sk_buff *skb, struct net_device *dev)
if (len == skb->len) {
lp->stats.tx_packets++;
lp->stats.tx_bytes += skb->len;
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
netif_start_queue(dev);
/* this is normally done in the interrupt when tx finishes */
diff --git a/block/bio.c b/block/bio.c
index 807d25e466ec..0e4aa42bc30d 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -311,17 +311,6 @@ static void bio_chain_endio(struct bio *bio)
bio_endio(__bio_chain_endio(bio));
}
-/*
- * Increment chain count for the bio. Make sure the CHAIN flag update
- * is visible before the raised count.
- */
-static inline void bio_inc_remaining(struct bio *bio)
-{
- bio_set_flag(bio, BIO_CHAIN);
- smp_mb__before_atomic();
- atomic_inc(&bio->__bi_remaining);
-}
-
/**
* bio_chain - chain bio completions
* @bio: the target bio
diff --git a/block/blk-core.c b/block/blk-core.c
index b60537b2c35b..2475b1c72773 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -1523,6 +1523,7 @@ EXPORT_SYMBOL(blk_put_request);
* blk_add_request_payload - add a payload to a request
* @rq: request to update
* @page: page backing the payload
+ * @offset: offset in page
* @len: length of the payload.
*
* This allows to later add a payload to an already submitted request by
@@ -1533,12 +1534,12 @@ EXPORT_SYMBOL(blk_put_request);
* discard requests should ever use it.
*/
void blk_add_request_payload(struct request *rq, struct page *page,
- unsigned int len)
+ int offset, unsigned int len)
{
struct bio *bio = rq->bio;
bio->bi_io_vec->bv_page = page;
- bio->bi_io_vec->bv_offset = 0;
+ bio->bi_io_vec->bv_offset = offset;
bio->bi_io_vec->bv_len = len;
bio->bi_iter.bi_size = len;
@@ -1963,7 +1964,8 @@ generic_make_request_checks(struct bio *bio)
* drivers without flush support don't have to worry
* about them.
*/
- if ((bio->bi_rw & (REQ_FLUSH | REQ_FUA)) && !q->flush_flags) {
+ if ((bio->bi_rw & (REQ_FLUSH | REQ_FUA)) &&
+ !test_bit(QUEUE_FLAG_WC, &q->queue_flags)) {
bio->bi_rw &= ~(REQ_FLUSH | REQ_FUA);
if (!nr_sectors) {
err = 0;
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 9c423e53324a..b1c91d229e5e 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -95,17 +95,18 @@ enum {
static bool blk_kick_flush(struct request_queue *q,
struct blk_flush_queue *fq);
-static unsigned int blk_flush_policy(unsigned int fflags, struct request *rq)
+static unsigned int blk_flush_policy(unsigned long fflags, struct request *rq)
{
unsigned int policy = 0;
if (blk_rq_sectors(rq))
policy |= REQ_FSEQ_DATA;
- if (fflags & REQ_FLUSH) {
+ if (fflags & (1UL << QUEUE_FLAG_WC)) {
if (rq->cmd_flags & REQ_FLUSH)
policy |= REQ_FSEQ_PREFLUSH;
- if (!(fflags & REQ_FUA) && (rq->cmd_flags & REQ_FUA))
+ if (!(fflags & (1UL << QUEUE_FLAG_FUA)) &&
+ (rq->cmd_flags & REQ_FUA))
policy |= REQ_FSEQ_POSTFLUSH;
}
return policy;
@@ -384,7 +385,7 @@ static void mq_flush_data_end_io(struct request *rq, int error)
void blk_insert_flush(struct request *rq)
{
struct request_queue *q = rq->q;
- unsigned int fflags = q->flush_flags; /* may change, cache */
+ unsigned long fflags = q->queue_flags; /* may change, cache */
unsigned int policy = blk_flush_policy(fflags, rq);
struct blk_flush_queue *fq = blk_get_flush_queue(q, rq->mq_ctx);
@@ -393,7 +394,7 @@ void blk_insert_flush(struct request *rq)
* REQ_FLUSH and FUA for the driver.
*/
rq->cmd_flags &= ~REQ_FLUSH;
- if (!(fflags & REQ_FUA))
+ if (!(fflags & (1UL << QUEUE_FLAG_FUA)))
rq->cmd_flags &= ~REQ_FUA;
/*
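
blk_flush_policy() above now derives the flush sequence straight from the queue flags: data is a step whenever the request carries sectors, a pre-flush is added only when the queue has a volatile write cache, and a post-flush is added when the request wants FUA but the device has no native FUA. A tiny standalone demo of that decision; the flag and sequence constants are local stand-ins, not the kernel's values:

#include <stdio.h>

#define Q_WC            (1UL << 0)      /* queue has a write-back cache */
#define Q_FUA           (1UL << 1)      /* queue supports native FUA    */

#define SEQ_DATA        (1U << 0)
#define SEQ_PREFLUSH    (1U << 1)
#define SEQ_POSTFLUSH   (1U << 2)

static unsigned int flush_policy(unsigned long qflags, int has_data,
                                 int want_flush, int want_fua)
{
        unsigned int policy = 0;

        if (has_data)
                policy |= SEQ_DATA;
        if (qflags & Q_WC) {
                if (want_flush)
                        policy |= SEQ_PREFLUSH;
                if (!(qflags & Q_FUA) && want_fua)
                        policy |= SEQ_POSTFLUSH;
        }
        return policy;
}

int main(void)
{
        /* FUA write on a write-back queue without native FUA:
         * the data step must be followed by an emulating post-flush. */
        unsigned int p = flush_policy(Q_WC, 1, 0, 1);

        printf("policy=0x%x (data%s%s)\n", p,
               (p & SEQ_PREFLUSH)  ? "+preflush"  : "",
               (p & SEQ_POSTFLUSH) ? "+postflush" : "");
        return 0;
}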
diff --git a/block/blk-lib.c b/block/blk-lib.c
index 9ebf65379556..23d7f301a196 100644
--- a/block/blk-lib.c
+++ b/block/blk-lib.c
@@ -9,82 +9,46 @@
#include "blk.h"
-struct bio_batch {
- atomic_t done;
- int error;
- struct completion *wait;
-};
-
-static void bio_batch_end_io(struct bio *bio)
+static struct bio *next_bio(struct bio *bio, int rw, unsigned int nr_pages,
+ gfp_t gfp)
{
- struct bio_batch *bb = bio->bi_private;
+ struct bio *new = bio_alloc(gfp, nr_pages);
+
+ if (bio) {
+ bio_chain(bio, new);
+ submit_bio(rw, bio);
+ }
- if (bio->bi_error && bio->bi_error != -EOPNOTSUPP)
- bb->error = bio->bi_error;
- if (atomic_dec_and_test(&bb->done))
- complete(bb->wait);
- bio_put(bio);
+ return new;
}
-/**
- * blkdev_issue_discard - queue a discard
- * @bdev: blockdev to issue discard for
- * @sector: start sector
- * @nr_sects: number of sectors to discard
- * @gfp_mask: memory allocation flags (for bio_alloc)
- * @flags: BLKDEV_IFL_* flags to control behaviour
- *
- * Description:
- * Issue a discard request for the sectors in question.
- */
-int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
- sector_t nr_sects, gfp_t gfp_mask, unsigned long flags)
+int __blkdev_issue_discard(struct block_device *bdev, sector_t sector,
+ sector_t nr_sects, gfp_t gfp_mask, int type, struct bio **biop)
{
- DECLARE_COMPLETION_ONSTACK(wait);
struct request_queue *q = bdev_get_queue(bdev);
- int type = REQ_WRITE | REQ_DISCARD;
+ struct bio *bio = *biop;
unsigned int granularity;
int alignment;
- struct bio_batch bb;
- struct bio *bio;
- int ret = 0;
- struct blk_plug plug;
if (!q)
return -ENXIO;
-
if (!blk_queue_discard(q))
return -EOPNOTSUPP;
+ if ((type & REQ_SECURE) && !blk_queue_secdiscard(q))
+ return -EOPNOTSUPP;
/* Zero-sector (unknown) and one-sector granularities are the same. */
granularity = max(q->limits.discard_granularity >> 9, 1U);
alignment = (bdev_discard_alignment(bdev) >> 9) % granularity;
- if (flags & BLKDEV_DISCARD_SECURE) {
- if (!blk_queue_secdiscard(q))
- return -EOPNOTSUPP;
- type |= REQ_SECURE;
- }
-
- atomic_set(&bb.done, 1);
- bb.error = 0;
- bb.wait = &wait;
-
- blk_start_plug(&plug);
while (nr_sects) {
unsigned int req_sects;
sector_t end_sect, tmp;
- bio = bio_alloc(gfp_mask, 1);
- if (!bio) {
- ret = -ENOMEM;
- break;
- }
-
/* Make sure bi_size doesn't overflow */
req_sects = min_t(sector_t, nr_sects, UINT_MAX >> 9);
- /*
+ /**
* If splitting a request, and the next starting sector would be
* misaligned, stop the discard at the previous aligned sector.
*/
@@ -98,18 +62,14 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
req_sects = end_sect - sector;
}
+ bio = next_bio(bio, type, 1, gfp_mask);
bio->bi_iter.bi_sector = sector;
- bio->bi_end_io = bio_batch_end_io;
bio->bi_bdev = bdev;
- bio->bi_private = &bb;
bio->bi_iter.bi_size = req_sects << 9;
nr_sects -= req_sects;
sector = end_sect;
- atomic_inc(&bb.done);
- submit_bio(type, bio);
-
/*
* We can loop for a long time in here, if someone does
* full device discards (like mkfs). Be nice and allow
@@ -118,14 +78,44 @@ int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
*/
cond_resched();
}
- blk_finish_plug(&plug);
- /* Wait for bios in-flight */
- if (!atomic_dec_and_test(&bb.done))
- wait_for_completion_io(&wait);
+ *biop = bio;
+ return 0;
+}
+EXPORT_SYMBOL(__blkdev_issue_discard);
+
+/**
+ * blkdev_issue_discard - queue a discard
+ * @bdev: blockdev to issue discard for
+ * @sector: start sector
+ * @nr_sects: number of sectors to discard
+ * @gfp_mask: memory allocation flags (for bio_alloc)
+ * @flags: BLKDEV_IFL_* flags to control behaviour
+ *
+ * Description:
+ * Issue a discard request for the sectors in question.
+ */
+int blkdev_issue_discard(struct block_device *bdev, sector_t sector,
+ sector_t nr_sects, gfp_t gfp_mask, unsigned long flags)
+{
+ int type = REQ_WRITE | REQ_DISCARD;
+ struct bio *bio = NULL;
+ struct blk_plug plug;
+ int ret;
+
+ if (flags & BLKDEV_DISCARD_SECURE)
+ type |= REQ_SECURE;
+
+ blk_start_plug(&plug);
+ ret = __blkdev_issue_discard(bdev, sector, nr_sects, gfp_mask, type,
+ &bio);
+ if (!ret && bio) {
+ ret = submit_bio_wait(type, bio);
+ if (ret == -EOPNOTSUPP)
+ ret = 0;
+ }
+ blk_finish_plug(&plug);
- if (bb.error)
- return bb.error;
return ret;
}
EXPORT_SYMBOL(blkdev_issue_discard);
@@ -145,11 +135,9 @@ int blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
sector_t nr_sects, gfp_t gfp_mask,
struct page *page)
{
- DECLARE_COMPLETION_ONSTACK(wait);
struct request_queue *q = bdev_get_queue(bdev);
unsigned int max_write_same_sectors;
- struct bio_batch bb;
- struct bio *bio;
+ struct bio *bio = NULL;
int ret = 0;
if (!q)
@@ -158,21 +146,10 @@ int blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
/* Ensure that max_write_same_sectors doesn't overflow bi_size */
max_write_same_sectors = UINT_MAX >> 9;
- atomic_set(&bb.done, 1);
- bb.error = 0;
- bb.wait = &wait;
-
while (nr_sects) {
- bio = bio_alloc(gfp_mask, 1);
- if (!bio) {
- ret = -ENOMEM;
- break;
- }
-
+ bio = next_bio(bio, REQ_WRITE | REQ_WRITE_SAME, 1, gfp_mask);
bio->bi_iter.bi_sector = sector;
- bio->bi_end_io = bio_batch_end_io;
bio->bi_bdev = bdev;
- bio->bi_private = &bb;
bio->bi_vcnt = 1;
bio->bi_io_vec->bv_page = page;
bio->bi_io_vec->bv_offset = 0;
@@ -186,18 +163,11 @@ int blkdev_issue_write_same(struct block_device *bdev, sector_t sector,
bio->bi_iter.bi_size = nr_sects << 9;
nr_sects = 0;
}
-
- atomic_inc(&bb.done);
- submit_bio(REQ_WRITE | REQ_WRITE_SAME, bio);
}
- /* Wait for bios in-flight */
- if (!atomic_dec_and_test(&bb.done))
- wait_for_completion_io(&wait);
-
- if (bb.error)
- return bb.error;
- return ret;
+ if (bio)
+ ret = submit_bio_wait(REQ_WRITE | REQ_WRITE_SAME, bio);
+ return ret != -EOPNOTSUPP ? ret : 0;
}
EXPORT_SYMBOL(blkdev_issue_write_same);
@@ -216,28 +186,15 @@ static int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
sector_t nr_sects, gfp_t gfp_mask)
{
int ret;
- struct bio *bio;
- struct bio_batch bb;
+ struct bio *bio = NULL;
unsigned int sz;
- DECLARE_COMPLETION_ONSTACK(wait);
-
- atomic_set(&bb.done, 1);
- bb.error = 0;
- bb.wait = &wait;
- ret = 0;
while (nr_sects != 0) {
- bio = bio_alloc(gfp_mask,
- min(nr_sects, (sector_t)BIO_MAX_PAGES));
- if (!bio) {
- ret = -ENOMEM;
- break;
- }
-
+ bio = next_bio(bio, WRITE,
+ min(nr_sects, (sector_t)BIO_MAX_PAGES),
+ gfp_mask);
bio->bi_iter.bi_sector = sector;
bio->bi_bdev = bdev;
- bio->bi_end_io = bio_batch_end_io;
- bio->bi_private = &bb;
while (nr_sects != 0) {
sz = min((sector_t) PAGE_SIZE >> 9 , nr_sects);
@@ -247,18 +204,11 @@ static int __blkdev_issue_zeroout(struct block_device *bdev, sector_t sector,
if (ret < (sz << 9))
break;
}
- ret = 0;
- atomic_inc(&bb.done);
- submit_bio(WRITE, bio);
}
- /* Wait for bios in-flight */
- if (!atomic_dec_and_test(&bb.done))
- wait_for_completion_io(&wait);
-
- if (bb.error)
- return bb.error;
- return ret;
+ if (bio)
+ return submit_bio_wait(WRITE, bio);
+ return 0;
}
/**
diff --git a/block/blk-map.c b/block/blk-map.c
index a54f0543b956..b9f88b7751fb 100644
--- a/block/blk-map.c
+++ b/block/blk-map.c
@@ -9,24 +9,6 @@
#include "blk.h"
-static bool iovec_gap_to_prv(struct request_queue *q,
- struct iovec *prv, struct iovec *cur)
-{
- unsigned long prev_end;
-
- if (!queue_virt_boundary(q))
- return false;
-
- if (prv->iov_base == NULL && prv->iov_len == 0)
- /* prv is not set - don't check */
- return false;
-
- prev_end = (unsigned long)(prv->iov_base + prv->iov_len);
-
- return (((unsigned long)cur->iov_base & queue_virt_boundary(q)) ||
- prev_end & queue_virt_boundary(q));
-}
-
int blk_rq_append_bio(struct request_queue *q, struct request *rq,
struct bio *bio)
{
@@ -125,31 +107,18 @@ int blk_rq_map_user_iov(struct request_queue *q, struct request *rq,
struct rq_map_data *map_data,
const struct iov_iter *iter, gfp_t gfp_mask)
{
- struct iovec iov, prv = {.iov_base = NULL, .iov_len = 0};
- bool copy = (q->dma_pad_mask & iter->count) || map_data;
+ bool copy = false;
+ unsigned long align = q->dma_pad_mask | queue_dma_alignment(q);
struct bio *bio = NULL;
struct iov_iter i;
int ret;
- if (!iter || !iter->count)
- return -EINVAL;
-
- iov_for_each(iov, i, *iter) {
- unsigned long uaddr = (unsigned long) iov.iov_base;
-
- if (!iov.iov_len)
- return -EINVAL;
-
- /*
- * Keep going so we check length of all segments
- */
- if ((uaddr & queue_dma_alignment(q)) ||
- iovec_gap_to_prv(q, &prv, &iov))
- copy = true;
-
- prv.iov_base = iov.iov_base;
- prv.iov_len = iov.iov_len;
- }
+ if (map_data)
+ copy = true;
+ else if (iov_iter_alignment(iter) & align)
+ copy = true;
+ else if (queue_virt_boundary(q))
+ copy = queue_virt_boundary(q) & iov_iter_gap_alignment(iter);
i = *iter;
do {
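
The blk-map.c rewrite above drops the per-segment loop and instead asks the iterator two questions: does any segment violate the queue's DMA alignment (iov_iter_alignment()), and does any inter-segment boundary cross the queue's virt boundary (iov_iter_gap_alignment())? Conceptually, the alignment half reduces to OR-ing every base and length together and testing the low bits against the mask. A standalone sketch of that reduction; the mask value is an example, not taken from a real queue:

#include <stdio.h>
#include <stdalign.h>
#include <sys/uio.h>

/* OR every segment base and length together: any set low bit means
 * at least one segment is misaligned to that boundary. */
static unsigned long iovec_alignment(const struct iovec *iov, int cnt)
{
        unsigned long res = 0;

        for (int i = 0; i < cnt; i++)
                res |= (unsigned long)iov[i].iov_base | iov[i].iov_len;
        return res;
}

int main(void)
{
        static alignas(512) char a[1024];
        struct iovec vec[2] = {
                { a,       512 },       /* aligned base, aligned length   */
                { a + 512, 100 },       /* aligned base, unaligned length */
        };
        unsigned long dma_mask = 511;   /* example: device wants 512-byte alignment */

        int need_copy = (iovec_alignment(vec, 2) & dma_mask) != 0;
        printf("bounce to an aligned copy: %s\n", need_copy ? "yes" : "no");
        return 0;
}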
diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
index abdbb47405cb..56a0c37a3d06 100644
--- a/block/blk-mq-tag.c
+++ b/block/blk-mq-tag.c
@@ -464,15 +464,26 @@ static void bt_tags_for_each(struct blk_mq_tags *tags,
}
}
-void blk_mq_all_tag_busy_iter(struct blk_mq_tags *tags, busy_tag_iter_fn *fn,
- void *priv)
+static void blk_mq_all_tag_busy_iter(struct blk_mq_tags *tags,
+ busy_tag_iter_fn *fn, void *priv)
{
if (tags->nr_reserved_tags)
bt_tags_for_each(tags, &tags->breserved_tags, 0, fn, priv, true);
bt_tags_for_each(tags, &tags->bitmap_tags, tags->nr_reserved_tags, fn, priv,
false);
}
-EXPORT_SYMBOL(blk_mq_all_tag_busy_iter);
+
+void blk_mq_tagset_busy_iter(struct blk_mq_tag_set *tagset,
+ busy_tag_iter_fn *fn, void *priv)
+{
+ int i;
+
+ for (i = 0; i < tagset->nr_hw_queues; i++) {
+ if (tagset->tags && tagset->tags[i])
+ blk_mq_all_tag_busy_iter(tagset->tags[i], fn, priv);
+ }
+}
+EXPORT_SYMBOL(blk_mq_tagset_busy_iter);
void blk_mq_queue_tag_busy_iter(struct request_queue *q, busy_iter_fn *fn,
void *priv)
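
blk_mq_tagset_busy_iter() gives drivers a way to walk every in-flight request of a tag set without reaching into per-queue internals. A minimal sketch of a caller follows; the driver structure and callback names are hypothetical and not part of this patch:

/* Sketch only: iterate all busy requests of a tag set.
 * "struct mydev" and mydev_abort_rq() are made-up driver pieces.
 */
#include <linux/blk-mq.h>

struct mydev {
	struct blk_mq_tag_set tag_set;
};

static void mydev_abort_rq(struct request *rq, void *data, bool reserved)
{
	struct mydev *dev = data;

	/* driver-specific handling of the busy request would go here */
	(void)dev;
}

static void mydev_abort_all(struct mydev *dev)
{
	blk_mq_tagset_busy_iter(&dev->tag_set, mydev_abort_rq, dev);
}
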
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 1699baf39b78..29cbc1b5fbdb 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1122,8 +1122,7 @@ static void blk_mq_bio_to_request(struct request *rq, struct bio *bio)
{
init_request_from_bio(rq, bio);
- if (blk_do_io_stat(rq))
- blk_account_io_start(rq, 1);
+ blk_account_io_start(rq, 1);
}
static inline bool hctx_allow_merges(struct blk_mq_hw_ctx *hctx)
@@ -1496,7 +1495,7 @@ static struct blk_mq_tags *blk_mq_init_rq_map(struct blk_mq_tag_set *set,
int to_do;
void *p;
- while (left < order_to_size(this_order - 1) && this_order)
+ while (this_order && left < order_to_size(this_order - 1))
this_order--;
do {
@@ -2021,7 +2020,7 @@ struct request_queue *blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
q->queue_ctx = alloc_percpu(struct blk_mq_ctx);
if (!q->queue_ctx)
- return ERR_PTR(-ENOMEM);
+ goto err_exit;
q->queue_hw_ctx = kzalloc_node(nr_cpu_ids * sizeof(*(q->queue_hw_ctx)),
GFP_KERNEL, set->numa_node);
@@ -2085,6 +2084,8 @@ err_map:
kfree(q->queue_hw_ctx);
err_percpu:
free_percpu(q->queue_ctx);
+err_exit:
+ q->mq_ops = NULL;
return ERR_PTR(-ENOMEM);
}
EXPORT_SYMBOL(blk_mq_init_allocated_queue);
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 331e4eee0dda..f679ae122843 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -820,32 +820,40 @@ void blk_queue_update_dma_alignment(struct request_queue *q, int mask)
}
EXPORT_SYMBOL(blk_queue_update_dma_alignment);
-/**
- * blk_queue_flush - configure queue's cache flush capability
- * @q: the request queue for the device
- * @flush: 0, REQ_FLUSH or REQ_FLUSH | REQ_FUA
- *
- * Tell block layer cache flush capability of @q. If it supports
- * flushing, REQ_FLUSH should be set. If it supports bypassing
- * write cache for individual writes, REQ_FUA should be set.
- */
-void blk_queue_flush(struct request_queue *q, unsigned int flush)
-{
- WARN_ON_ONCE(flush & ~(REQ_FLUSH | REQ_FUA));
-
- if (WARN_ON_ONCE(!(flush & REQ_FLUSH) && (flush & REQ_FUA)))
- flush &= ~REQ_FUA;
-
- q->flush_flags = flush & (REQ_FLUSH | REQ_FUA);
-}
-EXPORT_SYMBOL_GPL(blk_queue_flush);
-
void blk_queue_flush_queueable(struct request_queue *q, bool queueable)
{
- q->flush_not_queueable = !queueable;
+ spin_lock_irq(q->queue_lock);
+ if (queueable)
+ clear_bit(QUEUE_FLAG_FLUSH_NQ, &q->queue_flags);
+ else
+ set_bit(QUEUE_FLAG_FLUSH_NQ, &q->queue_flags);
+ spin_unlock_irq(q->queue_lock);
}
EXPORT_SYMBOL_GPL(blk_queue_flush_queueable);
+/**
+ * blk_queue_write_cache - configure queue's write cache
+ * @q: the request queue for the device
+ * @wc: write back cache on or off
+ * @fua: device supports FUA writes, if true
+ *
+ * Tell the block layer about the write cache of @q.
+ */
+void blk_queue_write_cache(struct request_queue *q, bool wc, bool fua)
+{
+ spin_lock_irq(q->queue_lock);
+ if (wc)
+ queue_flag_set(QUEUE_FLAG_WC, q);
+ else
+ queue_flag_clear(QUEUE_FLAG_WC, q);
+ if (fua)
+ queue_flag_set(QUEUE_FLAG_FUA, q);
+ else
+ queue_flag_clear(QUEUE_FLAG_FUA, q);
+ spin_unlock_irq(q->queue_lock);
+}
+EXPORT_SYMBOL_GPL(blk_queue_write_cache);
+
static int __init blk_settings_init(void)
{
blk_max_low_pfn = max_low_pfn - 1;
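
blk_queue_write_cache() replaces blk_queue_flush() as the way a driver advertises its cache behaviour. A rough sketch of a converted call site, with the wc/fua booleans standing in for whatever the driver probes from its hardware:

/* Sketch only: advertising a volatile write cache and FUA support.
 * "q", "has_wb_cache" and "has_fua" stand in for real driver state.
 */
#include <linux/blkdev.h>

static void mydrv_configure_cache(struct request_queue *q,
				  bool has_wb_cache, bool has_fua)
{
	/* previously something like:
	 *   blk_queue_flush(q, (has_wb_cache ? REQ_FLUSH : 0) |
	 *		       (has_fua ? REQ_FUA : 0));
	 */
	blk_queue_write_cache(q, has_wb_cache, has_wb_cache && has_fua);
}
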
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 995b58d46ed1..99205965f559 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -347,6 +347,38 @@ static ssize_t queue_poll_store(struct request_queue *q, const char *page,
return ret;
}
+static ssize_t queue_wc_show(struct request_queue *q, char *page)
+{
+ if (test_bit(QUEUE_FLAG_WC, &q->queue_flags))
+ return sprintf(page, "write back\n");
+
+ return sprintf(page, "write through\n");
+}
+
+static ssize_t queue_wc_store(struct request_queue *q, const char *page,
+ size_t count)
+{
+ int set = -1;
+
+ if (!strncmp(page, "write back", 10))
+ set = 1;
+ else if (!strncmp(page, "write through", 13) ||
+ !strncmp(page, "none", 4))
+ set = 0;
+
+ if (set == -1)
+ return -EINVAL;
+
+ spin_lock_irq(q->queue_lock);
+ if (set)
+ queue_flag_set(QUEUE_FLAG_WC, q);
+ else
+ queue_flag_clear(QUEUE_FLAG_WC, q);
+ spin_unlock_irq(q->queue_lock);
+
+ return count;
+}
+
static struct queue_sysfs_entry queue_requests_entry = {
.attr = {.name = "nr_requests", .mode = S_IRUGO | S_IWUSR },
.show = queue_requests_show,
@@ -478,6 +510,12 @@ static struct queue_sysfs_entry queue_poll_entry = {
.store = queue_poll_store,
};
+static struct queue_sysfs_entry queue_wc_entry = {
+ .attr = {.name = "write_cache", .mode = S_IRUGO | S_IWUSR },
+ .show = queue_wc_show,
+ .store = queue_wc_store,
+};
+
static struct attribute *default_attrs[] = {
&queue_requests_entry.attr,
&queue_ra_entry.attr,
@@ -503,6 +541,7 @@ static struct attribute *default_attrs[] = {
&queue_iostats_entry.attr,
&queue_random_entry.attr,
&queue_poll_entry.attr,
+ &queue_wc_entry.attr,
NULL,
};
diff --git a/block/blk-throttle.c b/block/blk-throttle.c
index 2149a1ddbacf..47a3e540631a 100644
--- a/block/blk-throttle.c
+++ b/block/blk-throttle.c
@@ -211,15 +211,14 @@ static struct throtl_data *sq_to_td(struct throtl_service_queue *sq)
*
* The messages are prefixed with "throtl BLKG_NAME" if @sq belongs to a
* throtl_grp; otherwise, just "throtl".
- *
- * TODO: this should be made a function and name formatting should happen
- * after testing whether blktrace is enabled.
*/
#define throtl_log(sq, fmt, args...) do { \
struct throtl_grp *__tg = sq_to_tg((sq)); \
struct throtl_data *__td = sq_to_td((sq)); \
\
(void)__td; \
+ if (likely(!blk_trace_note_message_enabled(__td->queue))) \
+ break; \
if ((__tg)) { \
char __pbuf[128]; \
\
diff --git a/block/ioctl.c b/block/ioctl.c
index 4ff1f92f89ca..ed2397f8de9d 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -4,7 +4,6 @@
#include <linux/gfp.h>
#include <linux/blkpg.h>
#include <linux/hdreg.h>
-#include <linux/badblocks.h>
#include <linux/backing-dev.h>
#include <linux/fs.h>
#include <linux/blktrace_api.h>
@@ -407,35 +406,6 @@ static inline int is_unrecognized_ioctl(int ret)
ret == -ENOIOCTLCMD;
}
-#ifdef CONFIG_FS_DAX
-bool blkdev_dax_capable(struct block_device *bdev)
-{
- struct gendisk *disk = bdev->bd_disk;
-
- if (!disk->fops->direct_access)
- return false;
-
- /*
- * If the partition is not aligned on a page boundary, we can't
- * do dax I/O to it.
- */
- if ((bdev->bd_part->start_sect % (PAGE_SIZE / 512))
- || (bdev->bd_part->nr_sects % (PAGE_SIZE / 512)))
- return false;
-
- /*
- * If the device has known bad blocks, force all I/O through the
- * driver / page cache.
- *
- * TODO: support finer grained dax error handling
- */
- if (disk->bb && disk->bb->count)
- return false;
-
- return true;
-}
-#endif
-
static int blkdev_flushbuf(struct block_device *bdev, fmode_t mode,
unsigned cmd, unsigned long arg)
{
@@ -598,9 +568,6 @@ int blkdev_ioctl(struct block_device *bdev, fmode_t mode, unsigned cmd,
case BLKTRACESETUP:
case BLKTRACETEARDOWN:
return blk_trace_ioctl(bdev, cmd, argp);
- case BLKDAXGET:
- return put_int(arg, !!(bdev->bd_inode->i_flags & S_DAX));
- break;
case IOC_PR_REGISTER:
return blkdev_pr_register(bdev, argp);
case IOC_PR_RESERVE:
diff --git a/block/partitions/efi.c b/block/partitions/efi.c
index 26cb624ace05..bcd86e5cd546 100644
--- a/block/partitions/efi.c
+++ b/block/partitions/efi.c
@@ -430,7 +430,7 @@ static int is_gpt_valid(struct parsed_partitions *state, u64 lba,
}
/* Check that sizeof_partition_entry has the correct value */
if (le32_to_cpu((*gpt)->sizeof_partition_entry) != sizeof(gpt_entry)) {
- pr_debug("GUID Partitition Entry Size check failed.\n");
+ pr_debug("GUID Partition Entry Size check failed.\n");
goto fail;
}
@@ -443,7 +443,7 @@ static int is_gpt_valid(struct parsed_partitions *state, u64 lba,
le32_to_cpu((*gpt)->sizeof_partition_entry));
if (crc != le32_to_cpu((*gpt)->partition_entry_array_crc32)) {
- pr_debug("GUID Partitition Entry Array CRC check failed.\n");
+ pr_debug("GUID Partition Entry Array CRC check failed.\n");
goto fail_ptes;
}
diff --git a/block/partitions/ldm.c b/block/partitions/ldm.c
index e507cfbd044e..edcea70674c9 100644
--- a/block/partitions/ldm.c
+++ b/block/partitions/ldm.c
@@ -27,6 +27,8 @@
#include <linux/pagemap.h>
#include <linux/stringify.h>
#include <linux/kernel.h>
+#include <linux/uuid.h>
+
#include "ldm.h"
#include "check.h"
#include "msdos.h"
@@ -66,60 +68,6 @@ void _ldm_printk(const char *level, const char *function, const char *fmt, ...)
}
/**
- * ldm_parse_hexbyte - Convert a ASCII hex number to a byte
- * @src: Pointer to at least 2 characters to convert.
- *
- * Convert a two character ASCII hex string to a number.
- *
- * Return: 0-255 Success, the byte was parsed correctly
- * -1 Error, an invalid character was supplied
- */
-static int ldm_parse_hexbyte (const u8 *src)
-{
- unsigned int x; /* For correct wrapping */
- int h;
-
- /* high part */
- x = h = hex_to_bin(src[0]);
- if (h < 0)
- return -1;
-
- /* low part */
- h = hex_to_bin(src[1]);
- if (h < 0)
- return -1;
-
- return (x << 4) + h;
-}
-
-/**
- * ldm_parse_guid - Convert GUID from ASCII to binary
- * @src: 36 char string of the form fa50ff2b-f2e8-45de-83fa-65417f2f49ba
- * @dest: Memory block to hold binary GUID (16 bytes)
- *
- * N.B. The GUID need not be NULL terminated.
- *
- * Return: 'true' @dest contains binary GUID
- * 'false' @dest contents are undefined
- */
-static bool ldm_parse_guid (const u8 *src, u8 *dest)
-{
- static const int size[] = { 4, 2, 2, 2, 6 };
- int i, j, v;
-
- if (src[8] != '-' || src[13] != '-' ||
- src[18] != '-' || src[23] != '-')
- return false;
-
- for (j = 0; j < 5; j++, src++)
- for (i = 0; i < size[j]; i++, src+=2, *dest++ = v)
- if ((v = ldm_parse_hexbyte (src)) < 0)
- return false;
-
- return true;
-}
-
-/**
* ldm_parse_privhead - Read the LDM Database PRIVHEAD structure
* @data: Raw database PRIVHEAD structure loaded from the device
* @ph: In-memory privhead structure in which to return parsed information
@@ -167,7 +115,7 @@ static bool ldm_parse_privhead(const u8 *data, struct privhead *ph)
ldm_error("PRIVHEAD disk size doesn't match real disk size");
return false;
}
- if (!ldm_parse_guid(data + 0x0030, ph->disk_id)) {
+ if (uuid_be_to_bin(data + 0x0030, (uuid_be *)ph->disk_id)) {
ldm_error("PRIVHEAD contains an invalid GUID.");
return false;
}
@@ -944,7 +892,7 @@ static bool ldm_parse_dsk3 (const u8 *buffer, int buflen, struct vblk *vb)
disk = &vb->vblk.disk;
ldm_get_vstr (buffer + 0x18 + r_diskid, disk->alt_name,
sizeof (disk->alt_name));
- if (!ldm_parse_guid (buffer + 0x19 + r_name, disk->disk_id))
+ if (uuid_be_to_bin(buffer + 0x19 + r_name, (uuid_be *)disk->disk_id))
return false;
return true;
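
The removed ldm_parse_hexbyte()/ldm_parse_guid() pair is superseded by uuid_be_to_bin() from <linux/uuid.h>, as used in the two hunks above. A hedged sketch of an equivalent helper, with illustrative buffer names:

/* Sketch only: parse a textual GUID such as
 * "fa50ff2b-f2e8-45de-83fa-65417f2f49ba" into 16 binary bytes.
 */
#include <linux/uuid.h>

static bool parse_disk_guid(const char *src, u8 *dest /* 16 bytes */)
{
	/* uuid_be_to_bin() returns 0 on success, negative on bad input */
	return uuid_be_to_bin(src, (uuid_be *)dest) == 0;
}
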
diff --git a/certs/Kconfig b/certs/Kconfig
index f0f8a4433685..fc5955f5fc8a 100644
--- a/certs/Kconfig
+++ b/certs/Kconfig
@@ -17,6 +17,7 @@ config MODULE_SIG_KEY
config SYSTEM_TRUSTED_KEYRING
bool "Provide system-wide ring of trusted keys"
depends on KEYS
+ depends on ASYMMETRIC_KEY_TYPE
help
Provide a system keyring to which trusted keys can be added. Keys in
the keyring are considered to be trusted. Keys may be added at will
@@ -55,4 +56,12 @@ config SYSTEM_EXTRA_CERTIFICATE_SIZE
This is the number of bytes reserved in the kernel image for a
certificate to be inserted.
+config SECONDARY_TRUSTED_KEYRING
+ bool "Provide a keyring to which extra trustable keys may be added"
+ depends on SYSTEM_TRUSTED_KEYRING
+ help
+ If set, provide a keyring to which extra keys may be added, provided
+ those keys are not blacklisted and are vouched for by a key built
+ into the kernel or already in the secondary trusted keyring.
+
endmenu
diff --git a/certs/system_keyring.c b/certs/system_keyring.c
index f4180326c2e1..50979d6dcecd 100644
--- a/certs/system_keyring.c
+++ b/certs/system_keyring.c
@@ -18,29 +18,88 @@
#include <keys/system_keyring.h>
#include <crypto/pkcs7.h>
-struct key *system_trusted_keyring;
-EXPORT_SYMBOL_GPL(system_trusted_keyring);
+static struct key *builtin_trusted_keys;
+#ifdef CONFIG_SECONDARY_TRUSTED_KEYRING
+static struct key *secondary_trusted_keys;
+#endif
extern __initconst const u8 system_certificate_list[];
extern __initconst const unsigned long system_certificate_list_size;
+/**
+ * restrict_link_by_builtin_trusted - Restrict keyring addition by built-in CA
+ *
+ * Restrict the addition of keys into a keyring based on the key-to-be-added
+ * being vouched for by a key in the built in system keyring.
+ */
+int restrict_link_by_builtin_trusted(struct key *keyring,
+ const struct key_type *type,
+ const union key_payload *payload)
+{
+ return restrict_link_by_signature(builtin_trusted_keys, type, payload);
+}
+
+#ifdef CONFIG_SECONDARY_TRUSTED_KEYRING
+/**
+ * restrict_link_by_builtin_and_secondary_trusted - Restrict keyring
+ * addition by both builtin and secondary keyrings
+ *
+ * Restrict the addition of keys into a keyring based on the key-to-be-added
+ * being vouched for by a key in either the built-in or the secondary system
+ * keyrings.
+ */
+int restrict_link_by_builtin_and_secondary_trusted(
+ struct key *keyring,
+ const struct key_type *type,
+ const union key_payload *payload)
+{
+ /* If we have a secondary trusted keyring, then that contains a link
+ * through to the builtin keyring and the search will follow that link.
+ */
+ if (type == &key_type_keyring &&
+ keyring == secondary_trusted_keys &&
+ payload == &builtin_trusted_keys->payload)
+ /* Allow the builtin keyring to be added to the secondary */
+ return 0;
+
+ return restrict_link_by_signature(secondary_trusted_keys, type, payload);
+}
+#endif
+
/*
- * Load the compiled-in keys
+ * Create the trusted keyrings
*/
static __init int system_trusted_keyring_init(void)
{
- pr_notice("Initialise system trusted keyring\n");
+ pr_notice("Initialise system trusted keyrings\n");
- system_trusted_keyring =
- keyring_alloc(".system_keyring",
+ builtin_trusted_keys =
+ keyring_alloc(".builtin_trusted_keys",
KUIDT_INIT(0), KGIDT_INIT(0), current_cred(),
((KEY_POS_ALL & ~KEY_POS_SETATTR) |
KEY_USR_VIEW | KEY_USR_READ | KEY_USR_SEARCH),
- KEY_ALLOC_NOT_IN_QUOTA, NULL);
- if (IS_ERR(system_trusted_keyring))
- panic("Can't allocate system trusted keyring\n");
+ KEY_ALLOC_NOT_IN_QUOTA,
+ NULL, NULL);
+ if (IS_ERR(builtin_trusted_keys))
+ panic("Can't allocate builtin trusted keyring\n");
+
+#ifdef CONFIG_SECONDARY_TRUSTED_KEYRING
+ secondary_trusted_keys =
+ keyring_alloc(".secondary_trusted_keys",
+ KUIDT_INIT(0), KGIDT_INIT(0), current_cred(),
+ ((KEY_POS_ALL & ~KEY_POS_SETATTR) |
+ KEY_USR_VIEW | KEY_USR_READ | KEY_USR_SEARCH |
+ KEY_USR_WRITE),
+ KEY_ALLOC_NOT_IN_QUOTA,
+ restrict_link_by_builtin_and_secondary_trusted,
+ NULL);
+ if (IS_ERR(secondary_trusted_keys))
+ panic("Can't allocate secondary trusted keyring\n");
+
+ if (key_link(secondary_trusted_keys, builtin_trusted_keys) < 0)
+ panic("Can't link trusted keyrings\n");
+#endif
- set_bit(KEY_FLAG_TRUSTED_ONLY, &system_trusted_keyring->flags);
return 0;
}
@@ -76,7 +135,7 @@ static __init int load_system_certificate_list(void)
if (plen > end - p)
goto dodgy_cert;
- key = key_create_or_update(make_key_ref(system_trusted_keyring, 1),
+ key = key_create_or_update(make_key_ref(builtin_trusted_keys, 1),
"asymmetric",
NULL,
p,
@@ -84,8 +143,8 @@ static __init int load_system_certificate_list(void)
((KEY_POS_ALL & ~KEY_POS_SETATTR) |
KEY_USR_VIEW | KEY_USR_READ),
KEY_ALLOC_NOT_IN_QUOTA |
- KEY_ALLOC_TRUSTED |
- KEY_ALLOC_BUILT_IN);
+ KEY_ALLOC_BUILT_IN |
+ KEY_ALLOC_BYPASS_RESTRICTION);
if (IS_ERR(key)) {
pr_err("Problem loading in-kernel X.509 certificate (%ld)\n",
PTR_ERR(key));
@@ -108,19 +167,27 @@ late_initcall(load_system_certificate_list);
#ifdef CONFIG_SYSTEM_DATA_VERIFICATION
/**
- * Verify a PKCS#7-based signature on system data.
- * @data: The data to be verified.
+ * verify_pkcs7_signature - Verify a PKCS#7-based signature on system data.
+ * @data: The data to be verified (NULL if expecting internal data).
* @len: Size of @data.
* @raw_pkcs7: The PKCS#7 message that is the signature.
* @pkcs7_len: The size of @raw_pkcs7.
+ * @trusted_keys: Trusted keys to use (NULL for builtin trusted keys only,
+ * (void *)1UL for all trusted keys).
* @usage: The use to which the key is being put.
+ * @view_content: Callback to gain access to content.
+ * @ctx: Context for callback.
*/
-int system_verify_data(const void *data, unsigned long len,
- const void *raw_pkcs7, size_t pkcs7_len,
- enum key_being_used_for usage)
+int verify_pkcs7_signature(const void *data, size_t len,
+ const void *raw_pkcs7, size_t pkcs7_len,
+ struct key *trusted_keys,
+ enum key_being_used_for usage,
+ int (*view_content)(void *ctx,
+ const void *data, size_t len,
+ size_t asn1hdrlen),
+ void *ctx)
{
struct pkcs7_message *pkcs7;
- bool trusted;
int ret;
pkcs7 = pkcs7_parse_message(raw_pkcs7, pkcs7_len);
@@ -128,7 +195,7 @@ int system_verify_data(const void *data, unsigned long len,
return PTR_ERR(pkcs7);
/* The data should be detached - so we need to supply it. */
- if (pkcs7_supply_detached_data(pkcs7, data, len) < 0) {
+ if (data && pkcs7_supply_detached_data(pkcs7, data, len) < 0) {
pr_err("PKCS#7 signature with non-detached data\n");
ret = -EBADMSG;
goto error;
@@ -138,13 +205,33 @@ int system_verify_data(const void *data, unsigned long len,
if (ret < 0)
goto error;
- ret = pkcs7_validate_trust(pkcs7, system_trusted_keyring, &trusted);
- if (ret < 0)
+ if (!trusted_keys) {
+ trusted_keys = builtin_trusted_keys;
+ } else if (trusted_keys == (void *)1UL) {
+#ifdef CONFIG_SECONDARY_TRUSTED_KEYRING
+ trusted_keys = secondary_trusted_keys;
+#else
+ trusted_keys = builtin_trusted_keys;
+#endif
+ }
+ ret = pkcs7_validate_trust(pkcs7, trusted_keys);
+ if (ret < 0) {
+ if (ret == -ENOKEY)
+ pr_err("PKCS#7 signature not signed with a trusted key\n");
goto error;
+ }
+
+ if (view_content) {
+ size_t asn1hdrlen;
+
+ ret = pkcs7_get_content_data(pkcs7, &data, &len, &asn1hdrlen);
+ if (ret < 0) {
+ if (ret == -ENODATA)
+ pr_devel("PKCS#7 message does not contain data\n");
+ goto error;
+ }
- if (!trusted) {
- pr_err("PKCS#7 signature not signed with a trusted key\n");
- ret = -ENOKEY;
+ ret = view_content(ctx, data, len, asn1hdrlen);
}
error:
@@ -152,6 +239,6 @@ error:
pr_devel("<==%s() = %d\n", __func__, ret);
return ret;
}
-EXPORT_SYMBOL_GPL(system_verify_data);
+EXPORT_SYMBOL_GPL(verify_pkcs7_signature);
#endif /* CONFIG_SYSTEM_DATA_VERIFICATION */
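
verify_pkcs7_signature() is the outward-facing replacement for system_verify_data(). A hedged sketch of a caller verifying a detached signature against the builtin keyring; the buffer names are placeholders, and VERIFYING_UNSPECIFIED_SIGNATURE is simply one of the existing key_being_used_for values:

/* Sketch only: verify a detached PKCS#7 signature over "data" against
 * the builtin trusted keys (NULL keyring), with no content callback.
 */
#include <linux/verification.h>

static int check_blob(const void *data, size_t len,
		      const void *pkcs7, size_t pkcs7_len)
{
	return verify_pkcs7_signature(data, len, pkcs7, pkcs7_len,
				      NULL, /* builtin trusted keys */
				      VERIFYING_UNSPECIFIED_SIGNATURE,
				      NULL, NULL);
}
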
diff --git a/crypto/Kconfig b/crypto/Kconfig
index 93a1fdc1feee..1d33beb6a1ae 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -96,6 +96,7 @@ config CRYPTO_AKCIPHER
config CRYPTO_RSA
tristate "RSA algorithm"
select CRYPTO_AKCIPHER
+ select CRYPTO_MANAGER
select MPILIB
select ASN1
help
diff --git a/crypto/ahash.c b/crypto/ahash.c
index 5fc1f172963d..3887a98abcc3 100644
--- a/crypto/ahash.c
+++ b/crypto/ahash.c
@@ -69,8 +69,9 @@ static int hash_walk_new_entry(struct crypto_hash_walk *walk)
struct scatterlist *sg;
sg = walk->sg;
- walk->pg = sg_page(sg);
walk->offset = sg->offset;
+ walk->pg = sg_page(walk->sg) + (walk->offset >> PAGE_SHIFT);
+ walk->offset = offset_in_page(walk->offset);
walk->entrylen = sg->length;
if (walk->entrylen > walk->total)
diff --git a/crypto/algif_aead.c b/crypto/algif_aead.c
index 147069c9afd0..80a0f1a78551 100644
--- a/crypto/algif_aead.c
+++ b/crypto/algif_aead.c
@@ -13,7 +13,7 @@
* any later version.
*/
-#include <crypto/aead.h>
+#include <crypto/internal/aead.h>
#include <crypto/scatterwalk.h>
#include <crypto/if_alg.h>
#include <linux/init.h>
@@ -29,15 +29,24 @@ struct aead_sg_list {
struct scatterlist sg[ALG_MAX_PAGES];
};
+struct aead_async_rsgl {
+ struct af_alg_sgl sgl;
+ struct list_head list;
+};
+
+struct aead_async_req {
+ struct scatterlist *tsgl;
+ struct aead_async_rsgl first_rsgl;
+ struct list_head list;
+ struct kiocb *iocb;
+ unsigned int tsgls;
+ char iv[];
+};
+
struct aead_ctx {
struct aead_sg_list tsgl;
- /*
- * RSGL_MAX_ENTRIES is an artificial limit where user space at maximum
- * can cause the kernel to allocate RSGL_MAX_ENTRIES * ALG_MAX_PAGES
- * pages
- */
-#define RSGL_MAX_ENTRIES ALG_MAX_PAGES
- struct af_alg_sgl rsgl[RSGL_MAX_ENTRIES];
+ struct aead_async_rsgl first_rsgl;
+ struct list_head list;
void *iv;
@@ -75,6 +84,17 @@ static inline bool aead_sufficient_data(struct aead_ctx *ctx)
return ctx->used >= ctx->aead_assoclen + as;
}
+static void aead_reset_ctx(struct aead_ctx *ctx)
+{
+ struct aead_sg_list *sgl = &ctx->tsgl;
+
+ sg_init_table(sgl->sg, ALG_MAX_PAGES);
+ sgl->cur = 0;
+ ctx->used = 0;
+ ctx->more = 0;
+ ctx->merge = 0;
+}
+
static void aead_put_sgl(struct sock *sk)
{
struct alg_sock *ask = alg_sk(sk);
@@ -90,11 +110,7 @@ static void aead_put_sgl(struct sock *sk)
put_page(sg_page(sg + i));
sg_assign_page(sg + i, NULL);
}
- sg_init_table(sg, ALG_MAX_PAGES);
- sgl->cur = 0;
- ctx->used = 0;
- ctx->more = 0;
- ctx->merge = 0;
+ aead_reset_ctx(ctx);
}
static void aead_wmem_wakeup(struct sock *sk)
@@ -349,23 +365,188 @@ unlock:
return err ?: size;
}
-static int aead_recvmsg(struct socket *sock, struct msghdr *msg, size_t ignored, int flags)
+#define GET_ASYM_REQ(req, tfm) (struct aead_async_req *) \
+ ((char *)req + sizeof(struct aead_request) + \
+ crypto_aead_reqsize(tfm))
+
+#define GET_REQ_SIZE(tfm) sizeof(struct aead_async_req) + \
+ crypto_aead_reqsize(tfm) + crypto_aead_ivsize(tfm) + \
+ sizeof(struct aead_request)
+
+static void aead_async_cb(struct crypto_async_request *_req, int err)
+{
+ struct sock *sk = _req->data;
+ struct alg_sock *ask = alg_sk(sk);
+ struct aead_ctx *ctx = ask->private;
+ struct crypto_aead *tfm = crypto_aead_reqtfm(&ctx->aead_req);
+ struct aead_request *req = aead_request_cast(_req);
+ struct aead_async_req *areq = GET_ASYM_REQ(req, tfm);
+ struct scatterlist *sg = areq->tsgl;
+ struct aead_async_rsgl *rsgl;
+ struct kiocb *iocb = areq->iocb;
+ unsigned int i, reqlen = GET_REQ_SIZE(tfm);
+
+ list_for_each_entry(rsgl, &areq->list, list) {
+ af_alg_free_sg(&rsgl->sgl);
+ if (rsgl != &areq->first_rsgl)
+ sock_kfree_s(sk, rsgl, sizeof(*rsgl));
+ }
+
+ for (i = 0; i < areq->tsgls; i++)
+ put_page(sg_page(sg + i));
+
+ sock_kfree_s(sk, areq->tsgl, sizeof(*areq->tsgl) * areq->tsgls);
+ sock_kfree_s(sk, req, reqlen);
+ __sock_put(sk);
+ iocb->ki_complete(iocb, err, err);
+}
+
+static int aead_recvmsg_async(struct socket *sock, struct msghdr *msg,
+ int flags)
+{
+ struct sock *sk = sock->sk;
+ struct alg_sock *ask = alg_sk(sk);
+ struct aead_ctx *ctx = ask->private;
+ struct crypto_aead *tfm = crypto_aead_reqtfm(&ctx->aead_req);
+ struct aead_async_req *areq;
+ struct aead_request *req = NULL;
+ struct aead_sg_list *sgl = &ctx->tsgl;
+ struct aead_async_rsgl *last_rsgl = NULL, *rsgl;
+ unsigned int as = crypto_aead_authsize(tfm);
+ unsigned int i, reqlen = GET_REQ_SIZE(tfm);
+ int err = -ENOMEM;
+ unsigned long used;
+ size_t outlen;
+ size_t usedpages = 0;
+
+ lock_sock(sk);
+ if (ctx->more) {
+ err = aead_wait_for_data(sk, flags);
+ if (err)
+ goto unlock;
+ }
+
+ used = ctx->used;
+ outlen = used;
+
+ if (!aead_sufficient_data(ctx))
+ goto unlock;
+
+ req = sock_kmalloc(sk, reqlen, GFP_KERNEL);
+ if (unlikely(!req))
+ goto unlock;
+
+ areq = GET_ASYM_REQ(req, tfm);
+ memset(&areq->first_rsgl, '\0', sizeof(areq->first_rsgl));
+ INIT_LIST_HEAD(&areq->list);
+ areq->iocb = msg->msg_iocb;
+ memcpy(areq->iv, ctx->iv, crypto_aead_ivsize(tfm));
+ aead_request_set_tfm(req, tfm);
+ aead_request_set_ad(req, ctx->aead_assoclen);
+ aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+ aead_async_cb, sk);
+ used -= ctx->aead_assoclen + (ctx->enc ? as : 0);
+
+ /* take over all tx sgls from ctx */
+ areq->tsgl = sock_kmalloc(sk, sizeof(*areq->tsgl) * sgl->cur,
+ GFP_KERNEL);
+ if (unlikely(!areq->tsgl))
+ goto free;
+
+ sg_init_table(areq->tsgl, sgl->cur);
+ for (i = 0; i < sgl->cur; i++)
+ sg_set_page(&areq->tsgl[i], sg_page(&sgl->sg[i]),
+ sgl->sg[i].length, sgl->sg[i].offset);
+
+ areq->tsgls = sgl->cur;
+
+ /* create rx sgls */
+ while (iov_iter_count(&msg->msg_iter)) {
+ size_t seglen = min_t(size_t, iov_iter_count(&msg->msg_iter),
+ (outlen - usedpages));
+
+ if (list_empty(&areq->list)) {
+ rsgl = &areq->first_rsgl;
+
+ } else {
+ rsgl = sock_kmalloc(sk, sizeof(*rsgl), GFP_KERNEL);
+ if (unlikely(!rsgl)) {
+ err = -ENOMEM;
+ goto free;
+ }
+ }
+ rsgl->sgl.npages = 0;
+ list_add_tail(&rsgl->list, &areq->list);
+
+ /* make one iovec available as scatterlist */
+ err = af_alg_make_sg(&rsgl->sgl, &msg->msg_iter, seglen);
+ if (err < 0)
+ goto free;
+
+ usedpages += err;
+
+ /* chain the new scatterlist with previous one */
+ if (last_rsgl)
+ af_alg_link_sg(&last_rsgl->sgl, &rsgl->sgl);
+
+ last_rsgl = rsgl;
+
+ /* we do not need more iovecs as we have sufficient memory */
+ if (outlen <= usedpages)
+ break;
+
+ iov_iter_advance(&msg->msg_iter, err);
+ }
+ err = -EINVAL;
+ /* ensure output buffer is sufficiently large */
+ if (usedpages < outlen)
+ goto free;
+
+ aead_request_set_crypt(req, areq->tsgl, areq->first_rsgl.sgl.sg, used,
+ areq->iv);
+ err = ctx->enc ? crypto_aead_encrypt(req) : crypto_aead_decrypt(req);
+ if (err) {
+ if (err == -EINPROGRESS) {
+ sock_hold(sk);
+ err = -EIOCBQUEUED;
+ aead_reset_ctx(ctx);
+ goto unlock;
+ } else if (err == -EBADMSG) {
+ aead_put_sgl(sk);
+ }
+ goto free;
+ }
+ aead_put_sgl(sk);
+
+free:
+ list_for_each_entry(rsgl, &areq->list, list) {
+ af_alg_free_sg(&rsgl->sgl);
+ if (rsgl != &areq->first_rsgl)
+ sock_kfree_s(sk, rsgl, sizeof(*rsgl));
+ }
+ if (areq->tsgl)
+ sock_kfree_s(sk, areq->tsgl, sizeof(*areq->tsgl) * areq->tsgls);
+ if (req)
+ sock_kfree_s(sk, req, reqlen);
+unlock:
+ aead_wmem_wakeup(sk);
+ release_sock(sk);
+ return err ? err : outlen;
+}
+
+static int aead_recvmsg_sync(struct socket *sock, struct msghdr *msg, int flags)
{
struct sock *sk = sock->sk;
struct alg_sock *ask = alg_sk(sk);
struct aead_ctx *ctx = ask->private;
unsigned as = crypto_aead_authsize(crypto_aead_reqtfm(&ctx->aead_req));
struct aead_sg_list *sgl = &ctx->tsgl;
- unsigned int i = 0;
+ struct aead_async_rsgl *last_rsgl = NULL;
+ struct aead_async_rsgl *rsgl, *tmp;
int err = -EINVAL;
unsigned long used = 0;
size_t outlen = 0;
size_t usedpages = 0;
- unsigned int cnt = 0;
-
- /* Limit number of IOV blocks to be accessed below */
- if (msg->msg_iter.nr_segs > RSGL_MAX_ENTRIES)
- return -ENOMSG;
lock_sock(sk);
@@ -417,21 +598,33 @@ static int aead_recvmsg(struct socket *sock, struct msghdr *msg, size_t ignored,
size_t seglen = min_t(size_t, iov_iter_count(&msg->msg_iter),
(outlen - usedpages));
+ if (list_empty(&ctx->list)) {
+ rsgl = &ctx->first_rsgl;
+ } else {
+ rsgl = sock_kmalloc(sk, sizeof(*rsgl), GFP_KERNEL);
+ if (unlikely(!rsgl)) {
+ err = -ENOMEM;
+ goto unlock;
+ }
+ }
+ rsgl->sgl.npages = 0;
+ list_add_tail(&rsgl->list, &ctx->list);
+
/* make one iovec available as scatterlist */
- err = af_alg_make_sg(&ctx->rsgl[cnt], &msg->msg_iter,
- seglen);
+ err = af_alg_make_sg(&rsgl->sgl, &msg->msg_iter, seglen);
if (err < 0)
goto unlock;
usedpages += err;
/* chain the new scatterlist with previous one */
- if (cnt)
- af_alg_link_sg(&ctx->rsgl[cnt-1], &ctx->rsgl[cnt]);
+ if (last_rsgl)
+ af_alg_link_sg(&last_rsgl->sgl, &rsgl->sgl);
+
+ last_rsgl = rsgl;
/* we do not need more iovecs as we have sufficient memory */
if (outlen <= usedpages)
break;
iov_iter_advance(&msg->msg_iter, err);
- cnt++;
}
err = -EINVAL;
@@ -440,8 +633,7 @@ static int aead_recvmsg(struct socket *sock, struct msghdr *msg, size_t ignored,
goto unlock;
sg_mark_end(sgl->sg + sgl->cur - 1);
-
- aead_request_set_crypt(&ctx->aead_req, sgl->sg, ctx->rsgl[0].sg,
+ aead_request_set_crypt(&ctx->aead_req, sgl->sg, ctx->first_rsgl.sgl.sg,
used, ctx->iv);
aead_request_set_ad(&ctx->aead_req, ctx->aead_assoclen);
@@ -454,23 +646,35 @@ static int aead_recvmsg(struct socket *sock, struct msghdr *msg, size_t ignored,
/* EBADMSG implies a valid cipher operation took place */
if (err == -EBADMSG)
aead_put_sgl(sk);
+
goto unlock;
}
aead_put_sgl(sk);
-
err = 0;
unlock:
- for (i = 0; i < cnt; i++)
- af_alg_free_sg(&ctx->rsgl[i]);
-
+ list_for_each_entry_safe(rsgl, tmp, &ctx->list, list) {
+ af_alg_free_sg(&rsgl->sgl);
+ if (rsgl != &ctx->first_rsgl)
+ sock_kfree_s(sk, rsgl, sizeof(*rsgl));
+ list_del(&rsgl->list);
+ }
+ INIT_LIST_HEAD(&ctx->list);
aead_wmem_wakeup(sk);
release_sock(sk);
return err ? err : outlen;
}
+static int aead_recvmsg(struct socket *sock, struct msghdr *msg, size_t ignored,
+ int flags)
+{
+ return (msg->msg_iocb && !is_sync_kiocb(msg->msg_iocb)) ?
+ aead_recvmsg_async(sock, msg, flags) :
+ aead_recvmsg_sync(sock, msg, flags);
+}
+
static unsigned int aead_poll(struct file *file, struct socket *sock,
poll_table *wait)
{
@@ -540,6 +744,7 @@ static void aead_sock_destruct(struct sock *sk)
unsigned int ivlen = crypto_aead_ivsize(
crypto_aead_reqtfm(&ctx->aead_req));
+ WARN_ON(atomic_read(&sk->sk_refcnt) != 0);
aead_put_sgl(sk);
sock_kzfree_s(sk, ctx->iv, ivlen);
sock_kfree_s(sk, ctx, ctx->len);
@@ -574,6 +779,7 @@ static int aead_accept_parent(void *private, struct sock *sk)
ctx->aead_assoclen = 0;
af_alg_init_completion(&ctx->completion);
sg_init_table(ctx->tsgl.sg, ALG_MAX_PAGES);
+ INIT_LIST_HEAD(&ctx->list);
ask->private = ctx;
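
The aead_recvmsg() dispatcher above selects the new asynchronous path only when the read arrives through AIO (msg_iocb set and not a sync kiocb); a plain recvmsg() keeps the synchronous behaviour. A minimal user-space sketch of binding an AF_ALG "aead" socket follows; the cipher name and key are purely illustrative:

/* Sketch only: set up an AF_ALG "aead" socket. Whether the kernel takes
 * aead_recvmsg_sync() or aead_recvmsg_async() depends on how the read is
 * issued (plain recvmsg() vs. an AIO read submitted with io_submit()).
 */
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/if_alg.h>

#ifndef SOL_ALG
#define SOL_ALG 279
#endif

int main(void)
{
	struct sockaddr_alg sa = {
		.salg_family = AF_ALG,
		.salg_type   = "aead",
		.salg_name   = "gcm(aes)",	/* illustrative cipher */
	};
	unsigned char key[16] = { 0 };		/* illustrative all-zero key */
	int tfmfd, opfd;

	tfmfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
	if (tfmfd < 0 || bind(tfmfd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
		return 1;
	if (setsockopt(tfmfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key)) < 0)
		return 1;

	opfd = accept(tfmfd, NULL, 0);
	/* Data would now be fed with sendmsg() (ALG_SET_OP, ALG_SET_IV and
	 * ALG_SET_AEAD_ASSOCLEN in ancillary data) and collected with
	 * recvmsg() or, for the async path, io_submit(). */
	close(opfd);
	close(tfmfd);
	return 0;
}
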
diff --git a/crypto/asymmetric_keys/Kconfig b/crypto/asymmetric_keys/Kconfig
index 91a7e047a765..e28e912000a7 100644
--- a/crypto/asymmetric_keys/Kconfig
+++ b/crypto/asymmetric_keys/Kconfig
@@ -1,5 +1,5 @@
menuconfig ASYMMETRIC_KEY_TYPE
- tristate "Asymmetric (public-key cryptographic) key type"
+ bool "Asymmetric (public-key cryptographic) key type"
depends on KEYS
help
This option provides support for a key type that holds the data for
@@ -40,8 +40,7 @@ config PKCS7_MESSAGE_PARSER
config PKCS7_TEST_KEY
tristate "PKCS#7 testing key type"
- depends on PKCS7_MESSAGE_PARSER
- select SYSTEM_TRUSTED_KEYRING
+ depends on SYSTEM_DATA_VERIFICATION
help
This option provides a type of key that can be loaded up from a
PKCS#7 message - provided the message is signed by a trusted key. If
@@ -54,6 +53,7 @@ config PKCS7_TEST_KEY
config SIGNED_PE_FILE_VERIFICATION
bool "Support for PE file signature verification"
depends on PKCS7_MESSAGE_PARSER=y
+ depends on SYSTEM_DATA_VERIFICATION
select ASN1
select OID_REGISTRY
help
diff --git a/crypto/asymmetric_keys/Makefile b/crypto/asymmetric_keys/Makefile
index f90486256f01..6516855bec18 100644
--- a/crypto/asymmetric_keys/Makefile
+++ b/crypto/asymmetric_keys/Makefile
@@ -4,7 +4,10 @@
obj-$(CONFIG_ASYMMETRIC_KEY_TYPE) += asymmetric_keys.o
-asymmetric_keys-y := asymmetric_type.o signature.o
+asymmetric_keys-y := \
+ asymmetric_type.o \
+ restrict.o \
+ signature.o
obj-$(CONFIG_ASYMMETRIC_PUBLIC_KEY_SUBTYPE) += public_key.o
diff --git a/crypto/asymmetric_keys/asymmetric_keys.h b/crypto/asymmetric_keys/asymmetric_keys.h
index 1d450b580245..ca8e9ac34ce6 100644
--- a/crypto/asymmetric_keys/asymmetric_keys.h
+++ b/crypto/asymmetric_keys/asymmetric_keys.h
@@ -9,6 +9,8 @@
* 2 of the Licence, or (at your option) any later version.
*/
+#include <keys/asymmetric-type.h>
+
extern struct asymmetric_key_id *asymmetric_key_hex_to_key_id(const char *id);
extern int __asymmetric_key_hex_to_key_id(const char *id,
diff --git a/crypto/asymmetric_keys/asymmetric_type.c b/crypto/asymmetric_keys/asymmetric_type.c
index 9f2165b27d52..6600181d5d01 100644
--- a/crypto/asymmetric_keys/asymmetric_type.c
+++ b/crypto/asymmetric_keys/asymmetric_type.c
@@ -35,6 +35,95 @@ static LIST_HEAD(asymmetric_key_parsers);
static DECLARE_RWSEM(asymmetric_key_parsers_sem);
/**
+ * find_asymmetric_key - Find a key by ID.
+ * @keyring: The keys to search.
+ * @id_0: The first ID to look for or NULL.
+ * @id_1: The second ID to look for or NULL.
+ * @partial: Use partial match if true, exact if false.
+ *
+ * Find a key in the given keyring by identifier. The preferred identifier is
+ * the id_0 and the fallback identifier is the id_1. If both are given, the
+ * lookup is by the former, but the latter must also match.
+ */
+struct key *find_asymmetric_key(struct key *keyring,
+ const struct asymmetric_key_id *id_0,
+ const struct asymmetric_key_id *id_1,
+ bool partial)
+{
+ struct key *key;
+ key_ref_t ref;
+ const char *lookup;
+ char *req, *p;
+ int len;
+
+ if (id_0) {
+ lookup = id_0->data;
+ len = id_0->len;
+ } else {
+ lookup = id_1->data;
+ len = id_1->len;
+ }
+
+ /* Construct an identifier "id:<keyid>". */
+ p = req = kmalloc(2 + 1 + len * 2 + 1, GFP_KERNEL);
+ if (!req)
+ return ERR_PTR(-ENOMEM);
+
+ if (partial) {
+ *p++ = 'i';
+ *p++ = 'd';
+ } else {
+ *p++ = 'e';
+ *p++ = 'x';
+ }
+ *p++ = ':';
+ p = bin2hex(p, lookup, len);
+ *p = 0;
+
+ pr_debug("Look up: \"%s\"\n", req);
+
+ ref = keyring_search(make_key_ref(keyring, 1),
+ &key_type_asymmetric, req);
+ if (IS_ERR(ref))
+ pr_debug("Request for key '%s' err %ld\n", req, PTR_ERR(ref));
+ kfree(req);
+
+ if (IS_ERR(ref)) {
+ switch (PTR_ERR(ref)) {
+ /* Hide some search errors */
+ case -EACCES:
+ case -ENOTDIR:
+ case -EAGAIN:
+ return ERR_PTR(-ENOKEY);
+ default:
+ return ERR_CAST(ref);
+ }
+ }
+
+ key = key_ref_to_ptr(ref);
+ if (id_0 && id_1) {
+ const struct asymmetric_key_ids *kids = asymmetric_key_ids(key);
+
+ if (!kids->id[0]) {
+ pr_debug("First ID matches, but second is missing\n");
+ goto reject;
+ }
+ if (!asymmetric_key_id_same(id_1, kids->id[1])) {
+ pr_debug("First ID matches, but second does not\n");
+ goto reject;
+ }
+ }
+
+ pr_devel("<==%s() = 0 [%x]\n", __func__, key_serial(key));
+ return key;
+
+reject:
+ key_put(key);
+ return ERR_PTR(-EKEYREJECTED);
+}
+EXPORT_SYMBOL_GPL(find_asymmetric_key);
+
+/**
* asymmetric_key_generate_id: Construct an asymmetric key ID
* @val_1: First binary blob
* @len_1: Length of first binary blob
@@ -331,7 +420,8 @@ static void asymmetric_key_free_preparse(struct key_preparsed_payload *prep)
pr_devel("==>%s()\n", __func__);
if (subtype) {
- subtype->destroy(prep->payload.data[asym_crypto]);
+ subtype->destroy(prep->payload.data[asym_crypto],
+ prep->payload.data[asym_auth]);
module_put(subtype->owner);
}
asymmetric_key_free_kids(kids);
@@ -346,13 +436,15 @@ static void asymmetric_key_destroy(struct key *key)
struct asymmetric_key_subtype *subtype = asymmetric_key_subtype(key);
struct asymmetric_key_ids *kids = key->payload.data[asym_key_ids];
void *data = key->payload.data[asym_crypto];
+ void *auth = key->payload.data[asym_auth];
key->payload.data[asym_crypto] = NULL;
key->payload.data[asym_subtype] = NULL;
key->payload.data[asym_key_ids] = NULL;
+ key->payload.data[asym_auth] = NULL;
if (subtype) {
- subtype->destroy(data);
+ subtype->destroy(data, auth);
module_put(subtype->owner);
}
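
find_asymmetric_key() centralises the keyring lookup that x509_request_asymmetric_key() used to perform for callers such as pkcs7_trust.c, converted later in this patch. A hedged sketch of a caller, with placeholder names:

/* Sketch only: look a signer up by its preferred/fallback asymmetric IDs. */
#include <linux/err.h>
#include <keys/asymmetric-type.h>

static struct key *lookup_signer(struct key *trust_keyring,
				 const struct asymmetric_key_id *id,
				 const struct asymmetric_key_id *skid)
{
	struct key *key;

	key = find_asymmetric_key(trust_keyring, id, skid, false);
	if (IS_ERR(key))
		return NULL;	/* -ENOKEY and friends end up here */
	return key;		/* caller must key_put() when done */
}
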
diff --git a/crypto/asymmetric_keys/mscode_parser.c b/crypto/asymmetric_keys/mscode_parser.c
index 3242cbfaeaa2..6a76d5c70ef6 100644
--- a/crypto/asymmetric_keys/mscode_parser.c
+++ b/crypto/asymmetric_keys/mscode_parser.c
@@ -21,19 +21,13 @@
/*
* Parse a Microsoft Individual Code Signing blob
*/
-int mscode_parse(struct pefile_context *ctx)
+int mscode_parse(void *_ctx, const void *content_data, size_t data_len,
+ size_t asn1hdrlen)
{
- const void *content_data;
- size_t data_len;
- int ret;
-
- ret = pkcs7_get_content_data(ctx->pkcs7, &content_data, &data_len, 1);
-
- if (ret) {
- pr_debug("PKCS#7 message does not contain data\n");
- return ret;
- }
+ struct pefile_context *ctx = _ctx;
+ content_data -= asn1hdrlen;
+ data_len += asn1hdrlen;
pr_devel("Data: %zu [%*ph]\n", data_len, (unsigned)(data_len),
content_data);
@@ -129,7 +123,6 @@ int mscode_note_digest(void *context, size_t hdrlen,
{
struct pefile_context *ctx = context;
- ctx->digest = value;
- ctx->digest_len = vlen;
- return 0;
+ ctx->digest = kmemdup(value, vlen, GFP_KERNEL);
+ return ctx->digest ? 0 : -ENOMEM;
}
diff --git a/crypto/asymmetric_keys/pkcs7_key_type.c b/crypto/asymmetric_keys/pkcs7_key_type.c
index e2d0edbbc71a..1063b644efcd 100644
--- a/crypto/asymmetric_keys/pkcs7_key_type.c
+++ b/crypto/asymmetric_keys/pkcs7_key_type.c
@@ -13,12 +13,9 @@
#include <linux/key.h>
#include <linux/err.h>
#include <linux/module.h>
+#include <linux/verification.h>
#include <linux/key-type.h>
-#include <keys/asymmetric-type.h>
-#include <crypto/pkcs7.h>
#include <keys/user-type.h>
-#include <keys/system_keyring.h>
-#include "pkcs7_parser.h"
MODULE_LICENSE("GPL");
MODULE_DESCRIPTION("PKCS#7 testing key type");
@@ -29,60 +26,47 @@ MODULE_PARM_DESC(pkcs7_usage,
"Usage to specify when verifying the PKCS#7 message");
/*
- * Preparse a PKCS#7 wrapped and validated data blob.
+ * Retrieve the PKCS#7 message content.
*/
-static int pkcs7_preparse(struct key_preparsed_payload *prep)
+static int pkcs7_view_content(void *ctx, const void *data, size_t len,
+ size_t asn1hdrlen)
{
- enum key_being_used_for usage = pkcs7_usage;
- struct pkcs7_message *pkcs7;
- const void *data, *saved_prep_data;
- size_t datalen, saved_prep_datalen;
- bool trusted;
+ struct key_preparsed_payload *prep = ctx;
+ const void *saved_prep_data;
+ size_t saved_prep_datalen;
int ret;
- kenter("");
-
- if (usage >= NR__KEY_BEING_USED_FOR) {
- pr_err("Invalid usage type %d\n", usage);
- return -EINVAL;
- }
-
saved_prep_data = prep->data;
saved_prep_datalen = prep->datalen;
- pkcs7 = pkcs7_parse_message(saved_prep_data, saved_prep_datalen);
- if (IS_ERR(pkcs7)) {
- ret = PTR_ERR(pkcs7);
- goto error;
- }
-
- ret = pkcs7_verify(pkcs7, usage);
- if (ret < 0)
- goto error_free;
-
- ret = pkcs7_validate_trust(pkcs7, system_trusted_keyring, &trusted);
- if (ret < 0)
- goto error_free;
- if (!trusted)
- pr_warn("PKCS#7 message doesn't chain back to a trusted key\n");
-
- ret = pkcs7_get_content_data(pkcs7, &data, &datalen, false);
- if (ret < 0)
- goto error_free;
-
prep->data = data;
- prep->datalen = datalen;
+ prep->datalen = len;
+
ret = user_preparse(prep);
+
prep->data = saved_prep_data;
prep->datalen = saved_prep_datalen;
-
-error_free:
- pkcs7_free_message(pkcs7);
-error:
- kleave(" = %d", ret);
return ret;
}
/*
+ * Preparse a PKCS#7 wrapped and validated data blob.
+ */
+static int pkcs7_preparse(struct key_preparsed_payload *prep)
+{
+ enum key_being_used_for usage = pkcs7_usage;
+
+ if (usage >= NR__KEY_BEING_USED_FOR) {
+ pr_err("Invalid usage type %d\n", usage);
+ return -EINVAL;
+ }
+
+ return verify_pkcs7_signature(NULL, 0,
+ prep->data, prep->datalen,
+ (void *)1UL, usage,
+ pkcs7_view_content, prep);
+}
+
+/*
* user defined keys take an arbitrary string as the description and an
* arbitrary blob of data as the payload
*/
diff --git a/crypto/asymmetric_keys/pkcs7_parser.c b/crypto/asymmetric_keys/pkcs7_parser.c
index 40de03f49ff8..af4cd8649117 100644
--- a/crypto/asymmetric_keys/pkcs7_parser.c
+++ b/crypto/asymmetric_keys/pkcs7_parser.c
@@ -44,9 +44,7 @@ struct pkcs7_parse_context {
static void pkcs7_free_signed_info(struct pkcs7_signed_info *sinfo)
{
if (sinfo) {
- kfree(sinfo->sig.s);
- kfree(sinfo->sig.digest);
- kfree(sinfo->signing_cert_id);
+ public_key_signature_free(sinfo->sig);
kfree(sinfo);
}
}
@@ -125,6 +123,10 @@ struct pkcs7_message *pkcs7_parse_message(const void *data, size_t datalen)
ctx->sinfo = kzalloc(sizeof(struct pkcs7_signed_info), GFP_KERNEL);
if (!ctx->sinfo)
goto out_no_sinfo;
+ ctx->sinfo->sig = kzalloc(sizeof(struct public_key_signature),
+ GFP_KERNEL);
+ if (!ctx->sinfo->sig)
+ goto out_no_sig;
ctx->data = (unsigned long)data;
ctx->ppcerts = &ctx->certs;
@@ -150,6 +152,7 @@ out:
ctx->certs = cert->next;
x509_free_certificate(cert);
}
+out_no_sig:
pkcs7_free_signed_info(ctx->sinfo);
out_no_sinfo:
pkcs7_free_message(ctx->msg);
@@ -165,24 +168,25 @@ EXPORT_SYMBOL_GPL(pkcs7_parse_message);
* @pkcs7: The preparsed PKCS#7 message to access
* @_data: Place to return a pointer to the data
* @_data_len: Place to return the data length
- * @want_wrapper: True if the ASN.1 object header should be included in the data
+ * @_headerlen: Size of ASN.1 header not included in _data
*
- * Get access to the data content of the PKCS#7 message, including, optionally,
- * the header of the ASN.1 object that contains it. Returns -ENODATA if the
- * data object was missing from the message.
+ * Get access to the data content of the PKCS#7 message. The size of the
+ * header of the ASN.1 object that contains it is also provided and can be used
+ * to adjust *_data and *_data_len to get the entire object.
+ *
+ * Returns -ENODATA if the data object was missing from the message.
*/
int pkcs7_get_content_data(const struct pkcs7_message *pkcs7,
const void **_data, size_t *_data_len,
- bool want_wrapper)
+ size_t *_headerlen)
{
- size_t wrapper;
-
if (!pkcs7->data)
return -ENODATA;
- wrapper = want_wrapper ? pkcs7->data_hdrlen : 0;
- *_data = pkcs7->data - wrapper;
- *_data_len = pkcs7->data_len + wrapper;
+ *_data = pkcs7->data;
+ *_data_len = pkcs7->data_len;
+ if (_headerlen)
+ *_headerlen = pkcs7->data_hdrlen;
return 0;
}
EXPORT_SYMBOL_GPL(pkcs7_get_content_data);
@@ -218,25 +222,26 @@ int pkcs7_sig_note_digest_algo(void *context, size_t hdrlen,
switch (ctx->last_oid) {
case OID_md4:
- ctx->sinfo->sig.hash_algo = "md4";
+ ctx->sinfo->sig->hash_algo = "md4";
break;
case OID_md5:
- ctx->sinfo->sig.hash_algo = "md5";
+ ctx->sinfo->sig->hash_algo = "md5";
break;
case OID_sha1:
- ctx->sinfo->sig.hash_algo = "sha1";
+ ctx->sinfo->sig->hash_algo = "sha1";
break;
case OID_sha256:
- ctx->sinfo->sig.hash_algo = "sha256";
+ ctx->sinfo->sig->hash_algo = "sha256";
break;
case OID_sha384:
- ctx->sinfo->sig.hash_algo = "sha384";
+ ctx->sinfo->sig->hash_algo = "sha384";
break;
case OID_sha512:
- ctx->sinfo->sig.hash_algo = "sha512";
+ ctx->sinfo->sig->hash_algo = "sha512";
break;
case OID_sha224:
- ctx->sinfo->sig.hash_algo = "sha224";
+ ctx->sinfo->sig->hash_algo = "sha224";
+ break;
default:
printk("Unsupported digest algo: %u\n", ctx->last_oid);
return -ENOPKG;
@@ -255,7 +260,7 @@ int pkcs7_sig_note_pkey_algo(void *context, size_t hdrlen,
switch (ctx->last_oid) {
case OID_rsaEncryption:
- ctx->sinfo->sig.pkey_algo = "rsa";
+ ctx->sinfo->sig->pkey_algo = "rsa";
break;
default:
printk("Unsupported pkey algo: %u\n", ctx->last_oid);
@@ -615,11 +620,11 @@ int pkcs7_sig_note_signature(void *context, size_t hdrlen,
{
struct pkcs7_parse_context *ctx = context;
- ctx->sinfo->sig.s = kmemdup(value, vlen, GFP_KERNEL);
- if (!ctx->sinfo->sig.s)
+ ctx->sinfo->sig->s = kmemdup(value, vlen, GFP_KERNEL);
+ if (!ctx->sinfo->sig->s)
return -ENOMEM;
- ctx->sinfo->sig.s_size = vlen;
+ ctx->sinfo->sig->s_size = vlen;
return 0;
}
@@ -655,12 +660,16 @@ int pkcs7_note_signed_info(void *context, size_t hdrlen,
pr_devel("SINFO KID: %u [%*phN]\n", kid->len, kid->len, kid->data);
- sinfo->signing_cert_id = kid;
+ sinfo->sig->auth_ids[0] = kid;
sinfo->index = ++ctx->sinfo_index;
*ctx->ppsinfo = sinfo;
ctx->ppsinfo = &sinfo->next;
ctx->sinfo = kzalloc(sizeof(struct pkcs7_signed_info), GFP_KERNEL);
if (!ctx->sinfo)
return -ENOMEM;
+ ctx->sinfo->sig = kzalloc(sizeof(struct public_key_signature),
+ GFP_KERNEL);
+ if (!ctx->sinfo->sig)
+ return -ENOMEM;
return 0;
}
diff --git a/crypto/asymmetric_keys/pkcs7_parser.h b/crypto/asymmetric_keys/pkcs7_parser.h
index a66b19ebcf47..f4e81074f5e0 100644
--- a/crypto/asymmetric_keys/pkcs7_parser.h
+++ b/crypto/asymmetric_keys/pkcs7_parser.h
@@ -22,7 +22,6 @@ struct pkcs7_signed_info {
struct pkcs7_signed_info *next;
struct x509_certificate *signer; /* Signing certificate (in msg->certs) */
unsigned index;
- bool trusted;
bool unsupported_crypto; /* T if not usable due to missing crypto */
/* Message digest - the digest of the Content Data (or NULL) */
@@ -41,19 +40,17 @@ struct pkcs7_signed_info {
#define sinfo_has_ms_statement_type 5
time64_t signing_time;
- /* Issuing cert serial number and issuer's name [PKCS#7 or CMS ver 1]
- * or issuing cert's SKID [CMS ver 3].
- */
- struct asymmetric_key_id *signing_cert_id;
-
/* Message signature.
*
* This contains the generated digest of _either_ the Content Data or
* the Authenticated Attributes [RFC2315 9.3]. If the latter, one of
* the attributes contains the digest of the the Content Data within
* it.
+ *
+ * This also contains the issuing cert serial number and issuer's name
+ * [PKCS#7 or CMS ver 1] or issuing cert's SKID [CMS ver 3].
*/
- struct public_key_signature sig;
+ struct public_key_signature *sig;
};
struct pkcs7_message {
diff --git a/crypto/asymmetric_keys/pkcs7_trust.c b/crypto/asymmetric_keys/pkcs7_trust.c
index 7d7a39b47c62..f6a009d88a33 100644
--- a/crypto/asymmetric_keys/pkcs7_trust.c
+++ b/crypto/asymmetric_keys/pkcs7_trust.c
@@ -27,10 +27,9 @@ static int pkcs7_validate_trust_one(struct pkcs7_message *pkcs7,
struct pkcs7_signed_info *sinfo,
struct key *trust_keyring)
{
- struct public_key_signature *sig = &sinfo->sig;
+ struct public_key_signature *sig = sinfo->sig;
struct x509_certificate *x509, *last = NULL, *p;
struct key *key;
- bool trusted;
int ret;
kenter(",%u,", sinfo->index);
@@ -42,10 +41,8 @@ static int pkcs7_validate_trust_one(struct pkcs7_message *pkcs7,
for (x509 = sinfo->signer; x509; x509 = x509->signer) {
if (x509->seen) {
- if (x509->verified) {
- trusted = x509->trusted;
+ if (x509->verified)
goto verified;
- }
kleave(" = -ENOKEY [cached]");
return -ENOKEY;
}
@@ -54,9 +51,8 @@ static int pkcs7_validate_trust_one(struct pkcs7_message *pkcs7,
/* Look to see if this certificate is present in the trusted
* keys.
*/
- key = x509_request_asymmetric_key(trust_keyring,
- x509->id, x509->skid,
- false);
+ key = find_asymmetric_key(trust_keyring,
+ x509->id, x509->skid, false);
if (!IS_ERR(key)) {
/* One of the X.509 certificates in the PKCS#7 message
* is apparently the same as one we already trust.
@@ -80,17 +76,17 @@ static int pkcs7_validate_trust_one(struct pkcs7_message *pkcs7,
might_sleep();
last = x509;
- sig = &last->sig;
+ sig = last->sig;
}
/* No match - see if the root certificate has a signer amongst the
* trusted keys.
*/
- if (last && (last->akid_id || last->akid_skid)) {
- key = x509_request_asymmetric_key(trust_keyring,
- last->akid_id,
- last->akid_skid,
- false);
+ if (last && (last->sig->auth_ids[0] || last->sig->auth_ids[1])) {
+ key = find_asymmetric_key(trust_keyring,
+ last->sig->auth_ids[0],
+ last->sig->auth_ids[1],
+ false);
if (!IS_ERR(key)) {
x509 = last;
pr_devel("sinfo %u: Root cert %u signer is key %x\n",
@@ -104,10 +100,8 @@ static int pkcs7_validate_trust_one(struct pkcs7_message *pkcs7,
/* As a last resort, see if we have a trusted public key that matches
* the signed info directly.
*/
- key = x509_request_asymmetric_key(trust_keyring,
- sinfo->signing_cert_id,
- NULL,
- false);
+ key = find_asymmetric_key(trust_keyring,
+ sinfo->sig->auth_ids[0], NULL, false);
if (!IS_ERR(key)) {
pr_devel("sinfo %u: Direct signer is key %x\n",
sinfo->index, key_serial(key));
@@ -122,7 +116,6 @@ static int pkcs7_validate_trust_one(struct pkcs7_message *pkcs7,
matched:
ret = verify_signature(key, sig);
- trusted = test_bit(KEY_FLAG_TRUSTED, &key->flags);
key_put(key);
if (ret < 0) {
if (ret == -ENOMEM)
@@ -134,12 +127,9 @@ matched:
verified:
if (x509) {
x509->verified = true;
- for (p = sinfo->signer; p != x509; p = p->signer) {
+ for (p = sinfo->signer; p != x509; p = p->signer)
p->verified = true;
- p->trusted = trusted;
- }
}
- sinfo->trusted = trusted;
kleave(" = 0");
return 0;
}
@@ -148,7 +138,6 @@ verified:
* pkcs7_validate_trust - Validate PKCS#7 trust chain
* @pkcs7: The PKCS#7 certificate to validate
* @trust_keyring: Signing certificates to use as starting points
- * @_trusted: Set to true if trustworth, false otherwise
*
* Validate that the certificate chain inside the PKCS#7 message intersects
* keys we already know and trust.
@@ -170,16 +159,13 @@ verified:
* May also return -ENOMEM.
*/
int pkcs7_validate_trust(struct pkcs7_message *pkcs7,
- struct key *trust_keyring,
- bool *_trusted)
+ struct key *trust_keyring)
{
struct pkcs7_signed_info *sinfo;
struct x509_certificate *p;
int cached_ret = -ENOKEY;
int ret;
- *_trusted = false;
-
for (p = pkcs7->certs; p; p = p->next)
p->seen = false;
@@ -193,7 +179,6 @@ int pkcs7_validate_trust(struct pkcs7_message *pkcs7,
cached_ret = -ENOPKG;
continue;
case 0:
- *_trusted |= sinfo->trusted;
cached_ret = 0;
continue;
default:
diff --git a/crypto/asymmetric_keys/pkcs7_verify.c b/crypto/asymmetric_keys/pkcs7_verify.c
index 50be2a15e531..44b746e9df1b 100644
--- a/crypto/asymmetric_keys/pkcs7_verify.c
+++ b/crypto/asymmetric_keys/pkcs7_verify.c
@@ -25,34 +25,36 @@
static int pkcs7_digest(struct pkcs7_message *pkcs7,
struct pkcs7_signed_info *sinfo)
{
+ struct public_key_signature *sig = sinfo->sig;
struct crypto_shash *tfm;
struct shash_desc *desc;
- size_t digest_size, desc_size;
- void *digest;
+ size_t desc_size;
int ret;
- kenter(",%u,%s", sinfo->index, sinfo->sig.hash_algo);
+ kenter(",%u,%s", sinfo->index, sinfo->sig->hash_algo);
- if (!sinfo->sig.hash_algo)
+ if (!sinfo->sig->hash_algo)
return -ENOPKG;
/* Allocate the hashing algorithm we're going to need and find out how
* big the hash operational data will be.
*/
- tfm = crypto_alloc_shash(sinfo->sig.hash_algo, 0, 0);
+ tfm = crypto_alloc_shash(sinfo->sig->hash_algo, 0, 0);
if (IS_ERR(tfm))
return (PTR_ERR(tfm) == -ENOENT) ? -ENOPKG : PTR_ERR(tfm);
desc_size = crypto_shash_descsize(tfm) + sizeof(*desc);
- sinfo->sig.digest_size = digest_size = crypto_shash_digestsize(tfm);
+ sig->digest_size = crypto_shash_digestsize(tfm);
ret = -ENOMEM;
- digest = kzalloc(ALIGN(digest_size, __alignof__(*desc)) + desc_size,
- GFP_KERNEL);
- if (!digest)
+ sig->digest = kmalloc(sig->digest_size, GFP_KERNEL);
+ if (!sig->digest)
+ goto error_no_desc;
+
+ desc = kzalloc(desc_size, GFP_KERNEL);
+ if (!desc)
goto error_no_desc;
- desc = PTR_ALIGN(digest + digest_size, __alignof__(*desc));
desc->tfm = tfm;
desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
@@ -60,10 +62,11 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
ret = crypto_shash_init(desc);
if (ret < 0)
goto error;
- ret = crypto_shash_finup(desc, pkcs7->data, pkcs7->data_len, digest);
+ ret = crypto_shash_finup(desc, pkcs7->data, pkcs7->data_len,
+ sig->digest);
if (ret < 0)
goto error;
- pr_devel("MsgDigest = [%*ph]\n", 8, digest);
+ pr_devel("MsgDigest = [%*ph]\n", 8, sig->digest);
/* However, if there are authenticated attributes, there must be a
* message digest attribute amongst them which corresponds to the
@@ -78,14 +81,15 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
goto error;
}
- if (sinfo->msgdigest_len != sinfo->sig.digest_size) {
+ if (sinfo->msgdigest_len != sig->digest_size) {
pr_debug("Sig %u: Invalid digest size (%u)\n",
sinfo->index, sinfo->msgdigest_len);
ret = -EBADMSG;
goto error;
}
- if (memcmp(digest, sinfo->msgdigest, sinfo->msgdigest_len) != 0) {
+ if (memcmp(sig->digest, sinfo->msgdigest,
+ sinfo->msgdigest_len) != 0) {
pr_debug("Sig %u: Message digest doesn't match\n",
sinfo->index);
ret = -EKEYREJECTED;
@@ -97,7 +101,7 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
* convert the attributes from a CONT.0 into a SET before we
* hash it.
*/
- memset(digest, 0, sinfo->sig.digest_size);
+ memset(sig->digest, 0, sig->digest_size);
ret = crypto_shash_init(desc);
if (ret < 0)
@@ -107,17 +111,14 @@ static int pkcs7_digest(struct pkcs7_message *pkcs7,
if (ret < 0)
goto error;
ret = crypto_shash_finup(desc, sinfo->authattrs,
- sinfo->authattrs_len, digest);
+ sinfo->authattrs_len, sig->digest);
if (ret < 0)
goto error;
- pr_devel("AADigest = [%*ph]\n", 8, digest);
+ pr_devel("AADigest = [%*ph]\n", 8, sig->digest);
}
- sinfo->sig.digest = digest;
- digest = NULL;
-
error:
- kfree(digest);
+ kfree(desc);
error_no_desc:
crypto_free_shash(tfm);
kleave(" = %d", ret);
@@ -144,12 +145,12 @@ static int pkcs7_find_key(struct pkcs7_message *pkcs7,
* PKCS#7 message - but I can't be 100% sure of that. It's
* possible this will need element-by-element comparison.
*/
- if (!asymmetric_key_id_same(x509->id, sinfo->signing_cert_id))
+ if (!asymmetric_key_id_same(x509->id, sinfo->sig->auth_ids[0]))
continue;
pr_devel("Sig %u: Found cert serial match X.509[%u]\n",
sinfo->index, certix);
- if (x509->pub->pkey_algo != sinfo->sig.pkey_algo) {
+ if (x509->pub->pkey_algo != sinfo->sig->pkey_algo) {
pr_warn("Sig %u: X.509 algo and PKCS#7 sig algo don't match\n",
sinfo->index);
continue;
@@ -164,7 +165,7 @@ static int pkcs7_find_key(struct pkcs7_message *pkcs7,
*/
pr_debug("Sig %u: Issuing X.509 cert not found (#%*phN)\n",
sinfo->index,
- sinfo->signing_cert_id->len, sinfo->signing_cert_id->data);
+ sinfo->sig->auth_ids[0]->len, sinfo->sig->auth_ids[0]->data);
return 0;
}
@@ -174,6 +175,7 @@ static int pkcs7_find_key(struct pkcs7_message *pkcs7,
static int pkcs7_verify_sig_chain(struct pkcs7_message *pkcs7,
struct pkcs7_signed_info *sinfo)
{
+ struct public_key_signature *sig;
struct x509_certificate *x509 = sinfo->signer, *p;
struct asymmetric_key_id *auth;
int ret;
@@ -188,34 +190,26 @@ static int pkcs7_verify_sig_chain(struct pkcs7_message *pkcs7,
x509->subject,
x509->raw_serial_size, x509->raw_serial);
x509->seen = true;
- ret = x509_get_sig_params(x509);
- if (ret < 0)
- goto maybe_missing_crypto_in_x509;
+ if (x509->unsupported_key)
+ goto unsupported_crypto_in_x509;
pr_debug("- issuer %s\n", x509->issuer);
- if (x509->akid_id)
+ sig = x509->sig;
+ if (sig->auth_ids[0])
pr_debug("- authkeyid.id %*phN\n",
- x509->akid_id->len, x509->akid_id->data);
- if (x509->akid_skid)
+ sig->auth_ids[0]->len, sig->auth_ids[0]->data);
+ if (sig->auth_ids[1])
pr_debug("- authkeyid.skid %*phN\n",
- x509->akid_skid->len, x509->akid_skid->data);
+ sig->auth_ids[1]->len, sig->auth_ids[1]->data);
- if ((!x509->akid_id && !x509->akid_skid) ||
- strcmp(x509->subject, x509->issuer) == 0) {
+ if (x509->self_signed) {
/* If there's no authority certificate specified, then
* the certificate must be self-signed and is the root
* of the chain. Likewise if the cert is its own
* authority.
*/
- pr_debug("- no auth?\n");
- if (x509->raw_subject_size != x509->raw_issuer_size ||
- memcmp(x509->raw_subject, x509->raw_issuer,
- x509->raw_issuer_size) != 0)
- return 0;
-
- ret = x509_check_signature(x509->pub, x509);
- if (ret < 0)
- goto maybe_missing_crypto_in_x509;
+ if (x509->unsupported_sig)
+ goto unsupported_crypto_in_x509;
x509->signer = x509;
pr_debug("- self-signed\n");
return 0;
@@ -224,7 +218,7 @@ static int pkcs7_verify_sig_chain(struct pkcs7_message *pkcs7,
/* Look through the X.509 certificates in the PKCS#7 message's
* list to see if the next one is there.
*/
- auth = x509->akid_id;
+ auth = sig->auth_ids[0];
if (auth) {
pr_debug("- want %*phN\n", auth->len, auth->data);
for (p = pkcs7->certs; p; p = p->next) {
@@ -234,7 +228,7 @@ static int pkcs7_verify_sig_chain(struct pkcs7_message *pkcs7,
goto found_issuer_check_skid;
}
} else {
- auth = x509->akid_skid;
+ auth = sig->auth_ids[1];
pr_debug("- want %*phN\n", auth->len, auth->data);
for (p = pkcs7->certs; p; p = p->next) {
if (!p->skid)
@@ -254,8 +248,8 @@ static int pkcs7_verify_sig_chain(struct pkcs7_message *pkcs7,
/* We matched issuer + serialNumber, but if there's an
* authKeyId.keyId, that must match the CA subjKeyId also.
*/
- if (x509->akid_skid &&
- !asymmetric_key_id_same(p->skid, x509->akid_skid)) {
+ if (sig->auth_ids[1] &&
+ !asymmetric_key_id_same(p->skid, sig->auth_ids[1])) {
pr_warn("Sig %u: X.509 chain contains auth-skid nonmatch (%u->%u)\n",
sinfo->index, x509->index, p->index);
return -EKEYREJECTED;
@@ -267,7 +261,7 @@ static int pkcs7_verify_sig_chain(struct pkcs7_message *pkcs7,
sinfo->index);
return 0;
}
- ret = x509_check_signature(p->pub, x509);
+ ret = public_key_verify_signature(p->pub, p->sig);
if (ret < 0)
return ret;
x509->signer = p;
@@ -279,16 +273,14 @@ static int pkcs7_verify_sig_chain(struct pkcs7_message *pkcs7,
might_sleep();
}
-maybe_missing_crypto_in_x509:
+unsupported_crypto_in_x509:
/* Just prune the certificate chain at this point if we lack some
* crypto module to go further. Note, however, we don't want to set
- * sinfo->missing_crypto as the signed info block may still be
+ * sinfo->unsupported_crypto as the signed info block may still be
* validatable against an X.509 cert lower in the chain that we have a
* trusted copy of.
*/
- if (ret == -ENOPKG)
- return 0;
- return ret;
+ return 0;
}
/*
@@ -332,7 +324,7 @@ static int pkcs7_verify_one(struct pkcs7_message *pkcs7,
}
/* Verify the PKCS#7 binary against the key */
- ret = public_key_verify_signature(sinfo->signer->pub, &sinfo->sig);
+ ret = public_key_verify_signature(sinfo->signer->pub, sinfo->sig);
if (ret < 0)
return ret;
@@ -375,9 +367,8 @@ int pkcs7_verify(struct pkcs7_message *pkcs7,
enum key_being_used_for usage)
{
struct pkcs7_signed_info *sinfo;
- struct x509_certificate *x509;
int enopkg = -ENOPKG;
- int ret, n;
+ int ret;
kenter("");
@@ -419,12 +410,6 @@ int pkcs7_verify(struct pkcs7_message *pkcs7,
return -EINVAL;
}
- for (n = 0, x509 = pkcs7->certs; x509; x509 = x509->next, n++) {
- ret = x509_get_sig_params(x509);
- if (ret < 0)
- return ret;
- }
-
for (sinfo = pkcs7->signed_infos; sinfo; sinfo = sinfo->next) {
ret = pkcs7_verify_one(pkcs7, sinfo);
if (ret < 0) {
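Editorial aside on the convention used throughout this hunk (a sketch, not part of the patch): sig->auth_ids[0] carries the AKID reference built from issuer + serialNumber and is matched against a candidate CA's ->id, while sig->auth_ids[1] carries the AKID keyIdentifier and is matched against ->skid. A trivial helper that prefers the issuer+serial form, mirroring the lookup order above, could look like this:

/* Hypothetical helper; assumes only the auth_ids[] layout shown above. */
static const struct asymmetric_key_id *
pick_auth_id(const struct public_key_signature *sig)
{
	/* [0] = issuer + serialNumber reference, [1] = subjectKeyIdentifier */
	return sig->auth_ids[0] ? sig->auth_ids[0] : sig->auth_ids[1];
}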
diff --git a/crypto/asymmetric_keys/public_key.c b/crypto/asymmetric_keys/public_key.c
index 0f8b264b3961..fd76b5fc3b3a 100644
--- a/crypto/asymmetric_keys/public_key.c
+++ b/crypto/asymmetric_keys/public_key.c
@@ -39,15 +39,23 @@ static void public_key_describe(const struct key *asymmetric_key,
/*
* Destroy a public key algorithm key.
*/
-void public_key_destroy(void *payload)
+void public_key_free(struct public_key *key)
{
- struct public_key *key = payload;
-
- if (key)
+ if (key) {
kfree(key->key);
- kfree(key);
+ kfree(key);
+ }
+}
+EXPORT_SYMBOL_GPL(public_key_free);
+
+/*
+ * Destroy a public key algorithm key.
+ */
+static void public_key_destroy(void *payload0, void *payload3)
+{
+ public_key_free(payload0);
+ public_key_signature_free(payload3);
}
-EXPORT_SYMBOL_GPL(public_key_destroy);
struct public_key_completion {
struct completion completion;
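A minimal sketch of tearing down a parsed certificate's two payloads with the split helpers above (hypothetical caller; public_key_signature_free() is added in crypto/asymmetric_keys/signature.c later in this series):

static void example_discard_cert_payloads(struct public_key *pub,
					  struct public_key_signature *sig)
{
	public_key_free(pub);		/* frees key->key, then the key itself */
	public_key_signature_free(sig);	/* frees auth_ids[], s and digest */
}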
diff --git a/crypto/asymmetric_keys/restrict.c b/crypto/asymmetric_keys/restrict.c
new file mode 100644
index 000000000000..ac4bddf669de
--- /dev/null
+++ b/crypto/asymmetric_keys/restrict.c
@@ -0,0 +1,108 @@
+/* Instantiate a public key crypto key from an X.509 Certificate
+ *
+ * Copyright (C) 2012, 2016 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public Licence
+ * as published by the Free Software Foundation; either version
+ * 2 of the Licence, or (at your option) any later version.
+ */
+
+#define pr_fmt(fmt) "ASYM: "fmt
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/err.h>
+#include <crypto/public_key.h>
+#include "asymmetric_keys.h"
+
+static bool use_builtin_keys;
+static struct asymmetric_key_id *ca_keyid;
+
+#ifndef MODULE
+static struct {
+ struct asymmetric_key_id id;
+ unsigned char data[10];
+} cakey;
+
+static int __init ca_keys_setup(char *str)
+{
+ if (!str) /* default system keyring */
+ return 1;
+
+ if (strncmp(str, "id:", 3) == 0) {
+ struct asymmetric_key_id *p = &cakey.id;
+ size_t hexlen = (strlen(str) - 3) / 2;
+ int ret;
+
+ if (hexlen == 0 || hexlen > sizeof(cakey.data)) {
+ pr_err("Missing or invalid ca_keys id\n");
+ return 1;
+ }
+
+ ret = __asymmetric_key_hex_to_key_id(str + 3, p, hexlen);
+ if (ret < 0)
+ pr_err("Unparsable ca_keys id hex string\n");
+ else
+ ca_keyid = p; /* owner key 'id:xxxxxx' */
+ } else if (strcmp(str, "builtin") == 0) {
+ use_builtin_keys = true;
+ }
+
+ return 1;
+}
+__setup("ca_keys=", ca_keys_setup);
+#endif
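For reference, and judging only from the strncmp()/strcmp() checks above, the ca_keys= boot parameter accepts two forms; the hex value below is a made-up example:

	ca_keys=builtin
	ca_keys=id:396e1d72bf9d2350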
+
+/**
+ * restrict_link_by_signature - Restrict additions to a ring of public keys
+ * @trust_keyring: A ring of keys that can be used to vouch for the new cert.
+ * @type: The type of key being added.
+ * @payload: The payload of the new key.
+ *
+ * Check the new certificate against the ones in the trust keyring. If one of
+ * those is the signing key and validates the new certificate, then mark the
+ * new certificate as being trusted.
+ *
+ * Returns 0 if the new certificate was accepted, -ENOKEY if we couldn't find a
+ * matching parent certificate in the trusted list, -EKEYREJECTED if the
+ * signature check fails or the key is blacklisted and some other error if
+ * there is a matching certificate but the signature check cannot be performed.
+ */
+int restrict_link_by_signature(struct key *trust_keyring,
+ const struct key_type *type,
+ const union key_payload *payload)
+{
+ const struct public_key_signature *sig;
+ struct key *key;
+ int ret;
+
+ pr_devel("==>%s()\n", __func__);
+
+ if (!trust_keyring)
+ return -ENOKEY;
+
+ if (type != &key_type_asymmetric)
+ return -EOPNOTSUPP;
+
+ sig = payload->data[asym_auth];
+ if (!sig->auth_ids[0] && !sig->auth_ids[1])
+ return 0;
+
+ if (ca_keyid && !asymmetric_key_id_partial(sig->auth_ids[1], ca_keyid))
+ return -EPERM;
+
+ /* See if we have a key that signed this one. */
+ key = find_asymmetric_key(trust_keyring,
+ sig->auth_ids[0], sig->auth_ids[1],
+ false);
+ if (IS_ERR(key))
+ return -ENOKEY;
+
+ if (use_builtin_keys && !test_bit(KEY_FLAG_BUILTIN, &key->flags))
+ ret = -ENOKEY;
+ else
+ ret = verify_signature(key, sig);
+ key_put(key);
+ return ret;
+}
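To make the intended use concrete, here is a trivial, hypothetical caller of the restriction above; how the hook is actually wired into a keyring is outside this file, so treat this purely as an illustration:

static int example_vet_cert(struct key *trust_keyring,
			    const union key_payload *payload)
{
	/* payload->data[asym_auth] must already hold the cert's signature */
	return restrict_link_by_signature(trust_keyring,
					  &key_type_asymmetric, payload);
}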
diff --git a/crypto/asymmetric_keys/signature.c b/crypto/asymmetric_keys/signature.c
index 004d5fc8e56b..11b7ba170904 100644
--- a/crypto/asymmetric_keys/signature.c
+++ b/crypto/asymmetric_keys/signature.c
@@ -15,9 +15,27 @@
#include <keys/asymmetric-subtype.h>
#include <linux/export.h>
#include <linux/err.h>
+#include <linux/slab.h>
#include <crypto/public_key.h>
#include "asymmetric_keys.h"
+/*
+ * Destroy a public key signature.
+ */
+void public_key_signature_free(struct public_key_signature *sig)
+{
+ int i;
+
+ if (sig) {
+ for (i = 0; i < ARRAY_SIZE(sig->auth_ids); i++)
+ kfree(sig->auth_ids[i]);
+ kfree(sig->s);
+ kfree(sig->digest);
+ kfree(sig);
+ }
+}
+EXPORT_SYMBOL_GPL(public_key_signature_free);
+
/**
* verify_signature - Initiate the use of an asymmetric key to verify a signature
* @key: The asymmetric key to verify against
diff --git a/crypto/asymmetric_keys/verify_pefile.c b/crypto/asymmetric_keys/verify_pefile.c
index 7e8c2338ae25..672a94c2c3ff 100644
--- a/crypto/asymmetric_keys/verify_pefile.c
+++ b/crypto/asymmetric_keys/verify_pefile.c
@@ -16,7 +16,7 @@
#include <linux/err.h>
#include <linux/pe.h>
#include <linux/asn1.h>
-#include <crypto/pkcs7.h>
+#include <linux/verification.h>
#include <crypto/hash.h>
#include "verify_pefile.h"
@@ -392,9 +392,8 @@ error_no_desc:
* verify_pefile_signature - Verify the signature on a PE binary image
* @pebuf: Buffer containing the PE binary image
* @pelen: Length of the binary image
- * @trust_keyring: Signing certificates to use as starting points
+ * @trust_keys: Signing certificate(s) to use as starting points
* @usage: The use to which the key is being put.
- * @_trusted: Set to true if trustworth, false otherwise
*
* Validate that the certificate chain inside the PKCS#7 message inside the PE
* binary image intersects keys we already know and trust.
@@ -418,14 +417,10 @@ error_no_desc:
* May also return -ENOMEM.
*/
int verify_pefile_signature(const void *pebuf, unsigned pelen,
- struct key *trusted_keyring,
- enum key_being_used_for usage,
- bool *_trusted)
+ struct key *trusted_keys,
+ enum key_being_used_for usage)
{
- struct pkcs7_message *pkcs7;
struct pefile_context ctx;
- const void *data;
- size_t datalen;
int ret;
kenter("");
@@ -439,19 +434,10 @@ int verify_pefile_signature(const void *pebuf, unsigned pelen,
if (ret < 0)
return ret;
- pkcs7 = pkcs7_parse_message(pebuf + ctx.sig_offset, ctx.sig_len);
- if (IS_ERR(pkcs7))
- return PTR_ERR(pkcs7);
- ctx.pkcs7 = pkcs7;
-
- ret = pkcs7_get_content_data(ctx.pkcs7, &data, &datalen, false);
- if (ret < 0 || datalen == 0) {
- pr_devel("PKCS#7 message does not contain data\n");
- ret = -EBADMSG;
- goto error;
- }
-
- ret = mscode_parse(&ctx);
+ ret = verify_pkcs7_signature(NULL, 0,
+ pebuf + ctx.sig_offset, ctx.sig_len,
+ trusted_keys, usage,
+ mscode_parse, &ctx);
if (ret < 0)
goto error;
@@ -462,16 +448,8 @@ int verify_pefile_signature(const void *pebuf, unsigned pelen,
* contents.
*/
ret = pefile_digest_pe(pebuf, pelen, &ctx);
- if (ret < 0)
- goto error;
-
- ret = pkcs7_verify(pkcs7, usage);
- if (ret < 0)
- goto error;
-
- ret = pkcs7_validate_trust(pkcs7, trusted_keyring, _trusted);
error:
- pkcs7_free_message(ctx.pkcs7);
+ kfree(ctx.digest);
return ret;
}
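The call above implies the following shape for verify_pkcs7_signature() (inferred from this hunk and <linux/verification.h>, so treat the prototype as an assumption). A caller verifying a detached PKCS#7 blob over external data, with no content-type callback, might look like:

static int example_verify_blob(const void *data, size_t len,
			       const void *pkcs7, size_t pkcs7_len,
			       struct key *trusted_keys)
{
	return verify_pkcs7_signature(data, len, pkcs7, pkcs7_len,
				      trusted_keys,
				      VERIFYING_UNSPECIFIED_SIGNATURE,
				      NULL, NULL);
}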
diff --git a/crypto/asymmetric_keys/verify_pefile.h b/crypto/asymmetric_keys/verify_pefile.h
index a133eb81a492..cd4d20930728 100644
--- a/crypto/asymmetric_keys/verify_pefile.h
+++ b/crypto/asymmetric_keys/verify_pefile.h
@@ -9,7 +9,6 @@
* 2 of the Licence, or (at your option) any later version.
*/
-#include <linux/verify_pefile.h>
#include <crypto/pkcs7.h>
#include <crypto/hash_info.h>
@@ -23,7 +22,6 @@ struct pefile_context {
unsigned sig_offset;
unsigned sig_len;
const struct section_header *secs;
- struct pkcs7_message *pkcs7;
/* PKCS#7 MS Individual Code Signing content */
const void *digest; /* Digest */
@@ -39,4 +37,5 @@ struct pefile_context {
/*
* mscode_parser.c
*/
-extern int mscode_parse(struct pefile_context *ctx);
+extern int mscode_parse(void *_ctx, const void *content_data, size_t data_len,
+ size_t asn1hdrlen);
diff --git a/crypto/asymmetric_keys/x509_cert_parser.c b/crypto/asymmetric_keys/x509_cert_parser.c
index 4a29bac70060..865f46ea724f 100644
--- a/crypto/asymmetric_keys/x509_cert_parser.c
+++ b/crypto/asymmetric_keys/x509_cert_parser.c
@@ -47,15 +47,12 @@ struct x509_parse_context {
void x509_free_certificate(struct x509_certificate *cert)
{
if (cert) {
- public_key_destroy(cert->pub);
+ public_key_free(cert->pub);
+ public_key_signature_free(cert->sig);
kfree(cert->issuer);
kfree(cert->subject);
kfree(cert->id);
kfree(cert->skid);
- kfree(cert->akid_id);
- kfree(cert->akid_skid);
- kfree(cert->sig.digest);
- kfree(cert->sig.s);
kfree(cert);
}
}
@@ -78,6 +75,9 @@ struct x509_certificate *x509_cert_parse(const void *data, size_t datalen)
cert->pub = kzalloc(sizeof(struct public_key), GFP_KERNEL);
if (!cert->pub)
goto error_no_ctx;
+ cert->sig = kzalloc(sizeof(struct public_key_signature), GFP_KERNEL);
+ if (!cert->sig)
+ goto error_no_ctx;
ctx = kzalloc(sizeof(struct x509_parse_context), GFP_KERNEL);
if (!ctx)
goto error_no_ctx;
@@ -108,6 +108,11 @@ struct x509_certificate *x509_cert_parse(const void *data, size_t datalen)
cert->pub->keylen = ctx->key_size;
+ /* Grab the signature bits */
+ ret = x509_get_sig_params(cert);
+ if (ret < 0)
+ goto error_decode;
+
/* Generate cert issuer + serial number key ID */
kid = asymmetric_key_generate_id(cert->raw_serial,
cert->raw_serial_size,
@@ -119,6 +124,11 @@ struct x509_certificate *x509_cert_parse(const void *data, size_t datalen)
}
cert->id = kid;
+ /* Detect self-signed certificates */
+ ret = x509_check_for_self_signed(cert);
+ if (ret < 0)
+ goto error_decode;
+
kfree(ctx);
return cert;
@@ -188,33 +198,33 @@ int x509_note_pkey_algo(void *context, size_t hdrlen,
return -ENOPKG; /* Unsupported combination */
case OID_md4WithRSAEncryption:
- ctx->cert->sig.hash_algo = "md4";
- ctx->cert->sig.pkey_algo = "rsa";
+ ctx->cert->sig->hash_algo = "md4";
+ ctx->cert->sig->pkey_algo = "rsa";
break;
case OID_sha1WithRSAEncryption:
- ctx->cert->sig.hash_algo = "sha1";
- ctx->cert->sig.pkey_algo = "rsa";
+ ctx->cert->sig->hash_algo = "sha1";
+ ctx->cert->sig->pkey_algo = "rsa";
break;
case OID_sha256WithRSAEncryption:
- ctx->cert->sig.hash_algo = "sha256";
- ctx->cert->sig.pkey_algo = "rsa";
+ ctx->cert->sig->hash_algo = "sha256";
+ ctx->cert->sig->pkey_algo = "rsa";
break;
case OID_sha384WithRSAEncryption:
- ctx->cert->sig.hash_algo = "sha384";
- ctx->cert->sig.pkey_algo = "rsa";
+ ctx->cert->sig->hash_algo = "sha384";
+ ctx->cert->sig->pkey_algo = "rsa";
break;
case OID_sha512WithRSAEncryption:
- ctx->cert->sig.hash_algo = "sha512";
- ctx->cert->sig.pkey_algo = "rsa";
+ ctx->cert->sig->hash_algo = "sha512";
+ ctx->cert->sig->pkey_algo = "rsa";
break;
case OID_sha224WithRSAEncryption:
- ctx->cert->sig.hash_algo = "sha224";
- ctx->cert->sig.pkey_algo = "rsa";
+ ctx->cert->sig->hash_algo = "sha224";
+ ctx->cert->sig->pkey_algo = "rsa";
break;
}
@@ -572,14 +582,14 @@ int x509_akid_note_kid(void *context, size_t hdrlen,
pr_debug("AKID: keyid: %*phN\n", (int)vlen, value);
- if (ctx->cert->akid_skid)
+ if (ctx->cert->sig->auth_ids[1])
return 0;
kid = asymmetric_key_generate_id(value, vlen, "", 0);
if (IS_ERR(kid))
return PTR_ERR(kid);
pr_debug("authkeyid %*phN\n", kid->len, kid->data);
- ctx->cert->akid_skid = kid;
+ ctx->cert->sig->auth_ids[1] = kid;
return 0;
}
@@ -611,7 +621,7 @@ int x509_akid_note_serial(void *context, size_t hdrlen,
pr_debug("AKID: serial: %*phN\n", (int)vlen, value);
- if (!ctx->akid_raw_issuer || ctx->cert->akid_id)
+ if (!ctx->akid_raw_issuer || ctx->cert->sig->auth_ids[0])
return 0;
kid = asymmetric_key_generate_id(value,
@@ -622,6 +632,6 @@ int x509_akid_note_serial(void *context, size_t hdrlen,
return PTR_ERR(kid);
pr_debug("authkeyid %*phN\n", kid->len, kid->data);
- ctx->cert->akid_id = kid;
+ ctx->cert->sig->auth_ids[0] = kid;
return 0;
}
diff --git a/crypto/asymmetric_keys/x509_parser.h b/crypto/asymmetric_keys/x509_parser.h
index dbeed6018e63..05eef1c68881 100644
--- a/crypto/asymmetric_keys/x509_parser.h
+++ b/crypto/asymmetric_keys/x509_parser.h
@@ -17,13 +17,11 @@ struct x509_certificate {
struct x509_certificate *next;
struct x509_certificate *signer; /* Certificate that signed this one */
struct public_key *pub; /* Public key details */
- struct public_key_signature sig; /* Signature parameters */
+ struct public_key_signature *sig; /* Signature parameters */
char *issuer; /* Name of certificate issuer */
char *subject; /* Name of certificate subject */
struct asymmetric_key_id *id; /* Issuer + Serial number */
struct asymmetric_key_id *skid; /* Subject + subjectKeyId (optional) */
- struct asymmetric_key_id *akid_id; /* CA AuthKeyId matching ->id (optional) */
- struct asymmetric_key_id *akid_skid; /* CA AuthKeyId matching ->skid (optional) */
time64_t valid_from;
time64_t valid_to;
const void *tbs; /* Signed data */
@@ -41,8 +39,9 @@ struct x509_certificate {
unsigned index;
bool seen; /* Infinite recursion prevention */
bool verified;
- bool trusted;
- bool unsupported_crypto; /* T if can't be verified due to missing crypto */
+ bool self_signed; /* T if self-signed (check unsupported_sig too) */
+ bool unsupported_key; /* T if key uses unsupported crypto */
+ bool unsupported_sig; /* T if signature uses unsupported crypto */
};
/*
@@ -58,5 +57,4 @@ extern int x509_decode_time(time64_t *_t, size_t hdrlen,
* x509_public_key.c
*/
extern int x509_get_sig_params(struct x509_certificate *cert);
-extern int x509_check_signature(const struct public_key *pub,
- struct x509_certificate *cert);
+extern int x509_check_for_self_signed(struct x509_certificate *cert);
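A short sketch of how a consumer is expected to treat the three new flags, mirroring x509_key_preparse() further down (illustration only):

static int example_check_cert_flags(struct x509_certificate *cert)
{
	if (cert->unsupported_key)
		return -ENOPKG;		/* key algorithm unknown: cert unusable */
	if (cert->unsupported_sig) {
		/* Keep the key but drop the signature we cannot check. */
		public_key_signature_free(cert->sig);
		cert->sig = NULL;
	}
	return 0;
}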
diff --git a/crypto/asymmetric_keys/x509_public_key.c b/crypto/asymmetric_keys/x509_public_key.c
index 733c046aacc6..fb732296cd36 100644
--- a/crypto/asymmetric_keys/x509_public_key.c
+++ b/crypto/asymmetric_keys/x509_public_key.c
@@ -20,256 +20,133 @@
#include "asymmetric_keys.h"
#include "x509_parser.h"
-static bool use_builtin_keys;
-static struct asymmetric_key_id *ca_keyid;
-
-#ifndef MODULE
-static struct {
- struct asymmetric_key_id id;
- unsigned char data[10];
-} cakey;
-
-static int __init ca_keys_setup(char *str)
-{
- if (!str) /* default system keyring */
- return 1;
-
- if (strncmp(str, "id:", 3) == 0) {
- struct asymmetric_key_id *p = &cakey.id;
- size_t hexlen = (strlen(str) - 3) / 2;
- int ret;
-
- if (hexlen == 0 || hexlen > sizeof(cakey.data)) {
- pr_err("Missing or invalid ca_keys id\n");
- return 1;
- }
-
- ret = __asymmetric_key_hex_to_key_id(str + 3, p, hexlen);
- if (ret < 0)
- pr_err("Unparsable ca_keys id hex string\n");
- else
- ca_keyid = p; /* owner key 'id:xxxxxx' */
- } else if (strcmp(str, "builtin") == 0) {
- use_builtin_keys = true;
- }
-
- return 1;
-}
-__setup("ca_keys=", ca_keys_setup);
-#endif
-
-/**
- * x509_request_asymmetric_key - Request a key by X.509 certificate params.
- * @keyring: The keys to search.
- * @id: The issuer & serialNumber to look for or NULL.
- * @skid: The subjectKeyIdentifier to look for or NULL.
- * @partial: Use partial match if true, exact if false.
- *
- * Find a key in the given keyring by identifier. The preferred identifier is
- * the issuer + serialNumber and the fallback identifier is the
- * subjectKeyIdentifier. If both are given, the lookup is by the former, but
- * the latter must also match.
- */
-struct key *x509_request_asymmetric_key(struct key *keyring,
- const struct asymmetric_key_id *id,
- const struct asymmetric_key_id *skid,
- bool partial)
-{
- struct key *key;
- key_ref_t ref;
- const char *lookup;
- char *req, *p;
- int len;
-
- if (id) {
- lookup = id->data;
- len = id->len;
- } else {
- lookup = skid->data;
- len = skid->len;
- }
-
- /* Construct an identifier "id:<keyid>". */
- p = req = kmalloc(2 + 1 + len * 2 + 1, GFP_KERNEL);
- if (!req)
- return ERR_PTR(-ENOMEM);
-
- if (partial) {
- *p++ = 'i';
- *p++ = 'd';
- } else {
- *p++ = 'e';
- *p++ = 'x';
- }
- *p++ = ':';
- p = bin2hex(p, lookup, len);
- *p = 0;
-
- pr_debug("Look up: \"%s\"\n", req);
-
- ref = keyring_search(make_key_ref(keyring, 1),
- &key_type_asymmetric, req);
- if (IS_ERR(ref))
- pr_debug("Request for key '%s' err %ld\n", req, PTR_ERR(ref));
- kfree(req);
-
- if (IS_ERR(ref)) {
- switch (PTR_ERR(ref)) {
- /* Hide some search errors */
- case -EACCES:
- case -ENOTDIR:
- case -EAGAIN:
- return ERR_PTR(-ENOKEY);
- default:
- return ERR_CAST(ref);
- }
- }
-
- key = key_ref_to_ptr(ref);
- if (id && skid) {
- const struct asymmetric_key_ids *kids = asymmetric_key_ids(key);
- if (!kids->id[1]) {
- pr_debug("issuer+serial match, but expected SKID missing\n");
- goto reject;
- }
- if (!asymmetric_key_id_same(skid, kids->id[1])) {
- pr_debug("issuer+serial match, but SKID does not\n");
- goto reject;
- }
- }
-
- pr_devel("<==%s() = 0 [%x]\n", __func__, key_serial(key));
- return key;
-
-reject:
- key_put(key);
- return ERR_PTR(-EKEYREJECTED);
-}
-EXPORT_SYMBOL_GPL(x509_request_asymmetric_key);
-
/*
* Set up the signature parameters in an X.509 certificate. This involves
* digesting the signed data and extracting the signature.
*/
int x509_get_sig_params(struct x509_certificate *cert)
{
+ struct public_key_signature *sig = cert->sig;
struct crypto_shash *tfm;
struct shash_desc *desc;
- size_t digest_size, desc_size;
- void *digest;
+ size_t desc_size;
int ret;
pr_devel("==>%s()\n", __func__);
- if (cert->unsupported_crypto)
- return -ENOPKG;
- if (cert->sig.s)
+ if (!cert->pub->pkey_algo)
+ cert->unsupported_key = true;
+
+ if (!sig->pkey_algo)
+ cert->unsupported_sig = true;
+
+ /* We check the hash if we can - even if we can't then verify it */
+ if (!sig->hash_algo) {
+ cert->unsupported_sig = true;
return 0;
+ }
- cert->sig.s = kmemdup(cert->raw_sig, cert->raw_sig_size,
- GFP_KERNEL);
- if (!cert->sig.s)
+ sig->s = kmemdup(cert->raw_sig, cert->raw_sig_size, GFP_KERNEL);
+ if (!sig->s)
return -ENOMEM;
- cert->sig.s_size = cert->raw_sig_size;
+ sig->s_size = cert->raw_sig_size;
/* Allocate the hashing algorithm we're going to need and find out how
* big the hash operational data will be.
*/
- tfm = crypto_alloc_shash(cert->sig.hash_algo, 0, 0);
+ tfm = crypto_alloc_shash(sig->hash_algo, 0, 0);
if (IS_ERR(tfm)) {
if (PTR_ERR(tfm) == -ENOENT) {
- cert->unsupported_crypto = true;
- return -ENOPKG;
+ cert->unsupported_sig = true;
+ return 0;
}
return PTR_ERR(tfm);
}
desc_size = crypto_shash_descsize(tfm) + sizeof(*desc);
- digest_size = crypto_shash_digestsize(tfm);
+ sig->digest_size = crypto_shash_digestsize(tfm);
- /* We allocate the hash operational data storage on the end of the
- * digest storage space.
- */
ret = -ENOMEM;
- digest = kzalloc(ALIGN(digest_size, __alignof__(*desc)) + desc_size,
- GFP_KERNEL);
- if (!digest)
+ sig->digest = kmalloc(sig->digest_size, GFP_KERNEL);
+ if (!sig->digest)
goto error;
- cert->sig.digest = digest;
- cert->sig.digest_size = digest_size;
+ desc = kzalloc(desc_size, GFP_KERNEL);
+ if (!desc)
+ goto error;
- desc = PTR_ALIGN(digest + digest_size, __alignof__(*desc));
desc->tfm = tfm;
desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
ret = crypto_shash_init(desc);
if (ret < 0)
- goto error;
+ goto error_2;
might_sleep();
- ret = crypto_shash_finup(desc, cert->tbs, cert->tbs_size, digest);
+ ret = crypto_shash_finup(desc, cert->tbs, cert->tbs_size, sig->digest);
+
+error_2:
+ kfree(desc);
error:
crypto_free_shash(tfm);
pr_devel("<==%s() = %d\n", __func__, ret);
return ret;
}
-EXPORT_SYMBOL_GPL(x509_get_sig_params);
/*
- * Check the signature on a certificate using the provided public key
+ * Check for self-signedness in an X.509 cert and if found, check the signature
+ * immediately if we can.
*/
-int x509_check_signature(const struct public_key *pub,
- struct x509_certificate *cert)
+int x509_check_for_self_signed(struct x509_certificate *cert)
{
- int ret;
+ int ret = 0;
pr_devel("==>%s()\n", __func__);
- ret = x509_get_sig_params(cert);
- if (ret < 0)
- return ret;
+ if (cert->raw_subject_size != cert->raw_issuer_size ||
+ memcmp(cert->raw_subject, cert->raw_issuer,
+ cert->raw_issuer_size) != 0)
+ goto not_self_signed;
+
+ if (cert->sig->auth_ids[0] || cert->sig->auth_ids[1]) {
+ /* If the AKID is present it may have one or two parts. If
+ * both are supplied, both must match.
+ */
+ bool a = asymmetric_key_id_same(cert->skid, cert->sig->auth_ids[1]);
+ bool b = asymmetric_key_id_same(cert->id, cert->sig->auth_ids[0]);
+
+ if (!a && !b)
+ goto not_self_signed;
+
+ ret = -EKEYREJECTED;
+ if (((a && !b) || (b && !a)) &&
+ cert->sig->auth_ids[0] && cert->sig->auth_ids[1])
+ goto out;
+ }
- ret = public_key_verify_signature(pub, &cert->sig);
- if (ret == -ENOPKG)
- cert->unsupported_crypto = true;
- pr_debug("Cert Verification: %d\n", ret);
- return ret;
-}
-EXPORT_SYMBOL_GPL(x509_check_signature);
+ ret = -EKEYREJECTED;
+ if (cert->pub->pkey_algo != cert->sig->pkey_algo)
+ goto out;
-/*
- * Check the new certificate against the ones in the trust keyring. If one of
- * those is the signing key and validates the new certificate, then mark the
- * new certificate as being trusted.
- *
- * Return 0 if the new certificate was successfully validated, 1 if we couldn't
- * find a matching parent certificate in the trusted list and an error if there
- * is a matching certificate but the signature check fails.
- */
-static int x509_validate_trust(struct x509_certificate *cert,
- struct key *trust_keyring)
-{
- struct key *key;
- int ret = 1;
-
- if (!trust_keyring)
- return -EOPNOTSUPP;
-
- if (ca_keyid && !asymmetric_key_id_partial(cert->akid_skid, ca_keyid))
- return -EPERM;
-
- key = x509_request_asymmetric_key(trust_keyring,
- cert->akid_id, cert->akid_skid,
- false);
- if (!IS_ERR(key)) {
- if (!use_builtin_keys
- || test_bit(KEY_FLAG_BUILTIN, &key->flags))
- ret = x509_check_signature(key->payload.data[asym_crypto],
- cert);
- key_put(key);
+ ret = public_key_verify_signature(cert->pub, cert->sig);
+ if (ret < 0) {
+ if (ret == -ENOPKG) {
+ cert->unsupported_sig = true;
+ ret = 0;
+ }
+ goto out;
}
+
+ pr_devel("Cert Self-signature verified");
+ cert->self_signed = true;
+
+out:
+ pr_devel("<==%s() = %d\n", __func__, ret);
return ret;
+
+not_self_signed:
+ pr_devel("<==%s() = 0 [not]\n", __func__);
+ return 0;
}
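Editorial recap of the AKID logic above, as written: with subject and issuer already matching, let a = SKID matches auth_ids[1] and b = issuer+serial matches auth_ids[0]. Then:

	/* !a && !b                     -> not self-signed, return 0          */
	/* both IDs present && (a ^ b)  -> -EKEYREJECTED                      */
	/* otherwise                    -> go on and verify the self-signature */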
/*
@@ -291,34 +168,22 @@ static int x509_key_preparse(struct key_preparsed_payload *prep)
pr_devel("Cert Issuer: %s\n", cert->issuer);
pr_devel("Cert Subject: %s\n", cert->subject);
- if (!cert->pub->pkey_algo ||
- !cert->sig.pkey_algo ||
- !cert->sig.hash_algo) {
+ if (cert->unsupported_key) {
ret = -ENOPKG;
goto error_free_cert;
}
pr_devel("Cert Key Algo: %s\n", cert->pub->pkey_algo);
pr_devel("Cert Valid period: %lld-%lld\n", cert->valid_from, cert->valid_to);
- pr_devel("Cert Signature: %s + %s\n",
- cert->sig.pkey_algo,
- cert->sig.hash_algo);
cert->pub->id_type = "X509";
- /* Check the signature on the key if it appears to be self-signed */
- if ((!cert->akid_skid && !cert->akid_id) ||
- asymmetric_key_id_same(cert->skid, cert->akid_skid) ||
- asymmetric_key_id_same(cert->id, cert->akid_id)) {
- ret = x509_check_signature(cert->pub, cert); /* self-signed */
- if (ret < 0)
- goto error_free_cert;
- } else if (!prep->trusted) {
- ret = x509_validate_trust(cert, get_system_trusted_keyring());
- if (ret)
- ret = x509_validate_trust(cert, get_ima_mok_keyring());
- if (!ret)
- prep->trusted = 1;
+ if (cert->unsupported_sig) {
+ public_key_signature_free(cert->sig);
+ cert->sig = NULL;
+ } else {
+ pr_devel("Cert Signature: %s + %s\n",
+ cert->sig->pkey_algo, cert->sig->hash_algo);
}
/* Propose a description */
@@ -353,6 +218,7 @@ static int x509_key_preparse(struct key_preparsed_payload *prep)
prep->payload.data[asym_subtype] = &public_key_subtype;
prep->payload.data[asym_key_ids] = kids;
prep->payload.data[asym_crypto] = cert->pub;
+ prep->payload.data[asym_auth] = cert->sig;
prep->description = desc;
prep->quotalen = 100;
@@ -360,6 +226,7 @@ static int x509_key_preparse(struct key_preparsed_payload *prep)
cert->pub = NULL;
cert->id = NULL;
cert->skid = NULL;
+ cert->sig = NULL;
desc = NULL;
ret = 0;
diff --git a/crypto/drbg.c b/crypto/drbg.c
index 1b86310db7b1..0a3538f6cf22 100644
--- a/crypto/drbg.c
+++ b/crypto/drbg.c
@@ -592,8 +592,10 @@ static const struct drbg_state_ops drbg_ctr_ops = {
******************************************************************/
#if defined(CONFIG_CRYPTO_DRBG_HASH) || defined(CONFIG_CRYPTO_DRBG_HMAC)
-static int drbg_kcapi_hash(struct drbg_state *drbg, const unsigned char *key,
- unsigned char *outval, const struct list_head *in);
+static int drbg_kcapi_hash(struct drbg_state *drbg, unsigned char *outval,
+ const struct list_head *in);
+static void drbg_kcapi_hmacsetkey(struct drbg_state *drbg,
+ const unsigned char *key);
static int drbg_init_hash_kernel(struct drbg_state *drbg);
static int drbg_fini_hash_kernel(struct drbg_state *drbg);
#endif /* (CONFIG_CRYPTO_DRBG_HASH || CONFIG_CRYPTO_DRBG_HMAC) */
@@ -619,9 +621,11 @@ static int drbg_hmac_update(struct drbg_state *drbg, struct list_head *seed,
LIST_HEAD(seedlist);
LIST_HEAD(vdatalist);
- if (!reseed)
+ if (!reseed) {
/* 10.1.2.3 step 2 -- memset(0) of C is implicit with kzalloc */
memset(drbg->V, 1, drbg_statelen(drbg));
+ drbg_kcapi_hmacsetkey(drbg, drbg->C);
+ }
drbg_string_fill(&seed1, drbg->V, drbg_statelen(drbg));
list_add_tail(&seed1.list, &seedlist);
@@ -641,12 +645,13 @@ static int drbg_hmac_update(struct drbg_state *drbg, struct list_head *seed,
prefix = DRBG_PREFIX1;
/* 10.1.2.2 step 1 and 4 -- concatenation and HMAC for key */
seed2.buf = &prefix;
- ret = drbg_kcapi_hash(drbg, drbg->C, drbg->C, &seedlist);
+ ret = drbg_kcapi_hash(drbg, drbg->C, &seedlist);
if (ret)
return ret;
+ drbg_kcapi_hmacsetkey(drbg, drbg->C);
/* 10.1.2.2 step 2 and 5 -- HMAC for V */
- ret = drbg_kcapi_hash(drbg, drbg->C, drbg->V, &vdatalist);
+ ret = drbg_kcapi_hash(drbg, drbg->V, &vdatalist);
if (ret)
return ret;
@@ -681,7 +686,7 @@ static int drbg_hmac_generate(struct drbg_state *drbg,
while (len < buflen) {
unsigned int outlen = 0;
/* 10.1.2.5 step 4.1 */
- ret = drbg_kcapi_hash(drbg, drbg->C, drbg->V, &datalist);
+ ret = drbg_kcapi_hash(drbg, drbg->V, &datalist);
if (ret)
return ret;
outlen = (drbg_blocklen(drbg) < (buflen - len)) ?
@@ -796,7 +801,7 @@ static int drbg_hash_df(struct drbg_state *drbg,
while (len < outlen) {
short blocklen = 0;
/* 10.4.1 step 4.1 */
- ret = drbg_kcapi_hash(drbg, NULL, tmp, entropylist);
+ ret = drbg_kcapi_hash(drbg, tmp, entropylist);
if (ret)
goto out;
/* 10.4.1 step 4.2 */
@@ -874,7 +879,7 @@ static int drbg_hash_process_addtl(struct drbg_state *drbg,
list_add_tail(&data1.list, &datalist);
list_add_tail(&data2.list, &datalist);
list_splice_tail(addtl, &datalist);
- ret = drbg_kcapi_hash(drbg, NULL, drbg->scratchpad, &datalist);
+ ret = drbg_kcapi_hash(drbg, drbg->scratchpad, &datalist);
if (ret)
goto out;
@@ -907,7 +912,7 @@ static int drbg_hash_hashgen(struct drbg_state *drbg,
while (len < buflen) {
unsigned int outlen = 0;
/* 10.1.1.4 step hashgen 4.1 */
- ret = drbg_kcapi_hash(drbg, NULL, dst, &datalist);
+ ret = drbg_kcapi_hash(drbg, dst, &datalist);
if (ret) {
len = ret;
goto out;
@@ -956,7 +961,7 @@ static int drbg_hash_generate(struct drbg_state *drbg,
list_add_tail(&data1.list, &datalist);
drbg_string_fill(&data2, drbg->V, drbg_statelen(drbg));
list_add_tail(&data2.list, &datalist);
- ret = drbg_kcapi_hash(drbg, NULL, drbg->scratchpad, &datalist);
+ ret = drbg_kcapi_hash(drbg, drbg->scratchpad, &datalist);
if (ret) {
len = ret;
goto out;
@@ -1600,14 +1605,20 @@ static int drbg_fini_hash_kernel(struct drbg_state *drbg)
return 0;
}
-static int drbg_kcapi_hash(struct drbg_state *drbg, const unsigned char *key,
- unsigned char *outval, const struct list_head *in)
+static void drbg_kcapi_hmacsetkey(struct drbg_state *drbg,
+ const unsigned char *key)
+{
+ struct sdesc *sdesc = (struct sdesc *)drbg->priv_data;
+
+ crypto_shash_setkey(sdesc->shash.tfm, key, drbg_statelen(drbg));
+}
+
+static int drbg_kcapi_hash(struct drbg_state *drbg, unsigned char *outval,
+ const struct list_head *in)
{
struct sdesc *sdesc = (struct sdesc *)drbg->priv_data;
struct drbg_string *input = NULL;
- if (key)
- crypto_shash_setkey(sdesc->shash.tfm, key, drbg_statelen(drbg));
crypto_shash_init(&sdesc->shash);
list_for_each_entry(input, in, list)
crypto_shash_update(&sdesc->shash, input->buf, input->len);
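For context, a minimal sketch of the shash pattern the DRBG now follows: the HMAC key is set once with crypto_shash_setkey() and subsequent hashing never touches the key (error handling trimmed; assumes an HMAC tfm allocated elsewhere):

static int example_hmac_once(struct crypto_shash *tfm,
			     const u8 *key, unsigned int keylen,
			     const u8 *msg, unsigned int msglen, u8 *out)
{
	SHASH_DESC_ON_STACK(desc, tfm);
	int ret;

	ret = crypto_shash_setkey(tfm, key, keylen);
	if (ret)
		return ret;
	desc->tfm = tfm;
	desc->flags = CRYPTO_TFM_REQ_MAY_SLEEP;
	return crypto_shash_digest(desc, msg, msglen, out);
}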
diff --git a/crypto/lzo.c b/crypto/lzo.c
index 4b3e92525dac..c3f3dd9a28c5 100644
--- a/crypto/lzo.c
+++ b/crypto/lzo.c
@@ -32,7 +32,7 @@ static int lzo_init(struct crypto_tfm *tfm)
struct lzo_ctx *ctx = crypto_tfm_ctx(tfm);
ctx->lzo_comp_mem = kmalloc(LZO1X_MEM_COMPRESS,
- GFP_KERNEL | __GFP_NOWARN | __GFP_REPEAT);
+ GFP_KERNEL | __GFP_NOWARN);
if (!ctx->lzo_comp_mem)
ctx->lzo_comp_mem = vmalloc(LZO1X_MEM_COMPRESS);
if (!ctx->lzo_comp_mem)
diff --git a/crypto/testmgr.c b/crypto/testmgr.c
index b86883aedca1..c727fb0cb021 100644
--- a/crypto/testmgr.c
+++ b/crypto/testmgr.c
@@ -35,6 +35,10 @@
#include "internal.h"
+static bool notests;
+module_param(notests, bool, 0644);
+MODULE_PARM_DESC(notests, "disable crypto self-tests");
+
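Usage note (an assumption based on where testmgr.c is linked, not stated in this hunk): testmgr is built into the cryptomgr module, so the parameter would normally be given as a boot option or toggled at runtime, e.g.:

	cryptomgr.notests=1
	/sys/module/cryptomgr/parameters/notests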
#ifdef CONFIG_CRYPTO_MANAGER_DISABLE_TESTS
/* a perfect nop */
@@ -1776,6 +1780,7 @@ static int alg_test_drbg(const struct alg_test_desc *desc, const char *driver,
static int do_test_rsa(struct crypto_akcipher *tfm,
struct akcipher_testvec *vecs)
{
+ char *xbuf[XBUFSIZE];
struct akcipher_request *req;
void *outbuf_enc = NULL;
void *outbuf_dec = NULL;
@@ -1784,9 +1789,12 @@ static int do_test_rsa(struct crypto_akcipher *tfm,
int err = -ENOMEM;
struct scatterlist src, dst, src_tab[2];
+ if (testmgr_alloc_buf(xbuf))
+ return err;
+
req = akcipher_request_alloc(tfm, GFP_KERNEL);
if (!req)
- return err;
+ goto free_xbuf;
init_completion(&result.completion);
@@ -1804,9 +1812,14 @@ static int do_test_rsa(struct crypto_akcipher *tfm,
if (!outbuf_enc)
goto free_req;
+ if (WARN_ON(vecs->m_size > PAGE_SIZE))
+ goto free_all;
+
+ memcpy(xbuf[0], vecs->m, vecs->m_size);
+
sg_init_table(src_tab, 2);
- sg_set_buf(&src_tab[0], vecs->m, 8);
- sg_set_buf(&src_tab[1], vecs->m + 8, vecs->m_size - 8);
+ sg_set_buf(&src_tab[0], xbuf[0], 8);
+ sg_set_buf(&src_tab[1], xbuf[0] + 8, vecs->m_size - 8);
sg_init_one(&dst, outbuf_enc, out_len_max);
akcipher_request_set_crypt(req, src_tab, &dst, vecs->m_size,
out_len_max);
@@ -1825,7 +1838,7 @@ static int do_test_rsa(struct crypto_akcipher *tfm,
goto free_all;
}
/* verify that encrypted message is equal to expected */
- if (memcmp(vecs->c, sg_virt(req->dst), vecs->c_size)) {
+ if (memcmp(vecs->c, outbuf_enc, vecs->c_size)) {
pr_err("alg: rsa: encrypt test failed. Invalid output\n");
err = -EINVAL;
goto free_all;
@@ -1840,7 +1853,13 @@ static int do_test_rsa(struct crypto_akcipher *tfm,
err = -ENOMEM;
goto free_all;
}
- sg_init_one(&src, vecs->c, vecs->c_size);
+
+ if (WARN_ON(vecs->c_size > PAGE_SIZE))
+ goto free_all;
+
+ memcpy(xbuf[0], vecs->c, vecs->c_size);
+
+ sg_init_one(&src, xbuf[0], vecs->c_size);
sg_init_one(&dst, outbuf_dec, out_len_max);
init_completion(&result.completion);
akcipher_request_set_crypt(req, &src, &dst, vecs->c_size, out_len_max);
@@ -1867,6 +1886,8 @@ free_all:
kfree(outbuf_enc);
free_req:
akcipher_request_free(req);
+free_xbuf:
+ testmgr_free_buf(xbuf);
return err;
}
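A plausible reading of the xbuf changes above (the commit message is not shown here, so this rationale is an assumption): scatterlist entries must point at page-backed, linearly mapped memory, which const test vectors are not guaranteed to be, hence the staging copy into testmgr_alloc_buf() pages before building the scatterlists:

	/* safe pattern: stage the vector in a page buffer first */
	memcpy(xbuf[0], vecs->m, vecs->m_size);
	sg_set_buf(&src_tab[0], xbuf[0], 8);
	sg_set_buf(&src_tab[1], xbuf[0] + 8, vecs->m_size - 8);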
@@ -3868,6 +3889,11 @@ int alg_test(const char *driver, const char *alg, u32 type, u32 mask)
int j;
int rc;
+ if (!fips_enabled && notests) {
+ printk_once(KERN_INFO "alg: self-tests disabled\n");
+ return 0;
+ }
+
alg_test_descs_check_order();
if ((type & CRYPTO_ALG_TYPE_MASK) == CRYPTO_ALG_TYPE_CIPHER) {
diff --git a/drivers/Kconfig b/drivers/Kconfig
index d2ac339de85f..e1e2066cecdb 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -114,6 +114,8 @@ source "drivers/rtc/Kconfig"
source "drivers/dma/Kconfig"
+source "drivers/dma-buf/Kconfig"
+
source "drivers/dca/Kconfig"
source "drivers/auxdisplay/Kconfig"
@@ -190,6 +192,8 @@ source "drivers/android/Kconfig"
source "drivers/nvdimm/Kconfig"
+source "drivers/dax/Kconfig"
+
source "drivers/nvmem/Kconfig"
source "drivers/hwtracing/stm/Kconfig"
diff --git a/drivers/Makefile b/drivers/Makefile
index 8f5d076baeb0..0b6f3d60193d 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -66,6 +66,7 @@ obj-$(CONFIG_PARPORT) += parport/
obj-$(CONFIG_NVM) += lightnvm/
obj-y += base/ block/ misc/ mfd/ nfc/
obj-$(CONFIG_LIBNVDIMM) += nvdimm/
+obj-$(CONFIG_DEV_DAX) += dax/
obj-$(CONFIG_DMA_SHARED_BUFFER) += dma-buf/
obj-$(CONFIG_NUBUS) += nubus/
obj-y += macintosh/
diff --git a/drivers/acpi/Kconfig b/drivers/acpi/Kconfig
index 82b96ee8624c..b7e2e776397d 100644
--- a/drivers/acpi/Kconfig
+++ b/drivers/acpi/Kconfig
@@ -5,10 +5,10 @@
menuconfig ACPI
bool "ACPI (Advanced Configuration and Power Interface) Support"
depends on !IA64_HP_SIM
- depends on IA64 || X86 || (ARM64 && EXPERT)
+ depends on IA64 || X86 || ARM64
depends on PCI
select PNP
- default y
+ default y if (IA64 || X86)
help
Advanced Configuration and Power Interface (ACPI) support for
Linux requires an ACPI-compliant platform (hardware/firmware),
@@ -311,12 +311,12 @@ config ACPI_CUSTOM_DSDT
bool
default ACPI_CUSTOM_DSDT_FILE != ""
-config ACPI_INITRD_TABLE_OVERRIDE
- bool "ACPI tables override via initrd"
+config ACPI_TABLE_UPGRADE
+ bool "Allow upgrading ACPI tables via initrd"
depends on BLK_DEV_INITRD && X86
- default n
+ default y
help
- This option provides functionality to override arbitrary ACPI tables
+ This option provides functionality to upgrade arbitrary ACPI tables
via initrd. No functional change if no ACPI tables are passed via
initrd, therefore it's safe to say Y.
See Documentation/acpi/initrd_table_override.txt for details
diff --git a/drivers/acpi/Makefile b/drivers/acpi/Makefile
index edeb2d1d99be..251ce85a66fb 100644
--- a/drivers/acpi/Makefile
+++ b/drivers/acpi/Makefile
@@ -18,7 +18,7 @@ obj-$(CONFIG_ACPI) += acpi.o \
acpica/
# All the builtin files are in the "acpi." module_param namespace.
-acpi-y += osl.o utils.o reboot.o
+acpi-y += osi.o osl.o utils.o reboot.o
acpi-y += nvs.o
# Power management related files
@@ -47,6 +47,7 @@ acpi-$(CONFIG_ARM_AMBA) += acpi_amba.o
acpi-y += int340x_thermal.o
acpi-y += power.o
acpi-y += event.o
+acpi-$(CONFIG_ACPI_REDUCED_HARDWARE_ONLY) += evged.o
acpi-y += sysfs.o
acpi-y += property.o
acpi-$(CONFIG_X86) += acpi_cmos_rtc.o
diff --git a/drivers/acpi/acpi_amba.c b/drivers/acpi/acpi_amba.c
index 2a61b54ab968..7f77c071709a 100644
--- a/drivers/acpi/acpi_amba.c
+++ b/drivers/acpi/acpi_amba.c
@@ -35,8 +35,7 @@ static void amba_register_dummy_clk(void)
if (amba_dummy_clk)
return;
- amba_dummy_clk = clk_register_fixed_rate(NULL, "apb_pclk", NULL,
- CLK_IS_ROOT, 0);
+ amba_dummy_clk = clk_register_fixed_rate(NULL, "apb_pclk", NULL, 0, 0);
clk_register_clkdev(amba_dummy_clk, "apb_pclk", NULL);
}
diff --git a/drivers/acpi/acpi_apd.c b/drivers/acpi/acpi_apd.c
index f245bf35bedb..1daf9c46df8e 100644
--- a/drivers/acpi/acpi_apd.c
+++ b/drivers/acpi/acpi_apd.c
@@ -62,8 +62,7 @@ static int acpi_apd_setup(struct apd_private_data *pdata)
if (dev_desc->fixed_clk_rate) {
clk = clk_register_fixed_rate(&pdata->adev->dev,
dev_name(&pdata->adev->dev),
- NULL, CLK_IS_ROOT,
- dev_desc->fixed_clk_rate);
+ NULL, 0, dev_desc->fixed_clk_rate);
clk_register_clkdev(clk, NULL, dev_name(&pdata->adev->dev));
pdata->clk = clk;
}
diff --git a/drivers/acpi/acpi_dbg.c b/drivers/acpi/acpi_dbg.c
index 15e4604efba7..1f4128487dd4 100644
--- a/drivers/acpi/acpi_dbg.c
+++ b/drivers/acpi/acpi_dbg.c
@@ -265,7 +265,7 @@ static int acpi_aml_write_kern(const char *buf, int len)
char *p;
ret = acpi_aml_lock_write(crc, ACPI_AML_OUT_KERN);
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
return ret;
/* sync tail before inserting logs */
smp_mb();
@@ -286,7 +286,7 @@ static int acpi_aml_readb_kern(void)
char *p;
ret = acpi_aml_lock_read(crc, ACPI_AML_IN_KERN);
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
return ret;
/* sync head before removing cmds */
smp_rmb();
@@ -330,7 +330,7 @@ again:
goto again;
break;
}
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
break;
size += ret;
count -= ret;
@@ -373,7 +373,7 @@ again:
if (ret == 0)
goto again;
}
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
break;
*(msg + size) = (char)ret;
size++;
@@ -526,7 +526,7 @@ static int acpi_aml_open(struct inode *inode, struct file *file)
}
acpi_aml_io.users++;
err_lock:
- if (IS_ERR_VALUE(ret)) {
+ if (ret < 0) {
if (acpi_aml_active_reader == file)
acpi_aml_active_reader = NULL;
}
@@ -587,7 +587,7 @@ static int acpi_aml_read_user(char __user *buf, int len)
char *p;
ret = acpi_aml_lock_read(crc, ACPI_AML_OUT_USER);
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
return ret;
/* sync head before removing logs */
smp_rmb();
@@ -602,7 +602,7 @@ static int acpi_aml_read_user(char __user *buf, int len)
crc->tail = (crc->tail + n) & (ACPI_AML_BUF_SIZE - 1);
ret = n;
out:
- acpi_aml_unlock_fifo(ACPI_AML_OUT_USER, !IS_ERR_VALUE(ret));
+ acpi_aml_unlock_fifo(ACPI_AML_OUT_USER, !ret);
return ret;
}
@@ -634,7 +634,7 @@ again:
goto again;
}
}
- if (IS_ERR_VALUE(ret)) {
+ if (ret < 0) {
if (!acpi_aml_running())
ret = 0;
break;
@@ -657,7 +657,7 @@ static int acpi_aml_write_user(const char __user *buf, int len)
char *p;
ret = acpi_aml_lock_write(crc, ACPI_AML_IN_USER);
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
return ret;
/* sync tail before inserting cmds */
smp_mb();
@@ -672,7 +672,7 @@ static int acpi_aml_write_user(const char __user *buf, int len)
crc->head = (crc->head + n) & (ACPI_AML_BUF_SIZE - 1);
ret = n;
out:
- acpi_aml_unlock_fifo(ACPI_AML_IN_USER, !IS_ERR_VALUE(ret));
+ acpi_aml_unlock_fifo(ACPI_AML_IN_USER, !ret);
return n;
}
@@ -704,7 +704,7 @@ again:
goto again;
}
}
- if (IS_ERR_VALUE(ret)) {
+ if (ret < 0) {
if (!acpi_aml_running())
ret = 0;
break;
diff --git a/drivers/acpi/acpi_video.c b/drivers/acpi/acpi_video.c
index 4361bc98ef4c..3d5b8a099351 100644
--- a/drivers/acpi/acpi_video.c
+++ b/drivers/acpi/acpi_video.c
@@ -191,19 +191,6 @@ struct acpi_video_device_cap {
u8 _DDC:1; /* Return the EDID for this device */
};
-struct acpi_video_brightness_flags {
- u8 _BCL_no_ac_battery_levels:1; /* no AC/Battery levels in _BCL */
- u8 _BCL_reversed:1; /* _BCL package is in a reversed order */
- u8 _BQC_use_index:1; /* _BQC returns an index value */
-};
-
-struct acpi_video_device_brightness {
- int curr;
- int count;
- int *levels;
- struct acpi_video_brightness_flags flags;
-};
-
struct acpi_video_device {
unsigned long device_id;
struct acpi_video_device_flags flags;
@@ -325,7 +312,7 @@ static const struct thermal_cooling_device_ops video_cooling_ops = {
*/
static int
-acpi_video_device_lcd_query_levels(struct acpi_video_device *device,
+acpi_video_device_lcd_query_levels(acpi_handle handle,
union acpi_object **levels)
{
int status;
@@ -335,7 +322,7 @@ acpi_video_device_lcd_query_levels(struct acpi_video_device *device,
*levels = NULL;
- status = acpi_evaluate_object(device->dev->handle, "_BCL", NULL, &buffer);
+ status = acpi_evaluate_object(handle, "_BCL", NULL, &buffer);
if (!ACPI_SUCCESS(status))
return status;
obj = (union acpi_object *)buffer.pointer;
@@ -766,36 +753,28 @@ static int acpi_video_bqc_quirk(struct acpi_video_device *device,
return 0;
}
-
-/*
- * Arg:
- * device : video output device (LCD, CRT, ..)
- *
- * Return Value:
- * Maximum brightness level
- *
- * Allocate and initialize device->brightness.
- */
-
-static int
-acpi_video_init_brightness(struct acpi_video_device *device)
+int acpi_video_get_levels(struct acpi_device *device,
+ struct acpi_video_device_brightness **dev_br)
{
union acpi_object *obj = NULL;
int i, max_level = 0, count = 0, level_ac_battery = 0;
- unsigned long long level, level_old;
union acpi_object *o;
struct acpi_video_device_brightness *br = NULL;
- int result = -EINVAL;
+ int result = 0;
u32 value;
- if (!ACPI_SUCCESS(acpi_video_device_lcd_query_levels(device, &obj))) {
+ if (!ACPI_SUCCESS(acpi_video_device_lcd_query_levels(device->handle,
+ &obj))) {
ACPI_DEBUG_PRINT((ACPI_DB_INFO, "Could not query available "
"LCD brightness level\n"));
+ result = -ENODEV;
goto out;
}
- if (obj->package.count < 2)
+ if (obj->package.count < 2) {
+ result = -EINVAL;
goto out;
+ }
br = kzalloc(sizeof(*br), GFP_KERNEL);
if (!br) {
@@ -861,6 +840,38 @@ acpi_video_init_brightness(struct acpi_video_device *device)
"Found unordered _BCL package"));
br->count = count;
+ *dev_br = br;
+
+out:
+ kfree(obj);
+ return result;
+out_free:
+ kfree(br);
+ goto out;
+}
+EXPORT_SYMBOL(acpi_video_get_levels);
+
+/*
+ * Arg:
+ * device : video output device (LCD, CRT, ..)
+ *
+ * Return Value:
+ * Maximum brightness level
+ *
+ * Allocate and initialize device->brightness.
+ */
+
+static int
+acpi_video_init_brightness(struct acpi_video_device *device)
+{
+ int i, max_level = 0;
+ unsigned long long level, level_old;
+ struct acpi_video_device_brightness *br = NULL;
+ int result = -EINVAL;
+
+ result = acpi_video_get_levels(device->dev, &br);
+ if (result)
+ return result;
device->brightness = br;
/* _BQC uses INDEX while _BCL uses VALUE in some laptops */
@@ -903,17 +914,13 @@ set_level:
goto out_free_levels;
ACPI_DEBUG_PRINT((ACPI_DB_INFO,
- "found %d brightness levels\n", count - 2));
- kfree(obj);
- return result;
+ "found %d brightness levels\n", br->count - 2));
+ return 0;
out_free_levels:
kfree(br->levels);
-out_free:
kfree(br);
-out:
device->brightness = NULL;
- kfree(obj);
return result;
}
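Since acpi_video_get_levels() is now exported, an out-of-file consumer could use it roughly as below (hypothetical caller; the prototype and the br->count/br->levels fields are taken from the hunks above, and the caller owns the returned allocation):

static int example_dump_levels(struct acpi_device *adev)
{
	struct acpi_video_device_brightness *br;
	int ret;

	ret = acpi_video_get_levels(adev, &br);
	if (ret)
		return ret;
	pr_info("%d brightness entries\n", br->count);
	kfree(br->levels);
	kfree(br);
	return 0;
}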
diff --git a/drivers/acpi/acpica/Makefile b/drivers/acpi/acpica/Makefile
index f682374c19f4..227bb7bb19d7 100644
--- a/drivers/acpi/acpica/Makefile
+++ b/drivers/acpi/acpica/Makefile
@@ -43,6 +43,7 @@ acpi-y += \
evxfregn.o
acpi-y += \
+ exconcat.o \
exconfig.o \
exconvrt.o \
excreate.o \
@@ -149,6 +150,7 @@ acpi-y += \
acpi-y += \
utaddress.o \
utalloc.o \
+ utascii.o \
utbuffer.o \
utcopy.o \
utexcep.o \
diff --git a/drivers/acpi/acpica/acdebug.h b/drivers/acpi/acpica/acdebug.h
index 993af9eb007a..f6404ea928cb 100644
--- a/drivers/acpi/acpica/acdebug.h
+++ b/drivers/acpi/acpica/acdebug.h
@@ -53,7 +53,7 @@
#define ACPI_DEBUG_BUFFER_SIZE 0x4000 /* 16K buffer for return objects */
struct acpi_db_command_info {
- char *name; /* Command Name */
+ const char *name; /* Command Name */
u8 min_args; /* Minimum arguments required */
};
@@ -64,7 +64,7 @@ struct acpi_db_command_help {
};
struct acpi_db_argument_info {
- char *name; /* Argument Name */
+ const char *name; /* Argument Name */
};
struct acpi_db_execute_walk {
@@ -196,7 +196,7 @@ ACPI_DBR_DEPENDENT_RETURN_VOID(void
acpi_walk_state
*walk_state))
- acpi_status acpi_db_display_all_methods(char *display_count_arg);
+acpi_status acpi_db_display_all_methods(char *display_count_arg);
void acpi_db_display_arguments(void);
@@ -220,7 +220,7 @@ ACPI_DBR_DEPENDENT_RETURN_VOID(void
* dbexec - debugger control method execution
*/
void
-acpi_db_execute(char *name, char **args, acpi_object_type * types, u32 flags);
+acpi_db_execute(char *name, char **args, acpi_object_type *types, u32 flags);
void
acpi_db_create_execution_threads(char *num_threads_arg,
@@ -271,7 +271,7 @@ void ACPI_SYSTEM_XFACE acpi_db_execute_thread(void *context);
acpi_status acpi_db_user_commands(void);
char *acpi_db_get_next_token(char *string,
- char **next, acpi_object_type * return_type);
+ char **next, acpi_object_type *return_type);
/*
* dbobject
diff --git a/drivers/acpi/acpica/acevents.h b/drivers/acpi/acpica/acevents.h
index 010cf81bada9..77af91cf46d4 100644
--- a/drivers/acpi/acpica/acevents.h
+++ b/drivers/acpi/acpica/acevents.h
@@ -72,6 +72,7 @@ acpi_status acpi_ev_init_global_lock_handler(void);
ACPI_HW_DEPENDENT_RETURN_OK(acpi_status
acpi_ev_acquire_global_lock(u16 timeout))
ACPI_HW_DEPENDENT_RETURN_OK(acpi_status acpi_ev_release_global_lock(void))
+
acpi_status acpi_ev_remove_global_lock_handler(void);
/*
@@ -198,8 +199,6 @@ void
acpi_ev_detach_region(union acpi_operand_object *region_obj,
u8 acpi_ns_is_locked);
-void acpi_ev_associate_reg_method(union acpi_operand_object *region_obj);
-
void
acpi_ev_execute_reg_methods(struct acpi_namespace_node *node,
acpi_adr_space_type space_id, u32 function);
diff --git a/drivers/acpi/acpica/acglobal.h b/drivers/acpi/acpica/acglobal.h
index 51b073b68f16..fded776236e2 100644
--- a/drivers/acpi/acpica/acglobal.h
+++ b/drivers/acpi/acpica/acglobal.h
@@ -187,6 +187,8 @@ extern const char *acpi_gbl_sleep_state_names[ACPI_S_STATE_COUNT];
extern const char *acpi_gbl_lowest_dstate_names[ACPI_NUM_sx_w_METHODS];
extern const char *acpi_gbl_highest_dstate_names[ACPI_NUM_sx_d_METHODS];
extern const char *acpi_gbl_region_types[ACPI_NUM_PREDEFINED_REGIONS];
+extern const char acpi_gbl_lower_hex_digits[];
+extern const char acpi_gbl_upper_hex_digits[];
extern const struct acpi_opcode_info acpi_gbl_aml_op_info[AML_NUM_OPCODES];
#ifdef ACPI_DBG_TRACK_ALLOCATIONS
@@ -361,6 +363,15 @@ ACPI_GLOBAL(u32, acpi_gbl_num_objects);
#endif /* ACPI_DEBUGGER */
+#if defined (ACPI_DISASSEMBLER) || defined (ACPI_ASL_COMPILER)
+
+ACPI_GLOBAL(const char, *acpi_gbl_pld_panel_list[]);
+ACPI_GLOBAL(const char, *acpi_gbl_pld_vertical_position_list[]);
+ACPI_GLOBAL(const char, *acpi_gbl_pld_horizontal_position_list[]);
+ACPI_GLOBAL(const char, *acpi_gbl_pld_shape_list[]);
+
+#endif
+
/*****************************************************************************
*
* Application globals
diff --git a/drivers/acpi/acpica/acinterp.h b/drivers/acpi/acpica/acinterp.h
index bae1a35c345f..7ead235555cf 100644
--- a/drivers/acpi/acpica/acinterp.h
+++ b/drivers/acpi/acpica/acinterp.h
@@ -67,7 +67,7 @@
typedef const struct acpi_exdump_info {
u8 opcode;
u8 offset;
- char *name;
+ const char *name;
} acpi_exdump_info;
@@ -370,7 +370,7 @@ acpi_ex_resolve_to_value(union acpi_operand_object **stack_ptr,
acpi_status
acpi_ex_resolve_multiple(struct acpi_walk_state *walk_state,
union acpi_operand_object *operand,
- acpi_object_type * return_type,
+ acpi_object_type *return_type,
union acpi_operand_object **return_desc);
/*
diff --git a/drivers/acpi/acpica/aclocal.h b/drivers/acpi/acpica/aclocal.h
index 9562a10a1a18..13331d70dea0 100644
--- a/drivers/acpi/acpica/aclocal.h
+++ b/drivers/acpi/acpica/aclocal.h
@@ -278,7 +278,7 @@ struct acpi_create_field_info {
};
typedef
-acpi_status(*acpi_internal_method) (struct acpi_walk_state * walk_state);
+acpi_status (*acpi_internal_method) (struct acpi_walk_state * walk_state);
/*
* Bitmapped ACPI types. Used internally only
@@ -395,11 +395,12 @@ union acpi_predefined_info {
/* Return object auto-repair info */
-typedef acpi_status(*acpi_object_converter) (struct acpi_namespace_node * scope,
- union acpi_operand_object
- *original_object,
- union acpi_operand_object
- **converted_object);
+typedef acpi_status (*acpi_object_converter) (struct acpi_namespace_node *
+ scope,
+ union acpi_operand_object *
+ original_object,
+ union acpi_operand_object **
+ converted_object);
struct acpi_simple_repair_info {
char name[ACPI_NAME_SIZE];
@@ -539,10 +540,10 @@ struct acpi_gpe_device_info {
struct acpi_namespace_node *gpe_device;
};
-typedef acpi_status(*acpi_gpe_callback) (struct acpi_gpe_xrupt_info *
- gpe_xrupt_info,
- struct acpi_gpe_block_info *gpe_block,
- void *context);
+typedef acpi_status (*acpi_gpe_callback) (struct acpi_gpe_xrupt_info *
+ gpe_xrupt_info,
+ struct acpi_gpe_block_info *
+ gpe_block, void *context);
/* Information about each particular fixed event */
@@ -657,10 +658,11 @@ struct acpi_result_values {
};
typedef
-acpi_status(*acpi_parse_downwards) (struct acpi_walk_state * walk_state,
- union acpi_parse_object ** out_op);
+acpi_status (*acpi_parse_downwards) (struct acpi_walk_state * walk_state,
+ union acpi_parse_object ** out_op);
-typedef acpi_status(*acpi_parse_upwards) (struct acpi_walk_state * walk_state);
+typedef
+acpi_status (*acpi_parse_upwards) (struct acpi_walk_state * walk_state);
/* Global handlers for AML Notifies */
@@ -700,7 +702,8 @@ union acpi_generic_state {
*
****************************************************************************/
-typedef acpi_status(*acpi_execute_op) (struct acpi_walk_state * walk_state);
+typedef
+acpi_status (*acpi_execute_op) (struct acpi_walk_state * walk_state);
/* Address Range info block */
@@ -853,24 +856,24 @@ struct acpi_parse_state {
/* Parse object flags */
-#define ACPI_PARSEOP_GENERIC 0x01
-#define ACPI_PARSEOP_NAMED 0x02
-#define ACPI_PARSEOP_DEFERRED 0x04
-#define ACPI_PARSEOP_BYTELIST 0x08
-#define ACPI_PARSEOP_IN_STACK 0x10
-#define ACPI_PARSEOP_TARGET 0x20
-#define ACPI_PARSEOP_IN_CACHE 0x80
+#define ACPI_PARSEOP_GENERIC 0x01
+#define ACPI_PARSEOP_NAMED_OBJECT 0x02
+#define ACPI_PARSEOP_DEFERRED 0x04
+#define ACPI_PARSEOP_BYTELIST 0x08
+#define ACPI_PARSEOP_IN_STACK 0x10
+#define ACPI_PARSEOP_TARGET 0x20
+#define ACPI_PARSEOP_IN_CACHE 0x80
/* Parse object disasm_flags */
-#define ACPI_PARSEOP_IGNORE 0x01
-#define ACPI_PARSEOP_PARAMLIST 0x02
-#define ACPI_PARSEOP_EMPTY_TERMLIST 0x04
-#define ACPI_PARSEOP_PREDEF_CHECKED 0x08
-#define ACPI_PARSEOP_CLOSING_PAREN 0x10
-#define ACPI_PARSEOP_COMPOUND 0x20
-#define ACPI_PARSEOP_ASSIGNMENT 0x40
-#define ACPI_PARSEOP_ELSEIF 0x80
+#define ACPI_PARSEOP_IGNORE 0x01
+#define ACPI_PARSEOP_PARAMETER_LIST 0x02
+#define ACPI_PARSEOP_EMPTY_TERMLIST 0x04
+#define ACPI_PARSEOP_PREDEFINED_CHECKED 0x08
+#define ACPI_PARSEOP_CLOSING_PAREN 0x10
+#define ACPI_PARSEOP_COMPOUND_ASSIGNMENT 0x20
+#define ACPI_PARSEOP_ASSIGNMENT 0x40
+#define ACPI_PARSEOP_ELSEIF 0x80
/*****************************************************************************
*
@@ -1096,6 +1099,7 @@ struct acpi_external_list {
#define ACPI_EXT_ORIGIN_FROM_FILE 0x02 /* External came from a file */
#define ACPI_EXT_INTERNAL_PATH_ALLOCATED 0x04 /* Deallocate internal path on completion */
#define ACPI_EXT_EXTERNAL_EMITTED 0x08 /* External() statement has been emitted */
+#define ACPI_EXT_ORIGIN_FROM_OPCODE 0x10 /* External came from a External() opcode */
struct acpi_external_file {
char *path;
diff --git a/drivers/acpi/acpica/acmacros.h b/drivers/acpi/acpica/acmacros.h
index 411c18b7d541..a3b95431b7c5 100644
--- a/drivers/acpi/acpica/acmacros.h
+++ b/drivers/acpi/acpica/acmacros.h
@@ -260,14 +260,31 @@
#define ACPI_IS_MISALIGNED(value) (((acpi_size) value) & (sizeof(acpi_size)-1))
+/* Generic (power-of-two) rounding */
+
+#define ACPI_IS_ALIGNED(a, s) (((a) & ((s) - 1)) == 0)
+#define ACPI_IS_POWER_OF_TWO(a) ACPI_IS_ALIGNED(a, a)
+
/*
* Bitmask creation
* Bit positions start at zero.
* MASK_BITS_ABOVE creates a mask starting AT the position and above
* MASK_BITS_BELOW creates a mask starting one bit BELOW the position
+ * MASK_BITS_ABOVE/BELOW accepts a bit offset to create a mask
+ * MASK_BITS_ABOVE/BELOW_32/64 accepts a bit width to create a mask
+ * Note: The ACPI_INTEGER_BIT_SIZE check is used to bypass compiler
+ * differences with the shift operator
*/
#define ACPI_MASK_BITS_ABOVE(position) (~((ACPI_UINT64_MAX) << ((u32) (position))))
#define ACPI_MASK_BITS_BELOW(position) ((ACPI_UINT64_MAX) << ((u32) (position)))
+#define ACPI_MASK_BITS_ABOVE_32(width) ((u32) ACPI_MASK_BITS_ABOVE(width))
+#define ACPI_MASK_BITS_BELOW_32(width) ((u32) ACPI_MASK_BITS_BELOW(width))
+#define ACPI_MASK_BITS_ABOVE_64(width) ((width) == ACPI_INTEGER_BIT_SIZE ? \
+ ACPI_UINT64_MAX : \
+ ACPI_MASK_BITS_ABOVE(width))
+#define ACPI_MASK_BITS_BELOW_64(width) ((width) == ACPI_INTEGER_BIT_SIZE ? \
+ (u64) 0 : \
+ ACPI_MASK_BITS_BELOW(width))
/* Bitfields within ACPI registers */
@@ -283,10 +300,10 @@
/* Generic bitfield macros and masks */
#define ACPI_GET_BITS(source_ptr, position, mask) \
- ((*source_ptr >> position) & mask)
+ ((*(source_ptr) >> (position)) & (mask))
#define ACPI_SET_BITS(target_ptr, position, mask, value) \
- (*target_ptr |= ((value & mask) << position))
+ (*(target_ptr) |= (((value) & (mask)) << (position)))
#define ACPI_1BIT_MASK 0x00000001
#define ACPI_2BIT_MASK 0x00000003
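Two quick worked notes on the macro changes above (editorial; the numeric expansions follow directly from the definitions):

/* ACPI_MASK_BITS_ABOVE(4)  == ~(UINT64_MAX << 4) == 0x000000000000000F   */
/* ACPI_MASK_BITS_BELOW(4)  ==  (UINT64_MAX << 4) == 0xFFFFFFFFFFFFFFF0   */
/* The _64 variants special-case width == 64 (ABOVE_64 yields UINT64_MAX,
 * BELOW_64 yields 0) because shifting a 64-bit value by 64 is undefined
 * behaviour in C.                                                         */
/* The added parentheses in ACPI_GET_BITS()/ACPI_SET_BITS() matter for
 * compound arguments, e.g. a mask of (M1 | M2): the old expansion
 * "... & M1 | M2" would bind as "(... & M1) | M2".                        */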
diff --git a/drivers/acpi/acpica/acnamesp.h b/drivers/acpi/acpica/acnamesp.h
index 022d69cb345a..f33a4ba8e0cb 100644
--- a/drivers/acpi/acpica/acnamesp.h
+++ b/drivers/acpi/acpica/acnamesp.h
@@ -206,9 +206,10 @@ void acpi_ns_dump_tables(acpi_handle search_base, u32 max_depth);
void acpi_ns_dump_entry(acpi_handle handle, u32 debug_level);
void
-acpi_ns_dump_pathname(acpi_handle handle, char *msg, u32 level, u32 component);
+acpi_ns_dump_pathname(acpi_handle handle,
+ const char *msg, u32 level, u32 component);
-void acpi_ns_print_pathname(u32 num_segments, char *pathname);
+void acpi_ns_print_pathname(u32 num_segments, const char *pathname);
acpi_status
acpi_ns_dump_one_object(acpi_handle obj_handle,
diff --git a/drivers/acpi/acpica/acparser.h b/drivers/acpi/acpica/acparser.h
index 7da639d62416..fc305775c3d7 100644
--- a/drivers/acpi/acpica/acparser.h
+++ b/drivers/acpi/acpica/acparser.h
@@ -139,7 +139,7 @@ acpi_ps_complete_final_op(struct acpi_walk_state *walk_state,
*/
const struct acpi_opcode_info *acpi_ps_get_opcode_info(u16 opcode);
-char *acpi_ps_get_opcode_name(u16 opcode);
+const char *acpi_ps_get_opcode_name(u16 opcode);
u8 acpi_ps_get_argument_count(u32 op_type);
diff --git a/drivers/acpi/acpica/acpredef.h b/drivers/acpi/acpica/acpredef.h
index 5faeab41e302..888440b2cf2e 100644
--- a/drivers/acpi/acpica/acpredef.h
+++ b/drivers/acpi/acpica/acpredef.h
@@ -129,7 +129,8 @@ enum acpi_return_package_types {
ACPI_PTYPE2_REV_FIXED = 9,
ACPI_PTYPE2_FIX_VAR = 10,
ACPI_PTYPE2_VAR_VAR = 11,
- ACPI_PTYPE2_UUID_PAIR = 12
+ ACPI_PTYPE2_UUID_PAIR = 12,
+ ACPI_PTYPE_CUSTOM = 13
};
/* Support macros for users of the predefined info table */
@@ -340,7 +341,7 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = {
{{"_BIX", METHOD_0ARGS,
METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Fixed-length (16 Int),(4 Str) */
- PACKAGE_INFO(ACPI_PTYPE1_FIXED, ACPI_RTYPE_INTEGER, 16,
+ PACKAGE_INFO(ACPI_PTYPE_CUSTOM, ACPI_RTYPE_INTEGER, 16,
ACPI_RTYPE_STRING, 4, 0),
{{"_BLT",
@@ -523,6 +524,9 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = {
METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Fixed-length (4 Int) */
PACKAGE_INFO(ACPI_PTYPE1_FIXED, ACPI_RTYPE_INTEGER, 4, 0, 0, 0),
+ {{"_FIT", METHOD_0ARGS,
+ METHOD_RETURNS(ACPI_RTYPE_BUFFER)}}, /* ACPI 6.0 */
+
{{"_FIX", METHOD_0ARGS,
METHOD_RETURNS(ACPI_RTYPE_PACKAGE)}}, /* Variable-length (Ints) */
PACKAGE_INFO(ACPI_PTYPE1_VAR, ACPI_RTYPE_INTEGER, 0, 0, 0, 0),
@@ -1053,6 +1057,12 @@ const union acpi_predefined_info acpi_gbl_predefined_methods[] = {
METHOD_RETURNS(ACPI_RTYPE_INTEGER | ACPI_RTYPE_STRING |
ACPI_RTYPE_BUFFER)}},
+ {{"_WPC", METHOD_0ARGS,
+ METHOD_RETURNS(ACPI_RTYPE_INTEGER)}}, /* ACPI 6.1 */
+
+ {{"_WPP", METHOD_0ARGS,
+ METHOD_RETURNS(ACPI_RTYPE_INTEGER)}}, /* ACPI 6.1 */
+
PACKAGE_INFO(0, 0, 0, 0, 0, 0) /* Table terminator */
};
#else
diff --git a/drivers/acpi/acpica/acresrc.h b/drivers/acpi/acpica/acresrc.h
index 5dd58beafa5c..63da1e37caba 100644
--- a/drivers/acpi/acpica/acresrc.h
+++ b/drivers/acpi/acpica/acresrc.h
@@ -124,7 +124,7 @@ typedef enum {
typedef const struct acpi_rsdump_info {
u8 opcode;
u8 offset;
- char *name;
+ const char *name;
const char **pointer;
} acpi_rsdump_info;
@@ -209,7 +209,7 @@ acpi_rs_get_prs_method_data(struct acpi_namespace_node *node,
acpi_status
acpi_rs_get_method_data(acpi_handle handle,
- char *path, struct acpi_buffer *ret_buffer);
+ const char *path, struct acpi_buffer *ret_buffer);
acpi_status
acpi_rs_set_srs_method_data(struct acpi_namespace_node *node,
@@ -223,16 +223,16 @@ acpi_rs_get_aei_method_data(struct acpi_namespace_node *node,
* rscalc
*/
acpi_status
-acpi_rs_get_list_length(u8 * aml_buffer,
- u32 aml_buffer_length, acpi_size * size_needed);
+acpi_rs_get_list_length(u8 *aml_buffer,
+ u32 aml_buffer_length, acpi_size *size_needed);
acpi_status
acpi_rs_get_aml_length(struct acpi_resource *resource_list,
- acpi_size resource_list_size, acpi_size * size_needed);
+ acpi_size resource_list_size, acpi_size *size_needed);
acpi_status
acpi_rs_get_pci_routing_table_length(union acpi_operand_object *package_object,
- acpi_size * buffer_size_needed);
+ acpi_size *buffer_size_needed);
acpi_status
acpi_rs_convert_aml_to_resources(u8 * aml,
diff --git a/drivers/acpi/acpica/acstruct.h b/drivers/acpi/acpica/acstruct.h
index b3b386e0b119..6235642e31d3 100644
--- a/drivers/acpi/acpica/acstruct.h
+++ b/drivers/acpi/acpica/acstruct.h
@@ -184,7 +184,7 @@ struct acpi_evaluate_info {
/* The first 3 elements are passed by the caller to acpi_ns_evaluate */
struct acpi_namespace_node *prefix_node; /* Input: starting node */
- char *relative_pathname; /* Input: path relative to prefix_node */
+ const char *relative_pathname; /* Input: path relative to prefix_node */
union acpi_operand_object **parameters; /* Input: argument list */
struct acpi_namespace_node *node; /* Resolved node (prefix_node:relative_pathname) */
diff --git a/drivers/acpi/acpica/actables.h b/drivers/acpi/acpica/actables.h
index 848ad3ac938f..cd5a135fcf29 100644
--- a/drivers/acpi/acpica/actables.h
+++ b/drivers/acpi/acpica/actables.h
@@ -161,8 +161,6 @@ acpi_tb_install_fixed_table(acpi_physical_address address,
acpi_status acpi_tb_parse_root_table(acpi_physical_address rsdp_address);
-u8 acpi_is_valid_signature(char *signature);
-
/*
* tbxfload
*/
diff --git a/drivers/acpi/acpica/acutils.h b/drivers/acpi/acpica/acutils.h
index e43ab6f2ad7e..a7dbb2b882cf 100644
--- a/drivers/acpi/acpica/acutils.h
+++ b/drivers/acpi/acpica/acutils.h
@@ -136,16 +136,16 @@ extern const char *acpi_gbl_pt_decode[];
#define ACPI_SMALL_VARIABLE_LENGTH 3
typedef
-acpi_status(*acpi_walk_aml_callback) (u8 *aml,
- u32 length,
- u32 offset,
- u8 resource_index, void **context);
+acpi_status (*acpi_walk_aml_callback) (u8 *aml,
+ u32 length,
+ u32 offset,
+ u8 resource_index, void **context);
typedef
-acpi_status(*acpi_pkg_callback) (u8 object_type,
- union acpi_operand_object *source_object,
- union acpi_generic_state * state,
- void *context);
+acpi_status (*acpi_pkg_callback) (u8 object_type,
+ union acpi_operand_object * source_object,
+ union acpi_generic_state * state,
+ void *context);
struct acpi_pkg_info {
u8 *free_space;
@@ -167,6 +167,15 @@ struct acpi_pkg_info {
#define DB_QWORD_DISPLAY 8
/*
+ * utascii - ASCII utilities
+ */
+u8 acpi_ut_valid_nameseg(char *signature);
+
+u8 acpi_ut_valid_name_char(char character, u32 position);
+
+void acpi_ut_check_and_repair_ascii(u8 *name, char *repaired_name, u32 count);
+
+/*
* utnonansi - Non-ANSI C library functions
*/
void acpi_ut_strupr(char *src_string);
@@ -175,7 +184,14 @@ void acpi_ut_strlwr(char *src_string);
int acpi_ut_stricmp(char *string1, char *string2);
-acpi_status acpi_ut_strtoul64(char *string, u32 base, u64 *ret_integer);
+acpi_status
+acpi_ut_strtoul64(char *string,
+ u32 base, u32 max_integer_byte_width, u64 *ret_integer);
+
+/* Values for max_integer_byte_width above */
+
+#define ACPI_MAX32_BYTE_WIDTH 4
+#define ACPI_MAX64_BYTE_WIDTH 8
/*
* utglobal - Global data structures and procedures
@@ -266,7 +282,8 @@ acpi_ut_trace(u32 line_number,
void
acpi_ut_trace_ptr(u32 line_number,
const char *function_name,
- const char *module_name, u32 component_id, void *pointer);
+ const char *module_name,
+ u32 component_id, const void *pointer);
void
acpi_ut_trace_u32(u32 line_number,
@@ -276,7 +293,8 @@ acpi_ut_trace_u32(u32 line_number,
void
acpi_ut_trace_str(u32 line_number,
const char *function_name,
- const char *module_name, u32 component_id, char *string);
+ const char *module_name,
+ u32 component_id, const char *string);
void
acpi_ut_exit(u32 line_number,
@@ -335,12 +353,12 @@ void acpi_ut_delete_internal_object_list(union acpi_operand_object **obj_list);
*/
acpi_status
acpi_ut_evaluate_object(struct acpi_namespace_node *prefix_node,
- char *path,
+ const char *path,
u32 expected_return_btypes,
union acpi_operand_object **return_desc);
acpi_status
-acpi_ut_evaluate_numeric_object(char *object_name,
+acpi_ut_evaluate_numeric_object(const char *object_name,
struct acpi_namespace_node *device_node,
u64 *value);
@@ -415,7 +433,7 @@ union acpi_operand_object *acpi_ut_create_buffer_object(acpi_size buffer_size);
union acpi_operand_object *acpi_ut_create_string_object(acpi_size string_size);
acpi_status
-acpi_ut_get_object_size(union acpi_operand_object *obj, acpi_size * obj_length);
+acpi_ut_get_object_size(union acpi_operand_object *obj, acpi_size *obj_length);
/*
* utosi - Support for the _OSI predefined control method
@@ -526,15 +544,15 @@ void acpi_ut_set_integer_width(u8 revision);
void
acpi_ut_display_init_pathname(u8 type,
struct acpi_namespace_node *obj_handle,
- char *path);
+ const char *path);
#endif
/*
* utownerid - Support for Table/Method Owner IDs
*/
-acpi_status acpi_ut_allocate_owner_id(acpi_owner_id * owner_id);
+acpi_status acpi_ut_allocate_owner_id(acpi_owner_id *owner_id);
-void acpi_ut_release_owner_id(acpi_owner_id * owner_id);
+void acpi_ut_release_owner_id(acpi_owner_id *owner_id);
/*
* utresrc
@@ -570,10 +588,6 @@ void acpi_ut_print_string(char *string, u16 max_length);
void ut_convert_backslashes(char *pathname);
#endif
-u8 acpi_ut_valid_acpi_name(char *name);
-
-u8 acpi_ut_valid_acpi_char(char character, u32 position);
-
void acpi_ut_repair_name(char *name);
#if defined (ACPI_DEBUGGER) || defined (ACPI_APPLICATION)
@@ -628,7 +642,7 @@ void acpi_ut_dump_allocation_info(void);
void acpi_ut_dump_allocations(u32 component, const char *module);
acpi_status
-acpi_ut_create_list(char *list_name,
+acpi_ut_create_list(const char *list_name,
u16 object_size, struct acpi_memory_list **return_cache);
#endif /* ACPI_DBG_TRACK_ALLOCATIONS */
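The acpi_ut_strtoul64() prototype above gains an explicit max_integer_byte_width argument (ACPI_MAX32_BYTE_WIDTH or ACPI_MAX64_BYTE_WIDTH); the callers updated later in this patch (dbconvert.c, exconvrt.c) simply pass acpi_gbl_integer_byte_width. A minimal sketch of the new call pattern, assuming the usual ACPICA includes and with the wrapper name invented for illustration:

        #include <acpi/acpi.h>
        #include "accommon.h"

        /* Illustrative wrapper: parse a hex string into an ACPI integer,
         * truncated to the currently configured integer width (4 or 8 bytes). */
        static acpi_status example_parse_hex(char *string, u64 *out_value)
        {
                return acpi_ut_strtoul64(string, 16, acpi_gbl_integer_byte_width,
                                         out_value);
        }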
diff --git a/drivers/acpi/acpica/dbcmds.c b/drivers/acpi/acpica/dbcmds.c
index 772178c96ccf..62bd446535f5 100644
--- a/drivers/acpi/acpica/dbcmds.c
+++ b/drivers/acpi/acpica/dbcmds.c
@@ -738,9 +738,9 @@ acpi_dm_test_resource_conversion(struct acpi_namespace_node *node, char *name)
original_aml = return_buffer.pointer;
acpi_dm_compare_aml_resources(original_aml->buffer.pointer,
- (acpi_rsdesc_size) original_aml->buffer.
+ (acpi_rsdesc_size)original_aml->buffer.
length, new_aml.pointer,
- (acpi_rsdesc_size) new_aml.length);
+ (acpi_rsdesc_size)new_aml.length);
/* Cleanup and exit */
diff --git a/drivers/acpi/acpica/dbconvert.c b/drivers/acpi/acpica/dbconvert.c
index 68f4e0f4b095..7cd07b27f758 100644
--- a/drivers/acpi/acpica/dbconvert.c
+++ b/drivers/acpi/acpica/dbconvert.c
@@ -194,7 +194,7 @@ acpi_db_convert_to_buffer(char *string, union acpi_object *object)
*
******************************************************************************/
-acpi_status acpi_db_convert_to_package(char *string, union acpi_object * object)
+acpi_status acpi_db_convert_to_package(char *string, union acpi_object *object)
{
char *this;
char *next;
@@ -252,7 +252,7 @@ acpi_status acpi_db_convert_to_package(char *string, union acpi_object * object)
acpi_status
acpi_db_convert_to_object(acpi_object_type type,
- char *string, union acpi_object * object)
+ char *string, union acpi_object *object)
{
acpi_status status = AE_OK;
@@ -277,7 +277,9 @@ acpi_db_convert_to_object(acpi_object_type type,
default:
object->type = ACPI_TYPE_INTEGER;
- status = acpi_ut_strtoul64(string, 16, &object->integer.value);
+ status =
+ acpi_ut_strtoul64(string, 16, acpi_gbl_integer_byte_width,
+ &object->integer.value);
break;
}
diff --git a/drivers/acpi/acpica/dbexec.c b/drivers/acpi/acpica/dbexec.c
index c814855376e2..12df2915ad74 100644
--- a/drivers/acpi/acpica/dbexec.c
+++ b/drivers/acpi/acpica/dbexec.c
@@ -361,7 +361,7 @@ acpi_db_execution_walk(acpi_handle obj_handle,
******************************************************************************/
void
-acpi_db_execute(char *name, char **args, acpi_object_type * types, u32 flags)
+acpi_db_execute(char *name, char **args, acpi_object_type *types, u32 flags)
{
acpi_status status;
struct acpi_buffer return_obj;
diff --git a/drivers/acpi/acpica/dbinput.c b/drivers/acpi/acpica/dbinput.c
index 417c02a89915..7cd5d2e022da 100644
--- a/drivers/acpi/acpica/dbinput.c
+++ b/drivers/acpi/acpica/dbinput.c
@@ -57,12 +57,12 @@ static u32 acpi_db_get_line(char *input_buffer);
static u32 acpi_db_match_command(char *user_command);
-static void acpi_db_display_command_info(char *command, u8 display_all);
+static void acpi_db_display_command_info(const char *command, u8 display_all);
static void acpi_db_display_help(char *command);
static u8
-acpi_db_match_command_help(char *command,
+acpi_db_match_command_help(const char *command,
const struct acpi_db_command_help *help);
/*
@@ -348,7 +348,7 @@ static const struct acpi_db_command_help acpi_gbl_db_command_help[] = {
******************************************************************************/
static u8
-acpi_db_match_command_help(char *command,
+acpi_db_match_command_help(const char *command,
const struct acpi_db_command_help *help)
{
char *invocation = help->invocation;
@@ -402,7 +402,7 @@ acpi_db_match_command_help(char *command,
*
******************************************************************************/
-static void acpi_db_display_command_info(char *command, u8 display_all)
+static void acpi_db_display_command_info(const char *command, u8 display_all)
{
const struct acpi_db_command_help *next;
u8 matched;
@@ -466,7 +466,7 @@ static void acpi_db_display_help(char *command)
******************************************************************************/
char *acpi_db_get_next_token(char *string,
- char **next, acpi_object_type * return_type)
+ char **next, acpi_object_type *return_type)
{
char *start;
u32 depth;
@@ -656,8 +656,9 @@ static u32 acpi_db_match_command(char *user_command)
}
for (i = CMD_FIRST_VALID; acpi_gbl_db_commands[i].name; i++) {
- if (strstr(acpi_gbl_db_commands[i].name, user_command) ==
- acpi_gbl_db_commands[i].name) {
+ if (strstr
+ (ACPI_CAST_PTR(char, acpi_gbl_db_commands[i].name),
+ user_command) == acpi_gbl_db_commands[i].name) {
return (i);
}
}
@@ -683,8 +684,8 @@ static u32 acpi_db_match_command(char *user_command)
acpi_status
acpi_db_command_dispatch(char *input_buffer,
- struct acpi_walk_state * walk_state,
- union acpi_parse_object * op)
+ struct acpi_walk_state *walk_state,
+ union acpi_parse_object *op)
{
u32 temp;
u32 command_index;
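The command matching above keeps the existing strstr(name, user_command) == name idiom as a prefix test; the ACPI_CAST_PTR is added alongside the const cleanups made elsewhere in this series. A standalone restatement of the idiom, with names invented for illustration:

        #include <string.h>

        /* Returns 1 when 'cmd' is a prefix of 'name', mirroring the
         * debugger's strstr(name, cmd) == name test. */
        static int is_prefix(const char *name, const char *cmd)
        {
                return strstr(name, cmd) == name;
        }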
diff --git a/drivers/acpi/acpica/dbnames.c b/drivers/acpi/acpica/dbnames.c
index 3c23b5a1079b..8667f14d535e 100644
--- a/drivers/acpi/acpica/dbnames.c
+++ b/drivers/acpi/acpica/dbnames.c
@@ -285,7 +285,7 @@ void acpi_db_dump_namespace_by_owner(char *owner_arg, char *depth_arg)
u32 max_depth = ACPI_UINT32_MAX;
acpi_owner_id owner_id;
- owner_id = (acpi_owner_id) strtoul(owner_arg, NULL, 0);
+ owner_id = (acpi_owner_id)strtoul(owner_arg, NULL, 0);
/* Now we can check for the depth argument */
@@ -709,7 +709,7 @@ acpi_db_integrity_walk(acpi_handle obj_handle,
return (AE_OK);
}
- if (!acpi_ut_valid_acpi_name(node->name.ascii)) {
+ if (!acpi_ut_valid_nameseg(node->name.ascii)) {
acpi_os_printf("Invalid AcpiName for Node %p\n", node);
return (AE_OK);
}
diff --git a/drivers/acpi/acpica/dbutils.c b/drivers/acpi/acpica/dbutils.c
index b37a2c77b86b..ae80106d1000 100644
--- a/drivers/acpi/acpica/dbutils.c
+++ b/drivers/acpi/acpica/dbutils.c
@@ -56,8 +56,6 @@ acpi_status acpi_db_second_pass_parse(union acpi_parse_object *root);
void acpi_db_dump_buffer(u32 address);
#endif
-static char *gbl_hex_to_ascii = "0123456789ABCDEF";
-
/*******************************************************************************
*
* FUNCTION: acpi_db_match_argument
@@ -82,8 +80,9 @@ acpi_db_match_argument(char *user_argument,
}
for (i = 0; arguments[i].name; i++) {
- if (strstr(arguments[i].name, user_argument) ==
- arguments[i].name) {
+ if (strstr(ACPI_CAST_PTR(char, arguments[i].name),
+ ACPI_CAST_PTR(char,
+ user_argument)) == arguments[i].name) {
return (i);
}
}
@@ -339,7 +338,7 @@ void acpi_db_uint32_to_hex_string(u32 value, char *buffer)
buffer[8] = '\0';
for (i = 7; i >= 0; i--) {
- buffer[i] = gbl_hex_to_ascii[value & 0x0F];
+ buffer[i] = acpi_gbl_upper_hex_digits[value & 0x0F];
value = value >> 4;
}
}
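The last dbutils.c hunk swaps the file-local gbl_hex_to_ascii table for the shared acpi_gbl_upper_hex_digits string; the conversion loop itself is unchanged. A standalone restatement of that nibble-at-a-time conversion, with names invented for illustration:

        /* Writes the 8-digit uppercase hex form of 'value' into buffer[0..8]. */
        static void u32_to_hex(unsigned int value, char buffer[9])
        {
                static const char hex_digits[] = "0123456789ABCDEF";
                int i;

                buffer[8] = '\0';
                for (i = 7; i >= 0; i--) {
                        buffer[i] = hex_digits[value & 0x0F];   /* low nibble */
                        value >>= 4;
                }
        }

        /* Example: u32_to_hex(0x1234ABCD, buf) leaves "1234ABCD" in buf. */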
diff --git a/drivers/acpi/acpica/dbxface.c b/drivers/acpi/acpica/dbxface.c
index e94e0d80bc7b..124db237775d 100644
--- a/drivers/acpi/acpica/dbxface.c
+++ b/drivers/acpi/acpica/dbxface.c
@@ -162,8 +162,8 @@ void acpi_db_signal_break_point(struct acpi_walk_state *walk_state)
******************************************************************************/
acpi_status
-acpi_db_single_step(struct acpi_walk_state * walk_state,
- union acpi_parse_object * op, u32 opcode_class)
+acpi_db_single_step(struct acpi_walk_state *walk_state,
+ union acpi_parse_object *op, u32 opcode_class)
{
union acpi_parse_object *next;
acpi_status status = AE_OK;
diff --git a/drivers/acpi/acpica/dscontrol.c b/drivers/acpi/acpica/dscontrol.c
index c9a663f21ac8..4ddcbf100234 100644
--- a/drivers/acpi/acpica/dscontrol.c
+++ b/drivers/acpi/acpica/dscontrol.c
@@ -163,8 +163,8 @@ acpi_ds_exec_begin_control_op(struct acpi_walk_state *walk_state,
******************************************************************************/
acpi_status
-acpi_ds_exec_end_control_op(struct acpi_walk_state * walk_state,
- union acpi_parse_object * op)
+acpi_ds_exec_end_control_op(struct acpi_walk_state *walk_state,
+ union acpi_parse_object *op)
{
acpi_status status = AE_OK;
union acpi_generic_state *control_state;
diff --git a/drivers/acpi/acpica/dsinit.c b/drivers/acpi/acpica/dsinit.c
index 5aa1c5feee50..f1e6dcc7a827 100644
--- a/drivers/acpi/acpica/dsinit.c
+++ b/drivers/acpi/acpica/dsinit.c
@@ -188,7 +188,7 @@ acpi_ds_init_one_object(acpi_handle obj_handle,
acpi_status
acpi_ds_initialize_objects(u32 table_index,
- struct acpi_namespace_node * start_node)
+ struct acpi_namespace_node *start_node)
{
acpi_status status;
struct acpi_init_walk_info info;
diff --git a/drivers/acpi/acpica/dsmethod.c b/drivers/acpi/acpica/dsmethod.c
index da198b864107..47c7b52a519c 100644
--- a/drivers/acpi/acpica/dsmethod.c
+++ b/drivers/acpi/acpica/dsmethod.c
@@ -209,7 +209,7 @@ acpi_ds_detect_named_opcodes(struct acpi_walk_state *walk_state,
******************************************************************************/
acpi_status
-acpi_ds_method_error(acpi_status status, struct acpi_walk_state * walk_state)
+acpi_ds_method_error(acpi_status status, struct acpi_walk_state *walk_state)
{
u32 aml_offset;
diff --git a/drivers/acpi/acpica/dsutils.c b/drivers/acpi/acpica/dsutils.c
index 8ca9416320e0..f393de9f5887 100644
--- a/drivers/acpi/acpica/dsutils.c
+++ b/drivers/acpi/acpica/dsutils.c
@@ -569,7 +569,7 @@ acpi_ds_create_operand(struct acpi_walk_state *walk_state,
/* TBD: May only be temporary */
obj_desc =
- acpi_ut_create_string_object((acpi_size) name_length);
+ acpi_ut_create_string_object((acpi_size)name_length);
strncpy(obj_desc->string.pointer,
name_string, name_length);
diff --git a/drivers/acpi/acpica/dswload.c b/drivers/acpi/acpica/dswload.c
index d1cedcfda1d2..fd34040d4f44 100644
--- a/drivers/acpi/acpica/dswload.c
+++ b/drivers/acpi/acpica/dswload.c
@@ -137,8 +137,8 @@ acpi_ds_init_callbacks(struct acpi_walk_state *walk_state, u32 pass_number)
******************************************************************************/
acpi_status
-acpi_ds_load1_begin_op(struct acpi_walk_state * walk_state,
- union acpi_parse_object ** out_op)
+acpi_ds_load1_begin_op(struct acpi_walk_state *walk_state,
+ union acpi_parse_object **out_op)
{
union acpi_parse_object *op;
struct acpi_namespace_node *node;
diff --git a/drivers/acpi/acpica/dswload2.c b/drivers/acpi/acpica/dswload2.c
index 0bac6e14170e..762db3fa70e0 100644
--- a/drivers/acpi/acpica/dswload2.c
+++ b/drivers/acpi/acpica/dswload2.c
@@ -490,8 +490,8 @@ acpi_status acpi_ds_load2_end_op(struct acpi_walk_state *walk_state)
status =
acpi_ds_create_index_field(op,
- (acpi_handle) arg->
- common.node, walk_state);
+ (acpi_handle)arg->common.
+ node, walk_state);
break;
case AML_BANK_FIELD_OP:
diff --git a/drivers/acpi/acpica/dswstate.c b/drivers/acpi/acpica/dswstate.c
index 3a26ddbaed6d..e3338698e56b 100644
--- a/drivers/acpi/acpica/dswstate.c
+++ b/drivers/acpi/acpica/dswstate.c
@@ -143,8 +143,8 @@ acpi_ds_result_pop(union acpi_operand_object **object,
******************************************************************************/
acpi_status
-acpi_ds_result_push(union acpi_operand_object * object,
- struct acpi_walk_state * walk_state)
+acpi_ds_result_push(union acpi_operand_object *object,
+ struct acpi_walk_state *walk_state)
{
union acpi_generic_state *state;
acpi_status status;
@@ -307,7 +307,7 @@ static acpi_status acpi_ds_result_stack_pop(struct acpi_walk_state *walk_state)
******************************************************************************/
acpi_status
-acpi_ds_obj_stack_push(void *object, struct acpi_walk_state * walk_state)
+acpi_ds_obj_stack_push(void *object, struct acpi_walk_state *walk_state)
{
ACPI_FUNCTION_NAME(ds_obj_stack_push);
@@ -354,7 +354,7 @@ acpi_ds_obj_stack_push(void *object, struct acpi_walk_state * walk_state)
******************************************************************************/
acpi_status
-acpi_ds_obj_stack_pop(u32 pop_count, struct acpi_walk_state * walk_state)
+acpi_ds_obj_stack_pop(u32 pop_count, struct acpi_walk_state *walk_state)
{
u32 i;
@@ -411,7 +411,7 @@ acpi_ds_obj_stack_pop_and_delete(u32 pop_count,
return;
}
- for (i = (s32) pop_count - 1; i >= 0; i--) {
+ for (i = (s32)pop_count - 1; i >= 0; i--) {
if (walk_state->num_operands == 0) {
return;
}
diff --git a/drivers/acpi/acpica/evgpe.c b/drivers/acpi/acpica/evgpe.c
index b47e62aaf654..4b4949ce05bc 100644
--- a/drivers/acpi/acpica/evgpe.c
+++ b/drivers/acpi/acpica/evgpe.c
@@ -440,7 +440,7 @@ u32 acpi_ev_gpe_detect(struct acpi_gpe_xrupt_info *gpe_xrupt_list)
gpe_event_info =
&gpe_block->
- event_info[((acpi_size) i *
+ event_info[((acpi_size)i *
ACPI_GPE_REGISTER_WIDTH) + j];
gpe_number =
j + gpe_register_info->base_gpe_number;
@@ -652,7 +652,7 @@ static void ACPI_SYSTEM_XFACE acpi_ev_asynch_enable_gpe(void *context)
*
******************************************************************************/
-acpi_status acpi_ev_finish_gpe(struct acpi_gpe_event_info * gpe_event_info)
+acpi_status acpi_ev_finish_gpe(struct acpi_gpe_event_info *gpe_event_info)
{
acpi_status status;
diff --git a/drivers/acpi/acpica/evgpeblk.c b/drivers/acpi/acpica/evgpeblk.c
index 447fa1cac64f..d54014cab01d 100644
--- a/drivers/acpi/acpica/evgpeblk.c
+++ b/drivers/acpi/acpica/evgpeblk.c
@@ -211,7 +211,7 @@ acpi_ev_create_gpe_info_blocks(struct acpi_gpe_block_info *gpe_block)
/* Allocate the GPE register information block */
- gpe_register_info = ACPI_ALLOCATE_ZEROED((acpi_size) gpe_block->
+ gpe_register_info = ACPI_ALLOCATE_ZEROED((acpi_size)gpe_block->
register_count *
sizeof(struct
acpi_gpe_register_info));
@@ -225,7 +225,7 @@ acpi_ev_create_gpe_info_blocks(struct acpi_gpe_block_info *gpe_block)
* Allocate the GPE event_info block. There are eight distinct GPEs
* per register. Initialization to zeros is sufficient.
*/
- gpe_event_info = ACPI_ALLOCATE_ZEROED((acpi_size) gpe_block->gpe_count *
+ gpe_event_info = ACPI_ALLOCATE_ZEROED((acpi_size)gpe_block->gpe_count *
sizeof(struct
acpi_gpe_event_info));
if (!gpe_event_info) {
diff --git a/drivers/acpi/acpica/evgpeutil.c b/drivers/acpi/acpica/evgpeutil.c
index 66c4b5b7cd64..3f150d567e64 100644
--- a/drivers/acpi/acpica/evgpeutil.c
+++ b/drivers/acpi/acpica/evgpeutil.c
@@ -163,7 +163,7 @@ acpi_ev_get_gpe_device(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
acpi_status
acpi_ev_get_gpe_xrupt_block(u32 interrupt_number,
- struct acpi_gpe_xrupt_info ** gpe_xrupt_block)
+ struct acpi_gpe_xrupt_info **gpe_xrupt_block)
{
struct acpi_gpe_xrupt_info *next_gpe_xrupt;
struct acpi_gpe_xrupt_info *gpe_xrupt;
@@ -320,7 +320,7 @@ acpi_ev_delete_gpe_handlers(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
/* Now look at the individual GPEs in this byte register */
for (j = 0; j < ACPI_GPE_REGISTER_WIDTH; j++) {
- gpe_event_info = &gpe_block->event_info[((acpi_size) i *
+ gpe_event_info = &gpe_block->event_info[((acpi_size)i *
ACPI_GPE_REGISTER_WIDTH)
+ j];
diff --git a/drivers/acpi/acpica/evhandler.c b/drivers/acpi/acpica/evhandler.c
index 0f6be8956a99..24768ca03f19 100644
--- a/drivers/acpi/acpica/evhandler.c
+++ b/drivers/acpi/acpica/evhandler.c
@@ -359,7 +359,7 @@ union acpi_operand_object *acpi_ev_find_region_handler(acpi_adr_space_type
******************************************************************************/
acpi_status
-acpi_ev_install_space_handler(struct acpi_namespace_node * node,
+acpi_ev_install_space_handler(struct acpi_namespace_node *node,
acpi_adr_space_type space_id,
acpi_adr_space_handler handler,
acpi_adr_space_setup setup, void *context)
diff --git a/drivers/acpi/acpica/evmisc.c b/drivers/acpi/acpica/evmisc.c
index c67d78c5995f..f51d43adb7d1 100644
--- a/drivers/acpi/acpica/evmisc.c
+++ b/drivers/acpi/acpica/evmisc.c
@@ -99,8 +99,7 @@ u8 acpi_ev_is_notify_object(struct acpi_namespace_node *node)
******************************************************************************/
acpi_status
-acpi_ev_queue_notify_request(struct acpi_namespace_node * node,
- u32 notify_value)
+acpi_ev_queue_notify_request(struct acpi_namespace_node *node, u32 notify_value)
{
union acpi_operand_object *obj_desc;
union acpi_operand_object *handler_list_head = NULL;
diff --git a/drivers/acpi/acpica/evregion.c b/drivers/acpi/acpica/evregion.c
index 63924d1c737a..4c6f79514040 100644
--- a/drivers/acpi/acpica/evregion.c
+++ b/drivers/acpi/acpica/evregion.c
@@ -526,81 +526,59 @@ acpi_ev_attach_region(union acpi_operand_object *handler_obj,
/*******************************************************************************
*
- * FUNCTION: acpi_ev_associate_reg_method
+ * FUNCTION: acpi_ev_execute_reg_method
*
* PARAMETERS: region_obj - Region object
+ * function - Passed to _REG: On (1) or Off (0)
*
* RETURN: Status
*
- * DESCRIPTION: Find and associate _REG method to a region
+ * DESCRIPTION: Execute _REG method for a region
*
******************************************************************************/
-void acpi_ev_associate_reg_method(union acpi_operand_object *region_obj)
+acpi_status
+acpi_ev_execute_reg_method(union acpi_operand_object *region_obj, u32 function)
{
- acpi_name *reg_name_ptr = (acpi_name *) METHOD_NAME__REG;
+ struct acpi_evaluate_info *info;
+ union acpi_operand_object *args[3];
+ union acpi_operand_object *region_obj2;
+ const acpi_name *reg_name_ptr =
+ ACPI_CAST_PTR(acpi_name, METHOD_NAME__REG);
struct acpi_namespace_node *method_node;
struct acpi_namespace_node *node;
- union acpi_operand_object *region_obj2;
acpi_status status;
- ACPI_FUNCTION_TRACE(ev_associate_reg_method);
+ ACPI_FUNCTION_TRACE(ev_execute_reg_method);
+
+ if (!acpi_gbl_namespace_initialized ||
+ region_obj->region.handler == NULL) {
+ return_ACPI_STATUS(AE_OK);
+ }
region_obj2 = acpi_ns_get_secondary_object(region_obj);
if (!region_obj2) {
- return_VOID;
+ return_ACPI_STATUS(AE_NOT_EXIST);
}
+ /*
+ * Find any "_REG" method associated with this region definition.
+ * The method should always be updated as this function may be
+ * invoked after a namespace change.
+ */
node = region_obj->region.node->parent;
-
- /* Find any "_REG" method associated with this region definition */
-
status =
acpi_ns_search_one_scope(*reg_name_ptr, node, ACPI_TYPE_METHOD,
&method_node);
if (ACPI_SUCCESS(status)) {
/*
- * The _REG method is optional and there can be only one per region
- * definition. This will be executed when the handler is attached
- * or removed
+ * The _REG method is optional and there can be only one per
+ * region definition. This will be executed when the handler is
+ * attached or removed.
*/
region_obj2->extra.method_REG = method_node;
}
-
- return_VOID;
-}
-
-/*******************************************************************************
- *
- * FUNCTION: acpi_ev_execute_reg_method
- *
- * PARAMETERS: region_obj - Region object
- * function - Passed to _REG: On (1) or Off (0)
- *
- * RETURN: Status
- *
- * DESCRIPTION: Execute _REG method for a region
- *
- ******************************************************************************/
-
-acpi_status
-acpi_ev_execute_reg_method(union acpi_operand_object *region_obj, u32 function)
-{
- struct acpi_evaluate_info *info;
- union acpi_operand_object *args[3];
- union acpi_operand_object *region_obj2;
- acpi_status status;
-
- ACPI_FUNCTION_TRACE(ev_execute_reg_method);
-
- region_obj2 = acpi_ns_get_secondary_object(region_obj);
- if (!region_obj2) {
- return_ACPI_STATUS(AE_NOT_EXIST);
- }
-
- if (region_obj2->extra.method_REG == NULL ||
- region_obj->region.handler == NULL ||
- !acpi_gbl_namespace_initialized) {
+ if (region_obj2->extra.method_REG == NULL) {
return_ACPI_STATUS(AE_OK);
}
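The rework above folds the old acpi_ev_associate_reg_method() into acpi_ev_execute_reg_method(), so the _REG lookup is redone on every invocation instead of being cached once at region initialization (the evrgnini.c hunk below drops the old call site accordingly). For context, _REG with function == 1 is ultimately driven by installing an address-space handler; a hedged sketch of that trigger from a Linux driver's perspective, with invented names and the EC space ID chosen only as a familiar example:

        #include <linux/acpi.h>
        #include <linux/errno.h>

        /* Invented example handler; a real handler would access the hardware. */
        static acpi_status my_ec_space_handler(u32 function,
                                               acpi_physical_address address,
                                               u32 bit_width, u64 *value,
                                               void *handler_context,
                                               void *region_context)
        {
                return AE_OK;
        }

        static int my_install_handler(acpi_handle ec_handle)
        {
                acpi_status status;

                /* Installing the handler is what leads ACPICA to evaluate
                 * _REG(space_id, 1) for operation regions in this space. */
                status = acpi_install_address_space_handler(ec_handle,
                                                            ACPI_ADR_SPACE_EC,
                                                            my_ec_space_handler,
                                                            NULL, NULL);
                return ACPI_FAILURE(status) ? -ENODEV : 0;
        }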
diff --git a/drivers/acpi/acpica/evrgnini.c b/drivers/acpi/acpica/evrgnini.c
index fda869c9ad0b..b6ea9c0d0d8c 100644
--- a/drivers/acpi/acpica/evrgnini.c
+++ b/drivers/acpi/acpica/evrgnini.c
@@ -227,7 +227,7 @@ acpi_ev_pci_config_region_setup(acpi_handle handle,
/* Install a handler for this PCI root bridge */
- status = acpi_install_address_space_handler((acpi_handle) pci_root_node, ACPI_ADR_SPACE_PCI_CONFIG, ACPI_DEFAULT_HANDLER, NULL, NULL);
+ status = acpi_install_address_space_handler((acpi_handle)pci_root_node, ACPI_ADR_SPACE_PCI_CONFIG, ACPI_DEFAULT_HANDLER, NULL, NULL);
if (ACPI_FAILURE(status)) {
if (status == AE_SAME_HANDLER) {
/*
@@ -518,7 +518,6 @@ acpi_ev_initialize_region(union acpi_operand_object *region_obj,
return_ACPI_STATUS(AE_OK);
}
- acpi_ev_associate_reg_method(region_obj);
region_obj->common.flags |= AOPOBJ_OBJECT_INITIALIZED;
node = region_obj->region.node->parent;
diff --git a/drivers/acpi/acpica/evxfgpe.c b/drivers/acpi/acpica/evxfgpe.c
index 90456714821f..17cfef721d00 100644
--- a/drivers/acpi/acpica/evxfgpe.c
+++ b/drivers/acpi/acpica/evxfgpe.c
@@ -917,7 +917,7 @@ ACPI_EXPORT_SYMBOL(acpi_remove_gpe_block)
* the FADT-defined gpe blocks. Otherwise, the GPE block device.
*
******************************************************************************/
-acpi_status acpi_get_gpe_device(u32 index, acpi_handle * gpe_device)
+acpi_status acpi_get_gpe_device(u32 index, acpi_handle *gpe_device)
{
struct acpi_gpe_device_info info;
acpi_status status;
diff --git a/drivers/acpi/acpica/exconcat.c b/drivers/acpi/acpica/exconcat.c
new file mode 100644
index 000000000000..2423fe03e879
--- /dev/null
+++ b/drivers/acpi/acpica/exconcat.c
@@ -0,0 +1,439 @@
+/******************************************************************************
+ *
+ * Module Name: exconcat - Concatenate-type AML operators
+ *
+ *****************************************************************************/
+
+/*
+ * Copyright (C) 2000 - 2016, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions, and the following disclaimer,
+ * without modification.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ * substantially similar to the "NO WARRANTY" disclaimer below
+ * ("Disclaimer") and any redistribution must be conditioned upon
+ * including a substantially similar Disclaimer requirement for further
+ * binary redistribution.
+ * 3. Neither the names of the above-listed copyright holders nor the names
+ * of any contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * Alternatively, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") version 2 as published by the Free
+ * Software Foundation.
+ *
+ * NO WARRANTY
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
+ * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGES.
+ */
+
+#include <acpi/acpi.h>
+#include "accommon.h"
+#include "acinterp.h"
+#include "amlresrc.h"
+
+#define _COMPONENT ACPI_EXECUTER
+ACPI_MODULE_NAME("exconcat")
+
+/* Local Prototypes */
+static acpi_status
+acpi_ex_convert_to_object_type_string(union acpi_operand_object *obj_desc,
+ union acpi_operand_object **result_desc);
+
+/*******************************************************************************
+ *
+ * FUNCTION: acpi_ex_do_concatenate
+ *
+ * PARAMETERS: operand0 - First source object
+ * operand1 - Second source object
+ * actual_return_desc - Where to place the return object
+ * walk_state - Current walk state
+ *
+ * RETURN: Status
+ *
+ * DESCRIPTION: Concatenate two objects with the ACPI-defined conversion
+ * rules as necessary.
+ * NOTE:
+ * Per the ACPI spec (up to 6.1), Concatenate only supports Integer,
+ * String, and Buffer objects. However, we support all objects here
+ * as an extension. This improves the usefulness of both Concatenate
+ * and the Printf/Fprintf macros. The extension returns a string
+ * describing the object type for the other objects.
+ * 02/2016.
+ *
+ ******************************************************************************/
+
+acpi_status
+acpi_ex_do_concatenate(union acpi_operand_object *operand0,
+ union acpi_operand_object *operand1,
+ union acpi_operand_object **actual_return_desc,
+ struct acpi_walk_state *walk_state)
+{
+ union acpi_operand_object *local_operand0 = operand0;
+ union acpi_operand_object *local_operand1 = operand1;
+ union acpi_operand_object *temp_operand1 = NULL;
+ union acpi_operand_object *return_desc;
+ char *buffer;
+ acpi_object_type operand0_type;
+ acpi_object_type operand1_type;
+ acpi_status status;
+
+ ACPI_FUNCTION_TRACE(ex_do_concatenate);
+
+ /* Operand 0 preprocessing */
+
+ switch (operand0->common.type) {
+ case ACPI_TYPE_INTEGER:
+ case ACPI_TYPE_STRING:
+ case ACPI_TYPE_BUFFER:
+
+ operand0_type = operand0->common.type;
+ break;
+
+ default:
+
+ /* For all other types, get the "object type" string */
+
+ status =
+ acpi_ex_convert_to_object_type_string(operand0,
+ &local_operand0);
+ if (ACPI_FAILURE(status)) {
+ goto cleanup;
+ }
+
+ operand0_type = ACPI_TYPE_STRING;
+ break;
+ }
+
+ /* Operand 1 preprocessing */
+
+ switch (operand1->common.type) {
+ case ACPI_TYPE_INTEGER:
+ case ACPI_TYPE_STRING:
+ case ACPI_TYPE_BUFFER:
+
+ operand1_type = operand1->common.type;
+ break;
+
+ default:
+
+ /* For all other types, get the "object type" string */
+
+ status =
+ acpi_ex_convert_to_object_type_string(operand1,
+ &local_operand1);
+ if (ACPI_FAILURE(status)) {
+ goto cleanup;
+ }
+
+ operand1_type = ACPI_TYPE_STRING;
+ break;
+ }
+
+ /*
+ * Convert the second operand if necessary. The first operand (0)
+ * determines the type of the second operand (1) (See the Data Types
+ * section of the ACPI specification). Both object types are
+ * guaranteed to be either Integer/String/Buffer by the operand
+ * resolution mechanism.
+ */
+ switch (operand0_type) {
+ case ACPI_TYPE_INTEGER:
+
+ status =
+ acpi_ex_convert_to_integer(local_operand1, &temp_operand1,
+ 16);
+ break;
+
+ case ACPI_TYPE_BUFFER:
+
+ status =
+ acpi_ex_convert_to_buffer(local_operand1, &temp_operand1);
+ break;
+
+ case ACPI_TYPE_STRING:
+
+ switch (operand1_type) {
+ case ACPI_TYPE_INTEGER:
+ case ACPI_TYPE_STRING:
+ case ACPI_TYPE_BUFFER:
+
+ /* Other types have already been converted to string */
+
+ status =
+ acpi_ex_convert_to_string(local_operand1,
+ &temp_operand1,
+ ACPI_IMPLICIT_CONVERT_HEX);
+ break;
+
+ default:
+
+ status = AE_OK;
+ break;
+ }
+ break;
+
+ default:
+
+ ACPI_ERROR((AE_INFO, "Invalid object type: 0x%X",
+ operand0->common.type));
+ status = AE_AML_INTERNAL;
+ }
+
+ if (ACPI_FAILURE(status)) {
+ goto cleanup;
+ }
+
+ /* Take care with any newly created operand objects */
+
+ if ((local_operand1 != operand1) && (local_operand1 != temp_operand1)) {
+ acpi_ut_remove_reference(local_operand1);
+ }
+
+ local_operand1 = temp_operand1;
+
+ /*
+ * Both operands are now known to be the same object type
+ * (Both are Integer, String, or Buffer), and we can now perform
+ * the concatenation.
+ *
+ * There are three cases to handle, as per the ACPI spec:
+ *
+ * 1) Two Integers concatenated to produce a new Buffer
+ * 2) Two Strings concatenated to produce a new String
+ * 3) Two Buffers concatenated to produce a new Buffer
+ */
+ switch (operand0_type) {
+ case ACPI_TYPE_INTEGER:
+
+ /* Result of two Integers is a Buffer */
+ /* Need enough buffer space for two integers */
+
+ return_desc = acpi_ut_create_buffer_object((acpi_size)
+ ACPI_MUL_2
+ (acpi_gbl_integer_byte_width));
+ if (!return_desc) {
+ status = AE_NO_MEMORY;
+ goto cleanup;
+ }
+
+ buffer = (char *)return_desc->buffer.pointer;
+
+ /* Copy the first integer, LSB first */
+
+ memcpy(buffer, &operand0->integer.value,
+ acpi_gbl_integer_byte_width);
+
+ /* Copy the second integer (LSB first) after the first */
+
+ memcpy(buffer + acpi_gbl_integer_byte_width,
+ &local_operand1->integer.value,
+ acpi_gbl_integer_byte_width);
+ break;
+
+ case ACPI_TYPE_STRING:
+
+ /* Result of two Strings is a String */
+
+ return_desc = acpi_ut_create_string_object(((acpi_size)
+ local_operand0->
+ string.length +
+ local_operand1->
+ string.length));
+ if (!return_desc) {
+ status = AE_NO_MEMORY;
+ goto cleanup;
+ }
+
+ buffer = return_desc->string.pointer;
+
+ /* Concatenate the strings */
+
+ strcpy(buffer, local_operand0->string.pointer);
+ strcat(buffer, local_operand1->string.pointer);
+ break;
+
+ case ACPI_TYPE_BUFFER:
+
+ /* Result of two Buffers is a Buffer */
+
+ return_desc = acpi_ut_create_buffer_object(((acpi_size)
+ operand0->buffer.
+ length +
+ local_operand1->
+ buffer.length));
+ if (!return_desc) {
+ status = AE_NO_MEMORY;
+ goto cleanup;
+ }
+
+ buffer = (char *)return_desc->buffer.pointer;
+
+ /* Concatenate the buffers */
+
+ memcpy(buffer, operand0->buffer.pointer,
+ operand0->buffer.length);
+ memcpy(buffer + operand0->buffer.length,
+ local_operand1->buffer.pointer,
+ local_operand1->buffer.length);
+ break;
+
+ default:
+
+ /* Invalid object type, should not happen here */
+
+ ACPI_ERROR((AE_INFO, "Invalid object type: 0x%X",
+ operand0->common.type));
+ status = AE_AML_INTERNAL;
+ goto cleanup;
+ }
+
+ *actual_return_desc = return_desc;
+
+cleanup:
+ if (local_operand0 != operand0) {
+ acpi_ut_remove_reference(local_operand0);
+ }
+
+ if (local_operand1 != operand1) {
+ acpi_ut_remove_reference(local_operand1);
+ }
+
+ return_ACPI_STATUS(status);
+}
+
+/*******************************************************************************
+ *
+ * FUNCTION: acpi_ex_convert_to_object_type_string
+ *
+ * PARAMETERS: obj_desc - Object to be converted
+ * return_desc - Where to place the return object
+ *
+ * RETURN: Status
+ *
+ * DESCRIPTION: Convert an object of arbitrary type to a string object that
+ * contains the namestring for the object. Used for the
+ * concatenate operator.
+ *
+ ******************************************************************************/
+
+static acpi_status
+acpi_ex_convert_to_object_type_string(union acpi_operand_object *obj_desc,
+ union acpi_operand_object **result_desc)
+{
+ union acpi_operand_object *return_desc;
+ const char *type_string;
+
+ type_string = acpi_ut_get_type_name(obj_desc->common.type);
+
+ return_desc = acpi_ut_create_string_object(((acpi_size)strlen(type_string) + 9)); /* 9 For "[ Object]" */
+ if (!return_desc) {
+ return (AE_NO_MEMORY);
+ }
+
+ strcpy(return_desc->string.pointer, "[");
+ strcat(return_desc->string.pointer, type_string);
+ strcat(return_desc->string.pointer, " Object]");
+
+ *result_desc = return_desc;
+ return (AE_OK);
+}
+
+/*******************************************************************************
+ *
+ * FUNCTION: acpi_ex_concat_template
+ *
+ * PARAMETERS: operand0 - First source object
+ * operand1 - Second source object
+ * actual_return_desc - Where to place the return object
+ * walk_state - Current walk state
+ *
+ * RETURN: Status
+ *
+ * DESCRIPTION: Concatenate two resource templates
+ *
+ ******************************************************************************/
+
+acpi_status
+acpi_ex_concat_template(union acpi_operand_object *operand0,
+ union acpi_operand_object *operand1,
+ union acpi_operand_object **actual_return_desc,
+ struct acpi_walk_state *walk_state)
+{
+ acpi_status status;
+ union acpi_operand_object *return_desc;
+ u8 *new_buf;
+ u8 *end_tag;
+ acpi_size length0;
+ acpi_size length1;
+ acpi_size new_length;
+
+ ACPI_FUNCTION_TRACE(ex_concat_template);
+
+ /*
+ * Find the end_tag descriptor in each resource template.
+ * Note1: returned pointers point TO the end_tag, not past it.
+ * Note2: zero-length buffers are allowed; treated like one end_tag
+ */
+
+ /* Get the length of the first resource template */
+
+ status = acpi_ut_get_resource_end_tag(operand0, &end_tag);
+ if (ACPI_FAILURE(status)) {
+ return_ACPI_STATUS(status);
+ }
+
+ length0 = ACPI_PTR_DIFF(end_tag, operand0->buffer.pointer);
+
+ /* Get the length of the second resource template */
+
+ status = acpi_ut_get_resource_end_tag(operand1, &end_tag);
+ if (ACPI_FAILURE(status)) {
+ return_ACPI_STATUS(status);
+ }
+
+ length1 = ACPI_PTR_DIFF(end_tag, operand1->buffer.pointer);
+
+ /* Combine both lengths, minimum size will be 2 for end_tag */
+
+ new_length = length0 + length1 + sizeof(struct aml_resource_end_tag);
+
+ /* Create a new buffer object for the result (with one end_tag) */
+
+ return_desc = acpi_ut_create_buffer_object(new_length);
+ if (!return_desc) {
+ return_ACPI_STATUS(AE_NO_MEMORY);
+ }
+
+ /*
+ * Copy the templates to the new buffer, 0 first, then 1 follows. One
+ * end_tag descriptor is copied from Operand1.
+ */
+ new_buf = return_desc->buffer.pointer;
+ memcpy(new_buf, operand0->buffer.pointer, length0);
+ memcpy(new_buf + length0, operand1->buffer.pointer, length1);
+
+ /* Insert end_tag and set the checksum to zero, means "ignore checksum" */
+
+ new_buf[new_length - 1] = 0;
+ new_buf[new_length - 2] = ACPI_RESOURCE_NAME_END_TAG | 1;
+
+ /* Return the completed resource template */
+
+ *actual_return_desc = return_desc;
+ return_ACPI_STATUS(AE_OK);
+}
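acpi_ex_concat_template() in the new file above (moved out of exmisc.c, see the removal further down) produces a buffer laid out as: template 0 up to its end_tag, template 1 up to its end_tag, then one fresh two-byte end_tag whose checksum byte is zero, meaning "ignore checksum". A standalone sketch of that layout arithmetic, with names invented and the 0x79 tag byte taken from the ACPI small-resource encoding:

        #include <stdint.h>
        #include <string.h>

        /* Illustrative only: len0/len1 exclude each input's own end_tag. */
        static size_t concat_resource_templates(const uint8_t *t0, size_t len0,
                                                const uint8_t *t1, size_t len1,
                                                uint8_t *out)
        {
                memcpy(out, t0, len0);
                memcpy(out + len0, t1, len1);
                out[len0 + len1] = 0x79;        /* end_tag descriptor, length 1 */
                out[len0 + len1 + 1] = 0;       /* checksum 0 == ignore checksum */
                return len0 + len1 + 2;         /* 2 == sizeof(aml_resource_end_tag) */
        }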
diff --git a/drivers/acpi/acpica/exconfig.c b/drivers/acpi/acpica/exconfig.c
index f74161301037..a1d177d58254 100644
--- a/drivers/acpi/acpica/exconfig.c
+++ b/drivers/acpi/acpica/exconfig.c
@@ -118,7 +118,9 @@ acpi_ex_add_table(u32 table_index,
/* Execute any module-level code that was found in the table */
acpi_ex_exit_interpreter();
- acpi_ns_exec_module_code_list();
+ if (acpi_gbl_group_module_level_code) {
+ acpi_ns_exec_module_code_list();
+ }
acpi_ex_enter_interpreter();
/*
diff --git a/drivers/acpi/acpica/exconvrt.c b/drivers/acpi/acpica/exconvrt.c
index 0b9f2c13b98a..b7e9b3d803e1 100644
--- a/drivers/acpi/acpica/exconvrt.c
+++ b/drivers/acpi/acpica/exconvrt.c
@@ -124,7 +124,9 @@ acpi_ex_convert_to_integer(union acpi_operand_object *obj_desc,
* of ACPI 3.0) is that the to_integer() operator allows both decimal
* and hexadecimal strings (hex prefixed with "0x").
*/
- status = acpi_ut_strtoul64((char *)pointer, flags, &result);
+ status = acpi_ut_strtoul64((char *)pointer, flags,
+ acpi_gbl_integer_byte_width,
+ &result);
if (ACPI_FAILURE(status)) {
return_ACPI_STATUS(status);
}
@@ -439,7 +441,7 @@ acpi_ex_convert_to_string(union acpi_operand_object * obj_desc,
* Need enough space for one ASCII integer (plus null terminator)
*/
return_desc =
- acpi_ut_create_string_object((acpi_size) string_length);
+ acpi_ut_create_string_object((acpi_size)string_length);
if (!return_desc) {
return_ACPI_STATUS(AE_NO_MEMORY);
}
@@ -518,7 +520,7 @@ acpi_ex_convert_to_string(union acpi_operand_object * obj_desc,
}
return_desc =
- acpi_ut_create_string_object((acpi_size) string_length);
+ acpi_ut_create_string_object((acpi_size)string_length);
if (!return_desc) {
return_ACPI_STATUS(AE_NO_MEMORY);
}
diff --git a/drivers/acpi/acpica/excreate.c b/drivers/acpi/acpica/excreate.c
index bea9612e4720..613ba6eb08bb 100644
--- a/drivers/acpi/acpica/excreate.c
+++ b/drivers/acpi/acpica/excreate.c
@@ -394,7 +394,7 @@ acpi_status acpi_ex_create_processor(struct acpi_walk_state *walk_state)
obj_desc->processor.proc_id = (u8) operand[1]->integer.value;
obj_desc->processor.length = (u8) operand[3]->integer.value;
obj_desc->processor.address =
- (acpi_io_address) operand[2]->integer.value;
+ (acpi_io_address)operand[2]->integer.value;
/* Install the processor object in the parent Node */
diff --git a/drivers/acpi/acpica/exdump.c b/drivers/acpi/acpica/exdump.c
index ee30974b245a..fce6b2e10209 100644
--- a/drivers/acpi/acpica/exdump.c
+++ b/drivers/acpi/acpica/exdump.c
@@ -55,9 +55,9 @@ ACPI_MODULE_NAME("exdump")
*/
#if defined(ACPI_DEBUG_OUTPUT) || defined(ACPI_DEBUGGER)
/* Local prototypes */
-static void acpi_ex_out_string(char *title, char *value);
+static void acpi_ex_out_string(const char *title, const char *value);
-static void acpi_ex_out_pointer(char *title, void *value);
+static void acpi_ex_out_pointer(const char *title, const void *value);
static void
acpi_ex_dump_object(union acpi_operand_object *obj_desc,
@@ -365,8 +365,7 @@ acpi_ex_dump_object(union acpi_operand_object *obj_desc,
struct acpi_exdump_info *info)
{
u8 *target;
- char *name;
- const char *reference_name;
+ const char *name;
u8 count;
union acpi_operand_object *start;
union acpi_operand_object *data = NULL;
@@ -459,9 +458,9 @@ acpi_ex_dump_object(union acpi_operand_object *obj_desc,
case ACPI_EXD_REFERENCE:
- reference_name = acpi_ut_get_reference_name(obj_desc);
acpi_ex_out_string("Class Name",
- ACPI_CAST_PTR(char, reference_name));
+ acpi_ut_get_reference_name
+ (obj_desc));
acpi_ex_dump_reference_obj(obj_desc);
break;
@@ -934,12 +933,12 @@ acpi_ex_dump_operands(union acpi_operand_object **operands,
*
******************************************************************************/
-static void acpi_ex_out_string(char *title, char *value)
+static void acpi_ex_out_string(const char *title, const char *value)
{
acpi_os_printf("%20s : %s\n", title, value);
}
-static void acpi_ex_out_pointer(char *title, void *value)
+static void acpi_ex_out_pointer(const char *title, const void *value)
{
acpi_os_printf("%20s : %p\n", title, value);
}
diff --git a/drivers/acpi/acpica/exfield.c b/drivers/acpi/acpica/exfield.c
index d5d8020a8523..d7d3ee36338b 100644
--- a/drivers/acpi/acpica/exfield.c
+++ b/drivers/acpi/acpica/exfield.c
@@ -126,7 +126,7 @@ acpi_ex_get_serial_access_length(u32 accessor_type, u32 access_length)
******************************************************************************/
acpi_status
-acpi_ex_read_data_from_field(struct acpi_walk_state * walk_state,
+acpi_ex_read_data_from_field(struct acpi_walk_state *walk_state,
union acpi_operand_object *obj_desc,
union acpi_operand_object **ret_buffer_desc)
{
@@ -233,7 +233,7 @@ acpi_ex_read_data_from_field(struct acpi_walk_state * walk_state,
* Note: Field.length is in bits.
*/
length =
- (acpi_size) ACPI_ROUND_BITS_UP_TO_BYTES(obj_desc->field.bit_length);
+ (acpi_size)ACPI_ROUND_BITS_UP_TO_BYTES(obj_desc->field.bit_length);
if (length > acpi_gbl_integer_byte_width) {
diff --git a/drivers/acpi/acpica/exfldio.c b/drivers/acpi/acpica/exfldio.c
index f0c5ed0b7db8..ee76d299b3d0 100644
--- a/drivers/acpi/acpica/exfldio.c
+++ b/drivers/acpi/acpica/exfldio.c
@@ -164,7 +164,7 @@ acpi_ex_setup_region(union acpi_operand_object *obj_desc,
if (ACPI_ROUND_UP(rgn_desc->region.length,
obj_desc->common_field.
access_byte_width) >=
- ((acpi_size) obj_desc->common_field.
+ ((acpi_size)obj_desc->common_field.
base_byte_offset +
obj_desc->common_field.access_byte_width +
field_datum_byte_offset)) {
@@ -897,17 +897,9 @@ acpi_ex_insert_into_field(union acpi_operand_object *obj_desc,
access_bit_width = ACPI_MUL_8(obj_desc->common_field.access_byte_width);
- /*
- * Create the bitmasks used for bit insertion.
- * Note: This if/else is used to bypass compiler differences with the
- * shift operator
- */
- if (access_bit_width == ACPI_INTEGER_BIT_SIZE) {
- width_mask = ACPI_UINT64_MAX;
- } else {
- width_mask = ACPI_MASK_BITS_ABOVE(access_bit_width);
- }
+ /* Create the bitmasks used for bit insertion */
+ width_mask = ACPI_MASK_BITS_ABOVE_64(access_bit_width);
mask = width_mask &
ACPI_MASK_BITS_BELOW(obj_desc->common_field.start_field_bit_offset);
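The hunk above replaces the if/else that special-cased a 64-bit access width with a single ACPI_MASK_BITS_ABOVE_64() call; the underlying issue is that shifting a 64-bit value by 64 is undefined behavior in C, so the full-width case must be handled explicitly somewhere. A standalone sketch of the semantics the old code spelled out (not the ACPICA macro itself):

        #include <stdint.h>

        /* Returns a mask with bits [0 .. width-1] set; width == 64 yields all
         * ones without performing the undefined shift by 64. */
        static uint64_t mask_bits_within_width(uint32_t width)
        {
                if (width >= 64) {
                        return UINT64_MAX;
                }

                return ~(UINT64_MAX << width);
        }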
diff --git a/drivers/acpi/acpica/exmisc.c b/drivers/acpi/acpica/exmisc.c
index db30ae43ddd8..4f7e667624b3 100644
--- a/drivers/acpi/acpica/exmisc.c
+++ b/drivers/acpi/acpica/exmisc.c
@@ -45,7 +45,6 @@
#include "accommon.h"
#include "acinterp.h"
#include "amlcode.h"
-#include "amlresrc.h"
#define _COMPONENT ACPI_EXECUTER
ACPI_MODULE_NAME("exmisc")
@@ -140,295 +139,6 @@ acpi_ex_get_object_reference(union acpi_operand_object *obj_desc,
/*******************************************************************************
*
- * FUNCTION: acpi_ex_concat_template
- *
- * PARAMETERS: operand0 - First source object
- * operand1 - Second source object
- * actual_return_desc - Where to place the return object
- * walk_state - Current walk state
- *
- * RETURN: Status
- *
- * DESCRIPTION: Concatenate two resource templates
- *
- ******************************************************************************/
-
-acpi_status
-acpi_ex_concat_template(union acpi_operand_object *operand0,
- union acpi_operand_object *operand1,
- union acpi_operand_object **actual_return_desc,
- struct acpi_walk_state *walk_state)
-{
- acpi_status status;
- union acpi_operand_object *return_desc;
- u8 *new_buf;
- u8 *end_tag;
- acpi_size length0;
- acpi_size length1;
- acpi_size new_length;
-
- ACPI_FUNCTION_TRACE(ex_concat_template);
-
- /*
- * Find the end_tag descriptor in each resource template.
- * Note1: returned pointers point TO the end_tag, not past it.
- * Note2: zero-length buffers are allowed; treated like one end_tag
- */
-
- /* Get the length of the first resource template */
-
- status = acpi_ut_get_resource_end_tag(operand0, &end_tag);
- if (ACPI_FAILURE(status)) {
- return_ACPI_STATUS(status);
- }
-
- length0 = ACPI_PTR_DIFF(end_tag, operand0->buffer.pointer);
-
- /* Get the length of the second resource template */
-
- status = acpi_ut_get_resource_end_tag(operand1, &end_tag);
- if (ACPI_FAILURE(status)) {
- return_ACPI_STATUS(status);
- }
-
- length1 = ACPI_PTR_DIFF(end_tag, operand1->buffer.pointer);
-
- /* Combine both lengths, minimum size will be 2 for end_tag */
-
- new_length = length0 + length1 + sizeof(struct aml_resource_end_tag);
-
- /* Create a new buffer object for the result (with one end_tag) */
-
- return_desc = acpi_ut_create_buffer_object(new_length);
- if (!return_desc) {
- return_ACPI_STATUS(AE_NO_MEMORY);
- }
-
- /*
- * Copy the templates to the new buffer, 0 first, then 1 follows. One
- * end_tag descriptor is copied from Operand1.
- */
- new_buf = return_desc->buffer.pointer;
- memcpy(new_buf, operand0->buffer.pointer, length0);
- memcpy(new_buf + length0, operand1->buffer.pointer, length1);
-
- /* Insert end_tag and set the checksum to zero, means "ignore checksum" */
-
- new_buf[new_length - 1] = 0;
- new_buf[new_length - 2] = ACPI_RESOURCE_NAME_END_TAG | 1;
-
- /* Return the completed resource template */
-
- *actual_return_desc = return_desc;
- return_ACPI_STATUS(AE_OK);
-}
-
-/*******************************************************************************
- *
- * FUNCTION: acpi_ex_do_concatenate
- *
- * PARAMETERS: operand0 - First source object
- * operand1 - Second source object
- * actual_return_desc - Where to place the return object
- * walk_state - Current walk state
- *
- * RETURN: Status
- *
- * DESCRIPTION: Concatenate two objects OF THE SAME TYPE.
- *
- ******************************************************************************/
-
-acpi_status
-acpi_ex_do_concatenate(union acpi_operand_object *operand0,
- union acpi_operand_object *operand1,
- union acpi_operand_object **actual_return_desc,
- struct acpi_walk_state *walk_state)
-{
- union acpi_operand_object *local_operand1 = operand1;
- union acpi_operand_object *return_desc;
- char *new_buf;
- const char *type_string;
- acpi_status status;
-
- ACPI_FUNCTION_TRACE(ex_do_concatenate);
-
- /*
- * Convert the second operand if necessary. The first operand
- * determines the type of the second operand, (See the Data Types
- * section of the ACPI specification.) Both object types are
- * guaranteed to be either Integer/String/Buffer by the operand
- * resolution mechanism.
- */
- switch (operand0->common.type) {
- case ACPI_TYPE_INTEGER:
-
- status =
- acpi_ex_convert_to_integer(operand1, &local_operand1, 16);
- break;
-
- case ACPI_TYPE_STRING:
- /*
- * Per the ACPI spec, Concatenate only supports int/str/buf.
- * However, we support all objects here as an extension.
- * This improves the usefulness of the Printf() macro.
- * 12/2015.
- */
- switch (operand1->common.type) {
- case ACPI_TYPE_INTEGER:
- case ACPI_TYPE_STRING:
- case ACPI_TYPE_BUFFER:
-
- status =
- acpi_ex_convert_to_string(operand1, &local_operand1,
- ACPI_IMPLICIT_CONVERT_HEX);
- break;
-
- default:
- /*
- * Just emit a string containing the object type.
- */
- type_string =
- acpi_ut_get_type_name(operand1->common.type);
-
- local_operand1 = acpi_ut_create_string_object(((acpi_size) strlen(type_string) + 9)); /* 9 For "[Object]" */
- if (!local_operand1) {
- status = AE_NO_MEMORY;
- goto cleanup;
- }
-
- strcpy(local_operand1->string.pointer, "[");
- strcat(local_operand1->string.pointer, type_string);
- strcat(local_operand1->string.pointer, " Object]");
- status = AE_OK;
- break;
- }
- break;
-
- case ACPI_TYPE_BUFFER:
-
- status = acpi_ex_convert_to_buffer(operand1, &local_operand1);
- break;
-
- default:
-
- ACPI_ERROR((AE_INFO, "Invalid object type: 0x%X",
- operand0->common.type));
- status = AE_AML_INTERNAL;
- }
-
- if (ACPI_FAILURE(status)) {
- goto cleanup;
- }
-
- /*
- * Both operands are now known to be the same object type
- * (Both are Integer, String, or Buffer), and we can now perform the
- * concatenation.
- */
-
- /*
- * There are three cases to handle:
- *
- * 1) Two Integers concatenated to produce a new Buffer
- * 2) Two Strings concatenated to produce a new String
- * 3) Two Buffers concatenated to produce a new Buffer
- */
- switch (operand0->common.type) {
- case ACPI_TYPE_INTEGER:
-
- /* Result of two Integers is a Buffer */
- /* Need enough buffer space for two integers */
-
- return_desc = acpi_ut_create_buffer_object((acpi_size)
- ACPI_MUL_2
- (acpi_gbl_integer_byte_width));
- if (!return_desc) {
- status = AE_NO_MEMORY;
- goto cleanup;
- }
-
- new_buf = (char *)return_desc->buffer.pointer;
-
- /* Copy the first integer, LSB first */
-
- memcpy(new_buf, &operand0->integer.value,
- acpi_gbl_integer_byte_width);
-
- /* Copy the second integer (LSB first) after the first */
-
- memcpy(new_buf + acpi_gbl_integer_byte_width,
- &local_operand1->integer.value,
- acpi_gbl_integer_byte_width);
- break;
-
- case ACPI_TYPE_STRING:
-
- /* Result of two Strings is a String */
-
- return_desc = acpi_ut_create_string_object(((acpi_size)
- operand0->string.
- length +
- local_operand1->
- string.length));
- if (!return_desc) {
- status = AE_NO_MEMORY;
- goto cleanup;
- }
-
- new_buf = return_desc->string.pointer;
-
- /* Concatenate the strings */
-
- strcpy(new_buf, operand0->string.pointer);
- strcat(new_buf, local_operand1->string.pointer);
- break;
-
- case ACPI_TYPE_BUFFER:
-
- /* Result of two Buffers is a Buffer */
-
- return_desc = acpi_ut_create_buffer_object(((acpi_size)
- operand0->buffer.
- length +
- local_operand1->
- buffer.length));
- if (!return_desc) {
- status = AE_NO_MEMORY;
- goto cleanup;
- }
-
- new_buf = (char *)return_desc->buffer.pointer;
-
- /* Concatenate the buffers */
-
- memcpy(new_buf, operand0->buffer.pointer,
- operand0->buffer.length);
- memcpy(new_buf + operand0->buffer.length,
- local_operand1->buffer.pointer,
- local_operand1->buffer.length);
- break;
-
- default:
-
- /* Invalid object type, should not happen here */
-
- ACPI_ERROR((AE_INFO, "Invalid object type: 0x%X",
- operand0->common.type));
- status = AE_AML_INTERNAL;
- goto cleanup;
- }
-
- *actual_return_desc = return_desc;
-
-cleanup:
- if (local_operand1 != operand1) {
- acpi_ut_remove_reference(local_operand1);
- }
- return_ACPI_STATUS(status);
-}
-
-/*******************************************************************************
- *
* FUNCTION: acpi_ex_do_math_op
*
* PARAMETERS: opcode - AML opcode
diff --git a/drivers/acpi/acpica/exnames.c b/drivers/acpi/acpica/exnames.c
index 27c11ab5eb04..3d6af93fe561 100644
--- a/drivers/acpi/acpica/exnames.c
+++ b/drivers/acpi/acpica/exnames.c
@@ -178,7 +178,7 @@ static acpi_status acpi_ex_name_segment(u8 ** in_aml_address, char *name_string)
for (index = 0;
(index < ACPI_NAME_SIZE)
- && (acpi_ut_valid_acpi_char(*aml_address, 0)); index++) {
+ && (acpi_ut_valid_name_char(*aml_address, 0)); index++) {
char_buf[index] = *aml_address++;
ACPI_DEBUG_PRINT((ACPI_DB_LOAD, "%c\n", char_buf[index]));
}
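The loop above now calls acpi_ut_valid_name_char() (declared alongside acpi_ut_valid_nameseg() in the acutils.h hunk earlier), reflecting the usual ACPI NameSeg rules: four characters, a leading 'A'-'Z' or '_', and trailing characters that may additionally be '0'-'9'. A standalone sketch of that check, not the ACPICA implementation itself:

        /* Position 0 is the lead character of a NameSeg. */
        static int valid_name_char(char c, unsigned int position)
        {
                if ((c >= 'A' && c <= 'Z') || c == '_') {
                        return 1;
                }

                /* Digits are allowed anywhere except the lead position */
                return position > 0 && c >= '0' && c <= '9';
        }

        static int valid_nameseg(const char name[4])
        {
                unsigned int i;

                for (i = 0; i < 4; i++) {
                        if (!valid_name_char(name[i], i)) {
                                return 0;
                        }
                }

                return 1;
        }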
diff --git a/drivers/acpi/acpica/exoparg3.c b/drivers/acpi/acpica/exoparg3.c
index 5aa21c4eda1d..69e4e269ad2f 100644
--- a/drivers/acpi/acpica/exoparg3.c
+++ b/drivers/acpi/acpica/exoparg3.c
@@ -184,7 +184,7 @@ acpi_status acpi_ex_opcode_3A_1T_1R(struct acpi_walk_state *walk_state)
/* Get the Integer values from the objects */
index = operand[1]->integer.value;
- length = (acpi_size) operand[2]->integer.value;
+ length = (acpi_size)operand[2]->integer.value;
/*
* If the index is beyond the length of the String/Buffer, or if the
@@ -198,8 +198,8 @@ acpi_status acpi_ex_opcode_3A_1T_1R(struct acpi_walk_state *walk_state)
else if ((index + length) > operand[0]->string.length) {
length =
- (acpi_size) operand[0]->string.length -
- (acpi_size) index;
+ (acpi_size)operand[0]->string.length -
+ (acpi_size)index;
}
/* Strings always have a sub-pointer, not so for buffers */
@@ -209,7 +209,7 @@ acpi_status acpi_ex_opcode_3A_1T_1R(struct acpi_walk_state *walk_state)
/* Always allocate a new buffer for the String */
- buffer = ACPI_ALLOCATE_ZEROED((acpi_size) length + 1);
+ buffer = ACPI_ALLOCATE_ZEROED((acpi_size)length + 1);
if (!buffer) {
status = AE_NO_MEMORY;
goto cleanup;
diff --git a/drivers/acpi/acpica/exoparg6.c b/drivers/acpi/acpica/exoparg6.c
index e2b63483857f..786d53b0bb37 100644
--- a/drivers/acpi/acpica/exoparg6.c
+++ b/drivers/acpi/acpica/exoparg6.c
@@ -207,7 +207,7 @@ acpi_ex_do_match(u32 match_op,
*
******************************************************************************/
-acpi_status acpi_ex_opcode_6A_0T_1R(struct acpi_walk_state * walk_state)
+acpi_status acpi_ex_opcode_6A_0T_1R(struct acpi_walk_state *walk_state)
{
union acpi_operand_object **operand = &walk_state->operands[0];
union acpi_operand_object *return_desc = NULL;
diff --git a/drivers/acpi/acpica/exregion.c b/drivers/acpi/acpica/exregion.c
index 076074daf2b6..31b381cae94d 100644
--- a/drivers/acpi/acpica/exregion.c
+++ b/drivers/acpi/acpica/exregion.c
@@ -325,15 +325,15 @@ acpi_ex_system_io_space_handler(u32 function,
switch (function) {
case ACPI_READ:
- status = acpi_hw_read_port((acpi_io_address) address,
+ status = acpi_hw_read_port((acpi_io_address)address,
&value32, bit_width);
*value = value32;
break;
case ACPI_WRITE:
- status = acpi_hw_write_port((acpi_io_address) address,
- (u32) * value, bit_width);
+ status = acpi_hw_write_port((acpi_io_address)address,
+ (u32)*value, bit_width);
break;
default:
diff --git a/drivers/acpi/acpica/exresnte.c b/drivers/acpi/acpica/exresnte.c
index c1e8bfb0f7f4..a183cb740d24 100644
--- a/drivers/acpi/acpica/exresnte.c
+++ b/drivers/acpi/acpica/exresnte.c
@@ -93,7 +93,7 @@ acpi_ex_resolve_node_to_value(struct acpi_namespace_node **object_ptr,
*/
node = *object_ptr;
source_desc = acpi_ns_get_attached_object(node);
- entry_type = acpi_ns_get_type((acpi_handle) node);
+ entry_type = acpi_ns_get_type((acpi_handle)node);
ACPI_DEBUG_PRINT((ACPI_DB_EXEC, "Entry=%p SourceDesc=%p [%s]\n",
node, source_desc,
@@ -106,7 +106,7 @@ acpi_ex_resolve_node_to_value(struct acpi_namespace_node **object_ptr,
node = ACPI_CAST_PTR(struct acpi_namespace_node, node->object);
source_desc = acpi_ns_get_attached_object(node);
- entry_type = acpi_ns_get_type((acpi_handle) node);
+ entry_type = acpi_ns_get_type((acpi_handle)node);
*object_ptr = node;
}
diff --git a/drivers/acpi/acpica/exresolv.c b/drivers/acpi/acpica/exresolv.c
index fedacf13dc36..e1d3878be2c6 100644
--- a/drivers/acpi/acpica/exresolv.c
+++ b/drivers/acpi/acpica/exresolv.c
@@ -334,7 +334,7 @@ acpi_ex_resolve_object_to_value(union acpi_operand_object **stack_ptr,
acpi_status
acpi_ex_resolve_multiple(struct acpi_walk_state *walk_state,
union acpi_operand_object *operand,
- acpi_object_type * return_type,
+ acpi_object_type *return_type,
union acpi_operand_object **return_desc)
{
union acpi_operand_object *obj_desc = ACPI_CAST_PTR(void, operand);
diff --git a/drivers/acpi/acpica/exresop.c b/drivers/acpi/acpica/exresop.c
index cc2c26c46a6d..27b41fd7542d 100644
--- a/drivers/acpi/acpica/exresop.c
+++ b/drivers/acpi/acpica/exresop.c
@@ -131,8 +131,8 @@ acpi_ex_check_object_type(acpi_object_type type_needed,
acpi_status
acpi_ex_resolve_operands(u16 opcode,
- union acpi_operand_object ** stack_ptr,
- struct acpi_walk_state * walk_state)
+ union acpi_operand_object **stack_ptr,
+ struct acpi_walk_state *walk_state)
{
union acpi_operand_object *obj_desc;
acpi_status status = AE_OK;
diff --git a/drivers/acpi/acpica/exstorob.c b/drivers/acpi/acpica/exstorob.c
index 28b724827f0f..1dab82746d06 100644
--- a/drivers/acpi/acpica/exstorob.c
+++ b/drivers/acpi/acpica/exstorob.c
@@ -188,7 +188,7 @@ acpi_ex_store_string_to_string(union acpi_operand_object *source_desc,
* Clear old string and copy in the new one
*/
memset(target_desc->string.pointer, 0,
- (acpi_size) target_desc->string.length + 1);
+ (acpi_size)target_desc->string.length + 1);
memcpy(target_desc->string.pointer, buffer, length);
} else {
/*
@@ -204,7 +204,7 @@ acpi_ex_store_string_to_string(union acpi_operand_object *source_desc,
}
target_desc->string.pointer =
- ACPI_ALLOCATE_ZEROED((acpi_size) length + 1);
+ ACPI_ALLOCATE_ZEROED((acpi_size)length + 1);
if (!target_desc->string.pointer) {
return_ACPI_STATUS(AE_NO_MEMORY);
diff --git a/drivers/acpi/acpica/exutils.c b/drivers/acpi/acpica/exutils.c
index 4d44bc1cb2ca..425f13372e68 100644
--- a/drivers/acpi/acpica/exutils.c
+++ b/drivers/acpi/acpica/exutils.c
@@ -301,8 +301,8 @@ static u32 acpi_ex_digits_needed(u64 value, u32 base)
*
* FUNCTION: acpi_ex_eisa_id_to_string
*
- * PARAMETERS: compressed_id - EISAID to be converted
- * out_string - Where to put the converted string (8 bytes)
+ * PARAMETERS: out_string - Where to put the converted string (8 bytes)
+ * compressed_id - EISAID to be converted
*
* RETURN: None
*
@@ -354,7 +354,7 @@ void acpi_ex_eisa_id_to_string(char *out_string, u64 compressed_id)
* possible 64-bit integer.
* value - Value to be converted
*
- * RETURN: None, string
+ * RETURN: Converted string in out_string
*
* DESCRIPTION: Convert a 64-bit integer to decimal string representation.
* Assumes string buffer is large enough to hold the string. The
@@ -384,9 +384,9 @@ void acpi_ex_integer_to_string(char *out_string, u64 value)
* FUNCTION: acpi_ex_pci_cls_to_string
*
* PARAMETERS: out_string - Where to put the converted string (7 bytes)
- * PARAMETERS: class_code - PCI class code to be converted (3 bytes)
+ * class_code - PCI class code to be converted (3 bytes)
*
- * RETURN: None
+ * RETURN: Converted string in out_string
*
 * DESCRIPTION: Convert a 3-byte PCI class code to string representation.
* Return buffer must be large enough to hold the string. The
@@ -417,7 +417,7 @@ void acpi_ex_pci_cls_to_string(char *out_string, u8 class_code[3])
*
* PARAMETERS: space_id - ID to be validated
*
- * RETURN: TRUE if valid/supported ID.
+ * RETURN: TRUE if space_id is a valid/supported ID.
*
* DESCRIPTION: Validate an operation region space_ID.
*
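The exutils.c header fixes above describe acpi_ex_pci_cls_to_string(): a 3-byte PCI class code is rendered into a 7-byte caller buffer, presumably six hex digits plus a terminating NUL. A minimal standalone sketch of that conversion, assuming snprintf() formatting and a hypothetical helper name rather than the ACPICA-internal routines:

#include <stdio.h>

/* Illustrative only: six uppercase hex digits (base class, sub-class,
 * programming interface) plus a NUL terminator fill the 7-byte buffer. */
static void pci_cls_to_string_sketch(char out_string[7],
				     const unsigned char class_code[3])
{
	snprintf(out_string, 7, "%02X%02X%02X",
		 class_code[0], class_code[1], class_code[2]);
}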
diff --git a/drivers/acpi/acpica/hwgpe.c b/drivers/acpi/acpica/hwgpe.c
index 1c4f4518611a..bdecd5e76e87 100644
--- a/drivers/acpi/acpica/hwgpe.c
+++ b/drivers/acpi/acpica/hwgpe.c
@@ -166,7 +166,7 @@ acpi_hw_low_set_gpe(struct acpi_gpe_event_info *gpe_event_info, u32 action)
*
******************************************************************************/
-acpi_status acpi_hw_clear_gpe(struct acpi_gpe_event_info * gpe_event_info)
+acpi_status acpi_hw_clear_gpe(struct acpi_gpe_event_info *gpe_event_info)
{
struct acpi_gpe_register_info *gpe_register_info;
acpi_status status;
@@ -206,7 +206,7 @@ acpi_status acpi_hw_clear_gpe(struct acpi_gpe_event_info * gpe_event_info)
******************************************************************************/
acpi_status
-acpi_hw_get_gpe_status(struct acpi_gpe_event_info * gpe_event_info,
+acpi_hw_get_gpe_status(struct acpi_gpe_event_info *gpe_event_info,
acpi_event_status *event_status)
{
u32 in_byte;
@@ -391,7 +391,7 @@ acpi_hw_clear_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
acpi_status
acpi_hw_enable_runtime_gpe_block(struct acpi_gpe_xrupt_info *gpe_xrupt_info,
- struct acpi_gpe_block_info * gpe_block,
+ struct acpi_gpe_block_info *gpe_block,
void *context)
{
u32 i;
diff --git a/drivers/acpi/acpica/hwregs.c b/drivers/acpi/acpica/hwregs.c
index 5ba0498412fd..0f18dbc9a37f 100644
--- a/drivers/acpi/acpica/hwregs.c
+++ b/drivers/acpi/acpica/hwregs.c
@@ -51,6 +51,10 @@ ACPI_MODULE_NAME("hwregs")
#if (!ACPI_REDUCED_HARDWARE)
/* Local Prototypes */
+static u8
+acpi_hw_get_access_bit_width(struct acpi_generic_address *reg,
+ u8 max_bit_width);
+
static acpi_status
acpi_hw_read_multiple(u32 *value,
struct acpi_generic_address *register_a,
@@ -65,6 +69,48 @@ acpi_hw_write_multiple(u32 value,
/******************************************************************************
*
+ * FUNCTION: acpi_hw_get_access_bit_width
+ *
+ * PARAMETERS: reg - GAS register structure
+ * max_bit_width - Max bit_width supported (32 or 64)
+ *
+ * RETURN: Access bit width (8, 16, 32, or 64)
+ *
+ * DESCRIPTION: Obtain optimal access bit width
+ *
+ ******************************************************************************/
+
+static u8
+acpi_hw_get_access_bit_width(struct acpi_generic_address *reg, u8 max_bit_width)
+{
+ u64 address;
+
+ if (!reg->access_width) {
+ /*
+ * Detect old register descriptors where only the bit_width field
+ * makes sense. The target address is copied to handle possible
+ * alignment issues.
+ */
+ ACPI_MOVE_64_TO_64(&address, &reg->address);
+ if (!reg->bit_offset && reg->bit_width &&
+ ACPI_IS_POWER_OF_TWO(reg->bit_width) &&
+ ACPI_IS_ALIGNED(reg->bit_width, 8) &&
+ ACPI_IS_ALIGNED(address, reg->bit_width)) {
+ return (reg->bit_width);
+ } else {
+ if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_IO) {
+ return (32);
+ } else {
+ return (max_bit_width);
+ }
+ }
+ } else {
+ return (1 << (reg->access_width + 2));
+ }
+}
+
+/******************************************************************************
+ *
* FUNCTION: acpi_hw_validate_register
*
* PARAMETERS: reg - GAS register structure
@@ -83,6 +129,8 @@ acpi_status
acpi_hw_validate_register(struct acpi_generic_address *reg,
u8 max_bit_width, u64 *address)
{
+ u8 bit_width;
+ u8 access_width;
/* Must have a valid pointer to a GAS structure */
@@ -109,23 +157,25 @@ acpi_hw_validate_register(struct acpi_generic_address *reg,
return (AE_SUPPORT);
}
- /* Validate the bit_width */
+ /* Validate the access_width */
- if ((reg->bit_width != 8) &&
- (reg->bit_width != 16) &&
- (reg->bit_width != 32) && (reg->bit_width != max_bit_width)) {
+ if (reg->access_width > 4) {
ACPI_ERROR((AE_INFO,
- "Unsupported register bit width: 0x%X",
- reg->bit_width));
+ "Unsupported register access width: 0x%X",
+ reg->access_width));
return (AE_SUPPORT);
}
- /* Validate the bit_offset. Just a warning for now. */
+ /* Validate the bit_width; convert access_width into a number of bits */
- if (reg->bit_offset != 0) {
+ access_width = acpi_hw_get_access_bit_width(reg, max_bit_width);
+ bit_width =
+ ACPI_ROUND_UP(reg->bit_offset + reg->bit_width, access_width);
+ if (max_bit_width < bit_width) {
ACPI_WARNING((AE_INFO,
- "Unsupported register bit offset: 0x%X",
- reg->bit_offset));
+ "Requested bit width 0x%X is smaller than register bit width 0x%X",
+ max_bit_width, bit_width));
+ return (AE_SUPPORT);
}
return (AE_OK);
@@ -145,17 +195,19 @@ acpi_hw_validate_register(struct acpi_generic_address *reg,
* 64-bit values is not needed.
*
* LIMITATIONS: <These limitations also apply to acpi_hw_write>
- * bit_width must be exactly 8, 16, or 32.
* space_ID must be system_memory or system_IO.
- * bit_offset and access_width are currently ignored, as there has
- * not been a need to implement these.
*
******************************************************************************/
acpi_status acpi_hw_read(u32 *value, struct acpi_generic_address *reg)
{
u64 address;
+ u8 access_width;
+ u32 bit_width;
+ u8 bit_offset;
u64 value64;
+ u32 value32;
+ u8 index;
acpi_status status;
ACPI_FUNCTION_NAME(hw_read);
@@ -167,28 +219,75 @@ acpi_status acpi_hw_read(u32 *value, struct acpi_generic_address *reg)
return (status);
}
- /* Initialize entire 32-bit return value to zero */
-
+ /*
+ * Initialize the entire 32-bit return value to zero and convert
+ * access_width into a number of bits.
+ */
*value = 0;
+ access_width = acpi_hw_get_access_bit_width(reg, 32);
+ bit_width = reg->bit_offset + reg->bit_width;
+ bit_offset = reg->bit_offset;
/*
* Two address spaces supported: Memory or IO. PCI_Config is
* not supported here because the GAS structure is insufficient
*/
- if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) {
- status = acpi_os_read_memory((acpi_physical_address)
- address, &value64, reg->bit_width);
+ index = 0;
+ while (bit_width) {
+ if (bit_offset >= access_width) {
+ value32 = 0;
+ bit_offset -= access_width;
+ } else {
+ if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) {
+ status =
+ acpi_os_read_memory((acpi_physical_address)
+ address +
+ index *
+ ACPI_DIV_8
+ (access_width),
+ &value64, access_width);
+ value32 = (u32)value64;
+ } else { /* ACPI_ADR_SPACE_SYSTEM_IO, validated earlier */
+
+ status = acpi_hw_read_port((acpi_io_address)
+ address +
+ index *
+ ACPI_DIV_8
+ (access_width),
+ &value32,
+ access_width);
+ }
+
+ /*
+ * Use offset style bit masks because:
+ * bit_offset < access_width/bit_width < access_width, and
+ * access_width is ensured to be less than 32-bits by
+ * acpi_hw_validate_register().
+ */
+ if (bit_offset) {
+ value32 &= ACPI_MASK_BITS_BELOW(bit_offset);
+ bit_offset = 0;
+ }
+ if (bit_width < access_width) {
+ value32 &= ACPI_MASK_BITS_ABOVE(bit_width);
+ }
+ }
- *value = (u32)value64;
- } else { /* ACPI_ADR_SPACE_SYSTEM_IO, validated earlier */
+ /*
+ * Use offset style bit writes because "Index * AccessWidth" is
+ * ensured to be less than 32-bits by acpi_hw_validate_register().
+ */
+ ACPI_SET_BITS(value, index * access_width,
+ ACPI_MASK_BITS_ABOVE_32(access_width), value32);
- status = acpi_hw_read_port((acpi_io_address)
- address, value, reg->bit_width);
+ bit_width -=
+ bit_width > access_width ? access_width : bit_width;
+ index++;
}
ACPI_DEBUG_PRINT((ACPI_DB_IO,
"Read: %8.8X width %2d from %8.8X%8.8X (%s)\n",
- *value, reg->bit_width, ACPI_FORMAT_UINT64(address),
+ *value, access_width, ACPI_FORMAT_UINT64(address),
acpi_ut_get_region_name(reg->space_id)));
return (status);
@@ -212,6 +311,12 @@ acpi_status acpi_hw_read(u32 *value, struct acpi_generic_address *reg)
acpi_status acpi_hw_write(u32 value, struct acpi_generic_address *reg)
{
u64 address;
+ u8 access_width;
+ u32 bit_width;
+ u8 bit_offset;
+ u64 value64;
+ u32 new_value32, old_value32;
+ u8 index;
acpi_status status;
ACPI_FUNCTION_NAME(hw_write);
@@ -223,23 +328,145 @@ acpi_status acpi_hw_write(u32 value, struct acpi_generic_address *reg)
return (status);
}
+ /* Convert access_width into a number of bits */
+
+ access_width = acpi_hw_get_access_bit_width(reg, 32);
+ bit_width = reg->bit_offset + reg->bit_width;
+ bit_offset = reg->bit_offset;
+
/*
* Two address spaces supported: Memory or IO. PCI_Config is
* not supported here because the GAS structure is insufficient
*/
- if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) {
- status = acpi_os_write_memory((acpi_physical_address)
- address, (u64)value,
- reg->bit_width);
- } else { /* ACPI_ADR_SPACE_SYSTEM_IO, validated earlier */
-
- status = acpi_hw_write_port((acpi_io_address)
- address, value, reg->bit_width);
+ index = 0;
+ while (bit_width) {
+ /*
+ * Use offset style bit reads because "Index * AccessWidth" is
+ * ensured to be less than 32-bits by acpi_hw_validate_register().
+ */
+ new_value32 = ACPI_GET_BITS(&value, index * access_width,
+ ACPI_MASK_BITS_ABOVE_32
+ (access_width));
+
+ if (bit_offset >= access_width) {
+ bit_offset -= access_width;
+ } else {
+ /*
+ * Use offset style bit masks because access_width is ensured
+ * to be less than 32-bits by acpi_hw_validate_register() and
+ * bit_offset/bit_width is less than access_width here.
+ */
+ if (bit_offset) {
+ new_value32 &= ACPI_MASK_BITS_BELOW(bit_offset);
+ }
+ if (bit_width < access_width) {
+ new_value32 &= ACPI_MASK_BITS_ABOVE(bit_width);
+ }
+
+ if (reg->space_id == ACPI_ADR_SPACE_SYSTEM_MEMORY) {
+ if (bit_offset || bit_width < access_width) {
+ /*
+ * Read old values in order not to modify the bits that
+ * are beyond the register bit_width/bit_offset setting.
+ */
+ status =
+ acpi_os_read_memory((acpi_physical_address)
+ address +
+ index *
+ ACPI_DIV_8
+ (access_width),
+ &value64,
+ access_width);
+ old_value32 = (u32)value64;
+
+ /*
+ * Use offset style bit masks because access_width is
+ * ensured to be less than 32-bits by
+ * acpi_hw_validate_register() and bit_offset/bit_width is
+ * less than access_width here.
+ */
+ if (bit_offset) {
+ old_value32 &=
+ ACPI_MASK_BITS_ABOVE
+ (bit_offset);
+ bit_offset = 0;
+ }
+ if (bit_width < access_width) {
+ old_value32 &=
+ ACPI_MASK_BITS_BELOW
+ (bit_width);
+ }
+
+ new_value32 |= old_value32;
+ }
+
+ value64 = (u64)new_value32;
+ status =
+ acpi_os_write_memory((acpi_physical_address)
+ address +
+ index *
+ ACPI_DIV_8
+ (access_width),
+ value64, access_width);
+ } else { /* ACPI_ADR_SPACE_SYSTEM_IO, validated earlier */
+
+ if (bit_offset || bit_width < access_width) {
+ /*
+ * Read old values in order not to modify the bits that
+ * are beyond the register bit_width/bit_offset setting.
+ */
+ status =
+ acpi_hw_read_port((acpi_io_address)
+ address +
+ index *
+ ACPI_DIV_8
+ (access_width),
+ &old_value32,
+ access_width);
+
+ /*
+ * Use offset style bit masks because access_width is
+ * ensured to be less than 32-bits by
+ * acpi_hw_validate_register() and bit_offset/bit_width is
+ * less than access_width here.
+ */
+ if (bit_offset) {
+ old_value32 &=
+ ACPI_MASK_BITS_ABOVE
+ (bit_offset);
+ bit_offset = 0;
+ }
+ if (bit_width < access_width) {
+ old_value32 &=
+ ACPI_MASK_BITS_BELOW
+ (bit_width);
+ }
+
+ new_value32 |= old_value32;
+ }
+
+ status = acpi_hw_write_port((acpi_io_address)
+ address +
+ index *
+ ACPI_DIV_8
+ (access_width),
+ new_value32,
+ access_width);
+ }
+ }
+
+ /*
+ * Index * access_width is ensured to be less than 32-bits by
+ * acpi_hw_validate_register().
+ */
+ bit_width -=
+ bit_width > access_width ? access_width : bit_width;
+ index++;
}
ACPI_DEBUG_PRINT((ACPI_DB_IO,
"Wrote: %8.8X width %2d to %8.8X%8.8X (%s)\n",
- value, reg->bit_width, ACPI_FORMAT_UINT64(address),
+ value, access_width, ACPI_FORMAT_UINT64(address),
acpi_ut_get_region_name(reg->space_id)));
return (status);
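The new acpi_hw_get_access_bit_width() and the chunked read/write loops above hinge on two ideas: a non-zero GAS access_width encodes the access size as 1 << (access_width + 2) bits, and a field described by bit_offset/bit_width is assembled from access-sized chunks using below/above bit masks. The following is a compact standalone sketch of the read side only, with a hypothetical read_chunk callback standing in for the memory/port accessors; it is illustrative, not the ACPICA code.

#include <stdint.h>

/* Hypothetical accessor: returns one access-sized chunk at the given address */
typedef uint32_t (*read_chunk_fn)(uint64_t address);

static uint8_t gas_access_bits(uint8_t access_width, uint8_t fallback_bits)
{
	/* access_width == 0 means "not specified": use the legacy fallback */
	return access_width ? (uint8_t)(1u << (access_width + 2)) : fallback_bits;
}

static uint32_t read_gas_field_sketch(uint64_t address, uint8_t bit_offset,
				      uint8_t bit_width, uint8_t access_bits,
				      read_chunk_fn read_chunk)
{
	uint32_t value = 0;
	uint8_t remaining = bit_offset + bit_width;	/* total bits spanned, <= 32 */
	uint8_t index = 0;

	while (remaining) {
		uint32_t chunk = 0;

		if (bit_offset >= access_bits) {
			/* Field starts in a later chunk; this one contributes zeros */
			bit_offset -= access_bits;
		} else {
			chunk = read_chunk(address + index * (access_bits / 8u));
			if (bit_offset) {
				chunk &= ~0u << bit_offset;	/* clear bits below the field */
				bit_offset = 0;
			}
			if (remaining < access_bits)
				chunk &= ~(~0u << remaining);	/* clear bits above the field */
		}
		/* As in the diff above, the field keeps its original bit position */
		value |= chunk << (index * access_bits);
		remaining -= remaining > access_bits ? access_bits : remaining;
		index++;
	}
	return value;
}

The write path in the hunk follows the same structure, but additionally reads back the old chunk first whenever bit_offset or bit_width would leave neighbouring bits untouched, so only the described field is modified.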
diff --git a/drivers/acpi/acpica/hwxface.c b/drivers/acpi/acpica/hwxface.c
index a01ddb393a55..98c26ff39409 100644
--- a/drivers/acpi/acpica/hwxface.c
+++ b/drivers/acpi/acpica/hwxface.c
@@ -91,10 +91,9 @@ acpi_status acpi_reset(void)
* compatibility with other ACPI implementations that have allowed
* BIOS code with bad register width values to go unnoticed.
*/
- status =
- acpi_os_write_port((acpi_io_address) reset_reg->address,
- acpi_gbl_FADT.reset_value,
- ACPI_RESET_REGISTER_WIDTH);
+ status = acpi_os_write_port((acpi_io_address)reset_reg->address,
+ acpi_gbl_FADT.reset_value,
+ ACPI_RESET_REGISTER_WIDTH);
} else {
/* Write the reset value to the reset register */
@@ -504,9 +503,7 @@ acpi_get_sleep_type_data(u8 sleep_state, u8 *sleep_type_a, u8 *sleep_type_b)
* Evaluate the \_Sx namespace object containing the register values
* for this state
*/
- info->relative_pathname = ACPI_CAST_PTR(char,
- acpi_gbl_sleep_state_names
- [sleep_state]);
+ info->relative_pathname = acpi_gbl_sleep_state_names[sleep_state];
status = acpi_ns_evaluate(info);
if (ACPI_FAILURE(status)) {
diff --git a/drivers/acpi/acpica/nsaccess.c b/drivers/acpi/acpica/nsaccess.c
index 697af810e5ad..426a6307eafa 100644
--- a/drivers/acpi/acpica/nsaccess.c
+++ b/drivers/acpi/acpica/nsaccess.c
@@ -107,9 +107,10 @@ acpi_status acpi_ns_root_initialize(void)
continue;
}
- status = acpi_ns_lookup(NULL, init_val->name, init_val->type,
- ACPI_IMODE_LOAD_PASS2,
- ACPI_NS_NO_UPSEARCH, NULL, &new_node);
+ status =
+ acpi_ns_lookup(NULL, (char *)init_val->name, init_val->type,
+ ACPI_IMODE_LOAD_PASS2, ACPI_NS_NO_UPSEARCH,
+ NULL, &new_node);
if (ACPI_FAILURE(status)) {
ACPI_EXCEPTION((AE_INFO, status,
"Could not create predefined name %s",
diff --git a/drivers/acpi/acpica/nsconvert.c b/drivers/acpi/acpica/nsconvert.c
index 878e8fb6a64c..c803bda7915c 100644
--- a/drivers/acpi/acpica/nsconvert.c
+++ b/drivers/acpi/acpica/nsconvert.c
@@ -79,7 +79,8 @@ acpi_ns_convert_to_integer(union acpi_operand_object *original_object,
/* String-to-Integer conversion */
status = acpi_ut_strtoul64(original_object->string.pointer,
- ACPI_ANY_BASE, &value);
+ ACPI_ANY_BASE,
+ acpi_gbl_integer_byte_width, &value);
if (ACPI_FAILURE(status)) {
return (status);
}
@@ -317,7 +318,7 @@ acpi_ns_convert_to_buffer(union acpi_operand_object *original_object,
******************************************************************************/
acpi_status
-acpi_ns_convert_to_unicode(struct acpi_namespace_node * scope,
+acpi_ns_convert_to_unicode(struct acpi_namespace_node *scope,
union acpi_operand_object *original_object,
union acpi_operand_object **return_object)
{
@@ -384,7 +385,7 @@ acpi_ns_convert_to_unicode(struct acpi_namespace_node * scope,
******************************************************************************/
acpi_status
-acpi_ns_convert_to_resource(struct acpi_namespace_node * scope,
+acpi_ns_convert_to_resource(struct acpi_namespace_node *scope,
union acpi_operand_object *original_object,
union acpi_operand_object **return_object)
{
@@ -463,7 +464,7 @@ acpi_ns_convert_to_resource(struct acpi_namespace_node * scope,
******************************************************************************/
acpi_status
-acpi_ns_convert_to_reference(struct acpi_namespace_node * scope,
+acpi_ns_convert_to_reference(struct acpi_namespace_node *scope,
union acpi_operand_object *original_object,
union acpi_operand_object **return_object)
{
diff --git a/drivers/acpi/acpica/nsdump.c b/drivers/acpi/acpica/nsdump.c
index af236e348294..ce1f8605d996 100644
--- a/drivers/acpi/acpica/nsdump.c
+++ b/drivers/acpi/acpica/nsdump.c
@@ -81,7 +81,7 @@ acpi_ns_get_max_depth(acpi_handle obj_handle,
*
******************************************************************************/
-void acpi_ns_print_pathname(u32 num_segments, char *pathname)
+void acpi_ns_print_pathname(u32 num_segments, const char *pathname)
{
u32 i;
@@ -114,6 +114,9 @@ void acpi_ns_print_pathname(u32 num_segments, char *pathname)
acpi_os_printf("]\n");
}
+#ifdef ACPI_OBSOLETE_FUNCTIONS
+/* Not used at this time, perhaps later */
+
/*******************************************************************************
*
* FUNCTION: acpi_ns_dump_pathname
@@ -131,7 +134,8 @@ void acpi_ns_print_pathname(u32 num_segments, char *pathname)
******************************************************************************/
void
-acpi_ns_dump_pathname(acpi_handle handle, char *msg, u32 level, u32 component)
+acpi_ns_dump_pathname(acpi_handle handle,
+ const char *msg, u32 level, u32 component)
{
ACPI_FUNCTION_TRACE(ns_dump_pathname);
@@ -148,6 +152,7 @@ acpi_ns_dump_pathname(acpi_handle handle, char *msg, u32 level, u32 component)
acpi_os_printf("\n");
return_VOID;
}
+#endif
/*******************************************************************************
*
diff --git a/drivers/acpi/acpica/nsinit.c b/drivers/acpi/acpica/nsinit.c
index d4aa8b696ee9..36643a8cf65a 100644
--- a/drivers/acpi/acpica/nsinit.c
+++ b/drivers/acpi/acpica/nsinit.c
@@ -140,6 +140,7 @@ acpi_status acpi_ns_initialize_devices(u32 flags)
{
acpi_status status = AE_OK;
struct acpi_device_walk_info info;
+ acpi_handle handle;
ACPI_FUNCTION_TRACE(ns_initialize_devices);
@@ -190,6 +191,27 @@ acpi_status acpi_ns_initialize_devices(u32 flags)
if (ACPI_SUCCESS(status)) {
info.num_INI++;
}
+
+ /*
+ * Execute \_SB._INI.
+ * There appears to be a strict order requirement for \_SB._INI,
+ * which should be evaluated before any _REG evaluations.
+ */
+ status = acpi_get_handle(NULL, "\\_SB", &handle);
+ if (ACPI_SUCCESS(status)) {
+ memset(info.evaluate_info, 0,
+ sizeof(struct acpi_evaluate_info));
+ info.evaluate_info->prefix_node = handle;
+ info.evaluate_info->relative_pathname =
+ METHOD_NAME__INI;
+ info.evaluate_info->parameters = NULL;
+ info.evaluate_info->flags = ACPI_IGNORE_RETURN_VALUE;
+
+ status = acpi_ns_evaluate(info.evaluate_info);
+ if (ACPI_SUCCESS(status)) {
+ info.num_INI++;
+ }
+ }
}
/*
@@ -198,6 +220,12 @@ acpi_status acpi_ns_initialize_devices(u32 flags)
* Note: Any objects accessed by the _REG methods will be automatically
* initialized, even if they contain executable AML (see the call to
* acpi_ns_initialize_objects below).
+ *
+ * Note: According to the ACPI specification, we do not actually need to
+ * execute _REG for system_memory/system_io operation regions. However, for
+ * PCI_Config operation regions, _REG must be evaluated for those on a PCI
+ * root bus that does not contain a _BBN object, so this code is kept here
+ * in order not to break things.
*/
if (!(flags & ACPI_NO_ADDRESS_SPACE_INIT)) {
ACPI_DEBUG_PRINT((ACPI_DB_EXEC,
@@ -592,33 +620,37 @@ acpi_ns_init_one_device(acpi_handle obj_handle,
* Note: We know there is an _INI within this subtree, but it may not be
* under this particular device, it may be lower in the branch.
*/
- ACPI_DEBUG_EXEC(acpi_ut_display_init_pathname
- (ACPI_TYPE_METHOD, device_node, METHOD_NAME__INI));
-
- memset(info, 0, sizeof(struct acpi_evaluate_info));
- info->prefix_node = device_node;
- info->relative_pathname = METHOD_NAME__INI;
- info->parameters = NULL;
- info->flags = ACPI_IGNORE_RETURN_VALUE;
-
- status = acpi_ns_evaluate(info);
-
- if (ACPI_SUCCESS(status)) {
- walk_info->num_INI++;
- }
+ if (!ACPI_COMPARE_NAME(device_node->name.ascii, "_SB_") ||
+ device_node->parent != acpi_gbl_root_node) {
+ ACPI_DEBUG_EXEC(acpi_ut_display_init_pathname
+ (ACPI_TYPE_METHOD, device_node,
+ METHOD_NAME__INI));
+
+ memset(info, 0, sizeof(struct acpi_evaluate_info));
+ info->prefix_node = device_node;
+ info->relative_pathname = METHOD_NAME__INI;
+ info->parameters = NULL;
+ info->flags = ACPI_IGNORE_RETURN_VALUE;
+
+ status = acpi_ns_evaluate(info);
+ if (ACPI_SUCCESS(status)) {
+ walk_info->num_INI++;
+ }
#ifdef ACPI_DEBUG_OUTPUT
- else if (status != AE_NOT_FOUND) {
+ else if (status != AE_NOT_FOUND) {
- /* Ignore error and move on to next device */
+ /* Ignore error and move on to next device */
- char *scope_name =
- acpi_ns_get_normalized_pathname(device_node, TRUE);
+ char *scope_name =
+ acpi_ns_get_normalized_pathname(device_node, TRUE);
- ACPI_EXCEPTION((AE_INFO, status, "during %s._INI execution",
- scope_name));
- ACPI_FREE(scope_name);
- }
+ ACPI_EXCEPTION((AE_INFO, status,
+ "during %s._INI execution",
+ scope_name));
+ ACPI_FREE(scope_name);
+ }
#endif
+ }
/* Ignore errors from above */
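The nsinit.c hunks above do two related things: \_SB._INI is now evaluated explicitly before any _REG methods, and the generic per-device walk skips \_SB itself so that method is not run twice. A toy sketch of that skip condition, using hypothetical structure and function names rather than the ACPICA types:

#include <string.h>
#include <stdbool.h>

struct node_sketch {
	char name[5];                  /* 4-character ACPI name segment + NUL */
	const struct node_sketch *parent;
};

static bool should_run_device_ini(const struct node_sketch *node,
				  const struct node_sketch *root)
{
	/* \_SB directly under the root is handled separately, earlier */
	return strncmp(node->name, "_SB_", 4) != 0 || node->parent != root;
}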
diff --git a/drivers/acpi/acpica/nsload.c b/drivers/acpi/acpica/nsload.c
index 75cdb8790d49..b5e2b0ada0ab 100644
--- a/drivers/acpi/acpica/nsload.c
+++ b/drivers/acpi/acpica/nsload.c
@@ -123,8 +123,8 @@ acpi_ns_load_table(u32 table_index, struct acpi_namespace_node *node)
(void)acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
acpi_ns_delete_namespace_by_owner(acpi_gbl_root_table_list.
tables[table_index].owner_id);
- acpi_tb_release_owner_id(table_index);
+ acpi_tb_release_owner_id(table_index);
return_ACPI_STATUS(status);
}
diff --git a/drivers/acpi/acpica/nsnames.c b/drivers/acpi/acpica/nsnames.c
index eb6e1b88a51d..f03dd41e86d0 100644
--- a/drivers/acpi/acpica/nsnames.c
+++ b/drivers/acpi/acpica/nsnames.c
@@ -113,7 +113,7 @@ acpi_size acpi_ns_get_pathname_length(struct acpi_namespace_node *node)
acpi_status
acpi_ns_handle_to_pathname(acpi_handle target_handle,
- struct acpi_buffer * buffer, u8 no_trailing)
+ struct acpi_buffer *buffer, u8 no_trailing)
{
acpi_status status;
struct acpi_namespace_node *node;
diff --git a/drivers/acpi/acpica/nsobject.c b/drivers/acpi/acpica/nsobject.c
index 051306f0d0d6..cfa2bb7162d8 100644
--- a/drivers/acpi/acpica/nsobject.c
+++ b/drivers/acpi/acpica/nsobject.c
@@ -399,7 +399,7 @@ acpi_ns_attach_data(struct acpi_namespace_node *node,
******************************************************************************/
acpi_status
-acpi_ns_detach_data(struct acpi_namespace_node * node,
+acpi_ns_detach_data(struct acpi_namespace_node *node,
acpi_object_handler handler)
{
union acpi_operand_object *obj_desc;
@@ -444,7 +444,7 @@ acpi_ns_detach_data(struct acpi_namespace_node * node,
******************************************************************************/
acpi_status
-acpi_ns_get_attached_data(struct acpi_namespace_node * node,
+acpi_ns_get_attached_data(struct acpi_namespace_node *node,
acpi_object_handler handler, void **data)
{
union acpi_operand_object *obj_desc;
diff --git a/drivers/acpi/acpica/nsprepkg.c b/drivers/acpi/acpica/nsprepkg.c
index 9047f2808d5b..fbedc6e8ab36 100644
--- a/drivers/acpi/acpica/nsprepkg.c
+++ b/drivers/acpi/acpica/nsprepkg.c
@@ -62,6 +62,10 @@ acpi_ns_check_package_elements(struct acpi_evaluate_info *info,
u32 count1,
u8 type2, u32 count2, u32 start_index);
+static acpi_status
+acpi_ns_custom_package(struct acpi_evaluate_info *info,
+ union acpi_operand_object **elements, u32 count);
+
/*******************************************************************************
*
* FUNCTION: acpi_ns_check_package
@@ -135,6 +139,11 @@ acpi_ns_check_package(struct acpi_evaluate_info *info,
* PTYPE2 packages contain subpackages
*/
switch (package->ret_info.type) {
+ case ACPI_PTYPE_CUSTOM:
+
+ status = acpi_ns_custom_package(info, elements, count);
+ break;
+
case ACPI_PTYPE1_FIXED:
/*
* The package count is fixed and there are no subpackages
@@ -179,6 +188,7 @@ acpi_ns_check_package(struct acpi_evaluate_info *info,
if (ACPI_FAILURE(status)) {
return (status);
}
+
elements++;
}
break;
@@ -225,6 +235,7 @@ acpi_ns_check_package(struct acpi_evaluate_info *info,
return (status);
}
}
+
elements++;
}
break;
@@ -569,11 +580,13 @@ acpi_ns_check_package_list(struct acpi_evaluate_info *info,
if (sub_package->package.count < expected_count) {
goto package_too_small;
}
+
if (sub_package->package.count <
package->ret_info.count1) {
expected_count = package->ret_info.count1;
goto package_too_small;
}
+
if (expected_count == 0) {
/*
* Either the num_entries element was originally zero or it was
@@ -622,6 +635,83 @@ package_too_small:
/*******************************************************************************
*
+ * FUNCTION: acpi_ns_custom_package
+ *
+ * PARAMETERS: info - Method execution information block
+ * elements - Pointer to the package elements array
+ * count - Element count for the package
+ *
+ * RETURN: Status
+ *
+ * DESCRIPTION: Check a returned package object for the correct count and
+ * correct type of all sub-objects.
+ *
+ * NOTE: Currently used for the _BIX method only. When needed for two or more
+ * methods, a detect/dispatch mechanism will probably be required.
+ *
+ ******************************************************************************/
+
+static acpi_status
+acpi_ns_custom_package(struct acpi_evaluate_info *info,
+ union acpi_operand_object **elements, u32 count)
+{
+ u32 expected_count;
+ u32 version;
+ acpi_status status = AE_OK;
+
+ ACPI_FUNCTION_NAME(ns_custom_package);
+
+ /* Get version number, must be Integer */
+
+ if ((*elements)->common.type != ACPI_TYPE_INTEGER) {
+ ACPI_WARN_PREDEFINED((AE_INFO, info->full_pathname,
+ info->node_flags,
+ "Return Package has invalid object type for version number"));
+ return_ACPI_STATUS(AE_AML_OPERAND_TYPE);
+ }
+
+ version = (u32)(*elements)->integer.value;
+ expected_count = 21; /* Version 1 */
+
+ if (version == 0) {
+ expected_count = 20; /* Version 0 */
+ }
+
+ if (count < expected_count) {
+ ACPI_WARN_PREDEFINED((AE_INFO, info->full_pathname,
+ info->node_flags,
+ "Return Package is too small - found %u elements, expected %u",
+ count, expected_count));
+ return_ACPI_STATUS(AE_AML_OPERAND_VALUE);
+ } else if (count > expected_count) {
+ ACPI_DEBUG_PRINT((ACPI_DB_REPAIR,
+ "%s: Return Package is larger than needed - "
+ "found %u, expected %u\n",
+ info->full_pathname, count, expected_count));
+ }
+
+ /* Validate all elements of the returned package */
+
+ status = acpi_ns_check_package_elements(info, elements,
+ ACPI_RTYPE_INTEGER, 16,
+ ACPI_RTYPE_STRING, 4, 0);
+ if (ACPI_FAILURE(status)) {
+ return_ACPI_STATUS(status);
+ }
+
+ /* Version 1 has a single trailing integer */
+
+ if (version > 0) {
+ status = acpi_ns_check_package_elements(info, elements + 20,
+ ACPI_RTYPE_INTEGER, 1,
+ 0, 0, 20);
+ }
+
+ return_ACPI_STATUS(status);
+}
+
+/*******************************************************************************
+ *
* FUNCTION: acpi_ns_check_package_elements
*
* PARAMETERS: info - Method execution information block
@@ -661,6 +751,7 @@ acpi_ns_check_package_elements(struct acpi_evaluate_info *info,
if (ACPI_FAILURE(status)) {
return (status);
}
+
this_element++;
}
@@ -671,6 +762,7 @@ acpi_ns_check_package_elements(struct acpi_evaluate_info *info,
if (ACPI_FAILURE(status)) {
return (status);
}
+
this_element++;
}
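acpi_ns_custom_package() above validates _BIX return packages by revision: the first element is an integer version, version 0 expects 20 elements, and version 1 expects 21 (the extra one being a single trailing integer). A standalone sketch of just the count check, not the ACPICA code:

#include <stdint.h>
#include <stdbool.h>

static bool bix_count_is_acceptable(uint32_t version, uint32_t count)
{
	uint32_t expected = (version == 0) ? 20 : 21;

	/* Too small is an error; larger than needed only warrants a warning */
	return count >= expected;
}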
diff --git a/drivers/acpi/acpica/nsrepair.c b/drivers/acpi/acpica/nsrepair.c
index 805e36de8707..9523d41c7ae9 100644
--- a/drivers/acpi/acpica/nsrepair.c
+++ b/drivers/acpi/acpica/nsrepair.c
@@ -399,7 +399,7 @@ static const struct acpi_simple_repair_info *acpi_ns_match_simple_repair(struct
******************************************************************************/
acpi_status
-acpi_ns_repair_null_element(struct acpi_evaluate_info * info,
+acpi_ns_repair_null_element(struct acpi_evaluate_info *info,
u32 expected_btypes,
u32 package_index,
union acpi_operand_object **return_object_ptr)
diff --git a/drivers/acpi/acpica/nsrepair2.c b/drivers/acpi/acpica/nsrepair2.c
index 63edbbbf9ae4..d5336122486b 100644
--- a/drivers/acpi/acpica/nsrepair2.c
+++ b/drivers/acpi/acpica/nsrepair2.c
@@ -54,9 +54,9 @@ ACPI_MODULE_NAME("nsrepair2")
* be repaired on a per-name basis.
*/
typedef
-acpi_status(*acpi_repair_function) (struct acpi_evaluate_info * info,
- union acpi_operand_object
- **return_object_ptr);
+acpi_status (*acpi_repair_function) (struct acpi_evaluate_info * info,
+ union acpi_operand_object **
+ return_object_ptr);
typedef struct acpi_repair_info {
char name[ACPI_NAME_SIZE];
diff --git a/drivers/acpi/acpica/nsutils.c b/drivers/acpi/acpica/nsutils.c
index c72cc62b92d0..784a30b76e0f 100644
--- a/drivers/acpi/acpica/nsutils.c
+++ b/drivers/acpi/acpica/nsutils.c
@@ -272,11 +272,11 @@ acpi_status acpi_ns_build_internal_name(struct acpi_namestring_info *info)
result = &internal_name[i];
} else if (num_segments == 2) {
internal_name[i] = AML_DUAL_NAME_PREFIX;
- result = &internal_name[(acpi_size) i + 1];
+ result = &internal_name[(acpi_size)i + 1];
} else {
internal_name[i] = AML_MULTI_NAME_PREFIX_OP;
- internal_name[(acpi_size) i + 1] = (char)num_segments;
- result = &internal_name[(acpi_size) i + 2];
+ internal_name[(acpi_size)i + 1] = (char)num_segments;
+ result = &internal_name[(acpi_size)i + 2];
}
}
@@ -456,7 +456,7 @@ acpi_ns_externalize_name(u32 internal_name_length,
names_index = prefix_length + 2;
num_segments = (u8)
- internal_name[(acpi_size) prefix_length + 1];
+ internal_name[(acpi_size)prefix_length + 1];
break;
case AML_DUAL_NAME_PREFIX:
diff --git a/drivers/acpi/acpica/nsxfeval.c b/drivers/acpi/acpica/nsxfeval.c
index a7deeaa8eddc..d2a9b4fd739f 100644
--- a/drivers/acpi/acpica/nsxfeval.c
+++ b/drivers/acpi/acpica/nsxfeval.c
@@ -256,7 +256,7 @@ acpi_evaluate_object(acpi_handle handle,
* Allocate a new parameter block for the internal objects
* Add 1 to count to allow for null terminated internal list
*/
- info->parameters = ACPI_ALLOCATE_ZEROED(((acpi_size) info->
+ info->parameters = ACPI_ALLOCATE_ZEROED(((acpi_size)info->
param_count +
1) * sizeof(void *));
if (!info->parameters) {
@@ -280,13 +280,12 @@ acpi_evaluate_object(acpi_handle handle,
info->parameters[info->param_count] = NULL;
}
-#if 0
+#ifdef _FUTURE_FEATURE
/*
* Begin incoming argument count analysis. Check for too few args
* and too many args.
*/
-
switch (acpi_ns_get_type(info->node)) {
case ACPI_TYPE_METHOD:
@@ -370,68 +369,68 @@ acpi_evaluate_object(acpi_handle handle,
* If we are expecting a return value, and all went well above,
* copy the return value to an external object.
*/
- if (return_buffer) {
- if (!info->return_object) {
- return_buffer->length = 0;
- } else {
- if (ACPI_GET_DESCRIPTOR_TYPE(info->return_object) ==
- ACPI_DESC_TYPE_NAMED) {
- /*
- * If we received a NS Node as a return object, this means that
- * the object we are evaluating has nothing interesting to
- * return (such as a mutex, etc.) We return an error because
- * these types are essentially unsupported by this interface.
- * We don't check up front because this makes it easier to add
- * support for various types at a later date if necessary.
- */
- status = AE_TYPE;
- info->return_object = NULL; /* No need to delete a NS Node */
- return_buffer->length = 0;
- }
+ if (!return_buffer) {
+ goto cleanup_return_object;
+ }
- if (ACPI_SUCCESS(status)) {
+ if (!info->return_object) {
+ return_buffer->length = 0;
+ goto cleanup;
+ }
- /* Dereference Index and ref_of references */
+ if (ACPI_GET_DESCRIPTOR_TYPE(info->return_object) ==
+ ACPI_DESC_TYPE_NAMED) {
+ /*
+ * If we received a NS Node as a return object, this means that
+ * the object we are evaluating has nothing interesting to
+ * return (such as a mutex, etc.) We return an error because
+ * these types are essentially unsupported by this interface.
+ * We don't check up front because this makes it easier to add
+ * support for various types at a later date if necessary.
+ */
+ status = AE_TYPE;
+ info->return_object = NULL; /* No need to delete a NS Node */
+ return_buffer->length = 0;
+ }
- acpi_ns_resolve_references(info);
+ if (ACPI_FAILURE(status)) {
+ goto cleanup_return_object;
+ }
- /* Get the size of the returned object */
+ /* Dereference Index and ref_of references */
- status =
- acpi_ut_get_object_size(info->return_object,
- &buffer_space_needed);
- if (ACPI_SUCCESS(status)) {
-
- /* Validate/Allocate/Clear caller buffer */
-
- status =
- acpi_ut_initialize_buffer
- (return_buffer,
- buffer_space_needed);
- if (ACPI_FAILURE(status)) {
- /*
- * Caller's buffer is too small or a new one can't
- * be allocated
- */
- ACPI_DEBUG_PRINT((ACPI_DB_INFO,
- "Needed buffer size %X, %s\n",
- (u32)
- buffer_space_needed,
- acpi_format_exception
- (status)));
- } else {
- /* We have enough space for the object, build it */
-
- status =
- acpi_ut_copy_iobject_to_eobject
- (info->return_object,
- return_buffer);
- }
- }
- }
+ acpi_ns_resolve_references(info);
+
+ /* Get the size of the returned object */
+
+ status = acpi_ut_get_object_size(info->return_object,
+ &buffer_space_needed);
+ if (ACPI_SUCCESS(status)) {
+
+ /* Validate/Allocate/Clear caller buffer */
+
+ status = acpi_ut_initialize_buffer(return_buffer,
+ buffer_space_needed);
+ if (ACPI_FAILURE(status)) {
+ /*
+ * Caller's buffer is too small or a new one can't
+ * be allocated
+ */
+ ACPI_DEBUG_PRINT((ACPI_DB_INFO,
+ "Needed buffer size %X, %s\n",
+ (u32)buffer_space_needed,
+ acpi_format_exception(status)));
+ } else {
+ /* We have enough space for the object, build it */
+
+ status =
+ acpi_ut_copy_iobject_to_eobject(info->return_object,
+ return_buffer);
}
}
+cleanup_return_object:
+
if (info->return_object) {
/*
* Delete the internal return object. NOTE: Interpreter must be
diff --git a/drivers/acpi/acpica/nsxfname.c b/drivers/acpi/acpica/nsxfname.c
index 285b82044e7b..76a1bd4bb070 100644
--- a/drivers/acpi/acpica/nsxfname.c
+++ b/drivers/acpi/acpica/nsxfname.c
@@ -78,7 +78,7 @@ static char *acpi_ns_copy_device_id(struct acpi_pnp_device_id *dest,
acpi_status
acpi_get_handle(acpi_handle parent,
- acpi_string pathname, acpi_handle * ret_handle)
+ acpi_string pathname, acpi_handle *ret_handle)
{
acpi_status status;
struct acpi_namespace_node *node = NULL;
@@ -155,7 +155,7 @@ ACPI_EXPORT_SYMBOL(acpi_get_handle)
*
******************************************************************************/
acpi_status
-acpi_get_name(acpi_handle handle, u32 name_type, struct acpi_buffer * buffer)
+acpi_get_name(acpi_handle handle, u32 name_type, struct acpi_buffer *buffer)
{
acpi_status status;
struct acpi_namespace_node *node;
@@ -448,7 +448,7 @@ acpi_get_object_info(acpi_handle handle,
/* Point past the CID PNP_DEVICE_ID array */
next_id_string +=
- ((acpi_size) cid_list->count *
+ ((acpi_size)cid_list->count *
sizeof(struct acpi_pnp_device_id));
}
diff --git a/drivers/acpi/acpica/nsxfobj.c b/drivers/acpi/acpica/nsxfobj.c
index c312cd490450..32d372b85243 100644
--- a/drivers/acpi/acpica/nsxfobj.c
+++ b/drivers/acpi/acpica/nsxfobj.c
@@ -63,7 +63,7 @@ ACPI_MODULE_NAME("nsxfobj")
 * DESCRIPTION: This routine returns the type associated with a particular handle
*
******************************************************************************/
-acpi_status acpi_get_type(acpi_handle handle, acpi_object_type * ret_type)
+acpi_status acpi_get_type(acpi_handle handle, acpi_object_type *ret_type)
{
struct acpi_namespace_node *node;
acpi_status status;
@@ -115,7 +115,7 @@ ACPI_EXPORT_SYMBOL(acpi_get_type)
* Handle.
*
******************************************************************************/
-acpi_status acpi_get_parent(acpi_handle handle, acpi_handle * ret_handle)
+acpi_status acpi_get_parent(acpi_handle handle, acpi_handle *ret_handle)
{
struct acpi_namespace_node *node;
struct acpi_namespace_node *parent_node;
@@ -183,7 +183,7 @@ ACPI_EXPORT_SYMBOL(acpi_get_parent)
acpi_status
acpi_get_next_object(acpi_object_type type,
acpi_handle parent,
- acpi_handle child, acpi_handle * ret_handle)
+ acpi_handle child, acpi_handle *ret_handle)
{
acpi_status status;
struct acpi_namespace_node *node;
diff --git a/drivers/acpi/acpica/psargs.c b/drivers/acpi/acpica/psargs.c
index d48cbed342c1..c29c930ffa08 100644
--- a/drivers/acpi/acpica/psargs.c
+++ b/drivers/acpi/acpica/psargs.c
@@ -87,7 +87,7 @@ acpi_ps_get_next_package_length(struct acpi_parse_state *parser_state)
* used to encode the package length, either 0,1,2, or 3
*/
byte_count = (aml[0] >> 6);
- parser_state->aml += ((acpi_size) byte_count + 1);
+ parser_state->aml += ((acpi_size)byte_count + 1);
/* Get bytes 3, 2, 1 as needed */
diff --git a/drivers/acpi/acpica/psopinfo.c b/drivers/acpi/acpica/psopinfo.c
index cfd17a4f2e91..177b05b239b7 100644
--- a/drivers/acpi/acpica/psopinfo.c
+++ b/drivers/acpi/acpica/psopinfo.c
@@ -158,7 +158,7 @@ const struct acpi_opcode_info *acpi_ps_get_opcode_info(u16 opcode)
*
******************************************************************************/
-char *acpi_ps_get_opcode_name(u16 opcode)
+const char *acpi_ps_get_opcode_name(u16 opcode)
{
#if defined(ACPI_DISASSEMBLER) || defined (ACPI_DEBUG_OUTPUT)
diff --git a/drivers/acpi/acpica/psparse.c b/drivers/acpi/acpica/psparse.c
index 8038ed2aca05..0a23897d8adf 100644
--- a/drivers/acpi/acpica/psparse.c
+++ b/drivers/acpi/acpica/psparse.c
@@ -130,8 +130,8 @@ u16 acpi_ps_peek_opcode(struct acpi_parse_state * parser_state)
******************************************************************************/
acpi_status
-acpi_ps_complete_this_op(struct acpi_walk_state * walk_state,
- union acpi_parse_object * op)
+acpi_ps_complete_this_op(struct acpi_walk_state *walk_state,
+ union acpi_parse_object *op)
{
union acpi_parse_object *prev;
union acpi_parse_object *next;
diff --git a/drivers/acpi/acpica/psutils.c b/drivers/acpi/acpica/psutils.c
index b28b0da171b6..89cb4bffcc7c 100644
--- a/drivers/acpi/acpica/psutils.c
+++ b/drivers/acpi/acpica/psutils.c
@@ -128,7 +128,7 @@ union acpi_parse_object *acpi_ps_alloc_op(u16 opcode, u8 *aml)
if (op_info->flags & AML_DEFER) {
flags = ACPI_PARSEOP_DEFERRED;
} else if (op_info->flags & AML_NAMED) {
- flags = ACPI_PARSEOP_NAMED;
+ flags = ACPI_PARSEOP_NAMED_OBJECT;
} else if (opcode == AML_INT_BYTELIST_OP) {
flags = ACPI_PARSEOP_BYTELIST;
}
diff --git a/drivers/acpi/acpica/psxface.c b/drivers/acpi/acpica/psxface.c
index 04b37fcca684..cf30cd821f5e 100644
--- a/drivers/acpi/acpica/psxface.c
+++ b/drivers/acpi/acpica/psxface.c
@@ -115,7 +115,7 @@ acpi_debug_trace(const char *name, u32 debug_level, u32 debug_layer, u32 flags)
*
******************************************************************************/
-acpi_status acpi_ps_execute_method(struct acpi_evaluate_info * info)
+acpi_status acpi_ps_execute_method(struct acpi_evaluate_info *info)
{
acpi_status status;
union acpi_parse_object *op;
diff --git a/drivers/acpi/acpica/rscalc.c b/drivers/acpi/acpica/rscalc.c
index 2b1209d73e44..f1e83addd5b4 100644
--- a/drivers/acpi/acpica/rscalc.c
+++ b/drivers/acpi/acpica/rscalc.c
@@ -112,7 +112,7 @@ acpi_rs_struct_option_length(struct acpi_resource_source *resource_source)
* resource_source_index (1).
*/
if (resource_source->string_ptr) {
- return ((acpi_rs_length) (resource_source->string_length + 1));
+ return ((acpi_rs_length)(resource_source->string_length + 1));
}
return (0);
@@ -188,7 +188,7 @@ acpi_rs_stream_option_length(u32 resource_length,
acpi_status
acpi_rs_get_aml_length(struct acpi_resource *resource,
- acpi_size resource_list_size, acpi_size * size_needed)
+ acpi_size resource_list_size, acpi_size *size_needed)
{
acpi_size aml_size_needed = 0;
struct acpi_resource *resource_end;
@@ -278,11 +278,11 @@ acpi_rs_get_aml_length(struct acpi_resource *resource,
* 16-Bit Address Resource:
* Add the size of the optional resource_source info
*/
- total_size = (acpi_rs_length) (total_size +
- acpi_rs_struct_option_length
- (&resource->data.
- address16.
- resource_source));
+ total_size = (acpi_rs_length)(total_size +
+ acpi_rs_struct_option_length
+ (&resource->data.
+ address16.
+ resource_source));
break;
case ACPI_RESOURCE_TYPE_ADDRESS32:
@@ -290,11 +290,11 @@ acpi_rs_get_aml_length(struct acpi_resource *resource,
* 32-Bit Address Resource:
* Add the size of the optional resource_source info
*/
- total_size = (acpi_rs_length) (total_size +
- acpi_rs_struct_option_length
- (&resource->data.
- address32.
- resource_source));
+ total_size = (acpi_rs_length)(total_size +
+ acpi_rs_struct_option_length
+ (&resource->data.
+ address32.
+ resource_source));
break;
case ACPI_RESOURCE_TYPE_ADDRESS64:
@@ -302,11 +302,11 @@ acpi_rs_get_aml_length(struct acpi_resource *resource,
* 64-Bit Address Resource:
* Add the size of the optional resource_source info
*/
- total_size = (acpi_rs_length) (total_size +
- acpi_rs_struct_option_length
- (&resource->data.
- address64.
- resource_source));
+ total_size = (acpi_rs_length)(total_size +
+ acpi_rs_struct_option_length
+ (&resource->data.
+ address64.
+ resource_source));
break;
case ACPI_RESOURCE_TYPE_EXTENDED_IRQ:
@@ -315,28 +315,28 @@ acpi_rs_get_aml_length(struct acpi_resource *resource,
* Add the size of each additional optional interrupt beyond the
* required 1 (4 bytes for each u32 interrupt number)
*/
- total_size = (acpi_rs_length) (total_size +
- ((resource->data.
- extended_irq.
- interrupt_count -
- 1) * 4) +
- /* Add the size of the optional resource_source info */
- acpi_rs_struct_option_length
- (&resource->data.
+ total_size = (acpi_rs_length)(total_size +
+ ((resource->data.
extended_irq.
- resource_source));
+ interrupt_count -
+ 1) * 4) +
+ /* Add the size of the optional resource_source info */
+ acpi_rs_struct_option_length
+ (&resource->data.
+ extended_irq.
+ resource_source));
break;
case ACPI_RESOURCE_TYPE_GPIO:
- total_size = (acpi_rs_length) (total_size +
- (resource->data.gpio.
- pin_table_length * 2) +
- resource->data.gpio.
- resource_source.
- string_length +
- resource->data.gpio.
- vendor_length);
+ total_size = (acpi_rs_length)(total_size +
+ (resource->data.gpio.
+ pin_table_length * 2) +
+ resource->data.gpio.
+ resource_source.
+ string_length +
+ resource->data.gpio.
+ vendor_length);
break;
@@ -348,14 +348,14 @@ acpi_rs_get_aml_length(struct acpi_resource *resource,
common_serial_bus.
type];
- total_size = (acpi_rs_length) (total_size +
- resource->data.
- i2c_serial_bus.
- resource_source.
- string_length +
- resource->data.
- i2c_serial_bus.
- vendor_length);
+ total_size = (acpi_rs_length)(total_size +
+ resource->data.
+ i2c_serial_bus.
+ resource_source.
+ string_length +
+ resource->data.
+ i2c_serial_bus.
+ vendor_length);
break;
@@ -397,8 +397,8 @@ acpi_rs_get_aml_length(struct acpi_resource *resource,
******************************************************************************/
acpi_status
-acpi_rs_get_list_length(u8 * aml_buffer,
- u32 aml_buffer_length, acpi_size * size_needed)
+acpi_rs_get_list_length(u8 *aml_buffer,
+ u32 aml_buffer_length, acpi_size *size_needed)
{
acpi_status status;
u8 *end_aml;
@@ -610,7 +610,7 @@ acpi_rs_get_list_length(u8 * aml_buffer,
acpi_status
acpi_rs_get_pci_routing_table_length(union acpi_operand_object *package_object,
- acpi_size * buffer_size_needed)
+ acpi_size *buffer_size_needed)
{
u32 number_of_elements;
acpi_size temp_size_needed = 0;
diff --git a/drivers/acpi/acpica/rscreate.c b/drivers/acpi/acpica/rscreate.c
index 12978891e842..809b61c114fe 100644
--- a/drivers/acpi/acpica/rscreate.c
+++ b/drivers/acpi/acpica/rscreate.c
@@ -347,7 +347,7 @@ acpi_rs_create_pci_routing_table(union acpi_operand_object *package_object,
(u8 *) output_buffer->pointer);
path_buffer.pointer = user_prt->source;
- status = acpi_ns_handle_to_pathname((acpi_handle) node, &path_buffer, FALSE);
+ status = acpi_ns_handle_to_pathname((acpi_handle)node, &path_buffer, FALSE);
/* +1 to include null terminator */
diff --git a/drivers/acpi/acpica/rsdump.c b/drivers/acpi/acpica/rsdump.c
index 23a17c86d5a9..5ffdb5602d8d 100644
--- a/drivers/acpi/acpica/rsdump.c
+++ b/drivers/acpi/acpica/rsdump.c
@@ -52,17 +52,17 @@ ACPI_MODULE_NAME("rsdump")
* All functions in this module are used by the AML Debugger only
*/
/* Local prototypes */
-static void acpi_rs_out_string(char *title, char *value);
+static void acpi_rs_out_string(const char *title, const char *value);
-static void acpi_rs_out_integer8(char *title, u8 value);
+static void acpi_rs_out_integer8(const char *title, u8 value);
-static void acpi_rs_out_integer16(char *title, u16 value);
+static void acpi_rs_out_integer16(const char *title, u16 value);
-static void acpi_rs_out_integer32(char *title, u32 value);
+static void acpi_rs_out_integer32(const char *title, u32 value);
-static void acpi_rs_out_integer64(char *title, u64 value);
+static void acpi_rs_out_integer64(const char *title, u64 value);
-static void acpi_rs_out_title(char *title);
+static void acpi_rs_out_title(const char *title);
static void acpi_rs_dump_byte_list(u16 length, u8 *data);
@@ -208,7 +208,7 @@ acpi_rs_dump_descriptor(void *resource, struct acpi_rsdump_info *table)
{
u8 *target = NULL;
u8 *previous_target;
- char *name;
+ const char *name;
u8 count;
/* First table entry must contain the table length (# of table entries) */
@@ -248,10 +248,8 @@ acpi_rs_dump_descriptor(void *resource, struct acpi_rsdump_info *table)
case ACPI_RSD_UINT8:
if (table->pointer) {
- acpi_rs_out_string(name, ACPI_CAST_PTR(char,
- table->
- pointer
- [*target]));
+ acpi_rs_out_string(name,
+ table->pointer[*target]);
} else {
acpi_rs_out_integer8(name, ACPI_GET8(target));
}
@@ -276,26 +274,20 @@ acpi_rs_dump_descriptor(void *resource, struct acpi_rsdump_info *table)
case ACPI_RSD_1BITFLAG:
- acpi_rs_out_string(name, ACPI_CAST_PTR(char,
- table->
- pointer[*target &
- 0x01]));
+ acpi_rs_out_string(name,
+ table->pointer[*target & 0x01]);
break;
case ACPI_RSD_2BITFLAG:
- acpi_rs_out_string(name, ACPI_CAST_PTR(char,
- table->
- pointer[*target &
- 0x03]));
+ acpi_rs_out_string(name,
+ table->pointer[*target & 0x03]);
break;
case ACPI_RSD_3BITFLAG:
- acpi_rs_out_string(name, ACPI_CAST_PTR(char,
- table->
- pointer[*target &
- 0x07]));
+ acpi_rs_out_string(name,
+ table->pointer[*target & 0x07]);
break;
case ACPI_RSD_SHORTLIST:
@@ -481,7 +473,7 @@ static void acpi_rs_dump_address_common(union acpi_resource_data *resource)
*
******************************************************************************/
-static void acpi_rs_out_string(char *title, char *value)
+static void acpi_rs_out_string(const char *title, const char *value)
{
acpi_os_printf("%27s : %s", title, value);
@@ -491,30 +483,30 @@ static void acpi_rs_out_string(char *title, char *value)
acpi_os_printf("\n");
}
-static void acpi_rs_out_integer8(char *title, u8 value)
+static void acpi_rs_out_integer8(const char *title, u8 value)
{
acpi_os_printf("%27s : %2.2X\n", title, value);
}
-static void acpi_rs_out_integer16(char *title, u16 value)
+static void acpi_rs_out_integer16(const char *title, u16 value)
{
acpi_os_printf("%27s : %4.4X\n", title, value);
}
-static void acpi_rs_out_integer32(char *title, u32 value)
+static void acpi_rs_out_integer32(const char *title, u32 value)
{
acpi_os_printf("%27s : %8.8X\n", title, value);
}
-static void acpi_rs_out_integer64(char *title, u64 value)
+static void acpi_rs_out_integer64(const char *title, u64 value)
{
acpi_os_printf("%27s : %8.8X%8.8X\n", title, ACPI_FORMAT_UINT64(value));
}
-static void acpi_rs_out_title(char *title)
+static void acpi_rs_out_title(const char *title)
{
acpi_os_printf("%27s : ", title);
diff --git a/drivers/acpi/acpica/rsdumpinfo.c b/drivers/acpi/acpica/rsdumpinfo.c
index 5c3491387f9f..61e8f16c857d 100644
--- a/drivers/acpi/acpica/rsdumpinfo.c
+++ b/drivers/acpi/acpica/rsdumpinfo.c
@@ -330,19 +330,20 @@ struct acpi_rsdump_info acpi_rs_dump_fixed_dma[4] = {
{ACPI_RSD_UINT8, ACPI_RSD_OFFSET (common_serial_bus.type), "Type", acpi_gbl_sbt_decode}, \
{ACPI_RSD_1BITFLAG, ACPI_RSD_OFFSET (common_serial_bus.producer_consumer), "ProducerConsumer", acpi_gbl_consume_decode}, \
{ACPI_RSD_1BITFLAG, ACPI_RSD_OFFSET (common_serial_bus.slave_mode), "SlaveMode", acpi_gbl_sm_decode}, \
+ {ACPI_RSD_1BITFLAG, ACPI_RSD_OFFSET (common_serial_bus.connection_sharing),"ConnectionSharing", acpi_gbl_shr_decode}, \
{ACPI_RSD_UINT8, ACPI_RSD_OFFSET (common_serial_bus.type_revision_id), "TypeRevisionId", NULL}, \
{ACPI_RSD_UINT16, ACPI_RSD_OFFSET (common_serial_bus.type_data_length), "TypeDataLength", NULL}, \
{ACPI_RSD_SOURCE, ACPI_RSD_OFFSET (common_serial_bus.resource_source), "ResourceSource", NULL}, \
{ACPI_RSD_UINT16, ACPI_RSD_OFFSET (common_serial_bus.vendor_length), "VendorLength", NULL}, \
{ACPI_RSD_SHORTLISTX,ACPI_RSD_OFFSET (common_serial_bus.vendor_data), "VendorData", NULL},
-struct acpi_rsdump_info acpi_rs_dump_common_serial_bus[10] = {
+struct acpi_rsdump_info acpi_rs_dump_common_serial_bus[11] = {
{ACPI_RSD_TITLE, ACPI_RSD_TABLE_SIZE(acpi_rs_dump_common_serial_bus),
"Common Serial Bus", NULL},
ACPI_RS_DUMP_COMMON_SERIAL_BUS
};
-struct acpi_rsdump_info acpi_rs_dump_i2c_serial_bus[13] = {
+struct acpi_rsdump_info acpi_rs_dump_i2c_serial_bus[14] = {
{ACPI_RSD_TITLE, ACPI_RSD_TABLE_SIZE(acpi_rs_dump_i2c_serial_bus),
"I2C Serial Bus", NULL},
ACPI_RS_DUMP_COMMON_SERIAL_BUS {ACPI_RSD_1BITFLAG,
@@ -355,7 +356,7 @@ struct acpi_rsdump_info acpi_rs_dump_i2c_serial_bus[13] = {
"SlaveAddress", NULL},
};
-struct acpi_rsdump_info acpi_rs_dump_spi_serial_bus[17] = {
+struct acpi_rsdump_info acpi_rs_dump_spi_serial_bus[18] = {
{ACPI_RSD_TITLE, ACPI_RSD_TABLE_SIZE(acpi_rs_dump_spi_serial_bus),
"Spi Serial Bus", NULL},
ACPI_RS_DUMP_COMMON_SERIAL_BUS {ACPI_RSD_1BITFLAG,
@@ -376,7 +377,7 @@ struct acpi_rsdump_info acpi_rs_dump_spi_serial_bus[17] = {
"ConnectionSpeed", NULL},
};
-struct acpi_rsdump_info acpi_rs_dump_uart_serial_bus[19] = {
+struct acpi_rsdump_info acpi_rs_dump_uart_serial_bus[20] = {
{ACPI_RSD_TITLE, ACPI_RSD_TABLE_SIZE(acpi_rs_dump_uart_serial_bus),
"Uart Serial Bus", NULL},
ACPI_RS_DUMP_COMMON_SERIAL_BUS {ACPI_RSD_2BITFLAG,
diff --git a/drivers/acpi/acpica/rsmisc.c b/drivers/acpi/acpica/rsmisc.c
index ce3d0b77ec89..25165ca42051 100644
--- a/drivers/acpi/acpica/rsmisc.c
+++ b/drivers/acpi/acpica/rsmisc.c
@@ -87,7 +87,7 @@ acpi_rs_convert_aml_to_resource(struct acpi_resource *resource,
return_ACPI_STATUS(AE_BAD_PARAMETER);
}
- if (((acpi_size) resource) & 0x3) {
+ if (((acpi_size)resource) & 0x3) {
/* Each internal resource struct is expected to be 32-bit aligned */
diff --git a/drivers/acpi/acpica/rsserial.c b/drivers/acpi/acpica/rsserial.c
index 8a01296ac7cf..b82c061f205a 100644
--- a/drivers/acpi/acpica/rsserial.c
+++ b/drivers/acpi/acpica/rsserial.c
@@ -151,7 +151,7 @@ struct acpi_rsconvert_info acpi_rs_convert_gpio[18] = {
*
******************************************************************************/
-struct acpi_rsconvert_info acpi_rs_convert_i2c_serial_bus[16] = {
+struct acpi_rsconvert_info acpi_rs_convert_i2c_serial_bus[17] = {
{ACPI_RSC_INITGET, ACPI_RESOURCE_TYPE_SERIAL_BUS,
ACPI_RS_SIZE(struct acpi_resource_i2c_serialbus),
ACPI_RSC_TABLE_SIZE(acpi_rs_convert_i2c_serial_bus)},
@@ -177,6 +177,11 @@ struct acpi_rsconvert_info acpi_rs_convert_i2c_serial_bus[16] = {
AML_OFFSET(common_serial_bus.flags),
1},
+ {ACPI_RSC_1BITFLAG,
+ ACPI_RS_OFFSET(data.common_serial_bus.connection_sharing),
+ AML_OFFSET(common_serial_bus.flags),
+ 2},
+
{ACPI_RSC_MOVE8,
ACPI_RS_OFFSET(data.common_serial_bus.type_revision_id),
AML_OFFSET(common_serial_bus.type_revision_id),
@@ -237,7 +242,7 @@ struct acpi_rsconvert_info acpi_rs_convert_i2c_serial_bus[16] = {
*
******************************************************************************/
-struct acpi_rsconvert_info acpi_rs_convert_spi_serial_bus[20] = {
+struct acpi_rsconvert_info acpi_rs_convert_spi_serial_bus[21] = {
{ACPI_RSC_INITGET, ACPI_RESOURCE_TYPE_SERIAL_BUS,
ACPI_RS_SIZE(struct acpi_resource_spi_serialbus),
ACPI_RSC_TABLE_SIZE(acpi_rs_convert_spi_serial_bus)},
@@ -263,6 +268,11 @@ struct acpi_rsconvert_info acpi_rs_convert_spi_serial_bus[20] = {
AML_OFFSET(common_serial_bus.flags),
1},
+ {ACPI_RSC_1BITFLAG,
+ ACPI_RS_OFFSET(data.common_serial_bus.connection_sharing),
+ AML_OFFSET(common_serial_bus.flags),
+ 2},
+
{ACPI_RSC_MOVE8,
ACPI_RS_OFFSET(data.common_serial_bus.type_revision_id),
AML_OFFSET(common_serial_bus.type_revision_id),
@@ -339,7 +349,7 @@ struct acpi_rsconvert_info acpi_rs_convert_spi_serial_bus[20] = {
*
******************************************************************************/
-struct acpi_rsconvert_info acpi_rs_convert_uart_serial_bus[22] = {
+struct acpi_rsconvert_info acpi_rs_convert_uart_serial_bus[23] = {
{ACPI_RSC_INITGET, ACPI_RESOURCE_TYPE_SERIAL_BUS,
ACPI_RS_SIZE(struct acpi_resource_uart_serialbus),
ACPI_RSC_TABLE_SIZE(acpi_rs_convert_uart_serial_bus)},
@@ -365,6 +375,11 @@ struct acpi_rsconvert_info acpi_rs_convert_uart_serial_bus[22] = {
AML_OFFSET(common_serial_bus.flags),
1},
+ {ACPI_RSC_1BITFLAG,
+ ACPI_RS_OFFSET(data.common_serial_bus.connection_sharing),
+ AML_OFFSET(common_serial_bus.flags),
+ 2},
+
{ACPI_RSC_MOVE8,
ACPI_RS_OFFSET(data.common_serial_bus.type_revision_id),
AML_OFFSET(common_serial_bus.type_revision_id),
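The resource-conversion and dump tables above add ConnectionSharing as a 1-bit flag at bit position 2 of the common serial-bus flags byte, alongside the existing 1-bit fields at the lower positions shown in the same tables. A one-line extraction sketch, with a hypothetical helper name:

#include <stdint.h>

/* ConnectionSharing is bit 2 of the AML common serial-bus flags byte */
static inline uint8_t serialbus_connection_sharing(uint8_t flags)
{
	return (uint8_t)((flags >> 2) & 0x01u);
}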
diff --git a/drivers/acpi/acpica/rsutils.c b/drivers/acpi/acpica/rsutils.c
index cf06e49cd91c..fa491c64c040 100644
--- a/drivers/acpi/acpica/rsutils.c
+++ b/drivers/acpi/acpica/rsutils.c
@@ -338,7 +338,7 @@ acpi_rs_get_resource_source(acpi_rs_length resource_length,
* Note: Some resource descriptors will have an additional null, so
* we add 1 to the minimum length.
*/
- if (total_length > (acpi_rsdesc_size) (minimum_length + 1)) {
+ if (total_length > (acpi_rsdesc_size)(minimum_length + 1)) {
/* Get the resource_source_index */
@@ -377,7 +377,7 @@ acpi_rs_get_resource_source(acpi_rs_length resource_length,
ACPI_CAST_PTR(char,
&aml_resource_source[1]));
- return ((acpi_rs_length) total_length);
+ return ((acpi_rs_length)total_length);
}
/* resource_source is not present */
@@ -406,9 +406,9 @@ acpi_rs_get_resource_source(acpi_rs_length resource_length,
******************************************************************************/
acpi_rsdesc_size
-acpi_rs_set_resource_source(union aml_resource * aml,
+acpi_rs_set_resource_source(union aml_resource *aml,
acpi_rs_length minimum_length,
- struct acpi_resource_source * resource_source)
+ struct acpi_resource_source *resource_source)
{
u8 *aml_resource_source;
acpi_rsdesc_size descriptor_length;
@@ -466,8 +466,8 @@ acpi_rs_set_resource_source(union aml_resource * aml,
******************************************************************************/
acpi_status
-acpi_rs_get_prt_method_data(struct acpi_namespace_node * node,
- struct acpi_buffer * ret_buffer)
+acpi_rs_get_prt_method_data(struct acpi_namespace_node *node,
+ struct acpi_buffer *ret_buffer)
{
union acpi_operand_object *obj_desc;
acpi_status status;
@@ -671,7 +671,7 @@ acpi_rs_get_aei_method_data(struct acpi_namespace_node *node,
acpi_status
acpi_rs_get_method_data(acpi_handle handle,
- char *path, struct acpi_buffer *ret_buffer)
+ const char *path, struct acpi_buffer *ret_buffer)
{
union acpi_operand_object *obj_desc;
acpi_status status;
diff --git a/drivers/acpi/acpica/rsxface.c b/drivers/acpi/acpica/rsxface.c
index 900933be9909..465ed8137167 100644
--- a/drivers/acpi/acpica/rsxface.c
+++ b/drivers/acpi/acpica/rsxface.c
@@ -433,8 +433,8 @@ ACPI_EXPORT_SYMBOL(acpi_resource_to_address64)
acpi_status
acpi_get_vendor_resource(acpi_handle device_handle,
char *name,
- struct acpi_vendor_uuid * uuid,
- struct acpi_buffer * ret_buffer)
+ struct acpi_vendor_uuid *uuid,
+ struct acpi_buffer *ret_buffer)
{
struct acpi_vendor_walk_info info;
acpi_status status;
@@ -539,7 +539,7 @@ acpi_rs_match_vendor_resource(struct acpi_resource *resource, void *context)
******************************************************************************/
acpi_status
-acpi_walk_resource_buffer(struct acpi_buffer * buffer,
+acpi_walk_resource_buffer(struct acpi_buffer *buffer,
acpi_walk_resource_callback user_function,
void *context)
{
diff --git a/drivers/acpi/acpica/tbdata.c b/drivers/acpi/acpica/tbdata.c
index 7da79ce74080..1388a19e5db8 100644
--- a/drivers/acpi/acpica/tbdata.c
+++ b/drivers/acpi/acpica/tbdata.c
@@ -368,7 +368,7 @@ acpi_status acpi_tb_validate_temp_table(struct acpi_table_desc *table_desc)
*****************************************************************************/
acpi_status
-acpi_tb_verify_temp_table(struct acpi_table_desc * table_desc, char *signature)
+acpi_tb_verify_temp_table(struct acpi_table_desc *table_desc, char *signature)
{
acpi_status status = AE_OK;
@@ -401,9 +401,9 @@ acpi_tb_verify_temp_table(struct acpi_table_desc * table_desc, char *signature)
ACPI_EXCEPTION((AE_INFO, AE_NO_MEMORY,
"%4.4s 0x%8.8X%8.8X"
" Attempted table install failed",
- acpi_ut_valid_acpi_name(table_desc->
- signature.
- ascii) ?
+ acpi_ut_valid_nameseg(table_desc->
+ signature.
+ ascii) ?
table_desc->signature.ascii : "????",
ACPI_FORMAT_UINT64(table_desc->
address)));
@@ -454,7 +454,7 @@ acpi_status acpi_tb_resize_root_table_list(void)
table_count = acpi_gbl_root_table_list.current_table_count;
}
- tables = ACPI_ALLOCATE_ZEROED(((acpi_size) table_count +
+ tables = ACPI_ALLOCATE_ZEROED(((acpi_size)table_count +
ACPI_ROOT_TABLE_SIZE_INCREMENT) *
sizeof(struct acpi_table_desc));
if (!tables) {
@@ -467,8 +467,7 @@ acpi_status acpi_tb_resize_root_table_list(void)
if (acpi_gbl_root_table_list.tables) {
memcpy(tables, acpi_gbl_root_table_list.tables,
- (acpi_size) table_count *
- sizeof(struct acpi_table_desc));
+ (acpi_size)table_count * sizeof(struct acpi_table_desc));
if (acpi_gbl_root_table_list.flags & ACPI_ROOT_ORIGIN_ALLOCATED) {
ACPI_FREE(acpi_gbl_root_table_list.tables);
@@ -701,7 +700,7 @@ acpi_status acpi_tb_release_owner_id(u32 table_index)
*
******************************************************************************/
-acpi_status acpi_tb_get_owner_id(u32 table_index, acpi_owner_id * owner_id)
+acpi_status acpi_tb_get_owner_id(u32 table_index, acpi_owner_id *owner_id)
{
acpi_status status = AE_BAD_PARAMETER;
diff --git a/drivers/acpi/acpica/tbfadt.c b/drivers/acpi/acpica/tbfadt.c
index a79e4f30b530..620806965243 100644
--- a/drivers/acpi/acpica/tbfadt.c
+++ b/drivers/acpi/acpica/tbfadt.c
@@ -53,7 +53,7 @@ static void
acpi_tb_init_generic_address(struct acpi_generic_address *generic_address,
u8 space_id,
u8 byte_width,
- u64 address, char *register_name, u8 flags);
+ u64 address, const char *register_name, u8 flags);
static void acpi_tb_convert_fadt(void);
@@ -65,7 +65,7 @@ acpi_tb_select_address(char *register_name, u32 address32, u64 address64);
/* Table for conversion of FADT to common internal format and FADT validation */
typedef struct acpi_fadt_info {
- char *name;
+ const char *name;
u16 address64;
u16 address32;
u16 length;
@@ -192,7 +192,7 @@ static void
acpi_tb_init_generic_address(struct acpi_generic_address *generic_address,
u8 space_id,
u8 byte_width,
- u64 address, char *register_name, u8 flags)
+ u64 address, const char *register_name, u8 flags)
{
u8 bit_width;
@@ -344,7 +344,7 @@ void acpi_tb_parse_fadt(void)
/* Obtain the DSDT and FACS tables via their addresses within the FADT */
- acpi_tb_install_fixed_table((acpi_physical_address) acpi_gbl_FADT.Xdsdt,
+ acpi_tb_install_fixed_table((acpi_physical_address)acpi_gbl_FADT.Xdsdt,
ACPI_SIG_DSDT, &acpi_gbl_dsdt_index);
/* If Hardware Reduced flag is set, there is no FACS */
@@ -385,14 +385,15 @@ void acpi_tb_create_local_fadt(struct acpi_table_header *table, u32 length)
{
/*
* Check if the FADT is larger than the largest table that we expect
- * (the ACPI 5.0 version). If so, truncate the table, and issue
- * a warning.
+ * (typically the current ACPI specification version). If so, truncate
+ * the table, and issue a warning.
*/
if (length > sizeof(struct acpi_table_fadt)) {
ACPI_BIOS_WARNING((AE_INFO,
- "FADT (revision %u) is longer than ACPI 5.0 version, "
+ "FADT (revision %u) is longer than %s length, "
"truncating length %u to %u",
- table->revision, length,
+ table->revision, ACPI_FADT_CONFORMANCE,
+ length,
(u32)sizeof(struct acpi_table_fadt)));
}
@@ -467,7 +468,7 @@ void acpi_tb_create_local_fadt(struct acpi_table_header *table, u32 length)
static void acpi_tb_convert_fadt(void)
{
- char *name;
+ const char *name;
struct acpi_generic_address *address64;
u32 address32;
u8 length;
@@ -646,9 +647,12 @@ static void acpi_tb_convert_fadt(void)
if ((address64->address && !length) ||
(!address64->address && length)) {
ACPI_BIOS_WARNING((AE_INFO,
- "Optional FADT field %s has zero address or length: "
- "0x%8.8X%8.8X/0x%X",
- name,
+ "Optional FADT field %s has valid %s but zero %s: "
+ "0x%8.8X%8.8X/0x%X", name,
+ (length ? "Length" :
+ "Address"),
+ (length ? "Address" :
+ "Length"),
ACPI_FORMAT_UINT64
(address64->address),
length));
diff --git a/drivers/acpi/acpica/tbfind.c b/drivers/acpi/acpica/tbfind.c
index f2d08034630e..e348d616e60f 100644
--- a/drivers/acpi/acpica/tbfind.c
+++ b/drivers/acpi/acpica/tbfind.c
@@ -76,7 +76,7 @@ acpi_tb_find_table(char *signature,
/* Validate the input table signature */
- if (!acpi_is_valid_signature(signature)) {
+ if (!acpi_ut_valid_nameseg(signature)) {
return_ACPI_STATUS(AE_BAD_SIGNATURE);
}
diff --git a/drivers/acpi/acpica/tbinstal.c b/drivers/acpi/acpica/tbinstal.c
index 4dc6108de4ff..8b13052128fc 100644
--- a/drivers/acpi/acpica/tbinstal.c
+++ b/drivers/acpi/acpica/tbinstal.c
@@ -299,9 +299,9 @@ acpi_tb_install_standard_table(acpi_physical_address address,
ACPI_BIOS_ERROR((AE_INFO,
"Table has invalid signature [%4.4s] (0x%8.8X), "
"must be SSDT or OEMx",
- acpi_ut_valid_acpi_name(new_table_desc.
- signature.
- ascii) ?
+ acpi_ut_valid_nameseg(new_table_desc.
+ signature.
+ ascii) ?
new_table_desc.signature.
ascii : "????",
new_table_desc.signature.integer));
diff --git a/drivers/acpi/acpica/tbutils.c b/drivers/acpi/acpica/tbutils.c
index 9240c76d2823..e28553914bf5 100644
--- a/drivers/acpi/acpica/tbutils.c
+++ b/drivers/acpi/acpica/tbutils.c
@@ -231,7 +231,7 @@ acpi_tb_get_root_table_entry(u8 *table_entry, u32 table_entry_size)
ACPI_FORMAT_UINT64(address64)));
}
#endif
- return ((acpi_physical_address) (address64));
+ return ((acpi_physical_address)(address64));
}
}
@@ -287,12 +287,12 @@ acpi_status __init acpi_tb_parse_root_table(acpi_physical_address rsdp_address)
* the XSDT if the revision is > 1 and the XSDT pointer is present,
* as per the ACPI specification.
*/
- address = (acpi_physical_address) rsdp->xsdt_physical_address;
+ address = (acpi_physical_address)rsdp->xsdt_physical_address;
table_entry_size = ACPI_XSDT_ENTRY_SIZE;
} else {
/* Root table is an RSDT (32-bit physical addresses) */
- address = (acpi_physical_address) rsdp->rsdt_physical_address;
+ address = (acpi_physical_address)rsdp->rsdt_physical_address;
table_entry_size = ACPI_RSDT_ENTRY_SIZE;
}
@@ -380,30 +380,3 @@ next_table:
acpi_os_unmap_memory(table, length);
return_ACPI_STATUS(AE_OK);
}
-
-/*******************************************************************************
- *
- * FUNCTION: acpi_is_valid_signature
- *
- * PARAMETERS: signature - Sig string to be validated
- *
- * RETURN: TRUE if signature is has 4 valid ACPI characters
- *
- * DESCRIPTION: Validate an ACPI table signature.
- *
- ******************************************************************************/
-
-u8 acpi_is_valid_signature(char *signature)
-{
- u32 i;
-
- /* Validate each character in the signature */
-
- for (i = 0; i < ACPI_NAME_SIZE; i++) {
- if (!acpi_ut_valid_acpi_char(signature[i], i)) {
- return (FALSE);
- }
- }
-
- return (TRUE);
-}
diff --git a/drivers/acpi/acpica/tbxface.c b/drivers/acpi/acpica/tbxface.c
index 326df65decef..3ecec937e8c9 100644
--- a/drivers/acpi/acpica/tbxface.c
+++ b/drivers/acpi/acpica/tbxface.c
@@ -99,7 +99,7 @@ acpi_status acpi_allocate_root_table(u32 initial_table_count)
******************************************************************************/
acpi_status __init
-acpi_initialize_tables(struct acpi_table_desc * initial_table_array,
+acpi_initialize_tables(struct acpi_table_desc *initial_table_array,
u32 initial_table_count, u8 allow_resize)
{
acpi_physical_address rsdp_address;
@@ -120,7 +120,7 @@ acpi_initialize_tables(struct acpi_table_desc * initial_table_array,
/* Root Table Array has been statically allocated by the host */
memset(initial_table_array, 0,
- (acpi_size) initial_table_count *
+ (acpi_size)initial_table_count *
sizeof(struct acpi_table_desc));
acpi_gbl_root_table_list.tables = initial_table_array;
@@ -352,7 +352,7 @@ ACPI_EXPORT_SYMBOL(acpi_get_table)
*
******************************************************************************/
acpi_status
-acpi_get_table_by_index(u32 table_index, struct acpi_table_header ** table)
+acpi_get_table_by_index(u32 table_index, struct acpi_table_header **table)
{
acpi_status status;
diff --git a/drivers/acpi/acpica/tbxfload.c b/drivers/acpi/acpica/tbxfload.c
index 3151968c10d1..ac71abcd32bb 100644
--- a/drivers/acpi/acpica/tbxfload.c
+++ b/drivers/acpi/acpica/tbxfload.c
@@ -82,7 +82,7 @@ acpi_status __init acpi_load_tables(void)
* their customized default region handlers.
*/
status = acpi_ev_install_region_handlers();
- if (ACPI_FAILURE(status) && status != AE_ALREADY_EXISTS) {
+ if (ACPI_FAILURE(status)) {
ACPI_EXCEPTION((AE_INFO, status,
"During Region initialization"));
return_ACPI_STATUS(status);
diff --git a/drivers/acpi/acpica/tbxfroot.c b/drivers/acpi/acpica/tbxfroot.c
index b9a78e457d19..adb6cfc54661 100644
--- a/drivers/acpi/acpica/tbxfroot.c
+++ b/drivers/acpi/acpica/tbxfroot.c
@@ -90,7 +90,7 @@ u32 acpi_tb_get_rsdp_length(struct acpi_table_rsdp *rsdp)
*
******************************************************************************/
-acpi_status acpi_tb_validate_rsdp(struct acpi_table_rsdp * rsdp)
+acpi_status acpi_tb_validate_rsdp(struct acpi_table_rsdp *rsdp)
{
/*
@@ -142,7 +142,7 @@ acpi_status acpi_tb_validate_rsdp(struct acpi_table_rsdp * rsdp)
*
******************************************************************************/
-acpi_status __init acpi_find_root_pointer(acpi_physical_address * table_address)
+acpi_status __init acpi_find_root_pointer(acpi_physical_address *table_address)
{
u8 *table_ptr;
u8 *mem_rover;
@@ -201,7 +201,7 @@ acpi_status __init acpi_find_root_pointer(acpi_physical_address * table_address)
(u32) ACPI_PTR_DIFF(mem_rover, table_ptr);
*table_address =
- (acpi_physical_address) physical_address;
+ (acpi_physical_address)physical_address;
return_ACPI_STATUS(AE_OK);
}
}
@@ -234,7 +234,7 @@ acpi_status __init acpi_find_root_pointer(acpi_physical_address * table_address)
(ACPI_HI_RSDP_WINDOW_BASE +
ACPI_PTR_DIFF(mem_rover, table_ptr));
- *table_address = (acpi_physical_address) physical_address;
+ *table_address = (acpi_physical_address)physical_address;
return_ACPI_STATUS(AE_OK);
}
diff --git a/drivers/acpi/acpica/utalloc.c b/drivers/acpi/acpica/utalloc.c
index 3dbdc3ab8b78..13324a27b99b 100644
--- a/drivers/acpi/acpica/utalloc.c
+++ b/drivers/acpi/acpica/utalloc.c
@@ -231,7 +231,7 @@ acpi_status acpi_ut_delete_caches(void)
*
******************************************************************************/
-acpi_status acpi_ut_validate_buffer(struct acpi_buffer * buffer)
+acpi_status acpi_ut_validate_buffer(struct acpi_buffer *buffer)
{
/* Obviously, the structure pointer must be valid */
@@ -272,8 +272,7 @@ acpi_status acpi_ut_validate_buffer(struct acpi_buffer * buffer)
******************************************************************************/
acpi_status
-acpi_ut_initialize_buffer(struct acpi_buffer * buffer,
- acpi_size required_length)
+acpi_ut_initialize_buffer(struct acpi_buffer *buffer, acpi_size required_length)
{
acpi_size input_buffer_length;
diff --git a/drivers/acpi/acpica/utascii.c b/drivers/acpi/acpica/utascii.c
new file mode 100644
index 000000000000..706c1f346490
--- /dev/null
+++ b/drivers/acpi/acpica/utascii.c
@@ -0,0 +1,140 @@
+/******************************************************************************
+ *
+ * Module Name: utascii - Utility ascii functions
+ *
+ *****************************************************************************/
+
+/*
+ * Copyright (C) 2000 - 2016, Intel Corp.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions, and the following disclaimer,
+ * without modification.
+ * 2. Redistributions in binary form must reproduce at minimum a disclaimer
+ * substantially similar to the "NO WARRANTY" disclaimer below
+ * ("Disclaimer") and any redistribution must be conditioned upon
+ * including a substantially similar Disclaimer requirement for further
+ * binary redistribution.
+ * 3. Neither the names of the above-listed copyright holders nor the names
+ * of any contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * Alternatively, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") version 2 as published by the Free
+ * Software Foundation.
+ *
+ * NO WARRANTY
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+ * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
+ * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
+ * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
+ * STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
+ * IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGES.
+ */
+
+#include <acpi/acpi.h>
+#include "accommon.h"
+
+/*******************************************************************************
+ *
+ * FUNCTION: acpi_ut_valid_nameseg
+ *
+ * PARAMETERS: name - The name or table signature to be examined.
+ * Four characters, does not have to be a
+ * NULL terminated string.
+ *
+ * RETURN: TRUE if signature has 4 valid ACPI characters
+ *
+ * DESCRIPTION: Validate an ACPI table signature.
+ *
+ ******************************************************************************/
+
+u8 acpi_ut_valid_nameseg(char *name)
+{
+ u32 i;
+
+ /* Validate each character in the signature */
+
+ for (i = 0; i < ACPI_NAME_SIZE; i++) {
+ if (!acpi_ut_valid_name_char(name[i], i)) {
+ return (FALSE);
+ }
+ }
+
+ return (TRUE);
+}
+
+/*******************************************************************************
+ *
+ * FUNCTION: acpi_ut_valid_name_char
+ *
+ * PARAMETERS: char - The character to be examined
+ * position - Byte position (0-3)
+ *
+ * RETURN: TRUE if the character is valid, FALSE otherwise
+ *
+ * DESCRIPTION: Check for a valid ACPI character. Must be one of:
+ * 1) Upper case alpha
+ * 2) numeric
+ * 3) underscore
+ *
+ * We allow a '!' as the last character because of the ASF! table
+ *
+ ******************************************************************************/
+
+u8 acpi_ut_valid_name_char(char character, u32 position)
+{
+
+ if (!((character >= 'A' && character <= 'Z') ||
+ (character >= '0' && character <= '9') || (character == '_'))) {
+
+ /* Allow a '!' in the last position */
+
+ if (character == '!' && position == 3) {
+ return (TRUE);
+ }
+
+ return (FALSE);
+ }
+
+ return (TRUE);
+}
+
+/*******************************************************************************
+ *
+ * FUNCTION: acpi_ut_check_and_repair_ascii
+ *
+ * PARAMETERS: name - Ascii string
+ * count - Number of characters to check
+ *
+ * RETURN: None
+ *
+ * DESCRIPTION: Ensure that the requested number of characters are printable
+ * Ascii characters. Sets non-printable and null chars to <space>.
+ *
+ ******************************************************************************/
+
+void acpi_ut_check_and_repair_ascii(u8 *name, char *repaired_name, u32 count)
+{
+ u32 i;
+
+ for (i = 0; i < count; i++) {
+ repaired_name[i] = (char)name[i];
+
+ if (!name[i]) {
+ return;
+ }
+ if (!isprint(name[i])) {
+ repaired_name[i] = ' ';
+ }
+ }
+}
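
For reference, the helpers introduced by utascii.c above encode the ACPI "nameseg" rules: four characters, each an upper-case letter, digit, or underscore, with '!' tolerated only in the last position for the ASF! table. The sketch below is illustrative only and is not part of the patch; the demo_* names are invented for the example and simply mirror the character rules documented in the function headers above.

/*
 * Standalone sketch (not ACPICA code) of the nameseg validation rules
 * implemented by acpi_ut_valid_name_char()/acpi_ut_valid_nameseg().
 */
#include <stdio.h>

static int demo_valid_name_char(char c, unsigned int position)
{
	if ((c >= 'A' && c <= 'Z') || (c >= '0' && c <= '9') || c == '_')
		return 1;

	/* '!' is only accepted as the fourth character (ASF! table) */
	return (c == '!' && position == 3);
}

static int demo_valid_nameseg(const char *name)
{
	unsigned int i;

	/* A nameseg is exactly four characters, no NUL required */
	for (i = 0; i < 4; i++) {
		if (!demo_valid_name_char(name[i], i))
			return 0;
	}

	return 1;
}

int main(void)
{
	printf("DSDT: %d\n", demo_valid_nameseg("DSDT"));   /* 1 */
	printf("ASF!: %d\n", demo_valid_nameseg("ASF!"));   /* 1 */
	printf("dsdt: %d\n", demo_valid_nameseg("dsdt"));   /* 0 - lower case */
	printf("A!CD: %d\n", demo_valid_nameseg("A!CD"));   /* 0 - '!' not last */

	return 0;
}
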
diff --git a/drivers/acpi/acpica/utbuffer.c b/drivers/acpi/acpica/utbuffer.c
index 0cfb2b8edad5..bd31faf5da7c 100644
--- a/drivers/acpi/acpica/utbuffer.c
+++ b/drivers/acpi/acpica/utbuffer.c
@@ -106,31 +106,31 @@ void acpi_ut_dump_buffer(u8 *buffer, u32 count, u32 display, u32 base_offset)
default: /* Default is BYTE display */
acpi_os_printf("%02X ",
- buffer[(acpi_size) i + j]);
+ buffer[(acpi_size)i + j]);
break;
case DB_WORD_DISPLAY:
ACPI_MOVE_16_TO_32(&temp32,
- &buffer[(acpi_size) i + j]);
+ &buffer[(acpi_size)i + j]);
acpi_os_printf("%04X ", temp32);
break;
case DB_DWORD_DISPLAY:
ACPI_MOVE_32_TO_32(&temp32,
- &buffer[(acpi_size) i + j]);
+ &buffer[(acpi_size)i + j]);
acpi_os_printf("%08X ", temp32);
break;
case DB_QWORD_DISPLAY:
ACPI_MOVE_32_TO_32(&temp32,
- &buffer[(acpi_size) i + j]);
+ &buffer[(acpi_size)i + j]);
acpi_os_printf("%08X", temp32);
ACPI_MOVE_32_TO_32(&temp32,
- &buffer[(acpi_size) i + j +
+ &buffer[(acpi_size)i + j +
4]);
acpi_os_printf("%08X ", temp32);
break;
@@ -158,7 +158,7 @@ void acpi_ut_dump_buffer(u8 *buffer, u32 count, u32 display, u32 base_offset)
acpi_os_printf("// ");
}
- buf_char = buffer[(acpi_size) i + j];
+ buf_char = buffer[(acpi_size)i + j];
if (isprint(buf_char)) {
acpi_os_printf("%c", buf_char);
} else {
@@ -274,31 +274,31 @@ acpi_ut_dump_buffer_to_file(ACPI_FILE file,
default: /* Default is BYTE display */
acpi_ut_file_printf(file, "%02X ",
- buffer[(acpi_size) i + j]);
+ buffer[(acpi_size)i + j]);
break;
case DB_WORD_DISPLAY:
ACPI_MOVE_16_TO_32(&temp32,
- &buffer[(acpi_size) i + j]);
+ &buffer[(acpi_size)i + j]);
acpi_ut_file_printf(file, "%04X ", temp32);
break;
case DB_DWORD_DISPLAY:
ACPI_MOVE_32_TO_32(&temp32,
- &buffer[(acpi_size) i + j]);
+ &buffer[(acpi_size)i + j]);
acpi_ut_file_printf(file, "%08X ", temp32);
break;
case DB_QWORD_DISPLAY:
ACPI_MOVE_32_TO_32(&temp32,
- &buffer[(acpi_size) i + j]);
+ &buffer[(acpi_size)i + j]);
acpi_ut_file_printf(file, "%08X", temp32);
ACPI_MOVE_32_TO_32(&temp32,
- &buffer[(acpi_size) i + j +
+ &buffer[(acpi_size)i + j +
4]);
acpi_ut_file_printf(file, "%08X ", temp32);
break;
@@ -318,7 +318,7 @@ acpi_ut_dump_buffer_to_file(ACPI_FILE file,
return;
}
- buf_char = buffer[(acpi_size) i + j];
+ buf_char = buffer[(acpi_size)i + j];
if (isprint(buf_char)) {
acpi_ut_file_printf(file, "%c", buf_char);
} else {
diff --git a/drivers/acpi/acpica/utcache.c b/drivers/acpi/acpica/utcache.c
index f8e9978888e1..3b8d23ef351f 100644
--- a/drivers/acpi/acpica/utcache.c
+++ b/drivers/acpi/acpica/utcache.c
@@ -105,7 +105,7 @@ acpi_os_create_cache(char *cache_name,
*
******************************************************************************/
-acpi_status acpi_os_purge_cache(struct acpi_memory_list * cache)
+acpi_status acpi_os_purge_cache(struct acpi_memory_list *cache)
{
void *next;
acpi_status status;
@@ -151,7 +151,7 @@ acpi_status acpi_os_purge_cache(struct acpi_memory_list * cache)
*
******************************************************************************/
-acpi_status acpi_os_delete_cache(struct acpi_memory_list * cache)
+acpi_status acpi_os_delete_cache(struct acpi_memory_list *cache)
{
acpi_status status;
@@ -184,8 +184,7 @@ acpi_status acpi_os_delete_cache(struct acpi_memory_list * cache)
*
******************************************************************************/
-acpi_status
-acpi_os_release_object(struct acpi_memory_list * cache, void *object)
+acpi_status acpi_os_release_object(struct acpi_memory_list *cache, void *object)
{
acpi_status status;
diff --git a/drivers/acpi/acpica/utcopy.c b/drivers/acpi/acpica/utcopy.c
index 98d53e59ce55..82f971402d85 100644
--- a/drivers/acpi/acpica/utcopy.c
+++ b/drivers/acpi/acpica/utcopy.c
@@ -53,7 +53,7 @@ ACPI_MODULE_NAME("utcopy")
static acpi_status
acpi_ut_copy_isimple_to_esimple(union acpi_operand_object *internal_object,
union acpi_object *external_object,
- u8 * data_space, acpi_size * buffer_space_used);
+ u8 *data_space, acpi_size *buffer_space_used);
static acpi_status
acpi_ut_copy_ielement_to_ielement(u8 object_type,
@@ -63,7 +63,7 @@ acpi_ut_copy_ielement_to_ielement(u8 object_type,
static acpi_status
acpi_ut_copy_ipackage_to_epackage(union acpi_operand_object *internal_object,
- u8 * buffer, acpi_size * space_used);
+ u8 *buffer, acpi_size *space_used);
static acpi_status
acpi_ut_copy_esimple_to_isimple(union acpi_object *user_obj,
@@ -111,7 +111,7 @@ acpi_ut_copy_ipackage_to_ipackage(union acpi_operand_object *source_obj,
static acpi_status
acpi_ut_copy_isimple_to_esimple(union acpi_operand_object *internal_object,
union acpi_object *external_object,
- u8 * data_space, acpi_size * buffer_space_used)
+ u8 *data_space, acpi_size *buffer_space_used)
{
acpi_status status = AE_OK;
@@ -151,7 +151,7 @@ acpi_ut_copy_isimple_to_esimple(union acpi_operand_object *internal_object,
memcpy((void *)data_space,
(void *)internal_object->string.pointer,
- (acpi_size) internal_object->string.length + 1);
+ (acpi_size)internal_object->string.length + 1);
break;
case ACPI_TYPE_BUFFER:
@@ -331,7 +331,7 @@ acpi_ut_copy_ielement_to_eelement(u8 object_type,
static acpi_status
acpi_ut_copy_ipackage_to_epackage(union acpi_operand_object *internal_object,
- u8 * buffer, acpi_size * space_used)
+ u8 *buffer, acpi_size *space_used)
{
union acpi_object *external_object;
acpi_status status;
@@ -362,7 +362,7 @@ acpi_ut_copy_ipackage_to_epackage(union acpi_operand_object *internal_object,
* Leave room for an array of ACPI_OBJECTS in the buffer
* and move the free space past it
*/
- info.length += (acpi_size) external_object->package.count *
+ info.length += (acpi_size)external_object->package.count *
ACPI_ROUND_UP_TO_NATIVE_WORD(sizeof(union acpi_object));
info.free_space += external_object->package.count *
ACPI_ROUND_UP_TO_NATIVE_WORD(sizeof(union acpi_object));
@@ -738,7 +738,7 @@ acpi_ut_copy_simple_object(union acpi_operand_object *source_desc,
*/
if (source_desc->string.pointer) {
dest_desc->string.pointer =
- ACPI_ALLOCATE((acpi_size) source_desc->string.
+ ACPI_ALLOCATE((acpi_size)source_desc->string.
length + 1);
if (!dest_desc->string.pointer) {
return (AE_NO_MEMORY);
@@ -748,7 +748,7 @@ acpi_ut_copy_simple_object(union acpi_operand_object *source_desc,
memcpy(dest_desc->string.pointer,
source_desc->string.pointer,
- (acpi_size) source_desc->string.length + 1);
+ (acpi_size)source_desc->string.length + 1);
}
break;
diff --git a/drivers/acpi/acpica/utdebug.c b/drivers/acpi/acpica/utdebug.c
index 1cfc5f69b033..574422205005 100644
--- a/drivers/acpi/acpica/utdebug.c
+++ b/drivers/acpi/acpica/utdebug.c
@@ -51,13 +51,9 @@
ACPI_MODULE_NAME("utdebug")
#ifdef ACPI_DEBUG_OUTPUT
-static acpi_thread_id acpi_gbl_prev_thread_id = (acpi_thread_id) 0xFFFFFFFF;
-static char *acpi_gbl_fn_entry_str = "----Entry";
-static char *acpi_gbl_fn_exit_str = "----Exit-";
-
-/* Local prototypes */
-
-static const char *acpi_ut_trim_function_name(const char *function_name);
+static acpi_thread_id acpi_gbl_previous_thread_id = (acpi_thread_id) 0xFFFFFFFF;
+static const char *acpi_gbl_function_entry_prefix = "----Entry";
+static const char *acpi_gbl_function_exit_prefix = "----Exit-";
/*******************************************************************************
*
@@ -178,14 +174,14 @@ acpi_debug_print(u32 requested_debug_level,
* Thread tracking and context switch notification
*/
thread_id = acpi_os_get_thread_id();
- if (thread_id != acpi_gbl_prev_thread_id) {
+ if (thread_id != acpi_gbl_previous_thread_id) {
if (ACPI_LV_THREADS & acpi_dbg_level) {
acpi_os_printf
("\n**** Context Switch from TID %u to TID %u ****\n\n",
- (u32)acpi_gbl_prev_thread_id, (u32)thread_id);
+ (u32)acpi_gbl_previous_thread_id, (u32)thread_id);
}
- acpi_gbl_prev_thread_id = thread_id;
+ acpi_gbl_previous_thread_id = thread_id;
acpi_gbl_nesting_level = 0;
}
@@ -287,7 +283,8 @@ acpi_ut_trace(u32 line_number,
if (ACPI_IS_DEBUG_ENABLED(ACPI_LV_FUNCTIONS, component_id)) {
acpi_debug_print(ACPI_LV_FUNCTIONS,
line_number, function_name, module_name,
- component_id, "%s\n", acpi_gbl_fn_entry_str);
+ component_id, "%s\n",
+ acpi_gbl_function_entry_prefix);
}
}
@@ -312,7 +309,8 @@ ACPI_EXPORT_SYMBOL(acpi_ut_trace)
void
acpi_ut_trace_ptr(u32 line_number,
const char *function_name,
- const char *module_name, u32 component_id, void *pointer)
+ const char *module_name,
+ u32 component_id, const void *pointer)
{
acpi_gbl_nesting_level++;
@@ -323,8 +321,8 @@ acpi_ut_trace_ptr(u32 line_number,
if (ACPI_IS_DEBUG_ENABLED(ACPI_LV_FUNCTIONS, component_id)) {
acpi_debug_print(ACPI_LV_FUNCTIONS,
line_number, function_name, module_name,
- component_id, "%s %p\n", acpi_gbl_fn_entry_str,
- pointer);
+ component_id, "%s %p\n",
+ acpi_gbl_function_entry_prefix, pointer);
}
}
@@ -348,7 +346,7 @@ acpi_ut_trace_ptr(u32 line_number,
void
acpi_ut_trace_str(u32 line_number,
const char *function_name,
- const char *module_name, u32 component_id, char *string)
+ const char *module_name, u32 component_id, const char *string)
{
acpi_gbl_nesting_level++;
@@ -359,8 +357,8 @@ acpi_ut_trace_str(u32 line_number,
if (ACPI_IS_DEBUG_ENABLED(ACPI_LV_FUNCTIONS, component_id)) {
acpi_debug_print(ACPI_LV_FUNCTIONS,
line_number, function_name, module_name,
- component_id, "%s %s\n", acpi_gbl_fn_entry_str,
- string);
+ component_id, "%s %s\n",
+ acpi_gbl_function_entry_prefix, string);
}
}
@@ -396,7 +394,7 @@ acpi_ut_trace_u32(u32 line_number,
acpi_debug_print(ACPI_LV_FUNCTIONS,
line_number, function_name, module_name,
component_id, "%s %08X\n",
- acpi_gbl_fn_entry_str, integer);
+ acpi_gbl_function_entry_prefix, integer);
}
}
@@ -427,7 +425,8 @@ acpi_ut_exit(u32 line_number,
if (ACPI_IS_DEBUG_ENABLED(ACPI_LV_FUNCTIONS, component_id)) {
acpi_debug_print(ACPI_LV_FUNCTIONS,
line_number, function_name, module_name,
- component_id, "%s\n", acpi_gbl_fn_exit_str);
+ component_id, "%s\n",
+ acpi_gbl_function_exit_prefix);
}
if (acpi_gbl_nesting_level) {
@@ -467,14 +466,14 @@ acpi_ut_status_exit(u32 line_number,
acpi_debug_print(ACPI_LV_FUNCTIONS,
line_number, function_name,
module_name, component_id, "%s %s\n",
- acpi_gbl_fn_exit_str,
+ acpi_gbl_function_exit_prefix,
acpi_format_exception(status));
} else {
acpi_debug_print(ACPI_LV_FUNCTIONS,
line_number, function_name,
module_name, component_id,
"%s ****Exception****: %s\n",
- acpi_gbl_fn_exit_str,
+ acpi_gbl_function_exit_prefix,
acpi_format_exception(status));
}
}
@@ -514,7 +513,7 @@ acpi_ut_value_exit(u32 line_number,
acpi_debug_print(ACPI_LV_FUNCTIONS,
line_number, function_name, module_name,
component_id, "%s %8.8X%8.8X\n",
- acpi_gbl_fn_exit_str,
+ acpi_gbl_function_exit_prefix,
ACPI_FORMAT_UINT64(value));
}
@@ -552,8 +551,8 @@ acpi_ut_ptr_exit(u32 line_number,
if (ACPI_IS_DEBUG_ENABLED(ACPI_LV_FUNCTIONS, component_id)) {
acpi_debug_print(ACPI_LV_FUNCTIONS,
line_number, function_name, module_name,
- component_id, "%s %p\n", acpi_gbl_fn_exit_str,
- ptr);
+ component_id, "%s %p\n",
+ acpi_gbl_function_exit_prefix, ptr);
}
if (acpi_gbl_nesting_level) {
diff --git a/drivers/acpi/acpica/utdecode.c b/drivers/acpi/acpica/utdecode.c
index 6ba65b02550c..efd7988e34cb 100644
--- a/drivers/acpi/acpica/utdecode.c
+++ b/drivers/acpi/acpica/utdecode.c
@@ -446,7 +446,7 @@ const char *acpi_ut_get_mutex_name(u32 mutex_id)
/* Names for Notify() values, used for debug output */
-static const char *acpi_gbl_generic_notify[ACPI_NOTIFY_MAX + 1] = {
+static const char *acpi_gbl_generic_notify[ACPI_GENERIC_NOTIFY_MAX + 1] = {
/* 00 */ "Bus Check",
/* 01 */ "Device Check",
/* 02 */ "Device Wake",
@@ -459,49 +459,53 @@ static const char *acpi_gbl_generic_notify[ACPI_NOTIFY_MAX + 1] = {
/* 09 */ "Device PLD Check",
/* 0A */ "Reserved",
/* 0B */ "System Locality Update",
- /* 0C */ "Shutdown Request",
+ /* 0C */ "Shutdown Request",
+ /* Reserved in ACPI 6.0 */
/* 0D */ "System Resource Affinity Update"
};
-static const char *acpi_gbl_device_notify[4] = {
+static const char *acpi_gbl_device_notify[5] = {
/* 80 */ "Status Change",
/* 81 */ "Information Change",
/* 82 */ "Device-Specific Change",
- /* 83 */ "Device-Specific Change"
+ /* 83 */ "Device-Specific Change",
+ /* 84 */ "Reserved"
};
-static const char *acpi_gbl_processor_notify[4] = {
+static const char *acpi_gbl_processor_notify[5] = {
/* 80 */ "Performance Capability Change",
/* 81 */ "C-State Change",
/* 82 */ "Throttling Capability Change",
- /* 83 */ "Device-Specific Change"
+ /* 83 */ "Guaranteed Change",
+ /* 84 */ "Minimum Excursion"
};
-static const char *acpi_gbl_thermal_notify[4] = {
+static const char *acpi_gbl_thermal_notify[5] = {
/* 80 */ "Thermal Status Change",
/* 81 */ "Thermal Trip Point Change",
/* 82 */ "Thermal Device List Change",
- /* 83 */ "Thermal Relationship Change"
+ /* 83 */ "Thermal Relationship Change",
+ /* 84 */ "Reserved"
};
const char *acpi_ut_get_notify_name(u32 notify_value, acpi_object_type type)
{
- /* 00 - 0D are common to all object types */
+ /* 00 - 0D are "common to all object types" (from ACPI Spec) */
- if (notify_value <= ACPI_NOTIFY_MAX) {
+ if (notify_value <= ACPI_GENERIC_NOTIFY_MAX) {
return (acpi_gbl_generic_notify[notify_value]);
}
- /* 0D - 7F are reserved */
+ /* 0E - 7F are reserved */
if (notify_value <= ACPI_MAX_SYS_NOTIFY) {
return ("Reserved");
}
- /* 80 - 83 are per-object-type */
+ /* 80 - 84 are per-object-type */
- if (notify_value <= 0x83) {
+ if (notify_value <= ACPI_SPECIFIC_NOTIFY_MAX) {
switch (type) {
case ACPI_TYPE_ANY:
case ACPI_TYPE_DEVICE:
diff --git a/drivers/acpi/acpica/uteval.c b/drivers/acpi/acpica/uteval.c
index 17b9f3e6e1e1..7bad13f2e518 100644
--- a/drivers/acpi/acpica/uteval.c
+++ b/drivers/acpi/acpica/uteval.c
@@ -69,7 +69,7 @@ ACPI_MODULE_NAME("uteval")
acpi_status
acpi_ut_evaluate_object(struct acpi_namespace_node *prefix_node,
- char *path,
+ const char *path,
u32 expected_return_btypes,
union acpi_operand_object **return_desc)
{
@@ -204,7 +204,7 @@ cleanup:
******************************************************************************/
acpi_status
-acpi_ut_evaluate_numeric_object(char *object_name,
+acpi_ut_evaluate_numeric_object(const char *object_name,
struct acpi_namespace_node *device_node,
u64 *value)
{
diff --git a/drivers/acpi/acpica/utglobal.c b/drivers/acpi/acpica/utglobal.c
index 48fffcfe9911..dd3fd7f97f8e 100644
--- a/drivers/acpi/acpica/utglobal.c
+++ b/drivers/acpi/acpica/utglobal.c
@@ -80,6 +80,11 @@ const char *acpi_gbl_highest_dstate_names[ACPI_NUM_sx_d_METHODS] = {
"_S4D"
};
+/* Hex-to-ascii */
+
+const char acpi_gbl_lower_hex_digits[] = "0123456789abcdef";
+const char acpi_gbl_upper_hex_digits[] = "0123456789ABCDEF";
+
/*******************************************************************************
*
* Namespace globals
@@ -221,6 +226,49 @@ struct acpi_fixed_event_info acpi_gbl_fixed_event_info[ACPI_NUM_FIXED_EVENTS] =
};
#endif /* !ACPI_REDUCED_HARDWARE */
+#if defined (ACPI_DISASSEMBLER) || defined (ACPI_ASL_COMPILER)
+
+/* to_pld macro: compile/disassemble strings */
+
+const char *acpi_gbl_pld_panel_list[] = {
+ "TOP",
+ "BOTTOM",
+ "LEFT",
+ "RIGHT",
+ "FRONT",
+ "BACK",
+ "UNKNOWN",
+ NULL
+};
+
+const char *acpi_gbl_pld_vertical_position_list[] = {
+ "UPPER",
+ "CENTER",
+ "LOWER",
+ NULL
+};
+
+const char *acpi_gbl_pld_horizontal_position_list[] = {
+ "LEFT",
+ "CENTER",
+ "RIGHT",
+ NULL
+};
+
+const char *acpi_gbl_pld_shape_list[] = {
+ "ROUND",
+ "OVAL",
+ "SQUARE",
+ "VERTICALRECTANGLE",
+ "HORIZONTALRECTANGLE",
+ "VERTICALTRAPEZOID",
+ "HORIZONTALTRAPEZOID",
+ "UNKNOWN",
+ "CHAMFERED",
+ NULL
+};
+#endif
+
/* Public globals */
ACPI_EXPORT_SYMBOL(acpi_gbl_FADT)
diff --git a/drivers/acpi/acpica/utids.c b/drivers/acpi/acpica/utids.c
index 6fb4ec365272..f7cd2d52643b 100644
--- a/drivers/acpi/acpica/utids.c
+++ b/drivers/acpi/acpica/utids.c
@@ -95,7 +95,7 @@ acpi_ut_execute_HID(struct acpi_namespace_node *device_node,
hid =
ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_pnp_device_id) +
- (acpi_size) length);
+ (acpi_size)length);
if (!hid) {
status = AE_NO_MEMORY;
goto cleanup;
@@ -173,7 +173,7 @@ acpi_ut_execute_UID(struct acpi_namespace_node *device_node,
uid =
ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_pnp_device_id) +
- (acpi_size) length);
+ (acpi_size)length);
if (!uid) {
status = AE_NO_MEMORY;
goto cleanup;
@@ -309,7 +309,7 @@ acpi_ut_execute_CID(struct acpi_namespace_node *device_node,
/* Area for CID strings starts after the CID PNP_DEVICE_ID array */
next_id_string = ACPI_CAST_PTR(char, cid_list->ids) +
- ((acpi_size) count * sizeof(struct acpi_pnp_device_id));
+ ((acpi_size)count * sizeof(struct acpi_pnp_device_id));
/* Copy/convert the CIDs to the return buffer */
@@ -413,7 +413,7 @@ acpi_ut_execute_CLS(struct acpi_namespace_node *device_node,
cls =
ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_pnp_device_id) +
- (acpi_size) length);
+ (acpi_size)length);
if (!cls) {
status = AE_NO_MEMORY;
goto cleanup;
diff --git a/drivers/acpi/acpica/utmath.c b/drivers/acpi/acpica/utmath.c
index 667372093de1..2d6530ee7e51 100644
--- a/drivers/acpi/acpica/utmath.c
+++ b/drivers/acpi/acpica/utmath.c
@@ -236,8 +236,8 @@ acpi_ut_divide(u64 in_dividend,
}
remainder.full = remainder.full - dividend.full;
- remainder.part.hi = (u32) - ((s32) remainder.part.hi);
- remainder.part.lo = (u32) - ((s32) remainder.part.lo);
+ remainder.part.hi = (u32)-((s32)remainder.part.hi);
+ remainder.part.lo = (u32)-((s32)remainder.part.lo);
if (remainder.part.lo) {
remainder.part.hi--;
diff --git a/drivers/acpi/acpica/utmisc.c b/drivers/acpi/acpica/utmisc.c
index d938c27cc6cf..389de3bd1ff1 100644
--- a/drivers/acpi/acpica/utmisc.c
+++ b/drivers/acpi/acpica/utmisc.c
@@ -361,7 +361,7 @@ acpi_ut_walk_package_tree(union acpi_operand_object *source_object,
void
acpi_ut_display_init_pathname(u8 type,
struct acpi_namespace_node *obj_handle,
- char *path)
+ const char *path)
{
acpi_status status;
struct acpi_buffer buffer;
diff --git a/drivers/acpi/acpica/utnonansi.c b/drivers/acpi/acpica/utnonansi.c
index d5c3adf19bd0..3465fe2c5a5c 100644
--- a/drivers/acpi/acpica/utnonansi.c
+++ b/drivers/acpi/acpica/utnonansi.c
@@ -205,37 +205,41 @@ acpi_ut_safe_strncat(char *dest,
*
* FUNCTION: acpi_ut_strtoul64
*
- * PARAMETERS: string - Null terminated string
- * base - Radix of the string: 16 or ACPI_ANY_BASE;
- * ACPI_ANY_BASE means 'in behalf of to_integer'
- * ret_integer - Where the converted integer is returned
+ * PARAMETERS: string - Null terminated string
+ * base - Radix of the string: 16 or 10 or
+ * ACPI_ANY_BASE
+ * max_integer_byte_width - Maximum allowable integer, in bytes:
+ * 4 or 8 (32 or 64 bits)
+ * ret_integer - Where the converted integer is
+ * returned
*
* RETURN: Status and Converted value
*
* DESCRIPTION: Convert a string into an unsigned value. Performs either a
- * 32-bit or 64-bit conversion, depending on the current mode
- * of the interpreter.
+ * 32-bit or 64-bit conversion, depending on the input integer
+ * size (often the current mode of the interpreter).
*
- * NOTES: acpi_gbl_integer_byte_width should be set to the proper width.
+ * NOTES: Negative numbers are not supported, as they are not supported
+ * by ACPI.
+ *
+ * acpi_gbl_integer_byte_width should be set to the proper width.
* For the core ACPICA code, this width depends on the DSDT
- * version. For iASL, the default byte width is always 8.
+ * version. For iASL, the default byte width is always 8 for the
+ * parser, but error checking is performed later to flag cases
+ * where a 64-bit constant is defined in a 32-bit DSDT/SSDT.
*
* Does not support Octal strings, not needed at this time.
*
- * There is an earlier version of the function after this one,
- * below. It is slightly different than this one, and the two
- * may eventually may need to be merged. (01/2016).
- *
******************************************************************************/
-acpi_status acpi_ut_strtoul64(char *string, u32 base, u64 *ret_integer)
+acpi_status
+acpi_ut_strtoul64(char *string,
+ u32 base, u32 max_integer_byte_width, u64 *ret_integer)
{
u32 this_digit = 0;
u64 return_value = 0;
u64 quotient;
u64 dividend;
- u32 to_integer_op = (base == ACPI_ANY_BASE);
- u32 mode32 = (acpi_gbl_integer_byte_width == 4);
u8 valid_digits = 0;
u8 sign_of0x = 0;
u8 term = 0;
@@ -244,6 +248,7 @@ acpi_status acpi_ut_strtoul64(char *string, u32 base, u64 *ret_integer)
switch (base) {
case ACPI_ANY_BASE:
+ case 10:
case 16:
break;
@@ -265,9 +270,9 @@ acpi_status acpi_ut_strtoul64(char *string, u32 base, u64 *ret_integer)
string++;
}
- if (to_integer_op) {
+ if (base == ACPI_ANY_BASE) {
/*
- * Base equal to ACPI_ANY_BASE means 'ToInteger operation case'.
+ * Base equal to ACPI_ANY_BASE means 'Either decimal or hex'.
* We need to determine if it is decimal or hexadecimal.
*/
if ((*string == '0') && (tolower((int)*(string + 1)) == 'x')) {
@@ -284,7 +289,7 @@ acpi_status acpi_ut_strtoul64(char *string, u32 base, u64 *ret_integer)
/* Any string left? Check that '0x' is not followed by white space. */
if (!(*string) || isspace((int)*string) || *string == '\t') {
- if (to_integer_op) {
+ if (base == ACPI_ANY_BASE) {
goto error_exit;
} else {
goto all_done;
@@ -292,10 +297,11 @@ acpi_status acpi_ut_strtoul64(char *string, u32 base, u64 *ret_integer)
}
/*
- * Perform a 32-bit or 64-bit conversion, depending upon the current
- * execution mode of the interpreter
+ * Perform a 32-bit or 64-bit conversion, depending upon the input
+ * byte width
*/
- dividend = (mode32) ? ACPI_UINT32_MAX : ACPI_UINT64_MAX;
+ dividend = (max_integer_byte_width <= ACPI_MAX32_BYTE_WIDTH) ?
+ ACPI_UINT32_MAX : ACPI_UINT64_MAX;
/* Main loop: convert the string to a 32- or 64-bit integer */
@@ -323,7 +329,7 @@ acpi_status acpi_ut_strtoul64(char *string, u32 base, u64 *ret_integer)
}
if (term) {
- if (to_integer_op) {
+ if (base == ACPI_ANY_BASE) {
goto error_exit;
} else {
break;
@@ -338,12 +344,13 @@ acpi_status acpi_ut_strtoul64(char *string, u32 base, u64 *ret_integer)
valid_digits++;
- if (sign_of0x
- && ((valid_digits > 16)
- || ((valid_digits > 8) && mode32))) {
+ if (sign_of0x && ((valid_digits > 16) ||
+ ((valid_digits > 8)
+ && (max_integer_byte_width <=
+ ACPI_MAX32_BYTE_WIDTH)))) {
/*
* This is to_integer operation case.
- * No any restrictions for string-to-integer conversion,
+ * No restrictions for string-to-integer conversion,
* see ACPI spec.
*/
goto error_exit;
@@ -355,7 +362,7 @@ acpi_status acpi_ut_strtoul64(char *string, u32 base, u64 *ret_integer)
&quotient, NULL);
if (return_value > quotient) {
- if (to_integer_op) {
+ if (base == ACPI_ANY_BASE) {
goto error_exit;
} else {
break;
@@ -378,7 +385,8 @@ all_done:
return_ACPI_STATUS(AE_OK);
error_exit:
- /* Base was set/validated above */
+
+ /* Base was set/validated above (10 or 16) */
if (base == 10) {
return_ACPI_STATUS(AE_BAD_DECIMAL_CONSTANT);
@@ -388,8 +396,7 @@ error_exit:
}
#ifdef _OBSOLETE_FUNCTIONS
-/* TBD: use version in ACPICA main code base? */
-/* DONE: 01/2016 */
+/* Removed: 01/2016 */
/*******************************************************************************
*
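
As documented in the reworked header above, acpi_ut_strtoul64() now takes an explicit max_integer_byte_width (4 or 8) instead of deriving the width from interpreter state, and uses it to choose the overflow limit (ACPI_UINT32_MAX vs ACPI_UINT64_MAX). The sketch below is illustrative only, not ACPICA code; demo_strtoul_hex is an invented name that shows the same width-dependent clamping for a plain hexadecimal parser.

/*
 * Standalone sketch (not ACPICA code): a hex string-to-integer conversion
 * whose overflow limit depends on the requested integer byte width,
 * analogous to the max_integer_byte_width parameter added above.
 */
#include <stdint.h>
#include <stdio.h>
#include <ctype.h>

static int demo_strtoul_hex(const char *s, unsigned int max_byte_width,
			    uint64_t *out)
{
	/* 4-byte width clamps to 32 bits, anything larger allows 64 bits */
	uint64_t limit = (max_byte_width <= 4) ? UINT32_MAX : UINT64_MAX;
	uint64_t value = 0;

	for (; *s; s++) {
		if (!isxdigit((unsigned char)*s))
			break;

		unsigned int digit = isdigit((unsigned char)*s) ?
		    (unsigned int)(*s - '0') :
		    (unsigned int)(tolower((unsigned char)*s) - 'a' + 10);

		/* Reject the digit if accumulating it would exceed the limit */
		if (value > (limit - digit) / 16)
			return -1;

		value = value * 16 + digit;
	}

	*out = value;
	return 0;
}

int main(void)
{
	uint64_t v;

	/* Fits in 32 bits: accepted with either width */
	printf("%d\n", demo_strtoul_hex("DEADBEEF", 4, &v));    /* 0 */

	/* Needs 64 bits: rejected with a 4-byte width, accepted with 8 */
	printf("%d\n", demo_strtoul_hex("1DEADBEEF", 4, &v));   /* -1 */
	printf("%d\n", demo_strtoul_hex("1DEADBEEF", 8, &v));   /* 0 */

	return 0;
}
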
diff --git a/drivers/acpi/acpica/utobject.c b/drivers/acpi/acpica/utobject.c
index edad3f043ab9..72b9a062bbab 100644
--- a/drivers/acpi/acpica/utobject.c
+++ b/drivers/acpi/acpica/utobject.c
@@ -51,11 +51,11 @@ ACPI_MODULE_NAME("utobject")
/* Local prototypes */
static acpi_status
acpi_ut_get_simple_object_size(union acpi_operand_object *obj,
- acpi_size * obj_length);
+ acpi_size *obj_length);
static acpi_status
acpi_ut_get_package_object_size(union acpi_operand_object *obj,
- acpi_size * obj_length);
+ acpi_size *obj_length);
static acpi_status
acpi_ut_get_element_length(u8 object_type,
@@ -177,7 +177,7 @@ union acpi_operand_object *acpi_ut_create_package_object(u32 count)
* Create the element array. Count+1 allows the array to be null
* terminated.
*/
- package_elements = ACPI_ALLOCATE_ZEROED(((acpi_size) count +
+ package_elements = ACPI_ALLOCATE_ZEROED(((acpi_size)count +
1) * sizeof(void *));
if (!package_elements) {
ACPI_FREE(package_desc);
@@ -454,7 +454,7 @@ void acpi_ut_delete_object_desc(union acpi_operand_object *object)
static acpi_status
acpi_ut_get_simple_object_size(union acpi_operand_object *internal_object,
- acpi_size * obj_length)
+ acpi_size *obj_length)
{
acpi_size length;
acpi_size size;
@@ -495,12 +495,12 @@ acpi_ut_get_simple_object_size(union acpi_operand_object *internal_object,
switch (internal_object->common.type) {
case ACPI_TYPE_STRING:
- length += (acpi_size) internal_object->string.length + 1;
+ length += (acpi_size)internal_object->string.length + 1;
break;
case ACPI_TYPE_BUFFER:
- length += (acpi_size) internal_object->buffer.length;
+ length += (acpi_size)internal_object->buffer.length;
break;
case ACPI_TYPE_INTEGER:
@@ -640,7 +640,7 @@ acpi_ut_get_element_length(u8 object_type,
static acpi_status
acpi_ut_get_package_object_size(union acpi_operand_object *internal_object,
- acpi_size * obj_length)
+ acpi_size *obj_length)
{
acpi_status status;
struct acpi_pkg_info info;
@@ -665,7 +665,7 @@ acpi_ut_get_package_object_size(union acpi_operand_object *internal_object,
*/
info.length +=
ACPI_ROUND_UP_TO_NATIVE_WORD(sizeof(union acpi_object)) *
- (acpi_size) info.num_packages;
+ (acpi_size)info.num_packages;
/* Return the total package length */
@@ -689,7 +689,7 @@ acpi_ut_get_package_object_size(union acpi_operand_object *internal_object,
acpi_status
acpi_ut_get_object_size(union acpi_operand_object *internal_object,
- acpi_size * obj_length)
+ acpi_size *obj_length)
{
acpi_status status;
diff --git a/drivers/acpi/acpica/utosi.c b/drivers/acpi/acpica/utosi.c
index b5cfe577fabf..3f5fed670271 100644
--- a/drivers/acpi/acpica/utosi.c
+++ b/drivers/acpi/acpica/utosi.c
@@ -150,7 +150,7 @@ acpi_status acpi_ut_initialize_interfaces(void)
i < (ACPI_ARRAY_LENGTH(acpi_default_supported_interfaces) - 1);
i++) {
acpi_default_supported_interfaces[i].next =
- &acpi_default_supported_interfaces[(acpi_size) i + 1];
+ &acpi_default_supported_interfaces[(acpi_size)i + 1];
}
acpi_os_release_mutex(acpi_gbl_osi_mutex);
@@ -397,7 +397,7 @@ struct acpi_interface_info *acpi_ut_get_interface(acpi_string interface_name)
*
******************************************************************************/
-acpi_status acpi_ut_osi_implementation(struct acpi_walk_state * walk_state)
+acpi_status acpi_ut_osi_implementation(struct acpi_walk_state *walk_state)
{
union acpi_operand_object *string_desc;
union acpi_operand_object *return_desc;
diff --git a/drivers/acpi/acpica/utownerid.c b/drivers/acpi/acpica/utownerid.c
index 813520ab8ca4..3cd573c5f7f9 100644
--- a/drivers/acpi/acpica/utownerid.c
+++ b/drivers/acpi/acpica/utownerid.c
@@ -61,7 +61,7 @@ ACPI_MODULE_NAME("utownerid")
* when the method exits or the table is unloaded.
*
******************************************************************************/
-acpi_status acpi_ut_allocate_owner_id(acpi_owner_id * owner_id)
+acpi_status acpi_ut_allocate_owner_id(acpi_owner_id *owner_id)
{
u32 i;
u32 j;
@@ -122,7 +122,7 @@ acpi_status acpi_ut_allocate_owner_id(acpi_owner_id * owner_id)
* permanently allocated (prevents +1 overflow)
*/
*owner_id =
- (acpi_owner_id) ((k + 1) + ACPI_MUL_32(j));
+ (acpi_owner_id)((k + 1) + ACPI_MUL_32(j));
ACPI_DEBUG_PRINT((ACPI_DB_VALUES,
"Allocated OwnerId: %2.2X\n",
@@ -167,7 +167,7 @@ exit:
*
******************************************************************************/
-void acpi_ut_release_owner_id(acpi_owner_id * owner_id_ptr)
+void acpi_ut_release_owner_id(acpi_owner_id *owner_id_ptr)
{
acpi_owner_id owner_id = *owner_id_ptr;
acpi_status status;
diff --git a/drivers/acpi/acpica/utprint.c b/drivers/acpi/acpica/utprint.c
index 8c218ad787cd..dd084cf52502 100644
--- a/drivers/acpi/acpica/utprint.c
+++ b/drivers/acpi/acpica/utprint.c
@@ -67,11 +67,6 @@ static char *acpi_ut_format_number(char *string,
static char *acpi_ut_put_number(char *string, u64 number, u8 base, u8 upper);
-/* Module globals */
-
-static const char acpi_gbl_lower_hex_digits[] = "0123456789abcdef";
-static const char acpi_gbl_upper_hex_digits[] = "0123456789ABCDEF";
-
/*******************************************************************************
*
* FUNCTION: acpi_ut_bound_string_length
@@ -269,9 +264,9 @@ static char *acpi_ut_format_number(char *string,
sign = '\0';
if (type & ACPI_FORMAT_SIGN) {
- if ((s64) number < 0) {
+ if ((s64)number < 0) {
sign = '-';
- number = -(s64) number;
+ number = -(s64)number;
width--;
} else if (type & ACPI_FORMAT_SIGN_PLUS) {
sign = '+';
@@ -409,7 +404,7 @@ acpi_ut_vsnprintf(char *string,
width = -1;
if (isdigit((int)*format)) {
format = acpi_ut_scan_number(format, &number);
- width = (s32) number;
+ width = (s32)number;
} else if (*format == '*') {
++format;
width = va_arg(args, int);
@@ -426,7 +421,7 @@ acpi_ut_vsnprintf(char *string,
++format;
if (isdigit((int)*format)) {
format = acpi_ut_scan_number(format, &number);
- precision = (s32) number;
+ precision = (s32)number;
} else if (*format == '*') {
++format;
precision = va_arg(args, int);
@@ -555,17 +550,17 @@ acpi_ut_vsnprintf(char *string,
if (qualifier == 'L') {
number = va_arg(args, u64);
if (type & ACPI_FORMAT_SIGN) {
- number = (s64) number;
+ number = (s64)number;
}
} else if (qualifier == 'l') {
number = va_arg(args, unsigned long);
if (type & ACPI_FORMAT_SIGN) {
- number = (s32) number;
+ number = (s32)number;
}
} else if (qualifier == 'h') {
number = (u16)va_arg(args, int);
if (type & ACPI_FORMAT_SIGN) {
- number = (s16) number;
+ number = (s16)number;
}
} else {
number = va_arg(args, unsigned int);
diff --git a/drivers/acpi/acpica/utstring.c b/drivers/acpi/acpica/utstring.c
index 0b005728db4e..288913a0e709 100644
--- a/drivers/acpi/acpica/utstring.c
+++ b/drivers/acpi/acpica/utstring.c
@@ -130,7 +130,7 @@ void acpi_ut_print_string(char *string, u16 max_length)
} else {
/* All others will be Hex escapes */
- acpi_os_printf("\\x%2.2X", (s32) string[i]);
+ acpi_os_printf("\\x%2.2X", (s32)string[i]);
}
break;
}
@@ -145,73 +145,6 @@ void acpi_ut_print_string(char *string, u16 max_length)
/*******************************************************************************
*
- * FUNCTION: acpi_ut_valid_acpi_char
- *
- * PARAMETERS: char - The character to be examined
- * position - Byte position (0-3)
- *
- * RETURN: TRUE if the character is valid, FALSE otherwise
- *
- * DESCRIPTION: Check for a valid ACPI character. Must be one of:
- * 1) Upper case alpha
- * 2) numeric
- * 3) underscore
- *
- * We allow a '!' as the last character because of the ASF! table
- *
- ******************************************************************************/
-
-u8 acpi_ut_valid_acpi_char(char character, u32 position)
-{
-
- if (!((character >= 'A' && character <= 'Z') ||
- (character >= '0' && character <= '9') || (character == '_'))) {
-
- /* Allow a '!' in the last position */
-
- if (character == '!' && position == 3) {
- return (TRUE);
- }
-
- return (FALSE);
- }
-
- return (TRUE);
-}
-
-/*******************************************************************************
- *
- * FUNCTION: acpi_ut_valid_acpi_name
- *
- * PARAMETERS: name - The name to be examined. Does not have to
- * be NULL terminated string.
- *
- * RETURN: TRUE if the name is valid, FALSE otherwise
- *
- * DESCRIPTION: Check for a valid ACPI name. Each character must be one of:
- * 1) Upper case alpha
- * 2) numeric
- * 3) underscore
- *
- ******************************************************************************/
-
-u8 acpi_ut_valid_acpi_name(char *name)
-{
- u32 i;
-
- ACPI_FUNCTION_ENTRY();
-
- for (i = 0; i < ACPI_NAME_SIZE; i++) {
- if (!acpi_ut_valid_acpi_char(name[i], i)) {
- return (FALSE);
- }
- }
-
- return (TRUE);
-}
-
-/*******************************************************************************
- *
* FUNCTION: acpi_ut_repair_name
*
* PARAMETERS: name - The ACPI name to be repaired
@@ -253,7 +186,7 @@ void acpi_ut_repair_name(char *name)
/* Check each character in the name */
for (i = 0; i < ACPI_NAME_SIZE; i++) {
- if (acpi_ut_valid_acpi_char(name[i], i)) {
+ if (acpi_ut_valid_name_char(name[i], i)) {
continue;
}
diff --git a/drivers/acpi/acpica/uttrack.c b/drivers/acpi/acpica/uttrack.c
index 60c406a8efcb..0df07dfa53b6 100644
--- a/drivers/acpi/acpica/uttrack.c
+++ b/drivers/acpi/acpica/uttrack.c
@@ -90,7 +90,7 @@ acpi_ut_remove_allocation(struct acpi_debug_mem_block *address,
******************************************************************************/
acpi_status
-acpi_ut_create_list(char *list_name,
+acpi_ut_create_list(const char *list_name,
u16 object_size, struct acpi_memory_list **return_cache)
{
struct acpi_memory_list *cache;
diff --git a/drivers/acpi/acpica/utxface.c b/drivers/acpi/acpica/utxface.c
index 68d4673f62e6..d9e6aac7dc83 100644
--- a/drivers/acpi/acpica/utxface.c
+++ b/drivers/acpi/acpica/utxface.c
@@ -127,7 +127,7 @@ ACPI_EXPORT_SYMBOL(acpi_subsystem_status)
* and the value of out_buffer is undefined.
*
******************************************************************************/
-acpi_status acpi_get_system_info(struct acpi_buffer * out_buffer)
+acpi_status acpi_get_system_info(struct acpi_buffer *out_buffer)
{
struct acpi_system_info *info_ptr;
acpi_status status;
@@ -483,7 +483,7 @@ ACPI_EXPORT_SYMBOL(acpi_check_address_range)
******************************************************************************/
acpi_status
acpi_decode_pld_buffer(u8 *in_buffer,
- acpi_size length, struct acpi_pld_info ** return_buffer)
+ acpi_size length, struct acpi_pld_info **return_buffer)
{
struct acpi_pld_info *pld_info;
u32 *buffer = ACPI_CAST_PTR(u32, in_buffer);
diff --git a/drivers/acpi/battery.c b/drivers/acpi/battery.c
index b719ab3090bb..ab234791a0ba 100644
--- a/drivers/acpi/battery.c
+++ b/drivers/acpi/battery.c
@@ -1316,7 +1316,7 @@ static int __init acpi_battery_init(void)
static void __exit acpi_battery_exit(void)
{
- async_synchronize_cookie(async_cookie);
+ async_synchronize_cookie(async_cookie + 1);
acpi_bus_unregister_driver(&acpi_battery_driver);
#ifdef CONFIG_ACPI_PROCFS_POWER
acpi_unlock_battery_dir(acpi_battery_dir);
diff --git a/drivers/acpi/blacklist.c b/drivers/acpi/blacklist.c
index 96809cd99ace..bdc67bad61a7 100644
--- a/drivers/acpi/blacklist.c
+++ b/drivers/acpi/blacklist.c
@@ -3,7 +3,7 @@
*
* Check to see if the given machine has a known bad ACPI BIOS
* or if the BIOS is too old.
- * Check given machine against acpi_osi_dmi_table[].
+ * Check given machine against acpi_rev_dmi_table[].
*
* Copyright (C) 2004 Len Brown <len.brown@intel.com>
* Copyright (C) 2002 Andy Grover <andrew.grover@intel.com>
@@ -47,7 +47,7 @@ struct acpi_blacklist_item {
u32 is_critical_error;
};
-static struct dmi_system_id acpi_osi_dmi_table[] __initdata;
+static struct dmi_system_id acpi_rev_dmi_table[] __initdata;
/*
* POLICY: If *anything* doesn't work, put it on the blacklist.
@@ -128,36 +128,12 @@ int __init acpi_blacklisted(void)
}
}
- dmi_check_system(acpi_osi_dmi_table);
+ (void)early_acpi_osi_init();
+ dmi_check_system(acpi_rev_dmi_table);
return blacklisted;
}
#ifdef CONFIG_DMI
-static int __init dmi_enable_osi_linux(const struct dmi_system_id *d)
-{
- acpi_dmi_osi_linux(1, d); /* enable */
- return 0;
-}
-static int __init dmi_disable_osi_vista(const struct dmi_system_id *d)
-{
- printk(KERN_NOTICE PREFIX "DMI detected: %s\n", d->ident);
- acpi_osi_setup("!Windows 2006");
- acpi_osi_setup("!Windows 2006 SP1");
- acpi_osi_setup("!Windows 2006 SP2");
- return 0;
-}
-static int __init dmi_disable_osi_win7(const struct dmi_system_id *d)
-{
- printk(KERN_NOTICE PREFIX "DMI detected: %s\n", d->ident);
- acpi_osi_setup("!Windows 2009");
- return 0;
-}
-static int __init dmi_disable_osi_win8(const struct dmi_system_id *d)
-{
- printk(KERN_NOTICE PREFIX "DMI detected: %s\n", d->ident);
- acpi_osi_setup("!Windows 2012");
- return 0;
-}
#ifdef CONFIG_ACPI_REV_OVERRIDE_POSSIBLE
static int __init dmi_enable_rev_override(const struct dmi_system_id *d)
{
@@ -168,169 +144,7 @@ static int __init dmi_enable_rev_override(const struct dmi_system_id *d)
}
#endif
-static struct dmi_system_id acpi_osi_dmi_table[] __initdata = {
- {
- .callback = dmi_disable_osi_vista,
- .ident = "Fujitsu Siemens",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
- DMI_MATCH(DMI_PRODUCT_NAME, "ESPRIMO Mobile V5505"),
- },
- },
- {
- /*
- * There have a NVIF method in MSI GX723 DSDT need call by Nvidia
- * driver (e.g. nouveau) when user press brightness hotkey.
- * Currently, nouveau driver didn't do the job and it causes there
- * have a infinite while loop in DSDT when user press hotkey.
- * We add MSI GX723's dmi information to this table for workaround
- * this issue.
- * Will remove MSI GX723 from the table after nouveau grows support.
- */
- .callback = dmi_disable_osi_vista,
- .ident = "MSI GX723",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Micro-Star International"),
- DMI_MATCH(DMI_PRODUCT_NAME, "GX723"),
- },
- },
- {
- .callback = dmi_disable_osi_vista,
- .ident = "Sony VGN-NS10J_S",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
- DMI_MATCH(DMI_PRODUCT_NAME, "VGN-NS10J_S"),
- },
- },
- {
- .callback = dmi_disable_osi_vista,
- .ident = "Sony VGN-SR290J",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
- DMI_MATCH(DMI_PRODUCT_NAME, "VGN-SR290J"),
- },
- },
- {
- .callback = dmi_disable_osi_vista,
- .ident = "VGN-NS50B_L",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
- DMI_MATCH(DMI_PRODUCT_NAME, "VGN-NS50B_L"),
- },
- },
- {
- .callback = dmi_disable_osi_vista,
- .ident = "VGN-SR19XN",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
- DMI_MATCH(DMI_PRODUCT_NAME, "VGN-SR19XN"),
- },
- },
- {
- .callback = dmi_disable_osi_vista,
- .ident = "Toshiba Satellite L355",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
- DMI_MATCH(DMI_PRODUCT_VERSION, "Satellite L355"),
- },
- },
- {
- .callback = dmi_disable_osi_win7,
- .ident = "ASUS K50IJ",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK Computer Inc."),
- DMI_MATCH(DMI_PRODUCT_NAME, "K50IJ"),
- },
- },
- {
- .callback = dmi_disable_osi_vista,
- .ident = "Toshiba P305D",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
- DMI_MATCH(DMI_PRODUCT_NAME, "Satellite P305D"),
- },
- },
- {
- .callback = dmi_disable_osi_vista,
- .ident = "Toshiba NB100",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
- DMI_MATCH(DMI_PRODUCT_NAME, "NB100"),
- },
- },
-
- /*
- * The wireless hotkey does not work on those machines when
- * returning true for _OSI("Windows 2012")
- */
- {
- .callback = dmi_disable_osi_win8,
- .ident = "Dell Inspiron 7737",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
- DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 7737"),
- },
- },
- {
- .callback = dmi_disable_osi_win8,
- .ident = "Dell Inspiron 7537",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
- DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 7537"),
- },
- },
- {
- .callback = dmi_disable_osi_win8,
- .ident = "Dell Inspiron 5437",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
- DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 5437"),
- },
- },
- {
- .callback = dmi_disable_osi_win8,
- .ident = "Dell Inspiron 3437",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
- DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 3437"),
- },
- },
- {
- .callback = dmi_disable_osi_win8,
- .ident = "Dell Vostro 3446",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
- DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 3446"),
- },
- },
- {
- .callback = dmi_disable_osi_win8,
- .ident = "Dell Vostro 3546",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
- DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 3546"),
- },
- },
-
- /*
- * BIOS invocation of _OSI(Linux) is almost always a BIOS bug.
- * Linux ignores it, except for the machines enumerated below.
- */
-
- /*
- * Without this this EEEpc exports a non working WMI interface, with
- * this it exports a working "good old" eeepc_laptop interface, fixing
- * both brightness control, and rfkill not working.
- */
- {
- .callback = dmi_enable_osi_linux,
- .ident = "Asus EEE PC 1015PX",
- .matches = {
- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK Computer INC."),
- DMI_MATCH(DMI_PRODUCT_NAME, "1015PX"),
- },
- },
-
+static struct dmi_system_id acpi_rev_dmi_table[] __initdata = {
#ifdef CONFIG_ACPI_REV_OVERRIDE_POSSIBLE
/*
* DELL XPS 13 (2015) switches sound between HDA and I2S
diff --git a/drivers/acpi/bus.c b/drivers/acpi/bus.c
index c068c829b453..31e8da648fff 100644
--- a/drivers/acpi/bus.c
+++ b/drivers/acpi/bus.c
@@ -925,11 +925,13 @@ void __init acpi_early_init(void)
goto error0;
}
- status = acpi_load_tables();
- if (ACPI_FAILURE(status)) {
- printk(KERN_ERR PREFIX
- "Unable to load the System Description Tables\n");
- goto error0;
+ if (acpi_gbl_group_module_level_code) {
+ status = acpi_load_tables();
+ if (ACPI_FAILURE(status)) {
+ printk(KERN_ERR PREFIX
+ "Unable to load the System Description Tables\n");
+ goto error0;
+ }
}
#ifdef CONFIG_X86
@@ -995,17 +997,10 @@ static int __init acpi_bus_init(void)
acpi_os_initialize1();
- status = acpi_enable_subsystem(ACPI_NO_ACPI_ENABLE);
- if (ACPI_FAILURE(status)) {
- printk(KERN_ERR PREFIX
- "Unable to start the ACPI Interpreter\n");
- goto error1;
- }
-
/*
* ACPI 2.0 requires the EC driver to be loaded and work before
- * the EC device is found in the namespace (i.e. before acpi_initialize_objects()
- * is called).
+ * the EC device is found in the namespace (i.e. before
+ * acpi_load_tables() is called).
*
* This is accomplished by looking for the ECDT table, and getting
* the EC parameters out of that.
@@ -1013,6 +1008,22 @@ static int __init acpi_bus_init(void)
status = acpi_ec_ecdt_probe();
/* Ignore result. Not having an ECDT is not fatal. */
+ if (!acpi_gbl_group_module_level_code) {
+ status = acpi_load_tables();
+ if (ACPI_FAILURE(status)) {
+ printk(KERN_ERR PREFIX
+ "Unable to load the System Description Tables\n");
+ goto error1;
+ }
+ }
+
+ status = acpi_enable_subsystem(ACPI_NO_ACPI_ENABLE);
+ if (ACPI_FAILURE(status)) {
+ printk(KERN_ERR PREFIX
+ "Unable to start the ACPI Interpreter\n");
+ goto error1;
+ }
+
status = acpi_initialize_objects(ACPI_FULL_INITIALIZATION);
if (ACPI_FAILURE(status)) {
printk(KERN_ERR PREFIX "Unable to initialize ACPI objects\n");
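
For orientation, the two bus.c hunks above only reorder existing calls; a minimal sketch of the resulting flow (simplified, error handling dropped, and assuming acpi_gbl_group_module_level_code is ACPICA's switch for deferring module-level AML execution until after the tables are loaded):

	/* Sketch of the reordered init flow -- not the literal kernel code. */
	if (acpi_gbl_group_module_level_code) {
		acpi_load_tables();		/* still done in acpi_early_init() */
		/* ... */
		acpi_ec_ecdt_probe();		/* acpi_bus_init() */
	} else {
		acpi_ec_ecdt_probe();		/* EC via ECDT usable before table load */
		acpi_load_tables();		/* deferred into acpi_bus_init() */
	}
	acpi_enable_subsystem(ACPI_NO_ACPI_ENABLE);
	acpi_initialize_objects(ACPI_FULL_INITIALIZATION);
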
diff --git a/drivers/acpi/device_pm.c b/drivers/acpi/device_pm.c
index cd2c3d6d40e0..993fd31394c8 100644
--- a/drivers/acpi/device_pm.c
+++ b/drivers/acpi/device_pm.c
@@ -319,6 +319,7 @@ int acpi_device_fix_up_power(struct acpi_device *device)
return ret;
}
+EXPORT_SYMBOL_GPL(acpi_device_fix_up_power);
int acpi_device_update_power(struct acpi_device *device, int *state_p)
{
diff --git a/drivers/acpi/device_sysfs.c b/drivers/acpi/device_sysfs.c
index b9afb47db7ed..7b2c48fde4e2 100644
--- a/drivers/acpi/device_sysfs.c
+++ b/drivers/acpi/device_sysfs.c
@@ -35,7 +35,7 @@ static ssize_t acpi_object_path(acpi_handle handle, char *buf)
if (result)
return result;
- result = sprintf(buf, "%s\n", (char*)path.pointer);
+ result = sprintf(buf, "%s\n", (char *)path.pointer);
kfree(path.pointer);
return result;
}
@@ -333,7 +333,8 @@ int acpi_device_modalias(struct device *dev, char *buf, int size)
EXPORT_SYMBOL_GPL(acpi_device_modalias);
static ssize_t
-acpi_device_modalias_show(struct device *dev, struct device_attribute *attr, char *buf) {
+acpi_device_modalias_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
return __acpi_device_modalias(to_acpi_device(dev), buf, 1024);
}
static DEVICE_ATTR(modalias, 0444, acpi_device_modalias_show, NULL);
@@ -397,7 +398,8 @@ acpi_eject_store(struct device *d, struct device_attribute *attr,
static DEVICE_ATTR(eject, 0200, NULL, acpi_eject_store);
static ssize_t
-acpi_device_hid_show(struct device *dev, struct device_attribute *attr, char *buf) {
+acpi_device_hid_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
struct acpi_device *acpi_dev = to_acpi_device(dev);
return sprintf(buf, "%s\n", acpi_device_hid(acpi_dev));
@@ -467,12 +469,27 @@ acpi_device_sun_show(struct device *dev, struct device_attribute *attr,
status = acpi_evaluate_integer(acpi_dev->handle, "_SUN", NULL, &sun);
if (ACPI_FAILURE(status))
- return -ENODEV;
+ return -EIO;
return sprintf(buf, "%llu\n", sun);
}
static DEVICE_ATTR(sun, 0444, acpi_device_sun_show, NULL);
+static ssize_t
+acpi_device_hrv_show(struct device *dev, struct device_attribute *attr,
+ char *buf) {
+ struct acpi_device *acpi_dev = to_acpi_device(dev);
+ acpi_status status;
+ unsigned long long hrv;
+
+ status = acpi_evaluate_integer(acpi_dev->handle, "_HRV", NULL, &hrv);
+ if (ACPI_FAILURE(status))
+ return -EIO;
+
+ return sprintf(buf, "%llu\n", hrv);
+}
+static DEVICE_ATTR(hrv, 0444, acpi_device_hrv_show, NULL);
+
static ssize_t status_show(struct device *dev, struct device_attribute *attr,
char *buf) {
struct acpi_device *acpi_dev = to_acpi_device(dev);
@@ -481,7 +498,7 @@ static ssize_t status_show(struct device *dev, struct device_attribute *attr,
status = acpi_evaluate_integer(acpi_dev->handle, "_STA", NULL, &sta);
if (ACPI_FAILURE(status))
- return -ENODEV;
+ return -EIO;
return sprintf(buf, "%llu\n", sta);
}
@@ -541,16 +558,22 @@ int acpi_device_setup_files(struct acpi_device *dev)
goto end;
}
+ if (acpi_has_method(dev->handle, "_HRV")) {
+ result = device_create_file(&dev->dev, &dev_attr_hrv);
+ if (result)
+ goto end;
+ }
+
if (acpi_has_method(dev->handle, "_STA")) {
result = device_create_file(&dev->dev, &dev_attr_status);
if (result)
goto end;
}
- /*
- * If device has _EJ0, 'eject' file is created that is used to trigger
- * hot-removal function from userland.
- */
+ /*
+ * If device has _EJ0, 'eject' file is created that is used to trigger
+ * hot-removal function from userland.
+ */
if (acpi_has_method(dev->handle, "_EJ0")) {
result = device_create_file(&dev->dev, &dev_attr_eject);
if (result)
@@ -604,6 +627,9 @@ void acpi_device_remove_files(struct acpi_device *dev)
if (acpi_has_method(dev->handle, "_SUN"))
device_remove_file(&dev->dev, &dev_attr_sun);
+ if (acpi_has_method(dev->handle, "_HRV"))
+ device_remove_file(&dev->dev, &dev_attr_hrv);
+
if (dev->pnp.unique_id)
device_remove_file(&dev->dev, &dev_attr_uid);
if (dev->pnp.type.bus_address)
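
As a quick sanity check once this lands, a device whose namespace object implements _HRV should gain a read-only hrv file next to the existing sun/status attributes, e.g. cat /sys/bus/acpi/devices/ACPI0012:00/hrv (the instance name here is only an example) printing the hardware revision as a decimal value.
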
diff --git a/drivers/acpi/ec.c b/drivers/acpi/ec.c
index b420fb46669d..0e70181f150c 100644
--- a/drivers/acpi/ec.c
+++ b/drivers/acpi/ec.c
@@ -105,8 +105,8 @@ enum ec_command {
enum {
EC_FLAGS_QUERY_PENDING, /* Query is pending */
EC_FLAGS_QUERY_GUARDING, /* Guard for SCI_EVT check */
- EC_FLAGS_HANDLERS_INSTALLED, /* Handlers for GPE and
- * OpReg are installed */
+ EC_FLAGS_GPE_HANDLER_INSTALLED, /* GPE handler installed */
+ EC_FLAGS_EC_HANDLER_INSTALLED, /* OpReg handler installed */
EC_FLAGS_STARTED, /* Driver is started */
EC_FLAGS_STOPPED, /* Driver is stopped */
EC_FLAGS_COMMAND_STORM, /* GPE storms occurred to the
@@ -175,10 +175,9 @@ static void acpi_ec_event_processor(struct work_struct *work);
struct acpi_ec *boot_ec, *first_ec;
EXPORT_SYMBOL(first_ec);
-static int EC_FLAGS_VALIDATE_ECDT; /* ASUStec ECDTs need to be validated */
-static int EC_FLAGS_SKIP_DSDT_SCAN; /* Not all BIOS survive early DSDT scan */
static int EC_FLAGS_CLEAR_ON_RESUME; /* Needs acpi_ec_clear() on boot/resume */
static int EC_FLAGS_QUERY_HANDSHAKE; /* Needs QR_EC issued when SCI_EVT set */
+static int EC_FLAGS_CORRECT_ECDT; /* Needs ECDT port address correction */
/* --------------------------------------------------------------------------
* Logging/Debugging
@@ -367,7 +366,8 @@ static inline void acpi_ec_clear_gpe(struct acpi_ec *ec)
static void acpi_ec_submit_request(struct acpi_ec *ec)
{
ec->reference_count++;
- if (ec->reference_count == 1)
+ if (test_bit(EC_FLAGS_GPE_HANDLER_INSTALLED, &ec->flags) &&
+ ec->reference_count == 1)
acpi_ec_enable_gpe(ec, true);
}
@@ -376,7 +376,8 @@ static void acpi_ec_complete_request(struct acpi_ec *ec)
bool flushed = false;
ec->reference_count--;
- if (ec->reference_count == 0)
+ if (test_bit(EC_FLAGS_GPE_HANDLER_INSTALLED, &ec->flags) &&
+ ec->reference_count == 0)
acpi_ec_disable_gpe(ec, true);
flushed = acpi_ec_flushed(ec);
if (flushed)
@@ -1287,52 +1288,64 @@ static int ec_install_handlers(struct acpi_ec *ec)
{
acpi_status status;
- if (test_bit(EC_FLAGS_HANDLERS_INSTALLED, &ec->flags))
- return 0;
- status = acpi_install_gpe_raw_handler(NULL, ec->gpe,
- ACPI_GPE_EDGE_TRIGGERED,
- &acpi_ec_gpe_handler, ec);
- if (ACPI_FAILURE(status))
- return -ENODEV;
-
acpi_ec_start(ec, false);
- status = acpi_install_address_space_handler(ec->handle,
- ACPI_ADR_SPACE_EC,
- &acpi_ec_space_handler,
- NULL, ec);
- if (ACPI_FAILURE(status)) {
- if (status == AE_NOT_FOUND) {
- /*
- * Maybe OS fails in evaluating the _REG object.
- * The AE_NOT_FOUND error will be ignored and OS
- * continue to initialize EC.
- */
- pr_err("Fail in evaluating the _REG object"
- " of EC device. Broken bios is suspected.\n");
- } else {
- acpi_ec_stop(ec, false);
- acpi_remove_gpe_handler(NULL, ec->gpe,
- &acpi_ec_gpe_handler);
- return -ENODEV;
+
+ if (!test_bit(EC_FLAGS_EC_HANDLER_INSTALLED, &ec->flags)) {
+ status = acpi_install_address_space_handler(ec->handle,
+ ACPI_ADR_SPACE_EC,
+ &acpi_ec_space_handler,
+ NULL, ec);
+ if (ACPI_FAILURE(status)) {
+ if (status == AE_NOT_FOUND) {
+ /*
+				 * Maybe the OS fails to evaluate the _REG
+				 * object. The AE_NOT_FOUND error will be
+				 * ignored and the OS will continue to
+				 * initialize the EC.
+ */
+ pr_err("Fail in evaluating the _REG object"
+ " of EC device. Broken bios is suspected.\n");
+ } else {
+ acpi_ec_stop(ec, false);
+ return -ENODEV;
+ }
+ }
+ set_bit(EC_FLAGS_EC_HANDLER_INSTALLED, &ec->flags);
+ }
+
+ if (!test_bit(EC_FLAGS_GPE_HANDLER_INSTALLED, &ec->flags)) {
+ status = acpi_install_gpe_raw_handler(NULL, ec->gpe,
+ ACPI_GPE_EDGE_TRIGGERED,
+ &acpi_ec_gpe_handler, ec);
+ /* This is not fatal as we can poll EC events */
+ if (ACPI_SUCCESS(status)) {
+ set_bit(EC_FLAGS_GPE_HANDLER_INSTALLED, &ec->flags);
+ if (test_bit(EC_FLAGS_STARTED, &ec->flags) &&
+ ec->reference_count >= 1)
+ acpi_ec_enable_gpe(ec, true);
}
}
- set_bit(EC_FLAGS_HANDLERS_INSTALLED, &ec->flags);
return 0;
}
static void ec_remove_handlers(struct acpi_ec *ec)
{
- if (!test_bit(EC_FLAGS_HANDLERS_INSTALLED, &ec->flags))
- return;
acpi_ec_stop(ec, false);
- if (ACPI_FAILURE(acpi_remove_address_space_handler(ec->handle,
- ACPI_ADR_SPACE_EC, &acpi_ec_space_handler)))
- pr_err("failed to remove space handler\n");
- if (ACPI_FAILURE(acpi_remove_gpe_handler(NULL, ec->gpe,
- &acpi_ec_gpe_handler)))
- pr_err("failed to remove gpe handler\n");
- clear_bit(EC_FLAGS_HANDLERS_INSTALLED, &ec->flags);
+
+ if (test_bit(EC_FLAGS_EC_HANDLER_INSTALLED, &ec->flags)) {
+ if (ACPI_FAILURE(acpi_remove_address_space_handler(ec->handle,
+ ACPI_ADR_SPACE_EC, &acpi_ec_space_handler)))
+ pr_err("failed to remove space handler\n");
+ clear_bit(EC_FLAGS_EC_HANDLER_INSTALLED, &ec->flags);
+ }
+
+ if (test_bit(EC_FLAGS_GPE_HANDLER_INSTALLED, &ec->flags)) {
+ if (ACPI_FAILURE(acpi_remove_gpe_handler(NULL, ec->gpe,
+ &acpi_ec_gpe_handler)))
+ pr_err("failed to remove gpe handler\n");
+ clear_bit(EC_FLAGS_GPE_HANDLER_INSTALLED, &ec->flags);
+ }
}
static int acpi_ec_add(struct acpi_device *device)
@@ -1344,11 +1357,12 @@ static int acpi_ec_add(struct acpi_device *device)
strcpy(acpi_device_class(device), ACPI_EC_CLASS);
/* Check for boot EC */
- if (boot_ec &&
- (boot_ec->handle == device->handle ||
- boot_ec->handle == ACPI_ROOT_OBJECT)) {
+ if (boot_ec) {
ec = boot_ec;
boot_ec = NULL;
+ ec_remove_handlers(ec);
+ if (first_ec == ec)
+ first_ec = NULL;
} else {
ec = make_acpi_ec();
if (!ec)
@@ -1434,7 +1448,7 @@ ec_parse_io_ports(struct acpi_resource *resource, void *context)
int __init acpi_boot_ec_enable(void)
{
- if (!boot_ec || test_bit(EC_FLAGS_HANDLERS_INSTALLED, &boot_ec->flags))
+ if (!boot_ec)
return 0;
if (!ec_install_handlers(boot_ec)) {
first_ec = boot_ec;
@@ -1448,20 +1462,6 @@ static const struct acpi_device_id ec_device_ids[] = {
{"", 0},
};
-/* Some BIOS do not survive early DSDT scan, skip it */
-static int ec_skip_dsdt_scan(const struct dmi_system_id *id)
-{
- EC_FLAGS_SKIP_DSDT_SCAN = 1;
- return 0;
-}
-
-/* ASUStek often supplies us with broken ECDT, validate it */
-static int ec_validate_ecdt(const struct dmi_system_id *id)
-{
- EC_FLAGS_VALIDATE_ECDT = 1;
- return 0;
-}
-
#if 0
/*
* Some EC firmware variations refuses to respond QR_EC when SCI_EVT is not
@@ -1503,30 +1503,29 @@ static int ec_clear_on_resume(const struct dmi_system_id *id)
return 0;
}
+static int ec_correct_ecdt(const struct dmi_system_id *id)
+{
+ pr_debug("Detected system needing ECDT address correction.\n");
+ EC_FLAGS_CORRECT_ECDT = 1;
+ return 0;
+}
+
static struct dmi_system_id ec_dmi_table[] __initdata = {
{
- ec_skip_dsdt_scan, "Compal JFL92", {
- DMI_MATCH(DMI_BIOS_VENDOR, "COMPAL"),
- DMI_MATCH(DMI_BOARD_NAME, "JFL92") }, NULL},
+ ec_correct_ecdt, "Asus L4R", {
+ DMI_MATCH(DMI_BIOS_VERSION, "1008.006"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "L4R"),
+ DMI_MATCH(DMI_BOARD_NAME, "L4R") }, NULL},
{
- ec_validate_ecdt, "MSI MS-171F", {
+ ec_correct_ecdt, "Asus M6R", {
+ DMI_MATCH(DMI_BIOS_VERSION, "0207"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "M6R"),
+ DMI_MATCH(DMI_BOARD_NAME, "M6R") }, NULL},
+ {
+ ec_correct_ecdt, "MSI MS-171F", {
DMI_MATCH(DMI_SYS_VENDOR, "Micro-Star"),
DMI_MATCH(DMI_PRODUCT_NAME, "MS-171F"),}, NULL},
{
- ec_validate_ecdt, "ASUS hardware", {
- DMI_MATCH(DMI_BIOS_VENDOR, "ASUS") }, NULL},
- {
- ec_validate_ecdt, "ASUS hardware", {
- DMI_MATCH(DMI_BOARD_VENDOR, "ASUSTeK Computer Inc.") }, NULL},
- {
- ec_skip_dsdt_scan, "HP Folio 13", {
- DMI_MATCH(DMI_SYS_VENDOR, "Hewlett-Packard"),
- DMI_MATCH(DMI_PRODUCT_NAME, "HP Folio 13"),}, NULL},
- {
- ec_validate_ecdt, "ASUS hardware", {
- DMI_MATCH(DMI_SYS_VENDOR, "ASUSTek Computer Inc."),
- DMI_MATCH(DMI_PRODUCT_NAME, "L4R"),}, NULL},
- {
ec_clear_on_resume, "Samsung hardware", {
DMI_MATCH(DMI_SYS_VENDOR, "SAMSUNG ELECTRONICS CO., LTD.")}, NULL},
{},
@@ -1534,8 +1533,8 @@ static struct dmi_system_id ec_dmi_table[] __initdata = {
int __init acpi_ec_ecdt_probe(void)
{
+ int ret = 0;
acpi_status status;
- struct acpi_ec *saved_ec = NULL;
struct acpi_table_ecdt *ecdt_ptr;
boot_ec = make_acpi_ec();
@@ -1547,67 +1546,45 @@ int __init acpi_ec_ecdt_probe(void)
dmi_check_system(ec_dmi_table);
status = acpi_get_table(ACPI_SIG_ECDT, 1,
(struct acpi_table_header **)&ecdt_ptr);
- if (ACPI_SUCCESS(status)) {
- pr_info("EC description table is found, configuring boot EC\n");
- boot_ec->command_addr = ecdt_ptr->control.address;
- boot_ec->data_addr = ecdt_ptr->data.address;
- boot_ec->gpe = ecdt_ptr->gpe;
- boot_ec->handle = ACPI_ROOT_OBJECT;
- acpi_get_handle(ACPI_ROOT_OBJECT, ecdt_ptr->id,
- &boot_ec->handle);
- /* Don't trust ECDT, which comes from ASUSTek */
- if (!EC_FLAGS_VALIDATE_ECDT)
- goto install;
- saved_ec = kmemdup(boot_ec, sizeof(struct acpi_ec), GFP_KERNEL);
- if (!saved_ec)
- return -ENOMEM;
- /* fall through */
+ if (ACPI_FAILURE(status)) {
+ ret = -ENODEV;
+ goto error;
}
- if (EC_FLAGS_SKIP_DSDT_SCAN) {
- kfree(saved_ec);
- return -ENODEV;
+ if (!ecdt_ptr->control.address || !ecdt_ptr->data.address) {
+ /*
+ * Asus X50GL:
+ * https://bugzilla.kernel.org/show_bug.cgi?id=11880
+ */
+ ret = -ENODEV;
+ goto error;
}
- /* This workaround is needed only on some broken machines,
- * which require early EC, but fail to provide ECDT */
- pr_debug("Look up EC in DSDT\n");
- status = acpi_get_devices(ec_device_ids[0].id, ec_parse_device,
- boot_ec, NULL);
- /* Check that acpi_get_devices actually find something */
- if (ACPI_FAILURE(status) || !boot_ec->handle)
- goto error;
- if (saved_ec) {
- /* try to find good ECDT from ASUSTek */
- if (saved_ec->command_addr != boot_ec->command_addr ||
- saved_ec->data_addr != boot_ec->data_addr ||
- saved_ec->gpe != boot_ec->gpe ||
- saved_ec->handle != boot_ec->handle)
- pr_info("ASUSTek keeps feeding us with broken "
- "ECDT tables, which are very hard to workaround. "
- "Trying to use DSDT EC info instead. Please send "
- "output of acpidump to linux-acpi@vger.kernel.org\n");
- kfree(saved_ec);
- saved_ec = NULL;
+ pr_info("EC description table is found, configuring boot EC\n");
+ if (EC_FLAGS_CORRECT_ECDT) {
+ /*
+ * Asus L4R, Asus M6R
+ * https://bugzilla.kernel.org/show_bug.cgi?id=9399
+ * MSI MS-171F
+ * https://bugzilla.kernel.org/show_bug.cgi?id=12461
+ */
+ boot_ec->command_addr = ecdt_ptr->data.address;
+ boot_ec->data_addr = ecdt_ptr->control.address;
} else {
- /* We really need to limit this workaround, the only ASUS,
- * which needs it, has fake EC._INI method, so use it as flag.
- * Keep boot_ec struct as it will be needed soon.
- */
- if (!dmi_name_in_vendors("ASUS") ||
- !acpi_has_method(boot_ec->handle, "_INI"))
- return -ENODEV;
+ boot_ec->command_addr = ecdt_ptr->control.address;
+ boot_ec->data_addr = ecdt_ptr->data.address;
}
-install:
- if (!ec_install_handlers(boot_ec)) {
+ boot_ec->gpe = ecdt_ptr->gpe;
+ boot_ec->handle = ACPI_ROOT_OBJECT;
+ ret = ec_install_handlers(boot_ec);
+ if (!ret)
first_ec = boot_ec;
- return 0;
- }
error:
- kfree(boot_ec);
- kfree(saved_ec);
- boot_ec = NULL;
- return -ENODEV;
+ if (ret) {
+ kfree(boot_ec);
+ boot_ec = NULL;
+ }
+ return ret;
}
static int param_set_event_clearing(const char *val, struct kernel_param *kp)
diff --git a/drivers/acpi/evged.c b/drivers/acpi/evged.c
new file mode 100644
index 000000000000..46f060356a22
--- /dev/null
+++ b/drivers/acpi/evged.c
@@ -0,0 +1,154 @@
+/*
+ * Generic Event Device for ACPI.
+ *
+ * Copyright (c) 2016, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Generic Event Device allows platforms to handle interrupts in ACPI
+ * ASL statements. It follows an approach very similar to the _EVT
+ * method used for GPIO events. All interrupts are listed in _CRS
+ * and the handler is written in the _EVT method. Here is an example.
+ *
+ * Device (GED0)
+ * {
+ *
+ * Name (_HID, "ACPI0013")
+ * Name (_UID, 0)
+ * Method (_CRS, 0x0, Serialized)
+ * {
+ * Name (RBUF, ResourceTemplate ()
+ * {
+ * Interrupt(ResourceConsumer, Edge, ActiveHigh, Shared, , , )
+ * {123}
+ * }
+ * })
+ *
+ * Method (_EVT, 1) {
+ * if (Lequal(123, Arg0))
+ * {
+ * }
+ * }
+ * }
+ *
+ */
+
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/list.h>
+#include <linux/platform_device.h>
+#include <linux/acpi.h>
+
+#define MODULE_NAME "acpi-ged"
+
+struct acpi_ged_event {
+ struct list_head node;
+ struct device *dev;
+ unsigned int gsi;
+ unsigned int irq;
+ acpi_handle handle;
+};
+
+static irqreturn_t acpi_ged_irq_handler(int irq, void *data)
+{
+ struct acpi_ged_event *event = data;
+ acpi_status acpi_ret;
+
+ acpi_ret = acpi_execute_simple_method(event->handle, NULL, event->gsi);
+ if (ACPI_FAILURE(acpi_ret))
+ dev_err_once(event->dev, "IRQ method execution failed\n");
+
+ return IRQ_HANDLED;
+}
+
+static acpi_status acpi_ged_request_interrupt(struct acpi_resource *ares,
+ void *context)
+{
+ struct acpi_ged_event *event;
+ unsigned int irq;
+ unsigned int gsi;
+ unsigned int irqflags = IRQF_ONESHOT;
+ struct device *dev = context;
+ acpi_handle handle = ACPI_HANDLE(dev);
+ acpi_handle evt_handle;
+ struct resource r;
+ struct acpi_resource_irq *p = &ares->data.irq;
+ struct acpi_resource_extended_irq *pext = &ares->data.extended_irq;
+
+ if (ares->type == ACPI_RESOURCE_TYPE_END_TAG)
+ return AE_OK;
+
+ if (!acpi_dev_resource_interrupt(ares, 0, &r)) {
+ dev_err(dev, "unable to parse IRQ resource\n");
+ return AE_ERROR;
+ }
+ if (ares->type == ACPI_RESOURCE_TYPE_IRQ)
+ gsi = p->interrupts[0];
+ else
+ gsi = pext->interrupts[0];
+
+ irq = r.start;
+
+ if (ACPI_FAILURE(acpi_get_handle(handle, "_EVT", &evt_handle))) {
+ dev_err(dev, "cannot locate _EVT method\n");
+ return AE_ERROR;
+ }
+
+ dev_info(dev, "GED listening GSI %u @ IRQ %u\n", gsi, irq);
+
+ event = devm_kzalloc(dev, sizeof(*event), GFP_KERNEL);
+ if (!event)
+ return AE_ERROR;
+
+ event->gsi = gsi;
+ event->dev = dev;
+ event->irq = irq;
+ event->handle = evt_handle;
+
+ if (r.flags & IORESOURCE_IRQ_SHAREABLE)
+ irqflags |= IRQF_SHARED;
+
+ if (devm_request_threaded_irq(dev, irq, NULL, acpi_ged_irq_handler,
+ irqflags, "ACPI:Ged", event)) {
+ dev_err(dev, "failed to setup event handler for irq %u\n", irq);
+ return AE_ERROR;
+ }
+
+ return AE_OK;
+}
+
+static int ged_probe(struct platform_device *pdev)
+{
+ acpi_status acpi_ret;
+
+ acpi_ret = acpi_walk_resources(ACPI_HANDLE(&pdev->dev), "_CRS",
+ acpi_ged_request_interrupt, &pdev->dev);
+ if (ACPI_FAILURE(acpi_ret)) {
+ dev_err(&pdev->dev, "unable to parse the _CRS record\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static const struct acpi_device_id ged_acpi_ids[] = {
+ {"ACPI0013"},
+ {},
+};
+
+static struct platform_driver ged_driver = {
+ .probe = ged_probe,
+ .driver = {
+ .name = MODULE_NAME,
+ .acpi_match_table = ACPI_PTR(ged_acpi_ids),
+ },
+};
+builtin_platform_driver(ged_driver);
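
To complement the edge-triggered example in the header comment, here is an illustrative GED declaration (not taken from the patch; the GSI numbers and Notify targets are invented) using level-triggered, shared interrupts, which the probe path above turns into IRQF_SHARED threaded handlers:

	/*
	 * Device (GED1)
	 * {
	 *	Name (_HID, "ACPI0013")
	 *	Name (_UID, 1)
	 *	Name (_CRS, ResourceTemplate ()
	 *	{
	 *		Interrupt (ResourceConsumer, Level, ActiveHigh, Shared) {40}
	 *		Interrupt (ResourceConsumer, Level, ActiveHigh, Shared) {41}
	 *	})
	 *	Method (_EVT, 1, Serialized)
	 *	{
	 *		If (LEqual (Arg0, 40)) { Notify (\_SB.DEV0, 0x80) }
	 *		If (LEqual (Arg0, 41)) { Notify (\_SB.DEV1, 0x80) }
	 *	}
	 * }
	 */
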
diff --git a/drivers/acpi/internal.h b/drivers/acpi/internal.h
index 7c188472d9c2..9bb0773d39bf 100644
--- a/drivers/acpi/internal.h
+++ b/drivers/acpi/internal.h
@@ -20,7 +20,8 @@
#define PREFIX "ACPI: "
-void acpi_initrd_initialize_tables(void);
+int early_acpi_osi_init(void);
+int acpi_osi_init(void);
acpi_status acpi_os_initialize1(void);
void init_acpi_device_notify(void);
int acpi_scan_init(void);
diff --git a/drivers/acpi/nfit.c b/drivers/acpi/nfit.c
index 63cc9dbe4f3b..2215fc847fa9 100644
--- a/drivers/acpi/nfit.c
+++ b/drivers/acpi/nfit.c
@@ -45,6 +45,11 @@ module_param(scrub_overflow_abort, uint, S_IRUGO|S_IWUSR);
MODULE_PARM_DESC(scrub_overflow_abort,
"Number of times we overflow ARS results before abort");
+static bool disable_vendor_specific;
+module_param(disable_vendor_specific, bool, S_IRUGO);
+MODULE_PARM_DESC(disable_vendor_specific,
+ "Limit commands to the publicly specified set\n");
+
static struct workqueue_struct *nfit_wq;
struct nfit_table_prev {
@@ -171,33 +176,46 @@ static int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc,
unsigned int buf_len, int *cmd_rc)
{
struct acpi_nfit_desc *acpi_desc = to_acpi_nfit_desc(nd_desc);
- const struct nd_cmd_desc *desc = NULL;
union acpi_object in_obj, in_buf, *out_obj;
+ const struct nd_cmd_desc *desc = NULL;
struct device *dev = acpi_desc->dev;
+ struct nd_cmd_pkg *call_pkg = NULL;
const char *cmd_name, *dimm_name;
- unsigned long dsm_mask;
+ unsigned long cmd_mask, dsm_mask;
acpi_handle handle;
+ unsigned int func;
const u8 *uuid;
u32 offset;
int rc, i;
+ func = cmd;
+ if (cmd == ND_CMD_CALL) {
+ call_pkg = buf;
+ func = call_pkg->nd_command;
+ }
+
if (nvdimm) {
struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
struct acpi_device *adev = nfit_mem->adev;
if (!adev)
return -ENOTTY;
+ if (call_pkg && nfit_mem->family != call_pkg->nd_family)
+ return -ENOTTY;
+
dimm_name = nvdimm_name(nvdimm);
cmd_name = nvdimm_cmd_name(cmd);
+ cmd_mask = nvdimm_cmd_mask(nvdimm);
dsm_mask = nfit_mem->dsm_mask;
desc = nd_cmd_dimm_desc(cmd);
- uuid = to_nfit_uuid(NFIT_DEV_DIMM);
+ uuid = to_nfit_uuid(nfit_mem->family);
handle = adev->handle;
} else {
struct acpi_device *adev = to_acpi_dev(acpi_desc);
cmd_name = nvdimm_bus_cmd_name(cmd);
- dsm_mask = nd_desc->dsm_mask;
+ cmd_mask = nd_desc->cmd_mask;
+ dsm_mask = cmd_mask;
desc = nd_cmd_bus_desc(cmd);
uuid = to_nfit_uuid(NFIT_DEV_BUS);
handle = adev->handle;
@@ -207,7 +225,7 @@ static int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc,
if (!desc || (cmd && (desc->out_num + desc->in_num == 0)))
return -ENOTTY;
- if (!test_bit(cmd, &dsm_mask))
+ if (!test_bit(cmd, &cmd_mask) || !test_bit(func, &dsm_mask))
return -ENOTTY;
in_obj.type = ACPI_TYPE_PACKAGE;
@@ -222,21 +240,44 @@ static int acpi_nfit_ctl(struct nvdimm_bus_descriptor *nd_desc,
in_buf.buffer.length += nd_cmd_in_size(nvdimm, cmd, desc,
i, buf);
+ if (call_pkg) {
+ /* skip over package wrapper */
+ in_buf.buffer.pointer = (void *) &call_pkg->nd_payload;
+ in_buf.buffer.length = call_pkg->nd_size_in;
+ }
+
if (IS_ENABLED(CONFIG_ACPI_NFIT_DEBUG)) {
- dev_dbg(dev, "%s:%s cmd: %s input length: %d\n", __func__,
- dimm_name, cmd_name, in_buf.buffer.length);
- print_hex_dump_debug(cmd_name, DUMP_PREFIX_OFFSET, 4,
- 4, in_buf.buffer.pointer, min_t(u32, 128,
- in_buf.buffer.length), true);
+ dev_dbg(dev, "%s:%s cmd: %d: func: %d input length: %d\n",
+ __func__, dimm_name, cmd, func,
+ in_buf.buffer.length);
+ print_hex_dump_debug("nvdimm in ", DUMP_PREFIX_OFFSET, 4, 4,
+ in_buf.buffer.pointer,
+ min_t(u32, 256, in_buf.buffer.length), true);
}
- out_obj = acpi_evaluate_dsm(handle, uuid, 1, cmd, &in_obj);
+ out_obj = acpi_evaluate_dsm(handle, uuid, 1, func, &in_obj);
if (!out_obj) {
dev_dbg(dev, "%s:%s _DSM failed cmd: %s\n", __func__, dimm_name,
cmd_name);
return -EINVAL;
}
+ if (call_pkg) {
+ call_pkg->nd_fw_size = out_obj->buffer.length;
+ memcpy(call_pkg->nd_payload + call_pkg->nd_size_in,
+ out_obj->buffer.pointer,
+ min(call_pkg->nd_fw_size, call_pkg->nd_size_out));
+
+ ACPI_FREE(out_obj);
+ /*
+ * Need to support FW function w/o known size in advance.
+ * Caller can determine required size based upon nd_fw_size.
+ * If we return an error (like elsewhere) then caller wouldn't
+ * be able to rely upon data returned to make calculation.
+ */
+ return 0;
+ }
+
if (out_obj->package.type != ACPI_TYPE_BUFFER) {
dev_dbg(dev, "%s:%s unexpected output object type cmd: %s type: %d\n",
__func__, dimm_name, cmd_name, out_obj->type);
@@ -658,6 +699,7 @@ static int nfit_mem_dcr_init(struct acpi_nfit_desc *acpi_desc,
if (!nfit_mem)
return -ENOMEM;
INIT_LIST_HEAD(&nfit_mem->list);
+ nfit_mem->acpi_desc = acpi_desc;
list_add(&nfit_mem->list, &acpi_desc->dimms);
}
@@ -819,7 +861,7 @@ static ssize_t vendor_show(struct device *dev,
{
struct acpi_nfit_control_region *dcr = to_nfit_dcr(dev);
- return sprintf(buf, "%#x\n", dcr->vendor_id);
+ return sprintf(buf, "0x%04x\n", be16_to_cpu(dcr->vendor_id));
}
static DEVICE_ATTR_RO(vendor);
@@ -828,7 +870,7 @@ static ssize_t rev_id_show(struct device *dev,
{
struct acpi_nfit_control_region *dcr = to_nfit_dcr(dev);
- return sprintf(buf, "%#x\n", dcr->revision_id);
+ return sprintf(buf, "0x%04x\n", be16_to_cpu(dcr->revision_id));
}
static DEVICE_ATTR_RO(rev_id);
@@ -837,28 +879,142 @@ static ssize_t device_show(struct device *dev,
{
struct acpi_nfit_control_region *dcr = to_nfit_dcr(dev);
- return sprintf(buf, "%#x\n", dcr->device_id);
+ return sprintf(buf, "0x%04x\n", be16_to_cpu(dcr->device_id));
}
static DEVICE_ATTR_RO(device);
+static ssize_t subsystem_vendor_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct acpi_nfit_control_region *dcr = to_nfit_dcr(dev);
+
+ return sprintf(buf, "0x%04x\n", be16_to_cpu(dcr->subsystem_vendor_id));
+}
+static DEVICE_ATTR_RO(subsystem_vendor);
+
+static ssize_t subsystem_rev_id_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct acpi_nfit_control_region *dcr = to_nfit_dcr(dev);
+
+ return sprintf(buf, "0x%04x\n",
+ be16_to_cpu(dcr->subsystem_revision_id));
+}
+static DEVICE_ATTR_RO(subsystem_rev_id);
+
+static ssize_t subsystem_device_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct acpi_nfit_control_region *dcr = to_nfit_dcr(dev);
+
+ return sprintf(buf, "0x%04x\n", be16_to_cpu(dcr->subsystem_device_id));
+}
+static DEVICE_ATTR_RO(subsystem_device);
+
+static int num_nvdimm_formats(struct nvdimm *nvdimm)
+{
+ struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
+ int formats = 0;
+
+ if (nfit_mem->memdev_pmem)
+ formats++;
+ if (nfit_mem->memdev_bdw)
+ formats++;
+ return formats;
+}
+
static ssize_t format_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct acpi_nfit_control_region *dcr = to_nfit_dcr(dev);
- return sprintf(buf, "%#x\n", dcr->code);
+ return sprintf(buf, "0x%04x\n", be16_to_cpu(dcr->code));
}
static DEVICE_ATTR_RO(format);
+static ssize_t format1_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ u32 handle;
+ ssize_t rc = -ENXIO;
+ struct nfit_mem *nfit_mem;
+ struct nfit_memdev *nfit_memdev;
+ struct acpi_nfit_desc *acpi_desc;
+ struct nvdimm *nvdimm = to_nvdimm(dev);
+ struct acpi_nfit_control_region *dcr = to_nfit_dcr(dev);
+
+ nfit_mem = nvdimm_provider_data(nvdimm);
+ acpi_desc = nfit_mem->acpi_desc;
+ handle = to_nfit_memdev(dev)->device_handle;
+
+ /* assumes DIMMs have at most 2 published interface codes */
+ mutex_lock(&acpi_desc->init_mutex);
+ list_for_each_entry(nfit_memdev, &acpi_desc->memdevs, list) {
+ struct acpi_nfit_memory_map *memdev = nfit_memdev->memdev;
+ struct nfit_dcr *nfit_dcr;
+
+ if (memdev->device_handle != handle)
+ continue;
+
+ list_for_each_entry(nfit_dcr, &acpi_desc->dcrs, list) {
+ if (nfit_dcr->dcr->region_index != memdev->region_index)
+ continue;
+ if (nfit_dcr->dcr->code == dcr->code)
+ continue;
+ rc = sprintf(buf, "%#x\n",
+ be16_to_cpu(nfit_dcr->dcr->code));
+ break;
+ }
+		if (rc != -ENXIO)
+ break;
+ }
+ mutex_unlock(&acpi_desc->init_mutex);
+ return rc;
+}
+static DEVICE_ATTR_RO(format1);
+
+static ssize_t formats_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvdimm *nvdimm = to_nvdimm(dev);
+
+ return sprintf(buf, "%d\n", num_nvdimm_formats(nvdimm));
+}
+static DEVICE_ATTR_RO(formats);
+
static ssize_t serial_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
struct acpi_nfit_control_region *dcr = to_nfit_dcr(dev);
- return sprintf(buf, "%#x\n", dcr->serial_number);
+ return sprintf(buf, "0x%08x\n", be32_to_cpu(dcr->serial_number));
}
static DEVICE_ATTR_RO(serial);
+static ssize_t family_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvdimm *nvdimm = to_nvdimm(dev);
+ struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
+
+ if (nfit_mem->family < 0)
+ return -ENXIO;
+ return sprintf(buf, "%d\n", nfit_mem->family);
+}
+static DEVICE_ATTR_RO(family);
+
+static ssize_t dsm_mask_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvdimm *nvdimm = to_nvdimm(dev);
+ struct nfit_mem *nfit_mem = nvdimm_provider_data(nvdimm);
+
+ if (nfit_mem->family < 0)
+ return -ENXIO;
+ return sprintf(buf, "%#lx\n", nfit_mem->dsm_mask);
+}
+static DEVICE_ATTR_RO(dsm_mask);
+
static ssize_t flags_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
@@ -873,15 +1029,41 @@ static ssize_t flags_show(struct device *dev,
}
static DEVICE_ATTR_RO(flags);
+static ssize_t id_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct acpi_nfit_control_region *dcr = to_nfit_dcr(dev);
+
+ if (dcr->valid_fields & ACPI_NFIT_CONTROL_MFG_INFO_VALID)
+ return sprintf(buf, "%04x-%02x-%04x-%08x\n",
+ be16_to_cpu(dcr->vendor_id),
+ dcr->manufacturing_location,
+ be16_to_cpu(dcr->manufacturing_date),
+ be32_to_cpu(dcr->serial_number));
+ else
+ return sprintf(buf, "%04x-%08x\n",
+ be16_to_cpu(dcr->vendor_id),
+ be32_to_cpu(dcr->serial_number));
+}
+static DEVICE_ATTR_RO(id);
+
static struct attribute *acpi_nfit_dimm_attributes[] = {
&dev_attr_handle.attr,
&dev_attr_phys_id.attr,
&dev_attr_vendor.attr,
&dev_attr_device.attr,
+ &dev_attr_rev_id.attr,
+ &dev_attr_subsystem_vendor.attr,
+ &dev_attr_subsystem_device.attr,
+ &dev_attr_subsystem_rev_id.attr,
&dev_attr_format.attr,
+ &dev_attr_formats.attr,
+ &dev_attr_format1.attr,
&dev_attr_serial.attr,
- &dev_attr_rev_id.attr,
&dev_attr_flags.attr,
+ &dev_attr_id.attr,
+ &dev_attr_family.attr,
+ &dev_attr_dsm_mask.attr,
NULL,
};
@@ -889,11 +1071,13 @@ static umode_t acpi_nfit_dimm_attr_visible(struct kobject *kobj,
struct attribute *a, int n)
{
struct device *dev = container_of(kobj, struct device, kobj);
+ struct nvdimm *nvdimm = to_nvdimm(dev);
- if (to_nfit_dcr(dev))
- return a->mode;
- else
+ if (!to_nfit_dcr(dev))
return 0;
+ if (a == &dev_attr_format1.attr && num_nvdimm_formats(nvdimm) <= 1)
+ return 0;
+ return a->mode;
}
static struct attribute_group acpi_nfit_dimm_attribute_group = {
@@ -926,10 +1110,13 @@ static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
{
struct acpi_device *adev, *adev_dimm;
struct device *dev = acpi_desc->dev;
- const u8 *uuid = to_nfit_uuid(NFIT_DEV_DIMM);
+ unsigned long dsm_mask;
+ const u8 *uuid;
int i;
- nfit_mem->dsm_mask = acpi_desc->dimm_dsm_force_en;
+ /* nfit test assumes 1:1 relationship between commands and dsms */
+ nfit_mem->dsm_mask = acpi_desc->dimm_cmd_force_en;
+ nfit_mem->family = NVDIMM_FAMILY_INTEL;
adev = to_acpi_dev(acpi_desc);
if (!adev)
return 0;
@@ -942,7 +1129,35 @@ static int acpi_nfit_add_dimm(struct acpi_nfit_desc *acpi_desc,
return force_enable_dimms ? 0 : -ENODEV;
}
- for (i = ND_CMD_SMART; i <= ND_CMD_VENDOR; i++)
+ /*
+ * Until standardization materializes we need to consider up to 3
+ * different command sets. Note, that checking for function0 (bit0)
+	 * different command sets. Note that checking for function0 (bit0)
+ */
+ for (i = NVDIMM_FAMILY_INTEL; i <= NVDIMM_FAMILY_HPE2; i++)
+ if (acpi_check_dsm(adev_dimm->handle, to_nfit_uuid(i), 1, 1))
+ break;
+
+ /* limit the supported commands to those that are publicly documented */
+ nfit_mem->family = i;
+ if (nfit_mem->family == NVDIMM_FAMILY_INTEL) {
+ dsm_mask = 0x3fe;
+ if (disable_vendor_specific)
+ dsm_mask &= ~(1 << ND_CMD_VENDOR);
+ } else if (nfit_mem->family == NVDIMM_FAMILY_HPE1)
+ dsm_mask = 0x1c3c76;
+ else if (nfit_mem->family == NVDIMM_FAMILY_HPE2) {
+ dsm_mask = 0x1fe;
+ if (disable_vendor_specific)
+ dsm_mask &= ~(1 << 8);
+ } else {
+ dev_err(dev, "unknown dimm command family\n");
+ nfit_mem->family = -1;
+ return force_enable_dimms ? 0 : -ENODEV;
+ }
+
+ uuid = to_nfit_uuid(nfit_mem->family);
+ for_each_set_bit(i, &dsm_mask, BITS_PER_LONG)
if (acpi_check_dsm(adev_dimm->handle, uuid, 1, 1ULL << i))
set_bit(i, &nfit_mem->dsm_mask);
@@ -955,8 +1170,8 @@ static int acpi_nfit_register_dimms(struct acpi_nfit_desc *acpi_desc)
int dimm_count = 0;
list_for_each_entry(nfit_mem, &acpi_desc->dimms, list) {
+ unsigned long flags = 0, cmd_mask;
struct nvdimm *nvdimm;
- unsigned long flags = 0;
u32 device_handle;
u16 mem_flags;
int rc;
@@ -979,9 +1194,18 @@ static int acpi_nfit_register_dimms(struct acpi_nfit_desc *acpi_desc)
if (rc)
continue;
+ /*
+ * TODO: provide translation for non-NVDIMM_FAMILY_INTEL
+ * devices (i.e. from nd_cmd to acpi_dsm) to standardize the
+ * userspace interface.
+ */
+ cmd_mask = 1UL << ND_CMD_CALL;
+ if (nfit_mem->family == NVDIMM_FAMILY_INTEL)
+ cmd_mask |= nfit_mem->dsm_mask;
+
nvdimm = nvdimm_create(acpi_desc->nvdimm_bus, nfit_mem,
acpi_nfit_dimm_attribute_groups,
- flags, &nfit_mem->dsm_mask);
+ flags, cmd_mask);
if (!nvdimm)
return -ENOMEM;
@@ -1010,14 +1234,14 @@ static void acpi_nfit_init_dsms(struct acpi_nfit_desc *acpi_desc)
struct acpi_device *adev;
int i;
- nd_desc->dsm_mask = acpi_desc->bus_dsm_force_en;
+ nd_desc->cmd_mask = acpi_desc->bus_cmd_force_en;
adev = to_acpi_dev(acpi_desc);
if (!adev)
return;
for (i = ND_CMD_ARS_CAP; i <= ND_CMD_CLEAR_ERROR; i++)
if (acpi_check_dsm(adev->handle, uuid, 1, 1ULL << i))
- set_bit(i, &nd_desc->dsm_mask);
+ set_bit(i, &nd_desc->cmd_mask);
}
static ssize_t range_index_show(struct device *dev,
@@ -2309,7 +2533,7 @@ static int acpi_nfit_add(struct acpi_device *adev)
acpi_size sz;
int rc;
- status = acpi_get_table_with_size("NFIT", 0, &tbl, &sz);
+ status = acpi_get_table_with_size(ACPI_SIG_NFIT, 0, &tbl, &sz);
if (ACPI_FAILURE(status)) {
/* This is ok, we could have an nvdimm hotplugged later */
dev_dbg(dev, "failed to find NFIT at startup\n");
@@ -2466,6 +2690,8 @@ static __init int nfit_init(void)
acpi_str_to_uuid(UUID_PERSISTENT_VIRTUAL_CD, nfit_uuid[NFIT_SPA_PCD]);
acpi_str_to_uuid(UUID_NFIT_BUS, nfit_uuid[NFIT_DEV_BUS]);
acpi_str_to_uuid(UUID_NFIT_DIMM, nfit_uuid[NFIT_DEV_DIMM]);
+ acpi_str_to_uuid(UUID_NFIT_DIMM_N_HPE1, nfit_uuid[NFIT_DEV_DIMM_N_HPE1]);
+ acpi_str_to_uuid(UUID_NFIT_DIMM_N_HPE2, nfit_uuid[NFIT_DEV_DIMM_N_HPE2]);
nfit_wq = create_singlethread_workqueue("nfit");
if (!nfit_wq)
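
The ND_CMD_CALL plumbing above treats the packet payload as an opaque _DSM passthrough. A rough userspace-side sketch of how such a packet is laid out, assuming struct nd_cmd_pkg and the NVDIMM_FAMILY_*/ND_CMD_CALL definitions from include/uapi/linux/ndctl.h; the function number is invented and the ioctl submission itself is elided:

	#include <stdlib.h>
	#include <string.h>
	#include <linux/ndctl.h>	/* struct nd_cmd_pkg, ND_CMD_CALL, NVDIMM_FAMILY_* */

	/* Build a family-specific _DSM passthrough packet (sketch only). */
	static struct nd_cmd_pkg *build_pkg(unsigned long long family,
					    unsigned long long func,
					    const void *in, size_t in_len,
					    size_t out_len)
	{
		struct nd_cmd_pkg *pkg = calloc(1, sizeof(*pkg) + in_len + out_len);

		if (!pkg)
			return NULL;
		pkg->nd_family = family;	/* must match nfit_mem->family or the driver returns -ENOTTY */
		pkg->nd_command = func;		/* raw _DSM function number, not an ND_CMD_* code */
		pkg->nd_size_in = in_len;
		pkg->nd_size_out = out_len;
		memcpy(pkg->nd_payload, in, in_len);
		/*
		 * After submission with cmd == ND_CMD_CALL, nd_fw_size reports how
		 * much data the firmware produced; only min(nd_fw_size, nd_size_out)
		 * bytes are copied back, placed after the input area of nd_payload.
		 */
		return pkg;
	}
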
diff --git a/drivers/acpi/nfit.h b/drivers/acpi/nfit.h
index c75576b2d50e..11cb38348aef 100644
--- a/drivers/acpi/nfit.h
+++ b/drivers/acpi/nfit.h
@@ -21,13 +21,25 @@
#include <linux/acpi.h>
#include <acpi/acuuid.h>
+/* ACPI 6.1 */
#define UUID_NFIT_BUS "2f10e7a4-9e91-11e4-89d3-123b93f75cba"
+
+/* http://pmem.io/documents/NVDIMM_DSM_Interface_Example.pdf */
#define UUID_NFIT_DIMM "4309ac30-0d11-11e4-9191-0800200c9a66"
+
+/* https://github.com/HewlettPackard/hpe-nvm/blob/master/Documentation/ */
+#define UUID_NFIT_DIMM_N_HPE1 "9002c334-acf3-4c0e-9642-a235f0d53bc6"
+#define UUID_NFIT_DIMM_N_HPE2 "5008664b-b758-41a0-a03c-27c2f2d04f7e"
+
#define ACPI_NFIT_MEM_FAILED_MASK (ACPI_NFIT_MEM_SAVE_FAILED \
| ACPI_NFIT_MEM_RESTORE_FAILED | ACPI_NFIT_MEM_FLUSH_FAILED \
| ACPI_NFIT_MEM_NOT_ARMED)
enum nfit_uuids {
+ /* for simplicity alias the uuid index with the family id */
+ NFIT_DEV_DIMM = NVDIMM_FAMILY_INTEL,
+ NFIT_DEV_DIMM_N_HPE1 = NVDIMM_FAMILY_HPE1,
+ NFIT_DEV_DIMM_N_HPE2 = NVDIMM_FAMILY_HPE2,
NFIT_SPA_VOLATILE,
NFIT_SPA_PM,
NFIT_SPA_DCR,
@@ -37,15 +49,16 @@ enum nfit_uuids {
NFIT_SPA_PDISK,
NFIT_SPA_PCD,
NFIT_DEV_BUS,
- NFIT_DEV_DIMM,
NFIT_UUID_MAX,
};
-enum nfit_fic {
- NFIT_FIC_BYTE = 0x101, /* byte-addressable energy backed */
- NFIT_FIC_BLK = 0x201, /* block-addressable non-energy backed */
- NFIT_FIC_BYTEN = 0x301, /* byte-addressable non-energy backed */
-};
+/*
+ * Region format interface codes are stored as an array of bytes in the
+ * NFIT DIMM Control Region structure
+ */
+#define NFIT_FIC_BYTE cpu_to_be16(0x101) /* byte-addressable energy backed */
+#define NFIT_FIC_BLK cpu_to_be16(0x201) /* block-addressable non-energy backed */
+#define NFIT_FIC_BYTEN cpu_to_be16(0x301) /* byte-addressable non-energy backed */
enum {
NFIT_BLK_READ_FLUSH = 1,
@@ -109,7 +122,9 @@ struct nfit_mem {
struct nfit_flush *nfit_flush;
struct list_head list;
struct acpi_device *adev;
+ struct acpi_nfit_desc *acpi_desc;
unsigned long dsm_mask;
+ int family;
};
struct acpi_nfit_desc {
@@ -132,8 +147,8 @@ struct acpi_nfit_desc {
size_t ars_status_size;
struct work_struct work;
unsigned int cancel:1;
- unsigned long dimm_dsm_force_en;
- unsigned long bus_dsm_force_en;
+ unsigned long dimm_cmd_force_en;
+ unsigned long bus_cmd_force_en;
int (*blk_do_io)(struct nd_blk_region *ndbr, resource_size_t dpa,
void *iobuf, u64 len, int rw);
};
diff --git a/drivers/acpi/numa.c b/drivers/acpi/numa.c
index 72b6e9ef0ae9..d176e0ece470 100644
--- a/drivers/acpi/numa.c
+++ b/drivers/acpi/numa.c
@@ -327,10 +327,18 @@ int __init acpi_numa_init(void)
/* SRAT: Static Resource Affinity Table */
if (!acpi_table_parse(ACPI_SIG_SRAT, acpi_parse_srat)) {
- acpi_table_parse_srat(ACPI_SRAT_TYPE_X2APIC_CPU_AFFINITY,
- acpi_parse_x2apic_affinity, 0);
- acpi_table_parse_srat(ACPI_SRAT_TYPE_CPU_AFFINITY,
- acpi_parse_processor_affinity, 0);
+ struct acpi_subtable_proc srat_proc[2];
+
+ memset(srat_proc, 0, sizeof(srat_proc));
+ srat_proc[0].id = ACPI_SRAT_TYPE_CPU_AFFINITY;
+ srat_proc[0].handler = acpi_parse_processor_affinity;
+ srat_proc[1].id = ACPI_SRAT_TYPE_X2APIC_CPU_AFFINITY;
+ srat_proc[1].handler = acpi_parse_x2apic_affinity;
+
+ acpi_table_parse_entries_array(ACPI_SIG_SRAT,
+ sizeof(struct acpi_table_srat),
+ srat_proc, ARRAY_SIZE(srat_proc), 0);
+
cnt = acpi_table_parse_srat(ACPI_SRAT_TYPE_MEMORY_AFFINITY,
acpi_parse_memory_affinity,
NR_NODE_MEMBLKS);
diff --git a/drivers/acpi/osi.c b/drivers/acpi/osi.c
new file mode 100644
index 000000000000..849f9d2245ca
--- /dev/null
+++ b/drivers/acpi/osi.c
@@ -0,0 +1,522 @@
+/*
+ * osi.c - _OSI implementation
+ *
+ * Copyright (C) 2016 Intel Corporation
+ * Author: Lv Zheng <lv.zheng@intel.com>
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or (at
+ * your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ */
+
+/* Uncomment next line to get verbose printout */
+/* #define DEBUG */
+#define pr_fmt(fmt) "ACPI: " fmt
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/acpi.h>
+#include <linux/dmi.h>
+
+#include "internal.h"
+
+
+#define OSI_STRING_LENGTH_MAX 64
+#define OSI_STRING_ENTRIES_MAX 16
+
+struct acpi_osi_entry {
+ char string[OSI_STRING_LENGTH_MAX];
+ bool enable;
+};
+
+static struct acpi_osi_config {
+ u8 default_disabling;
+ unsigned int linux_enable:1;
+ unsigned int linux_dmi:1;
+ unsigned int linux_cmdline:1;
+ unsigned int darwin_enable:1;
+ unsigned int darwin_dmi:1;
+ unsigned int darwin_cmdline:1;
+} osi_config;
+
+static struct acpi_osi_config osi_config;
+static struct acpi_osi_entry
+osi_setup_entries[OSI_STRING_ENTRIES_MAX] __initdata = {
+ {"Module Device", true},
+ {"Processor Device", true},
+ {"3.0 _SCP Extensions", true},
+ {"Processor Aggregator Device", true},
+};
+
+static u32 acpi_osi_handler(acpi_string interface, u32 supported)
+{
+ if (!strcmp("Linux", interface)) {
+ pr_notice_once(FW_BUG
+ "BIOS _OSI(Linux) query %s%s\n",
+ osi_config.linux_enable ? "honored" : "ignored",
+ osi_config.linux_cmdline ? " via cmdline" :
+ osi_config.linux_dmi ? " via DMI" : "");
+ }
+ if (!strcmp("Darwin", interface)) {
+ pr_notice_once(
+ "BIOS _OSI(Darwin) query %s%s\n",
+ osi_config.darwin_enable ? "honored" : "ignored",
+ osi_config.darwin_cmdline ? " via cmdline" :
+ osi_config.darwin_dmi ? " via DMI" : "");
+ }
+
+ return supported;
+}
+
+void __init acpi_osi_setup(char *str)
+{
+ struct acpi_osi_entry *osi;
+ bool enable = true;
+ int i;
+
+ if (!acpi_gbl_create_osi_method)
+ return;
+
+ if (str == NULL || *str == '\0') {
+ pr_info("_OSI method disabled\n");
+ acpi_gbl_create_osi_method = FALSE;
+ return;
+ }
+
+ if (*str == '!') {
+ str++;
+ if (*str == '\0') {
+ /* Do not override acpi_osi=!* */
+ if (!osi_config.default_disabling)
+ osi_config.default_disabling =
+ ACPI_DISABLE_ALL_VENDOR_STRINGS;
+ return;
+ } else if (*str == '*') {
+ osi_config.default_disabling = ACPI_DISABLE_ALL_STRINGS;
+ for (i = 0; i < OSI_STRING_ENTRIES_MAX; i++) {
+ osi = &osi_setup_entries[i];
+ osi->enable = false;
+ }
+ return;
+ } else if (*str == '!') {
+ osi_config.default_disabling = 0;
+ return;
+ }
+ enable = false;
+ }
+
+ for (i = 0; i < OSI_STRING_ENTRIES_MAX; i++) {
+ osi = &osi_setup_entries[i];
+ if (!strcmp(osi->string, str)) {
+ osi->enable = enable;
+ break;
+ } else if (osi->string[0] == '\0') {
+ osi->enable = enable;
+ strncpy(osi->string, str, OSI_STRING_LENGTH_MAX);
+ break;
+ }
+ }
+}
+
+static void __init __acpi_osi_setup_darwin(bool enable)
+{
+ osi_config.darwin_enable = !!enable;
+ if (enable) {
+ acpi_osi_setup("!");
+ acpi_osi_setup("Darwin");
+ } else {
+ acpi_osi_setup("!!");
+ acpi_osi_setup("!Darwin");
+ }
+}
+
+static void __init acpi_osi_setup_darwin(bool enable)
+{
+ /* Override acpi_osi_dmi_blacklisted() */
+ osi_config.darwin_dmi = 0;
+ osi_config.darwin_cmdline = 1;
+ __acpi_osi_setup_darwin(enable);
+}
+
+/*
+ * The story of _OSI(Linux)
+ *
+ * From pre-history through Linux-2.6.22, Linux responded TRUE upon a BIOS
+ * OSI(Linux) query.
+ *
+ * Unfortunately, reference BIOS writers got wind of this and put
+ * OSI(Linux) in their example code, quickly exposing this string as
+ * ill-conceived and opening the door to an un-bounded number of BIOS
+ * incompatibilities.
+ *
+ * For example, OSI(Linux) was used on resume to re-POST a video card on
+ * one system, because Linux at that time could not do a speedy restore in
+ * its native driver. But then upon gaining quick native restore
+ * capability, Linux has no way to tell the BIOS to skip the time-consuming
+ * POST -- putting Linux at a permanent performance disadvantage. On
+ * another system, the BIOS writer used OSI(Linux) to infer native OS
+ * support for IPMI! On other systems, OSI(Linux) simply got in the way of
+ * Linux claiming to be compatible with other operating systems, exposing
+ * BIOS issues such as skipped device initialization.
+ *
+ * So "Linux" turned out to be a really poor choice of OSI string, and from
+ * Linux-2.6.23 onward we respond FALSE.
+ *
+ * BIOS writers should NOT query _OSI(Linux) on future systems. Linux will
+ * complain on the console when it sees it, and return FALSE. To get Linux
+ * to return TRUE for your system will require a kernel source update to
+ * add a DMI entry, or boot with "acpi_osi=Linux"
+ */
+static void __init __acpi_osi_setup_linux(bool enable)
+{
+ osi_config.linux_enable = !!enable;
+ if (enable)
+ acpi_osi_setup("Linux");
+ else
+ acpi_osi_setup("!Linux");
+}
+
+static void __init acpi_osi_setup_linux(bool enable)
+{
+ /* Override acpi_osi_dmi_blacklisted() */
+ osi_config.linux_dmi = 0;
+ osi_config.linux_cmdline = 1;
+ __acpi_osi_setup_linux(enable);
+}
+
+/*
+ * Modify the list of "OS Interfaces" reported to BIOS via _OSI
+ *
+ * empty string disables _OSI
+ * string starting with '!' disables that string
+ * otherwise string is added to list, augmenting built-in strings
+ */
+static void __init acpi_osi_setup_late(void)
+{
+ struct acpi_osi_entry *osi;
+ char *str;
+ int i;
+ acpi_status status;
+
+ if (osi_config.default_disabling) {
+ status = acpi_update_interfaces(osi_config.default_disabling);
+ if (ACPI_SUCCESS(status))
+ pr_info("Disabled all _OSI OS vendors%s\n",
+ osi_config.default_disabling ==
+ ACPI_DISABLE_ALL_STRINGS ?
+ " and feature groups" : "");
+ }
+
+ for (i = 0; i < OSI_STRING_ENTRIES_MAX; i++) {
+ osi = &osi_setup_entries[i];
+ str = osi->string;
+ if (*str == '\0')
+ break;
+ if (osi->enable) {
+ status = acpi_install_interface(str);
+ if (ACPI_SUCCESS(status))
+ pr_info("Added _OSI(%s)\n", str);
+ } else {
+ status = acpi_remove_interface(str);
+ if (ACPI_SUCCESS(status))
+ pr_info("Deleted _OSI(%s)\n", str);
+ }
+ }
+}
+
+static int __init osi_setup(char *str)
+{
+ if (str && !strcmp("Linux", str))
+ acpi_osi_setup_linux(true);
+ else if (str && !strcmp("!Linux", str))
+ acpi_osi_setup_linux(false);
+ else if (str && !strcmp("Darwin", str))
+ acpi_osi_setup_darwin(true);
+ else if (str && !strcmp("!Darwin", str))
+ acpi_osi_setup_darwin(false);
+ else
+ acpi_osi_setup(str);
+
+ return 1;
+}
+__setup("acpi_osi=", osi_setup);
+
+bool acpi_osi_is_win8(void)
+{
+ return acpi_gbl_osi_data >= ACPI_OSI_WIN_8;
+}
+EXPORT_SYMBOL(acpi_osi_is_win8);
+
+static void __init acpi_osi_dmi_darwin(bool enable,
+ const struct dmi_system_id *d)
+{
+ pr_notice("DMI detected to setup _OSI(\"Darwin\"): %s\n", d->ident);
+ osi_config.darwin_dmi = 1;
+ __acpi_osi_setup_darwin(enable);
+}
+
+void __init acpi_osi_dmi_linux(bool enable, const struct dmi_system_id *d)
+{
+ pr_notice("DMI detected to setup _OSI(\"Linux\"): %s\n", d->ident);
+ osi_config.linux_dmi = 1;
+ __acpi_osi_setup_linux(enable);
+}
+
+static int __init dmi_enable_osi_darwin(const struct dmi_system_id *d)
+{
+ acpi_osi_dmi_darwin(true, d);
+
+ return 0;
+}
+
+static int __init dmi_enable_osi_linux(const struct dmi_system_id *d)
+{
+ acpi_osi_dmi_linux(true, d);
+
+ return 0;
+}
+
+static int __init dmi_disable_osi_vista(const struct dmi_system_id *d)
+{
+ pr_notice("DMI detected: %s\n", d->ident);
+ acpi_osi_setup("!Windows 2006");
+ acpi_osi_setup("!Windows 2006 SP1");
+ acpi_osi_setup("!Windows 2006 SP2");
+
+ return 0;
+}
+
+static int __init dmi_disable_osi_win7(const struct dmi_system_id *d)
+{
+ pr_notice("DMI detected: %s\n", d->ident);
+ acpi_osi_setup("!Windows 2009");
+
+ return 0;
+}
+
+static int __init dmi_disable_osi_win8(const struct dmi_system_id *d)
+{
+ pr_notice("DMI detected: %s\n", d->ident);
+ acpi_osi_setup("!Windows 2012");
+
+ return 0;
+}
+
+/*
+ * Linux default _OSI response behavior is determined by this DMI table.
+ *
+ * Note that _OSI("Linux")/_OSI("Darwin") determined here can be overridden
+ * by acpi_osi=!Linux/acpi_osi=!Darwin command line options.
+ */
+static struct dmi_system_id acpi_osi_dmi_table[] __initdata = {
+ {
+ .callback = dmi_disable_osi_vista,
+ .ident = "Fujitsu Siemens",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "ESPRIMO Mobile V5505"),
+ },
+ },
+ {
+ /*
+		 * The MSI GX723 DSDT contains an NVIF method that needs to be
+		 * handled by the Nvidia driver (e.g. nouveau) when the user
+		 * presses the brightness hotkey. nouveau does not handle it yet,
+		 * so pressing the hotkey leaves the DSDT stuck in an infinite
+		 * while loop. Add the MSI GX723's DMI information to this table
+		 * to work around the issue; remove it again once nouveau grows
+		 * support.
+ */
+ .callback = dmi_disable_osi_vista,
+ .ident = "MSI GX723",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Micro-Star International"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "GX723"),
+ },
+ },
+ {
+ .callback = dmi_disable_osi_vista,
+ .ident = "Sony VGN-NS10J_S",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "VGN-NS10J_S"),
+ },
+ },
+ {
+ .callback = dmi_disable_osi_vista,
+ .ident = "Sony VGN-SR290J",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "VGN-SR290J"),
+ },
+ },
+ {
+ .callback = dmi_disable_osi_vista,
+ .ident = "VGN-NS50B_L",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "VGN-NS50B_L"),
+ },
+ },
+ {
+ .callback = dmi_disable_osi_vista,
+ .ident = "VGN-SR19XN",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Sony Corporation"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "VGN-SR19XN"),
+ },
+ },
+ {
+ .callback = dmi_disable_osi_vista,
+ .ident = "Toshiba Satellite L355",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
+ DMI_MATCH(DMI_PRODUCT_VERSION, "Satellite L355"),
+ },
+ },
+ {
+ .callback = dmi_disable_osi_win7,
+ .ident = "ASUS K50IJ",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK Computer Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "K50IJ"),
+ },
+ },
+ {
+ .callback = dmi_disable_osi_vista,
+ .ident = "Toshiba P305D",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Satellite P305D"),
+ },
+ },
+ {
+ .callback = dmi_disable_osi_vista,
+ .ident = "Toshiba NB100",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "NB100"),
+ },
+ },
+
+ /*
+ * The wireless hotkey does not work on those machines when
+ * returning true for _OSI("Windows 2012")
+ */
+ {
+ .callback = dmi_disable_osi_win8,
+ .ident = "Dell Inspiron 7737",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 7737"),
+ },
+ },
+ {
+ .callback = dmi_disable_osi_win8,
+ .ident = "Dell Inspiron 7537",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 7537"),
+ },
+ },
+ {
+ .callback = dmi_disable_osi_win8,
+ .ident = "Dell Inspiron 5437",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 5437"),
+ },
+ },
+ {
+ .callback = dmi_disable_osi_win8,
+ .ident = "Dell Inspiron 3437",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Inspiron 3437"),
+ },
+ },
+ {
+ .callback = dmi_disable_osi_win8,
+ .ident = "Dell Vostro 3446",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 3446"),
+ },
+ },
+ {
+ .callback = dmi_disable_osi_win8,
+ .ident = "Dell Vostro 3546",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Dell Inc."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "Vostro 3546"),
+ },
+ },
+
+ /*
+ * BIOS invocation of _OSI(Linux) is almost always a BIOS bug.
+ * Linux ignores it, except for the machines enumerated below.
+ */
+
+ /*
+	 * Without this quirk this EEEpc exports a non-working WMI interface;
+	 * with it, it exports the working "good old" eeepc_laptop interface,
+	 * fixing both brightness control and rfkill.
+ */
+ {
+ .callback = dmi_enable_osi_linux,
+ .ident = "Asus EEE PC 1015PX",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "ASUSTeK Computer INC."),
+ DMI_MATCH(DMI_PRODUCT_NAME, "1015PX"),
+ },
+ },
+
+ /*
+ * Enable _OSI("Darwin") for all apple platforms.
+ */
+ {
+ .callback = dmi_enable_osi_darwin,
+ .ident = "Apple hardware",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Apple Inc."),
+ },
+ },
+ {
+ .callback = dmi_enable_osi_darwin,
+ .ident = "Apple hardware",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "Apple Computer, Inc."),
+ },
+ },
+ {}
+};
+
+static __init void acpi_osi_dmi_blacklisted(void)
+{
+ dmi_check_system(acpi_osi_dmi_table);
+}
+
+int __init early_acpi_osi_init(void)
+{
+ acpi_osi_dmi_blacklisted();
+
+ return 0;
+}
+
+int __init acpi_osi_init(void)
+{
+ acpi_install_interface_handler(acpi_osi_handler);
+ acpi_osi_setup_late();
+
+ return 0;
+}
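
For reference, the forms the parser above accepts on the kernel command line (quoting is needed when the string contains spaces):

	acpi_osi=			# do not create the _OSI method at all
	acpi_osi=!			# disable all built-in OS vendor strings
	acpi_osi=!*			# disable all strings, feature groups included
	acpi_osi=!!			# clear the default disabling set by acpi_osi=! or acpi_osi=!*
	acpi_osi="!Windows 2012"	# stop answering TRUE to _OSI("Windows 2012")
	acpi_osi=Linux			# honor BIOS _OSI("Linux") queries
	acpi_osi=!Darwin		# ignore BIOS _OSI("Darwin") queries
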
diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
index 814d5f83b75e..b108f1358a32 100644
--- a/drivers/acpi/osl.c
+++ b/drivers/acpi/osl.c
@@ -56,10 +56,6 @@ struct acpi_os_dpc {
struct work_struct work;
};
-#ifdef CONFIG_ACPI_CUSTOM_DSDT
-#include CONFIG_ACPI_CUSTOM_DSDT_FILE
-#endif
-
#ifdef ENABLE_DEBUGGER
#include <linux/kdb.h>
@@ -96,72 +92,6 @@ struct acpi_ioremap {
static LIST_HEAD(acpi_ioremaps);
static DEFINE_MUTEX(acpi_ioremap_lock);
-static void __init acpi_osi_setup_late(void);
-
-/*
- * The story of _OSI(Linux)
- *
- * From pre-history through Linux-2.6.22,
- * Linux responded TRUE upon a BIOS OSI(Linux) query.
- *
- * Unfortunately, reference BIOS writers got wind of this
- * and put OSI(Linux) in their example code, quickly exposing
- * this string as ill-conceived and opening the door to
- * an un-bounded number of BIOS incompatibilities.
- *
- * For example, OSI(Linux) was used on resume to re-POST a
- * video card on one system, because Linux at that time
- * could not do a speedy restore in its native driver.
- * But then upon gaining quick native restore capability,
- * Linux has no way to tell the BIOS to skip the time-consuming
- * POST -- putting Linux at a permanent performance disadvantage.
- * On another system, the BIOS writer used OSI(Linux)
- * to infer native OS support for IPMI! On other systems,
- * OSI(Linux) simply got in the way of Linux claiming to
- * be compatible with other operating systems, exposing
- * BIOS issues such as skipped device initialization.
- *
- * So "Linux" turned out to be a really poor chose of
- * OSI string, and from Linux-2.6.23 onward we respond FALSE.
- *
- * BIOS writers should NOT query _OSI(Linux) on future systems.
- * Linux will complain on the console when it sees it, and return FALSE.
- * To get Linux to return TRUE for your system will require
- * a kernel source update to add a DMI entry,
- * or boot with "acpi_osi=Linux"
- */
-
-static struct osi_linux {
- unsigned int enable:1;
- unsigned int dmi:1;
- unsigned int cmdline:1;
- unsigned int default_disabling:1;
-} osi_linux = {0, 0, 0, 0};
-
-static u32 acpi_osi_handler(acpi_string interface, u32 supported)
-{
- if (!strcmp("Linux", interface)) {
-
- printk_once(KERN_NOTICE FW_BUG PREFIX
- "BIOS _OSI(Linux) query %s%s\n",
- osi_linux.enable ? "honored" : "ignored",
- osi_linux.cmdline ? " via cmdline" :
- osi_linux.dmi ? " via DMI" : "");
- }
-
- if (!strcmp("Darwin", interface)) {
- /*
- * Apple firmware will behave poorly if it receives positive
- * answers to "Darwin" and any other OS. Respond positively
- * to Darwin and then disable all other vendor strings.
- */
- acpi_update_interfaces(ACPI_DISABLE_ALL_VENDOR_STRINGS);
- supported = ACPI_UINT32_MAX;
- }
-
- return supported;
-}
-
static void __init acpi_request_region (struct acpi_generic_address *gas,
unsigned int length, char *desc)
{
@@ -582,7 +512,7 @@ static char acpi_os_name[ACPI_MAX_OVERRIDE_LEN];
acpi_status
acpi_os_predefined_override(const struct acpi_predefined_names *init_val,
- char **new_val)
+ acpi_string *new_val)
{
if (!init_val || !new_val)
return AE_BAD_PARAMETER;
@@ -602,280 +532,6 @@ acpi_os_predefined_override(const struct acpi_predefined_names *init_val,
return AE_OK;
}
-static void acpi_table_taint(struct acpi_table_header *table)
-{
- pr_warn(PREFIX
- "Override [%4.4s-%8.8s], this is unsafe: tainting kernel\n",
- table->signature, table->oem_table_id);
- add_taint(TAINT_OVERRIDDEN_ACPI_TABLE, LOCKDEP_NOW_UNRELIABLE);
-}
-
-#ifdef CONFIG_ACPI_INITRD_TABLE_OVERRIDE
-#include <linux/earlycpio.h>
-#include <linux/memblock.h>
-
-static u64 acpi_tables_addr;
-static int all_tables_size;
-
-/* Copied from acpica/tbutils.c:acpi_tb_checksum() */
-static u8 __init acpi_table_checksum(u8 *buffer, u32 length)
-{
- u8 sum = 0;
- u8 *end = buffer + length;
-
- while (buffer < end)
- sum = (u8) (sum + *(buffer++));
- return sum;
-}
-
-/* All but ACPI_SIG_RSDP and ACPI_SIG_FACS: */
-static const char * const table_sigs[] = {
- ACPI_SIG_BERT, ACPI_SIG_CPEP, ACPI_SIG_ECDT, ACPI_SIG_EINJ,
- ACPI_SIG_ERST, ACPI_SIG_HEST, ACPI_SIG_MADT, ACPI_SIG_MSCT,
- ACPI_SIG_SBST, ACPI_SIG_SLIT, ACPI_SIG_SRAT, ACPI_SIG_ASF,
- ACPI_SIG_BOOT, ACPI_SIG_DBGP, ACPI_SIG_DMAR, ACPI_SIG_HPET,
- ACPI_SIG_IBFT, ACPI_SIG_IVRS, ACPI_SIG_MCFG, ACPI_SIG_MCHI,
- ACPI_SIG_SLIC, ACPI_SIG_SPCR, ACPI_SIG_SPMI, ACPI_SIG_TCPA,
- ACPI_SIG_UEFI, ACPI_SIG_WAET, ACPI_SIG_WDAT, ACPI_SIG_WDDT,
- ACPI_SIG_WDRT, ACPI_SIG_DSDT, ACPI_SIG_FADT, ACPI_SIG_PSDT,
- ACPI_SIG_RSDT, ACPI_SIG_XSDT, ACPI_SIG_SSDT, NULL };
-
-#define ACPI_HEADER_SIZE sizeof(struct acpi_table_header)
-
-#define ACPI_OVERRIDE_TABLES 64
-static struct cpio_data __initdata acpi_initrd_files[ACPI_OVERRIDE_TABLES];
-static DECLARE_BITMAP(acpi_initrd_installed, ACPI_OVERRIDE_TABLES);
-
-#define MAP_CHUNK_SIZE (NR_FIX_BTMAPS << PAGE_SHIFT)
-
-void __init acpi_initrd_override(void *data, size_t size)
-{
- int sig, no, table_nr = 0, total_offset = 0;
- long offset = 0;
- struct acpi_table_header *table;
- char cpio_path[32] = "kernel/firmware/acpi/";
- struct cpio_data file;
-
- if (data == NULL || size == 0)
- return;
-
- for (no = 0; no < ACPI_OVERRIDE_TABLES; no++) {
- file = find_cpio_data(cpio_path, data, size, &offset);
- if (!file.data)
- break;
-
- data += offset;
- size -= offset;
-
- if (file.size < sizeof(struct acpi_table_header)) {
- pr_err("ACPI OVERRIDE: Table smaller than ACPI header [%s%s]\n",
- cpio_path, file.name);
- continue;
- }
-
- table = file.data;
-
- for (sig = 0; table_sigs[sig]; sig++)
- if (!memcmp(table->signature, table_sigs[sig], 4))
- break;
-
- if (!table_sigs[sig]) {
- pr_err("ACPI OVERRIDE: Unknown signature [%s%s]\n",
- cpio_path, file.name);
- continue;
- }
- if (file.size != table->length) {
- pr_err("ACPI OVERRIDE: File length does not match table length [%s%s]\n",
- cpio_path, file.name);
- continue;
- }
- if (acpi_table_checksum(file.data, table->length)) {
- pr_err("ACPI OVERRIDE: Bad table checksum [%s%s]\n",
- cpio_path, file.name);
- continue;
- }
-
- pr_info("%4.4s ACPI table found in initrd [%s%s][0x%x]\n",
- table->signature, cpio_path, file.name, table->length);
-
- all_tables_size += table->length;
- acpi_initrd_files[table_nr].data = file.data;
- acpi_initrd_files[table_nr].size = file.size;
- table_nr++;
- }
- if (table_nr == 0)
- return;
-
- acpi_tables_addr =
- memblock_find_in_range(0, max_low_pfn_mapped << PAGE_SHIFT,
- all_tables_size, PAGE_SIZE);
- if (!acpi_tables_addr) {
- WARN_ON(1);
- return;
- }
- /*
- * Only calling e820_add_reserve does not work and the
- * tables are invalid (memory got used) later.
- * memblock_reserve works as expected and the tables won't get modified.
- * But it's not enough on X86 because ioremap will
- * complain later (used by acpi_os_map_memory) that the pages
- * that should get mapped are not marked "reserved".
- * Both memblock_reserve and e820_add_region (via arch_reserve_mem_area)
-	 * work fine.
- */
- memblock_reserve(acpi_tables_addr, all_tables_size);
- arch_reserve_mem_area(acpi_tables_addr, all_tables_size);
-
- /*
- * early_ioremap only can remap 256k one time. If we map all
- * tables one time, we will hit the limit. Need to map chunks
- * one by one during copying the same as that in relocate_initrd().
- */
- for (no = 0; no < table_nr; no++) {
- unsigned char *src_p = acpi_initrd_files[no].data;
- phys_addr_t size = acpi_initrd_files[no].size;
- phys_addr_t dest_addr = acpi_tables_addr + total_offset;
- phys_addr_t slop, clen;
- char *dest_p;
-
- total_offset += size;
-
- while (size) {
- slop = dest_addr & ~PAGE_MASK;
- clen = size;
- if (clen > MAP_CHUNK_SIZE - slop)
- clen = MAP_CHUNK_SIZE - slop;
- dest_p = early_ioremap(dest_addr & PAGE_MASK,
- clen + slop);
- memcpy(dest_p + slop, src_p, clen);
- early_iounmap(dest_p, clen + slop);
- src_p += clen;
- dest_addr += clen;
- size -= clen;
- }
- }
-}
-
-acpi_status
-acpi_os_physical_table_override(struct acpi_table_header *existing_table,
- acpi_physical_address *address, u32 *length)
-{
- int table_offset = 0;
- int table_index = 0;
- struct acpi_table_header *table;
- u32 table_length;
-
- *length = 0;
- *address = 0;
- if (!acpi_tables_addr)
- return AE_OK;
-
- while (table_offset + ACPI_HEADER_SIZE <= all_tables_size) {
- table = acpi_os_map_memory(acpi_tables_addr + table_offset,
- ACPI_HEADER_SIZE);
- if (table_offset + table->length > all_tables_size) {
- acpi_os_unmap_memory(table, ACPI_HEADER_SIZE);
- WARN_ON(1);
- return AE_OK;
- }
-
- table_length = table->length;
-
- /* Only override tables matched */
- if (test_bit(table_index, acpi_initrd_installed) ||
- memcmp(existing_table->signature, table->signature, 4) ||
- memcmp(table->oem_table_id, existing_table->oem_table_id,
- ACPI_OEM_TABLE_ID_SIZE)) {
- acpi_os_unmap_memory(table, ACPI_HEADER_SIZE);
- goto next_table;
- }
-
- *length = table_length;
- *address = acpi_tables_addr + table_offset;
- acpi_table_taint(existing_table);
- acpi_os_unmap_memory(table, ACPI_HEADER_SIZE);
- set_bit(table_index, acpi_initrd_installed);
- break;
-
-next_table:
- table_offset += table_length;
- table_index++;
- }
- return AE_OK;
-}
-
-void __init acpi_initrd_initialize_tables(void)
-{
- int table_offset = 0;
- int table_index = 0;
- u32 table_length;
- struct acpi_table_header *table;
-
- if (!acpi_tables_addr)
- return;
-
- while (table_offset + ACPI_HEADER_SIZE <= all_tables_size) {
- table = acpi_os_map_memory(acpi_tables_addr + table_offset,
- ACPI_HEADER_SIZE);
- if (table_offset + table->length > all_tables_size) {
- acpi_os_unmap_memory(table, ACPI_HEADER_SIZE);
- WARN_ON(1);
- return;
- }
-
- table_length = table->length;
-
- /* Skip RSDT/XSDT which should only be used for override */
- if (test_bit(table_index, acpi_initrd_installed) ||
- ACPI_COMPARE_NAME(table->signature, ACPI_SIG_RSDT) ||
- ACPI_COMPARE_NAME(table->signature, ACPI_SIG_XSDT)) {
- acpi_os_unmap_memory(table, ACPI_HEADER_SIZE);
- goto next_table;
- }
-
- acpi_table_taint(table);
- acpi_os_unmap_memory(table, ACPI_HEADER_SIZE);
- acpi_install_table(acpi_tables_addr + table_offset, TRUE);
- set_bit(table_index, acpi_initrd_installed);
-next_table:
- table_offset += table_length;
- table_index++;
- }
-}
-#else
-acpi_status
-acpi_os_physical_table_override(struct acpi_table_header *existing_table,
- acpi_physical_address *address,
- u32 *table_length)
-{
- *table_length = 0;
- *address = 0;
- return AE_OK;
-}
-
-void __init acpi_initrd_initialize_tables(void)
-{
-}
-#endif /* CONFIG_ACPI_INITRD_TABLE_OVERRIDE */
-
-acpi_status
-acpi_os_table_override(struct acpi_table_header *existing_table,
- struct acpi_table_header **new_table)
-{
- if (!existing_table || !new_table)
- return AE_BAD_PARAMETER;
-
- *new_table = NULL;
-
-#ifdef CONFIG_ACPI_CUSTOM_DSDT
- if (strncmp(existing_table->signature, "DSDT", 4) == 0)
- *new_table = (struct acpi_table_header *)AmlCode;
-#endif
- if (*new_table != NULL)
- acpi_table_taint(existing_table);
- return AE_OK;
-}
-
static irqreturn_t acpi_irq(int irq, void *dev_id)
{
u32 handled;
@@ -1717,156 +1373,6 @@ static int __init acpi_os_name_setup(char *str)
__setup("acpi_os_name=", acpi_os_name_setup);
-#define OSI_STRING_LENGTH_MAX 64 /* arbitrary */
-#define OSI_STRING_ENTRIES_MAX 16 /* arbitrary */
-
-struct osi_setup_entry {
- char string[OSI_STRING_LENGTH_MAX];
- bool enable;
-};
-
-static struct osi_setup_entry
- osi_setup_entries[OSI_STRING_ENTRIES_MAX] __initdata = {
- {"Module Device", true},
- {"Processor Device", true},
- {"3.0 _SCP Extensions", true},
- {"Processor Aggregator Device", true},
-};
-
-void __init acpi_osi_setup(char *str)
-{
- struct osi_setup_entry *osi;
- bool enable = true;
- int i;
-
- if (!acpi_gbl_create_osi_method)
- return;
-
- if (str == NULL || *str == '\0') {
- printk(KERN_INFO PREFIX "_OSI method disabled\n");
- acpi_gbl_create_osi_method = FALSE;
- return;
- }
-
- if (*str == '!') {
- str++;
- if (*str == '\0') {
- osi_linux.default_disabling = 1;
- return;
- } else if (*str == '*') {
- acpi_update_interfaces(ACPI_DISABLE_ALL_STRINGS);
- for (i = 0; i < OSI_STRING_ENTRIES_MAX; i++) {
- osi = &osi_setup_entries[i];
- osi->enable = false;
- }
- return;
- }
- enable = false;
- }
-
- for (i = 0; i < OSI_STRING_ENTRIES_MAX; i++) {
- osi = &osi_setup_entries[i];
- if (!strcmp(osi->string, str)) {
- osi->enable = enable;
- break;
- } else if (osi->string[0] == '\0') {
- osi->enable = enable;
- strncpy(osi->string, str, OSI_STRING_LENGTH_MAX);
- break;
- }
- }
-}
-
-static void __init set_osi_linux(unsigned int enable)
-{
- if (osi_linux.enable != enable)
- osi_linux.enable = enable;
-
- if (osi_linux.enable)
- acpi_osi_setup("Linux");
- else
- acpi_osi_setup("!Linux");
-
- return;
-}
-
-static void __init acpi_cmdline_osi_linux(unsigned int enable)
-{
- osi_linux.cmdline = 1; /* cmdline set the default and override DMI */
- osi_linux.dmi = 0;
- set_osi_linux(enable);
-
- return;
-}
-
-void __init acpi_dmi_osi_linux(int enable, const struct dmi_system_id *d)
-{
- printk(KERN_NOTICE PREFIX "DMI detected: %s\n", d->ident);
-
- if (enable == -1)
- return;
-
- osi_linux.dmi = 1; /* DMI knows that this box asks OSI(Linux) */
- set_osi_linux(enable);
-
- return;
-}
-
-/*
- * Modify the list of "OS Interfaces" reported to BIOS via _OSI
- *
- * empty string disables _OSI
- * string starting with '!' disables that string
- * otherwise string is added to list, augmenting built-in strings
- */
-static void __init acpi_osi_setup_late(void)
-{
- struct osi_setup_entry *osi;
- char *str;
- int i;
- acpi_status status;
-
- if (osi_linux.default_disabling) {
- status = acpi_update_interfaces(ACPI_DISABLE_ALL_VENDOR_STRINGS);
-
- if (ACPI_SUCCESS(status))
- printk(KERN_INFO PREFIX "Disabled all _OSI OS vendors\n");
- }
-
- for (i = 0; i < OSI_STRING_ENTRIES_MAX; i++) {
- osi = &osi_setup_entries[i];
- str = osi->string;
-
- if (*str == '\0')
- break;
- if (osi->enable) {
- status = acpi_install_interface(str);
-
- if (ACPI_SUCCESS(status))
- printk(KERN_INFO PREFIX "Added _OSI(%s)\n", str);
- } else {
- status = acpi_remove_interface(str);
-
- if (ACPI_SUCCESS(status))
- printk(KERN_INFO PREFIX "Deleted _OSI(%s)\n", str);
- }
- }
-}
-
-static int __init osi_setup(char *str)
-{
- if (str && !strcmp("Linux", str))
- acpi_cmdline_osi_linux(1);
- else if (str && !strcmp("!Linux", str))
- acpi_cmdline_osi_linux(0);
- else
- acpi_osi_setup(str);
-
- return 1;
-}
-
-__setup("acpi_osi=", osi_setup);
-
/*
* Disable the auto-serialization of named objects creation methods.
*
@@ -1986,12 +1492,6 @@ int acpi_resources_are_enforced(void)
}
EXPORT_SYMBOL(acpi_resources_are_enforced);
-bool acpi_osi_is_win8(void)
-{
- return acpi_gbl_osi_data >= ACPI_OSI_WIN_8;
-}
-EXPORT_SYMBOL(acpi_osi_is_win8);
-
/*
* Deallocate the memory for a spinlock.
*/
@@ -2157,8 +1657,7 @@ acpi_status __init acpi_os_initialize1(void)
BUG_ON(!kacpid_wq);
BUG_ON(!kacpi_notify_wq);
BUG_ON(!kacpi_hotplug_wq);
- acpi_install_interface_handler(acpi_osi_handler);
- acpi_osi_setup_late();
+ acpi_osi_init();
return AE_OK;
}
diff --git a/drivers/acpi/pci_link.c b/drivers/acpi/pci_link.c
index ededa909df2f..8fc7323ed3e8 100644
--- a/drivers/acpi/pci_link.c
+++ b/drivers/acpi/pci_link.c
@@ -36,6 +36,7 @@
#include <linux/mutex.h>
#include <linux/slab.h>
#include <linux/acpi.h>
+#include <linux/irq.h>
#include "internal.h"
@@ -437,17 +438,15 @@ static int acpi_pci_link_set(struct acpi_pci_link *link, int irq)
* enabled system.
*/
-#define ACPI_MAX_IRQS 256
-#define ACPI_MAX_ISA_IRQ 16
+#define ACPI_MAX_ISA_IRQS 16
-#define PIRQ_PENALTY_PCI_AVAILABLE (0)
#define PIRQ_PENALTY_PCI_POSSIBLE (16*16)
#define PIRQ_PENALTY_PCI_USING (16*16*16)
#define PIRQ_PENALTY_ISA_TYPICAL (16*16*16*16)
#define PIRQ_PENALTY_ISA_USED (16*16*16*16*16)
#define PIRQ_PENALTY_ISA_ALWAYS (16*16*16*16*16*16)
-static int acpi_irq_penalty[ACPI_MAX_IRQS] = {
+static int acpi_isa_irq_penalty[ACPI_MAX_ISA_IRQS] = {
PIRQ_PENALTY_ISA_ALWAYS, /* IRQ0 timer */
PIRQ_PENALTY_ISA_ALWAYS, /* IRQ1 keyboard */
PIRQ_PENALTY_ISA_ALWAYS, /* IRQ2 cascade */
@@ -457,9 +456,9 @@ static int acpi_irq_penalty[ACPI_MAX_IRQS] = {
PIRQ_PENALTY_ISA_TYPICAL, /* IRQ6 */
PIRQ_PENALTY_ISA_TYPICAL, /* IRQ7 parallel, spurious */
PIRQ_PENALTY_ISA_TYPICAL, /* IRQ8 rtc, sometimes */
- PIRQ_PENALTY_PCI_AVAILABLE, /* IRQ9 PCI, often acpi */
- PIRQ_PENALTY_PCI_AVAILABLE, /* IRQ10 PCI */
- PIRQ_PENALTY_PCI_AVAILABLE, /* IRQ11 PCI */
+ 0, /* IRQ9 PCI, often acpi */
+ 0, /* IRQ10 PCI */
+ 0, /* IRQ11 PCI */
PIRQ_PENALTY_ISA_USED, /* IRQ12 mouse */
PIRQ_PENALTY_ISA_USED, /* IRQ13 fpe, sometimes */
PIRQ_PENALTY_ISA_USED, /* IRQ14 ide0 */
@@ -467,39 +466,58 @@ static int acpi_irq_penalty[ACPI_MAX_IRQS] = {
/* >IRQ15 */
};
-int __init acpi_irq_penalty_init(void)
+static int acpi_irq_pci_sharing_penalty(int irq)
{
struct acpi_pci_link *link;
- int i;
+ int penalty = 0;
- /*
- * Update penalties to facilitate IRQ balancing.
- */
list_for_each_entry(link, &acpi_link_list, list) {
-
/*
- * reflect the possible and active irqs in the penalty table --
- * useful for breaking ties.
+ * If a link is active, penalize its IRQ heavily
+ * so we try to choose a different IRQ.
*/
- if (link->irq.possible_count) {
- int penalty =
- PIRQ_PENALTY_PCI_POSSIBLE /
- link->irq.possible_count;
-
- for (i = 0; i < link->irq.possible_count; i++) {
- if (link->irq.possible[i] < ACPI_MAX_ISA_IRQ)
- acpi_irq_penalty[link->irq.
- possible[i]] +=
- penalty;
- }
-
- } else if (link->irq.active) {
- acpi_irq_penalty[link->irq.active] +=
- PIRQ_PENALTY_PCI_POSSIBLE;
+ if (link->irq.active && link->irq.active == irq)
+ penalty += PIRQ_PENALTY_PCI_USING;
+ else {
+ int i;
+
+ /*
+ * If a link is inactive, penalize the IRQs it
+ * might use, but not as severely.
+ */
+ for (i = 0; i < link->irq.possible_count; i++)
+ if (link->irq.possible[i] == irq)
+ penalty += PIRQ_PENALTY_PCI_POSSIBLE /
+ link->irq.possible_count;
}
}
- return 0;
+ return penalty;
+}
+
+static int acpi_irq_get_penalty(int irq)
+{
+ int penalty = 0;
+
+ if (irq < ACPI_MAX_ISA_IRQS)
+ penalty += acpi_isa_irq_penalty[irq];
+
+ /*
+ * Penalize IRQ used by ACPI SCI. If ACPI SCI pin attributes conflict
+ * with PCI IRQ attributes, mark ACPI SCI as ISA_ALWAYS so it won't be
+	 * used for PCI IRQs.
+ */
+ if (irq == acpi_gbl_FADT.sci_interrupt) {
+ u32 type = irq_get_trigger_type(irq) & IRQ_TYPE_SENSE_MASK;
+
+ if (type != IRQ_TYPE_LEVEL_LOW)
+ penalty += PIRQ_PENALTY_ISA_ALWAYS;
+ else
+ penalty += PIRQ_PENALTY_PCI_USING;
+ }
+
+ penalty += acpi_irq_pci_sharing_penalty(irq);
+ return penalty;
}
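
A minimal standalone model of the penalty arithmetic implemented above may
help illustrate how active and merely possible link IRQs are weighted; the
link layout, IRQ numbers, and helper names below are invented for
illustration and only reuse the same 16^n penalty constants as the patch.

/*
 * Standalone sketch (not kernel code): one hypothetical link is active on
 * IRQ 11 and could also use IRQ 10; a second, inactive link lists both.
 * The allocator above prefers the IRQ with the lowest accumulated penalty.
 */
#include <stdio.h>

#define PENALTY_PCI_POSSIBLE	(16 * 16)
#define PENALTY_PCI_USING	(16 * 16 * 16)

struct fake_link {
	int active;		/* IRQ currently programmed, 0 if none */
	int possible[4];	/* candidate IRQs from _PRS */
	int possible_count;
};

static int sharing_penalty(const struct fake_link *links, int nlinks, int irq)
{
	int penalty = 0;

	for (int i = 0; i < nlinks; i++) {
		const struct fake_link *l = &links[i];

		if (l->active && l->active == irq) {
			penalty += PENALTY_PCI_USING;
		} else {
			for (int j = 0; j < l->possible_count; j++)
				if (l->possible[j] == irq)
					penalty += PENALTY_PCI_POSSIBLE /
						   l->possible_count;
		}
	}
	return penalty;
}

int main(void)
{
	struct fake_link links[] = {
		{ .active = 11, .possible = { 10, 11 }, .possible_count = 2 },
		{ .active = 0,  .possible = { 10, 11 }, .possible_count = 2 },
	};

	for (int irq = 9; irq <= 11; irq++)
		printf("IRQ %d penalty %d\n", irq,
		       sharing_penalty(links, 2, irq));
	return 0;	/* IRQ 9 -> 0, IRQ 10 -> 256, IRQ 11 -> 4224 */
}
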
static int acpi_irq_balance = -1; /* 0: static, 1: balance */
@@ -547,12 +565,12 @@ static int acpi_pci_link_allocate(struct acpi_pci_link *link)
* the use of IRQs 9, 10, 11, and >15.
*/
for (i = (link->irq.possible_count - 1); i >= 0; i--) {
- if (acpi_irq_penalty[irq] >
- acpi_irq_penalty[link->irq.possible[i]])
+ if (acpi_irq_get_penalty(irq) >
+ acpi_irq_get_penalty(link->irq.possible[i]))
irq = link->irq.possible[i];
}
}
- if (acpi_irq_penalty[irq] >= PIRQ_PENALTY_ISA_ALWAYS) {
+ if (acpi_irq_get_penalty(irq) >= PIRQ_PENALTY_ISA_ALWAYS) {
printk(KERN_ERR PREFIX "No IRQ available for %s [%s]. "
"Try pci=noacpi or acpi=off\n",
acpi_device_name(link->device),
@@ -568,7 +586,6 @@ static int acpi_pci_link_allocate(struct acpi_pci_link *link)
acpi_device_bid(link->device));
return -ENODEV;
} else {
- acpi_irq_penalty[link->irq.active] += PIRQ_PENALTY_PCI_USING;
printk(KERN_WARNING PREFIX "%s [%s] enabled at IRQ %d\n",
acpi_device_name(link->device),
acpi_device_bid(link->device), link->irq.active);
@@ -778,7 +795,7 @@ static void acpi_pci_link_remove(struct acpi_device *device)
}
/*
- * modify acpi_irq_penalty[] from cmdline
+ * modify acpi_isa_irq_penalty[] from cmdline
*/
static int __init acpi_irq_penalty_update(char *str, int used)
{
@@ -787,23 +804,24 @@ static int __init acpi_irq_penalty_update(char *str, int used)
for (i = 0; i < 16; i++) {
int retval;
int irq;
+ int new_penalty;
retval = get_option(&str, &irq);
if (!retval)
break; /* no number found */
- if (irq < 0)
- continue;
-
- if (irq >= ARRAY_SIZE(acpi_irq_penalty))
+		/* see if this is an ISA IRQ */
+ if ((irq < 0) || (irq >= ACPI_MAX_ISA_IRQS))
continue;
if (used)
- acpi_irq_penalty[irq] += PIRQ_PENALTY_ISA_USED;
+ new_penalty = acpi_irq_get_penalty(irq) +
+ PIRQ_PENALTY_ISA_USED;
else
- acpi_irq_penalty[irq] = PIRQ_PENALTY_PCI_AVAILABLE;
+ new_penalty = 0;
+ acpi_isa_irq_penalty[irq] = new_penalty;
if (retval != 2) /* no next number */
break;
}
@@ -819,34 +837,15 @@ static int __init acpi_irq_penalty_update(char *str, int used)
*/
void acpi_penalize_isa_irq(int irq, int active)
{
- if (irq >= 0 && irq < ARRAY_SIZE(acpi_irq_penalty)) {
- if (active)
- acpi_irq_penalty[irq] += PIRQ_PENALTY_ISA_USED;
- else
- acpi_irq_penalty[irq] += PIRQ_PENALTY_PCI_USING;
- }
+ if ((irq >= 0) && (irq < ARRAY_SIZE(acpi_isa_irq_penalty)))
+ acpi_isa_irq_penalty[irq] = acpi_irq_get_penalty(irq) +
+			(active ? PIRQ_PENALTY_ISA_USED : PIRQ_PENALTY_PCI_USING);
}
bool acpi_isa_irq_available(int irq)
{
- return irq >= 0 && (irq >= ARRAY_SIZE(acpi_irq_penalty) ||
- acpi_irq_penalty[irq] < PIRQ_PENALTY_ISA_ALWAYS);
-}
-
-/*
- * Penalize IRQ used by ACPI SCI. If ACPI SCI pin attributes conflict with
- * PCI IRQ attributes, mark ACPI SCI as ISA_ALWAYS so it won't be used for
- * PCI IRQs.
- */
-void acpi_penalize_sci_irq(int irq, int trigger, int polarity)
-{
- if (irq >= 0 && irq < ARRAY_SIZE(acpi_irq_penalty)) {
- if (trigger != ACPI_MADT_TRIGGER_LEVEL ||
- polarity != ACPI_MADT_POLARITY_ACTIVE_LOW)
- acpi_irq_penalty[irq] += PIRQ_PENALTY_ISA_ALWAYS;
- else
- acpi_irq_penalty[irq] += PIRQ_PENALTY_PCI_USING;
- }
+ return irq >= 0 && (irq >= ARRAY_SIZE(acpi_isa_irq_penalty) ||
+ acpi_irq_get_penalty(irq) < PIRQ_PENALTY_ISA_ALWAYS);
}
/*
diff --git a/drivers/acpi/sleep.c b/drivers/acpi/sleep.c
index 2a8b59644297..7a2e4d45b266 100644
--- a/drivers/acpi/sleep.c
+++ b/drivers/acpi/sleep.c
@@ -26,6 +26,11 @@
#include "internal.h"
#include "sleep.h"
+/*
+ * Some HW-full platforms do not have _S5, so they may need
+ * to leverage efi power off for a shutdown.
+ */
+bool acpi_no_s5;
static u8 sleep_states[ACPI_S_STATE_COUNT];
static void acpi_sleep_tts_switch(u32 acpi_state)
@@ -882,6 +887,8 @@ int __init acpi_sleep_init(void)
sleep_states[ACPI_STATE_S5] = 1;
pm_power_off_prepare = acpi_power_off_prepare;
pm_power_off = acpi_power_off;
+ } else {
+ acpi_no_s5 = true;
}
supported[0] = 0;
diff --git a/drivers/acpi/sysfs.c b/drivers/acpi/sysfs.c
index 0243d375c6fd..4b3a9e27f1b6 100644
--- a/drivers/acpi/sysfs.c
+++ b/drivers/acpi/sysfs.c
@@ -555,23 +555,22 @@ static void acpi_global_event_handler(u32 event_type, acpi_handle device,
static int get_status(u32 index, acpi_event_status *status,
acpi_handle *handle)
{
- int result = 0;
+ int result;
if (index >= num_gpes + ACPI_NUM_FIXED_EVENTS)
- goto end;
+ return -EINVAL;
if (index < num_gpes) {
result = acpi_get_gpe_device(index, handle);
if (result) {
ACPI_EXCEPTION((AE_INFO, AE_NOT_FOUND,
"Invalid GPE 0x%x", index));
- goto end;
+ return result;
}
result = acpi_get_gpe_status(*handle, index, status);
} else if (index < (num_gpes + ACPI_NUM_FIXED_EVENTS))
result = acpi_get_event_status(index - num_gpes, status);
-end:
return result;
}
diff --git a/drivers/acpi/tables.c b/drivers/acpi/tables.c
index f49c02442d65..a372f9eaa15d 100644
--- a/drivers/acpi/tables.c
+++ b/drivers/acpi/tables.c
@@ -32,8 +32,14 @@
#include <linux/errno.h>
#include <linux/acpi.h>
#include <linux/bootmem.h>
+#include <linux/earlycpio.h>
+#include <linux/memblock.h>
#include "internal.h"
+#ifdef CONFIG_ACPI_CUSTOM_DSDT
+#include CONFIG_ACPI_CUSTOM_DSDT_FILE
+#endif
+
#define ACPI_MAX_TABLES 128
static char *mps_inti_flags_polarity[] = { "dfl", "high", "res", "low" };
@@ -433,6 +439,314 @@ static void __init check_multiple_madt(void)
return;
}
+static void acpi_table_taint(struct acpi_table_header *table)
+{
+ pr_warn("Override [%4.4s-%8.8s], this is unsafe: tainting kernel\n",
+ table->signature, table->oem_table_id);
+ add_taint(TAINT_OVERRIDDEN_ACPI_TABLE, LOCKDEP_NOW_UNRELIABLE);
+}
+
+#ifdef CONFIG_ACPI_TABLE_UPGRADE
+static u64 acpi_tables_addr;
+static int all_tables_size;
+
+/* Copied from acpica/tbutils.c:acpi_tb_checksum() */
+static u8 __init acpi_table_checksum(u8 *buffer, u32 length)
+{
+ u8 sum = 0;
+ u8 *end = buffer + length;
+
+ while (buffer < end)
+ sum = (u8) (sum + *(buffer++));
+ return sum;
+}
+
+/* All but ACPI_SIG_RSDP and ACPI_SIG_FACS: */
+static const char * const table_sigs[] = {
+ ACPI_SIG_BERT, ACPI_SIG_CPEP, ACPI_SIG_ECDT, ACPI_SIG_EINJ,
+ ACPI_SIG_ERST, ACPI_SIG_HEST, ACPI_SIG_MADT, ACPI_SIG_MSCT,
+ ACPI_SIG_SBST, ACPI_SIG_SLIT, ACPI_SIG_SRAT, ACPI_SIG_ASF,
+ ACPI_SIG_BOOT, ACPI_SIG_DBGP, ACPI_SIG_DMAR, ACPI_SIG_HPET,
+ ACPI_SIG_IBFT, ACPI_SIG_IVRS, ACPI_SIG_MCFG, ACPI_SIG_MCHI,
+ ACPI_SIG_SLIC, ACPI_SIG_SPCR, ACPI_SIG_SPMI, ACPI_SIG_TCPA,
+ ACPI_SIG_UEFI, ACPI_SIG_WAET, ACPI_SIG_WDAT, ACPI_SIG_WDDT,
+ ACPI_SIG_WDRT, ACPI_SIG_DSDT, ACPI_SIG_FADT, ACPI_SIG_PSDT,
+ ACPI_SIG_RSDT, ACPI_SIG_XSDT, ACPI_SIG_SSDT, NULL };
+
+#define ACPI_HEADER_SIZE sizeof(struct acpi_table_header)
+
+#define NR_ACPI_INITRD_TABLES 64
+static struct cpio_data __initdata acpi_initrd_files[NR_ACPI_INITRD_TABLES];
+static DECLARE_BITMAP(acpi_initrd_installed, NR_ACPI_INITRD_TABLES);
+
+#define MAP_CHUNK_SIZE (NR_FIX_BTMAPS << PAGE_SHIFT)
+
+static void __init acpi_table_initrd_init(void *data, size_t size)
+{
+ int sig, no, table_nr = 0, total_offset = 0;
+ long offset = 0;
+ struct acpi_table_header *table;
+ char cpio_path[32] = "kernel/firmware/acpi/";
+ struct cpio_data file;
+
+ if (data == NULL || size == 0)
+ return;
+
+ for (no = 0; no < NR_ACPI_INITRD_TABLES; no++) {
+ file = find_cpio_data(cpio_path, data, size, &offset);
+ if (!file.data)
+ break;
+
+ data += offset;
+ size -= offset;
+
+ if (file.size < sizeof(struct acpi_table_header)) {
+ pr_err("ACPI OVERRIDE: Table smaller than ACPI header [%s%s]\n",
+ cpio_path, file.name);
+ continue;
+ }
+
+ table = file.data;
+
+ for (sig = 0; table_sigs[sig]; sig++)
+ if (!memcmp(table->signature, table_sigs[sig], 4))
+ break;
+
+ if (!table_sigs[sig]) {
+ pr_err("ACPI OVERRIDE: Unknown signature [%s%s]\n",
+ cpio_path, file.name);
+ continue;
+ }
+ if (file.size != table->length) {
+ pr_err("ACPI OVERRIDE: File length does not match table length [%s%s]\n",
+ cpio_path, file.name);
+ continue;
+ }
+ if (acpi_table_checksum(file.data, table->length)) {
+ pr_err("ACPI OVERRIDE: Bad table checksum [%s%s]\n",
+ cpio_path, file.name);
+ continue;
+ }
+
+ pr_info("%4.4s ACPI table found in initrd [%s%s][0x%x]\n",
+ table->signature, cpio_path, file.name, table->length);
+
+ all_tables_size += table->length;
+ acpi_initrd_files[table_nr].data = file.data;
+ acpi_initrd_files[table_nr].size = file.size;
+ table_nr++;
+ }
+ if (table_nr == 0)
+ return;
+
+ acpi_tables_addr =
+ memblock_find_in_range(0, max_low_pfn_mapped << PAGE_SHIFT,
+ all_tables_size, PAGE_SIZE);
+ if (!acpi_tables_addr) {
+ WARN_ON(1);
+ return;
+ }
+ /*
+ * Only calling e820_add_reserve does not work and the
+ * tables are invalid (memory got used) later.
+ * memblock_reserve works as expected and the tables won't get modified.
+ * But it's not enough on X86 because ioremap will
+ * complain later (used by acpi_os_map_memory) that the pages
+ * that should get mapped are not marked "reserved".
+ * Both memblock_reserve and e820_add_region (via arch_reserve_mem_area)
+	 * work fine.
+ */
+ memblock_reserve(acpi_tables_addr, all_tables_size);
+ arch_reserve_mem_area(acpi_tables_addr, all_tables_size);
+
+ /*
+ * early_ioremap only can remap 256k one time. If we map all
+ * tables one time, we will hit the limit. Need to map chunks
+ * one by one during copying the same as that in relocate_initrd().
+ */
+ for (no = 0; no < table_nr; no++) {
+ unsigned char *src_p = acpi_initrd_files[no].data;
+ phys_addr_t size = acpi_initrd_files[no].size;
+ phys_addr_t dest_addr = acpi_tables_addr + total_offset;
+ phys_addr_t slop, clen;
+ char *dest_p;
+
+ total_offset += size;
+
+ while (size) {
+ slop = dest_addr & ~PAGE_MASK;
+ clen = size;
+ if (clen > MAP_CHUNK_SIZE - slop)
+ clen = MAP_CHUNK_SIZE - slop;
+ dest_p = early_ioremap(dest_addr & PAGE_MASK,
+ clen + slop);
+ memcpy(dest_p + slop, src_p, clen);
+ early_iounmap(dest_p, clen + slop);
+ src_p += clen;
+ dest_addr += clen;
+ size -= clen;
+ }
+ }
+}
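
The copy loop above has to respect the limited early_ioremap() window, so it
carves each table into page-aligned chunks. A standalone sketch of the
slop/clen bookkeeping follows; the 256 KiB window, 4 KiB page size, starting
address, and table size are assumed values chosen only to show the arithmetic.

/* Standalone sketch of the chunked-copy bookkeeping (illustrative values). */
#include <stdio.h>

#define PAGE_SIZE	4096UL
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define MAP_CHUNK	(256 * 1024UL)	/* stand-in for NR_FIX_BTMAPS << PAGE_SHIFT */

int main(void)
{
	unsigned long dest_addr = 0x1000000UL + 0x234;	/* not page aligned */
	unsigned long size = 600 * 1024UL;		/* one 600 KiB table */

	while (size) {
		unsigned long slop = dest_addr & ~PAGE_MASK;
		unsigned long clen = size;

		if (clen > MAP_CHUNK - slop)
			clen = MAP_CHUNK - slop;
		printf("map %#lx, copy %lu bytes at offset %lu\n",
		       dest_addr & PAGE_MASK, clen, slop);
		dest_addr += clen;
		size -= clen;
	}
	return 0;
}
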
+
+static acpi_status
+acpi_table_initrd_override(struct acpi_table_header *existing_table,
+ acpi_physical_address *address, u32 *length)
+{
+ int table_offset = 0;
+ int table_index = 0;
+ struct acpi_table_header *table;
+ u32 table_length;
+
+ *length = 0;
+ *address = 0;
+ if (!acpi_tables_addr)
+ return AE_OK;
+
+ while (table_offset + ACPI_HEADER_SIZE <= all_tables_size) {
+ table = acpi_os_map_memory(acpi_tables_addr + table_offset,
+ ACPI_HEADER_SIZE);
+ if (table_offset + table->length > all_tables_size) {
+ acpi_os_unmap_memory(table, ACPI_HEADER_SIZE);
+ WARN_ON(1);
+ return AE_OK;
+ }
+
+ table_length = table->length;
+
+		/* Only override tables whose signature and OEM IDs match */
+ if (memcmp(existing_table->signature, table->signature, 4) ||
+ memcmp(table->oem_id, existing_table->oem_id,
+ ACPI_OEM_ID_SIZE) ||
+ memcmp(table->oem_table_id, existing_table->oem_table_id,
+ ACPI_OEM_TABLE_ID_SIZE)) {
+ acpi_os_unmap_memory(table, ACPI_HEADER_SIZE);
+ goto next_table;
+ }
+ /*
+ * Mark the table to avoid being used in
+ * acpi_table_initrd_scan() and check the revision.
+ */
+ if (test_and_set_bit(table_index, acpi_initrd_installed) ||
+ existing_table->oem_revision >= table->oem_revision) {
+ acpi_os_unmap_memory(table, ACPI_HEADER_SIZE);
+ goto next_table;
+ }
+
+ *length = table_length;
+ *address = acpi_tables_addr + table_offset;
+ pr_info("Table Upgrade: override [%4.4s-%6.6s-%8.8s]\n",
+ table->signature, table->oem_id,
+ table->oem_table_id);
+ acpi_os_unmap_memory(table, ACPI_HEADER_SIZE);
+ break;
+
+next_table:
+ table_offset += table_length;
+ table_index++;
+ }
+ return AE_OK;
+}
+
+static void __init acpi_table_initrd_scan(void)
+{
+ int table_offset = 0;
+ int table_index = 0;
+ u32 table_length;
+ struct acpi_table_header *table;
+
+ if (!acpi_tables_addr)
+ return;
+
+ while (table_offset + ACPI_HEADER_SIZE <= all_tables_size) {
+ table = acpi_os_map_memory(acpi_tables_addr + table_offset,
+ ACPI_HEADER_SIZE);
+ if (table_offset + table->length > all_tables_size) {
+ acpi_os_unmap_memory(table, ACPI_HEADER_SIZE);
+ WARN_ON(1);
+ return;
+ }
+
+ table_length = table->length;
+
+ /* Skip RSDT/XSDT which should only be used for override */
+ if (ACPI_COMPARE_NAME(table->signature, ACPI_SIG_RSDT) ||
+ ACPI_COMPARE_NAME(table->signature, ACPI_SIG_XSDT)) {
+ acpi_os_unmap_memory(table, ACPI_HEADER_SIZE);
+ goto next_table;
+ }
+ /*
+ * Mark the table to avoid being used in
+ * acpi_table_initrd_override(). Though this is not possible
+ * because override is disabled in acpi_install_table().
+ */
+ if (test_and_set_bit(table_index, acpi_initrd_installed)) {
+ acpi_os_unmap_memory(table, ACPI_HEADER_SIZE);
+ goto next_table;
+ }
+
+ pr_info("Table Upgrade: install [%4.4s-%6.6s-%8.8s]\n",
+ table->signature, table->oem_id,
+ table->oem_table_id);
+ acpi_os_unmap_memory(table, ACPI_HEADER_SIZE);
+ acpi_install_table(acpi_tables_addr + table_offset, TRUE);
+next_table:
+ table_offset += table_length;
+ table_index++;
+ }
+}
+#else
+static void __init acpi_table_initrd_init(void *data, size_t size)
+{
+}
+
+static acpi_status
+acpi_table_initrd_override(struct acpi_table_header *existing_table,
+ acpi_physical_address *address,
+ u32 *table_length)
+{
+ *table_length = 0;
+ *address = 0;
+ return AE_OK;
+}
+
+static void __init acpi_table_initrd_scan(void)
+{
+}
+#endif /* CONFIG_ACPI_TABLE_UPGRADE */
+
+acpi_status
+acpi_os_physical_table_override(struct acpi_table_header *existing_table,
+ acpi_physical_address *address,
+ u32 *table_length)
+{
+ return acpi_table_initrd_override(existing_table, address,
+ table_length);
+}
+
+acpi_status
+acpi_os_table_override(struct acpi_table_header *existing_table,
+ struct acpi_table_header **new_table)
+{
+ if (!existing_table || !new_table)
+ return AE_BAD_PARAMETER;
+
+ *new_table = NULL;
+
+#ifdef CONFIG_ACPI_CUSTOM_DSDT
+ if (strncmp(existing_table->signature, "DSDT", 4) == 0)
+ *new_table = (struct acpi_table_header *)AmlCode;
+#endif
+ if (*new_table != NULL)
+ acpi_table_taint(existing_table);
+ return AE_OK;
+}
+
+void __init early_acpi_table_init(void *data, size_t size)
+{
+ acpi_table_initrd_init(data, size);
+}
+
/*
* acpi_table_init()
*
@@ -457,7 +771,7 @@ int __init acpi_table_init(void)
status = acpi_initialize_tables(initial_tables, ACPI_MAX_TABLES, 0);
if (ACPI_FAILURE(status))
return -EINVAL;
- acpi_initrd_initialize_tables();
+ acpi_table_initrd_scan();
check_multiple_madt();
return 0;
diff --git a/drivers/acpi/utils.c b/drivers/acpi/utils.c
index 050673f0c0b3..22c09952e177 100644
--- a/drivers/acpi/utils.c
+++ b/drivers/acpi/utils.c
@@ -625,7 +625,7 @@ acpi_status acpi_evaluate_lck(acpi_handle handle, int lock)
* some old BIOSes do expect a buffer or an integer etc.
*/
union acpi_object *
-acpi_evaluate_dsm(acpi_handle handle, const u8 *uuid, int rev, int func,
+acpi_evaluate_dsm(acpi_handle handle, const u8 *uuid, u64 rev, u64 func,
union acpi_object *argv4)
{
acpi_status ret;
@@ -674,7 +674,7 @@ EXPORT_SYMBOL(acpi_evaluate_dsm);
* functions. Currently only support 64 functions at maximum, should be
* enough for now.
*/
-bool acpi_check_dsm(acpi_handle handle, const u8 *uuid, int rev, u64 funcs)
+bool acpi_check_dsm(acpi_handle handle, const u8 *uuid, u64 rev, u64 funcs)
{
int i;
u64 mask = 0;
@@ -707,7 +707,7 @@ bool acpi_check_dsm(acpi_handle handle, const u8 *uuid, int rev, u64 funcs)
EXPORT_SYMBOL(acpi_check_dsm);
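
Since the revision and function arguments are now u64, callers pass them
directly along with a function bitmask. The sketch below is a hedged usage
example: the UUID bytes and function numbers are invented, and only the
acpi_check_dsm()/acpi_evaluate_dsm() signatures come from this hunk.

#include <linux/acpi.h>
#include <linux/errno.h>

static const u8 example_dsm_uuid[16] = {
	/* hypothetical device-specific UUID, byte order as expected by _DSM */
	0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0,
	0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0,
};

static int example_query_dsm(acpi_handle handle)
{
	union acpi_object *obj;

	/* Does revision 1 of this _DSM implement function 2? */
	if (!acpi_check_dsm(handle, example_dsm_uuid, 1, 1ULL << 2))
		return -ENODEV;

	/* Evaluate function 2 with no package argument. */
	obj = acpi_evaluate_dsm(handle, example_dsm_uuid, 1, 2, NULL);
	if (!obj)
		return -EIO;

	/* Interpret obj->type / obj->integer.value as needed, then free. */
	ACPI_FREE(obj);
	return 0;
}
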
/**
- * acpi_dev_present - Detect presence of a given ACPI device in the system.
+ * acpi_dev_found - Detect presence of a given ACPI device in the namespace.
* @hid: Hardware ID of the device.
*
* Return %true if the device was present at the moment of invocation.
@@ -719,7 +719,7 @@ EXPORT_SYMBOL(acpi_check_dsm);
* instead). Calling from module_init() is fine (which is synonymous
* with device_initcall()).
*/
-bool acpi_dev_present(const char *hid)
+bool acpi_dev_found(const char *hid)
{
struct acpi_device_bus_id *acpi_device_bus_id;
bool found = false;
@@ -734,7 +734,7 @@ bool acpi_dev_present(const char *hid)
return found;
}
-EXPORT_SYMBOL(acpi_dev_present);
+EXPORT_SYMBOL(acpi_dev_found);
/*
* acpi_backlight= handling, this is done here rather then in video_detect.c
diff --git a/drivers/acpi/video_detect.c b/drivers/acpi/video_detect.c
index 1316ddd92fac..3d1327615f72 100644
--- a/drivers/acpi/video_detect.c
+++ b/drivers/acpi/video_detect.c
@@ -358,7 +358,7 @@ enum acpi_backlight_type acpi_video_get_backlight_type(void)
if (!(video_caps & ACPI_VIDEO_BACKLIGHT))
return acpi_backlight_vendor;
- if (acpi_osi_is_win8() && backlight_device_registered(BACKLIGHT_RAW))
+ if (acpi_osi_is_win8() && backlight_device_get_by_type(BACKLIGHT_RAW))
return acpi_backlight_native;
return acpi_backlight_video;
diff --git a/drivers/amba/bus.c b/drivers/amba/bus.c
index f0099360039e..a5b5c87e2114 100644
--- a/drivers/amba/bus.c
+++ b/drivers/amba/bus.c
@@ -336,16 +336,7 @@ static void amba_device_release(struct device *dev)
kfree(d);
}
-/**
- * amba_device_add - add a previously allocated AMBA device structure
- * @dev: AMBA device allocated by amba_device_alloc
- * @parent: resource parent for this devices resources
- *
- * Claim the resource, and read the device cell ID if not already
- * initialized. Register the AMBA device with the Linux device
- * manager.
- */
-int amba_device_add(struct amba_device *dev, struct resource *parent)
+static int amba_device_try_add(struct amba_device *dev, struct resource *parent)
{
u32 size;
void __iomem *tmp;
@@ -373,6 +364,12 @@ int amba_device_add(struct amba_device *dev, struct resource *parent)
goto err_release;
}
+ ret = dev_pm_domain_attach(&dev->dev, true);
+ if (ret == -EPROBE_DEFER) {
+ iounmap(tmp);
+ goto err_release;
+ }
+
ret = amba_get_enable_pclk(dev);
if (ret == 0) {
u32 pid, cid;
@@ -398,6 +395,7 @@ int amba_device_add(struct amba_device *dev, struct resource *parent)
}
iounmap(tmp);
+ dev_pm_domain_detach(&dev->dev, true);
if (ret)
goto err_release;
@@ -421,6 +419,88 @@ int amba_device_add(struct amba_device *dev, struct resource *parent)
err_out:
return ret;
}
+
+/*
+ * Registering an AMBA device requires reading its pid and cid registers.
+ * To do this, the device must be turned on (if it is part of a power domain)
+ * and have its clocks enabled. However, in some cases those resources might
+ * not be available yet. Returning EPROBE_DEFER is not a solution here,
+ * because callers do not handle this special error code. Instead, such
+ * devices are added to a deferred list and their registration is retried
+ * from a periodic worker until all resources are available and registration
+ * succeeds.
+ */
+struct deferred_device {
+ struct amba_device *dev;
+ struct resource *parent;
+ struct list_head node;
+};
+
+static LIST_HEAD(deferred_devices);
+static DEFINE_MUTEX(deferred_devices_lock);
+
+static void amba_deferred_retry_func(struct work_struct *dummy);
+static DECLARE_DELAYED_WORK(deferred_retry_work, amba_deferred_retry_func);
+
+#define DEFERRED_DEVICE_TIMEOUT (msecs_to_jiffies(5 * 1000))
+
+static void amba_deferred_retry_func(struct work_struct *dummy)
+{
+ struct deferred_device *ddev, *tmp;
+
+ mutex_lock(&deferred_devices_lock);
+
+ list_for_each_entry_safe(ddev, tmp, &deferred_devices, node) {
+ int ret = amba_device_try_add(ddev->dev, ddev->parent);
+
+ if (ret == -EPROBE_DEFER)
+ continue;
+
+ list_del_init(&ddev->node);
+ kfree(ddev);
+ }
+
+ if (!list_empty(&deferred_devices))
+ schedule_delayed_work(&deferred_retry_work,
+ DEFERRED_DEVICE_TIMEOUT);
+
+ mutex_unlock(&deferred_devices_lock);
+}
+
+/**
+ * amba_device_add - add a previously allocated AMBA device structure
+ * @dev: AMBA device allocated by amba_device_alloc
+ * @parent: resource parent for this devices resources
+ *
+ * Claim the resource, and read the device cell ID if not already
+ * initialized. Register the AMBA device with the Linux device
+ * manager.
+ */
+int amba_device_add(struct amba_device *dev, struct resource *parent)
+{
+ int ret = amba_device_try_add(dev, parent);
+
+ if (ret == -EPROBE_DEFER) {
+ struct deferred_device *ddev;
+
+ ddev = kmalloc(sizeof(*ddev), GFP_KERNEL);
+ if (!ddev)
+ return -ENOMEM;
+
+ ddev->dev = dev;
+ ddev->parent = parent;
+ ret = 0;
+
+ mutex_lock(&deferred_devices_lock);
+
+ if (list_empty(&deferred_devices))
+ schedule_delayed_work(&deferred_retry_work,
+ DEFERRED_DEVICE_TIMEOUT);
+ list_add_tail(&ddev->node, &deferred_devices);
+
+ mutex_unlock(&deferred_devices_lock);
+ }
+ return ret;
+}
EXPORT_SYMBOL_GPL(amba_device_add);
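
For context, a hedged sketch of how bus glue might hand a device to the
deferred-add path above: amba_device_alloc() and amba_device_put() are
assumed from elsewhere in the kernel (they are not part of this diff), and
the device name, base address, and IRQ number are invented.

#include <linux/amba/bus.h>
#include <linux/errno.h>
#include <linux/ioport.h>

static int example_register_amba_uart(void)
{
	struct amba_device *adev;
	int ret;

	/* amba_device_alloc(name, base, size) is assumed, not shown here. */
	adev = amba_device_alloc("example-uart0", 0x10009000, 0x1000);
	if (!adev)
		return -ENOMEM;

	adev->irq[0] = 37;	/* invented IRQ number */

	/*
	 * If the clocks or power domain are not ready yet, the device now
	 * lands on the deferred list instead of failing outright.
	 */
	ret = amba_device_add(adev, &iomem_resource);
	if (ret)
		amba_device_put(adev);

	return ret;
}
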
static struct amba_device *
diff --git a/drivers/ata/Kconfig b/drivers/ata/Kconfig
index cfa936a32513..e2dc4c045146 100644
--- a/drivers/ata/Kconfig
+++ b/drivers/ata/Kconfig
@@ -313,14 +313,23 @@ config ATA_PIIX
config SATA_DWC
tristate "DesignWare Cores SATA support"
- depends on 460EX
- select DW_DMAC
+ depends on DMADEVICES
+ select GENERIC_PHY
help
This option enables support for the on-chip SATA controller of the
AppliedMicro processor 460EX.
If unsure, say N.
+config SATA_DWC_OLD_DMA
+ bool "Support old device trees"
+ depends on SATA_DWC
+ select DW_DMAC_CORE
+ default y if 460EX
+ help
+ This option enables support for old device trees without the
+ "dmas" property.
+
config SATA_DWC_DEBUG
bool "Debugging driver version"
depends on SATA_DWC
diff --git a/drivers/ata/libahci.c b/drivers/ata/libahci.c
index a5d7c1c2a05e..71b07198e207 100644
--- a/drivers/ata/libahci.c
+++ b/drivers/ata/libahci.c
@@ -2550,8 +2550,8 @@ int ahci_host_activate(struct ata_host *host, struct scsi_host_template *sht)
if (hpriv->flags & (AHCI_HFLAG_MULTI_MSI | AHCI_HFLAG_MULTI_MSIX)) {
if (hpriv->irq_handler)
- dev_warn(host->dev, "both AHCI_HFLAG_MULTI_MSI flag set \
- and custom irq handler implemented\n");
+ dev_warn(host->dev,
+ "both AHCI_HFLAG_MULTI_MSI flag set and custom irq handler implemented\n");
rc = ahci_host_activate_multi_irqs(host, sht);
} else {
diff --git a/drivers/ata/libata-core.c b/drivers/ata/libata-core.c
index 55e257c268dd..6be7770f68e9 100644
--- a/drivers/ata/libata-core.c
+++ b/drivers/ata/libata-core.c
@@ -66,6 +66,7 @@
#include <scsi/scsi_host.h>
#include <linux/libata.h>
#include <asm/byteorder.h>
+#include <asm/unaligned.h>
#include <linux/cdrom.h>
#include <linux/ratelimit.h>
#include <linux/pm_runtime.h>
@@ -695,7 +696,7 @@ static int ata_rwcmd_protocol(struct ata_taskfile *tf, struct ata_device *dev)
* RETURNS:
* Block address read from @tf.
*/
-u64 ata_tf_read_block(struct ata_taskfile *tf, struct ata_device *dev)
+u64 ata_tf_read_block(const struct ata_taskfile *tf, struct ata_device *dev)
{
u64 block = 0;
@@ -720,7 +721,7 @@ u64 ata_tf_read_block(struct ata_taskfile *tf, struct ata_device *dev)
if (!sect) {
ata_dev_warn(dev,
"device reported invalid CHS sector 0\n");
- sect = 1; /* oh well */
+ return U64_MAX;
}
block = (cyl * dev->heads + head) * dev->sectors + sect - 1;
@@ -884,7 +885,7 @@ unsigned long ata_pack_xfermask(unsigned long pio_mask,
* @udma_mask: resulting udma_mask
*
* Unpack @xfer_mask into @pio_mask, @mwdma_mask and @udma_mask.
- * Any NULL distination masks will be ignored.
+ * Any NULL destination masks will be ignored.
*/
void ata_unpack_xfermask(unsigned long xfer_mask, unsigned long *pio_mask,
unsigned long *mwdma_mask, unsigned long *udma_mask)
@@ -2079,6 +2080,81 @@ static inline u8 ata_dev_knobble(struct ata_device *dev)
return ((ap->cbl == ATA_CBL_SATA) && (!ata_id_is_sata(dev->id)));
}
+static void ata_dev_config_ncq_send_recv(struct ata_device *dev)
+{
+ struct ata_port *ap = dev->link->ap;
+ unsigned int err_mask;
+ int log_index = ATA_LOG_NCQ_SEND_RECV * 2;
+ u16 log_pages;
+
+ err_mask = ata_read_log_page(dev, ATA_LOG_DIRECTORY,
+ 0, ap->sector_buf, 1);
+ if (err_mask) {
+ ata_dev_dbg(dev,
+ "failed to get Log Directory Emask 0x%x\n",
+ err_mask);
+ return;
+ }
+ log_pages = get_unaligned_le16(&ap->sector_buf[log_index]);
+ if (!log_pages) {
+ ata_dev_warn(dev,
+ "NCQ Send/Recv Log not supported\n");
+ return;
+ }
+ err_mask = ata_read_log_page(dev, ATA_LOG_NCQ_SEND_RECV,
+ 0, ap->sector_buf, 1);
+ if (err_mask) {
+ ata_dev_dbg(dev,
+ "failed to get NCQ Send/Recv Log Emask 0x%x\n",
+ err_mask);
+ } else {
+ u8 *cmds = dev->ncq_send_recv_cmds;
+
+ dev->flags |= ATA_DFLAG_NCQ_SEND_RECV;
+ memcpy(cmds, ap->sector_buf, ATA_LOG_NCQ_SEND_RECV_SIZE);
+
+ if (dev->horkage & ATA_HORKAGE_NO_NCQ_TRIM) {
+ ata_dev_dbg(dev, "disabling queued TRIM support\n");
+ cmds[ATA_LOG_NCQ_SEND_RECV_DSM_OFFSET] &=
+ ~ATA_LOG_NCQ_SEND_RECV_DSM_TRIM;
+ }
+ }
+}
+
+static void ata_dev_config_ncq_non_data(struct ata_device *dev)
+{
+ struct ata_port *ap = dev->link->ap;
+ unsigned int err_mask;
+ int log_index = ATA_LOG_NCQ_NON_DATA * 2;
+ u16 log_pages;
+
+ err_mask = ata_read_log_page(dev, ATA_LOG_DIRECTORY,
+ 0, ap->sector_buf, 1);
+ if (err_mask) {
+ ata_dev_dbg(dev,
+ "failed to get Log Directory Emask 0x%x\n",
+ err_mask);
+ return;
+ }
+ log_pages = get_unaligned_le16(&ap->sector_buf[log_index]);
+ if (!log_pages) {
+ ata_dev_warn(dev,
+			     "NCQ Non-Data Log not supported\n");
+ return;
+ }
+ err_mask = ata_read_log_page(dev, ATA_LOG_NCQ_NON_DATA,
+ 0, ap->sector_buf, 1);
+ if (err_mask) {
+ ata_dev_dbg(dev,
+ "failed to get NCQ Non-Data Log Emask 0x%x\n",
+ err_mask);
+ } else {
+ u8 *cmds = dev->ncq_non_data_cmds;
+
+ memcpy(cmds, ap->sector_buf, ATA_LOG_NCQ_NON_DATA_SIZE);
+ }
+}
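
Both helpers above first consult the general purpose log directory (log page
0), where the entry for page N is a 16-bit little-endian page count stored at
byte offset 2 * N. A standalone sketch of that indexing follows; the page
numbers and buffer contents are illustrative assumptions.

#include <stdio.h>

static unsigned int log_dir_pages(const unsigned char *dir, unsigned int page)
{
	unsigned int idx = page * 2;

	return dir[idx] | (dir[idx + 1] << 8);	/* open-coded get_unaligned_le16() */
}

int main(void)
{
	unsigned char dir[512] = { 0 };

	dir[0x13 * 2] = 1;	/* pretend the NCQ Send/Recv log (page 0x13) has 1 page */

	printf("NCQ Send/Recv pages: %u\n", log_dir_pages(dir, 0x13));
	printf("NCQ Non-Data pages:  %u\n", log_dir_pages(dir, 0x12));
	return 0;
}
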
+
static int ata_dev_config_ncq(struct ata_device *dev,
char *desc, size_t desc_sz)
{
@@ -2123,29 +2199,125 @@ static int ata_dev_config_ncq(struct ata_device *dev,
snprintf(desc, desc_sz, "NCQ (depth %d/%d)%s", hdepth,
ddepth, aa_desc);
- if ((ap->flags & ATA_FLAG_FPDMA_AUX) &&
- ata_id_has_ncq_send_and_recv(dev->id)) {
- err_mask = ata_read_log_page(dev, ATA_LOG_NCQ_SEND_RECV,
- 0, ap->sector_buf, 1);
- if (err_mask) {
- ata_dev_dbg(dev,
- "failed to get NCQ Send/Recv Log Emask 0x%x\n",
- err_mask);
- } else {
- u8 *cmds = dev->ncq_send_recv_cmds;
+ if ((ap->flags & ATA_FLAG_FPDMA_AUX)) {
+ if (ata_id_has_ncq_send_and_recv(dev->id))
+ ata_dev_config_ncq_send_recv(dev);
+ if (ata_id_has_ncq_non_data(dev->id))
+ ata_dev_config_ncq_non_data(dev);
+ }
- dev->flags |= ATA_DFLAG_NCQ_SEND_RECV;
- memcpy(cmds, ap->sector_buf, ATA_LOG_NCQ_SEND_RECV_SIZE);
+ return 0;
+}
- if (dev->horkage & ATA_HORKAGE_NO_NCQ_TRIM) {
- ata_dev_dbg(dev, "disabling queued TRIM support\n");
- cmds[ATA_LOG_NCQ_SEND_RECV_DSM_OFFSET] &=
- ~ATA_LOG_NCQ_SEND_RECV_DSM_TRIM;
- }
+static void ata_dev_config_sense_reporting(struct ata_device *dev)
+{
+ unsigned int err_mask;
+
+ if (!ata_id_has_sense_reporting(dev->id))
+ return;
+
+ if (ata_id_sense_reporting_enabled(dev->id))
+ return;
+
+ err_mask = ata_dev_set_feature(dev, SETFEATURE_SENSE_DATA, 0x1);
+ if (err_mask) {
+ ata_dev_dbg(dev,
+ "failed to enable Sense Data Reporting, Emask 0x%x\n",
+ err_mask);
+ }
+}
+
+static void ata_dev_config_zac(struct ata_device *dev)
+{
+ struct ata_port *ap = dev->link->ap;
+ unsigned int err_mask;
+ u8 *identify_buf = ap->sector_buf;
+ int log_index = ATA_LOG_SATA_ID_DEV_DATA * 2, i, found = 0;
+ u16 log_pages;
+
+ dev->zac_zones_optimal_open = U32_MAX;
+ dev->zac_zones_optimal_nonseq = U32_MAX;
+ dev->zac_zones_max_open = U32_MAX;
+
+ /*
+ * Always set the 'ZAC' flag for Host-managed devices.
+ */
+ if (dev->class == ATA_DEV_ZAC)
+ dev->flags |= ATA_DFLAG_ZAC;
+ else if (ata_id_zoned_cap(dev->id) == 0x01)
+ /*
+ * Check for host-aware devices.
+ */
+ dev->flags |= ATA_DFLAG_ZAC;
+
+ if (!(dev->flags & ATA_DFLAG_ZAC))
+ return;
+
+ /*
+ * Read Log Directory to figure out if IDENTIFY DEVICE log
+ * is supported.
+ */
+ err_mask = ata_read_log_page(dev, ATA_LOG_DIRECTORY,
+ 0, ap->sector_buf, 1);
+ if (err_mask) {
+ ata_dev_info(dev,
+ "failed to get Log Directory Emask 0x%x\n",
+ err_mask);
+ return;
+ }
+ log_pages = get_unaligned_le16(&ap->sector_buf[log_index]);
+ if (log_pages == 0) {
+ ata_dev_warn(dev,
+ "ATA Identify Device Log not supported\n");
+ return;
+ }
+ /*
+ * Read IDENTIFY DEVICE data log, page 0, to figure out
+ * if page 9 is supported.
+ */
+ err_mask = ata_read_log_page(dev, ATA_LOG_SATA_ID_DEV_DATA, 0,
+ identify_buf, 1);
+ if (err_mask) {
+ ata_dev_info(dev,
+ "failed to get Device Identify Log Emask 0x%x\n",
+ err_mask);
+ return;
+ }
+ log_pages = identify_buf[8];
+ for (i = 0; i < log_pages; i++) {
+ if (identify_buf[9 + i] == ATA_LOG_ZONED_INFORMATION) {
+ found++;
+ break;
}
}
+ if (!found) {
+ ata_dev_warn(dev,
+ "ATA Zoned Information Log not supported\n");
+ return;
+ }
- return 0;
+ /*
+ * Read IDENTIFY DEVICE data log, page 9 (Zoned-device information)
+ */
+ err_mask = ata_read_log_page(dev, ATA_LOG_SATA_ID_DEV_DATA,
+ ATA_LOG_ZONED_INFORMATION,
+ identify_buf, 1);
+ if (!err_mask) {
+ u64 zoned_cap, opt_open, opt_nonseq, max_open;
+
+ zoned_cap = get_unaligned_le64(&identify_buf[8]);
+ if ((zoned_cap >> 63))
+ dev->zac_zoned_cap = (zoned_cap & 1);
+ opt_open = get_unaligned_le64(&identify_buf[24]);
+ if ((opt_open >> 63))
+ dev->zac_zones_optimal_open = (u32)opt_open;
+ opt_nonseq = get_unaligned_le64(&identify_buf[32]);
+ if ((opt_nonseq >> 63))
+ dev->zac_zones_optimal_nonseq = (u32)opt_nonseq;
+ max_open = get_unaligned_le64(&identify_buf[40]);
+ if ((max_open >> 63))
+ dev->zac_zones_max_open = (u32)max_open;
+ }
}
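
The Zoned Device Information page parsed above reports each qword with bit 63
acting as a "field valid" flag and the value in the low bits. A standalone
sketch of that decode follows; the example values and the U32_MAX-style
fallback are illustrative assumptions.

#include <stdio.h>
#include <stdint.h>

static uint32_t decode_zac_field(uint64_t qword, uint32_t fallback)
{
	/* Bit 63 set means the field is reported; low bits hold the value. */
	return (qword >> 63) ? (uint32_t)qword : fallback;
}

int main(void)
{
	uint64_t opt_open = (1ULL << 63) | 128;	/* reported: 128 open zones */
	uint64_t max_open = 0;			/* not reported */

	printf("optimal open zones: %u\n",
	       (unsigned)decode_zac_field(opt_open, UINT32_MAX));
	printf("max open zones:     %u\n",
	       (unsigned)decode_zac_field(max_open, UINT32_MAX));
	return 0;
}
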
/**
@@ -2370,7 +2542,8 @@ int ata_dev_configure(struct ata_device *dev)
dev->devslp_timing[i] = sata_setting[j];
}
}
-
+ ata_dev_config_sense_reporting(dev);
+ ata_dev_config_zac(dev);
dev->cdb_len = 16;
}
@@ -3399,7 +3572,7 @@ int ata_do_set_mode(struct ata_link *link, struct ata_device **r_failed_dev)
* EH context.
*
* RETURNS:
- * 0 if @linke is ready before @deadline; otherwise, -errno.
+ * 0 if @link is ready before @deadline; otherwise, -errno.
*/
int ata_wait_ready(struct ata_link *link, unsigned long deadline,
int (*check_ready)(struct ata_link *link))
@@ -3480,7 +3653,7 @@ int ata_wait_ready(struct ata_link *link, unsigned long deadline,
* EH context.
*
* RETURNS:
- * 0 if @linke is ready before @deadline; otherwise, -errno.
+ * 0 if @link is ready before @deadline; otherwise, -errno.
*/
int ata_wait_after_reset(struct ata_link *link, unsigned long deadline,
int (*check_ready)(struct ata_link *link))
@@ -3493,7 +3666,7 @@ int ata_wait_after_reset(struct ata_link *link, unsigned long deadline,
/**
* sata_link_debounce - debounce SATA phy status
* @link: ATA link to debounce SATA phy status for
- * @params: timing parameters { interval, duratinon, timeout } in msec
+ * @params: timing parameters { interval, duration, timeout } in msec
* @deadline: deadline jiffies for the operation
*
* Make sure SStatus of @link reaches stable state, determined by
@@ -3563,7 +3736,7 @@ int sata_link_debounce(struct ata_link *link, const unsigned long *params,
/**
* sata_link_resume - resume SATA link
* @link: ATA link to resume SATA
- * @params: timing parameters { interval, duratinon, timeout } in msec
+ * @params: timing parameters { interval, duration, timeout } in msec
* @deadline: deadline jiffies for the operation
*
* Resume SATA phy @link and debounce it.
@@ -3746,7 +3919,7 @@ int ata_std_prereset(struct ata_link *link, unsigned long deadline)
/**
* sata_link_hardreset - reset link via SATA phy reset
* @link: link to reset
- * @timing: timing parameters { interval, duratinon, timeout } in msec
+ * @timing: timing parameters { interval, duration, timeout } in msec
* @deadline: deadline jiffies for the operation
* @online: optional out parameter indicating link onlineness
* @check_ready: optional callback to check link readiness
@@ -4528,6 +4701,7 @@ unsigned int ata_dev_set_feature(struct ata_device *dev, u8 enable, u8 feature)
{
struct ata_taskfile tf;
unsigned int err_mask;
+ unsigned long timeout = 0;
/* set up set-features taskfile */
DPRINTK("set features - SATA features\n");
@@ -4539,7 +4713,10 @@ unsigned int ata_dev_set_feature(struct ata_device *dev, u8 enable, u8 feature)
tf.protocol = ATA_PROT_NODATA;
tf.nsect = feature;
- err_mask = ata_exec_internal(dev, &tf, NULL, DMA_NONE, NULL, 0, 0);
+ if (enable == SETFEATURES_SPINUP)
+ timeout = ata_probe_timeout ?
+ ata_probe_timeout * 1000 : SETFEATURES_SPINUP_TIMEOUT;
+ err_mask = ata_exec_internal(dev, &tf, NULL, DMA_NONE, NULL, 0, timeout);
DPRINTK("EXIT, err_mask=%x\n", err_mask);
return err_mask;
@@ -6208,7 +6385,7 @@ int ata_host_register(struct ata_host *host, struct scsi_host_template *sht)
*
* After allocating an ATA host and initializing it, most libata
* LLDs perform three steps to activate the host - start host,
- * request IRQ and register it. This helper takes necessasry
+ * request IRQ and register it. This helper takes necessary
* arguments and performs the three steps in one go.
*
* An invalid IRQ skips the IRQ registration and expects the host to
@@ -6261,7 +6438,7 @@ int ata_host_activate(struct ata_host *host, int irq,
}
/**
- * ata_port_detach - Detach ATA port in prepration of device removal
+ * ata_port_detach - Detach ATA port in preparation of device removal
* @ap: ATA port to be detached
*
* Detach all ATA devices and the associated SCSI devices of @ap;
diff --git a/drivers/ata/libata-eh.c b/drivers/ata/libata-eh.c
index 961acc788f44..61dc7a99e89a 100644
--- a/drivers/ata/libata-eh.c
+++ b/drivers/ata/libata-eh.c
@@ -1600,6 +1600,8 @@ static int ata_eh_read_log_10h(struct ata_device *dev,
tf->hob_lbah = buf[10];
tf->nsect = buf[12];
tf->hob_nsect = buf[13];
+ if (ata_id_has_ncq_autosense(dev->id))
+ tf->auxiliary = buf[14] << 16 | buf[15] << 8 | buf[16];
return 0;
}
@@ -1636,6 +1638,56 @@ unsigned int atapi_eh_tur(struct ata_device *dev, u8 *r_sense_key)
}
/**
+ * ata_eh_request_sense - perform REQUEST_SENSE_DATA_EXT
+ * @dev: device to perform REQUEST_SENSE_DATA_EXT to
+ * @cmd: scsi command for which the sense code should be set
+ *
+ * Perform REQUEST_SENSE_DATA_EXT after the device reported CHECK
+ * CONDITION status. This function is an EH helper.
+ *
+ * LOCKING:
+ * Kernel thread context (may sleep).
+ */
+static void ata_eh_request_sense(struct ata_queued_cmd *qc,
+ struct scsi_cmnd *cmd)
+{
+ struct ata_device *dev = qc->dev;
+ struct ata_taskfile tf;
+ unsigned int err_mask;
+
+ if (qc->ap->pflags & ATA_PFLAG_FROZEN) {
+ ata_dev_warn(dev, "sense data available but port frozen\n");
+ return;
+ }
+
+ if (!cmd || qc->flags & ATA_QCFLAG_SENSE_VALID)
+ return;
+
+ if (!ata_id_sense_reporting_enabled(dev->id)) {
+ ata_dev_warn(qc->dev, "sense data reporting disabled\n");
+ return;
+ }
+
+ DPRINTK("ATA request sense\n");
+
+ ata_tf_init(dev, &tf);
+ tf.flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE;
+ tf.flags |= ATA_TFLAG_LBA | ATA_TFLAG_LBA48;
+ tf.command = ATA_CMD_REQ_SENSE_DATA;
+ tf.protocol = ATA_PROT_NODATA;
+
+ err_mask = ata_exec_internal(dev, &tf, NULL, DMA_NONE, NULL, 0, 0);
+ /* Ignore err_mask; ATA_ERR might be set */
+ if (tf.command & ATA_SENSE) {
+ ata_scsi_set_sense(dev, cmd, tf.lbah, tf.lbam, tf.lbal);
+ qc->flags |= ATA_QCFLAG_SENSE_VALID;
+ } else {
+ ata_dev_warn(dev, "request sense failed stat %02x emask %x\n",
+ tf.command, err_mask);
+ }
+}
+
+/**
* atapi_eh_request_sense - perform ATAPI REQUEST_SENSE
* @dev: device to perform REQUEST_SENSE to
* @sense_buf: result sense data buffer (SCSI_SENSE_BUFFERSIZE bytes long)
@@ -1797,6 +1849,18 @@ void ata_eh_analyze_ncq_error(struct ata_link *link)
memcpy(&qc->result_tf, &tf, sizeof(tf));
qc->result_tf.flags = ATA_TFLAG_ISADDR | ATA_TFLAG_LBA | ATA_TFLAG_LBA48;
qc->err_mask |= AC_ERR_DEV | AC_ERR_NCQ;
+ if ((qc->result_tf.command & ATA_SENSE) || qc->result_tf.auxiliary) {
+ char sense_key, asc, ascq;
+
+ sense_key = (qc->result_tf.auxiliary >> 16) & 0xff;
+ asc = (qc->result_tf.auxiliary >> 8) & 0xff;
+ ascq = qc->result_tf.auxiliary & 0xff;
+ ata_scsi_set_sense(dev, qc->scsicmd, sense_key, asc, ascq);
+ ata_scsi_set_sense_information(dev, qc->scsicmd,
+ &qc->result_tf);
+ qc->flags |= ATA_QCFLAG_SENSE_VALID;
+ }
+
ehc->i.err_mask &= ~AC_ERR_DEV;
}
@@ -1826,14 +1890,23 @@ static unsigned int ata_eh_analyze_tf(struct ata_queued_cmd *qc,
return ATA_EH_RESET;
}
- if (stat & (ATA_ERR | ATA_DF))
+ if (stat & (ATA_ERR | ATA_DF)) {
qc->err_mask |= AC_ERR_DEV;
- else
+ /*
+ * Sense data reporting does not work if the
+ * device fault bit is set.
+ */
+ if (stat & ATA_DF)
+ stat &= ~ATA_SENSE;
+ } else {
return 0;
+ }
switch (qc->dev->class) {
case ATA_DEV_ATA:
case ATA_DEV_ZAC:
+ if (stat & ATA_SENSE)
+ ata_eh_request_sense(qc, qc->scsicmd);
if (err & ATA_ICRC)
qc->err_mask |= AC_ERR_ATA_BUS;
if (err & (ATA_UNC | ATA_AMNF))
@@ -1847,20 +1920,31 @@ static unsigned int ata_eh_analyze_tf(struct ata_queued_cmd *qc,
tmp = atapi_eh_request_sense(qc->dev,
qc->scsicmd->sense_buffer,
qc->result_tf.feature >> 4);
- if (!tmp) {
- /* ATA_QCFLAG_SENSE_VALID is used to
- * tell atapi_qc_complete() that sense
- * data is already valid.
- *
- * TODO: interpret sense data and set
- * appropriate err_mask.
- */
+ if (!tmp)
qc->flags |= ATA_QCFLAG_SENSE_VALID;
- } else
+ else
qc->err_mask |= tmp;
}
}
+ if (qc->flags & ATA_QCFLAG_SENSE_VALID) {
+ int ret = scsi_check_sense(qc->scsicmd);
+ /*
+		 * SUCCESS here means that the sense code could be
+		 * evaluated and should be passed to the upper layers
+		 * for correct evaluation.
+		 * FAILED means the sense code could not be interpreted
+		 * and the device would need to be reset.
+		 * NEEDS_RETRY and ADD_TO_MLQUEUE mean that the
+		 * command would need to be retried.
+ */
+ if (ret == NEEDS_RETRY || ret == ADD_TO_MLQUEUE) {
+ qc->flags |= ATA_QCFLAG_RETRY;
+ qc->err_mask |= AC_ERR_OTHER;
+ } else if (ret != SUCCESS) {
+ qc->err_mask |= AC_ERR_HSM;
+ }
+ }
if (qc->err_mask & (AC_ERR_HSM | AC_ERR_TIMEOUT | AC_ERR_ATA_BUS))
action |= ATA_EH_RESET;
@@ -2398,6 +2482,8 @@ const char *ata_get_cmd_descript(u8 command)
{ ATA_CMD_CFA_WRITE_MULT_NE, "CFA WRITE MULTIPLE WITHOUT ERASE" },
{ ATA_CMD_REQ_SENSE_DATA, "REQUEST SENSE DATA EXT" },
{ ATA_CMD_SANITIZE_DEVICE, "SANITIZE DEVICE" },
+ { ATA_CMD_ZAC_MGMT_IN, "ZAC MANAGEMENT IN" },
+ { ATA_CMD_ZAC_MGMT_OUT, "ZAC MANAGEMENT OUT" },
{ ATA_CMD_READ_LONG, "READ LONG (with retries)" },
{ ATA_CMD_READ_LONG_ONCE, "READ LONG (without retries)" },
{ ATA_CMD_WRITE_LONG, "WRITE LONG (with retries)" },
@@ -2569,14 +2655,15 @@ static void ata_eh_link_report(struct ata_link *link)
#ifdef CONFIG_ATA_VERBOSE_ERROR
if (res->command & (ATA_BUSY | ATA_DRDY | ATA_DF | ATA_DRQ |
- ATA_ERR)) {
+ ATA_SENSE | ATA_ERR)) {
if (res->command & ATA_BUSY)
ata_dev_err(qc->dev, "status: { Busy }\n");
else
- ata_dev_err(qc->dev, "status: { %s%s%s%s}\n",
+ ata_dev_err(qc->dev, "status: { %s%s%s%s%s}\n",
res->command & ATA_DRDY ? "DRDY " : "",
res->command & ATA_DF ? "DF " : "",
res->command & ATA_DRQ ? "DRQ " : "",
+ res->command & ATA_SENSE ? "SENSE " : "",
res->command & ATA_ERR ? "ERR " : "");
}
diff --git a/drivers/ata/libata-scsi.c b/drivers/ata/libata-scsi.c
index 567859ce0512..bfec66fb26e2 100644
--- a/drivers/ata/libata-scsi.c
+++ b/drivers/ata/libata-scsi.c
@@ -270,11 +270,52 @@ DEVICE_ATTR(unload_heads, S_IRUGO | S_IWUSR,
ata_scsi_park_show, ata_scsi_park_store);
EXPORT_SYMBOL_GPL(dev_attr_unload_heads);
-static void ata_scsi_set_sense(struct scsi_cmnd *cmd, u8 sk, u8 asc, u8 ascq)
+void ata_scsi_set_sense(struct ata_device *dev, struct scsi_cmnd *cmd,
+ u8 sk, u8 asc, u8 ascq)
{
+ bool d_sense = (dev->flags & ATA_DFLAG_D_SENSE);
+
+ if (!cmd)
+ return;
+
cmd->result = (DRIVER_SENSE << 24) | SAM_STAT_CHECK_CONDITION;
- scsi_build_sense_buffer(0, cmd->sense_buffer, sk, asc, ascq);
+ scsi_build_sense_buffer(d_sense, cmd->sense_buffer, sk, asc, ascq);
+}
+
+void ata_scsi_set_sense_information(struct ata_device *dev,
+ struct scsi_cmnd *cmd,
+ const struct ata_taskfile *tf)
+{
+ u64 information;
+
+ if (!cmd)
+ return;
+
+ information = ata_tf_read_block(tf, dev);
+ if (information == U64_MAX)
+ return;
+
+ scsi_set_sense_information(cmd->sense_buffer,
+ SCSI_SENSE_BUFFERSIZE, information);
+}
+
+static void ata_scsi_set_invalid_field(struct ata_device *dev,
+ struct scsi_cmnd *cmd, u16 field, u8 bit)
+{
+ ata_scsi_set_sense(dev, cmd, ILLEGAL_REQUEST, 0x24, 0x0);
+ /* "Invalid field in cbd" */
+ scsi_set_sense_field_pointer(cmd->sense_buffer, SCSI_SENSE_BUFFERSIZE,
+ field, bit, 1);
+}
+
+static void ata_scsi_set_invalid_parameter(struct ata_device *dev,
+ struct scsi_cmnd *cmd, u16 field)
+{
+ /* "Invalid field in parameter list" */
+ ata_scsi_set_sense(dev, cmd, ILLEGAL_REQUEST, 0x26, 0x0);
+ scsi_set_sense_field_pointer(cmd->sense_buffer, SCSI_SENSE_BUFFERSIZE,
+ field, 0xff, 0);
}
static ssize_t
@@ -364,10 +405,10 @@ struct device_attribute *ata_common_sdev_attrs[] = {
};
EXPORT_SYMBOL_GPL(ata_common_sdev_attrs);
-static void ata_scsi_invalid_field(struct scsi_cmnd *cmd)
+static void ata_scsi_invalid_field(struct ata_device *dev,
+ struct scsi_cmnd *cmd, u16 field)
{
- ata_scsi_set_sense(cmd, ILLEGAL_REQUEST, 0x24, 0x0);
- /* "Invalid field in cbd" */
+ ata_scsi_set_invalid_field(dev, cmd, field, 0xff);
cmd->scsi_done(cmd);
}
@@ -980,6 +1021,7 @@ static void ata_gen_passthru_sense(struct ata_queued_cmd *qc)
unsigned char *sb = cmd->sense_buffer;
unsigned char *desc = sb + 8;
int verbose = qc->ap->ops->error_handler == NULL;
+ u8 sense_key, asc, ascq;
memset(sb, 0, SCSI_SENSE_BUFFERSIZE);
@@ -992,47 +1034,71 @@ static void ata_gen_passthru_sense(struct ata_queued_cmd *qc)
if (qc->err_mask ||
tf->command & (ATA_BUSY | ATA_DF | ATA_ERR | ATA_DRQ)) {
ata_to_sense_error(qc->ap->print_id, tf->command, tf->feature,
- &sb[1], &sb[2], &sb[3], verbose);
- sb[1] &= 0x0f;
+ &sense_key, &asc, &ascq, verbose);
+ ata_scsi_set_sense(qc->dev, cmd, sense_key, asc, ascq);
} else {
- sb[1] = RECOVERED_ERROR;
- sb[2] = 0;
- sb[3] = 0x1D;
+ /*
+ * ATA PASS-THROUGH INFORMATION AVAILABLE
+ * Always in descriptor format sense.
+ */
+ scsi_build_sense_buffer(1, cmd->sense_buffer,
+ RECOVERED_ERROR, 0, 0x1D);
}
- /*
- * Sense data is current and format is descriptor.
- */
- sb[0] = 0x72;
-
- desc[0] = 0x09;
-
- /* set length of additional sense data */
- sb[7] = 14;
- desc[1] = 12;
-
- /*
- * Copy registers into sense buffer.
- */
- desc[2] = 0x00;
- desc[3] = tf->feature; /* == error reg */
- desc[5] = tf->nsect;
- desc[7] = tf->lbal;
- desc[9] = tf->lbam;
- desc[11] = tf->lbah;
- desc[12] = tf->device;
- desc[13] = tf->command; /* == status reg */
+ if ((cmd->sense_buffer[0] & 0x7f) >= 0x72) {
+ u8 len;
+
+ /* descriptor format */
+ len = sb[7];
+ desc = (char *)scsi_sense_desc_find(sb, len + 8, 9);
+ if (!desc) {
+ if (SCSI_SENSE_BUFFERSIZE < len + 14)
+ return;
+ sb[7] = len + 14;
+ desc = sb + 8 + len;
+ }
+ desc[0] = 9;
+ desc[1] = 12;
+ /*
+ * Copy registers into sense buffer.
+ */
+ desc[2] = 0x00;
+ desc[3] = tf->feature; /* == error reg */
+ desc[5] = tf->nsect;
+ desc[7] = tf->lbal;
+ desc[9] = tf->lbam;
+ desc[11] = tf->lbah;
+ desc[12] = tf->device;
+ desc[13] = tf->command; /* == status reg */
- /*
- * Fill in Extend bit, and the high order bytes
- * if applicable.
- */
- if (tf->flags & ATA_TFLAG_LBA48) {
- desc[2] |= 0x01;
- desc[4] = tf->hob_nsect;
- desc[6] = tf->hob_lbal;
- desc[8] = tf->hob_lbam;
- desc[10] = tf->hob_lbah;
+ /*
+ * Fill in Extend bit, and the high order bytes
+ * if applicable.
+ */
+ if (tf->flags & ATA_TFLAG_LBA48) {
+ desc[2] |= 0x01;
+ desc[4] = tf->hob_nsect;
+ desc[6] = tf->hob_lbal;
+ desc[8] = tf->hob_lbam;
+ desc[10] = tf->hob_lbah;
+ }
+ } else {
+ /* Fixed sense format */
+ desc[0] = tf->feature;
+ desc[1] = tf->command; /* status */
+ desc[2] = tf->device;
+ desc[3] = tf->nsect;
+ desc[7] = 0;
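+ /*
+ * Fixed format cannot carry the full 48-bit taskfile; flag in
+ * desc[8] that this was a 48-bit command and whether the upper
+ * count/LBA bytes were non-zero.
+ */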
+ if (tf->flags & ATA_TFLAG_LBA48) {
+ desc[8] |= 0x80;
+ if (tf->hob_nsect)
+ desc[8] |= 0x40;
+ if (tf->hob_lbal || tf->hob_lbam || tf->hob_lbah)
+ desc[8] |= 0x20;
+ }
+ desc[9] = tf->lbal;
+ desc[10] = tf->lbam;
+ desc[11] = tf->lbah;
}
}
@@ -1052,41 +1118,41 @@ static void ata_gen_ata_sense(struct ata_queued_cmd *qc)
struct scsi_cmnd *cmd = qc->scsicmd;
struct ata_taskfile *tf = &qc->result_tf;
unsigned char *sb = cmd->sense_buffer;
- unsigned char *desc = sb + 8;
int verbose = qc->ap->ops->error_handler == NULL;
u64 block;
+ u8 sense_key, asc, ascq;
memset(sb, 0, SCSI_SENSE_BUFFERSIZE);
cmd->result = (DRIVER_SENSE << 24) | SAM_STAT_CHECK_CONDITION;
- /* sense data is current and format is descriptor */
- sb[0] = 0x72;
-
+ if (ata_dev_disabled(dev)) {
+ /* Device disabled after error recovery */
+ /* LOGICAL UNIT NOT READY, HARD RESET REQUIRED */
+ ata_scsi_set_sense(dev, cmd, NOT_READY, 0x04, 0x21);
+ return;
+ }
/* Use ata_to_sense_error() to map status register bits
* onto sense key, asc & ascq.
*/
if (qc->err_mask ||
tf->command & (ATA_BUSY | ATA_DF | ATA_ERR | ATA_DRQ)) {
ata_to_sense_error(qc->ap->print_id, tf->command, tf->feature,
- &sb[1], &sb[2], &sb[3], verbose);
- sb[1] &= 0x0f;
+ &sense_key, &asc, &ascq, verbose);
+ ata_scsi_set_sense(dev, cmd, sense_key, asc, ascq);
+ } else {
+ /* Could not decode error */
+ ata_dev_warn(dev, "could not decode error status 0x%x err_mask 0x%x\n",
+ tf->command, qc->err_mask);
+ ata_scsi_set_sense(dev, cmd, ABORTED_COMMAND, 0, 0);
+ return;
}
block = ata_tf_read_block(&qc->result_tf, dev);
+ if (block == U64_MAX)
+ return;
- /* information sense data descriptor */
- sb[7] = 12;
- desc[0] = 0x00;
- desc[1] = 10;
-
- desc[2] |= 0x80; /* valid */
- desc[6] = block >> 40;
- desc[7] = block >> 32;
- desc[8] = block >> 24;
- desc[9] = block >> 16;
- desc[10] = block >> 8;
- desc[11] = block;
+ scsi_set_sense_information(sb, SCSI_SENSE_BUFFERSIZE, block);
}
static void ata_scsi_sdev_config(struct scsi_device *sdev)
@@ -1109,7 +1175,7 @@ static void ata_scsi_sdev_config(struct scsi_device *sdev)
* @rq: request to be checked
*
* ATAPI commands which transfer variable length data to host
- * might overflow due to application error or hardare bug. This
+ * might overflow due to application error or hardware bug. This
* function checks whether overflow should be drained and ignored
* for @request.
*
@@ -1343,19 +1409,29 @@ static unsigned int ata_scsi_start_stop_xlat(struct ata_queued_cmd *qc)
struct scsi_cmnd *scmd = qc->scsicmd;
struct ata_taskfile *tf = &qc->tf;
const u8 *cdb = scmd->cmnd;
+ u16 fp;
+ u8 bp = 0xff;
- if (scmd->cmd_len < 5)
+ if (scmd->cmd_len < 5) {
+ fp = 4;
goto invalid_fld;
+ }
tf->flags |= ATA_TFLAG_DEVICE | ATA_TFLAG_ISADDR;
tf->protocol = ATA_PROT_NODATA;
if (cdb[1] & 0x1) {
; /* ignore IMMED bit, violates sat-r05 */
}
- if (cdb[4] & 0x2)
+ if (cdb[4] & 0x2) {
+ fp = 4;
+ bp = 1;
goto invalid_fld; /* LOEJ bit set not supported */
- if (((cdb[4] >> 4) & 0xf) != 0)
+ }
+ if (((cdb[4] >> 4) & 0xf) != 0) {
+ fp = 4;
+ bp = 3;
goto invalid_fld; /* power conditions not supported */
+ }
if (cdb[4] & 0x1) {
tf->nsect = 1; /* 1 sector, lba=0 */
@@ -1401,8 +1477,7 @@ static unsigned int ata_scsi_start_stop_xlat(struct ata_queued_cmd *qc)
return 0;
invalid_fld:
- ata_scsi_set_sense(scmd, ILLEGAL_REQUEST, 0x24, 0x0);
- /* "Invalid field in cbd" */
+ ata_scsi_set_invalid_field(qc->dev, scmd, fp, bp);
return 1;
skip:
scmd->result = SAM_STAT_GOOD;
@@ -1553,20 +1628,27 @@ static unsigned int ata_scsi_verify_xlat(struct ata_queued_cmd *qc)
const u8 *cdb = scmd->cmnd;
u64 block;
u32 n_block;
+ u16 fp;
tf->flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE;
tf->protocol = ATA_PROT_NODATA;
if (cdb[0] == VERIFY) {
- if (scmd->cmd_len < 10)
+ if (scmd->cmd_len < 10) {
+ fp = 9;
goto invalid_fld;
+ }
scsi_10_lba_len(cdb, &block, &n_block);
} else if (cdb[0] == VERIFY_16) {
- if (scmd->cmd_len < 16)
+ if (scmd->cmd_len < 16) {
+ fp = 15;
goto invalid_fld;
+ }
scsi_16_lba_len(cdb, &block, &n_block);
- } else
+ } else {
+ fp = 0;
goto invalid_fld;
+ }
if (!n_block)
goto nothing_to_do;
@@ -1640,12 +1722,11 @@ static unsigned int ata_scsi_verify_xlat(struct ata_queued_cmd *qc)
return 0;
invalid_fld:
- ata_scsi_set_sense(scmd, ILLEGAL_REQUEST, 0x24, 0x0);
- /* "Invalid field in cbd" */
+ ata_scsi_set_invalid_field(qc->dev, scmd, fp, 0xff);
return 1;
out_of_range:
- ata_scsi_set_sense(scmd, ILLEGAL_REQUEST, 0x21, 0x0);
+ ata_scsi_set_sense(qc->dev, scmd, ILLEGAL_REQUEST, 0x21, 0x0);
/* "Logical Block Address out of range" */
return 1;
@@ -1680,6 +1761,7 @@ static unsigned int ata_scsi_rw_xlat(struct ata_queued_cmd *qc)
u64 block;
u32 n_block;
int rc;
+ u16 fp = 0;
if (cdb[0] == WRITE_10 || cdb[0] == WRITE_6 || cdb[0] == WRITE_16)
tf_flags |= ATA_TFLAG_WRITE;
@@ -1688,16 +1770,20 @@ static unsigned int ata_scsi_rw_xlat(struct ata_queued_cmd *qc)
switch (cdb[0]) {
case READ_10:
case WRITE_10:
- if (unlikely(scmd->cmd_len < 10))
+ if (unlikely(scmd->cmd_len < 10)) {
+ fp = 9;
goto invalid_fld;
+ }
scsi_10_lba_len(cdb, &block, &n_block);
if (cdb[1] & (1 << 3))
tf_flags |= ATA_TFLAG_FUA;
break;
case READ_6:
case WRITE_6:
- if (unlikely(scmd->cmd_len < 6))
+ if (unlikely(scmd->cmd_len < 6)) {
+ fp = 5;
goto invalid_fld;
+ }
scsi_6_lba_len(cdb, &block, &n_block);
/* for 6-byte r/w commands, transfer length 0
@@ -1708,14 +1794,17 @@ static unsigned int ata_scsi_rw_xlat(struct ata_queued_cmd *qc)
break;
case READ_16:
case WRITE_16:
- if (unlikely(scmd->cmd_len < 16))
+ if (unlikely(scmd->cmd_len < 16)) {
+ fp = 15;
goto invalid_fld;
+ }
scsi_16_lba_len(cdb, &block, &n_block);
if (cdb[1] & (1 << 3))
tf_flags |= ATA_TFLAG_FUA;
break;
default:
DPRINTK("no-byte command\n");
+ fp = 0;
goto invalid_fld;
}
@@ -1742,12 +1831,11 @@ static unsigned int ata_scsi_rw_xlat(struct ata_queued_cmd *qc)
goto out_of_range;
/* treat all other errors as -EINVAL, fall through */
invalid_fld:
- ata_scsi_set_sense(scmd, ILLEGAL_REQUEST, 0x24, 0x0);
- /* "Invalid field in cbd" */
+ ata_scsi_set_invalid_field(qc->dev, scmd, fp, 0xff);
return 1;
out_of_range:
- ata_scsi_set_sense(scmd, ILLEGAL_REQUEST, 0x21, 0x0);
+ ata_scsi_set_sense(qc->dev, scmd, ILLEGAL_REQUEST, 0x21, 0x0);
/* "Logical Block Address out of range" */
return 1;
@@ -1784,6 +1872,8 @@ static void ata_scsi_qc_complete(struct ata_queued_cmd *qc)
if (((cdb[0] == ATA_16) || (cdb[0] == ATA_12)) &&
((cdb[2] & 0x20) || need_sense))
ata_gen_passthru_sense(qc);
+ else if (qc->flags & ATA_QCFLAG_SENSE_VALID)
+ cmd->result = SAM_STAT_CHECK_CONDITION;
else if (need_sense)
ata_gen_ata_sense(qc);
else
@@ -1992,14 +2082,14 @@ static unsigned int ata_scsiop_inq_std(struct ata_scsi_args *args, u8 *rbuf)
0x00,
0xA0, /* SAM-5 (no version claimed) */
- 0x04,
- 0xC0, /* SBC-3 (no version claimed) */
+ 0x06,
+ 0x00, /* SBC-4 (no version claimed) */
- 0x04,
- 0x60, /* SPC-4 (no version claimed) */
+ 0x05,
+ 0xC0, /* SPC-5 (no version claimed) */
0x60,
- 0x20, /* ZBC (no version claimed) */
+ 0x24, /* ZBC r05 */
};
u8 hdr[] = {
@@ -2019,10 +2109,8 @@ static unsigned int ata_scsiop_inq_std(struct ata_scsi_args *args, u8 *rbuf)
(args->dev->link->ap->pflags & ATA_PFLAG_EXTERNAL))
hdr[1] |= (1 << 7);
- if (args->dev->class == ATA_DEV_ZAC) {
+ if (args->dev->class == ATA_DEV_ZAC)
hdr[0] = TYPE_ZBC;
- hdr[2] = 0x6; /* ZBC is defined in SPC-4 */
- }
memcpy(rbuf, hdr, sizeof(hdr));
memcpy(&rbuf[8], "ATA ", 8);
@@ -2036,7 +2124,7 @@ static unsigned int ata_scsiop_inq_std(struct ata_scsi_args *args, u8 *rbuf)
if (rbuf[32] == 0 || rbuf[32] == ' ')
memcpy(&rbuf[32], "n/a ", 4);
- if (args->dev->class == ATA_DEV_ZAC)
+ if (ata_id_zoned_cap(args->id) || args->dev->class == ATA_DEV_ZAC)
memcpy(rbuf + 58, versions_zbc, sizeof(versions_zbc));
else
memcpy(rbuf + 58, versions, sizeof(versions));
@@ -2056,6 +2144,7 @@ static unsigned int ata_scsiop_inq_std(struct ata_scsi_args *args, u8 *rbuf)
*/
static unsigned int ata_scsiop_inq_00(struct ata_scsi_args *args, u8 *rbuf)
{
+ int num_pages;
const u8 pages[] = {
0x00, /* page 0x00, this page */
0x80, /* page 0x80, unit serial no page */
@@ -2064,10 +2153,14 @@ static unsigned int ata_scsiop_inq_00(struct ata_scsi_args *args, u8 *rbuf)
0xb0, /* page 0xb0, block limits page */
0xb1, /* page 0xb1, block device characteristics page */
0xb2, /* page 0xb2, thin provisioning page */
+ 0xb6, /* page 0xb6, zoned block device characteristics */
};
- rbuf[3] = sizeof(pages); /* number of supported VPD pages */
- memcpy(rbuf + 4, pages, sizeof(pages));
+ num_pages = sizeof(pages);
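+ /*
+ * The zoned block device characteristics page (0xb6) is the last
+ * entry in the list; drop it for devices without ZAC support.
+ */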
+ if (!(args->dev->flags & ATA_DFLAG_ZAC))
+ num_pages--;
+ rbuf[3] = num_pages; /* number of supported VPD pages */
+ memcpy(rbuf + 4, pages, num_pages);
return 0;
}
@@ -2232,12 +2325,15 @@ static unsigned int ata_scsiop_inq_b1(struct ata_scsi_args *args, u8 *rbuf)
{
int form_factor = ata_id_form_factor(args->id);
int media_rotation_rate = ata_id_rotation_rate(args->id);
+ u8 zoned = ata_id_zoned_cap(args->id);
rbuf[1] = 0xb1;
rbuf[3] = 0x3c;
rbuf[4] = media_rotation_rate >> 8;
rbuf[5] = media_rotation_rate;
rbuf[7] = form_factor;
+ if (zoned)
+ rbuf[8] = (zoned << 4);
return 0;
}
@@ -2252,6 +2348,26 @@ static unsigned int ata_scsiop_inq_b2(struct ata_scsi_args *args, u8 *rbuf)
return 0;
}
+static unsigned int ata_scsiop_inq_b6(struct ata_scsi_args *args, u8 *rbuf)
+{
+ /*
+ * zbc-r05 SCSI Zoned Block device characteristics VPD page
+ */
+ rbuf[1] = 0xb6;
+ rbuf[3] = 0x3C;
+
+ /*
+ * URSWRZ bit is only meaningful for host-managed ZAC drives
+ */
+ if (args->dev->zac_zoned_cap & 1)
+ rbuf[4] |= 1;
+ put_unaligned_be32(args->dev->zac_zones_optimal_open, &rbuf[8]);
+ put_unaligned_be32(args->dev->zac_zones_optimal_nonseq, &rbuf[12]);
+ put_unaligned_be32(args->dev->zac_zones_max_open, &rbuf[16]);
+
+ return 0;
+}
+
/**
* ata_scsiop_noop - Command handler that simply returns success.
* @args: device IDENTIFY data / SCSI command of interest.
@@ -2317,6 +2433,7 @@ static unsigned int ata_msense_caching(u16 *id, u8 *buf, bool changeable)
/**
* ata_msense_ctl_mode - Simulate MODE SENSE control mode page
+ * @dev: ATA device of interest
* @buf: output buffer
* @changeable: whether changeable parameters are requested
*
@@ -2325,9 +2442,12 @@ static unsigned int ata_msense_caching(u16 *id, u8 *buf, bool changeable)
* LOCKING:
* None.
*/
-static unsigned int ata_msense_ctl_mode(u8 *buf, bool changeable)
+static unsigned int ata_msense_ctl_mode(struct ata_device *dev, u8 *buf,
+ bool changeable)
{
modecpy(buf, def_control_mpage, sizeof(def_control_mpage), changeable);
+ if (changeable && (dev->flags & ATA_DFLAG_D_SENSE))
+ buf[2] |= (1 << 2); /* Descriptor sense requested */
return sizeof(def_control_mpage);
}
@@ -2395,7 +2515,8 @@ static unsigned int ata_scsiop_mode_sense(struct ata_scsi_args *args, u8 *rbuf)
};
u8 pg, spg;
unsigned int ebd, page_control, six_byte;
- u8 dpofua;
+ u8 dpofua, bp = 0xff;
+ u16 fp;
VPRINTK("ENTER\n");
@@ -2414,6 +2535,8 @@ static unsigned int ata_scsiop_mode_sense(struct ata_scsi_args *args, u8 *rbuf)
case 3: /* saved */
goto saving_not_supp;
default:
+ fp = 2;
+ bp = 6;
goto invalid_fld;
}
@@ -2428,8 +2551,10 @@ static unsigned int ata_scsiop_mode_sense(struct ata_scsi_args *args, u8 *rbuf)
* No mode subpages supported (yet) but asking for _all_
* subpages may be valid
*/
- if (spg && (spg != ALL_SUB_MPAGES))
+ if (spg && (spg != ALL_SUB_MPAGES)) {
+ fp = 3;
goto invalid_fld;
+ }
switch(pg) {
case RW_RECOVERY_MPAGE:
@@ -2441,16 +2566,17 @@ static unsigned int ata_scsiop_mode_sense(struct ata_scsi_args *args, u8 *rbuf)
break;
case CONTROL_MPAGE:
- p += ata_msense_ctl_mode(p, page_control == 1);
+ p += ata_msense_ctl_mode(args->dev, p, page_control == 1);
break;
case ALL_MPAGES:
p += ata_msense_rw_recovery(p, page_control == 1);
p += ata_msense_caching(args->id, p, page_control == 1);
- p += ata_msense_ctl_mode(p, page_control == 1);
+ p += ata_msense_ctl_mode(args->dev, p, page_control == 1);
break;
default: /* invalid page code */
+ fp = 2;
goto invalid_fld;
}
@@ -2480,12 +2606,11 @@ static unsigned int ata_scsiop_mode_sense(struct ata_scsi_args *args, u8 *rbuf)
return 0;
invalid_fld:
- ata_scsi_set_sense(args->cmd, ILLEGAL_REQUEST, 0x24, 0x0);
- /* "Invalid field in cbd" */
+ ata_scsi_set_invalid_field(dev, args->cmd, fp, bp);
return 1;
saving_not_supp:
- ata_scsi_set_sense(args->cmd, ILLEGAL_REQUEST, 0x39, 0x0);
+ ata_scsi_set_sense(dev, args->cmd, ILLEGAL_REQUEST, 0x39, 0x0);
/* "Saving parameters not supported" */
return 1;
}
@@ -2561,6 +2686,9 @@ static unsigned int ata_scsiop_read_cap(struct ata_scsi_args *args, u8 *rbuf)
rbuf[14] |= 0x40; /* LBPRZ */
}
}
+ if (ata_id_zoned_cap(args->id) ||
+ args->dev->class == ATA_DEV_ZAC)
+ rbuf[12] = (1 << 4); /* RC_BASIS */
}
return 0;
}
@@ -2942,9 +3070,12 @@ static unsigned int ata_scsi_pass_thru(struct ata_queued_cmd *qc)
struct scsi_cmnd *scmd = qc->scsicmd;
struct ata_device *dev = qc->dev;
const u8 *cdb = scmd->cmnd;
+ u16 fp;
- if ((tf->protocol = ata_scsi_map_proto(cdb[1])) == ATA_PROT_UNKNOWN)
+ if ((tf->protocol = ata_scsi_map_proto(cdb[1])) == ATA_PROT_UNKNOWN) {
+ fp = 1;
goto invalid_fld;
+ }
/* enable LBA */
tf->flags |= ATA_TFLAG_LBA;
@@ -3008,8 +3139,10 @@ static unsigned int ata_scsi_pass_thru(struct ata_queued_cmd *qc)
case ATA_CMD_READ_LONG_ONCE:
case ATA_CMD_WRITE_LONG:
case ATA_CMD_WRITE_LONG_ONCE:
- if (tf->protocol != ATA_PROT_PIO || tf->nsect != 1)
+ if (tf->protocol != ATA_PROT_PIO || tf->nsect != 1) {
+ fp = 1;
goto invalid_fld;
+ }
qc->sect_size = scsi_bufflen(scmd);
break;
@@ -3072,12 +3205,16 @@ static unsigned int ata_scsi_pass_thru(struct ata_queued_cmd *qc)
ata_qc_set_pc_nbytes(qc);
/* We may not issue DMA commands if no DMA mode is set */
- if (tf->protocol == ATA_PROT_DMA && dev->dma_mode == 0)
+ if (tf->protocol == ATA_PROT_DMA && dev->dma_mode == 0) {
+ fp = 1;
goto invalid_fld;
+ }
/* sanity check for pio multi commands */
- if ((cdb[1] & 0xe0) && !is_multi_taskfile(tf))
+ if ((cdb[1] & 0xe0) && !is_multi_taskfile(tf)) {
+ fp = 1;
goto invalid_fld;
+ }
if (is_multi_taskfile(tf)) {
unsigned int multi_count = 1 << (cdb[1] >> 5);
@@ -3098,8 +3235,10 @@ static unsigned int ata_scsi_pass_thru(struct ata_queued_cmd *qc)
* ->set_dmamode(), and ->post_set_mode() hooks).
*/
if (tf->command == ATA_CMD_SET_FEATURES &&
- tf->feature == SETFEATURES_XFER)
+ tf->feature == SETFEATURES_XFER) {
+ fp = (cdb[0] == ATA_16) ? 4 : 3;
goto invalid_fld;
+ }
/*
* Filter TPM commands by default. These provide an
@@ -3116,14 +3255,15 @@ static unsigned int ata_scsi_pass_thru(struct ata_queued_cmd *qc)
* so that we comply with the TC consortium stated goal that the user
* can turn off TC features of their system.
*/
- if (tf->command >= 0x5C && tf->command <= 0x5F && !libata_allow_tpm)
+ if (tf->command >= 0x5C && tf->command <= 0x5F && !libata_allow_tpm) {
+ fp = (cdb[0] == ATA_16) ? 14 : 9;
goto invalid_fld;
+ }
return 0;
invalid_fld:
- ata_scsi_set_sense(scmd, ILLEGAL_REQUEST, 0x24, 0x00);
- /* "Invalid field in cdb" */
+ ata_scsi_set_invalid_field(dev, scmd, fp, 0xff);
return 1;
}
@@ -3137,25 +3277,32 @@ static unsigned int ata_scsi_write_same_xlat(struct ata_queued_cmd *qc)
u32 n_block;
u32 size;
void *buf;
+ u16 fp;
+ u8 bp = 0xff;
/* we may not issue DMA commands if no DMA mode is set */
if (unlikely(!dev->dma_mode))
- goto invalid_fld;
+ goto invalid_opcode;
- if (unlikely(scmd->cmd_len < 16))
+ if (unlikely(scmd->cmd_len < 16)) {
+ fp = 15;
goto invalid_fld;
+ }
scsi_16_lba_len(cdb, &block, &n_block);
/* for now we only support WRITE SAME with the unmap bit set */
- if (unlikely(!(cdb[1] & 0x8)))
+ if (unlikely(!(cdb[1] & 0x8))) {
+ fp = 1;
+ bp = 3;
goto invalid_fld;
+ }
/*
* WRITE SAME always has a sector sized buffer as payload, this
* should never be a multiple entry S/G list.
*/
if (!scsi_sg_count(scmd))
- goto invalid_fld;
+ goto invalid_param_len;
buf = page_address(sg_page(scsi_sglist(scmd)));
size = ata_set_lba_range_entries(buf, 512, block, n_block);
@@ -3186,9 +3333,242 @@ static unsigned int ata_scsi_write_same_xlat(struct ata_queued_cmd *qc)
return 0;
+invalid_fld:
+ ata_scsi_set_invalid_field(dev, scmd, fp, bp);
+ return 1;
+invalid_param_len:
+ /* "Parameter list length error" */
+ ata_scsi_set_sense(dev, scmd, ILLEGAL_REQUEST, 0x1a, 0x0);
+ return 1;
+invalid_opcode:
+ /* "Invalid command operation code" */
+ ata_scsi_set_sense(dev, scmd, ILLEGAL_REQUEST, 0x20, 0x0);
+ return 1;
+}
+
+/**
+ * ata_scsi_report_zones_complete - convert ATA output
+ * @qc: command structure returning the data
+ *
+ * Convert T-13 little-endian field representation into
+ * T-10 big-endian field representation.
+ * What a mess.
+ */
+static void ata_scsi_report_zones_complete(struct ata_queued_cmd *qc)
+{
+ struct scsi_cmnd *scmd = qc->scsicmd;
+ struct sg_mapping_iter miter;
+ unsigned long flags;
+ unsigned int bytes = 0;
+
+ sg_miter_start(&miter, scsi_sglist(scmd), scsi_sg_count(scmd),
+ SG_MITER_TO_SG | SG_MITER_ATOMIC);
+
+ local_irq_save(flags);
+ while (sg_miter_next(&miter)) {
+ unsigned int offset = 0;
+
+ if (bytes == 0) {
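+ /*
+ * The 64-byte REPORT ZONES header is only present at the
+ * start of the returned data.
+ */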
+ char *hdr;
+ u32 list_length;
+ u64 max_lba, opt_lba;
+ u16 same;
+
+ /* Swizzle header */
+ hdr = miter.addr;
+ list_length = get_unaligned_le32(&hdr[0]);
+ same = get_unaligned_le16(&hdr[4]);
+ max_lba = get_unaligned_le64(&hdr[8]);
+ opt_lba = get_unaligned_le64(&hdr[16]);
+ put_unaligned_be32(list_length, &hdr[0]);
+ hdr[4] = same & 0xf;
+ put_unaligned_be64(max_lba, &hdr[8]);
+ put_unaligned_be64(opt_lba, &hdr[16]);
+ offset += 64;
+ bytes += 64;
+ }
+ while (offset < miter.length) {
+ char *rec;
+ u8 cond, type, non_seq, reset;
+ u64 size, start, wp;
+
+ /* Swizzle zone descriptor */
+ rec = miter.addr + offset;
+ type = rec[0] & 0xf;
+ cond = (rec[1] >> 4) & 0xf;
+ non_seq = (rec[1] & 2);
+ reset = (rec[1] & 1);
+ size = get_unaligned_le64(&rec[8]);
+ start = get_unaligned_le64(&rec[16]);
+ wp = get_unaligned_le64(&rec[24]);
+ rec[0] = type;
+ rec[1] = (cond << 4) | non_seq | reset;
+ put_unaligned_be64(size, &rec[8]);
+ put_unaligned_be64(start, &rec[16]);
+ put_unaligned_be64(wp, &rec[24]);
+ WARN_ON(offset + 64 > miter.length);
+ offset += 64;
+ bytes += 64;
+ }
+ }
+ sg_miter_stop(&miter);
+ local_irq_restore(flags);
+
+ ata_scsi_qc_complete(qc);
+}
+
+static unsigned int ata_scsi_zbc_in_xlat(struct ata_queued_cmd *qc)
+{
+ struct ata_taskfile *tf = &qc->tf;
+ struct scsi_cmnd *scmd = qc->scsicmd;
+ const u8 *cdb = scmd->cmnd;
+ u16 sect, fp = (u16)-1;
+ u8 sa, options, bp = 0xff;
+ u64 block;
+ u32 n_block;
+
+ if (unlikely(scmd->cmd_len < 16)) {
+ ata_dev_warn(qc->dev, "invalid cdb length %d\n",
+ scmd->cmd_len);
+ fp = 15;
+ goto invalid_fld;
+ }
+ scsi_16_lba_len(cdb, &block, &n_block);
+ if (n_block != scsi_bufflen(scmd)) {
+ ata_dev_warn(qc->dev, "non-matching transfer count (%d/%d)\n",
+ n_block, scsi_bufflen(scmd));
+ goto invalid_param_len;
+ }
+ sa = cdb[1] & 0x1f;
+ if (sa != ZI_REPORT_ZONES) {
+ ata_dev_warn(qc->dev, "invalid service action %d\n", sa);
+ fp = 1;
+ goto invalid_fld;
+ }
+ /*
+ * ZAC allows only for transfers in 512 byte blocks,
+ * and uses a 16 bit value for the transfer count.
+ */
+ if ((n_block / 512) > 0xffff || n_block < 512 || (n_block % 512)) {
+ ata_dev_warn(qc->dev, "invalid transfer count %d\n", n_block);
+ goto invalid_param_len;
+ }
+ sect = n_block / 512;
+ options = cdb[14];
+
+ if (ata_ncq_enabled(qc->dev) &&
+ ata_fpdma_zac_mgmt_in_supported(qc->dev)) {
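+ /*
+ * NCQ encoding: the ZAC MGMT IN subcommand goes in the upper
+ * COUNT byte, the NCQ tag in COUNT bits 7:3, the number of
+ * 512-byte pages in FEATURE and the REPORT ZONES action in
+ * AUXILIARY.
+ */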
+ tf->protocol = ATA_PROT_NCQ;
+ tf->command = ATA_CMD_FPDMA_RECV;
+ tf->hob_nsect = ATA_SUBCMD_FPDMA_RECV_ZAC_MGMT_IN & 0x1f;
+ tf->nsect = qc->tag << 3;
+ tf->feature = sect & 0xff;
+ tf->hob_feature = (sect >> 8) & 0xff;
+ tf->auxiliary = ATA_SUBCMD_ZAC_MGMT_IN_REPORT_ZONES;
+ } else {
+ tf->command = ATA_CMD_ZAC_MGMT_IN;
+ tf->feature = ATA_SUBCMD_ZAC_MGMT_IN_REPORT_ZONES;
+ tf->protocol = ATA_PROT_DMA;
+ tf->hob_feature = options;
+ tf->hob_nsect = (sect >> 8) & 0xff;
+ tf->nsect = sect & 0xff;
+ }
+ tf->device = ATA_LBA;
+ tf->lbah = (block >> 16) & 0xff;
+ tf->lbam = (block >> 8) & 0xff;
+ tf->lbal = block & 0xff;
+ tf->hob_lbah = (block >> 40) & 0xff;
+ tf->hob_lbam = (block >> 32) & 0xff;
+ tf->hob_lbal = (block >> 24) & 0xff;
+
+ tf->flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE | ATA_TFLAG_LBA48;
+ qc->flags |= ATA_QCFLAG_RESULT_TF;
+
+ ata_qc_set_pc_nbytes(qc);
+
+ qc->complete_fn = ata_scsi_report_zones_complete;
+
+ return 0;
+
+invalid_fld:
+ ata_scsi_set_invalid_field(qc->dev, scmd, fp, bp);
+ return 1;
+
+invalid_param_len:
+ /* "Parameter list length error" */
+ ata_scsi_set_sense(qc->dev, scmd, ILLEGAL_REQUEST, 0x1a, 0x0);
+ return 1;
+}
+
+static unsigned int ata_scsi_zbc_out_xlat(struct ata_queued_cmd *qc)
+{
+ struct ata_taskfile *tf = &qc->tf;
+ struct scsi_cmnd *scmd = qc->scsicmd;
+ struct ata_device *dev = qc->dev;
+ const u8 *cdb = scmd->cmnd;
+ u8 reset_all, sa;
+ u64 block;
+ u32 n_block;
+ u16 fp = (u16)-1;
+
+ if (unlikely(scmd->cmd_len < 16)) {
+ fp = 15;
+ goto invalid_fld;
+ }
+
+ sa = cdb[1] & 0x1f;
+ if ((sa != ZO_CLOSE_ZONE) && (sa != ZO_FINISH_ZONE) &&
+ (sa != ZO_OPEN_ZONE) && (sa != ZO_RESET_WRITE_POINTER)) {
+ fp = 1;
+ goto invalid_fld;
+ }
+
+ scsi_16_lba_len(cdb, &block, &n_block);
+ if (n_block) {
+ /*
+ * ZAC MANAGEMENT OUT doesn't define any length
+ */
+ goto invalid_param_len;
+ }
+ if (block > dev->n_sectors)
+ goto out_of_range;
+
+ reset_all = cdb[14] & 0x1;
+
+ if (ata_ncq_enabled(qc->dev) &&
+ ata_fpdma_zac_mgmt_out_supported(qc->dev)) {
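+ /*
+ * NCQ NON-DATA encoding: the ZAC MGMT OUT subcommand goes in the
+ * upper COUNT byte, the NCQ tag in COUNT bits 7:3, and the zone
+ * action plus the RESET ALL bit are packed into AUXILIARY.
+ */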
+ tf->protocol = ATA_PROT_NCQ;
+ tf->command = ATA_CMD_NCQ_NON_DATA;
+ tf->hob_nsect = ATA_SUBCMD_NCQ_NON_DATA_ZAC_MGMT_OUT;
+ tf->nsect = qc->tag << 3;
+ tf->auxiliary = sa | (reset_all & 0x1) << 8;
+ } else {
+ tf->protocol = ATA_PROT_NODATA;
+ tf->command = ATA_CMD_ZAC_MGMT_OUT;
+ tf->feature = sa;
+ tf->hob_feature = reset_all & 0x1;
+ }
+ tf->lbah = (block >> 16) & 0xff;
+ tf->lbam = (block >> 8) & 0xff;
+ tf->lbal = block & 0xff;
+ tf->hob_lbah = (block >> 40) & 0xff;
+ tf->hob_lbam = (block >> 32) & 0xff;
+ tf->hob_lbal = (block >> 24) & 0xff;
+ tf->device = ATA_LBA;
+ tf->flags |= ATA_TFLAG_ISADDR | ATA_TFLAG_DEVICE | ATA_TFLAG_LBA48;
+
+ return 0;
+
invalid_fld:
- ata_scsi_set_sense(scmd, ILLEGAL_REQUEST, 0x24, 0x00);
- /* "Invalid field in cdb" */
+ ata_scsi_set_invalid_field(qc->dev, scmd, fp, 0xff);
+ return 1;
+ out_of_range:
+ /* "Logical Block Address out of range" */
+ ata_scsi_set_sense(qc->dev, scmd, ILLEGAL_REQUEST, 0x21, 0x00);
+ return 1;
+invalid_param_len:
+ /* "Parameter list length error" */
+ ata_scsi_set_sense(qc->dev, scmd, ILLEGAL_REQUEST, 0x1a, 0x0);
return 1;
}
@@ -3197,6 +3577,7 @@ static unsigned int ata_scsi_write_same_xlat(struct ata_queued_cmd *qc)
* @qc: Storage for translated ATA taskfile
* @buf: input buffer
* @len: number of valid bytes in the input buffer
+ * @fp: out parameter for the failed field on error
*
* Prepare a taskfile to modify caching information for the device.
*
@@ -3204,20 +3585,26 @@ static unsigned int ata_scsi_write_same_xlat(struct ata_queued_cmd *qc)
* None.
*/
static int ata_mselect_caching(struct ata_queued_cmd *qc,
- const u8 *buf, int len)
+ const u8 *buf, int len, u16 *fp)
{
struct ata_taskfile *tf = &qc->tf;
struct ata_device *dev = qc->dev;
char mpage[CACHE_MPAGE_LEN];
u8 wce;
+ int i;
/*
* The first two bytes of def_cache_mpage are a header, so offsets
* in mpage are off by 2 compared to buf. Same for len.
*/
- if (len != CACHE_MPAGE_LEN - 2)
+ if (len != CACHE_MPAGE_LEN - 2) {
+ if (len < CACHE_MPAGE_LEN - 2)
+ *fp = len;
+ else
+ *fp = CACHE_MPAGE_LEN - 2;
return -EINVAL;
+ }
wce = buf[0] & (1 << 2);
@@ -3225,10 +3612,14 @@ static int ata_mselect_caching(struct ata_queued_cmd *qc,
* Check that read-only bits are not modified.
*/
ata_msense_caching(dev->id, mpage, false);
- mpage[2] &= ~(1 << 2);
- mpage[2] |= wce;
- if (memcmp(mpage + 2, buf, CACHE_MPAGE_LEN - 2) != 0)
- return -EINVAL;
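+ /*
+ * Byte 0 carries the WCE bit and is allowed to change; every
+ * other byte must match the current, read-only page contents.
+ */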
+ for (i = 0; i < CACHE_MPAGE_LEN - 2; i++) {
+ if (i == 0)
+ continue;
+ if (mpage[i + 2] != buf[i]) {
+ *fp = i;
+ return -EINVAL;
+ }
+ }
tf->flags |= ATA_TFLAG_DEVICE | ATA_TFLAG_ISADDR;
tf->protocol = ATA_PROT_NODATA;
@@ -3239,6 +3630,62 @@ static int ata_mselect_caching(struct ata_queued_cmd *qc,
}
/**
+ * ata_mselect_control - Simulate MODE SELECT for control page
+ * @qc: Storage for translated ATA taskfile
+ * @buf: input buffer
+ * @len: number of valid bytes in the input buffer
+ * @fp: out parameter for the failed field on error
+ *
+ * Apply the control mode page D_SENSE setting to the device.
+ *
+ * LOCKING:
+ * None.
+ */
+static int ata_mselect_control(struct ata_queued_cmd *qc,
+ const u8 *buf, int len, u16 *fp)
+{
+ struct ata_device *dev = qc->dev;
+ char mpage[CONTROL_MPAGE_LEN];
+ u8 d_sense;
+ int i;
+
+ /*
+ * The first two bytes of def_control_mpage are a header, so offsets
+ * in mpage are off by 2 compared to buf. Same for len.
+ */
+
+ if (len != CONTROL_MPAGE_LEN - 2) {
+ if (len < CONTROL_MPAGE_LEN - 2)
+ *fp = len;
+ else
+ *fp = CONTROL_MPAGE_LEN - 2;
+ return -EINVAL;
+ }
+
+ d_sense = buf[0] & (1 << 2);
+
+ /*
+ * Check that read-only bits are not modified.
+ */
+ ata_msense_ctl_mode(dev, mpage, false);
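+ /*
+ * Byte 0 carries the D_SENSE bit and is allowed to change; every
+ * other byte must match the current, read-only page contents.
+ */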
+ for (i = 0; i < CONTROL_MPAGE_LEN - 2; i++) {
+ if (i == 0)
+ continue;
+ if (mpage[2 + i] != buf[i]) {
+ *fp = i;
+ return -EINVAL;
+ }
+ }
+ if (d_sense & (1 << 2))
+ dev->flags |= ATA_DFLAG_D_SENSE;
+ else
+ dev->flags &= ~ATA_DFLAG_D_SENSE;
+ qc->scsicmd->result = SAM_STAT_GOOD;
+ qc->scsicmd->scsi_done(qc->scsicmd);
+ return 0;
+}
+
+/**
* ata_scsiop_mode_select - Simulate MODE SELECT 6, 10 commands
* @qc: Storage for translated ATA taskfile
*
@@ -3257,27 +3704,36 @@ static unsigned int ata_scsi_mode_select_xlat(struct ata_queued_cmd *qc)
u8 pg, spg;
unsigned six_byte, pg_len, hdr_len, bd_len;
int len;
+ u16 fp = (u16)-1;
+ u8 bp = 0xff;
VPRINTK("ENTER\n");
six_byte = (cdb[0] == MODE_SELECT);
if (six_byte) {
- if (scmd->cmd_len < 5)
+ if (scmd->cmd_len < 5) {
+ fp = 4;
goto invalid_fld;
+ }
len = cdb[4];
hdr_len = 4;
} else {
- if (scmd->cmd_len < 9)
+ if (scmd->cmd_len < 9) {
+ fp = 8;
goto invalid_fld;
+ }
len = (cdb[7] << 8) + cdb[8];
hdr_len = 8;
}
/* We only support PF=1, SP=0. */
- if ((cdb[1] & 0x11) != 0x10)
+ if ((cdb[1] & 0x11) != 0x10) {
+ fp = 1;
+ bp = (cdb[1] & 0x01) ? 1 : 5;
goto invalid_fld;
+ }
/* Test early for possible overrun. */
if (!scsi_sg_count(scmd) || scsi_sglist(scmd)->length < len)
@@ -3298,8 +3754,11 @@ static unsigned int ata_scsi_mode_select_xlat(struct ata_queued_cmd *qc)
p += hdr_len;
if (len < bd_len)
goto invalid_param_len;
- if (bd_len != 0 && bd_len != 8)
+ if (bd_len != 0 && bd_len != 8) {
+ fp = (six_byte) ? 3 : 6;
+ fp += bd_len + hdr_len;
goto invalid_param;
+ }
len -= bd_len;
p += bd_len;
@@ -3330,18 +3789,29 @@ static unsigned int ata_scsi_mode_select_xlat(struct ata_queued_cmd *qc)
* No mode subpages supported (yet) but asking for _all_
* subpages may be valid
*/
- if (spg && (spg != ALL_SUB_MPAGES))
+ if (spg && (spg != ALL_SUB_MPAGES)) {
+ fp = (p[0] & 0x40) ? 1 : 0;
+ fp += hdr_len + bd_len;
goto invalid_param;
+ }
if (pg_len > len)
goto invalid_param_len;
switch (pg) {
case CACHE_MPAGE:
- if (ata_mselect_caching(qc, p, pg_len) < 0)
+ if (ata_mselect_caching(qc, p, pg_len, &fp) < 0) {
+ fp += hdr_len + bd_len;
goto invalid_param;
+ }
+ break;
+ case CONTROL_MPAGE:
+ if (ata_mselect_control(qc, p, pg_len, &fp) < 0) {
+ fp += hdr_len + bd_len;
+ goto invalid_param;
+ }
break;
-
default: /* invalid page code */
+ fp = bd_len + hdr_len;
goto invalid_param;
}
@@ -3355,18 +3825,16 @@ static unsigned int ata_scsi_mode_select_xlat(struct ata_queued_cmd *qc)
return 0;
invalid_fld:
- /* "Invalid field in CDB" */
- ata_scsi_set_sense(scmd, ILLEGAL_REQUEST, 0x24, 0x0);
+ ata_scsi_set_invalid_field(qc->dev, scmd, fp, bp);
return 1;
invalid_param:
- /* "Invalid field in parameter list" */
- ata_scsi_set_sense(scmd, ILLEGAL_REQUEST, 0x26, 0x0);
+ ata_scsi_set_invalid_parameter(qc->dev, scmd, fp);
return 1;
invalid_param_len:
/* "Parameter list length error" */
- ata_scsi_set_sense(scmd, ILLEGAL_REQUEST, 0x1a, 0x0);
+ ata_scsi_set_sense(qc->dev, scmd, ILLEGAL_REQUEST, 0x1a, 0x0);
return 1;
skip:
@@ -3419,6 +3887,12 @@ static inline ata_xlat_func_t ata_get_xlat_func(struct ata_device *dev, u8 cmd)
return ata_scsi_mode_select_xlat;
break;
+ case ZBC_IN:
+ return ata_scsi_zbc_in_xlat;
+
+ case ZBC_OUT:
+ return ata_scsi_zbc_out_xlat;
+
case START_STOP:
return ata_scsi_start_stop_xlat;
}
@@ -3439,14 +3913,11 @@ static inline void ata_scsi_dump_cdb(struct ata_port *ap,
{
#ifdef ATA_DEBUG
struct scsi_device *scsidev = cmd->device;
- u8 *scsicmd = cmd->cmnd;
- DPRINTK("CDB (%u:%d,%d,%d) %02x %02x %02x %02x %02x %02x %02x %02x %02x\n",
+ DPRINTK("CDB (%u:%d,%d,%d) %9ph\n",
ap->print_id,
scsidev->channel, scsidev->id, scsidev->lun,
- scsicmd[0], scsicmd[1], scsicmd[2], scsicmd[3],
- scsicmd[4], scsicmd[5], scsicmd[6], scsicmd[7],
- scsicmd[8]);
+ cmd->cmnd);
#endif
}
@@ -3570,12 +4041,12 @@ void ata_scsi_simulate(struct ata_device *dev, struct scsi_cmnd *cmd)
switch(scsicmd[0]) {
/* TODO: worth improving? */
case FORMAT_UNIT:
- ata_scsi_invalid_field(cmd);
+ ata_scsi_invalid_field(dev, cmd, 0);
break;
case INQUIRY:
- if (scsicmd[1] & 2) /* is CmdDt set? */
- ata_scsi_invalid_field(cmd);
+ if (scsicmd[1] & 2) /* is CmdDt set? */
+ ata_scsi_invalid_field(dev, cmd, 1);
else if ((scsicmd[1] & 1) == 0) /* is EVPD clear? */
ata_scsi_rbuf_fill(&args, ata_scsiop_inq_std);
else switch (scsicmd[2]) {
@@ -3600,8 +4071,14 @@ void ata_scsi_simulate(struct ata_device *dev, struct scsi_cmnd *cmd)
case 0xb2:
ata_scsi_rbuf_fill(&args, ata_scsiop_inq_b2);
break;
+ case 0xb6:
+ if (dev->flags & ATA_DFLAG_ZAC) {
+ ata_scsi_rbuf_fill(&args, ata_scsiop_inq_b6);
+ break;
+ }
+ /* Fallthrough */
default:
- ata_scsi_invalid_field(cmd);
+ ata_scsi_invalid_field(dev, cmd, 2);
break;
}
break;
@@ -3619,7 +4096,7 @@ void ata_scsi_simulate(struct ata_device *dev, struct scsi_cmnd *cmd)
if ((scsicmd[1] & 0x1f) == SAI_READ_CAPACITY_16)
ata_scsi_rbuf_fill(&args, ata_scsiop_read_cap);
else
- ata_scsi_invalid_field(cmd);
+ ata_scsi_invalid_field(dev, cmd, 1);
break;
case REPORT_LUNS:
@@ -3627,7 +4104,7 @@ void ata_scsi_simulate(struct ata_device *dev, struct scsi_cmnd *cmd)
break;
case REQUEST_SENSE:
- ata_scsi_set_sense(cmd, 0, 0, 0);
+ ata_scsi_set_sense(dev, cmd, 0, 0, 0);
cmd->result = (DRIVER_SENSE << 24);
cmd->scsi_done(cmd);
break;
@@ -3651,12 +4128,12 @@ void ata_scsi_simulate(struct ata_device *dev, struct scsi_cmnd *cmd)
if ((tmp8 == 0x4) && (!scsicmd[3]) && (!scsicmd[4]))
ata_scsi_rbuf_fill(&args, ata_scsiop_noop);
else
- ata_scsi_invalid_field(cmd);
+ ata_scsi_invalid_field(dev, cmd, 1);
break;
/* all other commands */
default:
- ata_scsi_set_sense(cmd, ILLEGAL_REQUEST, 0x20, 0x0);
+ ata_scsi_set_sense(dev, cmd, ILLEGAL_REQUEST, 0x20, 0x0);
/* "Invalid command operation code" */
cmd->scsi_done(cmd);
break;
diff --git a/drivers/ata/libata-trace.c b/drivers/ata/libata-trace.c
index fd30b8c10cf5..f8c550df0615 100644
--- a/drivers/ata/libata-trace.c
+++ b/drivers/ata/libata-trace.c
@@ -149,3 +149,75 @@ libata_trace_parse_qc_flags(struct trace_seq *p, unsigned int qc_flags)
return ret;
}
+
+const char *
+libata_trace_parse_subcmd(struct trace_seq *p, unsigned char cmd,
+ unsigned char feature, unsigned char hob_nsect)
+{
+ const char *ret = trace_seq_buffer_ptr(p);
+
+ switch (cmd) {
+ case ATA_CMD_FPDMA_RECV:
+ switch (hob_nsect & 0x5f) {
+ case ATA_SUBCMD_FPDMA_RECV_RD_LOG_DMA_EXT:
+ trace_seq_printf(p, " READ_LOG_DMA_EXT");
+ break;
+ case ATA_SUBCMD_FPDMA_RECV_ZAC_MGMT_IN:
+ trace_seq_printf(p, " ZAC_MGMT_IN");
+ break;
+ }
+ break;
+ case ATA_CMD_FPDMA_SEND:
+ switch (hob_nsect & 0x5f) {
+ case ATA_SUBCMD_FPDMA_SEND_WR_LOG_DMA_EXT:
+ trace_seq_printf(p, " WRITE_LOG_DMA_EXT");
+ break;
+ case ATA_SUBCMD_FPDMA_SEND_DSM:
+ trace_seq_printf(p, " DATASET_MANAGEMENT");
+ break;
+ }
+ break;
+ case ATA_CMD_NCQ_NON_DATA:
+ switch (feature) {
+ case ATA_SUBCMD_NCQ_NON_DATA_ABORT_QUEUE:
+ trace_seq_printf(p, " ABORT_QUEUE");
+ break;
+ case ATA_SUBCMD_NCQ_NON_DATA_SET_FEATURES:
+ trace_seq_printf(p, " SET_FEATURES");
+ break;
+ case ATA_SUBCMD_NCQ_NON_DATA_ZERO_EXT:
+ trace_seq_printf(p, " ZERO_EXT");
+ break;
+ case ATA_SUBCMD_NCQ_NON_DATA_ZAC_MGMT_OUT:
+ trace_seq_printf(p, " ZAC_MGMT_OUT");
+ break;
+ }
+ break;
+ case ATA_CMD_ZAC_MGMT_IN:
+ switch (feature) {
+ case ATA_SUBCMD_ZAC_MGMT_IN_REPORT_ZONES:
+ trace_seq_printf(p, " REPORT_ZONES");
+ break;
+ }
+ break;
+ case ATA_CMD_ZAC_MGMT_OUT:
+ switch (feature) {
+ case ATA_SUBCMD_ZAC_MGMT_OUT_CLOSE_ZONE:
+ trace_seq_printf(p, " CLOSE_ZONE");
+ break;
+ case ATA_SUBCMD_ZAC_MGMT_OUT_FINISH_ZONE:
+ trace_seq_printf(p, " FINISH_ZONE");
+ break;
+ case ATA_SUBCMD_ZAC_MGMT_OUT_OPEN_ZONE:
+ trace_seq_printf(p, " OPEN_ZONE");
+ break;
+ case ATA_SUBCMD_ZAC_MGMT_OUT_RESET_WRITE_POINTER:
+ trace_seq_printf(p, " RESET_WRITE_POINTER");
+ break;
+ }
+ break;
+ }
+ trace_seq_putc(p, 0);
+
+ return ret;
+}
diff --git a/drivers/ata/libata.h b/drivers/ata/libata.h
index f840ca18a7c0..3b301a48007c 100644
--- a/drivers/ata/libata.h
+++ b/drivers/ata/libata.h
@@ -67,7 +67,8 @@ extern struct ata_queued_cmd *ata_qc_new_init(struct ata_device *dev, int tag);
extern int ata_build_rw_tf(struct ata_taskfile *tf, struct ata_device *dev,
u64 block, u32 n_block, unsigned int tf_flags,
unsigned int tag);
-extern u64 ata_tf_read_block(struct ata_taskfile *tf, struct ata_device *dev);
+extern u64 ata_tf_read_block(const struct ata_taskfile *tf,
+ struct ata_device *dev);
extern unsigned ata_exec_internal(struct ata_device *dev,
struct ata_taskfile *tf, const u8 *cdb,
int dma_dir, void *buf, unsigned int buflen,
@@ -137,6 +138,11 @@ extern int ata_scsi_add_hosts(struct ata_host *host,
struct scsi_host_template *sht);
extern void ata_scsi_scan_host(struct ata_port *ap, int sync);
extern int ata_scsi_offline_dev(struct ata_device *dev);
+extern void ata_scsi_set_sense(struct ata_device *dev,
+ struct scsi_cmnd *cmd, u8 sk, u8 asc, u8 ascq);
+extern void ata_scsi_set_sense_information(struct ata_device *dev,
+ struct scsi_cmnd *cmd,
+ const struct ata_taskfile *tf);
extern void ata_scsi_media_change_notify(struct ata_device *dev);
extern void ata_scsi_hotplug(struct work_struct *work);
extern void ata_schedule_scsi_eh(struct Scsi_Host *shost);
diff --git a/drivers/ata/pata_icside.c b/drivers/ata/pata_icside.c
index d7c732042a4f..188f2f2eb21f 100644
--- a/drivers/ata/pata_icside.c
+++ b/drivers/ata/pata_icside.c
@@ -294,7 +294,7 @@ static int icside_dma_init(struct pata_icside_info *info)
static struct scsi_host_template pata_icside_sht = {
ATA_BASE_SHT(DRV_NAME),
- .sg_tablesize = SCSI_MAX_SG_CHAIN_SEGMENTS,
+ .sg_tablesize = SG_MAX_SEGMENTS,
.dma_boundary = IOMD_DMA_BOUNDARY,
};
diff --git a/drivers/ata/sata_dwc_460ex.c b/drivers/ata/sata_dwc_460ex.c
index 902034991517..00c2af1d211b 100644
--- a/drivers/ata/sata_dwc_460ex.c
+++ b/drivers/ata/sata_dwc_460ex.c
@@ -30,10 +30,12 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/device.h>
+#include <linux/dmaengine.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
+#include <linux/phy/phy.h>
#include <linux/libata.h>
#include <linux/slab.h>
@@ -42,10 +44,6 @@
#include <scsi/scsi_host.h>
#include <scsi/scsi_cmnd.h>
-/* Supported DMA engine drivers */
-#include <linux/platform_data/dma-dw.h>
-#include <linux/dma/dw.h>
-
/* These two are defined in "libata.h" */
#undef DRV_NAME
#undef DRV_VERSION
@@ -53,19 +51,14 @@
#define DRV_NAME "sata-dwc"
#define DRV_VERSION "1.3"
-#ifndef out_le32
-#define out_le32(a, v) __raw_writel(__cpu_to_le32(v), (void __iomem *)(a))
-#endif
-
-#ifndef in_le32
-#define in_le32(a) __le32_to_cpu(__raw_readl((void __iomem *)(a)))
-#endif
+#define sata_dwc_writel(a, v) writel_relaxed(v, a)
+#define sata_dwc_readl(a) readl_relaxed(a)
#ifndef NO_IRQ
#define NO_IRQ 0
#endif
-#define AHB_DMA_BRST_DFLT 64 /* 16 data items burst length*/
+#define AHB_DMA_BRST_DFLT 64 /* 16 data items burst length */
enum {
SATA_DWC_MAX_PORTS = 1,
@@ -102,7 +95,7 @@ struct sata_dwc_regs {
u32 versionr; /* Version Register */
u32 idr; /* ID Register */
u32 unimpl[192]; /* Unimplemented */
- u32 dmadr[256]; /* FIFO Locations in DMA Mode */
+ u32 dmadr[256]; /* FIFO Locations in DMA Mode */
};
enum {
@@ -146,9 +139,14 @@ struct sata_dwc_device {
struct device *dev; /* generic device struct */
struct ata_probe_ent *pe; /* ptr to probe-ent */
struct ata_host *host;
- u8 __iomem *reg_base;
- struct sata_dwc_regs *sata_dwc_regs; /* DW Synopsys SATA specific */
+ struct sata_dwc_regs __iomem *sata_dwc_regs; /* DW SATA specific */
+ u32 sactive_issued;
+ u32 sactive_queued;
+ struct phy *phy;
+ phys_addr_t dmadr;
+#ifdef CONFIG_SATA_DWC_OLD_DMA
struct dw_dma_chip *dma;
+#endif
};
#define SATA_DWC_QCMD_MAX 32
@@ -159,25 +157,19 @@ struct sata_dwc_device_port {
int dma_pending[SATA_DWC_QCMD_MAX];
/* DMA info */
- struct dw_dma_slave *dws;
struct dma_chan *chan;
struct dma_async_tx_descriptor *desc[SATA_DWC_QCMD_MAX];
u32 dma_interrupt_count;
};
/*
- * Commonly used DWC SATA driver Macros
+ * Commonly used DWC SATA driver macros
*/
-#define HSDEV_FROM_HOST(host) ((struct sata_dwc_device *)\
- (host)->private_data)
-#define HSDEV_FROM_AP(ap) ((struct sata_dwc_device *)\
- (ap)->host->private_data)
-#define HSDEVP_FROM_AP(ap) ((struct sata_dwc_device_port *)\
- (ap)->private_data)
-#define HSDEV_FROM_QC(qc) ((struct sata_dwc_device *)\
- (qc)->ap->host->private_data)
-#define HSDEV_FROM_HSDEVP(p) ((struct sata_dwc_device *)\
- (hsdevp)->hsdev)
+#define HSDEV_FROM_HOST(host) ((struct sata_dwc_device *)(host)->private_data)
+#define HSDEV_FROM_AP(ap) ((struct sata_dwc_device *)(ap)->host->private_data)
+#define HSDEVP_FROM_AP(ap) ((struct sata_dwc_device_port *)(ap)->private_data)
+#define HSDEV_FROM_QC(qc) ((struct sata_dwc_device *)(qc)->ap->host->private_data)
+#define HSDEV_FROM_HSDEVP(p) ((struct sata_dwc_device *)(p)->hsdev)
enum {
SATA_DWC_CMD_ISSUED_NOT = 0,
@@ -190,21 +182,6 @@ enum {
SATA_DWC_DMA_PENDING_RX = 2,
};
-struct sata_dwc_host_priv {
- void __iomem *scr_addr_sstatus;
- u32 sata_dwc_sactive_issued ;
- u32 sata_dwc_sactive_queued ;
-};
-
-static struct sata_dwc_host_priv host_pvt;
-
-static struct dw_dma_slave sata_dwc_dma_dws = {
- .src_id = 0,
- .dst_id = 0,
- .src_master = 0,
- .dst_master = 1,
-};
-
/*
* Prototypes
*/
@@ -215,6 +192,93 @@ static void sata_dwc_dma_xfer_complete(struct ata_port *ap, u32 check_status);
static void sata_dwc_port_stop(struct ata_port *ap);
static void sata_dwc_clear_dmacr(struct sata_dwc_device_port *hsdevp, u8 tag);
+#ifdef CONFIG_SATA_DWC_OLD_DMA
+
+#include <linux/platform_data/dma-dw.h>
+#include <linux/dma/dw.h>
+
+static struct dw_dma_slave sata_dwc_dma_dws = {
+ .src_id = 0,
+ .dst_id = 0,
+ .m_master = 1,
+ .p_master = 0,
+};
+
+static bool sata_dwc_dma_filter(struct dma_chan *chan, void *param)
+{
+ struct dw_dma_slave *dws = &sata_dwc_dma_dws;
+
+ if (dws->dma_dev != chan->device->dev)
+ return false;
+
+ chan->private = dws;
+ return true;
+}
+
+static int sata_dwc_dma_get_channel_old(struct sata_dwc_device_port *hsdevp)
+{
+ struct sata_dwc_device *hsdev = hsdevp->hsdev;
+ struct dw_dma_slave *dws = &sata_dwc_dma_dws;
+ dma_cap_mask_t mask;
+
+ dws->dma_dev = hsdev->dev;
+
+ dma_cap_zero(mask);
+ dma_cap_set(DMA_SLAVE, mask);
+
+ /* Acquire DMA channel */
+ hsdevp->chan = dma_request_channel(mask, sata_dwc_dma_filter, hsdevp);
+ if (!hsdevp->chan) {
+ dev_err(hsdev->dev, "%s: dma channel unavailable\n",
+ __func__);
+ return -EAGAIN;
+ }
+
+ return 0;
+}
+
+static int sata_dwc_dma_init_old(struct platform_device *pdev,
+ struct sata_dwc_device *hsdev)
+{
+ struct device_node *np = pdev->dev.of_node;
+ struct resource *res;
+
+ hsdev->dma = devm_kzalloc(&pdev->dev, sizeof(*hsdev->dma), GFP_KERNEL);
+ if (!hsdev->dma)
+ return -ENOMEM;
+
+ hsdev->dma->dev = &pdev->dev;
+
+ /* Get SATA DMA interrupt number */
+ hsdev->dma->irq = irq_of_parse_and_map(np, 1);
+ if (hsdev->dma->irq == NO_IRQ) {
+ dev_err(&pdev->dev, "no SATA DMA irq\n");
+ return -ENODEV;
+ }
+
+ /* Get physical SATA DMA register base address */
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 1);
+ hsdev->dma->regs = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(hsdev->dma->regs)) {
+ dev_err(&pdev->dev,
+ "ioremap failed for AHBDMA register address\n");
+ return PTR_ERR(hsdev->dma->regs);
+ }
+
+ /* Initialize AHB DMAC */
+ return dw_dma_probe(hsdev->dma);
+}
+
+static void sata_dwc_dma_exit_old(struct sata_dwc_device *hsdev)
+{
+ if (!hsdev->dma)
+ return;
+
+ dw_dma_remove(hsdev->dma);
+}
+
+#endif
+
static const char *get_prot_descript(u8 protocol)
{
switch ((enum ata_tf_protocols)protocol) {
@@ -305,21 +369,20 @@ static struct dma_async_tx_descriptor *dma_dwc_xfer_setup(struct ata_queued_cmd
struct ata_port *ap = qc->ap;
struct sata_dwc_device_port *hsdevp = HSDEVP_FROM_AP(ap);
struct sata_dwc_device *hsdev = HSDEV_FROM_AP(ap);
- dma_addr_t addr = (dma_addr_t)&hsdev->sata_dwc_regs->dmadr;
struct dma_slave_config sconf;
struct dma_async_tx_descriptor *desc;
if (qc->dma_dir == DMA_DEV_TO_MEM) {
- sconf.src_addr = addr;
- sconf.device_fc = true;
+ sconf.src_addr = hsdev->dmadr;
+ sconf.device_fc = false;
} else { /* DMA_MEM_TO_DEV */
- sconf.dst_addr = addr;
+ sconf.dst_addr = hsdev->dmadr;
sconf.device_fc = false;
}
sconf.direction = qc->dma_dir;
- sconf.src_maxburst = AHB_DMA_BRST_DFLT;
- sconf.dst_maxburst = AHB_DMA_BRST_DFLT;
+ sconf.src_maxburst = AHB_DMA_BRST_DFLT / 4; /* in items */
+ sconf.dst_maxburst = AHB_DMA_BRST_DFLT / 4; /* in items */
sconf.src_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
sconf.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
@@ -336,8 +399,8 @@ static struct dma_async_tx_descriptor *dma_dwc_xfer_setup(struct ata_queued_cmd
desc->callback = dma_dwc_xfer_done;
desc->callback_param = hsdev;
- dev_dbg(hsdev->dev, "%s sg: 0x%p, count: %d addr: %pad\n",
- __func__, qc->sg, qc->n_elem, &addr);
+ dev_dbg(hsdev->dev, "%s sg: 0x%p, count: %d addr: %pa\n", __func__,
+ qc->sg, qc->n_elem, &hsdev->dmadr);
return desc;
}
@@ -350,48 +413,38 @@ static int sata_dwc_scr_read(struct ata_link *link, unsigned int scr, u32 *val)
return -EINVAL;
}
- *val = in_le32(link->ap->ioaddr.scr_addr + (scr * 4));
- dev_dbg(link->ap->dev, "%s: id=%d reg=%d val=val=0x%08x\n",
- __func__, link->ap->print_id, scr, *val);
+ *val = sata_dwc_readl(link->ap->ioaddr.scr_addr + (scr * 4));
+ dev_dbg(link->ap->dev, "%s: id=%d reg=%d val=0x%08x\n", __func__,
+ link->ap->print_id, scr, *val);
return 0;
}
static int sata_dwc_scr_write(struct ata_link *link, unsigned int scr, u32 val)
{
- dev_dbg(link->ap->dev, "%s: id=%d reg=%d val=val=0x%08x\n",
- __func__, link->ap->print_id, scr, val);
+ dev_dbg(link->ap->dev, "%s: id=%d reg=%d val=0x%08x\n", __func__,
+ link->ap->print_id, scr, val);
if (scr > SCR_NOTIFICATION) {
dev_err(link->ap->dev, "%s: Incorrect SCR offset 0x%02x\n",
__func__, scr);
return -EINVAL;
}
- out_le32(link->ap->ioaddr.scr_addr + (scr * 4), val);
+ sata_dwc_writel(link->ap->ioaddr.scr_addr + (scr * 4), val);
return 0;
}
-static u32 core_scr_read(unsigned int scr)
-{
- return in_le32(host_pvt.scr_addr_sstatus + (scr * 4));
-}
-
-static void core_scr_write(unsigned int scr, u32 val)
-{
- out_le32(host_pvt.scr_addr_sstatus + (scr * 4), val);
-}
-
-static void clear_serror(void)
+static void clear_serror(struct ata_port *ap)
{
u32 val;
- val = core_scr_read(SCR_ERROR);
- core_scr_write(SCR_ERROR, val);
+ sata_dwc_scr_read(&ap->link, SCR_ERROR, &val);
+ sata_dwc_scr_write(&ap->link, SCR_ERROR, val);
}
static void clear_interrupt_bit(struct sata_dwc_device *hsdev, u32 bit)
{
- out_le32(&hsdev->sata_dwc_regs->intpr,
- in_le32(&hsdev->sata_dwc_regs->intpr));
+ sata_dwc_writel(&hsdev->sata_dwc_regs->intpr,
+ sata_dwc_readl(&hsdev->sata_dwc_regs->intpr));
}
static u32 qcmd_tag_to_mask(u8 tag)
@@ -412,7 +465,7 @@ static void sata_dwc_error_intr(struct ata_port *ap,
ata_ehi_clear_desc(ehi);
- serror = core_scr_read(SCR_ERROR);
+ sata_dwc_scr_read(&ap->link, SCR_ERROR, &serror);
status = ap->ops->sff_check_status(ap);
tag = ap->link.active_tag;
@@ -423,7 +476,7 @@ static void sata_dwc_error_intr(struct ata_port *ap,
hsdevp->dma_pending[tag], hsdevp->cmd_issued[tag]);
/* Clear error register and interrupt bit */
- clear_serror();
+ clear_serror(ap);
clear_interrupt_bit(hsdev, SATA_DWC_INTPR_ERR);
/* This is the only error happening now. TODO check for exact error */
@@ -462,12 +515,12 @@ static irqreturn_t sata_dwc_isr(int irq, void *dev_instance)
int handled, num_processed, port = 0;
uint intpr, sactive, sactive2, tag_mask;
struct sata_dwc_device_port *hsdevp;
- host_pvt.sata_dwc_sactive_issued = 0;
+ hsdev->sactive_issued = 0;
spin_lock_irqsave(&host->lock, flags);
/* Read the interrupt register */
- intpr = in_le32(&hsdev->sata_dwc_regs->intpr);
+ intpr = sata_dwc_readl(&hsdev->sata_dwc_regs->intpr);
ap = host->ports[port];
hsdevp = HSDEVP_FROM_AP(ap);
@@ -486,12 +539,12 @@ static irqreturn_t sata_dwc_isr(int irq, void *dev_instance)
if (intpr & SATA_DWC_INTPR_NEWFP) {
clear_interrupt_bit(hsdev, SATA_DWC_INTPR_NEWFP);
- tag = (u8)(in_le32(&hsdev->sata_dwc_regs->fptagr));
+ tag = (u8)(sata_dwc_readl(&hsdev->sata_dwc_regs->fptagr));
dev_dbg(ap->dev, "%s: NEWFP tag=%d\n", __func__, tag);
if (hsdevp->cmd_issued[tag] != SATA_DWC_CMD_ISSUED_PEND)
dev_warn(ap->dev, "CMD tag=%d not pending?\n", tag);
- host_pvt.sata_dwc_sactive_issued |= qcmd_tag_to_mask(tag);
+ hsdev->sactive_issued |= qcmd_tag_to_mask(tag);
qc = ata_qc_from_tag(ap, tag);
/*
@@ -505,11 +558,11 @@ static irqreturn_t sata_dwc_isr(int irq, void *dev_instance)
handled = 1;
goto DONE;
}
- sactive = core_scr_read(SCR_ACTIVE);
- tag_mask = (host_pvt.sata_dwc_sactive_issued | sactive) ^ sactive;
+ sata_dwc_scr_read(&ap->link, SCR_ACTIVE, &sactive);
+ tag_mask = (hsdev->sactive_issued | sactive) ^ sactive;
/* If no sactive issued and tag_mask is zero then this is not NCQ */
- if (host_pvt.sata_dwc_sactive_issued == 0 && tag_mask == 0) {
+ if (hsdev->sactive_issued == 0 && tag_mask == 0) {
if (ap->link.active_tag == ATA_TAG_POISON)
tag = 0;
else
@@ -579,22 +632,19 @@ DRVSTILLBUSY:
*/
/* process completed commands */
- sactive = core_scr_read(SCR_ACTIVE);
- tag_mask = (host_pvt.sata_dwc_sactive_issued | sactive) ^ sactive;
+ sata_dwc_scr_read(&ap->link, SCR_ACTIVE, &sactive);
+ tag_mask = (hsdev->sactive_issued | sactive) ^ sactive;
- if (sactive != 0 || (host_pvt.sata_dwc_sactive_issued) > 1 || \
- tag_mask > 1) {
+ if (sactive != 0 || hsdev->sactive_issued > 1 || tag_mask > 1) {
dev_dbg(ap->dev,
"%s NCQ:sactive=0x%08x sactive_issued=0x%08x tag_mask=0x%08x\n",
- __func__, sactive, host_pvt.sata_dwc_sactive_issued,
- tag_mask);
+ __func__, sactive, hsdev->sactive_issued, tag_mask);
}
- if ((tag_mask | (host_pvt.sata_dwc_sactive_issued)) != \
- (host_pvt.sata_dwc_sactive_issued)) {
+ if ((tag_mask | hsdev->sactive_issued) != hsdev->sactive_issued) {
dev_warn(ap->dev,
- "Bad tag mask? sactive=0x%08x (host_pvt.sata_dwc_sactive_issued)=0x%08x tag_mask=0x%08x\n",
- sactive, host_pvt.sata_dwc_sactive_issued, tag_mask);
+ "Bad tag mask? sactive=0x%08x sactive_issued=0x%08x tag_mask=0x%08x\n",
+ sactive, hsdev->sactive_issued, tag_mask);
}
/* read just to clear ... not bad if currently still busy */
@@ -656,7 +706,7 @@ STILLBUSY:
* we were processing --we read status as part of processing a completed
* command).
*/
- sactive2 = core_scr_read(SCR_ACTIVE);
+ sata_dwc_scr_read(&ap->link, SCR_ACTIVE, &sactive2);
if (sactive2 != sactive) {
dev_dbg(ap->dev,
"More completed - sactive=0x%x sactive2=0x%x\n",
@@ -672,15 +722,14 @@ DONE:
static void sata_dwc_clear_dmacr(struct sata_dwc_device_port *hsdevp, u8 tag)
{
struct sata_dwc_device *hsdev = HSDEV_FROM_HSDEVP(hsdevp);
+ u32 dmacr = sata_dwc_readl(&hsdev->sata_dwc_regs->dmacr);
if (hsdevp->dma_pending[tag] == SATA_DWC_DMA_PENDING_RX) {
- out_le32(&(hsdev->sata_dwc_regs->dmacr),
- SATA_DWC_DMACR_RX_CLEAR(
- in_le32(&(hsdev->sata_dwc_regs->dmacr))));
+ dmacr = SATA_DWC_DMACR_RX_CLEAR(dmacr);
+ sata_dwc_writel(&hsdev->sata_dwc_regs->dmacr, dmacr);
} else if (hsdevp->dma_pending[tag] == SATA_DWC_DMA_PENDING_TX) {
- out_le32(&(hsdev->sata_dwc_regs->dmacr),
- SATA_DWC_DMACR_TX_CLEAR(
- in_le32(&(hsdev->sata_dwc_regs->dmacr))));
+ dmacr = SATA_DWC_DMACR_TX_CLEAR(dmacr);
+ sata_dwc_writel(&hsdev->sata_dwc_regs->dmacr, dmacr);
} else {
/*
* This should not happen, it indicates the driver is out of
@@ -688,10 +737,9 @@ static void sata_dwc_clear_dmacr(struct sata_dwc_device_port *hsdevp, u8 tag)
*/
dev_err(hsdev->dev,
"%s DMA protocol RX and TX DMA not pending tag=0x%02x pending=%d dmacr: 0x%08x\n",
- __func__, tag, hsdevp->dma_pending[tag],
- in_le32(&hsdev->sata_dwc_regs->dmacr));
- out_le32(&(hsdev->sata_dwc_regs->dmacr),
- SATA_DWC_DMACR_TXRXCH_CLEAR);
+ __func__, tag, hsdevp->dma_pending[tag], dmacr);
+ sata_dwc_writel(&hsdev->sata_dwc_regs->dmacr,
+ SATA_DWC_DMACR_TXRXCH_CLEAR);
}
}
@@ -716,7 +764,7 @@ static void sata_dwc_dma_xfer_complete(struct ata_port *ap, u32 check_status)
__func__, qc->tag, qc->tf.command,
get_dma_dir_descript(qc->dma_dir),
get_prot_descript(qc->tf.protocol),
- in_le32(&(hsdev->sata_dwc_regs->dmacr)));
+ sata_dwc_readl(&hsdev->sata_dwc_regs->dmacr));
}
#endif
@@ -725,7 +773,7 @@ static void sata_dwc_dma_xfer_complete(struct ata_port *ap, u32 check_status)
dev_err(ap->dev,
"%s DMA protocol RX and TX DMA not pending dmacr: 0x%08x\n",
__func__,
- in_le32(&(hsdev->sata_dwc_regs->dmacr)));
+ sata_dwc_readl(&hsdev->sata_dwc_regs->dmacr));
}
hsdevp->dma_pending[tag] = SATA_DWC_DMA_PENDING_NONE;
@@ -742,8 +790,9 @@ static int sata_dwc_qc_complete(struct ata_port *ap, struct ata_queued_cmd *qc,
u8 status = 0;
u32 mask = 0x0;
u8 tag = qc->tag;
+ struct sata_dwc_device *hsdev = HSDEV_FROM_AP(ap);
struct sata_dwc_device_port *hsdevp = HSDEVP_FROM_AP(ap);
- host_pvt.sata_dwc_sactive_queued = 0;
+ hsdev->sactive_queued = 0;
dev_dbg(ap->dev, "%s checkstatus? %x\n", __func__, check_status);
if (hsdevp->dma_pending[tag] == SATA_DWC_DMA_PENDING_TX)
@@ -756,10 +805,8 @@ static int sata_dwc_qc_complete(struct ata_port *ap, struct ata_queued_cmd *qc,
/* clear active bit */
mask = (~(qcmd_tag_to_mask(tag)));
- host_pvt.sata_dwc_sactive_queued = (host_pvt.sata_dwc_sactive_queued) \
- & mask;
- host_pvt.sata_dwc_sactive_issued = (host_pvt.sata_dwc_sactive_issued) \
- & mask;
+ hsdev->sactive_queued = hsdev->sactive_queued & mask;
+ hsdev->sactive_issued = hsdev->sactive_issued & mask;
ata_qc_complete(qc);
return 0;
}
@@ -767,54 +814,62 @@ static int sata_dwc_qc_complete(struct ata_port *ap, struct ata_queued_cmd *qc,
static void sata_dwc_enable_interrupts(struct sata_dwc_device *hsdev)
{
/* Enable selective interrupts by setting the interrupt maskregister*/
- out_le32(&hsdev->sata_dwc_regs->intmr,
- SATA_DWC_INTMR_ERRM |
- SATA_DWC_INTMR_NEWFPM |
- SATA_DWC_INTMR_PMABRTM |
- SATA_DWC_INTMR_DMATM);
+ sata_dwc_writel(&hsdev->sata_dwc_regs->intmr,
+ SATA_DWC_INTMR_ERRM |
+ SATA_DWC_INTMR_NEWFPM |
+ SATA_DWC_INTMR_PMABRTM |
+ SATA_DWC_INTMR_DMATM);
/*
* Unmask the error bits that should trigger an error interrupt by
* setting the error mask register.
*/
- out_le32(&hsdev->sata_dwc_regs->errmr, SATA_DWC_SERROR_ERR_BITS);
+ sata_dwc_writel(&hsdev->sata_dwc_regs->errmr, SATA_DWC_SERROR_ERR_BITS);
dev_dbg(hsdev->dev, "%s: INTMR = 0x%08x, ERRMR = 0x%08x\n",
- __func__, in_le32(&hsdev->sata_dwc_regs->intmr),
- in_le32(&hsdev->sata_dwc_regs->errmr));
+ __func__, sata_dwc_readl(&hsdev->sata_dwc_regs->intmr),
+ sata_dwc_readl(&hsdev->sata_dwc_regs->errmr));
}
-static bool sata_dwc_dma_filter(struct dma_chan *chan, void *param)
+static void sata_dwc_setup_port(struct ata_ioports *port, void __iomem *base)
{
- struct sata_dwc_device_port *hsdevp = param;
- struct dw_dma_slave *dws = hsdevp->dws;
+ port->cmd_addr = base + 0x00;
+ port->data_addr = base + 0x00;
- if (dws->dma_dev != chan->device->dev)
- return false;
+ port->error_addr = base + 0x04;
+ port->feature_addr = base + 0x04;
- chan->private = dws;
- return true;
-}
+ port->nsect_addr = base + 0x08;
-static void sata_dwc_setup_port(struct ata_ioports *port, unsigned long base)
-{
- port->cmd_addr = (void __iomem *)base + 0x00;
- port->data_addr = (void __iomem *)base + 0x00;
+ port->lbal_addr = base + 0x0c;
+ port->lbam_addr = base + 0x10;
+ port->lbah_addr = base + 0x14;
- port->error_addr = (void __iomem *)base + 0x04;
- port->feature_addr = (void __iomem *)base + 0x04;
+ port->device_addr = base + 0x18;
+ port->command_addr = base + 0x1c;
+ port->status_addr = base + 0x1c;
- port->nsect_addr = (void __iomem *)base + 0x08;
+ port->altstatus_addr = base + 0x20;
+ port->ctl_addr = base + 0x20;
+}
+
+static int sata_dwc_dma_get_channel(struct sata_dwc_device_port *hsdevp)
+{
+ struct sata_dwc_device *hsdev = hsdevp->hsdev;
+ struct device *dev = hsdev->dev;
- port->lbal_addr = (void __iomem *)base + 0x0c;
- port->lbam_addr = (void __iomem *)base + 0x10;
- port->lbah_addr = (void __iomem *)base + 0x14;
+#ifdef CONFIG_SATA_DWC_OLD_DMA
+ if (!of_find_property(dev->of_node, "dmas", NULL))
+ return sata_dwc_dma_get_channel_old(hsdevp);
+#endif
- port->device_addr = (void __iomem *)base + 0x18;
- port->command_addr = (void __iomem *)base + 0x1c;
- port->status_addr = (void __iomem *)base + 0x1c;
+ hsdevp->chan = dma_request_chan(dev, "sata-dma");
+ if (IS_ERR(hsdevp->chan)) {
+ dev_err(dev, "failed to allocate dma channel: %ld\n",
+ PTR_ERR(hsdevp->chan));
+ return PTR_ERR(hsdevp->chan);
+ }
- port->altstatus_addr = (void __iomem *)base + 0x20;
- port->ctl_addr = (void __iomem *)base + 0x20;
+ return 0;
}
/*
@@ -829,7 +884,6 @@ static int sata_dwc_port_start(struct ata_port *ap)
struct sata_dwc_device *hsdev;
struct sata_dwc_device_port *hsdevp = NULL;
struct device *pdev;
- dma_cap_mask_t mask;
int i;
hsdev = HSDEV_FROM_AP(ap);
@@ -853,20 +907,13 @@ static int sata_dwc_port_start(struct ata_port *ap)
}
hsdevp->hsdev = hsdev;
- hsdevp->dws = &sata_dwc_dma_dws;
- hsdevp->dws->dma_dev = hsdev->dev;
-
- dma_cap_zero(mask);
- dma_cap_set(DMA_SLAVE, mask);
+ err = sata_dwc_dma_get_channel(hsdevp);
+ if (err)
+ goto CLEANUP_ALLOC;
- /* Acquire DMA channel */
- hsdevp->chan = dma_request_channel(mask, sata_dwc_dma_filter, hsdevp);
- if (!hsdevp->chan) {
- dev_err(hsdev->dev, "%s: dma channel unavailable\n",
- __func__);
- err = -EAGAIN;
+ err = phy_power_on(hsdev->phy);
+ if (err)
goto CLEANUP_ALLOC;
- }
for (i = 0; i < SATA_DWC_QCMD_MAX; i++)
hsdevp->cmd_issued[i] = SATA_DWC_CMD_ISSUED_NOT;
@@ -877,18 +924,18 @@ static int sata_dwc_port_start(struct ata_port *ap)
if (ap->port_no == 0) {
dev_dbg(ap->dev, "%s: clearing TXCHEN, RXCHEN in DMAC\n",
__func__);
- out_le32(&hsdev->sata_dwc_regs->dmacr,
- SATA_DWC_DMACR_TXRXCH_CLEAR);
+ sata_dwc_writel(&hsdev->sata_dwc_regs->dmacr,
+ SATA_DWC_DMACR_TXRXCH_CLEAR);
dev_dbg(ap->dev, "%s: setting burst size in DBTSR\n",
__func__);
- out_le32(&hsdev->sata_dwc_regs->dbtsr,
- (SATA_DWC_DBTSR_MWR(AHB_DMA_BRST_DFLT) |
- SATA_DWC_DBTSR_MRD(AHB_DMA_BRST_DFLT)));
+ sata_dwc_writel(&hsdev->sata_dwc_regs->dbtsr,
+ (SATA_DWC_DBTSR_MWR(AHB_DMA_BRST_DFLT) |
+ SATA_DWC_DBTSR_MRD(AHB_DMA_BRST_DFLT)));
}
/* Clear any error bits before libata starts issuing commands */
- clear_serror();
+ clear_serror(ap);
ap->private_data = hsdevp;
dev_dbg(ap->dev, "%s: done\n", __func__);
return 0;
@@ -903,11 +950,13 @@ CLEANUP:
static void sata_dwc_port_stop(struct ata_port *ap)
{
struct sata_dwc_device_port *hsdevp = HSDEVP_FROM_AP(ap);
+ struct sata_dwc_device *hsdev = HSDEV_FROM_AP(ap);
dev_dbg(ap->dev, "%s: ap->id = %d\n", __func__, ap->print_id);
- dmaengine_terminate_all(hsdevp->chan);
+ dmaengine_terminate_sync(hsdevp->chan);
dma_release_channel(hsdevp->chan);
+ phy_power_off(hsdev->phy);
kfree(hsdevp);
ap->private_data = NULL;
@@ -924,22 +973,20 @@ static void sata_dwc_exec_command_by_tag(struct ata_port *ap,
struct ata_taskfile *tf,
u8 tag, u32 cmd_issued)
{
- unsigned long flags;
struct sata_dwc_device_port *hsdevp = HSDEVP_FROM_AP(ap);
dev_dbg(ap->dev, "%s cmd(0x%02x): %s tag=%d\n", __func__, tf->command,
ata_get_cmd_descript(tf->command), tag);
- spin_lock_irqsave(&ap->host->lock, flags);
hsdevp->cmd_issued[tag] = cmd_issued;
- spin_unlock_irqrestore(&ap->host->lock, flags);
+
/*
* Clear SError before executing a new command.
* sata_dwc_scr_write and read can not be used here. Clearing the PM
* managed SError register for the disk needs to be done before the
* task file is loaded.
*/
- clear_serror();
+ clear_serror(ap);
ata_sff_exec_command(ap, tf);
}
@@ -992,18 +1039,18 @@ static void sata_dwc_bmdma_start_by_tag(struct ata_queued_cmd *qc, u8 tag)
sata_dwc_tf_dump(ap, &qc->tf);
if (start_dma) {
- reg = core_scr_read(SCR_ERROR);
+ sata_dwc_scr_read(&ap->link, SCR_ERROR, &reg);
if (reg & SATA_DWC_SERROR_ERR_BITS) {
dev_err(ap->dev, "%s: ****** SError=0x%08x ******\n",
__func__, reg);
}
if (dir == DMA_TO_DEVICE)
- out_le32(&hsdev->sata_dwc_regs->dmacr,
- SATA_DWC_DMACR_TXCHEN);
+ sata_dwc_writel(&hsdev->sata_dwc_regs->dmacr,
+ SATA_DWC_DMACR_TXCHEN);
else
- out_le32(&hsdev->sata_dwc_regs->dmacr,
- SATA_DWC_DMACR_RXCHEN);
+ sata_dwc_writel(&hsdev->sata_dwc_regs->dmacr,
+ SATA_DWC_DMACR_RXCHEN);
/* Enable AHB DMA transfer on the specified channel */
dmaengine_submit(desc);
@@ -1025,36 +1072,12 @@ static void sata_dwc_bmdma_start(struct ata_queued_cmd *qc)
sata_dwc_bmdma_start_by_tag(qc, tag);
}
-/*
- * Function : sata_dwc_qc_prep_by_tag
- * arguments : ata_queued_cmd *qc, u8 tag
- * Return value : None
- * qc_prep for a particular queued command based on tag
- */
-static void sata_dwc_qc_prep_by_tag(struct ata_queued_cmd *qc, u8 tag)
-{
- struct dma_async_tx_descriptor *desc;
- struct ata_port *ap = qc->ap;
- struct sata_dwc_device_port *hsdevp = HSDEVP_FROM_AP(ap);
-
- dev_dbg(ap->dev, "%s: port=%d dma dir=%s n_elem=%d\n",
- __func__, ap->port_no, get_dma_dir_descript(qc->dma_dir),
- qc->n_elem);
-
- desc = dma_dwc_xfer_setup(qc);
- if (!desc) {
- dev_err(ap->dev, "%s: dma_dwc_xfer_setup returns NULL\n",
- __func__);
- return;
- }
- hsdevp->desc[tag] = desc;
-}
-
static unsigned int sata_dwc_qc_issue(struct ata_queued_cmd *qc)
{
u32 sactive;
u8 tag = qc->tag;
struct ata_port *ap = qc->ap;
+ struct sata_dwc_device_port *hsdevp = HSDEVP_FROM_AP(ap);
#ifdef DEBUG_NCQ
if (qc->tag > 0 || ap->link.sactive > 1)
@@ -1068,47 +1091,33 @@ static unsigned int sata_dwc_qc_issue(struct ata_queued_cmd *qc)
if (!ata_is_ncq(qc->tf.protocol))
tag = 0;
- sata_dwc_qc_prep_by_tag(qc, tag);
+
+ if (ata_is_dma(qc->tf.protocol)) {
+ hsdevp->desc[tag] = dma_dwc_xfer_setup(qc);
+ if (!hsdevp->desc[tag])
+ return AC_ERR_SYSTEM;
+ } else {
+ hsdevp->desc[tag] = NULL;
+ }
if (ata_is_ncq(qc->tf.protocol)) {
- sactive = core_scr_read(SCR_ACTIVE);
+ sata_dwc_scr_read(&ap->link, SCR_ACTIVE, &sactive);
sactive |= (0x00000001 << tag);
- core_scr_write(SCR_ACTIVE, sactive);
+ sata_dwc_scr_write(&ap->link, SCR_ACTIVE, sactive);
dev_dbg(qc->ap->dev,
"%s: tag=%d ap->link.sactive = 0x%08x sactive=0x%08x\n",
__func__, tag, qc->ap->link.sactive, sactive);
ap->ops->sff_tf_load(ap, &qc->tf);
- sata_dwc_exec_command_by_tag(ap, &qc->tf, qc->tag,
+ sata_dwc_exec_command_by_tag(ap, &qc->tf, tag,
SATA_DWC_CMD_ISSUED_PEND);
} else {
- ata_sff_qc_issue(qc);
+ return ata_bmdma_qc_issue(qc);
}
return 0;
}
-/*
- * Function : sata_dwc_qc_prep
- * arguments : ata_queued_cmd *qc
- * Return value : None
- * qc_prep for a particular queued command
- */
-
-static void sata_dwc_qc_prep(struct ata_queued_cmd *qc)
-{
- if ((qc->dma_dir == DMA_NONE) || (qc->tf.protocol == ATA_PROT_PIO))
- return;
-
-#ifdef DEBUG_NCQ
- if (qc->tag > 0)
- dev_info(qc->ap->dev, "%s: qc->tag=%d ap->active_tag=0x%08x\n",
- __func__, qc->tag, qc->ap->link.active_tag);
-
- return ;
-#endif
-}
-
static void sata_dwc_error_handler(struct ata_port *ap)
{
ata_sff_error_handler(ap);
@@ -1125,17 +1134,22 @@ static int sata_dwc_hardreset(struct ata_link *link, unsigned int *class,
sata_dwc_enable_interrupts(hsdev);
/* Reconfigure the DMA control register */
- out_le32(&hsdev->sata_dwc_regs->dmacr,
- SATA_DWC_DMACR_TXRXCH_CLEAR);
+ sata_dwc_writel(&hsdev->sata_dwc_regs->dmacr,
+ SATA_DWC_DMACR_TXRXCH_CLEAR);
/* Reconfigure the DMA Burst Transaction Size register */
- out_le32(&hsdev->sata_dwc_regs->dbtsr,
- SATA_DWC_DBTSR_MWR(AHB_DMA_BRST_DFLT) |
- SATA_DWC_DBTSR_MRD(AHB_DMA_BRST_DFLT));
+ sata_dwc_writel(&hsdev->sata_dwc_regs->dbtsr,
+ SATA_DWC_DBTSR_MWR(AHB_DMA_BRST_DFLT) |
+ SATA_DWC_DBTSR_MRD(AHB_DMA_BRST_DFLT));
return ret;
}
+static void sata_dwc_dev_select(struct ata_port *ap, unsigned int device)
+{
+ /* SATA DWC is master only */
+}
+
/*
* scsi mid-layer and libata interface structures
*/
@@ -1148,7 +1162,13 @@ static struct scsi_host_template sata_dwc_sht = {
*/
.sg_tablesize = LIBATA_MAX_PRD,
/* .can_queue = ATA_MAX_QUEUE, */
- .dma_boundary = ATA_DMA_BOUNDARY,
+ /*
+ * Make sure an LLI block is not created that spans the 8K max FIS
+ * boundary. If the block spans such a FIS boundary, there is a chance
+ * that a DMA burst will cross that boundary -- this results in an
+ * error in the host controller.
+ */
+ .dma_boundary = 0x1fff /* ATA_DMA_BOUNDARY */,
};
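The comment above explains why dma_boundary is lowered to 0x1fff: no DMA segment may straddle an 8 KiB window. A small illustrative check of how such a constraint can be evaluated (an assumption for clarity, not code from the driver):

#include <linux/types.h>

/*
 * true if [start, start + len) crosses a (mask + 1)-sized boundary,
 * e.g. mask = 0x1fff for the 8K FIS limit described above; len must be
 * non-zero.
 */
static bool crosses_dma_boundary(u64 start, u64 len, u64 mask)
{
	return (start & ~mask) != ((start + len - 1) & ~mask);
}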
static struct ata_port_operations sata_dwc_ops = {
@@ -1157,7 +1177,6 @@ static struct ata_port_operations sata_dwc_ops = {
.error_handler = sata_dwc_error_handler,
.hardreset = sata_dwc_hardreset,
- .qc_prep = sata_dwc_qc_prep,
.qc_issue = sata_dwc_qc_issue,
.scr_read = sata_dwc_scr_read,
@@ -1166,6 +1185,8 @@ static struct ata_port_operations sata_dwc_ops = {
.port_start = sata_dwc_port_start,
.port_stop = sata_dwc_port_stop,
+ .sff_dev_select = sata_dwc_dev_select,
+
.bmdma_setup = sata_dwc_bmdma_setup,
.bmdma_start = sata_dwc_bmdma_start,
};
@@ -1184,13 +1205,14 @@ static int sata_dwc_probe(struct platform_device *ofdev)
struct sata_dwc_device *hsdev;
u32 idr, versionr;
char *ver = (char *)&versionr;
- u8 __iomem *base;
+ void __iomem *base;
int err = 0;
int irq;
struct ata_host *host;
struct ata_port_info pi = sata_dwc_port_info[0];
const struct ata_port_info *ppi[] = { &pi, NULL };
struct device_node *np = ofdev->dev.of_node;
+ struct resource *res;
/* Allocate DWC SATA device */
host = ata_host_alloc_pinfo(&ofdev->dev, ppi, SATA_DWC_MAX_PORTS);
@@ -1201,57 +1223,33 @@ static int sata_dwc_probe(struct platform_device *ofdev)
host->private_data = hsdev;
/* Ioremap SATA registers */
- base = of_iomap(np, 0);
- if (!base) {
+ res = platform_get_resource(ofdev, IORESOURCE_MEM, 0);
+ base = devm_ioremap_resource(&ofdev->dev, res);
+ if (IS_ERR(base)) {
dev_err(&ofdev->dev,
"ioremap failed for SATA register address\n");
- return -ENODEV;
+ return PTR_ERR(base);
}
- hsdev->reg_base = base;
dev_dbg(&ofdev->dev, "ioremap done for SATA register address\n");
/* Synopsys DWC SATA specific Registers */
- hsdev->sata_dwc_regs = (void *__iomem)(base + SATA_DWC_REG_OFFSET);
+ hsdev->sata_dwc_regs = base + SATA_DWC_REG_OFFSET;
+ hsdev->dmadr = res->start + SATA_DWC_REG_OFFSET + offsetof(struct sata_dwc_regs, dmadr);
/* Setup port */
host->ports[0]->ioaddr.cmd_addr = base;
host->ports[0]->ioaddr.scr_addr = base + SATA_DWC_SCR_OFFSET;
- host_pvt.scr_addr_sstatus = base + SATA_DWC_SCR_OFFSET;
- sata_dwc_setup_port(&host->ports[0]->ioaddr, (unsigned long)base);
+ sata_dwc_setup_port(&host->ports[0]->ioaddr, base);
/* Read the ID and Version Registers */
- idr = in_le32(&hsdev->sata_dwc_regs->idr);
- versionr = in_le32(&hsdev->sata_dwc_regs->versionr);
+ idr = sata_dwc_readl(&hsdev->sata_dwc_regs->idr);
+ versionr = sata_dwc_readl(&hsdev->sata_dwc_regs->versionr);
dev_notice(&ofdev->dev, "id %d, controller version %c.%c%c\n",
idr, ver[0], ver[1], ver[2]);
- /* Get SATA DMA interrupt number */
- hsdev->dma->irq = irq_of_parse_and_map(np, 1);
- if (hsdev->dma->irq == NO_IRQ) {
- dev_err(&ofdev->dev, "no SATA DMA irq\n");
- err = -ENODEV;
- goto error_iomap;
- }
-
- /* Get physical SATA DMA register base address */
- hsdev->dma->regs = of_iomap(np, 1);
- if (!hsdev->dma->regs) {
- dev_err(&ofdev->dev,
- "ioremap failed for AHBDMA register address\n");
- err = -ENODEV;
- goto error_iomap;
- }
-
/* Save dev for later use in dev_xxx() routines */
hsdev->dev = &ofdev->dev;
- hsdev->dma->dev = &ofdev->dev;
-
- /* Initialize AHB DMAC */
- err = dw_dma_probe(hsdev->dma, NULL);
- if (err)
- goto error_dma_iomap;
-
/* Enable SATA Interrupts */
sata_dwc_enable_interrupts(hsdev);
@@ -1263,6 +1261,25 @@ static int sata_dwc_probe(struct platform_device *ofdev)
goto error_out;
}
+#ifdef CONFIG_SATA_DWC_OLD_DMA
+ if (!of_find_property(np, "dmas", NULL)) {
+ err = sata_dwc_dma_init_old(ofdev, hsdev);
+ if (err)
+ goto error_out;
+ }
+#endif
+
+ hsdev->phy = devm_phy_optional_get(hsdev->dev, "sata-phy");
+ if (IS_ERR(hsdev->phy)) {
+ err = PTR_ERR(hsdev->phy);
+ hsdev->phy = NULL;
+ goto error_out;
+ }
+
+ err = phy_init(hsdev->phy);
+ if (err)
+ goto error_out;
+
/*
* Now, register with libATA core, this will also initiate the
* device discovery process, invoking our port_start() handler &
@@ -1276,12 +1293,7 @@ static int sata_dwc_probe(struct platform_device *ofdev)
return 0;
error_out:
- /* Free SATA DMA resources */
- dw_dma_remove(hsdev->dma);
-error_dma_iomap:
- iounmap(hsdev->dma->regs);
-error_iomap:
- iounmap(base);
+ phy_exit(hsdev->phy);
return err;
}
@@ -1293,11 +1305,13 @@ static int sata_dwc_remove(struct platform_device *ofdev)
ata_host_detach(host);
+ phy_exit(hsdev->phy);
+
+#ifdef CONFIG_SATA_DWC_OLD_DMA
/* Free SATA DMA resources */
- dw_dma_remove(hsdev->dma);
+ sata_dwc_dma_exit_old(hsdev);
+#endif
- iounmap(hsdev->dma->regs);
- iounmap(hsdev->reg_base);
dev_dbg(&ofdev->dev, "done\n");
return 0;
}
diff --git a/drivers/ata/sata_highbank.c b/drivers/ata/sata_highbank.c
index 8638d575b2b9..aafb8cc03523 100644
--- a/drivers/ata/sata_highbank.c
+++ b/drivers/ata/sata_highbank.c
@@ -197,7 +197,7 @@ static void highbank_set_em_messages(struct device *dev,
for (i = 0; i < SGPIO_PINS; i++) {
err = of_get_named_gpio(np, "calxeda,sgpio-gpio", i);
- if (IS_ERR_VALUE(err))
+ if (err < 0)
return;
pdata->sgpio_gpio[i] = err;
diff --git a/drivers/base/devcoredump.c b/drivers/base/devcoredump.c
index 1bd120a0b084..240374fd1838 100644
--- a/drivers/base/devcoredump.c
+++ b/drivers/base/devcoredump.c
@@ -4,6 +4,7 @@
* GPL LICENSE SUMMARY
*
* Copyright(c) 2014 Intel Mobile Communications GmbH
+ * Copyright(c) 2015 Intel Deutschland GmbH
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of version 2 of the GNU General Public License as
@@ -41,12 +42,12 @@ static bool devcd_disabled;
struct devcd_entry {
struct device devcd_dev;
- const void *data;
+ void *data;
size_t datalen;
struct module *owner;
ssize_t (*read)(char *buffer, loff_t offset, size_t count,
- const void *data, size_t datalen);
- void (*free)(const void *data);
+ void *data, size_t datalen);
+ void (*free)(void *data);
struct delayed_work del_wk;
struct device *failing_dev;
};
@@ -174,7 +175,7 @@ static struct class devcd_class = {
};
static ssize_t devcd_readv(char *buffer, loff_t offset, size_t count,
- const void *data, size_t datalen)
+ void *data, size_t datalen)
{
if (offset > datalen)
return -EINVAL;
@@ -188,6 +189,11 @@ static ssize_t devcd_readv(char *buffer, loff_t offset, size_t count,
return count;
}
+static void devcd_freev(void *data)
+{
+ vfree(data);
+}
+
/**
* dev_coredumpv - create device coredump with vmalloc data
* @dev: the struct device for the crashed device
@@ -198,10 +204,10 @@ static ssize_t devcd_readv(char *buffer, loff_t offset, size_t count,
* This function takes ownership of the vmalloc'ed data and will free
* it when it is no longer used. See dev_coredumpm() for more information.
*/
-void dev_coredumpv(struct device *dev, const void *data, size_t datalen,
+void dev_coredumpv(struct device *dev, void *data, size_t datalen,
gfp_t gfp)
{
- dev_coredumpm(dev, NULL, data, datalen, gfp, devcd_readv, vfree);
+ dev_coredumpm(dev, NULL, data, datalen, gfp, devcd_readv, devcd_freev);
}
EXPORT_SYMBOL_GPL(dev_coredumpv);
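A hedged usage sketch for dev_coredumpv() from a hypothetical driver: the vmalloc'ed buffer is handed over and later freed by the devcoredump core. The function and buffer names below are illustrative, not from this patch.

#include <linux/devcoredump.h>
#include <linux/string.h>
#include <linux/vmalloc.h>

static void example_dump_fw_state(struct device *dev, const u8 *fw, size_t len)
{
	void *buf = vmalloc(len);

	if (!buf)
		return;

	memcpy(buf, fw, len);
	/* ownership of buf passes to devcoredump; it calls vfree() when done */
	dev_coredumpv(dev, buf, len, GFP_KERNEL);
}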
@@ -213,6 +219,44 @@ static int devcd_match_failing(struct device *dev, const void *failing)
}
/**
+ * devcd_free_sgtable - free all the memory of the given scatterlist table
+ * (i.e. both pages and scatterlist instances)
+ * NOTE: if two tables allocated with devcd_alloc_sgtable are chained using
+ * sg_chain(), this function should be called only once on the chained table
+ * @data: pointer to the scatterlist table to free
+ */
+static void devcd_free_sgtable(void *data)
+{
+ _devcd_free_sgtable(data);
+}
+
+/**
+ * devcd_read_from_sgtable - copy data from the scatterlist table to a given
+ * buffer and return the number of bytes read
+ * @buffer: the buffer to copy the data to
+ * @buf_len: the length of the buffer
+ * @data: the scatterlist table to copy from
+ * @offset: start copying from @offset bytes from the head of the data
+ * in the given scatterlist
+ * @data_len: the length of the data in the sg_table
+ */
+static ssize_t devcd_read_from_sgtable(char *buffer, loff_t offset,
+ size_t buf_len, void *data,
+ size_t data_len)
+{
+ struct scatterlist *table = data;
+
+ if (offset > data_len)
+ return -EINVAL;
+
+ if (offset + buf_len > data_len)
+ buf_len = data_len - offset;
+ return sg_pcopy_to_buffer(table, sg_nents(table), buffer, buf_len,
+ offset);
+}
+
+/**
* dev_coredumpm - create device coredump with read/free methods
* @dev: the struct device for the crashed device
* @owner: the module that contains the read/free functions, use %THIS_MODULE
@@ -228,10 +272,10 @@ static int devcd_match_failing(struct device *dev, const void *failing)
* function will be called to free the data.
*/
void dev_coredumpm(struct device *dev, struct module *owner,
- const void *data, size_t datalen, gfp_t gfp,
+ void *data, size_t datalen, gfp_t gfp,
ssize_t (*read)(char *buffer, loff_t offset, size_t count,
- const void *data, size_t datalen),
- void (*free)(const void *data))
+ void *data, size_t datalen),
+ void (*free)(void *data))
{
static atomic_t devcd_count = ATOMIC_INIT(0);
struct devcd_entry *devcd;
@@ -291,6 +335,27 @@ void dev_coredumpm(struct device *dev, struct module *owner,
}
EXPORT_SYMBOL_GPL(dev_coredumpm);
+/**
+ * dev_coredumpsg - create device coredump that uses a scatterlist as its data
+ * parameter
+ * @dev: the struct device for the crashed device
+ * @table: the dump data as a scatterlist table
+ * @datalen: length of the data
+ * @gfp: allocation flags
+ *
+ * Creates a new device coredump for the given device. If a previous one hasn't
+ * been read yet, the new coredump is discarded. The lifetime of the data is
+ * managed by the device coredump framework, which frees it once it is no
+ * longer needed.
+ */
+void dev_coredumpsg(struct device *dev, struct scatterlist *table,
+ size_t datalen, gfp_t gfp)
+{
+ dev_coredumpm(dev, NULL, table, datalen, gfp, devcd_read_from_sgtable,
+ devcd_free_sgtable);
+}
+EXPORT_SYMBOL_GPL(dev_coredumpsg);
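Similarly, a minimal sketch (illustrative only) of handing a pre-built scatterlist dump to the new dev_coredumpsg() helper; the framework then owns both the pages and the scatterlist entries. The wrapper name is made up.

#include <linux/devcoredump.h>
#include <linux/scatterlist.h>

static void example_dump_sg(struct device *dev, struct scatterlist *sgt,
			    size_t dump_len)
{
	/* sgt and its pages are freed by devcoredump once read or expired */
	dev_coredumpsg(dev, sgt, dump_len, GFP_KERNEL);
}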
+
static int __init devcoredump_init(void)
{
return class_register(&devcd_class);
diff --git a/drivers/base/platform.c b/drivers/base/platform.c
index f437afa17f2b..6482d47deb50 100644
--- a/drivers/base/platform.c
+++ b/drivers/base/platform.c
@@ -322,16 +322,16 @@ EXPORT_SYMBOL_GPL(platform_device_add_data);
/**
* platform_device_add_properties - add built-in properties to a platform device
* @pdev: platform device to add properties to
- * @pset: properties to add
+ * @properties: null terminated array of properties to add
*
- * The function will take deep copy of the properties in @pset and attach
- * the copy to the platform device. The memory associated with properties
- * will be freed when the platform device is released.
+ * The function will take a deep copy of @properties and attach the copy to the
+ * platform device. The memory associated with the properties will be freed when
+ * the platform device is released.
*/
int platform_device_add_properties(struct platform_device *pdev,
- const struct property_set *pset)
+ struct property_entry *properties)
{
- return device_add_property_set(&pdev->dev, pset);
+ return device_add_properties(&pdev->dev, properties);
}
EXPORT_SYMBOL_GPL(platform_device_add_properties);
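A short usage sketch of the new array-based signature, assuming the PROPERTY_ENTRY_* helpers from <linux/property.h>; the property name and value are made up for illustration.

#include <linux/platform_device.h>
#include <linux/property.h>

/* null-terminated array, as required by the new API */
static struct property_entry example_props[] = {
	PROPERTY_ENTRY_U32("example,clock-frequency", 400000),
	{ }
};

static int example_attach_props(struct platform_device *pdev)
{
	/* the core takes a deep copy, so example_props may live anywhere */
	return platform_device_add_properties(pdev, example_props);
}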
@@ -447,7 +447,7 @@ void platform_device_del(struct platform_device *pdev)
release_resource(r);
}
- device_remove_property_set(&pdev->dev);
+ device_remove_properties(&pdev->dev);
}
}
EXPORT_SYMBOL_GPL(platform_device_del);
@@ -526,8 +526,9 @@ struct platform_device *platform_device_register_full(
if (ret)
goto err;
- if (pdevinfo->pset) {
- ret = platform_device_add_properties(pdev, pdevinfo->pset);
+ if (pdevinfo->properties) {
+ ret = platform_device_add_properties(pdev,
+ pdevinfo->properties);
if (ret)
goto err;
}
diff --git a/drivers/base/power/clock_ops.c b/drivers/base/power/clock_ops.c
index 0e64a1b5e62a..3657ac1cb801 100644
--- a/drivers/base/power/clock_ops.c
+++ b/drivers/base/power/clock_ops.c
@@ -159,7 +159,7 @@ int of_pm_clk_add_clks(struct device *dev)
count = of_count_phandle_with_args(dev->of_node, "clocks",
"#clock-cells");
- if (count == 0)
+ if (count <= 0)
return -ENODEV;
clks = kcalloc(count, sizeof(*clks), GFP_KERNEL);
diff --git a/drivers/base/power/domain.c b/drivers/base/power/domain.c
index 56705b52758e..de23b648fce3 100644
--- a/drivers/base/power/domain.c
+++ b/drivers/base/power/domain.c
@@ -229,17 +229,6 @@ static int genpd_poweron(struct generic_pm_domain *genpd, unsigned int depth)
return ret;
}
-static int genpd_save_dev(struct generic_pm_domain *genpd, struct device *dev)
-{
- return GENPD_DEV_CALLBACK(genpd, int, save_state, dev);
-}
-
-static int genpd_restore_dev(struct generic_pm_domain *genpd,
- struct device *dev)
-{
- return GENPD_DEV_CALLBACK(genpd, int, restore_state, dev);
-}
-
static int genpd_dev_pm_qos_notifier(struct notifier_block *nb,
unsigned long val, void *ptr)
{
@@ -372,17 +361,63 @@ static void genpd_power_off_work_fn(struct work_struct *work)
}
/**
- * pm_genpd_runtime_suspend - Suspend a device belonging to I/O PM domain.
+ * __genpd_runtime_suspend - walk the hierarchy of ->runtime_suspend() callbacks
+ * @dev: Device to handle.
+ */
+static int __genpd_runtime_suspend(struct device *dev)
+{
+ int (*cb)(struct device *__dev);
+
+ if (dev->type && dev->type->pm)
+ cb = dev->type->pm->runtime_suspend;
+ else if (dev->class && dev->class->pm)
+ cb = dev->class->pm->runtime_suspend;
+ else if (dev->bus && dev->bus->pm)
+ cb = dev->bus->pm->runtime_suspend;
+ else
+ cb = NULL;
+
+ if (!cb && dev->driver && dev->driver->pm)
+ cb = dev->driver->pm->runtime_suspend;
+
+ return cb ? cb(dev) : 0;
+}
+
+/**
+ * __genpd_runtime_resume - walk the hierarchy of ->runtime_resume() callbacks
+ * @dev: Device to handle.
+ */
+static int __genpd_runtime_resume(struct device *dev)
+{
+ int (*cb)(struct device *__dev);
+
+ if (dev->type && dev->type->pm)
+ cb = dev->type->pm->runtime_resume;
+ else if (dev->class && dev->class->pm)
+ cb = dev->class->pm->runtime_resume;
+ else if (dev->bus && dev->bus->pm)
+ cb = dev->bus->pm->runtime_resume;
+ else
+ cb = NULL;
+
+ if (!cb && dev->driver && dev->driver->pm)
+ cb = dev->driver->pm->runtime_resume;
+
+ return cb ? cb(dev) : 0;
+}
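For reference, a hypothetical driver's dev_pm_ops showing where the walk above ends up when no type/class/bus callbacks are defined: genpd falls through to the driver's runtime hooks. Names are illustrative, not from this patch.

#include <linux/pm.h>
#include <linux/pm_runtime.h>

static int example_runtime_suspend(struct device *dev)
{
	/* quiesce the hardware; picked up by __genpd_runtime_suspend() */
	return 0;
}

static int example_runtime_resume(struct device *dev)
{
	/* re-initialise the hardware; picked up by __genpd_runtime_resume() */
	return 0;
}

static const struct dev_pm_ops example_pm_ops = {
	SET_RUNTIME_PM_OPS(example_runtime_suspend, example_runtime_resume, NULL)
};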
+
+/**
+ * genpd_runtime_suspend - Suspend a device belonging to I/O PM domain.
* @dev: Device to suspend.
*
* Carry out a runtime suspend of a device under the assumption that its
* pm_domain field points to the domain member of an object of type
* struct generic_pm_domain representing a PM domain consisting of I/O devices.
*/
-static int pm_genpd_runtime_suspend(struct device *dev)
+static int genpd_runtime_suspend(struct device *dev)
{
struct generic_pm_domain *genpd;
- bool (*stop_ok)(struct device *__dev);
+ bool (*suspend_ok)(struct device *__dev);
struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
bool runtime_pm = pm_runtime_enabled(dev);
ktime_t time_start;
@@ -401,21 +436,21 @@ static int pm_genpd_runtime_suspend(struct device *dev)
* runtime PM is disabled. Under these circumstances, we shall skip
* validating/measuring the PM QoS latency.
*/
- stop_ok = genpd->gov ? genpd->gov->stop_ok : NULL;
- if (runtime_pm && stop_ok && !stop_ok(dev))
+ suspend_ok = genpd->gov ? genpd->gov->suspend_ok : NULL;
+ if (runtime_pm && suspend_ok && !suspend_ok(dev))
return -EBUSY;
/* Measure suspend latency. */
if (runtime_pm)
time_start = ktime_get();
- ret = genpd_save_dev(genpd, dev);
+ ret = __genpd_runtime_suspend(dev);
if (ret)
return ret;
ret = genpd_stop_dev(genpd, dev);
if (ret) {
- genpd_restore_dev(genpd, dev);
+ __genpd_runtime_resume(dev);
return ret;
}
@@ -446,14 +481,14 @@ static int pm_genpd_runtime_suspend(struct device *dev)
}
/**
- * pm_genpd_runtime_resume - Resume a device belonging to I/O PM domain.
+ * genpd_runtime_resume - Resume a device belonging to I/O PM domain.
* @dev: Device to resume.
*
* Carry out a runtime resume of a device under the assumption that its
* pm_domain field points to the domain member of an object of type
* struct generic_pm_domain representing a PM domain consisting of I/O devices.
*/
-static int pm_genpd_runtime_resume(struct device *dev)
+static int genpd_runtime_resume(struct device *dev)
{
struct generic_pm_domain *genpd;
struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
@@ -491,7 +526,7 @@ static int pm_genpd_runtime_resume(struct device *dev)
if (ret)
goto err_poweroff;
- ret = genpd_restore_dev(genpd, dev);
+ ret = __genpd_runtime_resume(dev);
if (ret)
goto err_stop;
@@ -695,15 +730,6 @@ static int pm_genpd_prepare(struct device *dev)
* at this point and a system wakeup event should be reported if it's
* set up to wake up the system from sleep states.
*/
- pm_runtime_get_noresume(dev);
- if (pm_runtime_barrier(dev) && device_may_wakeup(dev))
- pm_wakeup_event(dev, 0);
-
- if (pm_wakeup_pending()) {
- pm_runtime_put(dev);
- return -EBUSY;
- }
-
if (resume_needed(dev, genpd))
pm_runtime_resume(dev);
@@ -716,10 +742,8 @@ static int pm_genpd_prepare(struct device *dev)
mutex_unlock(&genpd->lock);
- if (genpd->suspend_power_off) {
- pm_runtime_put_noidle(dev);
+ if (genpd->suspend_power_off)
return 0;
- }
/*
* The PM domain must be in the GPD_STATE_ACTIVE state at this point,
@@ -741,7 +765,6 @@ static int pm_genpd_prepare(struct device *dev)
pm_runtime_enable(dev);
}
- pm_runtime_put(dev);
return ret;
}
@@ -1427,54 +1450,6 @@ out:
}
EXPORT_SYMBOL_GPL(pm_genpd_remove_subdomain);
-/* Default device callbacks for generic PM domains. */
-
-/**
- * pm_genpd_default_save_state - Default "save device state" for PM domains.
- * @dev: Device to handle.
- */
-static int pm_genpd_default_save_state(struct device *dev)
-{
- int (*cb)(struct device *__dev);
-
- if (dev->type && dev->type->pm)
- cb = dev->type->pm->runtime_suspend;
- else if (dev->class && dev->class->pm)
- cb = dev->class->pm->runtime_suspend;
- else if (dev->bus && dev->bus->pm)
- cb = dev->bus->pm->runtime_suspend;
- else
- cb = NULL;
-
- if (!cb && dev->driver && dev->driver->pm)
- cb = dev->driver->pm->runtime_suspend;
-
- return cb ? cb(dev) : 0;
-}
-
-/**
- * pm_genpd_default_restore_state - Default PM domains "restore device state".
- * @dev: Device to handle.
- */
-static int pm_genpd_default_restore_state(struct device *dev)
-{
- int (*cb)(struct device *__dev);
-
- if (dev->type && dev->type->pm)
- cb = dev->type->pm->runtime_resume;
- else if (dev->class && dev->class->pm)
- cb = dev->class->pm->runtime_resume;
- else if (dev->bus && dev->bus->pm)
- cb = dev->bus->pm->runtime_resume;
- else
- cb = NULL;
-
- if (!cb && dev->driver && dev->driver->pm)
- cb = dev->driver->pm->runtime_resume;
-
- return cb ? cb(dev) : 0;
-}
-
/**
* pm_genpd_init - Initialize a generic I/O PM domain object.
* @genpd: PM domain object to initialize.
@@ -1498,8 +1473,8 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
genpd->device_count = 0;
genpd->max_off_time_ns = -1;
genpd->max_off_time_changed = true;
- genpd->domain.ops.runtime_suspend = pm_genpd_runtime_suspend;
- genpd->domain.ops.runtime_resume = pm_genpd_runtime_resume;
+ genpd->domain.ops.runtime_suspend = genpd_runtime_suspend;
+ genpd->domain.ops.runtime_resume = genpd_runtime_resume;
genpd->domain.ops.prepare = pm_genpd_prepare;
genpd->domain.ops.suspend = pm_genpd_suspend;
genpd->domain.ops.suspend_late = pm_genpd_suspend_late;
@@ -1520,8 +1495,6 @@ void pm_genpd_init(struct generic_pm_domain *genpd,
genpd->domain.ops.restore_early = pm_genpd_resume_early;
genpd->domain.ops.restore = pm_genpd_resume;
genpd->domain.ops.complete = pm_genpd_complete;
- genpd->dev_ops.save_state = pm_genpd_default_save_state;
- genpd->dev_ops.restore_state = pm_genpd_default_restore_state;
if (genpd->flags & GENPD_FLAG_PM_CLK) {
genpd->dev_ops.stop = pm_clk_suspend;
diff --git a/drivers/base/power/domain_governor.c b/drivers/base/power/domain_governor.c
index 00a5436dd44b..2e0fce711135 100644
--- a/drivers/base/power/domain_governor.c
+++ b/drivers/base/power/domain_governor.c
@@ -37,10 +37,10 @@ static int dev_update_qos_constraint(struct device *dev, void *data)
}
/**
- * default_stop_ok - Default PM domain governor routine for stopping devices.
+ * default_suspend_ok - Default PM domain governor routine to suspend devices.
* @dev: Device to check.
*/
-static bool default_stop_ok(struct device *dev)
+static bool default_suspend_ok(struct device *dev)
{
struct gpd_timing_data *td = &dev_gpd_data(dev)->td;
unsigned long flags;
@@ -51,13 +51,13 @@ static bool default_stop_ok(struct device *dev)
spin_lock_irqsave(&dev->power.lock, flags);
if (!td->constraint_changed) {
- bool ret = td->cached_stop_ok;
+ bool ret = td->cached_suspend_ok;
spin_unlock_irqrestore(&dev->power.lock, flags);
return ret;
}
td->constraint_changed = false;
- td->cached_stop_ok = false;
+ td->cached_suspend_ok = false;
td->effective_constraint_ns = -1;
constraint_ns = __dev_pm_qos_read_value(dev);
@@ -83,13 +83,13 @@ static bool default_stop_ok(struct device *dev)
return false;
}
td->effective_constraint_ns = constraint_ns;
- td->cached_stop_ok = constraint_ns >= 0;
+ td->cached_suspend_ok = constraint_ns >= 0;
/*
* The children have been suspended already, so we don't need to take
- * their stop latencies into account here.
+ * their suspend latencies into account here.
*/
- return td->cached_stop_ok;
+ return td->cached_suspend_ok;
}
/**
@@ -150,7 +150,7 @@ static bool __default_power_down_ok(struct dev_pm_domain *pd,
*/
td = &to_gpd_data(pdd)->td;
constraint_ns = td->effective_constraint_ns;
- /* default_stop_ok() need not be called before us. */
+ /* default_suspend_ok() need not be called before us. */
if (constraint_ns < 0) {
constraint_ns = dev_pm_qos_read_value(pdd->dev);
constraint_ns *= NSEC_PER_USEC;
@@ -227,7 +227,7 @@ static bool always_on_power_down_ok(struct dev_pm_domain *domain)
}
struct dev_power_governor simple_qos_governor = {
- .stop_ok = default_stop_ok,
+ .suspend_ok = default_suspend_ok,
.power_down_ok = default_power_down_ok,
};
@@ -236,5 +236,5 @@ struct dev_power_governor simple_qos_governor = {
*/
struct dev_power_governor pm_domain_always_on_gov = {
.power_down_ok = always_on_power_down_ok,
- .stop_ok = default_stop_ok,
+ .suspend_ok = default_suspend_ok,
};
diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index 6e7c3ccea24b..e44944f4be77 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -1267,14 +1267,15 @@ int dpm_suspend_late(pm_message_t state)
error = device_suspend_late(dev);
mutex_lock(&dpm_list_mtx);
+ if (!list_empty(&dev->power.entry))
+ list_move(&dev->power.entry, &dpm_late_early_list);
+
if (error) {
pm_dev_err(dev, state, " late", error);
dpm_save_failed_dev(dev_name(dev));
put_device(dev);
break;
}
- if (!list_empty(&dev->power.entry))
- list_move(&dev->power.entry, &dpm_late_early_list);
put_device(dev);
if (async_error)
@@ -1556,7 +1557,6 @@ int dpm_suspend(pm_message_t state)
static int device_prepare(struct device *dev, pm_message_t state)
{
int (*callback)(struct device *) = NULL;
- char *info = NULL;
int ret = 0;
if (dev->power.syscore)
@@ -1579,24 +1579,17 @@ static int device_prepare(struct device *dev, pm_message_t state)
goto unlock;
}
- if (dev->pm_domain) {
- info = "preparing power domain ";
+ if (dev->pm_domain)
callback = dev->pm_domain->ops.prepare;
- } else if (dev->type && dev->type->pm) {
- info = "preparing type ";
+ else if (dev->type && dev->type->pm)
callback = dev->type->pm->prepare;
- } else if (dev->class && dev->class->pm) {
- info = "preparing class ";
+ else if (dev->class && dev->class->pm)
callback = dev->class->pm->prepare;
- } else if (dev->bus && dev->bus->pm) {
- info = "preparing bus ";
+ else if (dev->bus && dev->bus->pm)
callback = dev->bus->pm->prepare;
- }
- if (!callback && dev->driver && dev->driver->pm) {
- info = "preparing driver ";
+ if (!callback && dev->driver && dev->driver->pm)
callback = dev->driver->pm->prepare;
- }
if (callback)
ret = callback(dev);
diff --git a/drivers/base/power/opp/Makefile b/drivers/base/power/opp/Makefile
index 19837ef04d8e..e70ceb406fe9 100644
--- a/drivers/base/power/opp/Makefile
+++ b/drivers/base/power/opp/Makefile
@@ -1,3 +1,4 @@
ccflags-$(CONFIG_DEBUG_DRIVER) := -DDEBUG
obj-y += core.o cpu.o
+obj-$(CONFIG_OF) += of.o
obj-$(CONFIG_DEBUG_FS) += debugfs.o
diff --git a/drivers/base/power/opp/core.c b/drivers/base/power/opp/core.c
index d8f4cc22856c..7c04c87738a6 100644
--- a/drivers/base/power/opp/core.c
+++ b/drivers/base/power/opp/core.c
@@ -18,7 +18,6 @@
#include <linux/err.h>
#include <linux/slab.h>
#include <linux/device.h>
-#include <linux/of.h>
#include <linux/export.h>
#include <linux/regulator/consumer.h>
@@ -29,7 +28,7 @@
* from here, with each opp_table containing the list of opps it supports in
* various states of availability.
*/
-static LIST_HEAD(opp_tables);
+LIST_HEAD(opp_tables);
/* Lock to allow exclusive modification to the device and opp lists */
DEFINE_MUTEX(opp_table_lock);
@@ -53,26 +52,6 @@ static struct opp_device *_find_opp_dev(const struct device *dev,
return NULL;
}
-static struct opp_table *_managed_opp(const struct device_node *np)
-{
- struct opp_table *opp_table;
-
- list_for_each_entry_rcu(opp_table, &opp_tables, node) {
- if (opp_table->np == np) {
- /*
- * Multiple devices can point to the same OPP table and
- * so will have same node-pointer, np.
- *
- * But the OPPs will be considered as shared only if the
- * OPP table contains a "opp-shared" property.
- */
- return opp_table->shared_opp ? opp_table : NULL;
- }
- }
-
- return NULL;
-}
-
/**
* _find_opp_table() - find opp_table struct using device pointer
* @dev: device pointer used to lookup OPP table
@@ -757,7 +736,6 @@ static struct opp_table *_add_opp_table(struct device *dev)
{
struct opp_table *opp_table;
struct opp_device *opp_dev;
- struct device_node *np;
int ret;
/* Check for existing table for 'dev' first */
@@ -781,20 +759,7 @@ static struct opp_table *_add_opp_table(struct device *dev)
return NULL;
}
- /*
- * Only required for backward compatibility with v1 bindings, but isn't
- * harmful for other cases. And so we do it unconditionally.
- */
- np = of_node_get(dev->of_node);
- if (np) {
- u32 val;
-
- if (!of_property_read_u32(np, "clock-latency", &val))
- opp_table->clock_latency_ns_max = val;
- of_property_read_u32(np, "voltage-tolerance",
- &opp_table->voltage_tolerance_v1);
- of_node_put(np);
- }
+ _of_init_opp_table(opp_table, dev);
/* Set regulator to a non-NULL error value */
opp_table->regulator = ERR_PTR(-ENXIO);
@@ -890,8 +855,8 @@ static void _kfree_opp_rcu(struct rcu_head *head)
* It is assumed that the caller holds required mutex for an RCU updater
* strategy.
*/
-static void _opp_remove(struct opp_table *opp_table,
- struct dev_pm_opp *opp, bool notify)
+void _opp_remove(struct opp_table *opp_table, struct dev_pm_opp *opp,
+ bool notify)
{
/*
* Notify the changes in the availability of the operable
@@ -952,8 +917,8 @@ unlock:
}
EXPORT_SYMBOL_GPL(dev_pm_opp_remove);
-static struct dev_pm_opp *_allocate_opp(struct device *dev,
- struct opp_table **opp_table)
+struct dev_pm_opp *_allocate_opp(struct device *dev,
+ struct opp_table **opp_table)
{
struct dev_pm_opp *opp;
@@ -989,8 +954,8 @@ static bool _opp_supported_by_regulators(struct dev_pm_opp *opp,
return true;
}
-static int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
- struct opp_table *opp_table)
+int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
+ struct opp_table *opp_table)
{
struct dev_pm_opp *opp;
struct list_head *head = &opp_table->opp_list;
@@ -1066,8 +1031,8 @@ static int _opp_add(struct device *dev, struct dev_pm_opp *new_opp,
* Duplicate OPPs (both freq and volt are same) and !opp->available
* -ENOMEM Memory allocation failure
*/
-static int _opp_add_v1(struct device *dev, unsigned long freq, long u_volt,
- bool dynamic)
+int _opp_add_v1(struct device *dev, unsigned long freq, long u_volt,
+ bool dynamic)
{
struct opp_table *opp_table;
struct dev_pm_opp *new_opp;
@@ -1112,83 +1077,6 @@ unlock:
return ret;
}
-/* TODO: Support multiple regulators */
-static int opp_parse_supplies(struct dev_pm_opp *opp, struct device *dev,
- struct opp_table *opp_table)
-{
- u32 microvolt[3] = {0};
- u32 val;
- int count, ret;
- struct property *prop = NULL;
- char name[NAME_MAX];
-
- /* Search for "opp-microvolt-<name>" */
- if (opp_table->prop_name) {
- snprintf(name, sizeof(name), "opp-microvolt-%s",
- opp_table->prop_name);
- prop = of_find_property(opp->np, name, NULL);
- }
-
- if (!prop) {
- /* Search for "opp-microvolt" */
- sprintf(name, "opp-microvolt");
- prop = of_find_property(opp->np, name, NULL);
-
- /* Missing property isn't a problem, but an invalid entry is */
- if (!prop)
- return 0;
- }
-
- count = of_property_count_u32_elems(opp->np, name);
- if (count < 0) {
- dev_err(dev, "%s: Invalid %s property (%d)\n",
- __func__, name, count);
- return count;
- }
-
- /* There can be one or three elements here */
- if (count != 1 && count != 3) {
- dev_err(dev, "%s: Invalid number of elements in %s property (%d)\n",
- __func__, name, count);
- return -EINVAL;
- }
-
- ret = of_property_read_u32_array(opp->np, name, microvolt, count);
- if (ret) {
- dev_err(dev, "%s: error parsing %s: %d\n", __func__, name, ret);
- return -EINVAL;
- }
-
- opp->u_volt = microvolt[0];
-
- if (count == 1) {
- opp->u_volt_min = opp->u_volt;
- opp->u_volt_max = opp->u_volt;
- } else {
- opp->u_volt_min = microvolt[1];
- opp->u_volt_max = microvolt[2];
- }
-
- /* Search for "opp-microamp-<name>" */
- prop = NULL;
- if (opp_table->prop_name) {
- snprintf(name, sizeof(name), "opp-microamp-%s",
- opp_table->prop_name);
- prop = of_find_property(opp->np, name, NULL);
- }
-
- if (!prop) {
- /* Search for "opp-microamp" */
- sprintf(name, "opp-microamp");
- prop = of_find_property(opp->np, name, NULL);
- }
-
- if (prop && !of_property_read_u32(opp->np, name, &val))
- opp->u_amp = val;
-
- return 0;
-}
-
/**
* dev_pm_opp_set_supported_hw() - Set supported platforms
* @dev: Device for which supported-hw has to be set.
@@ -1517,144 +1405,6 @@ unlock:
}
EXPORT_SYMBOL_GPL(dev_pm_opp_put_regulator);
-static bool _opp_is_supported(struct device *dev, struct opp_table *opp_table,
- struct device_node *np)
-{
- unsigned int count = opp_table->supported_hw_count;
- u32 version;
- int ret;
-
- if (!opp_table->supported_hw)
- return true;
-
- while (count--) {
- ret = of_property_read_u32_index(np, "opp-supported-hw", count,
- &version);
- if (ret) {
- dev_warn(dev, "%s: failed to read opp-supported-hw property at index %d: %d\n",
- __func__, count, ret);
- return false;
- }
-
- /* Both of these are bitwise masks of the versions */
- if (!(version & opp_table->supported_hw[count]))
- return false;
- }
-
- return true;
-}
-
-/**
- * _opp_add_static_v2() - Allocate static OPPs (As per 'v2' DT bindings)
- * @dev: device for which we do this operation
- * @np: device node
- *
- * This function adds an opp definition to the opp table and returns status. The
- * opp can be controlled using dev_pm_opp_enable/disable functions and may be
- * removed by dev_pm_opp_remove.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function internally uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
- *
- * Return:
- * 0 On success OR
- * Duplicate OPPs (both freq and volt are same) and opp->available
- * -EEXIST Freq are same and volt are different OR
- * Duplicate OPPs (both freq and volt are same) and !opp->available
- * -ENOMEM Memory allocation failure
- * -EINVAL Failed parsing the OPP node
- */
-static int _opp_add_static_v2(struct device *dev, struct device_node *np)
-{
- struct opp_table *opp_table;
- struct dev_pm_opp *new_opp;
- u64 rate;
- u32 val;
- int ret;
-
- /* Hold our table modification lock here */
- mutex_lock(&opp_table_lock);
-
- new_opp = _allocate_opp(dev, &opp_table);
- if (!new_opp) {
- ret = -ENOMEM;
- goto unlock;
- }
-
- ret = of_property_read_u64(np, "opp-hz", &rate);
- if (ret < 0) {
- dev_err(dev, "%s: opp-hz not found\n", __func__);
- goto free_opp;
- }
-
- /* Check if the OPP supports hardware's hierarchy of versions or not */
- if (!_opp_is_supported(dev, opp_table, np)) {
- dev_dbg(dev, "OPP not supported by hardware: %llu\n", rate);
- goto free_opp;
- }
-
- /*
- * Rate is defined as an unsigned long in clk API, and so casting
- * explicitly to its type. Must be fixed once rate is 64 bit
- * guaranteed in clk API.
- */
- new_opp->rate = (unsigned long)rate;
- new_opp->turbo = of_property_read_bool(np, "turbo-mode");
-
- new_opp->np = np;
- new_opp->dynamic = false;
- new_opp->available = true;
-
- if (!of_property_read_u32(np, "clock-latency-ns", &val))
- new_opp->clock_latency_ns = val;
-
- ret = opp_parse_supplies(new_opp, dev, opp_table);
- if (ret)
- goto free_opp;
-
- ret = _opp_add(dev, new_opp, opp_table);
- if (ret)
- goto free_opp;
-
- /* OPP to select on device suspend */
- if (of_property_read_bool(np, "opp-suspend")) {
- if (opp_table->suspend_opp) {
- dev_warn(dev, "%s: Multiple suspend OPPs found (%lu %lu)\n",
- __func__, opp_table->suspend_opp->rate,
- new_opp->rate);
- } else {
- new_opp->suspend = true;
- opp_table->suspend_opp = new_opp;
- }
- }
-
- if (new_opp->clock_latency_ns > opp_table->clock_latency_ns_max)
- opp_table->clock_latency_ns_max = new_opp->clock_latency_ns;
-
- mutex_unlock(&opp_table_lock);
-
- pr_debug("%s: turbo:%d rate:%lu uv:%lu uvmin:%lu uvmax:%lu latency:%lu\n",
- __func__, new_opp->turbo, new_opp->rate, new_opp->u_volt,
- new_opp->u_volt_min, new_opp->u_volt_max,
- new_opp->clock_latency_ns);
-
- /*
- * Notify the changes in the availability of the operable
- * frequency/voltage list.
- */
- srcu_notifier_call_chain(&opp_table->srcu_head, OPP_EVENT_ADD, new_opp);
- return 0;
-
-free_opp:
- _opp_remove(opp_table, new_opp, false);
-unlock:
- mutex_unlock(&opp_table_lock);
- return ret;
-}
-
/**
* dev_pm_opp_add() - Add an OPP table from a table definitions
* @dev: device for which we do this operation
@@ -1842,21 +1592,11 @@ struct srcu_notifier_head *dev_pm_opp_get_notifier(struct device *dev)
}
EXPORT_SYMBOL_GPL(dev_pm_opp_get_notifier);
-#ifdef CONFIG_OF
-/**
- * dev_pm_opp_of_remove_table() - Free OPP table entries created from static DT
- * entries
- * @dev: device pointer used to lookup OPP table.
- *
- * Free OPPs created using static entries present in DT.
- *
- * Locking: The internal opp_table and opp structures are RCU protected.
- * Hence this function indirectly uses RCU updater strategy with mutex locks
- * to keep the integrity of the internal data structures. Callers should ensure
- * that this function is *NOT* called under RCU protection or in contexts where
- * mutex cannot be locked.
+/*
+ * Free OPPs created using static entries present in DT and, depending on the
+ * remove_all parameter, the dynamically added entries as well.
*/
-void dev_pm_opp_of_remove_table(struct device *dev)
+void _dev_pm_opp_remove_table(struct device *dev, bool remove_all)
{
struct opp_table *opp_table;
struct dev_pm_opp *opp, *tmp;
@@ -1881,7 +1621,7 @@ void dev_pm_opp_of_remove_table(struct device *dev)
if (list_is_singular(&opp_table->dev_list)) {
/* Free static OPPs */
list_for_each_entry_safe(opp, tmp, &opp_table->opp_list, node) {
- if (!opp->dynamic)
+ if (remove_all || !opp->dynamic)
_opp_remove(opp_table, opp, true);
}
} else {
@@ -1891,160 +1631,22 @@ void dev_pm_opp_of_remove_table(struct device *dev)
unlock:
mutex_unlock(&opp_table_lock);
}
-EXPORT_SYMBOL_GPL(dev_pm_opp_of_remove_table);
-
-/* Returns opp descriptor node for a device, caller must do of_node_put() */
-struct device_node *_of_get_opp_desc_node(struct device *dev)
-{
- /*
- * TODO: Support for multiple OPP tables.
- *
- * There should be only ONE phandle present in "operating-points-v2"
- * property.
- */
-
- return of_parse_phandle(dev->of_node, "operating-points-v2", 0);
-}
-
-/* Initializes OPP tables based on new bindings */
-static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
-{
- struct device_node *np;
- struct opp_table *opp_table;
- int ret = 0, count = 0;
-
- mutex_lock(&opp_table_lock);
-
- opp_table = _managed_opp(opp_np);
- if (opp_table) {
- /* OPPs are already managed */
- if (!_add_opp_dev(dev, opp_table))
- ret = -ENOMEM;
- mutex_unlock(&opp_table_lock);
- return ret;
- }
- mutex_unlock(&opp_table_lock);
-
- /* We have opp-table node now, iterate over it and add OPPs */
- for_each_available_child_of_node(opp_np, np) {
- count++;
-
- ret = _opp_add_static_v2(dev, np);
- if (ret) {
- dev_err(dev, "%s: Failed to add OPP, %d\n", __func__,
- ret);
- goto free_table;
- }
- }
-
- /* There should be one of more OPP defined */
- if (WARN_ON(!count))
- return -ENOENT;
-
- mutex_lock(&opp_table_lock);
-
- opp_table = _find_opp_table(dev);
- if (WARN_ON(IS_ERR(opp_table))) {
- ret = PTR_ERR(opp_table);
- mutex_unlock(&opp_table_lock);
- goto free_table;
- }
-
- opp_table->np = opp_np;
- opp_table->shared_opp = of_property_read_bool(opp_np, "opp-shared");
-
- mutex_unlock(&opp_table_lock);
-
- return 0;
-
-free_table:
- dev_pm_opp_of_remove_table(dev);
-
- return ret;
-}
-
-/* Initializes OPP tables based on old-deprecated bindings */
-static int _of_add_opp_table_v1(struct device *dev)
-{
- const struct property *prop;
- const __be32 *val;
- int nr;
-
- prop = of_find_property(dev->of_node, "operating-points", NULL);
- if (!prop)
- return -ENODEV;
- if (!prop->value)
- return -ENODATA;
-
- /*
- * Each OPP is a set of tuples consisting of frequency and
- * voltage like <freq-kHz vol-uV>.
- */
- nr = prop->length / sizeof(u32);
- if (nr % 2) {
- dev_err(dev, "%s: Invalid OPP table\n", __func__);
- return -EINVAL;
- }
-
- val = prop->value;
- while (nr) {
- unsigned long freq = be32_to_cpup(val++) * 1000;
- unsigned long volt = be32_to_cpup(val++);
-
- if (_opp_add_v1(dev, freq, volt, false))
- dev_warn(dev, "%s: Failed to add OPP %ld\n",
- __func__, freq);
- nr -= 2;
- }
-
- return 0;
-}
/**
- * dev_pm_opp_of_add_table() - Initialize opp table from device tree
+ * dev_pm_opp_remove_table() - Free all OPPs associated with the device
* @dev: device pointer used to lookup OPP table.
*
- * Register the initial OPP table with the OPP library for given device.
+ * Free both the OPPs created from static entries present in DT and the
+ * dynamically added entries.
*
* Locking: The internal opp_table and opp structures are RCU protected.
* Hence this function indirectly uses RCU updater strategy with mutex locks
* to keep the integrity of the internal data structures. Callers should ensure
* that this function is *NOT* called under RCU protection or in contexts where
* mutex cannot be locked.
- *
- * Return:
- * 0 On success OR
- * Duplicate OPPs (both freq and volt are same) and opp->available
- * -EEXIST Freq are same and volt are different OR
- * Duplicate OPPs (both freq and volt are same) and !opp->available
- * -ENOMEM Memory allocation failure
- * -ENODEV when 'operating-points' property is not found or is invalid data
- * in device node.
- * -ENODATA when empty 'operating-points' property is found
- * -EINVAL when invalid entries are found in opp-v2 table
*/
-int dev_pm_opp_of_add_table(struct device *dev)
+void dev_pm_opp_remove_table(struct device *dev)
{
- struct device_node *opp_np;
- int ret;
-
- /*
- * OPPs have two version of bindings now. The older one is deprecated,
- * try for the new binding first.
- */
- opp_np = _of_get_opp_desc_node(dev);
- if (!opp_np) {
- /*
- * Try old-deprecated bindings for backward compatibility with
- * older dtbs.
- */
- return _of_add_opp_table_v1(dev);
- }
-
- ret = _of_add_opp_table_v2(dev, opp_np);
- of_node_put(opp_np);
-
- return ret;
+ _dev_pm_opp_remove_table(dev, true);
}
-EXPORT_SYMBOL_GPL(dev_pm_opp_of_add_table);
-#endif
+EXPORT_SYMBOL_GPL(dev_pm_opp_remove_table);
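A hedged sketch of how a driver's teardown path might use the new non-OF helper to drop every OPP it registered; the wrapper name is an assumption.

#include <linux/pm_opp.h>

static void example_cleanup_opps(struct device *dev)
{
	/* removes both static (DT-created) and dynamically added OPPs */
	dev_pm_opp_remove_table(dev);
}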
diff --git a/drivers/base/power/opp/cpu.c b/drivers/base/power/opp/cpu.c
index ba2bdbd932ef..83d6e7ba1a34 100644
--- a/drivers/base/power/opp/cpu.c
+++ b/drivers/base/power/opp/cpu.c
@@ -18,7 +18,6 @@
#include <linux/err.h>
#include <linux/errno.h>
#include <linux/export.h>
-#include <linux/of.h>
#include <linux/slab.h>
#include "opp.h"
@@ -119,8 +118,66 @@ void dev_pm_opp_free_cpufreq_table(struct device *dev,
EXPORT_SYMBOL_GPL(dev_pm_opp_free_cpufreq_table);
#endif /* CONFIG_CPU_FREQ */
-/* Required only for V1 bindings, as v2 can manage it from DT itself */
-int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
+void _dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask, bool of)
+{
+ struct device *cpu_dev;
+ int cpu;
+
+ WARN_ON(cpumask_empty(cpumask));
+
+ for_each_cpu(cpu, cpumask) {
+ cpu_dev = get_cpu_device(cpu);
+ if (!cpu_dev) {
+ pr_err("%s: failed to get cpu%d device\n", __func__,
+ cpu);
+ continue;
+ }
+
+ if (of)
+ dev_pm_opp_of_remove_table(cpu_dev);
+ else
+ dev_pm_opp_remove_table(cpu_dev);
+ }
+}
+
+/**
+ * dev_pm_opp_cpumask_remove_table() - Removes OPP table for @cpumask
+ * @cpumask: cpumask for which OPP table needs to be removed
+ *
+ * This removes the OPP tables for the CPUs present in @cpumask.
+ * This should be used to remove all the OPP entries associated with
+ * the CPUs in @cpumask.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function internally uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ */
+void dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask)
+{
+ _dev_pm_opp_cpumask_remove_table(cpumask, false);
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_cpumask_remove_table);
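Illustrative only: a cpufreq-style exit path removing the dynamically created OPP tables for all CPUs of a policy (the surrounding driver and callback are assumptions, not part of this patch).

#include <linux/cpufreq.h>
#include <linux/pm_opp.h>

static int example_cpufreq_exit(struct cpufreq_policy *policy)
{
	/* drop the OPP tables for every CPU that shared this policy */
	dev_pm_opp_cpumask_remove_table(policy->related_cpus);
	return 0;
}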
+
+/**
+ * dev_pm_opp_set_sharing_cpus() - Mark OPP table as shared by few CPUs
+ * @cpu_dev: CPU device for which we do this operation
+ * @cpumask: cpumask of the CPUs which share the OPP table with @cpu_dev
+ *
+ * This marks the OPP table of @cpu_dev as shared by the CPUs present in
+ * @cpumask.
+ *
+ * Returns -ENODEV if OPP table isn't already present.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function internally uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ */
+int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev,
+ const struct cpumask *cpumask)
{
struct opp_device *opp_dev;
struct opp_table *opp_table;
@@ -131,7 +188,7 @@ int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
opp_table = _find_opp_table(cpu_dev);
if (IS_ERR(opp_table)) {
- ret = -EINVAL;
+ ret = PTR_ERR(opp_table);
goto unlock;
}
@@ -152,6 +209,9 @@ int dev_pm_opp_set_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
__func__, cpu);
continue;
}
+
+ /* Mark opp-table as multiple CPUs are sharing it now */
+ opp_table->shared_opp = true;
}
unlock:
mutex_unlock(&opp_table_lock);
@@ -160,112 +220,47 @@ unlock:
}
EXPORT_SYMBOL_GPL(dev_pm_opp_set_sharing_cpus);
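A minimal sketch, assuming a cpufreq driver that creates OPPs itself and then marks them as shared across the policy's CPUs; the surrounding driver is hypothetical.

#include <linux/cpufreq.h>
#include <linux/cpumask.h>
#include <linux/device.h>
#include <linux/pm_opp.h>

static int example_mark_shared(struct device *cpu_dev,
			       struct cpufreq_policy *policy)
{
	int ret;

	/* returns -ENODEV if no OPP table was registered for cpu_dev yet */
	ret = dev_pm_opp_set_sharing_cpus(cpu_dev, policy->cpus);
	if (ret)
		dev_err(cpu_dev, "failed to mark OPPs as shared: %d\n", ret);

	return ret;
}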
-#ifdef CONFIG_OF
-void dev_pm_opp_of_cpumask_remove_table(cpumask_var_t cpumask)
-{
- struct device *cpu_dev;
- int cpu;
-
- WARN_ON(cpumask_empty(cpumask));
-
- for_each_cpu(cpu, cpumask) {
- cpu_dev = get_cpu_device(cpu);
- if (!cpu_dev) {
- pr_err("%s: failed to get cpu%d device\n", __func__,
- cpu);
- continue;
- }
-
- dev_pm_opp_of_remove_table(cpu_dev);
- }
-}
-EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_remove_table);
-
-int dev_pm_opp_of_cpumask_add_table(cpumask_var_t cpumask)
-{
- struct device *cpu_dev;
- int cpu, ret = 0;
-
- WARN_ON(cpumask_empty(cpumask));
-
- for_each_cpu(cpu, cpumask) {
- cpu_dev = get_cpu_device(cpu);
- if (!cpu_dev) {
- pr_err("%s: failed to get cpu%d device\n", __func__,
- cpu);
- continue;
- }
-
- ret = dev_pm_opp_of_add_table(cpu_dev);
- if (ret) {
- pr_err("%s: couldn't find opp table for cpu:%d, %d\n",
- __func__, cpu, ret);
-
- /* Free all other OPPs */
- dev_pm_opp_of_cpumask_remove_table(cpumask);
- break;
- }
- }
-
- return ret;
-}
-EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_add_table);
-
-/*
- * Works only for OPP v2 bindings.
+/**
+ * dev_pm_opp_get_sharing_cpus() - Get cpumask of CPUs sharing OPPs with @cpu_dev
+ * @cpu_dev: CPU device for which we do this operation
+ * @cpumask: cpumask to update with information of sharing CPUs
+ *
+ * This updates the @cpumask with CPUs that are sharing OPPs with @cpu_dev.
*
- * Returns -ENOENT if operating-points-v2 bindings aren't supported.
+ * Returns -ENODEV if OPP table isn't already present.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function internally uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
*/
-int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev, cpumask_var_t cpumask)
+int dev_pm_opp_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
{
- struct device_node *np, *tmp_np;
- struct device *tcpu_dev;
- int cpu, ret = 0;
-
- /* Get OPP descriptor node */
- np = _of_get_opp_desc_node(cpu_dev);
- if (!np) {
- dev_dbg(cpu_dev, "%s: Couldn't find cpu_dev node.\n", __func__);
- return -ENOENT;
- }
-
- cpumask_set_cpu(cpu_dev->id, cpumask);
-
- /* OPPs are shared ? */
- if (!of_property_read_bool(np, "opp-shared"))
- goto put_cpu_node;
-
- for_each_possible_cpu(cpu) {
- if (cpu == cpu_dev->id)
- continue;
+ struct opp_device *opp_dev;
+ struct opp_table *opp_table;
+ int ret = 0;
- tcpu_dev = get_cpu_device(cpu);
- if (!tcpu_dev) {
- dev_err(cpu_dev, "%s: failed to get cpu%d device\n",
- __func__, cpu);
- ret = -ENODEV;
- goto put_cpu_node;
- }
+ mutex_lock(&opp_table_lock);
- /* Get OPP descriptor node */
- tmp_np = _of_get_opp_desc_node(tcpu_dev);
- if (!tmp_np) {
- dev_err(tcpu_dev, "%s: Couldn't find tcpu_dev node.\n",
- __func__);
- ret = -ENOENT;
- goto put_cpu_node;
- }
+ opp_table = _find_opp_table(cpu_dev);
+ if (IS_ERR(opp_table)) {
+ ret = PTR_ERR(opp_table);
+ goto unlock;
+ }
- /* CPUs are sharing opp node */
- if (np == tmp_np)
- cpumask_set_cpu(cpu, cpumask);
+ cpumask_clear(cpumask);
- of_node_put(tmp_np);
+ if (opp_table->shared_opp) {
+ list_for_each_entry(opp_dev, &opp_table->dev_list, node)
+ cpumask_set_cpu(opp_dev->dev->id, cpumask);
+ } else {
+ cpumask_set_cpu(cpu_dev->id, cpumask);
}
-put_cpu_node:
- of_node_put(np);
+unlock:
+ mutex_unlock(&opp_table_lock);
+
return ret;
}
-EXPORT_SYMBOL_GPL(dev_pm_opp_of_get_sharing_cpus);
-#endif
+EXPORT_SYMBOL_GPL(dev_pm_opp_get_sharing_cpus);
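And the reverse query, sketched under the same assumptions: fill a caller-provided cpumask with the CPUs that share @cpu_dev's OPP table.

#include <linux/cpumask.h>
#include <linux/device.h>
#include <linux/pm_opp.h>

static int example_fill_sharing_mask(struct device *cpu_dev,
				     struct cpumask *mask)
{
	int ret;

	ret = dev_pm_opp_get_sharing_cpus(cpu_dev, mask);
	if (ret)
		dev_dbg(cpu_dev, "no OPP table to query: %d\n", ret);

	return ret;
}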
diff --git a/drivers/base/power/opp/of.c b/drivers/base/power/opp/of.c
new file mode 100644
index 000000000000..94d2010558e3
--- /dev/null
+++ b/drivers/base/power/opp/of.c
@@ -0,0 +1,591 @@
+/*
+ * Generic OPP OF helpers
+ *
+ * Copyright (C) 2009-2010 Texas Instruments Incorporated.
+ * Nishanth Menon
+ * Romit Dasgupta
+ * Kevin Hilman
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/cpu.h>
+#include <linux/errno.h>
+#include <linux/device.h>
+#include <linux/of.h>
+#include <linux/export.h>
+
+#include "opp.h"
+
+static struct opp_table *_managed_opp(const struct device_node *np)
+{
+ struct opp_table *opp_table;
+
+ list_for_each_entry_rcu(opp_table, &opp_tables, node) {
+ if (opp_table->np == np) {
+ /*
+ * Multiple devices can point to the same OPP table and
+ * so will have same node-pointer, np.
+ *
+ * But the OPPs will be considered as shared only if the
+ * OPP table contains a "opp-shared" property.
+ */
+ return opp_table->shared_opp ? opp_table : NULL;
+ }
+ }
+
+ return NULL;
+}
+
+void _of_init_opp_table(struct opp_table *opp_table, struct device *dev)
+{
+ struct device_node *np;
+
+ /*
+ * Only required for backward compatibility with v1 bindings, but isn't
+ * harmful for other cases. And so we do it unconditionally.
+ */
+ np = of_node_get(dev->of_node);
+ if (np) {
+ u32 val;
+
+ if (!of_property_read_u32(np, "clock-latency", &val))
+ opp_table->clock_latency_ns_max = val;
+ of_property_read_u32(np, "voltage-tolerance",
+ &opp_table->voltage_tolerance_v1);
+ of_node_put(np);
+ }
+}
+
+static bool _opp_is_supported(struct device *dev, struct opp_table *opp_table,
+ struct device_node *np)
+{
+ unsigned int count = opp_table->supported_hw_count;
+ u32 version;
+ int ret;
+
+ if (!opp_table->supported_hw)
+ return true;
+
+ while (count--) {
+ ret = of_property_read_u32_index(np, "opp-supported-hw", count,
+ &version);
+ if (ret) {
+ dev_warn(dev, "%s: failed to read opp-supported-hw property at index %d: %d\n",
+ __func__, count, ret);
+ return false;
+ }
+
+ /* Both of these are bitwise masks of the versions */
+ if (!(version & opp_table->supported_hw[count]))
+ return false;
+ }
+
+ return true;
+}
+
+/* TODO: Support multiple regulators */
+static int opp_parse_supplies(struct dev_pm_opp *opp, struct device *dev,
+ struct opp_table *opp_table)
+{
+ u32 microvolt[3] = {0};
+ u32 val;
+ int count, ret;
+ struct property *prop = NULL;
+ char name[NAME_MAX];
+
+ /* Search for "opp-microvolt-<name>" */
+ if (opp_table->prop_name) {
+ snprintf(name, sizeof(name), "opp-microvolt-%s",
+ opp_table->prop_name);
+ prop = of_find_property(opp->np, name, NULL);
+ }
+
+ if (!prop) {
+ /* Search for "opp-microvolt" */
+ sprintf(name, "opp-microvolt");
+ prop = of_find_property(opp->np, name, NULL);
+
+ /* Missing property isn't a problem, but an invalid entry is */
+ if (!prop)
+ return 0;
+ }
+
+ count = of_property_count_u32_elems(opp->np, name);
+ if (count < 0) {
+ dev_err(dev, "%s: Invalid %s property (%d)\n",
+ __func__, name, count);
+ return count;
+ }
+
+ /* There can be one or three elements here */
+ if (count != 1 && count != 3) {
+ dev_err(dev, "%s: Invalid number of elements in %s property (%d)\n",
+ __func__, name, count);
+ return -EINVAL;
+ }
+
+ ret = of_property_read_u32_array(opp->np, name, microvolt, count);
+ if (ret) {
+ dev_err(dev, "%s: error parsing %s: %d\n", __func__, name, ret);
+ return -EINVAL;
+ }
+
+ opp->u_volt = microvolt[0];
+
+ if (count == 1) {
+ opp->u_volt_min = opp->u_volt;
+ opp->u_volt_max = opp->u_volt;
+ } else {
+ opp->u_volt_min = microvolt[1];
+ opp->u_volt_max = microvolt[2];
+ }
+
+ /* Search for "opp-microamp-<name>" */
+ prop = NULL;
+ if (opp_table->prop_name) {
+ snprintf(name, sizeof(name), "opp-microamp-%s",
+ opp_table->prop_name);
+ prop = of_find_property(opp->np, name, NULL);
+ }
+
+ if (!prop) {
+ /* Search for "opp-microamp" */
+ sprintf(name, "opp-microamp");
+ prop = of_find_property(opp->np, name, NULL);
+ }
+
+ if (prop && !of_property_read_u32(opp->np, name, &val))
+ opp->u_amp = val;
+
+ return 0;
+}
+
+/**
+ * dev_pm_opp_of_remove_table() - Free OPP table entries created from static DT
+ * entries
+ * @dev: device pointer used to lookup OPP table.
+ *
+ * Free OPPs created using static entries present in DT.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function indirectly uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ */
+void dev_pm_opp_of_remove_table(struct device *dev)
+{
+ _dev_pm_opp_remove_table(dev, false);
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_of_remove_table);
+
+/* Returns opp descriptor node for a device, caller must do of_node_put() */
+struct device_node *_of_get_opp_desc_node(struct device *dev)
+{
+ /*
+ * TODO: Support for multiple OPP tables.
+ *
+ * There should be only ONE phandle present in "operating-points-v2"
+ * property.
+ */
+
+ return of_parse_phandle(dev->of_node, "operating-points-v2", 0);
+}
+
+/**
+ * _opp_add_static_v2() - Allocate static OPPs (As per 'v2' DT bindings)
+ * @dev: device for which we do this operation
+ * @np: device node
+ *
+ * This function adds an opp definition to the opp table and returns status. The
+ * opp can be controlled using dev_pm_opp_enable/disable functions and may be
+ * removed by dev_pm_opp_remove.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function internally uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ *
+ * Return:
+ * 0 On success OR
+ * Duplicate OPPs (both freq and volt are the same) and opp->available
+ * -EEXIST Freq is the same but volt is different OR
+ * Duplicate OPPs (both freq and volt are the same) and !opp->available
+ * -ENOMEM Memory allocation failure
+ * -EINVAL Failed to parse the OPP node
+ */
+static int _opp_add_static_v2(struct device *dev, struct device_node *np)
+{
+ struct opp_table *opp_table;
+ struct dev_pm_opp *new_opp;
+ u64 rate;
+ u32 val;
+ int ret;
+
+ /* Hold our table modification lock here */
+ mutex_lock(&opp_table_lock);
+
+ new_opp = _allocate_opp(dev, &opp_table);
+ if (!new_opp) {
+ ret = -ENOMEM;
+ goto unlock;
+ }
+
+ ret = of_property_read_u64(np, "opp-hz", &rate);
+ if (ret < 0) {
+ dev_err(dev, "%s: opp-hz not found\n", __func__);
+ goto free_opp;
+ }
+
+ /* Check whether the OPP is supported by the hardware's hierarchy of versions */
+ if (!_opp_is_supported(dev, opp_table, np)) {
+ dev_dbg(dev, "OPP not supported by hardware: %llu\n", rate);
+ goto free_opp;
+ }
+
+ /*
+ * Rate is defined as an unsigned long in the clk API, so cast
+ * explicitly to that type. This must be revisited once the clk API
+ * guarantees a 64 bit rate.
+ */
+ new_opp->rate = (unsigned long)rate;
+ new_opp->turbo = of_property_read_bool(np, "turbo-mode");
+
+ new_opp->np = np;
+ new_opp->dynamic = false;
+ new_opp->available = true;
+
+ if (!of_property_read_u32(np, "clock-latency-ns", &val))
+ new_opp->clock_latency_ns = val;
+
+ ret = opp_parse_supplies(new_opp, dev, opp_table);
+ if (ret)
+ goto free_opp;
+
+ ret = _opp_add(dev, new_opp, opp_table);
+ if (ret)
+ goto free_opp;
+
+ /* OPP to select on device suspend */
+ if (of_property_read_bool(np, "opp-suspend")) {
+ if (opp_table->suspend_opp) {
+ dev_warn(dev, "%s: Multiple suspend OPPs found (%lu %lu)\n",
+ __func__, opp_table->suspend_opp->rate,
+ new_opp->rate);
+ } else {
+ new_opp->suspend = true;
+ opp_table->suspend_opp = new_opp;
+ }
+ }
+
+ if (new_opp->clock_latency_ns > opp_table->clock_latency_ns_max)
+ opp_table->clock_latency_ns_max = new_opp->clock_latency_ns;
+
+ mutex_unlock(&opp_table_lock);
+
+ pr_debug("%s: turbo:%d rate:%lu uv:%lu uvmin:%lu uvmax:%lu latency:%lu\n",
+ __func__, new_opp->turbo, new_opp->rate, new_opp->u_volt,
+ new_opp->u_volt_min, new_opp->u_volt_max,
+ new_opp->clock_latency_ns);
+
+ /*
+ * Notify the changes in the availability of the operable
+ * frequency/voltage list.
+ */
+ srcu_notifier_call_chain(&opp_table->srcu_head, OPP_EVENT_ADD, new_opp);
+ return 0;
+
+free_opp:
+ _opp_remove(opp_table, new_opp, false);
+unlock:
+ mutex_unlock(&opp_table_lock);
+ return ret;
+}
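
The duplicate-handling convention in the Return block above (0 for a benign duplicate, -EEXIST for a real conflict) is the same one exposed by the public dev_pm_opp_add() helper, so a caller registering OPPs dynamically can distinguish the two cases. A hedged sketch of such a caller; the frequency/voltage arrays and helper name are assumptions, not part of this patch:

#include <linux/device.h>
#include <linux/pm_opp.h>

/* Register a table of dynamic OPPs, tolerating benign duplicates. */
static int example_register_opps(struct device *dev, const unsigned long *freqs,
                                 const unsigned long *volts, unsigned int num)
{
        unsigned int i;
        int ret;

        for (i = 0; i < num; i++) {
                ret = dev_pm_opp_add(dev, freqs[i], volts[i]);
                if (ret == -EEXIST) {
                        /* Same frequency, different voltage: real conflict. */
                        dev_err(dev, "conflicting OPP at %lu Hz\n", freqs[i]);
                        return ret;
                }
                if (ret)
                        return ret;     /* -ENOMEM or other hard failure */
        }

        return 0;
}
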
+
+/* Initializes OPP tables based on new bindings */
+static int _of_add_opp_table_v2(struct device *dev, struct device_node *opp_np)
+{
+ struct device_node *np;
+ struct opp_table *opp_table;
+ int ret = 0, count = 0;
+
+ mutex_lock(&opp_table_lock);
+
+ opp_table = _managed_opp(opp_np);
+ if (opp_table) {
+ /* OPPs are already managed */
+ if (!_add_opp_dev(dev, opp_table))
+ ret = -ENOMEM;
+ mutex_unlock(&opp_table_lock);
+ return ret;
+ }
+ mutex_unlock(&opp_table_lock);
+
+ /* We have the opp-table node now; iterate over it and add OPPs */
+ for_each_available_child_of_node(opp_np, np) {
+ count++;
+
+ ret = _opp_add_static_v2(dev, np);
+ if (ret) {
+ dev_err(dev, "%s: Failed to add OPP, %d\n", __func__,
+ ret);
+ goto free_table;
+ }
+ }
+
+ /* There should be one or more OPPs defined */
+ if (WARN_ON(!count))
+ return -ENOENT;
+
+ mutex_lock(&opp_table_lock);
+
+ opp_table = _find_opp_table(dev);
+ if (WARN_ON(IS_ERR(opp_table))) {
+ ret = PTR_ERR(opp_table);
+ mutex_unlock(&opp_table_lock);
+ goto free_table;
+ }
+
+ opp_table->np = opp_np;
+ opp_table->shared_opp = of_property_read_bool(opp_np, "opp-shared");
+
+ mutex_unlock(&opp_table_lock);
+
+ return 0;
+
+free_table:
+ dev_pm_opp_of_remove_table(dev);
+
+ return ret;
+}
+
+/* Initializes OPP tables based on old-deprecated bindings */
+static int _of_add_opp_table_v1(struct device *dev)
+{
+ const struct property *prop;
+ const __be32 *val;
+ int nr;
+
+ prop = of_find_property(dev->of_node, "operating-points", NULL);
+ if (!prop)
+ return -ENODEV;
+ if (!prop->value)
+ return -ENODATA;
+
+ /*
+ * Each OPP is a <frequency, voltage> tuple in the form
+ * <freq-kHz vol-uV>.
+ */
+ nr = prop->length / sizeof(u32);
+ if (nr % 2) {
+ dev_err(dev, "%s: Invalid OPP table\n", __func__);
+ return -EINVAL;
+ }
+
+ val = prop->value;
+ while (nr) {
+ unsigned long freq = be32_to_cpup(val++) * 1000;
+ unsigned long volt = be32_to_cpup(val++);
+
+ if (_opp_add_v1(dev, freq, volt, false))
+ dev_warn(dev, "%s: Failed to add OPP %ld\n",
+ __func__, freq);
+ nr -= 2;
+ }
+
+ return 0;
+}
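
For reference, the v1 "operating-points" property parsed above is a flat list of <freq-kHz vol-uV> pairs; the sketch below shows the decoding in isolation, with made-up values and a hypothetical helper name. The values are assumed to be already converted to CPU endianness (the driver itself uses be32_to_cpup() on the raw property).

#include <linux/kernel.h>
#include <linux/types.h>

/*
 * operating-points = <998400  1075000>,     (998.4 MHz at 1.075 V)
 *                    <1094400 1175000>;     (1.0944 GHz at 1.175 V)
 */
static void example_decode_v1_pairs(const u32 *vals, int nr)
{
        while (nr) {
                unsigned long freq = *vals++ * 1000;    /* kHz -> Hz */
                unsigned long volt = *vals++;           /* uV */

                pr_info("OPP: %lu Hz at %lu uV\n", freq, volt);
                nr -= 2;
        }
}
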
+
+/**
+ * dev_pm_opp_of_add_table() - Initialize opp table from device tree
+ * @dev: device pointer used to lookup OPP table.
+ *
+ * Register the initial OPP table with the OPP library for given device.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function indirectly uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ *
+ * Return:
+ * 0 On success OR
+ * Duplicate OPPs (both freq and volt are the same) and opp->available
+ * -EEXIST Freq is the same but volt is different OR
+ * Duplicate OPPs (both freq and volt are the same) and !opp->available
+ * -ENOMEM Memory allocation failure
+ * -ENODEV when the 'operating-points' property is not found or contains
+ * invalid data in the device node.
+ * -ENODATA when an empty 'operating-points' property is found
+ * -EINVAL when invalid entries are found in the opp-v2 table
+ */
+int dev_pm_opp_of_add_table(struct device *dev)
+{
+ struct device_node *opp_np;
+ int ret;
+
+ /*
+ * OPPs now have two versions of bindings. The older one is deprecated,
+ * so try the new binding first.
+ */
+ opp_np = _of_get_opp_desc_node(dev);
+ if (!opp_np) {
+ /*
+ * Try old-deprecated bindings for backward compatibility with
+ * older dtbs.
+ */
+ return _of_add_opp_table_v1(dev);
+ }
+
+ ret = _of_add_opp_table_v2(dev, opp_np);
+ of_node_put(opp_np);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_of_add_table);
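
A minimal consumer sketch for the two exported entry points, assuming a platform driver whose device node carries either binding; everything except the two OPP calls is hypothetical:

#include <linux/platform_device.h>
#include <linux/pm_opp.h>

static int example_probe(struct platform_device *pdev)
{
        int ret;

        /* Populate the OPP table from operating-points(-v2). */
        ret = dev_pm_opp_of_add_table(&pdev->dev);
        if (ret)
                return ret;

        /* ... clock/regulator setup, cpufreq/devfreq registration ... */
        return 0;
}

static int example_remove(struct platform_device *pdev)
{
        /* Drop only the static entries that came from DT. */
        dev_pm_opp_of_remove_table(&pdev->dev);
        return 0;
}
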
+
+/* CPU device specific helpers */
+
+/**
+ * dev_pm_opp_of_cpumask_remove_table() - Removes OPP table for @cpumask
+ * @cpumask: cpumask for which OPP table needs to be removed
+ *
+ * This removes the OPP tables for CPUs present in the @cpumask.
+ * This should be used only to remove static entries created from DT.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function internally uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ */
+void dev_pm_opp_of_cpumask_remove_table(const struct cpumask *cpumask)
+{
+ _dev_pm_opp_cpumask_remove_table(cpumask, true);
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_remove_table);
+
+/**
+ * dev_pm_opp_of_cpumask_add_table() - Adds OPP table for @cpumask
+ * @cpumask: cpumask for which OPP table needs to be added.
+ *
+ * This adds the OPP tables for CPUs present in the @cpumask.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function internally uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ */
+int dev_pm_opp_of_cpumask_add_table(const struct cpumask *cpumask)
+{
+ struct device *cpu_dev;
+ int cpu, ret = 0;
+
+ WARN_ON(cpumask_empty(cpumask));
+
+ for_each_cpu(cpu, cpumask) {
+ cpu_dev = get_cpu_device(cpu);
+ if (!cpu_dev) {
+ pr_err("%s: failed to get cpu%d device\n", __func__,
+ cpu);
+ continue;
+ }
+
+ ret = dev_pm_opp_of_add_table(cpu_dev);
+ if (ret) {
+ pr_err("%s: couldn't find opp table for cpu:%d, %d\n",
+ __func__, cpu, ret);
+
+ /* Free all other OPPs */
+ dev_pm_opp_of_cpumask_remove_table(cpumask);
+ break;
+ }
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_of_cpumask_add_table);
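
A sketch of how a cpufreq-style driver might use the cpumask helpers above; the policy handling around the two calls is illustrative only:

#include <linux/cpufreq.h>
#include <linux/pm_opp.h>

static int example_cpufreq_init(struct cpufreq_policy *policy)
{
        int ret;

        ret = dev_pm_opp_of_cpumask_add_table(policy->cpus);
        if (ret) {
                pr_err("OPP tables missing for policy%u: %d\n",
                       policy->cpu, ret);
                return ret;
        }

        /* ... build the frequency table, register with cpufreq ... */
        return 0;
}

static int example_cpufreq_exit(struct cpufreq_policy *policy)
{
        dev_pm_opp_of_cpumask_remove_table(policy->cpus);
        return 0;
}
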
+
+/*
+ * Works only for OPP v2 bindings.
+ *
+ * Returns -ENOENT if operating-points-v2 bindings aren't supported.
+ */
+/**
+ * dev_pm_opp_of_get_sharing_cpus() - Get cpumask of CPUs sharing OPPs with
+ * @cpu_dev using operating-points-v2
+ * bindings.
+ *
+ * @cpu_dev: CPU device for which we do this operation
+ * @cpumask: cpumask to update with information of sharing CPUs
+ *
+ * This updates the @cpumask with CPUs that are sharing OPPs with @cpu_dev.
+ *
+ * Returns -ENOENT if operating-points-v2 isn't present for @cpu_dev.
+ *
+ * Locking: The internal opp_table and opp structures are RCU protected.
+ * Hence this function internally uses RCU updater strategy with mutex locks
+ * to keep the integrity of the internal data structures. Callers should ensure
+ * that this function is *NOT* called under RCU protection or in contexts where
+ * mutex cannot be locked.
+ */
+int dev_pm_opp_of_get_sharing_cpus(struct device *cpu_dev,
+ struct cpumask *cpumask)
+{
+ struct device_node *np, *tmp_np;
+ struct device *tcpu_dev;
+ int cpu, ret = 0;
+
+ /* Get OPP descriptor node */
+ np = _of_get_opp_desc_node(cpu_dev);
+ if (!np) {
+ dev_dbg(cpu_dev, "%s: Couldn't find cpu_dev node.\n", __func__);
+ return -ENOENT;
+ }
+
+ cpumask_set_cpu(cpu_dev->id, cpumask);
+
+ /* Are the OPPs shared? */
+ if (!of_property_read_bool(np, "opp-shared"))
+ goto put_cpu_node;
+
+ for_each_possible_cpu(cpu) {
+ if (cpu == cpu_dev->id)
+ continue;
+
+ tcpu_dev = get_cpu_device(cpu);
+ if (!tcpu_dev) {
+ dev_err(cpu_dev, "%s: failed to get cpu%d device\n",
+ __func__, cpu);
+ ret = -ENODEV;
+ goto put_cpu_node;
+ }
+
+ /* Get OPP descriptor node */
+ tmp_np = _of_get_opp_desc_node(tcpu_dev);
+ if (!tmp_np) {
+ dev_err(tcpu_dev, "%s: Couldn't find tcpu_dev node.\n",
+ __func__);
+ ret = -ENOENT;
+ goto put_cpu_node;
+ }
+
+ /* CPUs are sharing opp node */
+ if (np == tmp_np)
+ cpumask_set_cpu(cpu, cpumask);
+
+ of_node_put(tmp_np);
+ }
+
+put_cpu_node:
+ of_node_put(np);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(dev_pm_opp_of_get_sharing_cpus);
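
Putting the CPU helpers together, a driver can ask which CPUs share cpu0's table and then populate the tables for all of them. A sketch under those assumptions, with a hypothetical function name and abbreviated error handling:

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/pm_opp.h>
#include <linux/slab.h>

static int example_init_shared_opps(void)
{
        struct device *cpu_dev = get_cpu_device(0);
        cpumask_var_t shared;
        int ret;

        if (!cpu_dev)
                return -ENODEV;
        if (!zalloc_cpumask_var(&shared, GFP_KERNEL))
                return -ENOMEM;

        ret = dev_pm_opp_of_get_sharing_cpus(cpu_dev, shared);
        if (ret == -ENOENT) {
                /* No operating-points-v2 node: fall back to a per-CPU table. */
                cpumask_set_cpu(0, shared);
                ret = 0;
        }

        if (!ret)
                ret = dev_pm_opp_of_cpumask_add_table(shared);

        free_cpumask_var(shared);
        return ret;
}
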
diff --git a/drivers/base/power/opp/opp.h b/drivers/base/power/opp/opp.h
index f67f806fcf3a..20f3be22e060 100644
--- a/drivers/base/power/opp/opp.h
+++ b/drivers/base/power/opp/opp.h
@@ -28,6 +28,8 @@ struct regulator;
/* Lock to allow exclusive modification to the device and opp lists */
extern struct mutex opp_table_lock;
+extern struct list_head opp_tables;
+
/*
* Internal data structure organization with the OPP layer library is as
* follows:
@@ -183,6 +185,18 @@ struct opp_table {
struct opp_table *_find_opp_table(struct device *dev);
struct opp_device *_add_opp_dev(const struct device *dev, struct opp_table *opp_table);
struct device_node *_of_get_opp_desc_node(struct device *dev);
+void _dev_pm_opp_remove_table(struct device *dev, bool remove_all);
+struct dev_pm_opp *_allocate_opp(struct device *dev, struct opp_table **opp_table);
+int _opp_add(struct device *dev, struct dev_pm_opp *new_opp, struct opp_table *opp_table);
+void _opp_remove(struct opp_table *opp_table, struct dev_pm_opp *opp, bool notify);
+int _opp_add_v1(struct device *dev, unsigned long freq, long u_volt, bool dynamic);
+void _dev_pm_opp_cpumask_remove_table(const struct cpumask *cpumask, bool of);
+
+#ifdef CONFIG_OF
+void _of_init_opp_table(struct opp_table *opp_table, struct device *dev);
+#else
+static inline void _of_init_opp_table(struct opp_table *opp_table, struct device *dev) {}
+#endif
#ifdef CONFIG_DEBUG_FS
void opp_debug_remove_one(struct dev_pm_opp *opp);
diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index 4c7055009bd6..b74690418504 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -1506,11 +1506,16 @@ int pm_runtime_force_resume(struct device *dev)
goto out;
}
- ret = callback(dev);
+ ret = pm_runtime_set_active(dev);
if (ret)
goto out;
- pm_runtime_set_active(dev);
+ ret = callback(dev);
+ if (ret) {
+ pm_runtime_set_suspended(dev);
+ goto out;
+ }
+
pm_runtime_mark_last_busy(dev);
out:
pm_runtime_enable(dev);
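
The reordering above makes the device runtime-active before its resume callback runs, and rolls the status back to suspended if that callback fails, so the callback and anything it calls see a consistent runtime-PM state. The usual consumer of these helpers is a driver that reuses its runtime-PM callbacks for system sleep; a sketch with hypothetical callback bodies:

#include <linux/pm.h>
#include <linux/pm_runtime.h>

static int example_runtime_suspend(struct device *dev)
{
        /* ... gate clocks, drop regulators ... */
        return 0;
}

static int example_runtime_resume(struct device *dev)
{
        /* ... re-enable clocks and regulators ... */
        return 0;
}

static const struct dev_pm_ops example_pm_ops = {
        SET_RUNTIME_PM_OPS(example_runtime_suspend,
                           example_runtime_resume, NULL)
        SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
                                pm_runtime_force_resume)
};
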
diff --git a/drivers/base/property.c b/drivers/base/property.c
index 7f692accdc90..f38c21de29b7 100644
--- a/drivers/base/property.c
+++ b/drivers/base/property.c
@@ -19,6 +19,11 @@
#include <linux/etherdevice.h>
#include <linux/phy.h>
+struct property_set {
+ struct fwnode_handle fwnode;
+ struct property_entry *properties;
+};
+
static inline bool is_pset_node(struct fwnode_handle *fwnode)
{
return !IS_ERR_OR_NULL(fwnode) && fwnode->type == FWNODE_PDATA;
@@ -801,14 +806,14 @@ static struct property_set *pset_copy_set(const struct property_set *pset)
}
/**
- * device_remove_property_set - Remove properties from a device object.
+ * device_remove_properties - Remove properties from a device object.
* @dev: Device whose properties to remove.
*
* The function removes properties previously associated to the device
- * secondary firmware node with device_add_property_set(). Memory allocated
+ * secondary firmware node with device_add_properties(). Memory allocated
* to the properties will also be released.
*/
-void device_remove_property_set(struct device *dev)
+void device_remove_properties(struct device *dev)
{
struct fwnode_handle *fwnode;
@@ -831,24 +836,27 @@ void device_remove_property_set(struct device *dev)
}
}
}
-EXPORT_SYMBOL_GPL(device_remove_property_set);
+EXPORT_SYMBOL_GPL(device_remove_properties);
/**
- * device_add_property_set - Add a collection of properties to a device object.
+ * device_add_properties - Add a collection of properties to a device object.
* @dev: Device to add properties to.
- * @pset: Collection of properties to add.
+ * @properties: Collection of properties to add.
*
- * Associate a collection of device properties represented by @pset with @dev
- * as its secondary firmware node. The function takes a copy of @pset.
+ * Associate a collection of device properties represented by @properties with
+ * @dev as its secondary firmware node. The function takes a copy of
+ * @properties.
*/
-int device_add_property_set(struct device *dev, const struct property_set *pset)
+int device_add_properties(struct device *dev, struct property_entry *properties)
{
- struct property_set *p;
+ struct property_set *p, pset;
- if (!pset)
+ if (!properties)
return -EINVAL;
- p = pset_copy_set(pset);
+ pset.properties = properties;
+
+ p = pset_copy_set(&pset);
if (IS_ERR(p))
return PTR_ERR(p);
@@ -856,7 +864,7 @@ int device_add_property_set(struct device *dev, const struct property_set *pset)
set_secondary_fwnode(dev, &p->fwnode);
return 0;
}
-EXPORT_SYMBOL_GPL(device_add_property_set);
+EXPORT_SYMBOL_GPL(device_add_properties);
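
With the rename, a caller now passes a bare zero-terminated array of property_entry instead of wrapping it in a property_set; the core still copies the array. A sketch with illustrative property names (the helper and names are assumptions, not part of this patch):

#include <linux/property.h>

static struct property_entry example_props[] = {
        PROPERTY_ENTRY_U32("linux,example-delay-ms", 20),
        PROPERTY_ENTRY_BOOL("linux,example-enable"),
        { }     /* terminator */
};

static int example_attach_props(struct device *dev)
{
        /* The core copies the array, so it need not outlive this call. */
        return device_add_properties(dev, example_props);
}
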
/**
* device_get_next_child_node - Return the next child node handle for a device
diff --git a/drivers/base/regmap/internal.h b/drivers/base/regmap/internal.h
index 5c79526245c2..a0380338946a 100644
--- a/drivers/base/regmap/internal.h
+++ b/drivers/base/regmap/internal.h
@@ -13,6 +13,7 @@
#ifndef _REGMAP_INTERNAL_H
#define _REGMAP_INTERNAL_H
+#include <linux/device.h>
#include <linux/regmap.h>
#include <linux/fs.h>
#include <linux/list.h>
diff --git a/drivers/base/regmap/regcache-flat.c b/drivers/base/regmap/regcache-flat.c
index 3ee72550b1e3..4d2e50bfc726 100644
--- a/drivers/base/regmap/regcache-flat.c
+++ b/drivers/base/regmap/regcache-flat.c
@@ -27,7 +27,7 @@ static int regcache_flat_init(struct regmap *map)
int i;
unsigned int *cache;
- if (!map || map->reg_stride_order < 0)
+ if (!map || map->reg_stride_order < 0 || !map->max_register)
return -EINVAL;
map->cache = kcalloc(regcache_flat_get_index(map, map->max_register)
diff --git a/drivers/base/regmap/regcache.c b/drivers/base/regmap/regcache.c
index 4170b7d95276..df7ff7290821 100644
--- a/drivers/base/regmap/regcache.c
+++ b/drivers/base/regmap/regcache.c
@@ -529,7 +529,7 @@ EXPORT_SYMBOL_GPL(regcache_mark_dirty);
* regcache_cache_bypass: Put a register map into cache bypass mode
*
* @map: map to configure
- * @cache_bypass: flag if changes should not be written to the hardware
+ * @cache_bypass: flag if changes should not be written to the cache
*
* When a register map is marked with the cache bypass option, writes
* to the register map API will only update the hardware and not the
diff --git a/drivers/base/regmap/regmap-mmio.c b/drivers/base/regmap/regmap-mmio.c
index 7526906ca080..5189fd6182f6 100644
--- a/drivers/base/regmap/regmap-mmio.c
+++ b/drivers/base/regmap/regmap-mmio.c
@@ -23,6 +23,8 @@
#include <linux/regmap.h>
#include <linux/slab.h>
+#include "internal.h"
+
struct regmap_mmio_context {
void __iomem *regs;
unsigned val_bytes;
@@ -212,6 +214,7 @@ static const struct regmap_bus regmap_mmio = {
.reg_write = regmap_mmio_write,
.reg_read = regmap_mmio_read,
.free_context = regmap_mmio_free_context,
+ .val_format_endian_default = REGMAP_ENDIAN_LITTLE,
};
static struct regmap_mmio_context *regmap_mmio_gen_context(struct device *dev,
@@ -245,7 +248,7 @@ static struct regmap_mmio_context *regmap_mmio_gen_context(struct device *dev,
ctx->val_bytes = config->val_bits / 8;
ctx->clk = ERR_PTR(-ENODEV);
- switch (config->reg_format_endian) {
+ switch (regmap_get_val_endian(dev, &regmap_mmio, config)) {
case REGMAP_ENDIAN_DEFAULT:
case REGMAP_ENDIAN_LITTLE:
#ifdef __LITTLE_ENDIAN
diff --git a/drivers/base/regmap/regmap-spmi.c b/drivers/base/regmap/regmap-spmi.c
index 7e58f6560399..4a36e415e938 100644
--- a/drivers/base/regmap/regmap-spmi.c
+++ b/drivers/base/regmap/regmap-spmi.c
@@ -142,7 +142,7 @@ static int regmap_spmi_ext_read(void *context,
while (val_size) {
len = min_t(size_t, val_size, 8);
- err = spmi_ext_register_readl(context, addr, val, val_size);
+ err = spmi_ext_register_readl(context, addr, val, len);
if (err)
goto err_out;
diff --git a/drivers/bcma/driver_chipcommon_sflash.c b/drivers/bcma/driver_chipcommon_sflash.c
index 04d706ca5f43..35b13a08ca3e 100644
--- a/drivers/bcma/driver_chipcommon_sflash.c
+++ b/drivers/bcma/driver_chipcommon_sflash.c
@@ -146,7 +146,6 @@ int bcma_sflash_init(struct bcma_drv_cc *cc)
return -ENOTSUPP;
}
- sflash->window = BCMA_SOC_FLASH2;
sflash->blocksize = e->blocksize;
sflash->numblocks = e->numblocks;
sflash->size = sflash->blocksize * sflash->numblocks;
diff --git a/drivers/block/aoe/aoecmd.c b/drivers/block/aoe/aoecmd.c
index 437b3a822f44..d597e432e195 100644
--- a/drivers/block/aoe/aoecmd.c
+++ b/drivers/block/aoe/aoecmd.c
@@ -861,7 +861,7 @@ rqbiocnt(struct request *r)
* discussion.
*
* We cannot use get_page in the workaround, because it insists on a
- * positive page count as a precondition. So we use _count directly.
+ * positive page count as a precondition. So we use _refcount directly.
*/
static void
bio_pageinc(struct bio *bio)
diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index 51a071e32221..c04bd9bc39fd 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -381,7 +381,7 @@ static int brd_rw_page(struct block_device *bdev, sector_t sector,
#ifdef CONFIG_BLK_DEV_RAM_DAX
static long brd_direct_access(struct block_device *bdev, sector_t sector,
- void __pmem **kaddr, pfn_t *pfn)
+ void __pmem **kaddr, pfn_t *pfn, long size)
{
struct brd_device *brd = bdev->bd_disk->private_data;
struct page *page;
diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index fa209773d494..2ba1494b2799 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -2761,7 +2761,7 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
q->backing_dev_info.congested_data = device;
blk_queue_make_request(q, drbd_make_request);
- blk_queue_flush(q, REQ_FLUSH | REQ_FUA);
+ blk_queue_write_cache(q, true, true);
/* Setting the max_hw_sectors to an odd value of 8kibyte here
This triggers a max_bio_size message upon first attach or connect */
blk_queue_max_hw_sectors(q, DRBD_MAX_BIO_SIZE_SAFE >> 8);
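
The blk_queue_flush() conversions throughout this series follow a fixed mapping onto the new interface; a sketch of the equivalences and of a generic setup helper (the helper itself is hypothetical):

#include <linux/blkdev.h>

/*
 *   blk_queue_flush(q, REQ_FLUSH | REQ_FUA)  ->  blk_queue_write_cache(q, true, true);
 *   blk_queue_flush(q, REQ_FLUSH)            ->  blk_queue_write_cache(q, true, false);
 *   blk_queue_flush(q, 0)                    ->  blk_queue_write_cache(q, false, false);
 */
static void example_setup_cache(struct request_queue *q, bool volatile_cache,
                                bool supports_fua)
{
        /* Advertise a volatile write cache and, optionally, FUA support. */
        blk_queue_write_cache(q, volatile_cache,
                              volatile_cache && supports_fua);
}
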
diff --git a/drivers/block/drbd/drbd_nl.c b/drivers/block/drbd/drbd_nl.c
index 1fd1dccebb6b..0bac9c8246bc 100644
--- a/drivers/block/drbd/drbd_nl.c
+++ b/drivers/block/drbd/drbd_nl.c
@@ -3633,14 +3633,15 @@ static int nla_put_status_info(struct sk_buff *skb, struct drbd_device *device,
goto nla_put_failure;
if (nla_put_u32(skb, T_sib_reason, sib ? sib->sib_reason : SIB_GET_STATUS_REPLY) ||
nla_put_u32(skb, T_current_state, device->state.i) ||
- nla_put_u64(skb, T_ed_uuid, device->ed_uuid) ||
- nla_put_u64(skb, T_capacity, drbd_get_capacity(device->this_bdev)) ||
- nla_put_u64(skb, T_send_cnt, device->send_cnt) ||
- nla_put_u64(skb, T_recv_cnt, device->recv_cnt) ||
- nla_put_u64(skb, T_read_cnt, device->read_cnt) ||
- nla_put_u64(skb, T_writ_cnt, device->writ_cnt) ||
- nla_put_u64(skb, T_al_writ_cnt, device->al_writ_cnt) ||
- nla_put_u64(skb, T_bm_writ_cnt, device->bm_writ_cnt) ||
+ nla_put_u64_0pad(skb, T_ed_uuid, device->ed_uuid) ||
+ nla_put_u64_0pad(skb, T_capacity,
+ drbd_get_capacity(device->this_bdev)) ||
+ nla_put_u64_0pad(skb, T_send_cnt, device->send_cnt) ||
+ nla_put_u64_0pad(skb, T_recv_cnt, device->recv_cnt) ||
+ nla_put_u64_0pad(skb, T_read_cnt, device->read_cnt) ||
+ nla_put_u64_0pad(skb, T_writ_cnt, device->writ_cnt) ||
+ nla_put_u64_0pad(skb, T_al_writ_cnt, device->al_writ_cnt) ||
+ nla_put_u64_0pad(skb, T_bm_writ_cnt, device->bm_writ_cnt) ||
nla_put_u32(skb, T_ap_bio_cnt, atomic_read(&device->ap_bio_cnt)) ||
nla_put_u32(skb, T_ap_pending_cnt, atomic_read(&device->ap_pending_cnt)) ||
nla_put_u32(skb, T_rs_pending_cnt, atomic_read(&device->rs_pending_cnt)))
@@ -3657,13 +3658,16 @@ static int nla_put_status_info(struct sk_buff *skb, struct drbd_device *device,
goto nla_put_failure;
if (nla_put_u32(skb, T_disk_flags, device->ldev->md.flags) ||
- nla_put_u64(skb, T_bits_total, drbd_bm_bits(device)) ||
- nla_put_u64(skb, T_bits_oos, drbd_bm_total_weight(device)))
+ nla_put_u64_0pad(skb, T_bits_total, drbd_bm_bits(device)) ||
+ nla_put_u64_0pad(skb, T_bits_oos,
+ drbd_bm_total_weight(device)))
goto nla_put_failure;
if (C_SYNC_SOURCE <= device->state.conn &&
C_PAUSED_SYNC_T >= device->state.conn) {
- if (nla_put_u64(skb, T_bits_rs_total, device->rs_total) ||
- nla_put_u64(skb, T_bits_rs_failed, device->rs_failed))
+ if (nla_put_u64_0pad(skb, T_bits_rs_total,
+ device->rs_total) ||
+ nla_put_u64_0pad(skb, T_bits_rs_failed,
+ device->rs_failed))
goto nla_put_failure;
}
}
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 80cf8add46ff..1fa8cc235977 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -943,7 +943,7 @@ static int loop_set_fd(struct loop_device *lo, fmode_t mode,
mapping_set_gfp_mask(mapping, lo->old_gfp_mask & ~(__GFP_IO|__GFP_FS));
if (!(lo_flags & LO_FLAGS_READ_ONLY) && file->f_op->fsync)
- blk_queue_flush(lo->lo_queue, REQ_FLUSH);
+ blk_queue_write_cache(lo->lo_queue, true, false);
loop_update_dio(lo);
set_capacity(lo->lo_disk, size);
diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
index 25824c1697c5..6053e4659fa2 100644
--- a/drivers/block/mtip32xx/mtip32xx.c
+++ b/drivers/block/mtip32xx/mtip32xx.c
@@ -3000,14 +3000,14 @@ restart_eh:
"Completion workers still active!");
spin_lock(dd->queue->queue_lock);
- blk_mq_all_tag_busy_iter(*dd->tags.tags,
+ blk_mq_tagset_busy_iter(&dd->tags,
mtip_queue_cmd, dd);
spin_unlock(dd->queue->queue_lock);
set_bit(MTIP_PF_ISSUE_CMDS_BIT, &dd->port->flags);
if (mtip_device_reset(dd))
- blk_mq_all_tag_busy_iter(*dd->tags.tags,
+ blk_mq_tagset_busy_iter(&dd->tags,
mtip_abort_cmd, dd);
clear_bit(MTIP_PF_TO_ACTIVE_BIT, &dd->port->flags);
@@ -4023,12 +4023,6 @@ skip_create_disk:
blk_queue_io_min(dd->queue, 4096);
blk_queue_bounce_limit(dd->queue, dd->pdev->dma_mask);
- /*
- * write back cache is not supported in the device. FUA depends on
- * write back cache support, hence setting flush support to zero.
- */
- blk_queue_flush(dd->queue, 0);
-
/* Signal trim support */
if (dd->trim_supp == true) {
set_bit(QUEUE_FLAG_DISCARD, &dd->queue->queue_flags);
@@ -4174,7 +4168,7 @@ static int mtip_block_remove(struct driver_data *dd)
blk_mq_freeze_queue_start(dd->queue);
blk_mq_stop_hw_queues(dd->queue);
- blk_mq_all_tag_busy_iter(dd->tags.tags[0], mtip_no_dev_cleanup, dd);
+ blk_mq_tagset_busy_iter(&dd->tags, mtip_no_dev_cleanup, dd);
/*
* Delete our gendisk structure. This also removes the device
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 08afbc7a2bb8..31e73a7a40f2 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -693,9 +693,9 @@ static void nbd_parse_flags(struct nbd_device *nbd, struct block_device *bdev)
if (nbd->flags & NBD_FLAG_SEND_TRIM)
queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, nbd->disk->queue);
if (nbd->flags & NBD_FLAG_SEND_FLUSH)
- blk_queue_flush(nbd->disk->queue, REQ_FLUSH);
+ blk_queue_write_cache(nbd->disk->queue, true, false);
else
- blk_queue_flush(nbd->disk->queue, 0);
+ blk_queue_write_cache(nbd->disk->queue, false, false);
}
static int nbd_dev_dbg_init(struct nbd_device *nbd);
diff --git a/drivers/block/osdblk.c b/drivers/block/osdblk.c
index 1b709a4e3b5e..c2854a2bfdb0 100644
--- a/drivers/block/osdblk.c
+++ b/drivers/block/osdblk.c
@@ -437,7 +437,7 @@ static int osdblk_init_disk(struct osdblk_device *osdev)
blk_queue_stack_limits(q, osd_request_queue(osdev->osd));
blk_queue_prep_rq(q, blk_queue_start_tag);
- blk_queue_flush(q, REQ_FLUSH);
+ blk_queue_write_cache(q, true, false);
disk->queue = q;
diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c
index c120d70d3fb3..4b7e405830d7 100644
--- a/drivers/block/ps3disk.c
+++ b/drivers/block/ps3disk.c
@@ -468,7 +468,7 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
blk_queue_dma_alignment(queue, dev->blk_size-1);
blk_queue_logical_block_size(queue, dev->blk_size);
- blk_queue_flush(queue, REQ_FLUSH);
+ blk_queue_write_cache(queue, true, false);
blk_queue_max_segments(queue, -1);
blk_queue_max_segment_size(queue, dev->bounce_size);
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 0ede6d7e2568..81666a56415e 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -350,12 +350,12 @@ struct rbd_device {
struct rbd_spec *spec;
struct rbd_options *opts;
- char *header_name;
+ struct ceph_object_id header_oid;
+ struct ceph_object_locator header_oloc;
struct ceph_file_layout layout;
- struct ceph_osd_event *watch_event;
- struct rbd_obj_request *watch_request;
+ struct ceph_osd_linger_request *watch_handle;
struct rbd_spec *parent_spec;
u64 parent_overlap;
@@ -1596,12 +1596,6 @@ static int rbd_obj_request_wait(struct rbd_obj_request *obj_request)
return __rbd_obj_request_wait(obj_request, 0);
}
-static int rbd_obj_request_wait_timeout(struct rbd_obj_request *obj_request,
- unsigned long timeout)
-{
- return __rbd_obj_request_wait(obj_request, timeout);
-}
-
static void rbd_img_request_complete(struct rbd_img_request *img_request)
{
@@ -1751,12 +1745,6 @@ static void rbd_obj_request_complete(struct rbd_obj_request *obj_request)
complete_all(&obj_request->completion);
}
-static void rbd_osd_trivial_callback(struct rbd_obj_request *obj_request)
-{
- dout("%s: obj %p\n", __func__, obj_request);
- obj_request_done_set(obj_request);
-}
-
static void rbd_osd_read_callback(struct rbd_obj_request *obj_request)
{
struct rbd_img_request *img_request = NULL;
@@ -1828,13 +1816,12 @@ static void rbd_osd_call_callback(struct rbd_obj_request *obj_request)
obj_request_done_set(obj_request);
}
-static void rbd_osd_req_callback(struct ceph_osd_request *osd_req,
- struct ceph_msg *msg)
+static void rbd_osd_req_callback(struct ceph_osd_request *osd_req)
{
struct rbd_obj_request *obj_request = osd_req->r_priv;
u16 opcode;
- dout("%s: osd_req %p msg %p\n", __func__, osd_req, msg);
+ dout("%s: osd_req %p\n", __func__, osd_req);
rbd_assert(osd_req == obj_request->osd_req);
if (obj_request_img_data_test(obj_request)) {
rbd_assert(obj_request->img_request);
@@ -1878,10 +1865,6 @@ static void rbd_osd_req_callback(struct ceph_osd_request *osd_req,
case CEPH_OSD_OP_CALL:
rbd_osd_call_callback(obj_request);
break;
- case CEPH_OSD_OP_NOTIFY_ACK:
- case CEPH_OSD_OP_WATCH:
- rbd_osd_trivial_callback(obj_request);
- break;
default:
rbd_warn(NULL, "%s: unsupported op %hu",
obj_request->object_name, (unsigned short) opcode);
@@ -1896,27 +1879,17 @@ static void rbd_osd_req_format_read(struct rbd_obj_request *obj_request)
{
struct rbd_img_request *img_request = obj_request->img_request;
struct ceph_osd_request *osd_req = obj_request->osd_req;
- u64 snap_id;
- rbd_assert(osd_req != NULL);
-
- snap_id = img_request ? img_request->snap_id : CEPH_NOSNAP;
- ceph_osdc_build_request(osd_req, obj_request->offset,
- NULL, snap_id, NULL);
+ if (img_request)
+ osd_req->r_snapid = img_request->snap_id;
}
static void rbd_osd_req_format_write(struct rbd_obj_request *obj_request)
{
- struct rbd_img_request *img_request = obj_request->img_request;
struct ceph_osd_request *osd_req = obj_request->osd_req;
- struct ceph_snap_context *snapc;
- struct timespec mtime = CURRENT_TIME;
- rbd_assert(osd_req != NULL);
-
- snapc = img_request ? img_request->snapc : NULL;
- ceph_osdc_build_request(osd_req, obj_request->offset,
- snapc, CEPH_NOSNAP, &mtime);
+ osd_req->r_mtime = CURRENT_TIME;
+ osd_req->r_data_offset = obj_request->offset;
}
/*
@@ -1954,7 +1927,7 @@ static struct ceph_osd_request *rbd_osd_req_create(
osd_req = ceph_osdc_alloc_request(osdc, snapc, num_ops, false,
GFP_NOIO);
if (!osd_req)
- return NULL; /* ENOMEM */
+ goto fail;
if (op_type == OBJ_OP_WRITE || op_type == OBJ_OP_DISCARD)
osd_req->r_flags = CEPH_OSD_FLAG_WRITE | CEPH_OSD_FLAG_ONDISK;
@@ -1965,9 +1938,18 @@ static struct ceph_osd_request *rbd_osd_req_create(
osd_req->r_priv = obj_request;
osd_req->r_base_oloc.pool = ceph_file_layout_pg_pool(rbd_dev->layout);
- ceph_oid_set_name(&osd_req->r_base_oid, obj_request->object_name);
+ if (ceph_oid_aprintf(&osd_req->r_base_oid, GFP_NOIO, "%s",
+ obj_request->object_name))
+ goto fail;
+
+ if (ceph_osdc_alloc_messages(osd_req, GFP_NOIO))
+ goto fail;
return osd_req;
+
+fail:
+ ceph_osdc_put_request(osd_req);
+ return NULL;
}
/*
@@ -2003,16 +1985,25 @@ rbd_osd_req_create_copyup(struct rbd_obj_request *obj_request)
osd_req = ceph_osdc_alloc_request(osdc, snapc, num_osd_ops,
false, GFP_NOIO);
if (!osd_req)
- return NULL; /* ENOMEM */
+ goto fail;
osd_req->r_flags = CEPH_OSD_FLAG_WRITE | CEPH_OSD_FLAG_ONDISK;
osd_req->r_callback = rbd_osd_req_callback;
osd_req->r_priv = obj_request;
osd_req->r_base_oloc.pool = ceph_file_layout_pg_pool(rbd_dev->layout);
- ceph_oid_set_name(&osd_req->r_base_oid, obj_request->object_name);
+ if (ceph_oid_aprintf(&osd_req->r_base_oid, GFP_NOIO, "%s",
+ obj_request->object_name))
+ goto fail;
+
+ if (ceph_osdc_alloc_messages(osd_req, GFP_NOIO))
+ goto fail;
return osd_req;
+
+fail:
+ ceph_osdc_put_request(osd_req);
+ return NULL;
}
@@ -2973,17 +2964,20 @@ static int rbd_img_request_submit(struct rbd_img_request *img_request)
{
struct rbd_obj_request *obj_request;
struct rbd_obj_request *next_obj_request;
+ int ret = 0;
dout("%s: img %p\n", __func__, img_request);
- for_each_obj_request_safe(img_request, obj_request, next_obj_request) {
- int ret;
+ rbd_img_request_get(img_request);
+ for_each_obj_request_safe(img_request, obj_request, next_obj_request) {
ret = rbd_img_obj_request_submit(obj_request);
if (ret)
- return ret;
+ goto out_put_ireq;
}
- return 0;
+out_put_ireq:
+ rbd_img_request_put(img_request);
+ return ret;
}
static void rbd_img_parent_read_callback(struct rbd_img_request *img_request)
@@ -3090,45 +3084,18 @@ out_err:
obj_request_done_set(obj_request);
}
-static int rbd_obj_notify_ack_sync(struct rbd_device *rbd_dev, u64 notify_id)
-{
- struct rbd_obj_request *obj_request;
- struct ceph_osd_client *osdc = &rbd_dev->rbd_client->client->osdc;
- int ret;
-
- obj_request = rbd_obj_request_create(rbd_dev->header_name, 0, 0,
- OBJ_REQUEST_NODATA);
- if (!obj_request)
- return -ENOMEM;
-
- ret = -ENOMEM;
- obj_request->osd_req = rbd_osd_req_create(rbd_dev, OBJ_OP_READ, 1,
- obj_request);
- if (!obj_request->osd_req)
- goto out;
-
- osd_req_op_watch_init(obj_request->osd_req, 0, CEPH_OSD_OP_NOTIFY_ACK,
- notify_id, 0, 0);
- rbd_osd_req_format_read(obj_request);
+static int rbd_dev_header_watch_sync(struct rbd_device *rbd_dev);
+static void __rbd_dev_header_unwatch_sync(struct rbd_device *rbd_dev);
- ret = rbd_obj_request_submit(osdc, obj_request);
- if (ret)
- goto out;
- ret = rbd_obj_request_wait(obj_request);
-out:
- rbd_obj_request_put(obj_request);
-
- return ret;
-}
-
-static void rbd_watch_cb(u64 ver, u64 notify_id, u8 opcode, void *data)
+static void rbd_watch_cb(void *arg, u64 notify_id, u64 cookie,
+ u64 notifier_id, void *data, size_t data_len)
{
- struct rbd_device *rbd_dev = (struct rbd_device *)data;
+ struct rbd_device *rbd_dev = arg;
+ struct ceph_osd_client *osdc = &rbd_dev->rbd_client->client->osdc;
int ret;
- dout("%s: \"%s\" notify_id %llu opcode %u\n", __func__,
- rbd_dev->header_name, (unsigned long long)notify_id,
- (unsigned int)opcode);
+ dout("%s rbd_dev %p cookie %llu notify_id %llu\n", __func__, rbd_dev,
+ cookie, notify_id);
/*
* Until adequate refresh error handling is in place, there is
@@ -3140,63 +3107,31 @@ static void rbd_watch_cb(u64 ver, u64 notify_id, u8 opcode, void *data)
if (ret)
rbd_warn(rbd_dev, "refresh failed: %d", ret);
- ret = rbd_obj_notify_ack_sync(rbd_dev, notify_id);
+ ret = ceph_osdc_notify_ack(osdc, &rbd_dev->header_oid,
+ &rbd_dev->header_oloc, notify_id, cookie,
+ NULL, 0);
if (ret)
rbd_warn(rbd_dev, "notify_ack ret %d", ret);
}
-/*
- * Send a (un)watch request and wait for the ack. Return a request
- * with a ref held on success or error.
- */
-static struct rbd_obj_request *rbd_obj_watch_request_helper(
- struct rbd_device *rbd_dev,
- bool watch)
+static void rbd_watch_errcb(void *arg, u64 cookie, int err)
{
- struct ceph_osd_client *osdc = &rbd_dev->rbd_client->client->osdc;
- struct ceph_options *opts = osdc->client->options;
- struct rbd_obj_request *obj_request;
+ struct rbd_device *rbd_dev = arg;
int ret;
- obj_request = rbd_obj_request_create(rbd_dev->header_name, 0, 0,
- OBJ_REQUEST_NODATA);
- if (!obj_request)
- return ERR_PTR(-ENOMEM);
-
- obj_request->osd_req = rbd_osd_req_create(rbd_dev, OBJ_OP_WRITE, 1,
- obj_request);
- if (!obj_request->osd_req) {
- ret = -ENOMEM;
- goto out;
- }
-
- osd_req_op_watch_init(obj_request->osd_req, 0, CEPH_OSD_OP_WATCH,
- rbd_dev->watch_event->cookie, 0, watch);
- rbd_osd_req_format_write(obj_request);
+ rbd_warn(rbd_dev, "encountered watch error: %d", err);
- if (watch)
- ceph_osdc_set_request_linger(osdc, obj_request->osd_req);
-
- ret = rbd_obj_request_submit(osdc, obj_request);
- if (ret)
- goto out;
+ __rbd_dev_header_unwatch_sync(rbd_dev);
- ret = rbd_obj_request_wait_timeout(obj_request, opts->mount_timeout);
- if (ret)
- goto out;
-
- ret = obj_request->result;
+ ret = rbd_dev_header_watch_sync(rbd_dev);
if (ret) {
- if (watch)
- rbd_obj_request_end(obj_request);
- goto out;
+ rbd_warn(rbd_dev, "failed to reregister watch: %d", ret);
+ return;
}
- return obj_request;
-
-out:
- rbd_obj_request_put(obj_request);
- return ERR_PTR(ret);
+ ret = rbd_dev_refresh(rbd_dev);
+ if (ret)
+ rbd_warn(rbd_dev, "reregistration refresh failed: %d", ret);
}
/*
@@ -3205,35 +3140,33 @@ out:
static int rbd_dev_header_watch_sync(struct rbd_device *rbd_dev)
{
struct ceph_osd_client *osdc = &rbd_dev->rbd_client->client->osdc;
- struct rbd_obj_request *obj_request;
- int ret;
+ struct ceph_osd_linger_request *handle;
- rbd_assert(!rbd_dev->watch_event);
- rbd_assert(!rbd_dev->watch_request);
+ rbd_assert(!rbd_dev->watch_handle);
- ret = ceph_osdc_create_event(osdc, rbd_watch_cb, rbd_dev,
- &rbd_dev->watch_event);
- if (ret < 0)
- return ret;
+ handle = ceph_osdc_watch(osdc, &rbd_dev->header_oid,
+ &rbd_dev->header_oloc, rbd_watch_cb,
+ rbd_watch_errcb, rbd_dev);
+ if (IS_ERR(handle))
+ return PTR_ERR(handle);
- obj_request = rbd_obj_watch_request_helper(rbd_dev, true);
- if (IS_ERR(obj_request)) {
- ceph_osdc_cancel_event(rbd_dev->watch_event);
- rbd_dev->watch_event = NULL;
- return PTR_ERR(obj_request);
- }
+ rbd_dev->watch_handle = handle;
+ return 0;
+}
- /*
- * A watch request is set to linger, so the underlying osd
- * request won't go away until we unregister it. We retain
- * a pointer to the object request during that time (in
- * rbd_dev->watch_request), so we'll keep a reference to it.
- * We'll drop that reference after we've unregistered it in
- * rbd_dev_header_unwatch_sync().
- */
- rbd_dev->watch_request = obj_request;
+static void __rbd_dev_header_unwatch_sync(struct rbd_device *rbd_dev)
+{
+ struct ceph_osd_client *osdc = &rbd_dev->rbd_client->client->osdc;
+ int ret;
- return 0;
+ if (!rbd_dev->watch_handle)
+ return;
+
+ ret = ceph_osdc_unwatch(osdc, rbd_dev->watch_handle);
+ if (ret)
+ rbd_warn(rbd_dev, "failed to unwatch: %d", ret);
+
+ rbd_dev->watch_handle = NULL;
}
/*
@@ -3241,24 +3174,7 @@ static int rbd_dev_header_watch_sync(struct rbd_device *rbd_dev)
*/
static void rbd_dev_header_unwatch_sync(struct rbd_device *rbd_dev)
{
- struct rbd_obj_request *obj_request;
-
- rbd_assert(rbd_dev->watch_event);
- rbd_assert(rbd_dev->watch_request);
-
- rbd_obj_request_end(rbd_dev->watch_request);
- rbd_obj_request_put(rbd_dev->watch_request);
- rbd_dev->watch_request = NULL;
-
- obj_request = rbd_obj_watch_request_helper(rbd_dev, false);
- if (!IS_ERR(obj_request))
- rbd_obj_request_put(obj_request);
- else
- rbd_warn(rbd_dev, "unable to tear down watch request (%ld)",
- PTR_ERR(obj_request));
-
- ceph_osdc_cancel_event(rbd_dev->watch_event);
- rbd_dev->watch_event = NULL;
+ __rbd_dev_header_unwatch_sync(rbd_dev);
dout("%s flushing notifies\n", __func__);
ceph_osdc_flush_notifies(&rbd_dev->rbd_client->client->osdc);
@@ -3591,7 +3507,7 @@ static int rbd_dev_v1_header_info(struct rbd_device *rbd_dev)
if (!ondisk)
return -ENOMEM;
- ret = rbd_obj_read_sync(rbd_dev, rbd_dev->header_name,
+ ret = rbd_obj_read_sync(rbd_dev, rbd_dev->header_oid.name,
0, size, ondisk);
if (ret < 0)
goto out;
@@ -4033,6 +3949,8 @@ static void rbd_dev_release(struct device *dev)
struct rbd_device *rbd_dev = dev_to_rbd_dev(dev);
bool need_put = !!rbd_dev->opts;
+ ceph_oid_destroy(&rbd_dev->header_oid);
+
rbd_put_client(rbd_dev->rbd_client);
rbd_spec_put(rbd_dev->spec);
kfree(rbd_dev->opts);
@@ -4063,6 +3981,9 @@ static struct rbd_device *rbd_dev_create(struct rbd_client *rbdc,
INIT_LIST_HEAD(&rbd_dev->node);
init_rwsem(&rbd_dev->header_rwsem);
+ ceph_oid_init(&rbd_dev->header_oid);
+ ceph_oloc_init(&rbd_dev->header_oloc);
+
rbd_dev->dev.bus = &rbd_bus_type;
rbd_dev->dev.type = &rbd_device_type;
rbd_dev->dev.parent = &rbd_root_dev;
@@ -4111,7 +4032,7 @@ static int _rbd_dev_v2_snap_size(struct rbd_device *rbd_dev, u64 snap_id,
__le64 size;
} __attribute__ ((packed)) size_buf = { 0 };
- ret = rbd_obj_method_sync(rbd_dev, rbd_dev->header_name,
+ ret = rbd_obj_method_sync(rbd_dev, rbd_dev->header_oid.name,
"rbd", "get_size",
&snapid, sizeof (snapid),
&size_buf, sizeof (size_buf));
@@ -4151,7 +4072,7 @@ static int rbd_dev_v2_object_prefix(struct rbd_device *rbd_dev)
if (!reply_buf)
return -ENOMEM;
- ret = rbd_obj_method_sync(rbd_dev, rbd_dev->header_name,
+ ret = rbd_obj_method_sync(rbd_dev, rbd_dev->header_oid.name,
"rbd", "get_object_prefix", NULL, 0,
reply_buf, RBD_OBJ_PREFIX_LEN_MAX);
dout("%s: rbd_obj_method_sync returned %d\n", __func__, ret);
@@ -4186,7 +4107,7 @@ static int _rbd_dev_v2_snap_features(struct rbd_device *rbd_dev, u64 snap_id,
u64 unsup;
int ret;
- ret = rbd_obj_method_sync(rbd_dev, rbd_dev->header_name,
+ ret = rbd_obj_method_sync(rbd_dev, rbd_dev->header_oid.name,
"rbd", "get_features",
&snapid, sizeof (snapid),
&features_buf, sizeof (features_buf));
@@ -4248,7 +4169,7 @@ static int rbd_dev_v2_parent_info(struct rbd_device *rbd_dev)
}
snapid = cpu_to_le64(rbd_dev->spec->snap_id);
- ret = rbd_obj_method_sync(rbd_dev, rbd_dev->header_name,
+ ret = rbd_obj_method_sync(rbd_dev, rbd_dev->header_oid.name,
"rbd", "get_parent",
&snapid, sizeof (snapid),
reply_buf, size);
@@ -4351,7 +4272,7 @@ static int rbd_dev_v2_striping_info(struct rbd_device *rbd_dev)
u64 stripe_count;
int ret;
- ret = rbd_obj_method_sync(rbd_dev, rbd_dev->header_name,
+ ret = rbd_obj_method_sync(rbd_dev, rbd_dev->header_oid.name,
"rbd", "get_stripe_unit_count", NULL, 0,
(char *)&striping_info_buf, size);
dout("%s: rbd_obj_method_sync returned %d\n", __func__, ret);
@@ -4599,7 +4520,7 @@ static int rbd_dev_v2_snap_context(struct rbd_device *rbd_dev)
if (!reply_buf)
return -ENOMEM;
- ret = rbd_obj_method_sync(rbd_dev, rbd_dev->header_name,
+ ret = rbd_obj_method_sync(rbd_dev, rbd_dev->header_oid.name,
"rbd", "get_snapcontext", NULL, 0,
reply_buf, size);
dout("%s: rbd_obj_method_sync returned %d\n", __func__, ret);
@@ -4664,7 +4585,7 @@ static const char *rbd_dev_v2_snap_name(struct rbd_device *rbd_dev,
return ERR_PTR(-ENOMEM);
snapid = cpu_to_le64(snap_id);
- ret = rbd_obj_method_sync(rbd_dev, rbd_dev->header_name,
+ ret = rbd_obj_method_sync(rbd_dev, rbd_dev->header_oid.name,
"rbd", "get_snapshot_name",
&snapid, sizeof (snapid),
reply_buf, size);
@@ -4975,13 +4896,13 @@ static int rbd_add_get_pool_id(struct rbd_client *rbdc, const char *pool_name)
again:
ret = ceph_pg_poolid_by_name(rbdc->client->osdc.osdmap, pool_name);
if (ret == -ENOENT && tries++ < 1) {
- ret = ceph_monc_do_get_version(&rbdc->client->monc, "osdmap",
- &newest_epoch);
+ ret = ceph_monc_get_version(&rbdc->client->monc, "osdmap",
+ &newest_epoch);
if (ret < 0)
return ret;
if (rbdc->client->osdc.osdmap->epoch < newest_epoch) {
- ceph_monc_request_next_osdmap(&rbdc->client->monc);
+ ceph_osdc_maybe_request_map(&rbdc->client->osdc);
(void) ceph_monc_wait_osdmap(&rbdc->client->monc,
newest_epoch,
opts->mount_timeout);
@@ -5260,35 +5181,26 @@ err_out_unlock:
static int rbd_dev_header_name(struct rbd_device *rbd_dev)
{
struct rbd_spec *spec = rbd_dev->spec;
- size_t size;
+ int ret;
/* Record the header object name for this rbd image. */
rbd_assert(rbd_image_format_valid(rbd_dev->image_format));
+ rbd_dev->header_oloc.pool = ceph_file_layout_pg_pool(rbd_dev->layout);
if (rbd_dev->image_format == 1)
- size = strlen(spec->image_name) + sizeof (RBD_SUFFIX);
+ ret = ceph_oid_aprintf(&rbd_dev->header_oid, GFP_KERNEL, "%s%s",
+ spec->image_name, RBD_SUFFIX);
else
- size = sizeof (RBD_HEADER_PREFIX) + strlen(spec->image_id);
-
- rbd_dev->header_name = kmalloc(size, GFP_KERNEL);
- if (!rbd_dev->header_name)
- return -ENOMEM;
+ ret = ceph_oid_aprintf(&rbd_dev->header_oid, GFP_KERNEL, "%s%s",
+ RBD_HEADER_PREFIX, spec->image_id);
- if (rbd_dev->image_format == 1)
- sprintf(rbd_dev->header_name, "%s%s",
- spec->image_name, RBD_SUFFIX);
- else
- sprintf(rbd_dev->header_name, "%s%s",
- RBD_HEADER_PREFIX, spec->image_id);
- return 0;
+ return ret;
}
static void rbd_dev_image_release(struct rbd_device *rbd_dev)
{
rbd_dev_unprobe(rbd_dev);
- kfree(rbd_dev->header_name);
- rbd_dev->header_name = NULL;
rbd_dev->image_format = 0;
kfree(rbd_dev->spec->image_id);
rbd_dev->spec->image_id = NULL;
@@ -5327,7 +5239,7 @@ static int rbd_dev_image_probe(struct rbd_device *rbd_dev, int depth)
pr_info("image %s/%s does not exist\n",
rbd_dev->spec->pool_name,
rbd_dev->spec->image_name);
- goto out_header_name;
+ goto err_out_format;
}
}
@@ -5373,7 +5285,7 @@ static int rbd_dev_image_probe(struct rbd_device *rbd_dev, int depth)
goto err_out_probe;
dout("discovered format %u image, header name is %s\n",
- rbd_dev->image_format, rbd_dev->header_name);
+ rbd_dev->image_format, rbd_dev->header_oid.name);
return 0;
err_out_probe:
@@ -5381,9 +5293,6 @@ err_out_probe:
err_out_watch:
if (!depth)
rbd_dev_header_unwatch_sync(rbd_dev);
-out_header_name:
- kfree(rbd_dev->header_name);
- rbd_dev->header_name = NULL;
err_out_format:
rbd_dev->image_format = 0;
kfree(rbd_dev->spec->image_id);
diff --git a/drivers/block/skd_main.c b/drivers/block/skd_main.c
index 586f9168ffa4..910e065918af 100644
--- a/drivers/block/skd_main.c
+++ b/drivers/block/skd_main.c
@@ -133,7 +133,6 @@ MODULE_VERSION(DRV_VERSION "-" DRV_BUILD_ID);
#define SKD_TIMER_MINUTES(minutes) ((minutes) * (60))
#define INQ_STD_NBYTES 36
-#define SKD_DISCARD_CDB_LENGTH 24
enum skd_drvr_state {
SKD_DRVR_STATE_LOAD,
@@ -212,7 +211,6 @@ struct skd_request_context {
struct request *req;
u8 flush_cmd;
- u8 discard_page;
u32 timeout_stamp;
u8 sg_data_dir;
@@ -230,7 +228,6 @@ struct skd_request_context {
};
#define SKD_DATA_DIR_HOST_TO_CARD 1
#define SKD_DATA_DIR_CARD_TO_HOST 2
-#define SKD_DATA_DIR_NONE 3 /* especially for DISCARD requests. */
struct skd_special_context {
struct skd_request_context req;
@@ -540,31 +537,6 @@ skd_prep_zerosize_flush_cdb(struct skd_scsi_request *scsi_req,
scsi_req->cdb[9] = 0;
}
-static void
-skd_prep_discard_cdb(struct skd_scsi_request *scsi_req,
- struct skd_request_context *skreq,
- struct page *page,
- u32 lba, u32 count)
-{
- char *buf;
- unsigned long len;
- struct request *req;
-
- buf = page_address(page);
- len = SKD_DISCARD_CDB_LENGTH;
-
- scsi_req->cdb[0] = UNMAP;
- scsi_req->cdb[8] = len;
-
- put_unaligned_be16(6 + 16, &buf[0]);
- put_unaligned_be16(16, &buf[2]);
- put_unaligned_be64(lba, &buf[8]);
- put_unaligned_be32(count, &buf[16]);
-
- req = skreq->req;
- blk_add_request_payload(req, page, len);
-}
-
static void skd_request_fn_not_online(struct request_queue *q);
static void skd_request_fn(struct request_queue *q)
@@ -575,7 +547,6 @@ static void skd_request_fn(struct request_queue *q)
struct skd_request_context *skreq;
struct request *req = NULL;
struct skd_scsi_request *scsi_req;
- struct page *page;
unsigned long io_flags;
int error;
u32 lba;
@@ -669,7 +640,6 @@ static void skd_request_fn(struct request_queue *q)
skreq->flush_cmd = 0;
skreq->n_sg = 0;
skreq->sg_byte_count = 0;
- skreq->discard_page = 0;
/*
* OK to now dequeue request from q.
@@ -735,18 +705,7 @@ static void skd_request_fn(struct request_queue *q)
else
skreq->sg_data_dir = SKD_DATA_DIR_HOST_TO_CARD;
- if (io_flags & REQ_DISCARD) {
- page = alloc_page(GFP_ATOMIC | __GFP_ZERO);
- if (!page) {
- pr_err("request_fn:Page allocation failed.\n");
- skd_end_request(skdev, skreq, -ENOMEM);
- break;
- }
- skreq->discard_page = 1;
- req->completion_data = page;
- skd_prep_discard_cdb(scsi_req, skreq, page, lba, count);
-
- } else if (flush == SKD_FLUSH_ZERO_SIZE_FIRST) {
+ if (flush == SKD_FLUSH_ZERO_SIZE_FIRST) {
skd_prep_zerosize_flush_cdb(scsi_req, skreq);
SKD_ASSERT(skreq->flush_cmd == 1);
@@ -851,16 +810,6 @@ skip_sg:
static void skd_end_request(struct skd_device *skdev,
struct skd_request_context *skreq, int error)
{
- struct request *req = skreq->req;
- unsigned int io_flags = req->cmd_flags;
-
- if ((io_flags & REQ_DISCARD) &&
- (skreq->discard_page == 1)) {
- pr_debug("%s:%s:%d, free the page!",
- skdev->name, __func__, __LINE__);
- __free_page(req->completion_data);
- }
-
if (unlikely(error)) {
struct request *req = skreq->req;
char *cmd = (rq_data_dir(req) == READ) ? "read" : "write";
@@ -4412,19 +4361,13 @@ static int skd_cons_disk(struct skd_device *skdev)
disk->queue = q;
q->queuedata = skdev;
- blk_queue_flush(q, REQ_FLUSH | REQ_FUA);
+ blk_queue_write_cache(q, true, true);
blk_queue_max_segments(q, skdev->sgs_per_request);
blk_queue_max_hw_sectors(q, SKD_N_MAX_SECTORS);
/* set sysfs optimal_io_size to 8K */
blk_queue_io_opt(q, 8192);
- /* DISCARD Flag initialization. */
- q->limits.discard_granularity = 8192;
- q->limits.discard_alignment = 0;
- blk_queue_max_discard_sectors(q, UINT_MAX >> 9);
- q->limits.discard_zeroes_data = 1;
- queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, q);
queue_flag_set_unlocked(QUEUE_FLAG_NONROT, q);
queue_flag_clear_unlocked(QUEUE_FLAG_ADD_RANDOM, q);
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 28cff0d23d82..42758b52768c 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -493,11 +493,7 @@ static void virtblk_update_cache_mode(struct virtio_device *vdev)
u8 writeback = virtblk_get_cache_mode(vdev);
struct virtio_blk *vblk = vdev->priv;
- if (writeback)
- blk_queue_flush(vblk->disk->queue, REQ_FLUSH);
- else
- blk_queue_flush(vblk->disk->queue, 0);
-
+ blk_queue_write_cache(vblk->disk->queue, writeback, false);
revalidate_disk(vblk->disk);
}
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index 26aa080e243c..3355f1cdd4e5 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -477,7 +477,7 @@ static int xen_vbd_create(struct xen_blkif *blkif, blkif_vdev_t handle,
vbd->type |= VDISK_REMOVABLE;
q = bdev_get_queue(bdev);
- if (q && q->flush_flags)
+ if (q && test_bit(QUEUE_FLAG_WC, &q->queue_flags))
vbd->flush_support = true;
if (q && blk_queue_secdiscard(q))
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 6405b6557792..ca13df854639 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -998,7 +998,8 @@ static const char *flush_info(unsigned int feature_flush)
static void xlvbd_flush(struct blkfront_info *info)
{
- blk_queue_flush(info->rq, info->feature_flush);
+ blk_queue_write_cache(info->rq, info->feature_flush & REQ_FLUSH,
+ info->feature_flush & REQ_FUA);
pr_info("blkfront: %s: %s %s %s %s %s\n",
info->gd->disk_name, flush_info(info->feature_flush),
"persistent grants:", info->feature_persistent ?
diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
index 3ef42e563bb5..b51a816d766b 100644
--- a/drivers/block/zram/zcomp.c
+++ b/drivers/block/zram/zcomp.c
@@ -13,6 +13,7 @@
#include <linux/slab.h>
#include <linux/wait.h>
#include <linux/sched.h>
+#include <linux/cpu.h>
#include "zcomp.h"
#include "zcomp_lzo.h"
@@ -20,29 +21,6 @@
#include "zcomp_lz4.h"
#endif
-/*
- * single zcomp_strm backend
- */
-struct zcomp_strm_single {
- struct mutex strm_lock;
- struct zcomp_strm *zstrm;
-};
-
-/*
- * multi zcomp_strm backend
- */
-struct zcomp_strm_multi {
- /* protect strm list */
- spinlock_t strm_lock;
- /* max possible number of zstrm streams */
- int max_strm;
- /* number of available zstrm streams */
- int avail_strm;
- /* list of available strms */
- struct list_head idle_strm;
- wait_queue_head_t strm_wait;
-};
-
static struct zcomp_backend *backends[] = {
&zcomp_lzo,
#ifdef CONFIG_ZRAM_LZ4_COMPRESS
@@ -93,188 +71,6 @@ static struct zcomp_strm *zcomp_strm_alloc(struct zcomp *comp, gfp_t flags)
return zstrm;
}
-/*
- * get idle zcomp_strm or wait until other process release
- * (zcomp_strm_release()) one for us
- */
-static struct zcomp_strm *zcomp_strm_multi_find(struct zcomp *comp)
-{
- struct zcomp_strm_multi *zs = comp->stream;
- struct zcomp_strm *zstrm;
-
- while (1) {
- spin_lock(&zs->strm_lock);
- if (!list_empty(&zs->idle_strm)) {
- zstrm = list_entry(zs->idle_strm.next,
- struct zcomp_strm, list);
- list_del(&zstrm->list);
- spin_unlock(&zs->strm_lock);
- return zstrm;
- }
- /* zstrm streams limit reached, wait for idle stream */
- if (zs->avail_strm >= zs->max_strm) {
- spin_unlock(&zs->strm_lock);
- wait_event(zs->strm_wait, !list_empty(&zs->idle_strm));
- continue;
- }
- /* allocate new zstrm stream */
- zs->avail_strm++;
- spin_unlock(&zs->strm_lock);
- /*
- * This function can be called in swapout/fs write path
- * so we can't use GFP_FS|IO. And it assumes we already
- * have at least one stream in zram initialization so we
- * don't do best effort to allocate more stream in here.
- * A default stream will work well without further multiple
- * streams. That's why we use NORETRY | NOWARN.
- */
- zstrm = zcomp_strm_alloc(comp, GFP_NOIO | __GFP_NORETRY |
- __GFP_NOWARN);
- if (!zstrm) {
- spin_lock(&zs->strm_lock);
- zs->avail_strm--;
- spin_unlock(&zs->strm_lock);
- wait_event(zs->strm_wait, !list_empty(&zs->idle_strm));
- continue;
- }
- break;
- }
- return zstrm;
-}
-
-/* add stream back to idle list and wake up waiter or free the stream */
-static void zcomp_strm_multi_release(struct zcomp *comp, struct zcomp_strm *zstrm)
-{
- struct zcomp_strm_multi *zs = comp->stream;
-
- spin_lock(&zs->strm_lock);
- if (zs->avail_strm <= zs->max_strm) {
- list_add(&zstrm->list, &zs->idle_strm);
- spin_unlock(&zs->strm_lock);
- wake_up(&zs->strm_wait);
- return;
- }
-
- zs->avail_strm--;
- spin_unlock(&zs->strm_lock);
- zcomp_strm_free(comp, zstrm);
-}
-
-/* change max_strm limit */
-static bool zcomp_strm_multi_set_max_streams(struct zcomp *comp, int num_strm)
-{
- struct zcomp_strm_multi *zs = comp->stream;
- struct zcomp_strm *zstrm;
-
- spin_lock(&zs->strm_lock);
- zs->max_strm = num_strm;
- /*
- * if user has lowered the limit and there are idle streams,
- * immediately free as much streams (and memory) as we can.
- */
- while (zs->avail_strm > num_strm && !list_empty(&zs->idle_strm)) {
- zstrm = list_entry(zs->idle_strm.next,
- struct zcomp_strm, list);
- list_del(&zstrm->list);
- zcomp_strm_free(comp, zstrm);
- zs->avail_strm--;
- }
- spin_unlock(&zs->strm_lock);
- return true;
-}
-
-static void zcomp_strm_multi_destroy(struct zcomp *comp)
-{
- struct zcomp_strm_multi *zs = comp->stream;
- struct zcomp_strm *zstrm;
-
- while (!list_empty(&zs->idle_strm)) {
- zstrm = list_entry(zs->idle_strm.next,
- struct zcomp_strm, list);
- list_del(&zstrm->list);
- zcomp_strm_free(comp, zstrm);
- }
- kfree(zs);
-}
-
-static int zcomp_strm_multi_create(struct zcomp *comp, int max_strm)
-{
- struct zcomp_strm *zstrm;
- struct zcomp_strm_multi *zs;
-
- comp->destroy = zcomp_strm_multi_destroy;
- comp->strm_find = zcomp_strm_multi_find;
- comp->strm_release = zcomp_strm_multi_release;
- comp->set_max_streams = zcomp_strm_multi_set_max_streams;
- zs = kmalloc(sizeof(struct zcomp_strm_multi), GFP_KERNEL);
- if (!zs)
- return -ENOMEM;
-
- comp->stream = zs;
- spin_lock_init(&zs->strm_lock);
- INIT_LIST_HEAD(&zs->idle_strm);
- init_waitqueue_head(&zs->strm_wait);
- zs->max_strm = max_strm;
- zs->avail_strm = 1;
-
- zstrm = zcomp_strm_alloc(comp, GFP_KERNEL);
- if (!zstrm) {
- kfree(zs);
- return -ENOMEM;
- }
- list_add(&zstrm->list, &zs->idle_strm);
- return 0;
-}
-
-static struct zcomp_strm *zcomp_strm_single_find(struct zcomp *comp)
-{
- struct zcomp_strm_single *zs = comp->stream;
- mutex_lock(&zs->strm_lock);
- return zs->zstrm;
-}
-
-static void zcomp_strm_single_release(struct zcomp *comp,
- struct zcomp_strm *zstrm)
-{
- struct zcomp_strm_single *zs = comp->stream;
- mutex_unlock(&zs->strm_lock);
-}
-
-static bool zcomp_strm_single_set_max_streams(struct zcomp *comp, int num_strm)
-{
- /* zcomp_strm_single support only max_comp_streams == 1 */
- return false;
-}
-
-static void zcomp_strm_single_destroy(struct zcomp *comp)
-{
- struct zcomp_strm_single *zs = comp->stream;
- zcomp_strm_free(comp, zs->zstrm);
- kfree(zs);
-}
-
-static int zcomp_strm_single_create(struct zcomp *comp)
-{
- struct zcomp_strm_single *zs;
-
- comp->destroy = zcomp_strm_single_destroy;
- comp->strm_find = zcomp_strm_single_find;
- comp->strm_release = zcomp_strm_single_release;
- comp->set_max_streams = zcomp_strm_single_set_max_streams;
- zs = kmalloc(sizeof(struct zcomp_strm_single), GFP_KERNEL);
- if (!zs)
- return -ENOMEM;
-
- comp->stream = zs;
- mutex_init(&zs->strm_lock);
- zs->zstrm = zcomp_strm_alloc(comp, GFP_KERNEL);
- if (!zs->zstrm) {
- kfree(zs);
- return -ENOMEM;
- }
- return 0;
-}
-
/* show available compressors */
ssize_t zcomp_available_show(const char *comp, char *buf)
{
@@ -299,19 +95,14 @@ bool zcomp_available_algorithm(const char *comp)
return find_backend(comp) != NULL;
}
-bool zcomp_set_max_streams(struct zcomp *comp, int num_strm)
-{
- return comp->set_max_streams(comp, num_strm);
-}
-
struct zcomp_strm *zcomp_strm_find(struct zcomp *comp)
{
- return comp->strm_find(comp);
+ return *get_cpu_ptr(comp->stream);
}
void zcomp_strm_release(struct zcomp *comp, struct zcomp_strm *zstrm)
{
- comp->strm_release(comp, zstrm);
+ put_cpu_ptr(comp->stream);
}
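
The pattern now used by zcomp_strm_find()/zcomp_strm_release() is one stream pointer per CPU, pinned with get_cpu_ptr()/put_cpu_ptr() so the caller cannot migrate while it holds the stream. A sketch of the same pattern with a hypothetical buffer type; the per-CPU array would be allocated with alloc_percpu() and filled from a hotplug notifier, exactly as zcomp_init() does above:

#include <linux/percpu.h>

struct example_buf {
        void *mem;
};

static struct example_buf * __percpu *example_bufs;

static struct example_buf *example_buf_get(void)
{
        /* Disables preemption until example_buf_put() is called. */
        return *get_cpu_ptr(example_bufs);
}

static void example_buf_put(void)
{
        put_cpu_ptr(example_bufs);
}
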
int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm,
@@ -327,9 +118,83 @@ int zcomp_decompress(struct zcomp *comp, const unsigned char *src,
return comp->backend->decompress(src, src_len, dst);
}
+static int __zcomp_cpu_notifier(struct zcomp *comp,
+ unsigned long action, unsigned long cpu)
+{
+ struct zcomp_strm *zstrm;
+
+ switch (action) {
+ case CPU_UP_PREPARE:
+ if (WARN_ON(*per_cpu_ptr(comp->stream, cpu)))
+ break;
+ zstrm = zcomp_strm_alloc(comp, GFP_KERNEL);
+ if (IS_ERR_OR_NULL(zstrm)) {
+ pr_err("Can't allocate a compression stream\n");
+ return NOTIFY_BAD;
+ }
+ *per_cpu_ptr(comp->stream, cpu) = zstrm;
+ break;
+ case CPU_DEAD:
+ case CPU_UP_CANCELED:
+ zstrm = *per_cpu_ptr(comp->stream, cpu);
+ if (!IS_ERR_OR_NULL(zstrm))
+ zcomp_strm_free(comp, zstrm);
+ *per_cpu_ptr(comp->stream, cpu) = NULL;
+ break;
+ default:
+ break;
+ }
+ return NOTIFY_OK;
+}
+
+static int zcomp_cpu_notifier(struct notifier_block *nb,
+ unsigned long action, void *pcpu)
+{
+ unsigned long cpu = (unsigned long)pcpu;
+ struct zcomp *comp = container_of(nb, typeof(*comp), notifier);
+
+ return __zcomp_cpu_notifier(comp, action, cpu);
+}
+
+static int zcomp_init(struct zcomp *comp)
+{
+ unsigned long cpu;
+ int ret;
+
+ comp->notifier.notifier_call = zcomp_cpu_notifier;
+
+ comp->stream = alloc_percpu(struct zcomp_strm *);
+ if (!comp->stream)
+ return -ENOMEM;
+
+ cpu_notifier_register_begin();
+ for_each_online_cpu(cpu) {
+ ret = __zcomp_cpu_notifier(comp, CPU_UP_PREPARE, cpu);
+ if (ret == NOTIFY_BAD)
+ goto cleanup;
+ }
+ __register_cpu_notifier(&comp->notifier);
+ cpu_notifier_register_done();
+ return 0;
+
+cleanup:
+ for_each_online_cpu(cpu)
+ __zcomp_cpu_notifier(comp, CPU_UP_CANCELED, cpu);
+ cpu_notifier_register_done();
+ return -ENOMEM;
+}
+
void zcomp_destroy(struct zcomp *comp)
{
- comp->destroy(comp);
+ unsigned long cpu;
+
+ cpu_notifier_register_begin();
+ for_each_online_cpu(cpu)
+ __zcomp_cpu_notifier(comp, CPU_UP_CANCELED, cpu);
+ __unregister_cpu_notifier(&comp->notifier);
+ cpu_notifier_register_done();
+
+ free_percpu(comp->stream);
kfree(comp);
}
@@ -339,9 +204,9 @@ void zcomp_destroy(struct zcomp *comp)
* backend pointer or ERR_PTR if things went bad. ERR_PTR(-EINVAL)
* if requested algorithm is not supported, ERR_PTR(-ENOMEM) in
* case of allocation error, or any other error potentially
- * returned by functions zcomp_strm_{multi,single}_create.
+ * returned by zcomp_init().
*/
-struct zcomp *zcomp_create(const char *compress, int max_strm)
+struct zcomp *zcomp_create(const char *compress)
{
struct zcomp *comp;
struct zcomp_backend *backend;
@@ -356,10 +221,7 @@ struct zcomp *zcomp_create(const char *compress, int max_strm)
return ERR_PTR(-ENOMEM);
comp->backend = backend;
- if (max_strm > 1)
- error = zcomp_strm_multi_create(comp, max_strm);
- else
- error = zcomp_strm_single_create(comp);
+ error = zcomp_init(comp);
if (error) {
kfree(comp);
return ERR_PTR(error);
diff --git a/drivers/block/zram/zcomp.h b/drivers/block/zram/zcomp.h
index b7d2a4bcae54..ffd88cb747fe 100644
--- a/drivers/block/zram/zcomp.h
+++ b/drivers/block/zram/zcomp.h
@@ -10,8 +10,6 @@
#ifndef _ZCOMP_H_
#define _ZCOMP_H_
-#include <linux/mutex.h>
-
struct zcomp_strm {
/* compression/decompression buffer */
void *buffer;
@@ -21,8 +19,6 @@ struct zcomp_strm {
* working memory)
*/
void *private;
- /* used in multi stream backend, protected by backend strm_lock */
- struct list_head list;
};
/* static compression backend */
@@ -41,19 +37,15 @@ struct zcomp_backend {
/* dynamic per-device compression frontend */
struct zcomp {
- void *stream;
+ struct zcomp_strm * __percpu *stream;
struct zcomp_backend *backend;
-
- struct zcomp_strm *(*strm_find)(struct zcomp *comp);
- void (*strm_release)(struct zcomp *comp, struct zcomp_strm *zstrm);
- bool (*set_max_streams)(struct zcomp *comp, int num_strm);
- void (*destroy)(struct zcomp *comp);
+ struct notifier_block notifier;
};
ssize_t zcomp_available_show(const char *comp, char *buf);
bool zcomp_available_algorithm(const char *comp);
-struct zcomp *zcomp_create(const char *comp, int max_strm);
+struct zcomp *zcomp_create(const char *comp);
void zcomp_destroy(struct zcomp *comp);
struct zcomp_strm *zcomp_strm_find(struct zcomp *comp);
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 370c2f76016d..8fcad8b761f1 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -304,46 +304,25 @@ static ssize_t mem_used_max_store(struct device *dev,
return len;
}
+/*
+ * We switched to per-cpu streams and this attr is not needed anymore.
+ * However, we will keep it around for some time, because:
+ * a) we may revert per-cpu streams in the future
+ * b) it's visible to user space and we need to follow our 2-year
+ * retirement rule; but we already have a number of 'soon to be
+ * altered' attrs, so max_comp_streams needs to wait for the next
+ * layoff cycle.
+ */
static ssize_t max_comp_streams_show(struct device *dev,
struct device_attribute *attr, char *buf)
{
- int val;
- struct zram *zram = dev_to_zram(dev);
-
- down_read(&zram->init_lock);
- val = zram->max_comp_streams;
- up_read(&zram->init_lock);
-
- return scnprintf(buf, PAGE_SIZE, "%d\n", val);
+ return scnprintf(buf, PAGE_SIZE, "%d\n", num_online_cpus());
}
static ssize_t max_comp_streams_store(struct device *dev,
struct device_attribute *attr, const char *buf, size_t len)
{
- int num;
- struct zram *zram = dev_to_zram(dev);
- int ret;
-
- ret = kstrtoint(buf, 0, &num);
- if (ret < 0)
- return ret;
- if (num < 1)
- return -EINVAL;
-
- down_write(&zram->init_lock);
- if (init_done(zram)) {
- if (!zcomp_set_max_streams(zram->comp, num)) {
- pr_info("Cannot change max compression streams\n");
- ret = -EINVAL;
- goto out;
- }
- }
-
- zram->max_comp_streams = num;
- ret = len;
-out:
- up_write(&zram->init_lock);
- return ret;
+ return len;
}
static ssize_t comp_algorithm_show(struct device *dev,
@@ -456,8 +435,26 @@ static ssize_t mm_stat_show(struct device *dev,
return ret;
}
+static ssize_t debug_stat_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ int version = 1;
+ struct zram *zram = dev_to_zram(dev);
+ ssize_t ret;
+
+ down_read(&zram->init_lock);
+ ret = scnprintf(buf, PAGE_SIZE,
+ "version: %d\n%8llu\n",
+ version,
+ (u64)atomic64_read(&zram->stats.writestall));
+ up_read(&zram->init_lock);
+
+ return ret;
+}
+
static DEVICE_ATTR_RO(io_stat);
static DEVICE_ATTR_RO(mm_stat);
+static DEVICE_ATTR_RO(debug_stat);
ZRAM_ATTR_RO(num_reads);
ZRAM_ATTR_RO(num_writes);
ZRAM_ATTR_RO(failed_reads);
@@ -514,7 +511,7 @@ static struct zram_meta *zram_meta_alloc(char *pool_name, u64 disksize)
goto out_error;
}
- meta->mem_pool = zs_create_pool(pool_name, GFP_NOIO | __GFP_HIGHMEM);
+ meta->mem_pool = zs_create_pool(pool_name);
if (!meta->mem_pool) {
pr_err("Error creating memory pool\n");
goto out_error;
@@ -650,7 +647,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
{
int ret = 0;
size_t clen;
- unsigned long handle;
+ unsigned long handle = 0;
struct page *page;
unsigned char *user_mem, *cmem, *src, *uncmem = NULL;
struct zram_meta *meta = zram->meta;
@@ -673,9 +670,8 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
goto out;
}
- zstrm = zcomp_strm_find(zram->comp);
+compress_again:
user_mem = kmap_atomic(page);
-
if (is_partial_io(bvec)) {
memcpy(uncmem + offset, user_mem + bvec->bv_offset,
bvec->bv_len);
@@ -699,6 +695,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
goto out;
}
+ zstrm = zcomp_strm_find(zram->comp);
ret = zcomp_compress(zram->comp, zstrm, uncmem, &clen);
if (!is_partial_io(bvec)) {
kunmap_atomic(user_mem);
@@ -710,6 +707,7 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
pr_err("Compression failed! err=%d\n", ret);
goto out;
}
+
src = zstrm->buffer;
if (unlikely(clen > max_zpage_size)) {
clen = PAGE_SIZE;
@@ -717,8 +715,35 @@ static int zram_bvec_write(struct zram *zram, struct bio_vec *bvec, u32 index,
src = uncmem;
}
- handle = zs_malloc(meta->mem_pool, clen);
+ /*
+ * Handle allocation has two paths:
+ * a) the fast path is executed with preemption disabled (for
+ * per-cpu streams) and has the __GFP_DIRECT_RECLAIM bit clear,
+ * since we can't sleep;
+ * b) the slow path enables preemption and attempts to allocate
+ * the page with the __GFP_DIRECT_RECLAIM bit set. We have to
+ * put the per-cpu compression stream and, thus, redo the
+ * compression once the handle is allocated.
+ *
+ * If we have a non-NULL handle here, then we are coming
+ * from the slow path and the handle has already been allocated.
+ */
+ if (!handle)
+ handle = zs_malloc(meta->mem_pool, clen,
+ __GFP_KSWAPD_RECLAIM |
+ __GFP_NOWARN |
+ __GFP_HIGHMEM);
if (!handle) {
+ zcomp_strm_release(zram->comp, zstrm);
+ zstrm = NULL;
+
+ atomic64_inc(&zram->stats.writestall);
+
+ handle = zs_malloc(meta->mem_pool, clen,
+ GFP_NOIO | __GFP_HIGHMEM);
+ if (handle)
+ goto compress_again;
+
pr_err("Error allocating memory for compressed page: %u, size=%zu\n",
index, clen);
ret = -ENOMEM;
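
The comment in the hunk above boils down to a GFP-flag choice: while the per-cpu stream is held, preemption is disabled, so the first zs_malloc() must not enter direct reclaim; only after zcomp_strm_release() may the slow path use GFP_NOIO and sleep. A distilled sketch of that policy (illustrative helper, assuming the three-argument zs_malloc() introduced alongside this series):

/* Sketch: pick an allocation mask depending on whether we may sleep. */
static inline unsigned long example_alloc_handle(struct zs_pool *pool,
						 size_t clen, bool can_sleep)
{
	/* fast path: no __GFP_DIRECT_RECLAIM, never sleeps */
	gfp_t gfp = __GFP_KSWAPD_RECLAIM | __GFP_NOWARN | __GFP_HIGHMEM;

	if (can_sleep)		/* slow path: stream already released */
		gfp = GFP_NOIO | __GFP_HIGHMEM;

	return zs_malloc(pool, clen, gfp);
}
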
@@ -1009,7 +1034,6 @@ static void zram_reset_device(struct zram *zram)
/* Reset stats */
memset(&zram->stats, 0, sizeof(zram->stats));
zram->disksize = 0;
- zram->max_comp_streams = 1;
set_capacity(zram->disk, 0);
part_stat_set_all(&zram->disk->part0, 0);
@@ -1038,7 +1062,7 @@ static ssize_t disksize_store(struct device *dev,
if (!meta)
return -ENOMEM;
- comp = zcomp_create(zram->compressor, zram->max_comp_streams);
+ comp = zcomp_create(zram->compressor);
if (IS_ERR(comp)) {
pr_err("Cannot initialise %s compressing backend\n",
zram->compressor);
@@ -1177,6 +1201,7 @@ static struct attribute *zram_disk_attrs[] = {
&dev_attr_comp_algorithm.attr,
&dev_attr_io_stat.attr,
&dev_attr_mm_stat.attr,
+ &dev_attr_debug_stat.attr,
NULL,
};
@@ -1273,7 +1298,6 @@ static int zram_add(void)
}
strlcpy(zram->compressor, default_compressor, sizeof(zram->compressor));
zram->meta = NULL;
- zram->max_comp_streams = 1;
pr_info("Added device: %s\n", zram->disk->disk_name);
return device_id;
diff --git a/drivers/block/zram/zram_drv.h b/drivers/block/zram/zram_drv.h
index 8e92339686d7..3f5bf66a27e4 100644
--- a/drivers/block/zram/zram_drv.h
+++ b/drivers/block/zram/zram_drv.h
@@ -85,6 +85,7 @@ struct zram_stats {
atomic64_t zero_pages; /* no. of zero filled pages */
atomic64_t pages_stored; /* no. of pages currently stored */
atomic_long_t max_used_pages; /* no. of maximum pages stored */
+ atomic64_t writestall; /* no. of write slow paths */
};
struct zram_meta {
@@ -102,7 +103,6 @@ struct zram {
* the number of pages zram can consume for storing compressed data
*/
unsigned long limit_pages;
- int max_comp_streams;
struct zram_stats stats;
atomic_t refcount; /* refcount for zram_meta */
diff --git a/drivers/bluetooth/ath3k.c b/drivers/bluetooth/ath3k.c
index 47ca4b39d306..25894687c168 100644
--- a/drivers/bluetooth/ath3k.c
+++ b/drivers/bluetooth/ath3k.c
@@ -122,6 +122,7 @@ static const struct usb_device_id ath3k_table[] = {
{ USB_DEVICE(0x13d3, 0x3432) },
{ USB_DEVICE(0x13d3, 0x3472) },
{ USB_DEVICE(0x13d3, 0x3474) },
+ { USB_DEVICE(0x13d3, 0x3487) },
/* Atheros AR5BBU12 with sflash firmware */
{ USB_DEVICE(0x0489, 0xE02C) },
@@ -188,6 +189,7 @@ static const struct usb_device_id ath3k_blist_tbl[] = {
{ USB_DEVICE(0x13d3, 0x3432), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3472), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3474), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x13d3, 0x3487), .driver_info = BTUSB_ATH3012 },
/* Atheros AR5BBU22 with sflash firmware */
{ USB_DEVICE(0x0489, 0xE036), .driver_info = BTUSB_ATH3012 },
@@ -206,7 +208,8 @@ static int ath3k_load_firmware(struct usb_device *udev,
const struct firmware *firmware)
{
u8 *send_buf;
- int err, pipe, len, size, sent = 0;
+ int len = 0;
+ int err, pipe, size, sent = 0;
int count = firmware->size;
BT_DBG("udev %p", udev);
@@ -302,7 +305,8 @@ static int ath3k_load_fwfile(struct usb_device *udev,
const struct firmware *firmware)
{
u8 *send_buf;
- int err, pipe, len, size, count, sent = 0;
+ int len = 0;
+ int err, pipe, size, count, sent = 0;
int ret;
count = firmware->size;
diff --git a/drivers/bluetooth/btmrvl_drv.h b/drivers/bluetooth/btmrvl_drv.h
index 05904732e6f1..f742384b53f7 100644
--- a/drivers/bluetooth/btmrvl_drv.h
+++ b/drivers/bluetooth/btmrvl_drv.h
@@ -23,6 +23,17 @@
#include <linux/bitops.h>
#include <linux/slab.h>
#include <net/bluetooth/bluetooth.h>
+#include <linux/err.h>
+#include <linux/gpio.h>
+#include <linux/gfp.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/of_gpio.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+#include <linux/slab.h>
+#include <linux/of_irq.h>
#define BTM_HEADER_LEN 4
#define BTM_UPLD_SIZE 2312
diff --git a/drivers/bluetooth/btmrvl_main.c b/drivers/bluetooth/btmrvl_main.c
index f25a825a693f..7ad8d61c0c61 100644
--- a/drivers/bluetooth/btmrvl_main.c
+++ b/drivers/bluetooth/btmrvl_main.c
@@ -510,34 +510,39 @@ static int btmrvl_download_cal_data(struct btmrvl_private *priv,
static int btmrvl_check_device_tree(struct btmrvl_private *priv)
{
struct device_node *dt_node;
+ struct btmrvl_sdio_card *card = priv->btmrvl_dev.card;
u8 cal_data[BT_CAL_HDR_LEN + BT_CAL_DATA_SIZE];
- int ret;
- u32 val;
+ int ret = 0;
+ u16 gpio, gap;
+
+ if (card->plt_of_node) {
+ dt_node = card->plt_of_node;
+ ret = of_property_read_u16(dt_node, "marvell,wakeup-pin",
+ &gpio);
+ if (ret)
+ gpio = (priv->btmrvl_dev.gpio_gap & 0xff00) >> 8;
+
+ ret = of_property_read_u16(dt_node, "marvell,wakeup-gap-ms",
+ &gap);
+ if (ret)
+ gap = (u8)(priv->btmrvl_dev.gpio_gap & 0x00ff);
- for_each_compatible_node(dt_node, NULL, "btmrvl,cfgdata") {
- ret = of_property_read_u32(dt_node, "btmrvl,gpio-gap", &val);
- if (!ret)
- priv->btmrvl_dev.gpio_gap = val;
+ priv->btmrvl_dev.gpio_gap = (gpio << 8) + gap;
- ret = of_property_read_u8_array(dt_node, "btmrvl,cal-data",
+ ret = of_property_read_u8_array(dt_node, "marvell,cal-data",
cal_data + BT_CAL_HDR_LEN,
BT_CAL_DATA_SIZE);
- if (ret) {
- of_node_put(dt_node);
+ if (ret)
return ret;
- }
BT_DBG("Use cal data from device tree");
ret = btmrvl_download_cal_data(priv, cal_data,
BT_CAL_DATA_SIZE);
- if (ret) {
+ if (ret)
BT_ERR("Fail to download calibrate data");
- of_node_put(dt_node);
- return ret;
- }
}
- return 0;
+ return ret;
}
static int btmrvl_setup(struct hci_dev *hdev)
diff --git a/drivers/bluetooth/btmrvl_sdio.c b/drivers/bluetooth/btmrvl_sdio.c
index c6ef248de5e4..f425ddf91a24 100644
--- a/drivers/bluetooth/btmrvl_sdio.c
+++ b/drivers/bluetooth/btmrvl_sdio.c
@@ -52,6 +52,68 @@ static struct memory_type_mapping mem_type_mapping_tbl[] = {
{"EXTLAST", NULL, 0, 0xFE},
};
+static const struct of_device_id btmrvl_sdio_of_match_table[] = {
+ { .compatible = "marvell,sd8897-bt" },
+ { .compatible = "marvell,sd8997-bt" },
+ { }
+};
+
+static irqreturn_t btmrvl_wake_irq_bt(int irq, void *priv)
+{
+ struct btmrvl_plt_wake_cfg *cfg = priv;
+
+ if (cfg->irq_bt >= 0) {
+ pr_info("%s: wake by bt", __func__);
+ cfg->wake_by_bt = true;
+ disable_irq_nosync(irq);
+ }
+
+ return IRQ_HANDLED;
+}
+
+/* This function parses the device tree node using the mmc subnode
+ * devicetree API. The device node is saved in card->plt_of_node.
+ * If the device tree node exists and includes an interrupts attribute,
+ * this function will request the platform-specific wakeup interrupt.
+ */
+static int btmrvl_sdio_probe_of(struct device *dev,
+ struct btmrvl_sdio_card *card)
+{
+ struct btmrvl_plt_wake_cfg *cfg;
+ int ret;
+
+ if (!dev->of_node ||
+ !of_match_node(btmrvl_sdio_of_match_table, dev->of_node)) {
+ pr_err("sdio platform data not available");
+ return -1;
+ }
+
+ card->plt_of_node = dev->of_node;
+
+ card->plt_wake_cfg = devm_kzalloc(dev, sizeof(*card->plt_wake_cfg),
+ GFP_KERNEL);
+ cfg = card->plt_wake_cfg;
+ if (cfg && card->plt_of_node) {
+ cfg->irq_bt = irq_of_parse_and_map(card->plt_of_node, 0);
+ if (!cfg->irq_bt) {
+ dev_err(dev, "fail to parse irq_bt from device tree");
+ } else {
+ ret = devm_request_irq(dev, cfg->irq_bt,
+ btmrvl_wake_irq_bt,
+ IRQF_TRIGGER_LOW,
+ "bt_wake", cfg);
+ if (ret) {
+ dev_err(dev,
+ "Failed to request irq_bt %d (%d)\n",
+ cfg->irq_bt, ret);
+ }
+ disable_irq(cfg->irq_bt);
+ }
+ }
+
+ return 0;
+}
+
/* The btmrvl_sdio_remove() callback function is called
* when user removes this module from kernel space or ejects
* the card from the slot. The driver handles these 2 cases
@@ -1464,6 +1526,9 @@ static int btmrvl_sdio_probe(struct sdio_func *func,
btmrvl_sdio_enable_host_int(card);
+ /* Device tree node parsing and platform-specific configuration */
+ btmrvl_sdio_probe_of(&func->dev, card);
+
priv = btmrvl_add_card(card);
if (!priv) {
BT_ERR("Initializing card failed!");
@@ -1544,6 +1609,13 @@ static int btmrvl_sdio_suspend(struct device *dev)
return 0;
}
+ /* Enable platform specific wakeup interrupt */
+ if (card->plt_wake_cfg && card->plt_wake_cfg->irq_bt >= 0) {
+ card->plt_wake_cfg->wake_by_bt = false;
+ enable_irq(card->plt_wake_cfg->irq_bt);
+ enable_irq_wake(card->plt_wake_cfg->irq_bt);
+ }
+
priv = card->priv;
priv->adapter->is_suspending = true;
hcidev = priv->btmrvl_dev.hcidev;
@@ -1606,6 +1678,13 @@ static int btmrvl_sdio_resume(struct device *dev)
BT_DBG("%s: SDIO resume", hcidev->name);
hci_resume_dev(hcidev);
+ /* Disable platform specific wakeup interrupt */
+ if (card->plt_wake_cfg && card->plt_wake_cfg->irq_bt >= 0) {
+ disable_irq_wake(card->plt_wake_cfg->irq_bt);
+ if (!card->plt_wake_cfg->wake_by_bt)
+ disable_irq(card->plt_wake_cfg->irq_bt);
+ }
+
return 0;
}
diff --git a/drivers/bluetooth/btmrvl_sdio.h b/drivers/bluetooth/btmrvl_sdio.h
index 1a3bd064c442..3a522d23ee6e 100644
--- a/drivers/bluetooth/btmrvl_sdio.h
+++ b/drivers/bluetooth/btmrvl_sdio.h
@@ -62,6 +62,10 @@
#define FIRMWARE_READY 0xfedc
+struct btmrvl_plt_wake_cfg {
+ int irq_bt;
+ bool wake_by_bt;
+};
struct btmrvl_sdio_card_reg {
u8 cfg;
@@ -97,6 +101,8 @@ struct btmrvl_sdio_card {
u16 sd_blksz_fw_dl;
u8 rx_unit;
struct btmrvl_private *priv;
+ struct device_node *plt_of_node;
+ struct btmrvl_plt_wake_cfg *plt_wake_cfg;
};
struct btmrvl_sdio_device {
diff --git a/drivers/bluetooth/btusb.c b/drivers/bluetooth/btusb.c
index 0d4e372e426d..a3be65e6231a 100644
--- a/drivers/bluetooth/btusb.c
+++ b/drivers/bluetooth/btusb.c
@@ -236,6 +236,7 @@ static const struct usb_device_id blacklist_table[] = {
{ USB_DEVICE(0x13d3, 0x3432), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3472), .driver_info = BTUSB_ATH3012 },
{ USB_DEVICE(0x13d3, 0x3474), .driver_info = BTUSB_ATH3012 },
+ { USB_DEVICE(0x13d3, 0x3487), .driver_info = BTUSB_ATH3012 },
/* Atheros AR5BBU12 with sflash firmware */
{ USB_DEVICE(0x0489, 0xe02c), .driver_info = BTUSB_IGNORE },
@@ -2001,12 +2002,13 @@ static int btusb_setup_intel_new(struct hci_dev *hdev)
return -EINVAL;
}
- /* At the moment only the hardware variant iBT 3.0 (LnP/SfP) is
- * supported by this firmware loading method. This check has been
- * put in place to ensure correct forward compatibility options
- * when newer hardware variants come along.
+ /* At the moment the iBT 3.0 hardware variants 0x0b (LnP/SfP)
+ * and 0x0c (WsP) are supported by this firmware loading method.
+ *
+ * This check has been put in place to ensure correct forward
+ * compatibility options when newer hardware variants come along.
*/
- if (ver.hw_variant != 0x0b) {
+ if (ver.hw_variant != 0x0b && ver.hw_variant != 0x0c) {
BT_ERR("%s: Unsupported Intel hardware variant (%u)",
hdev->name, ver.hw_variant);
return -EINVAL;
diff --git a/drivers/bluetooth/hci_bcm.c b/drivers/bluetooth/hci_bcm.c
index d8881dc0600c..1c97eda8bae3 100644
--- a/drivers/bluetooth/hci_bcm.c
+++ b/drivers/bluetooth/hci_bcm.c
@@ -825,6 +825,7 @@ static const struct acpi_device_id bcm_acpi_match[] = {
{ "BCM2E64", 0 },
{ "BCM2E65", 0 },
{ "BCM2E67", 0 },
+ { "BCM2E71", 0 },
{ "BCM2E7B", 0 },
{ "BCM2E7C", 0 },
{ },
diff --git a/drivers/bluetooth/hci_bcsp.c b/drivers/bluetooth/hci_bcsp.c
index 064f2fefad62..d7d23ceba4d1 100644
--- a/drivers/bluetooth/hci_bcsp.c
+++ b/drivers/bluetooth/hci_bcsp.c
@@ -102,13 +102,12 @@ static const u16 crc_table[] = {
/* Initialise the crc calculator */
#define BCSP_CRC_INIT(x) x = 0xffff
-/*
- Update crc with next data byte
-
- Implementation note
- The data byte is treated as two nibbles. The crc is generated
- in reverse, i.e., bits are fed into the register from the top.
-*/
+/* Update crc with next data byte
+ *
+ * Implementation note
+ * The data byte is treated as two nibbles. The crc is generated
+ * in reverse, i.e., bits are fed into the register from the top.
+ */
static void bcsp_crc_update(u16 *crc, u8 d)
{
u16 reg = *crc;
@@ -223,9 +222,10 @@ static struct sk_buff *bcsp_prepare_pkt(struct bcsp_struct *bcsp, u8 *data,
}
/* Max len of packet: (original len +4(bcsp hdr) +2(crc))*2
- (because bytes 0xc0 and 0xdb are escaped, worst case is
- when the packet is all made of 0xc0 and 0xdb :) )
- + 2 (0xc0 delimiters at start and end). */
+ * (because bytes 0xc0 and 0xdb are escaped, worst case is
+ * when the packet is all made of 0xc0 and 0xdb :) )
+ * + 2 (0xc0 delimiters at start and end).
+ */
nskb = alloc_skb((len + 6) * 2 + 2, GFP_ATOMIC);
if (!nskb)
@@ -285,7 +285,7 @@ static struct sk_buff *bcsp_dequeue(struct hci_uart *hu)
struct bcsp_struct *bcsp = hu->priv;
unsigned long flags;
struct sk_buff *skb;
-
+
/* First of all, check for unreliable messages in the queue,
since they have priority */
@@ -305,8 +305,9 @@ static struct sk_buff *bcsp_dequeue(struct hci_uart *hu)
}
/* Now, try to send a reliable pkt. We can only send a
- reliable packet if the number of packets sent but not yet ack'ed
- is < than the winsize */
+ * reliable packet if the number of packets sent but not yet ack'ed
+ * is < than the winsize
+ */
spin_lock_irqsave_nested(&bcsp->unack.lock, flags, SINGLE_DEPTH_NESTING);
@@ -332,12 +333,14 @@ static struct sk_buff *bcsp_dequeue(struct hci_uart *hu)
spin_unlock_irqrestore(&bcsp->unack.lock, flags);
/* We could not send a reliable packet, either because there are
- none or because there are too many unack'ed pkts. Did we receive
- any packets we have not acknowledged yet ? */
+ * none or because there are too many unack'ed pkts. Did we receive
+ * any packets we have not acknowledged yet ?
+ */
if (bcsp->txack_req) {
/* if so, craft an empty ACK pkt and send it on BCSP unreliable
- channel 0 */
+ * channel 0
+ */
struct sk_buff *nskb = bcsp_prepare_pkt(bcsp, NULL, 0, BCSP_ACK_PKT);
return nskb;
}
@@ -399,8 +402,9 @@ static void bcsp_pkt_cull(struct bcsp_struct *bcsp)
}
/* Handle BCSP link-establishment packets. When we
- detect a "sync" packet, symptom that the BT module has reset,
- we do nothing :) (yet) */
+ * detect a "sync" packet, symptom that the BT module has reset,
+ * we do nothing :) (yet)
+ */
static void bcsp_handle_le_pkt(struct hci_uart *hu)
{
struct bcsp_struct *bcsp = hu->priv;
@@ -462,7 +466,7 @@ static inline void bcsp_unslip_one_byte(struct bcsp_struct *bcsp, unsigned char
case 0xdd:
memcpy(skb_put(bcsp->rx_skb, 1), &db, 1);
if ((bcsp->rx_skb->data[0] & 0x40) != 0 &&
- bcsp->rx_state != BCSP_W4_CRC)
+ bcsp->rx_state != BCSP_W4_CRC)
bcsp_crc_update(&bcsp->message_crc, 0xdb);
bcsp->rx_esc_state = BCSP_ESCSTATE_NOESC;
bcsp->rx_count--;
@@ -534,7 +538,7 @@ static void bcsp_complete_rx_pkt(struct hci_uart *hu)
} else {
BT_ERR("Packet for unknown channel (%u %s)",
bcsp->rx_skb->data[1] & 0x0f,
- bcsp->rx_skb->data[0] & 0x80 ?
+ bcsp->rx_skb->data[0] & 0x80 ?
"reliable" : "unreliable");
kfree_skb(bcsp->rx_skb);
}
@@ -562,7 +566,7 @@ static int bcsp_recv(struct hci_uart *hu, const void *data, int count)
struct bcsp_struct *bcsp = hu->priv;
const unsigned char *ptr;
- BT_DBG("hu %p count %d rx_state %d rx_count %ld",
+ BT_DBG("hu %p count %d rx_state %d rx_count %ld",
hu, count, bcsp->rx_state, bcsp->rx_count);
ptr = data;
@@ -591,7 +595,7 @@ static int bcsp_recv(struct hci_uart *hu, const void *data, int count)
continue;
}
if (bcsp->rx_skb->data[0] & 0x80 /* reliable pkt */
- && (bcsp->rx_skb->data[0] & 0x07) != bcsp->rxseq_txack) {
+ && (bcsp->rx_skb->data[0] & 0x07) != bcsp->rxseq_txack) {
BT_ERR("Out-of-order packet arrived, got %u expected %u",
bcsp->rx_skb->data[0] & 0x07, bcsp->rxseq_txack);
@@ -601,7 +605,7 @@ static int bcsp_recv(struct hci_uart *hu, const void *data, int count)
continue;
}
bcsp->rx_state = BCSP_W4_DATA;
- bcsp->rx_count = (bcsp->rx_skb->data[1] >> 4) +
+ bcsp->rx_count = (bcsp->rx_skb->data[1] >> 4) +
(bcsp->rx_skb->data[2] << 4); /* May be 0 */
continue;
@@ -615,7 +619,7 @@ static int bcsp_recv(struct hci_uart *hu, const void *data, int count)
case BCSP_W4_CRC:
if (bitrev16(bcsp->message_crc) != bscp_get_crc(bcsp)) {
- BT_ERR ("Checksum failed: computed %04x received %04x",
+ BT_ERR("Checksum failed: computed %04x received %04x",
bitrev16(bcsp->message_crc),
bscp_get_crc(bcsp));
@@ -653,8 +657,9 @@ static int bcsp_recv(struct hci_uart *hu, const void *data, int count)
BCSP_CRC_INIT(bcsp->message_crc);
/* Do not increment ptr or decrement count
- * Allocate packet. Max len of a BCSP pkt=
- * 0xFFF (payload) +4 (header) +2 (crc) */
+ * Allocate packet. Max len of a BCSP pkt=
+ * 0xFFF (payload) +4 (header) +2 (crc)
+ */
bcsp->rx_skb = bt_skb_alloc(0x1005, GFP_ATOMIC);
if (!bcsp->rx_skb) {
diff --git a/drivers/bluetooth/hci_intel.c b/drivers/bluetooth/hci_intel.c
index 91d605147b10..f6f2b01a1fea 100644
--- a/drivers/bluetooth/hci_intel.c
+++ b/drivers/bluetooth/hci_intel.c
@@ -1210,8 +1210,7 @@ static int intel_probe(struct platform_device *pdev)
idev->pdev = pdev;
- idev->reset = devm_gpiod_get_optional(&pdev->dev, "reset",
- GPIOD_OUT_LOW);
+ idev->reset = devm_gpiod_get(&pdev->dev, "reset", GPIOD_OUT_LOW);
if (IS_ERR(idev->reset)) {
dev_err(&pdev->dev, "Unable to retrieve gpio\n");
return PTR_ERR(idev->reset);
@@ -1223,8 +1222,7 @@ static int intel_probe(struct platform_device *pdev)
dev_err(&pdev->dev, "No IRQ, falling back to gpio-irq\n");
- host_wake = devm_gpiod_get_optional(&pdev->dev, "host-wake",
- GPIOD_IN);
+ host_wake = devm_gpiod_get(&pdev->dev, "host-wake", GPIOD_IN);
if (IS_ERR(host_wake)) {
dev_err(&pdev->dev, "Unable to retrieve IRQ\n");
goto no_irq;
diff --git a/drivers/bluetooth/hci_ldisc.c b/drivers/bluetooth/hci_ldisc.c
index c00168a5bb80..49b3e1e2d236 100644
--- a/drivers/bluetooth/hci_ldisc.c
+++ b/drivers/bluetooth/hci_ldisc.c
@@ -227,7 +227,7 @@ static int hci_uart_flush(struct hci_dev *hdev)
tty_ldisc_flush(tty);
tty_driver_flush_buffer(tty);
- if (test_bit(HCI_UART_PROTO_SET, &hu->flags))
+ if (test_bit(HCI_UART_PROTO_READY, &hu->flags))
hu->proto->flush(hu);
return 0;
@@ -492,7 +492,7 @@ static void hci_uart_tty_close(struct tty_struct *tty)
cancel_work_sync(&hu->write_work);
- if (test_and_clear_bit(HCI_UART_PROTO_SET, &hu->flags)) {
+ if (test_and_clear_bit(HCI_UART_PROTO_READY, &hu->flags)) {
if (hdev) {
if (test_bit(HCI_UART_REGISTERED, &hu->flags))
hci_unregister_dev(hdev);
@@ -500,6 +500,7 @@ static void hci_uart_tty_close(struct tty_struct *tty)
}
hu->proto->close(hu);
}
+ clear_bit(HCI_UART_PROTO_SET, &hu->flags);
kfree(hu);
}
@@ -526,7 +527,7 @@ static void hci_uart_tty_wakeup(struct tty_struct *tty)
if (tty != hu->tty)
return;
- if (test_bit(HCI_UART_PROTO_SET, &hu->flags))
+ if (test_bit(HCI_UART_PROTO_READY, &hu->flags))
hci_uart_tx_wakeup(hu);
}
@@ -550,7 +551,7 @@ static void hci_uart_tty_receive(struct tty_struct *tty, const u8 *data,
if (!hu || tty != hu->tty)
return;
- if (!test_bit(HCI_UART_PROTO_SET, &hu->flags))
+ if (!test_bit(HCI_UART_PROTO_READY, &hu->flags))
return;
/* It does not need a lock here as it is already protected by a mutex in
@@ -638,9 +639,11 @@ static int hci_uart_set_proto(struct hci_uart *hu, int id)
return err;
hu->proto = p;
+ set_bit(HCI_UART_PROTO_READY, &hu->flags);
err = hci_uart_register_dev(hu);
if (err) {
+ clear_bit(HCI_UART_PROTO_READY, &hu->flags);
p->close(hu);
return err;
}
diff --git a/drivers/bluetooth/hci_uart.h b/drivers/bluetooth/hci_uart.h
index 4814ff08f427..839bad1d8152 100644
--- a/drivers/bluetooth/hci_uart.h
+++ b/drivers/bluetooth/hci_uart.h
@@ -95,6 +95,7 @@ struct hci_uart {
/* HCI_UART proto flag bits */
#define HCI_UART_PROTO_SET 0
#define HCI_UART_REGISTERED 1
+#define HCI_UART_PROTO_READY 2
/* TX states */
#define HCI_UART_SENDING 1
diff --git a/drivers/bluetooth/hci_vhci.c b/drivers/bluetooth/hci_vhci.c
index 80783dcb7f57..aba31210c802 100644
--- a/drivers/bluetooth/hci_vhci.c
+++ b/drivers/bluetooth/hci_vhci.c
@@ -50,6 +50,7 @@ struct vhci_data {
wait_queue_head_t read_wait;
struct sk_buff_head readq;
+ struct mutex open_mutex;
struct delayed_work open_timeout;
};
@@ -87,12 +88,15 @@ static int vhci_send_frame(struct hci_dev *hdev, struct sk_buff *skb)
return 0;
}
-static int vhci_create_device(struct vhci_data *data, __u8 opcode)
+static int __vhci_create_device(struct vhci_data *data, __u8 opcode)
{
struct hci_dev *hdev;
struct sk_buff *skb;
__u8 dev_type;
+ if (data->hdev)
+ return -EBADFD;
+
/* bits 0-1 are dev_type (BR/EDR or AMP) */
dev_type = opcode & 0x03;
@@ -151,6 +155,17 @@ static int vhci_create_device(struct vhci_data *data, __u8 opcode)
return 0;
}
+static int vhci_create_device(struct vhci_data *data, __u8 opcode)
+{
+ int err;
+
+ mutex_lock(&data->open_mutex);
+ err = __vhci_create_device(data, opcode);
+ mutex_unlock(&data->open_mutex);
+
+ return err;
+}
+
static inline ssize_t vhci_get_user(struct vhci_data *data,
struct iov_iter *from)
{
@@ -189,11 +204,6 @@ static inline ssize_t vhci_get_user(struct vhci_data *data,
break;
case HCI_VENDOR_PKT:
- if (data->hdev) {
- kfree_skb(skb);
- return -EBADFD;
- }
-
cancel_delayed_work_sync(&data->open_timeout);
opcode = *((__u8 *) skb->data);
@@ -320,6 +330,7 @@ static int vhci_open(struct inode *inode, struct file *file)
skb_queue_head_init(&data->readq);
init_waitqueue_head(&data->read_wait);
+ mutex_init(&data->open_mutex);
INIT_DELAYED_WORK(&data->open_timeout, vhci_open_timeout);
file->private_data = data;
@@ -333,15 +344,18 @@ static int vhci_open(struct inode *inode, struct file *file)
static int vhci_release(struct inode *inode, struct file *file)
{
struct vhci_data *data = file->private_data;
- struct hci_dev *hdev = data->hdev;
+ struct hci_dev *hdev;
cancel_delayed_work_sync(&data->open_timeout);
+ hdev = data->hdev;
+
if (hdev) {
hci_unregister_dev(hdev);
hci_free_dev(hdev);
}
+ skb_queue_purge(&data->readq);
file->private_data = NULL;
kfree(data);
diff --git a/drivers/bus/Kconfig b/drivers/bus/Kconfig
index d4a3a3133da5..c5a7de9bc783 100644
--- a/drivers/bus/Kconfig
+++ b/drivers/bus/Kconfig
@@ -48,7 +48,7 @@ config ARM_CCI5xx_PMU
If unsure, say Y
config ARM_CCN
- bool "ARM CCN driver support"
+ tristate "ARM CCN driver support"
depends on ARM || ARM64
depends on PERF_EVENTS
help
@@ -58,6 +58,7 @@ config ARM_CCN
config BRCMSTB_GISB_ARB
bool "Broadcom STB GISB bus arbiter"
depends on ARM || MIPS
+ default ARCH_BRCMSTB || BMIPS_GENERIC
help
Driver for the Broadcom Set Top Box System-on-a-chip internal bus
arbiter. This driver provides timeout and target abort error handling
@@ -110,7 +111,7 @@ config OMAP_OCP2SCP
config SIMPLE_PM_BUS
bool "Simple Power-Managed Bus Driver"
depends on OF && PM
- depends on ARCH_SHMOBILE || COMPILE_TEST
+ depends on ARCH_RENESAS || COMPILE_TEST
help
Driver for transparent busses that don't need a real driver, but
where the bus controller is part of a PM domain, or under the control
diff --git a/drivers/bus/arm-ccn.c b/drivers/bus/arm-ccn.c
index 7082c7268845..acc3eb542c74 100644
--- a/drivers/bus/arm-ccn.c
+++ b/drivers/bus/arm-ccn.c
@@ -1189,7 +1189,7 @@ static int arm_ccn_pmu_cpu_notifier(struct notifier_block *nb,
perf_pmu_migrate_context(&dt->pmu, cpu, target);
cpumask_set_cpu(target, &dt->cpu);
if (ccn->irq)
- WARN_ON(irq_set_affinity(ccn->irq, &dt->cpu) != 0);
+ WARN_ON(irq_set_affinity_hint(ccn->irq, &dt->cpu) != 0);
default:
break;
}
@@ -1278,7 +1278,7 @@ static int arm_ccn_pmu_init(struct arm_ccn *ccn)
/* Also make sure that the overflow interrupt is handled by this CPU */
if (ccn->irq) {
- err = irq_set_affinity(ccn->irq, &ccn->dt.cpu);
+ err = irq_set_affinity_hint(ccn->irq, &ccn->dt.cpu);
if (err) {
dev_err(ccn->dev, "Failed to set interrupt affinity!\n");
goto error_set_affinity;
@@ -1306,7 +1306,8 @@ static void arm_ccn_pmu_cleanup(struct arm_ccn *ccn)
{
int i;
- irq_set_affinity(ccn->irq, cpu_possible_mask);
+ if (ccn->irq)
+ irq_set_affinity_hint(ccn->irq, NULL);
unregister_cpu_notifier(&ccn->dt.cpu_nb);
for (i = 0; i < ccn->num_xps; i++)
writel(0, ccn->xp[i].base + CCN_XP_DT_CONTROL);
diff --git a/drivers/bus/brcmstb_gisb.c b/drivers/bus/brcmstb_gisb.c
index f364fa4d24eb..72fe0a5a8bf3 100644
--- a/drivers/bus/brcmstb_gisb.c
+++ b/drivers/bus/brcmstb_gisb.c
@@ -30,6 +30,10 @@
#include <asm/signal.h>
#endif
+#ifdef CONFIG_MIPS
+#include <asm/traps.h>
+#endif
+
#define ARB_ERR_CAP_CLEAR (1 << 0)
#define ARB_ERR_CAP_STATUS_TIMEOUT (1 << 12)
#define ARB_ERR_CAP_STATUS_TEA (1 << 11)
@@ -238,6 +242,29 @@ static int brcmstb_bus_error_handler(unsigned long addr, unsigned int fsr,
}
#endif
+#ifdef CONFIG_MIPS
+static int brcmstb_bus_error_handler(struct pt_regs *regs, int is_fixup)
+{
+ int ret = 0;
+ struct brcmstb_gisb_arb_device *gdev;
+ u32 cap_status;
+
+ list_for_each_entry(gdev, &brcmstb_gisb_arb_device_list, next) {
+ cap_status = gisb_read(gdev, ARB_ERR_CAP_STATUS);
+
+ /* Invalid captured address, bail out */
+ if (!(cap_status & ARB_ERR_CAP_STATUS_VALID)) {
+ is_fixup = 1;
+ goto out;
+ }
+
+ ret |= brcmstb_gisb_arb_decode_addr(gdev, "bus error");
+ }
+out:
+ return is_fixup ? MIPS_BE_FIXUP : MIPS_BE_FATAL;
+}
+#endif
+
static irqreturn_t brcmstb_gisb_timeout_handler(int irq, void *dev_id)
{
brcmstb_gisb_arb_decode_addr(dev_id, "timeout");
@@ -355,6 +382,9 @@ static int __init brcmstb_gisb_arb_probe(struct platform_device *pdev)
hook_fault_code(22, brcmstb_bus_error_handler, SIGBUS, 0,
"imprecise external abort");
#endif
+#ifdef CONFIG_MIPS
+ board_be_handler = brcmstb_bus_error_handler;
+#endif
dev_info(&pdev->dev, "registered mem: %p, irqs: %d, %d\n",
gdev->base, timeout_irq, tea_irq);
diff --git a/drivers/bus/mips_cdmm.c b/drivers/bus/mips_cdmm.c
index 1c543effe062..cad49bc38b3e 100644
--- a/drivers/bus/mips_cdmm.c
+++ b/drivers/bus/mips_cdmm.c
@@ -599,8 +599,8 @@ BUILD_PERDEV_HELPER(cpu_up) /* int mips_cdmm_cpu_up_helper(...) */
* mips_cdmm_bus_down() - Tear down the CDMM bus.
* @data: Pointer to unsigned int CPU number.
*
- * This work_on_cpu callback function is executed on a given CPU to call the
- * CDMM driver cpu_down callback for all devices on that CPU.
+ * This function is executed on the hotplugged CPU and calls the CDMM
+ * driver cpu_down callback for all devices on that CPU.
*/
static long mips_cdmm_bus_down(void *data)
{
@@ -630,7 +630,9 @@ static long mips_cdmm_bus_down(void *data)
* CDMM devices on that CPU, or to call the CDMM driver cpu_up callback for all
* devices already discovered on that CPU.
*
- * It is used during initialisation and when CPUs are brought online.
+ * It is used as a work_on_cpu callback function during
+ * initialisation. When CPUs are brought online, the function is
+ * invoked directly on the hotplugged CPU.
*/
static long mips_cdmm_bus_up(void *data)
{
@@ -677,10 +679,10 @@ static int mips_cdmm_cpu_notify(struct notifier_block *nb,
switch (action & ~CPU_TASKS_FROZEN) {
case CPU_ONLINE:
case CPU_DOWN_FAILED:
- work_on_cpu(cpu, mips_cdmm_bus_up, &cpu);
+ mips_cdmm_bus_up(&cpu);
break;
case CPU_DOWN_PREPARE:
- work_on_cpu(cpu, mips_cdmm_bus_down, &cpu);
+ mips_cdmm_bus_down(&cpu);
break;
default:
return NOTIFY_DONE;
diff --git a/drivers/char/Kconfig b/drivers/char/Kconfig
index 3ec0766ed5e9..601f64fcc890 100644
--- a/drivers/char/Kconfig
+++ b/drivers/char/Kconfig
@@ -279,8 +279,7 @@ if RTC_LIB=n
config RTC
tristate "Enhanced Real Time Clock Support (legacy PC RTC driver)"
- depends on !PPC && !PARISC && !IA64 && !M68K && !SPARC && !FRV \
- && !ARM && !SUPERH && !S390 && !AVR32 && !BLACKFIN && !UML
+ depends on ALPHA || (MIPS && MACH_LOONGSON64) || MN10300
---help---
If you say Y here and create a character special file /dev/rtc with
major number 10 and minor number 135 using mknod ("man mknod"), you
@@ -585,7 +584,6 @@ config TELCLOCK
config DEVPORT
bool
- depends on !M68K
depends on ISA || PCI
default y
diff --git a/drivers/char/hw_random/Kconfig b/drivers/char/hw_random/Kconfig
index 67ee8b08ab53..ac51149e9777 100644
--- a/drivers/char/hw_random/Kconfig
+++ b/drivers/char/hw_random/Kconfig
@@ -268,19 +268,6 @@ config HW_RANDOM_NOMADIK
If unsure, say Y.
-config HW_RANDOM_PPC4XX
- tristate "PowerPC 4xx generic true random number generator support"
- depends on PPC && 4xx
- default HW_RANDOM
- ---help---
- This driver provides the kernel-side support for the TRNG hardware
- found in the security function of some PowerPC 4xx SoCs.
-
- To compile this driver as a module, choose M here: the
- module will be called ppc4xx-rng.
-
- If unsure, say N.
-
config HW_RANDOM_PSERIES
tristate "pSeries HW Random Number Generator support"
depends on PPC64 && IBMVIO
@@ -309,7 +296,8 @@ config HW_RANDOM_POWERNV
config HW_RANDOM_EXYNOS
tristate "EXYNOS HW random number generator support"
- depends on ARCH_EXYNOS
+ depends on ARCH_EXYNOS || COMPILE_TEST
+ depends on HAS_IOMEM
default HW_RANDOM
---help---
This driver provides kernel-side support for the Random Number
@@ -333,6 +321,19 @@ config HW_RANDOM_TPM
If unsure, say Y.
+config HW_RANDOM_HISI
+ tristate "Hisilicon Random Number Generator support"
+ depends on HW_RANDOM && ARCH_HISI
+ default HW_RANDOM
+ ---help---
+ This driver provides kernel-side support for the Random Number
+ Generator hardware found on Hisilicon Hip04 and Hip05 SoCs.
+
+ To compile this driver as a module, choose M here: the
+ module will be called hisi-rng.
+
+ If unsure, say Y.
+
config HW_RANDOM_MSM
tristate "Qualcomm SoCs Random Number Generator support"
depends on HW_RANDOM && ARCH_QCOM
diff --git a/drivers/char/hw_random/Makefile b/drivers/char/hw_random/Makefile
index f5a6fa7690e7..63022b49f160 100644
--- a/drivers/char/hw_random/Makefile
+++ b/drivers/char/hw_random/Makefile
@@ -22,10 +22,10 @@ obj-$(CONFIG_HW_RANDOM_TX4939) += tx4939-rng.o
obj-$(CONFIG_HW_RANDOM_MXC_RNGA) += mxc-rnga.o
obj-$(CONFIG_HW_RANDOM_OCTEON) += octeon-rng.o
obj-$(CONFIG_HW_RANDOM_NOMADIK) += nomadik-rng.o
-obj-$(CONFIG_HW_RANDOM_PPC4XX) += ppc4xx-rng.o
obj-$(CONFIG_HW_RANDOM_PSERIES) += pseries-rng.o
obj-$(CONFIG_HW_RANDOM_POWERNV) += powernv-rng.o
obj-$(CONFIG_HW_RANDOM_EXYNOS) += exynos-rng.o
+obj-$(CONFIG_HW_RANDOM_HISI) += hisi-rng.o
obj-$(CONFIG_HW_RANDOM_TPM) += tpm-rng.o
obj-$(CONFIG_HW_RANDOM_BCM2835) += bcm2835-rng.o
obj-$(CONFIG_HW_RANDOM_IPROC_RNG200) += iproc-rng200.o
diff --git a/drivers/char/hw_random/exynos-rng.c b/drivers/char/hw_random/exynos-rng.c
index ada081232528..ed44561ea647 100644
--- a/drivers/char/hw_random/exynos-rng.c
+++ b/drivers/char/hw_random/exynos-rng.c
@@ -2,7 +2,7 @@
* exynos-rng.c - Random Number Generator driver for the exynos
*
* Copyright (C) 2012 Samsung Electronics
- * Jonghwa Lee <jonghwa3.lee@smasung.com>
+ * Jonghwa Lee <jonghwa3.lee@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -77,7 +77,8 @@ static int exynos_init(struct hwrng *rng)
pm_runtime_get_sync(exynos_rng->dev);
ret = exynos_rng_configure(exynos_rng);
- pm_runtime_put_noidle(exynos_rng->dev);
+ pm_runtime_mark_last_busy(exynos_rng->dev);
+ pm_runtime_put_autosuspend(exynos_rng->dev);
return ret;
}
@@ -89,6 +90,7 @@ static int exynos_read(struct hwrng *rng, void *buf,
struct exynos_rng, rng);
u32 *data = buf;
int retry = 100;
+ int ret = 4;
pm_runtime_get_sync(exynos_rng->dev);
@@ -97,23 +99,27 @@ static int exynos_read(struct hwrng *rng, void *buf,
while (!(exynos_rng_readl(exynos_rng,
EXYNOS_PRNG_STATUS_OFFSET) & PRNG_DONE) && --retry)
cpu_relax();
- if (!retry)
- return -ETIMEDOUT;
+ if (!retry) {
+ ret = -ETIMEDOUT;
+ goto out;
+ }
exynos_rng_writel(exynos_rng, PRNG_DONE, EXYNOS_PRNG_STATUS_OFFSET);
*data = exynos_rng_readl(exynos_rng, EXYNOS_PRNG_OUT1_OFFSET);
+out:
pm_runtime_mark_last_busy(exynos_rng->dev);
pm_runtime_put_sync_autosuspend(exynos_rng->dev);
- return 4;
+ return ret;
}
static int exynos_rng_probe(struct platform_device *pdev)
{
struct exynos_rng *exynos_rng;
struct resource *res;
+ int ret;
exynos_rng = devm_kzalloc(&pdev->dev, sizeof(struct exynos_rng),
GFP_KERNEL);
@@ -141,7 +147,21 @@ static int exynos_rng_probe(struct platform_device *pdev)
pm_runtime_use_autosuspend(&pdev->dev);
pm_runtime_enable(&pdev->dev);
- return devm_hwrng_register(&pdev->dev, &exynos_rng->rng);
+ ret = devm_hwrng_register(&pdev->dev, &exynos_rng->rng);
+ if (ret) {
+ pm_runtime_dont_use_autosuspend(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
+ }
+
+ return ret;
+}
+
+static int exynos_rng_remove(struct platform_device *pdev)
+{
+ pm_runtime_dont_use_autosuspend(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
+
+ return 0;
}
static int __maybe_unused exynos_rng_runtime_suspend(struct device *dev)
@@ -201,6 +221,7 @@ static struct platform_driver exynos_rng_driver = {
.of_match_table = exynos_rng_dt_match,
},
.probe = exynos_rng_probe,
+ .remove = exynos_rng_remove,
};
module_platform_driver(exynos_rng_driver);
diff --git a/drivers/char/hw_random/hisi-rng.c b/drivers/char/hw_random/hisi-rng.c
new file mode 100644
index 000000000000..40d96572c591
--- /dev/null
+++ b/drivers/char/hw_random/hisi-rng.c
@@ -0,0 +1,126 @@
+/*
+ * Copyright (C) 2016 HiSilicon Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/err.h>
+#include <linux/kernel.h>
+#include <linux/hw_random.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/random.h>
+
+#define RNG_SEED 0x0
+#define RNG_CTRL 0x4
+ #define RNG_SEED_SEL BIT(2)
+ #define RNG_RING_EN BIT(1)
+ #define RNG_EN BIT(0)
+#define RNG_RAN_NUM 0x10
+#define RNG_PHY_SEED 0x14
+
+#define to_hisi_rng(p) container_of(p, struct hisi_rng, rng)
+
+static int seed_sel;
+module_param(seed_sel, int, S_IRUGO);
+MODULE_PARM_DESC(seed_sel, "Auto reload seed. 0, use LFSR(default); 1, use ring oscillator.");
+
+struct hisi_rng {
+ void __iomem *base;
+ struct hwrng rng;
+};
+
+static int hisi_rng_init(struct hwrng *rng)
+{
+ struct hisi_rng *hrng = to_hisi_rng(rng);
+ int val = RNG_EN;
+ u32 seed;
+
+ /* get a random number as initial seed */
+ get_random_bytes(&seed, sizeof(seed));
+
+ writel_relaxed(seed, hrng->base + RNG_SEED);
+
+ /**
+ * The seed is reloaded periodically. There are two choices of
+ * seed: by default the value comes from the LFSR, or a seed
+ * generated by the ring oscillator can be used instead.
+ */
+ if (seed_sel == 1)
+ val |= RNG_RING_EN | RNG_SEED_SEL;
+
+ writel_relaxed(val, hrng->base + RNG_CTRL);
+ return 0;
+}
+
+static void hisi_rng_cleanup(struct hwrng *rng)
+{
+ struct hisi_rng *hrng = to_hisi_rng(rng);
+
+ writel_relaxed(0, hrng->base + RNG_CTRL);
+}
+
+static int hisi_rng_read(struct hwrng *rng, void *buf, size_t max, bool wait)
+{
+ struct hisi_rng *hrng = to_hisi_rng(rng);
+ u32 *data = buf;
+
+ *data = readl_relaxed(hrng->base + RNG_RAN_NUM);
+ return 4;
+}
+
+static int hisi_rng_probe(struct platform_device *pdev)
+{
+ struct hisi_rng *rng;
+ struct resource *res;
+ int ret;
+
+ rng = devm_kzalloc(&pdev->dev, sizeof(*rng), GFP_KERNEL);
+ if (!rng)
+ return -ENOMEM;
+
+ platform_set_drvdata(pdev, rng);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ rng->base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(rng->base))
+ return PTR_ERR(rng->base);
+
+ rng->rng.name = pdev->name;
+ rng->rng.init = hisi_rng_init;
+ rng->rng.cleanup = hisi_rng_cleanup;
+ rng->rng.read = hisi_rng_read;
+
+ ret = devm_hwrng_register(&pdev->dev, &rng->rng);
+ if (ret) {
+ dev_err(&pdev->dev, "failed to register hwrng\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static const struct of_device_id hisi_rng_dt_ids[] = {
+ { .compatible = "hisilicon,hip04-rng" },
+ { .compatible = "hisilicon,hip05-rng" },
+ { }
+};
+MODULE_DEVICE_TABLE(of, hisi_rng_dt_ids);
+
+static struct platform_driver hisi_rng_driver = {
+ .probe = hisi_rng_probe,
+ .driver = {
+ .name = "hisi-rng",
+ .of_match_table = of_match_ptr(hisi_rng_dt_ids),
+ },
+};
+
+module_platform_driver(hisi_rng_driver);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Kefeng Wang <wangkefeng.wang@huawei>");
+MODULE_DESCRIPTION("Hisilicon random number generator driver");
diff --git a/drivers/char/hw_random/ppc4xx-rng.c b/drivers/char/hw_random/ppc4xx-rng.c
deleted file mode 100644
index c0db4387d2e2..000000000000
--- a/drivers/char/hw_random/ppc4xx-rng.c
+++ /dev/null
@@ -1,147 +0,0 @@
-/*
- * Generic PowerPC 44x RNG driver
- *
- * Copyright 2011 IBM Corporation
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms of the GNU General Public License as published by the
- * Free Software Foundation; version 2 of the License.
- */
-
-#include <linux/module.h>
-#include <linux/kernel.h>
-#include <linux/platform_device.h>
-#include <linux/hw_random.h>
-#include <linux/delay.h>
-#include <linux/of_address.h>
-#include <linux/of_platform.h>
-#include <asm/io.h>
-
-#define PPC4XX_TRNG_DEV_CTRL 0x60080
-
-#define PPC4XX_TRNGE 0x00020000
-#define PPC4XX_TRNG_CTRL 0x0008
-#define PPC4XX_TRNG_CTRL_DALM 0x20
-#define PPC4XX_TRNG_STAT 0x0004
-#define PPC4XX_TRNG_STAT_B 0x1
-#define PPC4XX_TRNG_DATA 0x0000
-
-#define MODULE_NAME "ppc4xx_rng"
-
-static int ppc4xx_rng_data_present(struct hwrng *rng, int wait)
-{
- void __iomem *rng_regs = (void __iomem *) rng->priv;
- int busy, i, present = 0;
-
- for (i = 0; i < 20; i++) {
- busy = (in_le32(rng_regs + PPC4XX_TRNG_STAT) & PPC4XX_TRNG_STAT_B);
- if (!busy || !wait) {
- present = 1;
- break;
- }
- udelay(10);
- }
- return present;
-}
-
-static int ppc4xx_rng_data_read(struct hwrng *rng, u32 *data)
-{
- void __iomem *rng_regs = (void __iomem *) rng->priv;
- *data = in_le32(rng_regs + PPC4XX_TRNG_DATA);
- return 4;
-}
-
-static int ppc4xx_rng_enable(int enable)
-{
- struct device_node *ctrl;
- void __iomem *ctrl_reg;
- int err = 0;
- u32 val;
-
- /* Find the main crypto device node and map it to turn the TRNG on */
- ctrl = of_find_compatible_node(NULL, NULL, "amcc,ppc4xx-crypto");
- if (!ctrl)
- return -ENODEV;
-
- ctrl_reg = of_iomap(ctrl, 0);
- if (!ctrl_reg) {
- err = -ENODEV;
- goto out;
- }
-
- val = in_le32(ctrl_reg + PPC4XX_TRNG_DEV_CTRL);
-
- if (enable)
- val |= PPC4XX_TRNGE;
- else
- val = val & ~PPC4XX_TRNGE;
-
- out_le32(ctrl_reg + PPC4XX_TRNG_DEV_CTRL, val);
- iounmap(ctrl_reg);
-
-out:
- of_node_put(ctrl);
-
- return err;
-}
-
-static struct hwrng ppc4xx_rng = {
- .name = MODULE_NAME,
- .data_present = ppc4xx_rng_data_present,
- .data_read = ppc4xx_rng_data_read,
-};
-
-static int ppc4xx_rng_probe(struct platform_device *dev)
-{
- void __iomem *rng_regs;
- int err = 0;
-
- rng_regs = of_iomap(dev->dev.of_node, 0);
- if (!rng_regs)
- return -ENODEV;
-
- err = ppc4xx_rng_enable(1);
- if (err)
- return err;
-
- out_le32(rng_regs + PPC4XX_TRNG_CTRL, PPC4XX_TRNG_CTRL_DALM);
- ppc4xx_rng.priv = (unsigned long) rng_regs;
-
- err = hwrng_register(&ppc4xx_rng);
-
- return err;
-}
-
-static int ppc4xx_rng_remove(struct platform_device *dev)
-{
- void __iomem *rng_regs = (void __iomem *) ppc4xx_rng.priv;
-
- hwrng_unregister(&ppc4xx_rng);
- ppc4xx_rng_enable(0);
- iounmap(rng_regs);
-
- return 0;
-}
-
-static const struct of_device_id ppc4xx_rng_match[] = {
- { .compatible = "ppc4xx-rng", },
- { .compatible = "amcc,ppc460ex-rng", },
- { .compatible = "amcc,ppc440epx-rng", },
- {},
-};
-MODULE_DEVICE_TABLE(of, ppc4xx_rng_match);
-
-static struct platform_driver ppc4xx_rng_driver = {
- .driver = {
- .name = MODULE_NAME,
- .of_match_table = ppc4xx_rng_match,
- },
- .probe = ppc4xx_rng_probe,
- .remove = ppc4xx_rng_remove,
-};
-
-module_platform_driver(ppc4xx_rng_driver);
-
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Josh Boyer <jwboyer@linux.vnet.ibm.com>");
-MODULE_DESCRIPTION("HW RNG driver for PPC 4xx processors");
diff --git a/drivers/char/ipmi/ipmi_si_intf.c b/drivers/char/ipmi/ipmi_si_intf.c
index 1e25b5205724..7b1c412b40a2 100644
--- a/drivers/char/ipmi/ipmi_si_intf.c
+++ b/drivers/char/ipmi/ipmi_si_intf.c
@@ -104,7 +104,7 @@ enum si_intf_state {
#define IPMI_BT_INTMASK_ENABLE_IRQ_BIT 1
enum si_type {
- SI_KCS, SI_SMIC, SI_BT
+ SI_KCS, SI_SMIC, SI_BT
};
static const char * const si_to_str[] = { "kcs", "smic", "bt" };
@@ -410,7 +410,7 @@ static enum si_sm_result start_next_msg(struct smi_info *smi_info)
rv = SI_SM_CALL_WITHOUT_DELAY;
}
- out:
+out:
return rv;
}
@@ -539,7 +539,7 @@ static struct ipmi_smi_msg *alloc_msg_handle_irq(struct smi_info *smi_info)
static void handle_flags(struct smi_info *smi_info)
{
- retry:
+retry:
if (smi_info->msg_flags & WDT_PRE_TIMEOUT_INT) {
/* Watchdog pre-timeout */
smi_inc_stat(smi_info, watchdog_pretimeouts);
@@ -831,7 +831,7 @@ static enum si_sm_result smi_event_handler(struct smi_info *smi_info,
{
enum si_sm_result si_sm_result;
- restart:
+restart:
/*
* There used to be a loop here that waited a little while
* (around 25us) before giving up. That turned out to be
@@ -944,7 +944,7 @@ static enum si_sm_result smi_event_handler(struct smi_info *smi_info,
smi_info->timer_running = false;
}
- out:
+out:
return si_sm_result;
}
@@ -1190,7 +1190,7 @@ static void smi_timeout(unsigned long data)
timeout = jiffies + SI_TIMEOUT_JIFFIES;
}
- do_mod_timer:
+do_mod_timer:
if (smi_result != SI_SM_IDLE)
smi_mod_timer(smi_info, timeout);
else
@@ -1576,10 +1576,9 @@ static int port_setup(struct smi_info *info)
if (request_region(addr + idx * info->io.regspacing,
info->io.regsize, DEVICE_NAME) == NULL) {
/* Undo allocations */
- while (idx--) {
+ while (idx--)
release_region(addr + idx * info->io.regspacing,
info->io.regsize);
- }
return -EIO;
}
}
@@ -1638,25 +1637,28 @@ static void mem_outq(const struct si_sm_io *io, unsigned int offset,
}
#endif
-static void mem_cleanup(struct smi_info *info)
+static void mem_region_cleanup(struct smi_info *info, int num)
{
unsigned long addr = info->io.addr_data;
- int mapsize;
+ int idx;
+ for (idx = 0; idx < num; idx++)
+ release_mem_region(addr + idx * info->io.regspacing,
+ info->io.regsize);
+}
+
+static void mem_cleanup(struct smi_info *info)
+{
if (info->io.addr) {
iounmap(info->io.addr);
-
- mapsize = ((info->io_size * info->io.regspacing)
- - (info->io.regspacing - info->io.regsize));
-
- release_mem_region(addr, mapsize);
+ mem_region_cleanup(info, info->io_size);
}
}
static int mem_setup(struct smi_info *info)
{
unsigned long addr = info->io.addr_data;
- int mapsize;
+ int mapsize, idx;
if (!addr)
return -ENODEV;
@@ -1693,6 +1695,21 @@ static int mem_setup(struct smi_info *info)
}
/*
+ * Some BIOSes reserve disjoint memory regions in their ACPI
+ * tables. This causes problems when trying to request the
+ * entire region. Therefore we must request each register
+ * separately.
+ */
+ for (idx = 0; idx < info->io_size; idx++) {
+ if (request_mem_region(addr + idx * info->io.regspacing,
+ info->io.regsize, DEVICE_NAME) == NULL) {
+ /* Undo allocations */
+ mem_region_cleanup(info, idx);
+ return -EIO;
+ }
+ }
+
+ /*
* Calculate the total amount of memory to claim. This is an
* unusual looking calculation, but it avoids claiming any
* more memory than it has to. It will claim everything
@@ -1701,13 +1718,9 @@ static int mem_setup(struct smi_info *info)
*/
mapsize = ((info->io_size * info->io.regspacing)
- (info->io.regspacing - info->io.regsize));
-
- if (request_mem_region(addr, mapsize, DEVICE_NAME) == NULL)
- return -EIO;
-
info->io.addr = ioremap(addr, mapsize);
if (info->io.addr == NULL) {
- release_mem_region(addr, mapsize);
+ mem_region_cleanup(info, info->io_size);
return -EIO;
}
return 0;
@@ -1975,7 +1988,7 @@ static int hotmod_handler(const char *val, struct kernel_param *kp)
}
}
rv = len;
- out:
+out:
kfree(str);
return rv;
}
@@ -2945,7 +2958,7 @@ static int try_get_dev_id(struct smi_info *smi_info)
/* Check and record info from the get device id, in case we need it. */
rv = ipmi_demangle_device_id(resp, resp_len, &smi_info->device_id);
- out:
+out:
kfree(resp);
return rv;
}
@@ -3192,7 +3205,7 @@ static int try_enable_event_buffer(struct smi_info *smi_info)
else
smi_info->supports_event_msg_buff = true;
- out:
+out:
kfree(resp);
return rv;
}
@@ -3718,10 +3731,10 @@ static int try_smi_init(struct smi_info *new_smi)
return 0;
- out_err_stop_timer:
+out_err_stop_timer:
wait_for_timer_and_thread(new_smi);
- out_err:
+out_err:
new_smi->interrupt_disabled = true;
if (new_smi->intf) {
diff --git a/drivers/char/ipmi/ipmi_ssif.c b/drivers/char/ipmi/ipmi_ssif.c
index 8b3be8b92573..097c86898608 100644
--- a/drivers/char/ipmi/ipmi_ssif.c
+++ b/drivers/char/ipmi/ipmi_ssif.c
@@ -1870,7 +1870,7 @@ static int try_init_spmi(struct SPMITable *spmi)
return -EIO;
}
- myaddr = spmi->addr.address >> 1;
+ myaddr = spmi->addr.address & 0x7f;
return new_ssif_client(myaddr, NULL, 0, 0, SI_SPMI);
}
diff --git a/drivers/char/pcmcia/synclink_cs.c b/drivers/char/pcmcia/synclink_cs.c
index 22c27652e46a..d28922df01d7 100644
--- a/drivers/char/pcmcia/synclink_cs.c
+++ b/drivers/char/pcmcia/synclink_cs.c
@@ -1101,7 +1101,7 @@ static void dcd_change(MGSLPC_INFO *info, struct tty_struct *tty)
wake_up_interruptible(&info->status_event_wait_q);
wake_up_interruptible(&info->event_wait_q);
- if (info->port.flags & ASYNC_CHECK_CD) {
+ if (tty_port_check_carrier(&info->port)) {
if (debug_level >= DEBUG_LEVEL_ISR)
printk("%s CD now %s...", info->device_name,
(info->serial_signals & SerialSignal_DCD) ? "on" : "off");
@@ -1272,7 +1272,7 @@ static int startup(MGSLPC_INFO * info, struct tty_struct *tty)
if (debug_level >= DEBUG_LEVEL_INFO)
printk("%s(%d):startup(%s)\n", __FILE__, __LINE__, info->device_name);
- if (info->port.flags & ASYNC_INITIALIZED)
+ if (tty_port_initialized(&info->port))
return 0;
if (!info->tx_buf) {
@@ -1311,7 +1311,7 @@ static int startup(MGSLPC_INFO * info, struct tty_struct *tty)
if (tty)
clear_bit(TTY_IO_ERROR, &tty->flags);
- info->port.flags |= ASYNC_INITIALIZED;
+ tty_port_set_initialized(&info->port, 1);
return 0;
}
@@ -1322,7 +1322,7 @@ static void shutdown(MGSLPC_INFO * info, struct tty_struct *tty)
{
unsigned long flags;
- if (!(info->port.flags & ASYNC_INITIALIZED))
+ if (!tty_port_initialized(&info->port))
return;
if (debug_level >= DEBUG_LEVEL_INFO)
@@ -1361,7 +1361,7 @@ static void shutdown(MGSLPC_INFO * info, struct tty_struct *tty)
if (tty)
set_bit(TTY_IO_ERROR, &tty->flags);
- info->port.flags &= ~ASYNC_INITIALIZED;
+ tty_port_set_initialized(&info->port, 0);
}
static void mgslpc_program_hw(MGSLPC_INFO *info, struct tty_struct *tty)
@@ -1466,15 +1466,8 @@ static void mgslpc_change_params(MGSLPC_INFO *info, struct tty_struct *tty)
}
info->timeout += HZ/50; /* Add .02 seconds of slop */
- if (cflag & CRTSCTS)
- info->port.flags |= ASYNC_CTS_FLOW;
- else
- info->port.flags &= ~ASYNC_CTS_FLOW;
-
- if (cflag & CLOCAL)
- info->port.flags &= ~ASYNC_CHECK_CD;
- else
- info->port.flags |= ASYNC_CHECK_CD;
+ tty_port_set_cts_flow(&info->port, cflag & CRTSCTS);
+ tty_port_set_check_carrier(&info->port, ~cflag & CLOCAL);
/* process tty input control flags */
@@ -2246,7 +2239,7 @@ static int mgslpc_ioctl(struct tty_struct *tty,
if ((cmd != TIOCGSERIAL) && (cmd != TIOCSSERIAL) &&
(cmd != TIOCMIWAIT)) {
- if (tty->flags & (1 << TTY_IO_ERROR))
+ if (tty_io_error(tty))
return -EIO;
}
@@ -2316,7 +2309,7 @@ static void mgslpc_set_termios(struct tty_struct *tty, struct ktermios *old_term
/* Handle transition away from B0 status */
if (!(old_termios->c_cflag & CBAUD) && C_BAUD(tty)) {
info->serial_signals |= SerialSignal_DTR;
- if (!C_CRTSCTS(tty) || !test_bit(TTY_THROTTLED, &tty->flags))
+ if (!C_CRTSCTS(tty) || !tty_throttled(tty))
info->serial_signals |= SerialSignal_RTS;
spin_lock_irqsave(&info->lock, flags);
set_signals(info);
@@ -2345,7 +2338,7 @@ static void mgslpc_close(struct tty_struct *tty, struct file * filp)
if (tty_port_close_start(port, tty, filp) == 0)
goto cleanup;
- if (port->flags & ASYNC_INITIALIZED)
+ if (tty_port_initialized(port))
mgslpc_wait_until_sent(tty, info->timeout);
mgslpc_flush_buffer(tty);
@@ -2378,7 +2371,7 @@ static void mgslpc_wait_until_sent(struct tty_struct *tty, int timeout)
if (mgslpc_paranoia_check(info, tty->name, "mgslpc_wait_until_sent"))
return;
- if (!(info->port.flags & ASYNC_INITIALIZED))
+ if (!tty_port_initialized(&info->port))
goto exit;
orig_jiffies = jiffies;
@@ -3969,7 +3962,7 @@ static netdev_tx_t hdlcdev_xmit(struct sk_buff *skb,
dev_kfree_skb(skb);
/* save start time for transmit timeout detection */
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
/* start hardware transmitter if necessary */
spin_lock_irqsave(&info->lock, flags);
@@ -4032,7 +4025,7 @@ static int hdlcdev_open(struct net_device *dev)
tty_kref_put(tty);
/* enable network layer transmit */
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
netif_start_queue(dev);
/* inform generic HDLC layer of current DCD status */
diff --git a/drivers/char/random.c b/drivers/char/random.c
index b583e5336630..0158d3bff7e5 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -260,6 +260,7 @@
#include <linux/irq.h>
#include <linux/syscalls.h>
#include <linux/completion.h>
+#include <linux/uuid.h>
#include <asm/processor.h>
#include <asm/uaccess.h>
@@ -1621,26 +1622,6 @@ SYSCALL_DEFINE3(getrandom, char __user *, buf, size_t, count,
return urandom_read(NULL, buf, count, NULL);
}
-/***************************************************************
- * Random UUID interface
- *
- * Used here for a Boot ID, but can be useful for other kernel
- * drivers.
- ***************************************************************/
-
-/*
- * Generate random UUID
- */
-void generate_random_uuid(unsigned char uuid_out[16])
-{
- get_random_bytes(uuid_out, 16);
- /* Set UUID version to 4 --- truly random generation */
- uuid_out[6] = (uuid_out[6] & 0x0F) | 0x40;
- /* Set the UUID variant to DCE */
- uuid_out[8] = (uuid_out[8] & 0x3F) | 0x80;
-}
-EXPORT_SYMBOL(generate_random_uuid);
-
/********************************************************************
*
* Sysctl interface
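The block removed above is not lost functionality: generate_random_uuid() moves out of random.c (hence the new <linux/uuid.h> include), keeping its prototype and its RFC 4122 version-4/variant bit handling. A hypothetical caller, assuming the declaration is now picked up from <linux/uuid.h>:

#include <linux/uuid.h>

/* Hypothetical example; the function name and context are illustrative only. */
static void example_make_boot_id(unsigned char uuid[16])
{
        generate_random_uuid(uuid);
        /* Byte 6 now carries the version-4 nibble, byte 8 the DCE variant bits. */
}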
diff --git a/drivers/char/xillybus/xillybus_of.c b/drivers/char/xillybus/xillybus_of.c
index 781865084dc1..78a492f5acfb 100644
--- a/drivers/char/xillybus/xillybus_of.c
+++ b/drivers/char/xillybus/xillybus_of.c
@@ -81,7 +81,6 @@ static int xilly_map_single_of(struct xilly_endpoint *ep,
{
dma_addr_t addr;
struct xilly_mapping *this;
- int rc;
this = kzalloc(sizeof(*this), GFP_KERNEL);
if (!this)
@@ -101,15 +100,7 @@ static int xilly_map_single_of(struct xilly_endpoint *ep,
*ret_dma_handle = addr;
- rc = devm_add_action(ep->dev, xilly_of_unmap, this);
-
- if (rc) {
- dma_unmap_single(ep->dev, addr, size, direction);
- kfree(this);
- return rc;
- }
-
- return 0;
+ return devm_add_action_or_reset(ep->dev, xilly_of_unmap, this);
}
static struct xilly_endpoint_hardware of_hw = {
diff --git a/drivers/char/xillybus/xillybus_pcie.c b/drivers/char/xillybus/xillybus_pcie.c
index 9418300214e9..dff2d1538164 100644
--- a/drivers/char/xillybus/xillybus_pcie.c
+++ b/drivers/char/xillybus/xillybus_pcie.c
@@ -98,7 +98,6 @@ static int xilly_map_single_pci(struct xilly_endpoint *ep,
int pci_direction;
dma_addr_t addr;
struct xilly_mapping *this;
- int rc;
this = kzalloc(sizeof(*this), GFP_KERNEL);
if (!this)
@@ -120,14 +119,7 @@ static int xilly_map_single_pci(struct xilly_endpoint *ep,
*ret_dma_handle = addr;
- rc = devm_add_action(ep->dev, xilly_pci_unmap, this);
- if (rc) {
- pci_unmap_single(ep->pdev, addr, size, pci_direction);
- kfree(this);
- return rc;
- }
-
- return 0;
+ return devm_add_action_or_reset(ep->dev, xilly_pci_unmap, this);
}
static struct xilly_endpoint_hardware pci_hw = {
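Both xillybus hunks collapse the manual "undo on devm_add_action() failure" path into devm_add_action_or_reset(), which runs the release callback itself when registration fails. Simplified to its core (a sketch of the helper's behaviour, not patch code):

static inline int devm_add_action_or_reset(struct device *dev,
                                           void (*action)(void *), void *data)
{
        int ret;

        ret = devm_add_action(dev, action, data);
        if (ret)
                action(data);   /* undo immediately on failure */

        return ret;
}

Because xilly_of_unmap()/xilly_pci_unmap() already perform the unmap and kfree(), the explicit error branches removed above become redundant.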
diff --git a/drivers/clk/Kconfig b/drivers/clk/Kconfig
index 16f7d33421d8..53ddba26578c 100644
--- a/drivers/clk/Kconfig
+++ b/drivers/clk/Kconfig
@@ -197,10 +197,20 @@ config COMMON_CLK_PXA
---help---
Support for the Marvell PXA SoC.
+config COMMON_CLK_PIC32
+ def_bool COMMON_CLK && MACH_PIC32
+
+config COMMON_CLK_OXNAS
+ bool "Clock driver for the OXNAS SoC Family"
+ select MFD_SYSCON
+ ---help---
+ Support for the OXNAS SoC Family clocks.
+
source "drivers/clk/bcm/Kconfig"
source "drivers/clk/hisilicon/Kconfig"
source "drivers/clk/mvebu/Kconfig"
source "drivers/clk/qcom/Kconfig"
+source "drivers/clk/renesas/Kconfig"
source "drivers/clk/samsung/Kconfig"
source "drivers/clk/tegra/Kconfig"
source "drivers/clk/ti/Kconfig"
diff --git a/drivers/clk/Makefile b/drivers/clk/Makefile
index 46869d696e4d..dcc5e698ff6d 100644
--- a/drivers/clk/Makefile
+++ b/drivers/clk/Makefile
@@ -33,6 +33,7 @@ obj-$(CONFIG_ARCH_MB86S7X) += clk-mb86s7x.o
obj-$(CONFIG_ARCH_MOXART) += clk-moxart.o
obj-$(CONFIG_ARCH_NOMADIK) += clk-nomadik.o
obj-$(CONFIG_ARCH_NSPIRE) += clk-nspire.o
+obj-$(CONFIG_COMMON_CLK_OXNAS) += clk-oxnas.o
obj-$(CONFIG_COMMON_CLK_PALMAS) += clk-palmas.o
obj-$(CONFIG_CLK_QORIQ) += clk-qoriq.o
obj-$(CONFIG_COMMON_CLK_RK808) += clk-rk808.o
@@ -51,6 +52,7 @@ obj-$(CONFIG_COMMON_CLK_WM831X) += clk-wm831x.o
obj-$(CONFIG_COMMON_CLK_XGENE) += clk-xgene.o
obj-$(CONFIG_COMMON_CLK_PWM) += clk-pwm.o
obj-$(CONFIG_COMMON_CLK_AT91) += at91/
+obj-$(CONFIG_ARCH_ARTPEC) += axis/
obj-y += bcm/
obj-$(CONFIG_ARCH_BERLIN) += berlin/
obj-$(CONFIG_ARCH_HISI) += hisilicon/
@@ -58,10 +60,11 @@ obj-$(CONFIG_ARCH_MXC) += imx/
obj-$(CONFIG_MACH_INGENIC) += ingenic/
obj-$(CONFIG_COMMON_CLK_KEYSTONE) += keystone/
obj-$(CONFIG_ARCH_MEDIATEK) += mediatek/
+obj-$(CONFIG_MACH_PIC32) += microchip/
ifeq ($(CONFIG_COMMON_CLK), y)
obj-$(CONFIG_ARCH_MMP) += mmp/
endif
-obj-$(CONFIG_PLAT_ORION) += mvebu/
+obj-y += mvebu/
obj-$(CONFIG_ARCH_MESON) += meson/
obj-$(CONFIG_ARCH_MXS) += mxs/
obj-$(CONFIG_MACH_PISTACHIO) += pistachio/
@@ -84,3 +87,4 @@ obj-$(CONFIG_X86) += x86/
obj-$(CONFIG_ARCH_ZX) += zte/
obj-$(CONFIG_ARCH_ZYNQ) += zynq/
obj-$(CONFIG_H8300) += h8300/
+obj-$(CONFIG_ARC_PLAT_AXS10X) += axs10x/
diff --git a/drivers/clk/at91/clk-h32mx.c b/drivers/clk/at91/clk-h32mx.c
index 819f5842fa66..8e20c8a76db7 100644
--- a/drivers/clk/at91/clk-h32mx.c
+++ b/drivers/clk/at91/clk-h32mx.c
@@ -114,7 +114,7 @@ static void __init of_sama5d4_clk_h32mx_setup(struct device_node *np)
h32mxclk->regmap = regmap;
clk = clk_register(NULL, &h32mxclk->hw);
- if (!clk) {
+ if (IS_ERR(clk)) {
kfree(h32mxclk);
return;
}
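The h32mx change matters because clk_register() reports failure with an ERR_PTR() value, never NULL, so the old "!clk" test let errors slip through without freeing h32mxclk. The conventional pattern is the one adopted here; a sketch reusing the names from the hunk:

clk = clk_register(NULL, &h32mxclk->hw);
if (IS_ERR(clk)) {
        pr_err("failed to register h32mx clock: %ld\n", PTR_ERR(clk));
        kfree(h32mxclk);
        return;
}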
diff --git a/drivers/clk/axis/Makefile b/drivers/clk/axis/Makefile
new file mode 100644
index 000000000000..628c9d3b9a02
--- /dev/null
+++ b/drivers/clk/axis/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_MACH_ARTPEC6) += clk-artpec6.o
diff --git a/drivers/clk/axis/clk-artpec6.c b/drivers/clk/axis/clk-artpec6.c
new file mode 100644
index 000000000000..ffc988b098e4
--- /dev/null
+++ b/drivers/clk/axis/clk-artpec6.c
@@ -0,0 +1,242 @@
+/*
+ * ARTPEC-6 clock initialization
+ *
+ * Copyright 2015-2016 Axis Communications AB.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/clk-provider.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <dt-bindings/clock/axis,artpec6-clkctrl.h>
+
+#define NUM_I2S_CLOCKS 2
+
+struct artpec6_clkctrl_drvdata {
+ struct clk *clk_table[ARTPEC6_CLK_NUMCLOCKS];
+ void __iomem *syscon_base;
+ struct clk_onecell_data clk_data;
+ spinlock_t i2scfg_lock;
+};
+
+static struct artpec6_clkctrl_drvdata *clkdata;
+
+static const char *const i2s_clk_names[NUM_I2S_CLOCKS] = {
+ "i2s0",
+ "i2s1",
+};
+
+static const int i2s_clk_indexes[NUM_I2S_CLOCKS] = {
+ ARTPEC6_CLK_I2S0_CLK,
+ ARTPEC6_CLK_I2S1_CLK,
+};
+
+static void of_artpec6_clkctrl_setup(struct device_node *np)
+{
+ int i;
+ const char *sys_refclk_name;
+ u32 pll_mode, pll_m, pll_n;
+ struct clk **clks;
+
+ /* Mandatory parent clock. */
+ i = of_property_match_string(np, "clock-names", "sys_refclk");
+ if (i < 0)
+ return;
+
+ sys_refclk_name = of_clk_get_parent_name(np, i);
+
+ clkdata = kzalloc(sizeof(*clkdata), GFP_KERNEL);
+ if (!clkdata)
+ return;
+
+ clks = clkdata->clk_table;
+
+ for (i = 0; i < ARTPEC6_CLK_NUMCLOCKS; ++i)
+ clks[i] = ERR_PTR(-EPROBE_DEFER);
+
+ clkdata->syscon_base = of_iomap(np, 0);
+ BUG_ON(clkdata->syscon_base == NULL);
+
+ /* Read PLL1 factors configured by boot strap pins. */
+ pll_mode = (readl(clkdata->syscon_base) >> 6) & 3;
+ switch (pll_mode) {
+ case 0: /* DDR3-2133 mode */
+ pll_m = 4;
+ pll_n = 85;
+ break;
+ case 1: /* DDR3-1866 mode */
+ pll_m = 6;
+ pll_n = 112;
+ break;
+ case 2: /* DDR3-1600 mode */
+ pll_m = 4;
+ pll_n = 64;
+ break;
+ case 3: /* DDR3-1333 mode */
+ pll_m = 8;
+ pll_n = 106;
+ break;
+ }
+
+ clks[ARTPEC6_CLK_CPU] =
+ clk_register_fixed_factor(NULL, "cpu", sys_refclk_name, 0, pll_n,
+ pll_m);
+ clks[ARTPEC6_CLK_CPU_PERIPH] =
+ clk_register_fixed_factor(NULL, "cpu_periph", "cpu", 0, 1, 2);
+
+ /* EPROBE_DEFER on the apb_clock is not handled in amba devices. */
+ clks[ARTPEC6_CLK_UART_PCLK] =
+ clk_register_fixed_factor(NULL, "uart_pclk", "cpu", 0, 1, 8);
+ clks[ARTPEC6_CLK_UART_REFCLK] =
+ clk_register_fixed_rate(NULL, "uart_ref", sys_refclk_name, 0,
+ 50000000);
+
+ clks[ARTPEC6_CLK_SPI_PCLK] =
+ clk_register_fixed_factor(NULL, "spi_pclk", "cpu", 0, 1, 8);
+ clks[ARTPEC6_CLK_SPI_SSPCLK] =
+ clk_register_fixed_rate(NULL, "spi_sspclk", sys_refclk_name, 0,
+ 50000000);
+
+ clks[ARTPEC6_CLK_DBG_PCLK] =
+ clk_register_fixed_factor(NULL, "dbg_pclk", "cpu", 0, 1, 8);
+
+ clkdata->clk_data.clks = clkdata->clk_table;
+ clkdata->clk_data.clk_num = ARTPEC6_CLK_NUMCLOCKS;
+
+ of_clk_add_provider(np, of_clk_src_onecell_get, &clkdata->clk_data);
+}
+
+CLK_OF_DECLARE(artpec6_clkctrl, "axis,artpec6-clkctrl",
+ of_artpec6_clkctrl_setup);
+
+static int artpec6_clkctrl_probe(struct platform_device *pdev)
+{
+ int propidx;
+ struct device_node *np = pdev->dev.of_node;
+ struct device *dev = &pdev->dev;
+ struct clk **clks = clkdata->clk_table;
+ const char *sys_refclk_name;
+ const char *i2s_refclk_name = NULL;
+ const char *frac_clk_name[2] = { NULL, NULL };
+ const char *i2s_mux_parents[2];
+ u32 muxreg;
+ int i;
+ int err = 0;
+
+ /* Mandatory parent clock. */
+ propidx = of_property_match_string(np, "clock-names", "sys_refclk");
+ if (propidx < 0)
+ return -EINVAL;
+
+ sys_refclk_name = of_clk_get_parent_name(np, propidx);
+
+ /* Find clock names of optional parent clocks. */
+ propidx = of_property_match_string(np, "clock-names", "i2s_refclk");
+ if (propidx >= 0)
+ i2s_refclk_name = of_clk_get_parent_name(np, propidx);
+
+ propidx = of_property_match_string(np, "clock-names", "frac_clk0");
+ if (propidx >= 0)
+ frac_clk_name[0] = of_clk_get_parent_name(np, propidx);
+ propidx = of_property_match_string(np, "clock-names", "frac_clk1");
+ if (propidx >= 0)
+ frac_clk_name[1] = of_clk_get_parent_name(np, propidx);
+
+ spin_lock_init(&clkdata->i2scfg_lock);
+
+ clks[ARTPEC6_CLK_NAND_CLKA] =
+ clk_register_fixed_factor(dev, "nand_clka", "cpu", 0, 1, 8);
+ clks[ARTPEC6_CLK_NAND_CLKB] =
+ clk_register_fixed_rate(dev, "nand_clkb", sys_refclk_name, 0,
+ 100000000);
+ clks[ARTPEC6_CLK_ETH_ACLK] =
+ clk_register_fixed_factor(dev, "eth_aclk", "cpu", 0, 1, 4);
+ clks[ARTPEC6_CLK_DMA_ACLK] =
+ clk_register_fixed_factor(dev, "dma_aclk", "cpu", 0, 1, 4);
+ clks[ARTPEC6_CLK_PTP_REF] =
+ clk_register_fixed_rate(dev, "ptp_ref", sys_refclk_name, 0,
+ 100000000);
+ clks[ARTPEC6_CLK_SD_PCLK] =
+ clk_register_fixed_rate(dev, "sd_pclk", sys_refclk_name, 0,
+ 100000000);
+ clks[ARTPEC6_CLK_SD_IMCLK] =
+ clk_register_fixed_rate(dev, "sd_imclk", sys_refclk_name, 0,
+ 100000000);
+ clks[ARTPEC6_CLK_I2S_HST] =
+ clk_register_fixed_factor(dev, "i2s_hst", "cpu", 0, 1, 8);
+
+ for (i = 0; i < NUM_I2S_CLOCKS; ++i) {
+ if (i2s_refclk_name && frac_clk_name[i]) {
+ i2s_mux_parents[0] = frac_clk_name[i];
+ i2s_mux_parents[1] = i2s_refclk_name;
+
+ clks[i2s_clk_indexes[i]] =
+ clk_register_mux(dev, i2s_clk_names[i],
+ i2s_mux_parents, 2,
+ CLK_SET_RATE_NO_REPARENT |
+ CLK_SET_RATE_PARENT,
+ clkdata->syscon_base + 0x14, i, 1,
+ 0, &clkdata->i2scfg_lock);
+ } else if (frac_clk_name[i]) {
+ /* Lock the mux for internal clock reference. */
+ muxreg = readl(clkdata->syscon_base + 0x14);
+ muxreg &= ~BIT(i);
+ writel(muxreg, clkdata->syscon_base + 0x14);
+ clks[i2s_clk_indexes[i]] =
+ clk_register_fixed_factor(dev, i2s_clk_names[i],
+ frac_clk_name[i], 0, 1,
+ 1);
+ } else if (i2s_refclk_name) {
+ /* Lock the mux for external clock reference. */
+ muxreg = readl(clkdata->syscon_base + 0x14);
+ muxreg |= BIT(i);
+ writel(muxreg, clkdata->syscon_base + 0x14);
+ clks[i2s_clk_indexes[i]] =
+ clk_register_fixed_factor(dev, i2s_clk_names[i],
+ i2s_refclk_name, 0, 1, 1);
+ }
+ }
+
+ clks[ARTPEC6_CLK_I2C] =
+ clk_register_fixed_rate(dev, "i2c", sys_refclk_name, 0, 100000000);
+
+ clks[ARTPEC6_CLK_SYS_TIMER] =
+ clk_register_fixed_rate(dev, "timer", sys_refclk_name, 0,
+ 100000000);
+ clks[ARTPEC6_CLK_FRACDIV_IN] =
+ clk_register_fixed_rate(dev, "fracdiv_in", sys_refclk_name, 0,
+ 600000000);
+
+ for (i = 0; i < ARTPEC6_CLK_NUMCLOCKS; ++i) {
+ if (IS_ERR(clks[i]) && PTR_ERR(clks[i]) != -EPROBE_DEFER) {
+ dev_err(dev,
+ "Failed to register clock at index %d err=%ld\n",
+ i, PTR_ERR(clks[i]));
+ err = PTR_ERR(clks[i]);
+ }
+ }
+
+ return err;
+}
+
+static const struct of_device_id artpec_clkctrl_of_match[] = {
+ { .compatible = "axis,artpec6-clkctrl" },
+ {}
+};
+
+static struct platform_driver artpec6_clkctrl_driver = {
+ .probe = artpec6_clkctrl_probe,
+ .driver = {
+ .name = "artpec6_clkctrl",
+ .of_match_table = artpec_clkctrl_of_match,
+ },
+};
+
+builtin_platform_driver(artpec6_clkctrl_driver);
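Every clock in the new ARTPEC-6 driver is either a fixed rate or a fixed factor of the board-provided sys_refclk, so rate propagation is pure arithmetic: clk_register_fixed_factor(dev, name, parent, flags, mult, div) yields rate(name) = rate(parent) * mult / div. A minimal sketch of how the "cpu" clock above is derived (the 50 MHz reference below is a hypothetical value, not taken from the driver):

#include <linux/clk-provider.h>

/* Sketch: with pll_mode 2 (DDR3-1600), pll_n = 64 and pll_m = 4, so a
 * hypothetical 50 MHz sys_refclk would give cpu = 50 MHz * 64 / 4 = 800 MHz,
 * and cpu_periph (cpu / 2) would then run at 400 MHz. */
static struct clk *example_register_cpu(const char *refclk_name,
                                        u32 pll_n, u32 pll_m)
{
        return clk_register_fixed_factor(NULL, "cpu", refclk_name, 0,
                                         pll_n, pll_m);
}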
diff --git a/drivers/clk/axs10x/Makefile b/drivers/clk/axs10x/Makefile
new file mode 100644
index 000000000000..01996b871b06
--- /dev/null
+++ b/drivers/clk/axs10x/Makefile
@@ -0,0 +1 @@
+obj-y += i2s_pll_clock.o
diff --git a/drivers/clk/axs10x/i2s_pll_clock.c b/drivers/clk/axs10x/i2s_pll_clock.c
new file mode 100644
index 000000000000..411310d29581
--- /dev/null
+++ b/drivers/clk/axs10x/i2s_pll_clock.c
@@ -0,0 +1,228 @@
+/*
+ * Synopsys AXS10X SDP I2S PLL clock driver
+ *
+ * Copyright (C) 2016 Synopsys
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#include <linux/platform_device.h>
+#include <linux/module.h>
+#include <linux/clk-provider.h>
+#include <linux/err.h>
+#include <linux/device.h>
+#include <linux/of_address.h>
+#include <linux/slab.h>
+#include <linux/of.h>
+
+/* PLL registers addresses */
+#define PLL_IDIV_REG 0x0
+#define PLL_FBDIV_REG 0x4
+#define PLL_ODIV0_REG 0x8
+#define PLL_ODIV1_REG 0xC
+
+struct i2s_pll_cfg {
+ unsigned int rate;
+ unsigned int idiv;
+ unsigned int fbdiv;
+ unsigned int odiv0;
+ unsigned int odiv1;
+};
+
+static const struct i2s_pll_cfg i2s_pll_cfg_27m[] = {
+ /* 27 MHz */
+ { 1024000, 0x104, 0x451, 0x10E38, 0x2000 },
+ { 1411200, 0x104, 0x596, 0x10D35, 0x2000 },
+ { 1536000, 0x208, 0xA28, 0x10B2C, 0x2000 },
+ { 2048000, 0x82, 0x451, 0x10E38, 0x2000 },
+ { 2822400, 0x82, 0x596, 0x10D35, 0x2000 },
+ { 3072000, 0x104, 0xA28, 0x10B2C, 0x2000 },
+ { 2116800, 0x82, 0x3CF, 0x10C30, 0x2000 },
+ { 2304000, 0x104, 0x79E, 0x10B2C, 0x2000 },
+ { 0, 0, 0, 0, 0 },
+};
+
+static const struct i2s_pll_cfg i2s_pll_cfg_28m[] = {
+ /* 28.224 MHz */
+ { 1024000, 0x82, 0x105, 0x107DF, 0x2000 },
+ { 1411200, 0x28A, 0x1, 0x10001, 0x2000 },
+ { 1536000, 0xA28, 0x187, 0x10042, 0x2000 },
+ { 2048000, 0x41, 0x105, 0x107DF, 0x2000 },
+ { 2822400, 0x145, 0x1, 0x10001, 0x2000 },
+ { 3072000, 0x514, 0x187, 0x10042, 0x2000 },
+ { 2116800, 0x514, 0x42, 0x10001, 0x2000 },
+ { 2304000, 0x619, 0x82, 0x10001, 0x2000 },
+ { 0, 0, 0, 0, 0 },
+};
+
+struct i2s_pll_clk {
+ void __iomem *base;
+ struct clk_hw hw;
+ struct device *dev;
+};
+
+static inline void i2s_pll_write(struct i2s_pll_clk *clk, unsigned int reg,
+ unsigned int val)
+{
+ writel_relaxed(val, clk->base + reg);
+}
+
+static inline unsigned int i2s_pll_read(struct i2s_pll_clk *clk,
+ unsigned int reg)
+{
+ return readl_relaxed(clk->base + reg);
+}
+
+static inline struct i2s_pll_clk *to_i2s_pll_clk(struct clk_hw *hw)
+{
+ return container_of(hw, struct i2s_pll_clk, hw);
+}
+
+static inline unsigned int i2s_pll_get_value(unsigned int val)
+{
+ return (val & 0x3F) + ((val >> 6) & 0x3F);
+}
+
+static const struct i2s_pll_cfg *i2s_pll_get_cfg(unsigned long prate)
+{
+ switch (prate) {
+ case 27000000:
+ return i2s_pll_cfg_27m;
+ case 28224000:
+ return i2s_pll_cfg_28m;
+ default:
+ return NULL;
+ }
+}
+
+static unsigned long i2s_pll_recalc_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+{
+ struct i2s_pll_clk *clk = to_i2s_pll_clk(hw);
+ unsigned int idiv, fbdiv, odiv;
+
+ idiv = i2s_pll_get_value(i2s_pll_read(clk, PLL_IDIV_REG));
+ fbdiv = i2s_pll_get_value(i2s_pll_read(clk, PLL_FBDIV_REG));
+ odiv = i2s_pll_get_value(i2s_pll_read(clk, PLL_ODIV0_REG));
+
+ return ((parent_rate / idiv) * fbdiv) / odiv;
+}
+
+static long i2s_pll_round_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long *prate)
+{
+ struct i2s_pll_clk *clk = to_i2s_pll_clk(hw);
+ const struct i2s_pll_cfg *pll_cfg = i2s_pll_get_cfg(*prate);
+ int i;
+
+ if (!pll_cfg) {
+ dev_err(clk->dev, "invalid parent rate=%ld\n", *prate);
+ return -EINVAL;
+ }
+
+ for (i = 0; pll_cfg[i].rate != 0; i++)
+ if (pll_cfg[i].rate == rate)
+ return rate;
+
+ return -EINVAL;
+}
+
+static int i2s_pll_set_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long parent_rate)
+{
+ struct i2s_pll_clk *clk = to_i2s_pll_clk(hw);
+ const struct i2s_pll_cfg *pll_cfg = i2s_pll_get_cfg(parent_rate);
+ int i;
+
+ if (!pll_cfg) {
+ dev_err(clk->dev, "invalid parent rate=%ld\n", parent_rate);
+ return -EINVAL;
+ }
+
+ for (i = 0; pll_cfg[i].rate != 0; i++) {
+ if (pll_cfg[i].rate == rate) {
+ i2s_pll_write(clk, PLL_IDIV_REG, pll_cfg[i].idiv);
+ i2s_pll_write(clk, PLL_FBDIV_REG, pll_cfg[i].fbdiv);
+ i2s_pll_write(clk, PLL_ODIV0_REG, pll_cfg[i].odiv0);
+ i2s_pll_write(clk, PLL_ODIV1_REG, pll_cfg[i].odiv1);
+ return 0;
+ }
+ }
+
+ dev_err(clk->dev, "invalid rate=%ld, parent_rate=%ld\n", rate,
+ parent_rate);
+ return -EINVAL;
+}
+
+static const struct clk_ops i2s_pll_ops = {
+ .recalc_rate = i2s_pll_recalc_rate,
+ .round_rate = i2s_pll_round_rate,
+ .set_rate = i2s_pll_set_rate,
+};
+
+static int i2s_pll_clk_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct device_node *node = dev->of_node;
+ const char *clk_name;
+ const char *parent_name;
+ struct clk *clk;
+ struct i2s_pll_clk *pll_clk;
+ struct clk_init_data init;
+ struct resource *mem;
+
+ pll_clk = devm_kzalloc(dev, sizeof(*pll_clk), GFP_KERNEL);
+ if (!pll_clk)
+ return -ENOMEM;
+
+ mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ pll_clk->base = devm_ioremap_resource(dev, mem);
+ if (IS_ERR(pll_clk->base))
+ return PTR_ERR(pll_clk->base);
+
+ clk_name = node->name;
+ init.name = clk_name;
+ init.ops = &i2s_pll_ops;
+ parent_name = of_clk_get_parent_name(node, 0);
+ init.parent_names = &parent_name;
+ init.num_parents = 1;
+ pll_clk->hw.init = &init;
+ pll_clk->dev = dev;
+
+ clk = devm_clk_register(dev, &pll_clk->hw);
+ if (IS_ERR(clk)) {
+ dev_err(dev, "failed to register %s clock (%ld)\n",
+ clk_name, PTR_ERR(clk));
+ return PTR_ERR(clk);
+ }
+
+ return of_clk_add_provider(node, of_clk_src_simple_get, clk);
+}
+
+static int i2s_pll_clk_remove(struct platform_device *pdev)
+{
+ of_clk_del_provider(pdev->dev.of_node);
+ return 0;
+}
+
+static const struct of_device_id i2s_pll_clk_id[] = {
+ { .compatible = "snps,axs10x-i2s-pll-clock", },
+ { },
+};
+MODULE_DEVICE_TABLE(of, i2s_pll_clk_id);
+
+static struct platform_driver i2s_pll_clk_driver = {
+ .driver = {
+ .name = "axs10x-i2s-pll-clock",
+ .of_match_table = i2s_pll_clk_id,
+ },
+ .probe = i2s_pll_clk_probe,
+ .remove = i2s_pll_clk_remove,
+};
+module_platform_driver(i2s_pll_clk_driver);
+
+MODULE_AUTHOR("Jose Abreu <joabreu@synopsys.com>");
+MODULE_DESCRIPTION("Synopsys AXS10X SDP I2S PLL Clock Driver");
+MODULE_LICENSE("GPL v2");
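The AXS10X PLL driver only accepts the discrete rates listed in its per-parent tables (27 MHz and 28.224 MHz references); round_rate() returns -EINVAL for anything else, and the output frequency follows (parent / idiv) * fbdiv / odiv0, with each divider stored as two 6-bit halves that i2s_pll_get_value() sums. A hypothetical consumer therefore has to request an exact table entry:

#include <linux/clk.h>
#include <linux/device.h>
#include <linux/err.h>

/* Hypothetical consumer; only rates present in the cfg tables will succeed. */
static int example_set_i2s_rate(struct device *dev)
{
        struct clk *pll = devm_clk_get(dev, NULL);      /* assumes DT wiring */

        if (IS_ERR(pll))
                return PTR_ERR(pll);

        /* 2822400 Hz appears in both the 27 MHz and 28.224 MHz tables. */
        return clk_set_rate(pll, 2822400);
}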
diff --git a/drivers/clk/bcm/clk-bcm2835.c b/drivers/clk/bcm/clk-bcm2835.c
index c74ed3fd496d..7a7970865c2d 100644
--- a/drivers/clk/bcm/clk-bcm2835.c
+++ b/drivers/clk/bcm/clk-bcm2835.c
@@ -12,9 +12,6 @@
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
*/
/**
@@ -40,6 +37,7 @@
#include <linux/clk-provider.h>
#include <linux/clkdev.h>
#include <linux/clk/bcm2835.h>
+#include <linux/debugfs.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/platform_device.h>
@@ -51,6 +49,7 @@
#define CM_GNRICCTL 0x000
#define CM_GNRICDIV 0x004
# define CM_DIV_FRAC_BITS 12
+# define CM_DIV_FRAC_MASK GENMASK(CM_DIV_FRAC_BITS - 1, 0)
#define CM_VPUCTL 0x008
#define CM_VPUDIV 0x00c
@@ -118,6 +117,8 @@
#define CM_SDCCTL 0x1a8
#define CM_SDCDIV 0x1ac
#define CM_ARMCTL 0x1b0
+#define CM_AVEOCTL 0x1b8
+#define CM_AVEODIV 0x1bc
#define CM_EMMCCTL 0x1c0
#define CM_EMMCDIV 0x1c4
@@ -128,6 +129,7 @@
# define CM_GATE BIT(CM_GATE_BIT)
# define CM_BUSY BIT(7)
# define CM_BUSYD BIT(8)
+# define CM_FRAC BIT(9)
# define CM_SRC_SHIFT 0
# define CM_SRC_BITS 4
# define CM_SRC_MASK 0xf
@@ -297,11 +299,11 @@
struct bcm2835_cprman {
struct device *dev;
void __iomem *regs;
- spinlock_t regs_lock;
+ spinlock_t regs_lock; /* spinlock for all clocks */
const char *osc_name;
struct clk_onecell_data onecell;
- struct clk *clks[BCM2835_CLOCK_COUNT];
+ struct clk *clks[];
};
static inline void cprman_write(struct bcm2835_cprman *cprman, u32 reg, u32 val)
@@ -314,6 +316,27 @@ static inline u32 cprman_read(struct bcm2835_cprman *cprman, u32 reg)
return readl(cprman->regs + reg);
}
+static int bcm2835_debugfs_regset(struct bcm2835_cprman *cprman, u32 base,
+ struct debugfs_reg32 *regs, size_t nregs,
+ struct dentry *dentry)
+{
+ struct dentry *regdump;
+ struct debugfs_regset32 *regset;
+
+ regset = devm_kzalloc(cprman->dev, sizeof(*regset), GFP_KERNEL);
+ if (!regset)
+ return -ENOMEM;
+
+ regset->regs = regs;
+ regset->nregs = nregs;
+ regset->base = cprman->regs + base;
+
+ regdump = debugfs_create_regset32("regdump", S_IRUGO, dentry,
+ regset);
+
+ return regdump ? 0 : -ENOMEM;
+}
+
/*
* These are fixed clocks. They're probably not all root clocks and it may
* be possible to turn them on and off but until this is mapped out better
@@ -377,132 +400,27 @@ struct bcm2835_pll_ana_bits {
static const struct bcm2835_pll_ana_bits bcm2835_ana_default = {
.mask0 = 0,
.set0 = 0,
- .mask1 = ~(A2W_PLL_KI_MASK | A2W_PLL_KP_MASK),
+ .mask1 = (u32)~(A2W_PLL_KI_MASK | A2W_PLL_KP_MASK),
.set1 = (2 << A2W_PLL_KI_SHIFT) | (8 << A2W_PLL_KP_SHIFT),
- .mask3 = ~A2W_PLL_KA_MASK,
+ .mask3 = (u32)~A2W_PLL_KA_MASK,
.set3 = (2 << A2W_PLL_KA_SHIFT),
.fb_prediv_mask = BIT(14),
};
static const struct bcm2835_pll_ana_bits bcm2835_ana_pllh = {
- .mask0 = ~(A2W_PLLH_KA_MASK | A2W_PLLH_KI_LOW_MASK),
+ .mask0 = (u32)~(A2W_PLLH_KA_MASK | A2W_PLLH_KI_LOW_MASK),
.set0 = (2 << A2W_PLLH_KA_SHIFT) | (2 << A2W_PLLH_KI_LOW_SHIFT),
- .mask1 = ~(A2W_PLLH_KI_HIGH_MASK | A2W_PLLH_KP_MASK),
+ .mask1 = (u32)~(A2W_PLLH_KI_HIGH_MASK | A2W_PLLH_KP_MASK),
.set1 = (6 << A2W_PLLH_KP_SHIFT),
.mask3 = 0,
.set3 = 0,
.fb_prediv_mask = BIT(11),
};
-/*
- * PLLA is the auxiliary PLL, used to drive the CCP2 (Compact Camera
- * Port 2) transmitter clock.
- *
- * It is in the PX LDO power domain, which is on when the AUDIO domain
- * is on.
- */
-static const struct bcm2835_pll_data bcm2835_plla_data = {
- .name = "plla",
- .cm_ctrl_reg = CM_PLLA,
- .a2w_ctrl_reg = A2W_PLLA_CTRL,
- .frac_reg = A2W_PLLA_FRAC,
- .ana_reg_base = A2W_PLLA_ANA0,
- .reference_enable_mask = A2W_XOSC_CTRL_PLLA_ENABLE,
- .lock_mask = CM_LOCK_FLOCKA,
-
- .ana = &bcm2835_ana_default,
-
- .min_rate = 600000000u,
- .max_rate = 2400000000u,
- .max_fb_rate = BCM2835_MAX_FB_RATE,
-};
-
-/* PLLB is used for the ARM's clock. */
-static const struct bcm2835_pll_data bcm2835_pllb_data = {
- .name = "pllb",
- .cm_ctrl_reg = CM_PLLB,
- .a2w_ctrl_reg = A2W_PLLB_CTRL,
- .frac_reg = A2W_PLLB_FRAC,
- .ana_reg_base = A2W_PLLB_ANA0,
- .reference_enable_mask = A2W_XOSC_CTRL_PLLB_ENABLE,
- .lock_mask = CM_LOCK_FLOCKB,
-
- .ana = &bcm2835_ana_default,
-
- .min_rate = 600000000u,
- .max_rate = 3000000000u,
- .max_fb_rate = BCM2835_MAX_FB_RATE,
-};
-
-/*
- * PLLC is the core PLL, used to drive the core VPU clock.
- *
- * It is in the PX LDO power domain, which is on when the AUDIO domain
- * is on.
-*/
-static const struct bcm2835_pll_data bcm2835_pllc_data = {
- .name = "pllc",
- .cm_ctrl_reg = CM_PLLC,
- .a2w_ctrl_reg = A2W_PLLC_CTRL,
- .frac_reg = A2W_PLLC_FRAC,
- .ana_reg_base = A2W_PLLC_ANA0,
- .reference_enable_mask = A2W_XOSC_CTRL_PLLC_ENABLE,
- .lock_mask = CM_LOCK_FLOCKC,
-
- .ana = &bcm2835_ana_default,
-
- .min_rate = 600000000u,
- .max_rate = 3000000000u,
- .max_fb_rate = BCM2835_MAX_FB_RATE,
-};
-
-/*
- * PLLD is the display PLL, used to drive DSI display panels.
- *
- * It is in the PX LDO power domain, which is on when the AUDIO domain
- * is on.
- */
-static const struct bcm2835_pll_data bcm2835_plld_data = {
- .name = "plld",
- .cm_ctrl_reg = CM_PLLD,
- .a2w_ctrl_reg = A2W_PLLD_CTRL,
- .frac_reg = A2W_PLLD_FRAC,
- .ana_reg_base = A2W_PLLD_ANA0,
- .reference_enable_mask = A2W_XOSC_CTRL_DDR_ENABLE,
- .lock_mask = CM_LOCK_FLOCKD,
-
- .ana = &bcm2835_ana_default,
-
- .min_rate = 600000000u,
- .max_rate = 2400000000u,
- .max_fb_rate = BCM2835_MAX_FB_RATE,
-};
-
-/*
- * PLLH is used to supply the pixel clock or the AUX clock for the TV
- * encoder.
- *
- * It is in the HDMI power domain.
- */
-static const struct bcm2835_pll_data bcm2835_pllh_data = {
- "pllh",
- .cm_ctrl_reg = CM_PLLH,
- .a2w_ctrl_reg = A2W_PLLH_CTRL,
- .frac_reg = A2W_PLLH_FRAC,
- .ana_reg_base = A2W_PLLH_ANA0,
- .reference_enable_mask = A2W_XOSC_CTRL_PLLC_ENABLE,
- .lock_mask = CM_LOCK_FLOCKH,
-
- .ana = &bcm2835_ana_pllh,
-
- .min_rate = 600000000u,
- .max_rate = 3000000000u,
- .max_fb_rate = BCM2835_MAX_FB_RATE,
-};
-
struct bcm2835_pll_divider_data {
const char *name;
- const struct bcm2835_pll_data *source_pll;
+ const char *source_pll;
+
u32 cm_reg;
u32 a2w_reg;
@@ -511,124 +429,6 @@ struct bcm2835_pll_divider_data {
u32 fixed_divider;
};
-static const struct bcm2835_pll_divider_data bcm2835_plla_core_data = {
- .name = "plla_core",
- .source_pll = &bcm2835_plla_data,
- .cm_reg = CM_PLLA,
- .a2w_reg = A2W_PLLA_CORE,
- .load_mask = CM_PLLA_LOADCORE,
- .hold_mask = CM_PLLA_HOLDCORE,
- .fixed_divider = 1,
-};
-
-static const struct bcm2835_pll_divider_data bcm2835_plla_per_data = {
- .name = "plla_per",
- .source_pll = &bcm2835_plla_data,
- .cm_reg = CM_PLLA,
- .a2w_reg = A2W_PLLA_PER,
- .load_mask = CM_PLLA_LOADPER,
- .hold_mask = CM_PLLA_HOLDPER,
- .fixed_divider = 1,
-};
-
-static const struct bcm2835_pll_divider_data bcm2835_pllb_arm_data = {
- .name = "pllb_arm",
- .source_pll = &bcm2835_pllb_data,
- .cm_reg = CM_PLLB,
- .a2w_reg = A2W_PLLB_ARM,
- .load_mask = CM_PLLB_LOADARM,
- .hold_mask = CM_PLLB_HOLDARM,
- .fixed_divider = 1,
-};
-
-static const struct bcm2835_pll_divider_data bcm2835_pllc_core0_data = {
- .name = "pllc_core0",
- .source_pll = &bcm2835_pllc_data,
- .cm_reg = CM_PLLC,
- .a2w_reg = A2W_PLLC_CORE0,
- .load_mask = CM_PLLC_LOADCORE0,
- .hold_mask = CM_PLLC_HOLDCORE0,
- .fixed_divider = 1,
-};
-
-static const struct bcm2835_pll_divider_data bcm2835_pllc_core1_data = {
- .name = "pllc_core1", .source_pll = &bcm2835_pllc_data,
- .cm_reg = CM_PLLC, A2W_PLLC_CORE1,
- .load_mask = CM_PLLC_LOADCORE1,
- .hold_mask = CM_PLLC_HOLDCORE1,
- .fixed_divider = 1,
-};
-
-static const struct bcm2835_pll_divider_data bcm2835_pllc_core2_data = {
- .name = "pllc_core2",
- .source_pll = &bcm2835_pllc_data,
- .cm_reg = CM_PLLC,
- .a2w_reg = A2W_PLLC_CORE2,
- .load_mask = CM_PLLC_LOADCORE2,
- .hold_mask = CM_PLLC_HOLDCORE2,
- .fixed_divider = 1,
-};
-
-static const struct bcm2835_pll_divider_data bcm2835_pllc_per_data = {
- .name = "pllc_per",
- .source_pll = &bcm2835_pllc_data,
- .cm_reg = CM_PLLC,
- .a2w_reg = A2W_PLLC_PER,
- .load_mask = CM_PLLC_LOADPER,
- .hold_mask = CM_PLLC_HOLDPER,
- .fixed_divider = 1,
-};
-
-static const struct bcm2835_pll_divider_data bcm2835_plld_core_data = {
- .name = "plld_core",
- .source_pll = &bcm2835_plld_data,
- .cm_reg = CM_PLLD,
- .a2w_reg = A2W_PLLD_CORE,
- .load_mask = CM_PLLD_LOADCORE,
- .hold_mask = CM_PLLD_HOLDCORE,
- .fixed_divider = 1,
-};
-
-static const struct bcm2835_pll_divider_data bcm2835_plld_per_data = {
- .name = "plld_per",
- .source_pll = &bcm2835_plld_data,
- .cm_reg = CM_PLLD,
- .a2w_reg = A2W_PLLD_PER,
- .load_mask = CM_PLLD_LOADPER,
- .hold_mask = CM_PLLD_HOLDPER,
- .fixed_divider = 1,
-};
-
-static const struct bcm2835_pll_divider_data bcm2835_pllh_rcal_data = {
- .name = "pllh_rcal",
- .source_pll = &bcm2835_pllh_data,
- .cm_reg = CM_PLLH,
- .a2w_reg = A2W_PLLH_RCAL,
- .load_mask = CM_PLLH_LOADRCAL,
- .hold_mask = 0,
- .fixed_divider = 10,
-};
-
-static const struct bcm2835_pll_divider_data bcm2835_pllh_aux_data = {
- .name = "pllh_aux",
- .source_pll = &bcm2835_pllh_data,
- .cm_reg = CM_PLLH,
- .a2w_reg = A2W_PLLH_AUX,
- .load_mask = CM_PLLH_LOADAUX,
- .hold_mask = 0,
- .fixed_divider = 10,
-};
-
-static const struct bcm2835_pll_divider_data bcm2835_pllh_pix_data = {
- .name = "pllh_pix",
- .source_pll = &bcm2835_pllh_data,
- .cm_reg = CM_PLLH,
- .a2w_reg = A2W_PLLH_PIX,
- .load_mask = CM_PLLH_LOADPIX,
- .hold_mask = 0,
- .fixed_divider = 10,
-};
-
struct bcm2835_clock_data {
const char *name;
@@ -644,187 +444,14 @@ struct bcm2835_clock_data {
u32 frac_bits;
bool is_vpu_clock;
+ bool is_mash_clock;
};
-static const char *const bcm2835_clock_per_parents[] = {
- "gnd",
- "xosc",
- "testdebug0",
- "testdebug1",
- "plla_per",
- "pllc_per",
- "plld_per",
- "pllh_aux",
-};
-
-static const char *const bcm2835_clock_vpu_parents[] = {
- "gnd",
- "xosc",
- "testdebug0",
- "testdebug1",
- "plla_core",
- "pllc_core0",
- "plld_core",
- "pllh_aux",
- "pllc_core1",
- "pllc_core2",
-};
-
-static const char *const bcm2835_clock_osc_parents[] = {
- "gnd",
- "xosc",
- "testdebug0",
- "testdebug1"
-};
-
-/*
- * Used for a 1Mhz clock for the system clocksource, and also used by
- * the watchdog timer and the camera pulse generator.
- */
-static const struct bcm2835_clock_data bcm2835_clock_timer_data = {
- .name = "timer",
- .num_mux_parents = ARRAY_SIZE(bcm2835_clock_osc_parents),
- .parents = bcm2835_clock_osc_parents,
- .ctl_reg = CM_TIMERCTL,
- .div_reg = CM_TIMERDIV,
- .int_bits = 6,
- .frac_bits = 12,
-};
-
-/* One Time Programmable Memory clock. Maximum 10Mhz. */
-static const struct bcm2835_clock_data bcm2835_clock_otp_data = {
- .name = "otp",
- .num_mux_parents = ARRAY_SIZE(bcm2835_clock_osc_parents),
- .parents = bcm2835_clock_osc_parents,
- .ctl_reg = CM_OTPCTL,
- .div_reg = CM_OTPDIV,
- .int_bits = 4,
- .frac_bits = 0,
-};
-
-/*
- * VPU clock. This doesn't have an enable bit, since it drives the
- * bus for everything else, and is special so it doesn't need to be
- * gated for rate changes. It is also known as "clk_audio" in various
- * hardware documentation.
- */
-static const struct bcm2835_clock_data bcm2835_clock_vpu_data = {
- .name = "vpu",
- .num_mux_parents = ARRAY_SIZE(bcm2835_clock_vpu_parents),
- .parents = bcm2835_clock_vpu_parents,
- .ctl_reg = CM_VPUCTL,
- .div_reg = CM_VPUDIV,
- .int_bits = 12,
- .frac_bits = 8,
- .is_vpu_clock = true,
-};
-
-static const struct bcm2835_clock_data bcm2835_clock_v3d_data = {
- .name = "v3d",
- .num_mux_parents = ARRAY_SIZE(bcm2835_clock_vpu_parents),
- .parents = bcm2835_clock_vpu_parents,
- .ctl_reg = CM_V3DCTL,
- .div_reg = CM_V3DDIV,
- .int_bits = 4,
- .frac_bits = 8,
-};
-
-static const struct bcm2835_clock_data bcm2835_clock_isp_data = {
- .name = "isp",
- .num_mux_parents = ARRAY_SIZE(bcm2835_clock_vpu_parents),
- .parents = bcm2835_clock_vpu_parents,
- .ctl_reg = CM_ISPCTL,
- .div_reg = CM_ISPDIV,
- .int_bits = 4,
- .frac_bits = 8,
-};
-
-static const struct bcm2835_clock_data bcm2835_clock_h264_data = {
- .name = "h264",
- .num_mux_parents = ARRAY_SIZE(bcm2835_clock_vpu_parents),
- .parents = bcm2835_clock_vpu_parents,
- .ctl_reg = CM_H264CTL,
- .div_reg = CM_H264DIV,
- .int_bits = 4,
- .frac_bits = 8,
-};
-
-/* TV encoder clock. Only operating frequency is 108Mhz. */
-static const struct bcm2835_clock_data bcm2835_clock_vec_data = {
- .name = "vec",
- .num_mux_parents = ARRAY_SIZE(bcm2835_clock_per_parents),
- .parents = bcm2835_clock_per_parents,
- .ctl_reg = CM_VECCTL,
- .div_reg = CM_VECDIV,
- .int_bits = 4,
- .frac_bits = 0,
-};
-
-static const struct bcm2835_clock_data bcm2835_clock_uart_data = {
- .name = "uart",
- .num_mux_parents = ARRAY_SIZE(bcm2835_clock_per_parents),
- .parents = bcm2835_clock_per_parents,
- .ctl_reg = CM_UARTCTL,
- .div_reg = CM_UARTDIV,
- .int_bits = 10,
- .frac_bits = 12,
-};
-
-/* HDMI state machine */
-static const struct bcm2835_clock_data bcm2835_clock_hsm_data = {
- .name = "hsm",
- .num_mux_parents = ARRAY_SIZE(bcm2835_clock_per_parents),
- .parents = bcm2835_clock_per_parents,
- .ctl_reg = CM_HSMCTL,
- .div_reg = CM_HSMDIV,
- .int_bits = 4,
- .frac_bits = 8,
-};
-
-/*
- * Secondary SDRAM clock. Used for low-voltage modes when the PLL in
- * the SDRAM controller can't be used.
- */
-static const struct bcm2835_clock_data bcm2835_clock_sdram_data = {
- .name = "sdram",
- .num_mux_parents = ARRAY_SIZE(bcm2835_clock_vpu_parents),
- .parents = bcm2835_clock_vpu_parents,
- .ctl_reg = CM_SDCCTL,
- .div_reg = CM_SDCDIV,
- .int_bits = 6,
- .frac_bits = 0,
-};
-
-/* Clock for the temperature sensor. Generally run at 2Mhz, max 5Mhz. */
-static const struct bcm2835_clock_data bcm2835_clock_tsens_data = {
- .name = "tsens",
- .num_mux_parents = ARRAY_SIZE(bcm2835_clock_osc_parents),
- .parents = bcm2835_clock_osc_parents,
- .ctl_reg = CM_TSENSCTL,
- .div_reg = CM_TSENSDIV,
- .int_bits = 5,
- .frac_bits = 0,
-};
-
-/* Arasan EMMC clock */
-static const struct bcm2835_clock_data bcm2835_clock_emmc_data = {
- .name = "emmc",
- .num_mux_parents = ARRAY_SIZE(bcm2835_clock_per_parents),
- .parents = bcm2835_clock_per_parents,
- .ctl_reg = CM_EMMCCTL,
- .div_reg = CM_EMMCDIV,
- .int_bits = 4,
- .frac_bits = 8,
-};
+struct bcm2835_gate_data {
+ const char *name;
+ const char *parent;
-static const struct bcm2835_clock_data bcm2835_clock_pwm_data = {
- .name = "pwm",
- .num_mux_parents = ARRAY_SIZE(bcm2835_clock_per_parents),
- .parents = bcm2835_clock_per_parents,
- .ctl_reg = CM_PWMCTL,
- .div_reg = CM_PWMDIV,
- .int_bits = 12,
- .frac_bits = 12,
+ u32 ctl_reg;
};
struct bcm2835_pll {
@@ -910,8 +537,14 @@ static void bcm2835_pll_off(struct clk_hw *hw)
struct bcm2835_cprman *cprman = pll->cprman;
const struct bcm2835_pll_data *data = pll->data;
- cprman_write(cprman, data->cm_ctrl_reg, CM_PLL_ANARST);
- cprman_write(cprman, data->a2w_ctrl_reg, A2W_PLL_CTRL_PWRDN);
+ spin_lock(&cprman->regs_lock);
+ cprman_write(cprman, data->cm_ctrl_reg,
+ cprman_read(cprman, data->cm_ctrl_reg) |
+ CM_PLL_ANARST);
+ cprman_write(cprman, data->a2w_ctrl_reg,
+ cprman_read(cprman, data->a2w_ctrl_reg) |
+ A2W_PLL_CTRL_PWRDN);
+ spin_unlock(&cprman->regs_lock);
}
static int bcm2835_pll_on(struct clk_hw *hw)
@@ -921,6 +554,10 @@ static int bcm2835_pll_on(struct clk_hw *hw)
const struct bcm2835_pll_data *data = pll->data;
ktime_t timeout;
+ cprman_write(cprman, data->a2w_ctrl_reg,
+ cprman_read(cprman, data->a2w_ctrl_reg) &
+ ~A2W_PLL_CTRL_PWRDN);
+
/* Take the PLL out of reset. */
cprman_write(cprman, data->cm_ctrl_reg,
cprman_read(cprman, data->cm_ctrl_reg) & ~CM_PLL_ANARST);
@@ -1030,6 +667,36 @@ static int bcm2835_pll_set_rate(struct clk_hw *hw,
return 0;
}
+static int bcm2835_pll_debug_init(struct clk_hw *hw,
+ struct dentry *dentry)
+{
+ struct bcm2835_pll *pll = container_of(hw, struct bcm2835_pll, hw);
+ struct bcm2835_cprman *cprman = pll->cprman;
+ const struct bcm2835_pll_data *data = pll->data;
+ struct debugfs_reg32 *regs;
+
+ regs = devm_kzalloc(cprman->dev, 7 * sizeof(*regs), GFP_KERNEL);
+ if (!regs)
+ return -ENOMEM;
+
+ regs[0].name = "cm_ctrl";
+ regs[0].offset = data->cm_ctrl_reg;
+ regs[1].name = "a2w_ctrl";
+ regs[1].offset = data->a2w_ctrl_reg;
+ regs[2].name = "frac";
+ regs[2].offset = data->frac_reg;
+ regs[3].name = "ana0";
+ regs[3].offset = data->ana_reg_base + 0 * 4;
+ regs[4].name = "ana1";
+ regs[4].offset = data->ana_reg_base + 1 * 4;
+ regs[5].name = "ana2";
+ regs[5].offset = data->ana_reg_base + 2 * 4;
+ regs[6].name = "ana3";
+ regs[6].offset = data->ana_reg_base + 3 * 4;
+
+ return bcm2835_debugfs_regset(cprman, 0, regs, 7, dentry);
+}
+
static const struct clk_ops bcm2835_pll_clk_ops = {
.is_prepared = bcm2835_pll_is_on,
.prepare = bcm2835_pll_on,
@@ -1037,6 +704,7 @@ static const struct clk_ops bcm2835_pll_clk_ops = {
.recalc_rate = bcm2835_pll_get_rate,
.set_rate = bcm2835_pll_set_rate,
.round_rate = bcm2835_pll_round_rate,
+ .debug_init = bcm2835_pll_debug_init,
};
struct bcm2835_pll_divider {
@@ -1079,10 +747,12 @@ static void bcm2835_pll_divider_off(struct clk_hw *hw)
struct bcm2835_cprman *cprman = divider->cprman;
const struct bcm2835_pll_divider_data *data = divider->data;
+ spin_lock(&cprman->regs_lock);
cprman_write(cprman, data->cm_reg,
(cprman_read(cprman, data->cm_reg) &
~data->load_mask) | data->hold_mask);
cprman_write(cprman, data->a2w_reg, A2W_PLL_CHANNEL_DISABLE);
+ spin_unlock(&cprman->regs_lock);
}
static int bcm2835_pll_divider_on(struct clk_hw *hw)
@@ -1091,12 +761,14 @@ static int bcm2835_pll_divider_on(struct clk_hw *hw)
struct bcm2835_cprman *cprman = divider->cprman;
const struct bcm2835_pll_divider_data *data = divider->data;
+ spin_lock(&cprman->regs_lock);
cprman_write(cprman, data->a2w_reg,
cprman_read(cprman, data->a2w_reg) &
~A2W_PLL_CHANNEL_DISABLE);
cprman_write(cprman, data->cm_reg,
cprman_read(cprman, data->cm_reg) & ~data->hold_mask);
+ spin_unlock(&cprman->regs_lock);
return 0;
}
@@ -1124,6 +796,26 @@ static int bcm2835_pll_divider_set_rate(struct clk_hw *hw,
return 0;
}
+static int bcm2835_pll_divider_debug_init(struct clk_hw *hw,
+ struct dentry *dentry)
+{
+ struct bcm2835_pll_divider *divider = bcm2835_pll_divider_from_hw(hw);
+ struct bcm2835_cprman *cprman = divider->cprman;
+ const struct bcm2835_pll_divider_data *data = divider->data;
+ struct debugfs_reg32 *regs;
+
+ regs = devm_kzalloc(cprman->dev, 7 * sizeof(*regs), GFP_KERNEL);
+ if (!regs)
+ return -ENOMEM;
+
+ regs[0].name = "cm";
+ regs[0].offset = data->cm_reg;
+ regs[1].name = "a2w";
+ regs[1].offset = data->a2w_reg;
+
+ return bcm2835_debugfs_regset(cprman, 0, regs, 2, dentry);
+}
+
static const struct clk_ops bcm2835_pll_divider_clk_ops = {
.is_prepared = bcm2835_pll_divider_is_on,
.prepare = bcm2835_pll_divider_on,
@@ -1131,6 +823,7 @@ static const struct clk_ops bcm2835_pll_divider_clk_ops = {
.recalc_rate = bcm2835_pll_divider_get_rate,
.set_rate = bcm2835_pll_divider_set_rate,
.round_rate = bcm2835_pll_divider_round_rate,
+ .debug_init = bcm2835_pll_divider_debug_init,
};
/*
@@ -1170,7 +863,7 @@ static u32 bcm2835_clock_choose_div(struct clk_hw *hw,
GENMASK(CM_DIV_FRAC_BITS - data->frac_bits, 0) >> 1;
u64 temp = (u64)parent_rate << CM_DIV_FRAC_BITS;
u64 rem;
- u32 div;
+ u32 div, mindiv, maxdiv;
rem = do_div(temp, rate);
div = temp;
@@ -1180,10 +873,23 @@ static u32 bcm2835_clock_choose_div(struct clk_hw *hw,
div += unused_frac_mask + 1;
div &= ~unused_frac_mask;
- /* Clamp to the limits. */
- div = max(div, unused_frac_mask + 1);
- div = min_t(u32, div, GENMASK(data->int_bits + CM_DIV_FRAC_BITS - 1,
- CM_DIV_FRAC_BITS - data->frac_bits));
+ /* different clamping limits apply for a mash clock */
+ if (data->is_mash_clock) {
+ /* clamp to min divider of 2 */
+ mindiv = 2 << CM_DIV_FRAC_BITS;
+ /* clamp to the highest possible integer divider */
+ maxdiv = (BIT(data->int_bits) - 1) << CM_DIV_FRAC_BITS;
+ } else {
+ /* clamp to min divider of 1 */
+ mindiv = 1 << CM_DIV_FRAC_BITS;
+ /* clamp to the highest possible fractional divider */
+ maxdiv = GENMASK(data->int_bits + CM_DIV_FRAC_BITS - 1,
+ CM_DIV_FRAC_BITS - data->frac_bits);
+ }
+
+ /* apply the clamping limits */
+ div = max_t(u32, div, mindiv);
+ div = min_t(u32, div, maxdiv);
return div;
}
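The divider chosen here is a fixed-point value with CM_DIV_FRAC_BITS (12) fractional bits, which makes the new clamp limits easy to read off; the figures below are worked out from the hunk above, not additional patch code:

/*
 * MASH clock (e.g. "pwm"/"pcm" below: int_bits = 12, frac_bits = 12):
 *   mindiv = 2 << 12             = 0x002000  ->  divider 2.0
 *   maxdiv = (BIT(12) - 1) << 12 = 0xfff000  ->  divider 4095.0 (integer only)
 *
 * Ordinary clock (e.g. "hsm": int_bits = 4, frac_bits = 8):
 *   mindiv = 1 << 12             = 0x001000  ->  divider 1.0
 *   maxdiv = GENMASK(15, 4)      = 0x00fff0  ->  divider ~15.996
 */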
@@ -1277,14 +983,31 @@ static int bcm2835_clock_set_rate(struct clk_hw *hw,
struct bcm2835_cprman *cprman = clock->cprman;
const struct bcm2835_clock_data *data = clock->data;
u32 div = bcm2835_clock_choose_div(hw, rate, parent_rate, false);
+ u32 ctl;
+
+ spin_lock(&cprman->regs_lock);
+
+ /*
+ * Setting up frac support
+ *
+ * In principle it is recommended to stop/start the clock first,
+ * but as we set CLK_SET_RATE_GATE during registration of the
+ * clock this requirement should be taken care of by the
+ * clk-framework.
+ */
+ ctl = cprman_read(cprman, data->ctl_reg) & ~CM_FRAC;
+ ctl |= (div & CM_DIV_FRAC_MASK) ? CM_FRAC : 0;
+ cprman_write(cprman, data->ctl_reg, ctl);
cprman_write(cprman, data->div_reg, div);
+ spin_unlock(&cprman->regs_lock);
+
return 0;
}
static int bcm2835_clock_determine_rate(struct clk_hw *hw,
- struct clk_rate_request *req)
+ struct clk_rate_request *req)
{
struct bcm2835_clock *clock = bcm2835_clock_from_hw(hw);
struct clk_hw *parent, *best_parent = NULL;
@@ -1342,6 +1065,30 @@ static u8 bcm2835_clock_get_parent(struct clk_hw *hw)
return (src & CM_SRC_MASK) >> CM_SRC_SHIFT;
}
+static struct debugfs_reg32 bcm2835_debugfs_clock_reg32[] = {
+ {
+ .name = "ctl",
+ .offset = 0,
+ },
+ {
+ .name = "div",
+ .offset = 4,
+ },
+};
+
+static int bcm2835_clock_debug_init(struct clk_hw *hw,
+ struct dentry *dentry)
+{
+ struct bcm2835_clock *clock = bcm2835_clock_from_hw(hw);
+ struct bcm2835_cprman *cprman = clock->cprman;
+ const struct bcm2835_clock_data *data = clock->data;
+
+ return bcm2835_debugfs_regset(
+ cprman, data->ctl_reg,
+ bcm2835_debugfs_clock_reg32,
+ ARRAY_SIZE(bcm2835_debugfs_clock_reg32),
+ dentry);
+}
static const struct clk_ops bcm2835_clock_clk_ops = {
.is_prepared = bcm2835_clock_is_on,
@@ -1352,6 +1099,7 @@ static const struct clk_ops bcm2835_clock_clk_ops = {
.determine_rate = bcm2835_clock_determine_rate,
.set_parent = bcm2835_clock_set_parent,
.get_parent = bcm2835_clock_get_parent,
+ .debug_init = bcm2835_clock_debug_init,
};
static int bcm2835_vpu_clock_is_on(struct clk_hw *hw)
@@ -1370,6 +1118,7 @@ static const struct clk_ops bcm2835_vpu_clock_clk_ops = {
.determine_rate = bcm2835_clock_determine_rate,
.set_parent = bcm2835_clock_set_parent,
.get_parent = bcm2835_clock_get_parent,
+ .debug_init = bcm2835_clock_debug_init,
};
static struct clk *bcm2835_register_pll(struct bcm2835_cprman *cprman,
@@ -1418,7 +1167,7 @@ bcm2835_register_pll_divider(struct bcm2835_cprman *cprman,
memset(&init, 0, sizeof(init));
- init.parent_names = &data->source_pll->name;
+ init.parent_names = &data->source_pll;
init.num_parents = 1;
init.name = divider_name;
init.ops = &bcm2835_pll_divider_clk_ops;
@@ -1501,14 +1250,559 @@ static struct clk *bcm2835_register_clock(struct bcm2835_cprman *cprman,
return devm_clk_register(cprman->dev, &clock->hw);
}
+static struct clk *bcm2835_register_gate(struct bcm2835_cprman *cprman,
+ const struct bcm2835_gate_data *data)
+{
+ return clk_register_gate(cprman->dev, data->name, data->parent,
+ CLK_IGNORE_UNUSED | CLK_SET_RATE_GATE,
+ cprman->regs + data->ctl_reg,
+ CM_GATE_BIT, 0, &cprman->regs_lock);
+}
+
+typedef struct clk *(*bcm2835_clk_register)(struct bcm2835_cprman *cprman,
+ const void *data);
+struct bcm2835_clk_desc {
+ bcm2835_clk_register clk_register;
+ const void *data;
+};
+
+/* assignment helper macros for different clock types */
+#define _REGISTER(f, ...) { .clk_register = (bcm2835_clk_register)f, \
+ .data = __VA_ARGS__ }
+#define REGISTER_PLL(...) _REGISTER(&bcm2835_register_pll, \
+ &(struct bcm2835_pll_data) \
+ {__VA_ARGS__})
+#define REGISTER_PLL_DIV(...) _REGISTER(&bcm2835_register_pll_divider, \
+ &(struct bcm2835_pll_divider_data) \
+ {__VA_ARGS__})
+#define REGISTER_CLK(...) _REGISTER(&bcm2835_register_clock, \
+ &(struct bcm2835_clock_data) \
+ {__VA_ARGS__})
+#define REGISTER_GATE(...) _REGISTER(&bcm2835_register_gate, \
+ &(struct bcm2835_gate_data) \
+ {__VA_ARGS__})
+
+/* parent mux arrays plus helper macros */
+
+/* main oscillator parent mux */
+static const char *const bcm2835_clock_osc_parents[] = {
+ "gnd",
+ "xosc",
+ "testdebug0",
+ "testdebug1"
+};
+
+#define REGISTER_OSC_CLK(...) REGISTER_CLK( \
+ .num_mux_parents = ARRAY_SIZE(bcm2835_clock_osc_parents), \
+ .parents = bcm2835_clock_osc_parents, \
+ __VA_ARGS__)
+
+/* main peripheral parent mux */
+static const char *const bcm2835_clock_per_parents[] = {
+ "gnd",
+ "xosc",
+ "testdebug0",
+ "testdebug1",
+ "plla_per",
+ "pllc_per",
+ "plld_per",
+ "pllh_aux",
+};
+
+#define REGISTER_PER_CLK(...) REGISTER_CLK( \
+ .num_mux_parents = ARRAY_SIZE(bcm2835_clock_per_parents), \
+ .parents = bcm2835_clock_per_parents, \
+ __VA_ARGS__)
+
+/* main vpu parent mux */
+static const char *const bcm2835_clock_vpu_parents[] = {
+ "gnd",
+ "xosc",
+ "testdebug0",
+ "testdebug1",
+ "plla_core",
+ "pllc_core0",
+ "plld_core",
+ "pllh_aux",
+ "pllc_core1",
+ "pllc_core2",
+};
+
+#define REGISTER_VPU_CLK(...) REGISTER_CLK( \
+ .num_mux_parents = ARRAY_SIZE(bcm2835_clock_vpu_parents), \
+ .parents = bcm2835_clock_vpu_parents, \
+ __VA_ARGS__)
+
+/*
+ * the real definition of all the pll, pll_dividers and clocks
+ * these make use of the above REGISTER_* macros
+ */
+static const struct bcm2835_clk_desc clk_desc_array[] = {
+ /* the PLL + PLL dividers */
+
+ /*
+ * PLLA is the auxiliary PLL, used to drive the CCP2
+ * (Compact Camera Port 2) transmitter clock.
+ *
+ * It is in the PX LDO power domain, which is on when the
+ * AUDIO domain is on.
+ */
+ [BCM2835_PLLA] = REGISTER_PLL(
+ .name = "plla",
+ .cm_ctrl_reg = CM_PLLA,
+ .a2w_ctrl_reg = A2W_PLLA_CTRL,
+ .frac_reg = A2W_PLLA_FRAC,
+ .ana_reg_base = A2W_PLLA_ANA0,
+ .reference_enable_mask = A2W_XOSC_CTRL_PLLA_ENABLE,
+ .lock_mask = CM_LOCK_FLOCKA,
+
+ .ana = &bcm2835_ana_default,
+
+ .min_rate = 600000000u,
+ .max_rate = 2400000000u,
+ .max_fb_rate = BCM2835_MAX_FB_RATE),
+ [BCM2835_PLLA_CORE] = REGISTER_PLL_DIV(
+ .name = "plla_core",
+ .source_pll = "plla",
+ .cm_reg = CM_PLLA,
+ .a2w_reg = A2W_PLLA_CORE,
+ .load_mask = CM_PLLA_LOADCORE,
+ .hold_mask = CM_PLLA_HOLDCORE,
+ .fixed_divider = 1),
+ [BCM2835_PLLA_PER] = REGISTER_PLL_DIV(
+ .name = "plla_per",
+ .source_pll = "plla",
+ .cm_reg = CM_PLLA,
+ .a2w_reg = A2W_PLLA_PER,
+ .load_mask = CM_PLLA_LOADPER,
+ .hold_mask = CM_PLLA_HOLDPER,
+ .fixed_divider = 1),
+ [BCM2835_PLLA_DSI0] = REGISTER_PLL_DIV(
+ .name = "plla_dsi0",
+ .source_pll = "plla",
+ .cm_reg = CM_PLLA,
+ .a2w_reg = A2W_PLLA_DSI0,
+ .load_mask = CM_PLLA_LOADDSI0,
+ .hold_mask = CM_PLLA_HOLDDSI0,
+ .fixed_divider = 1),
+ [BCM2835_PLLA_CCP2] = REGISTER_PLL_DIV(
+ .name = "plla_ccp2",
+ .source_pll = "plla",
+ .cm_reg = CM_PLLA,
+ .a2w_reg = A2W_PLLA_CCP2,
+ .load_mask = CM_PLLA_LOADCCP2,
+ .hold_mask = CM_PLLA_HOLDCCP2,
+ .fixed_divider = 1),
+
+ /* PLLB is used for the ARM's clock. */
+ [BCM2835_PLLB] = REGISTER_PLL(
+ .name = "pllb",
+ .cm_ctrl_reg = CM_PLLB,
+ .a2w_ctrl_reg = A2W_PLLB_CTRL,
+ .frac_reg = A2W_PLLB_FRAC,
+ .ana_reg_base = A2W_PLLB_ANA0,
+ .reference_enable_mask = A2W_XOSC_CTRL_PLLB_ENABLE,
+ .lock_mask = CM_LOCK_FLOCKB,
+
+ .ana = &bcm2835_ana_default,
+
+ .min_rate = 600000000u,
+ .max_rate = 3000000000u,
+ .max_fb_rate = BCM2835_MAX_FB_RATE),
+ [BCM2835_PLLB_ARM] = REGISTER_PLL_DIV(
+ .name = "pllb_arm",
+ .source_pll = "pllb",
+ .cm_reg = CM_PLLB,
+ .a2w_reg = A2W_PLLB_ARM,
+ .load_mask = CM_PLLB_LOADARM,
+ .hold_mask = CM_PLLB_HOLDARM,
+ .fixed_divider = 1),
+
+ /*
+ * PLLC is the core PLL, used to drive the core VPU clock.
+ *
+ * It is in the PX LDO power domain, which is on when the
+ * AUDIO domain is on.
+ */
+ [BCM2835_PLLC] = REGISTER_PLL(
+ .name = "pllc",
+ .cm_ctrl_reg = CM_PLLC,
+ .a2w_ctrl_reg = A2W_PLLC_CTRL,
+ .frac_reg = A2W_PLLC_FRAC,
+ .ana_reg_base = A2W_PLLC_ANA0,
+ .reference_enable_mask = A2W_XOSC_CTRL_PLLC_ENABLE,
+ .lock_mask = CM_LOCK_FLOCKC,
+
+ .ana = &bcm2835_ana_default,
+
+ .min_rate = 600000000u,
+ .max_rate = 3000000000u,
+ .max_fb_rate = BCM2835_MAX_FB_RATE),
+ [BCM2835_PLLC_CORE0] = REGISTER_PLL_DIV(
+ .name = "pllc_core0",
+ .source_pll = "pllc",
+ .cm_reg = CM_PLLC,
+ .a2w_reg = A2W_PLLC_CORE0,
+ .load_mask = CM_PLLC_LOADCORE0,
+ .hold_mask = CM_PLLC_HOLDCORE0,
+ .fixed_divider = 1),
+ [BCM2835_PLLC_CORE1] = REGISTER_PLL_DIV(
+ .name = "pllc_core1",
+ .source_pll = "pllc",
+ .cm_reg = CM_PLLC,
+ .a2w_reg = A2W_PLLC_CORE1,
+ .load_mask = CM_PLLC_LOADCORE1,
+ .hold_mask = CM_PLLC_HOLDCORE1,
+ .fixed_divider = 1),
+ [BCM2835_PLLC_CORE2] = REGISTER_PLL_DIV(
+ .name = "pllc_core2",
+ .source_pll = "pllc",
+ .cm_reg = CM_PLLC,
+ .a2w_reg = A2W_PLLC_CORE2,
+ .load_mask = CM_PLLC_LOADCORE2,
+ .hold_mask = CM_PLLC_HOLDCORE2,
+ .fixed_divider = 1),
+ [BCM2835_PLLC_PER] = REGISTER_PLL_DIV(
+ .name = "pllc_per",
+ .source_pll = "pllc",
+ .cm_reg = CM_PLLC,
+ .a2w_reg = A2W_PLLC_PER,
+ .load_mask = CM_PLLC_LOADPER,
+ .hold_mask = CM_PLLC_HOLDPER,
+ .fixed_divider = 1),
+
+ /*
+ * PLLD is the display PLL, used to drive DSI display panels.
+ *
+ * It is in the PX LDO power domain, which is on when the
+ * AUDIO domain is on.
+ */
+ [BCM2835_PLLD] = REGISTER_PLL(
+ .name = "plld",
+ .cm_ctrl_reg = CM_PLLD,
+ .a2w_ctrl_reg = A2W_PLLD_CTRL,
+ .frac_reg = A2W_PLLD_FRAC,
+ .ana_reg_base = A2W_PLLD_ANA0,
+ .reference_enable_mask = A2W_XOSC_CTRL_DDR_ENABLE,
+ .lock_mask = CM_LOCK_FLOCKD,
+
+ .ana = &bcm2835_ana_default,
+
+ .min_rate = 600000000u,
+ .max_rate = 2400000000u,
+ .max_fb_rate = BCM2835_MAX_FB_RATE),
+ [BCM2835_PLLD_CORE] = REGISTER_PLL_DIV(
+ .name = "plld_core",
+ .source_pll = "plld",
+ .cm_reg = CM_PLLD,
+ .a2w_reg = A2W_PLLD_CORE,
+ .load_mask = CM_PLLD_LOADCORE,
+ .hold_mask = CM_PLLD_HOLDCORE,
+ .fixed_divider = 1),
+ [BCM2835_PLLD_PER] = REGISTER_PLL_DIV(
+ .name = "plld_per",
+ .source_pll = "plld",
+ .cm_reg = CM_PLLD,
+ .a2w_reg = A2W_PLLD_PER,
+ .load_mask = CM_PLLD_LOADPER,
+ .hold_mask = CM_PLLD_HOLDPER,
+ .fixed_divider = 1),
+ [BCM2835_PLLD_DSI0] = REGISTER_PLL_DIV(
+ .name = "plld_dsi0",
+ .source_pll = "plld",
+ .cm_reg = CM_PLLD,
+ .a2w_reg = A2W_PLLD_DSI0,
+ .load_mask = CM_PLLD_LOADDSI0,
+ .hold_mask = CM_PLLD_HOLDDSI0,
+ .fixed_divider = 1),
+ [BCM2835_PLLD_DSI1] = REGISTER_PLL_DIV(
+ .name = "plld_dsi1",
+ .source_pll = "plld",
+ .cm_reg = CM_PLLD,
+ .a2w_reg = A2W_PLLD_DSI1,
+ .load_mask = CM_PLLD_LOADDSI1,
+ .hold_mask = CM_PLLD_HOLDDSI1,
+ .fixed_divider = 1),
+
+ /*
+ * PLLH is used to supply the pixel clock or the AUX clock for the
+ * TV encoder.
+ *
+ * It is in the HDMI power domain.
+ */
+ [BCM2835_PLLH] = REGISTER_PLL(
+ .name = "pllh",
+ .cm_ctrl_reg = CM_PLLH,
+ .a2w_ctrl_reg = A2W_PLLH_CTRL,
+ .frac_reg = A2W_PLLH_FRAC,
+ .ana_reg_base = A2W_PLLH_ANA0,
+ .reference_enable_mask = A2W_XOSC_CTRL_PLLC_ENABLE,
+ .lock_mask = CM_LOCK_FLOCKH,
+
+ .ana = &bcm2835_ana_pllh,
+
+ .min_rate = 600000000u,
+ .max_rate = 3000000000u,
+ .max_fb_rate = BCM2835_MAX_FB_RATE),
+ [BCM2835_PLLH_RCAL] = REGISTER_PLL_DIV(
+ .name = "pllh_rcal",
+ .source_pll = "pllh",
+ .cm_reg = CM_PLLH,
+ .a2w_reg = A2W_PLLH_RCAL,
+ .load_mask = CM_PLLH_LOADRCAL,
+ .hold_mask = 0,
+ .fixed_divider = 10),
+ [BCM2835_PLLH_AUX] = REGISTER_PLL_DIV(
+ .name = "pllh_aux",
+ .source_pll = "pllh",
+ .cm_reg = CM_PLLH,
+ .a2w_reg = A2W_PLLH_AUX,
+ .load_mask = CM_PLLH_LOADAUX,
+ .hold_mask = 0,
+ .fixed_divider = 10),
+ [BCM2835_PLLH_PIX] = REGISTER_PLL_DIV(
+ .name = "pllh_pix",
+ .source_pll = "pllh",
+ .cm_reg = CM_PLLH,
+ .a2w_reg = A2W_PLLH_PIX,
+ .load_mask = CM_PLLH_LOADPIX,
+ .hold_mask = 0,
+ .fixed_divider = 10),
+
+ /* the clocks */
+
+ /* clocks with oscillator parent mux */
+
+ /* One Time Programmable Memory clock. Maximum 10 MHz. */
+ [BCM2835_CLOCK_OTP] = REGISTER_OSC_CLK(
+ .name = "otp",
+ .ctl_reg = CM_OTPCTL,
+ .div_reg = CM_OTPDIV,
+ .int_bits = 4,
+ .frac_bits = 0),
+ /*
+ * Used for a 1 MHz clock for the system clocksource, and also used
+ * by the watchdog timer and the camera pulse generator.
+ */
+ [BCM2835_CLOCK_TIMER] = REGISTER_OSC_CLK(
+ .name = "timer",
+ .ctl_reg = CM_TIMERCTL,
+ .div_reg = CM_TIMERDIV,
+ .int_bits = 6,
+ .frac_bits = 12),
+ /*
+ * Clock for the temperature sensor.
+ * Generally run at 2 MHz, max 5 MHz.
+ */
+ [BCM2835_CLOCK_TSENS] = REGISTER_OSC_CLK(
+ .name = "tsens",
+ .ctl_reg = CM_TSENSCTL,
+ .div_reg = CM_TSENSDIV,
+ .int_bits = 5,
+ .frac_bits = 0),
+ [BCM2835_CLOCK_TEC] = REGISTER_OSC_CLK(
+ .name = "tec",
+ .ctl_reg = CM_TECCTL,
+ .div_reg = CM_TECDIV,
+ .int_bits = 6,
+ .frac_bits = 0),
+
+ /* clocks with vpu parent mux */
+ [BCM2835_CLOCK_H264] = REGISTER_VPU_CLK(
+ .name = "h264",
+ .ctl_reg = CM_H264CTL,
+ .div_reg = CM_H264DIV,
+ .int_bits = 4,
+ .frac_bits = 8),
+ [BCM2835_CLOCK_ISP] = REGISTER_VPU_CLK(
+ .name = "isp",
+ .ctl_reg = CM_ISPCTL,
+ .div_reg = CM_ISPDIV,
+ .int_bits = 4,
+ .frac_bits = 8),
+
+ /*
+ * Secondary SDRAM clock. Used for low-voltage modes when the PLL
+ * in the SDRAM controller can't be used.
+ */
+ [BCM2835_CLOCK_SDRAM] = REGISTER_VPU_CLK(
+ .name = "sdram",
+ .ctl_reg = CM_SDCCTL,
+ .div_reg = CM_SDCDIV,
+ .int_bits = 6,
+ .frac_bits = 0),
+ [BCM2835_CLOCK_V3D] = REGISTER_VPU_CLK(
+ .name = "v3d",
+ .ctl_reg = CM_V3DCTL,
+ .div_reg = CM_V3DDIV,
+ .int_bits = 4,
+ .frac_bits = 8),
+ /*
+ * VPU clock. This doesn't have an enable bit, since it drives
+ * the bus for everything else, and is special so it doesn't need
+ * to be gated for rate changes. It is also known as "clk_audio"
+ * in various hardware documentation.
+ */
+ [BCM2835_CLOCK_VPU] = REGISTER_VPU_CLK(
+ .name = "vpu",
+ .ctl_reg = CM_VPUCTL,
+ .div_reg = CM_VPUDIV,
+ .int_bits = 12,
+ .frac_bits = 8,
+ .is_vpu_clock = true),
+
+ /* clocks with per parent mux */
+ [BCM2835_CLOCK_AVEO] = REGISTER_PER_CLK(
+ .name = "aveo",
+ .ctl_reg = CM_AVEOCTL,
+ .div_reg = CM_AVEODIV,
+ .int_bits = 4,
+ .frac_bits = 0),
+ [BCM2835_CLOCK_CAM0] = REGISTER_PER_CLK(
+ .name = "cam0",
+ .ctl_reg = CM_CAM0CTL,
+ .div_reg = CM_CAM0DIV,
+ .int_bits = 4,
+ .frac_bits = 8),
+ [BCM2835_CLOCK_CAM1] = REGISTER_PER_CLK(
+ .name = "cam1",
+ .ctl_reg = CM_CAM1CTL,
+ .div_reg = CM_CAM1DIV,
+ .int_bits = 4,
+ .frac_bits = 8),
+ [BCM2835_CLOCK_DFT] = REGISTER_PER_CLK(
+ .name = "dft",
+ .ctl_reg = CM_DFTCTL,
+ .div_reg = CM_DFTDIV,
+ .int_bits = 5,
+ .frac_bits = 0),
+ [BCM2835_CLOCK_DPI] = REGISTER_PER_CLK(
+ .name = "dpi",
+ .ctl_reg = CM_DPICTL,
+ .div_reg = CM_DPIDIV,
+ .int_bits = 4,
+ .frac_bits = 8),
+
+ /* Arasan EMMC clock */
+ [BCM2835_CLOCK_EMMC] = REGISTER_PER_CLK(
+ .name = "emmc",
+ .ctl_reg = CM_EMMCCTL,
+ .div_reg = CM_EMMCDIV,
+ .int_bits = 4,
+ .frac_bits = 8),
+
+ /* General purpose (GPIO) clocks */
+ [BCM2835_CLOCK_GP0] = REGISTER_PER_CLK(
+ .name = "gp0",
+ .ctl_reg = CM_GP0CTL,
+ .div_reg = CM_GP0DIV,
+ .int_bits = 12,
+ .frac_bits = 12,
+ .is_mash_clock = true),
+ [BCM2835_CLOCK_GP1] = REGISTER_PER_CLK(
+ .name = "gp1",
+ .ctl_reg = CM_GP1CTL,
+ .div_reg = CM_GP1DIV,
+ .int_bits = 12,
+ .frac_bits = 12,
+ .is_mash_clock = true),
+ [BCM2835_CLOCK_GP2] = REGISTER_PER_CLK(
+ .name = "gp2",
+ .ctl_reg = CM_GP2CTL,
+ .div_reg = CM_GP2DIV,
+ .int_bits = 12,
+ .frac_bits = 12),
+
+ /* HDMI state machine */
+ [BCM2835_CLOCK_HSM] = REGISTER_PER_CLK(
+ .name = "hsm",
+ .ctl_reg = CM_HSMCTL,
+ .div_reg = CM_HSMDIV,
+ .int_bits = 4,
+ .frac_bits = 8),
+ [BCM2835_CLOCK_PCM] = REGISTER_PER_CLK(
+ .name = "pcm",
+ .ctl_reg = CM_PCMCTL,
+ .div_reg = CM_PCMDIV,
+ .int_bits = 12,
+ .frac_bits = 12,
+ .is_mash_clock = true),
+ [BCM2835_CLOCK_PWM] = REGISTER_PER_CLK(
+ .name = "pwm",
+ .ctl_reg = CM_PWMCTL,
+ .div_reg = CM_PWMDIV,
+ .int_bits = 12,
+ .frac_bits = 12,
+ .is_mash_clock = true),
+ [BCM2835_CLOCK_SLIM] = REGISTER_PER_CLK(
+ .name = "slim",
+ .ctl_reg = CM_SLIMCTL,
+ .div_reg = CM_SLIMDIV,
+ .int_bits = 12,
+ .frac_bits = 12,
+ .is_mash_clock = true),
+ [BCM2835_CLOCK_SMI] = REGISTER_PER_CLK(
+ .name = "smi",
+ .ctl_reg = CM_SMICTL,
+ .div_reg = CM_SMIDIV,
+ .int_bits = 4,
+ .frac_bits = 8),
+ [BCM2835_CLOCK_UART] = REGISTER_PER_CLK(
+ .name = "uart",
+ .ctl_reg = CM_UARTCTL,
+ .div_reg = CM_UARTDIV,
+ .int_bits = 10,
+ .frac_bits = 12),
+
+ /* TV encoder clock. Only operating frequency is 108 MHz. */
+ [BCM2835_CLOCK_VEC] = REGISTER_PER_CLK(
+ .name = "vec",
+ .ctl_reg = CM_VECCTL,
+ .div_reg = CM_VECDIV,
+ .int_bits = 4,
+ .frac_bits = 0),
+
+ /* dsi clocks */
+ [BCM2835_CLOCK_DSI0E] = REGISTER_PER_CLK(
+ .name = "dsi0e",
+ .ctl_reg = CM_DSI0ECTL,
+ .div_reg = CM_DSI0EDIV,
+ .int_bits = 4,
+ .frac_bits = 8),
+ [BCM2835_CLOCK_DSI1E] = REGISTER_PER_CLK(
+ .name = "dsi1e",
+ .ctl_reg = CM_DSI1ECTL,
+ .div_reg = CM_DSI1EDIV,
+ .int_bits = 4,
+ .frac_bits = 8),
+
+ /* the gates */
+
+ /*
+ * CM_PERIICTL (and CM_PERIACTL, CM_SYSCTL and CM_VPUCTL if
+ * you have the debug bit set in the power manager, which we
+ * don't bother exposing) are individual gates off of the
+ * non-stop vpu clock.
+ */
+ [BCM2835_CLOCK_PERI_IMAGE] = REGISTER_GATE(
+ .name = "peri_image",
+ .parent = "vpu",
+ .ctl_reg = CM_PERIICTL),
+};
+
static int bcm2835_clk_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
struct clk **clks;
struct bcm2835_cprman *cprman;
struct resource *res;
+ const struct bcm2835_clk_desc *desc;
+ const size_t asize = ARRAY_SIZE(clk_desc_array);
+ size_t i;
- cprman = devm_kzalloc(dev, sizeof(*cprman), GFP_KERNEL);
+ cprman = devm_kzalloc(dev,
+ sizeof(*cprman) + asize * sizeof(*clks),
+ GFP_KERNEL);
if (!cprman)
return -ENOMEM;
@@ -1525,80 +1819,15 @@ static int bcm2835_clk_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, cprman);
- cprman->onecell.clk_num = BCM2835_CLOCK_COUNT;
+ cprman->onecell.clk_num = asize;
cprman->onecell.clks = cprman->clks;
clks = cprman->clks;
- clks[BCM2835_PLLA] = bcm2835_register_pll(cprman, &bcm2835_plla_data);
- clks[BCM2835_PLLB] = bcm2835_register_pll(cprman, &bcm2835_pllb_data);
- clks[BCM2835_PLLC] = bcm2835_register_pll(cprman, &bcm2835_pllc_data);
- clks[BCM2835_PLLD] = bcm2835_register_pll(cprman, &bcm2835_plld_data);
- clks[BCM2835_PLLH] = bcm2835_register_pll(cprman, &bcm2835_pllh_data);
-
- clks[BCM2835_PLLA_CORE] =
- bcm2835_register_pll_divider(cprman, &bcm2835_plla_core_data);
- clks[BCM2835_PLLA_PER] =
- bcm2835_register_pll_divider(cprman, &bcm2835_plla_per_data);
- clks[BCM2835_PLLC_CORE0] =
- bcm2835_register_pll_divider(cprman, &bcm2835_pllc_core0_data);
- clks[BCM2835_PLLC_CORE1] =
- bcm2835_register_pll_divider(cprman, &bcm2835_pllc_core1_data);
- clks[BCM2835_PLLC_CORE2] =
- bcm2835_register_pll_divider(cprman, &bcm2835_pllc_core2_data);
- clks[BCM2835_PLLC_PER] =
- bcm2835_register_pll_divider(cprman, &bcm2835_pllc_per_data);
- clks[BCM2835_PLLD_CORE] =
- bcm2835_register_pll_divider(cprman, &bcm2835_plld_core_data);
- clks[BCM2835_PLLD_PER] =
- bcm2835_register_pll_divider(cprman, &bcm2835_plld_per_data);
- clks[BCM2835_PLLH_RCAL] =
- bcm2835_register_pll_divider(cprman, &bcm2835_pllh_rcal_data);
- clks[BCM2835_PLLH_AUX] =
- bcm2835_register_pll_divider(cprman, &bcm2835_pllh_aux_data);
- clks[BCM2835_PLLH_PIX] =
- bcm2835_register_pll_divider(cprman, &bcm2835_pllh_pix_data);
-
- clks[BCM2835_CLOCK_TIMER] =
- bcm2835_register_clock(cprman, &bcm2835_clock_timer_data);
- clks[BCM2835_CLOCK_OTP] =
- bcm2835_register_clock(cprman, &bcm2835_clock_otp_data);
- clks[BCM2835_CLOCK_TSENS] =
- bcm2835_register_clock(cprman, &bcm2835_clock_tsens_data);
- clks[BCM2835_CLOCK_VPU] =
- bcm2835_register_clock(cprman, &bcm2835_clock_vpu_data);
- clks[BCM2835_CLOCK_V3D] =
- bcm2835_register_clock(cprman, &bcm2835_clock_v3d_data);
- clks[BCM2835_CLOCK_ISP] =
- bcm2835_register_clock(cprman, &bcm2835_clock_isp_data);
- clks[BCM2835_CLOCK_H264] =
- bcm2835_register_clock(cprman, &bcm2835_clock_h264_data);
- clks[BCM2835_CLOCK_V3D] =
- bcm2835_register_clock(cprman, &bcm2835_clock_v3d_data);
- clks[BCM2835_CLOCK_SDRAM] =
- bcm2835_register_clock(cprman, &bcm2835_clock_sdram_data);
- clks[BCM2835_CLOCK_UART] =
- bcm2835_register_clock(cprman, &bcm2835_clock_uart_data);
- clks[BCM2835_CLOCK_VEC] =
- bcm2835_register_clock(cprman, &bcm2835_clock_vec_data);
- clks[BCM2835_CLOCK_HSM] =
- bcm2835_register_clock(cprman, &bcm2835_clock_hsm_data);
- clks[BCM2835_CLOCK_EMMC] =
- bcm2835_register_clock(cprman, &bcm2835_clock_emmc_data);
-
- /*
- * CM_PERIICTL (and CM_PERIACTL, CM_SYSCTL and CM_VPUCTL if
- * you have the debug bit set in the power manager, which we
- * don't bother exposing) are individual gates off of the
- * non-stop vpu clock.
- */
- clks[BCM2835_CLOCK_PERI_IMAGE] =
- clk_register_gate(dev, "peri_image", "vpu",
- CLK_IGNORE_UNUSED | CLK_SET_RATE_GATE,
- cprman->regs + CM_PERIICTL, CM_GATE_BIT,
- 0, &cprman->regs_lock);
-
- clks[BCM2835_CLOCK_PWM] =
- bcm2835_register_clock(cprman, &bcm2835_clock_pwm_data);
+ for (i = 0; i < asize; i++) {
+ desc = &clk_desc_array[i];
+ if (desc->clk_register && desc->data)
+ clks[i] = desc->clk_register(cprman, desc->data);
+ }
return of_clk_add_provider(dev->of_node, of_clk_src_onecell_get,
&cprman->onecell);
diff --git a/drivers/clk/bcm/clk-kona-setup.c b/drivers/clk/bcm/clk-kona-setup.c
index deaa7f962b84..526b0b0e9a9f 100644
--- a/drivers/clk/bcm/clk-kona-setup.c
+++ b/drivers/clk/bcm/clk-kona-setup.c
@@ -577,7 +577,8 @@ static u32 *parent_process(const char *clocks[],
* selector is not required, but we allocate space for the
* array anyway to keep things simple.
*/
- parent_names = kmalloc(parent_count * sizeof(parent_names), GFP_KERNEL);
+ parent_names = kmalloc_array(parent_count, sizeof(*parent_names),
+ GFP_KERNEL);
if (!parent_names) {
pr_err("%s: error allocating %u parent names\n", __func__,
parent_count);
diff --git a/drivers/clk/clk-clps711x.c b/drivers/clk/clk-clps711x.c
index ff4ef4f1df62..1f60b02416a7 100644
--- a/drivers/clk/clk-clps711x.c
+++ b/drivers/clk/clk-clps711x.c
@@ -107,16 +107,15 @@ static struct clps711x_clk * __init _clps711x_clk_init(void __iomem *base,
writel(tmp, base + CLPS711X_SYSCON1);
clps711x_clk->clks[CLPS711X_CLK_DUMMY] =
- clk_register_fixed_rate(NULL, "dummy", NULL, CLK_IS_ROOT, 0);
+ clk_register_fixed_rate(NULL, "dummy", NULL, 0, 0);
clps711x_clk->clks[CLPS711X_CLK_CPU] =
- clk_register_fixed_rate(NULL, "cpu", NULL, CLK_IS_ROOT, f_cpu);
+ clk_register_fixed_rate(NULL, "cpu", NULL, 0, f_cpu);
clps711x_clk->clks[CLPS711X_CLK_BUS] =
- clk_register_fixed_rate(NULL, "bus", NULL, CLK_IS_ROOT, f_bus);
+ clk_register_fixed_rate(NULL, "bus", NULL, 0, f_bus);
clps711x_clk->clks[CLPS711X_CLK_PLL] =
- clk_register_fixed_rate(NULL, "pll", NULL, CLK_IS_ROOT, f_pll);
+ clk_register_fixed_rate(NULL, "pll", NULL, 0, f_pll);
clps711x_clk->clks[CLPS711X_CLK_TIMERREF] =
- clk_register_fixed_rate(NULL, "timer_ref", NULL, CLK_IS_ROOT,
- f_tim);
+ clk_register_fixed_rate(NULL, "timer_ref", NULL, 0, f_tim);
clps711x_clk->clks[CLPS711X_CLK_TIMER1] =
clk_register_divider_table(NULL, "timer1", "timer_ref", 0,
base + CLPS711X_SYSCON1, 5, 1, 0,
@@ -126,10 +125,9 @@ static struct clps711x_clk * __init _clps711x_clk_init(void __iomem *base,
base + CLPS711X_SYSCON1, 7, 1, 0,
timer_div_table, &clps711x_clk->lock);
clps711x_clk->clks[CLPS711X_CLK_PWM] =
- clk_register_fixed_rate(NULL, "pwm", NULL, CLK_IS_ROOT, f_pwm);
+ clk_register_fixed_rate(NULL, "pwm", NULL, 0, f_pwm);
clps711x_clk->clks[CLPS711X_CLK_SPIREF] =
- clk_register_fixed_rate(NULL, "spi_ref", NULL, CLK_IS_ROOT,
- f_spi);
+ clk_register_fixed_rate(NULL, "spi_ref", NULL, 0, f_spi);
clps711x_clk->clks[CLPS711X_CLK_SPI] =
clk_register_divider_table(NULL, "spi", "spi_ref", 0,
base + CLPS711X_SYSCON1, 16, 2, 0,
@@ -137,8 +135,7 @@ static struct clps711x_clk * __init _clps711x_clk_init(void __iomem *base,
clps711x_clk->clks[CLPS711X_CLK_UART] =
clk_register_fixed_factor(NULL, "uart", "bus", 0, 1, 10);
clps711x_clk->clks[CLPS711X_CLK_TICK] =
- clk_register_fixed_rate(NULL, "tick", NULL, CLK_IS_ROOT, 64);
-
+ clk_register_fixed_rate(NULL, "tick", NULL, 0, 64);
for (i = 0; i < CLPS711X_CLK_MAX; i++)
if (IS_ERR(clps711x_clk->clks[i]))
pr_err("clk %i: register failed with %ld\n",
diff --git a/drivers/clk/clk-composite.c b/drivers/clk/clk-composite.c
index 1f903e1f86a2..00269de2f390 100644
--- a/drivers/clk/clk-composite.c
+++ b/drivers/clk/clk-composite.c
@@ -151,6 +151,33 @@ static int clk_composite_set_rate(struct clk_hw *hw, unsigned long rate,
return rate_ops->set_rate(rate_hw, rate, parent_rate);
}
+static int clk_composite_set_rate_and_parent(struct clk_hw *hw,
+ unsigned long rate,
+ unsigned long parent_rate,
+ u8 index)
+{
+ struct clk_composite *composite = to_clk_composite(hw);
+ const struct clk_ops *rate_ops = composite->rate_ops;
+ const struct clk_ops *mux_ops = composite->mux_ops;
+ struct clk_hw *rate_hw = composite->rate_hw;
+ struct clk_hw *mux_hw = composite->mux_hw;
+ unsigned long temp_rate;
+
+ __clk_hw_set_clk(rate_hw, hw);
+ __clk_hw_set_clk(mux_hw, hw);
+
+ temp_rate = rate_ops->recalc_rate(rate_hw, parent_rate);
+ if (temp_rate > rate) {
+ rate_ops->set_rate(rate_hw, rate, parent_rate);
+ mux_ops->set_parent(mux_hw, index);
+ } else {
+ mux_ops->set_parent(mux_hw, index);
+ rate_ops->set_rate(rate_hw, rate, parent_rate);
+ }
+
+ return 0;
+}
+
static int clk_composite_is_enabled(struct clk_hw *hw)
{
struct clk_composite *composite = to_clk_composite(hw);
@@ -184,17 +211,18 @@ static void clk_composite_disable(struct clk_hw *hw)
gate_ops->disable(gate_hw);
}
-struct clk *clk_register_composite(struct device *dev, const char *name,
+struct clk_hw *clk_hw_register_composite(struct device *dev, const char *name,
const char * const *parent_names, int num_parents,
struct clk_hw *mux_hw, const struct clk_ops *mux_ops,
struct clk_hw *rate_hw, const struct clk_ops *rate_ops,
struct clk_hw *gate_hw, const struct clk_ops *gate_ops,
unsigned long flags)
{
- struct clk *clk;
+ struct clk_hw *hw;
struct clk_init_data init;
struct clk_composite *composite;
struct clk_ops *clk_composite_ops;
+ int ret;
composite = kzalloc(sizeof(*composite), GFP_KERNEL);
if (!composite)
@@ -204,12 +232,13 @@ struct clk *clk_register_composite(struct device *dev, const char *name,
init.flags = flags | CLK_IS_BASIC;
init.parent_names = parent_names;
init.num_parents = num_parents;
+ hw = &composite->hw;
clk_composite_ops = &composite->ops;
if (mux_hw && mux_ops) {
if (!mux_ops->get_parent) {
- clk = ERR_PTR(-EINVAL);
+ hw = ERR_PTR(-EINVAL);
goto err;
}
@@ -224,7 +253,7 @@ struct clk *clk_register_composite(struct device *dev, const char *name,
if (rate_hw && rate_ops) {
if (!rate_ops->recalc_rate) {
- clk = ERR_PTR(-EINVAL);
+ hw = ERR_PTR(-EINVAL);
goto err;
}
clk_composite_ops->recalc_rate = clk_composite_recalc_rate;
@@ -250,10 +279,16 @@ struct clk *clk_register_composite(struct device *dev, const char *name,
composite->rate_ops = rate_ops;
}
+ if (mux_hw && mux_ops && rate_hw && rate_ops) {
+ if (mux_ops->set_parent && rate_ops->set_rate)
+ clk_composite_ops->set_rate_and_parent =
+ clk_composite_set_rate_and_parent;
+ }
+
if (gate_hw && gate_ops) {
if (!gate_ops->is_enabled || !gate_ops->enable ||
!gate_ops->disable) {
- clk = ERR_PTR(-EINVAL);
+ hw = ERR_PTR(-EINVAL);
goto err;
}
@@ -267,22 +302,56 @@ struct clk *clk_register_composite(struct device *dev, const char *name,
init.ops = clk_composite_ops;
composite->hw.init = &init;
- clk = clk_register(dev, &composite->hw);
- if (IS_ERR(clk))
+ ret = clk_hw_register(dev, hw);
+ if (ret) {
+ hw = ERR_PTR(ret);
goto err;
+ }
if (composite->mux_hw)
- composite->mux_hw->clk = clk;
+ composite->mux_hw->clk = hw->clk;
if (composite->rate_hw)
- composite->rate_hw->clk = clk;
+ composite->rate_hw->clk = hw->clk;
if (composite->gate_hw)
- composite->gate_hw->clk = clk;
+ composite->gate_hw->clk = hw->clk;
- return clk;
+ return hw;
err:
kfree(composite);
- return clk;
+ return hw;
+}
+
+struct clk *clk_register_composite(struct device *dev, const char *name,
+ const char * const *parent_names, int num_parents,
+ struct clk_hw *mux_hw, const struct clk_ops *mux_ops,
+ struct clk_hw *rate_hw, const struct clk_ops *rate_ops,
+ struct clk_hw *gate_hw, const struct clk_ops *gate_ops,
+ unsigned long flags)
+{
+ struct clk_hw *hw;
+
+ hw = clk_hw_register_composite(dev, name, parent_names, num_parents,
+ mux_hw, mux_ops, rate_hw, rate_ops, gate_hw, gate_ops,
+ flags);
+ if (IS_ERR(hw))
+ return ERR_CAST(hw);
+ return hw->clk;
+}
+
+void clk_unregister_composite(struct clk *clk)
+{
+ struct clk_composite *composite;
+ struct clk_hw *hw;
+
+ hw = __clk_get_hw(clk);
+ if (!hw)
+ return;
+
+ composite = to_clk_composite(hw);
+
+ clk_unregister(clk);
+ kfree(composite);
}
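
A note on the composite changes above: clk_composite_set_rate_and_parent() first recomputes what the rate would be with the current divider setting under the new parent, then orders the two register writes so the clock never runs above the requested rate while switching. The sketch below is illustrative only and not part of the patch (the register offsets, parent names and lock are invented); it shows how a provider might call the new clk_hw_register_composite(), which now hands back a struct clk_hw pointer.

/* Hedged sketch only; assumes <linux/clk-provider.h> and <linux/slab.h>. */
static const char * const foo_parents[] = { "pll_a", "pll_b" };
static DEFINE_SPINLOCK(foo_lock);

static struct clk_hw *foo_register_composite(struct device *dev,
					     void __iomem *base)
{
	struct clk_mux *mux;
	struct clk_divider *div;
	struct clk_hw *hw;

	mux = kzalloc(sizeof(*mux), GFP_KERNEL);
	div = kzalloc(sizeof(*div), GFP_KERNEL);
	if (!mux || !div) {
		kfree(mux);
		kfree(div);
		return ERR_PTR(-ENOMEM);
	}

	mux->reg = base + 0x0;		/* hypothetical mux register */
	mux->shift = 0;
	mux->mask = 0x1;
	mux->lock = &foo_lock;

	div->reg = base + 0x4;		/* hypothetical divider register */
	div->shift = 0;
	div->width = 4;
	div->lock = &foo_lock;

	/* no gate piece here, hence the NULL gate_hw/gate_ops */
	hw = clk_hw_register_composite(dev, "foo", foo_parents,
				       ARRAY_SIZE(foo_parents),
				       &mux->hw, &clk_mux_ops,
				       &div->hw, &clk_divider_ops,
				       NULL, NULL, 0);
	if (IS_ERR(hw)) {
		kfree(div);
		kfree(mux);
	}
	return hw;
}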
diff --git a/drivers/clk/clk-divider.c b/drivers/clk/clk-divider.c
index 00e035b51c69..a0f55bc1ad3d 100644
--- a/drivers/clk/clk-divider.c
+++ b/drivers/clk/clk-divider.c
@@ -426,15 +426,16 @@ const struct clk_ops clk_divider_ro_ops = {
};
EXPORT_SYMBOL_GPL(clk_divider_ro_ops);
-static struct clk *_register_divider(struct device *dev, const char *name,
+static struct clk_hw *_register_divider(struct device *dev, const char *name,
const char *parent_name, unsigned long flags,
void __iomem *reg, u8 shift, u8 width,
u8 clk_divider_flags, const struct clk_div_table *table,
spinlock_t *lock)
{
struct clk_divider *div;
- struct clk *clk;
+ struct clk_hw *hw;
struct clk_init_data init;
+ int ret;
if (clk_divider_flags & CLK_DIVIDER_HIWORD_MASK) {
if (width + shift > 16) {
@@ -467,12 +468,14 @@ static struct clk *_register_divider(struct device *dev, const char *name,
div->table = table;
/* register the clock */
- clk = clk_register(dev, &div->hw);
-
- if (IS_ERR(clk))
+ hw = &div->hw;
+ ret = clk_hw_register(dev, hw);
+ if (ret) {
kfree(div);
+ hw = ERR_PTR(ret);
+ }
- return clk;
+ return hw;
}
/**
@@ -492,12 +495,39 @@ struct clk *clk_register_divider(struct device *dev, const char *name,
void __iomem *reg, u8 shift, u8 width,
u8 clk_divider_flags, spinlock_t *lock)
{
- return _register_divider(dev, name, parent_name, flags, reg, shift,
+ struct clk_hw *hw;
+
+ hw = _register_divider(dev, name, parent_name, flags, reg, shift,
width, clk_divider_flags, NULL, lock);
+ if (IS_ERR(hw))
+ return ERR_CAST(hw);
+ return hw->clk;
}
EXPORT_SYMBOL_GPL(clk_register_divider);
/**
+ * clk_hw_register_divider - register a divider clock with the clock framework
+ * @dev: device registering this clock
+ * @name: name of this clock
+ * @parent_name: name of clock's parent
+ * @flags: framework-specific flags
+ * @reg: register address to adjust divider
+ * @shift: number of bits to shift the bitfield
+ * @width: width of the bitfield
+ * @clk_divider_flags: divider-specific flags for this clock
+ * @lock: shared register lock for this clock
+ */
+struct clk_hw *clk_hw_register_divider(struct device *dev, const char *name,
+ const char *parent_name, unsigned long flags,
+ void __iomem *reg, u8 shift, u8 width,
+ u8 clk_divider_flags, spinlock_t *lock)
+{
+ return _register_divider(dev, name, parent_name, flags, reg, shift,
+ width, clk_divider_flags, NULL, lock);
+}
+EXPORT_SYMBOL_GPL(clk_hw_register_divider);
+
+/**
* clk_register_divider_table - register a table based divider clock with
* the clock framework
* @dev: device registering this clock
@@ -517,11 +547,41 @@ struct clk *clk_register_divider_table(struct device *dev, const char *name,
u8 clk_divider_flags, const struct clk_div_table *table,
spinlock_t *lock)
{
- return _register_divider(dev, name, parent_name, flags, reg, shift,
+ struct clk_hw *hw;
+
+ hw = _register_divider(dev, name, parent_name, flags, reg, shift,
width, clk_divider_flags, table, lock);
+ if (IS_ERR(hw))
+ return ERR_CAST(hw);
+ return hw->clk;
}
EXPORT_SYMBOL_GPL(clk_register_divider_table);
+/**
+ * clk_hw_register_divider_table - register a table based divider clock with
+ * the clock framework
+ * @dev: device registering this clock
+ * @name: name of this clock
+ * @parent_name: name of clock's parent
+ * @flags: framework-specific flags
+ * @reg: register address to adjust divider
+ * @shift: number of bits to shift the bitfield
+ * @width: width of the bitfield
+ * @clk_divider_flags: divider-specific flags for this clock
+ * @table: array of divider/value pairs ending with a div set to 0
+ * @lock: shared register lock for this clock
+ */
+struct clk_hw *clk_hw_register_divider_table(struct device *dev,
+ const char *name, const char *parent_name, unsigned long flags,
+ void __iomem *reg, u8 shift, u8 width,
+ u8 clk_divider_flags, const struct clk_div_table *table,
+ spinlock_t *lock)
+{
+ return _register_divider(dev, name, parent_name, flags, reg, shift,
+ width, clk_divider_flags, table, lock);
+}
+EXPORT_SYMBOL_GPL(clk_hw_register_divider_table);
+
void clk_unregister_divider(struct clk *clk)
{
struct clk_divider *div;
@@ -537,3 +597,18 @@ void clk_unregister_divider(struct clk *clk)
kfree(div);
}
EXPORT_SYMBOL_GPL(clk_unregister_divider);
+
+/**
+ * clk_hw_unregister_divider - unregister a clk divider
+ * @hw: hardware-specific clock data to unregister
+ */
+void clk_hw_unregister_divider(struct clk_hw *hw)
+{
+ struct clk_divider *div;
+
+ div = to_clk_divider(hw);
+
+ clk_hw_unregister(hw);
+ kfree(div);
+}
+EXPORT_SYMBOL_GPL(clk_hw_unregister_divider);
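
The same pattern repeats for the other basic types below (fixed-factor, fixed-rate, fractional divider, gate, gpio, mux): each gains a clk_hw_register_*() variant returning a struct clk_hw * or an ERR_PTR, plus a matching clk_hw_unregister_*(). A minimal, illustrative use of the divider variant follows; the clock name, parent and register offset are made up for the example.

/* Hedged sketch only; not part of the patch. */
static struct clk_hw *bar_div_hw;

static int bar_add_divider(struct device *dev, void __iomem *base,
			   spinlock_t *lock)
{
	/* 4-bit one-based divider at a hypothetical offset 0x10 */
	bar_div_hw = clk_hw_register_divider(dev, "bar_div", "bar_parent", 0,
					     base + 0x10, 0, 4,
					     CLK_DIVIDER_ONE_BASED, lock);
	return PTR_ERR_OR_ZERO(bar_div_hw);
}

static void bar_remove_divider(void)
{
	clk_hw_unregister_divider(bar_div_hw);
}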
diff --git a/drivers/clk/clk-fixed-factor.c b/drivers/clk/clk-fixed-factor.c
index 053448e2453d..75cd6c792cb8 100644
--- a/drivers/clk/clk-fixed-factor.c
+++ b/drivers/clk/clk-fixed-factor.c
@@ -68,13 +68,14 @@ const struct clk_ops clk_fixed_factor_ops = {
};
EXPORT_SYMBOL_GPL(clk_fixed_factor_ops);
-struct clk *clk_register_fixed_factor(struct device *dev, const char *name,
- const char *parent_name, unsigned long flags,
+struct clk_hw *clk_hw_register_fixed_factor(struct device *dev,
+ const char *name, const char *parent_name, unsigned long flags,
unsigned int mult, unsigned int div)
{
struct clk_fixed_factor *fix;
struct clk_init_data init;
- struct clk *clk;
+ struct clk_hw *hw;
+ int ret;
fix = kmalloc(sizeof(*fix), GFP_KERNEL);
if (!fix)
@@ -91,12 +92,28 @@ struct clk *clk_register_fixed_factor(struct device *dev, const char *name,
init.parent_names = &parent_name;
init.num_parents = 1;
- clk = clk_register(dev, &fix->hw);
-
- if (IS_ERR(clk))
+ hw = &fix->hw;
+ ret = clk_hw_register(dev, hw);
+ if (ret) {
kfree(fix);
+ hw = ERR_PTR(ret);
+ }
+
+ return hw;
+}
+EXPORT_SYMBOL_GPL(clk_hw_register_fixed_factor);
+
+struct clk *clk_register_fixed_factor(struct device *dev, const char *name,
+ const char *parent_name, unsigned long flags,
+ unsigned int mult, unsigned int div)
+{
+ struct clk_hw *hw;
- return clk;
+ hw = clk_hw_register_fixed_factor(dev, name, parent_name, flags, mult,
+ div);
+ if (IS_ERR(hw))
+ return ERR_CAST(hw);
+ return hw->clk;
}
EXPORT_SYMBOL_GPL(clk_register_fixed_factor);
@@ -113,6 +130,17 @@ void clk_unregister_fixed_factor(struct clk *clk)
}
EXPORT_SYMBOL_GPL(clk_unregister_fixed_factor);
+void clk_hw_unregister_fixed_factor(struct clk_hw *hw)
+{
+ struct clk_fixed_factor *fix;
+
+ fix = to_clk_fixed_factor(hw);
+
+ clk_hw_unregister(hw);
+ kfree(fix);
+}
+EXPORT_SYMBOL_GPL(clk_hw_unregister_fixed_factor);
+
#ifdef CONFIG_OF
/**
* of_fixed_factor_clk_setup() - Setup function for simple fixed factor clock
diff --git a/drivers/clk/clk-fixed-rate.c b/drivers/clk/clk-fixed-rate.c
index cd9dc925b3f8..8e4453eb54e8 100644
--- a/drivers/clk/clk-fixed-rate.c
+++ b/drivers/clk/clk-fixed-rate.c
@@ -45,8 +45,8 @@ const struct clk_ops clk_fixed_rate_ops = {
EXPORT_SYMBOL_GPL(clk_fixed_rate_ops);
/**
- * clk_register_fixed_rate_with_accuracy - register fixed-rate clock with the
- * clock framework
+ * clk_hw_register_fixed_rate_with_accuracy - register fixed-rate clock with
+ * the clock framework
* @dev: device that is registering this clock
* @name: name of this clock
* @parent_name: name of clock's parent
@@ -54,13 +54,14 @@ EXPORT_SYMBOL_GPL(clk_fixed_rate_ops);
* @fixed_rate: non-adjustable clock rate
 * @fixed_accuracy: non-adjustable clock accuracy
*/
-struct clk *clk_register_fixed_rate_with_accuracy(struct device *dev,
+struct clk_hw *clk_hw_register_fixed_rate_with_accuracy(struct device *dev,
const char *name, const char *parent_name, unsigned long flags,
unsigned long fixed_rate, unsigned long fixed_accuracy)
{
struct clk_fixed_rate *fixed;
- struct clk *clk;
+ struct clk_hw *hw;
struct clk_init_data init;
+ int ret;
/* allocate fixed-rate clock */
fixed = kzalloc(sizeof(*fixed), GFP_KERNEL);
@@ -79,22 +80,49 @@ struct clk *clk_register_fixed_rate_with_accuracy(struct device *dev,
fixed->hw.init = &init;
/* register the clock */
- clk = clk_register(dev, &fixed->hw);
- if (IS_ERR(clk))
+ hw = &fixed->hw;
+ ret = clk_hw_register(dev, hw);
+ if (ret) {
kfree(fixed);
+ hw = ERR_PTR(ret);
+ }
- return clk;
+ return hw;
+}
+EXPORT_SYMBOL_GPL(clk_hw_register_fixed_rate_with_accuracy);
+
+struct clk *clk_register_fixed_rate_with_accuracy(struct device *dev,
+ const char *name, const char *parent_name, unsigned long flags,
+ unsigned long fixed_rate, unsigned long fixed_accuracy)
+{
+ struct clk_hw *hw;
+
+ hw = clk_hw_register_fixed_rate_with_accuracy(dev, name, parent_name,
+ flags, fixed_rate, fixed_accuracy);
+ if (IS_ERR(hw))
+ return ERR_CAST(hw);
+ return hw->clk;
}
EXPORT_SYMBOL_GPL(clk_register_fixed_rate_with_accuracy);
/**
- * clk_register_fixed_rate - register fixed-rate clock with the clock framework
+ * clk_hw_register_fixed_rate - register fixed-rate clock with the clock
+ * framework
* @dev: device that is registering this clock
* @name: name of this clock
* @parent_name: name of clock's parent
* @flags: framework-specific flags
* @fixed_rate: non-adjustable clock rate
*/
+struct clk_hw *clk_hw_register_fixed_rate(struct device *dev, const char *name,
+ const char *parent_name, unsigned long flags,
+ unsigned long fixed_rate)
+{
+ return clk_hw_register_fixed_rate_with_accuracy(dev, name, parent_name,
+ flags, fixed_rate, 0);
+}
+EXPORT_SYMBOL_GPL(clk_hw_register_fixed_rate);
+
struct clk *clk_register_fixed_rate(struct device *dev, const char *name,
const char *parent_name, unsigned long flags,
unsigned long fixed_rate)
diff --git a/drivers/clk/clk-fractional-divider.c b/drivers/clk/clk-fractional-divider.c
index 1abcd76b4993..aab904618eb6 100644
--- a/drivers/clk/clk-fractional-divider.c
+++ b/drivers/clk/clk-fractional-divider.c
@@ -116,14 +116,15 @@ const struct clk_ops clk_fractional_divider_ops = {
};
EXPORT_SYMBOL_GPL(clk_fractional_divider_ops);
-struct clk *clk_register_fractional_divider(struct device *dev,
+struct clk_hw *clk_hw_register_fractional_divider(struct device *dev,
const char *name, const char *parent_name, unsigned long flags,
void __iomem *reg, u8 mshift, u8 mwidth, u8 nshift, u8 nwidth,
u8 clk_divider_flags, spinlock_t *lock)
{
struct clk_fractional_divider *fd;
struct clk_init_data init;
- struct clk *clk;
+ struct clk_hw *hw;
+ int ret;
fd = kzalloc(sizeof(*fd), GFP_KERNEL);
if (!fd)
@@ -146,10 +147,39 @@ struct clk *clk_register_fractional_divider(struct device *dev,
fd->lock = lock;
fd->hw.init = &init;
- clk = clk_register(dev, &fd->hw);
- if (IS_ERR(clk))
+ hw = &fd->hw;
+ ret = clk_hw_register(dev, hw);
+ if (ret) {
kfree(fd);
+ hw = ERR_PTR(ret);
+ }
+
+ return hw;
+}
+EXPORT_SYMBOL_GPL(clk_hw_register_fractional_divider);
- return clk;
+struct clk *clk_register_fractional_divider(struct device *dev,
+ const char *name, const char *parent_name, unsigned long flags,
+ void __iomem *reg, u8 mshift, u8 mwidth, u8 nshift, u8 nwidth,
+ u8 clk_divider_flags, spinlock_t *lock)
+{
+ struct clk_hw *hw;
+
+ hw = clk_hw_register_fractional_divider(dev, name, parent_name, flags,
+ reg, mshift, mwidth, nshift, nwidth, clk_divider_flags,
+ lock);
+ if (IS_ERR(hw))
+ return ERR_CAST(hw);
+ return hw->clk;
}
EXPORT_SYMBOL_GPL(clk_register_fractional_divider);
+
+void clk_hw_unregister_fractional_divider(struct clk_hw *hw)
+{
+ struct clk_fractional_divider *fd;
+
+ fd = to_clk_fd(hw);
+
+ clk_hw_unregister(hw);
+ kfree(fd);
+}
diff --git a/drivers/clk/clk-gate.c b/drivers/clk/clk-gate.c
index d0d8ec8e1f1b..4e691e35483a 100644
--- a/drivers/clk/clk-gate.c
+++ b/drivers/clk/clk-gate.c
@@ -110,7 +110,7 @@ const struct clk_ops clk_gate_ops = {
EXPORT_SYMBOL_GPL(clk_gate_ops);
/**
- * clk_register_gate - register a gate clock with the clock framework
+ * clk_hw_register_gate - register a gate clock with the clock framework
* @dev: device that is registering this clock
* @name: name of this clock
* @parent_name: name of this clock's parent
@@ -120,14 +120,15 @@ EXPORT_SYMBOL_GPL(clk_gate_ops);
* @clk_gate_flags: gate-specific flags for this clock
* @lock: shared register lock for this clock
*/
-struct clk *clk_register_gate(struct device *dev, const char *name,
+struct clk_hw *clk_hw_register_gate(struct device *dev, const char *name,
const char *parent_name, unsigned long flags,
void __iomem *reg, u8 bit_idx,
u8 clk_gate_flags, spinlock_t *lock)
{
struct clk_gate *gate;
- struct clk *clk;
+ struct clk_hw *hw;
struct clk_init_data init;
+ int ret;
if (clk_gate_flags & CLK_GATE_HIWORD_MASK) {
if (bit_idx > 15) {
@@ -154,12 +155,29 @@ struct clk *clk_register_gate(struct device *dev, const char *name,
gate->lock = lock;
gate->hw.init = &init;
- clk = clk_register(dev, &gate->hw);
-
- if (IS_ERR(clk))
+ hw = &gate->hw;
+ ret = clk_hw_register(dev, hw);
+ if (ret) {
kfree(gate);
+ hw = ERR_PTR(ret);
+ }
- return clk;
+ return hw;
+}
+EXPORT_SYMBOL_GPL(clk_hw_register_gate);
+
+struct clk *clk_register_gate(struct device *dev, const char *name,
+ const char *parent_name, unsigned long flags,
+ void __iomem *reg, u8 bit_idx,
+ u8 clk_gate_flags, spinlock_t *lock)
+{
+ struct clk_hw *hw;
+
+ hw = clk_hw_register_gate(dev, name, parent_name, flags, reg,
+ bit_idx, clk_gate_flags, lock);
+ if (IS_ERR(hw))
+ return ERR_CAST(hw);
+ return hw->clk;
}
EXPORT_SYMBOL_GPL(clk_register_gate);
@@ -178,3 +196,14 @@ void clk_unregister_gate(struct clk *clk)
kfree(gate);
}
EXPORT_SYMBOL_GPL(clk_unregister_gate);
+
+void clk_hw_unregister_gate(struct clk_hw *hw)
+{
+ struct clk_gate *gate;
+
+ gate = to_clk_gate(hw);
+
+ clk_hw_unregister(hw);
+ kfree(gate);
+}
+EXPORT_SYMBOL_GPL(clk_hw_unregister_gate);
diff --git a/drivers/clk/clk-gpio.c b/drivers/clk/clk-gpio.c
index 08f65acc5d57..86b245746a6b 100644
--- a/drivers/clk/clk-gpio.c
+++ b/drivers/clk/clk-gpio.c
@@ -94,13 +94,13 @@ const struct clk_ops clk_gpio_mux_ops = {
};
EXPORT_SYMBOL_GPL(clk_gpio_mux_ops);
-static struct clk *clk_register_gpio(struct device *dev, const char *name,
+static struct clk_hw *clk_register_gpio(struct device *dev, const char *name,
const char * const *parent_names, u8 num_parents, unsigned gpio,
bool active_low, unsigned long flags,
const struct clk_ops *clk_gpio_ops)
{
struct clk_gpio *clk_gpio;
- struct clk *clk;
+ struct clk_hw *hw;
struct clk_init_data init = {};
unsigned long gpio_flags;
int err;
@@ -141,24 +141,26 @@ static struct clk *clk_register_gpio(struct device *dev, const char *name,
clk_gpio->gpiod = gpio_to_desc(gpio);
clk_gpio->hw.init = &init;
+ hw = &clk_gpio->hw;
if (dev)
- clk = devm_clk_register(dev, &clk_gpio->hw);
+ err = devm_clk_hw_register(dev, hw);
else
- clk = clk_register(NULL, &clk_gpio->hw);
+ err = clk_hw_register(NULL, hw);
- if (!IS_ERR(clk))
- return clk;
+ if (!err)
+ return hw;
if (!dev) {
gpiod_put(clk_gpio->gpiod);
kfree(clk_gpio);
}
- return clk;
+ return ERR_PTR(err);
}
/**
- * clk_register_gpio_gate - register a gpio clock gate with the clock framework
+ * clk_hw_register_gpio_gate - register a gpio clock gate with the clock
+ * framework
* @dev: device that is registering this clock
* @name: name of this clock
* @parent_name: name of this clock's parent
@@ -166,7 +168,7 @@ static struct clk *clk_register_gpio(struct device *dev, const char *name,
* @active_low: true if gpio should be set to 0 to enable clock
* @flags: clock flags
*/
-struct clk *clk_register_gpio_gate(struct device *dev, const char *name,
+struct clk_hw *clk_hw_register_gpio_gate(struct device *dev, const char *name,
const char *parent_name, unsigned gpio, bool active_low,
unsigned long flags)
{
@@ -175,10 +177,24 @@ struct clk *clk_register_gpio_gate(struct device *dev, const char *name,
(parent_name ? 1 : 0), gpio, active_low, flags,
&clk_gpio_gate_ops);
}
+EXPORT_SYMBOL_GPL(clk_hw_register_gpio_gate);
+
+struct clk *clk_register_gpio_gate(struct device *dev, const char *name,
+ const char *parent_name, unsigned gpio, bool active_low,
+ unsigned long flags)
+{
+ struct clk_hw *hw;
+
+ hw = clk_hw_register_gpio_gate(dev, name, parent_name, gpio, active_low,
+ flags);
+ if (IS_ERR(hw))
+ return ERR_CAST(hw);
+ return hw->clk;
+}
EXPORT_SYMBOL_GPL(clk_register_gpio_gate);
/**
- * clk_register_gpio_mux - register a gpio clock mux with the clock framework
+ * clk_hw_register_gpio_mux - register a gpio clock mux with the clock framework
* @dev: device that is registering this clock
* @name: name of this clock
* @parent_names: names of this clock's parents
@@ -187,7 +203,7 @@ EXPORT_SYMBOL_GPL(clk_register_gpio_gate);
* @active_low: true if gpio should be set to 0 to enable clock
* @flags: clock flags
*/
-struct clk *clk_register_gpio_mux(struct device *dev, const char *name,
+struct clk_hw *clk_hw_register_gpio_mux(struct device *dev, const char *name,
const char * const *parent_names, u8 num_parents, unsigned gpio,
bool active_low, unsigned long flags)
{
@@ -199,6 +215,20 @@ struct clk *clk_register_gpio_mux(struct device *dev, const char *name,
return clk_register_gpio(dev, name, parent_names, num_parents,
gpio, active_low, flags, &clk_gpio_mux_ops);
}
+EXPORT_SYMBOL_GPL(clk_hw_register_gpio_mux);
+
+struct clk *clk_register_gpio_mux(struct device *dev, const char *name,
+ const char * const *parent_names, u8 num_parents, unsigned gpio,
+ bool active_low, unsigned long flags)
+{
+ struct clk_hw *hw;
+
+ hw = clk_hw_register_gpio_mux(dev, name, parent_names, num_parents,
+ gpio, active_low, flags);
+ if (IS_ERR(hw))
+ return ERR_CAST(hw);
+ return hw->clk;
+}
EXPORT_SYMBOL_GPL(clk_register_gpio_mux);
static int gpio_clk_driver_probe(struct platform_device *pdev)
diff --git a/drivers/clk/clk-ls1x.c b/drivers/clk/clk-ls1x.c
index d4c61985f448..5097831387ff 100644
--- a/drivers/clk/clk-ls1x.c
+++ b/drivers/clk/clk-ls1x.c
@@ -88,8 +88,7 @@ void __init ls1x_clk_init(void)
{
struct clk *clk;
- clk = clk_register_fixed_rate(NULL, "osc_33m_clk", NULL, CLK_IS_ROOT,
- OSC);
+ clk = clk_register_fixed_rate(NULL, "osc_33m_clk", NULL, 0, OSC);
clk_register_clkdev(clk, "osc_33m_clk", NULL);
/* clock derived from 33 MHz OSC clk */
diff --git a/drivers/clk/clk-mux.c b/drivers/clk/clk-mux.c
index 252188fd8bcd..16a3d5717f4e 100644
--- a/drivers/clk/clk-mux.c
+++ b/drivers/clk/clk-mux.c
@@ -113,16 +113,17 @@ const struct clk_ops clk_mux_ro_ops = {
};
EXPORT_SYMBOL_GPL(clk_mux_ro_ops);
-struct clk *clk_register_mux_table(struct device *dev, const char *name,
+struct clk_hw *clk_hw_register_mux_table(struct device *dev, const char *name,
const char * const *parent_names, u8 num_parents,
unsigned long flags,
void __iomem *reg, u8 shift, u32 mask,
u8 clk_mux_flags, u32 *table, spinlock_t *lock)
{
struct clk_mux *mux;
- struct clk *clk;
+ struct clk_hw *hw;
struct clk_init_data init;
u8 width = 0;
+ int ret;
if (clk_mux_flags & CLK_MUX_HIWORD_MASK) {
width = fls(mask) - ffs(mask) + 1;
@@ -157,12 +158,31 @@ struct clk *clk_register_mux_table(struct device *dev, const char *name,
mux->table = table;
mux->hw.init = &init;
- clk = clk_register(dev, &mux->hw);
-
- if (IS_ERR(clk))
+ hw = &mux->hw;
+ ret = clk_hw_register(dev, hw);
+ if (ret) {
kfree(mux);
+ hw = ERR_PTR(ret);
+ }
- return clk;
+ return hw;
+}
+EXPORT_SYMBOL_GPL(clk_hw_register_mux_table);
+
+struct clk *clk_register_mux_table(struct device *dev, const char *name,
+ const char * const *parent_names, u8 num_parents,
+ unsigned long flags,
+ void __iomem *reg, u8 shift, u32 mask,
+ u8 clk_mux_flags, u32 *table, spinlock_t *lock)
+{
+ struct clk_hw *hw;
+
+ hw = clk_hw_register_mux_table(dev, name, parent_names, num_parents,
+ flags, reg, shift, mask, clk_mux_flags,
+ table, lock);
+ if (IS_ERR(hw))
+ return ERR_CAST(hw);
+ return hw->clk;
}
EXPORT_SYMBOL_GPL(clk_register_mux_table);
@@ -180,6 +200,20 @@ struct clk *clk_register_mux(struct device *dev, const char *name,
}
EXPORT_SYMBOL_GPL(clk_register_mux);
+struct clk_hw *clk_hw_register_mux(struct device *dev, const char *name,
+ const char * const *parent_names, u8 num_parents,
+ unsigned long flags,
+ void __iomem *reg, u8 shift, u8 width,
+ u8 clk_mux_flags, spinlock_t *lock)
+{
+ u32 mask = BIT(width) - 1;
+
+ return clk_hw_register_mux_table(dev, name, parent_names, num_parents,
+ flags, reg, shift, mask, clk_mux_flags,
+ NULL, lock);
+}
+EXPORT_SYMBOL_GPL(clk_hw_register_mux);
+
void clk_unregister_mux(struct clk *clk)
{
struct clk_mux *mux;
@@ -195,3 +229,14 @@ void clk_unregister_mux(struct clk *clk)
kfree(mux);
}
EXPORT_SYMBOL_GPL(clk_unregister_mux);
+
+void clk_hw_unregister_mux(struct clk_hw *hw)
+{
+ struct clk_mux *mux;
+
+ mux = to_clk_mux(hw);
+
+ clk_hw_unregister(hw);
+ kfree(mux);
+}
+EXPORT_SYMBOL_GPL(clk_hw_unregister_mux);
diff --git a/drivers/clk/clk-nspire.c b/drivers/clk/clk-nspire.c
index a378db7b2382..64f196a90816 100644
--- a/drivers/clk/clk-nspire.c
+++ b/drivers/clk/clk-nspire.c
@@ -125,8 +125,7 @@ static void __init nspire_clk_setup(struct device_node *node,
of_property_read_string(node, "clock-output-names", &clk_name);
- clk = clk_register_fixed_rate(NULL, clk_name, NULL, CLK_IS_ROOT,
- info.base_clock);
+ clk = clk_register_fixed_rate(NULL, clk_name, NULL, 0, info.base_clock);
if (!IS_ERR(clk))
of_clk_add_provider(node, of_clk_src_simple_get, clk);
else
diff --git a/drivers/clk/clk-oxnas.c b/drivers/clk/clk-oxnas.c
new file mode 100644
index 000000000000..efba7d4dbcfc
--- /dev/null
+++ b/drivers/clk/clk-oxnas.c
@@ -0,0 +1,195 @@
+/*
+ * Copyright (C) 2010 Broadcom
+ * Copyright (C) 2012 Stephen Warren
+ * Copyright (C) 2016 Neil Armstrong <narmstrong@baylibre.com>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/clk-provider.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+#include <linux/stringify.h>
+#include <linux/regmap.h>
+#include <linux/mfd/syscon.h>
+
+/* Standard regmap gate clocks */
+struct clk_oxnas {
+ struct clk_hw hw;
+ signed char bit;
+ struct regmap *regmap;
+};
+
+/* Regmap offsets */
+#define CLK_STAT_REGOFFSET 0x24
+#define CLK_SET_REGOFFSET 0x2c
+#define CLK_CLR_REGOFFSET 0x30
+
+static inline struct clk_oxnas *to_clk_oxnas(struct clk_hw *hw)
+{
+ return container_of(hw, struct clk_oxnas, hw);
+}
+
+static int oxnas_clk_is_enabled(struct clk_hw *hw)
+{
+ struct clk_oxnas *std = to_clk_oxnas(hw);
+ int ret;
+ unsigned int val;
+
+ ret = regmap_read(std->regmap, CLK_STAT_REGOFFSET, &val);
+ if (ret < 0)
+ return ret;
+
+ return val & BIT(std->bit);
+}
+
+static int oxnas_clk_enable(struct clk_hw *hw)
+{
+ struct clk_oxnas *std = to_clk_oxnas(hw);
+
+ regmap_write(std->regmap, CLK_SET_REGOFFSET, BIT(std->bit));
+
+ return 0;
+}
+
+static void oxnas_clk_disable(struct clk_hw *hw)
+{
+ struct clk_oxnas *std = to_clk_oxnas(hw);
+
+ regmap_write(std->regmap, CLK_CLR_REGOFFSET, BIT(std->bit));
+}
+
+static const struct clk_ops oxnas_clk_ops = {
+ .enable = oxnas_clk_enable,
+ .disable = oxnas_clk_disable,
+ .is_enabled = oxnas_clk_is_enabled,
+};
+
+static const char *const oxnas_clk_parents[] = {
+ "oscillator",
+};
+
+static const char *const eth_parents[] = {
+ "gmacclk",
+};
+
+#define DECLARE_STD_CLKP(__clk, __parent) \
+static const struct clk_init_data clk_##__clk##_init = { \
+ .name = __stringify(__clk), \
+ .ops = &oxnas_clk_ops, \
+ .parent_names = __parent, \
+ .num_parents = ARRAY_SIZE(__parent), \
+}
+
+#define DECLARE_STD_CLK(__clk) DECLARE_STD_CLKP(__clk, oxnas_clk_parents)
+
+/* Hardware Bit - Clock association */
+struct clk_oxnas_init_data {
+ unsigned long bit;
+ const struct clk_init_data *clk_init;
+};
+
+/* Clk init data declaration */
+DECLARE_STD_CLK(leon);
+DECLARE_STD_CLK(dma_sgdma);
+DECLARE_STD_CLK(cipher);
+DECLARE_STD_CLK(sata);
+DECLARE_STD_CLK(audio);
+DECLARE_STD_CLK(usbmph);
+DECLARE_STD_CLKP(etha, eth_parents);
+DECLARE_STD_CLK(pciea);
+DECLARE_STD_CLK(nand);
+
+/* Table index is the clock index */
+static const struct clk_oxnas_init_data clk_oxnas_init[] = {
+ [0] = {0, &clk_leon_init},
+ [1] = {1, &clk_dma_sgdma_init},
+ [2] = {2, &clk_cipher_init},
+	/* Bit 3 is skipped; do not touch the DDR clock */
+ [3] = {4, &clk_sata_init},
+ [4] = {5, &clk_audio_init},
+ [5] = {6, &clk_usbmph_init},
+ [6] = {7, &clk_etha_init},
+ [7] = {8, &clk_pciea_init},
+ [8] = {9, &clk_nand_init},
+};
+
+struct clk_oxnas_data {
+ struct clk_oxnas clk_oxnas[ARRAY_SIZE(clk_oxnas_init)];
+ struct clk_onecell_data onecell_data[ARRAY_SIZE(clk_oxnas_init)];
+ struct clk *clks[ARRAY_SIZE(clk_oxnas_init)];
+};
+
+static int oxnas_stdclk_probe(struct platform_device *pdev)
+{
+ struct device_node *np = pdev->dev.of_node;
+ struct clk_oxnas_data *clk_oxnas;
+ struct regmap *regmap;
+ int i;
+
+ clk_oxnas = devm_kzalloc(&pdev->dev, sizeof(*clk_oxnas), GFP_KERNEL);
+ if (!clk_oxnas)
+ return -ENOMEM;
+
+ regmap = syscon_node_to_regmap(of_get_parent(np));
+ if (!regmap) {
+		dev_err(&pdev->dev, "failed to get parent regmap\n");
+ return -EINVAL;
+ }
+
+ for (i = 0; i < ARRAY_SIZE(clk_oxnas_init); i++) {
+ struct clk_oxnas *_clk;
+
+ _clk = &clk_oxnas->clk_oxnas[i];
+ _clk->bit = clk_oxnas_init[i].bit;
+ _clk->hw.init = clk_oxnas_init[i].clk_init;
+ _clk->regmap = regmap;
+
+ clk_oxnas->clks[i] =
+ devm_clk_register(&pdev->dev, &_clk->hw);
+ if (WARN_ON(IS_ERR(clk_oxnas->clks[i])))
+ return PTR_ERR(clk_oxnas->clks[i]);
+ }
+
+ clk_oxnas->onecell_data->clks = clk_oxnas->clks;
+ clk_oxnas->onecell_data->clk_num = ARRAY_SIZE(clk_oxnas_init);
+
+ return of_clk_add_provider(np, of_clk_src_onecell_get,
+ clk_oxnas->onecell_data);
+}
+
+static int oxnas_stdclk_remove(struct platform_device *pdev)
+{
+ of_clk_del_provider(pdev->dev.of_node);
+
+ return 0;
+}
+
+static const struct of_device_id oxnas_stdclk_dt_ids[] = {
+ { .compatible = "oxsemi,ox810se-stdclk" },
+ { }
+};
+MODULE_DEVICE_TABLE(of, oxnas_stdclk_dt_ids);
+
+static struct platform_driver oxnas_stdclk_driver = {
+ .probe = oxnas_stdclk_probe,
+ .remove = oxnas_stdclk_remove,
+ .driver = {
+ .name = "oxnas-stdclk",
+ .of_match_table = oxnas_stdclk_dt_ids,
+ },
+};
+
+module_platform_driver(oxnas_stdclk_driver);
diff --git a/drivers/clk/clk-palmas.c b/drivers/clk/clk-palmas.c
index 9c0b8e6b1ab3..8328863cb0e0 100644
--- a/drivers/clk/clk-palmas.c
+++ b/drivers/clk/clk-palmas.c
@@ -132,7 +132,7 @@ static const struct palmas_clks_of_match_data palmas_of_clk32kg = {
.init = {
.name = "clk32kg",
.ops = &palmas_clks_ops,
- .flags = CLK_IS_ROOT | CLK_IGNORE_UNUSED,
+ .flags = CLK_IGNORE_UNUSED,
},
.desc = {
.clk_name = "clk32kg",
@@ -148,7 +148,7 @@ static const struct palmas_clks_of_match_data palmas_of_clk32kgaudio = {
.init = {
.name = "clk32kgaudio",
.ops = &palmas_clks_ops,
- .flags = CLK_IS_ROOT | CLK_IGNORE_UNUSED,
+ .flags = CLK_IGNORE_UNUSED,
},
.desc = {
.clk_name = "clk32kgaudio",
diff --git a/drivers/clk/clk-pwm.c b/drivers/clk/clk-pwm.c
index 883045814dac..1630a1f085f7 100644
--- a/drivers/clk/clk-pwm.c
+++ b/drivers/clk/clk-pwm.c
@@ -59,6 +59,7 @@ static int clk_pwm_probe(struct platform_device *pdev)
struct clk_init_data init;
struct clk_pwm *clk_pwm;
struct pwm_device *pwm;
+ struct pwm_args pargs;
const char *clk_name;
struct clk *clk;
int ret;
@@ -71,22 +72,28 @@ static int clk_pwm_probe(struct platform_device *pdev)
if (IS_ERR(pwm))
return PTR_ERR(pwm);
- if (!pwm->period) {
+ pwm_get_args(pwm, &pargs);
+ if (!pargs.period) {
dev_err(&pdev->dev, "invalid PWM period\n");
return -EINVAL;
}
if (of_property_read_u32(node, "clock-frequency", &clk_pwm->fixed_rate))
- clk_pwm->fixed_rate = NSEC_PER_SEC / pwm->period;
+ clk_pwm->fixed_rate = NSEC_PER_SEC / pargs.period;
- if (pwm->period != NSEC_PER_SEC / clk_pwm->fixed_rate &&
- pwm->period != DIV_ROUND_UP(NSEC_PER_SEC, clk_pwm->fixed_rate)) {
+ if (pargs.period != NSEC_PER_SEC / clk_pwm->fixed_rate &&
+ pargs.period != DIV_ROUND_UP(NSEC_PER_SEC, clk_pwm->fixed_rate)) {
dev_err(&pdev->dev,
"clock-frequency does not match PWM period\n");
return -EINVAL;
}
- ret = pwm_config(pwm, (pwm->period + 1) >> 1, pwm->period);
+ /*
+ * FIXME: pwm_apply_args() should be removed when switching to the
+ * atomic PWM API.
+ */
+ pwm_apply_args(pwm);
+ ret = pwm_config(pwm, (pargs.period + 1) >> 1, pargs.period);
if (ret < 0)
return ret;
diff --git a/drivers/clk/clk-qoriq.c b/drivers/clk/clk-qoriq.c
index 7bc1c4527ae4..58566a17944a 100644
--- a/drivers/clk/clk-qoriq.c
+++ b/drivers/clk/clk-qoriq.c
@@ -869,14 +869,15 @@ static void __init core_mux_init(struct device_node *np)
}
}
-static struct clk *sysclk_from_fixed(struct device_node *node, const char *name)
+static struct clk __init
+*sysclk_from_fixed(struct device_node *node, const char *name)
{
u32 rate;
if (of_property_read_u32(node, "clock-frequency", &rate))
return ERR_PTR(-ENODEV);
- return clk_register_fixed_rate(NULL, name, NULL, CLK_IS_ROOT, rate);
+ return clk_register_fixed_rate(NULL, name, NULL, 0, rate);
}
static struct clk *sysclk_from_parent(const char *name)
diff --git a/drivers/clk/clk-rk808.c b/drivers/clk/clk-rk808.c
index 0fee2f4ca258..74383039761e 100644
--- a/drivers/clk/clk-rk808.c
+++ b/drivers/clk/clk-rk808.c
@@ -106,7 +106,6 @@ static int rk808_clkout_probe(struct platform_device *pdev)
if (!clk_table)
return -ENOMEM;
- init.flags = CLK_IS_ROOT;
init.parent_names = NULL;
init.num_parents = 0;
init.name = "rk808-clkout1";
diff --git a/drivers/clk/clk-tango4.c b/drivers/clk/clk-tango4.c
index 004ab7dfcfe3..eef75e305a59 100644
--- a/drivers/clk/clk-tango4.c
+++ b/drivers/clk/clk-tango4.c
@@ -4,17 +4,19 @@
#include <linux/init.h>
#include <linux/io.h>
-static struct clk *out[2];
-static struct clk_onecell_data clk_data = { out, 2 };
+#define CLK_COUNT 4 /* cpu_clk, sys_clk, usb_clk, sdio_clk */
+static struct clk *clks[CLK_COUNT];
+static struct clk_onecell_data clk_data = { clks, CLK_COUNT };
-#define SYSCLK_CTRL 0x20
-#define CPUCLK_CTRL 0x24
-#define LEGACY_DIV 0x3c
+#define SYSCLK_DIV 0x20
+#define CPUCLK_DIV 0x24
+#define DIV_BYPASS BIT(23)
-#define PLL_N(val) (((val) >> 0) & 0x7f)
-#define PLL_K(val) (((val) >> 13) & 0x7)
-#define PLL_M(val) (((val) >> 16) & 0x7)
-#define DIV_INDEX(val) (((val) >> 8) & 0xf)
+/*** CLKGEN_PLL ***/
+#define extract_pll_n(val) ((val >> 0) & ((1u << 7) - 1))
+#define extract_pll_k(val) ((val >> 13) & ((1u << 3) - 1))
+#define extract_pll_m(val) ((val >> 16) & ((1u << 3) - 1))
+#define extract_pll_isel(val) ((val >> 24) & ((1u << 3) - 1))
static void __init make_pll(int idx, const char *parent, void __iomem *base)
{
@@ -22,40 +24,61 @@ static void __init make_pll(int idx, const char *parent, void __iomem *base)
u32 val, mul, div;
sprintf(name, "pll%d", idx);
- val = readl_relaxed(base + idx*8);
- mul = PLL_N(val) + 1;
- div = (PLL_M(val) + 1) << PLL_K(val);
+ val = readl(base + idx * 8);
+ mul = extract_pll_n(val) + 1;
+ div = (extract_pll_m(val) + 1) << extract_pll_k(val);
clk_register_fixed_factor(NULL, name, parent, 0, mul, div);
+ if (extract_pll_isel(val) != 1)
+ panic("%s: input not set to XTAL_IN\n", name);
}
-static int __init get_div(void __iomem *base)
+static void __init make_cd(int idx, void __iomem *base)
{
- u8 sysclk_tab[16] = { 2, 4, 3, 3, 3, 3, 3, 3, 4, 4, 4, 4 };
- int idx = DIV_INDEX(readl_relaxed(base + LEGACY_DIV));
+ char name[8];
+ u32 val, mul, div;
- return sysclk_tab[idx];
+ sprintf(name, "cd%d", idx);
+ val = readl(base + idx * 8);
+ mul = 1 << 27;
+ div = (2 << 27) + val;
+ clk_register_fixed_factor(NULL, name, "pll2", 0, mul, div);
+ if (val > 0xf0000000)
+ panic("%s: unsupported divider %x\n", name, val);
}
static void __init tango4_clkgen_setup(struct device_node *np)
{
- int div, ret;
+ struct clk **pp = clk_data.clks;
void __iomem *base = of_iomap(np, 0);
const char *parent = of_clk_get_parent_name(np, 0);
if (!base)
- panic("%s: invalid address\n", np->full_name);
+ panic("%s: invalid address\n", np->name);
+
+ if (readl(base + CPUCLK_DIV) & DIV_BYPASS)
+ panic("%s: unsupported cpuclk setup\n", np->name);
+
+ if (readl(base + SYSCLK_DIV) & DIV_BYPASS)
+ panic("%s: unsupported sysclk setup\n", np->name);
+
+ writel(0x100, base + CPUCLK_DIV); /* disable frequency ramping */
make_pll(0, parent, base);
make_pll(1, parent, base);
+ make_pll(2, parent, base);
+ make_cd(2, base + 0x80);
+ make_cd(6, base + 0x80);
- out[0] = clk_register_divider(NULL, "cpuclk", "pll0", 0,
- base + CPUCLK_CTRL, 8, 8, CLK_DIVIDER_ONE_BASED, NULL);
+ pp[0] = clk_register_divider(NULL, "cpu_clk", "pll0", 0,
+ base + CPUCLK_DIV, 8, 8, CLK_DIVIDER_ONE_BASED, NULL);
+ pp[1] = clk_register_fixed_factor(NULL, "sys_clk", "pll1", 0, 1, 4);
+ pp[2] = clk_register_fixed_factor(NULL, "usb_clk", "cd2", 0, 1, 2);
+ pp[3] = clk_register_fixed_factor(NULL, "sdio_clk", "cd6", 0, 1, 2);
- div = readl_relaxed(base + SYSCLK_CTRL) & BIT(23) ? get_div(base) : 4;
- out[1] = clk_register_fixed_factor(NULL, "sysclk", "pll1", 0, 1, div);
+ if (IS_ERR(pp[0]) || IS_ERR(pp[1]) || IS_ERR(pp[2]) || IS_ERR(pp[3]))
+ panic("%s: clk registration failed\n", np->name);
- ret = of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data);
- if (IS_ERR(out[0]) || IS_ERR(out[1]) || ret < 0)
- panic("%s: clk registration failed\n", np->full_name);
+ if (of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data))
+ panic("%s: clk provider registration failed\n", np->name);
}
CLK_OF_DECLARE(tango4_clkgen, "sigma,tango4-clkgen", tango4_clkgen_setup);
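
For the tango4 changes above, the new make_cd() helper is easiest to read with numbers plugged in; the arithmetic below is derived directly from its mul/div assignments and the fixed-factor registrations, with the 1200 MHz pll2 figure chosen purely as an example.

/*
 * cd_rate = pll2 * 2^27 / (2^28 + val), since mul = 1 << 27 and
 * div = (2 << 27) + val.  With an assumed pll2 of 1200 MHz and
 * val = 0x08000000 (2^27):
 *
 *   cd_rate  = 1200 MHz * 2^27 / (2^28 + 2^27) = 1200 MHz / 3 = 400 MHz
 *   usb_clk  = cd2 / 2 = 200 MHz
 *   sdio_clk = cd6 / 2 = 200 MHz
 *
 * make_cd() panics for val > 0xf0000000, which the driver treats as an
 * unsupported divider setting.
 */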
diff --git a/drivers/clk/clk-twl6040.c b/drivers/clk/clk-twl6040.c
index 8e5ed649a098..697c66757400 100644
--- a/drivers/clk/clk-twl6040.c
+++ b/drivers/clk/clk-twl6040.c
@@ -74,7 +74,6 @@ static const struct clk_ops twl6040_mcpdm_ops = {
static struct clk_init_data wm831x_clkout_init = {
.name = "mcpdm_fclk",
.ops = &twl6040_mcpdm_ops,
- .flags = CLK_IS_ROOT,
};
static int twl6040_clk_probe(struct platform_device *pdev)
diff --git a/drivers/clk/clk-wm831x.c b/drivers/clk/clk-wm831x.c
index 43f9d15255f4..88def4b2761c 100644
--- a/drivers/clk/clk-wm831x.c
+++ b/drivers/clk/clk-wm831x.c
@@ -58,7 +58,6 @@ static const struct clk_ops wm831x_xtal_ops = {
static struct clk_init_data wm831x_xtal_init = {
.name = "xtal",
.ops = &wm831x_xtal_ops,
- .flags = CLK_IS_ROOT,
};
static const unsigned long wm831x_fll_auto_rates[] = {
diff --git a/drivers/clk/clk-xgene.c b/drivers/clk/clk-xgene.c
index d73450b60b28..343313250c58 100644
--- a/drivers/clk/clk-xgene.c
+++ b/drivers/clk/clk-xgene.c
@@ -198,7 +198,7 @@ static void xgene_pllclk_init(struct device_node *np, enum xgene_pll_type pll_ty
of_property_read_string(np, "clock-output-names", &clk_name);
clk = xgene_register_clk_pll(NULL,
clk_name, of_clk_get_parent_name(np, 0),
- CLK_IS_ROOT, reg, 0, pll_type, &clk_lock,
+ 0, reg, 0, pll_type, &clk_lock,
version);
if (!IS_ERR(clk)) {
of_clk_add_provider(np, of_clk_src_simple_get, clk);
diff --git a/drivers/clk/clk.c b/drivers/clk/clk.c
index fb74dc1f7520..d584004f7af7 100644
--- a/drivers/clk/clk.c
+++ b/drivers/clk/clk.c
@@ -574,6 +574,9 @@ static void clk_core_unprepare(struct clk_core *core)
if (WARN_ON(core->prepare_count == 0))
return;
+ if (WARN_ON(core->prepare_count == 1 && core->flags & CLK_IS_CRITICAL))
+ return;
+
if (--core->prepare_count > 0)
return;
@@ -679,6 +682,9 @@ static void clk_core_disable(struct clk_core *core)
if (WARN_ON(core->enable_count == 0))
return;
+ if (WARN_ON(core->enable_count == 1 && core->flags & CLK_IS_CRITICAL))
+ return;
+
if (--core->enable_count > 0)
return;
@@ -2397,6 +2403,16 @@ static int __clk_core_init(struct clk_core *core)
if (core->ops->init)
core->ops->init(core->hw);
+ if (core->flags & CLK_IS_CRITICAL) {
+ unsigned long flags;
+
+ clk_core_prepare(core);
+
+ flags = clk_enable_lock();
+ clk_core_enable(core);
+ clk_enable_unlock(flags);
+ }
+
kref_init(&core->ref);
out:
clk_prepare_unlock();
@@ -2536,6 +2552,22 @@ fail_out:
}
EXPORT_SYMBOL_GPL(clk_register);
+/**
+ * clk_hw_register - register a clk_hw and return an error code
+ * @dev: device that is registering this clock
+ * @hw: link to hardware-specific clock data
+ *
+ * clk_hw_register is the primary interface for populating the clock tree with
+ * new clock nodes. It returns an integer equal to zero indicating success or
+ * less than zero indicating failure. Drivers must test for an error code after
+ * calling clk_hw_register().
+ */
+int clk_hw_register(struct device *dev, struct clk_hw *hw)
+{
+ return PTR_ERR_OR_ZERO(clk_register(dev, hw));
+}
+EXPORT_SYMBOL_GPL(clk_hw_register);
+
/* Free memory allocated for a clock. */
static void __clk_release(struct kref *ref)
{
@@ -2637,11 +2669,26 @@ unlock:
}
EXPORT_SYMBOL_GPL(clk_unregister);
+/**
+ * clk_hw_unregister - unregister a currently registered clk_hw
+ * @hw: hardware-specific clock data to unregister
+ */
+void clk_hw_unregister(struct clk_hw *hw)
+{
+ clk_unregister(hw->clk);
+}
+EXPORT_SYMBOL_GPL(clk_hw_unregister);
+
static void devm_clk_release(struct device *dev, void *res)
{
clk_unregister(*(struct clk **)res);
}
+static void devm_clk_hw_release(struct device *dev, void *res)
+{
+ clk_hw_unregister(*(struct clk_hw **)res);
+}
+
/**
* devm_clk_register - resource managed clk_register()
* @dev: device that is registering this clock
@@ -2672,6 +2719,36 @@ struct clk *devm_clk_register(struct device *dev, struct clk_hw *hw)
}
EXPORT_SYMBOL_GPL(devm_clk_register);
+/**
+ * devm_clk_hw_register - resource managed clk_hw_register()
+ * @dev: device that is registering this clock
+ * @hw: link to hardware-specific clock data
+ *
+ * Managed clk_hw_register(). Clocks registered by this function are
+ * automatically clk_hw_unregister()ed on driver detach. See clk_hw_register()
+ * for more information.
+ */
+int devm_clk_hw_register(struct device *dev, struct clk_hw *hw)
+{
+ struct clk_hw **hwp;
+ int ret;
+
+ hwp = devres_alloc(devm_clk_hw_release, sizeof(*hwp), GFP_KERNEL);
+ if (!hwp)
+ return -ENOMEM;
+
+ ret = clk_hw_register(dev, hw);
+ if (!ret) {
+ *hwp = hw;
+ devres_add(dev, hwp);
+ } else {
+ devres_free(hwp);
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(devm_clk_hw_register);
+
static int devm_clk_match(struct device *dev, void *res, void *data)
{
struct clk *c = res;
@@ -2680,6 +2757,15 @@ static int devm_clk_match(struct device *dev, void *res, void *data)
return c == data;
}
+static int devm_clk_hw_match(struct device *dev, void *res, void *data)
+{
+ struct clk_hw *hw = res;
+
+ if (WARN_ON(!hw))
+ return 0;
+ return hw == data;
+}
+
/**
* devm_clk_unregister - resource managed clk_unregister()
* @clk: clock to unregister
@@ -2694,6 +2780,22 @@ void devm_clk_unregister(struct device *dev, struct clk *clk)
}
EXPORT_SYMBOL_GPL(devm_clk_unregister);
+/**
+ * devm_clk_hw_unregister - resource managed clk_hw_unregister()
+ * @dev: device that is unregistering the hardware-specific clock data
+ * @hw: link to hardware-specific clock data
+ *
+ * Unregister a clk_hw registered with devm_clk_hw_register(). Normally
+ * this function will not need to be called and the resource management
+ * code will ensure that the resource is freed.
+ */
+void devm_clk_hw_unregister(struct device *dev, struct clk_hw *hw)
+{
+ WARN_ON(devres_release(dev, devm_clk_hw_release, devm_clk_hw_match,
+ hw));
+}
+EXPORT_SYMBOL_GPL(devm_clk_hw_unregister);
+
/*
* clkdev helpers
*/
@@ -2855,6 +2957,7 @@ struct of_clk_provider {
struct device_node *node;
struct clk *(*get)(struct of_phandle_args *clkspec, void *data);
+ struct clk_hw *(*get_hw)(struct of_phandle_args *clkspec, void *data);
void *data;
};
@@ -2871,6 +2974,12 @@ struct clk *of_clk_src_simple_get(struct of_phandle_args *clkspec,
}
EXPORT_SYMBOL_GPL(of_clk_src_simple_get);
+struct clk_hw *of_clk_hw_simple_get(struct of_phandle_args *clkspec, void *data)
+{
+ return data;
+}
+EXPORT_SYMBOL_GPL(of_clk_hw_simple_get);
+
struct clk *of_clk_src_onecell_get(struct of_phandle_args *clkspec, void *data)
{
struct clk_onecell_data *clk_data = data;
@@ -2885,6 +2994,21 @@ struct clk *of_clk_src_onecell_get(struct of_phandle_args *clkspec, void *data)
}
EXPORT_SYMBOL_GPL(of_clk_src_onecell_get);
+struct clk_hw *
+of_clk_hw_onecell_get(struct of_phandle_args *clkspec, void *data)
+{
+ struct clk_hw_onecell_data *hw_data = data;
+ unsigned int idx = clkspec->args[0];
+
+ if (idx >= hw_data->num) {
+ pr_err("%s: invalid index %u\n", __func__, idx);
+ return ERR_PTR(-EINVAL);
+ }
+
+ return hw_data->hws[idx];
+}
+EXPORT_SYMBOL_GPL(of_clk_hw_onecell_get);
+
/**
* of_clk_add_provider() - Register a clock provider for a node
* @np: Device node pointer associated with clock provider
@@ -2921,6 +3045,41 @@ int of_clk_add_provider(struct device_node *np,
EXPORT_SYMBOL_GPL(of_clk_add_provider);
/**
+ * of_clk_add_hw_provider() - Register a clock provider for a node
+ * @np: Device node pointer associated with clock provider
+ * @get: callback for decoding clk_hw
+ * @data: context pointer for @get callback.
+ */
+int of_clk_add_hw_provider(struct device_node *np,
+ struct clk_hw *(*get)(struct of_phandle_args *clkspec,
+ void *data),
+ void *data)
+{
+ struct of_clk_provider *cp;
+ int ret;
+
+ cp = kzalloc(sizeof(*cp), GFP_KERNEL);
+ if (!cp)
+ return -ENOMEM;
+
+ cp->node = of_node_get(np);
+ cp->data = data;
+ cp->get_hw = get;
+
+ mutex_lock(&of_clk_mutex);
+ list_add(&cp->link, &of_clk_providers);
+ mutex_unlock(&of_clk_mutex);
+ pr_debug("Added clk_hw provider from %s\n", np->full_name);
+
+ ret = of_clk_set_defaults(np, true);
+ if (ret < 0)
+ of_clk_del_provider(np);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(of_clk_add_hw_provider);
+
+/**
* of_clk_del_provider() - Remove a previously registered clock provider
* @np: Device node pointer associated with clock provider
*/
@@ -2941,11 +3100,32 @@ void of_clk_del_provider(struct device_node *np)
}
EXPORT_SYMBOL_GPL(of_clk_del_provider);
+static struct clk_hw *
+__of_clk_get_hw_from_provider(struct of_clk_provider *provider,
+ struct of_phandle_args *clkspec)
+{
+ struct clk *clk;
+ struct clk_hw *hw = ERR_PTR(-EPROBE_DEFER);
+
+ if (provider->get_hw) {
+ hw = provider->get_hw(clkspec, provider->data);
+ } else if (provider->get) {
+ clk = provider->get(clkspec, provider->data);
+ if (!IS_ERR(clk))
+ hw = __clk_get_hw(clk);
+ else
+ hw = ERR_CAST(clk);
+ }
+
+ return hw;
+}
+
struct clk *__of_clk_get_from_provider(struct of_phandle_args *clkspec,
const char *dev_id, const char *con_id)
{
struct of_clk_provider *provider;
struct clk *clk = ERR_PTR(-EPROBE_DEFER);
+ struct clk_hw *hw = ERR_PTR(-EPROBE_DEFER);
if (!clkspec)
return ERR_PTR(-EINVAL);
@@ -2954,10 +3134,9 @@ struct clk *__of_clk_get_from_provider(struct of_phandle_args *clkspec,
mutex_lock(&of_clk_mutex);
list_for_each_entry(provider, &of_clk_providers, link) {
if (provider->node == clkspec->np)
- clk = provider->get(clkspec, provider->data);
- if (!IS_ERR(clk)) {
- clk = __clk_create_clk(__clk_get_hw(clk), dev_id,
- con_id);
+ hw = __of_clk_get_hw_from_provider(provider, clkspec);
+ if (!IS_ERR(hw)) {
+ clk = __clk_create_clk(hw, dev_id, con_id);
if (!IS_ERR(clk) && !__clk_get(clk)) {
__clk_free_clk(clk);
@@ -3127,6 +3306,41 @@ static int parent_ready(struct device_node *np)
}
/**
+ * of_clk_detect_critical() - set CLK_IS_CRITICAL flag from Device Tree
+ * @np: Device node pointer associated with clock provider
+ * @index: clock index
+ * @flags: pointer to clk_core->flags
+ *
+ * Detects if the clock-critical property exists and, if so, sets the
+ * corresponding CLK_IS_CRITICAL flag.
+ *
+ * Do not use this function. It exists only for legacy Device Tree
+ * bindings, such as the outdated one-clock-per-node style.
+ * Those bindings typically put all clock data into .dts and the Linux
+ * driver has no clock data, thus making it impossible to set this flag
+ * correctly from the driver. Only those drivers may call
+ * of_clk_detect_critical from their setup functions.
+ *
+ * Return: error code or zero on success
+ */
+int of_clk_detect_critical(struct device_node *np,
+ int index, unsigned long *flags)
+{
+ struct property *prop;
+ const __be32 *cur;
+ uint32_t idx;
+
+ if (!np || !flags)
+ return -EINVAL;
+
+ of_property_for_each_u32(np, "clock-critical", prop, cur, idx)
+ if (index == idx)
+ *flags |= CLK_IS_CRITICAL;
+
+ return 0;
+}
+
+/**
* of_clk_init() - Scan and init clock providers from the DT
* @matches: array of compatible values and init functions for providers.
*
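
Putting the clk.c additions together: clk_hw_register() and devm_clk_hw_register() return plain error codes, of_clk_add_hw_provider() with of_clk_hw_onecell_get() hands struct clk_hw pointers straight to DT consumers, and clocks flagged CLK_IS_CRITICAL are prepared and enabled in __clk_core_init() and protected from their final unprepare/disable. A hedged probe sketch follows; everything named foo_* is invented, error unwinding is omitted for brevity, and it assumes struct clk_hw_onecell_data ends in a flexible hws[] array, as of_clk_hw_onecell_get() implies.

#define FOO_NR_CLKS	2

static int foo_clk_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct clk_hw_onecell_data *data;
	struct clk_hw *hw;

	data = devm_kzalloc(dev, sizeof(*data) +
			    FOO_NR_CLKS * sizeof(*data->hws), GFP_KERNEL);
	if (!data)
		return -ENOMEM;
	data->num = FOO_NR_CLKS;

	/* hypothetical 24 MHz root oscillator */
	hw = clk_hw_register_fixed_rate(dev, "foo_osc", NULL, 0, 24000000);
	if (IS_ERR(hw))
		return PTR_ERR(hw);
	data->hws[0] = hw;

	/* bus clock at half the oscillator rate */
	hw = clk_hw_register_fixed_factor(dev, "foo_bus", "foo_osc", 0, 1, 2);
	if (IS_ERR(hw))
		return PTR_ERR(hw);
	data->hws[1] = hw;

	return of_clk_add_hw_provider(dev->of_node, of_clk_hw_onecell_get,
				      data);
}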
diff --git a/drivers/clk/clkdev.c b/drivers/clk/clkdev.c
index eb20b941154b..89cc700fbc37 100644
--- a/drivers/clk/clkdev.c
+++ b/drivers/clk/clkdev.c
@@ -301,6 +301,20 @@ clkdev_alloc(struct clk *clk, const char *con_id, const char *dev_fmt, ...)
}
EXPORT_SYMBOL(clkdev_alloc);
+struct clk_lookup *
+clkdev_hw_alloc(struct clk_hw *hw, const char *con_id, const char *dev_fmt, ...)
+{
+ struct clk_lookup *cl;
+ va_list ap;
+
+ va_start(ap, dev_fmt);
+ cl = vclkdev_alloc(hw, con_id, dev_fmt, ap);
+ va_end(ap);
+
+ return cl;
+}
+EXPORT_SYMBOL(clkdev_hw_alloc);
+
/**
* clkdev_create - allocate and add a clkdev lookup structure
* @clk: struct clk to associate with all clk_lookups
@@ -324,6 +338,29 @@ struct clk_lookup *clkdev_create(struct clk *clk, const char *con_id,
}
EXPORT_SYMBOL_GPL(clkdev_create);
+/**
+ * clkdev_hw_create - allocate and add a clkdev lookup structure
+ * @hw: struct clk_hw to associate with all clk_lookups
+ * @con_id: connection ID string on device
+ * @dev_fmt: format string describing device name
+ *
+ * Returns a clk_lookup structure, which can be later unregistered and
+ * freed.
+ */
+struct clk_lookup *clkdev_hw_create(struct clk_hw *hw, const char *con_id,
+ const char *dev_fmt, ...)
+{
+ struct clk_lookup *cl;
+ va_list ap;
+
+ va_start(ap, dev_fmt);
+ cl = vclkdev_create(hw, con_id, dev_fmt, ap);
+ va_end(ap);
+
+ return cl;
+}
+EXPORT_SYMBOL_GPL(clkdev_hw_create);
+
int clk_add_alias(const char *alias, const char *alias_dev_name,
const char *con_id, struct device *dev)
{
@@ -404,28 +441,28 @@ int clk_register_clkdev(struct clk *clk, const char *con_id,
EXPORT_SYMBOL(clk_register_clkdev);
/**
- * clk_register_clkdevs - register a set of clk_lookup for a struct clk
- * @clk: struct clk to associate with all clk_lookups
- * @cl: array of clk_lookup structures with con_id and dev_id pre-initialized
- * @num: number of clk_lookup structures to register
+ * clk_hw_register_clkdev - register one clock lookup for a struct clk_hw
+ * @hw: struct clk_hw to associate with all clk_lookups
+ * @con_id: connection ID string on device
+ * @dev_id: format string describing device name
*
- * To make things easier for mass registration, we detect error clks
- * from a previous clk_register() call, and return the error code for
- * those. This is to permit this function to be called immediately
- * after clk_register().
+ * con_id or dev_id may be NULL as a wildcard, just as in the rest of
+ * clkdev.
*/
-int clk_register_clkdevs(struct clk *clk, struct clk_lookup *cl, size_t num)
+int clk_hw_register_clkdev(struct clk_hw *hw, const char *con_id,
+ const char *dev_id)
{
- unsigned i;
-
- if (IS_ERR(clk))
- return PTR_ERR(clk);
+ struct clk_lookup *cl;
- for (i = 0; i < num; i++, cl++) {
- cl->clk_hw = __clk_get_hw(clk);
- __clkdev_add(cl);
- }
+ /*
+ * Since dev_id can be NULL, and NULL is handled specially, we must
+ * pass it as either a NULL format string, or with "%s".
+ */
+ if (dev_id)
+ cl = __clk_register_clkdev(hw, con_id, "%s", dev_id);
+ else
+ cl = __clk_register_clkdev(hw, con_id, NULL);
- return 0;
+ return cl ? 0 : -ENOMEM;
}
-EXPORT_SYMBOL(clk_register_clkdevs);
+EXPORT_SYMBOL(clk_hw_register_clkdev);
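
And the one-lookup convenience wrapper, again with invented names; as the comment above notes, con_id or dev_id may be NULL as a wildcard.

#include <linux/clkdev.h>
#include <linux/clk-provider.h>

/* Hypothetical: register a single lookup right after clk_hw_register(). */
static int foo_register_lookup(struct clk_hw *hw)
{
	return clk_hw_register_clkdev(hw, NULL, "foo-uart.0");
}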
diff --git a/drivers/clk/hisilicon/Kconfig b/drivers/clk/hisilicon/Kconfig
index e43485448612..3f537a04c6a6 100644
--- a/drivers/clk/hisilicon/Kconfig
+++ b/drivers/clk/hisilicon/Kconfig
@@ -1,3 +1,11 @@
+config COMMON_CLK_HI3519
+ tristate "Hi3519 Clock Driver"
+ depends on ARCH_HISI || COMPILE_TEST
+ select RESET_HISI
+ default ARCH_HISI
+ help
+ Build the clock driver for hi3519.
+
config COMMON_CLK_HI6220
bool "Hi6220 Clock Driver"
depends on ARCH_HISI || COMPILE_TEST
@@ -5,6 +13,13 @@ config COMMON_CLK_HI6220
help
Build the Hisilicon Hi6220 clock driver based on the common clock framework.
+config RESET_HISI
+ bool "HiSilicon Reset Controller Driver"
+ depends on ARCH_HISI || COMPILE_TEST
+ select RESET_CONTROLLER
+ help
+ Build reset controller driver for HiSilicon device chipsets.
+
config STUB_CLK_HI6220
bool "Hi6220 Stub Clock Driver"
depends on COMMON_CLK_HI6220 && MAILBOX
diff --git a/drivers/clk/hisilicon/Makefile b/drivers/clk/hisilicon/Makefile
index 74dba31590f9..e169ec7da023 100644
--- a/drivers/clk/hisilicon/Makefile
+++ b/drivers/clk/hisilicon/Makefile
@@ -7,5 +7,7 @@ obj-y += clk.o clkgate-separated.o clkdivider-hi6220.o
obj-$(CONFIG_ARCH_HI3xxx) += clk-hi3620.o
obj-$(CONFIG_ARCH_HIP04) += clk-hip04.o
obj-$(CONFIG_ARCH_HIX5HD2) += clk-hix5hd2.o
+obj-$(CONFIG_COMMON_CLK_HI3519) += clk-hi3519.o
obj-$(CONFIG_COMMON_CLK_HI6220) += clk-hi6220.o
+obj-$(CONFIG_RESET_HISI) += reset.o
obj-$(CONFIG_STUB_CLK_HI6220) += clk-hi6220-stub.o
diff --git a/drivers/clk/hisilicon/clk-hi3519.c b/drivers/clk/hisilicon/clk-hi3519.c
new file mode 100644
index 000000000000..715c7301a66a
--- /dev/null
+++ b/drivers/clk/hisilicon/clk-hi3519.c
@@ -0,0 +1,131 @@
+/*
+ * Hi3519 Clock Driver
+ *
+ * Copyright (c) 2015-2016 HiSilicon Technologies Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <dt-bindings/clock/hi3519-clock.h>
+#include <linux/clk-provider.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include "clk.h"
+#include "reset.h"
+
+#define HI3519_INNER_CLK_OFFSET 64
+#define HI3519_FIXED_24M 65
+#define HI3519_FIXED_50M 66
+#define HI3519_FIXED_75M 67
+#define HI3519_FIXED_125M 68
+#define HI3519_FIXED_150M 69
+#define HI3519_FIXED_200M 70
+#define HI3519_FIXED_250M 71
+#define HI3519_FIXED_300M 72
+#define HI3519_FIXED_400M 73
+#define HI3519_FMC_MUX 74
+
+#define HI3519_NR_CLKS 128
+
+static const struct hisi_fixed_rate_clock hi3519_fixed_rate_clks[] = {
+ { HI3519_FIXED_24M, "24m", NULL, 0, 24000000, },
+ { HI3519_FIXED_50M, "50m", NULL, 0, 50000000, },
+ { HI3519_FIXED_75M, "75m", NULL, 0, 75000000, },
+ { HI3519_FIXED_125M, "125m", NULL, 0, 125000000, },
+ { HI3519_FIXED_150M, "150m", NULL, 0, 150000000, },
+ { HI3519_FIXED_200M, "200m", NULL, 0, 200000000, },
+ { HI3519_FIXED_250M, "250m", NULL, 0, 250000000, },
+ { HI3519_FIXED_300M, "300m", NULL, 0, 300000000, },
+ { HI3519_FIXED_400M, "400m", NULL, 0, 400000000, },
+};
+
+static const char *const fmc_mux_p[] = {
+ "24m", "75m", "125m", "150m", "200m", "250m", "300m", "400m", };
+static u32 fmc_mux_table[] = {0, 1, 2, 3, 4, 5, 6, 7};
+
+static const struct hisi_mux_clock hi3519_mux_clks[] = {
+ { HI3519_FMC_MUX, "fmc_mux", fmc_mux_p, ARRAY_SIZE(fmc_mux_p),
+ CLK_SET_RATE_PARENT, 0xc0, 2, 3, 0, fmc_mux_table, },
+};
+
+static const struct hisi_gate_clock hi3519_gate_clks[] = {
+ { HI3519_FMC_CLK, "clk_fmc", "fmc_mux",
+ CLK_SET_RATE_PARENT, 0xc0, 1, 0, },
+ { HI3519_UART0_CLK, "clk_uart0", "24m",
+ CLK_SET_RATE_PARENT, 0xe4, 20, 0, },
+ { HI3519_UART1_CLK, "clk_uart1", "24m",
+ CLK_SET_RATE_PARENT, 0xe4, 21, 0, },
+ { HI3519_UART2_CLK, "clk_uart2", "24m",
+ CLK_SET_RATE_PARENT, 0xe4, 22, 0, },
+ { HI3519_UART3_CLK, "clk_uart3", "24m",
+ CLK_SET_RATE_PARENT, 0xe4, 23, 0, },
+ { HI3519_UART4_CLK, "clk_uart4", "24m",
+ CLK_SET_RATE_PARENT, 0xe4, 24, 0, },
+ { HI3519_SPI0_CLK, "clk_spi0", "50m",
+ CLK_SET_RATE_PARENT, 0xe4, 16, 0, },
+ { HI3519_SPI1_CLK, "clk_spi1", "50m",
+ CLK_SET_RATE_PARENT, 0xe4, 17, 0, },
+ { HI3519_SPI2_CLK, "clk_spi2", "50m",
+ CLK_SET_RATE_PARENT, 0xe4, 18, 0, },
+};
+
+static int hi3519_clk_probe(struct platform_device *pdev)
+{
+ struct device_node *np = pdev->dev.of_node;
+ struct hisi_clock_data *clk_data;
+ struct hisi_reset_controller *rstc;
+
+ rstc = hisi_reset_init(np);
+ if (!rstc)
+ return -ENOMEM;
+
+ clk_data = hisi_clk_init(np, HI3519_NR_CLKS);
+ if (!clk_data) {
+ hisi_reset_exit(rstc);
+ return -ENODEV;
+ }
+
+ hisi_clk_register_fixed_rate(hi3519_fixed_rate_clks,
+ ARRAY_SIZE(hi3519_fixed_rate_clks),
+ clk_data);
+ hisi_clk_register_mux(hi3519_mux_clks, ARRAY_SIZE(hi3519_mux_clks),
+ clk_data);
+ hisi_clk_register_gate(hi3519_gate_clks,
+ ARRAY_SIZE(hi3519_gate_clks), clk_data);
+
+ return 0;
+}
+
+static const struct of_device_id hi3519_clk_match_table[] = {
+ { .compatible = "hisilicon,hi3519-crg" },
+ { }
+};
+MODULE_DEVICE_TABLE(of, hi3519_clk_match_table);
+
+static struct platform_driver hi3519_clk_driver = {
+ .probe = hi3519_clk_probe,
+ .driver = {
+ .name = "hi3519-clk",
+ .of_match_table = hi3519_clk_match_table,
+ },
+};
+
+static int __init hi3519_clk_init(void)
+{
+ return platform_driver_register(&hi3519_clk_driver);
+}
+core_initcall(hi3519_clk_init);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("HiSilicon Hi3519 Clock Driver");
diff --git a/drivers/clk/hisilicon/clk.c b/drivers/clk/hisilicon/clk.c
index 9f8e76676553..9b15adbfc30c 100644
--- a/drivers/clk/hisilicon/clk.c
+++ b/drivers/clk/hisilicon/clk.c
@@ -37,7 +37,7 @@
static DEFINE_SPINLOCK(hisi_clk_lock);
-struct hisi_clock_data __init *hisi_clk_init(struct device_node *np,
+struct hisi_clock_data *hisi_clk_init(struct device_node *np,
int nr_clks)
{
struct hisi_clock_data *clk_data;
@@ -71,8 +71,9 @@ err_data:
err:
return NULL;
}
+EXPORT_SYMBOL_GPL(hisi_clk_init);
-void __init hisi_clk_register_fixed_rate(struct hisi_fixed_rate_clock *clks,
+void hisi_clk_register_fixed_rate(const struct hisi_fixed_rate_clock *clks,
int nums, struct hisi_clock_data *data)
{
struct clk *clk;
@@ -91,8 +92,9 @@ void __init hisi_clk_register_fixed_rate(struct hisi_fixed_rate_clock *clks,
data->clk_data.clks[clks[i].id] = clk;
}
}
+EXPORT_SYMBOL_GPL(hisi_clk_register_fixed_rate);
-void __init hisi_clk_register_fixed_factor(struct hisi_fixed_factor_clock *clks,
+void hisi_clk_register_fixed_factor(const struct hisi_fixed_factor_clock *clks,
int nums,
struct hisi_clock_data *data)
{
@@ -112,8 +114,9 @@ void __init hisi_clk_register_fixed_factor(struct hisi_fixed_factor_clock *clks,
data->clk_data.clks[clks[i].id] = clk;
}
}
+EXPORT_SYMBOL_GPL(hisi_clk_register_fixed_factor);
-void __init hisi_clk_register_mux(struct hisi_mux_clock *clks,
+void hisi_clk_register_mux(const struct hisi_mux_clock *clks,
int nums, struct hisi_clock_data *data)
{
struct clk *clk;
@@ -141,8 +144,9 @@ void __init hisi_clk_register_mux(struct hisi_mux_clock *clks,
data->clk_data.clks[clks[i].id] = clk;
}
}
+EXPORT_SYMBOL_GPL(hisi_clk_register_mux);
-void __init hisi_clk_register_divider(struct hisi_divider_clock *clks,
+void hisi_clk_register_divider(const struct hisi_divider_clock *clks,
int nums, struct hisi_clock_data *data)
{
struct clk *clk;
@@ -170,8 +174,9 @@ void __init hisi_clk_register_divider(struct hisi_divider_clock *clks,
data->clk_data.clks[clks[i].id] = clk;
}
}
+EXPORT_SYMBOL_GPL(hisi_clk_register_divider);
-void __init hisi_clk_register_gate(struct hisi_gate_clock *clks,
+void hisi_clk_register_gate(const struct hisi_gate_clock *clks,
int nums, struct hisi_clock_data *data)
{
struct clk *clk;
@@ -198,8 +203,9 @@ void __init hisi_clk_register_gate(struct hisi_gate_clock *clks,
data->clk_data.clks[clks[i].id] = clk;
}
}
+EXPORT_SYMBOL_GPL(hisi_clk_register_gate);
-void __init hisi_clk_register_gate_sep(struct hisi_gate_clock *clks,
+void hisi_clk_register_gate_sep(const struct hisi_gate_clock *clks,
int nums, struct hisi_clock_data *data)
{
struct clk *clk;
@@ -226,8 +232,9 @@ void __init hisi_clk_register_gate_sep(struct hisi_gate_clock *clks,
data->clk_data.clks[clks[i].id] = clk;
}
}
+EXPORT_SYMBOL_GPL(hisi_clk_register_gate_sep);
-void __init hi6220_clk_register_divider(struct hi6220_divider_clock *clks,
+void __init hi6220_clk_register_divider(const struct hi6220_divider_clock *clks,
int nums, struct hisi_clock_data *data)
{
struct clk *clk;
diff --git a/drivers/clk/hisilicon/clk.h b/drivers/clk/hisilicon/clk.h
index b56fbc1c5f27..20d64afe4ad8 100644
--- a/drivers/clk/hisilicon/clk.h
+++ b/drivers/clk/hisilicon/clk.h
@@ -111,18 +111,18 @@ struct clk *hi6220_register_clkdiv(struct device *dev, const char *name,
u8 shift, u8 width, u32 mask_bit, spinlock_t *lock);
struct hisi_clock_data *hisi_clk_init(struct device_node *, int);
-void hisi_clk_register_fixed_rate(struct hisi_fixed_rate_clock *,
+void hisi_clk_register_fixed_rate(const struct hisi_fixed_rate_clock *,
int, struct hisi_clock_data *);
-void hisi_clk_register_fixed_factor(struct hisi_fixed_factor_clock *,
+void hisi_clk_register_fixed_factor(const struct hisi_fixed_factor_clock *,
int, struct hisi_clock_data *);
-void hisi_clk_register_mux(struct hisi_mux_clock *, int,
+void hisi_clk_register_mux(const struct hisi_mux_clock *, int,
struct hisi_clock_data *);
-void hisi_clk_register_divider(struct hisi_divider_clock *,
+void hisi_clk_register_divider(const struct hisi_divider_clock *,
int, struct hisi_clock_data *);
-void hisi_clk_register_gate(struct hisi_gate_clock *,
+void hisi_clk_register_gate(const struct hisi_gate_clock *,
int, struct hisi_clock_data *);
-void hisi_clk_register_gate_sep(struct hisi_gate_clock *,
+void hisi_clk_register_gate_sep(const struct hisi_gate_clock *,
int, struct hisi_clock_data *);
-void hi6220_clk_register_divider(struct hi6220_divider_clock *,
+void hi6220_clk_register_divider(const struct hi6220_divider_clock *,
int, struct hisi_clock_data *);
#endif /* __HISI_CLK_H */
diff --git a/drivers/clk/hisilicon/reset.c b/drivers/clk/hisilicon/reset.c
new file mode 100644
index 000000000000..6aa49c2204d0
--- /dev/null
+++ b/drivers/clk/hisilicon/reset.c
@@ -0,0 +1,134 @@
+/*
+ * Hisilicon Reset Controller Driver
+ *
+ * Copyright (c) 2015-2016 HiSilicon Technologies Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/io.h>
+#include <linux/of_address.h>
+#include <linux/reset-controller.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include "reset.h"
+
+#define HISI_RESET_BIT_MASK 0x1f
+#define HISI_RESET_OFFSET_SHIFT 8
+#define HISI_RESET_OFFSET_MASK 0xffff00
+
+struct hisi_reset_controller {
+ spinlock_t lock;
+ void __iomem *membase;
+ struct reset_controller_dev rcdev;
+};
+
+
+#define to_hisi_reset_controller(rcdev) \
+ container_of(rcdev, struct hisi_reset_controller, rcdev)
+
+static int hisi_reset_of_xlate(struct reset_controller_dev *rcdev,
+ const struct of_phandle_args *reset_spec)
+{
+ u32 offset;
+ u8 bit;
+
+ offset = (reset_spec->args[0] << HISI_RESET_OFFSET_SHIFT)
+ & HISI_RESET_OFFSET_MASK;
+ bit = reset_spec->args[1] & HISI_RESET_BIT_MASK;
+
+ return (offset | bit);
+}
+
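
A minimal userspace sketch (assumed values, not kernel code) of how the two-cell reset specifier is packed into, and unpacked from, the flat id used by the ops above.

#include <stdint.h>
#include <stdio.h>

#define HISI_RESET_BIT_MASK	0x1f
#define HISI_RESET_OFFSET_SHIFT	8
#define HISI_RESET_OFFSET_MASK	0xffff00

int main(void)
{
	/* Hypothetical specifier: register offset 0x0c, bit 3. */
	uint32_t offset = 0x0c, bit = 3;

	/* Packed as in hisi_reset_of_xlate(). */
	uint32_t id = ((offset << HISI_RESET_OFFSET_SHIFT) & HISI_RESET_OFFSET_MASK) |
		      (bit & HISI_RESET_BIT_MASK);

	/* Unpacked as in hisi_reset_assert()/hisi_reset_deassert(). */
	printf("id=0x%x -> offset=0x%x bit=%u\n", id,
	       (id & HISI_RESET_OFFSET_MASK) >> HISI_RESET_OFFSET_SHIFT,
	       id & HISI_RESET_BIT_MASK);
	return 0;
}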
+static int hisi_reset_assert(struct reset_controller_dev *rcdev,
+ unsigned long id)
+{
+ struct hisi_reset_controller *rstc = to_hisi_reset_controller(rcdev);
+ unsigned long flags;
+ u32 offset, reg;
+ u8 bit;
+
+ offset = (id & HISI_RESET_OFFSET_MASK) >> HISI_RESET_OFFSET_SHIFT;
+ bit = id & HISI_RESET_BIT_MASK;
+
+ spin_lock_irqsave(&rstc->lock, flags);
+
+ reg = readl(rstc->membase + offset);
+ writel(reg | BIT(bit), rstc->membase + offset);
+
+ spin_unlock_irqrestore(&rstc->lock, flags);
+
+ return 0;
+}
+
+static int hisi_reset_deassert(struct reset_controller_dev *rcdev,
+ unsigned long id)
+{
+ struct hisi_reset_controller *rstc = to_hisi_reset_controller(rcdev);
+ unsigned long flags;
+ u32 offset, reg;
+ u8 bit;
+
+ offset = (id & HISI_RESET_OFFSET_MASK) >> HISI_RESET_OFFSET_SHIFT;
+ bit = id & HISI_RESET_BIT_MASK;
+
+ spin_lock_irqsave(&rstc->lock, flags);
+
+ reg = readl(rstc->membase + offset);
+ writel(reg & ~BIT(bit), rstc->membase + offset);
+
+ spin_unlock_irqrestore(&rstc->lock, flags);
+
+ return 0;
+}
+
+static const struct reset_control_ops hisi_reset_ops = {
+ .assert = hisi_reset_assert,
+ .deassert = hisi_reset_deassert,
+};
+
+struct hisi_reset_controller *hisi_reset_init(struct device_node *np)
+{
+ struct hisi_reset_controller *rstc;
+
+ rstc = kzalloc(sizeof(*rstc), GFP_KERNEL);
+ if (!rstc)
+ return NULL;
+
+ rstc->membase = of_iomap(np, 0);
+ if (!rstc->membase) {
+ kfree(rstc);
+ return NULL;
+ }
+
+ spin_lock_init(&rstc->lock);
+
+ rstc->rcdev.owner = THIS_MODULE;
+ rstc->rcdev.ops = &hisi_reset_ops;
+ rstc->rcdev.of_node = np;
+ rstc->rcdev.of_reset_n_cells = 2;
+ rstc->rcdev.of_xlate = hisi_reset_of_xlate;
+ reset_controller_register(&rstc->rcdev);
+
+ return rstc;
+}
+EXPORT_SYMBOL_GPL(hisi_reset_init);
+
+void hisi_reset_exit(struct hisi_reset_controller *rstc)
+{
+ reset_controller_unregister(&rstc->rcdev);
+ iounmap(rstc->membase);
+ kfree(rstc);
+}
+EXPORT_SYMBOL_GPL(hisi_reset_exit);
diff --git a/drivers/clk/hisilicon/reset.h b/drivers/clk/hisilicon/reset.h
new file mode 100644
index 000000000000..677d773ed27c
--- /dev/null
+++ b/drivers/clk/hisilicon/reset.h
@@ -0,0 +1,36 @@
+/*
+ * Copyright (c) 2015 HiSilicon Technologies Co., Ltd.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __HISI_RESET_H
+#define __HISI_RESET_H
+
+struct device_node;
+struct hisi_reset_controller;
+
+#ifdef CONFIG_RESET_CONTROLLER
+struct hisi_reset_controller *hisi_reset_init(struct device_node *np);
+void hisi_reset_exit(struct hisi_reset_controller *rstc);
+#else
+static inline
+struct hisi_reset_controller *hisi_reset_init(struct device_node *np)
+{
+	return NULL;
+}
+static inline void hisi_reset_exit(struct hisi_reset_controller *rstc)
+{}
+#endif
+
+#endif /* __HISI_RESET_H */
diff --git a/drivers/clk/imx/clk-gate2.c b/drivers/clk/imx/clk-gate2.c
index 8935bff99fe7..db44a198a0d9 100644
--- a/drivers/clk/imx/clk-gate2.c
+++ b/drivers/clk/imx/clk-gate2.c
@@ -31,6 +31,7 @@ struct clk_gate2 {
struct clk_hw hw;
void __iomem *reg;
u8 bit_idx;
+ u8 cgr_val;
u8 flags;
spinlock_t *lock;
unsigned int *share_count;
@@ -50,7 +51,8 @@ static int clk_gate2_enable(struct clk_hw *hw)
goto out;
reg = readl(gate->reg);
- reg |= 3 << gate->bit_idx;
+ reg &= ~(3 << gate->bit_idx);
+ reg |= gate->cgr_val << gate->bit_idx;
writel(reg, gate->reg);
out:
@@ -125,7 +127,7 @@ static struct clk_ops clk_gate2_ops = {
struct clk *clk_register_gate2(struct device *dev, const char *name,
const char *parent_name, unsigned long flags,
- void __iomem *reg, u8 bit_idx,
+ void __iomem *reg, u8 bit_idx, u8 cgr_val,
u8 clk_gate2_flags, spinlock_t *lock,
unsigned int *share_count)
{
@@ -140,6 +142,7 @@ struct clk *clk_register_gate2(struct device *dev, const char *name,
/* struct clk_gate2 assignments */
gate->reg = reg;
gate->bit_idx = bit_idx;
+ gate->cgr_val = cgr_val;
gate->flags = clk_gate2_flags;
gate->lock = lock;
gate->share_count = share_count;
diff --git a/drivers/clk/imx/clk-imx35.c b/drivers/clk/imx/clk-imx35.c
index a71d24cb4c06..b0978d3b83e2 100644
--- a/drivers/clk/imx/clk-imx35.c
+++ b/drivers/clk/imx/clk-imx35.c
@@ -66,7 +66,7 @@ static const char *std_sel[] = {"ppll", "arm"};
static const char *ipg_per_sel[] = {"ahb_per_div", "arm_per_div"};
enum mx35_clks {
- ckih, ckil, mpll, ppll, mpll_075, arm, hsp, hsp_div, hsp_sel, ahb, ipg,
+ ckih, mpll, ppll, mpll_075, arm, hsp, hsp_div, hsp_sel, ahb, ipg,
arm_per_div, ahb_per_div, ipg_per, uart_sel, uart_div, esdhc_sel,
esdhc1_div, esdhc2_div, esdhc3_div, spdif_sel, spdif_div_pre,
spdif_div_post, ssi_sel, ssi1_div_pre, ssi1_div_post, ssi2_div_pre,
@@ -79,7 +79,7 @@ enum mx35_clks {
rtc_gate, rtic_gate, scc_gate, sdma_gate, spba_gate, spdif_gate,
ssi1_gate, ssi2_gate, uart1_gate, uart2_gate, uart3_gate, usbotg_gate,
wdog_gate, max_gate, admux_gate, csi_gate, csi_div, csi_sel, iim_gate,
- gpu2d_gate, clk_max
+ gpu2d_gate, ckil, clk_max
};
static struct clk *clk[clk_max];
diff --git a/drivers/clk/imx/clk-imx6sx.c b/drivers/clk/imx/clk-imx6sx.c
index fea125eb4330..97e742a8be17 100644
--- a/drivers/clk/imx/clk-imx6sx.c
+++ b/drivers/clk/imx/clk-imx6sx.c
@@ -134,6 +134,8 @@ static u32 share_count_esai;
static u32 share_count_ssi1;
static u32 share_count_ssi2;
static u32 share_count_ssi3;
+static u32 share_count_sai1;
+static u32 share_count_sai2;
static struct clk ** const uart_clks[] __initconst = {
&clks[IMX6SX_CLK_UART_IPG],
@@ -469,10 +471,10 @@ static void __init imx6sx_clocks_init(struct device_node *ccm_node)
clks[IMX6SX_CLK_SSI3] = imx_clk_gate2_shared("ssi3", "ssi3_podf", base + 0x7c, 22, &share_count_ssi3);
clks[IMX6SX_CLK_UART_IPG] = imx_clk_gate2("uart_ipg", "ipg", base + 0x7c, 24);
clks[IMX6SX_CLK_UART_SERIAL] = imx_clk_gate2("uart_serial", "uart_podf", base + 0x7c, 26);
- clks[IMX6SX_CLK_SAI1_IPG] = imx_clk_gate2("sai1_ipg", "ipg", base + 0x7c, 28);
- clks[IMX6SX_CLK_SAI2_IPG] = imx_clk_gate2("sai2_ipg", "ipg", base + 0x7c, 30);
- clks[IMX6SX_CLK_SAI1] = imx_clk_gate2("sai1", "ssi1_podf", base + 0x7c, 28);
- clks[IMX6SX_CLK_SAI2] = imx_clk_gate2("sai2", "ssi2_podf", base + 0x7c, 30);
+ clks[IMX6SX_CLK_SAI1_IPG] = imx_clk_gate2_shared("sai1_ipg", "ipg", base + 0x7c, 28, &share_count_sai1);
+ clks[IMX6SX_CLK_SAI2_IPG] = imx_clk_gate2_shared("sai2_ipg", "ipg", base + 0x7c, 30, &share_count_sai2);
+ clks[IMX6SX_CLK_SAI1] = imx_clk_gate2_shared("sai1", "ssi1_podf", base + 0x7c, 28, &share_count_sai1);
+ clks[IMX6SX_CLK_SAI2] = imx_clk_gate2_shared("sai2", "ssi2_podf", base + 0x7c, 30, &share_count_sai2);
/* CCGR6 */
clks[IMX6SX_CLK_USBOH3] = imx_clk_gate2("usboh3", "ipg", base + 0x80, 0);
diff --git a/drivers/clk/imx/clk-imx7d.c b/drivers/clk/imx/clk-imx7d.c
index fbb6a8c8653d..522996800d5b 100644
--- a/drivers/clk/imx/clk-imx7d.c
+++ b/drivers/clk/imx/clk-imx7d.c
@@ -56,7 +56,7 @@ static const char *nand_usdhc_bus_sel[] = { "osc", "pll_sys_pfd2_270m_clk",
"pll_sys_pfd2_135m_clk", "pll_sys_pfd6_clk", "pll_enet_250m_clk",
"pll_audio_main_clk", };
-static const char *ahb_channel_sel[] = { "osc", "pll_sys_pfd2_135m_clk",
+static const char *ahb_channel_sel[] = { "osc", "pll_sys_pfd2_270m_clk",
"pll_dram_533m_clk", "pll_sys_pfd0_392m_clk",
"pll_enet_125m_clk", "pll_usb_main_clk", "pll_audio_main_clk",
"pll_video_main_clk", };
@@ -342,7 +342,7 @@ static const char *clko1_sel[] = { "osc", "pll_sys_main_clk",
static const char *clko2_sel[] = { "osc", "pll_sys_main_240m_clk",
"pll_sys_pfd0_392m_clk", "pll_sys_pfd1_166m_clk", "pll_sys_pfd4_clk",
- "pll_audio_main_clk", "pll_video_main_clk", "osc_32k_clk", };
+ "pll_audio_main_clk", "pll_video_main_clk", "ckil", };
static const char *lvds1_sel[] = { "pll_arm_main_clk",
"pll_sys_main_clk", "pll_sys_pfd0_392m_clk", "pll_sys_pfd1_332m_clk",
@@ -382,6 +382,7 @@ static void __init imx7d_clocks_init(struct device_node *ccm_node)
clks[IMX7D_CLK_DUMMY] = imx_clk_fixed("dummy", 0);
clks[IMX7D_OSC_24M_CLK] = of_clk_get_by_name(ccm_node, "osc");
+ clks[IMX7D_CKIL] = of_clk_get_by_name(ccm_node, "ckil");
np = of_find_compatible_node(NULL, NULL, "fsl,imx7d-anatop");
base = of_iomap(np, 0);
diff --git a/drivers/clk/imx/clk-pllv3.c b/drivers/clk/imx/clk-pllv3.c
index c05c43d56a94..4826b3c9e19e 100644
--- a/drivers/clk/imx/clk-pllv3.c
+++ b/drivers/clk/imx/clk-pllv3.c
@@ -44,6 +44,7 @@ struct clk_pllv3 {
u32 powerdown;
u32 div_mask;
u32 div_shift;
+ unsigned long ref_clock;
};
#define to_clk_pllv3(_hw) container_of(_hw, struct clk_pllv3, hw)
@@ -286,7 +287,9 @@ static const struct clk_ops clk_pllv3_av_ops = {
static unsigned long clk_pllv3_enet_recalc_rate(struct clk_hw *hw,
unsigned long parent_rate)
{
- return 500000000;
+ struct clk_pllv3 *pll = to_clk_pllv3(hw);
+
+ return pll->ref_clock;
}
static const struct clk_ops clk_pllv3_enet_ops = {
@@ -326,7 +329,11 @@ struct clk *imx_clk_pllv3(enum imx_pllv3_type type, const char *name,
break;
case IMX_PLLV3_ENET_IMX7:
pll->powerdown = IMX7_ENET_PLL_POWER;
+ pll->ref_clock = 1000000000;
+ ops = &clk_pllv3_enet_ops;
+ break;
case IMX_PLLV3_ENET:
+ pll->ref_clock = 500000000;
ops = &clk_pllv3_enet_ops;
break;
default:
diff --git a/drivers/clk/imx/clk-vf610.c b/drivers/clk/imx/clk-vf610.c
index 0a94d9661d91..3a1f24475ee4 100644
--- a/drivers/clk/imx/clk-vf610.c
+++ b/drivers/clk/imx/clk-vf610.c
@@ -10,6 +10,7 @@
#include <linux/of_address.h>
#include <linux/clk.h>
+#include <linux/syscore_ops.h>
#include <dt-bindings/clock/vf610-clock.h>
#include "clk.h"
@@ -40,6 +41,7 @@
#define CCM_CCGR9 (ccm_base + 0x64)
#define CCM_CCGR10 (ccm_base + 0x68)
#define CCM_CCGR11 (ccm_base + 0x6c)
+#define CCM_CCGRx(x) (CCM_CCGR0 + (x) * 4)
#define CCM_CMEOR0 (ccm_base + 0x70)
#define CCM_CMEOR1 (ccm_base + 0x74)
#define CCM_CMEOR2 (ccm_base + 0x78)
@@ -115,10 +117,19 @@ static struct clk_div_table pll4_audio_div_table[] = {
static struct clk *clk[VF610_CLK_END];
static struct clk_onecell_data clk_data;
+static u32 cscmr1;
+static u32 cscmr2;
+static u32 cscdr1;
+static u32 cscdr2;
+static u32 cscdr3;
+static u32 ccgr[12];
+
static unsigned int const clks_init_on[] __initconst = {
VF610_CLK_SYS_BUS,
VF610_CLK_DDR_SEL,
VF610_CLK_DAP,
+ VF610_CLK_DDRMC,
+ VF610_CLK_WKPU,
};
static struct clk * __init vf610_get_fixed_clock(
@@ -132,6 +143,43 @@ static struct clk * __init vf610_get_fixed_clock(
return clk;
};
+static int vf610_clk_suspend(void)
+{
+ int i;
+
+ cscmr1 = readl_relaxed(CCM_CSCMR1);
+ cscmr2 = readl_relaxed(CCM_CSCMR2);
+
+ cscdr1 = readl_relaxed(CCM_CSCDR1);
+ cscdr2 = readl_relaxed(CCM_CSCDR2);
+ cscdr3 = readl_relaxed(CCM_CSCDR3);
+
+ for (i = 0; i < 12; i++)
+ ccgr[i] = readl_relaxed(CCM_CCGRx(i));
+
+ return 0;
+}
+
+static void vf610_clk_resume(void)
+{
+ int i;
+
+ writel_relaxed(cscmr1, CCM_CSCMR1);
+ writel_relaxed(cscmr2, CCM_CSCMR2);
+
+ writel_relaxed(cscdr1, CCM_CSCDR1);
+ writel_relaxed(cscdr2, CCM_CSCDR2);
+ writel_relaxed(cscdr3, CCM_CSCDR3);
+
+ for (i = 0; i < 12; i++)
+ writel_relaxed(ccgr[i], CCM_CCGRx(i));
+}
+
+static struct syscore_ops vf610_clk_syscore_ops = {
+ .suspend = vf610_clk_suspend,
+ .resume = vf610_clk_resume,
+};
+
static void __init vf610_clocks_init(struct device_node *ccm_node)
{
struct device_node *np;
@@ -233,6 +281,9 @@ static void __init vf610_clocks_init(struct device_node *ccm_node)
clk[VF610_CLK_PLL4_MAIN_DIV] = clk_register_divider_table(NULL, "pll4_audio_div", "pll4_audio", 0, CCM_CACRR, 6, 3, 0, pll4_audio_div_table, &imx_ccm_lock);
clk[VF610_CLK_PLL6_MAIN_DIV] = imx_clk_divider("pll6_video_div", "pll6_video", CCM_CACRR, 21, 1);
+ clk[VF610_CLK_DDRMC] = imx_clk_gate2_cgr("ddrmc", "ddr_sel", CCM_CCGR6, CCM_CCGRx_CGn(14), 0x2);
+ clk[VF610_CLK_WKPU] = imx_clk_gate2_cgr("wkpu", "ipg_bus", CCM_CCGR4, CCM_CCGRx_CGn(10), 0x2);
+
clk[VF610_CLK_USBPHY0] = imx_clk_gate("usbphy0", "pll3_usb_otg", PLL3_CTRL, 6);
clk[VF610_CLK_USBPHY1] = imx_clk_gate("usbphy1", "pll7_usb_host", PLL7_CTRL, 6);
@@ -321,11 +372,14 @@ static void __init vf610_clocks_init(struct device_node *ccm_node)
clk[VF610_CLK_DCU0_SEL] = imx_clk_mux("dcu0_sel", CCM_CSCMR1, 28, 1, dcu_sels, 2);
clk[VF610_CLK_DCU0_EN] = imx_clk_gate("dcu0_en", "dcu0_sel", CCM_CSCDR3, 19);
clk[VF610_CLK_DCU0_DIV] = imx_clk_divider("dcu0_div", "dcu0_en", CCM_CSCDR3, 16, 3);
- clk[VF610_CLK_DCU0] = imx_clk_gate2("dcu0", "dcu0_div", CCM_CCGR3, CCM_CCGRx_CGn(8));
+ clk[VF610_CLK_DCU0] = imx_clk_gate2("dcu0", "ipg_bus", CCM_CCGR3, CCM_CCGRx_CGn(8));
clk[VF610_CLK_DCU1_SEL] = imx_clk_mux("dcu1_sel", CCM_CSCMR1, 29, 1, dcu_sels, 2);
clk[VF610_CLK_DCU1_EN] = imx_clk_gate("dcu1_en", "dcu1_sel", CCM_CSCDR3, 23);
clk[VF610_CLK_DCU1_DIV] = imx_clk_divider("dcu1_div", "dcu1_en", CCM_CSCDR3, 20, 3);
- clk[VF610_CLK_DCU1] = imx_clk_gate2("dcu1", "dcu1_div", CCM_CCGR9, CCM_CCGRx_CGn(8));
+ clk[VF610_CLK_DCU1] = imx_clk_gate2("dcu1", "ipg_bus", CCM_CCGR9, CCM_CCGRx_CGn(8));
+
+ clk[VF610_CLK_TCON0] = imx_clk_gate2("tcon0", "platform_bus", CCM_CCGR1, CCM_CCGRx_CGn(13));
+ clk[VF610_CLK_TCON1] = imx_clk_gate2("tcon1", "platform_bus", CCM_CCGR7, CCM_CCGRx_CGn(13));
clk[VF610_CLK_ESAI_SEL] = imx_clk_mux("esai_sel", CCM_CSCMR1, 20, 2, esai_sels, 4);
clk[VF610_CLK_ESAI_EN] = imx_clk_gate("esai_en", "esai_sel", CCM_CSCDR2, 30);
@@ -409,6 +463,8 @@ static void __init vf610_clocks_init(struct device_node *ccm_node)
for (i = 0; i < ARRAY_SIZE(clks_init_on); i++)
clk_prepare_enable(clk[clks_init_on[i]]);
+ register_syscore_ops(&vf610_clk_syscore_ops);
+
/* Add the clocks to provider list */
clk_data.clks = clk;
clk_data.clk_num = ARRAY_SIZE(clk);
diff --git a/drivers/clk/imx/clk.h b/drivers/clk/imx/clk.h
index d942f5748d08..508d0fad84cf 100644
--- a/drivers/clk/imx/clk.h
+++ b/drivers/clk/imx/clk.h
@@ -41,7 +41,7 @@ struct clk *imx_clk_pllv3(enum imx_pllv3_type type, const char *name,
struct clk *clk_register_gate2(struct device *dev, const char *name,
const char *parent_name, unsigned long flags,
- void __iomem *reg, u8 bit_idx,
+ void __iomem *reg, u8 bit_idx, u8 cgr_val,
u8 clk_gate_flags, spinlock_t *lock,
unsigned int *share_count);
@@ -55,7 +55,7 @@ static inline struct clk *imx_clk_gate2(const char *name, const char *parent,
void __iomem *reg, u8 shift)
{
return clk_register_gate2(NULL, name, parent, CLK_SET_RATE_PARENT, reg,
- shift, 0, &imx_ccm_lock, NULL);
+ shift, 0x3, 0, &imx_ccm_lock, NULL);
}
static inline struct clk *imx_clk_gate2_shared(const char *name,
@@ -63,7 +63,14 @@ static inline struct clk *imx_clk_gate2_shared(const char *name,
unsigned int *share_count)
{
return clk_register_gate2(NULL, name, parent, CLK_SET_RATE_PARENT, reg,
- shift, 0, &imx_ccm_lock, share_count);
+ shift, 0x3, 0, &imx_ccm_lock, share_count);
+}
+
+static inline struct clk *imx_clk_gate2_cgr(const char *name, const char *parent,
+ void __iomem *reg, u8 shift, u8 cgr_val)
+{
+ return clk_register_gate2(NULL, name, parent, CLK_SET_RATE_PARENT, reg,
+ shift, cgr_val, 0, &imx_ccm_lock, NULL);
}
struct clk *imx_clk_pfd(const char *name, const char *parent_name,
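
For illustration, a minimal userspace sketch of the clk_gate2 enable-path register math with the new cgr_val argument; the bit index and register values are hypothetical. imx_clk_gate2() and imx_clk_gate2_shared() keep the historical 0x3, while imx_clk_gate2_cgr() lets a caller program a different 2-bit value.

#include <stdint.h>
#include <stdio.h>

/* Mirrors the clk_gate2_enable() update: clear the 2-bit field, then
 * program cgr_val instead of the previously hard-coded 3. */
static uint32_t gate2_enable(uint32_t reg, unsigned int bit_idx, uint8_t cgr_val)
{
	reg &= ~(3u << bit_idx);
	reg |= (uint32_t)cgr_val << bit_idx;
	return reg;
}

int main(void)
{
	printf("0x%08x\n", gate2_enable(0x0, 28, 0x3)); /* legacy behaviour */
	printf("0x%08x\n", gate2_enable(0x0, 28, 0x2)); /* e.g. a 0x2-only gate */
	return 0;
}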
diff --git a/drivers/clk/ingenic/cgu.c b/drivers/clk/ingenic/cgu.c
index 7cfb7b2a2ed6..e8248f9185f7 100644
--- a/drivers/clk/ingenic/cgu.c
+++ b/drivers/clk/ingenic/cgu.c
@@ -325,6 +325,7 @@ ingenic_clk_recalc_rate(struct clk_hw *hw, unsigned long parent_rate)
div = (div_reg >> clk_info->div.shift) &
GENMASK(clk_info->div.bits - 1, 0);
div += 1;
+ div *= clk_info->div.div;
rate /= div;
}
@@ -345,6 +346,14 @@ ingenic_clk_calc_div(const struct ingenic_cgu_clk_info *clk_info,
div = min_t(unsigned, div, 1 << clk_info->div.bits);
div = max_t(unsigned, div, 1);
+ /*
+ * If the divider value itself must be divided before being written to
+ * the divider register, we must ensure we don't have any bits set that
+ * would be lost as a result of doing so.
+ */
+ div /= clk_info->div.div;
+ div *= clk_info->div.div;
+
return div;
}
@@ -395,7 +404,7 @@ ingenic_clk_set_rate(struct clk_hw *hw, unsigned long req_rate,
/* update the divide */
mask = GENMASK(clk_info->div.bits - 1, 0);
reg &= ~(mask << clk_info->div.shift);
- reg |= (div - 1) << clk_info->div.shift;
+ reg |= ((div / clk_info->div.div) - 1) << clk_info->div.shift;
/* clear the stop bit */
if (clk_info->div.stop_bit != -1)
diff --git a/drivers/clk/ingenic/cgu.h b/drivers/clk/ingenic/cgu.h
index 99347e2b97e8..09700b2c555d 100644
--- a/drivers/clk/ingenic/cgu.h
+++ b/drivers/clk/ingenic/cgu.h
@@ -76,8 +76,11 @@ struct ingenic_cgu_mux_info {
/**
* struct ingenic_cgu_div_info - information about a divider
* @reg: offset of the divider control register within the CGU
- * @shift: number of bits to shift the divide value by (ie. the index of
+ * @shift: number of bits to left shift the divide value by (ie. the index of
* the lowest bit of the divide value within its control register)
+ * @div: constant by which the divider value read from the register is
+ *	 multiplied to give the effective divider (conversely, the computed
+ *	 divider is divided by this constant before being written)
* @bits: the size of the divide value in bits
* @ce_bit: the index of the change enable bit within reg, or -1 if there
* isn't one
@@ -87,6 +90,7 @@ struct ingenic_cgu_mux_info {
struct ingenic_cgu_div_info {
unsigned reg;
u8 shift;
+ u8 div;
u8 bits;
s8 ce_bit;
s8 busy_bit;
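
A small plain-C sketch, with hypothetical numbers, of how the new per-clock constant enters the rate math: recalc multiplies the register-derived divider by it, and set_rate divides the requested divider by it before writing, which is why the jz4780 MSC dividers below use 2 while everything else uses 1.

#include <stdio.h>

/* Effective divider = (register value + 1) * div constant, as in
 * ingenic_clk_recalc_rate(). */
static unsigned long cgu_rate(unsigned long parent, unsigned int reg_val,
			      unsigned int div_const)
{
	return parent / ((reg_val + 1) * div_const);
}

int main(void)
{
	/* Hypothetical: MSC-style divider, parent 1200 MHz, register value 11. */
	printf("%lu\n", cgu_rate(1200000000UL, 11, 2)); /* 50000000 */
	/* Same register value with the default constant of 1. */
	printf("%lu\n", cgu_rate(1200000000UL, 11, 1)); /* 100000000 */
	return 0;
}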
diff --git a/drivers/clk/ingenic/jz4740-cgu.c b/drivers/clk/ingenic/jz4740-cgu.c
index 305a26c2a800..510fe7e0c8f1 100644
--- a/drivers/clk/ingenic/jz4740-cgu.c
+++ b/drivers/clk/ingenic/jz4740-cgu.c
@@ -90,51 +90,51 @@ static const struct ingenic_cgu_clk_info jz4740_cgu_clocks[] = {
[JZ4740_CLK_PLL_HALF] = {
"pll half", CGU_CLK_DIV,
.parents = { JZ4740_CLK_PLL, -1, -1, -1 },
- .div = { CGU_REG_CPCCR, 21, 1, -1, -1, -1 },
+ .div = { CGU_REG_CPCCR, 21, 1, 1, -1, -1, -1 },
},
[JZ4740_CLK_CCLK] = {
"cclk", CGU_CLK_DIV,
.parents = { JZ4740_CLK_PLL, -1, -1, -1 },
- .div = { CGU_REG_CPCCR, 0, 4, 22, -1, -1 },
+ .div = { CGU_REG_CPCCR, 0, 1, 4, 22, -1, -1 },
},
[JZ4740_CLK_HCLK] = {
"hclk", CGU_CLK_DIV,
.parents = { JZ4740_CLK_PLL, -1, -1, -1 },
- .div = { CGU_REG_CPCCR, 4, 4, 22, -1, -1 },
+ .div = { CGU_REG_CPCCR, 4, 1, 4, 22, -1, -1 },
},
[JZ4740_CLK_PCLK] = {
"pclk", CGU_CLK_DIV,
.parents = { JZ4740_CLK_PLL, -1, -1, -1 },
- .div = { CGU_REG_CPCCR, 8, 4, 22, -1, -1 },
+ .div = { CGU_REG_CPCCR, 8, 1, 4, 22, -1, -1 },
},
[JZ4740_CLK_MCLK] = {
"mclk", CGU_CLK_DIV,
.parents = { JZ4740_CLK_PLL, -1, -1, -1 },
- .div = { CGU_REG_CPCCR, 12, 4, 22, -1, -1 },
+ .div = { CGU_REG_CPCCR, 12, 1, 4, 22, -1, -1 },
},
[JZ4740_CLK_LCD] = {
"lcd", CGU_CLK_DIV | CGU_CLK_GATE,
.parents = { JZ4740_CLK_PLL_HALF, -1, -1, -1 },
- .div = { CGU_REG_CPCCR, 16, 5, 22, -1, -1 },
+ .div = { CGU_REG_CPCCR, 16, 1, 5, 22, -1, -1 },
.gate = { CGU_REG_CLKGR, 10 },
},
[JZ4740_CLK_LCD_PCLK] = {
"lcd_pclk", CGU_CLK_DIV,
.parents = { JZ4740_CLK_PLL_HALF, -1, -1, -1 },
- .div = { CGU_REG_LPCDR, 0, 11, -1, -1, -1 },
+ .div = { CGU_REG_LPCDR, 0, 1, 11, -1, -1, -1 },
},
[JZ4740_CLK_I2S] = {
"i2s", CGU_CLK_MUX | CGU_CLK_DIV | CGU_CLK_GATE,
.parents = { JZ4740_CLK_EXT, JZ4740_CLK_PLL_HALF, -1, -1 },
.mux = { CGU_REG_CPCCR, 31, 1 },
- .div = { CGU_REG_I2SCDR, 0, 8, -1, -1, -1 },
+ .div = { CGU_REG_I2SCDR, 0, 1, 8, -1, -1, -1 },
.gate = { CGU_REG_CLKGR, 6 },
},
@@ -142,21 +142,21 @@ static const struct ingenic_cgu_clk_info jz4740_cgu_clocks[] = {
"spi", CGU_CLK_MUX | CGU_CLK_DIV | CGU_CLK_GATE,
.parents = { JZ4740_CLK_EXT, JZ4740_CLK_PLL, -1, -1 },
.mux = { CGU_REG_SSICDR, 31, 1 },
- .div = { CGU_REG_SSICDR, 0, 4, -1, -1, -1 },
+ .div = { CGU_REG_SSICDR, 0, 1, 4, -1, -1, -1 },
.gate = { CGU_REG_CLKGR, 4 },
},
[JZ4740_CLK_MMC] = {
"mmc", CGU_CLK_DIV | CGU_CLK_GATE,
.parents = { JZ4740_CLK_PLL_HALF, -1, -1, -1 },
- .div = { CGU_REG_MSCCDR, 0, 5, -1, -1, -1 },
+ .div = { CGU_REG_MSCCDR, 0, 1, 5, -1, -1, -1 },
.gate = { CGU_REG_CLKGR, 7 },
},
[JZ4740_CLK_UHC] = {
"uhc", CGU_CLK_DIV | CGU_CLK_GATE,
.parents = { JZ4740_CLK_PLL_HALF, -1, -1, -1 },
- .div = { CGU_REG_UHCCDR, 0, 4, -1, -1, -1 },
+ .div = { CGU_REG_UHCCDR, 0, 1, 4, -1, -1, -1 },
.gate = { CGU_REG_CLKGR, 14 },
},
@@ -164,7 +164,7 @@ static const struct ingenic_cgu_clk_info jz4740_cgu_clocks[] = {
"udc", CGU_CLK_MUX | CGU_CLK_DIV,
.parents = { JZ4740_CLK_EXT, JZ4740_CLK_PLL_HALF, -1, -1 },
.mux = { CGU_REG_CPCCR, 29, 1 },
- .div = { CGU_REG_CPCCR, 23, 6, -1, -1, -1 },
+ .div = { CGU_REG_CPCCR, 23, 1, 6, -1, -1, -1 },
.gate = { CGU_REG_SCR, 6 },
},
diff --git a/drivers/clk/ingenic/jz4780-cgu.c b/drivers/clk/ingenic/jz4780-cgu.c
index 431f962300b6..b35d6d9dd5aa 100644
--- a/drivers/clk/ingenic/jz4780-cgu.c
+++ b/drivers/clk/ingenic/jz4780-cgu.c
@@ -296,13 +296,13 @@ static const struct ingenic_cgu_clk_info jz4780_cgu_clocks[] = {
[JZ4780_CLK_CPU] = {
"cpu", CGU_CLK_DIV,
.parents = { JZ4780_CLK_CPUMUX, -1, -1, -1 },
- .div = { CGU_REG_CLOCKCONTROL, 0, 4, 22, -1, -1 },
+ .div = { CGU_REG_CLOCKCONTROL, 0, 1, 4, 22, -1, -1 },
},
[JZ4780_CLK_L2CACHE] = {
"l2cache", CGU_CLK_DIV,
.parents = { JZ4780_CLK_CPUMUX, -1, -1, -1 },
- .div = { CGU_REG_CLOCKCONTROL, 4, 4, -1, -1, -1 },
+ .div = { CGU_REG_CLOCKCONTROL, 4, 1, 4, -1, -1, -1 },
},
[JZ4780_CLK_AHB0] = {
@@ -310,7 +310,7 @@ static const struct ingenic_cgu_clk_info jz4780_cgu_clocks[] = {
.parents = { -1, JZ4780_CLK_SCLKA, JZ4780_CLK_MPLL,
JZ4780_CLK_EPLL },
.mux = { CGU_REG_CLOCKCONTROL, 26, 2 },
- .div = { CGU_REG_CLOCKCONTROL, 8, 4, 21, -1, -1 },
+ .div = { CGU_REG_CLOCKCONTROL, 8, 1, 4, 21, -1, -1 },
},
[JZ4780_CLK_AHB2PMUX] = {
@@ -323,20 +323,20 @@ static const struct ingenic_cgu_clk_info jz4780_cgu_clocks[] = {
[JZ4780_CLK_AHB2] = {
"ahb2", CGU_CLK_DIV,
.parents = { JZ4780_CLK_AHB2PMUX, -1, -1, -1 },
- .div = { CGU_REG_CLOCKCONTROL, 12, 4, 20, -1, -1 },
+ .div = { CGU_REG_CLOCKCONTROL, 12, 1, 4, 20, -1, -1 },
},
[JZ4780_CLK_PCLK] = {
"pclk", CGU_CLK_DIV,
.parents = { JZ4780_CLK_AHB2PMUX, -1, -1, -1 },
- .div = { CGU_REG_CLOCKCONTROL, 16, 4, 20, -1, -1 },
+ .div = { CGU_REG_CLOCKCONTROL, 16, 1, 4, 20, -1, -1 },
},
[JZ4780_CLK_DDR] = {
"ddr", CGU_CLK_MUX | CGU_CLK_DIV,
.parents = { -1, JZ4780_CLK_SCLKA, JZ4780_CLK_MPLL, -1 },
.mux = { CGU_REG_DDRCDR, 30, 2 },
- .div = { CGU_REG_DDRCDR, 0, 4, 29, 28, 27 },
+ .div = { CGU_REG_DDRCDR, 0, 1, 4, 29, 28, 27 },
},
[JZ4780_CLK_VPU] = {
@@ -344,7 +344,7 @@ static const struct ingenic_cgu_clk_info jz4780_cgu_clocks[] = {
.parents = { JZ4780_CLK_SCLKA, JZ4780_CLK_MPLL,
JZ4780_CLK_EPLL, -1 },
.mux = { CGU_REG_VPUCDR, 30, 2 },
- .div = { CGU_REG_VPUCDR, 0, 4, 29, 28, 27 },
+ .div = { CGU_REG_VPUCDR, 0, 1, 4, 29, 28, 27 },
.gate = { CGU_REG_CLKGR1, 2 },
},
@@ -352,7 +352,7 @@ static const struct ingenic_cgu_clk_info jz4780_cgu_clocks[] = {
"i2s_pll", CGU_CLK_MUX | CGU_CLK_DIV,
.parents = { JZ4780_CLK_SCLKA, JZ4780_CLK_EPLL, -1, -1 },
.mux = { CGU_REG_I2SCDR, 30, 1 },
- .div = { CGU_REG_I2SCDR, 0, 8, 29, 28, 27 },
+ .div = { CGU_REG_I2SCDR, 0, 1, 8, 29, 28, 27 },
},
[JZ4780_CLK_I2S] = {
@@ -366,7 +366,7 @@ static const struct ingenic_cgu_clk_info jz4780_cgu_clocks[] = {
.parents = { JZ4780_CLK_SCLKA, JZ4780_CLK_MPLL,
JZ4780_CLK_VPLL, -1 },
.mux = { CGU_REG_LP0CDR, 30, 2 },
- .div = { CGU_REG_LP0CDR, 0, 8, 28, 27, 26 },
+ .div = { CGU_REG_LP0CDR, 0, 1, 8, 28, 27, 26 },
},
[JZ4780_CLK_LCD1PIXCLK] = {
@@ -374,7 +374,7 @@ static const struct ingenic_cgu_clk_info jz4780_cgu_clocks[] = {
.parents = { JZ4780_CLK_SCLKA, JZ4780_CLK_MPLL,
JZ4780_CLK_VPLL, -1 },
.mux = { CGU_REG_LP1CDR, 30, 2 },
- .div = { CGU_REG_LP1CDR, 0, 8, 28, 27, 26 },
+ .div = { CGU_REG_LP1CDR, 0, 1, 8, 28, 27, 26 },
},
[JZ4780_CLK_MSCMUX] = {
@@ -386,21 +386,21 @@ static const struct ingenic_cgu_clk_info jz4780_cgu_clocks[] = {
[JZ4780_CLK_MSC0] = {
"msc0", CGU_CLK_DIV | CGU_CLK_GATE,
.parents = { JZ4780_CLK_MSCMUX, -1, -1, -1 },
- .div = { CGU_REG_MSC0CDR, 0, 8, 29, 28, 27 },
+ .div = { CGU_REG_MSC0CDR, 0, 2, 8, 29, 28, 27 },
.gate = { CGU_REG_CLKGR0, 3 },
},
[JZ4780_CLK_MSC1] = {
"msc1", CGU_CLK_DIV | CGU_CLK_GATE,
.parents = { JZ4780_CLK_MSCMUX, -1, -1, -1 },
- .div = { CGU_REG_MSC1CDR, 0, 8, 29, 28, 27 },
+ .div = { CGU_REG_MSC1CDR, 0, 2, 8, 29, 28, 27 },
.gate = { CGU_REG_CLKGR0, 11 },
},
[JZ4780_CLK_MSC2] = {
"msc2", CGU_CLK_DIV | CGU_CLK_GATE,
.parents = { JZ4780_CLK_MSCMUX, -1, -1, -1 },
- .div = { CGU_REG_MSC2CDR, 0, 8, 29, 28, 27 },
+ .div = { CGU_REG_MSC2CDR, 0, 2, 8, 29, 28, 27 },
.gate = { CGU_REG_CLKGR0, 12 },
},
@@ -409,7 +409,7 @@ static const struct ingenic_cgu_clk_info jz4780_cgu_clocks[] = {
.parents = { JZ4780_CLK_SCLKA, JZ4780_CLK_MPLL,
JZ4780_CLK_EPLL, JZ4780_CLK_OTGPHY },
.mux = { CGU_REG_UHCCDR, 30, 2 },
- .div = { CGU_REG_UHCCDR, 0, 8, 29, 28, 27 },
+ .div = { CGU_REG_UHCCDR, 0, 1, 8, 29, 28, 27 },
.gate = { CGU_REG_CLKGR0, 24 },
},
@@ -417,7 +417,7 @@ static const struct ingenic_cgu_clk_info jz4780_cgu_clocks[] = {
"ssi_pll", CGU_CLK_MUX | CGU_CLK_DIV,
.parents = { JZ4780_CLK_SCLKA, JZ4780_CLK_MPLL, -1, -1 },
.mux = { CGU_REG_SSICDR, 30, 1 },
- .div = { CGU_REG_SSICDR, 0, 8, 29, 28, 27 },
+ .div = { CGU_REG_SSICDR, 0, 1, 8, 29, 28, 27 },
},
[JZ4780_CLK_SSI] = {
@@ -430,7 +430,7 @@ static const struct ingenic_cgu_clk_info jz4780_cgu_clocks[] = {
"cim_mclk", CGU_CLK_MUX | CGU_CLK_DIV,
.parents = { JZ4780_CLK_SCLKA, JZ4780_CLK_MPLL, -1, -1 },
.mux = { CGU_REG_CIMCDR, 31, 1 },
- .div = { CGU_REG_CIMCDR, 0, 8, 30, 29, 28 },
+ .div = { CGU_REG_CIMCDR, 0, 1, 8, 30, 29, 28 },
},
[JZ4780_CLK_PCMPLL] = {
@@ -438,7 +438,7 @@ static const struct ingenic_cgu_clk_info jz4780_cgu_clocks[] = {
.parents = { JZ4780_CLK_SCLKA, JZ4780_CLK_MPLL,
JZ4780_CLK_EPLL, JZ4780_CLK_VPLL },
.mux = { CGU_REG_PCMCDR, 29, 2 },
- .div = { CGU_REG_PCMCDR, 0, 8, 28, 27, 26 },
+ .div = { CGU_REG_PCMCDR, 0, 1, 8, 28, 27, 26 },
},
[JZ4780_CLK_PCM] = {
@@ -453,7 +453,7 @@ static const struct ingenic_cgu_clk_info jz4780_cgu_clocks[] = {
.parents = { -1, JZ4780_CLK_SCLKA, JZ4780_CLK_MPLL,
JZ4780_CLK_EPLL },
.mux = { CGU_REG_GPUCDR, 30, 2 },
- .div = { CGU_REG_GPUCDR, 0, 4, 29, 28, 27 },
+ .div = { CGU_REG_GPUCDR, 0, 1, 4, 29, 28, 27 },
.gate = { CGU_REG_CLKGR1, 4 },
},
@@ -462,7 +462,7 @@ static const struct ingenic_cgu_clk_info jz4780_cgu_clocks[] = {
.parents = { JZ4780_CLK_SCLKA, JZ4780_CLK_MPLL,
JZ4780_CLK_VPLL, -1 },
.mux = { CGU_REG_HDMICDR, 30, 2 },
- .div = { CGU_REG_HDMICDR, 0, 8, 29, 28, 26 },
+ .div = { CGU_REG_HDMICDR, 0, 1, 8, 29, 28, 26 },
.gate = { CGU_REG_CLKGR1, 9 },
},
@@ -471,7 +471,7 @@ static const struct ingenic_cgu_clk_info jz4780_cgu_clocks[] = {
.parents = { -1, JZ4780_CLK_SCLKA, JZ4780_CLK_MPLL,
JZ4780_CLK_EPLL },
.mux = { CGU_REG_BCHCDR, 30, 2 },
- .div = { CGU_REG_BCHCDR, 0, 4, 29, 28, 27 },
+ .div = { CGU_REG_BCHCDR, 0, 1, 4, 29, 28, 27 },
.gate = { CGU_REG_CLKGR0, 1 },
},
diff --git a/drivers/clk/meson/meson8b-clkc.c b/drivers/clk/meson/meson8b-clkc.c
index 61f6d55c4ac7..4d057b3e21b2 100644
--- a/drivers/clk/meson/meson8b-clkc.c
+++ b/drivers/clk/meson/meson8b-clkc.c
@@ -141,11 +141,11 @@ static const struct composite_conf mali_conf __initconst = {
};
static const struct clk_conf meson8b_xtal_conf __initconst =
- FIXED_RATE_P(MESON8B_REG_CTL0_ADDR, CLKID_XTAL, "xtal",
- CLK_IS_ROOT, PARM(0x00, 4, 7));
+ FIXED_RATE_P(MESON8B_REG_CTL0_ADDR, CLKID_XTAL, "xtal", 0,
+ PARM(0x00, 4, 7));
static const struct clk_conf meson8b_clk_confs[] __initconst = {
- FIXED_RATE(CLKID_ZERO, "zero", CLK_IS_ROOT, 0),
+ FIXED_RATE(CLKID_ZERO, "zero", 0, 0),
PLL(MESON8B_REG_PLL_FIXED, CLKID_PLL_FIXED, "fixed_pll",
p_xtal, 0, &pll_confs),
PLL(MESON8B_REG_PLL_VID, CLKID_PLL_VID, "vid_pll",
diff --git a/drivers/clk/microchip/Makefile b/drivers/clk/microchip/Makefile
new file mode 100644
index 000000000000..2152f418106a
--- /dev/null
+++ b/drivers/clk/microchip/Makefile
@@ -0,0 +1,2 @@
+obj-$(CONFIG_COMMON_CLK_PIC32) += clk-core.o
+obj-$(CONFIG_PIC32MZDA) += clk-pic32mzda.o
diff --git a/drivers/clk/microchip/clk-core.c b/drivers/clk/microchip/clk-core.c
new file mode 100644
index 000000000000..ca85cea17839
--- /dev/null
+++ b/drivers/clk/microchip/clk-core.c
@@ -0,0 +1,1031 @@
+/*
+ * Purna Chandra Mandal,<purna.mandal@microchip.com>
+ * Copyright (C) 2015 Microchip Technology Inc. All rights reserved.
+ *
+ * This program is free software; you can distribute it and/or modify it
+ * under the terms of the GNU General Public License (Version 2) as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ */
+#include <linux/clk-provider.h>
+#include <linux/delay.h>
+#include <linux/device.h>
+#include <linux/interrupt.h>
+#include <linux/iopoll.h>
+#include <asm/mach-pic32/pic32.h>
+#include <asm/traps.h>
+
+#include "clk-core.h"
+
+/* OSCCON Reg fields */
+#define OSC_CUR_MASK 0x07
+#define OSC_CUR_SHIFT 12
+#define OSC_NEW_MASK 0x07
+#define OSC_NEW_SHIFT 8
+#define OSC_SWEN BIT(0)
+
+/* SPLLCON Reg fields */
+#define PLL_RANGE_MASK 0x07
+#define PLL_RANGE_SHIFT 0
+#define PLL_ICLK_MASK 0x01
+#define PLL_ICLK_SHIFT 7
+#define PLL_IDIV_MASK 0x07
+#define PLL_IDIV_SHIFT 8
+#define PLL_ODIV_MASK 0x07
+#define PLL_ODIV_SHIFT 24
+#define PLL_MULT_MASK 0x7F
+#define PLL_MULT_SHIFT 16
+#define PLL_MULT_MAX 128
+#define PLL_ODIV_MIN 1
+#define PLL_ODIV_MAX 5
+
+/* Peripheral Bus Clock Reg Fields */
+#define PB_DIV_MASK 0x7f
+#define PB_DIV_SHIFT 0
+#define PB_DIV_READY BIT(11)
+#define PB_DIV_ENABLE BIT(15)
+#define PB_DIV_MAX 128
+#define PB_DIV_MIN 0
+
+/* Reference Oscillator Control Reg fields */
+#define REFO_SEL_MASK 0x0f
+#define REFO_SEL_SHIFT 0
+#define REFO_ACTIVE BIT(8)
+#define REFO_DIVSW_EN BIT(9)
+#define REFO_OE BIT(12)
+#define REFO_ON BIT(15)
+#define REFO_DIV_SHIFT 16
+#define REFO_DIV_MASK 0x7fff
+
+/* Reference Oscillator Trim Register Fields */
+#define REFO_TRIM_REG 0x10
+#define REFO_TRIM_MASK 0x1ff
+#define REFO_TRIM_SHIFT 23
+#define REFO_TRIM_MAX 511
+
+/* Mux Slew Control Register fields */
+#define SLEW_BUSY BIT(0)
+#define SLEW_DOWNEN BIT(1)
+#define SLEW_UPEN BIT(2)
+#define SLEW_DIV 0x07
+#define SLEW_DIV_SHIFT 8
+#define SLEW_SYSDIV 0x0f
+#define SLEW_SYSDIV_SHIFT 20
+
+/* Clock Poll Timeout */
+#define LOCK_TIMEOUT_US USEC_PER_MSEC
+
+/* SoC specific clock needed during SPLL clock rate switch */
+static struct clk_hw *pic32_sclk_hw;
+
+/* add instruction pipeline delay while CPU clock is in-transition. */
+#define cpu_nop5() \
+do { \
+ __asm__ __volatile__("nop"); \
+ __asm__ __volatile__("nop"); \
+ __asm__ __volatile__("nop"); \
+ __asm__ __volatile__("nop"); \
+ __asm__ __volatile__("nop"); \
+} while (0)
+
+/* Peripheral bus clocks */
+struct pic32_periph_clk {
+ struct clk_hw hw;
+ void __iomem *ctrl_reg;
+ struct pic32_clk_common *core;
+};
+
+#define clkhw_to_pbclk(_hw) container_of(_hw, struct pic32_periph_clk, hw)
+
+static int pbclk_is_enabled(struct clk_hw *hw)
+{
+ struct pic32_periph_clk *pb = clkhw_to_pbclk(hw);
+
+ return readl(pb->ctrl_reg) & PB_DIV_ENABLE;
+}
+
+static int pbclk_enable(struct clk_hw *hw)
+{
+ struct pic32_periph_clk *pb = clkhw_to_pbclk(hw);
+
+ writel(PB_DIV_ENABLE, PIC32_SET(pb->ctrl_reg));
+ return 0;
+}
+
+static void pbclk_disable(struct clk_hw *hw)
+{
+ struct pic32_periph_clk *pb = clkhw_to_pbclk(hw);
+
+ writel(PB_DIV_ENABLE, PIC32_CLR(pb->ctrl_reg));
+}
+
+static unsigned long calc_best_divided_rate(unsigned long rate,
+ unsigned long parent_rate,
+ u32 divider_max,
+ u32 divider_min)
+{
+ unsigned long divided_rate, divided_rate_down, best_rate;
+ unsigned long div, div_up;
+
+ /* eq. clk_rate = parent_rate / divider.
+ *
+	 * Find the divider that produces the rate closest to the target rate.
+ */
+ div = parent_rate / rate;
+ div = clamp_val(div, divider_min, divider_max);
+ div_up = clamp_val(div + 1, divider_min, divider_max);
+
+ divided_rate = parent_rate / div;
+ divided_rate_down = parent_rate / div_up;
+ if (abs(rate - divided_rate_down) < abs(rate - divided_rate))
+ best_rate = divided_rate_down;
+ else
+ best_rate = divided_rate;
+
+ return best_rate;
+}
+
+static inline u32 pbclk_read_pbdiv(struct pic32_periph_clk *pb)
+{
+ return ((readl(pb->ctrl_reg) >> PB_DIV_SHIFT) & PB_DIV_MASK) + 1;
+}
+
+static unsigned long pbclk_recalc_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+{
+ struct pic32_periph_clk *pb = clkhw_to_pbclk(hw);
+
+ return parent_rate / pbclk_read_pbdiv(pb);
+}
+
+static long pbclk_round_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long *parent_rate)
+{
+ return calc_best_divided_rate(rate, *parent_rate,
+ PB_DIV_MAX, PB_DIV_MIN);
+}
+
+static int pbclk_set_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long parent_rate)
+{
+ struct pic32_periph_clk *pb = clkhw_to_pbclk(hw);
+ unsigned long flags;
+ u32 v, div;
+ int err;
+
+ /* check & wait for DIV_READY */
+ err = readl_poll_timeout(pb->ctrl_reg, v, v & PB_DIV_READY,
+ 1, LOCK_TIMEOUT_US);
+ if (err)
+ return err;
+
+ /* calculate clkdiv and best rate */
+ div = DIV_ROUND_CLOSEST(parent_rate, rate);
+
+ spin_lock_irqsave(&pb->core->reg_lock, flags);
+
+ /* apply new div */
+ v = readl(pb->ctrl_reg);
+ v &= ~PB_DIV_MASK;
+ v |= (div - 1);
+
+ pic32_syskey_unlock();
+
+ writel(v, pb->ctrl_reg);
+
+ spin_unlock_irqrestore(&pb->core->reg_lock, flags);
+
+ /* wait again, for pbdivready */
+ err = readl_poll_timeout_atomic(pb->ctrl_reg, v, v & PB_DIV_READY,
+ 1, LOCK_TIMEOUT_US);
+ if (err)
+ return err;
+
+ /* confirm that new div is applied correctly */
+ return (pbclk_read_pbdiv(pb) == div) ? 0 : -EBUSY;
+}
+
+const struct clk_ops pic32_pbclk_ops = {
+ .enable = pbclk_enable,
+ .disable = pbclk_disable,
+ .is_enabled = pbclk_is_enabled,
+ .recalc_rate = pbclk_recalc_rate,
+ .round_rate = pbclk_round_rate,
+ .set_rate = pbclk_set_rate,
+};
+
+struct clk *pic32_periph_clk_register(const struct pic32_periph_clk_data *desc,
+ struct pic32_clk_common *core)
+{
+ struct pic32_periph_clk *pbclk;
+ struct clk *clk;
+
+ pbclk = devm_kzalloc(core->dev, sizeof(*pbclk), GFP_KERNEL);
+ if (!pbclk)
+ return ERR_PTR(-ENOMEM);
+
+ pbclk->hw.init = &desc->init_data;
+ pbclk->core = core;
+ pbclk->ctrl_reg = desc->ctrl_reg + core->iobase;
+
+ clk = devm_clk_register(core->dev, &pbclk->hw);
+ if (IS_ERR(clk)) {
+ dev_err(core->dev, "%s: clk_register() failed\n", __func__);
+ devm_kfree(core->dev, pbclk);
+ }
+
+ return clk;
+}
+
+/* Reference oscillator operations */
+struct pic32_ref_osc {
+ struct clk_hw hw;
+ void __iomem *ctrl_reg;
+ const u32 *parent_map;
+ struct pic32_clk_common *core;
+};
+
+#define clkhw_to_refosc(_hw) container_of(_hw, struct pic32_ref_osc, hw)
+
+static int roclk_is_enabled(struct clk_hw *hw)
+{
+ struct pic32_ref_osc *refo = clkhw_to_refosc(hw);
+
+ return readl(refo->ctrl_reg) & REFO_ON;
+}
+
+static int roclk_enable(struct clk_hw *hw)
+{
+ struct pic32_ref_osc *refo = clkhw_to_refosc(hw);
+
+ writel(REFO_ON | REFO_OE, PIC32_SET(refo->ctrl_reg));
+ return 0;
+}
+
+static void roclk_disable(struct clk_hw *hw)
+{
+ struct pic32_ref_osc *refo = clkhw_to_refosc(hw);
+
+ writel(REFO_ON | REFO_OE, PIC32_CLR(refo->ctrl_reg));
+}
+
+static void roclk_init(struct clk_hw *hw)
+{
+ /* initialize clock in disabled state */
+ roclk_disable(hw);
+}
+
+static u8 roclk_get_parent(struct clk_hw *hw)
+{
+ struct pic32_ref_osc *refo = clkhw_to_refosc(hw);
+ u32 v, i;
+
+ v = (readl(refo->ctrl_reg) >> REFO_SEL_SHIFT) & REFO_SEL_MASK;
+
+ if (!refo->parent_map)
+ return v;
+
+ for (i = 0; i < clk_hw_get_num_parents(hw); i++)
+ if (refo->parent_map[i] == v)
+ return i;
+
+ return -EINVAL;
+}
+
+static unsigned long roclk_calc_rate(unsigned long parent_rate,
+ u32 rodiv, u32 rotrim)
+{
+ u64 rate64;
+
+ /* fout = fin / [2 * {div + (trim / 512)}]
+ * = fin * 512 / [1024 * div + 2 * trim]
+ * = fin * 256 / (512 * div + trim)
+ * = (fin << 8) / ((div << 9) + trim)
+ */
+ if (rotrim) {
+ rodiv = (rodiv << 9) + rotrim;
+ rate64 = parent_rate;
+ rate64 <<= 8;
+ do_div(rate64, rodiv);
+ } else if (rodiv) {
+ rate64 = parent_rate / (rodiv << 1);
+ } else {
+ rate64 = parent_rate;
+ }
+ return rate64;
+}
+
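
To make the formula above concrete, a small userspace sketch of the same arithmetic with assumed values (24 MHz input, rodiv of 3, with and without trim).

#include <stdint.h>
#include <stdio.h>

/* Same arithmetic as roclk_calc_rate(): fout = (fin << 8) / ((div << 9) + trim). */
static uint64_t refo_rate(uint64_t fin, uint32_t rodiv, uint32_t rotrim)
{
	if (rotrim)
		return (fin << 8) / (((uint64_t)rodiv << 9) + rotrim);
	if (rodiv)
		return fin / (rodiv << 1);
	return fin;
}

int main(void)
{
	printf("%llu\n", (unsigned long long)refo_rate(24000000, 3, 0));   /* 4000000 */
	printf("%llu\n", (unsigned long long)refo_rate(24000000, 3, 256)); /* 3428571 */
	return 0;
}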
+static void roclk_calc_div_trim(unsigned long rate,
+ unsigned long parent_rate,
+ u32 *rodiv_p, u32 *rotrim_p)
+{
+ u32 div, rotrim, rodiv;
+ u64 frac;
+
+ /* Find integer approximation of floating-point arithmetic.
+ * fout = fin / [2 * {rodiv + (rotrim / 512)}] ... (1)
+	 * i.e. fout = fin / (2 * DIV)
+	 * where DIV = rodiv + (rotrim / 512)
+	 *
+	 * Since the kernel does not use floating-point arithmetic,
+	 * (rotrim / 512) truncates to zero and DIV equals rodiv.
+ *
+ * ie. fout = (fin * 256) / [(512 * rodiv) + rotrim] ... from (1)
+ * ie. rotrim = ((fin * 256) / fout) - (512 * DIV)
+ */
+ if (parent_rate <= rate) {
+ div = 0;
+ frac = 0;
+ rodiv = 0;
+ rotrim = 0;
+ } else {
+ div = parent_rate / (rate << 1);
+ frac = parent_rate;
+ frac <<= 8;
+ do_div(frac, rate);
+ frac -= (u64)(div << 9);
+
+ rodiv = (div > REFO_DIV_MASK) ? REFO_DIV_MASK : div;
+ rotrim = (frac >= REFO_TRIM_MAX) ? REFO_TRIM_MAX : frac;
+ }
+
+ if (rodiv_p)
+ *rodiv_p = rodiv;
+
+ if (rotrim_p)
+ *rotrim_p = rotrim;
+}
+
+static unsigned long roclk_recalc_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+{
+ struct pic32_ref_osc *refo = clkhw_to_refosc(hw);
+ u32 v, rodiv, rotrim;
+
+ /* get rodiv */
+ v = readl(refo->ctrl_reg);
+ rodiv = (v >> REFO_DIV_SHIFT) & REFO_DIV_MASK;
+
+ /* get trim */
+ v = readl(refo->ctrl_reg + REFO_TRIM_REG);
+ rotrim = (v >> REFO_TRIM_SHIFT) & REFO_TRIM_MASK;
+
+ return roclk_calc_rate(parent_rate, rodiv, rotrim);
+}
+
+static long roclk_round_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long *parent_rate)
+{
+ u32 rotrim, rodiv;
+
+ /* calculate dividers for new rate */
+ roclk_calc_div_trim(rate, *parent_rate, &rodiv, &rotrim);
+
+	/* calculate the new (rounded) rate based on the new rodiv & rotrim */
+ return roclk_calc_rate(*parent_rate, rodiv, rotrim);
+}
+
+static int roclk_determine_rate(struct clk_hw *hw,
+ struct clk_rate_request *req)
+{
+ struct clk_hw *parent_clk, *best_parent_clk = NULL;
+ unsigned int i, delta, best_delta = -1;
+ unsigned long parent_rate, best_parent_rate = 0;
+ unsigned long best = 0, nearest_rate;
+
+ /* find a parent which can generate nearest clkrate >= rate */
+ for (i = 0; i < clk_hw_get_num_parents(hw); i++) {
+ /* get parent */
+ parent_clk = clk_hw_get_parent_by_index(hw, i);
+ if (!parent_clk)
+ continue;
+
+ /* skip if parent runs slower than target rate */
+ parent_rate = clk_hw_get_rate(parent_clk);
+ if (req->rate > parent_rate)
+ continue;
+
+ nearest_rate = roclk_round_rate(hw, req->rate, &parent_rate);
+ delta = abs(nearest_rate - req->rate);
+ if ((nearest_rate >= req->rate) && (delta < best_delta)) {
+ best_parent_clk = parent_clk;
+ best_parent_rate = parent_rate;
+ best = nearest_rate;
+ best_delta = delta;
+
+ if (delta == 0)
+ break;
+ }
+ }
+
+ /* if no match found, retain old rate */
+ if (!best_parent_clk) {
+ pr_err("%s:%s, no parent found for rate %lu.\n",
+ __func__, clk_hw_get_name(hw), req->rate);
+ return clk_hw_get_rate(hw);
+ }
+
+ pr_debug("%s,rate %lu, best_parent(%s, %lu), best %lu, delta %d\n",
+ clk_hw_get_name(hw), req->rate,
+ clk_hw_get_name(best_parent_clk), best_parent_rate,
+ best, best_delta);
+
+ if (req->best_parent_rate)
+ req->best_parent_rate = best_parent_rate;
+
+ if (req->best_parent_hw)
+ req->best_parent_hw = best_parent_clk;
+
+ return best;
+}
+
+static int roclk_set_parent(struct clk_hw *hw, u8 index)
+{
+ struct pic32_ref_osc *refo = clkhw_to_refosc(hw);
+ unsigned long flags;
+ u32 v;
+ int err;
+
+ if (refo->parent_map)
+ index = refo->parent_map[index];
+
+ /* wait until ACTIVE bit is zero or timeout */
+ err = readl_poll_timeout(refo->ctrl_reg, v, !(v & REFO_ACTIVE),
+ 1, LOCK_TIMEOUT_US);
+ if (err) {
+ pr_err("%s: poll failed, clk active\n", clk_hw_get_name(hw));
+ return err;
+ }
+
+ spin_lock_irqsave(&refo->core->reg_lock, flags);
+
+ pic32_syskey_unlock();
+
+ /* calculate & apply new */
+ v = readl(refo->ctrl_reg);
+ v &= ~(REFO_SEL_MASK << REFO_SEL_SHIFT);
+ v |= index << REFO_SEL_SHIFT;
+
+ writel(v, refo->ctrl_reg);
+
+ spin_unlock_irqrestore(&refo->core->reg_lock, flags);
+
+ return 0;
+}
+
+static int roclk_set_rate_and_parent(struct clk_hw *hw,
+ unsigned long rate,
+ unsigned long parent_rate,
+ u8 index)
+{
+ struct pic32_ref_osc *refo = clkhw_to_refosc(hw);
+ unsigned long flags;
+ u32 trim, rodiv, v;
+ int err;
+
+ /* calculate new rodiv & rotrim for new rate */
+ roclk_calc_div_trim(rate, parent_rate, &rodiv, &trim);
+
+ pr_debug("parent_rate = %lu, rate = %lu, div = %d, trim = %d\n",
+ parent_rate, rate, rodiv, trim);
+
+	/* wait until no source change or divider switch is in progress */
+ err = readl_poll_timeout(refo->ctrl_reg, v,
+ !(v & (REFO_ACTIVE | REFO_DIVSW_EN)),
+ 1, LOCK_TIMEOUT_US);
+ if (err) {
+ pr_err("%s: poll timedout, clock is still active\n", __func__);
+ return err;
+ }
+
+ spin_lock_irqsave(&refo->core->reg_lock, flags);
+ v = readl(refo->ctrl_reg);
+
+ pic32_syskey_unlock();
+
+ /* apply parent, if required */
+ if (refo->parent_map)
+ index = refo->parent_map[index];
+
+ v &= ~(REFO_SEL_MASK << REFO_SEL_SHIFT);
+ v |= index << REFO_SEL_SHIFT;
+
+ /* apply RODIV */
+ v &= ~(REFO_DIV_MASK << REFO_DIV_SHIFT);
+ v |= rodiv << REFO_DIV_SHIFT;
+ writel(v, refo->ctrl_reg);
+
+ /* apply ROTRIM */
+ v = readl(refo->ctrl_reg + REFO_TRIM_REG);
+ v &= ~(REFO_TRIM_MASK << REFO_TRIM_SHIFT);
+ v |= trim << REFO_TRIM_SHIFT;
+ writel(v, refo->ctrl_reg + REFO_TRIM_REG);
+
+ /* enable & activate divider switching */
+ writel(REFO_ON | REFO_DIVSW_EN, PIC32_SET(refo->ctrl_reg));
+
+	/* wait until the divider switch (DIVSW_EN) completes */
+ err = readl_poll_timeout_atomic(refo->ctrl_reg, v, !(v & REFO_DIVSW_EN),
+ 1, LOCK_TIMEOUT_US);
+ /* leave the clk gated as it was */
+ writel(REFO_ON, PIC32_CLR(refo->ctrl_reg));
+
+ spin_unlock_irqrestore(&refo->core->reg_lock, flags);
+
+ return err;
+}
+
+static int roclk_set_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long parent_rate)
+{
+ u8 index = roclk_get_parent(hw);
+
+ return roclk_set_rate_and_parent(hw, rate, parent_rate, index);
+}
+
+const struct clk_ops pic32_roclk_ops = {
+ .enable = roclk_enable,
+ .disable = roclk_disable,
+ .is_enabled = roclk_is_enabled,
+ .get_parent = roclk_get_parent,
+ .set_parent = roclk_set_parent,
+ .determine_rate = roclk_determine_rate,
+ .recalc_rate = roclk_recalc_rate,
+ .set_rate_and_parent = roclk_set_rate_and_parent,
+ .set_rate = roclk_set_rate,
+ .init = roclk_init,
+};
+
+struct clk *pic32_refo_clk_register(const struct pic32_ref_osc_data *data,
+ struct pic32_clk_common *core)
+{
+ struct pic32_ref_osc *refo;
+ struct clk *clk;
+
+ refo = devm_kzalloc(core->dev, sizeof(*refo), GFP_KERNEL);
+ if (!refo)
+ return ERR_PTR(-ENOMEM);
+
+ refo->core = core;
+ refo->hw.init = &data->init_data;
+ refo->ctrl_reg = data->ctrl_reg + core->iobase;
+ refo->parent_map = data->parent_map;
+
+ clk = devm_clk_register(core->dev, &refo->hw);
+ if (IS_ERR(clk))
+ dev_err(core->dev, "%s: clk_register() failed\n", __func__);
+
+ return clk;
+}
+
+struct pic32_sys_pll {
+ struct clk_hw hw;
+ void __iomem *ctrl_reg;
+ void __iomem *status_reg;
+ u32 lock_mask;
+	u32 idiv; /* PLL iclk divider, treated as fixed */
+ struct pic32_clk_common *core;
+};
+
+#define clkhw_to_spll(_hw) container_of(_hw, struct pic32_sys_pll, hw)
+
+static inline u32 spll_odiv_to_divider(u32 odiv)
+{
+ odiv = clamp_val(odiv, PLL_ODIV_MIN, PLL_ODIV_MAX);
+
+ return 1 << odiv;
+}
+
+static unsigned long spll_calc_mult_div(struct pic32_sys_pll *pll,
+ unsigned long rate,
+ unsigned long parent_rate,
+ u32 *mult_p, u32 *odiv_p)
+{
+ u32 mul, div, best_mul = 1, best_div = 1;
+ unsigned long new_rate, best_rate = rate;
+ unsigned int best_delta = -1, delta, match_found = 0;
+ u64 rate64;
+
+ parent_rate /= pll->idiv;
+
+ for (mul = 1; mul <= PLL_MULT_MAX; mul++) {
+ for (div = PLL_ODIV_MIN; div <= PLL_ODIV_MAX; div++) {
+ rate64 = parent_rate;
+ rate64 *= mul;
+ do_div(rate64, 1 << div);
+ new_rate = rate64;
+ delta = abs(rate - new_rate);
+ if ((new_rate >= rate) && (delta < best_delta)) {
+ best_delta = delta;
+ best_rate = new_rate;
+ best_mul = mul;
+ best_div = div;
+ match_found = 1;
+ }
+ }
+ }
+
+ if (!match_found) {
+ pr_warn("spll: no match found\n");
+ return 0;
+ }
+
+ pr_debug("rate %lu, par_rate %lu/mult %u, div %u, best_rate %lu\n",
+ rate, parent_rate, best_mul, best_div, best_rate);
+
+ if (mult_p)
+ *mult_p = best_mul - 1;
+
+ if (odiv_p)
+ *odiv_p = best_div;
+
+ return best_rate;
+}
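+
+/*
+ * Note: the multiplier is handed back as (best_mul - 1) to match the
+ * hardware multiplier field encoding (recalc_rate below adds the 1 back),
+ * while the divider is the raw ODIV exponent, i.e. actual divider = 1 << odiv.
+ */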
+
+static unsigned long spll_clk_recalc_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+{
+ struct pic32_sys_pll *pll = clkhw_to_spll(hw);
+ unsigned long pll_in_rate;
+ u32 mult, odiv, div, v;
+ u64 rate64;
+
+ v = readl(pll->ctrl_reg);
+ odiv = ((v >> PLL_ODIV_SHIFT) & PLL_ODIV_MASK);
+ mult = ((v >> PLL_MULT_SHIFT) & PLL_MULT_MASK) + 1;
+ div = spll_odiv_to_divider(odiv);
+
+	/*
+	 * pll_in_rate = parent_rate / idiv
+	 * pll_out_rate = pll_in_rate * mult / div
+	 */
+ pll_in_rate = parent_rate / pll->idiv;
+ rate64 = pll_in_rate;
+ rate64 *= mult;
+ do_div(rate64, div);
+
+ return rate64;
+}
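+
+/*
+ * Worked example (values are purely illustrative): with a 24 MHz parent,
+ * idiv = 3, mult = 50 and odiv = 1 (divide-by-2), the PLL output comes out
+ * as (24 MHz / 3) * 50 / 2 = 200 MHz.
+ */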
+
+static long spll_clk_round_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long *parent_rate)
+{
+ struct pic32_sys_pll *pll = clkhw_to_spll(hw);
+
+ return spll_calc_mult_div(pll, rate, *parent_rate, NULL, NULL);
+}
+
+static int spll_clk_set_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long parent_rate)
+{
+ struct pic32_sys_pll *pll = clkhw_to_spll(hw);
+ unsigned long ret, flags;
+ u32 mult, odiv, v;
+ int err;
+
+ ret = spll_calc_mult_div(pll, rate, parent_rate, &mult, &odiv);
+ if (!ret)
+ return -EINVAL;
+
+ /*
+	 * We can't change the SPLL multiplier/divider while it is in active
+	 * use by SYSCLK, so check before applying the new settings.
+ */
+
+ /* Is spll_clk active parent of sys_clk ? */
+ if (unlikely(clk_hw_get_parent(pic32_sclk_hw) == hw)) {
+ pr_err("%s: failed, clk in-use\n", __func__);
+ return -EBUSY;
+ }
+
+ spin_lock_irqsave(&pll->core->reg_lock, flags);
+
+ /* apply new multiplier & divisor */
+ v = readl(pll->ctrl_reg);
+ v &= ~(PLL_MULT_MASK << PLL_MULT_SHIFT);
+ v &= ~(PLL_ODIV_MASK << PLL_ODIV_SHIFT);
+ v |= (mult << PLL_MULT_SHIFT) | (odiv << PLL_ODIV_SHIFT);
+
+ /* sys unlock before write */
+ pic32_syskey_unlock();
+
+ writel(v, pll->ctrl_reg);
+ cpu_relax();
+
+	/* insert a few NOPs (5-stage pipeline) so the CPU does not hang */
+ cpu_nop5();
+ cpu_nop5();
+
+ /* Wait until PLL is locked (maximum 100 usecs). */
+ err = readl_poll_timeout_atomic(pll->status_reg, v,
+ v & pll->lock_mask, 1, 100);
+ spin_unlock_irqrestore(&pll->core->reg_lock, flags);
+
+ return err;
+}
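+
+/*
+ * Note: set_rate above bails out with -EBUSY while the PLL feeds SYSCLK, so
+ * callers are expected to reparent sys_clk to another source (e.g. the FRC
+ * divider) before retuning the PLL, and switch back afterwards.
+ */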
+
+/* SPLL clock operation */
+const struct clk_ops pic32_spll_ops = {
+ .recalc_rate = spll_clk_recalc_rate,
+ .round_rate = spll_clk_round_rate,
+ .set_rate = spll_clk_set_rate,
+};
+
+struct clk *pic32_spll_clk_register(const struct pic32_sys_pll_data *data,
+ struct pic32_clk_common *core)
+{
+ struct pic32_sys_pll *spll;
+ struct clk *clk;
+
+ spll = devm_kzalloc(core->dev, sizeof(*spll), GFP_KERNEL);
+ if (!spll)
+ return ERR_PTR(-ENOMEM);
+
+ spll->core = core;
+ spll->hw.init = &data->init_data;
+ spll->ctrl_reg = data->ctrl_reg + core->iobase;
+ spll->status_reg = data->status_reg + core->iobase;
+ spll->lock_mask = data->lock_mask;
+
+	/* cache PLL idiv; the PLL driver treats it as a constant */
+ spll->idiv = (readl(spll->ctrl_reg) >> PLL_IDIV_SHIFT) & PLL_IDIV_MASK;
+ spll->idiv += 1;
+
+ clk = devm_clk_register(core->dev, &spll->hw);
+ if (IS_ERR(clk))
+ dev_err(core->dev, "sys_pll: clk_register() failed\n");
+
+ return clk;
+}
+
+/* System mux clock(aka SCLK) */
+
+struct pic32_sys_clk {
+ struct clk_hw hw;
+ void __iomem *mux_reg;
+ void __iomem *slew_reg;
+ u32 slew_div;
+ const u32 *parent_map;
+ struct pic32_clk_common *core;
+};
+
+#define clkhw_to_sys_clk(_hw) container_of(_hw, struct pic32_sys_clk, hw)
+
+static unsigned long sclk_get_rate(struct clk_hw *hw, unsigned long parent_rate)
+{
+ struct pic32_sys_clk *sclk = clkhw_to_sys_clk(hw);
+ u32 div;
+
+ div = (readl(sclk->slew_reg) >> SLEW_SYSDIV_SHIFT) & SLEW_SYSDIV;
+ div += 1; /* sys-div to divider */
+
+ return parent_rate / div;
+}
+
+static long sclk_round_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long *parent_rate)
+{
+ return calc_best_divided_rate(rate, *parent_rate, SLEW_SYSDIV, 1);
+}
+
+static int sclk_set_rate(struct clk_hw *hw,
+ unsigned long rate, unsigned long parent_rate)
+{
+ struct pic32_sys_clk *sclk = clkhw_to_sys_clk(hw);
+ unsigned long flags;
+ u32 v, div;
+ int err;
+
+ div = parent_rate / rate;
+
+ spin_lock_irqsave(&sclk->core->reg_lock, flags);
+
+ /* apply new div */
+ v = readl(sclk->slew_reg);
+ v &= ~(SLEW_SYSDIV << SLEW_SYSDIV_SHIFT);
+ v |= (div - 1) << SLEW_SYSDIV_SHIFT;
+
+ pic32_syskey_unlock();
+
+ writel(v, sclk->slew_reg);
+
+ /* wait until BUSY is cleared */
+ err = readl_poll_timeout_atomic(sclk->slew_reg, v,
+ !(v & SLEW_BUSY), 1, LOCK_TIMEOUT_US);
+
+ spin_unlock_irqrestore(&sclk->core->reg_lock, flags);
+
+ return err;
+}
+
+static u8 sclk_get_parent(struct clk_hw *hw)
+{
+ struct pic32_sys_clk *sclk = clkhw_to_sys_clk(hw);
+ u32 i, v;
+
+ v = (readl(sclk->mux_reg) >> OSC_CUR_SHIFT) & OSC_CUR_MASK;
+
+ if (!sclk->parent_map)
+ return v;
+
+ for (i = 0; i < clk_hw_get_num_parents(hw); i++)
+ if (sclk->parent_map[i] == v)
+ return i;
+ return -EINVAL;
+}
+
+static int sclk_set_parent(struct clk_hw *hw, u8 index)
+{
+ struct pic32_sys_clk *sclk = clkhw_to_sys_clk(hw);
+ unsigned long flags;
+ u32 nosc, cosc, v;
+ int err;
+
+ spin_lock_irqsave(&sclk->core->reg_lock, flags);
+
+ /* find new_osc */
+ nosc = sclk->parent_map ? sclk->parent_map[index] : index;
+
+ /* set new parent */
+ v = readl(sclk->mux_reg);
+ v &= ~(OSC_NEW_MASK << OSC_NEW_SHIFT);
+ v |= nosc << OSC_NEW_SHIFT;
+
+ pic32_syskey_unlock();
+
+ writel(v, sclk->mux_reg);
+
+	/* initiate switch */
+ writel(OSC_SWEN, PIC32_SET(sclk->mux_reg));
+ cpu_relax();
+
+	/* add NOPs to flush the pipeline (cpu_clk is in flux) */
+ cpu_nop5();
+
+ /* wait for SWEN bit to clear */
+ err = readl_poll_timeout_atomic(sclk->slew_reg, v,
+ !(v & OSC_SWEN), 1, LOCK_TIMEOUT_US);
+
+ spin_unlock_irqrestore(&sclk->core->reg_lock, flags);
+
+ /*
+	 * The SCLK switching logic may reject a switch request if the
+	 * prerequisites are not met (e.g. the new clock source is absent or
+	 * unstable), so confirm the switch before claiming success.
+ */
+ cosc = (readl(sclk->mux_reg) >> OSC_CUR_SHIFT) & OSC_CUR_MASK;
+ if (cosc != nosc) {
+		pr_err("%s: failed to set parent to %d, current parent is %d\n",
+		       clk_hw_get_name(hw), nosc, cosc);
+ err = -EBUSY;
+ }
+
+ return err;
+}
+
+static void sclk_init(struct clk_hw *hw)
+{
+ struct pic32_sys_clk *sclk = clkhw_to_sys_clk(hw);
+ unsigned long flags;
+ u32 v;
+
+ /* Maintain reference to this clk, required in spll_clk_set_rate() */
+ pic32_sclk_hw = hw;
+
+ /* apply slew divider on both up and down scaling */
+ if (sclk->slew_div) {
+ spin_lock_irqsave(&sclk->core->reg_lock, flags);
+ v = readl(sclk->slew_reg);
+ v &= ~(SLEW_DIV << SLEW_DIV_SHIFT);
+ v |= sclk->slew_div << SLEW_DIV_SHIFT;
+ v |= SLEW_DOWNEN | SLEW_UPEN;
+ writel(v, sclk->slew_reg);
+ spin_unlock_irqrestore(&sclk->core->reg_lock, flags);
+ }
+}
+
+/* sclk with post-divider */
+const struct clk_ops pic32_sclk_ops = {
+ .get_parent = sclk_get_parent,
+ .set_parent = sclk_set_parent,
+ .round_rate = sclk_round_rate,
+ .set_rate = sclk_set_rate,
+ .recalc_rate = sclk_get_rate,
+ .init = sclk_init,
+ .determine_rate = __clk_mux_determine_rate,
+};
+
+/* sclk with no slew and no post-divider */
+const struct clk_ops pic32_sclk_no_div_ops = {
+ .get_parent = sclk_get_parent,
+ .set_parent = sclk_set_parent,
+ .init = sclk_init,
+ .determine_rate = __clk_mux_determine_rate,
+};
+
+struct clk *pic32_sys_clk_register(const struct pic32_sys_clk_data *data,
+ struct pic32_clk_common *core)
+{
+ struct pic32_sys_clk *sclk;
+ struct clk *clk;
+
+ sclk = devm_kzalloc(core->dev, sizeof(*sclk), GFP_KERNEL);
+ if (!sclk)
+ return ERR_PTR(-ENOMEM);
+
+ sclk->core = core;
+ sclk->hw.init = &data->init_data;
+ sclk->mux_reg = data->mux_reg + core->iobase;
+ sclk->slew_reg = data->slew_reg + core->iobase;
+ sclk->slew_div = data->slew_div;
+ sclk->parent_map = data->parent_map;
+
+ clk = devm_clk_register(core->dev, &sclk->hw);
+ if (IS_ERR(clk))
+ dev_err(core->dev, "%s: clk register failed\n", __func__);
+
+ return clk;
+}
+
+/* secondary oscillator */
+struct pic32_sec_osc {
+ struct clk_hw hw;
+ void __iomem *enable_reg;
+ void __iomem *status_reg;
+ u32 enable_mask;
+ u32 status_mask;
+ unsigned long fixed_rate;
+ struct pic32_clk_common *core;
+};
+
+#define clkhw_to_sosc(_hw) container_of(_hw, struct pic32_sec_osc, hw)
+static int sosc_clk_enable(struct clk_hw *hw)
+{
+ struct pic32_sec_osc *sosc = clkhw_to_sosc(hw);
+ u32 v;
+
+ /* enable SOSC */
+ pic32_syskey_unlock();
+ writel(sosc->enable_mask, PIC32_SET(sosc->enable_reg));
+
+	/* wait until the warm-up period expires or the ready status is set */
+ return readl_poll_timeout_atomic(sosc->status_reg, v,
+ v & sosc->status_mask, 1, 100);
+}
+
+static void sosc_clk_disable(struct clk_hw *hw)
+{
+ struct pic32_sec_osc *sosc = clkhw_to_sosc(hw);
+
+ pic32_syskey_unlock();
+ writel(sosc->enable_mask, PIC32_CLR(sosc->enable_reg));
+}
+
+static int sosc_clk_is_enabled(struct clk_hw *hw)
+{
+ struct pic32_sec_osc *sosc = clkhw_to_sosc(hw);
+ u32 enabled, ready;
+
+ /* check enabled and ready status */
+ enabled = readl(sosc->enable_reg) & sosc->enable_mask;
+ ready = readl(sosc->status_reg) & sosc->status_mask;
+
+ return enabled && ready;
+}
+
+static unsigned long sosc_clk_calc_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+{
+ return clkhw_to_sosc(hw)->fixed_rate;
+}
+
+const struct clk_ops pic32_sosc_ops = {
+ .enable = sosc_clk_enable,
+ .disable = sosc_clk_disable,
+ .is_enabled = sosc_clk_is_enabled,
+ .recalc_rate = sosc_clk_calc_rate,
+};
+
+struct clk *pic32_sosc_clk_register(const struct pic32_sec_osc_data *data,
+ struct pic32_clk_common *core)
+{
+ struct pic32_sec_osc *sosc;
+
+ sosc = devm_kzalloc(core->dev, sizeof(*sosc), GFP_KERNEL);
+ if (!sosc)
+ return ERR_PTR(-ENOMEM);
+
+ sosc->core = core;
+ sosc->hw.init = &data->init_data;
+ sosc->fixed_rate = data->fixed_rate;
+ sosc->enable_mask = data->enable_mask;
+ sosc->status_mask = data->status_mask;
+ sosc->enable_reg = data->enable_reg + core->iobase;
+ sosc->status_reg = data->status_reg + core->iobase;
+
+ return devm_clk_register(core->dev, &sosc->hw);
+}
diff --git a/drivers/clk/microchip/clk-core.h b/drivers/clk/microchip/clk-core.h
new file mode 100644
index 000000000000..856664277a29
--- /dev/null
+++ b/drivers/clk/microchip/clk-core.h
@@ -0,0 +1,84 @@
+/*
+ * Purna Chandra Mandal, <purna.mandal@microchip.com>
+ * Copyright (C) 2015 Microchip Technology Inc. All rights reserved.
+ *
+ * This program is free software; you can distribute it and/or modify it
+ * under the terms of the GNU General Public License (Version 2) as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ */
+#ifndef __MICROCHIP_CLK_PIC32_H_
+#define __MICROCHIP_CLK_PIC32_H_
+
+#include <linux/clk-provider.h>
+
+/* PIC32 clock data */
+struct pic32_clk_common {
+ struct device *dev;
+ void __iomem *iobase;
+ spinlock_t reg_lock; /* clock lock */
+};
+
+/* System PLL clock */
+struct pic32_sys_pll_data {
+ struct clk_init_data init_data;
+ const u32 ctrl_reg;
+ const u32 status_reg;
+ const u32 lock_mask;
+};
+
+/* System clock */
+struct pic32_sys_clk_data {
+ struct clk_init_data init_data;
+ const u32 mux_reg;
+ const u32 slew_reg;
+ const u32 *parent_map;
+ const u32 slew_div;
+};
+
+/* Reference Oscillator clock */
+struct pic32_ref_osc_data {
+ struct clk_init_data init_data;
+ const u32 ctrl_reg;
+ const u32 *parent_map;
+};
+
+/* Peripheral Bus clock */
+struct pic32_periph_clk_data {
+ struct clk_init_data init_data;
+ const u32 ctrl_reg;
+};
+
+/* External Secondary Oscillator clock */
+struct pic32_sec_osc_data {
+ struct clk_init_data init_data;
+ const u32 enable_reg;
+ const u32 status_reg;
+ const u32 enable_mask;
+ const u32 status_mask;
+ const unsigned long fixed_rate;
+};
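+
+/*
+ * Note: the register members above are offsets relative to the common
+ * iobase; the registration helpers below add pic32_clk_common.iobase to
+ * them to form the final __iomem addresses.
+ */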
+
+extern const struct clk_ops pic32_pbclk_ops;
+extern const struct clk_ops pic32_sclk_ops;
+extern const struct clk_ops pic32_sclk_no_div_ops;
+extern const struct clk_ops pic32_spll_ops;
+extern const struct clk_ops pic32_roclk_ops;
+extern const struct clk_ops pic32_sosc_ops;
+
+struct clk *pic32_periph_clk_register(const struct pic32_periph_clk_data *data,
+ struct pic32_clk_common *core);
+struct clk *pic32_refo_clk_register(const struct pic32_ref_osc_data *data,
+ struct pic32_clk_common *core);
+struct clk *pic32_sys_clk_register(const struct pic32_sys_clk_data *data,
+ struct pic32_clk_common *core);
+struct clk *pic32_spll_clk_register(const struct pic32_sys_pll_data *data,
+ struct pic32_clk_common *core);
+struct clk *pic32_sosc_clk_register(const struct pic32_sec_osc_data *data,
+ struct pic32_clk_common *core);
+
+#endif /* __MICROCHIP_CLK_PIC32_H_*/
diff --git a/drivers/clk/microchip/clk-pic32mzda.c b/drivers/clk/microchip/clk-pic32mzda.c
new file mode 100644
index 000000000000..020a29acc5b0
--- /dev/null
+++ b/drivers/clk/microchip/clk-pic32mzda.c
@@ -0,0 +1,275 @@
+/*
+ * Purna Chandra Mandal, <purna.mandal@microchip.com>
+ * Copyright (C) 2015 Microchip Technology Inc. All rights reserved.
+ *
+ * This program is free software; you can distribute it and/or modify it
+ * under the terms of the GNU General Public License (Version 2) as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
+ * for more details.
+ */
+#include <dt-bindings/clock/microchip,pic32-clock.h>
+#include <linux/clk.h>
+#include <linux/clk-provider.h>
+#include <linux/clkdev.h>
+#include <linux/module.h>
+#include <linux/of_address.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <asm/traps.h>
+
+#include "clk-core.h"
+
+/* FRC Postscaler */
+#define OSC_FRCDIV_MASK 0x07
+#define OSC_FRCDIV_SHIFT 24
+
+/* SPLL fields */
+#define PLL_ICLK_MASK 0x01
+#define PLL_ICLK_SHIFT 7
+
+#define DECLARE_PERIPHERAL_CLOCK(__clk_name, __reg, __flags) \
+ { \
+ .ctrl_reg = (__reg), \
+ .init_data = { \
+ .name = (__clk_name), \
+ .parent_names = (const char *[]) { \
+ "sys_clk" \
+ }, \
+ .num_parents = 1, \
+ .ops = &pic32_pbclk_ops, \
+ .flags = (__flags), \
+ }, \
+ }
+
+#define DECLARE_REFO_CLOCK(__clkid, __reg) \
+ { \
+ .ctrl_reg = (__reg), \
+ .init_data = { \
+ .name = "refo" #__clkid "_clk", \
+ .parent_names = (const char *[]) { \
+ "sys_clk", "pb1_clk", "posc_clk", \
+ "frc_clk", "lprc_clk", "sosc_clk", \
+ "sys_pll", "refi" #__clkid "_clk", \
+ "bfrc_clk", \
+ }, \
+ .num_parents = 9, \
+ .flags = CLK_SET_RATE_GATE | CLK_SET_PARENT_GATE,\
+ .ops = &pic32_roclk_ops, \
+ }, \
+ .parent_map = (const u32[]) { \
+ 0, 1, 2, 3, 4, 5, 7, 8, 9 \
+ }, \
+ }
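+
+/*
+ * The parent_map above translates the framework's parent index into the
+ * hardware source-select code; code 6 is skipped and is presumably reserved
+ * on this SoC.
+ */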
+
+static const struct pic32_ref_osc_data ref_clks[] = {
+ DECLARE_REFO_CLOCK(1, 0x80),
+ DECLARE_REFO_CLOCK(2, 0xa0),
+ DECLARE_REFO_CLOCK(3, 0xc0),
+ DECLARE_REFO_CLOCK(4, 0xe0),
+ DECLARE_REFO_CLOCK(5, 0x100),
+};
+
+static const struct pic32_periph_clk_data periph_clocks[] = {
+ DECLARE_PERIPHERAL_CLOCK("pb1_clk", 0x140, 0),
+ DECLARE_PERIPHERAL_CLOCK("pb2_clk", 0x150, CLK_IGNORE_UNUSED),
+ DECLARE_PERIPHERAL_CLOCK("pb3_clk", 0x160, 0),
+ DECLARE_PERIPHERAL_CLOCK("pb4_clk", 0x170, 0),
+ DECLARE_PERIPHERAL_CLOCK("pb5_clk", 0x180, 0),
+ DECLARE_PERIPHERAL_CLOCK("pb6_clk", 0x190, 0),
+ DECLARE_PERIPHERAL_CLOCK("cpu_clk", 0x1a0, CLK_IGNORE_UNUSED),
+};
+
+static const struct pic32_sys_clk_data sys_mux_clk = {
+ .slew_reg = 0x1c0,
+ .slew_div = 2, /* step of div_4 -> div_2 -> no_div */
+ .init_data = {
+ .name = "sys_clk",
+ .parent_names = (const char *[]) {
+ "frcdiv_clk", "sys_pll", "posc_clk",
+ "sosc_clk", "lprc_clk", "frcdiv_clk",
+ },
+ .num_parents = 6,
+ .ops = &pic32_sclk_ops,
+ },
+ .parent_map = (const u32[]) {
+ 0, 1, 2, 4, 5, 7,
+ },
+};
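+
+/*
+ * Note: "frcdiv_clk" is listed twice above because the hardware accepts it
+ * under two different select codes (0 and 7), as encoded in parent_map.
+ */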
+
+static const struct pic32_sys_pll_data sys_pll = {
+ .ctrl_reg = 0x020,
+ .status_reg = 0x1d0,
+ .lock_mask = BIT(7),
+ .init_data = {
+ .name = "sys_pll",
+ .parent_names = (const char *[]) {
+ "spll_mux_clk"
+ },
+ .num_parents = 1,
+ .ops = &pic32_spll_ops,
+ },
+};
+
+static const struct pic32_sec_osc_data sosc_clk = {
+ .status_reg = 0x1d0,
+ .enable_mask = BIT(1),
+ .status_mask = BIT(4),
+ .init_data = {
+ .name = "sosc_clk",
+ .parent_names = NULL,
+ .ops = &pic32_sosc_ops,
+ },
+};
+
+static int pic32mzda_critical_clks[] = {
+ PB2CLK, PB7CLK
+};
+
+/* PIC32MZDA clock data */
+struct pic32mzda_clk_data {
+ struct clk *clks[MAXCLKS];
+ struct pic32_clk_common core;
+ struct clk_onecell_data onecell_data;
+ struct notifier_block failsafe_notifier;
+};
+
+static int pic32_fscm_nmi(struct notifier_block *nb,
+ unsigned long action, void *data)
+{
+ struct pic32mzda_clk_data *cd;
+
+ cd = container_of(nb, struct pic32mzda_clk_data, failsafe_notifier);
+
+ /* SYSCLK is now running from BFRCCLK. Report clock failure. */
+ if (readl(cd->core.iobase) & BIT(2))
+ pr_alert("pic32-clk: FSCM detected clk failure.\n");
+
+	/* TODO: determine the reason for the failure and recover accordingly */
+
+ return NOTIFY_OK;
+}
+
+static int pic32mzda_clk_probe(struct platform_device *pdev)
+{
+ const char *const pll_mux_parents[] = {"posc_clk", "frc_clk"};
+ struct device_node *np = pdev->dev.of_node;
+ struct pic32mzda_clk_data *cd;
+ struct pic32_clk_common *core;
+ struct clk *pll_mux_clk, *clk;
+ struct clk **clks;
+ int nr_clks, i, ret;
+
+ cd = devm_kzalloc(&pdev->dev, sizeof(*cd), GFP_KERNEL);
+ if (!cd)
+ return -ENOMEM;
+
+ core = &cd->core;
+ core->iobase = of_io_request_and_map(np, 0, of_node_full_name(np));
+ if (IS_ERR(core->iobase)) {
+ dev_err(&pdev->dev, "pic32-clk: failed to map registers\n");
+ return PTR_ERR(core->iobase);
+ }
+
+ spin_lock_init(&core->reg_lock);
+ core->dev = &pdev->dev;
+ clks = &cd->clks[0];
+
+ /* register fixed rate clocks */
+ clks[POSCCLK] = clk_register_fixed_rate(&pdev->dev, "posc_clk", NULL,
+ CLK_IS_ROOT, 24000000);
+ clks[FRCCLK] = clk_register_fixed_rate(&pdev->dev, "frc_clk", NULL,
+ CLK_IS_ROOT, 8000000);
+ clks[BFRCCLK] = clk_register_fixed_rate(&pdev->dev, "bfrc_clk", NULL,
+ CLK_IS_ROOT, 8000000);
+ clks[LPRCCLK] = clk_register_fixed_rate(&pdev->dev, "lprc_clk", NULL,
+ CLK_IS_ROOT, 32000);
+ clks[UPLLCLK] = clk_register_fixed_rate(&pdev->dev, "usbphy_clk", NULL,
+ CLK_IS_ROOT, 24000000);
+ /* fixed rate (optional) clock */
+ if (of_find_property(np, "microchip,pic32mzda-sosc", NULL)) {
+ pr_info("pic32-clk: dt requests SOSC.\n");
+ clks[SOSCCLK] = pic32_sosc_clk_register(&sosc_clk, core);
+ }
+ /* divider clock */
+ clks[FRCDIVCLK] = clk_register_divider(&pdev->dev, "frcdiv_clk",
+ "frc_clk", 0,
+ core->iobase,
+ OSC_FRCDIV_SHIFT,
+ OSC_FRCDIV_MASK,
+ CLK_DIVIDER_POWER_OF_TWO,
+ &core->reg_lock);
+ /* PLL ICLK mux */
+ pll_mux_clk = clk_register_mux(&pdev->dev, "spll_mux_clk",
+ pll_mux_parents, 2, 0,
+ core->iobase + 0x020,
+ PLL_ICLK_SHIFT, 1, 0, &core->reg_lock);
+ if (IS_ERR(pll_mux_clk))
+ pr_err("spll_mux_clk: clk register failed\n");
+
+ /* PLL */
+ clks[PLLCLK] = pic32_spll_clk_register(&sys_pll, core);
+ /* SYSTEM clock */
+ clks[SCLK] = pic32_sys_clk_register(&sys_mux_clk, core);
+ /* Peripheral bus clocks */
+ for (nr_clks = PB1CLK, i = 0; nr_clks <= PB7CLK; i++, nr_clks++)
+ clks[nr_clks] = pic32_periph_clk_register(&periph_clocks[i],
+ core);
+ /* Reference oscillator clock */
+ for (nr_clks = REF1CLK, i = 0; nr_clks <= REF5CLK; i++, nr_clks++)
+ clks[nr_clks] = pic32_refo_clk_register(&ref_clks[i], core);
+
+ /* register clkdev */
+ for (i = 0; i < MAXCLKS; i++) {
+ if (IS_ERR(clks[i]))
+ continue;
+ clk_register_clkdev(clks[i], NULL, __clk_get_name(clks[i]));
+ }
+
+ /* register clock provider */
+ cd->onecell_data.clks = clks;
+ cd->onecell_data.clk_num = MAXCLKS;
+ ret = of_clk_add_provider(np, of_clk_src_onecell_get,
+ &cd->onecell_data);
+ if (ret)
+ return ret;
+
+ /* force enable critical clocks */
+ for (i = 0; i < ARRAY_SIZE(pic32mzda_critical_clks); i++) {
+ clk = clks[pic32mzda_critical_clks[i]];
+ if (clk_prepare_enable(clk))
+ dev_err(&pdev->dev, "clk_prepare_enable(%s) failed\n",
+ __clk_get_name(clk));
+ }
+
+ /* register NMI for failsafe clock monitor */
+ cd->failsafe_notifier.notifier_call = pic32_fscm_nmi;
+ return register_nmi_notifier(&cd->failsafe_notifier);
+}
+
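+/*
+ * Hypothetical device tree sketch (node name, label, address and size are
+ * illustrative only; the compatible string is the one from the match table
+ * below):
+ *
+ *	rootclk: clock-controller@1f801200 {
+ *		compatible = "microchip,pic32mzda-clk";
+ *		reg = <0x1f801200 0x200>;
+ *		#clock-cells = <1>;
+ *	};
+ *
+ * Consumers then pick outputs by the indices from
+ * dt-bindings/clock/microchip,pic32-clock.h, e.g. clocks = <&rootclk PB2CLK>;.
+ */
+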
+static const struct of_device_id pic32mzda_clk_match_table[] = {
+ { .compatible = "microchip,pic32mzda-clk", },
+ { }
+};
+MODULE_DEVICE_TABLE(of, pic32mzda_clk_match_table);
+
+static struct platform_driver pic32mzda_clk_driver = {
+ .probe = pic32mzda_clk_probe,
+ .driver = {
+ .name = "clk-pic32mzda",
+ .of_match_table = pic32mzda_clk_match_table,
+ },
+};
+
+static int __init microchip_pic32mzda_clk_init(void)
+{
+ return platform_driver_register(&pic32mzda_clk_driver);
+}
+core_initcall(microchip_pic32mzda_clk_init);
+
+MODULE_DESCRIPTION("Microchip PIC32MZDA Clock Driver");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS("platform:clk-pic32mzda");
diff --git a/drivers/clk/mmp/clk-mmp2.c b/drivers/clk/mmp/clk-mmp2.c
index 38931dbd1eff..383f6a4f64f0 100644
--- a/drivers/clk/mmp/clk-mmp2.c
+++ b/drivers/clk/mmp/clk-mmp2.c
@@ -99,23 +99,19 @@ void __init mmp2_clk_init(phys_addr_t mpmu_phys, phys_addr_t apmu_phys,
return;
}
- clk = clk_register_fixed_rate(NULL, "clk32", NULL, CLK_IS_ROOT, 3200);
+ clk = clk_register_fixed_rate(NULL, "clk32", NULL, 0, 3200);
clk_register_clkdev(clk, "clk32", NULL);
- vctcxo = clk_register_fixed_rate(NULL, "vctcxo", NULL, CLK_IS_ROOT,
- 26000000);
+ vctcxo = clk_register_fixed_rate(NULL, "vctcxo", NULL, 0, 26000000);
clk_register_clkdev(vctcxo, "vctcxo", NULL);
- clk = clk_register_fixed_rate(NULL, "pll1", NULL, CLK_IS_ROOT,
- 800000000);
+ clk = clk_register_fixed_rate(NULL, "pll1", NULL, 0, 800000000);
clk_register_clkdev(clk, "pll1", NULL);
- clk = clk_register_fixed_rate(NULL, "usb_pll", NULL, CLK_IS_ROOT,
- 480000000);
+ clk = clk_register_fixed_rate(NULL, "usb_pll", NULL, 0, 480000000);
clk_register_clkdev(clk, "usb_pll", NULL);
- clk = clk_register_fixed_rate(NULL, "pll2", NULL, CLK_IS_ROOT,
- 960000000);
+ clk = clk_register_fixed_rate(NULL, "pll2", NULL, 0, 960000000);
clk_register_clkdev(clk, "pll2", NULL);
clk = clk_register_fixed_factor(NULL, "pll1_2", "pll1",
diff --git a/drivers/clk/mmp/clk-of-mmp2.c b/drivers/clk/mmp/clk-of-mmp2.c
index 251533d87c65..3a51fff1b0e7 100644
--- a/drivers/clk/mmp/clk-of-mmp2.c
+++ b/drivers/clk/mmp/clk-of-mmp2.c
@@ -63,11 +63,11 @@ struct mmp2_clk_unit {
};
static struct mmp_param_fixed_rate_clk fixed_rate_clks[] = {
- {MMP2_CLK_CLK32, "clk32", NULL, CLK_IS_ROOT, 32768},
- {MMP2_CLK_VCTCXO, "vctcxo", NULL, CLK_IS_ROOT, 26000000},
- {MMP2_CLK_PLL1, "pll1", NULL, CLK_IS_ROOT, 800000000},
- {MMP2_CLK_PLL2, "pll2", NULL, CLK_IS_ROOT, 960000000},
- {MMP2_CLK_USB_PLL, "usb_pll", NULL, CLK_IS_ROOT, 480000000},
+ {MMP2_CLK_CLK32, "clk32", NULL, 0, 32768},
+ {MMP2_CLK_VCTCXO, "vctcxo", NULL, 0, 26000000},
+ {MMP2_CLK_PLL1, "pll1", NULL, 0, 800000000},
+ {MMP2_CLK_PLL2, "pll2", NULL, 0, 960000000},
+ {MMP2_CLK_USB_PLL, "usb_pll", NULL, 0, 480000000},
};
static struct mmp_param_fixed_factor_clk fixed_factor_clks[] = {
diff --git a/drivers/clk/mmp/clk-of-pxa168.c b/drivers/clk/mmp/clk-of-pxa168.c
index 64eaf4141c69..87f2317b2a00 100644
--- a/drivers/clk/mmp/clk-of-pxa168.c
+++ b/drivers/clk/mmp/clk-of-pxa168.c
@@ -56,10 +56,10 @@ struct pxa168_clk_unit {
};
static struct mmp_param_fixed_rate_clk fixed_rate_clks[] = {
- {PXA168_CLK_CLK32, "clk32", NULL, CLK_IS_ROOT, 32768},
- {PXA168_CLK_VCTCXO, "vctcxo", NULL, CLK_IS_ROOT, 26000000},
- {PXA168_CLK_PLL1, "pll1", NULL, CLK_IS_ROOT, 624000000},
- {PXA168_CLK_USB_PLL, "usb_pll", NULL, CLK_IS_ROOT, 480000000},
+ {PXA168_CLK_CLK32, "clk32", NULL, 0, 32768},
+ {PXA168_CLK_VCTCXO, "vctcxo", NULL, 0, 26000000},
+ {PXA168_CLK_PLL1, "pll1", NULL, 0, 624000000},
+ {PXA168_CLK_USB_PLL, "usb_pll", NULL, 0, 480000000},
};
static struct mmp_param_fixed_factor_clk fixed_factor_clks[] = {
diff --git a/drivers/clk/mmp/clk-of-pxa1928.c b/drivers/clk/mmp/clk-of-pxa1928.c
index 433a5ae1eae0..e478ff44e170 100644
--- a/drivers/clk/mmp/clk-of-pxa1928.c
+++ b/drivers/clk/mmp/clk-of-pxa1928.c
@@ -34,12 +34,12 @@ struct pxa1928_clk_unit {
};
static struct mmp_param_fixed_rate_clk fixed_rate_clks[] = {
- {0, "clk32", NULL, CLK_IS_ROOT, 32768},
- {0, "vctcxo", NULL, CLK_IS_ROOT, 26000000},
- {0, "pll1_624", NULL, CLK_IS_ROOT, 624000000},
- {0, "pll5p", NULL, CLK_IS_ROOT, 832000000},
- {0, "pll5", NULL, CLK_IS_ROOT, 1248000000},
- {0, "usb_pll", NULL, CLK_IS_ROOT, 480000000},
+ {0, "clk32", NULL, 0, 32768},
+ {0, "vctcxo", NULL, 0, 26000000},
+ {0, "pll1_624", NULL, 0, 624000000},
+ {0, "pll5p", NULL, 0, 832000000},
+ {0, "pll5", NULL, 0, 1248000000},
+ {0, "usb_pll", NULL, 0, 480000000},
};
static struct mmp_param_fixed_factor_clk fixed_factor_clks[] = {
diff --git a/drivers/clk/mmp/clk-of-pxa910.c b/drivers/clk/mmp/clk-of-pxa910.c
index 13d6173326a4..e22a67f76d93 100644
--- a/drivers/clk/mmp/clk-of-pxa910.c
+++ b/drivers/clk/mmp/clk-of-pxa910.c
@@ -56,10 +56,10 @@ struct pxa910_clk_unit {
};
static struct mmp_param_fixed_rate_clk fixed_rate_clks[] = {
- {PXA910_CLK_CLK32, "clk32", NULL, CLK_IS_ROOT, 32768},
- {PXA910_CLK_VCTCXO, "vctcxo", NULL, CLK_IS_ROOT, 26000000},
- {PXA910_CLK_PLL1, "pll1", NULL, CLK_IS_ROOT, 624000000},
- {PXA910_CLK_USB_PLL, "usb_pll", NULL, CLK_IS_ROOT, 480000000},
+ {PXA910_CLK_CLK32, "clk32", NULL, 0, 32768},
+ {PXA910_CLK_VCTCXO, "vctcxo", NULL, 0, 26000000},
+ {PXA910_CLK_PLL1, "pll1", NULL, 0, 624000000},
+ {PXA910_CLK_USB_PLL, "usb_pll", NULL, 0, 480000000},
};
static struct mmp_param_fixed_factor_clk fixed_factor_clks[] = {
diff --git a/drivers/clk/mmp/clk-pxa168.c b/drivers/clk/mmp/clk-pxa168.c
index 0dd83fb950c9..a9ef9209532a 100644
--- a/drivers/clk/mmp/clk-pxa168.c
+++ b/drivers/clk/mmp/clk-pxa168.c
@@ -92,15 +92,13 @@ void __init pxa168_clk_init(phys_addr_t mpmu_phys, phys_addr_t apmu_phys,
return;
}
- clk = clk_register_fixed_rate(NULL, "clk32", NULL, CLK_IS_ROOT, 3200);
+ clk = clk_register_fixed_rate(NULL, "clk32", NULL, 0, 3200);
clk_register_clkdev(clk, "clk32", NULL);
- clk = clk_register_fixed_rate(NULL, "vctcxo", NULL, CLK_IS_ROOT,
- 26000000);
+ clk = clk_register_fixed_rate(NULL, "vctcxo", NULL, 0, 26000000);
clk_register_clkdev(clk, "vctcxo", NULL);
- clk = clk_register_fixed_rate(NULL, "pll1", NULL, CLK_IS_ROOT,
- 624000000);
+ clk = clk_register_fixed_rate(NULL, "pll1", NULL, 0, 624000000);
clk_register_clkdev(clk, "pll1", NULL);
clk = clk_register_fixed_factor(NULL, "pll1_2", "pll1",
diff --git a/drivers/clk/mmp/clk-pxa910.c b/drivers/clk/mmp/clk-pxa910.c
index e1d2ce22cdf1..a520cf7702a1 100644
--- a/drivers/clk/mmp/clk-pxa910.c
+++ b/drivers/clk/mmp/clk-pxa910.c
@@ -97,15 +97,13 @@ void __init pxa910_clk_init(phys_addr_t mpmu_phys, phys_addr_t apmu_phys,
return;
}
- clk = clk_register_fixed_rate(NULL, "clk32", NULL, CLK_IS_ROOT, 3200);
+ clk = clk_register_fixed_rate(NULL, "clk32", NULL, 0, 3200);
clk_register_clkdev(clk, "clk32", NULL);
- clk = clk_register_fixed_rate(NULL, "vctcxo", NULL, CLK_IS_ROOT,
- 26000000);
+ clk = clk_register_fixed_rate(NULL, "vctcxo", NULL, 0, 26000000);
clk_register_clkdev(clk, "vctcxo", NULL);
- clk = clk_register_fixed_rate(NULL, "pll1", NULL, CLK_IS_ROOT,
- 624000000);
+ clk = clk_register_fixed_rate(NULL, "pll1", NULL, 0, 624000000);
clk_register_clkdev(clk, "pll1", NULL);
clk = clk_register_fixed_factor(NULL, "pll1_2", "pll1",
diff --git a/drivers/clk/mvebu/Kconfig b/drivers/clk/mvebu/Kconfig
index eaee8f099c8c..3165da77d525 100644
--- a/drivers/clk/mvebu/Kconfig
+++ b/drivers/clk/mvebu/Kconfig
@@ -29,6 +29,12 @@ config ARMADA_XP_CLK
select MVEBU_CLK_COMMON
select MVEBU_CLK_CPU
+config ARMADA_AP806_SYSCON
+ bool
+
+config ARMADA_CP110_SYSCON
+ bool
+
config DOVE_CLK
bool
select MVEBU_CLK_COMMON
diff --git a/drivers/clk/mvebu/Makefile b/drivers/clk/mvebu/Makefile
index 8866115486f7..7172ef65693d 100644
--- a/drivers/clk/mvebu/Makefile
+++ b/drivers/clk/mvebu/Makefile
@@ -7,6 +7,8 @@ obj-$(CONFIG_ARMADA_375_CLK) += armada-375.o
obj-$(CONFIG_ARMADA_38X_CLK) += armada-38x.o
obj-$(CONFIG_ARMADA_39X_CLK) += armada-39x.o
obj-$(CONFIG_ARMADA_XP_CLK) += armada-xp.o
+obj-$(CONFIG_ARMADA_AP806_SYSCON) += ap806-system-controller.o
+obj-$(CONFIG_ARMADA_CP110_SYSCON) += cp110-system-controller.o
obj-$(CONFIG_DOVE_CLK) += dove.o dove-divider.o
obj-$(CONFIG_KIRKWOOD_CLK) += kirkwood.o
obj-$(CONFIG_ORION_CLK) += orion.o
diff --git a/drivers/clk/mvebu/ap806-system-controller.c b/drivers/clk/mvebu/ap806-system-controller.c
new file mode 100644
index 000000000000..02023baf86c9
--- /dev/null
+++ b/drivers/clk/mvebu/ap806-system-controller.c
@@ -0,0 +1,168 @@
+/*
+ * Marvell Armada AP806 System Controller
+ *
+ * Copyright (C) 2016 Marvell
+ *
+ * Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#define pr_fmt(fmt) "ap806-system-controller: " fmt
+
+#include <linux/clk-provider.h>
+#include <linux/mfd/syscon.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+
+#define AP806_SAR_REG 0x400
+#define AP806_SAR_CLKFREQ_MODE_MASK 0x1f
+
+#define AP806_CLK_NUM 4
+
+static struct clk *ap806_clks[AP806_CLK_NUM];
+
+static struct clk_onecell_data ap806_clk_data = {
+ .clks = ap806_clks,
+ .clk_num = AP806_CLK_NUM,
+};
+
+static int ap806_syscon_clk_probe(struct platform_device *pdev)
+{
+ unsigned int freq_mode, cpuclk_freq;
+ const char *name, *fixedclk_name;
+ struct device_node *np = pdev->dev.of_node;
+ struct regmap *regmap;
+ u32 reg;
+ int ret;
+
+ regmap = syscon_node_to_regmap(np);
+ if (IS_ERR(regmap)) {
+ dev_err(&pdev->dev, "cannot get regmap\n");
+ return PTR_ERR(regmap);
+ }
+
+ ret = regmap_read(regmap, AP806_SAR_REG, &reg);
+ if (ret) {
+ dev_err(&pdev->dev, "cannot read from regmap\n");
+ return ret;
+ }
+
+ freq_mode = reg & AP806_SAR_CLKFREQ_MODE_MASK;
+ switch (freq_mode) {
+ case 0x0 ... 0x5:
+ cpuclk_freq = 2000;
+ break;
+ case 0x6 ... 0xB:
+ cpuclk_freq = 1800;
+ break;
+ case 0xC ... 0x11:
+ cpuclk_freq = 1600;
+ break;
+ case 0x12 ... 0x16:
+ cpuclk_freq = 1400;
+ break;
+ case 0x17 ... 0x19:
+ cpuclk_freq = 1300;
+ break;
+ default:
+ dev_err(&pdev->dev, "invalid SAR value\n");
+ return -EINVAL;
+ }
+
+ /* Convert to hertz */
+ cpuclk_freq *= 1000 * 1000;
+
+ /* CPU clocks depend on the Sample At Reset configuration */
+ of_property_read_string_index(np, "clock-output-names",
+ 0, &name);
+ ap806_clks[0] = clk_register_fixed_rate(&pdev->dev, name, NULL,
+ 0, cpuclk_freq);
+ if (IS_ERR(ap806_clks[0])) {
+ ret = PTR_ERR(ap806_clks[0]);
+ goto fail0;
+ }
+
+ of_property_read_string_index(np, "clock-output-names",
+ 1, &name);
+ ap806_clks[1] = clk_register_fixed_rate(&pdev->dev, name, NULL, 0,
+ cpuclk_freq);
+ if (IS_ERR(ap806_clks[1])) {
+ ret = PTR_ERR(ap806_clks[1]);
+ goto fail1;
+ }
+
+	/* Fixed clock is always 1200 MHz */
+ of_property_read_string_index(np, "clock-output-names",
+ 2, &fixedclk_name);
+ ap806_clks[2] = clk_register_fixed_rate(&pdev->dev, fixedclk_name, NULL,
+ 0, 1200 * 1000 * 1000);
+ if (IS_ERR(ap806_clks[2])) {
+ ret = PTR_ERR(ap806_clks[2]);
+ goto fail2;
+ }
+
+ /* MSS Clock is fixed clock divided by 6 */
+ of_property_read_string_index(np, "clock-output-names",
+ 3, &name);
+ ap806_clks[3] = clk_register_fixed_factor(NULL, name, fixedclk_name,
+ 0, 1, 6);
+ if (IS_ERR(ap806_clks[3])) {
+ ret = PTR_ERR(ap806_clks[3]);
+ goto fail3;
+ }
+
+ ret = of_clk_add_provider(np, of_clk_src_onecell_get, &ap806_clk_data);
+ if (ret)
+ goto fail_clk_add;
+
+ return 0;
+
+fail_clk_add:
+ clk_unregister_fixed_factor(ap806_clks[3]);
+fail3:
+ clk_unregister_fixed_rate(ap806_clks[2]);
+fail2:
+ clk_unregister_fixed_rate(ap806_clks[1]);
+fail1:
+ clk_unregister_fixed_rate(ap806_clks[0]);
+fail0:
+ return ret;
+}
+
+static int ap806_syscon_clk_remove(struct platform_device *pdev)
+{
+ of_clk_del_provider(pdev->dev.of_node);
+ clk_unregister_fixed_factor(ap806_clks[3]);
+ clk_unregister_fixed_rate(ap806_clks[2]);
+ clk_unregister_fixed_rate(ap806_clks[1]);
+ clk_unregister_fixed_rate(ap806_clks[0]);
+
+ return 0;
+}
+
+static const struct of_device_id ap806_syscon_of_match[] = {
+ { .compatible = "marvell,ap806-system-controller", },
+ { }
+};
+MODULE_DEVICE_TABLE(of, ap806_syscon_of_match);
+
+static struct platform_driver ap806_syscon_driver = {
+ .probe = ap806_syscon_clk_probe,
+ .remove = ap806_syscon_clk_remove,
+ .driver = {
+ .name = "marvell-ap806-system-controller",
+ .of_match_table = ap806_syscon_of_match,
+ },
+};
+
+module_platform_driver(ap806_syscon_driver);
+
+MODULE_DESCRIPTION("Marvell AP806 System Controller driver");
+MODULE_AUTHOR("Thomas Petazzoni <thomas.petazzoni@free-electrons.com>");
+MODULE_LICENSE("GPL");
diff --git a/drivers/clk/mvebu/cp110-system-controller.c b/drivers/clk/mvebu/cp110-system-controller.c
new file mode 100644
index 000000000000..7fa42d6b2b92
--- /dev/null
+++ b/drivers/clk/mvebu/cp110-system-controller.c
@@ -0,0 +1,406 @@
+/*
+ * Marvell Armada CP110 System Controller
+ *
+ * Copyright (C) 2016 Marvell
+ *
+ * Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+/*
+ * CP110 has 5 core clocks:
+ *
+ * - APLL (1 GHz)
+ * - PPv2 core (1/3 APLL)
+ * - EIP (1/2 APLL)
+ * - Core (1/2 EIP)
+ *
+ * - NAND clock, which is either:
+ *   - equal to the core clock, or
+ *   - 2/5 APLL
+ *
+ * CP110 has 32 gatable clocks, for the various peripherals in the
+ * IP. They have fairly complicated parent/child relationships.
+ */
+
+#define pr_fmt(fmt) "cp110-system-controller: " fmt
+
+#include <linux/clk-provider.h>
+#include <linux/mfd/syscon.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+#include <linux/slab.h>
+
+#define CP110_PM_CLOCK_GATING_REG 0x220
+#define CP110_NAND_FLASH_CLK_CTRL_REG 0x700
+#define NF_CLOCK_SEL_400_MASK BIT(0)
+
+enum {
+ CP110_CLK_TYPE_CORE,
+ CP110_CLK_TYPE_GATABLE,
+};
+
+#define CP110_MAX_CORE_CLOCKS 5
+#define CP110_MAX_GATABLE_CLOCKS 32
+
+#define CP110_CLK_NUM \
+ (CP110_MAX_CORE_CLOCKS + CP110_MAX_GATABLE_CLOCKS)
+
+#define CP110_CORE_APLL 0
+#define CP110_CORE_PPV2 1
+#define CP110_CORE_EIP 2
+#define CP110_CORE_CORE 3
+#define CP110_CORE_NAND 4
+
+/* A number of gatable clocks need special handling */
+#define CP110_GATE_AUDIO 0
+#define CP110_GATE_COMM_UNIT 1
+#define CP110_GATE_NAND 2
+#define CP110_GATE_PPV2 3
+#define CP110_GATE_SDIO 4
+#define CP110_GATE_XOR1 7
+#define CP110_GATE_XOR0 8
+#define CP110_GATE_PCIE_X1_0 11
+#define CP110_GATE_PCIE_X1_1 12
+#define CP110_GATE_PCIE_X4 13
+#define CP110_GATE_PCIE_XOR 14
+#define CP110_GATE_SATA 15
+#define CP110_GATE_SATA_USB 16
+#define CP110_GATE_MAIN 17
+#define CP110_GATE_SDMMC 18
+#define CP110_GATE_SLOW_IO 21
+#define CP110_GATE_USB3H0 22
+#define CP110_GATE_USB3H1 23
+#define CP110_GATE_USB3DEV 24
+#define CP110_GATE_EIP150 25
+#define CP110_GATE_EIP197 26
+
+static struct clk *cp110_clks[CP110_CLK_NUM];
+
+static struct clk_onecell_data cp110_clk_data = {
+ .clks = cp110_clks,
+ .clk_num = CP110_CLK_NUM,
+};
+
+struct cp110_gate_clk {
+ struct clk_hw hw;
+ struct regmap *regmap;
+ u8 bit_idx;
+};
+
+#define to_cp110_gate_clk(clk) container_of(clk, struct cp110_gate_clk, hw)
+
+static int cp110_gate_enable(struct clk_hw *hw)
+{
+ struct cp110_gate_clk *gate = to_cp110_gate_clk(hw);
+
+ regmap_update_bits(gate->regmap, CP110_PM_CLOCK_GATING_REG,
+ BIT(gate->bit_idx), BIT(gate->bit_idx));
+
+ return 0;
+}
+
+static void cp110_gate_disable(struct clk_hw *hw)
+{
+ struct cp110_gate_clk *gate = to_cp110_gate_clk(hw);
+
+ regmap_update_bits(gate->regmap, CP110_PM_CLOCK_GATING_REG,
+ BIT(gate->bit_idx), 0);
+}
+
+static int cp110_gate_is_enabled(struct clk_hw *hw)
+{
+ struct cp110_gate_clk *gate = to_cp110_gate_clk(hw);
+ u32 val;
+
+ regmap_read(gate->regmap, CP110_PM_CLOCK_GATING_REG, &val);
+
+ return val & BIT(gate->bit_idx);
+}
+
+static const struct clk_ops cp110_gate_ops = {
+ .enable = cp110_gate_enable,
+ .disable = cp110_gate_disable,
+ .is_enabled = cp110_gate_is_enabled,
+};
+
+static struct clk *cp110_register_gate(const char *name,
+ const char *parent_name,
+ struct regmap *regmap, u8 bit_idx)
+{
+ struct cp110_gate_clk *gate;
+ struct clk *clk;
+ struct clk_init_data init;
+
+ gate = kzalloc(sizeof(*gate), GFP_KERNEL);
+ if (!gate)
+ return ERR_PTR(-ENOMEM);
+
+ init.name = name;
+ init.ops = &cp110_gate_ops;
+ init.parent_names = &parent_name;
+ init.num_parents = 1;
+
+ gate->regmap = regmap;
+ gate->bit_idx = bit_idx;
+ gate->hw.init = &init;
+
+ clk = clk_register(NULL, &gate->hw);
+ if (IS_ERR(clk))
+ kfree(gate);
+
+ return clk;
+}
+
+static void cp110_unregister_gate(struct clk *clk)
+{
+ struct clk_hw *hw;
+
+ hw = __clk_get_hw(clk);
+ if (!hw)
+ return;
+
+ clk_unregister(clk);
+ kfree(to_cp110_gate_clk(hw));
+}
+
+static struct clk *cp110_of_clk_get(struct of_phandle_args *clkspec, void *data)
+{
+ struct clk_onecell_data *clk_data = data;
+ unsigned int type = clkspec->args[0];
+ unsigned int idx = clkspec->args[1];
+
+ if (type == CP110_CLK_TYPE_CORE) {
+		if (idx >= CP110_MAX_CORE_CLOCKS)
+ return ERR_PTR(-EINVAL);
+ return clk_data->clks[idx];
+ } else if (type == CP110_CLK_TYPE_GATABLE) {
+		if (idx >= CP110_MAX_GATABLE_CLOCKS)
+ return ERR_PTR(-EINVAL);
+ return clk_data->clks[CP110_MAX_CORE_CLOCKS + idx];
+ }
+
+ return ERR_PTR(-EINVAL);
+}
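+
+/*
+ * Consumers address these clocks with a two-cell specifier: the first cell
+ * is the clock type (0 = core, 1 = gatable), the second the index. A
+ * hypothetical "clocks = <&cp110_syscon0 1 25>;" would, for instance, select
+ * the EIP150 gate; the node label is illustrative only.
+ */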
+
+static int cp110_syscon_clk_probe(struct platform_device *pdev)
+{
+ struct regmap *regmap;
+ struct device_node *np = pdev->dev.of_node;
+ const char *ppv2_name, *apll_name, *core_name, *eip_name, *nand_name;
+ struct clk *clk;
+ u32 nand_clk_ctrl;
+ int i, ret;
+
+ regmap = syscon_node_to_regmap(np);
+ if (IS_ERR(regmap))
+ return PTR_ERR(regmap);
+
+ ret = regmap_read(regmap, CP110_NAND_FLASH_CLK_CTRL_REG,
+ &nand_clk_ctrl);
+ if (ret)
+ return ret;
+
+ /* Register the APLL which is the root of the clk tree */
+ of_property_read_string_index(np, "core-clock-output-names",
+ CP110_CORE_APLL, &apll_name);
+ clk = clk_register_fixed_rate(NULL, apll_name, NULL, 0,
+ 1000 * 1000 * 1000);
+ if (IS_ERR(clk)) {
+ ret = PTR_ERR(clk);
+ goto fail0;
+ }
+
+ cp110_clks[CP110_CORE_APLL] = clk;
+
+ /* PPv2 is APLL/3 */
+ of_property_read_string_index(np, "core-clock-output-names",
+ CP110_CORE_PPV2, &ppv2_name);
+ clk = clk_register_fixed_factor(NULL, ppv2_name, apll_name, 0, 1, 3);
+ if (IS_ERR(clk)) {
+ ret = PTR_ERR(clk);
+ goto fail1;
+ }
+
+ cp110_clks[CP110_CORE_PPV2] = clk;
+
+ /* EIP clock is APLL/2 */
+ of_property_read_string_index(np, "core-clock-output-names",
+ CP110_CORE_EIP, &eip_name);
+ clk = clk_register_fixed_factor(NULL, eip_name, apll_name, 0, 1, 2);
+ if (IS_ERR(clk)) {
+ ret = PTR_ERR(clk);
+ goto fail2;
+ }
+
+ cp110_clks[CP110_CORE_EIP] = clk;
+
+ /* Core clock is EIP/2 */
+ of_property_read_string_index(np, "core-clock-output-names",
+ CP110_CORE_CORE, &core_name);
+ clk = clk_register_fixed_factor(NULL, core_name, eip_name, 0, 1, 2);
+ if (IS_ERR(clk)) {
+ ret = PTR_ERR(clk);
+ goto fail3;
+ }
+
+ cp110_clks[CP110_CORE_CORE] = clk;
+
+ /* NAND can be either APLL/2.5 or core clock */
+ of_property_read_string_index(np, "core-clock-output-names",
+ CP110_CORE_NAND, &nand_name);
+ if (nand_clk_ctrl & NF_CLOCK_SEL_400_MASK)
+ clk = clk_register_fixed_factor(NULL, nand_name,
+ apll_name, 0, 2, 5);
+ else
+ clk = clk_register_fixed_factor(NULL, nand_name,
+ core_name, 0, 1, 1);
+ if (IS_ERR(clk)) {
+ ret = PTR_ERR(clk);
+ goto fail4;
+ }
+
+ cp110_clks[CP110_CORE_NAND] = clk;
+
+ for (i = 0; i < CP110_MAX_GATABLE_CLOCKS; i++) {
+ const char *parent, *name;
+
+ ret = of_property_read_string_index(np,
+ "gate-clock-output-names",
+ i, &name);
+ /* Reached the end of the list? */
+ if (ret < 0)
+ break;
+
+ if (!strcmp(name, "none"))
+ continue;
+
+ switch (i) {
+ case CP110_GATE_AUDIO:
+ case CP110_GATE_COMM_UNIT:
+ case CP110_GATE_EIP150:
+ case CP110_GATE_EIP197:
+ case CP110_GATE_SLOW_IO:
+ of_property_read_string_index(np,
+ "gate-clock-output-names",
+ CP110_GATE_MAIN, &parent);
+ break;
+ case CP110_GATE_NAND:
+ parent = nand_name;
+ break;
+ case CP110_GATE_PPV2:
+ parent = ppv2_name;
+ break;
+ case CP110_GATE_SDIO:
+ of_property_read_string_index(np,
+ "gate-clock-output-names",
+ CP110_GATE_SDMMC, &parent);
+ break;
+ case CP110_GATE_XOR1:
+ case CP110_GATE_XOR0:
+ case CP110_GATE_PCIE_X1_0:
+ case CP110_GATE_PCIE_X1_1:
+ case CP110_GATE_PCIE_X4:
+ of_property_read_string_index(np,
+ "gate-clock-output-names",
+ CP110_GATE_PCIE_XOR, &parent);
+ break;
+ case CP110_GATE_SATA:
+ case CP110_GATE_USB3H0:
+ case CP110_GATE_USB3H1:
+ case CP110_GATE_USB3DEV:
+ of_property_read_string_index(np,
+ "gate-clock-output-names",
+ CP110_GATE_SATA_USB, &parent);
+ break;
+ default:
+ parent = core_name;
+ break;
+ }
+
+ clk = cp110_register_gate(name, parent, regmap, i);
+ if (IS_ERR(clk)) {
+ ret = PTR_ERR(clk);
+ goto fail_gate;
+ }
+
+ cp110_clks[CP110_MAX_CORE_CLOCKS + i] = clk;
+ }
+
+ ret = of_clk_add_provider(np, cp110_of_clk_get, &cp110_clk_data);
+ if (ret)
+ goto fail_clk_add;
+
+ return 0;
+
+fail_clk_add:
+fail_gate:
+ for (i = 0; i < CP110_MAX_GATABLE_CLOCKS; i++) {
+ clk = cp110_clks[CP110_MAX_CORE_CLOCKS + i];
+
+ if (clk)
+ cp110_unregister_gate(clk);
+ }
+
+ clk_unregister_fixed_factor(cp110_clks[CP110_CORE_NAND]);
+fail4:
+ clk_unregister_fixed_factor(cp110_clks[CP110_CORE_CORE]);
+fail3:
+ clk_unregister_fixed_factor(cp110_clks[CP110_CORE_EIP]);
+fail2:
+ clk_unregister_fixed_factor(cp110_clks[CP110_CORE_PPV2]);
+fail1:
+ clk_unregister_fixed_rate(cp110_clks[CP110_CORE_APLL]);
+fail0:
+ return ret;
+}
+
+static int cp110_syscon_clk_remove(struct platform_device *pdev)
+{
+ int i;
+
+ of_clk_del_provider(pdev->dev.of_node);
+
+ for (i = 0; i < CP110_MAX_GATABLE_CLOCKS; i++) {
+ struct clk *clk = cp110_clks[CP110_MAX_CORE_CLOCKS + i];
+
+ if (clk)
+ cp110_unregister_gate(clk);
+ }
+
+ clk_unregister_fixed_factor(cp110_clks[CP110_CORE_NAND]);
+ clk_unregister_fixed_factor(cp110_clks[CP110_CORE_CORE]);
+ clk_unregister_fixed_factor(cp110_clks[CP110_CORE_EIP]);
+ clk_unregister_fixed_factor(cp110_clks[CP110_CORE_PPV2]);
+ clk_unregister_fixed_rate(cp110_clks[CP110_CORE_APLL]);
+
+ return 0;
+}
+
+static const struct of_device_id cp110_syscon_of_match[] = {
+ { .compatible = "marvell,cp110-system-controller0", },
+ { }
+};
+MODULE_DEVICE_TABLE(of, cp110_syscon_of_match);
+
+static struct platform_driver cp110_syscon_driver = {
+ .probe = cp110_syscon_clk_probe,
+ .remove = cp110_syscon_clk_remove,
+ .driver = {
+ .name = "marvell-cp110-system-controller0",
+ .of_match_table = cp110_syscon_of_match,
+ },
+};
+
+module_platform_driver(cp110_syscon_driver);
+
+MODULE_DESCRIPTION("Marvell CP110 System Controller 0 driver");
+MODULE_AUTHOR("Thomas Petazzoni <thomas.petazzoni@free-electrons.com>");
+MODULE_LICENSE("GPL");
diff --git a/drivers/clk/nxp/clk-lpc18xx-creg.c b/drivers/clk/nxp/clk-lpc18xx-creg.c
index d44b61afa2dc..9e35749dafdf 100644
--- a/drivers/clk/nxp/clk-lpc18xx-creg.c
+++ b/drivers/clk/nxp/clk-lpc18xx-creg.c
@@ -147,6 +147,7 @@ static struct clk *clk_register_creg_clk(struct device *dev,
init.name = creg_clk->name;
init.parent_names = parent_name;
init.num_parents = 1;
+ init.flags = 0;
creg_clk->reg = syscon;
creg_clk->hw.init = &init;
diff --git a/drivers/clk/qcom/gcc-msm8916.c b/drivers/clk/qcom/gcc-msm8916.c
index 9c29080a84d8..5c4e193164d4 100644
--- a/drivers/clk/qcom/gcc-msm8916.c
+++ b/drivers/clk/qcom/gcc-msm8916.c
@@ -2346,6 +2346,7 @@ static struct clk_branch gcc_crypto_ahb_clk = {
"pcnoc_bfdcd_clk_src",
},
.num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
.ops = &clk_branch2_ops,
},
},
@@ -2381,6 +2382,7 @@ static struct clk_branch gcc_crypto_clk = {
"crypto_clk_src",
},
.num_parents = 1,
+ .flags = CLK_SET_RATE_PARENT,
.ops = &clk_branch2_ops,
},
},
diff --git a/drivers/clk/qcom/mmcc-msm8996.c b/drivers/clk/qcom/mmcc-msm8996.c
index 6df7ff36b416..847dd9dadeca 100644
--- a/drivers/clk/qcom/mmcc-msm8996.c
+++ b/drivers/clk/qcom/mmcc-msm8996.c
@@ -1279,21 +1279,6 @@ static struct clk_branch mmss_misc_cxo_clk = {
},
};
-static struct clk_branch mmss_mmagic_axi_clk = {
- .halt_reg = 0x506c,
- .clkr = {
- .enable_reg = 0x506c,
- .enable_mask = BIT(0),
- .hw.init = &(struct clk_init_data){
- .name = "mmss_mmagic_axi_clk",
- .parent_names = (const char *[]){ "axi_clk_src" },
- .num_parents = 1,
- .flags = CLK_SET_RATE_PARENT,
- .ops = &clk_branch2_ops,
- },
- },
-};
-
static struct clk_branch mmss_mmagic_maxi_clk = {
.halt_reg = 0x5074,
.clkr = {
@@ -1579,21 +1564,6 @@ static struct clk_branch smmu_video_axi_clk = {
},
};
-static struct clk_branch mmagic_bimc_axi_clk = {
- .halt_reg = 0x5294,
- .clkr = {
- .enable_reg = 0x5294,
- .enable_mask = BIT(0),
- .hw.init = &(struct clk_init_data){
- .name = "mmagic_bimc_axi_clk",
- .parent_names = (const char *[]){ "axi_clk_src" },
- .num_parents = 1,
- .flags = CLK_SET_RATE_PARENT,
- .ops = &clk_branch2_ops,
- },
- },
-};
-
static struct clk_branch mmagic_bimc_noc_cfg_ahb_clk = {
.halt_reg = 0x5298,
.clkr = {
@@ -3121,7 +3091,6 @@ static struct clk_regmap *mmcc_msm8996_clocks[] = {
[MMSS_MMAGIC_CFG_AHB_CLK] = &mmss_mmagic_cfg_ahb_clk.clkr,
[MMSS_MISC_AHB_CLK] = &mmss_misc_ahb_clk.clkr,
[MMSS_MISC_CXO_CLK] = &mmss_misc_cxo_clk.clkr,
- [MMSS_MMAGIC_AXI_CLK] = &mmss_mmagic_axi_clk.clkr,
[MMSS_MMAGIC_MAXI_CLK] = &mmss_mmagic_maxi_clk.clkr,
[MMAGIC_CAMSS_AXI_CLK] = &mmagic_camss_axi_clk.clkr,
[MMAGIC_CAMSS_NOC_CFG_AHB_CLK] = &mmagic_camss_noc_cfg_ahb_clk.clkr,
@@ -3141,7 +3110,6 @@ static struct clk_regmap *mmcc_msm8996_clocks[] = {
[MMAGIC_VIDEO_NOC_CFG_AHB_CLK] = &mmagic_video_noc_cfg_ahb_clk.clkr,
[SMMU_VIDEO_AHB_CLK] = &smmu_video_ahb_clk.clkr,
[SMMU_VIDEO_AXI_CLK] = &smmu_video_axi_clk.clkr,
- [MMAGIC_BIMC_AXI_CLK] = &mmagic_bimc_axi_clk.clkr,
[MMAGIC_BIMC_NOC_CFG_AHB_CLK] = &mmagic_bimc_noc_cfg_ahb_clk.clkr,
[GPU_GX_GFX3D_CLK] = &gpu_gx_gfx3d_clk.clkr,
[GPU_GX_RBBMTIMER_CLK] = &gpu_gx_rbbmtimer_clk.clkr,
diff --git a/drivers/clk/renesas/Kconfig b/drivers/clk/renesas/Kconfig
new file mode 100644
index 000000000000..2115ce410cfb
--- /dev/null
+++ b/drivers/clk/renesas/Kconfig
@@ -0,0 +1,16 @@
+config CLK_RENESAS_CPG_MSSR
+ bool
+ default y if ARCH_R8A7795
+
+config CLK_RENESAS_CPG_MSTP
+ bool
+ default y if ARCH_R7S72100
+ default y if ARCH_R8A73A4
+ default y if ARCH_R8A7740
+ default y if ARCH_R8A7778
+ default y if ARCH_R8A7779
+ default y if ARCH_R8A7790
+ default y if ARCH_R8A7791
+ default y if ARCH_R8A7793
+ default y if ARCH_R8A7794
+ default y if ARCH_SH73A0
diff --git a/drivers/clk/renesas/Makefile b/drivers/clk/renesas/Makefile
index 7e2579b30326..ead8bb843524 100644
--- a/drivers/clk/renesas/Makefile
+++ b/drivers/clk/renesas/Makefile
@@ -1,13 +1,15 @@
obj-$(CONFIG_ARCH_EMEV2) += clk-emev2.o
-obj-$(CONFIG_ARCH_R7S72100) += clk-rz.o clk-mstp.o
-obj-$(CONFIG_ARCH_R8A73A4) += clk-r8a73a4.o clk-mstp.o clk-div6.o
-obj-$(CONFIG_ARCH_R8A7740) += clk-r8a7740.o clk-mstp.o clk-div6.o
-obj-$(CONFIG_ARCH_R8A7778) += clk-r8a7778.o clk-mstp.o
-obj-$(CONFIG_ARCH_R8A7779) += clk-r8a7779.o clk-mstp.o
-obj-$(CONFIG_ARCH_R8A7790) += clk-rcar-gen2.o clk-mstp.o clk-div6.o
-obj-$(CONFIG_ARCH_R8A7791) += clk-rcar-gen2.o clk-mstp.o clk-div6.o
-obj-$(CONFIG_ARCH_R8A7793) += clk-rcar-gen2.o clk-mstp.o clk-div6.o
-obj-$(CONFIG_ARCH_R8A7794) += clk-rcar-gen2.o clk-mstp.o clk-div6.o
-obj-$(CONFIG_ARCH_R8A7795) += renesas-cpg-mssr.o \
- r8a7795-cpg-mssr.o clk-div6.o
-obj-$(CONFIG_ARCH_SH73A0) += clk-sh73a0.o clk-mstp.o clk-div6.o
+obj-$(CONFIG_ARCH_R7S72100) += clk-rz.o
+obj-$(CONFIG_ARCH_R8A73A4) += clk-r8a73a4.o clk-div6.o
+obj-$(CONFIG_ARCH_R8A7740) += clk-r8a7740.o clk-div6.o
+obj-$(CONFIG_ARCH_R8A7778) += clk-r8a7778.o
+obj-$(CONFIG_ARCH_R8A7779) += clk-r8a7779.o
+obj-$(CONFIG_ARCH_R8A7790) += clk-rcar-gen2.o clk-div6.o
+obj-$(CONFIG_ARCH_R8A7791) += clk-rcar-gen2.o clk-div6.o
+obj-$(CONFIG_ARCH_R8A7793) += clk-rcar-gen2.o clk-div6.o
+obj-$(CONFIG_ARCH_R8A7794) += clk-rcar-gen2.o clk-div6.o
+obj-$(CONFIG_ARCH_R8A7795) += r8a7795-cpg-mssr.o
+obj-$(CONFIG_ARCH_SH73A0) += clk-sh73a0.o clk-div6.o
+
+obj-$(CONFIG_CLK_RENESAS_CPG_MSSR) += renesas-cpg-mssr.o clk-div6.o
+obj-$(CONFIG_CLK_RENESAS_CPG_MSTP) += clk-mstp.o
diff --git a/drivers/clk/renesas/clk-mstp.c b/drivers/clk/renesas/clk-mstp.c
index 3d44e183aedd..5093a250650d 100644
--- a/drivers/clk/renesas/clk-mstp.c
+++ b/drivers/clk/renesas/clk-mstp.c
@@ -243,9 +243,7 @@ static void __init cpg_mstp_clocks_init(struct device_node *np)
}
CLK_OF_DECLARE(cpg_mstp_clks, "renesas,cpg-mstp-clocks", cpg_mstp_clocks_init);
-
-#ifdef CONFIG_PM_GENERIC_DOMAINS_OF
-int cpg_mstp_attach_dev(struct generic_pm_domain *domain, struct device *dev)
+int cpg_mstp_attach_dev(struct generic_pm_domain *unused, struct device *dev)
{
struct device_node *np = dev->of_node;
struct of_phandle_args clkspec;
@@ -297,7 +295,7 @@ fail_put:
return error;
}
-void cpg_mstp_detach_dev(struct generic_pm_domain *domain, struct device *dev)
+void cpg_mstp_detach_dev(struct generic_pm_domain *unused, struct device *dev)
{
if (!list_empty(&dev->power.subsys_data->clock_list))
pm_clk_destroy(dev);
@@ -318,12 +316,10 @@ void __init cpg_mstp_add_clk_domain(struct device_node *np)
return;
pd->name = np->name;
-
pd->flags = GENPD_FLAG_PM_CLK;
- pm_genpd_init(pd, &simple_qos_governor, false);
pd->attach_dev = cpg_mstp_attach_dev;
pd->detach_dev = cpg_mstp_detach_dev;
+ pm_genpd_init(pd, &pm_domain_always_on_gov, false);
of_genpd_add_provider_simple(np, pd);
}
-#endif /* !CONFIG_PM_GENERIC_DOMAINS_OF */
diff --git a/drivers/clk/renesas/r8a7795-cpg-mssr.c b/drivers/clk/renesas/r8a7795-cpg-mssr.c
index b2198aef5ed4..ca5519c583d4 100644
--- a/drivers/clk/renesas/r8a7795-cpg-mssr.c
+++ b/drivers/clk/renesas/r8a7795-cpg-mssr.c
@@ -13,6 +13,7 @@
*/
#include <linux/bug.h>
+#include <linux/clk.h>
#include <linux/clk-provider.h>
#include <linux/device.h>
#include <linux/err.h>
@@ -26,6 +27,7 @@
#include "renesas-cpg-mssr.h"
+#define CPG_RCKCR 0x240
enum clk_ids {
/* Core Clock Outputs exported to DT */
@@ -50,6 +52,7 @@ enum clk_ids {
CLK_S3,
CLK_SDSRC,
CLK_SSPSRC,
+ CLK_RINT,
/* Module Clocks */
MOD_CLK_BASE
@@ -63,8 +66,12 @@ enum r8a7795_clk_types {
CLK_TYPE_GEN3_PLL3,
CLK_TYPE_GEN3_PLL4,
CLK_TYPE_GEN3_SD,
+ CLK_TYPE_GEN3_R,
};
+#define DEF_GEN3_SD(_name, _id, _parent, _offset) \
+ DEF_BASE(_name, _id, CLK_TYPE_GEN3_SD, _parent, .offset = _offset)
+
static const struct cpg_core_clk r8a7795_core_clks[] __initconst = {
/* External Clock Inputs */
DEF_INPUT("extal", CLK_EXTAL),
@@ -102,10 +109,10 @@ static const struct cpg_core_clk r8a7795_core_clks[] __initconst = {
DEF_FIXED("s3d2", R8A7795_CLK_S3D2, CLK_S3, 2, 1),
DEF_FIXED("s3d4", R8A7795_CLK_S3D4, CLK_S3, 4, 1),
- DEF_SD("sd0", R8A7795_CLK_SD0, CLK_PLL1_DIV2, 0x0074),
- DEF_SD("sd1", R8A7795_CLK_SD1, CLK_PLL1_DIV2, 0x0078),
- DEF_SD("sd2", R8A7795_CLK_SD2, CLK_PLL1_DIV2, 0x0268),
- DEF_SD("sd3", R8A7795_CLK_SD3, CLK_PLL1_DIV2, 0x026c),
+ DEF_GEN3_SD("sd0", R8A7795_CLK_SD0, CLK_PLL1_DIV2, 0x0074),
+ DEF_GEN3_SD("sd1", R8A7795_CLK_SD1, CLK_PLL1_DIV2, 0x0078),
+ DEF_GEN3_SD("sd2", R8A7795_CLK_SD2, CLK_PLL1_DIV2, 0x0268),
+ DEF_GEN3_SD("sd3", R8A7795_CLK_SD3, CLK_PLL1_DIV2, 0x026c),
DEF_FIXED("cl", R8A7795_CLK_CL, CLK_PLL1_DIV2, 48, 1),
DEF_FIXED("cp", R8A7795_CLK_CP, CLK_EXTAL, 2, 1),
@@ -113,6 +120,12 @@ static const struct cpg_core_clk r8a7795_core_clks[] __initconst = {
DEF_DIV6P1("mso", R8A7795_CLK_MSO, CLK_PLL1_DIV4, 0x014),
DEF_DIV6P1("hdmi", R8A7795_CLK_HDMI, CLK_PLL1_DIV2, 0x250),
DEF_DIV6P1("canfd", R8A7795_CLK_CANFD, CLK_PLL1_DIV4, 0x244),
+ DEF_DIV6P1("csi0", R8A7795_CLK_CSI0, CLK_PLL1_DIV4, 0x00c),
+
+ DEF_DIV6_RO("osc", R8A7795_CLK_OSC, CLK_EXTAL, CPG_RCKCR, 8),
+ DEF_DIV6_RO("r_int", CLK_RINT, CLK_EXTAL, CPG_RCKCR, 32),
+
+ DEF_BASE("r", R8A7795_CLK_R, CLK_TYPE_GEN3_R, CLK_RINT),
};
static const struct mssr_mod_clk r8a7795_mod_clks[] __initconst = {
@@ -139,6 +152,7 @@ static const struct mssr_mod_clk r8a7795_mod_clks[] __initconst = {
DEF_MOD("usb3-if0", 328, R8A7795_CLK_S3D1),
DEF_MOD("usb-dmac0", 330, R8A7795_CLK_S3D1),
DEF_MOD("usb-dmac1", 331, R8A7795_CLK_S3D1),
+ DEF_MOD("rwdt0", 402, R8A7795_CLK_R),
DEF_MOD("intc-ex", 407, R8A7795_CLK_CP),
DEF_MOD("intc-ap", 408, R8A7795_CLK_S3D1),
DEF_MOD("audmac0", 502, R8A7795_CLK_S3D4),
@@ -148,6 +162,7 @@ static const struct mssr_mod_clk r8a7795_mod_clks[] __initconst = {
DEF_MOD("hscif2", 518, R8A7795_CLK_S3D1),
DEF_MOD("hscif1", 519, R8A7795_CLK_S3D1),
DEF_MOD("hscif0", 520, R8A7795_CLK_S3D1),
+ DEF_MOD("pwm", 523, R8A7795_CLK_S3D4),
DEF_MOD("fcpvd3", 600, R8A7795_CLK_S2D1),
DEF_MOD("fcpvd2", 601, R8A7795_CLK_S2D1),
DEF_MOD("fcpvd1", 602, R8A7795_CLK_S2D1),
@@ -176,6 +191,10 @@ static const struct mssr_mod_clk r8a7795_mod_clks[] __initconst = {
DEF_MOD("ehci1", 702, R8A7795_CLK_S3D4),
DEF_MOD("ehci0", 703, R8A7795_CLK_S3D4),
DEF_MOD("hsusb", 704, R8A7795_CLK_S3D4),
+ DEF_MOD("csi21", 713, R8A7795_CLK_CSI0),
+ DEF_MOD("csi20", 714, R8A7795_CLK_CSI0),
+ DEF_MOD("csi41", 715, R8A7795_CLK_CSI0),
+ DEF_MOD("csi40", 716, R8A7795_CLK_CSI0),
DEF_MOD("du3", 721, R8A7795_CLK_S2D1),
DEF_MOD("du2", 722, R8A7795_CLK_S2D1),
DEF_MOD("du1", 723, R8A7795_CLK_S2D1),
@@ -183,6 +202,14 @@ static const struct mssr_mod_clk r8a7795_mod_clks[] __initconst = {
DEF_MOD("lvds", 727, R8A7795_CLK_S2D1),
DEF_MOD("hdmi1", 728, R8A7795_CLK_HDMI),
DEF_MOD("hdmi0", 729, R8A7795_CLK_HDMI),
+ DEF_MOD("vin7", 804, R8A7795_CLK_S2D1),
+ DEF_MOD("vin6", 805, R8A7795_CLK_S2D1),
+ DEF_MOD("vin5", 806, R8A7795_CLK_S2D1),
+ DEF_MOD("vin4", 807, R8A7795_CLK_S2D1),
+ DEF_MOD("vin3", 808, R8A7795_CLK_S2D1),
+ DEF_MOD("vin2", 809, R8A7795_CLK_S2D1),
+ DEF_MOD("vin1", 810, R8A7795_CLK_S2D1),
+ DEF_MOD("vin0", 811, R8A7795_CLK_S2D1),
DEF_MOD("etheravb", 812, R8A7795_CLK_S3D2),
DEF_MOD("sata0", 815, R8A7795_CLK_S3D2),
DEF_MOD("gpio7", 905, R8A7795_CLK_CP),
@@ -578,6 +605,18 @@ struct clk * __init r8a7795_cpg_clk_register(struct device *dev,
case CLK_TYPE_GEN3_SD:
return cpg_sd_clk_register(core, base, __clk_get_name(parent));
+ case CLK_TYPE_GEN3_R:
+		/* RINT is the default; switch to EXTALR only if it is populated */
+ value = readl(base + CPG_RCKCR) & 0x3f;
+
+ if (clk_get_rate(clks[CLK_EXTALR])) {
+ parent = clks[CLK_EXTALR];
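+			/* BIT(15) presumably selects EXTALR as the RCLK source */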
+ value |= BIT(15);
+ }
+
+ writel(value, base + CPG_RCKCR);
+ break;
+
default:
return ERR_PTR(-EINVAL);
}
diff --git a/drivers/clk/renesas/renesas-cpg-mssr.c b/drivers/clk/renesas/renesas-cpg-mssr.c
index 58e24b326a48..210cd744a7a9 100644
--- a/drivers/clk/renesas/renesas-cpg-mssr.c
+++ b/drivers/clk/renesas/renesas-cpg-mssr.c
@@ -15,6 +15,7 @@
#include <linux/clk.h>
#include <linux/clk-provider.h>
+#include <linux/clk/renesas.h>
#include <linux/device.h>
#include <linux/init.h>
#include <linux/mod_devicetable.h>
@@ -253,7 +254,7 @@ static void __init cpg_mssr_register_core_clk(const struct cpg_core_clk *core,
{
struct clk *clk = NULL, *parent;
struct device *dev = priv->dev;
- unsigned int id = core->id;
+ unsigned int id = core->id, div = core->div;
const char *parent_name;
WARN_DEBUG(id >= priv->num_core_clks);
@@ -266,6 +267,7 @@ static void __init cpg_mssr_register_core_clk(const struct cpg_core_clk *core,
case CLK_TYPE_FF:
case CLK_TYPE_DIV6P1:
+ case CLK_TYPE_DIV6_RO:
WARN_DEBUG(core->parent >= priv->num_core_clks);
parent = priv->clks[core->parent];
if (IS_ERR(parent)) {
@@ -274,13 +276,18 @@ static void __init cpg_mssr_register_core_clk(const struct cpg_core_clk *core,
}
parent_name = __clk_get_name(parent);
- if (core->type == CLK_TYPE_FF) {
- clk = clk_register_fixed_factor(NULL, core->name,
- parent_name, 0,
- core->mult, core->div);
- } else {
+
+ if (core->type == CLK_TYPE_DIV6_RO)
+ /* Multiply by the DIV6 divider (register field + 1) */
+ div *= (readl(priv->base + core->offset) & 0x3f) + 1;
+
+ if (core->type == CLK_TYPE_DIV6P1) {
clk = cpg_div6_register(core->name, 1, &parent_name,
priv->base + core->offset);
+ } else {
+ clk = clk_register_fixed_factor(NULL, core->name,
+ parent_name, 0,
+ core->mult, div);
}
break;
@@ -375,8 +382,6 @@ fail:
kfree(clock);
}
-
-#ifdef CONFIG_PM_GENERIC_DOMAINS_OF
struct cpg_mssr_clk_domain {
struct generic_pm_domain genpd;
struct device_node *np;
@@ -384,6 +389,8 @@ struct cpg_mssr_clk_domain {
unsigned int core_pm_clks[0];
};
+static struct cpg_mssr_clk_domain *cpg_mssr_clk_domain;
+
static bool cpg_mssr_is_pm_clk(const struct of_phandle_args *clkspec,
struct cpg_mssr_clk_domain *pd)
{
@@ -407,17 +414,20 @@ static bool cpg_mssr_is_pm_clk(const struct of_phandle_args *clkspec,
}
}
-static int cpg_mssr_attach_dev(struct generic_pm_domain *genpd,
- struct device *dev)
+int cpg_mssr_attach_dev(struct generic_pm_domain *unused, struct device *dev)
{
- struct cpg_mssr_clk_domain *pd =
- container_of(genpd, struct cpg_mssr_clk_domain, genpd);
+ struct cpg_mssr_clk_domain *pd = cpg_mssr_clk_domain;
struct device_node *np = dev->of_node;
struct of_phandle_args clkspec;
struct clk *clk;
int i = 0;
int error;
+ if (!pd) {
+ dev_dbg(dev, "CPG/MSSR clock domain not yet available\n");
+ return -EPROBE_DEFER;
+ }
+
while (!of_parse_phandle_with_args(np, "clocks", "#clock-cells", i,
&clkspec)) {
if (cpg_mssr_is_pm_clk(&clkspec, pd))
@@ -457,8 +467,7 @@ fail_put:
return error;
}
-static void cpg_mssr_detach_dev(struct generic_pm_domain *genpd,
- struct device *dev)
+void cpg_mssr_detach_dev(struct generic_pm_domain *unused, struct device *dev)
{
if (!list_empty(&dev->power.subsys_data->clock_list))
pm_clk_destroy(dev);
@@ -484,22 +493,14 @@ static int __init cpg_mssr_add_clk_domain(struct device *dev,
genpd = &pd->genpd;
genpd->name = np->name;
genpd->flags = GENPD_FLAG_PM_CLK;
- pm_genpd_init(genpd, &simple_qos_governor, false);
genpd->attach_dev = cpg_mssr_attach_dev;
genpd->detach_dev = cpg_mssr_detach_dev;
+ pm_genpd_init(genpd, &pm_domain_always_on_gov, false);
+ cpg_mssr_clk_domain = pd;
of_genpd_add_provider_simple(np, genpd);
return 0;
}
-#else
-static inline int cpg_mssr_add_clk_domain(struct device *dev,
- const unsigned int *core_pm_clks,
- unsigned int num_core_pm_clks)
-{
- return 0;
-}
-#endif /* !CONFIG_PM_GENERIC_DOMAINS_OF */
-
static const struct of_device_id cpg_mssr_match[] = {
#ifdef CONFIG_ARCH_R8A7795
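
The new CLK_TYPE_DIV6_RO path registers a fixed-factor clock whose divisor is the static value from DEF_DIV6_RO multiplied by the DIV6 field read once from the register at registration time. A minimal sketch of that computation follows; the EXTAL rate and register value in the example are hypothetical, chosen only for illustration:

/* Illustrative only: rate of a read-only DIV6 clock as computed above. */
#include <stdint.h>

static unsigned long div6_ro_rate(unsigned long parent_rate,
                                  unsigned int static_div, uint32_t div6_reg)
{
        /* parent / static_div / ((register field & 0x3f) + 1) */
        return parent_rate / (static_div * ((div6_reg & 0x3f) + 1));
}

/*
 * With a hypothetical 16666666 Hz EXTAL and an RCKCR divider field of 0,
 * the two new R-Car H3 clocks come out as:
 *   "osc"   -> div6_ro_rate(16666666, 8, 0)  = 16666666 / 8
 *   "r_int" -> div6_ro_rate(16666666, 32, 0) = 16666666 / 32
 */
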
diff --git a/drivers/clk/renesas/renesas-cpg-mssr.h b/drivers/clk/renesas/renesas-cpg-mssr.h
index 952b6957233b..0d1e3e811e79 100644
--- a/drivers/clk/renesas/renesas-cpg-mssr.h
+++ b/drivers/clk/renesas/renesas-cpg-mssr.h
@@ -37,6 +37,7 @@ enum clk_types {
CLK_TYPE_IN, /* External Clock Input */
CLK_TYPE_FF, /* Fixed Factor Clock */
CLK_TYPE_DIV6P1, /* DIV6 Clock with 1 parent clock */
+ CLK_TYPE_DIV6_RO, /* DIV6 Clock read only with extra divisor */
/* Custom definitions start here */
CLK_TYPE_CUSTOM,
@@ -53,9 +54,8 @@ enum clk_types {
DEF_BASE(_name, _id, CLK_TYPE_FF, _parent, .div = _div, .mult = _mult)
#define DEF_DIV6P1(_name, _id, _parent, _offset) \
DEF_BASE(_name, _id, CLK_TYPE_DIV6P1, _parent, .offset = _offset)
-#define DEF_SD(_name, _id, _parent, _offset) \
- DEF_BASE(_name, _id, CLK_TYPE_GEN3_SD, _parent, .offset = _offset)
-
+#define DEF_DIV6_RO(_name, _id, _parent, _offset, _div) \
+ DEF_BASE(_name, _id, CLK_TYPE_DIV6_RO, _parent, .offset = _offset, .div = _div, .mult = 1)
/*
* Definitions of Module Clocks
diff --git a/drivers/clk/rockchip/Makefile b/drivers/clk/rockchip/Makefile
index 80b9a379beb4..f47a2fa962d2 100644
--- a/drivers/clk/rockchip/Makefile
+++ b/drivers/clk/rockchip/Makefile
@@ -15,3 +15,4 @@ obj-y += clk-rk3188.o
obj-y += clk-rk3228.o
obj-y += clk-rk3288.o
obj-y += clk-rk3368.o
+obj-y += clk-rk3399.o
diff --git a/drivers/clk/rockchip/clk-cpu.c b/drivers/clk/rockchip/clk-cpu.c
index 4e73ed5cab58..4bb130cd0062 100644
--- a/drivers/clk/rockchip/clk-cpu.c
+++ b/drivers/clk/rockchip/clk-cpu.c
@@ -158,12 +158,16 @@ static int rockchip_cpuclk_pre_rate_change(struct rockchip_cpuclk *cpuclk,
writel(HIWORD_UPDATE(alt_div, reg_data->div_core_mask,
reg_data->div_core_shift) |
- HIWORD_UPDATE(1, 1, reg_data->mux_core_shift),
+ HIWORD_UPDATE(reg_data->mux_core_alt,
+ reg_data->mux_core_mask,
+ reg_data->mux_core_shift),
cpuclk->reg_base + reg_data->core_reg);
} else {
/* select alternate parent */
- writel(HIWORD_UPDATE(1, 1, reg_data->mux_core_shift),
- cpuclk->reg_base + reg_data->core_reg);
+ writel(HIWORD_UPDATE(reg_data->mux_core_alt,
+ reg_data->mux_core_mask,
+ reg_data->mux_core_shift),
+ cpuclk->reg_base + reg_data->core_reg);
}
spin_unlock_irqrestore(cpuclk->lock, flags);
@@ -198,7 +202,9 @@ static int rockchip_cpuclk_post_rate_change(struct rockchip_cpuclk *cpuclk,
writel(HIWORD_UPDATE(0, reg_data->div_core_mask,
reg_data->div_core_shift) |
- HIWORD_UPDATE(0, 1, reg_data->mux_core_shift),
+ HIWORD_UPDATE(reg_data->mux_core_main,
+ reg_data->mux_core_mask,
+ reg_data->mux_core_shift),
cpuclk->reg_base + reg_data->core_reg);
if (ndata->old_rate > ndata->new_rate)
@@ -252,7 +258,7 @@ struct clk *rockchip_clk_register_cpuclk(const char *name,
return ERR_PTR(-ENOMEM);
init.name = name;
- init.parent_names = &parent_names[0];
+ init.parent_names = &parent_names[reg_data->mux_core_main];
init.num_parents = 1;
init.ops = &rockchip_cpuclk_ops;
@@ -270,10 +276,10 @@ struct clk *rockchip_clk_register_cpuclk(const char *name,
cpuclk->clk_nb.notifier_call = rockchip_cpuclk_notifier_cb;
cpuclk->hw.init = &init;
- cpuclk->alt_parent = __clk_lookup(parent_names[1]);
+ cpuclk->alt_parent = __clk_lookup(parent_names[reg_data->mux_core_alt]);
if (!cpuclk->alt_parent) {
- pr_err("%s: could not lookup alternate parent\n",
- __func__);
+ pr_err("%s: could not lookup alternate parent: (%d)\n",
+ __func__, reg_data->mux_core_alt);
ret = -EINVAL;
goto free_cpuclk;
}
@@ -285,10 +291,11 @@ struct clk *rockchip_clk_register_cpuclk(const char *name,
goto free_cpuclk;
}
- clk = __clk_lookup(parent_names[0]);
+ clk = __clk_lookup(parent_names[reg_data->mux_core_main]);
if (!clk) {
- pr_err("%s: could not lookup parent clock %s\n",
- __func__, parent_names[0]);
+ pr_err("%s: could not lookup parent clock: (%d) %s\n",
+ __func__, reg_data->mux_core_main,
+ parent_names[reg_data->mux_core_main]);
ret = -EINVAL;
goto free_alt_parent;
}
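
The cpuclk rework above stops hard-coding a one-bit mux and instead takes mux_core_main, mux_core_alt and mux_core_mask from the per-SoC reg_data, still writing them through HIWORD_UPDATE so the store only affects the selected bits. The snippet below sketches that idiom; the macro body is the usual Rockchip definition (value in the low half-word, write-enable mask in the high half-word) and should be checked against the driver's clk.h rather than taken from here:

/* Sketch of the write-enable-mask idiom used in the hunks above. */
#define HIWORD_UPDATE(val, mask, shift) \
        (((val) << (shift)) | ((mask) << ((shift) + 16)))

/*
 * Example: selecting alternate parent 1 on a 2-bit core mux at bit 6
 * (mux_core_alt = 1, mux_core_mask = 0x3, mux_core_shift = 6) yields
 * 0x00c00040, so the write only touches bits [7:6] of the register.
 */
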
diff --git a/drivers/clk/rockchip/clk-mmc-phase.c b/drivers/clk/rockchip/clk-mmc-phase.c
index e0dc7e83403a..bc856f21f6b2 100644
--- a/drivers/clk/rockchip/clk-mmc-phase.c
+++ b/drivers/clk/rockchip/clk-mmc-phase.c
@@ -123,7 +123,8 @@ static int rockchip_mmc_set_phase(struct clk_hw *hw, int degrees)
raw_value = delay_num ? ROCKCHIP_MMC_DELAY_SEL : 0;
raw_value |= delay_num << ROCKCHIP_MMC_DELAYNUM_OFFSET;
raw_value |= nineties;
- writel(HIWORD_UPDATE(raw_value, 0x07ff, mmc_clock->shift), mmc_clock->reg);
+ writel(HIWORD_UPDATE(raw_value, 0x07ff, mmc_clock->shift),
+ mmc_clock->reg);
pr_debug("%s->set_phase(%d) delay_nums=%u reg[0x%p]=0x%03x actual_degrees=%d\n",
clk_hw_get_name(hw), degrees, delay_num,
diff --git a/drivers/clk/rockchip/clk-pll.c b/drivers/clk/rockchip/clk-pll.c
index 5de797e34d54..db81e454166b 100644
--- a/drivers/clk/rockchip/clk-pll.c
+++ b/drivers/clk/rockchip/clk-pll.c
@@ -46,6 +46,8 @@ struct rockchip_clk_pll {
const struct rockchip_pll_rate_table *rate_table;
unsigned int rate_count;
spinlock_t *lock;
+
+ struct rockchip_clk_provider *ctx;
};
#define to_rockchip_clk_pll(_hw) container_of(_hw, struct rockchip_clk_pll, hw)
@@ -90,15 +92,10 @@ static long rockchip_pll_round_rate(struct clk_hw *hw,
*/
static int rockchip_pll_wait_lock(struct rockchip_clk_pll *pll)
{
- struct regmap *grf = rockchip_clk_get_grf();
+ struct regmap *grf = pll->ctx->grf;
unsigned int val;
int delay = 24000000, ret;
- if (IS_ERR(grf)) {
- pr_err("%s: grf regmap not available\n", __func__);
- return PTR_ERR(grf);
- }
-
while (delay > 0) {
ret = regmap_read(grf, pll->lock_offset, &val);
if (ret) {
@@ -234,7 +231,7 @@ static int rockchip_rk3036_pll_set_params(struct rockchip_clk_pll *pll,
/* wait for the pll to lock */
ret = rockchip_pll_wait_lock(pll);
if (ret) {
- pr_warn("%s: pll update unsucessful, trying to restore old params\n",
+ pr_warn("%s: pll update unsuccessful, trying to restore old params\n",
__func__);
rockchip_rk3036_pll_set_params(pll, &cur);
}
@@ -250,17 +247,9 @@ static int rockchip_rk3036_pll_set_rate(struct clk_hw *hw, unsigned long drate,
{
struct rockchip_clk_pll *pll = to_rockchip_clk_pll(hw);
const struct rockchip_pll_rate_table *rate;
- unsigned long old_rate = rockchip_rk3036_pll_recalc_rate(hw, prate);
- struct regmap *grf = rockchip_clk_get_grf();
- if (IS_ERR(grf)) {
- pr_debug("%s: grf regmap not available, aborting rate change\n",
- __func__);
- return PTR_ERR(grf);
- }
-
- pr_debug("%s: changing %s from %lu to %lu with a parent rate of %lu\n",
- __func__, __clk_get_name(hw->clk), old_rate, drate, prate);
+ pr_debug("%s: changing %s to %lu with a parent rate of %lu\n",
+ __func__, __clk_get_name(hw->clk), drate, prate);
/* Get required rate settings from table */
rate = rockchip_get_pll_settings(pll, drate);
@@ -473,7 +462,7 @@ static int rockchip_rk3066_pll_set_params(struct rockchip_clk_pll *pll,
/* wait for the pll to lock */
ret = rockchip_pll_wait_lock(pll);
if (ret) {
- pr_warn("%s: pll update unsucessful, trying to restore old params\n",
+ pr_warn("%s: pll update unsuccessful, trying to restore old params\n",
__func__);
rockchip_rk3066_pll_set_params(pll, &cur);
}
@@ -489,17 +478,9 @@ static int rockchip_rk3066_pll_set_rate(struct clk_hw *hw, unsigned long drate,
{
struct rockchip_clk_pll *pll = to_rockchip_clk_pll(hw);
const struct rockchip_pll_rate_table *rate;
- unsigned long old_rate = rockchip_rk3066_pll_recalc_rate(hw, prate);
- struct regmap *grf = rockchip_clk_get_grf();
- if (IS_ERR(grf)) {
- pr_debug("%s: grf regmap not available, aborting rate change\n",
- __func__);
- return PTR_ERR(grf);
- }
-
- pr_debug("%s: changing %s from %lu to %lu with a parent rate of %lu\n",
- __func__, clk_hw_get_name(hw), old_rate, drate, prate);
+ pr_debug("%s: changing %s to %lu with a parent rate of %lu\n",
+ __func__, clk_hw_get_name(hw), drate, prate);
/* Get required rate settings from table */
rate = rockchip_get_pll_settings(pll, drate);
@@ -563,11 +544,6 @@ static void rockchip_rk3066_pll_init(struct clk_hw *hw)
rate->no, cur.no, rate->nf, cur.nf, rate->nb, cur.nb);
if (rate->nr != cur.nr || rate->no != cur.no || rate->nf != cur.nf
|| rate->nb != cur.nb) {
- struct regmap *grf = rockchip_clk_get_grf();
-
- if (IS_ERR(grf))
- return;
-
pr_debug("%s: pll %s: rate params do not match rate table, adjusting\n",
__func__, clk_hw_get_name(hw));
rockchip_rk3066_pll_set_params(pll, rate);
@@ -591,16 +567,277 @@ static const struct clk_ops rockchip_rk3066_pll_clk_ops = {
.init = rockchip_rk3066_pll_init,
};
+/*
+ * PLL used in RK3399
+ */
+
+#define RK3399_PLLCON(i) (i * 0x4)
+#define RK3399_PLLCON0_FBDIV_MASK 0xfff
+#define RK3399_PLLCON0_FBDIV_SHIFT 0
+#define RK3399_PLLCON1_REFDIV_MASK 0x3f
+#define RK3399_PLLCON1_REFDIV_SHIFT 0
+#define RK3399_PLLCON1_POSTDIV1_MASK 0x7
+#define RK3399_PLLCON1_POSTDIV1_SHIFT 8
+#define RK3399_PLLCON1_POSTDIV2_MASK 0x7
+#define RK3399_PLLCON1_POSTDIV2_SHIFT 12
+#define RK3399_PLLCON2_FRAC_MASK 0xffffff
+#define RK3399_PLLCON2_FRAC_SHIFT 0
+#define RK3399_PLLCON2_LOCK_STATUS BIT(31)
+#define RK3399_PLLCON3_PWRDOWN BIT(0)
+#define RK3399_PLLCON3_DSMPD_MASK 0x1
+#define RK3399_PLLCON3_DSMPD_SHIFT 3
+
+static int rockchip_rk3399_pll_wait_lock(struct rockchip_clk_pll *pll)
+{
+ u32 pllcon;
+ int delay = 24000000;
+
+ /* poll the lock status bit in rk3399 xPLLCON2 */
+ while (delay > 0) {
+ pllcon = readl_relaxed(pll->reg_base + RK3399_PLLCON(2));
+ if (pllcon & RK3399_PLLCON2_LOCK_STATUS)
+ return 0;
+
+ delay--;
+ }
+
+ pr_err("%s: timeout waiting for pll to lock\n", __func__);
+ return -ETIMEDOUT;
+}
+
+static void rockchip_rk3399_pll_get_params(struct rockchip_clk_pll *pll,
+ struct rockchip_pll_rate_table *rate)
+{
+ u32 pllcon;
+
+ pllcon = readl_relaxed(pll->reg_base + RK3399_PLLCON(0));
+ rate->fbdiv = ((pllcon >> RK3399_PLLCON0_FBDIV_SHIFT)
+ & RK3399_PLLCON0_FBDIV_MASK);
+
+ pllcon = readl_relaxed(pll->reg_base + RK3399_PLLCON(1));
+ rate->refdiv = ((pllcon >> RK3399_PLLCON1_REFDIV_SHIFT)
+ & RK3399_PLLCON1_REFDIV_MASK);
+ rate->postdiv1 = ((pllcon >> RK3399_PLLCON1_POSTDIV1_SHIFT)
+ & RK3399_PLLCON1_POSTDIV1_MASK);
+ rate->postdiv2 = ((pllcon >> RK3399_PLLCON1_POSTDIV2_SHIFT)
+ & RK3399_PLLCON1_POSTDIV2_MASK);
+
+ pllcon = readl_relaxed(pll->reg_base + RK3399_PLLCON(2));
+ rate->frac = ((pllcon >> RK3399_PLLCON2_FRAC_SHIFT)
+ & RK3399_PLLCON2_FRAC_MASK);
+
+ pllcon = readl_relaxed(pll->reg_base + RK3399_PLLCON(3));
+ rate->dsmpd = ((pllcon >> RK3399_PLLCON3_DSMPD_SHIFT)
+ & RK3399_PLLCON3_DSMPD_MASK);
+}
+
+static unsigned long rockchip_rk3399_pll_recalc_rate(struct clk_hw *hw,
+ unsigned long prate)
+{
+ struct rockchip_clk_pll *pll = to_rockchip_clk_pll(hw);
+ struct rockchip_pll_rate_table cur;
+ u64 rate64 = prate;
+
+ rockchip_rk3399_pll_get_params(pll, &cur);
+
+ rate64 *= cur.fbdiv;
+ do_div(rate64, cur.refdiv);
+
+ if (cur.dsmpd == 0) {
+ /* fractional mode */
+ u64 frac_rate64 = prate * cur.frac;
+
+ do_div(frac_rate64, cur.refdiv);
+ rate64 += frac_rate64 >> 24;
+ }
+
+ do_div(rate64, cur.postdiv1);
+ do_div(rate64, cur.postdiv2);
+
+ return (unsigned long)rate64;
+}
+
+static int rockchip_rk3399_pll_set_params(struct rockchip_clk_pll *pll,
+ const struct rockchip_pll_rate_table *rate)
+{
+ const struct clk_ops *pll_mux_ops = pll->pll_mux_ops;
+ struct clk_mux *pll_mux = &pll->pll_mux;
+ struct rockchip_pll_rate_table cur;
+ u32 pllcon;
+ int rate_change_remuxed = 0;
+ int cur_parent;
+ int ret;
+
+ pr_debug("%s: rate settings for %lu fbdiv: %d, postdiv1: %d, refdiv: %d, postdiv2: %d, dsmpd: %d, frac: %d\n",
+ __func__, rate->rate, rate->fbdiv, rate->postdiv1, rate->refdiv,
+ rate->postdiv2, rate->dsmpd, rate->frac);
+
+ rockchip_rk3399_pll_get_params(pll, &cur);
+ cur.rate = 0;
+
+ cur_parent = pll_mux_ops->get_parent(&pll_mux->hw);
+ if (cur_parent == PLL_MODE_NORM) {
+ pll_mux_ops->set_parent(&pll_mux->hw, PLL_MODE_SLOW);
+ rate_change_remuxed = 1;
+ }
+
+ /* update pll values */
+ writel_relaxed(HIWORD_UPDATE(rate->fbdiv, RK3399_PLLCON0_FBDIV_MASK,
+ RK3399_PLLCON0_FBDIV_SHIFT),
+ pll->reg_base + RK3399_PLLCON(0));
+
+ writel_relaxed(HIWORD_UPDATE(rate->refdiv, RK3399_PLLCON1_REFDIV_MASK,
+ RK3399_PLLCON1_REFDIV_SHIFT) |
+ HIWORD_UPDATE(rate->postdiv1, RK3399_PLLCON1_POSTDIV1_MASK,
+ RK3399_PLLCON1_POSTDIV1_SHIFT) |
+ HIWORD_UPDATE(rate->postdiv2, RK3399_PLLCON1_POSTDIV2_MASK,
+ RK3399_PLLCON1_POSTDIV2_SHIFT),
+ pll->reg_base + RK3399_PLLCON(1));
+
+ /* xPLL CON2 is not HIWORD_MASK */
+ pllcon = readl_relaxed(pll->reg_base + RK3399_PLLCON(2));
+ pllcon &= ~(RK3399_PLLCON2_FRAC_MASK << RK3399_PLLCON2_FRAC_SHIFT);
+ pllcon |= rate->frac << RK3399_PLLCON2_FRAC_SHIFT;
+ writel_relaxed(pllcon, pll->reg_base + RK3399_PLLCON(2));
+
+ writel_relaxed(HIWORD_UPDATE(rate->dsmpd, RK3399_PLLCON3_DSMPD_MASK,
+ RK3399_PLLCON3_DSMPD_SHIFT),
+ pll->reg_base + RK3399_PLLCON(3));
+
+ /* wait for the pll to lock */
+ ret = rockchip_rk3399_pll_wait_lock(pll);
+ if (ret) {
+ pr_warn("%s: pll update unsuccessful, trying to restore old params\n",
+ __func__);
+ rockchip_rk3399_pll_set_params(pll, &cur);
+ }
+
+ if (rate_change_remuxed)
+ pll_mux_ops->set_parent(&pll_mux->hw, PLL_MODE_NORM);
+
+ return ret;
+}
+
+static int rockchip_rk3399_pll_set_rate(struct clk_hw *hw, unsigned long drate,
+ unsigned long prate)
+{
+ struct rockchip_clk_pll *pll = to_rockchip_clk_pll(hw);
+ const struct rockchip_pll_rate_table *rate;
+
+ pr_debug("%s: changing %s to %lu with a parent rate of %lu\n",
+ __func__, __clk_get_name(hw->clk), drate, prate);
+
+ /* Get required rate settings from table */
+ rate = rockchip_get_pll_settings(pll, drate);
+ if (!rate) {
+ pr_err("%s: Invalid rate : %lu for pll clk %s\n", __func__,
+ drate, __clk_get_name(hw->clk));
+ return -EINVAL;
+ }
+
+ return rockchip_rk3399_pll_set_params(pll, rate);
+}
+
+static int rockchip_rk3399_pll_enable(struct clk_hw *hw)
+{
+ struct rockchip_clk_pll *pll = to_rockchip_clk_pll(hw);
+
+ writel(HIWORD_UPDATE(0, RK3399_PLLCON3_PWRDOWN, 0),
+ pll->reg_base + RK3399_PLLCON(3));
+
+ return 0;
+}
+
+static void rockchip_rk3399_pll_disable(struct clk_hw *hw)
+{
+ struct rockchip_clk_pll *pll = to_rockchip_clk_pll(hw);
+
+ writel(HIWORD_UPDATE(RK3399_PLLCON3_PWRDOWN,
+ RK3399_PLLCON3_PWRDOWN, 0),
+ pll->reg_base + RK3399_PLLCON(3));
+}
+
+static int rockchip_rk3399_pll_is_enabled(struct clk_hw *hw)
+{
+ struct rockchip_clk_pll *pll = to_rockchip_clk_pll(hw);
+ u32 pllcon = readl(pll->reg_base + RK3399_PLLCON(3));
+
+ return !(pllcon & RK3399_PLLCON3_PWRDOWN);
+}
+
+static void rockchip_rk3399_pll_init(struct clk_hw *hw)
+{
+ struct rockchip_clk_pll *pll = to_rockchip_clk_pll(hw);
+ const struct rockchip_pll_rate_table *rate;
+ struct rockchip_pll_rate_table cur;
+ unsigned long drate;
+
+ if (!(pll->flags & ROCKCHIP_PLL_SYNC_RATE))
+ return;
+
+ drate = clk_hw_get_rate(hw);
+ rate = rockchip_get_pll_settings(pll, drate);
+
+ /* if the table has no entry for the current rate, rely on clk_set_rate */
+ if (!rate)
+ return;
+
+ rockchip_rk3399_pll_get_params(pll, &cur);
+
+ pr_debug("%s: pll %s@%lu: Hz\n", __func__, __clk_get_name(hw->clk),
+ drate);
+ pr_debug("old - fbdiv: %d, postdiv1: %d, refdiv: %d, postdiv2: %d, dsmpd: %d, frac: %d\n",
+ cur.fbdiv, cur.postdiv1, cur.refdiv, cur.postdiv2,
+ cur.dsmpd, cur.frac);
+ pr_debug("new - fbdiv: %d, postdiv1: %d, refdiv: %d, postdiv2: %d, dsmpd: %d, frac: %d\n",
+ rate->fbdiv, rate->postdiv1, rate->refdiv, rate->postdiv2,
+ rate->dsmpd, rate->frac);
+
+ if (rate->fbdiv != cur.fbdiv || rate->postdiv1 != cur.postdiv1 ||
+ rate->refdiv != cur.refdiv || rate->postdiv2 != cur.postdiv2 ||
+ rate->dsmpd != cur.dsmpd || rate->frac != cur.frac) {
+ struct clk *parent = clk_get_parent(hw->clk);
+
+ if (!parent) {
+ pr_warn("%s: parent of %s not available\n",
+ __func__, __clk_get_name(hw->clk));
+ return;
+ }
+
+ pr_debug("%s: pll %s: rate params do not match rate table, adjusting\n",
+ __func__, __clk_get_name(hw->clk));
+ rockchip_rk3399_pll_set_params(pll, rate);
+ }
+}
+
+static const struct clk_ops rockchip_rk3399_pll_clk_norate_ops = {
+ .recalc_rate = rockchip_rk3399_pll_recalc_rate,
+ .enable = rockchip_rk3399_pll_enable,
+ .disable = rockchip_rk3399_pll_disable,
+ .is_enabled = rockchip_rk3399_pll_is_enabled,
+};
+
+static const struct clk_ops rockchip_rk3399_pll_clk_ops = {
+ .recalc_rate = rockchip_rk3399_pll_recalc_rate,
+ .round_rate = rockchip_pll_round_rate,
+ .set_rate = rockchip_rk3399_pll_set_rate,
+ .enable = rockchip_rk3399_pll_enable,
+ .disable = rockchip_rk3399_pll_disable,
+ .is_enabled = rockchip_rk3399_pll_is_enabled,
+ .init = rockchip_rk3399_pll_init,
+};
+
/*
* Common registering of pll clocks
*/
-struct clk *rockchip_clk_register_pll(enum rockchip_pll_type pll_type,
+struct clk *rockchip_clk_register_pll(struct rockchip_clk_provider *ctx,
+ enum rockchip_pll_type pll_type,
const char *name, const char *const *parent_names,
- u8 num_parents, void __iomem *base, int con_offset,
- int grf_lock_offset, int lock_shift, int mode_offset,
- int mode_shift, struct rockchip_pll_rate_table *rate_table,
- u8 clk_pll_flags, spinlock_t *lock)
+ u8 num_parents, int con_offset, int grf_lock_offset,
+ int lock_shift, int mode_offset, int mode_shift,
+ struct rockchip_pll_rate_table *rate_table,
+ u8 clk_pll_flags)
{
const char *pll_parents[3];
struct clk_init_data init;
@@ -624,14 +861,16 @@ struct clk *rockchip_clk_register_pll(enum rockchip_pll_type pll_type,
/* create the mux on top of the real pll */
pll->pll_mux_ops = &clk_mux_ops;
pll_mux = &pll->pll_mux;
- pll_mux->reg = base + mode_offset;
+ pll_mux->reg = ctx->reg_base + mode_offset;
pll_mux->shift = mode_shift;
pll_mux->mask = PLL_MODE_MASK;
pll_mux->flags = 0;
- pll_mux->lock = lock;
+ pll_mux->lock = &ctx->lock;
pll_mux->hw.init = &init;
- if (pll_type == pll_rk3036 || pll_type == pll_rk3066)
+ if (pll_type == pll_rk3036 ||
+ pll_type == pll_rk3066 ||
+ pll_type == pll_rk3399)
pll_mux->flags |= CLK_MUX_HIWORD_MASK;
/* the actual muxing is xin24m, pll-output, xin32k */
@@ -677,17 +916,23 @@ struct clk *rockchip_clk_register_pll(enum rockchip_pll_type pll_type,
switch (pll_type) {
case pll_rk3036:
- if (!pll->rate_table)
+ if (!pll->rate_table || IS_ERR(ctx->grf))
init.ops = &rockchip_rk3036_pll_clk_norate_ops;
else
init.ops = &rockchip_rk3036_pll_clk_ops;
break;
case pll_rk3066:
- if (!pll->rate_table)
+ if (!pll->rate_table || IS_ERR(ctx->grf))
init.ops = &rockchip_rk3066_pll_clk_norate_ops;
else
init.ops = &rockchip_rk3066_pll_clk_ops;
break;
+ case pll_rk3399:
+ if (!pll->rate_table)
+ init.ops = &rockchip_rk3399_pll_clk_norate_ops;
+ else
+ init.ops = &rockchip_rk3399_pll_clk_ops;
+ break;
default:
pr_warn("%s: Unknown pll type for pll clk %s\n",
__func__, name);
@@ -695,11 +940,12 @@ struct clk *rockchip_clk_register_pll(enum rockchip_pll_type pll_type,
pll->hw.init = &init;
pll->type = pll_type;
- pll->reg_base = base + con_offset;
+ pll->reg_base = ctx->reg_base + con_offset;
pll->lock_offset = grf_lock_offset;
pll->lock_shift = lock_shift;
pll->flags = clk_pll_flags;
- pll->lock = lock;
+ pll->lock = &ctx->lock;
+ pll->ctx = ctx;
pll_clk = clk_register(NULL, &pll->hw);
if (IS_ERR(pll_clk)) {
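
The rk3399 PLL ops added above derive the output rate purely from the PLLCON fields read back by rockchip_rk3399_pll_get_params(); in integer mode (dsmpd = 1) the fractional term drops out. The helper below is an illustrative restatement of that arithmetic, not driver code, and the worked figure assumes the usual 24 MHz xin24m parent:

/* Illustrative only: the rate arithmetic of rockchip_rk3399_pll_recalc_rate(). */
#include <stdint.h>

static uint64_t rk3399_pll_rate(uint64_t prate, uint32_t fbdiv, uint32_t refdiv,
                                uint32_t postdiv1, uint32_t postdiv2,
                                uint32_t dsmpd, uint32_t frac)
{
        uint64_t rate = prate * fbdiv / refdiv;

        if (dsmpd == 0)                         /* fractional mode */
                rate += (prate * frac / refdiv) >> 24;

        return rate / postdiv1 / postdiv2;
}

/*
 * Cross-check against the 600 MHz entry of the rk3399 rate table added
 * below (refdiv 1, fbdiv 75, postdiv1 3, postdiv2 1, integer mode):
 *   24000000 * 75 / 1 / 3 / 1 = 600000000 Hz
 */
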
diff --git a/drivers/clk/rockchip/clk-rk3036.c b/drivers/clk/rockchip/clk-rk3036.c
index 7cdb2d61f3e0..924f560dcf80 100644
--- a/drivers/clk/rockchip/clk-rk3036.c
+++ b/drivers/clk/rockchip/clk-rk3036.c
@@ -113,7 +113,10 @@ static const struct rockchip_cpuclk_reg_data rk3036_cpuclk_data = {
.core_reg = RK2928_CLKSEL_CON(0),
.div_core_shift = 0,
.div_core_mask = 0x1f,
+ .mux_core_alt = 1,
+ .mux_core_main = 0,
.mux_core_shift = 7,
+ .mux_core_mask = 0x1,
};
PNAME(mux_pll_p) = { "xin24m", "xin24m" };
@@ -437,6 +440,7 @@ static const char *const rk3036_critical_clocks[] __initconst = {
static void __init rk3036_clk_init(struct device_node *np)
{
+ struct rockchip_clk_provider *ctx;
void __iomem *reg_base;
struct clk *clk;
@@ -446,22 +450,27 @@ static void __init rk3036_clk_init(struct device_node *np)
return;
}
- rockchip_clk_init(np, reg_base, CLK_NR_CLKS);
+ ctx = rockchip_clk_init(np, reg_base, CLK_NR_CLKS);
+ if (IS_ERR(ctx)) {
+ pr_err("%s: rockchip clk init failed\n", __func__);
+ iounmap(reg_base);
+ return;
+ }
clk = clk_register_fixed_factor(NULL, "usb480m", "xin24m", 0, 20, 1);
if (IS_ERR(clk))
pr_warn("%s: could not register clock usb480m: %ld\n",
__func__, PTR_ERR(clk));
- rockchip_clk_register_plls(rk3036_pll_clks,
+ rockchip_clk_register_plls(ctx, rk3036_pll_clks,
ARRAY_SIZE(rk3036_pll_clks),
RK3036_GRF_SOC_STATUS0);
- rockchip_clk_register_branches(rk3036_clk_branches,
+ rockchip_clk_register_branches(ctx, rk3036_clk_branches,
ARRAY_SIZE(rk3036_clk_branches));
rockchip_clk_protect_critical(rk3036_critical_clocks,
ARRAY_SIZE(rk3036_critical_clocks));
- rockchip_clk_register_armclk(ARMCLK, "armclk",
+ rockchip_clk_register_armclk(ctx, ARMCLK, "armclk",
mux_armclk_p, ARRAY_SIZE(mux_armclk_p),
&rk3036_cpuclk_data, rk3036_cpuclk_rates,
ARRAY_SIZE(rk3036_cpuclk_rates));
@@ -469,6 +478,8 @@ static void __init rk3036_clk_init(struct device_node *np)
rockchip_register_softrst(np, 9, reg_base + RK2928_SOFTRST_CON(0),
ROCKCHIP_SOFTRST_HIWORD_MASK);
- rockchip_register_restart_notifier(RK2928_GLB_SRST_FST, NULL);
+ rockchip_register_restart_notifier(ctx, RK2928_GLB_SRST_FST, NULL);
+
+ rockchip_clk_of_add_provider(np, ctx);
}
CLK_OF_DECLARE(rk3036_cru, "rockchip,rk3036-cru", rk3036_clk_init);
diff --git a/drivers/clk/rockchip/clk-rk3188.c b/drivers/clk/rockchip/clk-rk3188.c
index 40bab3901491..d0e722a0e8cf 100644
--- a/drivers/clk/rockchip/clk-rk3188.c
+++ b/drivers/clk/rockchip/clk-rk3188.c
@@ -155,7 +155,10 @@ static const struct rockchip_cpuclk_reg_data rk3066_cpuclk_data = {
.core_reg = RK2928_CLKSEL_CON(0),
.div_core_shift = 0,
.div_core_mask = 0x1f,
+ .mux_core_alt = 1,
+ .mux_core_main = 0,
.mux_core_shift = 8,
+ .mux_core_mask = 0x1,
};
#define RK3188_DIV_ACLK_CORE_MASK 0x7
@@ -191,7 +194,10 @@ static const struct rockchip_cpuclk_reg_data rk3188_cpuclk_data = {
.core_reg = RK2928_CLKSEL_CON(0),
.div_core_shift = 9,
.div_core_mask = 0x1f,
+ .mux_core_alt = 1,
+ .mux_core_main = 0,
.mux_core_shift = 8,
+ .mux_core_mask = 0x1,
};
PNAME(mux_pll_p) = { "xin24m", "xin32k" };
@@ -753,57 +759,75 @@ static const char *const rk3188_critical_clocks[] __initconst = {
"hclk_cpubus"
};
-static void __init rk3188_common_clk_init(struct device_node *np)
+static struct rockchip_clk_provider *__init rk3188_common_clk_init(struct device_node *np)
{
+ struct rockchip_clk_provider *ctx;
void __iomem *reg_base;
reg_base = of_iomap(np, 0);
if (!reg_base) {
pr_err("%s: could not map cru region\n", __func__);
- return;
+ return ERR_PTR(-ENOMEM);
}
- rockchip_clk_init(np, reg_base, CLK_NR_CLKS);
+ ctx = rockchip_clk_init(np, reg_base, CLK_NR_CLKS);
+ if (IS_ERR(ctx)) {
+ pr_err("%s: rockchip clk init failed\n", __func__);
+ iounmap(reg_base);
+ return ERR_PTR(-ENOMEM);
+ }
- rockchip_clk_register_branches(common_clk_branches,
+ rockchip_clk_register_branches(ctx, common_clk_branches,
ARRAY_SIZE(common_clk_branches));
rockchip_register_softrst(np, 9, reg_base + RK2928_SOFTRST_CON(0),
ROCKCHIP_SOFTRST_HIWORD_MASK);
- rockchip_register_restart_notifier(RK2928_GLB_SRST_FST, NULL);
+ rockchip_register_restart_notifier(ctx, RK2928_GLB_SRST_FST, NULL);
+
+ return ctx;
}
static void __init rk3066a_clk_init(struct device_node *np)
{
- rk3188_common_clk_init(np);
- rockchip_clk_register_plls(rk3066_pll_clks,
+ struct rockchip_clk_provider *ctx;
+
+ ctx = rk3188_common_clk_init(np);
+ if (IS_ERR(ctx))
+ return;
+
+ rockchip_clk_register_plls(ctx, rk3066_pll_clks,
ARRAY_SIZE(rk3066_pll_clks),
RK3066_GRF_SOC_STATUS);
- rockchip_clk_register_branches(rk3066a_clk_branches,
+ rockchip_clk_register_branches(ctx, rk3066a_clk_branches,
ARRAY_SIZE(rk3066a_clk_branches));
- rockchip_clk_register_armclk(ARMCLK, "armclk",
+ rockchip_clk_register_armclk(ctx, ARMCLK, "armclk",
mux_armclk_p, ARRAY_SIZE(mux_armclk_p),
&rk3066_cpuclk_data, rk3066_cpuclk_rates,
ARRAY_SIZE(rk3066_cpuclk_rates));
rockchip_clk_protect_critical(rk3188_critical_clocks,
ARRAY_SIZE(rk3188_critical_clocks));
+ rockchip_clk_of_add_provider(np, ctx);
}
CLK_OF_DECLARE(rk3066a_cru, "rockchip,rk3066a-cru", rk3066a_clk_init);
static void __init rk3188a_clk_init(struct device_node *np)
{
+ struct rockchip_clk_provider *ctx;
struct clk *clk1, *clk2;
unsigned long rate;
int ret;
- rk3188_common_clk_init(np);
- rockchip_clk_register_plls(rk3188_pll_clks,
+ ctx = rk3188_common_clk_init(np);
+ if (IS_ERR(ctx))
+ return;
+
+ rockchip_clk_register_plls(ctx, rk3188_pll_clks,
ARRAY_SIZE(rk3188_pll_clks),
RK3188_GRF_SOC_STATUS);
- rockchip_clk_register_branches(rk3188_clk_branches,
+ rockchip_clk_register_branches(ctx, rk3188_clk_branches,
ARRAY_SIZE(rk3188_clk_branches));
- rockchip_clk_register_armclk(ARMCLK, "armclk",
+ rockchip_clk_register_armclk(ctx, ARMCLK, "armclk",
mux_armclk_p, ARRAY_SIZE(mux_armclk_p),
&rk3188_cpuclk_data, rk3188_cpuclk_rates,
ARRAY_SIZE(rk3188_cpuclk_rates));
@@ -827,6 +851,7 @@ static void __init rk3188a_clk_init(struct device_node *np)
rockchip_clk_protect_critical(rk3188_critical_clocks,
ARRAY_SIZE(rk3188_critical_clocks));
+ rockchip_clk_of_add_provider(np, ctx);
}
CLK_OF_DECLARE(rk3188a_cru, "rockchip,rk3188a-cru", rk3188a_clk_init);
diff --git a/drivers/clk/rockchip/clk-rk3228.c b/drivers/clk/rockchip/clk-rk3228.c
index 7702d2855e9c..016bdb0b793a 100644
--- a/drivers/clk/rockchip/clk-rk3228.c
+++ b/drivers/clk/rockchip/clk-rk3228.c
@@ -111,7 +111,10 @@ static const struct rockchip_cpuclk_reg_data rk3228_cpuclk_data = {
.core_reg = RK2928_CLKSEL_CON(0),
.div_core_shift = 0,
.div_core_mask = 0x1f,
+ .mux_core_alt = 1,
+ .mux_core_main = 0,
.mux_core_shift = 6,
+ .mux_core_mask = 0x1,
};
PNAME(mux_pll_p) = { "clk_24m", "xin24m" };
@@ -625,6 +628,7 @@ static const char *const rk3228_critical_clocks[] __initconst = {
static void __init rk3228_clk_init(struct device_node *np)
{
+ struct rockchip_clk_provider *ctx;
void __iomem *reg_base;
reg_base = of_iomap(np, 0);
@@ -633,17 +637,22 @@ static void __init rk3228_clk_init(struct device_node *np)
return;
}
- rockchip_clk_init(np, reg_base, CLK_NR_CLKS);
+ ctx = rockchip_clk_init(np, reg_base, CLK_NR_CLKS);
+ if (IS_ERR(ctx)) {
+ pr_err("%s: rockchip clk init failed\n", __func__);
+ iounmap(reg_base);
+ return;
+ }
- rockchip_clk_register_plls(rk3228_pll_clks,
+ rockchip_clk_register_plls(ctx, rk3228_pll_clks,
ARRAY_SIZE(rk3228_pll_clks),
RK3228_GRF_SOC_STATUS0);
- rockchip_clk_register_branches(rk3228_clk_branches,
+ rockchip_clk_register_branches(ctx, rk3228_clk_branches,
ARRAY_SIZE(rk3228_clk_branches));
rockchip_clk_protect_critical(rk3228_critical_clocks,
ARRAY_SIZE(rk3228_critical_clocks));
- rockchip_clk_register_armclk(ARMCLK, "armclk",
+ rockchip_clk_register_armclk(ctx, ARMCLK, "armclk",
mux_armclk_p, ARRAY_SIZE(mux_armclk_p),
&rk3228_cpuclk_data, rk3228_cpuclk_rates,
ARRAY_SIZE(rk3228_cpuclk_rates));
@@ -651,6 +660,8 @@ static void __init rk3228_clk_init(struct device_node *np)
rockchip_register_softrst(np, 9, reg_base + RK2928_SOFTRST_CON(0),
ROCKCHIP_SOFTRST_HIWORD_MASK);
- rockchip_register_restart_notifier(RK3228_GLB_SRST_FST, NULL);
+ rockchip_register_restart_notifier(ctx, RK3228_GLB_SRST_FST, NULL);
+
+ rockchip_clk_of_add_provider(np, ctx);
}
CLK_OF_DECLARE(rk3228_cru, "rockchip,rk3228-cru", rk3228_clk_init);
diff --git a/drivers/clk/rockchip/clk-rk3288.c b/drivers/clk/rockchip/clk-rk3288.c
index 3cb72163a512..39af05a589b3 100644
--- a/drivers/clk/rockchip/clk-rk3288.c
+++ b/drivers/clk/rockchip/clk-rk3288.c
@@ -165,7 +165,10 @@ static const struct rockchip_cpuclk_reg_data rk3288_cpuclk_data = {
.core_reg = RK3288_CLKSEL_CON(0),
.div_core_shift = 8,
.div_core_mask = 0x1f,
+ .mux_core_alt = 1,
+ .mux_core_main = 0,
.mux_core_shift = 15,
+ .mux_core_mask = 0x1,
};
PNAME(mux_pll_p) = { "xin24m", "xin32k" };
@@ -878,6 +881,7 @@ static struct syscore_ops rk3288_clk_syscore_ops = {
static void __init rk3288_clk_init(struct device_node *np)
{
+ struct rockchip_clk_provider *ctx;
struct clk *clk;
rk3288_cru_base = of_iomap(np, 0);
@@ -886,7 +890,12 @@ static void __init rk3288_clk_init(struct device_node *np)
return;
}
- rockchip_clk_init(np, rk3288_cru_base, CLK_NR_CLKS);
+ ctx = rockchip_clk_init(np, rk3288_cru_base, CLK_NR_CLKS);
+ if (IS_ERR(ctx)) {
+ pr_err("%s: rockchip clk init failed\n", __func__);
+ iounmap(rk3288_cru_base);
+ return;
+ }
/* Watchdog pclk is controlled by RK3288_SGRF_SOC_CON0[1]. */
clk = clk_register_fixed_factor(NULL, "pclk_wdt", "pclk_pd_alive", 0, 1, 1);
@@ -894,17 +903,17 @@ static void __init rk3288_clk_init(struct device_node *np)
pr_warn("%s: could not register clock pclk_wdt: %ld\n",
__func__, PTR_ERR(clk));
else
- rockchip_clk_add_lookup(clk, PCLK_WDT);
+ rockchip_clk_add_lookup(ctx, clk, PCLK_WDT);
- rockchip_clk_register_plls(rk3288_pll_clks,
+ rockchip_clk_register_plls(ctx, rk3288_pll_clks,
ARRAY_SIZE(rk3288_pll_clks),
RK3288_GRF_SOC_STATUS1);
- rockchip_clk_register_branches(rk3288_clk_branches,
+ rockchip_clk_register_branches(ctx, rk3288_clk_branches,
ARRAY_SIZE(rk3288_clk_branches));
rockchip_clk_protect_critical(rk3288_critical_clocks,
ARRAY_SIZE(rk3288_critical_clocks));
- rockchip_clk_register_armclk(ARMCLK, "armclk",
+ rockchip_clk_register_armclk(ctx, ARMCLK, "armclk",
mux_armclk_p, ARRAY_SIZE(mux_armclk_p),
&rk3288_cpuclk_data, rk3288_cpuclk_rates,
ARRAY_SIZE(rk3288_cpuclk_rates));
@@ -913,8 +922,10 @@ static void __init rk3288_clk_init(struct device_node *np)
rk3288_cru_base + RK3288_SOFTRST_CON(0),
ROCKCHIP_SOFTRST_HIWORD_MASK);
- rockchip_register_restart_notifier(RK3288_GLB_SRST_FST,
+ rockchip_register_restart_notifier(ctx, RK3288_GLB_SRST_FST,
rk3288_clk_shutdown);
register_syscore_ops(&rk3288_clk_syscore_ops);
+
+ rockchip_clk_of_add_provider(np, ctx);
}
CLK_OF_DECLARE(rk3288_cru, "rockchip,rk3288-cru", rk3288_clk_init);
diff --git a/drivers/clk/rockchip/clk-rk3368.c b/drivers/clk/rockchip/clk-rk3368.c
index a2bb12200465..6cb474c593e7 100644
--- a/drivers/clk/rockchip/clk-rk3368.c
+++ b/drivers/clk/rockchip/clk-rk3368.c
@@ -165,14 +165,20 @@ static const struct rockchip_cpuclk_reg_data rk3368_cpuclkb_data = {
.core_reg = RK3368_CLKSEL_CON(0),
.div_core_shift = 0,
.div_core_mask = 0x1f,
+ .mux_core_alt = 1,
+ .mux_core_main = 0,
.mux_core_shift = 7,
+ .mux_core_mask = 0x1,
};
static const struct rockchip_cpuclk_reg_data rk3368_cpuclkl_data = {
.core_reg = RK3368_CLKSEL_CON(2),
.div_core_shift = 0,
+ .mux_core_alt = 1,
+ .mux_core_main = 0,
.div_core_mask = 0x1f,
.mux_core_shift = 7,
+ .mux_core_mask = 0x1,
};
#define RK3368_DIV_ACLKM_MASK 0x1f
@@ -856,6 +862,7 @@ static const char *const rk3368_critical_clocks[] __initconst = {
static void __init rk3368_clk_init(struct device_node *np)
{
+ struct rockchip_clk_provider *ctx;
void __iomem *reg_base;
struct clk *clk;
@@ -865,7 +872,12 @@ static void __init rk3368_clk_init(struct device_node *np)
return;
}
- rockchip_clk_init(np, reg_base, CLK_NR_CLKS);
+ ctx = rockchip_clk_init(np, reg_base, CLK_NR_CLKS);
+ if (IS_ERR(ctx)) {
+ pr_err("%s: rockchip clk init failed\n", __func__);
+ iounmap(reg_base);
+ return;
+ }
/* Watchdog pclk is controlled by sgrf_soc_con3[7]. */
clk = clk_register_fixed_factor(NULL, "pclk_wdt", "pclk_pd_alive", 0, 1, 1);
@@ -873,22 +885,22 @@ static void __init rk3368_clk_init(struct device_node *np)
pr_warn("%s: could not register clock pclk_wdt: %ld\n",
__func__, PTR_ERR(clk));
else
- rockchip_clk_add_lookup(clk, PCLK_WDT);
+ rockchip_clk_add_lookup(ctx, clk, PCLK_WDT);
- rockchip_clk_register_plls(rk3368_pll_clks,
+ rockchip_clk_register_plls(ctx, rk3368_pll_clks,
ARRAY_SIZE(rk3368_pll_clks),
RK3368_GRF_SOC_STATUS0);
- rockchip_clk_register_branches(rk3368_clk_branches,
+ rockchip_clk_register_branches(ctx, rk3368_clk_branches,
ARRAY_SIZE(rk3368_clk_branches));
rockchip_clk_protect_critical(rk3368_critical_clocks,
ARRAY_SIZE(rk3368_critical_clocks));
- rockchip_clk_register_armclk(ARMCLKB, "armclkb",
+ rockchip_clk_register_armclk(ctx, ARMCLKB, "armclkb",
mux_armclkb_p, ARRAY_SIZE(mux_armclkb_p),
&rk3368_cpuclkb_data, rk3368_cpuclkb_rates,
ARRAY_SIZE(rk3368_cpuclkb_rates));
- rockchip_clk_register_armclk(ARMCLKL, "armclkl",
+ rockchip_clk_register_armclk(ctx, ARMCLKL, "armclkl",
mux_armclkl_p, ARRAY_SIZE(mux_armclkl_p),
&rk3368_cpuclkl_data, rk3368_cpuclkl_rates,
ARRAY_SIZE(rk3368_cpuclkl_rates));
@@ -896,6 +908,8 @@ static void __init rk3368_clk_init(struct device_node *np)
rockchip_register_softrst(np, 15, reg_base + RK3368_SOFTRST_CON(0),
ROCKCHIP_SOFTRST_HIWORD_MASK);
- rockchip_register_restart_notifier(RK3368_GLB_SRST_FST, NULL);
+ rockchip_register_restart_notifier(ctx, RK3368_GLB_SRST_FST, NULL);
+
+ rockchip_clk_of_add_provider(np, ctx);
}
CLK_OF_DECLARE(rk3368_cru, "rockchip,rk3368-cru", rk3368_clk_init);
diff --git a/drivers/clk/rockchip/clk-rk3399.c b/drivers/clk/rockchip/clk-rk3399.c
new file mode 100644
index 000000000000..291543f52caa
--- /dev/null
+++ b/drivers/clk/rockchip/clk-rk3399.c
@@ -0,0 +1,1573 @@
+/*
+ * Copyright (c) 2016 Rockchip Electronics Co. Ltd.
+ * Author: Xing Zheng <zhengxing@rock-chips.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/clk-provider.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+#include <dt-bindings/clock/rk3399-cru.h>
+#include "clk.h"
+
+enum rk3399_plls {
+ lpll, bpll, dpll, cpll, gpll, npll, vpll,
+};
+
+enum rk3399_pmu_plls {
+ ppll,
+};
+
+static struct rockchip_pll_rate_table rk3399_pll_rates[] = {
+ /* _mhz, _refdiv, _fbdiv, _postdiv1, _postdiv2, _dsmpd, _frac */
+ RK3036_PLL_RATE(2208000000, 1, 92, 1, 1, 1, 0),
+ RK3036_PLL_RATE(2184000000, 1, 91, 1, 1, 1, 0),
+ RK3036_PLL_RATE(2160000000, 1, 90, 1, 1, 1, 0),
+ RK3036_PLL_RATE(2136000000, 1, 89, 1, 1, 1, 0),
+ RK3036_PLL_RATE(2112000000, 1, 88, 1, 1, 1, 0),
+ RK3036_PLL_RATE(2088000000, 1, 87, 1, 1, 1, 0),
+ RK3036_PLL_RATE(2064000000, 1, 86, 1, 1, 1, 0),
+ RK3036_PLL_RATE(2040000000, 1, 85, 1, 1, 1, 0),
+ RK3036_PLL_RATE(2016000000, 1, 84, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1992000000, 1, 83, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1968000000, 1, 82, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1944000000, 1, 81, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1920000000, 1, 80, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1896000000, 1, 79, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1872000000, 1, 78, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1848000000, 1, 77, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1824000000, 1, 76, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1800000000, 1, 75, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1776000000, 1, 74, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1752000000, 1, 73, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1728000000, 1, 72, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1704000000, 1, 71, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1680000000, 1, 70, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1656000000, 1, 69, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1632000000, 1, 68, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1608000000, 1, 67, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1584000000, 1, 66, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1560000000, 1, 65, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1536000000, 1, 64, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1512000000, 1, 63, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1488000000, 1, 62, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1464000000, 1, 61, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1440000000, 1, 60, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1416000000, 1, 59, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1392000000, 1, 58, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1368000000, 1, 57, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1344000000, 1, 56, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1320000000, 1, 55, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1296000000, 1, 54, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1272000000, 1, 53, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1248000000, 1, 52, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1200000000, 1, 50, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1188000000, 2, 99, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1104000000, 1, 46, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1100000000, 12, 550, 1, 1, 1, 0),
+ RK3036_PLL_RATE(1008000000, 1, 84, 2, 1, 1, 0),
+ RK3036_PLL_RATE(1000000000, 6, 500, 2, 1, 1, 0),
+ RK3036_PLL_RATE( 984000000, 1, 82, 2, 1, 1, 0),
+ RK3036_PLL_RATE( 960000000, 1, 80, 2, 1, 1, 0),
+ RK3036_PLL_RATE( 936000000, 1, 78, 2, 1, 1, 0),
+ RK3036_PLL_RATE( 912000000, 1, 76, 2, 1, 1, 0),
+ RK3036_PLL_RATE( 900000000, 4, 300, 2, 1, 1, 0),
+ RK3036_PLL_RATE( 888000000, 1, 74, 2, 1, 1, 0),
+ RK3036_PLL_RATE( 864000000, 1, 72, 2, 1, 1, 0),
+ RK3036_PLL_RATE( 840000000, 1, 70, 2, 1, 1, 0),
+ RK3036_PLL_RATE( 816000000, 1, 68, 2, 1, 1, 0),
+ RK3036_PLL_RATE( 800000000, 6, 400, 2, 1, 1, 0),
+ RK3036_PLL_RATE( 700000000, 6, 350, 2, 1, 1, 0),
+ RK3036_PLL_RATE( 696000000, 1, 58, 2, 1, 1, 0),
+ RK3036_PLL_RATE( 676000000, 3, 169, 2, 1, 1, 0),
+ RK3036_PLL_RATE( 600000000, 1, 75, 3, 1, 1, 0),
+ RK3036_PLL_RATE( 594000000, 1, 99, 4, 1, 1, 0),
+ RK3036_PLL_RATE( 504000000, 1, 63, 3, 1, 1, 0),
+ RK3036_PLL_RATE( 500000000, 6, 250, 2, 1, 1, 0),
+ RK3036_PLL_RATE( 408000000, 1, 68, 2, 2, 1, 0),
+ RK3036_PLL_RATE( 312000000, 1, 52, 2, 2, 1, 0),
+ RK3036_PLL_RATE( 297000000, 1, 99, 4, 2, 1, 0),
+ RK3036_PLL_RATE( 216000000, 1, 72, 4, 2, 1, 0),
+ RK3036_PLL_RATE( 148500000, 1, 99, 4, 4, 1, 0),
+ RK3036_PLL_RATE( 96000000, 1, 64, 4, 4, 1, 0),
+ RK3036_PLL_RATE( 74250000, 2, 99, 4, 4, 1, 0),
+ RK3036_PLL_RATE( 54000000, 1, 54, 6, 4, 1, 0),
+ RK3036_PLL_RATE( 27000000, 1, 27, 6, 4, 1, 0),
+ { /* sentinel */ },
+};
+
+/* CRU parents */
+PNAME(mux_pll_p) = { "xin24m", "xin32k" };
+
+PNAME(mux_armclkl_p) = { "clk_core_l_lpll_src",
+ "clk_core_l_bpll_src",
+ "clk_core_l_dpll_src",
+ "clk_core_l_gpll_src" };
+PNAME(mux_armclkb_p) = { "clk_core_b_lpll_src",
+ "clk_core_b_bpll_src",
+ "clk_core_b_dpll_src",
+ "clk_core_b_gpll_src" };
+PNAME(mux_aclk_cci_p) = { "cpll_aclk_cci_src",
+ "gpll_aclk_cci_src",
+ "npll_aclk_cci_src",
+ "vpll_aclk_cci_src" };
+PNAME(mux_cci_trace_p) = { "cpll_cci_trace",
+ "gpll_cci_trace" };
+PNAME(mux_cs_p) = { "cpll_cs", "gpll_cs",
+ "npll_cs"};
+PNAME(mux_aclk_perihp_p) = { "cpll_aclk_perihp_src",
+ "gpll_aclk_perihp_src" };
+
+PNAME(mux_pll_src_cpll_gpll_p) = { "cpll", "gpll" };
+PNAME(mux_pll_src_cpll_gpll_npll_p) = { "cpll", "gpll", "npll" };
+PNAME(mux_pll_src_cpll_gpll_ppll_p) = { "cpll", "gpll", "ppll" };
+PNAME(mux_pll_src_cpll_gpll_upll_p) = { "cpll", "gpll", "upll" };
+PNAME(mux_pll_src_npll_cpll_gpll_p) = { "npll", "cpll", "gpll" };
+PNAME(mux_pll_src_cpll_gpll_npll_ppll_p) = { "cpll", "gpll", "npll",
+ "ppll" };
+PNAME(mux_pll_src_cpll_gpll_npll_24m_p) = { "cpll", "gpll", "npll",
+ "xin24m" };
+PNAME(mux_pll_src_cpll_gpll_npll_usbphy480m_p) = { "cpll", "gpll", "npll",
+ "clk_usbphy_480m" };
+PNAME(mux_pll_src_ppll_cpll_gpll_npll_p) = { "ppll", "cpll", "gpll",
+ "npll", "upll" };
+PNAME(mux_pll_src_cpll_gpll_npll_upll_24m_p) = { "cpll", "gpll", "npll",
+ "upll", "xin24m" };
+PNAME(mux_pll_src_cpll_gpll_npll_ppll_upll_24m_p) = { "cpll", "gpll", "npll",
+ "ppll", "upll", "xin24m" };
+
+PNAME(mux_pll_src_vpll_cpll_gpll_p) = { "vpll", "cpll", "gpll" };
+PNAME(mux_pll_src_vpll_cpll_gpll_npll_p) = { "vpll", "cpll", "gpll",
+ "npll" };
+PNAME(mux_pll_src_vpll_cpll_gpll_24m_p) = { "vpll", "cpll", "gpll",
+ "xin24m" };
+
+PNAME(mux_dclk_vop0_p) = { "dclk_vop0_div",
+ "dclk_vop0_frac" };
+PNAME(mux_dclk_vop1_p) = { "dclk_vop1_div",
+ "dclk_vop1_frac" };
+
+PNAME(mux_clk_cif_p) = { "clk_cifout_src", "xin24m" };
+
+PNAME(mux_pll_src_24m_usbphy480m_p) = { "xin24m", "clk_usbphy_480m" };
+PNAME(mux_pll_src_24m_pciephy_p) = { "xin24m", "clk_pciephy_ref100m" };
+PNAME(mux_pll_src_24m_32k_cpll_gpll_p) = { "xin24m", "xin32k",
+ "cpll", "gpll" };
+PNAME(mux_pciecore_cru_phy_p) = { "clk_pcie_core_cru",
+ "clk_pcie_core_phy" };
+
+PNAME(mux_aclk_emmc_p) = { "cpll_aclk_emmc_src",
+ "gpll_aclk_emmc_src" };
+
+PNAME(mux_aclk_perilp0_p) = { "cpll_aclk_perilp0_src",
+ "gpll_aclk_perilp0_src" };
+
+PNAME(mux_fclk_cm0s_p) = { "cpll_fclk_cm0s_src",
+ "gpll_fclk_cm0s_src" };
+
+PNAME(mux_hclk_perilp1_p) = { "cpll_hclk_perilp1_src",
+ "gpll_hclk_perilp1_src" };
+
+PNAME(mux_clk_testout1_p) = { "clk_testout1_pll_src", "xin24m" };
+PNAME(mux_clk_testout2_p) = { "clk_testout2_pll_src", "xin24m" };
+
+PNAME(mux_usbphy_480m_p) = { "clk_usbphy0_480m_src",
+ "clk_usbphy1_480m_src" };
+PNAME(mux_aclk_gmac_p) = { "cpll_aclk_gmac_src",
+ "gpll_aclk_gmac_src" };
+PNAME(mux_rmii_p) = { "clk_gmac", "clkin_gmac" };
+PNAME(mux_spdif_p) = { "clk_spdif_div", "clk_spdif_frac",
+ "clkin_i2s", "xin12m" };
+PNAME(mux_i2s0_p) = { "clk_i2s0_div", "clk_i2s0_frac",
+ "clkin_i2s", "xin12m" };
+PNAME(mux_i2s1_p) = { "clk_i2s1_div", "clk_i2s1_frac",
+ "clkin_i2s", "xin12m" };
+PNAME(mux_i2s2_p) = { "clk_i2s2_div", "clk_i2s2_frac",
+ "clkin_i2s", "xin12m" };
+PNAME(mux_i2sch_p) = { "clk_i2s0", "clk_i2s1",
+ "clk_i2s2" };
+PNAME(mux_i2sout_p) = { "clk_i2sout_src", "xin12m" };
+
+PNAME(mux_uart0_p) = { "clk_uart0_div", "clk_uart0_frac", "xin24m" };
+PNAME(mux_uart1_p) = { "clk_uart1_div", "clk_uart1_frac", "xin24m" };
+PNAME(mux_uart2_p) = { "clk_uart2_div", "clk_uart2_frac", "xin24m" };
+PNAME(mux_uart3_p) = { "clk_uart3_div", "clk_uart3_frac", "xin24m" };
+
+/* PMU CRU parents */
+PNAME(mux_ppll_24m_p) = { "ppll", "xin24m" };
+PNAME(mux_24m_ppll_p) = { "xin24m", "ppll" };
+PNAME(mux_fclk_cm0s_pmu_ppll_p) = { "fclk_cm0s_pmu_ppll_src", "xin24m" };
+PNAME(mux_wifi_pmu_p) = { "clk_wifi_div", "clk_wifi_frac" };
+PNAME(mux_uart4_pmu_p) = { "clk_uart4_div", "clk_uart4_frac",
+ "xin24m" };
+PNAME(mux_clk_testout2_2io_p) = { "clk_testout2", "clk_32k_suspend_pmu" };
+
+static struct rockchip_pll_clock rk3399_pll_clks[] __initdata = {
+ [lpll] = PLL(pll_rk3399, PLL_APLLL, "lpll", mux_pll_p, 0, RK3399_PLL_CON(0),
+ RK3399_PLL_CON(3), 8, 31, 0, rk3399_pll_rates),
+ [bpll] = PLL(pll_rk3399, PLL_APLLB, "bpll", mux_pll_p, 0, RK3399_PLL_CON(8),
+ RK3399_PLL_CON(11), 8, 31, 0, rk3399_pll_rates),
+ [dpll] = PLL(pll_rk3399, PLL_DPLL, "dpll", mux_pll_p, 0, RK3399_PLL_CON(16),
+ RK3399_PLL_CON(19), 8, 31, 0, NULL),
+ [cpll] = PLL(pll_rk3399, PLL_CPLL, "cpll", mux_pll_p, 0, RK3399_PLL_CON(24),
+ RK3399_PLL_CON(27), 8, 31, ROCKCHIP_PLL_SYNC_RATE, rk3399_pll_rates),
+ [gpll] = PLL(pll_rk3399, PLL_GPLL, "gpll", mux_pll_p, 0, RK3399_PLL_CON(32),
+ RK3399_PLL_CON(35), 8, 31, ROCKCHIP_PLL_SYNC_RATE, rk3399_pll_rates),
+ [npll] = PLL(pll_rk3399, PLL_NPLL, "npll", mux_pll_p, 0, RK3399_PLL_CON(40),
+ RK3399_PLL_CON(43), 8, 31, ROCKCHIP_PLL_SYNC_RATE, rk3399_pll_rates),
+ [vpll] = PLL(pll_rk3399, PLL_VPLL, "vpll", mux_pll_p, 0, RK3399_PLL_CON(48),
+ RK3399_PLL_CON(51), 8, 31, ROCKCHIP_PLL_SYNC_RATE, rk3399_pll_rates),
+};
+
+static struct rockchip_pll_clock rk3399_pmu_pll_clks[] __initdata = {
+ [ppll] = PLL(pll_rk3399, PLL_PPLL, "ppll", mux_pll_p, 0, RK3399_PMU_PLL_CON(0),
+ RK3399_PMU_PLL_CON(3), 8, 31, ROCKCHIP_PLL_SYNC_RATE, rk3399_pll_rates),
+};
+
+#define MFLAGS CLK_MUX_HIWORD_MASK
+#define DFLAGS CLK_DIVIDER_HIWORD_MASK
+#define GFLAGS (CLK_GATE_HIWORD_MASK | CLK_GATE_SET_TO_DISABLE)
+#define IFLAGS ROCKCHIP_INVERTER_HIWORD_MASK
+
+static struct rockchip_clk_branch rk3399_spdif_fracmux __initdata =
+ MUX(0, "clk_spdif_mux", mux_spdif_p, CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(32), 13, 2, MFLAGS);
+
+static struct rockchip_clk_branch rk3399_i2s0_fracmux __initdata =
+ MUX(0, "clk_i2s0_mux", mux_i2s0_p, CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(28), 8, 2, MFLAGS);
+
+static struct rockchip_clk_branch rk3399_i2s1_fracmux __initdata =
+ MUX(0, "clk_i2s1_mux", mux_i2s1_p, CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(29), 8, 2, MFLAGS);
+
+static struct rockchip_clk_branch rk3399_i2s2_fracmux __initdata =
+ MUX(0, "clk_i2s2_mux", mux_i2s2_p, CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(30), 8, 2, MFLAGS);
+
+static struct rockchip_clk_branch rk3399_uart0_fracmux __initdata =
+ MUX(SCLK_UART0, "clk_uart0", mux_uart0_p, CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(33), 8, 2, MFLAGS);
+
+static struct rockchip_clk_branch rk3399_uart1_fracmux __initdata =
+ MUX(SCLK_UART1, "clk_uart1", mux_uart1_p, CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(34), 8, 2, MFLAGS);
+
+static struct rockchip_clk_branch rk3399_uart2_fracmux __initdata =
+ MUX(SCLK_UART2, "clk_uart2", mux_uart2_p, CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(35), 8, 2, MFLAGS);
+
+static struct rockchip_clk_branch rk3399_uart3_fracmux __initdata =
+ MUX(SCLK_UART3, "clk_uart3", mux_uart3_p, CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(36), 8, 2, MFLAGS);
+
+static struct rockchip_clk_branch rk3399_uart4_pmu_fracmux __initdata =
+ MUX(SCLK_UART4_PMU, "clk_uart4_pmu", mux_uart4_pmu_p, CLK_SET_RATE_PARENT,
+ RK3399_PMU_CLKSEL_CON(5), 8, 2, MFLAGS);
+
+static struct rockchip_clk_branch rk3399_dclk_vop0_fracmux __initdata =
+ MUX(DCLK_VOP0, "dclk_vop0", mux_dclk_vop0_p, CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(49), 11, 1, MFLAGS);
+
+static struct rockchip_clk_branch rk3399_dclk_vop1_fracmux __initdata =
+ MUX(DCLK_VOP1, "dclk_vop1", mux_dclk_vop1_p, CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(50), 11, 1, MFLAGS);
+
+static struct rockchip_clk_branch rk3399_pmuclk_wifi_fracmux __initdata =
+ MUX(SCLK_WIFI_PMU, "clk_wifi_pmu", mux_wifi_pmu_p, CLK_SET_RATE_PARENT,
+ RK3399_PMU_CLKSEL_CON(1), 14, 1, MFLAGS);
+
+static const struct rockchip_cpuclk_reg_data rk3399_cpuclkl_data = {
+ .core_reg = RK3399_CLKSEL_CON(0),
+ .div_core_shift = 0,
+ .div_core_mask = 0x1f,
+ .mux_core_alt = 3,
+ .mux_core_main = 0,
+ .mux_core_shift = 6,
+ .mux_core_mask = 0x3,
+};
+
+static const struct rockchip_cpuclk_reg_data rk3399_cpuclkb_data = {
+ .core_reg = RK3399_CLKSEL_CON(2),
+ .div_core_shift = 0,
+ .div_core_mask = 0x1f,
+ .mux_core_alt = 3,
+ .mux_core_main = 1,
+ .mux_core_shift = 6,
+ .mux_core_mask = 0x3,
+};
+
+#define RK3399_DIV_ACLKM_MASK 0x1f
+#define RK3399_DIV_ACLKM_SHIFT 8
+#define RK3399_DIV_ATCLK_MASK 0x1f
+#define RK3399_DIV_ATCLK_SHIFT 0
+#define RK3399_DIV_PCLK_DBG_MASK 0x1f
+#define RK3399_DIV_PCLK_DBG_SHIFT 8
+
+#define RK3399_CLKSEL0(_offs, _aclkm) \
+ { \
+ .reg = RK3399_CLKSEL_CON(0 + _offs), \
+ .val = HIWORD_UPDATE(_aclkm, RK3399_DIV_ACLKM_MASK, \
+ RK3399_DIV_ACLKM_SHIFT), \
+ }
+#define RK3399_CLKSEL1(_offs, _atclk, _pdbg) \
+ { \
+ .reg = RK3399_CLKSEL_CON(1 + _offs), \
+ .val = HIWORD_UPDATE(_atclk, RK3399_DIV_ATCLK_MASK, \
+ RK3399_DIV_ATCLK_SHIFT) | \
+ HIWORD_UPDATE(_pdbg, RK3399_DIV_PCLK_DBG_MASK, \
+ RK3399_DIV_PCLK_DBG_SHIFT), \
+ }
+
+/* cluster_l: aclkm in clksel0, rest in clksel1 */
+#define RK3399_CPUCLKL_RATE(_prate, _aclkm, _atclk, _pdbg) \
+ { \
+ .prate = _prate##U, \
+ .divs = { \
+ RK3399_CLKSEL0(0, _aclkm), \
+ RK3399_CLKSEL1(0, _atclk, _pdbg), \
+ }, \
+ }
+
+/* cluster_b: aclkm in clksel2, rest in clksel3 */
+#define RK3399_CPUCLKB_RATE(_prate, _aclkm, _atclk, _pdbg) \
+ { \
+ .prate = _prate##U, \
+ .divs = { \
+ RK3399_CLKSEL0(2, _aclkm), \
+ RK3399_CLKSEL1(2, _atclk, _pdbg), \
+ }, \
+ }
+
+static struct rockchip_cpuclk_rate_table rk3399_cpuclkl_rates[] __initdata = {
+ RK3399_CPUCLKL_RATE(1800000000, 1, 8, 8),
+ RK3399_CPUCLKL_RATE(1704000000, 1, 8, 8),
+ RK3399_CPUCLKL_RATE(1608000000, 1, 7, 7),
+ RK3399_CPUCLKL_RATE(1512000000, 1, 7, 7),
+ RK3399_CPUCLKL_RATE(1488000000, 1, 6, 6),
+ RK3399_CPUCLKL_RATE(1416000000, 1, 6, 6),
+ RK3399_CPUCLKL_RATE(1200000000, 1, 5, 5),
+ RK3399_CPUCLKL_RATE(1008000000, 1, 5, 5),
+ RK3399_CPUCLKL_RATE( 816000000, 1, 4, 4),
+ RK3399_CPUCLKL_RATE( 696000000, 1, 3, 3),
+ RK3399_CPUCLKL_RATE( 600000000, 1, 3, 3),
+ RK3399_CPUCLKL_RATE( 408000000, 1, 2, 2),
+ RK3399_CPUCLKL_RATE( 312000000, 1, 1, 1),
+ RK3399_CPUCLKL_RATE( 216000000, 1, 1, 1),
+ RK3399_CPUCLKL_RATE( 96000000, 1, 1, 1),
+};
+
+static struct rockchip_cpuclk_rate_table rk3399_cpuclkb_rates[] __initdata = {
+ RK3399_CPUCLKB_RATE(2208000000, 1, 11, 11),
+ RK3399_CPUCLKB_RATE(2184000000, 1, 11, 11),
+ RK3399_CPUCLKB_RATE(2088000000, 1, 10, 10),
+ RK3399_CPUCLKB_RATE(2040000000, 1, 10, 10),
+ RK3399_CPUCLKB_RATE(1992000000, 1, 9, 9),
+ RK3399_CPUCLKB_RATE(1896000000, 1, 9, 9),
+ RK3399_CPUCLKB_RATE(1800000000, 1, 8, 8),
+ RK3399_CPUCLKB_RATE(1704000000, 1, 8, 8),
+ RK3399_CPUCLKB_RATE(1608000000, 1, 7, 7),
+ RK3399_CPUCLKB_RATE(1512000000, 1, 7, 7),
+ RK3399_CPUCLKB_RATE(1488000000, 1, 6, 6),
+ RK3399_CPUCLKB_RATE(1416000000, 1, 6, 6),
+ RK3399_CPUCLKB_RATE(1200000000, 1, 5, 5),
+ RK3399_CPUCLKB_RATE(1008000000, 1, 5, 5),
+ RK3399_CPUCLKB_RATE( 816000000, 1, 4, 4),
+ RK3399_CPUCLKB_RATE( 696000000, 1, 3, 3),
+ RK3399_CPUCLKB_RATE( 600000000, 1, 3, 3),
+ RK3399_CPUCLKB_RATE( 408000000, 1, 2, 2),
+ RK3399_CPUCLKB_RATE( 312000000, 1, 1, 1),
+ RK3399_CPUCLKB_RATE( 216000000, 1, 1, 1),
+ RK3399_CPUCLKB_RATE( 96000000, 1, 1, 1),
+};
+
+static struct rockchip_clk_branch rk3399_clk_branches[] __initdata = {
+ /*
+ * CRU Clock-Architecture
+ */
+
+ /* usbphy */
+ GATE(SCLK_USB2PHY0_REF, "clk_usb2phy0_ref", "xin24m", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(6), 5, GFLAGS),
+ GATE(SCLK_USB2PHY1_REF, "clk_usb2phy1_ref", "xin24m", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(6), 6, GFLAGS),
+
+ GATE(0, "clk_usbphy0_480m_src", "clk_usbphy0_480m", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(13), 12, GFLAGS),
+ GATE(0, "clk_usbphy1_480m_src", "clk_usbphy1_480m", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(13), 12, GFLAGS),
+ MUX(0, "clk_usbphy_480m", mux_usbphy_480m_p, CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(14), 6, 1, MFLAGS),
+
+ MUX(0, "upll", mux_pll_src_24m_usbphy480m_p, 0,
+ RK3399_CLKSEL_CON(14), 15, 1, MFLAGS),
+
+ COMPOSITE_NODIV(SCLK_HSICPHY, "clk_hsicphy", mux_pll_src_cpll_gpll_npll_usbphy480m_p, 0,
+ RK3399_CLKSEL_CON(19), 0, 2, MFLAGS,
+ RK3399_CLKGATE_CON(6), 4, GFLAGS),
+
+ COMPOSITE(ACLK_USB3, "aclk_usb3", mux_pll_src_cpll_gpll_npll_p, 0,
+ RK3399_CLKSEL_CON(39), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(12), 0, GFLAGS),
+ GATE(ACLK_USB3_NOC, "aclk_usb3_noc", "aclk_usb3", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(30), 0, GFLAGS),
+ GATE(ACLK_USB3OTG0, "aclk_usb3otg0", "aclk_usb3", 0,
+ RK3399_CLKGATE_CON(30), 1, GFLAGS),
+ GATE(ACLK_USB3OTG1, "aclk_usb3otg1", "aclk_usb3", 0,
+ RK3399_CLKGATE_CON(30), 2, GFLAGS),
+ GATE(ACLK_USB3_RKSOC_AXI_PERF, "aclk_usb3_rksoc_axi_perf", "aclk_usb3", 0,
+ RK3399_CLKGATE_CON(30), 3, GFLAGS),
+ GATE(ACLK_USB3_GRF, "aclk_usb3_grf", "aclk_usb3", 0,
+ RK3399_CLKGATE_CON(30), 4, GFLAGS),
+
+ GATE(SCLK_USB3OTG0_REF, "clk_usb3otg0_ref", "xin24m", 0,
+ RK3399_CLKGATE_CON(12), 1, GFLAGS),
+ GATE(SCLK_USB3OTG1_REF, "clk_usb3otg1_ref", "xin24m", 0,
+ RK3399_CLKGATE_CON(12), 2, GFLAGS),
+
+ COMPOSITE(SCLK_USB3OTG0_SUSPEND, "clk_usb3otg0_suspend", mux_pll_p, 0,
+ RK3399_CLKSEL_CON(40), 15, 1, MFLAGS, 0, 10, DFLAGS,
+ RK3399_CLKGATE_CON(12), 3, GFLAGS),
+
+ COMPOSITE(SCLK_USB3OTG1_SUSPEND, "clk_usb3otg1_suspend", mux_pll_p, 0,
+ RK3399_CLKSEL_CON(41), 15, 1, MFLAGS, 0, 10, DFLAGS,
+ RK3399_CLKGATE_CON(12), 4, GFLAGS),
+
+ COMPOSITE(SCLK_UPHY0_TCPDPHY_REF, "clk_uphy0_tcpdphy_ref", mux_pll_p, 0,
+ RK3399_CLKSEL_CON(64), 15, 1, MFLAGS, 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(13), 4, GFLAGS),
+
+ COMPOSITE(SCLK_UPHY0_TCPDCORE, "clk_uphy0_tcpdcore", mux_pll_src_24m_32k_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(64), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(13), 5, GFLAGS),
+
+ COMPOSITE(SCLK_UPHY1_TCPDPHY_REF, "clk_uphy1_tcpdphy_ref", mux_pll_p, 0,
+ RK3399_CLKSEL_CON(65), 15, 1, MFLAGS, 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(13), 6, GFLAGS),
+
+ COMPOSITE(SCLK_UPHY1_TCPDCORE, "clk_uphy1_tcpdcore", mux_pll_src_24m_32k_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(65), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(13), 7, GFLAGS),
+
+ /* little core */
+ GATE(0, "clk_core_l_lpll_src", "lpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(0), 0, GFLAGS),
+ GATE(0, "clk_core_l_bpll_src", "bpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(0), 1, GFLAGS),
+ GATE(0, "clk_core_l_dpll_src", "dpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(0), 2, GFLAGS),
+ GATE(0, "clk_core_l_gpll_src", "gpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(0), 3, GFLAGS),
+
+ COMPOSITE_NOMUX(0, "aclkm_core_l", "armclkl", CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(0), 8, 5, DFLAGS | CLK_DIVIDER_READ_ONLY,
+ RK3399_CLKGATE_CON(0), 4, GFLAGS),
+ COMPOSITE_NOMUX(0, "atclk_core_l", "armclkl", CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(1), 0, 5, DFLAGS | CLK_DIVIDER_READ_ONLY,
+ RK3399_CLKGATE_CON(0), 5, GFLAGS),
+ COMPOSITE_NOMUX(0, "pclk_dbg_core_l", "armclkl", CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(1), 8, 5, DFLAGS | CLK_DIVIDER_READ_ONLY,
+ RK3399_CLKGATE_CON(0), 6, GFLAGS),
+
+ GATE(ACLK_CORE_ADB400_CORE_L_2_CCI500, "aclk_core_adb400_core_l_2_cci500", "aclkm_core_l", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(14), 12, GFLAGS),
+ GATE(ACLK_PERF_CORE_L, "aclk_perf_core_l", "aclkm_core_l", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(14), 13, GFLAGS),
+
+ GATE(0, "clk_dbg_pd_core_l", "armclkl", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(14), 9, GFLAGS),
+ GATE(ACLK_GIC_ADB400_GIC_2_CORE_L, "aclk_core_adb400_gic_2_core_l", "armclkl", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(14), 10, GFLAGS),
+ GATE(ACLK_GIC_ADB400_CORE_L_2_GIC, "aclk_core_adb400_core_l_2_gic", "armclkl", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(14), 11, GFLAGS),
+ GATE(SCLK_PVTM_CORE_L, "clk_pvtm_core_l", "xin24m", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(0), 7, GFLAGS),
+
+ /* big core */
+ GATE(0, "clk_core_b_lpll_src", "lpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(1), 0, GFLAGS),
+ GATE(0, "clk_core_b_bpll_src", "bpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(1), 1, GFLAGS),
+ GATE(0, "clk_core_b_dpll_src", "dpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(1), 2, GFLAGS),
+ GATE(0, "clk_core_b_gpll_src", "gpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(1), 3, GFLAGS),
+
+ COMPOSITE_NOMUX(0, "aclkm_core_b", "armclkb", CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(2), 8, 5, DFLAGS | CLK_DIVIDER_READ_ONLY,
+ RK3399_CLKGATE_CON(1), 4, GFLAGS),
+ COMPOSITE_NOMUX(0, "atclk_core_b", "armclkb", CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(3), 0, 5, DFLAGS | CLK_DIVIDER_READ_ONLY,
+ RK3399_CLKGATE_CON(1), 5, GFLAGS),
+ COMPOSITE_NOMUX(0, "pclk_dbg_core_b", "armclkb", CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(3), 8, 5, DFLAGS | CLK_DIVIDER_READ_ONLY,
+ RK3399_CLKGATE_CON(1), 6, GFLAGS),
+
+ GATE(ACLK_CORE_ADB400_CORE_B_2_CCI500, "aclk_core_adb400_core_b_2_cci500", "aclkm_core_b", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(14), 5, GFLAGS),
+ GATE(ACLK_PERF_CORE_B, "aclk_perf_core_b", "aclkm_core_b", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(14), 6, GFLAGS),
+
+ GATE(0, "clk_dbg_pd_core_b", "armclkb", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(14), 1, GFLAGS),
+ GATE(ACLK_GIC_ADB400_GIC_2_CORE_B, "aclk_core_adb400_gic_2_core_b", "armclkb", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(14), 3, GFLAGS),
+ GATE(ACLK_GIC_ADB400_CORE_B_2_GIC, "aclk_core_adb400_core_b_2_gic", "armclkb", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(14), 4, GFLAGS),
+
+ DIV(0, "pclken_dbg_core_b", "pclk_dbg_core_b", CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(3), 13, 2, DFLAGS | CLK_DIVIDER_READ_ONLY),
+
+ GATE(0, "pclk_dbg_cxcs_pd_core_b", "pclk_dbg_core_b", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(14), 2, GFLAGS),
+
+ GATE(SCLK_PVTM_CORE_B, "clk_pvtm_core_b", "xin24m", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(1), 7, GFLAGS),
+
+ /* gmac */
+ GATE(0, "cpll_aclk_gmac_src", "cpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(6), 9, GFLAGS),
+ GATE(0, "gpll_aclk_gmac_src", "gpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(6), 8, GFLAGS),
+ COMPOSITE(0, "aclk_gmac_pre", mux_aclk_gmac_p, 0,
+ RK3399_CLKSEL_CON(20), 7, 1, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(6), 10, GFLAGS),
+
+ GATE(ACLK_GMAC, "aclk_gmac", "aclk_gmac_pre", 0,
+ RK3399_CLKGATE_CON(32), 0, GFLAGS),
+ GATE(ACLK_GMAC_NOC, "aclk_gmac_noc", "aclk_gmac_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(32), 1, GFLAGS),
+ GATE(ACLK_PERF_GMAC, "aclk_perf_gmac", "aclk_gmac_pre", 0,
+ RK3399_CLKGATE_CON(32), 4, GFLAGS),
+
+ COMPOSITE_NOMUX(0, "pclk_gmac_pre", "aclk_gmac_pre", 0,
+ RK3399_CLKSEL_CON(19), 8, 3, DFLAGS,
+ RK3399_CLKGATE_CON(6), 11, GFLAGS),
+ GATE(PCLK_GMAC, "pclk_gmac", "pclk_gmac_pre", 0,
+ RK3399_CLKGATE_CON(32), 2, GFLAGS),
+ GATE(PCLK_GMAC_NOC, "pclk_gmac_noc", "pclk_gmac_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(32), 3, GFLAGS),
+
+ COMPOSITE(SCLK_MAC, "clk_gmac", mux_pll_src_cpll_gpll_npll_p, 0,
+ RK3399_CLKSEL_CON(20), 14, 2, MFLAGS, 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(5), 5, GFLAGS),
+
+ MUX(SCLK_RMII_SRC, "clk_rmii_src", mux_rmii_p, CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(19), 4, 1, MFLAGS),
+ GATE(SCLK_MACREF_OUT, "clk_mac_refout", "clk_rmii_src", 0,
+ RK3399_CLKGATE_CON(5), 6, GFLAGS),
+ GATE(SCLK_MACREF, "clk_mac_ref", "clk_rmii_src", 0,
+ RK3399_CLKGATE_CON(5), 7, GFLAGS),
+ GATE(SCLK_MAC_RX, "clk_rmii_rx", "clk_rmii_src", 0,
+ RK3399_CLKGATE_CON(5), 8, GFLAGS),
+ GATE(SCLK_MAC_TX, "clk_rmii_tx", "clk_rmii_src", 0,
+ RK3399_CLKGATE_CON(5), 9, GFLAGS),
+
+ /* spdif */
+ COMPOSITE(0, "clk_spdif_div", mux_pll_src_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(32), 7, 1, MFLAGS, 0, 7, DFLAGS,
+ RK3399_CLKGATE_CON(8), 13, GFLAGS),
+ COMPOSITE_FRACMUX(0, "clk_spdif_frac", "clk_spdif_div", CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(99), 0,
+ RK3399_CLKGATE_CON(8), 14, GFLAGS,
+ &rk3399_spdif_fracmux),
+ GATE(SCLK_SPDIF_8CH, "clk_spdif", "clk_spdif_mux", CLK_SET_RATE_PARENT,
+ RK3399_CLKGATE_CON(8), 15, GFLAGS),
+
+ COMPOSITE(SCLK_SPDIF_REC_DPTX, "clk_spdif_rec_dptx", mux_pll_src_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(32), 15, 1, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(10), 6, GFLAGS),
+ /* i2s */
+ COMPOSITE(0, "clk_i2s0_div", mux_pll_src_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(28), 7, 1, MFLAGS, 0, 7, DFLAGS,
+ RK3399_CLKGATE_CON(8), 3, GFLAGS),
+ COMPOSITE_FRACMUX(0, "clk_i2s0_frac", "clk_i2s0_div", CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(96), 0,
+ RK3399_CLKGATE_CON(8), 4, GFLAGS,
+ &rk3399_i2s0_fracmux),
+ GATE(SCLK_I2S0_8CH, "clk_i2s0", "clk_i2s0_mux", CLK_SET_RATE_PARENT,
+ RK3399_CLKGATE_CON(8), 5, GFLAGS),
+
+ COMPOSITE(0, "clk_i2s1_div", mux_pll_src_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(29), 7, 1, MFLAGS, 0, 7, DFLAGS,
+ RK3399_CLKGATE_CON(8), 6, GFLAGS),
+ COMPOSITE_FRACMUX(0, "clk_i2s1_frac", "clk_i2s1_div", CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(97), 0,
+ RK3399_CLKGATE_CON(8), 7, GFLAGS,
+ &rk3399_i2s1_fracmux),
+ GATE(SCLK_I2S1_8CH, "clk_i2s1", "clk_i2s1_mux", CLK_SET_RATE_PARENT,
+ RK3399_CLKGATE_CON(8), 8, GFLAGS),
+
+ COMPOSITE(0, "clk_i2s2_div", mux_pll_src_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(30), 7, 1, MFLAGS, 0, 7, DFLAGS,
+ RK3399_CLKGATE_CON(8), 9, GFLAGS),
+ COMPOSITE_FRACMUX(0, "clk_i2s2_frac", "clk_i2s2_div", CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(98), 0,
+ RK3399_CLKGATE_CON(8), 10, GFLAGS,
+ &rk3399_i2s2_fracmux),
+ GATE(SCLK_I2S2_8CH, "clk_i2s2", "clk_i2s2_mux", CLK_SET_RATE_PARENT,
+ RK3399_CLKGATE_CON(8), 11, GFLAGS),
+
+ MUX(0, "clk_i2sout_src", mux_i2sch_p, CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(31), 0, 2, MFLAGS),
+ COMPOSITE_NODIV(SCLK_I2S_8CH_OUT, "clk_i2sout", mux_i2sout_p, CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(30), 8, 2, MFLAGS,
+ RK3399_CLKGATE_CON(8), 12, GFLAGS),
+
+ /* uart */
+ MUX(0, "clk_uart0_src", mux_pll_src_cpll_gpll_upll_p, 0,
+ RK3399_CLKSEL_CON(33), 12, 2, MFLAGS),
+ COMPOSITE_NOMUX(0, "clk_uart0_div", "clk_uart0_src", 0,
+ RK3399_CLKSEL_CON(33), 0, 7, DFLAGS,
+ RK3399_CLKGATE_CON(9), 0, GFLAGS),
+ COMPOSITE_FRACMUX(0, "clk_uart0_frac", "clk_uart0_div", CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(100), 0,
+ RK3399_CLKGATE_CON(9), 1, GFLAGS,
+ &rk3399_uart0_fracmux),
+
+ MUX(0, "clk_uart_src", mux_pll_src_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(33), 15, 1, MFLAGS),
+ COMPOSITE_NOMUX(0, "clk_uart1_div", "clk_uart_src", 0,
+ RK3399_CLKSEL_CON(34), 0, 7, DFLAGS,
+ RK3399_CLKGATE_CON(9), 2, GFLAGS),
+ COMPOSITE_FRACMUX(0, "clk_uart1_frac", "clk_uart1_div", CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(101), 0,
+ RK3399_CLKGATE_CON(9), 3, GFLAGS,
+ &rk3399_uart1_fracmux),
+
+ COMPOSITE_NOMUX(0, "clk_uart2_div", "clk_uart_src", 0,
+ RK3399_CLKSEL_CON(35), 0, 7, DFLAGS,
+ RK3399_CLKGATE_CON(9), 4, GFLAGS),
+ COMPOSITE_FRACMUX(0, "clk_uart2_frac", "clk_uart2_div", CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(102), 0,
+ RK3399_CLKGATE_CON(9), 5, GFLAGS,
+ &rk3399_uart2_fracmux),
+
+ COMPOSITE_NOMUX(0, "clk_uart3_div", "clk_uart_src", 0,
+ RK3399_CLKSEL_CON(36), 0, 7, DFLAGS,
+ RK3399_CLKGATE_CON(9), 6, GFLAGS),
+ COMPOSITE_FRACMUX(0, "clk_uart3_frac", "clk_uart3_div", CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(103), 0,
+ RK3399_CLKGATE_CON(9), 7, GFLAGS,
+ &rk3399_uart3_fracmux),
+
+ COMPOSITE(0, "pclk_ddr", mux_pll_src_cpll_gpll_p, CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(6), 15, 1, MFLAGS, 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(3), 4, GFLAGS),
+
+ GATE(PCLK_CENTER_MAIN_NOC, "pclk_center_main_noc", "pclk_ddr", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(18), 10, GFLAGS),
+ GATE(PCLK_DDR_MON, "pclk_ddr_mon", "pclk_ddr", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(18), 12, GFLAGS),
+ GATE(PCLK_CIC, "pclk_cic", "pclk_ddr", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(18), 15, GFLAGS),
+ GATE(PCLK_DDR_SGRF, "pclk_ddr_sgrf", "pclk_ddr", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(19), 2, GFLAGS),
+
+ GATE(SCLK_PVTM_DDR, "clk_pvtm_ddr", "xin24m", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(4), 11, GFLAGS),
+ GATE(SCLK_DFIMON0_TIMER, "clk_dfimon0_timer", "xin24m", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(3), 5, GFLAGS),
+ GATE(SCLK_DFIMON1_TIMER, "clk_dfimon1_timer", "xin24m", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(3), 6, GFLAGS),
+
+ /* cci */
+ GATE(0, "cpll_aclk_cci_src", "cpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(2), 0, GFLAGS),
+ GATE(0, "gpll_aclk_cci_src", "gpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(2), 1, GFLAGS),
+ GATE(0, "npll_aclk_cci_src", "npll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(2), 2, GFLAGS),
+ GATE(0, "vpll_aclk_cci_src", "vpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(2), 3, GFLAGS),
+
+ COMPOSITE(0, "aclk_cci_pre", mux_aclk_cci_p, CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(5), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(2), 4, GFLAGS),
+
+ GATE(ACLK_ADB400M_PD_CORE_L, "aclk_adb400m_pd_core_l", "aclk_cci_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(15), 0, GFLAGS),
+ GATE(ACLK_ADB400M_PD_CORE_B, "aclk_adb400m_pd_core_b", "aclk_cci_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(15), 1, GFLAGS),
+ GATE(ACLK_CCI, "aclk_cci", "aclk_cci_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(15), 2, GFLAGS),
+ GATE(ACLK_CCI_NOC0, "aclk_cci_noc0", "aclk_cci_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(15), 3, GFLAGS),
+ GATE(ACLK_CCI_NOC1, "aclk_cci_noc1", "aclk_cci_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(15), 4, GFLAGS),
+ GATE(ACLK_CCI_GRF, "aclk_cci_grf", "aclk_cci_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(15), 7, GFLAGS),
+
+ GATE(0, "cpll_cci_trace", "cpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(2), 5, GFLAGS),
+ GATE(0, "gpll_cci_trace", "gpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(2), 6, GFLAGS),
+ COMPOSITE(SCLK_CCI_TRACE, "clk_cci_trace", mux_cci_trace_p, CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(5), 15, 2, MFLAGS, 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(2), 7, GFLAGS),
+
+ GATE(0, "cpll_cs", "cpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(2), 8, GFLAGS),
+ GATE(0, "gpll_cs", "gpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(2), 9, GFLAGS),
+ GATE(0, "npll_cs", "npll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(2), 10, GFLAGS),
+ COMPOSITE_NOGATE(0, "clk_cs", mux_cs_p, CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(4), 6, 2, MFLAGS, 0, 5, DFLAGS),
+ GATE(0, "clk_dbg_cxcs", "clk_cs", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(15), 5, GFLAGS),
+ GATE(0, "clk_dbg_noc", "clk_cs", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(15), 6, GFLAGS),
+
+ /* vcodec */
+ COMPOSITE(0, "aclk_vcodec_pre", mux_pll_src_cpll_gpll_npll_ppll_p, 0,
+ RK3399_CLKSEL_CON(7), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(4), 0, GFLAGS),
+ COMPOSITE_NOMUX(0, "hclk_vcodec_pre", "aclk_vcodec_pre", 0,
+ RK3399_CLKSEL_CON(7), 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(4), 1, GFLAGS),
+ GATE(HCLK_VCODEC, "hclk_vcodec", "hclk_vcodec_pre", 0,
+ RK3399_CLKGATE_CON(17), 2, GFLAGS),
+ GATE(0, "hclk_vcodec_noc", "hclk_vcodec_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(17), 3, GFLAGS),
+
+ GATE(ACLK_VCODEC, "aclk_vcodec", "aclk_vcodec_pre", 0,
+ RK3399_CLKGATE_CON(17), 0, GFLAGS),
+ GATE(0, "aclk_vcodec_noc", "aclk_vcodec_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(17), 1, GFLAGS),
+
+ /* vdu */
+ COMPOSITE(SCLK_VDU_CORE, "clk_vdu_core", mux_pll_src_cpll_gpll_npll_p, 0,
+ RK3399_CLKSEL_CON(9), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(4), 4, GFLAGS),
+ COMPOSITE(SCLK_VDU_CA, "clk_vdu_ca", mux_pll_src_cpll_gpll_npll_p, 0,
+ RK3399_CLKSEL_CON(9), 14, 2, MFLAGS, 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(4), 5, GFLAGS),
+
+ COMPOSITE(0, "aclk_vdu_pre", mux_pll_src_cpll_gpll_npll_ppll_p, 0,
+ RK3399_CLKSEL_CON(8), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(4), 2, GFLAGS),
+ COMPOSITE_NOMUX(0, "hclk_vdu_pre", "aclk_vdu_pre", 0,
+ RK3399_CLKSEL_CON(8), 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(4), 3, GFLAGS),
+ GATE(HCLK_VDU, "hclk_vdu", "hclk_vdu_pre", 0,
+ RK3399_CLKGATE_CON(17), 10, GFLAGS),
+ GATE(HCLK_VDU_NOC, "hclk_vdu_noc", "hclk_vdu_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(17), 11, GFLAGS),
+
+ GATE(ACLK_VDU, "aclk_vdu", "aclk_vdu_pre", 0,
+ RK3399_CLKGATE_CON(17), 8, GFLAGS),
+ GATE(ACLK_VDU_NOC, "aclk_vdu_noc", "aclk_vdu_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(17), 9, GFLAGS),
+
+ /* iep */
+ COMPOSITE(0, "aclk_iep_pre", mux_pll_src_cpll_gpll_npll_ppll_p, 0,
+ RK3399_CLKSEL_CON(10), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(4), 6, GFLAGS),
+ COMPOSITE_NOMUX(0, "hclk_iep_pre", "aclk_iep_pre", 0,
+ RK3399_CLKSEL_CON(10), 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(4), 7, GFLAGS),
+ GATE(HCLK_IEP, "hclk_iep", "hclk_iep_pre", 0,
+ RK3399_CLKGATE_CON(16), 2, GFLAGS),
+ GATE(HCLK_IEP_NOC, "hclk_iep_noc", "hclk_iep_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(16), 3, GFLAGS),
+
+ GATE(ACLK_IEP, "aclk_iep", "aclk_iep_pre", 0,
+ RK3399_CLKGATE_CON(16), 0, GFLAGS),
+ GATE(ACLK_IEP_NOC, "aclk_iep_noc", "aclk_iep_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(16), 1, GFLAGS),
+
+ /* rga */
+ COMPOSITE(SCLK_RGA_CORE, "clk_rga_core", mux_pll_src_cpll_gpll_npll_ppll_p, 0,
+ RK3399_CLKSEL_CON(12), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(4), 10, GFLAGS),
+
+ COMPOSITE(0, "aclk_rga_pre", mux_pll_src_cpll_gpll_npll_ppll_p, 0,
+ RK3399_CLKSEL_CON(11), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(4), 8, GFLAGS),
+ COMPOSITE_NOMUX(0, "hclk_rga_pre", "aclk_rga_pre", 0,
+ RK3399_CLKSEL_CON(11), 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(4), 9, GFLAGS),
+ GATE(HCLK_RGA, "hclk_rga", "hclk_rga_pre", 0,
+ RK3399_CLKGATE_CON(16), 10, GFLAGS),
+ GATE(HCLK_RGA_NOC, "hclk_rga_noc", "hclk_rga_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(16), 11, GFLAGS),
+
+ GATE(ACLK_RGA, "aclk_rga", "aclk_rga_pre", 0,
+ RK3399_CLKGATE_CON(16), 8, GFLAGS),
+ GATE(ACLK_RGA_NOC, "aclk_rga_noc", "aclk_rga_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(16), 9, GFLAGS),
+
+ /* center */
+ COMPOSITE(0, "aclk_center", mux_pll_src_cpll_gpll_npll_p, CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(12), 14, 2, MFLAGS, 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(3), 7, GFLAGS),
+ GATE(ACLK_CENTER_MAIN_NOC, "aclk_center_main_noc", "aclk_center", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(19), 0, GFLAGS),
+ GATE(ACLK_CENTER_PERI_NOC, "aclk_center_peri_noc", "aclk_center", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(19), 1, GFLAGS),
+
+ /* gpu */
+ COMPOSITE(0, "aclk_gpu_pre", mux_pll_src_ppll_cpll_gpll_npll_p, CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(13), 5, 3, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(13), 0, GFLAGS),
+ GATE(ACLK_GPU, "aclk_gpu", "aclk_gpu_pre", 0,
+ RK3399_CLKGATE_CON(30), 8, GFLAGS),
+ GATE(ACLK_PERF_GPU, "aclk_perf_gpu", "aclk_gpu_pre", 0,
+ RK3399_CLKGATE_CON(30), 10, GFLAGS),
+ GATE(ACLK_GPU_GRF, "aclk_gpu_grf", "aclk_gpu_pre", 0,
+ RK3399_CLKGATE_CON(30), 11, GFLAGS),
+ GATE(SCLK_PVTM_GPU, "aclk_pvtm_gpu", "xin24m", 0,
+ RK3399_CLKGATE_CON(13), 1, GFLAGS),
+
+ /* perihp */
+ GATE(0, "cpll_aclk_perihp_src", "gpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(5), 0, GFLAGS),
+ GATE(0, "gpll_aclk_perihp_src", "cpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(5), 1, GFLAGS),
+ COMPOSITE(ACLK_PERIHP, "aclk_perihp", mux_aclk_perihp_p, CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(14), 7, 1, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(5), 2, GFLAGS),
+ COMPOSITE_NOMUX(HCLK_PERIHP, "hclk_perihp", "aclk_perihp", CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(14), 8, 2, DFLAGS,
+ RK3399_CLKGATE_CON(5), 3, GFLAGS),
+ COMPOSITE_NOMUX(PCLK_PERIHP, "pclk_perihp", "aclk_perihp", CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(14), 12, 2, DFLAGS,
+ RK3399_CLKGATE_CON(5), 4, GFLAGS),
+
+ GATE(ACLK_PERF_PCIE, "aclk_perf_pcie", "aclk_perihp", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(20), 2, GFLAGS),
+ GATE(ACLK_PCIE, "aclk_pcie", "aclk_perihp", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(20), 10, GFLAGS),
+ GATE(0, "aclk_perihp_noc", "aclk_perihp", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(20), 12, GFLAGS),
+
+ GATE(HCLK_HOST0, "hclk_host0", "hclk_perihp", 0,
+ RK3399_CLKGATE_CON(20), 5, GFLAGS),
+ GATE(HCLK_HOST0_ARB, "hclk_host0_arb", "hclk_perihp", 0,
+ RK3399_CLKGATE_CON(20), 6, GFLAGS),
+ GATE(HCLK_HOST1, "hclk_host1", "hclk_perihp", 0,
+ RK3399_CLKGATE_CON(20), 7, GFLAGS),
+ GATE(HCLK_HOST1_ARB, "hclk_host1_arb", "hclk_perihp", 0,
+ RK3399_CLKGATE_CON(20), 8, GFLAGS),
+ GATE(HCLK_HSIC, "hclk_hsic", "hclk_perihp", 0,
+ RK3399_CLKGATE_CON(20), 9, GFLAGS),
+ GATE(0, "hclk_perihp_noc", "hclk_perihp", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(20), 13, GFLAGS),
+ GATE(0, "hclk_ahb1tom", "hclk_perihp", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(20), 15, GFLAGS),
+
+ GATE(PCLK_PERIHP_GRF, "pclk_perihp_grf", "pclk_perihp", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(20), 4, GFLAGS),
+ GATE(PCLK_PCIE, "pclk_pcie", "pclk_perihp", 0,
+ RK3399_CLKGATE_CON(20), 11, GFLAGS),
+ GATE(0, "pclk_perihp_noc", "pclk_perihp", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(20), 14, GFLAGS),
+ GATE(PCLK_HSICPHY, "pclk_hsicphy", "pclk_perihp", 0,
+ RK3399_CLKGATE_CON(31), 8, GFLAGS),
+
+ /* sdio & sdmmc */
+ COMPOSITE(0, "hclk_sd", mux_pll_src_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(13), 15, 1, MFLAGS, 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(12), 13, GFLAGS),
+ GATE(HCLK_SDMMC, "hclk_sdmmc", "hclk_sd", 0,
+ RK3399_CLKGATE_CON(33), 8, GFLAGS),
+ GATE(0, "hclk_sdmmc_noc", "hclk_sd", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(33), 9, GFLAGS),
+
+ COMPOSITE(SCLK_SDIO, "clk_sdio", mux_pll_src_cpll_gpll_npll_ppll_upll_24m_p, 0,
+ RK3399_CLKSEL_CON(15), 8, 3, MFLAGS, 0, 7, DFLAGS,
+ RK3399_CLKGATE_CON(6), 0, GFLAGS),
+
+ COMPOSITE(SCLK_SDMMC, "clk_sdmmc", mux_pll_src_cpll_gpll_npll_ppll_upll_24m_p, 0,
+ RK3399_CLKSEL_CON(16), 8, 3, MFLAGS, 0, 7, DFLAGS,
+ RK3399_CLKGATE_CON(6), 1, GFLAGS),
+
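+ /*
+  * The MMC() branches below create "drv" and "sample" phase clocks on top
+  * of the card clocks; the dw_mmc host driver adjusts them through
+  * clk_set_phase() when tuning.
+  */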
+ MMC(SCLK_SDMMC_DRV, "sdmmc_drv", "clk_sdmmc", RK3399_SDMMC_CON0, 1),
+ MMC(SCLK_SDMMC_SAMPLE, "sdmmc_sample", "clk_sdmmc", RK3399_SDMMC_CON1, 1),
+
+ MMC(SCLK_SDIO_DRV, "sdio_drv", "clk_sdio", RK3399_SDIO_CON0, 1),
+ MMC(SCLK_SDIO_SAMPLE, "sdio_sample", "clk_sdio", RK3399_SDIO_CON1, 1),
+
+ /* pcie */
+ COMPOSITE(SCLK_PCIE_PM, "clk_pcie_pm", mux_pll_src_cpll_gpll_npll_24m_p, 0,
+ RK3399_CLKSEL_CON(17), 8, 3, MFLAGS, 0, 7, DFLAGS,
+ RK3399_CLKGATE_CON(6), 2, GFLAGS),
+
+ COMPOSITE_NOMUX(SCLK_PCIEPHY_REF100M, "clk_pciephy_ref100m", "npll", 0,
+ RK3399_CLKSEL_CON(18), 11, 5, DFLAGS,
+ RK3399_CLKGATE_CON(12), 6, GFLAGS),
+ MUX(SCLK_PCIEPHY_REF, "clk_pciephy_ref", mux_pll_src_24m_pciephy_p, CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(18), 10, 1, MFLAGS),
+
+ COMPOSITE(0, "clk_pcie_core_cru", mux_pll_src_cpll_gpll_npll_p, 0,
+ RK3399_CLKSEL_CON(18), 8, 2, MFLAGS, 0, 7, DFLAGS,
+ RK3399_CLKGATE_CON(6), 3, GFLAGS),
+ MUX(SCLK_PCIE_CORE, "clk_pcie_core", mux_pciecore_cru_phy_p, CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(18), 7, 1, MFLAGS),
+
+ /* emmc */
+ COMPOSITE(SCLK_EMMC, "clk_emmc", mux_pll_src_cpll_gpll_npll_upll_24m_p, 0,
+ RK3399_CLKSEL_CON(22), 8, 3, MFLAGS, 0, 7, DFLAGS,
+ RK3399_CLKGATE_CON(6), 14, GFLAGS),
+
+ GATE(0, "cpll_aclk_emmc_src", "cpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(6), 12, GFLAGS),
+ GATE(0, "gpll_aclk_emmc_src", "gpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(6), 13, GFLAGS),
+ COMPOSITE_NOGATE(ACLK_EMMC, "aclk_emmc", mux_aclk_emmc_p, CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(21), 7, 1, MFLAGS, 0, 5, DFLAGS),
+ GATE(ACLK_EMMC_CORE, "aclk_emmccore", "aclk_emmc", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(32), 8, GFLAGS),
+ GATE(ACLK_EMMC_NOC, "aclk_emmc_noc", "aclk_emmc", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(32), 9, GFLAGS),
+ GATE(ACLK_EMMC_GRF, "aclk_emmcgrf", "aclk_emmc", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(32), 10, GFLAGS),
+
+ /* perilp0 */
+ GATE(0, "cpll_aclk_perilp0_src", "cpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(7), 1, GFLAGS),
+ GATE(0, "gpll_aclk_perilp0_src", "gpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(7), 0, GFLAGS),
+ COMPOSITE(ACLK_PERILP0, "aclk_perilp0", mux_aclk_perilp0_p, CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(23), 7, 1, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(7), 2, GFLAGS),
+ COMPOSITE_NOMUX(HCLK_PERILP0, "hclk_perilp0", "aclk_perilp0", CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(23), 8, 2, DFLAGS,
+ RK3399_CLKGATE_CON(7), 3, GFLAGS),
+ COMPOSITE_NOMUX(PCLK_PERILP0, "pclk_perilp0", "aclk_perilp0", 0,
+ RK3399_CLKSEL_CON(23), 12, 3, DFLAGS,
+ RK3399_CLKGATE_CON(7), 4, GFLAGS),
+
+ /* aclk_perilp0 gates */
+ GATE(ACLK_INTMEM, "aclk_intmem", "aclk_perilp0", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(23), 0, GFLAGS),
+ GATE(ACLK_TZMA, "aclk_tzma", "aclk_perilp0", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(23), 1, GFLAGS),
+ GATE(SCLK_INTMEM0, "clk_intmem0", "aclk_perilp0", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(23), 2, GFLAGS),
+ GATE(SCLK_INTMEM1, "clk_intmem1", "aclk_perilp0", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(23), 3, GFLAGS),
+ GATE(SCLK_INTMEM2, "clk_intmem2", "aclk_perilp0", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(23), 4, GFLAGS),
+ GATE(SCLK_INTMEM3, "clk_intmem3", "aclk_perilp0", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(23), 5, GFLAGS),
+ GATE(SCLK_INTMEM4, "clk_intmem4", "aclk_perilp0", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(23), 6, GFLAGS),
+ GATE(SCLK_INTMEM5, "clk_intmem5", "aclk_perilp0", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(23), 7, GFLAGS),
+ GATE(ACLK_DCF, "aclk_dcf", "aclk_perilp0", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(23), 8, GFLAGS),
+ GATE(ACLK_DMAC0_PERILP, "aclk_dmac0_perilp", "aclk_perilp0", 0, RK3399_CLKGATE_CON(25), 5, GFLAGS),
+ GATE(ACLK_DMAC1_PERILP, "aclk_dmac1_perilp", "aclk_perilp0", 0, RK3399_CLKGATE_CON(25), 6, GFLAGS),
+ GATE(ACLK_PERILP0_NOC, "aclk_perilp0_noc", "aclk_perilp0", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(25), 7, GFLAGS),
+
+ /* hclk_perilp0 gates */
+ GATE(HCLK_ROM, "hclk_rom", "hclk_perilp0", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(24), 4, GFLAGS),
+ GATE(HCLK_M_CRYPTO0, "hclk_m_crypto0", "hclk_perilp0", 0, RK3399_CLKGATE_CON(24), 5, GFLAGS),
+ GATE(HCLK_S_CRYPTO0, "hclk_s_crypto0", "hclk_perilp0", 0, RK3399_CLKGATE_CON(24), 6, GFLAGS),
+ GATE(HCLK_M_CRYPTO1, "hclk_m_crypto1", "hclk_perilp0", 0, RK3399_CLKGATE_CON(24), 14, GFLAGS),
+ GATE(HCLK_S_CRYPTO1, "hclk_s_crypto1", "hclk_perilp0", 0, RK3399_CLKGATE_CON(24), 15, GFLAGS),
+ GATE(HCLK_PERILP0_NOC, "hclk_perilp0_noc", "hclk_perilp0", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(25), 8, GFLAGS),
+
+ /* pclk_perilp0 gates */
+ GATE(PCLK_DCF, "pclk_dcf", "pclk_perilp0", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(23), 9, GFLAGS),
+
+ /* crypto */
+ COMPOSITE(SCLK_CRYPTO0, "clk_crypto0", mux_pll_src_cpll_gpll_ppll_p, 0,
+ RK3399_CLKSEL_CON(24), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(7), 7, GFLAGS),
+
+ COMPOSITE(SCLK_CRYPTO1, "clk_crypto1", mux_pll_src_cpll_gpll_ppll_p, 0,
+ RK3399_CLKSEL_CON(26), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(7), 8, GFLAGS),
+
+ /* cm0s_perilp */
+ GATE(0, "cpll_fclk_cm0s_src", "cpll", 0,
+ RK3399_CLKGATE_CON(7), 6, GFLAGS),
+ GATE(0, "gpll_fclk_cm0s_src", "gpll", 0,
+ RK3399_CLKGATE_CON(7), 5, GFLAGS),
+ COMPOSITE(FCLK_CM0S, "fclk_cm0s", mux_fclk_cm0s_p, 0,
+ RK3399_CLKSEL_CON(24), 15, 1, MFLAGS, 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(7), 9, GFLAGS),
+
+ /* fclk_cm0s gates */
+ GATE(SCLK_M0_PERILP, "sclk_m0_perilp", "fclk_cm0s", 0, RK3399_CLKGATE_CON(24), 8, GFLAGS),
+ GATE(HCLK_M0_PERILP, "hclk_m0_perilp", "fclk_cm0s", 0, RK3399_CLKGATE_CON(24), 9, GFLAGS),
+ GATE(DCLK_M0_PERILP, "dclk_m0_perilp", "fclk_cm0s", 0, RK3399_CLKGATE_CON(24), 10, GFLAGS),
+ GATE(SCLK_M0_PERILP_DEC, "clk_m0_perilp_dec", "fclk_cm0s", 0, RK3399_CLKGATE_CON(24), 11, GFLAGS),
+ GATE(HCLK_M0_PERILP_NOC, "hclk_m0_perilp_noc", "fclk_cm0s", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(25), 11, GFLAGS),
+
+ /* perilp1 */
+ GATE(0, "cpll_hclk_perilp1_src", "cpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(8), 1, GFLAGS),
+ GATE(0, "gpll_hclk_perilp1_src", "gpll", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(8), 0, GFLAGS),
+ COMPOSITE_NOGATE(HCLK_PERILP1, "hclk_perilp1", mux_hclk_perilp1_p, CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(25), 7, 1, MFLAGS, 0, 5, DFLAGS),
+ COMPOSITE_NOMUX(PCLK_PERILP1, "pclk_perilp1", "hclk_perilp1", CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(25), 8, 3, DFLAGS,
+ RK3399_CLKGATE_CON(8), 2, GFLAGS),
+
+ /* hclk_perilp1 gates */
+ GATE(0, "hclk_perilp1_noc", "hclk_perilp1", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(25), 9, GFLAGS),
+ GATE(0, "hclk_sdio_noc", "hclk_perilp1", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(25), 12, GFLAGS),
+ GATE(HCLK_I2S0_8CH, "hclk_i2s0", "hclk_perilp1", 0, RK3399_CLKGATE_CON(34), 0, GFLAGS),
+ GATE(HCLK_I2S1_8CH, "hclk_i2s1", "hclk_perilp1", 0, RK3399_CLKGATE_CON(34), 1, GFLAGS),
+ GATE(HCLK_I2S2_8CH, "hclk_i2s2", "hclk_perilp1", 0, RK3399_CLKGATE_CON(34), 2, GFLAGS),
+ GATE(HCLK_SPDIF, "hclk_spdif", "hclk_perilp1", 0, RK3399_CLKGATE_CON(34), 3, GFLAGS),
+ GATE(HCLK_SDIO, "hclk_sdio", "hclk_perilp1", 0, RK3399_CLKGATE_CON(34), 4, GFLAGS),
+ GATE(PCLK_SPI5, "pclk_spi5", "hclk_perilp1", 0, RK3399_CLKGATE_CON(34), 5, GFLAGS),
+ GATE(0, "hclk_sdioaudio_noc", "hclk_perilp1", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(34), 6, GFLAGS),
+
+ /* pclk_perilp1 gates */
+ GATE(PCLK_UART0, "pclk_uart0", "pclk_perilp1", 0, RK3399_CLKGATE_CON(22), 0, GFLAGS),
+ GATE(PCLK_UART1, "pclk_uart1", "pclk_perilp1", 0, RK3399_CLKGATE_CON(22), 1, GFLAGS),
+ GATE(PCLK_UART2, "pclk_uart2", "pclk_perilp1", 0, RK3399_CLKGATE_CON(22), 2, GFLAGS),
+ GATE(PCLK_UART3, "pclk_uart3", "pclk_perilp1", 0, RK3399_CLKGATE_CON(22), 3, GFLAGS),
+ GATE(PCLK_I2C7, "pclk_rki2c7", "pclk_perilp1", 0, RK3399_CLKGATE_CON(22), 5, GFLAGS),
+ GATE(PCLK_I2C1, "pclk_rki2c1", "pclk_perilp1", 0, RK3399_CLKGATE_CON(22), 6, GFLAGS),
+ GATE(PCLK_I2C5, "pclk_rki2c5", "pclk_perilp1", 0, RK3399_CLKGATE_CON(22), 7, GFLAGS),
+ GATE(PCLK_I2C6, "pclk_rki2c6", "pclk_perilp1", 0, RK3399_CLKGATE_CON(22), 8, GFLAGS),
+ GATE(PCLK_I2C2, "pclk_rki2c2", "pclk_perilp1", 0, RK3399_CLKGATE_CON(22), 9, GFLAGS),
+ GATE(PCLK_I2C3, "pclk_rki2c3", "pclk_perilp1", 0, RK3399_CLKGATE_CON(22), 10, GFLAGS),
+ GATE(PCLK_MAILBOX0, "pclk_mailbox0", "pclk_perilp1", 0, RK3399_CLKGATE_CON(22), 11, GFLAGS),
+ GATE(PCLK_SARADC, "pclk_saradc", "pclk_perilp1", 0, RK3399_CLKGATE_CON(22), 12, GFLAGS),
+ GATE(PCLK_TSADC, "pclk_tsadc", "pclk_perilp1", 0, RK3399_CLKGATE_CON(22), 13, GFLAGS),
+ GATE(PCLK_EFUSE1024NS, "pclk_efuse1024ns", "pclk_perilp1", 0, RK3399_CLKGATE_CON(22), 14, GFLAGS),
+ GATE(PCLK_EFUSE1024S, "pclk_efuse1024s", "pclk_perilp1", 0, RK3399_CLKGATE_CON(22), 15, GFLAGS),
+ GATE(PCLK_SPI0, "pclk_spi0", "pclk_perilp1", 0, RK3399_CLKGATE_CON(23), 10, GFLAGS),
+ GATE(PCLK_SPI1, "pclk_spi1", "pclk_perilp1", 0, RK3399_CLKGATE_CON(23), 11, GFLAGS),
+ GATE(PCLK_SPI2, "pclk_spi2", "pclk_perilp1", 0, RK3399_CLKGATE_CON(23), 12, GFLAGS),
+ GATE(PCLK_SPI4, "pclk_spi4", "pclk_perilp1", 0, RK3399_CLKGATE_CON(23), 13, GFLAGS),
+ GATE(PCLK_PERIHP_GRF, "pclk_perilp_sgrf", "pclk_perilp1", 0, RK3399_CLKGATE_CON(24), 13, GFLAGS),
+ GATE(0, "pclk_perilp1_noc", "pclk_perilp1", 0, RK3399_CLKGATE_CON(25), 10, GFLAGS),
+
+ /* saradc */
+ COMPOSITE_NOMUX(SCLK_SARADC, "clk_saradc", "xin24m", 0,
+ RK3399_CLKSEL_CON(26), 8, 8, DFLAGS,
+ RK3399_CLKGATE_CON(9), 11, GFLAGS),
+
+ /* tsadc */
+ COMPOSITE(SCLK_TSADC, "clk_tsadc", mux_pll_p, 0,
+ RK3399_CLKSEL_CON(27), 15, 1, MFLAGS, 0, 10, DFLAGS,
+ RK3399_CLKGATE_CON(9), 10, GFLAGS),
+
+ /* cif_testout */
+ MUX(0, "clk_testout1_pll_src", mux_pll_src_cpll_gpll_npll_p, 0,
+ RK3399_CLKSEL_CON(38), 6, 2, MFLAGS),
+ COMPOSITE(0, "clk_testout1", mux_clk_testout1_p, 0,
+ RK3399_CLKSEL_CON(38), 5, 1, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(13), 14, GFLAGS),
+
+ MUX(0, "clk_testout2_pll_src", mux_pll_src_cpll_gpll_npll_p, 0,
+ RK3399_CLKSEL_CON(38), 14, 2, MFLAGS),
+ COMPOSITE(0, "clk_testout2", mux_clk_testout2_p, 0,
+ RK3399_CLKSEL_CON(38), 13, 1, MFLAGS, 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(13), 15, GFLAGS),
+
+ /* vio */
+ COMPOSITE(ACLK_VIO, "aclk_vio", mux_pll_src_cpll_gpll_ppll_p, CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(42), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(11), 10, GFLAGS),
+ COMPOSITE_NOMUX(PCLK_VIO, "pclk_vio", "aclk_vio", 0,
+ RK3399_CLKSEL_CON(43), 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(11), 1, GFLAGS),
+
+ GATE(ACLK_VIO_NOC, "aclk_vio_noc", "aclk_vio", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(29), 0, GFLAGS),
+
+ GATE(PCLK_MIPI_DSI0, "pclk_mipi_dsi0", "pclk_vio", 0,
+ RK3399_CLKGATE_CON(29), 1, GFLAGS),
+ GATE(PCLK_MIPI_DSI1, "pclk_mipi_dsi1", "pclk_vio", 0,
+ RK3399_CLKGATE_CON(29), 2, GFLAGS),
+ GATE(PCLK_VIO_GRF, "pclk_vio_grf", "pclk_vio", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(29), 12, GFLAGS),
+
+ /* hdcp */
+ COMPOSITE(ACLK_HDCP, "aclk_hdcp", mux_pll_src_cpll_gpll_ppll_p, 0,
+ RK3399_CLKSEL_CON(42), 14, 2, MFLAGS, 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(11), 12, GFLAGS),
+ COMPOSITE_NOMUX(HCLK_HDCP, "hclk_hdcp", "aclk_hdcp", 0,
+ RK3399_CLKSEL_CON(43), 5, 5, DFLAGS,
+ RK3399_CLKGATE_CON(11), 3, GFLAGS),
+ COMPOSITE_NOMUX(PCLK_HDCP, "pclk_hdcp", "aclk_hdcp", 0,
+ RK3399_CLKSEL_CON(43), 10, 5, DFLAGS,
+ RK3399_CLKGATE_CON(11), 10, GFLAGS),
+
+ GATE(ACLK_HDCP_NOC, "aclk_hdcp_noc", "aclk_hdcp", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(29), 4, GFLAGS),
+ GATE(ACLK_HDCP22, "aclk_hdcp22", "aclk_hdcp", 0,
+ RK3399_CLKGATE_CON(29), 10, GFLAGS),
+
+ GATE(HCLK_HDCP_NOC, "hclk_hdcp_noc", "hclk_hdcp", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(29), 5, GFLAGS),
+ GATE(HCLK_HDCP22, "hclk_hdcp22", "hclk_hdcp", 0,
+ RK3399_CLKGATE_CON(29), 9, GFLAGS),
+
+ GATE(PCLK_HDCP_NOC, "pclk_hdcp_noc", "pclk_hdcp", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(29), 3, GFLAGS),
+ GATE(PCLK_HDMI_CTRL, "pclk_hdmi_ctrl", "pclk_hdcp", 0,
+ RK3399_CLKGATE_CON(29), 6, GFLAGS),
+ GATE(PCLK_DP_CTRL, "pclk_dp_ctrl", "pclk_hdcp", 0,
+ RK3399_CLKGATE_CON(29), 7, GFLAGS),
+ GATE(PCLK_HDCP22, "pclk_hdcp22", "pclk_hdcp", 0,
+ RK3399_CLKGATE_CON(29), 8, GFLAGS),
+ GATE(PCLK_GASKET, "pclk_gasket", "pclk_hdcp", 0,
+ RK3399_CLKGATE_CON(29), 11, GFLAGS),
+
+ /* edp */
+ COMPOSITE(SCLK_DP_CORE, "clk_dp_core", mux_pll_src_npll_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(46), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(11), 8, GFLAGS),
+
+ COMPOSITE(PCLK_EDP, "pclk_edp", mux_pll_src_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(44), 15, 1, MFLAGS, 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(11), 11, GFLAGS),
+ GATE(PCLK_EDP_NOC, "pclk_edp_noc", "pclk_edp", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(32), 12, GFLAGS),
+ GATE(PCLK_EDP_CTRL, "pclk_edp_ctrl", "pclk_edp", 0,
+ RK3399_CLKGATE_CON(32), 13, GFLAGS),
+
+ /* hdmi */
+ GATE(SCLK_HDMI_SFR, "clk_hdmi_sfr", "xin24m", 0,
+ RK3399_CLKGATE_CON(11), 6, GFLAGS),
+
+ COMPOSITE(SCLK_HDMI_CEC, "clk_hdmi_cec", mux_pll_p, 0,
+ RK3399_CLKSEL_CON(45), 15, 1, MFLAGS, 0, 10, DFLAGS,
+ RK3399_CLKGATE_CON(11), 7, GFLAGS),
+
+ /* vop0 */
+ COMPOSITE(ACLK_VOP0_PRE, "aclk_vop0_pre", mux_pll_src_vpll_cpll_gpll_npll_p, 0,
+ RK3399_CLKSEL_CON(47), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(10), 8, GFLAGS),
+ COMPOSITE_NOMUX(0, "hclk_vop0_pre", "aclk_vop0_pre", 0,
+ RK3399_CLKSEL_CON(47), 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(10), 9, GFLAGS),
+
+ GATE(ACLK_VOP0, "aclk_vop0", "aclk_vop0_pre", 0,
+ RK3399_CLKGATE_CON(28), 3, GFLAGS),
+ GATE(ACLK_VOP0_NOC, "aclk_vop0_noc", "aclk_vop0_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(28), 1, GFLAGS),
+
+ GATE(HCLK_VOP0, "hclk_vop0", "hclk_vop0_pre", 0,
+ RK3399_CLKGATE_CON(28), 2, GFLAGS),
+ GATE(HCLK_VOP0_NOC, "hclk_vop0_noc", "hclk_vop0_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(28), 0, GFLAGS),
+
+ COMPOSITE(DCLK_VOP0_DIV, "dclk_vop0_div", mux_pll_src_vpll_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(49), 8, 2, MFLAGS, 0, 8, DFLAGS,
+ RK3399_CLKGATE_CON(10), 12, GFLAGS),
+
+ COMPOSITE_FRACMUX_NOGATE(0, "dclk_vop0_frac", "dclk_vop0_div", CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(106), 0,
+ &rk3399_dclk_vop0_fracmux),
+
+ COMPOSITE(SCLK_VOP0_PWM, "clk_vop0_pwm", mux_pll_src_vpll_cpll_gpll_24m_p, 0,
+ RK3399_CLKSEL_CON(51), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(10), 14, GFLAGS),
+
+ /* vop1 */
+ COMPOSITE(ACLK_VOP1_PRE, "aclk_vop1_pre", mux_pll_src_vpll_cpll_gpll_npll_p, 0,
+ RK3399_CLKSEL_CON(48), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(10), 10, GFLAGS),
+ COMPOSITE_NOMUX(0, "hclk_vop1_pre", "aclk_vop1_pre", 0,
+ RK3399_CLKSEL_CON(48), 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(10), 11, GFLAGS),
+
+ GATE(ACLK_VOP1, "aclk_vop1", "aclk_vop1_pre", 0,
+ RK3399_CLKGATE_CON(28), 7, GFLAGS),
+ GATE(ACLK_VOP1_NOC, "aclk_vop1_noc", "aclk_vop1_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(28), 5, GFLAGS),
+
+ GATE(HCLK_VOP1, "hclk_vop1", "hclk_vop1_pre", 0,
+ RK3399_CLKGATE_CON(28), 6, GFLAGS),
+ GATE(HCLK_VOP1_NOC, "hclk_vop1_noc", "hclk_vop1_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(28), 4, GFLAGS),
+
+ COMPOSITE(DCLK_VOP1_DIV, "dclk_vop1_div", mux_pll_src_vpll_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(50), 8, 2, MFLAGS, 0, 8, DFLAGS,
+ RK3399_CLKGATE_CON(10), 13, GFLAGS),
+
+ COMPOSITE_FRACMUX_NOGATE(0, "dclk_vop1_frac", "dclk_vop1_div", CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(107), 0,
+ &rk3399_dclk_vop1_fracmux),
+
+ COMPOSITE(SCLK_VOP1_PWM, "clk_vop1_pwm", mux_pll_src_vpll_cpll_gpll_24m_p, CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(52), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(10), 15, GFLAGS),
+
+ /* isp */
+ COMPOSITE(ACLK_ISP0, "aclk_isp0", mux_pll_src_cpll_gpll_ppll_p, 0,
+ RK3399_CLKSEL_CON(53), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(12), 8, GFLAGS),
+ COMPOSITE_NOMUX(HCLK_ISP0, "hclk_isp0", "aclk_isp0", 0,
+ RK3399_CLKSEL_CON(53), 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(12), 9, GFLAGS),
+
+ GATE(ACLK_ISP0_NOC, "aclk_isp0_noc", "aclk_isp0", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(27), 1, GFLAGS),
+ GATE(ACLK_ISP0_WRAPPER, "aclk_isp0_wrapper", "aclk_isp0", 0,
+ RK3399_CLKGATE_CON(27), 5, GFLAGS),
+ GATE(HCLK_ISP1_WRAPPER, "hclk_isp1_wrapper", "aclk_isp0", 0,
+ RK3399_CLKGATE_CON(27), 7, GFLAGS),
+
+ GATE(HCLK_ISP0_NOC, "hclk_isp0_noc", "hclk_isp0", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(27), 0, GFLAGS),
+ GATE(HCLK_ISP0_WRAPPER, "hclk_isp0_wrapper", "hclk_isp0", 0,
+ RK3399_CLKGATE_CON(27), 4, GFLAGS),
+
+ COMPOSITE(SCLK_ISP0, "clk_isp0", mux_pll_src_cpll_gpll_npll_p, 0,
+ RK3399_CLKSEL_CON(55), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(11), 4, GFLAGS),
+
+ COMPOSITE(ACLK_ISP1, "aclk_isp1", mux_pll_src_cpll_gpll_ppll_p, 0,
+ RK3399_CLKSEL_CON(54), 6, 2, MFLAGS, 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(12), 10, GFLAGS),
+ COMPOSITE_NOMUX(HCLK_ISP1, "hclk_isp1", "aclk_isp1", 0,
+ RK3399_CLKSEL_CON(54), 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(12), 11, GFLAGS),
+
+ GATE(ACLK_ISP1_NOC, "aclk_isp1_noc", "aclk_isp1", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(27), 3, GFLAGS),
+
+ GATE(HCLK_ISP1_NOC, "hclk_isp1_noc", "hclk_isp1", CLK_IGNORE_UNUSED,
+ RK3399_CLKGATE_CON(27), 2, GFLAGS),
+ GATE(ACLK_ISP1_WRAPPER, "aclk_isp1_wrapper", "hclk_isp1", 0,
+ RK3399_CLKGATE_CON(27), 8, GFLAGS),
+
+ COMPOSITE(SCLK_ISP1, "clk_isp1", mux_pll_src_cpll_gpll_npll_p, 0,
+ RK3399_CLKSEL_CON(55), 14, 2, MFLAGS, 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(11), 5, GFLAGS),
+
+ /*
+ * pclkin_cifinv is selected by the system's default GRF_SOC_CON20[9]
+ * (GSC20_9) setting, so the mux is ignored and the clock nodes are
+ * modelled as follows:
+ *
+ * pclkin_cifinv --|-------\
+ * |GSC20_9|-- pclkin_cifmux -- |G27_6| -- pclkin_isp1_wrapper
+ * pclkin_cif --|-------/
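+ *
+ * Note that GSC20_9 is a GRF bit, not a CRU register, so it could not be
+ * handled by a plain CRU mux branch anyway.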
+ */
+ GATE(PCLK_ISP1_WRAPPER, "pclkin_isp1_wrapper", "pclkin_cif", 0,
+ RK3399_CLKGATE_CON(27), 6, GFLAGS),
+
+ /* cif */
+ COMPOSITE_NODIV(0, "clk_cifout_src", mux_pll_src_cpll_gpll_npll_p, 0,
+ RK3399_CLKSEL_CON(56), 6, 2, MFLAGS,
+ RK3399_CLKGATE_CON(10), 7, GFLAGS),
+
+ COMPOSITE_NOGATE(SCLK_CIF_OUT, "clk_cifout", mux_clk_cif_p, 0,
+ RK3399_CLKSEL_CON(56), 5, 1, MFLAGS, 0, 5, DFLAGS),
+
+ /* gic */
+ COMPOSITE(ACLK_GIC_PRE, "aclk_gic_pre", mux_pll_src_cpll_gpll_p, CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(56), 15, 1, MFLAGS, 8, 5, DFLAGS,
+ RK3399_CLKGATE_CON(12), 12, GFLAGS),
+
+ GATE(ACLK_GIC, "aclk_gic", "aclk_gic_pre", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(33), 0, GFLAGS),
+ GATE(ACLK_GIC_NOC, "aclk_gic_noc", "aclk_gic_pre", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(33), 1, GFLAGS),
+ GATE(ACLK_GIC_ADB400_CORE_L_2_GIC, "aclk_gic_adb400_core_l_2_gic", "aclk_gic_pre", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(33), 2, GFLAGS),
+ GATE(ACLK_GIC_ADB400_CORE_B_2_GIC, "aclk_gic_adb400_core_b_2_gic", "aclk_gic_pre", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(33), 3, GFLAGS),
+ GATE(ACLK_GIC_ADB400_GIC_2_CORE_L, "aclk_gic_adb400_gic_2_core_l", "aclk_gic_pre", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(33), 4, GFLAGS),
+ GATE(ACLK_GIC_ADB400_GIC_2_CORE_B, "aclk_gic_adb400_gic_2_core_b", "aclk_gic_pre", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(33), 5, GFLAGS),
+
+ /* alive */
+ /* pclk_alive_gpll_src is controlled by PMUGRF_SOC_CON0[6] */
+ DIV(PCLK_ALIVE, "pclk_alive", "gpll", 0,
+ RK3399_CLKSEL_CON(57), 0, 5, DFLAGS),
+
+ GATE(PCLK_USBPHY_MUX_G, "pclk_usbphy_mux_g", "pclk_alive", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(21), 4, GFLAGS),
+ GATE(PCLK_UPHY0_TCPHY_G, "pclk_uphy0_tcphy_g", "pclk_alive", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(21), 5, GFLAGS),
+ GATE(PCLK_UPHY0_TCPD_G, "pclk_uphy0_tcpd_g", "pclk_alive", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(21), 6, GFLAGS),
+ GATE(PCLK_UPHY1_TCPHY_G, "pclk_uphy1_tcphy_g", "pclk_alive", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(21), 8, GFLAGS),
+ GATE(PCLK_UPHY1_TCPD_G, "pclk_uphy1_tcpd_g", "pclk_alive", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(21), 9, GFLAGS),
+
+ GATE(PCLK_GRF, "pclk_grf", "pclk_alive", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(31), 1, GFLAGS),
+ GATE(PCLK_INTR_ARB, "pclk_intr_arb", "pclk_alive", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(31), 2, GFLAGS),
+ GATE(PCLK_GPIO2, "pclk_gpio2", "pclk_alive", 0, RK3399_CLKGATE_CON(31), 3, GFLAGS),
+ GATE(PCLK_GPIO3, "pclk_gpio3", "pclk_alive", 0, RK3399_CLKGATE_CON(31), 4, GFLAGS),
+ GATE(PCLK_GPIO4, "pclk_gpio4", "pclk_alive", 0, RK3399_CLKGATE_CON(31), 5, GFLAGS),
+ GATE(PCLK_TIMER0, "pclk_timer0", "pclk_alive", 0, RK3399_CLKGATE_CON(31), 6, GFLAGS),
+ GATE(PCLK_TIMER1, "pclk_timer1", "pclk_alive", 0, RK3399_CLKGATE_CON(31), 7, GFLAGS),
+ GATE(PCLK_PMU_INTR_ARB, "pclk_pmu_intr_arb", "pclk_alive", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(31), 9, GFLAGS),
+ GATE(PCLK_SGRF, "pclk_sgrf", "pclk_alive", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(31), 10, GFLAGS),
+
+ GATE(SCLK_MIPIDPHY_REF, "clk_mipidphy_ref", "xin24m", 0, RK3399_CLKGATE_CON(11), 14, GFLAGS),
+ GATE(SCLK_DPHY_PLL, "clk_dphy_pll", "clk_mipidphy_ref", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(21), 0, GFLAGS),
+
+ GATE(SCLK_MIPIDPHY_CFG, "clk_mipidphy_cfg", "xin24m", 0, RK3399_CLKGATE_CON(11), 15, GFLAGS),
+ GATE(SCLK_DPHY_TX0_CFG, "clk_dphy_tx0_cfg", "clk_mipidphy_cfg", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(21), 1, GFLAGS),
+ GATE(SCLK_DPHY_TX1RX1_CFG, "clk_dphy_tx1rx1_cfg", "clk_mipidphy_cfg", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(21), 2, GFLAGS),
+ GATE(SCLK_DPHY_RX0_CFG, "clk_dphy_rx0_cfg", "clk_mipidphy_cfg", CLK_IGNORE_UNUSED, RK3399_CLKGATE_CON(21), 3, GFLAGS),
+
+ /* testout */
+ MUX(0, "clk_test_pre", mux_pll_src_cpll_gpll_p, CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(58), 7, 1, MFLAGS),
+ COMPOSITE_FRAC(0, "clk_test_frac", "clk_test_pre", CLK_SET_RATE_PARENT,
+ RK3399_CLKSEL_CON(105), 0,
+ RK3399_CLKGATE_CON(13), 9, GFLAGS),
+
+ DIV(0, "clk_test_24m", "xin24m", 0,
+ RK3399_CLKSEL_CON(57), 6, 10, DFLAGS),
+
+ /* spi */
+ COMPOSITE(SCLK_SPI0, "clk_spi0", mux_pll_src_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(59), 7, 1, MFLAGS, 0, 7, DFLAGS,
+ RK3399_CLKGATE_CON(9), 12, GFLAGS),
+
+ COMPOSITE(SCLK_SPI1, "clk_spi1", mux_pll_src_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(59), 15, 1, MFLAGS, 8, 7, DFLAGS,
+ RK3399_CLKGATE_CON(9), 13, GFLAGS),
+
+ COMPOSITE(SCLK_SPI2, "clk_spi2", mux_pll_src_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(60), 7, 1, MFLAGS, 0, 7, DFLAGS,
+ RK3399_CLKGATE_CON(9), 14, GFLAGS),
+
+ COMPOSITE(SCLK_SPI4, "clk_spi4", mux_pll_src_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(60), 15, 1, MFLAGS, 8, 7, DFLAGS,
+ RK3399_CLKGATE_CON(9), 15, GFLAGS),
+
+ COMPOSITE(SCLK_SPI5, "clk_spi5", mux_pll_src_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(58), 15, 1, MFLAGS, 8, 7, DFLAGS,
+ RK3399_CLKGATE_CON(13), 13, GFLAGS),
+
+ /* i2c */
+ COMPOSITE(SCLK_I2C1, "clk_i2c1", mux_pll_src_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(61), 7, 1, MFLAGS, 0, 7, DFLAGS,
+ RK3399_CLKGATE_CON(10), 0, GFLAGS),
+
+ COMPOSITE(SCLK_I2C2, "clk_i2c2", mux_pll_src_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(62), 7, 1, MFLAGS, 0, 7, DFLAGS,
+ RK3399_CLKGATE_CON(10), 2, GFLAGS),
+
+ COMPOSITE(SCLK_I2C3, "clk_i2c3", mux_pll_src_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(63), 7, 1, MFLAGS, 0, 7, DFLAGS,
+ RK3399_CLKGATE_CON(10), 4, GFLAGS),
+
+ COMPOSITE(SCLK_I2C5, "clk_i2c5", mux_pll_src_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(61), 15, 1, MFLAGS, 8, 7, DFLAGS,
+ RK3399_CLKGATE_CON(10), 1, GFLAGS),
+
+ COMPOSITE(SCLK_I2C6, "clk_i2c6", mux_pll_src_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(62), 15, 1, MFLAGS, 8, 7, DFLAGS,
+ RK3399_CLKGATE_CON(10), 3, GFLAGS),
+
+ COMPOSITE(SCLK_I2C7, "clk_i2c7", mux_pll_src_cpll_gpll_p, 0,
+ RK3399_CLKSEL_CON(63), 15, 1, MFLAGS, 8, 7, DFLAGS,
+ RK3399_CLKGATE_CON(10), 5, GFLAGS),
+
+ /* timer */
+ GATE(SCLK_TIMER00, "clk_timer00", "xin24m", 0, RK3399_CLKGATE_CON(26), 0, GFLAGS),
+ GATE(SCLK_TIMER01, "clk_timer01", "xin24m", 0, RK3399_CLKGATE_CON(26), 1, GFLAGS),
+ GATE(SCLK_TIMER02, "clk_timer02", "xin24m", 0, RK3399_CLKGATE_CON(26), 2, GFLAGS),
+ GATE(SCLK_TIMER03, "clk_timer03", "xin24m", 0, RK3399_CLKGATE_CON(26), 3, GFLAGS),
+ GATE(SCLK_TIMER04, "clk_timer04", "xin24m", 0, RK3399_CLKGATE_CON(26), 4, GFLAGS),
+ GATE(SCLK_TIMER05, "clk_timer05", "xin24m", 0, RK3399_CLKGATE_CON(26), 5, GFLAGS),
+ GATE(SCLK_TIMER06, "clk_timer06", "xin24m", 0, RK3399_CLKGATE_CON(26), 6, GFLAGS),
+ GATE(SCLK_TIMER07, "clk_timer07", "xin24m", 0, RK3399_CLKGATE_CON(26), 7, GFLAGS),
+ GATE(SCLK_TIMER08, "clk_timer08", "xin24m", 0, RK3399_CLKGATE_CON(26), 8, GFLAGS),
+ GATE(SCLK_TIMER09, "clk_timer09", "xin24m", 0, RK3399_CLKGATE_CON(26), 9, GFLAGS),
+ GATE(SCLK_TIMER10, "clk_timer10", "xin24m", 0, RK3399_CLKGATE_CON(26), 10, GFLAGS),
+ GATE(SCLK_TIMER11, "clk_timer11", "xin24m", 0, RK3399_CLKGATE_CON(26), 11, GFLAGS),
+
+ /* clk_test */
+ /* clk_test_pre is controlled by CRU_MISC_CON[3] */
+ COMPOSITE_NOMUX(0, "clk_test", "clk_test_pre", CLK_IGNORE_UNUSED,
+ RK3399_CLKSEL_CON(58), 0, 5, DFLAGS,
+ RK3399_CLKGATE_CON(13), 11, GFLAGS),
+};
+
+static struct rockchip_clk_branch rk3399_clk_pmu_branches[] __initdata = {
+ /*
+ * PMU CRU Clock-Architecture
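+ *
+ * These branches program the separate PMU CRU register block, addressed
+ * through the RK3399_PMU_CLKSEL_CON / RK3399_PMU_CLKGATE_CON offsets, and
+ * are sourced largely from ppll and xin24m.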
+ */
+
+ GATE(0, "fclk_cm0s_pmu_ppll_src", "ppll", 0,
+ RK3399_PMU_CLKGATE_CON(0), 1, GFLAGS),
+
+ COMPOSITE_NOGATE(FCLK_CM0S_SRC_PMU, "fclk_cm0s_src_pmu", mux_fclk_cm0s_pmu_ppll_p, 0,
+ RK3399_PMU_CLKSEL_CON(0), 15, 1, MFLAGS, 8, 5, DFLAGS),
+
+ COMPOSITE(SCLK_SPI3_PMU, "clk_spi3_pmu", mux_24m_ppll_p, 0,
+ RK3399_PMU_CLKSEL_CON(1), 7, 1, MFLAGS, 0, 7, DFLAGS,
+ RK3399_PMU_CLKGATE_CON(0), 2, GFLAGS),
+
+ COMPOSITE(0, "clk_wifi_div", mux_ppll_24m_p, CLK_IGNORE_UNUSED,
+ RK3399_PMU_CLKSEL_CON(1), 13, 1, MFLAGS, 8, 5, DFLAGS,
+ RK3399_PMU_CLKGATE_CON(0), 8, GFLAGS),
+
+ COMPOSITE_FRACMUX_NOGATE(0, "clk_wifi_frac", "clk_wifi_div", CLK_SET_RATE_PARENT,
+ RK3399_PMU_CLKSEL_CON(7), 0,
+ &rk3399_pmuclk_wifi_fracmux),
+
+ MUX(0, "clk_timer_src_pmu", mux_pll_p, CLK_IGNORE_UNUSED,
+ RK3399_PMU_CLKSEL_CON(1), 15, 1, MFLAGS),
+
+ COMPOSITE_NOMUX(SCLK_I2C0_PMU, "clk_i2c0_pmu", "ppll", 0,
+ RK3399_PMU_CLKSEL_CON(2), 0, 7, DFLAGS,
+ RK3399_PMU_CLKGATE_CON(0), 9, GFLAGS),
+
+ COMPOSITE_NOMUX(SCLK_I2C4_PMU, "clk_i2c4_pmu", "ppll", 0,
+ RK3399_PMU_CLKSEL_CON(3), 0, 7, DFLAGS,
+ RK3399_PMU_CLKGATE_CON(0), 10, GFLAGS),
+
+ COMPOSITE_NOMUX(SCLK_I2C8_PMU, "clk_i2c8_pmu", "ppll", 0,
+ RK3399_PMU_CLKSEL_CON(2), 8, 7, DFLAGS,
+ RK3399_PMU_CLKGATE_CON(0), 11, GFLAGS),
+
+ DIV(0, "clk_32k_suspend_pmu", "xin24m", CLK_IGNORE_UNUSED,
+ RK3399_PMU_CLKSEL_CON(4), 0, 10, DFLAGS),
+ MUX(0, "clk_testout_2io", mux_clk_testout2_2io_p, CLK_IGNORE_UNUSED,
+ RK3399_PMU_CLKSEL_CON(4), 15, 1, MFLAGS),
+
+ COMPOSITE(0, "clk_uart4_div", mux_24m_ppll_p, 0,
+ RK3399_PMU_CLKSEL_CON(5), 10, 1, MFLAGS, 0, 7, DFLAGS,
+ RK3399_PMU_CLKGATE_CON(0), 5, GFLAGS),
+
+ COMPOSITE_FRACMUX(0, "clk_uart4_frac", "clk_uart4_div", CLK_SET_RATE_PARENT,
+ RK3399_PMU_CLKSEL_CON(6), 0,
+ RK3399_PMU_CLKGATE_CON(0), 6, GFLAGS,
+ &rk3399_uart4_pmu_fracmux),
+
+ DIV(PCLK_SRC_PMU, "pclk_pmu_src", "ppll", CLK_IGNORE_UNUSED,
+ RK3399_PMU_CLKSEL_CON(0), 0, 5, DFLAGS),
+
+ /* pmu clock gates */
+ GATE(SCLK_TIMER12_PMU, "clk_timer0_pmu", "clk_timer_src_pmu", 0, RK3399_PMU_CLKGATE_CON(0), 3, GFLAGS),
+ GATE(SCLK_TIMER13_PMU, "clk_timer1_pmu", "clk_timer_src_pmu", 0, RK3399_PMU_CLKGATE_CON(0), 4, GFLAGS),
+
+ GATE(SCLK_PVTM_PMU, "clk_pvtm_pmu", "xin24m", CLK_IGNORE_UNUSED, RK3399_PMU_CLKGATE_CON(0), 7, GFLAGS),
+
+ GATE(PCLK_PMU, "pclk_pmu", "pclk_pmu_src", CLK_IGNORE_UNUSED, RK3399_PMU_CLKGATE_CON(1), 0, GFLAGS),
+ GATE(PCLK_PMUGRF_PMU, "pclk_pmugrf_pmu", "pclk_pmu_src", CLK_IGNORE_UNUSED, RK3399_PMU_CLKGATE_CON(1), 1, GFLAGS),
+ GATE(PCLK_INTMEM1_PMU, "pclk_intmem1_pmu", "pclk_pmu_src", CLK_IGNORE_UNUSED, RK3399_PMU_CLKGATE_CON(1), 2, GFLAGS),
+ GATE(PCLK_GPIO0_PMU, "pclk_gpio0_pmu", "pclk_pmu_src", 0, RK3399_PMU_CLKGATE_CON(1), 3, GFLAGS),
+ GATE(PCLK_GPIO1_PMU, "pclk_gpio1_pmu", "pclk_pmu_src", 0, RK3399_PMU_CLKGATE_CON(1), 4, GFLAGS),
+ GATE(PCLK_SGRF_PMU, "pclk_sgrf_pmu", "pclk_pmu_src", CLK_IGNORE_UNUSED, RK3399_PMU_CLKGATE_CON(1), 5, GFLAGS),
+ GATE(PCLK_NOC_PMU, "pclk_noc_pmu", "pclk_pmu_src", CLK_IGNORE_UNUSED, RK3399_PMU_CLKGATE_CON(1), 6, GFLAGS),
+ GATE(PCLK_I2C0_PMU, "pclk_i2c0_pmu", "pclk_pmu_src", 0, RK3399_PMU_CLKGATE_CON(1), 7, GFLAGS),
+ GATE(PCLK_I2C4_PMU, "pclk_i2c4_pmu", "pclk_pmu_src", 0, RK3399_PMU_CLKGATE_CON(1), 8, GFLAGS),
+ GATE(PCLK_I2C8_PMU, "pclk_i2c8_pmu", "pclk_pmu_src", 0, RK3399_PMU_CLKGATE_CON(1), 9, GFLAGS),
+ GATE(PCLK_RKPWM_PMU, "pclk_rkpwm_pmu", "pclk_pmu_src", 0, RK3399_PMU_CLKGATE_CON(1), 10, GFLAGS),
+ GATE(PCLK_SPI3_PMU, "pclk_spi3_pmu", "pclk_pmu_src", 0, RK3399_PMU_CLKGATE_CON(1), 11, GFLAGS),
+ GATE(PCLK_TIMER_PMU, "pclk_timer_pmu", "pclk_pmu_src", 0, RK3399_PMU_CLKGATE_CON(1), 12, GFLAGS),
+ GATE(PCLK_MAILBOX_PMU, "pclk_mailbox_pmu", "pclk_pmu_src", 0, RK3399_PMU_CLKGATE_CON(1), 13, GFLAGS),
+ GATE(PCLK_UART4_PMU, "pclk_uart4_pmu", "pclk_pmu_src", 0, RK3399_PMU_CLKGATE_CON(1), 14, GFLAGS),
+ GATE(PCLK_WDT_M0_PMU, "pclk_wdt_m0_pmu", "pclk_pmu_src", 0, RK3399_PMU_CLKGATE_CON(1), 15, GFLAGS),
+
+ GATE(FCLK_CM0S_PMU, "fclk_cm0s_pmu", "fclk_cm0s_src_pmu", 0, RK3399_PMU_CLKGATE_CON(2), 0, GFLAGS),
+ GATE(SCLK_CM0S_PMU, "sclk_cm0s_pmu", "fclk_cm0s_src_pmu", 0, RK3399_PMU_CLKGATE_CON(2), 1, GFLAGS),
+ GATE(HCLK_CM0S_PMU, "hclk_cm0s_pmu", "fclk_cm0s_src_pmu", 0, RK3399_PMU_CLKGATE_CON(2), 2, GFLAGS),
+ GATE(DCLK_CM0S_PMU, "dclk_cm0s_pmu", "fclk_cm0s_src_pmu", 0, RK3399_PMU_CLKGATE_CON(2), 3, GFLAGS),
+ GATE(HCLK_NOC_PMU, "hclk_noc_pmu", "fclk_cm0s_src_pmu", CLK_IGNORE_UNUSED, RK3399_PMU_CLKGATE_CON(2), 5, GFLAGS),
+};
+
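+/*
+ * Bus and NoC clocks that have no Linux consumer of their own:
+ * rockchip_clk_protect_critical() takes an enable reference on every name
+ * listed here so the clock framework never disables them as unused.
+ */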
+static const char *const rk3399_cru_critical_clocks[] __initconst = {
+ "aclk_cci_pre",
+ "pclk_perilp0",
+ "pclk_perilp0",
+ "hclk_perilp0",
+ "hclk_perilp0_noc",
+ "pclk_perilp1",
+ "pclk_perilp1_noc",
+ "pclk_perihp",
+ "pclk_perihp_noc",
+ "hclk_perihp",
+ "aclk_perihp",
+ "aclk_perihp_noc",
+ "aclk_perilp0",
+ "aclk_perilp0_noc",
+ "hclk_perilp1",
+ "hclk_perilp1_noc",
+ "aclk_dmac0_perilp",
+ "gpll_hclk_perilp1_src",
+ "gpll_aclk_perilp0_src",
+ "gpll_aclk_perihp_src",
+};
+
+static const char *const rk3399_pmucru_critical_clocks[] __initconst = {
+ "ppll",
+ "pclk_pmu_src",
+ "fclk_cm0s_src_pmu",
+ "clk_timer_src_pmu",
+};
+
+static void __init rk3399_clk_init(struct device_node *np)
+{
+ struct rockchip_clk_provider *ctx;
+ void __iomem *reg_base;
+
+ reg_base = of_iomap(np, 0);
+ if (!reg_base) {
+ pr_err("%s: could not map cru region\n", __func__);
+ return;
+ }
+
+ ctx = rockchip_clk_init(np, reg_base, CLK_NR_CLKS);
+ if (IS_ERR(ctx)) {
+ pr_err("%s: rockchip clk init failed\n", __func__);
+ return;
+ }
+
+ rockchip_clk_register_plls(ctx, rk3399_pll_clks,
+ ARRAY_SIZE(rk3399_pll_clks), -1);
+
+ rockchip_clk_register_branches(ctx, rk3399_clk_branches,
+ ARRAY_SIZE(rk3399_clk_branches));
+
+ rockchip_clk_protect_critical(rk3399_cru_critical_clocks,
+ ARRAY_SIZE(rk3399_cru_critical_clocks));
+
+ rockchip_clk_register_armclk(ctx, ARMCLKL, "armclkl",
+ mux_armclkl_p, ARRAY_SIZE(mux_armclkl_p),
+ &rk3399_cpuclkl_data, rk3399_cpuclkl_rates,
+ ARRAY_SIZE(rk3399_cpuclkl_rates));
+
+ rockchip_clk_register_armclk(ctx, ARMCLKB, "armclkb",
+ mux_armclkb_p, ARRAY_SIZE(mux_armclkb_p),
+ &rk3399_cpuclkb_data, rk3399_cpuclkb_rates,
+ ARRAY_SIZE(rk3399_cpuclkb_rates));
+
+ rockchip_register_softrst(np, 21, reg_base + RK3399_SOFTRST_CON(0),
+ ROCKCHIP_SOFTRST_HIWORD_MASK);
+
+ rockchip_register_restart_notifier(ctx, RK3399_GLB_SRST_FST, NULL);
+
+ rockchip_clk_of_add_provider(np, ctx);
+}
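+/*
+ * Both CRUs are registered with CLK_OF_DECLARE so their clocks are available
+ * at of_clk_init() time, before ordinary platform device probing.
+ */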
+CLK_OF_DECLARE(rk3399_cru, "rockchip,rk3399-cru", rk3399_clk_init);
+
+static void __init rk3399_pmu_clk_init(struct device_node *np)
+{
+ struct rockchip_clk_provider *ctx;
+ void __iomem *reg_base;
+
+ reg_base = of_iomap(np, 0);
+ if (!reg_base) {
+ pr_err("%s: could not map cru pmu region\n", __func__);
+ return;
+ }
+
+ ctx = rockchip_clk_init(np, reg_base, CLKPMU_NR_CLKS);
+ if (IS_ERR(ctx)) {
+ pr_err("%s: rockchip pmu clk init failed\n", __func__);
+ return;
+ }
+
+ rockchip_clk_register_plls(ctx, rk3399_pmu_pll_clks,
+ ARRAY_SIZE(rk3399_pmu_pll_clks), -1);
+
+ rockchip_clk_register_branches(ctx, rk3399_clk_pmu_branches,
+ ARRAY_SIZE(rk3399_clk_pmu_branches));
+
+ rockchip_clk_protect_critical(rk3399_pmucru_critical_clocks,
+ ARRAY_SIZE(rk3399_pmucru_critical_clocks));
+
+ rockchip_register_softrst(np, 2, reg_base + RK3399_PMU_SOFTRST_CON(0),
+ ROCKCHIP_SOFTRST_HIWORD_MASK);
+
+ rockchip_clk_of_add_provider(np, ctx);
+}
+CLK_OF_DECLARE(rk3399_cru_pmu, "rockchip,rk3399-pmucru", rk3399_pmu_clk_init);
diff --git a/drivers/clk/rockchip/clk.c b/drivers/clk/rockchip/clk.c
index ec06350c78c4..7ffd134995f2 100644
--- a/drivers/clk/rockchip/clk.c
+++ b/drivers/clk/rockchip/clk.c
@@ -2,6 +2,9 @@
* Copyright (c) 2014 MundoReader S.L.
* Author: Heiko Stuebner <heiko@sntech.de>
*
+ * Copyright (c) 2016 Rockchip Electronics Co. Ltd.
+ * Author: Xing Zheng <zhengxing@rock-chips.com>
+ *
* based on
*
* samsung/clk.c
@@ -39,7 +42,8 @@
* sometimes without one of those components.
*/
static struct clk *rockchip_clk_register_branch(const char *name,
- const char *const *parent_names, u8 num_parents, void __iomem *base,
+ const char *const *parent_names, u8 num_parents,
+ void __iomem *base,
int muxdiv_offset, u8 mux_shift, u8 mux_width, u8 mux_flags,
u8 div_shift, u8 div_width, u8 div_flags,
struct clk_div_table *div_table, int gate_offset,
@@ -136,9 +140,11 @@ static int rockchip_clk_frac_notifier_cb(struct notifier_block *nb,
pr_debug("%s: event %lu, old_rate %lu, new_rate: %lu\n",
__func__, event, ndata->old_rate, ndata->new_rate);
if (event == PRE_RATE_CHANGE) {
- frac->rate_change_idx = frac->mux_ops->get_parent(&frac_mux->hw);
+ frac->rate_change_idx =
+ frac->mux_ops->get_parent(&frac_mux->hw);
if (frac->rate_change_idx != frac->mux_frac_idx) {
- frac->mux_ops->set_parent(&frac_mux->hw, frac->mux_frac_idx);
+ frac->mux_ops->set_parent(&frac_mux->hw,
+ frac->mux_frac_idx);
frac->rate_change_remuxed = 1;
}
} else if (event == POST_RATE_CHANGE) {
@@ -149,7 +155,8 @@ static int rockchip_clk_frac_notifier_cb(struct notifier_block *nb,
* reaches the mux itself.
*/
if (frac->rate_change_remuxed) {
- frac->mux_ops->set_parent(&frac_mux->hw, frac->rate_change_idx);
+ frac->mux_ops->set_parent(&frac_mux->hw,
+ frac->rate_change_idx);
frac->rate_change_remuxed = 0;
}
}
@@ -157,7 +164,8 @@ static int rockchip_clk_frac_notifier_cb(struct notifier_block *nb,
return notifier_from_errno(ret);
}
-static struct clk *rockchip_clk_register_frac_branch(const char *name,
+static struct clk *rockchip_clk_register_frac_branch(
+ struct rockchip_clk_provider *ctx, const char *name,
const char *const *parent_names, u8 num_parents,
void __iomem *base, int muxdiv_offset, u8 div_flags,
int gate_offset, u8 gate_shift, u8 gate_flags,
@@ -250,7 +258,7 @@ static struct clk *rockchip_clk_register_frac_branch(const char *name,
if (IS_ERR(mux_clk))
return clk;
- rockchip_clk_add_lookup(mux_clk, child->id);
+ rockchip_clk_add_lookup(ctx, mux_clk, child->id);
/* notifier on the fraction divider to catch rate changes */
if (frac->mux_frac_idx >= 0) {
@@ -314,66 +322,82 @@ static struct clk *rockchip_clk_register_factor_branch(const char *name,
return clk;
}
-static DEFINE_SPINLOCK(clk_lock);
-static struct clk **clk_table;
-static void __iomem *reg_base;
-static struct clk_onecell_data clk_data;
-static struct device_node *cru_node;
-static struct regmap *grf;
-
-void __init rockchip_clk_init(struct device_node *np, void __iomem *base,
- unsigned long nr_clks)
+struct rockchip_clk_provider * __init rockchip_clk_init(struct device_node *np,
+ void __iomem *base, unsigned long nr_clks)
{
- reg_base = base;
- cru_node = np;
- grf = ERR_PTR(-EPROBE_DEFER);
+ struct rockchip_clk_provider *ctx;
+ struct clk **clk_table;
+ int i;
+
+ ctx = kzalloc(sizeof(struct rockchip_clk_provider), GFP_KERNEL);
+ if (!ctx)
+ return ERR_PTR(-ENOMEM);
clk_table = kcalloc(nr_clks, sizeof(struct clk *), GFP_KERNEL);
if (!clk_table)
- pr_err("%s: could not allocate clock lookup table\n", __func__);
+ goto err_free;
- clk_data.clks = clk_table;
- clk_data.clk_num = nr_clks;
- of_clk_add_provider(np, of_clk_src_onecell_get, &clk_data);
+ for (i = 0; i < nr_clks; ++i)
+ clk_table[i] = ERR_PTR(-ENOENT);
+
+ ctx->reg_base = base;
+ ctx->clk_data.clks = clk_table;
+ ctx->clk_data.clk_num = nr_clks;
+ ctx->cru_node = np;
+ ctx->grf = ERR_PTR(-EPROBE_DEFER);
+ spin_lock_init(&ctx->lock);
+
+ ctx->grf = syscon_regmap_lookup_by_phandle(ctx->cru_node,
+ "rockchip,grf");
+
+ return ctx;
+
+err_free:
+ kfree(ctx);
+ return ERR_PTR(-ENOMEM);
}
-struct regmap *rockchip_clk_get_grf(void)
+void __init rockchip_clk_of_add_provider(struct device_node *np,
+ struct rockchip_clk_provider *ctx)
{
- if (IS_ERR(grf))
- grf = syscon_regmap_lookup_by_phandle(cru_node, "rockchip,grf");
- return grf;
+ if (of_clk_add_provider(np, of_clk_src_onecell_get,
+ &ctx->clk_data))
+ pr_err("%s: could not register clk provider\n", __func__);
}
-void rockchip_clk_add_lookup(struct clk *clk, unsigned int id)
+void rockchip_clk_add_lookup(struct rockchip_clk_provider *ctx,
+ struct clk *clk, unsigned int id)
{
- if (clk_table && id)
- clk_table[id] = clk;
+ if (ctx->clk_data.clks && id)
+ ctx->clk_data.clks[id] = clk;
}
-void __init rockchip_clk_register_plls(struct rockchip_pll_clock *list,
+void __init rockchip_clk_register_plls(struct rockchip_clk_provider *ctx,
+ struct rockchip_pll_clock *list,
unsigned int nr_pll, int grf_lock_offset)
{
struct clk *clk;
int idx;
for (idx = 0; idx < nr_pll; idx++, list++) {
- clk = rockchip_clk_register_pll(list->type, list->name,
+ clk = rockchip_clk_register_pll(ctx, list->type, list->name,
list->parent_names, list->num_parents,
- reg_base, list->con_offset, grf_lock_offset,
+ list->con_offset, grf_lock_offset,
list->lock_shift, list->mode_offset,
list->mode_shift, list->rate_table,
- list->pll_flags, &clk_lock);
+ list->pll_flags);
if (IS_ERR(clk)) {
pr_err("%s: failed to register clock %s\n", __func__,
list->name);
continue;
}
- rockchip_clk_add_lookup(clk, list->id);
+ rockchip_clk_add_lookup(ctx, clk, list->id);
}
}
void __init rockchip_clk_register_branches(
+ struct rockchip_clk_provider *ctx,
struct rockchip_clk_branch *list,
unsigned int nr_clk)
{
@@ -389,56 +413,59 @@ void __init rockchip_clk_register_branches(
case branch_mux:
clk = clk_register_mux(NULL, list->name,
list->parent_names, list->num_parents,
- flags, reg_base + list->muxdiv_offset,
+ flags, ctx->reg_base + list->muxdiv_offset,
list->mux_shift, list->mux_width,
- list->mux_flags, &clk_lock);
+ list->mux_flags, &ctx->lock);
break;
case branch_divider:
if (list->div_table)
clk = clk_register_divider_table(NULL,
list->name, list->parent_names[0],
- flags, reg_base + list->muxdiv_offset,
+ flags,
+ ctx->reg_base + list->muxdiv_offset,
list->div_shift, list->div_width,
list->div_flags, list->div_table,
- &clk_lock);
+ &ctx->lock);
else
clk = clk_register_divider(NULL, list->name,
list->parent_names[0], flags,
- reg_base + list->muxdiv_offset,
+ ctx->reg_base + list->muxdiv_offset,
list->div_shift, list->div_width,
- list->div_flags, &clk_lock);
+ list->div_flags, &ctx->lock);
break;
case branch_fraction_divider:
- clk = rockchip_clk_register_frac_branch(list->name,
+ clk = rockchip_clk_register_frac_branch(ctx, list->name,
list->parent_names, list->num_parents,
- reg_base, list->muxdiv_offset, list->div_flags,
+ ctx->reg_base, list->muxdiv_offset,
+ list->div_flags,
list->gate_offset, list->gate_shift,
list->gate_flags, flags, list->child,
- &clk_lock);
+ &ctx->lock);
break;
case branch_gate:
flags |= CLK_SET_RATE_PARENT;
clk = clk_register_gate(NULL, list->name,
list->parent_names[0], flags,
- reg_base + list->gate_offset,
- list->gate_shift, list->gate_flags, &clk_lock);
+ ctx->reg_base + list->gate_offset,
+ list->gate_shift, list->gate_flags, &ctx->lock);
break;
case branch_composite:
clk = rockchip_clk_register_branch(list->name,
list->parent_names, list->num_parents,
- reg_base, list->muxdiv_offset, list->mux_shift,
+ ctx->reg_base, list->muxdiv_offset,
+ list->mux_shift,
list->mux_width, list->mux_flags,
list->div_shift, list->div_width,
list->div_flags, list->div_table,
list->gate_offset, list->gate_shift,
- list->gate_flags, flags, &clk_lock);
+ list->gate_flags, flags, &ctx->lock);
break;
case branch_mmc:
clk = rockchip_clk_register_mmc(
list->name,
list->parent_names, list->num_parents,
- reg_base + list->muxdiv_offset,
+ ctx->reg_base + list->muxdiv_offset,
list->div_shift
);
break;
@@ -446,16 +473,16 @@ void __init rockchip_clk_register_branches(
clk = rockchip_clk_register_inverter(
list->name, list->parent_names,
list->num_parents,
- reg_base + list->muxdiv_offset,
- list->div_shift, list->div_flags, &clk_lock);
+ ctx->reg_base + list->muxdiv_offset,
+ list->div_shift, list->div_flags, &ctx->lock);
break;
case branch_factor:
clk = rockchip_clk_register_factor_branch(
list->name, list->parent_names,
- list->num_parents, reg_base,
+ list->num_parents, ctx->reg_base,
list->div_shift, list->div_width,
list->gate_offset, list->gate_shift,
- list->gate_flags, flags, &clk_lock);
+ list->gate_flags, flags, &ctx->lock);
break;
}
@@ -472,11 +499,12 @@ void __init rockchip_clk_register_branches(
continue;
}
- rockchip_clk_add_lookup(clk, list->id);
+ rockchip_clk_add_lookup(ctx, clk, list->id);
}
}
-void __init rockchip_clk_register_armclk(unsigned int lookup_id,
+void __init rockchip_clk_register_armclk(struct rockchip_clk_provider *ctx,
+ unsigned int lookup_id,
const char *name, const char *const *parent_names,
u8 num_parents,
const struct rockchip_cpuclk_reg_data *reg_data,
@@ -486,15 +514,15 @@ void __init rockchip_clk_register_armclk(unsigned int lookup_id,
struct clk *clk;
clk = rockchip_clk_register_cpuclk(name, parent_names, num_parents,
- reg_data, rates, nrates, reg_base,
- &clk_lock);
+ reg_data, rates, nrates,
+ ctx->reg_base, &ctx->lock);
if (IS_ERR(clk)) {
pr_err("%s: failed to register clock %s: %ld\n",
__func__, name, PTR_ERR(clk));
return;
}
- rockchip_clk_add_lookup(clk, lookup_id);
+ rockchip_clk_add_lookup(ctx, clk, lookup_id);
}
void __init rockchip_clk_protect_critical(const char *const clocks[],
@@ -511,6 +539,7 @@ void __init rockchip_clk_protect_critical(const char *const clocks[],
}
}
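+/*
+ * The restart notifier callback has no way to reach a rockchip_clk_provider,
+ * so the register base used for the global soft-reset write below is cached
+ * here when the notifier is registered.
+ */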
+static void __iomem *rst_base;
static unsigned int reg_restart;
static void (*cb_restart)(void);
static int rockchip_restart_notify(struct notifier_block *this,
@@ -519,7 +548,7 @@ static int rockchip_restart_notify(struct notifier_block *this,
if (cb_restart)
cb_restart();
- writel(0xfdb9, reg_base + reg_restart);
+ writel(0xfdb9, rst_base + reg_restart);
return NOTIFY_DONE;
}
@@ -528,10 +557,14 @@ static struct notifier_block rockchip_restart_handler = {
.priority = 128,
};
-void __init rockchip_register_restart_notifier(unsigned int reg, void (*cb)(void))
+void __init
+rockchip_register_restart_notifier(struct rockchip_clk_provider *ctx,
+ unsigned int reg,
+ void (*cb)(void))
{
int ret;
+ rst_base = ctx->reg_base;
reg_restart = reg;
cb_restart = cb;
ret = register_restart_handler(&rockchip_restart_handler);
diff --git a/drivers/clk/rockchip/clk.h b/drivers/clk/rockchip/clk.h
index 39c198bbcbee..2194ffa8c9fd 100644
--- a/drivers/clk/rockchip/clk.h
+++ b/drivers/clk/rockchip/clk.h
@@ -27,13 +27,13 @@
#define CLK_ROCKCHIP_CLK_H
#include <linux/io.h>
+#include <linux/clk-provider.h>
struct clk;
#define HIWORD_UPDATE(val, mask, shift) \
((val) << (shift) | (mask) << ((shift) + 16))
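+/*
+ * The CRU registers use the Rockchip "hiword mask" scheme: the upper 16 bits
+ * of a write select which of the lower 16 bits are actually updated, e.g.
+ * HIWORD_UPDATE(0x3, 0x3, 8) expands to (0x3 << 8) | (0x3 << 24) and only
+ * touches bits [9:8].
+ */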
-/* register positions shared by RK2928, RK3036, RK3066, RK3188 and RK3228 */
#define RK2928_PLL_CON(x) ((x) * 0x4)
#define RK2928_MODE_CON 0x40
#define RK2928_CLKSEL_CON(x) ((x) * 0x4 + 0x44)
@@ -92,9 +92,30 @@ struct clk;
#define RK3368_EMMC_CON0 0x418
#define RK3368_EMMC_CON1 0x41c
+#define RK3399_PLL_CON(x) RK2928_PLL_CON(x)
+#define RK3399_CLKSEL_CON(x) ((x) * 0x4 + 0x100)
+#define RK3399_CLKGATE_CON(x) ((x) * 0x4 + 0x300)
+#define RK3399_SOFTRST_CON(x) ((x) * 0x4 + 0x400)
+#define RK3399_GLB_SRST_FST 0x500
+#define RK3399_GLB_SRST_SND 0x504
+#define RK3399_GLB_CNT_TH 0x508
+#define RK3399_MISC_CON 0x50c
+#define RK3399_RST_CON 0x510
+#define RK3399_RST_ST 0x514
+#define RK3399_SDMMC_CON0 0x580
+#define RK3399_SDMMC_CON1 0x584
+#define RK3399_SDIO_CON0 0x588
+#define RK3399_SDIO_CON1 0x58c
+
+#define RK3399_PMU_PLL_CON(x) RK2928_PLL_CON(x)
+#define RK3399_PMU_CLKSEL_CON(x) ((x) * 0x4 + 0x80)
+#define RK3399_PMU_CLKGATE_CON(x) ((x) * 0x4 + 0x100)
+#define RK3399_PMU_SOFTRST_CON(x) ((x) * 0x4 + 0x110)
+
enum rockchip_pll_type {
pll_rk3036,
pll_rk3066,
+ pll_rk3399,
};
#define RK3036_PLL_RATE(_rate, _refdiv, _fbdiv, _postdiv1, \
@@ -127,13 +148,29 @@ enum rockchip_pll_type {
.nb = _nb, \
}
+/**
+ * struct rockchip_clk_provider - information about clock provider
+ * @reg_base: virtual address for the register base.
+ * @clk_data: holds clock related data like clk* and number of clocks.
+ * @cru_node: device-node of the clock-provider
+ * @grf: regmap of the general-register-files syscon
+ * @lock: maintains exclusion between callbacks for a given clock-provider.
+ */
+struct rockchip_clk_provider {
+ void __iomem *reg_base;
+ struct clk_onecell_data clk_data;
+ struct device_node *cru_node;
+ struct regmap *grf;
+ spinlock_t lock;
+};
+
struct rockchip_pll_rate_table {
unsigned long rate;
unsigned int nr;
unsigned int nf;
unsigned int no;
unsigned int nb;
- /* for RK3036 */
+ /* for RK3036/RK3399 */
unsigned int fbdiv;
unsigned int postdiv1;
unsigned int refdiv;
@@ -143,10 +180,11 @@ struct rockchip_pll_rate_table {
};
/**
- * struct rockchip_pll_clock: information about pll clock
+ * struct rockchip_pll_clock - information about pll clock
* @id: platform specific id of the clock.
* @name: name of this pll clock.
- * @parent_name: name of the parent clock.
+ * @parent_names: names of the parent clocks.
+ * @num_parents: number of parents
* @flags: optional flags for basic clock.
* @con_offset: offset of the register for configuring the PLL.
* @mode_offset: offset of the register for configuring the PLL-mode.
@@ -194,12 +232,13 @@ struct rockchip_pll_clock {
.rate_table = _rtable, \
}
-struct clk *rockchip_clk_register_pll(enum rockchip_pll_type pll_type,
+struct clk *rockchip_clk_register_pll(struct rockchip_clk_provider *ctx,
+ enum rockchip_pll_type pll_type,
const char *name, const char *const *parent_names,
- u8 num_parents, void __iomem *base, int con_offset,
- int grf_lock_offset, int lock_shift, int reg_mode,
- int mode_shift, struct rockchip_pll_rate_table *rate_table,
- u8 clk_pll_flags, spinlock_t *lock);
+ u8 num_parents, int con_offset, int grf_lock_offset,
+ int lock_shift, int mode_offset, int mode_shift,
+ struct rockchip_pll_rate_table *rate_table,
+ u8 clk_pll_flags);
struct rockchip_cpuclk_clksel {
int reg;
@@ -213,18 +252,23 @@ struct rockchip_cpuclk_rate_table {
};
/**
- * struct rockchip_cpuclk_reg_data: describes register offsets and masks of the cpuclock
+ * struct rockchip_cpuclk_reg_data - register offsets and masks of the cpuclock
* @core_reg: register offset of the core settings register
* @div_core_shift: core divider offset used to divide the pll value
* @div_core_mask: core divider mask
+ * @mux_core_alt: mux value to select alternate parent
+ * @mux_core_main: mux value to select main parent of core
* @mux_core_shift: offset of the core multiplexer
+ * @mux_core_mask: core multiplexer mask
*/
struct rockchip_cpuclk_reg_data {
int core_reg;
u8 div_core_shift;
u32 div_core_mask;
- int mux_core_reg;
+ u8 mux_core_alt;
+ u8 mux_core_main;
u8 mux_core_shift;
+ u32 mux_core_mask;
};
struct clk *rockchip_clk_register_cpuclk(const char *name,
@@ -428,6 +472,22 @@ struct rockchip_clk_branch {
.child = ch, \
}
+#define COMPOSITE_FRACMUX_NOGATE(_id, cname, pname, f, mo, df, ch) \
+ { \
+ .id = _id, \
+ .branch_type = branch_fraction_divider, \
+ .name = cname, \
+ .parent_names = (const char *[]){ pname }, \
+ .num_parents = 1, \
+ .flags = f, \
+ .muxdiv_offset = mo, \
+ .div_shift = 16, \
+ .div_width = 16, \
+ .div_flags = df, \
+ .gate_offset = -1, \
+ .child = ch, \
+ }
+
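+/*
+ * Hypothetical use (identifiers are illustrative only): a fractional
+ * divider that has no gate of its own and feeds a mux child branch:
+ *
+ *   COMPOSITE_FRACMUX_NOGATE(0, "clk_test_frac", "clk_test", CLK_SET_RATE_PARENT,
+ *                            RK3399_CLKSEL_CON(96), 0, &clk_test_fracmux),
+ */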
#define MUX(_id, cname, pnames, f, o, s, w, mf) \
{ \
.id = _id, \
@@ -536,21 +596,27 @@ struct rockchip_clk_branch {
.gate_flags = gf, \
}
-void rockchip_clk_init(struct device_node *np, void __iomem *base,
- unsigned long nr_clks);
-struct regmap *rockchip_clk_get_grf(void);
-void rockchip_clk_add_lookup(struct clk *clk, unsigned int id);
-void rockchip_clk_register_branches(struct rockchip_clk_branch *clk_list,
+struct rockchip_clk_provider *rockchip_clk_init(struct device_node *np,
+ void __iomem *base, unsigned long nr_clks);
+void rockchip_clk_of_add_provider(struct device_node *np,
+ struct rockchip_clk_provider *ctx);
+void rockchip_clk_add_lookup(struct rockchip_clk_provider *ctx,
+ struct clk *clk, unsigned int id);
+void rockchip_clk_register_branches(struct rockchip_clk_provider *ctx,
+ struct rockchip_clk_branch *list,
unsigned int nr_clk);
-void rockchip_clk_register_plls(struct rockchip_pll_clock *pll_list,
+void rockchip_clk_register_plls(struct rockchip_clk_provider *ctx,
+ struct rockchip_pll_clock *pll_list,
unsigned int nr_pll, int grf_lock_offset);
-void rockchip_clk_register_armclk(unsigned int lookup_id, const char *name,
+void rockchip_clk_register_armclk(struct rockchip_clk_provider *ctx,
+ unsigned int lookup_id, const char *name,
const char *const *parent_names, u8 num_parents,
const struct rockchip_cpuclk_reg_data *reg_data,
const struct rockchip_cpuclk_rate_table *rates,
int nrates);
void rockchip_clk_protect_critical(const char *const clocks[], int nclocks);
-void rockchip_register_restart_notifier(unsigned int reg, void (*cb)(void));
+void rockchip_register_restart_notifier(struct rockchip_clk_provider *ctx,
+ unsigned int reg, void (*cb)(void));
#define ROCKCHIP_SOFTRST_HIWORD_MASK BIT(0)
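For orientation, and not part of the patch itself: a minimal sketch of how a SoC clock driver would use the context-based API declared above, assuming the usual of/clk-provider includes. The rk3288 identifiers (CLK_NR_CLKS, rk3288_pll_clks, rk3288_clk_branches, RK3288_GRF_SOC_STATUS1, RK3288_GLB_SRST_FST) are illustrative assumptions, not something this patch introduces.

static void __init rk3288_clk_init(struct device_node *np)
{
	struct rockchip_clk_provider *ctx;
	void __iomem *reg_base = of_iomap(np, 0);

	if (!reg_base)
		return;

	/* allocate the provider and its clk_onecell_data table */
	ctx = rockchip_clk_init(np, reg_base, CLK_NR_CLKS);
	if (IS_ERR(ctx)) {
		iounmap(reg_base);
		return;
	}

	/* all registration helpers now take the provider context */
	rockchip_clk_register_plls(ctx, rk3288_pll_clks,
				   ARRAY_SIZE(rk3288_pll_clks),
				   RK3288_GRF_SOC_STATUS1);
	rockchip_clk_register_branches(ctx, rk3288_clk_branches,
				       ARRAY_SIZE(rk3288_clk_branches));
	rockchip_register_restart_notifier(ctx, RK3288_GLB_SRST_FST, NULL);

	/* expose the provider to the device tree last */
	rockchip_clk_of_add_provider(np, ctx);
}
CLK_OF_DECLARE(rk3288_cru, "rockchip,rk3288-cru", rk3288_clk_init);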
diff --git a/drivers/clk/samsung/clk-exynos3250.c b/drivers/clk/samsung/clk-exynos3250.c
index fdd41b17a24f..16575ee874cb 100644
--- a/drivers/clk/samsung/clk-exynos3250.c
+++ b/drivers/clk/samsung/clk-exynos3250.c
@@ -302,10 +302,12 @@ static struct samsung_mux_clock mux_clks[] __initdata = {
/* SRC_FSYS */
MUX(CLK_MOUT_TSADC, "mout_tsadc", group_sclk_p, SRC_FSYS, 28, 4),
+ MUX(CLK_MOUT_MMC2, "mout_mmc2", group_sclk_p, SRC_FSYS, 8, 4),
MUX(CLK_MOUT_MMC1, "mout_mmc1", group_sclk_p, SRC_FSYS, 4, 4),
MUX(CLK_MOUT_MMC0, "mout_mmc0", group_sclk_p, SRC_FSYS, 0, 4),
/* SRC_PERIL0 */
+ MUX(CLK_MOUT_UART2, "mout_uart2", group_sclk_p, SRC_PERIL0, 8, 4),
MUX(CLK_MOUT_UART1, "mout_uart1", group_sclk_p, SRC_PERIL0, 4, 4),
MUX(CLK_MOUT_UART0, "mout_uart0", group_sclk_p, SRC_PERIL0, 0, 4),
@@ -389,7 +391,13 @@ static struct samsung_div_clock div_clks[] __initdata = {
CLK_SET_RATE_PARENT, 0),
DIV(CLK_DIV_MMC0, "div_mmc0", "mout_mmc0", DIV_FSYS1, 0, 4),
+ /* DIV_FSYS2 */
+ DIV_F(CLK_DIV_MMC2_PRE, "div_mmc2_pre", "div_mmc2", DIV_FSYS2, 8, 8,
+ CLK_SET_RATE_PARENT, 0),
+ DIV(CLK_DIV_MMC2, "div_mmc2", "mout_mmc2", DIV_FSYS2, 0, 4),
+
/* DIV_PERIL0 */
+ DIV(CLK_DIV_UART2, "div_uart2", "mout_uart2", DIV_PERIL0, 8, 4),
DIV(CLK_DIV_UART1, "div_uart1", "mout_uart1", DIV_PERIL0, 4, 4),
DIV(CLK_DIV_UART0, "div_uart0", "mout_uart0", DIV_PERIL0, 0, 4),
@@ -538,6 +546,8 @@ static struct samsung_gate_clock gate_clks[] __initdata = {
GATE_SCLK_FSYS, 9, CLK_SET_RATE_PARENT, 0),
GATE(CLK_SCLK_EBI, "sclk_ebi", "div_ebi",
GATE_SCLK_FSYS, 6, CLK_SET_RATE_PARENT, 0),
+ GATE(CLK_SCLK_MMC2, "sclk_mmc2", "div_mmc2_pre",
+ GATE_SCLK_FSYS, 2, CLK_SET_RATE_PARENT, 0),
GATE(CLK_SCLK_MMC1, "sclk_mmc1", "div_mmc1_pre",
GATE_SCLK_FSYS, 1, CLK_SET_RATE_PARENT, 0),
GATE(CLK_SCLK_MMC0, "sclk_mmc0", "div_mmc0_pre",
@@ -552,6 +562,9 @@ static struct samsung_gate_clock gate_clks[] __initdata = {
GATE_SCLK_PERIL, 7, CLK_SET_RATE_PARENT, 0),
GATE(CLK_SCLK_SPI0, "sclk_spi0", "div_spi0_pre",
GATE_SCLK_PERIL, 6, CLK_SET_RATE_PARENT, 0),
+
+ GATE(CLK_SCLK_UART2, "sclk_uart2", "div_uart2",
+ GATE_SCLK_PERIL, 2, CLK_SET_RATE_PARENT, 0),
GATE(CLK_SCLK_UART1, "sclk_uart1", "div_uart1",
GATE_SCLK_PERIL, 1, CLK_SET_RATE_PARENT, 0),
GATE(CLK_SCLK_UART0, "sclk_uart0", "div_uart0",
@@ -630,6 +643,7 @@ static struct samsung_gate_clock gate_clks[] __initdata = {
GATE(CLK_USBOTG, "usbotg", "div_aclk_200", GATE_IP_FSYS, 13, 0, 0),
GATE(CLK_USBHOST, "usbhost", "div_aclk_200", GATE_IP_FSYS, 12, 0, 0),
GATE(CLK_SROMC, "sromc", "div_aclk_200", GATE_IP_FSYS, 11, 0, 0),
+ GATE(CLK_SDMMC2, "sdmmc2", "div_aclk_200", GATE_IP_FSYS, 7, 0, 0),
GATE(CLK_SDMMC1, "sdmmc1", "div_aclk_200", GATE_IP_FSYS, 6, 0, 0),
GATE(CLK_SDMMC0, "sdmmc0", "div_aclk_200", GATE_IP_FSYS, 5, 0, 0),
GATE(CLK_PDMA1, "pdma1", "div_aclk_200", GATE_IP_FSYS, 1, 0, 0),
@@ -649,6 +663,7 @@ static struct samsung_gate_clock gate_clks[] __initdata = {
GATE(CLK_I2C2, "i2c2", "div_aclk_100", GATE_IP_PERIL, 8, 0, 0),
GATE(CLK_I2C1, "i2c1", "div_aclk_100", GATE_IP_PERIL, 7, 0, 0),
GATE(CLK_I2C0, "i2c0", "div_aclk_100", GATE_IP_PERIL, 6, 0, 0),
+ GATE(CLK_UART2, "uart2", "div_aclk_100", GATE_IP_PERIL, 2, 0, 0),
GATE(CLK_UART1, "uart1", "div_aclk_100", GATE_IP_PERIL, 1, 0, 0),
GATE(CLK_UART0, "uart0", "div_aclk_100", GATE_IP_PERIL, 0, 0, 0),
};
diff --git a/drivers/clk/samsung/clk-exynos5420.c b/drivers/clk/samsung/clk-exynos5420.c
index be03ed0fcb6b..92382cef9f90 100644
--- a/drivers/clk/samsung/clk-exynos5420.c
+++ b/drivers/clk/samsung/clk-exynos5420.c
@@ -554,8 +554,8 @@ static struct samsung_mux_clock exynos5800_mux_clks[] __initdata = {
};
static struct samsung_div_clock exynos5800_div_clks[] __initdata = {
- DIV(0, "dout_aclk400_wcore", "mout_aclk400_wcore", DIV_TOP0, 16, 3),
-
+ DIV(CLK_DOUT_ACLK400_WCORE, "dout_aclk400_wcore",
+ "mout_aclk400_wcore", DIV_TOP0, 16, 3),
DIV(0, "dout_aclk550_cam", "mout_aclk550_cam",
DIV_TOP8, 16, 3),
DIV(0, "dout_aclkfl1_550_cam", "mout_aclkfl1_550_cam",
@@ -607,8 +607,8 @@ static struct samsung_mux_clock exynos5420_mux_clks[] __initdata = {
};
static struct samsung_div_clock exynos5420_div_clks[] __initdata = {
- DIV(0, "dout_aclk400_wcore", "mout_aclk400_wcore_bpll",
- DIV_TOP0, 16, 3),
+ DIV(CLK_DOUT_ACLK400_WCORE, "dout_aclk400_wcore",
+ "mout_aclk400_wcore_bpll", DIV_TOP0, 16, 3),
};
static struct samsung_mux_clock exynos5x_mux_clks[] __initdata = {
@@ -785,31 +785,47 @@ static struct samsung_div_clock exynos5x_div_clks[] __initdata = {
DIV(0, "div_kfc", "mout_kfc", DIV_KFC0, 0, 3),
DIV(0, "sclk_kpll", "mout_kpll", DIV_KFC0, 24, 3),
- DIV(0, "dout_aclk400_isp", "mout_aclk400_isp", DIV_TOP0, 0, 3),
- DIV(0, "dout_aclk400_mscl", "mout_aclk400_mscl", DIV_TOP0, 4, 3),
- DIV(0, "dout_aclk200", "mout_aclk200", DIV_TOP0, 8, 3),
- DIV(0, "dout_aclk200_fsys2", "mout_aclk200_fsys2", DIV_TOP0, 12, 3),
- DIV(0, "dout_aclk100_noc", "mout_aclk100_noc", DIV_TOP0, 20, 3),
- DIV(0, "dout_pclk200_fsys", "mout_pclk200_fsys", DIV_TOP0, 24, 3),
- DIV(0, "dout_aclk200_fsys", "mout_aclk200_fsys", DIV_TOP0, 28, 3),
-
- DIV(0, "dout_aclk333_432_gscl", "mout_aclk333_432_gscl",
- DIV_TOP1, 0, 3),
- DIV(0, "dout_aclk333_432_isp", "mout_aclk333_432_isp",
- DIV_TOP1, 4, 3),
- DIV(0, "dout_aclk66", "mout_aclk66", DIV_TOP1, 8, 6),
- DIV(0, "dout_aclk333_432_isp0", "mout_aclk333_432_isp0",
- DIV_TOP1, 16, 3),
- DIV(0, "dout_aclk266", "mout_aclk266", DIV_TOP1, 20, 3),
- DIV(0, "dout_aclk166", "mout_aclk166", DIV_TOP1, 24, 3),
- DIV(0, "dout_aclk333", "mout_aclk333", DIV_TOP1, 28, 3),
-
- DIV(0, "dout_aclk333_g2d", "mout_aclk333_g2d", DIV_TOP2, 8, 3),
- DIV(0, "dout_aclk266_g2d", "mout_aclk266_g2d", DIV_TOP2, 12, 3),
- DIV(0, "dout_aclk_g3d", "mout_aclk_g3d", DIV_TOP2, 16, 3),
- DIV(0, "dout_aclk300_jpeg", "mout_aclk300_jpeg", DIV_TOP2, 20, 3),
- DIV(0, "dout_aclk300_disp1", "mout_aclk300_disp1", DIV_TOP2, 24, 3),
- DIV(0, "dout_aclk300_gscl", "mout_aclk300_gscl", DIV_TOP2, 28, 3),
+ DIV(CLK_DOUT_ACLK400_ISP, "dout_aclk400_isp", "mout_aclk400_isp",
+ DIV_TOP0, 0, 3),
+ DIV(CLK_DOUT_ACLK400_MSCL, "dout_aclk400_mscl", "mout_aclk400_mscl",
+ DIV_TOP0, 4, 3),
+ DIV(CLK_DOUT_ACLK200, "dout_aclk200", "mout_aclk200",
+ DIV_TOP0, 8, 3),
+ DIV(CLK_DOUT_ACLK200_FSYS2, "dout_aclk200_fsys2", "mout_aclk200_fsys2",
+ DIV_TOP0, 12, 3),
+ DIV(CLK_DOUT_ACLK100_NOC, "dout_aclk100_noc", "mout_aclk100_noc",
+ DIV_TOP0, 20, 3),
+ DIV(CLK_DOUT_PCLK200_FSYS, "dout_pclk200_fsys", "mout_pclk200_fsys",
+ DIV_TOP0, 24, 3),
+ DIV(CLK_DOUT_ACLK200_FSYS, "dout_aclk200_fsys", "mout_aclk200_fsys",
+ DIV_TOP0, 28, 3),
+ DIV(CLK_DOUT_ACLK333_432_GSCL, "dout_aclk333_432_gscl",
+ "mout_aclk333_432_gscl", DIV_TOP1, 0, 3),
+ DIV(CLK_DOUT_ACLK333_432_ISP, "dout_aclk333_432_isp",
+ "mout_aclk333_432_isp", DIV_TOP1, 4, 3),
+ DIV(CLK_DOUT_ACLK66, "dout_aclk66", "mout_aclk66",
+ DIV_TOP1, 8, 6),
+ DIV(CLK_DOUT_ACLK333_432_ISP0, "dout_aclk333_432_isp0",
+ "mout_aclk333_432_isp0", DIV_TOP1, 16, 3),
+ DIV(CLK_DOUT_ACLK266, "dout_aclk266", "mout_aclk266",
+ DIV_TOP1, 20, 3),
+ DIV(CLK_DOUT_ACLK166, "dout_aclk166", "mout_aclk166",
+ DIV_TOP1, 24, 3),
+ DIV(CLK_DOUT_ACLK333, "dout_aclk333", "mout_aclk333",
+ DIV_TOP1, 28, 3),
+
+ DIV(CLK_DOUT_ACLK333_G2D, "dout_aclk333_g2d", "mout_aclk333_g2d",
+ DIV_TOP2, 8, 3),
+ DIV(CLK_DOUT_ACLK266_G2D, "dout_aclk266_g2d", "mout_aclk266_g2d",
+ DIV_TOP2, 12, 3),
+ DIV(CLK_DOUT_ACLK_G3D, "dout_aclk_g3d", "mout_aclk_g3d", DIV_TOP2,
+ 16, 3),
+ DIV(CLK_DOUT_ACLK300_JPEG, "dout_aclk300_jpeg", "mout_aclk300_jpeg",
+ DIV_TOP2, 20, 3),
+ DIV(CLK_DOUT_ACLK300_DISP1, "dout_aclk300_disp1",
+ "mout_aclk300_disp1", DIV_TOP2, 24, 3),
+ DIV(CLK_DOUT_ACLK300_GSCL, "dout_aclk300_gscl", "mout_aclk300_gscl",
+ DIV_TOP2, 28, 3),
/* DISP1 Block */
DIV(0, "dout_fimd1", "mout_fimd1_final", DIV_DISP10, 0, 4),
@@ -817,7 +833,8 @@ static struct samsung_div_clock exynos5x_div_clks[] __initdata = {
DIV(0, "dout_dp1", "mout_dp1", DIV_DISP10, 24, 4),
DIV(CLK_DOUT_PIXEL, "dout_hdmi_pixel", "mout_pixel", DIV_DISP10, 28, 4),
DIV(0, "dout_disp1_blk", "aclk200_disp1", DIV2_RATIO0, 16, 2),
- DIV(0, "dout_aclk400_disp1", "mout_aclk400_disp1", DIV_TOP2, 4, 3),
+ DIV(CLK_DOUT_ACLK400_DISP1, "dout_aclk400_disp1",
+ "mout_aclk400_disp1", DIV_TOP2, 4, 3),
/* Audio Block */
DIV(0, "dout_maudio0", "mout_maudio0", DIV_MAU, 20, 4),
diff --git a/drivers/clk/sirf/clk-atlas6.c b/drivers/clk/sirf/clk-atlas6.c
index c5eaa9d16247..665fa681b2e1 100644
--- a/drivers/clk/sirf/clk-atlas6.c
+++ b/drivers/clk/sirf/clk-atlas6.c
@@ -130,10 +130,9 @@ static void __init atlas6_clk_init(struct device_node *np)
panic("unable to map clkc registers\n");
/* These are always available (RTC and 26MHz OSC)*/
- atlas6_clks[rtc] = clk_register_fixed_rate(NULL, "rtc", NULL,
- CLK_IS_ROOT, 32768);
- atlas6_clks[osc] = clk_register_fixed_rate(NULL, "osc", NULL,
- CLK_IS_ROOT, 26000000);
+ atlas6_clks[rtc] = clk_register_fixed_rate(NULL, "rtc", NULL, 0, 32768);
+ atlas6_clks[osc] = clk_register_fixed_rate(NULL, "osc", NULL, 0,
+ 26000000);
for (i = pll1; i < maxclk; i++) {
atlas6_clks[i] = clk_register(NULL, atlas6_clk_hw_array[i]);
diff --git a/drivers/clk/sirf/clk-prima2.c b/drivers/clk/sirf/clk-prima2.c
index f92c40264342..aac1c8ec151a 100644
--- a/drivers/clk/sirf/clk-prima2.c
+++ b/drivers/clk/sirf/clk-prima2.c
@@ -129,10 +129,9 @@ static void __init prima2_clk_init(struct device_node *np)
panic("unable to map clkc registers\n");
/* These are always available (RTC and 26MHz OSC)*/
- prima2_clks[rtc] = clk_register_fixed_rate(NULL, "rtc", NULL,
- CLK_IS_ROOT, 32768);
- prima2_clks[osc] = clk_register_fixed_rate(NULL, "osc", NULL,
- CLK_IS_ROOT, 26000000);
+ prima2_clks[rtc] = clk_register_fixed_rate(NULL, "rtc", NULL, 0, 32768);
+ prima2_clks[osc] = clk_register_fixed_rate(NULL, "osc", NULL, 0,
+ 26000000);
for (i = pll1; i < maxclk; i++) {
prima2_clks[i] = clk_register(NULL, prima2_clk_hw_array[i]);
diff --git a/drivers/clk/sunxi/Makefile b/drivers/clk/sunxi/Makefile
index 3fd7901d48e4..39d2044a1f49 100644
--- a/drivers/clk/sunxi/Makefile
+++ b/drivers/clk/sunxi/Makefile
@@ -11,6 +11,9 @@ obj-y += clk-a10-ve.o
obj-y += clk-a20-gmac.o
obj-y += clk-mod0.o
obj-y += clk-simple-gates.o
+obj-y += clk-sun4i-display.o
+obj-y += clk-sun4i-pll3.o
+obj-y += clk-sun4i-tcon-ch1.o
obj-y += clk-sun8i-bus-gates.o
obj-y += clk-sun8i-mbus.o
obj-y += clk-sun9i-core.o
diff --git a/drivers/clk/sunxi/clk-a10-hosc.c b/drivers/clk/sunxi/clk-a10-hosc.c
index 6b598c6a0213..dca532431394 100644
--- a/drivers/clk/sunxi/clk-a10-hosc.c
+++ b/drivers/clk/sunxi/clk-a10-hosc.c
@@ -54,8 +54,7 @@ static void __init sun4i_osc_clk_setup(struct device_node *node)
NULL, 0,
NULL, NULL,
&fixed->hw, &clk_fixed_rate_ops,
- &gate->hw, &clk_gate_ops,
- CLK_IS_ROOT);
+ &gate->hw, &clk_gate_ops, 0);
if (IS_ERR(clk))
goto err_free_gate;
diff --git a/drivers/clk/sunxi/clk-a10-mod1.c b/drivers/clk/sunxi/clk-a10-mod1.c
index e9d870de165c..e2819fa09637 100644
--- a/drivers/clk/sunxi/clk-a10-mod1.c
+++ b/drivers/clk/sunxi/clk-a10-mod1.c
@@ -62,7 +62,7 @@ static void __init sun4i_mod1_clk_setup(struct device_node *node)
clk = clk_register_composite(NULL, clk_name, parents, i,
&mux->hw, &clk_mux_ops,
NULL, NULL,
- &gate->hw, &clk_gate_ops, 0);
+ &gate->hw, &clk_gate_ops, CLK_SET_RATE_PARENT);
if (IS_ERR(clk))
goto err_free_gate;
diff --git a/drivers/clk/sunxi/clk-sun4i-display.c b/drivers/clk/sunxi/clk-sun4i-display.c
new file mode 100644
index 000000000000..445a7498d6df
--- /dev/null
+++ b/drivers/clk/sunxi/clk-sun4i-display.c
@@ -0,0 +1,261 @@
+/*
+ * Copyright 2015 Maxime Ripard
+ *
+ * Maxime Ripard <maxime.ripard@free-electrons.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/clk-provider.h>
+#include <linux/kernel.h>
+#include <linux/of_address.h>
+#include <linux/reset-controller.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+struct sun4i_a10_display_clk_data {
+ bool has_div;
+ u8 num_rst;
+ u8 parents;
+
+ u8 offset_en;
+ u8 offset_div;
+ u8 offset_mux;
+ u8 offset_rst;
+
+ u8 width_div;
+ u8 width_mux;
+};
+
+struct reset_data {
+ void __iomem *reg;
+ spinlock_t *lock;
+ struct reset_controller_dev rcdev;
+ u8 offset;
+};
+
+static DEFINE_SPINLOCK(sun4i_a10_display_lock);
+
+static inline struct reset_data *rcdev_to_reset_data(struct reset_controller_dev *rcdev)
+{
+ return container_of(rcdev, struct reset_data, rcdev);
+}
+
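+/*
+ * The reset lines share the clock register: bits starting at data->offset
+ * are active low, so assert clears a line's bit and deassert sets it.
+ */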
+static int sun4i_a10_display_assert(struct reset_controller_dev *rcdev,
+ unsigned long id)
+{
+ struct reset_data *data = rcdev_to_reset_data(rcdev);
+ unsigned long flags;
+ u32 reg;
+
+ spin_lock_irqsave(data->lock, flags);
+
+ reg = readl(data->reg);
+ writel(reg & ~BIT(data->offset + id), data->reg);
+
+ spin_unlock_irqrestore(data->lock, flags);
+
+ return 0;
+}
+
+static int sun4i_a10_display_deassert(struct reset_controller_dev *rcdev,
+ unsigned long id)
+{
+ struct reset_data *data = rcdev_to_reset_data(rcdev);
+ unsigned long flags;
+ u32 reg;
+
+ spin_lock_irqsave(data->lock, flags);
+
+ reg = readl(data->reg);
+ writel(reg | BIT(data->offset + id), data->reg);
+
+ spin_unlock_irqrestore(data->lock, flags);
+
+ return 0;
+}
+
+static int sun4i_a10_display_status(struct reset_controller_dev *rcdev,
+ unsigned long id)
+{
+ struct reset_data *data = rcdev_to_reset_data(rcdev);
+
+ return !(readl(data->reg) & BIT(data->offset + id));
+}
+
+static const struct reset_control_ops sun4i_a10_display_reset_ops = {
+ .assert = sun4i_a10_display_assert,
+ .deassert = sun4i_a10_display_deassert,
+ .status = sun4i_a10_display_status,
+};
+
+static int sun4i_a10_display_reset_xlate(struct reset_controller_dev *rcdev,
+ const struct of_phandle_args *spec)
+{
+ /* We only have a single reset signal */
+ return 0;
+}
+
+static void __init sun4i_a10_display_init(struct device_node *node,
+ const struct sun4i_a10_display_clk_data *data)
+{
+ const char *parents[4];
+ const char *clk_name = node->name;
+ struct reset_data *reset_data;
+ struct clk_divider *div = NULL;
+ struct clk_gate *gate;
+ struct resource res;
+ struct clk_mux *mux;
+ void __iomem *reg;
+ struct clk *clk;
+ int ret;
+
+ of_property_read_string(node, "clock-output-names", &clk_name);
+
+ reg = of_io_request_and_map(node, 0, of_node_full_name(node));
+ if (IS_ERR(reg)) {
+ pr_err("%s: Could not map the clock registers\n", clk_name);
+ return;
+ }
+
+ ret = of_clk_parent_fill(node, parents, data->parents);
+ if (ret != data->parents) {
+ pr_err("%s: Could not retrieve the parents\n", clk_name);
+ goto unmap;
+ }
+
+ mux = kzalloc(sizeof(*mux), GFP_KERNEL);
+ if (!mux)
+ goto unmap;
+
+ mux->reg = reg;
+ mux->shift = data->offset_mux;
+ mux->mask = (1 << data->width_mux) - 1;
+ mux->lock = &sun4i_a10_display_lock;
+
+ gate = kzalloc(sizeof(*gate), GFP_KERNEL);
+ if (!gate)
+ goto free_mux;
+
+ gate->reg = reg;
+ gate->bit_idx = data->offset_en;
+ gate->lock = &sun4i_a10_display_lock;
+
+ if (data->has_div) {
+ div = kzalloc(sizeof(*div), GFP_KERNEL);
+ if (!div)
+ goto free_gate;
+
+ div->reg = reg;
+ div->shift = data->offset_div;
+ div->width = data->width_div;
+ div->lock = &sun4i_a10_display_lock;
+ }
+
+ clk = clk_register_composite(NULL, clk_name,
+ parents, data->parents,
+ &mux->hw, &clk_mux_ops,
+ data->has_div ? &div->hw : NULL,
+ data->has_div ? &clk_divider_ops : NULL,
+ &gate->hw, &clk_gate_ops,
+ 0);
+ if (IS_ERR(clk)) {
+ pr_err("%s: Couldn't register the clock\n", clk_name);
+ goto free_div;
+ }
+
+ ret = of_clk_add_provider(node, of_clk_src_simple_get, clk);
+ if (ret) {
+ pr_err("%s: Couldn't register DT provider\n", clk_name);
+ goto free_clk;
+ }
+
+ if (!data->num_rst)
+ return;
+
+ reset_data = kzalloc(sizeof(*reset_data), GFP_KERNEL);
+ if (!reset_data)
+ goto free_of_clk;
+
+ reset_data->reg = reg;
+ reset_data->offset = data->offset_rst;
+ reset_data->lock = &sun4i_a10_display_lock;
+ reset_data->rcdev.nr_resets = data->num_rst;
+ reset_data->rcdev.ops = &sun4i_a10_display_reset_ops;
+ reset_data->rcdev.of_node = node;
+
+ if (data->num_rst == 1) {
+ reset_data->rcdev.of_reset_n_cells = 0;
+ reset_data->rcdev.of_xlate = &sun4i_a10_display_reset_xlate;
+ } else {
+ reset_data->rcdev.of_reset_n_cells = 1;
+ }
+
+ if (reset_controller_register(&reset_data->rcdev)) {
+ pr_err("%s: Couldn't register the reset controller\n",
+ clk_name);
+ goto free_reset;
+ }
+
+ return;
+
+free_reset:
+ kfree(reset_data);
+free_of_clk:
+ of_clk_del_provider(node);
+free_clk:
+ clk_unregister_composite(clk);
+free_div:
+ kfree(div);
+free_gate:
+ kfree(gate);
+free_mux:
+ kfree(mux);
+unmap:
+ iounmap(reg);
+ of_address_to_resource(node, 0, &res);
+ release_mem_region(res.start, resource_size(&res));
+}
+
+static const struct sun4i_a10_display_clk_data sun4i_a10_tcon_ch0_data __initconst = {
+ .num_rst = 2,
+ .parents = 4,
+ .offset_en = 31,
+ .offset_rst = 29,
+ .offset_mux = 24,
+ .width_mux = 2,
+};
+
+static void __init sun4i_a10_tcon_ch0_setup(struct device_node *node)
+{
+ sun4i_a10_display_init(node, &sun4i_a10_tcon_ch0_data);
+}
+CLK_OF_DECLARE(sun4i_a10_tcon_ch0, "allwinner,sun4i-a10-tcon-ch0-clk",
+ sun4i_a10_tcon_ch0_setup);
+
+static const struct sun4i_a10_display_clk_data sun4i_a10_display_data __initconst = {
+ .has_div = true,
+ .num_rst = 1,
+ .parents = 3,
+ .offset_en = 31,
+ .offset_rst = 30,
+ .offset_mux = 24,
+ .offset_div = 0,
+ .width_mux = 2,
+ .width_div = 4,
+};
+
+static void __init sun4i_a10_display_setup(struct device_node *node)
+{
+ sun4i_a10_display_init(node, &sun4i_a10_display_data);
+}
+CLK_OF_DECLARE(sun4i_a10_display, "allwinner,sun4i-a10-display-clk",
+ sun4i_a10_display_setup);
diff --git a/drivers/clk/sunxi/clk-sun4i-pll3.c b/drivers/clk/sunxi/clk-sun4i-pll3.c
new file mode 100644
index 000000000000..f66267e77d9c
--- /dev/null
+++ b/drivers/clk/sunxi/clk-sun4i-pll3.c
@@ -0,0 +1,98 @@
+/*
+ * Copyright 2015 Maxime Ripard
+ *
+ * Maxime Ripard <maxime.ripard@free-electrons.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/clk-provider.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#define SUN4I_A10_PLL3_GATE_BIT 31
+#define SUN4I_A10_PLL3_DIV_WIDTH 7
+#define SUN4I_A10_PLL3_DIV_SHIFT 0
+
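+/*
+ * PLL3 is modelled as a gate plus a multiplier: the 7-bit factor at bits
+ * [6:0] (the "DIV" field above) is handled by clk_multiplier, so the
+ * output rate is parent_rate * factor.
+ */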
+static DEFINE_SPINLOCK(sun4i_a10_pll3_lock);
+
+static void __init sun4i_a10_pll3_setup(struct device_node *node)
+{
+ const char *clk_name = node->name, *parent;
+ struct clk_multiplier *mult;
+ struct clk_gate *gate;
+ struct resource res;
+ void __iomem *reg;
+ struct clk *clk;
+ int ret;
+
+ of_property_read_string(node, "clock-output-names", &clk_name);
+ parent = of_clk_get_parent_name(node, 0);
+
+ reg = of_io_request_and_map(node, 0, of_node_full_name(node));
+ if (IS_ERR(reg)) {
+ pr_err("%s: Could not map the clock registers\n", clk_name);
+ return;
+ }
+
+ gate = kzalloc(sizeof(*gate), GFP_KERNEL);
+ if (!gate)
+ goto err_unmap;
+
+ gate->reg = reg;
+ gate->bit_idx = SUN4I_A10_PLL3_GATE_BIT;
+ gate->lock = &sun4i_a10_pll3_lock;
+
+ mult = kzalloc(sizeof(*mult), GFP_KERNEL);
+ if (!mult)
+ goto err_free_gate;
+
+ mult->reg = reg;
+ mult->shift = SUN4I_A10_PLL3_DIV_SHIFT;
+ mult->width = SUN4I_A10_PLL3_DIV_WIDTH;
+ mult->lock = &sun4i_a10_pll3_lock;
+
+ clk = clk_register_composite(NULL, clk_name,
+ &parent, 1,
+ NULL, NULL,
+ &mult->hw, &clk_multiplier_ops,
+ &gate->hw, &clk_gate_ops,
+ 0);
+ if (IS_ERR(clk)) {
+ pr_err("%s: Couldn't register the clock\n", clk_name);
+ goto err_free_mult;
+ }
+
+ ret = of_clk_add_provider(node, of_clk_src_simple_get, clk);
+ if (ret) {
+ pr_err("%s: Couldn't register DT provider\n",
+ clk_name);
+ goto err_clk_unregister;
+ }
+
+ return;
+
+err_clk_unregister:
+ clk_unregister_composite(clk);
+err_free_mult:
+ kfree(mult);
+err_free_gate:
+ kfree(gate);
+err_unmap:
+ iounmap(reg);
+ of_address_to_resource(node, 0, &res);
+ release_mem_region(res.start, resource_size(&res));
+}
+
+CLK_OF_DECLARE(sun4i_a10_pll3, "allwinner,sun4i-a10-pll3-clk",
+ sun4i_a10_pll3_setup);
diff --git a/drivers/clk/sunxi/clk-sun4i-tcon-ch1.c b/drivers/clk/sunxi/clk-sun4i-tcon-ch1.c
new file mode 100644
index 000000000000..98a4582de56a
--- /dev/null
+++ b/drivers/clk/sunxi/clk-sun4i-tcon-ch1.c
@@ -0,0 +1,300 @@
+/*
+ * Copyright 2015 Maxime Ripard
+ *
+ * Maxime Ripard <maxime.ripard@free-electrons.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/clk-provider.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+
+#define TCON_CH1_SCLK2_PARENTS 4
+
+#define TCON_CH1_SCLK2_GATE_BIT BIT(31)
+#define TCON_CH1_SCLK2_MUX_MASK 3
+#define TCON_CH1_SCLK2_MUX_SHIFT 24
+#define TCON_CH1_SCLK2_DIV_MASK 0xf
+#define TCON_CH1_SCLK2_DIV_SHIFT 0
+
+#define TCON_CH1_SCLK1_GATE_BIT BIT(15)
+#define TCON_CH1_SCLK1_HALF_BIT BIT(11)
+
+struct tcon_ch1_clk {
+ struct clk_hw hw;
+ spinlock_t lock;
+ void __iomem *reg;
+};
+
+#define hw_to_tclk(hw) container_of(hw, struct tcon_ch1_clk, hw)
+
+static void tcon_ch1_disable(struct clk_hw *hw)
+{
+ struct tcon_ch1_clk *tclk = hw_to_tclk(hw);
+ unsigned long flags;
+ u32 reg;
+
+ spin_lock_irqsave(&tclk->lock, flags);
+ reg = readl(tclk->reg);
+ reg &= ~(TCON_CH1_SCLK2_GATE_BIT | TCON_CH1_SCLK1_GATE_BIT);
+ writel(reg, tclk->reg);
+ spin_unlock_irqrestore(&tclk->lock, flags);
+}
+
+static int tcon_ch1_enable(struct clk_hw *hw)
+{
+ struct tcon_ch1_clk *tclk = hw_to_tclk(hw);
+ unsigned long flags;
+ u32 reg;
+
+ spin_lock_irqsave(&tclk->lock, flags);
+ reg = readl(tclk->reg);
+ reg |= TCON_CH1_SCLK2_GATE_BIT | TCON_CH1_SCLK1_GATE_BIT;
+ writel(reg, tclk->reg);
+ spin_unlock_irqrestore(&tclk->lock, flags);
+
+ return 0;
+}
+
+static int tcon_ch1_is_enabled(struct clk_hw *hw)
+{
+ struct tcon_ch1_clk *tclk = hw_to_tclk(hw);
+ u32 reg;
+
+ reg = readl(tclk->reg);
+ return reg & (TCON_CH1_SCLK2_GATE_BIT | TCON_CH1_SCLK1_GATE_BIT);
+}
+
+static u8 tcon_ch1_get_parent(struct clk_hw *hw)
+{
+ struct tcon_ch1_clk *tclk = hw_to_tclk(hw);
+ int num_parents = clk_hw_get_num_parents(hw);
+ u32 reg;
+
+ reg = readl(tclk->reg) >> TCON_CH1_SCLK2_MUX_SHIFT;
+	reg &= TCON_CH1_SCLK2_MUX_MASK;
+
+ if (reg >= num_parents)
+ return -EINVAL;
+
+ return reg;
+}
+
+static int tcon_ch1_set_parent(struct clk_hw *hw, u8 index)
+{
+ struct tcon_ch1_clk *tclk = hw_to_tclk(hw);
+ unsigned long flags;
+ u32 reg;
+
+ spin_lock_irqsave(&tclk->lock, flags);
+ reg = readl(tclk->reg);
+ reg &= ~(TCON_CH1_SCLK2_MUX_MASK << TCON_CH1_SCLK2_MUX_SHIFT);
+ reg |= index << TCON_CH1_SCLK2_MUX_SHIFT;
+ writel(reg, tclk->reg);
+ spin_unlock_irqrestore(&tclk->lock, flags);
+
+ return 0;
+}
+
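+/*
+ * Find the combination of the 4-bit divider m (1..15) and the optional
+ * extra divide-by-two ("half" bit) that comes closest to the requested
+ * rate without exceeding it. Returns the best achievable rate and, when
+ * div/half are non-NULL, the chosen settings.
+ */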
+static unsigned long tcon_ch1_calc_divider(unsigned long rate,
+ unsigned long parent_rate,
+ u8 *div,
+ bool *half)
+{
+ unsigned long best_rate = 0;
+ u8 best_m = 0, m;
+	bool is_double = false;
+
+ for (m = 1; m < 16; m++) {
+ u8 d;
+
+ for (d = 1; d < 3; d++) {
+ unsigned long tmp_rate;
+
+ tmp_rate = parent_rate / m / d;
+
+ if (tmp_rate > rate)
+ continue;
+
+ if (!best_rate ||
+ (rate - tmp_rate) < (rate - best_rate)) {
+ best_rate = tmp_rate;
+ best_m = m;
+				is_double = (d == 2);
+ }
+ }
+ }
+
+ if (div && half) {
+ *div = best_m;
+ *half = is_double;
+ }
+
+ return best_rate;
+}
+
+static int tcon_ch1_determine_rate(struct clk_hw *hw,
+ struct clk_rate_request *req)
+{
+ long best_rate = -EINVAL;
+ int i;
+
+ for (i = 0; i < clk_hw_get_num_parents(hw); i++) {
+ unsigned long parent_rate;
+ unsigned long tmp_rate;
+ struct clk_hw *parent;
+
+ parent = clk_hw_get_parent_by_index(hw, i);
+ if (!parent)
+ continue;
+
+ parent_rate = clk_hw_get_rate(parent);
+
+ tmp_rate = tcon_ch1_calc_divider(req->rate, parent_rate,
+ NULL, NULL);
+
+ if (best_rate < 0 ||
+ (req->rate - tmp_rate) < (req->rate - best_rate)) {
+ best_rate = tmp_rate;
+ req->best_parent_rate = parent_rate;
+ req->best_parent_hw = parent;
+ }
+ }
+
+ if (best_rate < 0)
+ return best_rate;
+
+ req->rate = best_rate;
+ return 0;
+}
+
+static unsigned long tcon_ch1_recalc_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+{
+ struct tcon_ch1_clk *tclk = hw_to_tclk(hw);
+ u32 reg;
+
+ reg = readl(tclk->reg);
+
+ parent_rate /= (reg & TCON_CH1_SCLK2_DIV_MASK) + 1;
+
+ if (reg & TCON_CH1_SCLK1_HALF_BIT)
+ parent_rate /= 2;
+
+ return parent_rate;
+}
+
+static int tcon_ch1_set_rate(struct clk_hw *hw, unsigned long rate,
+ unsigned long parent_rate)
+{
+ struct tcon_ch1_clk *tclk = hw_to_tclk(hw);
+ unsigned long flags;
+ bool half;
+ u8 div_m;
+ u32 reg;
+
+ tcon_ch1_calc_divider(rate, parent_rate, &div_m, &half);
+
+ spin_lock_irqsave(&tclk->lock, flags);
+ reg = readl(tclk->reg);
+ reg &= ~(TCON_CH1_SCLK2_DIV_MASK | TCON_CH1_SCLK1_HALF_BIT);
+ reg |= (div_m - 1) & TCON_CH1_SCLK2_DIV_MASK;
+
+ if (half)
+ reg |= TCON_CH1_SCLK1_HALF_BIT;
+
+ writel(reg, tclk->reg);
+ spin_unlock_irqrestore(&tclk->lock, flags);
+
+ return 0;
+}
+
+static const struct clk_ops tcon_ch1_ops = {
+ .disable = tcon_ch1_disable,
+ .enable = tcon_ch1_enable,
+ .is_enabled = tcon_ch1_is_enabled,
+
+ .get_parent = tcon_ch1_get_parent,
+ .set_parent = tcon_ch1_set_parent,
+
+ .determine_rate = tcon_ch1_determine_rate,
+ .recalc_rate = tcon_ch1_recalc_rate,
+ .set_rate = tcon_ch1_set_rate,
+};
+
+static void __init tcon_ch1_setup(struct device_node *node)
+{
+ const char *parents[TCON_CH1_SCLK2_PARENTS];
+ const char *clk_name = node->name;
+ struct clk_init_data init;
+ struct tcon_ch1_clk *tclk;
+ struct resource res;
+ struct clk *clk;
+ void __iomem *reg;
+ int ret;
+
+ of_property_read_string(node, "clock-output-names", &clk_name);
+
+ reg = of_io_request_and_map(node, 0, of_node_full_name(node));
+ if (IS_ERR(reg)) {
+ pr_err("%s: Could not map the clock registers\n", clk_name);
+ return;
+ }
+
+ ret = of_clk_parent_fill(node, parents, TCON_CH1_SCLK2_PARENTS);
+ if (ret != TCON_CH1_SCLK2_PARENTS) {
+		pr_err("%s: Could not retrieve the parents\n", clk_name);
+ goto err_unmap;
+ }
+
+ tclk = kzalloc(sizeof(*tclk), GFP_KERNEL);
+ if (!tclk)
+ goto err_unmap;
+
+ init.name = clk_name;
+ init.ops = &tcon_ch1_ops;
+ init.parent_names = parents;
+ init.num_parents = TCON_CH1_SCLK2_PARENTS;
+ init.flags = CLK_SET_RATE_PARENT;
+
+ tclk->reg = reg;
+ tclk->hw.init = &init;
+ spin_lock_init(&tclk->lock);
+
+ clk = clk_register(NULL, &tclk->hw);
+ if (IS_ERR(clk)) {
+ pr_err("%s: Couldn't register the clock\n", clk_name);
+ goto err_free_data;
+ }
+
+ ret = of_clk_add_provider(node, of_clk_src_simple_get, clk);
+ if (ret) {
+ pr_err("%s: Couldn't register our clock provider\n", clk_name);
+ goto err_unregister_clk;
+ }
+
+ return;
+
+err_unregister_clk:
+ clk_unregister(clk);
+err_free_data:
+ kfree(tclk);
+err_unmap:
+ iounmap(reg);
+ of_address_to_resource(node, 0, &res);
+ release_mem_region(res.start, resource_size(&res));
+}
+
+CLK_OF_DECLARE(tcon_ch1, "allwinner,sun4i-a10-tcon-ch1-clk",
+ tcon_ch1_setup);
diff --git a/drivers/clk/sunxi/clk-sun9i-mmc.c b/drivers/clk/sunxi/clk-sun9i-mmc.c
index 028dd832a39f..716737388b7d 100644
--- a/drivers/clk/sunxi/clk-sun9i-mmc.c
+++ b/drivers/clk/sunxi/clk-sun9i-mmc.c
@@ -106,7 +106,7 @@ static int sun9i_a80_mmc_config_clk_probe(struct platform_device *pdev)
r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
/* one clock/reset pair per word */
- count = DIV_ROUND_UP((r->end - r->start + 1), SUN9I_MMC_WIDTH);
+	count = DIV_ROUND_UP(resource_size(r), SUN9I_MMC_WIDTH);
data->membase = devm_ioremap_resource(&pdev->dev, r);
if (IS_ERR(data->membase))
return PTR_ERR(data->membase);
diff --git a/drivers/clk/sunxi/clk-sunxi.c b/drivers/clk/sunxi/clk-sunxi.c
index 91de0a006773..838b22aa8b67 100644
--- a/drivers/clk/sunxi/clk-sunxi.c
+++ b/drivers/clk/sunxi/clk-sunxi.c
@@ -523,21 +523,12 @@ static const struct factors_data sun4i_pll5_data __initconst = {
.enable = 31,
.table = &sun4i_pll5_config,
.getter = sun4i_get_pll5_factors,
- .name = "pll5",
-};
-
-static const struct factors_data sun4i_pll6_data __initconst = {
- .enable = 31,
- .table = &sun4i_pll5_config,
- .getter = sun4i_get_pll5_factors,
- .name = "pll6",
};
static const struct factors_data sun6i_a31_pll6_data __initconst = {
.enable = 31,
.table = &sun6i_a31_pll6_config,
.getter = sun6i_a31_get_pll6_factors,
- .name = "pll6x2",
};
static const struct factors_data sun5i_a13_ahb_data __initconst = {
@@ -933,7 +924,7 @@ static const struct divs_data pll5_divs_data __initconst = {
};
static const struct divs_data pll6_divs_data __initconst = {
- .factors = &sun4i_pll6_data,
+ .factors = &sun4i_pll5_data,
.ndivs = 4,
.div = {
{ .shift = 0, .table = pll6_sata_tbl, .gate = 14 }, /* M, SATA */
@@ -975,6 +966,8 @@ static struct clk ** __init sunxi_divs_clk_setup(struct device_node *node,
struct clk_gate *gate = NULL;
struct clk_fixed_factor *fix_factor;
struct clk_divider *divider;
+ struct factors_data factors = *data->factors;
+ char *derived_name = NULL;
void __iomem *reg;
int ndivs = SUNXI_DIVS_MAX_QTY, i = 0;
int flags, clkflags;
@@ -983,11 +976,37 @@ static struct clk ** __init sunxi_divs_clk_setup(struct device_node *node,
if (data->ndivs)
ndivs = data->ndivs;
+ /* Try to find a name for base factor clock */
+ for (i = 0; i < ndivs; i++) {
+ if (data->div[i].self) {
+ of_property_read_string_index(node, "clock-output-names",
+ i, &factors.name);
+ break;
+ }
+ }
+ /* If we don't have a .self clk use the first output-name up to '_' */
+ if (factors.name == NULL) {
+ char *endp;
+
+ of_property_read_string_index(node, "clock-output-names",
+ 0, &clk_name);
+ endp = strchr(clk_name, '_');
+ if (endp) {
+ derived_name = kstrndup(clk_name, endp - clk_name,
+ GFP_KERNEL);
+ factors.name = derived_name;
+ } else {
+ factors.name = clk_name;
+ }
+ }
+
/* Set up factor clock that we will be dividing */
- pclk = sunxi_factors_clk_setup(node, data->factors);
+ pclk = sunxi_factors_clk_setup(node, &factors);
if (!pclk)
return NULL;
+
parent = __clk_get_name(pclk);
+ kfree(derived_name);
reg = of_iomap(node, 0);
if (!reg) {
@@ -1127,3 +1146,41 @@ static void __init sun6i_pll6_clk_setup(struct device_node *node)
}
CLK_OF_DECLARE(sun6i_pll6, "allwinner,sun6i-a31-pll6-clk",
sun6i_pll6_clk_setup);
+
+/*
+ * sun6i display
+ *
+ * rate = parent_rate / (m + 1);
+ */
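+/*
+ * Example: with a 297 MHz parent and a 150 MHz request, m is rounded up
+ * to 2, so the clock runs at 148.5 MHz and the register field is set to
+ * m - 1 = 1.
+ */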
+static void sun6i_display_factors(struct factors_request *req)
+{
+ u8 m;
+
+ if (req->rate > req->parent_rate)
+ req->rate = req->parent_rate;
+
+ m = DIV_ROUND_UP(req->parent_rate, req->rate);
+
+ req->rate = req->parent_rate / m;
+ req->m = m - 1;
+}
+
+static const struct clk_factors_config sun6i_display_config = {
+ .mshift = 0,
+ .mwidth = 4,
+};
+
+static const struct factors_data sun6i_display_data __initconst = {
+ .enable = 31,
+ .mux = 24,
+ .muxmask = BIT(2) | BIT(1) | BIT(0),
+ .table = &sun6i_display_config,
+ .getter = sun6i_display_factors,
+};
+
+static void __init sun6i_display_setup(struct device_node *node)
+{
+ sunxi_factors_clk_setup(node, &sun6i_display_data);
+}
+CLK_OF_DECLARE(sun6i_display, "allwinner,sun6i-a31-display-clk",
+ sun6i_display_setup);
diff --git a/drivers/clk/tegra/Makefile b/drivers/clk/tegra/Makefile
index 97984c503bbb..33fd0938d79e 100644
--- a/drivers/clk/tegra/Makefile
+++ b/drivers/clk/tegra/Makefile
@@ -3,6 +3,7 @@ obj-y += clk-audio-sync.o
obj-y += clk-dfll.o
obj-y += clk-divider.o
obj-y += clk-periph.o
+obj-y += clk-periph-fixed.o
obj-y += clk-periph-gate.o
obj-y += clk-pll.o
obj-y += clk-pll-out.o
diff --git a/drivers/clk/tegra/clk-dfll.c b/drivers/clk/tegra/clk-dfll.c
index 19bfa07e24b1..f010562534eb 100644
--- a/drivers/clk/tegra/clk-dfll.c
+++ b/drivers/clk/tegra/clk-dfll.c
@@ -55,6 +55,7 @@
#include <linux/seq_file.h>
#include "clk-dfll.h"
+#include "cvb.h"
/*
* DFLL control registers - access via dfll_{readl,writel}
@@ -442,8 +443,8 @@ static void dfll_tune_low(struct tegra_dfll *td)
{
td->tune_range = DFLL_TUNE_LOW;
- dfll_writel(td, td->soc->tune0_low, DFLL_TUNE0);
- dfll_writel(td, td->soc->tune1, DFLL_TUNE1);
+ dfll_writel(td, td->soc->cvb->cpu_dfll_data.tune0_low, DFLL_TUNE0);
+ dfll_writel(td, td->soc->cvb->cpu_dfll_data.tune1, DFLL_TUNE1);
dfll_wmb(td);
if (td->soc->set_clock_trimmers_low)
@@ -1449,7 +1450,7 @@ static int dfll_build_i2c_lut(struct tegra_dfll *td)
}
v_max = dev_pm_opp_get_voltage(opp);
- v = td->soc->min_millivolts * 1000;
+ v = td->soc->cvb->min_millivolts * 1000;
lut = find_vdd_map_entry_exact(td, v);
if (lut < 0)
goto out;
@@ -1461,7 +1462,7 @@ static int dfll_build_i2c_lut(struct tegra_dfll *td)
break;
v_opp = dev_pm_opp_get_voltage(opp);
- if (v_opp <= td->soc->min_millivolts * 1000)
+ if (v_opp <= td->soc->cvb->min_millivolts * 1000)
td->dvco_rate_min = dev_pm_opp_get_freq(opp);
for (;;) {
@@ -1490,7 +1491,7 @@ static int dfll_build_i2c_lut(struct tegra_dfll *td)
if (!td->dvco_rate_min)
dev_err(td->dev, "no opp above DFLL minimum voltage %d mV\n",
- td->soc->min_millivolts);
+ td->soc->cvb->min_millivolts);
else
ret = 0;
diff --git a/drivers/clk/tegra/clk-dfll.h b/drivers/clk/tegra/clk-dfll.h
index 2e4c0772a5dc..ed2ad888268f 100644
--- a/drivers/clk/tegra/clk-dfll.h
+++ b/drivers/clk/tegra/clk-dfll.h
@@ -24,22 +24,18 @@
/**
* struct tegra_dfll_soc_data - SoC-specific hooks/integration for the DFLL driver
- * @opp_dev: struct device * that holds the OPP table for the DFLL
- * @min_millivolts: minimum voltage (in mV) that the DFLL can operate
- * @tune0_low: DFLL tuning register 0 (low voltage range)
- * @tune0_high: DFLL tuning register 0 (high voltage range)
- * @tune1: DFLL tuning register 1
- * @assert_dvco_reset: fn ptr to place the DVCO in reset
- * @deassert_dvco_reset: fn ptr to release the DVCO reset
- * @set_clock_trimmers_high: fn ptr to tune clock trimmers for high voltage
- * @set_clock_trimmers_low: fn ptr to tune clock trimmers for low voltage
+ * @dev: struct device * that holds the OPP table for the DFLL
+ * @max_freq: maximum frequency supported on this SoC
+ * @cvb: CPU frequency/voltage (CVB) table for this SoC
+ * @init_clock_trimmers: callback to initialize clock trimmers
+ * @set_clock_trimmers_high: callback to tune clock trimmers for high voltage
+ * @set_clock_trimmers_low: callback to tune clock trimmers for low voltage
*/
struct tegra_dfll_soc_data {
struct device *dev;
- unsigned int min_millivolts;
- u32 tune0_low;
- u32 tune0_high;
- u32 tune1;
+ unsigned long max_freq;
+ const struct cvb_table *cvb;
+
void (*init_clock_trimmers)(void);
void (*set_clock_trimmers_high)(void);
void (*set_clock_trimmers_low)(void);
diff --git a/drivers/clk/tegra/clk-id.h b/drivers/clk/tegra/clk-id.h
index 62ea38187b71..36c974916d4f 100644
--- a/drivers/clk/tegra/clk-id.h
+++ b/drivers/clk/tegra/clk-id.h
@@ -71,6 +71,7 @@ enum clk_id {
tegra_clk_disp2_8,
tegra_clk_dp2,
tegra_clk_dpaux,
+ tegra_clk_dpaux1,
tegra_clk_dsialp,
tegra_clk_dsia_mux,
tegra_clk_dsiblp,
@@ -306,6 +307,7 @@ enum clk_id {
tegra_clk_xusb_ss_div2,
tegra_clk_xusb_ssp_src,
tegra_clk_sclk_mux,
+ tegra_clk_sor_safe,
tegra_clk_max,
};
diff --git a/drivers/clk/tegra/clk-periph-fixed.c b/drivers/clk/tegra/clk-periph-fixed.c
new file mode 100644
index 000000000000..c57dfb037b10
--- /dev/null
+++ b/drivers/clk/tegra/clk-periph-fixed.c
@@ -0,0 +1,120 @@
+/*
+ * Copyright (c) 2015, NVIDIA CORPORATION. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/clk-provider.h>
+
+#include "clk.h"
+
+static inline struct tegra_clk_periph_fixed *
+to_tegra_clk_periph_fixed(struct clk_hw *hw)
+{
+ return container_of(hw, struct tegra_clk_periph_fixed, hw);
+}
+
+static int tegra_clk_periph_fixed_is_enabled(struct clk_hw *hw)
+{
+ struct tegra_clk_periph_fixed *fixed = to_tegra_clk_periph_fixed(hw);
+ u32 mask = 1 << (fixed->num % 32), value;
+
+ value = readl(fixed->base + fixed->regs->enb_reg);
+ if (value & mask) {
+ value = readl(fixed->base + fixed->regs->rst_reg);
+ if ((value & mask) == 0)
+ return 1;
+ }
+
+ return 0;
+}
+
+static int tegra_clk_periph_fixed_enable(struct clk_hw *hw)
+{
+ struct tegra_clk_periph_fixed *fixed = to_tegra_clk_periph_fixed(hw);
+ u32 mask = 1 << (fixed->num % 32);
+
+ writel(mask, fixed->base + fixed->regs->enb_set_reg);
+
+ return 0;
+}
+
+static void tegra_clk_periph_fixed_disable(struct clk_hw *hw)
+{
+ struct tegra_clk_periph_fixed *fixed = to_tegra_clk_periph_fixed(hw);
+ u32 mask = 1 << (fixed->num % 32);
+
+ writel(mask, fixed->base + fixed->regs->enb_clr_reg);
+}
+
+static unsigned long
+tegra_clk_periph_fixed_recalc_rate(struct clk_hw *hw,
+ unsigned long parent_rate)
+{
+ struct tegra_clk_periph_fixed *fixed = to_tegra_clk_periph_fixed(hw);
+ unsigned long long rate;
+
+ rate = (unsigned long long)parent_rate * fixed->mul;
+ do_div(rate, fixed->div);
+
+ return (unsigned long)rate;
+}
+
+static const struct clk_ops tegra_clk_periph_fixed_ops = {
+ .is_enabled = tegra_clk_periph_fixed_is_enabled,
+ .enable = tegra_clk_periph_fixed_enable,
+ .disable = tegra_clk_periph_fixed_disable,
+ .recalc_rate = tegra_clk_periph_fixed_recalc_rate,
+};
+
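+/*
+ * Registers a peripheral clock whose rate is fixed at parent * mul / div and
+ * whose enable/reset bits are looked up by clock number in the standard
+ * peripheral register banks. For example, later in this patch "dpaux" is
+ * registered with mul = 1, div = 17 and clock number 181, i.e. pll_p / 17.
+ */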
+struct clk *tegra_clk_register_periph_fixed(const char *name,
+ const char *parent,
+ unsigned long flags,
+ void __iomem *base,
+ unsigned int mul,
+ unsigned int div,
+ unsigned int num)
+{
+ const struct tegra_clk_periph_regs *regs;
+ struct tegra_clk_periph_fixed *fixed;
+ struct clk_init_data init;
+ struct clk *clk;
+
+ regs = get_reg_bank(num);
+ if (!regs)
+ return ERR_PTR(-EINVAL);
+
+ fixed = kzalloc(sizeof(*fixed), GFP_KERNEL);
+ if (!fixed)
+ return ERR_PTR(-ENOMEM);
+
+ init.name = name;
+ init.flags = flags;
+ init.parent_names = parent ? &parent : NULL;
+ init.num_parents = parent ? 1 : 0;
+ init.ops = &tegra_clk_periph_fixed_ops;
+
+ fixed->base = base;
+ fixed->regs = regs;
+ fixed->mul = mul;
+ fixed->div = div;
+ fixed->num = num;
+
+ fixed->hw.init = &init;
+
+ clk = clk_register(NULL, &fixed->hw);
+ if (IS_ERR(clk))
+ kfree(fixed);
+
+ return clk;
+}
diff --git a/drivers/clk/tegra/clk-periph-gate.c b/drivers/clk/tegra/clk-periph-gate.c
index d28d6e95020f..88127828befe 100644
--- a/drivers/clk/tegra/clk-periph-gate.c
+++ b/drivers/clk/tegra/clk-periph-gate.c
@@ -134,7 +134,7 @@ struct clk *tegra_clk_register_periph_gate(const char *name,
struct tegra_clk_periph_gate *gate;
struct clk *clk;
struct clk_init_data init;
- struct tegra_clk_periph_regs *pregs;
+ const struct tegra_clk_periph_regs *pregs;
pregs = get_reg_bank(clk_num);
if (!pregs)
diff --git a/drivers/clk/tegra/clk-periph.c b/drivers/clk/tegra/clk-periph.c
index ec5b6113b012..a17ca6d7f649 100644
--- a/drivers/clk/tegra/clk-periph.c
+++ b/drivers/clk/tegra/clk-periph.c
@@ -145,7 +145,7 @@ static struct clk *_tegra_clk_register_periph(const char *name,
{
struct clk *clk;
struct clk_init_data init;
- struct tegra_clk_periph_regs *bank;
+ const struct tegra_clk_periph_regs *bank;
bool div = !(periph->gate.flags & TEGRA_PERIPH_NO_DIV);
if (periph->gate.flags & TEGRA_PERIPH_NO_DIV) {
diff --git a/drivers/clk/tegra/clk-pll.c b/drivers/clk/tegra/clk-pll.c
index 6ac3f843e7ca..4e194ecc8d5e 100644
--- a/drivers/clk/tegra/clk-pll.c
+++ b/drivers/clk/tegra/clk-pll.c
@@ -2013,6 +2013,52 @@ struct clk *tegra_clk_register_pllss(const char *name, const char *parent_name,
#endif
#if defined(CONFIG_ARCH_TEGRA_210_SOC)
+struct clk *tegra_clk_register_pllre_tegra210(const char *name,
+ const char *parent_name, void __iomem *clk_base,
+ void __iomem *pmc, unsigned long flags,
+ struct tegra_clk_pll_params *pll_params,
+ spinlock_t *lock, unsigned long parent_rate)
+{
+ u32 val;
+ struct tegra_clk_pll *pll;
+ struct clk *clk;
+
+ pll_params->vco_min = _clip_vco_min(pll_params->vco_min, parent_rate);
+
+ if (pll_params->adjust_vco)
+ pll_params->vco_min = pll_params->adjust_vco(pll_params,
+ parent_rate);
+
+ pll = _tegra_init_pll(clk_base, pmc, pll_params, lock);
+ if (IS_ERR(pll))
+ return ERR_CAST(pll);
+
+ /* program minimum rate by default */
+
+ val = pll_readl_base(pll);
+ if (val & PLL_BASE_ENABLE)
+ WARN_ON(readl_relaxed(clk_base + pll_params->iddq_reg) &
+ BIT(pll_params->iddq_bit_idx));
+ else {
+ val = 0x4 << divm_shift(pll);
+ val |= 0x41 << divn_shift(pll);
+ pll_writel_base(val, pll);
+ }
+
+ /* disable lock override */
+
+ val = pll_readl_misc(pll);
+ val &= ~BIT(29);
+ pll_writel_misc(val, pll);
+
+ clk = _tegra_clk_register_pll(pll, name, parent_name, flags,
+ &tegra_clk_pllre_ops);
+ if (IS_ERR(clk))
+ kfree(pll);
+
+ return clk;
+}
+
static int clk_plle_tegra210_enable(struct clk_hw *hw)
{
struct tegra_clk_pll *pll = to_clk_pll(hw);
diff --git a/drivers/clk/tegra/clk-tegra-fixed.c b/drivers/clk/tegra/clk-tegra-fixed.c
index d64ec7a1b976..91c38f1666c1 100644
--- a/drivers/clk/tegra/clk-tegra-fixed.c
+++ b/drivers/clk/tegra/clk-tegra-fixed.c
@@ -107,4 +107,3 @@ void __init tegra_fixed_clk_init(struct tegra_clk *tegra_clks)
*dt_clk = clk;
}
}
-
diff --git a/drivers/clk/tegra/clk-tegra-periph.c b/drivers/clk/tegra/clk-tegra-periph.c
index ea2b9cbf9e70..29d04c663abf 100644
--- a/drivers/clk/tegra/clk-tegra-periph.c
+++ b/drivers/clk/tegra/clk-tegra-periph.c
@@ -803,7 +803,7 @@ static struct tegra_periph_init_data gate_clks[] = {
GATE("hda2hdmi", "clk_m", 128, TEGRA_PERIPH_ON_APB, tegra_clk_hda2hdmi, 0),
GATE("bsea", "clk_m", 62, 0, tegra_clk_bsea, 0),
GATE("bsev", "clk_m", 63, 0, tegra_clk_bsev, 0),
- GATE("mipi-cal", "clk_m", 56, 0, tegra_clk_mipi_cal, 0),
+ GATE("mipi-cal", "clk72mhz", 56, 0, tegra_clk_mipi_cal, 0),
GATE("usbd", "clk_m", 22, 0, tegra_clk_usbd, 0),
GATE("usb2", "clk_m", 58, 0, tegra_clk_usb2, 0),
GATE("usb3", "clk_m", 59, 0, tegra_clk_usb3, 0),
@@ -821,7 +821,6 @@ static struct tegra_periph_init_data gate_clks[] = {
GATE("ispb", "clk_m", 3, 0, tegra_clk_ispb, 0),
GATE("vim2_clk", "clk_m", 11, 0, tegra_clk_vim2_clk, 0),
GATE("pcie", "clk_m", 70, 0, tegra_clk_pcie, 0),
- GATE("dpaux", "clk_m", 181, 0, tegra_clk_dpaux, 0),
GATE("gpu", "pll_ref", 184, 0, tegra_clk_gpu, 0),
GATE("pllg_ref", "pll_ref", 189, 0, tegra_clk_pll_g_ref, 0),
GATE("hsic_trk", "usb2_hsic_trk", 209, TEGRA_PERIPH_NO_RESET, tegra_clk_hsic_trk, 0),
@@ -877,7 +876,7 @@ static void __init periph_clk_init(void __iomem *clk_base,
struct clk **dt_clk;
for (i = 0; i < ARRAY_SIZE(periph_clks); i++) {
- struct tegra_clk_periph_regs *bank;
+ const struct tegra_clk_periph_regs *bank;
struct tegra_periph_init_data *data;
data = periph_clks + i;
diff --git a/drivers/clk/tegra/clk-tegra114.c b/drivers/clk/tegra/clk-tegra114.c
index df47ec3169c3..b78054fac0a8 100644
--- a/drivers/clk/tegra/clk-tegra114.c
+++ b/drivers/clk/tegra/clk-tegra114.c
@@ -743,7 +743,6 @@ static struct tegra_clk tegra114_clks[tegra_clk_max] __initdata = {
[tegra_clk_csi] = { .dt_id = TEGRA114_CLK_CSI, .present = true },
[tegra_clk_i2c2] = { .dt_id = TEGRA114_CLK_I2C2, .present = true },
[tegra_clk_uartc] = { .dt_id = TEGRA114_CLK_UARTC, .present = true },
- [tegra_clk_mipi_cal] = { .dt_id = TEGRA114_CLK_MIPI_CAL, .present = true },
[tegra_clk_emc] = { .dt_id = TEGRA114_CLK_EMC, .present = true },
[tegra_clk_usb2] = { .dt_id = TEGRA114_CLK_USB2, .present = true },
[tegra_clk_usb3] = { .dt_id = TEGRA114_CLK_USB3, .present = true },
@@ -1237,6 +1236,11 @@ static __init void tegra114_periph_clk_init(void __iomem *clk_base,
&emc_lock);
clks[TEGRA114_CLK_MC] = clk;
+ clk = tegra_clk_register_periph_gate("mipi-cal", "clk_m", 0, clk_base,
+ CLK_SET_RATE_PARENT, 56,
+ periph_clk_enb_refcnt);
+ clks[TEGRA114_CLK_MIPI_CAL] = clk;
+
for (i = 0; i < ARRAY_SIZE(tegra_periph_clk_list); i++) {
data = &tegra_periph_clk_list[i];
clk = tegra_clk_register_periph(data->name,
diff --git a/drivers/clk/tegra/clk-tegra124-dfll-fcpu.c b/drivers/clk/tegra/clk-tegra124-dfll-fcpu.c
index 61253330c12b..c205809ba580 100644
--- a/drivers/clk/tegra/clk-tegra124-dfll-fcpu.c
+++ b/drivers/clk/tegra/clk-tegra124-dfll-fcpu.c
@@ -47,32 +47,32 @@ static const struct cvb_table tegra124_cpu_cvb_tables[] = {
},
.speedo_scale = 100,
.voltage_scale = 1000,
- .cvb_table = {
- {204000000UL, {1112619, -29295, 402} },
- {306000000UL, {1150460, -30585, 402} },
- {408000000UL, {1190122, -31865, 402} },
- {510000000UL, {1231606, -33155, 402} },
- {612000000UL, {1274912, -34435, 402} },
- {714000000UL, {1320040, -35725, 402} },
- {816000000UL, {1366990, -37005, 402} },
- {918000000UL, {1415762, -38295, 402} },
- {1020000000UL, {1466355, -39575, 402} },
- {1122000000UL, {1518771, -40865, 402} },
- {1224000000UL, {1573009, -42145, 402} },
- {1326000000UL, {1629068, -43435, 402} },
- {1428000000UL, {1686950, -44715, 402} },
- {1530000000UL, {1746653, -46005, 402} },
- {1632000000UL, {1808179, -47285, 402} },
- {1734000000UL, {1871526, -48575, 402} },
- {1836000000UL, {1936696, -49855, 402} },
- {1938000000UL, {2003687, -51145, 402} },
- {2014500000UL, {2054787, -52095, 402} },
- {2116500000UL, {2124957, -53385, 402} },
- {2218500000UL, {2196950, -54665, 402} },
- {2320500000UL, {2270765, -55955, 402} },
- {2422500000UL, {2346401, -57235, 402} },
- {2524500000UL, {2437299, -58535, 402} },
- {0, { 0, 0, 0} },
+ .entries = {
+ { 204000000UL, { 1112619, -29295, 402 } },
+ { 306000000UL, { 1150460, -30585, 402 } },
+ { 408000000UL, { 1190122, -31865, 402 } },
+ { 510000000UL, { 1231606, -33155, 402 } },
+ { 612000000UL, { 1274912, -34435, 402 } },
+ { 714000000UL, { 1320040, -35725, 402 } },
+ { 816000000UL, { 1366990, -37005, 402 } },
+ { 918000000UL, { 1415762, -38295, 402 } },
+ { 1020000000UL, { 1466355, -39575, 402 } },
+ { 1122000000UL, { 1518771, -40865, 402 } },
+ { 1224000000UL, { 1573009, -42145, 402 } },
+ { 1326000000UL, { 1629068, -43435, 402 } },
+ { 1428000000UL, { 1686950, -44715, 402 } },
+ { 1530000000UL, { 1746653, -46005, 402 } },
+ { 1632000000UL, { 1808179, -47285, 402 } },
+ { 1734000000UL, { 1871526, -48575, 402 } },
+ { 1836000000UL, { 1936696, -49855, 402 } },
+ { 1938000000UL, { 2003687, -51145, 402 } },
+ { 2014500000UL, { 2054787, -52095, 402 } },
+ { 2116500000UL, { 2124957, -53385, 402 } },
+ { 2218500000UL, { 2196950, -54665, 402 } },
+ { 2320500000UL, { 2270765, -55955, 402 } },
+ { 2422500000UL, { 2346401, -57235, 402 } },
+ { 2524500000UL, { 2437299, -58535, 402 } },
+ { 0UL, { 0, 0, 0 } },
},
.cpu_dfll_data = {
.tune0_low = 0x005020ff,
@@ -84,9 +84,8 @@ static const struct cvb_table tegra124_cpu_cvb_tables[] = {
static int tegra124_dfll_fcpu_probe(struct platform_device *pdev)
{
- int process_id, speedo_id, speedo_value;
+ int process_id, speedo_id, speedo_value, err;
struct tegra_dfll_soc_data *soc;
- const struct cvb_table *cvb;
process_id = tegra_sku_info.cpu_process_id;
speedo_id = tegra_sku_info.cpu_speedo_id;
@@ -108,23 +107,41 @@ static int tegra124_dfll_fcpu_probe(struct platform_device *pdev)
return -ENODEV;
}
- cvb = tegra_cvb_build_opp_table(tegra124_cpu_cvb_tables,
- ARRAY_SIZE(tegra124_cpu_cvb_tables),
- process_id, speedo_id, speedo_value,
- cpu_max_freq_table[speedo_id],
- soc->dev);
- if (IS_ERR(cvb)) {
- dev_err(&pdev->dev, "couldn't build OPP table: %ld\n",
- PTR_ERR(cvb));
- return PTR_ERR(cvb);
+ soc->max_freq = cpu_max_freq_table[speedo_id];
+
+ soc->cvb = tegra_cvb_add_opp_table(soc->dev, tegra124_cpu_cvb_tables,
+ ARRAY_SIZE(tegra124_cpu_cvb_tables),
+ process_id, speedo_id, speedo_value,
+ soc->max_freq);
+ if (IS_ERR(soc->cvb)) {
+ dev_err(&pdev->dev, "couldn't add OPP table: %ld\n",
+ PTR_ERR(soc->cvb));
+ return PTR_ERR(soc->cvb);
+ }
+
+ err = tegra_dfll_register(pdev, soc);
+ if (err < 0) {
+ tegra_cvb_remove_opp_table(soc->dev, soc->cvb, soc->max_freq);
+ return err;
}
- soc->min_millivolts = cvb->min_millivolts;
- soc->tune0_low = cvb->cpu_dfll_data.tune0_low;
- soc->tune0_high = cvb->cpu_dfll_data.tune0_high;
- soc->tune1 = cvb->cpu_dfll_data.tune1;
+ platform_set_drvdata(pdev, soc);
+
+ return 0;
+}
+
+static int tegra124_dfll_fcpu_remove(struct platform_device *pdev)
+{
+ struct tegra_dfll_soc_data *soc = platform_get_drvdata(pdev);
+ int err;
+
+ err = tegra_dfll_unregister(pdev);
+ if (err < 0)
+ dev_err(&pdev->dev, "failed to unregister DFLL: %d\n", err);
+
+ tegra_cvb_remove_opp_table(soc->dev, soc->cvb, soc->max_freq);
- return tegra_dfll_register(pdev, soc);
+ return 0;
}
static const struct of_device_id tegra124_dfll_fcpu_of_match[] = {
@@ -140,7 +157,7 @@ static const struct dev_pm_ops tegra124_dfll_pm_ops = {
static struct platform_driver tegra124_dfll_fcpu_driver = {
.probe = tegra124_dfll_fcpu_probe,
- .remove = tegra_dfll_unregister,
+ .remove = tegra124_dfll_fcpu_remove,
.driver = {
.name = "tegra124-dfll",
.of_match_table = tegra124_dfll_fcpu_of_match,
diff --git a/drivers/clk/tegra/clk-tegra124.c b/drivers/clk/tegra/clk-tegra124.c
index 1627258292d2..f4fbbf16a056 100644
--- a/drivers/clk/tegra/clk-tegra124.c
+++ b/drivers/clk/tegra/clk-tegra124.c
@@ -1155,6 +1155,10 @@ static __init void tegra124_periph_clk_init(void __iomem *clk_base,
1, 2);
clks[TEGRA124_CLK_XUSB_SS_DIV2] = clk;
+ clk = tegra_clk_register_periph_fixed("dpaux", "pll_p", 0, clk_base,
+ 1, 17, 181);
+ clks[TEGRA124_CLK_DPAUX] = clk;
+
clk = clk_register_gate(NULL, "pll_d_dsi_out", "pll_d_out0", 0,
clk_base + PLLD_MISC, 30, 0, &pll_d_lock);
clks[TEGRA124_CLK_PLL_D_DSI_OUT] = clk;
diff --git a/drivers/clk/tegra/clk-tegra20.c b/drivers/clk/tegra/clk-tegra20.c
index 7ad63837694f..837e5cbd60e9 100644
--- a/drivers/clk/tegra/clk-tegra20.c
+++ b/drivers/clk/tegra/clk-tegra20.c
@@ -623,7 +623,7 @@ static unsigned int tegra20_get_pll_ref_div(void)
case OSC_CTRL_PLL_REF_DIV_4:
return 4;
default:
- pr_err("Invalied pll ref divider %d\n", pll_ref_div);
+ pr_err("Invalid pll ref divider %d\n", pll_ref_div);
BUG();
}
return 0;
diff --git a/drivers/clk/tegra/clk-tegra210.c b/drivers/clk/tegra/clk-tegra210.c
index 637041fd53ad..456cf586d2c2 100644
--- a/drivers/clk/tegra/clk-tegra210.c
+++ b/drivers/clk/tegra/clk-tegra210.c
@@ -92,6 +92,7 @@
#define PLLE_AUX 0x48c
#define PLLRE_BASE 0x4c4
#define PLLRE_MISC0 0x4c8
+#define PLLRE_OUT1 0x4cc
#define PLLDP_BASE 0x590
#define PLLDP_MISC 0x594
@@ -175,6 +176,19 @@
#define UTMIP_PLL_CFG1_FORCE_PLL_ENABLE_POWERDOWN BIT(14)
#define UTMIP_PLL_CFG1_FORCE_PLL_ACTIVE_POWERDOWN BIT(12)
+#define SATA_PLL_CFG0 0x490
+#define SATA_PLL_CFG0_PADPLL_RESET_SWCTL BIT(0)
+#define SATA_PLL_CFG0_PADPLL_USE_LOCKDET BIT(2)
+#define SATA_PLL_CFG0_PADPLL_SLEEP_IDDQ BIT(13)
+#define SATA_PLL_CFG0_SEQ_ENABLE BIT(24)
+
+#define XUSBIO_PLL_CFG0 0x51c
+#define XUSBIO_PLL_CFG0_PADPLL_RESET_SWCTL BIT(0)
+#define XUSBIO_PLL_CFG0_CLK_ENABLE_SWCTL BIT(2)
+#define XUSBIO_PLL_CFG0_PADPLL_USE_LOCKDET BIT(6)
+#define XUSBIO_PLL_CFG0_PADPLL_SLEEP_IDDQ BIT(13)
+#define XUSBIO_PLL_CFG0_SEQ_ENABLE BIT(24)
+
#define UTMIPLL_HW_PWRDN_CFG0 0x52c
#define UTMIPLL_HW_PWRDN_CFG0_UTMIPLL_LOCK BIT(31)
#define UTMIPLL_HW_PWRDN_CFG0_SEQ_START_STATE BIT(25)
@@ -416,6 +430,51 @@ static const char *mux_pllmcp_clkm[] = {
#define PLLU_MISC0_WRITE_MASK 0xbfffffff
#define PLLU_MISC1_WRITE_MASK 0x00000007
+void tegra210_xusb_pll_hw_control_enable(void)
+{
+ u32 val;
+
+ val = readl_relaxed(clk_base + XUSBIO_PLL_CFG0);
+ val &= ~(XUSBIO_PLL_CFG0_CLK_ENABLE_SWCTL |
+ XUSBIO_PLL_CFG0_PADPLL_RESET_SWCTL);
+ val |= XUSBIO_PLL_CFG0_PADPLL_USE_LOCKDET |
+ XUSBIO_PLL_CFG0_PADPLL_SLEEP_IDDQ;
+ writel_relaxed(val, clk_base + XUSBIO_PLL_CFG0);
+}
+EXPORT_SYMBOL_GPL(tegra210_xusb_pll_hw_control_enable);
+
+void tegra210_xusb_pll_hw_sequence_start(void)
+{
+ u32 val;
+
+ val = readl_relaxed(clk_base + XUSBIO_PLL_CFG0);
+ val |= XUSBIO_PLL_CFG0_SEQ_ENABLE;
+ writel_relaxed(val, clk_base + XUSBIO_PLL_CFG0);
+}
+EXPORT_SYMBOL_GPL(tegra210_xusb_pll_hw_sequence_start);
+
+void tegra210_sata_pll_hw_control_enable(void)
+{
+ u32 val;
+
+ val = readl_relaxed(clk_base + SATA_PLL_CFG0);
+ val &= ~SATA_PLL_CFG0_PADPLL_RESET_SWCTL;
+ val |= SATA_PLL_CFG0_PADPLL_USE_LOCKDET |
+ SATA_PLL_CFG0_PADPLL_SLEEP_IDDQ;
+ writel_relaxed(val, clk_base + SATA_PLL_CFG0);
+}
+EXPORT_SYMBOL_GPL(tegra210_sata_pll_hw_control_enable);
+
+void tegra210_sata_pll_hw_sequence_start(void)
+{
+ u32 val;
+
+ val = readl_relaxed(clk_base + SATA_PLL_CFG0);
+ val |= SATA_PLL_CFG0_SEQ_ENABLE;
+ writel_relaxed(val, clk_base + SATA_PLL_CFG0);
+}
+EXPORT_SYMBOL_GPL(tegra210_sata_pll_hw_sequence_start);
+
static inline void _pll_misc_chk_default(void __iomem *base,
struct tegra_clk_pll_params *params,
u8 misc_num, u32 default_val, u32 mask)
@@ -1162,7 +1221,7 @@ static int tegra210_pll_fixed_mdiv_cfg(struct clk_hw *hw,
p = rate >= params->vco_min ? 1 : -EINVAL;
}
- if (IS_ERR_VALUE(p))
+ if (p < 0)
return -EINVAL;
cfg->m = tegra_pll_get_fixed_mdiv(hw, input_rate);
@@ -2092,6 +2151,7 @@ static struct tegra_clk tegra210_clks[tegra_clk_max] __initdata = {
[tegra_clk_clk72Mhz_8] = { .dt_id = TEGRA210_CLK_CLK72MHZ, .present = true },
[tegra_clk_vic03_8] = { .dt_id = TEGRA210_CLK_VIC03, .present = true },
[tegra_clk_dpaux] = { .dt_id = TEGRA210_CLK_DPAUX, .present = true },
+ [tegra_clk_dpaux1] = { .dt_id = TEGRA210_CLK_DPAUX1, .present = true },
[tegra_clk_sor0] = { .dt_id = TEGRA210_CLK_SOR0, .present = true },
[tegra_clk_sor0_lvds] = { .dt_id = TEGRA210_CLK_SOR0_LVDS, .present = true },
[tegra_clk_gpu] = { .dt_id = TEGRA210_CLK_GPU, .present = true },
@@ -2403,6 +2463,18 @@ static __init void tegra210_periph_clk_init(void __iomem *clk_base,
1, 2);
clks[TEGRA210_CLK_XUSB_SS_DIV2] = clk;
+ clk = tegra_clk_register_periph_fixed("dpaux", "pll_p", 0, clk_base,
+ 1, 17, 181);
+ clks[TEGRA210_CLK_DPAUX] = clk;
+
+ clk = tegra_clk_register_periph_fixed("dpaux1", "pll_p", 0, clk_base,
+ 1, 17, 207);
+ clks[TEGRA210_CLK_DPAUX1] = clk;
+
+ clk = tegra_clk_register_periph_fixed("sor_safe", "pll_p", 0, clk_base,
+ 1, 17, 222);
+ clks[TEGRA210_CLK_SOR_SAFE] = clk;
+
/* pll_d_dsi_out */
clk = clk_register_gate(NULL, "pll_d_dsi_out", "pll_d_out0", 0,
clk_base + PLLD_MISC0, 21, 0, &pll_d_lock);
@@ -2582,8 +2654,10 @@ static void __init tegra210_pll_init(void __iomem *clk_base,
clks[TEGRA210_CLK_PLL_D_OUT0] = clk;
/* PLLRE */
- clk = tegra_clk_register_pllre("pll_re_vco", "pll_ref", clk_base, pmc,
- 0, &pll_re_vco_params, &pll_re_lock, pll_ref_freq);
+ clk = tegra_clk_register_pllre_tegra210("pll_re_vco", "pll_ref",
+ clk_base, pmc, 0,
+ &pll_re_vco_params,
+ &pll_re_lock, pll_ref_freq);
clk_register_clkdev(clk, "pll_re_vco", NULL);
clks[TEGRA210_CLK_PLL_RE_VCO] = clk;
@@ -2593,6 +2667,15 @@ static void __init tegra210_pll_init(void __iomem *clk_base,
clk_register_clkdev(clk, "pll_re_out", NULL);
clks[TEGRA210_CLK_PLL_RE_OUT] = clk;
+ clk = tegra_clk_register_divider("pll_re_out1_div", "pll_re_vco",
+ clk_base + PLLRE_OUT1, 0,
+ TEGRA_DIVIDER_ROUND_UP,
+ 8, 8, 1, NULL);
+ clk = tegra_clk_register_pll_out("pll_re_out1", "pll_re_out1_div",
+ clk_base + PLLRE_OUT1, 1, 0,
+ CLK_SET_RATE_PARENT, 0, NULL);
+ clks[TEGRA210_CLK_PLL_RE_OUT1] = clk;
+
/* PLLE */
clk = tegra_clk_register_plle_tegra210("pll_e", "pll_ref",
clk_base, 0, &pll_e_params, NULL);
diff --git a/drivers/clk/tegra/clk-tegra30.c b/drivers/clk/tegra/clk-tegra30.c
index 0478565cf292..9396f4930da7 100644
--- a/drivers/clk/tegra/clk-tegra30.c
+++ b/drivers/clk/tegra/clk-tegra30.c
@@ -339,11 +339,11 @@ static const struct pdiv_map pllu_p[] = {
};
static struct tegra_clk_pll_freq_table pll_u_freq_table[] = {
- { 12000000, 480000000, 960, 12, 1, 12 },
- { 13000000, 480000000, 960, 13, 1, 12 },
- { 16800000, 480000000, 400, 7, 1, 5 },
- { 19200000, 480000000, 200, 4, 1, 3 },
- { 26000000, 480000000, 960, 26, 1, 12 },
+ { 12000000, 480000000, 960, 12, 2, 12 },
+ { 13000000, 480000000, 960, 13, 2, 12 },
+ { 16800000, 480000000, 400, 7, 2, 5 },
+ { 19200000, 480000000, 200, 4, 2, 3 },
+ { 26000000, 480000000, 960, 26, 2, 12 },
{ 0, 0, 0, 0, 0, 0 },
};
@@ -1372,6 +1372,7 @@ static struct tegra_clk_init_table init_table[] __initdata = {
{ TEGRA30_CLK_SBC4, TEGRA30_CLK_PLL_P, 100000000, 0 },
{ TEGRA30_CLK_SBC5, TEGRA30_CLK_PLL_P, 100000000, 0 },
{ TEGRA30_CLK_SBC6, TEGRA30_CLK_PLL_P, 100000000, 0 },
+ { TEGRA30_CLK_PLL_C, TEGRA30_CLK_CLK_MAX, 600000000, 0 },
{ TEGRA30_CLK_HOST1X, TEGRA30_CLK_PLL_C, 150000000, 0 },
{ TEGRA30_CLK_DISP1, TEGRA30_CLK_PLL_P, 600000000, 0 },
{ TEGRA30_CLK_DISP2, TEGRA30_CLK_PLL_P, 600000000, 0 },
@@ -1379,6 +1380,7 @@ static struct tegra_clk_init_table init_table[] __initdata = {
{ TEGRA30_CLK_GR2D, TEGRA30_CLK_PLL_C, 300000000, 0 },
{ TEGRA30_CLK_GR3D, TEGRA30_CLK_PLL_C, 300000000, 0 },
{ TEGRA30_CLK_GR3D2, TEGRA30_CLK_PLL_C, 300000000, 0 },
+ { TEGRA30_CLK_PLL_U, TEGRA30_CLK_CLK_MAX, 480000000, 0 },
/* must be the last entry */
{ TEGRA30_CLK_CLK_MAX, TEGRA30_CLK_CLK_MAX, 0, 0 },
};
diff --git a/drivers/clk/tegra/clk.c b/drivers/clk/tegra/clk.c
index f60fe2e344ca..b2cdd9a235f4 100644
--- a/drivers/clk/tegra/clk.c
+++ b/drivers/clk/tegra/clk.c
@@ -84,7 +84,7 @@ static int (*special_reset_assert)(unsigned long);
static int (*special_reset_deassert)(unsigned long);
static unsigned int num_special_reset;
-static struct tegra_clk_periph_regs periph_regs[] = {
+static const struct tegra_clk_periph_regs periph_regs[] = {
[0] = {
.enb_reg = CLK_OUT_ENB_L,
.enb_set_reg = CLK_OUT_ENB_SET_L,
@@ -182,7 +182,7 @@ static int tegra_clk_rst_deassert(struct reset_controller_dev *rcdev,
return -EINVAL;
}
-struct tegra_clk_periph_regs *get_reg_bank(int clkid)
+const struct tegra_clk_periph_regs *get_reg_bank(int clkid)
{
int reg_bank = clkid / 32;
diff --git a/drivers/clk/tegra/clk.h b/drivers/clk/tegra/clk.h
index 4dbcfaec576a..9421f0310999 100644
--- a/drivers/clk/tegra/clk.h
+++ b/drivers/clk/tegra/clk.h
@@ -386,6 +386,12 @@ struct clk *tegra_clk_register_pllre(const char *name, const char *parent_name,
struct tegra_clk_pll_params *pll_params,
spinlock_t *lock, unsigned long parent_rate);
+struct clk *tegra_clk_register_pllre_tegra210(const char *name,
+ const char *parent_name, void __iomem *clk_base,
+ void __iomem *pmc, unsigned long flags,
+ struct tegra_clk_pll_params *pll_params,
+ spinlock_t *lock, unsigned long parent_rate);
+
struct clk *tegra_clk_register_plle_tegra114(const char *name,
const char *parent_name,
void __iomem *clk_base, unsigned long flags,
@@ -496,7 +502,7 @@ struct tegra_clk_periph_gate {
u8 flags;
int clk_num;
int *enable_refcnt;
- struct tegra_clk_periph_regs *regs;
+ const struct tegra_clk_periph_regs *regs;
};
#define to_clk_periph_gate(_hw) \
@@ -516,6 +522,23 @@ struct clk *tegra_clk_register_periph_gate(const char *name,
const char *parent_name, u8 gate_flags, void __iomem *clk_base,
unsigned long flags, int clk_num, int *enable_refcnt);
+struct tegra_clk_periph_fixed {
+ struct clk_hw hw;
+ void __iomem *base;
+ const struct tegra_clk_periph_regs *regs;
+ unsigned int mul;
+ unsigned int div;
+ unsigned int num;
+};
+
+struct clk *tegra_clk_register_periph_fixed(const char *name,
+ const char *parent,
+ unsigned long flags,
+ void __iomem *base,
+ unsigned int mul,
+ unsigned int div,
+ unsigned int num);
+
/**
* struct clk-periph - peripheral clock
*
@@ -716,7 +739,7 @@ void tegra_init_from_table(struct tegra_clk_init_table *tbl,
void tegra_init_dup_clks(struct tegra_clk_duplicate *dup_list,
struct clk *clks[], int clk_max);
-struct tegra_clk_periph_regs *get_reg_bank(int clkid);
+const struct tegra_clk_periph_regs *get_reg_bank(int clkid);
struct clk **tegra_clk_init(void __iomem *clk_base, int num, int periph_banks);
struct clk **tegra_lookup_dt_id(int clk_id, struct tegra_clk *tegra_clk);
diff --git a/drivers/clk/tegra/cvb.c b/drivers/clk/tegra/cvb.c
index 69c74eec3a4b..624115e82ff9 100644
--- a/drivers/clk/tegra/cvb.c
+++ b/drivers/clk/tegra/cvb.c
@@ -61,29 +61,28 @@ static int round_voltage(int mv, const struct rail_alignment *align, int up)
return mv;
}
-static int build_opp_table(const struct cvb_table *d,
- int speedo_value,
- unsigned long max_freq,
- struct device *opp_dev)
+static int build_opp_table(struct device *dev, const struct cvb_table *table,
+ int speedo_value, unsigned long max_freq)
{
+ const struct rail_alignment *align = &table->alignment;
int i, ret, dfll_mv, min_mv, max_mv;
- const struct cvb_table_freq_entry *table = NULL;
- const struct rail_alignment *align = &d->alignment;
- min_mv = round_voltage(d->min_millivolts, align, UP);
- max_mv = round_voltage(d->max_millivolts, align, DOWN);
+ min_mv = round_voltage(table->min_millivolts, align, UP);
+ max_mv = round_voltage(table->max_millivolts, align, DOWN);
for (i = 0; i < MAX_DVFS_FREQS; i++) {
- table = &d->cvb_table[i];
- if (!table->freq || (table->freq > max_freq))
+ const struct cvb_table_freq_entry *entry = &table->entries[i];
+
+ if (!entry->freq || (entry->freq > max_freq))
break;
- dfll_mv = get_cvb_voltage(
- speedo_value, d->speedo_scale, &table->coefficients);
- dfll_mv = round_cvb_voltage(dfll_mv, d->voltage_scale, align);
+ dfll_mv = get_cvb_voltage(speedo_value, table->speedo_scale,
+ &entry->coefficients);
+ dfll_mv = round_cvb_voltage(dfll_mv, table->voltage_scale,
+ align);
dfll_mv = clamp(dfll_mv, min_mv, max_mv);
- ret = dev_pm_opp_add(opp_dev, table->freq, dfll_mv * 1000);
+ ret = dev_pm_opp_add(dev, entry->freq, dfll_mv * 1000);
if (ret)
return ret;
}
@@ -92,7 +91,7 @@ static int build_opp_table(const struct cvb_table *d,
}
/**
- * tegra_cvb_build_opp_table - build OPP table from Tegra CVB tables
+ * tegra_cvb_add_opp_table - build OPP table from Tegra CVB tables
* @cvb_tables: array of CVB tables
* @sz: size of the previously mentioned array
* @process_id: process id of the HW module
@@ -108,26 +107,42 @@ static int build_opp_table(const struct cvb_table *d,
* given @opp_dev. Returns a pointer to the struct cvb_table that matched
* or an ERR_PTR on failure.
*/
-const struct cvb_table *tegra_cvb_build_opp_table(
- const struct cvb_table *cvb_tables,
- size_t sz, int process_id,
- int speedo_id, int speedo_value,
- unsigned long max_rate,
- struct device *opp_dev)
+const struct cvb_table *
+tegra_cvb_add_opp_table(struct device *dev, const struct cvb_table *tables,
+ size_t count, int process_id, int speedo_id,
+ int speedo_value, unsigned long max_freq)
{
- int i, ret;
+ size_t i;
+ int ret;
- for (i = 0; i < sz; i++) {
- const struct cvb_table *d = &cvb_tables[i];
+ for (i = 0; i < count; i++) {
+ const struct cvb_table *table = &tables[i];
- if (d->speedo_id != -1 && d->speedo_id != speedo_id)
+ if (table->speedo_id != -1 && table->speedo_id != speedo_id)
continue;
- if (d->process_id != -1 && d->process_id != process_id)
+
+ if (table->process_id != -1 && table->process_id != process_id)
continue;
- ret = build_opp_table(d, speedo_value, max_rate, opp_dev);
- return ret ? ERR_PTR(ret) : d;
+ ret = build_opp_table(dev, table, speedo_value, max_freq);
+ return ret ? ERR_PTR(ret) : table;
}
return ERR_PTR(-EINVAL);
}
+
+void tegra_cvb_remove_opp_table(struct device *dev,
+ const struct cvb_table *table,
+ unsigned long max_freq)
+{
+ unsigned int i;
+
+ for (i = 0; i < MAX_DVFS_FREQS; i++) {
+ const struct cvb_table_freq_entry *entry = &table->entries[i];
+
+ if (!entry->freq || (entry->freq > max_freq))
+ break;
+
+ dev_pm_opp_remove(dev, entry->freq);
+ }
+}
diff --git a/drivers/clk/tegra/cvb.h b/drivers/clk/tegra/cvb.h
index f62cdc4f4234..c1f077993b2a 100644
--- a/drivers/clk/tegra/cvb.h
+++ b/drivers/clk/tegra/cvb.h
@@ -53,15 +53,16 @@ struct cvb_table {
int speedo_scale;
int voltage_scale;
- struct cvb_table_freq_entry cvb_table[MAX_DVFS_FREQS];
+ struct cvb_table_freq_entry entries[MAX_DVFS_FREQS];
struct cvb_cpu_dfll_data cpu_dfll_data;
};
-const struct cvb_table *tegra_cvb_build_opp_table(
- const struct cvb_table *cvb_tables,
- size_t sz, int process_id,
- int speedo_id, int speedo_value,
- unsigned long max_rate,
- struct device *opp_dev);
+const struct cvb_table *
+tegra_cvb_add_opp_table(struct device *dev, const struct cvb_table *cvb_tables,
+ size_t count, int process_id, int speedo_id,
+ int speedo_value, unsigned long max_freq);
+void tegra_cvb_remove_opp_table(struct device *dev,
+ const struct cvb_table *table,
+ unsigned long max_freq);
#endif
diff --git a/drivers/clk/ti/clk-54xx.c b/drivers/clk/ti/clk-54xx.c
index 59ce2fa2c104..294bc03ec067 100644
--- a/drivers/clk/ti/clk-54xx.c
+++ b/drivers/clk/ti/clk-54xx.c
@@ -210,6 +210,7 @@ static struct ti_dt_clk omap54xx_clks[] = {
DT_CLK("usbhs_omap", "usbtll_fck", "dummy_ck"),
DT_CLK("omap_wdt", "ick", "dummy_ck"),
DT_CLK(NULL, "timer_32k_ck", "sys_32k_ck"),
+ DT_CLK(NULL, "sys_clkin_ck", "sys_clkin"),
DT_CLK("4ae18000.timer", "timer_sys_ck", "sys_clkin"),
DT_CLK("48032000.timer", "timer_sys_ck", "sys_clkin"),
DT_CLK("48034000.timer", "timer_sys_ck", "sys_clkin"),
diff --git a/drivers/clk/ti/clk-7xx.c b/drivers/clk/ti/clk-7xx.c
index a911d7de3377..bfa17d33ef3b 100644
--- a/drivers/clk/ti/clk-7xx.c
+++ b/drivers/clk/ti/clk-7xx.c
@@ -223,7 +223,7 @@ static struct ti_dt_clk dra7xx_clks[] = {
DT_CLK(NULL, "mcasp6_aux_gfclk_mux", "mcasp6_aux_gfclk_mux"),
DT_CLK(NULL, "mcasp7_ahclkx_mux", "mcasp7_ahclkx_mux"),
DT_CLK(NULL, "mcasp7_aux_gfclk_mux", "mcasp7_aux_gfclk_mux"),
- DT_CLK(NULL, "mcasp8_ahclk_mux", "mcasp8_ahclk_mux"),
+ DT_CLK(NULL, "mcasp8_ahclkx_mux", "mcasp8_ahclkx_mux"),
DT_CLK(NULL, "mcasp8_aux_gfclk_mux", "mcasp8_aux_gfclk_mux"),
DT_CLK(NULL, "mmc1_fclk_mux", "mmc1_fclk_mux"),
DT_CLK(NULL, "mmc1_fclk_div", "mmc1_fclk_div"),
@@ -289,6 +289,7 @@ static struct ti_dt_clk dra7xx_clks[] = {
DT_CLK("usbhs_omap", "usbtll_fck", "dummy_ck"),
DT_CLK("omap_wdt", "ick", "dummy_ck"),
DT_CLK(NULL, "timer_32k_ck", "sys_32k_ck"),
+ DT_CLK(NULL, "sys_clkin_ck", "timer_sys_clk_div"),
DT_CLK("4ae18000.timer", "timer_sys_ck", "timer_sys_clk_div"),
DT_CLK("48032000.timer", "timer_sys_ck", "timer_sys_clk_div"),
DT_CLK("48034000.timer", "timer_sys_ck", "timer_sys_clk_div"),
diff --git a/drivers/clk/ti/clk-dra7-atl.c b/drivers/clk/ti/clk-dra7-atl.c
index 2e14dfb588f4..c77333230bdf 100644
--- a/drivers/clk/ti/clk-dra7-atl.c
+++ b/drivers/clk/ti/clk-dra7-atl.c
@@ -265,6 +265,7 @@ static int of_dra7_atl_clk_probe(struct platform_device *pdev)
/* Get configuration for the ATL instances */
snprintf(prop, sizeof(prop), "atl%u", i);
+ of_node_get(node);
cfg_node = of_find_node_by_name(node, prop);
if (cfg_node) {
ret = of_property_read_u32(cfg_node, "bws",
@@ -278,6 +279,7 @@ static int of_dra7_atl_clk_probe(struct platform_device *pdev)
atl_write(cinfo, DRA7_ATL_AWSMUX_REG(i),
cdesc->aws);
}
+ of_node_put(cfg_node);
}
cdesc->probed = true;
diff --git a/drivers/clk/ti/clkt_dflt.c b/drivers/clk/ti/clkt_dflt.c
index 1ddc288fce4e..c6ae563801d7 100644
--- a/drivers/clk/ti/clkt_dflt.c
+++ b/drivers/clk/ti/clkt_dflt.c
@@ -222,7 +222,7 @@ int omap2_dflt_clk_enable(struct clk_hw *hw)
}
}
- if (unlikely(IS_ERR(clk->enable_reg))) {
+ if (IS_ERR(clk->enable_reg)) {
pr_err("%s: %s missing enable_reg\n", __func__,
clk_hw_get_name(hw));
ret = -EINVAL;
diff --git a/drivers/clk/ti/clkt_dpll.c b/drivers/clk/ti/clkt_dpll.c
index 032c658a5f5e..b919fdfe8256 100644
--- a/drivers/clk/ti/clkt_dpll.c
+++ b/drivers/clk/ti/clkt_dpll.c
@@ -301,6 +301,9 @@ long omap2_dpll_round_rate(struct clk_hw *hw, unsigned long target_rate,
dd = clk->dpll_data;
+ if (dd->max_rate && target_rate > dd->max_rate)
+ target_rate = dd->max_rate;
+
ref_rate = clk_hw_get_rate(dd->clk_ref);
clk_name = clk_hw_get_name(hw);
pr_debug("clock: %s: starting DPLL round_rate, target rate %lu\n",
diff --git a/drivers/clk/ti/dpll.c b/drivers/clk/ti/dpll.c
index 3bc9959f71c3..9fc8754a6e61 100644
--- a/drivers/clk/ti/dpll.c
+++ b/drivers/clk/ti/dpll.c
@@ -655,6 +655,7 @@ static void __init of_ti_am3_no_gate_dpll_setup(struct device_node *node)
.max_multiplier = 2047,
.max_divider = 128,
.min_divider = 1,
+ .max_rate = 1000000000,
.modes = (1 << DPLL_LOW_POWER_BYPASS) | (1 << DPLL_LOCKED),
};
@@ -674,6 +675,7 @@ static void __init of_ti_am3_jtype_dpll_setup(struct device_node *node)
.max_divider = 256,
.min_divider = 2,
.flags = DPLL_J_TYPE,
+ .max_rate = 2000000000,
.modes = (1 << DPLL_LOW_POWER_BYPASS) | (1 << DPLL_LOCKED),
};
@@ -692,6 +694,7 @@ static void __init of_ti_am3_no_gate_jtype_dpll_setup(struct device_node *node)
.max_multiplier = 2047,
.max_divider = 128,
.min_divider = 1,
+ .max_rate = 2000000000,
.flags = DPLL_J_TYPE,
.modes = (1 << DPLL_LOW_POWER_BYPASS) | (1 << DPLL_LOCKED),
};
@@ -712,6 +715,7 @@ static void __init of_ti_am3_dpll_setup(struct device_node *node)
.max_multiplier = 2047,
.max_divider = 128,
.min_divider = 1,
+ .max_rate = 1000000000,
.modes = (1 << DPLL_LOW_POWER_BYPASS) | (1 << DPLL_LOCKED),
};
@@ -729,6 +733,7 @@ static void __init of_ti_am3_core_dpll_setup(struct device_node *node)
.max_multiplier = 2047,
.max_divider = 128,
.min_divider = 1,
+ .max_rate = 1000000000,
.modes = (1 << DPLL_LOW_POWER_BYPASS) | (1 << DPLL_LOCKED),
};
diff --git a/drivers/clk/zte/clk-zx296702.c b/drivers/clk/zte/clk-zx296702.c
index ebd20d852e73..76e967c19775 100644
--- a/drivers/clk/zte/clk-zx296702.c
+++ b/drivers/clk/zte/clk-zx296702.c
@@ -234,8 +234,7 @@ static void __init zx296702_top_clocks_init(struct device_node *np)
WARN_ON(!topcrm_base);
clk[ZX296702_OSC] =
- clk_register_fixed_rate(NULL, "osc", NULL, CLK_IS_ROOT,
- 30000000);
+ clk_register_fixed_rate(NULL, "osc", NULL, 0, 30000000);
clk[ZX296702_PLL_A9] =
clk_register_zx_pll("pll_a9", "osc", 0, topcrm_base
+ 0x01c, pll_a9_config,
diff --git a/drivers/clocksource/Kconfig b/drivers/clocksource/Kconfig
index c346be650892..47352d25c15e 100644
--- a/drivers/clocksource/Kconfig
+++ b/drivers/clocksource/Kconfig
@@ -181,11 +181,27 @@ config CLKSRC_TI_32K
This option enables support for Texas Instruments 32.768 Hz clocksource
available on many OMAP-like platforms.
+config CLKSRC_NPS
+ bool "NPS400 clocksource driver" if COMPILE_TEST
+ depends on !PHYS_ADDR_T_64BIT
+ select CLKSRC_MMIO
+ select CLKSRC_OF if OF
+ help
+ NPS400 clocksource support.
+ It provides a 64-bit counter with an update rate of up to 1000 MHz.
+ The counter is accessed via a couple of 32-bit memory-mapped registers.
+
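A minimal sketch of the high/low/high read sequence commonly used for a 64-bit counter split across two 32-bit memory-mapped registers, as the help text above describes. This is an editorial illustration, not code from this series: the CNT_HIGH offset and the helper name are hypothetical, and the driver added later in this diff only reads the low word (NPS_MSU_TICK_LOW).

/*
 * Hypothetical helper: read a 64-bit counter exposed as two 32-bit
 * big-endian MMIO words, retrying when the high word rolls over
 * between the two reads.
 */
#include <linux/io.h>
#include <linux/types.h>

#define CNT_LOW		0xc8	/* mirrors NPS_MSU_TICK_LOW */
#define CNT_HIGH	0xcc	/* hypothetical high-word offset */

static u64 read_split_counter64(void __iomem *base)
{
	u32 hi, lo, hi2;

	do {
		hi = ioread32be(base + CNT_HIGH);
		lo = ioread32be(base + CNT_LOW);
		hi2 = ioread32be(base + CNT_HIGH);
	} while (hi != hi2);	/* high word changed: re-read */

	return ((u64)hi << 32) | lo;
}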
config CLKSRC_STM32
bool "Clocksource for STM32 SoCs" if !ARCH_STM32
depends on OF && ARM && (ARCH_STM32 || COMPILE_TEST)
select CLKSRC_MMIO
+config CLKSRC_MPS2
+ bool "Clocksource for MPS2 SoCs" if COMPILE_TEST
+ depends on GENERIC_SCHED_CLOCK
+ select CLKSRC_MMIO
+ select CLKSRC_OF
+
config ARM_ARCH_TIMER
bool
select CLKSRC_OF if OF
diff --git a/drivers/clocksource/Makefile b/drivers/clocksource/Makefile
index dc2b8997f6e6..473974f9590a 100644
--- a/drivers/clocksource/Makefile
+++ b/drivers/clocksource/Makefile
@@ -39,6 +39,7 @@ obj-$(CONFIG_CLKSRC_EFM32) += time-efm32.o
obj-$(CONFIG_CLKSRC_STM32) += timer-stm32.o
obj-$(CONFIG_CLKSRC_EXYNOS_MCT) += exynos_mct.o
obj-$(CONFIG_CLKSRC_LPC32XX) += time-lpc32xx.o
+obj-$(CONFIG_CLKSRC_MPS2) += mps2-timer.o
obj-$(CONFIG_CLKSRC_SAMSUNG_PWM) += samsung_pwm_timer.o
obj-$(CONFIG_FSL_FTM_TIMER) += fsl_ftm_timer.o
obj-$(CONFIG_VF_PIT_TIMER) += vf_pit_timer.o
@@ -46,6 +47,7 @@ obj-$(CONFIG_CLKSRC_QCOM) += qcom-timer.o
obj-$(CONFIG_MTK_TIMER) += mtk_timer.o
obj-$(CONFIG_CLKSRC_PISTACHIO) += time-pistachio.o
obj-$(CONFIG_CLKSRC_TI_32K) += timer-ti-32k.o
+obj-$(CONFIG_CLKSRC_NPS) += timer-nps.o
obj-$(CONFIG_ARM_ARCH_TIMER) += arm_arch_timer.o
obj-$(CONFIG_ARM_GLOBAL_TIMER) += arm_global_timer.o
diff --git a/drivers/clocksource/arm_arch_timer.c b/drivers/clocksource/arm_arch_timer.c
index 5152b3898155..4814446a0024 100644
--- a/drivers/clocksource/arm_arch_timer.c
+++ b/drivers/clocksource/arm_arch_timer.c
@@ -468,11 +468,11 @@ static struct cyclecounter cyclecounter = {
.mask = CLOCKSOURCE_MASK(56),
};
-static struct timecounter timecounter;
+static struct arch_timer_kvm_info arch_timer_kvm_info;
-struct timecounter *arch_timer_get_timecounter(void)
+struct arch_timer_kvm_info *arch_timer_get_kvm_info(void)
{
- return &timecounter;
+ return &arch_timer_kvm_info;
}
static void __init arch_counter_register(unsigned type)
@@ -500,7 +500,8 @@ static void __init arch_counter_register(unsigned type)
clocksource_register_hz(&clocksource_counter, arch_timer_rate);
cyclecounter.mult = clocksource_counter.mult;
cyclecounter.shift = clocksource_counter.shift;
- timecounter_init(&timecounter, &cyclecounter, start_count);
+ timecounter_init(&arch_timer_kvm_info.timecounter,
+ &cyclecounter, start_count);
/* 56 bits minimum, so we assume worst case rollover */
sched_clock_register(arch_timer_read_counter, 56, arch_timer_rate);
@@ -744,6 +745,8 @@ static void __init arch_timer_init(void)
arch_timer_register();
arch_timer_common_init();
+
+ arch_timer_kvm_info.virtual_irq = arch_timer_ppi[VIRT_PPI];
}
static void __init arch_timer_of_init(struct device_node *np)
diff --git a/drivers/clocksource/dw_apb_timer.c b/drivers/clocksource/dw_apb_timer.c
index 63345260244d..797505aa2ba4 100644
--- a/drivers/clocksource/dw_apb_timer.c
+++ b/drivers/clocksource/dw_apb_timer.c
@@ -264,6 +264,7 @@ dw_apb_clockevent_init(int cpu, const char *name, unsigned rating,
dw_ced->ced.set_state_shutdown = apbt_shutdown;
dw_ced->ced.set_state_periodic = apbt_set_periodic;
dw_ced->ced.set_state_oneshot = apbt_set_oneshot;
+ dw_ced->ced.set_state_oneshot_stopped = apbt_shutdown;
dw_ced->ced.tick_resume = apbt_resume;
dw_ced->ced.set_next_event = apbt_next_event;
dw_ced->ced.irq = dw_ced->timer.irq;
diff --git a/drivers/clocksource/mps2-timer.c b/drivers/clocksource/mps2-timer.c
new file mode 100644
index 000000000000..3d33a5e23dee
--- /dev/null
+++ b/drivers/clocksource/mps2-timer.c
@@ -0,0 +1,275 @@
+/*
+ * Copyright (C) 2015 ARM Limited
+ *
+ * Author: Vladimir Murzin <vladimir.murzin@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/clk.h>
+#include <linux/clockchips.h>
+#include <linux/clocksource.h>
+#include <linux/err.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/irq.h>
+#include <linux/of_address.h>
+#include <linux/of.h>
+#include <linux/of_irq.h>
+#include <linux/sched_clock.h>
+#include <linux/slab.h>
+
+#define TIMER_CTRL 0x0
+#define TIMER_CTRL_ENABLE BIT(0)
+#define TIMER_CTRL_IE BIT(3)
+
+#define TIMER_VALUE 0x4
+#define TIMER_RELOAD 0x8
+#define TIMER_INT 0xc
+
+struct clockevent_mps2 {
+ void __iomem *reg;
+ u32 clock_count_per_tick;
+ struct clock_event_device clkevt;
+};
+
+static void __iomem *sched_clock_base;
+
+static u64 notrace mps2_sched_read(void)
+{
+ return ~readl_relaxed(sched_clock_base + TIMER_VALUE);
+}
+
+static inline struct clockevent_mps2 *to_mps2_clkevt(struct clock_event_device *c)
+{
+ return container_of(c, struct clockevent_mps2, clkevt);
+}
+
+static void clockevent_mps2_writel(u32 val, struct clock_event_device *c, u32 offset)
+{
+ writel_relaxed(val, to_mps2_clkevt(c)->reg + offset);
+}
+
+static int mps2_timer_shutdown(struct clock_event_device *ce)
+{
+ clockevent_mps2_writel(0, ce, TIMER_RELOAD);
+ clockevent_mps2_writel(0, ce, TIMER_CTRL);
+
+ return 0;
+}
+
+static int mps2_timer_set_next_event(unsigned long next, struct clock_event_device *ce)
+{
+ clockevent_mps2_writel(next, ce, TIMER_VALUE);
+ clockevent_mps2_writel(TIMER_CTRL_IE | TIMER_CTRL_ENABLE, ce, TIMER_CTRL);
+
+ return 0;
+}
+
+static int mps2_timer_set_periodic(struct clock_event_device *ce)
+{
+ u32 clock_count_per_tick = to_mps2_clkevt(ce)->clock_count_per_tick;
+
+ clockevent_mps2_writel(clock_count_per_tick, ce, TIMER_RELOAD);
+ clockevent_mps2_writel(clock_count_per_tick, ce, TIMER_VALUE);
+ clockevent_mps2_writel(TIMER_CTRL_IE | TIMER_CTRL_ENABLE, ce, TIMER_CTRL);
+
+ return 0;
+}
+
+static irqreturn_t mps2_timer_interrupt(int irq, void *dev_id)
+{
+ struct clockevent_mps2 *ce = dev_id;
+ u32 status = readl_relaxed(ce->reg + TIMER_INT);
+
+ if (!status) {
+ pr_warn("spurious interrupt\n");
+ return IRQ_NONE;
+ }
+
+ writel_relaxed(1, ce->reg + TIMER_INT);
+
+ ce->clkevt.event_handler(&ce->clkevt);
+
+ return IRQ_HANDLED;
+}
+
+static int __init mps2_clockevent_init(struct device_node *np)
+{
+ void __iomem *base;
+ struct clk *clk = NULL;
+ struct clockevent_mps2 *ce;
+ u32 rate;
+ int irq, ret;
+ const char *name = "mps2-clkevt";
+
+ ret = of_property_read_u32(np, "clock-frequency", &rate);
+ if (ret) {
+ clk = of_clk_get(np, 0);
+ if (IS_ERR(clk)) {
+ ret = PTR_ERR(clk);
+ pr_err("failed to get clock for clockevent: %d\n", ret);
+ goto out;
+ }
+
+ ret = clk_prepare_enable(clk);
+ if (ret) {
+ pr_err("failed to enable clock for clockevent: %d\n", ret);
+ goto out_clk_put;
+ }
+
+ rate = clk_get_rate(clk);
+ }
+
+ base = of_iomap(np, 0);
+ if (!base) {
+ ret = -EADDRNOTAVAIL;
+ pr_err("failed to map register for clockevent: %d\n", ret);
+ goto out_clk_disable;
+ }
+
+ irq = irq_of_parse_and_map(np, 0);
+ if (!irq) {
+ ret = -ENOENT;
+ pr_err("failed to get irq for clockevent: %d\n", ret);
+ goto out_iounmap;
+ }
+
+ ce = kzalloc(sizeof(*ce), GFP_KERNEL);
+ if (!ce) {
+ ret = -ENOMEM;
+ goto out_iounmap;
+ }
+
+ ce->reg = base;
+ ce->clock_count_per_tick = DIV_ROUND_CLOSEST(rate, HZ);
+ ce->clkevt.irq = irq;
+ ce->clkevt.name = name;
+ ce->clkevt.rating = 200;
+ ce->clkevt.features = CLOCK_EVT_FEAT_PERIODIC | CLOCK_EVT_FEAT_ONESHOT;
+ ce->clkevt.cpumask = cpu_possible_mask;
+ ce->clkevt.set_state_shutdown = mps2_timer_shutdown;
+ ce->clkevt.set_state_periodic = mps2_timer_set_periodic;
+ ce->clkevt.set_state_oneshot = mps2_timer_shutdown;
+ ce->clkevt.set_next_event = mps2_timer_set_next_event;
+
+ /* Ensure timer is disabled */
+ writel_relaxed(0, base + TIMER_CTRL);
+
+ ret = request_irq(irq, mps2_timer_interrupt, IRQF_TIMER, name, ce);
+ if (ret) {
+ pr_err("failed to request irq for clockevent: %d\n", ret);
+ goto out_kfree;
+ }
+
+ clockevents_config_and_register(&ce->clkevt, rate, 0xf, 0xffffffff);
+
+ return 0;
+
+out_kfree:
+ kfree(ce);
+out_iounmap:
+ iounmap(base);
+out_clk_disable:
+ /* clk_{disable, unprepare, put}() can handle NULL as a parameter */
+ clk_disable_unprepare(clk);
+out_clk_put:
+ clk_put(clk);
+out:
+ return ret;
+}
+
+static int __init mps2_clocksource_init(struct device_node *np)
+{
+ void __iomem *base;
+ struct clk *clk = NULL;
+ u32 rate;
+ int ret;
+ const char *name = "mps2-clksrc";
+
+ ret = of_property_read_u32(np, "clock-frequency", &rate);
+ if (ret) {
+ clk = of_clk_get(np, 0);
+ if (IS_ERR(clk)) {
+ ret = PTR_ERR(clk);
+ pr_err("failed to get clock for clocksource: %d\n", ret);
+ goto out;
+ }
+
+ ret = clk_prepare_enable(clk);
+ if (ret) {
+ pr_err("failed to enable clock for clocksource: %d\n", ret);
+ goto out_clk_put;
+ }
+
+ rate = clk_get_rate(clk);
+ }
+
+ base = of_iomap(np, 0);
+ if (!base) {
+ ret = -EADDRNOTAVAIL;
+ pr_err("failed to map register for clocksource: %d\n", ret);
+ goto out_clk_disable;
+ }
+
+ /* Ensure timer is disabled */
+ writel_relaxed(0, base + TIMER_CTRL);
+
+ /* ... and set it up as free-running clocksource */
+ writel_relaxed(0xffffffff, base + TIMER_VALUE);
+ writel_relaxed(0xffffffff, base + TIMER_RELOAD);
+
+ writel_relaxed(TIMER_CTRL_ENABLE, base + TIMER_CTRL);
+
+ ret = clocksource_mmio_init(base + TIMER_VALUE, name,
+ rate, 200, 32,
+ clocksource_mmio_readl_down);
+ if (ret) {
+ pr_err("failed to init clocksource: %d\n", ret);
+ goto out_iounmap;
+ }
+
+ sched_clock_base = base;
+ sched_clock_register(mps2_sched_read, 32, rate);
+
+ return 0;
+
+out_iounmap:
+ iounmap(base);
+out_clk_disable:
+ /* clk_{disable, unprepare, put}() can handle NULL as a parameter */
+ clk_disable_unprepare(clk);
+out_clk_put:
+ clk_put(clk);
+out:
+ return ret;
+}
+
+static void __init mps2_timer_init(struct device_node *np)
+{
+ static int has_clocksource, has_clockevent;
+ int ret;
+
+ if (!has_clocksource) {
+ ret = mps2_clocksource_init(np);
+ if (!ret) {
+ has_clocksource = 1;
+ return;
+ }
+ }
+
+ if (!has_clockevent) {
+ ret = mps2_clockevent_init(np);
+ if (!ret) {
+ has_clockevent = 1;
+ return;
+ }
+ }
+}
+
+CLOCKSOURCE_OF_DECLARE(mps2_timer, "arm,mps2-timer", mps2_timer_init);
diff --git a/drivers/clocksource/mtk_timer.c b/drivers/clocksource/mtk_timer.c
index d67bc356488f..7e583f8ea5f4 100644
--- a/drivers/clocksource/mtk_timer.c
+++ b/drivers/clocksource/mtk_timer.c
@@ -152,7 +152,7 @@ static irqreturn_t mtk_timer_interrupt(int irq, void *dev_id)
}
static void
-mtk_timer_setup(struct mtk_clock_event_device *evt, u8 timer, u8 option)
+__init mtk_timer_setup(struct mtk_clock_event_device *evt, u8 timer, u8 option)
{
writel(TIMER_CTRL_CLEAR | TIMER_CTRL_DISABLE,
evt->gpt_base + TIMER_CTRL_REG(timer));
diff --git a/drivers/clocksource/tegra20_timer.c b/drivers/clocksource/tegra20_timer.c
index 38333aba3055..7b94ad2ab278 100644
--- a/drivers/clocksource/tegra20_timer.c
+++ b/drivers/clocksource/tegra20_timer.c
@@ -258,17 +258,3 @@ static void __init tegra20_init_rtc(struct device_node *np)
register_persistent_clock(NULL, tegra_read_persistent_clock64);
}
CLOCKSOURCE_OF_DECLARE(tegra20_rtc, "nvidia,tegra20-rtc", tegra20_init_rtc);
-
-#ifdef CONFIG_PM
-static u32 usec_config;
-
-void tegra_timer_suspend(void)
-{
- usec_config = timer_readl(TIMERUS_USEC_CFG);
-}
-
-void tegra_timer_resume(void)
-{
- timer_writel(usec_config, TIMERUS_USEC_CFG);
-}
-#endif
diff --git a/drivers/clocksource/timer-nps.c b/drivers/clocksource/timer-nps.c
new file mode 100644
index 000000000000..d46108920b2c
--- /dev/null
+++ b/drivers/clocksource/timer-nps.c
@@ -0,0 +1,98 @@
+/*
+ * Copyright (c) 2016, Mellanox Technologies. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/interrupt.h>
+#include <linux/clocksource.h>
+#include <linux/clockchips.h>
+#include <linux/clk.h>
+#include <linux/of.h>
+#include <linux/of_irq.h>
+#include <linux/cpu.h>
+#include <soc/nps/common.h>
+
+#define NPS_MSU_TICK_LOW 0xC8
+#define NPS_CLUSTER_OFFSET 8
+#define NPS_CLUSTER_NUM 16
+
+/* This array is per cluster of CPUs (each NPS400 cluster has 256 CPUs) */
+static void *nps_msu_reg_low_addr[NPS_CLUSTER_NUM] __read_mostly;
+
+static unsigned long nps_timer_rate;
+
+static cycle_t nps_clksrc_read(struct clocksource *clksrc)
+{
+ int cluster = raw_smp_processor_id() >> NPS_CLUSTER_OFFSET;
+
+ return (cycle_t)ioread32be(nps_msu_reg_low_addr[cluster]);
+}
+
+static void __init nps_setup_clocksource(struct device_node *node,
+ struct clk *clk)
+{
+ int ret, cluster;
+
+ for (cluster = 0; cluster < NPS_CLUSTER_NUM; cluster++)
+ nps_msu_reg_low_addr[cluster] =
+ nps_host_reg((cluster << NPS_CLUSTER_OFFSET),
+ NPS_MSU_BLKID, NPS_MSU_TICK_LOW);
+
+ ret = clk_prepare_enable(clk);
+ if (ret) {
+ pr_err("Couldn't enable parent clock\n");
+ return;
+ }
+
+ nps_timer_rate = clk_get_rate(clk);
+
+ ret = clocksource_mmio_init(nps_msu_reg_low_addr, "EZnps-tick",
+ nps_timer_rate, 301, 32, nps_clksrc_read);
+ if (ret) {
+ pr_err("Couldn't register clock source.\n");
+ clk_disable_unprepare(clk);
+ }
+}
+
+static void __init nps_timer_init(struct device_node *node)
+{
+ struct clk *clk;
+
+ clk = of_clk_get(node, 0);
+ if (IS_ERR(clk)) {
+ pr_err("Can't get timer clock.\n");
+ return;
+ }
+
+ nps_setup_clocksource(node, clk);
+}
+
+CLOCKSOURCE_OF_DECLARE(ezchip_nps400_clksrc, "ezchip,nps400-timer",
+ nps_timer_init);
diff --git a/drivers/cpufreq/Kconfig b/drivers/cpufreq/Kconfig
index a7f45853c103..b7445b6ae5a4 100644
--- a/drivers/cpufreq/Kconfig
+++ b/drivers/cpufreq/Kconfig
@@ -18,7 +18,11 @@ config CPU_FREQ
if CPU_FREQ
+config CPU_FREQ_GOV_ATTR_SET
+ bool
+
config CPU_FREQ_GOV_COMMON
+ select CPU_FREQ_GOV_ATTR_SET
select IRQ_WORK
bool
@@ -103,6 +107,17 @@ config CPU_FREQ_DEFAULT_GOV_CONSERVATIVE
Be aware that not all cpufreq drivers support the conservative
governor. If unsure have a look at the help section of the
driver. Fallback governor will be the performance governor.
+
+config CPU_FREQ_DEFAULT_GOV_SCHEDUTIL
+ bool "schedutil"
+ depends on SMP
+ select CPU_FREQ_GOV_SCHEDUTIL
+ select CPU_FREQ_GOV_PERFORMANCE
+ help
+ Use the 'schedutil' CPUFreq governor by default. If unsure,
+ have a look at the help section of that governor. The fallback
+ governor will be 'performance'.
+
endchoice
config CPU_FREQ_GOV_PERFORMANCE
@@ -184,6 +199,26 @@ config CPU_FREQ_GOV_CONSERVATIVE
If in doubt, say N.
+config CPU_FREQ_GOV_SCHEDUTIL
+ tristate "'schedutil' cpufreq policy governor"
+ depends on CPU_FREQ && SMP
+ select CPU_FREQ_GOV_ATTR_SET
+ select IRQ_WORK
+ help
+ This governor makes decisions based on the utilization data provided
+ by the scheduler. It sets the CPU frequency to be proportional to
+ the utilization/capacity ratio coming from the scheduler. If the
+ utilization is frequency-invariant, the new frequency is also
+ proportional to the maximum available frequency. If that is not the
+ case, it is proportional to the current frequency of the CPU. The
+ frequency tipping point is at utilization/capacity equal to 80% in
+ both cases.
+
+ To compile this driver as a module, choose M here: the module will
+ be called cpufreq_schedutil.
+
+ If in doubt, say N.
+
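As a worked illustration of the 80% tipping point mentioned in the help text above (an editorial sketch, not code from this series), the requested frequency can be written as 1.25 * ref_freq * util / max, which equals ref_freq exactly when util/max reaches 0.8; ref_freq is the maximum available frequency when utilization is frequency-invariant and the current frequency otherwise.

/*
 * Sketch of the proportional scaling described above; the names are
 * illustrative, not the governor's internal API.
 */
static unsigned long sketch_next_freq(unsigned long ref_freq,
				      unsigned long util, unsigned long max)
{
	/* 1.25 * ref_freq * util / max == ref_freq at util/max == 0.8 */
	return (ref_freq + (ref_freq >> 2)) * util / max;
}

For example, with ref_freq = 2000000 kHz and util/max = 0.4 this requests 1000000 kHz; at util/max = 0.8 it requests the full 2000000 kHz.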
comment "CPU frequency scaling drivers"
config CPUFREQ_DT
@@ -191,6 +226,7 @@ config CPUFREQ_DT
depends on HAVE_CLK && OF
# if CPU_THERMAL is on and THERMAL=m, CPUFREQ_DT cannot be =y:
depends on !CPU_THERMAL || THERMAL
+ select CPUFREQ_DT_PLATDEV
select PM_OPP
help
This adds a generic DT based cpufreq driver for frequency management.
@@ -199,6 +235,15 @@ config CPUFREQ_DT
If in doubt, say N.
+config CPUFREQ_DT_PLATDEV
+ bool
+ help
+ This adds a generic DT-based cpufreq platdev driver for frequency
+ management. It creates a 'cpufreq-dt' platform device on the
+ supported platforms.
+
+ If in doubt, say N.
+
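A minimal sketch of what "creates a 'cpufreq-dt' platform device" typically amounts to, assuming the usual compatible-string check; the compatible string and initcall name below are hypothetical placeholders, and the real file keeps a table of supported machines rather than a single match.

#include <linux/err.h>
#include <linux/init.h>
#include <linux/of.h>
#include <linux/platform_device.h>

static int __init sketch_cpufreq_dt_platdev_init(void)
{
	/* Hypothetical machine check; the real driver walks a table. */
	if (!of_machine_is_compatible("vendor,example-soc"))
		return -ENODEV;

	/* Register the device the generic cpufreq-dt driver binds to. */
	return PTR_ERR_OR_ZERO(platform_device_register_simple("cpufreq-dt",
							       -1, NULL, 0));
}
device_initcall(sketch_cpufreq_dt_platdev_init);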
if X86
source "drivers/cpufreq/Kconfig.x86"
endif
diff --git a/drivers/cpufreq/Kconfig.arm b/drivers/cpufreq/Kconfig.arm
index 14b1f9393b05..d89b8afe23b6 100644
--- a/drivers/cpufreq/Kconfig.arm
+++ b/drivers/cpufreq/Kconfig.arm
@@ -50,15 +50,6 @@ config ARM_HIGHBANK_CPUFREQ
If in doubt, say N.
-config ARM_HISI_ACPU_CPUFREQ
- tristate "Hisilicon ACPU CPUfreq driver"
- depends on ARCH_HISI && CPUFREQ_DT
- select PM_OPP
- help
- This enables the hisilicon ACPU CPUfreq driver.
-
- If in doubt, say N.
-
config ARM_IMX6Q_CPUFREQ
tristate "Freescale i.MX6 cpufreq support"
depends on ARCH_MXC
diff --git a/drivers/cpufreq/Kconfig.x86 b/drivers/cpufreq/Kconfig.x86
index c59bdcb83217..adbd1de1cea5 100644
--- a/drivers/cpufreq/Kconfig.x86
+++ b/drivers/cpufreq/Kconfig.x86
@@ -5,6 +5,7 @@
config X86_INTEL_PSTATE
bool "Intel P state control"
depends on X86
+ select ACPI_PROCESSOR if ACPI
help
This driver provides a P state for Intel core processors.
The driver implements an internal governor and will become
diff --git a/drivers/cpufreq/Makefile b/drivers/cpufreq/Makefile
index 9e63fb1b09f8..0a9b6a093646 100644
--- a/drivers/cpufreq/Makefile
+++ b/drivers/cpufreq/Makefile
@@ -11,8 +11,10 @@ obj-$(CONFIG_CPU_FREQ_GOV_USERSPACE) += cpufreq_userspace.o
obj-$(CONFIG_CPU_FREQ_GOV_ONDEMAND) += cpufreq_ondemand.o
obj-$(CONFIG_CPU_FREQ_GOV_CONSERVATIVE) += cpufreq_conservative.o
obj-$(CONFIG_CPU_FREQ_GOV_COMMON) += cpufreq_governor.o
+obj-$(CONFIG_CPU_FREQ_GOV_ATTR_SET) += cpufreq_governor_attr_set.o
obj-$(CONFIG_CPUFREQ_DT) += cpufreq-dt.o
+obj-$(CONFIG_CPUFREQ_DT_PLATDEV) += cpufreq-dt-platdev.o
##################################################################################
# x86 drivers.
@@ -53,7 +55,6 @@ obj-$(CONFIG_ARCH_DAVINCI) += davinci-cpufreq.o
obj-$(CONFIG_UX500_SOC_DB8500) += dbx500-cpufreq.o
obj-$(CONFIG_ARM_EXYNOS5440_CPUFREQ) += exynos5440-cpufreq.o
obj-$(CONFIG_ARM_HIGHBANK_CPUFREQ) += highbank-cpufreq.o
-obj-$(CONFIG_ARM_HISI_ACPU_CPUFREQ) += hisi-acpu-cpufreq.o
obj-$(CONFIG_ARM_IMX6Q_CPUFREQ) += imx6q-cpufreq.o
obj-$(CONFIG_ARM_INTEGRATOR) += integrator-cpufreq.o
obj-$(CONFIG_ARM_KIRKWOOD_CPUFREQ) += kirkwood-cpufreq.o
@@ -78,6 +79,7 @@ obj-$(CONFIG_ARM_TEGRA20_CPUFREQ) += tegra20-cpufreq.o
obj-$(CONFIG_ARM_TEGRA124_CPUFREQ) += tegra124-cpufreq.o
obj-$(CONFIG_ARM_VEXPRESS_SPC_CPUFREQ) += vexpress-spc-cpufreq.o
obj-$(CONFIG_ACPI_CPPC_CPUFREQ) += cppc_cpufreq.o
+obj-$(CONFIG_MACH_MVEBU_V7) += mvebu-cpufreq.o
##################################################################################
@@ -100,7 +102,7 @@ obj-$(CONFIG_CRIS_MACH_ARTPEC3) += cris-artpec3-cpufreq.o
obj-$(CONFIG_ETRAXFS) += cris-etraxfs-cpufreq.o
obj-$(CONFIG_IA64_ACPI_CPUFREQ) += ia64-acpi-cpufreq.o
obj-$(CONFIG_LOONGSON2_CPUFREQ) += loongson2_cpufreq.o
-obj-$(CONFIG_LOONGSON1_CPUFREQ) += ls1x-cpufreq.o
+obj-$(CONFIG_LOONGSON1_CPUFREQ) += loongson1-cpufreq.o
obj-$(CONFIG_SH_CPU_FREQ) += sh-cpufreq.o
obj-$(CONFIG_SPARC_US2E_CPUFREQ) += sparc-us2e-cpufreq.o
obj-$(CONFIG_SPARC_US3_CPUFREQ) += sparc-us3-cpufreq.o
diff --git a/drivers/cpufreq/acpi-cpufreq.c b/drivers/cpufreq/acpi-cpufreq.c
index fb5712141040..32a15052f363 100644
--- a/drivers/cpufreq/acpi-cpufreq.c
+++ b/drivers/cpufreq/acpi-cpufreq.c
@@ -25,6 +25,8 @@
* ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
@@ -50,8 +52,6 @@ MODULE_AUTHOR("Paul Diefenbaugh, Dominik Brodowski");
MODULE_DESCRIPTION("ACPI Processor P-States Driver");
MODULE_LICENSE("GPL");
-#define PFX "acpi-cpufreq: "
-
enum {
UNDEFINED_CAPABLE = 0,
SYSTEM_INTEL_MSR_CAPABLE,
@@ -65,7 +65,6 @@ enum {
#define MSR_K7_HWCR_CPB_DIS (1ULL << 25)
struct acpi_cpufreq_data {
- struct cpufreq_frequency_table *freq_table;
unsigned int resume;
unsigned int cpu_feature;
unsigned int acpi_perf_cpu;
@@ -200,8 +199,9 @@ static int check_amd_hwpstate_cpu(unsigned int cpuid)
return cpu_has(cpu, X86_FEATURE_HW_PSTATE);
}
-static unsigned extract_io(u32 value, struct acpi_cpufreq_data *data)
+static unsigned extract_io(struct cpufreq_policy *policy, u32 value)
{
+ struct acpi_cpufreq_data *data = policy->driver_data;
struct acpi_processor_performance *perf;
int i;
@@ -209,13 +209,14 @@ static unsigned extract_io(u32 value, struct acpi_cpufreq_data *data)
for (i = 0; i < perf->state_count; i++) {
if (value == perf->states[i].status)
- return data->freq_table[i].frequency;
+ return policy->freq_table[i].frequency;
}
return 0;
}
-static unsigned extract_msr(u32 msr, struct acpi_cpufreq_data *data)
+static unsigned extract_msr(struct cpufreq_policy *policy, u32 msr)
{
+ struct acpi_cpufreq_data *data = policy->driver_data;
struct cpufreq_frequency_table *pos;
struct acpi_processor_performance *perf;
@@ -226,20 +227,22 @@ static unsigned extract_msr(u32 msr, struct acpi_cpufreq_data *data)
perf = to_perf_data(data);
- cpufreq_for_each_entry(pos, data->freq_table)
+ cpufreq_for_each_entry(pos, policy->freq_table)
if (msr == perf->states[pos->driver_data].status)
return pos->frequency;
- return data->freq_table[0].frequency;
+ return policy->freq_table[0].frequency;
}
-static unsigned extract_freq(u32 val, struct acpi_cpufreq_data *data)
+static unsigned extract_freq(struct cpufreq_policy *policy, u32 val)
{
+ struct acpi_cpufreq_data *data = policy->driver_data;
+
switch (data->cpu_feature) {
case SYSTEM_INTEL_MSR_CAPABLE:
case SYSTEM_AMD_MSR_CAPABLE:
- return extract_msr(val, data);
+ return extract_msr(policy, val);
case SYSTEM_IO_CAPABLE:
- return extract_io(val, data);
+ return extract_io(policy, val);
default:
return 0;
}
@@ -374,11 +377,11 @@ static unsigned int get_cur_freq_on_cpu(unsigned int cpu)
return 0;
data = policy->driver_data;
- if (unlikely(!data || !data->freq_table))
+ if (unlikely(!data || !policy->freq_table))
return 0;
- cached_freq = data->freq_table[to_perf_data(data)->state].frequency;
- freq = extract_freq(get_cur_val(cpumask_of(cpu), data), data);
+ cached_freq = policy->freq_table[to_perf_data(data)->state].frequency;
+ freq = extract_freq(policy, get_cur_val(cpumask_of(cpu), data));
if (freq != cached_freq) {
/*
* The dreaded BIOS frequency change behind our back.
@@ -392,14 +395,15 @@ static unsigned int get_cur_freq_on_cpu(unsigned int cpu)
return freq;
}
-static unsigned int check_freqs(const struct cpumask *mask, unsigned int freq,
- struct acpi_cpufreq_data *data)
+static unsigned int check_freqs(struct cpufreq_policy *policy,
+ const struct cpumask *mask, unsigned int freq)
{
+ struct acpi_cpufreq_data *data = policy->driver_data;
unsigned int cur_freq;
unsigned int i;
for (i = 0; i < 100; i++) {
- cur_freq = extract_freq(get_cur_val(mask, data), data);
+ cur_freq = extract_freq(policy, get_cur_val(mask, data));
if (cur_freq == freq)
return 1;
udelay(10);
@@ -416,12 +420,12 @@ static int acpi_cpufreq_target(struct cpufreq_policy *policy,
unsigned int next_perf_state = 0; /* Index into perf table */
int result = 0;
- if (unlikely(data == NULL || data->freq_table == NULL)) {
+ if (unlikely(!data)) {
return -ENODEV;
}
perf = to_perf_data(data);
- next_perf_state = data->freq_table[index].driver_data;
+ next_perf_state = policy->freq_table[index].driver_data;
if (perf->state == next_perf_state) {
if (unlikely(data->resume)) {
pr_debug("Called after resume, resetting to P%d\n",
@@ -444,8 +448,8 @@ static int acpi_cpufreq_target(struct cpufreq_policy *policy,
drv_write(data, mask, perf->states[next_perf_state].control);
if (acpi_pstate_strict) {
- if (!check_freqs(mask, data->freq_table[index].frequency,
- data)) {
+ if (!check_freqs(policy, mask,
+ policy->freq_table[index].frequency)) {
pr_debug("acpi_cpufreq_target failed (%d)\n",
policy->cpu);
result = -EAGAIN;
@@ -458,6 +462,43 @@ static int acpi_cpufreq_target(struct cpufreq_policy *policy,
return result;
}
+unsigned int acpi_cpufreq_fast_switch(struct cpufreq_policy *policy,
+ unsigned int target_freq)
+{
+ struct acpi_cpufreq_data *data = policy->driver_data;
+ struct acpi_processor_performance *perf;
+ struct cpufreq_frequency_table *entry;
+ unsigned int next_perf_state, next_freq, freq;
+
+ /*
+ * Find the closest frequency above target_freq.
+ *
+ * The table is sorted in the reverse order with respect to the
+ * frequency and all of the entries are valid (see the initialization).
+ */
+ entry = policy->freq_table;
+ do {
+ entry++;
+ freq = entry->frequency;
+ } while (freq >= target_freq && freq != CPUFREQ_TABLE_END);
+ entry--;
+ next_freq = entry->frequency;
+ next_perf_state = entry->driver_data;
+
+ perf = to_perf_data(data);
+ if (perf->state == next_perf_state) {
+ if (unlikely(data->resume))
+ data->resume = 0;
+ else
+ return next_freq;
+ }
+
+ data->cpu_freq_write(&perf->control_register,
+ perf->states[next_perf_state].control);
+ perf->state = next_perf_state;
+ return next_freq;
+}
+
static unsigned long
acpi_cpufreq_guess_freq(struct acpi_cpufreq_data *data, unsigned int cpu)
{
@@ -611,10 +652,7 @@ static int acpi_cpufreq_blacklist(struct cpuinfo_x86 *c)
if ((c->x86 == 15) &&
(c->x86_model == 6) &&
(c->x86_mask == 8)) {
- printk(KERN_INFO "acpi-cpufreq: Intel(R) "
- "Xeon(R) 7100 Errata AL30, processors may "
- "lock up on frequency changes: disabling "
- "acpi-cpufreq.\n");
+ pr_info("Intel(R) Xeon(R) 7100 Errata AL30, processors may lock up on frequency changes: disabling acpi-cpufreq\n");
return -ENODEV;
}
}
@@ -631,6 +669,7 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
unsigned int result = 0;
struct cpuinfo_x86 *c = &cpu_data(policy->cpu);
struct acpi_processor_performance *perf;
+ struct cpufreq_frequency_table *freq_table;
#ifdef CONFIG_SMP
static int blacklisted;
#endif
@@ -690,7 +729,7 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
cpumask_copy(data->freqdomain_cpus,
topology_sibling_cpumask(cpu));
policy->shared_type = CPUFREQ_SHARED_TYPE_HW;
- pr_info_once(PFX "overriding BIOS provided _PSD data\n");
+ pr_info_once("overriding BIOS provided _PSD data\n");
}
#endif
@@ -742,9 +781,9 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
goto err_unreg;
}
- data->freq_table = kzalloc(sizeof(*data->freq_table) *
+ freq_table = kzalloc(sizeof(*freq_table) *
(perf->state_count+1), GFP_KERNEL);
- if (!data->freq_table) {
+ if (!freq_table) {
result = -ENOMEM;
goto err_unreg;
}
@@ -762,30 +801,29 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
if (perf->control_register.space_id == ACPI_ADR_SPACE_FIXED_HARDWARE &&
policy->cpuinfo.transition_latency > 20 * 1000) {
policy->cpuinfo.transition_latency = 20 * 1000;
- printk_once(KERN_INFO
- "P-state transition latency capped at 20 uS\n");
+ pr_info_once("P-state transition latency capped at 20 uS\n");
}
/* table init */
for (i = 0; i < perf->state_count; i++) {
if (i > 0 && perf->states[i].core_frequency >=
- data->freq_table[valid_states-1].frequency / 1000)
+ freq_table[valid_states-1].frequency / 1000)
continue;
- data->freq_table[valid_states].driver_data = i;
- data->freq_table[valid_states].frequency =
+ freq_table[valid_states].driver_data = i;
+ freq_table[valid_states].frequency =
perf->states[i].core_frequency * 1000;
valid_states++;
}
- data->freq_table[valid_states].frequency = CPUFREQ_TABLE_END;
+ freq_table[valid_states].frequency = CPUFREQ_TABLE_END;
perf->state = 0;
- result = cpufreq_table_validate_and_show(policy, data->freq_table);
+ result = cpufreq_table_validate_and_show(policy, freq_table);
if (result)
goto err_freqfree;
if (perf->states[0].core_frequency * 1000 != policy->cpuinfo.max_freq)
- printk(KERN_WARNING FW_WARN "P-state 0 is not max freq\n");
+ pr_warn(FW_WARN "P-state 0 is not max freq\n");
switch (perf->control_register.space_id) {
case ACPI_ADR_SPACE_SYSTEM_IO:
@@ -821,10 +859,13 @@ static int acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
*/
data->resume = 1;
+ policy->fast_switch_possible = !acpi_pstate_strict &&
+ !(policy_is_shared(policy) && policy->shared_type != CPUFREQ_SHARED_TYPE_ANY);
+
return result;
err_freqfree:
- kfree(data->freq_table);
+ kfree(freq_table);
err_unreg:
acpi_processor_unregister_performance(cpu);
err_free_mask:
@@ -842,13 +883,12 @@ static int acpi_cpufreq_cpu_exit(struct cpufreq_policy *policy)
pr_debug("acpi_cpufreq_cpu_exit\n");
- if (data) {
- policy->driver_data = NULL;
- acpi_processor_unregister_performance(data->acpi_perf_cpu);
- free_cpumask_var(data->freqdomain_cpus);
- kfree(data->freq_table);
- kfree(data);
- }
+ policy->fast_switch_possible = false;
+ policy->driver_data = NULL;
+ acpi_processor_unregister_performance(data->acpi_perf_cpu);
+ free_cpumask_var(data->freqdomain_cpus);
+ kfree(policy->freq_table);
+ kfree(data);
return 0;
}
@@ -876,6 +916,7 @@ static struct freq_attr *acpi_cpufreq_attr[] = {
static struct cpufreq_driver acpi_cpufreq_driver = {
.verify = cpufreq_generic_frequency_table_verify,
.target_index = acpi_cpufreq_target,
+ .fast_switch = acpi_cpufreq_fast_switch,
.bios_limit = acpi_processor_get_bios_limit,
.init = acpi_cpufreq_cpu_init,
.exit = acpi_cpufreq_cpu_exit,
diff --git a/drivers/cpufreq/arm_big_little.c b/drivers/cpufreq/arm_big_little.c
index c251247ae661..418042201e6d 100644
--- a/drivers/cpufreq/arm_big_little.c
+++ b/drivers/cpufreq/arm_big_little.c
@@ -298,7 +298,8 @@ static int merge_cluster_tables(void)
return 0;
}
-static void _put_cluster_clk_and_freq_table(struct device *cpu_dev)
+static void _put_cluster_clk_and_freq_table(struct device *cpu_dev,
+ const struct cpumask *cpumask)
{
u32 cluster = raw_cpu_to_cluster(cpu_dev->id);
@@ -308,11 +309,12 @@ static void _put_cluster_clk_and_freq_table(struct device *cpu_dev)
clk_put(clk[cluster]);
dev_pm_opp_free_cpufreq_table(cpu_dev, &freq_table[cluster]);
if (arm_bL_ops->free_opp_table)
- arm_bL_ops->free_opp_table(cpu_dev);
+ arm_bL_ops->free_opp_table(cpumask);
dev_dbg(cpu_dev, "%s: cluster: %d\n", __func__, cluster);
}
-static void put_cluster_clk_and_freq_table(struct device *cpu_dev)
+static void put_cluster_clk_and_freq_table(struct device *cpu_dev,
+ const struct cpumask *cpumask)
{
u32 cluster = cpu_to_cluster(cpu_dev->id);
int i;
@@ -321,7 +323,7 @@ static void put_cluster_clk_and_freq_table(struct device *cpu_dev)
return;
if (cluster < MAX_CLUSTERS)
- return _put_cluster_clk_and_freq_table(cpu_dev);
+ return _put_cluster_clk_and_freq_table(cpu_dev, cpumask);
for_each_present_cpu(i) {
struct device *cdev = get_cpu_device(i);
@@ -330,14 +332,15 @@ static void put_cluster_clk_and_freq_table(struct device *cpu_dev)
return;
}
- _put_cluster_clk_and_freq_table(cdev);
+ _put_cluster_clk_and_freq_table(cdev, cpumask);
}
/* free virtual table */
kfree(freq_table[cluster]);
}
-static int _get_cluster_clk_and_freq_table(struct device *cpu_dev)
+static int _get_cluster_clk_and_freq_table(struct device *cpu_dev,
+ const struct cpumask *cpumask)
{
u32 cluster = raw_cpu_to_cluster(cpu_dev->id);
int ret;
@@ -345,7 +348,7 @@ static int _get_cluster_clk_and_freq_table(struct device *cpu_dev)
if (freq_table[cluster])
return 0;
- ret = arm_bL_ops->init_opp_table(cpu_dev);
+ ret = arm_bL_ops->init_opp_table(cpumask);
if (ret) {
dev_err(cpu_dev, "%s: init_opp_table failed, cpu: %d, err: %d\n",
__func__, cpu_dev->id, ret);
@@ -374,14 +377,15 @@ static int _get_cluster_clk_and_freq_table(struct device *cpu_dev)
free_opp_table:
if (arm_bL_ops->free_opp_table)
- arm_bL_ops->free_opp_table(cpu_dev);
+ arm_bL_ops->free_opp_table(cpumask);
out:
dev_err(cpu_dev, "%s: Failed to get data for cluster: %d\n", __func__,
cluster);
return ret;
}
-static int get_cluster_clk_and_freq_table(struct device *cpu_dev)
+static int get_cluster_clk_and_freq_table(struct device *cpu_dev,
+ const struct cpumask *cpumask)
{
u32 cluster = cpu_to_cluster(cpu_dev->id);
int i, ret;
@@ -390,7 +394,7 @@ static int get_cluster_clk_and_freq_table(struct device *cpu_dev)
return 0;
if (cluster < MAX_CLUSTERS) {
- ret = _get_cluster_clk_and_freq_table(cpu_dev);
+ ret = _get_cluster_clk_and_freq_table(cpu_dev, cpumask);
if (ret)
atomic_dec(&cluster_usage[cluster]);
return ret;
@@ -407,7 +411,7 @@ static int get_cluster_clk_and_freq_table(struct device *cpu_dev)
return -ENODEV;
}
- ret = _get_cluster_clk_and_freq_table(cdev);
+ ret = _get_cluster_clk_and_freq_table(cdev, cpumask);
if (ret)
goto put_clusters;
}
@@ -433,7 +437,7 @@ put_clusters:
return -ENODEV;
}
- _put_cluster_clk_and_freq_table(cdev);
+ _put_cluster_clk_and_freq_table(cdev, cpumask);
}
atomic_dec(&cluster_usage[cluster]);
@@ -455,18 +459,6 @@ static int bL_cpufreq_init(struct cpufreq_policy *policy)
return -ENODEV;
}
- ret = get_cluster_clk_and_freq_table(cpu_dev);
- if (ret)
- return ret;
-
- ret = cpufreq_table_validate_and_show(policy, freq_table[cur_cluster]);
- if (ret) {
- dev_err(cpu_dev, "CPU %d, cluster: %d invalid freq table\n",
- policy->cpu, cur_cluster);
- put_cluster_clk_and_freq_table(cpu_dev);
- return ret;
- }
-
if (cur_cluster < MAX_CLUSTERS) {
int cpu;
@@ -479,6 +471,18 @@ static int bL_cpufreq_init(struct cpufreq_policy *policy)
per_cpu(physical_cluster, policy->cpu) = A15_CLUSTER;
}
+ ret = get_cluster_clk_and_freq_table(cpu_dev, policy->cpus);
+ if (ret)
+ return ret;
+
+ ret = cpufreq_table_validate_and_show(policy, freq_table[cur_cluster]);
+ if (ret) {
+ dev_err(cpu_dev, "CPU %d, cluster: %d invalid freq table\n",
+ policy->cpu, cur_cluster);
+ put_cluster_clk_and_freq_table(cpu_dev, policy->cpus);
+ return ret;
+ }
+
if (arm_bL_ops->get_transition_latency)
policy->cpuinfo.transition_latency =
arm_bL_ops->get_transition_latency(cpu_dev);
@@ -509,7 +513,7 @@ static int bL_cpufreq_exit(struct cpufreq_policy *policy)
return -ENODEV;
}
- put_cluster_clk_and_freq_table(cpu_dev);
+ put_cluster_clk_and_freq_table(cpu_dev, policy->related_cpus);
dev_dbg(cpu_dev, "%s: Exited, cpu: %d\n", __func__, policy->cpu);
return 0;
diff --git a/drivers/cpufreq/arm_big_little.h b/drivers/cpufreq/arm_big_little.h
index b88889d9387e..184d7c3a112a 100644
--- a/drivers/cpufreq/arm_big_little.h
+++ b/drivers/cpufreq/arm_big_little.h
@@ -30,11 +30,11 @@ struct cpufreq_arm_bL_ops {
* This must set opp table for cpu_dev in a similar way as done by
* dev_pm_opp_of_add_table().
*/
- int (*init_opp_table)(struct device *cpu_dev);
+ int (*init_opp_table)(const struct cpumask *cpumask);
/* Optional */
int (*get_transition_latency)(struct device *cpu_dev);
- void (*free_opp_table)(struct device *cpu_dev);
+ void (*free_opp_table)(const struct cpumask *cpumask);
};
int bL_cpufreq_register(struct cpufreq_arm_bL_ops *ops);
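For comparison, a platform backend that cannot use the generic OPP helpers directly would now supply cpumask-based callbacks of its own. The sketch below is illustrative only: the my_soc_* names and the fixed OPP values are assumptions, and it relies on dev_pm_opp_add()/dev_pm_opp_cpumask_remove_table() being available as in this kernel series.

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/pm_opp.h>
#include "arm_big_little.h"

/* Hypothetical backend: populate OPPs for every CPU in the mask. */
static int my_soc_init_opp_table(const struct cpumask *cpumask)
{
	struct device *cpu_dev;
	int cpu, ret;

	for_each_cpu(cpu, cpumask) {
		cpu_dev = get_cpu_device(cpu);
		if (!cpu_dev)
			return -ENODEV;

		/* Made-up OPPs (Hz, uV); a real driver reads firmware or DT. */
		ret = dev_pm_opp_add(cpu_dev, 500000000, 900000);
		if (!ret)
			ret = dev_pm_opp_add(cpu_dev, 1000000000, 1100000);
		if (ret)
			return ret;
	}

	return 0;
}

static void my_soc_free_opp_table(const struct cpumask *cpumask)
{
	dev_pm_opp_cpumask_remove_table(cpumask);
}

static struct cpufreq_arm_bL_ops my_soc_bL_ops = {
	.name		= "my-soc-bl",		/* hypothetical */
	.init_opp_table	= my_soc_init_opp_table,
	.free_opp_table	= my_soc_free_opp_table,
};

Registration would then go through bL_cpufreq_register(&my_soc_bL_ops) from the platform driver's probe, exactly as the existing backends do.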
diff --git a/drivers/cpufreq/arm_big_little_dt.c b/drivers/cpufreq/arm_big_little_dt.c
index 16ddeefe9443..39b3f51d9a30 100644
--- a/drivers/cpufreq/arm_big_little_dt.c
+++ b/drivers/cpufreq/arm_big_little_dt.c
@@ -43,23 +43,6 @@ static struct device_node *get_cpu_node_with_valid_op(int cpu)
return np;
}
-static int dt_init_opp_table(struct device *cpu_dev)
-{
- struct device_node *np;
- int ret;
-
- np = of_node_get(cpu_dev->of_node);
- if (!np) {
- pr_err("failed to find cpu%d node\n", cpu_dev->id);
- return -ENOENT;
- }
-
- ret = dev_pm_opp_of_add_table(cpu_dev);
- of_node_put(np);
-
- return ret;
-}
-
static int dt_get_transition_latency(struct device *cpu_dev)
{
struct device_node *np;
@@ -81,8 +64,8 @@ static int dt_get_transition_latency(struct device *cpu_dev)
static struct cpufreq_arm_bL_ops dt_bL_ops = {
.name = "dt-bl",
.get_transition_latency = dt_get_transition_latency,
- .init_opp_table = dt_init_opp_table,
- .free_opp_table = dev_pm_opp_of_remove_table,
+ .init_opp_table = dev_pm_opp_of_cpumask_add_table,
+ .free_opp_table = dev_pm_opp_of_cpumask_remove_table,
};
static int generic_bL_probe(struct platform_device *pdev)
diff --git a/drivers/cpufreq/cppc_cpufreq.c b/drivers/cpufreq/cppc_cpufreq.c
index 7c0bdfb1a2ca..8882b8e2ecd0 100644
--- a/drivers/cpufreq/cppc_cpufreq.c
+++ b/drivers/cpufreq/cppc_cpufreq.c
@@ -173,4 +173,25 @@ out:
return -ENODEV;
}
+static void __exit cppc_cpufreq_exit(void)
+{
+ struct cpudata *cpu;
+ int i;
+
+ cpufreq_unregister_driver(&cppc_cpufreq_driver);
+
+ for_each_possible_cpu(i) {
+ cpu = all_cpu_data[i];
+ free_cpumask_var(cpu->shared_cpu_map);
+ kfree(cpu);
+ }
+
+ kfree(all_cpu_data);
+}
+
+module_exit(cppc_cpufreq_exit);
+MODULE_AUTHOR("Ashwin Chaugule");
+MODULE_DESCRIPTION("CPUFreq driver based on the ACPI CPPC v5.0+ spec");
+MODULE_LICENSE("GPL");
+
late_initcall(cppc_cpufreq_init);
diff --git a/drivers/cpufreq/cpufreq-dt-platdev.c b/drivers/cpufreq/cpufreq-dt-platdev.c
new file mode 100644
index 000000000000..3646b143bbf5
--- /dev/null
+++ b/drivers/cpufreq/cpufreq-dt-platdev.c
@@ -0,0 +1,94 @@
+/*
+ * Copyright (C) 2016 Linaro.
+ * Viresh Kumar <viresh.kumar@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/err.h>
+#include <linux/of.h>
+#include <linux/platform_device.h>
+
+static const struct of_device_id machines[] __initconst = {
+ { .compatible = "allwinner,sun4i-a10", },
+ { .compatible = "allwinner,sun5i-a10s", },
+ { .compatible = "allwinner,sun5i-a13", },
+ { .compatible = "allwinner,sun5i-r8", },
+ { .compatible = "allwinner,sun6i-a31", },
+ { .compatible = "allwinner,sun6i-a31s", },
+ { .compatible = "allwinner,sun7i-a20", },
+ { .compatible = "allwinner,sun8i-a23", },
+ { .compatible = "allwinner,sun8i-a33", },
+ { .compatible = "allwinner,sun8i-a83t", },
+ { .compatible = "allwinner,sun8i-h3", },
+
+ { .compatible = "hisilicon,hi6220", },
+
+ { .compatible = "fsl,imx27", },
+ { .compatible = "fsl,imx51", },
+ { .compatible = "fsl,imx53", },
+ { .compatible = "fsl,imx7d", },
+
+ { .compatible = "marvell,berlin", },
+
+ { .compatible = "samsung,exynos3250", },
+ { .compatible = "samsung,exynos4210", },
+ { .compatible = "samsung,exynos4212", },
+ { .compatible = "samsung,exynos4412", },
+ { .compatible = "samsung,exynos5250", },
+#ifndef CONFIG_BL_SWITCHER
+ { .compatible = "samsung,exynos5420", },
+ { .compatible = "samsung,exynos5800", },
+#endif
+
+ { .compatible = "renesas,emev2", },
+ { .compatible = "renesas,r7s72100", },
+ { .compatible = "renesas,r8a73a4", },
+ { .compatible = "renesas,r8a7740", },
+ { .compatible = "renesas,r8a7778", },
+ { .compatible = "renesas,r8a7779", },
+ { .compatible = "renesas,r8a7790", },
+ { .compatible = "renesas,r8a7791", },
+ { .compatible = "renesas,r8a7793", },
+ { .compatible = "renesas,r8a7794", },
+ { .compatible = "renesas,sh73a0", },
+
+ { .compatible = "rockchip,rk2928", },
+ { .compatible = "rockchip,rk3036", },
+ { .compatible = "rockchip,rk3066a", },
+ { .compatible = "rockchip,rk3066b", },
+ { .compatible = "rockchip,rk3188", },
+ { .compatible = "rockchip,rk3228", },
+ { .compatible = "rockchip,rk3288", },
+ { .compatible = "rockchip,rk3366", },
+ { .compatible = "rockchip,rk3368", },
+ { .compatible = "rockchip,rk3399", },
+
+ { .compatible = "sigma,tango4" },
+
+ { .compatible = "ti,omap2", },
+ { .compatible = "ti,omap3", },
+ { .compatible = "ti,omap4", },
+ { .compatible = "ti,omap5", },
+
+ { .compatible = "xlnx,zynq-7000", },
+ { }
+};
+
+static int __init cpufreq_dt_platdev_init(void)
+{
+ struct device_node *np = of_find_node_by_path("/");
+
+ if (!np)
+ return -ENODEV;
+
+ if (!of_match_node(machines, np))
+ return -ENODEV;
+
+ of_node_put(of_root);
+
+ return PTR_ERR_OR_ZERO(platform_device_register_simple("cpufreq-dt", -1,
+ NULL, 0));
+}
+device_initcall(cpufreq_dt_platdev_init);
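Opting an additional board into the shared "cpufreq-dt" platform device only takes a new root-compatible entry in the machines[] table above; the compatible string below is purely hypothetical.

static const struct of_device_id machines[] __initconst = {
	/* ...existing entries... */
	{ .compatible = "vendor,my-new-soc", },	/* hypothetical board */
	{ /* sentinel */ }
};

of_match_node() compares the table against the root node once, at device_initcall() time, so the entry's position in the list does not matter.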
diff --git a/drivers/cpufreq/cpufreq-dt.c b/drivers/cpufreq/cpufreq-dt.c
index 5f8dbe640a20..3957de801ae8 100644
--- a/drivers/cpufreq/cpufreq-dt.c
+++ b/drivers/cpufreq/cpufreq-dt.c
@@ -15,7 +15,6 @@
#include <linux/cpu.h>
#include <linux/cpu_cooling.h>
#include <linux/cpufreq.h>
-#include <linux/cpufreq-dt.h>
#include <linux/cpumask.h>
#include <linux/err.h>
#include <linux/module.h>
@@ -147,7 +146,7 @@ static int cpufreq_init(struct cpufreq_policy *policy)
struct clk *cpu_clk;
struct dev_pm_opp *suspend_opp;
unsigned int transition_latency;
- bool opp_v1 = false;
+ bool fallback = false;
const char *name;
int ret;
@@ -167,14 +166,16 @@ static int cpufreq_init(struct cpufreq_policy *policy)
/* Get OPP-sharing information from "operating-points-v2" bindings */
ret = dev_pm_opp_of_get_sharing_cpus(cpu_dev, policy->cpus);
if (ret) {
+ if (ret != -ENOENT)
+ goto out_put_clk;
+
/*
* operating-points-v2 not supported, fallback to old method of
- * finding shared-OPPs for backward compatibility.
+ * finding shared-OPPs for backward compatibility if the
+ * platform hasn't set sharing CPUs.
*/
- if (ret == -ENOENT)
- opp_v1 = true;
- else
- goto out_put_clk;
+ if (dev_pm_opp_get_sharing_cpus(cpu_dev, policy->cpus))
+ fallback = true;
}
/*
@@ -214,11 +215,8 @@ static int cpufreq_init(struct cpufreq_policy *policy)
goto out_free_opp;
}
- if (opp_v1) {
- struct cpufreq_dt_platform_data *pd = cpufreq_get_driver_data();
-
- if (!pd || !pd->independent_clocks)
- cpumask_setall(policy->cpus);
+ if (fallback) {
+ cpumask_setall(policy->cpus);
/*
* OPP tables are initialized only for policy->cpu, do it for
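With the cpufreq-dt platform data gone, a platform whose CPUs share a clock but which has no operating-points-v2 table is expected to declare the sharing itself, so that dev_pm_opp_get_sharing_cpus() above succeeds and the cpumask_setall() fallback is never needed. A minimal sketch of that, assuming dev_pm_opp_set_sharing_cpus() from the same OPP series; the function name my_platform_declare_shared_opps() is invented.

#include <linux/cpu.h>
#include <linux/cpumask.h>
#include <linux/pm_opp.h>

static int my_platform_declare_shared_opps(void)
{
	struct device *cpu_dev = get_cpu_device(0);

	if (!cpu_dev)
		return -ENODEV;

	/* Mark every possible CPU as sharing CPU0's OPP table. */
	return dev_pm_opp_set_sharing_cpus(cpu_dev, cpu_possible_mask);
}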
diff --git a/drivers/cpufreq/cpufreq-nforce2.c b/drivers/cpufreq/cpufreq-nforce2.c
index db69eeb501a7..5503d491b016 100644
--- a/drivers/cpufreq/cpufreq-nforce2.c
+++ b/drivers/cpufreq/cpufreq-nforce2.c
@@ -7,6 +7,8 @@
* BIG FAT DISCLAIMER: Work in progress code. Possibly *dangerous*
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
@@ -56,8 +58,6 @@ MODULE_PARM_DESC(fid, "CPU multiplier to use (11.5 = 115)");
MODULE_PARM_DESC(min_fsb,
"Minimum FSB to use, if not defined: current FSB - 50");
-#define PFX "cpufreq-nforce2: "
-
/**
* nforce2_calc_fsb - calculate FSB
* @pll: PLL value
@@ -174,13 +174,13 @@ static int nforce2_set_fsb(unsigned int fsb)
int pll = 0;
if ((fsb > max_fsb) || (fsb < NFORCE2_MIN_FSB)) {
- printk(KERN_ERR PFX "FSB %d is out of range!\n", fsb);
+ pr_err("FSB %d is out of range!\n", fsb);
return -EINVAL;
}
tfsb = nforce2_fsb_read(0);
if (!tfsb) {
- printk(KERN_ERR PFX "Error while reading the FSB\n");
+ pr_err("Error while reading the FSB\n");
return -EINVAL;
}
@@ -276,8 +276,7 @@ static int nforce2_target(struct cpufreq_policy *policy,
/* local_irq_save(flags); */
if (nforce2_set_fsb(target_fsb) < 0)
- printk(KERN_ERR PFX "Changing FSB to %d failed\n",
- target_fsb);
+ pr_err("Changing FSB to %d failed\n", target_fsb);
else
pr_debug("Changed FSB successfully to %d\n",
target_fsb);
@@ -325,8 +324,7 @@ static int nforce2_cpu_init(struct cpufreq_policy *policy)
/* FIX: Get FID from CPU */
if (!fid) {
if (!cpu_khz) {
- printk(KERN_WARNING PFX
- "cpu_khz not set, can't calculate multiplier!\n");
+ pr_warn("cpu_khz not set, can't calculate multiplier!\n");
return -ENODEV;
}
@@ -341,8 +339,8 @@ static int nforce2_cpu_init(struct cpufreq_policy *policy)
}
}
- printk(KERN_INFO PFX "FSB currently at %i MHz, FID %d.%d\n", fsb,
- fid / 10, fid % 10);
+ pr_info("FSB currently at %i MHz, FID %d.%d\n",
+ fsb, fid / 10, fid % 10);
/* Set maximum FSB to FSB at boot time */
max_fsb = nforce2_fsb_read(1);
@@ -401,11 +399,9 @@ static int nforce2_detect_chipset(void)
if (nforce2_dev == NULL)
return -ENODEV;
- printk(KERN_INFO PFX "Detected nForce2 chipset revision %X\n",
- nforce2_dev->revision);
- printk(KERN_INFO PFX
- "FSB changing is maybe unstable and can lead to "
- "crashes and data loss.\n");
+ pr_info("Detected nForce2 chipset revision %X\n",
+ nforce2_dev->revision);
+ pr_info("FSB changing is maybe unstable and can lead to crashes and data loss\n");
return 0;
}
@@ -423,7 +419,7 @@ static int __init nforce2_init(void)
/* detect chipset */
if (nforce2_detect_chipset()) {
- printk(KERN_INFO PFX "No nForce2 chipset.\n");
+ pr_info("No nForce2 chipset\n");
return -ENODEV;
}
diff --git a/drivers/cpufreq/cpufreq.c b/drivers/cpufreq/cpufreq.c
index c4acfc5273b3..36bc11a106aa 100644
--- a/drivers/cpufreq/cpufreq.c
+++ b/drivers/cpufreq/cpufreq.c
@@ -78,6 +78,16 @@ static int cpufreq_governor(struct cpufreq_policy *policy, unsigned int event);
static unsigned int __cpufreq_get(struct cpufreq_policy *policy);
static int cpufreq_start_governor(struct cpufreq_policy *policy);
+static inline void cpufreq_exit_governor(struct cpufreq_policy *policy)
+{
+ (void)cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT);
+}
+
+static inline void cpufreq_stop_governor(struct cpufreq_policy *policy)
+{
+ (void)cpufreq_governor(policy, CPUFREQ_GOV_STOP);
+}
+
/**
* Two notifier lists: the "policy" list is involved in the
* validation process for a new CPU frequency policy; the
@@ -429,6 +439,73 @@ void cpufreq_freq_transition_end(struct cpufreq_policy *policy,
}
EXPORT_SYMBOL_GPL(cpufreq_freq_transition_end);
+/*
+ * Fast frequency switching status count. Positive means "enabled", negative
+ * means "disabled" and 0 means "not decided yet".
+ */
+static int cpufreq_fast_switch_count;
+static DEFINE_MUTEX(cpufreq_fast_switch_lock);
+
+static void cpufreq_list_transition_notifiers(void)
+{
+ struct notifier_block *nb;
+
+ pr_info("Registered transition notifiers:\n");
+
+ mutex_lock(&cpufreq_transition_notifier_list.mutex);
+
+ for (nb = cpufreq_transition_notifier_list.head; nb; nb = nb->next)
+ pr_info("%pF\n", nb->notifier_call);
+
+ mutex_unlock(&cpufreq_transition_notifier_list.mutex);
+}
+
+/**
+ * cpufreq_enable_fast_switch - Enable fast frequency switching for policy.
+ * @policy: cpufreq policy to enable fast frequency switching for.
+ *
+ * Try to enable fast frequency switching for @policy.
+ *
+ * The attempt will fail if there is at least one transition notifier registered
+ * at this point, as fast frequency switching is quite fundamentally at odds
+ * with transition notifiers. Thus if successful, it will make registration of
+ * transition notifiers fail going forward.
+ */
+void cpufreq_enable_fast_switch(struct cpufreq_policy *policy)
+{
+ lockdep_assert_held(&policy->rwsem);
+
+ if (!policy->fast_switch_possible)
+ return;
+
+ mutex_lock(&cpufreq_fast_switch_lock);
+ if (cpufreq_fast_switch_count >= 0) {
+ cpufreq_fast_switch_count++;
+ policy->fast_switch_enabled = true;
+ } else {
+ pr_warn("CPU%u: Fast frequency switching not enabled\n",
+ policy->cpu);
+ cpufreq_list_transition_notifiers();
+ }
+ mutex_unlock(&cpufreq_fast_switch_lock);
+}
+EXPORT_SYMBOL_GPL(cpufreq_enable_fast_switch);
+
+/**
+ * cpufreq_disable_fast_switch - Disable fast frequency switching for policy.
+ * @policy: cpufreq policy to disable fast frequency switching for.
+ */
+void cpufreq_disable_fast_switch(struct cpufreq_policy *policy)
+{
+ mutex_lock(&cpufreq_fast_switch_lock);
+ if (policy->fast_switch_enabled) {
+ policy->fast_switch_enabled = false;
+ if (!WARN_ON(cpufreq_fast_switch_count <= 0))
+ cpufreq_fast_switch_count--;
+ }
+ mutex_unlock(&cpufreq_fast_switch_lock);
+}
+EXPORT_SYMBOL_GPL(cpufreq_disable_fast_switch);
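A governor that wants the fast path would call the two helpers above around its own lifetime and then consult policy->fast_switch_enabled before every switch. The hooks below are a hedged sketch of that pattern, not code from this patch; my_gov_start() and my_gov_stop() are hypothetical.

static int my_gov_start(struct cpufreq_policy *policy)
{
	/* Only drivers that set fast_switch_possible can be switched fast. */
	cpufreq_enable_fast_switch(policy);

	/*
	 * fast_switch_enabled stays false if any transition notifier was
	 * already registered; the governor must then fall back to
	 * __cpufreq_driver_target().
	 */
	pr_info("CPU%u: fast switching %sabled\n", policy->cpu,
		policy->fast_switch_enabled ? "en" : "dis");

	return 0;
}

static void my_gov_stop(struct cpufreq_policy *policy)
{
	cpufreq_disable_fast_switch(policy);
}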
/*********************************************************************
* SYSFS INTERFACE *
@@ -954,13 +1031,8 @@ static int cpufreq_add_policy_cpu(struct cpufreq_policy *policy, unsigned int cp
return 0;
down_write(&policy->rwsem);
- if (has_target()) {
- ret = cpufreq_governor(policy, CPUFREQ_GOV_STOP);
- if (ret) {
- pr_err("%s: Failed to stop governor\n", __func__);
- goto unlock;
- }
- }
+ if (has_target())
+ cpufreq_stop_governor(policy);
cpumask_set_cpu(cpu, policy->cpus);
@@ -969,8 +1041,6 @@ static int cpufreq_add_policy_cpu(struct cpufreq_policy *policy, unsigned int cp
if (ret)
pr_err("%s: Failed to start governor\n", __func__);
}
-
-unlock:
up_write(&policy->rwsem);
return ret;
}
@@ -1248,26 +1318,24 @@ out_free_policy:
*/
static int cpufreq_add_dev(struct device *dev, struct subsys_interface *sif)
{
+ struct cpufreq_policy *policy;
unsigned cpu = dev->id;
- int ret;
dev_dbg(dev, "%s: adding CPU%u\n", __func__, cpu);
- if (cpu_online(cpu)) {
- ret = cpufreq_online(cpu);
- } else {
- /*
- * A hotplug notifier will follow and we will handle it as CPU
- * online then. For now, just create the sysfs link, unless
- * there is no policy or the link is already present.
- */
- struct cpufreq_policy *policy = per_cpu(cpufreq_cpu_data, cpu);
+ if (cpu_online(cpu))
+ return cpufreq_online(cpu);
- ret = policy && !cpumask_test_and_set_cpu(cpu, policy->real_cpus)
- ? add_cpu_dev_symlink(policy, cpu) : 0;
- }
+ /*
+ * A hotplug notifier will follow and we will handle it as CPU online
+ * then. For now, just create the sysfs link, unless there is no policy
+ * or the link is already present.
+ */
+ policy = per_cpu(cpufreq_cpu_data, cpu);
+ if (!policy || cpumask_test_and_set_cpu(cpu, policy->real_cpus))
+ return 0;
- return ret;
+ return add_cpu_dev_symlink(policy, cpu);
}
static void cpufreq_offline(unsigned int cpu)
@@ -1284,11 +1352,8 @@ static void cpufreq_offline(unsigned int cpu)
}
down_write(&policy->rwsem);
- if (has_target()) {
- ret = cpufreq_governor(policy, CPUFREQ_GOV_STOP);
- if (ret)
- pr_err("%s: Failed to stop governor\n", __func__);
- }
+ if (has_target())
+ cpufreq_stop_governor(policy);
cpumask_clear_cpu(cpu, policy->cpus);
@@ -1317,12 +1382,8 @@ static void cpufreq_offline(unsigned int cpu)
if (cpufreq_driver->stop_cpu)
cpufreq_driver->stop_cpu(policy);
- /* If cpu is last user of policy, free policy */
- if (has_target()) {
- ret = cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT);
- if (ret)
- pr_err("%s: Failed to exit governor\n", __func__);
- }
+ if (has_target())
+ cpufreq_exit_governor(policy);
/*
* Perform the ->exit() even during light-weight tear-down,
@@ -1447,8 +1508,12 @@ static unsigned int __cpufreq_get(struct cpufreq_policy *policy)
ret_freq = cpufreq_driver->get(policy->cpu);
- /* Updating inactive policies is invalid, so avoid doing that. */
- if (unlikely(policy_is_inactive(policy)))
+ /*
+ * Updating inactive policies is invalid, so avoid doing that. Also
+ * if fast frequency switching is used with the given policy, the check
+ * against policy->cur is pointless, so skip it in that case too.
+ */
+ if (unlikely(policy_is_inactive(policy)) || policy->fast_switch_enabled)
return ret_freq;
if (ret_freq && policy->cur &&
@@ -1552,7 +1617,6 @@ EXPORT_SYMBOL(cpufreq_generic_suspend);
void cpufreq_suspend(void)
{
struct cpufreq_policy *policy;
- int ret;
if (!cpufreq_driver)
return;
@@ -1565,14 +1629,8 @@ void cpufreq_suspend(void)
for_each_active_policy(policy) {
if (has_target()) {
down_write(&policy->rwsem);
- ret = cpufreq_governor(policy, CPUFREQ_GOV_STOP);
+ cpufreq_stop_governor(policy);
up_write(&policy->rwsem);
-
- if (ret) {
- pr_err("%s: Failed to stop governor for policy: %p\n",
- __func__, policy);
- continue;
- }
}
if (cpufreq_driver->suspend && cpufreq_driver->suspend(policy))
@@ -1679,8 +1737,18 @@ int cpufreq_register_notifier(struct notifier_block *nb, unsigned int list)
switch (list) {
case CPUFREQ_TRANSITION_NOTIFIER:
+ mutex_lock(&cpufreq_fast_switch_lock);
+
+ if (cpufreq_fast_switch_count > 0) {
+ mutex_unlock(&cpufreq_fast_switch_lock);
+ return -EBUSY;
+ }
ret = srcu_notifier_chain_register(
&cpufreq_transition_notifier_list, nb);
+ if (!ret)
+ cpufreq_fast_switch_count--;
+
+ mutex_unlock(&cpufreq_fast_switch_lock);
break;
case CPUFREQ_POLICY_NOTIFIER:
ret = blocking_notifier_chain_register(
@@ -1713,8 +1781,14 @@ int cpufreq_unregister_notifier(struct notifier_block *nb, unsigned int list)
switch (list) {
case CPUFREQ_TRANSITION_NOTIFIER:
+ mutex_lock(&cpufreq_fast_switch_lock);
+
ret = srcu_notifier_chain_unregister(
&cpufreq_transition_notifier_list, nb);
+ if (!ret && !WARN_ON(cpufreq_fast_switch_count >= 0))
+ cpufreq_fast_switch_count++;
+
+ mutex_unlock(&cpufreq_fast_switch_lock);
break;
case CPUFREQ_POLICY_NOTIFIER:
ret = blocking_notifier_chain_unregister(
@@ -1733,6 +1807,37 @@ EXPORT_SYMBOL(cpufreq_unregister_notifier);
* GOVERNORS *
*********************************************************************/
+/**
+ * cpufreq_driver_fast_switch - Carry out a fast CPU frequency switch.
+ * @policy: cpufreq policy to switch the frequency for.
+ * @target_freq: New frequency to set (may be approximate).
+ *
+ * Carry out a fast frequency switch without sleeping.
+ *
+ * The driver's ->fast_switch() callback invoked by this function must be
+ * suitable for being called from within RCU-sched read-side critical sections
+ * and it is expected to select the minimum available frequency greater than or
+ * equal to @target_freq (CPUFREQ_RELATION_L).
+ *
+ * This function must not be called if policy->fast_switch_enabled is unset.
+ *
+ * Governors calling this function must guarantee that it will never be invoked
+ * twice in parallel for the same policy and that it will never be called in
+ * parallel with either ->target() or ->target_index() for the same policy.
+ *
+ * If CPUFREQ_ENTRY_INVALID is returned by the driver's ->fast_switch()
+ * callback to indicate an error condition, the hardware configuration must be
+ * preserved.
+ */
+unsigned int cpufreq_driver_fast_switch(struct cpufreq_policy *policy,
+ unsigned int target_freq)
+{
+ target_freq = clamp_val(target_freq, policy->min, policy->max);
+
+ return cpufreq_driver->fast_switch(policy, target_freq);
+}
+EXPORT_SYMBOL_GPL(cpufreq_driver_fast_switch);
+
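On the driver side, ->fast_switch() must pick the lowest supported frequency at or above the request, program it without sleeping, and return it (or CPUFREQ_ENTRY_INVALID on failure). The following is only a sketch of such a callback: the my_drv_* table, register and the MMIO write are invented, and a real driver would use whatever non-sleeping mechanism its hardware offers.

#include <linux/cpufreq.h>
#include <linux/io.h>

static struct cpufreq_frequency_table *my_drv_table;	/* filled at probe (hypothetical) */
static void __iomem *my_drv_freq_reg;			/* mapped at probe (hypothetical) */

static unsigned int my_drv_fast_switch(struct cpufreq_policy *policy,
				       unsigned int target_freq)
{
	struct cpufreq_frequency_table *pos;
	unsigned int best = CPUFREQ_ENTRY_INVALID;

	/* CPUFREQ_RELATION_L: lowest listed frequency >= target_freq. */
	cpufreq_for_each_valid_entry(pos, my_drv_table) {
		if (pos->frequency >= target_freq && pos->frequency < best)
			best = pos->frequency;
	}

	if (best == CPUFREQ_ENTRY_INVALID)
		return CPUFREQ_ENTRY_INVALID;

	/* Hypothetical register write; nothing in this path may sleep. */
	writel_relaxed(best, my_drv_freq_reg);

	return best;
}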
/* Must set freqs->new to intermediate frequency */
static int __target_intermediate(struct cpufreq_policy *policy,
struct cpufreq_freqs *freqs, int index)
@@ -1928,16 +2033,15 @@ static int cpufreq_governor(struct cpufreq_policy *policy, unsigned int event)
ret = policy->governor->governor(policy, event);
- if (!ret) {
- if (event == CPUFREQ_GOV_POLICY_INIT)
+ if (event == CPUFREQ_GOV_POLICY_INIT) {
+ if (ret)
+ module_put(policy->governor->owner);
+ else
policy->governor->initialized++;
- else if (event == CPUFREQ_GOV_POLICY_EXIT)
- policy->governor->initialized--;
- }
-
- if (((event == CPUFREQ_GOV_POLICY_INIT) && ret) ||
- ((event == CPUFREQ_GOV_POLICY_EXIT) && !ret))
+ } else if (event == CPUFREQ_GOV_POLICY_EXIT) {
+ policy->governor->initialized--;
module_put(policy->governor->owner);
+ }
return ret;
}
@@ -2100,20 +2204,8 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
old_gov = policy->governor;
/* end old governor */
if (old_gov) {
- ret = cpufreq_governor(policy, CPUFREQ_GOV_STOP);
- if (ret) {
- /* This can happen due to race with other operations */
- pr_debug("%s: Failed to Stop Governor: %s (%d)\n",
- __func__, old_gov->name, ret);
- return ret;
- }
-
- ret = cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT);
- if (ret) {
- pr_err("%s: Failed to Exit Governor: %s (%d)\n",
- __func__, old_gov->name, ret);
- return ret;
- }
+ cpufreq_stop_governor(policy);
+ cpufreq_exit_governor(policy);
}
/* start new governor */
@@ -2125,7 +2217,7 @@ static int cpufreq_set_policy(struct cpufreq_policy *policy,
pr_debug("cpufreq: governor change\n");
return 0;
}
- cpufreq_governor(policy, CPUFREQ_GOV_POLICY_EXIT);
+ cpufreq_exit_governor(policy);
}
/* new governor failed, so re-start old one */
@@ -2193,16 +2285,13 @@ static int cpufreq_cpu_callback(struct notifier_block *nfb,
switch (action & ~CPU_TASKS_FROZEN) {
case CPU_ONLINE:
+ case CPU_DOWN_FAILED:
cpufreq_online(cpu);
break;
case CPU_DOWN_PREPARE:
cpufreq_offline(cpu);
break;
-
- case CPU_DOWN_FAILED:
- cpufreq_online(cpu);
- break;
}
return NOTIFY_OK;
}
@@ -2377,10 +2466,7 @@ int cpufreq_register_driver(struct cpufreq_driver *driver_data)
register_hotcpu_notifier(&cpufreq_cpu_notifier);
pr_debug("driver %s up and running\n", driver_data->name);
-
-out:
- put_online_cpus();
- return ret;
+ goto out;
err_if_unreg:
subsys_interface_unregister(&cpufreq_interface);
@@ -2390,7 +2476,9 @@ err_null_driver:
write_lock_irqsave(&cpufreq_driver_lock, flags);
cpufreq_driver = NULL;
write_unlock_irqrestore(&cpufreq_driver_lock, flags);
- goto out;
+out:
+ put_online_cpus();
+ return ret;
}
EXPORT_SYMBOL_GPL(cpufreq_register_driver);
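One user-visible consequence of the fast-switch counting added in this file is that CPUFREQ_TRANSITION_NOTIFIER registration can now fail with -EBUSY, so callers should check the return value instead of assuming success. A hedged sketch of a notifier user coping with that; the callback and its logging are hypothetical.

#include <linux/cpufreq.h>
#include <linux/init.h>
#include <linux/notifier.h>

static int my_trans_cb(struct notifier_block *nb, unsigned long event, void *data)
{
	struct cpufreq_freqs *freqs = data;

	if (event == CPUFREQ_POSTCHANGE)
		pr_debug("CPU%u now at %u kHz\n", freqs->cpu, freqs->new);

	return NOTIFY_OK;
}

static struct notifier_block my_trans_nb = {
	.notifier_call = my_trans_cb,
};

static int __init my_trans_nb_init(void)
{
	int ret = cpufreq_register_notifier(&my_trans_nb,
					    CPUFREQ_TRANSITION_NOTIFIER);

	/* -EBUSY means fast switching is in use and notifiers are refused. */
	if (ret == -EBUSY)
		pr_warn("cpufreq transition notifications unavailable\n");

	return ret == -EBUSY ? 0 : ret;
}
late_initcall(my_trans_nb_init);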
diff --git a/drivers/cpufreq/cpufreq_conservative.c b/drivers/cpufreq/cpufreq_conservative.c
index bf4913f6453b..316df247e00d 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -129,9 +129,10 @@ static struct notifier_block cs_cpufreq_notifier_block = {
/************************** sysfs interface ************************/
static struct dbs_governor cs_dbs_gov;
-static ssize_t store_sampling_down_factor(struct dbs_data *dbs_data,
- const char *buf, size_t count)
+static ssize_t store_sampling_down_factor(struct gov_attr_set *attr_set,
+ const char *buf, size_t count)
{
+ struct dbs_data *dbs_data = to_dbs_data(attr_set);
unsigned int input;
int ret;
ret = sscanf(buf, "%u", &input);
@@ -143,9 +144,10 @@ static ssize_t store_sampling_down_factor(struct dbs_data *dbs_data,
return count;
}
-static ssize_t store_up_threshold(struct dbs_data *dbs_data, const char *buf,
- size_t count)
+static ssize_t store_up_threshold(struct gov_attr_set *attr_set,
+ const char *buf, size_t count)
{
+ struct dbs_data *dbs_data = to_dbs_data(attr_set);
struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
unsigned int input;
int ret;
@@ -158,9 +160,10 @@ static ssize_t store_up_threshold(struct dbs_data *dbs_data, const char *buf,
return count;
}
-static ssize_t store_down_threshold(struct dbs_data *dbs_data, const char *buf,
- size_t count)
+static ssize_t store_down_threshold(struct gov_attr_set *attr_set,
+ const char *buf, size_t count)
{
+ struct dbs_data *dbs_data = to_dbs_data(attr_set);
struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
unsigned int input;
int ret;
@@ -175,9 +178,10 @@ static ssize_t store_down_threshold(struct dbs_data *dbs_data, const char *buf,
return count;
}
-static ssize_t store_ignore_nice_load(struct dbs_data *dbs_data,
- const char *buf, size_t count)
+static ssize_t store_ignore_nice_load(struct gov_attr_set *attr_set,
+ const char *buf, size_t count)
{
+ struct dbs_data *dbs_data = to_dbs_data(attr_set);
unsigned int input;
int ret;
@@ -199,9 +203,10 @@ static ssize_t store_ignore_nice_load(struct dbs_data *dbs_data,
return count;
}
-static ssize_t store_freq_step(struct dbs_data *dbs_data, const char *buf,
- size_t count)
+static ssize_t store_freq_step(struct gov_attr_set *attr_set, const char *buf,
+ size_t count)
{
+ struct dbs_data *dbs_data = to_dbs_data(attr_set);
struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
unsigned int input;
int ret;
diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 5f1147fa9239..be498d56dd69 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -43,9 +43,10 @@ static DEFINE_MUTEX(gov_dbs_data_mutex);
* This must be called with dbs_data->mutex held, otherwise traversing
* policy_dbs_list isn't safe.
*/
-ssize_t store_sampling_rate(struct dbs_data *dbs_data, const char *buf,
+ssize_t store_sampling_rate(struct gov_attr_set *attr_set, const char *buf,
size_t count)
{
+ struct dbs_data *dbs_data = to_dbs_data(attr_set);
struct policy_dbs_info *policy_dbs;
unsigned int rate;
int ret;
@@ -59,7 +60,7 @@ ssize_t store_sampling_rate(struct dbs_data *dbs_data, const char *buf,
* We are operating under dbs_data->mutex and so the list and its
* entries can't be freed concurrently.
*/
- list_for_each_entry(policy_dbs, &dbs_data->policy_dbs_list, list) {
+ list_for_each_entry(policy_dbs, &attr_set->policy_list, list) {
mutex_lock(&policy_dbs->timer_mutex);
/*
* On 32-bit architectures this may race with the
@@ -96,13 +97,13 @@ void gov_update_cpu_data(struct dbs_data *dbs_data)
{
struct policy_dbs_info *policy_dbs;
- list_for_each_entry(policy_dbs, &dbs_data->policy_dbs_list, list) {
+ list_for_each_entry(policy_dbs, &dbs_data->attr_set.policy_list, list) {
unsigned int j;
for_each_cpu(j, policy_dbs->policy->cpus) {
struct cpu_dbs_info *j_cdbs = &per_cpu(cpu_dbs, j);
- j_cdbs->prev_cpu_idle = get_cpu_idle_time(j, &j_cdbs->prev_cpu_wall,
+ j_cdbs->prev_cpu_idle = get_cpu_idle_time(j, &j_cdbs->prev_update_time,
dbs_data->io_is_busy);
if (dbs_data->ignore_nice_load)
j_cdbs->prev_cpu_nice = kcpustat_cpu(j).cpustat[CPUTIME_NICE];
@@ -111,54 +112,6 @@ void gov_update_cpu_data(struct dbs_data *dbs_data)
}
EXPORT_SYMBOL_GPL(gov_update_cpu_data);
-static inline struct dbs_data *to_dbs_data(struct kobject *kobj)
-{
- return container_of(kobj, struct dbs_data, kobj);
-}
-
-static inline struct governor_attr *to_gov_attr(struct attribute *attr)
-{
- return container_of(attr, struct governor_attr, attr);
-}
-
-static ssize_t governor_show(struct kobject *kobj, struct attribute *attr,
- char *buf)
-{
- struct dbs_data *dbs_data = to_dbs_data(kobj);
- struct governor_attr *gattr = to_gov_attr(attr);
-
- return gattr->show(dbs_data, buf);
-}
-
-static ssize_t governor_store(struct kobject *kobj, struct attribute *attr,
- const char *buf, size_t count)
-{
- struct dbs_data *dbs_data = to_dbs_data(kobj);
- struct governor_attr *gattr = to_gov_attr(attr);
- int ret = -EBUSY;
-
- mutex_lock(&dbs_data->mutex);
-
- if (dbs_data->usage_count)
- ret = gattr->store(dbs_data, buf, count);
-
- mutex_unlock(&dbs_data->mutex);
-
- return ret;
-}
-
-/*
- * Sysfs Ops for accessing governor attributes.
- *
- * All show/store invocations for governor specific sysfs attributes, will first
- * call the below show/store callbacks and the attribute specific callback will
- * be called from within it.
- */
-static const struct sysfs_ops governor_sysfs_ops = {
- .show = governor_show,
- .store = governor_store,
-};
-
unsigned int dbs_update(struct cpufreq_policy *policy)
{
struct policy_dbs_info *policy_dbs = policy->governor_data;
@@ -184,14 +137,14 @@ unsigned int dbs_update(struct cpufreq_policy *policy)
/* Get Absolute Load */
for_each_cpu(j, policy->cpus) {
struct cpu_dbs_info *j_cdbs = &per_cpu(cpu_dbs, j);
- u64 cur_wall_time, cur_idle_time;
- unsigned int idle_time, wall_time;
+ u64 update_time, cur_idle_time;
+ unsigned int idle_time, time_elapsed;
unsigned int load;
- cur_idle_time = get_cpu_idle_time(j, &cur_wall_time, io_busy);
+ cur_idle_time = get_cpu_idle_time(j, &update_time, io_busy);
- wall_time = cur_wall_time - j_cdbs->prev_cpu_wall;
- j_cdbs->prev_cpu_wall = cur_wall_time;
+ time_elapsed = update_time - j_cdbs->prev_update_time;
+ j_cdbs->prev_update_time = update_time;
idle_time = cur_idle_time - j_cdbs->prev_cpu_idle;
j_cdbs->prev_cpu_idle = cur_idle_time;
@@ -203,47 +156,62 @@ unsigned int dbs_update(struct cpufreq_policy *policy)
j_cdbs->prev_cpu_nice = cur_nice;
}
- if (unlikely(!wall_time || wall_time < idle_time))
- continue;
-
- /*
- * If the CPU had gone completely idle, and a task just woke up
- * on this CPU now, it would be unfair to calculate 'load' the
- * usual way for this elapsed time-window, because it will show
- * near-zero load, irrespective of how CPU intensive that task
- * actually is. This is undesirable for latency-sensitive bursty
- * workloads.
- *
- * To avoid this, we reuse the 'load' from the previous
- * time-window and give this task a chance to start with a
- * reasonably high CPU frequency. (However, we shouldn't over-do
- * this copy, lest we get stuck at a high load (high frequency)
- * for too long, even when the current system load has actually
- * dropped down. So we perform the copy only once, upon the
- * first wake-up from idle.)
- *
- * Detecting this situation is easy: the governor's utilization
- * update handler would not have run during CPU-idle periods.
- * Hence, an unusually large 'wall_time' (as compared to the
- * sampling rate) indicates this scenario.
- *
- * prev_load can be zero in two cases and we must recalculate it
- * for both cases:
- * - during long idle intervals
- * - explicitly set to zero
- */
- if (unlikely(wall_time > (2 * sampling_rate) &&
- j_cdbs->prev_load)) {
+ if (unlikely(!time_elapsed)) {
+ /*
+ * That can only happen when this function is called
+ * twice in a row with a very short interval between the
+ * calls, so the previous load value can be used then.
+ */
load = j_cdbs->prev_load;
-
+ } else if (unlikely(time_elapsed > 2 * sampling_rate &&
+ j_cdbs->prev_load)) {
/*
- * Perform a destructive copy, to ensure that we copy
- * the previous load only once, upon the first wake-up
- * from idle.
+ * If the CPU had gone completely idle and a task has
+ * just woken up on this CPU now, it would be unfair to
+ * calculate 'load' the usual way for this elapsed
+ * time-window, because it would show near-zero load,
+ * irrespective of how CPU intensive that task actually
+ * was. This is undesirable for latency-sensitive bursty
+ * workloads.
+ *
+ * To avoid this, reuse the 'load' from the previous
+ * time-window and give this task a chance to start with
+ * a reasonably high CPU frequency. However, that
+ * shouldn't be over-done, lest we get stuck at a high
+ * load (high frequency) for too long, even when the
+ * current system load has actually dropped down, so
+ * clear prev_load to guarantee that the load will be
+ * computed again next time.
+ *
+ * Detecting this situation is easy: the governor's
+ * utilization update handler would not have run during
+ * CPU-idle periods. Hence, an unusually large
+ * 'time_elapsed' (as compared to the sampling rate)
+ * indicates this scenario.
*/
+ load = j_cdbs->prev_load;
j_cdbs->prev_load = 0;
} else {
- load = 100 * (wall_time - idle_time) / wall_time;
+ if (time_elapsed >= idle_time) {
+ load = 100 * (time_elapsed - idle_time) / time_elapsed;
+ } else {
+ /*
+ * That can happen if idle_time is returned by
+ * get_cpu_idle_time_jiffy(). In that case
+ * idle_time is roughly equal to the difference
+ * between time_elapsed and "busy time" obtained
+ * from CPU statistics. Then, the "busy time"
+ * can end up being greater than time_elapsed
+ * (for example, if jiffies_64 and the CPU
+ * statistics are updated by different CPUs),
+ * so idle_time may in fact be negative. That
+ * means, though, that the CPU was busy all
+ * the time (on the rough average) during the
+ * last sampling interval and 100 can be
+ * returned as the load.
+ */
+ load = (int)idle_time < 0 ? 100 : 0;
+ }
j_cdbs->prev_load = load;
}
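A small worked example may help when reading the reworked branches above; all numbers are invented for illustration.

/*
 * time_elapsed = 10000 us, idle_time = 2500 us:
 *     load = 100 * (10000 - 2500) / 10000 = 75
 * time_elapsed = 10000 us, idle_time wraps to (unsigned)-300:
 *     (int)idle_time < 0, so the CPU counts as fully busy: load = 100
 * time_elapsed = 0 (two calls back to back):
 *     load = j_cdbs->prev_load, the previous window's value is reused
 */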
@@ -254,43 +222,6 @@ unsigned int dbs_update(struct cpufreq_policy *policy)
}
EXPORT_SYMBOL_GPL(dbs_update);
-static void gov_set_update_util(struct policy_dbs_info *policy_dbs,
- unsigned int delay_us)
-{
- struct cpufreq_policy *policy = policy_dbs->policy;
- int cpu;
-
- gov_update_sample_delay(policy_dbs, delay_us);
- policy_dbs->last_sample_time = 0;
-
- for_each_cpu(cpu, policy->cpus) {
- struct cpu_dbs_info *cdbs = &per_cpu(cpu_dbs, cpu);
-
- cpufreq_set_update_util_data(cpu, &cdbs->update_util);
- }
-}
-
-static inline void gov_clear_update_util(struct cpufreq_policy *policy)
-{
- int i;
-
- for_each_cpu(i, policy->cpus)
- cpufreq_set_update_util_data(i, NULL);
-
- synchronize_sched();
-}
-
-static void gov_cancel_work(struct cpufreq_policy *policy)
-{
- struct policy_dbs_info *policy_dbs = policy->governor_data;
-
- gov_clear_update_util(policy_dbs->policy);
- irq_work_sync(&policy_dbs->irq_work);
- cancel_work_sync(&policy_dbs->work);
- atomic_set(&policy_dbs->work_count, 0);
- policy_dbs->work_in_progress = false;
-}
-
static void dbs_work_handler(struct work_struct *work)
{
struct policy_dbs_info *policy_dbs;
@@ -378,6 +309,44 @@ static void dbs_update_util_handler(struct update_util_data *data, u64 time,
irq_work_queue(&policy_dbs->irq_work);
}
+static void gov_set_update_util(struct policy_dbs_info *policy_dbs,
+ unsigned int delay_us)
+{
+ struct cpufreq_policy *policy = policy_dbs->policy;
+ int cpu;
+
+ gov_update_sample_delay(policy_dbs, delay_us);
+ policy_dbs->last_sample_time = 0;
+
+ for_each_cpu(cpu, policy->cpus) {
+ struct cpu_dbs_info *cdbs = &per_cpu(cpu_dbs, cpu);
+
+ cpufreq_add_update_util_hook(cpu, &cdbs->update_util,
+ dbs_update_util_handler);
+ }
+}
+
+static inline void gov_clear_update_util(struct cpufreq_policy *policy)
+{
+ int i;
+
+ for_each_cpu(i, policy->cpus)
+ cpufreq_remove_update_util_hook(i);
+
+ synchronize_sched();
+}
+
+static void gov_cancel_work(struct cpufreq_policy *policy)
+{
+ struct policy_dbs_info *policy_dbs = policy->governor_data;
+
+ gov_clear_update_util(policy_dbs->policy);
+ irq_work_sync(&policy_dbs->irq_work);
+ cancel_work_sync(&policy_dbs->work);
+ atomic_set(&policy_dbs->work_count, 0);
+ policy_dbs->work_in_progress = false;
+}
+
static struct policy_dbs_info *alloc_policy_dbs_info(struct cpufreq_policy *policy,
struct dbs_governor *gov)
{
@@ -400,7 +369,6 @@ static struct policy_dbs_info *alloc_policy_dbs_info(struct cpufreq_policy *poli
struct cpu_dbs_info *j_cdbs = &per_cpu(cpu_dbs, j);
j_cdbs->policy_dbs = policy_dbs;
- j_cdbs->update_util.func = dbs_update_util_handler;
}
return policy_dbs;
}
@@ -449,10 +417,7 @@ static int cpufreq_governor_init(struct cpufreq_policy *policy)
policy_dbs->dbs_data = dbs_data;
policy->governor_data = policy_dbs;
- mutex_lock(&dbs_data->mutex);
- dbs_data->usage_count++;
- list_add(&policy_dbs->list, &dbs_data->policy_dbs_list);
- mutex_unlock(&dbs_data->mutex);
+ gov_attr_set_get(&dbs_data->attr_set, &policy_dbs->list);
goto out;
}
@@ -462,8 +427,7 @@ static int cpufreq_governor_init(struct cpufreq_policy *policy)
goto free_policy_dbs_info;
}
- INIT_LIST_HEAD(&dbs_data->policy_dbs_list);
- mutex_init(&dbs_data->mutex);
+ gov_attr_set_init(&dbs_data->attr_set, &policy_dbs->list);
ret = gov->init(dbs_data, !policy->governor->initialized);
if (ret)
@@ -483,14 +447,11 @@ static int cpufreq_governor_init(struct cpufreq_policy *policy)
if (!have_governor_per_policy())
gov->gdbs_data = dbs_data;
- policy->governor_data = policy_dbs;
-
policy_dbs->dbs_data = dbs_data;
- dbs_data->usage_count = 1;
- list_add(&policy_dbs->list, &dbs_data->policy_dbs_list);
+ policy->governor_data = policy_dbs;
gov->kobj_type.sysfs_ops = &governor_sysfs_ops;
- ret = kobject_init_and_add(&dbs_data->kobj, &gov->kobj_type,
+ ret = kobject_init_and_add(&dbs_data->attr_set.kobj, &gov->kobj_type,
get_governor_parent_kobj(policy),
"%s", gov->gov.name);
if (!ret)
@@ -519,29 +480,21 @@ static int cpufreq_governor_exit(struct cpufreq_policy *policy)
struct dbs_governor *gov = dbs_governor_of(policy);
struct policy_dbs_info *policy_dbs = policy->governor_data;
struct dbs_data *dbs_data = policy_dbs->dbs_data;
- int count;
+ unsigned int count;
/* Protect gov->gdbs_data against concurrent updates. */
mutex_lock(&gov_dbs_data_mutex);
- mutex_lock(&dbs_data->mutex);
- list_del(&policy_dbs->list);
- count = --dbs_data->usage_count;
- mutex_unlock(&dbs_data->mutex);
+ count = gov_attr_set_put(&dbs_data->attr_set, &policy_dbs->list);
- if (!count) {
- kobject_put(&dbs_data->kobj);
-
- policy->governor_data = NULL;
+ policy->governor_data = NULL;
+ if (!count) {
if (!have_governor_per_policy())
gov->gdbs_data = NULL;
gov->exit(dbs_data, policy->governor->initialized == 1);
- mutex_destroy(&dbs_data->mutex);
kfree(dbs_data);
- } else {
- policy->governor_data = NULL;
}
free_policy_dbs_info(policy_dbs, gov);
@@ -570,12 +523,12 @@ static int cpufreq_governor_start(struct cpufreq_policy *policy)
for_each_cpu(j, policy->cpus) {
struct cpu_dbs_info *j_cdbs = &per_cpu(cpu_dbs, j);
- unsigned int prev_load;
- j_cdbs->prev_cpu_idle = get_cpu_idle_time(j, &j_cdbs->prev_cpu_wall, io_busy);
-
- prev_load = j_cdbs->prev_cpu_wall - j_cdbs->prev_cpu_idle;
- j_cdbs->prev_load = 100 * prev_load / (unsigned int)j_cdbs->prev_cpu_wall;
+ j_cdbs->prev_cpu_idle = get_cpu_idle_time(j, &j_cdbs->prev_update_time, io_busy);
+ /*
+ * Make the first invocation of dbs_update() compute the load.
+ */
+ j_cdbs->prev_load = 0;
if (ignore_nice)
j_cdbs->prev_cpu_nice = kcpustat_cpu(j).cpustat[CPUTIME_NICE];
diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
index 61ff82fe0613..34eb214b6d57 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -24,20 +24,6 @@
#include <linux/module.h>
#include <linux/mutex.h>
-/*
- * The polling frequency depends on the capability of the processor. Default
- * polling frequency is 1000 times the transition latency of the processor. The
- * governor will work on any processor with transition latency <= 10ms, using
- * appropriate sampling rate.
- *
- * For CPUs with transition latency > 10ms (mostly drivers with CPUFREQ_ETERNAL)
- * this governor will not work. All times here are in us (micro seconds).
- */
-#define MIN_SAMPLING_RATE_RATIO (2)
-#define LATENCY_MULTIPLIER (1000)
-#define MIN_LATENCY_MULTIPLIER (20)
-#define TRANSITION_LATENCY_LIMIT (10 * 1000 * 1000)
-
/* Ondemand Sampling types */
enum {OD_NORMAL_SAMPLE, OD_SUB_SAMPLE};
@@ -52,7 +38,7 @@ enum {OD_NORMAL_SAMPLE, OD_SUB_SAMPLE};
/* Governor demand based switching data (per-policy or global). */
struct dbs_data {
- int usage_count;
+ struct gov_attr_set attr_set;
void *tuners;
unsigned int min_sampling_rate;
unsigned int ignore_nice_load;
@@ -60,37 +46,27 @@ struct dbs_data {
unsigned int sampling_down_factor;
unsigned int up_threshold;
unsigned int io_is_busy;
-
- struct kobject kobj;
- struct list_head policy_dbs_list;
- /*
- * Protect concurrent updates to governor tunables from sysfs,
- * policy_dbs_list and usage_count.
- */
- struct mutex mutex;
};
-/* Governor's specific attributes */
-struct dbs_data;
-struct governor_attr {
- struct attribute attr;
- ssize_t (*show)(struct dbs_data *dbs_data, char *buf);
- ssize_t (*store)(struct dbs_data *dbs_data, const char *buf,
- size_t count);
-};
+static inline struct dbs_data *to_dbs_data(struct gov_attr_set *attr_set)
+{
+ return container_of(attr_set, struct dbs_data, attr_set);
+}
#define gov_show_one(_gov, file_name) \
static ssize_t show_##file_name \
-(struct dbs_data *dbs_data, char *buf) \
+(struct gov_attr_set *attr_set, char *buf) \
{ \
+ struct dbs_data *dbs_data = to_dbs_data(attr_set); \
struct _gov##_dbs_tuners *tuners = dbs_data->tuners; \
return sprintf(buf, "%u\n", tuners->file_name); \
}
#define gov_show_one_common(file_name) \
static ssize_t show_##file_name \
-(struct dbs_data *dbs_data, char *buf) \
+(struct gov_attr_set *attr_set, char *buf) \
{ \
+ struct dbs_data *dbs_data = to_dbs_data(attr_set); \
return sprintf(buf, "%u\n", dbs_data->file_name); \
}
@@ -135,7 +111,7 @@ static inline void gov_update_sample_delay(struct policy_dbs_info *policy_dbs,
/* Per cpu structures */
struct cpu_dbs_info {
u64 prev_cpu_idle;
- u64 prev_cpu_wall;
+ u64 prev_update_time;
u64 prev_cpu_nice;
/*
* Used to keep track of load in the previous interval. However, when
@@ -184,7 +160,7 @@ void od_register_powersave_bias_handler(unsigned int (*f)
(struct cpufreq_policy *, unsigned int, unsigned int),
unsigned int powersave_bias);
void od_unregister_powersave_bias_handler(void);
-ssize_t store_sampling_rate(struct dbs_data *dbs_data, const char *buf,
+ssize_t store_sampling_rate(struct gov_attr_set *attr_set, const char *buf,
size_t count);
void gov_update_cpu_data(struct dbs_data *dbs_data);
#endif /* _CPUFREQ_GOVERNOR_H */
diff --git a/drivers/cpufreq/cpufreq_governor_attr_set.c b/drivers/cpufreq/cpufreq_governor_attr_set.c
new file mode 100644
index 000000000000..52841f807a7e
--- /dev/null
+++ b/drivers/cpufreq/cpufreq_governor_attr_set.c
@@ -0,0 +1,84 @@
+/*
+ * Abstract code for CPUFreq governor tunable sysfs attributes.
+ *
+ * Copyright (C) 2016, Intel Corporation
+ * Author: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include "cpufreq_governor.h"
+
+static inline struct gov_attr_set *to_gov_attr_set(struct kobject *kobj)
+{
+ return container_of(kobj, struct gov_attr_set, kobj);
+}
+
+static inline struct governor_attr *to_gov_attr(struct attribute *attr)
+{
+ return container_of(attr, struct governor_attr, attr);
+}
+
+static ssize_t governor_show(struct kobject *kobj, struct attribute *attr,
+ char *buf)
+{
+ struct governor_attr *gattr = to_gov_attr(attr);
+
+ return gattr->show(to_gov_attr_set(kobj), buf);
+}
+
+static ssize_t governor_store(struct kobject *kobj, struct attribute *attr,
+ const char *buf, size_t count)
+{
+ struct gov_attr_set *attr_set = to_gov_attr_set(kobj);
+ struct governor_attr *gattr = to_gov_attr(attr);
+ int ret;
+
+ mutex_lock(&attr_set->update_lock);
+ ret = attr_set->usage_count ? gattr->store(attr_set, buf, count) : -EBUSY;
+ mutex_unlock(&attr_set->update_lock);
+ return ret;
+}
+
+const struct sysfs_ops governor_sysfs_ops = {
+ .show = governor_show,
+ .store = governor_store,
+};
+EXPORT_SYMBOL_GPL(governor_sysfs_ops);
+
+void gov_attr_set_init(struct gov_attr_set *attr_set, struct list_head *list_node)
+{
+ INIT_LIST_HEAD(&attr_set->policy_list);
+ mutex_init(&attr_set->update_lock);
+ attr_set->usage_count = 1;
+ list_add(list_node, &attr_set->policy_list);
+}
+EXPORT_SYMBOL_GPL(gov_attr_set_init);
+
+void gov_attr_set_get(struct gov_attr_set *attr_set, struct list_head *list_node)
+{
+ mutex_lock(&attr_set->update_lock);
+ attr_set->usage_count++;
+ list_add(list_node, &attr_set->policy_list);
+ mutex_unlock(&attr_set->update_lock);
+}
+EXPORT_SYMBOL_GPL(gov_attr_set_get);
+
+unsigned int gov_attr_set_put(struct gov_attr_set *attr_set, struct list_head *list_node)
+{
+ unsigned int count;
+
+ mutex_lock(&attr_set->update_lock);
+ list_del(list_node);
+ count = --attr_set->usage_count;
+ mutex_unlock(&attr_set->update_lock);
+ if (count)
+ return count;
+
+ kobject_put(&attr_set->kobj);
+ mutex_destroy(&attr_set->update_lock);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(gov_attr_set_put);
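cpufreq_governor.c is the consumer of these helpers in this series, but any other owner of a tunable set would follow the same pattern, sketched below. struct my_tunables and the wrapper functions are hypothetical; the prototypes for gov_attr_set_init()/get()/put() are assumed to be exported alongside this file.

#include <linux/cpufreq.h>
#include <linux/slab.h>

struct my_tunables {			/* hypothetical owner of a gov_attr_set */
	struct gov_attr_set attr_set;
	unsigned int some_knob;
};

/* First policy to use the tunables: usage_count starts at 1. */
static void my_tunables_first_user(struct my_tunables *t, struct list_head *node)
{
	gov_attr_set_init(&t->attr_set, node);
}

/* Every further policy sharing the same tunables. */
static void my_tunables_extra_user(struct my_tunables *t, struct list_head *node)
{
	gov_attr_set_get(&t->attr_set, node);
}

/* Last user: gov_attr_set_put() drops the kobject, the caller frees the object. */
static void my_tunables_put(struct my_tunables *t, struct list_head *node)
{
	if (!gov_attr_set_put(&t->attr_set, node))
		kfree(t);
}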
diff --git a/drivers/cpufreq/cpufreq_ondemand.c b/drivers/cpufreq/cpufreq_ondemand.c
index acd80272ded6..300163430516 100644
--- a/drivers/cpufreq/cpufreq_ondemand.c
+++ b/drivers/cpufreq/cpufreq_ondemand.c
@@ -207,9 +207,10 @@ static unsigned int od_dbs_timer(struct cpufreq_policy *policy)
/************************** sysfs interface ************************/
static struct dbs_governor od_dbs_gov;
-static ssize_t store_io_is_busy(struct dbs_data *dbs_data, const char *buf,
- size_t count)
+static ssize_t store_io_is_busy(struct gov_attr_set *attr_set, const char *buf,
+ size_t count)
{
+ struct dbs_data *dbs_data = to_dbs_data(attr_set);
unsigned int input;
int ret;
@@ -224,9 +225,10 @@ static ssize_t store_io_is_busy(struct dbs_data *dbs_data, const char *buf,
return count;
}
-static ssize_t store_up_threshold(struct dbs_data *dbs_data, const char *buf,
- size_t count)
+static ssize_t store_up_threshold(struct gov_attr_set *attr_set,
+ const char *buf, size_t count)
{
+ struct dbs_data *dbs_data = to_dbs_data(attr_set);
unsigned int input;
int ret;
ret = sscanf(buf, "%u", &input);
@@ -240,9 +242,10 @@ static ssize_t store_up_threshold(struct dbs_data *dbs_data, const char *buf,
return count;
}
-static ssize_t store_sampling_down_factor(struct dbs_data *dbs_data,
- const char *buf, size_t count)
+static ssize_t store_sampling_down_factor(struct gov_attr_set *attr_set,
+ const char *buf, size_t count)
{
+ struct dbs_data *dbs_data = to_dbs_data(attr_set);
struct policy_dbs_info *policy_dbs;
unsigned int input;
int ret;
@@ -254,7 +257,7 @@ static ssize_t store_sampling_down_factor(struct dbs_data *dbs_data,
dbs_data->sampling_down_factor = input;
/* Reset down sampling multiplier in case it was active */
- list_for_each_entry(policy_dbs, &dbs_data->policy_dbs_list, list) {
+ list_for_each_entry(policy_dbs, &attr_set->policy_list, list) {
/*
* Doing this without locking might lead to using different
* rate_mult values in od_update() and od_dbs_timer().
@@ -267,9 +270,10 @@ static ssize_t store_sampling_down_factor(struct dbs_data *dbs_data,
return count;
}
-static ssize_t store_ignore_nice_load(struct dbs_data *dbs_data,
- const char *buf, size_t count)
+static ssize_t store_ignore_nice_load(struct gov_attr_set *attr_set,
+ const char *buf, size_t count)
{
+ struct dbs_data *dbs_data = to_dbs_data(attr_set);
unsigned int input;
int ret;
@@ -291,9 +295,10 @@ static ssize_t store_ignore_nice_load(struct dbs_data *dbs_data,
return count;
}
-static ssize_t store_powersave_bias(struct dbs_data *dbs_data, const char *buf,
- size_t count)
+static ssize_t store_powersave_bias(struct gov_attr_set *attr_set,
+ const char *buf, size_t count)
{
+ struct dbs_data *dbs_data = to_dbs_data(attr_set);
struct od_dbs_tuners *od_tuners = dbs_data->tuners;
struct policy_dbs_info *policy_dbs;
unsigned int input;
@@ -308,7 +313,7 @@ static ssize_t store_powersave_bias(struct dbs_data *dbs_data, const char *buf,
od_tuners->powersave_bias = input;
- list_for_each_entry(policy_dbs, &dbs_data->policy_dbs_list, list)
+ list_for_each_entry(policy_dbs, &attr_set->policy_list, list)
ondemand_powersave_bias_init(policy_dbs->policy);
return count;
diff --git a/drivers/cpufreq/cpufreq_userspace.c b/drivers/cpufreq/cpufreq_userspace.c
index 4d16f45ee1da..9f3dec9a3f36 100644
--- a/drivers/cpufreq/cpufreq_userspace.c
+++ b/drivers/cpufreq/cpufreq_userspace.c
@@ -17,6 +17,7 @@
#include <linux/init.h>
#include <linux/module.h>
#include <linux/mutex.h>
+#include <linux/slab.h>
static DEFINE_PER_CPU(unsigned int, cpu_is_managed);
static DEFINE_MUTEX(userspace_mutex);
@@ -31,6 +32,7 @@ static DEFINE_MUTEX(userspace_mutex);
static int cpufreq_set(struct cpufreq_policy *policy, unsigned int freq)
{
int ret = -EINVAL;
+ unsigned int *setspeed = policy->governor_data;
pr_debug("cpufreq_set for cpu %u, freq %u kHz\n", policy->cpu, freq);
@@ -38,6 +40,8 @@ static int cpufreq_set(struct cpufreq_policy *policy, unsigned int freq)
if (!per_cpu(cpu_is_managed, policy->cpu))
goto err;
+ *setspeed = freq;
+
ret = __cpufreq_driver_target(policy, freq, CPUFREQ_RELATION_L);
err:
mutex_unlock(&userspace_mutex);
@@ -49,19 +53,45 @@ static ssize_t show_speed(struct cpufreq_policy *policy, char *buf)
return sprintf(buf, "%u\n", policy->cur);
}
+static int cpufreq_userspace_policy_init(struct cpufreq_policy *policy)
+{
+ unsigned int *setspeed;
+
+ setspeed = kzalloc(sizeof(*setspeed), GFP_KERNEL);
+ if (!setspeed)
+ return -ENOMEM;
+
+ policy->governor_data = setspeed;
+ return 0;
+}
+
static int cpufreq_governor_userspace(struct cpufreq_policy *policy,
unsigned int event)
{
+ unsigned int *setspeed = policy->governor_data;
unsigned int cpu = policy->cpu;
int rc = 0;
+ if (event == CPUFREQ_GOV_POLICY_INIT)
+ return cpufreq_userspace_policy_init(policy);
+
+ if (!setspeed)
+ return -EINVAL;
+
switch (event) {
+ case CPUFREQ_GOV_POLICY_EXIT:
+ mutex_lock(&userspace_mutex);
+ policy->governor_data = NULL;
+ kfree(setspeed);
+ mutex_unlock(&userspace_mutex);
+ break;
case CPUFREQ_GOV_START:
BUG_ON(!policy->cur);
pr_debug("started managing cpu %u\n", cpu);
mutex_lock(&userspace_mutex);
per_cpu(cpu_is_managed, cpu) = 1;
+ *setspeed = policy->cur;
mutex_unlock(&userspace_mutex);
break;
case CPUFREQ_GOV_STOP:
@@ -69,20 +99,23 @@ static int cpufreq_governor_userspace(struct cpufreq_policy *policy,
mutex_lock(&userspace_mutex);
per_cpu(cpu_is_managed, cpu) = 0;
+ *setspeed = 0;
mutex_unlock(&userspace_mutex);
break;
case CPUFREQ_GOV_LIMITS:
mutex_lock(&userspace_mutex);
- pr_debug("limit event for cpu %u: %u - %u kHz, currently %u kHz\n",
- cpu, policy->min, policy->max,
- policy->cur);
+ pr_debug("limit event for cpu %u: %u - %u kHz, currently %u kHz, last set to %u kHz\n",
+ cpu, policy->min, policy->max, policy->cur, *setspeed);
- if (policy->max < policy->cur)
+ if (policy->max < *setspeed)
__cpufreq_driver_target(policy, policy->max,
CPUFREQ_RELATION_H);
- else if (policy->min > policy->cur)
+ else if (policy->min > *setspeed)
__cpufreq_driver_target(policy, policy->min,
CPUFREQ_RELATION_L);
+ else
+ __cpufreq_driver_target(policy, *setspeed,
+ CPUFREQ_RELATION_L);
mutex_unlock(&userspace_mutex);
break;
}
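The effect of remembering the user's request is easiest to see in a sequence like the one below; the frequencies are invented for illustration.

/*
 * 1. user writes 1200000 to scaling_setspeed      -> *setspeed = 1200000
 * 2. thermal code lowers policy->max to 800000    -> GOV_LIMITS clamps to 800000
 * 3. policy->max is restored to 1500000           -> GOV_LIMITS retargets
 *    *setspeed (1200000) instead of staying wherever policy->cur ended up
 */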
diff --git a/drivers/cpufreq/e_powersaver.c b/drivers/cpufreq/e_powersaver.c
index 4085244c8a67..cdf097b29862 100644
--- a/drivers/cpufreq/e_powersaver.c
+++ b/drivers/cpufreq/e_powersaver.c
@@ -6,6 +6,8 @@
* BIG FAT DISCLAIMER: Work in progress code. Possibly *dangerous*
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
@@ -20,7 +22,7 @@
#include <asm/msr.h>
#include <asm/tsc.h>
-#if defined CONFIG_ACPI_PROCESSOR || defined CONFIG_ACPI_PROCESSOR_MODULE
+#if IS_ENABLED(CONFIG_ACPI_PROCESSOR)
#include <linux/acpi.h>
#include <acpi/processor.h>
#endif
@@ -33,7 +35,7 @@
struct eps_cpu_data {
u32 fsb;
-#if defined CONFIG_ACPI_PROCESSOR || defined CONFIG_ACPI_PROCESSOR_MODULE
+#if IS_ENABLED(CONFIG_ACPI_PROCESSOR)
u32 bios_limit;
#endif
struct cpufreq_frequency_table freq_table[];
@@ -46,7 +48,7 @@ static int freq_failsafe_off;
static int voltage_failsafe_off;
static int set_max_voltage;
-#if defined CONFIG_ACPI_PROCESSOR || defined CONFIG_ACPI_PROCESSOR_MODULE
+#if IS_ENABLED(CONFIG_ACPI_PROCESSOR)
static int ignore_acpi_limit;
static struct acpi_processor_performance *eps_acpi_cpu_perf;
@@ -141,11 +143,9 @@ static int eps_set_state(struct eps_cpu_data *centaur,
/* Print voltage and multiplier */
rdmsr(MSR_IA32_PERF_STATUS, lo, hi);
current_voltage = lo & 0xff;
- printk(KERN_INFO "eps: Current voltage = %dmV\n",
- current_voltage * 16 + 700);
+ pr_info("Current voltage = %dmV\n", current_voltage * 16 + 700);
current_multiplier = (lo >> 8) & 0xff;
- printk(KERN_INFO "eps: Current multiplier = %d\n",
- current_multiplier);
+ pr_info("Current multiplier = %d\n", current_multiplier);
}
#endif
return 0;
@@ -166,7 +166,7 @@ static int eps_target(struct cpufreq_policy *policy, unsigned int index)
dest_state = centaur->freq_table[index].driver_data & 0xffff;
ret = eps_set_state(centaur, policy, dest_state);
if (ret)
- printk(KERN_ERR "eps: Timeout!\n");
+ pr_err("Timeout!\n");
return ret;
}
@@ -186,7 +186,7 @@ static int eps_cpu_init(struct cpufreq_policy *policy)
int k, step, voltage;
int ret;
int states;
-#if defined CONFIG_ACPI_PROCESSOR || defined CONFIG_ACPI_PROCESSOR_MODULE
+#if IS_ENABLED(CONFIG_ACPI_PROCESSOR)
unsigned int limit;
#endif
@@ -194,36 +194,36 @@ static int eps_cpu_init(struct cpufreq_policy *policy)
return -ENODEV;
/* Check brand */
- printk(KERN_INFO "eps: Detected VIA ");
+ pr_info("Detected VIA ");
switch (c->x86_model) {
case 10:
rdmsr(0x1153, lo, hi);
brand = (((lo >> 2) ^ lo) >> 18) & 3;
- printk(KERN_CONT "Model A ");
+ pr_cont("Model A ");
break;
case 13:
rdmsr(0x1154, lo, hi);
brand = (((lo >> 4) ^ (lo >> 2))) & 0x000000ff;
- printk(KERN_CONT "Model D ");
+ pr_cont("Model D ");
break;
}
switch (brand) {
case EPS_BRAND_C7M:
- printk(KERN_CONT "C7-M\n");
+ pr_cont("C7-M\n");
break;
case EPS_BRAND_C7:
- printk(KERN_CONT "C7\n");
+ pr_cont("C7\n");
break;
case EPS_BRAND_EDEN:
- printk(KERN_CONT "Eden\n");
+ pr_cont("Eden\n");
break;
case EPS_BRAND_C7D:
- printk(KERN_CONT "C7-D\n");
+ pr_cont("C7-D\n");
break;
case EPS_BRAND_C3:
- printk(KERN_CONT "C3\n");
+ pr_cont("C3\n");
return -ENODEV;
break;
}
@@ -235,7 +235,7 @@ static int eps_cpu_init(struct cpufreq_policy *policy)
/* Can be locked at 0 */
rdmsrl(MSR_IA32_MISC_ENABLE, val);
if (!(val & MSR_IA32_MISC_ENABLE_ENHANCED_SPEEDSTEP)) {
- printk(KERN_INFO "eps: Can't enable Enhanced PowerSaver\n");
+ pr_info("Can't enable Enhanced PowerSaver\n");
return -ENODEV;
}
}
@@ -243,22 +243,19 @@ static int eps_cpu_init(struct cpufreq_policy *policy)
/* Print voltage and multiplier */
rdmsr(MSR_IA32_PERF_STATUS, lo, hi);
current_voltage = lo & 0xff;
- printk(KERN_INFO "eps: Current voltage = %dmV\n",
- current_voltage * 16 + 700);
+ pr_info("Current voltage = %dmV\n", current_voltage * 16 + 700);
current_multiplier = (lo >> 8) & 0xff;
- printk(KERN_INFO "eps: Current multiplier = %d\n", current_multiplier);
+ pr_info("Current multiplier = %d\n", current_multiplier);
/* Print limits */
max_voltage = hi & 0xff;
- printk(KERN_INFO "eps: Highest voltage = %dmV\n",
- max_voltage * 16 + 700);
+ pr_info("Highest voltage = %dmV\n", max_voltage * 16 + 700);
max_multiplier = (hi >> 8) & 0xff;
- printk(KERN_INFO "eps: Highest multiplier = %d\n", max_multiplier);
+ pr_info("Highest multiplier = %d\n", max_multiplier);
min_voltage = (hi >> 16) & 0xff;
- printk(KERN_INFO "eps: Lowest voltage = %dmV\n",
- min_voltage * 16 + 700);
+ pr_info("Lowest voltage = %dmV\n", min_voltage * 16 + 700);
min_multiplier = (hi >> 24) & 0xff;
- printk(KERN_INFO "eps: Lowest multiplier = %d\n", min_multiplier);
+ pr_info("Lowest multiplier = %d\n", min_multiplier);
/* Sanity checks */
if (current_multiplier == 0 || max_multiplier == 0
@@ -276,34 +273,30 @@ static int eps_cpu_init(struct cpufreq_policy *policy)
/* Check for systems using underclocked CPU */
if (!freq_failsafe_off && max_multiplier != current_multiplier) {
- printk(KERN_INFO "eps: Your processor is running at different "
- "frequency then its maximum. Aborting.\n");
- printk(KERN_INFO "eps: You can use freq_failsafe_off option "
- "to disable this check.\n");
+ pr_info("Your processor is running at different frequency then its maximum. Aborting.\n");
+ pr_info("You can use freq_failsafe_off option to disable this check.\n");
return -EINVAL;
}
if (!voltage_failsafe_off && max_voltage != current_voltage) {
- printk(KERN_INFO "eps: Your processor is running at different "
- "voltage then its maximum. Aborting.\n");
- printk(KERN_INFO "eps: You can use voltage_failsafe_off "
- "option to disable this check.\n");
+ pr_info("Your processor is running at different voltage then its maximum. Aborting.\n");
+ pr_info("You can use voltage_failsafe_off option to disable this check.\n");
return -EINVAL;
}
/* Calc FSB speed */
fsb = cpu_khz / current_multiplier;
-#if defined CONFIG_ACPI_PROCESSOR || defined CONFIG_ACPI_PROCESSOR_MODULE
+#if IS_ENABLED(CONFIG_ACPI_PROCESSOR)
/* Check for ACPI processor speed limit */
if (!ignore_acpi_limit && !eps_acpi_init()) {
if (!acpi_processor_get_bios_limit(policy->cpu, &limit)) {
- printk(KERN_INFO "eps: ACPI limit %u.%uGHz\n",
+ pr_info("ACPI limit %u.%uGHz\n",
limit/1000000,
(limit%1000000)/10000);
eps_acpi_exit(policy);
/* Check if max_multiplier is in BIOS limits */
if (limit && max_multiplier * fsb > limit) {
- printk(KERN_INFO "eps: Aborting.\n");
+ pr_info("Aborting\n");
return -EINVAL;
}
}
@@ -319,8 +312,7 @@ static int eps_cpu_init(struct cpufreq_policy *policy)
v = (set_max_voltage - 700) / 16;
/* Check if voltage is within limits */
if (v >= min_voltage && v <= max_voltage) {
- printk(KERN_INFO "eps: Setting %dmV as maximum.\n",
- v * 16 + 700);
+ pr_info("Setting %dmV as maximum\n", v * 16 + 700);
max_voltage = v;
}
}
@@ -341,7 +333,7 @@ static int eps_cpu_init(struct cpufreq_policy *policy)
/* Copy basic values */
centaur->fsb = fsb;
-#if defined CONFIG_ACPI_PROCESSOR || defined CONFIG_ACPI_PROCESSOR_MODULE
+#if IS_ENABLED(CONFIG_ACPI_PROCESSOR)
centaur->bios_limit = limit;
#endif
@@ -426,7 +418,7 @@ module_param(freq_failsafe_off, int, 0644);
MODULE_PARM_DESC(freq_failsafe_off, "Disable current vs max frequency check");
module_param(voltage_failsafe_off, int, 0644);
MODULE_PARM_DESC(voltage_failsafe_off, "Disable current vs max voltage check");
-#if defined CONFIG_ACPI_PROCESSOR || defined CONFIG_ACPI_PROCESSOR_MODULE
+#if IS_ENABLED(CONFIG_ACPI_PROCESSOR)
module_param(ignore_acpi_limit, int, 0644);
MODULE_PARM_DESC(ignore_acpi_limit, "Don't check ACPI's processor speed limit");
#endif
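
Aside: the switch from "defined CONFIG_ACPI_PROCESSOR || defined CONFIG_ACPI_PROCESSOR_MODULE" to IS_ENABLED(CONFIG_ACPI_PROCESSOR) works because Kconfig defines CONFIG_FOO for built-in options and CONFIG_FOO_MODULE for modular ones, and IS_ENABLED() evaluates to 1 in either case. A simplified sketch of the mechanism, compile-testable with gcc/clang (an approximation, not the kernel's exact kconfig.h macros):

#include <stdio.h>

#define CONFIG_ACPI_PROCESSOR_MODULE 1	/* pretend the option is =m */

/* Kconfig options expand to 1; the placeholder trick detects that. */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(__ignored, val, ...) val
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define __is_defined(x) ___is_defined(x)
#define IS_ENABLED(option) (__is_defined(option) || __is_defined(option##_MODULE))

int main(void)
{
	/* prints 1: the =m case is covered without naming CONFIG_..._MODULE */
	printf("%d\n", IS_ENABLED(CONFIG_ACPI_PROCESSOR));
	return 0;
}
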
diff --git a/drivers/cpufreq/elanfreq.c b/drivers/cpufreq/elanfreq.c
index 1c06e786c9ba..bfce11cba1df 100644
--- a/drivers/cpufreq/elanfreq.c
+++ b/drivers/cpufreq/elanfreq.c
@@ -16,6 +16,8 @@
*
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
@@ -185,7 +187,7 @@ static int elanfreq_cpu_init(struct cpufreq_policy *policy)
static int __init elanfreq_setup(char *str)
{
max_freq = simple_strtoul(str, &str, 0);
- printk(KERN_WARNING "You're using the deprecated elanfreq command line option. Use elanfreq.max_freq instead, please!\n");
+ pr_warn("You're using the deprecated elanfreq command line option. Use elanfreq.max_freq instead, please!\n");
return 1;
}
__setup("elanfreq=", elanfreq_setup);
diff --git a/drivers/cpufreq/hisi-acpu-cpufreq.c b/drivers/cpufreq/hisi-acpu-cpufreq.c
deleted file mode 100644
index 026d5b2224de..000000000000
--- a/drivers/cpufreq/hisi-acpu-cpufreq.c
+++ /dev/null
@@ -1,42 +0,0 @@
-/*
- * Hisilicon Platforms Using ACPU CPUFreq Support
- *
- * Copyright (c) 2015 Hisilicon Limited.
- * Copyright (c) 2015 Linaro Limited.
- *
- * Leo Yan <leo.yan@linaro.org>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- * This program is distributed "as is" WITHOUT ANY WARRANTY of any
- * kind, whether express or implied; without even the implied warranty
- * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- */
-
-#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
-
-#include <linux/err.h>
-#include <linux/init.h>
-#include <linux/kernel.h>
-#include <linux/module.h>
-#include <linux/of.h>
-#include <linux/platform_device.h>
-
-static int __init hisi_acpu_cpufreq_driver_init(void)
-{
- struct platform_device *pdev;
-
- if (!of_machine_is_compatible("hisilicon,hi6220"))
- return -ENODEV;
-
- pdev = platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
- return PTR_ERR_OR_ZERO(pdev);
-}
-module_init(hisi_acpu_cpufreq_driver_init);
-
-MODULE_AUTHOR("Leo Yan <leo.yan@linaro.org>");
-MODULE_DESCRIPTION("Hisilicon acpu cpufreq driver");
-MODULE_LICENSE("GPL v2");
diff --git a/drivers/cpufreq/ia64-acpi-cpufreq.c b/drivers/cpufreq/ia64-acpi-cpufreq.c
index 0202429f1c5b..759612da4fdc 100644
--- a/drivers/cpufreq/ia64-acpi-cpufreq.c
+++ b/drivers/cpufreq/ia64-acpi-cpufreq.c
@@ -8,6 +8,8 @@
* Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/module.h>
@@ -118,8 +120,7 @@ processor_get_freq (
if (ret) {
set_cpus_allowed_ptr(current, &saved_mask);
- printk(KERN_WARNING "get performance failed with error %d\n",
- ret);
+ pr_warn("get performance failed with error %d\n", ret);
ret = 0;
goto migrate_end;
}
@@ -177,7 +178,7 @@ processor_set_freq (
ret = processor_set_pstate(value);
if (ret) {
- printk(KERN_WARNING "Transition failed with error %d\n", ret);
+ pr_warn("Transition failed with error %d\n", ret);
retval = -ENODEV;
goto migrate_end;
}
@@ -291,8 +292,7 @@ acpi_cpufreq_cpu_init (
/* notify BIOS that we exist */
acpi_processor_notify_smm(THIS_MODULE);
- printk(KERN_INFO "acpi-cpufreq: CPU%u - ACPI performance management "
- "activated.\n", cpu);
+ pr_info("CPU%u - ACPI performance management activated\n", cpu);
for (i = 0; i < data->acpi_data.state_count; i++)
pr_debug(" %cP%d: %d MHz, %d mW, %d uS, %d uS, 0x%x 0x%x\n",
diff --git a/drivers/cpufreq/intel_pstate.c b/drivers/cpufreq/intel_pstate.c
index b230ebaae66c..3a9c4325d6e2 100644
--- a/drivers/cpufreq/intel_pstate.c
+++ b/drivers/cpufreq/intel_pstate.c
@@ -10,6 +10,8 @@
* of the License.
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/kernel.h>
#include <linux/kernel_stat.h>
#include <linux/module.h>
@@ -39,10 +41,17 @@
#define ATOM_TURBO_RATIOS 0x66c
#define ATOM_TURBO_VIDS 0x66d
+#ifdef CONFIG_ACPI
+#include <acpi/processor.h>
+#endif
+
#define FRAC_BITS 8
#define int_tofp(X) ((int64_t)(X) << FRAC_BITS)
#define fp_toint(X) ((X) >> FRAC_BITS)
+#define EXT_BITS 6
+#define EXT_FRAC_BITS (EXT_BITS + FRAC_BITS)
+
static inline int32_t mul_fp(int32_t x, int32_t y)
{
return ((int64_t)x * (int64_t)y) >> FRAC_BITS;
@@ -64,12 +73,22 @@ static inline int ceiling_fp(int32_t x)
return ret;
}
+static inline u64 mul_ext_fp(u64 x, u64 y)
+{
+ return (x * y) >> EXT_FRAC_BITS;
+}
+
+static inline u64 div_ext_fp(u64 x, u64 y)
+{
+ return div64_u64(x << EXT_FRAC_BITS, y);
+}
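
Aside: the new EXT_FRAC_BITS helpers let core_avg_perf hold the raw APERF/MPERF ratio with 14 fractional bits (8 + 6), which get_avg_frequency() and get_avg_pstate() later scale back into a frequency or a P-state. A standalone sketch with made-up counter deltas (illustrative values, not real MSR readings):

#include <stdio.h>
#include <stdint.h>

#define FRAC_BITS	8
#define EXT_BITS	6
#define EXT_FRAC_BITS	(EXT_BITS + FRAC_BITS)

static uint64_t mul_ext_fp(uint64_t x, uint64_t y)
{
	return (x * y) >> EXT_FRAC_BITS;
}

static uint64_t div_ext_fp(uint64_t x, uint64_t y)
{
	return (x << EXT_FRAC_BITS) / y;	/* the kernel uses div64_u64() here */
}

int main(void)
{
	uint64_t aperf = 180000, mperf = 200000;		/* hypothetical deltas */
	uint64_t max_pstate_physical = 24, scaling = 100000;	/* 24 x 100 MHz */

	/* APERF/MPERF = 0.9 becomes ~14745 in 14-bit fixed point */
	uint64_t core_avg_perf = div_ext_fp(aperf, mperf);

	/* as in get_avg_frequency(): ~2160000 kHz, i.e. 90% of 2.4 GHz */
	uint64_t avg_khz = mul_ext_fp(core_avg_perf,
				      max_pstate_physical * scaling);

	printf("core_avg_perf=%llu avg=%llu kHz\n",
	       (unsigned long long)core_avg_perf,
	       (unsigned long long)avg_khz);
	return 0;
}
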
+
/**
* struct sample - Store performance sample
- * @core_pct_busy: Ratio of APERF/MPERF in percent, which is actual
+ * @core_avg_perf: Ratio of APERF/MPERF which is the actual average
* performance during last sample period
* @busy_scaled: Scaled busy value which is used to calculate next
- * P state. This can be different than core_pct_busy
+ * P state. This can be different than core_avg_perf
* to account for cpu idle period
* @aperf: Difference of actual performance frequency clock count
* read from APERF MSR between last and current sample
@@ -84,7 +103,7 @@ static inline int ceiling_fp(int32_t x)
* data for choosing next P State.
*/
struct sample {
- int32_t core_pct_busy;
+ int32_t core_avg_perf;
int32_t busy_scaled;
u64 aperf;
u64 mperf;
@@ -162,6 +181,7 @@ struct _pid {
* struct cpudata - Per CPU instance data storage
* @cpu: CPU number for this instance data
* @update_util: CPUFreq utility callback information
+ * @update_util_set: CPUFreq utility callback is set
* @pstate: Stores P state limits for this CPU
* @vid: Stores VID limits for this CPU
* @pid: Stores PID parameters for this CPU
@@ -172,6 +192,8 @@ struct _pid {
* @prev_cummulative_iowait: IO Wait time difference from last and
* current sample
* @sample: Storage for storing last Sample data
+ * @acpi_perf_data: Stores ACPI perf information read from _PSS
+ * @valid_pss_table: Set to true for valid ACPI _PSS entries found
*
* This structure stores per CPU instance data for all CPUs.
*/
@@ -179,6 +201,7 @@ struct cpudata {
int cpu;
struct update_util_data update_util;
+ bool update_util_set;
struct pstate_data pstate;
struct vid_data vid;
@@ -190,6 +213,10 @@ struct cpudata {
u64 prev_tsc;
u64 prev_cummulative_iowait;
struct sample sample;
+#ifdef CONFIG_ACPI
+ struct acpi_processor_performance acpi_perf_data;
+ bool valid_pss_table;
+#endif
};
static struct cpudata **all_cpu_data;
@@ -258,6 +285,9 @@ static struct pstate_adjust_policy pid_params;
static struct pstate_funcs pstate_funcs;
static int hwp_active;
+#ifdef CONFIG_ACPI
+static bool acpi_ppc;
+#endif
/**
* struct perf_limits - Store user and policy limits
@@ -331,6 +361,124 @@ static struct perf_limits *limits = &performance_limits;
static struct perf_limits *limits = &powersave_limits;
#endif
+#ifdef CONFIG_ACPI
+
+static bool intel_pstate_get_ppc_enable_status(void)
+{
+ if (acpi_gbl_FADT.preferred_profile == PM_ENTERPRISE_SERVER ||
+ acpi_gbl_FADT.preferred_profile == PM_PERFORMANCE_SERVER)
+ return true;
+
+ return acpi_ppc;
+}
+
+/*
+ * The max target pstate ratio is an 8-bit value in both the PLATFORM_INFO MSR
+ * and the TURBO_RATIO_LIMIT MSR, which the pstate driver stores in the
+ * max_pstate and max_turbo_pstate fields. The PERF_CTL MSR holds a 16-bit
+ * value for the P-state ratio, of which only the high 8 bits are used; for
+ * example, 0x1700 sets target ratio 0x17. The _PSS control value is stored in
+ * a format that can be written directly to the PERF_CTL MSR, but in the
+ * intel_pstate driver this shift happens during the write to PERF_CTL (e.g.
+ * core_set_pstate() for cores). This function converts the _PSS control value
+ * to the intel_pstate driver format for comparison and assignment.
+ */
+static int convert_to_native_pstate_format(struct cpudata *cpu, int index)
+{
+ return cpu->acpi_perf_data.states[index].control >> 8;
+}
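
To make the comment's example concrete: a _PSS control value of 0x1700 is already in PERF_CTL layout, with the target ratio in bits 15:8, so convert_to_native_pstate_format() returning control >> 8 yields 0x1700 >> 8 = 0x17 = 23, the same units as max_pstate and max_turbo_pstate, which is what makes the later comparison against cpu->pstate.max_pstate valid.
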
+
+static void intel_pstate_init_acpi_perf_limits(struct cpufreq_policy *policy)
+{
+ struct cpudata *cpu;
+ int turbo_pss_ctl;
+ int ret;
+ int i;
+
+ if (hwp_active)
+ return;
+
+ if (!intel_pstate_get_ppc_enable_status())
+ return;
+
+ cpu = all_cpu_data[policy->cpu];
+
+ ret = acpi_processor_register_performance(&cpu->acpi_perf_data,
+ policy->cpu);
+ if (ret)
+ return;
+
+ /*
+ * Check if the control value in _PSS is for PERF_CTL MSR, which should
+ * guarantee that the states returned by it map to the states in our
+ * list directly.
+ */
+ if (cpu->acpi_perf_data.control_register.space_id !=
+ ACPI_ADR_SPACE_FIXED_HARDWARE)
+ goto err;
+
+ /*
+ * If there is only one entry in _PSS, simply ignore _PSS and continue as
+ * usual without taking _PSS into account
+ */
+ if (cpu->acpi_perf_data.state_count < 2)
+ goto err;
+
+ pr_debug("CPU%u - ACPI _PSS perf data\n", policy->cpu);
+ for (i = 0; i < cpu->acpi_perf_data.state_count; i++) {
+ pr_debug(" %cP%d: %u MHz, %u mW, 0x%x\n",
+ (i == cpu->acpi_perf_data.state ? '*' : ' '), i,
+ (u32) cpu->acpi_perf_data.states[i].core_frequency,
+ (u32) cpu->acpi_perf_data.states[i].power,
+ (u32) cpu->acpi_perf_data.states[i].control);
+ }
+
+ /*
+ * The _PSS table doesn't contain the whole turbo frequency range;
+ * it only contains +1 MHz above the max non-turbo frequency, with
+ * a control value corresponding to the max turbo ratio. But when
+ * cpufreq set_policy is called with this max frequency, performance
+ * is reduced, because this driver uses the real max turbo frequency
+ * as the max frequency. So correct this entry in the _PSS table to
+ * the real max turbo frequency based on the turbo ratio. Also
+ * convert to MHz, as the _PSS frequencies are in MHz.
+ */
+ turbo_pss_ctl = convert_to_native_pstate_format(cpu, 0);
+ if (turbo_pss_ctl > cpu->pstate.max_pstate)
+ cpu->acpi_perf_data.states[0].core_frequency =
+ policy->cpuinfo.max_freq / 1000;
+ cpu->valid_pss_table = true;
+ pr_info("_PPC limits will be enforced\n");
+
+ return;
+
+ err:
+ cpu->valid_pss_table = false;
+ acpi_processor_unregister_performance(policy->cpu);
+}
+
+static void intel_pstate_exit_perf_limits(struct cpufreq_policy *policy)
+{
+ struct cpudata *cpu;
+
+ cpu = all_cpu_data[policy->cpu];
+ if (!cpu->valid_pss_table)
+ return;
+
+ acpi_processor_unregister_performance(policy->cpu);
+}
+
+#else
+static void intel_pstate_init_acpi_perf_limits(struct cpufreq_policy *policy)
+{
+}
+
+static void intel_pstate_exit_perf_limits(struct cpufreq_policy *policy)
+{
+}
+#endif
+
static inline void pid_reset(struct _pid *pid, int setpoint, int busy,
int deadband, int integral) {
pid->setpoint = int_tofp(setpoint);
@@ -341,17 +489,17 @@ static inline void pid_reset(struct _pid *pid, int setpoint, int busy,
static inline void pid_p_gain_set(struct _pid *pid, int percent)
{
- pid->p_gain = div_fp(int_tofp(percent), int_tofp(100));
+ pid->p_gain = div_fp(percent, 100);
}
static inline void pid_i_gain_set(struct _pid *pid, int percent)
{
- pid->i_gain = div_fp(int_tofp(percent), int_tofp(100));
+ pid->i_gain = div_fp(percent, 100);
}
static inline void pid_d_gain_set(struct _pid *pid, int percent)
{
- pid->d_gain = div_fp(int_tofp(percent), int_tofp(100));
+ pid->d_gain = div_fp(percent, 100);
}
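
The repeated div_fp(int_tofp(x), int_tofp(100)) to div_fp(x, 100) simplifications in this file assume div_fp() already shifts its numerator left by FRAC_BITS before dividing (as the driver's other fixed-point helpers suggest); pre-converting both operands then just scales numerator and denominator by the same factor: ((p << 8) << 8) / (100 << 8) = (p << 8) / 100. For p = 75, both forms give 75 * 256 / 100 = 192, i.e. 0.75 in 24.8 fixed point.
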
static signed int pid_calc(struct _pid *pid, int32_t busy)
@@ -537,7 +685,7 @@ static ssize_t show_turbo_pct(struct kobject *kobj,
total = cpu->pstate.turbo_pstate - cpu->pstate.min_pstate + 1;
no_turbo = cpu->pstate.max_pstate - cpu->pstate.min_pstate + 1;
- turbo_fp = div_fp(int_tofp(no_turbo), int_tofp(total));
+ turbo_fp = div_fp(no_turbo, total);
turbo_pct = 100 - fp_toint(mul_fp(turbo_fp, int_tofp(100)));
return sprintf(buf, "%u\n", turbo_pct);
}
@@ -579,7 +727,7 @@ static ssize_t store_no_turbo(struct kobject *a, struct attribute *b,
update_turbo_state();
if (limits->turbo_disabled) {
- pr_warn("intel_pstate: Turbo disabled by BIOS or unavailable on processor\n");
+ pr_warn("Turbo disabled by BIOS or unavailable on processor\n");
return -EPERM;
}
@@ -608,8 +756,7 @@ static ssize_t store_max_perf_pct(struct kobject *a, struct attribute *b,
limits->max_perf_pct);
limits->max_perf_pct = max(limits->min_perf_pct,
limits->max_perf_pct);
- limits->max_perf = div_fp(int_tofp(limits->max_perf_pct),
- int_tofp(100));
+ limits->max_perf = div_fp(limits->max_perf_pct, 100);
if (hwp_active)
intel_pstate_hwp_set_online_cpus();
@@ -633,8 +780,7 @@ static ssize_t store_min_perf_pct(struct kobject *a, struct attribute *b,
limits->min_perf_pct);
limits->min_perf_pct = min(limits->max_perf_pct,
limits->min_perf_pct);
- limits->min_perf = div_fp(int_tofp(limits->min_perf_pct),
- int_tofp(100));
+ limits->min_perf = div_fp(limits->min_perf_pct, 100);
if (hwp_active)
intel_pstate_hwp_set_online_cpus();
@@ -1019,15 +1165,11 @@ static void intel_pstate_get_cpu_pstates(struct cpudata *cpu)
intel_pstate_set_min_pstate(cpu);
}
-static inline void intel_pstate_calc_busy(struct cpudata *cpu)
+static inline void intel_pstate_calc_avg_perf(struct cpudata *cpu)
{
struct sample *sample = &cpu->sample;
- int64_t core_pct;
-
- core_pct = int_tofp(sample->aperf) * int_tofp(100);
- core_pct = div64_u64(core_pct, int_tofp(sample->mperf));
- sample->core_pct_busy = (int32_t)core_pct;
+ sample->core_avg_perf = div_ext_fp(sample->aperf, sample->mperf);
}
static inline bool intel_pstate_sample(struct cpudata *cpu, u64 time)
@@ -1070,9 +1212,14 @@ static inline bool intel_pstate_sample(struct cpudata *cpu, u64 time)
static inline int32_t get_avg_frequency(struct cpudata *cpu)
{
- return fp_toint(mul_fp(cpu->sample.core_pct_busy,
- int_tofp(cpu->pstate.max_pstate_physical *
- cpu->pstate.scaling / 100)));
+ return mul_ext_fp(cpu->sample.core_avg_perf,
+ cpu->pstate.max_pstate_physical * cpu->pstate.scaling);
+}
+
+static inline int32_t get_avg_pstate(struct cpudata *cpu)
+{
+ return mul_ext_fp(cpu->pstate.max_pstate_physical,
+ cpu->sample.core_avg_perf);
}
static inline int32_t get_target_pstate_use_cpu_load(struct cpudata *cpu)
@@ -1107,49 +1254,43 @@ static inline int32_t get_target_pstate_use_cpu_load(struct cpudata *cpu)
cpu_load = div64_u64(int_tofp(100) * mperf, sample->tsc);
cpu->sample.busy_scaled = cpu_load;
- return cpu->pstate.current_pstate - pid_calc(&cpu->pid, cpu_load);
+ return get_avg_pstate(cpu) - pid_calc(&cpu->pid, cpu_load);
}
static inline int32_t get_target_pstate_use_performance(struct cpudata *cpu)
{
- int32_t core_busy, max_pstate, current_pstate, sample_ratio;
+ int32_t perf_scaled, max_pstate, current_pstate, sample_ratio;
u64 duration_ns;
/*
- * core_busy is the ratio of actual performance to max
- * max_pstate is the max non turbo pstate available
- * current_pstate was the pstate that was requested during
- * the last sample period.
- *
- * We normalize core_busy, which was our actual percent
- * performance to what we requested during the last sample
- * period. The result will be a percentage of busy at a
- * specified pstate.
+ * perf_scaled is the average performance during the last sampling
+ * period scaled by the ratio of the maximum P-state to the P-state
+ * requested last time (in percent). That measures the system's
+ * response to the previous P-state selection.
*/
- core_busy = cpu->sample.core_pct_busy;
- max_pstate = int_tofp(cpu->pstate.max_pstate_physical);
- current_pstate = int_tofp(cpu->pstate.current_pstate);
- core_busy = mul_fp(core_busy, div_fp(max_pstate, current_pstate));
+ max_pstate = cpu->pstate.max_pstate_physical;
+ current_pstate = cpu->pstate.current_pstate;
+ perf_scaled = mul_ext_fp(cpu->sample.core_avg_perf,
+ div_fp(100 * max_pstate, current_pstate));
/*
* Since our utilization update callback will not run unless we are
* in C0, check if the actual elapsed time is significantly greater (3x)
* than our sample interval. If it is, then we were idle for a long
- * enough period of time to adjust our busyness.
+ * enough period of time to adjust our performance metric.
*/
duration_ns = cpu->sample.time - cpu->last_sample_time;
if ((s64)duration_ns > pid_params.sample_rate_ns * 3) {
- sample_ratio = div_fp(int_tofp(pid_params.sample_rate_ns),
- int_tofp(duration_ns));
- core_busy = mul_fp(core_busy, sample_ratio);
+ sample_ratio = div_fp(pid_params.sample_rate_ns, duration_ns);
+ perf_scaled = mul_fp(perf_scaled, sample_ratio);
} else {
sample_ratio = div_fp(100 * cpu->sample.mperf, cpu->sample.tsc);
if (sample_ratio < int_tofp(1))
- core_busy = 0;
+ perf_scaled = 0;
}
- cpu->sample.busy_scaled = core_busy;
- return cpu->pstate.current_pstate - pid_calc(&cpu->pid, core_busy);
+ cpu->sample.busy_scaled = perf_scaled;
+ return cpu->pstate.current_pstate - pid_calc(&cpu->pid, perf_scaled);
}
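
A worked pass through the rewritten math, with made-up sample values: for core_avg_perf of about 0.9 (14745 with 14 fractional bits), max_pstate_physical = 24 and current_pstate = 20, div_fp(100 * 24, 20) = 30720, which is 120.0 with 8 fractional bits (again assuming div_fp() shifts its numerator by FRAC_BITS), and mul_ext_fp(14745, 30720) = 27646, roughly 108.0 in the same 8-bit fixed point. That 108 is the busy value handed to pid_calc(): the core delivered about 108% of the performance the previously requested P-state accounts for.
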
static inline void intel_pstate_update_pstate(struct cpudata *cpu, int pstate)
@@ -1179,7 +1320,7 @@ static inline void intel_pstate_adjust_busy_pstate(struct cpudata *cpu)
intel_pstate_update_pstate(cpu, target_pstate);
sample = &cpu->sample;
- trace_pstate_sample(fp_toint(sample->core_pct_busy),
+ trace_pstate_sample(mul_ext_fp(100, sample->core_avg_perf),
fp_toint(sample->busy_scaled),
from,
cpu->pstate.current_pstate,
@@ -1199,7 +1340,7 @@ static void intel_pstate_update_util(struct update_util_data *data, u64 time,
bool sample_taken = intel_pstate_sample(cpu, time);
if (sample_taken) {
- intel_pstate_calc_busy(cpu);
+ intel_pstate_calc_avg_perf(cpu);
if (!hwp_active)
intel_pstate_adjust_busy_pstate(cpu);
}
@@ -1261,23 +1402,16 @@ static int intel_pstate_init_cpu(unsigned int cpunum)
intel_pstate_busy_pid_reset(cpu);
- cpu->update_util.func = intel_pstate_update_util;
-
- pr_debug("intel_pstate: controlling: cpu %d\n", cpunum);
+ pr_debug("controlling: cpu %d\n", cpunum);
return 0;
}
static unsigned int intel_pstate_get(unsigned int cpu_num)
{
- struct sample *sample;
- struct cpudata *cpu;
+ struct cpudata *cpu = all_cpu_data[cpu_num];
- cpu = all_cpu_data[cpu_num];
- if (!cpu)
- return 0;
- sample = &cpu->sample;
- return get_avg_frequency(cpu);
+ return cpu ? get_avg_frequency(cpu) : 0;
}
static void intel_pstate_set_update_util_hook(unsigned int cpu_num)
@@ -1286,12 +1420,20 @@ static void intel_pstate_set_update_util_hook(unsigned int cpu_num)
/* Prevent intel_pstate_update_util() from using stale data. */
cpu->sample.time = 0;
- cpufreq_set_update_util_data(cpu_num, &cpu->update_util);
+ cpufreq_add_update_util_hook(cpu_num, &cpu->update_util,
+ intel_pstate_update_util);
+ cpu->update_util_set = true;
}
static void intel_pstate_clear_update_util_hook(unsigned int cpu)
{
- cpufreq_set_update_util_data(cpu, NULL);
+ struct cpudata *cpu_data = all_cpu_data[cpu];
+
+ if (!cpu_data->update_util_set)
+ return;
+
+ cpufreq_remove_update_util_hook(cpu);
+ cpu_data->update_util_set = false;
synchronize_sched();
}
@@ -1311,20 +1453,30 @@ static void intel_pstate_set_performance_limits(struct perf_limits *limits)
static int intel_pstate_set_policy(struct cpufreq_policy *policy)
{
+ struct cpudata *cpu;
+
if (!policy->cpuinfo.max_freq)
return -ENODEV;
intel_pstate_clear_update_util_hook(policy->cpu);
+ cpu = all_cpu_data[0];
+ if (cpu->pstate.max_pstate_physical > cpu->pstate.max_pstate &&
+ policy->max < policy->cpuinfo.max_freq &&
+ policy->max > cpu->pstate.max_pstate * cpu->pstate.scaling) {
+ pr_debug("policy->max > max non turbo frequency\n");
+ policy->max = policy->cpuinfo.max_freq;
+ }
+
if (policy->policy == CPUFREQ_POLICY_PERFORMANCE) {
limits = &performance_limits;
if (policy->max >= policy->cpuinfo.max_freq) {
- pr_debug("intel_pstate: set performance\n");
+ pr_debug("set performance\n");
intel_pstate_set_performance_limits(limits);
goto out;
}
} else {
- pr_debug("intel_pstate: set powersave\n");
+ pr_debug("set powersave\n");
limits = &powersave_limits;
}
@@ -1348,10 +1500,8 @@ static int intel_pstate_set_policy(struct cpufreq_policy *policy)
/* Make sure min_perf_pct <= max_perf_pct */
limits->min_perf_pct = min(limits->max_perf_pct, limits->min_perf_pct);
- limits->min_perf = div_fp(int_tofp(limits->min_perf_pct),
- int_tofp(100));
- limits->max_perf = div_fp(int_tofp(limits->max_perf_pct),
- int_tofp(100));
+ limits->min_perf = div_fp(limits->min_perf_pct, 100);
+ limits->max_perf = div_fp(limits->max_perf_pct, 100);
out:
intel_pstate_set_update_util_hook(policy->cpu);
@@ -1377,7 +1527,7 @@ static void intel_pstate_stop_cpu(struct cpufreq_policy *policy)
int cpu_num = policy->cpu;
struct cpudata *cpu = all_cpu_data[cpu_num];
- pr_debug("intel_pstate: CPU %d exiting\n", cpu_num);
+ pr_debug("CPU %d exiting\n", cpu_num);
intel_pstate_clear_update_util_hook(cpu_num);
@@ -1410,12 +1560,20 @@ static int intel_pstate_cpu_init(struct cpufreq_policy *policy)
policy->cpuinfo.min_freq = cpu->pstate.min_pstate * cpu->pstate.scaling;
policy->cpuinfo.max_freq =
cpu->pstate.turbo_pstate * cpu->pstate.scaling;
+ intel_pstate_init_acpi_perf_limits(policy);
policy->cpuinfo.transition_latency = CPUFREQ_ETERNAL;
cpumask_set_cpu(policy->cpu, policy->cpus);
return 0;
}
+static int intel_pstate_cpu_exit(struct cpufreq_policy *policy)
+{
+ intel_pstate_exit_perf_limits(policy);
+
+ return 0;
+}
+
static struct cpufreq_driver intel_pstate_driver = {
.flags = CPUFREQ_CONST_LOOPS,
.verify = intel_pstate_verify_policy,
@@ -1423,6 +1581,7 @@ static struct cpufreq_driver intel_pstate_driver = {
.resume = intel_pstate_hwp_set_policy,
.get = intel_pstate_get,
.init = intel_pstate_cpu_init,
+ .exit = intel_pstate_cpu_exit,
.stop_cpu = intel_pstate_stop_cpu,
.name = "intel_pstate",
};
@@ -1466,8 +1625,7 @@ static void copy_cpu_funcs(struct pstate_funcs *funcs)
}
-#if IS_ENABLED(CONFIG_ACPI)
-#include <acpi/processor.h>
+#ifdef CONFIG_ACPI
static bool intel_pstate_no_acpi_pss(void)
{
@@ -1623,7 +1781,7 @@ hwp_cpu_matched:
if (intel_pstate_platform_pwr_mgmt_exists())
return -ENODEV;
- pr_info("Intel P-state driver initializing.\n");
+ pr_info("Intel P-state driver initializing\n");
all_cpu_data = vzalloc(sizeof(void *) * num_possible_cpus());
if (!all_cpu_data)
@@ -1640,7 +1798,7 @@ hwp_cpu_matched:
intel_pstate_sysfs_expose_params();
if (hwp_active)
- pr_info("intel_pstate: HWP enabled\n");
+ pr_info("HWP enabled\n");
return rc;
out:
@@ -1666,13 +1824,19 @@ static int __init intel_pstate_setup(char *str)
if (!strcmp(str, "disable"))
no_load = 1;
if (!strcmp(str, "no_hwp")) {
- pr_info("intel_pstate: HWP disabled\n");
+ pr_info("HWP disabled\n");
no_hwp = 1;
}
if (!strcmp(str, "force"))
force_load = 1;
if (!strcmp(str, "hwp_only"))
hwp_only = 1;
+
+#ifdef CONFIG_ACPI
+ if (!strcmp(str, "support_acpi_ppc"))
+ acpi_ppc = true;
+#endif
+
return 0;
}
early_param("intel_pstate", intel_pstate_setup);
diff --git a/drivers/cpufreq/longhaul.c b/drivers/cpufreq/longhaul.c
index 0f6b229afcb9..c46a12df40dd 100644
--- a/drivers/cpufreq/longhaul.c
+++ b/drivers/cpufreq/longhaul.c
@@ -21,6 +21,8 @@
* BIG FAT DISCLAIMER: Work in progress code. Possibly *dangerous*
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
@@ -40,8 +42,6 @@
#include "longhaul.h"
-#define PFX "longhaul: "
-
#define TYPE_LONGHAUL_V1 1
#define TYPE_LONGHAUL_V2 2
#define TYPE_POWERSAVER 3
@@ -347,14 +347,13 @@ retry_loop:
freqs.new = calc_speed(longhaul_get_cpu_mult());
/* Check if requested frequency is set. */
if (unlikely(freqs.new != speed)) {
- printk(KERN_INFO PFX "Failed to set requested frequency!\n");
+ pr_info("Failed to set requested frequency!\n");
/* Revision ID = 1 but processor is expecting revision key
* equal to 0. Jumpers at the bottom of processor will change
* multiplier and FSB, but will not change bits in Longhaul
* MSR nor enable voltage scaling. */
if (!revid_errata) {
- printk(KERN_INFO PFX "Enabling \"Ignore Revision ID\" "
- "option.\n");
+ pr_info("Enabling \"Ignore Revision ID\" option\n");
revid_errata = 1;
msleep(200);
goto retry_loop;
@@ -364,11 +363,10 @@ retry_loop:
* but it doesn't change frequency. I tried poking various
* bits in northbridge registers, but without success. */
if (longhaul_flags & USE_ACPI_C3) {
- printk(KERN_INFO PFX "Disabling ACPI C3 support.\n");
+ pr_info("Disabling ACPI C3 support\n");
longhaul_flags &= ~USE_ACPI_C3;
if (revid_errata) {
- printk(KERN_INFO PFX "Disabling \"Ignore "
- "Revision ID\" option.\n");
+ pr_info("Disabling \"Ignore Revision ID\" option\n");
revid_errata = 0;
}
msleep(200);
@@ -379,7 +377,7 @@ retry_loop:
* RevID = 1. RevID errata will make things right. Just
* to be 100% sure. */
if (longhaul_version == TYPE_LONGHAUL_V2) {
- printk(KERN_INFO PFX "Switching to Longhaul ver. 1\n");
+ pr_info("Switching to Longhaul ver. 1\n");
longhaul_version = TYPE_LONGHAUL_V1;
msleep(200);
goto retry_loop;
@@ -387,8 +385,7 @@ retry_loop:
}
if (!bm_timeout) {
- printk(KERN_INFO PFX "Warning: Timeout while waiting for "
- "idle PCI bus.\n");
+ pr_info("Warning: Timeout while waiting for idle PCI bus\n");
return -EBUSY;
}
@@ -433,12 +430,12 @@ static int longhaul_get_ranges(void)
/* Get current frequency */
mult = longhaul_get_cpu_mult();
if (mult == -1) {
- printk(KERN_INFO PFX "Invalid (reserved) multiplier!\n");
+ pr_info("Invalid (reserved) multiplier!\n");
return -EINVAL;
}
fsb = guess_fsb(mult);
if (fsb == 0) {
- printk(KERN_INFO PFX "Invalid (reserved) FSB!\n");
+ pr_info("Invalid (reserved) FSB!\n");
return -EINVAL;
}
/* Get max multiplier - as we always did.
@@ -468,11 +465,11 @@ static int longhaul_get_ranges(void)
print_speed(highest_speed/1000));
if (lowest_speed == highest_speed) {
- printk(KERN_INFO PFX "highestspeed == lowest, aborting.\n");
+ pr_info("highestspeed == lowest, aborting\n");
return -EINVAL;
}
if (lowest_speed > highest_speed) {
- printk(KERN_INFO PFX "nonsense! lowest (%d > %d) !\n",
+ pr_info("nonsense! lowest (%d > %d) !\n",
lowest_speed, highest_speed);
return -EINVAL;
}
@@ -538,16 +535,16 @@ static void longhaul_setup_voltagescaling(void)
rdmsrl(MSR_VIA_LONGHAUL, longhaul.val);
if (!(longhaul.bits.RevisionID & 1)) {
- printk(KERN_INFO PFX "Voltage scaling not supported by CPU.\n");
+ pr_info("Voltage scaling not supported by CPU\n");
return;
}
if (!longhaul.bits.VRMRev) {
- printk(KERN_INFO PFX "VRM 8.5\n");
+ pr_info("VRM 8.5\n");
vrm_mV_table = &vrm85_mV[0];
mV_vrm_table = &mV_vrm85[0];
} else {
- printk(KERN_INFO PFX "Mobile VRM\n");
+ pr_info("Mobile VRM\n");
if (cpu_model < CPU_NEHEMIAH)
return;
vrm_mV_table = &mobilevrm_mV[0];
@@ -558,27 +555,21 @@ static void longhaul_setup_voltagescaling(void)
maxvid = vrm_mV_table[longhaul.bits.MaximumVID];
if (minvid.mV == 0 || maxvid.mV == 0 || minvid.mV > maxvid.mV) {
- printk(KERN_INFO PFX "Bogus values Min:%d.%03d Max:%d.%03d. "
- "Voltage scaling disabled.\n",
- minvid.mV/1000, minvid.mV%1000,
- maxvid.mV/1000, maxvid.mV%1000);
+ pr_info("Bogus values Min:%d.%03d Max:%d.%03d - Voltage scaling disabled\n",
+ minvid.mV/1000, minvid.mV%1000,
+ maxvid.mV/1000, maxvid.mV%1000);
return;
}
if (minvid.mV == maxvid.mV) {
- printk(KERN_INFO PFX "Claims to support voltage scaling but "
- "min & max are both %d.%03d. "
- "Voltage scaling disabled\n",
- maxvid.mV/1000, maxvid.mV%1000);
+ pr_info("Claims to support voltage scaling but min & max are both %d.%03d - Voltage scaling disabled\n",
+ maxvid.mV/1000, maxvid.mV%1000);
return;
}
/* How many voltage steps*/
numvscales = maxvid.pos - minvid.pos + 1;
- printk(KERN_INFO PFX
- "Max VID=%d.%03d "
- "Min VID=%d.%03d, "
- "%d possible voltage scales\n",
+ pr_info("Max VID=%d.%03d Min VID=%d.%03d, %d possible voltage scales\n",
maxvid.mV/1000, maxvid.mV%1000,
minvid.mV/1000, minvid.mV%1000,
numvscales);
@@ -617,12 +608,12 @@ static void longhaul_setup_voltagescaling(void)
pos = minvid.pos;
freq_pos->driver_data |= mV_vrm_table[pos] << 8;
vid = vrm_mV_table[mV_vrm_table[pos]];
- printk(KERN_INFO PFX "f: %d kHz, index: %d, vid: %d mV\n",
+ pr_info("f: %d kHz, index: %d, vid: %d mV\n",
speed, (int)(freq_pos - longhaul_table), vid.mV);
}
can_scale_voltage = 1;
- printk(KERN_INFO PFX "Voltage scaling enabled.\n");
+ pr_info("Voltage scaling enabled\n");
}
@@ -720,8 +711,7 @@ static int enable_arbiter_disable(void)
pci_write_config_byte(dev, reg, pci_cmd);
pci_read_config_byte(dev, reg, &pci_cmd);
if (!(pci_cmd & 1<<7)) {
- printk(KERN_ERR PFX
- "Can't enable access to port 0x22.\n");
+ pr_err("Can't enable access to port 0x22\n");
status = 0;
}
}
@@ -758,8 +748,7 @@ static int longhaul_setup_southbridge(void)
if (pci_cmd & 1 << 7) {
pci_read_config_dword(dev, 0x88, &acpi_regs_addr);
acpi_regs_addr &= 0xff00;
- printk(KERN_INFO PFX "ACPI I/O at 0x%x\n",
- acpi_regs_addr);
+ pr_info("ACPI I/O at 0x%x\n", acpi_regs_addr);
}
pci_dev_put(dev);
@@ -853,14 +842,14 @@ static int longhaul_cpu_init(struct cpufreq_policy *policy)
longhaul_version = TYPE_LONGHAUL_V1;
}
- printk(KERN_INFO PFX "VIA %s CPU detected. ", cpuname);
+ pr_info("VIA %s CPU detected. ", cpuname);
switch (longhaul_version) {
case TYPE_LONGHAUL_V1:
case TYPE_LONGHAUL_V2:
- printk(KERN_CONT "Longhaul v%d supported.\n", longhaul_version);
+ pr_cont("Longhaul v%d supported\n", longhaul_version);
break;
case TYPE_POWERSAVER:
- printk(KERN_CONT "Powersaver supported.\n");
+ pr_cont("Powersaver supported\n");
break;
};
@@ -889,15 +878,14 @@ static int longhaul_cpu_init(struct cpufreq_policy *policy)
if (!(longhaul_flags & USE_ACPI_C3
|| longhaul_flags & USE_NORTHBRIDGE)
&& ((pr == NULL) || !(pr->flags.bm_control))) {
- printk(KERN_ERR PFX
- "No ACPI support. Unsupported northbridge.\n");
+ pr_err("No ACPI support: Unsupported northbridge\n");
return -ENODEV;
}
if (longhaul_flags & USE_NORTHBRIDGE)
- printk(KERN_INFO PFX "Using northbridge support.\n");
+ pr_info("Using northbridge support\n");
if (longhaul_flags & USE_ACPI_C3)
- printk(KERN_INFO PFX "Using ACPI support.\n");
+ pr_info("Using ACPI support\n");
ret = longhaul_get_ranges();
if (ret != 0)
@@ -934,20 +922,18 @@ static int __init longhaul_init(void)
return -ENODEV;
if (!enable) {
- printk(KERN_ERR PFX "Option \"enable\" not set. Aborting.\n");
+ pr_err("Option \"enable\" not set - Aborting\n");
return -ENODEV;
}
#ifdef CONFIG_SMP
if (num_online_cpus() > 1) {
- printk(KERN_ERR PFX "More than 1 CPU detected, "
- "longhaul disabled.\n");
+ pr_err("More than 1 CPU detected, longhaul disabled\n");
return -ENODEV;
}
#endif
#ifdef CONFIG_X86_IO_APIC
- if (cpu_has_apic) {
- printk(KERN_ERR PFX "APIC detected. Longhaul is currently "
- "broken in this configuration.\n");
+ if (boot_cpu_has(X86_FEATURE_APIC)) {
+ pr_err("APIC detected. Longhaul is currently broken in this configuration.\n");
return -ENODEV;
}
#endif
@@ -955,7 +941,7 @@ static int __init longhaul_init(void)
case 6 ... 9:
return cpufreq_register_driver(&longhaul_driver);
case 10:
- printk(KERN_ERR PFX "Use acpi-cpufreq driver for VIA C7\n");
+ pr_err("Use acpi-cpufreq driver for VIA C7\n");
default:
;
}
diff --git a/drivers/cpufreq/ls1x-cpufreq.c b/drivers/cpufreq/loongson1-cpufreq.c
index 262581b3318d..be89416e2358 100644
--- a/drivers/cpufreq/ls1x-cpufreq.c
+++ b/drivers/cpufreq/loongson1-cpufreq.c
@@ -1,7 +1,7 @@
/*
* CPU Frequency Scaling for Loongson 1 SoC
*
- * Copyright (C) 2014 Zhang, Keguang <keguang.zhang@gmail.com>
+ * Copyright (C) 2014-2016 Zhang, Keguang <keguang.zhang@gmail.com>
*
* This file is licensed under the terms of the GNU General Public
* License version 2. This program is licensed "as is" without any
@@ -20,7 +20,7 @@
#include <cpufreq.h>
#include <loongson1.h>
-static struct {
+struct ls1x_cpufreq {
struct device *dev;
struct clk *clk; /* CPU clk */
struct clk *mux_clk; /* MUX of CPU clk */
@@ -28,7 +28,9 @@ static struct {
struct clk *osc_clk; /* OSC clk */
unsigned int max_freq;
unsigned int min_freq;
-} ls1x_cpufreq;
+};
+
+static struct ls1x_cpufreq *cpufreq;
static int ls1x_cpufreq_notifier(struct notifier_block *nb,
unsigned long val, void *data)
@@ -46,6 +48,7 @@ static struct notifier_block ls1x_cpufreq_notifier_block = {
static int ls1x_cpufreq_target(struct cpufreq_policy *policy,
unsigned int index)
{
+ struct device *cpu_dev = get_cpu_device(policy->cpu);
unsigned int old_freq, new_freq;
old_freq = policy->cur;
@@ -60,53 +63,49 @@ static int ls1x_cpufreq_target(struct cpufreq_policy *policy,
* - Reparent CPU clk back to CPU DIV clk
*/
- dev_dbg(ls1x_cpufreq.dev, "%u KHz --> %u KHz\n", old_freq, new_freq);
- clk_set_parent(policy->clk, ls1x_cpufreq.osc_clk);
+ clk_set_parent(policy->clk, cpufreq->osc_clk);
__raw_writel(__raw_readl(LS1X_CLK_PLL_DIV) | RST_CPU_EN | RST_CPU,
LS1X_CLK_PLL_DIV);
__raw_writel(__raw_readl(LS1X_CLK_PLL_DIV) & ~(RST_CPU_EN | RST_CPU),
LS1X_CLK_PLL_DIV);
- clk_set_rate(ls1x_cpufreq.mux_clk, new_freq * 1000);
- clk_set_parent(policy->clk, ls1x_cpufreq.mux_clk);
+ clk_set_rate(cpufreq->mux_clk, new_freq * 1000);
+ clk_set_parent(policy->clk, cpufreq->mux_clk);
+ dev_dbg(cpu_dev, "%u KHz --> %u KHz\n", old_freq, new_freq);
return 0;
}
static int ls1x_cpufreq_init(struct cpufreq_policy *policy)
{
+ struct device *cpu_dev = get_cpu_device(policy->cpu);
struct cpufreq_frequency_table *freq_tbl;
unsigned int pll_freq, freq;
int steps, i, ret;
- pll_freq = clk_get_rate(ls1x_cpufreq.pll_clk) / 1000;
+ pll_freq = clk_get_rate(cpufreq->pll_clk) / 1000;
steps = 1 << DIV_CPU_WIDTH;
- freq_tbl = kzalloc(sizeof(*freq_tbl) * steps, GFP_KERNEL);
- if (!freq_tbl) {
- dev_err(ls1x_cpufreq.dev,
- "failed to alloc cpufreq_frequency_table\n");
- ret = -ENOMEM;
- goto out;
- }
+ freq_tbl = kcalloc(steps, sizeof(*freq_tbl), GFP_KERNEL);
+ if (!freq_tbl)
+ return -ENOMEM;
for (i = 0; i < (steps - 1); i++) {
freq = pll_freq / (i + 1);
- if ((freq < ls1x_cpufreq.min_freq) ||
- (freq > ls1x_cpufreq.max_freq))
+ if ((freq < cpufreq->min_freq) || (freq > cpufreq->max_freq))
freq_tbl[i].frequency = CPUFREQ_ENTRY_INVALID;
else
freq_tbl[i].frequency = freq;
- dev_dbg(ls1x_cpufreq.dev,
+ dev_dbg(cpu_dev,
"cpufreq table: index %d: frequency %d\n", i,
freq_tbl[i].frequency);
}
freq_tbl[i].frequency = CPUFREQ_TABLE_END;
- policy->clk = ls1x_cpufreq.clk;
+ policy->clk = cpufreq->clk;
ret = cpufreq_generic_init(policy, freq_tbl, 0);
if (ret)
kfree(freq_tbl);
-out:
+
return ret;
}
@@ -138,85 +137,86 @@ static int ls1x_cpufreq_remove(struct platform_device *pdev)
static int ls1x_cpufreq_probe(struct platform_device *pdev)
{
- struct plat_ls1x_cpufreq *pdata = pdev->dev.platform_data;
+ struct plat_ls1x_cpufreq *pdata = dev_get_platdata(&pdev->dev);
struct clk *clk;
int ret;
- if (!pdata || !pdata->clk_name || !pdata->osc_clk_name)
+ if (!pdata || !pdata->clk_name || !pdata->osc_clk_name) {
+ dev_err(&pdev->dev, "platform data missing\n");
return -EINVAL;
+ }
- ls1x_cpufreq.dev = &pdev->dev;
+ cpufreq =
+ devm_kzalloc(&pdev->dev, sizeof(struct ls1x_cpufreq), GFP_KERNEL);
+ if (!cpufreq)
+ return -ENOMEM;
+
+ cpufreq->dev = &pdev->dev;
clk = devm_clk_get(&pdev->dev, pdata->clk_name);
if (IS_ERR(clk)) {
- dev_err(ls1x_cpufreq.dev, "unable to get %s clock\n",
+ dev_err(&pdev->dev, "unable to get %s clock\n",
pdata->clk_name);
- ret = PTR_ERR(clk);
- goto out;
+ return PTR_ERR(clk);
}
- ls1x_cpufreq.clk = clk;
+ cpufreq->clk = clk;
clk = clk_get_parent(clk);
if (IS_ERR(clk)) {
- dev_err(ls1x_cpufreq.dev, "unable to get parent of %s clock\n",
- __clk_get_name(ls1x_cpufreq.clk));
- ret = PTR_ERR(clk);
- goto out;
+ dev_err(&pdev->dev, "unable to get parent of %s clock\n",
+ __clk_get_name(cpufreq->clk));
+ return PTR_ERR(clk);
}
- ls1x_cpufreq.mux_clk = clk;
+ cpufreq->mux_clk = clk;
clk = clk_get_parent(clk);
if (IS_ERR(clk)) {
- dev_err(ls1x_cpufreq.dev, "unable to get parent of %s clock\n",
- __clk_get_name(ls1x_cpufreq.mux_clk));
- ret = PTR_ERR(clk);
- goto out;
+ dev_err(&pdev->dev, "unable to get parent of %s clock\n",
+ __clk_get_name(cpufreq->mux_clk));
+ return PTR_ERR(clk);
}
- ls1x_cpufreq.pll_clk = clk;
+ cpufreq->pll_clk = clk;
clk = devm_clk_get(&pdev->dev, pdata->osc_clk_name);
if (IS_ERR(clk)) {
- dev_err(ls1x_cpufreq.dev, "unable to get %s clock\n",
+ dev_err(&pdev->dev, "unable to get %s clock\n",
pdata->osc_clk_name);
- ret = PTR_ERR(clk);
- goto out;
+ return PTR_ERR(clk);
}
- ls1x_cpufreq.osc_clk = clk;
+ cpufreq->osc_clk = clk;
- ls1x_cpufreq.max_freq = pdata->max_freq;
- ls1x_cpufreq.min_freq = pdata->min_freq;
+ cpufreq->max_freq = pdata->max_freq;
+ cpufreq->min_freq = pdata->min_freq;
ret = cpufreq_register_driver(&ls1x_cpufreq_driver);
if (ret) {
- dev_err(ls1x_cpufreq.dev,
- "failed to register cpufreq driver: %d\n", ret);
- goto out;
+ dev_err(&pdev->dev,
+ "failed to register CPUFreq driver: %d\n", ret);
+ return ret;
}
ret = cpufreq_register_notifier(&ls1x_cpufreq_notifier_block,
CPUFREQ_TRANSITION_NOTIFIER);
- if (!ret)
- goto out;
-
- dev_err(ls1x_cpufreq.dev, "failed to register cpufreq notifier: %d\n",
- ret);
+ if (ret) {
+ dev_err(&pdev->dev,
+ "failed to register CPUFreq notifier: %d\n",ret);
+ cpufreq_unregister_driver(&ls1x_cpufreq_driver);
+ }
- cpufreq_unregister_driver(&ls1x_cpufreq_driver);
-out:
return ret;
}
static struct platform_driver ls1x_cpufreq_platdrv = {
- .driver = {
+ .probe = ls1x_cpufreq_probe,
+ .remove = ls1x_cpufreq_remove,
+ .driver = {
.name = "ls1x-cpufreq",
},
- .probe = ls1x_cpufreq_probe,
- .remove = ls1x_cpufreq_remove,
};
module_platform_driver(ls1x_cpufreq_platdrv);
MODULE_AUTHOR("Kelvin Cheung <keguang.zhang@gmail.com>");
-MODULE_DESCRIPTION("Loongson 1 CPUFreq driver");
+MODULE_DESCRIPTION("Loongson1 CPUFreq driver");
MODULE_LICENSE("GPL");
diff --git a/drivers/cpufreq/loongson2_cpufreq.c b/drivers/cpufreq/loongson2_cpufreq.c
index cd593c1f66dc..6bbdac1065ff 100644
--- a/drivers/cpufreq/loongson2_cpufreq.c
+++ b/drivers/cpufreq/loongson2_cpufreq.c
@@ -10,6 +10,9 @@
* License. See the file "COPYING" in the main directory of this archive
* for more details.
*/
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/cpufreq.h>
#include <linux/module.h>
#include <linux/err.h>
@@ -76,7 +79,7 @@ static int loongson2_cpufreq_cpu_init(struct cpufreq_policy *policy)
cpuclk = clk_get(NULL, "cpu_clk");
if (IS_ERR(cpuclk)) {
- printk(KERN_ERR "cpufreq: couldn't get CPU clk\n");
+ pr_err("couldn't get CPU clk\n");
return PTR_ERR(cpuclk);
}
@@ -163,7 +166,7 @@ static int __init cpufreq_init(void)
if (ret)
return ret;
- pr_info("cpufreq: Loongson-2F CPU frequency driver.\n");
+ pr_info("Loongson-2F CPU frequency driver\n");
cpufreq_register_notifier(&loongson2_cpufreq_notifier_block,
CPUFREQ_TRANSITION_NOTIFIER);
diff --git a/drivers/cpufreq/maple-cpufreq.c b/drivers/cpufreq/maple-cpufreq.c
index cc3408fc073f..d9df89392b84 100644
--- a/drivers/cpufreq/maple-cpufreq.c
+++ b/drivers/cpufreq/maple-cpufreq.c
@@ -13,6 +13,8 @@
#undef DEBUG
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/module.h>
#include <linux/types.h>
#include <linux/errno.h>
@@ -174,7 +176,7 @@ static int __init maple_cpufreq_init(void)
/* Get first CPU node */
cpunode = of_cpu_device_node_get(0);
if (cpunode == NULL) {
- printk(KERN_ERR "cpufreq: Can't find any CPU 0 node\n");
+ pr_err("Can't find any CPU 0 node\n");
goto bail_noprops;
}
@@ -182,8 +184,7 @@ static int __init maple_cpufreq_init(void)
/* we actually don't care on which CPU to access PVR */
pvr_hi = PVR_VER(mfspr(SPRN_PVR));
if (pvr_hi != 0x3c && pvr_hi != 0x44) {
- printk(KERN_ERR "cpufreq: Unsupported CPU version (%x)\n",
- pvr_hi);
+ pr_err("Unsupported CPU version (%x)\n", pvr_hi);
goto bail_noprops;
}
@@ -222,8 +223,8 @@ static int __init maple_cpufreq_init(void)
maple_pmode_cur = -1;
maple_scom_switch_freq(maple_scom_query_freq());
- printk(KERN_INFO "Registering Maple CPU frequency driver\n");
- printk(KERN_INFO "Low: %d Mhz, High: %d Mhz, Cur: %d MHz\n",
+ pr_info("Registering Maple CPU frequency driver\n");
+ pr_info("Low: %d Mhz, High: %d Mhz, Cur: %d MHz\n",
maple_cpu_freqs[1].frequency/1000,
maple_cpu_freqs[0].frequency/1000,
maple_cpu_freqs[maple_pmode_cur].frequency/1000);
diff --git a/drivers/cpufreq/mt8173-cpufreq.c b/drivers/cpufreq/mt8173-cpufreq.c
index 2058e6d292ce..643f43179df1 100644
--- a/drivers/cpufreq/mt8173-cpufreq.c
+++ b/drivers/cpufreq/mt8173-cpufreq.c
@@ -59,11 +59,8 @@ static LIST_HEAD(dvfs_info_list);
static struct mtk_cpu_dvfs_info *mtk_cpu_dvfs_info_lookup(int cpu)
{
struct mtk_cpu_dvfs_info *info;
- struct list_head *list;
-
- list_for_each(list, &dvfs_info_list) {
- info = list_entry(list, struct mtk_cpu_dvfs_info, list_head);
+ list_for_each_entry(info, &dvfs_info_list, list_head) {
if (cpumask_test_cpu(cpu, &info->cpus))
return info;
}
@@ -310,17 +307,24 @@ static int mtk_cpufreq_set_target(struct cpufreq_policy *policy,
return 0;
}
+#define DYNAMIC_POWER "dynamic-power-coefficient"
+
static void mtk_cpufreq_ready(struct cpufreq_policy *policy)
{
struct mtk_cpu_dvfs_info *info = policy->driver_data;
struct device_node *np = of_node_get(info->cpu_dev->of_node);
+ u32 capacitance = 0;
if (WARN_ON(!np))
return;
if (of_find_property(np, "#cooling-cells", NULL)) {
- info->cdev = of_cpufreq_cooling_register(np,
- policy->related_cpus);
+ of_property_read_u32(np, DYNAMIC_POWER, &capacitance);
+
+ info->cdev = of_cpufreq_power_cooling_register(np,
+ policy->related_cpus,
+ capacitance,
+ NULL);
if (IS_ERR(info->cdev)) {
dev_err(info->cpu_dev,
@@ -524,8 +528,7 @@ static struct cpufreq_driver mt8173_cpufreq_driver = {
static int mt8173_cpufreq_probe(struct platform_device *pdev)
{
- struct mtk_cpu_dvfs_info *info;
- struct list_head *list, *tmp;
+ struct mtk_cpu_dvfs_info *info, *tmp;
int cpu, ret;
for_each_possible_cpu(cpu) {
@@ -559,11 +562,9 @@ static int mt8173_cpufreq_probe(struct platform_device *pdev)
return 0;
release_dvfs_info_list:
- list_for_each_safe(list, tmp, &dvfs_info_list) {
- info = list_entry(list, struct mtk_cpu_dvfs_info, list_head);
-
+ list_for_each_entry_safe(info, tmp, &dvfs_info_list, list_head) {
mtk_cpu_dvfs_info_release(info);
- list_del(list);
+ list_del(&info->list_head);
}
return ret;
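
Aside: the list_for_each() + list_entry() to list_for_each_entry()/list_for_each_entry_safe() conversions above iterate directly over the containing structures, with the _safe variant caching the next node so the current entry can be unlinked inside the loop, which is exactly what the error path needs. A reduced userspace sketch of the pattern (minimal reimplementations of the list macros and a hypothetical node type, not the kernel's full list.h):

#include <stdio.h>
#include <stdlib.h>
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD_INIT(name) { &(name), &(name) }
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))
#define list_entry(ptr, type, member) container_of(ptr, type, member)
#define list_for_each_entry(pos, head, member)				\
	for (pos = list_entry((head)->next, typeof(*pos), member);	\
	     &pos->member != (head);					\
	     pos = list_entry(pos->member.next, typeof(*pos), member))
#define list_for_each_entry_safe(pos, n, head, member)			\
	for (pos = list_entry((head)->next, typeof(*pos), member),	\
	     n = list_entry(pos->member.next, typeof(*pos), member);	\
	     &pos->member != (head);					\
	     pos = n, n = list_entry(n->member.next, typeof(*n), member))

static void list_add_tail(struct list_head *new, struct list_head *head)
{
	new->prev = head->prev;
	new->next = head;
	head->prev->next = new;
	head->prev = new;
}

static void list_del(struct list_head *entry)
{
	entry->prev->next = entry->next;
	entry->next->prev = entry->prev;
}

struct dvfs_info {			/* stand-in for mtk_cpu_dvfs_info */
	int id;
	struct list_head list_head;
};

static struct list_head dvfs_info_list = LIST_HEAD_INIT(dvfs_info_list);

int main(void)
{
	struct dvfs_info *info, *tmp;
	int i;

	for (i = 0; i < 3; i++) {
		info = malloc(sizeof(*info));
		info->id = i;
		list_add_tail(&info->list_head, &dvfs_info_list);
	}

	/* as in mtk_cpu_dvfs_info_lookup(): walk the entries directly */
	list_for_each_entry(info, &dvfs_info_list, list_head)
		printf("lookup sees info %d\n", info->id);

	/* as in the release_dvfs_info_list: error path, deletion is safe */
	list_for_each_entry_safe(info, tmp, &dvfs_info_list, list_head) {
		list_del(&info->list_head);
		free(info);
	}
	return 0;
}
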
diff --git a/drivers/cpufreq/mvebu-cpufreq.c b/drivers/cpufreq/mvebu-cpufreq.c
new file mode 100644
index 000000000000..e920889b9ac2
--- /dev/null
+++ b/drivers/cpufreq/mvebu-cpufreq.c
@@ -0,0 +1,107 @@
+/*
+ * CPUFreq support for Armada 370/XP platforms.
+ *
+ * Copyright (C) 2012-2016 Marvell
+ *
+ * Yehuda Yitschak <yehuday@marvell.com>
+ * Gregory Clement <gregory.clement@free-electrons.com>
+ * Thomas Petazzoni <thomas.petazzoni@free-electrons.com>
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#define pr_fmt(fmt) "mvebu-pmsu: " fmt
+
+#include <linux/clk.h>
+#include <linux/cpu.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/pm_opp.h>
+#include <linux/resource.h>
+
+static int __init armada_xp_pmsu_cpufreq_init(void)
+{
+ struct device_node *np;
+ struct resource res;
+ int ret, cpu;
+
+ if (!of_machine_is_compatible("marvell,armadaxp"))
+ return 0;
+
+ /*
+ * In order to have proper cpufreq handling, we need to ensure
+ * that the Device Tree description of the CPU clock includes
+ * the definition of the PMU DFS registers. If not, we do not
+ * register the clock notifier and the cpufreq driver. This
+ * piece of code is only for compatibility with old Device
+ * Trees.
+ */
+ np = of_find_compatible_node(NULL, NULL, "marvell,armada-xp-cpu-clock");
+ if (!np)
+ return 0;
+
+ ret = of_address_to_resource(np, 1, &res);
+ if (ret) {
+ pr_warn(FW_WARN "not enabling cpufreq, deprecated armada-xp-cpu-clock binding\n");
+ of_node_put(np);
+ return 0;
+ }
+
+ of_node_put(np);
+
+ /*
+ * For each CPU, this loop registers the operating points
+ * supported (which are the nominal CPU frequency and half of
+ * it), and registers the clock notifier that will take care
+ * of doing the PMSU part of a frequency transition.
+ */
+ for_each_possible_cpu(cpu) {
+ struct device *cpu_dev;
+ struct clk *clk;
+ int ret;
+
+ cpu_dev = get_cpu_device(cpu);
+ if (!cpu_dev) {
+ pr_err("Cannot get CPU %d\n", cpu);
+ continue;
+ }
+
+ clk = clk_get(cpu_dev, 0);
+ if (IS_ERR(clk)) {
+ pr_err("Cannot get clock for CPU %d\n", cpu);
+ return PTR_ERR(clk);
+ }
+
+ /*
+ * In case of a failure of dev_pm_opp_add(), we don't
+ * bother with cleaning up the registered OPP (there's
+ * no function to do so), and simply cancel the
+ * registration of the cpufreq device.
+ */
+ ret = dev_pm_opp_add(cpu_dev, clk_get_rate(clk), 0);
+ if (ret) {
+ clk_put(clk);
+ return ret;
+ }
+
+ ret = dev_pm_opp_add(cpu_dev, clk_get_rate(clk) / 2, 0);
+ if (ret) {
+ clk_put(clk);
+ return ret;
+ }
+
+ ret = dev_pm_opp_set_sharing_cpus(cpu_dev,
+ cpumask_of(cpu_dev->id));
+ if (ret)
+ dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n",
+ __func__, ret);
+ }
+
+ platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
+ return 0;
+}
+device_initcall(armada_xp_pmsu_cpufreq_init);
diff --git a/drivers/cpufreq/omap-cpufreq.c b/drivers/cpufreq/omap-cpufreq.c
index e3866e0d5bf8..376e63ca94e8 100644
--- a/drivers/cpufreq/omap-cpufreq.c
+++ b/drivers/cpufreq/omap-cpufreq.c
@@ -13,6 +13,9 @@
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*/
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/sched.h>
@@ -51,7 +54,7 @@ static int omap_target(struct cpufreq_policy *policy, unsigned int index)
freq = new_freq * 1000;
ret = clk_round_rate(policy->clk, freq);
- if (IS_ERR_VALUE(ret)) {
+ if (ret < 0) {
dev_warn(mpu_dev,
"CPUfreq: Cannot find matching frequency for %lu\n",
freq);
@@ -163,13 +166,13 @@ static int omap_cpufreq_probe(struct platform_device *pdev)
{
mpu_dev = get_cpu_device(0);
if (!mpu_dev) {
- pr_warning("%s: unable to get the mpu device\n", __func__);
+ pr_warn("%s: unable to get the MPU device\n", __func__);
return -EINVAL;
}
mpu_reg = regulator_get(mpu_dev, "vcc");
if (IS_ERR(mpu_reg)) {
- pr_warning("%s: unable to get MPU regulator\n", __func__);
+ pr_warn("%s: unable to get MPU regulator\n", __func__);
mpu_reg = NULL;
} else {
/*
diff --git a/drivers/cpufreq/p4-clockmod.c b/drivers/cpufreq/p4-clockmod.c
index 5dd95dab580d..fd77812313f3 100644
--- a/drivers/cpufreq/p4-clockmod.c
+++ b/drivers/cpufreq/p4-clockmod.c
@@ -20,6 +20,8 @@
*
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
@@ -35,8 +37,6 @@
#include "speedstep-lib.h"
-#define PFX "p4-clockmod: "
-
/*
* Duty Cycle (3bits), note DC_DISABLE is not specified in
* intel docs i just use it to mean disable
@@ -124,11 +124,7 @@ static unsigned int cpufreq_p4_get_frequency(struct cpuinfo_x86 *c)
{
if (c->x86 == 0x06) {
if (cpu_has(c, X86_FEATURE_EST))
- printk_once(KERN_WARNING PFX "Warning: EST-capable "
- "CPU detected. The acpi-cpufreq module offers "
- "voltage scaling in addition to frequency "
- "scaling. You should use that instead of "
- "p4-clockmod, if possible.\n");
+ pr_warn_once("Warning: EST-capable CPU detected. The acpi-cpufreq module offers voltage scaling in addition to frequency scaling. You should use that instead of p4-clockmod, if possible.\n");
switch (c->x86_model) {
case 0x0E: /* Core */
case 0x0F: /* Core Duo */
@@ -152,11 +148,7 @@ static unsigned int cpufreq_p4_get_frequency(struct cpuinfo_x86 *c)
p4clockmod_driver.flags |= CPUFREQ_CONST_LOOPS;
if (speedstep_detect_processor() == SPEEDSTEP_CPU_P4M) {
- printk(KERN_WARNING PFX "Warning: Pentium 4-M detected. "
- "The speedstep-ich or acpi cpufreq modules offer "
- "voltage scaling in addition of frequency scaling. "
- "You should use either one instead of p4-clockmod, "
- "if possible.\n");
+ pr_warn("Warning: Pentium 4-M detected. The speedstep-ich or acpi cpufreq modules offer voltage scaling in addition of frequency scaling. You should use either one instead of p4-clockmod, if possible.\n");
return speedstep_get_frequency(SPEEDSTEP_CPU_P4M);
}
@@ -265,8 +257,7 @@ static int __init cpufreq_p4_init(void)
ret = cpufreq_register_driver(&p4clockmod_driver);
if (!ret)
- printk(KERN_INFO PFX "P4/Xeon(TM) CPU On-Demand Clock "
- "Modulation available\n");
+ pr_info("P4/Xeon(TM) CPU On-Demand Clock Modulation available\n");
return ret;
}
diff --git a/drivers/cpufreq/pmac32-cpufreq.c b/drivers/cpufreq/pmac32-cpufreq.c
index 1f49d97a70ea..ff44016ea031 100644
--- a/drivers/cpufreq/pmac32-cpufreq.c
+++ b/drivers/cpufreq/pmac32-cpufreq.c
@@ -13,6 +13,8 @@
*
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/module.h>
#include <linux/types.h>
#include <linux/errno.h>
@@ -298,7 +300,7 @@ static int pmu_set_cpu_speed(int low_speed)
_set_L3CR(save_l3cr);
/* Restore userland MMU context */
- switch_mmu_context(NULL, current->active_mm);
+ switch_mmu_context(NULL, current->active_mm, NULL);
#ifdef DEBUG_FREQ
printk(KERN_DEBUG "HID1, after: %x\n", mfspr(SPRN_HID1));
@@ -481,13 +483,13 @@ static int pmac_cpufreq_init_MacRISC3(struct device_node *cpunode)
freqs = of_get_property(cpunode, "bus-frequencies", &lenp);
lenp /= sizeof(u32);
if (freqs == NULL || lenp != 2) {
- printk(KERN_ERR "cpufreq: bus-frequencies incorrect or missing\n");
+ pr_err("bus-frequencies incorrect or missing\n");
return 1;
}
ratio = of_get_property(cpunode, "processor-to-bus-ratio*2",
NULL);
if (ratio == NULL) {
- printk(KERN_ERR "cpufreq: processor-to-bus-ratio*2 missing\n");
+ pr_err("processor-to-bus-ratio*2 missing\n");
return 1;
}
@@ -550,7 +552,7 @@ static int pmac_cpufreq_init_7447A(struct device_node *cpunode)
if (volt_gpio_np)
voltage_gpio = read_gpio(volt_gpio_np);
if (!voltage_gpio){
- printk(KERN_ERR "cpufreq: missing cpu-vcore-select gpio\n");
+ pr_err("missing cpu-vcore-select gpio\n");
return 1;
}
@@ -675,9 +677,9 @@ out:
pmac_cpu_freqs[CPUFREQ_HIGH].frequency = hi_freq;
ppc_proc_freq = cur_freq * 1000ul;
- printk(KERN_INFO "Registering PowerMac CPU frequency driver\n");
- printk(KERN_INFO "Low: %d Mhz, High: %d Mhz, Boot: %d Mhz\n",
- low_freq/1000, hi_freq/1000, cur_freq/1000);
+ pr_info("Registering PowerMac CPU frequency driver\n");
+ pr_info("Low: %d Mhz, High: %d Mhz, Boot: %d Mhz\n",
+ low_freq/1000, hi_freq/1000, cur_freq/1000);
return cpufreq_register_driver(&pmac_cpufreq_driver);
}
diff --git a/drivers/cpufreq/pmac64-cpufreq.c b/drivers/cpufreq/pmac64-cpufreq.c
index 4ff86878727f..267e0894c62d 100644
--- a/drivers/cpufreq/pmac64-cpufreq.c
+++ b/drivers/cpufreq/pmac64-cpufreq.c
@@ -12,6 +12,8 @@
#undef DEBUG
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/module.h>
#include <linux/types.h>
#include <linux/errno.h>
@@ -138,7 +140,7 @@ static void g5_vdnap_switch_volt(int speed_mode)
usleep_range(1000, 1000);
}
if (done == 0)
- printk(KERN_WARNING "cpufreq: Timeout in clock slewing !\n");
+ pr_warn("Timeout in clock slewing !\n");
}
@@ -266,7 +268,7 @@ static int g5_pfunc_switch_freq(int speed_mode)
rc = pmf_call_one(pfunc_cpu_setfreq_low, NULL);
if (rc)
- printk(KERN_WARNING "cpufreq: pfunc switch error %d\n", rc);
+ pr_warn("pfunc switch error %d\n", rc);
/* It's an irq GPIO so we should be able to just block here,
* I'll do that later after I've properly tested the IRQ code for
@@ -282,7 +284,7 @@ static int g5_pfunc_switch_freq(int speed_mode)
usleep_range(500, 500);
}
if (done == 0)
- printk(KERN_WARNING "cpufreq: Timeout in clock slewing !\n");
+ pr_warn("Timeout in clock slewing !\n");
/* If frequency is going down, last ramp the voltage */
if (speed_mode > g5_pmode_cur)
@@ -368,7 +370,7 @@ static int __init g5_neo2_cpufreq_init(struct device_node *cpunode)
}
pvr_hi = (*valp) >> 16;
if (pvr_hi != 0x3c && pvr_hi != 0x44) {
- printk(KERN_ERR "cpufreq: Unsupported CPU version\n");
+ pr_err("Unsupported CPU version\n");
goto bail_noprops;
}
@@ -403,8 +405,7 @@ static int __init g5_neo2_cpufreq_init(struct device_node *cpunode)
root = of_find_node_by_path("/");
if (root == NULL) {
- printk(KERN_ERR "cpufreq: Can't find root of "
- "device tree\n");
+ pr_err("Can't find root of device tree\n");
goto bail_noprops;
}
pfunc_set_vdnap0 = pmf_find_function(root, "set-vdnap0");
@@ -412,8 +413,7 @@ static int __init g5_neo2_cpufreq_init(struct device_node *cpunode)
pmf_find_function(root, "slewing-done");
if (pfunc_set_vdnap0 == NULL ||
pfunc_vdnap0_complete == NULL) {
- printk(KERN_ERR "cpufreq: Can't find required "
- "platform function\n");
+ pr_err("Can't find required platform function\n");
goto bail_noprops;
}
@@ -453,10 +453,10 @@ static int __init g5_neo2_cpufreq_init(struct device_node *cpunode)
g5_pmode_cur = -1;
g5_switch_freq(g5_query_freq());
- printk(KERN_INFO "Registering G5 CPU frequency driver\n");
- printk(KERN_INFO "Frequency method: %s, Voltage method: %s\n",
- freq_method, volt_method);
- printk(KERN_INFO "Low: %d Mhz, High: %d Mhz, Cur: %d MHz\n",
+ pr_info("Registering G5 CPU frequency driver\n");
+ pr_info("Frequency method: %s, Voltage method: %s\n",
+ freq_method, volt_method);
+ pr_info("Low: %d Mhz, High: %d Mhz, Cur: %d MHz\n",
g5_cpu_freqs[1].frequency/1000,
g5_cpu_freqs[0].frequency/1000,
g5_cpu_freqs[g5_pmode_cur].frequency/1000);
@@ -493,7 +493,7 @@ static int __init g5_pm72_cpufreq_init(struct device_node *cpunode)
if (cpuid != NULL)
eeprom = of_get_property(cpuid, "cpuid", NULL);
if (eeprom == NULL) {
- printk(KERN_ERR "cpufreq: Can't find cpuid EEPROM !\n");
+ pr_err("Can't find cpuid EEPROM !\n");
rc = -ENODEV;
goto bail;
}
@@ -511,7 +511,7 @@ static int __init g5_pm72_cpufreq_init(struct device_node *cpunode)
break;
}
if (hwclock == NULL) {
- printk(KERN_ERR "cpufreq: Can't find i2c clock chip !\n");
+ pr_err("Can't find i2c clock chip !\n");
rc = -ENODEV;
goto bail;
}
@@ -539,7 +539,7 @@ static int __init g5_pm72_cpufreq_init(struct device_node *cpunode)
/* Check we have minimum requirements */
if (pfunc_cpu_getfreq == NULL || pfunc_cpu_setfreq_high == NULL ||
pfunc_cpu_setfreq_low == NULL || pfunc_slewing_done == NULL) {
- printk(KERN_ERR "cpufreq: Can't find platform functions !\n");
+ pr_err("Can't find platform functions !\n");
rc = -ENODEV;
goto bail;
}
@@ -567,7 +567,7 @@ static int __init g5_pm72_cpufreq_init(struct device_node *cpunode)
/* Get max frequency from device-tree */
valp = of_get_property(cpunode, "clock-frequency", NULL);
if (!valp) {
- printk(KERN_ERR "cpufreq: Can't find CPU frequency !\n");
+ pr_err("Can't find CPU frequency !\n");
rc = -ENODEV;
goto bail;
}
@@ -583,8 +583,7 @@ static int __init g5_pm72_cpufreq_init(struct device_node *cpunode)
/* Check for machines with no useful settings */
if (il == ih) {
- printk(KERN_WARNING "cpufreq: No low frequency mode available"
- " on this model !\n");
+ pr_warn("No low frequency mode available on this model !\n");
rc = -ENODEV;
goto bail;
}
@@ -595,7 +594,7 @@ static int __init g5_pm72_cpufreq_init(struct device_node *cpunode)
/* Sanity check */
if (min_freq >= max_freq || min_freq < 1000) {
- printk(KERN_ERR "cpufreq: Can't calculate low frequency !\n");
+ pr_err("Can't calculate low frequency !\n");
rc = -ENXIO;
goto bail;
}
@@ -619,10 +618,10 @@ static int __init g5_pm72_cpufreq_init(struct device_node *cpunode)
g5_pmode_cur = -1;
g5_switch_freq(g5_query_freq());
- printk(KERN_INFO "Registering G5 CPU frequency driver\n");
- printk(KERN_INFO "Frequency method: i2c/pfunc, "
- "Voltage method: %s\n", has_volt ? "i2c/pfunc" : "none");
- printk(KERN_INFO "Low: %d Mhz, High: %d Mhz, Cur: %d MHz\n",
+ pr_info("Registering G5 CPU frequency driver\n");
+ pr_info("Frequency method: i2c/pfunc, Voltage method: %s\n",
+ has_volt ? "i2c/pfunc" : "none");
+ pr_info("Low: %d Mhz, High: %d Mhz, Cur: %d MHz\n",
g5_cpu_freqs[1].frequency/1000,
g5_cpu_freqs[0].frequency/1000,
g5_cpu_freqs[g5_pmode_cur].frequency/1000);
@@ -654,7 +653,7 @@ static int __init g5_cpufreq_init(void)
/* Get first CPU node */
cpunode = of_cpu_device_node_get(0);
if (cpunode == NULL) {
- pr_err("cpufreq: Can't find any CPU node\n");
+ pr_err("Can't find any CPU node\n");
return -ENODEV;
}
diff --git a/drivers/cpufreq/powernow-k6.c b/drivers/cpufreq/powernow-k6.c
index e6f24b281e3e..dedd2568e852 100644
--- a/drivers/cpufreq/powernow-k6.c
+++ b/drivers/cpufreq/powernow-k6.c
@@ -8,6 +8,8 @@
* BIG FAT DISCLAIMER: Work in progress code. Possibly *dangerous*
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
@@ -22,7 +24,6 @@
#define POWERNOW_IOPORT 0xfff0 /* it doesn't matter where, as long
as it is unused */
-#define PFX "powernow-k6: "
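An illustrative sketch of why the PFX macro can go away (not part of the patch; the example function is hypothetical): the pr_*() helpers in <linux/printk.h> wrap every format string in pr_fmt(), so a file-local pr_fmt definition prepends the module name at compile time.

    /* Illustrative sketch only -- not part of the patch. */
    #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt	/* "powernow_k6: " for this file */
    #include <linux/printk.h>

    static void example(void)
    {
            /*
             * pr_err(fmt, ...) expands to printk(KERN_ERR pr_fmt(fmt), ...),
             * so this prints "powernow_k6: invalid target frequency",
             * matching the output of the old KERN_ERR PFX form.
             */
            pr_err("invalid target frequency\n");
    }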
static unsigned int busfreq; /* FSB, in 10 kHz */
static unsigned int max_multiplier;
@@ -141,7 +142,7 @@ static int powernow_k6_target(struct cpufreq_policy *policy,
{
if (clock_ratio[best_i].driver_data > max_multiplier) {
- printk(KERN_ERR PFX "invalid target frequency\n");
+ pr_err("invalid target frequency\n");
return -EINVAL;
}
@@ -175,13 +176,14 @@ static int powernow_k6_cpu_init(struct cpufreq_policy *policy)
max_multiplier = param_max_multiplier;
goto have_max_multiplier;
}
- printk(KERN_ERR "powernow-k6: invalid max_multiplier parameter, valid parameters 20, 30, 35, 40, 45, 50, 55, 60\n");
+ pr_err("invalid max_multiplier parameter, valid parameters 20, 30, 35, 40, 45, 50, 55, 60\n");
return -EINVAL;
}
if (!max_multiplier) {
- printk(KERN_WARNING "powernow-k6: unknown frequency %u, cannot determine current multiplier\n", khz);
- printk(KERN_WARNING "powernow-k6: use module parameters max_multiplier and bus_frequency\n");
+ pr_warn("unknown frequency %u, cannot determine current multiplier\n",
+ khz);
+ pr_warn("use module parameters max_multiplier and bus_frequency\n");
return -EOPNOTSUPP;
}
@@ -193,7 +195,7 @@ have_max_multiplier:
busfreq = param_busfreq / 10;
goto have_busfreq;
}
- printk(KERN_ERR "powernow-k6: invalid bus_frequency parameter, allowed range 50000 - 150000 kHz\n");
+ pr_err("invalid bus_frequency parameter, allowed range 50000 - 150000 kHz\n");
return -EINVAL;
}
@@ -275,7 +277,7 @@ static int __init powernow_k6_init(void)
return -ENODEV;
if (!request_region(POWERNOW_IOPORT, 16, "PowerNow!")) {
- printk(KERN_INFO PFX "PowerNow IOPORT region already used.\n");
+ pr_info("PowerNow IOPORT region already used\n");
return -EIO;
}
diff --git a/drivers/cpufreq/powernow-k7.c b/drivers/cpufreq/powernow-k7.c
index c1ae1999770a..9f013ed42977 100644
--- a/drivers/cpufreq/powernow-k7.c
+++ b/drivers/cpufreq/powernow-k7.c
@@ -13,6 +13,8 @@
* - We disable half multipliers if ACPI is used on A0 stepping CPUs.
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
@@ -35,9 +37,6 @@
#include "powernow-k7.h"
-#define PFX "powernow: "
-
-
struct psb_s {
u8 signature[10];
u8 tableversion;
@@ -127,14 +126,13 @@ static int check_powernow(void)
maxei = cpuid_eax(0x80000000);
if (maxei < 0x80000007) { /* Any powernow info ? */
#ifdef MODULE
- printk(KERN_INFO PFX "No powernow capabilities detected\n");
+ pr_info("No powernow capabilities detected\n");
#endif
return 0;
}
if ((c->x86_model == 6) && (c->x86_mask == 0)) {
- printk(KERN_INFO PFX "K7 660[A0] core detected, "
- "enabling errata workarounds\n");
+ pr_info("K7 660[A0] core detected, enabling errata workarounds\n");
have_a0 = 1;
}
@@ -144,22 +142,22 @@ static int check_powernow(void)
if (!(edx & (1 << 1 | 1 << 2)))
return 0;
- printk(KERN_INFO PFX "PowerNOW! Technology present. Can scale: ");
+ pr_info("PowerNOW! Technology present. Can scale: ");
if (edx & 1 << 1) {
- printk("frequency");
+ pr_cont("frequency");
can_scale_bus = 1;
}
if ((edx & (1 << 1 | 1 << 2)) == 0x6)
- printk(" and ");
+ pr_cont(" and ");
if (edx & 1 << 2) {
- printk("voltage");
+ pr_cont("voltage");
can_scale_vid = 1;
}
- printk(".\n");
+ pr_cont("\n");
return 1;
}
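A side note on the pr_cont() conversion above: pr_cont() maps to printk(KERN_CONT ...) and is not routed through pr_fmt(), so only the opening pr_info() of a built-up line carries the module prefix. An illustrative sketch of the resulting console output (not part of the patch):

    pr_info("PowerNOW! Technology present. Can scale: "); /* prefixed "powernow_k7: ..." */
    pr_cont("frequency");                                  /* continuation, no prefix   */
    pr_cont(" and ");
    pr_cont("voltage");
    pr_cont("\n");
    /* console: "powernow_k7: PowerNOW! Technology present. Can scale: frequency and voltage" */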
@@ -427,16 +425,14 @@ err1:
err05:
kfree(acpi_processor_perf);
err0:
- printk(KERN_WARNING PFX "ACPI perflib can not be used on "
- "this platform\n");
+ pr_warn("ACPI perflib can not be used on this platform\n");
acpi_processor_perf = NULL;
return retval;
}
#else
static int powernow_acpi_init(void)
{
- printk(KERN_INFO PFX "no support for ACPI processor found."
- " Please recompile your kernel with ACPI processor\n");
+ pr_info("no support for ACPI processor found - please recompile your kernel with ACPI processor\n");
return -EINVAL;
}
#endif
@@ -468,8 +464,7 @@ static int powernow_decode_bios(int maxfid, int startvid)
psb = (struct psb_s *) p;
pr_debug("Table version: 0x%x\n", psb->tableversion);
if (psb->tableversion != 0x12) {
- printk(KERN_INFO PFX "Sorry, only v1.2 tables"
- " supported right now\n");
+ pr_info("Sorry, only v1.2 tables supported right now\n");
return -ENODEV;
}
@@ -481,10 +476,8 @@ static int powernow_decode_bios(int maxfid, int startvid)
latency = psb->settlingtime;
if (latency < 100) {
- printk(KERN_INFO PFX "BIOS set settling time "
- "to %d microseconds. "
- "Should be at least 100. "
- "Correcting.\n", latency);
+ pr_info("BIOS set settling time to %d microseconds. Should be at least 100. Correcting.\n",
+ latency);
latency = 100;
}
pr_debug("Settling Time: %d microseconds.\n",
@@ -516,10 +509,9 @@ static int powernow_decode_bios(int maxfid, int startvid)
p += 2;
}
}
- printk(KERN_INFO PFX "No PST tables match this cpuid "
- "(0x%x)\n", etuple);
- printk(KERN_INFO PFX "This is indicative of a broken "
- "BIOS.\n");
+ pr_info("No PST tables match this cpuid (0x%x)\n",
+ etuple);
+ pr_info("This is indicative of a broken BIOS\n");
return -EINVAL;
}
@@ -552,7 +544,7 @@ static int fixup_sgtc(void)
sgtc = 100 * m * latency;
sgtc = sgtc / 3;
if (sgtc > 0xfffff) {
- printk(KERN_WARNING PFX "SGTC too large %d\n", sgtc);
+ pr_warn("SGTC too large %d\n", sgtc);
sgtc = 0xfffff;
}
return sgtc;
@@ -574,14 +566,10 @@ static unsigned int powernow_get(unsigned int cpu)
static int acer_cpufreq_pst(const struct dmi_system_id *d)
{
- printk(KERN_WARNING PFX
- "%s laptop with broken PST tables in BIOS detected.\n",
+ pr_warn("%s laptop with broken PST tables in BIOS detected\n",
d->ident);
- printk(KERN_WARNING PFX
- "You need to downgrade to 3A21 (09/09/2002), or try a newer "
- "BIOS than 3A71 (01/20/2003)\n");
- printk(KERN_WARNING PFX
- "cpufreq scaling has been disabled as a result of this.\n");
+ pr_warn("You need to downgrade to 3A21 (09/09/2002), or try a newer BIOS than 3A71 (01/20/2003)\n");
+ pr_warn("cpufreq scaling has been disabled as a result of this\n");
return 0;
}
@@ -616,40 +604,38 @@ static int powernow_cpu_init(struct cpufreq_policy *policy)
fsb = (10 * cpu_khz) / fid_codes[fidvidstatus.bits.CFID];
if (!fsb) {
- printk(KERN_WARNING PFX "can not determine bus frequency\n");
+ pr_warn("can not determine bus frequency\n");
return -EINVAL;
}
pr_debug("FSB: %3dMHz\n", fsb/1000);
if (dmi_check_system(powernow_dmi_table) || acpi_force) {
- printk(KERN_INFO PFX "PSB/PST known to be broken. "
- "Trying ACPI instead\n");
+ pr_info("PSB/PST known to be broken - trying ACPI instead\n");
result = powernow_acpi_init();
} else {
result = powernow_decode_bios(fidvidstatus.bits.MFID,
fidvidstatus.bits.SVID);
if (result) {
- printk(KERN_INFO PFX "Trying ACPI perflib\n");
+ pr_info("Trying ACPI perflib\n");
maximum_speed = 0;
minimum_speed = -1;
latency = 0;
result = powernow_acpi_init();
if (result) {
- printk(KERN_INFO PFX
- "ACPI and legacy methods failed\n");
+ pr_info("ACPI and legacy methods failed\n");
}
} else {
/* SGTC use the bus clock as timer */
latency = fixup_sgtc();
- printk(KERN_INFO PFX "SGTC: %d\n", latency);
+ pr_info("SGTC: %d\n", latency);
}
}
if (result)
return result;
- printk(KERN_INFO PFX "Minimum speed %d MHz. Maximum speed %d MHz.\n",
- minimum_speed/1000, maximum_speed/1000);
+ pr_info("Minimum speed %d MHz - Maximum speed %d MHz\n",
+ minimum_speed/1000, maximum_speed/1000);
policy->cpuinfo.transition_latency =
cpufreq_scale(2000000UL, fsb, latency);
diff --git a/drivers/cpufreq/powernv-cpufreq.c b/drivers/cpufreq/powernv-cpufreq.c
index 39ac78c94be0..54c45368e3f1 100644
--- a/drivers/cpufreq/powernv-cpufreq.c
+++ b/drivers/cpufreq/powernv-cpufreq.c
@@ -36,12 +36,56 @@
#include <asm/reg.h>
#include <asm/smp.h> /* Required for cpu_sibling_mask() in UP configs */
#include <asm/opal.h>
+#include <linux/timer.h>
#define POWERNV_MAX_PSTATES 256
#define PMSR_PSAFE_ENABLE (1UL << 30)
#define PMSR_SPR_EM_DISABLE (1UL << 31)
#define PMSR_MAX(x) ((x >> 32) & 0xFF)
+#define MAX_RAMP_DOWN_TIME 5120
+/*
+ * On an idle system we want the global pstate to ramp down from the max value
+ * to the min over a span of ~5 seconds. We also want it to ramp down slowly at
+ * first and then more rapidly later on.
+ *
+ * This gives a percentage rampdown for time elapsed in milliseconds.
+ * ramp_down_percentage = ((ms * ms) >> 18)
+ * ~= 3.8 * (sec * sec)
+ *
+ * At 0 ms ramp_down_percent = 0
+ * At 5120 ms ramp_down_percent = 100
+ */
+#define ramp_down_percent(time) ((time * time) >> 18)
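As a quick sanity check of the quadratic above, a small user-space sketch (illustrative only, not part of the patch) confirms the end points quoted in the comment:

    /* Illustrative user-space check of the ramp-down quadratic; time is in ms. */
    #include <assert.h>

    #define ramp_down_percent(time) ((time * time) >> 18)

    int main(void)
    {
            assert(ramp_down_percent(0UL) == 0);      /* at 0 ms:     0%  */
            assert(ramp_down_percent(2560UL) == 25);  /* half-way:   25%  */
            assert(ramp_down_percent(5120UL) == 100); /* at 5120 ms: 100% */
            return 0;
    }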
+
+/* Interval after which the timer is queued to bring down global pstate */
+#define GPSTATE_TIMER_INTERVAL 2000
+
+/**
+ * struct global_pstate_info - Per policy data structure to maintain history of
+ * global pstates
+ * @highest_lpstate: The local pstate from which we are ramping down
+ * @elapsed_time: Time in ms spent in ramping down from
+ * highest_lpstate
+ * @last_sampled_time: Time from boot in ms when global pstates were
+ * last set
+ * @last_lpstate,last_gpstate: Last set values for local and global pstates
+ * @timer: Is used for ramping down if cpu goes idle for
+ * a long time with global pstate held high
+ * @gpstate_lock: A spinlock to maintain synchronization between
+ * routines called by the timer handler and
+ * governor's target_index calls
+ */
+struct global_pstate_info {
+ int highest_lpstate;
+ unsigned int elapsed_time;
+ unsigned int last_sampled_time;
+ int last_lpstate;
+ int last_gpstate;
+ spinlock_t gpstate_lock;
+ struct timer_list timer;
+};
+
static struct cpufreq_frequency_table powernv_freqs[POWERNV_MAX_PSTATES+1];
static bool rebooting, throttled, occ_reset;
@@ -94,6 +138,17 @@ static struct powernv_pstate_info {
int nr_pstates;
} powernv_pstate_info;
+static inline void reset_gpstates(struct cpufreq_policy *policy)
+{
+ struct global_pstate_info *gpstates = policy->driver_data;
+
+ gpstates->highest_lpstate = 0;
+ gpstates->elapsed_time = 0;
+ gpstates->last_sampled_time = 0;
+ gpstates->last_lpstate = 0;
+ gpstates->last_gpstate = 0;
+}
+
/*
* Initialize the freq table based on data obtained
* from the firmware passed via device-tree
@@ -285,6 +340,7 @@ static inline void set_pmspr(unsigned long sprn, unsigned long val)
struct powernv_smp_call_data {
unsigned int freq;
int pstate_id;
+ int gpstate_id;
};
/*
@@ -343,19 +399,21 @@ static unsigned int powernv_cpufreq_get(unsigned int cpu)
* (struct powernv_smp_call_data *) and the pstate_id which needs to be set
* on this CPU should be present in freq_data->pstate_id.
*/
-static void set_pstate(void *freq_data)
+static void set_pstate(void *data)
{
unsigned long val;
- unsigned long pstate_ul =
- ((struct powernv_smp_call_data *) freq_data)->pstate_id;
+ struct powernv_smp_call_data *freq_data = data;
+ unsigned long pstate_ul = freq_data->pstate_id;
+ unsigned long gpstate_ul = freq_data->gpstate_id;
val = get_pmspr(SPRN_PMCR);
val = val & 0x0000FFFFFFFFFFFFULL;
pstate_ul = pstate_ul & 0xFF;
+ gpstate_ul = gpstate_ul & 0xFF;
/* Set both global(bits 56..63) and local(bits 48..55) PStates */
- val = val | (pstate_ul << 56) | (pstate_ul << 48);
+ val = val | (gpstate_ul << 56) | (pstate_ul << 48);
pr_debug("Setting cpu %d pmcr to %016lX\n",
raw_smp_processor_id(), val);
@@ -424,6 +482,111 @@ next:
}
}
+/**
+ * calc_global_pstate - Calculate global pstate
+ * @elapsed_time: Elapsed time in milliseconds
+ * @local_pstate: New local pstate
+ * @highest_lpstate: pstate from which it is ramping down
+ *
+ * Finds the appropriate global pstate based on the pstate from which it is
+ * ramping down and the time elapsed while ramping down. It follows a quadratic
+ * equation which ensures that it ramps down to pmin within 5 seconds.
+ */
+static inline int calc_global_pstate(unsigned int elapsed_time,
+ int highest_lpstate, int local_pstate)
+{
+ int pstate_diff;
+
+ /*
+ * Using ramp_down_percent we get the percentage of rampdown
+ * that we are expecting to drop. The difference between
+ * highest_lpstate and powernv_pstate_info.min gives the absolute
+ * number of pstates we will eventually have dropped by the end of
+ * 5 seconds; we then scale it to get the number of pstates to drop now.
+ */
+ pstate_diff = ((int)ramp_down_percent(elapsed_time) *
+ (highest_lpstate - powernv_pstate_info.min)) / 100;
+
+ /* Ensure that the global pstate is >= the local pstate */
+ if (highest_lpstate - pstate_diff < local_pstate)
+ return local_pstate;
+ else
+ return highest_lpstate - pstate_diff;
+}
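A worked example with made-up numbers (an assumption for illustration; real pstate values are platform specific): take highest_lpstate = 0, powernv_pstate_info.min = -20 and a new local pstate of -20, i.e. a range of 20 pstates.

    elapsed_time = 2560 ms: ramp_down_percent = 25, pstate_diff = 25 * 20 / 100 = 5,
                            so the returned global pstate is 0 - 5 = -5 (still above -20).
    elapsed_time = 5120 ms: ramp_down_percent = 100, pstate_diff = 20,
                            so the returned global pstate is -20, equal to the local
                            pstate, and the ramp-down is complete.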
+
+static inline void queue_gpstate_timer(struct global_pstate_info *gpstates)
+{
+ unsigned int timer_interval;
+
+ /*
+ * Set up the timer to fire after GPSTATE_TIMER_INTERVAL ms, but
+ * if that would exceed MAX_RAMP_DOWN_TIME ms of total ramp-down
+ * time, set it so that it fires exactly when MAX_RAMP_DOWN_TIME ms
+ * of ramp-down time have elapsed.
+ */
+ if ((gpstates->elapsed_time + GPSTATE_TIMER_INTERVAL)
+ > MAX_RAMP_DOWN_TIME)
+ timer_interval = MAX_RAMP_DOWN_TIME - gpstates->elapsed_time;
+ else
+ timer_interval = GPSTATE_TIMER_INTERVAL;
+
+ mod_timer_pinned(&gpstates->timer, jiffies +
+ msecs_to_jiffies(timer_interval));
+}
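With the constants defined earlier, the capping works out as follows (illustrative numbers):

    elapsed_time = 1000 ms: 1000 + 2000 <= 5120, so the timer is queued for the full
                            GPSTATE_TIMER_INTERVAL of 2000 ms.
    elapsed_time = 4000 ms: 4000 + 2000 > 5120, so the interval is capped at
                            5120 - 4000 = 1120 ms and the timer fires exactly when
                            MAX_RAMP_DOWN_TIME is reached.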
+
+/**
+ * gpstate_timer_handler
+ *
+ * @data: pointer to cpufreq_policy on which timer was queued
+ *
+ * This handler brings down the global pstate closer to the local pstate
+ * according to the quadratic equation. Queues a new timer if the global
+ * pstate is still not equal to the local pstate.
+ */
+void gpstate_timer_handler(unsigned long data)
+{
+ struct cpufreq_policy *policy = (struct cpufreq_policy *)data;
+ struct global_pstate_info *gpstates = policy->driver_data;
+ int gpstate_id;
+ unsigned int time_diff = jiffies_to_msecs(jiffies)
+ - gpstates->last_sampled_time;
+ struct powernv_smp_call_data freq_data;
+
+ if (!spin_trylock(&gpstates->gpstate_lock))
+ return;
+
+ gpstates->last_sampled_time += time_diff;
+ gpstates->elapsed_time += time_diff;
+ freq_data.pstate_id = gpstates->last_lpstate;
+
+ if ((gpstates->last_gpstate == freq_data.pstate_id) ||
+ (gpstates->elapsed_time > MAX_RAMP_DOWN_TIME)) {
+ gpstate_id = freq_data.pstate_id;
+ reset_gpstates(policy);
+ gpstates->highest_lpstate = freq_data.pstate_id;
+ } else {
+ gpstate_id = calc_global_pstate(gpstates->elapsed_time,
+ gpstates->highest_lpstate,
+ freq_data.pstate_id);
+ }
+
+ /*
+ * If the local pstate is equal to the global pstate, ramp-down is over,
+ * so the timer does not need to be queued.
+ */
+ if (gpstate_id != freq_data.pstate_id)
+ queue_gpstate_timer(gpstates);
+
+ freq_data.gpstate_id = gpstate_id;
+ gpstates->last_gpstate = freq_data.gpstate_id;
+ gpstates->last_lpstate = freq_data.pstate_id;
+
+ spin_unlock(&gpstates->gpstate_lock);
+
+ /* Timer may get migrated to a different cpu on cpu hot unplug */
+ smp_call_function_any(policy->cpus, set_pstate, &freq_data, 1);
+}
+
/*
* powernv_cpufreq_target_index: Sets the frequency corresponding to
* the cpufreq table entry indexed by new_index on the cpus in the
@@ -433,6 +596,8 @@ static int powernv_cpufreq_target_index(struct cpufreq_policy *policy,
unsigned int new_index)
{
struct powernv_smp_call_data freq_data;
+ unsigned int cur_msec, gpstate_id;
+ struct global_pstate_info *gpstates = policy->driver_data;
if (unlikely(rebooting) && new_index != get_nominal_index())
return 0;
@@ -440,28 +605,81 @@ static int powernv_cpufreq_target_index(struct cpufreq_policy *policy,
if (!throttled)
powernv_cpufreq_throttle_check(NULL);
+ cur_msec = jiffies_to_msecs(get_jiffies_64());
+
+ spin_lock(&gpstates->gpstate_lock);
freq_data.pstate_id = powernv_freqs[new_index].driver_data;
+ if (!gpstates->last_sampled_time) {
+ gpstate_id = freq_data.pstate_id;
+ gpstates->highest_lpstate = freq_data.pstate_id;
+ goto gpstates_done;
+ }
+
+ if (gpstates->last_gpstate > freq_data.pstate_id) {
+ gpstates->elapsed_time += cur_msec -
+ gpstates->last_sampled_time;
+
+ /*
+ * If it has been ramping down for more than MAX_RAMP_DOWN_TIME,
+ * reset all global pstate related data and set it equal to the
+ * local pstate to start fresh.
+ */
+ if (gpstates->elapsed_time > MAX_RAMP_DOWN_TIME) {
+ reset_gpstates(policy);
+ gpstates->highest_lpstate = freq_data.pstate_id;
+ gpstate_id = freq_data.pstate_id;
+ } else {
+ /* Elapsed time is less than 5 seconds, continue to ramp down */
+ gpstate_id = calc_global_pstate(gpstates->elapsed_time,
+ gpstates->highest_lpstate,
+ freq_data.pstate_id);
+ }
+ } else {
+ reset_gpstates(policy);
+ gpstates->highest_lpstate = freq_data.pstate_id;
+ gpstate_id = freq_data.pstate_id;
+ }
+
+ /*
+ * If the local pstate is equal to the global pstate, ramp-down is over,
+ * so the timer does not need to be queued.
+ */
+ if (gpstate_id != freq_data.pstate_id)
+ queue_gpstate_timer(gpstates);
+ else
+ del_timer_sync(&gpstates->timer);
+
+gpstates_done:
+ freq_data.gpstate_id = gpstate_id;
+ gpstates->last_sampled_time = cur_msec;
+ gpstates->last_gpstate = freq_data.gpstate_id;
+ gpstates->last_lpstate = freq_data.pstate_id;
+
+ spin_unlock(&gpstates->gpstate_lock);
+
/*
* Use smp_call_function to send IPI and execute the
* mtspr on target CPU. We could do that without IPI
* if current CPU is within policy->cpus (core)
*/
smp_call_function_any(policy->cpus, set_pstate, &freq_data, 1);
-
return 0;
}
static int powernv_cpufreq_cpu_init(struct cpufreq_policy *policy)
{
- int base, i;
+ int base, i, ret;
+ struct kernfs_node *kn;
+ struct global_pstate_info *gpstates;
base = cpu_first_thread_sibling(policy->cpu);
for (i = 0; i < threads_per_core; i++)
cpumask_set_cpu(base + i, policy->cpus);
- if (!policy->driver_data) {
+ kn = kernfs_find_and_get(policy->kobj.sd, throttle_attr_grp.name);
+ if (!kn) {
int ret;
ret = sysfs_create_group(&policy->kobj, &throttle_attr_grp);
@@ -470,13 +688,37 @@ static int powernv_cpufreq_cpu_init(struct cpufreq_policy *policy)
policy->cpu);
return ret;
}
- /*
- * policy->driver_data is used as a flag for one-time
- * creation of throttle sysfs files.
- */
- policy->driver_data = policy;
+ } else {
+ kernfs_put(kn);
}
- return cpufreq_table_validate_and_show(policy, powernv_freqs);
+
+ gpstates = kzalloc(sizeof(*gpstates), GFP_KERNEL);
+ if (!gpstates)
+ return -ENOMEM;
+
+ policy->driver_data = gpstates;
+
+ /* initialize timer */
+ init_timer_deferrable(&gpstates->timer);
+ gpstates->timer.data = (unsigned long)policy;
+ gpstates->timer.function = gpstate_timer_handler;
+ gpstates->timer.expires = jiffies +
+ msecs_to_jiffies(GPSTATE_TIMER_INTERVAL);
+ spin_lock_init(&gpstates->gpstate_lock);
+ ret = cpufreq_table_validate_and_show(policy, powernv_freqs);
+
+ if (ret < 0)
+ kfree(policy->driver_data);
+
+ return ret;
+}
+
+static int powernv_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+{
+ /* timer is deleted in cpufreq_cpu_stop() */
+ kfree(policy->driver_data);
+
+ return 0;
}
static int powernv_cpufreq_reboot_notifier(struct notifier_block *nb,
@@ -604,15 +846,19 @@ static struct notifier_block powernv_cpufreq_opal_nb = {
static void powernv_cpufreq_stop_cpu(struct cpufreq_policy *policy)
{
struct powernv_smp_call_data freq_data;
+ struct global_pstate_info *gpstates = policy->driver_data;
freq_data.pstate_id = powernv_pstate_info.min;
+ freq_data.gpstate_id = powernv_pstate_info.min;
smp_call_function_single(policy->cpu, set_pstate, &freq_data, 1);
+ del_timer_sync(&gpstates->timer);
}
static struct cpufreq_driver powernv_cpufreq_driver = {
.name = "powernv-cpufreq",
.flags = CPUFREQ_CONST_LOOPS,
.init = powernv_cpufreq_cpu_init,
+ .exit = powernv_cpufreq_cpu_exit,
.verify = cpufreq_generic_frequency_table_verify,
.target_index = powernv_cpufreq_target_index,
.get = powernv_cpufreq_get,
diff --git a/drivers/cpufreq/ppc_cbe_cpufreq.h b/drivers/cpufreq/ppc_cbe_cpufreq.h
index b4c00a5a6a59..3eace725ccd6 100644
--- a/drivers/cpufreq/ppc_cbe_cpufreq.h
+++ b/drivers/cpufreq/ppc_cbe_cpufreq.h
@@ -17,7 +17,7 @@ int cbe_cpufreq_get_pmode(int cpu);
int cbe_cpufreq_set_pmode_pmi(int cpu, unsigned int pmode);
-#if defined(CONFIG_CPU_FREQ_CBE_PMI) || defined(CONFIG_CPU_FREQ_CBE_PMI_MODULE)
+#if IS_ENABLED(CONFIG_CPU_FREQ_CBE_PMI)
extern bool cbe_cpufreq_has_pmi;
#else
#define cbe_cpufreq_has_pmi (0)
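IS_ENABLED() from <linux/kconfig.h> evaluates to 1 when the option is built in (=y) or modular (=m), which is exactly what the two defined() tests spelled out by hand. An illustrative sketch (not part of the patch; the example function is hypothetical):

    /* Illustrative only -- not part of the patch. */
    #include <linux/kconfig.h>
    #include <linux/printk.h>

    /* Old form: #if defined(CONFIG_CPU_FREQ_CBE_PMI) || defined(CONFIG_CPU_FREQ_CBE_PMI_MODULE)
     * New form: #if IS_ENABLED(CONFIG_CPU_FREQ_CBE_PMI)
     * IS_ENABLED() also works in ordinary C conditions, where the dead branch
     * is discarded at compile time:
     */
    static void example(void)
    {
            if (IS_ENABLED(CONFIG_CPU_FREQ_CBE_PMI))
                    pr_debug("PMI-based frequency switching available\n");
    }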
diff --git a/drivers/cpufreq/ppc_cbe_cpufreq_pmi.c b/drivers/cpufreq/ppc_cbe_cpufreq_pmi.c
index 7969f7690498..7c4cd5c634f2 100644
--- a/drivers/cpufreq/ppc_cbe_cpufreq_pmi.c
+++ b/drivers/cpufreq/ppc_cbe_cpufreq_pmi.c
@@ -23,7 +23,7 @@
#include <linux/kernel.h>
#include <linux/types.h>
#include <linux/timer.h>
-#include <linux/module.h>
+#include <linux/init.h>
#include <linux/of_platform.h>
#include <asm/processor.h>
@@ -142,15 +142,4 @@ static int __init cbe_cpufreq_pmi_init(void)
return 0;
}
-
-static void __exit cbe_cpufreq_pmi_exit(void)
-{
- cpufreq_unregister_notifier(&pmi_notifier_block, CPUFREQ_POLICY_NOTIFIER);
- pmi_unregister_handler(&cbe_pmi_handler);
-}
-
-module_init(cbe_cpufreq_pmi_init);
-module_exit(cbe_cpufreq_pmi_exit);
-
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Christian Krafft <krafft@de.ibm.com>");
+device_initcall(cbe_cpufreq_pmi_init);
diff --git a/drivers/cpufreq/pxa2xx-cpufreq.c b/drivers/cpufreq/pxa2xx-cpufreq.c
index 46fee1539cc8..ce345bf34d5d 100644
--- a/drivers/cpufreq/pxa2xx-cpufreq.c
+++ b/drivers/cpufreq/pxa2xx-cpufreq.c
@@ -29,6 +29,8 @@
*
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sched.h>
@@ -186,8 +188,7 @@ static int pxa_cpufreq_change_voltage(const struct pxa_freqs *pxa_freq)
ret = regulator_set_voltage(vcc_core, vmin, vmax);
if (ret)
- pr_err("cpufreq: Failed to set vcc_core in [%dmV..%dmV]\n",
- vmin, vmax);
+ pr_err("Failed to set vcc_core in [%dmV..%dmV]\n", vmin, vmax);
return ret;
}
@@ -195,10 +196,10 @@ static void __init pxa_cpufreq_init_voltages(void)
{
vcc_core = regulator_get(NULL, "vcc_core");
if (IS_ERR(vcc_core)) {
- pr_info("cpufreq: Didn't find vcc_core regulator\n");
+ pr_info("Didn't find vcc_core regulator\n");
vcc_core = NULL;
} else {
- pr_info("cpufreq: Found vcc_core regulator\n");
+ pr_info("Found vcc_core regulator\n");
}
}
#else
@@ -233,9 +234,8 @@ static void pxa27x_guess_max_freq(void)
{
if (!pxa27x_maxfreq) {
pxa27x_maxfreq = 416000;
- printk(KERN_INFO "PXA CPU 27x max frequency not defined "
- "(pxa27x_maxfreq), assuming pxa271 with %dkHz maxfreq\n",
- pxa27x_maxfreq);
+ pr_info("PXA CPU 27x max frequency not defined (pxa27x_maxfreq), assuming pxa271 with %dkHz maxfreq\n",
+ pxa27x_maxfreq);
} else {
pxa27x_maxfreq *= 1000;
}
@@ -408,7 +408,7 @@ static int pxa_cpufreq_init(struct cpufreq_policy *policy)
*/
if (cpu_is_pxa25x()) {
find_freq_tables(&pxa255_freq_table, &pxa255_freqs);
- pr_info("PXA255 cpufreq using %s frequency table\n",
+ pr_info("using %s frequency table\n",
pxa255_turbo_table ? "turbo" : "run");
cpufreq_table_validate_and_show(policy, pxa255_freq_table);
@@ -417,7 +417,7 @@ static int pxa_cpufreq_init(struct cpufreq_policy *policy)
cpufreq_table_validate_and_show(policy, pxa27x_freq_table);
}
- printk(KERN_INFO "PXA CPU frequency change support initialized\n");
+ pr_info("frequency change support initialized\n");
return 0;
}
diff --git a/drivers/cpufreq/qoriq-cpufreq.c b/drivers/cpufreq/qoriq-cpufreq.c
index b23e525a7af3..53d8c3fb16f6 100644
--- a/drivers/cpufreq/qoriq-cpufreq.c
+++ b/drivers/cpufreq/qoriq-cpufreq.c
@@ -301,10 +301,11 @@ err_np:
return -ENODEV;
}
-static int __exit qoriq_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+static int qoriq_cpufreq_cpu_exit(struct cpufreq_policy *policy)
{
struct cpu_data *data = policy->driver_data;
+ cpufreq_cooling_unregister(data->cdev);
kfree(data->pclk);
kfree(data->table);
kfree(data);
@@ -333,8 +334,8 @@ static void qoriq_cpufreq_ready(struct cpufreq_policy *policy)
cpud->cdev = of_cpufreq_cooling_register(np,
policy->related_cpus);
- if (IS_ERR(cpud->cdev)) {
- pr_err("Failed to register cooling device cpu%d: %ld\n",
+ if (IS_ERR(cpud->cdev) && PTR_ERR(cpud->cdev) != -ENOSYS) {
+ pr_err("cpu%d is not running as cooling device: %ld\n",
policy->cpu, PTR_ERR(cpud->cdev));
cpud->cdev = NULL;
@@ -348,7 +349,7 @@ static struct cpufreq_driver qoriq_cpufreq_driver = {
.name = "qoriq_cpufreq",
.flags = CPUFREQ_CONST_LOOPS,
.init = qoriq_cpufreq_cpu_init,
- .exit = __exit_p(qoriq_cpufreq_cpu_exit),
+ .exit = qoriq_cpufreq_cpu_exit,
.verify = cpufreq_generic_frequency_table_verify,
.target_index = qoriq_cpufreq_target,
.get = cpufreq_generic_get,
diff --git a/drivers/cpufreq/s3c2412-cpufreq.c b/drivers/cpufreq/s3c2412-cpufreq.c
index eb262133fef2..b04b6f02bbdc 100644
--- a/drivers/cpufreq/s3c2412-cpufreq.c
+++ b/drivers/cpufreq/s3c2412-cpufreq.c
@@ -10,6 +10,8 @@
* published by the Free Software Foundation.
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/init.h>
#include <linux/module.h>
#include <linux/interrupt.h>
@@ -197,21 +199,20 @@ static int s3c2412_cpufreq_add(struct device *dev,
hclk = clk_get(NULL, "hclk");
if (IS_ERR(hclk)) {
- printk(KERN_ERR "%s: cannot find hclk clock\n", __func__);
+ pr_err("cannot find hclk clock\n");
return -ENOENT;
}
fclk = clk_get(NULL, "fclk");
if (IS_ERR(fclk)) {
- printk(KERN_ERR "%s: cannot find fclk clock\n", __func__);
+ pr_err("cannot find fclk clock\n");
goto err_fclk;
}
fclk_rate = clk_get_rate(fclk);
if (fclk_rate > 200000000) {
- printk(KERN_INFO
- "%s: fclk %ld MHz, assuming 266MHz capable part\n",
- __func__, fclk_rate / 1000000);
+ pr_info("fclk %ld MHz, assuming 266MHz capable part\n",
+ fclk_rate / 1000000);
s3c2412_cpufreq_info.max.fclk = 266000000;
s3c2412_cpufreq_info.max.hclk = 133000000;
s3c2412_cpufreq_info.max.pclk = 66000000;
@@ -219,13 +220,13 @@ static int s3c2412_cpufreq_add(struct device *dev,
armclk = clk_get(NULL, "armclk");
if (IS_ERR(armclk)) {
- printk(KERN_ERR "%s: cannot find arm clock\n", __func__);
+ pr_err("cannot find arm clock\n");
goto err_armclk;
}
xtal = clk_get(NULL, "xtal");
if (IS_ERR(xtal)) {
- printk(KERN_ERR "%s: cannot find xtal clock\n", __func__);
+ pr_err("cannot find xtal clock\n");
goto err_xtal;
}
diff --git a/drivers/cpufreq/s3c2440-cpufreq.c b/drivers/cpufreq/s3c2440-cpufreq.c
index 0129f5c70a61..d0d75b65ddd6 100644
--- a/drivers/cpufreq/s3c2440-cpufreq.c
+++ b/drivers/cpufreq/s3c2440-cpufreq.c
@@ -11,6 +11,8 @@
* published by the Free Software Foundation.
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/init.h>
#include <linux/module.h>
#include <linux/interrupt.h>
@@ -66,7 +68,7 @@ static int s3c2440_cpufreq_calcdivs(struct s3c_cpufreq_config *cfg)
__func__, fclk, armclk, hclk_max);
if (armclk > fclk) {
- printk(KERN_WARNING "%s: armclk > fclk\n", __func__);
+ pr_warn("%s: armclk > fclk\n", __func__);
armclk = fclk;
}
@@ -273,7 +275,7 @@ static int s3c2440_cpufreq_add(struct device *dev,
armclk = s3c_cpufreq_clk_get(NULL, "armclk");
if (IS_ERR(xtal) || IS_ERR(hclk) || IS_ERR(fclk) || IS_ERR(armclk)) {
- printk(KERN_ERR "%s: failed to get clocks\n", __func__);
+ pr_err("%s: failed to get clocks\n", __func__);
return -ENOENT;
}
diff --git a/drivers/cpufreq/s3c24xx-cpufreq-debugfs.c b/drivers/cpufreq/s3c24xx-cpufreq-debugfs.c
index 9b7b4289d66c..4d976e8dbb2f 100644
--- a/drivers/cpufreq/s3c24xx-cpufreq-debugfs.c
+++ b/drivers/cpufreq/s3c24xx-cpufreq-debugfs.c
@@ -10,6 +10,8 @@
* published by the Free Software Foundation.
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/init.h>
#include <linux/export.h>
#include <linux/interrupt.h>
@@ -178,7 +180,7 @@ static int __init s3c_freq_debugfs_init(void)
{
dbgfs_root = debugfs_create_dir("s3c-cpufreq", NULL);
if (IS_ERR(dbgfs_root)) {
- printk(KERN_ERR "%s: error creating debugfs root\n", __func__);
+ pr_err("%s: error creating debugfs root\n", __func__);
return PTR_ERR(dbgfs_root);
}
diff --git a/drivers/cpufreq/s3c24xx-cpufreq.c b/drivers/cpufreq/s3c24xx-cpufreq.c
index 68ef8fd9482f..ae8eaed77b70 100644
--- a/drivers/cpufreq/s3c24xx-cpufreq.c
+++ b/drivers/cpufreq/s3c24xx-cpufreq.c
@@ -10,6 +10,8 @@
* published by the Free Software Foundation.
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/init.h>
#include <linux/module.h>
#include <linux/interrupt.h>
@@ -175,7 +177,7 @@ static int s3c_cpufreq_settarget(struct cpufreq_policy *policy,
cpu_new.freq.fclk = cpu_new.pll.frequency;
if (s3c_cpufreq_calcdivs(&cpu_new) < 0) {
- printk(KERN_ERR "no divisors for %d\n", target_freq);
+ pr_err("no divisors for %d\n", target_freq);
goto err_notpossible;
}
@@ -187,7 +189,7 @@ static int s3c_cpufreq_settarget(struct cpufreq_policy *policy,
if (cpu_new.freq.hclk != cpu_cur.freq.hclk) {
if (s3c_cpufreq_calcio(&cpu_new) < 0) {
- printk(KERN_ERR "%s: no IO timings\n", __func__);
+ pr_err("%s: no IO timings\n", __func__);
goto err_notpossible;
}
}
@@ -262,7 +264,7 @@ static int s3c_cpufreq_settarget(struct cpufreq_policy *policy,
return 0;
err_notpossible:
- printk(KERN_ERR "no compatible settings for %d\n", target_freq);
+ pr_err("no compatible settings for %d\n", target_freq);
return -EINVAL;
}
@@ -331,7 +333,7 @@ static int s3c_cpufreq_target(struct cpufreq_policy *policy,
&index);
if (ret < 0) {
- printk(KERN_ERR "%s: no PLL available\n", __func__);
+ pr_err("%s: no PLL available\n", __func__);
goto err_notpossible;
}
@@ -346,7 +348,7 @@ static int s3c_cpufreq_target(struct cpufreq_policy *policy,
return s3c_cpufreq_settarget(policy, target_freq, pll);
err_notpossible:
- printk(KERN_ERR "no compatible settings for %d\n", target_freq);
+ pr_err("no compatible settings for %d\n", target_freq);
return -EINVAL;
}
@@ -356,7 +358,7 @@ struct clk *s3c_cpufreq_clk_get(struct device *dev, const char *name)
clk = clk_get(dev, name);
if (IS_ERR(clk))
- printk(KERN_ERR "cpufreq: failed to get clock '%s'\n", name);
+ pr_err("failed to get clock '%s'\n", name);
return clk;
}
@@ -378,15 +380,16 @@ static int __init s3c_cpufreq_initclks(void)
if (IS_ERR(clk_fclk) || IS_ERR(clk_hclk) || IS_ERR(clk_pclk) ||
IS_ERR(_clk_mpll) || IS_ERR(clk_arm) || IS_ERR(_clk_xtal)) {
- printk(KERN_ERR "%s: could not get clock(s)\n", __func__);
+ pr_err("%s: could not get clock(s)\n", __func__);
return -ENOENT;
}
- printk(KERN_INFO "%s: clocks f=%lu,h=%lu,p=%lu,a=%lu\n", __func__,
- clk_get_rate(clk_fclk) / 1000,
- clk_get_rate(clk_hclk) / 1000,
- clk_get_rate(clk_pclk) / 1000,
- clk_get_rate(clk_arm) / 1000);
+ pr_info("%s: clocks f=%lu,h=%lu,p=%lu,a=%lu\n",
+ __func__,
+ clk_get_rate(clk_fclk) / 1000,
+ clk_get_rate(clk_hclk) / 1000,
+ clk_get_rate(clk_pclk) / 1000,
+ clk_get_rate(clk_arm) / 1000);
return 0;
}
@@ -424,7 +427,7 @@ static int s3c_cpufreq_resume(struct cpufreq_policy *policy)
ret = s3c_cpufreq_settarget(NULL, suspend_freq, &suspend_pll);
if (ret) {
- printk(KERN_ERR "%s: failed to reset pll/freq\n", __func__);
+ pr_err("%s: failed to reset pll/freq\n", __func__);
return ret;
}
@@ -449,13 +452,12 @@ static struct cpufreq_driver s3c24xx_driver = {
int s3c_cpufreq_register(struct s3c_cpufreq_info *info)
{
if (!info || !info->name) {
- printk(KERN_ERR "%s: failed to pass valid information\n",
- __func__);
+ pr_err("%s: failed to pass valid information\n", __func__);
return -EINVAL;
}
- printk(KERN_INFO "S3C24XX CPU Frequency driver, %s cpu support\n",
- info->name);
+ pr_info("S3C24XX CPU Frequency driver, %s cpu support\n",
+ info->name);
/* check our driver info has valid data */
@@ -478,7 +480,7 @@ int __init s3c_cpufreq_setboard(struct s3c_cpufreq_board *board)
struct s3c_cpufreq_board *ours;
if (!board) {
- printk(KERN_INFO "%s: no board data\n", __func__);
+ pr_info("%s: no board data\n", __func__);
return -EINVAL;
}
@@ -487,7 +489,7 @@ int __init s3c_cpufreq_setboard(struct s3c_cpufreq_board *board)
ours = kzalloc(sizeof(*ours), GFP_KERNEL);
if (ours == NULL) {
- printk(KERN_ERR "%s: no memory\n", __func__);
+ pr_err("%s: no memory\n", __func__);
return -ENOMEM;
}
@@ -502,15 +504,15 @@ static int __init s3c_cpufreq_auto_io(void)
int ret;
if (!cpu_cur.info->get_iotiming) {
- printk(KERN_ERR "%s: get_iotiming undefined\n", __func__);
+ pr_err("%s: get_iotiming undefined\n", __func__);
return -ENOENT;
}
- printk(KERN_INFO "%s: working out IO settings\n", __func__);
+ pr_info("%s: working out IO settings\n", __func__);
ret = (cpu_cur.info->get_iotiming)(&cpu_cur, &s3c24xx_iotiming);
if (ret)
- printk(KERN_ERR "%s: failed to get timings\n", __func__);
+ pr_err("%s: failed to get timings\n", __func__);
return ret;
}
@@ -561,7 +563,7 @@ static void s3c_cpufreq_update_loctkime(void)
val = calc_locktime(rate, cpu_cur.info->locktime_u) << bits;
val |= calc_locktime(rate, cpu_cur.info->locktime_m);
- printk(KERN_INFO "%s: new locktime is 0x%08x\n", __func__, val);
+ pr_info("%s: new locktime is 0x%08x\n", __func__, val);
__raw_writel(val, S3C2410_LOCKTIME);
}
@@ -580,7 +582,7 @@ static int s3c_cpufreq_build_freq(void)
ftab = kzalloc(sizeof(*ftab) * size, GFP_KERNEL);
if (!ftab) {
- printk(KERN_ERR "%s: no memory for tables\n", __func__);
+ pr_err("%s: no memory for tables\n", __func__);
return -ENOMEM;
}
@@ -608,15 +610,14 @@ static int __init s3c_cpufreq_initcall(void)
if (cpu_cur.board->auto_io) {
ret = s3c_cpufreq_auto_io();
if (ret) {
- printk(KERN_ERR "%s: failed to get io timing\n",
+ pr_err("%s: failed to get io timing\n",
__func__);
goto out;
}
}
if (cpu_cur.board->need_io && !cpu_cur.info->set_iotiming) {
- printk(KERN_ERR "%s: no IO support registered\n",
- __func__);
+ pr_err("%s: no IO support registered\n", __func__);
ret = -EINVAL;
goto out;
}
@@ -666,9 +667,9 @@ int s3c_plltab_register(struct cpufreq_frequency_table *plls,
vals += plls_no;
vals->frequency = CPUFREQ_TABLE_END;
- printk(KERN_INFO "cpufreq: %d PLL entries\n", plls_no);
+ pr_info("%d PLL entries\n", plls_no);
} else
- printk(KERN_ERR "cpufreq: no memory for PLL tables\n");
+ pr_err("no memory for PLL tables\n");
return vals ? 0 : -ENOMEM;
}
diff --git a/drivers/cpufreq/s5pv210-cpufreq.c b/drivers/cpufreq/s5pv210-cpufreq.c
index a145b319d171..06d85917b6d5 100644
--- a/drivers/cpufreq/s5pv210-cpufreq.c
+++ b/drivers/cpufreq/s5pv210-cpufreq.c
@@ -9,6 +9,8 @@
* published by the Free Software Foundation.
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/types.h>
#include <linux/kernel.h>
#include <linux/init.h>
@@ -205,7 +207,7 @@ static void s5pv210_set_refresh(enum s5pv210_dmc_port ch, unsigned long freq)
} else if (ch == DMC1) {
reg = (dmc_base[1] + 0x30);
} else {
- printk(KERN_ERR "Cannot find DMC port\n");
+ pr_err("Cannot find DMC port\n");
return;
}
@@ -534,7 +536,7 @@ static int s5pv210_cpu_init(struct cpufreq_policy *policy)
mem_type = check_mem_type(dmc_base[0]);
if ((mem_type != LPDDR) && (mem_type != LPDDR2)) {
- printk(KERN_ERR "CPUFreq doesn't support this memory type\n");
+ pr_err("CPUFreq doesn't support this memory type\n");
ret = -EINVAL;
goto out_dmc1;
}
@@ -635,13 +637,13 @@ static int s5pv210_cpufreq_probe(struct platform_device *pdev)
arm_regulator = regulator_get(NULL, "vddarm");
if (IS_ERR(arm_regulator)) {
- pr_err("failed to get regulator vddarm");
+ pr_err("failed to get regulator vddarm\n");
return PTR_ERR(arm_regulator);
}
int_regulator = regulator_get(NULL, "vddint");
if (IS_ERR(int_regulator)) {
- pr_err("failed to get regulator vddint");
+ pr_err("failed to get regulator vddint\n");
regulator_put(arm_regulator);
return PTR_ERR(int_regulator);
}
diff --git a/drivers/cpufreq/sc520_freq.c b/drivers/cpufreq/sc520_freq.c
index ac84e4818014..4225501a4b78 100644
--- a/drivers/cpufreq/sc520_freq.c
+++ b/drivers/cpufreq/sc520_freq.c
@@ -13,6 +13,8 @@
* 2005-03-30: - initial revision
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
@@ -30,8 +32,6 @@
static __u8 __iomem *cpuctl;
-#define PFX "sc520_freq: "
-
static struct cpufreq_frequency_table sc520_freq_table[] = {
{0, 0x01, 100000},
{0, 0x02, 133000},
@@ -44,8 +44,8 @@ static unsigned int sc520_freq_get_cpu_frequency(unsigned int cpu)
switch (clockspeed_reg & 0x03) {
default:
- printk(KERN_ERR PFX "error: cpuctl register has unexpected "
- "value %02x\n", clockspeed_reg);
+ pr_err("error: cpuctl register has unexpected value %02x\n",
+ clockspeed_reg);
case 0x01:
return 100000;
case 0x02:
@@ -112,7 +112,7 @@ static int __init sc520_freq_init(void)
cpuctl = ioremap((unsigned long)(MMCR_BASE + OFFS_CPUCTL), 1);
if (!cpuctl) {
- printk(KERN_ERR "sc520_freq: error: failed to remap memory\n");
+ pr_err("sc520_freq: error: failed to remap memory\n");
return -ENOMEM;
}
diff --git a/drivers/cpufreq/scpi-cpufreq.c b/drivers/cpufreq/scpi-cpufreq.c
index de5e89b2eaaa..e8a7bf57b31b 100644
--- a/drivers/cpufreq/scpi-cpufreq.c
+++ b/drivers/cpufreq/scpi-cpufreq.c
@@ -18,6 +18,7 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/module.h>
#include <linux/platform_device.h>
@@ -38,10 +39,20 @@ static struct scpi_dvfs_info *scpi_get_dvfs_info(struct device *cpu_dev)
return scpi_ops->dvfs_get_info(domain);
}
-static int scpi_opp_table_ops(struct device *cpu_dev, bool remove)
+static int scpi_get_transition_latency(struct device *cpu_dev)
{
- int idx, ret = 0;
+ struct scpi_dvfs_info *info = scpi_get_dvfs_info(cpu_dev);
+
+ if (IS_ERR(info))
+ return PTR_ERR(info);
+ return info->latency;
+}
+
+static int scpi_init_opp_table(const struct cpumask *cpumask)
+{
+ int idx, ret;
struct scpi_opp *opp;
+ struct device *cpu_dev = get_cpu_device(cpumask_first(cpumask));
struct scpi_dvfs_info *info = scpi_get_dvfs_info(cpu_dev);
if (IS_ERR(info))
@@ -51,11 +62,7 @@ static int scpi_opp_table_ops(struct device *cpu_dev, bool remove)
return -EIO;
for (opp = info->opps, idx = 0; idx < info->count; idx++, opp++) {
- if (remove)
- dev_pm_opp_remove(cpu_dev, opp->freq);
- else
- ret = dev_pm_opp_add(cpu_dev, opp->freq,
- opp->m_volt * 1000);
+ ret = dev_pm_opp_add(cpu_dev, opp->freq, opp->m_volt * 1000);
if (ret) {
dev_warn(cpu_dev, "failed to add opp %uHz %umV\n",
opp->freq, opp->m_volt);
@@ -64,33 +71,19 @@ static int scpi_opp_table_ops(struct device *cpu_dev, bool remove)
return ret;
}
}
- return ret;
-}
-static int scpi_get_transition_latency(struct device *cpu_dev)
-{
- struct scpi_dvfs_info *info = scpi_get_dvfs_info(cpu_dev);
-
- if (IS_ERR(info))
- return PTR_ERR(info);
- return info->latency;
-}
-
-static int scpi_init_opp_table(struct device *cpu_dev)
-{
- return scpi_opp_table_ops(cpu_dev, false);
-}
-
-static void scpi_free_opp_table(struct device *cpu_dev)
-{
- scpi_opp_table_ops(cpu_dev, true);
+ ret = dev_pm_opp_set_sharing_cpus(cpu_dev, cpumask);
+ if (ret)
+ dev_err(cpu_dev, "%s: failed to mark OPPs as shared: %d\n",
+ __func__, ret);
+ return ret;
}
static struct cpufreq_arm_bL_ops scpi_cpufreq_ops = {
.name = "scpi",
.get_transition_latency = scpi_get_transition_latency,
.init_opp_table = scpi_init_opp_table,
- .free_opp_table = scpi_free_opp_table,
+ .free_opp_table = dev_pm_opp_cpumask_remove_table,
};
static int scpi_cpufreq_probe(struct platform_device *pdev)
diff --git a/drivers/cpufreq/speedstep-centrino.c b/drivers/cpufreq/speedstep-centrino.c
index 7d4a31571608..41bc5397f4bb 100644
--- a/drivers/cpufreq/speedstep-centrino.c
+++ b/drivers/cpufreq/speedstep-centrino.c
@@ -13,6 +13,8 @@
* Copyright (C) 2003 Jeremy Fitzhardinge <jeremy@goop.org>
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
@@ -27,7 +29,6 @@
#include <asm/cpufeature.h>
#include <asm/cpu_device_id.h>
-#define PFX "speedstep-centrino: "
#define MAINTAINER "linux-pm@vger.kernel.org"
#define INTEL_MSR_RANGE (0xffff)
@@ -386,8 +387,7 @@ static int centrino_cpu_init(struct cpufreq_policy *policy)
/* check to see if it stuck */
rdmsr(MSR_IA32_MISC_ENABLE, l, h);
if (!(l & MSR_IA32_MISC_ENABLE_ENHANCED_SPEEDSTEP)) {
- printk(KERN_INFO PFX
- "couldn't enable Enhanced SpeedStep\n");
+ pr_info("couldn't enable Enhanced SpeedStep\n");
return -ENODEV;
}
}
diff --git a/drivers/cpufreq/speedstep-ich.c b/drivers/cpufreq/speedstep-ich.c
index 37555c6b86a7..b86953a3ddc4 100644
--- a/drivers/cpufreq/speedstep-ich.c
+++ b/drivers/cpufreq/speedstep-ich.c
@@ -18,6 +18,8 @@
* SPEEDSTEP - DEFINITIONS *
*********************************************************************/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
@@ -68,13 +70,13 @@ static int speedstep_find_register(void)
/* get PMBASE */
pci_read_config_dword(speedstep_chipset_dev, 0x40, &pmbase);
if (!(pmbase & 0x01)) {
- printk(KERN_ERR "speedstep-ich: could not find speedstep register\n");
+ pr_err("could not find speedstep register\n");
return -ENODEV;
}
pmbase &= 0xFFFFFFFE;
if (!pmbase) {
- printk(KERN_ERR "speedstep-ich: could not find speedstep register\n");
+ pr_err("could not find speedstep register\n");
return -ENODEV;
}
@@ -136,7 +138,7 @@ static void speedstep_set_state(unsigned int state)
pr_debug("change to %u MHz succeeded\n",
speedstep_get_frequency(speedstep_processor) / 1000);
else
- printk(KERN_ERR "cpufreq: change failed - I/O error\n");
+ pr_err("change failed - I/O error\n");
return;
}
diff --git a/drivers/cpufreq/speedstep-lib.c b/drivers/cpufreq/speedstep-lib.c
index 15d3214aaa00..1b8062182c81 100644
--- a/drivers/cpufreq/speedstep-lib.c
+++ b/drivers/cpufreq/speedstep-lib.c
@@ -8,6 +8,8 @@
* BIG FAT DISCLAIMER: Work in progress code. Possibly *dangerous*
*/
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
@@ -153,7 +155,7 @@ static unsigned int pentium_core_get_frequency(void)
fsb = 333333;
break;
default:
- printk(KERN_ERR "PCORE - MSR_FSB_FREQ undefined value");
+ pr_err("PCORE - MSR_FSB_FREQ undefined value\n");
}
rdmsr(MSR_IA32_EBL_CR_POWERON, msr_lo, msr_tmp);
@@ -453,11 +455,8 @@ unsigned int speedstep_get_freqs(enum speedstep_processor processor,
*/
if (*transition_latency > 10000000 ||
*transition_latency < 50000) {
- printk(KERN_WARNING PFX "frequency transition "
- "measured seems out of range (%u "
- "nSec), falling back to a safe one of"
- "%u nSec.\n",
- *transition_latency, 500000);
+ pr_warn("frequency transition measured seems out of range (%u nSec), falling back to a safe one of %u nSec\n",
+ *transition_latency, 500000);
*transition_latency = 500000;
}
}
diff --git a/drivers/cpufreq/speedstep-smi.c b/drivers/cpufreq/speedstep-smi.c
index 819229e824fb..770a9ae1999a 100644
--- a/drivers/cpufreq/speedstep-smi.c
+++ b/drivers/cpufreq/speedstep-smi.c
@@ -12,6 +12,8 @@
* SPEEDSTEP - DEFINITIONS *
*********************************************************************/
+#define pr_fmt(fmt) "cpufreq: " fmt
+
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
@@ -204,9 +206,8 @@ static void speedstep_set_state(unsigned int state)
(speedstep_freqs[new_state].frequency / 1000),
retry, result);
else
- printk(KERN_ERR "cpufreq: change to state %u "
- "failed with new_state %u and result %u\n",
- state, new_state, result);
+ pr_err("change to state %u failed with new_state %u and result %u\n",
+ state, new_state, result);
return;
}
diff --git a/drivers/cpufreq/tegra124-cpufreq.c b/drivers/cpufreq/tegra124-cpufreq.c
index 20bcceb58ccc..43530254201a 100644
--- a/drivers/cpufreq/tegra124-cpufreq.c
+++ b/drivers/cpufreq/tegra124-cpufreq.c
@@ -14,7 +14,6 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
#include <linux/clk.h>
-#include <linux/cpufreq-dt.h>
#include <linux/err.h>
#include <linux/init.h>
#include <linux/kernel.h>
@@ -69,10 +68,6 @@ static void tegra124_cpu_switch_to_pllx(struct tegra124_cpufreq_priv *priv)
clk_set_parent(priv->cpu_clk, priv->pllx_clk);
}
-static struct cpufreq_dt_platform_data cpufreq_dt_pd = {
- .independent_clocks = false,
-};
-
static int tegra124_cpufreq_probe(struct platform_device *pdev)
{
struct tegra124_cpufreq_priv *priv;
@@ -129,8 +124,6 @@ static int tegra124_cpufreq_probe(struct platform_device *pdev)
cpufreq_dt_devinfo.name = "cpufreq-dt";
cpufreq_dt_devinfo.parent = &pdev->dev;
- cpufreq_dt_devinfo.data = &cpufreq_dt_pd;
- cpufreq_dt_devinfo.size_data = sizeof(cpufreq_dt_pd);
priv->cpufreq_dt_pdev =
platform_device_register_full(&cpufreq_dt_devinfo);
diff --git a/drivers/cpufreq/vexpress-spc-cpufreq.c b/drivers/cpufreq/vexpress-spc-cpufreq.c
index 433e93fd4900..87e5bdc5ec74 100644
--- a/drivers/cpufreq/vexpress-spc-cpufreq.c
+++ b/drivers/cpufreq/vexpress-spc-cpufreq.c
@@ -18,6 +18,7 @@
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/module.h>
#include <linux/platform_device.h>
@@ -26,8 +27,9 @@
#include "arm_big_little.h"
-static int ve_spc_init_opp_table(struct device *cpu_dev)
+static int ve_spc_init_opp_table(const struct cpumask *cpumask)
{
+ struct device *cpu_dev = get_cpu_device(cpumask_first(cpumask));
/*
* platform specific SPC code must initialise the opp table
* so just check if the OPP count is non-zero
diff --git a/drivers/cpuidle/cpuidle.c b/drivers/cpuidle/cpuidle.c
index f996efc56605..a4d0059e232c 100644
--- a/drivers/cpuidle/cpuidle.c
+++ b/drivers/cpuidle/cpuidle.c
@@ -173,7 +173,7 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
struct cpuidle_state *target_state = &drv->states[index];
bool broadcast = !!(target_state->flags & CPUIDLE_FLAG_TIMER_STOP);
- ktime_t time_start, time_end;
+ u64 time_start, time_end;
s64 diff;
/*
@@ -195,13 +195,13 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
sched_idle_set_state(target_state);
trace_cpu_idle_rcuidle(index, dev->cpu);
- time_start = ktime_get();
+ time_start = local_clock();
stop_critical_timings();
entered_state = target_state->enter(dev, drv, index);
start_critical_timings();
- time_end = ktime_get();
+ time_end = local_clock();
trace_cpu_idle_rcuidle(PWR_EVENT_EXIT, dev->cpu);
/* The cpu is no longer idle or about to enter idle. */
@@ -214,10 +214,14 @@ int cpuidle_enter_state(struct cpuidle_device *dev, struct cpuidle_driver *drv,
tick_broadcast_exit();
}
- if (!cpuidle_state_is_coupled(drv, entered_state))
+ if (!cpuidle_state_is_coupled(drv, index))
local_irq_enable();
- diff = ktime_to_us(ktime_sub(time_end, time_start));
+ /*
+ * local_clock() returns the time in nanoseconds; shift right by 10
+ * (divide by 1024) to get an approximately microsecond-based time.
+ */
+ diff = (time_end - time_start) >> 10;
if (diff > INT_MAX)
diff = INT_MAX;
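To make the approximation concrete (illustrative arithmetic, not from the patch): an idle residency of 1,000,000 ns (1 ms) becomes 1,000,000 >> 10 = 976 "microseconds", about 2.4% below the true 1000 us; that small systematic error is the price of replacing a 64-bit division with a shift on the idle hot path.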
@@ -433,6 +437,8 @@ static void __cpuidle_unregister_device(struct cpuidle_device *dev)
list_del(&dev->device_list);
per_cpu(cpuidle_devices, dev->cpu) = NULL;
module_put(drv->owner);
+
+ dev->registered = 0;
}
static void __cpuidle_device_init(struct cpuidle_device *dev)
diff --git a/drivers/crypto/Kconfig b/drivers/crypto/Kconfig
index 477fffdb4f49..d77ba2f12242 100644
--- a/drivers/crypto/Kconfig
+++ b/drivers/crypto/Kconfig
@@ -279,6 +279,14 @@ config CRYPTO_DEV_PPC4XX
help
This option allows you to have support for AMCC crypto acceleration.
+config HW_RANDOM_PPC4XX
+ bool "PowerPC 4xx generic true random number generator support"
+ depends on CRYPTO_DEV_PPC4XX && HW_RANDOM
+ default y
+ ---help---
+ This option provides the kernel-side support for the TRNG hardware
+ found in the security function of some PowerPC 4xx SoCs.
+
config CRYPTO_DEV_OMAP_SHAM
tristate "Support for OMAP MD5/SHA1/SHA2 hw accelerator"
depends on ARCH_OMAP2PLUS
@@ -302,15 +310,16 @@ config CRYPTO_DEV_OMAP_AES
want to use the OMAP module for AES algorithms.
config CRYPTO_DEV_OMAP_DES
- tristate "Support for OMAP DES3DES hw engine"
+ tristate "Support for OMAP DES/3DES hw engine"
depends on ARCH_OMAP2PLUS
select CRYPTO_DES
select CRYPTO_BLKCIPHER
+ select CRYPTO_ENGINE
help
OMAP processors have DES/3DES module accelerator. Select this if you
want to use the OMAP module for DES and 3DES algorithms. Currently
- the ECB and CBC modes of operation supported by the driver. Also
- accesses made on unaligned boundaries are also supported.
+ the ECB and CBC modes of operation are supported by the driver. Also
+ accesses made on unaligned boundaries are supported.
config CRYPTO_DEV_PICOXCELL
tristate "Support for picoXcell IPSEC and Layer2 crypto engines"
@@ -340,9 +349,19 @@ config CRYPTO_DEV_SAHARA
This option enables support for the SAHARA HW crypto accelerator
found in some Freescale i.MX chips.
+config CRYPTO_DEV_MXC_SCC
+ tristate "Support for Freescale Security Controller (SCC)"
+ depends on ARCH_MXC && OF
+ select CRYPTO_BLKCIPHER
+ select CRYPTO_DES
+ help
+ This option enables support for the Security Controller (SCC)
+ found in Freescale i.MX25 chips.
+
config CRYPTO_DEV_S5P
tristate "Support for Samsung S5PV210/Exynos crypto accelerator"
- depends on ARCH_S5PV210 || ARCH_EXYNOS
+ depends on ARCH_S5PV210 || ARCH_EXYNOS || COMPILE_TEST
+ depends on HAS_IOMEM && HAS_DMA
select CRYPTO_AES
select CRYPTO_BLKCIPHER
help
diff --git a/drivers/crypto/Makefile b/drivers/crypto/Makefile
index 713de9d11148..3c6432dd09d9 100644
--- a/drivers/crypto/Makefile
+++ b/drivers/crypto/Makefile
@@ -23,6 +23,7 @@ obj-$(CONFIG_CRYPTO_DEV_PICOXCELL) += picoxcell_crypto.o
obj-$(CONFIG_CRYPTO_DEV_PPC4XX) += amcc/
obj-$(CONFIG_CRYPTO_DEV_S5P) += s5p-sss.o
obj-$(CONFIG_CRYPTO_DEV_SAHARA) += sahara.o
+obj-$(CONFIG_CRYPTO_DEV_MXC_SCC) += mxc-scc.o
obj-$(CONFIG_CRYPTO_DEV_TALITOS) += talitos.o
obj-$(CONFIG_CRYPTO_DEV_UX500) += ux500/
obj-$(CONFIG_CRYPTO_DEV_QAT) += qat/
diff --git a/drivers/crypto/amcc/Makefile b/drivers/crypto/amcc/Makefile
index 5c0c62b65d69..b95539928fdf 100644
--- a/drivers/crypto/amcc/Makefile
+++ b/drivers/crypto/amcc/Makefile
@@ -1,2 +1,3 @@
obj-$(CONFIG_CRYPTO_DEV_PPC4XX) += crypto4xx.o
crypto4xx-y := crypto4xx_core.o crypto4xx_alg.o crypto4xx_sa.o
+crypto4xx-$(CONFIG_HW_RANDOM_PPC4XX) += crypto4xx_trng.o
diff --git a/drivers/crypto/amcc/crypto4xx_core.c b/drivers/crypto/amcc/crypto4xx_core.c
index 62134c8a2260..dae1e39139e9 100644
--- a/drivers/crypto/amcc/crypto4xx_core.c
+++ b/drivers/crypto/amcc/crypto4xx_core.c
@@ -40,6 +40,7 @@
#include "crypto4xx_reg_def.h"
#include "crypto4xx_core.h"
#include "crypto4xx_sa.h"
+#include "crypto4xx_trng.h"
#define PPC4XX_SEC_VERSION_STR "0.5"
@@ -1225,6 +1226,7 @@ static int crypto4xx_probe(struct platform_device *ofdev)
if (rc)
goto err_start_dev;
+ ppc4xx_trng_probe(core_dev);
return 0;
err_start_dev:
@@ -1252,6 +1254,8 @@ static int crypto4xx_remove(struct platform_device *ofdev)
struct device *dev = &ofdev->dev;
struct crypto4xx_core_device *core_dev = dev_get_drvdata(dev);
+ ppc4xx_trng_remove(core_dev);
+
free_irq(core_dev->irq, dev);
irq_dispose_mapping(core_dev->irq);
@@ -1272,7 +1276,7 @@ MODULE_DEVICE_TABLE(of, crypto4xx_match);
static struct platform_driver crypto4xx_driver = {
.driver = {
- .name = "crypto4xx",
+ .name = MODULE_NAME,
.of_match_table = crypto4xx_match,
},
.probe = crypto4xx_probe,
@@ -1284,4 +1288,3 @@ module_platform_driver(crypto4xx_driver);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("James Hsiao <jhsiao@amcc.com>");
MODULE_DESCRIPTION("Driver for AMCC PPC4xx crypto accelerator");
-
diff --git a/drivers/crypto/amcc/crypto4xx_core.h b/drivers/crypto/amcc/crypto4xx_core.h
index bac0bdeb4b5f..ecfdcfe3698d 100644
--- a/drivers/crypto/amcc/crypto4xx_core.h
+++ b/drivers/crypto/amcc/crypto4xx_core.h
@@ -24,6 +24,8 @@
#include <crypto/internal/hash.h>
+#define MODULE_NAME "crypto4xx"
+
#define PPC460SX_SDR0_SRST 0x201
#define PPC405EX_SDR0_SRST 0x200
#define PPC460EX_SDR0_SRST 0x201
@@ -72,6 +74,7 @@ struct crypto4xx_device {
char *name;
u64 ce_phy_address;
void __iomem *ce_base;
+ void __iomem *trng_base;
void *pdr; /* base address of packet
descriptor ring */
@@ -106,6 +109,7 @@ struct crypto4xx_core_device {
struct device *device;
struct platform_device *ofdev;
struct crypto4xx_device *dev;
+ struct hwrng *trng;
u32 int_status;
u32 irq;
struct tasklet_struct tasklet;
diff --git a/drivers/crypto/amcc/crypto4xx_reg_def.h b/drivers/crypto/amcc/crypto4xx_reg_def.h
index 5f5fbc0716ff..46fe57c8f6eb 100644
--- a/drivers/crypto/amcc/crypto4xx_reg_def.h
+++ b/drivers/crypto/amcc/crypto4xx_reg_def.h
@@ -125,6 +125,7 @@
#define PPC4XX_INTERRUPT_CLR 0x3ffff
#define PPC4XX_PRNG_CTRL_AUTO_EN 0x3
#define PPC4XX_DC_3DES_EN 1
+#define PPC4XX_TRNG_EN 0x00020000
#define PPC4XX_INT_DESCR_CNT 4
#define PPC4XX_INT_TIMEOUT_CNT 0
#define PPC4XX_INT_CFG 1
diff --git a/drivers/crypto/amcc/crypto4xx_trng.c b/drivers/crypto/amcc/crypto4xx_trng.c
new file mode 100644
index 000000000000..677ca17fd223
--- /dev/null
+++ b/drivers/crypto/amcc/crypto4xx_trng.c
@@ -0,0 +1,131 @@
+/*
+ * Generic PowerPC 44x RNG driver
+ *
+ * Copyright 2011 IBM Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the
+ * Free Software Foundation; version 2 of the License.
+ */
+
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/interrupt.h>
+#include <linux/platform_device.h>
+#include <linux/hw_random.h>
+#include <linux/delay.h>
+#include <linux/of_address.h>
+#include <linux/of_platform.h>
+#include <linux/io.h>
+
+#include "crypto4xx_core.h"
+#include "crypto4xx_trng.h"
+#include "crypto4xx_reg_def.h"
+
+#define PPC4XX_TRNG_CTRL 0x0008
+#define PPC4XX_TRNG_CTRL_DALM 0x20
+#define PPC4XX_TRNG_STAT 0x0004
+#define PPC4XX_TRNG_STAT_B 0x1
+#define PPC4XX_TRNG_DATA 0x0000
+
+static int ppc4xx_trng_data_present(struct hwrng *rng, int wait)
+{
+ struct crypto4xx_device *dev = (void *)rng->priv;
+ int busy, i, present = 0;
+
+ for (i = 0; i < 20; i++) {
+ busy = (in_le32(dev->trng_base + PPC4XX_TRNG_STAT) &
+ PPC4XX_TRNG_STAT_B);
+ if (!busy || !wait) {
+ present = 1;
+ break;
+ }
+ udelay(10);
+ }
+ return present;
+}
+
+static int ppc4xx_trng_data_read(struct hwrng *rng, u32 *data)
+{
+ struct crypto4xx_device *dev = (void *)rng->priv;
+ *data = in_le32(dev->trng_base + PPC4XX_TRNG_DATA);
+ return 4;
+}
+
+static void ppc4xx_trng_enable(struct crypto4xx_device *dev, bool enable)
+{
+ u32 device_ctrl;
+
+ device_ctrl = readl(dev->ce_base + CRYPTO4XX_DEVICE_CTRL);
+ if (enable)
+ device_ctrl |= PPC4XX_TRNG_EN;
+ else
+ device_ctrl &= ~PPC4XX_TRNG_EN;
+ writel(device_ctrl, dev->ce_base + CRYPTO4XX_DEVICE_CTRL);
+}
+
+static const struct of_device_id ppc4xx_trng_match[] = {
+ { .compatible = "ppc4xx-rng", },
+ { .compatible = "amcc,ppc460ex-rng", },
+ { .compatible = "amcc,ppc440epx-rng", },
+ {},
+};
+
+void ppc4xx_trng_probe(struct crypto4xx_core_device *core_dev)
+{
+ struct crypto4xx_device *dev = core_dev->dev;
+ struct device_node *trng = NULL;
+ struct hwrng *rng = NULL;
+ int err;
+
+ /* Find the TRNG device node and map it */
+ trng = of_find_matching_node(NULL, ppc4xx_trng_match);
+ if (!trng || !of_device_is_available(trng))
+ return;
+
+ dev->trng_base = of_iomap(trng, 0);
+ of_node_put(trng);
+ if (!dev->trng_base)
+ goto err_out;
+
+ rng = kzalloc(sizeof(*rng), GFP_KERNEL);
+ if (!rng)
+ goto err_out;
+
+ rng->name = MODULE_NAME;
+ rng->data_present = ppc4xx_trng_data_present;
+ rng->data_read = ppc4xx_trng_data_read;
+ rng->priv = (unsigned long) dev;
+ core_dev->trng = rng;
+ ppc4xx_trng_enable(dev, true);
+ out_le32(dev->trng_base + PPC4XX_TRNG_CTRL, PPC4XX_TRNG_CTRL_DALM);
+ err = devm_hwrng_register(core_dev->device, core_dev->trng);
+ if (err) {
+ ppc4xx_trng_enable(dev, false);
+ dev_err(core_dev->device, "failed to register hwrng (%d).\n",
+ err);
+ goto err_out;
+ }
+ return;
+
+err_out:
+ of_node_put(trng);
+ iounmap(dev->trng_base);
+ kfree(rng);
+ dev->trng_base = NULL;
+ core_dev->trng = NULL;
+}
+
+void ppc4xx_trng_remove(struct crypto4xx_core_device *core_dev)
+{
+ if (core_dev && core_dev->trng) {
+ struct crypto4xx_device *dev = core_dev->dev;
+
+ devm_hwrng_unregister(core_dev->device, core_dev->trng);
+ ppc4xx_trng_enable(dev, false);
+ iounmap(dev->trng_base);
+ kfree(core_dev->trng);
+ }
+}
+
+MODULE_ALIAS("ppc4xx_rng");
diff --git a/drivers/crypto/amcc/crypto4xx_trng.h b/drivers/crypto/amcc/crypto4xx_trng.h
new file mode 100644
index 000000000000..931d22531f51
--- /dev/null
+++ b/drivers/crypto/amcc/crypto4xx_trng.h
@@ -0,0 +1,34 @@
+/**
+ * AMCC SoC PPC4xx Crypto Driver
+ *
+ * Copyright (c) 2008 Applied Micro Circuits Corporation.
+ * All rights reserved. James Hsiao <jhsiao@amcc.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * This file declares the TRNG support routines
+ * for the crypto4xx driver.
+ */
+
+#ifndef __CRYPTO4XX_TRNG_H__
+#define __CRYPTO4XX_TRNG_H__
+
+#ifdef CONFIG_HW_RANDOM_PPC4XX
+void ppc4xx_trng_probe(struct crypto4xx_core_device *core_dev);
+void ppc4xx_trng_remove(struct crypto4xx_core_device *core_dev);
+#else
+static inline void ppc4xx_trng_probe(
+	struct crypto4xx_core_device *core_dev __maybe_unused) { }
+static inline void ppc4xx_trng_remove(
+	struct crypto4xx_core_device *core_dev __maybe_unused) { }
+#endif
+
+#endif
diff --git a/drivers/crypto/caam/ctrl.c b/drivers/crypto/caam/ctrl.c
index 44d30b45f3cc..5ad5f3009ae0 100644
--- a/drivers/crypto/caam/ctrl.c
+++ b/drivers/crypto/caam/ctrl.c
@@ -402,7 +402,7 @@ int caam_get_era(void)
ret = of_property_read_u32(caam_node, "fsl,sec-era", &prop);
of_node_put(caam_node);
- return IS_ERR_VALUE(ret) ? -ENOTSUPP : prop;
+ return ret ? -ENOTSUPP : prop;
}
EXPORT_SYMBOL(caam_get_era);
diff --git a/drivers/crypto/caam/jr.c b/drivers/crypto/caam/jr.c
index 6fd63a600614..5ef4be22eb80 100644
--- a/drivers/crypto/caam/jr.c
+++ b/drivers/crypto/caam/jr.c
@@ -248,7 +248,7 @@ static void caam_jr_dequeue(unsigned long devarg)
struct device *caam_jr_alloc(void)
{
struct caam_drv_private_jr *jrpriv, *min_jrpriv = NULL;
- struct device *dev = NULL;
+ struct device *dev = ERR_PTR(-ENODEV);
int min_tfm_cnt = INT_MAX;
int tfm_cnt;
diff --git a/drivers/crypto/ccp/Kconfig b/drivers/crypto/ccp/Kconfig
index 6e37845abf8f..2238f77aa248 100644
--- a/drivers/crypto/ccp/Kconfig
+++ b/drivers/crypto/ccp/Kconfig
@@ -3,6 +3,8 @@ config CRYPTO_DEV_CCP_DD
depends on CRYPTO_DEV_CCP
default m
select HW_RANDOM
+ select DMA_ENGINE
+ select DMADEVICES
select CRYPTO_SHA1
select CRYPTO_SHA256
help
diff --git a/drivers/crypto/ccp/Makefile b/drivers/crypto/ccp/Makefile
index b750592cc936..ee4d2741b3ab 100644
--- a/drivers/crypto/ccp/Makefile
+++ b/drivers/crypto/ccp/Makefile
@@ -1,5 +1,9 @@
obj-$(CONFIG_CRYPTO_DEV_CCP_DD) += ccp.o
-ccp-objs := ccp-dev.o ccp-ops.o ccp-dev-v3.o ccp-platform.o
+ccp-objs := ccp-dev.o \
+ ccp-ops.o \
+ ccp-dev-v3.o \
+ ccp-platform.o \
+ ccp-dmaengine.o
ccp-$(CONFIG_PCI) += ccp-pci.o
obj-$(CONFIG_CRYPTO_DEV_CCP_CRYPTO) += ccp-crypto.o
diff --git a/drivers/crypto/ccp/ccp-dev-v3.c b/drivers/crypto/ccp/ccp-dev-v3.c
index 7d5eab49179e..d7a710347967 100644
--- a/drivers/crypto/ccp/ccp-dev-v3.c
+++ b/drivers/crypto/ccp/ccp-dev-v3.c
@@ -406,6 +406,11 @@ static int ccp_init(struct ccp_device *ccp)
goto e_kthread;
}
+ /* Register the DMA engine support */
+ ret = ccp_dmaengine_register(ccp);
+ if (ret)
+ goto e_hwrng;
+
ccp_add_device(ccp);
/* Enable interrupts */
@@ -413,6 +418,9 @@ static int ccp_init(struct ccp_device *ccp)
return 0;
+e_hwrng:
+ hwrng_unregister(&ccp->hwrng);
+
e_kthread:
for (i = 0; i < ccp->cmd_q_count; i++)
if (ccp->cmd_q[i].kthread)
@@ -436,6 +444,9 @@ static void ccp_destroy(struct ccp_device *ccp)
/* Remove this device from the list of available units first */
ccp_del_device(ccp);
+ /* Unregister the DMA engine */
+ ccp_dmaengine_unregister(ccp);
+
/* Unregister the RNG */
hwrng_unregister(&ccp->hwrng);
@@ -515,7 +526,7 @@ static irqreturn_t ccp_irq_handler(int irq, void *data)
return IRQ_HANDLED;
}
-static struct ccp_actions ccp3_actions = {
+static const struct ccp_actions ccp3_actions = {
.perform_aes = ccp_perform_aes,
.perform_xts_aes = ccp_perform_xts_aes,
.perform_sha = ccp_perform_sha,
diff --git a/drivers/crypto/ccp/ccp-dev.c b/drivers/crypto/ccp/ccp-dev.c
index 4dbc18727235..87b9f2bfa623 100644
--- a/drivers/crypto/ccp/ccp-dev.c
+++ b/drivers/crypto/ccp/ccp-dev.c
@@ -16,7 +16,7 @@
#include <linux/sched.h>
#include <linux/interrupt.h>
#include <linux/spinlock.h>
-#include <linux/rwlock_types.h>
+#include <linux/spinlock_types.h>
#include <linux/types.h>
#include <linux/mutex.h>
#include <linux/delay.h>
diff --git a/drivers/crypto/ccp/ccp-dev.h b/drivers/crypto/ccp/ccp-dev.h
index 7745d0be491d..bd41ffceff82 100644
--- a/drivers/crypto/ccp/ccp-dev.h
+++ b/drivers/crypto/ccp/ccp-dev.h
@@ -22,6 +22,9 @@
#include <linux/dmapool.h>
#include <linux/hw_random.h>
#include <linux/bitops.h>
+#include <linux/interrupt.h>
+#include <linux/irqreturn.h>
+#include <linux/dmaengine.h>
#define MAX_CCP_NAME_LEN 16
#define MAX_DMAPOOL_NAME_LEN 32
@@ -159,7 +162,7 @@ struct ccp_actions {
/* Structure to hold CCP version-specific values */
struct ccp_vdata {
unsigned int version;
- struct ccp_actions *perform;
+ const struct ccp_actions *perform;
};
extern struct ccp_vdata ccpv3;
@@ -167,6 +170,39 @@ extern struct ccp_vdata ccpv3;
struct ccp_device;
struct ccp_cmd;
+struct ccp_dma_cmd {
+ struct list_head entry;
+
+ struct ccp_cmd ccp_cmd;
+};
+
+struct ccp_dma_desc {
+ struct list_head entry;
+
+ struct ccp_device *ccp;
+
+ struct list_head pending;
+ struct list_head active;
+
+ enum dma_status status;
+ struct dma_async_tx_descriptor tx_desc;
+ size_t len;
+};
+
+struct ccp_dma_chan {
+ struct ccp_device *ccp;
+
+ spinlock_t lock;
+ struct list_head pending;
+ struct list_head active;
+ struct list_head complete;
+
+ struct tasklet_struct cleanup_tasklet;
+
+ enum dma_status status;
+ struct dma_chan dma_chan;
+};
+
struct ccp_cmd_queue {
struct ccp_device *ccp;
@@ -261,6 +297,14 @@ struct ccp_device {
unsigned int hwrng_retries;
/*
+ * Support for the CCP DMA capabilities
+ */
+ struct dma_device dma_dev;
+ struct ccp_dma_chan *ccp_dma_chan;
+ struct kmem_cache *dma_cmd_cache;
+ struct kmem_cache *dma_desc_cache;
+
+ /*
* A counter used to generate job-ids for cmds submitted to the CCP
*/
atomic_t current_id ____cacheline_aligned;
@@ -418,4 +462,7 @@ int ccp_cmd_queue_thread(void *data);
int ccp_run_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd);
+int ccp_dmaengine_register(struct ccp_device *ccp);
+void ccp_dmaengine_unregister(struct ccp_device *ccp);
+
#endif
diff --git a/drivers/crypto/ccp/ccp-dmaengine.c b/drivers/crypto/ccp/ccp-dmaengine.c
new file mode 100644
index 000000000000..94f77b0f9ae7
--- /dev/null
+++ b/drivers/crypto/ccp/ccp-dmaengine.c
@@ -0,0 +1,727 @@
+/*
+ * AMD Cryptographic Coprocessor (CCP) driver
+ *
+ * Copyright (C) 2016 Advanced Micro Devices, Inc.
+ *
+ * Author: Gary R Hook <gary.hook@amd.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/kernel.h>
+#include <linux/dmaengine.h>
+#include <linux/spinlock.h>
+#include <linux/mutex.h>
+#include <linux/ccp.h>
+
+#include "ccp-dev.h"
+#include "../../dma/dmaengine.h"
+
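+/*
+ * Derive the address-width value reported to the dmaengine core from the
+ * device's DMA mask; a full 64-bit mask wraps to zero when incremented,
+ * hence the explicit 64 fallback.
+ */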
+#define CCP_DMA_WIDTH(_mask) \
+({ \
+ u64 mask = _mask + 1; \
+ (mask == 0) ? 64 : fls64(mask); \
+})
+
+static void ccp_free_cmd_resources(struct ccp_device *ccp,
+ struct list_head *list)
+{
+ struct ccp_dma_cmd *cmd, *ctmp;
+
+ list_for_each_entry_safe(cmd, ctmp, list, entry) {
+ list_del(&cmd->entry);
+ kmem_cache_free(ccp->dma_cmd_cache, cmd);
+ }
+}
+
+static void ccp_free_desc_resources(struct ccp_device *ccp,
+ struct list_head *list)
+{
+ struct ccp_dma_desc *desc, *dtmp;
+
+ list_for_each_entry_safe(desc, dtmp, list, entry) {
+ ccp_free_cmd_resources(ccp, &desc->active);
+ ccp_free_cmd_resources(ccp, &desc->pending);
+
+ list_del(&desc->entry);
+ kmem_cache_free(ccp->dma_desc_cache, desc);
+ }
+}
+
+static void ccp_free_chan_resources(struct dma_chan *dma_chan)
+{
+ struct ccp_dma_chan *chan = container_of(dma_chan, struct ccp_dma_chan,
+ dma_chan);
+ unsigned long flags;
+
+ dev_dbg(chan->ccp->dev, "%s - chan=%p\n", __func__, chan);
+
+ spin_lock_irqsave(&chan->lock, flags);
+
+ ccp_free_desc_resources(chan->ccp, &chan->complete);
+ ccp_free_desc_resources(chan->ccp, &chan->active);
+ ccp_free_desc_resources(chan->ccp, &chan->pending);
+
+ spin_unlock_irqrestore(&chan->lock, flags);
+}
+
+static void ccp_cleanup_desc_resources(struct ccp_device *ccp,
+ struct list_head *list)
+{
+ struct ccp_dma_desc *desc, *dtmp;
+
+ list_for_each_entry_safe_reverse(desc, dtmp, list, entry) {
+ if (!async_tx_test_ack(&desc->tx_desc))
+ continue;
+
+ dev_dbg(ccp->dev, "%s - desc=%p\n", __func__, desc);
+
+ ccp_free_cmd_resources(ccp, &desc->active);
+ ccp_free_cmd_resources(ccp, &desc->pending);
+
+ list_del(&desc->entry);
+ kmem_cache_free(ccp->dma_desc_cache, desc);
+ }
+}
+
+static void ccp_do_cleanup(unsigned long data)
+{
+ struct ccp_dma_chan *chan = (struct ccp_dma_chan *)data;
+ unsigned long flags;
+
+ dev_dbg(chan->ccp->dev, "%s - chan=%s\n", __func__,
+ dma_chan_name(&chan->dma_chan));
+
+ spin_lock_irqsave(&chan->lock, flags);
+
+ ccp_cleanup_desc_resources(chan->ccp, &chan->complete);
+
+ spin_unlock_irqrestore(&chan->lock, flags);
+}
+
+static int ccp_issue_next_cmd(struct ccp_dma_desc *desc)
+{
+ struct ccp_dma_cmd *cmd;
+ int ret;
+
+ cmd = list_first_entry(&desc->pending, struct ccp_dma_cmd, entry);
+ list_move(&cmd->entry, &desc->active);
+
+ dev_dbg(desc->ccp->dev, "%s - tx %d, cmd=%p\n", __func__,
+ desc->tx_desc.cookie, cmd);
+
+ ret = ccp_enqueue_cmd(&cmd->ccp_cmd);
+ if (!ret || (ret == -EINPROGRESS) || (ret == -EBUSY))
+ return 0;
+
+ dev_dbg(desc->ccp->dev, "%s - error: ret=%d, tx %d, cmd=%p\n", __func__,
+ ret, desc->tx_desc.cookie, cmd);
+
+ return ret;
+}
+
+static void ccp_free_active_cmd(struct ccp_dma_desc *desc)
+{
+ struct ccp_dma_cmd *cmd;
+
+ cmd = list_first_entry_or_null(&desc->active, struct ccp_dma_cmd,
+ entry);
+ if (!cmd)
+ return;
+
+ dev_dbg(desc->ccp->dev, "%s - freeing tx %d cmd=%p\n",
+ __func__, desc->tx_desc.cookie, cmd);
+
+ list_del(&cmd->entry);
+ kmem_cache_free(desc->ccp->dma_cmd_cache, cmd);
+}
+
+static struct ccp_dma_desc *__ccp_next_dma_desc(struct ccp_dma_chan *chan,
+ struct ccp_dma_desc *desc)
+{
+ /* Move current DMA descriptor to the complete list */
+ if (desc)
+ list_move(&desc->entry, &chan->complete);
+
+ /* Get the next DMA descriptor on the active list */
+ desc = list_first_entry_or_null(&chan->active, struct ccp_dma_desc,
+ entry);
+
+ return desc;
+}
+
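+/*
+ * Retire the command that just finished on @desc.  A descriptor that still
+ * has pending commands (and no error) is returned so the caller can submit
+ * the next one; otherwise the descriptor is completed (cookie, callback,
+ * dependencies) and the next descriptor on the active list is examined.
+ * Returns NULL once no active descriptors remain.
+ */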
+static struct ccp_dma_desc *ccp_handle_active_desc(struct ccp_dma_chan *chan,
+ struct ccp_dma_desc *desc)
+{
+ struct dma_async_tx_descriptor *tx_desc;
+ unsigned long flags;
+
+ /* Loop over descriptors until one is found with commands */
+ do {
+ if (desc) {
+ /* Remove the DMA command from the list and free it */
+ ccp_free_active_cmd(desc);
+
+ if (!list_empty(&desc->pending)) {
+ /* No errors, keep going */
+ if (desc->status != DMA_ERROR)
+ return desc;
+
+ /* Error, free remaining commands and move on */
+ ccp_free_cmd_resources(desc->ccp,
+ &desc->pending);
+ }
+
+ tx_desc = &desc->tx_desc;
+ } else {
+ tx_desc = NULL;
+ }
+
+ spin_lock_irqsave(&chan->lock, flags);
+
+ if (desc) {
+ if (desc->status != DMA_ERROR)
+ desc->status = DMA_COMPLETE;
+
+ dev_dbg(desc->ccp->dev,
+ "%s - tx %d complete, status=%u\n", __func__,
+ desc->tx_desc.cookie, desc->status);
+
+ dma_cookie_complete(tx_desc);
+ }
+
+ desc = __ccp_next_dma_desc(chan, desc);
+
+ spin_unlock_irqrestore(&chan->lock, flags);
+
+ if (tx_desc) {
+ if (tx_desc->callback &&
+ (tx_desc->flags & DMA_PREP_INTERRUPT))
+ tx_desc->callback(tx_desc->callback_param);
+
+ dma_run_dependencies(tx_desc);
+ }
+ } while (desc);
+
+ return NULL;
+}
+
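+/*
+ * Splice all pending descriptors onto the active list.  The first newly
+ * activated descriptor is returned only when the active list was empty
+ * beforehand, telling the caller it must kick off processing itself.
+ */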
+static struct ccp_dma_desc *__ccp_pending_to_active(struct ccp_dma_chan *chan)
+{
+ struct ccp_dma_desc *desc;
+
+ if (list_empty(&chan->pending))
+ return NULL;
+
+ desc = list_empty(&chan->active)
+ ? list_first_entry(&chan->pending, struct ccp_dma_desc, entry)
+ : NULL;
+
+ list_splice_tail_init(&chan->pending, &chan->active);
+
+ return desc;
+}
+
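+/*
+ * Per-command completion callback, also invoked directly with err == 0 to
+ * (re)start processing.  It retires finished descriptors and keeps issuing
+ * the next command until one is accepted, the channel is paused, or no
+ * work remains, then schedules the cleanup tasklet.
+ */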
+static void ccp_cmd_callback(void *data, int err)
+{
+ struct ccp_dma_desc *desc = data;
+ struct ccp_dma_chan *chan;
+ int ret;
+
+ if (err == -EINPROGRESS)
+ return;
+
+ chan = container_of(desc->tx_desc.chan, struct ccp_dma_chan,
+ dma_chan);
+
+ dev_dbg(chan->ccp->dev, "%s - tx %d callback, err=%d\n",
+ __func__, desc->tx_desc.cookie, err);
+
+ if (err)
+ desc->status = DMA_ERROR;
+
+ while (true) {
+ /* Check for DMA descriptor completion */
+ desc = ccp_handle_active_desc(chan, desc);
+
+ /* Don't submit cmd if no descriptor or DMA is paused */
+ if (!desc || (chan->status == DMA_PAUSED))
+ break;
+
+ ret = ccp_issue_next_cmd(desc);
+ if (!ret)
+ break;
+
+ desc->status = DMA_ERROR;
+ }
+
+ tasklet_schedule(&chan->cleanup_tasklet);
+}
+
+static dma_cookie_t ccp_tx_submit(struct dma_async_tx_descriptor *tx_desc)
+{
+ struct ccp_dma_desc *desc = container_of(tx_desc, struct ccp_dma_desc,
+ tx_desc);
+ struct ccp_dma_chan *chan;
+ dma_cookie_t cookie;
+ unsigned long flags;
+
+ chan = container_of(tx_desc->chan, struct ccp_dma_chan, dma_chan);
+
+ spin_lock_irqsave(&chan->lock, flags);
+
+ cookie = dma_cookie_assign(tx_desc);
+ list_add_tail(&desc->entry, &chan->pending);
+
+ spin_unlock_irqrestore(&chan->lock, flags);
+
+ dev_dbg(chan->ccp->dev, "%s - added tx descriptor %d to pending list\n",
+ __func__, cookie);
+
+ return cookie;
+}
+
+static struct ccp_dma_cmd *ccp_alloc_dma_cmd(struct ccp_dma_chan *chan)
+{
+ struct ccp_dma_cmd *cmd;
+
+ cmd = kmem_cache_alloc(chan->ccp->dma_cmd_cache, GFP_NOWAIT);
+ if (cmd)
+ memset(cmd, 0, sizeof(*cmd));
+
+ return cmd;
+}
+
+static struct ccp_dma_desc *ccp_alloc_dma_desc(struct ccp_dma_chan *chan,
+ unsigned long flags)
+{
+ struct ccp_dma_desc *desc;
+
+ desc = kmem_cache_alloc(chan->ccp->dma_desc_cache, GFP_NOWAIT);
+ if (!desc)
+ return NULL;
+
+ memset(desc, 0, sizeof(*desc));
+
+ dma_async_tx_descriptor_init(&desc->tx_desc, &chan->dma_chan);
+ desc->tx_desc.flags = flags;
+ desc->tx_desc.tx_submit = ccp_tx_submit;
+ desc->ccp = chan->ccp;
+ INIT_LIST_HEAD(&desc->pending);
+ INIT_LIST_HEAD(&desc->active);
+ desc->status = DMA_IN_PROGRESS;
+
+ return desc;
+}
+
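+/*
+ * Build a DMA descriptor by walking the source and destination
+ * scatterlists in lock step, emitting one no-map passthrough (memcpy)
+ * command per overlapping chunk and collecting them on the descriptor's
+ * pending list.
+ */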
+static struct ccp_dma_desc *ccp_create_desc(struct dma_chan *dma_chan,
+ struct scatterlist *dst_sg,
+ unsigned int dst_nents,
+ struct scatterlist *src_sg,
+ unsigned int src_nents,
+ unsigned long flags)
+{
+ struct ccp_dma_chan *chan = container_of(dma_chan, struct ccp_dma_chan,
+ dma_chan);
+ struct ccp_device *ccp = chan->ccp;
+ struct ccp_dma_desc *desc;
+ struct ccp_dma_cmd *cmd;
+ struct ccp_cmd *ccp_cmd;
+ struct ccp_passthru_nomap_engine *ccp_pt;
+ unsigned int src_offset, src_len;
+ unsigned int dst_offset, dst_len;
+ unsigned int len;
+ unsigned long sflags;
+ size_t total_len;
+
+ if (!dst_sg || !src_sg)
+ return NULL;
+
+ if (!dst_nents || !src_nents)
+ return NULL;
+
+ desc = ccp_alloc_dma_desc(chan, flags);
+ if (!desc)
+ return NULL;
+
+ total_len = 0;
+
+ src_len = sg_dma_len(src_sg);
+ src_offset = 0;
+
+ dst_len = sg_dma_len(dst_sg);
+ dst_offset = 0;
+
+ while (true) {
+ if (!src_len) {
+ src_nents--;
+ if (!src_nents)
+ break;
+
+ src_sg = sg_next(src_sg);
+ if (!src_sg)
+ break;
+
+ src_len = sg_dma_len(src_sg);
+ src_offset = 0;
+ continue;
+ }
+
+ if (!dst_len) {
+ dst_nents--;
+ if (!dst_nents)
+ break;
+
+ dst_sg = sg_next(dst_sg);
+ if (!dst_sg)
+ break;
+
+ dst_len = sg_dma_len(dst_sg);
+ dst_offset = 0;
+ continue;
+ }
+
+ len = min(dst_len, src_len);
+
+ cmd = ccp_alloc_dma_cmd(chan);
+ if (!cmd)
+ goto err;
+
+ ccp_cmd = &cmd->ccp_cmd;
+ ccp_pt = &ccp_cmd->u.passthru_nomap;
+ ccp_cmd->flags = CCP_CMD_MAY_BACKLOG;
+ ccp_cmd->flags |= CCP_CMD_PASSTHRU_NO_DMA_MAP;
+ ccp_cmd->engine = CCP_ENGINE_PASSTHRU;
+ ccp_pt->bit_mod = CCP_PASSTHRU_BITWISE_NOOP;
+ ccp_pt->byte_swap = CCP_PASSTHRU_BYTESWAP_NOOP;
+ ccp_pt->src_dma = sg_dma_address(src_sg) + src_offset;
+ ccp_pt->dst_dma = sg_dma_address(dst_sg) + dst_offset;
+ ccp_pt->src_len = len;
+ ccp_pt->final = 1;
+ ccp_cmd->callback = ccp_cmd_callback;
+ ccp_cmd->data = desc;
+
+ list_add_tail(&cmd->entry, &desc->pending);
+
+ dev_dbg(ccp->dev,
+ "%s - cmd=%p, src=%pad, dst=%pad, len=%llu\n", __func__,
+ cmd, &ccp_pt->src_dma,
+ &ccp_pt->dst_dma, ccp_pt->src_len);
+
+ total_len += len;
+
+ src_len -= len;
+ src_offset += len;
+
+ dst_len -= len;
+ dst_offset += len;
+ }
+
+ desc->len = total_len;
+
+ if (list_empty(&desc->pending))
+ goto err;
+
+ dev_dbg(ccp->dev, "%s - desc=%p\n", __func__, desc);
+
+ spin_lock_irqsave(&chan->lock, sflags);
+
+ list_add_tail(&desc->entry, &chan->pending);
+
+ spin_unlock_irqrestore(&chan->lock, sflags);
+
+ return desc;
+
+err:
+ ccp_free_cmd_resources(ccp, &desc->pending);
+ kmem_cache_free(ccp->dma_desc_cache, desc);
+
+ return NULL;
+}
+
+static struct dma_async_tx_descriptor *ccp_prep_dma_memcpy(
+ struct dma_chan *dma_chan, dma_addr_t dst, dma_addr_t src, size_t len,
+ unsigned long flags)
+{
+ struct ccp_dma_chan *chan = container_of(dma_chan, struct ccp_dma_chan,
+ dma_chan);
+ struct ccp_dma_desc *desc;
+ struct scatterlist dst_sg, src_sg;
+
+ dev_dbg(chan->ccp->dev,
+ "%s - src=%pad, dst=%pad, len=%zu, flags=%#lx\n",
+ __func__, &src, &dst, len, flags);
+
+ sg_init_table(&dst_sg, 1);
+ sg_dma_address(&dst_sg) = dst;
+ sg_dma_len(&dst_sg) = len;
+
+ sg_init_table(&src_sg, 1);
+ sg_dma_address(&src_sg) = src;
+ sg_dma_len(&src_sg) = len;
+
+ desc = ccp_create_desc(dma_chan, &dst_sg, 1, &src_sg, 1, flags);
+ if (!desc)
+ return NULL;
+
+ return &desc->tx_desc;
+}
+
+static struct dma_async_tx_descriptor *ccp_prep_dma_sg(
+ struct dma_chan *dma_chan, struct scatterlist *dst_sg,
+ unsigned int dst_nents, struct scatterlist *src_sg,
+ unsigned int src_nents, unsigned long flags)
+{
+ struct ccp_dma_chan *chan = container_of(dma_chan, struct ccp_dma_chan,
+ dma_chan);
+ struct ccp_dma_desc *desc;
+
+ dev_dbg(chan->ccp->dev,
+ "%s - src=%p, src_nents=%u dst=%p, dst_nents=%u, flags=%#lx\n",
+ __func__, src_sg, src_nents, dst_sg, dst_nents, flags);
+
+ desc = ccp_create_desc(dma_chan, dst_sg, dst_nents, src_sg, src_nents,
+ flags);
+ if (!desc)
+ return NULL;
+
+ return &desc->tx_desc;
+}
+
+static struct dma_async_tx_descriptor *ccp_prep_dma_interrupt(
+ struct dma_chan *dma_chan, unsigned long flags)
+{
+ struct ccp_dma_chan *chan = container_of(dma_chan, struct ccp_dma_chan,
+ dma_chan);
+ struct ccp_dma_desc *desc;
+
+ desc = ccp_alloc_dma_desc(chan, flags);
+ if (!desc)
+ return NULL;
+
+ return &desc->tx_desc;
+}
+
+static void ccp_issue_pending(struct dma_chan *dma_chan)
+{
+ struct ccp_dma_chan *chan = container_of(dma_chan, struct ccp_dma_chan,
+ dma_chan);
+ struct ccp_dma_desc *desc;
+ unsigned long flags;
+
+ dev_dbg(chan->ccp->dev, "%s\n", __func__);
+
+ spin_lock_irqsave(&chan->lock, flags);
+
+ desc = __ccp_pending_to_active(chan);
+
+ spin_unlock_irqrestore(&chan->lock, flags);
+
+ /* If there was nothing active, start processing */
+ if (desc)
+ ccp_cmd_callback(desc, 0);
+}
+
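+/*
+ * Report a transaction's state: a paused channel reports DMA_PAUSED,
+ * otherwise the generic cookie status is used, refined by the descriptor's
+ * recorded status if it is still on the complete list so that DMA_ERROR is
+ * not hidden behind a completed cookie.
+ */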
+static enum dma_status ccp_tx_status(struct dma_chan *dma_chan,
+ dma_cookie_t cookie,
+ struct dma_tx_state *state)
+{
+ struct ccp_dma_chan *chan = container_of(dma_chan, struct ccp_dma_chan,
+ dma_chan);
+ struct ccp_dma_desc *desc;
+ enum dma_status ret;
+ unsigned long flags;
+
+ if (chan->status == DMA_PAUSED) {
+ ret = DMA_PAUSED;
+ goto out;
+ }
+
+ ret = dma_cookie_status(dma_chan, cookie, state);
+ if (ret == DMA_COMPLETE) {
+ spin_lock_irqsave(&chan->lock, flags);
+
+ /* Get status from complete chain, if still there */
+ list_for_each_entry(desc, &chan->complete, entry) {
+ if (desc->tx_desc.cookie != cookie)
+ continue;
+
+ ret = desc->status;
+ break;
+ }
+
+ spin_unlock_irqrestore(&chan->lock, flags);
+ }
+
+out:
+ dev_dbg(chan->ccp->dev, "%s - %u\n", __func__, ret);
+
+ return ret;
+}
+
+static int ccp_pause(struct dma_chan *dma_chan)
+{
+ struct ccp_dma_chan *chan = container_of(dma_chan, struct ccp_dma_chan,
+ dma_chan);
+
+ chan->status = DMA_PAUSED;
+
+ /*TODO: Wait for active DMA to complete before returning? */
+
+ return 0;
+}
+
+static int ccp_resume(struct dma_chan *dma_chan)
+{
+ struct ccp_dma_chan *chan = container_of(dma_chan, struct ccp_dma_chan,
+ dma_chan);
+ struct ccp_dma_desc *desc;
+ unsigned long flags;
+
+ spin_lock_irqsave(&chan->lock, flags);
+
+ desc = list_first_entry_or_null(&chan->active, struct ccp_dma_desc,
+ entry);
+
+ spin_unlock_irqrestore(&chan->lock, flags);
+
+ /* Indicate the channel is running again */
+ chan->status = DMA_IN_PROGRESS;
+
+ /* If there was something active, re-start */
+ if (desc)
+ ccp_cmd_callback(desc, 0);
+
+ return 0;
+}
+
+static int ccp_terminate_all(struct dma_chan *dma_chan)
+{
+ struct ccp_dma_chan *chan = container_of(dma_chan, struct ccp_dma_chan,
+ dma_chan);
+ unsigned long flags;
+
+ dev_dbg(chan->ccp->dev, "%s\n", __func__);
+
+ /*TODO: Wait for active DMA to complete before continuing */
+
+ spin_lock_irqsave(&chan->lock, flags);
+
+ /*TODO: Purge the complete list? */
+ ccp_free_desc_resources(chan->ccp, &chan->active);
+ ccp_free_desc_resources(chan->ccp, &chan->pending);
+
+ spin_unlock_irqrestore(&chan->lock, flags);
+
+ return 0;
+}
+
+int ccp_dmaengine_register(struct ccp_device *ccp)
+{
+ struct ccp_dma_chan *chan;
+ struct dma_device *dma_dev = &ccp->dma_dev;
+ struct dma_chan *dma_chan;
+ char *dma_cmd_cache_name;
+ char *dma_desc_cache_name;
+ unsigned int i;
+ int ret;
+
+ ccp->ccp_dma_chan = devm_kcalloc(ccp->dev, ccp->cmd_q_count,
+ sizeof(*(ccp->ccp_dma_chan)),
+ GFP_KERNEL);
+ if (!ccp->ccp_dma_chan)
+ return -ENOMEM;
+
+ dma_cmd_cache_name = devm_kasprintf(ccp->dev, GFP_KERNEL,
+ "%s-dmaengine-cmd-cache",
+ ccp->name);
+ if (!dma_cmd_cache_name)
+ return -ENOMEM;
+
+ ccp->dma_cmd_cache = kmem_cache_create(dma_cmd_cache_name,
+ sizeof(struct ccp_dma_cmd),
+ sizeof(void *),
+ SLAB_HWCACHE_ALIGN, NULL);
+ if (!ccp->dma_cmd_cache)
+ return -ENOMEM;
+
+ dma_desc_cache_name = devm_kasprintf(ccp->dev, GFP_KERNEL,
+ "%s-dmaengine-desc-cache",
+ ccp->name);
+	if (!dma_desc_cache_name)
+		return -ENOMEM;
+
+ ccp->dma_desc_cache = kmem_cache_create(dma_desc_cache_name,
+ sizeof(struct ccp_dma_desc),
+ sizeof(void *),
+ SLAB_HWCACHE_ALIGN, NULL);
+ if (!ccp->dma_desc_cache) {
+ ret = -ENOMEM;
+ goto err_cache;
+ }
+
+ dma_dev->dev = ccp->dev;
+ dma_dev->src_addr_widths = CCP_DMA_WIDTH(dma_get_mask(ccp->dev));
+ dma_dev->dst_addr_widths = CCP_DMA_WIDTH(dma_get_mask(ccp->dev));
+ dma_dev->directions = DMA_MEM_TO_MEM;
+ dma_dev->residue_granularity = DMA_RESIDUE_GRANULARITY_DESCRIPTOR;
+ dma_cap_set(DMA_MEMCPY, dma_dev->cap_mask);
+ dma_cap_set(DMA_SG, dma_dev->cap_mask);
+ dma_cap_set(DMA_INTERRUPT, dma_dev->cap_mask);
+
+ INIT_LIST_HEAD(&dma_dev->channels);
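+	/* Expose one DMA channel per CCP command queue. */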
+ for (i = 0; i < ccp->cmd_q_count; i++) {
+ chan = ccp->ccp_dma_chan + i;
+ dma_chan = &chan->dma_chan;
+
+ chan->ccp = ccp;
+
+ spin_lock_init(&chan->lock);
+ INIT_LIST_HEAD(&chan->pending);
+ INIT_LIST_HEAD(&chan->active);
+ INIT_LIST_HEAD(&chan->complete);
+
+ tasklet_init(&chan->cleanup_tasklet, ccp_do_cleanup,
+ (unsigned long)chan);
+
+ dma_chan->device = dma_dev;
+ dma_cookie_init(dma_chan);
+
+ list_add_tail(&dma_chan->device_node, &dma_dev->channels);
+ }
+
+ dma_dev->device_free_chan_resources = ccp_free_chan_resources;
+ dma_dev->device_prep_dma_memcpy = ccp_prep_dma_memcpy;
+ dma_dev->device_prep_dma_sg = ccp_prep_dma_sg;
+ dma_dev->device_prep_dma_interrupt = ccp_prep_dma_interrupt;
+ dma_dev->device_issue_pending = ccp_issue_pending;
+ dma_dev->device_tx_status = ccp_tx_status;
+ dma_dev->device_pause = ccp_pause;
+ dma_dev->device_resume = ccp_resume;
+ dma_dev->device_terminate_all = ccp_terminate_all;
+
+ ret = dma_async_device_register(dma_dev);
+ if (ret)
+ goto err_reg;
+
+ return 0;
+
+err_reg:
+ kmem_cache_destroy(ccp->dma_desc_cache);
+
+err_cache:
+ kmem_cache_destroy(ccp->dma_cmd_cache);
+
+ return ret;
+}
+
+void ccp_dmaengine_unregister(struct ccp_device *ccp)
+{
+ struct dma_device *dma_dev = &ccp->dma_dev;
+
+ dma_async_device_unregister(dma_dev);
+
+ kmem_cache_destroy(ccp->dma_desc_cache);
+ kmem_cache_destroy(ccp->dma_cmd_cache);
+}
diff --git a/drivers/crypto/ccp/ccp-ops.c b/drivers/crypto/ccp/ccp-ops.c
index eefdf595f758..ffa2891035ac 100644
--- a/drivers/crypto/ccp/ccp-ops.c
+++ b/drivers/crypto/ccp/ccp-ops.c
@@ -1427,6 +1427,70 @@ e_mask:
return ret;
}
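+/*
+ * Passthrough variant used by the dmaengine path: src/dst are already
+ * DMA-mapped addresses (CCP_CMD_PASSTHRU_NO_DMA_MAP), so no scatterlist
+ * mapping is done here.
+ */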
+static int ccp_run_passthru_nomap_cmd(struct ccp_cmd_queue *cmd_q,
+ struct ccp_cmd *cmd)
+{
+ struct ccp_passthru_nomap_engine *pt = &cmd->u.passthru_nomap;
+ struct ccp_dm_workarea mask;
+ struct ccp_op op;
+ int ret;
+
+ if (!pt->final && (pt->src_len & (CCP_PASSTHRU_BLOCKSIZE - 1)))
+ return -EINVAL;
+
+ if (!pt->src_dma || !pt->dst_dma)
+ return -EINVAL;
+
+ if (pt->bit_mod != CCP_PASSTHRU_BITWISE_NOOP) {
+ if (pt->mask_len != CCP_PASSTHRU_MASKSIZE)
+ return -EINVAL;
+ if (!pt->mask)
+ return -EINVAL;
+ }
+
+ BUILD_BUG_ON(CCP_PASSTHRU_KSB_COUNT != 1);
+
+ memset(&op, 0, sizeof(op));
+ op.cmd_q = cmd_q;
+ op.jobid = ccp_gen_jobid(cmd_q->ccp);
+
+ if (pt->bit_mod != CCP_PASSTHRU_BITWISE_NOOP) {
+ /* Load the mask */
+ op.ksb_key = cmd_q->ksb_key;
+
+ mask.length = pt->mask_len;
+ mask.dma.address = pt->mask;
+ mask.dma.length = pt->mask_len;
+
+ ret = ccp_copy_to_ksb(cmd_q, &mask, op.jobid, op.ksb_key,
+ CCP_PASSTHRU_BYTESWAP_NOOP);
+ if (ret) {
+ cmd->engine_error = cmd_q->cmd_error;
+ return ret;
+ }
+ }
+
+ /* Send data to the CCP Passthru engine */
+ op.eom = 1;
+ op.soc = 1;
+
+ op.src.type = CCP_MEMTYPE_SYSTEM;
+ op.src.u.dma.address = pt->src_dma;
+ op.src.u.dma.offset = 0;
+ op.src.u.dma.length = pt->src_len;
+
+ op.dst.type = CCP_MEMTYPE_SYSTEM;
+ op.dst.u.dma.address = pt->dst_dma;
+ op.dst.u.dma.offset = 0;
+ op.dst.u.dma.length = pt->src_len;
+
+ ret = cmd_q->ccp->vdata->perform->perform_passthru(&op);
+ if (ret)
+ cmd->engine_error = cmd_q->cmd_error;
+
+ return ret;
+}
+
static int ccp_run_ecc_mm_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
{
struct ccp_ecc_engine *ecc = &cmd->u.ecc;
@@ -1762,7 +1826,10 @@ int ccp_run_cmd(struct ccp_cmd_queue *cmd_q, struct ccp_cmd *cmd)
ret = ccp_run_rsa_cmd(cmd_q, cmd);
break;
case CCP_ENGINE_PASSTHRU:
- ret = ccp_run_passthru_cmd(cmd_q, cmd);
+ if (cmd->flags & CCP_CMD_PASSTHRU_NO_DMA_MAP)
+ ret = ccp_run_passthru_nomap_cmd(cmd_q, cmd);
+ else
+ ret = ccp_run_passthru_cmd(cmd_q, cmd);
break;
case CCP_ENGINE_ECC:
ret = ccp_run_ecc_cmd(cmd_q, cmd);
diff --git a/drivers/crypto/marvell/cesa.c b/drivers/crypto/marvell/cesa.c
index 80239ae69527..e8ef9fd24a16 100644
--- a/drivers/crypto/marvell/cesa.c
+++ b/drivers/crypto/marvell/cesa.c
@@ -475,18 +475,18 @@ static int mv_cesa_probe(struct platform_device *pdev)
engine->regs = cesa->regs + CESA_ENGINE_OFF(i);
if (dram && cesa->caps->has_tdma)
- mv_cesa_conf_mbus_windows(&cesa->engines[i], dram);
+ mv_cesa_conf_mbus_windows(engine, dram);
- writel(0, cesa->engines[i].regs + CESA_SA_INT_STATUS);
+ writel(0, engine->regs + CESA_SA_INT_STATUS);
writel(CESA_SA_CFG_STOP_DIG_ERR,
- cesa->engines[i].regs + CESA_SA_CFG);
+ engine->regs + CESA_SA_CFG);
writel(engine->sram_dma & CESA_SA_SRAM_MSK,
- cesa->engines[i].regs + CESA_SA_DESC_P0);
+ engine->regs + CESA_SA_DESC_P0);
ret = devm_request_threaded_irq(dev, irq, NULL, mv_cesa_int,
IRQF_ONESHOT,
dev_name(&pdev->dev),
- &cesa->engines[i]);
+ engine);
if (ret)
goto err_cleanup;
}
diff --git a/drivers/crypto/marvell/hash.c b/drivers/crypto/marvell/hash.c
index 7ca2e0f9dc2e..7a5058da9151 100644
--- a/drivers/crypto/marvell/hash.c
+++ b/drivers/crypto/marvell/hash.c
@@ -768,8 +768,7 @@ static int mv_cesa_ahash_export(struct ahash_request *req, void *hash,
*len = creq->len;
memcpy(hash, creq->state, digsize);
memset(cache, 0, blocksize);
- if (creq->cache)
- memcpy(cache, creq->cache, creq->cache_ptr);
+ memcpy(cache, creq->cache, creq->cache_ptr);
return 0;
}
diff --git a/drivers/crypto/marvell/tdma.c b/drivers/crypto/marvell/tdma.c
index 76427981275b..0ad8f1ecf175 100644
--- a/drivers/crypto/marvell/tdma.c
+++ b/drivers/crypto/marvell/tdma.c
@@ -99,12 +99,11 @@ mv_cesa_dma_add_desc(struct mv_cesa_tdma_chain *chain, gfp_t flags)
struct mv_cesa_tdma_desc *new_tdma = NULL;
dma_addr_t dma_handle;
- new_tdma = dma_pool_alloc(cesa_dev->dma->tdma_desc_pool, flags,
- &dma_handle);
+ new_tdma = dma_pool_zalloc(cesa_dev->dma->tdma_desc_pool, flags,
+ &dma_handle);
if (!new_tdma)
return ERR_PTR(-ENOMEM);
- memset(new_tdma, 0, sizeof(*new_tdma));
new_tdma->cur_dma = dma_handle;
if (chain->last) {
chain->last->next_dma = cpu_to_le32(dma_handle);
diff --git a/drivers/crypto/mxc-scc.c b/drivers/crypto/mxc-scc.c
new file mode 100644
index 000000000000..ff383ef83871
--- /dev/null
+++ b/drivers/crypto/mxc-scc.c
@@ -0,0 +1,765 @@
+/*
+ * Copyright (C) 2016 Pengutronix, Steffen Trumtrar <kernel@pengutronix.de>
+ *
+ * The driver is based on information gathered from
+ * drivers/mxc/security/mxc_scc.c which can be found in
+ * the Freescale linux-2.6-imx.git in the imx_2.6.35_maintain branch.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+#include <linux/clk.h>
+#include <linux/crypto.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/irq.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+
+#include <crypto/algapi.h>
+#include <crypto/des.h>
+
+/* Secure Memory (SCM) registers */
+#define SCC_SCM_RED_START 0x0000
+#define SCC_SCM_BLACK_START 0x0004
+#define SCC_SCM_LENGTH 0x0008
+#define SCC_SCM_CTRL 0x000C
+#define SCC_SCM_STATUS 0x0010
+#define SCC_SCM_ERROR_STATUS 0x0014
+#define SCC_SCM_INTR_CTRL 0x0018
+#define SCC_SCM_CFG 0x001C
+#define SCC_SCM_INIT_VECTOR_0 0x0020
+#define SCC_SCM_INIT_VECTOR_1 0x0024
+#define SCC_SCM_RED_MEMORY 0x0400
+#define SCC_SCM_BLACK_MEMORY 0x0800
+
+/* Security Monitor (SMN) Registers */
+#define SCC_SMN_STATUS 0x1000
+#define SCC_SMN_COMMAND 0x1004
+#define SCC_SMN_SEQ_START 0x1008
+#define SCC_SMN_SEQ_END 0x100C
+#define SCC_SMN_SEQ_CHECK 0x1010
+#define SCC_SMN_BIT_COUNT 0x1014
+#define SCC_SMN_BITBANK_INC_SIZE 0x1018
+#define SCC_SMN_BITBANK_DECREMENT 0x101C
+#define SCC_SMN_COMPARE_SIZE 0x1020
+#define SCC_SMN_PLAINTEXT_CHECK 0x1024
+#define SCC_SMN_CIPHERTEXT_CHECK 0x1028
+#define SCC_SMN_TIMER_IV 0x102C
+#define SCC_SMN_TIMER_CONTROL 0x1030
+#define SCC_SMN_DEBUG_DETECT_STAT 0x1034
+#define SCC_SMN_TIMER 0x1038
+
+#define SCC_SCM_CTRL_START_CIPHER BIT(2)
+#define SCC_SCM_CTRL_CBC_MODE BIT(1)
+#define SCC_SCM_CTRL_DECRYPT_MODE BIT(0)
+
+#define SCC_SCM_STATUS_LEN_ERR BIT(12)
+#define SCC_SCM_STATUS_SMN_UNBLOCKED BIT(11)
+#define SCC_SCM_STATUS_CIPHERING_DONE BIT(10)
+#define SCC_SCM_STATUS_ZEROIZING_DONE BIT(9)
+#define SCC_SCM_STATUS_INTR_STATUS BIT(8)
+#define SCC_SCM_STATUS_SEC_KEY BIT(7)
+#define SCC_SCM_STATUS_INTERNAL_ERR BIT(6)
+#define SCC_SCM_STATUS_BAD_SEC_KEY BIT(5)
+#define SCC_SCM_STATUS_ZEROIZE_FAIL BIT(4)
+#define SCC_SCM_STATUS_SMN_BLOCKED BIT(3)
+#define SCC_SCM_STATUS_CIPHERING BIT(2)
+#define SCC_SCM_STATUS_ZEROIZING BIT(1)
+#define SCC_SCM_STATUS_BUSY BIT(0)
+
+#define SCC_SMN_STATUS_STATE_MASK 0x0000001F
+#define SCC_SMN_STATE_START 0x0
+/* The SMN is zeroizing its RAM during reset */
+#define SCC_SMN_STATE_ZEROIZE_RAM 0x5
+/* SMN has passed internal checks */
+#define SCC_SMN_STATE_HEALTH_CHECK 0x6
+/* Fatal Security Violation. SMN is locked, SCM is inoperative. */
+#define SCC_SMN_STATE_FAIL 0x9
+/* SCC is in secure state. SCM is using secret key. */
+#define SCC_SMN_STATE_SECURE 0xA
+/* SCC is not secure. SCM is using default key. */
+#define SCC_SMN_STATE_NON_SECURE 0xC
+
+#define SCC_SCM_INTR_CTRL_ZEROIZE_MEM BIT(2)
+#define SCC_SCM_INTR_CTRL_CLR_INTR BIT(1)
+#define SCC_SCM_INTR_CTRL_MASK_INTR BIT(0)
+
+/* Size, in blocks, of Black memory. */
+#define SCC_SCM_CFG_BLACK_SIZE_MASK	0x07fe0000
+#define SCC_SCM_CFG_BLACK_SIZE_SHIFT	17
+/* Size, in blocks, of Red memory. */
+#define SCC_SCM_CFG_RED_SIZE_MASK	0x0001ff80
+#define SCC_SCM_CFG_RED_SIZE_SHIFT	7
+/* Number of bytes per block. */
+#define SCC_SCM_CFG_BLOCK_SIZE_MASK 0x0000007f
+
+#define SCC_SMN_COMMAND_TAMPER_LOCK BIT(4)
+#define SCC_SMN_COMMAND_CLR_INTR BIT(3)
+#define SCC_SMN_COMMAND_CLR_BIT_BANK BIT(2)
+#define SCC_SMN_COMMAND_EN_INTR BIT(1)
+#define SCC_SMN_COMMAND_SET_SOFTWARE_ALARM BIT(0)
+
+#define SCC_KEY_SLOTS 20
+#define SCC_MAX_KEY_SIZE 32
+#define SCC_KEY_SLOT_SIZE 32
+
+#define SCC_CRC_CCITT_START 0xFFFF
+
+/*
+ * Offset into each RAM of the base of the area which is not
+ * used for Stored Keys.
+ */
+#define SCC_NON_RESERVED_OFFSET (SCC_KEY_SLOTS * SCC_KEY_SLOT_SIZE)
+
+/* Fixed padding for appending to plaintext to fill out a block */
+static char scc_block_padding[8] = { 0x80, 0, 0, 0, 0, 0, 0, 0 };
+
+enum mxc_scc_state {
+ SCC_STATE_OK,
+ SCC_STATE_UNIMPLEMENTED,
+ SCC_STATE_FAILED
+};
+
+struct mxc_scc {
+ struct device *dev;
+ void __iomem *base;
+ struct clk *clk;
+ bool hw_busy;
+ spinlock_t lock;
+ struct crypto_queue queue;
+ struct crypto_async_request *req;
+ int block_size_bytes;
+ int black_ram_size_blocks;
+ int memory_size_bytes;
+ int bytes_remaining;
+
+ void __iomem *red_memory;
+ void __iomem *black_memory;
+};
+
+struct mxc_scc_ctx {
+ struct mxc_scc *scc;
+ struct scatterlist *sg_src;
+ size_t src_nents;
+ struct scatterlist *sg_dst;
+ size_t dst_nents;
+ unsigned int offset;
+ unsigned int size;
+ unsigned int ctrl;
+};
+
+struct mxc_scc_crypto_tmpl {
+ struct mxc_scc *scc;
+ struct crypto_alg alg;
+};
+
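+/*
+ * Copy the cipher output from on-chip RAM back into the request's
+ * destination scatterlist: decryption reads from red (plaintext) memory,
+ * encryption from black (ciphertext) memory.
+ */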
+static int mxc_scc_get_data(struct mxc_scc_ctx *ctx,
+ struct crypto_async_request *req)
+{
+ struct ablkcipher_request *ablkreq = ablkcipher_request_cast(req);
+ struct mxc_scc *scc = ctx->scc;
+ size_t len;
+ void __iomem *from;
+
+ if (ctx->ctrl & SCC_SCM_CTRL_DECRYPT_MODE)
+ from = scc->red_memory;
+ else
+ from = scc->black_memory;
+
+	dev_dbg(scc->dev, "pcopy: from 0x%p %zu bytes\n", from,
+		ctx->dst_nents * 8);
+ len = sg_pcopy_from_buffer(ablkreq->dst, ctx->dst_nents,
+ from, ctx->size, ctx->offset);
+ if (!len) {
+		dev_err(scc->dev, "pcopy err from 0x%p (len=%zu)\n", from, len);
+ return -EINVAL;
+ }
+
+#ifdef DEBUG
+ print_hex_dump(KERN_ERR,
+ "red memory@"__stringify(__LINE__)": ",
+ DUMP_PREFIX_ADDRESS, 16, 4,
+ scc->red_memory, ctx->size, 1);
+ print_hex_dump(KERN_ERR,
+ "black memory@"__stringify(__LINE__)": ",
+ DUMP_PREFIX_ADDRESS, 16, 4,
+ scc->black_memory, ctx->size, 1);
+#endif
+
+ ctx->offset += len;
+
+ if (ctx->offset < ablkreq->nbytes)
+ return -EINPROGRESS;
+
+ return 0;
+}
+
+static int mxc_scc_ablkcipher_req_init(struct ablkcipher_request *req,
+ struct mxc_scc_ctx *ctx)
+{
+ struct mxc_scc *scc = ctx->scc;
+ int nents;
+
+ nents = sg_nents_for_len(req->src, req->nbytes);
+ if (nents < 0) {
+		dev_err(scc->dev, "Invalid number of src SG\n");
+ return nents;
+ }
+ ctx->src_nents = nents;
+
+ nents = sg_nents_for_len(req->dst, req->nbytes);
+ if (nents < 0) {
+		dev_err(scc->dev, "Invalid number of dst SG\n");
+ return nents;
+ }
+ ctx->dst_nents = nents;
+
+ ctx->size = 0;
+ ctx->offset = 0;
+
+ return 0;
+}
+
+static int mxc_scc_ablkcipher_req_complete(struct crypto_async_request *req,
+ struct mxc_scc_ctx *ctx,
+ int result)
+{
+ struct ablkcipher_request *ablkreq = ablkcipher_request_cast(req);
+ struct mxc_scc *scc = ctx->scc;
+
+ scc->req = NULL;
+ scc->bytes_remaining = scc->memory_size_bytes;
+
+ if (ctx->ctrl & SCC_SCM_CTRL_CBC_MODE)
+ memcpy(ablkreq->info, scc->base + SCC_SCM_INIT_VECTOR_0,
+ scc->block_size_bytes);
+
+ req->complete(req, result);
+ scc->hw_busy = false;
+
+ return 0;
+}
+
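+/*
+ * Stage the next chunk of input in on-chip RAM (black memory for
+ * decryption, red memory for encryption), program the IV for CBC mode and
+ * append the fixed padding pattern when the copied length is not block
+ * aligned.
+ */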
+static int mxc_scc_put_data(struct mxc_scc_ctx *ctx,
+ struct ablkcipher_request *req)
+{
+ u8 padding_buffer[sizeof(u16) + sizeof(scc_block_padding)];
+ size_t len = min_t(size_t, req->nbytes - ctx->offset,
+ ctx->scc->bytes_remaining);
+ unsigned int padding_byte_count = 0;
+ struct mxc_scc *scc = ctx->scc;
+ void __iomem *to;
+
+ if (ctx->ctrl & SCC_SCM_CTRL_DECRYPT_MODE)
+ to = scc->black_memory;
+ else
+ to = scc->red_memory;
+
+ if (ctx->ctrl & SCC_SCM_CTRL_CBC_MODE && req->info)
+ memcpy(scc->base + SCC_SCM_INIT_VECTOR_0, req->info,
+ scc->block_size_bytes);
+
+ len = sg_pcopy_to_buffer(req->src, ctx->src_nents,
+ to, len, ctx->offset);
+ if (!len) {
+		dev_err(scc->dev, "pcopy err to 0x%p (len=%zu)\n", to, len);
+ return -EINVAL;
+ }
+
+ ctx->size = len;
+
+#ifdef DEBUG
+	dev_dbg(scc->dev, "copied %zu bytes to 0x%p\n", len, to);
+ print_hex_dump(KERN_ERR,
+ "init vector0@"__stringify(__LINE__)": ",
+ DUMP_PREFIX_ADDRESS, 16, 4,
+ scc->base + SCC_SCM_INIT_VECTOR_0, scc->block_size_bytes,
+ 1);
+ print_hex_dump(KERN_ERR,
+ "red memory@"__stringify(__LINE__)": ",
+ DUMP_PREFIX_ADDRESS, 16, 4,
+ scc->red_memory, ctx->size, 1);
+ print_hex_dump(KERN_ERR,
+ "black memory@"__stringify(__LINE__)": ",
+ DUMP_PREFIX_ADDRESS, 16, 4,
+ scc->black_memory, ctx->size, 1);
+#endif
+
+ scc->bytes_remaining -= len;
+
+ padding_byte_count = len % scc->block_size_bytes;
+
+ if (padding_byte_count) {
+ memcpy(padding_buffer, scc_block_padding, padding_byte_count);
+ memcpy(to + len, padding_buffer, padding_byte_count);
+ ctx->size += padding_byte_count;
+ }
+
+#ifdef DEBUG
+ print_hex_dump(KERN_ERR,
+ "data to encrypt@"__stringify(__LINE__)": ",
+ DUMP_PREFIX_ADDRESS, 16, 4,
+ to, ctx->size, 1);
+#endif
+
+ return 0;
+}
+
+static void mxc_scc_ablkcipher_next(struct mxc_scc_ctx *ctx,
+ struct crypto_async_request *req)
+{
+ struct ablkcipher_request *ablkreq = ablkcipher_request_cast(req);
+ struct mxc_scc *scc = ctx->scc;
+ int err;
+
+ dev_dbg(scc->dev, "dispatch request (nbytes=%d, src=%p, dst=%p)\n",
+ ablkreq->nbytes, ablkreq->src, ablkreq->dst);
+
+ writel(0, scc->base + SCC_SCM_ERROR_STATUS);
+
+ err = mxc_scc_put_data(ctx, ablkreq);
+ if (err) {
+ mxc_scc_ablkcipher_req_complete(req, ctx, err);
+ return;
+ }
+
+ dev_dbg(scc->dev, "Start encryption (0x%p/0x%p)\n",
+ (void *)readl(scc->base + SCC_SCM_RED_START),
+ (void *)readl(scc->base + SCC_SCM_BLACK_START));
+
+ /* clear interrupt control registers */
+ writel(SCC_SCM_INTR_CTRL_CLR_INTR,
+ scc->base + SCC_SCM_INTR_CTRL);
+
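+	/* The SCM length register takes the number of blocks minus one. */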
+ writel((ctx->size / ctx->scc->block_size_bytes) - 1,
+ scc->base + SCC_SCM_LENGTH);
+
+ dev_dbg(scc->dev, "Process %d block(s) in 0x%p\n",
+ ctx->size / ctx->scc->block_size_bytes,
+ (ctx->ctrl & SCC_SCM_CTRL_DECRYPT_MODE) ? scc->black_memory :
+ scc->red_memory);
+
+ writel(ctx->ctrl, scc->base + SCC_SCM_CTRL);
+}
+
+static irqreturn_t mxc_scc_int(int irq, void *priv)
+{
+ struct crypto_async_request *req;
+ struct mxc_scc_ctx *ctx;
+ struct mxc_scc *scc = priv;
+ int status;
+ int ret;
+
+ status = readl(scc->base + SCC_SCM_STATUS);
+
+ /* clear interrupt control registers */
+ writel(SCC_SCM_INTR_CTRL_CLR_INTR, scc->base + SCC_SCM_INTR_CTRL);
+
+ if (status & SCC_SCM_STATUS_BUSY)
+ return IRQ_NONE;
+
+ req = scc->req;
+ if (req) {
+ ctx = crypto_tfm_ctx(req->tfm);
+ ret = mxc_scc_get_data(ctx, req);
+ if (ret != -EINPROGRESS)
+ mxc_scc_ablkcipher_req_complete(req, ctx, ret);
+ else
+ mxc_scc_ablkcipher_next(ctx, req);
+ }
+
+ return IRQ_HANDLED;
+}
+
+static int mxc_scc_cra_init(struct crypto_tfm *tfm)
+{
+ struct mxc_scc_ctx *ctx = crypto_tfm_ctx(tfm);
+ struct crypto_alg *alg = tfm->__crt_alg;
+ struct mxc_scc_crypto_tmpl *algt;
+
+ algt = container_of(alg, struct mxc_scc_crypto_tmpl, alg);
+
+ ctx->scc = algt->scc;
+ return 0;
+}
+
+static void mxc_scc_dequeue_req_unlocked(struct mxc_scc_ctx *ctx)
+{
+ struct crypto_async_request *req, *backlog;
+
+ if (ctx->scc->hw_busy)
+ return;
+
+ spin_lock_bh(&ctx->scc->lock);
+ backlog = crypto_get_backlog(&ctx->scc->queue);
+ req = crypto_dequeue_request(&ctx->scc->queue);
+ ctx->scc->req = req;
+ ctx->scc->hw_busy = true;
+ spin_unlock_bh(&ctx->scc->lock);
+
+ if (!req)
+ return;
+
+ if (backlog)
+ backlog->complete(backlog, -EINPROGRESS);
+
+ mxc_scc_ablkcipher_next(ctx, req);
+}
+
+static int mxc_scc_queue_req(struct mxc_scc_ctx *ctx,
+ struct crypto_async_request *req)
+{
+ int ret;
+
+ spin_lock_bh(&ctx->scc->lock);
+ ret = crypto_enqueue_request(&ctx->scc->queue, req);
+ spin_unlock_bh(&ctx->scc->lock);
+
+ if (ret != -EINPROGRESS)
+ return ret;
+
+ mxc_scc_dequeue_req_unlocked(ctx);
+
+ return -EINPROGRESS;
+}
+
+static int mxc_scc_des3_op(struct mxc_scc_ctx *ctx,
+ struct ablkcipher_request *req)
+{
+ int err;
+
+ err = mxc_scc_ablkcipher_req_init(req, ctx);
+ if (err)
+ return err;
+
+ return mxc_scc_queue_req(ctx, &req->base);
+}
+
+static int mxc_scc_ecb_des_encrypt(struct ablkcipher_request *req)
+{
+ struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(req);
+ struct mxc_scc_ctx *ctx = crypto_ablkcipher_ctx(cipher);
+
+ ctx->ctrl = SCC_SCM_CTRL_START_CIPHER;
+
+ return mxc_scc_des3_op(ctx, req);
+}
+
+static int mxc_scc_ecb_des_decrypt(struct ablkcipher_request *req)
+{
+ struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(req);
+ struct mxc_scc_ctx *ctx = crypto_ablkcipher_ctx(cipher);
+
+ ctx->ctrl = SCC_SCM_CTRL_START_CIPHER;
+ ctx->ctrl |= SCC_SCM_CTRL_DECRYPT_MODE;
+
+ return mxc_scc_des3_op(ctx, req);
+}
+
+static int mxc_scc_cbc_des_encrypt(struct ablkcipher_request *req)
+{
+ struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(req);
+ struct mxc_scc_ctx *ctx = crypto_ablkcipher_ctx(cipher);
+
+ ctx->ctrl = SCC_SCM_CTRL_START_CIPHER;
+ ctx->ctrl |= SCC_SCM_CTRL_CBC_MODE;
+
+ return mxc_scc_des3_op(ctx, req);
+}
+
+static int mxc_scc_cbc_des_decrypt(struct ablkcipher_request *req)
+{
+ struct crypto_ablkcipher *cipher = crypto_ablkcipher_reqtfm(req);
+ struct mxc_scc_ctx *ctx = crypto_ablkcipher_ctx(cipher);
+
+ ctx->ctrl = SCC_SCM_CTRL_START_CIPHER;
+ ctx->ctrl |= SCC_SCM_CTRL_CBC_MODE;
+ ctx->ctrl |= SCC_SCM_CTRL_DECRYPT_MODE;
+
+ return mxc_scc_des3_op(ctx, req);
+}
+
+static void mxc_scc_hw_init(struct mxc_scc *scc)
+{
+ int offset;
+
+ offset = SCC_NON_RESERVED_OFFSET / scc->block_size_bytes;
+
+ /* Fill the RED_START register */
+ writel(offset, scc->base + SCC_SCM_RED_START);
+
+ /* Fill the BLACK_START register */
+ writel(offset, scc->base + SCC_SCM_BLACK_START);
+
+ scc->red_memory = scc->base + SCC_SCM_RED_MEMORY +
+ SCC_NON_RESERVED_OFFSET;
+
+ scc->black_memory = scc->base + SCC_SCM_BLACK_MEMORY +
+ SCC_NON_RESERVED_OFFSET;
+
+ scc->bytes_remaining = scc->memory_size_bytes;
+}
+
+static int mxc_scc_get_config(struct mxc_scc *scc)
+{
+ int config;
+
+ config = readl(scc->base + SCC_SCM_CFG);
+
+ scc->block_size_bytes = config & SCC_SCM_CFG_BLOCK_SIZE_MASK;
+
+ scc->black_ram_size_blocks = config & SCC_SCM_CFG_BLACK_SIZE_MASK;
+
+ scc->memory_size_bytes = (scc->block_size_bytes *
+ scc->black_ram_size_blocks) -
+ SCC_NON_RESERVED_OFFSET;
+
+ return 0;
+}
+
+static enum mxc_scc_state mxc_scc_get_state(struct mxc_scc *scc)
+{
+ enum mxc_scc_state state;
+ int status;
+
+ status = readl(scc->base + SCC_SMN_STATUS) &
+ SCC_SMN_STATUS_STATE_MASK;
+
+ /* If in Health Check, try to bringup to secure state */
+ if (status & SCC_SMN_STATE_HEALTH_CHECK) {
+ /*
+ * Write a simple algorithm to the Algorithm Sequence
+ * Checker (ASC)
+ */
+ writel(0xaaaa, scc->base + SCC_SMN_SEQ_START);
+ writel(0x5555, scc->base + SCC_SMN_SEQ_END);
+ writel(0x5555, scc->base + SCC_SMN_SEQ_CHECK);
+
+ status = readl(scc->base + SCC_SMN_STATUS) &
+ SCC_SMN_STATUS_STATE_MASK;
+ }
+
+ switch (status) {
+ case SCC_SMN_STATE_NON_SECURE:
+ case SCC_SMN_STATE_SECURE:
+ state = SCC_STATE_OK;
+ break;
+ case SCC_SMN_STATE_FAIL:
+ state = SCC_STATE_FAILED;
+ break;
+ default:
+ state = SCC_STATE_UNIMPLEMENTED;
+ break;
+ }
+
+ return state;
+}
+
+static struct mxc_scc_crypto_tmpl scc_ecb_des = {
+ .alg = {
+ .cra_name = "ecb(des3_ede)",
+ .cra_driver_name = "ecb-des3-scc",
+ .cra_priority = 300,
+ .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER,
+ .cra_blocksize = DES3_EDE_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct mxc_scc_ctx),
+ .cra_alignmask = 0,
+ .cra_type = &crypto_ablkcipher_type,
+ .cra_module = THIS_MODULE,
+ .cra_init = mxc_scc_cra_init,
+ .cra_u.ablkcipher = {
+ .min_keysize = DES3_EDE_KEY_SIZE,
+ .max_keysize = DES3_EDE_KEY_SIZE,
+ .encrypt = mxc_scc_ecb_des_encrypt,
+ .decrypt = mxc_scc_ecb_des_decrypt,
+ }
+ }
+};
+
+static struct mxc_scc_crypto_tmpl scc_cbc_des = {
+ .alg = {
+ .cra_name = "cbc(des3_ede)",
+ .cra_driver_name = "cbc-des3-scc",
+ .cra_priority = 300,
+ .cra_flags = CRYPTO_ALG_TYPE_ABLKCIPHER,
+ .cra_blocksize = DES3_EDE_BLOCK_SIZE,
+ .cra_ctxsize = sizeof(struct mxc_scc_ctx),
+ .cra_alignmask = 0,
+ .cra_type = &crypto_ablkcipher_type,
+ .cra_module = THIS_MODULE,
+ .cra_init = mxc_scc_cra_init,
+ .cra_u.ablkcipher = {
+ .min_keysize = DES3_EDE_KEY_SIZE,
+ .max_keysize = DES3_EDE_KEY_SIZE,
+ .encrypt = mxc_scc_cbc_des_encrypt,
+ .decrypt = mxc_scc_cbc_des_decrypt,
+ }
+ }
+};
+
+static struct mxc_scc_crypto_tmpl *scc_crypto_algs[] = {
+ &scc_ecb_des,
+ &scc_cbc_des,
+};
+
+static int mxc_scc_crypto_register(struct mxc_scc *scc)
+{
+ int i;
+ int err = 0;
+
+ for (i = 0; i < ARRAY_SIZE(scc_crypto_algs); i++) {
+ scc_crypto_algs[i]->scc = scc;
+ err = crypto_register_alg(&scc_crypto_algs[i]->alg);
+ if (err)
+ goto err_out;
+ }
+
+ return 0;
+
+err_out:
+ while (--i >= 0)
+ crypto_unregister_alg(&scc_crypto_algs[i]->alg);
+
+ return err;
+}
+
+static void mxc_scc_crypto_unregister(void)
+{
+ unsigned int i;
+
+ for (i = 0; i < ARRAY_SIZE(scc_crypto_algs); i++)
+ crypto_unregister_alg(&scc_crypto_algs[i]->alg);
+}
+
+static int mxc_scc_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct resource *res;
+ struct mxc_scc *scc;
+ enum mxc_scc_state state;
+ int irq;
+ int ret;
+ int i;
+
+ scc = devm_kzalloc(dev, sizeof(*scc), GFP_KERNEL);
+ if (!scc)
+ return -ENOMEM;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ scc->base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(scc->base))
+ return PTR_ERR(scc->base);
+
+ scc->clk = devm_clk_get(&pdev->dev, "ipg");
+ if (IS_ERR(scc->clk)) {
+ dev_err(dev, "Could not get ipg clock\n");
+ return PTR_ERR(scc->clk);
+ }
+
+ clk_prepare_enable(scc->clk);
+
+ /* clear error status register */
+ writel(0x0, scc->base + SCC_SCM_ERROR_STATUS);
+
+ /* clear interrupt control registers */
+ writel(SCC_SCM_INTR_CTRL_CLR_INTR |
+ SCC_SCM_INTR_CTRL_MASK_INTR,
+ scc->base + SCC_SCM_INTR_CTRL);
+
+ writel(SCC_SMN_COMMAND_CLR_INTR |
+ SCC_SMN_COMMAND_EN_INTR,
+ scc->base + SCC_SMN_COMMAND);
+
+ scc->dev = dev;
+ platform_set_drvdata(pdev, scc);
+
+ ret = mxc_scc_get_config(scc);
+ if (ret)
+ goto err_out;
+
+ state = mxc_scc_get_state(scc);
+
+ if (state != SCC_STATE_OK) {
+ dev_err(dev, "SCC in unusable state %d\n", state);
+ ret = -EINVAL;
+ goto err_out;
+ }
+
+ mxc_scc_hw_init(scc);
+
+ spin_lock_init(&scc->lock);
+ /* FIXME: calculate queue from RAM slots */
+ crypto_init_queue(&scc->queue, 50);
+
+ for (i = 0; i < 2; i++) {
+ irq = platform_get_irq(pdev, i);
+ if (irq < 0) {
+ dev_err(dev, "failed to get irq resource\n");
+ ret = -EINVAL;
+ goto err_out;
+ }
+
+ ret = devm_request_threaded_irq(dev, irq, NULL, mxc_scc_int,
+ IRQF_ONESHOT, dev_name(dev), scc);
+ if (ret)
+ goto err_out;
+ }
+
+ ret = mxc_scc_crypto_register(scc);
+ if (ret) {
+ dev_err(dev, "could not register algorithms");
+ goto err_out;
+ }
+
+ dev_info(dev, "registered successfully.\n");
+
+ return 0;
+
+err_out:
+ clk_disable_unprepare(scc->clk);
+
+ return ret;
+}
+
+static int mxc_scc_remove(struct platform_device *pdev)
+{
+ struct mxc_scc *scc = platform_get_drvdata(pdev);
+
+ mxc_scc_crypto_unregister();
+
+ clk_disable_unprepare(scc->clk);
+
+ return 0;
+}
+
+static const struct of_device_id mxc_scc_dt_ids[] = {
+ { .compatible = "fsl,imx25-scc", .data = NULL, },
+ { /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, mxc_scc_dt_ids);
+
+static struct platform_driver mxc_scc_driver = {
+ .probe = mxc_scc_probe,
+ .remove = mxc_scc_remove,
+ .driver = {
+ .name = "mxc-scc",
+ .of_match_table = mxc_scc_dt_ids,
+ },
+};
+
+module_platform_driver(mxc_scc_driver);
+MODULE_AUTHOR("Steffen Trumtrar <kernel@pengutronix.de>");
+MODULE_DESCRIPTION("Freescale i.MX25 SCC Crypto driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/crypto/n2_core.c b/drivers/crypto/n2_core.c
index b85a7a7dbf63..c5aac25a5738 100644
--- a/drivers/crypto/n2_core.c
+++ b/drivers/crypto/n2_core.c
@@ -1598,7 +1598,7 @@ static void *new_queue(unsigned long q_type)
static void free_queue(void *p, unsigned long q_type)
{
- return kmem_cache_free(queue_cache[q_type - 1], p);
+ kmem_cache_free(queue_cache[q_type - 1], p);
}
static int queue_cache_init(void)
diff --git a/drivers/crypto/omap-aes.c b/drivers/crypto/omap-aes.c
index d420ec751c7c..ce174d3b842c 100644
--- a/drivers/crypto/omap-aes.c
+++ b/drivers/crypto/omap-aes.c
@@ -26,7 +26,6 @@
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
-#include <linux/omap-dma.h>
#include <linux/pm_runtime.h>
#include <linux/of.h>
#include <linux/of_device.h>
@@ -176,9 +175,7 @@ struct omap_aes_dev {
struct scatter_walk in_walk;
struct scatter_walk out_walk;
- int dma_in;
struct dma_chan *dma_lch_in;
- int dma_out;
struct dma_chan *dma_lch_out;
int in_sg_len;
int out_sg_len;
@@ -351,30 +348,21 @@ static void omap_aes_dma_out_callback(void *data)
static int omap_aes_dma_init(struct omap_aes_dev *dd)
{
- int err = -ENOMEM;
- dma_cap_mask_t mask;
+ int err;
dd->dma_lch_out = NULL;
dd->dma_lch_in = NULL;
- dma_cap_zero(mask);
- dma_cap_set(DMA_SLAVE, mask);
-
- dd->dma_lch_in = dma_request_slave_channel_compat(mask,
- omap_dma_filter_fn,
- &dd->dma_in,
- dd->dev, "rx");
- if (!dd->dma_lch_in) {
+ dd->dma_lch_in = dma_request_chan(dd->dev, "rx");
+ if (IS_ERR(dd->dma_lch_in)) {
dev_err(dd->dev, "Unable to request in DMA channel\n");
- goto err_dma_in;
+ return PTR_ERR(dd->dma_lch_in);
}
- dd->dma_lch_out = dma_request_slave_channel_compat(mask,
- omap_dma_filter_fn,
- &dd->dma_out,
- dd->dev, "tx");
- if (!dd->dma_lch_out) {
+ dd->dma_lch_out = dma_request_chan(dd->dev, "tx");
+ if (IS_ERR(dd->dma_lch_out)) {
dev_err(dd->dev, "Unable to request out DMA channel\n");
+ err = PTR_ERR(dd->dma_lch_out);
goto err_dma_out;
}
@@ -382,14 +370,15 @@ static int omap_aes_dma_init(struct omap_aes_dev *dd)
err_dma_out:
dma_release_channel(dd->dma_lch_in);
-err_dma_in:
- if (err)
- pr_err("error: %d\n", err);
+
return err;
}
static void omap_aes_dma_cleanup(struct omap_aes_dev *dd)
{
+ if (dd->pio_only)
+ return;
+
dma_release_channel(dd->dma_lch_out);
dma_release_channel(dd->dma_lch_in);
}
@@ -1080,9 +1069,6 @@ static int omap_aes_get_res_of(struct omap_aes_dev *dd,
goto err;
}
- dd->dma_out = -1; /* Dummy value that's unused */
- dd->dma_in = -1; /* Dummy value that's unused */
-
dd->pdata = match->data;
err:
@@ -1116,24 +1102,6 @@ static int omap_aes_get_res_pdev(struct omap_aes_dev *dd,
}
memcpy(res, r, sizeof(*res));
- /* Get the DMA out channel */
- r = platform_get_resource(pdev, IORESOURCE_DMA, 0);
- if (!r) {
- dev_err(dev, "no DMA out resource info\n");
- err = -ENODEV;
- goto err;
- }
- dd->dma_out = r->start;
-
- /* Get the DMA in channel */
- r = platform_get_resource(pdev, IORESOURCE_DMA, 1);
- if (!r) {
- dev_err(dev, "no DMA in resource info\n");
- err = -ENODEV;
- goto err;
- }
- dd->dma_in = r->start;
-
/* Only OMAP2/3 can be non-DT */
dd->pdata = &omap_aes_pdata_omap2;
@@ -1191,7 +1159,9 @@ static int omap_aes_probe(struct platform_device *pdev)
tasklet_init(&dd->done_task, omap_aes_done_task, (unsigned long)dd);
err = omap_aes_dma_init(dd);
- if (err && AES_REG_IRQ_STATUS(dd) && AES_REG_IRQ_ENABLE(dd)) {
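+	/*
+	 * The named DMA channels may not be available yet; pass
+	 * -EPROBE_DEFER up instead of falling back to PIO mode.
+	 */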
+ if (err == -EPROBE_DEFER) {
+ goto err_irq;
+ } else if (err && AES_REG_IRQ_STATUS(dd) && AES_REG_IRQ_ENABLE(dd)) {
dd->pio_only = 1;
irq = platform_get_irq(pdev, 0);
@@ -1248,8 +1218,8 @@ err_algs:
for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--)
crypto_unregister_alg(
&dd->pdata->algs_info[i].algs_list[j]);
- if (!dd->pio_only)
- omap_aes_dma_cleanup(dd);
+
+ omap_aes_dma_cleanup(dd);
err_irq:
tasklet_kill(&dd->done_task);
pm_runtime_disable(dev);
diff --git a/drivers/crypto/omap-des.c b/drivers/crypto/omap-des.c
index dd7b93f2f94c..3eedb03111ba 100644
--- a/drivers/crypto/omap-des.c
+++ b/drivers/crypto/omap-des.c
@@ -29,7 +29,6 @@
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
-#include <linux/omap-dma.h>
#include <linux/pm_runtime.h>
#include <linux/of.h>
#include <linux/of_device.h>
@@ -39,6 +38,7 @@
#include <linux/interrupt.h>
#include <crypto/scatterwalk.h>
#include <crypto/des.h>
+#include <crypto/algapi.h>
#define DST_MAXBURST 2
@@ -132,14 +132,10 @@ struct omap_des_dev {
unsigned long flags;
int err;
- /* spinlock used for queues */
- spinlock_t lock;
- struct crypto_queue queue;
-
struct tasklet_struct done_task;
- struct tasklet_struct queue_task;
struct ablkcipher_request *req;
+ struct crypto_engine *engine;
/*
* total is used by PIO mode for book keeping so introduce
* variable total_save as need it to calc page_order
@@ -158,9 +154,7 @@ struct omap_des_dev {
struct scatter_walk in_walk;
struct scatter_walk out_walk;
- int dma_in;
struct dma_chan *dma_lch_in;
- int dma_out;
struct dma_chan *dma_lch_out;
int in_sg_len;
int out_sg_len;
@@ -340,30 +334,21 @@ static void omap_des_dma_out_callback(void *data)
static int omap_des_dma_init(struct omap_des_dev *dd)
{
- int err = -ENOMEM;
- dma_cap_mask_t mask;
+ int err;
dd->dma_lch_out = NULL;
dd->dma_lch_in = NULL;
- dma_cap_zero(mask);
- dma_cap_set(DMA_SLAVE, mask);
-
- dd->dma_lch_in = dma_request_slave_channel_compat(mask,
- omap_dma_filter_fn,
- &dd->dma_in,
- dd->dev, "rx");
- if (!dd->dma_lch_in) {
+ dd->dma_lch_in = dma_request_chan(dd->dev, "rx");
+ if (IS_ERR(dd->dma_lch_in)) {
dev_err(dd->dev, "Unable to request in DMA channel\n");
- goto err_dma_in;
+ return PTR_ERR(dd->dma_lch_in);
}
- dd->dma_lch_out = dma_request_slave_channel_compat(mask,
- omap_dma_filter_fn,
- &dd->dma_out,
- dd->dev, "tx");
- if (!dd->dma_lch_out) {
+ dd->dma_lch_out = dma_request_chan(dd->dev, "tx");
+ if (IS_ERR(dd->dma_lch_out)) {
dev_err(dd->dev, "Unable to request out DMA channel\n");
+ err = PTR_ERR(dd->dma_lch_out);
goto err_dma_out;
}
@@ -371,14 +356,15 @@ static int omap_des_dma_init(struct omap_des_dev *dd)
err_dma_out:
dma_release_channel(dd->dma_lch_in);
-err_dma_in:
- if (err)
- pr_err("error: %d\n", err);
+
return err;
}
static void omap_des_dma_cleanup(struct omap_des_dev *dd)
{
+ if (dd->pio_only)
+ return;
+
dma_release_channel(dd->dma_lch_out);
dma_release_channel(dd->dma_lch_in);
}
@@ -520,9 +506,7 @@ static void omap_des_finish_req(struct omap_des_dev *dd, int err)
pr_debug("err: %d\n", err);
pm_runtime_put(dd->dev);
- dd->flags &= ~FLAGS_BUSY;
-
- req->base.complete(&req->base, err);
+ crypto_finalize_request(dd->engine, req, err);
}
static int omap_des_crypt_dma_stop(struct omap_des_dev *dd)
@@ -585,34 +569,24 @@ static int omap_des_copy_sgs(struct omap_des_dev *dd)
}
static int omap_des_handle_queue(struct omap_des_dev *dd,
- struct ablkcipher_request *req)
+ struct ablkcipher_request *req)
{
- struct crypto_async_request *async_req, *backlog;
- struct omap_des_ctx *ctx;
- struct omap_des_reqctx *rctx;
- unsigned long flags;
- int err, ret = 0;
-
- spin_lock_irqsave(&dd->lock, flags);
if (req)
- ret = ablkcipher_enqueue_request(&dd->queue, req);
- if (dd->flags & FLAGS_BUSY) {
- spin_unlock_irqrestore(&dd->lock, flags);
- return ret;
- }
- backlog = crypto_get_backlog(&dd->queue);
- async_req = crypto_dequeue_request(&dd->queue);
- if (async_req)
- dd->flags |= FLAGS_BUSY;
- spin_unlock_irqrestore(&dd->lock, flags);
+ return crypto_transfer_request_to_engine(dd->engine, req);
- if (!async_req)
- return ret;
+ return 0;
+}
- if (backlog)
- backlog->complete(backlog, -EINPROGRESS);
+static int omap_des_prepare_req(struct crypto_engine *engine,
+ struct ablkcipher_request *req)
+{
+ struct omap_des_ctx *ctx = crypto_ablkcipher_ctx(
+ crypto_ablkcipher_reqtfm(req));
+ struct omap_des_dev *dd = omap_des_find_dev(ctx);
+ struct omap_des_reqctx *rctx;
- req = ablkcipher_request_cast(async_req);
+ if (!dd)
+ return -ENODEV;
/* assign new request to device */
dd->req = req;
@@ -642,16 +616,20 @@ static int omap_des_handle_queue(struct omap_des_dev *dd,
dd->ctx = ctx;
ctx->dd = dd;
- err = omap_des_write_ctrl(dd);
- if (!err)
- err = omap_des_crypt_dma_start(dd);
- if (err) {
- /* des_task will not finish it, so do it here */
- omap_des_finish_req(dd, err);
- tasklet_schedule(&dd->queue_task);
- }
+ return omap_des_write_ctrl(dd);
+}
+
+static int omap_des_crypt_req(struct crypto_engine *engine,
+ struct ablkcipher_request *req)
+{
+ struct omap_des_ctx *ctx = crypto_ablkcipher_ctx(
+ crypto_ablkcipher_reqtfm(req));
+ struct omap_des_dev *dd = omap_des_find_dev(ctx);
+
+ if (!dd)
+ return -ENODEV;
- return ret; /* return ret, which is enqueue return value */
+ return omap_des_crypt_dma_start(dd);
}
static void omap_des_done_task(unsigned long data)
@@ -683,18 +661,10 @@ static void omap_des_done_task(unsigned long data)
}
omap_des_finish_req(dd, 0);
- omap_des_handle_queue(dd, NULL);
pr_debug("exit\n");
}
-static void omap_des_queue_task(unsigned long data)
-{
- struct omap_des_dev *dd = (struct omap_des_dev *)data;
-
- omap_des_handle_queue(dd, NULL);
-}
-
static int omap_des_crypt(struct ablkcipher_request *req, unsigned long mode)
{
struct omap_des_ctx *ctx = crypto_ablkcipher_ctx(
@@ -999,8 +969,6 @@ static int omap_des_get_of(struct omap_des_dev *dd,
return -EINVAL;
}
- dd->dma_out = -1; /* Dummy value that's unused */
- dd->dma_in = -1; /* Dummy value that's unused */
dd->pdata = match->data;
return 0;
@@ -1016,33 +984,10 @@ static int omap_des_get_of(struct omap_des_dev *dd,
static int omap_des_get_pdev(struct omap_des_dev *dd,
struct platform_device *pdev)
{
- struct device *dev = &pdev->dev;
- struct resource *r;
- int err = 0;
-
- /* Get the DMA out channel */
- r = platform_get_resource(pdev, IORESOURCE_DMA, 0);
- if (!r) {
- dev_err(dev, "no DMA out resource info\n");
- err = -ENODEV;
- goto err;
- }
- dd->dma_out = r->start;
-
- /* Get the DMA in channel */
- r = platform_get_resource(pdev, IORESOURCE_DMA, 1);
- if (!r) {
- dev_err(dev, "no DMA in resource info\n");
- err = -ENODEV;
- goto err;
- }
- dd->dma_in = r->start;
-
/* non-DT devices get pdata from pdev */
dd->pdata = pdev->dev.platform_data;
-err:
- return err;
+ return 0;
}
static int omap_des_probe(struct platform_device *pdev)
@@ -1062,9 +1007,6 @@ static int omap_des_probe(struct platform_device *pdev)
dd->dev = dev;
platform_set_drvdata(pdev, dd);
- spin_lock_init(&dd->lock);
- crypto_init_queue(&dd->queue, OMAP_DES_QUEUE_LENGTH);
-
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (!res) {
dev_err(dev, "no MEM resource info\n");
@@ -1103,10 +1045,11 @@ static int omap_des_probe(struct platform_device *pdev)
(reg & dd->pdata->minor_mask) >> dd->pdata->minor_shift);
tasklet_init(&dd->done_task, omap_des_done_task, (unsigned long)dd);
- tasklet_init(&dd->queue_task, omap_des_queue_task, (unsigned long)dd);
err = omap_des_dma_init(dd);
- if (err && DES_REG_IRQ_STATUS(dd) && DES_REG_IRQ_ENABLE(dd)) {
+ if (err == -EPROBE_DEFER) {
+ goto err_irq;
+ } else if (err && DES_REG_IRQ_STATUS(dd) && DES_REG_IRQ_ENABLE(dd)) {
dd->pio_only = 1;
irq = platform_get_irq(pdev, 0);
@@ -1144,17 +1087,30 @@ static int omap_des_probe(struct platform_device *pdev)
}
}
+ /* Initialize des crypto engine */
+ dd->engine = crypto_engine_alloc_init(dev, 1);
+ if (!dd->engine)
+ goto err_algs;
+
+ dd->engine->prepare_request = omap_des_prepare_req;
+ dd->engine->crypt_one_request = omap_des_crypt_req;
+ err = crypto_engine_start(dd->engine);
+ if (err)
+ goto err_engine;
+
return 0;
+
+err_engine:
+ crypto_engine_exit(dd->engine);
err_algs:
for (i = dd->pdata->algs_info_size - 1; i >= 0; i--)
for (j = dd->pdata->algs_info[i].registered - 1; j >= 0; j--)
crypto_unregister_alg(
&dd->pdata->algs_info[i].algs_list[j]);
- if (!dd->pio_only)
- omap_des_dma_cleanup(dd);
+
+ omap_des_dma_cleanup(dd);
err_irq:
tasklet_kill(&dd->done_task);
- tasklet_kill(&dd->queue_task);
err_get:
pm_runtime_disable(dev);
err_res:
@@ -1182,7 +1138,6 @@ static int omap_des_remove(struct platform_device *pdev)
&dd->pdata->algs_info[i].algs_list[j]);
tasklet_kill(&dd->done_task);
- tasklet_kill(&dd->queue_task);
omap_des_dma_cleanup(dd);
pm_runtime_disable(dd->dev);
dd = NULL;
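
The omap-des hunks above replace the driver's private crypto_queue/tasklet plumbing with the generic crypto engine. A rough, hypothetical sketch of that flow follows (not part of the patch; names prefixed my_ are invented, and it assumes the crypto_engine API of this kernel generation exactly as used in the diff):

#include <crypto/engine.h>
#include <linux/crypto.h>

struct my_dev {
	struct device *dev;
	struct crypto_engine *engine;
	struct ablkcipher_request *req;
};

static int my_prepare(struct crypto_engine *engine,
		      struct ablkcipher_request *req)
{
	/* program key/IV, map buffers, write control registers */
	return 0;
}

static int my_crypt(struct crypto_engine *engine,
		    struct ablkcipher_request *req)
{
	/* kick the hardware; completion arrives later via IRQ/tasklet */
	return 0;
}

static int my_engine_setup(struct my_dev *dd)
{
	dd->engine = crypto_engine_alloc_init(dd->dev, 1);
	if (!dd->engine)
		return -ENOMEM;

	dd->engine->prepare_request = my_prepare;
	dd->engine->crypt_one_request = my_crypt;
	return crypto_engine_start(dd->engine);
}

/* .encrypt/.decrypt path: the engine queues and serialises requests */
static int my_queue(struct my_dev *dd, struct ablkcipher_request *req)
{
	return crypto_transfer_request_to_engine(dd->engine, req);
}

/* completion path (e.g. the done tasklet) hands the request back */
static void my_finish(struct my_dev *dd, int err)
{
	crypto_finalize_request(dd->engine, dd->req, err);
}

Because the engine handles backlog and serialisation itself, the driver's FLAGS_BUSY bookkeeping and the queue_task tasklet become redundant, which is why the hunks above delete them.
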
diff --git a/drivers/crypto/omap-sham.c b/drivers/crypto/omap-sham.c
index 48adb2a0903e..6eefaa2fe58f 100644
--- a/drivers/crypto/omap-sham.c
+++ b/drivers/crypto/omap-sham.c
@@ -29,7 +29,6 @@
#include <linux/scatterlist.h>
#include <linux/dma-mapping.h>
#include <linux/dmaengine.h>
-#include <linux/omap-dma.h>
#include <linux/pm_runtime.h>
#include <linux/of.h>
#include <linux/of_device.h>
@@ -219,7 +218,6 @@ struct omap_sham_dev {
int irq;
spinlock_t lock;
int err;
- unsigned int dma;
struct dma_chan *dma_lch;
struct tasklet_struct done_task;
u8 polling_mode;
@@ -1842,7 +1840,6 @@ static int omap_sham_get_res_of(struct omap_sham_dev *dd,
goto err;
}
- dd->dma = -1; /* Dummy value that's unused */
dd->pdata = match->data;
err:
@@ -1884,15 +1881,6 @@ static int omap_sham_get_res_pdev(struct omap_sham_dev *dd,
goto err;
}
- /* Get the DMA */
- r = platform_get_resource(pdev, IORESOURCE_DMA, 0);
- if (!r) {
- dev_err(dev, "no DMA resource info\n");
- err = -ENODEV;
- goto err;
- }
- dd->dma = r->start;
-
/* Only OMAP2/3 can be non-DT */
dd->pdata = &omap_sham_pdata_omap2;
@@ -1946,9 +1934,12 @@ static int omap_sham_probe(struct platform_device *pdev)
dma_cap_zero(mask);
dma_cap_set(DMA_SLAVE, mask);
- dd->dma_lch = dma_request_slave_channel_compat(mask, omap_dma_filter_fn,
- &dd->dma, dev, "rx");
- if (!dd->dma_lch) {
+ dd->dma_lch = dma_request_chan(dev, "rx");
+ if (IS_ERR(dd->dma_lch)) {
+ err = PTR_ERR(dd->dma_lch);
+ if (err == -EPROBE_DEFER)
+ goto data_err;
+
dd->polling_mode = 1;
dev_dbg(dev, "using polling mode instead of dma\n");
}
@@ -1995,7 +1986,7 @@ err_algs:
&dd->pdata->algs_info[i].algs_list[j]);
err_pm:
pm_runtime_disable(dev);
- if (dd->dma_lch)
+ if (!dd->polling_mode)
dma_release_channel(dd->dma_lch);
data_err:
dev_err(dev, "initialization failed.\n");
@@ -2021,7 +2012,7 @@ static int omap_sham_remove(struct platform_device *pdev)
tasklet_kill(&dd->done_task);
pm_runtime_disable(&pdev->dev);
- if (dd->dma_lch)
+ if (!dd->polling_mode)
dma_release_channel(dd->dma_lch);
return 0;
diff --git a/drivers/crypto/qat/qat_c3xxx/adf_drv.c b/drivers/crypto/qat/qat_c3xxx/adf_drv.c
index e13bd08ddd1e..640c3fc870fd 100644
--- a/drivers/crypto/qat/qat_c3xxx/adf_drv.c
+++ b/drivers/crypto/qat/qat_c3xxx/adf_drv.c
@@ -300,9 +300,7 @@ static void adf_remove(struct pci_dev *pdev)
pr_err("QAT: Driver removal failed\n");
return;
}
- if (adf_dev_stop(accel_dev))
- dev_err(&GET_DEV(accel_dev), "Failed to stop QAT accel dev\n");
-
+ adf_dev_stop(accel_dev);
adf_dev_shutdown(accel_dev);
adf_disable_aer(accel_dev);
adf_cleanup_accel(accel_dev);
diff --git a/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c b/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c
index 1af321c2ce1a..d2d0ae445fd8 100644
--- a/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c
+++ b/drivers/crypto/qat/qat_c3xxxvf/adf_c3xxxvf_hw_data.c
@@ -109,29 +109,6 @@ static void adf_vf_void_noop(struct adf_accel_dev *accel_dev)
{
}
-static int adf_vf2pf_init(struct adf_accel_dev *accel_dev)
-{
- u32 msg = (ADF_VF2PF_MSGORIGIN_SYSTEM |
- (ADF_VF2PF_MSGTYPE_INIT << ADF_VF2PF_MSGTYPE_SHIFT));
-
- if (adf_iov_putmsg(accel_dev, msg, 0)) {
- dev_err(&GET_DEV(accel_dev),
- "Failed to send Init event to PF\n");
- return -EFAULT;
- }
- return 0;
-}
-
-static void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev)
-{
- u32 msg = (ADF_VF2PF_MSGORIGIN_SYSTEM |
- (ADF_VF2PF_MSGTYPE_SHUTDOWN << ADF_VF2PF_MSGTYPE_SHIFT));
-
- if (adf_iov_putmsg(accel_dev, msg, 0))
- dev_err(&GET_DEV(accel_dev),
- "Failed to send Shutdown event to PF\n");
-}
-
void adf_init_hw_data_c3xxxiov(struct adf_hw_device_data *hw_data)
{
hw_data->dev_class = &c3xxxiov_class;
diff --git a/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c b/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
index 1ac4ae90e072..949d77b79fbe 100644
--- a/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
+++ b/drivers/crypto/qat/qat_c3xxxvf/adf_drv.c
@@ -238,6 +238,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (ret)
goto out_err_free_reg;
+ set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
+
ret = adf_dev_init(accel_dev);
if (ret)
goto out_err_dev_shutdown;
@@ -270,9 +272,7 @@ static void adf_remove(struct pci_dev *pdev)
pr_err("QAT: Driver removal failed\n");
return;
}
- if (adf_dev_stop(accel_dev))
- dev_err(&GET_DEV(accel_dev), "Failed to stop QAT accel dev\n");
-
+ adf_dev_stop(accel_dev);
adf_dev_shutdown(accel_dev);
adf_cleanup_accel(accel_dev);
adf_cleanup_pci_dev(accel_dev);
diff --git a/drivers/crypto/qat/qat_c62x/adf_drv.c b/drivers/crypto/qat/qat_c62x/adf_drv.c
index 512c56509718..bc5cbc193aae 100644
--- a/drivers/crypto/qat/qat_c62x/adf_drv.c
+++ b/drivers/crypto/qat/qat_c62x/adf_drv.c
@@ -300,9 +300,7 @@ static void adf_remove(struct pci_dev *pdev)
pr_err("QAT: Driver removal failed\n");
return;
}
- if (adf_dev_stop(accel_dev))
- dev_err(&GET_DEV(accel_dev), "Failed to stop QAT accel dev\n");
-
+ adf_dev_stop(accel_dev);
adf_dev_shutdown(accel_dev);
adf_disable_aer(accel_dev);
adf_cleanup_accel(accel_dev);
diff --git a/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c b/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c
index baf4b509c892..38e4bc04f407 100644
--- a/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c
+++ b/drivers/crypto/qat/qat_c62xvf/adf_c62xvf_hw_data.c
@@ -109,29 +109,6 @@ static void adf_vf_void_noop(struct adf_accel_dev *accel_dev)
{
}
-static int adf_vf2pf_init(struct adf_accel_dev *accel_dev)
-{
- u32 msg = (ADF_VF2PF_MSGORIGIN_SYSTEM |
- (ADF_VF2PF_MSGTYPE_INIT << ADF_VF2PF_MSGTYPE_SHIFT));
-
- if (adf_iov_putmsg(accel_dev, msg, 0)) {
- dev_err(&GET_DEV(accel_dev),
- "Failed to send Init event to PF\n");
- return -EFAULT;
- }
- return 0;
-}
-
-static void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev)
-{
- u32 msg = (ADF_VF2PF_MSGORIGIN_SYSTEM |
- (ADF_VF2PF_MSGTYPE_SHUTDOWN << ADF_VF2PF_MSGTYPE_SHIFT));
-
- if (adf_iov_putmsg(accel_dev, msg, 0))
- dev_err(&GET_DEV(accel_dev),
- "Failed to send Shutdown event to PF\n");
-}
-
void adf_init_hw_data_c62xiov(struct adf_hw_device_data *hw_data)
{
hw_data->dev_class = &c62xiov_class;
diff --git a/drivers/crypto/qat/qat_c62xvf/adf_drv.c b/drivers/crypto/qat/qat_c62xvf/adf_drv.c
index d2e4b928f3be..7540ce13b0d0 100644
--- a/drivers/crypto/qat/qat_c62xvf/adf_drv.c
+++ b/drivers/crypto/qat/qat_c62xvf/adf_drv.c
@@ -238,6 +238,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (ret)
goto out_err_free_reg;
+ set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
+
ret = adf_dev_init(accel_dev);
if (ret)
goto out_err_dev_shutdown;
@@ -270,9 +272,7 @@ static void adf_remove(struct pci_dev *pdev)
pr_err("QAT: Driver removal failed\n");
return;
}
- if (adf_dev_stop(accel_dev))
- dev_err(&GET_DEV(accel_dev), "Failed to stop QAT accel dev\n");
-
+ adf_dev_stop(accel_dev);
adf_dev_shutdown(accel_dev);
adf_cleanup_accel(accel_dev);
adf_cleanup_pci_dev(accel_dev);
diff --git a/drivers/crypto/qat/qat_common/Makefile b/drivers/crypto/qat/qat_common/Makefile
index 29c7c53d2845..6d74b91f2152 100644
--- a/drivers/crypto/qat/qat_common/Makefile
+++ b/drivers/crypto/qat/qat_common/Makefile
@@ -9,7 +9,6 @@ clean-files += qat_rsaprivkey-asn1.c qat_rsaprivkey-asn1.h
obj-$(CONFIG_CRYPTO_DEV_QAT) += intel_qat.o
intel_qat-objs := adf_cfg.o \
adf_isr.o \
- adf_vf_isr.o \
adf_ctl_drv.o \
adf_dev_mgr.o \
adf_init.o \
@@ -27,4 +26,5 @@ intel_qat-objs := adf_cfg.o \
qat_hal.o
intel_qat-$(CONFIG_DEBUG_FS) += adf_transport_debug.o
-intel_qat-$(CONFIG_PCI_IOV) += adf_sriov.o adf_pf2vf_msg.o
+intel_qat-$(CONFIG_PCI_IOV) += adf_sriov.o adf_pf2vf_msg.o \
+ adf_vf2pf_msg.o adf_vf_isr.o
diff --git a/drivers/crypto/qat/qat_common/adf_admin.c b/drivers/crypto/qat/qat_common/adf_admin.c
index eb557f69e367..ce7c4626c983 100644
--- a/drivers/crypto/qat/qat_common/adf_admin.c
+++ b/drivers/crypto/qat/qat_common/adf_admin.c
@@ -61,7 +61,7 @@
#define ADF_DH895XCC_MAILBOX_STRIDE 0x1000
#define ADF_ADMINMSG_LEN 32
-static const u8 const_tab[1024] = {
+static const u8 const_tab[1024] __aligned(1024) = {
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
diff --git a/drivers/crypto/qat/qat_common/adf_cfg_strings.h b/drivers/crypto/qat/qat_common/adf_cfg_strings.h
index 13575111382c..7632ed0f25c5 100644
--- a/drivers/crypto/qat/qat_common/adf_cfg_strings.h
+++ b/drivers/crypto/qat/qat_common/adf_cfg_strings.h
@@ -57,10 +57,8 @@
#define ADF_RING_DC_SIZE "NumConcurrentRequests"
#define ADF_RING_ASYM_TX "RingAsymTx"
#define ADF_RING_SYM_TX "RingSymTx"
-#define ADF_RING_RND_TX "RingNrbgTx"
#define ADF_RING_ASYM_RX "RingAsymRx"
#define ADF_RING_SYM_RX "RingSymRx"
-#define ADF_RING_RND_RX "RingNrbgRx"
#define ADF_RING_DC_TX "RingTx"
#define ADF_RING_DC_RX "RingRx"
#define ADF_ETRMGR_BANK "Bank"
diff --git a/drivers/crypto/qat/qat_common/adf_common_drv.h b/drivers/crypto/qat/qat_common/adf_common_drv.h
index 0e82ce3c383e..75faa39bc8d0 100644
--- a/drivers/crypto/qat/qat_common/adf_common_drv.h
+++ b/drivers/crypto/qat/qat_common/adf_common_drv.h
@@ -67,7 +67,7 @@
#define ADF_STATUS_AE_INITIALISED 4
#define ADF_STATUS_AE_UCODE_LOADED 5
#define ADF_STATUS_AE_STARTED 6
-#define ADF_STATUS_ORPHAN_TH_RUNNING 7
+#define ADF_STATUS_PF_RUNNING 7
#define ADF_STATUS_IRQ_ALLOCATED 8
enum adf_dev_reset_mode {
@@ -103,7 +103,7 @@ int adf_service_unregister(struct service_hndl *service);
int adf_dev_init(struct adf_accel_dev *accel_dev);
int adf_dev_start(struct adf_accel_dev *accel_dev);
-int adf_dev_stop(struct adf_accel_dev *accel_dev);
+void adf_dev_stop(struct adf_accel_dev *accel_dev);
void adf_dev_shutdown(struct adf_accel_dev *accel_dev);
int adf_iov_putmsg(struct adf_accel_dev *accel_dev, u32 msg, u8 vf_nr);
@@ -236,6 +236,13 @@ void adf_enable_vf2pf_interrupts(struct adf_accel_dev *accel_dev,
uint32_t vf_mask);
void adf_enable_pf2vf_interrupts(struct adf_accel_dev *accel_dev);
void adf_disable_pf2vf_interrupts(struct adf_accel_dev *accel_dev);
+
+int adf_vf2pf_init(struct adf_accel_dev *accel_dev);
+void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev);
+int adf_init_pf_wq(void);
+void adf_exit_pf_wq(void);
+int adf_init_vf_wq(void);
+void adf_exit_vf_wq(void);
#else
static inline int adf_sriov_configure(struct pci_dev *pdev, int numvfs)
{
@@ -253,5 +260,33 @@ static inline void adf_enable_pf2vf_interrupts(struct adf_accel_dev *accel_dev)
static inline void adf_disable_pf2vf_interrupts(struct adf_accel_dev *accel_dev)
{
}
+
+static inline int adf_vf2pf_init(struct adf_accel_dev *accel_dev)
+{
+ return 0;
+}
+
+static inline void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev)
+{
+}
+
+static inline int adf_init_pf_wq(void)
+{
+ return 0;
+}
+
+static inline void adf_exit_pf_wq(void)
+{
+}
+
+static inline int adf_init_vf_wq(void)
+{
+ return 0;
+}
+
+static inline void adf_exit_vf_wq(void)
+{
+}
+
#endif
#endif
diff --git a/drivers/crypto/qat/qat_common/adf_ctl_drv.c b/drivers/crypto/qat/qat_common/adf_ctl_drv.c
index 5c897e6e7994..abc7a7f64d64 100644
--- a/drivers/crypto/qat/qat_common/adf_ctl_drv.c
+++ b/drivers/crypto/qat/qat_common/adf_ctl_drv.c
@@ -270,26 +270,33 @@ static int adf_ctl_is_device_in_use(int id)
return 0;
}
-static int adf_ctl_stop_devices(uint32_t id)
+static void adf_ctl_stop_devices(uint32_t id)
{
struct adf_accel_dev *accel_dev;
- int ret = 0;
- list_for_each_entry_reverse(accel_dev, adf_devmgr_get_head(), list) {
+ list_for_each_entry(accel_dev, adf_devmgr_get_head(), list) {
if (id == accel_dev->accel_id || id == ADF_CFG_ALL_DEVICES) {
if (!adf_dev_started(accel_dev))
continue;
- if (adf_dev_stop(accel_dev)) {
- dev_err(&GET_DEV(accel_dev),
- "Failed to stop qat_dev%d\n", id);
- ret = -EFAULT;
- } else {
- adf_dev_shutdown(accel_dev);
- }
+ /* First stop all VFs */
+ if (!accel_dev->is_vf)
+ continue;
+
+ adf_dev_stop(accel_dev);
+ adf_dev_shutdown(accel_dev);
+ }
+ }
+
+ list_for_each_entry(accel_dev, adf_devmgr_get_head(), list) {
+ if (id == accel_dev->accel_id || id == ADF_CFG_ALL_DEVICES) {
+ if (!adf_dev_started(accel_dev))
+ continue;
+
+ adf_dev_stop(accel_dev);
+ adf_dev_shutdown(accel_dev);
}
}
- return ret;
}
static int adf_ctl_ioctl_dev_stop(struct file *fp, unsigned int cmd,
@@ -318,9 +325,8 @@ static int adf_ctl_ioctl_dev_stop(struct file *fp, unsigned int cmd,
pr_info("QAT: Stopping acceleration device qat_dev%d.\n",
ctl_data->device_id);
- ret = adf_ctl_stop_devices(ctl_data->device_id);
- if (ret)
- pr_err("QAT: failed to stop device.\n");
+ adf_ctl_stop_devices(ctl_data->device_id);
+
out:
kfree(ctl_data);
return ret;
@@ -462,12 +468,22 @@ static int __init adf_register_ctl_device_driver(void)
if (adf_init_aer())
goto err_aer;
+ if (adf_init_pf_wq())
+ goto err_pf_wq;
+
+ if (adf_init_vf_wq())
+ goto err_vf_wq;
+
if (qat_crypto_register())
goto err_crypto_register;
return 0;
err_crypto_register:
+ adf_exit_vf_wq();
+err_vf_wq:
+ adf_exit_pf_wq();
+err_pf_wq:
adf_exit_aer();
err_aer:
adf_chr_drv_destroy();
@@ -480,6 +496,8 @@ static void __exit adf_unregister_ctl_device_driver(void)
{
adf_chr_drv_destroy();
adf_exit_aer();
+ adf_exit_vf_wq();
+ adf_exit_pf_wq();
qat_crypto_unregister();
adf_clean_vf_map(false);
mutex_destroy(&adf_ctl_lock);
diff --git a/drivers/crypto/qat/qat_common/adf_init.c b/drivers/crypto/qat/qat_common/adf_init.c
index ef5575e4a215..888c6675e7e5 100644
--- a/drivers/crypto/qat/qat_common/adf_init.c
+++ b/drivers/crypto/qat/qat_common/adf_init.c
@@ -236,9 +236,9 @@ EXPORT_SYMBOL_GPL(adf_dev_start);
* is shutting down.
* To be used by QAT device specific drivers.
*
- * Return: 0 on success, error code otherwise.
+ * Return: void
*/
-int adf_dev_stop(struct adf_accel_dev *accel_dev)
+void adf_dev_stop(struct adf_accel_dev *accel_dev)
{
struct service_hndl *service;
struct list_head *list_itr;
@@ -246,9 +246,9 @@ int adf_dev_stop(struct adf_accel_dev *accel_dev)
int ret;
if (!adf_dev_started(accel_dev) &&
- !test_bit(ADF_STATUS_STARTING, &accel_dev->status)) {
- return 0;
- }
+ !test_bit(ADF_STATUS_STARTING, &accel_dev->status))
+ return;
+
clear_bit(ADF_STATUS_STARTING, &accel_dev->status);
clear_bit(ADF_STATUS_STARTED, &accel_dev->status);
@@ -279,8 +279,6 @@ int adf_dev_stop(struct adf_accel_dev *accel_dev)
else
clear_bit(ADF_STATUS_AE_STARTED, &accel_dev->status);
}
-
- return 0;
}
EXPORT_SYMBOL_GPL(adf_dev_stop);
@@ -329,6 +327,8 @@ void adf_dev_shutdown(struct adf_accel_dev *accel_dev)
clear_bit(accel_dev->accel_id, &service->init_status);
}
+ hw_data->disable_iov(accel_dev);
+
if (test_bit(ADF_STATUS_IRQ_ALLOCATED, &accel_dev->status)) {
hw_data->free_irq(accel_dev);
clear_bit(ADF_STATUS_IRQ_ALLOCATED, &accel_dev->status);
@@ -344,7 +344,6 @@ void adf_dev_shutdown(struct adf_accel_dev *accel_dev)
if (hw_data->exit_admin_comms)
hw_data->exit_admin_comms(accel_dev);
- hw_data->disable_iov(accel_dev);
adf_cleanup_etr_data(accel_dev);
adf_dev_restore(accel_dev);
}
diff --git a/drivers/crypto/qat/qat_common/adf_isr.c b/drivers/crypto/qat/qat_common/adf_isr.c
index b81f79acc4ea..06d49017a52b 100644
--- a/drivers/crypto/qat/qat_common/adf_isr.c
+++ b/drivers/crypto/qat/qat_common/adf_isr.c
@@ -302,7 +302,7 @@ static void adf_cleanup_bh(struct adf_accel_dev *accel_dev)
}
/**
- * adf_vf_isr_resource_free() - Free IRQ for acceleration device
+ * adf_isr_resource_free() - Free IRQ for acceleration device
* @accel_dev: Pointer to acceleration device.
*
* Function frees interrupts for acceleration device.
@@ -317,7 +317,7 @@ void adf_isr_resource_free(struct adf_accel_dev *accel_dev)
EXPORT_SYMBOL_GPL(adf_isr_resource_free);
/**
- * adf_vf_isr_resource_alloc() - Allocate IRQ for acceleration device
+ * adf_isr_resource_alloc() - Allocate IRQ for acceleration device
* @accel_dev: Pointer to acceleration device.
*
* Function allocates interrupts for acceleration device.
diff --git a/drivers/crypto/qat/qat_common/adf_sriov.c b/drivers/crypto/qat/qat_common/adf_sriov.c
index 1117a8b58280..4a526e2f1d7f 100644
--- a/drivers/crypto/qat/qat_common/adf_sriov.c
+++ b/drivers/crypto/qat/qat_common/adf_sriov.c
@@ -119,11 +119,6 @@ static int adf_enable_sriov(struct adf_accel_dev *accel_dev)
int i;
u32 reg;
- /* Workqueue for PF2VF responses */
- pf2vf_resp_wq = create_workqueue("qat_pf2vf_resp_wq");
- if (!pf2vf_resp_wq)
- return -ENOMEM;
-
for (i = 0, vf_info = accel_dev->pf.vf_info; i < totalvfs;
i++, vf_info++) {
/* This ptr will be populated when VFs will be created */
@@ -216,11 +211,6 @@ void adf_disable_sriov(struct adf_accel_dev *accel_dev)
kfree(accel_dev->pf.vf_info);
accel_dev->pf.vf_info = NULL;
-
- if (pf2vf_resp_wq) {
- destroy_workqueue(pf2vf_resp_wq);
- pf2vf_resp_wq = NULL;
- }
}
EXPORT_SYMBOL_GPL(adf_disable_sriov);
@@ -259,13 +249,7 @@ int adf_sriov_configure(struct pci_dev *pdev, int numvfs)
return -EBUSY;
}
- if (adf_dev_stop(accel_dev)) {
- dev_err(&GET_DEV(accel_dev),
- "Failed to stop qat_dev%d\n",
- accel_dev->accel_id);
- return -EFAULT;
- }
-
+ adf_dev_stop(accel_dev);
adf_dev_shutdown(accel_dev);
}
@@ -304,3 +288,19 @@ int adf_sriov_configure(struct pci_dev *pdev, int numvfs)
return numvfs;
}
EXPORT_SYMBOL_GPL(adf_sriov_configure);
+
+int __init adf_init_pf_wq(void)
+{
+ /* Workqueue for PF2VF responses */
+ pf2vf_resp_wq = create_workqueue("qat_pf2vf_resp_wq");
+
+ return !pf2vf_resp_wq ? -ENOMEM : 0;
+}
+
+void adf_exit_pf_wq(void)
+{
+ if (pf2vf_resp_wq) {
+ destroy_workqueue(pf2vf_resp_wq);
+ pf2vf_resp_wq = NULL;
+ }
+}
diff --git a/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c b/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c
new file mode 100644
index 000000000000..cd5f37dffe8a
--- /dev/null
+++ b/drivers/crypto/qat/qat_common/adf_vf2pf_msg.c
@@ -0,0 +1,92 @@
+/*
+ This file is provided under a dual BSD/GPLv2 license. When using or
+ redistributing this file, you may do so under either license.
+
+ GPL LICENSE SUMMARY
+ Copyright(c) 2015 Intel Corporation.
+ This program is free software; you can redistribute it and/or modify
+ it under the terms of version 2 of the GNU General Public License as
+ published by the Free Software Foundation.
+
+ This program is distributed in the hope that it will be useful, but
+ WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ General Public License for more details.
+
+ Contact Information:
+ qat-linux@intel.com
+
+ BSD LICENSE
+ Copyright(c) 2015 Intel Corporation.
+ Redistribution and use in source and binary forms, with or without
+ modification, are permitted provided that the following conditions
+ are met:
+
+ * Redistributions of source code must retain the above copyright
+ notice, this list of conditions and the following disclaimer.
+ * Redistributions in binary form must reproduce the above copyright
+ notice, this list of conditions and the following disclaimer in
+ the documentation and/or other materials provided with the
+ distribution.
+ * Neither the name of Intel Corporation nor the names of its
+ contributors may be used to endorse or promote products derived
+ from this software without specific prior written permission.
+
+ THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+*/
+#include "adf_accel_devices.h"
+#include "adf_common_drv.h"
+#include "adf_pf2vf_msg.h"
+
+/**
+ * adf_vf2pf_init() - send init msg to PF
+ * @accel_dev: Pointer to acceleration VF device.
+ *
+ * Function sends an init message from the VF to a PF
+ *
+ * Return: 0 on success, error code otherwise.
+ */
+int adf_vf2pf_init(struct adf_accel_dev *accel_dev)
+{
+ u32 msg = (ADF_VF2PF_MSGORIGIN_SYSTEM |
+ (ADF_VF2PF_MSGTYPE_INIT << ADF_VF2PF_MSGTYPE_SHIFT));
+
+ if (adf_iov_putmsg(accel_dev, msg, 0)) {
+ dev_err(&GET_DEV(accel_dev),
+ "Failed to send Init event to PF\n");
+ return -EFAULT;
+ }
+ set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(adf_vf2pf_init);
+
+/**
+ * adf_vf2pf_shutdown() - send shutdown msg to PF
+ * @accel_dev: Pointer to acceleration VF device.
+ *
+ * Function sends a shutdown message from the VF to a PF
+ *
+ * Return: void
+ */
+void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev)
+{
+ u32 msg = (ADF_VF2PF_MSGORIGIN_SYSTEM |
+ (ADF_VF2PF_MSGTYPE_SHUTDOWN << ADF_VF2PF_MSGTYPE_SHIFT));
+
+ if (test_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status))
+ if (adf_iov_putmsg(accel_dev, msg, 0))
+ dev_err(&GET_DEV(accel_dev),
+ "Failed to send Shutdown event to PF\n");
+}
+EXPORT_SYMBOL_GPL(adf_vf2pf_shutdown);
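
A short usage sketch (not part of the patch) of the two helpers this new file exports, as VF drivers are expected to call them; the example_* names are hypothetical:

#include "adf_accel_devices.h"
#include "adf_common_drv.h"

/*
 * VF bring-up: announce the VF to the PF. On success this also sets
 * ADF_STATUS_PF_RUNNING in accel_dev->status.
 */
static int example_vf_start(struct adf_accel_dev *accel_dev)
{
	return adf_vf2pf_init(accel_dev);
}

/*
 * VF teardown: adf_vf2pf_shutdown() only messages the PF while
 * ADF_STATUS_PF_RUNNING is set, so a VF stopped because the PF sent
 * RESTARTING (which clears the bit first) stays silent.
 */
static void example_vf_stop(struct adf_accel_dev *accel_dev)
{
	adf_vf2pf_shutdown(accel_dev);
}
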
diff --git a/drivers/crypto/qat/qat_common/adf_vf_isr.c b/drivers/crypto/qat/qat_common/adf_vf_isr.c
index 09427b3d4d55..aa689cabedb4 100644
--- a/drivers/crypto/qat/qat_common/adf_vf_isr.c
+++ b/drivers/crypto/qat/qat_common/adf_vf_isr.c
@@ -51,6 +51,7 @@
#include <linux/slab.h>
#include <linux/errno.h>
#include <linux/interrupt.h>
+#include <linux/workqueue.h>
#include "adf_accel_devices.h"
#include "adf_common_drv.h"
#include "adf_cfg.h"
@@ -64,6 +65,13 @@
#define ADF_VINTSOU_BUN BIT(0)
#define ADF_VINTSOU_PF2VF BIT(1)
+static struct workqueue_struct *adf_vf_stop_wq;
+
+struct adf_vf_stop_data {
+ struct adf_accel_dev *accel_dev;
+ struct work_struct work;
+};
+
static int adf_enable_msi(struct adf_accel_dev *accel_dev)
{
struct adf_accel_pci *pci_dev_info = &accel_dev->accel_pci_dev;
@@ -90,6 +98,20 @@ static void adf_disable_msi(struct adf_accel_dev *accel_dev)
pci_disable_msi(pdev);
}
+static void adf_dev_stop_async(struct work_struct *work)
+{
+ struct adf_vf_stop_data *stop_data =
+ container_of(work, struct adf_vf_stop_data, work);
+ struct adf_accel_dev *accel_dev = stop_data->accel_dev;
+
+ adf_dev_stop(accel_dev);
+ adf_dev_shutdown(accel_dev);
+
+ /* Re-enable PF2VF interrupts */
+ adf_enable_pf2vf_interrupts(accel_dev);
+ kfree(stop_data);
+}
+
static void adf_pf2vf_bh_handler(void *data)
{
struct adf_accel_dev *accel_dev = data;
@@ -107,11 +129,29 @@ static void adf_pf2vf_bh_handler(void *data)
goto err;
switch ((msg & ADF_PF2VF_MSGTYPE_MASK) >> ADF_PF2VF_MSGTYPE_SHIFT) {
- case ADF_PF2VF_MSGTYPE_RESTARTING:
+ case ADF_PF2VF_MSGTYPE_RESTARTING: {
+ struct adf_vf_stop_data *stop_data;
+
dev_dbg(&GET_DEV(accel_dev),
"Restarting msg received from PF 0x%x\n", msg);
- adf_dev_stop(accel_dev);
- break;
+
+ clear_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
+
+ stop_data = kzalloc(sizeof(*stop_data), GFP_ATOMIC);
+ if (!stop_data) {
+ dev_err(&GET_DEV(accel_dev),
+ "Couldn't schedule stop for vf_%d\n",
+ accel_dev->accel_id);
+ return;
+ }
+ stop_data->accel_dev = accel_dev;
+ INIT_WORK(&stop_data->work, adf_dev_stop_async);
+ queue_work(adf_vf_stop_wq, &stop_data->work);
+ /* To ack, clear the PF2VFINT bit */
+ msg &= ~BIT(0);
+ ADF_CSR_WR(pmisc_bar_addr, hw_data->get_pf2vf_offset(0), msg);
+ return;
+ }
case ADF_PF2VF_MSGTYPE_VERSION_RESP:
dev_dbg(&GET_DEV(accel_dev),
"Version resp received from PF 0x%x\n", msg);
@@ -278,3 +318,18 @@ err_out:
return -EFAULT;
}
EXPORT_SYMBOL_GPL(adf_vf_isr_resource_alloc);
+
+int __init adf_init_vf_wq(void)
+{
+ adf_vf_stop_wq = create_workqueue("adf_vf_stop_wq");
+
+ return !adf_vf_stop_wq ? -EFAULT : 0;
+}
+
+void adf_exit_vf_wq(void)
+{
+ if (adf_vf_stop_wq)
+ destroy_workqueue(adf_vf_stop_wq);
+
+ adf_vf_stop_wq = NULL;
+}
diff --git a/drivers/crypto/qat/qat_common/qat_asym_algs.c b/drivers/crypto/qat/qat_common/qat_asym_algs.c
index e5c0727d4876..05f49d4f94b2 100644
--- a/drivers/crypto/qat/qat_common/qat_asym_algs.c
+++ b/drivers/crypto/qat/qat_common/qat_asym_algs.c
@@ -593,7 +593,7 @@ int qat_rsa_get_d(void *context, size_t hdrlen, unsigned char tag,
ret = -ENOMEM;
ctx->d = dma_zalloc_coherent(dev, ctx->key_sz, &ctx->dma_d, GFP_KERNEL);
- if (!ctx->n)
+ if (!ctx->d)
goto err;
memcpy(ctx->d + (ctx->key_sz - vlen), ptr, vlen);
@@ -711,7 +711,7 @@ static void qat_rsa_exit_tfm(struct crypto_akcipher *tfm)
}
qat_crypto_put_instance(ctx->inst);
ctx->n = NULL;
- ctx->d = NULL;
+ ctx->e = NULL;
ctx->d = NULL;
}
diff --git a/drivers/crypto/qat/qat_dh895xcc/adf_drv.c b/drivers/crypto/qat/qat_dh895xcc/adf_drv.c
index a8c4b92a7cbd..4d2de2838451 100644
--- a/drivers/crypto/qat/qat_dh895xcc/adf_drv.c
+++ b/drivers/crypto/qat/qat_dh895xcc/adf_drv.c
@@ -302,9 +302,7 @@ static void adf_remove(struct pci_dev *pdev)
pr_err("QAT: Driver removal failed\n");
return;
}
- if (adf_dev_stop(accel_dev))
- dev_err(&GET_DEV(accel_dev), "Failed to stop QAT accel dev\n");
-
+ adf_dev_stop(accel_dev);
adf_dev_shutdown(accel_dev);
adf_disable_aer(accel_dev);
adf_cleanup_accel(accel_dev);
diff --git a/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c b/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c
index dc04ab68d24d..a3b4dd8099a7 100644
--- a/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c
+++ b/drivers/crypto/qat/qat_dh895xccvf/adf_dh895xccvf_hw_data.c
@@ -109,29 +109,6 @@ static void adf_vf_void_noop(struct adf_accel_dev *accel_dev)
{
}
-static int adf_vf2pf_init(struct adf_accel_dev *accel_dev)
-{
- u32 msg = (ADF_VF2PF_MSGORIGIN_SYSTEM |
- (ADF_VF2PF_MSGTYPE_INIT << ADF_VF2PF_MSGTYPE_SHIFT));
-
- if (adf_iov_putmsg(accel_dev, msg, 0)) {
- dev_err(&GET_DEV(accel_dev),
- "Failed to send Init event to PF\n");
- return -EFAULT;
- }
- return 0;
-}
-
-static void adf_vf2pf_shutdown(struct adf_accel_dev *accel_dev)
-{
- u32 msg = (ADF_VF2PF_MSGORIGIN_SYSTEM |
- (ADF_VF2PF_MSGTYPE_SHUTDOWN << ADF_VF2PF_MSGTYPE_SHIFT));
-
- if (adf_iov_putmsg(accel_dev, msg, 0))
- dev_err(&GET_DEV(accel_dev),
- "Failed to send Shutdown event to PF\n");
-}
-
void adf_init_hw_data_dh895xcciov(struct adf_hw_device_data *hw_data)
{
hw_data->dev_class = &dh895xcciov_class;
diff --git a/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c b/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
index f8cc4bf0a50c..60df98632fa2 100644
--- a/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
+++ b/drivers/crypto/qat/qat_dh895xccvf/adf_drv.c
@@ -238,6 +238,8 @@ static int adf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (ret)
goto out_err_free_reg;
+ set_bit(ADF_STATUS_PF_RUNNING, &accel_dev->status);
+
ret = adf_dev_init(accel_dev);
if (ret)
goto out_err_dev_shutdown;
@@ -270,9 +272,7 @@ static void adf_remove(struct pci_dev *pdev)
pr_err("QAT: Driver removal failed\n");
return;
}
- if (adf_dev_stop(accel_dev))
- dev_err(&GET_DEV(accel_dev), "Failed to stop QAT accel dev\n");
-
+ adf_dev_stop(accel_dev);
adf_dev_shutdown(accel_dev);
adf_cleanup_accel(accel_dev);
adf_cleanup_pci_dev(accel_dev);
diff --git a/drivers/crypto/s5p-sss.c b/drivers/crypto/s5p-sss.c
index 5f161a9777e3..2b3a0cfe3331 100644
--- a/drivers/crypto/s5p-sss.c
+++ b/drivers/crypto/s5p-sss.c
@@ -11,65 +11,64 @@
*
*/
-#include <linux/delay.h>
+#include <linux/clk.h>
+#include <linux/crypto.h>
+#include <linux/dma-mapping.h>
#include <linux/err.h>
-#include <linux/module.h>
-#include <linux/init.h>
#include <linux/errno.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
#include <linux/kernel.h>
-#include <linux/clk.h>
+#include <linux/module.h>
+#include <linux/of.h>
#include <linux/platform_device.h>
#include <linux/scatterlist.h>
-#include <linux/dma-mapping.h>
-#include <linux/io.h>
-#include <linux/of.h>
-#include <linux/crypto.h>
-#include <linux/interrupt.h>
-#include <crypto/algapi.h>
-#include <crypto/aes.h>
#include <crypto/ctr.h>
+#include <crypto/aes.h>
+#include <crypto/algapi.h>
+#include <crypto/scatterwalk.h>
#define _SBF(s, v) ((v) << (s))
-#define _BIT(b) _SBF(b, 1)
/* Feed control registers */
#define SSS_REG_FCINTSTAT 0x0000
-#define SSS_FCINTSTAT_BRDMAINT _BIT(3)
-#define SSS_FCINTSTAT_BTDMAINT _BIT(2)
-#define SSS_FCINTSTAT_HRDMAINT _BIT(1)
-#define SSS_FCINTSTAT_PKDMAINT _BIT(0)
+#define SSS_FCINTSTAT_BRDMAINT BIT(3)
+#define SSS_FCINTSTAT_BTDMAINT BIT(2)
+#define SSS_FCINTSTAT_HRDMAINT BIT(1)
+#define SSS_FCINTSTAT_PKDMAINT BIT(0)
#define SSS_REG_FCINTENSET 0x0004
-#define SSS_FCINTENSET_BRDMAINTENSET _BIT(3)
-#define SSS_FCINTENSET_BTDMAINTENSET _BIT(2)
-#define SSS_FCINTENSET_HRDMAINTENSET _BIT(1)
-#define SSS_FCINTENSET_PKDMAINTENSET _BIT(0)
+#define SSS_FCINTENSET_BRDMAINTENSET BIT(3)
+#define SSS_FCINTENSET_BTDMAINTENSET BIT(2)
+#define SSS_FCINTENSET_HRDMAINTENSET BIT(1)
+#define SSS_FCINTENSET_PKDMAINTENSET BIT(0)
#define SSS_REG_FCINTENCLR 0x0008
-#define SSS_FCINTENCLR_BRDMAINTENCLR _BIT(3)
-#define SSS_FCINTENCLR_BTDMAINTENCLR _BIT(2)
-#define SSS_FCINTENCLR_HRDMAINTENCLR _BIT(1)
-#define SSS_FCINTENCLR_PKDMAINTENCLR _BIT(0)
+#define SSS_FCINTENCLR_BRDMAINTENCLR BIT(3)
+#define SSS_FCINTENCLR_BTDMAINTENCLR BIT(2)
+#define SSS_FCINTENCLR_HRDMAINTENCLR BIT(1)
+#define SSS_FCINTENCLR_PKDMAINTENCLR BIT(0)
#define SSS_REG_FCINTPEND 0x000C
-#define SSS_FCINTPEND_BRDMAINTP _BIT(3)
-#define SSS_FCINTPEND_BTDMAINTP _BIT(2)
-#define SSS_FCINTPEND_HRDMAINTP _BIT(1)
-#define SSS_FCINTPEND_PKDMAINTP _BIT(0)
+#define SSS_FCINTPEND_BRDMAINTP BIT(3)
+#define SSS_FCINTPEND_BTDMAINTP BIT(2)
+#define SSS_FCINTPEND_HRDMAINTP BIT(1)
+#define SSS_FCINTPEND_PKDMAINTP BIT(0)
#define SSS_REG_FCFIFOSTAT 0x0010
-#define SSS_FCFIFOSTAT_BRFIFOFUL _BIT(7)
-#define SSS_FCFIFOSTAT_BRFIFOEMP _BIT(6)
-#define SSS_FCFIFOSTAT_BTFIFOFUL _BIT(5)
-#define SSS_FCFIFOSTAT_BTFIFOEMP _BIT(4)
-#define SSS_FCFIFOSTAT_HRFIFOFUL _BIT(3)
-#define SSS_FCFIFOSTAT_HRFIFOEMP _BIT(2)
-#define SSS_FCFIFOSTAT_PKFIFOFUL _BIT(1)
-#define SSS_FCFIFOSTAT_PKFIFOEMP _BIT(0)
+#define SSS_FCFIFOSTAT_BRFIFOFUL BIT(7)
+#define SSS_FCFIFOSTAT_BRFIFOEMP BIT(6)
+#define SSS_FCFIFOSTAT_BTFIFOFUL BIT(5)
+#define SSS_FCFIFOSTAT_BTFIFOEMP BIT(4)
+#define SSS_FCFIFOSTAT_HRFIFOFUL BIT(3)
+#define SSS_FCFIFOSTAT_HRFIFOEMP BIT(2)
+#define SSS_FCFIFOSTAT_PKFIFOFUL BIT(1)
+#define SSS_FCFIFOSTAT_PKFIFOEMP BIT(0)
#define SSS_REG_FCFIFOCTRL 0x0014
-#define SSS_FCFIFOCTRL_DESSEL _BIT(2)
+#define SSS_FCFIFOCTRL_DESSEL BIT(2)
#define SSS_HASHIN_INDEPENDENT _SBF(0, 0x00)
#define SSS_HASHIN_CIPHER_INPUT _SBF(0, 0x01)
#define SSS_HASHIN_CIPHER_OUTPUT _SBF(0, 0x02)
@@ -77,52 +76,52 @@
#define SSS_REG_FCBRDMAS 0x0020
#define SSS_REG_FCBRDMAL 0x0024
#define SSS_REG_FCBRDMAC 0x0028
-#define SSS_FCBRDMAC_BYTESWAP _BIT(1)
-#define SSS_FCBRDMAC_FLUSH _BIT(0)
+#define SSS_FCBRDMAC_BYTESWAP BIT(1)
+#define SSS_FCBRDMAC_FLUSH BIT(0)
#define SSS_REG_FCBTDMAS 0x0030
#define SSS_REG_FCBTDMAL 0x0034
#define SSS_REG_FCBTDMAC 0x0038
-#define SSS_FCBTDMAC_BYTESWAP _BIT(1)
-#define SSS_FCBTDMAC_FLUSH _BIT(0)
+#define SSS_FCBTDMAC_BYTESWAP BIT(1)
+#define SSS_FCBTDMAC_FLUSH BIT(0)
#define SSS_REG_FCHRDMAS 0x0040
#define SSS_REG_FCHRDMAL 0x0044
#define SSS_REG_FCHRDMAC 0x0048
-#define SSS_FCHRDMAC_BYTESWAP _BIT(1)
-#define SSS_FCHRDMAC_FLUSH _BIT(0)
+#define SSS_FCHRDMAC_BYTESWAP BIT(1)
+#define SSS_FCHRDMAC_FLUSH BIT(0)
#define SSS_REG_FCPKDMAS 0x0050
#define SSS_REG_FCPKDMAL 0x0054
#define SSS_REG_FCPKDMAC 0x0058
-#define SSS_FCPKDMAC_BYTESWAP _BIT(3)
-#define SSS_FCPKDMAC_DESCEND _BIT(2)
-#define SSS_FCPKDMAC_TRANSMIT _BIT(1)
-#define SSS_FCPKDMAC_FLUSH _BIT(0)
+#define SSS_FCPKDMAC_BYTESWAP BIT(3)
+#define SSS_FCPKDMAC_DESCEND BIT(2)
+#define SSS_FCPKDMAC_TRANSMIT BIT(1)
+#define SSS_FCPKDMAC_FLUSH BIT(0)
#define SSS_REG_FCPKDMAO 0x005C
/* AES registers */
#define SSS_REG_AES_CONTROL 0x00
-#define SSS_AES_BYTESWAP_DI _BIT(11)
-#define SSS_AES_BYTESWAP_DO _BIT(10)
-#define SSS_AES_BYTESWAP_IV _BIT(9)
-#define SSS_AES_BYTESWAP_CNT _BIT(8)
-#define SSS_AES_BYTESWAP_KEY _BIT(7)
-#define SSS_AES_KEY_CHANGE_MODE _BIT(6)
+#define SSS_AES_BYTESWAP_DI BIT(11)
+#define SSS_AES_BYTESWAP_DO BIT(10)
+#define SSS_AES_BYTESWAP_IV BIT(9)
+#define SSS_AES_BYTESWAP_CNT BIT(8)
+#define SSS_AES_BYTESWAP_KEY BIT(7)
+#define SSS_AES_KEY_CHANGE_MODE BIT(6)
#define SSS_AES_KEY_SIZE_128 _SBF(4, 0x00)
#define SSS_AES_KEY_SIZE_192 _SBF(4, 0x01)
#define SSS_AES_KEY_SIZE_256 _SBF(4, 0x02)
-#define SSS_AES_FIFO_MODE _BIT(3)
+#define SSS_AES_FIFO_MODE BIT(3)
#define SSS_AES_CHAIN_MODE_ECB _SBF(1, 0x00)
#define SSS_AES_CHAIN_MODE_CBC _SBF(1, 0x01)
#define SSS_AES_CHAIN_MODE_CTR _SBF(1, 0x02)
-#define SSS_AES_MODE_DECRYPT _BIT(0)
+#define SSS_AES_MODE_DECRYPT BIT(0)
#define SSS_REG_AES_STATUS 0x04
-#define SSS_AES_BUSY _BIT(2)
-#define SSS_AES_INPUT_READY _BIT(1)
-#define SSS_AES_OUTPUT_READY _BIT(0)
+#define SSS_AES_BUSY BIT(2)
+#define SSS_AES_INPUT_READY BIT(1)
+#define SSS_AES_OUTPUT_READY BIT(0)
#define SSS_REG_AES_IN_DATA(s) (0x10 + (s << 2))
#define SSS_REG_AES_OUT_DATA(s) (0x20 + (s << 2))
@@ -139,7 +138,7 @@
SSS_AES_REG(dev, reg))
/* HW engine modes */
-#define FLAGS_AES_DECRYPT _BIT(0)
+#define FLAGS_AES_DECRYPT BIT(0)
#define FLAGS_AES_MODE_MASK _SBF(1, 0x03)
#define FLAGS_AES_CBC _SBF(1, 0x01)
#define FLAGS_AES_CTR _SBF(1, 0x02)
@@ -149,7 +148,6 @@
/**
* struct samsung_aes_variant - platform specific SSS driver data
- * @has_hash_irq: true if SSS module uses hash interrupt, false otherwise
* @aes_offset: AES register offset from SSS module's base.
*
* Specifies platform specific configuration of SSS module.
@@ -157,7 +155,6 @@
* expansion of its usage.
*/
struct samsung_aes_variant {
- bool has_hash_irq;
unsigned int aes_offset;
};
@@ -178,7 +175,6 @@ struct s5p_aes_dev {
struct clk *clk;
void __iomem *ioaddr;
void __iomem *aes_ioaddr;
- int irq_hash;
int irq_fc;
struct ablkcipher_request *req;
@@ -186,6 +182,10 @@ struct s5p_aes_dev {
struct scatterlist *sg_src;
struct scatterlist *sg_dst;
+ /* In case of unaligned access: */
+ struct scatterlist *sg_src_cpy;
+ struct scatterlist *sg_dst_cpy;
+
struct tasklet_struct tasklet;
struct crypto_queue queue;
bool busy;
@@ -197,12 +197,10 @@ struct s5p_aes_dev {
static struct s5p_aes_dev *s5p_dev;
static const struct samsung_aes_variant s5p_aes_data = {
- .has_hash_irq = true,
.aes_offset = 0x4000,
};
static const struct samsung_aes_variant exynos_aes_data = {
- .has_hash_irq = false,
.aes_offset = 0x200,
};
@@ -245,8 +243,45 @@ static void s5p_set_dma_outdata(struct s5p_aes_dev *dev, struct scatterlist *sg)
SSS_WRITE(dev, FCBTDMAL, sg_dma_len(sg));
}
+static void s5p_free_sg_cpy(struct s5p_aes_dev *dev, struct scatterlist **sg)
+{
+ int len;
+
+ if (!*sg)
+ return;
+
+ len = ALIGN(dev->req->nbytes, AES_BLOCK_SIZE);
+ free_pages((unsigned long)sg_virt(*sg), get_order(len));
+
+ kfree(*sg);
+ *sg = NULL;
+}
+
+static void s5p_sg_copy_buf(void *buf, struct scatterlist *sg,
+ unsigned int nbytes, int out)
+{
+ struct scatter_walk walk;
+
+ if (!nbytes)
+ return;
+
+ scatterwalk_start(&walk, sg);
+ scatterwalk_copychunks(buf, &walk, nbytes, out);
+ scatterwalk_done(&walk, out, 0);
+}
+
static void s5p_aes_complete(struct s5p_aes_dev *dev, int err)
{
+ if (dev->sg_dst_cpy) {
+ dev_dbg(dev->dev,
+ "Copying %d bytes of output data back to original place\n",
+ dev->req->nbytes);
+ s5p_sg_copy_buf(sg_virt(dev->sg_dst_cpy), dev->req->dst,
+ dev->req->nbytes, 1);
+ }
+ s5p_free_sg_cpy(dev, &dev->sg_src_cpy);
+ s5p_free_sg_cpy(dev, &dev->sg_dst_cpy);
+
/* holding a lock outside */
dev->req->base.complete(&dev->req->base, err);
dev->busy = false;
@@ -262,15 +297,37 @@ static void s5p_unset_indata(struct s5p_aes_dev *dev)
dma_unmap_sg(dev->dev, dev->sg_src, 1, DMA_TO_DEVICE);
}
+static int s5p_make_sg_cpy(struct s5p_aes_dev *dev, struct scatterlist *src,
+ struct scatterlist **dst)
+{
+ void *pages;
+ int len;
+
+ *dst = kmalloc(sizeof(**dst), GFP_ATOMIC);
+ if (!*dst)
+ return -ENOMEM;
+
+ len = ALIGN(dev->req->nbytes, AES_BLOCK_SIZE);
+ pages = (void *)__get_free_pages(GFP_ATOMIC, get_order(len));
+ if (!pages) {
+ kfree(*dst);
+ *dst = NULL;
+ return -ENOMEM;
+ }
+
+ s5p_sg_copy_buf(pages, src, dev->req->nbytes, 0);
+
+ sg_init_table(*dst, 1);
+ sg_set_buf(*dst, pages, len);
+
+ return 0;
+}
+
static int s5p_set_outdata(struct s5p_aes_dev *dev, struct scatterlist *sg)
{
int err;
- if (!IS_ALIGNED(sg_dma_len(sg), AES_BLOCK_SIZE)) {
- err = -EINVAL;
- goto exit;
- }
- if (!sg_dma_len(sg)) {
+ if (!sg->length) {
err = -EINVAL;
goto exit;
}
@@ -284,7 +341,7 @@ static int s5p_set_outdata(struct s5p_aes_dev *dev, struct scatterlist *sg)
dev->sg_dst = sg;
err = 0;
- exit:
+exit:
return err;
}
@@ -292,11 +349,7 @@ static int s5p_set_indata(struct s5p_aes_dev *dev, struct scatterlist *sg)
{
int err;
- if (!IS_ALIGNED(sg_dma_len(sg), AES_BLOCK_SIZE)) {
- err = -EINVAL;
- goto exit;
- }
- if (!sg_dma_len(sg)) {
+ if (!sg->length) {
err = -EINVAL;
goto exit;
}
@@ -310,47 +363,59 @@ static int s5p_set_indata(struct s5p_aes_dev *dev, struct scatterlist *sg)
dev->sg_src = sg;
err = 0;
- exit:
+exit:
return err;
}
-static void s5p_aes_tx(struct s5p_aes_dev *dev)
+/*
+ * Returns true if new transmitting (output) data is ready and its
+ * address+length have to be written to device (by calling
+ * s5p_set_dma_outdata()). False otherwise.
+ */
+static bool s5p_aes_tx(struct s5p_aes_dev *dev)
{
int err = 0;
+ bool ret = false;
s5p_unset_outdata(dev);
if (!sg_is_last(dev->sg_dst)) {
err = s5p_set_outdata(dev, sg_next(dev->sg_dst));
- if (err) {
+ if (err)
s5p_aes_complete(dev, err);
- return;
- }
-
- s5p_set_dma_outdata(dev, dev->sg_dst);
+ else
+ ret = true;
} else {
s5p_aes_complete(dev, err);
dev->busy = true;
tasklet_schedule(&dev->tasklet);
}
+
+ return ret;
}
-static void s5p_aes_rx(struct s5p_aes_dev *dev)
+/*
+ * Returns true if new receiving (input) data is ready and its
+ * address+length have to be written to device (by calling
+ * s5p_set_dma_indata()). False otherwise.
+ */
+static bool s5p_aes_rx(struct s5p_aes_dev *dev)
{
int err;
+ bool ret = false;
s5p_unset_indata(dev);
if (!sg_is_last(dev->sg_src)) {
err = s5p_set_indata(dev, sg_next(dev->sg_src));
- if (err) {
+ if (err)
s5p_aes_complete(dev, err);
- return;
- }
-
- s5p_set_dma_indata(dev, dev->sg_src);
+ else
+ ret = true;
}
+
+ return ret;
}
static irqreturn_t s5p_aes_interrupt(int irq, void *dev_id)
@@ -359,18 +424,29 @@ static irqreturn_t s5p_aes_interrupt(int irq, void *dev_id)
struct s5p_aes_dev *dev = platform_get_drvdata(pdev);
uint32_t status;
unsigned long flags;
+ bool set_dma_tx = false;
+ bool set_dma_rx = false;
spin_lock_irqsave(&dev->lock, flags);
- if (irq == dev->irq_fc) {
- status = SSS_READ(dev, FCINTSTAT);
- if (status & SSS_FCINTSTAT_BRDMAINT)
- s5p_aes_rx(dev);
- if (status & SSS_FCINTSTAT_BTDMAINT)
- s5p_aes_tx(dev);
-
- SSS_WRITE(dev, FCINTPEND, status);
- }
+ status = SSS_READ(dev, FCINTSTAT);
+ if (status & SSS_FCINTSTAT_BRDMAINT)
+ set_dma_rx = s5p_aes_rx(dev);
+ if (status & SSS_FCINTSTAT_BTDMAINT)
+ set_dma_tx = s5p_aes_tx(dev);
+
+ SSS_WRITE(dev, FCINTPEND, status);
+
+ /*
+ * Writing length of DMA block (either receiving or transmitting)
+ * will start the operation immediately, so this should be done
+ * at the end (even after clearing pending interrupts to not miss the
+ * interrupt).
+ */
+ if (set_dma_tx)
+ s5p_set_dma_outdata(dev, dev->sg_dst);
+ if (set_dma_rx)
+ s5p_set_dma_indata(dev, dev->sg_src);
spin_unlock_irqrestore(&dev->lock, flags);
@@ -395,6 +471,71 @@ static void s5p_set_aes(struct s5p_aes_dev *dev,
memcpy_toio(keystart, key, keylen);
}
+static bool s5p_is_sg_aligned(struct scatterlist *sg)
+{
+ while (sg) {
+ if (!IS_ALIGNED(sg->length, AES_BLOCK_SIZE))
+ return false;
+ sg = sg_next(sg);
+ }
+
+ return true;
+}
+
+static int s5p_set_indata_start(struct s5p_aes_dev *dev,
+ struct ablkcipher_request *req)
+{
+ struct scatterlist *sg;
+ int err;
+
+ dev->sg_src_cpy = NULL;
+ sg = req->src;
+ if (!s5p_is_sg_aligned(sg)) {
+ dev_dbg(dev->dev,
+ "At least one unaligned source scatter list, making a copy\n");
+ err = s5p_make_sg_cpy(dev, sg, &dev->sg_src_cpy);
+ if (err)
+ return err;
+
+ sg = dev->sg_src_cpy;
+ }
+
+ err = s5p_set_indata(dev, sg);
+ if (err) {
+ s5p_free_sg_cpy(dev, &dev->sg_src_cpy);
+ return err;
+ }
+
+ return 0;
+}
+
+static int s5p_set_outdata_start(struct s5p_aes_dev *dev,
+ struct ablkcipher_request *req)
+{
+ struct scatterlist *sg;
+ int err;
+
+ dev->sg_dst_cpy = NULL;
+ sg = req->dst;
+ if (!s5p_is_sg_aligned(sg)) {
+ dev_dbg(dev->dev,
+ "At least one unaligned dest scatter list, making a copy\n");
+ err = s5p_make_sg_cpy(dev, sg, &dev->sg_dst_cpy);
+ if (err)
+ return err;
+
+ sg = dev->sg_dst_cpy;
+ }
+
+ err = s5p_set_outdata(dev, sg);
+ if (err) {
+ s5p_free_sg_cpy(dev, &dev->sg_dst_cpy);
+ return err;
+ }
+
+ return 0;
+}
+
static void s5p_aes_crypt_start(struct s5p_aes_dev *dev, unsigned long mode)
{
struct ablkcipher_request *req = dev->req;
@@ -431,19 +572,19 @@ static void s5p_aes_crypt_start(struct s5p_aes_dev *dev, unsigned long mode)
SSS_FCINTENCLR_BTDMAINTENCLR | SSS_FCINTENCLR_BRDMAINTENCLR);
SSS_WRITE(dev, FCFIFOCTRL, 0x00);
- err = s5p_set_indata(dev, req->src);
+ err = s5p_set_indata_start(dev, req);
if (err)
goto indata_error;
- err = s5p_set_outdata(dev, req->dst);
+ err = s5p_set_outdata_start(dev, req);
if (err)
goto outdata_error;
SSS_AES_WRITE(dev, AES_CONTROL, aes_control);
s5p_set_aes(dev, dev->ctx->aes_key, req->info, dev->ctx->keylen);
- s5p_set_dma_indata(dev, req->src);
- s5p_set_dma_outdata(dev, req->dst);
+ s5p_set_dma_indata(dev, dev->sg_src);
+ s5p_set_dma_outdata(dev, dev->sg_dst);
SSS_WRITE(dev, FCINTENSET,
SSS_FCINTENSET_BTDMAINTENSET | SSS_FCINTENSET_BRDMAINTENSET);
@@ -452,10 +593,10 @@ static void s5p_aes_crypt_start(struct s5p_aes_dev *dev, unsigned long mode)
return;
- outdata_error:
+outdata_error:
s5p_unset_indata(dev);
- indata_error:
+indata_error:
s5p_aes_complete(dev, err);
spin_unlock_irqrestore(&dev->lock, flags);
}
@@ -506,7 +647,7 @@ static int s5p_aes_handle_req(struct s5p_aes_dev *dev,
tasklet_schedule(&dev->tasklet);
- exit:
+exit:
return err;
}
@@ -671,21 +812,6 @@ static int s5p_aes_probe(struct platform_device *pdev)
goto err_irq;
}
- if (variant->has_hash_irq) {
- pdata->irq_hash = platform_get_irq(pdev, 1);
- if (pdata->irq_hash < 0) {
- err = pdata->irq_hash;
- dev_warn(dev, "hash interrupt is not available.\n");
- goto err_irq;
- }
- err = devm_request_irq(dev, pdata->irq_hash, s5p_aes_interrupt,
- IRQF_SHARED, pdev->name, pdev);
- if (err < 0) {
- dev_warn(dev, "hash interrupt is not available.\n");
- goto err_irq;
- }
- }
-
pdata->busy = false;
pdata->variant = variant;
pdata->dev = dev;
@@ -705,7 +831,7 @@ static int s5p_aes_probe(struct platform_device *pdev)
return 0;
- err_algs:
+err_algs:
dev_err(dev, "can't register '%s': %d\n", algs[i].cra_name, err);
for (j = 0; j < i; j++)
@@ -713,7 +839,7 @@ static int s5p_aes_probe(struct platform_device *pdev)
tasklet_kill(&pdata->tasklet);
- err_irq:
+err_irq:
clk_disable_unprepare(pdata->clk);
s5p_dev = NULL;
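
The s5p-sss changes above drop the hard requirement for AES-block-aligned scatterlist entries by bouncing unaligned requests through one contiguous buffer. A condensed sketch of that pattern (hypothetical names, error handling trimmed, not part of the patch):

#include <crypto/aes.h>
#include <crypto/scatterwalk.h>
#include <linux/gfp.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

static void copy_sg(void *buf, struct scatterlist *sg,
		    unsigned int nbytes, int out)
{
	struct scatter_walk walk;

	scatterwalk_start(&walk, sg);
	scatterwalk_copychunks(buf, &walk, nbytes, out);
	scatterwalk_done(&walk, out, 0);
}

static struct scatterlist *make_aligned_copy(struct scatterlist *src,
					     unsigned int nbytes)
{
	unsigned int len = ALIGN(nbytes, AES_BLOCK_SIZE);
	struct scatterlist *sg = kmalloc(sizeof(*sg), GFP_ATOMIC);
	void *buf;

	if (!sg)
		return NULL;

	buf = (void *)__get_free_pages(GFP_ATOMIC, get_order(len));
	if (!buf) {
		kfree(sg);
		return NULL;
	}

	copy_sg(buf, src, nbytes, 0);	/* gather the request into one buffer */
	sg_init_table(sg, 1);
	sg_set_buf(sg, buf, len);
	return sg;			/* hardware now sees a single aligned segment */
}

On completion the driver copies the bounce buffer back into the caller's destination scatterlist and frees the pages, as s5p_aes_complete() and s5p_free_sg_cpy() above do.
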
diff --git a/drivers/crypto/sunxi-ss/sun4i-ss-cipher.c b/drivers/crypto/sunxi-ss/sun4i-ss-cipher.c
index 7be3fbcd8d78..3830d7c4e138 100644
--- a/drivers/crypto/sunxi-ss/sun4i-ss-cipher.c
+++ b/drivers/crypto/sunxi-ss/sun4i-ss-cipher.c
@@ -35,6 +35,7 @@ static int sun4i_ss_opti_poll(struct ablkcipher_request *areq)
unsigned int todo;
struct sg_mapping_iter mi, mo;
unsigned int oi, oo; /* offset for in and out */
+ unsigned long flags;
if (areq->nbytes == 0)
return 0;
@@ -49,7 +50,7 @@ static int sun4i_ss_opti_poll(struct ablkcipher_request *areq)
return -EINVAL;
}
- spin_lock_bh(&ss->slock);
+ spin_lock_irqsave(&ss->slock, flags);
for (i = 0; i < op->keylen; i += 4)
writel(*(op->key + i / 4), ss->base + SS_KEY0 + i);
@@ -117,7 +118,7 @@ release_ss:
sg_miter_stop(&mi);
sg_miter_stop(&mo);
writel(0, ss->base + SS_CTL);
- spin_unlock_bh(&ss->slock);
+ spin_unlock_irqrestore(&ss->slock, flags);
return err;
}
@@ -149,6 +150,7 @@ static int sun4i_ss_cipher_poll(struct ablkcipher_request *areq)
unsigned int ob = 0; /* offset in buf */
unsigned int obo = 0; /* offset in bufo*/
unsigned int obl = 0; /* length of data in bufo */
+ unsigned long flags;
if (areq->nbytes == 0)
return 0;
@@ -181,7 +183,7 @@ static int sun4i_ss_cipher_poll(struct ablkcipher_request *areq)
if (no_chunk == 1)
return sun4i_ss_opti_poll(areq);
- spin_lock_bh(&ss->slock);
+ spin_lock_irqsave(&ss->slock, flags);
for (i = 0; i < op->keylen; i += 4)
writel(*(op->key + i / 4), ss->base + SS_KEY0 + i);
@@ -307,7 +309,7 @@ release_ss:
sg_miter_stop(&mi);
sg_miter_stop(&mo);
writel(0, ss->base + SS_CTL);
- spin_unlock_bh(&ss->slock);
+ spin_unlock_irqrestore(&ss->slock, flags);
return err;
}
diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index aae05547b924..b7ee8d30147d 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -835,6 +835,16 @@ struct talitos_ahash_req_ctx {
struct scatterlist *psrc;
};
+struct talitos_export_state {
+ u32 hw_context[TALITOS_MDEU_MAX_CONTEXT_SIZE / sizeof(u32)];
+ u8 buf[HASH_MAX_BLOCK_SIZE];
+ unsigned int swinit;
+ unsigned int first;
+ unsigned int last;
+ unsigned int to_hash_later;
+ unsigned int nbuf;
+};
+
static int aead_setkey(struct crypto_aead *authenc,
const u8 *key, unsigned int keylen)
{
@@ -1981,6 +1991,46 @@ static int ahash_digest(struct ahash_request *areq)
return ahash_process_req(areq, areq->nbytes);
}
+static int ahash_export(struct ahash_request *areq, void *out)
+{
+ struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
+ struct talitos_export_state *export = out;
+
+ memcpy(export->hw_context, req_ctx->hw_context,
+ req_ctx->hw_context_size);
+ memcpy(export->buf, req_ctx->buf, req_ctx->nbuf);
+ export->swinit = req_ctx->swinit;
+ export->first = req_ctx->first;
+ export->last = req_ctx->last;
+ export->to_hash_later = req_ctx->to_hash_later;
+ export->nbuf = req_ctx->nbuf;
+
+ return 0;
+}
+
+static int ahash_import(struct ahash_request *areq, const void *in)
+{
+ struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
+ struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
+ const struct talitos_export_state *export = in;
+
+ memset(req_ctx, 0, sizeof(*req_ctx));
+ req_ctx->hw_context_size =
+ (crypto_ahash_digestsize(tfm) <= SHA256_DIGEST_SIZE)
+ ? TALITOS_MDEU_CONTEXT_SIZE_MD5_SHA1_SHA256
+ : TALITOS_MDEU_CONTEXT_SIZE_SHA384_SHA512;
+ memcpy(req_ctx->hw_context, export->hw_context,
+ req_ctx->hw_context_size);
+ memcpy(req_ctx->buf, export->buf, export->nbuf);
+ req_ctx->swinit = export->swinit;
+ req_ctx->first = export->first;
+ req_ctx->last = export->last;
+ req_ctx->to_hash_later = export->to_hash_later;
+ req_ctx->nbuf = export->nbuf;
+
+ return 0;
+}
+
struct keyhash_result {
struct completion completion;
int err;
@@ -2458,6 +2508,7 @@ static struct talitos_alg_template driver_algs[] = {
{ .type = CRYPTO_ALG_TYPE_AHASH,
.alg.hash = {
.halg.digestsize = MD5_DIGEST_SIZE,
+ .halg.statesize = sizeof(struct talitos_export_state),
.halg.base = {
.cra_name = "md5",
.cra_driver_name = "md5-talitos",
@@ -2473,6 +2524,7 @@ static struct talitos_alg_template driver_algs[] = {
{ .type = CRYPTO_ALG_TYPE_AHASH,
.alg.hash = {
.halg.digestsize = SHA1_DIGEST_SIZE,
+ .halg.statesize = sizeof(struct talitos_export_state),
.halg.base = {
.cra_name = "sha1",
.cra_driver_name = "sha1-talitos",
@@ -2488,6 +2540,7 @@ static struct talitos_alg_template driver_algs[] = {
{ .type = CRYPTO_ALG_TYPE_AHASH,
.alg.hash = {
.halg.digestsize = SHA224_DIGEST_SIZE,
+ .halg.statesize = sizeof(struct talitos_export_state),
.halg.base = {
.cra_name = "sha224",
.cra_driver_name = "sha224-talitos",
@@ -2503,6 +2556,7 @@ static struct talitos_alg_template driver_algs[] = {
{ .type = CRYPTO_ALG_TYPE_AHASH,
.alg.hash = {
.halg.digestsize = SHA256_DIGEST_SIZE,
+ .halg.statesize = sizeof(struct talitos_export_state),
.halg.base = {
.cra_name = "sha256",
.cra_driver_name = "sha256-talitos",
@@ -2518,6 +2572,7 @@ static struct talitos_alg_template driver_algs[] = {
{ .type = CRYPTO_ALG_TYPE_AHASH,
.alg.hash = {
.halg.digestsize = SHA384_DIGEST_SIZE,
+ .halg.statesize = sizeof(struct talitos_export_state),
.halg.base = {
.cra_name = "sha384",
.cra_driver_name = "sha384-talitos",
@@ -2533,6 +2588,7 @@ static struct talitos_alg_template driver_algs[] = {
{ .type = CRYPTO_ALG_TYPE_AHASH,
.alg.hash = {
.halg.digestsize = SHA512_DIGEST_SIZE,
+ .halg.statesize = sizeof(struct talitos_export_state),
.halg.base = {
.cra_name = "sha512",
.cra_driver_name = "sha512-talitos",
@@ -2548,6 +2604,7 @@ static struct talitos_alg_template driver_algs[] = {
{ .type = CRYPTO_ALG_TYPE_AHASH,
.alg.hash = {
.halg.digestsize = MD5_DIGEST_SIZE,
+ .halg.statesize = sizeof(struct talitos_export_state),
.halg.base = {
.cra_name = "hmac(md5)",
.cra_driver_name = "hmac-md5-talitos",
@@ -2563,6 +2620,7 @@ static struct talitos_alg_template driver_algs[] = {
{ .type = CRYPTO_ALG_TYPE_AHASH,
.alg.hash = {
.halg.digestsize = SHA1_DIGEST_SIZE,
+ .halg.statesize = sizeof(struct talitos_export_state),
.halg.base = {
.cra_name = "hmac(sha1)",
.cra_driver_name = "hmac-sha1-talitos",
@@ -2578,6 +2636,7 @@ static struct talitos_alg_template driver_algs[] = {
{ .type = CRYPTO_ALG_TYPE_AHASH,
.alg.hash = {
.halg.digestsize = SHA224_DIGEST_SIZE,
+ .halg.statesize = sizeof(struct talitos_export_state),
.halg.base = {
.cra_name = "hmac(sha224)",
.cra_driver_name = "hmac-sha224-talitos",
@@ -2593,6 +2652,7 @@ static struct talitos_alg_template driver_algs[] = {
{ .type = CRYPTO_ALG_TYPE_AHASH,
.alg.hash = {
.halg.digestsize = SHA256_DIGEST_SIZE,
+ .halg.statesize = sizeof(struct talitos_export_state),
.halg.base = {
.cra_name = "hmac(sha256)",
.cra_driver_name = "hmac-sha256-talitos",
@@ -2608,6 +2668,7 @@ static struct talitos_alg_template driver_algs[] = {
{ .type = CRYPTO_ALG_TYPE_AHASH,
.alg.hash = {
.halg.digestsize = SHA384_DIGEST_SIZE,
+ .halg.statesize = sizeof(struct talitos_export_state),
.halg.base = {
.cra_name = "hmac(sha384)",
.cra_driver_name = "hmac-sha384-talitos",
@@ -2623,6 +2684,7 @@ static struct talitos_alg_template driver_algs[] = {
{ .type = CRYPTO_ALG_TYPE_AHASH,
.alg.hash = {
.halg.digestsize = SHA512_DIGEST_SIZE,
+ .halg.statesize = sizeof(struct talitos_export_state),
.halg.base = {
.cra_name = "hmac(sha512)",
.cra_driver_name = "hmac-sha512-talitos",
@@ -2814,6 +2876,8 @@ static struct talitos_crypto_alg *talitos_alg_alloc(struct device *dev,
t_alg->algt.alg.hash.finup = ahash_finup;
t_alg->algt.alg.hash.digest = ahash_digest;
t_alg->algt.alg.hash.setkey = ahash_setkey;
+ t_alg->algt.alg.hash.import = ahash_import;
+ t_alg->algt.alg.hash.export = ahash_export;
if (!(priv->features & TALITOS_FTR_HMAC_OK) &&
!strncmp(alg->cra_name, "hmac", 4)) {
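
With .statesize plus the export/import callbacks added above, callers can checkpoint a partial talitos hash and resume it later through the standard ahash interface. A hedged sketch of such a caller (assumes a synchronous transform for brevity; names are illustrative, not part of the patch):

#include <crypto/hash.h>
#include <linux/scatterlist.h>
#include <linux/slab.h>

/* Hash part1, snapshot the state, then resume later to fold in part2. */
static int example_split_hash(struct ahash_request *req,
			      struct scatterlist *part1, unsigned int len1,
			      struct scatterlist *part2, unsigned int len2,
			      u8 *digest)
{
	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
	void *state = kmalloc(crypto_ahash_statesize(tfm), GFP_KERNEL);
	int err;

	if (!state)
		return -ENOMEM;

	ahash_request_set_crypt(req, part1, NULL, len1);
	err = crypto_ahash_init(req) ?:
	      crypto_ahash_update(req) ?:
	      crypto_ahash_export(req, state);	/* save the partial state */
	if (err)
		goto out;

	/* ... req may be reused for unrelated work here ... */

	ahash_request_set_crypt(req, part2, digest, len2);
	err = crypto_ahash_import(req, state) ?:	/* restore and continue */
	      crypto_ahash_finup(req);
out:
	kfree(state);
	return err;
}

The export buffer is sized by crypto_ahash_statesize(), which is why every talitos hash template above now advertises sizeof(struct talitos_export_state).
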
diff --git a/drivers/crypto/vmx/ppc-xlate.pl b/drivers/crypto/vmx/ppc-xlate.pl
index b9997335f193..9f4994cabcc7 100644
--- a/drivers/crypto/vmx/ppc-xlate.pl
+++ b/drivers/crypto/vmx/ppc-xlate.pl
@@ -139,6 +139,26 @@ my $vmr = sub {
" vor $vx,$vy,$vy";
};
+# Some ABIs specify vrsave, special-purpose register #256, as reserved
+# for system use.
+my $no_vrsave = ($flavour =~ /aix|linux64le/);
+my $mtspr = sub {
+ my ($f,$idx,$ra) = @_;
+ if ($idx == 256 && $no_vrsave) {
+ " or $ra,$ra,$ra";
+ } else {
+ " mtspr $idx,$ra";
+ }
+};
+my $mfspr = sub {
+ my ($f,$rd,$idx) = @_;
+ if ($idx == 256 && $no_vrsave) {
+ " li $rd,-1";
+ } else {
+ " mfspr $rd,$idx";
+ }
+};
+
# PowerISA 2.06 stuff
sub vsxmem_op {
my ($f, $vrt, $ra, $rb, $op) = @_;
diff --git a/drivers/dax/Kconfig b/drivers/dax/Kconfig
new file mode 100644
index 000000000000..cedab7572de3
--- /dev/null
+++ b/drivers/dax/Kconfig
@@ -0,0 +1,26 @@
+menuconfig DEV_DAX
+ tristate "DAX: direct access to differentiated memory"
+ default m if NVDIMM_DAX
+ depends on TRANSPARENT_HUGEPAGE
+ help
+ Support raw access to differentiated (persistence, bandwidth,
+ latency...) memory via an mmap(2) capable character
+ device. Platform firmware or a device driver may identify a
+ platform memory resource that is differentiated from the
+ baseline memory pool. Mappings of a /dev/daxX.Y device impose
+ restrictions that make the mapping behavior deterministic.
+
+if DEV_DAX
+
+config DEV_DAX_PMEM
+ tristate "PMEM DAX: direct access to persistent memory"
+ depends on NVDIMM_DAX
+ default DEV_DAX
+ help
+ Support raw access to persistent memory. Note that this
+ driver consumes memory ranges allocated and exported by the
+ libnvdimm sub-system.
+
+ Say Y if unsure
+
+endif
diff --git a/drivers/dax/Makefile b/drivers/dax/Makefile
new file mode 100644
index 000000000000..27c54e38478a
--- /dev/null
+++ b/drivers/dax/Makefile
@@ -0,0 +1,4 @@
+obj-$(CONFIG_DEV_DAX) += dax.o
+obj-$(CONFIG_DEV_DAX_PMEM) += dax_pmem.o
+
+dax_pmem-y := pmem.o
diff --git a/drivers/dax/dax.c b/drivers/dax/dax.c
new file mode 100644
index 000000000000..b891a129b275
--- /dev/null
+++ b/drivers/dax/dax.c
@@ -0,0 +1,575 @@
+/*
+ * Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#include <linux/pagemap.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/pfn_t.h>
+#include <linux/slab.h>
+#include <linux/dax.h>
+#include <linux/fs.h>
+#include <linux/mm.h>
+
+static int dax_major;
+static struct class *dax_class;
+static DEFINE_IDA(dax_minor_ida);
+
+/**
+ * struct dax_region - mapping infrastructure for dax devices
+ * @id: kernel-wide unique region id for a memory range
+ * @ida: instance id allocator for child dax devices
+ * @base: linear address corresponding to @res
+ * @kref: to pin while other agents have a need to do lookups
+ * @dev: parent device backing this region
+ * @align: allocation and mapping alignment for child dax devices
+ * @res: physical address range of the region
+ * @pfn_flags: identify whether the pfns are page backed or not
+ */
+struct dax_region {
+ int id;
+ struct ida ida;
+ void *base;
+ struct kref kref;
+ struct device *dev;
+ unsigned int align;
+ struct resource res;
+ unsigned long pfn_flags;
+};
+
+/**
+ * struct dax_dev - subdivision of a dax region
+ * @region: parent region
+ * @dev: device backing the character device
+ * @kref: enable this data to be tracked in filp->private_data
+ * @alive: !alive + rcu grace period == no new mappings can be established
+ * @id: child id in the region
+ * @num_resources: number of physical address extents in this device
+ * @res: array of physical address ranges
+ */
+struct dax_dev {
+ struct dax_region *region;
+ struct device *dev;
+ struct kref kref;
+ bool alive;
+ int id;
+ int num_resources;
+ struct resource res[0];
+};
+
+static void dax_region_free(struct kref *kref)
+{
+ struct dax_region *dax_region;
+
+ dax_region = container_of(kref, struct dax_region, kref);
+ kfree(dax_region);
+}
+
+void dax_region_put(struct dax_region *dax_region)
+{
+ kref_put(&dax_region->kref, dax_region_free);
+}
+EXPORT_SYMBOL_GPL(dax_region_put);
+
+static void dax_dev_free(struct kref *kref)
+{
+ struct dax_dev *dax_dev;
+
+ dax_dev = container_of(kref, struct dax_dev, kref);
+ dax_region_put(dax_dev->region);
+ kfree(dax_dev);
+}
+
+static void dax_dev_put(struct dax_dev *dax_dev)
+{
+ kref_put(&dax_dev->kref, dax_dev_free);
+}
+
+struct dax_region *alloc_dax_region(struct device *parent, int region_id,
+ struct resource *res, unsigned int align, void *addr,
+ unsigned long pfn_flags)
+{
+ struct dax_region *dax_region;
+
+ dax_region = kzalloc(sizeof(*dax_region), GFP_KERNEL);
+
+ if (!dax_region)
+ return NULL;
+
+ memcpy(&dax_region->res, res, sizeof(*res));
+ dax_region->pfn_flags = pfn_flags;
+ kref_init(&dax_region->kref);
+ dax_region->id = region_id;
+ ida_init(&dax_region->ida);
+ dax_region->align = align;
+ dax_region->dev = parent;
+ dax_region->base = addr;
+
+ return dax_region;
+}
+EXPORT_SYMBOL_GPL(alloc_dax_region);
+
+static ssize_t size_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct dax_dev *dax_dev = dev_get_drvdata(dev);
+ unsigned long long size = 0;
+ int i;
+
+ for (i = 0; i < dax_dev->num_resources; i++)
+ size += resource_size(&dax_dev->res[i]);
+
+ return sprintf(buf, "%llu\n", size);
+}
+static DEVICE_ATTR_RO(size);
+
+static struct attribute *dax_device_attributes[] = {
+ &dev_attr_size.attr,
+ NULL,
+};
+
+static const struct attribute_group dax_device_attribute_group = {
+ .attrs = dax_device_attributes,
+};
+
+static const struct attribute_group *dax_attribute_groups[] = {
+ &dax_device_attribute_group,
+ NULL,
+};
+
+static void unregister_dax_dev(void *_dev)
+{
+ struct device *dev = _dev;
+ struct dax_dev *dax_dev = dev_get_drvdata(dev);
+ struct dax_region *dax_region = dax_dev->region;
+
+ dev_dbg(dev, "%s\n", __func__);
+
+ /*
+ * Note, rcu is not protecting the liveness of dax_dev, rcu is
+ * ensuring that any fault handlers that might have seen
+ * dax_dev->alive == true, have completed. Any fault handlers
+ * that start after synchronize_rcu() has started will abort
+ * upon seeing dax_dev->alive == false.
+ */
+ dax_dev->alive = false;
+ synchronize_rcu();
+
+ get_device(dev);
+ device_unregister(dev);
+ ida_simple_remove(&dax_region->ida, dax_dev->id);
+ ida_simple_remove(&dax_minor_ida, MINOR(dev->devt));
+ put_device(dev);
+ dax_dev_put(dax_dev);
+}
+
+int devm_create_dax_dev(struct dax_region *dax_region, struct resource *res,
+ int count)
+{
+ struct device *parent = dax_region->dev;
+ struct dax_dev *dax_dev;
+ struct device *dev;
+ int rc, minor;
+ dev_t dev_t;
+
+ dax_dev = kzalloc(sizeof(*dax_dev) + sizeof(*res) * count, GFP_KERNEL);
+ if (!dax_dev)
+ return -ENOMEM;
+ memcpy(dax_dev->res, res, sizeof(*res) * count);
+ dax_dev->num_resources = count;
+ kref_init(&dax_dev->kref);
+ dax_dev->alive = true;
+ dax_dev->region = dax_region;
+ kref_get(&dax_region->kref);
+
+ dax_dev->id = ida_simple_get(&dax_region->ida, 0, 0, GFP_KERNEL);
+ if (dax_dev->id < 0) {
+ rc = dax_dev->id;
+ goto err_id;
+ }
+
+ minor = ida_simple_get(&dax_minor_ida, 0, 0, GFP_KERNEL);
+ if (minor < 0) {
+ rc = minor;
+ goto err_minor;
+ }
+
+ dev_t = MKDEV(dax_major, minor);
+ dev = device_create_with_groups(dax_class, parent, dev_t, dax_dev,
+ dax_attribute_groups, "dax%d.%d", dax_region->id,
+ dax_dev->id);
+ if (IS_ERR(dev)) {
+ rc = PTR_ERR(dev);
+ goto err_create;
+ }
+ dax_dev->dev = dev;
+
+ rc = devm_add_action(dax_region->dev, unregister_dax_dev, dev);
+ if (rc) {
+ unregister_dax_dev(dev);
+ return rc;
+ }
+
+ return 0;
+
+ err_create:
+ ida_simple_remove(&dax_minor_ida, minor);
+ err_minor:
+ ida_simple_remove(&dax_region->ida, dax_dev->id);
+ err_id:
+ dax_dev_put(dax_dev);
+
+ return rc;
+}
+EXPORT_SYMBOL_GPL(devm_create_dax_dev);
+
+/* return an unmapped area aligned to the dax region's alignment */
+static unsigned long dax_dev_get_unmapped_area(struct file *filp,
+ unsigned long addr, unsigned long len, unsigned long pgoff,
+ unsigned long flags)
+{
+ unsigned long off, off_end, off_align, len_align, addr_align, align;
+ struct dax_dev *dax_dev = filp ? filp->private_data : NULL;
+ struct dax_region *dax_region;
+
+ if (!dax_dev || addr)
+ goto out;
+
+ dax_region = dax_dev->region;
+ align = dax_region->align;
+ off = pgoff << PAGE_SHIFT;
+ off_end = off + len;
+ off_align = round_up(off, align);
+
+ if ((off_end <= off_align) || ((off_end - off_align) < align))
+ goto out;
+
+ len_align = len + align;
+ if ((off + len_align) < off)
+ goto out;
+
+ addr_align = current->mm->get_unmapped_area(filp, addr, len_align,
+ pgoff, flags);
+ if (!IS_ERR_VALUE(addr_align)) {
+ addr_align += (off - addr_align) & (align - 1);
+ return addr_align;
+ }
+ out:
+ return current->mm->get_unmapped_area(filp, addr, len, pgoff, flags);
+}
+
+static int __match_devt(struct device *dev, const void *data)
+{
+ const dev_t *devt = data;
+
+ return dev->devt == *devt;
+}
+
+static struct device *dax_dev_find(dev_t dev_t)
+{
+ return class_find_device(dax_class, NULL, &dev_t, __match_devt);
+}
+
+static int dax_dev_open(struct inode *inode, struct file *filp)
+{
+ struct dax_dev *dax_dev = NULL;
+ struct device *dev;
+
+ dev = dax_dev_find(inode->i_rdev);
+ if (!dev)
+ return -ENXIO;
+
+ device_lock(dev);
+ dax_dev = dev_get_drvdata(dev);
+ if (dax_dev) {
+ dev_dbg(dev, "%s\n", __func__);
+ filp->private_data = dax_dev;
+ kref_get(&dax_dev->kref);
+ inode->i_flags = S_DAX;
+ }
+ device_unlock(dev);
+
+ if (!dax_dev) {
+ put_device(dev);
+ return -ENXIO;
+ }
+ return 0;
+}
+
+static int dax_dev_release(struct inode *inode, struct file *filp)
+{
+ struct dax_dev *dax_dev = filp->private_data;
+ struct device *dev = dax_dev->dev;
+
+ dev_dbg(dax_dev->dev, "%s\n", __func__);
+ dax_dev_put(dax_dev);
+ put_device(dev);
+
+ return 0;
+}
+
+static int check_vma(struct dax_dev *dax_dev, struct vm_area_struct *vma,
+ const char *func)
+{
+ struct dax_region *dax_region = dax_dev->region;
+ struct device *dev = dax_dev->dev;
+ unsigned long mask;
+
+ if (!dax_dev->alive)
+ return -ENXIO;
+
+ /* prevent private / writable mappings from being established */
+ if ((vma->vm_flags & (VM_NORESERVE|VM_SHARED|VM_WRITE)) == VM_WRITE) {
+ dev_info(dev, "%s: %s: fail, attempted private mapping\n",
+ current->comm, func);
+ return -EINVAL;
+ }
+
+ mask = dax_region->align - 1;
+ if (vma->vm_start & mask || vma->vm_end & mask) {
+ dev_info(dev, "%s: %s: fail, unaligned vma (%#lx - %#lx, %#lx)\n",
+ current->comm, func, vma->vm_start, vma->vm_end,
+ mask);
+ return -EINVAL;
+ }
+
+ if ((dax_region->pfn_flags & (PFN_DEV|PFN_MAP)) == PFN_DEV
+ && (vma->vm_flags & VM_DONTCOPY) == 0) {
+ dev_info(dev, "%s: %s: fail, dax range requires MADV_DONTFORK\n",
+ current->comm, func);
+ return -EINVAL;
+ }
+
+ if (!vma_is_dax(vma)) {
+ dev_info(dev, "%s: %s: fail, vma is not DAX capable\n",
+ current->comm, func);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static phys_addr_t pgoff_to_phys(struct dax_dev *dax_dev, pgoff_t pgoff,
+ unsigned long size)
+{
+ struct resource *res;
+ phys_addr_t phys;
+ int i;
+
+ for (i = 0; i < dax_dev->num_resources; i++) {
+ res = &dax_dev->res[i];
+ phys = pgoff * PAGE_SIZE + res->start;
+ if (phys >= res->start && phys <= res->end)
+ break;
+ pgoff -= PHYS_PFN(resource_size(res));
+ }
+
+ if (i < dax_dev->num_resources) {
+ res = &dax_dev->res[i];
+ if (phys + size - 1 <= res->end)
+ return phys;
+ }
+
+ return -1;
+}
+
+static int __dax_dev_fault(struct dax_dev *dax_dev, struct vm_area_struct *vma,
+ struct vm_fault *vmf)
+{
+ unsigned long vaddr = (unsigned long) vmf->virtual_address;
+ struct device *dev = dax_dev->dev;
+ struct dax_region *dax_region;
+ int rc = VM_FAULT_SIGBUS;
+ phys_addr_t phys;
+ pfn_t pfn;
+
+ if (check_vma(dax_dev, vma, __func__))
+ return VM_FAULT_SIGBUS;
+
+ dax_region = dax_dev->region;
+ if (dax_region->align > PAGE_SIZE) {
+ dev_dbg(dev, "%s: alignment > fault size\n", __func__);
+ return VM_FAULT_SIGBUS;
+ }
+
+ phys = pgoff_to_phys(dax_dev, vmf->pgoff, PAGE_SIZE);
+ if (phys == -1) {
+ dev_dbg(dev, "%s: phys_to_pgoff(%#lx) failed\n", __func__,
+ vmf->pgoff);
+ return VM_FAULT_SIGBUS;
+ }
+
+ pfn = phys_to_pfn_t(phys, dax_region->pfn_flags);
+
+ rc = vm_insert_mixed(vma, vaddr, pfn);
+
+ if (rc == -ENOMEM)
+ return VM_FAULT_OOM;
+ if (rc < 0 && rc != -EBUSY)
+ return VM_FAULT_SIGBUS;
+
+ return VM_FAULT_NOPAGE;
+}
+
+static int dax_dev_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
+{
+ int rc;
+ struct file *filp = vma->vm_file;
+ struct dax_dev *dax_dev = filp->private_data;
+
+ dev_dbg(dax_dev->dev, "%s: %s: %s (%#lx - %#lx)\n", __func__,
+ current->comm, (vmf->flags & FAULT_FLAG_WRITE)
+ ? "write" : "read", vma->vm_start, vma->vm_end);
+ rcu_read_lock();
+ rc = __dax_dev_fault(dax_dev, vma, vmf);
+ rcu_read_unlock();
+
+ return rc;
+}
+
+static int __dax_dev_pmd_fault(struct dax_dev *dax_dev,
+ struct vm_area_struct *vma, unsigned long addr, pmd_t *pmd,
+ unsigned int flags)
+{
+ unsigned long pmd_addr = addr & PMD_MASK;
+ struct device *dev = dax_dev->dev;
+ struct dax_region *dax_region;
+ phys_addr_t phys;
+ pgoff_t pgoff;
+ pfn_t pfn;
+
+ if (check_vma(dax_dev, vma, __func__))
+ return VM_FAULT_SIGBUS;
+
+ dax_region = dax_dev->region;
+ if (dax_region->align > PMD_SIZE) {
+ dev_dbg(dev, "%s: alignment > fault size\n", __func__);
+ return VM_FAULT_SIGBUS;
+ }
+
+ /* dax pmd mappings require pfn_t_devmap() */
+ if ((dax_region->pfn_flags & (PFN_DEV|PFN_MAP)) != (PFN_DEV|PFN_MAP)) {
+ dev_dbg(dev, "%s: alignment > fault size\n", __func__);
+ return VM_FAULT_SIGBUS;
+ }
+
+ pgoff = linear_page_index(vma, pmd_addr);
+ phys = pgoff_to_phys(dax_dev, pgoff, PAGE_SIZE);
+ if (phys == -1) {
+ dev_dbg(dev, "%s: phys_to_pgoff(%#lx) failed\n", __func__,
+ pgoff);
+ return VM_FAULT_SIGBUS;
+ }
+
+ pfn = phys_to_pfn_t(phys, dax_region->pfn_flags);
+
+ return vmf_insert_pfn_pmd(vma, addr, pmd, pfn,
+ flags & FAULT_FLAG_WRITE);
+}
+
+static int dax_dev_pmd_fault(struct vm_area_struct *vma, unsigned long addr,
+ pmd_t *pmd, unsigned int flags)
+{
+ int rc;
+ struct file *filp = vma->vm_file;
+ struct dax_dev *dax_dev = filp->private_data;
+
+ dev_dbg(dax_dev->dev, "%s: %s: %s (%#lx - %#lx)\n", __func__,
+ current->comm, (flags & FAULT_FLAG_WRITE)
+ ? "write" : "read", vma->vm_start, vma->vm_end);
+
+ rcu_read_lock();
+ rc = __dax_dev_pmd_fault(dax_dev, vma, addr, pmd, flags);
+ rcu_read_unlock();
+
+ return rc;
+}
+
+static void dax_dev_vm_open(struct vm_area_struct *vma)
+{
+ struct file *filp = vma->vm_file;
+ struct dax_dev *dax_dev = filp->private_data;
+
+ dev_dbg(dax_dev->dev, "%s\n", __func__);
+ kref_get(&dax_dev->kref);
+}
+
+static void dax_dev_vm_close(struct vm_area_struct *vma)
+{
+ struct file *filp = vma->vm_file;
+ struct dax_dev *dax_dev = filp->private_data;
+
+ dev_dbg(dax_dev->dev, "%s\n", __func__);
+ dax_dev_put(dax_dev);
+}
+
+static const struct vm_operations_struct dax_dev_vm_ops = {
+ .fault = dax_dev_fault,
+ .pmd_fault = dax_dev_pmd_fault,
+ .open = dax_dev_vm_open,
+ .close = dax_dev_vm_close,
+};
+
+static int dax_dev_mmap(struct file *filp, struct vm_area_struct *vma)
+{
+ struct dax_dev *dax_dev = filp->private_data;
+ int rc;
+
+ dev_dbg(dax_dev->dev, "%s\n", __func__);
+
+ rc = check_vma(dax_dev, vma, __func__);
+ if (rc)
+ return rc;
+
+ kref_get(&dax_dev->kref);
+ vma->vm_ops = &dax_dev_vm_ops;
+ vma->vm_flags |= VM_MIXEDMAP | VM_HUGEPAGE;
+ return 0;
+
+}
+
+static const struct file_operations dax_fops = {
+ .llseek = noop_llseek,
+ .owner = THIS_MODULE,
+ .open = dax_dev_open,
+ .release = dax_dev_release,
+ .get_unmapped_area = dax_dev_get_unmapped_area,
+ .mmap = dax_dev_mmap,
+};
+
+static int __init dax_init(void)
+{
+ int rc;
+
+ rc = register_chrdev(0, "dax", &dax_fops);
+ if (rc < 0)
+ return rc;
+ dax_major = rc;
+
+ dax_class = class_create(THIS_MODULE, "dax");
+ if (IS_ERR(dax_class)) {
+ unregister_chrdev(dax_major, "dax");
+ return PTR_ERR(dax_class);
+ }
+
+ return 0;
+}
+
+static void __exit dax_exit(void)
+{
+ class_destroy(dax_class);
+ unregister_chrdev(dax_major, "dax");
+ ida_destroy(&dax_minor_ida);
+}
+
+MODULE_AUTHOR("Intel Corporation");
+MODULE_LICENSE("GPL v2");
+subsys_initcall(dax_init);
+module_exit(dax_exit);
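
The comment in unregister_dax_dev() above describes the shutdown protocol: clear the alive flag, then call synchronize_rcu() so that every fault handler which could have sampled the old value has finished. Isolated from the dax specifics, the pattern looks roughly like this (all names here are illustrative only, not part of the driver):

/* Illustrative sketch of the alive-flag + RCU quiescence pattern. */
#include <linux/errno.h>
#include <linux/rcupdate.h>
#include <linux/types.h>

struct widget {
	bool alive;
	/* ... */
};

/* Reader side: mirrors dax_dev_fault() taking rcu_read_lock(). */
static int widget_do_work(struct widget *w)
{
	int ret = 0;

	rcu_read_lock();
	if (!w->alive) {
		rcu_read_unlock();
		return -ENXIO;		/* device is going away */
	}
	/* ... the real work happens here ... */
	rcu_read_unlock();

	return ret;
}

/* Teardown side: mirrors unregister_dax_dev(). */
static void widget_unregister(struct widget *w)
{
	w->alive = false;
	/* wait for every reader that may have observed alive == true */
	synchronize_rcu();
	/* no new work can start against @w beyond this point */
}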
diff --git a/drivers/dax/dax.h b/drivers/dax/dax.h
new file mode 100644
index 000000000000..d8b8f1f25054
--- /dev/null
+++ b/drivers/dax/dax.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#ifndef __DAX_H__
+#define __DAX_H__
+struct device;
+struct resource;
+struct dax_region;
+void dax_region_put(struct dax_region *dax_region);
+struct dax_region *alloc_dax_region(struct device *parent,
+ int region_id, struct resource *res, unsigned int align,
+ void *addr, unsigned long flags);
+int devm_create_dax_dev(struct dax_region *dax_region, struct resource *res,
+ int count);
+#endif /* __DAX_H__ */
diff --git a/drivers/dax/pmem.c b/drivers/dax/pmem.c
new file mode 100644
index 000000000000..55d510e36cd1
--- /dev/null
+++ b/drivers/dax/pmem.c
@@ -0,0 +1,158 @@
+/*
+ * Copyright(c) 2016 Intel Corporation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ */
+#include <linux/percpu-refcount.h>
+#include <linux/memremap.h>
+#include <linux/module.h>
+#include <linux/pfn_t.h>
+#include "../nvdimm/pfn.h"
+#include "../nvdimm/nd.h"
+#include "dax.h"
+
+struct dax_pmem {
+ struct device *dev;
+ struct percpu_ref ref;
+ struct completion cmp;
+};
+
+static struct dax_pmem *to_dax_pmem(struct percpu_ref *ref)
+{
+ return container_of(ref, struct dax_pmem, ref);
+}
+
+static void dax_pmem_percpu_release(struct percpu_ref *ref)
+{
+ struct dax_pmem *dax_pmem = to_dax_pmem(ref);
+
+ dev_dbg(dax_pmem->dev, "%s\n", __func__);
+ complete(&dax_pmem->cmp);
+}
+
+static void dax_pmem_percpu_exit(void *data)
+{
+ struct percpu_ref *ref = data;
+ struct dax_pmem *dax_pmem = to_dax_pmem(ref);
+
+ dev_dbg(dax_pmem->dev, "%s\n", __func__);
+ percpu_ref_exit(ref);
+ wait_for_completion(&dax_pmem->cmp);
+}
+
+static void dax_pmem_percpu_kill(void *data)
+{
+ struct percpu_ref *ref = data;
+ struct dax_pmem *dax_pmem = to_dax_pmem(ref);
+
+ dev_dbg(dax_pmem->dev, "%s\n", __func__);
+ percpu_ref_kill(ref);
+}
+
+static int dax_pmem_probe(struct device *dev)
+{
+ int rc;
+ void *addr;
+ struct resource res;
+ struct nd_pfn_sb *pfn_sb;
+ struct dax_pmem *dax_pmem;
+ struct nd_region *nd_region;
+ struct nd_namespace_io *nsio;
+ struct dax_region *dax_region;
+ struct nd_namespace_common *ndns;
+ struct nd_dax *nd_dax = to_nd_dax(dev);
+ struct nd_pfn *nd_pfn = &nd_dax->nd_pfn;
+ struct vmem_altmap __altmap, *altmap = NULL;
+
+ ndns = nvdimm_namespace_common_probe(dev);
+ if (IS_ERR(ndns))
+ return PTR_ERR(ndns);
+ nsio = to_nd_namespace_io(&ndns->dev);
+
+ /* parse the 'pfn' info block via ->rw_bytes */
+ rc = devm_nsio_enable(dev, nsio);
+ if (rc)
+ return rc;
+ altmap = nvdimm_setup_pfn(nd_pfn, &res, &__altmap);
+ if (IS_ERR(altmap))
+ return PTR_ERR(altmap);
+ devm_nsio_disable(dev, nsio);
+
+ pfn_sb = nd_pfn->pfn_sb;
+
+ if (!devm_request_mem_region(dev, nsio->res.start,
+ resource_size(&nsio->res), dev_name(dev))) {
+ dev_warn(dev, "could not reserve region %pR\n", &nsio->res);
+ return -EBUSY;
+ }
+
+ dax_pmem = devm_kzalloc(dev, sizeof(*dax_pmem), GFP_KERNEL);
+ if (!dax_pmem)
+ return -ENOMEM;
+
+ dax_pmem->dev = dev;
+ init_completion(&dax_pmem->cmp);
+ rc = percpu_ref_init(&dax_pmem->ref, dax_pmem_percpu_release, 0,
+ GFP_KERNEL);
+ if (rc)
+ return rc;
+
+ rc = devm_add_action(dev, dax_pmem_percpu_exit, &dax_pmem->ref);
+ if (rc) {
+ dax_pmem_percpu_exit(&dax_pmem->ref);
+ return rc;
+ }
+
+ addr = devm_memremap_pages(dev, &res, &dax_pmem->ref, altmap);
+ if (IS_ERR(addr))
+ return PTR_ERR(addr);
+
+ rc = devm_add_action(dev, dax_pmem_percpu_kill, &dax_pmem->ref);
+ if (rc) {
+ dax_pmem_percpu_kill(&dax_pmem->ref);
+ return rc;
+ }
+
+ nd_region = to_nd_region(dev->parent);
+ dax_region = alloc_dax_region(dev, nd_region->id, &res,
+ le32_to_cpu(pfn_sb->align), addr, PFN_DEV|PFN_MAP);
+ if (!dax_region)
+ return -ENOMEM;
+
+ /* TODO: support for subdividing a dax region... */
+ rc = devm_create_dax_dev(dax_region, &res, 1);
+
+ /* child dax_dev instances now own the lifetime of the dax_region */
+ dax_region_put(dax_region);
+
+ return rc;
+}
+
+static struct nd_device_driver dax_pmem_driver = {
+ .probe = dax_pmem_probe,
+ .drv = {
+ .name = "dax_pmem",
+ },
+ .type = ND_DRIVER_DAX_PMEM,
+};
+
+static int __init dax_pmem_init(void)
+{
+ return nd_driver_register(&dax_pmem_driver);
+}
+module_init(dax_pmem_init);
+
+static void __exit dax_pmem_exit(void)
+{
+ driver_unregister(&dax_pmem_driver.drv);
+}
+module_exit(dax_pmem_exit);
+
+MODULE_LICENSE("GPL v2");
+MODULE_AUTHOR("Intel Corporation");
+MODULE_ALIAS_ND_DEVICE(ND_DEVICE_DAX_PMEM);
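
dax_pmem_probe() ends with a reference hand-off: alloc_dax_region() returns the region holding one reference, devm_create_dax_dev() takes an additional reference for the child device, and the final dax_region_put() drops the creator's reference so the children alone pin the region. A generic sketch of that kref hand-off, with invented names used purely for illustration:

/* Illustrative kref hand-off, mirroring the probe sequence above. */
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/kref.h>
#include <linux/slab.h>

struct parent_obj {
	struct kref kref;
	/* ... */
};

static void parent_release(struct kref *kref)
{
	kfree(container_of(kref, struct parent_obj, kref));
}

static struct parent_obj *parent_create(void)
{
	struct parent_obj *p = kzalloc(sizeof(*p), GFP_KERNEL);

	if (p)
		kref_init(&p->kref);	/* creator holds the initial reference */
	return p;
}

static void child_attach(struct parent_obj *p)
{
	kref_get(&p->kref);		/* each child pins the parent */
}

static int example_probe(void)
{
	struct parent_obj *p = parent_create();

	if (!p)
		return -ENOMEM;
	child_attach(p);			/* like devm_create_dax_dev() */
	kref_put(&p->kref, parent_release);	/* like dax_region_put(): children now own it */
	return 0;
}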
diff --git a/drivers/devfreq/Kconfig b/drivers/devfreq/Kconfig
index 4de78c552251..78dac0e9da11 100644
--- a/drivers/devfreq/Kconfig
+++ b/drivers/devfreq/Kconfig
@@ -64,30 +64,32 @@ config DEVFREQ_GOV_USERSPACE
Otherwise, the governor does not change the frequency
given at the initialization.
+config DEVFREQ_GOV_PASSIVE
+ tristate "Passive"
+ help
+ Sets the frequency based on the frequency of its parent devfreq
+ device. This governor does not change the frequency by itself
+ through sysfs entries. The passive governor recommends that the
+ devfreq device use the OPP table to get the frequency/voltage.
+
comment "DEVFREQ Drivers"
-config ARM_EXYNOS4_BUS_DEVFREQ
- bool "ARM Exynos4210/4212/4412 Memory Bus DEVFREQ Driver"
- depends on (CPU_EXYNOS4210 || SOC_EXYNOS4212 || SOC_EXYNOS4412) && !ARCH_MULTIPLATFORM
+config ARM_EXYNOS_BUS_DEVFREQ
+ bool "ARM EXYNOS Generic Memory Bus DEVFREQ Driver"
+ depends on ARCH_EXYNOS
select DEVFREQ_GOV_SIMPLE_ONDEMAND
+ select DEVFREQ_GOV_PASSIVE
+ select DEVFREQ_EVENT_EXYNOS_PPMU
+ select PM_DEVFREQ_EVENT
select PM_OPP
help
- This adds the DEVFREQ driver for Exynos4210 memory bus (vdd_int)
- and Exynos4212/4412 memory interface and bus (vdd_mif + vdd_int).
- It reads PPMU counters of memory controllers and adjusts
- the operating frequencies and voltages with OPP support.
+ This adds the common DEVFREQ driver for the Exynos memory bus. An
+ Exynos SoC has one or more memory bus groups (e.g., the MIF and INT
+ blocks), and each group can contain many memory bus blocks. The driver
+ reads the PPMU counters of the memory controllers through a
+ DEVFREQ-event device and adjusts the operating frequencies and
+ voltages with OPP support.
This does not yet operate with optimal voltages.
-config ARM_EXYNOS5_BUS_DEVFREQ
- tristate "ARM Exynos5250 Bus DEVFREQ Driver"
- depends on SOC_EXYNOS5250
- select DEVFREQ_GOV_SIMPLE_ONDEMAND
- select PM_OPP
- help
- This adds the DEVFREQ driver for Exynos5250 bus interface (vdd_int).
- It reads PPMU counters of memory controllers and adjusts the
- operating frequencies and voltages with OPP support.
-
config ARM_TEGRA_DEVFREQ
tristate "Tegra DEVFREQ Driver"
depends on ARCH_TEGRA_124_SOC
diff --git a/drivers/devfreq/Makefile b/drivers/devfreq/Makefile
index 5134f9ee983d..09f11d9d40d5 100644
--- a/drivers/devfreq/Makefile
+++ b/drivers/devfreq/Makefile
@@ -4,10 +4,10 @@ obj-$(CONFIG_DEVFREQ_GOV_SIMPLE_ONDEMAND) += governor_simpleondemand.o
obj-$(CONFIG_DEVFREQ_GOV_PERFORMANCE) += governor_performance.o
obj-$(CONFIG_DEVFREQ_GOV_POWERSAVE) += governor_powersave.o
obj-$(CONFIG_DEVFREQ_GOV_USERSPACE) += governor_userspace.o
+obj-$(CONFIG_DEVFREQ_GOV_PASSIVE) += governor_passive.o
# DEVFREQ Drivers
-obj-$(CONFIG_ARM_EXYNOS4_BUS_DEVFREQ) += exynos/
-obj-$(CONFIG_ARM_EXYNOS5_BUS_DEVFREQ) += exynos/
+obj-$(CONFIG_ARM_EXYNOS_BUS_DEVFREQ) += exynos-bus.o
obj-$(CONFIG_ARM_TEGRA_DEVFREQ) += tegra-devfreq.o
# DEVFREQ Event Drivers
diff --git a/drivers/devfreq/devfreq-event.c b/drivers/devfreq/devfreq-event.c
index 38bf144ca147..39b048eda2ce 100644
--- a/drivers/devfreq/devfreq-event.c
+++ b/drivers/devfreq/devfreq-event.c
@@ -235,6 +235,11 @@ struct devfreq_event_dev *devfreq_event_get_edev_by_phandle(struct device *dev,
mutex_lock(&devfreq_event_list_lock);
list_for_each_entry(edev, &devfreq_event_list, node) {
+ if (edev->dev.parent && edev->dev.parent->of_node == node)
+ goto out;
+ }
+
+ list_for_each_entry(edev, &devfreq_event_list, node) {
if (!strcmp(edev->desc->name, node->name))
goto out;
}
diff --git a/drivers/devfreq/devfreq.c b/drivers/devfreq/devfreq.c
index 984c5e9e7bdd..1d6c803804d5 100644
--- a/drivers/devfreq/devfreq.c
+++ b/drivers/devfreq/devfreq.c
@@ -25,6 +25,7 @@
#include <linux/list.h>
#include <linux/printk.h>
#include <linux/hrtimer.h>
+#include <linux/of.h>
#include "governor.h"
static struct class *devfreq_class;
@@ -188,6 +189,29 @@ static struct devfreq_governor *find_devfreq_governor(const char *name)
return ERR_PTR(-ENODEV);
}
+static int devfreq_notify_transition(struct devfreq *devfreq,
+ struct devfreq_freqs *freqs, unsigned int state)
+{
+ if (!devfreq)
+ return -EINVAL;
+
+ switch (state) {
+ case DEVFREQ_PRECHANGE:
+ srcu_notifier_call_chain(&devfreq->transition_notifier_list,
+ DEVFREQ_PRECHANGE, freqs);
+ break;
+
+ case DEVFREQ_POSTCHANGE:
+ srcu_notifier_call_chain(&devfreq->transition_notifier_list,
+ DEVFREQ_POSTCHANGE, freqs);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
/* Load monitoring helper functions for governors use */
/**
@@ -199,7 +223,8 @@ static struct devfreq_governor *find_devfreq_governor(const char *name)
*/
int update_devfreq(struct devfreq *devfreq)
{
- unsigned long freq;
+ struct devfreq_freqs freqs;
+ unsigned long freq, cur_freq;
int err = 0;
u32 flags = 0;
@@ -233,10 +258,22 @@ int update_devfreq(struct devfreq *devfreq)
flags |= DEVFREQ_FLAG_LEAST_UPPER_BOUND; /* Use LUB */
}
+ if (devfreq->profile->get_cur_freq)
+ devfreq->profile->get_cur_freq(devfreq->dev.parent, &cur_freq);
+ else
+ cur_freq = devfreq->previous_freq;
+
+ freqs.old = cur_freq;
+ freqs.new = freq;
+ devfreq_notify_transition(devfreq, &freqs, DEVFREQ_PRECHANGE);
+
err = devfreq->profile->target(devfreq->dev.parent, &freq, flags);
if (err)
return err;
+ freqs.new = freq;
+ devfreq_notify_transition(devfreq, &freqs, DEVFREQ_POSTCHANGE);
+
if (devfreq->profile->freq_table)
if (devfreq_update_status(devfreq, freq))
dev_err(&devfreq->dev,
@@ -541,6 +578,8 @@ struct devfreq *devfreq_add_device(struct device *dev,
goto err_out;
}
+ srcu_init_notifier_head(&devfreq->transition_notifier_list);
+
mutex_unlock(&devfreq->lock);
mutex_lock(&devfreq_list_lock);
@@ -639,6 +678,49 @@ struct devfreq *devm_devfreq_add_device(struct device *dev,
}
EXPORT_SYMBOL(devm_devfreq_add_device);
+#ifdef CONFIG_OF
+/*
+ * devfreq_get_devfreq_by_phandle - Get the devfreq device from devicetree
+ * @dev: instance of the device that references the "devfreq" phandle
+ * @index: index into the "devfreq" phandle list
+ *
+ * Return: the devfreq instance on success, or an ERR_PTR() on failure.
+ */
+struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev, int index)
+{
+ struct device_node *node;
+ struct devfreq *devfreq;
+
+ if (!dev)
+ return ERR_PTR(-EINVAL);
+
+ if (!dev->of_node)
+ return ERR_PTR(-EINVAL);
+
+ node = of_parse_phandle(dev->of_node, "devfreq", index);
+ if (!node)
+ return ERR_PTR(-ENODEV);
+
+ mutex_lock(&devfreq_list_lock);
+ list_for_each_entry(devfreq, &devfreq_list, node) {
+ if (devfreq->dev.parent
+ && devfreq->dev.parent->of_node == node) {
+ mutex_unlock(&devfreq_list_lock);
+ return devfreq;
+ }
+ }
+ mutex_unlock(&devfreq_list_lock);
+
+ return ERR_PTR(-EPROBE_DEFER);
+}
+#else
+struct devfreq *devfreq_get_devfreq_by_phandle(struct device *dev, int index)
+{
+ return ERR_PTR(-ENODEV);
+}
+#endif /* CONFIG_OF */
+EXPORT_SYMBOL_GPL(devfreq_get_devfreq_by_phandle);
+
/**
* devm_devfreq_remove_device() - Resource-managed devfreq_remove_device()
* @dev: the device to add devfreq feature.
@@ -1266,6 +1348,129 @@ void devm_devfreq_unregister_opp_notifier(struct device *dev,
}
EXPORT_SYMBOL(devm_devfreq_unregister_opp_notifier);
+/**
+ * devfreq_register_notifier() - Register a notifier with a devfreq device
+ * @devfreq: The devfreq object.
+ * @nb: The notifier block to register.
+ * @list: DEVFREQ_TRANSITION_NOTIFIER.
+ */
+int devfreq_register_notifier(struct devfreq *devfreq,
+ struct notifier_block *nb,
+ unsigned int list)
+{
+ int ret = 0;
+
+ if (!devfreq)
+ return -EINVAL;
+
+ switch (list) {
+ case DEVFREQ_TRANSITION_NOTIFIER:
+ ret = srcu_notifier_chain_register(
+ &devfreq->transition_notifier_list, nb);
+ break;
+ default:
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL(devfreq_register_notifier);
+
+/**
+ * devfreq_unregister_notifier() - Unregister a notifier from a devfreq device
+ * @devfreq: The devfreq object.
+ * @nb: The notifier block to be unregistered.
+ * @list: DEVFREQ_TRANSITION_NOTIFIER.
+ */
+int devfreq_unregister_notifier(struct devfreq *devfreq,
+ struct notifier_block *nb,
+ unsigned int list)
+{
+ int ret = 0;
+
+ if (!devfreq)
+ return -EINVAL;
+
+ switch (list) {
+ case DEVFREQ_TRANSITION_NOTIFIER:
+ ret = srcu_notifier_chain_unregister(
+ &devfreq->transition_notifier_list, nb);
+ break;
+ default:
+ ret = -EINVAL;
+ }
+
+ return ret;
+}
+EXPORT_SYMBOL(devfreq_unregister_notifier);
+
+struct devfreq_notifier_devres {
+ struct devfreq *devfreq;
+ struct notifier_block *nb;
+ unsigned int list;
+};
+
+static void devm_devfreq_notifier_release(struct device *dev, void *res)
+{
+ struct devfreq_notifier_devres *this = res;
+
+ devfreq_unregister_notifier(this->devfreq, this->nb, this->list);
+}
+
+/**
+ * devm_devfreq_register_notifier() - Resource-managed devfreq_register_notifier()
+ * @dev: The devfreq user device. (parent of devfreq)
+ * @devfreq: The devfreq object.
+ * @nb: The notifier block to register.
+ * @list: DEVFREQ_TRANSITION_NOTIFIER.
+ */
+int devm_devfreq_register_notifier(struct device *dev,
+ struct devfreq *devfreq,
+ struct notifier_block *nb,
+ unsigned int list)
+{
+ struct devfreq_notifier_devres *ptr;
+ int ret;
+
+ ptr = devres_alloc(devm_devfreq_notifier_release, sizeof(*ptr),
+ GFP_KERNEL);
+ if (!ptr)
+ return -ENOMEM;
+
+ ret = devfreq_register_notifier(devfreq, nb, list);
+ if (ret) {
+ devres_free(ptr);
+ return ret;
+ }
+
+ ptr->devfreq = devfreq;
+ ptr->nb = nb;
+ ptr->list = list;
+ devres_add(dev, ptr);
+
+ return 0;
+}
+EXPORT_SYMBOL(devm_devfreq_register_notifier);
+
+/**
+ * devm_devfreq_unregister_notifier() - Resource-managed devfreq_unregister_notifier()
+ * @dev: The devfreq user device. (parent of devfreq)
+ * @devfreq: The devfreq object.
+ * @nb: The notifier block to be unregistered.
+ * @list: DEVFREQ_TRANSITION_NOTIFIER.
+ */
+void devm_devfreq_unregister_notifier(struct device *dev,
+ struct devfreq *devfreq,
+ struct notifier_block *nb,
+ unsigned int list)
+{
+ WARN_ON(devres_release(dev, devm_devfreq_notifier_release,
+ devm_devfreq_dev_match, devfreq));
+}
+EXPORT_SYMBOL(devm_devfreq_unregister_notifier);
+
MODULE_AUTHOR("MyungJoo Ham <myungjoo.ham@samsung.com>");
MODULE_DESCRIPTION("devfreq class support");
MODULE_LICENSE("GPL");
diff --git a/drivers/devfreq/event/Kconfig b/drivers/devfreq/event/Kconfig
index a11720affc31..1e8b4f469f38 100644
--- a/drivers/devfreq/event/Kconfig
+++ b/drivers/devfreq/event/Kconfig
@@ -13,6 +13,14 @@ menuconfig PM_DEVFREQ_EVENT
if PM_DEVFREQ_EVENT
+config DEVFREQ_EVENT_EXYNOS_NOCP
+ bool "EXYNOS NoC (Network On Chip) Probe DEVFREQ event Driver"
+ depends on ARCH_EXYNOS
+ select PM_OPP
+ help
+ This adds the devfreq-event driver for Exynos SoCs. It provides NoC
+ (Network on Chip) probe counters to measure the bandwidth of the AXI bus.
+
config DEVFREQ_EVENT_EXYNOS_PPMU
bool "EXYNOS PPMU (Platform Performance Monitoring Unit) DEVFREQ event Driver"
depends on ARCH_EXYNOS
diff --git a/drivers/devfreq/event/Makefile b/drivers/devfreq/event/Makefile
index be146ead79cf..3d6afd352253 100644
--- a/drivers/devfreq/event/Makefile
+++ b/drivers/devfreq/event/Makefile
@@ -1,2 +1,4 @@
# Exynos DEVFREQ Event Drivers
+
+obj-$(CONFIG_DEVFREQ_EVENT_EXYNOS_NOCP) += exynos-nocp.o
obj-$(CONFIG_DEVFREQ_EVENT_EXYNOS_PPMU) += exynos-ppmu.o
diff --git a/drivers/devfreq/event/exynos-nocp.c b/drivers/devfreq/event/exynos-nocp.c
new file mode 100644
index 000000000000..6b6a5f310486
--- /dev/null
+++ b/drivers/devfreq/event/exynos-nocp.c
@@ -0,0 +1,304 @@
+/*
+ * exynos-nocp.c - EXYNOS NoC (Network On Chip) Probe support
+ *
+ * Copyright (c) 2016 Samsung Electronics Co., Ltd.
+ * Author : Chanwoo Choi <cw00.choi@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/clk.h>
+#include <linux/module.h>
+#include <linux/devfreq-event.h>
+#include <linux/kernel.h>
+#include <linux/of_address.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+
+#include "exynos-nocp.h"
+
+struct exynos_nocp {
+ struct devfreq_event_dev *edev;
+ struct devfreq_event_desc desc;
+
+ struct device *dev;
+
+ struct regmap *regmap;
+ struct clk *clk;
+};
+
+/*
+ * The devfreq-event ops structure for nocp probe.
+ */
+static int exynos_nocp_set_event(struct devfreq_event_dev *edev)
+{
+ struct exynos_nocp *nocp = devfreq_event_get_drvdata(edev);
+ int ret;
+
+ /* Disable NoC probe */
+ ret = regmap_update_bits(nocp->regmap, NOCP_MAIN_CTL,
+ NOCP_MAIN_CTL_STATEN_MASK, 0);
+ if (ret < 0) {
+ dev_err(nocp->dev, "failed to disable the NoC probe device\n");
+ return ret;
+ }
+
+ /* Set a statistics dump period to 0 */
+ ret = regmap_write(nocp->regmap, NOCP_STAT_PERIOD, 0x0);
+ if (ret < 0)
+ goto out;
+
+ /* Set the IntEvent fields of *_SRC */
+ ret = regmap_update_bits(nocp->regmap, NOCP_COUNTERS_0_SRC,
+ NOCP_CNT_SRC_INTEVENT_MASK,
+ NOCP_CNT_SRC_INTEVENT_BYTE_MASK);
+ if (ret < 0)
+ goto out;
+
+ ret = regmap_update_bits(nocp->regmap, NOCP_COUNTERS_1_SRC,
+ NOCP_CNT_SRC_INTEVENT_MASK,
+ NOCP_CNT_SRC_INTEVENT_CHAIN_MASK);
+ if (ret < 0)
+ goto out;
+
+ ret = regmap_update_bits(nocp->regmap, NOCP_COUNTERS_2_SRC,
+ NOCP_CNT_SRC_INTEVENT_MASK,
+ NOCP_CNT_SRC_INTEVENT_CYCLE_MASK);
+ if (ret < 0)
+ goto out;
+
+ ret = regmap_update_bits(nocp->regmap, NOCP_COUNTERS_3_SRC,
+ NOCP_CNT_SRC_INTEVENT_MASK,
+ NOCP_CNT_SRC_INTEVENT_CHAIN_MASK);
+ if (ret < 0)
+ goto out;
+
+
+ /* Set an alarm with a max/min value of 0 to generate StatALARM */
+ ret = regmap_write(nocp->regmap, NOCP_STAT_ALARM_MIN, 0x0);
+ if (ret < 0)
+ goto out;
+
+ ret = regmap_write(nocp->regmap, NOCP_STAT_ALARM_MAX, 0x0);
+ if (ret < 0)
+ goto out;
+
+ /* Set AlarmMode */
+ ret = regmap_update_bits(nocp->regmap, NOCP_COUNTERS_0_ALARM_MODE,
+ NOCP_CNT_ALARM_MODE_MASK,
+ NOCP_CNT_ALARM_MODE_MIN_MAX_MASK);
+ if (ret < 0)
+ goto out;
+
+ ret = regmap_update_bits(nocp->regmap, NOCP_COUNTERS_1_ALARM_MODE,
+ NOCP_CNT_ALARM_MODE_MASK,
+ NOCP_CNT_ALARM_MODE_MIN_MAX_MASK);
+ if (ret < 0)
+ goto out;
+
+ ret = regmap_update_bits(nocp->regmap, NOCP_COUNTERS_2_ALARM_MODE,
+ NOCP_CNT_ALARM_MODE_MASK,
+ NOCP_CNT_ALARM_MODE_MIN_MAX_MASK);
+ if (ret < 0)
+ goto out;
+
+ ret = regmap_update_bits(nocp->regmap, NOCP_COUNTERS_3_ALARM_MODE,
+ NOCP_CNT_ALARM_MODE_MASK,
+ NOCP_CNT_ALARM_MODE_MIN_MAX_MASK);
+ if (ret < 0)
+ goto out;
+
+ /* Enable the measurements by setting AlarmEn and StatEn */
+ ret = regmap_update_bits(nocp->regmap, NOCP_MAIN_CTL,
+ NOCP_MAIN_CTL_STATEN_MASK | NOCP_MAIN_CTL_ALARMEN_MASK,
+ NOCP_MAIN_CTL_STATEN_MASK | NOCP_MAIN_CTL_ALARMEN_MASK);
+ if (ret < 0)
+ goto out;
+
+ /* Set GlobalEN */
+ ret = regmap_update_bits(nocp->regmap, NOCP_CFG_CTL,
+ NOCP_CFG_CTL_GLOBALEN_MASK,
+ NOCP_CFG_CTL_GLOBALEN_MASK);
+ if (ret < 0)
+ goto out;
+
+ /* Enable NoC probe */
+ ret = regmap_update_bits(nocp->regmap, NOCP_MAIN_CTL,
+ NOCP_MAIN_CTL_STATEN_MASK,
+ NOCP_MAIN_CTL_STATEN_MASK);
+ if (ret < 0)
+ goto out;
+
+ return 0;
+
+out:
+ /* Reset NoC probe */
+ if (regmap_update_bits(nocp->regmap, NOCP_MAIN_CTL,
+ NOCP_MAIN_CTL_STATEN_MASK, 0)) {
+ dev_err(nocp->dev, "Failed to reset NoC probe device\n");
+ }
+
+ return ret;
+}
+
+static int exynos_nocp_get_event(struct devfreq_event_dev *edev,
+ struct devfreq_event_data *edata)
+{
+ struct exynos_nocp *nocp = devfreq_event_get_drvdata(edev);
+ unsigned int counter[4];
+ int ret;
+
+ /* Read cycle count */
+ ret = regmap_read(nocp->regmap, NOCP_COUNTERS_0_VAL, &counter[0]);
+ if (ret < 0)
+ goto out;
+
+ ret = regmap_read(nocp->regmap, NOCP_COUNTERS_1_VAL, &counter[1]);
+ if (ret < 0)
+ goto out;
+
+ ret = regmap_read(nocp->regmap, NOCP_COUNTERS_2_VAL, &counter[2]);
+ if (ret < 0)
+ goto out;
+
+ ret = regmap_read(nocp->regmap, NOCP_COUNTERS_3_VAL, &counter[3]);
+ if (ret < 0)
+ goto out;
+
+ edata->load_count = ((counter[1] << 16) | counter[0]);
+ edata->total_count = ((counter[3] << 16) | counter[2]);
+
+ dev_dbg(&edev->dev, "%s (event: %ld/%ld)\n", edev->desc->name,
+ edata->load_count, edata->total_count);
+
+ return 0;
+
+out:
+ edata->load_count = 0;
+ edata->total_count = 0;
+
+ dev_err(nocp->dev, "Failed to read the counter of NoC probe device\n");
+
+ return ret;
+}
+
+static const struct devfreq_event_ops exynos_nocp_ops = {
+ .set_event = exynos_nocp_set_event,
+ .get_event = exynos_nocp_get_event,
+};
+
+static const struct of_device_id exynos_nocp_id_match[] = {
+ { .compatible = "samsung,exynos5420-nocp", },
+ { /* sentinel */ },
+};
+
+static struct regmap_config exynos_nocp_regmap_config = {
+ .reg_bits = 32,
+ .val_bits = 32,
+ .reg_stride = 4,
+ .max_register = NOCP_COUNTERS_3_VAL,
+};
+
+static int exynos_nocp_parse_dt(struct platform_device *pdev,
+ struct exynos_nocp *nocp)
+{
+ struct device *dev = nocp->dev;
+ struct device_node *np = dev->of_node;
+ struct resource *res;
+ void __iomem *base;
+
+ if (!np) {
+ dev_err(dev, "failed to find devicetree node\n");
+ return -EINVAL;
+ }
+
+ nocp->clk = devm_clk_get(dev, "nocp");
+ if (IS_ERR(nocp->clk))
+ nocp->clk = NULL;
+
+ /* Map the memory-mapped I/O used to control the NoC probe registers */
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res)
+ return -EINVAL;
+
+ base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(base))
+ return PTR_ERR(base);
+
+ exynos_nocp_regmap_config.max_register = resource_size(res) - 4;
+
+ nocp->regmap = devm_regmap_init_mmio(dev, base,
+ &exynos_nocp_regmap_config);
+ if (IS_ERR(nocp->regmap)) {
+ dev_err(dev, "failed to initialize regmap\n");
+ return PTR_ERR(nocp->regmap);
+ }
+
+ return 0;
+}
+
+static int exynos_nocp_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct device_node *np = dev->of_node;
+ struct exynos_nocp *nocp;
+ int ret;
+
+ nocp = devm_kzalloc(&pdev->dev, sizeof(*nocp), GFP_KERNEL);
+ if (!nocp)
+ return -ENOMEM;
+
+ nocp->dev = &pdev->dev;
+
+ /* Parse dt data to get resource */
+ ret = exynos_nocp_parse_dt(pdev, nocp);
+ if (ret < 0) {
+ dev_err(&pdev->dev,
+ "failed to parse devicetree for resource\n");
+ return ret;
+ }
+
+ /* Add devfreq-event device to measure the bandwidth of NoC */
+ nocp->desc.ops = &exynos_nocp_ops;
+ nocp->desc.driver_data = nocp;
+ nocp->desc.name = np->full_name;
+ nocp->edev = devm_devfreq_event_add_edev(&pdev->dev, &nocp->desc);
+ if (IS_ERR(nocp->edev)) {
+ dev_err(&pdev->dev,
+ "failed to add devfreq-event device\n");
+ return PTR_ERR(nocp->edev);
+ }
+ platform_set_drvdata(pdev, nocp);
+
+ clk_prepare_enable(nocp->clk);
+
+ pr_info("exynos-nocp: new NoC Probe device registered: %s\n",
+ dev_name(dev));
+
+ return 0;
+}
+
+static int exynos_nocp_remove(struct platform_device *pdev)
+{
+ struct exynos_nocp *nocp = platform_get_drvdata(pdev);
+
+ clk_disable_unprepare(nocp->clk);
+
+ return 0;
+}
+
+static struct platform_driver exynos_nocp_driver = {
+ .probe = exynos_nocp_probe,
+ .remove = exynos_nocp_remove,
+ .driver = {
+ .name = "exynos-nocp",
+ .of_match_table = exynos_nocp_id_match,
+ },
+};
+module_platform_driver(exynos_nocp_driver);
+
+MODULE_DESCRIPTION("Exynos NoC (Network on Chip) Probe driver");
+MODULE_AUTHOR("Chanwoo Choi <cw00.choi@samsung.com>");
+MODULE_LICENSE("GPL");
diff --git a/drivers/devfreq/event/exynos-nocp.h b/drivers/devfreq/event/exynos-nocp.h
new file mode 100644
index 000000000000..28564db0edb8
--- /dev/null
+++ b/drivers/devfreq/event/exynos-nocp.h
@@ -0,0 +1,78 @@
+/*
+ * exynos-nocp.h - EXYNOS NoC (Network on Chip) Probe header file
+ *
+ * Copyright (c) 2016 Samsung Electronics Co., Ltd.
+ * Author : Chanwoo Choi <cw00.choi@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#ifndef __EXYNOS_NOCP_H__
+#define __EXYNOS_NOCP_H__
+
+enum nocp_reg {
+ NOCP_ID_REVISION_ID = 0x04,
+ NOCP_MAIN_CTL = 0x08,
+ NOCP_CFG_CTL = 0x0C,
+
+ NOCP_STAT_PERIOD = 0x24,
+ NOCP_STAT_GO = 0x28,
+ NOCP_STAT_ALARM_MIN = 0x2C,
+ NOCP_STAT_ALARM_MAX = 0x30,
+ NOCP_STAT_ALARM_STATUS = 0x34,
+ NOCP_STAT_ALARM_CLR = 0x38,
+
+ NOCP_COUNTERS_0_SRC = 0x138,
+ NOCP_COUNTERS_0_ALARM_MODE = 0x13C,
+ NOCP_COUNTERS_0_VAL = 0x140,
+
+ NOCP_COUNTERS_1_SRC = 0x14C,
+ NOCP_COUNTERS_1_ALARM_MODE = 0x150,
+ NOCP_COUNTERS_1_VAL = 0x154,
+
+ NOCP_COUNTERS_2_SRC = 0x160,
+ NOCP_COUNTERS_2_ALARM_MODE = 0x164,
+ NOCP_COUNTERS_2_VAL = 0x168,
+
+ NOCP_COUNTERS_3_SRC = 0x174,
+ NOCP_COUNTERS_3_ALARM_MODE = 0x178,
+ NOCP_COUNTERS_3_VAL = 0x17C,
+};
+
+/* NOCP_MAIN_CTL register */
+#define NOCP_MAIN_CTL_ERREN_MASK BIT(0)
+#define NOCP_MAIN_CTL_TRACEEN_MASK BIT(1)
+#define NOCP_MAIN_CTL_PAYLOADEN_MASK BIT(2)
+#define NOCP_MAIN_CTL_STATEN_MASK BIT(3)
+#define NOCP_MAIN_CTL_ALARMEN_MASK BIT(4)
+#define NOCP_MAIN_CTL_STATCONDDUMP_MASK BIT(5)
+#define NOCP_MAIN_CTL_INTRUSIVEMODE_MASK BIT(6)
+
+/* NOCP_CFG_CTL register */
+#define NOCP_CFG_CTL_GLOBALEN_MASK BIT(0)
+#define NOCP_CFG_CTL_ACTIVE_MASK BIT(1)
+
+/* NOCP_COUNTERS_x_SRC register */
+#define NOCP_CNT_SRC_INTEVENT_SHIFT 0
+#define NOCP_CNT_SRC_INTEVENT_MASK (0x1F << NOCP_CNT_SRC_INTEVENT_SHIFT)
+#define NOCP_CNT_SRC_INTEVENT_OFF_MASK (0x0 << NOCP_CNT_SRC_INTEVENT_SHIFT)
+#define NOCP_CNT_SRC_INTEVENT_CYCLE_MASK (0x1 << NOCP_CNT_SRC_INTEVENT_SHIFT)
+#define NOCP_CNT_SRC_INTEVENT_IDLE_MASK (0x2 << NOCP_CNT_SRC_INTEVENT_SHIFT)
+#define NOCP_CNT_SRC_INTEVENT_XFER_MASK (0x3 << NOCP_CNT_SRC_INTEVENT_SHIFT)
+#define NOCP_CNT_SRC_INTEVENT_BUSY_MASK (0x4 << NOCP_CNT_SRC_INTEVENT_SHIFT)
+#define NOCP_CNT_SRC_INTEVENT_WAIT_MASK (0x5 << NOCP_CNT_SRC_INTEVENT_SHIFT)
+#define NOCP_CNT_SRC_INTEVENT_PKT_MASK (0x6 << NOCP_CNT_SRC_INTEVENT_SHIFT)
+#define NOCP_CNT_SRC_INTEVENT_BYTE_MASK (0x8 << NOCP_CNT_SRC_INTEVENT_SHIFT)
+#define NOCP_CNT_SRC_INTEVENT_CHAIN_MASK (0x10 << NOCP_CNT_SRC_INTEVENT_SHIFT)
+
+/* NOCP_COUNTERS_x_ALARM_MODE register */
+#define NOCP_CNT_ALARM_MODE_SHIFT 0
+#define NOCP_CNT_ALARM_MODE_MASK (0x3 << NOCP_CNT_ALARM_MODE_SHIFT)
+#define NOCP_CNT_ALARM_MODE_OFF_MASK (0x0 << NOCP_CNT_ALARM_MODE_SHIFT)
+#define NOCP_CNT_ALARM_MODE_MIN_MASK (0x1 << NOCP_CNT_ALARM_MODE_SHIFT)
+#define NOCP_CNT_ALARM_MODE_MAX_MASK (0x2 << NOCP_CNT_ALARM_MODE_SHIFT)
+#define NOCP_CNT_ALARM_MODE_MIN_MAX_MASK (0x3 << NOCP_CNT_ALARM_MODE_SHIFT)
+
+#endif /* __EXYNOS_NOCP_H__ */
diff --git a/drivers/devfreq/exynos-bus.c b/drivers/devfreq/exynos-bus.c
new file mode 100644
index 000000000000..2363d0a189b7
--- /dev/null
+++ b/drivers/devfreq/exynos-bus.c
@@ -0,0 +1,570 @@
+/*
+ * Generic Exynos Bus frequency driver with DEVFREQ Framework
+ *
+ * Copyright (c) 2016 Samsung Electronics Co., Ltd.
+ * Author : Chanwoo Choi <cw00.choi@samsung.com>
+ *
+ * This driver supports the Exynos bus frequency feature using the DEVFREQ
+ * framework and is based on drivers/devfreq/exynos/exynos4_bus.c.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/clk.h>
+#include <linux/devfreq.h>
+#include <linux/devfreq-event.h>
+#include <linux/device.h>
+#include <linux/export.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/pm_opp.h>
+#include <linux/platform_device.h>
+#include <linux/regulator/consumer.h>
+#include <linux/slab.h>
+
+#define DEFAULT_SATURATION_RATIO 40
+#define DEFAULT_VOLTAGE_TOLERANCE 2
+
+struct exynos_bus {
+ struct device *dev;
+
+ struct devfreq *devfreq;
+ struct devfreq_event_dev **edev;
+ unsigned int edev_count;
+ struct mutex lock;
+
+ struct dev_pm_opp *curr_opp;
+
+ struct regulator *regulator;
+ struct clk *clk;
+ unsigned int voltage_tolerance;
+ unsigned int ratio;
+};
+
+/*
+ * Control the devfreq-event devices to get the current state of the bus.
+ */
+#define exynos_bus_ops_edev(ops) \
+static int exynos_bus_##ops(struct exynos_bus *bus) \
+{ \
+ int i, ret; \
+ \
+ for (i = 0; i < bus->edev_count; i++) { \
+ if (!bus->edev[i]) \
+ continue; \
+ ret = devfreq_event_##ops(bus->edev[i]); \
+ if (ret < 0) \
+ return ret; \
+ } \
+ \
+ return 0; \
+}
+exynos_bus_ops_edev(enable_edev);
+exynos_bus_ops_edev(disable_edev);
+exynos_bus_ops_edev(set_event);
+
+static int exynos_bus_get_event(struct exynos_bus *bus,
+ struct devfreq_event_data *edata)
+{
+ struct devfreq_event_data event_data;
+ unsigned long load_count = 0, total_count = 0;
+ int i, ret = 0;
+
+ for (i = 0; i < bus->edev_count; i++) {
+ if (!bus->edev[i])
+ continue;
+
+ ret = devfreq_event_get_event(bus->edev[i], &event_data);
+ if (ret < 0)
+ return ret;
+
+ if (i == 0 || event_data.load_count > load_count) {
+ load_count = event_data.load_count;
+ total_count = event_data.total_count;
+ }
+ }
+
+ edata->load_count = load_count;
+ edata->total_count = total_count;
+
+ return ret;
+}
+
+/*
+ * Callbacks required by the devfreq simple-ondemand governor
+ */
+static int exynos_bus_target(struct device *dev, unsigned long *freq, u32 flags)
+{
+ struct exynos_bus *bus = dev_get_drvdata(dev);
+ struct dev_pm_opp *new_opp;
+ unsigned long old_freq, new_freq, old_volt, new_volt, tol;
+ int ret = 0;
+
+ /* Get new opp-bus instance according to new bus clock */
+ rcu_read_lock();
+ new_opp = devfreq_recommended_opp(dev, freq, flags);
+ if (IS_ERR(new_opp)) {
+ dev_err(dev, "failed to get recommended opp instance\n");
+ rcu_read_unlock();
+ return PTR_ERR(new_opp);
+ }
+
+ new_freq = dev_pm_opp_get_freq(new_opp);
+ new_volt = dev_pm_opp_get_voltage(new_opp);
+ old_freq = dev_pm_opp_get_freq(bus->curr_opp);
+ old_volt = dev_pm_opp_get_voltage(bus->curr_opp);
+ rcu_read_unlock();
+
+ if (old_freq == new_freq)
+ return 0;
+ tol = new_volt * bus->voltage_tolerance / 100;
+
+ /* Change voltage and frequency according to new OPP level */
+ mutex_lock(&bus->lock);
+
+ if (old_freq < new_freq) {
+ ret = regulator_set_voltage_tol(bus->regulator, new_volt, tol);
+ if (ret < 0) {
+ dev_err(bus->dev, "failed to set voltage\n");
+ goto out;
+ }
+ }
+
+ ret = clk_set_rate(bus->clk, new_freq);
+ if (ret < 0) {
+ dev_err(dev, "failed to change clock of bus\n");
+ clk_set_rate(bus->clk, old_freq);
+ goto out;
+ }
+
+ if (old_freq > new_freq) {
+ ret = regulator_set_voltage_tol(bus->regulator, new_volt, tol);
+ if (ret < 0) {
+ dev_err(bus->dev, "failed to set voltage\n");
+ goto out;
+ }
+ }
+ bus->curr_opp = new_opp;
+
+ dev_dbg(dev, "Set the frequency of bus (%lukHz -> %lukHz)\n",
+ old_freq/1000, new_freq/1000);
+out:
+ mutex_unlock(&bus->lock);
+
+ return ret;
+}
+
+static int exynos_bus_get_dev_status(struct device *dev,
+ struct devfreq_dev_status *stat)
+{
+ struct exynos_bus *bus = dev_get_drvdata(dev);
+ struct devfreq_event_data edata;
+ int ret;
+
+ rcu_read_lock();
+ stat->current_frequency = dev_pm_opp_get_freq(bus->curr_opp);
+ rcu_read_unlock();
+
+ ret = exynos_bus_get_event(bus, &edata);
+ if (ret < 0) {
+ stat->total_time = stat->busy_time = 0;
+ goto err;
+ }
+
+ stat->busy_time = (edata.load_count * 100) / bus->ratio;
+ stat->total_time = edata.total_count;
+
+ dev_dbg(dev, "Usage of devfreq-event : %lu/%lu\n", stat->busy_time,
+ stat->total_time);
+
+err:
+ ret = exynos_bus_set_event(bus);
+ if (ret < 0) {
+ dev_err(dev, "failed to set event to devfreq-event devices\n");
+ return ret;
+ }
+
+ return ret;
+}
+
+static void exynos_bus_exit(struct device *dev)
+{
+ struct exynos_bus *bus = dev_get_drvdata(dev);
+ int ret;
+
+ ret = exynos_bus_disable_edev(bus);
+ if (ret < 0)
+ dev_warn(dev, "failed to disable the devfreq-event devices\n");
+
+ if (bus->regulator)
+ regulator_disable(bus->regulator);
+
+ dev_pm_opp_of_remove_table(dev);
+ clk_disable_unprepare(bus->clk);
+}
+
+/*
+ * Callbacks required by the devfreq passive governor
+ */
+static int exynos_bus_passive_target(struct device *dev, unsigned long *freq,
+ u32 flags)
+{
+ struct exynos_bus *bus = dev_get_drvdata(dev);
+ struct dev_pm_opp *new_opp;
+ unsigned long old_freq, new_freq;
+ int ret = 0;
+
+ /* Get new opp-bus instance according to new bus clock */
+ rcu_read_lock();
+ new_opp = devfreq_recommended_opp(dev, freq, flags);
+ if (IS_ERR(new_opp)) {
+ dev_err(dev, "failed to get recommended opp instance\n");
+ rcu_read_unlock();
+ return PTR_ERR(new_opp);
+ }
+
+ new_freq = dev_pm_opp_get_freq(new_opp);
+ old_freq = dev_pm_opp_get_freq(bus->curr_opp);
+ rcu_read_unlock();
+
+ if (old_freq == new_freq)
+ return 0;
+
+ /* Change the frequency according to new OPP level */
+ mutex_lock(&bus->lock);
+
+ ret = clk_set_rate(bus->clk, new_freq);
+ if (ret < 0) {
+ dev_err(dev, "failed to set the clock of bus\n");
+ goto out;
+ }
+
+ *freq = new_freq;
+ bus->curr_opp = new_opp;
+
+ dev_dbg(dev, "Set the frequency of bus (%lukHz -> %lukHz)\n",
+ old_freq/1000, new_freq/1000);
+out:
+ mutex_unlock(&bus->lock);
+
+ return ret;
+}
+
+static void exynos_bus_passive_exit(struct device *dev)
+{
+ struct exynos_bus *bus = dev_get_drvdata(dev);
+
+ dev_pm_opp_of_remove_table(dev);
+ clk_disable_unprepare(bus->clk);
+}
+
+static int exynos_bus_parent_parse_of(struct device_node *np,
+ struct exynos_bus *bus)
+{
+ struct device *dev = bus->dev;
+ int i, ret, count, size;
+
+ /* Get the regulator to provide each bus with the power */
+ bus->regulator = devm_regulator_get(dev, "vdd");
+ if (IS_ERR(bus->regulator)) {
+ dev_err(dev, "failed to get VDD regulator\n");
+ return PTR_ERR(bus->regulator);
+ }
+
+ ret = regulator_enable(bus->regulator);
+ if (ret < 0) {
+ dev_err(dev, "failed to enable VDD regulator\n");
+ return ret;
+ }
+
+ /*
+ * Get the devfreq-event devices to get the current utilization of
+ * buses. This raw data is used by the devfreq simple-ondemand governor.
+ */
+ count = devfreq_event_get_edev_count(dev);
+ if (count < 0) {
+ dev_err(dev, "failed to get the count of devfreq-event dev\n");
+ ret = count;
+ goto err_regulator;
+ }
+ bus->edev_count = count;
+
+ size = sizeof(*bus->edev) * count;
+ bus->edev = devm_kzalloc(dev, size, GFP_KERNEL);
+ if (!bus->edev) {
+ ret = -ENOMEM;
+ goto err_regulator;
+ }
+
+ for (i = 0; i < count; i++) {
+ bus->edev[i] = devfreq_event_get_edev_by_phandle(dev, i);
+ if (IS_ERR(bus->edev[i])) {
+ ret = -EPROBE_DEFER;
+ goto err_regulator;
+ }
+ }
+
+ /*
+ * Optionally, get the saturation ratio for the specific Exynos SoC.
+ * When measuring the utilization of each AXI bus with devfreq-event
+ * devices, the measured busy cycles can be much lower than the total
+ * cycles of the bus during the sampling period. As a result, the devfreq
+ * simple-ondemand governor might never decide to change the current
+ * frequency because the utilization (= busy cycles / total cycles) stays
+ * too low. This property is used to scale the utilization when
+ * calculating busy_time in exynos_bus_get_dev_status().
+ */
+ if (of_property_read_u32(np, "exynos,saturation-ratio", &bus->ratio))
+ bus->ratio = DEFAULT_SATURATION_RATIO;
+
+ if (of_property_read_u32(np, "exynos,voltage-tolerance",
+ &bus->voltage_tolerance))
+ bus->voltage_tolerance = DEFAULT_VOLTAGE_TOLERANCE;
+
+ return 0;
+
+err_regulator:
+ regulator_disable(bus->regulator);
+
+ return ret;
+}
+
+static int exynos_bus_parse_of(struct device_node *np,
+ struct exynos_bus *bus)
+{
+ struct device *dev = bus->dev;
+ unsigned long rate;
+ int ret;
+
+ /* Get the clock to provide each bus with source clock */
+ bus->clk = devm_clk_get(dev, "bus");
+ if (IS_ERR(bus->clk)) {
+ dev_err(dev, "failed to get bus clock\n");
+ return PTR_ERR(bus->clk);
+ }
+
+ ret = clk_prepare_enable(bus->clk);
+ if (ret < 0) {
+ dev_err(dev, "failed to get enable clock\n");
+ return ret;
+ }
+
+ /* Get the freq and voltage from OPP table to scale the bus freq */
+ rcu_read_lock();
+ ret = dev_pm_opp_of_add_table(dev);
+ if (ret < 0) {
+ dev_err(dev, "failed to get OPP table\n");
+ rcu_read_unlock();
+ goto err_clk;
+ }
+
+ rate = clk_get_rate(bus->clk);
+ bus->curr_opp = devfreq_recommended_opp(dev, &rate, 0);
+ if (IS_ERR(bus->curr_opp)) {
+ dev_err(dev, "failed to find dev_pm_opp\n");
+ rcu_read_unlock();
+ ret = PTR_ERR(bus->curr_opp);
+ goto err_opp;
+ }
+ rcu_read_unlock();
+
+ return 0;
+
+err_opp:
+ dev_pm_opp_of_remove_table(dev);
+err_clk:
+ clk_disable_unprepare(bus->clk);
+
+ return ret;
+}
+
+static int exynos_bus_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct device_node *np = dev->of_node;
+ struct devfreq_dev_profile *profile;
+ struct devfreq_simple_ondemand_data *ondemand_data;
+ struct devfreq_passive_data *passive_data;
+ struct devfreq *parent_devfreq;
+ struct exynos_bus *bus;
+ int ret, max_state;
+ unsigned long min_freq, max_freq;
+
+ if (!np) {
+ dev_err(dev, "failed to find devicetree node\n");
+ return -EINVAL;
+ }
+
+ bus = devm_kzalloc(&pdev->dev, sizeof(*bus), GFP_KERNEL);
+ if (!bus)
+ return -ENOMEM;
+ mutex_init(&bus->lock);
+ bus->dev = &pdev->dev;
+ platform_set_drvdata(pdev, bus);
+
+ /* Parse the device-tree to get the resource information */
+ ret = exynos_bus_parse_of(np, bus);
+ if (ret < 0)
+ goto err;
+
+ profile = devm_kzalloc(dev, sizeof(*profile), GFP_KERNEL);
+ if (!profile) {
+ ret = -ENOMEM;
+ goto err;
+ }
+
+ if (of_parse_phandle(dev->of_node, "devfreq", 0))
+ goto passive;
+ else
+ ret = exynos_bus_parent_parse_of(np, bus);
+
+ if (ret < 0)
+ goto err;
+
+ /* Initialize the struct profile and governor data for parent device */
+ profile->polling_ms = 50;
+ profile->target = exynos_bus_target;
+ profile->get_dev_status = exynos_bus_get_dev_status;
+ profile->exit = exynos_bus_exit;
+
+ ondemand_data = devm_kzalloc(dev, sizeof(*ondemand_data), GFP_KERNEL);
+ if (!ondemand_data) {
+ ret = -ENOMEM;
+ goto err;
+ }
+ ondemand_data->upthreshold = 40;
+ ondemand_data->downdifferential = 5;
+
+ /* Add devfreq device to monitor and handle the exynos bus */
+ bus->devfreq = devm_devfreq_add_device(dev, profile, "simple_ondemand",
+ ondemand_data);
+ if (IS_ERR(bus->devfreq)) {
+ dev_err(dev, "failed to add devfreq device\n");
+ ret = PTR_ERR(bus->devfreq);
+ goto err;
+ }
+
+ /* Register opp_notifier to catch the change of OPP */
+ ret = devm_devfreq_register_opp_notifier(dev, bus->devfreq);
+ if (ret < 0) {
+ dev_err(dev, "failed to register opp notifier\n");
+ goto err;
+ }
+
+ /*
+ * Enable devfreq-event to get raw data which is used to determine
+ * current bus load.
+ */
+ ret = exynos_bus_enable_edev(bus);
+ if (ret < 0) {
+ dev_err(dev, "failed to enable devfreq-event devices\n");
+ goto err;
+ }
+
+ ret = exynos_bus_set_event(bus);
+ if (ret < 0) {
+ dev_err(dev, "failed to set event to devfreq-event devices\n");
+ goto err;
+ }
+
+ goto out;
+passive:
+ /* Initialize the struct profile and governor data for passive device */
+ profile->target = exynos_bus_passive_target;
+ profile->exit = exynos_bus_passive_exit;
+
+ /* Get the instance of parent devfreq device */
+ parent_devfreq = devfreq_get_devfreq_by_phandle(dev, 0);
+ if (IS_ERR(parent_devfreq)) {
+ ret = -EPROBE_DEFER;
+ goto err;
+ }
+
+ passive_data = devm_kzalloc(dev, sizeof(*passive_data), GFP_KERNEL);
+ if (!passive_data) {
+ ret = -ENOMEM;
+ goto err;
+ }
+ passive_data->parent = parent_devfreq;
+
+ /* Add devfreq device for exynos bus with passive governor */
+ bus->devfreq = devm_devfreq_add_device(dev, profile, "passive",
+ passive_data);
+ if (IS_ERR(bus->devfreq)) {
+ dev_err(dev,
+ "failed to add devfreq dev with passive governor\n");
+ ret = -EPROBE_DEFER;
+ goto err;
+ }
+
+out:
+ max_state = bus->devfreq->profile->max_state;
+ min_freq = (bus->devfreq->profile->freq_table[0] / 1000);
+ max_freq = (bus->devfreq->profile->freq_table[max_state - 1] / 1000);
+ pr_info("exynos-bus: new bus device registered: %s (%6ld KHz ~ %6ld KHz)\n",
+ dev_name(dev), min_freq, max_freq);
+
+ return 0;
+
+err:
+ dev_pm_opp_of_remove_table(dev);
+ clk_disable_unprepare(bus->clk);
+
+ return ret;
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int exynos_bus_resume(struct device *dev)
+{
+ struct exynos_bus *bus = dev_get_drvdata(dev);
+ int ret;
+
+ ret = exynos_bus_enable_edev(bus);
+ if (ret < 0) {
+ dev_err(dev, "failed to enable the devfreq-event devices\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static int exynos_bus_suspend(struct device *dev)
+{
+ struct exynos_bus *bus = dev_get_drvdata(dev);
+ int ret;
+
+ ret = exynos_bus_disable_edev(bus);
+ if (ret < 0) {
+ dev_err(dev, "failed to disable the devfreq-event devices\n");
+ return ret;
+ }
+
+ return 0;
+}
+#endif
+
+static const struct dev_pm_ops exynos_bus_pm = {
+ SET_SYSTEM_SLEEP_PM_OPS(exynos_bus_suspend, exynos_bus_resume)
+};
+
+static const struct of_device_id exynos_bus_of_match[] = {
+ { .compatible = "samsung,exynos-bus", },
+ { /* sentinel */ },
+};
+MODULE_DEVICE_TABLE(of, exynos_bus_of_match);
+
+static struct platform_driver exynos_bus_platdrv = {
+ .probe = exynos_bus_probe,
+ .driver = {
+ .name = "exynos-bus",
+ .pm = &exynos_bus_pm,
+ .of_match_table = of_match_ptr(exynos_bus_of_match),
+ },
+};
+module_platform_driver(exynos_bus_platdrv);
+
+MODULE_DESCRIPTION("Generic Exynos Bus frequency driver");
+MODULE_AUTHOR("Chanwoo Choi <cw00.choi@samsung.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/devfreq/exynos/Makefile b/drivers/devfreq/exynos/Makefile
deleted file mode 100644
index 49bc9175f923..000000000000
--- a/drivers/devfreq/exynos/Makefile
+++ /dev/null
@@ -1,3 +0,0 @@
-# Exynos DEVFREQ Drivers
-obj-$(CONFIG_ARM_EXYNOS4_BUS_DEVFREQ) += exynos_ppmu.o exynos4_bus.o
-obj-$(CONFIG_ARM_EXYNOS5_BUS_DEVFREQ) += exynos_ppmu.o exynos5_bus.o
diff --git a/drivers/devfreq/exynos/exynos4_bus.c b/drivers/devfreq/exynos/exynos4_bus.c
deleted file mode 100644
index da9509205169..000000000000
--- a/drivers/devfreq/exynos/exynos4_bus.c
+++ /dev/null
@@ -1,1055 +0,0 @@
-/* drivers/devfreq/exynos4210_memorybus.c
- *
- * Copyright (c) 2011 Samsung Electronics Co., Ltd.
- * http://www.samsung.com/
- * MyungJoo Ham <myungjoo.ham@samsung.com>
- *
- * EXYNOS4 - Memory/Bus clock frequency scaling support in DEVFREQ framework
- * This version supports EXYNOS4210 only. This changes bus frequencies
- * and vddint voltages. Exynos4412/4212 should be able to be supported
- * with minor modifications.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- */
-
-#include <linux/io.h>
-#include <linux/slab.h>
-#include <linux/mutex.h>
-#include <linux/suspend.h>
-#include <linux/pm_opp.h>
-#include <linux/devfreq.h>
-#include <linux/platform_device.h>
-#include <linux/regulator/consumer.h>
-#include <linux/module.h>
-
-#include <mach/map.h>
-
-#include "exynos_ppmu.h"
-#include "exynos4_bus.h"
-
-#define MAX_SAFEVOLT 1200000 /* 1.2V */
-
-enum exynos4_busf_type {
- TYPE_BUSF_EXYNOS4210,
- TYPE_BUSF_EXYNOS4x12,
-};
-
-/* Assume that the bus is saturated if the utilization is 40% */
-#define BUS_SATURATION_RATIO 40
-
-enum busclk_level_idx {
- LV_0 = 0,
- LV_1,
- LV_2,
- LV_3,
- LV_4,
- _LV_END
-};
-
-enum exynos_ppmu_idx {
- PPMU_DMC0,
- PPMU_DMC1,
- PPMU_END,
-};
-
-#define EX4210_LV_MAX LV_2
-#define EX4x12_LV_MAX LV_4
-#define EX4210_LV_NUM (LV_2 + 1)
-#define EX4x12_LV_NUM (LV_4 + 1)
-
-/**
- * struct busfreq_opp_info - opp information for bus
- * @rate: Frequency in hertz
- * @volt: Voltage in microvolts corresponding to this OPP
- */
-struct busfreq_opp_info {
- unsigned long rate;
- unsigned long volt;
-};
-
-struct busfreq_data {
- enum exynos4_busf_type type;
- struct device *dev;
- struct devfreq *devfreq;
- bool disabled;
- struct regulator *vdd_int;
- struct regulator *vdd_mif; /* Exynos4412/4212 only */
- struct busfreq_opp_info curr_oppinfo;
- struct busfreq_ppmu_data ppmu_data;
-
- struct notifier_block pm_notifier;
- struct mutex lock;
-
- /* Dividers calculated at boot/probe-time */
- unsigned int dmc_divtable[_LV_END]; /* DMC0 */
- unsigned int top_divtable[_LV_END];
-};
-
-/* 4210 controls clock of mif and voltage of int */
-static struct bus_opp_table exynos4210_busclk_table[] = {
- {LV_0, 400000, 1150000},
- {LV_1, 267000, 1050000},
- {LV_2, 133000, 1025000},
- {0, 0, 0},
-};
-
-/*
- * MIF is the main control knob clock for Exynos4x12 MIF/INT
- * clock and voltage of both mif/int are controlled.
- */
-static struct bus_opp_table exynos4x12_mifclk_table[] = {
- {LV_0, 400000, 1100000},
- {LV_1, 267000, 1000000},
- {LV_2, 160000, 950000},
- {LV_3, 133000, 950000},
- {LV_4, 100000, 950000},
- {0, 0, 0},
-};
-
-/*
- * INT is not the control knob of 4x12. LV_x is not meant to represent
- * the current performance. (MIF does)
- */
-static struct bus_opp_table exynos4x12_intclk_table[] = {
- {LV_0, 200000, 1000000},
- {LV_1, 160000, 950000},
- {LV_2, 133000, 925000},
- {LV_3, 100000, 900000},
- {0, 0, 0},
-};
-
-/* TODO: asv volt definitions are "__initdata"? */
-/* Some chips have different operating voltages */
-static unsigned int exynos4210_asv_volt[][EX4210_LV_NUM] = {
- {1150000, 1050000, 1050000},
- {1125000, 1025000, 1025000},
- {1100000, 1000000, 1000000},
- {1075000, 975000, 975000},
- {1050000, 950000, 950000},
-};
-
-static unsigned int exynos4x12_mif_step_50[][EX4x12_LV_NUM] = {
- /* 400 267 160 133 100 */
- {1050000, 950000, 900000, 900000, 900000}, /* ASV0 */
- {1050000, 950000, 900000, 900000, 900000}, /* ASV1 */
- {1050000, 950000, 900000, 900000, 900000}, /* ASV2 */
- {1050000, 900000, 900000, 900000, 900000}, /* ASV3 */
- {1050000, 900000, 900000, 900000, 850000}, /* ASV4 */
- {1050000, 900000, 900000, 850000, 850000}, /* ASV5 */
- {1050000, 900000, 850000, 850000, 850000}, /* ASV6 */
- {1050000, 900000, 850000, 850000, 850000}, /* ASV7 */
- {1050000, 900000, 850000, 850000, 850000}, /* ASV8 */
-};
-
-static unsigned int exynos4x12_int_volt[][EX4x12_LV_NUM] = {
- /* 200 160 133 100 */
- {1000000, 950000, 925000, 900000}, /* ASV0 */
- {975000, 925000, 925000, 900000}, /* ASV1 */
- {950000, 925000, 900000, 875000}, /* ASV2 */
- {950000, 900000, 900000, 875000}, /* ASV3 */
- {925000, 875000, 875000, 875000}, /* ASV4 */
- {900000, 850000, 850000, 850000}, /* ASV5 */
- {900000, 850000, 850000, 850000}, /* ASV6 */
- {900000, 850000, 850000, 850000}, /* ASV7 */
- {900000, 850000, 850000, 850000}, /* ASV8 */
-};
-
-/*** Clock Divider Data for Exynos4210 ***/
-static unsigned int exynos4210_clkdiv_dmc0[][8] = {
- /*
- * Clock divider value for following
- * { DIVACP, DIVACP_PCLK, DIVDPHY, DIVDMC, DIVDMCD
- * DIVDMCP, DIVCOPY2, DIVCORE_TIMERS }
- */
-
- /* DMC L0: 400MHz */
- { 3, 1, 1, 1, 1, 1, 3, 1 },
- /* DMC L1: 266.7MHz */
- { 4, 1, 1, 2, 1, 1, 3, 1 },
- /* DMC L2: 133MHz */
- { 5, 1, 1, 5, 1, 1, 3, 1 },
-};
-static unsigned int exynos4210_clkdiv_top[][5] = {
- /*
- * Clock divider value for following
- * { DIVACLK200, DIVACLK100, DIVACLK160, DIVACLK133, DIVONENAND }
- */
- /* ACLK200 L0: 200MHz */
- { 3, 7, 4, 5, 1 },
- /* ACLK200 L1: 160MHz */
- { 4, 7, 5, 6, 1 },
- /* ACLK200 L2: 133MHz */
- { 5, 7, 7, 7, 1 },
-};
-static unsigned int exynos4210_clkdiv_lr_bus[][2] = {
- /*
- * Clock divider value for following
- * { DIVGDL/R, DIVGPL/R }
- */
- /* ACLK_GDL/R L1: 200MHz */
- { 3, 1 },
- /* ACLK_GDL/R L2: 160MHz */
- { 4, 1 },
- /* ACLK_GDL/R L3: 133MHz */
- { 5, 1 },
-};
-
-/*** Clock Divider Data for Exynos4212/4412 ***/
-static unsigned int exynos4x12_clkdiv_dmc0[][6] = {
- /*
- * Clock divider value for following
- * { DIVACP, DIVACP_PCLK, DIVDPHY, DIVDMC, DIVDMCD
- * DIVDMCP}
- */
-
- /* DMC L0: 400MHz */
- {3, 1, 1, 1, 1, 1},
- /* DMC L1: 266.7MHz */
- {4, 1, 1, 2, 1, 1},
- /* DMC L2: 160MHz */
- {5, 1, 1, 4, 1, 1},
- /* DMC L3: 133MHz */
- {5, 1, 1, 5, 1, 1},
- /* DMC L4: 100MHz */
- {7, 1, 1, 7, 1, 1},
-};
-static unsigned int exynos4x12_clkdiv_dmc1[][6] = {
- /*
- * Clock divider value for following
- * { G2DACP, DIVC2C, DIVC2C_ACLK }
- */
-
- /* DMC L0: 400MHz */
- {3, 1, 1},
- /* DMC L1: 266.7MHz */
- {4, 2, 1},
- /* DMC L2: 160MHz */
- {5, 4, 1},
- /* DMC L3: 133MHz */
- {5, 5, 1},
- /* DMC L4: 100MHz */
- {7, 7, 1},
-};
-static unsigned int exynos4x12_clkdiv_top[][5] = {
- /*
- * Clock divider value for following
- * { DIVACLK266_GPS, DIVACLK100, DIVACLK160,
- DIVACLK133, DIVONENAND }
- */
-
- /* ACLK_GDL/R L0: 200MHz */
- {2, 7, 4, 5, 1},
- /* ACLK_GDL/R L1: 200MHz */
- {2, 7, 4, 5, 1},
- /* ACLK_GDL/R L2: 160MHz */
- {4, 7, 5, 7, 1},
- /* ACLK_GDL/R L3: 133MHz */
- {4, 7, 5, 7, 1},
- /* ACLK_GDL/R L4: 100MHz */
- {7, 7, 7, 7, 1},
-};
-static unsigned int exynos4x12_clkdiv_lr_bus[][2] = {
- /*
- * Clock divider value for following
- * { DIVGDL/R, DIVGPL/R }
- */
-
- /* ACLK_GDL/R L0: 200MHz */
- {3, 1},
- /* ACLK_GDL/R L1: 200MHz */
- {3, 1},
- /* ACLK_GDL/R L2: 160MHz */
- {4, 1},
- /* ACLK_GDL/R L3: 133MHz */
- {5, 1},
- /* ACLK_GDL/R L4: 100MHz */
- {7, 1},
-};
-static unsigned int exynos4x12_clkdiv_sclkip[][3] = {
- /*
- * Clock divider value for following
- * { DIVMFC, DIVJPEG, DIVFIMC0~3}
- */
-
- /* SCLK_MFC: 200MHz */
- {3, 3, 4},
- /* SCLK_MFC: 200MHz */
- {3, 3, 4},
- /* SCLK_MFC: 160MHz */
- {4, 4, 5},
- /* SCLK_MFC: 133MHz */
- {5, 5, 5},
- /* SCLK_MFC: 100MHz */
- {7, 7, 7},
-};
-
-
-static int exynos4210_set_busclk(struct busfreq_data *data,
- struct busfreq_opp_info *oppi)
-{
- unsigned int index;
- unsigned int tmp;
-
- for (index = LV_0; index < EX4210_LV_NUM; index++)
- if (oppi->rate == exynos4210_busclk_table[index].clk)
- break;
-
- if (index == EX4210_LV_NUM)
- return -EINVAL;
-
- /* Change Divider - DMC0 */
- tmp = data->dmc_divtable[index];
-
- __raw_writel(tmp, EXYNOS4_CLKDIV_DMC0);
-
- do {
- tmp = __raw_readl(EXYNOS4_CLKDIV_STAT_DMC0);
- } while (tmp & 0x11111111);
-
- /* Change Divider - TOP */
- tmp = data->top_divtable[index];
-
- __raw_writel(tmp, EXYNOS4_CLKDIV_TOP);
-
- do {
- tmp = __raw_readl(EXYNOS4_CLKDIV_STAT_TOP);
- } while (tmp & 0x11111);
-
- /* Change Divider - LEFTBUS */
- tmp = __raw_readl(EXYNOS4_CLKDIV_LEFTBUS);
-
- tmp &= ~(EXYNOS4_CLKDIV_BUS_GDLR_MASK | EXYNOS4_CLKDIV_BUS_GPLR_MASK);
-
- tmp |= ((exynos4210_clkdiv_lr_bus[index][0] <<
- EXYNOS4_CLKDIV_BUS_GDLR_SHIFT) |
- (exynos4210_clkdiv_lr_bus[index][1] <<
- EXYNOS4_CLKDIV_BUS_GPLR_SHIFT));
-
- __raw_writel(tmp, EXYNOS4_CLKDIV_LEFTBUS);
-
- do {
- tmp = __raw_readl(EXYNOS4_CLKDIV_STAT_LEFTBUS);
- } while (tmp & 0x11);
-
- /* Change Divider - RIGHTBUS */
- tmp = __raw_readl(EXYNOS4_CLKDIV_RIGHTBUS);
-
- tmp &= ~(EXYNOS4_CLKDIV_BUS_GDLR_MASK | EXYNOS4_CLKDIV_BUS_GPLR_MASK);
-
- tmp |= ((exynos4210_clkdiv_lr_bus[index][0] <<
- EXYNOS4_CLKDIV_BUS_GDLR_SHIFT) |
- (exynos4210_clkdiv_lr_bus[index][1] <<
- EXYNOS4_CLKDIV_BUS_GPLR_SHIFT));
-
- __raw_writel(tmp, EXYNOS4_CLKDIV_RIGHTBUS);
-
- do {
- tmp = __raw_readl(EXYNOS4_CLKDIV_STAT_RIGHTBUS);
- } while (tmp & 0x11);
-
- return 0;
-}
-
-static int exynos4x12_set_busclk(struct busfreq_data *data,
- struct busfreq_opp_info *oppi)
-{
- unsigned int index;
- unsigned int tmp;
-
- for (index = LV_0; index < EX4x12_LV_NUM; index++)
- if (oppi->rate == exynos4x12_mifclk_table[index].clk)
- break;
-
- if (index == EX4x12_LV_NUM)
- return -EINVAL;
-
- /* Change Divider - DMC0 */
- tmp = data->dmc_divtable[index];
-
- __raw_writel(tmp, EXYNOS4_CLKDIV_DMC0);
-
- do {
- tmp = __raw_readl(EXYNOS4_CLKDIV_STAT_DMC0);
- } while (tmp & 0x11111111);
-
- /* Change Divider - DMC1 */
- tmp = __raw_readl(EXYNOS4_CLKDIV_DMC1);
-
- tmp &= ~(EXYNOS4_CLKDIV_DMC1_G2D_ACP_MASK |
- EXYNOS4_CLKDIV_DMC1_C2C_MASK |
- EXYNOS4_CLKDIV_DMC1_C2CACLK_MASK);
-
- tmp |= ((exynos4x12_clkdiv_dmc1[index][0] <<
- EXYNOS4_CLKDIV_DMC1_G2D_ACP_SHIFT) |
- (exynos4x12_clkdiv_dmc1[index][1] <<
- EXYNOS4_CLKDIV_DMC1_C2C_SHIFT) |
- (exynos4x12_clkdiv_dmc1[index][2] <<
- EXYNOS4_CLKDIV_DMC1_C2CACLK_SHIFT));
-
- __raw_writel(tmp, EXYNOS4_CLKDIV_DMC1);
-
- do {
- tmp = __raw_readl(EXYNOS4_CLKDIV_STAT_DMC1);
- } while (tmp & 0x111111);
-
- /* Change Divider - TOP */
- tmp = __raw_readl(EXYNOS4_CLKDIV_TOP);
-
- tmp &= ~(EXYNOS4_CLKDIV_TOP_ACLK266_GPS_MASK |
- EXYNOS4_CLKDIV_TOP_ACLK100_MASK |
- EXYNOS4_CLKDIV_TOP_ACLK160_MASK |
- EXYNOS4_CLKDIV_TOP_ACLK133_MASK |
- EXYNOS4_CLKDIV_TOP_ONENAND_MASK);
-
- tmp |= ((exynos4x12_clkdiv_top[index][0] <<
- EXYNOS4_CLKDIV_TOP_ACLK266_GPS_SHIFT) |
- (exynos4x12_clkdiv_top[index][1] <<
- EXYNOS4_CLKDIV_TOP_ACLK100_SHIFT) |
- (exynos4x12_clkdiv_top[index][2] <<
- EXYNOS4_CLKDIV_TOP_ACLK160_SHIFT) |
- (exynos4x12_clkdiv_top[index][3] <<
- EXYNOS4_CLKDIV_TOP_ACLK133_SHIFT) |
- (exynos4x12_clkdiv_top[index][4] <<
- EXYNOS4_CLKDIV_TOP_ONENAND_SHIFT));
-
- __raw_writel(tmp, EXYNOS4_CLKDIV_TOP);
-
- do {
- tmp = __raw_readl(EXYNOS4_CLKDIV_STAT_TOP);
- } while (tmp & 0x11111);
-
- /* Change Divider - LEFTBUS */
- tmp = __raw_readl(EXYNOS4_CLKDIV_LEFTBUS);
-
- tmp &= ~(EXYNOS4_CLKDIV_BUS_GDLR_MASK | EXYNOS4_CLKDIV_BUS_GPLR_MASK);
-
- tmp |= ((exynos4x12_clkdiv_lr_bus[index][0] <<
- EXYNOS4_CLKDIV_BUS_GDLR_SHIFT) |
- (exynos4x12_clkdiv_lr_bus[index][1] <<
- EXYNOS4_CLKDIV_BUS_GPLR_SHIFT));
-
- __raw_writel(tmp, EXYNOS4_CLKDIV_LEFTBUS);
-
- do {
- tmp = __raw_readl(EXYNOS4_CLKDIV_STAT_LEFTBUS);
- } while (tmp & 0x11);
-
- /* Change Divider - RIGHTBUS */
- tmp = __raw_readl(EXYNOS4_CLKDIV_RIGHTBUS);
-
- tmp &= ~(EXYNOS4_CLKDIV_BUS_GDLR_MASK | EXYNOS4_CLKDIV_BUS_GPLR_MASK);
-
- tmp |= ((exynos4x12_clkdiv_lr_bus[index][0] <<
- EXYNOS4_CLKDIV_BUS_GDLR_SHIFT) |
- (exynos4x12_clkdiv_lr_bus[index][1] <<
- EXYNOS4_CLKDIV_BUS_GPLR_SHIFT));
-
- __raw_writel(tmp, EXYNOS4_CLKDIV_RIGHTBUS);
-
- do {
- tmp = __raw_readl(EXYNOS4_CLKDIV_STAT_RIGHTBUS);
- } while (tmp & 0x11);
-
- /* Change Divider - MFC */
- tmp = __raw_readl(EXYNOS4_CLKDIV_MFC);
-
- tmp &= ~(EXYNOS4_CLKDIV_MFC_MASK);
-
- tmp |= ((exynos4x12_clkdiv_sclkip[index][0] <<
- EXYNOS4_CLKDIV_MFC_SHIFT));
-
- __raw_writel(tmp, EXYNOS4_CLKDIV_MFC);
-
- do {
- tmp = __raw_readl(EXYNOS4_CLKDIV_STAT_MFC);
- } while (tmp & 0x1);
-
- /* Change Divider - JPEG */
- tmp = __raw_readl(EXYNOS4_CLKDIV_CAM1);
-
- tmp &= ~(EXYNOS4_CLKDIV_CAM1_JPEG_MASK);
-
- tmp |= ((exynos4x12_clkdiv_sclkip[index][1] <<
- EXYNOS4_CLKDIV_CAM1_JPEG_SHIFT));
-
- __raw_writel(tmp, EXYNOS4_CLKDIV_CAM1);
-
- do {
- tmp = __raw_readl(EXYNOS4_CLKDIV_STAT_CAM1);
- } while (tmp & 0x1);
-
- /* Change Divider - FIMC0~3 */
- tmp = __raw_readl(EXYNOS4_CLKDIV_CAM);
-
- tmp &= ~(EXYNOS4_CLKDIV_CAM_FIMC0_MASK | EXYNOS4_CLKDIV_CAM_FIMC1_MASK |
- EXYNOS4_CLKDIV_CAM_FIMC2_MASK | EXYNOS4_CLKDIV_CAM_FIMC3_MASK);
-
- tmp |= ((exynos4x12_clkdiv_sclkip[index][2] <<
- EXYNOS4_CLKDIV_CAM_FIMC0_SHIFT) |
- (exynos4x12_clkdiv_sclkip[index][2] <<
- EXYNOS4_CLKDIV_CAM_FIMC1_SHIFT) |
- (exynos4x12_clkdiv_sclkip[index][2] <<
- EXYNOS4_CLKDIV_CAM_FIMC2_SHIFT) |
- (exynos4x12_clkdiv_sclkip[index][2] <<
- EXYNOS4_CLKDIV_CAM_FIMC3_SHIFT));
-
- __raw_writel(tmp, EXYNOS4_CLKDIV_CAM);
-
- do {
- tmp = __raw_readl(EXYNOS4_CLKDIV_STAT_CAM1);
- } while (tmp & 0x1111);
-
- return 0;
-}
-
-static int exynos4x12_get_intspec(unsigned long mifclk)
-{
- int i = 0;
-
- while (exynos4x12_intclk_table[i].clk) {
- if (exynos4x12_intclk_table[i].clk <= mifclk)
- return i;
- i++;
- }
-
- return -EINVAL;
-}
-
-static int exynos4_bus_setvolt(struct busfreq_data *data,
- struct busfreq_opp_info *oppi,
- struct busfreq_opp_info *oldoppi)
-{
- int err = 0, tmp;
- unsigned long volt = oppi->volt;
-
- switch (data->type) {
- case TYPE_BUSF_EXYNOS4210:
- /* OPP represents DMC clock + INT voltage */
- err = regulator_set_voltage(data->vdd_int, volt,
- MAX_SAFEVOLT);
- break;
- case TYPE_BUSF_EXYNOS4x12:
- /* OPP represents MIF clock + MIF voltage */
- err = regulator_set_voltage(data->vdd_mif, volt,
- MAX_SAFEVOLT);
- if (err)
- break;
-
- tmp = exynos4x12_get_intspec(oppi->rate);
- if (tmp < 0) {
- err = tmp;
- regulator_set_voltage(data->vdd_mif,
- oldoppi->volt,
- MAX_SAFEVOLT);
- break;
- }
- err = regulator_set_voltage(data->vdd_int,
- exynos4x12_intclk_table[tmp].volt,
- MAX_SAFEVOLT);
- /* Try to recover */
- if (err)
- regulator_set_voltage(data->vdd_mif,
- oldoppi->volt,
- MAX_SAFEVOLT);
- break;
- default:
- err = -EINVAL;
- }
-
- return err;
-}
-
-static int exynos4_bus_target(struct device *dev, unsigned long *_freq,
- u32 flags)
-{
- int err = 0;
- struct platform_device *pdev = container_of(dev, struct platform_device,
- dev);
- struct busfreq_data *data = platform_get_drvdata(pdev);
- struct dev_pm_opp *opp;
- unsigned long freq;
- unsigned long old_freq = data->curr_oppinfo.rate;
- struct busfreq_opp_info new_oppinfo;
-
- rcu_read_lock();
- opp = devfreq_recommended_opp(dev, _freq, flags);
- if (IS_ERR(opp)) {
- rcu_read_unlock();
- return PTR_ERR(opp);
- }
- new_oppinfo.rate = dev_pm_opp_get_freq(opp);
- new_oppinfo.volt = dev_pm_opp_get_voltage(opp);
- rcu_read_unlock();
- freq = new_oppinfo.rate;
-
- if (old_freq == freq)
- return 0;
-
- dev_dbg(dev, "targeting %lukHz %luuV\n", freq, new_oppinfo.volt);
-
- mutex_lock(&data->lock);
-
- if (data->disabled)
- goto out;
-
- if (old_freq < freq)
- err = exynos4_bus_setvolt(data, &new_oppinfo,
- &data->curr_oppinfo);
- if (err)
- goto out;
-
- if (old_freq != freq) {
- switch (data->type) {
- case TYPE_BUSF_EXYNOS4210:
- err = exynos4210_set_busclk(data, &new_oppinfo);
- break;
- case TYPE_BUSF_EXYNOS4x12:
- err = exynos4x12_set_busclk(data, &new_oppinfo);
- break;
- default:
- err = -EINVAL;
- }
- }
- if (err)
- goto out;
-
- if (old_freq > freq)
- err = exynos4_bus_setvolt(data, &new_oppinfo,
- &data->curr_oppinfo);
- if (err)
- goto out;
-
- data->curr_oppinfo = new_oppinfo;
-out:
- mutex_unlock(&data->lock);
- return err;
-}
-
-static int exynos4_bus_get_dev_status(struct device *dev,
- struct devfreq_dev_status *stat)
-{
- struct busfreq_data *data = dev_get_drvdata(dev);
- struct busfreq_ppmu_data *ppmu_data = &data->ppmu_data;
- int busier;
-
- exynos_read_ppmu(ppmu_data);
- busier = exynos_get_busier_ppmu(ppmu_data);
- stat->current_frequency = data->curr_oppinfo.rate;
-
- /* Number of cycles spent on memory access */
- stat->busy_time = ppmu_data->ppmu[busier].count[PPMU_PMNCNT3];
- stat->busy_time *= 100 / BUS_SATURATION_RATIO;
- stat->total_time = ppmu_data->ppmu[busier].ccnt;
-
- /* If the counters have overflown, retry */
- if (ppmu_data->ppmu[busier].ccnt_overflow ||
- ppmu_data->ppmu[busier].count_overflow[0])
- return -EAGAIN;
-
- return 0;
-}
-
-static struct devfreq_dev_profile exynos4_devfreq_profile = {
- .initial_freq = 400000,
- .polling_ms = 50,
- .target = exynos4_bus_target,
- .get_dev_status = exynos4_bus_get_dev_status,
-};
-
-static int exynos4210_init_tables(struct busfreq_data *data)
-{
- u32 tmp;
- int mgrp;
- int i, err = 0;
-
- tmp = __raw_readl(EXYNOS4_CLKDIV_DMC0);
- for (i = LV_0; i < EX4210_LV_NUM; i++) {
- tmp &= ~(EXYNOS4_CLKDIV_DMC0_ACP_MASK |
- EXYNOS4_CLKDIV_DMC0_ACPPCLK_MASK |
- EXYNOS4_CLKDIV_DMC0_DPHY_MASK |
- EXYNOS4_CLKDIV_DMC0_DMC_MASK |
- EXYNOS4_CLKDIV_DMC0_DMCD_MASK |
- EXYNOS4_CLKDIV_DMC0_DMCP_MASK |
- EXYNOS4_CLKDIV_DMC0_COPY2_MASK |
- EXYNOS4_CLKDIV_DMC0_CORETI_MASK);
-
- tmp |= ((exynos4210_clkdiv_dmc0[i][0] <<
- EXYNOS4_CLKDIV_DMC0_ACP_SHIFT) |
- (exynos4210_clkdiv_dmc0[i][1] <<
- EXYNOS4_CLKDIV_DMC0_ACPPCLK_SHIFT) |
- (exynos4210_clkdiv_dmc0[i][2] <<
- EXYNOS4_CLKDIV_DMC0_DPHY_SHIFT) |
- (exynos4210_clkdiv_dmc0[i][3] <<
- EXYNOS4_CLKDIV_DMC0_DMC_SHIFT) |
- (exynos4210_clkdiv_dmc0[i][4] <<
- EXYNOS4_CLKDIV_DMC0_DMCD_SHIFT) |
- (exynos4210_clkdiv_dmc0[i][5] <<
- EXYNOS4_CLKDIV_DMC0_DMCP_SHIFT) |
- (exynos4210_clkdiv_dmc0[i][6] <<
- EXYNOS4_CLKDIV_DMC0_COPY2_SHIFT) |
- (exynos4210_clkdiv_dmc0[i][7] <<
- EXYNOS4_CLKDIV_DMC0_CORETI_SHIFT));
-
- data->dmc_divtable[i] = tmp;
- }
-
- tmp = __raw_readl(EXYNOS4_CLKDIV_TOP);
- for (i = LV_0; i < EX4210_LV_NUM; i++) {
- tmp &= ~(EXYNOS4_CLKDIV_TOP_ACLK200_MASK |
- EXYNOS4_CLKDIV_TOP_ACLK100_MASK |
- EXYNOS4_CLKDIV_TOP_ACLK160_MASK |
- EXYNOS4_CLKDIV_TOP_ACLK133_MASK |
- EXYNOS4_CLKDIV_TOP_ONENAND_MASK);
-
- tmp |= ((exynos4210_clkdiv_top[i][0] <<
- EXYNOS4_CLKDIV_TOP_ACLK200_SHIFT) |
- (exynos4210_clkdiv_top[i][1] <<
- EXYNOS4_CLKDIV_TOP_ACLK100_SHIFT) |
- (exynos4210_clkdiv_top[i][2] <<
- EXYNOS4_CLKDIV_TOP_ACLK160_SHIFT) |
- (exynos4210_clkdiv_top[i][3] <<
- EXYNOS4_CLKDIV_TOP_ACLK133_SHIFT) |
- (exynos4210_clkdiv_top[i][4] <<
- EXYNOS4_CLKDIV_TOP_ONENAND_SHIFT));
-
- data->top_divtable[i] = tmp;
- }
-
- /*
- * TODO: init tmp based on busfreq_data
- * (device-tree or platform-data)
- */
- tmp = 0; /* Max voltages for the reliability of the unknown */
-
- pr_debug("ASV Group of Exynos4 is %d\n", tmp);
- /* Use merged grouping for voltage */
- switch (tmp) {
- case 0:
- mgrp = 0;
- break;
- case 1:
- case 2:
- mgrp = 1;
- break;
- case 3:
- case 4:
- mgrp = 2;
- break;
- case 5:
- case 6:
- mgrp = 3;
- break;
- case 7:
- mgrp = 4;
- break;
- default:
- pr_warn("Unknown ASV Group. Use max voltage.\n");
- mgrp = 0;
- }
-
- for (i = LV_0; i < EX4210_LV_NUM; i++)
- exynos4210_busclk_table[i].volt = exynos4210_asv_volt[mgrp][i];
-
- for (i = LV_0; i < EX4210_LV_NUM; i++) {
- err = dev_pm_opp_add(data->dev, exynos4210_busclk_table[i].clk,
- exynos4210_busclk_table[i].volt);
- if (err) {
- dev_err(data->dev, "Cannot add opp entries.\n");
- return err;
- }
- }
-
-
- return 0;
-}
-
-static int exynos4x12_init_tables(struct busfreq_data *data)
-{
- unsigned int i;
- unsigned int tmp;
- int ret;
-
- /* Enable pause function for DREX2 DVFS */
- tmp = __raw_readl(EXYNOS4_DMC_PAUSE_CTRL);
- tmp |= EXYNOS4_DMC_PAUSE_ENABLE;
- __raw_writel(tmp, EXYNOS4_DMC_PAUSE_CTRL);
-
- tmp = __raw_readl(EXYNOS4_CLKDIV_DMC0);
-
- for (i = 0; i < EX4x12_LV_NUM; i++) {
- tmp &= ~(EXYNOS4_CLKDIV_DMC0_ACP_MASK |
- EXYNOS4_CLKDIV_DMC0_ACPPCLK_MASK |
- EXYNOS4_CLKDIV_DMC0_DPHY_MASK |
- EXYNOS4_CLKDIV_DMC0_DMC_MASK |
- EXYNOS4_CLKDIV_DMC0_DMCD_MASK |
- EXYNOS4_CLKDIV_DMC0_DMCP_MASK);
-
- tmp |= ((exynos4x12_clkdiv_dmc0[i][0] <<
- EXYNOS4_CLKDIV_DMC0_ACP_SHIFT) |
- (exynos4x12_clkdiv_dmc0[i][1] <<
- EXYNOS4_CLKDIV_DMC0_ACPPCLK_SHIFT) |
- (exynos4x12_clkdiv_dmc0[i][2] <<
- EXYNOS4_CLKDIV_DMC0_DPHY_SHIFT) |
- (exynos4x12_clkdiv_dmc0[i][3] <<
- EXYNOS4_CLKDIV_DMC0_DMC_SHIFT) |
- (exynos4x12_clkdiv_dmc0[i][4] <<
- EXYNOS4_CLKDIV_DMC0_DMCD_SHIFT) |
- (exynos4x12_clkdiv_dmc0[i][5] <<
- EXYNOS4_CLKDIV_DMC0_DMCP_SHIFT));
-
- data->dmc_divtable[i] = tmp;
- }
-
- tmp = 0; /* Max voltages for the reliability of the unknown */
-
- if (tmp > 8)
- tmp = 0;
- pr_debug("ASV Group of Exynos4x12 is %d\n", tmp);
-
- for (i = 0; i < EX4x12_LV_NUM; i++) {
- exynos4x12_mifclk_table[i].volt =
- exynos4x12_mif_step_50[tmp][i];
- exynos4x12_intclk_table[i].volt =
- exynos4x12_int_volt[tmp][i];
- }
-
- for (i = 0; i < EX4x12_LV_NUM; i++) {
- ret = dev_pm_opp_add(data->dev, exynos4x12_mifclk_table[i].clk,
- exynos4x12_mifclk_table[i].volt);
- if (ret) {
- dev_err(data->dev, "Fail to add opp entries.\n");
- return ret;
- }
- }
-
- return 0;
-}
-
-static int exynos4_busfreq_pm_notifier_event(struct notifier_block *this,
- unsigned long event, void *ptr)
-{
- struct busfreq_data *data = container_of(this, struct busfreq_data,
- pm_notifier);
- struct dev_pm_opp *opp;
- struct busfreq_opp_info new_oppinfo;
- unsigned long maxfreq = ULONG_MAX;
- int err = 0;
-
- switch (event) {
- case PM_SUSPEND_PREPARE:
- /* Set Fastest and Deactivate DVFS */
- mutex_lock(&data->lock);
-
- data->disabled = true;
-
- rcu_read_lock();
- opp = dev_pm_opp_find_freq_floor(data->dev, &maxfreq);
- if (IS_ERR(opp)) {
- rcu_read_unlock();
- dev_err(data->dev, "%s: unable to find a min freq\n",
- __func__);
- mutex_unlock(&data->lock);
- return PTR_ERR(opp);
- }
- new_oppinfo.rate = dev_pm_opp_get_freq(opp);
- new_oppinfo.volt = dev_pm_opp_get_voltage(opp);
- rcu_read_unlock();
-
- err = exynos4_bus_setvolt(data, &new_oppinfo,
- &data->curr_oppinfo);
- if (err)
- goto unlock;
-
- switch (data->type) {
- case TYPE_BUSF_EXYNOS4210:
- err = exynos4210_set_busclk(data, &new_oppinfo);
- break;
- case TYPE_BUSF_EXYNOS4x12:
- err = exynos4x12_set_busclk(data, &new_oppinfo);
- break;
- default:
- err = -EINVAL;
- }
- if (err)
- goto unlock;
-
- data->curr_oppinfo = new_oppinfo;
-unlock:
- mutex_unlock(&data->lock);
- if (err)
- return err;
- return NOTIFY_OK;
- case PM_POST_RESTORE:
- case PM_POST_SUSPEND:
- /* Reactivate */
- mutex_lock(&data->lock);
- data->disabled = false;
- mutex_unlock(&data->lock);
- return NOTIFY_OK;
- }
-
- return NOTIFY_DONE;
-}
-
-static int exynos4_busfreq_probe(struct platform_device *pdev)
-{
- struct busfreq_data *data;
- struct busfreq_ppmu_data *ppmu_data;
- struct dev_pm_opp *opp;
- struct device *dev = &pdev->dev;
- int err = 0;
-
- data = devm_kzalloc(&pdev->dev, sizeof(struct busfreq_data), GFP_KERNEL);
- if (data == NULL) {
- dev_err(dev, "Cannot allocate memory.\n");
- return -ENOMEM;
- }
-
- ppmu_data = &data->ppmu_data;
- ppmu_data->ppmu_end = PPMU_END;
- ppmu_data->ppmu = devm_kzalloc(dev,
- sizeof(struct exynos_ppmu) * PPMU_END,
- GFP_KERNEL);
- if (!ppmu_data->ppmu) {
- dev_err(dev, "Failed to allocate memory for exynos_ppmu\n");
- return -ENOMEM;
- }
-
- data->type = pdev->id_entry->driver_data;
- ppmu_data->ppmu[PPMU_DMC0].hw_base = S5P_VA_DMC0;
- ppmu_data->ppmu[PPMU_DMC1].hw_base = S5P_VA_DMC1;
- data->pm_notifier.notifier_call = exynos4_busfreq_pm_notifier_event;
- data->dev = dev;
- mutex_init(&data->lock);
-
- switch (data->type) {
- case TYPE_BUSF_EXYNOS4210:
- err = exynos4210_init_tables(data);
- break;
- case TYPE_BUSF_EXYNOS4x12:
- err = exynos4x12_init_tables(data);
- break;
- default:
- dev_err(dev, "Cannot determine the device id %d\n", data->type);
- err = -EINVAL;
- }
- if (err) {
- dev_err(dev, "Cannot initialize busfreq table %d\n",
- data->type);
- return err;
- }
-
- data->vdd_int = devm_regulator_get(dev, "vdd_int");
- if (IS_ERR(data->vdd_int)) {
- dev_err(dev, "Cannot get the regulator \"vdd_int\"\n");
- return PTR_ERR(data->vdd_int);
- }
- if (data->type == TYPE_BUSF_EXYNOS4x12) {
- data->vdd_mif = devm_regulator_get(dev, "vdd_mif");
- if (IS_ERR(data->vdd_mif)) {
- dev_err(dev, "Cannot get the regulator \"vdd_mif\"\n");
- return PTR_ERR(data->vdd_mif);
- }
- }
-
- rcu_read_lock();
- opp = dev_pm_opp_find_freq_floor(dev,
- &exynos4_devfreq_profile.initial_freq);
- if (IS_ERR(opp)) {
- rcu_read_unlock();
- dev_err(dev, "Invalid initial frequency %lu kHz.\n",
- exynos4_devfreq_profile.initial_freq);
- return PTR_ERR(opp);
- }
- data->curr_oppinfo.rate = dev_pm_opp_get_freq(opp);
- data->curr_oppinfo.volt = dev_pm_opp_get_voltage(opp);
- rcu_read_unlock();
-
- platform_set_drvdata(pdev, data);
-
- data->devfreq = devm_devfreq_add_device(dev, &exynos4_devfreq_profile,
- "simple_ondemand", NULL);
- if (IS_ERR(data->devfreq))
- return PTR_ERR(data->devfreq);
-
- /*
- * Start PPMU (Performance Profiling Monitoring Unit) to check
- * utilization of each IP in the Exynos4 SoC.
- */
- busfreq_mon_reset(ppmu_data);
-
- /* Register opp_notifier for Exynos4 busfreq */
- err = devm_devfreq_register_opp_notifier(dev, data->devfreq);
- if (err < 0) {
- dev_err(dev, "Failed to register opp notifier\n");
- return err;
- }
-
- /* Register pm_notifier for Exynos4 busfreq */
- err = register_pm_notifier(&data->pm_notifier);
- if (err) {
- dev_err(dev, "Failed to setup pm notifier\n");
- return err;
- }
-
- return 0;
-}
-
-static int exynos4_busfreq_remove(struct platform_device *pdev)
-{
- struct busfreq_data *data = platform_get_drvdata(pdev);
-
- /* Unregister all of notifier chain */
- unregister_pm_notifier(&data->pm_notifier);
-
- return 0;
-}
-
-#ifdef CONFIG_PM_SLEEP
-static int exynos4_busfreq_resume(struct device *dev)
-{
- struct busfreq_data *data = dev_get_drvdata(dev);
- struct busfreq_ppmu_data *ppmu_data = &data->ppmu_data;
-
- busfreq_mon_reset(ppmu_data);
- return 0;
-}
-#endif
-
-static SIMPLE_DEV_PM_OPS(exynos4_busfreq_pm_ops, NULL, exynos4_busfreq_resume);
-
-static const struct platform_device_id exynos4_busfreq_id[] = {
- { "exynos4210-busfreq", TYPE_BUSF_EXYNOS4210 },
- { "exynos4412-busfreq", TYPE_BUSF_EXYNOS4x12 },
- { "exynos4212-busfreq", TYPE_BUSF_EXYNOS4x12 },
- { },
-};
-
-static struct platform_driver exynos4_busfreq_driver = {
- .probe = exynos4_busfreq_probe,
- .remove = exynos4_busfreq_remove,
- .id_table = exynos4_busfreq_id,
- .driver = {
- .name = "exynos4-busfreq",
- .pm = &exynos4_busfreq_pm_ops,
- },
-};
-
-static int __init exynos4_busfreq_init(void)
-{
- return platform_driver_register(&exynos4_busfreq_driver);
-}
-late_initcall(exynos4_busfreq_init);
-
-static void __exit exynos4_busfreq_exit(void)
-{
- platform_driver_unregister(&exynos4_busfreq_driver);
-}
-module_exit(exynos4_busfreq_exit);
-
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("EXYNOS4 busfreq driver with devfreq framework");
-MODULE_AUTHOR("MyungJoo Ham <myungjoo.ham@samsung.com>");
diff --git a/drivers/devfreq/exynos/exynos4_bus.h b/drivers/devfreq/exynos/exynos4_bus.h
deleted file mode 100644
index 94c73c18d28c..000000000000
--- a/drivers/devfreq/exynos/exynos4_bus.h
+++ /dev/null
@@ -1,110 +0,0 @@
-/*
- * Copyright (c) 2013 Samsung Electronics Co., Ltd.
- * http://www.samsung.com/
- *
- * EXYNOS4 BUS header
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
-*/
-
-#ifndef __DEVFREQ_EXYNOS4_BUS_H
-#define __DEVFREQ_EXYNOS4_BUS_H __FILE__
-
-#include <mach/map.h>
-
-#define EXYNOS4_CLKDIV_LEFTBUS (S5P_VA_CMU + 0x04500)
-#define EXYNOS4_CLKDIV_STAT_LEFTBUS (S5P_VA_CMU + 0x04600)
-
-#define EXYNOS4_CLKDIV_RIGHTBUS (S5P_VA_CMU + 0x08500)
-#define EXYNOS4_CLKDIV_STAT_RIGHTBUS (S5P_VA_CMU + 0x08600)
-
-#define EXYNOS4_CLKDIV_TOP (S5P_VA_CMU + 0x0C510)
-#define EXYNOS4_CLKDIV_CAM (S5P_VA_CMU + 0x0C520)
-#define EXYNOS4_CLKDIV_MFC (S5P_VA_CMU + 0x0C528)
-
-#define EXYNOS4_CLKDIV_STAT_TOP (S5P_VA_CMU + 0x0C610)
-#define EXYNOS4_CLKDIV_STAT_MFC (S5P_VA_CMU + 0x0C628)
-
-#define EXYNOS4210_CLKGATE_IP_IMAGE (S5P_VA_CMU + 0x0C930)
-#define EXYNOS4212_CLKGATE_IP_IMAGE (S5P_VA_CMU + 0x04930)
-
-#define EXYNOS4_CLKDIV_DMC0 (S5P_VA_CMU + 0x10500)
-#define EXYNOS4_CLKDIV_DMC1 (S5P_VA_CMU + 0x10504)
-#define EXYNOS4_CLKDIV_STAT_DMC0 (S5P_VA_CMU + 0x10600)
-#define EXYNOS4_CLKDIV_STAT_DMC1 (S5P_VA_CMU + 0x10604)
-
-#define EXYNOS4_DMC_PAUSE_CTRL (S5P_VA_CMU + 0x11094)
-#define EXYNOS4_DMC_PAUSE_ENABLE (1 << 0)
-
-#define EXYNOS4_CLKDIV_DMC0_ACP_SHIFT (0)
-#define EXYNOS4_CLKDIV_DMC0_ACP_MASK (0x7 << EXYNOS4_CLKDIV_DMC0_ACP_SHIFT)
-#define EXYNOS4_CLKDIV_DMC0_ACPPCLK_SHIFT (4)
-#define EXYNOS4_CLKDIV_DMC0_ACPPCLK_MASK (0x7 << EXYNOS4_CLKDIV_DMC0_ACPPCLK_SHIFT)
-#define EXYNOS4_CLKDIV_DMC0_DPHY_SHIFT (8)
-#define EXYNOS4_CLKDIV_DMC0_DPHY_MASK (0x7 << EXYNOS4_CLKDIV_DMC0_DPHY_SHIFT)
-#define EXYNOS4_CLKDIV_DMC0_DMC_SHIFT (12)
-#define EXYNOS4_CLKDIV_DMC0_DMC_MASK (0x7 << EXYNOS4_CLKDIV_DMC0_DMC_SHIFT)
-#define EXYNOS4_CLKDIV_DMC0_DMCD_SHIFT (16)
-#define EXYNOS4_CLKDIV_DMC0_DMCD_MASK (0x7 << EXYNOS4_CLKDIV_DMC0_DMCD_SHIFT)
-#define EXYNOS4_CLKDIV_DMC0_DMCP_SHIFT (20)
-#define EXYNOS4_CLKDIV_DMC0_DMCP_MASK (0x7 << EXYNOS4_CLKDIV_DMC0_DMCP_SHIFT)
-#define EXYNOS4_CLKDIV_DMC0_COPY2_SHIFT (24)
-#define EXYNOS4_CLKDIV_DMC0_COPY2_MASK (0x7 << EXYNOS4_CLKDIV_DMC0_COPY2_SHIFT)
-#define EXYNOS4_CLKDIV_DMC0_CORETI_SHIFT (28)
-#define EXYNOS4_CLKDIV_DMC0_CORETI_MASK (0x7 << EXYNOS4_CLKDIV_DMC0_CORETI_SHIFT)
-
-#define EXYNOS4_CLKDIV_DMC1_G2D_ACP_SHIFT (0)
-#define EXYNOS4_CLKDIV_DMC1_G2D_ACP_MASK (0xf << EXYNOS4_CLKDIV_DMC1_G2D_ACP_SHIFT)
-#define EXYNOS4_CLKDIV_DMC1_C2C_SHIFT (4)
-#define EXYNOS4_CLKDIV_DMC1_C2C_MASK (0x7 << EXYNOS4_CLKDIV_DMC1_C2C_SHIFT)
-#define EXYNOS4_CLKDIV_DMC1_PWI_SHIFT (8)
-#define EXYNOS4_CLKDIV_DMC1_PWI_MASK (0xf << EXYNOS4_CLKDIV_DMC1_PWI_SHIFT)
-#define EXYNOS4_CLKDIV_DMC1_C2CACLK_SHIFT (12)
-#define EXYNOS4_CLKDIV_DMC1_C2CACLK_MASK (0x7 << EXYNOS4_CLKDIV_DMC1_C2CACLK_SHIFT)
-#define EXYNOS4_CLKDIV_DMC1_DVSEM_SHIFT (16)
-#define EXYNOS4_CLKDIV_DMC1_DVSEM_MASK (0x7f << EXYNOS4_CLKDIV_DMC1_DVSEM_SHIFT)
-#define EXYNOS4_CLKDIV_DMC1_DPM_SHIFT (24)
-#define EXYNOS4_CLKDIV_DMC1_DPM_MASK (0x7f << EXYNOS4_CLKDIV_DMC1_DPM_SHIFT)
-
-#define EXYNOS4_CLKDIV_MFC_SHIFT (0)
-#define EXYNOS4_CLKDIV_MFC_MASK (0x7 << EXYNOS4_CLKDIV_MFC_SHIFT)
-
-#define EXYNOS4_CLKDIV_TOP_ACLK200_SHIFT (0)
-#define EXYNOS4_CLKDIV_TOP_ACLK200_MASK (0x7 << EXYNOS4_CLKDIV_TOP_ACLK200_SHIFT)
-#define EXYNOS4_CLKDIV_TOP_ACLK100_SHIFT (4)
-#define EXYNOS4_CLKDIV_TOP_ACLK100_MASK (0xF << EXYNOS4_CLKDIV_TOP_ACLK100_SHIFT)
-#define EXYNOS4_CLKDIV_TOP_ACLK160_SHIFT (8)
-#define EXYNOS4_CLKDIV_TOP_ACLK160_MASK (0x7 << EXYNOS4_CLKDIV_TOP_ACLK160_SHIFT)
-#define EXYNOS4_CLKDIV_TOP_ACLK133_SHIFT (12)
-#define EXYNOS4_CLKDIV_TOP_ACLK133_MASK (0x7 << EXYNOS4_CLKDIV_TOP_ACLK133_SHIFT)
-#define EXYNOS4_CLKDIV_TOP_ONENAND_SHIFT (16)
-#define EXYNOS4_CLKDIV_TOP_ONENAND_MASK (0x7 << EXYNOS4_CLKDIV_TOP_ONENAND_SHIFT)
-#define EXYNOS4_CLKDIV_TOP_ACLK266_GPS_SHIFT (20)
-#define EXYNOS4_CLKDIV_TOP_ACLK266_GPS_MASK (0x7 << EXYNOS4_CLKDIV_TOP_ACLK266_GPS_SHIFT)
-#define EXYNOS4_CLKDIV_TOP_ACLK400_MCUISP_SHIFT (24)
-#define EXYNOS4_CLKDIV_TOP_ACLK400_MCUISP_MASK (0x7 << EXYNOS4_CLKDIV_TOP_ACLK400_MCUISP_SHIFT)
-
-#define EXYNOS4_CLKDIV_BUS_GDLR_SHIFT (0)
-#define EXYNOS4_CLKDIV_BUS_GDLR_MASK (0x7 << EXYNOS4_CLKDIV_BUS_GDLR_SHIFT)
-#define EXYNOS4_CLKDIV_BUS_GPLR_SHIFT (4)
-#define EXYNOS4_CLKDIV_BUS_GPLR_MASK (0x7 << EXYNOS4_CLKDIV_BUS_GPLR_SHIFT)
-
-#define EXYNOS4_CLKDIV_CAM_FIMC0_SHIFT (0)
-#define EXYNOS4_CLKDIV_CAM_FIMC0_MASK (0xf << EXYNOS4_CLKDIV_CAM_FIMC0_SHIFT)
-#define EXYNOS4_CLKDIV_CAM_FIMC1_SHIFT (4)
-#define EXYNOS4_CLKDIV_CAM_FIMC1_MASK (0xf << EXYNOS4_CLKDIV_CAM_FIMC1_SHIFT)
-#define EXYNOS4_CLKDIV_CAM_FIMC2_SHIFT (8)
-#define EXYNOS4_CLKDIV_CAM_FIMC2_MASK (0xf << EXYNOS4_CLKDIV_CAM_FIMC2_SHIFT)
-#define EXYNOS4_CLKDIV_CAM_FIMC3_SHIFT (12)
-#define EXYNOS4_CLKDIV_CAM_FIMC3_MASK (0xf << EXYNOS4_CLKDIV_CAM_FIMC3_SHIFT)
-
-#define EXYNOS4_CLKDIV_CAM1 (S5P_VA_CMU + 0x0C568)
-
-#define EXYNOS4_CLKDIV_STAT_CAM1 (S5P_VA_CMU + 0x0C668)
-
-#define EXYNOS4_CLKDIV_CAM1_JPEG_SHIFT (0)
-#define EXYNOS4_CLKDIV_CAM1_JPEG_MASK (0xf << EXYNOS4_CLKDIV_CAM1_JPEG_SHIFT)
-
-#endif /* __DEVFREQ_EXYNOS4_BUS_H */
diff --git a/drivers/devfreq/exynos/exynos5_bus.c b/drivers/devfreq/exynos/exynos5_bus.c
deleted file mode 100644
index 297ea30d4159..000000000000
--- a/drivers/devfreq/exynos/exynos5_bus.c
+++ /dev/null
@@ -1,431 +0,0 @@
-/*
- * Copyright (c) 2012 Samsung Electronics Co., Ltd.
- * http://www.samsung.com/
- *
- * EXYNOS5 INT clock frequency scaling support using DEVFREQ framework
- * Based on work done by Jonghwan Choi <jhbird.choi@samsung.com>
- * Support for only EXYNOS5250 is present.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- *
- */
-
-#include <linux/module.h>
-#include <linux/devfreq.h>
-#include <linux/io.h>
-#include <linux/pm_opp.h>
-#include <linux/slab.h>
-#include <linux/suspend.h>
-#include <linux/clk.h>
-#include <linux/delay.h>
-#include <linux/platform_device.h>
-#include <linux/pm_qos.h>
-#include <linux/regulator/consumer.h>
-#include <linux/of_address.h>
-#include <linux/of_platform.h>
-
-#include "exynos_ppmu.h"
-
-#define MAX_SAFEVOLT 1100000 /* 1.10V */
-/* Assume that the bus is saturated if the utilization is 25% */
-#define INT_BUS_SATURATION_RATIO 25
-
-enum int_level_idx {
- LV_0,
- LV_1,
- LV_2,
- LV_3,
- LV_4,
- _LV_END
-};
-
-enum exynos_ppmu_list {
- PPMU_RIGHT,
- PPMU_END,
-};
-
-struct busfreq_data_int {
- struct device *dev;
- struct devfreq *devfreq;
- struct regulator *vdd_int;
- struct busfreq_ppmu_data ppmu_data;
- unsigned long curr_freq;
- bool disabled;
-
- struct notifier_block pm_notifier;
- struct mutex lock;
- struct pm_qos_request int_req;
- struct clk *int_clk;
-};
-
-struct int_bus_opp_table {
- unsigned int idx;
- unsigned long clk;
- unsigned long volt;
-};
-
-static struct int_bus_opp_table exynos5_int_opp_table[] = {
- {LV_0, 266000, 1025000},
- {LV_1, 200000, 1025000},
- {LV_2, 160000, 1025000},
- {LV_3, 133000, 1025000},
- {LV_4, 100000, 1025000},
- {0, 0, 0},
-};
-
-static int exynos5_int_setvolt(struct busfreq_data_int *data,
- unsigned long volt)
-{
- return regulator_set_voltage(data->vdd_int, volt, MAX_SAFEVOLT);
-}
-
-static int exynos5_busfreq_int_target(struct device *dev, unsigned long *_freq,
- u32 flags)
-{
- int err = 0;
- struct platform_device *pdev = container_of(dev, struct platform_device,
- dev);
- struct busfreq_data_int *data = platform_get_drvdata(pdev);
- struct dev_pm_opp *opp;
- unsigned long old_freq, freq;
- unsigned long volt;
-
- rcu_read_lock();
- opp = devfreq_recommended_opp(dev, _freq, flags);
- if (IS_ERR(opp)) {
- rcu_read_unlock();
- dev_err(dev, "%s: Invalid OPP.\n", __func__);
- return PTR_ERR(opp);
- }
-
- freq = dev_pm_opp_get_freq(opp);
- volt = dev_pm_opp_get_voltage(opp);
- rcu_read_unlock();
-
- old_freq = data->curr_freq;
-
- if (old_freq == freq)
- return 0;
-
- dev_dbg(dev, "targeting %lukHz %luuV\n", freq, volt);
-
- mutex_lock(&data->lock);
-
- if (data->disabled)
- goto out;
-
- if (freq > exynos5_int_opp_table[0].clk)
- pm_qos_update_request(&data->int_req, freq * 16 / 1000);
- else
- pm_qos_update_request(&data->int_req, -1);
-
- if (old_freq < freq)
- err = exynos5_int_setvolt(data, volt);
- if (err)
- goto out;
-
- err = clk_set_rate(data->int_clk, freq * 1000);
-
- if (err)
- goto out;
-
- if (old_freq > freq)
- err = exynos5_int_setvolt(data, volt);
- if (err)
- goto out;
-
- data->curr_freq = freq;
-out:
- mutex_unlock(&data->lock);
- return err;
-}
-
-static int exynos5_int_get_dev_status(struct device *dev,
- struct devfreq_dev_status *stat)
-{
- struct platform_device *pdev = container_of(dev, struct platform_device,
- dev);
- struct busfreq_data_int *data = platform_get_drvdata(pdev);
- struct busfreq_ppmu_data *ppmu_data = &data->ppmu_data;
- int busier_dmc;
-
- exynos_read_ppmu(ppmu_data);
- busier_dmc = exynos_get_busier_ppmu(ppmu_data);
-
- stat->current_frequency = data->curr_freq;
-
- /* Number of cycles spent on memory access */
- stat->busy_time = ppmu_data->ppmu[busier_dmc].count[PPMU_PMNCNT3];
- stat->busy_time *= 100 / INT_BUS_SATURATION_RATIO;
- stat->total_time = ppmu_data->ppmu[busier_dmc].ccnt;
-
- return 0;
-}
-
-static struct devfreq_dev_profile exynos5_devfreq_int_profile = {
- .initial_freq = 160000,
- .polling_ms = 100,
- .target = exynos5_busfreq_int_target,
- .get_dev_status = exynos5_int_get_dev_status,
-};
-
-static int exynos5250_init_int_tables(struct busfreq_data_int *data)
-{
- int i, err = 0;
-
- for (i = LV_0; i < _LV_END; i++) {
- err = dev_pm_opp_add(data->dev, exynos5_int_opp_table[i].clk,
- exynos5_int_opp_table[i].volt);
- if (err) {
- dev_err(data->dev, "Cannot add opp entries.\n");
- return err;
- }
- }
-
- return 0;
-}
-
-static int exynos5_busfreq_int_pm_notifier_event(struct notifier_block *this,
- unsigned long event, void *ptr)
-{
- struct busfreq_data_int *data = container_of(this,
- struct busfreq_data_int, pm_notifier);
- struct dev_pm_opp *opp;
- unsigned long maxfreq = ULONG_MAX;
- unsigned long freq;
- unsigned long volt;
- int err = 0;
-
- switch (event) {
- case PM_SUSPEND_PREPARE:
- /* Set Fastest and Deactivate DVFS */
- mutex_lock(&data->lock);
-
- data->disabled = true;
-
- rcu_read_lock();
- opp = dev_pm_opp_find_freq_floor(data->dev, &maxfreq);
- if (IS_ERR(opp)) {
- rcu_read_unlock();
- err = PTR_ERR(opp);
- goto unlock;
- }
- freq = dev_pm_opp_get_freq(opp);
- volt = dev_pm_opp_get_voltage(opp);
- rcu_read_unlock();
-
- err = exynos5_int_setvolt(data, volt);
- if (err)
- goto unlock;
-
- err = clk_set_rate(data->int_clk, freq * 1000);
-
- if (err)
- goto unlock;
-
- data->curr_freq = freq;
-unlock:
- mutex_unlock(&data->lock);
- if (err)
- return NOTIFY_BAD;
- return NOTIFY_OK;
- case PM_POST_RESTORE:
- case PM_POST_SUSPEND:
- /* Reactivate */
- mutex_lock(&data->lock);
- data->disabled = false;
- mutex_unlock(&data->lock);
- return NOTIFY_OK;
- }
-
- return NOTIFY_DONE;
-}
-
-static int exynos5_busfreq_int_probe(struct platform_device *pdev)
-{
- struct busfreq_data_int *data;
- struct busfreq_ppmu_data *ppmu_data;
- struct dev_pm_opp *opp;
- struct device *dev = &pdev->dev;
- struct device_node *np;
- unsigned long initial_freq;
- unsigned long initial_volt;
- int err = 0;
- int i;
-
- data = devm_kzalloc(&pdev->dev, sizeof(struct busfreq_data_int),
- GFP_KERNEL);
- if (data == NULL) {
- dev_err(dev, "Cannot allocate memory.\n");
- return -ENOMEM;
- }
-
- ppmu_data = &data->ppmu_data;
- ppmu_data->ppmu_end = PPMU_END;
- ppmu_data->ppmu = devm_kzalloc(dev,
- sizeof(struct exynos_ppmu) * PPMU_END,
- GFP_KERNEL);
- if (!ppmu_data->ppmu) {
- dev_err(dev, "Failed to allocate memory for exynos_ppmu\n");
- return -ENOMEM;
- }
-
- np = of_find_compatible_node(NULL, NULL, "samsung,exynos5250-ppmu");
- if (np == NULL) {
- pr_err("Unable to find PPMU node\n");
- return -ENOENT;
- }
-
- for (i = 0; i < ppmu_data->ppmu_end; i++) {
- /* map PPMU memory region */
- ppmu_data->ppmu[i].hw_base = of_iomap(np, i);
- if (ppmu_data->ppmu[i].hw_base == NULL) {
- dev_err(&pdev->dev, "failed to map memory region\n");
- return -ENOMEM;
- }
- }
- data->pm_notifier.notifier_call = exynos5_busfreq_int_pm_notifier_event;
- data->dev = dev;
- mutex_init(&data->lock);
-
- err = exynos5250_init_int_tables(data);
- if (err)
- return err;
-
- data->vdd_int = devm_regulator_get(dev, "vdd_int");
- if (IS_ERR(data->vdd_int)) {
- dev_err(dev, "Cannot get the regulator \"vdd_int\"\n");
- return PTR_ERR(data->vdd_int);
- }
-
- data->int_clk = devm_clk_get(dev, "int_clk");
- if (IS_ERR(data->int_clk)) {
- dev_err(dev, "Cannot get clock \"int_clk\"\n");
- return PTR_ERR(data->int_clk);
- }
-
- rcu_read_lock();
- opp = dev_pm_opp_find_freq_floor(dev,
- &exynos5_devfreq_int_profile.initial_freq);
- if (IS_ERR(opp)) {
- rcu_read_unlock();
- dev_err(dev, "Invalid initial frequency %lu kHz.\n",
- exynos5_devfreq_int_profile.initial_freq);
- return PTR_ERR(opp);
- }
- initial_freq = dev_pm_opp_get_freq(opp);
- initial_volt = dev_pm_opp_get_voltage(opp);
- rcu_read_unlock();
- data->curr_freq = initial_freq;
-
- err = clk_set_rate(data->int_clk, initial_freq * 1000);
- if (err) {
- dev_err(dev, "Failed to set initial frequency\n");
- return err;
- }
-
- err = exynos5_int_setvolt(data, initial_volt);
- if (err)
- return err;
-
- platform_set_drvdata(pdev, data);
-
- busfreq_mon_reset(ppmu_data);
-
- data->devfreq = devm_devfreq_add_device(dev, &exynos5_devfreq_int_profile,
- "simple_ondemand", NULL);
- if (IS_ERR(data->devfreq))
- return PTR_ERR(data->devfreq);
-
- err = devm_devfreq_register_opp_notifier(dev, data->devfreq);
- if (err < 0) {
- dev_err(dev, "Failed to register opp notifier\n");
- return err;
- }
-
- err = register_pm_notifier(&data->pm_notifier);
- if (err) {
- dev_err(dev, "Failed to setup pm notifier\n");
- return err;
- }
-
- /* TODO: Add a new QOS class for int/mif bus */
- pm_qos_add_request(&data->int_req, PM_QOS_NETWORK_THROUGHPUT, -1);
-
- return 0;
-}
-
-static int exynos5_busfreq_int_remove(struct platform_device *pdev)
-{
- struct busfreq_data_int *data = platform_get_drvdata(pdev);
-
- pm_qos_remove_request(&data->int_req);
- unregister_pm_notifier(&data->pm_notifier);
-
- return 0;
-}
-
-#ifdef CONFIG_PM_SLEEP
-static int exynos5_busfreq_int_resume(struct device *dev)
-{
- struct platform_device *pdev = container_of(dev, struct platform_device,
- dev);
- struct busfreq_data_int *data = platform_get_drvdata(pdev);
- struct busfreq_ppmu_data *ppmu_data = &data->ppmu_data;
-
- busfreq_mon_reset(ppmu_data);
- return 0;
-}
-static const struct dev_pm_ops exynos5_busfreq_int_pm = {
- .resume = exynos5_busfreq_int_resume,
-};
-#endif
-static SIMPLE_DEV_PM_OPS(exynos5_busfreq_int_pm_ops, NULL,
- exynos5_busfreq_int_resume);
-
-/* platform device pointer for exynos5 devfreq device. */
-static struct platform_device *exynos5_devfreq_pdev;
-
-static struct platform_driver exynos5_busfreq_int_driver = {
- .probe = exynos5_busfreq_int_probe,
- .remove = exynos5_busfreq_int_remove,
- .driver = {
- .name = "exynos5-bus-int",
- .pm = &exynos5_busfreq_int_pm_ops,
- },
-};
-
-static int __init exynos5_busfreq_int_init(void)
-{
- int ret;
-
- ret = platform_driver_register(&exynos5_busfreq_int_driver);
- if (ret < 0)
- goto out;
-
- exynos5_devfreq_pdev =
- platform_device_register_simple("exynos5-bus-int", -1, NULL, 0);
- if (IS_ERR(exynos5_devfreq_pdev)) {
- ret = PTR_ERR(exynos5_devfreq_pdev);
- goto out1;
- }
-
- return 0;
-out1:
- platform_driver_unregister(&exynos5_busfreq_int_driver);
-out:
- return ret;
-}
-late_initcall(exynos5_busfreq_int_init);
-
-static void __exit exynos5_busfreq_int_exit(void)
-{
- platform_device_unregister(exynos5_devfreq_pdev);
- platform_driver_unregister(&exynos5_busfreq_int_driver);
-}
-module_exit(exynos5_busfreq_int_exit);
-
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("EXYNOS5 busfreq driver with devfreq framework");
diff --git a/drivers/devfreq/exynos/exynos_ppmu.c b/drivers/devfreq/exynos/exynos_ppmu.c
deleted file mode 100644
index 97b75e513d29..000000000000
--- a/drivers/devfreq/exynos/exynos_ppmu.c
+++ /dev/null
@@ -1,119 +0,0 @@
-/*
- * Copyright (c) 2012 Samsung Electronics Co., Ltd.
- * http://www.samsung.com/
- *
- * EXYNOS - PPMU support
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-#include <linux/kernel.h>
-#include <linux/types.h>
-#include <linux/io.h>
-
-#include "exynos_ppmu.h"
-
-void exynos_ppmu_reset(void __iomem *ppmu_base)
-{
- __raw_writel(PPMU_CYCLE_RESET | PPMU_COUNTER_RESET, ppmu_base);
- __raw_writel(PPMU_ENABLE_CYCLE |
- PPMU_ENABLE_COUNT0 |
- PPMU_ENABLE_COUNT1 |
- PPMU_ENABLE_COUNT2 |
- PPMU_ENABLE_COUNT3,
- ppmu_base + PPMU_CNTENS);
-}
-
-void exynos_ppmu_setevent(void __iomem *ppmu_base, unsigned int ch,
- unsigned int evt)
-{
- __raw_writel(evt, ppmu_base + PPMU_BEVTSEL(ch));
-}
-
-void exynos_ppmu_start(void __iomem *ppmu_base)
-{
- __raw_writel(PPMU_ENABLE, ppmu_base);
-}
-
-void exynos_ppmu_stop(void __iomem *ppmu_base)
-{
- __raw_writel(PPMU_DISABLE, ppmu_base);
-}
-
-unsigned int exynos_ppmu_read(void __iomem *ppmu_base, unsigned int ch)
-{
- unsigned int total;
-
- if (ch == PPMU_PMNCNT3)
- total = ((__raw_readl(ppmu_base + PMCNT_OFFSET(ch)) << 8) |
- __raw_readl(ppmu_base + PMCNT_OFFSET(ch + 1)));
- else
- total = __raw_readl(ppmu_base + PMCNT_OFFSET(ch));
-
- return total;
-}
-
-void busfreq_mon_reset(struct busfreq_ppmu_data *ppmu_data)
-{
- unsigned int i;
-
- for (i = 0; i < ppmu_data->ppmu_end; i++) {
- void __iomem *ppmu_base = ppmu_data->ppmu[i].hw_base;
-
- /* Reset the performance and cycle counters */
- exynos_ppmu_reset(ppmu_base);
-
- /* Setup count registers to monitor read/write transactions */
- ppmu_data->ppmu[i].event[PPMU_PMNCNT3] = RDWR_DATA_COUNT;
- exynos_ppmu_setevent(ppmu_base, PPMU_PMNCNT3,
- ppmu_data->ppmu[i].event[PPMU_PMNCNT3]);
-
- exynos_ppmu_start(ppmu_base);
- }
-}
-EXPORT_SYMBOL(busfreq_mon_reset);
-
-void exynos_read_ppmu(struct busfreq_ppmu_data *ppmu_data)
-{
- int i, j;
-
- for (i = 0; i < ppmu_data->ppmu_end; i++) {
- void __iomem *ppmu_base = ppmu_data->ppmu[i].hw_base;
-
- exynos_ppmu_stop(ppmu_base);
-
- /* Update local data from PPMU */
- ppmu_data->ppmu[i].ccnt = __raw_readl(ppmu_base + PPMU_CCNT);
-
- for (j = PPMU_PMNCNT0; j < PPMU_PMNCNT_MAX; j++) {
- if (ppmu_data->ppmu[i].event[j] == 0)
- ppmu_data->ppmu[i].count[j] = 0;
- else
- ppmu_data->ppmu[i].count[j] =
- exynos_ppmu_read(ppmu_base, j);
- }
- }
-
- busfreq_mon_reset(ppmu_data);
-}
-EXPORT_SYMBOL(exynos_read_ppmu);
-
-int exynos_get_busier_ppmu(struct busfreq_ppmu_data *ppmu_data)
-{
- unsigned int count = 0;
- int i, j, busy = 0;
-
- for (i = 0; i < ppmu_data->ppmu_end; i++) {
- for (j = PPMU_PMNCNT0; j < PPMU_PMNCNT_MAX; j++) {
- if (ppmu_data->ppmu[i].count[j] > count) {
- count = ppmu_data->ppmu[i].count[j];
- busy = i;
- }
- }
- }
-
- return busy;
-}
-EXPORT_SYMBOL(exynos_get_busier_ppmu);
diff --git a/drivers/devfreq/exynos/exynos_ppmu.h b/drivers/devfreq/exynos/exynos_ppmu.h
deleted file mode 100644
index 71f17ba3563c..000000000000
--- a/drivers/devfreq/exynos/exynos_ppmu.h
+++ /dev/null
@@ -1,86 +0,0 @@
-/*
- * Copyright (c) 2012 Samsung Electronics Co., Ltd.
- * http://www.samsung.com/
- *
- * EXYNOS PPMU header
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
-*/
-
-#ifndef __DEVFREQ_EXYNOS_PPMU_H
-#define __DEVFREQ_EXYNOS_PPMU_H __FILE__
-
-#include <linux/ktime.h>
-
-/* For PPMU Control */
-#define PPMU_ENABLE BIT(0)
-#define PPMU_DISABLE 0x0
-#define PPMU_CYCLE_RESET BIT(1)
-#define PPMU_COUNTER_RESET BIT(2)
-
-#define PPMU_ENABLE_COUNT0 BIT(0)
-#define PPMU_ENABLE_COUNT1 BIT(1)
-#define PPMU_ENABLE_COUNT2 BIT(2)
-#define PPMU_ENABLE_COUNT3 BIT(3)
-#define PPMU_ENABLE_CYCLE BIT(31)
-
-#define PPMU_CNTENS 0x10
-#define PPMU_FLAG 0x50
-#define PPMU_CCNT_OVERFLOW BIT(31)
-#define PPMU_CCNT 0x100
-
-#define PPMU_PMCNT0 0x110
-#define PPMU_PMCNT_OFFSET 0x10
-#define PMCNT_OFFSET(x) (PPMU_PMCNT0 + (PPMU_PMCNT_OFFSET * x))
-
-#define PPMU_BEVT0SEL 0x1000
-#define PPMU_BEVTSEL_OFFSET 0x100
-#define PPMU_BEVTSEL(x) (PPMU_BEVT0SEL + (ch * PPMU_BEVTSEL_OFFSET))
-
-/* For Event Selection */
-#define RD_DATA_COUNT 0x5
-#define WR_DATA_COUNT 0x6
-#define RDWR_DATA_COUNT 0x7
-
-enum ppmu_counter {
- PPMU_PMNCNT0,
- PPMU_PMCCNT1,
- PPMU_PMNCNT2,
- PPMU_PMNCNT3,
- PPMU_PMNCNT_MAX,
-};
-
-struct bus_opp_table {
- unsigned int idx;
- unsigned long clk;
- unsigned long volt;
-};
-
-struct exynos_ppmu {
- void __iomem *hw_base;
- unsigned int ccnt;
- unsigned int event[PPMU_PMNCNT_MAX];
- unsigned int count[PPMU_PMNCNT_MAX];
- unsigned long long ns;
- ktime_t reset_time;
- bool ccnt_overflow;
- bool count_overflow[PPMU_PMNCNT_MAX];
-};
-
-struct busfreq_ppmu_data {
- struct exynos_ppmu *ppmu;
- int ppmu_end;
-};
-
-void exynos_ppmu_reset(void __iomem *ppmu_base);
-void exynos_ppmu_setevent(void __iomem *ppmu_base, unsigned int ch,
- unsigned int evt);
-void exynos_ppmu_start(void __iomem *ppmu_base);
-void exynos_ppmu_stop(void __iomem *ppmu_base);
-unsigned int exynos_ppmu_read(void __iomem *ppmu_base, unsigned int ch);
-void busfreq_mon_reset(struct busfreq_ppmu_data *ppmu_data);
-void exynos_read_ppmu(struct busfreq_ppmu_data *ppmu_data);
-int exynos_get_busier_ppmu(struct busfreq_ppmu_data *ppmu_data);
-#endif /* __DEVFREQ_EXYNOS_PPMU_H */
diff --git a/drivers/devfreq/governor_passive.c b/drivers/devfreq/governor_passive.c
new file mode 100644
index 000000000000..9ef46e2592c4
--- /dev/null
+++ b/drivers/devfreq/governor_passive.c
@@ -0,0 +1,205 @@
+/*
+ * linux/drivers/devfreq/governor_passive.c
+ *
+ * Copyright (C) 2016 Samsung Electronics
+ * Author: Chanwoo Choi <cw00.choi@samsung.com>
+ * Author: MyungJoo Ham <myungjoo.ham@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/devfreq.h>
+#include "governor.h"
+
+static int devfreq_passive_get_target_freq(struct devfreq *devfreq,
+ unsigned long *freq)
+{
+ struct devfreq_passive_data *p_data
+ = (struct devfreq_passive_data *)devfreq->data;
+ struct devfreq *parent_devfreq = (struct devfreq *)p_data->parent;
+ unsigned long child_freq = ULONG_MAX;
+ struct dev_pm_opp *opp;
+ int i, count, ret = 0;
+
+ /*
+	 * If the devfreq device using the passive governor provides its own
+	 * method to determine the next frequency, use the get_target_freq()
+	 * callback of struct devfreq_passive_data.
+ */
+ if (p_data->get_target_freq) {
+ ret = p_data->get_target_freq(devfreq, freq);
+ goto out;
+ }
+
+ /*
+	 * If both the parent and the passive devfreq device use an OPP table,
+	 * get the next frequency from the OPP table.
+ */
+
+ /*
+	 * - the parent devfreq device uses any governor except passive.
+	 * - the passive devfreq device uses the passive governor.
+	 *
+	 * Each devfreq device has its own OPP table. After the parent's
+	 * governor decides the new frequency, the passive governor looks up
+	 * the index of that frequency in the parent's OPP table and uses the
+	 * same index to pick a suitable new frequency for the passive device.
+ */
+ if (!devfreq->profile || !devfreq->profile->freq_table
+ || devfreq->profile->max_state <= 0)
+ return -EINVAL;
+
+ /*
+	 * The passive governor has to get the correct frequency from the
+	 * parent device's OPP list, because at this point *freq is only a
+	 * temporary value decided by the parent's governor (e.g. simple_ondemand).
+ */
+ rcu_read_lock();
+ opp = devfreq_recommended_opp(parent_devfreq->dev.parent, freq, 0);
+ rcu_read_unlock();
+ if (IS_ERR(opp)) {
+ ret = PTR_ERR(opp);
+ goto out;
+ }
+
+ /*
+	 * Get the index in the OPP table of the frequency decided by the
+	 * parent device's governor.
+ */
+ for (i = 0; i < parent_devfreq->profile->max_state; i++)
+ if (parent_devfreq->profile->freq_table[i] == *freq)
+ break;
+
+ if (i == parent_devfreq->profile->max_state) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+	/* Get the suitable frequency using the parent device's index. */
+ if (i < devfreq->profile->max_state) {
+ child_freq = devfreq->profile->freq_table[i];
+ } else {
+ count = devfreq->profile->max_state;
+ child_freq = devfreq->profile->freq_table[count - 1];
+ }
+
+ /* Return the suitable frequency for passive device. */
+ *freq = child_freq;
+
+out:
+ return ret;
+}
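
To make the index mapping described in devfreq_passive_get_target_freq() concrete, here is a minimal standalone sketch in plain C with hypothetical frequency tables (illustrative only, not taken from the patch): the parent's chosen frequency is looked up in the parent table, and the same index, clamped to the size of the child table, selects the passive device's frequency.

#include <stdio.h>

/* Hypothetical, identically ordered parent and child frequency tables. */
static const unsigned long parent_tbl[] = { 100000, 134000, 200000, 400000 };
static const unsigned long child_tbl[]  = {  50000,  67000, 100000 };

#define N_PARENT (sizeof(parent_tbl) / sizeof(parent_tbl[0]))
#define N_CHILD  (sizeof(child_tbl) / sizeof(child_tbl[0]))

/* Mirror of the lookup-and-clamp logic in devfreq_passive_get_target_freq(). */
static unsigned long passive_map(unsigned long parent_freq)
{
	size_t i;

	for (i = 0; i < N_PARENT; i++)
		if (parent_tbl[i] == parent_freq)
			break;
	if (i == N_PARENT)
		return 0;		/* the governor returns -EINVAL here */

	if (i >= N_CHILD)
		i = N_CHILD - 1;	/* clamp to the highest child entry */
	return child_tbl[i];
}

int main(void)
{
	/* Parent index 3 (400000 kHz) -> clamped to child index 2 (100000 kHz). */
	printf("%lu\n", passive_map(400000));
	/* Parent index 1 (134000 kHz) -> child index 1 (67000 kHz). */
	printf("%lu\n", passive_map(134000));
	return 0;
}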
+
+static int update_devfreq_passive(struct devfreq *devfreq, unsigned long freq)
+{
+ int ret;
+
+ if (!devfreq->governor)
+ return -EINVAL;
+
+ mutex_lock_nested(&devfreq->lock, SINGLE_DEPTH_NESTING);
+
+ ret = devfreq->governor->get_target_freq(devfreq, &freq);
+ if (ret < 0)
+ goto out;
+
+ ret = devfreq->profile->target(devfreq->dev.parent, &freq, 0);
+ if (ret < 0)
+ goto out;
+
+ devfreq->previous_freq = freq;
+
+out:
+ mutex_unlock(&devfreq->lock);
+
+ return 0;
+}
+
+static int devfreq_passive_notifier_call(struct notifier_block *nb,
+ unsigned long event, void *ptr)
+{
+ struct devfreq_passive_data *data
+ = container_of(nb, struct devfreq_passive_data, nb);
+ struct devfreq *devfreq = (struct devfreq *)data->this;
+ struct devfreq *parent = (struct devfreq *)data->parent;
+ struct devfreq_freqs *freqs = (struct devfreq_freqs *)ptr;
+ unsigned long freq = freqs->new;
+
+ switch (event) {
+ case DEVFREQ_PRECHANGE:
+ if (parent->previous_freq > freq)
+ update_devfreq_passive(devfreq, freq);
+ break;
+ case DEVFREQ_POSTCHANGE:
+ if (parent->previous_freq < freq)
+ update_devfreq_passive(devfreq, freq);
+ break;
+ }
+
+ return NOTIFY_DONE;
+}
+
+static int devfreq_passive_event_handler(struct devfreq *devfreq,
+ unsigned int event, void *data)
+{
+ struct device *dev = devfreq->dev.parent;
+ struct devfreq_passive_data *p_data
+ = (struct devfreq_passive_data *)devfreq->data;
+ struct devfreq *parent = (struct devfreq *)p_data->parent;
+ struct notifier_block *nb = &p_data->nb;
+ int ret = 0;
+
+ if (!parent)
+ return -EPROBE_DEFER;
+
+ switch (event) {
+ case DEVFREQ_GOV_START:
+ if (!p_data->this)
+ p_data->this = devfreq;
+
+ nb->notifier_call = devfreq_passive_notifier_call;
+ ret = devm_devfreq_register_notifier(dev, parent, nb,
+ DEVFREQ_TRANSITION_NOTIFIER);
+ break;
+ case DEVFREQ_GOV_STOP:
+ devm_devfreq_unregister_notifier(dev, parent, nb,
+ DEVFREQ_TRANSITION_NOTIFIER);
+ break;
+ default:
+ break;
+ }
+
+ return ret;
+}
+
+static struct devfreq_governor devfreq_passive = {
+ .name = "passive",
+ .get_target_freq = devfreq_passive_get_target_freq,
+ .event_handler = devfreq_passive_event_handler,
+};
+
+static int __init devfreq_passive_init(void)
+{
+ return devfreq_add_governor(&devfreq_passive);
+}
+subsys_initcall(devfreq_passive_init);
+
+static void __exit devfreq_passive_exit(void)
+{
+ int ret;
+
+ ret = devfreq_remove_governor(&devfreq_passive);
+ if (ret)
+ pr_err("%s: failed remove governor %d\n", __func__, ret);
+}
+module_exit(devfreq_passive_exit);
+
+MODULE_AUTHOR("Chanwoo Choi <cw00.choi@samsung.com>");
+MODULE_AUTHOR("MyungJoo Ham <myungjoo.ham@samsung.com>");
+MODULE_DESCRIPTION("DEVFREQ Passive governor");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/dma-buf/Kconfig b/drivers/dma-buf/Kconfig
new file mode 100644
index 000000000000..9824bc4addf8
--- /dev/null
+++ b/drivers/dma-buf/Kconfig
@@ -0,0 +1,11 @@
+menu "DMABUF options"
+
+config SYNC_FILE
+ bool "sync_file support for fences"
+ default n
+ select ANON_INODES
+ select DMA_SHARED_BUFFER
+ ---help---
+	  This option enables the fence synchronization framework to export
+	  sync_files to userspace, each representing one or more fences.
+endmenu
diff --git a/drivers/dma-buf/Makefile b/drivers/dma-buf/Makefile
index 57a675f90cd0..4a424eca75ed 100644
--- a/drivers/dma-buf/Makefile
+++ b/drivers/dma-buf/Makefile
@@ -1 +1,2 @@
obj-y := dma-buf.o fence.o reservation.o seqno-fence.o
+obj-$(CONFIG_SYNC_FILE) += sync_file.o
diff --git a/drivers/dma-buf/sync_file.c b/drivers/dma-buf/sync_file.c
new file mode 100644
index 000000000000..f08cf2d8309e
--- /dev/null
+++ b/drivers/dma-buf/sync_file.c
@@ -0,0 +1,395 @@
+/*
+ * drivers/dma-buf/sync_file.c
+ *
+ * Copyright (C) 2012 Google, Inc.
+ *
+ * This software is licensed under the terms of the GNU General Public
+ * License version 2, as published by the Free Software Foundation, and
+ * may be copied, distributed, and modified under those terms.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ */
+
+#include <linux/export.h>
+#include <linux/file.h>
+#include <linux/fs.h>
+#include <linux/kernel.h>
+#include <linux/poll.h>
+#include <linux/sched.h>
+#include <linux/slab.h>
+#include <linux/uaccess.h>
+#include <linux/anon_inodes.h>
+#include <linux/sync_file.h>
+#include <uapi/linux/sync_file.h>
+
+static const struct file_operations sync_file_fops;
+
+static struct sync_file *sync_file_alloc(int size)
+{
+ struct sync_file *sync_file;
+
+ sync_file = kzalloc(size, GFP_KERNEL);
+ if (!sync_file)
+ return NULL;
+
+ sync_file->file = anon_inode_getfile("sync_file", &sync_file_fops,
+ sync_file, 0);
+ if (IS_ERR(sync_file->file))
+ goto err;
+
+ kref_init(&sync_file->kref);
+
+ init_waitqueue_head(&sync_file->wq);
+
+ return sync_file;
+
+err:
+ kfree(sync_file);
+ return NULL;
+}
+
+static void fence_check_cb_func(struct fence *f, struct fence_cb *cb)
+{
+ struct sync_file_cb *check;
+ struct sync_file *sync_file;
+
+ check = container_of(cb, struct sync_file_cb, cb);
+ sync_file = check->sync_file;
+
+ if (atomic_dec_and_test(&sync_file->status))
+ wake_up_all(&sync_file->wq);
+}
+
+/**
+ * sync_file_create() - creates a sync file
+ * @fence: fence to add to the sync_file
+ *
+ * Creates a sync_file containing @fence. Once this is called, the sync_file
+ * takes ownership of @fence. The sync_file can be released with
+ * fput(sync_file->file). Returns the sync_file or NULL in case of error.
+ */
+struct sync_file *sync_file_create(struct fence *fence)
+{
+ struct sync_file *sync_file;
+
+ sync_file = sync_file_alloc(offsetof(struct sync_file, cbs[1]));
+ if (!sync_file)
+ return NULL;
+
+ sync_file->num_fences = 1;
+ atomic_set(&sync_file->status, 1);
+ snprintf(sync_file->name, sizeof(sync_file->name), "%s-%s%d-%d",
+ fence->ops->get_driver_name(fence),
+ fence->ops->get_timeline_name(fence), fence->context,
+ fence->seqno);
+
+ sync_file->cbs[0].fence = fence;
+ sync_file->cbs[0].sync_file = sync_file;
+ if (fence_add_callback(fence, &sync_file->cbs[0].cb,
+ fence_check_cb_func))
+ atomic_dec(&sync_file->status);
+
+ return sync_file;
+}
+EXPORT_SYMBOL(sync_file_create);
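/*
 * Illustrative sketch (not part of this patch): how a driver could hand a
 * fence to userspace as a sync_file fd using only the API added above. The
 * function name and error handling are assumptions for the example.
 */
static int example_fence_to_fd(struct fence *fence)
{
	struct sync_file *sync_file;
	int fd;

	fd = get_unused_fd_flags(O_CLOEXEC);
	if (fd < 0)
		return fd;

	/* sync_file_create() takes ownership of @fence per the kerneldoc above */
	sync_file = sync_file_create(fence);
	if (!sync_file) {
		put_unused_fd(fd);
		return -ENOMEM;
	}

	fd_install(fd, sync_file->file);
	return fd;
}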
+
+/**
+ * sync_file_fdget() - get a sync_file from an fd
+ * @fd: fd referencing a fence
+ *
+ * Ensures @fd references a valid sync_file and increments the refcount of
+ * the backing file. Returns the sync_file or NULL in case of error.
+ */
+static struct sync_file *sync_file_fdget(int fd)
+{
+ struct file *file = fget(fd);
+
+ if (!file)
+ return NULL;
+
+ if (file->f_op != &sync_file_fops)
+ goto err;
+
+ return file->private_data;
+
+err:
+ fput(file);
+ return NULL;
+}
+
+static void sync_file_add_pt(struct sync_file *sync_file, int *i,
+ struct fence *fence)
+{
+ sync_file->cbs[*i].fence = fence;
+ sync_file->cbs[*i].sync_file = sync_file;
+
+ if (!fence_add_callback(fence, &sync_file->cbs[*i].cb,
+ fence_check_cb_func)) {
+ fence_get(fence);
+ (*i)++;
+ }
+}
+
+/**
+ * sync_file_merge() - merge two sync_files
+ * @name: name of new fence
+ * @a: sync_file a
+ * @b: sync_file b
+ *
+ * Creates a new sync_file which contains copies of all the fences in both
+ * @a and @b. @a and @b remain valid, independent sync_file. Returns the
+ * new merged sync_file or NULL in case of error.
+ */
+static struct sync_file *sync_file_merge(const char *name, struct sync_file *a,
+ struct sync_file *b)
+{
+ int num_fences = a->num_fences + b->num_fences;
+ struct sync_file *sync_file;
+ int i, i_a, i_b;
+ unsigned long size = offsetof(struct sync_file, cbs[num_fences]);
+
+ sync_file = sync_file_alloc(size);
+ if (!sync_file)
+ return NULL;
+
+ atomic_set(&sync_file->status, num_fences);
+
+ /*
+ * Assume sync_file a and b are both ordered and have no
+ * duplicates with the same context.
+ *
+ * If a sync_file can only be created with sync_file_merge
+ * and sync_file_create, this is a reasonable assumption.
+ */
+ for (i = i_a = i_b = 0; i_a < a->num_fences && i_b < b->num_fences; ) {
+ struct fence *pt_a = a->cbs[i_a].fence;
+ struct fence *pt_b = b->cbs[i_b].fence;
+
+ if (pt_a->context < pt_b->context) {
+ sync_file_add_pt(sync_file, &i, pt_a);
+
+ i_a++;
+ } else if (pt_a->context > pt_b->context) {
+ sync_file_add_pt(sync_file, &i, pt_b);
+
+ i_b++;
+ } else {
+ if (pt_a->seqno - pt_b->seqno <= INT_MAX)
+ sync_file_add_pt(sync_file, &i, pt_a);
+ else
+ sync_file_add_pt(sync_file, &i, pt_b);
+
+ i_a++;
+ i_b++;
+ }
+ }
+
+ for (; i_a < a->num_fences; i_a++)
+ sync_file_add_pt(sync_file, &i, a->cbs[i_a].fence);
+
+ for (; i_b < b->num_fences; i_b++)
+ sync_file_add_pt(sync_file, &i, b->cbs[i_b].fence);
+
+ if (num_fences > i)
+ atomic_sub(num_fences - i, &sync_file->status);
+ sync_file->num_fences = i;
+
+ strlcpy(sync_file->name, name, sizeof(sync_file->name));
+ return sync_file;
+}
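/*
 * Worked example (illustrative, not part of this patch): merging
 * a = {ctx1:5, ctx3:2} with b = {ctx2:9, ctx3:7} walks both ordered arrays;
 * the ctx1 and ctx2 fences are copied as-is, while the two ctx3 fences are
 * collapsed to the later one (seqno 7), because 2 - 7 wraps above INT_MAX in
 * the unsigned comparison above. The merged sync_file ends up with
 * {ctx1:5, ctx2:9, ctx3:7} and num_fences = 3.
 */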
+
+static void sync_file_free(struct kref *kref)
+{
+ struct sync_file *sync_file = container_of(kref, struct sync_file,
+ kref);
+ int i;
+
+ for (i = 0; i < sync_file->num_fences; ++i) {
+ fence_remove_callback(sync_file->cbs[i].fence,
+ &sync_file->cbs[i].cb);
+ fence_put(sync_file->cbs[i].fence);
+ }
+
+ kfree(sync_file);
+}
+
+static int sync_file_release(struct inode *inode, struct file *file)
+{
+ struct sync_file *sync_file = file->private_data;
+
+ kref_put(&sync_file->kref, sync_file_free);
+ return 0;
+}
+
+static unsigned int sync_file_poll(struct file *file, poll_table *wait)
+{
+ struct sync_file *sync_file = file->private_data;
+ int status;
+
+ poll_wait(file, &sync_file->wq, wait);
+
+ status = atomic_read(&sync_file->status);
+
+ if (!status)
+ return POLLIN;
+ if (status < 0)
+ return POLLERR;
+ return 0;
+}
+
+static long sync_file_ioctl_merge(struct sync_file *sync_file,
+ unsigned long arg)
+{
+ int fd = get_unused_fd_flags(O_CLOEXEC);
+ int err;
+ struct sync_file *fence2, *fence3;
+ struct sync_merge_data data;
+
+ if (fd < 0)
+ return fd;
+
+ if (copy_from_user(&data, (void __user *)arg, sizeof(data))) {
+ err = -EFAULT;
+ goto err_put_fd;
+ }
+
+ if (data.flags || data.pad) {
+ err = -EINVAL;
+ goto err_put_fd;
+ }
+
+ fence2 = sync_file_fdget(data.fd2);
+ if (!fence2) {
+ err = -ENOENT;
+ goto err_put_fd;
+ }
+
+ data.name[sizeof(data.name) - 1] = '\0';
+ fence3 = sync_file_merge(data.name, sync_file, fence2);
+ if (!fence3) {
+ err = -ENOMEM;
+ goto err_put_fence2;
+ }
+
+ data.fence = fd;
+ if (copy_to_user((void __user *)arg, &data, sizeof(data))) {
+ err = -EFAULT;
+ goto err_put_fence3;
+ }
+
+ fd_install(fd, fence3->file);
+ fput(fence2->file);
+ return 0;
+
+err_put_fence3:
+ fput(fence3->file);
+
+err_put_fence2:
+ fput(fence2->file);
+
+err_put_fd:
+ put_unused_fd(fd);
+ return err;
+}
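/*
 * Userspace-side sketch of the SYNC_IOC_MERGE path above (illustrative, not
 * part of this patch); struct sync_merge_data field names follow their use
 * in this file, and the uapi header location is assumed.
 */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/sync_file.h>

static int example_sync_merge(int fd1, int fd2)
{
	struct sync_merge_data data;

	memset(&data, 0, sizeof(data));		/* flags and pad must be zero */
	strncpy(data.name, "merged", sizeof(data.name) - 1);
	data.fd2 = fd2;

	if (ioctl(fd1, SYNC_IOC_MERGE, &data) < 0)
		return -1;

	return data.fence;	/* new fd referring to the merged sync_file */
}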
+
+static void sync_fill_fence_info(struct fence *fence,
+ struct sync_fence_info *info)
+{
+ strlcpy(info->obj_name, fence->ops->get_timeline_name(fence),
+ sizeof(info->obj_name));
+ strlcpy(info->driver_name, fence->ops->get_driver_name(fence),
+ sizeof(info->driver_name));
+ if (fence_is_signaled(fence))
+ info->status = fence->status >= 0 ? 1 : fence->status;
+ else
+ info->status = 0;
+ info->timestamp_ns = ktime_to_ns(fence->timestamp);
+}
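/*
 * Status mapping example (illustrative, not from the patch): an unsignaled
 * fence reports 0, a fence that signaled cleanly reports 1, and a fence that
 * signaled with an error (say fence->status == -ETIMEDOUT) reports that
 * negative errno.
 */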
+
+static long sync_file_ioctl_fence_info(struct sync_file *sync_file,
+ unsigned long arg)
+{
+ struct sync_file_info info;
+ struct sync_fence_info *fence_info = NULL;
+ __u32 size;
+ int ret, i;
+
+ if (copy_from_user(&info, (void __user *)arg, sizeof(info)))
+ return -EFAULT;
+
+ if (info.flags || info.pad)
+ return -EINVAL;
+
+ /*
+ * Passing num_fences = 0 means that userspace doesn't want to
+ * retrieve any sync_fence_info. If num_fences = 0 we skip filling
+ * sync_fence_info and return the actual number of fences in
+ * info->num_fences.
+ */
+ if (!info.num_fences)
+ goto no_fences;
+
+ if (info.num_fences < sync_file->num_fences)
+ return -EINVAL;
+
+ size = sync_file->num_fences * sizeof(*fence_info);
+ fence_info = kzalloc(size, GFP_KERNEL);
+ if (!fence_info)
+ return -ENOMEM;
+
+ for (i = 0; i < sync_file->num_fences; ++i)
+ sync_fill_fence_info(sync_file->cbs[i].fence, &fence_info[i]);
+
+ if (copy_to_user(u64_to_user_ptr(info.sync_fence_info), fence_info,
+ size)) {
+ ret = -EFAULT;
+ goto out;
+ }
+
+no_fences:
+ strlcpy(info.name, sync_file->name, sizeof(info.name));
+ info.status = atomic_read(&sync_file->status);
+ if (info.status >= 0)
+ info.status = !info.status;
+
+ info.num_fences = sync_file->num_fences;
+
+ if (copy_to_user((void __user *)arg, &info, sizeof(info)))
+ ret = -EFAULT;
+ else
+ ret = 0;
+
+out:
+ kfree(fence_info);
+
+ return ret;
+}
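/*
 * Userspace-side sketch of the "count only" query described in the comment
 * above (illustrative, not part of this patch): calling SYNC_IOC_FILE_INFO
 * with num_fences = 0 only reports the fence count and overall status.
 */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/sync_file.h>

static int example_count_fences(int fd, int *status)
{
	struct sync_file_info info;

	memset(&info, 0, sizeof(info));	/* num_fences = 0: skip per-fence info */
	if (ioctl(fd, SYNC_IOC_FILE_INFO, &info) < 0)
		return -1;

	if (status)
		*status = info.status;	/* 1 signaled, 0 active, <0 error */
	return info.num_fences;
}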
+
+static long sync_file_ioctl(struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ struct sync_file *sync_file = file->private_data;
+
+ switch (cmd) {
+ case SYNC_IOC_MERGE:
+ return sync_file_ioctl_merge(sync_file, arg);
+
+ case SYNC_IOC_FILE_INFO:
+ return sync_file_ioctl_fence_info(sync_file, arg);
+
+ default:
+ return -ENOTTY;
+ }
+}
+
+static const struct file_operations sync_file_fops = {
+ .release = sync_file_release,
+ .poll = sync_file_poll,
+ .unlocked_ioctl = sync_file_ioctl,
+ .compat_ioctl = sync_file_ioctl,
+};
+
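/*
 * Userspace-side sketch of waiting on a sync_file fd with poll(), matching
 * the POLLIN/POLLERR semantics of sync_file_poll() above (illustrative, not
 * part of this patch).
 */
#include <poll.h>

static int example_sync_wait(int fd, int timeout_ms)
{
	struct pollfd pfd = { .fd = fd, .events = POLLIN };
	int ret = poll(&pfd, 1, timeout_ms);

	if (ret < 0)
		return -1;	/* poll() itself failed */
	if (ret == 0)
		return 0;	/* timed out, fences still active */
	if (pfd.revents & POLLERR)
		return -1;	/* a fence signaled with an error */
	return 1;		/* all fences signaled */
}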
diff --git a/drivers/dma/Kconfig b/drivers/dma/Kconfig
index d96d87c56f2e..8c98779a12b1 100644
--- a/drivers/dma/Kconfig
+++ b/drivers/dma/Kconfig
@@ -332,7 +332,7 @@ config MPC512X_DMA
config MV_XOR
bool "Marvell XOR engine support"
- depends on PLAT_ORION
+ depends on PLAT_ORION || ARCH_MVEBU || COMPILE_TEST
select DMA_ENGINE
select DMA_ENGINE_RAID
select ASYNC_TX_ENABLE_CHANNEL_SWITCH
@@ -467,6 +467,20 @@ config TEGRA20_APB_DMA
This DMA controller transfers data from memory to peripheral fifo
or vice versa. It does not support memory to memory data transfer.
+config TEGRA210_ADMA
+ bool "NVIDIA Tegra210 ADMA support"
+ depends on ARCH_TEGRA_210_SOC
+ select DMA_ENGINE
+ select DMA_VIRTUAL_CHANNELS
+ select PM_CLK
+ help
+ Support for the NVIDIA Tegra210 ADMA controller driver. The
+ DMA controller has multiple DMA channels and is used to service
+ various audio clients in the Tegra210 audio processing engine
+ (APE). This DMA controller transfers data from memory to
+ peripheral and vice versa. It does not support memory to
+ memory data transfer.
+
config TIMB_DMA
tristate "Timberdale FPGA DMA support"
depends on MFD_TIMBERDALE
@@ -507,7 +521,7 @@ config XGENE_DMA
config XILINX_VDMA
tristate "Xilinx AXI VDMA Engine"
- depends on (ARCH_ZYNQ || MICROBLAZE)
+ depends on (ARCH_ZYNQ || MICROBLAZE || ARM64)
select DMA_ENGINE
help
Enable support for Xilinx AXI VDMA Soft IP.
diff --git a/drivers/dma/Makefile b/drivers/dma/Makefile
index 6084127c1486..614f28b0b739 100644
--- a/drivers/dma/Makefile
+++ b/drivers/dma/Makefile
@@ -59,6 +59,7 @@ obj-$(CONFIG_STM32_DMA) += stm32-dma.o
obj-$(CONFIG_S3C24XX_DMAC) += s3c24xx-dma.o
obj-$(CONFIG_TXX9_DMAC) += txx9dmac.o
obj-$(CONFIG_TEGRA20_APB_DMA) += tegra20-apb-dma.o
+obj-$(CONFIG_TEGRA210_ADMA) += tegra210-adma.o
obj-$(CONFIG_TIMB_DMA) += timb_dma.o
obj-$(CONFIG_TI_CPPI41) += cppi41.o
obj-$(CONFIG_TI_DMA_CROSSBAR) += ti-dma-crossbar.o
diff --git a/drivers/dma/amba-pl08x.c b/drivers/dma/amba-pl08x.c
index 9b42c0588550..81db1c4811ce 100644
--- a/drivers/dma/amba-pl08x.c
+++ b/drivers/dma/amba-pl08x.c
@@ -107,16 +107,20 @@ struct pl08x_driver_data;
/**
* struct vendor_data - vendor-specific config parameters for PL08x derivatives
* @channels: the number of channels available in this variant
+ * @signals: the number of request signals available from the hardware
* @dualmaster: whether this version supports dual AHB masters or not.
* @nomadik: whether the channels have Nomadik security extension bits
* that need to be checked for permission before use and some registers are
* missing
* @pl080s: whether this version is a PL080S, which has separate register and
* LLI word for transfer size.
+ * @max_transfer_size: the maximum single element transfer size for this
+ * PL08x variant.
*/
struct vendor_data {
u8 config_offset;
u8 channels;
+ u8 signals;
bool dualmaster;
bool nomadik;
bool pl080s;
@@ -235,7 +239,7 @@ struct pl08x_dma_chan {
struct virt_dma_chan vc;
struct pl08x_phy_chan *phychan;
const char *name;
- const struct pl08x_channel_data *cd;
+ struct pl08x_channel_data *cd;
struct dma_slave_config cfg;
struct pl08x_txd *at;
struct pl08x_driver_data *host;
@@ -1909,6 +1913,12 @@ static int pl08x_dma_init_virtual_channels(struct pl08x_driver_data *pl08x,
if (slave) {
chan->cd = &pl08x->pd->slave_channels[i];
+ /*
+ * Some implementations have muxed signals, whereas some
+ * use a mux in front of the signals and need dynamic
+ * assignment of signals.
+ */
+ chan->signal = i;
pl08x_dma_slave_init(chan);
} else {
chan->cd = &pl08x->pd->memcpy_channel;
@@ -2050,40 +2060,33 @@ static struct dma_chan *pl08x_of_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
struct pl08x_driver_data *pl08x = ofdma->of_dma_data;
- struct pl08x_channel_data *data;
- struct pl08x_dma_chan *chan;
struct dma_chan *dma_chan;
+ struct pl08x_dma_chan *plchan;
if (!pl08x)
return NULL;
- if (dma_spec->args_count != 2)
+ if (dma_spec->args_count != 2) {
+ dev_err(&pl08x->adev->dev,
+ "DMA channel translation requires two cells\n");
return NULL;
+ }
dma_chan = pl08x_find_chan_id(pl08x, dma_spec->args[0]);
- if (dma_chan)
- return dma_get_slave_channel(dma_chan);
-
- chan = devm_kzalloc(pl08x->slave.dev, sizeof(*chan) + sizeof(*data),
- GFP_KERNEL);
- if (!chan)
+ if (!dma_chan) {
+ dev_err(&pl08x->adev->dev,
+ "DMA slave channel not found\n");
return NULL;
+ }
- data = (void *)&chan[1];
- data->bus_id = "(none)";
- data->periph_buses = dma_spec->args[1];
-
- chan->cd = data;
- chan->host = pl08x;
- chan->slave = true;
- chan->name = data->bus_id;
- chan->state = PL08X_CHAN_IDLE;
- chan->signal = dma_spec->args[0];
- chan->vc.desc_free = pl08x_desc_free;
-
- vchan_init(&chan->vc, &pl08x->slave);
+ plchan = to_pl08x_chan(dma_chan);
+ dev_dbg(&pl08x->adev->dev,
+ "translated channel for signal %d\n",
+ dma_spec->args[0]);
- return dma_get_slave_channel(&chan->vc.chan);
+ /* Augment channel data for applicable AHB buses */
+ plchan->cd->periph_buses = dma_spec->args[1];
+ return dma_get_slave_channel(dma_chan);
}
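/*
 * Illustrative device tree fragment (not part of this patch) for the
 * two-cell specifier consumed above - cell 0 is the request signal number,
 * cell 1 the AHB bus mask written to periph_buses; the node names and
 * values are assumptions for the example.
 *
 *	uart0: serial@101f1000 {
 *		dmas = <&dma 13 0x1>, <&dma 14 0x1>;
 *		dma-names = "rx", "tx";
 *	};
 */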
static int pl08x_of_probe(struct amba_device *adev,
@@ -2091,9 +2094,11 @@ static int pl08x_of_probe(struct amba_device *adev,
struct device_node *np)
{
struct pl08x_platform_data *pd;
+ struct pl08x_channel_data *chanp = NULL;
u32 cctl_memcpy = 0;
u32 val;
int ret;
+ int i;
pd = devm_kzalloc(&adev->dev, sizeof(*pd), GFP_KERNEL);
if (!pd)
@@ -2195,6 +2200,27 @@ static int pl08x_of_probe(struct amba_device *adev,
/* Use the buses that can access memory, obviously */
pd->memcpy_channel.periph_buses = pd->mem_buses;
+ /*
+ * Allocate channel data for all possible slave channels (one
+ * for each possible signal); channels will then be allocated
+ * for a device and have their AHB interfaces set up at
+ * translation time.
+ */
+ chanp = devm_kcalloc(&adev->dev,
+ pl08x->vd->signals,
+ sizeof(struct pl08x_channel_data),
+ GFP_KERNEL);
+ if (!chanp)
+ return -ENOMEM;
+
+ pd->slave_channels = chanp;
+ for (i = 0; i < pl08x->vd->signals; i++) {
+ /* chanp->periph_buses will be assigned at translation */
+ chanp->bus_id = kasprintf(GFP_KERNEL, "slave%d", i);
+ chanp++;
+ }
+ pd->num_slave_channels = pl08x->vd->signals;
+
pl08x->pd = pd;
return of_dma_controller_register(adev->dev.of_node, pl08x_of_xlate,
@@ -2234,6 +2260,10 @@ static int pl08x_probe(struct amba_device *adev, const struct amba_id *id)
goto out_no_pl08x;
}
+ /* Assign useful pointers to the driver state */
+ pl08x->adev = adev;
+ pl08x->vd = vd;
+
/* Initialize memcpy engine */
dma_cap_set(DMA_MEMCPY, pl08x->memcpy.cap_mask);
pl08x->memcpy.dev = &adev->dev;
@@ -2284,10 +2314,6 @@ static int pl08x_probe(struct amba_device *adev, const struct amba_id *id)
}
}
- /* Assign useful pointers to the driver state */
- pl08x->adev = adev;
- pl08x->vd = vd;
-
/* By default, AHB1 only. If dualmaster, from platform */
pl08x->lli_buses = PL08X_AHB1;
pl08x->mem_buses = PL08X_AHB1;
@@ -2438,6 +2464,7 @@ out_no_pl08x:
static struct vendor_data vendor_pl080 = {
.config_offset = PL080_CH_CONFIG,
.channels = 8,
+ .signals = 16,
.dualmaster = true,
.max_transfer_size = PL080_CONTROL_TRANSFER_SIZE_MASK,
};
@@ -2445,6 +2472,7 @@ static struct vendor_data vendor_pl080 = {
static struct vendor_data vendor_nomadik = {
.config_offset = PL080_CH_CONFIG,
.channels = 8,
+ .signals = 32,
.dualmaster = true,
.nomadik = true,
.max_transfer_size = PL080_CONTROL_TRANSFER_SIZE_MASK,
@@ -2453,6 +2481,7 @@ static struct vendor_data vendor_nomadik = {
static struct vendor_data vendor_pl080s = {
.config_offset = PL080S_CH_CONFIG,
.channels = 8,
+ .signals = 32,
.pl080s = true,
.max_transfer_size = PL080S_CONTROL_TRANSFER_SIZE_MASK,
};
@@ -2460,6 +2489,7 @@ static struct vendor_data vendor_pl080s = {
static struct vendor_data vendor_pl081 = {
.config_offset = PL080_CH_CONFIG,
.channels = 2,
+ .signals = 16,
.dualmaster = false,
.max_transfer_size = PL080_CONTROL_TRANSFER_SIZE_MASK,
};
diff --git a/drivers/dma/bcm2835-dma.c b/drivers/dma/bcm2835-dma.c
index 996c4b00d323..6149b27c33ad 100644
--- a/drivers/dma/bcm2835-dma.c
+++ b/drivers/dma/bcm2835-dma.c
@@ -46,6 +46,9 @@
#include "virt-dma.h"
+#define BCM2835_DMA_MAX_DMA_CHAN_SUPPORTED 14
+#define BCM2835_DMA_CHAN_NAME_SIZE 8
+
struct bcm2835_dmadev {
struct dma_device ddev;
spinlock_t lock;
@@ -73,7 +76,6 @@ struct bcm2835_chan {
struct list_head node;
struct dma_slave_config cfg;
- bool cyclic;
unsigned int dreq;
int ch;
@@ -82,6 +84,9 @@ struct bcm2835_chan {
void __iomem *chan_base;
int irq_number;
+ unsigned int irq_flags;
+
+ bool is_lite_channel;
};
struct bcm2835_desc {
@@ -89,47 +94,104 @@ struct bcm2835_desc {
struct virt_dma_desc vd;
enum dma_transfer_direction dir;
- struct bcm2835_cb_entry *cb_list;
-
unsigned int frames;
size_t size;
+
+ bool cyclic;
+
+ struct bcm2835_cb_entry cb_list[];
};
#define BCM2835_DMA_CS 0x00
#define BCM2835_DMA_ADDR 0x04
+#define BCM2835_DMA_TI 0x08
#define BCM2835_DMA_SOURCE_AD 0x0c
#define BCM2835_DMA_DEST_AD 0x10
-#define BCM2835_DMA_NEXTCB 0x1C
+#define BCM2835_DMA_LEN 0x14
+#define BCM2835_DMA_STRIDE 0x18
+#define BCM2835_DMA_NEXTCB 0x1c
+#define BCM2835_DMA_DEBUG 0x20
/* DMA CS Control and Status bits */
-#define BCM2835_DMA_ACTIVE BIT(0)
-#define BCM2835_DMA_INT BIT(2)
+#define BCM2835_DMA_ACTIVE BIT(0) /* activate the DMA */
+#define BCM2835_DMA_END BIT(1) /* current CB has ended */
+#define BCM2835_DMA_INT BIT(2) /* interrupt status */
+#define BCM2835_DMA_DREQ BIT(3) /* DREQ state */
#define BCM2835_DMA_ISPAUSED BIT(4) /* Pause requested or not active */
#define BCM2835_DMA_ISHELD BIT(5) /* Is held by DREQ flow control */
-#define BCM2835_DMA_ERR BIT(8)
+#define BCM2835_DMA_WAITING_FOR_WRITES BIT(6) /* waiting for last
+ * AXI-write to ack
+ */
+#define BCM2835_DMA_ERR BIT(8)
+#define BCM2835_DMA_PRIORITY(x) ((x & 15) << 16) /* AXI priority */
+#define BCM2835_DMA_PANIC_PRIORITY(x) ((x & 15) << 20) /* panic priority */
+/* current value of TI.BCM2835_DMA_WAIT_RESP */
+#define BCM2835_DMA_WAIT_FOR_WRITES BIT(28)
+#define BCM2835_DMA_DIS_DEBUG BIT(29) /* disable debug pause signal */
#define BCM2835_DMA_ABORT BIT(30) /* Stop current CB, go to next, WO */
#define BCM2835_DMA_RESET BIT(31) /* WO, self clearing */
+/* Transfer information bits - also bcm2835_cb.info field */
#define BCM2835_DMA_INT_EN BIT(0)
+#define BCM2835_DMA_TDMODE BIT(1) /* 2D-Mode */
+#define BCM2835_DMA_WAIT_RESP BIT(3) /* wait for AXI-write to be acked */
#define BCM2835_DMA_D_INC BIT(4)
-#define BCM2835_DMA_D_DREQ BIT(6)
+#define BCM2835_DMA_D_WIDTH BIT(5) /* 128bit writes if set */
+#define BCM2835_DMA_D_DREQ BIT(6) /* enable DREQ for destination */
+#define BCM2835_DMA_D_IGNORE BIT(7) /* ignore destination writes */
#define BCM2835_DMA_S_INC BIT(8)
-#define BCM2835_DMA_S_DREQ BIT(10)
-
-#define BCM2835_DMA_PER_MAP(x) ((x) << 16)
+#define BCM2835_DMA_S_WIDTH BIT(9) /* 128bit writes if set */
+#define BCM2835_DMA_S_DREQ BIT(10) /* enable SREQ for source */
+#define BCM2835_DMA_S_IGNORE BIT(11) /* ignore source reads - read 0 */
+#define BCM2835_DMA_BURST_LENGTH(x) ((x & 15) << 12)
+#define BCM2835_DMA_PER_MAP(x) ((x & 31) << 16) /* REQ source */
+#define BCM2835_DMA_WAIT(x) ((x & 31) << 21) /* add DMA-wait cycles */
+#define BCM2835_DMA_NO_WIDE_BURSTS BIT(26) /* no 2 beat write bursts */
+
+/* debug register bits */
+#define BCM2835_DMA_DEBUG_LAST_NOT_SET_ERR BIT(0)
+#define BCM2835_DMA_DEBUG_FIFO_ERR BIT(1)
+#define BCM2835_DMA_DEBUG_READ_ERR BIT(2)
+#define BCM2835_DMA_DEBUG_OUTSTANDING_WRITES_SHIFT 4
+#define BCM2835_DMA_DEBUG_OUTSTANDING_WRITES_BITS 4
+#define BCM2835_DMA_DEBUG_ID_SHIFT 16
+#define BCM2835_DMA_DEBUG_ID_BITS 9
+#define BCM2835_DMA_DEBUG_STATE_SHIFT 16
+#define BCM2835_DMA_DEBUG_STATE_BITS 9
+#define BCM2835_DMA_DEBUG_VERSION_SHIFT 25
+#define BCM2835_DMA_DEBUG_VERSION_BITS 3
+#define BCM2835_DMA_DEBUG_LITE BIT(28)
+
+/* shared registers for all dma channels */
+#define BCM2835_DMA_INT_STATUS 0xfe0
+#define BCM2835_DMA_ENABLE 0xff0
#define BCM2835_DMA_DATA_TYPE_S8 1
#define BCM2835_DMA_DATA_TYPE_S16 2
#define BCM2835_DMA_DATA_TYPE_S32 4
#define BCM2835_DMA_DATA_TYPE_S128 16
-#define BCM2835_DMA_BULK_MASK BIT(0)
-#define BCM2835_DMA_FIQ_MASK (BIT(2) | BIT(3))
-
/* Valid only for channels 0 - 14, 15 has its own base address */
#define BCM2835_DMA_CHAN(n) ((n) << 8) /* Base address */
#define BCM2835_DMA_CHANIO(base, n) ((base) + BCM2835_DMA_CHAN(n))
+/* the max dma length for different channels */
+#define MAX_DMA_LEN SZ_1G
+#define MAX_LITE_DMA_LEN (SZ_64K - 4)
+
+static inline size_t bcm2835_dma_max_frame_length(struct bcm2835_chan *c)
+{
+ /* lite and normal channels have different max frame length */
+ return c->is_lite_channel ? MAX_LITE_DMA_LEN : MAX_DMA_LEN;
+}
+
+/* how many frames of max_len size do we need to transfer len bytes */
+static inline size_t bcm2835_dma_frames_for_length(size_t len,
+ size_t max_len)
+{
+ return DIV_ROUND_UP(len, max_len);
+}
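/*
 * Worked example (illustrative, not from the patch): a 200 KiB (204800 byte)
 * transfer needs DIV_ROUND_UP(204800, 65532) = 4 control blocks on a "lite"
 * channel (MAX_LITE_DMA_LEN = 64 KiB - 4), but only one on a normal channel
 * with its 1 GiB per-frame limit.
 */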
+
static inline struct bcm2835_dmadev *to_bcm2835_dma_dev(struct dma_device *d)
{
return container_of(d, struct bcm2835_dmadev, ddev);
@@ -146,19 +208,209 @@ static inline struct bcm2835_desc *to_bcm2835_dma_desc(
return container_of(t, struct bcm2835_desc, vd.tx);
}
-static void bcm2835_dma_desc_free(struct virt_dma_desc *vd)
+static void bcm2835_dma_free_cb_chain(struct bcm2835_desc *desc)
{
- struct bcm2835_desc *desc = container_of(vd, struct bcm2835_desc, vd);
- int i;
+ size_t i;
for (i = 0; i < desc->frames; i++)
dma_pool_free(desc->c->cb_pool, desc->cb_list[i].cb,
desc->cb_list[i].paddr);
- kfree(desc->cb_list);
kfree(desc);
}
+static void bcm2835_dma_desc_free(struct virt_dma_desc *vd)
+{
+ bcm2835_dma_free_cb_chain(
+ container_of(vd, struct bcm2835_desc, vd));
+}
+
+static void bcm2835_dma_create_cb_set_length(
+ struct bcm2835_chan *chan,
+ struct bcm2835_dma_cb *control_block,
+ size_t len,
+ size_t period_len,
+ size_t *total_len,
+ u32 finalextrainfo)
+{
+ size_t max_len = bcm2835_dma_max_frame_length(chan);
+
+ /* set the length taking lite-channel limitations into account */
+ control_block->length = min_t(u32, len, max_len);
+
+ /* finished if we have no period_length */
+ if (!period_len)
+ return;
+
+ /*
+ * period_len means that we need to generate
+ * transfers that terminate at every
+ * multiple of period_len - this is typically
+ * used to set the interrupt flag in info,
+ * which is required during cyclic transfers
+ */
+
+ /* if we have not yet reached period_len, keep the full length */
+ if (*total_len + control_block->length < period_len)
+ return;
+
+ /* calculate the length that remains to reach period_len */
+ control_block->length = period_len - *total_len;
+
+ /* reset total_length for next period */
+ *total_len = 0;
+
+ /* add extrainfo bits in info */
+ control_block->info |= finalextrainfo;
+}
+
+static inline size_t bcm2835_dma_count_frames_for_sg(
+ struct bcm2835_chan *c,
+ struct scatterlist *sgl,
+ unsigned int sg_len)
+{
+ size_t frames = 0;
+ struct scatterlist *sgent;
+ unsigned int i;
+ size_t plength = bcm2835_dma_max_frame_length(c);
+
+ for_each_sg(sgl, sgent, sg_len, i)
+ frames += bcm2835_dma_frames_for_length(
+ sg_dma_len(sgent), plength);
+
+ return frames;
+}
+
+/**
+ * bcm2835_dma_create_cb_chain - create a chain of control blocks and fill in data
+ *
+ * @chan: the @dma_chan for which we run this
+ * @direction: the direction in which we transfer
+ * @cyclic: it is a cyclic transfer
+ * @info: the default info bits to apply per controlblock
+ * @frames: number of controlblocks to allocate
+ * @src: the src address to assign (if the S_INC bit is set
+ * in @info, then it gets incremented)
+ * @dst: the dst address to assign (if the D_INC bit is set
+ * in @info, then it gets incremented)
+ * @buf_len: the full buffer length (may also be 0)
+ * @period_len: the period length at which to apply @finalextrainfo
+ * in addition to the last transfer;
+ * this will also break some control blocks early
+ * @finalextrainfo: additional bits in last controlblock
+ * (or when period_len is reached in case of cyclic)
+ * @gfp: the GFP flag to use for allocation
+ */
+static struct bcm2835_desc *bcm2835_dma_create_cb_chain(
+ struct dma_chan *chan, enum dma_transfer_direction direction,
+ bool cyclic, u32 info, u32 finalextrainfo, size_t frames,
+ dma_addr_t src, dma_addr_t dst, size_t buf_len,
+ size_t period_len, gfp_t gfp)
+{
+ struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
+ size_t len = buf_len, total_len;
+ size_t frame;
+ struct bcm2835_desc *d;
+ struct bcm2835_cb_entry *cb_entry;
+ struct bcm2835_dma_cb *control_block;
+
+ if (!frames)
+ return NULL;
+
+ /* allocate and setup the descriptor. */
+ d = kzalloc(sizeof(*d) + frames * sizeof(struct bcm2835_cb_entry),
+ gfp);
+ if (!d)
+ return NULL;
+
+ d->c = c;
+ d->dir = direction;
+ d->cyclic = cyclic;
+
+ /*
+ * Iterate over all frames, create a control block
+ * for each frame and link them together.
+ */
+ for (frame = 0, total_len = 0; frame < frames; d->frames++, frame++) {
+ cb_entry = &d->cb_list[frame];
+ cb_entry->cb = dma_pool_alloc(c->cb_pool, gfp,
+ &cb_entry->paddr);
+ if (!cb_entry->cb)
+ goto error_cb;
+
+ /* fill in the control block */
+ control_block = cb_entry->cb;
+ control_block->info = info;
+ control_block->src = src;
+ control_block->dst = dst;
+ control_block->stride = 0;
+ control_block->next = 0;
+ /* set up length in control_block if requested */
+ if (buf_len) {
+ /* calculate length honoring period_length */
+ bcm2835_dma_create_cb_set_length(
+ c, control_block,
+ len, period_len, &total_len,
+ cyclic ? finalextrainfo : 0);
+
+ /* calculate new remaining length */
+ len -= control_block->length;
+ }
+
+ /* link this control block after the previous one */
+ if (frame)
+ d->cb_list[frame - 1].cb->next = cb_entry->paddr;
+
+ /* update src and dst and length */
+ if (src && (info & BCM2835_DMA_S_INC))
+ src += control_block->length;
+ if (dst && (info & BCM2835_DMA_D_INC))
+ dst += control_block->length;
+
+ /* Length of total transfer */
+ d->size += control_block->length;
+ }
+
+ /* the last frame requires extra flags */
+ d->cb_list[d->frames - 1].cb->info |= finalextrainfo;
+
+ /* detect a size mismatch */
+ if (buf_len && (d->size != buf_len))
+ goto error_cb;
+
+ return d;
+error_cb:
+ bcm2835_dma_free_cb_chain(d);
+
+ return NULL;
+}
+
+static void bcm2835_dma_fill_cb_chain_with_sg(
+ struct dma_chan *chan,
+ enum dma_transfer_direction direction,
+ struct bcm2835_cb_entry *cb,
+ struct scatterlist *sgl,
+ unsigned int sg_len)
+{
+ struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
+ size_t max_len = bcm2835_dma_max_frame_length(c);
+ unsigned int i, len;
+ dma_addr_t addr;
+ struct scatterlist *sgent;
+
+ for_each_sg(sgl, sgent, sg_len, i) {
+ for (addr = sg_dma_address(sgent), len = sg_dma_len(sgent);
+ len > 0;
+ addr += cb->cb->length, len -= cb->cb->length, cb++) {
+ if (direction == DMA_DEV_TO_MEM)
+ cb->cb->dst = addr;
+ else
+ cb->cb->src = addr;
+ cb->cb->length = min(len, max_len);
+ }
+ }
+}
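/*
 * Worked example (illustrative, not from the patch): on a "lite" channel an
 * 80 KiB scatterlist entry is split here across two control blocks of 65532
 * and 16388 bytes, matching the two frames reserved for it by
 * bcm2835_dma_count_frames_for_sg().
 */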
+
static int bcm2835_dma_abort(void __iomem *chan_base)
{
unsigned long cs;
@@ -218,6 +470,15 @@ static irqreturn_t bcm2835_dma_callback(int irq, void *data)
struct bcm2835_desc *d;
unsigned long flags;
+ /* check the shared interrupt */
+ if (c->irq_flags & IRQF_SHARED) {
+ /* check if the interrupt is enabled */
+ flags = readl(c->chan_base + BCM2835_DMA_CS);
+ /* if not set then we are not the reason for the irq */
+ if (!(flags & BCM2835_DMA_INT))
+ return IRQ_NONE;
+ }
+
spin_lock_irqsave(&c->vc.lock, flags);
/* Acknowledge interrupt */
@@ -226,12 +487,18 @@ static irqreturn_t bcm2835_dma_callback(int irq, void *data)
d = c->desc;
if (d) {
- /* TODO Only works for cyclic DMA */
- vchan_cyclic_callback(&d->vd);
- }
+ if (d->cyclic) {
+ /* call the cyclic callback */
+ vchan_cyclic_callback(&d->vd);
- /* Keep the DMA engine running */
- writel(BCM2835_DMA_ACTIVE, c->chan_base + BCM2835_DMA_CS);
+ /* Keep the DMA engine running */
+ writel(BCM2835_DMA_ACTIVE,
+ c->chan_base + BCM2835_DMA_CS);
+ } else {
+ vchan_cookie_complete(&c->desc->vd);
+ bcm2835_dma_start_desc(c);
+ }
+ }
spin_unlock_irqrestore(&c->vc.lock, flags);
@@ -252,8 +519,8 @@ static int bcm2835_dma_alloc_chan_resources(struct dma_chan *chan)
return -ENOMEM;
}
- return request_irq(c->irq_number,
- bcm2835_dma_callback, 0, "DMA IRQ", c);
+ return request_irq(c->irq_number, bcm2835_dma_callback,
+ c->irq_flags, "DMA IRQ", c);
}
static void bcm2835_dma_free_chan_resources(struct dma_chan *chan)
@@ -339,8 +606,6 @@ static void bcm2835_dma_issue_pending(struct dma_chan *chan)
struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
unsigned long flags;
- c->cyclic = true; /* Nothing else is implemented */
-
spin_lock_irqsave(&c->vc.lock, flags);
if (vchan_issue_pending(&c->vc) && !c->desc)
bcm2835_dma_start_desc(c);
@@ -348,122 +613,160 @@ static void bcm2835_dma_issue_pending(struct dma_chan *chan)
spin_unlock_irqrestore(&c->vc.lock, flags);
}
-static struct dma_async_tx_descriptor *bcm2835_dma_prep_dma_cyclic(
- struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
- size_t period_len, enum dma_transfer_direction direction,
- unsigned long flags)
+struct dma_async_tx_descriptor *bcm2835_dma_prep_dma_memcpy(
+ struct dma_chan *chan, dma_addr_t dst, dma_addr_t src,
+ size_t len, unsigned long flags)
{
struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
- enum dma_slave_buswidth dev_width;
struct bcm2835_desc *d;
- dma_addr_t dev_addr;
- unsigned int es, sync_type;
- unsigned int frame;
- int i;
+ u32 info = BCM2835_DMA_D_INC | BCM2835_DMA_S_INC;
+ u32 extra = BCM2835_DMA_INT_EN | BCM2835_DMA_WAIT_RESP;
+ size_t max_len = bcm2835_dma_max_frame_length(c);
+ size_t frames;
+
+ /* if src, dst or len is not given return with an error */
+ if (!src || !dst || !len)
+ return NULL;
+
+ /* calculate number of frames */
+ frames = bcm2835_dma_frames_for_length(len, max_len);
+
+ /* allocate the CB chain - this also fills in the pointers */
+ d = bcm2835_dma_create_cb_chain(chan, DMA_MEM_TO_MEM, false,
+ info, extra, frames,
+ src, dst, len, 0, GFP_KERNEL);
+ if (!d)
+ return NULL;
+
+ return vchan_tx_prep(&c->vc, &d->vd, flags);
+}
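/*
 * Illustrative client-side sketch (not part of this patch) of the new memcpy
 * support through the generic dmaengine API; channel acquisition, completion
 * handling and the availability of the dmaengine_prep_dma_memcpy() wrapper
 * are assumed.
 */
static int example_bcm2835_memcpy(struct dma_chan *chan, dma_addr_t dst,
				  dma_addr_t src, size_t len)
{
	struct dma_async_tx_descriptor *tx;

	tx = dmaengine_prep_dma_memcpy(chan, dst, src, len, DMA_PREP_INTERRUPT);
	if (!tx)
		return -ENOMEM;

	dmaengine_submit(tx);		/* queue the descriptor */
	dma_async_issue_pending(chan);	/* kick the engine */
	return 0;
}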
+
+static struct dma_async_tx_descriptor *bcm2835_dma_prep_slave_sg(
+ struct dma_chan *chan,
+ struct scatterlist *sgl, unsigned int sg_len,
+ enum dma_transfer_direction direction,
+ unsigned long flags, void *context)
+{
+ struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
+ struct bcm2835_desc *d;
+ dma_addr_t src = 0, dst = 0;
+ u32 info = BCM2835_DMA_WAIT_RESP;
+ u32 extra = BCM2835_DMA_INT_EN;
+ size_t frames;
- /* Grab configuration */
if (!is_slave_direction(direction)) {
- dev_err(chan->device->dev, "%s: bad direction?\n", __func__);
+ dev_err(chan->device->dev,
+ "%s: bad direction?\n", __func__);
return NULL;
}
+ if (c->dreq != 0)
+ info |= BCM2835_DMA_PER_MAP(c->dreq);
+
if (direction == DMA_DEV_TO_MEM) {
- dev_addr = c->cfg.src_addr;
- dev_width = c->cfg.src_addr_width;
- sync_type = BCM2835_DMA_S_DREQ;
+ if (c->cfg.src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES)
+ return NULL;
+ src = c->cfg.src_addr;
+ info |= BCM2835_DMA_S_DREQ | BCM2835_DMA_D_INC;
} else {
- dev_addr = c->cfg.dst_addr;
- dev_width = c->cfg.dst_addr_width;
- sync_type = BCM2835_DMA_D_DREQ;
+ if (c->cfg.dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES)
+ return NULL;
+ dst = c->cfg.dst_addr;
+ info |= BCM2835_DMA_D_DREQ | BCM2835_DMA_S_INC;
}
- /* Bus width translates to the element size (ES) */
- switch (dev_width) {
- case DMA_SLAVE_BUSWIDTH_4_BYTES:
- es = BCM2835_DMA_DATA_TYPE_S32;
- break;
- default:
- return NULL;
- }
+ /* count frames in sg list */
+ frames = bcm2835_dma_count_frames_for_sg(c, sgl, sg_len);
- /* Now allocate and setup the descriptor. */
- d = kzalloc(sizeof(*d), GFP_NOWAIT);
+ /* allocate the CB chain */
+ d = bcm2835_dma_create_cb_chain(chan, direction, false,
+ info, extra,
+ frames, src, dst, 0, 0,
+ GFP_KERNEL);
if (!d)
return NULL;
- d->c = c;
- d->dir = direction;
- d->frames = buf_len / period_len;
+ /* fill in frames with scatterlist pointers */
+ bcm2835_dma_fill_cb_chain_with_sg(chan, direction, d->cb_list,
+ sgl, sg_len);
- d->cb_list = kcalloc(d->frames, sizeof(*d->cb_list), GFP_KERNEL);
- if (!d->cb_list) {
- kfree(d);
+ return vchan_tx_prep(&c->vc, &d->vd, flags);
+}
+
+static struct dma_async_tx_descriptor *bcm2835_dma_prep_dma_cyclic(
+ struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
+ size_t period_len, enum dma_transfer_direction direction,
+ unsigned long flags)
+{
+ struct bcm2835_chan *c = to_bcm2835_dma_chan(chan);
+ struct bcm2835_desc *d;
+ dma_addr_t src, dst;
+ u32 info = BCM2835_DMA_WAIT_RESP;
+ u32 extra = BCM2835_DMA_INT_EN;
+ size_t max_len = bcm2835_dma_max_frame_length(c);
+ size_t frames;
+
+ /* Grab configuration */
+ if (!is_slave_direction(direction)) {
+ dev_err(chan->device->dev, "%s: bad direction?\n", __func__);
return NULL;
}
- /* Allocate memory for control blocks */
- for (i = 0; i < d->frames; i++) {
- struct bcm2835_cb_entry *cb_entry = &d->cb_list[i];
- cb_entry->cb = dma_pool_zalloc(c->cb_pool, GFP_ATOMIC,
- &cb_entry->paddr);
- if (!cb_entry->cb)
- goto error_cb;
+ if (!buf_len) {
+ dev_err(chan->device->dev,
+ "%s: bad buffer length (= 0)\n", __func__);
+ return NULL;
}
/*
- * Iterate over all frames, create a control block
- * for each frame and link them together.
+ * warn if buf_len is not a multiple of period_len - this may lead
+ * to unexpected latencies for interrupts and thus audible clicks
*/
- for (frame = 0; frame < d->frames; frame++) {
- struct bcm2835_dma_cb *control_block = d->cb_list[frame].cb;
-
- /* Setup adresses */
- if (d->dir == DMA_DEV_TO_MEM) {
- control_block->info = BCM2835_DMA_D_INC;
- control_block->src = dev_addr;
- control_block->dst = buf_addr + frame * period_len;
- } else {
- control_block->info = BCM2835_DMA_S_INC;
- control_block->src = buf_addr + frame * period_len;
- control_block->dst = dev_addr;
- }
+ if (buf_len % period_len)
+ dev_warn_once(chan->device->dev,
+ "%s: buffer_length (%zd) is not a multiple of period_len (%zd)\n",
+ __func__, buf_len, period_len);
- /* Enable interrupt */
- control_block->info |= BCM2835_DMA_INT_EN;
+ /* Setup DREQ channel */
+ if (c->dreq != 0)
+ info |= BCM2835_DMA_PER_MAP(c->dreq);
- /* Setup synchronization */
- if (sync_type != 0)
- control_block->info |= sync_type;
+ if (direction == DMA_DEV_TO_MEM) {
+ if (c->cfg.src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES)
+ return NULL;
+ src = c->cfg.src_addr;
+ dst = buf_addr;
+ info |= BCM2835_DMA_S_DREQ | BCM2835_DMA_D_INC;
+ } else {
+ if (c->cfg.dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES)
+ return NULL;
+ dst = c->cfg.dst_addr;
+ src = buf_addr;
+ info |= BCM2835_DMA_D_DREQ | BCM2835_DMA_S_INC;
+ }
- /* Setup DREQ channel */
- if (c->dreq != 0)
- control_block->info |=
- BCM2835_DMA_PER_MAP(c->dreq);
+ /* calculate number of frames */
+ frames = /* number of periods */
+ DIV_ROUND_UP(buf_len, period_len) *
+ /* number of frames per period */
+ bcm2835_dma_frames_for_length(period_len, max_len);
- /* Length of a frame */
- control_block->length = period_len;
- d->size += control_block->length;
+ /*
+ * allocate the CB chain
+ * note that we need to use GFP_NOWAIT, as the ALSA i2s dmaengine
+ * implementation calls prep_dma_cyclic with interrupts disabled.
+ */
+ d = bcm2835_dma_create_cb_chain(chan, direction, true,
+ info, extra,
+ frames, src, dst, buf_len,
+ period_len, GFP_NOWAIT);
+ if (!d)
+ return NULL;
- /*
- * Next block is the next frame.
- * This DMA engine driver currently only supports cyclic DMA.
- * Therefore, wrap around at number of frames.
- */
- control_block->next = d->cb_list[((frame + 1) % d->frames)].paddr;
- }
+ /* wrap around into a loop */
+ d->cb_list[d->frames - 1].cb->next = d->cb_list[0].paddr;
return vchan_tx_prep(&c->vc, &d->vd, flags);
-error_cb:
- i--;
- for (; i >= 0; i--) {
- struct bcm2835_cb_entry *cb_entry = &d->cb_list[i];
-
- dma_pool_free(c->cb_pool, cb_entry->cb, cb_entry->paddr);
- }
-
- kfree(d->cb_list);
- kfree(d);
- return NULL;
}
static int bcm2835_dma_slave_config(struct dma_chan *chan,
@@ -529,7 +832,8 @@ static int bcm2835_dma_terminate_all(struct dma_chan *chan)
return 0;
}
-static int bcm2835_dma_chan_init(struct bcm2835_dmadev *d, int chan_id, int irq)
+static int bcm2835_dma_chan_init(struct bcm2835_dmadev *d, int chan_id,
+ int irq, unsigned int irq_flags)
{
struct bcm2835_chan *c;
@@ -544,6 +848,12 @@ static int bcm2835_dma_chan_init(struct bcm2835_dmadev *d, int chan_id, int irq)
c->chan_base = BCM2835_DMA_CHANIO(d->base, chan_id);
c->ch = chan_id;
c->irq_number = irq;
+ c->irq_flags = irq_flags;
+
+ /* check in DEBUG register if this is a LITE channel */
+ if (readl(c->chan_base + BCM2835_DMA_DEBUG) &
+ BCM2835_DMA_DEBUG_LITE)
+ c->is_lite_channel = true;
return 0;
}
@@ -587,9 +897,11 @@ static int bcm2835_dma_probe(struct platform_device *pdev)
struct resource *res;
void __iomem *base;
int rc;
- int i;
- int irq;
+ int i, j;
+ int irq[BCM2835_DMA_MAX_DMA_CHAN_SUPPORTED + 1];
+ int irq_flags;
uint32_t chans_available;
+ char chan_name[BCM2835_DMA_CHAN_NAME_SIZE];
if (!pdev->dev.dma_mask)
pdev->dev.dma_mask = &pdev->dev.coherent_dma_mask;
@@ -615,16 +927,22 @@ static int bcm2835_dma_probe(struct platform_device *pdev)
dma_cap_set(DMA_SLAVE, od->ddev.cap_mask);
dma_cap_set(DMA_PRIVATE, od->ddev.cap_mask);
dma_cap_set(DMA_CYCLIC, od->ddev.cap_mask);
+ dma_cap_set(DMA_SLAVE, od->ddev.cap_mask);
+ dma_cap_set(DMA_MEMCPY, od->ddev.cap_mask);
od->ddev.device_alloc_chan_resources = bcm2835_dma_alloc_chan_resources;
od->ddev.device_free_chan_resources = bcm2835_dma_free_chan_resources;
od->ddev.device_tx_status = bcm2835_dma_tx_status;
od->ddev.device_issue_pending = bcm2835_dma_issue_pending;
od->ddev.device_prep_dma_cyclic = bcm2835_dma_prep_dma_cyclic;
+ od->ddev.device_prep_slave_sg = bcm2835_dma_prep_slave_sg;
+ od->ddev.device_prep_dma_memcpy = bcm2835_dma_prep_dma_memcpy;
od->ddev.device_config = bcm2835_dma_slave_config;
od->ddev.device_terminate_all = bcm2835_dma_terminate_all;
od->ddev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
od->ddev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
- od->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+ od->ddev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV) |
+ BIT(DMA_MEM_TO_MEM);
+ od->ddev.residue_granularity = DMA_RESIDUE_GRANULARITY_BURST;
od->ddev.dev = &pdev->dev;
INIT_LIST_HEAD(&od->ddev.channels);
spin_lock_init(&od->lock);
@@ -640,22 +958,48 @@ static int bcm2835_dma_probe(struct platform_device *pdev)
goto err_no_dma;
}
- /*
- * Do not use the FIQ and BULK channels,
- * because they are used by the GPU.
- */
- chans_available &= ~(BCM2835_DMA_FIQ_MASK | BCM2835_DMA_BULK_MASK);
+ /* get irqs for each channel that we support */
+ for (i = 0; i <= BCM2835_DMA_MAX_DMA_CHAN_SUPPORTED; i++) {
+ /* skip masked out channels */
+ if (!(chans_available & (1 << i))) {
+ irq[i] = -1;
+ continue;
+ }
- for (i = 0; i < pdev->num_resources; i++) {
- irq = platform_get_irq(pdev, i);
- if (irq < 0)
- break;
+ /* get the named irq */
+ snprintf(chan_name, sizeof(chan_name), "dma%i", i);
+ irq[i] = platform_get_irq_byname(pdev, chan_name);
+ if (irq[i] >= 0)
+ continue;
- if (chans_available & (1 << i)) {
- rc = bcm2835_dma_chan_init(od, i, irq);
- if (rc)
- goto err_no_dma;
- }
+ /* legacy device tree case handling */
+ dev_warn_once(&pdev->dev,
+ "missing interrupt-names property in device tree - legacy interpretation is used\n");
+ /*
+ * for channels >= 11 use the 11th interrupt,
+ * which is shared
+ */
+ irq[i] = platform_get_irq(pdev, i < 11 ? i : 11);
+ }
+
+ /* get irqs for each channel */
+ for (i = 0; i <= BCM2835_DMA_MAX_DMA_CHAN_SUPPORTED; i++) {
+ /* skip channels without irq */
+ if (irq[i] < 0)
+ continue;
+
+ /* check if there are other channels that also use this irq */
+ irq_flags = 0;
+ for (j = 0; j <= BCM2835_DMA_MAX_DMA_CHAN_SUPPORTED; j++)
+ if ((i != j) && (irq[j] == irq[i])) {
+ irq_flags = IRQF_SHARED;
+ break;
+ }
+
+ /* initialize the channel */
+ rc = bcm2835_dma_chan_init(od, i, irq[i], irq_flags);
+ if (rc)
+ goto err_no_dma;
}
dev_dbg(&pdev->dev, "Initialized %i DMA channels\n", i);
diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 0cb259c59916..8c9f45fd55fc 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -289,7 +289,7 @@ enum dma_status dma_sync_wait(struct dma_chan *chan, dma_cookie_t cookie)
do {
status = dma_async_is_tx_complete(chan, cookie, NULL, NULL);
if (time_after_eq(jiffies, dma_sync_wait_timeout)) {
- pr_err("%s: timeout!\n", __func__);
+ dev_err(chan->device->dev, "%s: timeout!\n", __func__);
return DMA_ERROR;
}
if (status != DMA_IN_PROGRESS)
@@ -482,7 +482,8 @@ int dma_get_slave_caps(struct dma_chan *chan, struct dma_slave_caps *caps)
device = chan->device;
/* check if the channel supports slave transactions */
- if (!test_bit(DMA_SLAVE, device->cap_mask.bits))
+ if (!(test_bit(DMA_SLAVE, device->cap_mask.bits) ||
+ test_bit(DMA_CYCLIC, device->cap_mask.bits)))
return -ENXIO;
/*
@@ -518,7 +519,7 @@ static struct dma_chan *private_candidate(const dma_cap_mask_t *mask,
struct dma_chan *chan;
if (mask && !__dma_device_satisfies_mask(dev, mask)) {
- pr_debug("%s: wrong capabilities\n", __func__);
+ dev_dbg(dev->dev, "%s: wrong capabilities\n", __func__);
return NULL;
}
/* devices with multiple channels need special handling as we need to
@@ -533,12 +534,12 @@ static struct dma_chan *private_candidate(const dma_cap_mask_t *mask,
list_for_each_entry(chan, &dev->channels, device_node) {
if (chan->client_count) {
- pr_debug("%s: %s busy\n",
+ dev_dbg(dev->dev, "%s: %s busy\n",
__func__, dma_chan_name(chan));
continue;
}
if (fn && !fn(chan, fn_param)) {
- pr_debug("%s: %s filter said false\n",
+ dev_dbg(dev->dev, "%s: %s filter said false\n",
__func__, dma_chan_name(chan));
continue;
}
@@ -567,11 +568,12 @@ static struct dma_chan *find_candidate(struct dma_device *device,
if (err) {
if (err == -ENODEV) {
- pr_debug("%s: %s module removed\n", __func__,
- dma_chan_name(chan));
+ dev_dbg(device->dev, "%s: %s module removed\n",
+ __func__, dma_chan_name(chan));
list_del_rcu(&device->global_node);
} else
- pr_debug("%s: failed to get %s: (%d)\n",
+ dev_dbg(device->dev,
+ "%s: failed to get %s: (%d)\n",
__func__, dma_chan_name(chan), err);
if (--device->privatecnt == 0)
@@ -602,7 +604,8 @@ struct dma_chan *dma_get_slave_channel(struct dma_chan *chan)
device->privatecnt++;
err = dma_chan_get(chan);
if (err) {
- pr_debug("%s: failed to get %s: (%d)\n",
+ dev_dbg(chan->device->dev,
+ "%s: failed to get %s: (%d)\n",
__func__, dma_chan_name(chan), err);
chan = NULL;
if (--device->privatecnt == 0)
@@ -814,8 +817,9 @@ void dmaengine_get(void)
list_del_rcu(&device->global_node);
break;
} else if (err)
- pr_debug("%s: failed to get %s: (%d)\n",
- __func__, dma_chan_name(chan), err);
+ dev_dbg(chan->device->dev,
+ "%s: failed to get %s: (%d)\n",
+ __func__, dma_chan_name(chan), err);
}
}
@@ -862,12 +866,12 @@ static bool device_has_all_tx_types(struct dma_device *device)
return false;
#endif
- #if defined(CONFIG_ASYNC_MEMCPY) || defined(CONFIG_ASYNC_MEMCPY_MODULE)
+ #if IS_ENABLED(CONFIG_ASYNC_MEMCPY)
if (!dma_has_cap(DMA_MEMCPY, device->cap_mask))
return false;
#endif
- #if defined(CONFIG_ASYNC_XOR) || defined(CONFIG_ASYNC_XOR_MODULE)
+ #if IS_ENABLED(CONFIG_ASYNC_XOR)
if (!dma_has_cap(DMA_XOR, device->cap_mask))
return false;
@@ -877,7 +881,7 @@ static bool device_has_all_tx_types(struct dma_device *device)
#endif
#endif
- #if defined(CONFIG_ASYNC_PQ) || defined(CONFIG_ASYNC_PQ_MODULE)
+ #if IS_ENABLED(CONFIG_ASYNC_PQ)
if (!dma_has_cap(DMA_PQ, device->cap_mask))
return false;
@@ -1222,8 +1226,9 @@ dma_wait_for_async_tx(struct dma_async_tx_descriptor *tx)
while (tx->cookie == -EBUSY) {
if (time_after_eq(jiffies, dma_sync_wait_timeout)) {
- pr_err("%s timeout waiting for descriptor submission\n",
- __func__);
+ dev_err(tx->chan->device->dev,
+ "%s timeout waiting for descriptor submission\n",
+ __func__);
return DMA_ERROR;
}
cpu_relax();
diff --git a/drivers/dma/dw/core.c b/drivers/dma/dw/core.c
index 97199b3c25a2..edf053f73a49 100644
--- a/drivers/dma/dw/core.c
+++ b/drivers/dma/dw/core.c
@@ -45,22 +45,19 @@
DW_DMA_MSIZE_16; \
u8 _dmsize = _is_slave ? _sconfig->dst_maxburst : \
DW_DMA_MSIZE_16; \
+ u8 _dms = (_dwc->direction == DMA_MEM_TO_DEV) ? \
+ _dwc->p_master : _dwc->m_master; \
+ u8 _sms = (_dwc->direction == DMA_DEV_TO_MEM) ? \
+ _dwc->p_master : _dwc->m_master; \
\
(DWC_CTLL_DST_MSIZE(_dmsize) \
| DWC_CTLL_SRC_MSIZE(_smsize) \
| DWC_CTLL_LLP_D_EN \
| DWC_CTLL_LLP_S_EN \
- | DWC_CTLL_DMS(_dwc->dst_master) \
- | DWC_CTLL_SMS(_dwc->src_master)); \
+ | DWC_CTLL_DMS(_dms) \
+ | DWC_CTLL_SMS(_sms)); \
})
-/*
- * Number of descriptors to allocate for each channel. This should be
- * made configurable somehow; preferably, the clients (at least the
- * ones using slave transfers) should be able to give us a hint.
- */
-#define NR_DESCS_PER_CHANNEL 64
-
/* The set of bus widths supported by the DMA controller */
#define DW_DMA_BUSWIDTHS \
BIT(DMA_SLAVE_BUSWIDTH_UNDEFINED) | \
@@ -80,51 +77,65 @@ static struct dw_desc *dwc_first_active(struct dw_dma_chan *dwc)
return to_dw_desc(dwc->active_list.next);
}
-static struct dw_desc *dwc_desc_get(struct dw_dma_chan *dwc)
+static dma_cookie_t dwc_tx_submit(struct dma_async_tx_descriptor *tx)
{
- struct dw_desc *desc, *_desc;
- struct dw_desc *ret = NULL;
- unsigned int i = 0;
- unsigned long flags;
+ struct dw_desc *desc = txd_to_dw_desc(tx);
+ struct dw_dma_chan *dwc = to_dw_dma_chan(tx->chan);
+ dma_cookie_t cookie;
+ unsigned long flags;
spin_lock_irqsave(&dwc->lock, flags);
- list_for_each_entry_safe(desc, _desc, &dwc->free_list, desc_node) {
- i++;
- if (async_tx_test_ack(&desc->txd)) {
- list_del(&desc->desc_node);
- ret = desc;
- break;
- }
- dev_dbg(chan2dev(&dwc->chan), "desc %p not ACKed\n", desc);
- }
+ cookie = dma_cookie_assign(tx);
+
+ /*
+ * REVISIT: We should attempt to chain as many descriptors as
+ * possible, perhaps even appending to those already submitted
+ * for DMA. But this is hard to do in a race-free manner.
+ */
+
+ list_add_tail(&desc->desc_node, &dwc->queue);
spin_unlock_irqrestore(&dwc->lock, flags);
+ dev_vdbg(chan2dev(tx->chan), "%s: queued %u\n",
+ __func__, desc->txd.cookie);
- dev_vdbg(chan2dev(&dwc->chan), "scanned %u descriptors on freelist\n", i);
+ return cookie;
+}
- return ret;
+static struct dw_desc *dwc_desc_get(struct dw_dma_chan *dwc)
+{
+ struct dw_dma *dw = to_dw_dma(dwc->chan.device);
+ struct dw_desc *desc;
+ dma_addr_t phys;
+
+ desc = dma_pool_zalloc(dw->desc_pool, GFP_ATOMIC, &phys);
+ if (!desc)
+ return NULL;
+
+ dwc->descs_allocated++;
+ INIT_LIST_HEAD(&desc->tx_list);
+ dma_async_tx_descriptor_init(&desc->txd, &dwc->chan);
+ desc->txd.tx_submit = dwc_tx_submit;
+ desc->txd.flags = DMA_CTRL_ACK;
+ desc->txd.phys = phys;
+ return desc;
}
-/*
- * Move a descriptor, including any children, to the free list.
- * `desc' must not be on any lists.
- */
static void dwc_desc_put(struct dw_dma_chan *dwc, struct dw_desc *desc)
{
- unsigned long flags;
+ struct dw_dma *dw = to_dw_dma(dwc->chan.device);
+ struct dw_desc *child, *_next;
- if (desc) {
- struct dw_desc *child;
+ if (unlikely(!desc))
+ return;
- spin_lock_irqsave(&dwc->lock, flags);
- list_for_each_entry(child, &desc->tx_list, desc_node)
- dev_vdbg(chan2dev(&dwc->chan),
- "moving child desc %p to freelist\n",
- child);
- list_splice_init(&desc->tx_list, &dwc->free_list);
- dev_vdbg(chan2dev(&dwc->chan), "moving desc %p to freelist\n", desc);
- list_add(&desc->desc_node, &dwc->free_list);
- spin_unlock_irqrestore(&dwc->lock, flags);
+ list_for_each_entry_safe(child, _next, &desc->tx_list, desc_node) {
+ list_del(&child->desc_node);
+ dma_pool_free(dw->desc_pool, child, child->txd.phys);
+ dwc->descs_allocated--;
}
+
+ dma_pool_free(dw->desc_pool, desc, desc->txd.phys);
+ dwc->descs_allocated--;
}
static void dwc_initialize(struct dw_dma_chan *dwc)
@@ -133,7 +144,7 @@ static void dwc_initialize(struct dw_dma_chan *dwc)
u32 cfghi = DWC_CFGH_FIFO_MODE;
u32 cfglo = DWC_CFGL_CH_PRIOR(dwc->priority);
- if (dwc->initialized == true)
+ if (test_bit(DW_DMA_IS_INITIALIZED, &dwc->flags))
return;
cfghi |= DWC_CFGH_DST_PER(dwc->dst_id);
@@ -146,26 +157,11 @@ static void dwc_initialize(struct dw_dma_chan *dwc)
channel_set_bit(dw, MASK.XFER, dwc->mask);
channel_set_bit(dw, MASK.ERROR, dwc->mask);
- dwc->initialized = true;
+ set_bit(DW_DMA_IS_INITIALIZED, &dwc->flags);
}
/*----------------------------------------------------------------------*/
-static inline unsigned int dwc_fast_ffs(unsigned long long v)
-{
- /*
- * We can be a lot more clever here, but this should take care
- * of the most common optimization.
- */
- if (!(v & 7))
- return 3;
- else if (!(v & 3))
- return 2;
- else if (!(v & 1))
- return 1;
- return 0;
-}
-
static inline void dwc_dump_chan_regs(struct dw_dma_chan *dwc)
{
dev_err(chan2dev(&dwc->chan),
@@ -197,12 +193,12 @@ static inline void dwc_do_single_block(struct dw_dma_chan *dwc,
* Software emulation of LLP mode relies on interrupts to continue
* multi block transfer.
*/
- ctllo = desc->lli.ctllo | DWC_CTLL_INT_EN;
+ ctllo = lli_read(desc, ctllo) | DWC_CTLL_INT_EN;
- channel_writel(dwc, SAR, desc->lli.sar);
- channel_writel(dwc, DAR, desc->lli.dar);
+ channel_writel(dwc, SAR, lli_read(desc, sar));
+ channel_writel(dwc, DAR, lli_read(desc, dar));
channel_writel(dwc, CTL_LO, ctllo);
- channel_writel(dwc, CTL_HI, desc->lli.ctlhi);
+ channel_writel(dwc, CTL_HI, lli_read(desc, ctlhi));
channel_set_bit(dw, CH_EN, dwc->mask);
/* Move pointer to next descriptor */
@@ -213,6 +209,7 @@ static inline void dwc_do_single_block(struct dw_dma_chan *dwc,
static void dwc_dostart(struct dw_dma_chan *dwc, struct dw_desc *first)
{
struct dw_dma *dw = to_dw_dma(dwc->chan.device);
+ u8 lms = DWC_LLP_LMS(dwc->m_master);
unsigned long was_soft_llp;
/* ASSERT: channel is idle */
@@ -237,7 +234,7 @@ static void dwc_dostart(struct dw_dma_chan *dwc, struct dw_desc *first)
dwc_initialize(dwc);
- dwc->residue = first->total_len;
+ first->residue = first->total_len;
dwc->tx_node_active = &first->tx_list;
/* Submit first block */
@@ -248,9 +245,8 @@ static void dwc_dostart(struct dw_dma_chan *dwc, struct dw_desc *first)
dwc_initialize(dwc);
- channel_writel(dwc, LLP, first->txd.phys);
- channel_writel(dwc, CTL_LO,
- DWC_CTLL_LLP_D_EN | DWC_CTLL_LLP_S_EN);
+ channel_writel(dwc, LLP, first->txd.phys | lms);
+ channel_writel(dwc, CTL_LO, DWC_CTLL_LLP_D_EN | DWC_CTLL_LLP_S_EN);
channel_writel(dwc, CTL_HI, 0);
channel_set_bit(dw, CH_EN, dwc->mask);
}
@@ -293,11 +289,7 @@ dwc_descriptor_complete(struct dw_dma_chan *dwc, struct dw_desc *desc,
list_for_each_entry(child, &desc->tx_list, desc_node)
async_tx_ack(&child->txd);
async_tx_ack(&desc->txd);
-
- list_splice_init(&desc->tx_list, &dwc->free_list);
- list_move(&desc->desc_node, &dwc->free_list);
-
- dma_descriptor_unmap(txd);
+ dwc_desc_put(dwc, desc);
spin_unlock_irqrestore(&dwc->lock, flags);
if (callback)
@@ -368,11 +360,11 @@ static void dwc_scan_descriptors(struct dw_dma *dw, struct dw_dma_chan *dwc)
head = &desc->tx_list;
if (active != head) {
- /* Update desc to reflect last sent one */
- if (active != head->next)
- desc = to_dw_desc(active->prev);
-
- dwc->residue -= desc->len;
+ /* Update residue to reflect last sent descriptor */
+ if (active == head->next)
+ desc->residue -= desc->len;
+ else
+ desc->residue -= to_dw_desc(active->prev)->len;
child = to_dw_desc(active);
@@ -387,8 +379,6 @@ static void dwc_scan_descriptors(struct dw_dma *dw, struct dw_dma_chan *dwc)
clear_bit(DW_DMA_IS_SOFT_LLP, &dwc->flags);
}
- dwc->residue = 0;
-
spin_unlock_irqrestore(&dwc->lock, flags);
dwc_complete_all(dw, dwc);
@@ -396,7 +386,6 @@ static void dwc_scan_descriptors(struct dw_dma *dw, struct dw_dma_chan *dwc)
}
if (list_empty(&dwc->active_list)) {
- dwc->residue = 0;
spin_unlock_irqrestore(&dwc->lock, flags);
return;
}
@@ -411,31 +400,31 @@ static void dwc_scan_descriptors(struct dw_dma *dw, struct dw_dma_chan *dwc)
list_for_each_entry_safe(desc, _desc, &dwc->active_list, desc_node) {
/* Initial residue value */
- dwc->residue = desc->total_len;
+ desc->residue = desc->total_len;
/* Check first descriptors addr */
- if (desc->txd.phys == llp) {
+ if (desc->txd.phys == DWC_LLP_LOC(llp)) {
spin_unlock_irqrestore(&dwc->lock, flags);
return;
}
/* Check first descriptors llp */
- if (desc->lli.llp == llp) {
+ if (lli_read(desc, llp) == llp) {
/* This one is currently in progress */
- dwc->residue -= dwc_get_sent(dwc);
+ desc->residue -= dwc_get_sent(dwc);
spin_unlock_irqrestore(&dwc->lock, flags);
return;
}
- dwc->residue -= desc->len;
+ desc->residue -= desc->len;
list_for_each_entry(child, &desc->tx_list, desc_node) {
- if (child->lli.llp == llp) {
+ if (lli_read(child, llp) == llp) {
/* Currently in progress */
- dwc->residue -= dwc_get_sent(dwc);
+ desc->residue -= dwc_get_sent(dwc);
spin_unlock_irqrestore(&dwc->lock, flags);
return;
}
- dwc->residue -= child->len;
+ desc->residue -= child->len;
}
/*
@@ -457,10 +446,14 @@ static void dwc_scan_descriptors(struct dw_dma *dw, struct dw_dma_chan *dwc)
spin_unlock_irqrestore(&dwc->lock, flags);
}
-static inline void dwc_dump_lli(struct dw_dma_chan *dwc, struct dw_lli *lli)
+static inline void dwc_dump_lli(struct dw_dma_chan *dwc, struct dw_desc *desc)
{
dev_crit(chan2dev(&dwc->chan), " desc: s0x%x d0x%x l0x%x c0x%x:%x\n",
- lli->sar, lli->dar, lli->llp, lli->ctlhi, lli->ctllo);
+ lli_read(desc, sar),
+ lli_read(desc, dar),
+ lli_read(desc, llp),
+ lli_read(desc, ctlhi),
+ lli_read(desc, ctllo));
}
static void dwc_handle_error(struct dw_dma *dw, struct dw_dma_chan *dwc)
@@ -496,9 +489,9 @@ static void dwc_handle_error(struct dw_dma *dw, struct dw_dma_chan *dwc)
*/
dev_WARN(chan2dev(&dwc->chan), "Bad descriptor submitted for DMA!\n"
" cookie: %d\n", bad_desc->txd.cookie);
- dwc_dump_lli(dwc, &bad_desc->lli);
+ dwc_dump_lli(dwc, bad_desc);
list_for_each_entry(child, &bad_desc->tx_list, desc_node)
- dwc_dump_lli(dwc, &child->lli);
+ dwc_dump_lli(dwc, child);
spin_unlock_irqrestore(&dwc->lock, flags);
@@ -549,7 +542,7 @@ static void dwc_handle_cyclic(struct dw_dma *dw, struct dw_dma_chan *dwc,
*/
if (unlikely(status_err & dwc->mask) ||
unlikely(status_xfer & dwc->mask)) {
- int i;
+ unsigned int i;
dev_err(chan2dev(&dwc->chan),
"cyclic DMA unexpected %s interrupt, stopping DMA transfer\n",
@@ -571,7 +564,7 @@ static void dwc_handle_cyclic(struct dw_dma *dw, struct dw_dma_chan *dwc,
dma_writel(dw, CLEAR.XFER, dwc->mask);
for (i = 0; i < dwc->cdesc->periods; i++)
- dwc_dump_lli(dwc, &dwc->cdesc->desc[i]->lli);
+ dwc_dump_lli(dwc, dwc->cdesc->desc[i]);
spin_unlock_irqrestore(&dwc->lock, flags);
}
@@ -589,7 +582,7 @@ static void dw_dma_tasklet(unsigned long data)
u32 status_block;
u32 status_xfer;
u32 status_err;
- int i;
+ unsigned int i;
status_block = dma_readl(dw, RAW.BLOCK);
status_xfer = dma_readl(dw, RAW.XFER);
@@ -658,30 +651,6 @@ static irqreturn_t dw_dma_interrupt(int irq, void *dev_id)
/*----------------------------------------------------------------------*/
-static dma_cookie_t dwc_tx_submit(struct dma_async_tx_descriptor *tx)
-{
- struct dw_desc *desc = txd_to_dw_desc(tx);
- struct dw_dma_chan *dwc = to_dw_dma_chan(tx->chan);
- dma_cookie_t cookie;
- unsigned long flags;
-
- spin_lock_irqsave(&dwc->lock, flags);
- cookie = dma_cookie_assign(tx);
-
- /*
- * REVISIT: We should attempt to chain as many descriptors as
- * possible, perhaps even appending to those already submitted
- * for DMA. But this is hard to do in a race-free manner.
- */
-
- dev_vdbg(chan2dev(tx->chan), "%s: queued %u\n", __func__, desc->txd.cookie);
- list_add_tail(&desc->desc_node, &dwc->queue);
-
- spin_unlock_irqrestore(&dwc->lock, flags);
-
- return cookie;
-}
-
static struct dma_async_tx_descriptor *
dwc_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
size_t len, unsigned long flags)
@@ -693,10 +662,12 @@ dwc_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
struct dw_desc *prev;
size_t xfer_count;
size_t offset;
+ u8 m_master = dwc->m_master;
unsigned int src_width;
unsigned int dst_width;
- unsigned int data_width;
+ unsigned int data_width = dw->pdata->data_width[m_master];
u32 ctllo;
+ u8 lms = DWC_LLP_LMS(m_master);
dev_vdbg(chan2dev(chan),
"%s: d%pad s%pad l0x%zx f0x%lx\n", __func__,
@@ -709,11 +680,7 @@ dwc_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
dwc->direction = DMA_MEM_TO_MEM;
- data_width = min_t(unsigned int, dw->data_width[dwc->src_master],
- dw->data_width[dwc->dst_master]);
-
- src_width = dst_width = min_t(unsigned int, data_width,
- dwc_fast_ffs(src | dest | len));
+ src_width = dst_width = __ffs(data_width | src | dest | len);
ctllo = DWC_DEFAULT_CTLLO(chan)
| DWC_CTLL_DST_WIDTH(dst_width)
@@ -731,27 +698,27 @@ dwc_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
if (!desc)
goto err_desc_get;
- desc->lli.sar = src + offset;
- desc->lli.dar = dest + offset;
- desc->lli.ctllo = ctllo;
- desc->lli.ctlhi = xfer_count;
+ lli_write(desc, sar, src + offset);
+ lli_write(desc, dar, dest + offset);
+ lli_write(desc, ctllo, ctllo);
+ lli_write(desc, ctlhi, xfer_count);
desc->len = xfer_count << src_width;
if (!first) {
first = desc;
} else {
- prev->lli.llp = desc->txd.phys;
- list_add_tail(&desc->desc_node,
- &first->tx_list);
+ lli_write(prev, llp, desc->txd.phys | lms);
+ list_add_tail(&desc->desc_node, &first->tx_list);
}
prev = desc;
}
if (flags & DMA_PREP_INTERRUPT)
/* Trigger interrupt after last block */
- prev->lli.ctllo |= DWC_CTLL_INT_EN;
+ lli_set(prev, ctllo, DWC_CTLL_INT_EN);
prev->lli.llp = 0;
+ lli_clear(prev, ctllo, DWC_CTLL_LLP_D_EN | DWC_CTLL_LLP_S_EN);
first->txd.flags = flags;
first->total_len = len;
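The width selection above works because the probe code now stores the bus data width in bytes as a power of two, so OR-ing it into the source address, destination address and length bounds the lowest set bit at that width. A small stand-alone sketch of the trick, with __builtin_ctzl standing in for the kernel's __ffs() and made-up addresses:

/* pick_width(): largest power-of-two element size that the bus width,
 * both addresses and the length are all aligned to. */
#include <stdio.h>

static unsigned int pick_width(unsigned int data_width_bytes,
			       unsigned long src, unsigned long dst,
			       unsigned long len)
{
	/* __builtin_ctzl ~ __ffs(): index of the lowest set bit */
	return __builtin_ctzl(data_width_bytes | src | dst | len);
}

int main(void)
{
	/* 4-byte bus, everything 4-byte aligned -> width code 2 (32 bit) */
	printf("%u\n", pick_width(4, 0x1000, 0x2000, 256));
	/* an odd length forces byte-wide transfers -> width code 0 (8 bit) */
	printf("%u\n", pick_width(4, 0x1000, 0x2000, 257));
	return 0;
}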
@@ -773,10 +740,12 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
struct dw_desc *prev;
struct dw_desc *first;
u32 ctllo;
+ u8 m_master = dwc->m_master;
+ u8 lms = DWC_LLP_LMS(m_master);
dma_addr_t reg;
unsigned int reg_width;
unsigned int mem_width;
- unsigned int data_width;
+ unsigned int data_width = dw->pdata->data_width[m_master];
unsigned int i;
struct scatterlist *sg;
size_t total_len = 0;
@@ -802,8 +771,6 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
ctllo |= sconfig->device_fc ? DWC_CTLL_FC(DW_DMA_FC_P_M2P) :
DWC_CTLL_FC(DW_DMA_FC_D_M2P);
- data_width = dw->data_width[dwc->src_master];
-
for_each_sg(sgl, sg, sg_len, i) {
struct dw_desc *desc;
u32 len, dlen, mem;
@@ -811,17 +778,16 @@ dwc_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
mem = sg_dma_address(sg);
len = sg_dma_len(sg);
- mem_width = min_t(unsigned int,
- data_width, dwc_fast_ffs(mem | len));
+ mem_width = __ffs(data_width | mem | len);
slave_sg_todev_fill_desc:
desc = dwc_desc_get(dwc);
if (!desc)
goto err_desc_get;
- desc->lli.sar = mem;
- desc->lli.dar = reg;
- desc->lli.ctllo = ctllo | DWC_CTLL_SRC_WIDTH(mem_width);
+ lli_write(desc, sar, mem);
+ lli_write(desc, dar, reg);
+ lli_write(desc, ctllo, ctllo | DWC_CTLL_SRC_WIDTH(mem_width));
if ((len >> mem_width) > dwc->block_size) {
dlen = dwc->block_size << mem_width;
mem += dlen;
@@ -831,15 +797,14 @@ slave_sg_todev_fill_desc:
len = 0;
}
- desc->lli.ctlhi = dlen >> mem_width;
+ lli_write(desc, ctlhi, dlen >> mem_width);
desc->len = dlen;
if (!first) {
first = desc;
} else {
- prev->lli.llp = desc->txd.phys;
- list_add_tail(&desc->desc_node,
- &first->tx_list);
+ lli_write(prev, llp, desc->txd.phys | lms);
+ list_add_tail(&desc->desc_node, &first->tx_list);
}
prev = desc;
total_len += dlen;
@@ -859,8 +824,6 @@ slave_sg_todev_fill_desc:
ctllo |= sconfig->device_fc ? DWC_CTLL_FC(DW_DMA_FC_P_P2M) :
DWC_CTLL_FC(DW_DMA_FC_D_P2M);
- data_width = dw->data_width[dwc->dst_master];
-
for_each_sg(sgl, sg, sg_len, i) {
struct dw_desc *desc;
u32 len, dlen, mem;
@@ -868,17 +831,16 @@ slave_sg_todev_fill_desc:
mem = sg_dma_address(sg);
len = sg_dma_len(sg);
- mem_width = min_t(unsigned int,
- data_width, dwc_fast_ffs(mem | len));
+ mem_width = __ffs(data_width | mem | len);
slave_sg_fromdev_fill_desc:
desc = dwc_desc_get(dwc);
if (!desc)
goto err_desc_get;
- desc->lli.sar = reg;
- desc->lli.dar = mem;
- desc->lli.ctllo = ctllo | DWC_CTLL_DST_WIDTH(mem_width);
+ lli_write(desc, sar, reg);
+ lli_write(desc, dar, mem);
+ lli_write(desc, ctllo, ctllo | DWC_CTLL_DST_WIDTH(mem_width));
if ((len >> reg_width) > dwc->block_size) {
dlen = dwc->block_size << reg_width;
mem += dlen;
@@ -887,15 +849,14 @@ slave_sg_fromdev_fill_desc:
dlen = len;
len = 0;
}
- desc->lli.ctlhi = dlen >> reg_width;
+ lli_write(desc, ctlhi, dlen >> reg_width);
desc->len = dlen;
if (!first) {
first = desc;
} else {
- prev->lli.llp = desc->txd.phys;
- list_add_tail(&desc->desc_node,
- &first->tx_list);
+ lli_write(prev, llp, desc->txd.phys | lms);
+ list_add_tail(&desc->desc_node, &first->tx_list);
}
prev = desc;
total_len += dlen;
@@ -910,9 +871,10 @@ slave_sg_fromdev_fill_desc:
if (flags & DMA_PREP_INTERRUPT)
/* Trigger interrupt after last block */
- prev->lli.ctllo |= DWC_CTLL_INT_EN;
+ lli_set(prev, ctllo, DWC_CTLL_INT_EN);
prev->lli.llp = 0;
+ lli_clear(prev, ctllo, DWC_CTLL_LLP_D_EN | DWC_CTLL_LLP_S_EN);
first->total_len = total_len;
return &first->txd;
@@ -937,8 +899,8 @@ bool dw_dma_filter(struct dma_chan *chan, void *param)
dwc->src_id = dws->src_id;
dwc->dst_id = dws->dst_id;
- dwc->src_master = dws->src_master;
- dwc->dst_master = dws->dst_master;
+ dwc->m_master = dws->m_master;
+ dwc->p_master = dws->p_master;
return true;
}
@@ -991,7 +953,7 @@ static int dwc_pause(struct dma_chan *chan)
while (!(channel_readl(dwc, CFG_LO) & DWC_CFGL_FIFO_EMPTY) && count--)
udelay(2);
- dwc->paused = true;
+ set_bit(DW_DMA_IS_PAUSED, &dwc->flags);
spin_unlock_irqrestore(&dwc->lock, flags);
@@ -1004,7 +966,7 @@ static inline void dwc_chan_resume(struct dw_dma_chan *dwc)
channel_writel(dwc, CFG_LO, cfglo & ~DWC_CFGL_CH_SUSP);
- dwc->paused = false;
+ clear_bit(DW_DMA_IS_PAUSED, &dwc->flags);
}
static int dwc_resume(struct dma_chan *chan)
@@ -1012,12 +974,10 @@ static int dwc_resume(struct dma_chan *chan)
struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
unsigned long flags;
- if (!dwc->paused)
- return 0;
-
spin_lock_irqsave(&dwc->lock, flags);
- dwc_chan_resume(dwc);
+ if (test_bit(DW_DMA_IS_PAUSED, &dwc->flags))
+ dwc_chan_resume(dwc);
spin_unlock_irqrestore(&dwc->lock, flags);
@@ -1053,16 +1013,37 @@ static int dwc_terminate_all(struct dma_chan *chan)
return 0;
}
-static inline u32 dwc_get_residue(struct dw_dma_chan *dwc)
+static struct dw_desc *dwc_find_desc(struct dw_dma_chan *dwc, dma_cookie_t c)
{
+ struct dw_desc *desc;
+
+ list_for_each_entry(desc, &dwc->active_list, desc_node)
+ if (desc->txd.cookie == c)
+ return desc;
+
+ return NULL;
+}
+
+static u32 dwc_get_residue(struct dw_dma_chan *dwc, dma_cookie_t cookie)
+{
+ struct dw_desc *desc;
unsigned long flags;
u32 residue;
spin_lock_irqsave(&dwc->lock, flags);
- residue = dwc->residue;
- if (test_bit(DW_DMA_IS_SOFT_LLP, &dwc->flags) && residue)
- residue -= dwc_get_sent(dwc);
+ desc = dwc_find_desc(dwc, cookie);
+ if (desc) {
+ if (desc == dwc_first_active(dwc)) {
+ residue = desc->residue;
+ if (test_bit(DW_DMA_IS_SOFT_LLP, &dwc->flags) && residue)
+ residue -= dwc_get_sent(dwc);
+ } else {
+ residue = desc->total_len;
+ }
+ } else {
+ residue = 0;
+ }
spin_unlock_irqrestore(&dwc->lock, flags);
return residue;
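The reworked residue path above walks the active list for the descriptor matching the requested cookie and reports either the live residue (for the descriptor at the head of the list) or the full requested length (for descriptors still queued behind it). A minimal user-space sketch of that policy, using invented demo_* types in place of dw_desc/dw_dma_chan and no locking:

#include <stddef.h>
#include <stdio.h>

struct demo_desc {
	int cookie;             /* transaction id */
	unsigned int total_len; /* bytes requested */
	unsigned int residue;   /* bytes still pending (head only) */
	struct demo_desc *next;
};

struct demo_chan {
	struct demo_desc *active; /* head = descriptor in flight */
};

/* Pending bytes for @cookie: live residue for the head, full length
 * for queued descriptors, 0 when the cookie is unknown or complete. */
static unsigned int demo_get_residue(struct demo_chan *c, int cookie)
{
	struct demo_desc *d;

	for (d = c->active; d; d = d->next)
		if (d->cookie == cookie)
			return d == c->active ? d->residue : d->total_len;
	return 0;
}

int main(void)
{
	struct demo_desc second = { 8, 4096, 4096, NULL };
	struct demo_desc first  = { 7, 2048,  512, &second };
	struct demo_chan chan   = { &first };

	printf("%u %u %u\n",
	       demo_get_residue(&chan, 7),   /* 512: in flight */
	       demo_get_residue(&chan, 8),   /* 4096: still queued */
	       demo_get_residue(&chan, 9));  /* 0: not found */
	return 0;
}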
@@ -1083,10 +1064,12 @@ dwc_tx_status(struct dma_chan *chan,
dwc_scan_descriptors(to_dw_dma(chan->device), dwc);
ret = dma_cookie_status(chan, cookie, txstate);
- if (ret != DMA_COMPLETE)
- dma_set_residue(txstate, dwc_get_residue(dwc));
+ if (ret == DMA_COMPLETE)
+ return ret;
+
+ dma_set_residue(txstate, dwc_get_residue(dwc, cookie));
- if (dwc->paused && ret == DMA_IN_PROGRESS)
+ if (test_bit(DW_DMA_IS_PAUSED, &dwc->flags) && ret == DMA_IN_PROGRESS)
return DMA_PAUSED;
return ret;
@@ -1107,7 +1090,7 @@ static void dwc_issue_pending(struct dma_chan *chan)
static void dw_dma_off(struct dw_dma *dw)
{
- int i;
+ unsigned int i;
dma_writel(dw, CFG, 0);
@@ -1121,7 +1104,7 @@ static void dw_dma_off(struct dw_dma *dw)
cpu_relax();
for (i = 0; i < dw->dma.chancnt; i++)
- dw->chan[i].initialized = false;
+ clear_bit(DW_DMA_IS_INITIALIZED, &dw->chan[i].flags);
}
static void dw_dma_on(struct dw_dma *dw)
@@ -1133,9 +1116,6 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan)
{
struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
struct dw_dma *dw = to_dw_dma(chan->device);
- struct dw_desc *desc;
- int i;
- unsigned long flags;
dev_vdbg(chan2dev(chan), "%s\n", __func__);
@@ -1166,48 +1146,13 @@ static int dwc_alloc_chan_resources(struct dma_chan *chan)
dw_dma_on(dw);
dw->in_use |= dwc->mask;
- spin_lock_irqsave(&dwc->lock, flags);
- i = dwc->descs_allocated;
- while (dwc->descs_allocated < NR_DESCS_PER_CHANNEL) {
- dma_addr_t phys;
-
- spin_unlock_irqrestore(&dwc->lock, flags);
-
- desc = dma_pool_alloc(dw->desc_pool, GFP_ATOMIC, &phys);
- if (!desc)
- goto err_desc_alloc;
-
- memset(desc, 0, sizeof(struct dw_desc));
-
- INIT_LIST_HEAD(&desc->tx_list);
- dma_async_tx_descriptor_init(&desc->txd, chan);
- desc->txd.tx_submit = dwc_tx_submit;
- desc->txd.flags = DMA_CTRL_ACK;
- desc->txd.phys = phys;
-
- dwc_desc_put(dwc, desc);
-
- spin_lock_irqsave(&dwc->lock, flags);
- i = ++dwc->descs_allocated;
- }
-
- spin_unlock_irqrestore(&dwc->lock, flags);
-
- dev_dbg(chan2dev(chan), "%s: allocated %d descriptors\n", __func__, i);
-
- return i;
-
-err_desc_alloc:
- dev_info(chan2dev(chan), "only allocated %d descriptors\n", i);
-
- return i;
+ return 0;
}
static void dwc_free_chan_resources(struct dma_chan *chan)
{
struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
struct dw_dma *dw = to_dw_dma(chan->device);
- struct dw_desc *desc, *_desc;
unsigned long flags;
LIST_HEAD(list);
@@ -1220,17 +1165,15 @@ static void dwc_free_chan_resources(struct dma_chan *chan)
BUG_ON(dma_readl(to_dw_dma(chan->device), CH_EN) & dwc->mask);
spin_lock_irqsave(&dwc->lock, flags);
- list_splice_init(&dwc->free_list, &list);
- dwc->descs_allocated = 0;
/* Clear custom channel configuration */
dwc->src_id = 0;
dwc->dst_id = 0;
- dwc->src_master = 0;
- dwc->dst_master = 0;
+ dwc->m_master = 0;
+ dwc->p_master = 0;
- dwc->initialized = false;
+ clear_bit(DW_DMA_IS_INITIALIZED, &dwc->flags);
/* Disable interrupts */
channel_clear_bit(dw, MASK.XFER, dwc->mask);
@@ -1244,11 +1187,6 @@ static void dwc_free_chan_resources(struct dma_chan *chan)
if (!dw->in_use)
dw_dma_off(dw);
- list_for_each_entry_safe(desc, _desc, &list, desc_node) {
- dev_vdbg(chan2dev(chan), " freeing descriptor %p\n", desc);
- dma_pool_free(dw->desc_pool, desc, desc->txd.phys);
- }
-
dev_vdbg(chan2dev(chan), "%s: done\n", __func__);
}
@@ -1326,6 +1264,7 @@ struct dw_cyclic_desc *dw_dma_cyclic_prep(struct dma_chan *chan,
struct dw_cyclic_desc *retval = NULL;
struct dw_desc *desc;
struct dw_desc *last = NULL;
+ u8 lms = DWC_LLP_LMS(dwc->m_master);
unsigned long was_cyclic;
unsigned int reg_width;
unsigned int periods;
@@ -1379,9 +1318,6 @@ struct dw_cyclic_desc *dw_dma_cyclic_prep(struct dma_chan *chan,
retval = ERR_PTR(-ENOMEM);
- if (periods > NR_DESCS_PER_CHANNEL)
- goto out_err;
-
cdesc = kzalloc(sizeof(struct dw_cyclic_desc), GFP_KERNEL);
if (!cdesc)
goto out_err;
@@ -1397,50 +1333,50 @@ struct dw_cyclic_desc *dw_dma_cyclic_prep(struct dma_chan *chan,
switch (direction) {
case DMA_MEM_TO_DEV:
- desc->lli.dar = sconfig->dst_addr;
- desc->lli.sar = buf_addr + (period_len * i);
- desc->lli.ctllo = (DWC_DEFAULT_CTLLO(chan)
- | DWC_CTLL_DST_WIDTH(reg_width)
- | DWC_CTLL_SRC_WIDTH(reg_width)
- | DWC_CTLL_DST_FIX
- | DWC_CTLL_SRC_INC
- | DWC_CTLL_INT_EN);
-
- desc->lli.ctllo |= sconfig->device_fc ?
- DWC_CTLL_FC(DW_DMA_FC_P_M2P) :
- DWC_CTLL_FC(DW_DMA_FC_D_M2P);
+ lli_write(desc, dar, sconfig->dst_addr);
+ lli_write(desc, sar, buf_addr + period_len * i);
+ lli_write(desc, ctllo, (DWC_DEFAULT_CTLLO(chan)
+ | DWC_CTLL_DST_WIDTH(reg_width)
+ | DWC_CTLL_SRC_WIDTH(reg_width)
+ | DWC_CTLL_DST_FIX
+ | DWC_CTLL_SRC_INC
+ | DWC_CTLL_INT_EN));
+
+ lli_set(desc, ctllo, sconfig->device_fc ?
+ DWC_CTLL_FC(DW_DMA_FC_P_M2P) :
+ DWC_CTLL_FC(DW_DMA_FC_D_M2P));
break;
case DMA_DEV_TO_MEM:
- desc->lli.dar = buf_addr + (period_len * i);
- desc->lli.sar = sconfig->src_addr;
- desc->lli.ctllo = (DWC_DEFAULT_CTLLO(chan)
- | DWC_CTLL_SRC_WIDTH(reg_width)
- | DWC_CTLL_DST_WIDTH(reg_width)
- | DWC_CTLL_DST_INC
- | DWC_CTLL_SRC_FIX
- | DWC_CTLL_INT_EN);
-
- desc->lli.ctllo |= sconfig->device_fc ?
- DWC_CTLL_FC(DW_DMA_FC_P_P2M) :
- DWC_CTLL_FC(DW_DMA_FC_D_P2M);
+ lli_write(desc, dar, buf_addr + period_len * i);
+ lli_write(desc, sar, sconfig->src_addr);
+ lli_write(desc, ctllo, (DWC_DEFAULT_CTLLO(chan)
+ | DWC_CTLL_SRC_WIDTH(reg_width)
+ | DWC_CTLL_DST_WIDTH(reg_width)
+ | DWC_CTLL_DST_INC
+ | DWC_CTLL_SRC_FIX
+ | DWC_CTLL_INT_EN));
+
+ lli_set(desc, ctllo, sconfig->device_fc ?
+ DWC_CTLL_FC(DW_DMA_FC_P_P2M) :
+ DWC_CTLL_FC(DW_DMA_FC_D_P2M));
break;
default:
break;
}
- desc->lli.ctlhi = (period_len >> reg_width);
+ lli_write(desc, ctlhi, period_len >> reg_width);
cdesc->desc[i] = desc;
if (last)
- last->lli.llp = desc->txd.phys;
+ lli_write(last, llp, desc->txd.phys | lms);
last = desc;
}
/* Let's make a cyclic list */
- last->lli.llp = cdesc->desc[0]->txd.phys;
+ lli_write(last, llp, cdesc->desc[0]->txd.phys | lms);
dev_dbg(chan2dev(&dwc->chan),
"cyclic prepared buf %pad len %zu period %zu periods %d\n",
@@ -1471,7 +1407,7 @@ void dw_dma_cyclic_free(struct dma_chan *chan)
struct dw_dma_chan *dwc = to_dw_dma_chan(chan);
struct dw_dma *dw = to_dw_dma(dwc->chan.device);
struct dw_cyclic_desc *cdesc = dwc->cdesc;
- int i;
+ unsigned int i;
unsigned long flags;
dev_dbg(chan2dev(&dwc->chan), "%s\n", __func__);
@@ -1495,32 +1431,38 @@ void dw_dma_cyclic_free(struct dma_chan *chan)
kfree(cdesc->desc);
kfree(cdesc);
+ dwc->cdesc = NULL;
+
clear_bit(DW_DMA_IS_CYCLIC, &dwc->flags);
}
EXPORT_SYMBOL(dw_dma_cyclic_free);
/*----------------------------------------------------------------------*/
-int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
+int dw_dma_probe(struct dw_dma_chip *chip)
{
+ struct dw_dma_platform_data *pdata;
struct dw_dma *dw;
bool autocfg = false;
unsigned int dw_params;
- unsigned int max_blk_size = 0;
+ unsigned int i;
int err;
- int i;
dw = devm_kzalloc(chip->dev, sizeof(*dw), GFP_KERNEL);
if (!dw)
return -ENOMEM;
+ dw->pdata = devm_kzalloc(chip->dev, sizeof(*dw->pdata), GFP_KERNEL);
+ if (!dw->pdata)
+ return -ENOMEM;
+
dw->regs = chip->regs;
chip->dw = dw;
pm_runtime_get_sync(chip->dev);
- if (!pdata) {
- dw_params = dma_read_byaddr(chip->regs, DW_PARAMS);
+ if (!chip->pdata) {
+ dw_params = dma_readl(dw, DW_PARAMS);
dev_dbg(chip->dev, "DW_PARAMS: 0x%08x\n", dw_params);
autocfg = dw_params >> DW_PARAMS_EN & 1;
@@ -1529,29 +1471,31 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
goto err_pdata;
}
- pdata = devm_kzalloc(chip->dev, sizeof(*pdata), GFP_KERNEL);
- if (!pdata) {
- err = -ENOMEM;
- goto err_pdata;
- }
+ /* Reassign the platform data pointer */
+ pdata = dw->pdata;
/* Get hardware configuration parameters */
pdata->nr_channels = (dw_params >> DW_PARAMS_NR_CHAN & 7) + 1;
pdata->nr_masters = (dw_params >> DW_PARAMS_NR_MASTER & 3) + 1;
for (i = 0; i < pdata->nr_masters; i++) {
pdata->data_width[i] =
- (dw_params >> DW_PARAMS_DATA_WIDTH(i) & 3) + 2;
+ 4 << (dw_params >> DW_PARAMS_DATA_WIDTH(i) & 3);
}
- max_blk_size = dma_readl(dw, MAX_BLK_SIZE);
+ pdata->block_size = dma_readl(dw, MAX_BLK_SIZE);
/* Fill platform data with the default values */
pdata->is_private = true;
pdata->is_memcpy = true;
pdata->chan_allocation_order = CHAN_ALLOCATION_ASCENDING;
pdata->chan_priority = CHAN_PRIORITY_ASCENDING;
- } else if (pdata->nr_channels > DW_DMA_MAX_NR_CHANNELS) {
+ } else if (chip->pdata->nr_channels > DW_DMA_MAX_NR_CHANNELS) {
err = -EINVAL;
goto err_pdata;
+ } else {
+ memcpy(dw->pdata, chip->pdata, sizeof(*dw->pdata));
+
+ /* Reassign the platform data pointer */
+ pdata = dw->pdata;
}
dw->chan = devm_kcalloc(chip->dev, pdata->nr_channels, sizeof(*dw->chan),
@@ -1561,11 +1505,6 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
goto err_pdata;
}
- /* Get hardware configuration parameters */
- dw->nr_masters = pdata->nr_masters;
- for (i = 0; i < dw->nr_masters; i++)
- dw->data_width[i] = pdata->data_width[i];
-
/* Calculate all channel mask before DMA setup */
dw->all_chan_mask = (1 << pdata->nr_channels) - 1;
@@ -1612,7 +1551,6 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
INIT_LIST_HEAD(&dwc->active_list);
INIT_LIST_HEAD(&dwc->queue);
- INIT_LIST_HEAD(&dwc->free_list);
channel_clear_bit(dw, CH_EN, dwc->mask);
@@ -1620,11 +1558,9 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
/* Hardware configuration */
if (autocfg) {
- unsigned int dwc_params;
unsigned int r = DW_DMA_MAX_NR_CHANNELS - i - 1;
- void __iomem *addr = chip->regs + r * sizeof(u32);
-
- dwc_params = dma_read_byaddr(addr, DWC_PARAMS);
+ void __iomem *addr = &__dw_regs(dw)->DWC_PARAMS[r];
+ unsigned int dwc_params = dma_readl_native(addr);
dev_dbg(chip->dev, "DWC_PARAMS[%d]: 0x%08x\n", i,
dwc_params);
@@ -1635,16 +1571,15 @@ int dw_dma_probe(struct dw_dma_chip *chip, struct dw_dma_platform_data *pdata)
* up to 0x0a for 4095.
*/
dwc->block_size =
- (4 << ((max_blk_size >> 4 * i) & 0xf)) - 1;
+ (4 << ((pdata->block_size >> 4 * i) & 0xf)) - 1;
dwc->nollp =
(dwc_params >> DWC_PARAMS_MBLK_EN & 0x1) == 0;
} else {
dwc->block_size = pdata->block_size;
/* Check if channel supports multi block transfer */
- channel_writel(dwc, LLP, 0xfffffffc);
- dwc->nollp =
- (channel_readl(dwc, LLP) & 0xfffffffc) == 0;
+ channel_writel(dwc, LLP, DWC_LLP_LOC(0xffffffff));
+ dwc->nollp = DWC_LLP_LOC(channel_readl(dwc, LLP)) == 0;
channel_writel(dwc, LLP, 0);
}
}
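The probe hunks above decode the hardware parameters differently from before: the 2-bit data-width field now yields a width in bytes (4 << field) instead of a log2 value, and each channel's maximum block size comes from a 4-bit nibble as (4 << nibble) - 1, which gives 4095 for the 0x0a case mentioned in the comment. A worked decode, assuming the fields have already been masked out of DW_PARAMS and MAX_BLK_SIZE:

#include <stdio.h>

int main(void)
{
	unsigned int f;

	/* data_width: encoded 0..3 -> 4, 8, 16, 32 bytes */
	for (f = 0; f <= 3; f++)
		printf("width field %u -> %u bytes\n", f, 4U << f);

	/* block_size: encoded nibble n -> (4 << n) - 1 elements */
	printf("nibble 0x0a -> %u\n", (4U << 0x0a) - 1);
	return 0;
}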
diff --git a/drivers/dma/dw/pci.c b/drivers/dma/dw/pci.c
index 358f9689a3f5..0ae6c3b1d34e 100644
--- a/drivers/dma/dw/pci.c
+++ b/drivers/dma/dw/pci.c
@@ -17,8 +17,8 @@
static int dw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pid)
{
+ const struct dw_dma_platform_data *pdata = (void *)pid->driver_data;
struct dw_dma_chip *chip;
- struct dw_dma_platform_data *pdata = (void *)pid->driver_data;
int ret;
ret = pcim_enable_device(pdev);
@@ -49,8 +49,9 @@ static int dw_pci_probe(struct pci_dev *pdev, const struct pci_device_id *pid)
chip->dev = &pdev->dev;
chip->regs = pcim_iomap_table(pdev)[0];
chip->irq = pdev->irq;
+ chip->pdata = pdata;
- ret = dw_dma_probe(chip, pdata);
+ ret = dw_dma_probe(chip);
if (ret)
return ret;
diff --git a/drivers/dma/dw/platform.c b/drivers/dma/dw/platform.c
index 26edbe3a27ac..5bda0eb9f393 100644
--- a/drivers/dma/dw/platform.c
+++ b/drivers/dma/dw/platform.c
@@ -42,13 +42,13 @@ static struct dma_chan *dw_dma_of_xlate(struct of_phandle_args *dma_spec,
slave.src_id = dma_spec->args[0];
slave.dst_id = dma_spec->args[0];
- slave.src_master = dma_spec->args[1];
- slave.dst_master = dma_spec->args[2];
+ slave.m_master = dma_spec->args[1];
+ slave.p_master = dma_spec->args[2];
if (WARN_ON(slave.src_id >= DW_DMA_MAX_NR_REQUESTS ||
slave.dst_id >= DW_DMA_MAX_NR_REQUESTS ||
- slave.src_master >= dw->nr_masters ||
- slave.dst_master >= dw->nr_masters))
+ slave.m_master >= dw->pdata->nr_masters ||
+ slave.p_master >= dw->pdata->nr_masters))
return NULL;
dma_cap_zero(cap);
@@ -66,8 +66,8 @@ static bool dw_dma_acpi_filter(struct dma_chan *chan, void *param)
.dma_dev = dma_spec->dev,
.src_id = dma_spec->slave_id,
.dst_id = dma_spec->slave_id,
- .src_master = 1,
- .dst_master = 0,
+ .m_master = 0,
+ .p_master = 1,
};
return dw_dma_filter(chan, &slave);
@@ -103,6 +103,7 @@ dw_dma_parse_dt(struct platform_device *pdev)
struct device_node *np = pdev->dev.of_node;
struct dw_dma_platform_data *pdata;
u32 tmp, arr[DW_DMA_MAX_NR_MASTERS];
+ u32 nr_masters;
u32 nr_channels;
if (!np) {
@@ -110,6 +111,11 @@ dw_dma_parse_dt(struct platform_device *pdev)
return NULL;
}
+ if (of_property_read_u32(np, "dma-masters", &nr_masters))
+ return NULL;
+ if (nr_masters < 1 || nr_masters > DW_DMA_MAX_NR_MASTERS)
+ return NULL;
+
if (of_property_read_u32(np, "dma-channels", &nr_channels))
return NULL;
@@ -117,6 +123,7 @@ dw_dma_parse_dt(struct platform_device *pdev)
if (!pdata)
return NULL;
+ pdata->nr_masters = nr_masters;
pdata->nr_channels = nr_channels;
if (of_property_read_bool(np, "is_private"))
@@ -131,17 +138,13 @@ dw_dma_parse_dt(struct platform_device *pdev)
if (!of_property_read_u32(np, "block_size", &tmp))
pdata->block_size = tmp;
- if (!of_property_read_u32(np, "dma-masters", &tmp)) {
- if (tmp > DW_DMA_MAX_NR_MASTERS)
- return NULL;
-
- pdata->nr_masters = tmp;
- }
-
- if (!of_property_read_u32_array(np, "data_width", arr,
- pdata->nr_masters))
- for (tmp = 0; tmp < pdata->nr_masters; tmp++)
+ if (!of_property_read_u32_array(np, "data-width", arr, nr_masters)) {
+ for (tmp = 0; tmp < nr_masters; tmp++)
pdata->data_width[tmp] = arr[tmp];
+ } else if (!of_property_read_u32_array(np, "data_width", arr, nr_masters)) {
+ for (tmp = 0; tmp < nr_masters; tmp++)
+ pdata->data_width[tmp] = BIT(arr[tmp] & 0x07);
+ }
return pdata;
}
@@ -158,7 +161,7 @@ static int dw_probe(struct platform_device *pdev)
struct dw_dma_chip *chip;
struct device *dev = &pdev->dev;
struct resource *mem;
- struct dw_dma_platform_data *pdata;
+ const struct dw_dma_platform_data *pdata;
int err;
chip = devm_kzalloc(dev, sizeof(*chip), GFP_KERNEL);
@@ -183,6 +186,7 @@ static int dw_probe(struct platform_device *pdev)
pdata = dw_dma_parse_dt(pdev);
chip->dev = dev;
+ chip->pdata = pdata;
chip->clk = devm_clk_get(chip->dev, "hclk");
if (IS_ERR(chip->clk))
@@ -193,7 +197,7 @@ static int dw_probe(struct platform_device *pdev)
pm_runtime_enable(&pdev->dev);
- err = dw_dma_probe(chip, pdata);
+ err = dw_dma_probe(chip);
if (err)
goto err_dw_dma_probe;
diff --git a/drivers/dma/dw/regs.h b/drivers/dma/dw/regs.h
index 0a50c18d85b8..4b7bd7834046 100644
--- a/drivers/dma/dw/regs.h
+++ b/drivers/dma/dw/regs.h
@@ -114,10 +114,6 @@ struct dw_dma_regs {
#define dma_writel_native writel
#endif
-/* To access the registers in early stage of probe */
-#define dma_read_byaddr(addr, name) \
- dma_readl_native((addr) + offsetof(struct dw_dma_regs, name))
-
/* Bitfields in DW_PARAMS */
#define DW_PARAMS_NR_CHAN 8 /* number of channels */
#define DW_PARAMS_NR_MASTER 11 /* number of AHB masters */
@@ -143,6 +139,10 @@ enum dw_dma_msize {
DW_DMA_MSIZE_256,
};
+/* Bitfields in LLP */
+#define DWC_LLP_LMS(x) ((x) & 3) /* list master select */
+#define DWC_LLP_LOC(x) ((x) & ~3) /* next lli */
+
/* Bitfields in CTL_LO */
#define DWC_CTLL_INT_EN (1 << 0) /* irqs enabled? */
#define DWC_CTLL_DST_WIDTH(n) ((n)<<1) /* bytes per element */
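The new DWC_LLP_LMS()/DWC_LLP_LOC() helpers split the LLP register into a 2-bit list-master select and a 4-byte-aligned next-descriptor address; the prep code ORs the master into desc->txd.phys, and the probe path writes DWC_LLP_LOC(0xffffffff) and checks the read-back to spot channels without multi-block support. A toy pack/unpack with a made-up descriptor address:

#include <stdio.h>

#define LLP_LMS(x)  ((x) & 3)     /* mirrors DWC_LLP_LMS() */
#define LLP_LOC(x)  ((x) & ~3UL)  /* mirrors DWC_LLP_LOC() */

int main(void)
{
	unsigned long phys = 0x12345670UL; /* hypothetical next-LLI address */
	unsigned long llp = LLP_LOC(phys) | LLP_LMS(1);

	printf("llp=0x%lx loc=0x%lx lms=%lu\n",
	       llp, LLP_LOC(llp), LLP_LMS(llp));
	return 0;
}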
@@ -216,6 +216,8 @@ enum dw_dma_msize {
enum dw_dmac_flags {
DW_DMA_IS_CYCLIC = 0,
DW_DMA_IS_SOFT_LLP = 1,
+ DW_DMA_IS_PAUSED = 2,
+ DW_DMA_IS_INITIALIZED = 3,
};
struct dw_dma_chan {
@@ -224,8 +226,6 @@ struct dw_dma_chan {
u8 mask;
u8 priority;
enum dma_transfer_direction direction;
- bool paused;
- bool initialized;
/* software emulation of the LLP transfers */
struct list_head *tx_node_active;
@@ -236,8 +236,6 @@ struct dw_dma_chan {
unsigned long flags;
struct list_head active_list;
struct list_head queue;
- struct list_head free_list;
- u32 residue;
struct dw_cyclic_desc *cdesc;
unsigned int descs_allocated;
@@ -249,8 +247,8 @@ struct dw_dma_chan {
/* custom slave configuration */
u8 src_id;
u8 dst_id;
- u8 src_master;
- u8 dst_master;
+ u8 m_master;
+ u8 p_master;
/* configuration passed via .device_config */
struct dma_slave_config dma_sconfig;
@@ -283,9 +281,8 @@ struct dw_dma {
u8 all_chan_mask;
u8 in_use;
- /* hardware configuration */
- unsigned char nr_masters;
- unsigned char data_width[DW_DMA_MAX_NR_MASTERS];
+ /* platform data */
+ struct dw_dma_platform_data *pdata;
};
static inline struct dw_dma_regs __iomem *__dw_regs(struct dw_dma *dw)
@@ -308,32 +305,51 @@ static inline struct dw_dma *to_dw_dma(struct dma_device *ddev)
return container_of(ddev, struct dw_dma, dma);
}
+#ifdef CONFIG_DW_DMAC_BIG_ENDIAN_IO
+typedef __be32 __dw32;
+#else
+typedef __le32 __dw32;
+#endif
+
/* LLI == Linked List Item; a.k.a. DMA block descriptor */
struct dw_lli {
/* values that are not changed by hardware */
- u32 sar;
- u32 dar;
- u32 llp; /* chain to next lli */
- u32 ctllo;
+ __dw32 sar;
+ __dw32 dar;
+ __dw32 llp; /* chain to next lli */
+ __dw32 ctllo;
/* values that may get written back: */
- u32 ctlhi;
+ __dw32 ctlhi;
/* sstat and dstat can snapshot peripheral register state.
* silicon config may discard either or both...
*/
- u32 sstat;
- u32 dstat;
+ __dw32 sstat;
+ __dw32 dstat;
};
struct dw_desc {
/* FIRST values the hardware uses */
struct dw_lli lli;
+#ifdef CONFIG_DW_DMAC_BIG_ENDIAN_IO
+#define lli_set(d, reg, v) ((d)->lli.reg |= cpu_to_be32(v))
+#define lli_clear(d, reg, v) ((d)->lli.reg &= ~cpu_to_be32(v))
+#define lli_read(d, reg) be32_to_cpu((d)->lli.reg)
+#define lli_write(d, reg, v) ((d)->lli.reg = cpu_to_be32(v))
+#else
+#define lli_set(d, reg, v) ((d)->lli.reg |= cpu_to_le32(v))
+#define lli_clear(d, reg, v) ((d)->lli.reg &= ~cpu_to_le32(v))
+#define lli_read(d, reg) le32_to_cpu((d)->lli.reg)
+#define lli_write(d, reg, v) ((d)->lli.reg = cpu_to_le32(v))
+#endif
+
/* THEN values for driver housekeeping */
struct list_head desc_node;
struct list_head tx_list;
struct dma_async_tx_descriptor txd;
size_t len;
size_t total_len;
+ u32 residue;
};
#define to_dw_desc(h) list_entry(h, struct dw_desc, desc_node)
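The lli_read()/lli_write()/lli_set()/lli_clear() macros above keep every descriptor field in one fixed byte order chosen at build time, so the hardware always sees the same layout regardless of host endianness. A portable user-space approximation of the same idea; put_le32()/get_le32() are invented names, not kernel API:

#include <stdint.h>
#include <stdio.h>

static void put_le32(uint8_t *p, uint32_t v)
{
	p[0] = v; p[1] = v >> 8; p[2] = v >> 16; p[3] = v >> 24;
}

static uint32_t get_le32(const uint8_t *p)
{
	return p[0] | p[1] << 8 | (uint32_t)p[2] << 16 | (uint32_t)p[3] << 24;
}

int main(void)
{
	uint8_t ctllo[4];

	put_le32(ctllo, 1u << 0);              /* write a fresh value */
	put_le32(ctllo, get_le32(ctllo) | 2);  /* read-modify-write, like lli_set() */
	printf("0x%08x\n", (unsigned)get_le32(ctllo)); /* 0x00000003 on any host */
	return 0;
}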
diff --git a/drivers/dma/edma.c b/drivers/dma/edma.c
index 04070baab78a..8181ed131386 100644
--- a/drivers/dma/edma.c
+++ b/drivers/dma/edma.c
@@ -1537,8 +1537,17 @@ static irqreturn_t dma_ccerr_handler(int irq, void *data)
dev_vdbg(ecc->dev, "dma_ccerr_handler\n");
- if (!edma_error_pending(ecc))
+ if (!edma_error_pending(ecc)) {
+ /*
+ * The registers indicate no pending error event but the irq
+ * handler has been called.
+ * Ask eDMA to re-evaluate the error registers.
+ */
+ dev_err(ecc->dev, "%s: Error interrupt without error event!\n",
+ __func__);
+ edma_write(ecc, EDMA_EEVAL, 1);
return IRQ_NONE;
+ }
while (1) {
/* Event missed register(s) */
diff --git a/drivers/dma/fsldma.c b/drivers/dma/fsldma.c
index aac85c30c2cf..a8828ed639b3 100644
--- a/drivers/dma/fsldma.c
+++ b/drivers/dma/fsldma.c
@@ -462,13 +462,12 @@ static struct fsl_desc_sw *fsl_dma_alloc_descriptor(struct fsldma_chan *chan)
struct fsl_desc_sw *desc;
dma_addr_t pdesc;
- desc = dma_pool_alloc(chan->desc_pool, GFP_ATOMIC, &pdesc);
+ desc = dma_pool_zalloc(chan->desc_pool, GFP_ATOMIC, &pdesc);
if (!desc) {
chan_dbg(chan, "out of memory for link descriptor\n");
return NULL;
}
- memset(desc, 0, sizeof(*desc));
INIT_LIST_HEAD(&desc->tx_list);
dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
desc->async_tx.tx_submit = fsl_dma_tx_submit;
diff --git a/drivers/dma/hsu/hsu.c b/drivers/dma/hsu/hsu.c
index ee510515ce18..f8c5cd53307c 100644
--- a/drivers/dma/hsu/hsu.c
+++ b/drivers/dma/hsu/hsu.c
@@ -77,8 +77,8 @@ static void hsu_dma_chan_start(struct hsu_dma_chan *hsuc)
hsu_chan_writel(hsuc, HSU_CH_MTSR, mtsr);
/* Set descriptors */
- count = (desc->nents - desc->active) % HSU_DMA_CHAN_NR_DESC;
- for (i = 0; i < count; i++) {
+ count = desc->nents - desc->active;
+ for (i = 0; i < count && i < HSU_DMA_CHAN_NR_DESC; i++) {
hsu_chan_writel(hsuc, HSU_CH_DxSAR(i), desc->sg[i].addr);
hsu_chan_writel(hsuc, HSU_CH_DxTSR(i), desc->sg[i].len);
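The change above matters when nents - active is an exact multiple of the descriptor ring size: the old modulo collapsed that to zero and nothing got programmed, whereas the clamp programs a full batch. A tiny demonstration, with NR_DESC standing in for HSU_DMA_CHAN_NR_DESC:

#include <stdio.h>

#define NR_DESC 4 /* stands in for HSU_DMA_CHAN_NR_DESC */

int main(void)
{
	unsigned int remaining = NR_DESC;              /* nents - active    */
	unsigned int old   = remaining % NR_DESC;      /* 0 descriptors     */
	unsigned int fixed = remaining < NR_DESC ? remaining : NR_DESC; /* 4 */

	printf("old=%u fixed=%u\n", old, fixed);
	return 0;
}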
@@ -160,7 +160,7 @@ irqreturn_t hsu_dma_irq(struct hsu_dma_chip *chip, unsigned short nr)
return IRQ_NONE;
/* Timeout IRQ, need wait some time, see Errata 2 */
- if (hsuc->direction == DMA_DEV_TO_MEM && (sr & HSU_CH_SR_DESCTO_ANY))
+ if (sr & HSU_CH_SR_DESCTO_ANY)
udelay(2);
sr &= ~HSU_CH_SR_DESCTO_ANY;
@@ -420,6 +420,8 @@ int hsu_dma_probe(struct hsu_dma_chip *chip)
hsu->dma.dev = chip->dev;
+ dma_set_max_seg_size(hsu->dma.dev, HSU_CH_DxTSR_MASK);
+
ret = dma_async_device_register(&hsu->dma);
if (ret)
return ret;
diff --git a/drivers/dma/hsu/hsu.h b/drivers/dma/hsu/hsu.h
index 6b070c22b1df..486b023b3af0 100644
--- a/drivers/dma/hsu/hsu.h
+++ b/drivers/dma/hsu/hsu.h
@@ -58,6 +58,10 @@
#define HSU_CH_DCR_CHEI BIT(23)
#define HSU_CH_DCR_CHTOI(x) BIT(24 + (x))
+/* Bits in HSU_CH_DxTSR */
+#define HSU_CH_DxTSR_MASK GENMASK(15, 0)
+#define HSU_CH_DxTSR_TSR(x) ((x) & HSU_CH_DxTSR_MASK)
+
struct hsu_dma_sg {
dma_addr_t addr;
unsigned int len;
diff --git a/drivers/dma/ioat/init.c b/drivers/dma/ioat/init.c
index efdee1a69fc4..d406056e8892 100644
--- a/drivers/dma/ioat/init.c
+++ b/drivers/dma/ioat/init.c
@@ -690,12 +690,11 @@ static int ioat_alloc_chan_resources(struct dma_chan *c)
/* allocate a completion writeback area */
/* doing 2 32bit writes to mmio since 1 64b write doesn't work */
ioat_chan->completion =
- dma_pool_alloc(ioat_chan->ioat_dma->completion_pool,
- GFP_KERNEL, &ioat_chan->completion_dma);
+ dma_pool_zalloc(ioat_chan->ioat_dma->completion_pool,
+ GFP_KERNEL, &ioat_chan->completion_dma);
if (!ioat_chan->completion)
return -ENOMEM;
- memset(ioat_chan->completion, 0, sizeof(*ioat_chan->completion));
writel(((u64)ioat_chan->completion_dma) & 0x00000000FFFFFFFF,
ioat_chan->reg_base + IOAT_CHANCMP_OFFSET_LOW);
writel(((u64)ioat_chan->completion_dma) >> 32,
@@ -1074,6 +1073,7 @@ static int ioat3_dma_probe(struct ioatdma_device *ioat_dma, int dca)
struct ioatdma_chan *ioat_chan;
bool is_raid_device = false;
int err;
+ u16 val16;
dma = &ioat_dma->dma_dev;
dma->device_prep_dma_memcpy = ioat_dma_prep_memcpy_lock;
@@ -1173,6 +1173,17 @@ static int ioat3_dma_probe(struct ioatdma_device *ioat_dma, int dca)
if (dca)
ioat_dma->dca = ioat_dca_init(pdev, ioat_dma->reg_base);
+ /* disable relaxed ordering */
+ err = pcie_capability_read_word(pdev, IOAT_DEVCTRL_OFFSET, &val16);
+ if (err)
+ return err;
+
+ /* clear relaxed ordering enable */
+ val16 &= ~IOAT_DEVCTRL_ROE;
+ err = pcie_capability_write_word(pdev, IOAT_DEVCTRL_OFFSET, val16);
+ if (err)
+ return err;
+
return 0;
}
diff --git a/drivers/dma/ioat/registers.h b/drivers/dma/ioat/registers.h
index 4994a3623aee..70534981a49b 100644
--- a/drivers/dma/ioat/registers.h
+++ b/drivers/dma/ioat/registers.h
@@ -26,6 +26,13 @@
#define IOAT_PCI_CHANERR_INT_OFFSET 0x180
#define IOAT_PCI_CHANERRMASK_INT_OFFSET 0x184
+/* PCIe config registers */
+
+/* EXPCAPID + N */
+#define IOAT_DEVCTRL_OFFSET 0x8
+/* relaxed ordering enable */
+#define IOAT_DEVCTRL_ROE 0x10
+
/* MMIO Device Registers */
#define IOAT_CHANCNT_OFFSET 0x00 /* 8-bit */
diff --git a/drivers/dma/mmp_pdma.c b/drivers/dma/mmp_pdma.c
index e39457f13d4d..56f1fd68b620 100644
--- a/drivers/dma/mmp_pdma.c
+++ b/drivers/dma/mmp_pdma.c
@@ -364,13 +364,12 @@ mmp_pdma_alloc_descriptor(struct mmp_pdma_chan *chan)
struct mmp_pdma_desc_sw *desc;
dma_addr_t pdesc;
- desc = dma_pool_alloc(chan->desc_pool, GFP_ATOMIC, &pdesc);
+ desc = dma_pool_zalloc(chan->desc_pool, GFP_ATOMIC, &pdesc);
if (!desc) {
dev_err(chan->dev, "out of memory for link descriptor\n");
return NULL;
}
- memset(desc, 0, sizeof(*desc));
INIT_LIST_HEAD(&desc->tx_list);
dma_async_tx_descriptor_init(&desc->async_tx, &chan->chan);
/* each desc has submit */
diff --git a/drivers/dma/mpc512x_dma.c b/drivers/dma/mpc512x_dma.c
index aae76fb39adc..ccadafa51d5e 100644
--- a/drivers/dma/mpc512x_dma.c
+++ b/drivers/dma/mpc512x_dma.c
@@ -3,6 +3,7 @@
* Copyright (C) Semihalf 2009
* Copyright (C) Ilya Yanok, Emcraft Systems 2010
* Copyright (C) Alexander Popov, Promcontroller 2014
+ * Copyright (C) Mario Six, Guntermann & Drunck GmbH, 2016
*
* Written by Piotr Ziecik <kosmo@semihalf.com>. Hardware description
* (defines, structures and comments) was taken from MPC5121 DMA driver
@@ -26,18 +27,19 @@
*/
/*
- * MPC512x and MPC8308 DMA driver. It supports
- * memory to memory data transfers (tested using dmatest module) and
- * data transfers between memory and peripheral I/O memory
- * by means of slave scatter/gather with these limitations:
- * - chunked transfers (described by s/g lists with more than one item)
- * are refused as long as proper support for scatter/gather is missing;
- * - transfers on MPC8308 always start from software as this SoC appears
- * not to have external request lines for peripheral flow control;
- * - only peripheral devices with 4-byte FIFO access register are supported;
- * - minimal memory <-> I/O memory transfer chunk is 4 bytes and consequently
- * source and destination addresses must be 4-byte aligned
- * and transfer size must be aligned on (4 * maxburst) boundary;
+ * MPC512x and MPC8308 DMA driver. It supports memory to memory data transfers
+ * (tested using dmatest module) and data transfers between memory and
+ * peripheral I/O memory by means of slave scatter/gather with these
+ * limitations:
+ * - chunked transfers (described by s/g lists with more than one item) are
+ * refused as long as proper support for scatter/gather is missing
+ * - transfers on MPC8308 always start from software as this SoC does not have
+ * external request lines for peripheral flow control
+ * - memory <-> I/O memory transfer chunks of sizes of 1, 2, 4, 16 (for
+ * MPC512x), and 32 bytes are supported, and, consequently, source
+ * addresses and destination addresses must be aligned accordingly;
+ * furthermore, for MPC512x SoCs, the transfer size must be aligned on
+ * (chunk size * maxburst)
*/
#include <linux/module.h>
@@ -213,8 +215,10 @@ struct mpc_dma_chan {
/* Settings for access to peripheral FIFO */
dma_addr_t src_per_paddr;
u32 src_tcd_nunits;
+ u8 swidth;
dma_addr_t dst_per_paddr;
u32 dst_tcd_nunits;
+ u8 dwidth;
/* Lock for this structure */
spinlock_t lock;
@@ -247,6 +251,7 @@ static inline struct mpc_dma_chan *dma_chan_to_mpc_dma_chan(struct dma_chan *c)
static inline struct mpc_dma *dma_chan_to_mpc_dma(struct dma_chan *c)
{
struct mpc_dma_chan *mchan = dma_chan_to_mpc_dma_chan(c);
+
return container_of(mchan, struct mpc_dma, channels[c->chan_id]);
}
@@ -254,9 +259,9 @@ static inline struct mpc_dma *dma_chan_to_mpc_dma(struct dma_chan *c)
* Execute all queued DMA descriptors.
*
* Following requirements must be met while calling mpc_dma_execute():
- * a) mchan->lock is acquired,
- * b) mchan->active list is empty,
- * c) mchan->queued list contains at least one entry.
+ * a) mchan->lock is acquired,
+ * b) mchan->active list is empty,
+ * c) mchan->queued list contains at least one entry.
*/
static void mpc_dma_execute(struct mpc_dma_chan *mchan)
{
@@ -446,20 +451,15 @@ static void mpc_dma_tasklet(unsigned long data)
if (es & MPC_DMA_DMAES_SAE)
dev_err(mdma->dma.dev, "- Source Address Error\n");
if (es & MPC_DMA_DMAES_SOE)
- dev_err(mdma->dma.dev, "- Source Offset"
- " Configuration Error\n");
+ dev_err(mdma->dma.dev, "- Source Offset Configuration Error\n");
if (es & MPC_DMA_DMAES_DAE)
- dev_err(mdma->dma.dev, "- Destination Address"
- " Error\n");
+ dev_err(mdma->dma.dev, "- Destination Address Error\n");
if (es & MPC_DMA_DMAES_DOE)
- dev_err(mdma->dma.dev, "- Destination Offset"
- " Configuration Error\n");
+ dev_err(mdma->dma.dev, "- Destination Offset Configuration Error\n");
if (es & MPC_DMA_DMAES_NCE)
- dev_err(mdma->dma.dev, "- NBytes/Citter"
- " Configuration Error\n");
+ dev_err(mdma->dma.dev, "- NBytes/Citter Configuration Error\n");
if (es & MPC_DMA_DMAES_SGE)
- dev_err(mdma->dma.dev, "- Scatter/Gather"
- " Configuration Error\n");
+ dev_err(mdma->dma.dev, "- Scatter/Gather Configuration Error\n");
if (es & MPC_DMA_DMAES_SBE)
dev_err(mdma->dma.dev, "- Source Bus Error\n");
if (es & MPC_DMA_DMAES_DBE)
@@ -518,8 +518,8 @@ static int mpc_dma_alloc_chan_resources(struct dma_chan *chan)
for (i = 0; i < MPC_DMA_DESCRIPTORS; i++) {
mdesc = kzalloc(sizeof(struct mpc_dma_desc), GFP_KERNEL);
if (!mdesc) {
- dev_notice(mdma->dma.dev, "Memory allocation error. "
- "Allocated only %u descriptors\n", i);
+ dev_notice(mdma->dma.dev,
+ "Memory allocation error. Allocated only %u descriptors\n", i);
break;
}
@@ -684,6 +684,15 @@ mpc_dma_prep_memcpy(struct dma_chan *chan, dma_addr_t dst, dma_addr_t src,
return &mdesc->desc;
}
+inline u8 buswidth_to_dmatsize(u8 buswidth)
+{
+ u8 res;
+
+ for (res = 0; buswidth > 1; buswidth /= 2)
+ res++;
+ return res;
+}
+
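buswidth_to_dmatsize() above is just an integer log2: the eDMA TCD ssize/dsize fields encode the access size as a power-of-two exponent. A stand-alone copy with the supported widths worked through:

#include <stdio.h>

static unsigned char to_dmatsize(unsigned char buswidth)
{
	unsigned char res;

	for (res = 0; buswidth > 1; buswidth /= 2)
		res++;
	return res;
}

int main(void)
{
	unsigned char w[] = { 1, 2, 4, 16, 32 };
	unsigned int i;

	for (i = 0; i < sizeof(w); i++)
		printf("%u bytes -> tsize %u\n", w[i], to_dmatsize(w[i]));
	return 0;
}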
static struct dma_async_tx_descriptor *
mpc_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
unsigned int sg_len, enum dma_transfer_direction direction,
@@ -742,39 +751,54 @@ mpc_dma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
memset(tcd, 0, sizeof(struct mpc_dma_tcd));
- if (!IS_ALIGNED(sg_dma_address(sg), 4))
- goto err_prep;
-
if (direction == DMA_DEV_TO_MEM) {
tcd->saddr = per_paddr;
tcd->daddr = sg_dma_address(sg);
+
+ if (!IS_ALIGNED(sg_dma_address(sg), mchan->dwidth))
+ goto err_prep;
+
tcd->soff = 0;
- tcd->doff = 4;
+ tcd->doff = mchan->dwidth;
} else {
tcd->saddr = sg_dma_address(sg);
tcd->daddr = per_paddr;
- tcd->soff = 4;
+
+ if (!IS_ALIGNED(sg_dma_address(sg), mchan->swidth))
+ goto err_prep;
+
+ tcd->soff = mchan->swidth;
tcd->doff = 0;
}
- tcd->ssize = MPC_DMA_TSIZE_4;
- tcd->dsize = MPC_DMA_TSIZE_4;
+ tcd->ssize = buswidth_to_dmatsize(mchan->swidth);
+ tcd->dsize = buswidth_to_dmatsize(mchan->dwidth);
- len = sg_dma_len(sg);
- tcd->nbytes = tcd_nunits * 4;
- if (!IS_ALIGNED(len, tcd->nbytes))
- goto err_prep;
+ if (mdma->is_mpc8308) {
+ tcd->nbytes = sg_dma_len(sg);
+ if (!IS_ALIGNED(tcd->nbytes, mchan->swidth))
+ goto err_prep;
- iter = len / tcd->nbytes;
- if (iter >= 1 << 15) {
- /* len is too big */
- goto err_prep;
+ /* No major loops for MPC8308 */
+ tcd->biter = 1;
+ tcd->citer = 1;
+ } else {
+ len = sg_dma_len(sg);
+ tcd->nbytes = tcd_nunits * tcd->ssize;
+ if (!IS_ALIGNED(len, tcd->nbytes))
+ goto err_prep;
+
+ iter = len / tcd->nbytes;
+ if (iter >= 1 << 15) {
+ /* len is too big */
+ goto err_prep;
+ }
+ /* citer_linkch contains the high bits of iter */
+ tcd->biter = iter & 0x1ff;
+ tcd->biter_linkch = iter >> 9;
+ tcd->citer = tcd->biter;
+ tcd->citer_linkch = tcd->biter_linkch;
}
- /* citer_linkch contains the high bits of iter */
- tcd->biter = iter & 0x1ff;
- tcd->biter_linkch = iter >> 9;
- tcd->citer = tcd->biter;
- tcd->citer_linkch = tcd->biter_linkch;
tcd->e_sg = 0;
tcd->d_req = 1;
@@ -796,40 +820,62 @@ err_prep:
return NULL;
}
+inline bool is_buswidth_valid(u8 buswidth, bool is_mpc8308)
+{
+ switch (buswidth) {
+ case 16:
+ if (is_mpc8308)
+ return false;
+ case 1:
+ case 2:
+ case 4:
+ case 32:
+ break;
+ default:
+ return false;
+ }
+
+ return true;
+}
+
static int mpc_dma_device_config(struct dma_chan *chan,
struct dma_slave_config *cfg)
{
struct mpc_dma_chan *mchan = dma_chan_to_mpc_dma_chan(chan);
+ struct mpc_dma *mdma = dma_chan_to_mpc_dma(&mchan->chan);
unsigned long flags;
/*
* Software constraints:
- * - only transfers between a peripheral device and
- * memory are supported;
- * - only peripheral devices with 4-byte FIFO access register
- * are supported;
- * - minimal transfer chunk is 4 bytes and consequently
- * source and destination addresses must be 4-byte aligned
- * and transfer size must be aligned on (4 * maxburst)
- * boundary;
- * - during the transfer RAM address is being incremented by
- * the size of minimal transfer chunk;
- * - peripheral port's address is constant during the transfer.
+ * - only transfers between a peripheral device and memory are
+ * supported
+ * - transfer chunk sizes of 1, 2, 4, 16 (for MPC512x), and 32 bytes
+ * are supported, and, consequently, source addresses and
+ * destination addresses must be aligned accordingly; furthermore,
+ * for MPC512x SoCs, the transfer size must be aligned on (chunk
+ * size * maxburst)
+ * - during the transfer, the RAM address is incremented by the size
+ * of transfer chunk
+ * - the peripheral port's address is constant during the transfer.
*/
- if (cfg->src_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES ||
- cfg->dst_addr_width != DMA_SLAVE_BUSWIDTH_4_BYTES ||
- !IS_ALIGNED(cfg->src_addr, 4) ||
- !IS_ALIGNED(cfg->dst_addr, 4)) {
+ if (!IS_ALIGNED(cfg->src_addr, cfg->src_addr_width) ||
+ !IS_ALIGNED(cfg->dst_addr, cfg->dst_addr_width)) {
return -EINVAL;
}
+ if (!is_buswidth_valid(cfg->src_addr_width, mdma->is_mpc8308) ||
+ !is_buswidth_valid(cfg->dst_addr_width, mdma->is_mpc8308))
+ return -EINVAL;
+
spin_lock_irqsave(&mchan->lock, flags);
mchan->src_per_paddr = cfg->src_addr;
mchan->src_tcd_nunits = cfg->src_maxburst;
+ mchan->swidth = cfg->src_addr_width;
mchan->dst_per_paddr = cfg->dst_addr;
mchan->dst_tcd_nunits = cfg->dst_maxburst;
+ mchan->dwidth = cfg->dst_addr_width;
/* Apply defaults */
if (mchan->src_tcd_nunits == 0)
@@ -875,7 +921,6 @@ static int mpc_dma_probe(struct platform_device *op)
mdma = devm_kzalloc(dev, sizeof(struct mpc_dma), GFP_KERNEL);
if (!mdma) {
- dev_err(dev, "Memory exhausted!\n");
retval = -ENOMEM;
goto err;
}
@@ -999,7 +1044,8 @@ static int mpc_dma_probe(struct platform_device *op)
out_be32(&mdma->regs->dmaerrl, 0xFFFF);
} else {
out_be32(&mdma->regs->dmacr, MPC_DMA_DMACR_EDCG |
- MPC_DMA_DMACR_ERGA | MPC_DMA_DMACR_ERCA);
+ MPC_DMA_DMACR_ERGA |
+ MPC_DMA_DMACR_ERCA);
/* Disable hardware DMA requests */
out_be32(&mdma->regs->dmaerqh, 0);
diff --git a/drivers/dma/mv_xor.c b/drivers/dma/mv_xor.c
index 3922a5d56806..25d1dadcddd1 100644
--- a/drivers/dma/mv_xor.c
+++ b/drivers/dma/mv_xor.c
@@ -31,6 +31,12 @@
#include "dmaengine.h"
#include "mv_xor.h"
+enum mv_xor_type {
+ XOR_ORION,
+ XOR_ARMADA_38X,
+ XOR_ARMADA_37XX,
+};
+
enum mv_xor_mode {
XOR_MODE_IN_REG,
XOR_MODE_IN_DESC,
@@ -477,7 +483,7 @@ mv_xor_prep_dma_xor(struct dma_chan *chan, dma_addr_t dest, dma_addr_t *src,
BUG_ON(len > MV_XOR_MAX_BYTE_COUNT);
dev_dbg(mv_chan_to_devp(mv_chan),
- "%s src_cnt: %d len: %u dest %pad flags: %ld\n",
+ "%s src_cnt: %d len: %zu dest %pad flags: %ld\n",
__func__, src_cnt, len, &dest, flags);
sw_desc = mv_chan_alloc_slot(mv_chan);
@@ -933,7 +939,7 @@ static int mv_xor_channel_remove(struct mv_xor_chan *mv_chan)
static struct mv_xor_chan *
mv_xor_channel_add(struct mv_xor_device *xordev,
struct platform_device *pdev,
- int idx, dma_cap_mask_t cap_mask, int irq, int op_in_desc)
+ int idx, dma_cap_mask_t cap_mask, int irq)
{
int ret = 0;
struct mv_xor_chan *mv_chan;
@@ -945,7 +951,10 @@ mv_xor_channel_add(struct mv_xor_device *xordev,
mv_chan->idx = idx;
mv_chan->irq = irq;
- mv_chan->op_in_desc = op_in_desc;
+ if (xordev->xor_type == XOR_ORION)
+ mv_chan->op_in_desc = XOR_MODE_IN_REG;
+ else
+ mv_chan->op_in_desc = XOR_MODE_IN_DESC;
dma_dev = &mv_chan->dmadev;
@@ -1085,6 +1094,33 @@ mv_xor_conf_mbus_windows(struct mv_xor_device *xordev,
writel(0, base + WINDOW_OVERRIDE_CTRL(1));
}
+static void
+mv_xor_conf_mbus_windows_a3700(struct mv_xor_device *xordev)
+{
+ void __iomem *base = xordev->xor_high_base;
+ u32 win_enable = 0;
+ int i;
+
+ for (i = 0; i < 8; i++) {
+ writel(0, base + WINDOW_BASE(i));
+ writel(0, base + WINDOW_SIZE(i));
+ if (i < 4)
+ writel(0, base + WINDOW_REMAP_HIGH(i));
+ }
+ /*
+ * For Armada 3700, open the default 4GB MBus window. The
+ * DRAM-related configuration is done at the AXIS level.
+ */
+ writel(0xffff0000, base + WINDOW_SIZE(0));
+ win_enable |= 1;
+ win_enable |= 3 << 16;
+
+ writel(win_enable, base + WINDOW_BAR_ENABLE(0));
+ writel(win_enable, base + WINDOW_BAR_ENABLE(1));
+ writel(0, base + WINDOW_OVERRIDE_CTRL(0));
+ writel(0, base + WINDOW_OVERRIDE_CTRL(1));
+}
+
/*
* Since this XOR driver is basically used only for RAID5, we don't
* need to care about synchronizing ->suspend with DMA activity,
@@ -1129,6 +1165,11 @@ static int mv_xor_resume(struct platform_device *dev)
XOR_INTR_MASK(mv_chan));
}
+ if (xordev->xor_type == XOR_ARMADA_37XX) {
+ mv_xor_conf_mbus_windows_a3700(xordev);
+ return 0;
+ }
+
dram = mv_mbus_dram_info();
if (dram)
mv_xor_conf_mbus_windows(xordev, dram);
@@ -1137,8 +1178,9 @@ static int mv_xor_resume(struct platform_device *dev)
}
static const struct of_device_id mv_xor_dt_ids[] = {
- { .compatible = "marvell,orion-xor", .data = (void *)XOR_MODE_IN_REG },
- { .compatible = "marvell,armada-380-xor", .data = (void *)XOR_MODE_IN_DESC },
+ { .compatible = "marvell,orion-xor", .data = (void *)XOR_ORION },
+ { .compatible = "marvell,armada-380-xor", .data = (void *)XOR_ARMADA_38X },
+ { .compatible = "marvell,armada-3700-xor", .data = (void *)XOR_ARMADA_37XX },
{},
};
@@ -1152,7 +1194,6 @@ static int mv_xor_probe(struct platform_device *pdev)
struct resource *res;
unsigned int max_engines, max_channels;
int i, ret;
- int op_in_desc;
dev_notice(&pdev->dev, "Marvell shared XOR driver\n");
@@ -1180,12 +1221,30 @@ static int mv_xor_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, xordev);
+
+ /*
+ * We need to know which type of XOR device we use before
+ * setting up. In non-dt case it can only be the legacy one.
+ */
+ xordev->xor_type = XOR_ORION;
+ if (pdev->dev.of_node) {
+ const struct of_device_id *of_id =
+ of_match_device(mv_xor_dt_ids,
+ &pdev->dev);
+
+ xordev->xor_type = (uintptr_t)of_id->data;
+ }
+
/*
* (Re-)program MBUS remapping windows if we are asked to.
*/
- dram = mv_mbus_dram_info();
- if (dram)
- mv_xor_conf_mbus_windows(xordev, dram);
+ if (xordev->xor_type == XOR_ARMADA_37XX) {
+ mv_xor_conf_mbus_windows_a3700(xordev);
+ } else {
+ dram = mv_mbus_dram_info();
+ if (dram)
+ mv_xor_conf_mbus_windows(xordev, dram);
+ }
/* Not all platforms can gate the clock, so it is not
* an error if the clock does not exists.
@@ -1199,12 +1258,16 @@ static int mv_xor_probe(struct platform_device *pdev)
* order for async_tx to perform well. So we limit the number
* of engines and channels so that we take into account this
* constraint. Note that we also want to use channels from
- * separate engines when possible.
+ * separate engines when possible. For the dual-CPU Armada 3700
+ * SoC with a single XOR engine, allow using both of its channels.
*/
max_engines = num_present_cpus();
- max_channels = min_t(unsigned int,
- MV_XOR_MAX_CHANNELS,
- DIV_ROUND_UP(num_present_cpus(), 2));
+ if (xordev->xor_type == XOR_ARMADA_37XX)
+ max_channels = num_present_cpus();
+ else
+ max_channels = min_t(unsigned int,
+ MV_XOR_MAX_CHANNELS,
+ DIV_ROUND_UP(num_present_cpus(), 2));
if (mv_xor_engine_count >= max_engines)
return 0;
@@ -1212,15 +1275,11 @@ static int mv_xor_probe(struct platform_device *pdev)
if (pdev->dev.of_node) {
struct device_node *np;
int i = 0;
- const struct of_device_id *of_id =
- of_match_device(mv_xor_dt_ids,
- &pdev->dev);
for_each_child_of_node(pdev->dev.of_node, np) {
struct mv_xor_chan *chan;
dma_cap_mask_t cap_mask;
int irq;
- op_in_desc = (int)of_id->data;
if (i >= max_channels)
continue;
@@ -1237,7 +1296,7 @@ static int mv_xor_probe(struct platform_device *pdev)
}
chan = mv_xor_channel_add(xordev, pdev, i,
- cap_mask, irq, op_in_desc);
+ cap_mask, irq);
if (IS_ERR(chan)) {
ret = PTR_ERR(chan);
irq_dispose_mapping(irq);
@@ -1266,8 +1325,7 @@ static int mv_xor_probe(struct platform_device *pdev)
}
chan = mv_xor_channel_add(xordev, pdev, i,
- cd->cap_mask, irq,
- XOR_MODE_IN_REG);
+ cd->cap_mask, irq);
if (IS_ERR(chan)) {
ret = PTR_ERR(chan);
goto err_channel_add;
diff --git a/drivers/dma/mv_xor.h b/drivers/dma/mv_xor.h
index c19fe30e5ae9..bf56e082e7cd 100644
--- a/drivers/dma/mv_xor.h
+++ b/drivers/dma/mv_xor.h
@@ -85,6 +85,7 @@ struct mv_xor_device {
void __iomem *xor_high_base;
struct clk *clk;
struct mv_xor_chan *channels[MV_XOR_MAX_CHANNELS];
+ int xor_type;
};
/**
diff --git a/drivers/dma/of-dma.c b/drivers/dma/of-dma.c
index 1e1f2986eba8..faae0bfe1109 100644
--- a/drivers/dma/of-dma.c
+++ b/drivers/dma/of-dma.c
@@ -240,8 +240,9 @@ struct dma_chan *of_dma_request_slave_channel(struct device_node *np,
struct of_phandle_args dma_spec;
struct of_dma *ofdma;
struct dma_chan *chan;
- int count, i;
+ int count, i, start;
int ret_no_channel = -ENODEV;
+ static atomic_t last_index;
if (!np || !name) {
pr_err("%s: not enough information provided\n", __func__);
@@ -259,8 +260,15 @@ struct dma_chan *of_dma_request_slave_channel(struct device_node *np,
return ERR_PTR(-ENODEV);
}
+ /*
+ * approximate an average distribution across multiple
+ * entries with the same name
+ */
+ start = atomic_inc_return(&last_index);
for (i = 0; i < count; i++) {
- if (of_dma_match_channel(np, name, i, &dma_spec))
+ if (of_dma_match_channel(np, name,
+ (i + start) % count,
+ &dma_spec))
continue;
mutex_lock(&of_dma_lock);
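The rotation above spreads requests across multiple identically named entries in the "dmas" property: each call starts scanning at a fresh offset from an incrementing counter and wraps modulo the entry count. A small sketch with a made-up count of three entries:

#include <stdio.h>

int main(void)
{
	int count = 3; /* matching "dmas" entries */
	int start, i, req;

	for (req = 1; req <= 4; req++) {  /* models atomic_inc_return() */
		start = req;
		printf("request %d tries:", req);
		for (i = 0; i < count; i++)
			printf(" %d", (i + start) % count);
		printf("\n");
	}
	return 0;
}

Each request still tries every entry, only the starting index changes, so a busy channel is simply skipped on the next iteration.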
diff --git a/drivers/dma/pxa_dma.c b/drivers/dma/pxa_dma.c
index 77c1c44009d8..e756a30ccba2 100644
--- a/drivers/dma/pxa_dma.c
+++ b/drivers/dma/pxa_dma.c
@@ -117,6 +117,7 @@ struct pxad_chan {
/* protected by vc->lock */
struct pxad_phy *phy;
struct dma_pool *desc_pool; /* Descriptors pool */
+ dma_cookie_t bus_error;
};
struct pxad_device {
@@ -563,6 +564,7 @@ static void pxad_launch_chan(struct pxad_chan *chan,
return;
}
}
+ chan->bus_error = 0;
/*
* Program the descriptor's address into the DMA controller,
@@ -666,6 +668,7 @@ static irqreturn_t pxad_chan_handler(int irq, void *dev_id)
struct virt_dma_desc *vd, *tmp;
unsigned int dcsr;
unsigned long flags;
+ dma_cookie_t last_started = 0;
BUG_ON(!chan);
@@ -678,6 +681,7 @@ static irqreturn_t pxad_chan_handler(int irq, void *dev_id)
dev_dbg(&chan->vc.chan.dev->device,
"%s(): checking txd %p[%x]: completed=%d\n",
__func__, vd, vd->tx.cookie, is_desc_completed(vd));
+ last_started = vd->tx.cookie;
if (to_pxad_sw_desc(vd)->cyclic) {
vchan_cyclic_callback(vd);
break;
@@ -690,7 +694,12 @@ static irqreturn_t pxad_chan_handler(int irq, void *dev_id)
}
}
- if (dcsr & PXA_DCSR_STOPSTATE) {
+ if (dcsr & PXA_DCSR_BUSERR) {
+ chan->bus_error = last_started;
+ phy_disable(phy);
+ }
+
+ if (!chan->bus_error && dcsr & PXA_DCSR_STOPSTATE) {
dev_dbg(&chan->vc.chan.dev->device,
"%s(): channel stopped, submitted_empty=%d issued_empty=%d",
__func__,
@@ -1249,6 +1258,9 @@ static enum dma_status pxad_tx_status(struct dma_chan *dchan,
struct pxad_chan *chan = to_pxad_chan(dchan);
enum dma_status ret;
+ if (cookie == chan->bus_error)
+ return DMA_ERROR;
+
ret = dma_cookie_status(dchan, cookie, txstate);
if (likely(txstate && (ret != DMA_ERROR)))
dma_set_residue(txstate, pxad_residue(chan, cookie));
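The bus-error handling above records the cookie of the descriptor that was in flight when PXA_DCSR_BUSERR fired, disables the phy, and later makes tx_status() report that cookie as failed. A minimal sketch of the bookkeeping, with invented demo_* types and no IRQ plumbing:

#include <stdio.h>

enum demo_status { DEMO_IN_PROGRESS, DEMO_ERROR };

struct demo_chan {
	int bus_error; /* 0 = no error recorded */
};

static enum demo_status demo_tx_status(struct demo_chan *c, int cookie)
{
	if (cookie && cookie == c->bus_error)
		return DEMO_ERROR;
	return DEMO_IN_PROGRESS;
}

int main(void)
{
	struct demo_chan chan = { 0 };

	chan.bus_error = 42; /* IRQ handler saw BUSERR while cookie 42 ran */
	printf("%d %d\n", demo_tx_status(&chan, 42), demo_tx_status(&chan, 43));
	return 0;
}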
@@ -1321,7 +1333,7 @@ static int pxad_init_phys(struct platform_device *op,
return 0;
}
-static const struct of_device_id const pxad_dt_ids[] = {
+static const struct of_device_id pxad_dt_ids[] = {
{ .compatible = "marvell,pdma-1.0", },
{}
};
diff --git a/drivers/dma/qcom/Makefile b/drivers/dma/qcom/Makefile
index bfea6990229f..4bfc38b45220 100644
--- a/drivers/dma/qcom/Makefile
+++ b/drivers/dma/qcom/Makefile
@@ -1,3 +1,5 @@
obj-$(CONFIG_QCOM_BAM_DMA) += bam_dma.o
obj-$(CONFIG_QCOM_HIDMA_MGMT) += hdma_mgmt.o
hdma_mgmt-objs := hidma_mgmt.o hidma_mgmt_sys.o
+obj-$(CONFIG_QCOM_HIDMA) += hdma.o
+hdma-objs := hidma_ll.o hidma.o hidma_dbg.o
diff --git a/drivers/dma/qcom/bam_dma.c b/drivers/dma/qcom/bam_dma.c
index d5e0a9c3ad5d..969b48176745 100644
--- a/drivers/dma/qcom/bam_dma.c
+++ b/drivers/dma/qcom/bam_dma.c
@@ -342,7 +342,7 @@ static const struct reg_offset_data bam_v1_7_reg_info[] = {
#define BAM_DESC_FIFO_SIZE SZ_32K
#define MAX_DESCRIPTORS (BAM_DESC_FIFO_SIZE / sizeof(struct bam_desc_hw) - 1)
-#define BAM_MAX_DATA_SIZE (SZ_32K - 8)
+#define BAM_FIFO_SIZE (SZ_32K - 8)
struct bam_chan {
struct virt_dma_chan vc;
@@ -387,6 +387,7 @@ struct bam_device {
/* execution environment ID, from DT */
u32 ee;
+ bool controlled_remotely;
const struct reg_offset_data *layout;
@@ -458,7 +459,7 @@ static void bam_chan_init_hw(struct bam_chan *bchan,
*/
writel_relaxed(ALIGN(bchan->fifo_phys, sizeof(struct bam_desc_hw)),
bam_addr(bdev, bchan->id, BAM_P_DESC_FIFO_ADDR));
- writel_relaxed(BAM_DESC_FIFO_SIZE,
+ writel_relaxed(BAM_FIFO_SIZE,
bam_addr(bdev, bchan->id, BAM_P_FIFO_SIZES));
/* enable the per pipe interrupts, enable EOT, ERR, and INT irqs */
@@ -604,7 +605,7 @@ static struct dma_async_tx_descriptor *bam_prep_slave_sg(struct dma_chan *chan,
/* calculate number of required entries */
for_each_sg(sgl, sg, sg_len, i)
- num_alloc += DIV_ROUND_UP(sg_dma_len(sg), BAM_MAX_DATA_SIZE);
+ num_alloc += DIV_ROUND_UP(sg_dma_len(sg), BAM_FIFO_SIZE);
/* allocate enough room to accommodate the number of entries */
async_desc = kzalloc(sizeof(*async_desc) +
@@ -635,10 +636,10 @@ static struct dma_async_tx_descriptor *bam_prep_slave_sg(struct dma_chan *chan,
desc->addr = cpu_to_le32(sg_dma_address(sg) +
curr_offset);
- if (remainder > BAM_MAX_DATA_SIZE) {
- desc->size = cpu_to_le16(BAM_MAX_DATA_SIZE);
- remainder -= BAM_MAX_DATA_SIZE;
- curr_offset += BAM_MAX_DATA_SIZE;
+ if (remainder > BAM_FIFO_SIZE) {
+ desc->size = cpu_to_le16(BAM_FIFO_SIZE);
+ remainder -= BAM_FIFO_SIZE;
+ curr_offset += BAM_FIFO_SIZE;
} else {
desc->size = cpu_to_le16(remainder);
remainder = 0;
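The splitting above cuts any scatterlist entry longer than the FIFO window into BAM_FIFO_SIZE pieces, which is also why num_alloc is computed with DIV_ROUND_UP earlier in the function. The arithmetic, with FIFO_SIZE mirroring SZ_32K - 8 and a made-up length:

#include <stdio.h>

#define FIFO_SIZE ((32 * 1024) - 8) /* BAM_FIFO_SIZE = SZ_32K - 8 */

int main(void)
{
	unsigned int len = 100000, off = 0, n = 0;

	printf("descriptors needed: %u\n", (len + FIFO_SIZE - 1) / FIFO_SIZE);
	while (len) {
		unsigned int chunk = len > FIFO_SIZE ? FIFO_SIZE : len;

		printf("desc %u: offset %u size %u\n", n++, off, chunk);
		off += chunk;
		len -= chunk;
	}
	return 0;
}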
@@ -801,13 +802,17 @@ static irqreturn_t bam_dma_irq(int irq, void *data)
if (srcs & P_IRQ)
tasklet_schedule(&bdev->task);
- if (srcs & BAM_IRQ)
+ if (srcs & BAM_IRQ) {
clr_mask = readl_relaxed(bam_addr(bdev, 0, BAM_IRQ_STTS));
- /* don't allow reorder of the various accesses to the BAM registers */
- mb();
+ /*
+ * don't allow reorder of the various accesses to the BAM
+ * registers
+ */
+ mb();
- writel_relaxed(clr_mask, bam_addr(bdev, 0, BAM_IRQ_CLR));
+ writel_relaxed(clr_mask, bam_addr(bdev, 0, BAM_IRQ_CLR));
+ }
return IRQ_HANDLED;
}
@@ -1038,6 +1043,9 @@ static int bam_init(struct bam_device *bdev)
val = readl_relaxed(bam_addr(bdev, 0, BAM_NUM_PIPES));
bdev->num_channels = val & BAM_NUM_PIPES_MASK;
+ if (bdev->controlled_remotely)
+ return 0;
+
/* s/w reset bam */
/* after reset all pipes are disabled and idle */
val = readl_relaxed(bam_addr(bdev, 0, BAM_CTRL));
@@ -1125,6 +1133,9 @@ static int bam_dma_probe(struct platform_device *pdev)
return ret;
}
+ bdev->controlled_remotely = of_property_read_bool(pdev->dev.of_node,
+ "qcom,controlled-remotely");
+
bdev->bamclk = devm_clk_get(bdev->dev, "bam_clk");
if (IS_ERR(bdev->bamclk))
return PTR_ERR(bdev->bamclk);
@@ -1163,7 +1174,7 @@ static int bam_dma_probe(struct platform_device *pdev)
/* set max dma segment size */
bdev->common.dev = bdev->dev;
bdev->common.dev->dma_parms = &bdev->dma_parms;
- ret = dma_set_max_seg_size(bdev->common.dev, BAM_MAX_DATA_SIZE);
+ ret = dma_set_max_seg_size(bdev->common.dev, BAM_FIFO_SIZE);
if (ret) {
dev_err(bdev->dev, "cannot set maximum segment size\n");
goto err_bam_channel_exit;
@@ -1234,6 +1245,9 @@ static int bam_dma_remove(struct platform_device *pdev)
bam_dma_terminate_all(&bdev->channels[i].vc.chan);
tasklet_kill(&bdev->channels[i].vc.task);
+ if (!bdev->channels[i].fifo_virt)
+ continue;
+
dma_free_wc(bdev->dev, BAM_DESC_FIFO_SIZE,
bdev->channels[i].fifo_virt,
bdev->channels[i].fifo_phys);
diff --git a/drivers/dma/qcom/hidma.c b/drivers/dma/qcom/hidma.c
index cccc78efbca9..41b5c6dee713 100644
--- a/drivers/dma/qcom/hidma.c
+++ b/drivers/dma/qcom/hidma.c
@@ -1,7 +1,7 @@
/*
* Qualcomm Technologies HIDMA DMA engine interface
*
- * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -404,7 +404,7 @@ static int hidma_terminate_channel(struct dma_chan *chan)
spin_unlock_irqrestore(&mchan->lock, irqflags);
/* this suspends the existing transfer */
- rc = hidma_ll_pause(dmadev->lldev);
+ rc = hidma_ll_disable(dmadev->lldev);
if (rc) {
dev_err(dmadev->ddev.dev, "channel did not pause\n");
goto out;
@@ -427,7 +427,7 @@ static int hidma_terminate_channel(struct dma_chan *chan)
list_move(&mdesc->node, &mchan->free);
}
- rc = hidma_ll_resume(dmadev->lldev);
+ rc = hidma_ll_enable(dmadev->lldev);
out:
pm_runtime_mark_last_busy(dmadev->ddev.dev);
pm_runtime_put_autosuspend(dmadev->ddev.dev);
@@ -488,7 +488,7 @@ static int hidma_pause(struct dma_chan *chan)
dmadev = to_hidma_dev(mchan->chan.device);
if (!mchan->paused) {
pm_runtime_get_sync(dmadev->ddev.dev);
- if (hidma_ll_pause(dmadev->lldev))
+ if (hidma_ll_disable(dmadev->lldev))
dev_warn(dmadev->ddev.dev, "channel did not stop\n");
mchan->paused = true;
pm_runtime_mark_last_busy(dmadev->ddev.dev);
@@ -507,7 +507,7 @@ static int hidma_resume(struct dma_chan *chan)
dmadev = to_hidma_dev(mchan->chan.device);
if (mchan->paused) {
pm_runtime_get_sync(dmadev->ddev.dev);
- rc = hidma_ll_resume(dmadev->lldev);
+ rc = hidma_ll_enable(dmadev->lldev);
if (!rc)
mchan->paused = false;
else
@@ -530,6 +530,43 @@ static irqreturn_t hidma_chirq_handler(int chirq, void *arg)
return hidma_ll_inthandler(chirq, lldev);
}
+static ssize_t hidma_show_values(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct hidma_dev *mdev = platform_get_drvdata(pdev);
+
+ buf[0] = 0;
+
+ if (strcmp(attr->attr.name, "chid") == 0)
+ sprintf(buf, "%d\n", mdev->chidx);
+
+ return strlen(buf);
+}
+
+static int hidma_create_sysfs_entry(struct hidma_dev *dev, char *name,
+ int mode)
+{
+ struct device_attribute *attrs;
+ char *name_copy;
+
+ attrs = devm_kmalloc(dev->ddev.dev, sizeof(struct device_attribute),
+ GFP_KERNEL);
+ if (!attrs)
+ return -ENOMEM;
+
+ name_copy = devm_kstrdup(dev->ddev.dev, name, GFP_KERNEL);
+ if (!name_copy)
+ return -ENOMEM;
+
+ attrs->attr.name = name_copy;
+ attrs->attr.mode = mode;
+ attrs->show = hidma_show_values;
+ sysfs_attr_init(&attrs->attr);
+
+ return device_create_file(dev->ddev.dev, attrs);
+}
+
static int hidma_probe(struct platform_device *pdev)
{
struct hidma_dev *dmadev;
@@ -644,6 +681,8 @@ static int hidma_probe(struct platform_device *pdev)
dmadev->irq = chirq;
tasklet_init(&dmadev->task, hidma_issue_task, (unsigned long)dmadev);
+ hidma_debug_init(dmadev);
+ hidma_create_sysfs_entry(dmadev, "chid", S_IRUGO);
dev_info(&pdev->dev, "HI-DMA engine driver registration complete\n");
platform_set_drvdata(pdev, dmadev);
pm_runtime_mark_last_busy(dmadev->ddev.dev);
@@ -651,6 +690,7 @@ static int hidma_probe(struct platform_device *pdev)
return 0;
uninit:
+ hidma_debug_uninit(dmadev);
hidma_ll_uninit(dmadev->lldev);
dmafree:
if (dmadev)
@@ -668,6 +708,7 @@ static int hidma_remove(struct platform_device *pdev)
pm_runtime_get_sync(dmadev->ddev.dev);
dma_async_device_unregister(&dmadev->ddev);
devm_free_irq(dmadev->ddev.dev, dmadev->irq, dmadev->lldev);
+ hidma_debug_uninit(dmadev);
hidma_ll_uninit(dmadev->lldev);
hidma_free(dmadev);
@@ -689,7 +730,6 @@ static const struct of_device_id hidma_match[] = {
{.compatible = "qcom,hidma-1.0",},
{},
};
-
MODULE_DEVICE_TABLE(of, hidma_match);
static struct platform_driver hidma_driver = {
diff --git a/drivers/dma/qcom/hidma.h b/drivers/dma/qcom/hidma.h
index 231e306f6d87..db413a5efc4e 100644
--- a/drivers/dma/qcom/hidma.h
+++ b/drivers/dma/qcom/hidma.h
@@ -1,7 +1,7 @@
/*
* Qualcomm Technologies HIDMA data structures
*
- * Copyright (c) 2014, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2014-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -20,32 +20,29 @@
#include <linux/interrupt.h>
#include <linux/dmaengine.h>
-#define TRE_SIZE 32 /* each TRE is 32 bytes */
-#define TRE_CFG_IDX 0
-#define TRE_LEN_IDX 1
-#define TRE_SRC_LOW_IDX 2
-#define TRE_SRC_HI_IDX 3
-#define TRE_DEST_LOW_IDX 4
-#define TRE_DEST_HI_IDX 5
-
-struct hidma_tx_status {
- u8 err_info; /* error record in this transfer */
- u8 err_code; /* completion code */
-};
+#define HIDMA_TRE_SIZE 32 /* each TRE is 32 bytes */
+#define HIDMA_TRE_CFG_IDX 0
+#define HIDMA_TRE_LEN_IDX 1
+#define HIDMA_TRE_SRC_LOW_IDX 2
+#define HIDMA_TRE_SRC_HI_IDX 3
+#define HIDMA_TRE_DEST_LOW_IDX 4
+#define HIDMA_TRE_DEST_HI_IDX 5
struct hidma_tre {
atomic_t allocated; /* if this channel is allocated */
bool queued; /* flag whether this is pending */
u16 status; /* status */
- u32 chidx; /* index of the tre */
+ u32 idx; /* index of the tre */
u32 dma_sig; /* signature of the tre */
const char *dev_name; /* name of the device */
void (*callback)(void *data); /* requester callback */
void *data; /* Data associated with this channel*/
struct hidma_lldev *lldev; /* lldma device pointer */
- u32 tre_local[TRE_SIZE / sizeof(u32) + 1]; /* TRE local copy */
+ u32 tre_local[HIDMA_TRE_SIZE / sizeof(u32) + 1]; /* TRE local copy */
u32 tre_index; /* the offset where this was written*/
u32 int_flags; /* interrupt flags */
+ u8 err_info; /* error record in this transfer */
+ u8 err_code; /* completion code */
};
struct hidma_lldev {
@@ -61,22 +58,21 @@ struct hidma_lldev {
void __iomem *evca; /* Event Channel address */
struct hidma_tre
**pending_tre_list; /* Pointers to pending TREs */
- struct hidma_tx_status
- *tx_status_list; /* Pointers to pending TREs status*/
s32 pending_tre_count; /* Number of TREs pending */
void *tre_ring; /* TRE ring */
- dma_addr_t tre_ring_handle; /* TRE ring to be shared with HW */
+ dma_addr_t tre_dma; /* TRE ring to be shared with HW */
u32 tre_ring_size; /* Byte size of the ring */
u32 tre_processed_off; /* last processed TRE */
void *evre_ring; /* EVRE ring */
- dma_addr_t evre_ring_handle; /* EVRE ring to be shared with HW */
+ dma_addr_t evre_dma; /* EVRE ring to be shared with HW */
u32 evre_ring_size; /* Byte size of the ring */
u32 evre_processed_off; /* last processed EVRE */
u32 tre_write_offset; /* TRE write location */
struct tasklet_struct task; /* task delivering notifications */
+ struct tasklet_struct rst_task; /* task to reset HW */
DECLARE_KFIFO_PTR(handoff_fifo,
struct hidma_tre *); /* pending TREs FIFO */
};
@@ -145,8 +141,8 @@ enum dma_status hidma_ll_status(struct hidma_lldev *llhndl, u32 tre_ch);
bool hidma_ll_isenabled(struct hidma_lldev *llhndl);
void hidma_ll_queue_request(struct hidma_lldev *llhndl, u32 tre_ch);
void hidma_ll_start(struct hidma_lldev *llhndl);
-int hidma_ll_pause(struct hidma_lldev *llhndl);
-int hidma_ll_resume(struct hidma_lldev *llhndl);
+int hidma_ll_disable(struct hidma_lldev *lldev);
+int hidma_ll_enable(struct hidma_lldev *llhndl);
void hidma_ll_set_transfer_params(struct hidma_lldev *llhndl, u32 tre_ch,
dma_addr_t src, dma_addr_t dest, u32 len, u32 flags);
int hidma_ll_setup(struct hidma_lldev *lldev);
@@ -157,4 +153,6 @@ int hidma_ll_uninit(struct hidma_lldev *llhndl);
irqreturn_t hidma_ll_inthandler(int irq, void *arg);
void hidma_cleanup_pending_tre(struct hidma_lldev *llhndl, u8 err_info,
u8 err_code);
+int hidma_debug_init(struct hidma_dev *dmadev);
+void hidma_debug_uninit(struct hidma_dev *dmadev);
#endif
diff --git a/drivers/dma/qcom/hidma_dbg.c b/drivers/dma/qcom/hidma_dbg.c
new file mode 100644
index 000000000000..fa827e5ffd68
--- /dev/null
+++ b/drivers/dma/qcom/hidma_dbg.c
@@ -0,0 +1,217 @@
+/*
+ * Qualcomm Technologies HIDMA debug file
+ *
+ * Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/debugfs.h>
+#include <linux/device.h>
+#include <linux/list.h>
+#include <linux/pm_runtime.h>
+
+#include "hidma.h"
+
+static void hidma_ll_chstats(struct seq_file *s, void *llhndl, u32 tre_ch)
+{
+ struct hidma_lldev *lldev = llhndl;
+ struct hidma_tre *tre;
+ u32 length;
+ dma_addr_t src_start;
+ dma_addr_t dest_start;
+ u32 *tre_local;
+
+ if (tre_ch >= lldev->nr_tres) {
+ dev_err(lldev->dev, "invalid TRE number in chstats:%d", tre_ch);
+ return;
+ }
+ tre = &lldev->trepool[tre_ch];
+ seq_printf(s, "------Channel %d -----\n", tre_ch);
+ seq_printf(s, "allocated=%d\n", atomic_read(&tre->allocated));
+ seq_printf(s, "queued = 0x%x\n", tre->queued);
+ seq_printf(s, "err_info = 0x%x\n", tre->err_info);
+ seq_printf(s, "err_code = 0x%x\n", tre->err_code);
+ seq_printf(s, "status = 0x%x\n", tre->status);
+ seq_printf(s, "idx = 0x%x\n", tre->idx);
+ seq_printf(s, "dma_sig = 0x%x\n", tre->dma_sig);
+ seq_printf(s, "dev_name=%s\n", tre->dev_name);
+ seq_printf(s, "callback=%p\n", tre->callback);
+ seq_printf(s, "data=%p\n", tre->data);
+ seq_printf(s, "tre_index = 0x%x\n", tre->tre_index);
+
+ tre_local = &tre->tre_local[0];
+ src_start = tre_local[HIDMA_TRE_SRC_LOW_IDX];
+ src_start = ((u64) (tre_local[HIDMA_TRE_SRC_HI_IDX]) << 32) + src_start;
+ dest_start = tre_local[HIDMA_TRE_DEST_LOW_IDX];
+ dest_start += ((u64) (tre_local[HIDMA_TRE_DEST_HI_IDX]) << 32);
+ length = tre_local[HIDMA_TRE_LEN_IDX];
+
+ seq_printf(s, "src=%pap\n", &src_start);
+ seq_printf(s, "dest=%pap\n", &dest_start);
+ seq_printf(s, "length = 0x%x\n", length);
+}
+
+static void hidma_ll_devstats(struct seq_file *s, void *llhndl)
+{
+ struct hidma_lldev *lldev = llhndl;
+
+ seq_puts(s, "------Device -----\n");
+ seq_printf(s, "lldev init = 0x%x\n", lldev->initialized);
+ seq_printf(s, "trch_state = 0x%x\n", lldev->trch_state);
+ seq_printf(s, "evch_state = 0x%x\n", lldev->evch_state);
+ seq_printf(s, "chidx = 0x%x\n", lldev->chidx);
+ seq_printf(s, "nr_tres = 0x%x\n", lldev->nr_tres);
+ seq_printf(s, "trca=%p\n", lldev->trca);
+ seq_printf(s, "tre_ring=%p\n", lldev->tre_ring);
+ seq_printf(s, "tre_ring_handle=%pap\n", &lldev->tre_dma);
+ seq_printf(s, "tre_ring_size = 0x%x\n", lldev->tre_ring_size);
+ seq_printf(s, "tre_processed_off = 0x%x\n", lldev->tre_processed_off);
+ seq_printf(s, "pending_tre_count=%d\n", lldev->pending_tre_count);
+ seq_printf(s, "evca=%p\n", lldev->evca);
+ seq_printf(s, "evre_ring=%p\n", lldev->evre_ring);
+ seq_printf(s, "evre_ring_handle=%pap\n", &lldev->evre_dma);
+ seq_printf(s, "evre_ring_size = 0x%x\n", lldev->evre_ring_size);
+ seq_printf(s, "evre_processed_off = 0x%x\n", lldev->evre_processed_off);
+ seq_printf(s, "tre_write_offset = 0x%x\n", lldev->tre_write_offset);
+}
+
+/*
+ * hidma_chan_stats: display HIDMA channel statistics
+ *
+ * Display the statistics for the current HIDMA virtual channel device.
+ */
+static int hidma_chan_stats(struct seq_file *s, void *unused)
+{
+ struct hidma_chan *mchan = s->private;
+ struct hidma_desc *mdesc;
+ struct hidma_dev *dmadev = mchan->dmadev;
+
+ pm_runtime_get_sync(dmadev->ddev.dev);
+ seq_printf(s, "paused=%u\n", mchan->paused);
+ seq_printf(s, "dma_sig=%u\n", mchan->dma_sig);
+ seq_puts(s, "prepared\n");
+ list_for_each_entry(mdesc, &mchan->prepared, node)
+ hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
+
+ seq_puts(s, "active\n");
+ list_for_each_entry(mdesc, &mchan->active, node)
+ hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
+
+ seq_puts(s, "completed\n");
+ list_for_each_entry(mdesc, &mchan->completed, node)
+ hidma_ll_chstats(s, mchan->dmadev->lldev, mdesc->tre_ch);
+
+ hidma_ll_devstats(s, mchan->dmadev->lldev);
+ pm_runtime_mark_last_busy(dmadev->ddev.dev);
+ pm_runtime_put_autosuspend(dmadev->ddev.dev);
+ return 0;
+}
+
+/*
+ * hidma_dma_info: display HIDMA device info
+ *
+ * Display the info for the current HIDMA device.
+ */
+static int hidma_dma_info(struct seq_file *s, void *unused)
+{
+ struct hidma_dev *dmadev = s->private;
+ resource_size_t sz;
+
+ seq_printf(s, "nr_descriptors=%d\n", dmadev->nr_descriptors);
+ seq_printf(s, "dev_trca=%p\n", &dmadev->dev_trca);
+ seq_printf(s, "dev_trca_phys=%pa\n", &dmadev->trca_resource->start);
+ sz = resource_size(dmadev->trca_resource);
+ seq_printf(s, "dev_trca_size=%pa\n", &sz);
+ seq_printf(s, "dev_evca=%p\n", &dmadev->dev_evca);
+ seq_printf(s, "dev_evca_phys=%pa\n", &dmadev->evca_resource->start);
+ sz = resource_size(dmadev->evca_resource);
+ seq_printf(s, "dev_evca_size=%pa\n", &sz);
+ return 0;
+}
+
+static int hidma_chan_stats_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, hidma_chan_stats, inode->i_private);
+}
+
+static int hidma_dma_info_open(struct inode *inode, struct file *file)
+{
+ return single_open(file, hidma_dma_info, inode->i_private);
+}
+
+static const struct file_operations hidma_chan_fops = {
+ .open = hidma_chan_stats_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+static const struct file_operations hidma_dma_fops = {
+ .open = hidma_dma_info_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = single_release,
+};
+
+void hidma_debug_uninit(struct hidma_dev *dmadev)
+{
+ debugfs_remove_recursive(dmadev->debugfs);
+ debugfs_remove_recursive(dmadev->stats);
+}
+
+int hidma_debug_init(struct hidma_dev *dmadev)
+{
+ int rc = 0;
+ int chidx = 0;
+ struct list_head *position = NULL;
+
+ dmadev->debugfs = debugfs_create_dir(dev_name(dmadev->ddev.dev), NULL);
+ if (!dmadev->debugfs) {
+ rc = -ENODEV;
+ return rc;
+ }
+
+ /* walk through the virtual channel list */
+ list_for_each(position, &dmadev->ddev.channels) {
+ struct hidma_chan *chan;
+
+ chan = list_entry(position, struct hidma_chan,
+ chan.device_node);
+ sprintf(chan->dbg_name, "chan%d", chidx);
+ chan->debugfs = debugfs_create_dir(chan->dbg_name,
+ dmadev->debugfs);
+ if (!chan->debugfs) {
+ rc = -ENOMEM;
+ goto cleanup;
+ }
+ chan->stats = debugfs_create_file("stats", S_IRUGO,
+ chan->debugfs, chan,
+ &hidma_chan_fops);
+ if (!chan->stats) {
+ rc = -ENOMEM;
+ goto cleanup;
+ }
+ chidx++;
+ }
+
+ dmadev->stats = debugfs_create_file("stats", S_IRUGO,
+ dmadev->debugfs, dmadev,
+ &hidma_dma_fops);
+ if (!dmadev->stats) {
+ rc = -ENOMEM;
+ goto cleanup;
+ }
+
+ return 0;
+cleanup:
+ hidma_debug_uninit(dmadev);
+ return rc;
+}
diff --git a/drivers/dma/qcom/hidma_ll.c b/drivers/dma/qcom/hidma_ll.c
new file mode 100644
index 000000000000..f3929001539b
--- /dev/null
+++ b/drivers/dma/qcom/hidma_ll.c
@@ -0,0 +1,872 @@
+/*
+ * Qualcomm Technologies HIDMA DMA engine low level code
+ *
+ * Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ */
+
+#include <linux/dmaengine.h>
+#include <linux/slab.h>
+#include <linux/interrupt.h>
+#include <linux/mm.h>
+#include <linux/highmem.h>
+#include <linux/dma-mapping.h>
+#include <linux/delay.h>
+#include <linux/atomic.h>
+#include <linux/iopoll.h>
+#include <linux/kfifo.h>
+#include <linux/bitops.h>
+
+#include "hidma.h"
+
+#define HIDMA_EVRE_SIZE 16 /* each EVRE is 16 bytes */
+
+#define HIDMA_TRCA_CTRLSTS_REG 0x000
+#define HIDMA_TRCA_RING_LOW_REG 0x008
+#define HIDMA_TRCA_RING_HIGH_REG 0x00C
+#define HIDMA_TRCA_RING_LEN_REG 0x010
+#define HIDMA_TRCA_DOORBELL_REG 0x400
+
+#define HIDMA_EVCA_CTRLSTS_REG 0x000
+#define HIDMA_EVCA_INTCTRL_REG 0x004
+#define HIDMA_EVCA_RING_LOW_REG 0x008
+#define HIDMA_EVCA_RING_HIGH_REG 0x00C
+#define HIDMA_EVCA_RING_LEN_REG 0x010
+#define HIDMA_EVCA_WRITE_PTR_REG 0x020
+#define HIDMA_EVCA_DOORBELL_REG 0x400
+
+#define HIDMA_EVCA_IRQ_STAT_REG 0x100
+#define HIDMA_EVCA_IRQ_CLR_REG 0x108
+#define HIDMA_EVCA_IRQ_EN_REG 0x110
+
+#define HIDMA_EVRE_CFG_IDX 0
+
+#define HIDMA_EVRE_ERRINFO_BIT_POS 24
+#define HIDMA_EVRE_CODE_BIT_POS 28
+
+#define HIDMA_EVRE_ERRINFO_MASK GENMASK(3, 0)
+#define HIDMA_EVRE_CODE_MASK GENMASK(3, 0)
+
+#define HIDMA_CH_CONTROL_MASK GENMASK(7, 0)
+#define HIDMA_CH_STATE_MASK GENMASK(7, 0)
+#define HIDMA_CH_STATE_BIT_POS 0x8
+
+#define HIDMA_IRQ_EV_CH_EOB_IRQ_BIT_POS 0
+#define HIDMA_IRQ_EV_CH_WR_RESP_BIT_POS 1
+#define HIDMA_IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS 9
+#define HIDMA_IRQ_TR_CH_DATA_RD_ER_BIT_POS 10
+#define HIDMA_IRQ_TR_CH_DATA_WR_ER_BIT_POS 11
+#define HIDMA_IRQ_TR_CH_INVALID_TRE_BIT_POS 14
+
+#define ENABLE_IRQS (BIT(HIDMA_IRQ_EV_CH_EOB_IRQ_BIT_POS) | \
+ BIT(HIDMA_IRQ_EV_CH_WR_RESP_BIT_POS) | \
+ BIT(HIDMA_IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS) | \
+ BIT(HIDMA_IRQ_TR_CH_DATA_RD_ER_BIT_POS) | \
+ BIT(HIDMA_IRQ_TR_CH_DATA_WR_ER_BIT_POS) | \
+ BIT(HIDMA_IRQ_TR_CH_INVALID_TRE_BIT_POS))
+
+#define HIDMA_INCREMENT_ITERATOR(iter, size, ring_size) \
+do { \
+ iter += size; \
+ if (iter >= ring_size) \
+ iter -= ring_size; \
+} while (0)
+
+#define HIDMA_CH_STATE(val) \
+ ((val >> HIDMA_CH_STATE_BIT_POS) & HIDMA_CH_STATE_MASK)
+
+#define HIDMA_ERR_INT_MASK \
+ (BIT(HIDMA_IRQ_TR_CH_INVALID_TRE_BIT_POS) | \
+ BIT(HIDMA_IRQ_TR_CH_TRE_RD_RSP_ER_BIT_POS) | \
+ BIT(HIDMA_IRQ_EV_CH_WR_RESP_BIT_POS) | \
+ BIT(HIDMA_IRQ_TR_CH_DATA_RD_ER_BIT_POS) | \
+ BIT(HIDMA_IRQ_TR_CH_DATA_WR_ER_BIT_POS))
+
+enum ch_command {
+ HIDMA_CH_DISABLE = 0,
+ HIDMA_CH_ENABLE = 1,
+ HIDMA_CH_SUSPEND = 2,
+ HIDMA_CH_RESET = 9,
+};
+
+enum ch_state {
+ HIDMA_CH_DISABLED = 0,
+ HIDMA_CH_ENABLED = 1,
+ HIDMA_CH_RUNNING = 2,
+ HIDMA_CH_SUSPENDED = 3,
+ HIDMA_CH_STOPPED = 4,
+};
+
+enum tre_type {
+ HIDMA_TRE_MEMCPY = 3,
+};
+
+enum err_code {
+ HIDMA_EVRE_STATUS_COMPLETE = 1,
+ HIDMA_EVRE_STATUS_ERROR = 4,
+};
+
+static int hidma_is_chan_enabled(int state)
+{
+ switch (state) {
+ case HIDMA_CH_ENABLED:
+ case HIDMA_CH_RUNNING:
+ return true;
+ default:
+ return false;
+ }
+}
+
+void hidma_ll_free(struct hidma_lldev *lldev, u32 tre_ch)
+{
+ struct hidma_tre *tre;
+
+ if (tre_ch >= lldev->nr_tres) {
+ dev_err(lldev->dev, "invalid TRE number in free:%d", tre_ch);
+ return;
+ }
+
+ tre = &lldev->trepool[tre_ch];
+ if (atomic_read(&tre->allocated) != true) {
+ dev_err(lldev->dev, "trying to free an unused TRE:%d", tre_ch);
+ return;
+ }
+
+ atomic_set(&tre->allocated, 0);
+}
+
+int hidma_ll_request(struct hidma_lldev *lldev, u32 sig, const char *dev_name,
+ void (*callback)(void *data), void *data, u32 *tre_ch)
+{
+ unsigned int i;
+ struct hidma_tre *tre;
+ u32 *tre_local;
+
+ if (!tre_ch || !lldev)
+ return -EINVAL;
+
+ /* need to have at least one empty spot in the queue */
+ for (i = 0; i < lldev->nr_tres - 1; i++) {
+ if (atomic_add_unless(&lldev->trepool[i].allocated, 1, 1))
+ break;
+ }
+
+ if (i == (lldev->nr_tres - 1))
+ return -ENOMEM;
+
+ tre = &lldev->trepool[i];
+ tre->dma_sig = sig;
+ tre->dev_name = dev_name;
+ tre->callback = callback;
+ tre->data = data;
+ tre->idx = i;
+ tre->status = 0;
+ tre->queued = 0;
+ tre->err_code = 0;
+ tre->err_info = 0;
+ tre->lldev = lldev;
+ tre_local = &tre->tre_local[0];
+ tre_local[HIDMA_TRE_CFG_IDX] = HIDMA_TRE_MEMCPY;
+ tre_local[HIDMA_TRE_CFG_IDX] |= (lldev->chidx & 0xFF) << 8;
+ tre_local[HIDMA_TRE_CFG_IDX] |= BIT(16); /* set IEOB */
+ *tre_ch = i;
+ if (callback)
+ callback(data);
+ return 0;
+}
+
+/*
+ * Multiple TREs may be queued and waiting in the pending queue.
+ */
+static void hidma_ll_tre_complete(unsigned long arg)
+{
+ struct hidma_lldev *lldev = (struct hidma_lldev *)arg;
+ struct hidma_tre *tre;
+
+ while (kfifo_out(&lldev->handoff_fifo, &tre, 1)) {
+ /* call the user if it has been read by the hardware */
+ if (tre->callback)
+ tre->callback(tre->data);
+ }
+}
+
+static int hidma_post_completed(struct hidma_lldev *lldev, int tre_iterator,
+ u8 err_info, u8 err_code)
+{
+ struct hidma_tre *tre;
+ unsigned long flags;
+
+ spin_lock_irqsave(&lldev->lock, flags);
+ tre = lldev->pending_tre_list[tre_iterator / HIDMA_TRE_SIZE];
+ if (!tre) {
+ spin_unlock_irqrestore(&lldev->lock, flags);
+ dev_warn(lldev->dev, "tre_index [%d] and tre out of sync\n",
+ tre_iterator / HIDMA_TRE_SIZE);
+ return -EINVAL;
+ }
+ lldev->pending_tre_list[tre->tre_index] = NULL;
+
+ /*
+ * Keep track of pending TREs that SW is expecting to receive
+ * from HW. We got one now. Decrement our counter.
+ */
+ lldev->pending_tre_count--;
+ if (lldev->pending_tre_count < 0) {
+ dev_warn(lldev->dev, "tre count mismatch on completion");
+ lldev->pending_tre_count = 0;
+ }
+
+ spin_unlock_irqrestore(&lldev->lock, flags);
+
+ tre->err_info = err_info;
+ tre->err_code = err_code;
+ tre->queued = 0;
+
+ kfifo_put(&lldev->handoff_fifo, tre);
+ tasklet_schedule(&lldev->task);
+
+ return 0;
+}
+
+/*
+ * Called to handle the interrupt for the channel.
+ * Returns the number of TREs/EVREs consumed on this run (a positive
+ * number when at least one completion was processed), or 0 if there
+ * was nothing to consume or no pending TREs/EVREs were found.
+ */
+static int hidma_handle_tre_completion(struct hidma_lldev *lldev)
+{
+ u32 evre_ring_size = lldev->evre_ring_size;
+ u32 tre_ring_size = lldev->tre_ring_size;
+ u32 err_info, err_code, evre_write_off;
+ u32 tre_iterator, evre_iterator;
+ u32 num_completed = 0;
+
+ evre_write_off = readl_relaxed(lldev->evca + HIDMA_EVCA_WRITE_PTR_REG);
+ tre_iterator = lldev->tre_processed_off;
+ evre_iterator = lldev->evre_processed_off;
+
+ if ((evre_write_off > evre_ring_size) ||
+ (evre_write_off % HIDMA_EVRE_SIZE)) {
+ dev_err(lldev->dev, "HW reports invalid EVRE write offset\n");
+ return 0;
+ }
+
+ /*
+ * By the time control reaches here, the number of EVREs and TREs
+ * may not match. Only consume the ones the hardware has reported.
+ */
+ while ((evre_iterator != evre_write_off)) {
+ u32 *current_evre = lldev->evre_ring + evre_iterator;
+ u32 cfg;
+
+ cfg = current_evre[HIDMA_EVRE_CFG_IDX];
+ err_info = cfg >> HIDMA_EVRE_ERRINFO_BIT_POS;
+ err_info &= HIDMA_EVRE_ERRINFO_MASK;
+ err_code =
+ (cfg >> HIDMA_EVRE_CODE_BIT_POS) & HIDMA_EVRE_CODE_MASK;
+
+ if (hidma_post_completed(lldev, tre_iterator, err_info,
+ err_code))
+ break;
+
+ HIDMA_INCREMENT_ITERATOR(tre_iterator, HIDMA_TRE_SIZE,
+ tre_ring_size);
+ HIDMA_INCREMENT_ITERATOR(evre_iterator, HIDMA_EVRE_SIZE,
+ evre_ring_size);
+
+ /*
+ * Read the new event descriptor written by the HW.
+ * As we are processing the delivered events, other events
+ * get queued to the SW for processing.
+ */
+ evre_write_off =
+ readl_relaxed(lldev->evca + HIDMA_EVCA_WRITE_PTR_REG);
+ num_completed++;
+ }
+
+ if (num_completed) {
+ u32 evre_read_off = (lldev->evre_processed_off +
+ HIDMA_EVRE_SIZE * num_completed);
+ u32 tre_read_off = (lldev->tre_processed_off +
+ HIDMA_TRE_SIZE * num_completed);
+
+ evre_read_off = evre_read_off % evre_ring_size;
+ tre_read_off = tre_read_off % tre_ring_size;
+
+ writel(evre_read_off, lldev->evca + HIDMA_EVCA_DOORBELL_REG);
+
+ /* record the last processed tre offset */
+ lldev->tre_processed_off = tre_read_off;
+ lldev->evre_processed_off = evre_read_off;
+ }
+
+ return num_completed;
+}
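As a minimal standalone sketch of the ring arithmetic used by hidma_handle_tre_completion() above (plain C, not part of the driver; ring lengths and start offsets are illustrative), the TRE and EVRE iterators advance in lock-step by their element sizes and wrap at the end of their rings, exactly as HIDMA_INCREMENT_ITERATOR does:

/* Illustrative sketch only -- not part of this patch. */
#include <stdio.h>
#include <stdint.h>

#define SKETCH_TRE_SIZE		32	/* mirrors HIDMA_TRE_SIZE  */
#define SKETCH_EVRE_SIZE	16	/* mirrors HIDMA_EVRE_SIZE */

static uint32_t advance(uint32_t iter, uint32_t step, uint32_t ring_size)
{
	iter += step;
	if (iter >= ring_size)
		iter -= ring_size;	/* wrap around the ring */
	return iter;
}

int main(void)
{
	uint32_t tre_ring_size = 8 * SKETCH_TRE_SIZE;	/* 8-entry rings */
	uint32_t evre_ring_size = 8 * SKETCH_EVRE_SIZE;
	uint32_t tre_iter = 6 * SKETCH_TRE_SIZE;	/* start near the end */
	uint32_t evre_iter = 6 * SKETCH_EVRE_SIZE;
	int completed;

	for (completed = 0; completed < 4; completed++) {
		tre_iter = advance(tre_iter, SKETCH_TRE_SIZE, tre_ring_size);
		evre_iter = advance(evre_iter, SKETCH_EVRE_SIZE, evre_ring_size);
		printf("completed=%d tre_off=%u evre_off=%u\n",
		       completed + 1, (unsigned)tre_iter, (unsigned)evre_iter);
	}
	return 0;
}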
+
+void hidma_cleanup_pending_tre(struct hidma_lldev *lldev, u8 err_info,
+ u8 err_code)
+{
+ u32 tre_iterator;
+ u32 tre_ring_size = lldev->tre_ring_size;
+ int num_completed = 0;
+ u32 tre_read_off;
+
+ tre_iterator = lldev->tre_processed_off;
+ while (lldev->pending_tre_count) {
+ if (hidma_post_completed(lldev, tre_iterator, err_info,
+ err_code))
+ break;
+ HIDMA_INCREMENT_ITERATOR(tre_iterator, HIDMA_TRE_SIZE,
+ tre_ring_size);
+ num_completed++;
+ }
+ tre_read_off = (lldev->tre_processed_off +
+ HIDMA_TRE_SIZE * num_completed);
+
+ tre_read_off = tre_read_off % tre_ring_size;
+
+ /* record the last processed tre offset */
+ lldev->tre_processed_off = tre_read_off;
+}
+
+static int hidma_ll_reset(struct hidma_lldev *lldev)
+{
+ u32 val;
+ int ret;
+
+ val = readl(lldev->trca + HIDMA_TRCA_CTRLSTS_REG);
+ val &= ~(HIDMA_CH_CONTROL_MASK << 16);
+ val |= HIDMA_CH_RESET << 16;
+ writel(val, lldev->trca + HIDMA_TRCA_CTRLSTS_REG);
+
+ /*
+ * Allow the DMA logic up to 10ms after reset to quiesce:
+ * poll the channel state every 1ms, for at most 10ms.
+ */
+ ret = readl_poll_timeout(lldev->trca + HIDMA_TRCA_CTRLSTS_REG, val,
+ HIDMA_CH_STATE(val) == HIDMA_CH_DISABLED,
+ 1000, 10000);
+ if (ret) {
+ dev_err(lldev->dev, "transfer channel did not reset\n");
+ return ret;
+ }
+
+ val = readl(lldev->evca + HIDMA_EVCA_CTRLSTS_REG);
+ val &= ~(HIDMA_CH_CONTROL_MASK << 16);
+ val |= HIDMA_CH_RESET << 16;
+ writel(val, lldev->evca + HIDMA_EVCA_CTRLSTS_REG);
+
+ /*
+ * Allow the DMA logic up to 10ms after reset to quiesce:
+ * poll the channel state every 1ms, for at most 10ms.
+ */
+ ret = readl_poll_timeout(lldev->evca + HIDMA_EVCA_CTRLSTS_REG, val,
+ HIDMA_CH_STATE(val) == HIDMA_CH_DISABLED,
+ 1000, 10000);
+ if (ret)
+ return ret;
+
+ lldev->trch_state = HIDMA_CH_DISABLED;
+ lldev->evch_state = HIDMA_CH_DISABLED;
+ return 0;
+}
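A minimal standalone sketch of the poll-interval/timeout semantics relied on above (plain C with usleep(); the hardware read is stubbed out and purely an assumption for illustration), polling every 1ms for at most 10ms:

/* Illustrative sketch only -- not part of this patch. */
#include <stdio.h>
#include <stdbool.h>
#include <unistd.h>

/* Stand-in for reading the channel state register. */
static bool channel_is_disabled(void)
{
	static int reads;
	return ++reads >= 3;	/* pretend the HW quiesces on the 3rd read */
}

/* Poll every sleep_us microseconds, give up after timeout_us. */
static int poll_until_disabled(unsigned int sleep_us, unsigned int timeout_us)
{
	unsigned int waited = 0;

	while (!channel_is_disabled()) {
		if (waited >= timeout_us)
			return -1;	/* timed out, like readl_poll_timeout */
		usleep(sleep_us);
		waited += sleep_us;
	}
	return 0;
}

int main(void)
{
	printf("poll result: %d\n", poll_until_disabled(1000, 10000));
	return 0;
}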
+
+/*
+ * Abort all transactions and perform a reset.
+ */
+static void hidma_ll_abort(unsigned long arg)
+{
+ struct hidma_lldev *lldev = (struct hidma_lldev *)arg;
+ u8 err_code = HIDMA_EVRE_STATUS_ERROR;
+ u8 err_info = 0xFF;
+ int rc;
+
+ hidma_cleanup_pending_tre(lldev, err_info, err_code);
+
+ /* reset the channel for recovery */
+ rc = hidma_ll_setup(lldev);
+ if (rc) {
+ dev_err(lldev->dev, "channel reinitialize failed after error\n");
+ return;
+ }
+ writel(ENABLE_IRQS, lldev->evca + HIDMA_EVCA_IRQ_EN_REG);
+}
+
+/*
+ * The interrupt handler for HIDMA will try to consume as many pending
+ * EVRE from the event queue as possible. Each EVRE has an associated
+ * TRE that holds the user interface parameters. EVRE reports the
+ * result of the transaction. Hardware guarantees ordering between EVREs
+ * and TREs. We use last processed offset to figure out which TRE is
+ * associated with which EVRE. If two TREs are consumed by HW, the EVREs
+ * are in order in the event ring.
+ *
+ * This handler will do a one pass for consuming EVREs. Other EVREs may
+ * be delivered while we are working. It will try to consume incoming
+ * EVREs one more time and return.
+ *
+ * For unprocessed EVREs, hardware will trigger another interrupt until
+ * all the interrupt bits are cleared.
+ *
+ * Hardware guarantees that by the time interrupt is observed, all data
+ * transactions in flight are delivered to their respective places and
+ * are visible to the CPU.
+ *
+ * On-demand paging for the IOMMU is only supported for PCIe via PRI
+ * (Page Request Interface), not for HIDMA. All other hardware instances,
+ * including HIDMA, work on pinned DMA addresses.
+ *
+ * HIDMA is not aware of IOMMU presence since it follows the DMA API. All
+ * IOMMU latency will be built into the data movement time. By the time
+ * interrupt happens, IOMMU lookups + data movement has already taken place.
+ *
+ * While the first read in a typical PCI endpoint ISR traditionally flushes
+ * all outstanding writes to the destination, that concept does not apply
+ * to this hardware.
+ */
+irqreturn_t hidma_ll_inthandler(int chirq, void *arg)
+{
+ struct hidma_lldev *lldev = arg;
+ u32 status;
+ u32 enable;
+ u32 cause;
+
+ /*
+ * Fine tuned for this HW...
+ *
+ * This ISR has been designed for this particular hardware. Relaxed
+ * read and write accessors are used for performance reasons due to
+ * interrupt delivery guarantees. Do not copy this code blindly and
+ * expect that to work.
+ */
+ status = readl_relaxed(lldev->evca + HIDMA_EVCA_IRQ_STAT_REG);
+ enable = readl_relaxed(lldev->evca + HIDMA_EVCA_IRQ_EN_REG);
+ cause = status & enable;
+
+ while (cause) {
+ if (cause & HIDMA_ERR_INT_MASK) {
+ dev_err(lldev->dev, "error 0x%x, resetting...\n",
+ cause);
+
+ /* Clear out pending interrupts */
+ writel(cause, lldev->evca + HIDMA_EVCA_IRQ_CLR_REG);
+
+ tasklet_schedule(&lldev->rst_task);
+ goto out;
+ }
+
+ /*
+ * Try to consume as many EVREs as possible.
+ */
+ hidma_handle_tre_completion(lldev);
+
+ /* We consumed TREs or there are pending TREs or EVREs. */
+ writel_relaxed(cause, lldev->evca + HIDMA_EVCA_IRQ_CLR_REG);
+
+ /*
+ * Another interrupt might have arrived while we are
+ * processing this one. Read the new cause.
+ */
+ status = readl_relaxed(lldev->evca + HIDMA_EVCA_IRQ_STAT_REG);
+ enable = readl_relaxed(lldev->evca + HIDMA_EVCA_IRQ_EN_REG);
+ cause = status & enable;
+ }
+
+out:
+ return IRQ_HANDLED;
+}
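A minimal standalone sketch of the interrupt-cause arithmetic the handler depends on (plain C, not part of the driver; bit positions mirror the definitions above): only enabled status bits are serviced, and any bit in the error mask routes the event to the reset path:

/* Illustrative sketch only -- not part of this patch. */
#include <stdio.h>
#include <stdint.h>

#define BIT(n)			(1u << (n))
#define EV_CH_EOB_IRQ		BIT(0)	/* mirrors the IRQ bit positions above */
#define TR_CH_DATA_RD_ER	BIT(10)

/* mirrors HIDMA_ERR_INT_MASK: bits 1, 9, 10, 11 and 14 */
#define ERR_MASK		(BIT(1) | BIT(9) | BIT(10) | BIT(11) | BIT(14))

int main(void)
{
	uint32_t status = EV_CH_EOB_IRQ | TR_CH_DATA_RD_ER; /* raw IRQ status */
	uint32_t enable = EV_CH_EOB_IRQ | ERR_MASK;	     /* enabled sources */
	uint32_t cause  = status & enable;		     /* what we service */

	if (cause & ERR_MASK)
		printf("cause 0x%x: error path, schedule reset\n",
		       (unsigned)cause);
	else if (cause)
		printf("cause 0x%x: completion path\n", (unsigned)cause);
	else
		printf("spurious interrupt\n");
	return 0;
}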
+
+int hidma_ll_enable(struct hidma_lldev *lldev)
+{
+ u32 val;
+ int ret;
+
+ val = readl(lldev->evca + HIDMA_EVCA_CTRLSTS_REG);
+ val &= ~(HIDMA_CH_CONTROL_MASK << 16);
+ val |= HIDMA_CH_ENABLE << 16;
+ writel(val, lldev->evca + HIDMA_EVCA_CTRLSTS_REG);
+
+ ret = readl_poll_timeout(lldev->evca + HIDMA_EVCA_CTRLSTS_REG, val,
+ hidma_is_chan_enabled(HIDMA_CH_STATE(val)),
+ 1000, 10000);
+ if (ret) {
+ dev_err(lldev->dev, "event channel did not get enabled\n");
+ return ret;
+ }
+
+ val = readl(lldev->trca + HIDMA_TRCA_CTRLSTS_REG);
+ val &= ~(HIDMA_CH_CONTROL_MASK << 16);
+ val |= HIDMA_CH_ENABLE << 16;
+ writel(val, lldev->trca + HIDMA_TRCA_CTRLSTS_REG);
+
+ ret = readl_poll_timeout(lldev->trca + HIDMA_TRCA_CTRLSTS_REG, val,
+ hidma_is_chan_enabled(HIDMA_CH_STATE(val)),
+ 1000, 10000);
+ if (ret) {
+ dev_err(lldev->dev, "transfer channel did not get enabled\n");
+ return ret;
+ }
+
+ lldev->trch_state = HIDMA_CH_ENABLED;
+ lldev->evch_state = HIDMA_CH_ENABLED;
+
+ return 0;
+}
+
+void hidma_ll_start(struct hidma_lldev *lldev)
+{
+ unsigned long irqflags;
+
+ spin_lock_irqsave(&lldev->lock, irqflags);
+ writel(lldev->tre_write_offset, lldev->trca + HIDMA_TRCA_DOORBELL_REG);
+ spin_unlock_irqrestore(&lldev->lock, irqflags);
+}
+
+bool hidma_ll_isenabled(struct hidma_lldev *lldev)
+{
+ u32 val;
+
+ val = readl(lldev->trca + HIDMA_TRCA_CTRLSTS_REG);
+ lldev->trch_state = HIDMA_CH_STATE(val);
+ val = readl(lldev->evca + HIDMA_EVCA_CTRLSTS_REG);
+ lldev->evch_state = HIDMA_CH_STATE(val);
+
+ /* both channels have to be enabled before calling this function */
+ if (hidma_is_chan_enabled(lldev->trch_state) &&
+ hidma_is_chan_enabled(lldev->evch_state))
+ return true;
+
+ return false;
+}
+
+void hidma_ll_queue_request(struct hidma_lldev *lldev, u32 tre_ch)
+{
+ struct hidma_tre *tre;
+ unsigned long flags;
+
+ tre = &lldev->trepool[tre_ch];
+
+ /* copy the TRE into its location in the TRE ring */
+ spin_lock_irqsave(&lldev->lock, flags);
+ tre->tre_index = lldev->tre_write_offset / HIDMA_TRE_SIZE;
+ lldev->pending_tre_list[tre->tre_index] = tre;
+ memcpy(lldev->tre_ring + lldev->tre_write_offset,
+ &tre->tre_local[0], HIDMA_TRE_SIZE);
+ tre->err_code = 0;
+ tre->err_info = 0;
+ tre->queued = 1;
+ lldev->pending_tre_count++;
+ lldev->tre_write_offset = (lldev->tre_write_offset + HIDMA_TRE_SIZE)
+ % lldev->tre_ring_size;
+ spin_unlock_irqrestore(&lldev->lock, flags);
+}
+
+/*
+ * Note that even though we stop this channel, a transaction already in
+ * flight will still complete and invoke its callback. This request only
+ * prevents further requests from being made.
+ */
+int hidma_ll_disable(struct hidma_lldev *lldev)
+{
+ u32 val;
+ int ret;
+
+ val = readl(lldev->evca + HIDMA_EVCA_CTRLSTS_REG);
+ lldev->evch_state = HIDMA_CH_STATE(val);
+ val = readl(lldev->trca + HIDMA_TRCA_CTRLSTS_REG);
+ lldev->trch_state = HIDMA_CH_STATE(val);
+
+ /* already suspended by this OS */
+ if ((lldev->trch_state == HIDMA_CH_SUSPENDED) ||
+ (lldev->evch_state == HIDMA_CH_SUSPENDED))
+ return 0;
+
+ /* already stopped by the manager */
+ if ((lldev->trch_state == HIDMA_CH_STOPPED) ||
+ (lldev->evch_state == HIDMA_CH_STOPPED))
+ return 0;
+
+ val = readl(lldev->trca + HIDMA_TRCA_CTRLSTS_REG);
+ val &= ~(HIDMA_CH_CONTROL_MASK << 16);
+ val |= HIDMA_CH_SUSPEND << 16;
+ writel(val, lldev->trca + HIDMA_TRCA_CTRLSTS_REG);
+
+ /*
+ * Start waiting right after requesting the suspend:
+ * poll the state every 1ms, for at most 10ms, until it is confirmed.
+ */
+ ret = readl_poll_timeout(lldev->trca + HIDMA_TRCA_CTRLSTS_REG, val,
+ HIDMA_CH_STATE(val) == HIDMA_CH_SUSPENDED,
+ 1000, 10000);
+ if (ret)
+ return ret;
+
+ val = readl(lldev->evca + HIDMA_EVCA_CTRLSTS_REG);
+ val &= ~(HIDMA_CH_CONTROL_MASK << 16);
+ val |= HIDMA_CH_SUSPEND << 16;
+ writel(val, lldev->evca + HIDMA_EVCA_CTRLSTS_REG);
+
+ /*
+ * Start waiting right after requesting the suspend:
+ * allow the DMA logic up to 10ms to quiesce.
+ */
+ ret = readl_poll_timeout(lldev->evca + HIDMA_EVCA_CTRLSTS_REG, val,
+ HIDMA_CH_STATE(val) == HIDMA_CH_SUSPENDED,
+ 1000, 10000);
+ if (ret)
+ return ret;
+
+ lldev->trch_state = HIDMA_CH_SUSPENDED;
+ lldev->evch_state = HIDMA_CH_SUSPENDED;
+ return 0;
+}
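A minimal standalone sketch of the CTRLSTS field layout assumed by the enable/disable/reset paths (plain C, not part of the driver): the command is written into bits 23:16 after clearing the previous command, and the state is read back from bits 15:8:

/* Illustrative sketch only -- not part of this patch. */
#include <stdio.h>
#include <stdint.h>

#define CH_CONTROL_MASK	0xFFu	/* mirrors HIDMA_CH_CONTROL_MASK */
#define CH_STATE_MASK	0xFFu	/* mirrors HIDMA_CH_STATE_MASK */
#define CH_STATE_SHIFT	8	/* mirrors HIDMA_CH_STATE_BIT_POS */
#define CH_SUSPEND	2	/* mirrors HIDMA_CH_SUSPEND */

/* Replace the command field in bits 23:16, leaving the other bits alone. */
static uint32_t set_command(uint32_t ctrlsts, uint32_t cmd)
{
	ctrlsts &= ~(CH_CONTROL_MASK << 16);
	ctrlsts |= (cmd & CH_CONTROL_MASK) << 16;
	return ctrlsts;
}

/* Read the state field from bits 15:8, as HIDMA_CH_STATE() does. */
static uint32_t get_state(uint32_t ctrlsts)
{
	return (ctrlsts >> CH_STATE_SHIFT) & CH_STATE_MASK;
}

int main(void)
{
	uint32_t val = 0x00010300;	/* pretend register snapshot */

	val = set_command(val, CH_SUSPEND);
	printf("new ctrlsts=0x%08x state=0x%x\n",
	       (unsigned)val, (unsigned)get_state(val));
	return 0;
}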
+
+void hidma_ll_set_transfer_params(struct hidma_lldev *lldev, u32 tre_ch,
+ dma_addr_t src, dma_addr_t dest, u32 len,
+ u32 flags)
+{
+ struct hidma_tre *tre;
+ u32 *tre_local;
+
+ if (tre_ch >= lldev->nr_tres) {
+ dev_err(lldev->dev, "invalid TRE number in transfer params:%d",
+ tre_ch);
+ return;
+ }
+
+ tre = &lldev->trepool[tre_ch];
+ if (atomic_read(&tre->allocated) != true) {
+ dev_err(lldev->dev, "trying to set params on an unused TRE:%d",
+ tre_ch);
+ return;
+ }
+
+ tre_local = &tre->tre_local[0];
+ tre_local[HIDMA_TRE_LEN_IDX] = len;
+ tre_local[HIDMA_TRE_SRC_LOW_IDX] = lower_32_bits(src);
+ tre_local[HIDMA_TRE_SRC_HI_IDX] = upper_32_bits(src);
+ tre_local[HIDMA_TRE_DEST_LOW_IDX] = lower_32_bits(dest);
+ tre_local[HIDMA_TRE_DEST_HI_IDX] = upper_32_bits(dest);
+ tre->int_flags = flags;
+}
+
+/*
+ * Called during initialization and after an error condition
+ * to restore hardware state.
+ */
+int hidma_ll_setup(struct hidma_lldev *lldev)
+{
+ int rc;
+ u64 addr;
+ u32 val;
+ u32 nr_tres = lldev->nr_tres;
+
+ lldev->pending_tre_count = 0;
+ lldev->tre_processed_off = 0;
+ lldev->evre_processed_off = 0;
+ lldev->tre_write_offset = 0;
+
+ /* disable interrupts */
+ writel(0, lldev->evca + HIDMA_EVCA_IRQ_EN_REG);
+
+ /* clear all pending interrupts */
+ val = readl(lldev->evca + HIDMA_EVCA_IRQ_STAT_REG);
+ writel(val, lldev->evca + HIDMA_EVCA_IRQ_CLR_REG);
+
+ rc = hidma_ll_reset(lldev);
+ if (rc)
+ return rc;
+
+ /*
+ * Clear all pending interrupts again.
+ * Otherwise, we observe reset complete interrupts.
+ */
+ val = readl(lldev->evca + HIDMA_EVCA_IRQ_STAT_REG);
+ writel(val, lldev->evca + HIDMA_EVCA_IRQ_CLR_REG);
+
+ /* disable interrupts again after reset */
+ writel(0, lldev->evca + HIDMA_EVCA_IRQ_EN_REG);
+
+ addr = lldev->tre_dma;
+ writel(lower_32_bits(addr), lldev->trca + HIDMA_TRCA_RING_LOW_REG);
+ writel(upper_32_bits(addr), lldev->trca + HIDMA_TRCA_RING_HIGH_REG);
+ writel(lldev->tre_ring_size, lldev->trca + HIDMA_TRCA_RING_LEN_REG);
+
+ addr = lldev->evre_dma;
+ writel(lower_32_bits(addr), lldev->evca + HIDMA_EVCA_RING_LOW_REG);
+ writel(upper_32_bits(addr), lldev->evca + HIDMA_EVCA_RING_HIGH_REG);
+ writel(HIDMA_EVRE_SIZE * nr_tres,
+ lldev->evca + HIDMA_EVCA_RING_LEN_REG);
+
+ /* support IRQ only for now */
+ val = readl(lldev->evca + HIDMA_EVCA_INTCTRL_REG);
+ val &= ~0xF;
+ val |= 0x1;
+ writel(val, lldev->evca + HIDMA_EVCA_INTCTRL_REG);
+
+ /* clear all pending interrupts and enable them */
+ writel(ENABLE_IRQS, lldev->evca + HIDMA_EVCA_IRQ_CLR_REG);
+ writel(ENABLE_IRQS, lldev->evca + HIDMA_EVCA_IRQ_EN_REG);
+
+ return hidma_ll_enable(lldev);
+}
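A minimal standalone sketch of the address programming above (plain C, not part of the driver; the address value is illustrative): a 64-bit ring base is split into 32-bit halves for the RING_LOW/RING_HIGH registers, and the debugfs code rebuilds it the same way:

/* Illustrative sketch only -- not part of this patch. */
#include <stdio.h>
#include <stdint.h>

static uint32_t lower_32(uint64_t addr) { return (uint32_t)addr; }
static uint32_t upper_32(uint64_t addr) { return (uint32_t)(addr >> 32); }

int main(void)
{
	uint64_t ring_base = 0x0000008040001000ULL;	/* example DMA address */
	uint32_t lo = lower_32(ring_base);		/* -> RING_LOW_REG  */
	uint32_t hi = upper_32(ring_base);		/* -> RING_HIGH_REG */
	uint64_t rebuilt = ((uint64_t)hi << 32) | lo;	/* debugfs view */

	printf("low=0x%08x high=0x%08x rebuilt=0x%llx\n",
	       (unsigned)lo, (unsigned)hi, (unsigned long long)rebuilt);
	return 0;
}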
+
+struct hidma_lldev *hidma_ll_init(struct device *dev, u32 nr_tres,
+ void __iomem *trca, void __iomem *evca,
+ u8 chidx)
+{
+ u32 required_bytes;
+ struct hidma_lldev *lldev;
+ int rc;
+ size_t sz;
+
+ if (!trca || !evca || !dev || !nr_tres)
+ return NULL;
+
+ /* need at least four TREs */
+ if (nr_tres < 4)
+ return NULL;
+
+ /* need an extra space */
+ nr_tres += 1;
+
+ lldev = devm_kzalloc(dev, sizeof(struct hidma_lldev), GFP_KERNEL);
+ if (!lldev)
+ return NULL;
+
+ lldev->evca = evca;
+ lldev->trca = trca;
+ lldev->dev = dev;
+ sz = sizeof(struct hidma_tre);
+ lldev->trepool = devm_kcalloc(lldev->dev, nr_tres, sz, GFP_KERNEL);
+ if (!lldev->trepool)
+ return NULL;
+
+ required_bytes = sizeof(lldev->pending_tre_list[0]);
+ lldev->pending_tre_list = devm_kcalloc(dev, nr_tres, required_bytes,
+ GFP_KERNEL);
+ if (!lldev->pending_tre_list)
+ return NULL;
+
+ sz = (HIDMA_TRE_SIZE + 1) * nr_tres;
+ lldev->tre_ring = dmam_alloc_coherent(dev, sz, &lldev->tre_dma,
+ GFP_KERNEL);
+ if (!lldev->tre_ring)
+ return NULL;
+
+ memset(lldev->tre_ring, 0, (HIDMA_TRE_SIZE + 1) * nr_tres);
+ lldev->tre_ring_size = HIDMA_TRE_SIZE * nr_tres;
+ lldev->nr_tres = nr_tres;
+
+ /* the TRE ring has to be HIDMA_TRE_SIZE aligned */
+ if (!IS_ALIGNED(lldev->tre_dma, HIDMA_TRE_SIZE)) {
+ u8 tre_ring_shift;
+
+ tre_ring_shift = lldev->tre_dma % HIDMA_TRE_SIZE;
+ tre_ring_shift = HIDMA_TRE_SIZE - tre_ring_shift;
+ lldev->tre_dma += tre_ring_shift;
+ lldev->tre_ring += tre_ring_shift;
+ }
+
+ sz = (HIDMA_EVRE_SIZE + 1) * nr_tres;
+ lldev->evre_ring = dmam_alloc_coherent(dev, sz, &lldev->evre_dma,
+ GFP_KERNEL);
+ if (!lldev->evre_ring)
+ return NULL;
+
+ memset(lldev->evre_ring, 0, (HIDMA_EVRE_SIZE + 1) * nr_tres);
+ lldev->evre_ring_size = HIDMA_EVRE_SIZE * nr_tres;
+
+ /* the EVRE ring has to be HIDMA_EVRE_SIZE aligned */
+ if (!IS_ALIGNED(lldev->evre_dma, HIDMA_EVRE_SIZE)) {
+ u8 evre_ring_shift;
+
+ evre_ring_shift = lldev->evre_dma % HIDMA_EVRE_SIZE;
+ evre_ring_shift = HIDMA_EVRE_SIZE - evre_ring_shift;
+ lldev->evre_dma += evre_ring_shift;
+ lldev->evre_ring += evre_ring_shift;
+ }
+ lldev->nr_tres = nr_tres;
+ lldev->chidx = chidx;
+
+ sz = nr_tres * sizeof(struct hidma_tre *);
+ rc = kfifo_alloc(&lldev->handoff_fifo, sz, GFP_KERNEL);
+ if (rc)
+ return NULL;
+
+ rc = hidma_ll_setup(lldev);
+ if (rc)
+ return NULL;
+
+ spin_lock_init(&lldev->lock);
+ tasklet_init(&lldev->rst_task, hidma_ll_abort, (unsigned long)lldev);
+ tasklet_init(&lldev->task, hidma_ll_tre_complete, (unsigned long)lldev);
+ lldev->initialized = 1;
+ writel(ENABLE_IRQS, lldev->evca + HIDMA_EVCA_IRQ_EN_REG);
+ return lldev;
+}
+
+int hidma_ll_uninit(struct hidma_lldev *lldev)
+{
+ u32 required_bytes;
+ int rc = 0;
+ u32 val;
+
+ if (!lldev)
+ return -ENODEV;
+
+ if (!lldev->initialized)
+ return 0;
+
+ lldev->initialized = 0;
+
+ required_bytes = sizeof(struct hidma_tre) * lldev->nr_tres;
+ tasklet_kill(&lldev->task);
+ memset(lldev->trepool, 0, required_bytes);
+ lldev->trepool = NULL;
+ lldev->pending_tre_count = 0;
+ lldev->tre_write_offset = 0;
+
+ rc = hidma_ll_reset(lldev);
+
+ /*
+ * Clear all pending interrupts again.
+ * Otherwise, we observe reset complete interrupts.
+ */
+ val = readl(lldev->evca + HIDMA_EVCA_IRQ_STAT_REG);
+ writel(val, lldev->evca + HIDMA_EVCA_IRQ_CLR_REG);
+ writel(0, lldev->evca + HIDMA_EVCA_IRQ_EN_REG);
+ return rc;
+}
+
+enum dma_status hidma_ll_status(struct hidma_lldev *lldev, u32 tre_ch)
+{
+ enum dma_status ret = DMA_ERROR;
+ struct hidma_tre *tre;
+ unsigned long flags;
+ u8 err_code;
+
+ spin_lock_irqsave(&lldev->lock, flags);
+
+ tre = &lldev->trepool[tre_ch];
+ err_code = tre->err_code;
+
+ if (err_code & HIDMA_EVRE_STATUS_COMPLETE)
+ ret = DMA_COMPLETE;
+ else if (err_code & HIDMA_EVRE_STATUS_ERROR)
+ ret = DMA_ERROR;
+ else
+ ret = DMA_IN_PROGRESS;
+ spin_unlock_irqrestore(&lldev->lock, flags);
+
+ return ret;
+}
diff --git a/drivers/dma/qcom/hidma_mgmt.c b/drivers/dma/qcom/hidma_mgmt.c
index ef491b893f40..c0e365321310 100644
--- a/drivers/dma/qcom/hidma_mgmt.c
+++ b/drivers/dma/qcom/hidma_mgmt.c
@@ -1,7 +1,7 @@
/*
* Qualcomm Technologies HIDMA DMA engine Management interface
*
- * Copyright (c) 2015, The Linux Foundation. All rights reserved.
+ * Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
@@ -17,13 +17,14 @@
#include <linux/acpi.h>
#include <linux/of.h>
#include <linux/property.h>
-#include <linux/interrupt.h>
-#include <linux/platform_device.h>
+#include <linux/of_irq.h>
+#include <linux/of_platform.h>
#include <linux/module.h>
#include <linux/uaccess.h>
#include <linux/slab.h>
#include <linux/pm_runtime.h>
#include <linux/bitops.h>
+#include <linux/dma-mapping.h>
#include "hidma_mgmt.h"
@@ -298,5 +299,109 @@ static struct platform_driver hidma_mgmt_driver = {
},
};
-module_platform_driver(hidma_mgmt_driver);
+#if defined(CONFIG_OF) && defined(CONFIG_OF_IRQ)
+static int object_counter;
+
+static int __init hidma_mgmt_of_populate_channels(struct device_node *np)
+{
+ struct platform_device *pdev_parent = of_find_device_by_node(np);
+ struct platform_device_info pdevinfo;
+ struct of_phandle_args out_irq;
+ struct device_node *child;
+ struct resource *res;
+ const __be32 *cell;
+ int ret = 0, size, i, num;
+ u64 addr, addr_size;
+
+ for_each_available_child_of_node(np, child) {
+ struct resource *res_iter;
+ struct platform_device *new_pdev;
+
+ cell = of_get_property(child, "reg", &size);
+ if (!cell) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ size /= sizeof(*cell);
+ num = size /
+ (of_n_addr_cells(child) + of_n_size_cells(child)) + 1;
+
+ /* allocate a resource array */
+ res = kcalloc(num, sizeof(*res), GFP_KERNEL);
+ if (!res) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ /* read each reg value */
+ i = 0;
+ res_iter = res;
+ while (i < size) {
+ addr = of_read_number(&cell[i],
+ of_n_addr_cells(child));
+ i += of_n_addr_cells(child);
+
+ addr_size = of_read_number(&cell[i],
+ of_n_size_cells(child));
+ i += of_n_size_cells(child);
+
+ res_iter->start = addr;
+ res_iter->end = res_iter->start + addr_size - 1;
+ res_iter->flags = IORESOURCE_MEM;
+ res_iter++;
+ }
+
+ ret = of_irq_parse_one(child, 0, &out_irq);
+ if (ret)
+ goto out;
+
+ res_iter->start = irq_create_of_mapping(&out_irq);
+ res_iter->name = "hidma event irq";
+ res_iter->flags = IORESOURCE_IRQ;
+
+ memset(&pdevinfo, 0, sizeof(pdevinfo));
+ pdevinfo.fwnode = &child->fwnode;
+ pdevinfo.parent = pdev_parent ? &pdev_parent->dev : NULL;
+ pdevinfo.name = child->name;
+ pdevinfo.id = object_counter++;
+ pdevinfo.res = res;
+ pdevinfo.num_res = num;
+ pdevinfo.data = NULL;
+ pdevinfo.size_data = 0;
+ pdevinfo.dma_mask = DMA_BIT_MASK(64);
+ new_pdev = platform_device_register_full(&pdevinfo);
+ if (!new_pdev) {
+ ret = -ENODEV;
+ goto out;
+ }
+ of_dma_configure(&new_pdev->dev, child);
+
+ kfree(res);
+ res = NULL;
+ }
+out:
+ kfree(res);
+
+ return ret;
+}
+#endif
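A minimal standalone sketch of the "reg" cell walk above (plain C, not part of the driver; assumes #address-cells = 2 and #size-cells = 2, and host-order cell values purely for illustration): each memory region consumes address-cells + size-cells u32 cells, and one extra resource slot is reserved for the event IRQ:

/* Illustrative sketch only -- not part of this patch. */
#include <stdio.h>
#include <stdint.h>

#define ADDR_CELLS	2	/* assumed #address-cells */
#define SIZE_CELLS	2	/* assumed #size-cells */

/* Combine n cells into one value, most significant cell first. */
static uint64_t read_cells(const uint32_t *cells, int n)
{
	uint64_t val = 0;

	while (n--)
		val = (val << 32) | *cells++;
	return val;
}

int main(void)
{
	/* One region: address 0x1100000000, size 0x1000 (illustrative). */
	const uint32_t reg[] = { 0x11, 0x00000000, 0x0, 0x1000 };
	int ncells = sizeof(reg) / sizeof(reg[0]);
	int nres = ncells / (ADDR_CELLS + SIZE_CELLS) + 1; /* +1 for the IRQ */
	int i = 0;

	while (i < ncells) {
		uint64_t start = read_cells(&reg[i], ADDR_CELLS);
		uint64_t size  = read_cells(&reg[i + ADDR_CELLS], SIZE_CELLS);

		printf("mem resource: start=0x%llx end=0x%llx\n",
		       (unsigned long long)start,
		       (unsigned long long)(start + size - 1));
		i += ADDR_CELLS + SIZE_CELLS;
	}
	printf("total resources including IRQ slot: %d\n", nres);
	return 0;
}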
+
+static int __init hidma_mgmt_init(void)
+{
+#if defined(CONFIG_OF) && defined(CONFIG_OF_IRQ)
+ struct device_node *child;
+
+ for (child = of_find_matching_node(NULL, hidma_mgmt_match); child;
+ child = of_find_matching_node(child, hidma_mgmt_match)) {
+ /* populate channels described by device tree firmware */
+ hidma_mgmt_of_populate_channels(child);
+ of_node_put(child);
+ }
+#endif
+ platform_driver_register(&hidma_mgmt_driver);
+
+ return 0;
+}
+module_init(hidma_mgmt_init);
MODULE_LICENSE("GPL v2");
diff --git a/drivers/dma/sun4i-dma.c b/drivers/dma/sun4i-dma.c
index e0df233dde92..57aa227bfadb 100644
--- a/drivers/dma/sun4i-dma.c
+++ b/drivers/dma/sun4i-dma.c
@@ -461,25 +461,25 @@ generate_ndma_promise(struct dma_chan *chan, dma_addr_t src, dma_addr_t dest,
/* Source burst */
ret = convert_burst(sconfig->src_maxburst);
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
goto fail;
promise->cfg |= SUN4I_DMA_CFG_SRC_BURST_LENGTH(ret);
/* Destination burst */
ret = convert_burst(sconfig->dst_maxburst);
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
goto fail;
promise->cfg |= SUN4I_DMA_CFG_DST_BURST_LENGTH(ret);
/* Source bus width */
ret = convert_buswidth(sconfig->src_addr_width);
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
goto fail;
promise->cfg |= SUN4I_DMA_CFG_SRC_DATA_WIDTH(ret);
/* Destination bus width */
ret = convert_buswidth(sconfig->dst_addr_width);
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
goto fail;
promise->cfg |= SUN4I_DMA_CFG_DST_DATA_WIDTH(ret);
@@ -518,25 +518,25 @@ generate_ddma_promise(struct dma_chan *chan, dma_addr_t src, dma_addr_t dest,
/* Source burst */
ret = convert_burst(sconfig->src_maxburst);
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
goto fail;
promise->cfg |= SUN4I_DMA_CFG_SRC_BURST_LENGTH(ret);
/* Destination burst */
ret = convert_burst(sconfig->dst_maxburst);
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
goto fail;
promise->cfg |= SUN4I_DMA_CFG_DST_BURST_LENGTH(ret);
/* Source bus width */
ret = convert_buswidth(sconfig->src_addr_width);
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
goto fail;
promise->cfg |= SUN4I_DMA_CFG_SRC_DATA_WIDTH(ret);
/* Destination bus width */
ret = convert_buswidth(sconfig->dst_addr_width);
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
goto fail;
promise->cfg |= SUN4I_DMA_CFG_DST_DATA_WIDTH(ret);
diff --git a/drivers/dma/sun6i-dma.c b/drivers/dma/sun6i-dma.c
index 2db12e493c53..5065ca43face 100644
--- a/drivers/dma/sun6i-dma.c
+++ b/drivers/dma/sun6i-dma.c
@@ -146,6 +146,8 @@ struct sun6i_vchan {
struct dma_slave_config cfg;
struct sun6i_pchan *phy;
u8 port;
+ u8 irq_type;
+ bool cyclic;
};
struct sun6i_dma_dev {
@@ -254,6 +256,30 @@ static inline s8 convert_buswidth(enum dma_slave_buswidth addr_width)
return addr_width >> 1;
}
+static size_t sun6i_get_chan_size(struct sun6i_pchan *pchan)
+{
+ struct sun6i_desc *txd = pchan->desc;
+ struct sun6i_dma_lli *lli;
+ size_t bytes;
+ dma_addr_t pos;
+
+ pos = readl(pchan->base + DMA_CHAN_LLI_ADDR);
+ bytes = readl(pchan->base + DMA_CHAN_CUR_CNT);
+
+ if (pos == LLI_LAST_ITEM)
+ return bytes;
+
+ for (lli = txd->v_lli; lli; lli = lli->v_lli_next) {
+ if (lli->p_lli_next == pos) {
+ for (lli = lli->v_lli_next; lli; lli = lli->v_lli_next)
+ bytes += lli->len;
+ break;
+ }
+ }
+
+ return bytes;
+}
+
static void *sun6i_dma_lli_add(struct sun6i_dma_lli *prev,
struct sun6i_dma_lli *next,
dma_addr_t next_phy,
@@ -276,45 +302,6 @@ static void *sun6i_dma_lli_add(struct sun6i_dma_lli *prev,
return next;
}
-static inline int sun6i_dma_cfg_lli(struct sun6i_dma_lli *lli,
- dma_addr_t src,
- dma_addr_t dst, u32 len,
- struct dma_slave_config *config)
-{
- u8 src_width, dst_width, src_burst, dst_burst;
-
- if (!config)
- return -EINVAL;
-
- src_burst = convert_burst(config->src_maxburst);
- if (src_burst)
- return src_burst;
-
- dst_burst = convert_burst(config->dst_maxburst);
- if (dst_burst)
- return dst_burst;
-
- src_width = convert_buswidth(config->src_addr_width);
- if (src_width)
- return src_width;
-
- dst_width = convert_buswidth(config->dst_addr_width);
- if (dst_width)
- return dst_width;
-
- lli->cfg = DMA_CHAN_CFG_SRC_BURST(src_burst) |
- DMA_CHAN_CFG_SRC_WIDTH(src_width) |
- DMA_CHAN_CFG_DST_BURST(dst_burst) |
- DMA_CHAN_CFG_DST_WIDTH(dst_width);
-
- lli->src = src;
- lli->dst = dst;
- lli->len = len;
- lli->para = NORMAL_WAIT;
-
- return 0;
-}
-
static inline void sun6i_dma_dump_lli(struct sun6i_vchan *vchan,
struct sun6i_dma_lli *lli)
{
@@ -381,9 +368,13 @@ static int sun6i_dma_start_desc(struct sun6i_vchan *vchan)
irq_reg = pchan->idx / DMA_IRQ_CHAN_NR;
irq_offset = pchan->idx % DMA_IRQ_CHAN_NR;
- irq_val = readl(sdev->base + DMA_IRQ_EN(irq_offset));
- irq_val |= DMA_IRQ_QUEUE << (irq_offset * DMA_IRQ_CHAN_WIDTH);
- writel(irq_val, sdev->base + DMA_IRQ_EN(irq_offset));
+ vchan->irq_type = vchan->cyclic ? DMA_IRQ_PKG : DMA_IRQ_QUEUE;
+
+ irq_val = readl(sdev->base + DMA_IRQ_EN(irq_reg));
+ irq_val &= ~((DMA_IRQ_HALF | DMA_IRQ_PKG | DMA_IRQ_QUEUE) <<
+ (irq_offset * DMA_IRQ_CHAN_WIDTH));
+ irq_val |= vchan->irq_type << (irq_offset * DMA_IRQ_CHAN_WIDTH);
+ writel(irq_val, sdev->base + DMA_IRQ_EN(irq_reg));
writel(pchan->desc->p_lli, pchan->base + DMA_CHAN_LLI_ADDR);
writel(DMA_CHAN_ENABLE_START, pchan->base + DMA_CHAN_ENABLE);
@@ -479,11 +470,12 @@ static irqreturn_t sun6i_dma_interrupt(int irq, void *dev_id)
writel(status, sdev->base + DMA_IRQ_STAT(i));
for (j = 0; (j < DMA_IRQ_CHAN_NR) && status; j++) {
- if (status & DMA_IRQ_QUEUE) {
- pchan = sdev->pchans + j;
- vchan = pchan->vchan;
-
- if (vchan) {
+ pchan = sdev->pchans + j;
+ vchan = pchan->vchan;
+ if (vchan && (status & vchan->irq_type)) {
+ if (vchan->cyclic) {
+ vchan_cyclic_callback(&pchan->desc->vd);
+ } else {
spin_lock(&vchan->vc.lock);
vchan_cookie_complete(&pchan->desc->vd);
pchan->done = pchan->desc;
@@ -502,6 +494,55 @@ static irqreturn_t sun6i_dma_interrupt(int irq, void *dev_id)
return ret;
}
+static int set_config(struct sun6i_dma_dev *sdev,
+ struct dma_slave_config *sconfig,
+ enum dma_transfer_direction direction,
+ u32 *p_cfg)
+{
+ s8 src_width, dst_width, src_burst, dst_burst;
+
+ switch (direction) {
+ case DMA_MEM_TO_DEV:
+ src_burst = convert_burst(sconfig->src_maxburst ?
+ sconfig->src_maxburst : 8);
+ src_width = convert_buswidth(sconfig->src_addr_width !=
+ DMA_SLAVE_BUSWIDTH_UNDEFINED ?
+ sconfig->src_addr_width :
+ DMA_SLAVE_BUSWIDTH_4_BYTES);
+ dst_burst = convert_burst(sconfig->dst_maxburst);
+ dst_width = convert_buswidth(sconfig->dst_addr_width);
+ break;
+ case DMA_DEV_TO_MEM:
+ src_burst = convert_burst(sconfig->src_maxburst);
+ src_width = convert_buswidth(sconfig->src_addr_width);
+ dst_burst = convert_burst(sconfig->dst_maxburst ?
+ sconfig->dst_maxburst : 8);
+ dst_width = convert_buswidth(sconfig->dst_addr_width !=
+ DMA_SLAVE_BUSWIDTH_UNDEFINED ?
+ sconfig->dst_addr_width :
+ DMA_SLAVE_BUSWIDTH_4_BYTES);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (src_burst < 0)
+ return src_burst;
+ if (src_width < 0)
+ return src_width;
+ if (dst_burst < 0)
+ return dst_burst;
+ if (dst_width < 0)
+ return dst_width;
+
+ *p_cfg = DMA_CHAN_CFG_SRC_BURST(src_burst) |
+ DMA_CHAN_CFG_SRC_WIDTH(src_width) |
+ DMA_CHAN_CFG_DST_BURST(dst_burst) |
+ DMA_CHAN_CFG_DST_WIDTH(dst_width);
+
+ return 0;
+}
+
static struct dma_async_tx_descriptor *sun6i_dma_prep_dma_memcpy(
struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
size_t len, unsigned long flags)
@@ -569,13 +610,15 @@ static struct dma_async_tx_descriptor *sun6i_dma_prep_slave_sg(
struct sun6i_desc *txd;
struct scatterlist *sg;
dma_addr_t p_lli;
+ u32 lli_cfg;
int i, ret;
if (!sgl)
return NULL;
- if (!is_slave_direction(dir)) {
- dev_err(chan2dev(chan), "Invalid DMA direction\n");
+ ret = set_config(sdev, sconfig, dir, &lli_cfg);
+ if (ret) {
+ dev_err(chan2dev(chan), "Invalid DMA configuration\n");
return NULL;
}
@@ -588,14 +631,14 @@ static struct dma_async_tx_descriptor *sun6i_dma_prep_slave_sg(
if (!v_lli)
goto err_lli_free;
- if (dir == DMA_MEM_TO_DEV) {
- ret = sun6i_dma_cfg_lli(v_lli, sg_dma_address(sg),
- sconfig->dst_addr, sg_dma_len(sg),
- sconfig);
- if (ret)
- goto err_cur_lli_free;
+ v_lli->len = sg_dma_len(sg);
+ v_lli->para = NORMAL_WAIT;
- v_lli->cfg |= DMA_CHAN_CFG_DST_IO_MODE |
+ if (dir == DMA_MEM_TO_DEV) {
+ v_lli->src = sg_dma_address(sg);
+ v_lli->dst = sconfig->dst_addr;
+ v_lli->cfg = lli_cfg |
+ DMA_CHAN_CFG_DST_IO_MODE |
DMA_CHAN_CFG_SRC_LINEAR_MODE |
DMA_CHAN_CFG_SRC_DRQ(DRQ_SDRAM) |
DMA_CHAN_CFG_DST_DRQ(vchan->port);
@@ -607,13 +650,10 @@ static struct dma_async_tx_descriptor *sun6i_dma_prep_slave_sg(
sg_dma_len(sg), flags);
} else {
- ret = sun6i_dma_cfg_lli(v_lli, sconfig->src_addr,
- sg_dma_address(sg), sg_dma_len(sg),
- sconfig);
- if (ret)
- goto err_cur_lli_free;
-
- v_lli->cfg |= DMA_CHAN_CFG_DST_LINEAR_MODE |
+ v_lli->src = sconfig->src_addr;
+ v_lli->dst = sg_dma_address(sg);
+ v_lli->cfg = lli_cfg |
+ DMA_CHAN_CFG_DST_LINEAR_MODE |
DMA_CHAN_CFG_SRC_IO_MODE |
DMA_CHAN_CFG_DST_DRQ(DRQ_SDRAM) |
DMA_CHAN_CFG_SRC_DRQ(vchan->port);
@@ -634,8 +674,78 @@ static struct dma_async_tx_descriptor *sun6i_dma_prep_slave_sg(
return vchan_tx_prep(&vchan->vc, &txd->vd, flags);
-err_cur_lli_free:
- dma_pool_free(sdev->pool, v_lli, p_lli);
+err_lli_free:
+ for (prev = txd->v_lli; prev; prev = prev->v_lli_next)
+ dma_pool_free(sdev->pool, prev, virt_to_phys(prev));
+ kfree(txd);
+ return NULL;
+}
+
+static struct dma_async_tx_descriptor *sun6i_dma_prep_dma_cyclic(
+ struct dma_chan *chan,
+ dma_addr_t buf_addr,
+ size_t buf_len,
+ size_t period_len,
+ enum dma_transfer_direction dir,
+ unsigned long flags)
+{
+ struct sun6i_dma_dev *sdev = to_sun6i_dma_dev(chan->device);
+ struct sun6i_vchan *vchan = to_sun6i_vchan(chan);
+ struct dma_slave_config *sconfig = &vchan->cfg;
+ struct sun6i_dma_lli *v_lli, *prev = NULL;
+ struct sun6i_desc *txd;
+ dma_addr_t p_lli;
+ u32 lli_cfg;
+ unsigned int i, periods = buf_len / period_len;
+ int ret;
+
+ ret = set_config(sdev, sconfig, dir, &lli_cfg);
+ if (ret) {
+ dev_err(chan2dev(chan), "Invalid DMA configuration\n");
+ return NULL;
+ }
+
+ txd = kzalloc(sizeof(*txd), GFP_NOWAIT);
+ if (!txd)
+ return NULL;
+
+ for (i = 0; i < periods; i++) {
+ v_lli = dma_pool_alloc(sdev->pool, GFP_NOWAIT, &p_lli);
+ if (!v_lli) {
+ dev_err(sdev->slave.dev, "Failed to alloc lli memory\n");
+ goto err_lli_free;
+ }
+
+ v_lli->len = period_len;
+ v_lli->para = NORMAL_WAIT;
+
+ if (dir == DMA_MEM_TO_DEV) {
+ v_lli->src = buf_addr + period_len * i;
+ v_lli->dst = sconfig->dst_addr;
+ v_lli->cfg = lli_cfg |
+ DMA_CHAN_CFG_DST_IO_MODE |
+ DMA_CHAN_CFG_SRC_LINEAR_MODE |
+ DMA_CHAN_CFG_SRC_DRQ(DRQ_SDRAM) |
+ DMA_CHAN_CFG_DST_DRQ(vchan->port);
+ } else {
+ v_lli->src = sconfig->src_addr;
+ v_lli->dst = buf_addr + period_len * i;
+ v_lli->cfg = lli_cfg |
+ DMA_CHAN_CFG_DST_LINEAR_MODE |
+ DMA_CHAN_CFG_SRC_IO_MODE |
+ DMA_CHAN_CFG_DST_DRQ(DRQ_SDRAM) |
+ DMA_CHAN_CFG_SRC_DRQ(vchan->port);
+ }
+
+ prev = sun6i_dma_lli_add(prev, v_lli, p_lli, txd);
+ }
+
+ prev->p_lli_next = txd->p_lli; /* cyclic list */
+
+ vchan->cyclic = true;
+
+ return vchan_tx_prep(&vchan->vc, &txd->vd, flags);
+
err_lli_free:
for (prev = txd->v_lli; prev; prev = prev->v_lli_next)
dma_pool_free(sdev->pool, prev, virt_to_phys(prev));
@@ -712,6 +822,16 @@ static int sun6i_dma_terminate_all(struct dma_chan *chan)
spin_lock_irqsave(&vchan->vc.lock, flags);
+ if (vchan->cyclic) {
+ vchan->cyclic = false;
+ if (pchan && pchan->desc) {
+ struct virt_dma_desc *vd = &pchan->desc->vd;
+ struct virt_dma_chan *vc = &vchan->vc;
+
+ list_add_tail(&vd->node, &vc->desc_completed);
+ }
+ }
+
vchan_get_all_descriptors(&vchan->vc, &head);
if (pchan) {
@@ -759,7 +879,7 @@ static enum dma_status sun6i_dma_tx_status(struct dma_chan *chan,
} else if (!pchan || !pchan->desc) {
bytes = 0;
} else {
- bytes = readl(pchan->base + DMA_CHAN_CUR_CNT);
+ bytes = sun6i_get_chan_size(pchan);
}
spin_unlock_irqrestore(&vchan->vc.lock, flags);
@@ -963,6 +1083,7 @@ static int sun6i_dma_probe(struct platform_device *pdev)
dma_cap_set(DMA_PRIVATE, sdc->slave.cap_mask);
dma_cap_set(DMA_MEMCPY, sdc->slave.cap_mask);
dma_cap_set(DMA_SLAVE, sdc->slave.cap_mask);
+ dma_cap_set(DMA_CYCLIC, sdc->slave.cap_mask);
INIT_LIST_HEAD(&sdc->slave.channels);
sdc->slave.device_free_chan_resources = sun6i_dma_free_chan_resources;
@@ -970,6 +1091,7 @@ static int sun6i_dma_probe(struct platform_device *pdev)
sdc->slave.device_issue_pending = sun6i_dma_issue_pending;
sdc->slave.device_prep_slave_sg = sun6i_dma_prep_slave_sg;
sdc->slave.device_prep_dma_memcpy = sun6i_dma_prep_dma_memcpy;
+ sdc->slave.device_prep_dma_cyclic = sun6i_dma_prep_dma_cyclic;
sdc->slave.copy_align = DMAENGINE_ALIGN_4_BYTES;
sdc->slave.device_config = sun6i_dma_config;
sdc->slave.device_pause = sun6i_dma_pause;
diff --git a/drivers/dma/tegra20-apb-dma.c b/drivers/dma/tegra20-apb-dma.c
index 3871f29e523d..01e316f73559 100644
--- a/drivers/dma/tegra20-apb-dma.c
+++ b/drivers/dma/tegra20-apb-dma.c
@@ -54,6 +54,7 @@
#define TEGRA_APBDMA_CSR_ONCE BIT(27)
#define TEGRA_APBDMA_CSR_FLOW BIT(21)
#define TEGRA_APBDMA_CSR_REQ_SEL_SHIFT 16
+#define TEGRA_APBDMA_CSR_REQ_SEL_MASK 0x1F
#define TEGRA_APBDMA_CSR_WCOUNT_MASK 0xFFFC
/* STATUS register */
@@ -114,6 +115,8 @@
/* Channel base address offset from APBDMA base address */
#define TEGRA_APBDMA_CHANNEL_BASE_ADD_OFFSET 0x1000
+#define TEGRA_APBDMA_SLAVE_ID_INVALID (TEGRA_APBDMA_CSR_REQ_SEL_MASK + 1)
+
struct tegra_dma;
/*
@@ -353,8 +356,11 @@ static int tegra_dma_slave_config(struct dma_chan *dc,
}
memcpy(&tdc->dma_sconfig, sconfig, sizeof(*sconfig));
- if (!tdc->slave_id)
+ if (tdc->slave_id == TEGRA_APBDMA_SLAVE_ID_INVALID) {
+ if (sconfig->slave_id > TEGRA_APBDMA_CSR_REQ_SEL_MASK)
+ return -EINVAL;
tdc->slave_id = sconfig->slave_id;
+ }
tdc->config_init = true;
return 0;
}
@@ -1236,7 +1242,7 @@ static void tegra_dma_free_chan_resources(struct dma_chan *dc)
}
pm_runtime_put(tdma->dev);
- tdc->slave_id = 0;
+ tdc->slave_id = TEGRA_APBDMA_SLAVE_ID_INVALID;
}
static struct dma_chan *tegra_dma_of_xlate(struct of_phandle_args *dma_spec,
@@ -1246,6 +1252,11 @@ static struct dma_chan *tegra_dma_of_xlate(struct of_phandle_args *dma_spec,
struct dma_chan *chan;
struct tegra_dma_channel *tdc;
+ if (dma_spec->args[0] > TEGRA_APBDMA_CSR_REQ_SEL_MASK) {
+ dev_err(tdma->dev, "Invalid slave id: %d\n", dma_spec->args[0]);
+ return NULL;
+ }
+
chan = dma_get_any_slave_channel(&tdma->dma_dev);
if (!chan)
return NULL;
@@ -1389,6 +1400,7 @@ static int tegra_dma_probe(struct platform_device *pdev)
&tdma->dma_dev.channels);
tdc->tdma = tdma;
tdc->id = i;
+ tdc->slave_id = TEGRA_APBDMA_SLAVE_ID_INVALID;
tasklet_init(&tdc->tasklet, tegra_dma_tasklet,
(unsigned long)tdc);
diff --git a/drivers/dma/tegra210-adma.c b/drivers/dma/tegra210-adma.c
new file mode 100644
index 000000000000..c4b121c4559d
--- /dev/null
+++ b/drivers/dma/tegra210-adma.c
@@ -0,0 +1,840 @@
+/*
+ * ADMA driver for Nvidia's Tegra210 ADMA controller.
+ *
+ * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/clk.h>
+#include <linux/iopoll.h>
+#include <linux/module.h>
+#include <linux/of_device.h>
+#include <linux/of_dma.h>
+#include <linux/of_irq.h>
+#include <linux/pm_clock.h>
+#include <linux/pm_runtime.h>
+#include <linux/slab.h>
+
+#include "virt-dma.h"
+
+#define ADMA_CH_CMD 0x00
+#define ADMA_CH_STATUS 0x0c
+#define ADMA_CH_STATUS_XFER_EN BIT(0)
+
+#define ADMA_CH_INT_STATUS 0x10
+#define ADMA_CH_INT_STATUS_XFER_DONE BIT(0)
+
+#define ADMA_CH_INT_CLEAR 0x1c
+#define ADMA_CH_CTRL 0x24
+#define ADMA_CH_CTRL_TX_REQ(val) (((val) & 0xf) << 28)
+#define ADMA_CH_CTRL_TX_REQ_MAX 10
+#define ADMA_CH_CTRL_RX_REQ(val) (((val) & 0xf) << 24)
+#define ADMA_CH_CTRL_RX_REQ_MAX 10
+#define ADMA_CH_CTRL_DIR(val) (((val) & 0xf) << 12)
+#define ADMA_CH_CTRL_DIR_AHUB2MEM 2
+#define ADMA_CH_CTRL_DIR_MEM2AHUB 4
+#define ADMA_CH_CTRL_MODE_CONTINUOUS (2 << 8)
+#define ADMA_CH_CTRL_FLOWCTRL_EN BIT(1)
+
+#define ADMA_CH_CONFIG 0x28
+#define ADMA_CH_CONFIG_SRC_BUF(val) (((val) & 0x7) << 28)
+#define ADMA_CH_CONFIG_TRG_BUF(val) (((val) & 0x7) << 24)
+#define ADMA_CH_CONFIG_BURST_SIZE(val) (((val) & 0x7) << 20)
+#define ADMA_CH_CONFIG_BURST_16 5
+#define ADMA_CH_CONFIG_WEIGHT_FOR_WRR(val) ((val) & 0xf)
+#define ADMA_CH_CONFIG_MAX_BUFS 8
+
+#define ADMA_CH_FIFO_CTRL 0x2c
+#define ADMA_CH_FIFO_CTRL_OVRFW_THRES(val) (((val) & 0xf) << 24)
+#define ADMA_CH_FIFO_CTRL_STARV_THRES(val) (((val) & 0xf) << 16)
+#define ADMA_CH_FIFO_CTRL_TX_SIZE(val) (((val) & 0xf) << 8)
+#define ADMA_CH_FIFO_CTRL_RX_SIZE(val) ((val) & 0xf)
+
+#define ADMA_CH_LOWER_SRC_ADDR 0x34
+#define ADMA_CH_LOWER_TRG_ADDR 0x3c
+#define ADMA_CH_TC 0x44
+#define ADMA_CH_TC_COUNT_MASK 0x3ffffffc
+
+#define ADMA_CH_XFER_STATUS 0x54
+#define ADMA_CH_XFER_STATUS_COUNT_MASK 0xffff
+
+#define ADMA_GLOBAL_CMD 0xc00
+#define ADMA_GLOBAL_SOFT_RESET 0xc04
+#define ADMA_GLOBAL_INT_CLEAR 0xc20
+#define ADMA_GLOBAL_CTRL 0xc24
+
+#define ADMA_CH_REG_OFFSET(a) (a * 0x80)
+
+#define ADMA_CH_FIFO_CTRL_DEFAULT (ADMA_CH_FIFO_CTRL_OVRFW_THRES(1) | \
+ ADMA_CH_FIFO_CTRL_STARV_THRES(1) | \
+ ADMA_CH_FIFO_CTRL_TX_SIZE(3) | \
+ ADMA_CH_FIFO_CTRL_RX_SIZE(3))
+struct tegra_adma;
+
+/*
+ * struct tegra_adma_chip_data - Tegra chip specific data
+ * @nr_channels: Number of DMA channels available.
+ */
+struct tegra_adma_chip_data {
+ int nr_channels;
+};
+
+/*
+ * struct tegra_adma_chan_regs - Tegra ADMA channel registers
+ */
+struct tegra_adma_chan_regs {
+ unsigned int ctrl;
+ unsigned int config;
+ unsigned int src_addr;
+ unsigned int trg_addr;
+ unsigned int fifo_ctrl;
+ unsigned int tc;
+};
+
+/*
+ * struct tegra_adma_desc - Tegra ADMA descriptor to manage transfer requests.
+ */
+struct tegra_adma_desc {
+ struct virt_dma_desc vd;
+ struct tegra_adma_chan_regs ch_regs;
+ size_t buf_len;
+ size_t period_len;
+ size_t num_periods;
+};
+
+/*
+ * struct tegra_adma_chan - Tegra ADMA channel information
+ */
+struct tegra_adma_chan {
+ struct virt_dma_chan vc;
+ struct tegra_adma_desc *desc;
+ struct tegra_adma *tdma;
+ int irq;
+ void __iomem *chan_addr;
+
+ /* Slave channel configuration info */
+ struct dma_slave_config sconfig;
+ enum dma_transfer_direction sreq_dir;
+ unsigned int sreq_index;
+ bool sreq_reserved;
+
+ /* Transfer count and position info */
+ unsigned int tx_buf_count;
+ unsigned int tx_buf_pos;
+};
+
+/*
+ * struct tegra_adma - Tegra ADMA controller information
+ */
+struct tegra_adma {
+ struct dma_device dma_dev;
+ struct device *dev;
+ void __iomem *base_addr;
+ unsigned int nr_channels;
+ unsigned long rx_requests_reserved;
+ unsigned long tx_requests_reserved;
+
+ /* Used to store global command register state when suspending */
+ unsigned int global_cmd;
+
+ /* Last member of the structure */
+ struct tegra_adma_chan channels[0];
+};
+
+static inline void tdma_write(struct tegra_adma *tdma, u32 reg, u32 val)
+{
+ writel(val, tdma->base_addr + reg);
+}
+
+static inline u32 tdma_read(struct tegra_adma *tdma, u32 reg)
+{
+ return readl(tdma->base_addr + reg);
+}
+
+static inline void tdma_ch_write(struct tegra_adma_chan *tdc, u32 reg, u32 val)
+{
+ writel(val, tdc->chan_addr + reg);
+}
+
+static inline u32 tdma_ch_read(struct tegra_adma_chan *tdc, u32 reg)
+{
+ return readl(tdc->chan_addr + reg);
+}
+
+static inline struct tegra_adma_chan *to_tegra_adma_chan(struct dma_chan *dc)
+{
+ return container_of(dc, struct tegra_adma_chan, vc.chan);
+}
+
+static inline struct tegra_adma_desc *to_tegra_adma_desc(
+ struct dma_async_tx_descriptor *td)
+{
+ return container_of(td, struct tegra_adma_desc, vd.tx);
+}
+
+static inline struct device *tdc2dev(struct tegra_adma_chan *tdc)
+{
+ return tdc->tdma->dev;
+}
+
+static void tegra_adma_desc_free(struct virt_dma_desc *vd)
+{
+ kfree(container_of(vd, struct tegra_adma_desc, vd));
+}
+
+static int tegra_adma_slave_config(struct dma_chan *dc,
+ struct dma_slave_config *sconfig)
+{
+ struct tegra_adma_chan *tdc = to_tegra_adma_chan(dc);
+
+ memcpy(&tdc->sconfig, sconfig, sizeof(*sconfig));
+
+ return 0;
+}
+
+static int tegra_adma_init(struct tegra_adma *tdma)
+{
+ u32 status;
+ int ret;
+
+ /* Clear any interrupts */
+ tdma_write(tdma, ADMA_GLOBAL_INT_CLEAR, 0x1);
+
+ /* Assert soft reset */
+ tdma_write(tdma, ADMA_GLOBAL_SOFT_RESET, 0x1);
+
+ /* Wait for reset to clear */
+ ret = readx_poll_timeout(readl,
+ tdma->base_addr + ADMA_GLOBAL_SOFT_RESET,
+ status, status == 0, 20, 10000);
+ if (ret)
+ return ret;
+
+ /* Enable global ADMA registers */
+ tdma_write(tdma, ADMA_GLOBAL_CMD, 1);
+
+ return 0;
+}
+
+static int tegra_adma_request_alloc(struct tegra_adma_chan *tdc,
+ enum dma_transfer_direction direction)
+{
+ struct tegra_adma *tdma = tdc->tdma;
+ unsigned int sreq_index = tdc->sreq_index;
+
+ if (tdc->sreq_reserved)
+ return tdc->sreq_dir == direction ? 0 : -EINVAL;
+
+ switch (direction) {
+ case DMA_MEM_TO_DEV:
+ if (sreq_index > ADMA_CH_CTRL_TX_REQ_MAX) {
+ dev_err(tdma->dev, "invalid DMA request\n");
+ return -EINVAL;
+ }
+
+ if (test_and_set_bit(sreq_index, &tdma->tx_requests_reserved)) {
+ dev_err(tdma->dev, "DMA request reserved\n");
+ return -EINVAL;
+ }
+ break;
+
+ case DMA_DEV_TO_MEM:
+ if (sreq_index > ADMA_CH_CTRL_RX_REQ_MAX) {
+ dev_err(tdma->dev, "invalid DMA request\n");
+ return -EINVAL;
+ }
+
+ if (test_and_set_bit(sreq_index, &tdma->rx_requests_reserved)) {
+ dev_err(tdma->dev, "DMA request reserved\n");
+ return -EINVAL;
+ }
+ break;
+
+ default:
+ dev_WARN(tdma->dev, "channel %s has invalid transfer type\n",
+ dma_chan_name(&tdc->vc.chan));
+ return -EINVAL;
+ }
+
+ tdc->sreq_dir = direction;
+ tdc->sreq_reserved = true;
+
+ return 0;
+}
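To make the reservation scheme above concrete (numbers invented for illustration): TX and RX request lines live in two independent bitmaps, and test_and_set_bit() both tests and claims the bit atomically, so a second channel asking for an already-claimed line in the same direction is rejected:

	/* Illustrative only: two channels contending for TX request 2. */
	unsigned long tx_map = 0;

	if (!test_and_set_bit(2, &tx_map))
		pr_info("first claimant owns TX request 2\n");

	if (test_and_set_bit(2, &tx_map))
		pr_info("second claimant would get -EINVAL\n");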
+
+static void tegra_adma_request_free(struct tegra_adma_chan *tdc)
+{
+ struct tegra_adma *tdma = tdc->tdma;
+
+ if (!tdc->sreq_reserved)
+ return;
+
+ switch (tdc->sreq_dir) {
+ case DMA_MEM_TO_DEV:
+ clear_bit(tdc->sreq_index, &tdma->tx_requests_reserved);
+ break;
+
+ case DMA_DEV_TO_MEM:
+ clear_bit(tdc->sreq_index, &tdma->rx_requests_reserved);
+ break;
+
+ default:
+ dev_WARN(tdma->dev, "channel %s has invalid transfer type\n",
+ dma_chan_name(&tdc->vc.chan));
+ return;
+ }
+
+ tdc->sreq_reserved = false;
+}
+
+static u32 tegra_adma_irq_status(struct tegra_adma_chan *tdc)
+{
+ u32 status = tdma_ch_read(tdc, ADMA_CH_INT_STATUS);
+
+ return status & ADMA_CH_INT_STATUS_XFER_DONE;
+}
+
+static u32 tegra_adma_irq_clear(struct tegra_adma_chan *tdc)
+{
+ u32 status = tegra_adma_irq_status(tdc);
+
+ if (status)
+ tdma_ch_write(tdc, ADMA_CH_INT_CLEAR, status);
+
+ return status;
+}
+
+static void tegra_adma_stop(struct tegra_adma_chan *tdc)
+{
+ unsigned int status;
+
+ /* Disable ADMA */
+ tdma_ch_write(tdc, ADMA_CH_CMD, 0);
+
+ /* Clear interrupt status */
+ tegra_adma_irq_clear(tdc);
+
+ if (readx_poll_timeout_atomic(readl, tdc->chan_addr + ADMA_CH_STATUS,
+ status, !(status & ADMA_CH_STATUS_XFER_EN),
+ 20, 10000)) {
+ dev_err(tdc2dev(tdc), "unable to stop DMA channel\n");
+ return;
+ }
+
+ kfree(tdc->desc);
+ tdc->desc = NULL;
+}
+
+static void tegra_adma_start(struct tegra_adma_chan *tdc)
+{
+ struct virt_dma_desc *vd = vchan_next_desc(&tdc->vc);
+ struct tegra_adma_chan_regs *ch_regs;
+ struct tegra_adma_desc *desc;
+
+ if (!vd)
+ return;
+
+ list_del(&vd->node);
+
+ desc = to_tegra_adma_desc(&vd->tx);
+
+ if (!desc) {
+ dev_warn(tdc2dev(tdc), "unable to start DMA, no descriptor\n");
+ return;
+ }
+
+ ch_regs = &desc->ch_regs;
+
+ tdc->tx_buf_pos = 0;
+ tdc->tx_buf_count = 0;
+ tdma_ch_write(tdc, ADMA_CH_TC, ch_regs->tc);
+ tdma_ch_write(tdc, ADMA_CH_CTRL, ch_regs->ctrl);
+ tdma_ch_write(tdc, ADMA_CH_LOWER_SRC_ADDR, ch_regs->src_addr);
+ tdma_ch_write(tdc, ADMA_CH_LOWER_TRG_ADDR, ch_regs->trg_addr);
+ tdma_ch_write(tdc, ADMA_CH_FIFO_CTRL, ch_regs->fifo_ctrl);
+ tdma_ch_write(tdc, ADMA_CH_CONFIG, ch_regs->config);
+
+ /* Start ADMA */
+ tdma_ch_write(tdc, ADMA_CH_CMD, 1);
+
+ tdc->desc = desc;
+}
+
+static unsigned int tegra_adma_get_residue(struct tegra_adma_chan *tdc)
+{
+ struct tegra_adma_desc *desc = tdc->desc;
+ unsigned int max = ADMA_CH_XFER_STATUS_COUNT_MASK + 1;
+ unsigned int pos = tdma_ch_read(tdc, ADMA_CH_XFER_STATUS);
+ unsigned int periods_remaining;
+
+ /*
+ * Handle wrap around of buffer count register
+ */
+ if (pos < tdc->tx_buf_pos)
+ tdc->tx_buf_count += pos + (max - tdc->tx_buf_pos);
+ else
+ tdc->tx_buf_count += pos - tdc->tx_buf_pos;
+
+ periods_remaining = tdc->tx_buf_count % desc->num_periods;
+ tdc->tx_buf_pos = pos;
+
+ return desc->buf_len - (periods_remaining * desc->period_len);
+}
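A worked example of the wrap-around handling above, using invented register values: ADMA_CH_XFER_STATUS_COUNT_MASK is 0xffff, so the hardware buffer counter wraps at 0x10000. If the last sampled position was 0xfffe and the new read is 0x0003, five buffers completed since the last sample rather than a negative count:

	unsigned int max = ADMA_CH_XFER_STATUS_COUNT_MASK + 1;	/* 0x10000 */
	unsigned int prev = 0xfffe, pos = 0x0003, delta;

	if (pos < prev)
		delta = pos + (max - prev);	/* 3 + (0x10000 - 0xfffe) = 5 */
	else
		delta = pos - prev;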
+
+static irqreturn_t tegra_adma_isr(int irq, void *dev_id)
+{
+ struct tegra_adma_chan *tdc = dev_id;
+ unsigned long status;
+ unsigned long flags;
+
+ spin_lock_irqsave(&tdc->vc.lock, flags);
+
+ status = tegra_adma_irq_clear(tdc);
+ if (status == 0 || !tdc->desc) {
+ spin_unlock_irqrestore(&tdc->vc.lock, flags);
+ return IRQ_NONE;
+ }
+
+ vchan_cyclic_callback(&tdc->desc->vd);
+
+ spin_unlock_irqrestore(&tdc->vc.lock, flags);
+
+ return IRQ_HANDLED;
+}
+
+static void tegra_adma_issue_pending(struct dma_chan *dc)
+{
+ struct tegra_adma_chan *tdc = to_tegra_adma_chan(dc);
+ unsigned long flags;
+
+ spin_lock_irqsave(&tdc->vc.lock, flags);
+
+ if (vchan_issue_pending(&tdc->vc)) {
+ if (!tdc->desc)
+ tegra_adma_start(tdc);
+ }
+
+ spin_unlock_irqrestore(&tdc->vc.lock, flags);
+}
+
+static int tegra_adma_terminate_all(struct dma_chan *dc)
+{
+ struct tegra_adma_chan *tdc = to_tegra_adma_chan(dc);
+ unsigned long flags;
+ LIST_HEAD(head);
+
+ spin_lock_irqsave(&tdc->vc.lock, flags);
+
+ if (tdc->desc)
+ tegra_adma_stop(tdc);
+
+ tegra_adma_request_free(tdc);
+ vchan_get_all_descriptors(&tdc->vc, &head);
+ spin_unlock_irqrestore(&tdc->vc.lock, flags);
+ vchan_dma_desc_free_list(&tdc->vc, &head);
+
+ return 0;
+}
+
+static enum dma_status tegra_adma_tx_status(struct dma_chan *dc,
+ dma_cookie_t cookie,
+ struct dma_tx_state *txstate)
+{
+ struct tegra_adma_chan *tdc = to_tegra_adma_chan(dc);
+ struct tegra_adma_desc *desc;
+ struct virt_dma_desc *vd;
+ enum dma_status ret;
+ unsigned long flags;
+ unsigned int residual;
+
+ ret = dma_cookie_status(dc, cookie, txstate);
+ if (ret == DMA_COMPLETE || !txstate)
+ return ret;
+
+ spin_lock_irqsave(&tdc->vc.lock, flags);
+
+ vd = vchan_find_desc(&tdc->vc, cookie);
+ if (vd) {
+ desc = to_tegra_adma_desc(&vd->tx);
+ residual = desc->ch_regs.tc;
+ } else if (tdc->desc && tdc->desc->vd.tx.cookie == cookie) {
+ residual = tegra_adma_get_residue(tdc);
+ } else {
+ residual = 0;
+ }
+
+ spin_unlock_irqrestore(&tdc->vc.lock, flags);
+
+ dma_set_residue(txstate, residual);
+
+ return ret;
+}
+
+static int tegra_adma_set_xfer_params(struct tegra_adma_chan *tdc,
+ struct tegra_adma_desc *desc,
+ dma_addr_t buf_addr,
+ enum dma_transfer_direction direction)
+{
+ struct tegra_adma_chan_regs *ch_regs = &desc->ch_regs;
+ unsigned int burst_size, adma_dir;
+
+ if (desc->num_periods > ADMA_CH_CONFIG_MAX_BUFS)
+ return -EINVAL;
+
+ switch (direction) {
+ case DMA_MEM_TO_DEV:
+ adma_dir = ADMA_CH_CTRL_DIR_MEM2AHUB;
+ burst_size = fls(tdc->sconfig.dst_maxburst);
+ ch_regs->config = ADMA_CH_CONFIG_SRC_BUF(desc->num_periods - 1);
+ ch_regs->ctrl = ADMA_CH_CTRL_TX_REQ(tdc->sreq_index);
+ ch_regs->src_addr = buf_addr;
+ break;
+
+ case DMA_DEV_TO_MEM:
+ adma_dir = ADMA_CH_CTRL_DIR_AHUB2MEM;
+ burst_size = fls(tdc->sconfig.src_maxburst);
+ ch_regs->config = ADMA_CH_CONFIG_TRG_BUF(desc->num_periods - 1);
+ ch_regs->ctrl = ADMA_CH_CTRL_RX_REQ(tdc->sreq_index);
+ ch_regs->trg_addr = buf_addr;
+ break;
+
+ default:
+ dev_err(tdc2dev(tdc), "DMA direction is not supported\n");
+ return -EINVAL;
+ }
+
+ if (!burst_size || burst_size > ADMA_CH_CONFIG_BURST_16)
+ burst_size = ADMA_CH_CONFIG_BURST_16;
+
+ ch_regs->ctrl |= ADMA_CH_CTRL_DIR(adma_dir) |
+ ADMA_CH_CTRL_MODE_CONTINUOUS |
+ ADMA_CH_CTRL_FLOWCTRL_EN;
+ ch_regs->config |= ADMA_CH_CONFIG_BURST_SIZE(burst_size);
+ ch_regs->config |= ADMA_CH_CONFIG_WEIGHT_FOR_WRR(1);
+ ch_regs->fifo_ctrl = ADMA_CH_FIFO_CTRL_DEFAULT;
+ ch_regs->tc = desc->period_len & ADMA_CH_TC_COUNT_MASK;
+
+ return tegra_adma_request_alloc(tdc, direction);
+}
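To illustrate the burst-size derivation above with invented values: fls() returns the index of the highest set bit, so a dst_maxburst of 8 words maps to 4, a dst_maxburst of 16 maps exactly to ADMA_CH_CONFIG_BURST_16 (5), and a value of 0, or one large enough to exceed 5, is clamped back to 5:

	unsigned int burst_size;

	burst_size = fls(8);	/* 4: used as-is */
	burst_size = fls(64);	/* 7: out of range ... */
	if (!burst_size || burst_size > ADMA_CH_CONFIG_BURST_16)
		burst_size = ADMA_CH_CONFIG_BURST_16;	/* ... clamped to 5 */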
+
+static struct dma_async_tx_descriptor *tegra_adma_prep_dma_cyclic(
+ struct dma_chan *dc, dma_addr_t buf_addr, size_t buf_len,
+ size_t period_len, enum dma_transfer_direction direction,
+ unsigned long flags)
+{
+ struct tegra_adma_chan *tdc = to_tegra_adma_chan(dc);
+ struct tegra_adma_desc *desc = NULL;
+
+ if (!buf_len || !period_len || period_len > ADMA_CH_TC_COUNT_MASK) {
+ dev_err(tdc2dev(tdc), "invalid buffer/period len\n");
+ return NULL;
+ }
+
+ if (buf_len % period_len) {
+ dev_err(tdc2dev(tdc), "buf_len not a multiple of period_len\n");
+ return NULL;
+ }
+
+ if (!IS_ALIGNED(buf_addr, 4)) {
+ dev_err(tdc2dev(tdc), "invalid buffer alignment\n");
+ return NULL;
+ }
+
+ desc = kzalloc(sizeof(*desc), GFP_NOWAIT);
+ if (!desc)
+ return NULL;
+
+ desc->buf_len = buf_len;
+ desc->period_len = period_len;
+ desc->num_periods = buf_len / period_len;
+
+ if (tegra_adma_set_xfer_params(tdc, desc, buf_addr, direction)) {
+ kfree(desc);
+ return NULL;
+ }
+
+ return vchan_tx_prep(&tdc->vc, &desc->vd, flags);
+}
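For context, a client driver (typically audio) would reach this callback through the generic dmaengine API roughly as follows; the channel name, buffer address, and sizes are placeholders, and the slave config is assumed to be filled in by the caller:

	struct dma_chan *chan;
	struct dma_async_tx_descriptor *desc;

	chan = dma_request_chan(dev, "tx");	/* name comes from DT; placeholder here */
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	dmaengine_slave_config(chan, &cfg);	/* dst address/burst set up by the caller */

	desc = dmaengine_prep_dma_cyclic(chan, buf_phys, 4 * SZ_4K, SZ_4K,
					 DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT);
	if (!desc)
		return -EINVAL;

	dmaengine_submit(desc);
	dma_async_issue_pending(chan);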
+
+static int tegra_adma_alloc_chan_resources(struct dma_chan *dc)
+{
+ struct tegra_adma_chan *tdc = to_tegra_adma_chan(dc);
+ int ret;
+
+ ret = request_irq(tdc->irq, tegra_adma_isr, 0, dma_chan_name(dc), tdc);
+ if (ret) {
+ dev_err(tdc2dev(tdc), "failed to get interrupt for %s\n",
+ dma_chan_name(dc));
+ return ret;
+ }
+
+ ret = pm_runtime_get_sync(tdc2dev(tdc));
+ if (ret < 0) {
+ free_irq(tdc->irq, tdc);
+ return ret;
+ }
+
+ dma_cookie_init(&tdc->vc.chan);
+
+ return 0;
+}
+
+static void tegra_adma_free_chan_resources(struct dma_chan *dc)
+{
+ struct tegra_adma_chan *tdc = to_tegra_adma_chan(dc);
+
+ tegra_adma_terminate_all(dc);
+ vchan_free_chan_resources(&tdc->vc);
+ tasklet_kill(&tdc->vc.task);
+ free_irq(tdc->irq, tdc);
+ pm_runtime_put(tdc2dev(tdc));
+
+ tdc->sreq_index = 0;
+ tdc->sreq_dir = DMA_TRANS_NONE;
+}
+
+static struct dma_chan *tegra_dma_of_xlate(struct of_phandle_args *dma_spec,
+ struct of_dma *ofdma)
+{
+ struct tegra_adma *tdma = ofdma->of_dma_data;
+ struct tegra_adma_chan *tdc;
+ struct dma_chan *chan;
+ unsigned int sreq_index;
+
+ if (dma_spec->args_count != 1)
+ return NULL;
+
+ sreq_index = dma_spec->args[0];
+
+ if (sreq_index == 0) {
+ dev_err(tdma->dev, "DMA request must not be 0\n");
+ return NULL;
+ }
+
+ chan = dma_get_any_slave_channel(&tdma->dma_dev);
+ if (!chan)
+ return NULL;
+
+ tdc = to_tegra_adma_chan(chan);
+ tdc->sreq_index = sreq_index;
+
+ return chan;
+}
+
+static int tegra_adma_runtime_suspend(struct device *dev)
+{
+ struct tegra_adma *tdma = dev_get_drvdata(dev);
+
+ tdma->global_cmd = tdma_read(tdma, ADMA_GLOBAL_CMD);
+
+ return pm_clk_suspend(dev);
+}
+
+static int tegra_adma_runtime_resume(struct device *dev)
+{
+ struct tegra_adma *tdma = dev_get_drvdata(dev);
+ int ret;
+
+ ret = pm_clk_resume(dev);
+ if (ret)
+ return ret;
+
+ tdma_write(tdma, ADMA_GLOBAL_CMD, tdma->global_cmd);
+
+ return 0;
+}
+
+static const struct tegra_adma_chip_data tegra210_chip_data = {
+ .nr_channels = 22,
+};
+
+static const struct of_device_id tegra_adma_of_match[] = {
+ { .compatible = "nvidia,tegra210-adma", .data = &tegra210_chip_data },
+ { },
+};
+MODULE_DEVICE_TABLE(of, tegra_adma_of_match);
+
+static int tegra_adma_probe(struct platform_device *pdev)
+{
+ const struct tegra_adma_chip_data *cdata;
+ struct tegra_adma *tdma;
+ struct resource *res;
+ struct clk *clk;
+ int ret, i;
+
+ cdata = of_device_get_match_data(&pdev->dev);
+ if (!cdata) {
+ dev_err(&pdev->dev, "device match data not found\n");
+ return -ENODEV;
+ }
+
+ tdma = devm_kzalloc(&pdev->dev, sizeof(*tdma) + cdata->nr_channels *
+ sizeof(struct tegra_adma_chan), GFP_KERNEL);
+ if (!tdma)
+ return -ENOMEM;
+
+ tdma->dev = &pdev->dev;
+ tdma->nr_channels = cdata->nr_channels;
+ platform_set_drvdata(pdev, tdma);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ tdma->base_addr = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(tdma->base_addr))
+ return PTR_ERR(tdma->base_addr);
+
+ ret = pm_clk_create(&pdev->dev);
+ if (ret)
+ return ret;
+
+ clk = clk_get(&pdev->dev, "d_audio");
+ if (IS_ERR(clk)) {
+ dev_err(&pdev->dev, "ADMA clock not found\n");
+ ret = PTR_ERR(clk);
+ goto clk_destroy;
+ }
+
+ ret = pm_clk_add_clk(&pdev->dev, clk);
+ if (ret) {
+ clk_put(clk);
+ goto clk_destroy;
+ }
+
+ pm_runtime_enable(&pdev->dev);
+
+ ret = pm_runtime_get_sync(&pdev->dev);
+ if (ret < 0)
+ goto rpm_disable;
+
+ ret = tegra_adma_init(tdma);
+ if (ret)
+ goto rpm_put;
+
+ INIT_LIST_HEAD(&tdma->dma_dev.channels);
+ for (i = 0; i < tdma->nr_channels; i++) {
+ struct tegra_adma_chan *tdc = &tdma->channels[i];
+
+ tdc->chan_addr = tdma->base_addr + ADMA_CH_REG_OFFSET(i);
+
+ tdc->irq = of_irq_get(pdev->dev.of_node, i);
+ if (tdc->irq < 0) {
+ ret = tdc->irq;
+ goto irq_dispose;
+ }
+
+ vchan_init(&tdc->vc, &tdma->dma_dev);
+ tdc->vc.desc_free = tegra_adma_desc_free;
+ tdc->tdma = tdma;
+ }
+
+ dma_cap_set(DMA_SLAVE, tdma->dma_dev.cap_mask);
+ dma_cap_set(DMA_PRIVATE, tdma->dma_dev.cap_mask);
+ dma_cap_set(DMA_CYCLIC, tdma->dma_dev.cap_mask);
+
+ tdma->dma_dev.dev = &pdev->dev;
+ tdma->dma_dev.device_alloc_chan_resources =
+ tegra_adma_alloc_chan_resources;
+ tdma->dma_dev.device_free_chan_resources =
+ tegra_adma_free_chan_resources;
+ tdma->dma_dev.device_issue_pending = tegra_adma_issue_pending;
+ tdma->dma_dev.device_prep_dma_cyclic = tegra_adma_prep_dma_cyclic;
+ tdma->dma_dev.device_config = tegra_adma_slave_config;
+ tdma->dma_dev.device_tx_status = tegra_adma_tx_status;
+ tdma->dma_dev.device_terminate_all = tegra_adma_terminate_all;
+ tdma->dma_dev.src_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+ tdma->dma_dev.dst_addr_widths = BIT(DMA_SLAVE_BUSWIDTH_4_BYTES);
+ tdma->dma_dev.directions = BIT(DMA_DEV_TO_MEM) | BIT(DMA_MEM_TO_DEV);
+ tdma->dma_dev.residue_granularity = DMA_RESIDUE_GRANULARITY_SEGMENT;
+
+ ret = dma_async_device_register(&tdma->dma_dev);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "ADMA registration failed: %d\n", ret);
+ goto irq_dispose;
+ }
+
+ ret = of_dma_controller_register(pdev->dev.of_node,
+ tegra_dma_of_xlate, tdma);
+ if (ret < 0) {
+ dev_err(&pdev->dev, "ADMA OF registration failed %d\n", ret);
+ goto dma_remove;
+ }
+
+ pm_runtime_put(&pdev->dev);
+
+ dev_info(&pdev->dev, "Tegra210 ADMA driver registered %d channels\n",
+ tdma->nr_channels);
+
+ return 0;
+
+dma_remove:
+ dma_async_device_unregister(&tdma->dma_dev);
+irq_dispose:
+ while (--i >= 0)
+ irq_dispose_mapping(tdma->channels[i].irq);
+rpm_put:
+ pm_runtime_put_sync(&pdev->dev);
+rpm_disable:
+ pm_runtime_disable(&pdev->dev);
+clk_destroy:
+ pm_clk_destroy(&pdev->dev);
+
+ return ret;
+}
+
+static int tegra_adma_remove(struct platform_device *pdev)
+{
+ struct tegra_adma *tdma = platform_get_drvdata(pdev);
+ int i;
+
+ dma_async_device_unregister(&tdma->dma_dev);
+
+ for (i = 0; i < tdma->nr_channels; ++i)
+ irq_dispose_mapping(tdma->channels[i].irq);
+
+ pm_runtime_put_sync(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
+ pm_clk_destroy(&pdev->dev);
+
+ return 0;
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int tegra_adma_pm_suspend(struct device *dev)
+{
+ return pm_runtime_suspended(dev) == false;
+}
+#endif
+
+static const struct dev_pm_ops tegra_adma_dev_pm_ops = {
+ SET_RUNTIME_PM_OPS(tegra_adma_runtime_suspend,
+ tegra_adma_runtime_resume, NULL)
+ SET_SYSTEM_SLEEP_PM_OPS(tegra_adma_pm_suspend, NULL)
+};
+
+static struct platform_driver tegra_admac_driver = {
+ .driver = {
+ .name = "tegra-adma",
+ .pm = &tegra_adma_dev_pm_ops,
+ .of_match_table = tegra_adma_of_match,
+ },
+ .probe = tegra_adma_probe,
+ .remove = tegra_adma_remove,
+};
+
+module_platform_driver(tegra_admac_driver);
+
+MODULE_ALIAS("platform:tegra210-adma");
+MODULE_DESCRIPTION("NVIDIA Tegra ADMA driver");
+MODULE_AUTHOR("Dara Ramesh <dramesh@nvidia.com>");
+MODULE_AUTHOR("Jon Hunter <jonathanh@nvidia.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/dma/xilinx/xilinx_vdma.c b/drivers/dma/xilinx/xilinx_vdma.c
index ef67f278e076..df9118540b91 100644
--- a/drivers/dma/xilinx/xilinx_vdma.c
+++ b/drivers/dma/xilinx/xilinx_vdma.c
@@ -16,6 +16,15 @@
* video device (S2MM). Initialization, status, interrupt and management
* registers are accessed through an AXI4-Lite slave interface.
*
+ * The AXI Direct Memory Access (AXI DMA) core is a soft Xilinx IP core that
+ * provides high-bandwidth one-dimensional direct memory access between memory
+ * and AXI4-Stream target peripherals. It supports one receive and one
+ * transmit channel, both of them optional at synthesis time.
+ *
+ * The AXI CDMA is a soft IP that provides high-bandwidth Direct Memory
+ * Access (DMA) between a memory-mapped source address and a memory-mapped
+ * destination address.
+ *
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation, either version 2 of the License, or
@@ -35,116 +44,138 @@
#include <linux/of_platform.h>
#include <linux/of_irq.h>
#include <linux/slab.h>
+#include <linux/clk.h>
#include "../dmaengine.h"
/* Register/Descriptor Offsets */
-#define XILINX_VDMA_MM2S_CTRL_OFFSET 0x0000
-#define XILINX_VDMA_S2MM_CTRL_OFFSET 0x0030
+#define XILINX_DMA_MM2S_CTRL_OFFSET 0x0000
+#define XILINX_DMA_S2MM_CTRL_OFFSET 0x0030
#define XILINX_VDMA_MM2S_DESC_OFFSET 0x0050
#define XILINX_VDMA_S2MM_DESC_OFFSET 0x00a0
/* Control Registers */
-#define XILINX_VDMA_REG_DMACR 0x0000
-#define XILINX_VDMA_DMACR_DELAY_MAX 0xff
-#define XILINX_VDMA_DMACR_DELAY_SHIFT 24
-#define XILINX_VDMA_DMACR_FRAME_COUNT_MAX 0xff
-#define XILINX_VDMA_DMACR_FRAME_COUNT_SHIFT 16
-#define XILINX_VDMA_DMACR_ERR_IRQ BIT(14)
-#define XILINX_VDMA_DMACR_DLY_CNT_IRQ BIT(13)
-#define XILINX_VDMA_DMACR_FRM_CNT_IRQ BIT(12)
-#define XILINX_VDMA_DMACR_MASTER_SHIFT 8
-#define XILINX_VDMA_DMACR_FSYNCSRC_SHIFT 5
-#define XILINX_VDMA_DMACR_FRAMECNT_EN BIT(4)
-#define XILINX_VDMA_DMACR_GENLOCK_EN BIT(3)
-#define XILINX_VDMA_DMACR_RESET BIT(2)
-#define XILINX_VDMA_DMACR_CIRC_EN BIT(1)
-#define XILINX_VDMA_DMACR_RUNSTOP BIT(0)
-#define XILINX_VDMA_DMACR_FSYNCSRC_MASK GENMASK(6, 5)
-
-#define XILINX_VDMA_REG_DMASR 0x0004
-#define XILINX_VDMA_DMASR_EOL_LATE_ERR BIT(15)
-#define XILINX_VDMA_DMASR_ERR_IRQ BIT(14)
-#define XILINX_VDMA_DMASR_DLY_CNT_IRQ BIT(13)
-#define XILINX_VDMA_DMASR_FRM_CNT_IRQ BIT(12)
-#define XILINX_VDMA_DMASR_SOF_LATE_ERR BIT(11)
-#define XILINX_VDMA_DMASR_SG_DEC_ERR BIT(10)
-#define XILINX_VDMA_DMASR_SG_SLV_ERR BIT(9)
-#define XILINX_VDMA_DMASR_EOF_EARLY_ERR BIT(8)
-#define XILINX_VDMA_DMASR_SOF_EARLY_ERR BIT(7)
-#define XILINX_VDMA_DMASR_DMA_DEC_ERR BIT(6)
-#define XILINX_VDMA_DMASR_DMA_SLAVE_ERR BIT(5)
-#define XILINX_VDMA_DMASR_DMA_INT_ERR BIT(4)
-#define XILINX_VDMA_DMASR_IDLE BIT(1)
-#define XILINX_VDMA_DMASR_HALTED BIT(0)
-#define XILINX_VDMA_DMASR_DELAY_MASK GENMASK(31, 24)
-#define XILINX_VDMA_DMASR_FRAME_COUNT_MASK GENMASK(23, 16)
-
-#define XILINX_VDMA_REG_CURDESC 0x0008
-#define XILINX_VDMA_REG_TAILDESC 0x0010
-#define XILINX_VDMA_REG_REG_INDEX 0x0014
-#define XILINX_VDMA_REG_FRMSTORE 0x0018
-#define XILINX_VDMA_REG_THRESHOLD 0x001c
-#define XILINX_VDMA_REG_FRMPTR_STS 0x0024
-#define XILINX_VDMA_REG_PARK_PTR 0x0028
-#define XILINX_VDMA_PARK_PTR_WR_REF_SHIFT 8
-#define XILINX_VDMA_PARK_PTR_RD_REF_SHIFT 0
-#define XILINX_VDMA_REG_VDMA_VERSION 0x002c
+#define XILINX_DMA_REG_DMACR 0x0000
+#define XILINX_DMA_DMACR_DELAY_MAX 0xff
+#define XILINX_DMA_DMACR_DELAY_SHIFT 24
+#define XILINX_DMA_DMACR_FRAME_COUNT_MAX 0xff
+#define XILINX_DMA_DMACR_FRAME_COUNT_SHIFT 16
+#define XILINX_DMA_DMACR_ERR_IRQ BIT(14)
+#define XILINX_DMA_DMACR_DLY_CNT_IRQ BIT(13)
+#define XILINX_DMA_DMACR_FRM_CNT_IRQ BIT(12)
+#define XILINX_DMA_DMACR_MASTER_SHIFT 8
+#define XILINX_DMA_DMACR_FSYNCSRC_SHIFT 5
+#define XILINX_DMA_DMACR_FRAMECNT_EN BIT(4)
+#define XILINX_DMA_DMACR_GENLOCK_EN BIT(3)
+#define XILINX_DMA_DMACR_RESET BIT(2)
+#define XILINX_DMA_DMACR_CIRC_EN BIT(1)
+#define XILINX_DMA_DMACR_RUNSTOP BIT(0)
+#define XILINX_DMA_DMACR_FSYNCSRC_MASK GENMASK(6, 5)
+
+#define XILINX_DMA_REG_DMASR 0x0004
+#define XILINX_DMA_DMASR_EOL_LATE_ERR BIT(15)
+#define XILINX_DMA_DMASR_ERR_IRQ BIT(14)
+#define XILINX_DMA_DMASR_DLY_CNT_IRQ BIT(13)
+#define XILINX_DMA_DMASR_FRM_CNT_IRQ BIT(12)
+#define XILINX_DMA_DMASR_SOF_LATE_ERR BIT(11)
+#define XILINX_DMA_DMASR_SG_DEC_ERR BIT(10)
+#define XILINX_DMA_DMASR_SG_SLV_ERR BIT(9)
+#define XILINX_DMA_DMASR_EOF_EARLY_ERR BIT(8)
+#define XILINX_DMA_DMASR_SOF_EARLY_ERR BIT(7)
+#define XILINX_DMA_DMASR_DMA_DEC_ERR BIT(6)
+#define XILINX_DMA_DMASR_DMA_SLAVE_ERR BIT(5)
+#define XILINX_DMA_DMASR_DMA_INT_ERR BIT(4)
+#define XILINX_DMA_DMASR_IDLE BIT(1)
+#define XILINX_DMA_DMASR_HALTED BIT(0)
+#define XILINX_DMA_DMASR_DELAY_MASK GENMASK(31, 24)
+#define XILINX_DMA_DMASR_FRAME_COUNT_MASK GENMASK(23, 16)
+
+#define XILINX_DMA_REG_CURDESC 0x0008
+#define XILINX_DMA_REG_TAILDESC 0x0010
+#define XILINX_DMA_REG_REG_INDEX 0x0014
+#define XILINX_DMA_REG_FRMSTORE 0x0018
+#define XILINX_DMA_REG_THRESHOLD 0x001c
+#define XILINX_DMA_REG_FRMPTR_STS 0x0024
+#define XILINX_DMA_REG_PARK_PTR 0x0028
+#define XILINX_DMA_PARK_PTR_WR_REF_SHIFT 8
+#define XILINX_DMA_PARK_PTR_RD_REF_SHIFT 0
+#define XILINX_DMA_REG_VDMA_VERSION 0x002c
/* Register Direct Mode Registers */
-#define XILINX_VDMA_REG_VSIZE 0x0000
-#define XILINX_VDMA_REG_HSIZE 0x0004
+#define XILINX_DMA_REG_VSIZE 0x0000
+#define XILINX_DMA_REG_HSIZE 0x0004
-#define XILINX_VDMA_REG_FRMDLY_STRIDE 0x0008
-#define XILINX_VDMA_FRMDLY_STRIDE_FRMDLY_SHIFT 24
-#define XILINX_VDMA_FRMDLY_STRIDE_STRIDE_SHIFT 0
+#define XILINX_DMA_REG_FRMDLY_STRIDE 0x0008
+#define XILINX_DMA_FRMDLY_STRIDE_FRMDLY_SHIFT 24
+#define XILINX_DMA_FRMDLY_STRIDE_STRIDE_SHIFT 0
#define XILINX_VDMA_REG_START_ADDRESS(n) (0x000c + 4 * (n))
+#define XILINX_VDMA_REG_START_ADDRESS_64(n) (0x000c + 8 * (n))
/* HW specific definitions */
-#define XILINX_VDMA_MAX_CHANS_PER_DEVICE 0x2
-
-#define XILINX_VDMA_DMAXR_ALL_IRQ_MASK \
- (XILINX_VDMA_DMASR_FRM_CNT_IRQ | \
- XILINX_VDMA_DMASR_DLY_CNT_IRQ | \
- XILINX_VDMA_DMASR_ERR_IRQ)
-
-#define XILINX_VDMA_DMASR_ALL_ERR_MASK \
- (XILINX_VDMA_DMASR_EOL_LATE_ERR | \
- XILINX_VDMA_DMASR_SOF_LATE_ERR | \
- XILINX_VDMA_DMASR_SG_DEC_ERR | \
- XILINX_VDMA_DMASR_SG_SLV_ERR | \
- XILINX_VDMA_DMASR_EOF_EARLY_ERR | \
- XILINX_VDMA_DMASR_SOF_EARLY_ERR | \
- XILINX_VDMA_DMASR_DMA_DEC_ERR | \
- XILINX_VDMA_DMASR_DMA_SLAVE_ERR | \
- XILINX_VDMA_DMASR_DMA_INT_ERR)
+#define XILINX_DMA_MAX_CHANS_PER_DEVICE 0x2
+
+#define XILINX_DMA_DMAXR_ALL_IRQ_MASK \
+ (XILINX_DMA_DMASR_FRM_CNT_IRQ | \
+ XILINX_DMA_DMASR_DLY_CNT_IRQ | \
+ XILINX_DMA_DMASR_ERR_IRQ)
+
+#define XILINX_DMA_DMASR_ALL_ERR_MASK \
+ (XILINX_DMA_DMASR_EOL_LATE_ERR | \
+ XILINX_DMA_DMASR_SOF_LATE_ERR | \
+ XILINX_DMA_DMASR_SG_DEC_ERR | \
+ XILINX_DMA_DMASR_SG_SLV_ERR | \
+ XILINX_DMA_DMASR_EOF_EARLY_ERR | \
+ XILINX_DMA_DMASR_SOF_EARLY_ERR | \
+ XILINX_DMA_DMASR_DMA_DEC_ERR | \
+ XILINX_DMA_DMASR_DMA_SLAVE_ERR | \
+ XILINX_DMA_DMASR_DMA_INT_ERR)
/*
* Recoverable errors are DMA Internal error, SOF Early, EOF Early
* and SOF Late. They are only recoverable when C_FLUSH_ON_FSYNC
* is enabled in the h/w system.
*/
-#define XILINX_VDMA_DMASR_ERR_RECOVER_MASK \
- (XILINX_VDMA_DMASR_SOF_LATE_ERR | \
- XILINX_VDMA_DMASR_EOF_EARLY_ERR | \
- XILINX_VDMA_DMASR_SOF_EARLY_ERR | \
- XILINX_VDMA_DMASR_DMA_INT_ERR)
+#define XILINX_DMA_DMASR_ERR_RECOVER_MASK \
+ (XILINX_DMA_DMASR_SOF_LATE_ERR | \
+ XILINX_DMA_DMASR_EOF_EARLY_ERR | \
+ XILINX_DMA_DMASR_SOF_EARLY_ERR | \
+ XILINX_DMA_DMASR_DMA_INT_ERR)
/* Axi VDMA Flush on Fsync bits */
-#define XILINX_VDMA_FLUSH_S2MM 3
-#define XILINX_VDMA_FLUSH_MM2S 2
-#define XILINX_VDMA_FLUSH_BOTH 1
+#define XILINX_DMA_FLUSH_S2MM 3
+#define XILINX_DMA_FLUSH_MM2S 2
+#define XILINX_DMA_FLUSH_BOTH 1
/* Delay loop counter to prevent hardware failure */
-#define XILINX_VDMA_LOOP_COUNT 1000000
+#define XILINX_DMA_LOOP_COUNT 1000000
+
+/* AXI DMA Specific Registers/Offsets */
+#define XILINX_DMA_REG_SRCDSTADDR 0x18
+#define XILINX_DMA_REG_BTT 0x28
+
+/* AXI DMA Specific Masks/Bit fields */
+#define XILINX_DMA_MAX_TRANS_LEN GENMASK(22, 0)
+#define XILINX_DMA_CR_COALESCE_MAX GENMASK(23, 16)
+#define XILINX_DMA_CR_COALESCE_SHIFT 16
+#define XILINX_DMA_BD_SOP BIT(27)
+#define XILINX_DMA_BD_EOP BIT(26)
+#define XILINX_DMA_COALESCE_MAX 255
+#define XILINX_DMA_NUM_APP_WORDS 5
+
+/* AXI CDMA Specific Registers/Offsets */
+#define XILINX_CDMA_REG_SRCADDR 0x18
+#define XILINX_CDMA_REG_DSTADDR 0x20
+
+/* AXI CDMA Specific Masks */
+#define XILINX_CDMA_CR_SGMODE BIT(3)
/**
* struct xilinx_vdma_desc_hw - Hardware Descriptor
* @next_desc: Next Descriptor Pointer @0x00
* @pad1: Reserved @0x04
* @buf_addr: Buffer address @0x08
- * @pad2: Reserved @0x0C
+ * @buf_addr_msb: MSB of Buffer address @0x0C
* @vsize: Vertical Size @0x10
* @hsize: Horizontal Size @0x14
* @stride: Number of bytes between the first
@@ -154,13 +185,59 @@ struct xilinx_vdma_desc_hw {
u32 next_desc;
u32 pad1;
u32 buf_addr;
- u32 pad2;
+ u32 buf_addr_msb;
u32 vsize;
u32 hsize;
u32 stride;
} __aligned(64);
/**
+ * struct xilinx_axidma_desc_hw - Hardware Descriptor for AXI DMA
+ * @next_desc: Next Descriptor Pointer @0x00
+ * @pad1: Reserved @0x04
+ * @buf_addr: Buffer address @0x08
+ * @pad2: Reserved @0x0C
+ * @pad3: Reserved @0x10
+ * @pad4: Reserved @0x14
+ * @control: Control field @0x18
+ * @status: Status field @0x1C
+ * @app: APP Fields @0x20 - 0x30
+ */
+struct xilinx_axidma_desc_hw {
+ u32 next_desc;
+ u32 pad1;
+ u32 buf_addr;
+ u32 pad2;
+ u32 pad3;
+ u32 pad4;
+ u32 control;
+ u32 status;
+ u32 app[XILINX_DMA_NUM_APP_WORDS];
+} __aligned(64);
+
+/**
+ * struct xilinx_cdma_desc_hw - Hardware Descriptor
+ * @next_desc: Next Descriptor Pointer @0x00
+ * @pad1: Reserved @0x04
+ * @src_addr: Source address @0x08
+ * @pad2: Reserved @0x0C
+ * @dest_addr: Destination address @0x10
+ * @pad3: Reserved @0x14
+ * @control: Control field @0x18
+ * @status: Status field @0x1C
+ */
+struct xilinx_cdma_desc_hw {
+ u32 next_desc;
+ u32 pad1;
+ u32 src_addr;
+ u32 pad2;
+ u32 dest_addr;
+ u32 pad3;
+ u32 control;
+ u32 status;
+} __aligned(64);
+
+/**
* struct xilinx_vdma_tx_segment - Descriptor segment
* @hw: Hardware descriptor
* @node: Node in the descriptor segments list
@@ -173,19 +250,43 @@ struct xilinx_vdma_tx_segment {
} __aligned(64);
/**
- * struct xilinx_vdma_tx_descriptor - Per Transaction structure
+ * struct xilinx_axidma_tx_segment - Descriptor segment
+ * @hw: Hardware descriptor
+ * @node: Node in the descriptor segments list
+ * @phys: Physical address of segment
+ */
+struct xilinx_axidma_tx_segment {
+ struct xilinx_axidma_desc_hw hw;
+ struct list_head node;
+ dma_addr_t phys;
+} __aligned(64);
+
+/**
+ * struct xilinx_cdma_tx_segment - Descriptor segment
+ * @hw: Hardware descriptor
+ * @node: Node in the descriptor segments list
+ * @phys: Physical address of segment
+ */
+struct xilinx_cdma_tx_segment {
+ struct xilinx_cdma_desc_hw hw;
+ struct list_head node;
+ dma_addr_t phys;
+} __aligned(64);
+
+/**
+ * struct xilinx_dma_tx_descriptor - Per Transaction structure
* @async_tx: Async transaction descriptor
* @segments: TX segments list
* @node: Node in the channel descriptors list
*/
-struct xilinx_vdma_tx_descriptor {
+struct xilinx_dma_tx_descriptor {
struct dma_async_tx_descriptor async_tx;
struct list_head segments;
struct list_head node;
};
/**
- * struct xilinx_vdma_chan - Driver specific VDMA channel structure
+ * struct xilinx_dma_chan - Driver specific DMA channel structure
* @xdev: Driver specific device structure
* @ctrl_offset: Control registers offset
* @desc_offset: TX descriptor registers offset
@@ -207,9 +308,14 @@ struct xilinx_vdma_tx_descriptor {
* @config: Device configuration info
* @flush_on_fsync: Flush on Frame sync
* @desc_pendingcount: Descriptor pending count
+ * @ext_addr: Indicates whether the DMA channel supports 64-bit addressing
+ * @desc_submitcount: Descriptor h/w submitted count
+ * @residue: Residue for AXI DMA
+ * @seg_v: Statically allocated segments base
+ * @start_transfer: IP-specific callback that starts the next transfer
*/
-struct xilinx_vdma_chan {
- struct xilinx_vdma_device *xdev;
+struct xilinx_dma_chan {
+ struct xilinx_dma_device *xdev;
u32 ctrl_offset;
u32 desc_offset;
spinlock_t lock;
@@ -230,73 +336,122 @@ struct xilinx_vdma_chan {
struct xilinx_vdma_config config;
bool flush_on_fsync;
u32 desc_pendingcount;
+ bool ext_addr;
+ u32 desc_submitcount;
+ u32 residue;
+ struct xilinx_axidma_tx_segment *seg_v;
+ void (*start_transfer)(struct xilinx_dma_chan *chan);
+};
+
+struct xilinx_dma_config {
+ enum xdma_ip_type dmatype;
+ int (*clk_init)(struct platform_device *pdev, struct clk **axi_clk,
+ struct clk **tx_clk, struct clk **txs_clk,
+ struct clk **rx_clk, struct clk **rxs_clk);
};
/**
- * struct xilinx_vdma_device - VDMA device structure
+ * struct xilinx_dma_device - DMA device structure
* @regs: I/O mapped base address
* @dev: Device Structure
* @common: DMA device structure
- * @chan: Driver specific VDMA channel
+ * @chan: Driver specific DMA channel
* @has_sg: Specifies whether Scatter-Gather is present or not
* @flush_on_fsync: Flush on frame sync
+ * @ext_addr: Indicates whether the DMA device supports 64-bit addressing
+ * @pdev: Platform device structure pointer
+ * @dma_config: DMA config structure
+ * @axi_clk: DMA AXI4-Lite interface clock
+ * @tx_clk: DMA mm2s clock
+ * @txs_clk: DMA mm2s stream clock
+ * @rx_clk: DMA s2mm clock
+ * @rxs_clk: DMA s2mm stream clock
*/
-struct xilinx_vdma_device {
+struct xilinx_dma_device {
void __iomem *regs;
struct device *dev;
struct dma_device common;
- struct xilinx_vdma_chan *chan[XILINX_VDMA_MAX_CHANS_PER_DEVICE];
+ struct xilinx_dma_chan *chan[XILINX_DMA_MAX_CHANS_PER_DEVICE];
bool has_sg;
u32 flush_on_fsync;
+ bool ext_addr;
+ struct platform_device *pdev;
+ const struct xilinx_dma_config *dma_config;
+ struct clk *axi_clk;
+ struct clk *tx_clk;
+ struct clk *txs_clk;
+ struct clk *rx_clk;
+ struct clk *rxs_clk;
};
/* Macros */
#define to_xilinx_chan(chan) \
- container_of(chan, struct xilinx_vdma_chan, common)
-#define to_vdma_tx_descriptor(tx) \
- container_of(tx, struct xilinx_vdma_tx_descriptor, async_tx)
-#define xilinx_vdma_poll_timeout(chan, reg, val, cond, delay_us, timeout_us) \
+ container_of(chan, struct xilinx_dma_chan, common)
+#define to_dma_tx_descriptor(tx) \
+ container_of(tx, struct xilinx_dma_tx_descriptor, async_tx)
+#define xilinx_dma_poll_timeout(chan, reg, val, cond, delay_us, timeout_us) \
readl_poll_timeout(chan->xdev->regs + chan->ctrl_offset + reg, val, \
cond, delay_us, timeout_us)
/* IO accessors */
-static inline u32 vdma_read(struct xilinx_vdma_chan *chan, u32 reg)
+static inline u32 dma_read(struct xilinx_dma_chan *chan, u32 reg)
{
return ioread32(chan->xdev->regs + reg);
}
-static inline void vdma_write(struct xilinx_vdma_chan *chan, u32 reg, u32 value)
+static inline void dma_write(struct xilinx_dma_chan *chan, u32 reg, u32 value)
{
iowrite32(value, chan->xdev->regs + reg);
}
-static inline void vdma_desc_write(struct xilinx_vdma_chan *chan, u32 reg,
+static inline void vdma_desc_write(struct xilinx_dma_chan *chan, u32 reg,
u32 value)
{
- vdma_write(chan, chan->desc_offset + reg, value);
+ dma_write(chan, chan->desc_offset + reg, value);
}
-static inline u32 vdma_ctrl_read(struct xilinx_vdma_chan *chan, u32 reg)
+static inline u32 dma_ctrl_read(struct xilinx_dma_chan *chan, u32 reg)
{
- return vdma_read(chan, chan->ctrl_offset + reg);
+ return dma_read(chan, chan->ctrl_offset + reg);
}
-static inline void vdma_ctrl_write(struct xilinx_vdma_chan *chan, u32 reg,
+static inline void dma_ctrl_write(struct xilinx_dma_chan *chan, u32 reg,
u32 value)
{
- vdma_write(chan, chan->ctrl_offset + reg, value);
+ dma_write(chan, chan->ctrl_offset + reg, value);
}
-static inline void vdma_ctrl_clr(struct xilinx_vdma_chan *chan, u32 reg,
+static inline void dma_ctrl_clr(struct xilinx_dma_chan *chan, u32 reg,
u32 clr)
{
- vdma_ctrl_write(chan, reg, vdma_ctrl_read(chan, reg) & ~clr);
+ dma_ctrl_write(chan, reg, dma_ctrl_read(chan, reg) & ~clr);
}
-static inline void vdma_ctrl_set(struct xilinx_vdma_chan *chan, u32 reg,
+static inline void dma_ctrl_set(struct xilinx_dma_chan *chan, u32 reg,
u32 set)
{
- vdma_ctrl_write(chan, reg, vdma_ctrl_read(chan, reg) | set);
+ dma_ctrl_write(chan, reg, dma_ctrl_read(chan, reg) | set);
+}
+
+/**
+ * vdma_desc_write_64 - 64-bit descriptor write
+ * @chan: Driver specific VDMA channel
+ * @reg: Register to write
+ * @value_lsb: lower address of the descriptor.
+ * @value_msb: upper address of the descriptor.
+ *
+ * Since the VDMA driver may need to write to a register offset that is not a
+ * multiple of 64 bits (e.g. 0x5c), the address is written as two separate
+ * 32-bit accesses instead of a single 64-bit register write.
+ */
+static inline void vdma_desc_write_64(struct xilinx_dma_chan *chan, u32 reg,
+ u32 value_lsb, u32 value_msb)
+{
+ /* Write the lsb 32 bits*/
+ writel(value_lsb, chan->xdev->regs + chan->desc_offset + reg);
+
+ /* Write the msb 32 bits */
+ writel(value_msb, chan->xdev->regs + chan->desc_offset + reg + 4);
}
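The driver keeps the two address halves in separate descriptor fields (buf_addr and buf_addr_msb); if one instead started from a single wide dma_addr_t, the split would use the stock lower_32_bits()/upper_32_bits() helpers, roughly as in this sketch (illustrative only; vaddr, len, and i are placeholders):

	dma_addr_t buf = dma_map_single(chan->dev, vaddr, len, DMA_TO_DEVICE);

	vdma_desc_write_64(chan, XILINX_VDMA_REG_START_ADDRESS_64(i),
			   lower_32_bits(buf), upper_32_bits(buf));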
/* -----------------------------------------------------------------------------
@@ -305,16 +460,59 @@ static inline void vdma_ctrl_set(struct xilinx_vdma_chan *chan, u32 reg,
/**
* xilinx_vdma_alloc_tx_segment - Allocate transaction segment
- * @chan: Driver specific VDMA channel
+ * @chan: Driver specific DMA channel
*
* Return: The allocated segment on success and NULL on failure.
*/
static struct xilinx_vdma_tx_segment *
-xilinx_vdma_alloc_tx_segment(struct xilinx_vdma_chan *chan)
+xilinx_vdma_alloc_tx_segment(struct xilinx_dma_chan *chan)
{
struct xilinx_vdma_tx_segment *segment;
dma_addr_t phys;
+ segment = dma_pool_zalloc(chan->desc_pool, GFP_ATOMIC, &phys);
+ if (!segment)
+ return NULL;
+
+ segment->phys = phys;
+
+ return segment;
+}
+
+/**
+ * xilinx_cdma_alloc_tx_segment - Allocate transaction segment
+ * @chan: Driver specific DMA channel
+ *
+ * Return: The allocated segment on success and NULL on failure.
+ */
+static struct xilinx_cdma_tx_segment *
+xilinx_cdma_alloc_tx_segment(struct xilinx_dma_chan *chan)
+{
+ struct xilinx_cdma_tx_segment *segment;
+ dma_addr_t phys;
+
+ segment = dma_pool_alloc(chan->desc_pool, GFP_ATOMIC, &phys);
+ if (!segment)
+ return NULL;
+
+ memset(segment, 0, sizeof(*segment));
+ segment->phys = phys;
+
+ return segment;
+}
+
+/**
+ * xilinx_axidma_alloc_tx_segment - Allocate transaction segment
+ * @chan: Driver specific DMA channel
+ *
+ * Return: The allocated segment on success and NULL on failure.
+ */
+static struct xilinx_axidma_tx_segment *
+xilinx_axidma_alloc_tx_segment(struct xilinx_dma_chan *chan)
+{
+ struct xilinx_axidma_tx_segment *segment;
+ dma_addr_t phys;
+
segment = dma_pool_alloc(chan->desc_pool, GFP_ATOMIC, &phys);
if (!segment)
return NULL;
@@ -326,26 +524,48 @@ xilinx_vdma_alloc_tx_segment(struct xilinx_vdma_chan *chan)
}
/**
+ * xilinx_dma_free_tx_segment - Free transaction segment
+ * @chan: Driver specific DMA channel
+ * @segment: DMA transaction segment
+ */
+static void xilinx_dma_free_tx_segment(struct xilinx_dma_chan *chan,
+ struct xilinx_axidma_tx_segment *segment)
+{
+ dma_pool_free(chan->desc_pool, segment, segment->phys);
+}
+
+/**
+ * xilinx_cdma_free_tx_segment - Free transaction segment
+ * @chan: Driver specific DMA channel
+ * @segment: DMA transaction segment
+ */
+static void xilinx_cdma_free_tx_segment(struct xilinx_dma_chan *chan,
+ struct xilinx_cdma_tx_segment *segment)
+{
+ dma_pool_free(chan->desc_pool, segment, segment->phys);
+}
+
+/**
* xilinx_vdma_free_tx_segment - Free transaction segment
- * @chan: Driver specific VDMA channel
- * @segment: VDMA transaction segment
+ * @chan: Driver specific DMA channel
+ * @segment: DMA transaction segment
*/
-static void xilinx_vdma_free_tx_segment(struct xilinx_vdma_chan *chan,
+static void xilinx_vdma_free_tx_segment(struct xilinx_dma_chan *chan,
struct xilinx_vdma_tx_segment *segment)
{
dma_pool_free(chan->desc_pool, segment, segment->phys);
}
/**
- * xilinx_vdma_tx_descriptor - Allocate transaction descriptor
- * @chan: Driver specific VDMA channel
+ * xilinx_dma_tx_descriptor - Allocate transaction descriptor
+ * @chan: Driver specific DMA channel
*
* Return: The allocated descriptor on success and NULL on failure.
*/
-static struct xilinx_vdma_tx_descriptor *
-xilinx_vdma_alloc_tx_descriptor(struct xilinx_vdma_chan *chan)
+static struct xilinx_dma_tx_descriptor *
+xilinx_dma_alloc_tx_descriptor(struct xilinx_dma_chan *chan)
{
- struct xilinx_vdma_tx_descriptor *desc;
+ struct xilinx_dma_tx_descriptor *desc;
desc = kzalloc(sizeof(*desc), GFP_KERNEL);
if (!desc)
@@ -357,22 +577,38 @@ xilinx_vdma_alloc_tx_descriptor(struct xilinx_vdma_chan *chan)
}
/**
- * xilinx_vdma_free_tx_descriptor - Free transaction descriptor
- * @chan: Driver specific VDMA channel
- * @desc: VDMA transaction descriptor
+ * xilinx_dma_free_tx_descriptor - Free transaction descriptor
+ * @chan: Driver specific DMA channel
+ * @desc: DMA transaction descriptor
*/
static void
-xilinx_vdma_free_tx_descriptor(struct xilinx_vdma_chan *chan,
- struct xilinx_vdma_tx_descriptor *desc)
+xilinx_dma_free_tx_descriptor(struct xilinx_dma_chan *chan,
+ struct xilinx_dma_tx_descriptor *desc)
{
struct xilinx_vdma_tx_segment *segment, *next;
+ struct xilinx_cdma_tx_segment *cdma_segment, *cdma_next;
+ struct xilinx_axidma_tx_segment *axidma_segment, *axidma_next;
if (!desc)
return;
- list_for_each_entry_safe(segment, next, &desc->segments, node) {
- list_del(&segment->node);
- xilinx_vdma_free_tx_segment(chan, segment);
+ if (chan->xdev->dma_config->dmatype == XDMA_TYPE_VDMA) {
+ list_for_each_entry_safe(segment, next, &desc->segments, node) {
+ list_del(&segment->node);
+ xilinx_vdma_free_tx_segment(chan, segment);
+ }
+ } else if (chan->xdev->dma_config->dmatype == XDMA_TYPE_CDMA) {
+ list_for_each_entry_safe(cdma_segment, cdma_next,
+ &desc->segments, node) {
+ list_del(&cdma_segment->node);
+ xilinx_cdma_free_tx_segment(chan, cdma_segment);
+ }
+ } else {
+ list_for_each_entry_safe(axidma_segment, axidma_next,
+ &desc->segments, node) {
+ list_del(&axidma_segment->node);
+ xilinx_dma_free_tx_segment(chan, axidma_segment);
+ }
}
kfree(desc);
@@ -381,60 +617,62 @@ xilinx_vdma_free_tx_descriptor(struct xilinx_vdma_chan *chan,
/* Required functions */
/**
- * xilinx_vdma_free_desc_list - Free descriptors list
- * @chan: Driver specific VDMA channel
+ * xilinx_dma_free_desc_list - Free descriptors list
+ * @chan: Driver specific DMA channel
* @list: List to parse and delete the descriptor
*/
-static void xilinx_vdma_free_desc_list(struct xilinx_vdma_chan *chan,
+static void xilinx_dma_free_desc_list(struct xilinx_dma_chan *chan,
struct list_head *list)
{
- struct xilinx_vdma_tx_descriptor *desc, *next;
+ struct xilinx_dma_tx_descriptor *desc, *next;
list_for_each_entry_safe(desc, next, list, node) {
list_del(&desc->node);
- xilinx_vdma_free_tx_descriptor(chan, desc);
+ xilinx_dma_free_tx_descriptor(chan, desc);
}
}
/**
- * xilinx_vdma_free_descriptors - Free channel descriptors
- * @chan: Driver specific VDMA channel
+ * xilinx_dma_free_descriptors - Free channel descriptors
+ * @chan: Driver specific DMA channel
*/
-static void xilinx_vdma_free_descriptors(struct xilinx_vdma_chan *chan)
+static void xilinx_dma_free_descriptors(struct xilinx_dma_chan *chan)
{
unsigned long flags;
spin_lock_irqsave(&chan->lock, flags);
- xilinx_vdma_free_desc_list(chan, &chan->pending_list);
- xilinx_vdma_free_desc_list(chan, &chan->done_list);
- xilinx_vdma_free_desc_list(chan, &chan->active_list);
+ xilinx_dma_free_desc_list(chan, &chan->pending_list);
+ xilinx_dma_free_desc_list(chan, &chan->done_list);
+ xilinx_dma_free_desc_list(chan, &chan->active_list);
spin_unlock_irqrestore(&chan->lock, flags);
}
/**
- * xilinx_vdma_free_chan_resources - Free channel resources
+ * xilinx_dma_free_chan_resources - Free channel resources
* @dchan: DMA channel
*/
-static void xilinx_vdma_free_chan_resources(struct dma_chan *dchan)
+static void xilinx_dma_free_chan_resources(struct dma_chan *dchan)
{
- struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
+ struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
dev_dbg(chan->dev, "Free all channel resources.\n");
- xilinx_vdma_free_descriptors(chan);
+ xilinx_dma_free_descriptors(chan);
+ if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA)
+ xilinx_dma_free_tx_segment(chan, chan->seg_v);
dma_pool_destroy(chan->desc_pool);
chan->desc_pool = NULL;
}
/**
- * xilinx_vdma_chan_desc_cleanup - Clean channel descriptors
- * @chan: Driver specific VDMA channel
+ * xilinx_dma_chan_desc_cleanup - Clean channel descriptors
+ * @chan: Driver specific DMA channel
*/
-static void xilinx_vdma_chan_desc_cleanup(struct xilinx_vdma_chan *chan)
+static void xilinx_dma_chan_desc_cleanup(struct xilinx_dma_chan *chan)
{
- struct xilinx_vdma_tx_descriptor *desc, *next;
+ struct xilinx_dma_tx_descriptor *desc, *next;
unsigned long flags;
spin_lock_irqsave(&chan->lock, flags);
@@ -457,32 +695,32 @@ static void xilinx_vdma_chan_desc_cleanup(struct xilinx_vdma_chan *chan)
/* Run any dependencies, then free the descriptor */
dma_run_dependencies(&desc->async_tx);
- xilinx_vdma_free_tx_descriptor(chan, desc);
+ xilinx_dma_free_tx_descriptor(chan, desc);
}
spin_unlock_irqrestore(&chan->lock, flags);
}
/**
- * xilinx_vdma_do_tasklet - Schedule completion tasklet
- * @data: Pointer to the Xilinx VDMA channel structure
+ * xilinx_dma_do_tasklet - Schedule completion tasklet
+ * @data: Pointer to the Xilinx DMA channel structure
*/
-static void xilinx_vdma_do_tasklet(unsigned long data)
+static void xilinx_dma_do_tasklet(unsigned long data)
{
- struct xilinx_vdma_chan *chan = (struct xilinx_vdma_chan *)data;
+ struct xilinx_dma_chan *chan = (struct xilinx_dma_chan *)data;
- xilinx_vdma_chan_desc_cleanup(chan);
+ xilinx_dma_chan_desc_cleanup(chan);
}
/**
- * xilinx_vdma_alloc_chan_resources - Allocate channel resources
+ * xilinx_dma_alloc_chan_resources - Allocate channel resources
* @dchan: DMA channel
*
* Return: '0' on success and failure value on error
*/
-static int xilinx_vdma_alloc_chan_resources(struct dma_chan *dchan)
+static int xilinx_dma_alloc_chan_resources(struct dma_chan *dchan)
{
- struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
+ struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
/* Has this channel already been allocated? */
if (chan->desc_pool)
@@ -492,10 +730,26 @@ static int xilinx_vdma_alloc_chan_resources(struct dma_chan *dchan)
* We need the descriptor to be aligned to 64bytes
* for meeting Xilinx VDMA specification requirement.
*/
- chan->desc_pool = dma_pool_create("xilinx_vdma_desc_pool",
- chan->dev,
- sizeof(struct xilinx_vdma_tx_segment),
- __alignof__(struct xilinx_vdma_tx_segment), 0);
+ if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
+ chan->desc_pool = dma_pool_create("xilinx_dma_desc_pool",
+ chan->dev,
+ sizeof(struct xilinx_axidma_tx_segment),
+ __alignof__(struct xilinx_axidma_tx_segment),
+ 0);
+ } else if (chan->xdev->dma_config->dmatype == XDMA_TYPE_CDMA) {
+ chan->desc_pool = dma_pool_create("xilinx_cdma_desc_pool",
+ chan->dev,
+ sizeof(struct xilinx_cdma_tx_segment),
+ __alignof__(struct xilinx_cdma_tx_segment),
+ 0);
+ } else {
+ chan->desc_pool = dma_pool_create("xilinx_vdma_desc_pool",
+ chan->dev,
+ sizeof(struct xilinx_vdma_tx_segment),
+ __alignof__(struct xilinx_vdma_tx_segment),
+ 0);
+ }
+
if (!chan->desc_pool) {
dev_err(chan->dev,
"unable to allocate channel %d descriptor pool\n",
@@ -503,110 +757,160 @@ static int xilinx_vdma_alloc_chan_resources(struct dma_chan *dchan)
return -ENOMEM;
}
+ if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA)
+ /*
+ * In the AXI DMA case, after submitting a pending_list, keep
+ * an extra segment allocated so that the "next descriptor"
+ * pointer on the tail descriptor always points to a
+ * valid descriptor, even when paused after reaching taildesc.
+ * This way, it is possible to issue additional
+ * transfers without halting and restarting the channel.
+ */
+ chan->seg_v = xilinx_axidma_alloc_tx_segment(chan);
+
dma_cookie_init(dchan);
+
+ if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
+ /* For AXI DMA, resetting one channel will reset the
+ * other channel as well, so enable the interrupts here.
+ */
+ dma_ctrl_set(chan, XILINX_DMA_REG_DMACR,
+ XILINX_DMA_DMAXR_ALL_IRQ_MASK);
+ }
+
+ if ((chan->xdev->dma_config->dmatype == XDMA_TYPE_CDMA) && chan->has_sg)
+ dma_ctrl_set(chan, XILINX_DMA_REG_DMACR,
+ XILINX_CDMA_CR_SGMODE);
+
return 0;
}
/**
- * xilinx_vdma_tx_status - Get VDMA transaction status
+ * xilinx_dma_tx_status - Get DMA transaction status
* @dchan: DMA channel
* @cookie: Transaction identifier
* @txstate: Transaction state
*
* Return: DMA transaction status
*/
-static enum dma_status xilinx_vdma_tx_status(struct dma_chan *dchan,
+static enum dma_status xilinx_dma_tx_status(struct dma_chan *dchan,
dma_cookie_t cookie,
struct dma_tx_state *txstate)
{
- return dma_cookie_status(dchan, cookie, txstate);
+ struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+ struct xilinx_dma_tx_descriptor *desc;
+ struct xilinx_axidma_tx_segment *segment;
+ struct xilinx_axidma_desc_hw *hw;
+ enum dma_status ret;
+ unsigned long flags;
+ u32 residue = 0;
+
+ ret = dma_cookie_status(dchan, cookie, txstate);
+ if (ret == DMA_COMPLETE || !txstate)
+ return ret;
+
+ if (chan->xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
+ spin_lock_irqsave(&chan->lock, flags);
+
+ desc = list_last_entry(&chan->active_list,
+ struct xilinx_dma_tx_descriptor, node);
+ if (chan->has_sg) {
+ list_for_each_entry(segment, &desc->segments, node) {
+ hw = &segment->hw;
+ residue += (hw->control - hw->status) &
+ XILINX_DMA_MAX_TRANS_LEN;
+ }
+ }
+ spin_unlock_irqrestore(&chan->lock, flags);
+
+ chan->residue = residue;
+ dma_set_residue(txstate, chan->residue);
+ }
+
+ return ret;
}
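As an invented example of the residue accumulation above: the control field of each AXI DMA descriptor holds the programmed length and the status field holds the bytes actually transferred, so with two 4096-byte segments where the first has completed and the second has moved 1024 bytes, the reported residue is 3072:

	/* Invented numbers: control = programmed length, status = bytes done. */
	u32 residue = 0;

	residue += (4096 - 4096) & XILINX_DMA_MAX_TRANS_LEN;	/* segment 1: 0    */
	residue += (4096 - 1024) & XILINX_DMA_MAX_TRANS_LEN;	/* segment 2: 3072 */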
/**
- * xilinx_vdma_is_running - Check if VDMA channel is running
- * @chan: Driver specific VDMA channel
+ * xilinx_dma_is_running - Check if DMA channel is running
+ * @chan: Driver specific DMA channel
*
* Return: '1' if running, '0' if not.
*/
-static bool xilinx_vdma_is_running(struct xilinx_vdma_chan *chan)
+static bool xilinx_dma_is_running(struct xilinx_dma_chan *chan)
{
- return !(vdma_ctrl_read(chan, XILINX_VDMA_REG_DMASR) &
- XILINX_VDMA_DMASR_HALTED) &&
- (vdma_ctrl_read(chan, XILINX_VDMA_REG_DMACR) &
- XILINX_VDMA_DMACR_RUNSTOP);
+ return !(dma_ctrl_read(chan, XILINX_DMA_REG_DMASR) &
+ XILINX_DMA_DMASR_HALTED) &&
+ (dma_ctrl_read(chan, XILINX_DMA_REG_DMACR) &
+ XILINX_DMA_DMACR_RUNSTOP);
}
/**
- * xilinx_vdma_is_idle - Check if VDMA channel is idle
- * @chan: Driver specific VDMA channel
+ * xilinx_dma_is_idle - Check if DMA channel is idle
+ * @chan: Driver specific DMA channel
*
* Return: '1' if idle, '0' if not.
*/
-static bool xilinx_vdma_is_idle(struct xilinx_vdma_chan *chan)
+static bool xilinx_dma_is_idle(struct xilinx_dma_chan *chan)
{
- return vdma_ctrl_read(chan, XILINX_VDMA_REG_DMASR) &
- XILINX_VDMA_DMASR_IDLE;
+ return dma_ctrl_read(chan, XILINX_DMA_REG_DMASR) &
+ XILINX_DMA_DMASR_IDLE;
}
/**
- * xilinx_vdma_halt - Halt VDMA channel
- * @chan: Driver specific VDMA channel
+ * xilinx_dma_halt - Halt DMA channel
+ * @chan: Driver specific DMA channel
*/
-static void xilinx_vdma_halt(struct xilinx_vdma_chan *chan)
+static void xilinx_dma_halt(struct xilinx_dma_chan *chan)
{
int err;
u32 val;
- vdma_ctrl_clr(chan, XILINX_VDMA_REG_DMACR, XILINX_VDMA_DMACR_RUNSTOP);
+ dma_ctrl_clr(chan, XILINX_DMA_REG_DMACR, XILINX_DMA_DMACR_RUNSTOP);
/* Wait for the hardware to halt */
- err = xilinx_vdma_poll_timeout(chan, XILINX_VDMA_REG_DMASR, val,
- (val & XILINX_VDMA_DMASR_HALTED), 0,
- XILINX_VDMA_LOOP_COUNT);
+ err = xilinx_dma_poll_timeout(chan, XILINX_DMA_REG_DMASR, val,
+ (val & XILINX_DMA_DMASR_HALTED), 0,
+ XILINX_DMA_LOOP_COUNT);
if (err) {
dev_err(chan->dev, "Cannot stop channel %p: %x\n",
- chan, vdma_ctrl_read(chan, XILINX_VDMA_REG_DMASR));
+ chan, dma_ctrl_read(chan, XILINX_DMA_REG_DMASR));
chan->err = true;
}
-
- return;
}
/**
- * xilinx_vdma_start - Start VDMA channel
- * @chan: Driver specific VDMA channel
+ * xilinx_dma_start - Start DMA channel
+ * @chan: Driver specific DMA channel
*/
-static void xilinx_vdma_start(struct xilinx_vdma_chan *chan)
+static void xilinx_dma_start(struct xilinx_dma_chan *chan)
{
int err;
u32 val;
- vdma_ctrl_set(chan, XILINX_VDMA_REG_DMACR, XILINX_VDMA_DMACR_RUNSTOP);
+ dma_ctrl_set(chan, XILINX_DMA_REG_DMACR, XILINX_DMA_DMACR_RUNSTOP);
/* Wait for the hardware to start */
- err = xilinx_vdma_poll_timeout(chan, XILINX_VDMA_REG_DMASR, val,
- !(val & XILINX_VDMA_DMASR_HALTED), 0,
- XILINX_VDMA_LOOP_COUNT);
+ err = xilinx_dma_poll_timeout(chan, XILINX_DMA_REG_DMASR, val,
+ !(val & XILINX_DMA_DMASR_HALTED), 0,
+ XILINX_DMA_LOOP_COUNT);
if (err) {
dev_err(chan->dev, "Cannot start channel %p: %x\n",
- chan, vdma_ctrl_read(chan, XILINX_VDMA_REG_DMASR));
+ chan, dma_ctrl_read(chan, XILINX_DMA_REG_DMASR));
chan->err = true;
}
-
- return;
}
/**
* xilinx_vdma_start_transfer - Starts VDMA transfer
* @chan: Driver specific channel struct pointer
*/
-static void xilinx_vdma_start_transfer(struct xilinx_vdma_chan *chan)
+static void xilinx_vdma_start_transfer(struct xilinx_dma_chan *chan)
{
struct xilinx_vdma_config *config = &chan->config;
- struct xilinx_vdma_tx_descriptor *desc, *tail_desc;
+ struct xilinx_dma_tx_descriptor *desc, *tail_desc;
u32 reg;
struct xilinx_vdma_tx_segment *tail_segment;
@@ -618,16 +922,16 @@ static void xilinx_vdma_start_transfer(struct xilinx_vdma_chan *chan)
return;
desc = list_first_entry(&chan->pending_list,
- struct xilinx_vdma_tx_descriptor, node);
+ struct xilinx_dma_tx_descriptor, node);
tail_desc = list_last_entry(&chan->pending_list,
- struct xilinx_vdma_tx_descriptor, node);
+ struct xilinx_dma_tx_descriptor, node);
tail_segment = list_last_entry(&tail_desc->segments,
struct xilinx_vdma_tx_segment, node);
/* If it is SG mode and hardware is busy, cannot submit */
- if (chan->has_sg && xilinx_vdma_is_running(chan) &&
- !xilinx_vdma_is_idle(chan)) {
+ if (chan->has_sg && xilinx_dma_is_running(chan) &&
+ !xilinx_dma_is_idle(chan)) {
dev_dbg(chan->dev, "DMA controller still busy\n");
return;
}
@@ -637,19 +941,19 @@ static void xilinx_vdma_start_transfer(struct xilinx_vdma_chan *chan)
* done, start new transfers
*/
if (chan->has_sg)
- vdma_ctrl_write(chan, XILINX_VDMA_REG_CURDESC,
+ dma_ctrl_write(chan, XILINX_DMA_REG_CURDESC,
desc->async_tx.phys);
/* Configure the hardware using info in the config structure */
- reg = vdma_ctrl_read(chan, XILINX_VDMA_REG_DMACR);
+ reg = dma_ctrl_read(chan, XILINX_DMA_REG_DMACR);
if (config->frm_cnt_en)
- reg |= XILINX_VDMA_DMACR_FRAMECNT_EN;
+ reg |= XILINX_DMA_DMACR_FRAMECNT_EN;
else
- reg &= ~XILINX_VDMA_DMACR_FRAMECNT_EN;
+ reg &= ~XILINX_DMA_DMACR_FRAMECNT_EN;
/* Configure channel to allow number frame buffers */
- vdma_ctrl_write(chan, XILINX_VDMA_REG_FRMSTORE,
+ dma_ctrl_write(chan, XILINX_DMA_REG_FRMSTORE,
chan->desc_pendingcount);
/*
@@ -657,45 +961,53 @@ static void xilinx_vdma_start_transfer(struct xilinx_vdma_chan *chan)
* In direct register mode, if not parking, enable circular mode
*/
if (chan->has_sg || !config->park)
- reg |= XILINX_VDMA_DMACR_CIRC_EN;
+ reg |= XILINX_DMA_DMACR_CIRC_EN;
if (config->park)
- reg &= ~XILINX_VDMA_DMACR_CIRC_EN;
+ reg &= ~XILINX_DMA_DMACR_CIRC_EN;
- vdma_ctrl_write(chan, XILINX_VDMA_REG_DMACR, reg);
+ dma_ctrl_write(chan, XILINX_DMA_REG_DMACR, reg);
if (config->park && (config->park_frm >= 0) &&
(config->park_frm < chan->num_frms)) {
if (chan->direction == DMA_MEM_TO_DEV)
- vdma_write(chan, XILINX_VDMA_REG_PARK_PTR,
+ dma_write(chan, XILINX_DMA_REG_PARK_PTR,
config->park_frm <<
- XILINX_VDMA_PARK_PTR_RD_REF_SHIFT);
+ XILINX_DMA_PARK_PTR_RD_REF_SHIFT);
else
- vdma_write(chan, XILINX_VDMA_REG_PARK_PTR,
+ dma_write(chan, XILINX_DMA_REG_PARK_PTR,
config->park_frm <<
- XILINX_VDMA_PARK_PTR_WR_REF_SHIFT);
+ XILINX_DMA_PARK_PTR_WR_REF_SHIFT);
}
/* Start the hardware */
- xilinx_vdma_start(chan);
+ xilinx_dma_start(chan);
if (chan->err)
return;
/* Start the transfer */
if (chan->has_sg) {
- vdma_ctrl_write(chan, XILINX_VDMA_REG_TAILDESC,
+ dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC,
tail_segment->phys);
} else {
struct xilinx_vdma_tx_segment *segment, *last = NULL;
int i = 0;
- list_for_each_entry(desc, &chan->pending_list, node) {
- segment = list_first_entry(&desc->segments,
- struct xilinx_vdma_tx_segment, node);
- vdma_desc_write(chan,
+ if (chan->desc_submitcount < chan->num_frms)
+ i = chan->desc_submitcount;
+
+ list_for_each_entry(segment, &desc->segments, node) {
+ if (chan->ext_addr)
+ vdma_desc_write_64(chan,
+ XILINX_VDMA_REG_START_ADDRESS_64(i++),
+ segment->hw.buf_addr,
+ segment->hw.buf_addr_msb);
+ else
+ vdma_desc_write(chan,
XILINX_VDMA_REG_START_ADDRESS(i++),
segment->hw.buf_addr);
+
last = segment;
}
@@ -703,10 +1015,164 @@ static void xilinx_vdma_start_transfer(struct xilinx_vdma_chan *chan)
return;
/* HW expects these parameters to be same for one transaction */
- vdma_desc_write(chan, XILINX_VDMA_REG_HSIZE, last->hw.hsize);
- vdma_desc_write(chan, XILINX_VDMA_REG_FRMDLY_STRIDE,
+ vdma_desc_write(chan, XILINX_DMA_REG_HSIZE, last->hw.hsize);
+ vdma_desc_write(chan, XILINX_DMA_REG_FRMDLY_STRIDE,
last->hw.stride);
- vdma_desc_write(chan, XILINX_VDMA_REG_VSIZE, last->hw.vsize);
+ vdma_desc_write(chan, XILINX_DMA_REG_VSIZE, last->hw.vsize);
+ }
+
+ if (!chan->has_sg) {
+ list_del(&desc->node);
+ list_add_tail(&desc->node, &chan->active_list);
+ chan->desc_submitcount++;
+ chan->desc_pendingcount--;
+ if (chan->desc_submitcount == chan->num_frms)
+ chan->desc_submitcount = 0;
+ } else {
+ list_splice_tail_init(&chan->pending_list, &chan->active_list);
+ chan->desc_pendingcount = 0;
+ }
+}
+
+/**
+ * xilinx_cdma_start_transfer - Starts cdma transfer
+ * @chan: Driver specific channel struct pointer
+ */
+static void xilinx_cdma_start_transfer(struct xilinx_dma_chan *chan)
+{
+ struct xilinx_dma_tx_descriptor *head_desc, *tail_desc;
+ struct xilinx_cdma_tx_segment *tail_segment;
+ u32 ctrl_reg = dma_read(chan, XILINX_DMA_REG_DMACR);
+
+ if (chan->err)
+ return;
+
+ if (list_empty(&chan->pending_list))
+ return;
+
+ head_desc = list_first_entry(&chan->pending_list,
+ struct xilinx_dma_tx_descriptor, node);
+ tail_desc = list_last_entry(&chan->pending_list,
+ struct xilinx_dma_tx_descriptor, node);
+ tail_segment = list_last_entry(&tail_desc->segments,
+ struct xilinx_cdma_tx_segment, node);
+
+ if (chan->desc_pendingcount <= XILINX_DMA_COALESCE_MAX) {
+ ctrl_reg &= ~XILINX_DMA_CR_COALESCE_MAX;
+ ctrl_reg |= chan->desc_pendingcount <<
+ XILINX_DMA_CR_COALESCE_SHIFT;
+ dma_ctrl_write(chan, XILINX_DMA_REG_DMACR, ctrl_reg);
+ }
+
+ if (chan->has_sg) {
+ dma_ctrl_write(chan, XILINX_DMA_REG_CURDESC,
+ head_desc->async_tx.phys);
+
+ /* Update tail ptr register which will start the transfer */
+ dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC,
+ tail_segment->phys);
+ } else {
+ /* In simple mode */
+ struct xilinx_cdma_tx_segment *segment;
+ struct xilinx_cdma_desc_hw *hw;
+
+ segment = list_first_entry(&head_desc->segments,
+ struct xilinx_cdma_tx_segment,
+ node);
+
+ hw = &segment->hw;
+
+ dma_ctrl_write(chan, XILINX_CDMA_REG_SRCADDR, hw->src_addr);
+ dma_ctrl_write(chan, XILINX_CDMA_REG_DSTADDR, hw->dest_addr);
+
+ /* Start the transfer */
+ dma_ctrl_write(chan, XILINX_DMA_REG_BTT,
+ hw->control & XILINX_DMA_MAX_TRANS_LEN);
+ }
+
+ list_splice_tail_init(&chan->pending_list, &chan->active_list);
+ chan->desc_pendingcount = 0;
+}
+
+/**
+ * xilinx_dma_start_transfer - Starts DMA transfer
+ * @chan: Driver specific channel struct pointer
+ */
+static void xilinx_dma_start_transfer(struct xilinx_dma_chan *chan)
+{
+ struct xilinx_dma_tx_descriptor *head_desc, *tail_desc;
+ struct xilinx_axidma_tx_segment *tail_segment, *old_head, *new_head;
+ u32 reg;
+
+ if (chan->err)
+ return;
+
+ if (list_empty(&chan->pending_list))
+ return;
+
+ /* If it is SG mode and hardware is busy, cannot submit */
+ if (chan->has_sg && xilinx_dma_is_running(chan) &&
+ !xilinx_dma_is_idle(chan)) {
+ dev_dbg(chan->dev, "DMA controller still busy\n");
+ return;
+ }
+
+ head_desc = list_first_entry(&chan->pending_list,
+ struct xilinx_dma_tx_descriptor, node);
+ tail_desc = list_last_entry(&chan->pending_list,
+ struct xilinx_dma_tx_descriptor, node);
+ tail_segment = list_last_entry(&tail_desc->segments,
+ struct xilinx_axidma_tx_segment, node);
+
+ old_head = list_first_entry(&head_desc->segments,
+ struct xilinx_axidma_tx_segment, node);
+ new_head = chan->seg_v;
+ /* Copy Buffer Descriptor fields. */
+ new_head->hw = old_head->hw;
+
+ /* Swap in the reserved segment; the old head becomes the new reserve */
+ list_replace_init(&old_head->node, &new_head->node);
+ chan->seg_v = old_head;
+
+ tail_segment->hw.next_desc = chan->seg_v->phys;
+ head_desc->async_tx.phys = new_head->phys;
+
+ reg = dma_ctrl_read(chan, XILINX_DMA_REG_DMACR);
+
+ if (chan->desc_pendingcount <= XILINX_DMA_COALESCE_MAX) {
+ reg &= ~XILINX_DMA_CR_COALESCE_MAX;
+ reg |= chan->desc_pendingcount <<
+ XILINX_DMA_CR_COALESCE_SHIFT;
+ dma_ctrl_write(chan, XILINX_DMA_REG_DMACR, reg);
+ }
+
+ if (chan->has_sg)
+ dma_ctrl_write(chan, XILINX_DMA_REG_CURDESC,
+ head_desc->async_tx.phys);
+
+ xilinx_dma_start(chan);
+
+ if (chan->err)
+ return;
+
+ /* Start the transfer */
+ if (chan->has_sg) {
+ dma_ctrl_write(chan, XILINX_DMA_REG_TAILDESC,
+ tail_segment->phys);
+ } else {
+ struct xilinx_axidma_tx_segment *segment;
+ struct xilinx_axidma_desc_hw *hw;
+
+ segment = list_first_entry(&head_desc->segments,
+ struct xilinx_axidma_tx_segment,
+ node);
+ hw = &segment->hw;
+
+ dma_ctrl_write(chan, XILINX_DMA_REG_SRCDSTADDR, hw->buf_addr);
+
+ /* Start the transfer */
+ dma_ctrl_write(chan, XILINX_DMA_REG_BTT,
+ hw->control & XILINX_DMA_MAX_TRANS_LEN);
}
list_splice_tail_init(&chan->pending_list, &chan->active_list);
@@ -714,28 +1180,28 @@ static void xilinx_vdma_start_transfer(struct xilinx_vdma_chan *chan)
}
/**
- * xilinx_vdma_issue_pending - Issue pending transactions
+ * xilinx_dma_issue_pending - Issue pending transactions
* @dchan: DMA channel
*/
-static void xilinx_vdma_issue_pending(struct dma_chan *dchan)
+static void xilinx_dma_issue_pending(struct dma_chan *dchan)
{
- struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
+ struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
unsigned long flags;
spin_lock_irqsave(&chan->lock, flags);
- xilinx_vdma_start_transfer(chan);
+ chan->start_transfer(chan);
spin_unlock_irqrestore(&chan->lock, flags);
}
/**
- * xilinx_vdma_complete_descriptor - Mark the active descriptor as complete
+ * xilinx_dma_complete_descriptor - Mark the active descriptor as complete
* @chan : xilinx DMA channel
*
* CONTEXT: hardirq
*/
-static void xilinx_vdma_complete_descriptor(struct xilinx_vdma_chan *chan)
+static void xilinx_dma_complete_descriptor(struct xilinx_dma_chan *chan)
{
- struct xilinx_vdma_tx_descriptor *desc, *next;
+ struct xilinx_dma_tx_descriptor *desc, *next;
/* This function was invoked with lock held */
if (list_empty(&chan->active_list))
@@ -749,27 +1215,27 @@ static void xilinx_vdma_complete_descriptor(struct xilinx_vdma_chan *chan)
}
/**
- * xilinx_vdma_reset - Reset VDMA channel
- * @chan: Driver specific VDMA channel
+ * xilinx_dma_reset - Reset DMA channel
+ * @chan: Driver specific DMA channel
*
* Return: '0' on success and failure value on error
*/
-static int xilinx_vdma_reset(struct xilinx_vdma_chan *chan)
+static int xilinx_dma_reset(struct xilinx_dma_chan *chan)
{
int err;
u32 tmp;
- vdma_ctrl_set(chan, XILINX_VDMA_REG_DMACR, XILINX_VDMA_DMACR_RESET);
+ dma_ctrl_set(chan, XILINX_DMA_REG_DMACR, XILINX_DMA_DMACR_RESET);
/* Wait for the hardware to finish reset */
- err = xilinx_vdma_poll_timeout(chan, XILINX_VDMA_REG_DMACR, tmp,
- !(tmp & XILINX_VDMA_DMACR_RESET), 0,
- XILINX_VDMA_LOOP_COUNT);
+ err = xilinx_dma_poll_timeout(chan, XILINX_DMA_REG_DMACR, tmp,
+ !(tmp & XILINX_DMA_DMACR_RESET), 0,
+ XILINX_DMA_LOOP_COUNT);
if (err) {
dev_err(chan->dev, "reset timeout, cr %x, sr %x\n",
- vdma_ctrl_read(chan, XILINX_VDMA_REG_DMACR),
- vdma_ctrl_read(chan, XILINX_VDMA_REG_DMASR));
+ dma_ctrl_read(chan, XILINX_DMA_REG_DMACR),
+ dma_ctrl_read(chan, XILINX_DMA_REG_DMASR));
return -ETIMEDOUT;
}
@@ -779,48 +1245,48 @@ static int xilinx_vdma_reset(struct xilinx_vdma_chan *chan)
}
/**
- * xilinx_vdma_chan_reset - Reset VDMA channel and enable interrupts
- * @chan: Driver specific VDMA channel
+ * xilinx_dma_chan_reset - Reset DMA channel and enable interrupts
+ * @chan: Driver specific DMA channel
*
* Return: '0' on success and failure value on error
*/
-static int xilinx_vdma_chan_reset(struct xilinx_vdma_chan *chan)
+static int xilinx_dma_chan_reset(struct xilinx_dma_chan *chan)
{
int err;
/* Reset the DMA channel */
- err = xilinx_vdma_reset(chan);
+ err = xilinx_dma_reset(chan);
if (err)
return err;
/* Enable interrupts */
- vdma_ctrl_set(chan, XILINX_VDMA_REG_DMACR,
- XILINX_VDMA_DMAXR_ALL_IRQ_MASK);
+ dma_ctrl_set(chan, XILINX_DMA_REG_DMACR,
+ XILINX_DMA_DMAXR_ALL_IRQ_MASK);
return 0;
}
/**
- * xilinx_vdma_irq_handler - VDMA Interrupt handler
+ * xilinx_dma_irq_handler - DMA Interrupt handler
* @irq: IRQ number
- * @data: Pointer to the Xilinx VDMA channel structure
+ * @data: Pointer to the Xilinx DMA channel structure
*
* Return: IRQ_HANDLED/IRQ_NONE
*/
-static irqreturn_t xilinx_vdma_irq_handler(int irq, void *data)
+static irqreturn_t xilinx_dma_irq_handler(int irq, void *data)
{
- struct xilinx_vdma_chan *chan = data;
+ struct xilinx_dma_chan *chan = data;
u32 status;
/* Read the status and ack the interrupts. */
- status = vdma_ctrl_read(chan, XILINX_VDMA_REG_DMASR);
- if (!(status & XILINX_VDMA_DMAXR_ALL_IRQ_MASK))
+ status = dma_ctrl_read(chan, XILINX_DMA_REG_DMASR);
+ if (!(status & XILINX_DMA_DMAXR_ALL_IRQ_MASK))
return IRQ_NONE;
- vdma_ctrl_write(chan, XILINX_VDMA_REG_DMASR,
- status & XILINX_VDMA_DMAXR_ALL_IRQ_MASK);
+ dma_ctrl_write(chan, XILINX_DMA_REG_DMASR,
+ status & XILINX_DMA_DMAXR_ALL_IRQ_MASK);
- if (status & XILINX_VDMA_DMASR_ERR_IRQ) {
+ if (status & XILINX_DMA_DMASR_ERR_IRQ) {
/*
* An error occurred. If C_FLUSH_ON_FSYNC is enabled and the
* error is recoverable, ignore it. Otherwise flag the error.
@@ -828,22 +1294,23 @@ static irqreturn_t xilinx_vdma_irq_handler(int irq, void *data)
* Only recoverable errors can be cleared in the DMASR register,
* make sure not to set other error bits to 1.
*/
- u32 errors = status & XILINX_VDMA_DMASR_ALL_ERR_MASK;
- vdma_ctrl_write(chan, XILINX_VDMA_REG_DMASR,
- errors & XILINX_VDMA_DMASR_ERR_RECOVER_MASK);
+ u32 errors = status & XILINX_DMA_DMASR_ALL_ERR_MASK;
+
+ dma_ctrl_write(chan, XILINX_DMA_REG_DMASR,
+ errors & XILINX_DMA_DMASR_ERR_RECOVER_MASK);
if (!chan->flush_on_fsync ||
- (errors & ~XILINX_VDMA_DMASR_ERR_RECOVER_MASK)) {
+ (errors & ~XILINX_DMA_DMASR_ERR_RECOVER_MASK)) {
dev_err(chan->dev,
"Channel %p has errors %x, cdr %x tdr %x\n",
chan, errors,
- vdma_ctrl_read(chan, XILINX_VDMA_REG_CURDESC),
- vdma_ctrl_read(chan, XILINX_VDMA_REG_TAILDESC));
+ dma_ctrl_read(chan, XILINX_DMA_REG_CURDESC),
+ dma_ctrl_read(chan, XILINX_DMA_REG_TAILDESC));
chan->err = true;
}
}
- if (status & XILINX_VDMA_DMASR_DLY_CNT_IRQ) {
+ if (status & XILINX_DMA_DMASR_DLY_CNT_IRQ) {
/*
* The device is taking too long to complete the transfer for the
* responsiveness the user requires.
@@ -851,10 +1318,10 @@ static irqreturn_t xilinx_vdma_irq_handler(int irq, void *data)
dev_dbg(chan->dev, "Inter-packet latency too long\n");
}
- if (status & XILINX_VDMA_DMASR_FRM_CNT_IRQ) {
+ if (status & XILINX_DMA_DMASR_FRM_CNT_IRQ) {
spin_lock(&chan->lock);
- xilinx_vdma_complete_descriptor(chan);
- xilinx_vdma_start_transfer(chan);
+ xilinx_dma_complete_descriptor(chan);
+ chan->start_transfer(chan);
spin_unlock(&chan->lock);
}
@@ -867,11 +1334,13 @@ static irqreturn_t xilinx_vdma_irq_handler(int irq, void *data)
* @chan: Driver specific dma channel
* @desc: dma transaction descriptor
*/
-static void append_desc_queue(struct xilinx_vdma_chan *chan,
- struct xilinx_vdma_tx_descriptor *desc)
+static void append_desc_queue(struct xilinx_dma_chan *chan,
+ struct xilinx_dma_tx_descriptor *desc)
{
struct xilinx_vdma_tx_segment *tail_segment;
- struct xilinx_vdma_tx_descriptor *tail_desc;
+ struct xilinx_dma_tx_descriptor *tail_desc;
+ struct xilinx_axidma_tx_segment *axidma_tail_segment;
+ struct xilinx_cdma_tx_segment *cdma_tail_segment;
if (list_empty(&chan->pending_list))
goto append;
@@ -881,10 +1350,23 @@ static void append_desc_queue(struct xilinx_vdma_chan *chan,
* that already exists in memory.
*/
tail_desc = list_last_entry(&chan->pending_list,
- struct xilinx_vdma_tx_descriptor, node);
- tail_segment = list_last_entry(&tail_desc->segments,
- struct xilinx_vdma_tx_segment, node);
- tail_segment->hw.next_desc = (u32)desc->async_tx.phys;
+ struct xilinx_dma_tx_descriptor, node);
+ if (chan->xdev->dma_config->dmatype == XDMA_TYPE_VDMA) {
+ tail_segment = list_last_entry(&tail_desc->segments,
+ struct xilinx_vdma_tx_segment,
+ node);
+ tail_segment->hw.next_desc = (u32)desc->async_tx.phys;
+ } else if (chan->xdev->dma_config->dmatype == XDMA_TYPE_CDMA) {
+ cdma_tail_segment = list_last_entry(&tail_desc->segments,
+ struct xilinx_cdma_tx_segment,
+ node);
+ cdma_tail_segment->hw.next_desc = (u32)desc->async_tx.phys;
+ } else {
+ axidma_tail_segment = list_last_entry(&tail_desc->segments,
+ struct xilinx_axidma_tx_segment,
+ node);
+ axidma_tail_segment->hw.next_desc = (u32)desc->async_tx.phys;
+ }
/*
* Add the software descriptor and all children to the list
@@ -894,22 +1376,23 @@ append:
list_add_tail(&desc->node, &chan->pending_list);
chan->desc_pendingcount++;
- if (unlikely(chan->desc_pendingcount > chan->num_frms)) {
+ if (chan->has_sg && (chan->xdev->dma_config->dmatype == XDMA_TYPE_VDMA)
+ && unlikely(chan->desc_pendingcount > chan->num_frms)) {
dev_dbg(chan->dev, "desc pendingcount is too high\n");
chan->desc_pendingcount = chan->num_frms;
}
}
/**
- * xilinx_vdma_tx_submit - Submit DMA transaction
+ * xilinx_dma_tx_submit - Submit DMA transaction
* @tx: Async transaction descriptor
*
* Return: cookie value on success and failure value on error
*/
-static dma_cookie_t xilinx_vdma_tx_submit(struct dma_async_tx_descriptor *tx)
+static dma_cookie_t xilinx_dma_tx_submit(struct dma_async_tx_descriptor *tx)
{
- struct xilinx_vdma_tx_descriptor *desc = to_vdma_tx_descriptor(tx);
- struct xilinx_vdma_chan *chan = to_xilinx_chan(tx->chan);
+ struct xilinx_dma_tx_descriptor *desc = to_dma_tx_descriptor(tx);
+ struct xilinx_dma_chan *chan = to_xilinx_chan(tx->chan);
dma_cookie_t cookie;
unsigned long flags;
int err;
@@ -919,7 +1402,7 @@ static dma_cookie_t xilinx_vdma_tx_submit(struct dma_async_tx_descriptor *tx)
* If reset fails, need to hard reset the system.
* Channel is no longer functional
*/
- err = xilinx_vdma_chan_reset(chan);
+ err = xilinx_dma_chan_reset(chan);
if (err < 0)
return err;
}
@@ -950,8 +1433,8 @@ xilinx_vdma_dma_prep_interleaved(struct dma_chan *dchan,
struct dma_interleaved_template *xt,
unsigned long flags)
{
- struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
- struct xilinx_vdma_tx_descriptor *desc;
+ struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+ struct xilinx_dma_tx_descriptor *desc;
struct xilinx_vdma_tx_segment *segment, *prev = NULL;
struct xilinx_vdma_desc_hw *hw;
@@ -965,12 +1448,12 @@ xilinx_vdma_dma_prep_interleaved(struct dma_chan *dchan,
return NULL;
/* Allocate a transaction descriptor. */
- desc = xilinx_vdma_alloc_tx_descriptor(chan);
+ desc = xilinx_dma_alloc_tx_descriptor(chan);
if (!desc)
return NULL;
dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
- desc->async_tx.tx_submit = xilinx_vdma_tx_submit;
+ desc->async_tx.tx_submit = xilinx_dma_tx_submit;
async_tx_ack(&desc->async_tx);
/* Allocate the link descriptor from DMA pool */
@@ -983,14 +1466,25 @@ xilinx_vdma_dma_prep_interleaved(struct dma_chan *dchan,
hw->vsize = xt->numf;
hw->hsize = xt->sgl[0].size;
hw->stride = (xt->sgl[0].icg + xt->sgl[0].size) <<
- XILINX_VDMA_FRMDLY_STRIDE_STRIDE_SHIFT;
+ XILINX_DMA_FRMDLY_STRIDE_STRIDE_SHIFT;
hw->stride |= chan->config.frm_dly <<
- XILINX_VDMA_FRMDLY_STRIDE_FRMDLY_SHIFT;
-
- if (xt->dir != DMA_MEM_TO_DEV)
- hw->buf_addr = xt->dst_start;
- else
- hw->buf_addr = xt->src_start;
+ XILINX_DMA_FRMDLY_STRIDE_FRMDLY_SHIFT;
+
+ if (xt->dir != DMA_MEM_TO_DEV) {
+ if (chan->ext_addr) {
+ hw->buf_addr = lower_32_bits(xt->dst_start);
+ hw->buf_addr_msb = upper_32_bits(xt->dst_start);
+ } else {
+ hw->buf_addr = xt->dst_start;
+ }
+ } else {
+ if (chan->ext_addr) {
+ hw->buf_addr = lower_32_bits(xt->src_start);
+ hw->buf_addr_msb = upper_32_bits(xt->src_start);
+ } else {
+ hw->buf_addr = xt->src_start;
+ }
+ }
/* Insert the segment into the descriptor segments list. */
list_add_tail(&segment->node, &desc->segments);
@@ -1005,29 +1499,194 @@ xilinx_vdma_dma_prep_interleaved(struct dma_chan *dchan,
return &desc->async_tx;
error:
- xilinx_vdma_free_tx_descriptor(chan, desc);
+ xilinx_dma_free_tx_descriptor(chan, desc);
+ return NULL;
+}
+
+/**
+ * xilinx_cdma_prep_memcpy - prepare descriptors for a memcpy transaction
+ * @dchan: DMA channel
+ * @dma_dst: destination address
+ * @dma_src: source address
+ * @len: transfer length
+ * @flags: transfer ack flags
+ *
+ * Return: Async transaction descriptor on success and NULL on failure
+ */
+static struct dma_async_tx_descriptor *
+xilinx_cdma_prep_memcpy(struct dma_chan *dchan, dma_addr_t dma_dst,
+ dma_addr_t dma_src, size_t len, unsigned long flags)
+{
+ struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+ struct xilinx_dma_tx_descriptor *desc;
+ struct xilinx_cdma_tx_segment *segment, *prev;
+ struct xilinx_cdma_desc_hw *hw;
+
+ if (!len || len > XILINX_DMA_MAX_TRANS_LEN)
+ return NULL;
+
+ desc = xilinx_dma_alloc_tx_descriptor(chan);
+ if (!desc)
+ return NULL;
+
+ dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
+ desc->async_tx.tx_submit = xilinx_dma_tx_submit;
+
+ /* Allocate the link descriptor from DMA pool */
+ segment = xilinx_cdma_alloc_tx_segment(chan);
+ if (!segment)
+ goto error;
+
+ hw = &segment->hw;
+ hw->control = len;
+ hw->src_addr = dma_src;
+ hw->dest_addr = dma_dst;
+
+ /* Fill the previous next descriptor with current */
+ prev = list_last_entry(&desc->segments,
+ struct xilinx_cdma_tx_segment, node);
+ prev->hw.next_desc = segment->phys;
+
+ /* Insert the segment into the descriptor segments list. */
+ list_add_tail(&segment->node, &desc->segments);
+
+ prev = segment;
+
+ /* Link the last hardware descriptor with the first. */
+ segment = list_first_entry(&desc->segments,
+ struct xilinx_cdma_tx_segment, node);
+ desc->async_tx.phys = segment->phys;
+ prev->hw.next_desc = segment->phys;
+
+ return &desc->async_tx;
+
+error:
+ xilinx_dma_free_tx_descriptor(chan, desc);
+ return NULL;
+}
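For reference, a client reaches the CDMA memcpy path above through the generic
dmaengine API rather than by calling the prep routine directly. A minimal
sketch follows; the do_cdma_copy() helper, the chosen flags and the blocking
wait are illustrative assumptions, not part of this patch.

/* Hypothetical dmaengine client; assumes a CDMA channel was already requested. */
static int do_cdma_copy(struct dma_chan *chan, dma_addr_t dst,
			dma_addr_t src, size_t len)
{
	struct dma_async_tx_descriptor *tx;
	dma_cookie_t cookie;

	/* Routed to xilinx_cdma_prep_memcpy() via device_prep_dma_memcpy */
	tx = dmaengine_prep_dma_memcpy(chan, dst, src, len,
				       DMA_CTRL_ACK | DMA_PREP_INTERRUPT);
	if (!tx)
		return -ENOMEM;

	cookie = dmaengine_submit(tx);
	if (dma_submit_error(cookie))
		return -EIO;

	/* Kicks chan->start_transfer(), i.e. xilinx_cdma_start_transfer() */
	dma_async_issue_pending(chan);

	return dma_sync_wait(chan, cookie) == DMA_COMPLETE ? 0 : -EIO;
}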
+
+/**
+ * xilinx_dma_prep_slave_sg - prepare descriptors for a DMA_SLAVE transaction
+ * @dchan: DMA channel
+ * @sgl: scatterlist to transfer to/from
+ * @sg_len: number of entries in @sgl
+ * @direction: DMA direction
+ * @flags: transfer ack flags
+ * @context: APP words of the descriptor
+ *
+ * Return: Async transaction descriptor on success and NULL on failure
+ */
+static struct dma_async_tx_descriptor *xilinx_dma_prep_slave_sg(
+ struct dma_chan *dchan, struct scatterlist *sgl, unsigned int sg_len,
+ enum dma_transfer_direction direction, unsigned long flags,
+ void *context)
+{
+ struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
+ struct xilinx_dma_tx_descriptor *desc;
+ struct xilinx_axidma_tx_segment *segment = NULL, *prev = NULL;
+ u32 *app_w = (u32 *)context;
+ struct scatterlist *sg;
+ size_t copy;
+ size_t sg_used;
+ unsigned int i;
+
+ if (!is_slave_direction(direction))
+ return NULL;
+
+ /* Allocate a transaction descriptor. */
+ desc = xilinx_dma_alloc_tx_descriptor(chan);
+ if (!desc)
+ return NULL;
+
+ dma_async_tx_descriptor_init(&desc->async_tx, &chan->common);
+ desc->async_tx.tx_submit = xilinx_dma_tx_submit;
+
+ /* Build transactions using information in the scatter gather list */
+ for_each_sg(sgl, sg, sg_len, i) {
+ sg_used = 0;
+
+ /* Loop until the entire scatterlist entry is used */
+ while (sg_used < sg_dma_len(sg)) {
+ struct xilinx_axidma_desc_hw *hw;
+
+ /* Get a free segment */
+ segment = xilinx_axidma_alloc_tx_segment(chan);
+ if (!segment)
+ goto error;
+
+ /*
+ * Calculate the maximum number of bytes to transfer,
+ * making sure it is less than the hw limit
+ */
+ copy = min_t(size_t, sg_dma_len(sg) - sg_used,
+ XILINX_DMA_MAX_TRANS_LEN);
+ hw = &segment->hw;
+
+ /* Fill in the descriptor */
+ hw->buf_addr = sg_dma_address(sg) + sg_used;
+
+ hw->control = copy;
+
+ if (chan->direction == DMA_MEM_TO_DEV) {
+ if (app_w)
+ memcpy(hw->app, app_w, sizeof(u32) *
+ XILINX_DMA_NUM_APP_WORDS);
+ }
+
+ if (prev)
+ prev->hw.next_desc = segment->phys;
+
+ prev = segment;
+ sg_used += copy;
+
+ /*
+ * Insert the segment into the descriptor segments
+ * list.
+ */
+ list_add_tail(&segment->node, &desc->segments);
+ }
+ }
+
+ segment = list_first_entry(&desc->segments,
+ struct xilinx_axidma_tx_segment, node);
+ desc->async_tx.phys = segment->phys;
+ prev->hw.next_desc = segment->phys;
+
+ /* For the last DMA_MEM_TO_DEV transfer, set EOP */
+ if (chan->direction == DMA_MEM_TO_DEV) {
+ segment->hw.control |= XILINX_DMA_BD_SOP;
+ segment = list_last_entry(&desc->segments,
+ struct xilinx_axidma_tx_segment,
+ node);
+ segment->hw.control |= XILINX_DMA_BD_EOP;
+ }
+
+ return &desc->async_tx;
+
+error:
+ xilinx_dma_free_tx_descriptor(chan, desc);
return NULL;
}
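The slave SG path is likewise driven through the dmaengine client API, usually
with a completion callback. A hedged sketch, assuming the scatterlist has
already been mapped with dma_map_sg(); the xfer_buf() helper and its
parameters are illustrative only.

/* Hypothetical client; sgl must already be DMA-mapped. */
static int xfer_buf(struct dma_chan *chan, struct scatterlist *sgl,
		    unsigned int sg_len, enum dma_transfer_direction dir,
		    dma_async_tx_callback done, void *arg)
{
	struct dma_async_tx_descriptor *tx;

	/* Ends up in xilinx_dma_prep_slave_sg() for AXI DMA channels */
	tx = dmaengine_prep_slave_sg(chan, sgl, sg_len, dir,
				     DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
	if (!tx)
		return -ENOMEM;

	tx->callback = done;
	tx->callback_param = arg;

	dmaengine_submit(tx);
	dma_async_issue_pending(chan);

	return 0;
}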
/**
- * xilinx_vdma_terminate_all - Halt the channel and free descriptors
- * @chan: Driver specific VDMA Channel pointer
+ * xilinx_dma_terminate_all - Halt the channel and free descriptors
+ * @chan: Driver specific DMA Channel pointer
*/
-static int xilinx_vdma_terminate_all(struct dma_chan *dchan)
+static int xilinx_dma_terminate_all(struct dma_chan *dchan)
{
- struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
+ struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
/* Halt the DMA engine */
- xilinx_vdma_halt(chan);
+ xilinx_dma_halt(chan);
/* Remove and free all of the descriptors in the lists */
- xilinx_vdma_free_descriptors(chan);
+ xilinx_dma_free_descriptors(chan);
return 0;
}
/**
- * xilinx_vdma_channel_set_config - Configure VDMA channel
+ * xilinx_dma_channel_set_config - Configure VDMA channel
* Run-time configuration for Axi VDMA, supports:
* . halt the channel
* . configure interrupt coalescing and inter-packet delay threshold
@@ -1042,13 +1701,13 @@ static int xilinx_vdma_terminate_all(struct dma_chan *dchan)
int xilinx_vdma_channel_set_config(struct dma_chan *dchan,
struct xilinx_vdma_config *cfg)
{
- struct xilinx_vdma_chan *chan = to_xilinx_chan(dchan);
+ struct xilinx_dma_chan *chan = to_xilinx_chan(dchan);
u32 dmacr;
if (cfg->reset)
- return xilinx_vdma_chan_reset(chan);
+ return xilinx_dma_chan_reset(chan);
- dmacr = vdma_ctrl_read(chan, XILINX_VDMA_REG_DMACR);
+ dmacr = dma_ctrl_read(chan, XILINX_DMA_REG_DMACR);
chan->config.frm_dly = cfg->frm_dly;
chan->config.park = cfg->park;
@@ -1058,8 +1717,8 @@ int xilinx_vdma_channel_set_config(struct dma_chan *dchan,
chan->config.master = cfg->master;
if (cfg->gen_lock && chan->genlock) {
- dmacr |= XILINX_VDMA_DMACR_GENLOCK_EN;
- dmacr |= cfg->master << XILINX_VDMA_DMACR_MASTER_SHIFT;
+ dmacr |= XILINX_DMA_DMACR_GENLOCK_EN;
+ dmacr |= cfg->master << XILINX_DMA_DMACR_MASTER_SHIFT;
}
chan->config.frm_cnt_en = cfg->frm_cnt_en;
@@ -1071,21 +1730,21 @@ int xilinx_vdma_channel_set_config(struct dma_chan *dchan,
chan->config.coalesc = cfg->coalesc;
chan->config.delay = cfg->delay;
- if (cfg->coalesc <= XILINX_VDMA_DMACR_FRAME_COUNT_MAX) {
- dmacr |= cfg->coalesc << XILINX_VDMA_DMACR_FRAME_COUNT_SHIFT;
+ if (cfg->coalesc <= XILINX_DMA_DMACR_FRAME_COUNT_MAX) {
+ dmacr |= cfg->coalesc << XILINX_DMA_DMACR_FRAME_COUNT_SHIFT;
chan->config.coalesc = cfg->coalesc;
}
- if (cfg->delay <= XILINX_VDMA_DMACR_DELAY_MAX) {
- dmacr |= cfg->delay << XILINX_VDMA_DMACR_DELAY_SHIFT;
+ if (cfg->delay <= XILINX_DMA_DMACR_DELAY_MAX) {
+ dmacr |= cfg->delay << XILINX_DMA_DMACR_DELAY_SHIFT;
chan->config.delay = cfg->delay;
}
/* FSync Source selection */
- dmacr &= ~XILINX_VDMA_DMACR_FSYNCSRC_MASK;
- dmacr |= cfg->ext_fsync << XILINX_VDMA_DMACR_FSYNCSRC_SHIFT;
+ dmacr &= ~XILINX_DMA_DMACR_FSYNCSRC_MASK;
+ dmacr |= cfg->ext_fsync << XILINX_DMA_DMACR_FSYNCSRC_SHIFT;
- vdma_ctrl_write(chan, XILINX_VDMA_REG_DMACR, dmacr);
+ dma_ctrl_write(chan, XILINX_DMA_REG_DMACR, dmacr);
return 0;
}
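xilinx_vdma_channel_set_config() stays exported as the VDMA-specific control
interface after the rename. A hedged usage sketch, restricted to fields that
appear in this hunk; the values are arbitrary examples and the helper name is
hypothetical.

/* Hypothetical caller; assumes the Xilinx DMA client header is included. */
static int vdma_enable_park(struct dma_chan *dchan)
{
	struct xilinx_vdma_config cfg = {
		.park = 1,	/* clears XILINX_DMA_DMACR_CIRC_EN above */
		.coalesc = 1,	/* interrupt after every frame */
		.frm_dly = 0,
	};

	return xilinx_vdma_channel_set_config(dchan, &cfg);
}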
@@ -1096,14 +1755,14 @@ EXPORT_SYMBOL(xilinx_vdma_channel_set_config);
*/
/**
- * xilinx_vdma_chan_remove - Per Channel remove function
- * @chan: Driver specific VDMA channel
+ * xilinx_dma_chan_remove - Per Channel remove function
+ * @chan: Driver specific DMA channel
*/
-static void xilinx_vdma_chan_remove(struct xilinx_vdma_chan *chan)
+static void xilinx_dma_chan_remove(struct xilinx_dma_chan *chan)
{
/* Disable all interrupts */
- vdma_ctrl_clr(chan, XILINX_VDMA_REG_DMACR,
- XILINX_VDMA_DMAXR_ALL_IRQ_MASK);
+ dma_ctrl_clr(chan, XILINX_DMA_REG_DMACR,
+ XILINX_DMA_DMAXR_ALL_IRQ_MASK);
if (chan->irq > 0)
free_irq(chan->irq, chan);
@@ -1113,8 +1772,197 @@ static void xilinx_vdma_chan_remove(struct xilinx_vdma_chan *chan)
list_del(&chan->common.device_node);
}
+static int axidma_clk_init(struct platform_device *pdev, struct clk **axi_clk,
+ struct clk **tx_clk, struct clk **rx_clk,
+ struct clk **sg_clk, struct clk **tmp_clk)
+{
+ int err;
+
+ *tmp_clk = NULL;
+
+ *axi_clk = devm_clk_get(&pdev->dev, "s_axi_lite_aclk");
+ if (IS_ERR(*axi_clk)) {
+ err = PTR_ERR(*axi_clk);
+ dev_err(&pdev->dev, "failed to get axi_aclk (%u)\n", err);
+ return err;
+ }
+
+ *tx_clk = devm_clk_get(&pdev->dev, "m_axi_mm2s_aclk");
+ if (IS_ERR(*tx_clk))
+ *tx_clk = NULL;
+
+ *rx_clk = devm_clk_get(&pdev->dev, "m_axi_s2mm_aclk");
+ if (IS_ERR(*rx_clk))
+ *rx_clk = NULL;
+
+ *sg_clk = devm_clk_get(&pdev->dev, "m_axi_sg_aclk");
+ if (IS_ERR(*sg_clk))
+ *sg_clk = NULL;
+
+ err = clk_prepare_enable(*axi_clk);
+ if (err) {
+ dev_err(&pdev->dev, "failed to enable axi_clk (%u)\n", err);
+ return err;
+ }
+
+ err = clk_prepare_enable(*tx_clk);
+ if (err) {
+ dev_err(&pdev->dev, "failed to enable tx_clk (%u)\n", err);
+ goto err_disable_axiclk;
+ }
+
+ err = clk_prepare_enable(*rx_clk);
+ if (err) {
+ dev_err(&pdev->dev, "failed to enable rx_clk (%u)\n", err);
+ goto err_disable_txclk;
+ }
+
+ err = clk_prepare_enable(*sg_clk);
+ if (err) {
+ dev_err(&pdev->dev, "failed to enable sg_clk (%u)\n", err);
+ goto err_disable_rxclk;
+ }
+
+ return 0;
+
+err_disable_rxclk:
+ clk_disable_unprepare(*rx_clk);
+err_disable_txclk:
+ clk_disable_unprepare(*tx_clk);
+err_disable_axiclk:
+ clk_disable_unprepare(*axi_clk);
+
+ return err;
+}
+
+static int axicdma_clk_init(struct platform_device *pdev, struct clk **axi_clk,
+ struct clk **dev_clk, struct clk **tmp_clk,
+ struct clk **tmp1_clk, struct clk **tmp2_clk)
+{
+ int err;
+
+ *tmp_clk = NULL;
+ *tmp1_clk = NULL;
+ *tmp2_clk = NULL;
+
+ *axi_clk = devm_clk_get(&pdev->dev, "s_axi_lite_aclk");
+ if (IS_ERR(*axi_clk)) {
+ err = PTR_ERR(*axi_clk);
+ dev_err(&pdev->dev, "failed to get axi_clk (%u)\n", err);
+ return err;
+ }
+
+ *dev_clk = devm_clk_get(&pdev->dev, "m_axi_aclk");
+ if (IS_ERR(*dev_clk)) {
+ err = PTR_ERR(*dev_clk);
+ dev_err(&pdev->dev, "failed to get dev_clk (%u)\n", err);
+ return err;
+ }
+
+ err = clk_prepare_enable(*axi_clk);
+ if (err) {
+ dev_err(&pdev->dev, "failed to enable axi_clk (%u)\n", err);
+ return err;
+ }
+
+ err = clk_prepare_enable(*dev_clk);
+ if (err) {
+ dev_err(&pdev->dev, "failed to enable dev_clk (%u)\n", err);
+ goto err_disable_axiclk;
+ }
+
+ return 0;
+
+err_disable_axiclk:
+ clk_disable_unprepare(*axi_clk);
+
+ return err;
+}
+
+static int axivdma_clk_init(struct platform_device *pdev, struct clk **axi_clk,
+ struct clk **tx_clk, struct clk **txs_clk,
+ struct clk **rx_clk, struct clk **rxs_clk)
+{
+ int err;
+
+ *axi_clk = devm_clk_get(&pdev->dev, "s_axi_lite_aclk");
+ if (IS_ERR(*axi_clk)) {
+ err = PTR_ERR(*axi_clk);
+ dev_err(&pdev->dev, "failed to get axi_aclk (%u)\n", err);
+ return err;
+ }
+
+ *tx_clk = devm_clk_get(&pdev->dev, "m_axi_mm2s_aclk");
+ if (IS_ERR(*tx_clk))
+ *tx_clk = NULL;
+
+ *txs_clk = devm_clk_get(&pdev->dev, "m_axis_mm2s_aclk");
+ if (IS_ERR(*txs_clk))
+ *txs_clk = NULL;
+
+ *rx_clk = devm_clk_get(&pdev->dev, "m_axi_s2mm_aclk");
+ if (IS_ERR(*rx_clk))
+ *rx_clk = NULL;
+
+ *rxs_clk = devm_clk_get(&pdev->dev, "s_axis_s2mm_aclk");
+ if (IS_ERR(*rxs_clk))
+ *rxs_clk = NULL;
+
+ err = clk_prepare_enable(*axi_clk);
+ if (err) {
+ dev_err(&pdev->dev, "failed to enable axi_clk (%u)\n", err);
+ return err;
+ }
+
+ err = clk_prepare_enable(*tx_clk);
+ if (err) {
+ dev_err(&pdev->dev, "failed to enable tx_clk (%u)\n", err);
+ goto err_disable_axiclk;
+ }
+
+ err = clk_prepare_enable(*txs_clk);
+ if (err) {
+ dev_err(&pdev->dev, "failed to enable txs_clk (%u)\n", err);
+ goto err_disable_txclk;
+ }
+
+ err = clk_prepare_enable(*rx_clk);
+ if (err) {
+ dev_err(&pdev->dev, "failed to enable rx_clk (%u)\n", err);
+ goto err_disable_txsclk;
+ }
+
+ err = clk_prepare_enable(*rxs_clk);
+ if (err) {
+ dev_err(&pdev->dev, "failed to enable rxs_clk (%u)\n", err);
+ goto err_disable_rxclk;
+ }
+
+ return 0;
+
+err_disable_rxclk:
+ clk_disable_unprepare(*rx_clk);
+err_disable_txsclk:
+ clk_disable_unprepare(*txs_clk);
+err_disable_txclk:
+ clk_disable_unprepare(*tx_clk);
+err_disable_axiclk:
+ clk_disable_unprepare(*axi_clk);
+
+ return err;
+}
+
+static void xdma_disable_allclks(struct xilinx_dma_device *xdev)
+{
+ clk_disable_unprepare(xdev->rxs_clk);
+ clk_disable_unprepare(xdev->rx_clk);
+ clk_disable_unprepare(xdev->txs_clk);
+ clk_disable_unprepare(xdev->tx_clk);
+ clk_disable_unprepare(xdev->axi_clk);
+}
+
/**
- * xilinx_vdma_chan_probe - Per Channel Probing
+ * xilinx_dma_chan_probe - Per Channel Probing
* It gets the channel features from the device tree entry and
* initializes the special channel handling routines
*
@@ -1123,10 +1971,10 @@ static void xilinx_vdma_chan_remove(struct xilinx_vdma_chan *chan)
*
* Return: '0' on success and failure value on error
*/
-static int xilinx_vdma_chan_probe(struct xilinx_vdma_device *xdev,
+static int xilinx_dma_chan_probe(struct xilinx_dma_device *xdev,
struct device_node *node)
{
- struct xilinx_vdma_chan *chan;
+ struct xilinx_dma_chan *chan;
bool has_dre = false;
u32 value, width;
int err;
@@ -1140,6 +1988,7 @@ static int xilinx_vdma_chan_probe(struct xilinx_vdma_device *xdev,
chan->xdev = xdev;
chan->has_sg = xdev->has_sg;
chan->desc_pendingcount = 0x0;
+ chan->ext_addr = xdev->ext_addr;
spin_lock_init(&chan->lock);
INIT_LIST_HEAD(&chan->pending_list);
@@ -1169,23 +2018,27 @@ static int xilinx_vdma_chan_probe(struct xilinx_vdma_device *xdev,
chan->direction = DMA_MEM_TO_DEV;
chan->id = 0;
- chan->ctrl_offset = XILINX_VDMA_MM2S_CTRL_OFFSET;
- chan->desc_offset = XILINX_VDMA_MM2S_DESC_OFFSET;
+ chan->ctrl_offset = XILINX_DMA_MM2S_CTRL_OFFSET;
+ if (xdev->dma_config->dmatype == XDMA_TYPE_VDMA) {
+ chan->desc_offset = XILINX_VDMA_MM2S_DESC_OFFSET;
- if (xdev->flush_on_fsync == XILINX_VDMA_FLUSH_BOTH ||
- xdev->flush_on_fsync == XILINX_VDMA_FLUSH_MM2S)
- chan->flush_on_fsync = true;
+ if (xdev->flush_on_fsync == XILINX_DMA_FLUSH_BOTH ||
+ xdev->flush_on_fsync == XILINX_DMA_FLUSH_MM2S)
+ chan->flush_on_fsync = true;
+ }
} else if (of_device_is_compatible(node,
"xlnx,axi-vdma-s2mm-channel")) {
chan->direction = DMA_DEV_TO_MEM;
chan->id = 1;
- chan->ctrl_offset = XILINX_VDMA_S2MM_CTRL_OFFSET;
- chan->desc_offset = XILINX_VDMA_S2MM_DESC_OFFSET;
+ chan->ctrl_offset = XILINX_DMA_S2MM_CTRL_OFFSET;
+ if (xdev->dma_config->dmatype == XDMA_TYPE_VDMA) {
+ chan->desc_offset = XILINX_VDMA_S2MM_DESC_OFFSET;
- if (xdev->flush_on_fsync == XILINX_VDMA_FLUSH_BOTH ||
- xdev->flush_on_fsync == XILINX_VDMA_FLUSH_S2MM)
- chan->flush_on_fsync = true;
+ if (xdev->flush_on_fsync == XILINX_DMA_FLUSH_BOTH ||
+ xdev->flush_on_fsync == XILINX_DMA_FLUSH_S2MM)
+ chan->flush_on_fsync = true;
+ }
} else {
dev_err(xdev->dev, "Invalid channel compatible node\n");
return -EINVAL;
@@ -1193,15 +2046,22 @@ static int xilinx_vdma_chan_probe(struct xilinx_vdma_device *xdev,
/* Request the interrupt */
chan->irq = irq_of_parse_and_map(node, 0);
- err = request_irq(chan->irq, xilinx_vdma_irq_handler, IRQF_SHARED,
- "xilinx-vdma-controller", chan);
+ err = request_irq(chan->irq, xilinx_dma_irq_handler, IRQF_SHARED,
+ "xilinx-dma-controller", chan);
if (err) {
dev_err(xdev->dev, "unable to request IRQ %d\n", chan->irq);
return err;
}
+ if (xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA)
+ chan->start_transfer = xilinx_dma_start_transfer;
+ else if (xdev->dma_config->dmatype == XDMA_TYPE_CDMA)
+ chan->start_transfer = xilinx_cdma_start_transfer;
+ else
+ chan->start_transfer = xilinx_vdma_start_transfer;
+
/* Initialize the tasklet */
- tasklet_init(&chan->tasklet, xilinx_vdma_do_tasklet,
+ tasklet_init(&chan->tasklet, xilinx_dma_do_tasklet,
(unsigned long)chan);
/*
@@ -1214,7 +2074,7 @@ static int xilinx_vdma_chan_probe(struct xilinx_vdma_device *xdev,
xdev->chan[chan->id] = chan;
/* Reset the channel */
- err = xilinx_vdma_chan_reset(chan);
+ err = xilinx_dma_chan_reset(chan);
if (err < 0) {
dev_err(xdev->dev, "Reset channel failed\n");
return err;
@@ -1233,28 +2093,54 @@ static int xilinx_vdma_chan_probe(struct xilinx_vdma_device *xdev,
static struct dma_chan *of_dma_xilinx_xlate(struct of_phandle_args *dma_spec,
struct of_dma *ofdma)
{
- struct xilinx_vdma_device *xdev = ofdma->of_dma_data;
+ struct xilinx_dma_device *xdev = ofdma->of_dma_data;
int chan_id = dma_spec->args[0];
- if (chan_id >= XILINX_VDMA_MAX_CHANS_PER_DEVICE || !xdev->chan[chan_id])
+ if (chan_id >= XILINX_DMA_MAX_CHANS_PER_DEVICE || !xdev->chan[chan_id])
return NULL;
return dma_get_slave_channel(&xdev->chan[chan_id]->common);
}
+static const struct xilinx_dma_config axidma_config = {
+ .dmatype = XDMA_TYPE_AXIDMA,
+ .clk_init = axidma_clk_init,
+};
+
+static const struct xilinx_dma_config axicdma_config = {
+ .dmatype = XDMA_TYPE_CDMA,
+ .clk_init = axicdma_clk_init,
+};
+
+static const struct xilinx_dma_config axivdma_config = {
+ .dmatype = XDMA_TYPE_VDMA,
+ .clk_init = axivdma_clk_init,
+};
+
+static const struct of_device_id xilinx_dma_of_ids[] = {
+ { .compatible = "xlnx,axi-dma-1.00.a", .data = &axidma_config },
+ { .compatible = "xlnx,axi-cdma-1.00.a", .data = &axicdma_config },
+ { .compatible = "xlnx,axi-vdma-1.00.a", .data = &axivdma_config },
+ {}
+};
+MODULE_DEVICE_TABLE(of, xilinx_dma_of_ids);
+
/**
- * xilinx_vdma_probe - Driver probe function
+ * xilinx_dma_probe - Driver probe function
* @pdev: Pointer to the platform_device structure
*
* Return: '0' on success and failure value on error
*/
-static int xilinx_vdma_probe(struct platform_device *pdev)
+static int xilinx_dma_probe(struct platform_device *pdev)
{
+ int (*clk_init)(struct platform_device *, struct clk **, struct clk **,
+ struct clk **, struct clk **, struct clk **)
+ = axivdma_clk_init;
struct device_node *node = pdev->dev.of_node;
- struct xilinx_vdma_device *xdev;
- struct device_node *child;
+ struct xilinx_dma_device *xdev;
+ struct device_node *child, *np = pdev->dev.of_node;
struct resource *io;
- u32 num_frames;
+ u32 num_frames, addr_width;
int i, err;
/* Allocate and initialize the DMA engine structure */
@@ -1263,6 +2149,20 @@ static int xilinx_vdma_probe(struct platform_device *pdev)
return -ENOMEM;
xdev->dev = &pdev->dev;
+ if (np) {
+ const struct of_device_id *match;
+
+ match = of_match_node(xilinx_dma_of_ids, np);
+ if (match && match->data) {
+ xdev->dma_config = match->data;
+ clk_init = xdev->dma_config->clk_init;
+ }
+ }
+
+ err = clk_init(pdev, &xdev->axi_clk, &xdev->tx_clk, &xdev->txs_clk,
+ &xdev->rx_clk, &xdev->rxs_clk);
+ if (err)
+ return err;
/* Request and map I/O memory */
io = platform_get_resource(pdev, IORESOURCE_MEM, 0);
@@ -1273,46 +2173,77 @@ static int xilinx_vdma_probe(struct platform_device *pdev)
/* Retrieve the DMA engine properties from the device tree */
xdev->has_sg = of_property_read_bool(node, "xlnx,include-sg");
- err = of_property_read_u32(node, "xlnx,num-fstores", &num_frames);
- if (err < 0) {
- dev_err(xdev->dev, "missing xlnx,num-fstores property\n");
- return err;
+ if (xdev->dma_config->dmatype == XDMA_TYPE_VDMA) {
+ err = of_property_read_u32(node, "xlnx,num-fstores",
+ &num_frames);
+ if (err < 0) {
+ dev_err(xdev->dev,
+ "missing xlnx,num-fstores property\n");
+ return err;
+ }
+
+ err = of_property_read_u32(node, "xlnx,flush-fsync",
+ &xdev->flush_on_fsync);
+ if (err < 0)
+ dev_warn(xdev->dev,
+ "missing xlnx,flush-fsync property\n");
}
- err = of_property_read_u32(node, "xlnx,flush-fsync",
- &xdev->flush_on_fsync);
+ err = of_property_read_u32(node, "xlnx,addrwidth", &addr_width);
if (err < 0)
- dev_warn(xdev->dev, "missing xlnx,flush-fsync property\n");
+ dev_warn(xdev->dev, "missing xlnx,addrwidth property\n");
+
+ if (addr_width > 32)
+ xdev->ext_addr = true;
+ else
+ xdev->ext_addr = false;
+
+ /* Set the dma mask bits */
+ dma_set_mask(xdev->dev, DMA_BIT_MASK(addr_width));
/* Initialize the DMA engine */
xdev->common.dev = &pdev->dev;
INIT_LIST_HEAD(&xdev->common.channels);
- dma_cap_set(DMA_SLAVE, xdev->common.cap_mask);
- dma_cap_set(DMA_PRIVATE, xdev->common.cap_mask);
+ if (!(xdev->dma_config->dmatype == XDMA_TYPE_CDMA)) {
+ dma_cap_set(DMA_SLAVE, xdev->common.cap_mask);
+ dma_cap_set(DMA_PRIVATE, xdev->common.cap_mask);
+ }
xdev->common.device_alloc_chan_resources =
- xilinx_vdma_alloc_chan_resources;
+ xilinx_dma_alloc_chan_resources;
xdev->common.device_free_chan_resources =
- xilinx_vdma_free_chan_resources;
- xdev->common.device_prep_interleaved_dma =
+ xilinx_dma_free_chan_resources;
+ xdev->common.device_terminate_all = xilinx_dma_terminate_all;
+ xdev->common.device_tx_status = xilinx_dma_tx_status;
+ xdev->common.device_issue_pending = xilinx_dma_issue_pending;
+ if (xdev->dma_config->dmatype == XDMA_TYPE_AXIDMA) {
+ xdev->common.device_prep_slave_sg = xilinx_dma_prep_slave_sg;
+ /* Residue calculation is supported by only AXI DMA */
+ xdev->common.residue_granularity =
+ DMA_RESIDUE_GRANULARITY_SEGMENT;
+ } else if (xdev->dma_config->dmatype == XDMA_TYPE_CDMA) {
+ dma_cap_set(DMA_MEMCPY, xdev->common.cap_mask);
+ xdev->common.device_prep_dma_memcpy = xilinx_cdma_prep_memcpy;
+ } else {
+ xdev->common.device_prep_interleaved_dma =
xilinx_vdma_dma_prep_interleaved;
- xdev->common.device_terminate_all = xilinx_vdma_terminate_all;
- xdev->common.device_tx_status = xilinx_vdma_tx_status;
- xdev->common.device_issue_pending = xilinx_vdma_issue_pending;
+ }
platform_set_drvdata(pdev, xdev);
/* Initialize the channels */
for_each_child_of_node(node, child) {
- err = xilinx_vdma_chan_probe(xdev, child);
+ err = xilinx_dma_chan_probe(xdev, child);
if (err < 0)
- goto error;
+ goto disable_clks;
}
- for (i = 0; i < XILINX_VDMA_MAX_CHANS_PER_DEVICE; i++)
- if (xdev->chan[i])
- xdev->chan[i]->num_frms = num_frames;
+ if (xdev->dma_config->dmatype == XDMA_TYPE_VDMA) {
+ for (i = 0; i < XILINX_DMA_MAX_CHANS_PER_DEVICE; i++)
+ if (xdev->chan[i])
+ xdev->chan[i]->num_frms = num_frames;
+ }
/* Register the DMA engine with the core */
dma_async_device_register(&xdev->common);
@@ -1329,49 +2260,47 @@ static int xilinx_vdma_probe(struct platform_device *pdev)
return 0;
+disable_clks:
+ xdma_disable_allclks(xdev);
error:
- for (i = 0; i < XILINX_VDMA_MAX_CHANS_PER_DEVICE; i++)
+ for (i = 0; i < XILINX_DMA_MAX_CHANS_PER_DEVICE; i++)
if (xdev->chan[i])
- xilinx_vdma_chan_remove(xdev->chan[i]);
+ xilinx_dma_chan_remove(xdev->chan[i]);
return err;
}
/**
- * xilinx_vdma_remove - Driver remove function
+ * xilinx_dma_remove - Driver remove function
* @pdev: Pointer to the platform_device structure
*
* Return: Always '0'
*/
-static int xilinx_vdma_remove(struct platform_device *pdev)
+static int xilinx_dma_remove(struct platform_device *pdev)
{
- struct xilinx_vdma_device *xdev = platform_get_drvdata(pdev);
+ struct xilinx_dma_device *xdev = platform_get_drvdata(pdev);
int i;
of_dma_controller_free(pdev->dev.of_node);
dma_async_device_unregister(&xdev->common);
- for (i = 0; i < XILINX_VDMA_MAX_CHANS_PER_DEVICE; i++)
+ for (i = 0; i < XILINX_DMA_MAX_CHANS_PER_DEVICE; i++)
if (xdev->chan[i])
- xilinx_vdma_chan_remove(xdev->chan[i]);
+ xilinx_dma_chan_remove(xdev->chan[i]);
+
+ xdma_disable_allclks(xdev);
return 0;
}
-static const struct of_device_id xilinx_vdma_of_ids[] = {
- { .compatible = "xlnx,axi-vdma-1.00.a",},
- {}
-};
-MODULE_DEVICE_TABLE(of, xilinx_vdma_of_ids);
-
static struct platform_driver xilinx_vdma_driver = {
.driver = {
.name = "xilinx-vdma",
- .of_match_table = xilinx_vdma_of_ids,
+ .of_match_table = xilinx_dma_of_ids,
},
- .probe = xilinx_vdma_probe,
- .remove = xilinx_vdma_remove,
+ .probe = xilinx_dma_probe,
+ .remove = xilinx_dma_remove,
};
module_platform_driver(xilinx_vdma_driver);
diff --git a/drivers/edac/Kconfig b/drivers/edac/Kconfig
index 37755e63cc28..6ca7474baf4a 100644
--- a/drivers/edac/Kconfig
+++ b/drivers/edac/Kconfig
@@ -378,12 +378,11 @@ config EDAC_ALTERA
config EDAC_ALTERA_L2C
bool "Altera L2 Cache ECC"
- depends on EDAC_ALTERA=y
- select CACHE_L2X0
+ depends on EDAC_ALTERA=y && CACHE_L2X0
help
Support for error detection and correction on the
Altera L2 cache Memory for Altera SoCs. This option
- requires L2 cache so it will force that selection.
+ requires L2 cache.
config EDAC_ALTERA_OCRAM
bool "Altera On-Chip RAM ECC"
diff --git a/drivers/edac/altera_edac.c b/drivers/edac/altera_edac.c
index 63e42098726d..5b4d223d6d68 100644
--- a/drivers/edac/altera_edac.c
+++ b/drivers/edac/altera_edac.c
@@ -24,6 +24,7 @@
#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/mfd/syscon.h>
+#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/platform_device.h>
#include <linux/regmap.h>
@@ -78,27 +79,6 @@ static const struct altr_sdram_prv_data a10_data = {
.ue_set_mask = A10_DIAGINT_TDERRA_MASK,
};
-/************************** EDAC Device Defines **************************/
-
-/* OCRAM ECC Management Group Defines */
-#define ALTR_MAN_GRP_OCRAM_ECC_OFFSET 0x04
-#define ALTR_OCR_ECC_EN BIT(0)
-#define ALTR_OCR_ECC_INJS BIT(1)
-#define ALTR_OCR_ECC_INJD BIT(2)
-#define ALTR_OCR_ECC_SERR BIT(3)
-#define ALTR_OCR_ECC_DERR BIT(4)
-
-/* L2 ECC Management Group Defines */
-#define ALTR_MAN_GRP_L2_ECC_OFFSET 0x00
-#define ALTR_L2_ECC_EN BIT(0)
-#define ALTR_L2_ECC_INJS BIT(1)
-#define ALTR_L2_ECC_INJD BIT(2)
-
-#define ALTR_UE_TRIGGER_CHAR 'U' /* Trigger for UE */
-#define ALTR_TRIGGER_READ_WRD_CNT 32 /* Line size x 4 */
-#define ALTR_TRIG_OCRAM_BYTE_SIZE 128 /* Line size x 4 */
-#define ALTR_TRIG_L2C_BYTE_SIZE 4096 /* Full Page */
-
/*********************** EDAC Memory Controller Functions ****************/
/* The SDRAM controller uses the EDAC Memory Controller framework. */
@@ -252,8 +232,8 @@ static unsigned long get_total_mem(void)
}
static const struct of_device_id altr_sdram_ctrl_of_match[] = {
- { .compatible = "altr,sdram-edac", .data = (void *)&c5_data},
- { .compatible = "altr,sdram-edac-a10", .data = (void *)&a10_data},
+ { .compatible = "altr,sdram-edac", .data = &c5_data},
+ { .compatible = "altr,sdram-edac-a10", .data = &a10_data},
{},
};
MODULE_DEVICE_TABLE(of, altr_sdram_ctrl_of_match);
@@ -570,28 +550,8 @@ module_platform_driver(altr_edac_driver);
const struct edac_device_prv_data ocramecc_data;
const struct edac_device_prv_data l2ecc_data;
-
-struct edac_device_prv_data {
- int (*setup)(struct platform_device *pdev, void __iomem *base);
- int ce_clear_mask;
- int ue_clear_mask;
- char dbgfs_name[20];
- void * (*alloc_mem)(size_t size, void **other);
- void (*free_mem)(void *p, size_t size, void *other);
- int ecc_enable_mask;
- int ce_set_mask;
- int ue_set_mask;
- int trig_alloc_sz;
-};
-
-struct altr_edac_device_dev {
- void __iomem *base;
- int sb_irq;
- int db_irq;
- const struct edac_device_prv_data *data;
- struct dentry *debugfs_dir;
- char *edac_dev_name;
-};
+const struct edac_device_prv_data a10_ocramecc_data;
+const struct edac_device_prv_data a10_l2ecc_data;
static irqreturn_t altr_edac_device_handler(int irq, void *dev_id)
{
@@ -665,8 +625,9 @@ static ssize_t altr_edac_device_trig(struct file *file,
if (ACCESS_ONCE(ptemp[i]))
result = -1;
/* Toggle Error bit (it is latched), leave ECC enabled */
- writel(error_mask, drvdata->base);
- writel(priv->ecc_enable_mask, drvdata->base);
+ writel(error_mask, (drvdata->base + priv->set_err_ofst));
+ writel(priv->ecc_enable_mask, (drvdata->base +
+ priv->set_err_ofst));
ptemp[i] = i;
}
/* Ensure it has been written out */
@@ -694,6 +655,16 @@ static const struct file_operations altr_edac_device_inject_fops = {
.llseek = generic_file_llseek,
};
+static ssize_t altr_edac_a10_device_trig(struct file *file,
+ const char __user *user_buf,
+ size_t count, loff_t *ppos);
+
+static const struct file_operations altr_edac_a10_device_inject_fops = {
+ .open = simple_open,
+ .write = altr_edac_a10_device_trig,
+ .llseek = generic_file_llseek,
+};
+
static void altr_create_edacdev_dbgfs(struct edac_device_ctl_info *edac_dci,
const struct edac_device_prv_data *priv)
{
@@ -708,17 +679,18 @@ static void altr_create_edacdev_dbgfs(struct edac_device_ctl_info *edac_dci,
if (!edac_debugfs_create_file(priv->dbgfs_name, S_IWUSR,
drvdata->debugfs_dir, edac_dci,
- &altr_edac_device_inject_fops))
+ priv->inject_fops))
debugfs_remove_recursive(drvdata->debugfs_dir);
}
static const struct of_device_id altr_edac_device_of_match[] = {
#ifdef CONFIG_EDAC_ALTERA_L2C
- { .compatible = "altr,socfpga-l2-ecc", .data = (void *)&l2ecc_data },
+ { .compatible = "altr,socfpga-l2-ecc", .data = &l2ecc_data },
+ { .compatible = "altr,socfpga-a10-l2-ecc", .data = &a10_l2ecc_data },
#endif
#ifdef CONFIG_EDAC_ALTERA_OCRAM
- { .compatible = "altr,socfpga-ocram-ecc",
- .data = (void *)&ocramecc_data },
+ { .compatible = "altr,socfpga-ocram-ecc", .data = &ocramecc_data },
+ { .compatible = "altr,socfpga-a10-ocram-ecc", .data = &a10_ocramecc_data },
#endif
{},
};
@@ -789,7 +761,7 @@ static int altr_edac_device_probe(struct platform_device *pdev)
/* Check specific dependencies for the module */
if (drvdata->data->setup) {
- res = drvdata->data->setup(pdev, drvdata->base);
+ res = drvdata->data->setup(drvdata);
if (res)
goto fail1;
}
@@ -856,6 +828,25 @@ module_platform_driver(altr_edac_device_driver);
/*********************** OCRAM EDAC Device Functions *********************/
#ifdef CONFIG_EDAC_ALTERA_OCRAM
+/*
+ * Test for memory's ECC dependencies upon entry because platform specific
+ * startup should have initialized the memory and enabled the ECC.
+ * Can't turn on ECC here because accessing un-initialized memory will
+ * cause CE/UE errors possibly causing an ABORT.
+ */
+static int altr_check_ecc_deps(struct altr_edac_device_dev *device)
+{
+ void __iomem *base = device->base;
+ const struct edac_device_prv_data *prv = device->data;
+
+ if (readl(base + prv->ecc_en_ofst) & prv->ecc_enable_mask)
+ return 0;
+
+ edac_printk(KERN_ERR, EDAC_DEVICE,
+ "%s: No ECC present or ECC disabled.\n",
+ device->edac_dev_name);
+ return -ENODEV;
+}
static void *ocram_alloc_mem(size_t size, void **other)
{
@@ -891,36 +882,53 @@ static void ocram_free_mem(void *p, size_t size, void *other)
gen_pool_free((struct gen_pool *)other, (u32)p, size);
}
-/*
- * altr_ocram_check_deps()
- * Test for OCRAM cache ECC dependencies upon entry because
- * platform specific startup should have initialized the
- * On-Chip RAM memory and enabled the ECC.
- * Can't turn on ECC here because accessing un-initialized
- * memory will cause CE/UE errors possibly causing an ABORT.
- */
-static int altr_ocram_check_deps(struct platform_device *pdev,
- void __iomem *base)
+static irqreturn_t altr_edac_a10_ecc_irq(struct altr_edac_device_dev *dci,
+ bool sberr)
{
- if (readl(base) & ALTR_OCR_ECC_EN)
- return 0;
+ void __iomem *base = dci->base;
- edac_printk(KERN_ERR, EDAC_DEVICE,
- "OCRAM: No ECC present or ECC disabled.\n");
- return -ENODEV;
+ if (sberr) {
+ writel(ALTR_A10_ECC_SERRPENA,
+ base + ALTR_A10_ECC_INTSTAT_OFST);
+ edac_device_handle_ce(dci->edac_dev, 0, 0, dci->edac_dev_name);
+ } else {
+ writel(ALTR_A10_ECC_DERRPENA,
+ base + ALTR_A10_ECC_INTSTAT_OFST);
+ edac_device_handle_ue(dci->edac_dev, 0, 0, dci->edac_dev_name);
+ panic("\nEDAC:ECC_DEVICE[Uncorrectable errors]\n");
+ }
+ return IRQ_HANDLED;
}
const struct edac_device_prv_data ocramecc_data = {
- .setup = altr_ocram_check_deps,
+ .setup = altr_check_ecc_deps,
.ce_clear_mask = (ALTR_OCR_ECC_EN | ALTR_OCR_ECC_SERR),
.ue_clear_mask = (ALTR_OCR_ECC_EN | ALTR_OCR_ECC_DERR),
.dbgfs_name = "altr_ocram_trigger",
.alloc_mem = ocram_alloc_mem,
.free_mem = ocram_free_mem,
.ecc_enable_mask = ALTR_OCR_ECC_EN,
+ .ecc_en_ofst = ALTR_OCR_ECC_REG_OFFSET,
.ce_set_mask = (ALTR_OCR_ECC_EN | ALTR_OCR_ECC_INJS),
.ue_set_mask = (ALTR_OCR_ECC_EN | ALTR_OCR_ECC_INJD),
+ .set_err_ofst = ALTR_OCR_ECC_REG_OFFSET,
.trig_alloc_sz = ALTR_TRIG_OCRAM_BYTE_SIZE,
+ .inject_fops = &altr_edac_device_inject_fops,
+};
+
+const struct edac_device_prv_data a10_ocramecc_data = {
+ .setup = altr_check_ecc_deps,
+ .ce_clear_mask = ALTR_A10_ECC_SERRPENA,
+ .ue_clear_mask = ALTR_A10_ECC_DERRPENA,
+ .irq_status_mask = A10_SYSMGR_ECC_INTSTAT_OCRAM,
+ .dbgfs_name = "altr_ocram_trigger",
+ .ecc_enable_mask = ALTR_A10_OCRAM_ECC_EN_CTL,
+ .ecc_en_ofst = ALTR_A10_ECC_CTRL_OFST,
+ .ce_set_mask = ALTR_A10_ECC_TSERRA,
+ .ue_set_mask = ALTR_A10_ECC_TDERRA,
+ .set_err_ofst = ALTR_A10_ECC_INTTEST_OFST,
+ .ecc_irq_handler = altr_edac_a10_ecc_irq,
+ .inject_fops = &altr_edac_a10_device_inject_fops,
};
#endif /* CONFIG_EDAC_ALTERA_OCRAM */
@@ -966,10 +974,13 @@ static void l2_free_mem(void *p, size_t size, void *other)
* Bail if ECC is not enabled.
* Note that L2 Cache Enable is forced at build time.
*/
-static int altr_l2_check_deps(struct platform_device *pdev,
- void __iomem *base)
+static int altr_l2_check_deps(struct altr_edac_device_dev *device)
{
- if (readl(base) & ALTR_L2_ECC_EN)
+ void __iomem *base = device->base;
+ const struct edac_device_prv_data *prv = device->data;
+
+ if ((readl(base) & prv->ecc_enable_mask) ==
+ prv->ecc_enable_mask)
return 0;
edac_printk(KERN_ERR, EDAC_DEVICE,
@@ -977,6 +988,24 @@ static int altr_l2_check_deps(struct platform_device *pdev,
return -ENODEV;
}
+static irqreturn_t altr_edac_a10_l2_irq(struct altr_edac_device_dev *dci,
+ bool sberr)
+{
+ if (sberr) {
+ regmap_write(dci->edac->ecc_mgr_map,
+ A10_SYSGMR_MPU_CLEAR_L2_ECC_OFST,
+ A10_SYSGMR_MPU_CLEAR_L2_ECC_SB);
+ edac_device_handle_ce(dci->edac_dev, 0, 0, dci->edac_dev_name);
+ } else {
+ regmap_write(dci->edac->ecc_mgr_map,
+ A10_SYSGMR_MPU_CLEAR_L2_ECC_OFST,
+ A10_SYSGMR_MPU_CLEAR_L2_ECC_MB);
+ edac_device_handle_ue(dci->edac_dev, 0, 0, dci->edac_dev_name);
+ panic("\nEDAC:ECC_DEVICE[Uncorrectable errors]\n");
+ }
+ return IRQ_HANDLED;
+}
+
const struct edac_device_prv_data l2ecc_data = {
.setup = altr_l2_check_deps,
.ce_clear_mask = 0,
@@ -987,11 +1016,252 @@ const struct edac_device_prv_data l2ecc_data = {
.ecc_enable_mask = ALTR_L2_ECC_EN,
.ce_set_mask = (ALTR_L2_ECC_EN | ALTR_L2_ECC_INJS),
.ue_set_mask = (ALTR_L2_ECC_EN | ALTR_L2_ECC_INJD),
+ .set_err_ofst = ALTR_L2_ECC_REG_OFFSET,
+ .trig_alloc_sz = ALTR_TRIG_L2C_BYTE_SIZE,
+ .inject_fops = &altr_edac_device_inject_fops,
+};
+
+const struct edac_device_prv_data a10_l2ecc_data = {
+ .setup = altr_l2_check_deps,
+ .ce_clear_mask = ALTR_A10_L2_ECC_SERR_CLR,
+ .ue_clear_mask = ALTR_A10_L2_ECC_MERR_CLR,
+ .irq_status_mask = A10_SYSMGR_ECC_INTSTAT_L2,
+ .dbgfs_name = "altr_l2_trigger",
+ .alloc_mem = l2_alloc_mem,
+ .free_mem = l2_free_mem,
+ .ecc_enable_mask = ALTR_A10_L2_ECC_EN_CTL,
+ .ce_set_mask = ALTR_A10_L2_ECC_CE_INJ_MASK,
+ .ue_set_mask = ALTR_A10_L2_ECC_UE_INJ_MASK,
+ .set_err_ofst = ALTR_A10_L2_ECC_INJ_OFST,
+ .ecc_irq_handler = altr_edac_a10_l2_irq,
.trig_alloc_sz = ALTR_TRIG_L2C_BYTE_SIZE,
+ .inject_fops = &altr_edac_device_inject_fops,
};
#endif /* CONFIG_EDAC_ALTERA_L2C */
+/********************* Arria10 EDAC Device Functions *************************/
+
+/*
+ * The Arria10 EDAC Device Functions differ from the Cyclone5/Arria5
+ * because 2 IRQs are shared among all the ECC peripherals. The ECC
+ * manager manages the IRQs and the children.
+ * Based on xgene_edac.c peripheral code.
+ */
+
+static ssize_t altr_edac_a10_device_trig(struct file *file,
+ const char __user *user_buf,
+ size_t count, loff_t *ppos)
+{
+ struct edac_device_ctl_info *edac_dci = file->private_data;
+ struct altr_edac_device_dev *drvdata = edac_dci->pvt_info;
+ const struct edac_device_prv_data *priv = drvdata->data;
+ void __iomem *set_addr = (drvdata->base + priv->set_err_ofst);
+ unsigned long flags;
+ u8 trig_type;
+
+ if (!user_buf || get_user(trig_type, user_buf))
+ return -EFAULT;
+
+ local_irq_save(flags);
+ if (trig_type == ALTR_UE_TRIGGER_CHAR)
+ writel(priv->ue_set_mask, set_addr);
+ else
+ writel(priv->ce_set_mask, set_addr);
+ /* Ensure the interrupt test bits are set */
+ wmb();
+ local_irq_restore(flags);
+
+ return count;
+}
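The write handler above backs a debugfs file named by dbgfs_name (for example
"altr_ocram_trigger"). A hedged userspace sketch of driving it; the debugfs
path and the inject_ecc_error() helper are assumptions for illustration.

/* Hypothetical userspace test harness. */
#include <fcntl.h>
#include <unistd.h>

static int inject_ecc_error(const char *trig_path, int uncorrectable)
{
	/* 'U' matches ALTR_UE_TRIGGER_CHAR; any other byte injects a CE */
	char c = uncorrectable ? 'U' : 'C';
	int fd = open(trig_path, O_WRONLY);
	ssize_t n;

	if (fd < 0)
		return -1;
	n = write(fd, &c, 1);
	close(fd);

	return n == 1 ? 0 : -1;
}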
+
+static irqreturn_t altr_edac_a10_irq_handler(int irq, void *dev_id)
+{
+ irqreturn_t rc = IRQ_NONE;
+ struct altr_arria10_edac *edac = dev_id;
+ struct altr_edac_device_dev *dci;
+ int irq_status;
+ bool sberr = (irq == edac->sb_irq) ? 1 : 0;
+ int sm_offset = sberr ? A10_SYSMGR_ECC_INTSTAT_SERR_OFST :
+ A10_SYSMGR_ECC_INTSTAT_DERR_OFST;
+
+ regmap_read(edac->ecc_mgr_map, sm_offset, &irq_status);
+
+ if ((irq != edac->sb_irq) && (irq != edac->db_irq)) {
+ WARN_ON(1);
+ } else {
+ list_for_each_entry(dci, &edac->a10_ecc_devices, next) {
+ if (irq_status & dci->data->irq_status_mask)
+ rc = dci->data->ecc_irq_handler(dci, sberr);
+ }
+ }
+
+ return rc;
+}
+
+static int altr_edac_a10_device_add(struct altr_arria10_edac *edac,
+ struct device_node *np)
+{
+ struct edac_device_ctl_info *dci;
+ struct altr_edac_device_dev *altdev;
+ char *ecc_name = (char *)np->name;
+ struct resource res;
+ int edac_idx;
+ int rc = 0;
+ const struct edac_device_prv_data *prv;
+ /* Get matching node and check for valid result */
+ const struct of_device_id *pdev_id =
+ of_match_node(altr_edac_device_of_match, np);
+ if (IS_ERR_OR_NULL(pdev_id))
+ return -ENODEV;
+
+ /* Get driver specific data for this EDAC device */
+ prv = pdev_id->data;
+ if (IS_ERR_OR_NULL(prv))
+ return -ENODEV;
+
+ if (!devres_open_group(edac->dev, altr_edac_a10_device_add, GFP_KERNEL))
+ return -ENOMEM;
+
+ rc = of_address_to_resource(np, 0, &res);
+ if (rc < 0) {
+ edac_printk(KERN_ERR, EDAC_DEVICE,
+ "%s: no resource address\n", ecc_name);
+ goto err_release_group;
+ }
+
+ edac_idx = edac_device_alloc_index();
+ dci = edac_device_alloc_ctl_info(sizeof(*altdev), ecc_name,
+ 1, ecc_name, 1, 0, NULL, 0,
+ edac_idx);
+
+ if (!dci) {
+ edac_printk(KERN_ERR, EDAC_DEVICE,
+ "%s: Unable to allocate EDAC device\n", ecc_name);
+ rc = -ENOMEM;
+ goto err_release_group;
+ }
+
+ altdev = dci->pvt_info;
+ dci->dev = edac->dev;
+ altdev->edac_dev_name = ecc_name;
+ altdev->edac_idx = edac_idx;
+ altdev->edac = edac;
+ altdev->edac_dev = dci;
+ altdev->data = prv;
+ altdev->ddev = *edac->dev;
+ dci->dev = &altdev->ddev;
+ dci->ctl_name = "Altera ECC Manager";
+ dci->mod_name = ecc_name;
+ dci->dev_name = ecc_name;
+
+ altdev->base = devm_ioremap_resource(edac->dev, &res);
+ if (IS_ERR(altdev->base)) {
+ rc = PTR_ERR(altdev->base);
+ goto err_release_group1;
+ }
+
+ /* Check specific dependencies for the module */
+ if (altdev->data->setup) {
+ rc = altdev->data->setup(altdev);
+ if (rc)
+ goto err_release_group1;
+ }
+
+ rc = edac_device_add_device(dci);
+ if (rc) {
+ dev_err(edac->dev, "edac_device_add_device failed\n");
+ rc = -ENOMEM;
+ goto err_release_group1;
+ }
+
+ altr_create_edacdev_dbgfs(dci, prv);
+
+ list_add(&altdev->next, &edac->a10_ecc_devices);
+
+ devres_remove_group(edac->dev, altr_edac_a10_device_add);
+
+ return 0;
+
+err_release_group1:
+ edac_device_free_ctl_info(dci);
+err_release_group:
+ edac_printk(KERN_ALERT, EDAC_DEVICE, "%s: %d\n", __func__, __LINE__);
+ devres_release_group(edac->dev, NULL);
+ edac_printk(KERN_ERR, EDAC_DEVICE,
+ "%s:Error setting up EDAC device: %d\n", ecc_name, rc);
+
+ return rc;
+}
+
+static int altr_edac_a10_probe(struct platform_device *pdev)
+{
+ struct altr_arria10_edac *edac;
+ struct device_node *child;
+ int rc;
+
+ edac = devm_kzalloc(&pdev->dev, sizeof(*edac), GFP_KERNEL);
+ if (!edac)
+ return -ENOMEM;
+
+ edac->dev = &pdev->dev;
+ platform_set_drvdata(pdev, edac);
+ INIT_LIST_HEAD(&edac->a10_ecc_devices);
+
+ edac->ecc_mgr_map = syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
+ "altr,sysmgr-syscon");
+ if (IS_ERR(edac->ecc_mgr_map)) {
+ edac_printk(KERN_ERR, EDAC_DEVICE,
+ "Unable to get syscon altr,sysmgr-syscon\n");
+ return PTR_ERR(edac->ecc_mgr_map);
+ }
+
+ edac->sb_irq = platform_get_irq(pdev, 0);
+ rc = devm_request_irq(&pdev->dev, edac->sb_irq,
+ altr_edac_a10_irq_handler,
+ IRQF_SHARED, dev_name(&pdev->dev), edac);
+ if (rc) {
+ edac_printk(KERN_ERR, EDAC_DEVICE, "No SBERR IRQ resource\n");
+ return rc;
+ }
+
+ edac->db_irq = platform_get_irq(pdev, 1);
+ rc = devm_request_irq(&pdev->dev, edac->db_irq,
+ altr_edac_a10_irq_handler,
+ IRQF_SHARED, dev_name(&pdev->dev), edac);
+ if (rc) {
+ edac_printk(KERN_ERR, EDAC_DEVICE, "No DBERR IRQ resource\n");
+ return rc;
+ }
+
+ for_each_child_of_node(pdev->dev.of_node, child) {
+ if (!of_device_is_available(child))
+ continue;
+ if (of_device_is_compatible(child, "altr,socfpga-a10-l2-ecc"))
+ altr_edac_a10_device_add(edac, child);
+ else if (of_device_is_compatible(child,
+ "altr,socfpga-a10-ocram-ecc"))
+ altr_edac_a10_device_add(edac, child);
+ }
+
+ return 0;
+}
+
+static const struct of_device_id altr_edac_a10_of_match[] = {
+ { .compatible = "altr,socfpga-a10-ecc-manager" },
+ {},
+};
+MODULE_DEVICE_TABLE(of, altr_edac_a10_of_match);
+
+static struct platform_driver altr_edac_a10_driver = {
+ .probe = altr_edac_a10_probe,
+ .driver = {
+ .name = "socfpga_a10_ecc_manager",
+ .of_match_table = altr_edac_a10_of_match,
+ },
+};
+module_platform_driver(altr_edac_a10_driver);
+
MODULE_LICENSE("GPL v2");
MODULE_AUTHOR("Thor Thayer");
MODULE_DESCRIPTION("EDAC Driver for Altera Memories");
diff --git a/drivers/edac/altera_edac.h b/drivers/edac/altera_edac.h
index 953077d3e4f3..42090f36ba6e 100644
--- a/drivers/edac/altera_edac.h
+++ b/drivers/edac/altera_edac.h
@@ -195,4 +195,132 @@ struct altr_sdram_mc_data {
const struct altr_sdram_prv_data *data;
};
+/************************** EDAC Device Defines **************************/
+/***** General Device Trigger Defines *****/
+#define ALTR_UE_TRIGGER_CHAR 'U' /* Trigger for UE */
+#define ALTR_TRIGGER_READ_WRD_CNT 32 /* Line size x 4 */
+#define ALTR_TRIG_OCRAM_BYTE_SIZE 128 /* Line size x 4 */
+#define ALTR_TRIG_L2C_BYTE_SIZE 4096 /* Full Page */
+
+/******* Cyclone5 and Arria5 Defines *******/
+/* OCRAM ECC Management Group Defines */
+#define ALTR_MAN_GRP_OCRAM_ECC_OFFSET 0x04
+#define ALTR_OCR_ECC_REG_OFFSET 0x00
+#define ALTR_OCR_ECC_EN BIT(0)
+#define ALTR_OCR_ECC_INJS BIT(1)
+#define ALTR_OCR_ECC_INJD BIT(2)
+#define ALTR_OCR_ECC_SERR BIT(3)
+#define ALTR_OCR_ECC_DERR BIT(4)
+
+/* L2 ECC Management Group Defines */
+#define ALTR_MAN_GRP_L2_ECC_OFFSET 0x00
+#define ALTR_L2_ECC_REG_OFFSET 0x00
+#define ALTR_L2_ECC_EN BIT(0)
+#define ALTR_L2_ECC_INJS BIT(1)
+#define ALTR_L2_ECC_INJD BIT(2)
+
+/* Arria10 General ECC Block Module Defines */
+#define ALTR_A10_ECC_CTRL_OFST 0x08
+#define ALTR_A10_ECC_EN BIT(0)
+#define ALTR_A10_ECC_INITA BIT(16)
+#define ALTR_A10_ECC_INITB BIT(24)
+
+#define ALTR_A10_ECC_INITSTAT_OFST 0x0C
+#define ALTR_A10_ECC_INITCOMPLETEA BIT(0)
+#define ALTR_A10_ECC_INITCOMPLETEB BIT(8)
+
+#define ALTR_A10_ECC_ERRINTEN_OFST 0x10
+#define ALTR_A10_ECC_SERRINTEN BIT(0)
+
+#define ALTR_A10_ECC_INTSTAT_OFST 0x20
+#define ALTR_A10_ECC_SERRPENA BIT(0)
+#define ALTR_A10_ECC_DERRPENA BIT(8)
+#define ALTR_A10_ECC_ERRPENA_MASK (ALTR_A10_ECC_SERRPENA | \
+ ALTR_A10_ECC_DERRPENA)
+#define ALTR_A10_ECC_SERRPENB BIT(16)
+#define ALTR_A10_ECC_DERRPENB BIT(24)
+#define ALTR_A10_ECC_ERRPENB_MASK (ALTR_A10_ECC_SERRPENB | \
+ ALTR_A10_ECC_DERRPENB)
+
+#define ALTR_A10_ECC_INTTEST_OFST 0x24
+#define ALTR_A10_ECC_TSERRA BIT(0)
+#define ALTR_A10_ECC_TDERRA BIT(8)
+
+/* ECC Manager Defines */
+#define A10_SYSMGR_ECC_INTMASK_SET_OFST 0x94
+#define A10_SYSMGR_ECC_INTMASK_CLR_OFST 0x98
+#define A10_SYSMGR_ECC_INTMASK_OCRAM BIT(1)
+
+#define A10_SYSMGR_ECC_INTSTAT_SERR_OFST 0x9C
+#define A10_SYSMGR_ECC_INTSTAT_DERR_OFST 0xA0
+#define A10_SYSMGR_ECC_INTSTAT_L2 BIT(0)
+#define A10_SYSMGR_ECC_INTSTAT_OCRAM BIT(1)
+
+#define A10_SYSGMR_MPU_CLEAR_L2_ECC_OFST 0xA8
+#define A10_SYSGMR_MPU_CLEAR_L2_ECC_SB BIT(15)
+#define A10_SYSGMR_MPU_CLEAR_L2_ECC_MB BIT(31)
+
+/* Arria 10 L2 ECC Management Group Defines */
+#define ALTR_A10_L2_ECC_CTL_OFST 0x0
+#define ALTR_A10_L2_ECC_EN_CTL BIT(0)
+
+#define ALTR_A10_L2_ECC_STATUS 0xFFD060A4
+#define ALTR_A10_L2_ECC_STAT_OFST 0xA4
+#define ALTR_A10_L2_ECC_SERR_PEND BIT(0)
+#define ALTR_A10_L2_ECC_MERR_PEND BIT(0)
+
+#define ALTR_A10_L2_ECC_CLR_OFST 0x4
+#define ALTR_A10_L2_ECC_SERR_CLR BIT(15)
+#define ALTR_A10_L2_ECC_MERR_CLR BIT(31)
+
+#define ALTR_A10_L2_ECC_INJ_OFST ALTR_A10_L2_ECC_CTL_OFST
+#define ALTR_A10_L2_ECC_CE_INJ_MASK 0x00000101
+#define ALTR_A10_L2_ECC_UE_INJ_MASK 0x00010101
+
+/* Arria 10 OCRAM ECC Management Group Defines */
+#define ALTR_A10_OCRAM_ECC_EN_CTL (BIT(1) | BIT(0))
+
+struct altr_edac_device_dev;
+
+struct edac_device_prv_data {
+ int (*setup)(struct altr_edac_device_dev *device);
+ int ce_clear_mask;
+ int ue_clear_mask;
+ int irq_status_mask;
+ char dbgfs_name[20];
+ void * (*alloc_mem)(size_t size, void **other);
+ void (*free_mem)(void *p, size_t size, void *other);
+ int ecc_enable_mask;
+ int ecc_en_ofst;
+ int ce_set_mask;
+ int ue_set_mask;
+ int set_err_ofst;
+ irqreturn_t (*ecc_irq_handler)(struct altr_edac_device_dev *dci,
+ bool sb);
+ int trig_alloc_sz;
+ const struct file_operations *inject_fops;
+};
+
+struct altr_edac_device_dev {
+ struct list_head next;
+ void __iomem *base;
+ int sb_irq;
+ int db_irq;
+ const struct edac_device_prv_data *data;
+ struct dentry *debugfs_dir;
+ char *edac_dev_name;
+ struct altr_arria10_edac *edac;
+ struct edac_device_ctl_info *edac_dev;
+ struct device ddev;
+ int edac_idx;
+};
+
+struct altr_arria10_edac {
+ struct device *dev;
+ struct regmap *ecc_mgr_map;
+ int sb_irq;
+ int db_irq;
+ struct list_head a10_ecc_devices;
+};
+
#endif /* #ifndef _ALTERA_EDAC_H */
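For orientation, the Arria10 per-block ECC registers defined above are normally driven as an enable-then-poll sequence: set ALTR_A10_ECC_EN together with the port init bits, wait for the INITCOMPLETE flags, then unmask the error interrupt. The helper below is only a sketch of that sequence and assumes the defines above are visible (e.g. via altera_edac.h); the function name, the ioremapped base argument and the microsecond timeout are assumptions, not part of this patch.

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/io.h>

static int a10_ecc_block_enable(void __iomem *base, unsigned int timeout_us)
{
	u32 stat;

	/* Enable ECC and request hardware init of port A and port B */
	writel(ALTR_A10_ECC_EN | ALTR_A10_ECC_INITA | ALTR_A10_ECC_INITB,
	       base + ALTR_A10_ECC_CTRL_OFST);

	/* Poll the init status register until both ports report completion */
	do {
		stat = readl(base + ALTR_A10_ECC_INITSTAT_OFST);
		if ((stat & ALTR_A10_ECC_INITCOMPLETEA) &&
		    (stat & ALTR_A10_ECC_INITCOMPLETEB)) {
			/* Unmask single-bit error interrupts for this block */
			writel(ALTR_A10_ECC_SERRINTEN,
			       base + ALTR_A10_ECC_ERRINTEN_OFST);
			return 0;
		}
		udelay(1);
	} while (timeout_us--);

	return -ETIMEDOUT;
}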
diff --git a/drivers/edac/amd64_edac.c b/drivers/edac/amd64_edac.c
index d87a47547ba5..46784eb2edc6 100644
--- a/drivers/edac/amd64_edac.c
+++ b/drivers/edac/amd64_edac.c
@@ -15,11 +15,6 @@ module_param(ecc_enable_override, int, 0644);
static struct msr __percpu *msrs;
-/*
- * count successfully initialized driver instances for setup_pci_device()
- */
-static atomic_t drv_instances = ATOMIC_INIT(0);
-
/* Per-node stuff */
static struct ecc_settings **ecc_stngs;
@@ -645,7 +640,7 @@ static u64 sys_addr_to_input_addr(struct mem_ctl_info *mci, u64 sys_addr)
input_addr =
dram_addr_to_input_addr(mci, sys_addr_to_dram_addr(mci, sys_addr));
- edac_dbg(2, "SysAdddr 0x%lx translates to InputAddr 0x%lx\n",
+ edac_dbg(2, "SysAddr 0x%lx translates to InputAddr 0x%lx\n",
(unsigned long)sys_addr, (unsigned long)input_addr);
return input_addr;
@@ -1918,7 +1913,7 @@ static struct amd64_family_type family_types[] = {
[K8_CPUS] = {
.ctl_name = "K8",
.f1_id = PCI_DEVICE_ID_AMD_K8_NB_ADDRMAP,
- .f3_id = PCI_DEVICE_ID_AMD_K8_NB_MISC,
+ .f2_id = PCI_DEVICE_ID_AMD_K8_NB_MEMCTL,
.ops = {
.early_channel_count = k8_early_channel_count,
.map_sysaddr_to_csrow = k8_map_sysaddr_to_csrow,
@@ -1928,7 +1923,7 @@ static struct amd64_family_type family_types[] = {
[F10_CPUS] = {
.ctl_name = "F10h",
.f1_id = PCI_DEVICE_ID_AMD_10H_NB_MAP,
- .f3_id = PCI_DEVICE_ID_AMD_10H_NB_MISC,
+ .f2_id = PCI_DEVICE_ID_AMD_10H_NB_DRAM,
.ops = {
.early_channel_count = f1x_early_channel_count,
.map_sysaddr_to_csrow = f1x_map_sysaddr_to_csrow,
@@ -1938,7 +1933,7 @@ static struct amd64_family_type family_types[] = {
[F15_CPUS] = {
.ctl_name = "F15h",
.f1_id = PCI_DEVICE_ID_AMD_15H_NB_F1,
- .f3_id = PCI_DEVICE_ID_AMD_15H_NB_F3,
+ .f2_id = PCI_DEVICE_ID_AMD_15H_NB_F2,
.ops = {
.early_channel_count = f1x_early_channel_count,
.map_sysaddr_to_csrow = f1x_map_sysaddr_to_csrow,
@@ -1948,7 +1943,7 @@ static struct amd64_family_type family_types[] = {
[F15_M30H_CPUS] = {
.ctl_name = "F15h_M30h",
.f1_id = PCI_DEVICE_ID_AMD_15H_M30H_NB_F1,
- .f3_id = PCI_DEVICE_ID_AMD_15H_M30H_NB_F3,
+ .f2_id = PCI_DEVICE_ID_AMD_15H_M30H_NB_F2,
.ops = {
.early_channel_count = f1x_early_channel_count,
.map_sysaddr_to_csrow = f1x_map_sysaddr_to_csrow,
@@ -1958,7 +1953,7 @@ static struct amd64_family_type family_types[] = {
[F15_M60H_CPUS] = {
.ctl_name = "F15h_M60h",
.f1_id = PCI_DEVICE_ID_AMD_15H_M60H_NB_F1,
- .f3_id = PCI_DEVICE_ID_AMD_15H_M60H_NB_F3,
+ .f2_id = PCI_DEVICE_ID_AMD_15H_M60H_NB_F2,
.ops = {
.early_channel_count = f1x_early_channel_count,
.map_sysaddr_to_csrow = f1x_map_sysaddr_to_csrow,
@@ -1968,7 +1963,7 @@ static struct amd64_family_type family_types[] = {
[F16_CPUS] = {
.ctl_name = "F16h",
.f1_id = PCI_DEVICE_ID_AMD_16H_NB_F1,
- .f3_id = PCI_DEVICE_ID_AMD_16H_NB_F3,
+ .f2_id = PCI_DEVICE_ID_AMD_16H_NB_F2,
.ops = {
.early_channel_count = f1x_early_channel_count,
.map_sysaddr_to_csrow = f1x_map_sysaddr_to_csrow,
@@ -1978,7 +1973,7 @@ static struct amd64_family_type family_types[] = {
[F16_M30H_CPUS] = {
.ctl_name = "F16h_M30h",
.f1_id = PCI_DEVICE_ID_AMD_16H_M30H_NB_F1,
- .f3_id = PCI_DEVICE_ID_AMD_16H_M30H_NB_F3,
+ .f2_id = PCI_DEVICE_ID_AMD_16H_M30H_NB_F2,
.ops = {
.early_channel_count = f1x_early_channel_count,
.map_sysaddr_to_csrow = f1x_map_sysaddr_to_csrow,
@@ -2227,13 +2222,13 @@ static inline void decode_bus_error(int node_id, struct mce *m)
}
/*
- * Use pvt->F2 which contains the F2 CPU PCI device to get the related
- * F1 (AddrMap) and F3 (Misc) devices. Return negative value on error.
+ * Use pvt->F3 which contains the F3 CPU PCI device to get the related
+ * F1 (AddrMap) and F2 (Dct) devices. Return negative value on error.
*/
-static int reserve_mc_sibling_devs(struct amd64_pvt *pvt, u16 f1_id, u16 f3_id)
+static int reserve_mc_sibling_devs(struct amd64_pvt *pvt, u16 f1_id, u16 f2_id)
{
/* Reserve the ADDRESS MAP Device */
- pvt->F1 = pci_get_related_function(pvt->F2->vendor, f1_id, pvt->F2);
+ pvt->F1 = pci_get_related_function(pvt->F3->vendor, f1_id, pvt->F3);
if (!pvt->F1) {
amd64_err("error address map device not found: "
"vendor %x device 0x%x (broken BIOS?)\n",
@@ -2241,15 +2236,15 @@ static int reserve_mc_sibling_devs(struct amd64_pvt *pvt, u16 f1_id, u16 f3_id)
return -ENODEV;
}
- /* Reserve the MISC Device */
- pvt->F3 = pci_get_related_function(pvt->F2->vendor, f3_id, pvt->F2);
- if (!pvt->F3) {
+ /* Reserve the DCT Device */
+ pvt->F2 = pci_get_related_function(pvt->F3->vendor, f2_id, pvt->F3);
+ if (!pvt->F2) {
pci_dev_put(pvt->F1);
pvt->F1 = NULL;
- amd64_err("error F3 device not found: "
+ amd64_err("error F2 device not found: "
"vendor %x device 0x%x (broken BIOS?)\n",
- PCI_VENDOR_ID_AMD, f3_id);
+ PCI_VENDOR_ID_AMD, f2_id);
return -ENODEV;
}
@@ -2263,7 +2258,7 @@ static int reserve_mc_sibling_devs(struct amd64_pvt *pvt, u16 f1_id, u16 f3_id)
static void free_mc_sibling_devs(struct amd64_pvt *pvt)
{
pci_dev_put(pvt->F1);
- pci_dev_put(pvt->F3);
+ pci_dev_put(pvt->F2);
}
/*
@@ -2778,14 +2773,14 @@ static const struct attribute_group *amd64_edac_attr_groups[] = {
NULL
};
-static int init_one_instance(struct pci_dev *F2)
+static int init_one_instance(unsigned int nid)
{
- struct amd64_pvt *pvt = NULL;
+ struct pci_dev *F3 = node_to_amd_nb(nid)->misc;
struct amd64_family_type *fam_type = NULL;
struct mem_ctl_info *mci = NULL;
struct edac_mc_layer layers[2];
+ struct amd64_pvt *pvt = NULL;
int err = 0, ret;
- u16 nid = amd_pci_dev_to_node_id(F2);
ret = -ENOMEM;
pvt = kzalloc(sizeof(struct amd64_pvt), GFP_KERNEL);
@@ -2793,7 +2788,7 @@ static int init_one_instance(struct pci_dev *F2)
goto err_ret;
pvt->mc_node_id = nid;
- pvt->F2 = F2;
+ pvt->F3 = F3;
ret = -EINVAL;
fam_type = per_family_init(pvt);
@@ -2801,7 +2796,7 @@ static int init_one_instance(struct pci_dev *F2)
goto err_free;
ret = -ENODEV;
- err = reserve_mc_sibling_devs(pvt, fam_type->f1_id, fam_type->f3_id);
+ err = reserve_mc_sibling_devs(pvt, fam_type->f1_id, fam_type->f2_id);
if (err)
goto err_free;
@@ -2836,7 +2831,7 @@ static int init_one_instance(struct pci_dev *F2)
goto err_siblings;
mci->pvt_info = pvt;
- mci->pdev = &pvt->F2->dev;
+ mci->pdev = &pvt->F3->dev;
setup_mci_misc_attrs(mci, fam_type);
@@ -2855,8 +2850,6 @@ static int init_one_instance(struct pci_dev *F2)
amd_register_ecc_decoder(decode_bus_error);
- atomic_inc(&drv_instances);
-
return 0;
err_add_mc:
@@ -2872,19 +2865,11 @@ err_ret:
return ret;
}
-static int probe_one_instance(struct pci_dev *pdev,
- const struct pci_device_id *mc_type)
+static int probe_one_instance(unsigned int nid)
{
- u16 nid = amd_pci_dev_to_node_id(pdev);
struct pci_dev *F3 = node_to_amd_nb(nid)->misc;
struct ecc_settings *s;
- int ret = 0;
-
- ret = pci_enable_device(pdev);
- if (ret < 0) {
- edac_dbg(0, "ret=%d\n", ret);
- return -EIO;
- }
+ int ret;
ret = -ENOMEM;
s = kzalloc(sizeof(struct ecc_settings), GFP_KERNEL);
@@ -2905,7 +2890,7 @@ static int probe_one_instance(struct pci_dev *pdev,
goto err_enable;
}
- ret = init_one_instance(pdev);
+ ret = init_one_instance(nid);
if (ret < 0) {
amd64_err("Error probing instance: %d\n", nid);
restore_ecc_error_reporting(s, nid, F3);
@@ -2921,19 +2906,18 @@ err_out:
return ret;
}
-static void remove_one_instance(struct pci_dev *pdev)
+static void remove_one_instance(unsigned int nid)
{
- struct mem_ctl_info *mci;
- struct amd64_pvt *pvt;
- u16 nid = amd_pci_dev_to_node_id(pdev);
struct pci_dev *F3 = node_to_amd_nb(nid)->misc;
struct ecc_settings *s = ecc_stngs[nid];
+ struct mem_ctl_info *mci;
+ struct amd64_pvt *pvt;
- mci = find_mci_by_dev(&pdev->dev);
+ mci = find_mci_by_dev(&F3->dev);
WARN_ON(!mci);
/* Remove from EDAC CORE tracking list */
- mci = edac_mc_del_mc(&pdev->dev);
+ mci = edac_mc_del_mc(&F3->dev);
if (!mci)
return;
@@ -2957,31 +2941,6 @@ static void remove_one_instance(struct pci_dev *pdev)
edac_mc_free(mci);
}
-/*
- * This table is part of the interface for loading drivers for PCI devices. The
- * PCI core identifies what devices are on a system during boot, and then
- * inquiry this table to see if this driver is for a given device found.
- */
-static const struct pci_device_id amd64_pci_table[] = {
- { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_K8_NB_MEMCTL) },
- { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_10H_NB_DRAM) },
- { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_15H_NB_F2) },
- { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_15H_M30H_NB_F2) },
- { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_15H_M60H_NB_F2) },
- { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_16H_NB_F2) },
- { PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_16H_M30H_NB_F2) },
- {0, }
-};
-MODULE_DEVICE_TABLE(pci, amd64_pci_table);
-
-static struct pci_driver amd64_pci_driver = {
- .name = EDAC_MOD_STR,
- .probe = probe_one_instance,
- .remove = remove_one_instance,
- .id_table = amd64_pci_table,
- .driver.probe_type = PROBE_FORCE_SYNCHRONOUS,
-};
-
static void setup_pci_device(void)
{
struct mem_ctl_info *mci;
@@ -3005,8 +2964,7 @@ static void setup_pci_device(void)
static int __init amd64_edac_init(void)
{
int err = -ENODEV;
-
- printk(KERN_INFO "AMD64 EDAC driver v%s\n", EDAC_AMD64_VERSION);
+ int i;
opstate_init();
@@ -3022,13 +2980,14 @@ static int __init amd64_edac_init(void)
if (!msrs)
goto err_free;
- err = pci_register_driver(&amd64_pci_driver);
- if (err)
- goto err_pci;
+ for (i = 0; i < amd_nb_num(); i++)
+ if (probe_one_instance(i)) {
+ /* unwind properly */
+ while (--i >= 0)
+ remove_one_instance(i);
- err = -ENODEV;
- if (!atomic_read(&drv_instances))
- goto err_no_instances;
+ goto err_pci;
+ }
setup_pci_device();
@@ -3036,10 +2995,9 @@ static int __init amd64_edac_init(void)
amd64_err("%s on 32-bit is unsupported. USE AT YOUR OWN RISK!\n", EDAC_MOD_STR);
#endif
- return 0;
+ printk(KERN_INFO "AMD64 EDAC driver v%s\n", EDAC_AMD64_VERSION);
-err_no_instances:
- pci_unregister_driver(&amd64_pci_driver);
+ return 0;
err_pci:
msrs_free(msrs);
@@ -3055,10 +3013,13 @@ err_ret:
static void __exit amd64_edac_exit(void)
{
+ int i;
+
if (pci_ctl)
edac_pci_release_generic_ctl(pci_ctl);
- pci_unregister_driver(&amd64_pci_driver);
+ for (i = 0; i < amd_nb_num(); i++)
+ remove_one_instance(i);
kfree(ecc_stngs);
ecc_stngs = NULL;
diff --git a/drivers/edac/amd64_edac.h b/drivers/edac/amd64_edac.h
index c0f248f3aaf9..c08870479054 100644
--- a/drivers/edac/amd64_edac.h
+++ b/drivers/edac/amd64_edac.h
@@ -422,7 +422,7 @@ struct low_ops {
struct amd64_family_type {
const char *ctl_name;
- u16 f1_id, f3_id;
+ u16 f1_id, f2_id;
struct low_ops ops;
};
diff --git a/drivers/edac/edac_mc.c b/drivers/edac/edac_mc.c
index 1472f48c8ac6..6aa256b0a1ed 100644
--- a/drivers/edac/edac_mc.c
+++ b/drivers/edac/edac_mc.c
@@ -923,7 +923,7 @@ static void edac_inc_ue_error(struct mem_ctl_info *mci,
mci->ue_mc += count;
if (!enable_per_layer_report) {
- mci->ce_noinfo_count += count;
+ mci->ue_noinfo_count += count;
return;
}
diff --git a/drivers/edac/edac_mc_sysfs.c b/drivers/edac/edac_mc_sysfs.c
index 26e65ab5932a..10c305b4a2e1 100644
--- a/drivers/edac/edac_mc_sysfs.c
+++ b/drivers/edac/edac_mc_sysfs.c
@@ -998,11 +998,12 @@ void edac_remove_sysfs_mci_device(struct mem_ctl_info *mci)
void edac_unregister_sysfs(struct mem_ctl_info *mci)
{
+ struct bus_type *bus = mci->bus;
const char *name = mci->bus->name;
edac_dbg(1, "Unregistering device %s\n", dev_name(&mci->dev));
device_unregister(&mci->dev);
- bus_unregister(mci->bus);
+ bus_unregister(bus);
kfree(name);
}
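The edac_unregister_sysfs() change above is the usual cache-before-release pattern: device_unregister() can drop the last reference and free the object that owns the pointers, so anything still needed afterwards has to be saved first. A generic sketch of that pattern follows; the names foo_owner and foo_teardown are made up for illustration.

#include <linux/device.h>

struct foo_owner {
	struct device dev;
	struct bus_type *bus;	/* lives and dies with the owner */
};

static void foo_teardown(struct foo_owner *foo)
{
	/* Save what is still needed: device_unregister() may free "foo" */
	struct bus_type *bus = foo->bus;

	device_unregister(&foo->dev);	/* last reference may drop here */
	bus_unregister(bus);		/* no dereference of "foo" afterwards */
}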
diff --git a/drivers/edac/i7core_edac.c b/drivers/edac/i7core_edac.c
index 792bdae2b91d..8a68a5e943ea 100644
--- a/drivers/edac/i7core_edac.c
+++ b/drivers/edac/i7core_edac.c
@@ -271,16 +271,6 @@ struct i7core_pvt {
bool is_registered, enable_scrub;
- /* Fifo double buffers */
- struct mce mce_entry[MCE_LOG_LEN];
- struct mce mce_outentry[MCE_LOG_LEN];
-
- /* Fifo in/out counters */
- unsigned mce_in, mce_out;
-
- /* Count indicator to show errors not got */
- unsigned mce_overrun;
-
/* DCLK Frequency used for computing scrub rate */
int dclk_freq;
@@ -1792,56 +1782,15 @@ static void i7core_mce_output_error(struct mem_ctl_info *mci,
* i7core_check_error Retrieve and process errors reported by the
* hardware. Called by the Core module.
*/
-static void i7core_check_error(struct mem_ctl_info *mci)
+static void i7core_check_error(struct mem_ctl_info *mci, struct mce *m)
{
struct i7core_pvt *pvt = mci->pvt_info;
- int i;
- unsigned count = 0;
- struct mce *m;
- /*
- * MCE first step: Copy all mce errors into a temporary buffer
- * We use a double buffering here, to reduce the risk of
- * losing an error.
- */
- smp_rmb();
- count = (pvt->mce_out + MCE_LOG_LEN - pvt->mce_in)
- % MCE_LOG_LEN;
- if (!count)
- goto check_ce_error;
-
- m = pvt->mce_outentry;
- if (pvt->mce_in + count > MCE_LOG_LEN) {
- unsigned l = MCE_LOG_LEN - pvt->mce_in;
-
- memcpy(m, &pvt->mce_entry[pvt->mce_in], sizeof(*m) * l);
- smp_wmb();
- pvt->mce_in = 0;
- count -= l;
- m += l;
- }
- memcpy(m, &pvt->mce_entry[pvt->mce_in], sizeof(*m) * count);
- smp_wmb();
- pvt->mce_in += count;
-
- smp_rmb();
- if (pvt->mce_overrun) {
- i7core_printk(KERN_ERR, "Lost %d memory errors\n",
- pvt->mce_overrun);
- smp_wmb();
- pvt->mce_overrun = 0;
- }
-
- /*
- * MCE second step: parse errors and display
- */
- for (i = 0; i < count; i++)
- i7core_mce_output_error(mci, &pvt->mce_outentry[i]);
+ i7core_mce_output_error(mci, m);
/*
* Now, let's increment CE error counts
*/
-check_ce_error:
if (!pvt->is_registered)
i7core_udimm_check_mc_ecc_err(mci);
else
@@ -1849,12 +1798,8 @@ check_ce_error:
}
/*
- * i7core_mce_check_error Replicates mcelog routine to get errors
- * This routine simply queues mcelog errors, and
- * return. The error itself should be handled later
- * by i7core_check_error.
- * WARNING: As this routine should be called at NMI time, extra care should
- * be taken to avoid deadlocks, and to be as fast as possible.
+ * Check that logging is enabled and that this is the right type
+ * of error for us to handle.
*/
static int i7core_mce_check_error(struct notifier_block *nb, unsigned long val,
void *data)
@@ -1882,21 +1827,7 @@ static int i7core_mce_check_error(struct notifier_block *nb, unsigned long val,
if (mce->bank != 8)
return NOTIFY_DONE;
- smp_rmb();
- if ((pvt->mce_out + 1) % MCE_LOG_LEN == pvt->mce_in) {
- smp_wmb();
- pvt->mce_overrun++;
- return NOTIFY_DONE;
- }
-
- /* Copy memory error at the ringbuffer */
- memcpy(&pvt->mce_entry[pvt->mce_out], mce, sizeof(*mce));
- smp_wmb();
- pvt->mce_out = (pvt->mce_out + 1) % MCE_LOG_LEN;
-
- /* Handle fatal errors immediately */
- if (mce->mcgstatus & 1)
- i7core_check_error(mci);
+ i7core_check_error(mci, mce);
/* Advise mcelog that the errors were handled */
return NOTIFY_STOP;
@@ -2243,8 +2174,6 @@ static int i7core_register_mci(struct i7core_dev *i7core_dev)
get_dimm_config(mci);
/* record ptr to the generic device */
mci->pdev = &i7core_dev->pdev[0]->dev;
- /* Set the function pointer to an actual operation function */
- mci->edac_check = i7core_check_error;
/* Enable scrubrate setting */
if (pvt->enable_scrub)
diff --git a/drivers/edac/ie31200_edac.c b/drivers/edac/ie31200_edac.c
index 18d77ace4813..1c88d9707495 100644
--- a/drivers/edac/ie31200_edac.c
+++ b/drivers/edac/ie31200_edac.c
@@ -17,6 +17,7 @@
* 015c: Xeon E3-1200 v2/3rd Gen Core processor DRAM Controller
* 0c04: Xeon E3-1200 v3/4th Gen Core Processor DRAM Controller
* 0c08: Xeon E3-1200 v3 Processor DRAM Controller
+ * 1918: Xeon E3-1200 v5 Skylake Host Bridge/DRAM Registers
*
* Based on Intel specification:
* http://www.intel.com/content/dam/www/public/us/en/documents/datasheets/xeon-e3-1200v3-vol-2-datasheet.pdf
@@ -55,6 +56,7 @@
#define PCI_DEVICE_ID_INTEL_IE31200_HB_5 0x015c
#define PCI_DEVICE_ID_INTEL_IE31200_HB_6 0x0c04
#define PCI_DEVICE_ID_INTEL_IE31200_HB_7 0x0c08
+#define PCI_DEVICE_ID_INTEL_IE31200_HB_8 0x1918
#define IE31200_DIMMS 4
#define IE31200_RANKS 8
@@ -105,8 +107,11 @@
* 1 Multiple Bit Error Status (MERRSTS)
* 0 Correctable Error Status (CERRSTS)
*/
+
#define IE31200_C0ECCERRLOG 0x40c8
#define IE31200_C1ECCERRLOG 0x44c8
+#define IE31200_C0ECCERRLOG_SKL 0x4048
+#define IE31200_C1ECCERRLOG_SKL 0x4448
#define IE31200_ECCERRLOG_CE BIT(0)
#define IE31200_ECCERRLOG_UE BIT(1)
#define IE31200_ECCERRLOG_RANK_BITS GENMASK_ULL(28, 27)
@@ -123,17 +128,28 @@
#define IE31200_CAPID0_DDPCD BIT(6)
#define IE31200_CAPID0_ECC BIT(1)
-#define IE31200_MAD_DIMM_0_OFFSET 0x5004
-#define IE31200_MAD_DIMM_SIZE GENMASK_ULL(7, 0)
-#define IE31200_MAD_DIMM_A_RANK BIT(17)
-#define IE31200_MAD_DIMM_A_WIDTH BIT(19)
-
-#define IE31200_PAGES(n) (n << (28 - PAGE_SHIFT))
+#define IE31200_MAD_DIMM_0_OFFSET 0x5004
+#define IE31200_MAD_DIMM_0_OFFSET_SKL 0x500C
+#define IE31200_MAD_DIMM_SIZE GENMASK_ULL(7, 0)
+#define IE31200_MAD_DIMM_A_RANK BIT(17)
+#define IE31200_MAD_DIMM_A_RANK_SHIFT 17
+#define IE31200_MAD_DIMM_A_RANK_SKL BIT(10)
+#define IE31200_MAD_DIMM_A_RANK_SKL_SHIFT 10
+#define IE31200_MAD_DIMM_A_WIDTH BIT(19)
+#define IE31200_MAD_DIMM_A_WIDTH_SHIFT 19
+#define IE31200_MAD_DIMM_A_WIDTH_SKL GENMASK_ULL(9, 8)
+#define IE31200_MAD_DIMM_A_WIDTH_SKL_SHIFT 8
+
+/* Skylake reports 1GB increments, everything else is 256MB */
+#define IE31200_PAGES(n, skl) \
+ (n << (28 + (2 * skl) - PAGE_SHIFT))
static int nr_channels;
struct ie31200_priv {
void __iomem *window;
+ void __iomem *c0errlog;
+ void __iomem *c1errlog;
};
enum ie31200_chips {
@@ -157,9 +173,9 @@ static const struct ie31200_dev_info ie31200_devs[] = {
};
struct dimm_data {
- u8 size; /* in 256MB multiples */
+ u8 size; /* in multiples of 256MB, except Skylake is 1GB */
u8 dual_rank : 1,
- x16_width : 1; /* 0 means x8 width */
+ x16_width : 2; /* 0 means x8 width */
};
static int how_many_channels(struct pci_dev *pdev)
@@ -197,11 +213,10 @@ static bool ecc_capable(struct pci_dev *pdev)
return true;
}
-static int eccerrlog_row(int channel, u64 log)
+static int eccerrlog_row(u64 log)
{
- int rank = ((log & IE31200_ECCERRLOG_RANK_BITS) >>
- IE31200_ECCERRLOG_RANK_SHIFT);
- return rank | (channel * IE31200_RANKS_PER_CHANNEL);
+ return ((log & IE31200_ECCERRLOG_RANK_BITS) >>
+ IE31200_ECCERRLOG_RANK_SHIFT);
}
static void ie31200_clear_error_info(struct mem_ctl_info *mci)
@@ -219,7 +234,6 @@ static void ie31200_get_and_clear_error_info(struct mem_ctl_info *mci,
{
struct pci_dev *pdev;
struct ie31200_priv *priv = mci->pvt_info;
- void __iomem *window = priv->window;
pdev = to_pci_dev(mci->pdev);
@@ -232,9 +246,9 @@ static void ie31200_get_and_clear_error_info(struct mem_ctl_info *mci,
if (!(info->errsts & IE31200_ERRSTS_BITS))
return;
- info->eccerrlog[0] = lo_hi_readq(window + IE31200_C0ECCERRLOG);
+ info->eccerrlog[0] = lo_hi_readq(priv->c0errlog);
if (nr_channels == 2)
- info->eccerrlog[1] = lo_hi_readq(window + IE31200_C1ECCERRLOG);
+ info->eccerrlog[1] = lo_hi_readq(priv->c1errlog);
pci_read_config_word(pdev, IE31200_ERRSTS, &info->errsts2);
@@ -245,10 +259,10 @@ static void ie31200_get_and_clear_error_info(struct mem_ctl_info *mci,
* should be UE info.
*/
if ((info->errsts ^ info->errsts2) & IE31200_ERRSTS_BITS) {
- info->eccerrlog[0] = lo_hi_readq(window + IE31200_C0ECCERRLOG);
+ info->eccerrlog[0] = lo_hi_readq(priv->c0errlog);
if (nr_channels == 2)
info->eccerrlog[1] =
- lo_hi_readq(window + IE31200_C1ECCERRLOG);
+ lo_hi_readq(priv->c1errlog);
}
ie31200_clear_error_info(mci);
@@ -274,14 +288,14 @@ static void ie31200_process_error_info(struct mem_ctl_info *mci,
if (log & IE31200_ECCERRLOG_UE) {
edac_mc_handle_error(HW_EVENT_ERR_UNCORRECTED, mci, 1,
0, 0, 0,
- eccerrlog_row(channel, log),
+ eccerrlog_row(log),
channel, -1,
"ie31200 UE", "");
} else if (log & IE31200_ECCERRLOG_CE) {
edac_mc_handle_error(HW_EVENT_ERR_CORRECTED, mci, 1,
0, 0,
IE31200_ECCERRLOG_SYNDROME(log),
- eccerrlog_row(channel, log),
+ eccerrlog_row(log),
channel, -1,
"ie31200 CE", "");
}
@@ -326,6 +340,33 @@ static void __iomem *ie31200_map_mchbar(struct pci_dev *pdev)
return window;
}
+static void __skl_populate_dimm_info(struct dimm_data *dd, u32 addr_decode,
+ int chan)
+{
+ dd->size = (addr_decode >> (chan << 4)) & IE31200_MAD_DIMM_SIZE;
+ dd->dual_rank = (addr_decode & (IE31200_MAD_DIMM_A_RANK_SKL << (chan << 4))) ? 1 : 0;
+ dd->x16_width = ((addr_decode & (IE31200_MAD_DIMM_A_WIDTH_SKL << (chan << 4))) >>
+ (IE31200_MAD_DIMM_A_WIDTH_SKL_SHIFT + (chan << 4)));
+}
+
+static void __populate_dimm_info(struct dimm_data *dd, u32 addr_decode,
+ int chan)
+{
+ dd->size = (addr_decode >> (chan << 3)) & IE31200_MAD_DIMM_SIZE;
+ dd->dual_rank = (addr_decode & (IE31200_MAD_DIMM_A_RANK << chan)) ? 1 : 0;
+ dd->x16_width = (addr_decode & (IE31200_MAD_DIMM_A_WIDTH << chan)) ? 1 : 0;
+}
+
+static void populate_dimm_info(struct dimm_data *dd, u32 addr_decode, int chan,
+ bool skl)
+{
+ if (skl)
+ __skl_populate_dimm_info(dd, addr_decode, chan);
+ else
+ __populate_dimm_info(dd, addr_decode, chan);
+}
+
static int ie31200_probe1(struct pci_dev *pdev, int dev_idx)
{
int i, j, ret;
@@ -334,7 +375,8 @@ static int ie31200_probe1(struct pci_dev *pdev, int dev_idx)
struct dimm_data dimm_info[IE31200_CHANNELS][IE31200_DIMMS_PER_CHANNEL];
void __iomem *window;
struct ie31200_priv *priv;
- u32 addr_decode;
+ u32 addr_decode, mad_offset;
+ bool skl = (pdev->device == PCI_DEVICE_ID_INTEL_IE31200_HB_8);
edac_dbg(0, "MC:\n");
@@ -363,7 +405,10 @@ static int ie31200_probe1(struct pci_dev *pdev, int dev_idx)
edac_dbg(3, "MC: init mci\n");
mci->pdev = &pdev->dev;
- mci->mtype_cap = MEM_FLAG_DDR3;
+ if (skl)
+ mci->mtype_cap = MEM_FLAG_DDR4;
+ else
+ mci->mtype_cap = MEM_FLAG_DDR3;
mci->edac_ctl_cap = EDAC_FLAG_SECDED;
mci->edac_cap = EDAC_FLAG_SECDED;
mci->mod_name = EDAC_MOD_STR;
@@ -374,19 +419,24 @@ static int ie31200_probe1(struct pci_dev *pdev, int dev_idx)
mci->ctl_page_to_phys = NULL;
priv = mci->pvt_info;
priv->window = window;
+ if (skl) {
+ priv->c0errlog = window + IE31200_C0ECCERRLOG_SKL;
+ priv->c1errlog = window + IE31200_C1ECCERRLOG_SKL;
+ mad_offset = IE31200_MAD_DIMM_0_OFFSET_SKL;
+ } else {
+ priv->c0errlog = window + IE31200_C0ECCERRLOG;
+ priv->c1errlog = window + IE31200_C1ECCERRLOG;
+ mad_offset = IE31200_MAD_DIMM_0_OFFSET;
+ }
/* populate DIMM info */
for (i = 0; i < IE31200_CHANNELS; i++) {
- addr_decode = readl(window + IE31200_MAD_DIMM_0_OFFSET +
+ addr_decode = readl(window + mad_offset +
(i * 4));
edac_dbg(0, "addr_decode: 0x%x\n", addr_decode);
for (j = 0; j < IE31200_DIMMS_PER_CHANNEL; j++) {
- dimm_info[i][j].size = (addr_decode >> (j * 8)) &
- IE31200_MAD_DIMM_SIZE;
- dimm_info[i][j].dual_rank = (addr_decode &
- (IE31200_MAD_DIMM_A_RANK << j)) ? 1 : 0;
- dimm_info[i][j].x16_width = (addr_decode &
- (IE31200_MAD_DIMM_A_WIDTH << j)) ? 1 : 0;
+ populate_dimm_info(&dimm_info[i][j], addr_decode, j,
+ skl);
edac_dbg(0, "size: 0x%x, rank: %d, width: %d\n",
dimm_info[i][j].size,
dimm_info[i][j].dual_rank,
@@ -405,7 +455,7 @@ static int ie31200_probe1(struct pci_dev *pdev, int dev_idx)
struct dimm_info *dimm;
unsigned long nr_pages;
- nr_pages = IE31200_PAGES(dimm_info[j][i].size);
+ nr_pages = IE31200_PAGES(dimm_info[j][i].size, skl);
if (nr_pages == 0)
continue;
@@ -417,7 +467,10 @@ static int ie31200_probe1(struct pci_dev *pdev, int dev_idx)
dimm->nr_pages = nr_pages;
edac_dbg(0, "set nr pages: 0x%lx\n", nr_pages);
dimm->grain = 8; /* just a guess */
- dimm->mtype = MEM_DDR3;
+ if (skl)
+ dimm->mtype = MEM_DDR4;
+ else
+ dimm->mtype = MEM_DDR3;
dimm->dtype = DEV_UNKNOWN;
dimm->edac_mode = EDAC_UNKNOWN;
}
@@ -426,7 +479,10 @@ static int ie31200_probe1(struct pci_dev *pdev, int dev_idx)
dimm->nr_pages = nr_pages;
edac_dbg(0, "set nr pages: 0x%lx\n", nr_pages);
dimm->grain = 8; /* same guess */
- dimm->mtype = MEM_DDR3;
+ if (skl)
+ dimm->mtype = MEM_DDR4;
+ else
+ dimm->mtype = MEM_DDR3;
dimm->dtype = DEV_UNKNOWN;
dimm->edac_mode = EDAC_UNKNOWN;
}
@@ -501,6 +557,9 @@ static const struct pci_device_id ie31200_pci_tbl[] = {
PCI_VEND_DEV(INTEL, IE31200_HB_7), PCI_ANY_ID, PCI_ANY_ID, 0, 0,
IE31200},
{
+ PCI_VEND_DEV(INTEL, IE31200_HB_8), PCI_ANY_ID, PCI_ANY_ID, 0, 0,
+ IE31200},
+ {
0,
} /* 0 terminated list. */
};
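As a quick check of the IE31200_PAGES() arithmetic introduced above: with the usual 4 KiB pages (PAGE_SHIFT == 12) the pre-Skylake case shifts by 16, so one size unit is 65536 pages = 256 MiB, while the Skylake case shifts by 18, so one unit is 262144 pages = 1 GiB. The standalone snippet below simply re-evaluates the macro with example values; the numbers are illustrative, not taken from hardware.

#define PAGE_SHIFT	12			/* assume 4 KiB pages */
#define IE31200_PAGES(n, skl)	\
	(n << (28 + (2 * skl) - PAGE_SHIFT))	/* copied from the hunk above */

int main(void)
{
	unsigned long pre_skl = IE31200_PAGES(2UL, 0);	/* 2 x 256 MiB -> 131072 pages */
	unsigned long skl     = IE31200_PAGES(2UL, 1);	/* 2 x 1 GiB   -> 524288 pages */

	return !(pre_skl == 131072UL && skl == 524288UL);	/* 0 on success */
}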
diff --git a/drivers/edac/mce_amd.c b/drivers/edac/mce_amd.c
index 49768c08ac07..9b6800a79c7f 100644
--- a/drivers/edac/mce_amd.c
+++ b/drivers/edac/mce_amd.c
@@ -1052,7 +1052,6 @@ int amd_decode_mce(struct notifier_block *nb, unsigned long val, void *data)
struct mce *m = (struct mce *)data;
struct cpuinfo_x86 *c = &cpu_data(m->extcpu);
int ecc;
- u32 ebx = cpuid_ebx(0x80000007);
if (amd_filter_mce(m))
return NOTIFY_STOP;
@@ -1075,7 +1074,7 @@ int amd_decode_mce(struct notifier_block *nb, unsigned long val, void *data)
((m->status & MCI_STATUS_DEFERRED) ? "Deferred" : "-"),
((m->status & MCI_STATUS_POISON) ? "Poison" : "-"));
- if (!!(ebx & BIT(3))) {
+ if (boot_cpu_has(X86_FEATURE_SMCA)) {
u32 low, high;
u32 addr = MSR_AMD64_SMCA_MCx_CONFIG(m->bank);
@@ -1094,7 +1093,7 @@ int amd_decode_mce(struct notifier_block *nb, unsigned long val, void *data)
if (m->status & MCI_STATUS_ADDRV)
pr_emerg(HW_ERR "MC%d Error Address: 0x%016llx\n", m->bank, m->addr);
- if (!!(ebx & BIT(3))) {
+ if (boot_cpu_has(X86_FEATURE_SMCA)) {
decode_smca_errors(m);
goto err_code;
}
@@ -1149,7 +1148,6 @@ static struct notifier_block amd_mce_dec_nb = {
static int __init mce_amd_init(void)
{
struct cpuinfo_x86 *c = &boot_cpu_data;
- u32 ebx;
if (c->x86_vendor != X86_VENDOR_AMD)
return -ENODEV;
@@ -1205,9 +1203,8 @@ static int __init mce_amd_init(void)
break;
case 0x17:
- ebx = cpuid_ebx(0x80000007);
xec_mask = 0x3f;
- if (!(ebx & BIT(3))) {
+ if (!boot_cpu_has(X86_FEATURE_SMCA)) {
printk(KERN_WARNING "Decoding supported only on Scalable MCA processors.\n");
goto err_out;
}
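The cpuid_ebx(0x80000007) & BIT(3) checks dropped above test the Scalable MCA capability, which the kernel already tracks as the X86_FEATURE_SMCA flag, so boot_cpu_has() is the tidier equivalent. A one-line illustration (the wrapper name is invented):

#include <linux/types.h>
#include <asm/cpufeature.h>

/* X86_FEATURE_SMCA mirrors CPUID leaf 0x80000007, EBX bit 3 */
static inline bool have_scalable_mca(void)
{
	return boot_cpu_has(X86_FEATURE_SMCA);
}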
diff --git a/drivers/edac/sb_edac.c b/drivers/edac/sb_edac.c
index 8bf745d2da7e..b4d0bf6534cf 100644
--- a/drivers/edac/sb_edac.c
+++ b/drivers/edac/sb_edac.c
@@ -21,6 +21,8 @@
#include <linux/smp.h>
#include <linux/bitmap.h>
#include <linux/math64.h>
+#include <linux/mod_devicetable.h>
+#include <asm/cpu_device_id.h>
#include <asm/processor.h>
#include <asm/mce.h>
@@ -28,8 +30,6 @@
/* Static vars */
static LIST_HEAD(sbridge_edac_list);
-static DEFINE_MUTEX(sbridge_edac_lock);
-static int probed;
/*
* Alter this version for the module when modifications are made
@@ -364,16 +364,6 @@ struct sbridge_pvt {
bool is_mirrored, is_lockstep, is_close_pg;
bool is_chan_hash;
- /* Fifo double buffers */
- struct mce mce_entry[MCE_LOG_LEN];
- struct mce mce_outentry[MCE_LOG_LEN];
-
- /* Fifo in/out counters */
- unsigned mce_in, mce_out;
-
- /* Count indicator to show errors not got */
- unsigned mce_overrun;
-
/* Memory description */
u64 tolm, tohm;
struct knl_pvt knl;
@@ -662,18 +652,6 @@ static const struct pci_id_table pci_dev_descr_broadwell_table[] = {
{0,} /* 0 terminated list. */
};
-/*
- * pci_device_id table for which devices we are looking for
- */
-static const struct pci_device_id sbridge_pci_tbl[] = {
- {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_SBRIDGE_IMC_HA0)},
- {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_IBRIDGE_IMC_HA0_TA)},
- {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_HASWELL_IMC_HA0)},
- {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_BROADWELL_IMC_HA0)},
- {PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_INTEL_KNL_IMC_SAD0)},
- {0,} /* 0 terminated list. */
-};
-
/****************************************************************************
Ancillary status routines
@@ -3097,63 +3075,8 @@ err_parsing:
}
/*
- * sbridge_check_error Retrieve and process errors reported by the
- * hardware. Called by the Core module.
- */
-static void sbridge_check_error(struct mem_ctl_info *mci)
-{
- struct sbridge_pvt *pvt = mci->pvt_info;
- int i;
- unsigned count = 0;
- struct mce *m;
-
- /*
- * MCE first step: Copy all mce errors into a temporary buffer
- * We use a double buffering here, to reduce the risk of
- * loosing an error.
- */
- smp_rmb();
- count = (pvt->mce_out + MCE_LOG_LEN - pvt->mce_in)
- % MCE_LOG_LEN;
- if (!count)
- return;
-
- m = pvt->mce_outentry;
- if (pvt->mce_in + count > MCE_LOG_LEN) {
- unsigned l = MCE_LOG_LEN - pvt->mce_in;
-
- memcpy(m, &pvt->mce_entry[pvt->mce_in], sizeof(*m) * l);
- smp_wmb();
- pvt->mce_in = 0;
- count -= l;
- m += l;
- }
- memcpy(m, &pvt->mce_entry[pvt->mce_in], sizeof(*m) * count);
- smp_wmb();
- pvt->mce_in += count;
-
- smp_rmb();
- if (pvt->mce_overrun) {
- sbridge_printk(KERN_ERR, "Lost %d memory errors\n",
- pvt->mce_overrun);
- smp_wmb();
- pvt->mce_overrun = 0;
- }
-
- /*
- * MCE second step: parse errors and display
- */
- for (i = 0; i < count; i++)
- sbridge_mce_output_error(mci, &pvt->mce_outentry[i]);
-}
-
-/*
- * sbridge_mce_check_error Replicates mcelog routine to get errors
- * This routine simply queues mcelog errors, and
- * return. The error itself should be handled later
- * by sbridge_check_error.
- * WARNING: As this routine should be called at NMI time, extra care should
- * be taken to avoid deadlocks, and to be as fast as possible.
+ * Check that logging is enabled and that this is the right type
+ * of error for us to handle.
*/
static int sbridge_mce_check_error(struct notifier_block *nb, unsigned long val,
void *data)
@@ -3198,21 +3121,7 @@ static int sbridge_mce_check_error(struct notifier_block *nb, unsigned long val,
"%u APIC %x\n", mce->cpuvendor, mce->cpuid,
mce->time, mce->socketid, mce->apicid);
- smp_rmb();
- if ((pvt->mce_out + 1) % MCE_LOG_LEN == pvt->mce_in) {
- smp_wmb();
- pvt->mce_overrun++;
- return NOTIFY_DONE;
- }
-
- /* Copy memory error at the ringbuffer */
- memcpy(&pvt->mce_entry[pvt->mce_out], mce, sizeof(*mce));
- smp_wmb();
- pvt->mce_out = (pvt->mce_out + 1) % MCE_LOG_LEN;
-
- /* Handle fatal errors immediately */
- if (mce->mcgstatus & 1)
- sbridge_check_error(mci);
+ sbridge_mce_output_error(mci, mce);
/* Advise mcelog that the errors were handled */
return NOTIFY_STOP;
@@ -3298,9 +3207,6 @@ static int sbridge_register_mci(struct sbridge_dev *sbridge_dev, enum type type)
mci->dev_name = pci_name(pdev);
mci->ctl_page_to_phys = NULL;
- /* Set the function pointer to an actual operation function */
- mci->edac_check = sbridge_check_error;
-
pvt->info.type = type;
switch (type) {
case IVY_BRIDGE:
@@ -3448,62 +3354,40 @@ fail0:
return rc;
}
+#define ICPU(model, table) \
+ { X86_VENDOR_INTEL, 6, model, 0, (unsigned long)&table }
+
+/* Order here must match "enum type" */
+static const struct x86_cpu_id sbridge_cpuids[] = {
+ ICPU(0x2d, pci_dev_descr_sbridge_table), /* SANDY_BRIDGE */
+ ICPU(0x3e, pci_dev_descr_ibridge_table), /* IVY_BRIDGE */
+ ICPU(0x3f, pci_dev_descr_haswell_table), /* HASWELL */
+ ICPU(0x4f, pci_dev_descr_broadwell_table), /* BROADWELL */
+ ICPU(0x57, pci_dev_descr_knl_table), /* KNIGHTS_LANDING */
+ { }
+};
+MODULE_DEVICE_TABLE(x86cpu, sbridge_cpuids);
+
/*
- * sbridge_probe Probe for ONE instance of device to see if it is
+ * sbridge_probe Get all devices and register memory controllers
* present.
* return:
* 0 for FOUND a device
* < 0 for error code
*/
-static int sbridge_probe(struct pci_dev *pdev, const struct pci_device_id *id)
+static int sbridge_probe(const struct x86_cpu_id *id)
{
int rc = -ENODEV;
u8 mc, num_mc = 0;
struct sbridge_dev *sbridge_dev;
- enum type type = SANDY_BRIDGE;
+ struct pci_id_table *ptable = (struct pci_id_table *)id->driver_data;
/* get the pci devices we want to reserve for our use */
- mutex_lock(&sbridge_edac_lock);
-
- /*
- * All memory controllers are allocated at the first pass.
- */
- if (unlikely(probed >= 1)) {
- mutex_unlock(&sbridge_edac_lock);
- return -ENODEV;
- }
- probed++;
+ rc = sbridge_get_all_devices(&num_mc, ptable);
- switch (pdev->device) {
- case PCI_DEVICE_ID_INTEL_IBRIDGE_IMC_HA0_TA:
- rc = sbridge_get_all_devices(&num_mc,
- pci_dev_descr_ibridge_table);
- type = IVY_BRIDGE;
- break;
- case PCI_DEVICE_ID_INTEL_SBRIDGE_IMC_HA0:
- rc = sbridge_get_all_devices(&num_mc,
- pci_dev_descr_sbridge_table);
- type = SANDY_BRIDGE;
- break;
- case PCI_DEVICE_ID_INTEL_HASWELL_IMC_HA0:
- rc = sbridge_get_all_devices(&num_mc,
- pci_dev_descr_haswell_table);
- type = HASWELL;
- break;
- case PCI_DEVICE_ID_INTEL_BROADWELL_IMC_HA0:
- rc = sbridge_get_all_devices(&num_mc,
- pci_dev_descr_broadwell_table);
- type = BROADWELL;
- break;
- case PCI_DEVICE_ID_INTEL_KNL_IMC_SAD0:
- rc = sbridge_get_all_devices_knl(&num_mc,
- pci_dev_descr_knl_table);
- type = KNIGHTS_LANDING;
- break;
- }
if (unlikely(rc < 0)) {
- edac_dbg(0, "couldn't get all devices for 0x%x\n", pdev->device);
+ edac_dbg(0, "couldn't get all devices\n");
goto fail0;
}
@@ -3514,14 +3398,13 @@ static int sbridge_probe(struct pci_dev *pdev, const struct pci_device_id *id)
mc, mc + 1, num_mc);
sbridge_dev->mc = mc++;
- rc = sbridge_register_mci(sbridge_dev, type);
+ rc = sbridge_register_mci(sbridge_dev, id - sbridge_cpuids);
if (unlikely(rc < 0))
goto fail1;
}
sbridge_printk(KERN_INFO, "%s\n", SBRIDGE_REVISION);
- mutex_unlock(&sbridge_edac_lock);
return 0;
fail1:
@@ -3530,74 +3413,47 @@ fail1:
sbridge_put_all_devices();
fail0:
- mutex_unlock(&sbridge_edac_lock);
return rc;
}
/*
- * sbridge_remove destructor for one instance of device
+ * sbridge_remove cleanup
*
*/
-static void sbridge_remove(struct pci_dev *pdev)
+static void sbridge_remove(void)
{
struct sbridge_dev *sbridge_dev;
edac_dbg(0, "\n");
- /*
- * we have a trouble here: pdev value for removal will be wrong, since
- * it will point to the X58 register used to detect that the machine
- * is a Nehalem or upper design. However, due to the way several PCI
- * devices are grouped together to provide MC functionality, we need
- * to use a different method for releasing the devices
- */
-
- mutex_lock(&sbridge_edac_lock);
-
- if (unlikely(!probed)) {
- mutex_unlock(&sbridge_edac_lock);
- return;
- }
-
list_for_each_entry(sbridge_dev, &sbridge_edac_list, list)
sbridge_unregister_mci(sbridge_dev);
/* Release PCI resources */
sbridge_put_all_devices();
-
- probed--;
-
- mutex_unlock(&sbridge_edac_lock);
}
-MODULE_DEVICE_TABLE(pci, sbridge_pci_tbl);
-
-/*
- * sbridge_driver pci_driver structure for this module
- *
- */
-static struct pci_driver sbridge_driver = {
- .name = "sbridge_edac",
- .probe = sbridge_probe,
- .remove = sbridge_remove,
- .id_table = sbridge_pci_tbl,
-};
-
/*
* sbridge_init Module entry function
* Try to initialize this module for its devices
*/
static int __init sbridge_init(void)
{
- int pci_rc;
+ const struct x86_cpu_id *id;
+ int rc;
edac_dbg(2, "\n");
+ id = x86_match_cpu(sbridge_cpuids);
+ if (!id)
+ return -ENODEV;
+
/* Ensure that the OPSTATE is set correctly for POLL or NMI */
opstate_init();
- pci_rc = pci_register_driver(&sbridge_driver);
- if (pci_rc >= 0) {
+ rc = sbridge_probe(id);
+
+ if (rc >= 0) {
mce_register_decode_chain(&sbridge_mce_dec);
if (get_edac_report_status() == EDAC_REPORTING_DISABLED)
sbridge_printk(KERN_WARNING, "Loading driver, error reporting disabled.\n");
@@ -3605,9 +3461,9 @@ static int __init sbridge_init(void)
}
sbridge_printk(KERN_ERR, "Failed to register device with error %d.\n",
- pci_rc);
+ rc);
- return pci_rc;
+ return rc;
}
/*
@@ -3617,7 +3473,7 @@ static int __init sbridge_init(void)
static void __exit sbridge_exit(void)
{
edac_dbg(2, "\n");
- pci_unregister_driver(&sbridge_driver);
+ sbridge_remove();
mce_unregister_decode_chain(&sbridge_mce_dec);
}
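The sb_edac conversion above replaces PCI-ID probing with a CPU-model match: the per-model PCI table is stashed in x86_cpu_id.driver_data and looked up once with x86_match_cpu(). The skeleton below shows that pattern on its own; the module and structure names are invented for illustration, and the model numbers are just the two Intel examples from the table above.

#include <linux/module.h>
#include <linux/mod_devicetable.h>
#include <asm/cpu_device_id.h>
#include <asm/processor.h>

struct model_cfg {
	const char *name;
};

static const struct model_cfg ivb_cfg = { .name = "IvyBridge" };
static const struct model_cfg hsw_cfg = { .name = "Haswell" };

/* vendor, family, model, feature, driver_data -- same layout as ICPU() above */
static const struct x86_cpu_id demo_cpuids[] = {
	{ X86_VENDOR_INTEL, 6, 0x3e, 0, (unsigned long)&ivb_cfg },
	{ X86_VENDOR_INTEL, 6, 0x3f, 0, (unsigned long)&hsw_cfg },
	{ }
};
MODULE_DEVICE_TABLE(x86cpu, demo_cpuids);

static int __init demo_init(void)
{
	const struct x86_cpu_id *id = x86_match_cpu(demo_cpuids);

	if (!id)
		return -ENODEV;		/* CPU not handled by this module */

	pr_info("demo: matched %s\n",
		((const struct model_cfg *)id->driver_data)->name);
	return 0;
}
module_init(demo_init);

static void __exit demo_exit(void)
{
}
module_exit(demo_exit);

MODULE_LICENSE("GPL");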
diff --git a/drivers/firewire/net.c b/drivers/firewire/net.c
index f4ea80d602f7..309311b1faae 100644
--- a/drivers/firewire/net.c
+++ b/drivers/firewire/net.c
@@ -1023,7 +1023,7 @@ static int fwnet_send_packet(struct fwnet_packet_task *ptask)
spin_unlock_irqrestore(&dev->lock, flags);
- dev->netdev->trans_start = jiffies;
+ netif_trans_update(dev->netdev);
out:
if (free)
fwnet_free_ptask(ptask);
diff --git a/drivers/firmware/broadcom/Kconfig b/drivers/firmware/broadcom/Kconfig
index 6bed119930dd..3c7e5b741e37 100644
--- a/drivers/firmware/broadcom/Kconfig
+++ b/drivers/firmware/broadcom/Kconfig
@@ -9,3 +9,14 @@ config BCM47XX_NVRAM
This driver provides an easy way to get value of requested parameter.
It simply reads content of NVRAM and parses it. It doesn't control any
hardware part itself.
+
+config BCM47XX_SPROM
+ bool "Broadcom SPROM driver"
+ depends on BCM47XX_NVRAM
+ help
+ Broadcom devices store configuration data in SPROM. Accessing it is
+ specific to the bus host type, e.g. PCI(e) devices have it mapped in
+ a PCI BAR.
+ In case of SoC devices, the SPROM content is stored on a flash used by
+ the bootloader firmware CFE. This driver provides a method for the ssb
+ and bcma drivers to read the SPROM on SoCs.
diff --git a/drivers/firmware/broadcom/Makefile b/drivers/firmware/broadcom/Makefile
index d0e683583cd6..f93efc479b8b 100644
--- a/drivers/firmware/broadcom/Makefile
+++ b/drivers/firmware/broadcom/Makefile
@@ -1 +1,2 @@
obj-$(CONFIG_BCM47XX_NVRAM) += bcm47xx_nvram.o
+obj-$(CONFIG_BCM47XX_SPROM) += bcm47xx_sprom.o
diff --git a/arch/mips/bcm47xx/sprom.c b/drivers/firmware/broadcom/bcm47xx_sprom.c
index ca7ad131d057..b6eb875d4af3 100644
--- a/arch/mips/bcm47xx/sprom.c
+++ b/drivers/firmware/broadcom/bcm47xx_sprom.c
@@ -26,9 +26,11 @@
* 675 Mass Ave, Cambridge, MA 02139, USA.
*/
-#include <bcm47xx.h>
-#include <linux/if_ether.h>
+#include <linux/bcm47xx_nvram.h>
+#include <linux/bcma/bcma.h>
#include <linux/etherdevice.h>
+#include <linux/if_ether.h>
+#include <linux/ssb/ssb.h>
static void create_key(const char *prefix, const char *postfix,
const char *name, char *buf, int len)
@@ -599,7 +601,7 @@ void bcm47xx_fill_sprom(struct ssb_sprom *sprom, const char *prefix,
bcm47xx_sprom_fill_auto(sprom, prefix, fallback);
}
-#if defined(CONFIG_BCM47XX_SSB)
+#if IS_BUILTIN(CONFIG_SSB) && IS_ENABLED(CONFIG_SSB_SPROM)
static int bcm47xx_get_sprom_ssb(struct ssb_bus *bus, struct ssb_sprom *out)
{
char prefix[10];
@@ -622,7 +624,7 @@ static int bcm47xx_get_sprom_ssb(struct ssb_bus *bus, struct ssb_sprom *out)
}
#endif
-#if defined(CONFIG_BCM47XX_BCMA)
+#if IS_BUILTIN(CONFIG_BCMA)
/*
* Having many NVRAM entries for PCI devices led to repeating prefixes like
* pci/1/1/ all the time and wasting flash space. So at some point Broadcom
@@ -706,19 +708,30 @@ static int bcm47xx_get_sprom_bcma(struct bcma_bus *bus, struct ssb_sprom *out)
}
#endif
+static unsigned int bcm47xx_sprom_registered;
+
/*
* On bcm47xx we need to register SPROM fallback handler very early, so we can't
* use anything like platform device / driver for this.
*/
-void bcm47xx_sprom_register_fallbacks(void)
+int bcm47xx_sprom_register_fallbacks(void)
{
-#if defined(CONFIG_BCM47XX_SSB)
+ if (bcm47xx_sprom_registered)
+ return 0;
+
+#if IS_BUILTIN(CONFIG_SSB) && IS_ENABLED(CONFIG_SSB_SPROM)
if (ssb_arch_register_fallback_sprom(&bcm47xx_get_sprom_ssb))
pr_warn("Failed to register ssb SPROM handler\n");
#endif
-#if defined(CONFIG_BCM47XX_BCMA)
+#if IS_BUILTIN(CONFIG_BCMA)
if (bcma_arch_register_fallback_sprom(&bcm47xx_get_sprom_bcma))
pr_warn("Failed to register bcma SPROM handler\n");
#endif
+
+ bcm47xx_sprom_registered = 1;
+
+ return 0;
}
+
+fs_initcall(bcm47xx_sprom_register_fallbacks);
diff --git a/drivers/firmware/efi/Kconfig b/drivers/firmware/efi/Kconfig
index e1670d533f97..6394152f648f 100644
--- a/drivers/firmware/efi/Kconfig
+++ b/drivers/firmware/efi/Kconfig
@@ -87,6 +87,31 @@ config EFI_RUNTIME_WRAPPERS
config EFI_ARMSTUB
bool
+config EFI_BOOTLOADER_CONTROL
+ tristate "EFI Bootloader Control"
+ depends on EFI_VARS
+ default n
+ ---help---
+ This module installs a reboot hook, such that if reboot() is
+ invoked with a string argument NNN, "NNN" is copied to the
+ "LoaderEntryOneShot" EFI variable, to be read by the
+ bootloader. If the string matches one of the boot labels
+ defined in its configuration, the bootloader will boot once
+ to that label. The "LoaderEntryRebootReason" EFI variable is
+ set with the reboot reason: "reboot" or "shutdown". The
+ bootloader reads this reboot reason and takes particular
+ action according to its policy.
+
+config EFI_CAPSULE_LOADER
+ tristate "EFI capsule loader"
+ depends on EFI
+ help
+ This option exposes a loader interface "/dev/efi_capsule_loader" for
+ users to load EFI capsules. This driver requires working runtime
+ capsule support in the firmware, which many OEMs do not provide.
+
+ Most users should say N.
+
endmenu
config UEFI_CPER
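The EFI_CAPSULE_LOADER help text above describes a character device interface, and the capsule-loader code added later in this patch expects a userspace tool to stream the capsule from the start of the file and then close the node so the driver submits it. The program below is only a usage sketch under those assumptions; the device path comes from the help text and error handling is deliberately minimal.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	char buf[4096];
	ssize_t n;
	int in, out;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <capsule.bin>\n", argv[0]);
		return 1;
	}

	in = open(argv[1], O_RDONLY);
	out = open("/dev/efi_capsule_loader", O_WRONLY);
	if (in < 0 || out < 0) {
		perror("open");
		return 1;
	}

	/* Stream the capsule in order, header first, as the driver expects */
	while ((n = read(in, buf, sizeof(buf))) > 0) {
		char *p = buf;

		while (n > 0) {
			ssize_t w = write(out, p, n);

			if (w < 0) {
				perror("write");
				return 1;
			}
			p += w;
			n -= w;
		}
	}

	/* Closing the node completes (or aborts) the upload */
	if (close(out) < 0) {
		perror("close");
		return 1;
	}
	close(in);
	return 0;
}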
diff --git a/drivers/firmware/efi/Makefile b/drivers/firmware/efi/Makefile
index 62e654f255f4..a219640f881f 100644
--- a/drivers/firmware/efi/Makefile
+++ b/drivers/firmware/efi/Makefile
@@ -9,7 +9,8 @@
#
KASAN_SANITIZE_runtime-wrappers.o := n
-obj-$(CONFIG_EFI) += efi.o vars.o reboot.o
+obj-$(CONFIG_EFI) += efi.o vars.o reboot.o memattr.o
+obj-$(CONFIG_EFI) += capsule.o
obj-$(CONFIG_EFI_VARS) += efivars.o
obj-$(CONFIG_EFI_ESRT) += esrt.o
obj-$(CONFIG_EFI_VARS_PSTORE) += efi-pstore.o
@@ -18,7 +19,9 @@ obj-$(CONFIG_EFI_RUNTIME_MAP) += runtime-map.o
obj-$(CONFIG_EFI_RUNTIME_WRAPPERS) += runtime-wrappers.o
obj-$(CONFIG_EFI_STUB) += libstub/
obj-$(CONFIG_EFI_FAKE_MEMMAP) += fake_mem.o
+obj-$(CONFIG_EFI_BOOTLOADER_CONTROL) += efibc.o
arm-obj-$(CONFIG_EFI) := arm-init.o arm-runtime.o
obj-$(CONFIG_ARM) += $(arm-obj-y)
obj-$(CONFIG_ARM64) += $(arm-obj-y)
+obj-$(CONFIG_EFI_CAPSULE_LOADER) += capsule-loader.o
diff --git a/drivers/firmware/efi/arm-init.c b/drivers/firmware/efi/arm-init.c
index 8714f8c271ba..a850cbc48d8d 100644
--- a/drivers/firmware/efi/arm-init.c
+++ b/drivers/firmware/efi/arm-init.c
@@ -11,17 +11,19 @@
*
*/
+#define pr_fmt(fmt) "efi: " fmt
+
#include <linux/efi.h>
#include <linux/init.h>
#include <linux/memblock.h>
#include <linux/mm_types.h>
#include <linux/of.h>
#include <linux/of_fdt.h>
+#include <linux/platform_device.h>
+#include <linux/screen_info.h>
#include <asm/efi.h>
-struct efi_memory_map memmap;
-
u64 efi_system_table;
static int __init is_normal_ram(efi_memory_desc_t *md)
@@ -40,7 +42,7 @@ static phys_addr_t efi_to_phys(unsigned long addr)
{
efi_memory_desc_t *md;
- for_each_efi_memory_desc(&memmap, md) {
+ for_each_efi_memory_desc(md) {
if (!(md->attribute & EFI_MEMORY_RUNTIME))
continue;
if (md->virt_addr == 0)
@@ -53,6 +55,36 @@ static phys_addr_t efi_to_phys(unsigned long addr)
return addr;
}
+static __initdata unsigned long screen_info_table = EFI_INVALID_TABLE_ADDR;
+
+static __initdata efi_config_table_type_t arch_tables[] = {
+ {LINUX_EFI_ARM_SCREEN_INFO_TABLE_GUID, NULL, &screen_info_table},
+ {NULL_GUID, NULL, NULL}
+};
+
+static void __init init_screen_info(void)
+{
+ struct screen_info *si;
+
+ if (screen_info_table != EFI_INVALID_TABLE_ADDR) {
+ si = early_memremap_ro(screen_info_table, sizeof(*si));
+ if (!si) {
+ pr_err("Could not map screen_info config table\n");
+ return;
+ }
+ screen_info = *si;
+ early_memunmap(si, sizeof(*si));
+
+ /* dummycon on ARM needs non-zero values for columns/lines */
+ screen_info.orig_video_cols = 80;
+ screen_info.orig_video_lines = 25;
+ }
+
+ if (screen_info.orig_video_isVGA == VIDEO_TYPE_EFI &&
+ memblock_is_map_memory(screen_info.lfb_base))
+ memblock_mark_nomap(screen_info.lfb_base, screen_info.lfb_size);
+}
+
static int __init uefi_init(void)
{
efi_char16_t *c16;
@@ -85,6 +117,8 @@ static int __init uefi_init(void)
efi.systab->hdr.revision >> 16,
efi.systab->hdr.revision & 0xffff);
+ efi.runtime_version = efi.systab->hdr.revision;
+
/* Show what we know for posterity */
c16 = early_memremap_ro(efi_to_phys(efi.systab->fw_vendor),
sizeof(vendor) * sizeof(efi_char16_t));
@@ -108,7 +142,8 @@ static int __init uefi_init(void)
goto out;
}
retval = efi_config_parse_tables(config_tables, efi.systab->nr_tables,
- sizeof(efi_config_table_t), NULL);
+ sizeof(efi_config_table_t),
+ arch_tables);
early_memunmap(config_tables, table_size);
out:
@@ -143,7 +178,15 @@ static __init void reserve_regions(void)
if (efi_enabled(EFI_DBG))
pr_info("Processing EFI memory map:\n");
- for_each_efi_memory_desc(&memmap, md) {
+ /*
+ * Discard memblocks discovered so far: if there are any at this
+ * point, they originate from memory nodes in the DT, and UEFI
+ * uses its own memory map instead.
+ */
+ memblock_dump_all();
+ memblock_remove(0, (phys_addr_t)ULLONG_MAX);
+
+ for_each_efi_memory_desc(md) {
paddr = md->phys_addr;
npages = md->num_pages;
@@ -184,9 +227,9 @@ void __init efi_init(void)
efi_system_table = params.system_table;
- memmap.phys_map = params.mmap;
- memmap.map = early_memremap_ro(params.mmap, params.mmap_size);
- if (memmap.map == NULL) {
+ efi.memmap.phys_map = params.mmap;
+ efi.memmap.map = early_memremap_ro(params.mmap, params.mmap_size);
+ if (efi.memmap.map == NULL) {
/*
* If we are booting via UEFI, the UEFI memory map is the only
* description of memory we have, so there is little point in
@@ -194,28 +237,37 @@ void __init efi_init(void)
*/
panic("Unable to map EFI memory map.\n");
}
- memmap.map_end = memmap.map + params.mmap_size;
- memmap.desc_size = params.desc_size;
- memmap.desc_version = params.desc_ver;
+ efi.memmap.map_end = efi.memmap.map + params.mmap_size;
+ efi.memmap.desc_size = params.desc_size;
+ efi.memmap.desc_version = params.desc_ver;
+
+ WARN(efi.memmap.desc_version != 1,
+ "Unexpected EFI_MEMORY_DESCRIPTOR version %ld",
+ efi.memmap.desc_version);
if (uefi_init() < 0)
return;
reserve_regions();
- early_memunmap(memmap.map, params.mmap_size);
+ efi_memattr_init();
+ early_memunmap(efi.memmap.map, params.mmap_size);
- if (IS_ENABLED(CONFIG_ARM)) {
- /*
- * ARM currently does not allow ioremap_cache() to be called on
- * memory regions that are covered by struct page. So remove the
- * UEFI memory map from the linear mapping.
- */
- memblock_mark_nomap(params.mmap & PAGE_MASK,
- PAGE_ALIGN(params.mmap_size +
- (params.mmap & ~PAGE_MASK)));
- } else {
- memblock_reserve(params.mmap & PAGE_MASK,
- PAGE_ALIGN(params.mmap_size +
- (params.mmap & ~PAGE_MASK)));
- }
+ memblock_reserve(params.mmap & PAGE_MASK,
+ PAGE_ALIGN(params.mmap_size +
+ (params.mmap & ~PAGE_MASK)));
+
+ init_screen_info();
+}
+
+static int __init register_gop_device(void)
+{
+ void *pd;
+
+ if (screen_info.orig_video_isVGA != VIDEO_TYPE_EFI)
+ return 0;
+
+ pd = platform_device_register_data(NULL, "efi-framebuffer", 0,
+ &screen_info, sizeof(screen_info));
+ return PTR_ERR_OR_ZERO(pd);
}
+subsys_initcall(register_gop_device);
diff --git a/drivers/firmware/efi/arm-runtime.c b/drivers/firmware/efi/arm-runtime.c
index 6ae21e41a429..17ccf0a8787a 100644
--- a/drivers/firmware/efi/arm-runtime.c
+++ b/drivers/firmware/efi/arm-runtime.c
@@ -42,11 +42,13 @@ static struct mm_struct efi_mm = {
static bool __init efi_virtmap_init(void)
{
efi_memory_desc_t *md;
+ bool systab_found;
efi_mm.pgd = pgd_alloc(&efi_mm);
init_new_context(NULL, &efi_mm);
- for_each_efi_memory_desc(&memmap, md) {
+ systab_found = false;
+ for_each_efi_memory_desc(md) {
phys_addr_t phys = md->phys_addr;
int ret;
@@ -64,7 +66,25 @@ static bool __init efi_virtmap_init(void)
&phys, ret);
return false;
}
+ /*
+ * If this entry covers the address of the UEFI system table,
+ * calculate and record its virtual address.
+ */
+ if (efi_system_table >= phys &&
+ efi_system_table < phys + (md->num_pages * EFI_PAGE_SIZE)) {
+ efi.systab = (void *)(unsigned long)(efi_system_table -
+ phys + md->virt_addr);
+ systab_found = true;
+ }
+ }
+ if (!systab_found) {
+ pr_err("No virtual mapping found for the UEFI System Table\n");
+ return false;
}
+
+ if (efi_memattr_apply_permissions(&efi_mm, efi_set_mapping_permissions))
+ return false;
+
return true;
}
@@ -89,26 +109,17 @@ static int __init arm_enable_runtime_services(void)
pr_info("Remapping and enabling EFI services.\n");
- mapsize = memmap.map_end - memmap.map;
- memmap.map = (__force void *)ioremap_cache(memmap.phys_map,
- mapsize);
- if (!memmap.map) {
- pr_err("Failed to remap EFI memory map\n");
- return -ENOMEM;
- }
- memmap.map_end = memmap.map + mapsize;
- efi.memmap = &memmap;
+ mapsize = efi.memmap.map_end - efi.memmap.map;
- efi.systab = (__force void *)ioremap_cache(efi_system_table,
- sizeof(efi_system_table_t));
- if (!efi.systab) {
- pr_err("Failed to remap EFI System Table\n");
+ efi.memmap.map = memremap(efi.memmap.phys_map, mapsize, MEMREMAP_WB);
+ if (!efi.memmap.map) {
+ pr_err("Failed to remap EFI memory map\n");
return -ENOMEM;
}
- set_bit(EFI_SYSTEM_TABLES, &efi.flags);
+ efi.memmap.map_end = efi.memmap.map + mapsize;
if (!efi_virtmap_init()) {
- pr_err("No UEFI virtual mapping was installed -- runtime services will not be available\n");
+ pr_err("UEFI virtual mapping missing or invalid -- runtime services will not be available\n");
return -ENOMEM;
}
@@ -116,8 +127,6 @@ static int __init arm_enable_runtime_services(void)
efi_native_runtime_setup();
set_bit(EFI_RUNTIME_SERVICES, &efi.flags);
- efi.runtime_version = efi.systab->hdr.revision;
-
return 0;
}
early_initcall(arm_enable_runtime_services);
diff --git a/drivers/firmware/efi/capsule-loader.c b/drivers/firmware/efi/capsule-loader.c
new file mode 100644
index 000000000000..c99c24bc79b0
--- /dev/null
+++ b/drivers/firmware/efi/capsule-loader.c
@@ -0,0 +1,343 @@
+/*
+ * EFI capsule loader driver.
+ *
+ * Copyright 2015 Intel Corporation
+ *
+ * This file is part of the Linux kernel, and is made available under
+ * the terms of the GNU General Public License version 2.
+ */
+
+#define pr_fmt(fmt) "efi: " fmt
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/miscdevice.h>
+#include <linux/highmem.h>
+#include <linux/slab.h>
+#include <linux/mutex.h>
+#include <linux/efi.h>
+
+#define NO_FURTHER_WRITE_ACTION -1
+
+struct capsule_info {
+ bool header_obtained;
+ int reset_type;
+ long index;
+ size_t count;
+ size_t total_size;
+ struct page **pages;
+ size_t page_bytes_remain;
+};
+
+/**
+ * efi_free_all_buff_pages - free all previous allocated buffer pages
+ * @cap_info: pointer to current instance of capsule_info structure
+ *
+ * In addition to freeing buffer pages, it flags NO_FURTHER_WRITE_ACTION
+ * to cease processing data in subsequent write(2) calls until close(2)
+ * is called.
+ **/
+static void efi_free_all_buff_pages(struct capsule_info *cap_info)
+{
+ while (cap_info->index > 0)
+ __free_page(cap_info->pages[--cap_info->index]);
+
+ cap_info->index = NO_FURTHER_WRITE_ACTION;
+}
+
+/**
+ * efi_capsule_setup_info - obtain the efi capsule header in the binary and
+ * setup capsule_info structure
+ * @cap_info: pointer to current instance of capsule_info structure
+ * @kbuff: a mapped first page buffer pointer
+ * @hdr_bytes: the total received number of bytes for efi header
+ **/
+static ssize_t efi_capsule_setup_info(struct capsule_info *cap_info,
+ void *kbuff, size_t hdr_bytes)
+{
+ efi_capsule_header_t *cap_hdr;
+ size_t pages_needed;
+ int ret;
+ void *temp_page;
+
+ /* Only process data block that is larger than efi header size */
+ if (hdr_bytes < sizeof(efi_capsule_header_t))
+ return 0;
+
+ /* Reset back to the correct offset of header */
+ cap_hdr = kbuff - cap_info->count;
+ pages_needed = ALIGN(cap_hdr->imagesize, PAGE_SIZE) >> PAGE_SHIFT;
+
+ if (pages_needed == 0) {
+ pr_err("%s: pages count invalid\n", __func__);
+ return -EINVAL;
+ }
+
+ /* Check if the capsule binary is supported */
+ ret = efi_capsule_supported(cap_hdr->guid, cap_hdr->flags,
+ cap_hdr->imagesize,
+ &cap_info->reset_type);
+ if (ret) {
+ pr_err("%s: efi_capsule_supported() failed\n",
+ __func__);
+ return ret;
+ }
+
+ cap_info->total_size = cap_hdr->imagesize;
+ temp_page = krealloc(cap_info->pages,
+ pages_needed * sizeof(void *),
+ GFP_KERNEL | __GFP_ZERO);
+ if (!temp_page) {
+ pr_debug("%s: krealloc() failed\n", __func__);
+ return -ENOMEM;
+ }
+
+ cap_info->pages = temp_page;
+ cap_info->header_obtained = true;
+
+ return 0;
+}
+
+/**
+ * efi_capsule_submit_update - invoke the efi_capsule_update API once binary
+ * upload done
+ * @cap_info: pointer to current instance of capsule_info structure
+ **/
+static ssize_t efi_capsule_submit_update(struct capsule_info *cap_info)
+{
+ int ret;
+ void *cap_hdr_temp;
+
+ cap_hdr_temp = kmap(cap_info->pages[0]);
+ if (!cap_hdr_temp) {
+ pr_debug("%s: kmap() failed\n", __func__);
+ return -EFAULT;
+ }
+
+ ret = efi_capsule_update(cap_hdr_temp, cap_info->pages);
+ kunmap(cap_info->pages[0]);
+ if (ret) {
+ pr_err("%s: efi_capsule_update() failed\n", __func__);
+ return ret;
+ }
+
+ /* Indicate capsule binary uploading is done */
+ cap_info->index = NO_FURTHER_WRITE_ACTION;
+ pr_info("%s: Successfully upload capsule file with reboot type '%s'\n",
+ __func__, !cap_info->reset_type ? "RESET_COLD" :
+ cap_info->reset_type == 1 ? "RESET_WARM" :
+ "RESET_SHUTDOWN");
+ return 0;
+}
+
+/**
+ * efi_capsule_write - store the capsule binary and pass it to
+ * efi_capsule_update() API
+ * @file: file pointer
+ * @buff: buffer pointer
+ * @count: number of bytes in @buff
+ * @offp: not used
+ *
+ * Expectation:
+ * - A user space tool should start at the beginning of capsule binary and
+ * pass data in sequentially.
+ * - Users should close and re-open this file node in order to upload more
+ * capsules.
+ * - After an error is returned, the user should close the file and restart
+ * the operation for the next try; otherwise -EIO will be returned until
+ * the file is closed.
+ * - An EFI capsule header must be located at the beginning of capsule
+ * binary file and passed in as first block data of write operation.
+ **/
+static ssize_t efi_capsule_write(struct file *file, const char __user *buff,
+ size_t count, loff_t *offp)
+{
+ int ret = 0;
+ struct capsule_info *cap_info = file->private_data;
+ struct page *page;
+ void *kbuff = NULL;
+ size_t write_byte;
+
+ if (count == 0)
+ return 0;
+
+ /* Return error while NO_FURTHER_WRITE_ACTION is flagged */
+ if (cap_info->index < 0)
+ return -EIO;
+
+ /* Only alloc a new page when previous page is full */
+ if (!cap_info->page_bytes_remain) {
+ page = alloc_page(GFP_KERNEL);
+ if (!page) {
+ pr_debug("%s: alloc_page() failed\n", __func__);
+ ret = -ENOMEM;
+ goto failed;
+ }
+
+ cap_info->pages[cap_info->index++] = page;
+ cap_info->page_bytes_remain = PAGE_SIZE;
+ }
+
+ page = cap_info->pages[cap_info->index - 1];
+
+ kbuff = kmap(page);
+ if (!kbuff) {
+ pr_debug("%s: kmap() failed\n", __func__);
+ ret = -EFAULT;
+ goto failed;
+ }
+ kbuff += PAGE_SIZE - cap_info->page_bytes_remain;
+
+ /* Copy capsule binary data from user space to kernel space buffer */
+ write_byte = min_t(size_t, count, cap_info->page_bytes_remain);
+ if (copy_from_user(kbuff, buff, write_byte)) {
+ pr_debug("%s: copy_from_user() failed\n", __func__);
+ ret = -EFAULT;
+ goto fail_unmap;
+ }
+ cap_info->page_bytes_remain -= write_byte;
+
+ /* Set up the capsule binary info structure */
+ if (!cap_info->header_obtained) {
+ ret = efi_capsule_setup_info(cap_info, kbuff,
+ cap_info->count + write_byte);
+ if (ret)
+ goto fail_unmap;
+ }
+
+ cap_info->count += write_byte;
+ kunmap(page);
+
+ /* Submit the full binary to efi_capsule_update() API */
+ if (cap_info->header_obtained &&
+ cap_info->count >= cap_info->total_size) {
+ if (cap_info->count > cap_info->total_size) {
+ pr_err("%s: upload size exceeded header defined size\n",
+ __func__);
+ ret = -EINVAL;
+ goto failed;
+ }
+
+ ret = efi_capsule_submit_update(cap_info);
+ if (ret)
+ goto failed;
+ }
+
+ return write_byte;
+
+fail_unmap:
+ kunmap(page);
+failed:
+ efi_free_all_buff_pages(cap_info);
+ return ret;
+}
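
For illustration, a minimal user-space sketch of the upload sequence described by the expectations above. The device path /dev/efi_capsule_loader is derived from the misc device name registered below, and the capsule file name is a placeholder; neither is part of this patch.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	ssize_t n;
	int in = open("firmware.cap", O_RDONLY);	/* hypothetical capsule image */
	int out = open("/dev/efi_capsule_loader", O_WRONLY);

	if (in < 0 || out < 0) {
		perror("open");
		return 1;
	}

	/* Stream the capsule sequentially, header first. */
	while ((n = read(in, buf, sizeof(buf))) > 0) {
		char *p = buf;

		/* The driver may accept less than requested (it fills one
		 * page at a time), so loop until the chunk is consumed. */
		while (n > 0) {
			ssize_t w = write(out, p, n);

			if (w < 0) {
				perror("write");
				return 1;
			}
			p += w;
			n -= w;
		}
	}

	/* close() invokes flush(); an incomplete upload reports ECANCELED. */
	if (close(out) != 0)
		perror("close");
	close(in);
	return 0;
}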
+
+/**
+ * efi_capsule_flush - called by file close or file flush
+ * @file: file pointer
+ * @id: not used
+ *
+ * If a capsule is only partially uploaded when this function is called,
+ * the upload is treated as terminated: the buffer pages allocated so far
+ * are freed and -ECANCELED is returned.
+ **/
+static int efi_capsule_flush(struct file *file, fl_owner_t id)
+{
+ int ret = 0;
+ struct capsule_info *cap_info = file->private_data;
+
+ if (cap_info->index > 0) {
+ pr_err("%s: capsule upload not complete\n", __func__);
+ efi_free_all_buff_pages(cap_info);
+ ret = -ECANCELED;
+ }
+
+ return ret;
+}
+
+/**
+ * efi_capsule_release - called by file close
+ * @inode: not used
+ * @file: file pointer
+ *
+ * Successfully submitted pages are not freed here because the EFI update
+ * requires the data to be maintained across the system reboot.
+ **/
+static int efi_capsule_release(struct inode *inode, struct file *file)
+{
+ struct capsule_info *cap_info = file->private_data;
+
+ kfree(cap_info->pages);
+ kfree(file->private_data);
+ file->private_data = NULL;
+ return 0;
+}
+
+/**
+ * efi_capsule_open - called by file open
+ * @inode: not used
+ * @file: file pointer
+ *
+ * A separate capsule_info structure is allocated for each file open call,
+ * so multiple users can upload their capsule binaries concurrently,
+ * without having to wait for each other to finish before starting their
+ * own upload.
+ **/
+static int efi_capsule_open(struct inode *inode, struct file *file)
+{
+ struct capsule_info *cap_info;
+
+ cap_info = kzalloc(sizeof(*cap_info), GFP_KERNEL);
+ if (!cap_info)
+ return -ENOMEM;
+
+ cap_info->pages = kzalloc(sizeof(void *), GFP_KERNEL);
+ if (!cap_info->pages) {
+ kfree(cap_info);
+ return -ENOMEM;
+ }
+
+ file->private_data = cap_info;
+
+ return 0;
+}
+
+static const struct file_operations efi_capsule_fops = {
+ .owner = THIS_MODULE,
+ .open = efi_capsule_open,
+ .write = efi_capsule_write,
+ .flush = efi_capsule_flush,
+ .release = efi_capsule_release,
+ .llseek = no_llseek,
+};
+
+static struct miscdevice efi_capsule_misc = {
+ .minor = MISC_DYNAMIC_MINOR,
+ .name = "efi_capsule_loader",
+ .fops = &efi_capsule_fops,
+};
+
+static int __init efi_capsule_loader_init(void)
+{
+ int ret;
+
+ if (!efi_enabled(EFI_RUNTIME_SERVICES))
+ return -ENODEV;
+
+ ret = misc_register(&efi_capsule_misc);
+ if (ret)
+ pr_err("%s: Failed to register misc char file note\n",
+ __func__);
+
+ return ret;
+}
+module_init(efi_capsule_loader_init);
+
+static void __exit efi_capsule_loader_exit(void)
+{
+ misc_deregister(&efi_capsule_misc);
+}
+module_exit(efi_capsule_loader_exit);
+
+MODULE_DESCRIPTION("EFI capsule firmware binary loader");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/firmware/efi/capsule.c b/drivers/firmware/efi/capsule.c
new file mode 100644
index 000000000000..53b9fd2293ee
--- /dev/null
+++ b/drivers/firmware/efi/capsule.c
@@ -0,0 +1,308 @@
+/*
+ * EFI capsule support.
+ *
+ * Copyright 2013 Intel Corporation; author Matt Fleming
+ *
+ * This file is part of the Linux kernel, and is made available under
+ * the terms of the GNU General Public License version 2.
+ */
+
+#define pr_fmt(fmt) "efi: " fmt
+
+#include <linux/slab.h>
+#include <linux/mutex.h>
+#include <linux/highmem.h>
+#include <linux/efi.h>
+#include <linux/vmalloc.h>
+#include <asm/io.h>
+
+typedef struct {
+ u64 length;
+ u64 data;
+} efi_capsule_block_desc_t;
+
+static bool capsule_pending;
+static bool stop_capsules;
+static int efi_reset_type = -1;
+
+/*
+ * capsule_mutex serialises access to capsule_pending, efi_reset_type
+ * and stop_capsules.
+ */
+static DEFINE_MUTEX(capsule_mutex);
+
+/**
+ * efi_capsule_pending - has a capsule been passed to the firmware?
+ * @reset_type: store the type of EFI reset if capsule is pending
+ *
+ * To ensure that the registered capsule is processed correctly by the
+ * firmware, we need to perform a specific type of reset. If a capsule is
+ * pending, return the reset type in @reset_type.
+ *
+ * This function will race with callers of efi_capsule_update(). For
+ * example, calling this function while somebody else is in
+ * efi_capsule_update() but hasn't yet reached efi_capsule_update_locked()
+ * will miss the updates to capsule_pending and efi_reset_type made once
+ * efi_capsule_update_locked() completes.
+ *
+ * A non-racy use is from platform reboot code because we use
+ * system_state to ensure no capsules can be sent to the firmware once
+ * we're at SYSTEM_RESTART. See efi_capsule_update_locked().
+ */
+bool efi_capsule_pending(int *reset_type)
+{
+ if (!capsule_pending)
+ return false;
+
+ if (reset_type)
+ *reset_type = efi_reset_type;
+
+ return true;
+}
+
+/*
+ * Whitelist of EFI capsule flags that we support.
+ *
+ * We do not handle EFI_CAPSULE_INITIATE_RESET because that would
+ * require us to prepare the kernel for reboot. Refuse to load any
+ * capsules with that flag and any other flags that we do not know how
+ * to handle.
+ */
+#define EFI_CAPSULE_SUPPORTED_FLAG_MASK \
+ (EFI_CAPSULE_PERSIST_ACROSS_RESET | EFI_CAPSULE_POPULATE_SYSTEM_TABLE)
+
+/**
+ * efi_capsule_supported - does the firmware support the capsule?
+ * @guid: vendor guid of capsule
+ * @flags: capsule flags
+ * @size: size of capsule data
+ * @reset: the reset type required for this capsule
+ *
+ * Check whether a capsule with @flags is supported by the firmware
+ * and that @size doesn't exceed the maximum size for a capsule.
+ *
+ * No attempt is made to check @reset against the reset type required
+ * by any pending capsules because of the races involved.
+ */
+int efi_capsule_supported(efi_guid_t guid, u32 flags, size_t size, int *reset)
+{
+ efi_capsule_header_t capsule;
+ efi_capsule_header_t *cap_list[] = { &capsule };
+ efi_status_t status;
+ u64 max_size;
+
+ if (flags & ~EFI_CAPSULE_SUPPORTED_FLAG_MASK)
+ return -EINVAL;
+
+ capsule.headersize = capsule.imagesize = sizeof(capsule);
+ memcpy(&capsule.guid, &guid, sizeof(efi_guid_t));
+ capsule.flags = flags;
+
+ status = efi.query_capsule_caps(cap_list, 1, &max_size, reset);
+ if (status != EFI_SUCCESS)
+ return efi_status_to_err(status);
+
+ if (size > max_size)
+ return -ENOSPC;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(efi_capsule_supported);
+
+/*
+ * Every scatter gather list (block descriptor) page must end with a
+ * continuation pointer. The last continuation pointer of the last
+ * page must be zero to mark the end of the chain.
+ */
+#define SGLIST_PER_PAGE ((PAGE_SIZE / sizeof(efi_capsule_block_desc_t)) - 1)
+
+/*
+ * How many scatter gather list (block descriptor) pages do we need
+ * to map @count pages?
+ */
+static inline unsigned int sg_pages_num(unsigned int count)
+{
+ return DIV_ROUND_UP(count, SGLIST_PER_PAGE);
+}
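
A quick worked example of the block-descriptor arithmetic above, assuming 4 KiB pages (an illustration, not part of the patch):

/*
 * sizeof(efi_capsule_block_desc_t) == 16, so with PAGE_SIZE == 4096 each
 * scatter gather page holds 4096 / 16 - 1 == 255 data descriptors plus
 * the trailing continuation pointer.  A capsule spanning 600 data pages
 * therefore needs sg_pages_num(600) == DIV_ROUND_UP(600, 255) == 3
 * block-descriptor pages.
 */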
+
+/**
+ * efi_capsule_update_locked - pass a single capsule to the firmware
+ * @capsule: capsule to send to the firmware
+ * @sg_pages: array of scatter gather (block descriptor) pages
+ * @reset: the reset type required for @capsule
+ *
+ * This function must be called under capsule_mutex. It checks whether
+ * efi_reset_type conflicts with @reset, and atomically sets efi_reset_type
+ * and capsule_pending if a capsule was successfully sent to the
+ * firmware.
+ *
+ * We also check to see if the system is about to restart, and if so,
+ * abort. This avoids races between efi_capsule_update() and
+ * efi_capsule_pending().
+ */
+static int
+efi_capsule_update_locked(efi_capsule_header_t *capsule,
+ struct page **sg_pages, int reset)
+{
+ efi_physical_addr_t sglist_phys;
+ efi_status_t status;
+
+ lockdep_assert_held(&capsule_mutex);
+
+ /*
+ * If someone has already registered a capsule that requires a
+ * different reset type, we're out of luck and must abort.
+ */
+ if (efi_reset_type >= 0 && efi_reset_type != reset) {
+ pr_err("Conflicting capsule reset type %d (%d).\n",
+ reset, efi_reset_type);
+ return -EINVAL;
+ }
+
+ /*
+ * If the system is getting ready to restart it may have
+ * called efi_capsule_pending() to make decisions (such as
+ * whether to force an EFI reboot), and we're racing against
+ * that call. Abort in that case.
+ */
+ if (unlikely(stop_capsules)) {
+ pr_warn("Capsule update raced with reboot, aborting.\n");
+ return -EINVAL;
+ }
+
+ sglist_phys = page_to_phys(sg_pages[0]);
+
+ status = efi.update_capsule(&capsule, 1, sglist_phys);
+ if (status == EFI_SUCCESS) {
+ capsule_pending = true;
+ efi_reset_type = reset;
+ }
+
+ return efi_status_to_err(status);
+}
+
+/**
+ * efi_capsule_update - send a capsule to the firmware
+ * @capsule: capsule to send to firmware
+ * @pages: an array of capsule data pages
+ *
+ * Build a scatter gather list with EFI capsule block descriptors to
+ * map the capsule described by @capsule with its data in @pages and
+ * send it to the firmware via the UpdateCapsule() runtime service.
+ *
+ * @capsule must be a virtual mapping of the first page in @pages
+ * (@pages[0]) in the kernel address space. That is, a
+ * efi_capsule_header_t that describes the entire contents of the capsule
+ * must be at the start of the first data page.
+ *
+ * Even though this function will validate that the firmware supports
+ * the capsule guid, users will likely want to check that
+ * efi_capsule_supported() succeeds before calling this function
+ * because it makes it easier to print helpful error messages.
+ *
+ * If the capsule is successfully submitted to the firmware, any
+ * subsequent calls to efi_capsule_pending() will return true. @pages
+ * must not be released or modified if this function returns
+ * successfully.
+ *
+ * Callers must be prepared for this function to fail, which can
+ * happen if we raced with system reboot or if there is already a
+ * pending capsule that has a reset type that conflicts with the one
+ * required by @capsule. Do NOT use efi_capsule_pending() to detect
+ * this conflict since that would be racy. Instead, submit the capsule
+ * to efi_capsule_update() and check the return value.
+ *
+ * Return 0 on success, a converted EFI status code on failure.
+ */
+int efi_capsule_update(efi_capsule_header_t *capsule, struct page **pages)
+{
+ u32 imagesize = capsule->imagesize;
+ efi_guid_t guid = capsule->guid;
+ unsigned int count, sg_count;
+ u32 flags = capsule->flags;
+ struct page **sg_pages;
+ int rv, reset_type;
+ int i, j;
+
+ rv = efi_capsule_supported(guid, flags, imagesize, &reset_type);
+ if (rv)
+ return rv;
+
+ count = DIV_ROUND_UP(imagesize, PAGE_SIZE);
+ sg_count = sg_pages_num(count);
+
+ sg_pages = kzalloc(sg_count * sizeof(*sg_pages), GFP_KERNEL);
+ if (!sg_pages)
+ return -ENOMEM;
+
+ for (i = 0; i < sg_count; i++) {
+ sg_pages[i] = alloc_page(GFP_KERNEL);
+ if (!sg_pages[i]) {
+ rv = -ENOMEM;
+ goto out;
+ }
+ }
+
+ for (i = 0; i < sg_count; i++) {
+ efi_capsule_block_desc_t *sglist;
+
+ sglist = kmap(sg_pages[i]);
+ if (!sglist) {
+ rv = -ENOMEM;
+ goto out;
+ }
+
+ for (j = 0; j < SGLIST_PER_PAGE && count > 0; j++) {
+ u64 sz = min_t(u64, imagesize, PAGE_SIZE);
+
+ sglist[j].length = sz;
+ sglist[j].data = page_to_phys(*pages++);
+
+ imagesize -= sz;
+ count--;
+ }
+
+ /* Continuation pointer */
+ sglist[j].length = 0;
+
+ if (i + 1 == sg_count)
+ sglist[j].data = 0;
+ else
+ sglist[j].data = page_to_phys(sg_pages[i + 1]);
+
+ kunmap(sg_pages[i]);
+ }
+
+ mutex_lock(&capsule_mutex);
+ rv = efi_capsule_update_locked(capsule, sg_pages, reset_type);
+ mutex_unlock(&capsule_mutex);
+
+out:
+ for (i = 0; rv && i < sg_count; i++) {
+ if (sg_pages[i])
+ __free_page(sg_pages[i]);
+ }
+
+ kfree(sg_pages);
+ return rv;
+}
+EXPORT_SYMBOL_GPL(efi_capsule_update);
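
A minimal sketch of how an in-kernel caller might combine the two exported helpers above; example_send_capsule() and its argument are hypothetical, and the first data page is assumed to start with the capsule header, as required by efi_capsule_update():

#include <linux/efi.h>
#include <linux/highmem.h>

static int example_send_capsule(struct page **pages)
{
	efi_capsule_header_t *header;
	int reset_type, ret;

	header = kmap(pages[0]);
	if (!header)
		return -ENOMEM;

	/* Check support first so a helpful error can be reported. */
	ret = efi_capsule_supported(header->guid, header->flags,
				    header->imagesize, &reset_type);
	if (!ret)
		ret = efi_capsule_update(header, pages);

	kunmap(pages[0]);
	return ret;
}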
+
+static int capsule_reboot_notify(struct notifier_block *nb, unsigned long event, void *cmd)
+{
+ mutex_lock(&capsule_mutex);
+ stop_capsules = true;
+ mutex_unlock(&capsule_mutex);
+
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block capsule_reboot_nb = {
+ .notifier_call = capsule_reboot_notify,
+};
+
+static int __init capsule_reboot_register(void)
+{
+ return register_reboot_notifier(&capsule_reboot_nb);
+}
+core_initcall(capsule_reboot_register);
diff --git a/drivers/firmware/efi/efi.c b/drivers/firmware/efi/efi.c
index 3a69ed5ecfcb..05509f3aaee8 100644
--- a/drivers/firmware/efi/efi.c
+++ b/drivers/firmware/efi/efi.c
@@ -43,6 +43,7 @@ struct efi __read_mostly efi = {
.config_table = EFI_INVALID_TABLE_ADDR,
.esrt = EFI_INVALID_TABLE_ADDR,
.properties_table = EFI_INVALID_TABLE_ADDR,
+ .mem_attr_table = EFI_INVALID_TABLE_ADDR,
};
EXPORT_SYMBOL(efi);
@@ -256,7 +257,7 @@ subsys_initcall(efisubsys_init);
*/
int __init efi_mem_desc_lookup(u64 phys_addr, efi_memory_desc_t *out_md)
{
- struct efi_memory_map *map = efi.memmap;
+ struct efi_memory_map *map = &efi.memmap;
phys_addr_t p, e;
if (!efi_enabled(EFI_MEMMAP)) {
@@ -338,6 +339,7 @@ static __initdata efi_config_table_type_t common_tables[] = {
{UGA_IO_PROTOCOL_GUID, "UGA", &efi.uga},
{EFI_SYSTEM_RESOURCE_TABLE_GUID, "ESRT", &efi.esrt},
{EFI_PROPERTIES_TABLE_GUID, "PROP", &efi.properties_table},
+ {EFI_MEMORY_ATTRIBUTES_TABLE_GUID, "MEMATTR", &efi.mem_attr_table},
{NULL_GUID, NULL, NULL},
};
@@ -351,8 +353,9 @@ static __init int match_config_table(efi_guid_t *guid,
for (i = 0; efi_guidcmp(table_types[i].guid, NULL_GUID); i++) {
if (!efi_guidcmp(*guid, table_types[i].guid)) {
*(table_types[i].ptr) = table;
- pr_cont(" %s=0x%lx ",
- table_types[i].name, table);
+ if (table_types[i].name)
+ pr_cont(" %s=0x%lx ",
+ table_types[i].name, table);
return 1;
}
}
@@ -620,16 +623,12 @@ char * __init efi_md_typeattr_format(char *buf, size_t size,
*/
u64 __weak efi_mem_attributes(unsigned long phys_addr)
{
- struct efi_memory_map *map;
efi_memory_desc_t *md;
- void *p;
if (!efi_enabled(EFI_MEMMAP))
return 0;
- map = efi.memmap;
- for (p = map->map; p < map->map_end; p += map->desc_size) {
- md = p;
+ for_each_efi_memory_desc(md) {
if ((md->phys_addr <= phys_addr) &&
(phys_addr < (md->phys_addr +
(md->num_pages << EFI_PAGE_SHIFT))))
@@ -637,3 +636,36 @@ u64 __weak efi_mem_attributes(unsigned long phys_addr)
}
return 0;
}
+
+int efi_status_to_err(efi_status_t status)
+{
+ int err;
+
+ switch (status) {
+ case EFI_SUCCESS:
+ err = 0;
+ break;
+ case EFI_INVALID_PARAMETER:
+ err = -EINVAL;
+ break;
+ case EFI_OUT_OF_RESOURCES:
+ err = -ENOSPC;
+ break;
+ case EFI_DEVICE_ERROR:
+ err = -EIO;
+ break;
+ case EFI_WRITE_PROTECTED:
+ err = -EROFS;
+ break;
+ case EFI_SECURITY_VIOLATION:
+ err = -EACCES;
+ break;
+ case EFI_NOT_FOUND:
+ err = -ENOENT;
+ break;
+ default:
+ err = -EINVAL;
+ }
+
+ return err;
+}
diff --git a/drivers/firmware/efi/efibc.c b/drivers/firmware/efi/efibc.c
new file mode 100644
index 000000000000..8dd0c7085e59
--- /dev/null
+++ b/drivers/firmware/efi/efibc.c
@@ -0,0 +1,113 @@
+/*
+ * efibc: control EFI bootloaders which obey LoaderEntryOneShot var
+ * Copyright (c) 2013-2016, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ */
+
+#define pr_fmt(fmt) "efibc: " fmt
+
+#include <linux/efi.h>
+#include <linux/module.h>
+#include <linux/reboot.h>
+#include <linux/slab.h>
+
+static void efibc_str_to_str16(const char *str, efi_char16_t *str16)
+{
+ size_t i;
+
+ for (i = 0; i < strlen(str); i++)
+ str16[i] = str[i];
+
+ str16[i] = '\0';
+}
+
+static int efibc_set_variable(const char *name, const char *value)
+{
+ int ret;
+ efi_guid_t guid = LINUX_EFI_LOADER_ENTRY_GUID;
+ struct efivar_entry *entry;
+ size_t size = (strlen(value) + 1) * sizeof(efi_char16_t);
+
+ if (size > sizeof(entry->var.Data)) {
+ pr_err("value is too large");
+ return -EINVAL;
+ }
+
+ entry = kmalloc(sizeof(*entry), GFP_KERNEL);
+ if (!entry) {
+ pr_err("failed to allocate efivar entry");
+ return -ENOMEM;
+ }
+
+ efibc_str_to_str16(name, entry->var.VariableName);
+ efibc_str_to_str16(value, (efi_char16_t *)entry->var.Data);
+ memcpy(&entry->var.VendorGuid, &guid, sizeof(guid));
+
+ ret = efivar_entry_set(entry,
+ EFI_VARIABLE_NON_VOLATILE
+ | EFI_VARIABLE_BOOTSERVICE_ACCESS
+ | EFI_VARIABLE_RUNTIME_ACCESS,
+ size, entry->var.Data, NULL);
+ if (ret)
+ pr_err("failed to set %s EFI variable: 0x%x\n",
+ name, ret);
+
+ kfree(entry);
+ return ret;
+}
+
+static int efibc_reboot_notifier_call(struct notifier_block *notifier,
+ unsigned long event, void *data)
+{
+ const char *reason = "shutdown";
+ int ret;
+
+ if (event == SYS_RESTART)
+ reason = "reboot";
+
+ ret = efibc_set_variable("LoaderEntryRebootReason", reason);
+ if (ret || !data)
+ return NOTIFY_DONE;
+
+ efibc_set_variable("LoaderEntryOneShot", (char *)data);
+
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block efibc_reboot_notifier = {
+ .notifier_call = efibc_reboot_notifier_call,
+};
+
+static int __init efibc_init(void)
+{
+ int ret;
+
+ if (!efi_enabled(EFI_RUNTIME_SERVICES))
+ return -ENODEV;
+
+ ret = register_reboot_notifier(&efibc_reboot_notifier);
+ if (ret)
+ pr_err("unable to register reboot notifier\n");
+
+ return ret;
+}
+module_init(efibc_init);
+
+static void __exit efibc_exit(void)
+{
+ unregister_reboot_notifier(&efibc_reboot_notifier);
+}
+module_exit(efibc_exit);
+
+MODULE_AUTHOR("Jeremy Compostella <jeremy.compostella@intel.com>");
+MODULE_AUTHOR("Matt Gumbel <matthew.k.gumbel@intel.com");
+MODULE_DESCRIPTION("EFI Bootloader Control");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/firmware/efi/efivars.c b/drivers/firmware/efi/efivars.c
index 096adcbcb5a9..116b244dee68 100644
--- a/drivers/firmware/efi/efivars.c
+++ b/drivers/firmware/efi/efivars.c
@@ -661,7 +661,7 @@ static void efivar_update_sysfs_entries(struct work_struct *work)
return;
err = efivar_init(efivar_update_sysfs_entry, entry,
- true, false, &efivar_sysfs_list);
+ false, &efivar_sysfs_list);
if (!err)
break;
@@ -730,8 +730,7 @@ int efivars_sysfs_init(void)
return -ENOMEM;
}
- efivar_init(efivars_sysfs_callback, NULL, false,
- true, &efivar_sysfs_list);
+ efivar_init(efivars_sysfs_callback, NULL, true, &efivar_sysfs_list);
error = create_efivars_bin_attributes();
if (error) {
diff --git a/drivers/firmware/efi/fake_mem.c b/drivers/firmware/efi/fake_mem.c
index ed3a854950cc..48430aba13c1 100644
--- a/drivers/firmware/efi/fake_mem.c
+++ b/drivers/firmware/efi/fake_mem.c
@@ -57,7 +57,7 @@ static int __init cmp_fake_mem(const void *x1, const void *x2)
void __init efi_fake_memmap(void)
{
u64 start, end, m_start, m_end, m_attr;
- int new_nr_map = memmap.nr_map;
+ int new_nr_map = efi.memmap.nr_map;
efi_memory_desc_t *md;
phys_addr_t new_memmap_phy;
void *new_memmap;
@@ -68,8 +68,7 @@ void __init efi_fake_memmap(void)
return;
/* count up the number of EFI memory descriptor */
- for (old = memmap.map; old < memmap.map_end; old += memmap.desc_size) {
- md = old;
+ for_each_efi_memory_desc(md) {
start = md->phys_addr;
end = start + (md->num_pages << EFI_PAGE_SHIFT) - 1;
@@ -95,25 +94,25 @@ void __init efi_fake_memmap(void)
}
/* allocate memory for new EFI memmap */
- new_memmap_phy = memblock_alloc(memmap.desc_size * new_nr_map,
+ new_memmap_phy = memblock_alloc(efi.memmap.desc_size * new_nr_map,
PAGE_SIZE);
if (!new_memmap_phy)
return;
/* create new EFI memmap */
new_memmap = early_memremap(new_memmap_phy,
- memmap.desc_size * new_nr_map);
+ efi.memmap.desc_size * new_nr_map);
if (!new_memmap) {
- memblock_free(new_memmap_phy, memmap.desc_size * new_nr_map);
+ memblock_free(new_memmap_phy, efi.memmap.desc_size * new_nr_map);
return;
}
- for (old = memmap.map, new = new_memmap;
- old < memmap.map_end;
- old += memmap.desc_size, new += memmap.desc_size) {
+ for (old = efi.memmap.map, new = new_memmap;
+ old < efi.memmap.map_end;
+ old += efi.memmap.desc_size, new += efi.memmap.desc_size) {
/* copy original EFI memory descriptor */
- memcpy(new, old, memmap.desc_size);
+ memcpy(new, old, efi.memmap.desc_size);
md = new;
start = md->phys_addr;
end = md->phys_addr + (md->num_pages << EFI_PAGE_SHIFT) - 1;
@@ -134,8 +133,8 @@ void __init efi_fake_memmap(void)
md->num_pages = (m_end - md->phys_addr + 1) >>
EFI_PAGE_SHIFT;
/* latter part */
- new += memmap.desc_size;
- memcpy(new, old, memmap.desc_size);
+ new += efi.memmap.desc_size;
+ memcpy(new, old, efi.memmap.desc_size);
md = new;
md->phys_addr = m_end + 1;
md->num_pages = (end - md->phys_addr + 1) >>
@@ -147,16 +146,16 @@ void __init efi_fake_memmap(void)
md->num_pages = (m_start - md->phys_addr) >>
EFI_PAGE_SHIFT;
/* middle part */
- new += memmap.desc_size;
- memcpy(new, old, memmap.desc_size);
+ new += efi.memmap.desc_size;
+ memcpy(new, old, efi.memmap.desc_size);
md = new;
md->attribute |= m_attr;
md->phys_addr = m_start;
md->num_pages = (m_end - m_start + 1) >>
EFI_PAGE_SHIFT;
/* last part */
- new += memmap.desc_size;
- memcpy(new, old, memmap.desc_size);
+ new += efi.memmap.desc_size;
+ memcpy(new, old, efi.memmap.desc_size);
md = new;
md->phys_addr = m_end + 1;
md->num_pages = (end - m_end) >>
@@ -169,8 +168,8 @@ void __init efi_fake_memmap(void)
md->num_pages = (m_start - md->phys_addr) >>
EFI_PAGE_SHIFT;
/* latter part */
- new += memmap.desc_size;
- memcpy(new, old, memmap.desc_size);
+ new += efi.memmap.desc_size;
+ memcpy(new, old, efi.memmap.desc_size);
md = new;
md->phys_addr = m_start;
md->num_pages = (end - md->phys_addr + 1) >>
@@ -182,10 +181,10 @@ void __init efi_fake_memmap(void)
/* swap into new EFI memmap */
efi_unmap_memmap();
- memmap.map = new_memmap;
- memmap.phys_map = new_memmap_phy;
- memmap.nr_map = new_nr_map;
- memmap.map_end = memmap.map + memmap.nr_map * memmap.desc_size;
+ efi.memmap.map = new_memmap;
+ efi.memmap.phys_map = new_memmap_phy;
+ efi.memmap.nr_map = new_nr_map;
+ efi.memmap.map_end = efi.memmap.map + efi.memmap.nr_map * efi.memmap.desc_size;
set_bit(EFI_MEMMAP, &efi.flags);
/* print new EFI memmap */
diff --git a/drivers/firmware/efi/libstub/Makefile b/drivers/firmware/efi/libstub/Makefile
index da99bbb74aeb..c06945160a41 100644
--- a/drivers/firmware/efi/libstub/Makefile
+++ b/drivers/firmware/efi/libstub/Makefile
@@ -28,7 +28,7 @@ OBJECT_FILES_NON_STANDARD := y
# Prevents link failures: __sanitizer_cov_trace_pc() is not linked in.
KCOV_INSTRUMENT := n
-lib-y := efi-stub-helper.o
+lib-y := efi-stub-helper.o gop.o
# include the stub's generic dependencies from lib/ when building for ARM/arm64
arm-deps := fdt_rw.c fdt_ro.c fdt_wip.c fdt.c fdt_empty_tree.c fdt_sw.c sort.c
diff --git a/drivers/firmware/efi/libstub/arm-stub.c b/drivers/firmware/efi/libstub/arm-stub.c
index 414deb85c2e5..993aa56755f6 100644
--- a/drivers/firmware/efi/libstub/arm-stub.c
+++ b/drivers/firmware/efi/libstub/arm-stub.c
@@ -20,27 +20,49 @@
bool __nokaslr;
-static int efi_secureboot_enabled(efi_system_table_t *sys_table_arg)
+static int efi_get_secureboot(efi_system_table_t *sys_table_arg)
{
- static efi_guid_t const var_guid = EFI_GLOBAL_VARIABLE_GUID;
- static efi_char16_t const var_name[] = {
+ static efi_char16_t const sb_var_name[] = {
'S', 'e', 'c', 'u', 'r', 'e', 'B', 'o', 'o', 't', 0 };
+ static efi_char16_t const sm_var_name[] = {
+ 'S', 'e', 't', 'u', 'p', 'M', 'o', 'd', 'e', 0 };
+ efi_guid_t var_guid = EFI_GLOBAL_VARIABLE_GUID;
efi_get_variable_t *f_getvar = sys_table_arg->runtime->get_variable;
- unsigned long size = sizeof(u8);
- efi_status_t status;
u8 val;
+ unsigned long size = sizeof(val);
+ efi_status_t status;
- status = f_getvar((efi_char16_t *)var_name, (efi_guid_t *)&var_guid,
+ status = f_getvar((efi_char16_t *)sb_var_name, (efi_guid_t *)&var_guid,
NULL, &size, &val);
+ if (status != EFI_SUCCESS)
+ goto out_efi_err;
+
+ if (val == 0)
+ return 0;
+
+ status = f_getvar((efi_char16_t *)sm_var_name, (efi_guid_t *)&var_guid,
+ NULL, &size, &val);
+
+ if (status != EFI_SUCCESS)
+ goto out_efi_err;
+
+ if (val == 1)
+ return 0;
+
+ return 1;
+
+out_efi_err:
switch (status) {
- case EFI_SUCCESS:
- return val;
case EFI_NOT_FOUND:
return 0;
+ case EFI_DEVICE_ERROR:
+ return -EIO;
+ case EFI_SECURITY_VIOLATION:
+ return -EACCES;
default:
- return 1;
+ return -EINVAL;
}
}
@@ -147,6 +169,25 @@ void efi_char16_printk(efi_system_table_t *sys_table_arg,
out->output_string(out, str);
}
+static struct screen_info *setup_graphics(efi_system_table_t *sys_table_arg)
+{
+ efi_guid_t gop_proto = EFI_GRAPHICS_OUTPUT_PROTOCOL_GUID;
+ efi_status_t status;
+ unsigned long size;
+ void **gop_handle = NULL;
+ struct screen_info *si = NULL;
+
+ size = 0;
+ status = efi_call_early(locate_handle, EFI_LOCATE_BY_PROTOCOL,
+ &gop_proto, NULL, &size, gop_handle);
+ if (status == EFI_BUFFER_TOO_SMALL) {
+ si = alloc_screen_info(sys_table_arg);
+ if (!si)
+ return NULL;
+ efi_setup_gop(sys_table_arg, si, &gop_proto, size);
+ }
+ return si;
+}
/*
* This function handles the architecture specific differences between arm and
@@ -185,6 +226,8 @@ unsigned long efi_entry(void *handle, efi_system_table_t *sys_table,
efi_guid_t loaded_image_proto = LOADED_IMAGE_PROTOCOL_GUID;
unsigned long reserve_addr = 0;
unsigned long reserve_size = 0;
+ int secure_boot = 0;
+ struct screen_info *si;
/* Check if we were booted by the EFI firmware */
if (sys_table->hdr.signature != EFI_SYSTEM_TABLE_SIGNATURE)
@@ -237,6 +280,8 @@ unsigned long efi_entry(void *handle, efi_system_table_t *sys_table,
__nokaslr = true;
}
+ si = setup_graphics(sys_table);
+
status = handle_kernel_image(sys_table, image_addr, &image_size,
&reserve_addr,
&reserve_size,
@@ -250,12 +295,21 @@ unsigned long efi_entry(void *handle, efi_system_table_t *sys_table,
if (status != EFI_SUCCESS)
pr_efi_err(sys_table, "Failed to parse EFI cmdline options\n");
+ secure_boot = efi_get_secureboot(sys_table);
+ if (secure_boot > 0)
+ pr_efi(sys_table, "UEFI Secure Boot is enabled.\n");
+
+ if (secure_boot < 0) {
+ pr_efi_err(sys_table,
+ "could not determine UEFI Secure Boot status.\n");
+ }
+
/*
* Unauthenticated device tree data is a security hazard, so
* ignore 'dtb=' unless UEFI Secure Boot is disabled.
*/
- if (efi_secureboot_enabled(sys_table)) {
- pr_efi(sys_table, "UEFI Secure Boot is enabled.\n");
+ if (secure_boot != 0 && strstr(cmdline_ptr, "dtb=")) {
+ pr_efi(sys_table, "Ignoring DTB from command line.\n");
} else {
status = handle_cmdline_files(sys_table, image, cmdline_ptr,
"dtb=",
@@ -309,6 +363,7 @@ fail_free_image:
efi_free(sys_table, image_size, *image_addr);
efi_free(sys_table, reserve_size, reserve_addr);
fail_free_cmdline:
+ free_screen_info(sys_table, si);
efi_free(sys_table, cmdline_size, (unsigned long)cmdline_ptr);
fail:
return EFI_ERROR;
diff --git a/drivers/firmware/efi/libstub/arm32-stub.c b/drivers/firmware/efi/libstub/arm32-stub.c
index 6f42be4d0084..e1f0b28e1dcb 100644
--- a/drivers/firmware/efi/libstub/arm32-stub.c
+++ b/drivers/firmware/efi/libstub/arm32-stub.c
@@ -26,6 +26,43 @@ efi_status_t check_platform_features(efi_system_table_t *sys_table_arg)
return EFI_SUCCESS;
}
+static efi_guid_t screen_info_guid = LINUX_EFI_ARM_SCREEN_INFO_TABLE_GUID;
+
+struct screen_info *alloc_screen_info(efi_system_table_t *sys_table_arg)
+{
+ struct screen_info *si;
+ efi_status_t status;
+
+ /*
+ * Unlike on arm64, where we can directly fill out the screen_info
+ * structure from the stub, we need to allocate a buffer to hold
+ * its contents while we hand over to the kernel proper from the
+ * decompressor.
+ */
+ status = efi_call_early(allocate_pool, EFI_RUNTIME_SERVICES_DATA,
+ sizeof(*si), (void **)&si);
+
+ if (status != EFI_SUCCESS)
+ return NULL;
+
+ status = efi_call_early(install_configuration_table,
+ &screen_info_guid, si);
+ if (status == EFI_SUCCESS)
+ return si;
+
+ efi_call_early(free_pool, si);
+ return NULL;
+}
+
+void free_screen_info(efi_system_table_t *sys_table_arg, struct screen_info *si)
+{
+ if (!si)
+ return;
+
+ efi_call_early(install_configuration_table, &screen_info_guid, NULL);
+ efi_call_early(free_pool, si);
+}
+
efi_status_t handle_kernel_image(efi_system_table_t *sys_table,
unsigned long *image_addr,
unsigned long *image_size,
diff --git a/drivers/firmware/efi/libstub/arm64-stub.c b/drivers/firmware/efi/libstub/arm64-stub.c
index a90f6459f5c6..eae693eb3e91 100644
--- a/drivers/firmware/efi/libstub/arm64-stub.c
+++ b/drivers/firmware/efi/libstub/arm64-stub.c
@@ -81,15 +81,24 @@ efi_status_t handle_kernel_image(efi_system_table_t *sys_table_arg,
if (IS_ENABLED(CONFIG_RANDOMIZE_BASE) && phys_seed != 0) {
/*
+ * If CONFIG_DEBUG_ALIGN_RODATA is not set, produce a
+ * displacement in the interval [0, MIN_KIMG_ALIGN) that
+ * is a multiple of the minimal segment alignment (SZ_64K)
+ */
+ u32 mask = (MIN_KIMG_ALIGN - 1) & ~(SZ_64K - 1);
+ u32 offset = !IS_ENABLED(CONFIG_DEBUG_ALIGN_RODATA) ?
+ (phys_seed >> 32) & mask : TEXT_OFFSET;
+
+ /*
* If KASLR is enabled, and we have some randomness available,
* locate the kernel at a randomized offset in physical memory.
*/
- *reserve_size = kernel_memsize + TEXT_OFFSET;
+ *reserve_size = kernel_memsize + offset;
status = efi_random_alloc(sys_table_arg, *reserve_size,
MIN_KIMG_ALIGN, reserve_addr,
- phys_seed);
+ (u32)phys_seed);
- *image_addr = *reserve_addr + TEXT_OFFSET;
+ *image_addr = *reserve_addr + offset;
} else {
/*
* Else, try a straight allocation at the preferred offset.
diff --git a/drivers/firmware/efi/libstub/efi-stub-helper.c b/drivers/firmware/efi/libstub/efi-stub-helper.c
index 29ed2f9b218c..3bd127f95315 100644
--- a/drivers/firmware/efi/libstub/efi-stub-helper.c
+++ b/drivers/firmware/efi/libstub/efi-stub-helper.c
@@ -125,10 +125,12 @@ unsigned long get_dram_base(efi_system_table_t *sys_table_arg)
map.map_end = map.map + map_size;
- for_each_efi_memory_desc(&map, md)
- if (md->attribute & EFI_MEMORY_WB)
+ for_each_efi_memory_desc_in_map(&map, md) {
+ if (md->attribute & EFI_MEMORY_WB) {
if (membase > md->phys_addr)
membase = md->phys_addr;
+ }
+ }
efi_call_early(free_pool, map.map);
diff --git a/drivers/firmware/efi/libstub/fdt.c b/drivers/firmware/efi/libstub/fdt.c
index 6dba78aef337..e58abfa953cc 100644
--- a/drivers/firmware/efi/libstub/fdt.c
+++ b/drivers/firmware/efi/libstub/fdt.c
@@ -24,7 +24,7 @@ efi_status_t update_fdt(efi_system_table_t *sys_table, void *orig_fdt,
unsigned long map_size, unsigned long desc_size,
u32 desc_ver)
{
- int node, prev, num_rsv;
+ int node, num_rsv;
int status;
u32 fdt_val32;
u64 fdt_val64;
@@ -54,28 +54,6 @@ efi_status_t update_fdt(efi_system_table_t *sys_table, void *orig_fdt,
goto fdt_set_fail;
/*
- * Delete any memory nodes present. We must delete nodes which
- * early_init_dt_scan_memory may try to use.
- */
- prev = 0;
- for (;;) {
- const char *type;
- int len;
-
- node = fdt_next_node(fdt, prev, NULL);
- if (node < 0)
- break;
-
- type = fdt_getprop(fdt, node, "device_type", &len);
- if (type && strncmp(type, "memory", len) == 0) {
- fdt_del_node(fdt, node);
- continue;
- }
-
- prev = node;
- }
-
- /*
* Delete all memory reserve map entries. When booting via UEFI,
* kernel will use the UEFI memory map to find reserved regions.
*/
diff --git a/drivers/firmware/efi/libstub/gop.c b/drivers/firmware/efi/libstub/gop.c
new file mode 100644
index 000000000000..932742e4cf23
--- /dev/null
+++ b/drivers/firmware/efi/libstub/gop.c
@@ -0,0 +1,354 @@
+/* -----------------------------------------------------------------------
+ *
+ * Copyright 2011 Intel Corporation; author Matt Fleming
+ *
+ * This file is part of the Linux kernel, and is made available under
+ * the terms of the GNU General Public License version 2.
+ *
+ * ----------------------------------------------------------------------- */
+
+#include <linux/efi.h>
+#include <linux/screen_info.h>
+#include <asm/efi.h>
+#include <asm/setup.h>
+
+static void find_bits(unsigned long mask, u8 *pos, u8 *size)
+{
+ u8 first, len;
+
+ first = 0;
+ len = 0;
+
+ if (mask) {
+ while (!(mask & 0x1)) {
+ mask = mask >> 1;
+ first++;
+ }
+
+ while (mask & 0x1) {
+ mask = mask >> 1;
+ len++;
+ }
+ }
+
+ *pos = first;
+ *size = len;
+}
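
A worked example of find_bits(), for illustration only:

/*
 * For a PIXEL_BIT_MASK mode with a red mask of 0x00ff0000, the first loop
 * shifts past 16 clear bits and the second counts 8 set bits, so the
 * result is *pos == 16 and *size == 8, i.e. red occupies bits [23:16] of
 * each pixel value.
 */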
+
+static void
+setup_pixel_info(struct screen_info *si, u32 pixels_per_scan_line,
+ struct efi_pixel_bitmask pixel_info, int pixel_format)
+{
+ if (pixel_format == PIXEL_RGB_RESERVED_8BIT_PER_COLOR) {
+ si->lfb_depth = 32;
+ si->lfb_linelength = pixels_per_scan_line * 4;
+ si->red_size = 8;
+ si->red_pos = 0;
+ si->green_size = 8;
+ si->green_pos = 8;
+ si->blue_size = 8;
+ si->blue_pos = 16;
+ si->rsvd_size = 8;
+ si->rsvd_pos = 24;
+ } else if (pixel_format == PIXEL_BGR_RESERVED_8BIT_PER_COLOR) {
+ si->lfb_depth = 32;
+ si->lfb_linelength = pixels_per_scan_line * 4;
+ si->red_size = 8;
+ si->red_pos = 16;
+ si->green_size = 8;
+ si->green_pos = 8;
+ si->blue_size = 8;
+ si->blue_pos = 0;
+ si->rsvd_size = 8;
+ si->rsvd_pos = 24;
+ } else if (pixel_format == PIXEL_BIT_MASK) {
+ find_bits(pixel_info.red_mask, &si->red_pos, &si->red_size);
+ find_bits(pixel_info.green_mask, &si->green_pos,
+ &si->green_size);
+ find_bits(pixel_info.blue_mask, &si->blue_pos, &si->blue_size);
+ find_bits(pixel_info.reserved_mask, &si->rsvd_pos,
+ &si->rsvd_size);
+ si->lfb_depth = si->red_size + si->green_size +
+ si->blue_size + si->rsvd_size;
+ si->lfb_linelength = (pixels_per_scan_line * si->lfb_depth) / 8;
+ } else {
+ si->lfb_depth = 4;
+ si->lfb_linelength = si->lfb_width / 2;
+ si->red_size = 0;
+ si->red_pos = 0;
+ si->green_size = 0;
+ si->green_pos = 0;
+ si->blue_size = 0;
+ si->blue_pos = 0;
+ si->rsvd_size = 0;
+ si->rsvd_pos = 0;
+ }
+}
+
+static efi_status_t
+__gop_query32(efi_system_table_t *sys_table_arg,
+ struct efi_graphics_output_protocol_32 *gop32,
+ struct efi_graphics_output_mode_info **info,
+ unsigned long *size, u64 *fb_base)
+{
+ struct efi_graphics_output_protocol_mode_32 *mode;
+ efi_graphics_output_protocol_query_mode query_mode;
+ efi_status_t status;
+ unsigned long m;
+
+ m = gop32->mode;
+ mode = (struct efi_graphics_output_protocol_mode_32 *)m;
+ query_mode = (void *)(unsigned long)gop32->query_mode;
+
+ status = __efi_call_early(query_mode, (void *)gop32, mode->mode, size,
+ info);
+ if (status != EFI_SUCCESS)
+ return status;
+
+ *fb_base = mode->frame_buffer_base;
+ return status;
+}
+
+static efi_status_t
+setup_gop32(efi_system_table_t *sys_table_arg, struct screen_info *si,
+ efi_guid_t *proto, unsigned long size, void **gop_handle)
+{
+ struct efi_graphics_output_protocol_32 *gop32, *first_gop;
+ unsigned long nr_gops;
+ u16 width, height;
+ u32 pixels_per_scan_line;
+ u32 ext_lfb_base;
+ u64 fb_base;
+ struct efi_pixel_bitmask pixel_info;
+ int pixel_format;
+ efi_status_t status = EFI_NOT_FOUND;
+ u32 *handles = (u32 *)(unsigned long)gop_handle;
+ int i;
+
+ first_gop = NULL;
+ gop32 = NULL;
+
+ nr_gops = size / sizeof(u32);
+ for (i = 0; i < nr_gops; i++) {
+ struct efi_graphics_output_mode_info *info = NULL;
+ efi_guid_t conout_proto = EFI_CONSOLE_OUT_DEVICE_GUID;
+ bool conout_found = false;
+ void *dummy = NULL;
+ efi_handle_t h = (efi_handle_t)(unsigned long)handles[i];
+ u64 current_fb_base;
+
+ status = efi_call_early(handle_protocol, h,
+ proto, (void **)&gop32);
+ if (status != EFI_SUCCESS)
+ continue;
+
+ status = efi_call_early(handle_protocol, h,
+ &conout_proto, &dummy);
+ if (status == EFI_SUCCESS)
+ conout_found = true;
+
+ status = __gop_query32(sys_table_arg, gop32, &info, &size,
+ &current_fb_base);
+ if (status == EFI_SUCCESS && (!first_gop || conout_found)) {
+ /*
+ * Systems that use the UEFI Console Splitter may
+ * provide multiple GOP devices, not all of which are
+ * backed by real hardware. The workaround is to search
+ * for a GOP implementing the ConOut protocol, and if
+ * one isn't found, to just fall back to the first GOP.
+ */
+ width = info->horizontal_resolution;
+ height = info->vertical_resolution;
+ pixel_format = info->pixel_format;
+ pixel_info = info->pixel_information;
+ pixels_per_scan_line = info->pixels_per_scan_line;
+ fb_base = current_fb_base;
+
+ /*
+ * Once we've found a GOP supporting ConOut,
+ * don't bother looking any further.
+ */
+ first_gop = gop32;
+ if (conout_found)
+ break;
+ }
+ }
+
+ /* Did we find any GOPs? */
+ if (!first_gop)
+ goto out;
+
+ /* EFI framebuffer */
+ si->orig_video_isVGA = VIDEO_TYPE_EFI;
+
+ si->lfb_width = width;
+ si->lfb_height = height;
+ si->lfb_base = fb_base;
+
+ ext_lfb_base = (u64)(unsigned long)fb_base >> 32;
+ if (ext_lfb_base) {
+ si->capabilities |= VIDEO_CAPABILITY_64BIT_BASE;
+ si->ext_lfb_base = ext_lfb_base;
+ }
+
+ si->pages = 1;
+
+ setup_pixel_info(si, pixels_per_scan_line, pixel_info, pixel_format);
+
+ si->lfb_size = si->lfb_linelength * si->lfb_height;
+
+ si->capabilities |= VIDEO_CAPABILITY_SKIP_QUIRKS;
+out:
+ return status;
+}
+
+static efi_status_t
+__gop_query64(efi_system_table_t *sys_table_arg,
+ struct efi_graphics_output_protocol_64 *gop64,
+ struct efi_graphics_output_mode_info **info,
+ unsigned long *size, u64 *fb_base)
+{
+ struct efi_graphics_output_protocol_mode_64 *mode;
+ efi_graphics_output_protocol_query_mode query_mode;
+ efi_status_t status;
+ unsigned long m;
+
+ m = gop64->mode;
+ mode = (struct efi_graphics_output_protocol_mode_64 *)m;
+ query_mode = (void *)(unsigned long)gop64->query_mode;
+
+ status = __efi_call_early(query_mode, (void *)gop64, mode->mode, size,
+ info);
+ if (status != EFI_SUCCESS)
+ return status;
+
+ *fb_base = mode->frame_buffer_base;
+ return status;
+}
+
+static efi_status_t
+setup_gop64(efi_system_table_t *sys_table_arg, struct screen_info *si,
+ efi_guid_t *proto, unsigned long size, void **gop_handle)
+{
+ struct efi_graphics_output_protocol_64 *gop64, *first_gop;
+ unsigned long nr_gops;
+ u16 width, height;
+ u32 pixels_per_scan_line;
+ u32 ext_lfb_base;
+ u64 fb_base;
+ struct efi_pixel_bitmask pixel_info;
+ int pixel_format;
+ efi_status_t status = EFI_NOT_FOUND;
+ u64 *handles = (u64 *)(unsigned long)gop_handle;
+ int i;
+
+ first_gop = NULL;
+ gop64 = NULL;
+
+ nr_gops = size / sizeof(u64);
+ for (i = 0; i < nr_gops; i++) {
+ struct efi_graphics_output_mode_info *info = NULL;
+ efi_guid_t conout_proto = EFI_CONSOLE_OUT_DEVICE_GUID;
+ bool conout_found = false;
+ void *dummy = NULL;
+ efi_handle_t h = (efi_handle_t)(unsigned long)handles[i];
+ u64 current_fb_base;
+
+ status = efi_call_early(handle_protocol, h,
+ proto, (void **)&gop64);
+ if (status != EFI_SUCCESS)
+ continue;
+
+ status = efi_call_early(handle_protocol, h,
+ &conout_proto, &dummy);
+ if (status == EFI_SUCCESS)
+ conout_found = true;
+
+ status = __gop_query64(sys_table_arg, gop64, &info, &size,
+ &current_fb_base);
+ if (status == EFI_SUCCESS && (!first_gop || conout_found)) {
+ /*
+ * Systems that use the UEFI Console Splitter may
+ * provide multiple GOP devices, not all of which are
+ * backed by real hardware. The workaround is to search
+ * for a GOP implementing the ConOut protocol, and if
+ * one isn't found, to just fall back to the first GOP.
+ */
+ width = info->horizontal_resolution;
+ height = info->vertical_resolution;
+ pixel_format = info->pixel_format;
+ pixel_info = info->pixel_information;
+ pixels_per_scan_line = info->pixels_per_scan_line;
+ fb_base = current_fb_base;
+
+ /*
+ * Once we've found a GOP supporting ConOut,
+ * don't bother looking any further.
+ */
+ first_gop = gop64;
+ if (conout_found)
+ break;
+ }
+ }
+
+ /* Did we find any GOPs? */
+ if (!first_gop)
+ goto out;
+
+ /* EFI framebuffer */
+ si->orig_video_isVGA = VIDEO_TYPE_EFI;
+
+ si->lfb_width = width;
+ si->lfb_height = height;
+ si->lfb_base = fb_base;
+
+ ext_lfb_base = (u64)(unsigned long)fb_base >> 32;
+ if (ext_lfb_base) {
+ si->capabilities |= VIDEO_CAPABILITY_64BIT_BASE;
+ si->ext_lfb_base = ext_lfb_base;
+ }
+
+ si->pages = 1;
+
+ setup_pixel_info(si, pixels_per_scan_line, pixel_info, pixel_format);
+
+ si->lfb_size = si->lfb_linelength * si->lfb_height;
+
+ si->capabilities |= VIDEO_CAPABILITY_SKIP_QUIRKS;
+out:
+ return status;
+}
+
+/*
+ * See if we have Graphics Output Protocol
+ */
+efi_status_t efi_setup_gop(efi_system_table_t *sys_table_arg,
+ struct screen_info *si, efi_guid_t *proto,
+ unsigned long size)
+{
+ efi_status_t status;
+ void **gop_handle = NULL;
+
+ status = efi_call_early(allocate_pool, EFI_LOADER_DATA,
+ size, (void **)&gop_handle);
+ if (status != EFI_SUCCESS)
+ return status;
+
+ status = efi_call_early(locate_handle,
+ EFI_LOCATE_BY_PROTOCOL,
+ proto, NULL, &size, gop_handle);
+ if (status != EFI_SUCCESS)
+ goto free_handle;
+
+ if (efi_is_64bit()) {
+ status = setup_gop64(sys_table_arg, si, proto, size,
+ gop_handle);
+ } else {
+ status = setup_gop32(sys_table_arg, si, proto, size,
+ gop_handle);
+ }
+
+free_handle:
+ efi_call_early(free_pool, gop_handle);
+ return status;
+}
diff --git a/drivers/firmware/efi/memattr.c b/drivers/firmware/efi/memattr.c
new file mode 100644
index 000000000000..236004b9a50d
--- /dev/null
+++ b/drivers/firmware/efi/memattr.c
@@ -0,0 +1,182 @@
+/*
+ * Copyright (C) 2016 Linaro Ltd. <ard.biesheuvel@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#define pr_fmt(fmt) "efi: memattr: " fmt
+
+#include <linux/efi.h>
+#include <linux/init.h>
+#include <linux/io.h>
+#include <linux/memblock.h>
+
+#include <asm/early_ioremap.h>
+
+static int __initdata tbl_size;
+
+/*
+ * Reserve the memory associated with the Memory Attributes configuration
+ * table, if it exists.
+ */
+int __init efi_memattr_init(void)
+{
+ efi_memory_attributes_table_t *tbl;
+
+ if (efi.mem_attr_table == EFI_INVALID_TABLE_ADDR)
+ return 0;
+
+ tbl = early_memremap(efi.mem_attr_table, sizeof(*tbl));
+ if (!tbl) {
+ pr_err("Failed to map EFI Memory Attributes table @ 0x%lx\n",
+ efi.mem_attr_table);
+ return -ENOMEM;
+ }
+
+ if (tbl->version > 1) {
+ pr_warn("Unexpected EFI Memory Attributes table version %d\n",
+ tbl->version);
+ goto unmap;
+ }
+
+ tbl_size = sizeof(*tbl) + tbl->num_entries * tbl->desc_size;
+ memblock_reserve(efi.mem_attr_table, tbl_size);
+
+unmap:
+ early_memunmap(tbl, sizeof(*tbl));
+ return 0;
+}
+
+/*
+ * Returns true and copies the UEFI memory descriptor @in to @out if @in is
+ * covered entirely by a UEFI memory map entry with matching attributes. The
+ * virtual address of @out is set according to the matching entry found.
+ */
+static bool entry_is_valid(const efi_memory_desc_t *in, efi_memory_desc_t *out)
+{
+ u64 in_paddr = in->phys_addr;
+ u64 in_size = in->num_pages << EFI_PAGE_SHIFT;
+ efi_memory_desc_t *md;
+
+ *out = *in;
+
+ if (in->type != EFI_RUNTIME_SERVICES_CODE &&
+ in->type != EFI_RUNTIME_SERVICES_DATA) {
+ pr_warn("Entry type should be RuntimeServiceCode/Data\n");
+ return false;
+ }
+
+ if (!(in->attribute & (EFI_MEMORY_RO | EFI_MEMORY_XP))) {
+ pr_warn("Entry attributes invalid: RO and XP bits both cleared\n");
+ return false;
+ }
+
+ if (PAGE_SIZE > EFI_PAGE_SIZE &&
+ (!PAGE_ALIGNED(in->phys_addr) ||
+ !PAGE_ALIGNED(in->num_pages << EFI_PAGE_SHIFT))) {
+ /*
+ * Since arm64 may execute with page sizes of up to 64 KB, the
+ * UEFI spec mandates that RuntimeServices memory regions must
+ * be 64 KB aligned. We need to validate this here since we will
+ * not be able to tighten permissions on such regions without
+ * affecting adjacent regions.
+ */
+ pr_warn("Entry address region misaligned\n");
+ return false;
+ }
+
+ for_each_efi_memory_desc(md) {
+ u64 md_paddr = md->phys_addr;
+ u64 md_size = md->num_pages << EFI_PAGE_SHIFT;
+
+ if (!(md->attribute & EFI_MEMORY_RUNTIME))
+ continue;
+ if (md->virt_addr == 0) {
+ /* no virtual mapping has been installed by the stub */
+ break;
+ }
+
+ if (md_paddr > in_paddr || (in_paddr - md_paddr) >= md_size)
+ continue;
+
+ /*
+ * This entry covers the start of @in, check whether
+ * it covers the end as well.
+ */
+ if (md_paddr + md_size < in_paddr + in_size) {
+ pr_warn("Entry covers multiple EFI memory map regions\n");
+ return false;
+ }
+
+ if (md->type != in->type) {
+ pr_warn("Entry type deviates from EFI memory map region type\n");
+ return false;
+ }
+
+ out->virt_addr = in_paddr + (md->virt_addr - md_paddr);
+
+ return true;
+ }
+
+ pr_warn("No matching entry found in the EFI memory map\n");
+ return false;
+}
+
+/*
+ * To be called after the EFI page tables have been populated. If a memory
+ * attributes table is available, its contents will be used to update the
+ * mappings with tightened permissions as described by the table.
+ * This requires the UEFI memory map to have already been populated with
+ * virtual addresses.
+ */
+int __init efi_memattr_apply_permissions(struct mm_struct *mm,
+ efi_memattr_perm_setter fn)
+{
+ efi_memory_attributes_table_t *tbl;
+ int i, ret;
+
+ if (tbl_size <= sizeof(*tbl))
+ return 0;
+
+ /*
+ * We need the EFI memory map to be set up so we can use it to
+ * look up the virtual addresses of all entries in the EFI
+ * Memory Attributes table. If it isn't available, this
+ * function should not be called.
+ */
+ if (WARN_ON(!efi_enabled(EFI_MEMMAP)))
+ return 0;
+
+ tbl = memremap(efi.mem_attr_table, tbl_size, MEMREMAP_WB);
+ if (!tbl) {
+ pr_err("Failed to map EFI Memory Attributes table @ 0x%lx\n",
+ efi.mem_attr_table);
+ return -ENOMEM;
+ }
+
+ if (efi_enabled(EFI_DBG))
+ pr_info("Processing EFI Memory Attributes table:\n");
+
+ for (i = ret = 0; ret == 0 && i < tbl->num_entries; i++) {
+ efi_memory_desc_t md;
+ unsigned long size;
+ bool valid;
+ char buf[64];
+
+ valid = entry_is_valid((void *)tbl->entry + i * tbl->desc_size,
+ &md);
+ size = md.num_pages << EFI_PAGE_SHIFT;
+ if (efi_enabled(EFI_DBG) || !valid)
+ pr_info("%s 0x%012llx-0x%012llx %s\n",
+ valid ? "" : "!", md.phys_addr,
+ md.phys_addr + size - 1,
+ efi_md_typeattr_format(buf, sizeof(buf), &md));
+
+ if (valid)
+ ret = fn(mm, &md);
+ }
+ memunmap(tbl);
+ return ret;
+}
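
A hedged sketch of an efi_memattr_perm_setter callback as an architecture might supply it; example_set_permissions() is hypothetical and only logs what a real implementation would change in its EFI page tables:

static int example_set_permissions(struct mm_struct *mm,
				   efi_memory_desc_t *md)
{
	bool ro = md->attribute & EFI_MEMORY_RO;
	bool xp = md->attribute & EFI_MEMORY_XP;

	pr_info("  would remap 0x%llx (%llu pages)%s%s\n",
		(unsigned long long)md->virt_addr,
		(unsigned long long)md->num_pages,
		ro ? " read-only" : "", xp ? " non-executable" : "");

	/* A real callback would update the EFI page tables in @mm here. */
	return 0;
}

Such a callback would be handed to efi_memattr_apply_permissions() from the architecture's EFI init code once the runtime mappings have been created.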
diff --git a/drivers/firmware/efi/reboot.c b/drivers/firmware/efi/reboot.c
index 9c59d1c795d1..62ead9b9d871 100644
--- a/drivers/firmware/efi/reboot.c
+++ b/drivers/firmware/efi/reboot.c
@@ -9,7 +9,8 @@ int efi_reboot_quirk_mode = -1;
void efi_reboot(enum reboot_mode reboot_mode, const char *__unused)
{
- int efi_mode;
+ const char *str[] = { "cold", "warm", "shutdown", "platform" };
+ int efi_mode, cap_reset_mode;
if (!efi_enabled(EFI_RUNTIME_SERVICES))
return;
@@ -30,6 +31,15 @@ void efi_reboot(enum reboot_mode reboot_mode, const char *__unused)
if (efi_reboot_quirk_mode != -1)
efi_mode = efi_reboot_quirk_mode;
+ if (efi_capsule_pending(&cap_reset_mode)) {
+ if (efi_mode != cap_reset_mode)
+ printk(KERN_CRIT "efi: %s reset requested but pending "
+ "capsule update requires %s reset... Performing "
+ "%s reset.\n", str[efi_mode], str[cap_reset_mode],
+ str[cap_reset_mode]);
+ efi_mode = cap_reset_mode;
+ }
+
efi.reset_system(efi_mode, EFI_SUCCESS, 0, NULL);
}
diff --git a/drivers/firmware/efi/runtime-wrappers.c b/drivers/firmware/efi/runtime-wrappers.c
index de6953039af6..23bef6bb73ee 100644
--- a/drivers/firmware/efi/runtime-wrappers.c
+++ b/drivers/firmware/efi/runtime-wrappers.c
@@ -16,10 +16,70 @@
#include <linux/bug.h>
#include <linux/efi.h>
+#include <linux/irqflags.h>
#include <linux/mutex.h>
#include <linux/spinlock.h>
+#include <linux/stringify.h>
#include <asm/efi.h>
+static void efi_call_virt_check_flags(unsigned long flags, const char *call)
+{
+ unsigned long cur_flags, mismatch;
+
+ local_save_flags(cur_flags);
+
+ mismatch = flags ^ cur_flags;
+ if (!WARN_ON_ONCE(mismatch & ARCH_EFI_IRQ_FLAGS_MASK))
+ return;
+
+ add_taint(TAINT_FIRMWARE_WORKAROUND, LOCKDEP_NOW_UNRELIABLE);
+ pr_err_ratelimited(FW_BUG "IRQ flags corrupted (0x%08lx=>0x%08lx) by EFI %s\n",
+ flags, cur_flags, call);
+ local_irq_restore(flags);
+}
+
+/*
+ * Arch code can implement the following three template macros, avoiding
+ * repetition for the void/non-void return cases of {__,}efi_call_virt:
+ *
+ * * arch_efi_call_virt_setup
+ *
+ * Sets up the environment for the call (e.g. switching page tables,
+ * allowing kernel-mode use of floating point, if required).
+ *
+ * * arch_efi_call_virt
+ *
+ * Performs the call. The last expression in the macro must be the call
+ * itself, allowing the logic to be shared by the void and non-void
+ * cases.
+ *
+ * * arch_efi_call_virt_teardown
+ *
+ * Restores the usual kernel environment once the call has returned.
+ */
+
+#define efi_call_virt(f, args...) \
+({ \
+ efi_status_t __s; \
+ unsigned long flags; \
+ arch_efi_call_virt_setup(); \
+ local_save_flags(flags); \
+ __s = arch_efi_call_virt(f, args); \
+ efi_call_virt_check_flags(flags, __stringify(f)); \
+ arch_efi_call_virt_teardown(); \
+ __s; \
+})
+
+#define __efi_call_virt(f, args...) \
+({ \
+ unsigned long flags; \
+ arch_efi_call_virt_setup(); \
+ local_save_flags(flags); \
+ arch_efi_call_virt(f, args); \
+ efi_call_virt_check_flags(flags, __stringify(f)); \
+ arch_efi_call_virt_teardown(); \
+})
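
A hedged sketch of the three arch template macros described in the comment above; example_efi_enter()/example_efi_leave() are hypothetical stand-ins for whatever a given architecture actually does (switching to the EFI page tables, enabling FP/SIMD use in kernel mode, and so on):

#define arch_efi_call_virt_setup()					\
({									\
	example_efi_enter();	/* prepare the EFI call environment */	\
})

#define arch_efi_call_virt(f, args...)					\
	/* dispatch through the runtime services table; the call is	\
	 * the last (and only) expression, as required above */	\
	((efi_##f##_t *)efi.systab->runtime->f)(args)

#define arch_efi_call_virt_teardown()					\
({									\
	example_efi_leave();	/* restore the kernel environment */	\
})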
+
/*
* According to section 7.1 of the UEFI spec, Runtime Services are not fully
* reentrant, and there are particular combinations of calls that need to be
diff --git a/drivers/firmware/efi/vars.c b/drivers/firmware/efi/vars.c
index 34b741940494..d3b751383286 100644
--- a/drivers/firmware/efi/vars.c
+++ b/drivers/firmware/efi/vars.c
@@ -329,39 +329,6 @@ check_var_size_nonblocking(u32 attributes, unsigned long size)
return fops->query_variable_store(attributes, size, true);
}
-static int efi_status_to_err(efi_status_t status)
-{
- int err;
-
- switch (status) {
- case EFI_SUCCESS:
- err = 0;
- break;
- case EFI_INVALID_PARAMETER:
- err = -EINVAL;
- break;
- case EFI_OUT_OF_RESOURCES:
- err = -ENOSPC;
- break;
- case EFI_DEVICE_ERROR:
- err = -EIO;
- break;
- case EFI_WRITE_PROTECTED:
- err = -EROFS;
- break;
- case EFI_SECURITY_VIOLATION:
- err = -EACCES;
- break;
- case EFI_NOT_FOUND:
- err = -ENOENT;
- break;
- default:
- err = -EINVAL;
- }
-
- return err;
-}
-
static bool variable_is_present(efi_char16_t *variable_name, efi_guid_t *vendor,
struct list_head *head)
{
@@ -452,8 +419,7 @@ static void dup_variable_bug(efi_char16_t *str16, efi_guid_t *vendor_guid,
* Returns 0 on success, or a kernel error code on failure.
*/
int efivar_init(int (*func)(efi_char16_t *, efi_guid_t, unsigned long, void *),
- void *data, bool atomic, bool duplicates,
- struct list_head *head)
+ void *data, bool duplicates, struct list_head *head)
{
const struct efivar_operations *ops = __efivars->ops;
unsigned long variable_name_size = 1024;
@@ -483,7 +449,7 @@ int efivar_init(int (*func)(efi_char16_t *, efi_guid_t, unsigned long, void *),
&vendor_guid);
switch (status) {
case EFI_SUCCESS:
- if (!atomic)
+ if (duplicates)
spin_unlock_irq(&__efivars->lock);
variable_name_size = var_name_strnsize(variable_name,
@@ -498,21 +464,19 @@ int efivar_init(int (*func)(efi_char16_t *, efi_guid_t, unsigned long, void *),
* and may end up looping here forever.
*/
if (duplicates &&
- variable_is_present(variable_name, &vendor_guid, head)) {
+ variable_is_present(variable_name, &vendor_guid,
+ head)) {
dup_variable_bug(variable_name, &vendor_guid,
variable_name_size);
- if (!atomic)
- spin_lock_irq(&__efivars->lock);
-
status = EFI_NOT_FOUND;
- break;
+ } else {
+ err = func(variable_name, vendor_guid,
+ variable_name_size, data);
+ if (err)
+ status = EFI_NOT_FOUND;
}
- err = func(variable_name, vendor_guid, variable_name_size, data);
- if (err)
- status = EFI_NOT_FOUND;
-
- if (!atomic)
+ if (duplicates)
spin_lock_irq(&__efivars->lock);
break;
diff --git a/drivers/firmware/iscsi_ibft.c b/drivers/firmware/iscsi_ibft.c
index 81037e5fe301..14042a64bdd5 100644
--- a/drivers/firmware/iscsi_ibft.c
+++ b/drivers/firmware/iscsi_ibft.c
@@ -418,6 +418,31 @@ static ssize_t ibft_attr_show_target(void *data, int type, char *buf)
return str - buf;
}
+static ssize_t ibft_attr_show_acpitbl(void *data, int type, char *buf)
+{
+ struct ibft_kobject *entry = data;
+ char *str = buf;
+
+ switch (type) {
+ case ISCSI_BOOT_ACPITBL_SIGNATURE:
+ str += sprintf_string(str, ACPI_NAME_SIZE,
+ entry->header->header.signature);
+ break;
+ case ISCSI_BOOT_ACPITBL_OEM_ID:
+ str += sprintf_string(str, ACPI_OEM_ID_SIZE,
+ entry->header->header.oem_id);
+ break;
+ case ISCSI_BOOT_ACPITBL_OEM_TABLE_ID:
+ str += sprintf_string(str, ACPI_OEM_TABLE_ID_SIZE,
+ entry->header->header.oem_table_id);
+ break;
+ default:
+ break;
+ }
+
+ return str - buf;
+}
+
static int __init ibft_check_device(void)
{
int len;
@@ -576,6 +601,24 @@ static umode_t __init ibft_check_initiator_for(void *data, int type)
return rc;
}
+static umode_t __init ibft_check_acpitbl_for(void *data, int type)
+{
+
+ umode_t rc = 0;
+
+ switch (type) {
+ case ISCSI_BOOT_ACPITBL_SIGNATURE:
+ case ISCSI_BOOT_ACPITBL_OEM_ID:
+ case ISCSI_BOOT_ACPITBL_OEM_TABLE_ID:
+ rc = S_IRUGO;
+ break;
+ default:
+ break;
+ }
+
+ return rc;
+}
+
static void ibft_kobj_release(void *data)
{
kfree(data);
@@ -699,6 +742,8 @@ free_ibft_obj:
static int __init ibft_register_kobjects(struct acpi_table_ibft *header)
{
struct ibft_control *control = NULL;
+ struct iscsi_boot_kobj *boot_kobj;
+ struct ibft_kobject *ibft_kobj;
void *ptr, *end;
int rc = 0;
u16 offset;
@@ -726,6 +771,25 @@ static int __init ibft_register_kobjects(struct acpi_table_ibft *header)
break;
}
}
+ if (rc)
+ return rc;
+
+ ibft_kobj = kzalloc(sizeof(*ibft_kobj), GFP_KERNEL);
+ if (!ibft_kobj)
+ return -ENOMEM;
+
+ ibft_kobj->header = header;
+ ibft_kobj->hdr = NULL; /*for ibft_unregister*/
+
+ boot_kobj = iscsi_boot_create_acpitbl(boot_kset, 0,
+ ibft_kobj,
+ ibft_attr_show_acpitbl,
+ ibft_check_acpitbl_for,
+ ibft_kobj_release);
+ if (!boot_kobj) {
+ kfree(ibft_kobj);
+ rc = -ENOMEM;
+ }
return rc;
}
@@ -738,7 +802,7 @@ static void ibft_unregister(void)
list_for_each_entry_safe(boot_kobj, tmp_kobj,
&boot_kset->kobj_list, list) {
ibft_kobj = boot_kobj->data;
- if (ibft_kobj->hdr->id == id_nic)
+ if (ibft_kobj->hdr && ibft_kobj->hdr->id == id_nic)
sysfs_remove_link(&boot_kobj->kobj, "device");
};
}
diff --git a/drivers/firmware/psci.c b/drivers/firmware/psci.c
index b5d05807e6ec..03e04582791c 100644
--- a/drivers/firmware/psci.c
+++ b/drivers/firmware/psci.c
@@ -91,7 +91,7 @@ static inline bool psci_has_ext_power_state(void)
PSCI_1_0_FEATURES_CPU_SUSPEND_PF_MASK;
}
-bool psci_power_state_loses_context(u32 state)
+static inline bool psci_power_state_loses_context(u32 state)
{
const u32 mask = psci_has_ext_power_state() ?
PSCI_1_0_EXT_POWER_STATE_TYPE_MASK :
@@ -100,7 +100,7 @@ bool psci_power_state_loses_context(u32 state)
return state & mask;
}
-bool psci_power_state_is_valid(u32 state)
+static inline bool psci_power_state_is_valid(u32 state)
{
const u32 valid_mask = psci_has_ext_power_state() ?
PSCI_1_0_EXT_POWER_STATE_MASK :
@@ -355,7 +355,7 @@ int psci_cpu_suspend_enter(unsigned long index)
/* ARM specific CPU idle operations */
#ifdef CONFIG_ARM
-static struct cpuidle_ops psci_cpuidle_ops __initdata = {
+static const struct cpuidle_ops psci_cpuidle_ops __initconst = {
.suspend = psci_cpu_suspend_enter,
.init = psci_dt_cpu_init_idle,
};
@@ -563,7 +563,7 @@ out_put_node:
return err;
}
-static const struct of_device_id const psci_of_match[] __initconst = {
+static const struct of_device_id psci_of_match[] __initconst = {
{ .compatible = "arm,psci", .data = psci_0_1_init},
{ .compatible = "arm,psci-0.2", .data = psci_0_2_init},
{ .compatible = "arm,psci-1.0", .data = psci_0_2_init},
diff --git a/drivers/firmware/qemu_fw_cfg.c b/drivers/firmware/qemu_fw_cfg.c
index 1b95475b6aef..0e2011636fbb 100644
--- a/drivers/firmware/qemu_fw_cfg.c
+++ b/drivers/firmware/qemu_fw_cfg.c
@@ -125,9 +125,7 @@ static void fw_cfg_io_cleanup(void)
# define FW_CFG_CTRL_OFF 0x00
# define FW_CFG_DATA_OFF 0x01
# else
-# warning "QEMU FW_CFG may not be available on this architecture!"
-# define FW_CFG_CTRL_OFF 0x00
-# define FW_CFG_DATA_OFF 0x01
+# error "QEMU FW_CFG not available on this architecture!"
# endif
#endif
diff --git a/drivers/gpio/Kconfig b/drivers/gpio/Kconfig
index 5f3429f0bf46..48da857f4774 100644
--- a/drivers/gpio/Kconfig
+++ b/drivers/gpio/Kconfig
@@ -33,7 +33,6 @@ config ARCH_REQUIRE_GPIOLIB
menuconfig GPIOLIB
bool "GPIO Support"
- depends on ARCH_WANT_OPTIONAL_GPIOLIB || ARCH_REQUIRE_GPIOLIB
help
This enables GPIO support through the generic GPIO library.
You only need to enable this, if you also want to enable
@@ -49,7 +48,7 @@ config GPIO_DEVRES
config OF_GPIO
def_bool y
- depends on OF
+ depends on OF || COMPILE_TEST
config GPIO_ACPI
def_bool y
@@ -122,6 +121,7 @@ config GPIO_ALTERA
config GPIO_AMDPT
tristate "AMD Promontory GPIO support"
depends on ACPI
+ select GPIO_GENERIC
help
Driver for GPIO functionality on the Promontory IOHub.
Requires ACPI ASL code to enumerate as a platform device.
@@ -303,6 +303,7 @@ config GPIO_MPC8XXX
FSL_SOC_BOOKE || PPC_86xx || ARCH_LAYERSCAPE || ARM || \
COMPILE_TEST
select GPIO_GENERIC
+ select IRQ_DOMAIN
help
Say Y here if you're going to use hardware that connects to the
MPC512x/831x/834x/837x/8572/8610/QorIQ GPIOs.
@@ -399,6 +400,11 @@ config GPIO_TB10X
select GENERIC_IRQ_CHIP
select OF_GPIO
+config GPIO_TEGRA
+ bool
+ default y
+ depends on ARCH_TEGRA || COMPILE_TEST
+
config GPIO_TS4800
tristate "TS-4800 DIO blocks and compatibles"
depends on OF_GPIO
@@ -473,7 +479,7 @@ config GPIO_XILINX
config GPIO_XLP
tristate "Netlogic XLP GPIO support"
- depends on CPU_XLP && OF_GPIO
+ depends on OF_GPIO && (CPU_XLP || ARCH_VULCAN || COMPILE_TEST)
select GPIOLIB_IRQCHIP
help
This driver provides support for GPIO interface on Netlogic XLP MIPS64
@@ -510,6 +516,13 @@ config GPIO_ZX
help
Say yes here to support the GPIO device on ZTE ZX SoCs.
+config GPIO_LOONGSON1
+ tristate "Loongson1 GPIO support"
+ depends on MACH_LOONGSON32
+ select GPIO_GENERIC
+ help
+ Say Y or M here to support GPIO on Loongson1 SoCs.
+
endmenu
menu "Port-mapped I/O GPIO drivers"
@@ -517,30 +530,35 @@ menu "Port-mapped I/O GPIO drivers"
config GPIO_104_DIO_48E
tristate "ACCES 104-DIO-48E GPIO support"
+ depends on ISA
select GPIOLIB_IRQCHIP
help
- Enables GPIO support for the ACCES 104-DIO-48E family. The base port
- address for the device may be configured via the dio_48e_base module
- parameter. The interrupt line number for the device may be configured
- via the dio_48e_irq module parameter.
+ Enables GPIO support for the ACCES 104-DIO-48E series (104-DIO-48E,
+ 104-DIO-24E). The base port addresses for the devices may be
+ configured via the base module parameter. The interrupt line numbers
+ for the devices may be configured via the irq module parameter.
config GPIO_104_IDIO_16
tristate "ACCES 104-IDIO-16 GPIO support"
+ depends on ISA
select GPIOLIB_IRQCHIP
help
- Enables GPIO support for the ACCES 104-IDIO-16 family. The base port
- address for the device may be set via the idio_16_base module
- parameter. The interrupt line number for the device may be set via the
- idio_16_irq module parameter.
+ Enables GPIO support for the ACCES 104-IDIO-16 family (104-IDIO-16,
+ 104-IDIO-16E, 104-IDO-16, 104-IDIO-8, 104-IDIO-8E, 104-IDO-8). The
+ base port addresses for the devices may be configured via the base
+ module parameter. The interrupt line numbers for the devices may be
+ configured via the irq module parameter.
config GPIO_104_IDI_48
tristate "ACCES 104-IDI-48 GPIO support"
+ depends on ISA
select GPIOLIB_IRQCHIP
help
- Enables GPIO support for the ACCES 104-IDI-48 family. The base port
- address for the device may be configured via the idi_48_base module
- parameter. The interrupt line number for the device may be configured
- via the idi_48_irq module parameter.
+ Enables GPIO support for the ACCES 104-IDI-48 family (104-IDI-48A,
+ 104-IDI-48AC, 104-IDI-48B, 104-IDI-48BC). The base port addresses for
+ the devices may be configured via the base module parameter. The
+ interrupt line numbers for the devices may be configured via the irq
+ module parameter.
config GPIO_F7188X
tristate "F71869, F71869A, F71882FG, F71889F and F81866 GPIO support"
@@ -557,7 +575,7 @@ config GPIO_IT87
Say yes here to support GPIO functionality of IT87xx Super I/O chips.
This driver is tested with ITE IT8728 and IT8732 Super I/O chips, and
- supports the IT8761E Super I/O chip as well.
+ supports the IT8761E, IT8620E and IT8628E Super I/O chips as well.
To compile this driver as a module, choose M here: the module will
be called gpio_it87
@@ -609,12 +627,13 @@ config GPIO_TS5500
config GPIO_WS16C48
tristate "WinSystems WS16C48 GPIO support"
+ depends on ISA
select GPIOLIB_IRQCHIP
help
- Enables GPIO support for the WinSystems WS16C48. The base port address
- for the device may be configured via the ws16c48_base module
- parameter. The interrupt line number for the device may be configured
- via the ws16c48_irq module parameter.
+ Enables GPIO support for the WinSystems WS16C48. The base port
+ addresses for the devices may be configured via the base module
+ parameter. The interrupt line numbers for the devices may be
+ configured via the irq module parameter.
endmenu
@@ -1091,6 +1110,7 @@ menu "SPI or I2C GPIO expanders"
config GPIO_MCP23S08
tristate "Microchip MCP23xxx I/O expander"
+ select GPIOLIB_IRQCHIP
help
SPI/I2C driver for Microchip MCP23S08/MCP23S17/MCP23008/MCP23017
I/O expanders.
diff --git a/drivers/gpio/Makefile b/drivers/gpio/Makefile
index 1e0b74f3b1ed..991598ea3fba 100644
--- a/drivers/gpio/Makefile
+++ b/drivers/gpio/Makefile
@@ -12,6 +12,9 @@ obj-$(CONFIG_GPIO_ACPI) += gpiolib-acpi.o
# Device drivers. Generally keep list sorted alphabetically
obj-$(CONFIG_GPIO_GENERIC) += gpio-generic.o
+# directly supported by gpio-generic
+gpio-generic-$(CONFIG_GPIO_GENERIC) += gpio-mmio.o
+
obj-$(CONFIG_GPIO_104_DIO_48E) += gpio-104-dio-48e.o
obj-$(CONFIG_GPIO_104_IDIO_16) += gpio-104-idio-16.o
obj-$(CONFIG_GPIO_104_IDI_48) += gpio-104-idi-48.o
@@ -95,7 +98,7 @@ obj-$(CONFIG_GPIO_SX150X) += gpio-sx150x.o
obj-$(CONFIG_GPIO_SYSCON) += gpio-syscon.o
obj-$(CONFIG_GPIO_TB10X) += gpio-tb10x.o
obj-$(CONFIG_GPIO_TC3589X) += gpio-tc3589x.o
-obj-$(CONFIG_ARCH_TEGRA) += gpio-tegra.o
+obj-$(CONFIG_GPIO_TEGRA) += gpio-tegra.o
obj-$(CONFIG_GPIO_TIMBERDALE) += gpio-timberdale.o
obj-$(CONFIG_GPIO_PALMAS) += gpio-palmas.o
obj-$(CONFIG_GPIO_TPIC2810) += gpio-tpic2810.o
@@ -127,3 +130,4 @@ obj-$(CONFIG_GPIO_XTENSA) += gpio-xtensa.o
obj-$(CONFIG_GPIO_ZEVIO) += gpio-zevio.o
obj-$(CONFIG_GPIO_ZYNQ) += gpio-zynq.o
obj-$(CONFIG_GPIO_ZX) += gpio-zx.o
+obj-$(CONFIG_GPIO_LOONGSON1) += gpio-loongson1.o
diff --git a/drivers/gpio/gpio-104-dio-48e.c b/drivers/gpio/gpio-104-dio-48e.c
index 448a903089ef..1a647c07be67 100644
--- a/drivers/gpio/gpio-104-dio-48e.c
+++ b/drivers/gpio/gpio-104-dio-48e.c
@@ -1,5 +1,5 @@
/*
- * GPIO driver for the ACCES 104-DIO-48E
+ * GPIO driver for the ACCES 104-DIO-48E series
* Copyright (C) 2016 William Breathitt Gray
*
* This program is free software; you can redistribute it and/or modify
@@ -10,6 +10,9 @@
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
+ *
+ * This driver supports the following ACCES devices: 104-DIO-48E and
+ * 104-DIO-24E.
*/
#include <linux/bitops.h>
#include <linux/device.h>
@@ -19,18 +22,23 @@
#include <linux/ioport.h>
#include <linux/interrupt.h>
#include <linux/irqdesc.h>
+#include <linux/isa.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
-#include <linux/platform_device.h>
#include <linux/spinlock.h>
-static unsigned dio_48e_base;
-module_param(dio_48e_base, uint, 0);
-MODULE_PARM_DESC(dio_48e_base, "ACCES 104-DIO-48E base address");
-static unsigned dio_48e_irq;
-module_param(dio_48e_irq, uint, 0);
-MODULE_PARM_DESC(dio_48e_irq, "ACCES 104-DIO-48E interrupt line number");
+#define DIO48E_EXTENT 16
+#define MAX_NUM_DIO48E max_num_isa_dev(DIO48E_EXTENT)
+
+static unsigned int base[MAX_NUM_DIO48E];
+static unsigned int num_dio48e;
+module_param_array(base, uint, &num_dio48e, 0);
+MODULE_PARM_DESC(base, "ACCES 104-DIO-48E base addresses");
+
+static unsigned int irq[MAX_NUM_DIO48E];
+module_param_array(irq, uint, NULL, 0);
+MODULE_PARM_DESC(irq, "ACCES 104-DIO-48E interrupt line numbers");
/**
* struct dio48e_gpio - GPIO device private data structure
@@ -294,23 +302,19 @@ static irqreturn_t dio48e_irq_handler(int irq, void *dev_id)
return IRQ_HANDLED;
}
-static int __init dio48e_probe(struct platform_device *pdev)
+static int dio48e_probe(struct device *dev, unsigned int id)
{
- struct device *dev = &pdev->dev;
struct dio48e_gpio *dio48egpio;
- const unsigned base = dio_48e_base;
- const unsigned extent = 16;
const char *const name = dev_name(dev);
int err;
- const unsigned irq = dio_48e_irq;
dio48egpio = devm_kzalloc(dev, sizeof(*dio48egpio), GFP_KERNEL);
if (!dio48egpio)
return -ENOMEM;
- if (!devm_request_region(dev, base, extent, name)) {
+ if (!devm_request_region(dev, base[id], DIO48E_EXTENT, name)) {
dev_err(dev, "Unable to lock port addresses (0x%X-0x%X)\n",
- base, base + extent);
+ base[id], base[id] + DIO48E_EXTENT);
return -EBUSY;
}
@@ -324,8 +328,8 @@ static int __init dio48e_probe(struct platform_device *pdev)
dio48egpio->chip.direction_output = dio48e_gpio_direction_output;
dio48egpio->chip.get = dio48e_gpio_get;
dio48egpio->chip.set = dio48e_gpio_set;
- dio48egpio->base = base;
- dio48egpio->irq = irq;
+ dio48egpio->base = base[id];
+ dio48egpio->irq = irq[id];
spin_lock_init(&dio48egpio->lock);
@@ -338,19 +342,19 @@ static int __init dio48e_probe(struct platform_device *pdev)
}
/* initialize all GPIO as output */
- outb(0x80, base + 3);
- outb(0x00, base);
- outb(0x00, base + 1);
- outb(0x00, base + 2);
- outb(0x00, base + 3);
- outb(0x80, base + 7);
- outb(0x00, base + 4);
- outb(0x00, base + 5);
- outb(0x00, base + 6);
- outb(0x00, base + 7);
+ outb(0x80, base[id] + 3);
+ outb(0x00, base[id]);
+ outb(0x00, base[id] + 1);
+ outb(0x00, base[id] + 2);
+ outb(0x00, base[id] + 3);
+ outb(0x80, base[id] + 7);
+ outb(0x00, base[id] + 4);
+ outb(0x00, base[id] + 5);
+ outb(0x00, base[id] + 6);
+ outb(0x00, base[id] + 7);
/* disable IRQ by default */
- inb(base + 0xB);
+ inb(base[id] + 0xB);
err = gpiochip_irqchip_add(&dio48egpio->chip, &dio48e_irqchip, 0,
handle_edge_irq, IRQ_TYPE_NONE);
@@ -359,7 +363,7 @@ static int __init dio48e_probe(struct platform_device *pdev)
goto err_gpiochip_remove;
}
- err = request_irq(irq, dio48e_irq_handler, 0, name, dio48egpio);
+ err = request_irq(irq[id], dio48e_irq_handler, 0, name, dio48egpio);
if (err) {
dev_err(dev, "IRQ handler registering failed (%d)\n", err);
goto err_gpiochip_remove;
@@ -372,9 +376,9 @@ err_gpiochip_remove:
return err;
}
-static int dio48e_remove(struct platform_device *pdev)
+static int dio48e_remove(struct device *dev, unsigned int id)
{
- struct dio48e_gpio *const dio48egpio = platform_get_drvdata(pdev);
+ struct dio48e_gpio *const dio48egpio = dev_get_drvdata(dev);
free_irq(dio48egpio->irq, dio48egpio);
gpiochip_remove(&dio48egpio->chip);
@@ -382,48 +386,14 @@ static int dio48e_remove(struct platform_device *pdev)
return 0;
}
-static struct platform_device *dio48e_device;
-
-static struct platform_driver dio48e_driver = {
+static struct isa_driver dio48e_driver = {
+ .probe = dio48e_probe,
.driver = {
.name = "104-dio-48e"
},
.remove = dio48e_remove
};
-
-static void __exit dio48e_exit(void)
-{
- platform_device_unregister(dio48e_device);
- platform_driver_unregister(&dio48e_driver);
-}
-
-static int __init dio48e_init(void)
-{
- int err;
-
- dio48e_device = platform_device_alloc(dio48e_driver.driver.name, -1);
- if (!dio48e_device)
- return -ENOMEM;
-
- err = platform_device_add(dio48e_device);
- if (err)
- goto err_platform_device;
-
- err = platform_driver_probe(&dio48e_driver, dio48e_probe);
- if (err)
- goto err_platform_driver;
-
- return 0;
-
-err_platform_driver:
- platform_device_del(dio48e_device);
-err_platform_device:
- platform_device_put(dio48e_device);
- return err;
-}
-
-module_init(dio48e_init);
-module_exit(dio48e_exit);
+module_isa_driver(dio48e_driver, num_dio48e);
MODULE_AUTHOR("William Breathitt Gray <vilhelm.gray@gmail.com>");
MODULE_DESCRIPTION("ACCES 104-DIO-48E GPIO driver");
diff --git a/drivers/gpio/gpio-104-idi-48.c b/drivers/gpio/gpio-104-idi-48.c
index e37cd4cdda35..6c75c83baf5a 100644
--- a/drivers/gpio/gpio-104-idi-48.c
+++ b/drivers/gpio/gpio-104-idi-48.c
@@ -10,6 +10,9 @@
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
+ *
+ * This driver supports the following ACCES devices: 104-IDI-48A,
+ * 104-IDI-48AC, 104-IDI-48B, and 104-IDI-48BC.
*/
#include <linux/bitops.h>
#include <linux/device.h>
@@ -19,18 +22,23 @@
#include <linux/ioport.h>
#include <linux/interrupt.h>
#include <linux/irqdesc.h>
+#include <linux/isa.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
-#include <linux/platform_device.h>
#include <linux/spinlock.h>
-static unsigned idi_48_base;
-module_param(idi_48_base, uint, 0);
-MODULE_PARM_DESC(idi_48_base, "ACCES 104-IDI-48 base address");
-static unsigned idi_48_irq;
-module_param(idi_48_irq, uint, 0);
-MODULE_PARM_DESC(idi_48_irq, "ACCES 104-IDI-48 interrupt line number");
+#define IDI_48_EXTENT 8
+#define MAX_NUM_IDI_48 max_num_isa_dev(IDI_48_EXTENT)
+
+static unsigned int base[MAX_NUM_IDI_48];
+static unsigned int num_idi_48;
+module_param_array(base, uint, &num_idi_48, 0);
+MODULE_PARM_DESC(base, "ACCES 104-IDI-48 base addresses");
+
+static unsigned int irq[MAX_NUM_IDI_48];
+module_param_array(irq, uint, NULL, 0);
+MODULE_PARM_DESC(irq, "ACCES 104-IDI-48 interrupt line numbers");
/**
* struct idi_48_gpio - GPIO device private data structure
@@ -211,23 +219,19 @@ static irqreturn_t idi_48_irq_handler(int irq, void *dev_id)
return IRQ_HANDLED;
}
-static int __init idi_48_probe(struct platform_device *pdev)
+static int idi_48_probe(struct device *dev, unsigned int id)
{
- struct device *dev = &pdev->dev;
struct idi_48_gpio *idi48gpio;
- const unsigned base = idi_48_base;
- const unsigned extent = 8;
const char *const name = dev_name(dev);
int err;
- const unsigned irq = idi_48_irq;
idi48gpio = devm_kzalloc(dev, sizeof(*idi48gpio), GFP_KERNEL);
if (!idi48gpio)
return -ENOMEM;
- if (!devm_request_region(dev, base, extent, name)) {
+ if (!devm_request_region(dev, base[id], IDI_48_EXTENT, name)) {
dev_err(dev, "Unable to lock port addresses (0x%X-0x%X)\n",
- base, base + extent);
+ base[id], base[id] + IDI_48_EXTENT);
return -EBUSY;
}
@@ -239,8 +243,8 @@ static int __init idi_48_probe(struct platform_device *pdev)
idi48gpio->chip.get_direction = idi_48_gpio_get_direction;
idi48gpio->chip.direction_input = idi_48_gpio_direction_input;
idi48gpio->chip.get = idi_48_gpio_get;
- idi48gpio->base = base;
- idi48gpio->irq = irq;
+ idi48gpio->base = base[id];
+ idi48gpio->irq = irq[id];
spin_lock_init(&idi48gpio->lock);
@@ -253,8 +257,8 @@ static int __init idi_48_probe(struct platform_device *pdev)
}
/* Disable IRQ by default */
- outb(0, base + 7);
- inb(base + 7);
+ outb(0, base[id] + 7);
+ inb(base[id] + 7);
err = gpiochip_irqchip_add(&idi48gpio->chip, &idi_48_irqchip, 0,
handle_edge_irq, IRQ_TYPE_NONE);
@@ -263,7 +267,7 @@ static int __init idi_48_probe(struct platform_device *pdev)
goto err_gpiochip_remove;
}
- err = request_irq(irq, idi_48_irq_handler, IRQF_SHARED, name,
+ err = request_irq(irq[id], idi_48_irq_handler, IRQF_SHARED, name,
idi48gpio);
if (err) {
dev_err(dev, "IRQ handler registering failed (%d)\n", err);
@@ -277,9 +281,9 @@ err_gpiochip_remove:
return err;
}
-static int idi_48_remove(struct platform_device *pdev)
+static int idi_48_remove(struct device *dev, unsigned int id)
{
- struct idi_48_gpio *const idi48gpio = platform_get_drvdata(pdev);
+ struct idi_48_gpio *const idi48gpio = dev_get_drvdata(dev);
free_irq(idi48gpio->irq, idi48gpio);
gpiochip_remove(&idi48gpio->chip);
@@ -287,48 +291,14 @@ static int idi_48_remove(struct platform_device *pdev)
return 0;
}
-static struct platform_device *idi_48_device;
-
-static struct platform_driver idi_48_driver = {
+static struct isa_driver idi_48_driver = {
+ .probe = idi_48_probe,
.driver = {
.name = "104-idi-48"
},
.remove = idi_48_remove
};
-
-static void __exit idi_48_exit(void)
-{
- platform_device_unregister(idi_48_device);
- platform_driver_unregister(&idi_48_driver);
-}
-
-static int __init idi_48_init(void)
-{
- int err;
-
- idi_48_device = platform_device_alloc(idi_48_driver.driver.name, -1);
- if (!idi_48_device)
- return -ENOMEM;
-
- err = platform_device_add(idi_48_device);
- if (err)
- goto err_platform_device;
-
- err = platform_driver_probe(&idi_48_driver, idi_48_probe);
- if (err)
- goto err_platform_driver;
-
- return 0;
-
-err_platform_driver:
- platform_device_del(idi_48_device);
-err_platform_device:
- platform_device_put(idi_48_device);
- return err;
-}
-
-module_init(idi_48_init);
-module_exit(idi_48_exit);
+module_isa_driver(idi_48_driver, num_idi_48);
MODULE_AUTHOR("William Breathitt Gray <vilhelm.gray@gmail.com>");
MODULE_DESCRIPTION("ACCES 104-IDI-48 GPIO driver");
diff --git a/drivers/gpio/gpio-104-idio-16.c b/drivers/gpio/gpio-104-idio-16.c
index ecc85fe9323d..6787b8fcf0d8 100644
--- a/drivers/gpio/gpio-104-idio-16.c
+++ b/drivers/gpio/gpio-104-idio-16.c
@@ -10,6 +10,9 @@
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
+ *
+ * This driver supports the following ACCES devices: 104-IDIO-16,
+ * 104-IDIO-16E, 104-IDO-16, 104-IDIO-8, 104-IDIO-8E, and 104-IDO-8.
*/
#include <linux/bitops.h>
#include <linux/device.h>
@@ -19,18 +22,23 @@
#include <linux/ioport.h>
#include <linux/interrupt.h>
#include <linux/irqdesc.h>
+#include <linux/isa.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
-#include <linux/platform_device.h>
#include <linux/spinlock.h>
-static unsigned idio_16_base;
-module_param(idio_16_base, uint, 0);
-MODULE_PARM_DESC(idio_16_base, "ACCES 104-IDIO-16 base address");
-static unsigned idio_16_irq;
-module_param(idio_16_irq, uint, 0);
-MODULE_PARM_DESC(idio_16_irq, "ACCES 104-IDIO-16 interrupt line number");
+#define IDIO_16_EXTENT 8
+#define MAX_NUM_IDIO_16 max_num_isa_dev(IDIO_16_EXTENT)
+
+static unsigned int base[MAX_NUM_IDIO_16];
+static unsigned int num_idio_16;
+module_param_array(base, uint, &num_idio_16, 0);
+MODULE_PARM_DESC(base, "ACCES 104-IDIO-16 base addresses");
+
+static unsigned int irq[MAX_NUM_IDIO_16];
+module_param_array(irq, uint, NULL, 0);
+MODULE_PARM_DESC(irq, "ACCES 104-IDIO-16 interrupt line numbers");
/**
* struct idio_16_gpio - GPIO device private data structure
@@ -185,23 +193,19 @@ static irqreturn_t idio_16_irq_handler(int irq, void *dev_id)
return IRQ_HANDLED;
}
-static int __init idio_16_probe(struct platform_device *pdev)
+static int idio_16_probe(struct device *dev, unsigned int id)
{
- struct device *dev = &pdev->dev;
struct idio_16_gpio *idio16gpio;
- const unsigned base = idio_16_base;
- const unsigned extent = 8;
const char *const name = dev_name(dev);
int err;
- const unsigned irq = idio_16_irq;
idio16gpio = devm_kzalloc(dev, sizeof(*idio16gpio), GFP_KERNEL);
if (!idio16gpio)
return -ENOMEM;
- if (!devm_request_region(dev, base, extent, name)) {
+ if (!devm_request_region(dev, base[id], IDIO_16_EXTENT, name)) {
dev_err(dev, "Unable to lock port addresses (0x%X-0x%X)\n",
- base, base + extent);
+ base[id], base[id] + IDIO_16_EXTENT);
return -EBUSY;
}
@@ -215,8 +219,8 @@ static int __init idio_16_probe(struct platform_device *pdev)
idio16gpio->chip.direction_output = idio_16_gpio_direction_output;
idio16gpio->chip.get = idio_16_gpio_get;
idio16gpio->chip.set = idio_16_gpio_set;
- idio16gpio->base = base;
- idio16gpio->irq = irq;
+ idio16gpio->base = base[id];
+ idio16gpio->irq = irq[id];
idio16gpio->out_state = 0xFFFF;
spin_lock_init(&idio16gpio->lock);
@@ -230,8 +234,8 @@ static int __init idio_16_probe(struct platform_device *pdev)
}
/* Disable IRQ by default */
- outb(0, base + 2);
- outb(0, base + 1);
+ outb(0, base[id] + 2);
+ outb(0, base[id] + 1);
err = gpiochip_irqchip_add(&idio16gpio->chip, &idio_16_irqchip, 0,
handle_edge_irq, IRQ_TYPE_NONE);
@@ -240,7 +244,7 @@ static int __init idio_16_probe(struct platform_device *pdev)
goto err_gpiochip_remove;
}
- err = request_irq(irq, idio_16_irq_handler, 0, name, idio16gpio);
+ err = request_irq(irq[id], idio_16_irq_handler, 0, name, idio16gpio);
if (err) {
dev_err(dev, "IRQ handler registering failed (%d)\n", err);
goto err_gpiochip_remove;
@@ -253,9 +257,9 @@ err_gpiochip_remove:
return err;
}
-static int idio_16_remove(struct platform_device *pdev)
+static int idio_16_remove(struct device *dev, unsigned int id)
{
- struct idio_16_gpio *const idio16gpio = platform_get_drvdata(pdev);
+ struct idio_16_gpio *const idio16gpio = dev_get_drvdata(dev);
free_irq(idio16gpio->irq, idio16gpio);
gpiochip_remove(&idio16gpio->chip);
@@ -263,48 +267,15 @@ static int idio_16_remove(struct platform_device *pdev)
return 0;
}
-static struct platform_device *idio_16_device;
-
-static struct platform_driver idio_16_driver = {
+static struct isa_driver idio_16_driver = {
+ .probe = idio_16_probe,
.driver = {
.name = "104-idio-16"
},
.remove = idio_16_remove
};
-static void __exit idio_16_exit(void)
-{
- platform_device_unregister(idio_16_device);
- platform_driver_unregister(&idio_16_driver);
-}
-
-static int __init idio_16_init(void)
-{
- int err;
-
- idio_16_device = platform_device_alloc(idio_16_driver.driver.name, -1);
- if (!idio_16_device)
- return -ENOMEM;
-
- err = platform_device_add(idio_16_device);
- if (err)
- goto err_platform_device;
-
- err = platform_driver_probe(&idio_16_driver, idio_16_probe);
- if (err)
- goto err_platform_driver;
-
- return 0;
-
-err_platform_driver:
- platform_device_del(idio_16_device);
-err_platform_device:
- platform_device_put(idio_16_device);
- return err;
-}
-
-module_init(idio_16_init);
-module_exit(idio_16_exit);
+module_isa_driver(idio_16_driver, num_idio_16);
MODULE_AUTHOR("William Breathitt Gray <vilhelm.gray@gmail.com>");
MODULE_DESCRIPTION("ACCES 104-IDIO-16 GPIO driver");
diff --git a/drivers/gpio/gpio-74x164.c b/drivers/gpio/gpio-74x164.c
index c81224ff2dca..80f9ddf13343 100644
--- a/drivers/gpio/gpio-74x164.c
+++ b/drivers/gpio/gpio-74x164.c
@@ -75,6 +75,29 @@ static void gen_74x164_set_value(struct gpio_chip *gc,
mutex_unlock(&chip->lock);
}
+static void gen_74x164_set_multiple(struct gpio_chip *gc, unsigned long *mask,
+ unsigned long *bits)
+{
+ struct gen_74x164_chip *chip = gpiochip_get_data(gc);
+ unsigned int i, idx, shift;
+ u8 bank, bankmask;
+
+ mutex_lock(&chip->lock);
+ for (i = 0, bank = chip->registers - 1; i < chip->registers;
+ i++, bank--) {
+ idx = i / sizeof(*mask);
+ shift = i % sizeof(*mask) * BITS_PER_BYTE;
+ bankmask = mask[idx] >> shift;
+ if (!bankmask)
+ continue;
+
+ chip->buffer[bank] &= ~bankmask;
+ chip->buffer[bank] |= bankmask & (bits[idx] >> shift);
+ }
+ __gen_74x164_write_config(chip);
+ mutex_unlock(&chip->lock);
+}
+
static int gen_74x164_direction_output(struct gpio_chip *gc,
unsigned offset, int val)
{
@@ -114,6 +137,7 @@ static int gen_74x164_probe(struct spi_device *spi)
chip->gpio_chip.direction_output = gen_74x164_direction_output;
chip->gpio_chip.get = gen_74x164_get_value;
chip->gpio_chip.set = gen_74x164_set_value;
+ chip->gpio_chip.set_multiple = gen_74x164_set_multiple;
chip->gpio_chip.base = -1;
chip->registers = nregs;
@@ -153,6 +177,7 @@ static int gen_74x164_remove(struct spi_device *spi)
static const struct of_device_id gen_74x164_dt_ids[] = {
{ .compatible = "fairchild,74hc595" },
+ { .compatible = "nxp,74lvc594" },
{},
};
MODULE_DEVICE_TABLE(of, gen_74x164_dt_ids);
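
gen_74x164_set_multiple() above implements the gpio_chip set_multiple callback so gpiolib can update several outputs of the daisy-chained shift registers with a single SPI transfer: mask and bits are bitmaps packed into unsigned long words, and only banks whose mask byte is non-zero are copied into the buffer before the chain is rewritten. For a plain memory-mapped 32-bit bank the same callback convention reduces to one read-modify-write under the chip's lock; the sketch below is illustrative only, with a hypothetical device structure and register:

#include <linux/gpio/driver.h>
#include <linux/io.h>
#include <linux/spinlock.h>

struct foo_gpio {
	struct gpio_chip gc;
	void __iomem *out_reg;	/* hypothetical 32-bit output data register */
	spinlock_t lock;
};

static void foo_gpio_set_multiple(struct gpio_chip *gc, unsigned long *mask,
				  unsigned long *bits)
{
	struct foo_gpio *foo = gpiochip_get_data(gc);
	unsigned long flags;
	u32 val;

	/* with at most 32 lines only the first bitmap word is relevant */
	spin_lock_irqsave(&foo->lock, flags);
	val = readl(foo->out_reg);
	val &= ~mask[0];		/* clear the requested lines */
	val |= bits[0] & mask[0];	/* set those whose new value is 1 */
	writel(val, foo->out_reg);
	spin_unlock_irqrestore(&foo->lock, flags);
}
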
diff --git a/drivers/gpio/gpio-amdpt.c b/drivers/gpio/gpio-amdpt.c
index c2484046e8e9..9b78dc837603 100644
--- a/drivers/gpio/gpio-amdpt.c
+++ b/drivers/gpio/gpio-amdpt.c
@@ -28,7 +28,6 @@
struct pt_gpio_chip {
struct gpio_chip gc;
void __iomem *reg_base;
- spinlock_t lock;
};
static int pt_gpio_request(struct gpio_chip *gc, unsigned offset)
@@ -39,19 +38,19 @@ static int pt_gpio_request(struct gpio_chip *gc, unsigned offset)
dev_dbg(gc->parent, "pt_gpio_request offset=%x\n", offset);
- spin_lock_irqsave(&pt_gpio->lock, flags);
+ spin_lock_irqsave(&gc->bgpio_lock, flags);
using_pins = readl(pt_gpio->reg_base + PT_SYNC_REG);
if (using_pins & BIT(offset)) {
dev_warn(gc->parent, "PT GPIO pin %x reconfigured\n",
offset);
- spin_unlock_irqrestore(&pt_gpio->lock, flags);
+ spin_unlock_irqrestore(&gc->bgpio_lock, flags);
return -EINVAL;
}
writel(using_pins | BIT(offset), pt_gpio->reg_base + PT_SYNC_REG);
- spin_unlock_irqrestore(&pt_gpio->lock, flags);
+ spin_unlock_irqrestore(&gc->bgpio_lock, flags);
return 0;
}
@@ -62,111 +61,17 @@ static void pt_gpio_free(struct gpio_chip *gc, unsigned offset)
unsigned long flags;
u32 using_pins;
- spin_lock_irqsave(&pt_gpio->lock, flags);
+ spin_lock_irqsave(&gc->bgpio_lock, flags);
using_pins = readl(pt_gpio->reg_base + PT_SYNC_REG);
using_pins &= ~BIT(offset);
writel(using_pins, pt_gpio->reg_base + PT_SYNC_REG);
- spin_unlock_irqrestore(&pt_gpio->lock, flags);
+ spin_unlock_irqrestore(&gc->bgpio_lock, flags);
dev_dbg(gc->parent, "pt_gpio_free offset=%x\n", offset);
}
-static void pt_gpio_set_value(struct gpio_chip *gc, unsigned offset, int value)
-{
- struct pt_gpio_chip *pt_gpio = gpiochip_get_data(gc);
- unsigned long flags;
- u32 data;
-
- dev_dbg(gc->parent, "pt_gpio_set_value offset=%x, value=%x\n",
- offset, value);
-
- spin_lock_irqsave(&pt_gpio->lock, flags);
-
- data = readl(pt_gpio->reg_base + PT_OUTPUTDATA_REG);
- data &= ~BIT(offset);
- if (value)
- data |= BIT(offset);
- writel(data, pt_gpio->reg_base + PT_OUTPUTDATA_REG);
-
- spin_unlock_irqrestore(&pt_gpio->lock, flags);
-}
-
-static int pt_gpio_get_value(struct gpio_chip *gc, unsigned offset)
-{
- struct pt_gpio_chip *pt_gpio = gpiochip_get_data(gc);
- unsigned long flags;
- u32 data;
-
- spin_lock_irqsave(&pt_gpio->lock, flags);
-
- data = readl(pt_gpio->reg_base + PT_DIRECTION_REG);
-
- /* configure as output */
- if (data & BIT(offset))
- data = readl(pt_gpio->reg_base + PT_OUTPUTDATA_REG);
- else /* configure as input */
- data = readl(pt_gpio->reg_base + PT_INPUTDATA_REG);
-
- spin_unlock_irqrestore(&pt_gpio->lock, flags);
-
- data >>= offset;
- data &= 1;
-
- dev_dbg(gc->parent, "pt_gpio_get_value offset=%x, value=%x\n",
- offset, data);
-
- return data;
-}
-
-static int pt_gpio_direction_input(struct gpio_chip *gc, unsigned offset)
-{
- struct pt_gpio_chip *pt_gpio = gpiochip_get_data(gc);
- unsigned long flags;
- u32 data;
-
- dev_dbg(gc->parent, "pt_gpio_dirction_input offset=%x\n", offset);
-
- spin_lock_irqsave(&pt_gpio->lock, flags);
-
- data = readl(pt_gpio->reg_base + PT_DIRECTION_REG);
- data &= ~BIT(offset);
- writel(data, pt_gpio->reg_base + PT_DIRECTION_REG);
-
- spin_unlock_irqrestore(&pt_gpio->lock, flags);
-
- return 0;
-}
-
-static int pt_gpio_direction_output(struct gpio_chip *gc,
- unsigned offset, int value)
-{
- struct pt_gpio_chip *pt_gpio = gpiochip_get_data(gc);
- unsigned long flags;
- u32 data;
-
- dev_dbg(gc->parent, "pt_gpio_direction_output offset=%x, value=%x\n",
- offset, value);
-
- spin_lock_irqsave(&pt_gpio->lock, flags);
-
- data = readl(pt_gpio->reg_base + PT_OUTPUTDATA_REG);
- if (value)
- data |= BIT(offset);
- else
- data &= ~BIT(offset);
- writel(data, pt_gpio->reg_base + PT_OUTPUTDATA_REG);
-
- data = readl(pt_gpio->reg_base + PT_DIRECTION_REG);
- data |= BIT(offset);
- writel(data, pt_gpio->reg_base + PT_DIRECTION_REG);
-
- spin_unlock_irqrestore(&pt_gpio->lock, flags);
-
- return 0;
-}
-
static int pt_gpio_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
@@ -196,18 +101,19 @@ static int pt_gpio_probe(struct platform_device *pdev)
return PTR_ERR(pt_gpio->reg_base);
}
- spin_lock_init(&pt_gpio->lock);
+ ret = bgpio_init(&pt_gpio->gc, dev, 4,
+ pt_gpio->reg_base + PT_INPUTDATA_REG,
+ pt_gpio->reg_base + PT_OUTPUTDATA_REG, NULL,
+ pt_gpio->reg_base + PT_DIRECTION_REG, NULL,
+ BGPIOF_READ_OUTPUT_REG_SET);
+ if (ret) {
+ dev_err(&pdev->dev, "bgpio_init failed\n");
+ return ret;
+ }
- pt_gpio->gc.label = pdev->name;
pt_gpio->gc.owner = THIS_MODULE;
- pt_gpio->gc.parent = dev;
pt_gpio->gc.request = pt_gpio_request;
pt_gpio->gc.free = pt_gpio_free;
- pt_gpio->gc.direction_input = pt_gpio_direction_input;
- pt_gpio->gc.direction_output = pt_gpio_direction_output;
- pt_gpio->gc.get = pt_gpio_get_value;
- pt_gpio->gc.set = pt_gpio_set_value;
- pt_gpio->gc.base = -1;
pt_gpio->gc.ngpio = PT_TOTAL_GPIO;
#if defined(CONFIG_OF_GPIO)
pt_gpio->gc.of_node = pdev->dev.of_node;
@@ -239,6 +145,7 @@ static int pt_gpio_remove(struct platform_device *pdev)
static const struct acpi_device_id pt_gpio_acpi_match[] = {
{ "AMDF030", 0 },
+ { "AMDIF030", 0 },
{ },
};
MODULE_DEVICE_TABLE(acpi, pt_gpio_acpi_match);
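
The gpio-amdpt rewrite above deletes the driver's private spinlock and its open-coded get/set/direction callbacks and lets bgpio_init(), the generic MMIO GPIO core behind the new "select GPIO_GENERIC" in Kconfig, generate them from the input, output and direction register addresses; request/free then reuse the core's bgpio_lock, and BGPIOF_READ_OUTPUT_REG_SET preserves the old behaviour of reading back the output latch for pins configured as outputs. A minimal sketch of the same conversion, assuming a hypothetical bank with simple 32-bit data-in/data-out/direction registers:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/gpio/driver.h>
#include <linux/platform_device.h>

/* Hypothetical register layout of a simple memory-mapped bank */
#define FOO_DATA_IN	0x00
#define FOO_DATA_OUT	0x04
#define FOO_DIR_OUT	0x08

static int foo_gpio_probe(struct platform_device *pdev)
{
	struct gpio_chip *gc;
	struct resource *res;
	void __iomem *base;
	int ret;

	gc = devm_kzalloc(&pdev->dev, sizeof(*gc), GFP_KERNEL);
	if (!gc)
		return -ENOMEM;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	base = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(base))
		return PTR_ERR(base);

	/*
	 * 4-byte registers: data-in, data-out ("set"), no clear register,
	 * direction-out register, no direction-in register.  The flags
	 * argument is where BGPIOF_READ_OUTPUT_REG_SET would go if get()
	 * should read the output latch for output pins, as gpio-amdpt does.
	 */
	ret = bgpio_init(gc, &pdev->dev, 4,
			 base + FOO_DATA_IN, base + FOO_DATA_OUT, NULL,
			 base + FOO_DIR_OUT, NULL, 0);
	if (ret)
		return ret;

	/* get/set/direction_* and gc->bgpio_lock are now filled in by the core */
	return devm_gpiochip_add_data(&pdev->dev, gc, NULL);
}
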
diff --git a/drivers/gpio/gpio-bcm-kona.c b/drivers/gpio/gpio-bcm-kona.c
index 2fd38d598f3d..9aabc48ff5de 100644
--- a/drivers/gpio/gpio-bcm-kona.c
+++ b/drivers/gpio/gpio-bcm-kona.c
@@ -1,4 +1,7 @@
/*
+ * Broadcom Kona GPIO Driver
+ *
+ * Author: Broadcom Corporation <bcm-kernel-feedback-list@broadcom.com>
* Copyright (C) 2012-2014 Broadcom Corporation
*
* This program is free software; you can redistribute it and/or
@@ -17,7 +20,7 @@
#include <linux/gpio.h>
#include <linux/of_device.h>
#include <linux/of_irq.h>
-#include <linux/module.h>
+#include <linux/init.h>
#include <linux/irqdomain.h>
#include <linux/irqchip/chained_irq.h>
@@ -502,8 +505,6 @@ static struct of_device_id const bcm_kona_gpio_of_match[] = {
{}
};
-MODULE_DEVICE_TABLE(of, bcm_kona_gpio_of_match);
-
/*
* This lock class tells lockdep that GPIO irqs are in a different
* category than their parents, so it won't report false recursion.
@@ -659,9 +660,4 @@ static struct platform_driver bcm_kona_gpio_driver = {
},
.probe = bcm_kona_gpio_probe,
};
-
-module_platform_driver(bcm_kona_gpio_driver);
-
-MODULE_AUTHOR("Broadcom Corporation <bcm-kernel-feedback-list@broadcom.com>");
-MODULE_DESCRIPTION("Broadcom Kona GPIO Driver");
-MODULE_LICENSE("GPL v2");
+builtin_platform_driver(bcm_kona_gpio_driver);
diff --git a/drivers/gpio/gpio-brcmstb.c b/drivers/gpio/gpio-brcmstb.c
index 42d51c59ed50..e6489143721a 100644
--- a/drivers/gpio/gpio-brcmstb.c
+++ b/drivers/gpio/gpio-brcmstb.c
@@ -461,6 +461,7 @@ static int brcmstb_gpio_probe(struct platform_device *pdev)
bank->id = num_banks;
if (bank_width <= 0 || bank_width > MAX_GPIO_PER_BANK) {
dev_err(dev, "Invalid bank width %d\n", bank_width);
+ err = -EINVAL;
goto fail;
} else {
bank->width = bank_width;
diff --git a/drivers/gpio/gpio-dwapb.c b/drivers/gpio/gpio-dwapb.c
index 597de1ef497b..34779bb375de 100644
--- a/drivers/gpio/gpio-dwapb.c
+++ b/drivers/gpio/gpio-dwapb.c
@@ -7,6 +7,7 @@
*
* All enquiries to support@picochip.com
*/
+#include <linux/acpi.h>
#include <linux/gpio/driver.h>
/* FIXME: for gpio_get_value(), replace this with direct register read */
#include <linux/gpio.h>
@@ -22,10 +23,13 @@
#include <linux/of_address.h>
#include <linux/of_irq.h>
#include <linux/platform_device.h>
+#include <linux/property.h>
#include <linux/spinlock.h>
#include <linux/platform_data/gpio-dwapb.h>
#include <linux/slab.h>
+#include "gpiolib.h"
+
#define GPIO_SWPORTA_DR 0x00
#define GPIO_SWPORTA_DDR 0x04
#define GPIO_SWPORTB_DR 0x0c
@@ -290,14 +294,14 @@ static void dwapb_configure_irqs(struct dwapb_gpio *gpio,
struct dwapb_port_property *pp)
{
struct gpio_chip *gc = &port->gc;
- struct device_node *node = pp->node;
+ struct fwnode_handle *fwnode = pp->fwnode;
struct irq_chip_generic *irq_gc = NULL;
unsigned int hwirq, ngpio = gc->ngpio;
struct irq_chip_type *ct;
int err, i;
- gpio->domain = irq_domain_add_linear(node, ngpio,
- &irq_generic_chip_ops, gpio);
+ gpio->domain = irq_domain_create_linear(fwnode, ngpio,
+ &irq_generic_chip_ops, gpio);
if (!gpio->domain)
return;
@@ -409,13 +413,13 @@ static int dwapb_gpio_add_port(struct dwapb_gpio *gpio,
err = bgpio_init(&port->gc, gpio->dev, 4, dat, set, NULL, dirout,
NULL, false);
if (err) {
- dev_err(gpio->dev, "failed to init gpio chip for %s\n",
- pp->name);
+ dev_err(gpio->dev, "failed to init gpio chip for port%d\n",
+ port->idx);
return err;
}
#ifdef CONFIG_OF_GPIO
- port->gc.of_node = pp->node;
+ port->gc.of_node = to_of_node(pp->fwnode);
#endif
port->gc.ngpio = pp->ngpio;
port->gc.base = pp->gpio_base;
@@ -429,11 +433,15 @@ static int dwapb_gpio_add_port(struct dwapb_gpio *gpio,
err = gpiochip_add_data(&port->gc, port);
if (err)
- dev_err(gpio->dev, "failed to register gpiochip for %s\n",
- pp->name);
+ dev_err(gpio->dev, "failed to register gpiochip for port%d\n",
+ port->idx);
else
port->is_registered = true;
+ /* Add GPIO-signaled ACPI event support */
+ if (pp->irq)
+ acpi_gpiochip_request_interrupts(&port->gc);
+
return err;
}
@@ -447,19 +455,15 @@ static void dwapb_gpio_unregister(struct dwapb_gpio *gpio)
}
static struct dwapb_platform_data *
-dwapb_gpio_get_pdata_of(struct device *dev)
+dwapb_gpio_get_pdata(struct device *dev)
{
- struct device_node *node, *port_np;
+ struct fwnode_handle *fwnode;
struct dwapb_platform_data *pdata;
struct dwapb_port_property *pp;
int nports;
int i;
- node = dev->of_node;
- if (!IS_ENABLED(CONFIG_OF_GPIO) || !node)
- return ERR_PTR(-ENODEV);
-
- nports = of_get_child_count(node);
+ nports = device_get_child_node_count(dev);
if (nports == 0)
return ERR_PTR(-ENODEV);
@@ -474,21 +478,22 @@ dwapb_gpio_get_pdata_of(struct device *dev)
pdata->nports = nports;
i = 0;
- for_each_child_of_node(node, port_np) {
+ device_for_each_child_node(dev, fwnode) {
pp = &pdata->properties[i++];
- pp->node = port_np;
+ pp->fwnode = fwnode;
- if (of_property_read_u32(port_np, "reg", &pp->idx) ||
+ if (fwnode_property_read_u32(fwnode, "reg", &pp->idx) ||
pp->idx >= DWAPB_MAX_PORTS) {
- dev_err(dev, "missing/invalid port index for %s\n",
- port_np->full_name);
+ dev_err(dev,
+ "missing/invalid port index for port%d\n", i);
return ERR_PTR(-EINVAL);
}
- if (of_property_read_u32(port_np, "snps,nr-gpios",
+ if (fwnode_property_read_u32(fwnode, "snps,nr-gpios",
&pp->ngpio)) {
- dev_info(dev, "failed to get number of gpios for %s\n",
- port_np->full_name);
+ dev_info(dev,
+ "failed to get number of gpios for port%d\n",
+ i);
pp->ngpio = 32;
}
@@ -496,18 +501,19 @@ dwapb_gpio_get_pdata_of(struct device *dev)
* Only port A can provide interrupts in all configurations of
* the IP.
*/
- if (pp->idx == 0 &&
- of_property_read_bool(port_np, "interrupt-controller")) {
- pp->irq = irq_of_parse_and_map(port_np, 0);
- if (!pp->irq) {
- dev_warn(dev, "no irq for bank %s\n",
- port_np->full_name);
- }
+ if (dev->of_node && pp->idx == 0 &&
+ fwnode_property_read_bool(fwnode,
+ "interrupt-controller")) {
+ pp->irq = irq_of_parse_and_map(to_of_node(fwnode), 0);
+ if (!pp->irq)
+ dev_warn(dev, "no irq for port%d\n", pp->idx);
}
+ if (has_acpi_companion(dev) && pp->idx == 0)
+ pp->irq = platform_get_irq(to_platform_device(dev), 0);
+
pp->irq_shared = false;
pp->gpio_base = -1;
- pp->name = port_np->full_name;
}
return pdata;
@@ -523,7 +529,7 @@ static int dwapb_gpio_probe(struct platform_device *pdev)
struct dwapb_platform_data *pdata = dev_get_platdata(dev);
if (!pdata) {
- pdata = dwapb_gpio_get_pdata_of(dev);
+ pdata = dwapb_gpio_get_pdata(dev);
if (IS_ERR(pdata))
return PTR_ERR(pdata);
}
@@ -580,6 +586,13 @@ static const struct of_device_id dwapb_of_match[] = {
};
MODULE_DEVICE_TABLE(of, dwapb_of_match);
+static const struct acpi_device_id dwapb_acpi_match[] = {
+ {"HISI0181", 0},
+ {"APMC0D07", 0},
+ { }
+};
+MODULE_DEVICE_TABLE(acpi, dwapb_acpi_match);
+
#ifdef CONFIG_PM_SLEEP
static int dwapb_gpio_suspend(struct device *dev)
{
@@ -674,6 +687,7 @@ static struct platform_driver dwapb_gpio_driver = {
.name = "gpio-dwapb",
.pm = &dwapb_gpio_pm_ops,
.of_match_table = of_match_ptr(dwapb_of_match),
+ .acpi_match_table = ACPI_PTR(dwapb_acpi_match),
},
.probe = dwapb_gpio_probe,
.remove = dwapb_gpio_remove,
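
The gpio-dwapb changes drop the OF-only platform-data parser in favour of the unified fwnode property API, so the same child-node walk now serves both device tree ports and the ACPI companions matched by the new HISI0181/APMC0D07 IDs, with acpi_gpiochip_request_interrupts() adding GPIO-signaled ACPI event support on port A. A sketch of the fwnode iteration pattern, using the binding's "reg" and "snps,nr-gpios" properties but otherwise hypothetical:

#include <linux/device.h>
#include <linux/property.h>

/* Walk child firmware nodes the same way for DT and ACPI descriptions */
static int foo_count_ports(struct device *dev)
{
	struct fwnode_handle *child;
	int nports;

	nports = device_get_child_node_count(dev);
	if (nports == 0)
		return -ENODEV;

	device_for_each_child_node(dev, child) {
		u32 idx = 0, ngpio = 32;	/* defaults if properties are absent */

		fwnode_property_read_u32(child, "reg", &idx);
		fwnode_property_read_u32(child, "snps,nr-gpios", &ngpio);
		dev_dbg(dev, "port %u: %u lines\n", idx, ngpio);
	}

	return nports;
}
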
diff --git a/drivers/gpio/gpio-f7188x.c b/drivers/gpio/gpio-f7188x.c
index daac2d480db1..05aa538c3767 100644
--- a/drivers/gpio/gpio-f7188x.c
+++ b/drivers/gpio/gpio-f7188x.c
@@ -15,7 +15,8 @@
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/io.h>
-#include <linux/gpio.h>
+#include <linux/gpio/driver.h>
+#include <linux/bitops.h>
#define DRVNAME "gpio-f7188x"
@@ -129,6 +130,9 @@ static int f7188x_gpio_get(struct gpio_chip *chip, unsigned offset);
static int f7188x_gpio_direction_out(struct gpio_chip *chip,
unsigned offset, int value);
static void f7188x_gpio_set(struct gpio_chip *chip, unsigned offset, int value);
+static int f7188x_gpio_set_single_ended(struct gpio_chip *gc,
+ unsigned offset,
+ enum single_ended_mode mode);
#define F7188X_GPIO_BANK(_base, _ngpio, _regbase) \
{ \
@@ -139,6 +143,7 @@ static void f7188x_gpio_set(struct gpio_chip *chip, unsigned offset, int value);
.get = f7188x_gpio_get, \
.direction_output = f7188x_gpio_direction_out, \
.set = f7188x_gpio_set, \
+ .set_single_ended = f7188x_gpio_set_single_ended, \
.base = _base, \
.ngpio = _ngpio, \
.can_sleep = true, \
@@ -217,7 +222,7 @@ static int f7188x_gpio_direction_in(struct gpio_chip *chip, unsigned offset)
superio_select(sio->addr, SIO_LD_GPIO);
dir = superio_inb(sio->addr, gpio_dir(bank->regbase));
- dir &= ~(1 << offset);
+ dir &= ~BIT(offset);
superio_outb(sio->addr, gpio_dir(bank->regbase), dir);
superio_exit(sio->addr);
@@ -238,7 +243,7 @@ static int f7188x_gpio_get(struct gpio_chip *chip, unsigned offset)
superio_select(sio->addr, SIO_LD_GPIO);
dir = superio_inb(sio->addr, gpio_dir(bank->regbase));
- dir = !!(dir & (1 << offset));
+ dir = !!(dir & BIT(offset));
if (dir)
data = superio_inb(sio->addr, gpio_data_out(bank->regbase));
else
@@ -246,7 +251,7 @@ static int f7188x_gpio_get(struct gpio_chip *chip, unsigned offset)
superio_exit(sio->addr);
- return !!(data & 1 << offset);
+ return !!(data & BIT(offset));
}
static int f7188x_gpio_direction_out(struct gpio_chip *chip,
@@ -264,13 +269,13 @@ static int f7188x_gpio_direction_out(struct gpio_chip *chip,
data_out = superio_inb(sio->addr, gpio_data_out(bank->regbase));
if (value)
- data_out |= (1 << offset);
+ data_out |= BIT(offset);
else
- data_out &= ~(1 << offset);
+ data_out &= ~BIT(offset);
superio_outb(sio->addr, gpio_data_out(bank->regbase), data_out);
dir = superio_inb(sio->addr, gpio_dir(bank->regbase));
- dir |= (1 << offset);
+ dir |= BIT(offset);
superio_outb(sio->addr, gpio_dir(bank->regbase), dir);
superio_exit(sio->addr);
@@ -292,14 +297,43 @@ static void f7188x_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
data_out = superio_inb(sio->addr, gpio_data_out(bank->regbase));
if (value)
- data_out |= (1 << offset);
+ data_out |= BIT(offset);
else
- data_out &= ~(1 << offset);
+ data_out &= ~BIT(offset);
superio_outb(sio->addr, gpio_data_out(bank->regbase), data_out);
superio_exit(sio->addr);
}
+static int f7188x_gpio_set_single_ended(struct gpio_chip *chip,
+ unsigned offset,
+ enum single_ended_mode mode)
+{
+ int err;
+ struct f7188x_gpio_bank *bank = gpiochip_get_data(chip);
+ struct f7188x_sio *sio = bank->data->sio;
+ u8 data;
+
+ if (mode != LINE_MODE_OPEN_DRAIN &&
+ mode != LINE_MODE_PUSH_PULL)
+ return -ENOTSUPP;
+
+ err = superio_enter(sio->addr);
+ if (err)
+ return err;
+ superio_select(sio->addr, SIO_LD_GPIO);
+
+ data = superio_inb(sio->addr, gpio_out_mode(bank->regbase));
+ if (mode == LINE_MODE_OPEN_DRAIN)
+ data &= ~BIT(offset);
+ else
+ data |= BIT(offset);
+ superio_outb(sio->addr, gpio_out_mode(bank->regbase), data);
+
+ superio_exit(sio->addr);
+ return 0;
+}
+
/*
* Platform device and driver.
*/
diff --git a/drivers/gpio/gpio-it87.c b/drivers/gpio/gpio-it87.c
index b219c82414bf..63a962d18cd6 100644
--- a/drivers/gpio/gpio-it87.c
+++ b/drivers/gpio/gpio-it87.c
@@ -34,6 +34,8 @@
/* Chip Id numbers */
#define NO_DEV_ID 0xffff
+#define IT8620_ID 0x8620
+#define IT8628_ID 0x8628
#define IT8728_ID 0x8728
#define IT8732_ID 0x8732
#define IT8761_ID 0x8761
@@ -302,6 +304,14 @@ static int __init it87_gpio_init(void)
it87_gpio->chip = it87_template_chip;
switch (chip_type) {
+ case IT8620_ID:
+ case IT8628_ID:
+ gpio_ba_reg = 0x62;
+ it87_gpio->io_size = 11;
+ it87_gpio->output_base = 0xc8;
+ it87_gpio->simple_size = 0;
+ it87_gpio->chip.ngpio = 64;
+ break;
case IT8728_ID:
case IT8732_ID:
gpio_ba_reg = 0x62;
diff --git a/drivers/gpio/gpio-loongson1.c b/drivers/gpio/gpio-loongson1.c
new file mode 100644
index 000000000000..10c09bdd8514
--- /dev/null
+++ b/drivers/gpio/gpio-loongson1.c
@@ -0,0 +1,102 @@
+/*
+ * GPIO Driver for Loongson 1 SoC
+ *
+ * Copyright (C) 2015-2016 Zhang, Keguang <keguang.zhang@gmail.com>
+ *
+ * This file is licensed under the terms of the GNU General Public
+ * License version 2. This program is licensed "as is" without any
+ * warranty of any kind, whether express or implied.
+ */
+
+#include <linux/gpio/driver.h>
+#include <linux/platform_device.h>
+
+/* Loongson 1 GPIO Register Definitions */
+#define GPIO_CFG 0x0
+#define GPIO_DIR 0x10
+#define GPIO_DATA 0x20
+#define GPIO_OUTPUT 0x30
+
+static void __iomem *gpio_reg_base;
+
+static int ls1x_gpio_request(struct gpio_chip *gc, unsigned int offset)
+{
+ unsigned long pinmask = gc->pin2mask(gc, offset);
+ unsigned long flags;
+
+ spin_lock_irqsave(&gc->bgpio_lock, flags);
+ __raw_writel(__raw_readl(gpio_reg_base + GPIO_CFG) | pinmask,
+ gpio_reg_base + GPIO_CFG);
+ spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+
+ return 0;
+}
+
+static void ls1x_gpio_free(struct gpio_chip *gc, unsigned int offset)
+{
+ unsigned long pinmask = gc->pin2mask(gc, offset);
+ unsigned long flags;
+
+ spin_lock_irqsave(&gc->bgpio_lock, flags);
+ __raw_writel(__raw_readl(gpio_reg_base + GPIO_CFG) & ~pinmask,
+ gpio_reg_base + GPIO_CFG);
+ spin_unlock_irqrestore(&gc->bgpio_lock, flags);
+}
+
+static int ls1x_gpio_probe(struct platform_device *pdev)
+{
+ struct device *dev = &pdev->dev;
+ struct gpio_chip *gc;
+ struct resource *res;
+ int ret;
+
+ gc = devm_kzalloc(dev, sizeof(*gc), GFP_KERNEL);
+ if (!gc)
+ return -ENOMEM;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ dev_err(dev, "failed to get I/O memory\n");
+ return -EINVAL;
+ }
+
+ gpio_reg_base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(gpio_reg_base))
+ return PTR_ERR(gpio_reg_base);
+
+ ret = bgpio_init(gc, dev, 4, gpio_reg_base + GPIO_DATA,
+ gpio_reg_base + GPIO_OUTPUT, NULL,
+ NULL, gpio_reg_base + GPIO_DIR, 0);
+ if (ret)
+ goto err;
+
+ gc->owner = THIS_MODULE;
+ gc->request = ls1x_gpio_request;
+ gc->free = ls1x_gpio_free;
+ gc->base = pdev->id * 32;
+
+ ret = devm_gpiochip_add_data(dev, gc, NULL);
+ if (ret)
+ goto err;
+
+ platform_set_drvdata(pdev, gc);
+ dev_info(dev, "Loongson1 GPIO driver registered\n");
+
+ return 0;
+err:
+ dev_err(dev, "failed to register GPIO device\n");
+ return ret;
+}
+
+static struct platform_driver ls1x_gpio_driver = {
+ .probe = ls1x_gpio_probe,
+ .driver = {
+ .name = "ls1x-gpio",
+ },
+};
+
+module_platform_driver(ls1x_gpio_driver);
+
+MODULE_AUTHOR("Kelvin Cheung <keguang.zhang@gmail.com>");
+MODULE_DESCRIPTION("Loongson1 GPIO driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/gpio/gpio-mb86s7x.c b/drivers/gpio/gpio-mb86s7x.c
index 7fffc1d6c055..d55af50e7034 100644
--- a/drivers/gpio/gpio-mb86s7x.c
+++ b/drivers/gpio/gpio-mb86s7x.c
@@ -17,7 +17,6 @@
#include <linux/io.h>
#include <linux/init.h>
#include <linux/clk.h>
-#include <linux/module.h>
#include <linux/err.h>
#include <linux/errno.h>
#include <linux/ioport.h>
@@ -185,8 +184,6 @@ static int mb86s70_gpio_probe(struct platform_device *pdev)
gchip->gc.parent = &pdev->dev;
gchip->gc.base = -1;
- platform_set_drvdata(pdev, gchip);
-
ret = gpiochip_add_data(&gchip->gc, gchip);
if (ret) {
dev_err(&pdev->dev, "couldn't register gpio driver\n");
@@ -210,7 +207,6 @@ static const struct of_device_id mb86s70_gpio_dt_ids[] = {
{ .compatible = "fujitsu,mb86s70-gpio" },
{ /* sentinel */ }
};
-MODULE_DEVICE_TABLE(of, mb86s70_gpio_dt_ids);
static struct platform_driver mb86s70_gpio_driver = {
.driver = {
@@ -225,8 +221,4 @@ static int __init mb86s70_gpio_init(void)
{
return platform_driver_register(&mb86s70_gpio_driver);
}
-module_init(mb86s70_gpio_init);
-
-MODULE_DESCRIPTION("MB86S7x GPIO Driver");
-MODULE_ALIAS("platform:mb86s70-gpio");
-MODULE_LICENSE("GPL");
+device_initcall(mb86s70_gpio_init);
diff --git a/drivers/gpio/gpio-mc9s08dz60.c b/drivers/gpio/gpio-mc9s08dz60.c
index 14f252f9eb29..2fcad5b9cca5 100644
--- a/drivers/gpio/gpio-mc9s08dz60.c
+++ b/drivers/gpio/gpio-mc9s08dz60.c
@@ -15,7 +15,7 @@
*/
#include <linux/kernel.h>
-#include <linux/module.h>
+#include <linux/init.h>
#include <linux/slab.h>
#include <linux/i2c.h>
#include <linux/gpio.h>
@@ -111,8 +111,6 @@ static const struct i2c_device_id mc9s08dz60_id[] = {
{},
};
-MODULE_DEVICE_TABLE(i2c, mc9s08dz60_id);
-
static struct i2c_driver mc9s08dz60_i2c_driver = {
.driver = {
.name = "mc9s08dz60",
@@ -120,10 +118,4 @@ static struct i2c_driver mc9s08dz60_i2c_driver = {
.probe = mc9s08dz60_probe,
.id_table = mc9s08dz60_id,
};
-
-module_i2c_driver(mc9s08dz60_i2c_driver);
-
-MODULE_AUTHOR("Freescale Semiconductor, Inc. "
- "Wu Guoxing <b39297@freescale.com>");
-MODULE_DESCRIPTION("mc9s08dz60 gpio function on mx35 3ds board");
-MODULE_LICENSE("GPL v2");
+builtin_i2c_driver(mc9s08dz60_i2c_driver);
diff --git a/drivers/gpio/gpio-mcp23s08.c b/drivers/gpio/gpio-mcp23s08.c
index 47e486910aab..ac22efc1840e 100644
--- a/drivers/gpio/gpio-mcp23s08.c
+++ b/drivers/gpio/gpio-mcp23s08.c
@@ -77,7 +77,6 @@ struct mcp23s08 {
/* lock protects the cached values */
struct mutex lock;
struct mutex irq_lock;
- struct irq_domain *irq_domain;
struct gpio_chip chip;
@@ -96,11 +95,6 @@ struct mcp23s08_driver_data {
struct mcp23s08 chip[];
};
-/* This lock class tells lockdep that GPIO irqs are in a different
- * category than their parents, so it won't report false recursion.
- */
-static struct lock_class_key gpio_lock_class;
-
/*----------------------------------------------------------------------*/
#if IS_ENABLED(CONFIG_I2C)
@@ -368,8 +362,9 @@ static irqreturn_t mcp23s08_irq(int irq, void *data)
for (i = 0; i < mcp->chip.ngpio; i++) {
if ((BIT(i) & mcp->cache[MCP_INTF]) &&
((BIT(i) & intcap & mcp->irq_rise) ||
- (mcp->irq_fall & ~intcap & BIT(i)))) {
- child_irq = irq_find_mapping(mcp->irq_domain, i);
+ (mcp->irq_fall & ~intcap & BIT(i)) ||
+ (BIT(i) & mcp->cache[MCP_INTCON]))) {
+ child_irq = irq_find_mapping(mcp->chip.irqdomain, i);
handle_nested_irq(child_irq);
}
}
@@ -377,16 +372,10 @@ static irqreturn_t mcp23s08_irq(int irq, void *data)
return IRQ_HANDLED;
}
-static int mcp23s08_gpio_to_irq(struct gpio_chip *chip, unsigned offset)
-{
- struct mcp23s08 *mcp = gpiochip_get_data(chip);
-
- return irq_find_mapping(mcp->irq_domain, offset);
-}
-
static void mcp23s08_irq_mask(struct irq_data *data)
{
- struct mcp23s08 *mcp = irq_data_get_irq_chip_data(data);
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
+ struct mcp23s08 *mcp = gpiochip_get_data(gc);
unsigned int pos = data->hwirq;
mcp->cache[MCP_GPINTEN] &= ~BIT(pos);
@@ -394,7 +383,8 @@ static void mcp23s08_irq_mask(struct irq_data *data)
static void mcp23s08_irq_unmask(struct irq_data *data)
{
- struct mcp23s08 *mcp = irq_data_get_irq_chip_data(data);
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
+ struct mcp23s08 *mcp = gpiochip_get_data(gc);
unsigned int pos = data->hwirq;
mcp->cache[MCP_GPINTEN] |= BIT(pos);
@@ -402,7 +392,8 @@ static void mcp23s08_irq_unmask(struct irq_data *data)
static int mcp23s08_irq_set_type(struct irq_data *data, unsigned int type)
{
- struct mcp23s08 *mcp = irq_data_get_irq_chip_data(data);
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
+ struct mcp23s08 *mcp = gpiochip_get_data(gc);
unsigned int pos = data->hwirq;
int status = 0;
@@ -418,6 +409,12 @@ static int mcp23s08_irq_set_type(struct irq_data *data, unsigned int type)
mcp->cache[MCP_INTCON] &= ~BIT(pos);
mcp->irq_rise &= ~BIT(pos);
mcp->irq_fall |= BIT(pos);
+ } else if (type & IRQ_TYPE_LEVEL_HIGH) {
+ mcp->cache[MCP_INTCON] |= BIT(pos);
+ mcp->cache[MCP_DEFVAL] &= ~BIT(pos);
+ } else if (type & IRQ_TYPE_LEVEL_LOW) {
+ mcp->cache[MCP_INTCON] |= BIT(pos);
+ mcp->cache[MCP_DEFVAL] |= BIT(pos);
} else
return -EINVAL;
@@ -426,14 +423,16 @@ static int mcp23s08_irq_set_type(struct irq_data *data, unsigned int type)
static void mcp23s08_irq_bus_lock(struct irq_data *data)
{
- struct mcp23s08 *mcp = irq_data_get_irq_chip_data(data);
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
+ struct mcp23s08 *mcp = gpiochip_get_data(gc);
mutex_lock(&mcp->irq_lock);
}
static void mcp23s08_irq_bus_unlock(struct irq_data *data)
{
- struct mcp23s08 *mcp = irq_data_get_irq_chip_data(data);
+ struct gpio_chip *gc = irq_data_get_irq_chip_data(data);
+ struct mcp23s08 *mcp = gpiochip_get_data(gc);
mutex_lock(&mcp->lock);
mcp->ops->write(mcp, MCP_GPINTEN, mcp->cache[MCP_GPINTEN]);
@@ -443,27 +442,6 @@ static void mcp23s08_irq_bus_unlock(struct irq_data *data)
mutex_unlock(&mcp->irq_lock);
}
-static int mcp23s08_irq_reqres(struct irq_data *data)
-{
- struct mcp23s08 *mcp = irq_data_get_irq_chip_data(data);
-
- if (gpiochip_lock_as_irq(&mcp->chip, data->hwirq)) {
- dev_err(mcp->chip.parent,
- "unable to lock HW IRQ %lu for IRQ usage\n",
- data->hwirq);
- return -EINVAL;
- }
-
- return 0;
-}
-
-static void mcp23s08_irq_relres(struct irq_data *data)
-{
- struct mcp23s08 *mcp = irq_data_get_irq_chip_data(data);
-
- gpiochip_unlock_as_irq(&mcp->chip, data->hwirq);
-}
-
static struct irq_chip mcp23s08_irq_chip = {
.name = "gpio-mcp23xxx",
.irq_mask = mcp23s08_irq_mask,
@@ -471,24 +449,16 @@ static struct irq_chip mcp23s08_irq_chip = {
.irq_set_type = mcp23s08_irq_set_type,
.irq_bus_lock = mcp23s08_irq_bus_lock,
.irq_bus_sync_unlock = mcp23s08_irq_bus_unlock,
- .irq_request_resources = mcp23s08_irq_reqres,
- .irq_release_resources = mcp23s08_irq_relres,
};
static int mcp23s08_irq_setup(struct mcp23s08 *mcp)
{
struct gpio_chip *chip = &mcp->chip;
- int err, irq, j;
+ int err;
unsigned long irqflags = IRQF_ONESHOT | IRQF_SHARED;
mutex_init(&mcp->irq_lock);
- mcp->irq_domain = irq_domain_add_linear(chip->parent->of_node,
- chip->ngpio,
- &irq_domain_simple_ops, mcp);
- if (!mcp->irq_domain)
- return -ENODEV;
-
if (mcp->irq_active_high)
irqflags |= IRQF_TRIGGER_HIGH;
else
@@ -503,30 +473,23 @@ static int mcp23s08_irq_setup(struct mcp23s08 *mcp)
return err;
}
- chip->to_irq = mcp23s08_gpio_to_irq;
-
- for (j = 0; j < mcp->chip.ngpio; j++) {
- irq = irq_create_mapping(mcp->irq_domain, j);
- irq_set_lockdep_class(irq, &gpio_lock_class);
- irq_set_chip_data(irq, mcp);
- irq_set_chip(irq, &mcp23s08_irq_chip);
- irq_set_nested_thread(irq, true);
- irq_set_noprobe(irq);
+ err = gpiochip_irqchip_add(chip,
+ &mcp23s08_irq_chip,
+ 0,
+ handle_simple_irq,
+ IRQ_TYPE_NONE);
+ if (err) {
+ dev_err(chip->parent,
+ "could not connect irqchip to gpiochip: %d\n", err);
+ return err;
}
- return 0;
-}
-static void mcp23s08_irq_teardown(struct mcp23s08 *mcp)
-{
- unsigned int irq, i;
+ gpiochip_set_chained_irqchip(chip,
+ &mcp23s08_irq_chip,
+ mcp->irq,
+ NULL);
- for (i = 0; i < mcp->chip.ngpio; i++) {
- irq = irq_find_mapping(mcp->irq_domain, i);
- if (irq > 0)
- irq_dispose_mapping(irq);
- }
-
- irq_domain_remove(mcp->irq_domain);
+ return 0;
}
/*----------------------------------------------------------------------*/
@@ -721,7 +684,6 @@ static int mcp23s08_probe_one(struct mcp23s08 *mcp, struct device *dev,
if (mcp->irq && mcp->irq_controller) {
status = mcp23s08_irq_setup(mcp);
if (status) {
- mcp23s08_irq_teardown(mcp);
goto fail;
}
}
@@ -847,9 +809,6 @@ static int mcp230xx_remove(struct i2c_client *client)
{
struct mcp23s08 *mcp = i2c_get_clientdata(client);
- if (client->irq && mcp->irq_controller)
- mcp23s08_irq_teardown(mcp);
-
gpiochip_remove(&mcp->chip);
kfree(mcp);
@@ -1017,8 +976,6 @@ static int mcp23s08_remove(struct spi_device *spi)
if (!data->mcp[addr])
continue;
- if (spi->irq && data->mcp[addr]->irq_controller)
- mcp23s08_irq_teardown(data->mcp[addr]);
gpiochip_remove(&data->mcp[addr]->chip);
}
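
The mcp23s08 conversion removes the hand-rolled irq_domain, the lockdep class and the per-line mapping loop and relies on gpiolib's shared irqchip helpers instead (hence the "select GPIOLIB_IRQCHIP" added to its Kconfig entry earlier); irq_data_get_irq_chip_data() now returns the gpio_chip, so the irq callbacks fetch the driver state with gpiochip_get_data(). The registration sequence reduces to roughly the following sketch, with foo_irq_chip and parent_irq standing in for the driver's own names:

#include <linux/gpio/driver.h>
#include <linux/irq.h>

static int foo_setup_irqchip(struct gpio_chip *chip, struct irq_chip *irqchip,
			     int parent_irq)
{
	int err;

	/* Create the per-line IRQ mappings; replaces a hand-rolled irq_domain */
	err = gpiochip_irqchip_add(chip, irqchip,
				   0,			/* allocate descriptors dynamically */
				   handle_simple_irq,	/* flow handler for child IRQs */
				   IRQ_TYPE_NONE);	/* no default trigger type */
	if (err)
		return err;

	/*
	 * Connect the expander's upstream interrupt.  A NULL parent handler
	 * is used when the parent IRQ is requested separately, e.g. as a
	 * threaded handler for a sleeping I2C/SPI expander such as this one.
	 */
	gpiochip_set_chained_irqchip(chip, irqchip, parent_irq, NULL);

	return 0;
}
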
diff --git a/drivers/gpio/gpio-menz127.c b/drivers/gpio/gpio-menz127.c
index c5c9599a3a71..cc103aff45e4 100644
--- a/drivers/gpio/gpio-menz127.c
+++ b/drivers/gpio/gpio-menz127.c
@@ -35,7 +35,6 @@
struct men_z127_gpio {
struct gpio_chip gc;
void __iomem *reg_base;
- struct mcb_device *mdev;
struct resource *mem;
};
@@ -43,7 +42,7 @@ static int men_z127_debounce(struct gpio_chip *gc, unsigned gpio,
unsigned debounce)
{
struct men_z127_gpio *priv = gpiochip_get_data(gc);
- struct device *dev = &priv->mdev->dev;
+ struct device *dev = gc->parent;
unsigned int rnd;
u32 db_en, db_cnt;
@@ -88,21 +87,25 @@ static int men_z127_debounce(struct gpio_chip *gc, unsigned gpio,
return 0;
}
-static int men_z127_request(struct gpio_chip *gc, unsigned gpio_pin)
+static int men_z127_set_single_ended(struct gpio_chip *gc,
+ unsigned offset,
+ enum single_ended_mode mode)
{
struct men_z127_gpio *priv = gpiochip_get_data(gc);
u32 od_en;
- if (gpio_pin >= gc->ngpio)
- return -EINVAL;
+ if (mode != LINE_MODE_OPEN_DRAIN &&
+ mode != LINE_MODE_PUSH_PULL)
+ return -ENOTSUPP;
spin_lock(&gc->bgpio_lock);
od_en = readl(priv->reg_base + MEN_Z127_ODER);
- if (gpiochip_line_is_open_drain(gc, gpio_pin))
- od_en |= BIT(gpio_pin);
+ if (mode == LINE_MODE_OPEN_DRAIN)
+ od_en |= BIT(offset);
else
- od_en &= ~BIT(gpio_pin);
+ /* Implicitly LINE_MODE_PUSH_PULL */
+ od_en &= ~BIT(offset);
writel(od_en, priv->reg_base + MEN_Z127_ODER);
spin_unlock(&gc->bgpio_lock);
@@ -135,7 +138,6 @@ static int men_z127_probe(struct mcb_device *mdev,
goto err_release;
}
- men_z127_gpio->mdev = mdev;
mcb_set_drvdata(mdev, men_z127_gpio);
ret = bgpio_init(&men_z127_gpio->gc, &mdev->dev, 4,
@@ -148,7 +150,7 @@ static int men_z127_probe(struct mcb_device *mdev,
goto err_unmap;
men_z127_gpio->gc.set_debounce = men_z127_debounce;
- men_z127_gpio->gc.request = men_z127_request;
+ men_z127_gpio->gc.set_single_ended = men_z127_set_single_ended;
ret = gpiochip_add_data(&men_z127_gpio->gc, men_z127_gpio);
if (ret) {
diff --git a/drivers/gpio/gpio-generic.c b/drivers/gpio/gpio-mmio.c
index 54cddfa98f50..6c1cb3b8c02c 100644
--- a/drivers/gpio/gpio-generic.c
+++ b/drivers/gpio/gpio-mmio.c
@@ -549,7 +549,7 @@ int bgpio_init(struct gpio_chip *gc, struct device *dev,
}
EXPORT_SYMBOL_GPL(bgpio_init);
-#ifdef CONFIG_GPIO_GENERIC_PLATFORM
+#if IS_ENABLED(CONFIG_GPIO_GENERIC_PLATFORM)
static void __iomem *bgpio_map(struct platform_device *pdev,
const char *name,
diff --git a/drivers/gpio/gpio-moxart.c b/drivers/gpio/gpio-moxart.c
index f02d0b490978..d58d38906ba6 100644
--- a/drivers/gpio/gpio-moxart.c
+++ b/drivers/gpio/gpio-moxart.c
@@ -15,7 +15,6 @@
#include <linux/irq.h>
#include <linux/io.h>
#include <linux/platform_device.h>
-#include <linux/module.h>
#include <linux/of_address.h>
#include <linux/of_gpio.h>
#include <linux/pinctrl/consumer.h>
@@ -82,8 +81,4 @@ static struct platform_driver moxart_gpio_driver = {
},
.probe = moxart_gpio_probe,
};
-module_platform_driver(moxart_gpio_driver);
-
-MODULE_DESCRIPTION("MOXART GPIO chip driver");
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Jonas Jensen <jonas.jensen@gmail.com>");
+builtin_platform_driver(moxart_gpio_driver);
diff --git a/drivers/gpio/gpio-mvebu.c b/drivers/gpio/gpio-mvebu.c
index 11c6582ef0a6..cd5dc27320a2 100644
--- a/drivers/gpio/gpio-mvebu.c
+++ b/drivers/gpio/gpio-mvebu.c
@@ -34,7 +34,7 @@
*/
#include <linux/err.h>
-#include <linux/module.h>
+#include <linux/init.h>
#include <linux/gpio.h>
#include <linux/irq.h>
#include <linux/slab.h>
@@ -557,7 +557,6 @@ static const struct of_device_id mvebu_gpio_of_match[] = {
/* sentinel */
},
};
-MODULE_DEVICE_TABLE(of, mvebu_gpio_of_match);
static int mvebu_gpio_suspend(struct platform_device *pdev, pm_message_t state)
{
@@ -838,4 +837,4 @@ static struct platform_driver mvebu_gpio_driver = {
.suspend = mvebu_gpio_suspend,
.resume = mvebu_gpio_resume,
};
-module_platform_driver(mvebu_gpio_driver);
+builtin_platform_driver(mvebu_gpio_driver);
diff --git a/drivers/gpio/gpio-octeon.c b/drivers/gpio/gpio-octeon.c
index 47aead1ed1cc..96a8a8cb2729 100644
--- a/drivers/gpio/gpio-octeon.c
+++ b/drivers/gpio/gpio-octeon.c
@@ -83,6 +83,7 @@ static int octeon_gpio_probe(struct platform_device *pdev)
struct octeon_gpio *gpio;
struct gpio_chip *chip;
struct resource *res_mem;
+ void __iomem *reg_base;
int err = 0;
gpio = devm_kzalloc(&pdev->dev, sizeof(*gpio), GFP_KERNEL);
@@ -91,21 +92,11 @@ static int octeon_gpio_probe(struct platform_device *pdev)
chip = &gpio->chip;
res_mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- if (res_mem == NULL) {
- dev_err(&pdev->dev, "found no memory resource\n");
- err = -ENXIO;
- goto out;
- }
- if (!devm_request_mem_region(&pdev->dev, res_mem->start,
- resource_size(res_mem),
- res_mem->name)) {
- dev_err(&pdev->dev, "request_mem_region failed\n");
- err = -ENXIO;
- goto out;
- }
- gpio->register_base = (u64)devm_ioremap(&pdev->dev, res_mem->start,
- resource_size(res_mem));
+ reg_base = devm_ioremap_resource(&pdev->dev, res_mem);
+ if (IS_ERR(reg_base))
+ return PTR_ERR(reg_base);
+ gpio->register_base = (u64)reg_base;
pdev->dev.platform_data = chip;
chip->label = "octeon-gpio";
chip->parent = &pdev->dev;
@@ -119,14 +110,13 @@ static int octeon_gpio_probe(struct platform_device *pdev)
chip->set = octeon_gpio_set;
err = devm_gpiochip_add_data(&pdev->dev, chip, gpio);
if (err)
- goto out;
+ return err;
dev_info(&pdev->dev, "OCTEON GPIO driver probed.\n");
-out:
- return err;
+ return 0;
}
-static struct of_device_id octeon_gpio_match[] = {
+static const struct of_device_id octeon_gpio_match[] = {
{
.compatible = "cavium,octeon-3860-gpio",
},
diff --git a/drivers/gpio/gpio-omap.c b/drivers/gpio/gpio-omap.c
index 551dfa9d97ab..b98ede78c9d8 100644
--- a/drivers/gpio/gpio-omap.c
+++ b/drivers/gpio/gpio-omap.c
@@ -611,51 +611,12 @@ static inline void omap_set_gpio_irqenable(struct gpio_bank *bank,
omap_disable_gpio_irqbank(bank, BIT(offset));
}
-/*
- * Note that ENAWAKEUP needs to be enabled in GPIO_SYSCONFIG register.
- * 1510 does not seem to have a wake-up register. If JTAG is connected
- * to the target, system will wake up always on GPIO events. While
- * system is running all registered GPIO interrupts need to have wake-up
- * enabled. When system is suspended, only selected GPIO interrupts need
- * to have wake-up enabled.
- */
-static int omap_set_gpio_wakeup(struct gpio_bank *bank, unsigned offset,
- int enable)
-{
- u32 gpio_bit = BIT(offset);
- unsigned long flags;
-
- if (bank->non_wakeup_gpios & gpio_bit) {
- dev_err(bank->chip.parent,
- "Unable to modify wakeup on non-wakeup GPIO%d\n",
- offset);
- return -EINVAL;
- }
-
- raw_spin_lock_irqsave(&bank->lock, flags);
- if (enable)
- bank->context.wake_en |= gpio_bit;
- else
- bank->context.wake_en &= ~gpio_bit;
-
- writel_relaxed(bank->context.wake_en, bank->base + bank->regs->wkup_en);
- raw_spin_unlock_irqrestore(&bank->lock, flags);
-
- return 0;
-}
-
/* Use disable_irq_wake() and enable_irq_wake() functions from drivers */
static int omap_gpio_wake_enable(struct irq_data *d, unsigned int enable)
{
struct gpio_bank *bank = omap_irq_data_get_bank(d);
- unsigned offset = d->hwirq;
- int ret;
- ret = omap_set_gpio_wakeup(bank, offset, enable);
- if (!ret)
- ret = irq_set_irq_wake(bank->irq, enable);
-
- return ret;
+ return irq_set_irq_wake(bank->irq, enable);
}
static int omap_gpio_request(struct gpio_chip *chip, unsigned offset)
@@ -1187,6 +1148,7 @@ static int omap_gpio_probe(struct platform_device *pdev)
irqc->irq_bus_lock = omap_gpio_irq_bus_lock,
irqc->irq_bus_sync_unlock = gpio_irq_bus_sync_unlock,
irqc->name = dev_name(&pdev->dev);
+ irqc->flags = IRQCHIP_MASK_ON_SUSPEND;
bank->irq = platform_get_irq(pdev, 0);
if (bank->irq <= 0) {
diff --git a/drivers/gpio/gpio-palmas.c b/drivers/gpio/gpio-palmas.c
index 6f27b3d94d53..e248707ca39e 100644
--- a/drivers/gpio/gpio-palmas.c
+++ b/drivers/gpio/gpio-palmas.c
@@ -20,7 +20,7 @@
#include <linux/gpio.h>
#include <linux/kernel.h>
-#include <linux/module.h>
+#include <linux/init.h>
#include <linux/mfd/palmas.h>
#include <linux/of.h>
#include <linux/of_device.h>
@@ -218,14 +218,3 @@ static int __init palmas_gpio_init(void)
return platform_driver_register(&palmas_gpio_driver);
}
subsys_initcall(palmas_gpio_init);
-
-static void __exit palmas_gpio_exit(void)
-{
- platform_driver_unregister(&palmas_gpio_driver);
-}
-module_exit(palmas_gpio_exit);
-
-MODULE_ALIAS("platform:palmas-gpio");
-MODULE_AUTHOR("Laxman Dewangan <ldewangan@nvidia.com>");
-MODULE_DESCRIPTION("GPIO driver for TI Palmas series PMICs");
-MODULE_LICENSE("GPL v2");
diff --git a/drivers/gpio/gpio-pca953x.c b/drivers/gpio/gpio-pca953x.c
index e66084c295fb..5e3be32ebb8d 100644
--- a/drivers/gpio/gpio-pca953x.c
+++ b/drivers/gpio/gpio-pca953x.c
@@ -38,8 +38,13 @@
#define PCA957X_MSK 6
#define PCA957X_INTS 7
+#define PCAL953X_IN_LATCH 34
+#define PCAL953X_INT_MASK 37
+#define PCAL953X_INT_STAT 38
+
#define PCA_GPIO_MASK 0x00FF
#define PCA_INT 0x0100
+#define PCA_PCAL 0x0200
#define PCA953X_TYPE 0x1000
#define PCA957X_TYPE 0x2000
#define PCA_TYPE_MASK 0xF000
@@ -77,7 +82,7 @@ static const struct i2c_device_id pca953x_id[] = {
MODULE_DEVICE_TABLE(i2c, pca953x_id);
static const struct acpi_device_id pca953x_acpi_ids[] = {
- { "INT3491", 16 | PCA953X_TYPE | PCA_INT, },
+ { "INT3491", 16 | PCA953X_TYPE | PCA_INT | PCA_PCAL, },
{ }
};
MODULE_DEVICE_TABLE(acpi, pca953x_acpi_ids);
@@ -437,6 +442,18 @@ static void pca953x_irq_bus_sync_unlock(struct irq_data *d)
struct pca953x_chip *chip = gpiochip_get_data(gc);
u8 new_irqs;
int level, i;
+ u8 invert_irq_mask[MAX_BANK];
+
+ if (chip->driver_data & PCA_PCAL) {
+ /* Enable latch on interrupt-enabled inputs */
+ pca953x_write_regs(chip, PCAL953X_IN_LATCH, chip->irq_mask);
+
+ for (i = 0; i < NBANK(chip); i++)
+ invert_irq_mask[i] = ~chip->irq_mask[i];
+
+ /* Unmask enabled interrupts */
+ pca953x_write_regs(chip, PCAL953X_INT_MASK, invert_irq_mask);
+ }
/* Look for any newly setup interrupt */
for (i = 0; i < NBANK(chip); i++) {
@@ -498,6 +515,29 @@ static bool pca953x_irq_pending(struct pca953x_chip *chip, u8 *pending)
u8 trigger[MAX_BANK];
int ret, i, offset = 0;
+ if (chip->driver_data & PCA_PCAL) {
+ /* Read the current interrupt status from the device */
+ ret = pca953x_read_regs(chip, PCAL953X_INT_STAT, trigger);
+ if (ret)
+ return false;
+
+ /* Check latched inputs and clear interrupt status */
+ ret = pca953x_read_regs(chip, PCA953X_INPUT, cur_stat);
+ if (ret)
+ return false;
+
+ for (i = 0; i < NBANK(chip); i++) {
+ /* Apply filter for rising/falling edge selection */
+ pending[i] = (~cur_stat[i] & chip->irq_trig_fall[i]) |
+ (cur_stat[i] & chip->irq_trig_raise[i]);
+ pending[i] &= trigger[i];
+ if (pending[i])
+ pending_seen = true;
+ }
+
+ return pending_seen;
+ }
+
switch (chip->chip_type) {
case PCA953X_TYPE:
offset = PCA953X_INPUT;
diff --git a/drivers/gpio/gpio-pl061.c b/drivers/gpio/gpio-pl061.c
index 5cb38212bbc0..6e3c1430616f 100644
--- a/drivers/gpio/gpio-pl061.c
+++ b/drivers/gpio/gpio-pl061.c
@@ -1,6 +1,8 @@
/*
* Copyright (C) 2008, 2009 Provigent Ltd.
*
+ * Author: Baruch Siach <baruch@tkos.co.il>
+ *
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
@@ -11,7 +13,7 @@
*/
#include <linux/spinlock.h>
#include <linux/errno.h>
-#include <linux/module.h>
+#include <linux/init.h>
#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/interrupt.h>
@@ -59,15 +61,19 @@ struct pl061_gpio {
#endif
};
+static int pl061_get_direction(struct gpio_chip *gc, unsigned offset)
+{
+ struct pl061_gpio *chip = gpiochip_get_data(gc);
+
+ return !(readb(chip->base + GPIODIR) & BIT(offset));
+}
+
static int pl061_direction_input(struct gpio_chip *gc, unsigned offset)
{
struct pl061_gpio *chip = gpiochip_get_data(gc);
unsigned long flags;
unsigned char gpiodir;
- if (offset >= gc->ngpio)
- return -EINVAL;
-
spin_lock_irqsave(&chip->lock, flags);
gpiodir = readb(chip->base + GPIODIR);
gpiodir &= ~(BIT(offset));
@@ -84,9 +90,6 @@ static int pl061_direction_output(struct gpio_chip *gc, unsigned offset,
unsigned long flags;
unsigned char gpiodir;
- if (offset >= gc->ngpio)
- return -EINVAL;
-
spin_lock_irqsave(&chip->lock, flags);
writeb(!!value << offset, chip->base + (BIT(offset + 2)));
gpiodir = readb(chip->base + GPIODIR);
@@ -319,6 +322,7 @@ static int pl061_probe(struct amba_device *adev, const struct amba_id *id)
chip->gc.free = gpiochip_generic_free;
}
+ chip->gc.get_direction = pl061_get_direction;
chip->gc.direction_input = pl061_direction_input;
chip->gc.direction_output = pl061_direction_output;
chip->gc.get = pl061_get_value;
@@ -429,8 +433,6 @@ static struct amba_id pl061_ids[] = {
{ 0, 0 },
};
-MODULE_DEVICE_TABLE(amba, pl061_ids);
-
static struct amba_driver pl061_gpio_driver = {
.drv = {
.name = "pl061_gpio",
@@ -446,8 +448,4 @@ static int __init pl061_gpio_init(void)
{
return amba_driver_register(&pl061_gpio_driver);
}
-module_init(pl061_gpio_init);
-
-MODULE_AUTHOR("Baruch Siach <baruch@tkos.co.il>");
-MODULE_DESCRIPTION("PL061 GPIO driver");
-MODULE_LICENSE("GPL");
+device_initcall(pl061_gpio_init);
diff --git a/drivers/gpio/gpio-rc5t583.c b/drivers/gpio/gpio-rc5t583.c
index 1d6100fa312a..3b4dc1a9a68d 100644
--- a/drivers/gpio/gpio-rc5t583.c
+++ b/drivers/gpio/gpio-rc5t583.c
@@ -23,7 +23,6 @@
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/slab.h>
-#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/device.h>
#include <linux/gpio.h>
@@ -152,14 +151,3 @@ static int __init rc5t583_gpio_init(void)
return platform_driver_register(&rc5t583_gpio_driver);
}
subsys_initcall(rc5t583_gpio_init);
-
-static void __exit rc5t583_gpio_exit(void)
-{
- platform_driver_unregister(&rc5t583_gpio_driver);
-}
-module_exit(rc5t583_gpio_exit);
-
-MODULE_AUTHOR("Laxman Dewangan <ldewangan@nvidia.com>");
-MODULE_DESCRIPTION("GPIO interface for RC5T583");
-MODULE_LICENSE("GPL v2");
-MODULE_ALIAS("platform:rc5t583-gpio");
diff --git a/drivers/gpio/gpio-rcar.c b/drivers/gpio/gpio-rcar.c
index 4d9a315cfd43..681c93fb9e70 100644
--- a/drivers/gpio/gpio-rcar.c
+++ b/drivers/gpio/gpio-rcar.c
@@ -284,6 +284,25 @@ static void gpio_rcar_set(struct gpio_chip *chip, unsigned offset, int value)
spin_unlock_irqrestore(&p->lock, flags);
}
+static void gpio_rcar_set_multiple(struct gpio_chip *chip, unsigned long *mask,
+ unsigned long *bits)
+{
+ struct gpio_rcar_priv *p = gpiochip_get_data(chip);
+ unsigned long flags;
+ u32 val, bankmask;
+
+ bankmask = mask[0] & GENMASK(chip->ngpio - 1, 0);
+ if (!bankmask)
+ return;
+
+ spin_lock_irqsave(&p->lock, flags);
+ val = gpio_rcar_read(p, OUTDT);
+ val &= ~bankmask;
+ val |= (bankmask & bits[0]);
+ gpio_rcar_write(p, OUTDT, val);
+ spin_unlock_irqrestore(&p->lock, flags);
+}
+
static int gpio_rcar_direction_output(struct gpio_chip *chip, unsigned offset,
int value)
{
@@ -425,6 +444,7 @@ static int gpio_rcar_probe(struct platform_device *pdev)
gpio_chip->get = gpio_rcar_get;
gpio_chip->direction_output = gpio_rcar_direction_output;
gpio_chip->set = gpio_rcar_set;
+ gpio_chip->set_multiple = gpio_rcar_set_multiple;
gpio_chip->label = name;
gpio_chip->parent = dev;
gpio_chip->owner = THIS_MODULE;
diff --git a/drivers/gpio/gpio-sodaville.c b/drivers/gpio/gpio-sodaville.c
index e3cb6772f6ec..7da9e6c4546a 100644
--- a/drivers/gpio/gpio-sodaville.c
+++ b/drivers/gpio/gpio-sodaville.c
@@ -3,6 +3,8 @@
*
* Copyright (c) 2010, 2011 Intel Corporation
*
+ * Author: Hans J. Koch <hjk@linutronix.de>
+ *
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License 2 as published
* by the Free Software Foundation.
@@ -15,7 +17,6 @@
#include <linux/irq.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
-#include <linux/module.h>
#include <linux/pci.h>
#include <linux/platform_device.h>
#include <linux/of_irq.h>
@@ -257,34 +258,17 @@ done:
return ret;
}
-static void sdv_gpio_remove(struct pci_dev *pdev)
-{
- struct sdv_gpio_chip_data *sd = pci_get_drvdata(pdev);
-
- free_irq(pdev->irq, sd);
- irq_free_descs(sd->irq_base, SDV_NUM_PUB_GPIOS);
-
- gpiochip_remove(&sd->chip);
- pci_release_region(pdev, GPIO_BAR);
- iounmap(sd->gpio_pub_base);
- pci_disable_device(pdev);
- kfree(sd);
-}
-
static const struct pci_device_id sdv_gpio_pci_ids[] = {
{ PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCI_DEVICE_ID_SDV_GPIO) },
{ 0, },
};
static struct pci_driver sdv_gpio_driver = {
+ .driver = {
+ .suppress_bind_attrs = true,
+ },
.name = DRV_NAME,
.id_table = sdv_gpio_pci_ids,
.probe = sdv_gpio_probe,
- .remove = sdv_gpio_remove,
};
-
-module_pci_driver(sdv_gpio_driver);
-
-MODULE_AUTHOR("Hans J. Koch <hjk@linutronix.de>");
-MODULE_DESCRIPTION("GPIO interface for Intel Sodaville SoCs");
-MODULE_LICENSE("GPL v2");
+builtin_pci_driver(sdv_gpio_driver);
diff --git a/drivers/gpio/gpio-sta2x11.c b/drivers/gpio/gpio-sta2x11.c
index 0d5b8c525dd9..853ca23cad88 100644
--- a/drivers/gpio/gpio-sta2x11.c
+++ b/drivers/gpio/gpio-sta2x11.c
@@ -20,7 +20,7 @@
*
*/
-#include <linux/module.h>
+#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/gpio.h>
@@ -432,8 +432,4 @@ static struct platform_driver sta2x11_gpio_platform_driver = {
},
.probe = gsta_probe,
};
-
-module_platform_driver(sta2x11_gpio_platform_driver);
-
-MODULE_LICENSE("GPL v2");
-MODULE_DESCRIPTION("sta2x11_gpio GPIO driver");
+builtin_platform_driver(sta2x11_gpio_platform_driver);
diff --git a/drivers/gpio/gpio-stmpe.c b/drivers/gpio/gpio-stmpe.c
index 5197edf1acfd..6f7af28b8966 100644
--- a/drivers/gpio/gpio-stmpe.c
+++ b/drivers/gpio/gpio-stmpe.c
@@ -5,7 +5,6 @@
* Author: Rabin Vincent <rabin.vincent@stericsson.com> for ST-Ericsson
*/
-#include <linux/module.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
@@ -413,23 +412,13 @@ out_free:
return ret;
}
-static int stmpe_gpio_remove(struct platform_device *pdev)
-{
- struct stmpe_gpio *stmpe_gpio = platform_get_drvdata(pdev);
- struct stmpe *stmpe = stmpe_gpio->stmpe;
-
- gpiochip_remove(&stmpe_gpio->chip);
- stmpe_disable(stmpe, STMPE_BLOCK_GPIO);
- kfree(stmpe_gpio);
-
- return 0;
-}
-
static struct platform_driver stmpe_gpio_driver = {
- .driver.name = "stmpe-gpio",
- .driver.owner = THIS_MODULE,
+ .driver = {
+ .suppress_bind_attrs = true,
+ .name = "stmpe-gpio",
+ .owner = THIS_MODULE,
+ },
.probe = stmpe_gpio_probe,
- .remove = stmpe_gpio_remove,
};
static int __init stmpe_gpio_init(void)
@@ -437,13 +426,3 @@ static int __init stmpe_gpio_init(void)
return platform_driver_register(&stmpe_gpio_driver);
}
subsys_initcall(stmpe_gpio_init);
-
-static void __exit stmpe_gpio_exit(void)
-{
- platform_driver_unregister(&stmpe_gpio_driver);
-}
-module_exit(stmpe_gpio_exit);
-
-MODULE_LICENSE("GPL v2");
-MODULE_DESCRIPTION("STMPExxxx GPIO driver");
-MODULE_AUTHOR("Rabin Vincent <rabin.vincent@stericsson.com>");
diff --git a/drivers/gpio/gpio-sx150x.c b/drivers/gpio/gpio-sx150x.c
index d387eb524bf3..a177ebd921d5 100644
--- a/drivers/gpio/gpio-sx150x.c
+++ b/drivers/gpio/gpio-sx150x.c
@@ -1,5 +1,9 @@
/* Copyright (c) 2010, Code Aurora Forum. All rights reserved.
*
+ * Driver for Semtech SX150X I2C GPIO Expanders
+ *
+ * Author: Gregory Bean <gbean@codeaurora.org>
+ *
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 and
* only version 2 as published by the Free Software Foundation.
@@ -19,10 +23,8 @@
#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
-#include <linux/module.h>
#include <linux/mutex.h>
#include <linux/slab.h>
-#include <linux/i2c/sx150x.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
@@ -82,6 +84,57 @@ struct sx150x_device_data {
} pri;
};
+/**
+ * struct sx150x_platform_data - config data for SX150x driver
+ * @gpio_base: The index number of the first GPIO assigned to this
+ * GPIO expander. The expander will create a block of
+ * consecutively numbered gpios beginning at the given base,
+ * with the size of the block depending on the model of the
+ * expander chip.
+ * @oscio_is_gpo: If set to true, the driver will configure OSCIO as a GPO
+ * instead of as an oscillator, increasing the size of the
+ * GP(I)O pool created by this expander by one. The
+ * output-only GPO pin will be added at the end of the block.
+ * @io_pullup_ena: A bit-mask which enables or disables the pull-up resistor
+ * for each IO line in the expander. Setting the bit at
+ * position n will enable the pull-up for the IO at
+ * the corresponding offset. For chips with fewer than
+ * 16 IO pins, high-end bits are ignored.
+ * @io_pulldn_ena: A bit-mask which enables or disables the pull-down
+ * resistor for each IO line in the expander. Setting the
+ * bit at position n will enable the pull-down for the IO at
+ * the corresponding offset. For chips with fewer than
+ * 16 IO pins, high-end bits are ignored.
+ * @io_polarity: A bit-mask which enables polarity inversion for each IO line
+ * in the expander. Setting the bit at position n inverts
+ * the polarity of that IO line, while clearing it results
+ * in normal polarity. For chips with fewer than 16 IO pins,
+ * high-end bits are ignored.
+ * @irq_summary: The 'summary IRQ' line to which the GPIO expander's INT line
+ * is connected, via which it reports interrupt events
+ * across all GPIO lines. This must be a real,
+ * pre-existing IRQ line.
+ * Setting this value < 0 disables the irq_chip functionality
+ * of the driver.
+ * @irq_base: The first 'virtual IRQ' line at which our block of GPIO-based
+ * IRQ lines will appear. Similarly to gpio_base, the expander
+ * will create a block of irqs beginning at this number.
+ * This value is ignored if irq_summary is < 0.
+ * @reset_during_probe: If set to true, the driver will trigger a full
+ * reset of the chip at the beginning of the probe
+ * in order to place it in a known state.
+ */
+struct sx150x_platform_data {
+ unsigned gpio_base;
+ bool oscio_is_gpo;
+ u16 io_pullup_ena;
+ u16 io_pulldn_ena;
+ u16 io_polarity;
+ int irq_summary;
+ unsigned irq_base;
+ bool reset_during_probe;
+};
+
struct sx150x_chip {
struct gpio_chip gpio_chip;
struct i2c_client *client;
@@ -354,6 +407,32 @@ static void sx150x_gpio_set(struct gpio_chip *gc, unsigned offset, int val)
mutex_unlock(&chip->lock);
}
+static int sx150x_gpio_set_single_ended(struct gpio_chip *gc,
+ unsigned offset,
+ enum single_ended_mode mode)
+{
+ struct sx150x_chip *chip = gpiochip_get_data(gc);
+
+ /* On the SX150X 789 we can set open drain */
+ if (chip->dev_cfg->model != SX150X_789)
+ return -ENOTSUPP;
+
+ if (mode == LINE_MODE_PUSH_PULL)
+ return sx150x_write_cfg(chip,
+ offset,
+ 1,
+ chip->dev_cfg->pri.x789.reg_drain,
+ 0);
+
+ if (mode == LINE_MODE_OPEN_DRAIN)
+ return sx150x_write_cfg(chip,
+ offset,
+ 1,
+ chip->dev_cfg->pri.x789.reg_drain,
+ 1);
+ return -ENOTSUPP;
+}
+
static int sx150x_gpio_direction_input(struct gpio_chip *gc, unsigned offset)
{
struct sx150x_chip *chip = gpiochip_get_data(gc);
@@ -508,6 +587,7 @@ static void sx150x_init_chip(struct sx150x_chip *chip,
chip->gpio_chip.direction_output = sx150x_gpio_direction_output;
chip->gpio_chip.get = sx150x_gpio_get;
chip->gpio_chip.set = sx150x_gpio_set;
+ chip->gpio_chip.set_single_ended = sx150x_gpio_set_single_ended;
chip->gpio_chip.base = pdata->gpio_base;
chip->gpio_chip.can_sleep = true;
chip->gpio_chip.ngpio = chip->dev_cfg->ngpios;
@@ -597,12 +677,6 @@ static int sx150x_init_hw(struct sx150x_chip *chip,
if (chip->dev_cfg->model == SX150X_789) {
err = sx150x_init_io(chip,
- chip->dev_cfg->pri.x789.reg_drain,
- pdata->io_open_drain_ena);
- if (err < 0)
- return err;
-
- err = sx150x_init_io(chip,
chip->dev_cfg->pri.x789.reg_polarity,
pdata->io_polarity);
if (err < 0)
@@ -718,13 +792,3 @@ static int __init sx150x_init(void)
return i2c_add_driver(&sx150x_driver);
}
subsys_initcall(sx150x_init);
-
-static void __exit sx150x_exit(void)
-{
- return i2c_del_driver(&sx150x_driver);
-}
-module_exit(sx150x_exit);
-
-MODULE_AUTHOR("Gregory Bean <gbean@codeaurora.org>");
-MODULE_DESCRIPTION("Driver for Semtech SX150X I2C GPIO Expanders");
-MODULE_LICENSE("GPL v2");
diff --git a/drivers/gpio/gpio-tc3589x.c b/drivers/gpio/gpio-tc3589x.c
index 4f566e6b81f1..2e35ed3abbcf 100644
--- a/drivers/gpio/gpio-tc3589x.c
+++ b/drivers/gpio/gpio-tc3589x.c
@@ -6,14 +6,14 @@
* Author: Rabin Vincent <rabin.vincent@stericsson.com> for ST-Ericsson
*/
-#include <linux/module.h>
#include <linux/init.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
-#include <linux/gpio.h>
+#include <linux/gpio/driver.h>
#include <linux/of.h>
#include <linux/interrupt.h>
#include <linux/mfd/tc3589x.h>
+#include <linux/bitops.h>
/*
* These registers are modified under the irq bus lock and cached to avoid
@@ -39,7 +39,7 @@ static int tc3589x_gpio_get(struct gpio_chip *chip, unsigned offset)
struct tc3589x_gpio *tc3589x_gpio = gpiochip_get_data(chip);
struct tc3589x *tc3589x = tc3589x_gpio->tc3589x;
u8 reg = TC3589x_GPIODATA0 + (offset / 8) * 2;
- u8 mask = 1 << (offset % 8);
+ u8 mask = BIT(offset % 8);
int ret;
ret = tc3589x_reg_read(tc3589x, reg);
@@ -55,7 +55,7 @@ static void tc3589x_gpio_set(struct gpio_chip *chip, unsigned offset, int val)
struct tc3589x *tc3589x = tc3589x_gpio->tc3589x;
u8 reg = TC3589x_GPIODATA0 + (offset / 8) * 2;
unsigned pos = offset % 8;
- u8 data[] = {!!val << pos, 1 << pos};
+ u8 data[] = {val ? BIT(pos) : 0, BIT(pos)};
tc3589x_block_write(tc3589x, reg, ARRAY_SIZE(data), data);
}
@@ -70,7 +70,7 @@ static int tc3589x_gpio_direction_output(struct gpio_chip *chip,
tc3589x_gpio_set(chip, offset, val);
- return tc3589x_set_bits(tc3589x, reg, 1 << pos, 1 << pos);
+ return tc3589x_set_bits(tc3589x, reg, BIT(pos), BIT(pos));
}
static int tc3589x_gpio_direction_input(struct gpio_chip *chip,
@@ -81,7 +81,47 @@ static int tc3589x_gpio_direction_input(struct gpio_chip *chip,
u8 reg = TC3589x_GPIODIR0 + offset / 8;
unsigned pos = offset % 8;
- return tc3589x_set_bits(tc3589x, reg, 1 << pos, 0);
+ return tc3589x_set_bits(tc3589x, reg, BIT(pos), 0);
+}
+
+static int tc3589x_gpio_single_ended(struct gpio_chip *chip,
+ unsigned offset,
+ enum single_ended_mode mode)
+{
+ struct tc3589x_gpio *tc3589x_gpio = gpiochip_get_data(chip);
+ struct tc3589x *tc3589x = tc3589x_gpio->tc3589x;
+ /*
+ * These registers alternate at every second address
+ * ODM bit 0 = drive to GND or Hi-Z (open drain)
+ * ODM bit 1 = drive to VDD or Hi-Z (open source)
+ */
+ u8 odmreg = TC3589x_GPIOODM0 + (offset / 8) * 2;
+ u8 odereg = TC3589x_GPIOODE0 + (offset / 8) * 2;
+ unsigned pos = offset % 8;
+ int ret;
+
+ switch(mode) {
+ case LINE_MODE_OPEN_DRAIN:
+ /* Set open drain mode */
+ ret = tc3589x_set_bits(tc3589x, odmreg, BIT(pos), 0);
+ if (ret)
+ return ret;
+ /* Enable open drain/source mode */
+ return tc3589x_set_bits(tc3589x, odereg, BIT(pos), BIT(pos));
+ case LINE_MODE_OPEN_SOURCE:
+ /* Set open source mode */
+ ret = tc3589x_set_bits(tc3589x, odmreg, BIT(pos), BIT(pos));
+ if (ret)
+ return ret;
+ /* Enable open drain/source mode */
+ return tc3589x_set_bits(tc3589x, odereg, BIT(pos), BIT(pos));
+ case LINE_MODE_PUSH_PULL:
+ /* Disable open drain/source mode */
+ return tc3589x_set_bits(tc3589x, odereg, BIT(pos), 0);
+ default:
+ break;
+ }
+ return -ENOTSUPP;
}
static struct gpio_chip template_chip = {
@@ -91,6 +131,7 @@ static struct gpio_chip template_chip = {
.get = tc3589x_gpio_get,
.direction_output = tc3589x_gpio_direction_output,
.set = tc3589x_gpio_set,
+ .set_single_ended = tc3589x_gpio_single_ended,
.can_sleep = true,
};
@@ -100,7 +141,7 @@ static int tc3589x_gpio_irq_set_type(struct irq_data *d, unsigned int type)
struct tc3589x_gpio *tc3589x_gpio = gpiochip_get_data(gc);
int offset = d->hwirq;
int regoffset = offset / 8;
- int mask = 1 << (offset % 8);
+ int mask = BIT(offset % 8);
if (type == IRQ_TYPE_EDGE_BOTH) {
tc3589x_gpio->regs[REG_IBE][regoffset] |= mask;
@@ -165,7 +206,7 @@ static void tc3589x_gpio_irq_mask(struct irq_data *d)
struct tc3589x_gpio *tc3589x_gpio = gpiochip_get_data(gc);
int offset = d->hwirq;
int regoffset = offset / 8;
- int mask = 1 << (offset % 8);
+ int mask = BIT(offset % 8);
tc3589x_gpio->regs[REG_IE][regoffset] &= ~mask;
}
@@ -176,7 +217,7 @@ static void tc3589x_gpio_irq_unmask(struct irq_data *d)
struct tc3589x_gpio *tc3589x_gpio = gpiochip_get_data(gc);
int offset = d->hwirq;
int regoffset = offset / 8;
- int mask = 1 << (offset % 8);
+ int mask = BIT(offset % 8);
tc3589x_gpio->regs[REG_IE][regoffset] |= mask;
}
@@ -311,13 +352,3 @@ static int __init tc3589x_gpio_init(void)
return platform_driver_register(&tc3589x_gpio_driver);
}
subsys_initcall(tc3589x_gpio_init);
-
-static void __exit tc3589x_gpio_exit(void)
-{
- platform_driver_unregister(&tc3589x_gpio_driver);
-}
-module_exit(tc3589x_gpio_exit);
-
-MODULE_LICENSE("GPL v2");
-MODULE_DESCRIPTION("TC3589x GPIO driver");
-MODULE_AUTHOR("Hanumath Prasad, Rabin Vincent");
diff --git a/drivers/gpio/gpio-tegra.c b/drivers/gpio/gpio-tegra.c
index 790bb111b2cb..ec891a27952f 100644
--- a/drivers/gpio/gpio-tegra.c
+++ b/drivers/gpio/gpio-tegra.c
@@ -35,24 +35,27 @@
#define GPIO_PORT(x) (((x) >> 3) & 0x3)
#define GPIO_BIT(x) ((x) & 0x7)
-#define GPIO_REG(x) (GPIO_BANK(x) * tegra_gpio_bank_stride + \
+#define GPIO_REG(tgi, x) (GPIO_BANK(x) * tgi->soc->bank_stride + \
GPIO_PORT(x) * 4)
-#define GPIO_CNF(x) (GPIO_REG(x) + 0x00)
-#define GPIO_OE(x) (GPIO_REG(x) + 0x10)
-#define GPIO_OUT(x) (GPIO_REG(x) + 0X20)
-#define GPIO_IN(x) (GPIO_REG(x) + 0x30)
-#define GPIO_INT_STA(x) (GPIO_REG(x) + 0x40)
-#define GPIO_INT_ENB(x) (GPIO_REG(x) + 0x50)
-#define GPIO_INT_LVL(x) (GPIO_REG(x) + 0x60)
-#define GPIO_INT_CLR(x) (GPIO_REG(x) + 0x70)
-
-#define GPIO_MSK_CNF(x) (GPIO_REG(x) + tegra_gpio_upper_offset + 0x00)
-#define GPIO_MSK_OE(x) (GPIO_REG(x) + tegra_gpio_upper_offset + 0x10)
-#define GPIO_MSK_OUT(x) (GPIO_REG(x) + tegra_gpio_upper_offset + 0X20)
-#define GPIO_MSK_INT_STA(x) (GPIO_REG(x) + tegra_gpio_upper_offset + 0x40)
-#define GPIO_MSK_INT_ENB(x) (GPIO_REG(x) + tegra_gpio_upper_offset + 0x50)
-#define GPIO_MSK_INT_LVL(x) (GPIO_REG(x) + tegra_gpio_upper_offset + 0x60)
+#define GPIO_CNF(t, x) (GPIO_REG(t, x) + 0x00)
+#define GPIO_OE(t, x) (GPIO_REG(t, x) + 0x10)
+#define GPIO_OUT(t, x) (GPIO_REG(t, x) + 0X20)
+#define GPIO_IN(t, x) (GPIO_REG(t, x) + 0x30)
+#define GPIO_INT_STA(t, x) (GPIO_REG(t, x) + 0x40)
+#define GPIO_INT_ENB(t, x) (GPIO_REG(t, x) + 0x50)
+#define GPIO_INT_LVL(t, x) (GPIO_REG(t, x) + 0x60)
+#define GPIO_INT_CLR(t, x) (GPIO_REG(t, x) + 0x70)
+#define GPIO_DBC_CNT(t, x) (GPIO_REG(t, x) + 0xF0)
+
+
+#define GPIO_MSK_CNF(t, x) (GPIO_REG(t, x) + t->soc->upper_offset + 0x00)
+#define GPIO_MSK_OE(t, x) (GPIO_REG(t, x) + t->soc->upper_offset + 0x10)
+#define GPIO_MSK_OUT(t, x) (GPIO_REG(t, x) + t->soc->upper_offset + 0X20)
+#define GPIO_MSK_DBC_EN(t, x) (GPIO_REG(t, x) + t->soc->upper_offset + 0x30)
+#define GPIO_MSK_INT_STA(t, x) (GPIO_REG(t, x) + t->soc->upper_offset + 0x40)
+#define GPIO_MSK_INT_ENB(t, x) (GPIO_REG(t, x) + t->soc->upper_offset + 0x50)
+#define GPIO_MSK_INT_LVL(t, x) (GPIO_REG(t, x) + t->soc->upper_offset + 0x60)
#define GPIO_INT_LVL_MASK 0x010101
#define GPIO_INT_LVL_EDGE_RISING 0x000101
@@ -61,10 +64,13 @@
#define GPIO_INT_LVL_LEVEL_HIGH 0x000001
#define GPIO_INT_LVL_LEVEL_LOW 0x000000
+struct tegra_gpio_info;
+
struct tegra_gpio_bank {
int bank;
int irq;
spinlock_t lvl_lock[4];
+ spinlock_t dbc_lock[4]; /* Lock for updating debounce count register */
#ifdef CONFIG_PM_SLEEP
u32 cnf[4];
u32 out[4];
@@ -72,25 +78,39 @@ struct tegra_gpio_bank {
u32 int_enb[4];
u32 int_lvl[4];
u32 wake_enb[4];
+ u32 dbc_enb[4];
#endif
+ u32 dbc_cnt[4];
+ struct tegra_gpio_info *tgi;
};
-static struct device *dev;
-static struct irq_domain *irq_domain;
-static void __iomem *regs;
-static u32 tegra_gpio_bank_count;
-static u32 tegra_gpio_bank_stride;
-static u32 tegra_gpio_upper_offset;
-static struct tegra_gpio_bank *tegra_gpio_banks;
+struct tegra_gpio_soc_config {
+ bool debounce_supported;
+ u32 bank_stride;
+ u32 upper_offset;
+};
+
+struct tegra_gpio_info {
+ struct device *dev;
+ void __iomem *regs;
+ struct irq_domain *irq_domain;
+ struct tegra_gpio_bank *bank_info;
+ const struct tegra_gpio_soc_config *soc;
+ struct gpio_chip gc;
+ struct irq_chip ic;
+ struct lock_class_key lock_class;
+ u32 bank_count;
+};
-static inline void tegra_gpio_writel(u32 val, u32 reg)
+static inline void tegra_gpio_writel(struct tegra_gpio_info *tgi,
+ u32 val, u32 reg)
{
- __raw_writel(val, regs + reg);
+ __raw_writel(val, tgi->regs + reg);
}
-static inline u32 tegra_gpio_readl(u32 reg)
+static inline u32 tegra_gpio_readl(struct tegra_gpio_info *tgi, u32 reg)
{
- return __raw_readl(regs + reg);
+ return __raw_readl(tgi->regs + reg);
}
static int tegra_gpio_compose(int bank, int port, int bit)
@@ -98,24 +118,25 @@ static int tegra_gpio_compose(int bank, int port, int bit)
return (bank << 5) | ((port & 0x3) << 3) | (bit & 0x7);
}
-static void tegra_gpio_mask_write(u32 reg, int gpio, int value)
+static void tegra_gpio_mask_write(struct tegra_gpio_info *tgi, u32 reg,
+ int gpio, int value)
{
u32 val;
val = 0x100 << GPIO_BIT(gpio);
if (value)
val |= 1 << GPIO_BIT(gpio);
- tegra_gpio_writel(val, reg);
+ tegra_gpio_writel(tgi, val, reg);
}
-static void tegra_gpio_enable(int gpio)
+static void tegra_gpio_enable(struct tegra_gpio_info *tgi, int gpio)
{
- tegra_gpio_mask_write(GPIO_MSK_CNF(gpio), gpio, 1);
+ tegra_gpio_mask_write(tgi, GPIO_MSK_CNF(tgi, gpio), gpio, 1);
}
-static void tegra_gpio_disable(int gpio)
+static void tegra_gpio_disable(struct tegra_gpio_info *tgi, int gpio)
{
- tegra_gpio_mask_write(GPIO_MSK_CNF(gpio), gpio, 0);
+ tegra_gpio_mask_write(tgi, GPIO_MSK_CNF(tgi, gpio), gpio, 0);
}
static int tegra_gpio_request(struct gpio_chip *chip, unsigned offset)
@@ -125,83 +146,138 @@ static int tegra_gpio_request(struct gpio_chip *chip, unsigned offset)
static void tegra_gpio_free(struct gpio_chip *chip, unsigned offset)
{
+ struct tegra_gpio_info *tgi = gpiochip_get_data(chip);
+
pinctrl_free_gpio(offset);
- tegra_gpio_disable(offset);
+ tegra_gpio_disable(tgi, offset);
}
static void tegra_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
{
- tegra_gpio_mask_write(GPIO_MSK_OUT(offset), offset, value);
+ struct tegra_gpio_info *tgi = gpiochip_get_data(chip);
+
+ tegra_gpio_mask_write(tgi, GPIO_MSK_OUT(tgi, offset), offset, value);
}
static int tegra_gpio_get(struct gpio_chip *chip, unsigned offset)
{
+ struct tegra_gpio_info *tgi = gpiochip_get_data(chip);
+ int bval = BIT(GPIO_BIT(offset));
+
/* If gpio is in output mode then read from the out value */
- if ((tegra_gpio_readl(GPIO_OE(offset)) >> GPIO_BIT(offset)) & 1)
- return (tegra_gpio_readl(GPIO_OUT(offset)) >>
- GPIO_BIT(offset)) & 0x1;
+ if (tegra_gpio_readl(tgi, GPIO_OE(tgi, offset)) & bval)
+ return !!(tegra_gpio_readl(tgi, GPIO_OUT(tgi, offset)) & bval);
- return (tegra_gpio_readl(GPIO_IN(offset)) >> GPIO_BIT(offset)) & 0x1;
+ return !!(tegra_gpio_readl(tgi, GPIO_IN(tgi, offset)) & bval);
}
static int tegra_gpio_direction_input(struct gpio_chip *chip, unsigned offset)
{
- tegra_gpio_mask_write(GPIO_MSK_OE(offset), offset, 0);
- tegra_gpio_enable(offset);
+ struct tegra_gpio_info *tgi = gpiochip_get_data(chip);
+
+ tegra_gpio_mask_write(tgi, GPIO_MSK_OE(tgi, offset), offset, 0);
+ tegra_gpio_enable(tgi, offset);
return 0;
}
static int tegra_gpio_direction_output(struct gpio_chip *chip, unsigned offset,
int value)
{
+ struct tegra_gpio_info *tgi = gpiochip_get_data(chip);
+
tegra_gpio_set(chip, offset, value);
- tegra_gpio_mask_write(GPIO_MSK_OE(offset), offset, 1);
- tegra_gpio_enable(offset);
+ tegra_gpio_mask_write(tgi, GPIO_MSK_OE(tgi, offset), offset, 1);
+ tegra_gpio_enable(tgi, offset);
return 0;
}
-static int tegra_gpio_to_irq(struct gpio_chip *chip, unsigned offset)
+static int tegra_gpio_get_direction(struct gpio_chip *chip, unsigned offset)
{
- return irq_find_mapping(irq_domain, offset);
+ struct tegra_gpio_info *tgi = gpiochip_get_data(chip);
+ u32 pin_mask = BIT(GPIO_BIT(offset));
+ u32 cnf, oe;
+
+ cnf = tegra_gpio_readl(tgi, GPIO_CNF(tgi, offset));
+ if (!(cnf & pin_mask))
+ return -EINVAL;
+
+ oe = tegra_gpio_readl(tgi, GPIO_OE(tgi, offset));
+
+ return (oe & pin_mask) ? GPIOF_DIR_OUT : GPIOF_DIR_IN;
}
-static struct gpio_chip tegra_gpio_chip = {
- .label = "tegra-gpio",
- .request = tegra_gpio_request,
- .free = tegra_gpio_free,
- .direction_input = tegra_gpio_direction_input,
- .get = tegra_gpio_get,
- .direction_output = tegra_gpio_direction_output,
- .set = tegra_gpio_set,
- .to_irq = tegra_gpio_to_irq,
- .base = 0,
-};
+static int tegra_gpio_set_debounce(struct gpio_chip *chip, unsigned int offset,
+ unsigned int debounce)
+{
+ struct tegra_gpio_info *tgi = gpiochip_get_data(chip);
+ struct tegra_gpio_bank *bank = &tgi->bank_info[GPIO_BANK(offset)];
+ unsigned int debounce_ms = DIV_ROUND_UP(debounce, 1000);
+ unsigned long flags;
+ int port;
+
+ if (!debounce_ms) {
+ tegra_gpio_mask_write(tgi, GPIO_MSK_DBC_EN(tgi, offset),
+ offset, 0);
+ return 0;
+ }
+
+ debounce_ms = min(debounce_ms, 255U);
+ port = GPIO_PORT(offset);
+
+ /* There is only one debounce count register per port and hence
+ * set the maximum of current and requested debounce time.
+ */
+ spin_lock_irqsave(&bank->dbc_lock[port], flags);
+ if (bank->dbc_cnt[port] < debounce_ms) {
+ tegra_gpio_writel(tgi, debounce_ms, GPIO_DBC_CNT(tgi, offset));
+ bank->dbc_cnt[port] = debounce_ms;
+ }
+ spin_unlock_irqrestore(&bank->dbc_lock[port], flags);
+
+ tegra_gpio_mask_write(tgi, GPIO_MSK_DBC_EN(tgi, offset), offset, 1);
+
+ return 0;
+}
+
+static int tegra_gpio_to_irq(struct gpio_chip *chip, unsigned offset)
+{
+ struct tegra_gpio_info *tgi = gpiochip_get_data(chip);
+
+ return irq_find_mapping(tgi->irq_domain, offset);
+}
static void tegra_gpio_irq_ack(struct irq_data *d)
{
+ struct tegra_gpio_bank *bank = irq_data_get_irq_chip_data(d);
+ struct tegra_gpio_info *tgi = bank->tgi;
int gpio = d->hwirq;
- tegra_gpio_writel(1 << GPIO_BIT(gpio), GPIO_INT_CLR(gpio));
+ tegra_gpio_writel(tgi, 1 << GPIO_BIT(gpio), GPIO_INT_CLR(tgi, gpio));
}
static void tegra_gpio_irq_mask(struct irq_data *d)
{
+ struct tegra_gpio_bank *bank = irq_data_get_irq_chip_data(d);
+ struct tegra_gpio_info *tgi = bank->tgi;
int gpio = d->hwirq;
- tegra_gpio_mask_write(GPIO_MSK_INT_ENB(gpio), gpio, 0);
+ tegra_gpio_mask_write(tgi, GPIO_MSK_INT_ENB(tgi, gpio), gpio, 0);
}
static void tegra_gpio_irq_unmask(struct irq_data *d)
{
+ struct tegra_gpio_bank *bank = irq_data_get_irq_chip_data(d);
+ struct tegra_gpio_info *tgi = bank->tgi;
int gpio = d->hwirq;
- tegra_gpio_mask_write(GPIO_MSK_INT_ENB(gpio), gpio, 1);
+ tegra_gpio_mask_write(tgi, GPIO_MSK_INT_ENB(tgi, gpio), gpio, 1);
}
static int tegra_gpio_irq_set_type(struct irq_data *d, unsigned int type)
{
int gpio = d->hwirq;
struct tegra_gpio_bank *bank = irq_data_get_irq_chip_data(d);
+ struct tegra_gpio_info *tgi = bank->tgi;
int port = GPIO_PORT(gpio);
int lvl_type;
int val;
@@ -233,23 +309,24 @@ static int tegra_gpio_irq_set_type(struct irq_data *d, unsigned int type)
return -EINVAL;
}
- ret = gpiochip_lock_as_irq(&tegra_gpio_chip, gpio);
+ ret = gpiochip_lock_as_irq(&tgi->gc, gpio);
if (ret) {
- dev_err(dev, "unable to lock Tegra GPIO %d as IRQ\n", gpio);
+ dev_err(tgi->dev,
+ "unable to lock Tegra GPIO %d as IRQ\n", gpio);
return ret;
}
spin_lock_irqsave(&bank->lvl_lock[port], flags);
- val = tegra_gpio_readl(GPIO_INT_LVL(gpio));
+ val = tegra_gpio_readl(tgi, GPIO_INT_LVL(tgi, gpio));
val &= ~(GPIO_INT_LVL_MASK << GPIO_BIT(gpio));
val |= lvl_type << GPIO_BIT(gpio);
- tegra_gpio_writel(val, GPIO_INT_LVL(gpio));
+ tegra_gpio_writel(tgi, val, GPIO_INT_LVL(tgi, gpio));
spin_unlock_irqrestore(&bank->lvl_lock[port], flags);
- tegra_gpio_mask_write(GPIO_MSK_OE(gpio), gpio, 0);
- tegra_gpio_enable(gpio);
+ tegra_gpio_mask_write(tgi, GPIO_MSK_OE(tgi, gpio), gpio, 0);
+ tegra_gpio_enable(tgi, gpio);
if (type & (IRQ_TYPE_LEVEL_LOW | IRQ_TYPE_LEVEL_HIGH))
irq_set_handler_locked(d, handle_level_irq);
@@ -261,9 +338,11 @@ static int tegra_gpio_irq_set_type(struct irq_data *d, unsigned int type)
static void tegra_gpio_irq_shutdown(struct irq_data *d)
{
+ struct tegra_gpio_bank *bank = irq_data_get_irq_chip_data(d);
+ struct tegra_gpio_info *tgi = bank->tgi;
int gpio = d->hwirq;
- gpiochip_unlock_as_irq(&tegra_gpio_chip, gpio);
+ gpiochip_unlock_as_irq(&tgi->gc, gpio);
}
static void tegra_gpio_irq_handler(struct irq_desc *desc)
@@ -271,19 +350,24 @@ static void tegra_gpio_irq_handler(struct irq_desc *desc)
int port;
int pin;
int unmasked = 0;
+ int gpio;
+ u32 lvl;
+ unsigned long sta;
struct irq_chip *chip = irq_desc_get_chip(desc);
struct tegra_gpio_bank *bank = irq_desc_get_handler_data(desc);
+ struct tegra_gpio_info *tgi = bank->tgi;
chained_irq_enter(chip, desc);
for (port = 0; port < 4; port++) {
- int gpio = tegra_gpio_compose(bank->bank, port, 0);
- unsigned long sta = tegra_gpio_readl(GPIO_INT_STA(gpio)) &
- tegra_gpio_readl(GPIO_INT_ENB(gpio));
- u32 lvl = tegra_gpio_readl(GPIO_INT_LVL(gpio));
+ gpio = tegra_gpio_compose(bank->bank, port, 0);
+ sta = tegra_gpio_readl(tgi, GPIO_INT_STA(tgi, gpio)) &
+ tegra_gpio_readl(tgi, GPIO_INT_ENB(tgi, gpio));
+ lvl = tegra_gpio_readl(tgi, GPIO_INT_LVL(tgi, gpio));
for_each_set_bit(pin, &sta, 8) {
- tegra_gpio_writel(1 << pin, GPIO_INT_CLR(gpio));
+ tegra_gpio_writel(tgi, 1 << pin,
+ GPIO_INT_CLR(tgi, gpio));
/* if gpio is edge triggered, clear condition
* before executing the handler so that we don't
@@ -306,22 +390,37 @@ static void tegra_gpio_irq_handler(struct irq_desc *desc)
#ifdef CONFIG_PM_SLEEP
static int tegra_gpio_resume(struct device *dev)
{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct tegra_gpio_info *tgi = platform_get_drvdata(pdev);
unsigned long flags;
int b;
int p;
local_irq_save(flags);
- for (b = 0; b < tegra_gpio_bank_count; b++) {
- struct tegra_gpio_bank *bank = &tegra_gpio_banks[b];
+ for (b = 0; b < tgi->bank_count; b++) {
+ struct tegra_gpio_bank *bank = &tgi->bank_info[b];
for (p = 0; p < ARRAY_SIZE(bank->oe); p++) {
unsigned int gpio = (b<<5) | (p<<3);
- tegra_gpio_writel(bank->cnf[p], GPIO_CNF(gpio));
- tegra_gpio_writel(bank->out[p], GPIO_OUT(gpio));
- tegra_gpio_writel(bank->oe[p], GPIO_OE(gpio));
- tegra_gpio_writel(bank->int_lvl[p], GPIO_INT_LVL(gpio));
- tegra_gpio_writel(bank->int_enb[p], GPIO_INT_ENB(gpio));
+ tegra_gpio_writel(tgi, bank->cnf[p],
+ GPIO_CNF(tgi, gpio));
+
+ if (tgi->soc->debounce_supported) {
+ tegra_gpio_writel(tgi, bank->dbc_cnt[p],
+ GPIO_DBC_CNT(tgi, gpio));
+ tegra_gpio_writel(tgi, bank->dbc_enb[p],
+ GPIO_MSK_DBC_EN(tgi, gpio));
+ }
+
+ tegra_gpio_writel(tgi, bank->out[p],
+ GPIO_OUT(tgi, gpio));
+ tegra_gpio_writel(tgi, bank->oe[p],
+ GPIO_OE(tgi, gpio));
+ tegra_gpio_writel(tgi, bank->int_lvl[p],
+ GPIO_INT_LVL(tgi, gpio));
+ tegra_gpio_writel(tgi, bank->int_enb[p],
+ GPIO_INT_ENB(tgi, gpio));
}
}
@@ -331,25 +430,39 @@ static int tegra_gpio_resume(struct device *dev)
static int tegra_gpio_suspend(struct device *dev)
{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct tegra_gpio_info *tgi = platform_get_drvdata(pdev);
unsigned long flags;
int b;
int p;
local_irq_save(flags);
- for (b = 0; b < tegra_gpio_bank_count; b++) {
- struct tegra_gpio_bank *bank = &tegra_gpio_banks[b];
+ for (b = 0; b < tgi->bank_count; b++) {
+ struct tegra_gpio_bank *bank = &tgi->bank_info[b];
for (p = 0; p < ARRAY_SIZE(bank->oe); p++) {
unsigned int gpio = (b<<5) | (p<<3);
- bank->cnf[p] = tegra_gpio_readl(GPIO_CNF(gpio));
- bank->out[p] = tegra_gpio_readl(GPIO_OUT(gpio));
- bank->oe[p] = tegra_gpio_readl(GPIO_OE(gpio));
- bank->int_enb[p] = tegra_gpio_readl(GPIO_INT_ENB(gpio));
- bank->int_lvl[p] = tegra_gpio_readl(GPIO_INT_LVL(gpio));
+ bank->cnf[p] = tegra_gpio_readl(tgi,
+ GPIO_CNF(tgi, gpio));
+ bank->out[p] = tegra_gpio_readl(tgi,
+ GPIO_OUT(tgi, gpio));
+ bank->oe[p] = tegra_gpio_readl(tgi,
+ GPIO_OE(tgi, gpio));
+ if (tgi->soc->debounce_supported) {
+ bank->dbc_enb[p] = tegra_gpio_readl(tgi,
+ GPIO_MSK_DBC_EN(tgi, gpio));
+ bank->dbc_enb[p] = (bank->dbc_enb[p] << 8) |
+ bank->dbc_enb[p];
+ }
+
+ bank->int_enb[p] = tegra_gpio_readl(tgi,
+ GPIO_INT_ENB(tgi, gpio));
+ bank->int_lvl[p] = tegra_gpio_readl(tgi,
+ GPIO_INT_LVL(tgi, gpio));
/* Enable gpio irq for wake up source */
- tegra_gpio_writel(bank->wake_enb[p],
- GPIO_INT_ENB(gpio));
+ tegra_gpio_writel(tgi, bank->wake_enb[p],
+ GPIO_INT_ENB(tgi, gpio));
}
}
local_irq_restore(flags);
@@ -382,22 +495,23 @@ static int tegra_gpio_irq_set_wake(struct irq_data *d, unsigned int enable)
static int dbg_gpio_show(struct seq_file *s, void *unused)
{
+ struct tegra_gpio_info *tgi = s->private;
int i;
int j;
- for (i = 0; i < tegra_gpio_bank_count; i++) {
+ for (i = 0; i < tgi->bank_count; i++) {
for (j = 0; j < 4; j++) {
int gpio = tegra_gpio_compose(i, j, 0);
seq_printf(s,
"%d:%d %02x %02x %02x %02x %02x %02x %06x\n",
i, j,
- tegra_gpio_readl(GPIO_CNF(gpio)),
- tegra_gpio_readl(GPIO_OE(gpio)),
- tegra_gpio_readl(GPIO_OUT(gpio)),
- tegra_gpio_readl(GPIO_IN(gpio)),
- tegra_gpio_readl(GPIO_INT_STA(gpio)),
- tegra_gpio_readl(GPIO_INT_ENB(gpio)),
- tegra_gpio_readl(GPIO_INT_LVL(gpio)));
+ tegra_gpio_readl(tgi, GPIO_CNF(tgi, gpio)),
+ tegra_gpio_readl(tgi, GPIO_OE(tgi, gpio)),
+ tegra_gpio_readl(tgi, GPIO_OUT(tgi, gpio)),
+ tegra_gpio_readl(tgi, GPIO_IN(tgi, gpio)),
+ tegra_gpio_readl(tgi, GPIO_INT_STA(tgi, gpio)),
+ tegra_gpio_readl(tgi, GPIO_INT_ENB(tgi, gpio)),
+ tegra_gpio_readl(tgi, GPIO_INT_LVL(tgi, gpio)));
}
}
return 0;
@@ -405,7 +519,7 @@ static int dbg_gpio_show(struct seq_file *s, void *unused)
static int dbg_gpio_open(struct inode *inode, struct file *file)
{
- return single_open(file, dbg_gpio_show, &inode->i_private);
+ return single_open(file, dbg_gpio_show, inode->i_private);
}
static const struct file_operations debug_fops = {
@@ -415,66 +529,28 @@ static const struct file_operations debug_fops = {
.release = single_release,
};
-static void tegra_gpio_debuginit(void)
+static void tegra_gpio_debuginit(struct tegra_gpio_info *tgi)
{
(void) debugfs_create_file("tegra_gpio", S_IRUGO,
- NULL, NULL, &debug_fops);
+ NULL, tgi, &debug_fops);
}
#else
-static inline void tegra_gpio_debuginit(void)
+static inline void tegra_gpio_debuginit(struct tegra_gpio_info *tgi)
{
}
#endif
-static struct irq_chip tegra_gpio_irq_chip = {
- .name = "GPIO",
- .irq_ack = tegra_gpio_irq_ack,
- .irq_mask = tegra_gpio_irq_mask,
- .irq_unmask = tegra_gpio_irq_unmask,
- .irq_set_type = tegra_gpio_irq_set_type,
- .irq_shutdown = tegra_gpio_irq_shutdown,
-#ifdef CONFIG_PM_SLEEP
- .irq_set_wake = tegra_gpio_irq_set_wake,
-#endif
-};
-
static const struct dev_pm_ops tegra_gpio_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(tegra_gpio_suspend, tegra_gpio_resume)
};
-struct tegra_gpio_soc_config {
- u32 bank_stride;
- u32 upper_offset;
-};
-
-static struct tegra_gpio_soc_config tegra20_gpio_config = {
- .bank_stride = 0x80,
- .upper_offset = 0x800,
-};
-
-static struct tegra_gpio_soc_config tegra30_gpio_config = {
- .bank_stride = 0x100,
- .upper_offset = 0x80,
-};
-
-static const struct of_device_id tegra_gpio_of_match[] = {
- { .compatible = "nvidia,tegra30-gpio", .data = &tegra30_gpio_config },
- { .compatible = "nvidia,tegra20-gpio", .data = &tegra20_gpio_config },
- { },
-};
-
-/* This lock class tells lockdep that GPIO irqs are in a different
- * category than their parents, so it won't report false recursion.
- */
-static struct lock_class_key gpio_lock_class;
-
static int tegra_gpio_probe(struct platform_device *pdev)
{
- const struct of_device_id *match;
- struct tegra_gpio_soc_config *config;
+ const struct tegra_gpio_soc_config *config;
+ struct tegra_gpio_info *tgi;
struct resource *res;
struct tegra_gpio_bank *bank;
int ret;
@@ -482,102 +558,153 @@ static int tegra_gpio_probe(struct platform_device *pdev)
int i;
int j;
- dev = &pdev->dev;
-
- match = of_match_device(tegra_gpio_of_match, &pdev->dev);
- if (!match) {
+ config = of_device_get_match_data(&pdev->dev);
+ if (!config) {
dev_err(&pdev->dev, "Error: No device match found\n");
return -ENODEV;
}
- config = (struct tegra_gpio_soc_config *)match->data;
- tegra_gpio_bank_stride = config->bank_stride;
- tegra_gpio_upper_offset = config->upper_offset;
+ tgi = devm_kzalloc(&pdev->dev, sizeof(*tgi), GFP_KERNEL);
+ if (!tgi)
+ return -ENODEV;
+
+ tgi->soc = config;
+ tgi->dev = &pdev->dev;
for (;;) {
- res = platform_get_resource(pdev, IORESOURCE_IRQ, tegra_gpio_bank_count);
+ res = platform_get_resource(pdev, IORESOURCE_IRQ,
+ tgi->bank_count);
if (!res)
break;
- tegra_gpio_bank_count++;
+ tgi->bank_count++;
}
- if (!tegra_gpio_bank_count) {
+ if (!tgi->bank_count) {
dev_err(&pdev->dev, "Missing IRQ resource\n");
return -ENODEV;
}
- tegra_gpio_chip.ngpio = tegra_gpio_bank_count * 32;
+ tgi->gc.label = "tegra-gpio";
+ tgi->gc.request = tegra_gpio_request;
+ tgi->gc.free = tegra_gpio_free;
+ tgi->gc.direction_input = tegra_gpio_direction_input;
+ tgi->gc.get = tegra_gpio_get;
+ tgi->gc.direction_output = tegra_gpio_direction_output;
+ tgi->gc.set = tegra_gpio_set;
+ tgi->gc.get_direction = tegra_gpio_get_direction;
+ tgi->gc.to_irq = tegra_gpio_to_irq;
+ tgi->gc.base = 0;
+ tgi->gc.ngpio = tgi->bank_count * 32;
+ tgi->gc.parent = &pdev->dev;
+ tgi->gc.of_node = pdev->dev.of_node;
+
+ tgi->ic.name = "GPIO";
+ tgi->ic.irq_ack = tegra_gpio_irq_ack;
+ tgi->ic.irq_mask = tegra_gpio_irq_mask;
+ tgi->ic.irq_unmask = tegra_gpio_irq_unmask;
+ tgi->ic.irq_set_type = tegra_gpio_irq_set_type;
+ tgi->ic.irq_shutdown = tegra_gpio_irq_shutdown;
+#ifdef CONFIG_PM_SLEEP
+ tgi->ic.irq_set_wake = tegra_gpio_irq_set_wake;
+#endif
+
+ platform_set_drvdata(pdev, tgi);
+
+ if (config->debounce_supported)
+ tgi->gc.set_debounce = tegra_gpio_set_debounce;
- tegra_gpio_banks = devm_kzalloc(&pdev->dev,
- tegra_gpio_bank_count * sizeof(*tegra_gpio_banks),
- GFP_KERNEL);
- if (!tegra_gpio_banks)
+ tgi->bank_info = devm_kzalloc(&pdev->dev, tgi->bank_count *
+ sizeof(*tgi->bank_info), GFP_KERNEL);
+ if (!tgi->bank_info)
return -ENODEV;
- irq_domain = irq_domain_add_linear(pdev->dev.of_node,
- tegra_gpio_chip.ngpio,
- &irq_domain_simple_ops, NULL);
- if (!irq_domain)
+ tgi->irq_domain = irq_domain_add_linear(pdev->dev.of_node,
+ tgi->gc.ngpio,
+ &irq_domain_simple_ops, NULL);
+ if (!tgi->irq_domain)
return -ENODEV;
- for (i = 0; i < tegra_gpio_bank_count; i++) {
+ for (i = 0; i < tgi->bank_count; i++) {
res = platform_get_resource(pdev, IORESOURCE_IRQ, i);
if (!res) {
dev_err(&pdev->dev, "Missing IRQ resource\n");
return -ENODEV;
}
- bank = &tegra_gpio_banks[i];
+ bank = &tgi->bank_info[i];
bank->bank = i;
bank->irq = res->start;
+ bank->tgi = tgi;
}
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- regs = devm_ioremap_resource(&pdev->dev, res);
- if (IS_ERR(regs))
- return PTR_ERR(regs);
+ tgi->regs = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(tgi->regs))
+ return PTR_ERR(tgi->regs);
- for (i = 0; i < tegra_gpio_bank_count; i++) {
+ for (i = 0; i < tgi->bank_count; i++) {
for (j = 0; j < 4; j++) {
int gpio = tegra_gpio_compose(i, j, 0);
- tegra_gpio_writel(0x00, GPIO_INT_ENB(gpio));
+ tegra_gpio_writel(tgi, 0x00, GPIO_INT_ENB(tgi, gpio));
}
}
- tegra_gpio_chip.of_node = pdev->dev.of_node;
-
- ret = devm_gpiochip_add_data(&pdev->dev, &tegra_gpio_chip, NULL);
+ ret = devm_gpiochip_add_data(&pdev->dev, &tgi->gc, tgi);
if (ret < 0) {
- irq_domain_remove(irq_domain);
+ irq_domain_remove(tgi->irq_domain);
return ret;
}
- for (gpio = 0; gpio < tegra_gpio_chip.ngpio; gpio++) {
- int irq = irq_create_mapping(irq_domain, gpio);
+ for (gpio = 0; gpio < tgi->gc.ngpio; gpio++) {
+ int irq = irq_create_mapping(tgi->irq_domain, gpio);
/* No validity check; all Tegra GPIOs are valid IRQs */
- bank = &tegra_gpio_banks[GPIO_BANK(gpio)];
+ bank = &tgi->bank_info[GPIO_BANK(gpio)];
- irq_set_lockdep_class(irq, &gpio_lock_class);
+ irq_set_lockdep_class(irq, &tgi->lock_class);
irq_set_chip_data(irq, bank);
- irq_set_chip_and_handler(irq, &tegra_gpio_irq_chip,
- handle_simple_irq);
+ irq_set_chip_and_handler(irq, &tgi->ic, handle_simple_irq);
}
- for (i = 0; i < tegra_gpio_bank_count; i++) {
- bank = &tegra_gpio_banks[i];
+ for (i = 0; i < tgi->bank_count; i++) {
+ bank = &tgi->bank_info[i];
irq_set_chained_handler_and_data(bank->irq,
tegra_gpio_irq_handler, bank);
- for (j = 0; j < 4; j++)
+ for (j = 0; j < 4; j++) {
spin_lock_init(&bank->lvl_lock[j]);
+ spin_lock_init(&bank->dbc_lock[j]);
+ }
}
- tegra_gpio_debuginit();
+ tegra_gpio_debuginit(tgi);
return 0;
}
+static const struct tegra_gpio_soc_config tegra20_gpio_config = {
+ .bank_stride = 0x80,
+ .upper_offset = 0x800,
+};
+
+static const struct tegra_gpio_soc_config tegra30_gpio_config = {
+ .bank_stride = 0x100,
+ .upper_offset = 0x80,
+};
+
+static const struct tegra_gpio_soc_config tegra210_gpio_config = {
+ .debounce_supported = true,
+ .bank_stride = 0x100,
+ .upper_offset = 0x80,
+};
+
+static const struct of_device_id tegra_gpio_of_match[] = {
+ { .compatible = "nvidia,tegra210-gpio", .data = &tegra210_gpio_config },
+ { .compatible = "nvidia,tegra30-gpio", .data = &tegra30_gpio_config },
+ { .compatible = "nvidia,tegra20-gpio", .data = &tegra20_gpio_config },
+ { },
+};
+
static struct platform_driver tegra_gpio_driver = {
.driver = {
.name = "tegra-gpio",
diff --git a/drivers/gpio/gpio-timberdale.c b/drivers/gpio/gpio-timberdale.c
index 85ed608c2b27..181f86ce00cd 100644
--- a/drivers/gpio/gpio-timberdale.c
+++ b/drivers/gpio/gpio-timberdale.c
@@ -1,5 +1,6 @@
/*
* Timberdale FPGA GPIO driver
+ * Author: Mocean Laboratories
* Copyright (c) 2009 Intel Corporation
*
* This program is free software; you can redistribute it and/or modify
@@ -20,7 +21,7 @@
* Timberdale FPGA GPIO
*/
-#include <linux/module.h>
+#include <linux/init.h>
#include <linux/gpio.h>
#include <linux/platform_device.h>
#include <linux/irq.h>
@@ -290,40 +291,14 @@ static int timbgpio_probe(struct platform_device *pdev)
return 0;
}
-static int timbgpio_remove(struct platform_device *pdev)
-{
- struct timbgpio_platform_data *pdata = dev_get_platdata(&pdev->dev);
- struct timbgpio *tgpio = platform_get_drvdata(pdev);
- int irq = platform_get_irq(pdev, 0);
-
- if (irq >= 0 && tgpio->irq_base > 0) {
- int i;
- for (i = 0; i < pdata->nr_pins; i++) {
- irq_set_chip(tgpio->irq_base + i, NULL);
- irq_set_chip_data(tgpio->irq_base + i, NULL);
- }
-
- irq_set_handler(irq, NULL);
- irq_set_handler_data(irq, NULL);
- }
-
- return 0;
-}
-
static struct platform_driver timbgpio_platform_driver = {
.driver = {
- .name = DRIVER_NAME,
+ .name = DRIVER_NAME,
+ .suppress_bind_attrs = true,
},
.probe = timbgpio_probe,
- .remove = timbgpio_remove,
};
/*--------------------------------------------------------------------------*/
-module_platform_driver(timbgpio_platform_driver);
-
-MODULE_DESCRIPTION("Timberdale GPIO driver");
-MODULE_LICENSE("GPL v2");
-MODULE_AUTHOR("Mocean Laboratories");
-MODULE_ALIAS("platform:"DRIVER_NAME);
-
+builtin_platform_driver(timbgpio_platform_driver);
diff --git a/drivers/gpio/gpio-tpic2810.c b/drivers/gpio/gpio-tpic2810.c
index 9f020aa4b067..cace79c1b70a 100644
--- a/drivers/gpio/gpio-tpic2810.c
+++ b/drivers/gpio/gpio-tpic2810.c
@@ -57,39 +57,34 @@ static int tpic2810_direction_output(struct gpio_chip *chip,
return 0;
}
-static void tpic2810_set(struct gpio_chip *chip, unsigned offset, int value)
+static void tpic2810_set_mask_bits(struct gpio_chip *chip, u8 mask, u8 bits)
{
struct tpic2810 *gpio = gpiochip_get_data(chip);
+ u8 buffer;
+ int err;
mutex_lock(&gpio->lock);
- if (value)
- gpio->buffer |= BIT(offset);
- else
- gpio->buffer &= ~BIT(offset);
+ buffer = gpio->buffer & ~mask;
+ buffer |= (mask & bits);
- i2c_smbus_write_byte_data(gpio->client, TPIC2810_WS_COMMAND,
- gpio->buffer);
+ err = i2c_smbus_write_byte_data(gpio->client, TPIC2810_WS_COMMAND,
+ buffer);
+ if (!err)
+ gpio->buffer = buffer;
mutex_unlock(&gpio->lock);
}
+static void tpic2810_set(struct gpio_chip *chip, unsigned offset, int value)
+{
+ tpic2810_set_mask_bits(chip, BIT(offset), value ? BIT(offset) : 0);
+}
+
static void tpic2810_set_multiple(struct gpio_chip *chip, unsigned long *mask,
unsigned long *bits)
{
- struct tpic2810 *gpio = gpiochip_get_data(chip);
-
- mutex_lock(&gpio->lock);
-
- /* clear bits under mask */
- gpio->buffer &= ~(*mask);
- /* set bits under mask */
- gpio->buffer |= ((*mask) & (*bits));
-
- i2c_smbus_write_byte_data(gpio->client, TPIC2810_WS_COMMAND,
- gpio->buffer);
-
- mutex_unlock(&gpio->lock);
+ tpic2810_set_mask_bits(chip, *mask, *bits);
}
static struct gpio_chip template_chip = {
diff --git a/drivers/gpio/gpio-tps65218.c b/drivers/gpio/gpio-tps65218.c
index 313c0e484607..0eaeac8de9de 100644
--- a/drivers/gpio/gpio-tps65218.c
+++ b/drivers/gpio/gpio-tps65218.c
@@ -101,16 +101,6 @@ static int tps65218_gpio_request(struct gpio_chip *gc, unsigned offset)
break;
case 1:
- /* GP02 is push-pull by default, can be set as open drain. */
- if (gpiochip_line_is_open_drain(gc, offset)) {
- ret = tps65218_clear_bits(tps65218,
- TPS65218_REG_CONFIG1,
- TPS65218_CONFIG1_GPO2_BUF,
- TPS65218_PROTECT_L1);
- if (ret)
- return ret;
- }
-
/* Setup GPO2 */
ret = tps65218_clear_bits(tps65218, TPS65218_REG_CONFIG1,
TPS65218_CONFIG1_IO1_SEL,
@@ -148,6 +138,40 @@ static int tps65218_gpio_request(struct gpio_chip *gc, unsigned offset)
return 0;
}
+static int tps65218_gpio_set_single_ended(struct gpio_chip *gc,
+ unsigned offset,
+ enum single_ended_mode mode)
+{
+ struct tps65218_gpio *tps65218_gpio = gpiochip_get_data(gc);
+ struct tps65218 *tps65218 = tps65218_gpio->tps65218;
+
+ switch (offset) {
+ case 0:
+ case 2:
+ /* GPO1 is hardwired to be open drain */
+ if (mode == LINE_MODE_OPEN_DRAIN)
+ return 0;
+ return -ENOTSUPP;
+ case 1:
+ /* GPO2 is push-pull by default, can be set as open drain. */
+ if (mode == LINE_MODE_OPEN_DRAIN)
+ return tps65218_clear_bits(tps65218,
+ TPS65218_REG_CONFIG1,
+ TPS65218_CONFIG1_GPO2_BUF,
+ TPS65218_PROTECT_L1);
+ if (mode == LINE_MODE_PUSH_PULL)
+ return tps65218_set_bits(tps65218,
+ TPS65218_REG_CONFIG1,
+ TPS65218_CONFIG1_GPO2_BUF,
+ TPS65218_CONFIG1_GPO2_BUF,
+ TPS65218_PROTECT_L1);
+ return -ENOTSUPP;
+ default:
+ break;
+ }
+ return -ENOTSUPP;
+}
+
static struct gpio_chip template_chip = {
.label = "gpio-tps65218",
.owner = THIS_MODULE,
@@ -156,6 +180,7 @@ static struct gpio_chip template_chip = {
.direction_input = tps65218_gpio_input,
.get = tps65218_gpio_get,
.set = tps65218_gpio_set,
+ .set_single_ended = tps65218_gpio_set_single_ended,
.can_sleep = true,
.ngpio = 3,
.base = -1,
diff --git a/drivers/gpio/gpio-tps6586x.c b/drivers/gpio/gpio-tps6586x.c
index c88bdc8ee2c9..6b15e68a314f 100644
--- a/drivers/gpio/gpio-tps6586x.c
+++ b/drivers/gpio/gpio-tps6586x.c
@@ -24,7 +24,7 @@
#include <linux/errno.h>
#include <linux/gpio.h>
#include <linux/kernel.h>
-#include <linux/module.h>
+#include <linux/init.h>
#include <linux/mfd/tps6586x.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
@@ -140,14 +140,3 @@ static int __init tps6586x_gpio_init(void)
return platform_driver_register(&tps6586x_gpio_driver);
}
subsys_initcall(tps6586x_gpio_init);
-
-static void __exit tps6586x_gpio_exit(void)
-{
- platform_driver_unregister(&tps6586x_gpio_driver);
-}
-module_exit(tps6586x_gpio_exit);
-
-MODULE_ALIAS("platform:tps6586x-gpio");
-MODULE_DESCRIPTION("GPIO interface for TPS6586X PMIC");
-MODULE_AUTHOR("Laxman Dewangan <ldewangan@nvidia.com>");
-MODULE_LICENSE("GPL");
diff --git a/drivers/gpio/gpio-tps65910.c b/drivers/gpio/gpio-tps65910.c
index cdbd7c740043..0ae6a5a54ea8 100644
--- a/drivers/gpio/gpio-tps65910.c
+++ b/drivers/gpio/gpio-tps65910.c
@@ -4,7 +4,7 @@
* Copyright 2010 Texas Instruments Inc.
*
* Author: Graeme Gregory <gg@slimlogic.co.uk>
- * Author: Jorge Eduardo Candelaria jedu@slimlogic.co.uk>
+ * Author: Jorge Eduardo Candelaria <jedu@slimlogic.co.uk>
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the
@@ -14,7 +14,7 @@
*/
#include <linux/kernel.h>
-#include <linux/module.h>
+#include <linux/init.h>
#include <linux/errno.h>
#include <linux/gpio.h>
#include <linux/i2c.h>
@@ -193,15 +193,3 @@ static int __init tps65910_gpio_init(void)
return platform_driver_register(&tps65910_gpio_driver);
}
subsys_initcall(tps65910_gpio_init);
-
-static void __exit tps65910_gpio_exit(void)
-{
- platform_driver_unregister(&tps65910_gpio_driver);
-}
-module_exit(tps65910_gpio_exit);
-
-MODULE_AUTHOR("Graeme Gregory <gg@slimlogic.co.uk>");
-MODULE_AUTHOR("Jorge Eduardo Candelaria jedu@slimlogic.co.uk>");
-MODULE_DESCRIPTION("GPIO interface for TPS65910/TPS6511 PMICs");
-MODULE_LICENSE("GPL v2");
-MODULE_ALIAS("platform:tps65910-gpio");
diff --git a/drivers/gpio/gpio-vx855.c b/drivers/gpio/gpio-vx855.c
index 8cdb9f7ec7e0..4e450121129b 100644
--- a/drivers/gpio/gpio-vx855.c
+++ b/drivers/gpio/gpio-vx855.c
@@ -186,6 +186,28 @@ static int vx855gpio_direction_output(struct gpio_chip *gpio,
return 0;
}
+static int vx855gpio_set_single_ended(struct gpio_chip *gpio,
+ unsigned int nr,
+ enum single_ended_mode mode)
+{
+ /* The GPI cannot be single-ended */
+ if (nr < NR_VX855_GPI)
+ return -EINVAL;
+
+ /* The GPO's are push-pull */
+ if (nr < NR_VX855_GPInO) {
+ if (mode != LINE_MODE_PUSH_PULL)
+ return -ENOTSUPP;
+ return 0;
+ }
+
+ /* The GPIO's are open drain */
+ if (mode != LINE_MODE_OPEN_DRAIN)
+ return -ENOTSUPP;
+
+ return 0;
+}
+
static const char *vx855gpio_names[NR_VX855_GP] = {
"VX855_GPI0", "VX855_GPI1", "VX855_GPI2", "VX855_GPI3", "VX855_GPI4",
"VX855_GPI5", "VX855_GPI6", "VX855_GPI7", "VX855_GPI8", "VX855_GPI9",
@@ -209,6 +231,7 @@ static void vx855gpio_gpio_setup(struct vx855_gpio *vg)
c->direction_output = vx855gpio_direction_output;
c->get = vx855gpio_get;
c->set = vx855gpio_set;
+ c->set_single_ended = vx855gpio_set_single_ended;
c->dbg_show = NULL;
c->base = 0;
c->ngpio = NR_VX855_GP;
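
The hunks above and below all add the same thing: a .set_single_ended() callback on struct gpio_chip, through which gpiolib can ask the hardware for an open-drain, open-source or push-pull output stage instead of emulating it. A minimal sketch of such a callback for a hypothetical controller with one open-drain enable bit per line follows; the demo_* names, the register offset and the regmap layout are assumptions for illustration, not taken from any driver in this series.

#include <linux/bitops.h>
#include <linux/errno.h>
#include <linux/gpio/driver.h>
#include <linux/regmap.h>

#define DEMO_OD_EN_REG  0x10    /* hypothetical open-drain enable register */

struct demo_gpio {
        struct gpio_chip chip;
        struct regmap *regmap;
        /* assumed to be passed to gpiochip_add_data() as the data pointer */
};

static int demo_gpio_set_single_ended(struct gpio_chip *chip,
                                      unsigned int offset,
                                      enum single_ended_mode mode)
{
        struct demo_gpio *demo = gpiochip_get_data(chip);

        switch (mode) {
        case LINE_MODE_OPEN_DRAIN:
                /* Set the per-line open-drain enable bit */
                return regmap_update_bits(demo->regmap, DEMO_OD_EN_REG,
                                          BIT(offset), BIT(offset));
        case LINE_MODE_PUSH_PULL:
                /* Clear it to get a push-pull driver stage */
                return regmap_update_bits(demo->regmap, DEMO_OD_EN_REG,
                                          BIT(offset), 0);
        default:
                /* Open source (or anything else) is not supported here */
                return -ENOTSUPP;
        }
}

Drivers whose output stages are fixed, such as gpio-vx855 above, still gain from implementing the callback: returning 0 for the mode a line already has and -ENOTSUPP otherwise lets gpiolib fall back to emulation only where it is really needed.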
diff --git a/drivers/gpio/gpio-wm831x.c b/drivers/gpio/gpio-wm831x.c
index 18cb0f534b91..41ec7834059a 100644
--- a/drivers/gpio/gpio-wm831x.c
+++ b/drivers/gpio/gpio-wm831x.c
@@ -132,6 +132,28 @@ static int wm831x_gpio_set_debounce(struct gpio_chip *chip, unsigned offset,
return wm831x_set_bits(wm831x, reg, WM831X_GPN_FN_MASK, fn);
}
+static int wm831x_set_single_ended(struct gpio_chip *chip,
+ unsigned int offset,
+ enum single_ended_mode mode)
+{
+ struct wm831x_gpio *wm831x_gpio = gpiochip_get_data(chip);
+ struct wm831x *wm831x = wm831x_gpio->wm831x;
+ int reg = WM831X_GPIO1_CONTROL + offset;
+
+ switch (mode) {
+ case LINE_MODE_OPEN_DRAIN:
+ return wm831x_set_bits(wm831x, reg,
+ WM831X_GPN_OD_MASK, WM831X_GPN_OD);
+ case LINE_MODE_PUSH_PULL:
+ return wm831x_set_bits(wm831x, reg,
+ WM831X_GPN_OD_MASK, 0);
+ default:
+ break;
+ }
+
+ return -ENOTSUPP;
+}
+
#ifdef CONFIG_DEBUG_FS
static void wm831x_gpio_dbg_show(struct seq_file *s, struct gpio_chip *chip)
{
@@ -216,7 +238,7 @@ static void wm831x_gpio_dbg_show(struct seq_file *s, struct gpio_chip *chip)
pull,
powerdomain,
reg & WM831X_GPN_POL ? "" : " inverted",
- reg & WM831X_GPN_OD ? "open-drain" : "CMOS",
+ reg & WM831X_GPN_OD ? "open-drain" : "push-pull",
tristated ? " tristated" : "",
reg);
}
@@ -234,6 +256,7 @@ static struct gpio_chip template_chip = {
.set = wm831x_gpio_set,
.to_irq = wm831x_gpio_to_irq,
.set_debounce = wm831x_gpio_set_debounce,
+ .set_single_ended = wm831x_set_single_ended,
.dbg_show = wm831x_gpio_dbg_show,
.can_sleep = true,
};
diff --git a/drivers/gpio/gpio-wm8994.c b/drivers/gpio/gpio-wm8994.c
index b089df99a0d0..744af388c949 100644
--- a/drivers/gpio/gpio-wm8994.c
+++ b/drivers/gpio/gpio-wm8994.c
@@ -103,6 +103,28 @@ static void wm8994_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
wm8994_set_bits(wm8994, WM8994_GPIO_1 + offset, WM8994_GPN_LVL, value);
}
+static int wm8994_gpio_set_single_ended(struct gpio_chip *chip,
+ unsigned int offset,
+ enum single_ended_mode mode)
+{
+ struct wm8994_gpio *wm8994_gpio = gpiochip_get_data(chip);
+ struct wm8994 *wm8994 = wm8994_gpio->wm8994;
+
+ switch (mode) {
+ case LINE_MODE_OPEN_DRAIN:
+ return wm8994_set_bits(wm8994, WM8994_GPIO_1 + offset,
+ WM8994_GPN_OP_CFG_MASK,
+ WM8994_GPN_OP_CFG);
+ case LINE_MODE_PUSH_PULL:
+ return wm8994_set_bits(wm8994, WM8994_GPIO_1 + offset,
+ WM8994_GPN_OP_CFG_MASK, 0);
+ default:
+ break;
+ }
+
+ return -ENOTSUPP;
+}
+
static int wm8994_gpio_to_irq(struct gpio_chip *chip, unsigned offset)
{
struct wm8994_gpio *wm8994_gpio = gpiochip_get_data(chip);
@@ -217,7 +239,7 @@ static void wm8994_gpio_dbg_show(struct seq_file *s, struct gpio_chip *chip)
if (reg & WM8994_GPN_OP_CFG)
seq_printf(s, "open drain ");
else
- seq_printf(s, "CMOS ");
+ seq_printf(s, "push-pull ");
seq_printf(s, "%s (%x)\n",
wm8994_gpio_fn(reg & WM8994_GPN_FN_MASK), reg);
@@ -235,6 +257,7 @@ static struct gpio_chip template_chip = {
.get = wm8994_gpio_get,
.direction_output = wm8994_gpio_direction_out,
.set = wm8994_gpio_set,
+ .set_single_ended = wm8994_gpio_set_single_ended,
.to_irq = wm8994_gpio_to_irq,
.dbg_show = wm8994_gpio_dbg_show,
.can_sleep = true,
diff --git a/drivers/gpio/gpio-ws16c48.c b/drivers/gpio/gpio-ws16c48.c
index 51f41e8fd21e..eaa71d440ccf 100644
--- a/drivers/gpio/gpio-ws16c48.c
+++ b/drivers/gpio/gpio-ws16c48.c
@@ -19,18 +19,23 @@
#include <linux/ioport.h>
#include <linux/interrupt.h>
#include <linux/irqdesc.h>
+#include <linux/isa.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>
-#include <linux/platform_device.h>
#include <linux/spinlock.h>
-static unsigned ws16c48_base;
-module_param(ws16c48_base, uint, 0);
-MODULE_PARM_DESC(ws16c48_base, "WinSystems WS16C48 base address");
-static unsigned ws16c48_irq;
-module_param(ws16c48_irq, uint, 0);
-MODULE_PARM_DESC(ws16c48_irq, "WinSystems WS16C48 interrupt line number");
+#define WS16C48_EXTENT 16
+#define MAX_NUM_WS16C48 max_num_isa_dev(WS16C48_EXTENT)
+
+static unsigned int base[MAX_NUM_WS16C48];
+static unsigned int num_ws16c48;
+module_param_array(base, uint, &num_ws16c48, 0);
+MODULE_PARM_DESC(base, "WinSystems WS16C48 base addresses");
+
+static unsigned int irq[MAX_NUM_WS16C48];
+module_param_array(irq, uint, NULL, 0);
+MODULE_PARM_DESC(irq, "WinSystems WS16C48 interrupt line numbers");
/**
* struct ws16c48_gpio - GPIO device private data structure
@@ -298,23 +303,19 @@ static irqreturn_t ws16c48_irq_handler(int irq, void *dev_id)
return IRQ_HANDLED;
}
-static int __init ws16c48_probe(struct platform_device *pdev)
+static int ws16c48_probe(struct device *dev, unsigned int id)
{
- struct device *dev = &pdev->dev;
struct ws16c48_gpio *ws16c48gpio;
- const unsigned base = ws16c48_base;
- const unsigned extent = 16;
const char *const name = dev_name(dev);
int err;
- const unsigned irq = ws16c48_irq;
ws16c48gpio = devm_kzalloc(dev, sizeof(*ws16c48gpio), GFP_KERNEL);
if (!ws16c48gpio)
return -ENOMEM;
- if (!devm_request_region(dev, base, extent, name)) {
+ if (!devm_request_region(dev, base[id], WS16C48_EXTENT, name)) {
dev_err(dev, "Unable to lock port addresses (0x%X-0x%X)\n",
- base, base + extent);
+ base[id], base[id] + WS16C48_EXTENT);
return -EBUSY;
}
@@ -328,8 +329,8 @@ static int __init ws16c48_probe(struct platform_device *pdev)
ws16c48gpio->chip.direction_output = ws16c48_gpio_direction_output;
ws16c48gpio->chip.get = ws16c48_gpio_get;
ws16c48gpio->chip.set = ws16c48_gpio_set;
- ws16c48gpio->base = base;
- ws16c48gpio->irq = irq;
+ ws16c48gpio->base = base[id];
+ ws16c48gpio->irq = irq[id];
spin_lock_init(&ws16c48gpio->lock);
@@ -342,11 +343,11 @@ static int __init ws16c48_probe(struct platform_device *pdev)
}
/* Disable IRQ by default */
- outb(0x80, base + 7);
- outb(0, base + 8);
- outb(0, base + 9);
- outb(0, base + 10);
- outb(0xC0, base + 7);
+ outb(0x80, base[id] + 7);
+ outb(0, base[id] + 8);
+ outb(0, base[id] + 9);
+ outb(0, base[id] + 10);
+ outb(0xC0, base[id] + 7);
err = gpiochip_irqchip_add(&ws16c48gpio->chip, &ws16c48_irqchip, 0,
handle_edge_irq, IRQ_TYPE_NONE);
@@ -355,7 +356,7 @@ static int __init ws16c48_probe(struct platform_device *pdev)
goto err_gpiochip_remove;
}
- err = request_irq(irq, ws16c48_irq_handler, IRQF_SHARED, name,
+ err = request_irq(irq[id], ws16c48_irq_handler, IRQF_SHARED, name,
ws16c48gpio);
if (err) {
dev_err(dev, "IRQ handler registering failed (%d)\n", err);
@@ -369,9 +370,9 @@ err_gpiochip_remove:
return err;
}
-static int ws16c48_remove(struct platform_device *pdev)
+static int ws16c48_remove(struct device *dev, unsigned int id)
{
- struct ws16c48_gpio *const ws16c48gpio = platform_get_drvdata(pdev);
+ struct ws16c48_gpio *const ws16c48gpio = dev_get_drvdata(dev);
free_irq(ws16c48gpio->irq, ws16c48gpio);
gpiochip_remove(&ws16c48gpio->chip);
@@ -379,48 +380,15 @@ static int ws16c48_remove(struct platform_device *pdev)
return 0;
}
-static struct platform_device *ws16c48_device;
-
-static struct platform_driver ws16c48_driver = {
+static struct isa_driver ws16c48_driver = {
+ .probe = ws16c48_probe,
.driver = {
.name = "ws16c48"
},
.remove = ws16c48_remove
};
-static void __exit ws16c48_exit(void)
-{
- platform_device_unregister(ws16c48_device);
- platform_driver_unregister(&ws16c48_driver);
-}
-
-static int __init ws16c48_init(void)
-{
- int err;
-
- ws16c48_device = platform_device_alloc(ws16c48_driver.driver.name, -1);
- if (!ws16c48_device)
- return -ENOMEM;
-
- err = platform_device_add(ws16c48_device);
- if (err)
- goto err_platform_device;
-
- err = platform_driver_probe(&ws16c48_driver, ws16c48_probe);
- if (err)
- goto err_platform_driver;
-
- return 0;
-
-err_platform_driver:
- platform_device_del(ws16c48_device);
-err_platform_device:
- platform_device_put(ws16c48_device);
- return err;
-}
-
-module_init(ws16c48_init);
-module_exit(ws16c48_exit);
+module_isa_driver(ws16c48_driver, num_ws16c48);
MODULE_AUTHOR("William Breathitt Gray <vilhelm.gray@gmail.com>");
MODULE_DESCRIPTION("WinSystems WS16C48 GPIO driver");
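
The ws16c48 change replaces a single hand-registered platform device with the ISA bus driver model, so that several cards can be described through array module parameters and each gets its own probe call. Stripped to the pattern, and with made-up demo_* names, I/O extent and device count, that looks like:

#include <linux/device.h>
#include <linux/errno.h>
#include <linux/ioport.h>
#include <linux/isa.h>
#include <linux/module.h>
#include <linux/moduleparam.h>

#define DEMO_EXTENT     8       /* I/O ports per card (assumed) */
#define DEMO_MAX_DEV    4       /* arbitrary bound for this sketch */

static unsigned int base[DEMO_MAX_DEV];
static unsigned int num_demo;
module_param_array(base, uint, &num_demo, 0);
MODULE_PARM_DESC(base, "Demo ISA card base addresses");

static int demo_probe(struct device *dev, unsigned int id)
{
        /* Called once per entry in base=...; id indexes the arrays */
        if (!devm_request_region(dev, base[id], DEMO_EXTENT, dev_name(dev)))
                return -EBUSY;

        dev_info(dev, "demo card %u at 0x%X\n", id, base[id]);
        return 0;
}

static struct isa_driver demo_driver = {
        .probe = demo_probe,
        .driver = {
                .name = "isa-demo",
        },
};

/* Registers the driver and probes num_demo devices at load time */
module_isa_driver(demo_driver, num_demo);

MODULE_LICENSE("GPL");

Loading such a module with base=0x200,0x208 would then produce two probe calls with id 0 and 1, which is exactly how the converted ws16c48 driver handles multiple boards.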
diff --git a/drivers/gpio/gpio-xgene-sb.c b/drivers/gpio/gpio-xgene-sb.c
index 31cbcb84cfaf..033258634b8c 100644
--- a/drivers/gpio/gpio-xgene-sb.c
+++ b/drivers/gpio/gpio-xgene-sb.c
@@ -216,23 +216,10 @@ static int xgene_gpio_sb_domain_alloc(struct irq_domain *domain,
&parent_fwspec);
}
-static void xgene_gpio_sb_domain_free(struct irq_domain *domain,
- unsigned int virq,
- unsigned int nr_irqs)
-{
- struct irq_data *d;
- unsigned int i;
-
- for (i = 0; i < nr_irqs; i++) {
- d = irq_domain_get_irq_data(domain, virq + i);
- irq_domain_reset_irq_data(d);
- }
-}
-
static const struct irq_domain_ops xgene_gpio_sb_domain_ops = {
.translate = xgene_gpio_sb_domain_translate,
.alloc = xgene_gpio_sb_domain_alloc,
- .free = xgene_gpio_sb_domain_free,
+ .free = irq_domain_free_irqs_common,
.activate = xgene_gpio_sb_domain_activate,
.deactivate = xgene_gpio_sb_domain_deactivate,
};
diff --git a/drivers/gpio/gpio-xgene.c b/drivers/gpio/gpio-xgene.c
index 0dc916191689..40a8881c2ce8 100644
--- a/drivers/gpio/gpio-xgene.c
+++ b/drivers/gpio/gpio-xgene.c
@@ -17,7 +17,7 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
-#include <linux/module.h>
+#include <linux/acpi.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/io.h>
@@ -85,6 +85,17 @@ static void xgene_gpio_set(struct gpio_chip *gc, unsigned int offset, int val)
spin_unlock_irqrestore(&chip->lock, flags);
}
+static int xgene_gpio_get_direction(struct gpio_chip *gc, unsigned int offset)
+{
+ struct xgene_gpio *chip = gpiochip_get_data(gc);
+ unsigned long bank_offset, bit_offset;
+
+ bank_offset = GPIO_SET_DR_OFFSET + GPIO_BANK_OFFSET(offset);
+ bit_offset = GPIO_BIT_OFFSET(offset);
+
+ return !!(ioread32(chip->base + bank_offset) & BIT(bit_offset));
+}
+
static int xgene_gpio_dir_in(struct gpio_chip *gc, unsigned int offset)
{
struct xgene_gpio *chip = gpiochip_get_data(gc);
@@ -189,6 +200,7 @@ static int xgene_gpio_probe(struct platform_device *pdev)
spin_lock_init(&gpio->lock);
gpio->chip.parent = &pdev->dev;
+ gpio->chip.get_direction = xgene_gpio_get_direction;
gpio->chip.direction_input = xgene_gpio_dir_in;
gpio->chip.direction_output = xgene_gpio_dir_out;
gpio->chip.get = xgene_gpio_get;
@@ -216,19 +228,21 @@ static const struct of_device_id xgene_gpio_of_match[] = {
{ .compatible = "apm,xgene-gpio", },
{},
};
-MODULE_DEVICE_TABLE(of, xgene_gpio_of_match);
+
+#ifdef CONFIG_ACPI
+static const struct acpi_device_id xgene_gpio_acpi_match[] = {
+ { "APMC0D14", 0 },
+ { },
+};
+#endif
static struct platform_driver xgene_gpio_driver = {
.driver = {
.name = "xgene-gpio",
.of_match_table = xgene_gpio_of_match,
+ .acpi_match_table = ACPI_PTR(xgene_gpio_acpi_match),
.pm = XGENE_GPIO_PM_OPS,
},
.probe = xgene_gpio_probe,
};
-
-module_platform_driver(xgene_gpio_driver);
-
-MODULE_AUTHOR("Feng Kan <fkan@apm.com>");
-MODULE_DESCRIPTION("APM X-Gene GPIO driver");
-MODULE_LICENSE("GPL");
+builtin_platform_driver(xgene_gpio_driver);
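
gpio-xgene, like the zevio and zx drivers further down, can only be built into the kernel, so module_platform_driver() and the MODULE_* boilerplate are replaced with builtin_platform_driver(), and ACPI probing is added next to the OF match table. Roughly, a non-modular platform driver matching both worlds looks like the sketch below; the compatible string and ACPI ID are placeholders, not the real X-Gene identifiers.

#include <linux/acpi.h>
#include <linux/init.h>
#include <linux/of.h>
#include <linux/platform_device.h>

static int demo_probe(struct platform_device *pdev)
{
        dev_info(&pdev->dev, "probed\n");
        return 0;
}

static const struct of_device_id demo_of_match[] = {
        { .compatible = "vendor,demo-gpio" },   /* placeholder */
        { }
};

#ifdef CONFIG_ACPI
static const struct acpi_device_id demo_acpi_match[] = {
        { "DEMO0001", 0 },                      /* placeholder */
        { }
};
#endif

static struct platform_driver demo_driver = {
        .driver = {
                .name = "demo-gpio",
                .of_match_table = demo_of_match,
                /* ACPI_PTR() compiles to NULL when ACPI is disabled */
                .acpi_match_table = ACPI_PTR(demo_acpi_match),
        },
        .probe = demo_probe,
};

/* Like module_platform_driver(), but initcall-only: no exit path */
builtin_platform_driver(demo_driver);

Dropping MODULE_DEVICE_TABLE() in the hunk above is consistent with this: a builtin driver cannot be autoloaded, so the alias table has no user.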
diff --git a/drivers/gpio/gpio-xlp.c b/drivers/gpio/gpio-xlp.c
index aa5813d2deb1..1a33a19d95b9 100644
--- a/drivers/gpio/gpio-xlp.c
+++ b/drivers/gpio/gpio-xlp.c
@@ -85,7 +85,8 @@ enum {
XLP_GPIO_VARIANT_XLP316,
XLP_GPIO_VARIANT_XLP208,
XLP_GPIO_VARIANT_XLP980,
- XLP_GPIO_VARIANT_XLP532
+ XLP_GPIO_VARIANT_XLP532,
+ GPIO_VARIANT_VULCAN
};
struct xlp_gpio_priv {
@@ -285,6 +286,10 @@ static const struct of_device_id xlp_gpio_of_ids[] = {
.compatible = "netlogic,xlp532-gpio",
.data = (void *)XLP_GPIO_VARIANT_XLP532,
},
+ {
+ .compatible = "brcm,vulcan-gpio",
+ .data = (void *)GPIO_VARIANT_VULCAN,
+ },
{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, xlp_gpio_of_ids);
@@ -347,6 +352,7 @@ static int xlp_gpio_probe(struct platform_device *pdev)
break;
case XLP_GPIO_VARIANT_XLP980:
case XLP_GPIO_VARIANT_XLP532:
+ case GPIO_VARIANT_VULCAN:
priv->gpio_out_en = gpio_base + GPIO_9XX_OUTPUT_EN;
priv->gpio_paddrv = gpio_base + GPIO_9XX_PADDRV;
priv->gpio_intr_stat = gpio_base + GPIO_9XX_INT_STAT;
@@ -354,7 +360,12 @@ static int xlp_gpio_probe(struct platform_device *pdev)
priv->gpio_intr_pol = gpio_base + GPIO_9XX_INT_POL;
priv->gpio_intr_en = gpio_base + GPIO_9XX_INT_EN00;
- ngpio = (soc_type == XLP_GPIO_VARIANT_XLP980) ? 66 : 67;
+ if (soc_type == XLP_GPIO_VARIANT_XLP980)
+ ngpio = 66;
+ else if (soc_type == XLP_GPIO_VARIANT_XLP532)
+ ngpio = 67;
+ else
+ ngpio = 70;
break;
default:
dev_err(&pdev->dev, "Unknown Processor type!\n");
@@ -377,10 +388,14 @@ static int xlp_gpio_probe(struct platform_device *pdev)
gc->get = xlp_gpio_get;
spin_lock_init(&priv->lock);
- irq_base = irq_alloc_descs(-1, XLP_GPIO_IRQ_BASE, gc->ngpio, 0);
+ /* XLP has fixed IRQ range for GPIO interrupts */
+ if (soc_type == GPIO_VARIANT_VULCAN)
+ irq_base = irq_alloc_descs(-1, 0, gc->ngpio, 0);
+ else
+ irq_base = irq_alloc_descs(-1, XLP_GPIO_IRQ_BASE, gc->ngpio, 0);
if (irq_base < 0) {
dev_err(&pdev->dev, "Failed to allocate IRQ numbers\n");
- return -ENODEV;
+ return irq_base;
}
err = gpiochip_add_data(gc, priv);
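
The xlp hunk is a typical new-SoC-variant addition: the compatible string carries a variant identifier through of_device_id.data, and probe switches the register layout, GPIO count and IRQ base on it. Condensed to the pattern, with invented variant names and line counts:

#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>

enum demo_variant { DEMO_VARIANT_A, DEMO_VARIANT_B };

static const struct of_device_id demo_of_ids[] = {
        { .compatible = "vendor,demo-a", .data = (void *)DEMO_VARIANT_A },
        { .compatible = "vendor,demo-b", .data = (void *)DEMO_VARIANT_B },
        { /* sentinel */ },
};

static int demo_probe(struct platform_device *pdev)
{
        const struct of_device_id *of_id;
        unsigned int ngpio;

        of_id = of_match_device(demo_of_ids, &pdev->dev);
        if (!of_id)
                return -ENODEV;

        switch ((enum demo_variant)(uintptr_t)of_id->data) {
        case DEMO_VARIANT_A:
                ngpio = 32;     /* invented */
                break;
        case DEMO_VARIANT_B:
                ngpio = 70;     /* invented */
                break;
        default:
                return -EINVAL;
        }

        dev_info(&pdev->dev, "%u GPIO lines\n", ngpio);
        return 0;
}

The same mechanism is what lets "brcm,vulcan-gpio" reuse the XLP9xx register layout above while still getting its own ngpio and IRQ descriptor range.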
diff --git a/drivers/gpio/gpio-zevio.c b/drivers/gpio/gpio-zevio.c
index cda6d922be98..e23ef7b9451d 100644
--- a/drivers/gpio/gpio-zevio.c
+++ b/drivers/gpio/gpio-zevio.c
@@ -10,7 +10,7 @@
#include <linux/spinlock.h>
#include <linux/errno.h>
-#include <linux/module.h>
+#include <linux/init.h>
#include <linux/bitops.h>
#include <linux/io.h>
#include <linux/of_device.h>
@@ -203,32 +203,17 @@ static int zevio_gpio_probe(struct platform_device *pdev)
return 0;
}
-static int zevio_gpio_remove(struct platform_device *pdev)
-{
- struct zevio_gpio *controller = platform_get_drvdata(pdev);
-
- of_mm_gpiochip_remove(&controller->chip);
-
- return 0;
-}
-
static const struct of_device_id zevio_gpio_of_match[] = {
{ .compatible = "lsi,zevio-gpio", },
{ },
};
-MODULE_DEVICE_TABLE(of, zevio_gpio_of_match);
-
static struct platform_driver zevio_gpio_driver = {
.driver = {
.name = "gpio-zevio",
.of_match_table = zevio_gpio_of_match,
+ .suppress_bind_attrs = true,
},
.probe = zevio_gpio_probe,
- .remove = zevio_gpio_remove,
};
-module_platform_driver(zevio_gpio_driver);
-
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Fabian Vogt <fabian@ritter-vogt.de>");
-MODULE_DESCRIPTION("LSI ZEVIO SoC GPIO driver");
+builtin_platform_driver(zevio_gpio_driver);
diff --git a/drivers/gpio/gpio-zx.c b/drivers/gpio/gpio-zx.c
index 47c79fa65670..93de8be0d885 100644
--- a/drivers/gpio/gpio-zx.c
+++ b/drivers/gpio/gpio-zx.c
@@ -1,4 +1,8 @@
/*
+ * ZTE ZX296702 GPIO driver
+ *
+ * Author: Jun Nie <jun.nie@linaro.org>
+ *
* Copyright (C) 2015 Linaro Ltd.
*
* This program is free software; you can redistribute it and/or modify
@@ -10,7 +14,7 @@
#include <linux/errno.h>
#include <linux/gpio/driver.h>
#include <linux/irqchip/chained_irq.h>
-#include <linux/module.h>
+#include <linux/init.h>
#include <linux/of.h>
#include <linux/pinctrl/consumer.h>
#include <linux/platform_device.h>
@@ -282,7 +286,6 @@ static const struct of_device_id zx_gpio_match[] = {
},
{ },
};
-MODULE_DEVICE_TABLE(of, zx_gpio_match);
static struct platform_driver zx_gpio_driver = {
.probe = zx_gpio_probe,
@@ -291,9 +294,4 @@ static struct platform_driver zx_gpio_driver = {
.of_match_table = of_match_ptr(zx_gpio_match),
},
};
-
-module_platform_driver(zx_gpio_driver)
-
-MODULE_AUTHOR("Jun Nie <jun.nie@linaro.org>");
-MODULE_DESCRIPTION("ZTE ZX296702 GPIO driver");
-MODULE_LICENSE("GPL");
+builtin_platform_driver(zx_gpio_driver)
diff --git a/drivers/gpio/gpio-zynq.c b/drivers/gpio/gpio-zynq.c
index 66d3d247d76d..75c6355b018d 100644
--- a/drivers/gpio/gpio-zynq.c
+++ b/drivers/gpio/gpio-zynq.c
@@ -713,7 +713,7 @@ static int zynq_gpio_probe(struct platform_device *pdev)
pm_runtime_enable(&pdev->dev);
ret = pm_runtime_get_sync(&pdev->dev);
if (ret < 0)
- return ret;
+ goto err_pm_dis;
/* report a bug if gpio chip registration fails */
ret = gpiochip_add_data(chip, gpio);
@@ -745,6 +745,8 @@ err_rm_gpiochip:
gpiochip_remove(chip);
err_pm_put:
pm_runtime_put(&pdev->dev);
+err_pm_dis:
+ pm_runtime_disable(&pdev->dev);
return ret;
}
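
The zynq fix is purely about probe error unwinding: once pm_runtime_enable() has run, every failure path, including a failed pm_runtime_get_sync(), must reach pm_runtime_disable() before returning, or the device is left with an unbalanced enable count. The general shape, with demo_register() as a hypothetical stand-in for the gpiochip/IRQ setup that can fail:

#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

/* Stand-in for registration work that may fail */
static int demo_register(struct platform_device *pdev)
{
        return 0;
}

static int demo_probe(struct platform_device *pdev)
{
        int ret;

        pm_runtime_enable(&pdev->dev);

        ret = pm_runtime_get_sync(&pdev->dev);
        if (ret < 0)
                goto err_pm_dis;        /* the case the zynq patch adds */

        ret = demo_register(pdev);
        if (ret)
                goto err_pm_put;

        pm_runtime_put(&pdev->dev);
        return 0;

err_pm_put:
        pm_runtime_put(&pdev->dev);
err_pm_dis:
        pm_runtime_disable(&pdev->dev);
        return ret;
}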
diff --git a/drivers/gpio/gpiolib-of.c b/drivers/gpio/gpiolib-of.c
index 42a4bb7cf49a..d22dcc38179d 100644
--- a/drivers/gpio/gpiolib-of.c
+++ b/drivers/gpio/gpiolib-of.c
@@ -196,21 +196,68 @@ static struct gpio_desc *of_parse_own_gpio(struct device_node *np,
}
/**
+ * of_gpiochip_set_names() - set up the names of the lines
+ * @chip: GPIO chip whose lines should be named, if possible
+ */
+static void of_gpiochip_set_names(struct gpio_chip *gc)
+{
+ struct gpio_device *gdev = gc->gpiodev;
+ struct device_node *np = gc->of_node;
+ int i;
+ int nstrings;
+
+ nstrings = of_property_count_strings(np, "gpio-line-names");
+ if (nstrings <= 0)
+ /* Line names not present */
+ return;
+
+ /* This is normally not what you want */
+ if (gdev->ngpio != nstrings)
+ dev_info(&gdev->dev, "gpio-line-names specifies %d line "
+ "names but there are %d lines on the chip\n",
+ nstrings, gdev->ngpio);
+
+ /*
+ * Make sure to not index beyond the end of the number of descriptors
+ * of the GPIO device.
+ */
+ for (i = 0; i < gdev->ngpio; i++) {
+ const char *name;
+ int ret;
+
+ ret = of_property_read_string_index(np,
+ "gpio-line-names",
+ i,
+ &name);
+ if (ret) {
+ if (ret != -ENODATA)
+ dev_err(&gdev->dev,
+ "unable to name line %d: %d\n",
+ i, ret);
+ break;
+ }
+ gdev->descs[i].name = name;
+ }
+}
+
+/**
* of_gpiochip_scan_gpios - Scan gpio-controller for gpio definitions
* @chip: gpio chip to act on
*
* This is only used by of_gpiochip_add to request/set GPIO initial
* configuration.
+ * It returns 0 on success or a negative error code on failure.
*/
-static void of_gpiochip_scan_gpios(struct gpio_chip *chip)
+static int of_gpiochip_scan_gpios(struct gpio_chip *chip)
{
struct gpio_desc *desc = NULL;
struct device_node *np;
const char *name;
enum gpio_lookup_flags lflags;
enum gpiod_flags dflags;
+ int ret;
- for_each_child_of_node(chip->of_node, np) {
+ for_each_available_child_of_node(chip->of_node, np) {
if (!of_property_read_bool(np, "gpio-hog"))
continue;
@@ -218,9 +265,12 @@ static void of_gpiochip_scan_gpios(struct gpio_chip *chip)
if (IS_ERR(desc))
continue;
- if (gpiod_hog(desc, name, lflags, dflags))
- continue;
+ ret = gpiod_hog(desc, name, lflags, dflags);
+ if (ret < 0)
+ return ret;
}
+
+ return 0;
}
/**
@@ -440,11 +490,13 @@ int of_gpiochip_add(struct gpio_chip *chip)
if (status)
return status;
- of_node_get(chip->of_node);
+ /* If the chip defines names itself, these take precedence */
+ if (!chip->names)
+ of_gpiochip_set_names(chip);
- of_gpiochip_scan_gpios(chip);
+ of_node_get(chip->of_node);
- return 0;
+ return of_gpiochip_scan_gpios(chip);
}
void of_gpiochip_remove(struct gpio_chip *chip)
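
of_gpiochip_set_names() fills the per-line descriptor names from the devicetree "gpio-line-names" string array, but only when the driver has not already supplied names: a populated chip->names takes precedence. For reference, the driver-side alternative is just a static array; the names below are invented for illustration.

#include <linux/gpio/driver.h>
#include <linux/kernel.h>

/* Purely illustrative line names for a 4-line chip */
static const char * const demo_line_names[] = {
        "led-red", "led-green", "button", "unused",
};

static void demo_setup_chip(struct gpio_chip *chip)
{
        chip->ngpio = ARRAY_SIZE(demo_line_names);
        /*
         * With .names set, of_gpiochip_add() skips of_gpiochip_set_names()
         * and any DT gpio-line-names property is ignored for this chip.
         */
        chip->names = demo_line_names;
}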
diff --git a/drivers/gpio/gpiolib.c b/drivers/gpio/gpiolib.c
index b747c76fd2b1..d407f904a31c 100644
--- a/drivers/gpio/gpiolib.c
+++ b/drivers/gpio/gpiolib.c
@@ -622,14 +622,31 @@ int gpiochip_add_data(struct gpio_chip *chip, void *data)
struct gpio_desc *desc = &gdev->descs[i];
desc->gdev = gdev;
-
- /* REVISIT: most hardware initializes GPIOs as inputs (often
- * with pullups enabled) so power usage is minimized. Linux
- * code should set the gpio direction first thing; but until
- * it does, and in case chip->get_direction is not set, we may
- * expose the wrong direction in sysfs.
+ /*
+ * REVISIT: most hardware initializes GPIOs as inputs
+ * (often with pullups enabled) so power usage is
+ * minimized. Linux code should set the gpio direction
+ * first thing; but until it does, and in case
+ * chip->get_direction is not set, we may expose the
+ * wrong direction in sysfs.
*/
- desc->flags = !chip->direction_input ? (1 << FLAG_IS_OUT) : 0;
+
+ if (chip->get_direction) {
+ /*
+ * If we have .get_direction, set up the initial
+ * direction flag from the hardware.
+ */
+ int dir = chip->get_direction(chip, i);
+
+ if (!dir)
+ set_bit(FLAG_IS_OUT, &desc->flags);
+ } else if (!chip->direction_input) {
+ /*
+ * If the chip lacks the .direction_input callback
+ * we logically assume all lines are outputs.
+ */
+ set_bit(FLAG_IS_OUT, &desc->flags);
+ }
}
spin_unlock_irqrestore(&gpio_lock, flags);
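
With this hunk, gpiolib seeds the FLAG_IS_OUT bit from the chip's .get_direction() callback at registration time instead of guessing from the presence of .direction_input(). The callback contract is small: return 0 for an output line, 1 for an input line, or a negative errno. The xgene hunk above adds exactly such a callback; a generic sketch against a hypothetical memory-mapped direction register:

#include <linux/bitops.h>
#include <linux/gpio/driver.h>
#include <linux/io.h>

#define DEMO_DIR_REG    0x04    /* hypothetical: bit set means input */

struct demo_gpio {
        struct gpio_chip chip;
        void __iomem *base;     /* assumed ioremap()ed in probe */
};

static int demo_gpio_get_direction(struct gpio_chip *gc, unsigned int offset)
{
        struct demo_gpio *demo = gpiochip_get_data(gc);

        /* 0 = output, 1 = input, matching the gpiolib convention */
        return !!(readl(demo->base + DEMO_DIR_REG) & BIT(offset));
}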
@@ -1547,8 +1564,8 @@ EXPORT_SYMBOL_GPL(gpiod_direction_input);
static int _gpiod_direction_output_raw(struct gpio_desc *desc, int value)
{
- struct gpio_chip *chip;
- int status = -EINVAL;
+ struct gpio_chip *gc = desc->gdev->chip;
+ int ret;
/* GPIOs used for IRQs shall not be set as output */
if (test_bit(FLAG_USED_AS_IRQ, &desc->flags)) {
@@ -1558,28 +1575,50 @@ static int _gpiod_direction_output_raw(struct gpio_desc *desc, int value)
return -EIO;
}
- /* Open drain pin should not be driven to 1 */
- if (value && test_bit(FLAG_OPEN_DRAIN, &desc->flags))
- return gpiod_direction_input(desc);
-
- /* Open source pin should not be driven to 0 */
- if (!value && test_bit(FLAG_OPEN_SOURCE, &desc->flags))
- return gpiod_direction_input(desc);
+ if (test_bit(FLAG_OPEN_DRAIN, &desc->flags)) {
+ /* First see if we can enable open drain in hardware */
+ if (gc->set_single_ended) {
+ ret = gc->set_single_ended(gc, gpio_chip_hwgpio(desc),
+ LINE_MODE_OPEN_DRAIN);
+ if (!ret)
+ goto set_output_value;
+ }
+ /* Emulate open drain by not actively driving the line high */
+ if (value)
+ return gpiod_direction_input(desc);
+ }
+ else if (test_bit(FLAG_OPEN_SOURCE, &desc->flags)) {
+ if (gc->set_single_ended) {
+ ret = gc->set_single_ended(gc, gpio_chip_hwgpio(desc),
+ LINE_MODE_OPEN_SOURCE);
+ if (!ret)
+ goto set_output_value;
+ }
+ /* Emulate open source by not actively driving the line low */
+ if (!value)
+ return gpiod_direction_input(desc);
+ } else {
+ /* Make sure to disable open drain/source hardware, if any */
+ if (gc->set_single_ended)
+ gc->set_single_ended(gc,
+ gpio_chip_hwgpio(desc),
+ LINE_MODE_PUSH_PULL);
+ }
- chip = desc->gdev->chip;
- if (!chip->set || !chip->direction_output) {
+set_output_value:
+ if (!gc->set || !gc->direction_output) {
gpiod_warn(desc,
"%s: missing set() or direction_output() operations\n",
__func__);
return -EIO;
}
- status = chip->direction_output(chip, gpio_chip_hwgpio(desc), value);
- if (status == 0)
+ ret = gc->direction_output(gc, gpio_chip_hwgpio(desc), value);
+ if (!ret)
set_bit(FLAG_IS_OUT, &desc->flags);
trace_gpio_value(desc_to_gpio(desc), 0, value);
- trace_gpio_direction(desc_to_gpio(desc), 0, status);
- return status;
+ trace_gpio_direction(desc_to_gpio(desc), 0, ret);
+ return ret;
}
/**
@@ -1841,10 +1880,10 @@ static void gpio_chip_set_multiple(struct gpio_chip *chip,
}
}
-static void gpiod_set_array_value_priv(bool raw, bool can_sleep,
- unsigned int array_size,
- struct gpio_desc **desc_array,
- int *value_array)
+void gpiod_set_array_value_complex(bool raw, bool can_sleep,
+ unsigned int array_size,
+ struct gpio_desc **desc_array,
+ int *value_array)
{
int i = 0;
@@ -1950,8 +1989,8 @@ void gpiod_set_raw_array_value(unsigned int array_size,
{
if (!desc_array)
return;
- gpiod_set_array_value_priv(true, false, array_size, desc_array,
- value_array);
+ gpiod_set_array_value_complex(true, false, array_size, desc_array,
+ value_array);
}
EXPORT_SYMBOL_GPL(gpiod_set_raw_array_value);
@@ -1972,8 +2011,8 @@ void gpiod_set_array_value(unsigned int array_size,
{
if (!desc_array)
return;
- gpiod_set_array_value_priv(false, false, array_size, desc_array,
- value_array);
+ gpiod_set_array_value_complex(false, false, array_size, desc_array,
+ value_array);
}
EXPORT_SYMBOL_GPL(gpiod_set_array_value);
@@ -1998,13 +2037,22 @@ EXPORT_SYMBOL_GPL(gpiod_cansleep);
*/
int gpiod_to_irq(const struct gpio_desc *desc)
{
- struct gpio_chip *chip;
- int offset;
+ struct gpio_chip *chip;
+ int offset;
VALIDATE_DESC(desc);
chip = desc->gdev->chip;
offset = gpio_chip_hwgpio(desc);
- return chip->to_irq ? chip->to_irq(chip, offset) : -ENXIO;
+ if (chip->to_irq) {
+ int retirq = chip->to_irq(chip, offset);
+
+ /* Zero means NO_IRQ */
+ if (!retirq)
+ return -ENXIO;
+
+ return retirq;
+ }
+ return -ENXIO;
}
EXPORT_SYMBOL_GPL(gpiod_to_irq);
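
The gpiod_to_irq() change closes a corner case: a .to_irq() implementation may hand back 0, which on most platforms means "no IRQ", and without the check a consumer would happily pass that 0 to request_irq(). Consumer code stays as it always was and only needs to test for negative values; a hedged sketch with a hypothetical device and handler:

#include <linux/device.h>
#include <linux/gpio/consumer.h>
#include <linux/interrupt.h>

static irqreturn_t demo_isr(int irq, void *data)
{
        return IRQ_HANDLED;
}

static int demo_setup_irq(struct device *dev, struct gpio_desc *gpiod)
{
        int irq;

        irq = gpiod_to_irq(gpiod);
        if (irq < 0)            /* now also covers chips that return 0 */
                return irq;

        return devm_request_irq(dev, irq, demo_isr,
                                IRQF_TRIGGER_FALLING, "demo", NULL);
}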
@@ -2176,8 +2224,8 @@ void gpiod_set_raw_array_value_cansleep(unsigned int array_size,
might_sleep_if(extra_checks);
if (!desc_array)
return;
- gpiod_set_array_value_priv(true, true, array_size, desc_array,
- value_array);
+ gpiod_set_array_value_complex(true, true, array_size, desc_array,
+ value_array);
}
EXPORT_SYMBOL_GPL(gpiod_set_raw_array_value_cansleep);
@@ -2199,8 +2247,8 @@ void gpiod_set_array_value_cansleep(unsigned int array_size,
might_sleep_if(extra_checks);
if (!desc_array)
return;
- gpiod_set_array_value_priv(false, true, array_size, desc_array,
- value_array);
+ gpiod_set_array_value_complex(false, true, array_size, desc_array,
+ value_array);
}
EXPORT_SYMBOL_GPL(gpiod_set_array_value_cansleep);
@@ -2726,15 +2774,16 @@ int gpiod_hog(struct gpio_desc *desc, const char *name,
local_desc = gpiochip_request_own_desc(chip, hwnum, name);
if (IS_ERR(local_desc)) {
- pr_err("requesting hog GPIO %s (chip %s, offset %d) failed\n",
- name, chip->label, hwnum);
- return PTR_ERR(local_desc);
+ status = PTR_ERR(local_desc);
+ pr_err("requesting hog GPIO %s (chip %s, offset %d) failed, %d\n",
+ name, chip->label, hwnum, status);
+ return status;
}
status = gpiod_configure_flags(desc, name, dflags);
if (status < 0) {
- pr_err("setup of hog GPIO %s (chip %s, offset %d) failed\n",
- name, chip->label, hwnum);
+ pr_err("setup of hog GPIO %s (chip %s, offset %d) failed, %d\n",
+ name, chip->label, hwnum, status);
gpiochip_free_own_desc(desc);
return status;
}
diff --git a/drivers/gpio/gpiolib.h b/drivers/gpio/gpiolib.h
index e30e5fdb1214..2d9ea5e0cab3 100644
--- a/drivers/gpio/gpiolib.h
+++ b/drivers/gpio/gpiolib.h
@@ -141,6 +141,10 @@ struct gpio_desc *of_get_named_gpiod_flags(struct device_node *np,
const char *list_name, int index, enum of_gpio_flags *flags);
struct gpio_desc *gpiochip_get_desc(struct gpio_chip *chip, u16 hwnum);
+void gpiod_set_array_value_complex(bool raw, bool can_sleep,
+ unsigned int array_size,
+ struct gpio_desc **desc_array,
+ int *value_array);
extern struct spinlock gpio_lock;
extern struct list_head gpio_devices;
diff --git a/drivers/gpu/drm/Makefile b/drivers/gpu/drm/Makefile
index 2bd3e5aa43c6..be43afb08c69 100644
--- a/drivers/gpu/drm/Makefile
+++ b/drivers/gpu/drm/Makefile
@@ -23,7 +23,7 @@ drm-$(CONFIG_AGP) += drm_agpsupport.o
drm_kms_helper-y := drm_crtc_helper.o drm_dp_helper.o drm_probe_helper.o \
drm_plane_helper.o drm_dp_mst_topology.o drm_atomic_helper.o \
- drm_kms_helper_common.o
+ drm_kms_helper_common.o drm_dp_dual_mode_helper.o
drm_kms_helper-$(CONFIG_DRM_LOAD_EDID_FIRMWARE) += drm_edid_load.o
drm_kms_helper-$(CONFIG_DRM_FBDEV_EMULATION) += drm_fb_helper.o
diff --git a/drivers/gpu/drm/amd/acp/Kconfig b/drivers/gpu/drm/amd/acp/Kconfig
index ca77ec10147c..e503e3d6d920 100644
--- a/drivers/gpu/drm/amd/acp/Kconfig
+++ b/drivers/gpu/drm/amd/acp/Kconfig
@@ -2,6 +2,7 @@ menu "ACP (Audio CoProcessor) Configuration"
config DRM_AMD_ACP
bool "Enable AMD Audio CoProcessor IP support"
+ depends on DRM_AMDGPU
select MFD_CORE
select PM_GENERIC_DOMAINS if PM
help
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
index 2a009c398dcb..992f00b65be4 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
@@ -602,6 +602,8 @@ int amdgpu_sync_wait(struct amdgpu_sync *sync);
void amdgpu_sync_free(struct amdgpu_sync *sync);
int amdgpu_sync_init(void);
void amdgpu_sync_fini(void);
+int amdgpu_fence_slab_init(void);
+void amdgpu_fence_slab_fini(void);
/*
* GART structures, functions & helpers
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
index 60a0c9ac11b2..cb07da41152b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_connectors.c
@@ -194,12 +194,12 @@ int amdgpu_connector_get_monitor_bpc(struct drm_connector *connector)
bpc = 8;
DRM_DEBUG("%s: HDMI deep color 10 bpc exceeds max tmds clock. Using %d bpc.\n",
connector->name, bpc);
- } else if (bpc > 8) {
- /* max_tmds_clock missing, but hdmi spec mandates it for deep color. */
- DRM_DEBUG("%s: Required max tmds clock for HDMI deep color missing. Using 8 bpc.\n",
- connector->name);
- bpc = 8;
}
+ } else if (bpc > 8) {
+ /* max_tmds_clock missing, but hdmi spec mandates it for deep color. */
+ DRM_DEBUG("%s: Required max tmds clock for HDMI deep color missing. Using 8 bpc.\n",
+ connector->name);
+ bpc = 8;
}
}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 1dab5f2b725b..f888c015f76c 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
@@ -50,9 +50,11 @@
* KMS wrapper.
* - 3.0.0 - initial driver
* - 3.1.0 - allow reading more status registers (GRBM, SRBM, SDMA, CP)
+ * - 3.2.0 - GFX8: Uses EOP_TC_WB_ACTION_EN, so UMDs don't have to do the same
+ * at the end of IBs.
*/
#define KMS_DRIVER_MAJOR 3
-#define KMS_DRIVER_MINOR 1
+#define KMS_DRIVER_MINOR 2
#define KMS_DRIVER_PATCHLEVEL 0
int amdgpu_vram_limit = 0;
@@ -279,14 +281,26 @@ static const struct pci_device_id pciidlist[] = {
{0x1002, 0x98E4, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_STONEY|AMD_IS_APU},
/* Polaris11 */
{0x1002, 0x67E0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS11},
- {0x1002, 0x67E1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS11},
+ {0x1002, 0x67E3, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS11},
{0x1002, 0x67E8, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS11},
- {0x1002, 0x67E9, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS11},
{0x1002, 0x67EB, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS11},
+ {0x1002, 0x67EF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS11},
{0x1002, 0x67FF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS11},
+ {0x1002, 0x67E1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS11},
+ {0x1002, 0x67E7, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS11},
+ {0x1002, 0x67E9, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS11},
/* Polaris10 */
{0x1002, 0x67C0, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
+ {0x1002, 0x67C1, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
+ {0x1002, 0x67C2, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
+ {0x1002, 0x67C4, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
+ {0x1002, 0x67C7, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
{0x1002, 0x67DF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
+ {0x1002, 0x67C8, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
+ {0x1002, 0x67C9, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
+ {0x1002, 0x67CA, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
+ {0x1002, 0x67CC, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
+ {0x1002, 0x67CF, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_POLARIS10},
{0, 0, 0}
};
@@ -563,9 +577,12 @@ static struct pci_driver amdgpu_kms_pci_driver = {
.driver.pm = &amdgpu_pm_ops,
};
+
+
static int __init amdgpu_init(void)
{
amdgpu_sync_init();
+ amdgpu_fence_slab_init();
if (vgacon_text_force()) {
DRM_ERROR("VGACON disables amdgpu kernel modesetting.\n");
return -EINVAL;
@@ -576,7 +593,6 @@ static int __init amdgpu_init(void)
driver->driver_features |= DRIVER_MODESET;
driver->num_ioctls = amdgpu_max_kms_ioctl;
amdgpu_register_atpx_handler();
-
/* let modprobe override vga console setting */
return drm_pci_init(driver, pdriver);
}
@@ -587,6 +603,7 @@ static void __exit amdgpu_exit(void)
drm_pci_exit(driver, pdriver);
amdgpu_unregister_atpx_handler();
amdgpu_sync_fini();
+ amdgpu_fence_slab_fini();
}
module_init(amdgpu_init);
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
index ba9c04283d01..d1558768cfb7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_fence.c
@@ -55,8 +55,21 @@ struct amdgpu_fence {
};
static struct kmem_cache *amdgpu_fence_slab;
-static atomic_t amdgpu_fence_slab_ref = ATOMIC_INIT(0);
+int amdgpu_fence_slab_init(void)
+{
+ amdgpu_fence_slab = kmem_cache_create(
+ "amdgpu_fence", sizeof(struct amdgpu_fence), 0,
+ SLAB_HWCACHE_ALIGN, NULL);
+ if (!amdgpu_fence_slab)
+ return -ENOMEM;
+ return 0;
+}
+
+void amdgpu_fence_slab_fini(void)
+{
+ kmem_cache_destroy(amdgpu_fence_slab);
+}
/*
* Cast helper
*/
@@ -396,13 +409,6 @@ int amdgpu_fence_driver_init_ring(struct amdgpu_ring *ring,
*/
int amdgpu_fence_driver_init(struct amdgpu_device *adev)
{
- if (atomic_inc_return(&amdgpu_fence_slab_ref) == 1) {
- amdgpu_fence_slab = kmem_cache_create(
- "amdgpu_fence", sizeof(struct amdgpu_fence), 0,
- SLAB_HWCACHE_ALIGN, NULL);
- if (!amdgpu_fence_slab)
- return -ENOMEM;
- }
if (amdgpu_debugfs_fence_init(adev))
dev_err(adev->dev, "fence debugfs file creation failed\n");
@@ -437,13 +443,10 @@ void amdgpu_fence_driver_fini(struct amdgpu_device *adev)
amd_sched_fini(&ring->sched);
del_timer_sync(&ring->fence_drv.fallback_timer);
for (j = 0; j <= ring->fence_drv.num_fences_mask; ++j)
- fence_put(ring->fence_drv.fences[i]);
+ fence_put(ring->fence_drv.fences[j]);
kfree(ring->fence_drv.fences);
ring->fence_drv.initialized = false;
}
-
- if (atomic_dec_and_test(&amdgpu_fence_slab_ref))
- kmem_cache_destroy(amdgpu_fence_slab);
}
/**
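
The fence code used an atomic refcount to create and destroy the "amdgpu_fence" slab from per-device init, which is awkward and can race when several devices initialize concurrently; binding the cache to module init/exit is the usual pattern, and that is what amdgpu_fence_slab_init()/_fini() now do from amdgpu_drv.c. In isolation the pattern is simply (demo_fence being a stand-in object type):

#include <linux/module.h>
#include <linux/slab.h>

struct demo_fence {             /* stand-in for the real fence struct */
        u64 seqno;
};

static struct kmem_cache *demo_fence_slab;

static int __init demo_init(void)
{
        demo_fence_slab = kmem_cache_create("demo_fence",
                                            sizeof(struct demo_fence), 0,
                                            SLAB_HWCACHE_ALIGN, NULL);
        if (!demo_fence_slab)
                return -ENOMEM;
        return 0;
}

static void __exit demo_exit(void)
{
        /* Safe even if nothing was ever allocated from the cache */
        kmem_cache_destroy(demo_fence_slab);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");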
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
index 9f4a45cd2aab..32fa7b7913f7 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mn.c
@@ -232,7 +232,10 @@ static struct amdgpu_mn *amdgpu_mn_get(struct amdgpu_device *adev)
int r;
mutex_lock(&adev->mn_lock);
- down_write(&mm->mmap_sem);
+ if (down_write_killable(&mm->mmap_sem)) {
+ mutex_unlock(&adev->mn_lock);
+ return ERR_PTR(-EINTR);
+ }
hash_for_each_possible(adev->mn_hash, rmn, node, (unsigned long)mm)
if (rmn->mm == mm)
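
down_write_killable() lets a task that is blocked waiting for mmap_sem be terminated by a fatal signal instead of hanging; the price is that the caller must treat the lock like any other operation that can fail and unwind whatever it already holds, exactly what amdgpu_mn_get() now does with adev->mn_lock. The bare pattern:

#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/rwsem.h>

static DEFINE_MUTEX(demo_lock);

/* Sketch: outer mutex, then a killable write lock on an rwsem */
static int demo_lock_both(struct rw_semaphore *sem)
{
        mutex_lock(&demo_lock);

        if (down_write_killable(sem)) {
                /* Interrupted by a fatal signal: drop what we hold */
                mutex_unlock(&demo_lock);
                return -EINTR;
        }

        /* ... critical section ... */

        up_write(sem);
        mutex_unlock(&demo_lock);
        return 0;
}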
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index ea708cb94862..9f36ed30ba11 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -53,6 +53,18 @@
/* Special value that no flush is necessary */
#define AMDGPU_VM_NO_FLUSH (~0ll)
+/* Local structure. Encapsulate some VM table update parameters to reduce
+ * the number of function parameters
+ */
+struct amdgpu_vm_update_params {
+ /* address where to copy page table entries from */
+ uint64_t src;
+ /* DMA addresses to use for mapping */
+ dma_addr_t *pages_addr;
+ /* indirect buffer to fill with commands */
+ struct amdgpu_ib *ib;
+};
+
/**
* amdgpu_vm_num_pde - return the number of page directory entries
*
@@ -389,9 +401,7 @@ struct amdgpu_bo_va *amdgpu_vm_bo_find(struct amdgpu_vm *vm,
* amdgpu_vm_update_pages - helper to call the right asic function
*
* @adev: amdgpu_device pointer
- * @src: address where to copy page table entries from
- * @pages_addr: DMA addresses to use for mapping
- * @ib: indirect buffer to fill with commands
+ * @vm_update_params: see amdgpu_vm_update_params definition
* @pe: addr of the page entry
* @addr: dst addr to write into pe
* @count: number of page entries to update
@@ -402,29 +412,29 @@ struct amdgpu_bo_va *amdgpu_vm_bo_find(struct amdgpu_vm *vm,
* to setup the page table using the DMA.
*/
static void amdgpu_vm_update_pages(struct amdgpu_device *adev,
- uint64_t src,
- dma_addr_t *pages_addr,
- struct amdgpu_ib *ib,
+ struct amdgpu_vm_update_params
+ *vm_update_params,
uint64_t pe, uint64_t addr,
unsigned count, uint32_t incr,
uint32_t flags)
{
trace_amdgpu_vm_set_page(pe, addr, count, incr, flags);
- if (src) {
- src += (addr >> 12) * 8;
- amdgpu_vm_copy_pte(adev, ib, pe, src, count);
+ if (vm_update_params->src) {
+ amdgpu_vm_copy_pte(adev, vm_update_params->ib,
+ pe, (vm_update_params->src + (addr >> 12) * 8), count);
- } else if (pages_addr) {
- amdgpu_vm_write_pte(adev, ib, pages_addr, pe, addr,
- count, incr, flags);
+ } else if (vm_update_params->pages_addr) {
+ amdgpu_vm_write_pte(adev, vm_update_params->ib,
+ vm_update_params->pages_addr,
+ pe, addr, count, incr, flags);
} else if (count < 3) {
- amdgpu_vm_write_pte(adev, ib, NULL, pe, addr,
+ amdgpu_vm_write_pte(adev, vm_update_params->ib, NULL, pe, addr,
count, incr, flags);
} else {
- amdgpu_vm_set_pte_pde(adev, ib, pe, addr,
+ amdgpu_vm_set_pte_pde(adev, vm_update_params->ib, pe, addr,
count, incr, flags);
}
}
@@ -444,10 +454,12 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
struct amdgpu_ring *ring;
struct fence *fence = NULL;
struct amdgpu_job *job;
+ struct amdgpu_vm_update_params vm_update_params;
unsigned entries;
uint64_t addr;
int r;
+ memset(&vm_update_params, 0, sizeof(vm_update_params));
ring = container_of(vm->entity.sched, struct amdgpu_ring, sched);
r = reservation_object_reserve_shared(bo->tbo.resv);
@@ -465,7 +477,8 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
if (r)
goto error;
- amdgpu_vm_update_pages(adev, 0, NULL, &job->ibs[0], addr, 0, entries,
+ vm_update_params.ib = &job->ibs[0];
+ amdgpu_vm_update_pages(adev, &vm_update_params, addr, 0, entries,
0, 0);
amdgpu_ring_pad_ib(ring, &job->ibs[0]);
@@ -538,11 +551,12 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
uint64_t last_pde = ~0, last_pt = ~0;
unsigned count = 0, pt_idx, ndw;
struct amdgpu_job *job;
- struct amdgpu_ib *ib;
+ struct amdgpu_vm_update_params vm_update_params;
struct fence *fence = NULL;
int r;
+ memset(&vm_update_params, 0, sizeof(vm_update_params));
ring = container_of(vm->entity.sched, struct amdgpu_ring, sched);
/* padding, etc. */
@@ -555,7 +569,7 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
if (r)
return r;
- ib = &job->ibs[0];
+ vm_update_params.ib = &job->ibs[0];
/* walk over the address space and update the page directory */
for (pt_idx = 0; pt_idx <= vm->max_pde_used; ++pt_idx) {
@@ -575,7 +589,7 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
((last_pt + incr * count) != pt)) {
if (count) {
- amdgpu_vm_update_pages(adev, 0, NULL, ib,
+ amdgpu_vm_update_pages(adev, &vm_update_params,
last_pde, last_pt,
count, incr,
AMDGPU_PTE_VALID);
@@ -590,14 +604,15 @@ int amdgpu_vm_update_page_directory(struct amdgpu_device *adev,
}
if (count)
- amdgpu_vm_update_pages(adev, 0, NULL, ib, last_pde, last_pt,
- count, incr, AMDGPU_PTE_VALID);
+ amdgpu_vm_update_pages(adev, &vm_update_params,
+ last_pde, last_pt,
+ count, incr, AMDGPU_PTE_VALID);
- if (ib->length_dw != 0) {
- amdgpu_ring_pad_ib(ring, ib);
+ if (vm_update_params.ib->length_dw != 0) {
+ amdgpu_ring_pad_ib(ring, vm_update_params.ib);
amdgpu_sync_resv(adev, &job->sync, pd->tbo.resv,
AMDGPU_FENCE_OWNER_VM);
- WARN_ON(ib->length_dw > ndw);
+ WARN_ON(vm_update_params.ib->length_dw > ndw);
r = amdgpu_job_submit(job, ring, &vm->entity,
AMDGPU_FENCE_OWNER_VM, &fence);
if (r)
@@ -623,18 +638,15 @@ error_free:
* amdgpu_vm_frag_ptes - add fragment information to PTEs
*
* @adev: amdgpu_device pointer
- * @src: address where to copy page table entries from
- * @pages_addr: DMA addresses to use for mapping
- * @ib: IB for the update
+ * @vm_update_params: see amdgpu_vm_update_params definition
* @pe_start: first PTE to handle
* @pe_end: last PTE to handle
* @addr: addr those PTEs should point to
* @flags: hw mapping flags
*/
static void amdgpu_vm_frag_ptes(struct amdgpu_device *adev,
- uint64_t src,
- dma_addr_t *pages_addr,
- struct amdgpu_ib *ib,
+ struct amdgpu_vm_update_params
+ *vm_update_params,
uint64_t pe_start, uint64_t pe_end,
uint64_t addr, uint32_t flags)
{
@@ -671,11 +683,11 @@ static void amdgpu_vm_frag_ptes(struct amdgpu_device *adev,
return;
/* system pages are non continuously */
- if (src || pages_addr || !(flags & AMDGPU_PTE_VALID) ||
- (frag_start >= frag_end)) {
+ if (vm_update_params->src || vm_update_params->pages_addr ||
+ !(flags & AMDGPU_PTE_VALID) || (frag_start >= frag_end)) {
count = (pe_end - pe_start) / 8;
- amdgpu_vm_update_pages(adev, src, pages_addr, ib, pe_start,
+ amdgpu_vm_update_pages(adev, vm_update_params, pe_start,
addr, count, AMDGPU_GPU_PAGE_SIZE,
flags);
return;
@@ -684,21 +696,21 @@ static void amdgpu_vm_frag_ptes(struct amdgpu_device *adev,
/* handle the 4K area at the beginning */
if (pe_start != frag_start) {
count = (frag_start - pe_start) / 8;
- amdgpu_vm_update_pages(adev, 0, NULL, ib, pe_start, addr,
+ amdgpu_vm_update_pages(adev, vm_update_params, pe_start, addr,
count, AMDGPU_GPU_PAGE_SIZE, flags);
addr += AMDGPU_GPU_PAGE_SIZE * count;
}
/* handle the area in the middle */
count = (frag_end - frag_start) / 8;
- amdgpu_vm_update_pages(adev, 0, NULL, ib, frag_start, addr, count,
+ amdgpu_vm_update_pages(adev, vm_update_params, frag_start, addr, count,
AMDGPU_GPU_PAGE_SIZE, flags | frag_flags);
/* handle the 4K area at the end */
if (frag_end != pe_end) {
addr += AMDGPU_GPU_PAGE_SIZE * count;
count = (pe_end - frag_end) / 8;
- amdgpu_vm_update_pages(adev, 0, NULL, ib, frag_end, addr,
+ amdgpu_vm_update_pages(adev, vm_update_params, frag_end, addr,
count, AMDGPU_GPU_PAGE_SIZE, flags);
}
}
@@ -707,8 +719,7 @@ static void amdgpu_vm_frag_ptes(struct amdgpu_device *adev,
* amdgpu_vm_update_ptes - make sure that page tables are valid
*
* @adev: amdgpu_device pointer
- * @src: address where to copy page table entries from
- * @pages_addr: DMA addresses to use for mapping
+ * @vm_update_params: see amdgpu_vm_update_params definition
* @vm: requested vm
* @start: start of GPU address range
* @end: end of GPU address range
@@ -718,10 +729,9 @@ static void amdgpu_vm_frag_ptes(struct amdgpu_device *adev,
* Update the page tables in the range @start - @end.
*/
static void amdgpu_vm_update_ptes(struct amdgpu_device *adev,
- uint64_t src,
- dma_addr_t *pages_addr,
+ struct amdgpu_vm_update_params
+ *vm_update_params,
struct amdgpu_vm *vm,
- struct amdgpu_ib *ib,
uint64_t start, uint64_t end,
uint64_t dst, uint32_t flags)
{
@@ -747,7 +757,7 @@ static void amdgpu_vm_update_ptes(struct amdgpu_device *adev,
if (last_pe_end != pe_start) {
- amdgpu_vm_frag_ptes(adev, src, pages_addr, ib,
+ amdgpu_vm_frag_ptes(adev, vm_update_params,
last_pe_start, last_pe_end,
last_dst, flags);
@@ -762,7 +772,7 @@ static void amdgpu_vm_update_ptes(struct amdgpu_device *adev,
dst += nptes * AMDGPU_GPU_PAGE_SIZE;
}
- amdgpu_vm_frag_ptes(adev, src, pages_addr, ib, last_pe_start,
+ amdgpu_vm_frag_ptes(adev, vm_update_params, last_pe_start,
last_pe_end, last_dst, flags);
}
@@ -794,11 +804,14 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
void *owner = AMDGPU_FENCE_OWNER_VM;
unsigned nptes, ncmds, ndw;
struct amdgpu_job *job;
- struct amdgpu_ib *ib;
+ struct amdgpu_vm_update_params vm_update_params;
struct fence *f = NULL;
int r;
ring = container_of(vm->entity.sched, struct amdgpu_ring, sched);
+ memset(&vm_update_params, 0, sizeof(vm_update_params));
+ vm_update_params.src = src;
+ vm_update_params.pages_addr = pages_addr;
/* sync to everything on unmapping */
if (!(flags & AMDGPU_PTE_VALID))
@@ -815,11 +828,11 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
/* padding, etc. */
ndw = 64;
- if (src) {
+ if (vm_update_params.src) {
/* only copy commands needed */
ndw += ncmds * 7;
- } else if (pages_addr) {
+ } else if (vm_update_params.pages_addr) {
/* header for write data commands */
ndw += ncmds * 4;
@@ -838,7 +851,7 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
if (r)
return r;
- ib = &job->ibs[0];
+ vm_update_params.ib = &job->ibs[0];
r = amdgpu_sync_resv(adev, &job->sync, vm->page_directory->tbo.resv,
owner);
@@ -849,11 +862,11 @@ static int amdgpu_vm_bo_update_mapping(struct amdgpu_device *adev,
if (r)
goto error_free;
- amdgpu_vm_update_ptes(adev, src, pages_addr, vm, ib, start,
+ amdgpu_vm_update_ptes(adev, &vm_update_params, vm, start,
last + 1, addr, flags);
- amdgpu_ring_pad_ib(ring, ib);
- WARN_ON(ib->length_dw > ndw);
+ amdgpu_ring_pad_ib(ring, vm_update_params.ib);
+ WARN_ON(vm_update_params.ib->length_dw > ndw);
r = amdgpu_job_submit(job, ring, &vm->entity,
AMDGPU_FENCE_OWNER_VM, &f);
if (r)
diff --git a/drivers/gpu/drm/amd/amdgpu/atombios_dp.c b/drivers/gpu/drm/amd/amdgpu/atombios_dp.c
index bf731e9f643e..7f85c2c1d681 100644
--- a/drivers/gpu/drm/amd/amdgpu/atombios_dp.c
+++ b/drivers/gpu/drm/amd/amdgpu/atombios_dp.c
@@ -276,8 +276,8 @@ static int amdgpu_atombios_dp_get_dp_link_config(struct drm_connector *connector
}
}
} else {
- for (lane_num = 1; lane_num <= max_lane_num; lane_num <<= 1) {
- for (i = 0; i < ARRAY_SIZE(link_rates) && link_rates[i] <= max_link_rate; i++) {
+ for (i = 0; i < ARRAY_SIZE(link_rates) && link_rates[i] <= max_link_rate; i++) {
+ for (lane_num = 1; lane_num <= max_lane_num; lane_num <<= 1) {
max_pix_clock = (lane_num * link_rates[i] * 8) / bpp;
if (max_pix_clock >= pix_clock) {
*dp_lanes = lane_num;
diff --git a/drivers/gpu/drm/amd/amdgpu/cik_ih.c b/drivers/gpu/drm/amd/amdgpu/cik_ih.c
index 845c21b1b2ee..be3d6f79a864 100644
--- a/drivers/gpu/drm/amd/amdgpu/cik_ih.c
+++ b/drivers/gpu/drm/amd/amdgpu/cik_ih.c
@@ -103,7 +103,6 @@ static void cik_ih_disable_interrupts(struct amdgpu_device *adev)
*/
static int cik_ih_irq_init(struct amdgpu_device *adev)
{
- int ret = 0;
int rb_bufsz;
u32 interrupt_cntl, ih_cntl, ih_rb_cntl;
u64 wptr_off;
@@ -156,7 +155,7 @@ static int cik_ih_irq_init(struct amdgpu_device *adev)
/* enable irqs */
cik_ih_enable_interrupts(adev);
- return ret;
+ return 0;
}
/**
diff --git a/drivers/gpu/drm/amd/amdgpu/cz_dpm.c b/drivers/gpu/drm/amd/amdgpu/cz_dpm.c
index fa4449e126e6..933e425a8154 100644
--- a/drivers/gpu/drm/amd/amdgpu/cz_dpm.c
+++ b/drivers/gpu/drm/amd/amdgpu/cz_dpm.c
@@ -1579,7 +1579,6 @@ static int cz_dpm_update_sclk_limit(struct amdgpu_device *adev)
static int cz_dpm_set_deep_sleep_sclk_threshold(struct amdgpu_device *adev)
{
- int ret = 0;
struct cz_power_info *pi = cz_get_pi(adev);
if (pi->caps_sclk_ds) {
@@ -1588,20 +1587,19 @@ static int cz_dpm_set_deep_sleep_sclk_threshold(struct amdgpu_device *adev)
CZ_MIN_DEEP_SLEEP_SCLK);
}
- return ret;
+ return 0;
}
/* ?? without dal support, is this still needed in setpowerstate list*/
static int cz_dpm_set_watermark_threshold(struct amdgpu_device *adev)
{
- int ret = 0;
struct cz_power_info *pi = cz_get_pi(adev);
cz_send_msg_to_smc_with_parameter(adev,
PPSMC_MSG_SetWatermarkFrequency,
pi->sclk_dpm.soft_max_clk);
- return ret;
+ return 0;
}
static int cz_dpm_enable_nbdpm(struct amdgpu_device *adev)
@@ -1636,7 +1634,6 @@ static void cz_dpm_nbdpm_lm_pstate_enable(struct amdgpu_device *adev,
static int cz_dpm_update_low_memory_pstate(struct amdgpu_device *adev)
{
- int ret = 0;
struct cz_power_info *pi = cz_get_pi(adev);
struct cz_ps *ps = &pi->requested_ps;
@@ -1647,21 +1644,19 @@ static int cz_dpm_update_low_memory_pstate(struct amdgpu_device *adev)
cz_dpm_nbdpm_lm_pstate_enable(adev, true);
}
- return ret;
+ return 0;
}
/* with dpm enabled */
static int cz_dpm_set_power_state(struct amdgpu_device *adev)
{
- int ret = 0;
-
cz_dpm_update_sclk_limit(adev);
cz_dpm_set_deep_sleep_sclk_threshold(adev);
cz_dpm_set_watermark_threshold(adev);
cz_dpm_enable_nbdpm(adev);
cz_dpm_update_low_memory_pstate(adev);
- return ret;
+ return 0;
}
static void cz_dpm_post_set_power_state(struct amdgpu_device *adev)
diff --git a/drivers/gpu/drm/amd/amdgpu/cz_ih.c b/drivers/gpu/drm/amd/amdgpu/cz_ih.c
index 863cb16f6126..3d23a70b6432 100644
--- a/drivers/gpu/drm/amd/amdgpu/cz_ih.c
+++ b/drivers/gpu/drm/amd/amdgpu/cz_ih.c
@@ -103,7 +103,6 @@ static void cz_ih_disable_interrupts(struct amdgpu_device *adev)
*/
static int cz_ih_irq_init(struct amdgpu_device *adev)
{
- int ret = 0;
int rb_bufsz;
u32 interrupt_cntl, ih_cntl, ih_rb_cntl;
u64 wptr_off;
@@ -157,7 +156,7 @@ static int cz_ih_irq_init(struct amdgpu_device *adev)
/* enable interrupts */
cz_ih_enable_interrupts(adev);
- return ret;
+ return 0;
}
/**
diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
index c11b6007af80..af26ec0bc59d 100644
--- a/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
@@ -137,7 +137,7 @@ static const u32 polaris11_golden_settings_a11[] =
mmDCI_CLK_CNTL, 0x00000080, 0x00000000,
mmFBC_DEBUG_COMP, 0x000000f0, 0x00000070,
mmFBC_DEBUG1, 0xffffffff, 0x00000008,
- mmFBC_MISC, 0x9f313fff, 0x14300008,
+ mmFBC_MISC, 0x9f313fff, 0x14302008,
mmHDMI_CONTROL, 0x313f031f, 0x00000011,
};
@@ -145,7 +145,7 @@ static const u32 polaris10_golden_settings_a11[] =
{
mmDCI_CLK_CNTL, 0x00000080, 0x00000000,
mmFBC_DEBUG_COMP, 0x000000f0, 0x00000070,
- mmFBC_MISC, 0x9f313fff, 0x14300008,
+ mmFBC_MISC, 0x9f313fff, 0x14302008,
mmHDMI_CONTROL, 0x313f031f, 0x00000011,
};
diff --git a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
index 92647fbf5b8b..f19bab68fd83 100644
--- a/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/gfx_v8_0.c
@@ -267,10 +267,13 @@ static const u32 tonga_mgcg_cgcg_init[] =
static const u32 golden_settings_polaris11_a11[] =
{
+ mmCB_HW_CONTROL, 0xfffdf3cf, 0x00006208,
mmCB_HW_CONTROL_3, 0x000001ff, 0x00000040,
mmDB_DEBUG2, 0xf00fffff, 0x00000400,
mmPA_SC_ENHANCE, 0xffffffff, 0x20000001,
mmPA_SC_LINE_STIPPLE_STATE, 0x0000ff0f, 0x00000000,
+ mmPA_SC_RASTER_CONFIG, 0x3f3fffff, 0x16000012,
+ mmPA_SC_RASTER_CONFIG_1, 0x0000003f, 0x00000000,
mmRLC_CGCG_CGLS_CTRL, 0x00000003, 0x0001003c,
mmRLC_CGCG_CGLS_CTRL_3D, 0xffffffff, 0x0001003c,
mmSQ_CONFIG, 0x07f80000, 0x07180000,
@@ -284,8 +287,6 @@ static const u32 golden_settings_polaris11_a11[] =
static const u32 polaris11_golden_common_all[] =
{
mmGRBM_GFX_INDEX, 0xffffffff, 0xe0000000,
- mmPA_SC_RASTER_CONFIG, 0xffffffff, 0x16000012,
- mmPA_SC_RASTER_CONFIG_1, 0xffffffff, 0x00000000,
mmGB_ADDR_CONFIG, 0xffffffff, 0x22011002,
mmSPI_RESOURCE_RESERVE_CU_0, 0xffffffff, 0x00000800,
mmSPI_RESOURCE_RESERVE_CU_1, 0xffffffff, 0x00000800,
@@ -296,6 +297,7 @@ static const u32 polaris11_golden_common_all[] =
static const u32 golden_settings_polaris10_a11[] =
{
mmATC_MISC_CG, 0x000c0fc0, 0x000c0200,
+ mmCB_HW_CONTROL, 0xfffdf3cf, 0x00006208,
mmCB_HW_CONTROL_3, 0x000001ff, 0x00000040,
mmDB_DEBUG2, 0xf00fffff, 0x00000400,
mmPA_SC_ENHANCE, 0xffffffff, 0x20000001,
@@ -5725,6 +5727,7 @@ static void gfx_v8_0_ring_emit_fence_gfx(struct amdgpu_ring *ring, u64 addr,
amdgpu_ring_write(ring, PACKET3(PACKET3_EVENT_WRITE_EOP, 4));
amdgpu_ring_write(ring, (EOP_TCL1_ACTION_EN |
EOP_TC_ACTION_EN |
+ EOP_TC_WB_ACTION_EN |
EVENT_TYPE(CACHE_FLUSH_AND_INV_TS_EVENT) |
EVENT_INDEX(5)));
amdgpu_ring_write(ring, addr & 0xfffffffc);
diff --git a/drivers/gpu/drm/amd/amdgpu/iceland_ih.c b/drivers/gpu/drm/amd/amdgpu/iceland_ih.c
index 39bfc52d0b42..3b8906ce3511 100644
--- a/drivers/gpu/drm/amd/amdgpu/iceland_ih.c
+++ b/drivers/gpu/drm/amd/amdgpu/iceland_ih.c
@@ -103,7 +103,6 @@ static void iceland_ih_disable_interrupts(struct amdgpu_device *adev)
*/
static int iceland_ih_irq_init(struct amdgpu_device *adev)
{
- int ret = 0;
int rb_bufsz;
u32 interrupt_cntl, ih_cntl, ih_rb_cntl;
u64 wptr_off;
@@ -157,7 +156,7 @@ static int iceland_ih_irq_init(struct amdgpu_device *adev)
/* enable interrupts */
iceland_ih_enable_interrupts(adev);
- return ret;
+ return 0;
}
/**
diff --git a/drivers/gpu/drm/amd/amdgpu/kv_dpm.c b/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
index b45f54714574..a789a863d677 100644
--- a/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
+++ b/drivers/gpu/drm/amd/amdgpu/kv_dpm.c
@@ -2252,7 +2252,7 @@ static void kv_apply_state_adjust_rules(struct amdgpu_device *adev,
if (pi->caps_stable_p_state) {
stable_p_state_sclk = (max_limits->sclk * 75) / 100;
- for (i = table->count - 1; i >= 0; i++) {
+ for (i = table->count - 1; i >= 0; i--) {
if (stable_p_state_sclk >= table->entries[i].clk) {
stable_p_state_sclk = table->entries[i].clk;
break;
diff --git a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
index 063f08a9957a..31d99b0010f7 100644
--- a/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/sdma_v3_0.c
@@ -109,10 +109,12 @@ static const u32 fiji_mgcg_cgcg_init[] =
static const u32 golden_settings_polaris11_a11[] =
{
mmSDMA0_CHICKEN_BITS, 0xfc910007, 0x00810007,
+ mmSDMA0_CLK_CTRL, 0xff000fff, 0x00000000,
mmSDMA0_GFX_IB_CNTL, 0x800f0111, 0x00000100,
mmSDMA0_RLC0_IB_CNTL, 0x800f0111, 0x00000100,
mmSDMA0_RLC1_IB_CNTL, 0x800f0111, 0x00000100,
mmSDMA1_CHICKEN_BITS, 0xfc910007, 0x00810007,
+ mmSDMA1_CLK_CTRL, 0xff000fff, 0x00000000,
mmSDMA1_GFX_IB_CNTL, 0x800f0111, 0x00000100,
mmSDMA1_RLC0_IB_CNTL, 0x800f0111, 0x00000100,
mmSDMA1_RLC1_IB_CNTL, 0x800f0111, 0x00000100,
diff --git a/drivers/gpu/drm/amd/amdgpu/tonga_ih.c b/drivers/gpu/drm/amd/amdgpu/tonga_ih.c
index f036af937fbc..c92055805a45 100644
--- a/drivers/gpu/drm/amd/amdgpu/tonga_ih.c
+++ b/drivers/gpu/drm/amd/amdgpu/tonga_ih.c
@@ -99,7 +99,6 @@ static void tonga_ih_disable_interrupts(struct amdgpu_device *adev)
*/
static int tonga_ih_irq_init(struct amdgpu_device *adev)
{
- int ret = 0;
int rb_bufsz;
u32 interrupt_cntl, ih_rb_cntl, ih_doorbell_rtpr;
u64 wptr_off;
@@ -165,7 +164,7 @@ static int tonga_ih_irq_init(struct amdgpu_device *adev)
/* enable interrupts */
tonga_ih_enable_interrupts(adev);
- return ret;
+ return 0;
}
/**
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/fiji_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/fiji_hwmgr.c
index c94f9faa220a..24a16e49b571 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/fiji_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/fiji_hwmgr.c
@@ -3573,46 +3573,11 @@ static int fiji_force_dpm_highest(struct pp_hwmgr *hwmgr)
return 0;
}
-static void fiji_apply_dal_min_voltage_request(struct pp_hwmgr *hwmgr)
-{
- struct phm_ppt_v1_information *table_info =
- (struct phm_ppt_v1_information *)hwmgr->pptable;
- struct phm_clock_voltage_dependency_table *table =
- table_info->vddc_dep_on_dal_pwrl;
- struct phm_ppt_v1_clock_voltage_dependency_table *vddc_table;
- enum PP_DAL_POWERLEVEL dal_power_level = hwmgr->dal_power_level;
- uint32_t req_vddc = 0, req_volt, i;
-
- if (!table && !(dal_power_level >= PP_DAL_POWERLEVEL_ULTRALOW &&
- dal_power_level <= PP_DAL_POWERLEVEL_PERFORMANCE))
- return;
-
- for (i= 0; i < table->count; i++) {
- if (dal_power_level == table->entries[i].clk) {
- req_vddc = table->entries[i].v;
- break;
- }
- }
-
- vddc_table = table_info->vdd_dep_on_sclk;
- for (i= 0; i < vddc_table->count; i++) {
- if (req_vddc <= vddc_table->entries[i].vddc) {
- req_volt = (((uint32_t)vddc_table->entries[i].vddc) * VOLTAGE_SCALE)
- << VDDC_SHIFT;
- smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
- PPSMC_MSG_VddC_Request, req_volt);
- return;
- }
- }
- printk(KERN_ERR "DAL requested level can not"
- " found a available voltage in VDDC DPM Table \n");
-}
-
static int fiji_upload_dpmlevel_enable_mask(struct pp_hwmgr *hwmgr)
{
struct fiji_hwmgr *data = (struct fiji_hwmgr *)(hwmgr->backend);
- fiji_apply_dal_min_voltage_request(hwmgr);
+ phm_apply_dal_min_voltage_request(hwmgr);
if (!data->sclk_dpm_key_disabled) {
if (data->dpm_level_enable_mask.sclk_dpm_enable_mask)
@@ -4349,7 +4314,7 @@ static int fiji_populate_and_upload_sclk_mclk_dpm_levels(
if (data->need_update_smu7_dpm_table &
(DPMTABLE_OD_UPDATE_SCLK + DPMTABLE_UPDATE_SCLK)) {
- result = fiji_populate_all_memory_levels(hwmgr);
+ result = fiji_populate_all_graphic_levels(hwmgr);
PP_ASSERT_WITH_CODE((0 == result),
"Failed to populate SCLK during PopulateNewDPMClocksStates Function!",
return result);
@@ -5109,11 +5074,11 @@ static int fiji_get_pp_table(struct pp_hwmgr *hwmgr, char **table)
struct fiji_hwmgr *data = (struct fiji_hwmgr *)(hwmgr->backend);
if (!data->soft_pp_table) {
- data->soft_pp_table = kzalloc(hwmgr->soft_pp_table_size, GFP_KERNEL);
+ data->soft_pp_table = kmemdup(hwmgr->soft_pp_table,
+ hwmgr->soft_pp_table_size,
+ GFP_KERNEL);
if (!data->soft_pp_table)
return -ENOMEM;
- memcpy(data->soft_pp_table, hwmgr->soft_pp_table,
- hwmgr->soft_pp_table_size);
}
*table = (char *)&data->soft_pp_table;
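
The soft_pp_table copy here (and again in the polaris10 hunk further down) is the classic kzalloc()+memcpy() pair that kmemdup() expresses in one call with identical failure semantics. A minimal illustration, not tied to the powerplay structures:

#include <linux/errno.h>
#include <linux/slab.h>

/* Cache a copy of an arbitrary table; the caller owns *dst afterwards */
static int demo_cache_table(void **dst, const void *src, size_t len)
{
        *dst = kmemdup(src, len, GFP_KERNEL);
        if (!*dst)
                return -ENOMEM;
        return 0;
}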
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
index 7d69ed635bc2..1c48917da3cf 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c
@@ -30,6 +30,9 @@
#include "pppcielanes.h"
#include "pp_debug.h"
#include "ppatomctrl.h"
+#include "ppsmc.h"
+
+#define VOLTAGE_SCALE 4
extern int cz_hwmgr_init(struct pp_hwmgr *hwmgr);
extern int tonga_hwmgr_init(struct pp_hwmgr *hwmgr);
@@ -566,3 +569,38 @@ uint32_t phm_get_lowest_enabled_level(struct pp_hwmgr *hwmgr, uint32_t mask)
return level;
}
+
+void phm_apply_dal_min_voltage_request(struct pp_hwmgr *hwmgr)
+{
+ struct phm_ppt_v1_information *table_info =
+ (struct phm_ppt_v1_information *)hwmgr->pptable;
+ struct phm_clock_voltage_dependency_table *table =
+ table_info->vddc_dep_on_dal_pwrl;
+ struct phm_ppt_v1_clock_voltage_dependency_table *vddc_table;
+ enum PP_DAL_POWERLEVEL dal_power_level = hwmgr->dal_power_level;
+ uint32_t req_vddc = 0, req_volt, i;
+
+ if (!table || table->count <= 0
+ || dal_power_level < PP_DAL_POWERLEVEL_ULTRALOW
+ || dal_power_level > PP_DAL_POWERLEVEL_PERFORMANCE)
+ return;
+
+ for (i = 0; i < table->count; i++) {
+ if (dal_power_level == table->entries[i].clk) {
+ req_vddc = table->entries[i].v;
+ break;
+ }
+ }
+
+ vddc_table = table_info->vdd_dep_on_sclk;
+ for (i = 0; i < vddc_table->count; i++) {
+ if (req_vddc <= vddc_table->entries[i].vddc) {
+ req_volt = (((uint32_t)vddc_table->entries[i].vddc) * VOLTAGE_SCALE);
+ smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
+ PPSMC_MSG_VddC_Request, req_volt);
+ return;
+ }
+ }
+	printk(KERN_ERR "DAL requested level could not find"
+			" an available voltage in VDDC DPM Table\n");
+}
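
For orientation, a brief walk-through of the relocated helper with made-up table values (illustrative only, not taken from any real PPTable):

/*
 * If the DAL power level matches a vddc_dep_on_dal_pwrl entry with
 * v = 900, and the first vdd_dep_on_sclk entry with vddc >= 900 has
 * vddc = 912, the parameter sent with PPSMC_MSG_VddC_Request is
 *
 *	912 * VOLTAGE_SCALE = 912 * 4 = 3648
 *
 * Note the shared copy no longer shifts by VDDC_SHIFT, unlike the
 * removed fiji/polaris10 duplicates above and below.
 */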
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/polaris10_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/polaris10_hwmgr.c
index 93768fa1dcdc..aa6be033f21b 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/polaris10_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/polaris10_hwmgr.c
@@ -189,41 +189,6 @@ int phm_get_current_pcie_lane_number(struct pp_hwmgr *hwmgr)
return decode_pcie_lane_width(link_width);
}
-void phm_apply_dal_min_voltage_request(struct pp_hwmgr *hwmgr)
-{
- struct phm_ppt_v1_information *table_info =
- (struct phm_ppt_v1_information *)hwmgr->pptable;
- struct phm_clock_voltage_dependency_table *table =
- table_info->vddc_dep_on_dal_pwrl;
- struct phm_ppt_v1_clock_voltage_dependency_table *vddc_table;
- enum PP_DAL_POWERLEVEL dal_power_level = hwmgr->dal_power_level;
- uint32_t req_vddc = 0, req_volt, i;
-
- if (!table && !(dal_power_level >= PP_DAL_POWERLEVEL_ULTRALOW &&
- dal_power_level <= PP_DAL_POWERLEVEL_PERFORMANCE))
- return;
-
- for (i = 0; i < table->count; i++) {
- if (dal_power_level == table->entries[i].clk) {
- req_vddc = table->entries[i].v;
- break;
- }
- }
-
- vddc_table = table_info->vdd_dep_on_sclk;
- for (i = 0; i < vddc_table->count; i++) {
- if (req_vddc <= vddc_table->entries[i].vddc) {
- req_volt = (((uint32_t)vddc_table->entries[i].vddc) * VOLTAGE_SCALE)
- << VDDC_SHIFT;
- smum_send_msg_to_smc_with_parameter(hwmgr->smumgr,
- PPSMC_MSG_VddC_Request, req_volt);
- return;
- }
- }
- printk(KERN_ERR "DAL requested level can not"
- " found a available voltage in VDDC DPM Table \n");
-}
-
/**
* Enable voltage control
*
@@ -2091,7 +2056,7 @@ static int polaris10_init_smc_table(struct pp_hwmgr *hwmgr)
"Failed to populate Clock Stretcher Data Table!",
return result);
}
-
+ table->CurrSclkPllRange = 0xff;
table->GraphicsVoltageChangeEnable = 1;
table->GraphicsThermThrottleEnable = 1;
table->GraphicsInterval = 1;
@@ -2184,6 +2149,7 @@ static int polaris10_init_smc_table(struct pp_hwmgr *hwmgr)
CONVERT_FROM_HOST_TO_SMC_UL(table->SmioMask1);
CONVERT_FROM_HOST_TO_SMC_UL(table->SmioMask2);
CONVERT_FROM_HOST_TO_SMC_UL(table->SclkStepSize);
+ CONVERT_FROM_HOST_TO_SMC_UL(table->CurrSclkPllRange);
CONVERT_FROM_HOST_TO_SMC_US(table->TemperatureLimitHigh);
CONVERT_FROM_HOST_TO_SMC_US(table->TemperatureLimitLow);
CONVERT_FROM_HOST_TO_SMC_US(table->VoltageResponseTime);
@@ -4760,11 +4726,11 @@ static int polaris10_get_pp_table(struct pp_hwmgr *hwmgr, char **table)
struct polaris10_hwmgr *data = (struct polaris10_hwmgr *)(hwmgr->backend);
if (!data->soft_pp_table) {
- data->soft_pp_table = kzalloc(hwmgr->soft_pp_table_size, GFP_KERNEL);
+ data->soft_pp_table = kmemdup(hwmgr->soft_pp_table,
+ hwmgr->soft_pp_table_size,
+ GFP_KERNEL);
if (!data->soft_pp_table)
return -ENOMEM;
- memcpy(data->soft_pp_table, hwmgr->soft_pp_table,
- hwmgr->soft_pp_table_size);
}
*table = (char *)&data->soft_pp_table;
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/tonga_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/tonga_hwmgr.c
index 1faad92b50d3..16fed487973b 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/tonga_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/tonga_hwmgr.c
@@ -5331,7 +5331,7 @@ static int tonga_freeze_sclk_mclk_dpm(struct pp_hwmgr *hwmgr)
(data->need_update_smu7_dpm_table &
(DPMTABLE_OD_UPDATE_SCLK + DPMTABLE_UPDATE_SCLK))) {
PP_ASSERT_WITH_CODE(
- true == tonga_is_dpm_running(hwmgr),
+ 0 == tonga_is_dpm_running(hwmgr),
"Trying to freeze SCLK DPM when DPM is disabled",
);
PP_ASSERT_WITH_CODE(
@@ -5344,7 +5344,7 @@ static int tonga_freeze_sclk_mclk_dpm(struct pp_hwmgr *hwmgr)
if ((0 == data->mclk_dpm_key_disabled) &&
(data->need_update_smu7_dpm_table &
DPMTABLE_OD_UPDATE_MCLK)) {
- PP_ASSERT_WITH_CODE(true == tonga_is_dpm_running(hwmgr),
+ PP_ASSERT_WITH_CODE(0 == tonga_is_dpm_running(hwmgr),
"Trying to freeze MCLK DPM when DPM is disabled",
);
PP_ASSERT_WITH_CODE(
@@ -5445,7 +5445,7 @@ static int tonga_populate_and_upload_sclk_mclk_dpm_levels(struct pp_hwmgr *hwmgr
}
if (data->need_update_smu7_dpm_table & (DPMTABLE_OD_UPDATE_SCLK + DPMTABLE_UPDATE_SCLK)) {
- result = tonga_populate_all_memory_levels(hwmgr);
+ result = tonga_populate_all_graphic_levels(hwmgr);
PP_ASSERT_WITH_CODE((0 == result),
"Failed to populate SCLK during PopulateNewDPMClocksStates Function!",
return result);
@@ -5647,7 +5647,7 @@ static int tonga_unfreeze_sclk_mclk_dpm(struct pp_hwmgr *hwmgr)
(data->need_update_smu7_dpm_table &
(DPMTABLE_OD_UPDATE_SCLK + DPMTABLE_UPDATE_SCLK))) {
- PP_ASSERT_WITH_CODE(true == tonga_is_dpm_running(hwmgr),
+ PP_ASSERT_WITH_CODE(0 == tonga_is_dpm_running(hwmgr),
"Trying to Unfreeze SCLK DPM when DPM is disabled",
);
PP_ASSERT_WITH_CODE(
@@ -5661,7 +5661,7 @@ static int tonga_unfreeze_sclk_mclk_dpm(struct pp_hwmgr *hwmgr)
(data->need_update_smu7_dpm_table & DPMTABLE_OD_UPDATE_MCLK)) {
PP_ASSERT_WITH_CODE(
- true == tonga_is_dpm_running(hwmgr),
+ 0 == tonga_is_dpm_running(hwmgr),
"Trying to Unfreeze MCLK DPM when DPM is disabled",
);
PP_ASSERT_WITH_CODE(
@@ -6056,11 +6056,11 @@ static int tonga_get_pp_table(struct pp_hwmgr *hwmgr, char **table)
struct tonga_hwmgr *data = (struct tonga_hwmgr *)(hwmgr->backend);
if (!data->soft_pp_table) {
- data->soft_pp_table = kzalloc(hwmgr->soft_pp_table_size, GFP_KERNEL);
+ data->soft_pp_table = kmemdup(hwmgr->soft_pp_table,
+ hwmgr->soft_pp_table_size,
+ GFP_KERNEL);
if (!data->soft_pp_table)
return -ENOMEM;
- memcpy(data->soft_pp_table, hwmgr->soft_pp_table,
- hwmgr->soft_pp_table_size);
}
*table = (char *)&data->soft_pp_table;
diff --git a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
index fd4ce7aaeee9..28f571449495 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
@@ -673,7 +673,7 @@ extern int phm_get_sclk_for_voltage_evv(struct pp_hwmgr *hwmgr, phm_ppt_v1_volta
extern int phm_initializa_dynamic_state_adjustment_rule_settings(struct pp_hwmgr *hwmgr);
extern int phm_hwmgr_backend_fini(struct pp_hwmgr *hwmgr);
extern uint32_t phm_get_lowest_enabled_level(struct pp_hwmgr *hwmgr, uint32_t mask);
-
+extern void phm_apply_dal_min_voltage_request(struct pp_hwmgr *hwmgr);
#define PHM_ENTIRE_REGISTER_MASK 0xFFFFFFFFU
diff --git a/drivers/gpu/drm/amd/powerplay/smumgr/cz_smumgr.c b/drivers/gpu/drm/amd/powerplay/smumgr/cz_smumgr.c
index da18f44fd1c8..87c023e518ab 100644
--- a/drivers/gpu/drm/amd/powerplay/smumgr/cz_smumgr.c
+++ b/drivers/gpu/drm/amd/powerplay/smumgr/cz_smumgr.c
@@ -639,7 +639,7 @@ static int cz_smu_populate_firmware_entries(struct pp_smumgr *smumgr)
cz_smu->driver_buffer_length = 0;
- for (i = 0; i < sizeof(firmware_list)/sizeof(*firmware_list); i++) {
+ for (i = 0; i < ARRAY_SIZE(firmware_list); i++) {
firmware_type = cz_translate_firmware_enum_to_arg(smumgr,
firmware_list[i]);
diff --git a/drivers/gpu/drm/drm_cache.c b/drivers/gpu/drm/drm_cache.c
index 6743ff7dccfa..059f7c39c582 100644
--- a/drivers/gpu/drm/drm_cache.c
+++ b/drivers/gpu/drm/drm_cache.c
@@ -72,7 +72,7 @@ drm_clflush_pages(struct page *pages[], unsigned long num_pages)
{
#if defined(CONFIG_X86)
- if (cpu_has_clflush) {
+ if (static_cpu_has(X86_FEATURE_CLFLUSH)) {
drm_cache_flush_clflush(pages, num_pages);
return;
}
@@ -105,7 +105,7 @@ void
drm_clflush_sg(struct sg_table *st)
{
#if defined(CONFIG_X86)
- if (cpu_has_clflush) {
+ if (static_cpu_has(X86_FEATURE_CLFLUSH)) {
struct sg_page_iter sg_iter;
mb();
@@ -129,7 +129,7 @@ void
drm_clflush_virt_range(void *addr, unsigned long length)
{
#if defined(CONFIG_X86)
- if (cpu_has_clflush) {
+ if (static_cpu_has(X86_FEATURE_CLFLUSH)) {
const int size = boot_cpu_data.x86_clflush_size;
void *end = addr + length;
addr = (void *)(((unsigned long)addr) & -size);
diff --git a/drivers/gpu/drm/drm_dp_dual_mode_helper.c b/drivers/gpu/drm/drm_dp_dual_mode_helper.c
new file mode 100644
index 000000000000..a7b2a751f6fe
--- /dev/null
+++ b/drivers/gpu/drm/drm_dp_dual_mode_helper.c
@@ -0,0 +1,366 @@
+/*
+ * Copyright © 2016 Intel Corporation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE COPYRIGHT HOLDER(S) OR AUTHOR(S) BE LIABLE FOR ANY CLAIM, DAMAGES OR
+ * OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
+ * ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
+ * OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include <linux/errno.h>
+#include <linux/export.h>
+#include <linux/i2c.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <drm/drm_dp_dual_mode_helper.h>
+#include <drm/drmP.h>
+
+/**
+ * DOC: dp dual mode helpers
+ *
+ * Helper functions to deal with DP dual mode (aka. DP++) adaptors.
+ *
+ * Type 1:
+ * Adaptor registers (if any) and the sink DDC bus may be accessed via I2C.
+ *
+ * Type 2:
+ * Adaptor registers and sink DDC bus can be accessed either via I2C or
+ * I2C-over-AUX. Source devices may choose to implement either of these
+ * access methods.
+ */
+
+#define DP_DUAL_MODE_SLAVE_ADDRESS 0x40
+
+/**
+ * drm_dp_dual_mode_read - Read from the DP dual mode adaptor register(s)
+ * @adapter: I2C adapter for the DDC bus
+ * @offset: register offset
+ * @buffer: buffer for return data
+ * @size: size of the buffer
+ *
+ * Reads @size bytes from the DP dual mode adaptor registers
+ * starting at @offset.
+ *
+ * Returns:
+ * 0 on success, negative error code on failure
+ */
+ssize_t drm_dp_dual_mode_read(struct i2c_adapter *adapter,
+ u8 offset, void *buffer, size_t size)
+{
+ struct i2c_msg msgs[] = {
+ {
+ .addr = DP_DUAL_MODE_SLAVE_ADDRESS,
+ .flags = 0,
+ .len = 1,
+ .buf = &offset,
+ },
+ {
+ .addr = DP_DUAL_MODE_SLAVE_ADDRESS,
+ .flags = I2C_M_RD,
+ .len = size,
+ .buf = buffer,
+ },
+ };
+ int ret;
+
+ ret = i2c_transfer(adapter, msgs, ARRAY_SIZE(msgs));
+ if (ret < 0)
+ return ret;
+ if (ret != ARRAY_SIZE(msgs))
+ return -EPROTO;
+
+ return 0;
+}
+EXPORT_SYMBOL(drm_dp_dual_mode_read);
+
+/**
+ * drm_dp_dual_mode_write - Write to the DP dual mode adaptor register(s)
+ * @adapter: I2C adapter for the DDC bus
+ * @offset: register offset
+ * @buffer: buffer for write data
+ * @size: size of the buffer
+ *
+ * Writes @size bytes to the DP dual mode adaptor registers
+ * starting at @offset.
+ *
+ * Returns:
+ * 0 on success, negative error code on failure
+ */
+ssize_t drm_dp_dual_mode_write(struct i2c_adapter *adapter,
+ u8 offset, const void *buffer, size_t size)
+{
+ struct i2c_msg msg = {
+ .addr = DP_DUAL_MODE_SLAVE_ADDRESS,
+ .flags = 0,
+ .len = 1 + size,
+ .buf = NULL,
+ };
+ void *data;
+ int ret;
+
+ data = kmalloc(msg.len, GFP_TEMPORARY);
+ if (!data)
+ return -ENOMEM;
+
+ msg.buf = data;
+
+ memcpy(data, &offset, 1);
+ memcpy(data + 1, buffer, size);
+
+ ret = i2c_transfer(adapter, &msg, 1);
+
+ kfree(data);
+
+ if (ret < 0)
+ return ret;
+ if (ret != 1)
+ return -EPROTO;
+
+ return 0;
+}
+EXPORT_SYMBOL(drm_dp_dual_mode_write);
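
As a rough usage sketch -- illustrative only, not part of this patch -- a caller that already holds the adaptor's DDC i2c_adapter could combine the two accessors like this (register offsets come from the new <drm/drm_dp_dual_mode_helper.h> header):

static int example_disable_tmds(struct i2c_adapter *ddc)
{
	uint8_t id = 0;
	uint8_t oen = DP_DUAL_MODE_TMDS_DISABLE;
	ssize_t ret;

	/* read the adaptor ID register to confirm something answers */
	ret = drm_dp_dual_mode_read(ddc, DP_DUAL_MODE_ADAPTOR_ID,
				    &id, sizeof(id));
	if (ret)
		return ret; /* no adaptor, or an i2c error */

	/* then switch the TMDS output buffers off via the OEN register */
	return drm_dp_dual_mode_write(ddc, DP_DUAL_MODE_TMDS_OEN,
				      &oen, sizeof(oen));
}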
+
+static bool is_hdmi_adaptor(const char hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN])
+{
+ static const char dp_dual_mode_hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN] =
+ "DP-HDMI ADAPTOR\x04";
+
+ return memcmp(hdmi_id, dp_dual_mode_hdmi_id,
+ sizeof(dp_dual_mode_hdmi_id)) == 0;
+}
+
+static bool is_type2_adaptor(uint8_t adaptor_id)
+{
+ return adaptor_id == (DP_DUAL_MODE_TYPE_TYPE2 |
+ DP_DUAL_MODE_REV_TYPE2);
+}
+
+/**
+ * drm_dp_dual_mode_detect - Identify the DP dual mode adaptor
+ * @adapter: I2C adapter for the DDC bus
+ *
+ * Attempt to identify the type of the DP dual mode adaptor used.
+ *
+ * Note that when the answer is @DRM_DP_DUAL_MODE_UNKNOWN it's not
+ * certain whether we're dealing with a native HDMI port or
+ * a type 1 DVI dual mode adaptor. The driver will have to use
+ * some other hardware/driver specific mechanism to make that
+ * distinction.
+ *
+ * Returns:
+ * The type of the DP dual mode adaptor used
+ */
+enum drm_dp_dual_mode_type drm_dp_dual_mode_detect(struct i2c_adapter *adapter)
+{
+ char hdmi_id[DP_DUAL_MODE_HDMI_ID_LEN] = {};
+ uint8_t adaptor_id = 0x00;
+ ssize_t ret;
+
+ /*
+	 * Let's see if the adaptor is there by reading the
+ * HDMI ID registers.
+ *
+	 * Note that type 1 DVI adaptors are not required to implement
+ * any registers, and that presents a problem for detection.
+ * If the i2c transfer is nacked, we may or may not be dealing
+ * with a type 1 DVI adaptor. Some other mechanism of detecting
+ * the presence of the adaptor is required. One way would be
+	 * to check the state of the CONFIG1 pin. Another method would
+ * simply require the driver to know whether the port is a DP++
+ * port or a native HDMI port. Both of these methods are entirely
+ * hardware/driver specific so we can't deal with them here.
+ */
+ ret = drm_dp_dual_mode_read(adapter, DP_DUAL_MODE_HDMI_ID,
+ hdmi_id, sizeof(hdmi_id));
+ if (ret)
+ return DRM_DP_DUAL_MODE_UNKNOWN;
+
+ /*
+ * Sigh. Some (maybe all?) type 1 adaptors are broken and ack
+ * the offset but ignore it, and instead they just always return
+ * data from the start of the HDMI ID buffer. So for a broken
+ * type 1 HDMI adaptor a single byte read will always give us
+ * 0x44, and for a type 1 DVI adaptor it should give 0x00
+ * (assuming it implements any registers). Fortunately neither
+ * of those values will match the type 2 signature of the
+ * DP_DUAL_MODE_ADAPTOR_ID register so we can proceed with
+ * the type 2 adaptor detection safely even in the presence
+ * of broken type 1 adaptors.
+ */
+ ret = drm_dp_dual_mode_read(adapter, DP_DUAL_MODE_ADAPTOR_ID,
+ &adaptor_id, sizeof(adaptor_id));
+ if (ret == 0) {
+ if (is_type2_adaptor(adaptor_id)) {
+ if (is_hdmi_adaptor(hdmi_id))
+ return DRM_DP_DUAL_MODE_TYPE2_HDMI;
+ else
+ return DRM_DP_DUAL_MODE_TYPE2_DVI;
+ }
+ }
+
+ if (is_hdmi_adaptor(hdmi_id))
+ return DRM_DP_DUAL_MODE_TYPE1_HDMI;
+ else
+ return DRM_DP_DUAL_MODE_TYPE1_DVI;
+}
+EXPORT_SYMBOL(drm_dp_dual_mode_detect);
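
A minimal caller sketch (hypothetical ddc adapter, not from the patch); the UNKNOWN branch is exactly the ambiguity the comment above describes:

static void example_log_adaptor(struct i2c_adapter *ddc)
{
	enum drm_dp_dual_mode_type type = drm_dp_dual_mode_detect(ddc);

	if (type == DRM_DP_DUAL_MODE_UNKNOWN) {
		/*
		 * Either a native HDMI port or a register-less type 1
		 * DVI adaptor; the driver has to decide by other means
		 * (VBT, board straps, ...).
		 */
		return;
	}

	DRM_DEBUG_KMS("found %s adaptor\n",
		      drm_dp_get_dual_mode_type_name(type));
}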
+
+/**
+ * drm_dp_dual_mode_max_tmds_clock - Max TMDS clock for DP dual mode adaptor
+ * @type: DP dual mode adaptor type
+ * @adapter: I2C adapter for the DDC bus
+ *
+ * Determine the max TMDS clock the adaptor supports based on the
+ * type of the dual mode adaptor and the DP_DUAL_MODE_MAX_TMDS_CLOCK
+ * register (on type2 adaptors). As some type 1 adaptors have
+ * problems with registers (see comments in drm_dp_dual_mode_detect())
+ * we don't read the register on those; instead we simply assume
+ * a 165 MHz limit based on the specification.
+ *
+ * Returns:
+ * Maximum supported TMDS clock rate for the DP dual mode adaptor in kHz.
+ */
+int drm_dp_dual_mode_max_tmds_clock(enum drm_dp_dual_mode_type type,
+ struct i2c_adapter *adapter)
+{
+ uint8_t max_tmds_clock;
+ ssize_t ret;
+
+ /* native HDMI so no limit */
+ if (type == DRM_DP_DUAL_MODE_NONE)
+ return 0;
+
+ /*
+ * Type 1 adaptors are limited to 165MHz
+	 * Type 2 adaptors can tell us their limit
+ */
+ if (type < DRM_DP_DUAL_MODE_TYPE2_DVI)
+ return 165000;
+
+ ret = drm_dp_dual_mode_read(adapter, DP_DUAL_MODE_MAX_TMDS_CLOCK,
+ &max_tmds_clock, sizeof(max_tmds_clock));
+ if (ret || max_tmds_clock == 0x00 || max_tmds_clock == 0xff) {
+ DRM_DEBUG_KMS("Failed to query max TMDS clock\n");
+ return 165000;
+ }
+
+ return max_tmds_clock * 5000 / 2;
+}
+EXPORT_SYMBOL(drm_dp_dual_mode_max_tmds_clock);
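
The register value appears to be in 2.5 MHz units, which is where the "* 5000 / 2" conversion to kHz above comes from. A hedged sketch of how a mode-validation path might use the result (assumed mode, type and ddc arguments, not from the patch):

static enum drm_mode_status
example_check_mode(const struct drm_display_mode *mode,
		   enum drm_dp_dual_mode_type type,
		   struct i2c_adapter *ddc)
{
	int max_tmds_clock = drm_dp_dual_mode_max_tmds_clock(type, ddc);

	/* 0 means "native HDMI port, no adaptor limit" */
	if (max_tmds_clock > 0 && mode->clock > max_tmds_clock)
		return MODE_CLOCK_HIGH;

	return MODE_OK;
}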
+
+/**
+ * drm_dp_dual_mode_get_tmds_output - Get the state of the TMDS output buffers in the DP dual mode adaptor
+ * @type: DP dual mode adaptor type
+ * @adapter: I2C adapter for the DDC bus
+ * @enabled: current state of the TMDS output buffers
+ *
+ * Get the state of the TMDS output buffers in the adaptor. For
+ * type2 adaptors this is queried from the DP_DUAL_MODE_TMDS_OEN
+ * register. As some type 1 adaptors have problems with registers
+ * (see comments in drm_dp_dual_mode_detect()) we don't read the
+ * register on those; instead we simply assume that the buffers
+ * are always enabled.
+ *
+ * Returns:
+ * 0 on success, negative error code on failure
+ */
+int drm_dp_dual_mode_get_tmds_output(enum drm_dp_dual_mode_type type,
+ struct i2c_adapter *adapter,
+ bool *enabled)
+{
+ uint8_t tmds_oen;
+ ssize_t ret;
+
+ if (type < DRM_DP_DUAL_MODE_TYPE2_DVI) {
+ *enabled = true;
+ return 0;
+ }
+
+ ret = drm_dp_dual_mode_read(adapter, DP_DUAL_MODE_TMDS_OEN,
+ &tmds_oen, sizeof(tmds_oen));
+ if (ret) {
+ DRM_DEBUG_KMS("Failed to query state of TMDS output buffers\n");
+ return ret;
+ }
+
+ *enabled = !(tmds_oen & DP_DUAL_MODE_TMDS_DISABLE);
+
+ return 0;
+}
+EXPORT_SYMBOL(drm_dp_dual_mode_get_tmds_output);
+
+/**
+ * drm_dp_dual_mode_set_tmds_output - Enable/disable TMDS output buffers in the DP dual mode adaptor
+ * @type: DP dual mode adaptor type
+ * @adapter: I2C adapter for the DDC bus
+ * @enable: enable (as opposed to disable) the TMDS output buffers
+ *
+ * Set the state of the TMDS output buffers in the adaptor. For
+ * type2 this is set via the DP_DUAL_MODE_TMDS_OEN register. As
+ * some type 1 adaptors have problems with registers (see comments
+ * in drm_dp_dual_mode_detect()) we avoid touching the register,
+ * making this function a no-op on type 1 adaptors.
+ *
+ * Returns:
+ * 0 on success, negative error code on failure
+ */
+int drm_dp_dual_mode_set_tmds_output(enum drm_dp_dual_mode_type type,
+ struct i2c_adapter *adapter, bool enable)
+{
+ uint8_t tmds_oen = enable ? 0 : DP_DUAL_MODE_TMDS_DISABLE;
+ ssize_t ret;
+
+ if (type < DRM_DP_DUAL_MODE_TYPE2_DVI)
+ return 0;
+
+ ret = drm_dp_dual_mode_write(adapter, DP_DUAL_MODE_TMDS_OEN,
+ &tmds_oen, sizeof(tmds_oen));
+ if (ret) {
+ DRM_DEBUG_KMS("Failed to %s TMDS output buffers\n",
+ enable ? "enable" : "disable");
+ return ret;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL(drm_dp_dual_mode_set_tmds_output);
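
A hedged sketch of how the get/set pair could be used together (the i915 wrapper added later in this patch only uses the setter, keyed off the detected adaptor type):

static int example_ensure_tmds_on(enum drm_dp_dual_mode_type type,
				  struct i2c_adapter *ddc)
{
	bool enabled;
	int ret;

	ret = drm_dp_dual_mode_get_tmds_output(type, ddc, &enabled);
	if (ret)
		return ret;

	/* make sure the buffers are driven before enabling the port */
	if (!enabled)
		ret = drm_dp_dual_mode_set_tmds_output(type, ddc, true);

	return ret;
}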
+
+/**
+ * drm_dp_get_dual_mode_type_name - Get the name of the DP dual mode adaptor type as a string
+ * @type: DP dual mode adaptor type
+ *
+ * Returns:
+ * String representation of the DP dual mode adaptor type
+ */
+const char *drm_dp_get_dual_mode_type_name(enum drm_dp_dual_mode_type type)
+{
+ switch (type) {
+ case DRM_DP_DUAL_MODE_NONE:
+ return "none";
+ case DRM_DP_DUAL_MODE_TYPE1_DVI:
+ return "type 1 DVI";
+ case DRM_DP_DUAL_MODE_TYPE1_HDMI:
+ return "type 1 HDMI";
+ case DRM_DP_DUAL_MODE_TYPE2_DVI:
+ return "type 2 DVI";
+ case DRM_DP_DUAL_MODE_TYPE2_HDMI:
+ return "type 2 HDMI";
+ default:
+ WARN_ON(type != DRM_DP_DUAL_MODE_UNKNOWN);
+ return "unknown";
+ }
+}
+EXPORT_SYMBOL(drm_dp_get_dual_mode_type_name);
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
index 236ada93df53..afdd55ddf821 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
@@ -28,11 +28,6 @@
#define BO_LOCKED 0x4000
#define BO_PINNED 0x2000
-static inline void __user *to_user_ptr(u64 address)
-{
- return (void __user *)(uintptr_t)address;
-}
-
static struct etnaviv_gem_submit *submit_create(struct drm_device *dev,
struct etnaviv_gpu *gpu, size_t nr)
{
@@ -347,21 +342,21 @@ int etnaviv_ioctl_gem_submit(struct drm_device *dev, void *data,
cmdbuf->exec_state = args->exec_state;
cmdbuf->ctx = file->driver_priv;
- ret = copy_from_user(bos, to_user_ptr(args->bos),
+ ret = copy_from_user(bos, u64_to_user_ptr(args->bos),
args->nr_bos * sizeof(*bos));
if (ret) {
ret = -EFAULT;
goto err_submit_cmds;
}
- ret = copy_from_user(relocs, to_user_ptr(args->relocs),
+ ret = copy_from_user(relocs, u64_to_user_ptr(args->relocs),
args->nr_relocs * sizeof(*relocs));
if (ret) {
ret = -EFAULT;
goto err_submit_cmds;
}
- ret = copy_from_user(stream, to_user_ptr(args->stream),
+ ret = copy_from_user(stream, u64_to_user_ptr(args->stream),
args->stream_size);
if (ret) {
ret = -EFAULT;
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
index 049d00d8ded5..ff6aa5dfb2d7 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_gpu.c
@@ -796,9 +796,9 @@ int etnaviv_gpu_debugfs(struct etnaviv_gpu *gpu, struct seq_file *m)
debug.state[0] == debug.state[1]) {
seq_puts(m, "seems to be stuck\n");
} else if (debug.address[0] == debug.address[1]) {
- seq_puts(m, "adress is constant\n");
+ seq_puts(m, "address is constant\n");
} else {
- seq_puts(m, "is runing\n");
+ seq_puts(m, "is running\n");
}
seq_printf(m, "\t address 0: 0x%08x\n", debug.address[0]);
diff --git a/drivers/gpu/drm/i915/i915_debugfs.c b/drivers/gpu/drm/i915/i915_debugfs.c
index 8b8d6f07d7aa..32690332d441 100644
--- a/drivers/gpu/drm/i915/i915_debugfs.c
+++ b/drivers/gpu/drm/i915/i915_debugfs.c
@@ -2921,20 +2921,6 @@ static void intel_dp_info(struct seq_file *m,
intel_panel_info(m, &intel_connector->panel);
}
-static void intel_dp_mst_info(struct seq_file *m,
- struct intel_connector *intel_connector)
-{
- struct intel_encoder *intel_encoder = intel_connector->encoder;
- struct intel_dp_mst_encoder *intel_mst =
- enc_to_mst(&intel_encoder->base);
- struct intel_digital_port *intel_dig_port = intel_mst->primary;
- struct intel_dp *intel_dp = &intel_dig_port->dp;
- bool has_audio = drm_dp_mst_port_has_audio(&intel_dp->mst_mgr,
- intel_connector->port);
-
- seq_printf(m, "\taudio support: %s\n", yesno(has_audio));
-}
-
static void intel_hdmi_info(struct seq_file *m,
struct intel_connector *intel_connector)
{
@@ -2978,8 +2964,6 @@ static void intel_connector_info(struct seq_file *m,
intel_hdmi_info(m, intel_connector);
else if (intel_encoder->type == INTEL_OUTPUT_LVDS)
intel_lvds_info(m, intel_connector);
- else if (intel_encoder->type == INTEL_OUTPUT_DP_MST)
- intel_dp_mst_info(m, intel_connector);
}
seq_printf(m, "\tmodes:\n");
diff --git a/drivers/gpu/drm/i915/i915_dma.c b/drivers/gpu/drm/i915/i915_dma.c
index 15615fb9bde6..b3198fcd0536 100644
--- a/drivers/gpu/drm/i915/i915_dma.c
+++ b/drivers/gpu/drm/i915/i915_dma.c
@@ -1183,6 +1183,12 @@ static int i915_driver_init_hw(struct drm_i915_private *dev_priv)
if (ret)
return ret;
+ ret = i915_ggtt_enable_hw(dev);
+ if (ret) {
+ DRM_ERROR("failed to enable GGTT\n");
+ goto out_ggtt;
+ }
+
/* WARNING: Apparently we must kick fbdev drivers before vgacon,
* otherwise the vga fbdev driver falls over. */
ret = i915_kick_out_firmware_fb(dev_priv);
diff --git a/drivers/gpu/drm/i915/i915_drv.c b/drivers/gpu/drm/i915/i915_drv.c
index 20d58987381d..e264a90d1b0d 100644
--- a/drivers/gpu/drm/i915/i915_drv.c
+++ b/drivers/gpu/drm/i915/i915_drv.c
@@ -732,9 +732,14 @@ int i915_suspend_switcheroo(struct drm_device *dev, pm_message_t state)
static int i915_drm_resume(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
+ int ret;
disable_rpm_wakeref_asserts(dev_priv);
+ ret = i915_ggtt_enable_hw(dev);
+ if (ret)
+ DRM_ERROR("failed to re-enable GGTT\n");
+
intel_csr_ucode_resume(dev_priv);
mutex_lock(&dev->struct_mutex);
diff --git a/drivers/gpu/drm/i915/i915_drv.h b/drivers/gpu/drm/i915/i915_drv.h
index 9d7b54ea14f9..5faacc6e548d 100644
--- a/drivers/gpu/drm/i915/i915_drv.h
+++ b/drivers/gpu/drm/i915/i915_drv.h
@@ -3482,6 +3482,7 @@ bool intel_bios_is_valid_vbt(const void *buf, size_t size);
bool intel_bios_is_tv_present(struct drm_i915_private *dev_priv);
bool intel_bios_is_lvds_present(struct drm_i915_private *dev_priv, u8 *i2c_pin);
bool intel_bios_is_port_edp(struct drm_i915_private *dev_priv, enum port port);
+bool intel_bios_is_port_dp_dual_mode(struct drm_i915_private *dev_priv, enum port port);
bool intel_bios_is_dsi_present(struct drm_i915_private *dev_priv, enum port *port);
bool intel_bios_is_port_hpd_inverted(struct drm_i915_private *dev_priv,
enum port port);
@@ -3675,11 +3676,6 @@ static inline i915_reg_t i915_vgacntrl_reg(struct drm_device *dev)
return VGACNTRL;
}
-static inline void __user *to_user_ptr(u64 address)
-{
- return (void __user *)(uintptr_t)address;
-}
-
static inline unsigned long msecs_to_jiffies_timeout(const unsigned int m)
{
unsigned long j = msecs_to_jiffies(m);
diff --git a/drivers/gpu/drm/i915/i915_gem.c b/drivers/gpu/drm/i915/i915_gem.c
index cdd2f438d4b4..aad26851cee3 100644
--- a/drivers/gpu/drm/i915/i915_gem.c
+++ b/drivers/gpu/drm/i915/i915_gem.c
@@ -319,7 +319,7 @@ i915_gem_phys_pwrite(struct drm_i915_gem_object *obj,
{
struct drm_device *dev = obj->base.dev;
void *vaddr = obj->phys_handle->vaddr + args->offset;
- char __user *user_data = to_user_ptr(args->data_ptr);
+ char __user *user_data = u64_to_user_ptr(args->data_ptr);
int ret = 0;
/* We manually control the domain here and pretend that it
@@ -600,7 +600,7 @@ i915_gem_shmem_pread(struct drm_device *dev,
int needs_clflush = 0;
struct sg_page_iter sg_iter;
- user_data = to_user_ptr(args->data_ptr);
+ user_data = u64_to_user_ptr(args->data_ptr);
remain = args->size;
obj_do_bit17_swizzling = i915_gem_object_needs_bit17_swizzle(obj);
@@ -687,7 +687,7 @@ i915_gem_pread_ioctl(struct drm_device *dev, void *data,
return 0;
if (!access_ok(VERIFY_WRITE,
- to_user_ptr(args->data_ptr),
+ u64_to_user_ptr(args->data_ptr),
args->size))
return -EFAULT;
@@ -779,7 +779,7 @@ i915_gem_gtt_pwrite_fast(struct drm_device *dev,
if (ret)
goto out_unpin;
- user_data = to_user_ptr(args->data_ptr);
+ user_data = u64_to_user_ptr(args->data_ptr);
remain = args->size;
offset = i915_gem_obj_ggtt_offset(obj) + args->offset;
@@ -903,7 +903,7 @@ i915_gem_shmem_pwrite(struct drm_device *dev,
int needs_clflush_before = 0;
struct sg_page_iter sg_iter;
- user_data = to_user_ptr(args->data_ptr);
+ user_data = u64_to_user_ptr(args->data_ptr);
remain = args->size;
obj_do_bit17_swizzling = i915_gem_object_needs_bit17_swizzle(obj);
@@ -1032,12 +1032,12 @@ i915_gem_pwrite_ioctl(struct drm_device *dev, void *data,
return 0;
if (!access_ok(VERIFY_READ,
- to_user_ptr(args->data_ptr),
+ u64_to_user_ptr(args->data_ptr),
args->size))
return -EFAULT;
if (likely(!i915.prefault_disable)) {
- ret = fault_in_multipages_readable(to_user_ptr(args->data_ptr),
+ ret = fault_in_multipages_readable(u64_to_user_ptr(args->data_ptr),
args->size);
if (ret)
return -EFAULT;
@@ -1456,7 +1456,10 @@ i915_wait_request(struct drm_i915_gem_request *req)
if (ret)
return ret;
- __i915_gem_request_retire__upto(req);
+ /* If the GPU hung, we want to keep the requests to find the guilty. */
+ if (req->reset_counter == i915_reset_counter(&dev_priv->gpu_error))
+ __i915_gem_request_retire__upto(req);
+
return 0;
}
@@ -1513,7 +1516,8 @@ i915_gem_object_retire_request(struct drm_i915_gem_object *obj,
else if (obj->last_write_req == req)
i915_gem_object_retire__write(obj);
- __i915_gem_request_retire__upto(req);
+ if (req->reset_counter == i915_reset_counter(&req->i915->gpu_error))
+ __i915_gem_request_retire__upto(req);
}
/* A nonblocking variant of the above wait. This is a highly dangerous routine
@@ -1699,7 +1703,7 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
if (args->flags & ~(I915_MMAP_WC))
return -EINVAL;
- if (args->flags & I915_MMAP_WC && !cpu_has_pat)
+ if (args->flags & I915_MMAP_WC && !boot_cpu_has(X86_FEATURE_PAT))
return -ENODEV;
obj = drm_gem_object_lookup(file, args->handle);
@@ -1721,7 +1725,10 @@ i915_gem_mmap_ioctl(struct drm_device *dev, void *data,
struct mm_struct *mm = current->mm;
struct vm_area_struct *vma;
- down_write(&mm->mmap_sem);
+ if (down_write_killable(&mm->mmap_sem)) {
+ drm_gem_object_unreference_unlocked(obj);
+ return -EINTR;
+ }
vma = find_vma(mm, addr);
if (vma)
vma->vm_page_prot =
@@ -4857,9 +4864,6 @@ i915_gem_init_hw(struct drm_device *dev)
struct intel_engine_cs *engine;
int ret, j;
- if (INTEL_INFO(dev)->gen < 6 && !intel_enable_gtt())
- return -EIO;
-
/* Double layer security blanket, see i915_gem_init() */
intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
diff --git a/drivers/gpu/drm/i915/i915_gem_execbuffer.c b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
index 6f4f2a6cdf93..33df74d98269 100644
--- a/drivers/gpu/drm/i915/i915_gem_execbuffer.c
+++ b/drivers/gpu/drm/i915/i915_gem_execbuffer.c
@@ -489,7 +489,7 @@ i915_gem_execbuffer_relocate_entry(struct drm_i915_gem_object *obj,
ret = relocate_entry_cpu(obj, reloc, target_offset);
else if (obj->map_and_fenceable)
ret = relocate_entry_gtt(obj, reloc, target_offset);
- else if (cpu_has_clflush)
+ else if (static_cpu_has(X86_FEATURE_CLFLUSH))
ret = relocate_entry_clflush(obj, reloc, target_offset);
else {
WARN_ONCE(1, "Impossible case in relocation handling\n");
@@ -515,7 +515,7 @@ i915_gem_execbuffer_relocate_vma(struct i915_vma *vma,
struct drm_i915_gem_exec_object2 *entry = vma->exec_entry;
int remain, ret;
- user_relocs = to_user_ptr(entry->relocs_ptr);
+ user_relocs = u64_to_user_ptr(entry->relocs_ptr);
remain = entry->relocation_count;
while (remain) {
@@ -536,9 +536,7 @@ i915_gem_execbuffer_relocate_vma(struct i915_vma *vma,
return ret;
if (r->presumed_offset != offset &&
- __copy_to_user_inatomic(&user_relocs->presumed_offset,
- &r->presumed_offset,
- sizeof(r->presumed_offset))) {
+ __put_user(r->presumed_offset, &user_relocs->presumed_offset)) {
return -EFAULT;
}
@@ -869,7 +867,7 @@ i915_gem_execbuffer_relocate_slow(struct drm_device *dev,
u64 invalid_offset = (u64)-1;
int j;
- user_relocs = to_user_ptr(exec[i].relocs_ptr);
+ user_relocs = u64_to_user_ptr(exec[i].relocs_ptr);
if (copy_from_user(reloc+total, user_relocs,
exec[i].relocation_count * sizeof(*reloc))) {
@@ -1014,7 +1012,7 @@ validate_exec_list(struct drm_device *dev,
invalid_flags |= EXEC_OBJECT_NEEDS_GTT;
for (i = 0; i < count; i++) {
- char __user *ptr = to_user_ptr(exec[i].relocs_ptr);
+ char __user *ptr = u64_to_user_ptr(exec[i].relocs_ptr);
int length; /* limited by fault_in_pages_readable() */
if (exec[i].flags & invalid_flags)
@@ -1697,7 +1695,7 @@ i915_gem_execbuffer(struct drm_device *dev, void *data,
return -ENOMEM;
}
ret = copy_from_user(exec_list,
- to_user_ptr(args->buffers_ptr),
+ u64_to_user_ptr(args->buffers_ptr),
sizeof(*exec_list) * args->buffer_count);
if (ret != 0) {
DRM_DEBUG("copy %d exec entries failed %d\n",
@@ -1733,7 +1731,7 @@ i915_gem_execbuffer(struct drm_device *dev, void *data,
ret = i915_gem_do_execbuffer(dev, data, file, &exec2, exec2_list);
if (!ret) {
struct drm_i915_gem_exec_object __user *user_exec_list =
- to_user_ptr(args->buffers_ptr);
+ u64_to_user_ptr(args->buffers_ptr);
/* Copy the new buffer offsets back to the user's exec list. */
for (i = 0; i < args->buffer_count; i++) {
@@ -1785,7 +1783,7 @@ i915_gem_execbuffer2(struct drm_device *dev, void *data,
return -ENOMEM;
}
ret = copy_from_user(exec2_list,
- to_user_ptr(args->buffers_ptr),
+ u64_to_user_ptr(args->buffers_ptr),
sizeof(*exec2_list) * args->buffer_count);
if (ret != 0) {
DRM_DEBUG("copy %d exec entries failed %d\n",
@@ -1798,7 +1796,7 @@ i915_gem_execbuffer2(struct drm_device *dev, void *data,
if (!ret) {
/* Copy the new buffer offsets back to the user's exec list. */
struct drm_i915_gem_exec_object2 __user *user_exec_list =
- to_user_ptr(args->buffers_ptr);
+ u64_to_user_ptr(args->buffers_ptr);
int i;
for (i = 0; i < args->buffer_count; i++) {
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.c b/drivers/gpu/drm/i915/i915_gem_gtt.c
index 0d666b3f7e9b..92acdff9dad3 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.c
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.c
@@ -3236,6 +3236,14 @@ out_gtt_cleanup:
return ret;
}
+int i915_ggtt_enable_hw(struct drm_device *dev)
+{
+ if (INTEL_INFO(dev)->gen < 6 && !intel_enable_gtt())
+ return -EIO;
+
+ return 0;
+}
+
void i915_gem_restore_gtt_mappings(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = to_i915(dev);
diff --git a/drivers/gpu/drm/i915/i915_gem_gtt.h b/drivers/gpu/drm/i915/i915_gem_gtt.h
index d7dd3d8a8758..0008543d55f6 100644
--- a/drivers/gpu/drm/i915/i915_gem_gtt.h
+++ b/drivers/gpu/drm/i915/i915_gem_gtt.h
@@ -514,6 +514,7 @@ i915_page_dir_dma_addr(const struct i915_hw_ppgtt *ppgtt, const unsigned n)
}
int i915_ggtt_init_hw(struct drm_device *dev);
+int i915_ggtt_enable_hw(struct drm_device *dev);
void i915_gem_init_ggtt(struct drm_device *dev);
void i915_ggtt_cleanup_hw(struct drm_device *dev);
diff --git a/drivers/gpu/drm/i915/i915_reg.h b/drivers/gpu/drm/i915/i915_reg.h
index 58ac6c7c690b..b407411e31ba 100644
--- a/drivers/gpu/drm/i915/i915_reg.h
+++ b/drivers/gpu/drm/i915/i915_reg.h
@@ -7515,6 +7515,8 @@ enum skl_disp_power_wells {
#define TRANS_CLK_SEL_DISABLED (0x0<<29)
#define TRANS_CLK_SEL_PORT(x) (((x)+1)<<29)
+#define CDCLK_FREQ _MMIO(0x46200)
+
#define _TRANSA_MSA_MISC 0x60410
#define _TRANSB_MSA_MISC 0x61410
#define _TRANSC_MSA_MISC 0x62410
diff --git a/drivers/gpu/drm/i915/intel_audio.c b/drivers/gpu/drm/i915/intel_audio.c
index 56ba8765816e..02a7527ce7bb 100644
--- a/drivers/gpu/drm/i915/intel_audio.c
+++ b/drivers/gpu/drm/i915/intel_audio.c
@@ -262,8 +262,7 @@ static void hsw_audio_codec_disable(struct intel_encoder *encoder)
tmp |= AUD_CONFIG_N_PROG_ENABLE;
tmp &= ~AUD_CONFIG_UPPER_N_MASK;
tmp &= ~AUD_CONFIG_LOWER_N_MASK;
- if (intel_pipe_has_type(intel_crtc, INTEL_OUTPUT_DISPLAYPORT) ||
- intel_pipe_has_type(intel_crtc, INTEL_OUTPUT_DP_MST))
+ if (intel_pipe_has_type(intel_crtc, INTEL_OUTPUT_DISPLAYPORT))
tmp |= AUD_CONFIG_N_VALUE_INDEX;
I915_WRITE(HSW_AUD_CFG(pipe), tmp);
@@ -476,8 +475,7 @@ static void ilk_audio_codec_enable(struct drm_connector *connector,
tmp &= ~AUD_CONFIG_N_VALUE_INDEX;
tmp &= ~AUD_CONFIG_N_PROG_ENABLE;
tmp &= ~AUD_CONFIG_PIXEL_CLOCK_HDMI_MASK;
- if (intel_pipe_has_type(intel_crtc, INTEL_OUTPUT_DISPLAYPORT) ||
- intel_pipe_has_type(intel_crtc, INTEL_OUTPUT_DP_MST))
+ if (intel_pipe_has_type(intel_crtc, INTEL_OUTPUT_DISPLAYPORT))
tmp |= AUD_CONFIG_N_VALUE_INDEX;
else
tmp |= audio_config_hdmi_pixel_clock(adjusted_mode);
@@ -515,8 +513,7 @@ void intel_audio_codec_enable(struct intel_encoder *intel_encoder)
/* ELD Conn_Type */
connector->eld[5] &= ~(3 << 2);
- if (intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT) ||
- intel_pipe_has_type(crtc, INTEL_OUTPUT_DP_MST))
+ if (intel_pipe_has_type(crtc, INTEL_OUTPUT_DISPLAYPORT))
connector->eld[5] |= (1 << 2);
connector->eld[6] = drm_av_sync_delay(connector, adjusted_mode) / 2;
diff --git a/drivers/gpu/drm/i915/intel_bios.c b/drivers/gpu/drm/i915/intel_bios.c
index e72dd9a8d6bf..b235b6e88ead 100644
--- a/drivers/gpu/drm/i915/intel_bios.c
+++ b/drivers/gpu/drm/i915/intel_bios.c
@@ -1578,6 +1578,42 @@ bool intel_bios_is_port_edp(struct drm_i915_private *dev_priv, enum port port)
return false;
}
+bool intel_bios_is_port_dp_dual_mode(struct drm_i915_private *dev_priv, enum port port)
+{
+ static const struct {
+ u16 dp, hdmi;
+ } port_mapping[] = {
+ /*
+ * Buggy VBTs may declare DP ports as having
+ * HDMI type dvo_port :( So let's check both.
+ */
+ [PORT_B] = { DVO_PORT_DPB, DVO_PORT_HDMIB, },
+ [PORT_C] = { DVO_PORT_DPC, DVO_PORT_HDMIC, },
+ [PORT_D] = { DVO_PORT_DPD, DVO_PORT_HDMID, },
+ [PORT_E] = { DVO_PORT_DPE, DVO_PORT_HDMIE, },
+ };
+ int i;
+
+ if (port == PORT_A || port >= ARRAY_SIZE(port_mapping))
+ return false;
+
+ if (!dev_priv->vbt.child_dev_num)
+ return false;
+
+ for (i = 0; i < dev_priv->vbt.child_dev_num; i++) {
+ const union child_device_config *p_child =
+ &dev_priv->vbt.child_dev[i];
+
+ if ((p_child->common.dvo_port == port_mapping[port].dp ||
+ p_child->common.dvo_port == port_mapping[port].hdmi) &&
+ (p_child->common.device_type & DEVICE_TYPE_DP_DUAL_MODE_BITS) ==
+ (DEVICE_TYPE_DP_DUAL_MODE & DEVICE_TYPE_DP_DUAL_MODE_BITS))
+ return true;
+ }
+
+ return false;
+}
+
/**
* intel_bios_is_dsi_present - is DSI present in VBT
* @dev_priv: i915 device instance
diff --git a/drivers/gpu/drm/i915/intel_crt.c b/drivers/gpu/drm/i915/intel_crt.c
index a2a31fd01d1d..3fbb6fc66451 100644
--- a/drivers/gpu/drm/i915/intel_crt.c
+++ b/drivers/gpu/drm/i915/intel_crt.c
@@ -261,8 +261,14 @@ static bool intel_crt_compute_config(struct intel_encoder *encoder,
pipe_config->has_pch_encoder = true;
/* LPT FDI RX only supports 8bpc. */
- if (HAS_PCH_LPT(dev))
+ if (HAS_PCH_LPT(dev)) {
+ if (pipe_config->bw_constrained && pipe_config->pipe_bpp < 24) {
+ DRM_DEBUG_KMS("LPT only supports 24bpp\n");
+ return false;
+ }
+
pipe_config->pipe_bpp = 24;
+ }
/* FDI must always be 2.7 GHz */
if (HAS_DDI(dev))
diff --git a/drivers/gpu/drm/i915/intel_ddi.c b/drivers/gpu/drm/i915/intel_ddi.c
index e30e1781fd71..01e523df363b 100644
--- a/drivers/gpu/drm/i915/intel_ddi.c
+++ b/drivers/gpu/drm/i915/intel_ddi.c
@@ -1601,6 +1601,12 @@ static void intel_ddi_pre_enable(struct intel_encoder *intel_encoder)
enum port port = intel_ddi_get_encoder_port(intel_encoder);
int type = intel_encoder->type;
+ if (type == INTEL_OUTPUT_HDMI) {
+ struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(encoder);
+
+ intel_dp_dual_mode_set_tmds_output(intel_hdmi, true);
+ }
+
intel_prepare_ddi_buffer(intel_encoder);
if (type == INTEL_OUTPUT_EDP) {
@@ -1667,6 +1673,12 @@ static void intel_ddi_post_disable(struct intel_encoder *intel_encoder)
DPLL_CTRL2_DDI_CLK_OFF(port)));
else if (INTEL_INFO(dev)->gen < 9)
I915_WRITE(PORT_CLK_SEL(port), PORT_CLK_SEL_NONE);
+
+ if (type == INTEL_OUTPUT_HDMI) {
+ struct intel_hdmi *intel_hdmi = enc_to_intel_hdmi(encoder);
+
+ intel_dp_dual_mode_set_tmds_output(intel_hdmi, false);
+ }
}
static void intel_enable_ddi(struct intel_encoder *intel_encoder)
@@ -2131,23 +2143,6 @@ void intel_ddi_fdi_disable(struct drm_crtc *crtc)
I915_WRITE(FDI_RX_CTL(PIPE_A), val);
}
-bool intel_ddi_is_audio_enabled(struct drm_i915_private *dev_priv,
- struct intel_crtc *intel_crtc)
-{
- u32 temp;
-
- if (intel_display_power_get_if_enabled(dev_priv, POWER_DOMAIN_AUDIO)) {
- temp = I915_READ(HSW_AUD_PIN_ELD_CP_VLD);
-
- intel_display_power_put(dev_priv, POWER_DOMAIN_AUDIO);
-
- if (temp & AUDIO_OUTPUT_ENABLE(intel_crtc->pipe))
- return true;
- }
-
- return false;
-}
-
void intel_ddi_get_config(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config)
{
@@ -2197,8 +2192,10 @@ void intel_ddi_get_config(struct intel_encoder *encoder,
if (intel_hdmi->infoframe_enabled(&encoder->base, pipe_config))
pipe_config->has_infoframe = true;
- break;
+ /* fall through */
case TRANS_DDI_MODE_SELECT_DVI:
+ pipe_config->lane_count = 4;
+ break;
case TRANS_DDI_MODE_SELECT_FDI:
break;
case TRANS_DDI_MODE_SELECT_DP_SST:
@@ -2212,8 +2209,11 @@ void intel_ddi_get_config(struct intel_encoder *encoder,
break;
}
- pipe_config->has_audio =
- intel_ddi_is_audio_enabled(dev_priv, intel_crtc);
+ if (intel_display_power_is_enabled(dev_priv, POWER_DOMAIN_AUDIO)) {
+ temp = I915_READ(HSW_AUD_PIN_ELD_CP_VLD);
+ if (temp & AUDIO_OUTPUT_ENABLE(intel_crtc->pipe))
+ pipe_config->has_audio = true;
+ }
if (encoder->type == INTEL_OUTPUT_EDP && dev_priv->vbt.edp.bpp &&
pipe_config->pipe_bpp > dev_priv->vbt.edp.bpp) {
diff --git a/drivers/gpu/drm/i915/intel_display.c b/drivers/gpu/drm/i915/intel_display.c
index 9cfbf2fe941e..2113f401f0ba 100644
--- a/drivers/gpu/drm/i915/intel_display.c
+++ b/drivers/gpu/drm/i915/intel_display.c
@@ -8026,9 +8026,6 @@ static void i9xx_get_pfit_config(struct intel_crtc *crtc,
pipe_config->gmch_pfit.control = tmp;
pipe_config->gmch_pfit.pgm_ratios = I915_READ(PFIT_PGM_RATIOS);
- if (INTEL_INFO(dev)->gen < 5)
- pipe_config->gmch_pfit.lvds_border_bits =
- I915_READ(LVDS) & LVDS_BORDER_ENABLE;
}
static void vlv_crtc_clock_get(struct intel_crtc *crtc,
@@ -9681,6 +9678,8 @@ static void broadwell_set_cdclk(struct drm_device *dev, int cdclk)
sandybridge_pcode_write(dev_priv, HSW_PCODE_DE_WRITE_FREQ_REQ, data);
mutex_unlock(&dev_priv->rps.hw_lock);
+ I915_WRITE(CDCLK_FREQ, DIV_ROUND_CLOSEST(cdclk, 1000) - 1);
+
intel_update_cdclk(dev);
WARN(cdclk != dev_priv->cdclk_freq,
@@ -12006,6 +12005,9 @@ static int intel_crtc_atomic_check(struct drm_crtc *crtc,
DRM_DEBUG_KMS("No valid intermediate pipe watermarks are possible\n");
return ret;
}
+ } else if (dev_priv->display.compute_intermediate_wm) {
+ if (HAS_PCH_SPLIT(dev_priv) && INTEL_GEN(dev_priv) < 9)
+ pipe_config->wm.intermediate = pipe_config->wm.optimal.ilk;
}
if (INTEL_INFO(dev)->gen >= 9) {
@@ -15991,6 +15993,9 @@ retry:
state->acquire_ctx = &ctx;
+ /* ignore any reset values/BIOS leftovers in the WM registers */
+ to_intel_atomic_state(state)->skip_intermediate_wm = true;
+
for_each_crtc_in_state(state, crtc, crtc_state, i) {
/*
* Force recalculation even if we restore
diff --git a/drivers/gpu/drm/i915/intel_dp_mst.c b/drivers/gpu/drm/i915/intel_dp_mst.c
index b6bf7fd0b806..7a34090cef34 100644
--- a/drivers/gpu/drm/i915/intel_dp_mst.c
+++ b/drivers/gpu/drm/i915/intel_dp_mst.c
@@ -77,8 +77,6 @@ static bool intel_dp_mst_compute_config(struct intel_encoder *encoder,
return false;
}
- if (drm_dp_mst_port_has_audio(&intel_dp->mst_mgr, found->port))
- pipe_config->has_audio = true;
mst_pbn = drm_dp_calc_pbn_mode(adjusted_mode->crtc_clock, bpp);
pipe_config->pbn = mst_pbn;
@@ -100,11 +98,6 @@ static void intel_mst_disable_dp(struct intel_encoder *encoder)
struct intel_dp_mst_encoder *intel_mst = enc_to_mst(&encoder->base);
struct intel_digital_port *intel_dig_port = intel_mst->primary;
struct intel_dp *intel_dp = &intel_dig_port->dp;
- struct drm_device *dev = encoder->base.dev;
- struct drm_i915_private *dev_priv = dev->dev_private;
- struct drm_crtc *crtc = encoder->base.crtc;
- struct intel_crtc *intel_crtc = to_intel_crtc(crtc);
-
int ret;
DRM_DEBUG_KMS("%d\n", intel_dp->active_mst_links);
@@ -115,10 +108,6 @@ static void intel_mst_disable_dp(struct intel_encoder *encoder)
if (ret) {
DRM_ERROR("failed to update payload %d\n", ret);
}
- if (intel_crtc->config->has_audio) {
- intel_audio_codec_disable(encoder);
- intel_display_power_put(dev_priv, POWER_DOMAIN_AUDIO);
- }
}
static void intel_mst_post_disable_dp(struct intel_encoder *encoder)
@@ -219,7 +208,6 @@ static void intel_mst_enable_dp(struct intel_encoder *encoder)
struct intel_dp *intel_dp = &intel_dig_port->dp;
struct drm_device *dev = intel_dig_port->base.base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- struct intel_crtc *crtc = to_intel_crtc(encoder->base.crtc);
enum port port = intel_dig_port->port;
int ret;
@@ -232,13 +220,6 @@ static void intel_mst_enable_dp(struct intel_encoder *encoder)
ret = drm_dp_check_act_status(&intel_dp->mst_mgr);
ret = drm_dp_update_payload_part2(&intel_dp->mst_mgr);
-
- if (crtc->config->has_audio) {
- DRM_DEBUG_DRIVER("Enabling DP audio on pipe %c\n",
- pipe_name(crtc->pipe));
- intel_display_power_get(dev_priv, POWER_DOMAIN_AUDIO);
- intel_audio_codec_enable(encoder);
- }
}
static bool intel_dp_mst_enc_get_hw_state(struct intel_encoder *encoder,
@@ -264,9 +245,6 @@ static void intel_dp_mst_enc_get_config(struct intel_encoder *encoder,
pipe_config->has_dp_encoder = true;
- pipe_config->has_audio =
- intel_ddi_is_audio_enabled(dev_priv, crtc);
-
temp = I915_READ(TRANS_DDI_FUNC_CTL(cpu_transcoder));
if (temp & TRANS_DDI_PHSYNC)
flags |= DRM_MODE_FLAG_PHSYNC;
diff --git a/drivers/gpu/drm/i915/intel_dpll_mgr.c b/drivers/gpu/drm/i915/intel_dpll_mgr.c
index 639bf0209c15..3ac705936b04 100644
--- a/drivers/gpu/drm/i915/intel_dpll_mgr.c
+++ b/drivers/gpu/drm/i915/intel_dpll_mgr.c
@@ -1702,9 +1702,9 @@ static const struct intel_dpll_mgr hsw_pll_mgr = {
static const struct dpll_info skl_plls[] = {
{ "DPLL 0", DPLL_ID_SKL_DPLL0, &skl_ddi_dpll0_funcs, INTEL_DPLL_ALWAYS_ON },
- { "DPPL 1", DPLL_ID_SKL_DPLL1, &skl_ddi_pll_funcs, 0 },
- { "DPPL 2", DPLL_ID_SKL_DPLL2, &skl_ddi_pll_funcs, 0 },
- { "DPPL 3", DPLL_ID_SKL_DPLL3, &skl_ddi_pll_funcs, 0 },
+ { "DPLL 1", DPLL_ID_SKL_DPLL1, &skl_ddi_pll_funcs, 0 },
+ { "DPLL 2", DPLL_ID_SKL_DPLL2, &skl_ddi_pll_funcs, 0 },
+ { "DPLL 3", DPLL_ID_SKL_DPLL3, &skl_ddi_pll_funcs, 0 },
{ NULL, -1, NULL, },
};
diff --git a/drivers/gpu/drm/i915/intel_drv.h b/drivers/gpu/drm/i915/intel_drv.h
index 315c971b5b31..a28b4aac1e02 100644
--- a/drivers/gpu/drm/i915/intel_drv.h
+++ b/drivers/gpu/drm/i915/intel_drv.h
@@ -33,6 +33,7 @@
#include <drm/drm_crtc.h>
#include <drm/drm_crtc_helper.h>
#include <drm/drm_fb_helper.h>
+#include <drm/drm_dp_dual_mode_helper.h>
#include <drm/drm_dp_mst_helper.h>
#include <drm/drm_rect.h>
#include <drm/drm_atomic.h>
@@ -753,6 +754,10 @@ struct cxsr_latency {
struct intel_hdmi {
i915_reg_t hdmi_reg;
int ddc_bus;
+ struct {
+ enum drm_dp_dual_mode_type type;
+ int max_tmds_clock;
+ } dp_dual_mode;
bool limited_color_range;
bool color_range_auto;
bool has_hdmi_sink;
@@ -1070,8 +1075,6 @@ void intel_ddi_set_pipe_settings(struct drm_crtc *crtc);
void intel_ddi_prepare_link_retrain(struct intel_dp *intel_dp);
bool intel_ddi_connector_get_hw_state(struct intel_connector *intel_connector);
void intel_ddi_fdi_disable(struct drm_crtc *crtc);
-bool intel_ddi_is_audio_enabled(struct drm_i915_private *dev_priv,
- struct intel_crtc *intel_crtc);
void intel_ddi_get_config(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config);
struct intel_encoder *
@@ -1403,6 +1406,7 @@ void intel_hdmi_init_connector(struct intel_digital_port *intel_dig_port,
struct intel_hdmi *enc_to_intel_hdmi(struct drm_encoder *encoder);
bool intel_hdmi_compute_config(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config);
+void intel_dp_dual_mode_set_tmds_output(struct intel_hdmi *hdmi, bool enable);
/* intel_lvds.c */
diff --git a/drivers/gpu/drm/i915/intel_dsi.c b/drivers/gpu/drm/i915/intel_dsi.c
index 2b22bb9bb86f..366ad6c67ce4 100644
--- a/drivers/gpu/drm/i915/intel_dsi.c
+++ b/drivers/gpu/drm/i915/intel_dsi.c
@@ -46,6 +46,22 @@ static const struct {
},
};
+/* return pixels in terms of txbyteclkhs */
+static u16 txbyteclkhs(u16 pixels, int bpp, int lane_count,
+ u16 burst_mode_ratio)
+{
+ return DIV_ROUND_UP(DIV_ROUND_UP(pixels * bpp * burst_mode_ratio,
+ 8 * 100), lane_count);
+}
+
+/* return pixels equivalent to txbyteclkhs */
+static u16 pixels_from_txbyteclkhs(u16 clk_hs, int bpp, int lane_count,
+ u16 burst_mode_ratio)
+{
+ return DIV_ROUND_UP((clk_hs * lane_count * 8 * 100),
+ (bpp * burst_mode_ratio));
+}
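
A quick worked round-trip with hypothetical values shows when the conversion is exact, and where the round-up error handled later in bxt_dsi_get_pipe_config() comes from:

/*
 * Example with bpp = 24, lane_count = 4, burst_mode_ratio = 100:
 *
 *	txbyteclkhs(100)            = DIV_ROUND_UP(DIV_ROUND_UP(100 * 24 * 100,
 *					8 * 100), 4)
 *				    = DIV_ROUND_UP(300, 4) = 75
 *	pixels_from_txbyteclkhs(75) = DIV_ROUND_UP(75 * 4 * 8 * 100, 24 * 100)
 *				    = DIV_ROUND_UP(240000, 2400) = 100
 *
 * Timings that do not divide evenly pick up a DIV_ROUND_UP error in one
 * direction, which is what the SW-state comparison in
 * bxt_dsi_get_pipe_config() compensates for.
 */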
+
enum mipi_dsi_pixel_format pixel_format_from_register_bits(u32 fmt)
{
/* It just so happens the VBT matches register contents. */
@@ -780,10 +796,19 @@ static void bxt_dsi_get_pipe_config(struct intel_encoder *encoder,
struct drm_i915_private *dev_priv = dev->dev_private;
struct drm_display_mode *adjusted_mode =
&pipe_config->base.adjusted_mode;
+ struct drm_display_mode *adjusted_mode_sw;
+ struct intel_crtc *intel_crtc;
struct intel_dsi *intel_dsi = enc_to_intel_dsi(&encoder->base);
+ unsigned int lane_count = intel_dsi->lane_count;
unsigned int bpp, fmt;
enum port port;
- u16 vfp, vsync, vbp;
+ u16 hactive, hfp, hsync, hbp, vfp, vsync, vbp;
+ u16 hfp_sw, hsync_sw, hbp_sw;
+ u16 crtc_htotal_sw, crtc_hsync_start_sw, crtc_hsync_end_sw,
+ crtc_hblank_start_sw, crtc_hblank_end_sw;
+
+ intel_crtc = to_intel_crtc(encoder->base.crtc);
+ adjusted_mode_sw = &intel_crtc->config->base.adjusted_mode;
/*
	 * At least one port is active, as encoder->get_config is called only if
@@ -808,26 +833,118 @@ static void bxt_dsi_get_pipe_config(struct intel_encoder *encoder,
adjusted_mode->crtc_vtotal =
I915_READ(BXT_MIPI_TRANS_VTOTAL(port));
+ hactive = adjusted_mode->crtc_hdisplay;
+ hfp = I915_READ(MIPI_HFP_COUNT(port));
+
/*
- * TODO: Retrieve hfp, hsync and hbp. Adjust them for dual link and
- * calculate hsync_start, hsync_end, htotal and hblank_end
+ * Meaningful for video mode non-burst sync pulse mode only,
+ * can be zero for non-burst sync events and burst modes
*/
+ hsync = I915_READ(MIPI_HSYNC_PADDING_COUNT(port));
+ hbp = I915_READ(MIPI_HBP_COUNT(port));
+
+	/* horizontal values are in terms of high speed byte clock */
+ hfp = pixels_from_txbyteclkhs(hfp, bpp, lane_count,
+ intel_dsi->burst_mode_ratio);
+ hsync = pixels_from_txbyteclkhs(hsync, bpp, lane_count,
+ intel_dsi->burst_mode_ratio);
+ hbp = pixels_from_txbyteclkhs(hbp, bpp, lane_count,
+ intel_dsi->burst_mode_ratio);
+
+ if (intel_dsi->dual_link) {
+ hfp *= 2;
+ hsync *= 2;
+ hbp *= 2;
+ }
/* vertical values are in terms of lines */
vfp = I915_READ(MIPI_VFP_COUNT(port));
vsync = I915_READ(MIPI_VSYNC_PADDING_COUNT(port));
vbp = I915_READ(MIPI_VBP_COUNT(port));
+ adjusted_mode->crtc_htotal = hactive + hfp + hsync + hbp;
+ adjusted_mode->crtc_hsync_start = hfp + adjusted_mode->crtc_hdisplay;
+ adjusted_mode->crtc_hsync_end = hsync + adjusted_mode->crtc_hsync_start;
adjusted_mode->crtc_hblank_start = adjusted_mode->crtc_hdisplay;
+ adjusted_mode->crtc_hblank_end = adjusted_mode->crtc_htotal;
- adjusted_mode->crtc_vsync_start =
- vfp + adjusted_mode->crtc_vdisplay;
- adjusted_mode->crtc_vsync_end =
- vsync + adjusted_mode->crtc_vsync_start;
+ adjusted_mode->crtc_vsync_start = vfp + adjusted_mode->crtc_vdisplay;
+ adjusted_mode->crtc_vsync_end = vsync + adjusted_mode->crtc_vsync_start;
adjusted_mode->crtc_vblank_start = adjusted_mode->crtc_vdisplay;
adjusted_mode->crtc_vblank_end = adjusted_mode->crtc_vtotal;
-}
+	/*
+	 * On BXT DSI some horizontal timings are programmed not in pixels
+	 * but in txbyteclkhs, so the retrieval process picks up round-up
+	 * errors in the pixels <=> txbyteclkhs conversions.
+	 * Here we take the given (SW state) adjusted_mode, calculate the
+	 * value that would be programmed to the port, and convert it back
+	 * to horizontal timings in pixels. That is the expected value,
+	 * including the round-up errors. If it matches the value retrieved
+	 * from the port, the HW state adjusted_mode's horizontal timings
+	 * are corrected to match the SW state, nullifying the errors.
+	 */
+ /* Calculating the value programmed to the Port register */
+ hfp_sw = adjusted_mode_sw->crtc_hsync_start -
+ adjusted_mode_sw->crtc_hdisplay;
+ hsync_sw = adjusted_mode_sw->crtc_hsync_end -
+ adjusted_mode_sw->crtc_hsync_start;
+ hbp_sw = adjusted_mode_sw->crtc_htotal -
+ adjusted_mode_sw->crtc_hsync_end;
+
+ if (intel_dsi->dual_link) {
+ hfp_sw /= 2;
+ hsync_sw /= 2;
+ hbp_sw /= 2;
+ }
+
+ hfp_sw = txbyteclkhs(hfp_sw, bpp, lane_count,
+ intel_dsi->burst_mode_ratio);
+ hsync_sw = txbyteclkhs(hsync_sw, bpp, lane_count,
+ intel_dsi->burst_mode_ratio);
+ hbp_sw = txbyteclkhs(hbp_sw, bpp, lane_count,
+ intel_dsi->burst_mode_ratio);
+
+ /* Reverse calculating the adjusted mode parameters from port reg vals*/
+ hfp_sw = pixels_from_txbyteclkhs(hfp_sw, bpp, lane_count,
+ intel_dsi->burst_mode_ratio);
+ hsync_sw = pixels_from_txbyteclkhs(hsync_sw, bpp, lane_count,
+ intel_dsi->burst_mode_ratio);
+ hbp_sw = pixels_from_txbyteclkhs(hbp_sw, bpp, lane_count,
+ intel_dsi->burst_mode_ratio);
+
+ if (intel_dsi->dual_link) {
+ hfp_sw *= 2;
+ hsync_sw *= 2;
+ hbp_sw *= 2;
+ }
+
+ crtc_htotal_sw = adjusted_mode_sw->crtc_hdisplay + hfp_sw +
+ hsync_sw + hbp_sw;
+ crtc_hsync_start_sw = hfp_sw + adjusted_mode_sw->crtc_hdisplay;
+ crtc_hsync_end_sw = hsync_sw + crtc_hsync_start_sw;
+ crtc_hblank_start_sw = adjusted_mode_sw->crtc_hdisplay;
+ crtc_hblank_end_sw = crtc_htotal_sw;
+
+ if (adjusted_mode->crtc_htotal == crtc_htotal_sw)
+ adjusted_mode->crtc_htotal = adjusted_mode_sw->crtc_htotal;
+
+ if (adjusted_mode->crtc_hsync_start == crtc_hsync_start_sw)
+ adjusted_mode->crtc_hsync_start =
+ adjusted_mode_sw->crtc_hsync_start;
+
+ if (adjusted_mode->crtc_hsync_end == crtc_hsync_end_sw)
+ adjusted_mode->crtc_hsync_end =
+ adjusted_mode_sw->crtc_hsync_end;
+
+ if (adjusted_mode->crtc_hblank_start == crtc_hblank_start_sw)
+ adjusted_mode->crtc_hblank_start =
+ adjusted_mode_sw->crtc_hblank_start;
+
+ if (adjusted_mode->crtc_hblank_end == crtc_hblank_end_sw)
+ adjusted_mode->crtc_hblank_end =
+ adjusted_mode_sw->crtc_hblank_end;
+}
static void intel_dsi_get_config(struct intel_encoder *encoder,
struct intel_crtc_state *pipe_config)
@@ -891,14 +1008,6 @@ static u16 txclkesc(u32 divider, unsigned int us)
}
}
-/* return pixels in terms of txbyteclkhs */
-static u16 txbyteclkhs(u16 pixels, int bpp, int lane_count,
- u16 burst_mode_ratio)
-{
- return DIV_ROUND_UP(DIV_ROUND_UP(pixels * bpp * burst_mode_ratio,
- 8 * 100), lane_count);
-}
-
static void set_dsi_timings(struct drm_encoder *encoder,
const struct drm_display_mode *adjusted_mode)
{
diff --git a/drivers/gpu/drm/i915/intel_hdmi.c b/drivers/gpu/drm/i915/intel_hdmi.c
index 2cdab73046f8..2c3bd9c2573e 100644
--- a/drivers/gpu/drm/i915/intel_hdmi.c
+++ b/drivers/gpu/drm/i915/intel_hdmi.c
@@ -836,6 +836,22 @@ static void hsw_set_infoframes(struct drm_encoder *encoder,
intel_hdmi_set_hdmi_infoframe(encoder, adjusted_mode);
}
+void intel_dp_dual_mode_set_tmds_output(struct intel_hdmi *hdmi, bool enable)
+{
+ struct drm_i915_private *dev_priv = to_i915(intel_hdmi_to_dev(hdmi));
+ struct i2c_adapter *adapter =
+ intel_gmbus_get_adapter(dev_priv, hdmi->ddc_bus);
+
+ if (hdmi->dp_dual_mode.type < DRM_DP_DUAL_MODE_TYPE2_DVI)
+ return;
+
+ DRM_DEBUG_KMS("%s DP dual mode adaptor TMDS output\n",
+ enable ? "Enabling" : "Disabling");
+
+ drm_dp_dual_mode_set_tmds_output(hdmi->dp_dual_mode.type,
+ adapter, enable);
+}
+
static void intel_hdmi_prepare(struct intel_encoder *encoder)
{
struct drm_device *dev = encoder->base.dev;
@@ -845,6 +861,8 @@ static void intel_hdmi_prepare(struct intel_encoder *encoder)
const struct drm_display_mode *adjusted_mode = &crtc->config->base.adjusted_mode;
u32 hdmi_val;
+ intel_dp_dual_mode_set_tmds_output(intel_hdmi, true);
+
hdmi_val = SDVO_ENCODING_HDMI;
if (!HAS_PCH_SPLIT(dev) && crtc->config->limited_color_range)
hdmi_val |= HDMI_COLOR_RANGE_16_235;
@@ -953,6 +971,8 @@ static void intel_hdmi_get_config(struct intel_encoder *encoder,
dotclock /= pipe_config->pixel_multiplier;
pipe_config->base.adjusted_mode.crtc_clock = dotclock;
+
+ pipe_config->lane_count = 4;
}
static void intel_enable_hdmi_audio(struct intel_encoder *encoder)
@@ -1140,6 +1160,8 @@ static void intel_disable_hdmi(struct intel_encoder *encoder)
}
intel_hdmi->set_infoframes(&encoder->base, false, NULL);
+
+ intel_dp_dual_mode_set_tmds_output(intel_hdmi, false);
}
static void g4x_disable_hdmi(struct intel_encoder *encoder)
@@ -1165,27 +1187,42 @@ static void pch_post_disable_hdmi(struct intel_encoder *encoder)
intel_disable_hdmi(encoder);
}
-static int hdmi_port_clock_limit(struct intel_hdmi *hdmi, bool respect_dvi_limit)
+static int intel_hdmi_source_max_tmds_clock(struct drm_i915_private *dev_priv)
{
- struct drm_device *dev = intel_hdmi_to_dev(hdmi);
-
- if ((respect_dvi_limit && !hdmi->has_hdmi_sink) || IS_G4X(dev))
+ if (IS_G4X(dev_priv))
return 165000;
- else if (IS_HASWELL(dev) || INTEL_INFO(dev)->gen >= 8)
+ else if (IS_HASWELL(dev_priv) || INTEL_INFO(dev_priv)->gen >= 8)
return 300000;
else
return 225000;
}
+static int hdmi_port_clock_limit(struct intel_hdmi *hdmi,
+ bool respect_downstream_limits)
+{
+ struct drm_device *dev = intel_hdmi_to_dev(hdmi);
+ int max_tmds_clock = intel_hdmi_source_max_tmds_clock(to_i915(dev));
+
+ if (respect_downstream_limits) {
+ if (hdmi->dp_dual_mode.max_tmds_clock)
+ max_tmds_clock = min(max_tmds_clock,
+ hdmi->dp_dual_mode.max_tmds_clock);
+ if (!hdmi->has_hdmi_sink)
+ max_tmds_clock = min(max_tmds_clock, 165000);
+ }
+
+ return max_tmds_clock;
+}
+
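
To make the clamping concrete, a hypothetical combination (numbers are examples only):

/*
 * A Haswell port (source limit 300000 kHz) driving a type 2 adaptor
 * that reports a 165000 kHz TMDS limit, with an HDMI sink:
 *
 *	max_tmds_clock = min(300000, 165000) = 165000
 *
 * hdmi_port_clock_valid() below then rejects anything faster as
 * MODE_CLOCK_HIGH; with a DVI sink the !has_hdmi_sink clamp would
 * yield the same 165000 kHz limit.
 */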
static enum drm_mode_status
hdmi_port_clock_valid(struct intel_hdmi *hdmi,
- int clock, bool respect_dvi_limit)
+ int clock, bool respect_downstream_limits)
{
struct drm_device *dev = intel_hdmi_to_dev(hdmi);
if (clock < 25000)
return MODE_CLOCK_LOW;
- if (clock > hdmi_port_clock_limit(hdmi, respect_dvi_limit))
+ if (clock > hdmi_port_clock_limit(hdmi, respect_downstream_limits))
return MODE_CLOCK_HIGH;
/* BXT DPLL can't generate 223-240 MHz */
@@ -1309,7 +1346,7 @@ bool intel_hdmi_compute_config(struct intel_encoder *encoder,
* within limits.
*/
if (pipe_config->pipe_bpp > 8*3 && pipe_config->has_hdmi_sink &&
- hdmi_port_clock_valid(intel_hdmi, clock_12bpc, false) == MODE_OK &&
+ hdmi_port_clock_valid(intel_hdmi, clock_12bpc, true) == MODE_OK &&
hdmi_12bpc_possible(pipe_config)) {
DRM_DEBUG_KMS("picking bpc to 12 for HDMI output\n");
desired_bpp = 12*3;
@@ -1337,6 +1374,8 @@ bool intel_hdmi_compute_config(struct intel_encoder *encoder,
/* Set user selected PAR to incoming mode's member */
adjusted_mode->picture_aspect_ratio = intel_hdmi->aspect_ratio;
+ pipe_config->lane_count = 4;
+
return true;
}
@@ -1349,10 +1388,57 @@ intel_hdmi_unset_edid(struct drm_connector *connector)
intel_hdmi->has_audio = false;
intel_hdmi->rgb_quant_range_selectable = false;
+ intel_hdmi->dp_dual_mode.type = DRM_DP_DUAL_MODE_NONE;
+ intel_hdmi->dp_dual_mode.max_tmds_clock = 0;
+
kfree(to_intel_connector(connector)->detect_edid);
to_intel_connector(connector)->detect_edid = NULL;
}
+static void
+intel_hdmi_dp_dual_mode_detect(struct drm_connector *connector, bool has_edid)
+{
+ struct drm_i915_private *dev_priv = to_i915(connector->dev);
+ struct intel_hdmi *hdmi = intel_attached_hdmi(connector);
+ enum port port = hdmi_to_dig_port(hdmi)->port;
+ struct i2c_adapter *adapter =
+ intel_gmbus_get_adapter(dev_priv, hdmi->ddc_bus);
+ enum drm_dp_dual_mode_type type = drm_dp_dual_mode_detect(adapter);
+
+ /*
+ * Type 1 DVI adaptors are not required to implement any
+ * registers, so we can't always detect their presence.
+ * Ideally we should be able to check the state of the
+ * CONFIG1 pin, but no such luck on our hardware.
+ *
+ * The only method left to us is to check the VBT to see
+ * if the port is a dual mode capable DP port. But let's
+ * only do that when we successfully read the EDID, to avoid
+ * confusing log messages about DP dual mode adaptors when
+ * there's nothing connected to the port.
+ */
+ if (type == DRM_DP_DUAL_MODE_UNKNOWN) {
+ if (has_edid &&
+ intel_bios_is_port_dp_dual_mode(dev_priv, port)) {
+ DRM_DEBUG_KMS("Assuming DP dual mode adaptor presence based on VBT\n");
+ type = DRM_DP_DUAL_MODE_TYPE1_DVI;
+ } else {
+ type = DRM_DP_DUAL_MODE_NONE;
+ }
+ }
+
+ if (type == DRM_DP_DUAL_MODE_NONE)
+ return;
+
+ hdmi->dp_dual_mode.type = type;
+ hdmi->dp_dual_mode.max_tmds_clock =
+ drm_dp_dual_mode_max_tmds_clock(type, adapter);
+
+ DRM_DEBUG_KMS("DP dual mode adaptor (%s) detected (max TMDS clock: %d kHz)\n",
+ drm_dp_get_dual_mode_type_name(type),
+ hdmi->dp_dual_mode.max_tmds_clock);
+}
+
static bool
intel_hdmi_set_edid(struct drm_connector *connector, bool force)
{
@@ -1368,6 +1454,8 @@ intel_hdmi_set_edid(struct drm_connector *connector, bool force)
intel_gmbus_get_adapter(dev_priv,
intel_hdmi->ddc_bus));
+ intel_hdmi_dp_dual_mode_detect(connector, edid != NULL);
+
intel_display_power_put(dev_priv, POWER_DOMAIN_GMBUS);
}
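
The hdmi_port_clock_limit() change above clamps the source's maximum TMDS clock by the DP dual mode adaptor limit and, for sinks without HDMI support, by the 165 MHz DVI limit. A minimal userspace sketch of that clamping follows; max_tmds_clock() and min_int() are illustrative stand-ins, not i915 code.

/*
 * Minimal sketch (not i915 code) of the clamping done by
 * hdmi_port_clock_limit() above: start from the source limit and, when
 * downstream limits are respected, clamp by the dual mode adaptor limit
 * and by the 165 MHz DVI limit for non-HDMI sinks.
 */
#include <stdbool.h>
#include <stdio.h>

static int min_int(int a, int b)
{
	return a < b ? a : b;
}

static int max_tmds_clock(int source_max, int adaptor_max,
			  bool has_hdmi_sink, bool respect_downstream_limits)
{
	int max = source_max;

	if (respect_downstream_limits) {
		if (adaptor_max)
			max = min_int(max, adaptor_max);
		if (!has_hdmi_sink)
			max = min_int(max, 165000);	/* kHz */
	}
	return max;
}

int main(void)
{
	/* 300 MHz-capable source behind a 225 MHz dual mode adaptor. */
	printf("%d kHz\n", max_tmds_clock(300000, 225000, true, true));
	return 0;
}
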
diff --git a/drivers/gpu/drm/i915/intel_lrc.c b/drivers/gpu/drm/i915/intel_lrc.c
index 6179b591ee84..42eac37de047 100644
--- a/drivers/gpu/drm/i915/intel_lrc.c
+++ b/drivers/gpu/drm/i915/intel_lrc.c
@@ -721,48 +721,6 @@ int intel_logical_ring_alloc_request_extras(struct drm_i915_gem_request *request
return ret;
}
-static int logical_ring_wait_for_space(struct drm_i915_gem_request *req,
- int bytes)
-{
- struct intel_ringbuffer *ringbuf = req->ringbuf;
- struct intel_engine_cs *engine = req->engine;
- struct drm_i915_gem_request *target;
- unsigned space;
- int ret;
-
- if (intel_ring_space(ringbuf) >= bytes)
- return 0;
-
- /* The whole point of reserving space is to not wait! */
- WARN_ON(ringbuf->reserved_in_use);
-
- list_for_each_entry(target, &engine->request_list, list) {
- /*
- * The request queue is per-engine, so can contain requests
- * from multiple ringbuffers. Here, we must ignore any that
- * aren't from the ringbuffer we're considering.
- */
- if (target->ringbuf != ringbuf)
- continue;
-
- /* Would completion of this request free enough space? */
- space = __intel_ring_space(target->postfix, ringbuf->tail,
- ringbuf->size);
- if (space >= bytes)
- break;
- }
-
- if (WARN_ON(&target->list == &engine->request_list))
- return -ENOSPC;
-
- ret = i915_wait_request(target);
- if (ret)
- return ret;
-
- ringbuf->space = space;
- return 0;
-}
-
/*
* intel_logical_ring_advance_and_submit() - advance the tail and submit the workload
* @request: Request to advance the logical ringbuffer of.
@@ -814,92 +772,6 @@ intel_logical_ring_advance_and_submit(struct drm_i915_gem_request *request)
return 0;
}
-static void __wrap_ring_buffer(struct intel_ringbuffer *ringbuf)
-{
- uint32_t __iomem *virt;
- int rem = ringbuf->size - ringbuf->tail;
-
- virt = ringbuf->virtual_start + ringbuf->tail;
- rem /= 4;
- while (rem--)
- iowrite32(MI_NOOP, virt++);
-
- ringbuf->tail = 0;
- intel_ring_update_space(ringbuf);
-}
-
-static int logical_ring_prepare(struct drm_i915_gem_request *req, int bytes)
-{
- struct intel_ringbuffer *ringbuf = req->ringbuf;
- int remain_usable = ringbuf->effective_size - ringbuf->tail;
- int remain_actual = ringbuf->size - ringbuf->tail;
- int ret, total_bytes, wait_bytes = 0;
- bool need_wrap = false;
-
- if (ringbuf->reserved_in_use)
- total_bytes = bytes;
- else
- total_bytes = bytes + ringbuf->reserved_size;
-
- if (unlikely(bytes > remain_usable)) {
- /*
- * Not enough space for the basic request. So need to flush
- * out the remainder and then wait for base + reserved.
- */
- wait_bytes = remain_actual + total_bytes;
- need_wrap = true;
- } else {
- if (unlikely(total_bytes > remain_usable)) {
- /*
- * The base request will fit but the reserved space
- * falls off the end. So don't need an immediate wrap
- * and only need to effectively wait for the reserved
- * size space from the start of ringbuffer.
- */
- wait_bytes = remain_actual + ringbuf->reserved_size;
- } else if (total_bytes > ringbuf->space) {
- /* No wrapping required, just waiting. */
- wait_bytes = total_bytes;
- }
- }
-
- if (wait_bytes) {
- ret = logical_ring_wait_for_space(req, wait_bytes);
- if (unlikely(ret))
- return ret;
-
- if (need_wrap)
- __wrap_ring_buffer(ringbuf);
- }
-
- return 0;
-}
-
-/**
- * intel_logical_ring_begin() - prepare the logical ringbuffer to accept some commands
- *
- * @req: The request to start some new work for
- * @num_dwords: number of DWORDs that we plan to write to the ringbuffer.
- *
- * The ringbuffer might not be ready to accept the commands right away (maybe it needs to
- * be wrapped, or wait a bit for the tail to be updated). This function takes care of that
- * and also preallocates a request (every workload submission is still mediated through
- * requests, same as it did with legacy ringbuffer submission).
- *
- * Return: non-zero if the ringbuffer is not ready to be written to.
- */
-int intel_logical_ring_begin(struct drm_i915_gem_request *req, int num_dwords)
-{
- int ret;
-
- ret = logical_ring_prepare(req, num_dwords * sizeof(uint32_t));
- if (ret)
- return ret;
-
- req->ringbuf->space -= num_dwords * sizeof(uint32_t);
- return 0;
-}
-
int intel_logical_ring_reserve_space(struct drm_i915_gem_request *request)
{
/*
@@ -912,7 +784,7 @@ int intel_logical_ring_reserve_space(struct drm_i915_gem_request *request)
*/
intel_ring_reserved_space_reserve(request->ringbuf, MIN_SPACE_FOR_ADD_REQUEST);
- return intel_logical_ring_begin(request, 0);
+ return intel_ring_begin(request, 0);
}
/**
@@ -982,7 +854,7 @@ int intel_execlists_submission(struct i915_execbuffer_params *params,
if (engine == &dev_priv->engine[RCS] &&
instp_mode != dev_priv->relative_constants_mode) {
- ret = intel_logical_ring_begin(params->request, 4);
+ ret = intel_ring_begin(params->request, 4);
if (ret)
return ret;
@@ -1178,7 +1050,7 @@ static int intel_logical_ring_workarounds_emit(struct drm_i915_gem_request *req)
if (ret)
return ret;
- ret = intel_logical_ring_begin(req, w->count * 2 + 2);
+ ret = intel_ring_begin(req, w->count * 2 + 2);
if (ret)
return ret;
@@ -1669,7 +1541,7 @@ static int intel_logical_ring_emit_pdps(struct drm_i915_gem_request *req)
const int num_lri_cmds = GEN8_LEGACY_PDPES * 2;
int i, ret;
- ret = intel_logical_ring_begin(req, num_lri_cmds * 2 + 2);
+ ret = intel_ring_begin(req, num_lri_cmds * 2 + 2);
if (ret)
return ret;
@@ -1716,7 +1588,7 @@ static int gen8_emit_bb_start(struct drm_i915_gem_request *req,
req->ctx->ppgtt->pd_dirty_rings &= ~intel_engine_flag(req->engine);
}
- ret = intel_logical_ring_begin(req, 4);
+ ret = intel_ring_begin(req, 4);
if (ret)
return ret;
@@ -1778,7 +1650,7 @@ static int gen8_emit_flush(struct drm_i915_gem_request *request,
uint32_t cmd;
int ret;
- ret = intel_logical_ring_begin(request, 4);
+ ret = intel_ring_begin(request, 4);
if (ret)
return ret;
@@ -1846,7 +1718,7 @@ static int gen8_emit_flush_render(struct drm_i915_gem_request *request,
vf_flush_wa = true;
}
- ret = intel_logical_ring_begin(request, vf_flush_wa ? 12 : 6);
+ ret = intel_ring_begin(request, vf_flush_wa ? 12 : 6);
if (ret)
return ret;
@@ -1920,7 +1792,7 @@ static int gen8_emit_request(struct drm_i915_gem_request *request)
struct intel_ringbuffer *ringbuf = request->ringbuf;
int ret;
- ret = intel_logical_ring_begin(request, 6 + WA_TAIL_DWORDS);
+ ret = intel_ring_begin(request, 6 + WA_TAIL_DWORDS);
if (ret)
return ret;
@@ -1944,7 +1816,7 @@ static int gen8_emit_request_render(struct drm_i915_gem_request *request)
struct intel_ringbuffer *ringbuf = request->ringbuf;
int ret;
- ret = intel_logical_ring_begin(request, 8 + WA_TAIL_DWORDS);
+ ret = intel_ring_begin(request, 8 + WA_TAIL_DWORDS);
if (ret)
return ret;
diff --git a/drivers/gpu/drm/i915/intel_lrc.h b/drivers/gpu/drm/i915/intel_lrc.h
index 461f1ef9b5c1..60a7385bc531 100644
--- a/drivers/gpu/drm/i915/intel_lrc.h
+++ b/drivers/gpu/drm/i915/intel_lrc.h
@@ -63,7 +63,6 @@ int intel_logical_ring_reserve_space(struct drm_i915_gem_request *request);
void intel_logical_ring_stop(struct intel_engine_cs *engine);
void intel_logical_ring_cleanup(struct intel_engine_cs *engine);
int intel_logical_rings_init(struct drm_device *dev);
-int intel_logical_ring_begin(struct drm_i915_gem_request *req, int num_dwords);
int logical_ring_flush_all_caches(struct drm_i915_gem_request *req);
/**
diff --git a/drivers/gpu/drm/i915/intel_lvds.c b/drivers/gpu/drm/i915/intel_lvds.c
index 66e832beeb37..bc53c0dd34d0 100644
--- a/drivers/gpu/drm/i915/intel_lvds.c
+++ b/drivers/gpu/drm/i915/intel_lvds.c
@@ -122,6 +122,10 @@ static void intel_lvds_get_config(struct intel_encoder *encoder,
pipe_config->base.adjusted_mode.flags |= flags;
+ if (INTEL_INFO(dev)->gen < 5)
+ pipe_config->gmch_pfit.lvds_border_bits =
+ tmp & LVDS_BORDER_ENABLE;
+
/* gen2/3 store dither state in pfit control, needs to match */
if (INTEL_INFO(dev)->gen < 4) {
tmp = I915_READ(PFIT_CONTROL);
diff --git a/drivers/gpu/drm/i915/intel_mocs.c b/drivers/gpu/drm/i915/intel_mocs.c
index 23b8545ad6b0..6ba4bf7f2a89 100644
--- a/drivers/gpu/drm/i915/intel_mocs.c
+++ b/drivers/gpu/drm/i915/intel_mocs.c
@@ -239,11 +239,9 @@ static int emit_mocs_control_table(struct drm_i915_gem_request *req,
if (WARN_ON(table->size > GEN9_NUM_MOCS_ENTRIES))
return -ENODEV;
- ret = intel_logical_ring_begin(req, 2 + 2 * GEN9_NUM_MOCS_ENTRIES);
- if (ret) {
- DRM_DEBUG("intel_logical_ring_begin failed %d\n", ret);
+ ret = intel_ring_begin(req, 2 + 2 * GEN9_NUM_MOCS_ENTRIES);
+ if (ret)
return ret;
- }
intel_logical_ring_emit(ringbuf,
MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES));
@@ -305,11 +303,9 @@ static int emit_mocs_l3cc_table(struct drm_i915_gem_request *req,
if (WARN_ON(table->size > GEN9_NUM_MOCS_ENTRIES))
return -ENODEV;
- ret = intel_logical_ring_begin(req, 2 + GEN9_NUM_MOCS_ENTRIES);
- if (ret) {
- DRM_DEBUG("intel_logical_ring_begin failed %d\n", ret);
+ ret = intel_ring_begin(req, 2 + GEN9_NUM_MOCS_ENTRIES);
+ if (ret)
return ret;
- }
intel_logical_ring_emit(ringbuf,
MI_LOAD_REGISTER_IMM(GEN9_NUM_MOCS_ENTRIES / 2));
diff --git a/drivers/gpu/drm/i915/intel_panel.c b/drivers/gpu/drm/i915/intel_panel.c
index a0788763757b..8357d571553a 100644
--- a/drivers/gpu/drm/i915/intel_panel.c
+++ b/drivers/gpu/drm/i915/intel_panel.c
@@ -1638,6 +1638,12 @@ static int pwm_setup_backlight(struct intel_connector *connector,
return -ENODEV;
}
+ /*
+ * FIXME: pwm_apply_args() should be removed when switching to
+ * the atomic PWM API.
+ */
+ pwm_apply_args(panel->backlight.pwm);
+
retval = pwm_config(panel->backlight.pwm, CRC_PMIC_PWM_PERIOD_NS,
CRC_PMIC_PWM_PERIOD_NS);
if (retval < 0) {
diff --git a/drivers/gpu/drm/i915/intel_pm.c b/drivers/gpu/drm/i915/intel_pm.c
index 695a464a5e64..a7ef45da0a9e 100644
--- a/drivers/gpu/drm/i915/intel_pm.c
+++ b/drivers/gpu/drm/i915/intel_pm.c
@@ -3904,6 +3904,8 @@ static void ilk_pipe_wm_get_hw_state(struct drm_crtc *crtc)
if (IS_HASWELL(dev) || IS_BROADWELL(dev))
hw->wm_linetime[pipe] = I915_READ(PIPE_WM_LINETIME(pipe));
+ memset(active, 0, sizeof(*active));
+
active->pipe_enabled = intel_crtc->active;
if (active->pipe_enabled) {
@@ -6738,6 +6740,12 @@ static void broadwell_init_clock_gating(struct drm_device *dev)
misccpctl = I915_READ(GEN7_MISCCPCTL);
I915_WRITE(GEN7_MISCCPCTL, misccpctl & ~GEN7_DOP_CLOCK_GATE_ENABLE);
I915_WRITE(GEN8_L3SQCREG1, BDW_WA_L3SQCREG1_DEFAULT);
+ /*
+ * Wait at least 100 clocks before re-enabling clock gating. See
+ * the definition of L3SQCREG1 in BSpec.
+ */
+ POSTING_READ(GEN8_L3SQCREG1);
+ udelay(1);
I915_WRITE(GEN7_MISCCPCTL, misccpctl);
/*
diff --git a/drivers/gpu/drm/i915/intel_psr.c b/drivers/gpu/drm/i915/intel_psr.c
index c3abae4bc596..a788d1e9589b 100644
--- a/drivers/gpu/drm/i915/intel_psr.c
+++ b/drivers/gpu/drm/i915/intel_psr.c
@@ -280,7 +280,10 @@ static void hsw_psr_enable_source(struct intel_dp *intel_dp)
* with the 5 or 6 idle patterns.
*/
uint32_t idle_frames = max(6, dev_priv->vbt.psr.idle_frames);
- uint32_t val = 0x0;
+ uint32_t val = EDP_PSR_ENABLE;
+
+ val |= max_sleep_time << EDP_PSR_MAX_SLEEP_TIME_SHIFT;
+ val |= idle_frames << EDP_PSR_IDLE_FRAME_SHIFT;
if (IS_HASWELL(dev))
val |= EDP_PSR_MIN_LINK_ENTRY_TIME_8_LINES;
@@ -288,14 +291,50 @@ static void hsw_psr_enable_source(struct intel_dp *intel_dp)
if (dev_priv->psr.link_standby)
val |= EDP_PSR_LINK_STANDBY;
- I915_WRITE(EDP_PSR_CTL, val |
- max_sleep_time << EDP_PSR_MAX_SLEEP_TIME_SHIFT |
- idle_frames << EDP_PSR_IDLE_FRAME_SHIFT |
- EDP_PSR_ENABLE);
+ if (dev_priv->vbt.psr.tp1_wakeup_time > 5)
+ val |= EDP_PSR_TP1_TIME_2500us;
+ else if (dev_priv->vbt.psr.tp1_wakeup_time > 1)
+ val |= EDP_PSR_TP1_TIME_500us;
+ else if (dev_priv->vbt.psr.tp1_wakeup_time > 0)
+ val |= EDP_PSR_TP1_TIME_100us;
+ else
+ val |= EDP_PSR_TP1_TIME_0us;
+
+ if (dev_priv->vbt.psr.tp2_tp3_wakeup_time > 5)
+ val |= EDP_PSR_TP2_TP3_TIME_2500us;
+ else if (dev_priv->vbt.psr.tp2_tp3_wakeup_time > 1)
+ val |= EDP_PSR_TP2_TP3_TIME_500us;
+ else if (dev_priv->vbt.psr.tp2_tp3_wakeup_time > 0)
+ val |= EDP_PSR_TP2_TP3_TIME_100us;
+ else
+ val |= EDP_PSR_TP2_TP3_TIME_0us;
+
+ if (intel_dp_source_supports_hbr2(intel_dp) &&
+ drm_dp_tps3_supported(intel_dp->dpcd))
+ val |= EDP_PSR_TP1_TP3_SEL;
+ else
+ val |= EDP_PSR_TP1_TP2_SEL;
+
+ I915_WRITE(EDP_PSR_CTL, val);
+
+ if (!dev_priv->psr.psr2_support)
+ return;
+
+ /* FIXME: selective update is probably totally broken because it doesn't
+ * mesh at all with our frontbuffer tracking. And the hw alone isn't
+ * good enough. */
+ val = EDP_PSR2_ENABLE | EDP_SU_TRACK_ENABLE;
+
+ if (dev_priv->vbt.psr.tp2_tp3_wakeup_time > 5)
+ val |= EDP_PSR2_TP2_TIME_2500;
+ else if (dev_priv->vbt.psr.tp2_tp3_wakeup_time > 1)
+ val |= EDP_PSR2_TP2_TIME_500;
+ else if (dev_priv->vbt.psr.tp2_tp3_wakeup_time > 0)
+ val |= EDP_PSR2_TP2_TIME_100;
+ else
+ val |= EDP_PSR2_TP2_TIME_50;
- if (dev_priv->psr.psr2_support)
- I915_WRITE(EDP_PSR2_CTL, EDP_PSR2_ENABLE |
- EDP_SU_TRACK_ENABLE | EDP_PSR2_TP2_TIME_100);
+ I915_WRITE(EDP_PSR2_CTL, val);
}
static bool intel_psr_match_conditions(struct intel_dp *intel_dp)
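
The PSR hunk above replaces the hard-coded TP2 time with a threshold ladder that maps the VBT wakeup-time value onto one of the few available register settings. A minimal sketch of that selection logic is below; the thresholds mirror the hunk, while the enum names are illustrative and no units are asserted.

/*
 * Sketch of the TP1/TP2_TP3 wakeup-time bucketing above: walk the
 * thresholds from largest down and pick the matching register setting.
 */
#include <stdio.h>

enum tp_time { TP_TIME_0US, TP_TIME_100US, TP_TIME_500US, TP_TIME_2500US };

static enum tp_time pick_tp_time(int vbt_wakeup_time)
{
	if (vbt_wakeup_time > 5)
		return TP_TIME_2500US;
	else if (vbt_wakeup_time > 1)
		return TP_TIME_500US;
	else if (vbt_wakeup_time > 0)
		return TP_TIME_100US;
	else
		return TP_TIME_0US;
}

int main(void)
{
	printf("%d\n", pick_tp_time(3));	/* prints 2 == TP_TIME_500US */
	return 0;
}
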
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.c b/drivers/gpu/drm/i915/intel_ringbuffer.c
index 245386e20c52..04402bb9d26b 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.c
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.c
@@ -53,12 +53,6 @@ void intel_ring_update_space(struct intel_ringbuffer *ringbuf)
ringbuf->tail, ringbuf->size);
}
-int intel_ring_space(struct intel_ringbuffer *ringbuf)
-{
- intel_ring_update_space(ringbuf);
- return ringbuf->space;
-}
-
bool intel_engine_stopped(struct intel_engine_cs *engine)
{
struct drm_i915_private *dev_priv = engine->dev->dev_private;
@@ -1309,7 +1303,7 @@ static int gen8_rcs_signal(struct drm_i915_gem_request *signaller_req,
intel_ring_emit(signaller, seqno);
intel_ring_emit(signaller, 0);
intel_ring_emit(signaller, MI_SEMAPHORE_SIGNAL |
- MI_SEMAPHORE_TARGET(waiter->id));
+ MI_SEMAPHORE_TARGET(waiter->hw_id));
intel_ring_emit(signaller, 0);
}
@@ -1349,7 +1343,7 @@ static int gen8_xcs_signal(struct drm_i915_gem_request *signaller_req,
intel_ring_emit(signaller, upper_32_bits(gtt_offset));
intel_ring_emit(signaller, seqno);
intel_ring_emit(signaller, MI_SEMAPHORE_SIGNAL |
- MI_SEMAPHORE_TARGET(waiter->id));
+ MI_SEMAPHORE_TARGET(waiter->hw_id));
intel_ring_emit(signaller, 0);
}
@@ -1573,6 +1567,8 @@ pc_render_add_request(struct drm_i915_gem_request *req)
static void
gen6_seqno_barrier(struct intel_engine_cs *engine)
{
+ struct drm_i915_private *dev_priv = engine->dev->dev_private;
+
/* Workaround to force correct ordering between irq and seqno writes on
* ivb (and maybe also on snb) by reading from a CS register (like
* ACTHD) before reading the status page.
@@ -1584,9 +1580,13 @@ gen6_seqno_barrier(struct intel_engine_cs *engine)
* the write time to land, but that would incur a delay after every
* batch i.e. much more frequent than a delay when waiting for the
* interrupt (with the same net latency).
+ *
+ * Also note that to prevent whole machine hangs on gen7, we have to
+ * take the spinlock to guard against concurrent cacheline access.
*/
- struct drm_i915_private *dev_priv = engine->dev->dev_private;
+ spin_lock_irq(&dev_priv->uncore.lock);
POSTING_READ_FW(RING_ACTHD(engine->mmio_base));
+ spin_unlock_irq(&dev_priv->uncore.lock);
}
static u32
@@ -2312,51 +2312,6 @@ void intel_cleanup_engine(struct intel_engine_cs *engine)
engine->dev = NULL;
}
-static int ring_wait_for_space(struct intel_engine_cs *engine, int n)
-{
- struct intel_ringbuffer *ringbuf = engine->buffer;
- struct drm_i915_gem_request *request;
- unsigned space;
- int ret;
-
- if (intel_ring_space(ringbuf) >= n)
- return 0;
-
- /* The whole point of reserving space is to not wait! */
- WARN_ON(ringbuf->reserved_in_use);
-
- list_for_each_entry(request, &engine->request_list, list) {
- space = __intel_ring_space(request->postfix, ringbuf->tail,
- ringbuf->size);
- if (space >= n)
- break;
- }
-
- if (WARN_ON(&request->list == &engine->request_list))
- return -ENOSPC;
-
- ret = i915_wait_request(request);
- if (ret)
- return ret;
-
- ringbuf->space = space;
- return 0;
-}
-
-static void __wrap_ring_buffer(struct intel_ringbuffer *ringbuf)
-{
- uint32_t __iomem *virt;
- int rem = ringbuf->size - ringbuf->tail;
-
- virt = ringbuf->virtual_start + ringbuf->tail;
- rem /= 4;
- while (rem--)
- iowrite32(MI_NOOP, virt++);
-
- ringbuf->tail = 0;
- intel_ring_update_space(ringbuf);
-}
-
int intel_engine_idle(struct intel_engine_cs *engine)
{
struct drm_i915_gem_request *req;
@@ -2398,63 +2353,82 @@ int intel_ring_reserve_space(struct drm_i915_gem_request *request)
void intel_ring_reserved_space_reserve(struct intel_ringbuffer *ringbuf, int size)
{
- WARN_ON(ringbuf->reserved_size);
- WARN_ON(ringbuf->reserved_in_use);
-
+ GEM_BUG_ON(ringbuf->reserved_size);
ringbuf->reserved_size = size;
}
void intel_ring_reserved_space_cancel(struct intel_ringbuffer *ringbuf)
{
- WARN_ON(ringbuf->reserved_in_use);
-
+ GEM_BUG_ON(!ringbuf->reserved_size);
ringbuf->reserved_size = 0;
- ringbuf->reserved_in_use = false;
}
void intel_ring_reserved_space_use(struct intel_ringbuffer *ringbuf)
{
- WARN_ON(ringbuf->reserved_in_use);
-
- ringbuf->reserved_in_use = true;
- ringbuf->reserved_tail = ringbuf->tail;
+ GEM_BUG_ON(!ringbuf->reserved_size);
+ ringbuf->reserved_size = 0;
}
void intel_ring_reserved_space_end(struct intel_ringbuffer *ringbuf)
{
- WARN_ON(!ringbuf->reserved_in_use);
- if (ringbuf->tail > ringbuf->reserved_tail) {
- WARN(ringbuf->tail > ringbuf->reserved_tail + ringbuf->reserved_size,
- "request reserved size too small: %d vs %d!\n",
- ringbuf->tail - ringbuf->reserved_tail, ringbuf->reserved_size);
- } else {
+ GEM_BUG_ON(ringbuf->reserved_size);
+}
+
+static int wait_for_space(struct drm_i915_gem_request *req, int bytes)
+{
+ struct intel_ringbuffer *ringbuf = req->ringbuf;
+ struct intel_engine_cs *engine = req->engine;
+ struct drm_i915_gem_request *target;
+
+ intel_ring_update_space(ringbuf);
+ if (ringbuf->space >= bytes)
+ return 0;
+
+ /*
+ * Space is reserved in the ringbuffer for finalising the request,
+ * as that cannot be allowed to fail. During request finalisation,
+ * reserved_space is set to 0 to stop the overallocation and the
+ * assumption is that then we never need to wait (which has the
+ * risk of failing with EINTR).
+ *
+ * See also i915_gem_request_alloc() and i915_add_request().
+ */
+ GEM_BUG_ON(!ringbuf->reserved_size);
+
+ list_for_each_entry(target, &engine->request_list, list) {
+ unsigned space;
+
/*
- * The ring was wrapped while the reserved space was in use.
- * That means that some unknown amount of the ring tail was
- * no-op filled and skipped. Thus simply adding the ring size
- * to the tail and doing the above space check will not work.
- * Rather than attempt to track how much tail was skipped,
- * it is much simpler to say that also skipping the sanity
- * check every once in a while is not a big issue.
+ * The request queue is per-engine, so can contain requests
+ * from multiple ringbuffers. Here, we must ignore any that
+ * aren't from the ringbuffer we're considering.
*/
+ if (target->ringbuf != ringbuf)
+ continue;
+
+ /* Would completion of this request free enough space? */
+ space = __intel_ring_space(target->postfix, ringbuf->tail,
+ ringbuf->size);
+ if (space >= bytes)
+ break;
}
- ringbuf->reserved_size = 0;
- ringbuf->reserved_in_use = false;
+ if (WARN_ON(&target->list == &engine->request_list))
+ return -ENOSPC;
+
+ return i915_wait_request(target);
}
-static int __intel_ring_prepare(struct intel_engine_cs *engine, int bytes)
+int intel_ring_begin(struct drm_i915_gem_request *req, int num_dwords)
{
- struct intel_ringbuffer *ringbuf = engine->buffer;
- int remain_usable = ringbuf->effective_size - ringbuf->tail;
+ struct intel_ringbuffer *ringbuf = req->ringbuf;
int remain_actual = ringbuf->size - ringbuf->tail;
- int ret, total_bytes, wait_bytes = 0;
+ int remain_usable = ringbuf->effective_size - ringbuf->tail;
+ int bytes = num_dwords * sizeof(u32);
+ int total_bytes, wait_bytes;
bool need_wrap = false;
- if (ringbuf->reserved_in_use)
- total_bytes = bytes;
- else
- total_bytes = bytes + ringbuf->reserved_size;
+ total_bytes = bytes + ringbuf->reserved_size;
if (unlikely(bytes > remain_usable)) {
/*
@@ -2463,44 +2437,42 @@ static int __intel_ring_prepare(struct intel_engine_cs *engine, int bytes)
*/
wait_bytes = remain_actual + total_bytes;
need_wrap = true;
+ } else if (unlikely(total_bytes > remain_usable)) {
+ /*
+ * The base request will fit but the reserved space
+ * falls off the end. So we don't need an immediate wrap
+ * and only need to effectively wait for the reserved
+ * size space from the start of ringbuffer.
+ */
+ wait_bytes = remain_actual + ringbuf->reserved_size;
} else {
- if (unlikely(total_bytes > remain_usable)) {
- /*
- * The base request will fit but the reserved space
- * falls off the end. So don't need an immediate wrap
- * and only need to effectively wait for the reserved
- * size space from the start of ringbuffer.
- */
- wait_bytes = remain_actual + ringbuf->reserved_size;
- } else if (total_bytes > ringbuf->space) {
- /* No wrapping required, just waiting. */
- wait_bytes = total_bytes;
- }
+ /* No wrapping required, just waiting. */
+ wait_bytes = total_bytes;
}
- if (wait_bytes) {
- ret = ring_wait_for_space(engine, wait_bytes);
+ if (wait_bytes > ringbuf->space) {
+ int ret = wait_for_space(req, wait_bytes);
if (unlikely(ret))
return ret;
- if (need_wrap)
- __wrap_ring_buffer(ringbuf);
+ intel_ring_update_space(ringbuf);
+ if (unlikely(ringbuf->space < wait_bytes))
+ return -EAGAIN;
}
- return 0;
-}
+ if (unlikely(need_wrap)) {
+ GEM_BUG_ON(remain_actual > ringbuf->space);
+ GEM_BUG_ON(ringbuf->tail + remain_actual > ringbuf->size);
-int intel_ring_begin(struct drm_i915_gem_request *req,
- int num_dwords)
-{
- struct intel_engine_cs *engine = req->engine;
- int ret;
-
- ret = __intel_ring_prepare(engine, num_dwords * sizeof(uint32_t));
- if (ret)
- return ret;
+ /* Fill the tail with MI_NOOP */
+ memset(ringbuf->virtual_start + ringbuf->tail,
+ 0, remain_actual);
+ ringbuf->tail = 0;
+ ringbuf->space -= remain_actual;
+ }
- engine->buffer->space -= num_dwords * sizeof(uint32_t);
+ ringbuf->space -= bytes;
+ GEM_BUG_ON(ringbuf->space < 0);
return 0;
}
@@ -2772,6 +2744,7 @@ int intel_init_render_ring_buffer(struct drm_device *dev)
engine->name = "render ring";
engine->id = RCS;
engine->exec_id = I915_EXEC_RENDER;
+ engine->hw_id = 0;
engine->mmio_base = RENDER_RING_BASE;
if (INTEL_INFO(dev)->gen >= 8) {
@@ -2923,6 +2896,7 @@ int intel_init_bsd_ring_buffer(struct drm_device *dev)
engine->name = "bsd ring";
engine->id = VCS;
engine->exec_id = I915_EXEC_BSD;
+ engine->hw_id = 1;
engine->write_tail = ring_write_tail;
if (INTEL_INFO(dev)->gen >= 6) {
@@ -3001,6 +2975,7 @@ int intel_init_bsd2_ring_buffer(struct drm_device *dev)
engine->name = "bsd2 ring";
engine->id = VCS2;
engine->exec_id = I915_EXEC_BSD;
+ engine->hw_id = 4;
engine->write_tail = ring_write_tail;
engine->mmio_base = GEN8_BSD2_RING_BASE;
@@ -3033,6 +3008,7 @@ int intel_init_blt_ring_buffer(struct drm_device *dev)
engine->name = "blitter ring";
engine->id = BCS;
engine->exec_id = I915_EXEC_BLT;
+ engine->hw_id = 2;
engine->mmio_base = BLT_RING_BASE;
engine->write_tail = ring_write_tail;
@@ -3092,6 +3068,7 @@ int intel_init_vebox_ring_buffer(struct drm_device *dev)
engine->name = "video enhancement ring";
engine->id = VECS;
engine->exec_id = I915_EXEC_VEBOX;
+ engine->hw_id = 3;
engine->mmio_base = VEBOX_RING_BASE;
engine->write_tail = ring_write_tail;
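
The intel_ring_begin() rework above folds the old prepare/wait helpers into one path: work out how many bytes must become free (including the reserved request space), wait for older requests if needed, and fill the tail with MI_NOOP on a wrap. The underlying space accounting is a plain circular-buffer calculation; here is a minimal userspace sketch in the spirit of __intel_ring_space(), where RING_GUARD is an assumption standing in for the driver's guard constant.

/*
 * Circular-buffer free-space accounting: free space is the distance from
 * the write pointer (tail) back around to the read pointer (head), minus
 * a guard gap so head == tail always means "empty", never "full".
 */
#include <assert.h>
#include <stdio.h>

#define RING_GUARD 64	/* assumed guard gap, not the driver's constant */

static int ring_space(int head, int tail, int size)
{
	int space = head - tail;

	if (space <= 0)
		space += size;
	return space - RING_GUARD;
}

int main(void)
{
	/* Empty ring: everything but the guard gap is writable. */
	assert(ring_space(0, 0, 4096) == 4096 - RING_GUARD);
	/* Writer 256 bytes ahead of the reader. */
	assert(ring_space(0, 256, 4096) == 4096 - 256 - RING_GUARD);
	/* Writer wrapped past the end; reader part-way through. */
	assert(ring_space(1024, 512, 4096) == 512 - RING_GUARD);
	printf("ring space checks passed\n");
	return 0;
}
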
diff --git a/drivers/gpu/drm/i915/intel_ringbuffer.h b/drivers/gpu/drm/i915/intel_ringbuffer.h
index 2ade194bbea9..ff126485d398 100644
--- a/drivers/gpu/drm/i915/intel_ringbuffer.h
+++ b/drivers/gpu/drm/i915/intel_ringbuffer.h
@@ -108,8 +108,6 @@ struct intel_ringbuffer {
int size;
int effective_size;
int reserved_size;
- int reserved_tail;
- bool reserved_in_use;
/** We track the position of the requests in the ring buffer, and
* when each is retired we increment last_retired_head as the GPU
@@ -156,7 +154,8 @@ struct intel_engine_cs {
#define I915_NUM_ENGINES 5
#define _VCS(n) (VCS + (n))
unsigned int exec_id;
- unsigned int guc_id;
+ unsigned int hw_id;
+ unsigned int guc_id; /* XXX same as hw_id? */
u32 mmio_base;
struct drm_device *dev;
struct intel_ringbuffer *buffer;
@@ -459,7 +458,6 @@ static inline void intel_ring_advance(struct intel_engine_cs *engine)
}
int __intel_ring_space(int head, int tail, int size);
void intel_ring_update_space(struct intel_ringbuffer *ringbuf);
-int intel_ring_space(struct intel_ringbuffer *ringbuf);
bool intel_engine_stopped(struct intel_engine_cs *engine);
int __must_check intel_engine_idle(struct intel_engine_cs *engine);
diff --git a/drivers/gpu/drm/i915/intel_vbt_defs.h b/drivers/gpu/drm/i915/intel_vbt_defs.h
index 9ff1e960d617..c15051de8023 100644
--- a/drivers/gpu/drm/i915/intel_vbt_defs.h
+++ b/drivers/gpu/drm/i915/intel_vbt_defs.h
@@ -740,6 +740,7 @@ struct bdb_psr {
#define DEVICE_TYPE_INT_TV 0x1009
#define DEVICE_TYPE_HDMI 0x60D2
#define DEVICE_TYPE_DP 0x68C6
+#define DEVICE_TYPE_DP_DUAL_MODE 0x60D6
#define DEVICE_TYPE_eDP 0x78C6
#define DEVICE_TYPE_CLASS_EXTENSION (1 << 15)
@@ -774,6 +775,17 @@ struct bdb_psr {
DEVICE_TYPE_DISPLAYPORT_OUTPUT | \
DEVICE_TYPE_ANALOG_OUTPUT)
+#define DEVICE_TYPE_DP_DUAL_MODE_BITS \
+ (DEVICE_TYPE_INTERNAL_CONNECTOR | \
+ DEVICE_TYPE_MIPI_OUTPUT | \
+ DEVICE_TYPE_COMPOSITE_OUTPUT | \
+ DEVICE_TYPE_LVDS_SINGALING | \
+ DEVICE_TYPE_TMDS_DVI_SIGNALING | \
+ DEVICE_TYPE_VIDEO_SIGNALING | \
+ DEVICE_TYPE_DISPLAYPORT_OUTPUT | \
+ DEVICE_TYPE_DIGITAL_OUTPUT | \
+ DEVICE_TYPE_ANALOG_OUTPUT)
+
/* define the DVO port for HDMI output type */
#define DVO_B 1
#define DVO_C 2
diff --git a/drivers/gpu/drm/imx/imx-drm-core.c b/drivers/gpu/drm/imx/imx-drm-core.c
index 1080019e7b17..1f14b602882b 100644
--- a/drivers/gpu/drm/imx/imx-drm-core.c
+++ b/drivers/gpu/drm/imx/imx-drm-core.c
@@ -25,6 +25,7 @@
#include <drm/drm_fb_cma_helper.h>
#include <drm/drm_plane_helper.h>
#include <drm/drm_of.h>
+#include <video/imx-ipu-v3.h>
#include "imx-drm.h"
@@ -437,6 +438,13 @@ static int compare_of(struct device *dev, void *data)
{
struct device_node *np = data;
+ /* Special case for DI, dev->of_node may not be set yet */
+ if (strcmp(dev->driver->name, "imx-ipuv3-crtc") == 0) {
+ struct ipu_client_platformdata *pdata = dev->platform_data;
+
+ return pdata->of_node == np;
+ }
+
/* Special case for LDB, one device for two channels */
if (of_node_cmp(np->name, "lvds-channel") == 0) {
np = of_get_parent(np);
diff --git a/drivers/gpu/drm/imx/ipuv3-crtc.c b/drivers/gpu/drm/imx/ipuv3-crtc.c
index dee8e8b3523b..b2c30b8d9816 100644
--- a/drivers/gpu/drm/imx/ipuv3-crtc.c
+++ b/drivers/gpu/drm/imx/ipuv3-crtc.c
@@ -473,7 +473,7 @@ static int ipu_crtc_init(struct ipu_crtc *ipu_crtc,
ret = imx_drm_add_crtc(drm, &ipu_crtc->base, &ipu_crtc->imx_crtc,
&ipu_crtc->plane[0]->base, &ipu_crtc_helper_funcs,
- ipu_crtc->dev->of_node);
+ pdata->of_node);
if (ret) {
dev_err(ipu_crtc->dev, "adding crtc failed with %d.\n", ret);
goto err_put_resources;
diff --git a/drivers/gpu/drm/msm/msm_gem_submit.c b/drivers/gpu/drm/msm/msm_gem_submit.c
index a9a001150b9e..b89ca5174863 100644
--- a/drivers/gpu/drm/msm/msm_gem_submit.c
+++ b/drivers/gpu/drm/msm/msm_gem_submit.c
@@ -28,11 +28,6 @@
#define BO_LOCKED 0x4000
#define BO_PINNED 0x2000
-static inline void __user *to_user_ptr(u64 address)
-{
- return (void __user *)(uintptr_t)address;
-}
-
static struct msm_gem_submit *submit_create(struct drm_device *dev,
struct msm_gpu *gpu, int nr)
{
@@ -78,7 +73,7 @@ static int submit_lookup_objects(struct msm_gem_submit *submit,
struct drm_gem_object *obj;
struct msm_gem_object *msm_obj;
void __user *userptr =
- to_user_ptr(args->bos + (i * sizeof(submit_bo)));
+ u64_to_user_ptr(args->bos + (i * sizeof(submit_bo)));
ret = copy_from_user(&submit_bo, userptr, sizeof(submit_bo));
if (ret) {
@@ -288,7 +283,7 @@ static int submit_reloc(struct msm_gem_submit *submit, struct msm_gem_object *ob
for (i = 0; i < nr_relocs; i++) {
struct drm_msm_gem_submit_reloc submit_reloc;
void __user *userptr =
- to_user_ptr(relocs + (i * sizeof(submit_reloc)));
+ u64_to_user_ptr(relocs + (i * sizeof(submit_reloc)));
uint32_t iova, off;
bool valid;
@@ -395,7 +390,7 @@ int msm_ioctl_gem_submit(struct drm_device *dev, void *data,
for (i = 0; i < args->nr_cmds; i++) {
struct drm_msm_gem_submit_cmd submit_cmd;
void __user *userptr =
- to_user_ptr(args->cmds + (i * sizeof(submit_cmd)));
+ u64_to_user_ptr(args->cmds + (i * sizeof(submit_cmd)));
struct msm_gem_object *msm_obj;
uint32_t iova;
diff --git a/drivers/gpu/drm/radeon/atombios_crtc.c b/drivers/gpu/drm/radeon/atombios_crtc.c
index bdc7b9ee1930..2e216e2ea78c 100644
--- a/drivers/gpu/drm/radeon/atombios_crtc.c
+++ b/drivers/gpu/drm/radeon/atombios_crtc.c
@@ -1740,6 +1740,7 @@ static u32 radeon_get_pll_use_mask(struct drm_crtc *crtc)
static int radeon_get_shared_dp_ppll(struct drm_crtc *crtc)
{
struct drm_device *dev = crtc->dev;
+ struct radeon_device *rdev = dev->dev_private;
struct drm_crtc *test_crtc;
struct radeon_crtc *test_radeon_crtc;
@@ -1749,6 +1750,10 @@ static int radeon_get_shared_dp_ppll(struct drm_crtc *crtc)
test_radeon_crtc = to_radeon_crtc(test_crtc);
if (test_radeon_crtc->encoder &&
ENCODER_MODE_IS_DP(atombios_get_encoder_mode(test_radeon_crtc->encoder))) {
+ /* PPLL2 is exclusive to UNIPHYA on DCE61 */
+ if (ASIC_IS_DCE61(rdev) && !ASIC_IS_DCE8(rdev) &&
+ test_radeon_crtc->pll_id == ATOM_PPLL2)
+ continue;
/* for DP use the same PLL for all */
if (test_radeon_crtc->pll_id != ATOM_PPLL_INVALID)
return test_radeon_crtc->pll_id;
@@ -1770,6 +1775,7 @@ static int radeon_get_shared_nondp_ppll(struct drm_crtc *crtc)
{
struct radeon_crtc *radeon_crtc = to_radeon_crtc(crtc);
struct drm_device *dev = crtc->dev;
+ struct radeon_device *rdev = dev->dev_private;
struct drm_crtc *test_crtc;
struct radeon_crtc *test_radeon_crtc;
u32 adjusted_clock, test_adjusted_clock;
@@ -1785,6 +1791,10 @@ static int radeon_get_shared_nondp_ppll(struct drm_crtc *crtc)
test_radeon_crtc = to_radeon_crtc(test_crtc);
if (test_radeon_crtc->encoder &&
!ENCODER_MODE_IS_DP(atombios_get_encoder_mode(test_radeon_crtc->encoder))) {
+ /* PPLL2 is exclusive to UNIPHYA on DCE61 */
+ if (ASIC_IS_DCE61(rdev) && !ASIC_IS_DCE8(rdev) &&
+ test_radeon_crtc->pll_id == ATOM_PPLL2)
+ continue;
/* check if we are already driving this connector with another crtc */
if (test_radeon_crtc->connector == radeon_crtc->connector) {
/* if we are, return that pll */
diff --git a/drivers/gpu/drm/radeon/atombios_dp.c b/drivers/gpu/drm/radeon/atombios_dp.c
index afa9db1dc0e3..cead089a9e7d 100644
--- a/drivers/gpu/drm/radeon/atombios_dp.c
+++ b/drivers/gpu/drm/radeon/atombios_dp.c
@@ -326,8 +326,8 @@ int radeon_dp_get_dp_link_config(struct drm_connector *connector,
}
}
} else {
- for (lane_num = 1; lane_num <= max_lane_num; lane_num <<= 1) {
- for (i = 0; i < ARRAY_SIZE(link_rates) && link_rates[i] <= max_link_rate; i++) {
+ for (i = 0; i < ARRAY_SIZE(link_rates) && link_rates[i] <= max_link_rate; i++) {
+ for (lane_num = 1; lane_num <= max_lane_num; lane_num <<= 1) {
max_pix_clock = (lane_num * link_rates[i] * 8) / bpp;
if (max_pix_clock >= pix_clock) {
*dp_lanes = lane_num;
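
The atombios_dp.c change above swaps the loop nesting so the link-rate loop is outermost: the search now prefers the lowest link rate that can carry the mode, adding lanes before stepping up the rate. A minimal sketch of that search order follows; the rate table values and pick_link_config() are illustrative, while the bandwidth formula matches the hunk (lane_num * link_rate * 8 / bpp).

/*
 * Sketch of the post-swap search order: rates outer, lanes inner, so the
 * lowest workable link rate wins even if it needs more lanes.
 */
#include <stdio.h>

static const int link_rates[] = { 162000, 270000, 540000 };	/* kHz */

static int pick_link_config(int pix_clock, int bpp, int max_lane_num,
			    int max_link_rate, int *dp_lanes, int *dp_rate)
{
	unsigned int i;
	int lane_num;

	for (i = 0; i < sizeof(link_rates) / sizeof(link_rates[0]) &&
		    link_rates[i] <= max_link_rate; i++) {
		for (lane_num = 1; lane_num <= max_lane_num; lane_num <<= 1) {
			int max_pix_clock = (lane_num * link_rates[i] * 8) / bpp;

			if (max_pix_clock >= pix_clock) {
				*dp_lanes = lane_num;
				*dp_rate = link_rates[i];
				return 0;
			}
		}
	}
	return -1;
}

int main(void)
{
	int lanes, rate;

	/* ~148.5 MHz pixel clock at 24 bpp (1080p-class mode). */
	if (!pick_link_config(148500, 24, 4, 540000, &lanes, &rate))
		printf("%d lanes @ %d kHz\n", lanes, rate);
	return 0;
}
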
diff --git a/drivers/gpu/drm/radeon/kv_dpm.c b/drivers/gpu/drm/radeon/kv_dpm.c
index d0240743a17c..a7e978677937 100644
--- a/drivers/gpu/drm/radeon/kv_dpm.c
+++ b/drivers/gpu/drm/radeon/kv_dpm.c
@@ -2164,7 +2164,7 @@ static void kv_apply_state_adjust_rules(struct radeon_device *rdev,
if (pi->caps_stable_p_state) {
stable_p_state_sclk = (max_limits->sclk * 75) / 100;
- for (i = table->count - 1; i >= 0; i++) {
+ for (i = table->count - 1; i >= 0; i--) {
if (stable_p_state_sclk >= table->entries[i].clk) {
stable_p_state_sclk = table->entries[i].clk;
break;
diff --git a/drivers/gpu/drm/radeon/radeon_dp_auxch.c b/drivers/gpu/drm/radeon/radeon_dp_auxch.c
index 3b0c229d7dcd..db64e0062689 100644
--- a/drivers/gpu/drm/radeon/radeon_dp_auxch.c
+++ b/drivers/gpu/drm/radeon/radeon_dp_auxch.c
@@ -105,7 +105,7 @@ radeon_dp_aux_transfer_native(struct drm_dp_aux *aux, struct drm_dp_aux_msg *msg
tmp &= AUX_HPD_SEL(0x7);
tmp |= AUX_HPD_SEL(chan->rec.hpd);
- tmp |= AUX_EN | AUX_LS_READ_EN;
+ tmp |= AUX_EN | AUX_LS_READ_EN | AUX_HPD_DISCON(0x1);
WREG32(AUX_CONTROL + aux_offset[instance], tmp);
diff --git a/drivers/gpu/drm/radeon/radeon_mn.c b/drivers/gpu/drm/radeon/radeon_mn.c
index eef006c48584..896f2cf51e4e 100644
--- a/drivers/gpu/drm/radeon/radeon_mn.c
+++ b/drivers/gpu/drm/radeon/radeon_mn.c
@@ -186,7 +186,9 @@ static struct radeon_mn *radeon_mn_get(struct radeon_device *rdev)
struct radeon_mn *rmn;
int r;
- down_write(&mm->mmap_sem);
+ if (down_write_killable(&mm->mmap_sem))
+ return ERR_PTR(-EINTR);
+
mutex_lock(&rdev->mn_lock);
hash_for_each_possible(rdev->mn_hash, rmn, node, (unsigned long)mm)
diff --git a/drivers/gpu/drm/sti/sti_vtg.c b/drivers/gpu/drm/sti/sti_vtg.c
index 32c7986b63ab..6bf4ce466d20 100644
--- a/drivers/gpu/drm/sti/sti_vtg.c
+++ b/drivers/gpu/drm/sti/sti_vtg.c
@@ -437,7 +437,7 @@ static int vtg_probe(struct platform_device *pdev)
return -EPROBE_DEFER;
} else {
vtg->irq = platform_get_irq(pdev, 0);
- if (IS_ERR_VALUE(vtg->irq)) {
+ if (vtg->irq < 0) {
DRM_ERROR("Failed to get VTG interrupt\n");
return vtg->irq;
}
@@ -447,7 +447,7 @@ static int vtg_probe(struct platform_device *pdev)
ret = devm_request_threaded_irq(dev, vtg->irq, vtg_irq,
vtg_irq_thread, IRQF_ONESHOT,
dev_name(dev), vtg);
- if (IS_ERR_VALUE(ret)) {
+ if (ret < 0) {
DRM_ERROR("Failed to register VTG interrupt\n");
return ret;
}
diff --git a/drivers/gpu/drm/tegra/drm.h b/drivers/gpu/drm/tegra/drm.h
index 8a10f5b7d9dc..f52d6cb24ff5 100644
--- a/drivers/gpu/drm/tegra/drm.h
+++ b/drivers/gpu/drm/tegra/drm.h
@@ -121,7 +121,7 @@ struct tegra_dc {
spinlock_t lock;
struct drm_crtc base;
- int powergate;
+ unsigned int powergate;
int pipe;
struct clk *clk;
diff --git a/drivers/gpu/drm/tilcdc/tilcdc_slave_compat.c b/drivers/gpu/drm/tilcdc/tilcdc_slave_compat.c
index 106679bca6cb..f9c79dabce20 100644
--- a/drivers/gpu/drm/tilcdc/tilcdc_slave_compat.c
+++ b/drivers/gpu/drm/tilcdc/tilcdc_slave_compat.c
@@ -157,7 +157,7 @@ struct device_node * __init tilcdc_get_overlay(struct kfree_table *kft)
if (!overlay_data || kfree_table_add(kft, overlay_data))
return NULL;
- of_fdt_unflatten_tree(overlay_data, &overlay);
+ of_fdt_unflatten_tree(overlay_data, NULL, &overlay);
if (!overlay) {
pr_warn("%s: Unfattening overlay tree failed\n", __func__);
return NULL;
diff --git a/drivers/gpu/drm/tilcdc/tilcdc_tfp410.c b/drivers/gpu/drm/tilcdc/tilcdc_tfp410.c
index 7716f42f8aab..6b8c5b3bf588 100644
--- a/drivers/gpu/drm/tilcdc/tilcdc_tfp410.c
+++ b/drivers/gpu/drm/tilcdc/tilcdc_tfp410.c
@@ -342,7 +342,7 @@ static int tfp410_probe(struct platform_device *pdev)
tfp410_mod->gpio = of_get_named_gpio_flags(node, "powerdn-gpio",
0, NULL);
- if (IS_ERR_VALUE(tfp410_mod->gpio)) {
+ if (tfp410_mod->gpio < 0) {
dev_warn(&pdev->dev, "No power down GPIO\n");
} else {
ret = gpio_request(tfp410_mod->gpio, "DVI_PDn");
diff --git a/drivers/gpu/drm/vc4/vc4_drv.c b/drivers/gpu/drm/vc4/vc4_drv.c
index b662d0492471..58b8fc036332 100644
--- a/drivers/gpu/drm/vc4/vc4_drv.c
+++ b/drivers/gpu/drm/vc4/vc4_drv.c
@@ -154,6 +154,24 @@ static void vc4_match_add_drivers(struct device *dev,
}
}
+static void vc4_kick_out_firmware_fb(void)
+{
+ struct apertures_struct *ap;
+
+ ap = alloc_apertures(1);
+ if (!ap)
+ return;
+
+ /* Since VC4 is a UMA device, the simplefb node may have been
+ * located anywhere in memory.
+ */
+ ap->ranges[0].base = 0;
+ ap->ranges[0].size = ~0;
+
+ remove_conflicting_framebuffers(ap, "vc4drmfb", false);
+ kfree(ap);
+}
+
static int vc4_drm_bind(struct device *dev)
{
struct platform_device *pdev = to_platform_device(dev);
@@ -187,6 +205,8 @@ static int vc4_drm_bind(struct device *dev)
if (ret)
goto gem_destroy;
+ vc4_kick_out_firmware_fb();
+
ret = drm_dev_register(drm, 0);
if (ret < 0)
goto unbind_all;
diff --git a/drivers/gpu/host1x/hw/intr_hw.c b/drivers/gpu/host1x/hw/intr_hw.c
index 498b37e39058..e1e31e9e67cd 100644
--- a/drivers/gpu/host1x/hw/intr_hw.c
+++ b/drivers/gpu/host1x/hw/intr_hw.c
@@ -85,7 +85,7 @@ static int _host1x_intr_init_host_sync(struct host1x *host, u32 cpm,
err = devm_request_irq(host->dev, host->intr_syncpt_irq,
syncpt_thresh_isr, IRQF_SHARED,
"host1x_syncpt", host);
- if (IS_ERR_VALUE(err)) {
+ if (err < 0) {
WARN_ON(1);
return err;
}
diff --git a/drivers/gpu/ipu-v3/ipu-common.c b/drivers/gpu/ipu-v3/ipu-common.c
index abb98c77bad2..99dcacf05b99 100644
--- a/drivers/gpu/ipu-v3/ipu-common.c
+++ b/drivers/gpu/ipu-v3/ipu-common.c
@@ -997,7 +997,7 @@ struct ipu_platform_reg {
};
/* These must be in the order of the corresponding device tree port nodes */
-static const struct ipu_platform_reg client_reg[] = {
+static struct ipu_platform_reg client_reg[] = {
{
.pdata = {
.csi = 0,
@@ -1048,7 +1048,7 @@ static int ipu_add_client_devices(struct ipu_soc *ipu, unsigned long ipu_base)
mutex_unlock(&ipu_client_id_mutex);
for (i = 0; i < ARRAY_SIZE(client_reg); i++) {
- const struct ipu_platform_reg *reg = &client_reg[i];
+ struct ipu_platform_reg *reg = &client_reg[i];
struct platform_device *pdev;
struct device_node *of_node;
@@ -1070,6 +1070,7 @@ static int ipu_add_client_devices(struct ipu_soc *ipu, unsigned long ipu_base)
pdev->dev.parent = dev;
+ reg->pdata.of_node = of_node;
ret = platform_device_add_data(pdev, &reg->pdata,
sizeof(reg->pdata));
if (!ret)
diff --git a/drivers/hid/Kconfig b/drivers/hid/Kconfig
index 411722570035..5646ca4b95de 100644
--- a/drivers/hid/Kconfig
+++ b/drivers/hid/Kconfig
@@ -134,6 +134,16 @@ config HID_APPLEIR
Say Y here if you want support for Apple infrared remote control.
+config HID_ASUS
+ tristate "Asus"
+ depends on I2C_HID
+ ---help---
+ Support for Asus notebook built-in keyboard via i2c.
+
+ Supported devices:
+ - EeeBook X205TA
+ - VivoBook E200HA
+
config HID_AUREAL
tristate "Aureal"
depends on HID
diff --git a/drivers/hid/Makefile b/drivers/hid/Makefile
index be56ab6f75a8..a2fb562de748 100644
--- a/drivers/hid/Makefile
+++ b/drivers/hid/Makefile
@@ -24,6 +24,7 @@ obj-$(CONFIG_HID_A4TECH) += hid-a4tech.o
obj-$(CONFIG_HID_ACRUX) += hid-axff.o
obj-$(CONFIG_HID_APPLE) += hid-apple.o
obj-$(CONFIG_HID_APPLEIR) += hid-appleir.o
+obj-$(CONFIG_HID_ASUS) += hid-asus.o
obj-$(CONFIG_HID_AUREAL) += hid-aureal.o
obj-$(CONFIG_HID_BELKIN) += hid-belkin.o
obj-$(CONFIG_HID_BETOP_FF) += hid-betopff.o
diff --git a/drivers/hid/hid-asus.c b/drivers/hid/hid-asus.c
new file mode 100644
index 000000000000..7a811ec4f2e1
--- /dev/null
+++ b/drivers/hid/hid-asus.c
@@ -0,0 +1,52 @@
+/*
+ * HID driver for Asus notebook built-in keyboard.
+ * Fixes small logical maximum to match usage maximum.
+ *
+ * Currently supported devices are:
+ * EeeBook X205TA
+ * VivoBook E200HA
+ *
+ * Copyright (c) 2016 Yusuke Fujimaki <usk.fujimaki@gmail.com>
+ *
+ * This module based on hid-ortek by
+ * Copyright (c) 2010 Johnathon Harris <jmharris@gmail.com>
+ * Copyright (c) 2011 Jiri Kosina
+ */
+
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License as published by the Free
+ * Software Foundation; either version 2 of the License, or (at your option)
+ * any later version.
+ */
+
+#include <linux/device.h>
+#include <linux/hid.h>
+#include <linux/module.h>
+
+#include "hid-ids.h"
+
+static __u8 *asus_report_fixup(struct hid_device *hdev, __u8 *rdesc,
+ unsigned int *rsize)
+{
+ if (*rsize >= 56 && rdesc[54] == 0x25 && rdesc[55] == 0x65) {
+ hid_info(hdev, "Fixing up Asus notebook report descriptor\n");
+ rdesc[55] = 0xdd;
+ }
+ return rdesc;
+}
+
+static const struct hid_device_id asus_devices[] = {
+ { HID_I2C_DEVICE(USB_VENDOR_ID_ASUSTEK, USB_DEVICE_ID_ASUSTEK_NOTEBOOK_KEYBOARD) },
+ { }
+};
+MODULE_DEVICE_TABLE(hid, asus_devices);
+
+static struct hid_driver asus_driver = {
+ .name = "asus",
+ .id_table = asus_devices,
+ .report_fixup = asus_report_fixup
+};
+module_hid_driver(asus_driver);
+
+MODULE_LICENSE("GPL");
diff --git a/drivers/hid/hid-core.c b/drivers/hid/hid-core.c
index 4f9c5c6deaed..8ea3a26360e9 100644
--- a/drivers/hid/hid-core.c
+++ b/drivers/hid/hid-core.c
@@ -1129,49 +1129,46 @@ EXPORT_SYMBOL_GPL(hid_field_extract);
static void __implement(u8 *report, unsigned offset, int n, u32 value)
{
unsigned int idx = offset / 8;
- unsigned int size = offset + n;
unsigned int bit_shift = offset % 8;
int bits_to_set = 8 - bit_shift;
- u8 bit_mask = 0xff << bit_shift;
while (n - bits_to_set >= 0) {
- report[idx] &= ~bit_mask;
+ report[idx] &= ~(0xff << bit_shift);
report[idx] |= value << bit_shift;
value >>= bits_to_set;
n -= bits_to_set;
bits_to_set = 8;
- bit_mask = 0xff;
bit_shift = 0;
idx++;
}
/* last nibble */
if (n) {
- if (size % 8)
- bit_mask &= (1U << (size % 8)) - 1;
- report[idx] &= ~bit_mask;
- report[idx] |= (value << bit_shift) & bit_mask;
+ u8 bit_mask = ((1U << n) - 1);
+ report[idx] &= ~(bit_mask << bit_shift);
+ report[idx] |= value << bit_shift;
}
}
static void implement(const struct hid_device *hid, u8 *report,
unsigned offset, unsigned n, u32 value)
{
- u64 m;
-
- if (n > 32) {
+ if (unlikely(n > 32)) {
hid_warn(hid, "%s() called with n (%d) > 32! (%s)\n",
__func__, n, current->comm);
n = 32;
+ } else if (n < 32) {
+ u32 m = (1U << n) - 1;
+
+ if (unlikely(value > m)) {
+ hid_warn(hid,
+ "%s() called with too large value %d (n: %d)! (%s)\n",
+ __func__, value, n, current->comm);
+ WARN_ON(1);
+ value &= m;
+ }
}
- m = (1ULL << n) - 1;
- if (value > m)
- hid_warn(hid, "%s() called with too large value %d! (%s)\n",
- __func__, value, current->comm);
- WARN_ON(value > m);
- value &= m;
-
__implement(report, offset, n, value);
}
@@ -1856,6 +1853,7 @@ static const struct hid_device_id hid_have_special_driver[] = {
{ HID_BLUETOOTH_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_ALU_WIRELESS_2011_JIS) },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_FOUNTAIN_TP_ONLY) },
{ HID_USB_DEVICE(USB_VENDOR_ID_APPLE, USB_DEVICE_ID_APPLE_GEYSER1_TP_ONLY) },
+ { HID_I2C_DEVICE(USB_VENDOR_ID_ASUSTEK, USB_DEVICE_ID_ASUSTEK_NOTEBOOK_KEYBOARD) },
{ HID_USB_DEVICE(USB_VENDOR_ID_AUREAL, USB_DEVICE_ID_AUREAL_W01RN) },
{ HID_USB_DEVICE(USB_VENDOR_ID_BELKIN, USB_DEVICE_ID_FLIP_KVM) },
{ HID_USB_DEVICE(USB_VENDOR_ID_BETOP_2185BFM, 0x2208) },
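
The __implement() cleanup earlier in this hid-core.c diff keeps the same bit-level layout: the low n bits of the value are written into the report at a given bit offset, whole bytes first, then a masked partial byte, without disturbing neighbouring fields. A small userspace sketch of that packing is below; put_bits() is a hypothetical stand-in, not the kernel helper.

/*
 * Write the low n bits of 'value' into a byte array starting 'offset'
 * bits from the beginning, preserving all other bits.
 */
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

static void put_bits(uint8_t *report, unsigned int offset, int n, uint32_t value)
{
	unsigned int idx = offset / 8;
	unsigned int bit_shift = offset % 8;
	int bits_to_set = 8 - bit_shift;

	while (n - bits_to_set >= 0) {
		/* Clear then set the bits of this byte from bit_shift up. */
		report[idx] &= ~(0xff << bit_shift);
		report[idx] |= value << bit_shift;	/* truncates to 8 bits */
		value >>= bits_to_set;
		n -= bits_to_set;
		bits_to_set = 8;
		bit_shift = 0;
		idx++;
	}

	/* Trailing partial byte: touch exactly n bits, no more. */
	if (n) {
		uint8_t mask = (1U << n) - 1;

		report[idx] &= ~(mask << bit_shift);
		report[idx] |= (value & mask) << bit_shift;
	}
}

int main(void)
{
	uint8_t buf[4] = { 0xff, 0xff, 0xff, 0xff };

	/* Write a 12-bit field (0xabc) starting at bit offset 4. */
	put_bits(buf, 4, 12, 0xabc);
	assert(buf[0] == 0xcf && buf[1] == 0xab && buf[2] == 0xff);
	printf("%02x %02x %02x %02x\n", buf[0], buf[1], buf[2], buf[3]);
	return 0;
}
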
diff --git a/drivers/hid/hid-ids.h b/drivers/hid/hid-ids.h
index 0238f0169e48..3eec09a134cb 100644
--- a/drivers/hid/hid-ids.h
+++ b/drivers/hid/hid-ids.h
@@ -163,6 +163,7 @@
#define USB_VENDOR_ID_ASUSTEK 0x0b05
#define USB_DEVICE_ID_ASUSTEK_LCM 0x1726
#define USB_DEVICE_ID_ASUSTEK_LCM2 0x175b
+#define USB_DEVICE_ID_ASUSTEK_NOTEBOOK_KEYBOARD 0x8585
#define USB_VENDOR_ID_ATEN 0x0557
#define USB_DEVICE_ID_ATEN_UC100KM 0x2004
@@ -258,6 +259,13 @@
#define USB_VENDOR_ID_CORSAIR 0x1b1c
#define USB_DEVICE_ID_CORSAIR_K90 0x1b02
+#define USB_VENDOR_ID_CORSAIR 0x1b1c
+#define USB_DEVICE_ID_CORSAIR_K70R 0x1b09
+#define USB_DEVICE_ID_CORSAIR_K95RGB 0x1b11
+#define USB_DEVICE_ID_CORSAIR_M65RGB 0x1b12
+#define USB_DEVICE_ID_CORSAIR_K70RGB 0x1b13
+#define USB_DEVICE_ID_CORSAIR_K65RGB 0x1b17
+
#define USB_VENDOR_ID_CREATIVELABS 0x041e
#define USB_DEVICE_ID_CREATIVE_SB_OMNI_SURROUND_51 0x322c
#define USB_DEVICE_ID_PRODIKEYS_PCMIDI 0x2801
diff --git a/drivers/hid/hid-roccat.c b/drivers/hid/hid-roccat.c
index 65c4ccfcbd29..76d06cf87b2a 100644
--- a/drivers/hid/hid-roccat.c
+++ b/drivers/hid/hid-roccat.c
@@ -421,14 +421,13 @@ static int __init roccat_init(void)
retval = alloc_chrdev_region(&dev_id, ROCCAT_FIRST_MINOR,
ROCCAT_MAX_DEVICES, "roccat");
-
- roccat_major = MAJOR(dev_id);
-
if (retval < 0) {
pr_warn("can't get major number\n");
goto error;
}
+ roccat_major = MAJOR(dev_id);
+
cdev_init(&roccat_cdev, &roccat_ops);
retval = cdev_add(&roccat_cdev, dev_id, ROCCAT_MAX_DEVICES);
diff --git a/drivers/hid/hid-thingm.c b/drivers/hid/hid-thingm.c
index 847a497cd472..9ad9c6ec5bba 100644
--- a/drivers/hid/hid-thingm.c
+++ b/drivers/hid/hid-thingm.c
@@ -148,13 +148,21 @@ static int thingm_led_set(struct led_classdev *ldev,
enum led_brightness brightness)
{
struct thingm_led *led = container_of(ldev, struct thingm_led, ldev);
- int ret;
- ret = thingm_write_color(led->rgb);
- if (ret)
- hid_err(led->rgb->tdev->hdev, "failed to write color\n");
+ return thingm_write_color(led->rgb);
+}
- return ret;
+static int thingm_init_led(struct thingm_led *led, const char *color_name,
+ struct thingm_rgb *rgb, int minor)
+{
+ snprintf(led->name, sizeof(led->name), "thingm%d:%s:led%d",
+ minor, color_name, rgb->num);
+ led->ldev.name = led->name;
+ led->ldev.max_brightness = 255;
+ led->ldev.brightness_set_blocking = thingm_led_set;
+ led->ldev.flags = LED_HW_PLUGGABLE;
+ led->rgb = rgb;
+ return devm_led_classdev_register(&rgb->tdev->hdev->dev, &led->ldev);
}
static int thingm_init_rgb(struct thingm_rgb *rgb)
@@ -163,42 +171,17 @@ static int thingm_init_rgb(struct thingm_rgb *rgb)
int err;
/* Register the red diode */
- snprintf(rgb->red.name, sizeof(rgb->red.name),
- "thingm%d:red:led%d", minor, rgb->num);
- rgb->red.ldev.name = rgb->red.name;
- rgb->red.ldev.max_brightness = 255;
- rgb->red.ldev.brightness_set_blocking = thingm_led_set;
- rgb->red.rgb = rgb;
-
- err = devm_led_classdev_register(&rgb->tdev->hdev->dev,
- &rgb->red.ldev);
+ err = thingm_init_led(&rgb->red, "red", rgb, minor);
if (err)
return err;
/* Register the green diode */
- snprintf(rgb->green.name, sizeof(rgb->green.name),
- "thingm%d:green:led%d", minor, rgb->num);
- rgb->green.ldev.name = rgb->green.name;
- rgb->green.ldev.max_brightness = 255;
- rgb->green.ldev.brightness_set_blocking = thingm_led_set;
- rgb->green.rgb = rgb;
-
- err = devm_led_classdev_register(&rgb->tdev->hdev->dev,
- &rgb->green.ldev);
+ err = thingm_init_led(&rgb->green, "green", rgb, minor);
if (err)
return err;
/* Register the blue diode */
- snprintf(rgb->blue.name, sizeof(rgb->blue.name),
- "thingm%d:blue:led%d", minor, rgb->num);
- rgb->blue.ldev.name = rgb->blue.name;
- rgb->blue.ldev.max_brightness = 255;
- rgb->blue.ldev.brightness_set_blocking = thingm_led_set;
- rgb->blue.rgb = rgb;
-
- err = devm_led_classdev_register(&rgb->tdev->hdev->dev,
- &rgb->blue.ldev);
- return err;
+ return thingm_init_led(&rgb->blue, "blue", rgb, minor);
}
static int thingm_probe(struct hid_device *hdev, const struct hid_device_id *id)
diff --git a/drivers/hid/hidraw.c b/drivers/hid/hidraw.c
index 9c2d7c23f296..f0e2757cb909 100644
--- a/drivers/hid/hidraw.c
+++ b/drivers/hid/hidraw.c
@@ -34,6 +34,7 @@
#include <linux/hid.h>
#include <linux/mutex.h>
#include <linux/sched.h>
+#include <linux/string.h>
#include <linux/hidraw.h>
@@ -123,7 +124,6 @@ static ssize_t hidraw_send_report(struct file *file, const char __user *buffer,
dev = hidraw_table[minor]->hid;
-
if (count > HID_MAX_BUFFER_SIZE) {
hid_warn(dev, "pid %d passed too large report\n",
task_pid_nr(current));
@@ -138,17 +138,12 @@ static ssize_t hidraw_send_report(struct file *file, const char __user *buffer,
goto out;
}
- buf = kmalloc(count * sizeof(__u8), GFP_KERNEL);
- if (!buf) {
- ret = -ENOMEM;
+ buf = memdup_user(buffer, count);
+ if (IS_ERR(buf)) {
+ ret = PTR_ERR(buf);
goto out;
}
- if (copy_from_user(buf, buffer, count)) {
- ret = -EFAULT;
- goto out_free;
- }
-
if ((report_type == HID_OUTPUT_REPORT) &&
!(dev->quirks & HID_QUIRK_NO_OUTPUT_REPORTS_ON_INTR_EP)) {
ret = hid_hw_output_report(dev, buf, count);
@@ -587,14 +582,13 @@ int __init hidraw_init(void)
result = alloc_chrdev_region(&dev_id, HIDRAW_FIRST_MINOR,
HIDRAW_MAX_DEVICES, "hidraw");
-
- hidraw_major = MAJOR(dev_id);
-
if (result < 0) {
pr_warn("can't get major number\n");
goto out;
}
+ hidraw_major = MAJOR(dev_id);
+
hidraw_class = class_create(THIS_MODULE, "hidraw");
if (IS_ERR(hidraw_class)) {
result = PTR_ERR(hidraw_class);
diff --git a/drivers/hid/usbhid/hid-quirks.c b/drivers/hid/usbhid/hid-quirks.c
index 53fc856d6867..b4b8c6abb03e 100644
--- a/drivers/hid/usbhid/hid-quirks.c
+++ b/drivers/hid/usbhid/hid-quirks.c
@@ -71,6 +71,11 @@ static const struct hid_blacklist {
{ USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_3AXIS_5BUTTON_STICK, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_AXIS_295, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_CHICONY, USB_DEVICE_ID_CHICONY_PIXART_USB_OPTICAL_MOUSE, HID_QUIRK_ALWAYS_POLL },
+ { USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_K70R, HID_QUIRK_NO_INIT_REPORTS },
+ { USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_M65RGB, HID_QUIRK_NO_INIT_REPORTS },
+ { USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_K95RGB, HID_QUIRK_NO_INIT_REPORTS | HID_QUIRK_ALWAYS_POLL },
+ { USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_K70RGB, HID_QUIRK_NO_INIT_REPORTS },
+ { USB_VENDOR_ID_CORSAIR, USB_DEVICE_ID_CORSAIR_K65RGB, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_CREATIVELABS, USB_DEVICE_ID_CREATIVE_SB_OMNI_SURROUND_51, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_DMI, USB_DEVICE_ID_DMI_ENC, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_DRAGONRISE, USB_DEVICE_ID_DRAGONRISE_WIIU, HID_QUIRK_MULTI_INPUT },
diff --git a/drivers/hid/wacom_sys.c b/drivers/hid/wacom_sys.c
index ccf1883318c3..499cc8213cfe 100644
--- a/drivers/hid/wacom_sys.c
+++ b/drivers/hid/wacom_sys.c
@@ -493,7 +493,8 @@ static void wacom_retrieve_hid_descriptor(struct hid_device *hdev,
features->x_fuzz = 4;
features->y_fuzz = 4;
features->pressure_fuzz = 0;
- features->distance_fuzz = 0;
+ features->distance_fuzz = 1;
+ features->tilt_fuzz = 1;
/*
* The wireless device HID is basic and layout conflicts with
diff --git a/drivers/hid/wacom_wac.c b/drivers/hid/wacom_wac.c
index cf2ba43453fd..1eae13cdc502 100644
--- a/drivers/hid/wacom_wac.c
+++ b/drivers/hid/wacom_wac.c
@@ -2344,12 +2344,13 @@ static void wacom_setup_basic_pro_pen(struct wacom_wac *wacom_wac)
__set_bit(BTN_STYLUS2, input_dev->keybit);
input_set_abs_params(input_dev, ABS_DISTANCE,
- 0, wacom_wac->features.distance_max, 0, 0);
+ 0, wacom_wac->features.distance_max, wacom_wac->features.distance_fuzz, 0);
}
static void wacom_setup_cintiq(struct wacom_wac *wacom_wac)
{
struct input_dev *input_dev = wacom_wac->pen_input;
+ struct wacom_features *features = &wacom_wac->features;
wacom_setup_basic_pro_pen(wacom_wac);
@@ -2359,9 +2360,9 @@ static void wacom_setup_cintiq(struct wacom_wac *wacom_wac)
__set_bit(BTN_TOOL_AIRBRUSH, input_dev->keybit);
input_set_abs_params(input_dev, ABS_WHEEL, 0, 1023, 0, 0);
- input_set_abs_params(input_dev, ABS_TILT_X, -64, 63, 0, 0);
+ input_set_abs_params(input_dev, ABS_TILT_X, -64, 63, features->tilt_fuzz, 0);
input_abs_set_res(input_dev, ABS_TILT_X, 57);
- input_set_abs_params(input_dev, ABS_TILT_Y, -64, 63, 0, 0);
+ input_set_abs_params(input_dev, ABS_TILT_Y, -64, 63, features->tilt_fuzz, 0);
input_abs_set_res(input_dev, ABS_TILT_Y, 57);
}
@@ -2507,7 +2508,7 @@ int wacom_setup_pen_input_capabilities(struct input_dev *input_dev,
case WACOM_G4:
input_set_abs_params(input_dev, ABS_DISTANCE, 0,
features->distance_max,
- 0, 0);
+ features->distance_fuzz, 0);
/* fall through */
case GRAPHIRE:
@@ -2569,7 +2570,7 @@ int wacom_setup_pen_input_capabilities(struct input_dev *input_dev,
input_set_abs_params(input_dev, ABS_DISTANCE, 0,
features->distance_max,
- 0, 0);
+ features->distance_fuzz, 0);
input_set_abs_params(input_dev, ABS_Z, -900, 899, 0, 0);
input_abs_set_res(input_dev, ABS_Z, 287);
@@ -2628,7 +2629,7 @@ int wacom_setup_pen_input_capabilities(struct input_dev *input_dev,
__set_bit(BTN_STYLUS2, input_dev->keybit);
input_set_abs_params(input_dev, ABS_DISTANCE, 0,
features->distance_max,
- 0, 0);
+ features->distance_fuzz, 0);
}
break;
case BAMBOO_PAD:
diff --git a/drivers/hid/wacom_wac.h b/drivers/hid/wacom_wac.h
index e2084d914c14..53d16537fd2a 100644
--- a/drivers/hid/wacom_wac.h
+++ b/drivers/hid/wacom_wac.h
@@ -177,6 +177,7 @@ struct wacom_features {
int y_fuzz;
int pressure_fuzz;
int distance_fuzz;
+ int tilt_fuzz;
unsigned quirks;
unsigned touch_max;
int oVid;
diff --git a/drivers/hsi/controllers/Kconfig b/drivers/hsi/controllers/Kconfig
index 6aba27808172..48e4eda186cc 100644
--- a/drivers/hsi/controllers/Kconfig
+++ b/drivers/hsi/controllers/Kconfig
@@ -5,15 +5,11 @@ comment "HSI controllers"
config OMAP_SSI
tristate "OMAP SSI hardware driver"
- depends on HSI && OF && (ARCH_OMAP3 || (ARM && COMPILE_TEST))
+ depends on HSI && OF && ARM && COMMON_CLK
+ depends on ARCH_OMAP3 || COMPILE_TEST
---help---
SSI is a legacy version of HSI. It is usually used to connect
an application engine with a cellular modem.
If you say Y here, you will enable the OMAP SSI hardware driver.
If unsure, say N.
-
-config OMAP_SSI_PORT
- tristate
- default m if OMAP_SSI=m
- default y if OMAP_SSI=y
diff --git a/drivers/hsi/controllers/Makefile b/drivers/hsi/controllers/Makefile
index d2665cf9c545..7aba9c7f71bb 100644
--- a/drivers/hsi/controllers/Makefile
+++ b/drivers/hsi/controllers/Makefile
@@ -2,5 +2,5 @@
# Makefile for HSI controllers drivers
#
-obj-$(CONFIG_OMAP_SSI) += omap_ssi.o
-obj-$(CONFIG_OMAP_SSI_PORT) += omap_ssi_port.o
+omap_ssi-objs += omap_ssi_core.o omap_ssi_port.o
+obj-$(CONFIG_OMAP_SSI) += omap_ssi.o
diff --git a/drivers/hsi/controllers/omap_ssi.h b/drivers/hsi/controllers/omap_ssi.h
index f9aaf37262be..7b4dec2c69ff 100644
--- a/drivers/hsi/controllers/omap_ssi.h
+++ b/drivers/hsi/controllers/omap_ssi.h
@@ -27,7 +27,7 @@
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/hsi/hsi.h>
-#include <linux/gpio.h>
+#include <linux/gpio/consumer.h>
#include <linux/interrupt.h>
#include <linux/io.h>
@@ -97,7 +97,7 @@ struct omap_ssi_port {
struct list_head brkqueue;
unsigned int irq;
int wake_irq;
- int wake_gpio;
+ struct gpio_desc *wake_gpio;
struct tasklet_struct pio_tasklet;
struct tasklet_struct wake_tasklet;
bool wktest:1; /* FIXME: HACK to be removed */
@@ -134,6 +134,8 @@ struct gdd_trn {
* @gdd_tasklet: bottom half for DMA transfers
* @gdd_trn: Array of GDD transaction data for ongoing GDD transfers
* @lock: lock to serialize access to GDD
+ * @fck_nb: DVFS notifier block
+ * @fck_rate: clock rate
* @loss_count: To follow if we need to restore context or not
* @max_speed: Maximum TX speed (Kb/s) set by the clients.
* @sysconfig: SSI controller saved context
@@ -151,6 +153,7 @@ struct omap_ssi_controller {
struct tasklet_struct gdd_tasklet;
struct gdd_trn gdd_trn[SSI_MAX_GDD_LCH];
spinlock_t lock;
+ struct notifier_block fck_nb;
unsigned long fck_rate;
u32 loss_count;
u32 max_speed;
@@ -164,4 +167,9 @@ struct omap_ssi_controller {
#endif
};
+void omap_ssi_port_update_fclk(struct hsi_controller *ssi,
+ struct omap_ssi_port *omap_port);
+
+extern struct platform_driver ssi_port_pdriver;
+
#endif /* __LINUX_HSI_OMAP_SSI_H__ */
diff --git a/drivers/hsi/controllers/omap_ssi.c b/drivers/hsi/controllers/omap_ssi_core.c
index 27b91f14ba7a..a3e0febfb64a 100644
--- a/drivers/hsi/controllers/omap_ssi.c
+++ b/drivers/hsi/controllers/omap_ssi_core.c
@@ -24,7 +24,6 @@
#include <linux/err.h>
#include <linux/ioport.h>
#include <linux/io.h>
-#include <linux/gpio.h>
#include <linux/clk.h>
#include <linux/device.h>
#include <linux/platform_device.h>
@@ -36,6 +35,7 @@
#include <linux/interrupt.h>
#include <linux/spinlock.h>
#include <linux/debugfs.h>
+#include <linux/pinctrl/consumer.h>
#include <linux/pm_runtime.h>
#include <linux/of_platform.h>
#include <linux/hsi/hsi.h>
@@ -141,7 +141,7 @@ static const struct file_operations ssi_gdd_regs_fops = {
.release = single_release,
};
-static int __init ssi_debug_add_ctrl(struct hsi_controller *ssi)
+static int ssi_debug_add_ctrl(struct hsi_controller *ssi)
{
struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
struct dentry *dir;
@@ -291,7 +291,65 @@ static unsigned long ssi_get_clk_rate(struct hsi_controller *ssi)
return rate;
}
-static int __init ssi_get_iomem(struct platform_device *pd,
+static int ssi_clk_event(struct notifier_block *nb, unsigned long event,
+ void *data)
+{
+ struct omap_ssi_controller *omap_ssi = container_of(nb,
+ struct omap_ssi_controller, fck_nb);
+ struct hsi_controller *ssi = to_hsi_controller(omap_ssi->dev);
+ struct clk_notifier_data *clk_data = data;
+ struct omap_ssi_port *omap_port;
+ int i;
+
+ switch (event) {
+ case PRE_RATE_CHANGE:
+ dev_dbg(&ssi->device, "pre rate change\n");
+
+ for (i = 0; i < ssi->num_ports; i++) {
+ omap_port = omap_ssi->port[i];
+
+ if (!omap_port)
+ continue;
+
+ /* Workaround for SWBREAK + CAwake down race in CMT */
+ tasklet_disable(&omap_port->wake_tasklet);
+
+ /* stop all ssi communication */
+ pinctrl_pm_select_idle_state(omap_port->pdev);
+ udelay(1); /* wait for racing frames */
+ }
+
+ break;
+ case ABORT_RATE_CHANGE:
+ dev_dbg(&ssi->device, "abort rate change\n");
+ /* Fall through */
+ case POST_RATE_CHANGE:
+ dev_dbg(&ssi->device, "post rate change (%lu -> %lu)\n",
+ clk_data->old_rate, clk_data->new_rate);
+ omap_ssi->fck_rate = DIV_ROUND_CLOSEST(clk_data->new_rate, 1000); /* kHz */
+
+ for (i = 0; i < ssi->num_ports; i++) {
+ omap_port = omap_ssi->port[i];
+
+ if (!omap_port)
+ continue;
+
+ omap_ssi_port_update_fclk(ssi, omap_port);
+
+ /* resume ssi communication */
+ pinctrl_pm_select_default_state(omap_port->pdev);
+ tasklet_enable(&omap_port->wake_tasklet);
+ }
+
+ break;
+ default:
+ break;
+ }
+
+ return NOTIFY_DONE;
+}
+
+static int ssi_get_iomem(struct platform_device *pd,
const char *name, void __iomem **pbase, dma_addr_t *phy)
{
struct resource *mem;
@@ -311,7 +369,7 @@ static int __init ssi_get_iomem(struct platform_device *pd,
return 0;
}
-static int __init ssi_add_controller(struct hsi_controller *ssi,
+static int ssi_add_controller(struct hsi_controller *ssi,
struct platform_device *pd)
{
struct omap_ssi_controller *omap_ssi;
@@ -370,6 +428,10 @@ static int __init ssi_add_controller(struct hsi_controller *ssi,
goto out_err;
}
+ omap_ssi->fck_nb.notifier_call = ssi_clk_event;
+ omap_ssi->fck_nb.priority = INT_MAX;
+ clk_notifier_register(omap_ssi->fck, &omap_ssi->fck_nb);
+
/* TODO: find register, which can be used to detect context loss */
omap_ssi->get_loss = NULL;
@@ -387,7 +449,7 @@ out_err:
return err;
}
-static int __init ssi_hw_init(struct hsi_controller *ssi)
+static int ssi_hw_init(struct hsi_controller *ssi)
{
struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
unsigned int i;
@@ -433,6 +495,7 @@ static void ssi_remove_controller(struct hsi_controller *ssi)
int id = ssi->id;
tasklet_kill(&omap_ssi->gdd_tasklet);
hsi_unregister_controller(ssi);
+ clk_notifier_unregister(omap_ssi->fck, &omap_ssi->fck_nb);
ida_simple_remove(&platform_omap_ssi_ida, id);
}
@@ -452,12 +515,16 @@ static int ssi_remove_ports(struct device *dev, void *c)
{
struct platform_device *pdev = to_platform_device(dev);
+ if (!dev->of_node)
+ return 0;
+
+ of_node_clear_flag(dev->of_node, OF_POPULATED);
of_device_unregister(pdev);
return 0;
}
-static int __init ssi_probe(struct platform_device *pd)
+static int ssi_probe(struct platform_device *pd)
{
struct platform_device *childpdev;
struct device_node *np = pd->dev.of_node;
@@ -523,10 +590,13 @@ out1:
return err;
}
-static int __exit ssi_remove(struct platform_device *pd)
+static int ssi_remove(struct platform_device *pd)
{
struct hsi_controller *ssi = platform_get_drvdata(pd);
+ /* cleanup of of_platform_populate() call */
+ device_for_each_child(&pd->dev, NULL, ssi_remove_ports);
+
#ifdef CONFIG_DEBUG_FS
ssi_debug_remove_ctrl(ssi);
#endif
@@ -535,9 +605,6 @@ static int __exit ssi_remove(struct platform_device *pd)
pm_runtime_disable(&pd->dev);
- /* cleanup of of_platform_populate() call */
- device_for_each_child(&pd->dev, NULL, ssi_remove_ports);
-
return 0;
}
@@ -593,7 +660,8 @@ MODULE_DEVICE_TABLE(of, omap_ssi_of_match);
#endif
static struct platform_driver ssi_pdriver = {
- .remove = __exit_p(ssi_remove),
+ .probe = ssi_probe,
+ .remove = ssi_remove,
.driver = {
.name = "omap_ssi",
.pm = DEV_PM_OPS,
@@ -601,7 +669,22 @@ static struct platform_driver ssi_pdriver = {
},
};
-module_platform_driver_probe(ssi_pdriver, ssi_probe);
+static int __init ssi_init(void) {
+ int ret;
+
+ ret = platform_driver_register(&ssi_pdriver);
+ if (ret)
+ return ret;
+
+ return platform_driver_register(&ssi_port_pdriver);
+}
+module_init(ssi_init);
+
+static void __exit ssi_exit(void) {
+ platform_driver_unregister(&ssi_port_pdriver);
+ platform_driver_unregister(&ssi_pdriver);
+}
+module_exit(ssi_exit);
MODULE_ALIAS("platform:omap_ssi");
MODULE_AUTHOR("Carlos Chinea <carlos.chinea@nokia.com>");
diff --git a/drivers/hsi/controllers/omap_ssi_port.c b/drivers/hsi/controllers/omap_ssi_port.c
index e80a66e20998..6b8f7739768a 100644
--- a/drivers/hsi/controllers/omap_ssi_port.c
+++ b/drivers/hsi/controllers/omap_ssi_port.c
@@ -23,8 +23,10 @@
#include <linux/platform_device.h>
#include <linux/dma-mapping.h>
#include <linux/pm_runtime.h>
+#include <linux/delay.h>
-#include <linux/of_gpio.h>
+#include <linux/gpio/consumer.h>
+#include <linux/pinctrl/consumer.h>
#include <linux/debugfs.h>
#include "omap_ssi_regs.h"
@@ -43,7 +45,7 @@ static inline int hsi_dummy_cl(struct hsi_client *cl __maybe_unused)
static inline unsigned int ssi_wakein(struct hsi_port *port)
{
struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
- return gpio_get_value(omap_port->wake_gpio);
+ return gpiod_get_value(omap_port->wake_gpio);
}
#ifdef CONFIG_DEBUG_FS
@@ -171,7 +173,7 @@ static int ssi_div_set(void *data, u64 val)
DEFINE_SIMPLE_ATTRIBUTE(ssi_sst_div_fops, ssi_div_get, ssi_div_set, "%llu\n");
-static int __init ssi_debug_add_port(struct omap_ssi_port *omap_port,
+static int ssi_debug_add_port(struct omap_ssi_port *omap_port,
struct dentry *dir)
{
struct hsi_port *port = to_hsi_port(omap_port->dev);
@@ -514,6 +516,11 @@ static int ssi_flush(struct hsi_client *cl)
pm_runtime_get_sync(omap_port->pdev);
spin_lock_bh(&omap_port->lock);
+
+ /* stop all ssi communication */
+ pinctrl_pm_select_idle_state(omap_port->pdev);
+ udelay(1); /* wait for racing frames */
+
/* Stop all DMA transfers */
for (i = 0; i < SSI_MAX_GDD_LCH; i++) {
msg = omap_ssi->gdd_trn[i].msg;
@@ -550,6 +557,10 @@ static int ssi_flush(struct hsi_client *cl)
ssi_flush_queue(&omap_port->rxqueue[i], NULL);
}
ssi_flush_queue(&omap_port->brkqueue, NULL);
+
+ /* Resume SSI communication */
+ pinctrl_pm_select_default_state(omap_port->pdev);
+
spin_unlock_bh(&omap_port->lock);
pm_runtime_put_sync(omap_port->pdev);
@@ -1007,7 +1018,7 @@ static irqreturn_t ssi_wake_isr(int irq __maybe_unused, void *ssi_port)
return IRQ_HANDLED;
}
-static int __init ssi_port_irq(struct hsi_port *port,
+static int ssi_port_irq(struct hsi_port *port,
struct platform_device *pd)
{
struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
@@ -1029,19 +1040,19 @@ static int __init ssi_port_irq(struct hsi_port *port,
return err;
}
-static int __init ssi_wake_irq(struct hsi_port *port,
+static int ssi_wake_irq(struct hsi_port *port,
struct platform_device *pd)
{
struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
int cawake_irq;
int err;
- if (omap_port->wake_gpio == -1) {
+ if (!omap_port->wake_gpio) {
omap_port->wake_irq = -1;
return 0;
}
- cawake_irq = gpio_to_irq(omap_port->wake_gpio);
+ cawake_irq = gpiod_to_irq(omap_port->wake_gpio);
omap_port->wake_irq = cawake_irq;
tasklet_init(&omap_port->wake_tasklet, ssi_wake_tasklet,
@@ -1060,7 +1071,7 @@ static int __init ssi_wake_irq(struct hsi_port *port,
return err;
}
-static void __init ssi_queues_init(struct omap_ssi_port *omap_port)
+static void ssi_queues_init(struct omap_ssi_port *omap_port)
{
unsigned int ch;
@@ -1071,7 +1082,7 @@ static void __init ssi_queues_init(struct omap_ssi_port *omap_port)
INIT_LIST_HEAD(&omap_port->brkqueue);
}
-static int __init ssi_port_get_iomem(struct platform_device *pd,
+static int ssi_port_get_iomem(struct platform_device *pd,
const char *name, void __iomem **pbase, dma_addr_t *phy)
{
struct hsi_port *port = platform_get_drvdata(pd);
@@ -1104,24 +1115,19 @@ static int __init ssi_port_get_iomem(struct platform_device *pd,
return 0;
}
-static int __init ssi_port_probe(struct platform_device *pd)
+static int ssi_port_probe(struct platform_device *pd)
{
struct device_node *np = pd->dev.of_node;
struct hsi_port *port;
struct omap_ssi_port *omap_port;
struct hsi_controller *ssi = dev_get_drvdata(pd->dev.parent);
struct omap_ssi_controller *omap_ssi = hsi_controller_drvdata(ssi);
- int cawake_gpio = 0;
+ struct gpio_desc *cawake_gpio = NULL;
u32 port_id;
int err;
dev_dbg(&pd->dev, "init ssi port...\n");
- if (!try_module_get(ssi->owner)) {
- dev_err(&pd->dev, "could not increment parent module refcount\n");
- return -ENODEV;
- }
-
if (!ssi->port || !omap_ssi->port) {
dev_err(&pd->dev, "ssi controller not initialized!\n");
err = -ENODEV;
@@ -1147,20 +1153,10 @@ static int __init ssi_port_probe(struct platform_device *pd)
goto error;
}
- err = of_get_named_gpio(np, "ti,ssi-cawake-gpio", 0);
- if (err < 0) {
- dev_err(&pd->dev, "DT data is missing cawake gpio (err=%d)\n",
- err);
- goto error;
- }
- cawake_gpio = err;
-
- err = devm_gpio_request_one(&port->device, cawake_gpio, GPIOF_DIR_IN,
- "cawake");
- if (err) {
- dev_err(&pd->dev, "could not request cawake gpio (err=%d)!\n",
- err);
- err = -ENXIO;
+ cawake_gpio = devm_gpiod_get(&pd->dev, "ti,ssi-cawake", GPIOD_IN);
+ if (IS_ERR(cawake_gpio)) {
+ err = PTR_ERR(cawake_gpio);
+ dev_err(&pd->dev, "couldn't get cawake gpio (err=%d)!\n", err);
goto error;
}
@@ -1219,8 +1215,7 @@ static int __init ssi_port_probe(struct platform_device *pd)
hsi_add_clients_from_dt(port, np);
- dev_info(&pd->dev, "ssi port %u successfully initialized (cawake=%d)\n",
- port_id, cawake_gpio);
+ dev_info(&pd->dev, "ssi port %u successfully initialized\n", port_id);
return 0;
@@ -1228,7 +1223,7 @@ error:
return err;
}
-static int __exit ssi_port_remove(struct platform_device *pd)
+static int ssi_port_remove(struct platform_device *pd)
{
struct hsi_port *port = platform_get_drvdata(pd);
struct omap_ssi_port *omap_port = hsi_port_drvdata(port);
@@ -1253,12 +1248,28 @@ static int __exit ssi_port_remove(struct platform_device *pd)
omap_ssi->port[omap_port->port_id] = NULL;
platform_set_drvdata(pd, NULL);
- module_put(ssi->owner);
pm_runtime_disable(&pd->dev);
return 0;
}
+static int ssi_restore_divisor(struct omap_ssi_port *omap_port)
+{
+ writel_relaxed(omap_port->sst.divisor,
+ omap_port->sst_base + SSI_SST_DIVISOR_REG);
+
+ return 0;
+}
+
+void omap_ssi_port_update_fclk(struct hsi_controller *ssi,
+ struct omap_ssi_port *omap_port)
+{
+ /* update divisor */
+ u32 div = ssi_calculate_div(ssi);
+ omap_port->sst.divisor = div;
+ ssi_restore_divisor(omap_port);
+}
+
#ifdef CONFIG_PM
static int ssi_save_port_ctx(struct omap_ssi_port *omap_port)
{
@@ -1311,14 +1322,6 @@ static int ssi_restore_port_mode(struct omap_ssi_port *omap_port)
return 0;
}
-static int ssi_restore_divisor(struct omap_ssi_port *omap_port)
-{
- writel_relaxed(omap_port->sst.divisor,
- omap_port->sst_base + SSI_SST_DIVISOR_REG);
-
- return 0;
-}
-
static int omap_ssi_port_runtime_suspend(struct device *dev)
{
struct hsi_port *port = dev_get_drvdata(dev);
@@ -1380,19 +1383,12 @@ MODULE_DEVICE_TABLE(of, omap_ssi_port_of_match);
#define omap_ssi_port_of_match NULL
#endif
-static struct platform_driver ssi_port_pdriver = {
- .remove = __exit_p(ssi_port_remove),
+struct platform_driver ssi_port_pdriver = {
+ .probe = ssi_port_probe,
+ .remove = ssi_port_remove,
.driver = {
.name = "omap_ssi_port",
.of_match_table = omap_ssi_port_of_match,
.pm = DEV_PM_OPS,
},
};
-
-module_platform_driver_probe(ssi_port_pdriver, ssi_port_probe);
-
-MODULE_ALIAS("platform:omap_ssi_port");
-MODULE_AUTHOR("Carlos Chinea <carlos.chinea@nokia.com>");
-MODULE_AUTHOR("Sebastian Reichel <sre@kernel.org>");
-MODULE_DESCRIPTION("Synchronous Serial Interface Port Driver");
-MODULE_LICENSE("GPL v2");
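
As a side note, the legacy-to-descriptor GPIO conversion above follows the usual gpiod consumer pattern; a minimal, hedged sketch with an invented connection ID looks like this:

#include <linux/device.h>
#include <linux/gpio/consumer.h>

/* Hedged sketch of the gpiod consumer API: request a GPIO by connection ID,
 * read its logical level and map it to an IRQ. "wake" is an invented name. */
static int example_get_wake_irq(struct device *dev)
{
	struct gpio_desc *gpio;
	int irq;

	gpio = devm_gpiod_get(dev, "wake", GPIOD_IN);
	if (IS_ERR(gpio))
		return PTR_ERR(gpio);

	if (gpiod_get_value(gpio))	/* honours active-low flags from DT */
		dev_dbg(dev, "wake line already asserted\n");

	irq = gpiod_to_irq(gpio);	/* < 0 on error, otherwise a valid IRQ */
	return irq;
}
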
diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
index 38b682bab85a..b6c1211b4df7 100644
--- a/drivers/hv/channel_mgmt.c
+++ b/drivers/hv/channel_mgmt.c
@@ -597,27 +597,55 @@ static void init_vp_index(struct vmbus_channel *channel, u16 dev_type)
static void vmbus_wait_for_unload(void)
{
- int cpu = smp_processor_id();
- void *page_addr = hv_context.synic_message_page[cpu];
- struct hv_message *msg = (struct hv_message *)page_addr +
- VMBUS_MESSAGE_SINT;
+ int cpu;
+ void *page_addr;
+ struct hv_message *msg;
struct vmbus_channel_message_header *hdr;
- bool unloaded = false;
+ u32 message_type;
+ /*
+ * CHANNELMSG_UNLOAD_RESPONSE is always delivered to the CPU which was
+ * used for initial contact or to CPU0 depending on host version. When
+ * we're crashing on a different CPU let's hope that IRQ handler on
+ * the cpu which receives CHANNELMSG_UNLOAD_RESPONSE is still
+ * functional and vmbus_unload_response() will complete
+ * vmbus_connection.unload_event. If not, the last thing we can do is
+ * read message pages for all CPUs directly.
+ */
while (1) {
- if (READ_ONCE(msg->header.message_type) == HVMSG_NONE) {
- mdelay(10);
- continue;
- }
+ if (completion_done(&vmbus_connection.unload_event))
+ break;
- hdr = (struct vmbus_channel_message_header *)msg->u.payload;
- if (hdr->msgtype == CHANNELMSG_UNLOAD_RESPONSE)
- unloaded = true;
+ for_each_online_cpu(cpu) {
+ page_addr = hv_context.synic_message_page[cpu];
+ msg = (struct hv_message *)page_addr +
+ VMBUS_MESSAGE_SINT;
- vmbus_signal_eom(msg);
+ message_type = READ_ONCE(msg->header.message_type);
+ if (message_type == HVMSG_NONE)
+ continue;
- if (unloaded)
- break;
+ hdr = (struct vmbus_channel_message_header *)
+ msg->u.payload;
+
+ if (hdr->msgtype == CHANNELMSG_UNLOAD_RESPONSE)
+ complete(&vmbus_connection.unload_event);
+
+ vmbus_signal_eom(msg, message_type);
+ }
+
+ mdelay(10);
+ }
+
+ /*
+ * We're crashing and already got the UNLOAD_RESPONSE; clean up all
+ * possibly pending messages on all CPUs to be able to receive new
+ * messages after we reconnect.
+ */
+ for_each_online_cpu(cpu) {
+ page_addr = hv_context.synic_message_page[cpu];
+ msg = (struct hv_message *)page_addr + VMBUS_MESSAGE_SINT;
+ msg->header.message_type = HVMSG_NONE;
}
}
diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
index d02f1373dd98..fcf8a02dc0ea 100644
--- a/drivers/hv/connection.c
+++ b/drivers/hv/connection.c
@@ -495,3 +495,4 @@ void vmbus_set_event(struct vmbus_channel *channel)
hv_do_hypercall(HVCALL_SIGNAL_EVENT, channel->sig_event, NULL);
}
+EXPORT_SYMBOL_GPL(vmbus_set_event);
diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
index b853b4b083bd..df35fb7ed5df 100644
--- a/drivers/hv/hv_balloon.c
+++ b/drivers/hv/hv_balloon.c
@@ -714,7 +714,7 @@ static bool pfn_covered(unsigned long start_pfn, unsigned long pfn_cnt)
* If the pfn range we are dealing with is not in the current
* "hot add block", move on.
*/
- if ((start_pfn >= has->end_pfn))
+ if (start_pfn < has->start_pfn || start_pfn >= has->end_pfn)
continue;
/*
* If the current hot add-request extends beyond
@@ -768,7 +768,7 @@ static unsigned long handle_pg_range(unsigned long pg_start,
* If the pfn range we are dealing with is not in the current
* "hot add block", move on.
*/
- if ((start_pfn >= has->end_pfn))
+ if (start_pfn < has->start_pfn || start_pfn >= has->end_pfn)
continue;
old_covered_state = has->covered_end_pfn;
@@ -1400,6 +1400,7 @@ static void balloon_onchannelcallback(void *context)
* This is a normal hot-add request specifying
* hot-add memory.
*/
+ dm->host_specified_ha_region = false;
ha_pg_range = &ha_msg->range;
dm->ha_wrk.ha_page_range = *ha_pg_range;
dm->ha_wrk.ha_region_range.page_range = 0;
diff --git a/drivers/hv/hv_kvp.c b/drivers/hv/hv_kvp.c
index 9b9b370fe22a..cb1a9160aab1 100644
--- a/drivers/hv/hv_kvp.c
+++ b/drivers/hv/hv_kvp.c
@@ -78,9 +78,11 @@ static void kvp_send_key(struct work_struct *dummy);
static void kvp_respond_to_host(struct hv_kvp_msg *msg, int error);
static void kvp_timeout_func(struct work_struct *dummy);
+static void kvp_host_handshake_func(struct work_struct *dummy);
static void kvp_register(int);
static DECLARE_DELAYED_WORK(kvp_timeout_work, kvp_timeout_func);
+static DECLARE_DELAYED_WORK(kvp_host_handshake_work, kvp_host_handshake_func);
static DECLARE_WORK(kvp_sendkey_work, kvp_send_key);
static const char kvp_devname[] = "vmbus/hv_kvp";
@@ -130,6 +132,11 @@ static void kvp_timeout_func(struct work_struct *dummy)
hv_poll_channel(kvp_transaction.recv_channel, kvp_poll_wrapper);
}
+static void kvp_host_handshake_func(struct work_struct *dummy)
+{
+ hv_poll_channel(kvp_transaction.recv_channel, hv_kvp_onchannelcallback);
+}
+
static int kvp_handle_handshake(struct hv_kvp_msg *msg)
{
switch (msg->kvp_hdr.operation) {
@@ -154,6 +161,12 @@ static int kvp_handle_handshake(struct hv_kvp_msg *msg)
pr_debug("KVP: userspace daemon ver. %d registered\n",
KVP_OP_REGISTER);
kvp_register(dm_reg_value);
+
+ /*
+ * If we're still negotiating with the host, cancel the timeout
+ * work to not poll the channel twice.
+ */
+ cancel_delayed_work_sync(&kvp_host_handshake_work);
hv_poll_channel(kvp_transaction.recv_channel, kvp_poll_wrapper);
return 0;
@@ -594,7 +607,22 @@ void hv_kvp_onchannelcallback(void *context)
struct icmsg_negotiate *negop = NULL;
int util_fw_version;
int kvp_srv_version;
+ static enum {NEGO_NOT_STARTED,
+ NEGO_IN_PROGRESS,
+ NEGO_FINISHED} host_negotiated = NEGO_NOT_STARTED;
+ if (host_negotiated == NEGO_NOT_STARTED &&
+ kvp_transaction.state < HVUTIL_READY) {
+ /*
+ * If the userspace daemon is not connected and the host is
+ * asking us to negotiate, we need to delay so we do not lose
+ * messages. This is important for the Failover IP setting.
+ */
+ host_negotiated = NEGO_IN_PROGRESS;
+ schedule_delayed_work(&kvp_host_handshake_work,
+ HV_UTIL_NEGO_TIMEOUT * HZ);
+ return;
+ }
if (kvp_transaction.state > HVUTIL_READY)
return;
@@ -672,6 +700,8 @@ void hv_kvp_onchannelcallback(void *context)
vmbus_sendpacket(channel, recv_buffer,
recvlen, requestid,
VM_PKT_DATA_INBAND, 0);
+
+ host_negotiated = NEGO_FINISHED;
}
}
@@ -708,6 +738,7 @@ hv_kvp_init(struct hv_util_service *srv)
void hv_kvp_deinit(void)
{
kvp_transaction.state = HVUTIL_DEVICE_DYING;
+ cancel_delayed_work_sync(&kvp_host_handshake_work);
cancel_delayed_work_sync(&kvp_timeout_work);
cancel_work_sync(&kvp_sendkey_work);
hvutil_transport_destroy(hvt);
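
The handshake deferral above is an instance of the common delayed-work timeout pattern; a minimal sketch, with invented names and an arbitrary 60-second period, is:

#include <linux/jiffies.h>
#include <linux/workqueue.h>

/* Hedged sketch: arm a delayed work item as a timeout and cancel it once
 * the awaited event arrives. All identifiers are hypothetical. */
static void example_timeout_fn(struct work_struct *work)
{
	/* fallback path: the other side never completed the handshake */
}

static DECLARE_DELAYED_WORK(example_timeout_work, example_timeout_fn);

static void example_start_handshake(void)
{
	schedule_delayed_work(&example_timeout_work, 60 * HZ);
}

static void example_handshake_done(void)
{
	/* make sure the timeout can no longer run concurrently */
	cancel_delayed_work_sync(&example_timeout_work);
}
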
diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
index 12321b93a756..718b5c72f0c8 100644
--- a/drivers/hv/hyperv_vmbus.h
+++ b/drivers/hv/hyperv_vmbus.h
@@ -36,6 +36,11 @@
#define HV_UTIL_TIMEOUT 30
/*
+ * Timeout for guest-host handshake for services.
+ */
+#define HV_UTIL_NEGO_TIMEOUT 60
+
+/*
* The below CPUID leaves are present if VersionAndFeatures.HypervisorPresent
* is set by CPUID(HVCPUID_VERSION_FEATURES).
*/
@@ -620,9 +625,21 @@ extern struct vmbus_channel_message_table_entry
channel_message_table[CHANNELMSG_COUNT];
/* Free the message slot and signal end-of-message if required */
-static inline void vmbus_signal_eom(struct hv_message *msg)
+static inline void vmbus_signal_eom(struct hv_message *msg, u32 old_msg_type)
{
- msg->header.message_type = HVMSG_NONE;
+ /*
+ * On crash we're reading some other CPU's message page and we need
+ * to be careful: this other CPU may have already cleared the header
+ * and the host may have already delivered some other message there.
+ * In case we blindly write msg->header.message_type we're going
+ * to lose it. We can still lose a message of the same type but
+ * we count on the fact that there can only be one
+ * CHANNELMSG_UNLOAD_RESPONSE and we don't care about other messages
+ * on crash.
+ */
+ if (cmpxchg(&msg->header.message_type, old_msg_type,
+ HVMSG_NONE) != old_msg_type)
+ return;
/*
* Make sure the write to MessageType (ie set to
@@ -667,8 +684,6 @@ void vmbus_disconnect(void);
int vmbus_post_msg(void *buffer, size_t buflen);
-void vmbus_set_event(struct vmbus_channel *channel);
-
void vmbus_on_event(unsigned long data);
void vmbus_on_msg_dpc(unsigned long data);
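
The cmpxchg() in the reworked vmbus_signal_eom() implements a small compare-and-swap release; a hedged, stand-alone sketch of the idea (simplified types, invented names) is below. A caller would follow a successful release with the same barrier and end-of-message signalling the real code issues.

#include <linux/atomic.h>
#include <linux/types.h>

#define EXAMPLE_MSG_NONE	0u

/* Hedged sketch: clear the shared slot only if it still holds the message
 * type we sampled, so a message delivered in the meantime is not lost. */
static bool example_release_slot(u32 *slot, u32 sampled_type)
{
	return cmpxchg(slot, sampled_type, EXAMPLE_MSG_NONE) == sampled_type;
}
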
diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
index a40a73a7b71d..fe586bf74e17 100644
--- a/drivers/hv/ring_buffer.c
+++ b/drivers/hv/ring_buffer.c
@@ -33,25 +33,21 @@
void hv_begin_read(struct hv_ring_buffer_info *rbi)
{
rbi->ring_buffer->interrupt_mask = 1;
- mb();
+ virt_mb();
}
u32 hv_end_read(struct hv_ring_buffer_info *rbi)
{
- u32 read;
- u32 write;
rbi->ring_buffer->interrupt_mask = 0;
- mb();
+ virt_mb();
/*
* Now check to see if the ring buffer is still empty.
* If it is not, we raced and we need to process new
* incoming messages.
*/
- hv_get_ringbuffer_availbytes(rbi, &read, &write);
-
- return read;
+ return hv_get_bytes_to_read(rbi);
}
/*
@@ -72,69 +68,17 @@ u32 hv_end_read(struct hv_ring_buffer_info *rbi)
static bool hv_need_to_signal(u32 old_write, struct hv_ring_buffer_info *rbi)
{
- mb();
- if (rbi->ring_buffer->interrupt_mask)
+ virt_mb();
+ if (READ_ONCE(rbi->ring_buffer->interrupt_mask))
return false;
/* check interrupt_mask before read_index */
- rmb();
+ virt_rmb();
/*
* This is the only case we need to signal when the
* ring transitions from being empty to non-empty.
*/
- if (old_write == rbi->ring_buffer->read_index)
- return true;
-
- return false;
-}
-
-/*
- * To optimize the flow management on the send-side,
- * when the sender is blocked because of lack of
- * sufficient space in the ring buffer, potential the
- * consumer of the ring buffer can signal the producer.
- * This is controlled by the following parameters:
- *
- * 1. pending_send_sz: This is the size in bytes that the
- * producer is trying to send.
- * 2. The feature bit feat_pending_send_sz set to indicate if
- * the consumer of the ring will signal when the ring
- * state transitions from being full to a state where
- * there is room for the producer to send the pending packet.
- */
-
-static bool hv_need_to_signal_on_read(struct hv_ring_buffer_info *rbi)
-{
- u32 cur_write_sz;
- u32 r_size;
- u32 write_loc;
- u32 read_loc = rbi->ring_buffer->read_index;
- u32 pending_sz;
-
- /*
- * Issue a full memory barrier before making the signaling decision.
- * Here is the reason for having this barrier:
- * If the reading of the pend_sz (in this function)
- * were to be reordered and read before we commit the new read
- * index (in the calling function) we could
- * have a problem. If the host were to set the pending_sz after we
- * have sampled pending_sz and go to sleep before we commit the
- * read index, we could miss sending the interrupt. Issue a full
- * memory barrier to address this.
- */
- mb();
-
- pending_sz = rbi->ring_buffer->pending_send_sz;
- write_loc = rbi->ring_buffer->write_index;
- /* If the other end is not blocked on write don't bother. */
- if (pending_sz == 0)
- return false;
-
- r_size = rbi->ring_datasize;
- cur_write_sz = write_loc >= read_loc ? r_size - (write_loc - read_loc) :
- read_loc - write_loc;
-
- if (cur_write_sz >= pending_sz)
+ if (old_write == READ_ONCE(rbi->ring_buffer->read_index))
return true;
return false;
@@ -188,17 +132,9 @@ hv_set_next_read_location(struct hv_ring_buffer_info *ring_info,
u32 next_read_location)
{
ring_info->ring_buffer->read_index = next_read_location;
+ ring_info->priv_read_index = next_read_location;
}
-
-/* Get the start of the ring buffer. */
-static inline void *
-hv_get_ring_buffer(struct hv_ring_buffer_info *ring_info)
-{
- return (void *)ring_info->ring_buffer->buffer;
-}
-
-
/* Get the size of the ring buffer. */
static inline u32
hv_get_ring_buffersize(struct hv_ring_buffer_info *ring_info)
@@ -332,7 +268,6 @@ int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info,
{
int i = 0;
u32 bytes_avail_towrite;
- u32 bytes_avail_toread;
u32 totalbytes_towrite = 0;
u32 next_write_location;
@@ -348,9 +283,7 @@ int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info,
if (lock)
spin_lock_irqsave(&outring_info->ring_lock, flags);
- hv_get_ringbuffer_availbytes(outring_info,
- &bytes_avail_toread,
- &bytes_avail_towrite);
+ bytes_avail_towrite = hv_get_bytes_to_write(outring_info);
/*
* If there is only room for the packet, assume it is full.
@@ -384,7 +317,7 @@ int hv_ringbuffer_write(struct hv_ring_buffer_info *outring_info,
sizeof(u64));
/* Issue a full memory barrier before updating the write index */
- mb();
+ virt_mb();
/* Now, update the write location */
hv_set_next_write_location(outring_info, next_write_location);
@@ -401,7 +334,6 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
void *buffer, u32 buflen, u32 *buffer_actual_len,
u64 *requestid, bool *signal, bool raw)
{
- u32 bytes_avail_towrite;
u32 bytes_avail_toread;
u32 next_read_location = 0;
u64 prev_indices = 0;
@@ -417,10 +349,7 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
*buffer_actual_len = 0;
*requestid = 0;
- hv_get_ringbuffer_availbytes(inring_info,
- &bytes_avail_toread,
- &bytes_avail_towrite);
-
+ bytes_avail_toread = hv_get_bytes_to_read(inring_info);
/* Make sure there is something to read */
if (bytes_avail_toread < sizeof(desc)) {
/*
@@ -464,7 +393,7 @@ int hv_ringbuffer_read(struct hv_ring_buffer_info *inring_info,
* the writer may start writing to the read area once the read index
* is updated.
*/
- mb();
+ virt_mb();
/* Update the read index */
hv_set_next_read_location(inring_info, next_read_location);
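
The mb()/rmb() to virt_mb()/virt_rmb() conversion above orders the guest's accesses against the host's view of the shared ring; the signalling decision reduces to the following hedged sketch (the struct is a simplification, not the Hyper-V ring layout):

#include <linux/compiler.h>
#include <linux/types.h>
#include <asm/barrier.h>

struct example_ring {
	u32 interrupt_mask;
	u32 read_index;
};

/* Hedged sketch: signal the other side only when the ring transitioned
 * from empty to non-empty, with the same ordering as the code above. */
static bool example_need_to_signal(u32 old_write, struct example_ring *ring)
{
	virt_mb();		/* publish write_index before reading the mask */
	if (READ_ONCE(ring->interrupt_mask))
		return false;

	virt_rmb();		/* read interrupt_mask before read_index */
	return old_write == READ_ONCE(ring->read_index);
}
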
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index 64713ff47e36..952f20fdc7e3 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -41,6 +41,7 @@
#include <linux/ptrace.h>
#include <linux/screen_info.h>
#include <linux/kdebug.h>
+#include <linux/efi.h>
#include "hyperv_vmbus.h"
static struct acpi_device *hv_acpi_dev;
@@ -101,7 +102,10 @@ static struct notifier_block hyperv_panic_block = {
.notifier_call = hyperv_panic_event,
};
+static const char *fb_mmio_name = "fb_range";
+static struct resource *fb_mmio;
struct resource *hyperv_mmio;
+DEFINE_SEMAPHORE(hyperv_mmio_lock);
static int vmbus_exists(void)
{
@@ -708,7 +712,7 @@ static void hv_process_timer_expiration(struct hv_message *msg, int cpu)
if (dev->event_handler)
dev->event_handler(dev);
- vmbus_signal_eom(msg);
+ vmbus_signal_eom(msg, HVMSG_TIMER_EXPIRED);
}
void vmbus_on_msg_dpc(unsigned long data)
@@ -720,8 +724,9 @@ void vmbus_on_msg_dpc(unsigned long data)
struct vmbus_channel_message_header *hdr;
struct vmbus_channel_message_table_entry *entry;
struct onmessage_work_context *ctx;
+ u32 message_type = msg->header.message_type;
- if (msg->header.message_type == HVMSG_NONE)
+ if (message_type == HVMSG_NONE)
/* no msg */
return;
@@ -746,7 +751,7 @@ void vmbus_on_msg_dpc(unsigned long data)
entry->message_handler(hdr);
msg_handled:
- vmbus_signal_eom(msg);
+ vmbus_signal_eom(msg, message_type);
}
static void vmbus_isr(void)
@@ -1048,7 +1053,6 @@ static acpi_status vmbus_walk_resources(struct acpi_resource *res, void *ctx)
new_res->end = end;
/*
- * Stick ranges from higher in address space at the front of the list.
* If two ranges are adjacent, merge them.
*/
do {
@@ -1069,7 +1073,7 @@ static acpi_status vmbus_walk_resources(struct acpi_resource *res, void *ctx)
break;
}
- if ((*old_res)->end < new_res->start) {
+ if ((*old_res)->start > new_res->end) {
new_res->sibling = *old_res;
if (prev_res)
(*prev_res)->sibling = new_res;
@@ -1091,6 +1095,12 @@ static int vmbus_acpi_remove(struct acpi_device *device)
struct resource *next_res;
if (hyperv_mmio) {
+ if (fb_mmio) {
+ __release_region(hyperv_mmio, fb_mmio->start,
+ resource_size(fb_mmio));
+ fb_mmio = NULL;
+ }
+
for (cur_res = hyperv_mmio; cur_res; cur_res = next_res) {
next_res = cur_res->sibling;
kfree(cur_res);
@@ -1100,6 +1110,30 @@ static int vmbus_acpi_remove(struct acpi_device *device)
return 0;
}
+static void vmbus_reserve_fb(void)
+{
+ int size;
+ /*
+ * Make a claim for the frame buffer in the resource tree under the
+ * first node, which will be the one below 4GB. The length seems to
+ * be underreported, particularly in a Generation 1 VM. So start out
+ * reserving a larger area and make it smaller until it succeeds.
+ */
+
+ if (screen_info.lfb_base) {
+ if (efi_enabled(EFI_BOOT))
+ size = max_t(__u32, screen_info.lfb_size, 0x800000);
+ else
+ size = max_t(__u32, screen_info.lfb_size, 0x4000000);
+
+ for (; !fb_mmio && (size >= 0x100000); size >>= 1) {
+ fb_mmio = __request_region(hyperv_mmio,
+ screen_info.lfb_base, size,
+ fb_mmio_name, 0);
+ }
+ }
+}
+
/**
* vmbus_allocate_mmio() - Pick a memory-mapped I/O range.
* @new: If successful, supplied a pointer to the
@@ -1128,11 +1162,33 @@ int vmbus_allocate_mmio(struct resource **new, struct hv_device *device_obj,
resource_size_t size, resource_size_t align,
bool fb_overlap_ok)
{
- struct resource *iter;
- resource_size_t range_min, range_max, start, local_min, local_max;
+ struct resource *iter, *shadow;
+ resource_size_t range_min, range_max, start;
const char *dev_n = dev_name(&device_obj->device);
- u32 fb_end = screen_info.lfb_base + (screen_info.lfb_size << 1);
- int i;
+ int retval;
+
+ retval = -ENXIO;
+ down(&hyperv_mmio_lock);
+
+ /*
+ * If overlaps with frame buffers are allowed, then first attempt to
+ * make the allocation from within the reserved region. Because it
+ * is already reserved, no shadow allocation is necessary.
+ */
+ if (fb_overlap_ok && fb_mmio && !(min > fb_mmio->end) &&
+ !(max < fb_mmio->start)) {
+
+ range_min = fb_mmio->start;
+ range_max = fb_mmio->end;
+ start = (range_min + align - 1) & ~(align - 1);
+ for (; start + size - 1 <= range_max; start += align) {
+ *new = request_mem_region_exclusive(start, size, dev_n);
+ if (*new) {
+ retval = 0;
+ goto exit;
+ }
+ }
+ }
for (iter = hyperv_mmio; iter; iter = iter->sibling) {
if ((iter->start >= max) || (iter->end <= min))
@@ -1140,46 +1196,56 @@ int vmbus_allocate_mmio(struct resource **new, struct hv_device *device_obj,
range_min = iter->start;
range_max = iter->end;
-
- /* If this range overlaps the frame buffer, split it into
- two tries. */
- for (i = 0; i < 2; i++) {
- local_min = range_min;
- local_max = range_max;
- if (fb_overlap_ok || (range_min >= fb_end) ||
- (range_max <= screen_info.lfb_base)) {
- i++;
- } else {
- if ((range_min <= screen_info.lfb_base) &&
- (range_max >= screen_info.lfb_base)) {
- /*
- * The frame buffer is in this window,
- * so trim this into the part that
- * preceeds the frame buffer.
- */
- local_max = screen_info.lfb_base - 1;
- range_min = fb_end;
- } else {
- range_min = fb_end;
- continue;
- }
+ start = (range_min + align - 1) & ~(align - 1);
+ for (; start + size - 1 <= range_max; start += align) {
+ shadow = __request_region(iter, start, size, NULL,
+ IORESOURCE_BUSY);
+ if (!shadow)
+ continue;
+
+ *new = request_mem_region_exclusive(start, size, dev_n);
+ if (*new) {
+ shadow->name = (char *)*new;
+ retval = 0;
+ goto exit;
}
- start = (local_min + align - 1) & ~(align - 1);
- for (; start + size - 1 <= local_max; start += align) {
- *new = request_mem_region_exclusive(start, size,
- dev_n);
- if (*new)
- return 0;
- }
+ __release_region(iter, start, size);
}
}
- return -ENXIO;
+exit:
+ up(&hyperv_mmio_lock);
+ return retval;
}
EXPORT_SYMBOL_GPL(vmbus_allocate_mmio);
/**
+ * vmbus_free_mmio() - Free a memory-mapped I/O range.
+ * @start: Base address of region to release.
+ * @size: Size of the range to be released.
+ *
+ * This function releases anything requested by
+ * vmbus_allocate_mmio().
+ */
+void vmbus_free_mmio(resource_size_t start, resource_size_t size)
+{
+ struct resource *iter;
+
+ down(&hyperv_mmio_lock);
+ for (iter = hyperv_mmio; iter; iter = iter->sibling) {
+ if ((iter->start >= start + size) || (iter->end <= start))
+ continue;
+
+ __release_region(iter, start, size);
+ }
+ release_mem_region(start, size);
+ up(&hyperv_mmio_lock);
+
+}
+EXPORT_SYMBOL_GPL(vmbus_free_mmio);
+
+/**
* vmbus_cpu_number_to_vp_number() - Map CPU to VP.
* @cpu_number: CPU number in Linux terms
*
@@ -1219,8 +1285,10 @@ static int vmbus_acpi_add(struct acpi_device *device)
if (ACPI_FAILURE(result))
continue;
- if (hyperv_mmio)
+ if (hyperv_mmio) {
+ vmbus_reserve_fb();
break;
+ }
}
ret_val = 0;
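
vmbus_reserve_fb() above uses a shrink-until-it-fits loop against the MMIO resource tree; the general pattern, as a hedged sketch with invented names, is:

#include <linux/ioport.h>

/* Hedged sketch: try to claim a region under @parent starting at @base,
 * halving the size until __request_region() succeeds or we give up. */
static struct resource *example_reserve(struct resource *parent,
					resource_size_t base,
					resource_size_t max_size)
{
	struct resource *res = NULL;
	resource_size_t size;

	for (size = max_size; !res && size >= 0x100000; size >>= 1)
		res = __request_region(parent, base, size, "example", 0);

	return res;	/* NULL if even the smallest attempt failed */
}
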
diff --git a/drivers/hwmon/Kconfig b/drivers/hwmon/Kconfig
index 5c2d13a687aa..ff940075bb90 100644
--- a/drivers/hwmon/Kconfig
+++ b/drivers/hwmon/Kconfig
@@ -288,7 +288,7 @@ config SENSORS_K10TEMP
config SENSORS_FAM15H_POWER
tristate "AMD Family 15h processor power"
- depends on X86 && PCI
+ depends on X86 && PCI && CPU_SUP_AMD
help
If you say yes here you get support for processor power
information of your AMD family 15h CPU.
@@ -621,7 +621,8 @@ config SENSORS_IT87
If you say yes here you get support for ITE IT8705F, IT8712F, IT8716F,
IT8718F, IT8720F, IT8721F, IT8726F, IT8728F, IT8732F, IT8758E,
IT8771E, IT8772E, IT8781F, IT8782F, IT8783E/F, IT8786E, IT8790E,
- IT8603E, IT8620E, and IT8623E sensor chips, and the SiS950 clone.
+ IT8603E, IT8620E, IT8623E, and IT8628E sensor chips, and the SiS950
+ clone.
This driver can also be built as a module. If so, the module
will be called it87.
@@ -821,6 +822,16 @@ config SENSORS_MAX197
This driver can also be built as a module. If so, the module
will be called max197.
+config SENSORS_MAX31722
+tristate "MAX31722 temperature sensor"
+ depends on SPI
+ help
+ Support for the Maxim Integrated MAX31722/MAX31723 digital
+ thermometers/thermostats operating over an SPI interface.
+
+ This driver can also be built as a module. If so, the module
+ will be called max31722.
+
config SENSORS_MAX6639
tristate "Maxim MAX6639 sensor chip"
depends on I2C
diff --git a/drivers/hwmon/Makefile b/drivers/hwmon/Makefile
index 58cc3acba7e7..2ef5b7c4c54f 100644
--- a/drivers/hwmon/Makefile
+++ b/drivers/hwmon/Makefile
@@ -112,6 +112,7 @@ obj-$(CONFIG_SENSORS_MAX16065) += max16065.o
obj-$(CONFIG_SENSORS_MAX1619) += max1619.o
obj-$(CONFIG_SENSORS_MAX1668) += max1668.o
obj-$(CONFIG_SENSORS_MAX197) += max197.o
+obj-$(CONFIG_SENSORS_MAX31722) += max31722.o
obj-$(CONFIG_SENSORS_MAX6639) += max6639.o
obj-$(CONFIG_SENSORS_MAX6642) += max6642.o
obj-$(CONFIG_SENSORS_MAX6650) += max6650.o
diff --git a/drivers/hwmon/ads7828.c b/drivers/hwmon/ads7828.c
index 6c99ee7bafa3..ee396ff167d9 100644
--- a/drivers/hwmon/ads7828.c
+++ b/drivers/hwmon/ads7828.c
@@ -120,6 +120,7 @@ static int ads7828_probe(struct i2c_client *client,
unsigned int vref_mv = ADS7828_INT_VREF_MV;
bool diff_input = false;
bool ext_vref = false;
+ unsigned int regval;
data = devm_kzalloc(dev, sizeof(struct ads7828_data), GFP_KERNEL);
if (!data)
@@ -154,6 +155,15 @@ static int ads7828_probe(struct i2c_client *client,
if (!diff_input)
data->cmd_byte |= ADS7828_CMD_SD_SE;
+ /*
+ * The datasheet specifies that the internal reference voltage is
+ * disabled by default. It needs to be enabled and allowed to settle
+ * before the ADC returns valid data, so perform a dummy read to
+ * enable the internal reference voltage.
+ */
+ if (!ext_vref)
+ regmap_read(data->regmap, data->cmd_byte, &regval);
+
hwmon_dev = devm_hwmon_device_register_with_groups(dev, client->name,
data,
ads7828_groups);
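
The probe change above amounts to a single throwaway conversion that powers up the internal reference; a hedged sketch of the step (regmap and command byte assumed to exist already):

#include <linux/regmap.h>

/* Hedged sketch: issue one conversion whose result is ignored, purely to
 * switch on the internal reference before real readings are taken. */
static void example_enable_internal_vref(struct regmap *map, unsigned int cmd)
{
	unsigned int dummy;

	regmap_read(map, cmd, &dummy);	/* value intentionally discarded */
}
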
diff --git a/drivers/hwmon/emc2103.c b/drivers/hwmon/emc2103.c
index 952fe692d764..24e395c5907d 100644
--- a/drivers/hwmon/emc2103.c
+++ b/drivers/hwmon/emc2103.c
@@ -58,7 +58,7 @@ static const u8 REG_TEMP_MAX[4] = { 0x34, 0x30, 0x31, 0x32 };
*/
static int apd = -1;
module_param(apd, bint, 0);
-MODULE_PARM_DESC(init, "Set to zero to disable anti-parallel diode mode");
+MODULE_PARM_DESC(apd, "Set to zero to disable anti-parallel diode mode");
struct temperature {
s8 degrees;
diff --git a/drivers/hwmon/fam15h_power.c b/drivers/hwmon/fam15h_power.c
index 4f695d8fcafa..eb97a9241d17 100644
--- a/drivers/hwmon/fam15h_power.c
+++ b/drivers/hwmon/fam15h_power.c
@@ -1,7 +1,7 @@
/*
* fam15h_power.c - AMD Family 15h processor power monitoring
*
- * Copyright (c) 2011 Advanced Micro Devices, Inc.
+ * Copyright (c) 2011-2016 Advanced Micro Devices, Inc.
* Author: Andreas Herrmann <herrmann.der.user@googlemail.com>
*
*
@@ -25,6 +25,10 @@
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/bitops.h>
+#include <linux/cpu.h>
+#include <linux/cpumask.h>
+#include <linux/time.h>
+#include <linux/sched.h>
#include <asm/processor.h>
#include <asm/msr.h>
@@ -44,8 +48,14 @@ MODULE_LICENSE("GPL");
#define FAM15H_MIN_NUM_ATTRS 2
#define FAM15H_NUM_GROUPS 2
+#define MAX_CUS 8
+/* set the maximum interval to 1 second */
+#define MAX_INTERVAL 1000
+
+#define MSR_F15H_CU_PWR_ACCUMULATOR 0xc001007a
#define MSR_F15H_CU_MAX_PWR_ACCUMULATOR 0xc001007b
+#define MSR_F15H_PTSC 0xc0010280
#define PCI_DEVICE_ID_AMD_15H_M70H_NB_F4 0x15b4
@@ -59,8 +69,20 @@ struct fam15h_power_data {
struct attribute_group group;
/* maximum accumulated power of a compute unit */
u64 max_cu_acc_power;
+ /* accumulated power of the compute units */
+ u64 cu_acc_power[MAX_CUS];
+ /* performance timestamp counter */
+ u64 cpu_sw_pwr_ptsc[MAX_CUS];
+ /* online/offline status of current compute unit */
+ int cu_on[MAX_CUS];
+ unsigned long power_period;
};
+static bool is_carrizo_or_later(void)
+{
+ return boot_cpu_data.x86 == 0x15 && boot_cpu_data.x86_model >= 0x60;
+}
+
static ssize_t show_power(struct device *dev,
struct device_attribute *attr, char *buf)
{
@@ -77,7 +99,7 @@ static ssize_t show_power(struct device *dev,
* On Carrizo and later platforms, TdpRunAvgAccCap bit field
* is extended to 4:31 from 4:25.
*/
- if (boot_cpu_data.x86 == 0x15 && boot_cpu_data.x86_model >= 0x60) {
+ if (is_carrizo_or_later()) {
running_avg_capture = val >> 4;
running_avg_capture = sign_extend32(running_avg_capture, 27);
} else {
@@ -94,7 +116,7 @@ static ssize_t show_power(struct device *dev,
* On Carrizo and later platforms, ApmTdpLimit bit field
* is extended to 16:31 from 16:28.
*/
- if (boot_cpu_data.x86 == 0x15 && boot_cpu_data.x86_model >= 0x60)
+ if (is_carrizo_or_later())
tdp_limit = val >> 16;
else
tdp_limit = (val >> 16) & 0x1fff;
@@ -125,6 +147,167 @@ static ssize_t show_power_crit(struct device *dev,
}
static DEVICE_ATTR(power1_crit, S_IRUGO, show_power_crit, NULL);
+static void do_read_registers_on_cu(void *_data)
+{
+ struct fam15h_power_data *data = _data;
+ int cpu, cu;
+
+ cpu = smp_processor_id();
+
+ /*
+ * With the new x86 topology modelling, the CPU core ID is
+ * actually the compute unit ID.
+ */
+ cu = cpu_data(cpu).cpu_core_id;
+
+ rdmsrl_safe(MSR_F15H_CU_PWR_ACCUMULATOR, &data->cu_acc_power[cu]);
+ rdmsrl_safe(MSR_F15H_PTSC, &data->cpu_sw_pwr_ptsc[cu]);
+
+ data->cu_on[cu] = 1;
+}
+
+/*
+ * This function is only able to be called when CPUID
+ * Fn8000_0007:EDX[12] is set.
+ */
+static int read_registers(struct fam15h_power_data *data)
+{
+ int this_cpu, ret, cpu;
+ int core, this_core;
+ cpumask_var_t mask;
+
+ ret = zalloc_cpumask_var(&mask, GFP_KERNEL);
+ if (!ret)
+ return -ENOMEM;
+
+ memset(data->cu_on, 0, sizeof(int) * MAX_CUS);
+
+ get_online_cpus();
+ this_cpu = smp_processor_id();
+
+ /*
+ * Choose the first online core of each compute unit, and then
+ * read their power and PTSC MSR values in a single IPI,
+ * because the MSR values of a CPU core represent the compute
+ * unit's.
+ */
+ core = -1;
+
+ for_each_online_cpu(cpu) {
+ this_core = topology_core_id(cpu);
+
+ if (this_core == core)
+ continue;
+
+ core = this_core;
+
+ /* get any CPU on this compute unit */
+ cpumask_set_cpu(cpumask_any(topology_sibling_cpumask(cpu)), mask);
+ }
+
+ if (cpumask_test_cpu(this_cpu, mask))
+ do_read_registers_on_cu(data);
+
+ smp_call_function_many(mask, do_read_registers_on_cu, data, true);
+ put_online_cpus();
+
+ free_cpumask_var(mask);
+
+ return 0;
+}
+
+static ssize_t acc_show_power(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct fam15h_power_data *data = dev_get_drvdata(dev);
+ u64 prev_cu_acc_power[MAX_CUS], prev_ptsc[MAX_CUS],
+ jdelta[MAX_CUS];
+ u64 tdelta, avg_acc;
+ int cu, cu_num, ret;
+ signed long leftover;
+
+ /*
+ * With the new x86 topology modelling, x86_max_cores is the
+ * compute unit number.
+ */
+ cu_num = boot_cpu_data.x86_max_cores;
+
+ ret = read_registers(data);
+ if (ret)
+ return 0;
+
+ for (cu = 0; cu < cu_num; cu++) {
+ prev_cu_acc_power[cu] = data->cu_acc_power[cu];
+ prev_ptsc[cu] = data->cpu_sw_pwr_ptsc[cu];
+ }
+
+ leftover = schedule_timeout_interruptible(msecs_to_jiffies(data->power_period));
+ if (leftover)
+ return 0;
+
+ ret = read_registers(data);
+ if (ret)
+ return 0;
+
+ for (cu = 0, avg_acc = 0; cu < cu_num; cu++) {
+ /* check if current compute unit is online */
+ if (data->cu_on[cu] == 0)
+ continue;
+
+ if (data->cu_acc_power[cu] < prev_cu_acc_power[cu]) {
+ jdelta[cu] = data->max_cu_acc_power + data->cu_acc_power[cu];
+ jdelta[cu] -= prev_cu_acc_power[cu];
+ } else {
+ jdelta[cu] = data->cu_acc_power[cu] - prev_cu_acc_power[cu];
+ }
+ tdelta = data->cpu_sw_pwr_ptsc[cu] - prev_ptsc[cu];
+ jdelta[cu] *= data->cpu_pwr_sample_ratio * 1000;
+ do_div(jdelta[cu], tdelta);
+
+ /* the unit is microwatts */
+ avg_acc += jdelta[cu];
+ }
+
+ return sprintf(buf, "%llu\n", (unsigned long long)avg_acc);
+}
+static DEVICE_ATTR(power1_average, S_IRUGO, acc_show_power, NULL);
+
+static ssize_t acc_show_power_period(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct fam15h_power_data *data = dev_get_drvdata(dev);
+
+ return sprintf(buf, "%lu\n", data->power_period);
+}
+
+static ssize_t acc_set_power_period(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct fam15h_power_data *data = dev_get_drvdata(dev);
+ unsigned long temp;
+ int ret;
+
+ ret = kstrtoul(buf, 10, &temp);
+ if (ret)
+ return ret;
+
+ if (temp > MAX_INTERVAL)
+ return -EINVAL;
+
+ /* the interval value should be greater than 0 */
+ if (temp <= 0)
+ return -EINVAL;
+
+ data->power_period = temp;
+
+ return count;
+}
+static DEVICE_ATTR(power1_average_interval, S_IRUGO | S_IWUSR,
+ acc_show_power_period, acc_set_power_period);
+
static int fam15h_power_init_attrs(struct pci_dev *pdev,
struct fam15h_power_data *data)
{
@@ -137,6 +320,10 @@ static int fam15h_power_init_attrs(struct pci_dev *pdev,
(c->x86_model >= 0x60 && c->x86_model <= 0x7f)))
n += 1;
+ /* check if processor supports accumulated power */
+ if (boot_cpu_has(X86_FEATURE_ACC_POWER))
+ n += 2;
+
fam15h_power_attrs = devm_kcalloc(&pdev->dev, n,
sizeof(*fam15h_power_attrs),
GFP_KERNEL);
@@ -151,6 +338,11 @@ static int fam15h_power_init_attrs(struct pci_dev *pdev,
(c->x86_model >= 0x60 && c->x86_model <= 0x7f)))
fam15h_power_attrs[n++] = &dev_attr_power1_input.attr;
+ if (boot_cpu_has(X86_FEATURE_ACC_POWER)) {
+ fam15h_power_attrs[n++] = &dev_attr_power1_average.attr;
+ fam15h_power_attrs[n++] = &dev_attr_power1_average_interval.attr;
+ }
+
data->group.attrs = fam15h_power_attrs;
return 0;
@@ -216,7 +408,7 @@ static int fam15h_power_resume(struct pci_dev *pdev)
static int fam15h_power_init_data(struct pci_dev *f4,
struct fam15h_power_data *data)
{
- u32 val, eax, ebx, ecx, edx;
+ u32 val;
u64 tmp;
int ret;
@@ -243,10 +435,9 @@ static int fam15h_power_init_data(struct pci_dev *f4,
if (ret)
return ret;
- cpuid(0x80000007, &eax, &ebx, &ecx, &edx);
/* CPUID Fn8000_0007:EDX[12] indicates to support accumulated power */
- if (!(edx & BIT(12)))
+ if (!boot_cpu_has(X86_FEATURE_ACC_POWER))
return 0;
/*
@@ -254,7 +445,7 @@ static int fam15h_power_init_data(struct pci_dev *f4,
* sample period to the PTSC counter period by executing CPUID
* Fn8000_0007:ECX
*/
- data->cpu_pwr_sample_ratio = ecx;
+ data->cpu_pwr_sample_ratio = cpuid_ecx(0x80000007);
if (rdmsrl_safe(MSR_F15H_CU_MAX_PWR_ACCUMULATOR, &tmp)) {
pr_err("Failed to read max compute unit power accumulator MSR\n");
@@ -263,7 +454,15 @@ static int fam15h_power_init_data(struct pci_dev *f4,
data->max_cu_acc_power = tmp;
- return 0;
+ /*
+ * Milliseconds are a reasonable interval for the measurement.
+ * But it shouldn't be set too long here, because several seconds
+ * would cause the read function to hang. So set the default
+ * interval to 10 ms.
+ */
+ data->power_period = 10;
+
+ return read_registers(data);
}
static int fam15h_power_probe(struct pci_dev *pdev,
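
The averaging in acc_show_power() scales the accumulated-power delta by the sample ratio and divides by the PTSC delta, wrapping through max_cu_acc_power when the counter rolled over. A hedged arithmetic sketch of that calculation follows; div64_u64() is used here for clarity, while the driver itself uses do_div(), and every parameter is an illustrative stand-in for the driver's fields.

#include <linux/math64.h>
#include <linux/types.h>

/* Hedged sketch of the per-compute-unit average power, in microwatts. */
static u64 example_avg_power_uw(u64 acc_new, u64 acc_old, u64 acc_max,
				u64 ptsc_new, u64 ptsc_old, u32 ratio)
{
	/* accumulated-power delta, allowing for one counter wrap */
	u64 delta = (acc_new >= acc_old) ? acc_new - acc_old
					 : acc_max + acc_new - acc_old;
	u64 tdelta = ptsc_new - ptsc_old;

	delta *= (u64)ratio * 1000;		/* scale before dividing */
	return div64_u64(delta, tdelta);	/* microwatts */
}
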
diff --git a/drivers/hwmon/it87.c b/drivers/hwmon/it87.c
index 1896e26df634..730d84028260 100644
--- a/drivers/hwmon/it87.c
+++ b/drivers/hwmon/it87.c
@@ -13,6 +13,7 @@
* Supports: IT8603E Super I/O chip w/LPC interface
* IT8620E Super I/O chip w/LPC interface
* IT8623E Super I/O chip w/LPC interface
+ * IT8628E Super I/O chip w/LPC interface
* IT8705F Super I/O chip w/LPC interface
* IT8712F Super I/O chip w/LPC interface
* IT8716F Super I/O chip w/LPC interface
@@ -44,14 +45,11 @@
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
*/
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#include <linux/bitops.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/slab.h>
@@ -72,17 +70,18 @@
enum chips { it87, it8712, it8716, it8718, it8720, it8721, it8728, it8732,
it8771, it8772, it8781, it8782, it8783, it8786, it8790, it8603,
- it8620 };
+ it8620, it8628 };
static unsigned short force_id;
module_param(force_id, ushort, 0);
MODULE_PARM_DESC(force_id, "Override the detected device ID");
-static struct platform_device *pdev;
+static struct platform_device *it87_pdev[2];
+
+#define REG_2E 0x2e /* The register to read/write */
+#define REG_4E 0x4e /* Secondary register to read/write */
-#define REG 0x2e /* The register to read/write */
#define DEV 0x07 /* Register: Logical device select */
-#define VAL 0x2f /* The value to read/write */
#define PME 0x04 /* The device with the fan registers in it */
/* The device with the IT8718F/IT8720F VID value in it */
@@ -91,54 +90,55 @@ static struct platform_device *pdev;
#define DEVID 0x20 /* Register: Device ID */
#define DEVREV 0x22 /* Register: Device Revision */
-static inline int superio_inb(int reg)
+static inline int superio_inb(int ioreg, int reg)
{
- outb(reg, REG);
- return inb(VAL);
+ outb(reg, ioreg);
+ return inb(ioreg + 1);
}
-static inline void superio_outb(int reg, int val)
+static inline void superio_outb(int ioreg, int reg, int val)
{
- outb(reg, REG);
- outb(val, VAL);
+ outb(reg, ioreg);
+ outb(val, ioreg + 1);
}
-static int superio_inw(int reg)
+static int superio_inw(int ioreg, int reg)
{
int val;
- outb(reg++, REG);
- val = inb(VAL) << 8;
- outb(reg, REG);
- val |= inb(VAL);
+
+ outb(reg++, ioreg);
+ val = inb(ioreg + 1) << 8;
+ outb(reg, ioreg);
+ val |= inb(ioreg + 1);
return val;
}
-static inline void superio_select(int ldn)
+static inline void superio_select(int ioreg, int ldn)
{
- outb(DEV, REG);
- outb(ldn, VAL);
+ outb(DEV, ioreg);
+ outb(ldn, ioreg + 1);
}
-static inline int superio_enter(void)
+static inline int superio_enter(int ioreg)
{
/*
- * Try to reserve REG and REG + 1 for exclusive access.
+ * Try to reserve ioreg and ioreg + 1 for exclusive access.
*/
- if (!request_muxed_region(REG, 2, DRVNAME))
+ if (!request_muxed_region(ioreg, 2, DRVNAME))
return -EBUSY;
- outb(0x87, REG);
- outb(0x01, REG);
- outb(0x55, REG);
- outb(0x55, REG);
+ outb(0x87, ioreg);
+ outb(0x01, ioreg);
+ outb(0x55, ioreg);
+ outb(ioreg == REG_4E ? 0xaa : 0x55, ioreg);
return 0;
}
-static inline void superio_exit(void)
+static inline void superio_exit(int ioreg)
{
- outb(0x02, REG);
- outb(0x02, VAL);
- release_region(REG, 2);
+ outb(0x02, ioreg);
+ outb(0x02, ioreg + 1);
+ release_region(ioreg, 2);
}
/* Logical device 4 registers */
@@ -161,6 +161,7 @@ static inline void superio_exit(void)
#define IT8603E_DEVID 0x8603
#define IT8620E_DEVID 0x8620
#define IT8623E_DEVID 0x8623
+#define IT8628E_DEVID 0x8628
#define IT87_ACT_REG 0x30
#define IT87_BASE_REG 0x60
@@ -168,6 +169,7 @@ static inline void superio_exit(void)
#define IT87_SIO_GPIO1_REG 0x25
#define IT87_SIO_GPIO2_REG 0x26
#define IT87_SIO_GPIO3_REG 0x27
+#define IT87_SIO_GPIO4_REG 0x28
#define IT87_SIO_GPIO5_REG 0x29
#define IT87_SIO_PINX1_REG 0x2a /* Pin selection */
#define IT87_SIO_PINX2_REG 0x2c /* Pin selection */
@@ -217,7 +219,12 @@ static bool fix_pwm_polarity;
#define IT87_REG_FAN_DIV 0x0b
#define IT87_REG_FAN_16BIT 0x0c
-/* Monitors: 9 voltage (0 to 7, battery), 3 temp (1 to 3), 3 fan (1 to 3) */
+/*
+ * Monitors:
+ * - up to 13 voltage (0 to 7, battery, avcc, 10 to 12)
+ * - up to 6 temp (1 to 6)
+ * - up to 6 fan (1 to 6)
+ */
static const u8 IT87_REG_FAN[] = { 0x0d, 0x0e, 0x0f, 0x80, 0x82, 0x4c };
static const u8 IT87_REG_FAN_MIN[] = { 0x10, 0x11, 0x12, 0x84, 0x86, 0x4e };
@@ -227,10 +234,12 @@ static const u8 IT87_REG_TEMP_OFFSET[] = { 0x56, 0x57, 0x59 };
#define IT87_REG_FAN_MAIN_CTRL 0x13
#define IT87_REG_FAN_CTL 0x14
-#define IT87_REG_PWM(nr) (0x15 + (nr))
-#define IT87_REG_PWM_DUTY(nr) (0x63 + (nr) * 8)
+static const u8 IT87_REG_PWM[] = { 0x15, 0x16, 0x17, 0x7f, 0xa7, 0xaf };
+static const u8 IT87_REG_PWM_DUTY[] = { 0x63, 0x6b, 0x73, 0x7b, 0xa3, 0xab };
+
+static const u8 IT87_REG_VIN[] = { 0x20, 0x21, 0x22, 0x23, 0x24, 0x25, 0x26,
+ 0x27, 0x28, 0x2f, 0x2c, 0x2d, 0x2e };
-#define IT87_REG_VIN(nr) (0x20 + (nr))
#define IT87_REG_TEMP(nr) (0x29 + (nr))
#define IT87_REG_VIN_MAX(nr) (0x30 + (nr) * 2)
@@ -245,30 +254,48 @@ static const u8 IT87_REG_TEMP_OFFSET[] = { 0x56, 0x57, 0x59 };
#define IT87_REG_CHIPID 0x58
-#define IT87_REG_AUTO_TEMP(nr, i) (0x60 + (nr) * 8 + (i))
-#define IT87_REG_AUTO_PWM(nr, i) (0x65 + (nr) * 8 + (i))
+static const u8 IT87_REG_AUTO_BASE[] = { 0x60, 0x68, 0x70, 0x78, 0xa0, 0xa8 };
+
+#define IT87_REG_AUTO_TEMP(nr, i) (IT87_REG_AUTO_BASE[nr] + (i))
+#define IT87_REG_AUTO_PWM(nr, i) (IT87_REG_AUTO_BASE[nr] + 5 + (i))
+
+#define IT87_REG_TEMP456_ENABLE 0x77
+
+#define NUM_VIN ARRAY_SIZE(IT87_REG_VIN)
+#define NUM_VIN_LIMIT 8
+#define NUM_TEMP 6
+#define NUM_TEMP_OFFSET ARRAY_SIZE(IT87_REG_TEMP_OFFSET)
+#define NUM_TEMP_LIMIT 3
+#define NUM_FAN ARRAY_SIZE(IT87_REG_FAN)
+#define NUM_FAN_DIV 3
+#define NUM_PWM ARRAY_SIZE(IT87_REG_PWM)
+#define NUM_AUTO_PWM ARRAY_SIZE(IT87_REG_PWM)
struct it87_devices {
const char *name;
const char * const suffix;
- u16 features;
+ u32 features;
u8 peci_mask;
u8 old_peci_mask;
};
-#define FEAT_12MV_ADC (1 << 0)
-#define FEAT_NEWER_AUTOPWM (1 << 1)
-#define FEAT_OLD_AUTOPWM (1 << 2)
-#define FEAT_16BIT_FANS (1 << 3)
-#define FEAT_TEMP_OFFSET (1 << 4)
-#define FEAT_TEMP_PECI (1 << 5)
-#define FEAT_TEMP_OLD_PECI (1 << 6)
-#define FEAT_FAN16_CONFIG (1 << 7) /* Need to enable 16-bit fans */
-#define FEAT_FIVE_FANS (1 << 8) /* Supports five fans */
-#define FEAT_VID (1 << 9) /* Set if chip supports VID */
-#define FEAT_IN7_INTERNAL (1 << 10) /* Set if in7 is internal */
-#define FEAT_SIX_FANS (1 << 11) /* Supports six fans */
-#define FEAT_10_9MV_ADC (1 << 12)
+#define FEAT_12MV_ADC BIT(0)
+#define FEAT_NEWER_AUTOPWM BIT(1)
+#define FEAT_OLD_AUTOPWM BIT(2)
+#define FEAT_16BIT_FANS BIT(3)
+#define FEAT_TEMP_OFFSET BIT(4)
+#define FEAT_TEMP_PECI BIT(5)
+#define FEAT_TEMP_OLD_PECI BIT(6)
+#define FEAT_FAN16_CONFIG BIT(7) /* Need to enable 16-bit fans */
+#define FEAT_FIVE_FANS BIT(8) /* Supports five fans */
+#define FEAT_VID BIT(9) /* Set if chip supports VID */
+#define FEAT_IN7_INTERNAL BIT(10) /* Set if in7 is internal */
+#define FEAT_SIX_FANS BIT(11) /* Supports six fans */
+#define FEAT_10_9MV_ADC BIT(12)
+#define FEAT_AVCC3 BIT(13) /* Chip supports in9/AVCC3 */
+#define FEAT_SIX_PWM BIT(14) /* Chip supports 6 pwm chn */
+#define FEAT_PWM_FREQ2 BIT(15) /* Separate pwm freq 2 */
+#define FEAT_SIX_TEMP BIT(16) /* Up to 6 temp sensors */
static const struct it87_devices it87_devices[] = {
[it87] = {
@@ -286,20 +313,22 @@ static const struct it87_devices it87_devices[] = {
.name = "it8716",
.suffix = "F",
.features = FEAT_16BIT_FANS | FEAT_TEMP_OFFSET | FEAT_VID
- | FEAT_FAN16_CONFIG | FEAT_FIVE_FANS,
+ | FEAT_FAN16_CONFIG | FEAT_FIVE_FANS | FEAT_PWM_FREQ2,
},
[it8718] = {
.name = "it8718",
.suffix = "F",
.features = FEAT_16BIT_FANS | FEAT_TEMP_OFFSET | FEAT_VID
- | FEAT_TEMP_OLD_PECI | FEAT_FAN16_CONFIG | FEAT_FIVE_FANS,
+ | FEAT_TEMP_OLD_PECI | FEAT_FAN16_CONFIG | FEAT_FIVE_FANS
+ | FEAT_PWM_FREQ2,
.old_peci_mask = 0x4,
},
[it8720] = {
.name = "it8720",
.suffix = "F",
.features = FEAT_16BIT_FANS | FEAT_TEMP_OFFSET | FEAT_VID
- | FEAT_TEMP_OLD_PECI | FEAT_FAN16_CONFIG | FEAT_FIVE_FANS,
+ | FEAT_TEMP_OLD_PECI | FEAT_FAN16_CONFIG | FEAT_FIVE_FANS
+ | FEAT_PWM_FREQ2,
.old_peci_mask = 0x4,
},
[it8721] = {
@@ -307,7 +336,8 @@ static const struct it87_devices it87_devices[] = {
.suffix = "F",
.features = FEAT_NEWER_AUTOPWM | FEAT_12MV_ADC | FEAT_16BIT_FANS
| FEAT_TEMP_OFFSET | FEAT_TEMP_OLD_PECI | FEAT_TEMP_PECI
- | FEAT_FAN16_CONFIG | FEAT_FIVE_FANS | FEAT_IN7_INTERNAL,
+ | FEAT_FAN16_CONFIG | FEAT_FIVE_FANS | FEAT_IN7_INTERNAL
+ | FEAT_PWM_FREQ2,
.peci_mask = 0x05,
.old_peci_mask = 0x02, /* Actually reports PCH */
},
@@ -316,7 +346,7 @@ static const struct it87_devices it87_devices[] = {
.suffix = "F",
.features = FEAT_NEWER_AUTOPWM | FEAT_12MV_ADC | FEAT_16BIT_FANS
| FEAT_TEMP_OFFSET | FEAT_TEMP_PECI | FEAT_FIVE_FANS
- | FEAT_IN7_INTERNAL,
+ | FEAT_IN7_INTERNAL | FEAT_PWM_FREQ2,
.peci_mask = 0x07,
},
[it8732] = {
@@ -332,7 +362,8 @@ static const struct it87_devices it87_devices[] = {
.name = "it8771",
.suffix = "E",
.features = FEAT_NEWER_AUTOPWM | FEAT_12MV_ADC | FEAT_16BIT_FANS
- | FEAT_TEMP_OFFSET | FEAT_TEMP_PECI | FEAT_IN7_INTERNAL,
+ | FEAT_TEMP_OFFSET | FEAT_TEMP_PECI | FEAT_IN7_INTERNAL
+ | FEAT_PWM_FREQ2,
/* PECI: guesswork */
/* 12mV ADC (OHM) */
/* 16 bit fans (OHM) */
@@ -343,7 +374,8 @@ static const struct it87_devices it87_devices[] = {
.name = "it8772",
.suffix = "E",
.features = FEAT_NEWER_AUTOPWM | FEAT_12MV_ADC | FEAT_16BIT_FANS
- | FEAT_TEMP_OFFSET | FEAT_TEMP_PECI | FEAT_IN7_INTERNAL,
+ | FEAT_TEMP_OFFSET | FEAT_TEMP_PECI | FEAT_IN7_INTERNAL
+ | FEAT_PWM_FREQ2,
/* PECI (coreboot) */
/* 12mV ADC (HWSensors4, OHM) */
/* 16 bit fans (HWSensors4, OHM) */
@@ -354,42 +386,45 @@ static const struct it87_devices it87_devices[] = {
.name = "it8781",
.suffix = "F",
.features = FEAT_16BIT_FANS | FEAT_TEMP_OFFSET
- | FEAT_TEMP_OLD_PECI | FEAT_FAN16_CONFIG,
+ | FEAT_TEMP_OLD_PECI | FEAT_FAN16_CONFIG | FEAT_PWM_FREQ2,
.old_peci_mask = 0x4,
},
[it8782] = {
.name = "it8782",
.suffix = "F",
.features = FEAT_16BIT_FANS | FEAT_TEMP_OFFSET
- | FEAT_TEMP_OLD_PECI | FEAT_FAN16_CONFIG,
+ | FEAT_TEMP_OLD_PECI | FEAT_FAN16_CONFIG | FEAT_PWM_FREQ2,
.old_peci_mask = 0x4,
},
[it8783] = {
.name = "it8783",
.suffix = "E/F",
.features = FEAT_16BIT_FANS | FEAT_TEMP_OFFSET
- | FEAT_TEMP_OLD_PECI | FEAT_FAN16_CONFIG,
+ | FEAT_TEMP_OLD_PECI | FEAT_FAN16_CONFIG | FEAT_PWM_FREQ2,
.old_peci_mask = 0x4,
},
[it8786] = {
.name = "it8786",
.suffix = "E",
.features = FEAT_NEWER_AUTOPWM | FEAT_12MV_ADC | FEAT_16BIT_FANS
- | FEAT_TEMP_OFFSET | FEAT_TEMP_PECI | FEAT_IN7_INTERNAL,
+ | FEAT_TEMP_OFFSET | FEAT_TEMP_PECI | FEAT_IN7_INTERNAL
+ | FEAT_PWM_FREQ2,
.peci_mask = 0x07,
},
[it8790] = {
.name = "it8790",
.suffix = "E",
.features = FEAT_NEWER_AUTOPWM | FEAT_12MV_ADC | FEAT_16BIT_FANS
- | FEAT_TEMP_OFFSET | FEAT_TEMP_PECI | FEAT_IN7_INTERNAL,
+ | FEAT_TEMP_OFFSET | FEAT_TEMP_PECI | FEAT_IN7_INTERNAL
+ | FEAT_PWM_FREQ2,
.peci_mask = 0x07,
},
[it8603] = {
.name = "it8603",
.suffix = "E",
.features = FEAT_NEWER_AUTOPWM | FEAT_12MV_ADC | FEAT_16BIT_FANS
- | FEAT_TEMP_OFFSET | FEAT_TEMP_PECI | FEAT_IN7_INTERNAL,
+ | FEAT_TEMP_OFFSET | FEAT_TEMP_PECI | FEAT_IN7_INTERNAL
+ | FEAT_AVCC3 | FEAT_PWM_FREQ2,
.peci_mask = 0x07,
},
[it8620] = {
@@ -397,7 +432,17 @@ static const struct it87_devices it87_devices[] = {
.suffix = "E",
.features = FEAT_NEWER_AUTOPWM | FEAT_12MV_ADC | FEAT_16BIT_FANS
| FEAT_TEMP_OFFSET | FEAT_TEMP_PECI | FEAT_SIX_FANS
- | FEAT_IN7_INTERNAL,
+ | FEAT_IN7_INTERNAL | FEAT_SIX_PWM | FEAT_PWM_FREQ2
+ | FEAT_SIX_TEMP,
+ .peci_mask = 0x07,
+ },
+ [it8628] = {
+ .name = "it8628",
+ .suffix = "E",
+ .features = FEAT_NEWER_AUTOPWM | FEAT_12MV_ADC | FEAT_16BIT_FANS
+ | FEAT_TEMP_OFFSET | FEAT_TEMP_PECI | FEAT_SIX_FANS
+ | FEAT_IN7_INTERNAL | FEAT_SIX_PWM | FEAT_PWM_FREQ2
+ | FEAT_SIX_TEMP,
.peci_mask = 0x07,
},
};
@@ -409,16 +454,20 @@ static const struct it87_devices it87_devices[] = {
#define has_old_autopwm(data) ((data)->features & FEAT_OLD_AUTOPWM)
#define has_temp_offset(data) ((data)->features & FEAT_TEMP_OFFSET)
#define has_temp_peci(data, nr) (((data)->features & FEAT_TEMP_PECI) && \
- ((data)->peci_mask & (1 << nr)))
+ ((data)->peci_mask & BIT(nr)))
#define has_temp_old_peci(data, nr) \
(((data)->features & FEAT_TEMP_OLD_PECI) && \
- ((data)->old_peci_mask & (1 << nr)))
+ ((data)->old_peci_mask & BIT(nr)))
#define has_fan16_config(data) ((data)->features & FEAT_FAN16_CONFIG)
#define has_five_fans(data) ((data)->features & (FEAT_FIVE_FANS | \
FEAT_SIX_FANS))
#define has_vid(data) ((data)->features & FEAT_VID)
#define has_in7_internal(data) ((data)->features & FEAT_IN7_INTERNAL)
#define has_six_fans(data) ((data)->features & FEAT_SIX_FANS)
+#define has_avcc3(data) ((data)->features & FEAT_AVCC3)
+#define has_six_pwm(data) ((data)->features & FEAT_SIX_PWM)
+#define has_pwm_freq2(data) ((data)->features & FEAT_PWM_FREQ2)
+#define has_six_temp(data) ((data)->features & FEAT_SIX_TEMP)
struct it87_sio_data {
enum chips type;
@@ -440,7 +489,7 @@ struct it87_sio_data {
* The structure is dynamically allocated.
*/
struct it87_data {
- struct device *hwmon_dev;
+ const struct attribute_group *groups[7];
enum chips type;
u16 features;
u8 peci_mask;
@@ -453,17 +502,21 @@ struct it87_data {
unsigned long last_updated; /* In jiffies */
u16 in_scaled; /* Internal voltage sensors are scaled */
- u8 in[10][3]; /* [nr][0]=in, [1]=min, [2]=max */
+ u16 in_internal; /* Bitfield, internal sensors (for labels) */
+ u16 has_in; /* Bitfield, voltage sensors enabled */
+ u8 in[NUM_VIN][3]; /* [nr][0]=in, [1]=min, [2]=max */
u8 has_fan; /* Bitfield, fans enabled */
- u16 fan[6][2]; /* Register values, [nr][0]=fan, [1]=min */
+ u16 fan[NUM_FAN][2]; /* Register values, [nr][0]=fan, [1]=min */
u8 has_temp; /* Bitfield, temp sensors enabled */
- s8 temp[3][4]; /* [nr][0]=temp, [1]=min, [2]=max, [3]=offset */
+ s8 temp[NUM_TEMP][4]; /* [nr][0]=temp, [1]=min, [2]=max, [3]=offset */
u8 sensor; /* Register value (IT87_REG_TEMP_ENABLE) */
u8 extra; /* Register value (IT87_REG_TEMP_EXTRA) */
- u8 fan_div[3]; /* Register encoding, shifted right */
+ u8 fan_div[NUM_FAN_DIV];/* Register encoding, shifted right */
+ bool has_vid; /* True if VID supported */
u8 vid; /* Register encoding, combined */
u8 vrm;
u32 alarms; /* Register encoding, combined */
+ bool has_beep; /* true if beep supported */
u8 beeps; /* Register encoding */
u8 fan_main_ctrl; /* Register value */
u8 fan_ctl; /* Register value */
@@ -478,13 +531,14 @@ struct it87_data {
* is no longer needed, but it is still done to keep the driver
* simple.
*/
- u8 pwm_ctrl[3]; /* Register value */
- u8 pwm_duty[3]; /* Manual PWM value set by user */
- u8 pwm_temp_map[3]; /* PWM to temp. chan. mapping (bits 1-0) */
+ u8 has_pwm; /* Bitfield, pwm control enabled */
+ u8 pwm_ctrl[NUM_PWM]; /* Register value */
+ u8 pwm_duty[NUM_PWM]; /* Manual PWM value set by user */
+ u8 pwm_temp_map[NUM_PWM];/* PWM to temp. chan. mapping (bits 1-0) */
/* Automatic fan speed control registers */
- u8 auto_pwm[3][4]; /* [nr][3] is hard-coded */
- s8 auto_temp[3][5]; /* [nr][0] is point1_temp_hyst */
+ u8 auto_pwm[NUM_AUTO_PWM][4]; /* [nr][3] is hard-coded */
+ s8 auto_temp[NUM_AUTO_PWM][5]; /* [nr][0] is point1_temp_hyst */
};
static int adc_lsb(const struct it87_data *data, int nr)
@@ -497,7 +551,7 @@ static int adc_lsb(const struct it87_data *data, int nr)
lsb = 109;
else
lsb = 160;
- if (data->in_scaled & (1 << nr))
+ if (data->in_scaled & BIT(nr))
lsb <<= 1;
return lsb;
}
@@ -554,15 +608,16 @@ static int pwm_from_reg(const struct it87_data *data, u8 reg)
return (reg & 0x7f) << 1;
}
-
static int DIV_TO_REG(int val)
{
int answer = 0;
+
while (answer < 7 && (val >>= 1))
answer++;
return answer;
}
-#define DIV_FROM_REG(val) (1 << (val))
+
+#define DIV_FROM_REG(val) BIT(val)
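A quick worked example of the divider encoding (illustration only, not part of the patch, assuming the DIV_TO_REG()/DIV_FROM_REG() helpers above):

/*
 * Sketch, not part of the patch: DIV_TO_REG() returns the position of the
 * highest set bit, capped at 7, and DIV_FROM_REG()/BIT() inverts it.
 *   DIV_TO_REG(1)   -> 0,  DIV_FROM_REG(0) -> 1
 *   DIV_TO_REG(8)   -> 3,  DIV_FROM_REG(3) -> 8
 *   DIV_TO_REG(200) -> 7,  DIV_FROM_REG(7) -> 128  (saturates)
 */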
/*
* PWM base frequencies. The frequency has to be divided by either 128 or 256,
@@ -585,32 +640,204 @@ static const unsigned int pwm_freq[8] = {
750000,
};
-static int it87_probe(struct platform_device *pdev);
-static int it87_remove(struct platform_device *pdev);
+/*
+ * Must be called with data->update_lock held, except during initialization.
+ * We ignore the IT87 BUSY flag at this moment - it could lead to deadlocks,
+ * would slow down the IT87 access and should not be necessary.
+ */
+static int it87_read_value(struct it87_data *data, u8 reg)
+{
+ outb_p(reg, data->addr + IT87_ADDR_REG_OFFSET);
+ return inb_p(data->addr + IT87_DATA_REG_OFFSET);
+}
-static int it87_read_value(struct it87_data *data, u8 reg);
-static void it87_write_value(struct it87_data *data, u8 reg, u8 value);
-static struct it87_data *it87_update_device(struct device *dev);
-static int it87_check_pwm(struct device *dev);
-static void it87_init_device(struct platform_device *pdev);
+/*
+ * Must be called with data->update_lock held, except during initialization.
+ * We ignore the IT87 BUSY flag at this moment - it could lead to deadlocks,
+ * would slow down the IT87 access and should not be necessary.
+ */
+static void it87_write_value(struct it87_data *data, u8 reg, u8 value)
+{
+ outb_p(reg, data->addr + IT87_ADDR_REG_OFFSET);
+ outb_p(value, data->addr + IT87_DATA_REG_OFFSET);
+}
+static void it87_update_pwm_ctrl(struct it87_data *data, int nr)
+{
+ data->pwm_ctrl[nr] = it87_read_value(data, IT87_REG_PWM[nr]);
+ if (has_newer_autopwm(data)) {
+ data->pwm_temp_map[nr] = data->pwm_ctrl[nr] & 0x03;
+ data->pwm_duty[nr] = it87_read_value(data,
+ IT87_REG_PWM_DUTY[nr]);
+ } else {
+ if (data->pwm_ctrl[nr] & 0x80) /* Automatic mode */
+ data->pwm_temp_map[nr] = data->pwm_ctrl[nr] & 0x03;
+ else /* Manual mode */
+ data->pwm_duty[nr] = data->pwm_ctrl[nr] & 0x7f;
+ }
-static struct platform_driver it87_driver = {
- .driver = {
- .name = DRVNAME,
- },
- .probe = it87_probe,
- .remove = it87_remove,
-};
+ if (has_old_autopwm(data)) {
+ int i;
+
+ for (i = 0; i < 5; i++)
+ data->auto_temp[nr][i] = it87_read_value(data,
+ IT87_REG_AUTO_TEMP(nr, i));
+ for (i = 0; i < 3; i++)
+ data->auto_pwm[nr][i] = it87_read_value(data,
+ IT87_REG_AUTO_PWM(nr, i));
+ } else if (has_newer_autopwm(data)) {
+ int i;
+
+ /*
+ * 0: temperature hysteresis (base + 5)
+ * 1: fan off temperature (base + 0)
+ * 2: fan start temperature (base + 1)
+ * 3: fan max temperature (base + 2)
+ */
+ data->auto_temp[nr][0] =
+ it87_read_value(data, IT87_REG_AUTO_TEMP(nr, 5));
+
+ for (i = 0; i < 3; i++)
+ data->auto_temp[nr][i + 1] =
+ it87_read_value(data,
+ IT87_REG_AUTO_TEMP(nr, i));
+ /*
+ * 0: start pwm value (base + 3)
+ * 1: pwm slope (base + 4, 1/8th pwm)
+ */
+ data->auto_pwm[nr][0] =
+ it87_read_value(data, IT87_REG_AUTO_TEMP(nr, 3));
+ data->auto_pwm[nr][1] =
+ it87_read_value(data, IT87_REG_AUTO_TEMP(nr, 4));
+ }
+}
+
+static struct it87_data *it87_update_device(struct device *dev)
+{
+ struct it87_data *data = dev_get_drvdata(dev);
+ int i;
+
+ mutex_lock(&data->update_lock);
+
+ if (time_after(jiffies, data->last_updated + HZ + HZ / 2) ||
+ !data->valid) {
+ if (update_vbat) {
+ /*
+ * Cleared after each update, so reenable. Value
+ * returned by this read will be previous value
+ */
+ it87_write_value(data, IT87_REG_CONFIG,
+ it87_read_value(data, IT87_REG_CONFIG) | 0x40);
+ }
+ for (i = 0; i < NUM_VIN; i++) {
+ if (!(data->has_in & BIT(i)))
+ continue;
+
+ data->in[i][0] =
+ it87_read_value(data, IT87_REG_VIN[i]);
+
+ /* VBAT and AVCC don't have limit registers */
+ if (i >= NUM_VIN_LIMIT)
+ continue;
+
+ data->in[i][1] =
+ it87_read_value(data, IT87_REG_VIN_MIN(i));
+ data->in[i][2] =
+ it87_read_value(data, IT87_REG_VIN_MAX(i));
+ }
+
+ for (i = 0; i < NUM_FAN; i++) {
+ /* Skip disabled fans */
+ if (!(data->has_fan & BIT(i)))
+ continue;
+
+ data->fan[i][1] =
+ it87_read_value(data, IT87_REG_FAN_MIN[i]);
+ data->fan[i][0] = it87_read_value(data,
+ IT87_REG_FAN[i]);
+ /* Add high byte if in 16-bit mode */
+ if (has_16bit_fans(data)) {
+ data->fan[i][0] |= it87_read_value(data,
+ IT87_REG_FANX[i]) << 8;
+ data->fan[i][1] |= it87_read_value(data,
+ IT87_REG_FANX_MIN[i]) << 8;
+ }
+ }
+ for (i = 0; i < NUM_TEMP; i++) {
+ if (!(data->has_temp & BIT(i)))
+ continue;
+ data->temp[i][0] =
+ it87_read_value(data, IT87_REG_TEMP(i));
+
+ if (has_temp_offset(data) && i < NUM_TEMP_OFFSET)
+ data->temp[i][3] =
+ it87_read_value(data,
+ IT87_REG_TEMP_OFFSET[i]);
+
+ if (i >= NUM_TEMP_LIMIT)
+ continue;
+
+ data->temp[i][1] =
+ it87_read_value(data, IT87_REG_TEMP_LOW(i));
+ data->temp[i][2] =
+ it87_read_value(data, IT87_REG_TEMP_HIGH(i));
+ }
+
+ /* Newer chips don't have clock dividers */
+ if ((data->has_fan & 0x07) && !has_16bit_fans(data)) {
+ i = it87_read_value(data, IT87_REG_FAN_DIV);
+ data->fan_div[0] = i & 0x07;
+ data->fan_div[1] = (i >> 3) & 0x07;
+ data->fan_div[2] = (i & 0x40) ? 3 : 1;
+ }
+
+ data->alarms =
+ it87_read_value(data, IT87_REG_ALARM1) |
+ (it87_read_value(data, IT87_REG_ALARM2) << 8) |
+ (it87_read_value(data, IT87_REG_ALARM3) << 16);
+ data->beeps = it87_read_value(data, IT87_REG_BEEP_ENABLE);
+
+ data->fan_main_ctrl = it87_read_value(data,
+ IT87_REG_FAN_MAIN_CTRL);
+ data->fan_ctl = it87_read_value(data, IT87_REG_FAN_CTL);
+ for (i = 0; i < NUM_PWM; i++) {
+ if (!(data->has_pwm & BIT(i)))
+ continue;
+ it87_update_pwm_ctrl(data, i);
+ }
+
+ data->sensor = it87_read_value(data, IT87_REG_TEMP_ENABLE);
+ data->extra = it87_read_value(data, IT87_REG_TEMP_EXTRA);
+ /*
+ * The IT8705F does not have VID capability.
+ * The IT8718F and later don't use IT87_REG_VID for the
+ * same purpose.
+ */
+ if (data->type == it8712 || data->type == it8716) {
+ data->vid = it87_read_value(data, IT87_REG_VID);
+ /*
+ * The older IT8712F revisions had only 5 VID pins,
+ * but we assume it is always safe to read 6 bits.
+ */
+ data->vid &= 0x3f;
+ }
+ data->last_updated = jiffies;
+ data->valid = 1;
+ }
+
+ mutex_unlock(&data->update_lock);
+
+ return data;
+}
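A short note on the caching policy in the reader above (illustration only, not part of the patch, assuming the usual convention of HZ jiffies per second):

/*
 * Sketch, not part of the patch: it87_update_device() re-reads the hardware
 * registers at most once every HZ + HZ / 2 jiffies (~1.5 s), or immediately
 * when data->valid has been cleared (e.g. after clearing the intrusion
 * alarm). Measurement show() callbacks funnel through this single reader.
 */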
static ssize_t show_in(struct device *dev, struct device_attribute *attr,
char *buf)
{
struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr);
- int nr = sattr->nr;
+ struct it87_data *data = it87_update_device(dev);
int index = sattr->index;
+ int nr = sattr->nr;
- struct it87_data *data = it87_update_device(dev);
return sprintf(buf, "%d\n", in_from_reg(data, nr, data->in[nr][index]));
}
@@ -618,10 +845,9 @@ static ssize_t set_in(struct device *dev, struct device_attribute *attr,
const char *buf, size_t count)
{
struct sensor_device_attribute_2 *sattr = to_sensor_dev_attr_2(attr);
- int nr = sattr->nr;
- int index = sattr->index;
-
struct it87_data *data = dev_get_drvdata(dev);
+ int index = sattr->index;
+ int nr = sattr->nr;
unsigned long val;
if (kstrtoul(buf, 10, &val) < 0)
@@ -687,8 +913,11 @@ static SENSOR_DEVICE_ATTR_2(in7_max, S_IRUGO | S_IWUSR, show_in, set_in,
static SENSOR_DEVICE_ATTR_2(in8_input, S_IRUGO, show_in, NULL, 8, 0);
static SENSOR_DEVICE_ATTR_2(in9_input, S_IRUGO, show_in, NULL, 9, 0);
+static SENSOR_DEVICE_ATTR_2(in10_input, S_IRUGO, show_in, NULL, 10, 0);
+static SENSOR_DEVICE_ATTR_2(in11_input, S_IRUGO, show_in, NULL, 11, 0);
+static SENSOR_DEVICE_ATTR_2(in12_input, S_IRUGO, show_in, NULL, 12, 0);
-/* 3 temperatures */
+/* Up to 6 temperatures */
static ssize_t show_temp(struct device *dev, struct device_attribute *attr,
char *buf)
{
@@ -761,6 +990,9 @@ static SENSOR_DEVICE_ATTR_2(temp3_max, S_IRUGO | S_IWUSR, show_temp, set_temp,
2, 2);
static SENSOR_DEVICE_ATTR_2(temp3_offset, S_IRUGO | S_IWUSR, show_temp,
set_temp, 2, 3);
+static SENSOR_DEVICE_ATTR_2(temp4_input, S_IRUGO, show_temp, NULL, 3, 0);
+static SENSOR_DEVICE_ATTR_2(temp5_input, S_IRUGO, show_temp, NULL, 4, 0);
+static SENSOR_DEVICE_ATTR_2(temp6_input, S_IRUGO, show_temp, NULL, 5, 0);
static ssize_t show_temp_type(struct device *dev, struct device_attribute *attr,
char *buf)
@@ -771,8 +1003,8 @@ static ssize_t show_temp_type(struct device *dev, struct device_attribute *attr,
u8 reg = data->sensor; /* In case value is updated while used */
u8 extra = data->extra;
- if ((has_temp_peci(data, nr) && (reg >> 6 == nr + 1))
- || (has_temp_old_peci(data, nr) && (extra & 0x80)))
+ if ((has_temp_peci(data, nr) && (reg >> 6 == nr + 1)) ||
+ (has_temp_old_peci(data, nr) && (extra & 0x80)))
return sprintf(buf, "6\n"); /* Intel PECI */
if (reg & (1 << nr))
return sprintf(buf, "3\n"); /* thermal diode */
@@ -837,18 +1069,19 @@ static SENSOR_DEVICE_ATTR(temp2_type, S_IRUGO | S_IWUSR, show_temp_type,
static SENSOR_DEVICE_ATTR(temp3_type, S_IRUGO | S_IWUSR, show_temp_type,
set_temp_type, 2);
-/* 3 Fans */
+/* 6 Fans */
static int pwm_mode(const struct it87_data *data, int nr)
{
- int ctrl = data->fan_main_ctrl & (1 << nr);
-
- if (ctrl == 0 && data->type != it8603) /* Full speed */
- return 0;
- if (data->pwm_ctrl[nr] & 0x80) /* Automatic mode */
- return 2;
- else /* Manual mode */
- return 1;
+ if (data->type != it8603 && nr < 3 && !(data->fan_main_ctrl & BIT(nr)))
+ return 0; /* Full speed */
+ if (data->pwm_ctrl[nr] & 0x80)
+ return 2; /* Automatic mode */
+ if ((data->type == it8603 || nr >= 3) &&
+ data->pwm_duty[nr] == pwm_to_reg(data, 0xff))
+ return 0; /* Full speed */
+
+ return 1; /* Manual mode */
}
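The return value of pwm_mode() feeds pwmX_enable; a minimal summary consistent with the code above (illustration only, not part of the patch):

/*
 * Sketch, not part of the patch: pwm_mode() -> pwmX_enable value.
 *   0  fan runs at full speed: on/off mode for pwm1-3 (except IT8603E),
 *      or 100% duty cycle in manual mode for IT8603E and pwm4-6
 *   2  automatic fan speed control (bit 7 of the PWM control register set)
 *   1  manual PWM duty cycle otherwise
 */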
static ssize_t show_fan(struct device *dev, struct device_attribute *attr,
@@ -868,39 +1101,49 @@ static ssize_t show_fan(struct device *dev, struct device_attribute *attr,
}
static ssize_t show_fan_div(struct device *dev, struct device_attribute *attr,
- char *buf)
+ char *buf)
{
struct sensor_device_attribute *sensor_attr = to_sensor_dev_attr(attr);
+ struct it87_data *data = it87_update_device(dev);
int nr = sensor_attr->index;
- struct it87_data *data = it87_update_device(dev);
- return sprintf(buf, "%d\n", DIV_FROM_REG(data->fan_div[nr]));
+ return sprintf(buf, "%lu\n", DIV_FROM_REG(data->fan_div[nr]));
}
+
static ssize_t show_pwm_enable(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr, char *buf)
{
struct sensor_device_attribute *sensor_attr = to_sensor_dev_attr(attr);
+ struct it87_data *data = it87_update_device(dev);
int nr = sensor_attr->index;
- struct it87_data *data = it87_update_device(dev);
return sprintf(buf, "%d\n", pwm_mode(data, nr));
}
+
static ssize_t show_pwm(struct device *dev, struct device_attribute *attr,
- char *buf)
+ char *buf)
{
struct sensor_device_attribute *sensor_attr = to_sensor_dev_attr(attr);
+ struct it87_data *data = it87_update_device(dev);
int nr = sensor_attr->index;
- struct it87_data *data = it87_update_device(dev);
return sprintf(buf, "%d\n",
pwm_from_reg(data, data->pwm_duty[nr]));
}
+
static ssize_t show_pwm_freq(struct device *dev, struct device_attribute *attr,
- char *buf)
+ char *buf)
{
+ struct sensor_device_attribute *sensor_attr = to_sensor_dev_attr(attr);
struct it87_data *data = it87_update_device(dev);
- int index = (data->fan_ctl >> 4) & 0x07;
+ int nr = sensor_attr->index;
unsigned int freq;
+ int index;
+
+ if (has_pwm_freq2(data) && nr == 1)
+ index = (data->extra >> 4) & 0x07;
+ else
+ index = (data->fan_ctl >> 4) & 0x07;
freq = pwm_freq[index] / (has_newer_autopwm(data) ? 256 : 128);
@@ -953,12 +1196,11 @@ static ssize_t set_fan(struct device *dev, struct device_attribute *attr,
}
static ssize_t set_fan_div(struct device *dev, struct device_attribute *attr,
- const char *buf, size_t count)
+ const char *buf, size_t count)
{
struct sensor_device_attribute *sensor_attr = to_sensor_dev_attr(attr);
- int nr = sensor_attr->index;
-
struct it87_data *data = dev_get_drvdata(dev);
+ int nr = sensor_attr->index;
unsigned long val;
int min;
u8 old;
@@ -1013,6 +1255,11 @@ static int check_trip_points(struct device *dev, int nr)
if (data->auto_pwm[nr][i] > data->auto_pwm[nr][i + 1])
err = -EINVAL;
}
+ } else if (has_newer_autopwm(data)) {
+ for (i = 1; i < 3; i++) {
+ if (data->auto_temp[nr][i] > data->auto_temp[nr][i + 1])
+ err = -EINVAL;
+ }
}
if (err) {
@@ -1023,13 +1270,12 @@ static int check_trip_points(struct device *dev, int nr)
return err;
}
-static ssize_t set_pwm_enable(struct device *dev,
- struct device_attribute *attr, const char *buf, size_t count)
+static ssize_t set_pwm_enable(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
{
struct sensor_device_attribute *sensor_attr = to_sensor_dev_attr(attr);
- int nr = sensor_attr->index;
-
struct it87_data *data = dev_get_drvdata(dev);
+ int nr = sensor_attr->index;
long val;
if (kstrtol(buf, 10, &val) < 0 || val < 0 || val > 2)
@@ -1041,21 +1287,30 @@ static ssize_t set_pwm_enable(struct device *dev,
return -EINVAL;
}
- /* IT8603E does not have on/off mode */
- if (val == 0 && data->type == it8603)
- return -EINVAL;
-
mutex_lock(&data->update_lock);
if (val == 0) {
- int tmp;
- /* make sure the fan is on when in on/off mode */
- tmp = it87_read_value(data, IT87_REG_FAN_CTL);
- it87_write_value(data, IT87_REG_FAN_CTL, tmp | (1 << nr));
- /* set on/off mode */
- data->fan_main_ctrl &= ~(1 << nr);
- it87_write_value(data, IT87_REG_FAN_MAIN_CTRL,
- data->fan_main_ctrl);
+ if (nr < 3 && data->type != it8603) {
+ int tmp;
+ /* make sure the fan is on when in on/off mode */
+ tmp = it87_read_value(data, IT87_REG_FAN_CTL);
+ it87_write_value(data, IT87_REG_FAN_CTL, tmp | BIT(nr));
+ /* set on/off mode */
+ data->fan_main_ctrl &= ~BIT(nr);
+ it87_write_value(data, IT87_REG_FAN_MAIN_CTRL,
+ data->fan_main_ctrl);
+ } else {
+ /* No on/off mode, set maximum pwm value */
+ data->pwm_duty[nr] = pwm_to_reg(data, 0xff);
+ it87_write_value(data, IT87_REG_PWM_DUTY[nr],
+ data->pwm_duty[nr]);
+ /* and set manual mode */
+ data->pwm_ctrl[nr] = has_newer_autopwm(data) ?
+ data->pwm_temp_map[nr] :
+ data->pwm_duty[nr];
+ it87_write_value(data, IT87_REG_PWM[nr],
+ data->pwm_ctrl[nr]);
+ }
} else {
if (val == 1) /* Manual mode */
data->pwm_ctrl[nr] = has_newer_autopwm(data) ?
@@ -1063,11 +1318,11 @@ static ssize_t set_pwm_enable(struct device *dev,
data->pwm_duty[nr];
else /* Automatic mode */
data->pwm_ctrl[nr] = 0x80 | data->pwm_temp_map[nr];
- it87_write_value(data, IT87_REG_PWM(nr), data->pwm_ctrl[nr]);
+ it87_write_value(data, IT87_REG_PWM[nr], data->pwm_ctrl[nr]);
- if (data->type != it8603) {
+ if (data->type != it8603 && nr < 3) {
/* set SmartGuardian mode */
- data->fan_main_ctrl |= (1 << nr);
+ data->fan_main_ctrl |= BIT(nr);
it87_write_value(data, IT87_REG_FAN_MAIN_CTRL,
data->fan_main_ctrl);
}
@@ -1076,13 +1331,13 @@ static ssize_t set_pwm_enable(struct device *dev,
mutex_unlock(&data->update_lock);
return count;
}
+
static ssize_t set_pwm(struct device *dev, struct device_attribute *attr,
- const char *buf, size_t count)
+ const char *buf, size_t count)
{
struct sensor_device_attribute *sensor_attr = to_sensor_dev_attr(attr);
- int nr = sensor_attr->index;
-
struct it87_data *data = dev_get_drvdata(dev);
+ int nr = sensor_attr->index;
long val;
if (kstrtol(buf, 10, &val) < 0 || val < 0 || val > 255)
@@ -1099,7 +1354,7 @@ static ssize_t set_pwm(struct device *dev, struct device_attribute *attr,
return -EBUSY;
}
data->pwm_duty[nr] = pwm_to_reg(data, val);
- it87_write_value(data, IT87_REG_PWM_DUTY(nr),
+ it87_write_value(data, IT87_REG_PWM_DUTY[nr],
data->pwm_duty[nr]);
} else {
data->pwm_duty[nr] = pwm_to_reg(data, val);
@@ -1109,17 +1364,20 @@ static ssize_t set_pwm(struct device *dev, struct device_attribute *attr,
*/
if (!(data->pwm_ctrl[nr] & 0x80)) {
data->pwm_ctrl[nr] = data->pwm_duty[nr];
- it87_write_value(data, IT87_REG_PWM(nr),
+ it87_write_value(data, IT87_REG_PWM[nr],
data->pwm_ctrl[nr]);
}
}
mutex_unlock(&data->update_lock);
return count;
}
-static ssize_t set_pwm_freq(struct device *dev,
- struct device_attribute *attr, const char *buf, size_t count)
+
+static ssize_t set_pwm_freq(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
{
+ struct sensor_device_attribute *sensor_attr = to_sensor_dev_attr(attr);
struct it87_data *data = dev_get_drvdata(dev);
+ int nr = sensor_attr->index;
unsigned long val;
int i;
@@ -1131,63 +1389,66 @@ static ssize_t set_pwm_freq(struct device *dev,
/* Search for the nearest available frequency */
for (i = 0; i < 7; i++) {
- if (val > (pwm_freq[i] + pwm_freq[i+1]) / 2)
+ if (val > (pwm_freq[i] + pwm_freq[i + 1]) / 2)
break;
}
mutex_lock(&data->update_lock);
- data->fan_ctl = it87_read_value(data, IT87_REG_FAN_CTL) & 0x8f;
- data->fan_ctl |= i << 4;
- it87_write_value(data, IT87_REG_FAN_CTL, data->fan_ctl);
+ if (nr == 0) {
+ data->fan_ctl = it87_read_value(data, IT87_REG_FAN_CTL) & 0x8f;
+ data->fan_ctl |= i << 4;
+ it87_write_value(data, IT87_REG_FAN_CTL, data->fan_ctl);
+ } else {
+ data->extra = it87_read_value(data, IT87_REG_TEMP_EXTRA) & 0x8f;
+ data->extra |= i << 4;
+ it87_write_value(data, IT87_REG_TEMP_EXTRA, data->extra);
+ }
mutex_unlock(&data->update_lock);
return count;
}
+
static ssize_t show_pwm_temp_map(struct device *dev,
- struct device_attribute *attr, char *buf)
+ struct device_attribute *attr, char *buf)
{
struct sensor_device_attribute *sensor_attr = to_sensor_dev_attr(attr);
- int nr = sensor_attr->index;
-
struct it87_data *data = it87_update_device(dev);
+ int nr = sensor_attr->index;
int map;
- if (data->pwm_temp_map[nr] < 3)
- map = 1 << data->pwm_temp_map[nr];
- else
- map = 0; /* Should never happen */
- return sprintf(buf, "%d\n", map);
+ map = data->pwm_temp_map[nr];
+ if (map >= 3)
+ map = 0; /* Should never happen */
+ if (nr >= 3) /* pwm channels 4..6 (nr >= 3) map to temp4..6 */
+ map += 3;
+
+ return sprintf(buf, "%d\n", (int)BIT(map));
}
+
static ssize_t set_pwm_temp_map(struct device *dev,
- struct device_attribute *attr, const char *buf, size_t count)
+ struct device_attribute *attr, const char *buf,
+ size_t count)
{
struct sensor_device_attribute *sensor_attr = to_sensor_dev_attr(attr);
- int nr = sensor_attr->index;
-
struct it87_data *data = dev_get_drvdata(dev);
+ int nr = sensor_attr->index;
long val;
u8 reg;
- /*
- * This check can go away if we ever support automatic fan speed
- * control on newer chips.
- */
- if (!has_old_autopwm(data)) {
- dev_notice(dev, "Mapping change disabled for safety reasons\n");
- return -EINVAL;
- }
-
if (kstrtol(buf, 10, &val) < 0)
return -EINVAL;
+ if (nr >= 3)
+ val -= 3;
+
switch (val) {
- case (1 << 0):
+ case BIT(0):
reg = 0x00;
break;
- case (1 << 1):
+ case BIT(1):
reg = 0x01;
break;
- case (1 << 2):
+ case BIT(2):
reg = 0x02;
break;
default:
@@ -1202,14 +1463,14 @@ static ssize_t set_pwm_temp_map(struct device *dev,
*/
if (data->pwm_ctrl[nr] & 0x80) {
data->pwm_ctrl[nr] = 0x80 | data->pwm_temp_map[nr];
- it87_write_value(data, IT87_REG_PWM(nr), data->pwm_ctrl[nr]);
+ it87_write_value(data, IT87_REG_PWM[nr], data->pwm_ctrl[nr]);
}
mutex_unlock(&data->update_lock);
return count;
}
-static ssize_t show_auto_pwm(struct device *dev,
- struct device_attribute *attr, char *buf)
+static ssize_t show_auto_pwm(struct device *dev, struct device_attribute *attr,
+ char *buf)
{
struct it87_data *data = it87_update_device(dev);
struct sensor_device_attribute_2 *sensor_attr =
@@ -1221,14 +1482,15 @@ static ssize_t show_auto_pwm(struct device *dev,
pwm_from_reg(data, data->auto_pwm[nr][point]));
}
-static ssize_t set_auto_pwm(struct device *dev,
- struct device_attribute *attr, const char *buf, size_t count)
+static ssize_t set_auto_pwm(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
{
struct it87_data *data = dev_get_drvdata(dev);
struct sensor_device_attribute_2 *sensor_attr =
to_sensor_dev_attr_2(attr);
int nr = sensor_attr->nr;
int point = sensor_attr->index;
+ int regaddr;
long val;
if (kstrtol(buf, 10, &val) < 0 || val < 0 || val > 255)
@@ -1236,26 +1498,65 @@ static ssize_t set_auto_pwm(struct device *dev,
mutex_lock(&data->update_lock);
data->auto_pwm[nr][point] = pwm_to_reg(data, val);
- it87_write_value(data, IT87_REG_AUTO_PWM(nr, point),
- data->auto_pwm[nr][point]);
+ if (has_newer_autopwm(data))
+ regaddr = IT87_REG_AUTO_TEMP(nr, 3);
+ else
+ regaddr = IT87_REG_AUTO_PWM(nr, point);
+ it87_write_value(data, regaddr, data->auto_pwm[nr][point]);
mutex_unlock(&data->update_lock);
return count;
}
-static ssize_t show_auto_temp(struct device *dev,
- struct device_attribute *attr, char *buf)
+static ssize_t show_auto_pwm_slope(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct it87_data *data = it87_update_device(dev);
+ struct sensor_device_attribute *sensor_attr = to_sensor_dev_attr(attr);
+ int nr = sensor_attr->index;
+
+ return sprintf(buf, "%d\n", data->auto_pwm[nr][1] & 0x7f);
+}
+
+static ssize_t set_auto_pwm_slope(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct it87_data *data = dev_get_drvdata(dev);
+ struct sensor_device_attribute *sensor_attr = to_sensor_dev_attr(attr);
+ int nr = sensor_attr->index;
+ unsigned long val;
+
+ if (kstrtoul(buf, 10, &val) < 0 || val > 127)
+ return -EINVAL;
+
+ mutex_lock(&data->update_lock);
+ data->auto_pwm[nr][1] = (data->auto_pwm[nr][1] & 0x80) | val;
+ it87_write_value(data, IT87_REG_AUTO_TEMP(nr, 4),
+ data->auto_pwm[nr][1]);
+ mutex_unlock(&data->update_lock);
+ return count;
+}
+
+static ssize_t show_auto_temp(struct device *dev, struct device_attribute *attr,
+ char *buf)
{
struct it87_data *data = it87_update_device(dev);
struct sensor_device_attribute_2 *sensor_attr =
to_sensor_dev_attr_2(attr);
int nr = sensor_attr->nr;
int point = sensor_attr->index;
+ int reg;
+
+ if (has_old_autopwm(data) || point)
+ reg = data->auto_temp[nr][point];
+ else
+ reg = data->auto_temp[nr][1] - (data->auto_temp[nr][0] & 0x1f);
- return sprintf(buf, "%d\n", TEMP_FROM_REG(data->auto_temp[nr][point]));
+ return sprintf(buf, "%d\n", TEMP_FROM_REG(reg));
}
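A worked example of the hysteresis readout in the newer autopwm scheme (illustration only, not part of the patch, assuming TEMP_FROM_REG() reports millidegrees as elsewhere in the driver):

/*
 * Sketch, not part of the patch: with the newer autopwm layout,
 * auto_temp[nr][0] caches the raw hysteresis register (base + 5) and
 * auto_temp[nr][1] the fan-off temperature. If the fan-off temperature is
 * 45 degC and the 5-bit hysteresis field is 3, point == 0 reports
 * 45 - 3 = 42, i.e. 42000 via TEMP_FROM_REG().
 */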
-static ssize_t set_auto_temp(struct device *dev,
- struct device_attribute *attr, const char *buf, size_t count)
+static ssize_t set_auto_temp(struct device *dev, struct device_attribute *attr,
+ const char *buf, size_t count)
{
struct it87_data *data = dev_get_drvdata(dev);
struct sensor_device_attribute_2 *sensor_attr =
@@ -1263,14 +1564,24 @@ static ssize_t set_auto_temp(struct device *dev,
int nr = sensor_attr->nr;
int point = sensor_attr->index;
long val;
+ int reg;
if (kstrtol(buf, 10, &val) < 0 || val < -128000 || val > 127000)
return -EINVAL;
mutex_lock(&data->update_lock);
- data->auto_temp[nr][point] = TEMP_TO_REG(val);
- it87_write_value(data, IT87_REG_AUTO_TEMP(nr, point),
- data->auto_temp[nr][point]);
+ if (has_newer_autopwm(data) && !point) {
+ reg = data->auto_temp[nr][1] - TEMP_TO_REG(val);
+ reg = clamp_val(reg, 0, 0x1f) | (data->auto_temp[nr][0] & 0xe0);
+ data->auto_temp[nr][0] = reg;
+ it87_write_value(data, IT87_REG_AUTO_TEMP(nr, 5), reg);
+ } else {
+ reg = TEMP_TO_REG(val);
+ data->auto_temp[nr][point] = reg;
+ if (has_newer_autopwm(data))
+ point--;
+ it87_write_value(data, IT87_REG_AUTO_TEMP(nr, point), reg);
+ }
mutex_unlock(&data->update_lock);
return count;
}
@@ -1308,8 +1619,9 @@ static SENSOR_DEVICE_ATTR_2(fan6_min, S_IRUGO | S_IWUSR, show_fan, set_fan,
static SENSOR_DEVICE_ATTR(pwm1_enable, S_IRUGO | S_IWUSR,
show_pwm_enable, set_pwm_enable, 0);
static SENSOR_DEVICE_ATTR(pwm1, S_IRUGO | S_IWUSR, show_pwm, set_pwm, 0);
-static DEVICE_ATTR(pwm1_freq, S_IRUGO | S_IWUSR, show_pwm_freq, set_pwm_freq);
-static SENSOR_DEVICE_ATTR(pwm1_auto_channels_temp, S_IRUGO | S_IWUSR,
+static SENSOR_DEVICE_ATTR(pwm1_freq, S_IRUGO | S_IWUSR, show_pwm_freq,
+ set_pwm_freq, 0);
+static SENSOR_DEVICE_ATTR(pwm1_auto_channels_temp, S_IRUGO,
show_pwm_temp_map, set_pwm_temp_map, 0);
static SENSOR_DEVICE_ATTR_2(pwm1_auto_point1_pwm, S_IRUGO | S_IWUSR,
show_auto_pwm, set_auto_pwm, 0, 0);
@@ -1329,12 +1641,16 @@ static SENSOR_DEVICE_ATTR_2(pwm1_auto_point3_temp, S_IRUGO | S_IWUSR,
show_auto_temp, set_auto_temp, 0, 3);
static SENSOR_DEVICE_ATTR_2(pwm1_auto_point4_temp, S_IRUGO | S_IWUSR,
show_auto_temp, set_auto_temp, 0, 4);
+static SENSOR_DEVICE_ATTR_2(pwm1_auto_start, S_IRUGO | S_IWUSR,
+ show_auto_pwm, set_auto_pwm, 0, 0);
+static SENSOR_DEVICE_ATTR(pwm1_auto_slope, S_IRUGO | S_IWUSR,
+ show_auto_pwm_slope, set_auto_pwm_slope, 0);
static SENSOR_DEVICE_ATTR(pwm2_enable, S_IRUGO | S_IWUSR,
show_pwm_enable, set_pwm_enable, 1);
static SENSOR_DEVICE_ATTR(pwm2, S_IRUGO | S_IWUSR, show_pwm, set_pwm, 1);
-static DEVICE_ATTR(pwm2_freq, S_IRUGO, show_pwm_freq, NULL);
-static SENSOR_DEVICE_ATTR(pwm2_auto_channels_temp, S_IRUGO | S_IWUSR,
+static SENSOR_DEVICE_ATTR(pwm2_freq, S_IRUGO, show_pwm_freq, set_pwm_freq, 1);
+static SENSOR_DEVICE_ATTR(pwm2_auto_channels_temp, S_IRUGO,
show_pwm_temp_map, set_pwm_temp_map, 1);
static SENSOR_DEVICE_ATTR_2(pwm2_auto_point1_pwm, S_IRUGO | S_IWUSR,
show_auto_pwm, set_auto_pwm, 1, 0);
@@ -1354,12 +1670,16 @@ static SENSOR_DEVICE_ATTR_2(pwm2_auto_point3_temp, S_IRUGO | S_IWUSR,
show_auto_temp, set_auto_temp, 1, 3);
static SENSOR_DEVICE_ATTR_2(pwm2_auto_point4_temp, S_IRUGO | S_IWUSR,
show_auto_temp, set_auto_temp, 1, 4);
+static SENSOR_DEVICE_ATTR_2(pwm2_auto_start, S_IRUGO | S_IWUSR,
+ show_auto_pwm, set_auto_pwm, 1, 0);
+static SENSOR_DEVICE_ATTR(pwm2_auto_slope, S_IRUGO | S_IWUSR,
+ show_auto_pwm_slope, set_auto_pwm_slope, 1);
static SENSOR_DEVICE_ATTR(pwm3_enable, S_IRUGO | S_IWUSR,
show_pwm_enable, set_pwm_enable, 2);
static SENSOR_DEVICE_ATTR(pwm3, S_IRUGO | S_IWUSR, show_pwm, set_pwm, 2);
-static DEVICE_ATTR(pwm3_freq, S_IRUGO, show_pwm_freq, NULL);
-static SENSOR_DEVICE_ATTR(pwm3_auto_channels_temp, S_IRUGO | S_IWUSR,
+static SENSOR_DEVICE_ATTR(pwm3_freq, S_IRUGO, show_pwm_freq, NULL, 2);
+static SENSOR_DEVICE_ATTR(pwm3_auto_channels_temp, S_IRUGO,
show_pwm_temp_map, set_pwm_temp_map, 2);
static SENSOR_DEVICE_ATTR_2(pwm3_auto_point1_pwm, S_IRUGO | S_IWUSR,
show_auto_pwm, set_auto_pwm, 2, 0);
@@ -1379,30 +1699,94 @@ static SENSOR_DEVICE_ATTR_2(pwm3_auto_point3_temp, S_IRUGO | S_IWUSR,
show_auto_temp, set_auto_temp, 2, 3);
static SENSOR_DEVICE_ATTR_2(pwm3_auto_point4_temp, S_IRUGO | S_IWUSR,
show_auto_temp, set_auto_temp, 2, 4);
+static SENSOR_DEVICE_ATTR_2(pwm3_auto_start, S_IRUGO | S_IWUSR,
+ show_auto_pwm, set_auto_pwm, 2, 0);
+static SENSOR_DEVICE_ATTR(pwm3_auto_slope, S_IRUGO | S_IWUSR,
+ show_auto_pwm_slope, set_auto_pwm_slope, 2);
+
+static SENSOR_DEVICE_ATTR(pwm4_enable, S_IRUGO | S_IWUSR,
+ show_pwm_enable, set_pwm_enable, 3);
+static SENSOR_DEVICE_ATTR(pwm4, S_IRUGO | S_IWUSR, show_pwm, set_pwm, 3);
+static SENSOR_DEVICE_ATTR(pwm4_freq, S_IRUGO, show_pwm_freq, NULL, 3);
+static SENSOR_DEVICE_ATTR(pwm4_auto_channels_temp, S_IRUGO,
+ show_pwm_temp_map, set_pwm_temp_map, 3);
+static SENSOR_DEVICE_ATTR_2(pwm4_auto_point1_temp, S_IRUGO | S_IWUSR,
+ show_auto_temp, set_auto_temp, 2, 1);
+static SENSOR_DEVICE_ATTR_2(pwm4_auto_point1_temp_hyst, S_IRUGO | S_IWUSR,
+ show_auto_temp, set_auto_temp, 2, 0);
+static SENSOR_DEVICE_ATTR_2(pwm4_auto_point2_temp, S_IRUGO | S_IWUSR,
+ show_auto_temp, set_auto_temp, 2, 2);
+static SENSOR_DEVICE_ATTR_2(pwm4_auto_point3_temp, S_IRUGO | S_IWUSR,
+ show_auto_temp, set_auto_temp, 2, 3);
+static SENSOR_DEVICE_ATTR_2(pwm4_auto_start, S_IRUGO | S_IWUSR,
+ show_auto_pwm, set_auto_pwm, 3, 0);
+static SENSOR_DEVICE_ATTR(pwm4_auto_slope, S_IRUGO | S_IWUSR,
+ show_auto_pwm_slope, set_auto_pwm_slope, 3);
+
+static SENSOR_DEVICE_ATTR(pwm5_enable, S_IRUGO | S_IWUSR,
+ show_pwm_enable, set_pwm_enable, 4);
+static SENSOR_DEVICE_ATTR(pwm5, S_IRUGO | S_IWUSR, show_pwm, set_pwm, 4);
+static SENSOR_DEVICE_ATTR(pwm5_freq, S_IRUGO, show_pwm_freq, NULL, 4);
+static SENSOR_DEVICE_ATTR(pwm5_auto_channels_temp, S_IRUGO,
+ show_pwm_temp_map, set_pwm_temp_map, 4);
+static SENSOR_DEVICE_ATTR_2(pwm5_auto_point1_temp, S_IRUGO | S_IWUSR,
+ show_auto_temp, set_auto_temp, 2, 1);
+static SENSOR_DEVICE_ATTR_2(pwm5_auto_point1_temp_hyst, S_IRUGO | S_IWUSR,
+ show_auto_temp, set_auto_temp, 2, 0);
+static SENSOR_DEVICE_ATTR_2(pwm5_auto_point2_temp, S_IRUGO | S_IWUSR,
+ show_auto_temp, set_auto_temp, 2, 2);
+static SENSOR_DEVICE_ATTR_2(pwm5_auto_point3_temp, S_IRUGO | S_IWUSR,
+ show_auto_temp, set_auto_temp, 2, 3);
+static SENSOR_DEVICE_ATTR_2(pwm5_auto_start, S_IRUGO | S_IWUSR,
+ show_auto_pwm, set_auto_pwm, 4, 0);
+static SENSOR_DEVICE_ATTR(pwm5_auto_slope, S_IRUGO | S_IWUSR,
+ show_auto_pwm_slope, set_auto_pwm_slope, 4);
+
+static SENSOR_DEVICE_ATTR(pwm6_enable, S_IRUGO | S_IWUSR,
+ show_pwm_enable, set_pwm_enable, 5);
+static SENSOR_DEVICE_ATTR(pwm6, S_IRUGO | S_IWUSR, show_pwm, set_pwm, 5);
+static SENSOR_DEVICE_ATTR(pwm6_freq, S_IRUGO, show_pwm_freq, NULL, 5);
+static SENSOR_DEVICE_ATTR(pwm6_auto_channels_temp, S_IRUGO,
+ show_pwm_temp_map, set_pwm_temp_map, 5);
+static SENSOR_DEVICE_ATTR_2(pwm6_auto_point1_temp, S_IRUGO | S_IWUSR,
+ show_auto_temp, set_auto_temp, 2, 1);
+static SENSOR_DEVICE_ATTR_2(pwm6_auto_point1_temp_hyst, S_IRUGO | S_IWUSR,
+ show_auto_temp, set_auto_temp, 2, 0);
+static SENSOR_DEVICE_ATTR_2(pwm6_auto_point2_temp, S_IRUGO | S_IWUSR,
+ show_auto_temp, set_auto_temp, 2, 2);
+static SENSOR_DEVICE_ATTR_2(pwm6_auto_point3_temp, S_IRUGO | S_IWUSR,
+ show_auto_temp, set_auto_temp, 2, 3);
+static SENSOR_DEVICE_ATTR_2(pwm6_auto_start, S_IRUGO | S_IWUSR,
+ show_auto_pwm, set_auto_pwm, 5, 0);
+static SENSOR_DEVICE_ATTR(pwm6_auto_slope, S_IRUGO | S_IWUSR,
+ show_auto_pwm_slope, set_auto_pwm_slope, 5);
/* Alarms */
static ssize_t show_alarms(struct device *dev, struct device_attribute *attr,
- char *buf)
+ char *buf)
{
struct it87_data *data = it87_update_device(dev);
+
return sprintf(buf, "%u\n", data->alarms);
}
static DEVICE_ATTR(alarms, S_IRUGO, show_alarms, NULL);
static ssize_t show_alarm(struct device *dev, struct device_attribute *attr,
- char *buf)
+ char *buf)
{
- int bitnr = to_sensor_dev_attr(attr)->index;
struct it87_data *data = it87_update_device(dev);
+ int bitnr = to_sensor_dev_attr(attr)->index;
+
return sprintf(buf, "%u\n", (data->alarms >> bitnr) & 1);
}
-static ssize_t clear_intrusion(struct device *dev, struct device_attribute
- *attr, const char *buf, size_t count)
+static ssize_t clear_intrusion(struct device *dev,
+ struct device_attribute *attr, const char *buf,
+ size_t count)
{
struct it87_data *data = dev_get_drvdata(dev);
- long val;
int config;
+ long val;
if (kstrtol(buf, 10, &val) < 0 || val != 0)
return -EINVAL;
@@ -1412,7 +1796,7 @@ static ssize_t clear_intrusion(struct device *dev, struct device_attribute
if (config < 0) {
count = config;
} else {
- config |= 1 << 5;
+ config |= BIT(5);
it87_write_value(data, IT87_REG_CONFIG, config);
/* Invalidate cache to force re-read */
data->valid = 0;
@@ -1443,29 +1827,30 @@ static SENSOR_DEVICE_ATTR(intrusion0_alarm, S_IRUGO | S_IWUSR,
show_alarm, clear_intrusion, 4);
static ssize_t show_beep(struct device *dev, struct device_attribute *attr,
- char *buf)
+ char *buf)
{
- int bitnr = to_sensor_dev_attr(attr)->index;
struct it87_data *data = it87_update_device(dev);
+ int bitnr = to_sensor_dev_attr(attr)->index;
+
return sprintf(buf, "%u\n", (data->beeps >> bitnr) & 1);
}
+
static ssize_t set_beep(struct device *dev, struct device_attribute *attr,
- const char *buf, size_t count)
+ const char *buf, size_t count)
{
int bitnr = to_sensor_dev_attr(attr)->index;
struct it87_data *data = dev_get_drvdata(dev);
long val;
- if (kstrtol(buf, 10, &val) < 0
- || (val != 0 && val != 1))
+ if (kstrtol(buf, 10, &val) < 0 || (val != 0 && val != 1))
return -EINVAL;
mutex_lock(&data->update_lock);
data->beeps = it87_read_value(data, IT87_REG_BEEP_ENABLE);
if (val)
- data->beeps |= (1 << bitnr);
+ data->beeps |= BIT(bitnr);
else
- data->beeps &= ~(1 << bitnr);
+ data->beeps &= ~BIT(bitnr);
it87_write_value(data, IT87_REG_BEEP_ENABLE, data->beeps);
mutex_unlock(&data->update_lock);
return count;
@@ -1493,13 +1878,15 @@ static SENSOR_DEVICE_ATTR(temp2_beep, S_IRUGO, show_beep, NULL, 2);
static SENSOR_DEVICE_ATTR(temp3_beep, S_IRUGO, show_beep, NULL, 2);
static ssize_t show_vrm_reg(struct device *dev, struct device_attribute *attr,
- char *buf)
+ char *buf)
{
struct it87_data *data = dev_get_drvdata(dev);
+
return sprintf(buf, "%u\n", data->vrm);
}
+
static ssize_t store_vrm_reg(struct device *dev, struct device_attribute *attr,
- const char *buf, size_t count)
+ const char *buf, size_t count)
{
struct it87_data *data = dev_get_drvdata(dev);
unsigned long val;
@@ -1514,15 +1901,16 @@ static ssize_t store_vrm_reg(struct device *dev, struct device_attribute *attr,
static DEVICE_ATTR(vrm, S_IRUGO | S_IWUSR, show_vrm_reg, store_vrm_reg);
static ssize_t show_vid_reg(struct device *dev, struct device_attribute *attr,
- char *buf)
+ char *buf)
{
struct it87_data *data = it87_update_device(dev);
- return sprintf(buf, "%ld\n", (long) vid_from_reg(data->vid, data->vrm));
+
+ return sprintf(buf, "%ld\n", (long)vid_from_reg(data->vid, data->vrm));
}
static DEVICE_ATTR(cpu0_vid, S_IRUGO, show_vid_reg, NULL);
static ssize_t show_label(struct device *dev, struct device_attribute *attr,
- char *buf)
+ char *buf)
{
static const char * const labels[] = {
"+5V",
@@ -1548,227 +1936,348 @@ static ssize_t show_label(struct device *dev, struct device_attribute *attr,
static SENSOR_DEVICE_ATTR(in3_label, S_IRUGO, show_label, NULL, 0);
static SENSOR_DEVICE_ATTR(in7_label, S_IRUGO, show_label, NULL, 1);
static SENSOR_DEVICE_ATTR(in8_label, S_IRUGO, show_label, NULL, 2);
-/* special AVCC3 IT8603E in9 */
+/* AVCC3 */
static SENSOR_DEVICE_ATTR(in9_label, S_IRUGO, show_label, NULL, 0);
-static ssize_t show_name(struct device *dev, struct device_attribute
- *devattr, char *buf)
+static umode_t it87_in_is_visible(struct kobject *kobj,
+ struct attribute *attr, int index)
{
+ struct device *dev = container_of(kobj, struct device, kobj);
struct it87_data *data = dev_get_drvdata(dev);
- return sprintf(buf, "%s\n", data->name);
+ int i = index / 5; /* voltage index */
+ int a = index % 5; /* attribute index */
+
+ if (index >= 40) { /* in8 and higher only have input attributes */
+ i = index - 40 + 8;
+ a = 0;
+ }
+
+ if (!(data->has_in & BIT(i)))
+ return 0;
+
+ if (a == 4 && !data->has_beep)
+ return 0;
+
+ return attr->mode;
}
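How the flat index passed by the sysfs core decodes here (illustration only, not part of the patch, matching the it87_attributes_in[] ordering defined just below):

/*
 * Sketch, not part of the patch: five attributes per voltage channel,
 * followed by an input-only tail for in8..in12.
 *   index 23 -> i = 23 / 5 = 4, a = 23 % 5 = 3  -> in4_alarm
 *   index 42 -> i = 42 - 40 + 8 = 10, a = 0     -> in10_input
 */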
-static DEVICE_ATTR(name, S_IRUGO, show_name, NULL);
-static struct attribute *it87_attributes_in[10][5] = {
-{
+static struct attribute *it87_attributes_in[] = {
&sensor_dev_attr_in0_input.dev_attr.attr,
&sensor_dev_attr_in0_min.dev_attr.attr,
&sensor_dev_attr_in0_max.dev_attr.attr,
&sensor_dev_attr_in0_alarm.dev_attr.attr,
- NULL
-}, {
+ &sensor_dev_attr_in0_beep.dev_attr.attr, /* 4 */
+
&sensor_dev_attr_in1_input.dev_attr.attr,
&sensor_dev_attr_in1_min.dev_attr.attr,
&sensor_dev_attr_in1_max.dev_attr.attr,
&sensor_dev_attr_in1_alarm.dev_attr.attr,
- NULL
-}, {
+ &sensor_dev_attr_in1_beep.dev_attr.attr, /* 9 */
+
&sensor_dev_attr_in2_input.dev_attr.attr,
&sensor_dev_attr_in2_min.dev_attr.attr,
&sensor_dev_attr_in2_max.dev_attr.attr,
&sensor_dev_attr_in2_alarm.dev_attr.attr,
- NULL
-}, {
+ &sensor_dev_attr_in2_beep.dev_attr.attr, /* 14 */
+
&sensor_dev_attr_in3_input.dev_attr.attr,
&sensor_dev_attr_in3_min.dev_attr.attr,
&sensor_dev_attr_in3_max.dev_attr.attr,
&sensor_dev_attr_in3_alarm.dev_attr.attr,
- NULL
-}, {
+ &sensor_dev_attr_in3_beep.dev_attr.attr, /* 19 */
+
&sensor_dev_attr_in4_input.dev_attr.attr,
&sensor_dev_attr_in4_min.dev_attr.attr,
&sensor_dev_attr_in4_max.dev_attr.attr,
&sensor_dev_attr_in4_alarm.dev_attr.attr,
- NULL
-}, {
+ &sensor_dev_attr_in4_beep.dev_attr.attr, /* 24 */
+
&sensor_dev_attr_in5_input.dev_attr.attr,
&sensor_dev_attr_in5_min.dev_attr.attr,
&sensor_dev_attr_in5_max.dev_attr.attr,
&sensor_dev_attr_in5_alarm.dev_attr.attr,
- NULL
-}, {
+ &sensor_dev_attr_in5_beep.dev_attr.attr, /* 29 */
+
&sensor_dev_attr_in6_input.dev_attr.attr,
&sensor_dev_attr_in6_min.dev_attr.attr,
&sensor_dev_attr_in6_max.dev_attr.attr,
&sensor_dev_attr_in6_alarm.dev_attr.attr,
- NULL
-}, {
+ &sensor_dev_attr_in6_beep.dev_attr.attr, /* 34 */
+
&sensor_dev_attr_in7_input.dev_attr.attr,
&sensor_dev_attr_in7_min.dev_attr.attr,
&sensor_dev_attr_in7_max.dev_attr.attr,
&sensor_dev_attr_in7_alarm.dev_attr.attr,
- NULL
-}, {
- &sensor_dev_attr_in8_input.dev_attr.attr,
- NULL
-}, {
- &sensor_dev_attr_in9_input.dev_attr.attr,
- NULL
-} };
-
-static const struct attribute_group it87_group_in[10] = {
- { .attrs = it87_attributes_in[0] },
- { .attrs = it87_attributes_in[1] },
- { .attrs = it87_attributes_in[2] },
- { .attrs = it87_attributes_in[3] },
- { .attrs = it87_attributes_in[4] },
- { .attrs = it87_attributes_in[5] },
- { .attrs = it87_attributes_in[6] },
- { .attrs = it87_attributes_in[7] },
- { .attrs = it87_attributes_in[8] },
- { .attrs = it87_attributes_in[9] },
+ &sensor_dev_attr_in7_beep.dev_attr.attr, /* 39 */
+
+ &sensor_dev_attr_in8_input.dev_attr.attr, /* 40 */
+ &sensor_dev_attr_in9_input.dev_attr.attr, /* 41 */
+ &sensor_dev_attr_in10_input.dev_attr.attr, /* 42 */
+ &sensor_dev_attr_in11_input.dev_attr.attr, /* 43 */
+ &sensor_dev_attr_in12_input.dev_attr.attr, /* 44 */
+ NULL
};
-static struct attribute *it87_attributes_temp[3][6] = {
+static const struct attribute_group it87_group_in = {
+ .attrs = it87_attributes_in,
+ .is_visible = it87_in_is_visible,
+};
+
+static umode_t it87_temp_is_visible(struct kobject *kobj,
+ struct attribute *attr, int index)
{
+ struct device *dev = container_of(kobj, struct device, kobj);
+ struct it87_data *data = dev_get_drvdata(dev);
+ int i = index / 7; /* temperature index */
+ int a = index % 7; /* attribute index */
+
+ if (index >= 21) {
+ i = index - 21 + 3;
+ a = 0;
+ }
+
+ if (!(data->has_temp & BIT(i)))
+ return 0;
+
+ if (a == 5 && !has_temp_offset(data))
+ return 0;
+
+ if (a == 6 && !data->has_beep)
+ return 0;
+
+ return attr->mode;
+}
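The same decoding pattern applies to the temperature attributes (illustration only, not part of the patch, matching the it87_attributes_temp[] ordering below):

/*
 * Sketch, not part of the patch: seven attributes per temperature channel,
 * followed by input-only entries for temp4..temp6.
 *   index 12 -> i = 12 / 7 = 1, a = 12 % 7 = 5  -> temp2_offset
 *   index 22 -> i = 22 - 21 + 3 = 4, a = 0      -> temp5_input
 */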
+
+static struct attribute *it87_attributes_temp[] = {
&sensor_dev_attr_temp1_input.dev_attr.attr,
&sensor_dev_attr_temp1_max.dev_attr.attr,
&sensor_dev_attr_temp1_min.dev_attr.attr,
&sensor_dev_attr_temp1_type.dev_attr.attr,
&sensor_dev_attr_temp1_alarm.dev_attr.attr,
- NULL
-} , {
- &sensor_dev_attr_temp2_input.dev_attr.attr,
+ &sensor_dev_attr_temp1_offset.dev_attr.attr, /* 5 */
+ &sensor_dev_attr_temp1_beep.dev_attr.attr, /* 6 */
+
+ &sensor_dev_attr_temp2_input.dev_attr.attr, /* 7 */
&sensor_dev_attr_temp2_max.dev_attr.attr,
&sensor_dev_attr_temp2_min.dev_attr.attr,
&sensor_dev_attr_temp2_type.dev_attr.attr,
&sensor_dev_attr_temp2_alarm.dev_attr.attr,
- NULL
-} , {
- &sensor_dev_attr_temp3_input.dev_attr.attr,
+ &sensor_dev_attr_temp2_offset.dev_attr.attr,
+ &sensor_dev_attr_temp2_beep.dev_attr.attr,
+
+ &sensor_dev_attr_temp3_input.dev_attr.attr, /* 14 */
&sensor_dev_attr_temp3_max.dev_attr.attr,
&sensor_dev_attr_temp3_min.dev_attr.attr,
&sensor_dev_attr_temp3_type.dev_attr.attr,
&sensor_dev_attr_temp3_alarm.dev_attr.attr,
- NULL
-} };
+ &sensor_dev_attr_temp3_offset.dev_attr.attr,
+ &sensor_dev_attr_temp3_beep.dev_attr.attr,
-static const struct attribute_group it87_group_temp[3] = {
- { .attrs = it87_attributes_temp[0] },
- { .attrs = it87_attributes_temp[1] },
- { .attrs = it87_attributes_temp[2] },
+ &sensor_dev_attr_temp4_input.dev_attr.attr, /* 21 */
+ &sensor_dev_attr_temp5_input.dev_attr.attr,
+ &sensor_dev_attr_temp6_input.dev_attr.attr,
+ NULL
};
-static struct attribute *it87_attributes_temp_offset[] = {
- &sensor_dev_attr_temp1_offset.dev_attr.attr,
- &sensor_dev_attr_temp2_offset.dev_attr.attr,
- &sensor_dev_attr_temp3_offset.dev_attr.attr,
+static const struct attribute_group it87_group_temp = {
+ .attrs = it87_attributes_temp,
+ .is_visible = it87_temp_is_visible,
};
+static umode_t it87_is_visible(struct kobject *kobj,
+ struct attribute *attr, int index)
+{
+ struct device *dev = container_of(kobj, struct device, kobj);
+ struct it87_data *data = dev_get_drvdata(dev);
+
+ if ((index == 2 || index == 3) && !data->has_vid)
+ return 0;
+
+ if (index > 3 && !(data->in_internal & BIT(index - 4)))
+ return 0;
+
+ return attr->mode;
+}
+
static struct attribute *it87_attributes[] = {
&dev_attr_alarms.attr,
&sensor_dev_attr_intrusion0_alarm.dev_attr.attr,
- &dev_attr_name.attr,
+ &dev_attr_vrm.attr, /* 2 */
+ &dev_attr_cpu0_vid.attr, /* 3 */
+ &sensor_dev_attr_in3_label.dev_attr.attr, /* 4 .. 7 */
+ &sensor_dev_attr_in7_label.dev_attr.attr,
+ &sensor_dev_attr_in8_label.dev_attr.attr,
+ &sensor_dev_attr_in9_label.dev_attr.attr,
NULL
};
static const struct attribute_group it87_group = {
.attrs = it87_attributes,
+ .is_visible = it87_is_visible,
};
-static struct attribute *it87_attributes_in_beep[] = {
- &sensor_dev_attr_in0_beep.dev_attr.attr,
- &sensor_dev_attr_in1_beep.dev_attr.attr,
- &sensor_dev_attr_in2_beep.dev_attr.attr,
- &sensor_dev_attr_in3_beep.dev_attr.attr,
- &sensor_dev_attr_in4_beep.dev_attr.attr,
- &sensor_dev_attr_in5_beep.dev_attr.attr,
- &sensor_dev_attr_in6_beep.dev_attr.attr,
- &sensor_dev_attr_in7_beep.dev_attr.attr,
- NULL,
- NULL,
-};
+static umode_t it87_fan_is_visible(struct kobject *kobj,
+ struct attribute *attr, int index)
+{
+ struct device *dev = container_of(kobj, struct device, kobj);
+ struct it87_data *data = dev_get_drvdata(dev);
+ int i = index / 5; /* fan index */
+ int a = index % 5; /* attribute index */
-static struct attribute *it87_attributes_temp_beep[] = {
- &sensor_dev_attr_temp1_beep.dev_attr.attr,
- &sensor_dev_attr_temp2_beep.dev_attr.attr,
- &sensor_dev_attr_temp3_beep.dev_attr.attr,
-};
+ if (index >= 15) { /* fan 4..6 don't have divisor attributes */
+ i = (index - 15) / 4 + 3;
+ a = (index - 15) % 4;
+ }
-static struct attribute *it87_attributes_fan[6][3+1] = { {
+ if (!(data->has_fan & BIT(i)))
+ return 0;
+
+ if (a == 3) { /* beep */
+ if (!data->has_beep)
+ return 0;
+ /* first fan beep attribute is writable */
+ if (i == __ffs(data->has_fan))
+ return attr->mode | S_IWUSR;
+ }
+
+ if (a == 4 && has_16bit_fans(data)) /* divisor */
+ return 0;
+
+ return attr->mode;
+}
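Fan attributes follow the same scheme, with the divisor dropped for fan4-6 (illustration only, not part of the patch, matching it87_attributes_fan[] below):

/*
 * Sketch, not part of the patch: five attributes per fan for fan1-3
 * (including the divisor), four for fan4-6.
 *   index 9  -> i = 9 / 5 = 1, a = 9 % 5 = 4               -> fan2_div
 *   index 17 -> i = (17 - 15) / 4 + 3 = 3, a = (17 - 15) % 4 = 2 -> fan4_alarm
 */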
+
+static struct attribute *it87_attributes_fan[] = {
&sensor_dev_attr_fan1_input.dev_attr.attr,
&sensor_dev_attr_fan1_min.dev_attr.attr,
&sensor_dev_attr_fan1_alarm.dev_attr.attr,
- NULL
-}, {
+ &sensor_dev_attr_fan1_beep.dev_attr.attr, /* 3 */
+ &sensor_dev_attr_fan1_div.dev_attr.attr, /* 4 */
+
&sensor_dev_attr_fan2_input.dev_attr.attr,
&sensor_dev_attr_fan2_min.dev_attr.attr,
&sensor_dev_attr_fan2_alarm.dev_attr.attr,
- NULL
-}, {
+ &sensor_dev_attr_fan2_beep.dev_attr.attr,
+ &sensor_dev_attr_fan2_div.dev_attr.attr, /* 9 */
+
&sensor_dev_attr_fan3_input.dev_attr.attr,
&sensor_dev_attr_fan3_min.dev_attr.attr,
&sensor_dev_attr_fan3_alarm.dev_attr.attr,
- NULL
-}, {
- &sensor_dev_attr_fan4_input.dev_attr.attr,
+ &sensor_dev_attr_fan3_beep.dev_attr.attr,
+ &sensor_dev_attr_fan3_div.dev_attr.attr, /* 14 */
+
+ &sensor_dev_attr_fan4_input.dev_attr.attr, /* 15 */
&sensor_dev_attr_fan4_min.dev_attr.attr,
&sensor_dev_attr_fan4_alarm.dev_attr.attr,
- NULL
-}, {
- &sensor_dev_attr_fan5_input.dev_attr.attr,
+ &sensor_dev_attr_fan4_beep.dev_attr.attr,
+
+ &sensor_dev_attr_fan5_input.dev_attr.attr, /* 19 */
&sensor_dev_attr_fan5_min.dev_attr.attr,
&sensor_dev_attr_fan5_alarm.dev_attr.attr,
- NULL
-}, {
- &sensor_dev_attr_fan6_input.dev_attr.attr,
+ &sensor_dev_attr_fan5_beep.dev_attr.attr,
+
+ &sensor_dev_attr_fan6_input.dev_attr.attr, /* 23 */
&sensor_dev_attr_fan6_min.dev_attr.attr,
&sensor_dev_attr_fan6_alarm.dev_attr.attr,
+ &sensor_dev_attr_fan6_beep.dev_attr.attr,
NULL
-} };
-
-static const struct attribute_group it87_group_fan[6] = {
- { .attrs = it87_attributes_fan[0] },
- { .attrs = it87_attributes_fan[1] },
- { .attrs = it87_attributes_fan[2] },
- { .attrs = it87_attributes_fan[3] },
- { .attrs = it87_attributes_fan[4] },
- { .attrs = it87_attributes_fan[5] },
};
-static const struct attribute *it87_attributes_fan_div[] = {
- &sensor_dev_attr_fan1_div.dev_attr.attr,
- &sensor_dev_attr_fan2_div.dev_attr.attr,
- &sensor_dev_attr_fan3_div.dev_attr.attr,
+static const struct attribute_group it87_group_fan = {
+ .attrs = it87_attributes_fan,
+ .is_visible = it87_fan_is_visible,
};
-static struct attribute *it87_attributes_pwm[3][4+1] = { {
+static umode_t it87_pwm_is_visible(struct kobject *kobj,
+ struct attribute *attr, int index)
+{
+ struct device *dev = container_of(kobj, struct device, kobj);
+ struct it87_data *data = dev_get_drvdata(dev);
+ int i = index / 4; /* pwm index */
+ int a = index % 4; /* attribute index */
+
+ if (!(data->has_pwm & BIT(i)))
+ return 0;
+
+ /* pwmX_auto_channels_temp is only writable if auto pwm is supported */
+ if (a == 3 && (has_old_autopwm(data) || has_newer_autopwm(data)))
+ return attr->mode | S_IWUSR;
+
+ /* pwm2_freq is writable if there are two pwm frequency selects */
+ if (has_pwm_freq2(data) && i == 1 && a == 2)
+ return attr->mode | S_IWUSR;
+
+ return attr->mode;
+}
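And for the pwm attribute group (illustration only, not part of the patch, matching it87_attributes_pwm[] below):

/*
 * Sketch, not part of the patch: four attributes per pwm channel.
 *   index 10 -> i = 10 / 4 = 2, a = 10 % 4 = 2  -> pwm3_freq
 *   index 17 -> i = 17 / 4 = 4, a = 17 % 4 = 1  -> pwm5
 */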
+
+static struct attribute *it87_attributes_pwm[] = {
&sensor_dev_attr_pwm1_enable.dev_attr.attr,
&sensor_dev_attr_pwm1.dev_attr.attr,
- &dev_attr_pwm1_freq.attr,
+ &sensor_dev_attr_pwm1_freq.dev_attr.attr,
&sensor_dev_attr_pwm1_auto_channels_temp.dev_attr.attr,
- NULL
-}, {
+
&sensor_dev_attr_pwm2_enable.dev_attr.attr,
&sensor_dev_attr_pwm2.dev_attr.attr,
- &dev_attr_pwm2_freq.attr,
+ &sensor_dev_attr_pwm2_freq.dev_attr.attr,
&sensor_dev_attr_pwm2_auto_channels_temp.dev_attr.attr,
- NULL
-}, {
+
&sensor_dev_attr_pwm3_enable.dev_attr.attr,
&sensor_dev_attr_pwm3.dev_attr.attr,
- &dev_attr_pwm3_freq.attr,
+ &sensor_dev_attr_pwm3_freq.dev_attr.attr,
&sensor_dev_attr_pwm3_auto_channels_temp.dev_attr.attr,
+
+ &sensor_dev_attr_pwm4_enable.dev_attr.attr,
+ &sensor_dev_attr_pwm4.dev_attr.attr,
+ &sensor_dev_attr_pwm4_freq.dev_attr.attr,
+ &sensor_dev_attr_pwm4_auto_channels_temp.dev_attr.attr,
+
+ &sensor_dev_attr_pwm5_enable.dev_attr.attr,
+ &sensor_dev_attr_pwm5.dev_attr.attr,
+ &sensor_dev_attr_pwm5_freq.dev_attr.attr,
+ &sensor_dev_attr_pwm5_auto_channels_temp.dev_attr.attr,
+
+ &sensor_dev_attr_pwm6_enable.dev_attr.attr,
+ &sensor_dev_attr_pwm6.dev_attr.attr,
+ &sensor_dev_attr_pwm6_freq.dev_attr.attr,
+ &sensor_dev_attr_pwm6_auto_channels_temp.dev_attr.attr,
+
NULL
-} };
+};
-static const struct attribute_group it87_group_pwm[3] = {
- { .attrs = it87_attributes_pwm[0] },
- { .attrs = it87_attributes_pwm[1] },
- { .attrs = it87_attributes_pwm[2] },
+static const struct attribute_group it87_group_pwm = {
+ .attrs = it87_attributes_pwm,
+ .is_visible = it87_pwm_is_visible,
};
-static struct attribute *it87_attributes_autopwm[3][9+1] = { {
+static umode_t it87_auto_pwm_is_visible(struct kobject *kobj,
+ struct attribute *attr, int index)
+{
+ struct device *dev = container_of(kobj, struct device, kobj);
+ struct it87_data *data = dev_get_drvdata(dev);
+ int i = index / 11; /* pwm index */
+ int a = index % 11; /* attribute index */
+
+ if (index >= 33) { /* pwm 4..6 */
+ i = (index - 33) / 6 + 3;
+ a = (index - 33) % 6 + 4;
+ }
+
+ if (!(data->has_pwm & BIT(i)))
+ return 0;
+
+ if (has_newer_autopwm(data)) {
+ if (a < 4) /* no auto point pwm */
+ return 0;
+ if (a == 8) /* no auto_point4 */
+ return 0;
+ }
+ if (has_old_autopwm(data)) {
+ if (a >= 9) /* no pwm_auto_start, pwm_auto_slope */
+ return 0;
+ }
+
+ return attr->mode;
+}
+
+static struct attribute *it87_attributes_auto_pwm[] = {
&sensor_dev_attr_pwm1_auto_point1_pwm.dev_attr.attr,
&sensor_dev_attr_pwm1_auto_point2_pwm.dev_attr.attr,
&sensor_dev_attr_pwm1_auto_point3_pwm.dev_attr.attr,
@@ -1778,9 +2287,10 @@ static struct attribute *it87_attributes_autopwm[3][9+1] = { {
&sensor_dev_attr_pwm1_auto_point2_temp.dev_attr.attr,
&sensor_dev_attr_pwm1_auto_point3_temp.dev_attr.attr,
&sensor_dev_attr_pwm1_auto_point4_temp.dev_attr.attr,
- NULL
-}, {
- &sensor_dev_attr_pwm2_auto_point1_pwm.dev_attr.attr,
+ &sensor_dev_attr_pwm1_auto_start.dev_attr.attr,
+ &sensor_dev_attr_pwm1_auto_slope.dev_attr.attr,
+
+ &sensor_dev_attr_pwm2_auto_point1_pwm.dev_attr.attr, /* 11 */
&sensor_dev_attr_pwm2_auto_point2_pwm.dev_attr.attr,
&sensor_dev_attr_pwm2_auto_point3_pwm.dev_attr.attr,
&sensor_dev_attr_pwm2_auto_point4_pwm.dev_attr.attr,
@@ -1789,9 +2299,10 @@ static struct attribute *it87_attributes_autopwm[3][9+1] = { {
&sensor_dev_attr_pwm2_auto_point2_temp.dev_attr.attr,
&sensor_dev_attr_pwm2_auto_point3_temp.dev_attr.attr,
&sensor_dev_attr_pwm2_auto_point4_temp.dev_attr.attr,
- NULL
-}, {
- &sensor_dev_attr_pwm3_auto_point1_pwm.dev_attr.attr,
+ &sensor_dev_attr_pwm2_auto_start.dev_attr.attr,
+ &sensor_dev_attr_pwm2_auto_slope.dev_attr.attr,
+
+ &sensor_dev_attr_pwm3_auto_point1_pwm.dev_attr.attr, /* 22 */
&sensor_dev_attr_pwm3_auto_point2_pwm.dev_attr.attr,
&sensor_dev_attr_pwm3_auto_point3_pwm.dev_attr.attr,
&sensor_dev_attr_pwm3_auto_point4_pwm.dev_attr.attr,
@@ -1800,61 +2311,53 @@ static struct attribute *it87_attributes_autopwm[3][9+1] = { {
&sensor_dev_attr_pwm3_auto_point2_temp.dev_attr.attr,
&sensor_dev_attr_pwm3_auto_point3_temp.dev_attr.attr,
&sensor_dev_attr_pwm3_auto_point4_temp.dev_attr.attr,
- NULL
-} };
-
-static const struct attribute_group it87_group_autopwm[3] = {
- { .attrs = it87_attributes_autopwm[0] },
- { .attrs = it87_attributes_autopwm[1] },
- { .attrs = it87_attributes_autopwm[2] },
-};
+ &sensor_dev_attr_pwm3_auto_start.dev_attr.attr,
+ &sensor_dev_attr_pwm3_auto_slope.dev_attr.attr,
+
+ &sensor_dev_attr_pwm4_auto_point1_temp.dev_attr.attr, /* 33 */
+ &sensor_dev_attr_pwm4_auto_point1_temp_hyst.dev_attr.attr,
+ &sensor_dev_attr_pwm4_auto_point2_temp.dev_attr.attr,
+ &sensor_dev_attr_pwm4_auto_point3_temp.dev_attr.attr,
+ &sensor_dev_attr_pwm4_auto_start.dev_attr.attr,
+ &sensor_dev_attr_pwm4_auto_slope.dev_attr.attr,
+
+ &sensor_dev_attr_pwm5_auto_point1_temp.dev_attr.attr,
+ &sensor_dev_attr_pwm5_auto_point1_temp_hyst.dev_attr.attr,
+ &sensor_dev_attr_pwm5_auto_point2_temp.dev_attr.attr,
+ &sensor_dev_attr_pwm5_auto_point3_temp.dev_attr.attr,
+ &sensor_dev_attr_pwm5_auto_start.dev_attr.attr,
+ &sensor_dev_attr_pwm5_auto_slope.dev_attr.attr,
+
+ &sensor_dev_attr_pwm6_auto_point1_temp.dev_attr.attr,
+ &sensor_dev_attr_pwm6_auto_point1_temp_hyst.dev_attr.attr,
+ &sensor_dev_attr_pwm6_auto_point2_temp.dev_attr.attr,
+ &sensor_dev_attr_pwm6_auto_point3_temp.dev_attr.attr,
+ &sensor_dev_attr_pwm6_auto_start.dev_attr.attr,
+ &sensor_dev_attr_pwm6_auto_slope.dev_attr.attr,
-static struct attribute *it87_attributes_fan_beep[] = {
- &sensor_dev_attr_fan1_beep.dev_attr.attr,
- &sensor_dev_attr_fan2_beep.dev_attr.attr,
- &sensor_dev_attr_fan3_beep.dev_attr.attr,
- &sensor_dev_attr_fan4_beep.dev_attr.attr,
- &sensor_dev_attr_fan5_beep.dev_attr.attr,
- &sensor_dev_attr_fan6_beep.dev_attr.attr,
-};
-
-static struct attribute *it87_attributes_vid[] = {
- &dev_attr_vrm.attr,
- &dev_attr_cpu0_vid.attr,
- NULL
-};
-
-static const struct attribute_group it87_group_vid = {
- .attrs = it87_attributes_vid,
-};
-
-static struct attribute *it87_attributes_label[] = {
- &sensor_dev_attr_in3_label.dev_attr.attr,
- &sensor_dev_attr_in7_label.dev_attr.attr,
- &sensor_dev_attr_in8_label.dev_attr.attr,
- &sensor_dev_attr_in9_label.dev_attr.attr,
- NULL
+ NULL,
};
-static const struct attribute_group it87_group_label = {
- .attrs = it87_attributes_label,
+static const struct attribute_group it87_group_auto_pwm = {
+ .attrs = it87_attributes_auto_pwm,
+ .is_visible = it87_auto_pwm_is_visible,
};
/* SuperIO detection - will change isa_address if a chip is found */
-static int __init it87_find(unsigned short *address,
- struct it87_sio_data *sio_data)
+static int __init it87_find(int sioaddr, unsigned short *address,
+ struct it87_sio_data *sio_data)
{
int err;
u16 chip_type;
const char *board_vendor, *board_name;
const struct it87_devices *config;
- err = superio_enter();
+ err = superio_enter(sioaddr);
if (err)
return err;
err = -ENODEV;
- chip_type = force_id ? force_id : superio_inw(DEVID);
+ chip_type = force_id ? force_id : superio_inw(sioaddr, DEVID);
switch (chip_type) {
case IT8705F_DEVID:
@@ -1910,6 +2413,9 @@ static int __init it87_find(unsigned short *address,
case IT8620E_DEVID:
sio_data->type = it8620;
break;
+ case IT8628E_DEVID:
+ sio_data->type = it8628;
+ break;
case 0xffff: /* No device at all */
goto exit;
default:
@@ -1917,20 +2423,20 @@ static int __init it87_find(unsigned short *address,
goto exit;
}
- superio_select(PME);
- if (!(superio_inb(IT87_ACT_REG) & 0x01)) {
+ superio_select(sioaddr, PME);
+ if (!(superio_inb(sioaddr, IT87_ACT_REG) & 0x01)) {
pr_info("Device not activated, skipping\n");
goto exit;
}
- *address = superio_inw(IT87_BASE_REG) & ~(IT87_EXTENT - 1);
+ *address = superio_inw(sioaddr, IT87_BASE_REG) & ~(IT87_EXTENT - 1);
if (*address == 0) {
pr_info("Base address not set, skipping\n");
goto exit;
}
err = 0;
- sio_data->revision = superio_inb(DEVREV) & 0x0f;
+ sio_data->revision = superio_inb(sioaddr, DEVREV) & 0x0f;
pr_info("Found IT%04x%s chip at 0x%x, revision %d\n", chip_type,
it87_devices[sio_data->type].suffix,
*address, sio_data->revision);
@@ -1939,14 +2445,19 @@ static int __init it87_find(unsigned short *address,
/* in7 (VSB or VCCH5V) is always internal on some chips */
if (has_in7_internal(config))
- sio_data->internal |= (1 << 1);
+ sio_data->internal |= BIT(1);
/* in8 (Vbat) is always internal */
- sio_data->internal |= (1 << 2);
+ sio_data->internal |= BIT(2);
- /* Only the IT8603E has in9 */
- if (sio_data->type != it8603)
- sio_data->skip_in |= (1 << 9);
+ /* in9 (AVCC3), always internal if supported */
+ if (has_avcc3(config))
+ sio_data->internal |= BIT(3); /* in9 is AVCC */
+ else
+ sio_data->skip_in |= BIT(9);
+
+ if (!has_six_pwm(config))
+ sio_data->skip_pwm |= BIT(3) | BIT(4) | BIT(5);
if (!has_vid(config))
sio_data->skip_vid = 1;
@@ -1954,45 +2465,46 @@ static int __init it87_find(unsigned short *address,
/* Read GPIO config and VID value from LDN 7 (GPIO) */
if (sio_data->type == it87) {
/* The IT8705F has a different LD number for GPIO */
- superio_select(5);
- sio_data->beep_pin = superio_inb(IT87_SIO_BEEP_PIN_REG) & 0x3f;
+ superio_select(sioaddr, 5);
+ sio_data->beep_pin = superio_inb(sioaddr,
+ IT87_SIO_BEEP_PIN_REG) & 0x3f;
} else if (sio_data->type == it8783) {
int reg25, reg27, reg2a, reg2c, regef;
- superio_select(GPIO);
+ superio_select(sioaddr, GPIO);
- reg25 = superio_inb(IT87_SIO_GPIO1_REG);
- reg27 = superio_inb(IT87_SIO_GPIO3_REG);
- reg2a = superio_inb(IT87_SIO_PINX1_REG);
- reg2c = superio_inb(IT87_SIO_PINX2_REG);
- regef = superio_inb(IT87_SIO_SPI_REG);
+ reg25 = superio_inb(sioaddr, IT87_SIO_GPIO1_REG);
+ reg27 = superio_inb(sioaddr, IT87_SIO_GPIO3_REG);
+ reg2a = superio_inb(sioaddr, IT87_SIO_PINX1_REG);
+ reg2c = superio_inb(sioaddr, IT87_SIO_PINX2_REG);
+ regef = superio_inb(sioaddr, IT87_SIO_SPI_REG);
/* Check if fan3 is there or not */
- if ((reg27 & (1 << 0)) || !(reg2c & (1 << 2)))
- sio_data->skip_fan |= (1 << 2);
- if ((reg25 & (1 << 4))
- || (!(reg2a & (1 << 1)) && (regef & (1 << 0))))
- sio_data->skip_pwm |= (1 << 2);
+ if ((reg27 & BIT(0)) || !(reg2c & BIT(2)))
+ sio_data->skip_fan |= BIT(2);
+ if ((reg25 & BIT(4)) ||
+ (!(reg2a & BIT(1)) && (regef & BIT(0))))
+ sio_data->skip_pwm |= BIT(2);
/* Check if fan2 is there or not */
- if (reg27 & (1 << 7))
- sio_data->skip_fan |= (1 << 1);
- if (reg27 & (1 << 3))
- sio_data->skip_pwm |= (1 << 1);
+ if (reg27 & BIT(7))
+ sio_data->skip_fan |= BIT(1);
+ if (reg27 & BIT(3))
+ sio_data->skip_pwm |= BIT(1);
/* VIN5 */
- if ((reg27 & (1 << 0)) || (reg2c & (1 << 2)))
- sio_data->skip_in |= (1 << 5); /* No VIN5 */
+ if ((reg27 & BIT(0)) || (reg2c & BIT(2)))
+ sio_data->skip_in |= BIT(5); /* No VIN5 */
/* VIN6 */
- if (reg27 & (1 << 1))
- sio_data->skip_in |= (1 << 6); /* No VIN6 */
+ if (reg27 & BIT(1))
+ sio_data->skip_in |= BIT(6); /* No VIN6 */
/*
* VIN7
* Does not depend on bit 2 of Reg2C, contrary to datasheet.
*/
- if (reg27 & (1 << 2)) {
+ if (reg27 & BIT(2)) {
/*
* The data sheet is a bit unclear regarding the
* internal voltage divider for VCCH5V. It says
@@ -2006,81 +2518,121 @@ static int __init it87_find(unsigned short *address,
* not the case, and ask the user to report if the
* resulting voltage is sane.
*/
- if (!(reg2c & (1 << 1))) {
- reg2c |= (1 << 1);
- superio_outb(IT87_SIO_PINX2_REG, reg2c);
+ if (!(reg2c & BIT(1))) {
+ reg2c |= BIT(1);
+ superio_outb(sioaddr, IT87_SIO_PINX2_REG,
+ reg2c);
pr_notice("Routing internal VCCH5V to in7.\n");
}
pr_notice("in7 routed to internal voltage divider, with external pin disabled.\n");
pr_notice("Please report if it displays a reasonable voltage.\n");
}
- if (reg2c & (1 << 0))
- sio_data->internal |= (1 << 0);
- if (reg2c & (1 << 1))
- sio_data->internal |= (1 << 1);
+ if (reg2c & BIT(0))
+ sio_data->internal |= BIT(0);
+ if (reg2c & BIT(1))
+ sio_data->internal |= BIT(1);
- sio_data->beep_pin = superio_inb(IT87_SIO_BEEP_PIN_REG) & 0x3f;
+ sio_data->beep_pin = superio_inb(sioaddr,
+ IT87_SIO_BEEP_PIN_REG) & 0x3f;
} else if (sio_data->type == it8603) {
int reg27, reg29;
- superio_select(GPIO);
+ superio_select(sioaddr, GPIO);
- reg27 = superio_inb(IT87_SIO_GPIO3_REG);
+ reg27 = superio_inb(sioaddr, IT87_SIO_GPIO3_REG);
/* Check if fan3 is there or not */
- if (reg27 & (1 << 6))
- sio_data->skip_pwm |= (1 << 2);
- if (reg27 & (1 << 7))
- sio_data->skip_fan |= (1 << 2);
+ if (reg27 & BIT(6))
+ sio_data->skip_pwm |= BIT(2);
+ if (reg27 & BIT(7))
+ sio_data->skip_fan |= BIT(2);
/* Check if fan2 is there or not */
- reg29 = superio_inb(IT87_SIO_GPIO5_REG);
- if (reg29 & (1 << 1))
- sio_data->skip_pwm |= (1 << 1);
- if (reg29 & (1 << 2))
- sio_data->skip_fan |= (1 << 1);
-
- sio_data->skip_in |= (1 << 5); /* No VIN5 */
- sio_data->skip_in |= (1 << 6); /* No VIN6 */
-
- sio_data->internal |= (1 << 3); /* in9 is AVCC */
-
- sio_data->beep_pin = superio_inb(IT87_SIO_BEEP_PIN_REG) & 0x3f;
- } else if (sio_data->type == it8620) {
+ reg29 = superio_inb(sioaddr, IT87_SIO_GPIO5_REG);
+ if (reg29 & BIT(1))
+ sio_data->skip_pwm |= BIT(1);
+ if (reg29 & BIT(2))
+ sio_data->skip_fan |= BIT(1);
+
+ sio_data->skip_in |= BIT(5); /* No VIN5 */
+ sio_data->skip_in |= BIT(6); /* No VIN6 */
+
+ sio_data->beep_pin = superio_inb(sioaddr,
+ IT87_SIO_BEEP_PIN_REG) & 0x3f;
+ } else if (sio_data->type == it8620 || sio_data->type == it8628) {
int reg;
- superio_select(GPIO);
+ superio_select(sioaddr, GPIO);
+
+ /* Check for pwm5 */
+ reg = superio_inb(sioaddr, IT87_SIO_GPIO1_REG);
+ if (reg & BIT(6))
+ sio_data->skip_pwm |= BIT(4);
/* Check for fan4, fan5 */
- reg = superio_inb(IT87_SIO_GPIO2_REG);
- if (!(reg & (1 << 5)))
- sio_data->skip_fan |= (1 << 3);
- if (!(reg & (1 << 4)))
- sio_data->skip_fan |= (1 << 4);
+ reg = superio_inb(sioaddr, IT87_SIO_GPIO2_REG);
+ if (!(reg & BIT(5)))
+ sio_data->skip_fan |= BIT(3);
+ if (!(reg & BIT(4)))
+ sio_data->skip_fan |= BIT(4);
/* Check for pwm3, fan3 */
- reg = superio_inb(IT87_SIO_GPIO3_REG);
- if (reg & (1 << 6))
- sio_data->skip_pwm |= (1 << 2);
- if (reg & (1 << 7))
- sio_data->skip_fan |= (1 << 2);
+ reg = superio_inb(sioaddr, IT87_SIO_GPIO3_REG);
+ if (reg & BIT(6))
+ sio_data->skip_pwm |= BIT(2);
+ if (reg & BIT(7))
+ sio_data->skip_fan |= BIT(2);
+
+ /* Check for pwm4 */
+ reg = superio_inb(sioaddr, IT87_SIO_GPIO4_REG);
+ if (!(reg & BIT(2)))
+ sio_data->skip_pwm |= BIT(3);
/* Check for pwm2, fan2 */
- reg = superio_inb(IT87_SIO_GPIO5_REG);
- if (reg & (1 << 1))
- sio_data->skip_pwm |= (1 << 1);
- if (reg & (1 << 2))
- sio_data->skip_fan |= (1 << 1);
+ reg = superio_inb(sioaddr, IT87_SIO_GPIO5_REG);
+ if (reg & BIT(1))
+ sio_data->skip_pwm |= BIT(1);
+ if (reg & BIT(2))
+ sio_data->skip_fan |= BIT(1);
+ /* Check for pwm6, fan6 */
+ if (!(reg & BIT(7))) {
+ sio_data->skip_pwm |= BIT(5);
+ sio_data->skip_fan |= BIT(5);
+ }
- sio_data->beep_pin = superio_inb(IT87_SIO_BEEP_PIN_REG) & 0x3f;
+ sio_data->beep_pin = superio_inb(sioaddr,
+ IT87_SIO_BEEP_PIN_REG) & 0x3f;
} else {
int reg;
bool uart6;
- superio_select(GPIO);
+ superio_select(sioaddr, GPIO);
+
+ /* Check for fan4, fan5 */
+ if (has_five_fans(config)) {
+ reg = superio_inb(sioaddr, IT87_SIO_GPIO2_REG);
+ switch (sio_data->type) {
+ case it8718:
+ if (reg & BIT(5))
+ sio_data->skip_fan |= BIT(3);
+ if (reg & BIT(4))
+ sio_data->skip_fan |= BIT(4);
+ break;
+ case it8720:
+ case it8721:
+ case it8728:
+ if (!(reg & BIT(5)))
+ sio_data->skip_fan |= BIT(3);
+ if (!(reg & BIT(4)))
+ sio_data->skip_fan |= BIT(4);
+ break;
+ default:
+ break;
+ }
+ }
- reg = superio_inb(IT87_SIO_GPIO3_REG);
+ reg = superio_inb(sioaddr, IT87_SIO_GPIO3_REG);
if (!sio_data->skip_vid) {
/* We need at least 4 VID pins */
if (reg & 0x0f) {
@@ -2090,25 +2642,26 @@ static int __init it87_find(unsigned short *address,
}
/* Check if fan3 is there or not */
- if (reg & (1 << 6))
- sio_data->skip_pwm |= (1 << 2);
- if (reg & (1 << 7))
- sio_data->skip_fan |= (1 << 2);
+ if (reg & BIT(6))
+ sio_data->skip_pwm |= BIT(2);
+ if (reg & BIT(7))
+ sio_data->skip_fan |= BIT(2);
/* Check if fan2 is there or not */
- reg = superio_inb(IT87_SIO_GPIO5_REG);
- if (reg & (1 << 1))
- sio_data->skip_pwm |= (1 << 1);
- if (reg & (1 << 2))
- sio_data->skip_fan |= (1 << 1);
+ reg = superio_inb(sioaddr, IT87_SIO_GPIO5_REG);
+ if (reg & BIT(1))
+ sio_data->skip_pwm |= BIT(1);
+ if (reg & BIT(2))
+ sio_data->skip_fan |= BIT(1);
- if ((sio_data->type == it8718 || sio_data->type == it8720)
- && !(sio_data->skip_vid))
- sio_data->vid_value = superio_inb(IT87_SIO_VID_REG);
+ if ((sio_data->type == it8718 || sio_data->type == it8720) &&
+ !(sio_data->skip_vid))
+ sio_data->vid_value = superio_inb(sioaddr,
+ IT87_SIO_VID_REG);
- reg = superio_inb(IT87_SIO_PINX2_REG);
+ reg = superio_inb(sioaddr, IT87_SIO_PINX2_REG);
- uart6 = sio_data->type == it8782 && (reg & (1 << 2));
+ uart6 = sio_data->type == it8782 && (reg & BIT(2));
/*
* The IT8720F has no VIN7 pin, so VCCH should always be
@@ -2124,15 +2677,15 @@ static int __init it87_find(unsigned short *address,
* If UART6 is enabled, re-route VIN7 to the internal divider
* if that is not already the case.
*/
- if ((sio_data->type == it8720 || uart6) && !(reg & (1 << 1))) {
- reg |= (1 << 1);
- superio_outb(IT87_SIO_PINX2_REG, reg);
+ if ((sio_data->type == it8720 || uart6) && !(reg & BIT(1))) {
+ reg |= BIT(1);
+ superio_outb(sioaddr, IT87_SIO_PINX2_REG, reg);
pr_notice("Routing internal VCCH to in7\n");
}
- if (reg & (1 << 0))
- sio_data->internal |= (1 << 0);
- if (reg & (1 << 1))
- sio_data->internal |= (1 << 1);
+ if (reg & BIT(0))
+ sio_data->internal |= BIT(0);
+ if (reg & BIT(1))
+ sio_data->internal |= BIT(1);
/*
* On IT8782F, UART6 pins overlap with VIN5, VIN6, and VIN7.
@@ -2144,11 +2697,12 @@ static int __init it87_find(unsigned short *address,
* temperature source here, skip_temp is preliminary.
*/
if (uart6) {
- sio_data->skip_in |= (1 << 5) | (1 << 6);
- sio_data->skip_temp |= (1 << 2);
+ sio_data->skip_in |= BIT(5) | BIT(6);
+ sio_data->skip_temp |= BIT(2);
}
- sio_data->beep_pin = superio_inb(IT87_SIO_BEEP_PIN_REG) & 0x3f;
+ sio_data->beep_pin = superio_inb(sioaddr,
+ IT87_SIO_BEEP_PIN_REG) & 0x3f;
}
if (sio_data->beep_pin)
pr_info("Beeping is supported\n");
@@ -2157,8 +2711,8 @@ static int __init it87_find(unsigned short *address,
board_vendor = dmi_get_system_info(DMI_BOARD_VENDOR);
board_name = dmi_get_system_info(DMI_BOARD_NAME);
if (board_vendor && board_name) {
- if (strcmp(board_vendor, "nVIDIA") == 0
- && strcmp(board_name, "FN68PT") == 0) {
+ if (strcmp(board_vendor, "nVIDIA") == 0 &&
+ strcmp(board_name, "FN68PT") == 0) {
/*
* On the Shuttle SN68PT, FAN_CTL2 is apparently not
* connected to a fan, but to something else. One user
@@ -2168,373 +2722,15 @@ static int __init it87_find(unsigned short *address,
* the same board is ever used in other systems.
*/
pr_info("Disabling pwm2 due to hardware constraints\n");
- sio_data->skip_pwm = (1 << 1);
+ sio_data->skip_pwm = BIT(1);
}
}
exit:
- superio_exit();
+ superio_exit(sioaddr);
return err;
}
-static void it87_remove_files(struct device *dev)
-{
- struct it87_data *data = platform_get_drvdata(pdev);
- struct it87_sio_data *sio_data = dev_get_platdata(dev);
- int i;
-
- sysfs_remove_group(&dev->kobj, &it87_group);
- for (i = 0; i < 10; i++) {
- if (sio_data->skip_in & (1 << i))
- continue;
- sysfs_remove_group(&dev->kobj, &it87_group_in[i]);
- if (it87_attributes_in_beep[i])
- sysfs_remove_file(&dev->kobj,
- it87_attributes_in_beep[i]);
- }
- for (i = 0; i < 3; i++) {
- if (!(data->has_temp & (1 << i)))
- continue;
- sysfs_remove_group(&dev->kobj, &it87_group_temp[i]);
- if (has_temp_offset(data))
- sysfs_remove_file(&dev->kobj,
- it87_attributes_temp_offset[i]);
- if (sio_data->beep_pin)
- sysfs_remove_file(&dev->kobj,
- it87_attributes_temp_beep[i]);
- }
- for (i = 0; i < 6; i++) {
- if (!(data->has_fan & (1 << i)))
- continue;
- sysfs_remove_group(&dev->kobj, &it87_group_fan[i]);
- if (sio_data->beep_pin)
- sysfs_remove_file(&dev->kobj,
- it87_attributes_fan_beep[i]);
- if (i < 3 && !has_16bit_fans(data))
- sysfs_remove_file(&dev->kobj,
- it87_attributes_fan_div[i]);
- }
- for (i = 0; i < 3; i++) {
- if (sio_data->skip_pwm & (1 << i))
- continue;
- sysfs_remove_group(&dev->kobj, &it87_group_pwm[i]);
- if (has_old_autopwm(data))
- sysfs_remove_group(&dev->kobj,
- &it87_group_autopwm[i]);
- }
- if (!sio_data->skip_vid)
- sysfs_remove_group(&dev->kobj, &it87_group_vid);
- sysfs_remove_group(&dev->kobj, &it87_group_label);
-}
-
-static int it87_probe(struct platform_device *pdev)
-{
- struct it87_data *data;
- struct resource *res;
- struct device *dev = &pdev->dev;
- struct it87_sio_data *sio_data = dev_get_platdata(dev);
- int err = 0, i;
- int enable_pwm_interface;
- int fan_beep_need_rw;
-
- res = platform_get_resource(pdev, IORESOURCE_IO, 0);
- if (!devm_request_region(&pdev->dev, res->start, IT87_EC_EXTENT,
- DRVNAME)) {
- dev_err(dev, "Failed to request region 0x%lx-0x%lx\n",
- (unsigned long)res->start,
- (unsigned long)(res->start + IT87_EC_EXTENT - 1));
- return -EBUSY;
- }
-
- data = devm_kzalloc(&pdev->dev, sizeof(struct it87_data), GFP_KERNEL);
- if (!data)
- return -ENOMEM;
-
- data->addr = res->start;
- data->type = sio_data->type;
- data->features = it87_devices[sio_data->type].features;
- data->peci_mask = it87_devices[sio_data->type].peci_mask;
- data->old_peci_mask = it87_devices[sio_data->type].old_peci_mask;
- data->name = it87_devices[sio_data->type].name;
- /*
- * IT8705F Datasheet 0.4.1, 3h == Version G.
- * IT8712F Datasheet 0.9.1, section 8.3.5 indicates 8h == Version J.
- * These are the first revisions with 16-bit tachometer support.
- */
- switch (data->type) {
- case it87:
- if (sio_data->revision >= 0x03) {
- data->features &= ~FEAT_OLD_AUTOPWM;
- data->features |= FEAT_FAN16_CONFIG | FEAT_16BIT_FANS;
- }
- break;
- case it8712:
- if (sio_data->revision >= 0x08) {
- data->features &= ~FEAT_OLD_AUTOPWM;
- data->features |= FEAT_FAN16_CONFIG | FEAT_16BIT_FANS |
- FEAT_FIVE_FANS;
- }
- break;
- default:
- break;
- }
-
- /* Now, we do the remaining detection. */
- if ((it87_read_value(data, IT87_REG_CONFIG) & 0x80)
- || it87_read_value(data, IT87_REG_CHIPID) != 0x90)
- return -ENODEV;
-
- platform_set_drvdata(pdev, data);
-
- mutex_init(&data->update_lock);
-
- /* Check PWM configuration */
- enable_pwm_interface = it87_check_pwm(dev);
-
- /* Starting with IT8721F, we handle scaling of internal voltages */
- if (has_12mv_adc(data)) {
- if (sio_data->internal & (1 << 0))
- data->in_scaled |= (1 << 3); /* in3 is AVCC */
- if (sio_data->internal & (1 << 1))
- data->in_scaled |= (1 << 7); /* in7 is VSB */
- if (sio_data->internal & (1 << 2))
- data->in_scaled |= (1 << 8); /* in8 is Vbat */
- if (sio_data->internal & (1 << 3))
- data->in_scaled |= (1 << 9); /* in9 is AVCC */
- } else if (sio_data->type == it8781 || sio_data->type == it8782 ||
- sio_data->type == it8783) {
- if (sio_data->internal & (1 << 0))
- data->in_scaled |= (1 << 3); /* in3 is VCC5V */
- if (sio_data->internal & (1 << 1))
- data->in_scaled |= (1 << 7); /* in7 is VCCH5V */
- }
-
- data->has_temp = 0x07;
- if (sio_data->skip_temp & (1 << 2)) {
- if (sio_data->type == it8782
- && !(it87_read_value(data, IT87_REG_TEMP_EXTRA) & 0x80))
- data->has_temp &= ~(1 << 2);
- }
-
- /* Initialize the IT87 chip */
- it87_init_device(pdev);
-
- /* Register sysfs hooks */
- err = sysfs_create_group(&dev->kobj, &it87_group);
- if (err)
- return err;
-
- for (i = 0; i < 10; i++) {
- if (sio_data->skip_in & (1 << i))
- continue;
- err = sysfs_create_group(&dev->kobj, &it87_group_in[i]);
- if (err)
- goto error;
- if (sio_data->beep_pin && it87_attributes_in_beep[i]) {
- err = sysfs_create_file(&dev->kobj,
- it87_attributes_in_beep[i]);
- if (err)
- goto error;
- }
- }
-
- for (i = 0; i < 3; i++) {
- if (!(data->has_temp & (1 << i)))
- continue;
- err = sysfs_create_group(&dev->kobj, &it87_group_temp[i]);
- if (err)
- goto error;
- if (has_temp_offset(data)) {
- err = sysfs_create_file(&dev->kobj,
- it87_attributes_temp_offset[i]);
- if (err)
- goto error;
- }
- if (sio_data->beep_pin) {
- err = sysfs_create_file(&dev->kobj,
- it87_attributes_temp_beep[i]);
- if (err)
- goto error;
- }
- }
-
- /* Do not create fan files for disabled fans */
- fan_beep_need_rw = 1;
- for (i = 0; i < 6; i++) {
- if (!(data->has_fan & (1 << i)))
- continue;
- err = sysfs_create_group(&dev->kobj, &it87_group_fan[i]);
- if (err)
- goto error;
-
- if (i < 3 && !has_16bit_fans(data)) {
- err = sysfs_create_file(&dev->kobj,
- it87_attributes_fan_div[i]);
- if (err)
- goto error;
- }
-
- if (sio_data->beep_pin) {
- err = sysfs_create_file(&dev->kobj,
- it87_attributes_fan_beep[i]);
- if (err)
- goto error;
- if (!fan_beep_need_rw)
- continue;
-
- /*
- * As we have a single beep enable bit for all fans,
- * only the first enabled fan has a writable attribute
- * for it.
- */
- if (sysfs_chmod_file(&dev->kobj,
- it87_attributes_fan_beep[i],
- S_IRUGO | S_IWUSR))
- dev_dbg(dev, "chmod +w fan%d_beep failed\n",
- i + 1);
- fan_beep_need_rw = 0;
- }
- }
-
- if (enable_pwm_interface) {
- for (i = 0; i < 3; i++) {
- if (sio_data->skip_pwm & (1 << i))
- continue;
- err = sysfs_create_group(&dev->kobj,
- &it87_group_pwm[i]);
- if (err)
- goto error;
-
- if (!has_old_autopwm(data))
- continue;
- err = sysfs_create_group(&dev->kobj,
- &it87_group_autopwm[i]);
- if (err)
- goto error;
- }
- }
-
- if (!sio_data->skip_vid) {
- data->vrm = vid_which_vrm();
- /* VID reading from Super-I/O config space if available */
- data->vid = sio_data->vid_value;
- err = sysfs_create_group(&dev->kobj, &it87_group_vid);
- if (err)
- goto error;
- }
-
- /* Export labels for internal sensors */
- for (i = 0; i < 4; i++) {
- if (!(sio_data->internal & (1 << i)))
- continue;
- err = sysfs_create_file(&dev->kobj,
- it87_attributes_label[i]);
- if (err)
- goto error;
- }
-
- data->hwmon_dev = hwmon_device_register(dev);
- if (IS_ERR(data->hwmon_dev)) {
- err = PTR_ERR(data->hwmon_dev);
- goto error;
- }
-
- return 0;
-
-error:
- it87_remove_files(dev);
- return err;
-}
-
-static int it87_remove(struct platform_device *pdev)
-{
- struct it87_data *data = platform_get_drvdata(pdev);
-
- hwmon_device_unregister(data->hwmon_dev);
- it87_remove_files(&pdev->dev);
-
- return 0;
-}
-
-/*
- * Must be called with data->update_lock held, except during initialization.
- * We ignore the IT87 BUSY flag at this moment - it could lead to deadlocks,
- * would slow down the IT87 access and should not be necessary.
- */
-static int it87_read_value(struct it87_data *data, u8 reg)
-{
- outb_p(reg, data->addr + IT87_ADDR_REG_OFFSET);
- return inb_p(data->addr + IT87_DATA_REG_OFFSET);
-}
-
-/*
- * Must be called with data->update_lock held, except during initialization.
- * We ignore the IT87 BUSY flag at this moment - it could lead to deadlocks,
- * would slow down the IT87 access and should not be necessary.
- */
-static void it87_write_value(struct it87_data *data, u8 reg, u8 value)
-{
- outb_p(reg, data->addr + IT87_ADDR_REG_OFFSET);
- outb_p(value, data->addr + IT87_DATA_REG_OFFSET);
-}
-
-/* Return 1 if and only if the PWM interface is safe to use */
-static int it87_check_pwm(struct device *dev)
-{
- struct it87_data *data = dev_get_drvdata(dev);
- /*
- * Some BIOSes fail to correctly configure the IT87 fans. All fans off
- * and polarity set to active low is sign that this is the case so we
- * disable pwm control to protect the user.
- */
- int tmp = it87_read_value(data, IT87_REG_FAN_CTL);
- if ((tmp & 0x87) == 0) {
- if (fix_pwm_polarity) {
- /*
- * The user asks us to attempt a chip reconfiguration.
- * This means switching to active high polarity and
- * inverting all fan speed values.
- */
- int i;
- u8 pwm[3];
-
- for (i = 0; i < 3; i++)
- pwm[i] = it87_read_value(data,
- IT87_REG_PWM(i));
-
- /*
- * If any fan is in automatic pwm mode, the polarity
- * might be correct, as suspicious as it seems, so we
- * better don't change anything (but still disable the
- * PWM interface).
- */
- if (!((pwm[0] | pwm[1] | pwm[2]) & 0x80)) {
- dev_info(dev,
- "Reconfiguring PWM to active high polarity\n");
- it87_write_value(data, IT87_REG_FAN_CTL,
- tmp | 0x87);
- for (i = 0; i < 3; i++)
- it87_write_value(data,
- IT87_REG_PWM(i),
- 0x7f & ~pwm[i]);
- return 1;
- }
-
- dev_info(dev,
- "PWM configuration is too broken to be fixed\n");
- }
-
- dev_info(dev,
- "Detected broken BIOS defaults, disabling PWM interface\n");
- return 0;
- } else if (fix_pwm_polarity) {
- dev_info(dev,
- "PWM configuration looks sane, won't touch\n");
- }
-
- return 1;
-}
-
/* Called when we have found a new IT87. */
static void it87_init_device(struct platform_device *pdev)
{
@@ -2556,7 +2752,7 @@ static void it87_init_device(struct platform_device *pdev)
* these have separate registers for the temperature mapping and the
* manual duty cycle.
*/
- for (i = 0; i < 3; i++) {
+ for (i = 0; i < NUM_AUTO_PWM; i++) {
data->pwm_temp_map[i] = i;
data->pwm_duty[i] = 0x7f; /* Full speed */
data->auto_pwm[i][3] = 0x7f; /* Full speed, hard-coded */
@@ -2569,12 +2765,12 @@ static void it87_init_device(struct platform_device *pdev)
* means -1 degree C, which surprisingly doesn't trigger an alarm,
* but is still confusing, so change to 127 degrees C.
*/
- for (i = 0; i < 8; i++) {
+ for (i = 0; i < NUM_VIN_LIMIT; i++) {
tmp = it87_read_value(data, IT87_REG_VIN_MIN(i));
if (tmp == 0xff)
it87_write_value(data, IT87_REG_VIN_MIN(i), 0);
}
- for (i = 0; i < 3; i++) {
+ for (i = 0; i < NUM_TEMP_LIMIT; i++) {
tmp = it87_read_value(data, IT87_REG_TEMP_HIGH(i));
if (tmp == 0xff)
it87_write_value(data, IT87_REG_TEMP_HIGH(i), 127);
@@ -2619,158 +2815,245 @@ static void it87_init_device(struct platform_device *pdev)
/* Check for additional fans */
if (has_five_fans(data)) {
- if (tmp & (1 << 4))
- data->has_fan |= (1 << 3); /* fan4 enabled */
- if (tmp & (1 << 5))
- data->has_fan |= (1 << 4); /* fan5 enabled */
- if (has_six_fans(data) && (tmp & (1 << 2)))
- data->has_fan |= (1 << 5); /* fan6 enabled */
+ if (tmp & BIT(4))
+ data->has_fan |= BIT(3); /* fan4 enabled */
+ if (tmp & BIT(5))
+ data->has_fan |= BIT(4); /* fan5 enabled */
+ if (has_six_fans(data) && (tmp & BIT(2)))
+ data->has_fan |= BIT(5); /* fan6 enabled */
}
/* Fan input pins may be used for alternative functions */
data->has_fan &= ~sio_data->skip_fan;
+ /* Check if pwm5, pwm6 are enabled */
+ if (has_six_pwm(data)) {
+ /* The following code may be IT8620E specific */
+ tmp = it87_read_value(data, IT87_REG_FAN_DIV);
+ if ((tmp & 0xc0) == 0xc0)
+ sio_data->skip_pwm |= BIT(4);
+ if (!(tmp & BIT(3)))
+ sio_data->skip_pwm |= BIT(5);
+ }
+
/* Start monitoring */
it87_write_value(data, IT87_REG_CONFIG,
(it87_read_value(data, IT87_REG_CONFIG) & 0x3e)
| (update_vbat ? 0x41 : 0x01));
}
-static void it87_update_pwm_ctrl(struct it87_data *data, int nr)
+/* Return 1 if and only if the PWM interface is safe to use */
+static int it87_check_pwm(struct device *dev)
{
- data->pwm_ctrl[nr] = it87_read_value(data, IT87_REG_PWM(nr));
- if (has_newer_autopwm(data)) {
- data->pwm_temp_map[nr] = data->pwm_ctrl[nr] & 0x03;
- data->pwm_duty[nr] = it87_read_value(data,
- IT87_REG_PWM_DUTY(nr));
- } else {
- if (data->pwm_ctrl[nr] & 0x80) /* Automatic mode */
- data->pwm_temp_map[nr] = data->pwm_ctrl[nr] & 0x03;
- else /* Manual mode */
- data->pwm_duty[nr] = data->pwm_ctrl[nr] & 0x7f;
- }
+ struct it87_data *data = dev_get_drvdata(dev);
+ /*
+ * Some BIOSes fail to correctly configure the IT87 fans. All fans off
+	 * and polarity set to active low is a sign that this is the case, so we
+ * disable pwm control to protect the user.
+ */
+ int tmp = it87_read_value(data, IT87_REG_FAN_CTL);
- if (has_old_autopwm(data)) {
- int i;
+ if ((tmp & 0x87) == 0) {
+ if (fix_pwm_polarity) {
+ /*
+ * The user asks us to attempt a chip reconfiguration.
+ * This means switching to active high polarity and
+ * inverting all fan speed values.
+ */
+ int i;
+ u8 pwm[3];
- for (i = 0; i < 5 ; i++)
- data->auto_temp[nr][i] = it87_read_value(data,
- IT87_REG_AUTO_TEMP(nr, i));
- for (i = 0; i < 3 ; i++)
- data->auto_pwm[nr][i] = it87_read_value(data,
- IT87_REG_AUTO_PWM(nr, i));
+ for (i = 0; i < ARRAY_SIZE(pwm); i++)
+ pwm[i] = it87_read_value(data,
+ IT87_REG_PWM[i]);
+
+ /*
+ * If any fan is in automatic pwm mode, the polarity
+ * might be correct, as suspicious as it seems, so we
+ * better don't change anything (but still disable the
+ * PWM interface).
+ */
+ if (!((pwm[0] | pwm[1] | pwm[2]) & 0x80)) {
+ dev_info(dev,
+ "Reconfiguring PWM to active high polarity\n");
+ it87_write_value(data, IT87_REG_FAN_CTL,
+ tmp | 0x87);
+ for (i = 0; i < 3; i++)
+ it87_write_value(data,
+ IT87_REG_PWM[i],
+ 0x7f & ~pwm[i]);
+ return 1;
+ }
+
+ dev_info(dev,
+ "PWM configuration is too broken to be fixed\n");
+ }
+
+ dev_info(dev,
+ "Detected broken BIOS defaults, disabling PWM interface\n");
+ return 0;
+ } else if (fix_pwm_polarity) {
+ dev_info(dev,
+ "PWM configuration looks sane, won't touch\n");
}
+
+ return 1;
}
-static struct it87_data *it87_update_device(struct device *dev)
+static int it87_probe(struct platform_device *pdev)
{
- struct it87_data *data = dev_get_drvdata(dev);
- int i;
+ struct it87_data *data;
+ struct resource *res;
+ struct device *dev = &pdev->dev;
+ struct it87_sio_data *sio_data = dev_get_platdata(dev);
+ int enable_pwm_interface;
+ struct device *hwmon_dev;
- mutex_lock(&data->update_lock);
+ res = platform_get_resource(pdev, IORESOURCE_IO, 0);
+ if (!devm_request_region(&pdev->dev, res->start, IT87_EC_EXTENT,
+ DRVNAME)) {
+ dev_err(dev, "Failed to request region 0x%lx-0x%lx\n",
+ (unsigned long)res->start,
+ (unsigned long)(res->start + IT87_EC_EXTENT - 1));
+ return -EBUSY;
+ }
- if (time_after(jiffies, data->last_updated + HZ + HZ / 2)
- || !data->valid) {
- if (update_vbat) {
- /*
- * Cleared after each update, so reenable. Value
- * returned by this read will be previous value
- */
- it87_write_value(data, IT87_REG_CONFIG,
- it87_read_value(data, IT87_REG_CONFIG) | 0x40);
+ data = devm_kzalloc(&pdev->dev, sizeof(struct it87_data), GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+
+ data->addr = res->start;
+ data->type = sio_data->type;
+ data->features = it87_devices[sio_data->type].features;
+ data->peci_mask = it87_devices[sio_data->type].peci_mask;
+ data->old_peci_mask = it87_devices[sio_data->type].old_peci_mask;
+ /*
+ * IT8705F Datasheet 0.4.1, 3h == Version G.
+ * IT8712F Datasheet 0.9.1, section 8.3.5 indicates 8h == Version J.
+ * These are the first revisions with 16-bit tachometer support.
+ */
+ switch (data->type) {
+ case it87:
+ if (sio_data->revision >= 0x03) {
+ data->features &= ~FEAT_OLD_AUTOPWM;
+ data->features |= FEAT_FAN16_CONFIG | FEAT_16BIT_FANS;
}
- for (i = 0; i <= 7; i++) {
- data->in[i][0] =
- it87_read_value(data, IT87_REG_VIN(i));
- data->in[i][1] =
- it87_read_value(data, IT87_REG_VIN_MIN(i));
- data->in[i][2] =
- it87_read_value(data, IT87_REG_VIN_MAX(i));
+ break;
+ case it8712:
+ if (sio_data->revision >= 0x08) {
+ data->features &= ~FEAT_OLD_AUTOPWM;
+ data->features |= FEAT_FAN16_CONFIG | FEAT_16BIT_FANS |
+ FEAT_FIVE_FANS;
}
- /* in8 (battery) has no limit registers */
- data->in[8][0] = it87_read_value(data, IT87_REG_VIN(8));
- if (data->type == it8603)
- data->in[9][0] = it87_read_value(data, 0x2f);
+ break;
+ default:
+ break;
+ }
- for (i = 0; i < 6; i++) {
- /* Skip disabled fans */
- if (!(data->has_fan & (1 << i)))
- continue;
+ /* Now, we do the remaining detection. */
+ if ((it87_read_value(data, IT87_REG_CONFIG) & 0x80) ||
+ it87_read_value(data, IT87_REG_CHIPID) != 0x90)
+ return -ENODEV;
- data->fan[i][1] =
- it87_read_value(data, IT87_REG_FAN_MIN[i]);
- data->fan[i][0] = it87_read_value(data,
- IT87_REG_FAN[i]);
- /* Add high byte if in 16-bit mode */
- if (has_16bit_fans(data)) {
- data->fan[i][0] |= it87_read_value(data,
- IT87_REG_FANX[i]) << 8;
- data->fan[i][1] |= it87_read_value(data,
- IT87_REG_FANX_MIN[i]) << 8;
- }
- }
- for (i = 0; i < 3; i++) {
- if (!(data->has_temp & (1 << i)))
- continue;
- data->temp[i][0] =
- it87_read_value(data, IT87_REG_TEMP(i));
- data->temp[i][1] =
- it87_read_value(data, IT87_REG_TEMP_LOW(i));
- data->temp[i][2] =
- it87_read_value(data, IT87_REG_TEMP_HIGH(i));
- if (has_temp_offset(data))
- data->temp[i][3] =
- it87_read_value(data,
- IT87_REG_TEMP_OFFSET[i]);
- }
+ platform_set_drvdata(pdev, data);
- /* Newer chips don't have clock dividers */
- if ((data->has_fan & 0x07) && !has_16bit_fans(data)) {
- i = it87_read_value(data, IT87_REG_FAN_DIV);
- data->fan_div[0] = i & 0x07;
- data->fan_div[1] = (i >> 3) & 0x07;
- data->fan_div[2] = (i & 0x40) ? 3 : 1;
- }
+ mutex_init(&data->update_lock);
- data->alarms =
- it87_read_value(data, IT87_REG_ALARM1) |
- (it87_read_value(data, IT87_REG_ALARM2) << 8) |
- (it87_read_value(data, IT87_REG_ALARM3) << 16);
- data->beeps = it87_read_value(data, IT87_REG_BEEP_ENABLE);
+ /* Check PWM configuration */
+ enable_pwm_interface = it87_check_pwm(dev);
- data->fan_main_ctrl = it87_read_value(data,
- IT87_REG_FAN_MAIN_CTRL);
- data->fan_ctl = it87_read_value(data, IT87_REG_FAN_CTL);
- for (i = 0; i < 3; i++)
- it87_update_pwm_ctrl(data, i);
+ /* Starting with IT8721F, we handle scaling of internal voltages */
+ if (has_12mv_adc(data)) {
+ if (sio_data->internal & BIT(0))
+ data->in_scaled |= BIT(3); /* in3 is AVCC */
+ if (sio_data->internal & BIT(1))
+ data->in_scaled |= BIT(7); /* in7 is VSB */
+ if (sio_data->internal & BIT(2))
+ data->in_scaled |= BIT(8); /* in8 is Vbat */
+ if (sio_data->internal & BIT(3))
+ data->in_scaled |= BIT(9); /* in9 is AVCC */
+ } else if (sio_data->type == it8781 || sio_data->type == it8782 ||
+ sio_data->type == it8783) {
+ if (sio_data->internal & BIT(0))
+ data->in_scaled |= BIT(3); /* in3 is VCC5V */
+ if (sio_data->internal & BIT(1))
+ data->in_scaled |= BIT(7); /* in7 is VCCH5V */
+ }
- data->sensor = it87_read_value(data, IT87_REG_TEMP_ENABLE);
- data->extra = it87_read_value(data, IT87_REG_TEMP_EXTRA);
- /*
- * The IT8705F does not have VID capability.
- * The IT8718F and later don't use IT87_REG_VID for the
- * same purpose.
- */
- if (data->type == it8712 || data->type == it8716) {
- data->vid = it87_read_value(data, IT87_REG_VID);
- /*
- * The older IT8712F revisions had only 5 VID pins,
- * but we assume it is always safe to read 6 bits.
- */
- data->vid &= 0x3f;
- }
- data->last_updated = jiffies;
- data->valid = 1;
+ data->has_temp = 0x07;
+ if (sio_data->skip_temp & BIT(2)) {
+ if (sio_data->type == it8782 &&
+ !(it87_read_value(data, IT87_REG_TEMP_EXTRA) & 0x80))
+ data->has_temp &= ~BIT(2);
}
- mutex_unlock(&data->update_lock);
+ data->in_internal = sio_data->internal;
+ data->has_in = 0x3ff & ~sio_data->skip_in;
+
+ if (has_six_temp(data)) {
+ u8 reg = it87_read_value(data, IT87_REG_TEMP456_ENABLE);
+
+ /* Check for additional temperature sensors */
+ if ((reg & 0x03) >= 0x02)
+ data->has_temp |= BIT(3);
+ if (((reg >> 2) & 0x03) >= 0x02)
+ data->has_temp |= BIT(4);
+ if (((reg >> 4) & 0x03) >= 0x02)
+ data->has_temp |= BIT(5);
+
+ /* Check for additional voltage sensors */
+ if ((reg & 0x03) == 0x01)
+ data->has_in |= BIT(10);
+ if (((reg >> 2) & 0x03) == 0x01)
+ data->has_in |= BIT(11);
+ if (((reg >> 4) & 0x03) == 0x01)
+ data->has_in |= BIT(12);
+ }
- return data;
+ data->has_beep = !!sio_data->beep_pin;
+
+ /* Initialize the IT87 chip */
+ it87_init_device(pdev);
+
+ if (!sio_data->skip_vid) {
+ data->has_vid = true;
+ data->vrm = vid_which_vrm();
+ /* VID reading from Super-I/O config space if available */
+ data->vid = sio_data->vid_value;
+ }
+
+ /* Prepare for sysfs hooks */
+ data->groups[0] = &it87_group;
+ data->groups[1] = &it87_group_in;
+ data->groups[2] = &it87_group_temp;
+ data->groups[3] = &it87_group_fan;
+
+ if (enable_pwm_interface) {
+ data->has_pwm = BIT(ARRAY_SIZE(IT87_REG_PWM)) - 1;
+ data->has_pwm &= ~sio_data->skip_pwm;
+
+ data->groups[4] = &it87_group_pwm;
+ if (has_old_autopwm(data) || has_newer_autopwm(data))
+ data->groups[5] = &it87_group_auto_pwm;
+ }
+
+ hwmon_dev = devm_hwmon_device_register_with_groups(dev,
+ it87_devices[sio_data->type].name,
+ data, data->groups);
+ return PTR_ERR_OR_ZERO(hwmon_dev);
}
-static int __init it87_device_add(unsigned short address,
+static struct platform_driver it87_driver = {
+ .driver = {
+ .name = DRVNAME,
+ },
+ .probe = it87_probe,
+};
+
+static int __init it87_device_add(int index, unsigned short address,
const struct it87_sio_data *sio_data)
{
+ struct platform_device *pdev;
struct resource res = {
.start = address + IT87_EC_OFFSET,
.end = address + IT87_EC_OFFSET + IT87_EC_EXTENT - 1,
@@ -2781,14 +3064,11 @@ static int __init it87_device_add(unsigned short address,
err = acpi_check_resource_conflict(&res);
if (err)
- goto exit;
+ return err;
pdev = platform_device_alloc(DRVNAME, address);
- if (!pdev) {
- err = -ENOMEM;
- pr_err("Device allocation failed\n");
- goto exit;
- }
+ if (!pdev)
+ return -ENOMEM;
err = platform_device_add_resources(pdev, &res, 1);
if (err) {
@@ -2809,44 +3089,61 @@ static int __init it87_device_add(unsigned short address,
goto exit_device_put;
}
+ it87_pdev[index] = pdev;
return 0;
exit_device_put:
platform_device_put(pdev);
-exit:
return err;
}
static int __init sm_it87_init(void)
{
- int err;
- unsigned short isa_address = 0;
+ int sioaddr[2] = { REG_2E, REG_4E };
struct it87_sio_data sio_data;
+ unsigned short isa_address;
+ bool found = false;
+ int i, err;
- memset(&sio_data, 0, sizeof(struct it87_sio_data));
- err = it87_find(&isa_address, &sio_data);
- if (err)
- return err;
err = platform_driver_register(&it87_driver);
if (err)
return err;
- err = it87_device_add(isa_address, &sio_data);
- if (err) {
- platform_driver_unregister(&it87_driver);
- return err;
+ for (i = 0; i < ARRAY_SIZE(sioaddr); i++) {
+ memset(&sio_data, 0, sizeof(struct it87_sio_data));
+ isa_address = 0;
+ err = it87_find(sioaddr[i], &isa_address, &sio_data);
+ if (err || isa_address == 0)
+ continue;
+
+ err = it87_device_add(i, isa_address, &sio_data);
+ if (err)
+ goto exit_dev_unregister;
+ found = true;
}
+ if (!found) {
+ err = -ENODEV;
+ goto exit_unregister;
+ }
return 0;
+
+exit_dev_unregister:
+ /* NULL check handled by platform_device_unregister */
+ platform_device_unregister(it87_pdev[0]);
+exit_unregister:
+ platform_driver_unregister(&it87_driver);
+ return err;
}
static void __exit sm_it87_exit(void)
{
- platform_device_unregister(pdev);
+ /* NULL check handled by platform_device_unregister */
+ platform_device_unregister(it87_pdev[1]);
+ platform_device_unregister(it87_pdev[0]);
platform_driver_unregister(&it87_driver);
}
-
MODULE_AUTHOR("Chris Gauthron, Jean Delvare <jdelvare@suse.de>");
MODULE_DESCRIPTION("IT8705F/IT871xF/IT872xF hardware monitoring driver");
module_param(update_vbat, bool, 0);
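The it87 rework above drops the hand-rolled sysfs_create_file()/sysfs_remove_group() bookkeeping in favour of attribute groups whose .is_visible callbacks hide absent sensors, all registered in one go through devm_hwmon_device_register_with_groups(). A minimal sketch of that pattern follows; the demo_* names are placeholders and only two fan attributes are shown, not the driver's real tables.

#include <linux/bitops.h>
#include <linux/err.h>
#include <linux/hwmon.h>
#include <linux/hwmon-sysfs.h>
#include <linux/module.h>
#include <linux/platform_device.h>

struct demo_data {
	u32 has_fan;			/* bit n set -> fan n+1 is wired up */
};

static ssize_t fan_show(struct device *dev, struct device_attribute *attr,
			char *buf)
{
	int nr = to_sensor_dev_attr(attr)->index;

	/* A real driver would read the tachometer registers here. */
	return sprintf(buf, "%d\n", 1000 * (nr + 1));
}
static SENSOR_DEVICE_ATTR(fan1_input, S_IRUGO, fan_show, NULL, 0);
static SENSOR_DEVICE_ATTR(fan2_input, S_IRUGO, fan_show, NULL, 1);

static struct attribute *demo_attrs[] = {
	&sensor_dev_attr_fan1_input.dev_attr.attr,
	&sensor_dev_attr_fan2_input.dev_attr.attr,
	NULL,
};

/* Hide attributes for fans the SuperIO probe found to be absent. */
static umode_t demo_is_visible(struct kobject *kobj, struct attribute *attr,
			       int index)
{
	struct demo_data *data = dev_get_drvdata(kobj_to_dev(kobj));

	return (data->has_fan & BIT(index)) ? attr->mode : 0;
}

static const struct attribute_group demo_group = {
	.attrs = demo_attrs,
	.is_visible = demo_is_visible,
};
static const struct attribute_group *demo_groups[] = { &demo_group, NULL };

static int demo_probe(struct platform_device *pdev)
{
	struct demo_data *data;
	struct device *hwmon_dev;

	data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
	if (!data)
		return -ENOMEM;
	data->has_fan = BIT(0);		/* pretend only fan1 was detected */

	hwmon_dev = devm_hwmon_device_register_with_groups(&pdev->dev, "demo",
							    data, demo_groups);
	return PTR_ERR_OR_ZERO(hwmon_dev);
}

static struct platform_driver demo_driver = {
	.driver	= { .name = "demo-hwmon" },
	.probe	= demo_probe,
};
module_platform_driver(demo_driver);

MODULE_LICENSE("GPL");

Because the registration is device-managed, the corresponding remove path disappears entirely, which is why it87_remove() and it87_remove_files() are deleted in the hunks above.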
diff --git a/drivers/hwmon/lm75.c b/drivers/hwmon/lm75.c
index 0addc84ba948..69166ab3151d 100644
--- a/drivers/hwmon/lm75.c
+++ b/drivers/hwmon/lm75.c
@@ -77,7 +77,6 @@ static const u8 LM75_REG_TEMP[3] = {
struct lm75_data {
struct i2c_client *client;
struct device *hwmon_dev;
- struct thermal_zone_device *tz;
struct mutex update_lock;
u8 orig_conf;
u8 resolution; /* In bits, between 9 and 12 */
@@ -306,11 +305,9 @@ lm75_probe(struct i2c_client *client, const struct i2c_device_id *id)
if (IS_ERR(data->hwmon_dev))
return PTR_ERR(data->hwmon_dev);
- data->tz = thermal_zone_of_sensor_register(data->hwmon_dev, 0,
- data->hwmon_dev,
- &lm75_of_thermal_ops);
- if (IS_ERR(data->tz))
- data->tz = NULL;
+ devm_thermal_zone_of_sensor_register(data->hwmon_dev, 0,
+ data->hwmon_dev,
+ &lm75_of_thermal_ops);
dev_info(dev, "%s: sensor '%s'\n",
dev_name(data->hwmon_dev), client->name);
@@ -322,7 +319,6 @@ static int lm75_remove(struct i2c_client *client)
{
struct lm75_data *data = i2c_get_clientdata(client);
- thermal_zone_of_sensor_unregister(data->hwmon_dev, data->tz);
hwmon_device_unregister(data->hwmon_dev);
lm75_write_value(client, LM75_REG_CONF, data->orig_conf);
return 0;
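The lm75 conversion above, and the ntc_thermistor, scpi-hwmon and tmp102 conversions further down, all follow the same shape: register the thermal zone with the device-managed helper, stop storing the returned pointer, and drop the explicit unregister from the remove path. A minimal sketch, assuming a hypothetical demo_get_temp() backs the zone:

#include <linux/device.h>
#include <linux/err.h>
#include <linux/thermal.h>

/* Hypothetical read-out; a real driver would query its hardware here. */
static int demo_get_temp(void *data, int *temp)
{
	*temp = 42000;			/* millidegrees Celsius */
	return 0;
}

static const struct thermal_zone_of_device_ops demo_thermal_ops = {
	.get_temp = demo_get_temp,
};

static int demo_bind_thermal(struct device *hwmon_dev)
{
	struct thermal_zone_device *tz;

	/*
	 * Device-managed registration: the zone is torn down together with
	 * hwmon_dev, so remove() no longer has to unregister anything.
	 */
	tz = devm_thermal_zone_of_sensor_register(hwmon_dev, 0, hwmon_dev,
						  &demo_thermal_ops);
	if (IS_ERR(tz))
		dev_dbg(hwmon_dev, "no thermal zone bound, continuing\n");
	return 0;
}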
diff --git a/drivers/hwmon/max31722.c b/drivers/hwmon/max31722.c
new file mode 100644
index 000000000000..30a100e70a0d
--- /dev/null
+++ b/drivers/hwmon/max31722.c
@@ -0,0 +1,165 @@
+/*
+ * max31722 - hwmon driver for Maxim Integrated MAX31722/MAX31723 SPI
+ * digital thermometer and thermostats.
+ *
+ * Copyright (c) 2016, Intel Corporation.
+ *
+ * This file is subject to the terms and conditions of version 2 of
+ * the GNU General Public License. See the file COPYING in the main
+ * directory of this archive for more details.
+ */
+
+#include <linux/acpi.h>
+#include <linux/hwmon.h>
+#include <linux/hwmon-sysfs.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/spi/spi.h>
+
+#define MAX31722_REG_CFG 0x00
+#define MAX31722_REG_TEMP_LSB 0x01
+
+#define MAX31722_MODE_CONTINUOUS 0x00
+#define MAX31722_MODE_STANDBY 0x01
+#define MAX31722_MODE_MASK 0xFE
+#define MAX31722_RESOLUTION_12BIT 0x06
+#define MAX31722_WRITE_MASK 0x80
+
+struct max31722_data {
+ struct device *hwmon_dev;
+ struct spi_device *spi_device;
+ u8 mode;
+};
+
+static int max31722_set_mode(struct max31722_data *data, u8 mode)
+{
+ int ret;
+ struct spi_device *spi = data->spi_device;
+ u8 buf[2] = {
+ MAX31722_REG_CFG | MAX31722_WRITE_MASK,
+ (data->mode & MAX31722_MODE_MASK) | mode
+ };
+
+ ret = spi_write(spi, &buf, sizeof(buf));
+ if (ret < 0) {
+ dev_err(&spi->dev, "failed to set sensor mode.\n");
+ return ret;
+ }
+ data->mode = (data->mode & MAX31722_MODE_MASK) | mode;
+
+ return 0;
+}
+
+static ssize_t max31722_show_temp(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ ssize_t ret;
+ struct max31722_data *data = dev_get_drvdata(dev);
+
+ ret = spi_w8r16(data->spi_device, MAX31722_REG_TEMP_LSB);
+ if (ret < 0)
+ return ret;
+ /* Keep 12 bits and multiply by the scale of 62.5 millidegrees/bit. */
+ return sprintf(buf, "%d\n", (s16)le16_to_cpu(ret) * 125 / 32);
+}
+
+static SENSOR_DEVICE_ATTR(temp1_input, S_IRUGO,
+ max31722_show_temp, NULL, 0);
+
+static struct attribute *max31722_attrs[] = {
+ &sensor_dev_attr_temp1_input.dev_attr.attr,
+ NULL,
+};
+
+ATTRIBUTE_GROUPS(max31722);
+
+static int max31722_probe(struct spi_device *spi)
+{
+ int ret;
+ struct max31722_data *data;
+
+ data = devm_kzalloc(&spi->dev, sizeof(*data), GFP_KERNEL);
+ if (!data)
+ return -ENOMEM;
+
+ spi_set_drvdata(spi, data);
+ data->spi_device = spi;
+ /*
+ * Set SD bit to 0 so we can have continuous measurements.
+ * Set resolution to 12 bits for maximum precision.
+ */
+ data->mode = MAX31722_MODE_CONTINUOUS | MAX31722_RESOLUTION_12BIT;
+ ret = max31722_set_mode(data, MAX31722_MODE_CONTINUOUS);
+ if (ret < 0)
+ return ret;
+
+ data->hwmon_dev = hwmon_device_register_with_groups(&spi->dev,
+ spi->modalias,
+ data,
+ max31722_groups);
+ if (IS_ERR(data->hwmon_dev)) {
+ max31722_set_mode(data, MAX31722_MODE_STANDBY);
+ return PTR_ERR(data->hwmon_dev);
+ }
+
+ return 0;
+}
+
+static int max31722_remove(struct spi_device *spi)
+{
+ struct max31722_data *data = spi_get_drvdata(spi);
+
+ hwmon_device_unregister(data->hwmon_dev);
+
+ return max31722_set_mode(data, MAX31722_MODE_STANDBY);
+}
+
+static int __maybe_unused max31722_suspend(struct device *dev)
+{
+ struct spi_device *spi_device = to_spi_device(dev);
+ struct max31722_data *data = spi_get_drvdata(spi_device);
+
+ return max31722_set_mode(data, MAX31722_MODE_STANDBY);
+}
+
+static int __maybe_unused max31722_resume(struct device *dev)
+{
+ struct spi_device *spi_device = to_spi_device(dev);
+ struct max31722_data *data = spi_get_drvdata(spi_device);
+
+ return max31722_set_mode(data, MAX31722_MODE_CONTINUOUS);
+}
+
+static SIMPLE_DEV_PM_OPS(max31722_pm_ops, max31722_suspend, max31722_resume);
+
+static const struct spi_device_id max31722_spi_id[] = {
+ {"max31722", 0},
+ {"max31723", 0},
+ {}
+};
+
+static const struct acpi_device_id __maybe_unused max31722_acpi_id[] = {
+ {"MAX31722", 0},
+ {"MAX31723", 0},
+ {}
+};
+
+MODULE_DEVICE_TABLE(spi, max31722_spi_id);
+
+static struct spi_driver max31722_driver = {
+ .driver = {
+ .name = "max31722",
+ .pm = &max31722_pm_ops,
+ .acpi_match_table = ACPI_PTR(max31722_acpi_id),
+ },
+ .probe = max31722_probe,
+ .remove = max31722_remove,
+ .id_table = max31722_spi_id,
+};
+
+module_spi_driver(max31722_driver);
+
+MODULE_AUTHOR("Tiberiu Breana <tiberiu.a.breana@intel.com>");
+MODULE_DESCRIPTION("max31722 sensor driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/hwmon/ntc_thermistor.c b/drivers/hwmon/ntc_thermistor.c
index faa6e8dfbaaf..8ef7b713cb1a 100644
--- a/drivers/hwmon/ntc_thermistor.c
+++ b/drivers/hwmon/ntc_thermistor.c
@@ -259,7 +259,6 @@ struct ntc_data {
struct device *dev;
int n_comp;
char name[PLATFORM_NAME_SIZE];
- struct thermal_zone_device *tz;
};
#if defined(CONFIG_OF) && IS_ENABLED(CONFIG_IIO)
@@ -579,6 +578,7 @@ static const struct thermal_zone_of_device_ops ntc_of_thermal_ops = {
static int ntc_thermistor_probe(struct platform_device *pdev)
{
+ struct thermal_zone_device *tz;
const struct of_device_id *of_id =
of_match_device(of_match_ptr(ntc_match), &pdev->dev);
const struct platform_device_id *pdev_id;
@@ -677,12 +677,10 @@ static int ntc_thermistor_probe(struct platform_device *pdev)
dev_info(&pdev->dev, "Thermistor type: %s successfully probed.\n",
pdev_id->name);
- data->tz = thermal_zone_of_sensor_register(data->dev, 0, data->dev,
- &ntc_of_thermal_ops);
- if (IS_ERR(data->tz)) {
+ tz = devm_thermal_zone_of_sensor_register(data->dev, 0, data->dev,
+ &ntc_of_thermal_ops);
+ if (IS_ERR(tz))
dev_dbg(&pdev->dev, "Failed to register to thermal fw.\n");
- data->tz = NULL;
- }
return 0;
err_after_sysfs:
@@ -700,8 +698,6 @@ static int ntc_thermistor_remove(struct platform_device *pdev)
sysfs_remove_group(&data->dev->kobj, &ntc_attr_group);
ntc_iio_channel_release(pdata);
- thermal_zone_of_sensor_unregister(data->dev, data->tz);
-
return 0;
}
diff --git a/drivers/hwmon/pwm-fan.c b/drivers/hwmon/pwm-fan.c
index 3e23003f78b0..f9af3935b427 100644
--- a/drivers/hwmon/pwm-fan.c
+++ b/drivers/hwmon/pwm-fan.c
@@ -40,15 +40,18 @@ struct pwm_fan_ctx {
static int __set_pwm(struct pwm_fan_ctx *ctx, unsigned long pwm)
{
+ struct pwm_args pargs;
unsigned long duty;
int ret = 0;
+ pwm_get_args(ctx->pwm, &pargs);
+
mutex_lock(&ctx->lock);
if (ctx->pwm_value == pwm)
goto exit_set_pwm_err;
- duty = DIV_ROUND_UP(pwm * (ctx->pwm->period - 1), MAX_PWM);
- ret = pwm_config(ctx->pwm, duty, ctx->pwm->period);
+ duty = DIV_ROUND_UP(pwm * (pargs.period - 1), MAX_PWM);
+ ret = pwm_config(ctx->pwm, duty, pargs.period);
if (ret)
goto exit_set_pwm_err;
@@ -215,6 +218,7 @@ static int pwm_fan_probe(struct platform_device *pdev)
{
struct thermal_cooling_device *cdev;
struct pwm_fan_ctx *ctx;
+ struct pwm_args pargs;
struct device *hwmon;
int duty_cycle;
int ret;
@@ -233,11 +237,19 @@ static int pwm_fan_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, ctx);
+ /*
+ * FIXME: pwm_apply_args() should be removed when switching to the
+ * atomic PWM API.
+ */
+ pwm_apply_args(ctx->pwm);
+
/* Set duty cycle to maximum allowed */
- duty_cycle = ctx->pwm->period - 1;
+ pwm_get_args(ctx->pwm, &pargs);
+
+ duty_cycle = pargs.period - 1;
ctx->pwm_value = MAX_PWM;
- ret = pwm_config(ctx->pwm, duty_cycle, ctx->pwm->period);
+ ret = pwm_config(ctx->pwm, duty_cycle, pargs.period);
if (ret) {
dev_err(&pdev->dev, "Failed to configure PWM\n");
return ret;
@@ -303,14 +315,16 @@ static int pwm_fan_suspend(struct device *dev)
static int pwm_fan_resume(struct device *dev)
{
struct pwm_fan_ctx *ctx = dev_get_drvdata(dev);
+ struct pwm_args pargs;
unsigned long duty;
int ret;
if (ctx->pwm_value == 0)
return 0;
- duty = DIV_ROUND_UP(ctx->pwm_value * (ctx->pwm->period - 1), MAX_PWM);
- ret = pwm_config(ctx->pwm, duty, ctx->pwm->period);
+ pwm_get_args(ctx->pwm, &pargs);
+ duty = DIV_ROUND_UP(ctx->pwm_value * (pargs.period - 1), MAX_PWM);
+ ret = pwm_config(ctx->pwm, duty, pargs.period);
if (ret)
return ret;
return pwm_enable(ctx->pwm);
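The pwm-fan changes above stop reading ctx->pwm->period directly and instead fetch the reference period with pwm_get_args(); the duty computation itself is unchanged. A minimal sketch of that helper, assuming the driver's 0..255 value range:

#include <linux/kernel.h>
#include <linux/pwm.h>

#define DEMO_MAX_PWM	255

/*
 * Map a 0..DEMO_MAX_PWM request onto the period advertised by the PWM
 * provider: duty = ceil(value * (period - 1) / DEMO_MAX_PWM).
 */
static int demo_set_fan_speed(struct pwm_device *pwm, unsigned long value)
{
	struct pwm_args args;
	unsigned long duty;

	pwm_get_args(pwm, &args);
	duty = DIV_ROUND_UP(value * (args.period - 1), DEMO_MAX_PWM);
	return pwm_config(pwm, duty, args.period);
}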
diff --git a/drivers/hwmon/sch5636.c b/drivers/hwmon/sch5636.c
index 131a2815dbda..d24d7b6047f2 100644
--- a/drivers/hwmon/sch5636.c
+++ b/drivers/hwmon/sch5636.c
@@ -449,7 +449,7 @@ static int sch5636_probe(struct platform_device *pdev)
}
revision[i] = val;
}
- pr_info("Found %s chip at %#hx, revison: %d.%02d\n", DEVNAME,
+ pr_info("Found %s chip at %#hx, revision: %d.%02d\n", DEVNAME,
data->addr, revision[0], revision[1]);
/* Read all temp + fan ctrl registers to determine which are active */
diff --git a/drivers/hwmon/scpi-hwmon.c b/drivers/hwmon/scpi-hwmon.c
index 912b449c8303..25b44e68926d 100644
--- a/drivers/hwmon/scpi-hwmon.c
+++ b/drivers/hwmon/scpi-hwmon.c
@@ -31,10 +31,8 @@ struct sensor_data {
};
struct scpi_thermal_zone {
- struct list_head list;
int sensor_id;
struct scpi_sensors *scpi_sensors;
- struct thermal_zone_device *tzd;
};
struct scpi_sensors {
@@ -92,20 +90,6 @@ scpi_show_label(struct device *dev, struct device_attribute *attr, char *buf)
return sprintf(buf, "%s\n", sensor->info.name);
}
-static void
-unregister_thermal_zones(struct platform_device *pdev,
- struct scpi_sensors *scpi_sensors)
-{
- struct list_head *pos;
-
- list_for_each(pos, &scpi_sensors->thermal_zones) {
- struct scpi_thermal_zone *zone;
-
- zone = list_entry(pos, struct scpi_thermal_zone, list);
- thermal_zone_of_sensor_unregister(&pdev->dev, zone->tzd);
- }
-}
-
static struct thermal_zone_of_device_ops scpi_sensor_ops = {
.get_temp = scpi_read_temp,
};
@@ -118,7 +102,7 @@ static int scpi_hwmon_probe(struct platform_device *pdev)
struct scpi_ops *scpi_ops;
struct device *hwdev, *dev = &pdev->dev;
struct scpi_sensors *scpi_sensors;
- int ret, idx;
+ int idx, ret;
scpi_ops = get_scpi_ops();
if (!scpi_ops)
@@ -232,48 +216,35 @@ static int scpi_hwmon_probe(struct platform_device *pdev)
INIT_LIST_HEAD(&scpi_sensors->thermal_zones);
for (i = 0; i < nr_sensors; i++) {
struct sensor_data *sensor = &scpi_sensors->data[i];
+ struct thermal_zone_device *z;
struct scpi_thermal_zone *zone;
if (sensor->info.class != TEMPERATURE)
continue;
zone = devm_kzalloc(dev, sizeof(*zone), GFP_KERNEL);
- if (!zone) {
- ret = -ENOMEM;
- goto unregister_tzd;
- }
+ if (!zone)
+ return -ENOMEM;
zone->sensor_id = i;
zone->scpi_sensors = scpi_sensors;
- zone->tzd = thermal_zone_of_sensor_register(dev,
- sensor->info.sensor_id, zone, &scpi_sensor_ops);
+ z = devm_thermal_zone_of_sensor_register(dev,
+ sensor->info.sensor_id,
+ zone,
+ &scpi_sensor_ops);
/*
* The call to thermal_zone_of_sensor_register returns
* an error for sensors that are not associated with
* any thermal zones or if the thermal subsystem is
* not configured.
*/
- if (IS_ERR(zone->tzd)) {
+ if (IS_ERR(z)) {
devm_kfree(dev, zone);
continue;
}
- list_add(&zone->list, &scpi_sensors->thermal_zones);
}
return 0;
-
-unregister_tzd:
- unregister_thermal_zones(pdev, scpi_sensors);
- return ret;
-}
-
-static int scpi_hwmon_remove(struct platform_device *pdev)
-{
- struct scpi_sensors *scpi_sensors = platform_get_drvdata(pdev);
-
- unregister_thermal_zones(pdev, scpi_sensors);
-
- return 0;
}
static const struct of_device_id scpi_of_match[] = {
@@ -288,7 +259,6 @@ static struct platform_driver scpi_hwmon_platdrv = {
.of_match_table = scpi_of_match,
},
.probe = scpi_hwmon_probe,
- .remove = scpi_hwmon_remove,
};
module_platform_driver(scpi_hwmon_platdrv);
diff --git a/drivers/hwmon/tmp102.c b/drivers/hwmon/tmp102.c
index 5289aa0980a8..f1e96fd7f445 100644
--- a/drivers/hwmon/tmp102.c
+++ b/drivers/hwmon/tmp102.c
@@ -53,7 +53,6 @@
struct tmp102 {
struct i2c_client *client;
struct device *hwmon_dev;
- struct thermal_zone_device *tz;
struct mutex lock;
u16 config_orig;
unsigned long last_update;
@@ -232,10 +231,8 @@ static int tmp102_probe(struct i2c_client *client,
goto fail_restore_config;
}
tmp102->hwmon_dev = hwmon_dev;
- tmp102->tz = thermal_zone_of_sensor_register(hwmon_dev, 0, hwmon_dev,
- &tmp102_of_thermal_ops);
- if (IS_ERR(tmp102->tz))
- tmp102->tz = NULL;
+ devm_thermal_zone_of_sensor_register(hwmon_dev, 0, hwmon_dev,
+ &tmp102_of_thermal_ops);
dev_info(dev, "initialized\n");
@@ -251,7 +248,6 @@ static int tmp102_remove(struct i2c_client *client)
{
struct tmp102 *tmp102 = i2c_get_clientdata(client);
- thermal_zone_of_sensor_unregister(tmp102->hwmon_dev, tmp102->tz);
hwmon_device_unregister(tmp102->hwmon_dev);
/* Stop monitoring if device was stopped originally */
diff --git a/drivers/hwspinlock/hwspinlock_core.c b/drivers/hwspinlock/hwspinlock_core.c
index d50c701b19d6..4074441444fe 100644
--- a/drivers/hwspinlock/hwspinlock_core.c
+++ b/drivers/hwspinlock/hwspinlock_core.c
@@ -313,7 +313,7 @@ int of_hwspin_lock_get_id(struct device_node *np, int index)
hwlock = radix_tree_deref_slot(slot);
if (unlikely(!hwlock))
continue;
- if (radix_tree_is_indirect_ptr(hwlock)) {
+ if (radix_tree_deref_retry(hwlock)) {
slot = radix_tree_iter_retry(&iter);
continue;
}
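The hwspinlock fix above replaces radix_tree_is_indirect_ptr() with radix_tree_deref_retry(), which is the check the lockless iteration protocol expects: a slot that was relocated while being dereferenced yields a retry entry, and the walk must restart at the same index. A sketch of that walk, with a hypothetical demo_find_index() standing in for of_hwspin_lock_get_id():

#include <linux/errno.h>
#include <linux/radix-tree.h>
#include <linux/rcupdate.h>

/* Return the index at which 'wanted' is stored, or -ENOENT. */
static int demo_find_index(struct radix_tree_root *root, void *wanted)
{
	struct radix_tree_iter iter;
	void **slot;
	void *item;
	int id = -ENOENT;

	rcu_read_lock();
	radix_tree_for_each_slot(slot, root, &iter, 0) {
		item = radix_tree_deref_slot(slot);
		if (unlikely(!item))
			continue;
		if (radix_tree_deref_retry(item)) {
			/* Raced with a tree reshape: retry this index. */
			slot = radix_tree_iter_retry(&iter);
			continue;
		}
		if (item == wanted) {
			id = iter.index;
			break;
		}
	}
	rcu_read_unlock();

	return id;
}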
diff --git a/drivers/hwtracing/coresight/Kconfig b/drivers/hwtracing/coresight/Kconfig
index db0541031c72..130cb2114059 100644
--- a/drivers/hwtracing/coresight/Kconfig
+++ b/drivers/hwtracing/coresight/Kconfig
@@ -78,4 +78,15 @@ config CORESIGHT_QCOM_REPLICATOR
programmable ATB replicator sends the ATB trace stream from the
ETB/ETF to the TPIU and ETR.
+config CORESIGHT_STM
+ bool "CoreSight System Trace Macrocell driver"
+ depends on (ARM && !(CPU_32v3 || CPU_32v4 || CPU_32v4T)) || ARM64
+ select CORESIGHT_LINKS_AND_SINKS
+ select STM
+ help
+	  This driver provides support for hardware-assisted software
+	  instrumentation-based tracing. This is primarily used for
+	  logging useful software events or data coming from various entities
+	  in the system, possibly running different OSs.
+
endif
diff --git a/drivers/hwtracing/coresight/Makefile b/drivers/hwtracing/coresight/Makefile
index cf8c6d689747..af480d9c1441 100644
--- a/drivers/hwtracing/coresight/Makefile
+++ b/drivers/hwtracing/coresight/Makefile
@@ -1,15 +1,18 @@
#
# Makefile for CoreSight drivers.
#
-obj-$(CONFIG_CORESIGHT) += coresight.o
+obj-$(CONFIG_CORESIGHT) += coresight.o coresight-etm-perf.o
obj-$(CONFIG_OF) += of_coresight.o
-obj-$(CONFIG_CORESIGHT_LINK_AND_SINK_TMC) += coresight-tmc.o
+obj-$(CONFIG_CORESIGHT_LINK_AND_SINK_TMC) += coresight-tmc.o \
+ coresight-tmc-etf.o \
+ coresight-tmc-etr.o
obj-$(CONFIG_CORESIGHT_SINK_TPIU) += coresight-tpiu.o
obj-$(CONFIG_CORESIGHT_SINK_ETBV10) += coresight-etb10.o
obj-$(CONFIG_CORESIGHT_LINKS_AND_SINKS) += coresight-funnel.o \
coresight-replicator.o
obj-$(CONFIG_CORESIGHT_SOURCE_ETM3X) += coresight-etm3x.o coresight-etm-cp14.o \
- coresight-etm3x-sysfs.o \
- coresight-etm-perf.o
-obj-$(CONFIG_CORESIGHT_SOURCE_ETM4X) += coresight-etm4x.o
+ coresight-etm3x-sysfs.o
+obj-$(CONFIG_CORESIGHT_SOURCE_ETM4X) += coresight-etm4x.o \
+ coresight-etm4x-sysfs.o
obj-$(CONFIG_CORESIGHT_QCOM_REPLICATOR) += coresight-replicator-qcom.o
+obj-$(CONFIG_CORESIGHT_STM) += coresight-stm.o
diff --git a/drivers/hwtracing/coresight/coresight-etb10.c b/drivers/hwtracing/coresight/coresight-etb10.c
index acbce79934d6..4d20b0be0c0b 100644
--- a/drivers/hwtracing/coresight/coresight-etb10.c
+++ b/drivers/hwtracing/coresight/coresight-etb10.c
@@ -71,26 +71,6 @@
#define ETB_FRAME_SIZE_WORDS 4
/**
- * struct cs_buffer - keep track of a recording session' specifics
- * @cur: index of the current buffer
- * @nr_pages: max number of pages granted to us
- * @offset: offset within the current buffer
- * @data_size: how much we collected in this run
- * @lost: other than zero if we had a HW buffer wrap around
- * @snapshot: is this run in snapshot mode
- * @data_pages: a handle the ring buffer
- */
-struct cs_buffers {
- unsigned int cur;
- unsigned int nr_pages;
- unsigned long offset;
- local_t data_size;
- local_t lost;
- bool snapshot;
- void **data_pages;
-};
-
-/**
* struct etb_drvdata - specifics associated to an ETB component
* @base: memory mapped base address for this component.
* @dev: the device entity associated to this component.
@@ -440,7 +420,7 @@ static void etb_update_buffer(struct coresight_device *csdev,
u32 mask = ~(ETB_FRAME_SIZE_WORDS - 1);
/* The new read pointer must be frame size aligned */
- to_read -= handle->size & mask;
+ to_read = handle->size & mask;
/*
* Move the RAM read pointer up, keeping in mind that
* everything is in frame size units.
@@ -448,7 +428,8 @@ static void etb_update_buffer(struct coresight_device *csdev,
read_ptr = (write_ptr + drvdata->buffer_depth) -
to_read / ETB_FRAME_SIZE_WORDS;
/* Wrap around if need be*/
- read_ptr &= ~(drvdata->buffer_depth - 1);
+ if (read_ptr > (drvdata->buffer_depth - 1))
+ read_ptr -= drvdata->buffer_depth;
/* let the decoder know we've skipped ahead */
local_inc(&buf->lost);
}
@@ -579,47 +560,29 @@ static const struct file_operations etb_fops = {
.llseek = no_llseek,
};
-static ssize_t status_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- unsigned long flags;
- u32 etb_rdr, etb_sr, etb_rrp, etb_rwp;
- u32 etb_trg, etb_cr, etb_ffsr, etb_ffcr;
- struct etb_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- pm_runtime_get_sync(drvdata->dev);
- spin_lock_irqsave(&drvdata->spinlock, flags);
- CS_UNLOCK(drvdata->base);
-
- etb_rdr = readl_relaxed(drvdata->base + ETB_RAM_DEPTH_REG);
- etb_sr = readl_relaxed(drvdata->base + ETB_STATUS_REG);
- etb_rrp = readl_relaxed(drvdata->base + ETB_RAM_READ_POINTER);
- etb_rwp = readl_relaxed(drvdata->base + ETB_RAM_WRITE_POINTER);
- etb_trg = readl_relaxed(drvdata->base + ETB_TRG);
- etb_cr = readl_relaxed(drvdata->base + ETB_CTL_REG);
- etb_ffsr = readl_relaxed(drvdata->base + ETB_FFSR);
- etb_ffcr = readl_relaxed(drvdata->base + ETB_FFCR);
-
- CS_LOCK(drvdata->base);
- spin_unlock_irqrestore(&drvdata->spinlock, flags);
-
- pm_runtime_put(drvdata->dev);
-
- return sprintf(buf,
- "Depth:\t\t0x%x\n"
- "Status:\t\t0x%x\n"
- "RAM read ptr:\t0x%x\n"
- "RAM wrt ptr:\t0x%x\n"
- "Trigger cnt:\t0x%x\n"
- "Control:\t0x%x\n"
- "Flush status:\t0x%x\n"
- "Flush ctrl:\t0x%x\n",
- etb_rdr, etb_sr, etb_rrp, etb_rwp,
- etb_trg, etb_cr, etb_ffsr, etb_ffcr);
-
- return -EINVAL;
-}
-static DEVICE_ATTR_RO(status);
+#define coresight_etb10_simple_func(name, offset) \
+ coresight_simple_func(struct etb_drvdata, name, offset)
+
+coresight_etb10_simple_func(rdp, ETB_RAM_DEPTH_REG);
+coresight_etb10_simple_func(sts, ETB_STATUS_REG);
+coresight_etb10_simple_func(rrp, ETB_RAM_READ_POINTER);
+coresight_etb10_simple_func(rwp, ETB_RAM_WRITE_POINTER);
+coresight_etb10_simple_func(trg, ETB_TRG);
+coresight_etb10_simple_func(ctl, ETB_CTL_REG);
+coresight_etb10_simple_func(ffsr, ETB_FFSR);
+coresight_etb10_simple_func(ffcr, ETB_FFCR);
+
+static struct attribute *coresight_etb_mgmt_attrs[] = {
+ &dev_attr_rdp.attr,
+ &dev_attr_sts.attr,
+ &dev_attr_rrp.attr,
+ &dev_attr_rwp.attr,
+ &dev_attr_trg.attr,
+ &dev_attr_ctl.attr,
+ &dev_attr_ffsr.attr,
+ &dev_attr_ffcr.attr,
+ NULL,
+};
static ssize_t trigger_cntr_show(struct device *dev,
struct device_attribute *attr, char *buf)
@@ -649,10 +612,23 @@ static DEVICE_ATTR_RW(trigger_cntr);
static struct attribute *coresight_etb_attrs[] = {
&dev_attr_trigger_cntr.attr,
- &dev_attr_status.attr,
NULL,
};
-ATTRIBUTE_GROUPS(coresight_etb);
+
+static const struct attribute_group coresight_etb_group = {
+ .attrs = coresight_etb_attrs,
+};
+
+static const struct attribute_group coresight_etb_mgmt_group = {
+ .attrs = coresight_etb_mgmt_attrs,
+ .name = "mgmt",
+};
+
+const struct attribute_group *coresight_etb_groups[] = {
+ &coresight_etb_group,
+ &coresight_etb_mgmt_group,
+ NULL,
+};
static int etb_probe(struct amba_device *adev, const struct amba_id *id)
{
@@ -729,7 +705,6 @@ static int etb_probe(struct amba_device *adev, const struct amba_id *id)
if (ret)
goto err_misc_register;
- dev_info(dev, "ETB initialized\n");
return 0;
err_misc_register:
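The coresight_etb10_simple_func() wrappers above and the coresight_etm3x_simple_func() wrappers below both expand a shared coresight_simple_func(type, name, offset) helper whose definition is not part of these hunks. Judging from the per-driver macro removed from coresight-etm3x-sysfs.c, it presumably looks roughly like the sketch below; the demo_* names and the 0x10 offset are placeholders.

#include <linux/device.h>
#include <linux/io.h>
#include <linux/kernel.h>
#include <linux/sysfs.h>

/*
 * Type-parameterised read-only attribute that dumps one memory-mapped
 * register; modelled on the macro removed from coresight-etm3x-sysfs.c.
 */
#define demo_simple_func(type, name, offset)				\
static ssize_t name##_show(struct device *_dev,			\
			   struct device_attribute *attr, char *buf)	\
{									\
	type *drvdata = dev_get_drvdata(_dev->parent);			\
	return scnprintf(buf, PAGE_SIZE, "0x%x\n",			\
			 readl_relaxed(drvdata->base + offset));	\
}									\
static DEVICE_ATTR_RO(name)

struct demo_drvdata {
	void __iomem *base;
};

/* Hypothetical management register at offset 0x10. */
demo_simple_func(struct demo_drvdata, demo_status, 0x10);

/* The generated dev_attr_* would then be wired into a "mgmt" group. */
static struct attribute *demo_mgmt_attrs[] = {
	&dev_attr_demo_status.attr,
	NULL,
};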
diff --git a/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
index cbb4046c1070..02d4b629891f 100644
--- a/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
+++ b/drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
@@ -1221,26 +1221,19 @@ static struct attribute *coresight_etm_attrs[] = {
NULL,
};
-#define coresight_simple_func(name, offset) \
-static ssize_t name##_show(struct device *_dev, \
- struct device_attribute *attr, char *buf) \
-{ \
- struct etm_drvdata *drvdata = dev_get_drvdata(_dev->parent); \
- return scnprintf(buf, PAGE_SIZE, "0x%x\n", \
- readl_relaxed(drvdata->base + offset)); \
-} \
-DEVICE_ATTR_RO(name)
-
-coresight_simple_func(etmccr, ETMCCR);
-coresight_simple_func(etmccer, ETMCCER);
-coresight_simple_func(etmscr, ETMSCR);
-coresight_simple_func(etmidr, ETMIDR);
-coresight_simple_func(etmcr, ETMCR);
-coresight_simple_func(etmtraceidr, ETMTRACEIDR);
-coresight_simple_func(etmteevr, ETMTEEVR);
-coresight_simple_func(etmtssvr, ETMTSSCR);
-coresight_simple_func(etmtecr1, ETMTECR1);
-coresight_simple_func(etmtecr2, ETMTECR2);
+#define coresight_etm3x_simple_func(name, offset) \
+ coresight_simple_func(struct etm_drvdata, name, offset)
+
+coresight_etm3x_simple_func(etmccr, ETMCCR);
+coresight_etm3x_simple_func(etmccer, ETMCCER);
+coresight_etm3x_simple_func(etmscr, ETMSCR);
+coresight_etm3x_simple_func(etmidr, ETMIDR);
+coresight_etm3x_simple_func(etmcr, ETMCR);
+coresight_etm3x_simple_func(etmtraceidr, ETMTRACEIDR);
+coresight_etm3x_simple_func(etmteevr, ETMTEEVR);
+coresight_etm3x_simple_func(etmtssvr, ETMTSSCR);
+coresight_etm3x_simple_func(etmtecr1, ETMTECR1);
+coresight_etm3x_simple_func(etmtecr2, ETMTECR2);
static struct attribute *coresight_etm_mgmt_attrs[] = {
&dev_attr_etmccr.attr,
diff --git a/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
new file mode 100644
index 000000000000..7c84308c5564
--- /dev/null
+++ b/drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
@@ -0,0 +1,2126 @@
+/*
+ * Copyright(C) 2015 Linaro Limited. All rights reserved.
+ * Author: Mathieu Poirier <mathieu.poirier@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/pm_runtime.h>
+#include <linux/sysfs.h>
+#include "coresight-etm4x.h"
+
+static int etm4_set_mode_exclude(struct etmv4_drvdata *drvdata, bool exclude)
+{
+ u8 idx;
+ struct etmv4_config *config = &drvdata->config;
+
+ idx = config->addr_idx;
+
+ /*
+ * TRCACATRn.TYPE bit[1:0]: type of comparison
+ * the trace unit performs
+ */
+ if (BMVAL(config->addr_acc[idx], 0, 1) == ETM_INSTR_ADDR) {
+ if (idx % 2 != 0)
+ return -EINVAL;
+
+ /*
+ * We are performing instruction address comparison. Set the
+ * relevant bit of ViewInst Include/Exclude Control register
+ * for corresponding address comparator pair.
+ */
+ if (config->addr_type[idx] != ETM_ADDR_TYPE_RANGE ||
+ config->addr_type[idx + 1] != ETM_ADDR_TYPE_RANGE)
+ return -EINVAL;
+
+ if (exclude == true) {
+ /*
+ * Set exclude bit and unset the include bit
+ * corresponding to comparator pair
+ */
+ config->viiectlr |= BIT(idx / 2 + 16);
+ config->viiectlr &= ~BIT(idx / 2);
+ } else {
+ /*
+ * Set include bit and unset exclude bit
+ * corresponding to comparator pair
+ */
+ config->viiectlr |= BIT(idx / 2);
+ config->viiectlr &= ~BIT(idx / 2 + 16);
+ }
+ }
+ return 0;
+}
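The include/exclude bookkeeping above packs one bit per address-comparator pair into the ViewInst include/exclude control value: the pair containing comparator idx owns bit idx/2 for include and bit idx/2 + 16 for exclude. A small stand-alone restatement of that bit arithmetic (plain C, hypothetical helper name):

#include <stdint.h>
#include <stdio.h>

/* mirror of the viiectlr update for the pair containing comparator idx */
static uint32_t set_pair_exclude(uint32_t viiectlr, unsigned int idx, int exclude)
{
	unsigned int pair = idx / 2;

	if (exclude) {
		viiectlr |=  1u << (pair + 16);	/* set exclude bit   */
		viiectlr &= ~(1u << pair);	/* clear include bit */
	} else {
		viiectlr |=  1u << pair;
		viiectlr &= ~(1u << (pair + 16));
	}
	return viiectlr;
}

int main(void)
{
	/* comparators 2 and 3 form pair 1: exclude sets bit 17 */
	printf("%#x\n", set_pair_exclude(0x0, 2, 1));	/* prints 0x20000 */
	return 0;
}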
+
+static ssize_t nr_pe_cmp_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ val = drvdata->nr_pe_cmp;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(nr_pe_cmp);
+
+static ssize_t nr_addr_cmp_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ val = drvdata->nr_addr_cmp;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(nr_addr_cmp);
+
+static ssize_t nr_cntr_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ val = drvdata->nr_cntr;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(nr_cntr);
+
+static ssize_t nr_ext_inp_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ val = drvdata->nr_ext_inp;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(nr_ext_inp);
+
+static ssize_t numcidc_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ val = drvdata->numcidc;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(numcidc);
+
+static ssize_t numvmidc_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ val = drvdata->numvmidc;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(numvmidc);
+
+static ssize_t nrseqstate_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ val = drvdata->nrseqstate;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(nrseqstate);
+
+static ssize_t nr_resource_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ val = drvdata->nr_resource;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(nr_resource);
+
+static ssize_t nr_ss_cmp_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ val = drvdata->nr_ss_cmp;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+static DEVICE_ATTR_RO(nr_ss_cmp);
+
+static ssize_t reset_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int i;
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ if (val)
+ config->mode = 0x0;
+
+ /* Disable data tracing: do not trace load and store data transfers */
+ config->mode &= ~(ETM_MODE_LOAD | ETM_MODE_STORE);
+ config->cfg &= ~(BIT(1) | BIT(2));
+
+ /* Disable data value and data address tracing */
+ config->mode &= ~(ETM_MODE_DATA_TRACE_ADDR |
+ ETM_MODE_DATA_TRACE_VAL);
+ config->cfg &= ~(BIT(16) | BIT(17));
+
+ /* Disable all events tracing */
+ config->eventctrl0 = 0x0;
+ config->eventctrl1 = 0x0;
+
+ /* Disable timestamp event */
+ config->ts_ctrl = 0x0;
+
+ /* Disable stalling */
+ config->stall_ctrl = 0x0;
+
+ /* Reset trace synchronization period to 2^8 = 256 bytes */
+ if (drvdata->syncpr == false)
+ config->syncfreq = 0x8;
+
+ /*
+ * Enable ViewInst to trace everything with start-stop logic in
+ * started state. ARM recommends start-stop logic is set before
+ * each trace run.
+ */
+ config->vinst_ctrl |= BIT(0);
+ if (drvdata->nr_addr_cmp) {
+ config->mode |= ETM_MODE_VIEWINST_STARTSTOP;
+ /* SSSTATUS, bit[9] */
+ config->vinst_ctrl |= BIT(9);
+ }
+
+ /* No address range filtering for ViewInst */
+ config->viiectlr = 0x0;
+
+ /* No start-stop filtering for ViewInst */
+ config->vissctlr = 0x0;
+
+ /* Disable seq events */
+ for (i = 0; i < drvdata->nrseqstate-1; i++)
+ config->seq_ctrl[i] = 0x0;
+ config->seq_rst = 0x0;
+ config->seq_state = 0x0;
+
+ /* Disable external input events */
+ config->ext_inp = 0x0;
+
+ config->cntr_idx = 0x0;
+ for (i = 0; i < drvdata->nr_cntr; i++) {
+ config->cntrldvr[i] = 0x0;
+ config->cntr_ctrl[i] = 0x0;
+ config->cntr_val[i] = 0x0;
+ }
+
+ config->res_idx = 0x0;
+ for (i = 0; i < drvdata->nr_resource; i++)
+ config->res_ctrl[i] = 0x0;
+
+ for (i = 0; i < drvdata->nr_ss_cmp; i++) {
+ config->ss_ctrl[i] = 0x0;
+ config->ss_pe_cmp[i] = 0x0;
+ }
+
+ config->addr_idx = 0x0;
+ for (i = 0; i < drvdata->nr_addr_cmp * 2; i++) {
+ config->addr_val[i] = 0x0;
+ config->addr_acc[i] = 0x0;
+ config->addr_type[i] = ETM_ADDR_TYPE_NONE;
+ }
+
+ config->ctxid_idx = 0x0;
+ for (i = 0; i < drvdata->numcidc; i++) {
+ config->ctxid_pid[i] = 0x0;
+ config->ctxid_vpid[i] = 0x0;
+ }
+
+ config->ctxid_mask0 = 0x0;
+ config->ctxid_mask1 = 0x0;
+
+ config->vmid_idx = 0x0;
+ for (i = 0; i < drvdata->numvmidc; i++)
+ config->vmid_val[i] = 0x0;
+ config->vmid_mask0 = 0x0;
+ config->vmid_mask1 = 0x0;
+
+ drvdata->trcid = drvdata->cpu + 1;
+
+ spin_unlock(&drvdata->spinlock);
+
+ return size;
+}
+static DEVICE_ATTR_WO(reset);
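Driving this attribute from user space is a plain write of a non-zero hex value to the sysfs file. A hedged example follows; the device name in the path is an assumption, since the actual node name depends on the platform's CoreSight topology:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* assumed path; substitute the ETM node present on your system */
	const char *path = "/sys/bus/coresight/devices/etm0/reset";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* any non-zero value restores the default configuration above */
	if (write(fd, "1", 1) != 1)
		perror("write");
	close(fd);
	return 0;
}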
+
+static ssize_t mode_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ val = config->mode;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t mode_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ unsigned long val, mode;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ config->mode = val & ETMv4_MODE_ALL;
+
+ if (config->mode & ETM_MODE_EXCLUDE)
+ etm4_set_mode_exclude(drvdata, true);
+ else
+ etm4_set_mode_exclude(drvdata, false);
+
+ if (drvdata->instrp0 == true) {
+ /* start by clearing instruction P0 field */
+ config->cfg &= ~(BIT(1) | BIT(2));
+ if (config->mode & ETM_MODE_LOAD)
+ /* 0b01 Trace load instructions as P0 instructions */
+ config->cfg |= BIT(1);
+ if (config->mode & ETM_MODE_STORE)
+ /* 0b10 Trace store instructions as P0 instructions */
+ config->cfg |= BIT(2);
+ if (config->mode & ETM_MODE_LOAD_STORE)
+ /*
+ * 0b11 Trace load and store instructions
+ * as P0 instructions
+ */
+ config->cfg |= BIT(1) | BIT(2);
+ }
+
+ /* bit[3], Branch broadcast mode */
+ if ((config->mode & ETM_MODE_BB) && (drvdata->trcbb == true))
+ config->cfg |= BIT(3);
+ else
+ config->cfg &= ~BIT(3);
+
+ /* bit[4], Cycle counting instruction trace bit */
+ if ((config->mode & ETMv4_MODE_CYCACC) &&
+ (drvdata->trccci == true))
+ config->cfg |= BIT(4);
+ else
+ config->cfg &= ~BIT(4);
+
+ /* bit[6], Context ID tracing bit */
+ if ((config->mode & ETMv4_MODE_CTXID) && (drvdata->ctxid_size))
+ config->cfg |= BIT(6);
+ else
+ config->cfg &= ~BIT(6);
+
+ if ((config->mode & ETM_MODE_VMID) && (drvdata->vmid_size))
+ config->cfg |= BIT(7);
+ else
+ config->cfg &= ~BIT(7);
+
+ /* bits[10:8], Conditional instruction tracing bit */
+ mode = ETM_MODE_COND(config->mode);
+ if (drvdata->trccond == true) {
+ config->cfg &= ~(BIT(8) | BIT(9) | BIT(10));
+ config->cfg |= mode << 8;
+ }
+
+ /* bit[11], Global timestamp tracing bit */
+ if ((config->mode & ETMv4_MODE_TIMESTAMP) && (drvdata->ts_size))
+ config->cfg |= BIT(11);
+ else
+ config->cfg &= ~BIT(11);
+
+ /* bit[12], Return stack enable bit */
+ if ((config->mode & ETM_MODE_RETURNSTACK) &&
+ (drvdata->retstack == true))
+ config->cfg |= BIT(12);
+ else
+ config->cfg &= ~BIT(12);
+
+ /* bits[14:13], Q element enable field */
+ mode = ETM_MODE_QELEM(config->mode);
+ /* start by clearing QE bits */
+ config->cfg &= ~(BIT(13) | BIT(14));
+ /* if supported, Q elements with instruction counts are enabled */
+ if ((mode & BIT(0)) && (drvdata->q_support & BIT(0)))
+ config->cfg |= BIT(13);
+ /*
+ * if supported, Q elements with and without instruction
+ * counts are enabled
+ */
+ if ((mode & BIT(1)) && (drvdata->q_support & BIT(1)))
+ config->cfg |= BIT(14);
+
+ /* bit[11], AMBA Trace Bus (ATB) trigger enable bit */
+ if ((config->mode & ETM_MODE_ATB_TRIGGER) &&
+ (drvdata->atbtrig == true))
+ config->eventctrl1 |= BIT(11);
+ else
+ config->eventctrl1 &= ~BIT(11);
+
+ /* bit[12], Low-power state behavior override bit */
+ if ((config->mode & ETM_MODE_LPOVERRIDE) &&
+ (drvdata->lpoverride == true))
+ config->eventctrl1 |= BIT(12);
+ else
+ config->eventctrl1 &= ~BIT(12);
+
+ /* bit[8], Instruction stall bit */
+ if (config->mode & ETM_MODE_ISTALL_EN)
+ config->stall_ctrl |= BIT(8);
+ else
+ config->stall_ctrl &= ~BIT(8);
+
+ /* bit[10], Prioritize instruction trace bit */
+ if (config->mode & ETM_MODE_INSTPRIO)
+ config->stall_ctrl |= BIT(10);
+ else
+ config->stall_ctrl &= ~BIT(10);
+
+ /* bit[13], Trace overflow prevention bit */
+ if ((config->mode & ETM_MODE_NOOVERFLOW) &&
+ (drvdata->nooverflow == true))
+ config->stall_ctrl |= BIT(13);
+ else
+ config->stall_ctrl &= ~BIT(13);
+
+ /* bit[9] Start/stop logic control bit */
+ if (config->mode & ETM_MODE_VIEWINST_STARTSTOP)
+ config->vinst_ctrl |= BIT(9);
+ else
+ config->vinst_ctrl &= ~BIT(9);
+
+ /* bit[10], Whether a trace unit must trace a Reset exception */
+ if (config->mode & ETM_MODE_TRACE_RESET)
+ config->vinst_ctrl |= BIT(10);
+ else
+ config->vinst_ctrl &= ~BIT(10);
+
+ /* bit[11], Whether a trace unit must trace a system error exception */
+ if ((config->mode & ETM_MODE_TRACE_ERR) &&
+ (drvdata->trc_error == true))
+ config->vinst_ctrl |= BIT(11);
+ else
+ config->vinst_ctrl &= ~BIT(11);
+
+ if (config->mode & (ETM_MODE_EXCL_KERN | ETM_MODE_EXCL_USER))
+ etm4_config_trace_mode(config);
+
+ spin_unlock(&drvdata->spinlock);
+
+ return size;
+}
+static DEVICE_ATTR_RW(mode);
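Most of the work in mode_store() is inserting small fields into shadow copies of TRCCONFIGR, TRCEVENTCTL1R and TRCSTALLCTLR: clear the field, then OR in the new value at its shift, as with the conditional-trace bits[10:8]. A generic restatement of that clear-then-insert step, for illustration only:

#include <stdint.h>

/* place a 'width'-bit field at 'shift', clearing the old contents first */
static uint32_t set_field(uint32_t reg, unsigned int shift,
			  unsigned int width, uint32_t val)
{
	uint32_t mask = ((1u << width) - 1) << shift;

	return (reg & ~mask) | ((val << shift) & mask);
}

/* e.g. cfg = set_field(cfg, 8, 3, cond_mode);   -- TRCCONFIGR bits[10:8] */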
+
+static ssize_t pe_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ val = config->pe_sel;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t pe_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ if (val > drvdata->nr_pe) {
+ spin_unlock(&drvdata->spinlock);
+ return -EINVAL;
+ }
+
+ config->pe_sel = val;
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(pe);
+
+static ssize_t event_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ val = config->eventctrl0;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t event_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ switch (drvdata->nr_event) {
+ case 0x0:
+ /* EVENT0, bits[7:0] */
+ config->eventctrl0 = val & 0xFF;
+ break;
+ case 0x1:
+ /* EVENT1, bits[15:8] */
+ config->eventctrl0 = val & 0xFFFF;
+ break;
+ case 0x2:
+ /* EVENT2, bits[23:16] */
+ config->eventctrl0 = val & 0xFFFFFF;
+ break;
+ case 0x3:
+ /* EVENT3, bits[31:24] */
+ config->eventctrl0 = val;
+ break;
+ default:
+ break;
+ }
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(event);
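The switch above widens the accepted EVENT mask by one byte for each additional event selector the implementation reports, so nr_event == 0 keeps only bits[7:0] and nr_event == 3 keeps the whole word. The same table written as a tiny helper, shown purely as an illustration:

#include <stdint.h>

/* mask for the event control value given the extra selectors (0..3) */
static uint32_t eventctrl0_mask(unsigned int nr_event)
{
	switch (nr_event) {
	case 0:  return 0x000000FF;	/* EVENT0 only   */
	case 1:  return 0x0000FFFF;	/* EVENT0..1     */
	case 2:  return 0x00FFFFFF;	/* EVENT0..2     */
	default: return 0xFFFFFFFF;	/* all selectors */
	}
}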
+
+static ssize_t event_instren_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ val = BMVAL(config->eventctrl1, 0, 3);
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t event_instren_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ /* start by clearing all instruction event enable bits */
+ config->eventctrl1 &= ~(BIT(0) | BIT(1) | BIT(2) | BIT(3));
+ switch (drvdata->nr_event) {
+ case 0x0:
+ /* generate Event element for event 1 */
+ config->eventctrl1 |= val & BIT(1);
+ break;
+ case 0x1:
+ /* generate Event element for event 1 and 2 */
+ config->eventctrl1 |= val & (BIT(0) | BIT(1));
+ break;
+ case 0x2:
+ /* generate Event element for event 1, 2 and 3 */
+ config->eventctrl1 |= val & (BIT(0) | BIT(1) | BIT(2));
+ break;
+ case 0x3:
+ /* generate Event element for all 4 events */
+ config->eventctrl1 |= val & 0xF;
+ break;
+ default:
+ break;
+ }
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(event_instren);
+
+static ssize_t event_ts_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ val = config->ts_ctrl;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t event_ts_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+ if (!drvdata->ts_size)
+ return -EINVAL;
+
+ config->ts_ctrl = val & ETMv4_EVENT_MASK;
+ return size;
+}
+static DEVICE_ATTR_RW(event_ts);
+
+static ssize_t syncfreq_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ val = config->syncfreq;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t syncfreq_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+ if (drvdata->syncpr == true)
+ return -EINVAL;
+
+ config->syncfreq = val & ETMv4_SYNC_MASK;
+ return size;
+}
+static DEVICE_ATTR_RW(syncfreq);
+
+static ssize_t cyc_threshold_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ val = config->ccctlr;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t cyc_threshold_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+ if (val < drvdata->ccitmin)
+ return -EINVAL;
+
+ config->ccctlr = val & ETM_CYC_THRESHOLD_MASK;
+ return size;
+}
+static DEVICE_ATTR_RW(cyc_threshold);
+
+static ssize_t bb_ctrl_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ val = config->bb_ctrl;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t bb_ctrl_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+ if (drvdata->trcbb == false)
+ return -EINVAL;
+ if (!drvdata->nr_addr_cmp)
+ return -EINVAL;
+ /*
+ * Bit[7:0] selects which address range comparator is used for
+ * branch broadcast control.
+ */
+ if (BMVAL(val, 0, 7) > drvdata->nr_addr_cmp)
+ return -EINVAL;
+
+ config->bb_ctrl = val;
+ return size;
+}
+static DEVICE_ATTR_RW(bb_ctrl);
+
+static ssize_t event_vinst_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ val = config->vinst_ctrl & ETMv4_EVENT_MASK;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t event_vinst_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ val &= ETMv4_EVENT_MASK;
+ config->vinst_ctrl &= ~ETMv4_EVENT_MASK;
+ config->vinst_ctrl |= val;
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(event_vinst);
+
+static ssize_t s_exlevel_vinst_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ val = BMVAL(config->vinst_ctrl, 16, 19);
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t s_exlevel_vinst_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ /* clear all EXLEVEL_S bits (bit[18] is never implemented) */
+ config->vinst_ctrl &= ~(BIT(16) | BIT(17) | BIT(19));
+ /* enable instruction tracing for corresponding exception level */
+ val &= drvdata->s_ex_level;
+ config->vinst_ctrl |= (val << 16);
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(s_exlevel_vinst);
+
+static ssize_t ns_exlevel_vinst_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ /* EXLEVEL_NS, bits[23:20] */
+ val = BMVAL(config->vinst_ctrl, 20, 23);
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t ns_exlevel_vinst_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ /* clear EXLEVEL_NS bits (bit[23] is never implemented) */
+ config->vinst_ctrl &= ~(BIT(20) | BIT(21) | BIT(22));
+ /* enable instruction tracing for corresponding exception level */
+ val &= drvdata->ns_ex_level;
+ config->vinst_ctrl |= (val << 20);
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(ns_exlevel_vinst);
+
+static ssize_t addr_idx_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ val = config->addr_idx;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t addr_idx_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+ if (val >= drvdata->nr_addr_cmp * 2)
+ return -EINVAL;
+
+ /*
+ * Use spinlock to ensure index doesn't change while it gets
+ * dereferenced multiple times within a spinlock block elsewhere.
+ */
+ spin_lock(&drvdata->spinlock);
+ config->addr_idx = val;
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(addr_idx);
+
+static ssize_t addr_instdatatype_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ ssize_t len;
+ u8 val, idx;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+ val = BMVAL(config->addr_acc[idx], 0, 1);
+ len = scnprintf(buf, PAGE_SIZE, "%s\n",
+ val == ETM_INSTR_ADDR ? "instr" :
+ (val == ETM_DATA_LOAD_ADDR ? "data_load" :
+ (val == ETM_DATA_STORE_ADDR ? "data_store" :
+ "data_load_store")));
+ spin_unlock(&drvdata->spinlock);
+ return len;
+}
+
+static ssize_t addr_instdatatype_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ u8 idx;
+ char str[20] = "";
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (strlen(buf) >= 20)
+ return -EINVAL;
+ if (sscanf(buf, "%s", str) != 1)
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+ if (!strcmp(str, "instr"))
+ /* TYPE, bits[1:0] */
+ config->addr_acc[idx] &= ~(BIT(0) | BIT(1));
+
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(addr_instdatatype);
+
+static ssize_t addr_single_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ u8 idx;
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+ if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
+ config->addr_type[idx] == ETM_ADDR_TYPE_SINGLE)) {
+ spin_unlock(&drvdata->spinlock);
+ return -EPERM;
+ }
+ val = (unsigned long)config->addr_val[idx];
+ spin_unlock(&drvdata->spinlock);
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t addr_single_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ u8 idx;
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+ if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
+ config->addr_type[idx] == ETM_ADDR_TYPE_SINGLE)) {
+ spin_unlock(&drvdata->spinlock);
+ return -EPERM;
+ }
+
+ config->addr_val[idx] = (u64)val;
+ config->addr_type[idx] = ETM_ADDR_TYPE_SINGLE;
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(addr_single);
+
+static ssize_t addr_range_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ u8 idx;
+ unsigned long val1, val2;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+ if (idx % 2 != 0) {
+ spin_unlock(&drvdata->spinlock);
+ return -EPERM;
+ }
+ if (!((config->addr_type[idx] == ETM_ADDR_TYPE_NONE &&
+ config->addr_type[idx + 1] == ETM_ADDR_TYPE_NONE) ||
+ (config->addr_type[idx] == ETM_ADDR_TYPE_RANGE &&
+ config->addr_type[idx + 1] == ETM_ADDR_TYPE_RANGE))) {
+ spin_unlock(&drvdata->spinlock);
+ return -EPERM;
+ }
+
+ val1 = (unsigned long)config->addr_val[idx];
+ val2 = (unsigned long)config->addr_val[idx + 1];
+ spin_unlock(&drvdata->spinlock);
+ return scnprintf(buf, PAGE_SIZE, "%#lx %#lx\n", val1, val2);
+}
+
+static ssize_t addr_range_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ u8 idx;
+ unsigned long val1, val2;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (sscanf(buf, "%lx %lx", &val1, &val2) != 2)
+ return -EINVAL;
+ /* lower address comparator cannot have a higher address value */
+ if (val1 > val2)
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+ if (idx % 2 != 0) {
+ spin_unlock(&drvdata->spinlock);
+ return -EPERM;
+ }
+
+ if (!((config->addr_type[idx] == ETM_ADDR_TYPE_NONE &&
+ config->addr_type[idx + 1] == ETM_ADDR_TYPE_NONE) ||
+ (config->addr_type[idx] == ETM_ADDR_TYPE_RANGE &&
+ config->addr_type[idx + 1] == ETM_ADDR_TYPE_RANGE))) {
+ spin_unlock(&drvdata->spinlock);
+ return -EPERM;
+ }
+
+ config->addr_val[idx] = (u64)val1;
+ config->addr_type[idx] = ETM_ADDR_TYPE_RANGE;
+ config->addr_val[idx + 1] = (u64)val2;
+ config->addr_type[idx + 1] = ETM_ADDR_TYPE_RANGE;
+ /*
+ * Program include or exclude control bits for vinst or vdata
+ * whenever we change addr comparators to ETM_ADDR_TYPE_RANGE
+ */
+ if (config->mode & ETM_MODE_EXCLUDE)
+ etm4_set_mode_exclude(drvdata, true);
+ else
+ etm4_set_mode_exclude(drvdata, false);
+
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(addr_range);
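A range is programmed by writing both addresses to the same file, which the sscanf() above parses as two hex values (low address first), after an even-numbered comparator has been selected through addr_idx. A user-space example, with the sysfs path again an assumption:

#include <stdio.h>

int main(void)
{
	/* assumed node name; pick the even comparator via addr_idx first */
	FILE *f = fopen("/sys/bus/coresight/devices/etm0/addr_range", "w");

	if (!f)
		return 1;
	/* trace only the address range 0x400000 - 0x401000 */
	fprintf(f, "%lx %lx\n", 0x400000ul, 0x401000ul);
	fclose(f);
	return 0;
}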
+
+static ssize_t addr_start_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ u8 idx;
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+
+ if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
+ config->addr_type[idx] == ETM_ADDR_TYPE_START)) {
+ spin_unlock(&drvdata->spinlock);
+ return -EPERM;
+ }
+
+ val = (unsigned long)config->addr_val[idx];
+ spin_unlock(&drvdata->spinlock);
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t addr_start_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ u8 idx;
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+ if (!drvdata->nr_addr_cmp) {
+ spin_unlock(&drvdata->spinlock);
+ return -EINVAL;
+ }
+ if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
+ config->addr_type[idx] == ETM_ADDR_TYPE_START)) {
+ spin_unlock(&drvdata->spinlock);
+ return -EPERM;
+ }
+
+ config->addr_val[idx] = (u64)val;
+ config->addr_type[idx] = ETM_ADDR_TYPE_START;
+ config->vissctlr |= BIT(idx);
+ /* SSSTATUS, bit[9] - turn on start/stop logic */
+ config->vinst_ctrl |= BIT(9);
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(addr_start);
+
+static ssize_t addr_stop_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ u8 idx;
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+
+ if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
+ config->addr_type[idx] == ETM_ADDR_TYPE_STOP)) {
+ spin_unlock(&drvdata->spinlock);
+ return -EPERM;
+ }
+
+ val = (unsigned long)config->addr_val[idx];
+ spin_unlock(&drvdata->spinlock);
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t addr_stop_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ u8 idx;
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+ if (!drvdata->nr_addr_cmp) {
+ spin_unlock(&drvdata->spinlock);
+ return -EINVAL;
+ }
+ if (!(config->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
+ config->addr_type[idx] == ETM_ADDR_TYPE_STOP)) {
+ spin_unlock(&drvdata->spinlock);
+ return -EPERM;
+ }
+
+ config->addr_val[idx] = (u64)val;
+ config->addr_type[idx] = ETM_ADDR_TYPE_STOP;
+ config->vissctlr |= BIT(idx + 16);
+ /* SSSTATUS, bit[9] - turn on start/stop logic */
+ config->vinst_ctrl |= BIT(9);
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(addr_stop);
+
+static ssize_t addr_ctxtype_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ ssize_t len;
+ u8 idx, val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+ /* CONTEXTTYPE, bits[3:2] */
+ val = BMVAL(config->addr_acc[idx], 2, 3);
+ len = scnprintf(buf, PAGE_SIZE, "%s\n", val == ETM_CTX_NONE ? "none" :
+ (val == ETM_CTX_CTXID ? "ctxid" :
+ (val == ETM_CTX_VMID ? "vmid" : "all")));
+ spin_unlock(&drvdata->spinlock);
+ return len;
+}
+
+static ssize_t addr_ctxtype_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ u8 idx;
+ char str[10] = "";
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (strlen(buf) >= 10)
+ return -EINVAL;
+ if (sscanf(buf, "%s", str) != 1)
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+ if (!strcmp(str, "none"))
+ /* start by clearing context type bits */
+ config->addr_acc[idx] &= ~(BIT(2) | BIT(3));
+ else if (!strcmp(str, "ctxid")) {
+ /* 0b01 The trace unit performs a Context ID comparison */
+ if (drvdata->numcidc) {
+ config->addr_acc[idx] |= BIT(2);
+ config->addr_acc[idx] &= ~BIT(3);
+ }
+ } else if (!strcmp(str, "vmid")) {
+ /* 0b10 The trace unit performs a VMID comparison */
+ if (drvdata->numvmidc) {
+ config->addr_acc[idx] &= ~BIT(2);
+ config->addr_acc[idx] |= BIT(3);
+ }
+ } else if (!strcmp(str, "all")) {
+ /*
+ * 0b11 The trace unit performs both a Context ID
+ * comparison and a VMID comparison
+ */
+ if (drvdata->numcidc)
+ config->addr_acc[idx] |= BIT(2);
+ if (drvdata->numvmidc)
+ config->addr_acc[idx] |= BIT(3);
+ }
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(addr_ctxtype);
+
+static ssize_t addr_context_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ u8 idx;
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+ /* context ID comparator bits[6:4] */
+ val = BMVAL(config->addr_acc[idx], 4, 6);
+ spin_unlock(&drvdata->spinlock);
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t addr_context_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ u8 idx;
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+ if ((drvdata->numcidc <= 1) && (drvdata->numvmidc <= 1))
+ return -EINVAL;
+ if (val >= (drvdata->numcidc >= drvdata->numvmidc ?
+ drvdata->numcidc : drvdata->numvmidc))
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->addr_idx;
+ /* clear context ID comparator bits[6:4] */
+ config->addr_acc[idx] &= ~(BIT(4) | BIT(5) | BIT(6));
+ config->addr_acc[idx] |= (val << 4);
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(addr_context);
+
+static ssize_t seq_idx_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ val = config->seq_idx;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t seq_idx_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+ if (val >= drvdata->nrseqstate - 1)
+ return -EINVAL;
+
+ /*
+ * Use spinlock to ensure index doesn't change while it gets
+ * dereferenced multiple times within a spinlock block elsewhere.
+ */
+ spin_lock(&drvdata->spinlock);
+ config->seq_idx = val;
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(seq_idx);
+
+static ssize_t seq_state_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ val = config->seq_state;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t seq_state_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+ if (val >= drvdata->nrseqstate)
+ return -EINVAL;
+
+ config->seq_state = val;
+ return size;
+}
+static DEVICE_ATTR_RW(seq_state);
+
+static ssize_t seq_event_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ u8 idx;
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->seq_idx;
+ val = config->seq_ctrl[idx];
+ spin_unlock(&drvdata->spinlock);
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t seq_event_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ u8 idx;
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->seq_idx;
+ /* RST, bits[7:0] */
+ config->seq_ctrl[idx] = val & 0xFF;
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(seq_event);
+
+static ssize_t seq_reset_event_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ val = config->seq_rst;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t seq_reset_event_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+ if (!(drvdata->nrseqstate))
+ return -EINVAL;
+
+ config->seq_rst = val & ETMv4_EVENT_MASK;
+ return size;
+}
+static DEVICE_ATTR_RW(seq_reset_event);
+
+static ssize_t cntr_idx_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ val = config->cntr_idx;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t cntr_idx_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+ if (val >= drvdata->nr_cntr)
+ return -EINVAL;
+
+ /*
+ * Use spinlock to ensure index doesn't change while it gets
+ * dereferenced multiple times within a spinlock block elsewhere.
+ */
+ spin_lock(&drvdata->spinlock);
+ config->cntr_idx = val;
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(cntr_idx);
+
+static ssize_t cntrldvr_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ u8 idx;
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->cntr_idx;
+ val = config->cntrldvr[idx];
+ spin_unlock(&drvdata->spinlock);
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t cntrldvr_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ u8 idx;
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+ if (val > ETM_CNTR_MAX_VAL)
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->cntr_idx;
+ config->cntrldvr[idx] = val;
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(cntrldvr);
+
+static ssize_t cntr_val_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ u8 idx;
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->cntr_idx;
+ val = config->cntr_val[idx];
+ spin_unlock(&drvdata->spinlock);
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t cntr_val_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ u8 idx;
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+ if (val > ETM_CNTR_MAX_VAL)
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->cntr_idx;
+ config->cntr_val[idx] = val;
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(cntr_val);
+
+static ssize_t cntr_ctrl_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ u8 idx;
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->cntr_idx;
+ val = config->cntr_ctrl[idx];
+ spin_unlock(&drvdata->spinlock);
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t cntr_ctrl_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ u8 idx;
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->cntr_idx;
+ config->cntr_ctrl[idx] = val;
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(cntr_ctrl);
+
+static ssize_t res_idx_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ val = config->res_idx;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t res_idx_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+ /* Resource selector pair 0 is always implemented and reserved */
+ if ((val == 0) || (val >= drvdata->nr_resource))
+ return -EINVAL;
+
+ /*
+ * Use spinlock to ensure index doesn't change while it gets
+ * dereferenced multiple times within a spinlock block elsewhere.
+ */
+ spin_lock(&drvdata->spinlock);
+ config->res_idx = val;
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(res_idx);
+
+static ssize_t res_ctrl_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ u8 idx;
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->res_idx;
+ val = config->res_ctrl[idx];
+ spin_unlock(&drvdata->spinlock);
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t res_ctrl_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ u8 idx;
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->res_idx;
+ /* For an odd idx the pair inversion bit is RES0 */
+ if (idx % 2 != 0)
+ /* PAIRINV, bit[21] */
+ val &= ~BIT(21);
+ config->res_ctrl[idx] = val;
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(res_ctrl);
+
+static ssize_t ctxid_idx_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ val = config->ctxid_idx;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t ctxid_idx_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+ if (val >= drvdata->numcidc)
+ return -EINVAL;
+
+ /*
+ * Use spinlock to ensure index doesn't change while it gets
+ * dereferenced multiple times within a spinlock block elsewhere.
+ */
+ spin_lock(&drvdata->spinlock);
+ config->ctxid_idx = val;
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(ctxid_idx);
+
+static ssize_t ctxid_pid_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ u8 idx;
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->ctxid_idx;
+ val = (unsigned long)config->ctxid_vpid[idx];
+ spin_unlock(&drvdata->spinlock);
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t ctxid_pid_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ u8 idx;
+ unsigned long vpid, pid;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ /*
+ * only implemented when ctxid tracing is enabled, i.e. at least one
+ * ctxid comparator is implemented and ctxid is greater than 0 bits
+ * in length
+ */
+ if (!drvdata->ctxid_size || !drvdata->numcidc)
+ return -EINVAL;
+ if (kstrtoul(buf, 16, &vpid))
+ return -EINVAL;
+
+ pid = coresight_vpid_to_pid(vpid);
+
+ spin_lock(&drvdata->spinlock);
+ idx = config->ctxid_idx;
+ config->ctxid_pid[idx] = (u64)pid;
+ config->ctxid_vpid[idx] = (u64)vpid;
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(ctxid_pid);
+
+static ssize_t ctxid_masks_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val1, val2;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ val1 = config->ctxid_mask0;
+ val2 = config->ctxid_mask1;
+ spin_unlock(&drvdata->spinlock);
+ return scnprintf(buf, PAGE_SIZE, "%#lx %#lx\n", val1, val2);
+}
+
+static ssize_t ctxid_masks_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ u8 i, j, maskbyte;
+ unsigned long val1, val2, mask;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ /*
+ * only implemented when ctxid tracing is enabled, i.e. at least one
+ * ctxid comparator is implemented and ctxid is greater than 0 bits
+ * in length
+ */
+ if (!drvdata->ctxid_size || !drvdata->numcidc)
+ return -EINVAL;
+ if (sscanf(buf, "%lx %lx", &val1, &val2) != 2)
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ /*
+ * each byte[0..3] controls mask value applied to ctxid
+ * comparator[0..3]
+ */
+ switch (drvdata->numcidc) {
+ case 0x1:
+ /* COMP0, bits[7:0] */
+ config->ctxid_mask0 = val1 & 0xFF;
+ break;
+ case 0x2:
+ /* COMP1, bits[15:8] */
+ config->ctxid_mask0 = val1 & 0xFFFF;
+ break;
+ case 0x3:
+ /* COMP2, bits[23:16] */
+ config->ctxid_mask0 = val1 & 0xFFFFFF;
+ break;
+ case 0x4:
+ /* COMP3, bits[31:24] */
+ config->ctxid_mask0 = val1;
+ break;
+ case 0x5:
+ /* COMP4, bits[7:0] */
+ config->ctxid_mask0 = val1;
+ config->ctxid_mask1 = val2 & 0xFF;
+ break;
+ case 0x6:
+ /* COMP5, bits[15:8] */
+ config->ctxid_mask0 = val1;
+ config->ctxid_mask1 = val2 & 0xFFFF;
+ break;
+ case 0x7:
+ /* COMP6, bits[23:16] */
+ config->ctxid_mask0 = val1;
+ config->ctxid_mask1 = val2 & 0xFFFFFF;
+ break;
+ case 0x8:
+ /* COMP7, bits[31:24] */
+ config->ctxid_mask0 = val1;
+ config->ctxid_mask1 = val2;
+ break;
+ default:
+ break;
+ }
+ /*
+ * If software sets a mask bit to 1, it must program the relevant byte
+ * of the ctxid comparator value to 0x0, otherwise the behavior is
+ * unpredictable. For example, if bit[3] of ctxid_mask0 is 1 (mask
+ * byte 0 covers comparator 0), we must clear bits[31:24] of the
+ * ctxid comparator 0 value register.
+ */
+ mask = config->ctxid_mask0;
+ for (i = 0; i < drvdata->numcidc; i++) {
+ /* mask value of corresponding ctxid comparator */
+ maskbyte = mask & ETMv4_EVENT_MASK;
+ /*
+ * each bit corresponds to a byte of respective ctxid comparator
+ * value register
+ */
+ for (j = 0; j < 8; j++) {
+ if (maskbyte & 1)
+ config->ctxid_pid[i] &= ~(0xFFULL << (j * 8));
+ maskbyte >>= 1;
+ }
+ /* Select the next ctxid comparator mask value */
+ if (i == 3)
+ /* ctxid comparators[4-7] */
+ mask = config->ctxid_mask1;
+ else
+ mask >>= 0x8;
+ }
+
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(ctxid_masks);
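The nested loop above clears one byte of a comparator value for every mask bit that is set. Isolated as a stand-alone function it looks like the sketch below; the ULL suffix matters once the upper four bytes of the 64-bit value are in play:

#include <stdint.h>

/* clear the bytes of a 64-bit comparator value selected by an 8-bit mask */
static uint64_t apply_mask_byte(uint64_t cmp_val, uint8_t maskbyte)
{
	unsigned int j;

	for (j = 0; j < 8; j++) {
		if (maskbyte & (1u << j))
			cmp_val &= ~(0xFFULL << (j * 8));
	}
	return cmp_val;
}

/* apply_mask_byte(0x1122334455667788ULL, 0x01) == 0x1122334455667700ULL */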
+
+static ssize_t vmid_idx_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ val = config->vmid_idx;
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t vmid_idx_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+ if (val >= drvdata->numvmidc)
+ return -EINVAL;
+
+ /*
+ * Use spinlock to ensure index doesn't change while it gets
+ * dereferenced multiple times within a spinlock block elsewhere.
+ */
+ spin_lock(&drvdata->spinlock);
+ config->vmid_idx = val;
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(vmid_idx);
+
+static ssize_t vmid_val_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ val = (unsigned long)config->vmid_val[config->vmid_idx];
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t vmid_val_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ unsigned long val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ /*
+ * only implemented when vmid tracing is enabled, i.e. at least one
+ * vmid comparator is implemented and at least 8 bit vmid size
+ */
+ if (!drvdata->vmid_size || !drvdata->numvmidc)
+ return -EINVAL;
+ if (kstrtoul(buf, 16, &val))
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+ config->vmid_val[config->vmid_idx] = (u64)val;
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(vmid_val);
+
+static ssize_t vmid_masks_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val1, val2;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ spin_lock(&drvdata->spinlock);
+ val1 = config->vmid_mask0;
+ val2 = config->vmid_mask1;
+ spin_unlock(&drvdata->spinlock);
+ return scnprintf(buf, PAGE_SIZE, "%#lx %#lx\n", val1, val2);
+}
+
+static ssize_t vmid_masks_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ u8 i, j, maskbyte;
+ unsigned long val1, val2, mask;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_config *config = &drvdata->config;
+
+ /*
+ * only implemented when vmid tracing is enabled, i.e. at least one
+ * vmid comparator is implemented and at least 8 bit vmid size
+ */
+ if (!drvdata->vmid_size || !drvdata->numvmidc)
+ return -EINVAL;
+ if (sscanf(buf, "%lx %lx", &val1, &val2) != 2)
+ return -EINVAL;
+
+ spin_lock(&drvdata->spinlock);
+
+ /*
+ * each byte[0..3] controls mask value applied to vmid
+ * comparator[0..3]
+ */
+ switch (drvdata->numvmidc) {
+ case 0x1:
+ /* COMP0, bits[7:0] */
+ config->vmid_mask0 = val1 & 0xFF;
+ break;
+ case 0x2:
+ /* COMP1, bits[15:8] */
+ config->vmid_mask0 = val1 & 0xFFFF;
+ break;
+ case 0x3:
+ /* COMP2, bits[23:16] */
+ config->vmid_mask0 = val1 & 0xFFFFFF;
+ break;
+ case 0x4:
+ /* COMP3, bits[31:24] */
+ config->vmid_mask0 = val1;
+ break;
+ case 0x5:
+ /* COMP4, bits[7:0] */
+ config->vmid_mask0 = val1;
+ config->vmid_mask1 = val2 & 0xFF;
+ break;
+ case 0x6:
+ /* COMP5, bits[15:8] */
+ config->vmid_mask0 = val1;
+ config->vmid_mask1 = val2 & 0xFFFF;
+ break;
+ case 0x7:
+ /* COMP6, bits[23:16] */
+ config->vmid_mask0 = val1;
+ config->vmid_mask1 = val2 & 0xFFFFFF;
+ break;
+ case 0x8:
+ /* COMP7, bits[31:24] */
+ config->vmid_mask0 = val1;
+ config->vmid_mask1 = val2;
+ break;
+ default:
+ break;
+ }
+
+ /*
+ * If software sets a mask bit to 1, it must program the relevant byte
+ * of the vmid comparator value to 0x0, otherwise the behavior is
+ * unpredictable. For example, if bit[3] of vmid_mask0 is 1 (mask
+ * byte 0 covers comparator 0), we must clear bits[31:24] of the
+ * vmid comparator 0 value register.
+ */
+ mask = config->vmid_mask0;
+ for (i = 0; i < drvdata->numvmidc; i++) {
+ /* mask value of corresponding vmid comparator */
+ maskbyte = mask & ETMv4_EVENT_MASK;
+ /*
+ * each bit corresponds to a byte of respective vmid comparator
+ * value register
+ */
+ for (j = 0; j < 8; j++) {
+ if (maskbyte & 1)
+ config->vmid_val[i] &= ~(0xFFULL << (j * 8));
+ maskbyte >>= 1;
+ }
+ /* Select the next vmid comparator mask value */
+ if (i == 3)
+ /* vmid comparators[4-7] */
+ mask = config->vmid_mask1;
+ else
+ mask >>= 0x8;
+ }
+ spin_unlock(&drvdata->spinlock);
+ return size;
+}
+static DEVICE_ATTR_RW(vmid_masks);
+
+static ssize_t cpu_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ int val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ val = drvdata->cpu;
+ return scnprintf(buf, PAGE_SIZE, "%d\n", val);
+}
+static DEVICE_ATTR_RO(cpu);
+
+static struct attribute *coresight_etmv4_attrs[] = {
+ &dev_attr_nr_pe_cmp.attr,
+ &dev_attr_nr_addr_cmp.attr,
+ &dev_attr_nr_cntr.attr,
+ &dev_attr_nr_ext_inp.attr,
+ &dev_attr_numcidc.attr,
+ &dev_attr_numvmidc.attr,
+ &dev_attr_nrseqstate.attr,
+ &dev_attr_nr_resource.attr,
+ &dev_attr_nr_ss_cmp.attr,
+ &dev_attr_reset.attr,
+ &dev_attr_mode.attr,
+ &dev_attr_pe.attr,
+ &dev_attr_event.attr,
+ &dev_attr_event_instren.attr,
+ &dev_attr_event_ts.attr,
+ &dev_attr_syncfreq.attr,
+ &dev_attr_cyc_threshold.attr,
+ &dev_attr_bb_ctrl.attr,
+ &dev_attr_event_vinst.attr,
+ &dev_attr_s_exlevel_vinst.attr,
+ &dev_attr_ns_exlevel_vinst.attr,
+ &dev_attr_addr_idx.attr,
+ &dev_attr_addr_instdatatype.attr,
+ &dev_attr_addr_single.attr,
+ &dev_attr_addr_range.attr,
+ &dev_attr_addr_start.attr,
+ &dev_attr_addr_stop.attr,
+ &dev_attr_addr_ctxtype.attr,
+ &dev_attr_addr_context.attr,
+ &dev_attr_seq_idx.attr,
+ &dev_attr_seq_state.attr,
+ &dev_attr_seq_event.attr,
+ &dev_attr_seq_reset_event.attr,
+ &dev_attr_cntr_idx.attr,
+ &dev_attr_cntrldvr.attr,
+ &dev_attr_cntr_val.attr,
+ &dev_attr_cntr_ctrl.attr,
+ &dev_attr_res_idx.attr,
+ &dev_attr_res_ctrl.attr,
+ &dev_attr_ctxid_idx.attr,
+ &dev_attr_ctxid_pid.attr,
+ &dev_attr_ctxid_masks.attr,
+ &dev_attr_vmid_idx.attr,
+ &dev_attr_vmid_val.attr,
+ &dev_attr_vmid_masks.attr,
+ &dev_attr_cpu.attr,
+ NULL,
+};
+
+#define coresight_etm4x_simple_func(name, offset) \
+ coresight_simple_func(struct etmv4_drvdata, name, offset)
+
+coresight_etm4x_simple_func(trcoslsr, TRCOSLSR);
+coresight_etm4x_simple_func(trcpdcr, TRCPDCR);
+coresight_etm4x_simple_func(trcpdsr, TRCPDSR);
+coresight_etm4x_simple_func(trclsr, TRCLSR);
+coresight_etm4x_simple_func(trcconfig, TRCCONFIGR);
+coresight_etm4x_simple_func(trctraceid, TRCTRACEIDR);
+coresight_etm4x_simple_func(trcauthstatus, TRCAUTHSTATUS);
+coresight_etm4x_simple_func(trcdevid, TRCDEVID);
+coresight_etm4x_simple_func(trcdevtype, TRCDEVTYPE);
+coresight_etm4x_simple_func(trcpidr0, TRCPIDR0);
+coresight_etm4x_simple_func(trcpidr1, TRCPIDR1);
+coresight_etm4x_simple_func(trcpidr2, TRCPIDR2);
+coresight_etm4x_simple_func(trcpidr3, TRCPIDR3);
+
+static struct attribute *coresight_etmv4_mgmt_attrs[] = {
+ &dev_attr_trcoslsr.attr,
+ &dev_attr_trcpdcr.attr,
+ &dev_attr_trcpdsr.attr,
+ &dev_attr_trclsr.attr,
+ &dev_attr_trcconfig.attr,
+ &dev_attr_trctraceid.attr,
+ &dev_attr_trcauthstatus.attr,
+ &dev_attr_trcdevid.attr,
+ &dev_attr_trcdevtype.attr,
+ &dev_attr_trcpidr0.attr,
+ &dev_attr_trcpidr1.attr,
+ &dev_attr_trcpidr2.attr,
+ &dev_attr_trcpidr3.attr,
+ NULL,
+};
+
+coresight_etm4x_simple_func(trcidr0, TRCIDR0);
+coresight_etm4x_simple_func(trcidr1, TRCIDR1);
+coresight_etm4x_simple_func(trcidr2, TRCIDR2);
+coresight_etm4x_simple_func(trcidr3, TRCIDR3);
+coresight_etm4x_simple_func(trcidr4, TRCIDR4);
+coresight_etm4x_simple_func(trcidr5, TRCIDR5);
+/* trcidr[6,7] are reserved */
+coresight_etm4x_simple_func(trcidr8, TRCIDR8);
+coresight_etm4x_simple_func(trcidr9, TRCIDR9);
+coresight_etm4x_simple_func(trcidr10, TRCIDR10);
+coresight_etm4x_simple_func(trcidr11, TRCIDR11);
+coresight_etm4x_simple_func(trcidr12, TRCIDR12);
+coresight_etm4x_simple_func(trcidr13, TRCIDR13);
+
+static struct attribute *coresight_etmv4_trcidr_attrs[] = {
+ &dev_attr_trcidr0.attr,
+ &dev_attr_trcidr1.attr,
+ &dev_attr_trcidr2.attr,
+ &dev_attr_trcidr3.attr,
+ &dev_attr_trcidr4.attr,
+ &dev_attr_trcidr5.attr,
+ /* trcidr[6,7] are reserved */
+ &dev_attr_trcidr8.attr,
+ &dev_attr_trcidr9.attr,
+ &dev_attr_trcidr10.attr,
+ &dev_attr_trcidr11.attr,
+ &dev_attr_trcidr12.attr,
+ &dev_attr_trcidr13.attr,
+ NULL,
+};
+
+static const struct attribute_group coresight_etmv4_group = {
+ .attrs = coresight_etmv4_attrs,
+};
+
+static const struct attribute_group coresight_etmv4_mgmt_group = {
+ .attrs = coresight_etmv4_mgmt_attrs,
+ .name = "mgmt",
+};
+
+static const struct attribute_group coresight_etmv4_trcidr_group = {
+ .attrs = coresight_etmv4_trcidr_attrs,
+ .name = "trcidr",
+};
+
+const struct attribute_group *coresight_etmv4_groups[] = {
+ &coresight_etmv4_group,
+ &coresight_etmv4_mgmt_group,
+ &coresight_etmv4_trcidr_group,
+ NULL,
+};
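coresight_etmv4_groups[] is what ties these attributes into sysfs: the unnamed group carries the programmable configuration knobs, while the named "mgmt" and "trcidr" groups become subdirectories of read-only hardware state. A sketch of how the array is typically consumed at probe time follows; the helper name etm4_add_sysfs_groups() is illustrative only and not part of this patch, the driver's probe routine being the authoritative wiring.

static int etm4_add_sysfs_groups(struct etmv4_drvdata *drvdata,
				 struct device *dev,
				 struct coresight_platform_data *pdata)
{
	struct coresight_desc desc = {
		.type = CORESIGHT_DEV_TYPE_SOURCE,
		.subtype.source_subtype = CORESIGHT_DEV_SUBTYPE_SOURCE_PROC,
		.ops = &etm4_cs_ops,
		.pdata = pdata,
		.dev = dev,
		/* creates the top-level attrs plus mgmt/ and trcidr/ */
		.groups = coresight_etmv4_groups,
	};

	drvdata->csdev = coresight_register(&desc);
	return PTR_ERR_OR_ZERO(drvdata->csdev);
}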
diff --git a/drivers/hwtracing/coresight/coresight-etm4x.c b/drivers/hwtracing/coresight/coresight-etm4x.c
index 1c59bd36834c..462f0dc15757 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x.c
+++ b/drivers/hwtracing/coresight/coresight-etm4x.c
@@ -26,15 +26,19 @@
#include <linux/clk.h>
#include <linux/cpu.h>
#include <linux/coresight.h>
+#include <linux/coresight-pmu.h>
#include <linux/pm_wakeup.h>
#include <linux/amba/bus.h>
#include <linux/seq_file.h>
#include <linux/uaccess.h>
+#include <linux/perf_event.h>
#include <linux/pm_runtime.h>
#include <linux/perf_event.h>
#include <asm/sections.h>
+#include <asm/local.h>
#include "coresight-etm4x.h"
+#include "coresight-etm-perf.h"
static int boot_enable;
module_param_named(boot_enable, boot_enable, int, S_IRUGO);
@@ -42,13 +46,13 @@ module_param_named(boot_enable, boot_enable, int, S_IRUGO);
/* The number of ETMv4 currently registered */
static int etm4_count;
static struct etmv4_drvdata *etmdrvdata[NR_CPUS];
+static void etm4_set_default(struct etmv4_config *config);
-static void etm4_os_unlock(void *info)
+static void etm4_os_unlock(struct etmv4_drvdata *drvdata)
{
- struct etmv4_drvdata *drvdata = (struct etmv4_drvdata *)info;
-
/* Writing any value to ETMOSLAR unlocks the trace registers */
writel_relaxed(0x0, drvdata->base + TRCOSLAR);
+ drvdata->os_unlock = true;
isb();
}
@@ -76,7 +80,7 @@ static int etm4_trace_id(struct coresight_device *csdev)
unsigned long flags;
int trace_id = -1;
- if (!drvdata->enable)
+ if (!local_read(&drvdata->mode))
return drvdata->trcid;
spin_lock_irqsave(&drvdata->spinlock, flags);
@@ -95,6 +99,7 @@ static void etm4_enable_hw(void *info)
{
int i;
struct etmv4_drvdata *drvdata = info;
+ struct etmv4_config *config = &drvdata->config;
CS_UNLOCK(drvdata->base);
@@ -109,69 +114,69 @@ static void etm4_enable_hw(void *info)
"timeout observed when probing at offset %#x\n",
TRCSTATR);
- writel_relaxed(drvdata->pe_sel, drvdata->base + TRCPROCSELR);
- writel_relaxed(drvdata->cfg, drvdata->base + TRCCONFIGR);
+ writel_relaxed(config->pe_sel, drvdata->base + TRCPROCSELR);
+ writel_relaxed(config->cfg, drvdata->base + TRCCONFIGR);
/* nothing specific implemented */
writel_relaxed(0x0, drvdata->base + TRCAUXCTLR);
- writel_relaxed(drvdata->eventctrl0, drvdata->base + TRCEVENTCTL0R);
- writel_relaxed(drvdata->eventctrl1, drvdata->base + TRCEVENTCTL1R);
- writel_relaxed(drvdata->stall_ctrl, drvdata->base + TRCSTALLCTLR);
- writel_relaxed(drvdata->ts_ctrl, drvdata->base + TRCTSCTLR);
- writel_relaxed(drvdata->syncfreq, drvdata->base + TRCSYNCPR);
- writel_relaxed(drvdata->ccctlr, drvdata->base + TRCCCCTLR);
- writel_relaxed(drvdata->bb_ctrl, drvdata->base + TRCBBCTLR);
+ writel_relaxed(config->eventctrl0, drvdata->base + TRCEVENTCTL0R);
+ writel_relaxed(config->eventctrl1, drvdata->base + TRCEVENTCTL1R);
+ writel_relaxed(config->stall_ctrl, drvdata->base + TRCSTALLCTLR);
+ writel_relaxed(config->ts_ctrl, drvdata->base + TRCTSCTLR);
+ writel_relaxed(config->syncfreq, drvdata->base + TRCSYNCPR);
+ writel_relaxed(config->ccctlr, drvdata->base + TRCCCCTLR);
+ writel_relaxed(config->bb_ctrl, drvdata->base + TRCBBCTLR);
writel_relaxed(drvdata->trcid, drvdata->base + TRCTRACEIDR);
- writel_relaxed(drvdata->vinst_ctrl, drvdata->base + TRCVICTLR);
- writel_relaxed(drvdata->viiectlr, drvdata->base + TRCVIIECTLR);
- writel_relaxed(drvdata->vissctlr,
+ writel_relaxed(config->vinst_ctrl, drvdata->base + TRCVICTLR);
+ writel_relaxed(config->viiectlr, drvdata->base + TRCVIIECTLR);
+ writel_relaxed(config->vissctlr,
drvdata->base + TRCVISSCTLR);
- writel_relaxed(drvdata->vipcssctlr,
+ writel_relaxed(config->vipcssctlr,
drvdata->base + TRCVIPCSSCTLR);
for (i = 0; i < drvdata->nrseqstate - 1; i++)
- writel_relaxed(drvdata->seq_ctrl[i],
+ writel_relaxed(config->seq_ctrl[i],
drvdata->base + TRCSEQEVRn(i));
- writel_relaxed(drvdata->seq_rst, drvdata->base + TRCSEQRSTEVR);
- writel_relaxed(drvdata->seq_state, drvdata->base + TRCSEQSTR);
- writel_relaxed(drvdata->ext_inp, drvdata->base + TRCEXTINSELR);
+ writel_relaxed(config->seq_rst, drvdata->base + TRCSEQRSTEVR);
+ writel_relaxed(config->seq_state, drvdata->base + TRCSEQSTR);
+ writel_relaxed(config->ext_inp, drvdata->base + TRCEXTINSELR);
for (i = 0; i < drvdata->nr_cntr; i++) {
- writel_relaxed(drvdata->cntrldvr[i],
+ writel_relaxed(config->cntrldvr[i],
drvdata->base + TRCCNTRLDVRn(i));
- writel_relaxed(drvdata->cntr_ctrl[i],
+ writel_relaxed(config->cntr_ctrl[i],
drvdata->base + TRCCNTCTLRn(i));
- writel_relaxed(drvdata->cntr_val[i],
+ writel_relaxed(config->cntr_val[i],
drvdata->base + TRCCNTVRn(i));
}
/* Resource selector pair 0 is always implemented and reserved */
- for (i = 2; i < drvdata->nr_resource * 2; i++)
- writel_relaxed(drvdata->res_ctrl[i],
+ for (i = 0; i < drvdata->nr_resource * 2; i++)
+ writel_relaxed(config->res_ctrl[i],
drvdata->base + TRCRSCTLRn(i));
for (i = 0; i < drvdata->nr_ss_cmp; i++) {
- writel_relaxed(drvdata->ss_ctrl[i],
+ writel_relaxed(config->ss_ctrl[i],
drvdata->base + TRCSSCCRn(i));
- writel_relaxed(drvdata->ss_status[i],
+ writel_relaxed(config->ss_status[i],
drvdata->base + TRCSSCSRn(i));
- writel_relaxed(drvdata->ss_pe_cmp[i],
+ writel_relaxed(config->ss_pe_cmp[i],
drvdata->base + TRCSSPCICRn(i));
}
for (i = 0; i < drvdata->nr_addr_cmp; i++) {
- writeq_relaxed(drvdata->addr_val[i],
+ writeq_relaxed(config->addr_val[i],
drvdata->base + TRCACVRn(i));
- writeq_relaxed(drvdata->addr_acc[i],
+ writeq_relaxed(config->addr_acc[i],
drvdata->base + TRCACATRn(i));
}
for (i = 0; i < drvdata->numcidc; i++)
- writeq_relaxed(drvdata->ctxid_pid[i],
+ writeq_relaxed(config->ctxid_pid[i],
drvdata->base + TRCCIDCVRn(i));
- writel_relaxed(drvdata->ctxid_mask0, drvdata->base + TRCCIDCCTLR0);
- writel_relaxed(drvdata->ctxid_mask1, drvdata->base + TRCCIDCCTLR1);
+ writel_relaxed(config->ctxid_mask0, drvdata->base + TRCCIDCCTLR0);
+ writel_relaxed(config->ctxid_mask1, drvdata->base + TRCCIDCCTLR1);
for (i = 0; i < drvdata->numvmidc; i++)
- writeq_relaxed(drvdata->vmid_val[i],
+ writeq_relaxed(config->vmid_val[i],
drvdata->base + TRCVMIDCVRn(i));
- writel_relaxed(drvdata->vmid_mask0, drvdata->base + TRCVMIDCCTLR0);
- writel_relaxed(drvdata->vmid_mask1, drvdata->base + TRCVMIDCCTLR1);
+ writel_relaxed(config->vmid_mask0, drvdata->base + TRCVMIDCCTLR0);
+ writel_relaxed(config->vmid_mask1, drvdata->base + TRCVMIDCCTLR1);
/* Enable the trace unit */
writel_relaxed(1, drvdata->base + TRCPRGCTLR);
@@ -187,2120 +192,210 @@ static void etm4_enable_hw(void *info)
dev_dbg(drvdata->dev, "cpu: %d enable smp call done\n", drvdata->cpu);
}
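This hunk is mechanical, but it encodes the key idea of the refactor: everything that describes a tracing session now comes from drvdata->config, while drvdata itself keeps only hardware capabilities and bookkeeping. An abbreviated sketch of the split (see coresight-etm4x.h in this series for the full, authoritative layout; most fields are elided here):

struct etmv4_config {
	u32	mode;				/* session flags (EXCL_KERN, ...) */
	u32	pe_sel;				/* TRCPROCSELR */
	u32	cfg;				/* TRCCONFIGR */
	u32	eventctrl0;			/* TRCEVENTCTL0R */
	u32	eventctrl1;			/* TRCEVENTCTL1R */
	u32	vinst_ctrl;			/* TRCVICTLR */
	u32	viiectlr;			/* TRCVIIECTLR */
	u64	addr_val[ETM_MAX_SINGLE_ADDR_CMP];
	u64	addr_acc[ETM_MAX_SINGLE_ADDR_CMP];
	/* ... remaining per-session state elided ... */
};

struct etmv4_drvdata {
	void __iomem		*base;		/* memory mapped registers */
	int			cpu;		/* CPU this ETM is affine to */
	u8			nr_addr_cmp;	/* capability from TRCIDR4 */
	local_t			mode;		/* CS_MODE_DISABLED/SYSFS/PERF */
	struct etmv4_config	config;		/* per-session programming */
	/* ... capability and state fields elided ... */
};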
-static int etm4_enable(struct coresight_device *csdev,
- struct perf_event_attr *attr, u32 mode)
+static int etm4_parse_event_config(struct etmv4_drvdata *drvdata,
+ struct perf_event_attr *attr)
{
- struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
- int ret;
-
- spin_lock(&drvdata->spinlock);
-
- /*
- * Executing etm4_enable_hw on the cpu whose ETM is being enabled
- * ensures that register writes occur when cpu is powered.
- */
- ret = smp_call_function_single(drvdata->cpu,
- etm4_enable_hw, drvdata, 1);
- if (ret)
- goto err;
- drvdata->enable = true;
- drvdata->sticky_enable = true;
-
- spin_unlock(&drvdata->spinlock);
+ struct etmv4_config *config = &drvdata->config;
- dev_info(drvdata->dev, "ETM tracing enabled\n");
- return 0;
-err:
- spin_unlock(&drvdata->spinlock);
- return ret;
-}
+ if (!attr)
+ return -EINVAL;
-static void etm4_disable_hw(void *info)
-{
- u32 control;
- struct etmv4_drvdata *drvdata = info;
+ /* Clear configuration from previous run */
+ memset(config, 0, sizeof(struct etmv4_config));
- CS_UNLOCK(drvdata->base);
+ if (attr->exclude_kernel)
+ config->mode = ETM_MODE_EXCL_KERN;
- control = readl_relaxed(drvdata->base + TRCPRGCTLR);
+ if (attr->exclude_user)
+ config->mode = ETM_MODE_EXCL_USER;
- /* EN, bit[0] Trace unit enable bit */
- control &= ~0x1;
-
- /* make sure everything completes before disabling */
- mb();
- isb();
- writel_relaxed(control, drvdata->base + TRCPRGCTLR);
-
- CS_LOCK(drvdata->base);
-
- dev_dbg(drvdata->dev, "cpu: %d disable smp call done\n", drvdata->cpu);
-}
-
-static void etm4_disable(struct coresight_device *csdev)
-{
- struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+ /* Always start from the default config */
+ etm4_set_default(config);
/*
- * Taking hotplug lock here protects from clocks getting disabled
- * with tracing being left on (crash scenario) if user disable occurs
- * after cpu online mask indicates the cpu is offline but before the
- * DYING hotplug callback is serviced by the ETM driver.
+ * By default the tracers are configured to trace the whole address
+ * range. Narrow the field only if requested by user space.
*/
- get_online_cpus();
- spin_lock(&drvdata->spinlock);
-
- /*
- * Executing etm4_disable_hw on the cpu whose ETM is being disabled
- * ensures that register writes occur when cpu is powered.
- */
- smp_call_function_single(drvdata->cpu, etm4_disable_hw, drvdata, 1);
- drvdata->enable = false;
-
- spin_unlock(&drvdata->spinlock);
- put_online_cpus();
-
- dev_info(drvdata->dev, "ETM tracing disabled\n");
-}
-
-static const struct coresight_ops_source etm4_source_ops = {
- .cpu_id = etm4_cpu_id,
- .trace_id = etm4_trace_id,
- .enable = etm4_enable,
- .disable = etm4_disable,
-};
-
-static const struct coresight_ops etm4_cs_ops = {
- .source_ops = &etm4_source_ops,
-};
+ if (config->mode)
+ etm4_config_trace_mode(config);
-static int etm4_set_mode_exclude(struct etmv4_drvdata *drvdata, bool exclude)
-{
- u8 idx = drvdata->addr_idx;
+ /* Go from generic option to ETMv4 specifics */
+ if (attr->config & BIT(ETM_OPT_CYCACC))
+ config->cfg |= ETMv4_MODE_CYCACC;
+ if (attr->config & BIT(ETM_OPT_TS))
+ config->cfg |= ETMv4_MODE_TIMESTAMP;
- /*
- * TRCACATRn.TYPE bit[1:0]: type of comparison
- * the trace unit performs
- */
- if (BMVAL(drvdata->addr_acc[idx], 0, 1) == ETM_INSTR_ADDR) {
- if (idx % 2 != 0)
- return -EINVAL;
-
- /*
- * We are performing instruction address comparison. Set the
- * relevant bit of ViewInst Include/Exclude Control register
- * for corresponding address comparator pair.
- */
- if (drvdata->addr_type[idx] != ETM_ADDR_TYPE_RANGE ||
- drvdata->addr_type[idx + 1] != ETM_ADDR_TYPE_RANGE)
- return -EINVAL;
-
- if (exclude == true) {
- /*
- * Set exclude bit and unset the include bit
- * corresponding to comparator pair
- */
- drvdata->viiectlr |= BIT(idx / 2 + 16);
- drvdata->viiectlr &= ~BIT(idx / 2);
- } else {
- /*
- * Set include bit and unset exclude bit
- * corresponding to comparator pair
- */
- drvdata->viiectlr |= BIT(idx / 2);
- drvdata->viiectlr &= ~BIT(idx / 2 + 16);
- }
- }
return 0;
}
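etm4_parse_event_config() is the perf entry point for session configuration: exclude_kernel/exclude_user narrow the traced exception levels, and the generic ETM_OPT_* bits in attr.config select cycle-accurate tracing and timestamping. In practice these bits are filled in by the perf tool (e.g. perf record -e cs_etm// on a kernel exposing the cs_etm PMU), but a minimal user-space sketch of the attribute this parser sees is shown below. It assumes a local copy of the kernel's include/linux/coresight-pmu.h for the ETM_OPT_* definitions and that the dynamic PMU type has been read from /sys/bus/event_source/devices/cs_etm/type.

/* User-space sketch, not part of the patch. */
#include <string.h>
#include <linux/perf_event.h>
#include "coresight-pmu.h"	/* assumed local copy providing ETM_OPT_* */

static void etm_fill_attr(struct perf_event_attr *attr, __u32 cs_etm_pmu_type)
{
	memset(attr, 0, sizeof(*attr));
	attr->size = sizeof(*attr);
	attr->type = cs_etm_pmu_type;			/* dynamic PMU type */
	attr->config = (1ULL << ETM_OPT_CYCACC) |	/* -> ETMv4_MODE_CYCACC */
		       (1ULL << ETM_OPT_TS);		/* -> ETMv4_MODE_TIMESTAMP */
	attr->exclude_kernel = 1;			/* -> ETM_MODE_EXCL_KERN */
	attr->disabled = 1;				/* enabled later by perf */
}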
-static ssize_t nr_pe_cmp_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
+static int etm4_enable_perf(struct coresight_device *csdev,
+ struct perf_event_attr *attr)
{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->nr_pe_cmp;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(nr_pe_cmp);
-
-static ssize_t nr_addr_cmp_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->nr_addr_cmp;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(nr_addr_cmp);
-
-static ssize_t nr_cntr_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->nr_cntr;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(nr_cntr);
-
-static ssize_t nr_ext_inp_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->nr_ext_inp;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(nr_ext_inp);
-
-static ssize_t numcidc_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->numcidc;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(numcidc);
-
-static ssize_t numvmidc_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->numvmidc;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(numvmidc);
-
-static ssize_t nrseqstate_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->nrseqstate;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(nrseqstate);
-
-static ssize_t nr_resource_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->nr_resource;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(nr_resource);
-
-static ssize_t nr_ss_cmp_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->nr_ss_cmp;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-static DEVICE_ATTR_RO(nr_ss_cmp);
-
-static ssize_t reset_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- int i;
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
- if (kstrtoul(buf, 16, &val))
+ if (WARN_ON_ONCE(drvdata->cpu != smp_processor_id()))
return -EINVAL;
- spin_lock(&drvdata->spinlock);
- if (val)
- drvdata->mode = 0x0;
-
- /* Disable data tracing: do not trace load and store data transfers */
- drvdata->mode &= ~(ETM_MODE_LOAD | ETM_MODE_STORE);
- drvdata->cfg &= ~(BIT(1) | BIT(2));
-
- /* Disable data value and data address tracing */
- drvdata->mode &= ~(ETM_MODE_DATA_TRACE_ADDR |
- ETM_MODE_DATA_TRACE_VAL);
- drvdata->cfg &= ~(BIT(16) | BIT(17));
-
- /* Disable all events tracing */
- drvdata->eventctrl0 = 0x0;
- drvdata->eventctrl1 = 0x0;
-
- /* Disable timestamp event */
- drvdata->ts_ctrl = 0x0;
-
- /* Disable stalling */
- drvdata->stall_ctrl = 0x0;
-
- /* Reset trace synchronization period to 2^8 = 256 bytes*/
- if (drvdata->syncpr == false)
- drvdata->syncfreq = 0x8;
-
- /*
- * Enable ViewInst to trace everything with start-stop logic in
- * started state. ARM recommends start-stop logic is set before
- * each trace run.
- */
- drvdata->vinst_ctrl |= BIT(0);
- if (drvdata->nr_addr_cmp == true) {
- drvdata->mode |= ETM_MODE_VIEWINST_STARTSTOP;
- /* SSSTATUS, bit[9] */
- drvdata->vinst_ctrl |= BIT(9);
- }
-
- /* No address range filtering for ViewInst */
- drvdata->viiectlr = 0x0;
-
- /* No start-stop filtering for ViewInst */
- drvdata->vissctlr = 0x0;
-
- /* Disable seq events */
- for (i = 0; i < drvdata->nrseqstate-1; i++)
- drvdata->seq_ctrl[i] = 0x0;
- drvdata->seq_rst = 0x0;
- drvdata->seq_state = 0x0;
-
- /* Disable external input events */
- drvdata->ext_inp = 0x0;
-
- drvdata->cntr_idx = 0x0;
- for (i = 0; i < drvdata->nr_cntr; i++) {
- drvdata->cntrldvr[i] = 0x0;
- drvdata->cntr_ctrl[i] = 0x0;
- drvdata->cntr_val[i] = 0x0;
- }
-
- /* Resource selector pair 0 is always implemented and reserved */
- drvdata->res_idx = 0x2;
- for (i = 2; i < drvdata->nr_resource * 2; i++)
- drvdata->res_ctrl[i] = 0x0;
-
- for (i = 0; i < drvdata->nr_ss_cmp; i++) {
- drvdata->ss_ctrl[i] = 0x0;
- drvdata->ss_pe_cmp[i] = 0x0;
- }
-
- drvdata->addr_idx = 0x0;
- for (i = 0; i < drvdata->nr_addr_cmp * 2; i++) {
- drvdata->addr_val[i] = 0x0;
- drvdata->addr_acc[i] = 0x0;
- drvdata->addr_type[i] = ETM_ADDR_TYPE_NONE;
- }
-
- drvdata->ctxid_idx = 0x0;
- for (i = 0; i < drvdata->numcidc; i++) {
- drvdata->ctxid_pid[i] = 0x0;
- drvdata->ctxid_vpid[i] = 0x0;
- }
-
- drvdata->ctxid_mask0 = 0x0;
- drvdata->ctxid_mask1 = 0x0;
-
- drvdata->vmid_idx = 0x0;
- for (i = 0; i < drvdata->numvmidc; i++)
- drvdata->vmid_val[i] = 0x0;
- drvdata->vmid_mask0 = 0x0;
- drvdata->vmid_mask1 = 0x0;
-
- drvdata->trcid = drvdata->cpu + 1;
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_WO(reset);
+ /* Configure the tracer based on the session's specifics */
+ etm4_parse_event_config(drvdata, attr);
+ /* And enable it */
+ etm4_enable_hw(drvdata);
-static ssize_t mode_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->mode;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+ return 0;
}
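Note the asymmetry with the sysfs path further down: perf invokes its start/stop callbacks on the CPU the event is bound to, which is what the WARN_ON_ONCE() check asserts, so etm4_enable_hw() can be called directly. The sysfs path has no such guarantee and must cross-call. A purely illustrative helper capturing that contract (not part of the patch):

static void etm4_poke_hw(struct etmv4_drvdata *drvdata, void (*fn)(void *),
			 bool already_on_target_cpu)
{
	if (already_on_target_cpu) {
		/* perf path: callbacks already run on drvdata->cpu */
		fn(drvdata);
	} else {
		/* sysfs path: force the register writes onto the right CPU */
		smp_call_function_single(drvdata->cpu, fn, drvdata, 1);
	}
}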
-static ssize_t mode_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
+static int etm4_enable_sysfs(struct coresight_device *csdev)
{
- unsigned long val, mode;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+ int ret;
spin_lock(&drvdata->spinlock);
- drvdata->mode = val & ETMv4_MODE_ALL;
-
- if (drvdata->mode & ETM_MODE_EXCLUDE)
- etm4_set_mode_exclude(drvdata, true);
- else
- etm4_set_mode_exclude(drvdata, false);
-
- if (drvdata->instrp0 == true) {
- /* start by clearing instruction P0 field */
- drvdata->cfg &= ~(BIT(1) | BIT(2));
- if (drvdata->mode & ETM_MODE_LOAD)
- /* 0b01 Trace load instructions as P0 instructions */
- drvdata->cfg |= BIT(1);
- if (drvdata->mode & ETM_MODE_STORE)
- /* 0b10 Trace store instructions as P0 instructions */
- drvdata->cfg |= BIT(2);
- if (drvdata->mode & ETM_MODE_LOAD_STORE)
- /*
- * 0b11 Trace load and store instructions
- * as P0 instructions
- */
- drvdata->cfg |= BIT(1) | BIT(2);
- }
-
- /* bit[3], Branch broadcast mode */
- if ((drvdata->mode & ETM_MODE_BB) && (drvdata->trcbb == true))
- drvdata->cfg |= BIT(3);
- else
- drvdata->cfg &= ~BIT(3);
-
- /* bit[4], Cycle counting instruction trace bit */
- if ((drvdata->mode & ETMv4_MODE_CYCACC) &&
- (drvdata->trccci == true))
- drvdata->cfg |= BIT(4);
- else
- drvdata->cfg &= ~BIT(4);
-
- /* bit[6], Context ID tracing bit */
- if ((drvdata->mode & ETMv4_MODE_CTXID) && (drvdata->ctxid_size))
- drvdata->cfg |= BIT(6);
- else
- drvdata->cfg &= ~BIT(6);
-
- if ((drvdata->mode & ETM_MODE_VMID) && (drvdata->vmid_size))
- drvdata->cfg |= BIT(7);
- else
- drvdata->cfg &= ~BIT(7);
-
- /* bits[10:8], Conditional instruction tracing bit */
- mode = ETM_MODE_COND(drvdata->mode);
- if (drvdata->trccond == true) {
- drvdata->cfg &= ~(BIT(8) | BIT(9) | BIT(10));
- drvdata->cfg |= mode << 8;
- }
-
- /* bit[11], Global timestamp tracing bit */
- if ((drvdata->mode & ETMv4_MODE_TIMESTAMP) && (drvdata->ts_size))
- drvdata->cfg |= BIT(11);
- else
- drvdata->cfg &= ~BIT(11);
- /* bit[12], Return stack enable bit */
- if ((drvdata->mode & ETM_MODE_RETURNSTACK) &&
- (drvdata->retstack == true))
- drvdata->cfg |= BIT(12);
- else
- drvdata->cfg &= ~BIT(12);
-
- /* bits[14:13], Q element enable field */
- mode = ETM_MODE_QELEM(drvdata->mode);
- /* start by clearing QE bits */
- drvdata->cfg &= ~(BIT(13) | BIT(14));
- /* if supported, Q elements with instruction counts are enabled */
- if ((mode & BIT(0)) && (drvdata->q_support & BIT(0)))
- drvdata->cfg |= BIT(13);
/*
- * if supported, Q elements with and without instruction
- * counts are enabled
+ * Executing etm4_enable_hw on the cpu whose ETM is being enabled
+ * ensures that register writes occur when cpu is powered.
*/
- if ((mode & BIT(1)) && (drvdata->q_support & BIT(1)))
- drvdata->cfg |= BIT(14);
-
- /* bit[11], AMBA Trace Bus (ATB) trigger enable bit */
- if ((drvdata->mode & ETM_MODE_ATB_TRIGGER) &&
- (drvdata->atbtrig == true))
- drvdata->eventctrl1 |= BIT(11);
- else
- drvdata->eventctrl1 &= ~BIT(11);
-
- /* bit[12], Low-power state behavior override bit */
- if ((drvdata->mode & ETM_MODE_LPOVERRIDE) &&
- (drvdata->lpoverride == true))
- drvdata->eventctrl1 |= BIT(12);
- else
- drvdata->eventctrl1 &= ~BIT(12);
-
- /* bit[8], Instruction stall bit */
- if (drvdata->mode & ETM_MODE_ISTALL_EN)
- drvdata->stall_ctrl |= BIT(8);
- else
- drvdata->stall_ctrl &= ~BIT(8);
-
- /* bit[10], Prioritize instruction trace bit */
- if (drvdata->mode & ETM_MODE_INSTPRIO)
- drvdata->stall_ctrl |= BIT(10);
- else
- drvdata->stall_ctrl &= ~BIT(10);
-
- /* bit[13], Trace overflow prevention bit */
- if ((drvdata->mode & ETM_MODE_NOOVERFLOW) &&
- (drvdata->nooverflow == true))
- drvdata->stall_ctrl |= BIT(13);
- else
- drvdata->stall_ctrl &= ~BIT(13);
-
- /* bit[9] Start/stop logic control bit */
- if (drvdata->mode & ETM_MODE_VIEWINST_STARTSTOP)
- drvdata->vinst_ctrl |= BIT(9);
- else
- drvdata->vinst_ctrl &= ~BIT(9);
-
- /* bit[10], Whether a trace unit must trace a Reset exception */
- if (drvdata->mode & ETM_MODE_TRACE_RESET)
- drvdata->vinst_ctrl |= BIT(10);
- else
- drvdata->vinst_ctrl &= ~BIT(10);
-
- /* bit[11], Whether a trace unit must trace a system error exception */
- if ((drvdata->mode & ETM_MODE_TRACE_ERR) &&
- (drvdata->trc_error == true))
- drvdata->vinst_ctrl |= BIT(11);
- else
- drvdata->vinst_ctrl &= ~BIT(11);
-
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(mode);
-
-static ssize_t pe_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->pe_sel;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t pe_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
-
- spin_lock(&drvdata->spinlock);
- if (val > drvdata->nr_pe) {
- spin_unlock(&drvdata->spinlock);
- return -EINVAL;
- }
+ ret = smp_call_function_single(drvdata->cpu,
+ etm4_enable_hw, drvdata, 1);
+ if (ret)
+ goto err;
- drvdata->pe_sel = val;
+ drvdata->sticky_enable = true;
spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(pe);
-
-static ssize_t event_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->eventctrl0;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-static ssize_t event_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
+ dev_info(drvdata->dev, "ETM tracing enabled\n");
+ return 0;
- spin_lock(&drvdata->spinlock);
- switch (drvdata->nr_event) {
- case 0x0:
- /* EVENT0, bits[7:0] */
- drvdata->eventctrl0 = val & 0xFF;
- break;
- case 0x1:
- /* EVENT1, bits[15:8] */
- drvdata->eventctrl0 = val & 0xFFFF;
- break;
- case 0x2:
- /* EVENT2, bits[23:16] */
- drvdata->eventctrl0 = val & 0xFFFFFF;
- break;
- case 0x3:
- /* EVENT3, bits[31:24] */
- drvdata->eventctrl0 = val;
- break;
- default:
- break;
- }
+err:
spin_unlock(&drvdata->spinlock);
- return size;
+ return ret;
}
-static DEVICE_ATTR_RW(event);
-static ssize_t event_instren_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
+static int etm4_enable(struct coresight_device *csdev,
+ struct perf_event_attr *attr, u32 mode)
{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = BMVAL(drvdata->eventctrl1, 0, 3);
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
+ int ret;
+ u32 val;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
-static ssize_t event_instren_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ val = local_cmpxchg(&drvdata->mode, CS_MODE_DISABLED, mode);
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
+ /* Someone is already using the tracer */
+ if (val)
+ return -EBUSY;
- spin_lock(&drvdata->spinlock);
- /* start by clearing all instruction event enable bits */
- drvdata->eventctrl1 &= ~(BIT(0) | BIT(1) | BIT(2) | BIT(3));
- switch (drvdata->nr_event) {
- case 0x0:
- /* generate Event element for event 1 */
- drvdata->eventctrl1 |= val & BIT(1);
+ switch (mode) {
+ case CS_MODE_SYSFS:
+ ret = etm4_enable_sysfs(csdev);
break;
- case 0x1:
- /* generate Event element for event 1 and 2 */
- drvdata->eventctrl1 |= val & (BIT(0) | BIT(1));
- break;
- case 0x2:
- /* generate Event element for event 1, 2 and 3 */
- drvdata->eventctrl1 |= val & (BIT(0) | BIT(1) | BIT(2));
- break;
- case 0x3:
- /* generate Event element for all 4 events */
- drvdata->eventctrl1 |= val & 0xF;
+ case CS_MODE_PERF:
+ ret = etm4_enable_perf(csdev, attr);
break;
default:
- break;
- }
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(event_instren);
-
-static ssize_t event_ts_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->ts_ctrl;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t event_ts_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
- if (!drvdata->ts_size)
- return -EINVAL;
-
- drvdata->ts_ctrl = val & ETMv4_EVENT_MASK;
- return size;
-}
-static DEVICE_ATTR_RW(event_ts);
-
-static ssize_t syncfreq_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->syncfreq;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t syncfreq_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
- if (drvdata->syncpr == true)
- return -EINVAL;
-
- drvdata->syncfreq = val & ETMv4_SYNC_MASK;
- return size;
-}
-static DEVICE_ATTR_RW(syncfreq);
-
-static ssize_t cyc_threshold_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->ccctlr;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t cyc_threshold_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
- if (val < drvdata->ccitmin)
- return -EINVAL;
-
- drvdata->ccctlr = val & ETM_CYC_THRESHOLD_MASK;
- return size;
-}
-static DEVICE_ATTR_RW(cyc_threshold);
-
-static ssize_t bb_ctrl_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->bb_ctrl;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t bb_ctrl_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
- if (drvdata->trcbb == false)
- return -EINVAL;
- if (!drvdata->nr_addr_cmp)
- return -EINVAL;
- /*
- * Bit[7:0] selects which address range comparator is used for
- * branch broadcast control.
- */
- if (BMVAL(val, 0, 7) > drvdata->nr_addr_cmp)
- return -EINVAL;
-
- drvdata->bb_ctrl = val;
- return size;
-}
-static DEVICE_ATTR_RW(bb_ctrl);
-
-static ssize_t event_vinst_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->vinst_ctrl & ETMv4_EVENT_MASK;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t event_vinst_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
-
- spin_lock(&drvdata->spinlock);
- val &= ETMv4_EVENT_MASK;
- drvdata->vinst_ctrl &= ~ETMv4_EVENT_MASK;
- drvdata->vinst_ctrl |= val;
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(event_vinst);
-
-static ssize_t s_exlevel_vinst_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = BMVAL(drvdata->vinst_ctrl, 16, 19);
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t s_exlevel_vinst_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
-
- spin_lock(&drvdata->spinlock);
- /* clear all EXLEVEL_S bits (bit[18] is never implemented) */
- drvdata->vinst_ctrl &= ~(BIT(16) | BIT(17) | BIT(19));
- /* enable instruction tracing for corresponding exception level */
- val &= drvdata->s_ex_level;
- drvdata->vinst_ctrl |= (val << 16);
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(s_exlevel_vinst);
-
-static ssize_t ns_exlevel_vinst_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- /* EXLEVEL_NS, bits[23:20] */
- val = BMVAL(drvdata->vinst_ctrl, 20, 23);
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t ns_exlevel_vinst_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
-
- spin_lock(&drvdata->spinlock);
- /* clear EXLEVEL_NS bits (bit[23] is never implemented */
- drvdata->vinst_ctrl &= ~(BIT(20) | BIT(21) | BIT(22));
- /* enable instruction tracing for corresponding exception level */
- val &= drvdata->ns_ex_level;
- drvdata->vinst_ctrl |= (val << 20);
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(ns_exlevel_vinst);
-
-static ssize_t addr_idx_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->addr_idx;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t addr_idx_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
- if (val >= drvdata->nr_addr_cmp * 2)
- return -EINVAL;
-
- /*
- * Use spinlock to ensure index doesn't change while it gets
- * dereferenced multiple times within a spinlock block elsewhere.
- */
- spin_lock(&drvdata->spinlock);
- drvdata->addr_idx = val;
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(addr_idx);
-
-static ssize_t addr_instdatatype_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- ssize_t len;
- u8 val, idx;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
- val = BMVAL(drvdata->addr_acc[idx], 0, 1);
- len = scnprintf(buf, PAGE_SIZE, "%s\n",
- val == ETM_INSTR_ADDR ? "instr" :
- (val == ETM_DATA_LOAD_ADDR ? "data_load" :
- (val == ETM_DATA_STORE_ADDR ? "data_store" :
- "data_load_store")));
- spin_unlock(&drvdata->spinlock);
- return len;
-}
-
-static ssize_t addr_instdatatype_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- u8 idx;
- char str[20] = "";
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (strlen(buf) >= 20)
- return -EINVAL;
- if (sscanf(buf, "%s", str) != 1)
- return -EINVAL;
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
- if (!strcmp(str, "instr"))
- /* TYPE, bits[1:0] */
- drvdata->addr_acc[idx] &= ~(BIT(0) | BIT(1));
-
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(addr_instdatatype);
-
-static ssize_t addr_single_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- u8 idx;
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- idx = drvdata->addr_idx;
- spin_lock(&drvdata->spinlock);
- if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
- drvdata->addr_type[idx] == ETM_ADDR_TYPE_SINGLE)) {
- spin_unlock(&drvdata->spinlock);
- return -EPERM;
- }
- val = (unsigned long)drvdata->addr_val[idx];
- spin_unlock(&drvdata->spinlock);
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t addr_single_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- u8 idx;
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
- if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
- drvdata->addr_type[idx] == ETM_ADDR_TYPE_SINGLE)) {
- spin_unlock(&drvdata->spinlock);
- return -EPERM;
- }
-
- drvdata->addr_val[idx] = (u64)val;
- drvdata->addr_type[idx] = ETM_ADDR_TYPE_SINGLE;
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(addr_single);
-
-static ssize_t addr_range_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- u8 idx;
- unsigned long val1, val2;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
- if (idx % 2 != 0) {
- spin_unlock(&drvdata->spinlock);
- return -EPERM;
- }
- if (!((drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE &&
- drvdata->addr_type[idx + 1] == ETM_ADDR_TYPE_NONE) ||
- (drvdata->addr_type[idx] == ETM_ADDR_TYPE_RANGE &&
- drvdata->addr_type[idx + 1] == ETM_ADDR_TYPE_RANGE))) {
- spin_unlock(&drvdata->spinlock);
- return -EPERM;
- }
-
- val1 = (unsigned long)drvdata->addr_val[idx];
- val2 = (unsigned long)drvdata->addr_val[idx + 1];
- spin_unlock(&drvdata->spinlock);
- return scnprintf(buf, PAGE_SIZE, "%#lx %#lx\n", val1, val2);
-}
-
-static ssize_t addr_range_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- u8 idx;
- unsigned long val1, val2;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (sscanf(buf, "%lx %lx", &val1, &val2) != 2)
- return -EINVAL;
- /* lower address comparator cannot have a higher address value */
- if (val1 > val2)
- return -EINVAL;
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
- if (idx % 2 != 0) {
- spin_unlock(&drvdata->spinlock);
- return -EPERM;
- }
-
- if (!((drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE &&
- drvdata->addr_type[idx + 1] == ETM_ADDR_TYPE_NONE) ||
- (drvdata->addr_type[idx] == ETM_ADDR_TYPE_RANGE &&
- drvdata->addr_type[idx + 1] == ETM_ADDR_TYPE_RANGE))) {
- spin_unlock(&drvdata->spinlock);
- return -EPERM;
- }
-
- drvdata->addr_val[idx] = (u64)val1;
- drvdata->addr_type[idx] = ETM_ADDR_TYPE_RANGE;
- drvdata->addr_val[idx + 1] = (u64)val2;
- drvdata->addr_type[idx + 1] = ETM_ADDR_TYPE_RANGE;
- /*
- * Program include or exclude control bits for vinst or vdata
- * whenever we change addr comparators to ETM_ADDR_TYPE_RANGE
- */
- if (drvdata->mode & ETM_MODE_EXCLUDE)
- etm4_set_mode_exclude(drvdata, true);
- else
- etm4_set_mode_exclude(drvdata, false);
-
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(addr_range);
-
-static ssize_t addr_start_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- u8 idx;
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
-
- if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
- drvdata->addr_type[idx] == ETM_ADDR_TYPE_START)) {
- spin_unlock(&drvdata->spinlock);
- return -EPERM;
- }
-
- val = (unsigned long)drvdata->addr_val[idx];
- spin_unlock(&drvdata->spinlock);
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t addr_start_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- u8 idx;
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
- if (!drvdata->nr_addr_cmp) {
- spin_unlock(&drvdata->spinlock);
- return -EINVAL;
- }
- if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
- drvdata->addr_type[idx] == ETM_ADDR_TYPE_START)) {
- spin_unlock(&drvdata->spinlock);
- return -EPERM;
- }
-
- drvdata->addr_val[idx] = (u64)val;
- drvdata->addr_type[idx] = ETM_ADDR_TYPE_START;
- drvdata->vissctlr |= BIT(idx);
- /* SSSTATUS, bit[9] - turn on start/stop logic */
- drvdata->vinst_ctrl |= BIT(9);
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(addr_start);
-
-static ssize_t addr_stop_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- u8 idx;
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
-
- if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
- drvdata->addr_type[idx] == ETM_ADDR_TYPE_STOP)) {
- spin_unlock(&drvdata->spinlock);
- return -EPERM;
- }
-
- val = (unsigned long)drvdata->addr_val[idx];
- spin_unlock(&drvdata->spinlock);
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t addr_stop_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- u8 idx;
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
- if (!drvdata->nr_addr_cmp) {
- spin_unlock(&drvdata->spinlock);
- return -EINVAL;
- }
- if (!(drvdata->addr_type[idx] == ETM_ADDR_TYPE_NONE ||
- drvdata->addr_type[idx] == ETM_ADDR_TYPE_STOP)) {
- spin_unlock(&drvdata->spinlock);
- return -EPERM;
- }
-
- drvdata->addr_val[idx] = (u64)val;
- drvdata->addr_type[idx] = ETM_ADDR_TYPE_STOP;
- drvdata->vissctlr |= BIT(idx + 16);
- /* SSSTATUS, bit[9] - turn on start/stop logic */
- drvdata->vinst_ctrl |= BIT(9);
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(addr_stop);
-
-static ssize_t addr_ctxtype_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- ssize_t len;
- u8 idx, val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
- /* CONTEXTTYPE, bits[3:2] */
- val = BMVAL(drvdata->addr_acc[idx], 2, 3);
- len = scnprintf(buf, PAGE_SIZE, "%s\n", val == ETM_CTX_NONE ? "none" :
- (val == ETM_CTX_CTXID ? "ctxid" :
- (val == ETM_CTX_VMID ? "vmid" : "all")));
- spin_unlock(&drvdata->spinlock);
- return len;
-}
-
-static ssize_t addr_ctxtype_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- u8 idx;
- char str[10] = "";
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (strlen(buf) >= 10)
- return -EINVAL;
- if (sscanf(buf, "%s", str) != 1)
- return -EINVAL;
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
- if (!strcmp(str, "none"))
- /* start by clearing context type bits */
- drvdata->addr_acc[idx] &= ~(BIT(2) | BIT(3));
- else if (!strcmp(str, "ctxid")) {
- /* 0b01 The trace unit performs a Context ID */
- if (drvdata->numcidc) {
- drvdata->addr_acc[idx] |= BIT(2);
- drvdata->addr_acc[idx] &= ~BIT(3);
- }
- } else if (!strcmp(str, "vmid")) {
- /* 0b10 The trace unit performs a VMID */
- if (drvdata->numvmidc) {
- drvdata->addr_acc[idx] &= ~BIT(2);
- drvdata->addr_acc[idx] |= BIT(3);
- }
- } else if (!strcmp(str, "all")) {
- /*
- * 0b11 The trace unit performs a Context ID
- * comparison and a VMID
- */
- if (drvdata->numcidc)
- drvdata->addr_acc[idx] |= BIT(2);
- if (drvdata->numvmidc)
- drvdata->addr_acc[idx] |= BIT(3);
+ ret = -EINVAL;
}
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(addr_ctxtype);
-
-static ssize_t addr_context_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- u8 idx;
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
- /* context ID comparator bits[6:4] */
- val = BMVAL(drvdata->addr_acc[idx], 4, 6);
- spin_unlock(&drvdata->spinlock);
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t addr_context_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- u8 idx;
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
- if ((drvdata->numcidc <= 1) && (drvdata->numvmidc <= 1))
- return -EINVAL;
- if (val >= (drvdata->numcidc >= drvdata->numvmidc ?
- drvdata->numcidc : drvdata->numvmidc))
- return -EINVAL;
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->addr_idx;
- /* clear context ID comparator bits[6:4] */
- drvdata->addr_acc[idx] &= ~(BIT(4) | BIT(5) | BIT(6));
- drvdata->addr_acc[idx] |= (val << 4);
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(addr_context);
-
-static ssize_t seq_idx_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->seq_idx;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t seq_idx_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
- if (val >= drvdata->nrseqstate - 1)
- return -EINVAL;
-
- /*
- * Use spinlock to ensure index doesn't change while it gets
- * dereferenced multiple times within a spinlock block elsewhere.
- */
- spin_lock(&drvdata->spinlock);
- drvdata->seq_idx = val;
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(seq_idx);
-
-static ssize_t seq_state_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
- val = drvdata->seq_state;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t seq_state_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
- if (val >= drvdata->nrseqstate)
- return -EINVAL;
-
- drvdata->seq_state = val;
- return size;
-}
-static DEVICE_ATTR_RW(seq_state);
-
-static ssize_t seq_event_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- u8 idx;
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->seq_idx;
- val = drvdata->seq_ctrl[idx];
- spin_unlock(&drvdata->spinlock);
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t seq_event_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- u8 idx;
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->seq_idx;
- /* RST, bits[7:0] */
- drvdata->seq_ctrl[idx] = val & 0xFF;
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(seq_event);
-
-static ssize_t seq_reset_event_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->seq_rst;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t seq_reset_event_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
- if (!(drvdata->nrseqstate))
- return -EINVAL;
-
- drvdata->seq_rst = val & ETMv4_EVENT_MASK;
- return size;
-}
-static DEVICE_ATTR_RW(seq_reset_event);
-
-static ssize_t cntr_idx_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->cntr_idx;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t cntr_idx_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
- if (val >= drvdata->nr_cntr)
- return -EINVAL;
-
- /*
- * Use spinlock to ensure index doesn't change while it gets
- * dereferenced multiple times within a spinlock block elsewhere.
- */
- spin_lock(&drvdata->spinlock);
- drvdata->cntr_idx = val;
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(cntr_idx);
-
-static ssize_t cntrldvr_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- u8 idx;
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->cntr_idx;
- val = drvdata->cntrldvr[idx];
- spin_unlock(&drvdata->spinlock);
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t cntrldvr_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- u8 idx;
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
- if (val > ETM_CNTR_MAX_VAL)
- return -EINVAL;
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->cntr_idx;
- drvdata->cntrldvr[idx] = val;
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(cntrldvr);
-
-static ssize_t cntr_val_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- u8 idx;
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->cntr_idx;
- val = drvdata->cntr_val[idx];
- spin_unlock(&drvdata->spinlock);
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t cntr_val_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- u8 idx;
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
- if (val > ETM_CNTR_MAX_VAL)
- return -EINVAL;
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->cntr_idx;
- drvdata->cntr_val[idx] = val;
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(cntr_val);
-
-static ssize_t cntr_ctrl_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- u8 idx;
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->cntr_idx;
- val = drvdata->cntr_ctrl[idx];
- spin_unlock(&drvdata->spinlock);
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t cntr_ctrl_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- u8 idx;
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->cntr_idx;
- drvdata->cntr_ctrl[idx] = val;
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(cntr_ctrl);
-
-static ssize_t res_idx_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->res_idx;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t res_idx_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
- /* Resource selector pair 0 is always implemented and reserved */
- if (val < 2 || val >= drvdata->nr_resource * 2)
- return -EINVAL;
+ /* The tracer didn't start */
+ if (ret)
+ local_set(&drvdata->mode, CS_MODE_DISABLED);
- /*
- * Use spinlock to ensure index doesn't change while it gets
- * dereferenced multiple times within a spinlock block elsewhere.
- */
- spin_lock(&drvdata->spinlock);
- drvdata->res_idx = val;
- spin_unlock(&drvdata->spinlock);
- return size;
+ return ret;
}
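The local_cmpxchg()/local_set() pair above is the whole arbitration scheme between sysfs and perf: the first caller atomically moves the mode from CS_MODE_DISABLED to its own mode and thereby owns the tracer; everyone else gets -EBUSY until the mode is set back to disabled. Condensed into two hypothetical helpers for clarity (names not from the patch):

static int etm4_claim(struct etmv4_drvdata *drvdata, u32 new_mode)
{
	/* returns the old mode; non-zero means someone else owns the ETM */
	if (local_cmpxchg(&drvdata->mode, CS_MODE_DISABLED, new_mode))
		return -EBUSY;
	return 0;
}

static void etm4_release(struct etmv4_drvdata *drvdata)
{
	/* hand the tracer back after teardown or a failed enable */
	local_set(&drvdata->mode, CS_MODE_DISABLED);
}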
-static DEVICE_ATTR_RW(res_idx);
-static ssize_t res_ctrl_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
+static void etm4_disable_hw(void *info)
{
- u8 idx;
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ u32 control;
+ struct etmv4_drvdata *drvdata = info;
- spin_lock(&drvdata->spinlock);
- idx = drvdata->res_idx;
- val = drvdata->res_ctrl[idx];
- spin_unlock(&drvdata->spinlock);
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
+ CS_UNLOCK(drvdata->base);
-static ssize_t res_ctrl_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- u8 idx;
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ control = readl_relaxed(drvdata->base + TRCPRGCTLR);
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
+ /* EN, bit[0] Trace unit enable bit */
+ control &= ~0x1;
- spin_lock(&drvdata->spinlock);
- idx = drvdata->res_idx;
- /* For odd idx pair inversal bit is RES0 */
- if (idx % 2 != 0)
- /* PAIRINV, bit[21] */
- val &= ~BIT(21);
- drvdata->res_ctrl[idx] = val;
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(res_ctrl);
+ /* make sure everything completes before disabling */
+ mb();
+ isb();
+ writel_relaxed(control, drvdata->base + TRCPRGCTLR);
-static ssize_t ctxid_idx_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ CS_LOCK(drvdata->base);
- val = drvdata->ctxid_idx;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+ dev_dbg(drvdata->dev, "cpu: %d disable smp call done\n", drvdata->cpu);
}
-static ssize_t ctxid_idx_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
+static int etm4_disable_perf(struct coresight_device *csdev)
{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
- if (val >= drvdata->numcidc)
+ if (WARN_ON_ONCE(drvdata->cpu != smp_processor_id()))
return -EINVAL;
- /*
- * Use spinlock to ensure index doesn't change while it gets
- * dereferenced multiple times within a spinlock block elsewhere.
- */
- spin_lock(&drvdata->spinlock);
- drvdata->ctxid_idx = val;
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(ctxid_idx);
-
-static ssize_t ctxid_pid_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- u8 idx;
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->ctxid_idx;
- val = (unsigned long)drvdata->ctxid_vpid[idx];
- spin_unlock(&drvdata->spinlock);
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+ etm4_disable_hw(drvdata);
+ return 0;
}
-static ssize_t ctxid_pid_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
+static void etm4_disable_sysfs(struct coresight_device *csdev)
{
- u8 idx;
- unsigned long vpid, pid;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
/*
- * only implemented when ctxid tracing is enabled, i.e. at least one
- * ctxid comparator is implemented and ctxid is greater than 0 bits
- * in length
+ * Taking hotplug lock here protects from clocks getting disabled
+ * with tracing being left on (crash scenario) if user disable occurs
+ * after cpu online mask indicates the cpu is offline but before the
+ * DYING hotplug callback is serviced by the ETM driver.
*/
- if (!drvdata->ctxid_size || !drvdata->numcidc)
- return -EINVAL;
- if (kstrtoul(buf, 16, &vpid))
- return -EINVAL;
-
- pid = coresight_vpid_to_pid(vpid);
-
- spin_lock(&drvdata->spinlock);
- idx = drvdata->ctxid_idx;
- drvdata->ctxid_pid[idx] = (u64)pid;
- drvdata->ctxid_vpid[idx] = (u64)vpid;
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(ctxid_pid);
-
-static ssize_t ctxid_masks_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val1, val2;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
+ get_online_cpus();
spin_lock(&drvdata->spinlock);
- val1 = drvdata->ctxid_mask0;
- val2 = drvdata->ctxid_mask1;
- spin_unlock(&drvdata->spinlock);
- return scnprintf(buf, PAGE_SIZE, "%#lx %#lx\n", val1, val2);
-}
-static ssize_t ctxid_masks_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- u8 i, j, maskbyte;
- unsigned long val1, val2, mask;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- /*
- * only implemented when ctxid tracing is enabled, i.e. at least one
- * ctxid comparator is implemented and ctxid is greater than 0 bits
- * in length
- */
- if (!drvdata->ctxid_size || !drvdata->numcidc)
- return -EINVAL;
- if (sscanf(buf, "%lx %lx", &val1, &val2) != 2)
- return -EINVAL;
-
- spin_lock(&drvdata->spinlock);
- /*
- * each byte[0..3] controls mask value applied to ctxid
- * comparator[0..3]
- */
- switch (drvdata->numcidc) {
- case 0x1:
- /* COMP0, bits[7:0] */
- drvdata->ctxid_mask0 = val1 & 0xFF;
- break;
- case 0x2:
- /* COMP1, bits[15:8] */
- drvdata->ctxid_mask0 = val1 & 0xFFFF;
- break;
- case 0x3:
- /* COMP2, bits[23:16] */
- drvdata->ctxid_mask0 = val1 & 0xFFFFFF;
- break;
- case 0x4:
- /* COMP3, bits[31:24] */
- drvdata->ctxid_mask0 = val1;
- break;
- case 0x5:
- /* COMP4, bits[7:0] */
- drvdata->ctxid_mask0 = val1;
- drvdata->ctxid_mask1 = val2 & 0xFF;
- break;
- case 0x6:
- /* COMP5, bits[15:8] */
- drvdata->ctxid_mask0 = val1;
- drvdata->ctxid_mask1 = val2 & 0xFFFF;
- break;
- case 0x7:
- /* COMP6, bits[23:16] */
- drvdata->ctxid_mask0 = val1;
- drvdata->ctxid_mask1 = val2 & 0xFFFFFF;
- break;
- case 0x8:
- /* COMP7, bits[31:24] */
- drvdata->ctxid_mask0 = val1;
- drvdata->ctxid_mask1 = val2;
- break;
- default:
- break;
- }
/*
- * If software sets a mask bit to 1, it must program relevant byte
- * of ctxid comparator value 0x0, otherwise behavior is unpredictable.
- * For example, if bit[3] of ctxid_mask0 is 1, we must clear bits[31:24]
- * of ctxid comparator0 value (corresponding to byte 0) register.
+ * Executing etm4_disable_hw on the cpu whose ETM is being disabled
+ * ensures that register writes occur when cpu is powered.
*/
- mask = drvdata->ctxid_mask0;
- for (i = 0; i < drvdata->numcidc; i++) {
- /* mask value of corresponding ctxid comparator */
- maskbyte = mask & ETMv4_EVENT_MASK;
- /*
- * each bit corresponds to a byte of respective ctxid comparator
- * value register
- */
- for (j = 0; j < 8; j++) {
- if (maskbyte & 1)
- drvdata->ctxid_pid[i] &= ~(0xFF << (j * 8));
- maskbyte >>= 1;
- }
- /* Select the next ctxid comparator mask value */
- if (i == 3)
- /* ctxid comparators[4-7] */
- mask = drvdata->ctxid_mask1;
- else
- mask >>= 0x8;
- }
+ smp_call_function_single(drvdata->cpu, etm4_disable_hw, drvdata, 1);
spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(ctxid_masks);
-
-static ssize_t vmid_idx_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->vmid_idx;
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t vmid_idx_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
- if (val >= drvdata->numvmidc)
- return -EINVAL;
-
- /*
- * Use spinlock to ensure index doesn't change while it gets
- * dereferenced multiple times within a spinlock block elsewhere.
- */
- spin_lock(&drvdata->spinlock);
- drvdata->vmid_idx = val;
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(vmid_idx);
-
-static ssize_t vmid_val_show(struct device *dev,
- struct device_attribute *attr,
- char *buf)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = (unsigned long)drvdata->vmid_val[drvdata->vmid_idx];
- return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
-}
-
-static ssize_t vmid_val_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- unsigned long val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- /*
- * only implemented when vmid tracing is enabled, i.e. at least one
- * vmid comparator is implemented and at least 8 bit vmid size
- */
- if (!drvdata->vmid_size || !drvdata->numvmidc)
- return -EINVAL;
- if (kstrtoul(buf, 16, &val))
- return -EINVAL;
+ put_online_cpus();
- spin_lock(&drvdata->spinlock);
- drvdata->vmid_val[drvdata->vmid_idx] = (u64)val;
- spin_unlock(&drvdata->spinlock);
- return size;
+ dev_info(drvdata->dev, "ETM tracing disabled\n");
}
-static DEVICE_ATTR_RW(vmid_val);
-static ssize_t vmid_masks_show(struct device *dev,
- struct device_attribute *attr, char *buf)
+static void etm4_disable(struct coresight_device *csdev)
{
- unsigned long val1, val2;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- spin_lock(&drvdata->spinlock);
- val1 = drvdata->vmid_mask0;
- val2 = drvdata->vmid_mask1;
- spin_unlock(&drvdata->spinlock);
- return scnprintf(buf, PAGE_SIZE, "%#lx %#lx\n", val1, val2);
-}
+ u32 mode;
+ struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
-static ssize_t vmid_masks_store(struct device *dev,
- struct device_attribute *attr,
- const char *buf, size_t size)
-{
- u8 i, j, maskbyte;
- unsigned long val1, val2, mask;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
/*
- * only implemented when vmid tracing is enabled, i.e. at least one
- * vmid comparator is implemented and at least 8 bit vmid size
+ * For as long as the tracer isn't disabled another entity can't
+ * change its status. As such we can read the status here without
+ * fearing it will change under us.
*/
- if (!drvdata->vmid_size || !drvdata->numvmidc)
- return -EINVAL;
- if (sscanf(buf, "%lx %lx", &val1, &val2) != 2)
- return -EINVAL;
-
- spin_lock(&drvdata->spinlock);
+ mode = local_read(&drvdata->mode);
- /*
- * each byte[0..3] controls mask value applied to vmid
- * comparator[0..3]
- */
- switch (drvdata->numvmidc) {
- case 0x1:
- /* COMP0, bits[7:0] */
- drvdata->vmid_mask0 = val1 & 0xFF;
- break;
- case 0x2:
- /* COMP1, bits[15:8] */
- drvdata->vmid_mask0 = val1 & 0xFFFF;
- break;
- case 0x3:
- /* COMP2, bits[23:16] */
- drvdata->vmid_mask0 = val1 & 0xFFFFFF;
+ switch (mode) {
+ case CS_MODE_DISABLED:
break;
- case 0x4:
- /* COMP3, bits[31:24] */
- drvdata->vmid_mask0 = val1;
+ case CS_MODE_SYSFS:
+ etm4_disable_sysfs(csdev);
break;
- case 0x5:
- /* COMP4, bits[7:0] */
- drvdata->vmid_mask0 = val1;
- drvdata->vmid_mask1 = val2 & 0xFF;
- break;
- case 0x6:
- /* COMP5, bits[15:8] */
- drvdata->vmid_mask0 = val1;
- drvdata->vmid_mask1 = val2 & 0xFFFF;
- break;
- case 0x7:
- /* COMP6, bits[23:16] */
- drvdata->vmid_mask0 = val1;
- drvdata->vmid_mask1 = val2 & 0xFFFFFF;
- break;
- case 0x8:
- /* COMP7, bits[31:24] */
- drvdata->vmid_mask0 = val1;
- drvdata->vmid_mask1 = val2;
- break;
- default:
+ case CS_MODE_PERF:
+ etm4_disable_perf(csdev);
break;
}
- /*
- * If software sets a mask bit to 1, it must program relevant byte
- * of vmid comparator value 0x0, otherwise behavior is unpredictable.
- * For example, if bit[3] of vmid_mask0 is 1, we must clear bits[31:24]
- * of vmid comparator0 value (corresponding to byte 0) register.
- */
- mask = drvdata->vmid_mask0;
- for (i = 0; i < drvdata->numvmidc; i++) {
- /* mask value of corresponding vmid comparator */
- maskbyte = mask & ETMv4_EVENT_MASK;
- /*
- * each bit corresponds to a byte of respective vmid comparator
- * value register
- */
- for (j = 0; j < 8; j++) {
- if (maskbyte & 1)
- drvdata->vmid_val[i] &= ~(0xFF << (j * 8));
- maskbyte >>= 1;
- }
- /* Select the next vmid comparator mask value */
- if (i == 3)
- /* vmid comparators[4-7] */
- mask = drvdata->vmid_mask1;
- else
- mask >>= 0x8;
- }
- spin_unlock(&drvdata->spinlock);
- return size;
-}
-static DEVICE_ATTR_RW(vmid_masks);
-
-static ssize_t cpu_show(struct device *dev,
- struct device_attribute *attr, char *buf)
-{
- int val;
- struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent);
-
- val = drvdata->cpu;
- return scnprintf(buf, PAGE_SIZE, "%d\n", val);
-
+ if (mode)
+ local_set(&drvdata->mode, CS_MODE_DISABLED);
}
-static DEVICE_ATTR_RO(cpu);
-
-static struct attribute *coresight_etmv4_attrs[] = {
- &dev_attr_nr_pe_cmp.attr,
- &dev_attr_nr_addr_cmp.attr,
- &dev_attr_nr_cntr.attr,
- &dev_attr_nr_ext_inp.attr,
- &dev_attr_numcidc.attr,
- &dev_attr_numvmidc.attr,
- &dev_attr_nrseqstate.attr,
- &dev_attr_nr_resource.attr,
- &dev_attr_nr_ss_cmp.attr,
- &dev_attr_reset.attr,
- &dev_attr_mode.attr,
- &dev_attr_pe.attr,
- &dev_attr_event.attr,
- &dev_attr_event_instren.attr,
- &dev_attr_event_ts.attr,
- &dev_attr_syncfreq.attr,
- &dev_attr_cyc_threshold.attr,
- &dev_attr_bb_ctrl.attr,
- &dev_attr_event_vinst.attr,
- &dev_attr_s_exlevel_vinst.attr,
- &dev_attr_ns_exlevel_vinst.attr,
- &dev_attr_addr_idx.attr,
- &dev_attr_addr_instdatatype.attr,
- &dev_attr_addr_single.attr,
- &dev_attr_addr_range.attr,
- &dev_attr_addr_start.attr,
- &dev_attr_addr_stop.attr,
- &dev_attr_addr_ctxtype.attr,
- &dev_attr_addr_context.attr,
- &dev_attr_seq_idx.attr,
- &dev_attr_seq_state.attr,
- &dev_attr_seq_event.attr,
- &dev_attr_seq_reset_event.attr,
- &dev_attr_cntr_idx.attr,
- &dev_attr_cntrldvr.attr,
- &dev_attr_cntr_val.attr,
- &dev_attr_cntr_ctrl.attr,
- &dev_attr_res_idx.attr,
- &dev_attr_res_ctrl.attr,
- &dev_attr_ctxid_idx.attr,
- &dev_attr_ctxid_pid.attr,
- &dev_attr_ctxid_masks.attr,
- &dev_attr_vmid_idx.attr,
- &dev_attr_vmid_val.attr,
- &dev_attr_vmid_masks.attr,
- &dev_attr_cpu.attr,
- NULL,
-};
-
-#define coresight_simple_func(name, offset) \
-static ssize_t name##_show(struct device *_dev, \
- struct device_attribute *attr, char *buf) \
-{ \
- struct etmv4_drvdata *drvdata = dev_get_drvdata(_dev->parent); \
- return scnprintf(buf, PAGE_SIZE, "0x%x\n", \
- readl_relaxed(drvdata->base + offset)); \
-} \
-static DEVICE_ATTR_RO(name)
-
-coresight_simple_func(trcoslsr, TRCOSLSR);
-coresight_simple_func(trcpdcr, TRCPDCR);
-coresight_simple_func(trcpdsr, TRCPDSR);
-coresight_simple_func(trclsr, TRCLSR);
-coresight_simple_func(trcauthstatus, TRCAUTHSTATUS);
-coresight_simple_func(trcdevid, TRCDEVID);
-coresight_simple_func(trcdevtype, TRCDEVTYPE);
-coresight_simple_func(trcpidr0, TRCPIDR0);
-coresight_simple_func(trcpidr1, TRCPIDR1);
-coresight_simple_func(trcpidr2, TRCPIDR2);
-coresight_simple_func(trcpidr3, TRCPIDR3);
-
-static struct attribute *coresight_etmv4_mgmt_attrs[] = {
- &dev_attr_trcoslsr.attr,
- &dev_attr_trcpdcr.attr,
- &dev_attr_trcpdsr.attr,
- &dev_attr_trclsr.attr,
- &dev_attr_trcauthstatus.attr,
- &dev_attr_trcdevid.attr,
- &dev_attr_trcdevtype.attr,
- &dev_attr_trcpidr0.attr,
- &dev_attr_trcpidr1.attr,
- &dev_attr_trcpidr2.attr,
- &dev_attr_trcpidr3.attr,
- NULL,
-};
-coresight_simple_func(trcidr0, TRCIDR0);
-coresight_simple_func(trcidr1, TRCIDR1);
-coresight_simple_func(trcidr2, TRCIDR2);
-coresight_simple_func(trcidr3, TRCIDR3);
-coresight_simple_func(trcidr4, TRCIDR4);
-coresight_simple_func(trcidr5, TRCIDR5);
-/* trcidr[6,7] are reserved */
-coresight_simple_func(trcidr8, TRCIDR8);
-coresight_simple_func(trcidr9, TRCIDR9);
-coresight_simple_func(trcidr10, TRCIDR10);
-coresight_simple_func(trcidr11, TRCIDR11);
-coresight_simple_func(trcidr12, TRCIDR12);
-coresight_simple_func(trcidr13, TRCIDR13);
-
-static struct attribute *coresight_etmv4_trcidr_attrs[] = {
- &dev_attr_trcidr0.attr,
- &dev_attr_trcidr1.attr,
- &dev_attr_trcidr2.attr,
- &dev_attr_trcidr3.attr,
- &dev_attr_trcidr4.attr,
- &dev_attr_trcidr5.attr,
- /* trcidr[6,7] are reserved */
- &dev_attr_trcidr8.attr,
- &dev_attr_trcidr9.attr,
- &dev_attr_trcidr10.attr,
- &dev_attr_trcidr11.attr,
- &dev_attr_trcidr12.attr,
- &dev_attr_trcidr13.attr,
- NULL,
-};
-
-static const struct attribute_group coresight_etmv4_group = {
- .attrs = coresight_etmv4_attrs,
-};
-
-static const struct attribute_group coresight_etmv4_mgmt_group = {
- .attrs = coresight_etmv4_mgmt_attrs,
- .name = "mgmt",
-};
-
-static const struct attribute_group coresight_etmv4_trcidr_group = {
- .attrs = coresight_etmv4_trcidr_attrs,
- .name = "trcidr",
+static const struct coresight_ops_source etm4_source_ops = {
+ .cpu_id = etm4_cpu_id,
+ .trace_id = etm4_trace_id,
+ .enable = etm4_enable,
+ .disable = etm4_disable,
};
-static const struct attribute_group *coresight_etmv4_groups[] = {
- &coresight_etmv4_group,
- &coresight_etmv4_mgmt_group,
- &coresight_etmv4_trcidr_group,
- NULL,
+static const struct coresight_ops etm4_cs_ops = {
+ .source_ops = &etm4_source_ops,
};
static void etm4_init_arch_data(void *info)
@@ -2313,6 +408,9 @@ static void etm4_init_arch_data(void *info)
u32 etmidr5;
struct etmv4_drvdata *drvdata = info;
+ /* Make sure all registers are accessible */
+ etm4_os_unlock(drvdata);
+
CS_UNLOCK(drvdata->base);
/* find all capabilities of the tracing unit */
@@ -2464,93 +562,115 @@ static void etm4_init_arch_data(void *info)
CS_LOCK(drvdata->base);
}
-static void etm4_init_default_data(struct etmv4_drvdata *drvdata)
+static void etm4_set_default(struct etmv4_config *config)
{
- int i;
+ if (WARN_ON_ONCE(!config))
+ return;
- drvdata->pe_sel = 0x0;
- drvdata->cfg = (ETMv4_MODE_CTXID | ETM_MODE_VMID |
- ETMv4_MODE_TIMESTAMP | ETM_MODE_RETURNSTACK);
+ /*
+ * Make default initialisation trace everything
+ *
+ * Select the "always true" resource selector on the
+	 * "Enabling Event" line and configure address range comparator
+	 * '0' to trace the entire address range. From there
+ * configure the "include/exclude" engine to include address
+ * range comparator '0'.
+ */
/* disable all events tracing */
- drvdata->eventctrl0 = 0x0;
- drvdata->eventctrl1 = 0x0;
+ config->eventctrl0 = 0x0;
+ config->eventctrl1 = 0x0;
/* disable stalling */
- drvdata->stall_ctrl = 0x0;
+ config->stall_ctrl = 0x0;
+
+ /* enable trace synchronization every 4096 bytes, if available */
+ config->syncfreq = 0xC;
/* disable timestamp event */
- drvdata->ts_ctrl = 0x0;
+ config->ts_ctrl = 0x0;
- /* enable trace synchronization every 4096 bytes for trace */
- if (drvdata->syncpr == false)
- drvdata->syncfreq = 0xC;
+ /* TRCVICTLR::EVENT = 0x01, select the always on logic */
+ config->vinst_ctrl |= BIT(0);
/*
- * enable viewInst to trace everything with start-stop logic in
- * started state
+ * TRCVICTLR::SSSTATUS == 1, the start-stop logic is
+ * in the started state
*/
- drvdata->vinst_ctrl |= BIT(0);
- /* set initial state of start-stop logic */
- if (drvdata->nr_addr_cmp)
- drvdata->vinst_ctrl |= BIT(9);
+ config->vinst_ctrl |= BIT(9);
- /* no address range filtering for ViewInst */
- drvdata->viiectlr = 0x0;
- /* no start-stop filtering for ViewInst */
- drvdata->vissctlr = 0x0;
+ /*
+ * Configure address range comparator '0' to encompass all
+ * possible addresses.
+ */
- /* disable seq events */
- for (i = 0; i < drvdata->nrseqstate-1; i++)
- drvdata->seq_ctrl[i] = 0x0;
- drvdata->seq_rst = 0x0;
- drvdata->seq_state = 0x0;
+ /* First half of default address comparator: start at address 0 */
+ config->addr_val[ETM_DEFAULT_ADDR_COMP] = 0x0;
+ /* trace instruction addresses */
+ config->addr_acc[ETM_DEFAULT_ADDR_COMP] &= ~(BIT(0) | BIT(1));
+	/* EXLEVEL_NS, bits[15:12], only trace application and kernel space */
+ config->addr_acc[ETM_DEFAULT_ADDR_COMP] |= ETM_EXLEVEL_NS_HYP;
+ /* EXLEVEL_S, bits[11:8], don't trace anything in secure state */
+ config->addr_acc[ETM_DEFAULT_ADDR_COMP] |= (ETM_EXLEVEL_S_APP |
+ ETM_EXLEVEL_S_OS |
+ ETM_EXLEVEL_S_HYP);
+ config->addr_type[ETM_DEFAULT_ADDR_COMP] = ETM_ADDR_TYPE_RANGE;
- /* disable external input events */
- drvdata->ext_inp = 0x0;
+ /*
+ * Second half of default address comparator: go all
+ * the way to the top.
+ */
+ config->addr_val[ETM_DEFAULT_ADDR_COMP + 1] = ~0x0;
+ /* trace instruction addresses */
+ config->addr_acc[ETM_DEFAULT_ADDR_COMP + 1] &= ~(BIT(0) | BIT(1));
+ /* Address comparator type must be equal for both halves */
+ config->addr_acc[ETM_DEFAULT_ADDR_COMP + 1] =
+ config->addr_acc[ETM_DEFAULT_ADDR_COMP];
+ config->addr_type[ETM_DEFAULT_ADDR_COMP + 1] = ETM_ADDR_TYPE_RANGE;
- for (i = 0; i < drvdata->nr_cntr; i++) {
- drvdata->cntrldvr[i] = 0x0;
- drvdata->cntr_ctrl[i] = 0x0;
- drvdata->cntr_val[i] = 0x0;
- }
+ /*
+ * Configure the ViewInst function to filter on address range
+ * comparator '0'.
+ */
+ config->viiectlr = BIT(0);
- /* Resource selector pair 0 is always implemented and reserved */
- drvdata->res_idx = 0x2;
- for (i = 2; i < drvdata->nr_resource * 2; i++)
- drvdata->res_ctrl[i] = 0x0;
+ /* no start-stop filtering for ViewInst */
+ config->vissctlr = 0x0;
+}
- for (i = 0; i < drvdata->nr_ss_cmp; i++) {
- drvdata->ss_ctrl[i] = 0x0;
- drvdata->ss_pe_cmp[i] = 0x0;
- }
+void etm4_config_trace_mode(struct etmv4_config *config)
+{
+ u32 addr_acc, mode;
- if (drvdata->nr_addr_cmp >= 1) {
- drvdata->addr_val[0] = (unsigned long)_stext;
- drvdata->addr_val[1] = (unsigned long)_etext;
- drvdata->addr_type[0] = ETM_ADDR_TYPE_RANGE;
- drvdata->addr_type[1] = ETM_ADDR_TYPE_RANGE;
- }
+ mode = config->mode;
+ mode &= (ETM_MODE_EXCL_KERN | ETM_MODE_EXCL_USER);
- for (i = 0; i < drvdata->numcidc; i++) {
- drvdata->ctxid_pid[i] = 0x0;
- drvdata->ctxid_vpid[i] = 0x0;
- }
+ /* excluding kernel AND user space doesn't make sense */
+ WARN_ON_ONCE(mode == (ETM_MODE_EXCL_KERN | ETM_MODE_EXCL_USER));
- drvdata->ctxid_mask0 = 0x0;
- drvdata->ctxid_mask1 = 0x0;
+	/* nothing to do if neither flag is set */
+ if (!(mode & ETM_MODE_EXCL_KERN) && !(mode & ETM_MODE_EXCL_USER))
+ return;
- for (i = 0; i < drvdata->numvmidc; i++)
- drvdata->vmid_val[i] = 0x0;
- drvdata->vmid_mask0 = 0x0;
- drvdata->vmid_mask1 = 0x0;
+ addr_acc = config->addr_acc[ETM_DEFAULT_ADDR_COMP];
+ /* clear default config */
+ addr_acc &= ~(ETM_EXLEVEL_NS_APP | ETM_EXLEVEL_NS_OS);
/*
- * A trace ID value of 0 is invalid, so let's start at some
- * random value that fits in 7 bits. ETMv3.x has 0x10 so let's
- * start at 0x20.
+ * EXLEVEL_NS, bits[15:12]
+ * The Exception levels are:
+ * Bit[12] Exception level 0 - Application
+ * Bit[13] Exception level 1 - OS
+ * Bit[14] Exception level 2 - Hypervisor
+ * Bit[15] Never implemented
*/
- drvdata->trcid = 0x20 + drvdata->cpu;
+ if (mode & ETM_MODE_EXCL_KERN)
+ addr_acc |= ETM_EXLEVEL_NS_OS;
+ else
+ addr_acc |= ETM_EXLEVEL_NS_APP;
+
+ config->addr_acc[ETM_DEFAULT_ADDR_COMP] = addr_acc;
+ config->addr_acc[ETM_DEFAULT_ADDR_COMP + 1] = addr_acc;
}
static int etm4_cpu_callback(struct notifier_block *nfb, unsigned long action,
@@ -2569,7 +689,7 @@ static int etm4_cpu_callback(struct notifier_block *nfb, unsigned long action,
etmdrvdata[cpu]->os_unlock = true;
}
- if (etmdrvdata[cpu]->enable)
+ if (local_read(&etmdrvdata[cpu]->mode))
etm4_enable_hw(etmdrvdata[cpu]);
spin_unlock(&etmdrvdata[cpu]->spinlock);
break;
@@ -2582,7 +702,7 @@ static int etm4_cpu_callback(struct notifier_block *nfb, unsigned long action,
case CPU_DYING:
spin_lock(&etmdrvdata[cpu]->spinlock);
- if (etmdrvdata[cpu]->enable)
+ if (local_read(&etmdrvdata[cpu]->mode))
etm4_disable_hw(etmdrvdata[cpu]);
spin_unlock(&etmdrvdata[cpu]->spinlock);
break;
@@ -2595,6 +715,11 @@ static struct notifier_block etm4_cpu_notifier = {
.notifier_call = etm4_cpu_callback,
};
+static void etm4_init_trace_id(struct etmv4_drvdata *drvdata)
+{
+ drvdata->trcid = coresight_get_trace_id(drvdata->cpu);
+}
+
static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
{
int ret;
@@ -2638,9 +763,6 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
get_online_cpus();
etmdrvdata[drvdata->cpu] = drvdata;
- if (!smp_call_function_single(drvdata->cpu, etm4_os_unlock, drvdata, 1))
- drvdata->os_unlock = true;
-
if (smp_call_function_single(drvdata->cpu,
etm4_init_arch_data, drvdata, 1))
dev_err(dev, "ETM arch init failed\n");
@@ -2654,9 +776,9 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
ret = -EINVAL;
goto err_arch_supported;
}
- etm4_init_default_data(drvdata);
- pm_runtime_put(&adev->dev);
+ etm4_init_trace_id(drvdata);
+ etm4_set_default(&drvdata->config);
desc->type = CORESIGHT_DEV_TYPE_SOURCE;
desc->subtype.source_subtype = CORESIGHT_DEV_SUBTYPE_SOURCE_PROC;
@@ -2667,9 +789,16 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
drvdata->csdev = coresight_register(desc);
if (IS_ERR(drvdata->csdev)) {
ret = PTR_ERR(drvdata->csdev);
- goto err_coresight_register;
+ goto err_arch_supported;
}
+ ret = etm_perf_symlink(drvdata->csdev, true);
+ if (ret) {
+ coresight_unregister(drvdata->csdev);
+ goto err_arch_supported;
+ }
+
+ pm_runtime_put(&adev->dev);
dev_info(dev, "%s initialized\n", (char *)id->data);
if (boot_enable) {
@@ -2680,8 +809,6 @@ static int etm4_probe(struct amba_device *adev, const struct amba_id *id)
return 0;
err_arch_supported:
- pm_runtime_put(&adev->dev);
-err_coresight_register:
if (--etm4_count == 0)
unregister_hotcpu_notifier(&etm4_cpu_notifier);
return ret;
@@ -2698,6 +825,11 @@ static struct amba_id etm4_ids[] = {
.mask = 0x000fffff,
.data = "ETM 4.0",
},
+ { /* ETM 4.0 - A72, Maia, HiSilicon */
+ .id = 0x000bb95a,
+ .mask = 0x000fffff,
+ .data = "ETM 4.0",
+ },
{ 0, 0},
};
diff --git a/drivers/hwtracing/coresight/coresight-etm4x.h b/drivers/hwtracing/coresight/coresight-etm4x.h
index c34100205ca9..5359c5197c1d 100644
--- a/drivers/hwtracing/coresight/coresight-etm4x.h
+++ b/drivers/hwtracing/coresight/coresight-etm4x.h
@@ -13,6 +13,7 @@
#ifndef _CORESIGHT_CORESIGHT_ETM_H
#define _CORESIGHT_CORESIGHT_ETM_H
+#include <asm/local.h>
#include <linux/spinlock.h>
#include "coresight-priv.h"
@@ -175,71 +176,38 @@
#define ETM_MODE_TRACE_RESET BIT(25)
#define ETM_MODE_TRACE_ERR BIT(26)
#define ETM_MODE_VIEWINST_STARTSTOP BIT(27)
-#define ETMv4_MODE_ALL 0xFFFFFFF
+#define ETMv4_MODE_ALL (GENMASK(27, 0) | \
+ ETM_MODE_EXCL_KERN | \
+ ETM_MODE_EXCL_USER)
#define TRCSTATR_IDLE_BIT 0
+#define ETM_DEFAULT_ADDR_COMP 0
+
+/* secure state access levels */
+#define ETM_EXLEVEL_S_APP BIT(8)
+#define ETM_EXLEVEL_S_OS BIT(9)
+#define ETM_EXLEVEL_S_NA BIT(10)
+#define ETM_EXLEVEL_S_HYP BIT(11)
+/* non-secure state access levels */
+#define ETM_EXLEVEL_NS_APP BIT(12)
+#define ETM_EXLEVEL_NS_OS BIT(13)
+#define ETM_EXLEVEL_NS_HYP BIT(14)
+#define ETM_EXLEVEL_NS_NA BIT(15)
/**
- * struct etm4_drvdata - specifics associated to an ETM component
- * @base: Memory mapped base address for this component.
- * @dev: The device entity associated to this component.
- * @csdev: Component vitals needed by the framework.
- * @spinlock: Only one at a time pls.
- * @cpu: The cpu this component is affined to.
- * @arch: ETM version number.
- * @enable: Is this ETM currently tracing.
- * @sticky_enable: true if ETM base configuration has been done.
- * @boot_enable:True if we should start tracing at boot time.
- * @os_unlock: True if access to management registers is allowed.
- * @nr_pe: The number of processing entity available for tracing.
- * @nr_pe_cmp: The number of processing entity comparator inputs that are
- * available for tracing.
- * @nr_addr_cmp:Number of pairs of address comparators available
- * as found in ETMIDR4 0-3.
- * @nr_cntr: Number of counters as found in ETMIDR5 bit 28-30.
- * @nr_ext_inp: Number of external input.
- * @numcidc: Number of contextID comparators.
- * @numvmidc: Number of VMID comparators.
- * @nrseqstate: The number of sequencer states that are implemented.
- * @nr_event: Indicates how many events the trace unit support.
- * @nr_resource:The number of resource selection pairs available for tracing.
- * @nr_ss_cmp: Number of single-shot comparator controls that are available.
+ * struct etmv4_config - configuration information related to an ETMv4
* @mode: Controls various modes supported by this ETM.
- * @trcid: value of the current ID for this component.
- * @trcid_size: Indicates the trace ID width.
- * @instrp0: Tracing of load and store instructions
- * as P0 elements is supported.
- * @trccond: If the trace unit supports conditional
- * instruction tracing.
- * @retstack: Indicates if the implementation supports a return stack.
- * @trc_error: Whether a trace unit can trace a system
- * error exception.
- * @atbtrig: If the implementation can support ATB triggers
- * @lpoverride: If the implementation can support low-power state over.
* @pe_sel: Controls which PE to trace.
* @cfg: Controls the tracing options.
* @eventctrl0: Controls the tracing of arbitrary events.
* @eventctrl1: Controls the behavior of the events that @event_ctrl0 selects.
* @stallctl: If functionality that prevents trace unit buffer overflows
* is available.
- * @sysstall: Does the system support stall control of the PE?
- * @nooverflow: Indicate if overflow prevention is supported.
- * @stall_ctrl: Enables trace unit functionality that prevents trace
- * unit buffer overflows.
- * @ts_size: Global timestamp size field.
* @ts_ctrl: Controls the insertion of global timestamps in the
* trace streams.
- * @syncpr: Indicates if an implementation has a fixed
- * synchronization period.
* @syncfreq: Controls how often trace synchronization requests occur.
- * @trccci: Indicates if the trace unit supports cycle counting
- * for instruction.
- * @ccsize: Indicates the size of the cycle counter in bits.
- * @ccitmin: minimum value that can be programmed in
* the TRCCCCTLR register.
* @ccctlr: Sets the threshold value for cycle counting.
- * @trcbb: Indicates if the trace unit supports branch broadcast tracing.
- * @q_support: Q element support characteristics.
* @vinst_ctrl: Controls instruction trace filtering.
* @viiectlr: Set or read, the address range comparators.
* @vissctlr: Set, or read, the single address comparators that control the
@@ -264,73 +232,28 @@
* @addr_acc: Address comparator access type.
* @addr_type: Current status of the comparator register.
* @ctxid_idx: Context ID index selector.
- * @ctxid_size: Size of the context ID field to consider.
* @ctxid_pid: Value of the context ID comparator.
* @ctxid_vpid: Virtual PID seen by users if PID namespace is enabled, otherwise
* the same value of ctxid_pid.
* @ctxid_mask0:Context ID comparator mask for comparator 0-3.
* @ctxid_mask1:Context ID comparator mask for comparator 4-7.
* @vmid_idx: VM ID index selector.
- * @vmid_size: Size of the VM ID comparator to consider.
* @vmid_val: Value of the VM ID comparator.
* @vmid_mask0: VM ID comparator mask for comparator 0-3.
* @vmid_mask1: VM ID comparator mask for comparator 4-7.
- * @s_ex_level: In secure state, indicates whether instruction tracing is
- * supported for the corresponding Exception level.
- * @ns_ex_level:In non-secure state, indicates whether instruction tracing is
- * supported for the corresponding Exception level.
* @ext_inp: External input selection.
*/
-struct etmv4_drvdata {
- void __iomem *base;
- struct device *dev;
- struct coresight_device *csdev;
- spinlock_t spinlock;
- int cpu;
- u8 arch;
- bool enable;
- bool sticky_enable;
- bool boot_enable;
- bool os_unlock;
- u8 nr_pe;
- u8 nr_pe_cmp;
- u8 nr_addr_cmp;
- u8 nr_cntr;
- u8 nr_ext_inp;
- u8 numcidc;
- u8 numvmidc;
- u8 nrseqstate;
- u8 nr_event;
- u8 nr_resource;
- u8 nr_ss_cmp;
+struct etmv4_config {
u32 mode;
- u8 trcid;
- u8 trcid_size;
- bool instrp0;
- bool trccond;
- bool retstack;
- bool trc_error;
- bool atbtrig;
- bool lpoverride;
u32 pe_sel;
u32 cfg;
u32 eventctrl0;
u32 eventctrl1;
- bool stallctl;
- bool sysstall;
- bool nooverflow;
u32 stall_ctrl;
- u8 ts_size;
u32 ts_ctrl;
- bool syncpr;
u32 syncfreq;
- bool trccci;
- u8 ccsize;
- u8 ccitmin;
u32 ccctlr;
- bool trcbb;
u32 bb_ctrl;
- bool q_support;
u32 vinst_ctrl;
u32 viiectlr;
u32 vissctlr;
@@ -353,19 +276,119 @@ struct etmv4_drvdata {
u64 addr_acc[ETM_MAX_SINGLE_ADDR_CMP];
u8 addr_type[ETM_MAX_SINGLE_ADDR_CMP];
u8 ctxid_idx;
- u8 ctxid_size;
u64 ctxid_pid[ETMv4_MAX_CTXID_CMP];
u64 ctxid_vpid[ETMv4_MAX_CTXID_CMP];
u32 ctxid_mask0;
u32 ctxid_mask1;
u8 vmid_idx;
- u8 vmid_size;
u64 vmid_val[ETM_MAX_VMID_CMP];
u32 vmid_mask0;
u32 vmid_mask1;
+ u32 ext_inp;
+};
+
+/**
+ * struct etm4_drvdata - specifics associated to an ETM component
+ * @base: Memory mapped base address for this component.
+ * @dev: The device entity associated to this component.
+ * @csdev: Component vitals needed by the framework.
+ * @spinlock: Only one at a time pls.
+ * @mode:	This tracer's mode, i.e. sysFS, Perf or disabled.
+ * @cpu: The cpu this component is affined to.
+ * @arch: ETM version number.
+ * @nr_pe: The number of processing entity available for tracing.
+ * @nr_pe_cmp: The number of processing entity comparator inputs that are
+ * available for tracing.
+ * @nr_addr_cmp:Number of pairs of address comparators available
+ * as found in ETMIDR4 0-3.
+ * @nr_cntr: Number of counters as found in ETMIDR5 bit 28-30.
+ * @nr_ext_inp: Number of external input.
+ * @numcidc: Number of contextID comparators.
+ * @numvmidc: Number of VMID comparators.
+ * @nrseqstate: The number of sequencer states that are implemented.
+ * @nr_event: Indicates how many events the trace unit support.
+ * @nr_resource:The number of resource selection pairs available for tracing.
+ * @nr_ss_cmp: Number of single-shot comparator controls that are available.
+ * @trcid: value of the current ID for this component.
+ * @trcid_size: Indicates the trace ID width.
+ * @ts_size: Global timestamp size field.
+ * @ctxid_size: Size of the context ID field to consider.
+ * @vmid_size: Size of the VM ID comparator to consider.
+ * @ccsize: Indicates the size of the cycle counter in bits.
+ * @ccitmin:	minimum value that can be programmed in
+ *		the TRCCCCTLR register.
+ * @s_ex_level: In secure state, indicates whether instruction tracing is
+ * supported for the corresponding Exception level.
+ * @ns_ex_level:In non-secure state, indicates whether instruction tracing is
+ * supported for the corresponding Exception level.
+ * @sticky_enable: true if ETM base configuration has been done.
+ * @boot_enable:True if we should start tracing at boot time.
+ * @os_unlock: True if access to management registers is allowed.
+ * @instrp0: Tracing of load and store instructions
+ * as P0 elements is supported.
+ * @trcbb: Indicates if the trace unit supports branch broadcast tracing.
+ * @trccond: If the trace unit supports conditional
+ * instruction tracing.
+ * @retstack: Indicates if the implementation supports a return stack.
+ * @trccci: Indicates if the trace unit supports cycle counting
+ * for instruction.
+ * @q_support: Q element support characteristics.
+ * @trc_error: Whether a trace unit can trace a system
+ * error exception.
+ * @syncpr: Indicates if an implementation has a fixed
+ * synchronization period.
+ * @stallctl:	If functionality that prevents trace unit buffer
+ *		overflows is available.
+ * @sysstall: Does the system support stall control of the PE?
+ * @nooverflow: Indicate if overflow prevention is supported.
+ * @atbtrig: If the implementation can support ATB triggers
+ * @lpoverride:	If the implementation can support low-power state override.
+ * @config: structure holding configuration parameters.
+ */
+struct etmv4_drvdata {
+ void __iomem *base;
+ struct device *dev;
+ struct coresight_device *csdev;
+ spinlock_t spinlock;
+ local_t mode;
+ int cpu;
+ u8 arch;
+ u8 nr_pe;
+ u8 nr_pe_cmp;
+ u8 nr_addr_cmp;
+ u8 nr_cntr;
+ u8 nr_ext_inp;
+ u8 numcidc;
+ u8 numvmidc;
+ u8 nrseqstate;
+ u8 nr_event;
+ u8 nr_resource;
+ u8 nr_ss_cmp;
+ u8 trcid;
+ u8 trcid_size;
+ u8 ts_size;
+ u8 ctxid_size;
+ u8 vmid_size;
+ u8 ccsize;
+ u8 ccitmin;
u8 s_ex_level;
u8 ns_ex_level;
- u32 ext_inp;
+ u8 q_support;
+ bool sticky_enable;
+ bool boot_enable;
+ bool os_unlock;
+ bool instrp0;
+ bool trcbb;
+ bool trccond;
+ bool retstack;
+ bool trccci;
+ bool trc_error;
+ bool syncpr;
+ bool stallctl;
+ bool sysstall;
+ bool nooverflow;
+ bool atbtrig;
+ bool lpoverride;
+ struct etmv4_config config;
};
/* Address comparator access types */
@@ -391,4 +414,7 @@ enum etm_addr_type {
ETM_ADDR_TYPE_START,
ETM_ADDR_TYPE_STOP,
};
+
+extern const struct attribute_group *coresight_etmv4_groups[];
+void etm4_config_trace_mode(struct etmv4_config *config);
#endif
diff --git a/drivers/hwtracing/coresight/coresight-funnel.c b/drivers/hwtracing/coresight/coresight-funnel.c
index 0600ca30649d..05df789056cc 100644
--- a/drivers/hwtracing/coresight/coresight-funnel.c
+++ b/drivers/hwtracing/coresight/coresight-funnel.c
@@ -221,7 +221,6 @@ static int funnel_probe(struct amba_device *adev, const struct amba_id *id)
if (IS_ERR(drvdata->csdev))
return PTR_ERR(drvdata->csdev);
- dev_info(dev, "FUNNEL initialized\n");
return 0;
}
diff --git a/drivers/hwtracing/coresight/coresight-priv.h b/drivers/hwtracing/coresight/coresight-priv.h
index 333eddaed339..ad975c58080d 100644
--- a/drivers/hwtracing/coresight/coresight-priv.h
+++ b/drivers/hwtracing/coresight/coresight-priv.h
@@ -37,12 +37,42 @@
#define ETM_MODE_EXCL_KERN BIT(30)
#define ETM_MODE_EXCL_USER BIT(31)
+#define coresight_simple_func(type, name, offset) \
+static ssize_t name##_show(struct device *_dev, \
+ struct device_attribute *attr, char *buf) \
+{ \
+ type *drvdata = dev_get_drvdata(_dev->parent); \
+ return scnprintf(buf, PAGE_SIZE, "0x%x\n", \
+ readl_relaxed(drvdata->base + offset)); \
+} \
+static DEVICE_ATTR_RO(name)
+
enum cs_mode {
CS_MODE_DISABLED,
CS_MODE_SYSFS,
CS_MODE_PERF,
};
+/**
+ * struct cs_buffers - keep track of a recording session's specifics
+ * @cur: index of the current buffer
+ * @nr_pages: max number of pages granted to us
+ * @offset: offset within the current buffer
+ * @data_size: how much we collected in this run
+ * @lost:	non-zero if we had a HW buffer wrap around
+ * @snapshot:	is this run in snapshot mode
+ * @data_pages:	a handle to the ring buffer
+ */
+struct cs_buffers {
+ unsigned int cur;
+ unsigned int nr_pages;
+ unsigned long offset;
+ local_t data_size;
+ local_t lost;
+ bool snapshot;
+ void **data_pages;
+};
+
static inline void CS_LOCK(void __iomem *addr)
{
do {
diff --git a/drivers/hwtracing/coresight/coresight-replicator.c b/drivers/hwtracing/coresight/coresight-replicator.c
index 4299c0569340..c6982e312e15 100644
--- a/drivers/hwtracing/coresight/coresight-replicator.c
+++ b/drivers/hwtracing/coresight/coresight-replicator.c
@@ -114,7 +114,6 @@ static int replicator_probe(struct platform_device *pdev)
pm_runtime_put(&pdev->dev);
- dev_info(dev, "REPLICATOR initialized\n");
return 0;
out_disable_pm:
diff --git a/drivers/hwtracing/coresight/coresight-stm.c b/drivers/hwtracing/coresight/coresight-stm.c
new file mode 100644
index 000000000000..73be58a11e4f
--- /dev/null
+++ b/drivers/hwtracing/coresight/coresight-stm.c
@@ -0,0 +1,920 @@
+/* Copyright (c) 2015-2016, The Linux Foundation. All rights reserved.
+ *
+ * Description: CoreSight System Trace Macrocell driver
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 and
+ * only version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * Initial implementation by Pratik Patel
+ * (C) 2014-2015 Pratik Patel <pratikp@codeaurora.org>
+ *
+ * Serious refactoring, code cleanup and upgrading to the Coresight upstream
+ * framework by Mathieu Poirier
+ * (C) 2015-2016 Mathieu Poirier <mathieu.poirier@linaro.org>
+ *
+ * Guaranteed timing and support for various packet type coming from the
+ * generic STM API by Chunyan Zhang
+ * (C) 2015-2016 Chunyan Zhang <zhang.chunyan@linaro.org>
+ */
+#include <asm/local.h>
+#include <linux/amba/bus.h>
+#include <linux/bitmap.h>
+#include <linux/clk.h>
+#include <linux/coresight.h>
+#include <linux/coresight-stm.h>
+#include <linux/err.h>
+#include <linux/kernel.h>
+#include <linux/moduleparam.h>
+#include <linux/of_address.h>
+#include <linux/perf_event.h>
+#include <linux/pm_runtime.h>
+#include <linux/stm.h>
+
+#include "coresight-priv.h"
+
+#define STMDMASTARTR 0xc04
+#define STMDMASTOPR 0xc08
+#define STMDMASTATR 0xc0c
+#define STMDMACTLR 0xc10
+#define STMDMAIDR 0xcfc
+#define STMHEER 0xd00
+#define STMHETER 0xd20
+#define STMHEBSR 0xd60
+#define STMHEMCR 0xd64
+#define STMHEMASTR 0xdf4
+#define STMHEFEAT1R 0xdf8
+#define STMHEIDR 0xdfc
+#define STMSPER 0xe00
+#define STMSPTER 0xe20
+#define STMPRIVMASKR 0xe40
+#define STMSPSCR 0xe60
+#define STMSPMSCR 0xe64
+#define STMSPOVERRIDER 0xe68
+#define STMSPMOVERRIDER 0xe6c
+#define STMSPTRIGCSR 0xe70
+#define STMTCSR 0xe80
+#define STMTSSTIMR 0xe84
+#define STMTSFREQR 0xe8c
+#define STMSYNCR 0xe90
+#define STMAUXCR 0xe94
+#define STMSPFEAT1R 0xea0
+#define STMSPFEAT2R 0xea4
+#define STMSPFEAT3R 0xea8
+#define STMITTRIGGER 0xee8
+#define STMITATBDATA0 0xeec
+#define STMITATBCTR2 0xef0
+#define STMITATBID 0xef4
+#define STMITATBCTR0 0xef8
+
+#define STM_32_CHANNEL 32
+#define BYTES_PER_CHANNEL 256
+#define STM_TRACE_BUF_SIZE 4096
+#define STM_SW_MASTER_END 127
+
+/* Register bit definition */
+#define STMTCSR_BUSY_BIT 23
+/* Reserve the first 10 channels for kernel usage */
+#define STM_CHANNEL_OFFSET 0
+
+enum stm_pkt_type {
+ STM_PKT_TYPE_DATA = 0x98,
+ STM_PKT_TYPE_FLAG = 0xE8,
+ STM_PKT_TYPE_TRIG = 0xF8,
+};
+
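+/*
+ * Writes to an extended stimulus port are decoded by address: each port
+ * spans BYTES_PER_CHANNEL bytes and the offset within it selects the
+ * packet type, with any requested option bits cleared from the base type.
+ */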
+#define stm_channel_addr(drvdata, ch) (drvdata->chs.base + \
+ (ch * BYTES_PER_CHANNEL))
+#define stm_channel_off(type, opts) (type & ~opts)
+
+static int boot_nr_channel;
+
+/*
+ * Not really modular but using module_param is the easiest way to
+ * remain consistent with existing use cases for now.
+ */
+module_param_named(
+ boot_nr_channel, boot_nr_channel, int, S_IRUGO
+);
+
+/**
+ * struct channel_space - central management entity for extended ports
+ * @base: memory mapped base address where channels start.
+ * @guaranteed:	is the channel delivery guaranteed.
+ */
+struct channel_space {
+ void __iomem *base;
+ unsigned long *guaranteed;
+};
+
+/**
+ * struct stm_drvdata - specifics associated to an STM component
+ * @base: memory mapped base address for this component.
+ * @dev: the device entity associated to this component.
+ * @atclk: optional clock for the core parts of the STM.
+ * @csdev: component vitals needed by the framework.
+ * @spinlock: only one at a time pls.
+ * @chs:	the channels associated to this STM.
+ * @stm:	structure associated to the generic STM interface.
+ * @mode:	this tracer's mode, i.e. sysFS or disabled.
+ * @traceid:	value of the current ID for this component.
+ * @write_bytes:	Maximum bytes this STM can write at a time.
+ * @stmsper:	settings for register STMSPER.
+ * @stmspscr:	settings for register STMSPSCR.
+ * @numsp:	the total number of stimulus ports supported by this STM.
+ * @stmheer: settings for register STMHEER.
+ * @stmheter: settings for register STMHETER.
+ * @stmhebsr: settings for register STMHEBSR.
+ */
+struct stm_drvdata {
+ void __iomem *base;
+ struct device *dev;
+ struct clk *atclk;
+ struct coresight_device *csdev;
+ spinlock_t spinlock;
+ struct channel_space chs;
+ struct stm_data stm;
+ local_t mode;
+ u8 traceid;
+ u32 write_bytes;
+ u32 stmsper;
+ u32 stmspscr;
+ u32 numsp;
+ u32 stmheer;
+ u32 stmheter;
+ u32 stmhebsr;
+};
+
+static void stm_hwevent_enable_hw(struct stm_drvdata *drvdata)
+{
+ CS_UNLOCK(drvdata->base);
+
+ writel_relaxed(drvdata->stmhebsr, drvdata->base + STMHEBSR);
+ writel_relaxed(drvdata->stmheter, drvdata->base + STMHETER);
+ writel_relaxed(drvdata->stmheer, drvdata->base + STMHEER);
+ writel_relaxed(0x01 | /* Enable HW event tracing */
+ 0x04, /* Error detection on event tracing */
+ drvdata->base + STMHEMCR);
+
+ CS_LOCK(drvdata->base);
+}
+
+static void stm_port_enable_hw(struct stm_drvdata *drvdata)
+{
+ CS_UNLOCK(drvdata->base);
+ /* ATB trigger enable on direct writes to TRIG locations */
+ writel_relaxed(0x10,
+ drvdata->base + STMSPTRIGCSR);
+ writel_relaxed(drvdata->stmspscr, drvdata->base + STMSPSCR);
+ writel_relaxed(drvdata->stmsper, drvdata->base + STMSPER);
+
+ CS_LOCK(drvdata->base);
+}
+
+static void stm_enable_hw(struct stm_drvdata *drvdata)
+{
+ if (drvdata->stmheer)
+ stm_hwevent_enable_hw(drvdata);
+
+ stm_port_enable_hw(drvdata);
+
+ CS_UNLOCK(drvdata->base);
+
+	/* 4096 bytes between synchronisation packets */
+ writel_relaxed(0xFFF, drvdata->base + STMSYNCR);
+ writel_relaxed((drvdata->traceid << 16 | /* trace id */
+ 0x02 | /* timestamp enable */
+ 0x01), /* global STM enable */
+ drvdata->base + STMTCSR);
+
+ CS_LOCK(drvdata->base);
+}
+
+static int stm_enable(struct coresight_device *csdev,
+ struct perf_event_attr *attr, u32 mode)
+{
+ u32 val;
+ struct stm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+ if (mode != CS_MODE_SYSFS)
+ return -EINVAL;
+
+ val = local_cmpxchg(&drvdata->mode, CS_MODE_DISABLED, mode);
+
+ /* Someone is already using the tracer */
+ if (val)
+ return -EBUSY;
+
+ pm_runtime_get_sync(drvdata->dev);
+
+ spin_lock(&drvdata->spinlock);
+ stm_enable_hw(drvdata);
+ spin_unlock(&drvdata->spinlock);
+
+ dev_info(drvdata->dev, "STM tracing enabled\n");
+ return 0;
+}
+
+static void stm_hwevent_disable_hw(struct stm_drvdata *drvdata)
+{
+ CS_UNLOCK(drvdata->base);
+
+ writel_relaxed(0x0, drvdata->base + STMHEMCR);
+ writel_relaxed(0x0, drvdata->base + STMHEER);
+ writel_relaxed(0x0, drvdata->base + STMHETER);
+
+ CS_LOCK(drvdata->base);
+}
+
+static void stm_port_disable_hw(struct stm_drvdata *drvdata)
+{
+ CS_UNLOCK(drvdata->base);
+
+ writel_relaxed(0x0, drvdata->base + STMSPER);
+ writel_relaxed(0x0, drvdata->base + STMSPTRIGCSR);
+
+ CS_LOCK(drvdata->base);
+}
+
+static void stm_disable_hw(struct stm_drvdata *drvdata)
+{
+ u32 val;
+
+ CS_UNLOCK(drvdata->base);
+
+ val = readl_relaxed(drvdata->base + STMTCSR);
+ val &= ~0x1; /* clear global STM enable [0] */
+ writel_relaxed(val, drvdata->base + STMTCSR);
+
+ CS_LOCK(drvdata->base);
+
+ stm_port_disable_hw(drvdata);
+ if (drvdata->stmheer)
+ stm_hwevent_disable_hw(drvdata);
+}
+
+static void stm_disable(struct coresight_device *csdev)
+{
+ struct stm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+ /*
+ * For as long as the tracer isn't disabled another entity can't
+ * change its status. As such we can read the status here without
+ * fearing it will change under us.
+ */
+ if (local_read(&drvdata->mode) == CS_MODE_SYSFS) {
+ spin_lock(&drvdata->spinlock);
+ stm_disable_hw(drvdata);
+ spin_unlock(&drvdata->spinlock);
+
+ /* Wait until the engine has completely stopped */
+ coresight_timeout(drvdata, STMTCSR, STMTCSR_BUSY_BIT, 0);
+
+ pm_runtime_put(drvdata->dev);
+
+ local_set(&drvdata->mode, CS_MODE_DISABLED);
+ dev_info(drvdata->dev, "STM tracing disabled\n");
+ }
+}
+
+static int stm_trace_id(struct coresight_device *csdev)
+{
+ struct stm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+ return drvdata->traceid;
+}
+
+static const struct coresight_ops_source stm_source_ops = {
+ .trace_id = stm_trace_id,
+ .enable = stm_enable,
+ .disable = stm_disable,
+};
+
+static const struct coresight_ops stm_cs_ops = {
+ .source_ops = &stm_source_ops,
+};
+
+static inline bool stm_addr_unaligned(const void *addr, u8 write_bytes)
+{
+ return ((unsigned long)addr & (write_bytes - 1));
+}
+
+static void stm_send(void *addr, const void *data, u32 size, u8 write_bytes)
+{
+	u8 payload[8];
+
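+	/*
+	 * Stage unaligned payloads in a local copy before the typed
+	 * accesses below.
+	 */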
+ if (stm_addr_unaligned(data, write_bytes)) {
+		memcpy(payload, data, size);
+		data = payload;
+ }
+
+ /* now we are 64bit/32bit aligned */
+ switch (size) {
+#ifdef CONFIG_64BIT
+ case 8:
+ writeq_relaxed(*(u64 *)data, addr);
+ break;
+#endif
+ case 4:
+ writel_relaxed(*(u32 *)data, addr);
+ break;
+ case 2:
+ writew_relaxed(*(u16 *)data, addr);
+ break;
+ case 1:
+ writeb_relaxed(*(u8 *)data, addr);
+ break;
+ default:
+ break;
+ }
+}
+
+static int stm_generic_link(struct stm_data *stm_data,
+ unsigned int master, unsigned int channel)
+{
+ struct stm_drvdata *drvdata = container_of(stm_data,
+ struct stm_drvdata, stm);
+ if (!drvdata || !drvdata->csdev)
+ return -EINVAL;
+
+ return coresight_enable(drvdata->csdev);
+}
+
+static void stm_generic_unlink(struct stm_data *stm_data,
+ unsigned int master, unsigned int channel)
+{
+ struct stm_drvdata *drvdata = container_of(stm_data,
+ struct stm_drvdata, stm);
+ if (!drvdata || !drvdata->csdev)
+ return;
+
+ stm_disable(drvdata->csdev);
+}
+
+static long stm_generic_set_options(struct stm_data *stm_data,
+ unsigned int master,
+ unsigned int channel,
+ unsigned int nr_chans,
+ unsigned long options)
+{
+ struct stm_drvdata *drvdata = container_of(stm_data,
+ struct stm_drvdata, stm);
+ if (!(drvdata && local_read(&drvdata->mode)))
+ return -EINVAL;
+
+ if (channel >= drvdata->numsp)
+ return -EINVAL;
+
+ switch (options) {
+ case STM_OPTION_GUARANTEED:
+ set_bit(channel, drvdata->chs.guaranteed);
+ break;
+
+ case STM_OPTION_INVARIANT:
+ clear_bit(channel, drvdata->chs.guaranteed);
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static ssize_t stm_generic_packet(struct stm_data *stm_data,
+ unsigned int master,
+ unsigned int channel,
+ unsigned int packet,
+ unsigned int flags,
+ unsigned int size,
+ const unsigned char *payload)
+{
+ unsigned long ch_addr;
+ struct stm_drvdata *drvdata = container_of(stm_data,
+ struct stm_drvdata, stm);
+
+ if (!(drvdata && local_read(&drvdata->mode)))
+ return 0;
+
+ if (channel >= drvdata->numsp)
+ return 0;
+
+ ch_addr = (unsigned long)stm_channel_addr(drvdata, channel);
+
+ flags = (flags == STP_PACKET_TIMESTAMPED) ? STM_FLAG_TIMESTAMPED : 0;
+ flags |= test_bit(channel, drvdata->chs.guaranteed) ?
+ STM_FLAG_GUARANTEED : 0;
+
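+	/*
+	 * Stimulus port writes are limited to the fundamental data size
+	 * and must be a power of two, so clamp and round the requested
+	 * size accordingly.
+	 */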
+ if (size > drvdata->write_bytes)
+ size = drvdata->write_bytes;
+ else
+ size = rounddown_pow_of_two(size);
+
+ switch (packet) {
+ case STP_PACKET_FLAG:
+ ch_addr |= stm_channel_off(STM_PKT_TYPE_FLAG, flags);
+
+ /*
+ * The generic STM core sets a size of '0' on flag packets.
+ * As such send a flag packet of size '1' and tell the
+ * core we did so.
+ */
+ stm_send((void *)ch_addr, payload, 1, drvdata->write_bytes);
+ size = 1;
+ break;
+
+ case STP_PACKET_DATA:
+ ch_addr |= stm_channel_off(STM_PKT_TYPE_DATA, flags);
+ stm_send((void *)ch_addr, payload, size,
+ drvdata->write_bytes);
+ break;
+
+ default:
+ return -ENOTSUPP;
+ }
+
+ return size;
+}
+
+static ssize_t hwevent_enable_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ unsigned long val = drvdata->stmheer;
+
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t hwevent_enable_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ unsigned long val;
+ int ret = 0;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return -EINVAL;
+
+ drvdata->stmheer = val;
+ /* HW event enable and trigger go hand in hand */
+ drvdata->stmheter = val;
+
+ return size;
+}
+static DEVICE_ATTR_RW(hwevent_enable);
+
+static ssize_t hwevent_select_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ unsigned long val = drvdata->stmhebsr;
+
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t hwevent_select_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ unsigned long val;
+ int ret = 0;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return -EINVAL;
+
+ drvdata->stmhebsr = val;
+
+ return size;
+}
+static DEVICE_ATTR_RW(hwevent_select);
+
+static ssize_t port_select_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ unsigned long val;
+
+ if (!local_read(&drvdata->mode)) {
+ val = drvdata->stmspscr;
+ } else {
+ spin_lock(&drvdata->spinlock);
+ val = readl_relaxed(drvdata->base + STMSPSCR);
+ spin_unlock(&drvdata->spinlock);
+ }
+
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t port_select_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ unsigned long val, stmsper;
+ int ret = 0;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ spin_lock(&drvdata->spinlock);
+ drvdata->stmspscr = val;
+
+ if (local_read(&drvdata->mode)) {
+ CS_UNLOCK(drvdata->base);
+ /* Process as per ARM's TRM recommendation */
+ stmsper = readl_relaxed(drvdata->base + STMSPER);
+ writel_relaxed(0x0, drvdata->base + STMSPER);
+ writel_relaxed(drvdata->stmspscr, drvdata->base + STMSPSCR);
+ writel_relaxed(stmsper, drvdata->base + STMSPER);
+ CS_LOCK(drvdata->base);
+ }
+ spin_unlock(&drvdata->spinlock);
+
+ return size;
+}
+static DEVICE_ATTR_RW(port_select);
+
+static ssize_t port_enable_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ unsigned long val;
+
+ if (!local_read(&drvdata->mode)) {
+ val = drvdata->stmsper;
+ } else {
+ spin_lock(&drvdata->spinlock);
+ val = readl_relaxed(drvdata->base + STMSPER);
+ spin_unlock(&drvdata->spinlock);
+ }
+
+ return scnprintf(buf, PAGE_SIZE, "%#lx\n", val);
+}
+
+static ssize_t port_enable_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ unsigned long val;
+ int ret = 0;
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+ spin_lock(&drvdata->spinlock);
+ drvdata->stmsper = val;
+
+ if (local_read(&drvdata->mode)) {
+ CS_UNLOCK(drvdata->base);
+ writel_relaxed(drvdata->stmsper, drvdata->base + STMSPER);
+ CS_LOCK(drvdata->base);
+ }
+ spin_unlock(&drvdata->spinlock);
+
+ return size;
+}
+static DEVICE_ATTR_RW(port_enable);
+
+static ssize_t traceid_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ unsigned long val;
+ struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ val = drvdata->traceid;
+ return sprintf(buf, "%#lx\n", val);
+}
+
+static ssize_t traceid_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t size)
+{
+ int ret;
+ unsigned long val;
+ struct stm_drvdata *drvdata = dev_get_drvdata(dev->parent);
+
+ ret = kstrtoul(buf, 16, &val);
+ if (ret)
+ return ret;
+
+	/* traceid field is 7 bits wide on STM32 */
+ drvdata->traceid = val & 0x7f;
+ return size;
+}
+static DEVICE_ATTR_RW(traceid);
+
+#define coresight_stm_simple_func(name, offset) \
+ coresight_simple_func(struct stm_drvdata, name, offset)
+
+coresight_stm_simple_func(tcsr, STMTCSR);
+coresight_stm_simple_func(tsfreqr, STMTSFREQR);
+coresight_stm_simple_func(syncr, STMSYNCR);
+coresight_stm_simple_func(sper, STMSPER);
+coresight_stm_simple_func(spter, STMSPTER);
+coresight_stm_simple_func(privmaskr, STMPRIVMASKR);
+coresight_stm_simple_func(spscr, STMSPSCR);
+coresight_stm_simple_func(spmscr, STMSPMSCR);
+coresight_stm_simple_func(spfeat1r, STMSPFEAT1R);
+coresight_stm_simple_func(spfeat2r, STMSPFEAT2R);
+coresight_stm_simple_func(spfeat3r, STMSPFEAT3R);
+coresight_stm_simple_func(devid, CORESIGHT_DEVID);
+
+static struct attribute *coresight_stm_attrs[] = {
+ &dev_attr_hwevent_enable.attr,
+ &dev_attr_hwevent_select.attr,
+ &dev_attr_port_enable.attr,
+ &dev_attr_port_select.attr,
+ &dev_attr_traceid.attr,
+ NULL,
+};
+
+static struct attribute *coresight_stm_mgmt_attrs[] = {
+ &dev_attr_tcsr.attr,
+ &dev_attr_tsfreqr.attr,
+ &dev_attr_syncr.attr,
+ &dev_attr_sper.attr,
+ &dev_attr_spter.attr,
+ &dev_attr_privmaskr.attr,
+ &dev_attr_spscr.attr,
+ &dev_attr_spmscr.attr,
+ &dev_attr_spfeat1r.attr,
+ &dev_attr_spfeat2r.attr,
+ &dev_attr_spfeat3r.attr,
+ &dev_attr_devid.attr,
+ NULL,
+};
+
+static const struct attribute_group coresight_stm_group = {
+ .attrs = coresight_stm_attrs,
+};
+
+static const struct attribute_group coresight_stm_mgmt_group = {
+ .attrs = coresight_stm_mgmt_attrs,
+ .name = "mgmt",
+};
+
+static const struct attribute_group *coresight_stm_groups[] = {
+ &coresight_stm_group,
+ &coresight_stm_mgmt_group,
+ NULL,
+};
+
+static int stm_get_resource_byname(struct device_node *np,
+ char *ch_base, struct resource *res)
+{
+ const char *name = NULL;
+ int index = 0, found = 0;
+
+ while (!of_property_read_string_index(np, "reg-names", index, &name)) {
+ if (strcmp(ch_base, name)) {
+ index++;
+ continue;
+ }
+
+ /* We have a match and @index is where it's at */
+ found = 1;
+ break;
+ }
+
+ if (!found)
+ return -EINVAL;
+
+ return of_address_to_resource(np, index, res);
+}
+
+static u32 stm_fundamental_data_size(struct stm_drvdata *drvdata)
+{
+ u32 stmspfeat2r;
+
+ if (!IS_ENABLED(CONFIG_64BIT))
+ return 4;
+
+ stmspfeat2r = readl_relaxed(drvdata->base + STMSPFEAT2R);
+
+ /*
+	 * bits[15:12] represent the fundamental data size
+ * 0 - 32-bit data
+ * 1 - 64-bit data
+ */
+ return BMVAL(stmspfeat2r, 12, 15) ? 8 : 4;
+}
+
+static u32 stm_num_stimulus_port(struct stm_drvdata *drvdata)
+{
+ u32 numsp;
+
+ numsp = readl_relaxed(drvdata->base + CORESIGHT_DEVID);
+ /*
+	 * NUMSP in STMDEVID is 17 bits long and if equal to 0x0,
+	 * 32 stimulus ports are supported.
+ */
+ numsp &= 0x1ffff;
+ if (!numsp)
+ numsp = STM_32_CHANNEL;
+ return numsp;
+}
+
+static void stm_init_default_data(struct stm_drvdata *drvdata)
+{
+ /* Don't use port selection */
+ drvdata->stmspscr = 0x0;
+ /*
+	 * Enable all channels regardless of their number. When port
+	 * selection isn't used (see above) STMSPER applies to all
+	 * 32 channel groups available, hence setting all 32 bits to 1.
+ */
+ drvdata->stmsper = ~0x0;
+
+ /*
+	 * The trace ID value for *ETM* tracers starts at CPU_ID * 2 + 0x10
+	 * and anything equal to or higher than 0x70 is reserved.  Since 0x00
+	 * is also reserved the STM trace ID needs to be higher than 0x00 and
+	 * lower than 0x10.
+ */
+ drvdata->traceid = 0x1;
+
+ /* Set invariant transaction timing on all channels */
+ bitmap_clear(drvdata->chs.guaranteed, 0, drvdata->numsp);
+}
+
+static void stm_init_generic_data(struct stm_drvdata *drvdata)
+{
+ drvdata->stm.name = dev_name(drvdata->dev);
+
+ /*
+ * MasterIDs are assigned at HW design phase. As such the core is
+ * using a single master for interaction with this device.
+ */
+ drvdata->stm.sw_start = 1;
+ drvdata->stm.sw_end = 1;
+ drvdata->stm.hw_override = true;
+ drvdata->stm.sw_nchannels = drvdata->numsp;
+ drvdata->stm.packet = stm_generic_packet;
+ drvdata->stm.link = stm_generic_link;
+ drvdata->stm.unlink = stm_generic_unlink;
+ drvdata->stm.set_options = stm_generic_set_options;
+}
+
+static int stm_probe(struct amba_device *adev, const struct amba_id *id)
+{
+ int ret;
+ void __iomem *base;
+ unsigned long *guaranteed;
+ struct device *dev = &adev->dev;
+ struct coresight_platform_data *pdata = NULL;
+ struct stm_drvdata *drvdata;
+ struct resource *res = &adev->res;
+ struct resource ch_res;
+ size_t res_size, bitmap_size;
+ struct coresight_desc *desc;
+ struct device_node *np = adev->dev.of_node;
+
+ if (np) {
+ pdata = of_get_coresight_platform_data(dev, np);
+ if (IS_ERR(pdata))
+ return PTR_ERR(pdata);
+ adev->dev.platform_data = pdata;
+ }
+ drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
+ if (!drvdata)
+ return -ENOMEM;
+
+ drvdata->dev = &adev->dev;
+ drvdata->atclk = devm_clk_get(&adev->dev, "atclk"); /* optional */
+ if (!IS_ERR(drvdata->atclk)) {
+ ret = clk_prepare_enable(drvdata->atclk);
+ if (ret)
+ return ret;
+ }
+ dev_set_drvdata(dev, drvdata);
+
+ base = devm_ioremap_resource(dev, res);
+ if (IS_ERR(base))
+ return PTR_ERR(base);
+ drvdata->base = base;
+
+ ret = stm_get_resource_byname(np, "stm-stimulus-base", &ch_res);
+ if (ret)
+ return ret;
+
+ base = devm_ioremap_resource(dev, &ch_res);
+ if (IS_ERR(base))
+ return PTR_ERR(base);
+ drvdata->chs.base = base;
+
+ drvdata->write_bytes = stm_fundamental_data_size(drvdata);
+
+ if (boot_nr_channel) {
+ drvdata->numsp = boot_nr_channel;
+ res_size = min((resource_size_t)(boot_nr_channel *
+ BYTES_PER_CHANNEL), resource_size(res));
+ } else {
+ drvdata->numsp = stm_num_stimulus_port(drvdata);
+ res_size = min((resource_size_t)(drvdata->numsp *
+ BYTES_PER_CHANNEL), resource_size(res));
+ }
+ bitmap_size = BITS_TO_LONGS(drvdata->numsp) * sizeof(long);
+
+ guaranteed = devm_kzalloc(dev, bitmap_size, GFP_KERNEL);
+ if (!guaranteed)
+ return -ENOMEM;
+ drvdata->chs.guaranteed = guaranteed;
+
+ spin_lock_init(&drvdata->spinlock);
+
+ stm_init_default_data(drvdata);
+ stm_init_generic_data(drvdata);
+
+ if (stm_register_device(dev, &drvdata->stm, THIS_MODULE)) {
+ dev_info(dev,
+			 "stm_register_device failed, probing deferred\n");
+ return -EPROBE_DEFER;
+ }
+
+ desc = devm_kzalloc(dev, sizeof(*desc), GFP_KERNEL);
+ if (!desc) {
+ ret = -ENOMEM;
+ goto stm_unregister;
+ }
+
+ desc->type = CORESIGHT_DEV_TYPE_SOURCE;
+ desc->subtype.source_subtype = CORESIGHT_DEV_SUBTYPE_SOURCE_SOFTWARE;
+ desc->ops = &stm_cs_ops;
+ desc->pdata = pdata;
+ desc->dev = dev;
+ desc->groups = coresight_stm_groups;
+ drvdata->csdev = coresight_register(desc);
+ if (IS_ERR(drvdata->csdev)) {
+ ret = PTR_ERR(drvdata->csdev);
+ goto stm_unregister;
+ }
+
+ pm_runtime_put(&adev->dev);
+
+ dev_info(dev, "%s initialized\n", (char *)id->data);
+ return 0;
+
+stm_unregister:
+ stm_unregister_device(&drvdata->stm);
+ return ret;
+}
+
+#ifdef CONFIG_PM
+static int stm_runtime_suspend(struct device *dev)
+{
+ struct stm_drvdata *drvdata = dev_get_drvdata(dev);
+
+ if (drvdata && !IS_ERR(drvdata->atclk))
+ clk_disable_unprepare(drvdata->atclk);
+
+ return 0;
+}
+
+static int stm_runtime_resume(struct device *dev)
+{
+ struct stm_drvdata *drvdata = dev_get_drvdata(dev);
+
+ if (drvdata && !IS_ERR(drvdata->atclk))
+ clk_prepare_enable(drvdata->atclk);
+
+ return 0;
+}
+#endif
+
+static const struct dev_pm_ops stm_dev_pm_ops = {
+ SET_RUNTIME_PM_OPS(stm_runtime_suspend, stm_runtime_resume, NULL)
+};
+
+static struct amba_id stm_ids[] = {
+ {
+ .id = 0x0003b962,
+ .mask = 0x0003ffff,
+ .data = "STM32",
+ },
+ { 0, 0},
+};
+
+static struct amba_driver stm_driver = {
+ .drv = {
+ .name = "coresight-stm",
+ .owner = THIS_MODULE,
+ .pm = &stm_dev_pm_ops,
+ .suppress_bind_attrs = true,
+ },
+ .probe = stm_probe,
+ .id_table = stm_ids,
+};
+
+builtin_amba_driver(stm_driver);
diff --git a/drivers/hwtracing/coresight/coresight-tmc-etf.c b/drivers/hwtracing/coresight/coresight-tmc-etf.c
new file mode 100644
index 000000000000..466af86fd76f
--- /dev/null
+++ b/drivers/hwtracing/coresight/coresight-tmc-etf.c
@@ -0,0 +1,604 @@
+/*
+ * Copyright(C) 2016 Linaro Limited. All rights reserved.
+ * Author: Mathieu Poirier <mathieu.poirier@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/circ_buf.h>
+#include <linux/coresight.h>
+#include <linux/perf_event.h>
+#include <linux/slab.h>
+#include "coresight-priv.h"
+#include "coresight-tmc.h"
+
+void tmc_etb_enable_hw(struct tmc_drvdata *drvdata)
+{
+ CS_UNLOCK(drvdata->base);
+
+ /* Wait for TMCSReady bit to be set */
+ tmc_wait_for_tmcready(drvdata);
+
+ writel_relaxed(TMC_MODE_CIRCULAR_BUFFER, drvdata->base + TMC_MODE);
+ writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI |
+ TMC_FFCR_FON_FLIN | TMC_FFCR_FON_TRIG_EVT |
+ TMC_FFCR_TRIGON_TRIGIN,
+ drvdata->base + TMC_FFCR);
+
+ writel_relaxed(drvdata->trigger_cntr, drvdata->base + TMC_TRG);
+ tmc_enable_hw(drvdata);
+
+ CS_LOCK(drvdata->base);
+}
+
+static void tmc_etb_dump_hw(struct tmc_drvdata *drvdata)
+{
+ char *bufp;
+ u32 read_data;
+ int i;
+
+ bufp = drvdata->buf;
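+	/*
+	 * Drain the RAM read data register one word at a time; per the TMC
+	 * TRM, TMC_RRD returns 0xFFFFFFFF once the trace memory has been
+	 * emptied, which is what terminates this loop.
+	 */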
+ while (1) {
+ for (i = 0; i < drvdata->memwidth; i++) {
+ read_data = readl_relaxed(drvdata->base + TMC_RRD);
+ if (read_data == 0xFFFFFFFF)
+ return;
+ memcpy(bufp, &read_data, 4);
+ bufp += 4;
+ }
+ }
+}
+
+static void tmc_etb_disable_hw(struct tmc_drvdata *drvdata)
+{
+ CS_UNLOCK(drvdata->base);
+
+ tmc_flush_and_stop(drvdata);
+ /*
+ * When operating in sysFS mode the content of the buffer needs to be
+ * read before the TMC is disabled.
+ */
+ if (local_read(&drvdata->mode) == CS_MODE_SYSFS)
+ tmc_etb_dump_hw(drvdata);
+ tmc_disable_hw(drvdata);
+
+ CS_LOCK(drvdata->base);
+}
+
+static void tmc_etf_enable_hw(struct tmc_drvdata *drvdata)
+{
+ CS_UNLOCK(drvdata->base);
+
+ /* Wait for TMCSReady bit to be set */
+ tmc_wait_for_tmcready(drvdata);
+
+ writel_relaxed(TMC_MODE_HARDWARE_FIFO, drvdata->base + TMC_MODE);
+ writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI,
+ drvdata->base + TMC_FFCR);
+ writel_relaxed(0x0, drvdata->base + TMC_BUFWM);
+ tmc_enable_hw(drvdata);
+
+ CS_LOCK(drvdata->base);
+}
+
+static void tmc_etf_disable_hw(struct tmc_drvdata *drvdata)
+{
+ CS_UNLOCK(drvdata->base);
+
+ tmc_flush_and_stop(drvdata);
+ tmc_disable_hw(drvdata);
+
+ CS_LOCK(drvdata->base);
+}
+
+static int tmc_enable_etf_sink_sysfs(struct coresight_device *csdev, u32 mode)
+{
+ int ret = 0;
+ bool used = false;
+ char *buf = NULL;
+ long val;
+ unsigned long flags;
+ struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+ /* This shouldn't be happening */
+ if (WARN_ON(mode != CS_MODE_SYSFS))
+ return -EINVAL;
+
+ /*
+	 * If we don't have a buffer, release the lock and allocate memory.
+ * Otherwise keep the lock and move along.
+ */
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+ if (!drvdata->buf) {
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+ /* Allocating the memory here while outside of the spinlock */
+ buf = kzalloc(drvdata->size, GFP_KERNEL);
+ if (!buf)
+ return -ENOMEM;
+
+ /* Let's try again */
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+ }
+
+ if (drvdata->reading) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+ val = local_xchg(&drvdata->mode, mode);
+ /*
+ * In sysFS mode we can have multiple writers per sink. Since this
+ * sink is already enabled no memory is needed and the HW need not be
+ * touched.
+ */
+ if (val == CS_MODE_SYSFS)
+ goto out;
+
+ /*
+ * If drvdata::buf isn't NULL, memory was allocated for a previous
+ * trace run but wasn't read. If so simply zero-out the memory.
+ * Otherwise use the memory allocated above.
+ *
+ * The memory is freed when users read the buffer using the
+ * /dev/xyz.{etf|etb} interface. See tmc_read_unprepare_etf() for
+ * details.
+ */
+ if (drvdata->buf) {
+ memset(drvdata->buf, 0, drvdata->size);
+ } else {
+ used = true;
+ drvdata->buf = buf;
+ }
+
+ tmc_etb_enable_hw(drvdata);
+out:
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+ /* Free memory outside the spinlock if need be */
+ if (!used && buf)
+ kfree(buf);
+
+ if (!ret)
+ dev_info(drvdata->dev, "TMC-ETB/ETF enabled\n");
+
+ return ret;
+}
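The sysFS enable path above relies on a common kernel pattern: drop the spinlock for the sleeping allocation, retake it, and only publish the freshly allocated buffer if it is still needed. A self-contained sketch of the pattern, using a hypothetical "demo" structure purely for illustration (not driver code):

#include <linux/slab.h>
#include <linux/spinlock.h>

struct demo {
	spinlock_t lock;
	void *buf;
	size_t size;
};

static int demo_ensure_buf(struct demo *d)
{
	unsigned long flags;
	void *buf = NULL;

	spin_lock_irqsave(&d->lock, flags);
	if (!d->buf) {
		/* kzalloc(GFP_KERNEL) may sleep, so drop the spinlock first */
		spin_unlock_irqrestore(&d->lock, flags);
		buf = kzalloc(d->size, GFP_KERNEL);
		if (!buf)
			return -ENOMEM;
		spin_lock_irqsave(&d->lock, flags);
	}

	if (!d->buf) {
		/* still unset: publish our allocation */
		d->buf = buf;
		buf = NULL;
	}
	spin_unlock_irqrestore(&d->lock, flags);

	/* if someone else published a buffer first, ours is redundant */
	kfree(buf);
	return 0;
}

tmc_enable_etf_sink_sysfs() follows the same shape, with the publish/discard decision made on drvdata->buf once the lock is held again.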
+
+static int tmc_enable_etf_sink_perf(struct coresight_device *csdev, u32 mode)
+{
+ int ret = 0;
+ long val;
+ unsigned long flags;
+ struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+ /* This shouldn't be happening */
+ if (WARN_ON(mode != CS_MODE_PERF))
+ return -EINVAL;
+
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+ if (drvdata->reading) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ val = local_xchg(&drvdata->mode, mode);
+ /*
+ * In Perf mode there can be only one writer per sink. There
+	 * is also no need to continue if the ETB/ETF is already operated
+ * from sysFS.
+ */
+ if (val != CS_MODE_DISABLED) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ tmc_etb_enable_hw(drvdata);
+out:
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+ return ret;
+}
+
+static int tmc_enable_etf_sink(struct coresight_device *csdev, u32 mode)
+{
+ switch (mode) {
+ case CS_MODE_SYSFS:
+ return tmc_enable_etf_sink_sysfs(csdev, mode);
+ case CS_MODE_PERF:
+ return tmc_enable_etf_sink_perf(csdev, mode);
+ }
+
+ /* We shouldn't be here */
+ return -EINVAL;
+}
+
+static void tmc_disable_etf_sink(struct coresight_device *csdev)
+{
+ long val;
+ unsigned long flags;
+ struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+ if (drvdata->reading) {
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+ return;
+ }
+
+ val = local_xchg(&drvdata->mode, CS_MODE_DISABLED);
+ /* Disable the TMC only if it needs to */
+ if (val != CS_MODE_DISABLED)
+ tmc_etb_disable_hw(drvdata);
+
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+ dev_info(drvdata->dev, "TMC-ETB/ETF disabled\n");
+}
+
+static int tmc_enable_etf_link(struct coresight_device *csdev,
+ int inport, int outport)
+{
+ unsigned long flags;
+ struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+ if (drvdata->reading) {
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+ return -EBUSY;
+ }
+
+ tmc_etf_enable_hw(drvdata);
+ local_set(&drvdata->mode, CS_MODE_SYSFS);
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+ dev_info(drvdata->dev, "TMC-ETF enabled\n");
+ return 0;
+}
+
+static void tmc_disable_etf_link(struct coresight_device *csdev,
+ int inport, int outport)
+{
+ unsigned long flags;
+ struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+ if (drvdata->reading) {
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+ return;
+ }
+
+ tmc_etf_disable_hw(drvdata);
+ local_set(&drvdata->mode, CS_MODE_DISABLED);
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+	dev_info(drvdata->dev, "TMC-ETF disabled\n");
+}
+
+static void *tmc_alloc_etf_buffer(struct coresight_device *csdev, int cpu,
+ void **pages, int nr_pages, bool overwrite)
+{
+ int node;
+ struct cs_buffers *buf;
+
+ if (cpu == -1)
+ cpu = smp_processor_id();
+ node = cpu_to_node(cpu);
+
+ /* Allocate memory structure for interaction with Perf */
+ buf = kzalloc_node(sizeof(struct cs_buffers), GFP_KERNEL, node);
+ if (!buf)
+ return NULL;
+
+ buf->snapshot = overwrite;
+ buf->nr_pages = nr_pages;
+ buf->data_pages = pages;
+
+ return buf;
+}
+
+static void tmc_free_etf_buffer(void *config)
+{
+ struct cs_buffers *buf = config;
+
+ kfree(buf);
+}
+
+static int tmc_set_etf_buffer(struct coresight_device *csdev,
+ struct perf_output_handle *handle,
+ void *sink_config)
+{
+ int ret = 0;
+ unsigned long head;
+ struct cs_buffers *buf = sink_config;
+
+ /* wrap head around to the amount of space we have */
+ head = handle->head & ((buf->nr_pages << PAGE_SHIFT) - 1);
+
+ /* find the page to write to */
+ buf->cur = head / PAGE_SIZE;
+
+ /* and offset within that page */
+ buf->offset = head % PAGE_SIZE;
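+	/*
+	 * Worked example (illustrative values): with 4K pages, nr_pages = 4
+	 * and handle->head = 10000, head stays at 10000 (< 16384), giving
+	 * cur = 2 and offset = 1808.
+	 */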
+
+ local_set(&buf->data_size, 0);
+
+ return ret;
+}
+
+static unsigned long tmc_reset_etf_buffer(struct coresight_device *csdev,
+ struct perf_output_handle *handle,
+ void *sink_config, bool *lost)
+{
+ long size = 0;
+ struct cs_buffers *buf = sink_config;
+
+ if (buf) {
+ /*
+ * In snapshot mode ->data_size holds the new address of the
+ * ring buffer's head. The size itself is the whole address
+ * range since we want the latest information.
+ */
+ if (buf->snapshot)
+ handle->head = local_xchg(&buf->data_size,
+ buf->nr_pages << PAGE_SHIFT);
+ /*
+ * Tell the tracer PMU how much we got in this run and if
+ * something went wrong along the way. Nobody else can use
+		 * this cs_buffers instance until we are done. As such it is
+		 * safe to reset the parameters here and reconcile them with
+		 * the ring buffer API in the tracer PMU.
+ */
+ *lost = !!local_xchg(&buf->lost, 0);
+ size = local_xchg(&buf->data_size, 0);
+ }
+
+ return size;
+}
+
+static void tmc_update_etf_buffer(struct coresight_device *csdev,
+ struct perf_output_handle *handle,
+ void *sink_config)
+{
+ int i, cur;
+ u32 *buf_ptr;
+ u32 read_ptr, write_ptr;
+ u32 status, to_read;
+ unsigned long offset;
+ struct cs_buffers *buf = sink_config;
+ struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+ if (!buf)
+ return;
+
+ /* This shouldn't happen */
+ if (WARN_ON_ONCE(local_read(&drvdata->mode) != CS_MODE_PERF))
+ return;
+
+ CS_UNLOCK(drvdata->base);
+
+ tmc_flush_and_stop(drvdata);
+
+ read_ptr = readl_relaxed(drvdata->base + TMC_RRP);
+ write_ptr = readl_relaxed(drvdata->base + TMC_RWP);
+
+ /*
+ * Get a hold of the status register and see if a wrap around
+ * has occurred. If so adjust things accordingly.
+ */
+ status = readl_relaxed(drvdata->base + TMC_STS);
+ if (status & TMC_STS_FULL) {
+ local_inc(&buf->lost);
+ to_read = drvdata->size;
+ } else {
+ to_read = CIRC_CNT(write_ptr, read_ptr, drvdata->size);
+ }
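+	/*
+	 * Example (illustrative values): for a 0x1000 byte RAM with
+	 * write_ptr = 0x100 and read_ptr = 0xf80, CIRC_CNT() yields
+	 * 0x180 bytes of new trace data.
+	 */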
+
+ /*
+ * The TMC RAM buffer may be bigger than the space available in the
+ * perf ring buffer (handle->size). If so advance the RRP so that we
+ * get the latest trace data.
+ */
+ if (to_read > handle->size) {
+ u32 mask = 0;
+
+ /*
+ * The value written to RRP must be byte-address aligned to
+ * the width of the trace memory databus _and_ to a frame
+ * boundary (16 byte), whichever is the biggest. For example,
+ * for 32-bit, 64-bit and 128-bit wide trace memory, the four
+ * LSBs must be 0s. For 256-bit wide trace memory, the five
+ * LSBs must be 0s.
+ */
+ switch (drvdata->memwidth) {
+ case TMC_MEM_INTF_WIDTH_32BITS:
+ case TMC_MEM_INTF_WIDTH_64BITS:
+ case TMC_MEM_INTF_WIDTH_128BITS:
+			mask = GENMASK(31, 4);
+ break;
+ case TMC_MEM_INTF_WIDTH_256BITS:
+			mask = GENMASK(31, 5);
+ break;
+ }
+
+ /*
+ * Make sure the new size is aligned in accordance with the
+ * requirement explained above.
+ */
+ to_read = handle->size & mask;
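+		/*
+		 * e.g. with the four LSBs cleared, handle->size = 300 gives
+		 * to_read = 288, a whole number of 16-byte frames.
+		 */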
+ /* Move the RAM read pointer up */
+ read_ptr = (write_ptr + drvdata->size) - to_read;
+ /* Make sure we are still within our limits */
+ if (read_ptr > (drvdata->size - 1))
+ read_ptr -= drvdata->size;
+ /* Tell the HW */
+ writel_relaxed(read_ptr, drvdata->base + TMC_RRP);
+ local_inc(&buf->lost);
+ }
+
+ cur = buf->cur;
+ offset = buf->offset;
+
+	/* Copy the trace data out, one 32-bit word at a time */
+ for (i = 0; i < to_read; i += 4) {
+ buf_ptr = buf->data_pages[cur] + offset;
+ *buf_ptr = readl_relaxed(drvdata->base + TMC_RRD);
+
+ offset += 4;
+ if (offset >= PAGE_SIZE) {
+ offset = 0;
+ cur++;
+ /* wrap around at the end of the buffer */
+ cur &= buf->nr_pages - 1;
+ }
+ }
+
+ /*
+ * In snapshot mode all we have to do is communicate to
+ * perf_aux_output_end() the address of the current head. In full
+ * trace mode the same function expects a size to move rb->aux_head
+ * forward.
+ */
+ if (buf->snapshot)
+ local_set(&buf->data_size, (cur * PAGE_SIZE) + offset);
+ else
+ local_add(to_read, &buf->data_size);
+
+ CS_LOCK(drvdata->base);
+}
+
+static const struct coresight_ops_sink tmc_etf_sink_ops = {
+ .enable = tmc_enable_etf_sink,
+ .disable = tmc_disable_etf_sink,
+ .alloc_buffer = tmc_alloc_etf_buffer,
+ .free_buffer = tmc_free_etf_buffer,
+ .set_buffer = tmc_set_etf_buffer,
+ .reset_buffer = tmc_reset_etf_buffer,
+ .update_buffer = tmc_update_etf_buffer,
+};
+
+static const struct coresight_ops_link tmc_etf_link_ops = {
+ .enable = tmc_enable_etf_link,
+ .disable = tmc_disable_etf_link,
+};
+
+const struct coresight_ops tmc_etb_cs_ops = {
+ .sink_ops = &tmc_etf_sink_ops,
+};
+
+const struct coresight_ops tmc_etf_cs_ops = {
+ .sink_ops = &tmc_etf_sink_ops,
+ .link_ops = &tmc_etf_link_ops,
+};
+
+int tmc_read_prepare_etb(struct tmc_drvdata *drvdata)
+{
+ long val;
+ enum tmc_mode mode;
+ int ret = 0;
+ unsigned long flags;
+
+	/* config types are set at boot time and never change */
+ if (WARN_ON_ONCE(drvdata->config_type != TMC_CONFIG_TYPE_ETB &&
+ drvdata->config_type != TMC_CONFIG_TYPE_ETF))
+ return -EINVAL;
+
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+
+ if (drvdata->reading) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+ /* There is no point in reading a TMC in HW FIFO mode */
+ mode = readl_relaxed(drvdata->base + TMC_MODE);
+ if (mode != TMC_MODE_CIRCULAR_BUFFER) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ val = local_read(&drvdata->mode);
+ /* Don't interfere if operated from Perf */
+ if (val == CS_MODE_PERF) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ /* If drvdata::buf is NULL the trace data has been read already */
+ if (drvdata->buf == NULL) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ /* Disable the TMC if need be */
+ if (val == CS_MODE_SYSFS)
+ tmc_etb_disable_hw(drvdata);
+
+ drvdata->reading = true;
+out:
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+ return ret;
+}
+
+int tmc_read_unprepare_etb(struct tmc_drvdata *drvdata)
+{
+ char *buf = NULL;
+ enum tmc_mode mode;
+ unsigned long flags;
+
+	/* config types are set at boot time and never change */
+ if (WARN_ON_ONCE(drvdata->config_type != TMC_CONFIG_TYPE_ETB &&
+ drvdata->config_type != TMC_CONFIG_TYPE_ETF))
+ return -EINVAL;
+
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+
+ /* There is no point in reading a TMC in HW FIFO mode */
+ mode = readl_relaxed(drvdata->base + TMC_MODE);
+ if (mode != TMC_MODE_CIRCULAR_BUFFER) {
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+ return -EINVAL;
+ }
+
+ /* Re-enable the TMC if need be */
+ if (local_read(&drvdata->mode) == CS_MODE_SYSFS) {
+ /*
+ * The trace run will continue with the same allocated trace
+ * buffer. As such zero-out the buffer so that we don't end
+ * up with stale data.
+ *
+ * Since the tracer is still enabled drvdata::buf
+ * can't be NULL.
+ */
+ memset(drvdata->buf, 0, drvdata->size);
+ tmc_etb_enable_hw(drvdata);
+ } else {
+ /*
+ * The ETB/ETF is not tracing and the buffer was just read.
+ * As such prepare to free the trace buffer.
+ */
+ buf = drvdata->buf;
+ drvdata->buf = NULL;
+ }
+
+ drvdata->reading = false;
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+ /*
+ * Free allocated memory outside of the spinlock. There is no need
+ * to assert the validity of 'buf' since calling kfree(NULL) is safe.
+ */
+ kfree(buf);
+
+ return 0;
+}
diff --git a/drivers/hwtracing/coresight/coresight-tmc-etr.c b/drivers/hwtracing/coresight/coresight-tmc-etr.c
new file mode 100644
index 000000000000..847d1b5f2c13
--- /dev/null
+++ b/drivers/hwtracing/coresight/coresight-tmc-etr.c
@@ -0,0 +1,329 @@
+/*
+ * Copyright(C) 2016 Linaro Limited. All rights reserved.
+ * Author: Mathieu Poirier <mathieu.poirier@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/coresight.h>
+#include <linux/dma-mapping.h>
+#include "coresight-priv.h"
+#include "coresight-tmc.h"
+
+void tmc_etr_enable_hw(struct tmc_drvdata *drvdata)
+{
+ u32 axictl;
+
+ /* Zero out the memory to help with debug */
+ memset(drvdata->vaddr, 0, drvdata->size);
+
+ CS_UNLOCK(drvdata->base);
+
+ /* Wait for TMCSReady bit to be set */
+ tmc_wait_for_tmcready(drvdata);
+
+ writel_relaxed(drvdata->size / 4, drvdata->base + TMC_RSZ);
+ writel_relaxed(TMC_MODE_CIRCULAR_BUFFER, drvdata->base + TMC_MODE);
+
+ axictl = readl_relaxed(drvdata->base + TMC_AXICTL);
+ axictl |= TMC_AXICTL_WR_BURST_16;
+ writel_relaxed(axictl, drvdata->base + TMC_AXICTL);
+ axictl &= ~TMC_AXICTL_SCT_GAT_MODE;
+ writel_relaxed(axictl, drvdata->base + TMC_AXICTL);
+ axictl = (axictl &
+ ~(TMC_AXICTL_PROT_CTL_B0 | TMC_AXICTL_PROT_CTL_B1)) |
+ TMC_AXICTL_PROT_CTL_B1;
+ writel_relaxed(axictl, drvdata->base + TMC_AXICTL);
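+	/*
+	 * The AXICTL sequence above selects write bursts of up to 16 beats,
+	 * a contiguous (non scatter-gather) buffer and, through the AxPROT
+	 * bits, non-secure accesses.
+	 */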
+
+ writel_relaxed(drvdata->paddr, drvdata->base + TMC_DBALO);
+ writel_relaxed(0x0, drvdata->base + TMC_DBAHI);
+ writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI |
+ TMC_FFCR_FON_FLIN | TMC_FFCR_FON_TRIG_EVT |
+ TMC_FFCR_TRIGON_TRIGIN,
+ drvdata->base + TMC_FFCR);
+ writel_relaxed(drvdata->trigger_cntr, drvdata->base + TMC_TRG);
+ tmc_enable_hw(drvdata);
+
+ CS_LOCK(drvdata->base);
+}
+
+static void tmc_etr_dump_hw(struct tmc_drvdata *drvdata)
+{
+ u32 rwp, val;
+
+ rwp = readl_relaxed(drvdata->base + TMC_RWP);
+ val = readl_relaxed(drvdata->base + TMC_STS);
+
+	/* If the RAM buffer is full (it has wrapped), the oldest data is at RWP */
+	if (val & TMC_STS_FULL)
+ drvdata->buf = drvdata->vaddr + rwp - drvdata->paddr;
+ else
+ drvdata->buf = drvdata->vaddr;
+}
+
+static void tmc_etr_disable_hw(struct tmc_drvdata *drvdata)
+{
+ CS_UNLOCK(drvdata->base);
+
+ tmc_flush_and_stop(drvdata);
+ /*
+ * When operating in sysFS mode the content of the buffer needs to be
+ * read before the TMC is disabled.
+ */
+ if (local_read(&drvdata->mode) == CS_MODE_SYSFS)
+ tmc_etr_dump_hw(drvdata);
+ tmc_disable_hw(drvdata);
+
+ CS_LOCK(drvdata->base);
+}
+
+static int tmc_enable_etr_sink_sysfs(struct coresight_device *csdev, u32 mode)
+{
+ int ret = 0;
+ bool used = false;
+ long val;
+ unsigned long flags;
+ void __iomem *vaddr = NULL;
+ dma_addr_t paddr;
+ struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+ /* This shouldn't be happening */
+ if (WARN_ON(mode != CS_MODE_SYSFS))
+ return -EINVAL;
+
+ /*
+	 * If we don't have a buffer, release the lock and allocate memory.
+ * Otherwise keep the lock and move along.
+ */
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+ if (!drvdata->vaddr) {
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+ /*
+ * Contiguous memory can't be allocated while a spinlock is
+ * held. As such allocate memory here and free it if a buffer
+ * has already been allocated (from a previous session).
+ */
+ vaddr = dma_alloc_coherent(drvdata->dev, drvdata->size,
+ &paddr, GFP_KERNEL);
+ if (!vaddr)
+ return -ENOMEM;
+
+ /* Let's try again */
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+ }
+
+ if (drvdata->reading) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+ val = local_xchg(&drvdata->mode, mode);
+ /*
+ * In sysFS mode we can have multiple writers per sink. Since this
+ * sink is already enabled no memory is needed and the HW need not be
+ * touched.
+ */
+ if (val == CS_MODE_SYSFS)
+ goto out;
+
+ /*
+ * If drvdata::buf == NULL, use the memory allocated above.
+ * Otherwise a buffer still exists from a previous session, so
+ * simply use that.
+ */
+ if (drvdata->buf == NULL) {
+ used = true;
+ drvdata->vaddr = vaddr;
+ drvdata->paddr = paddr;
+ drvdata->buf = drvdata->vaddr;
+ }
+
+ memset(drvdata->vaddr, 0, drvdata->size);
+
+ tmc_etr_enable_hw(drvdata);
+out:
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+ /* Free memory outside the spinlock if need be */
+ if (!used && vaddr)
+ dma_free_coherent(drvdata->dev, drvdata->size, vaddr, paddr);
+
+ if (!ret)
+ dev_info(drvdata->dev, "TMC-ETR enabled\n");
+
+ return ret;
+}
+
+static int tmc_enable_etr_sink_perf(struct coresight_device *csdev, u32 mode)
+{
+ int ret = 0;
+ long val;
+ unsigned long flags;
+ struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+ /* This shouldn't be happening */
+ if (WARN_ON(mode != CS_MODE_PERF))
+ return -EINVAL;
+
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+ if (drvdata->reading) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ val = local_xchg(&drvdata->mode, mode);
+ /*
+ * In Perf mode there can be only one writer per sink. There
+ * is also no need to continue if the ETR is already operated
+ * from sysFS.
+ */
+ if (val != CS_MODE_DISABLED) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ tmc_etr_enable_hw(drvdata);
+out:
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+ return ret;
+}
+
+static int tmc_enable_etr_sink(struct coresight_device *csdev, u32 mode)
+{
+ switch (mode) {
+ case CS_MODE_SYSFS:
+ return tmc_enable_etr_sink_sysfs(csdev, mode);
+ case CS_MODE_PERF:
+ return tmc_enable_etr_sink_perf(csdev, mode);
+ }
+
+ /* We shouldn't be here */
+ return -EINVAL;
+}
+
+static void tmc_disable_etr_sink(struct coresight_device *csdev)
+{
+ long val;
+ unsigned long flags;
+ struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+ if (drvdata->reading) {
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+ return;
+ }
+
+ val = local_xchg(&drvdata->mode, CS_MODE_DISABLED);
+ /* Disable the TMC only if it needs to */
+ if (val != CS_MODE_DISABLED)
+ tmc_etr_disable_hw(drvdata);
+
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+ dev_info(drvdata->dev, "TMC-ETR disabled\n");
+}
+
+static const struct coresight_ops_sink tmc_etr_sink_ops = {
+ .enable = tmc_enable_etr_sink,
+ .disable = tmc_disable_etr_sink,
+};
+
+const struct coresight_ops tmc_etr_cs_ops = {
+ .sink_ops = &tmc_etr_sink_ops,
+};
+
+int tmc_read_prepare_etr(struct tmc_drvdata *drvdata)
+{
+ int ret = 0;
+ long val;
+ unsigned long flags;
+
+	/* config types are set at boot time and never change */
+ if (WARN_ON_ONCE(drvdata->config_type != TMC_CONFIG_TYPE_ETR))
+ return -EINVAL;
+
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+ if (drvdata->reading) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+ val = local_read(&drvdata->mode);
+ /* Don't interfere if operated from Perf */
+ if (val == CS_MODE_PERF) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ /* If drvdata::buf is NULL the trace data has been read already */
+ if (drvdata->buf == NULL) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ /* Disable the TMC if need be */
+ if (val == CS_MODE_SYSFS)
+ tmc_etr_disable_hw(drvdata);
+
+ drvdata->reading = true;
+out:
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+ return ret;
+}
+
+int tmc_read_unprepare_etr(struct tmc_drvdata *drvdata)
+{
+ unsigned long flags;
+ dma_addr_t paddr;
+ void __iomem *vaddr = NULL;
+
+	/* config types are set at boot time and never change */
+ if (WARN_ON_ONCE(drvdata->config_type != TMC_CONFIG_TYPE_ETR))
+ return -EINVAL;
+
+ spin_lock_irqsave(&drvdata->spinlock, flags);
+
+	/* Re-enable the TMC if need be */
+ if (local_read(&drvdata->mode) == CS_MODE_SYSFS) {
+ /*
+ * The trace run will continue with the same allocated trace
+ * buffer. As such zero-out the buffer so that we don't end
+ * up with stale data.
+ *
+ * Since the tracer is still enabled drvdata::buf
+ * can't be NULL.
+ */
+ memset(drvdata->buf, 0, drvdata->size);
+ tmc_etr_enable_hw(drvdata);
+ } else {
+ /*
+ * The ETR is not tracing and the buffer was just read.
+ * As such prepare to free the trace buffer.
+ */
+ vaddr = drvdata->vaddr;
+ paddr = drvdata->paddr;
+ drvdata->buf = NULL;
+ }
+
+ drvdata->reading = false;
+ spin_unlock_irqrestore(&drvdata->spinlock, flags);
+
+	/* Free allocated memory outside of the spinlock */
+ if (vaddr)
+ dma_free_coherent(drvdata->dev, drvdata->size, vaddr, paddr);
+
+ return 0;
+}
diff --git a/drivers/hwtracing/coresight/coresight-tmc.c b/drivers/hwtracing/coresight/coresight-tmc.c
index 1be191f5d39c..9e02ac963cd0 100644
--- a/drivers/hwtracing/coresight/coresight-tmc.c
+++ b/drivers/hwtracing/coresight/coresight-tmc.c
@@ -30,127 +30,27 @@
#include <linux/amba/bus.h>
#include "coresight-priv.h"
+#include "coresight-tmc.h"
-#define TMC_RSZ 0x004
-#define TMC_STS 0x00c
-#define TMC_RRD 0x010
-#define TMC_RRP 0x014
-#define TMC_RWP 0x018
-#define TMC_TRG 0x01c
-#define TMC_CTL 0x020
-#define TMC_RWD 0x024
-#define TMC_MODE 0x028
-#define TMC_LBUFLEVEL 0x02c
-#define TMC_CBUFLEVEL 0x030
-#define TMC_BUFWM 0x034
-#define TMC_RRPHI 0x038
-#define TMC_RWPHI 0x03c
-#define TMC_AXICTL 0x110
-#define TMC_DBALO 0x118
-#define TMC_DBAHI 0x11c
-#define TMC_FFSR 0x300
-#define TMC_FFCR 0x304
-#define TMC_PSCR 0x308
-#define TMC_ITMISCOP0 0xee0
-#define TMC_ITTRFLIN 0xee8
-#define TMC_ITATBDATA0 0xeec
-#define TMC_ITATBCTR2 0xef0
-#define TMC_ITATBCTR1 0xef4
-#define TMC_ITATBCTR0 0xef8
-
-/* register description */
-/* TMC_CTL - 0x020 */
-#define TMC_CTL_CAPT_EN BIT(0)
-/* TMC_STS - 0x00C */
-#define TMC_STS_TRIGGERED BIT(1)
-/* TMC_AXICTL - 0x110 */
-#define TMC_AXICTL_PROT_CTL_B0 BIT(0)
-#define TMC_AXICTL_PROT_CTL_B1 BIT(1)
-#define TMC_AXICTL_SCT_GAT_MODE BIT(7)
-#define TMC_AXICTL_WR_BURST_LEN 0xF00
-/* TMC_FFCR - 0x304 */
-#define TMC_FFCR_EN_FMT BIT(0)
-#define TMC_FFCR_EN_TI BIT(1)
-#define TMC_FFCR_FON_FLIN BIT(4)
-#define TMC_FFCR_FON_TRIG_EVT BIT(5)
-#define TMC_FFCR_FLUSHMAN BIT(6)
-#define TMC_FFCR_TRIGON_TRIGIN BIT(8)
-#define TMC_FFCR_STOP_ON_FLUSH BIT(12)
-
-#define TMC_STS_TRIGGERED_BIT 2
-#define TMC_FFCR_FLUSHMAN_BIT 6
-
-enum tmc_config_type {
- TMC_CONFIG_TYPE_ETB,
- TMC_CONFIG_TYPE_ETR,
- TMC_CONFIG_TYPE_ETF,
-};
-
-enum tmc_mode {
- TMC_MODE_CIRCULAR_BUFFER,
- TMC_MODE_SOFTWARE_FIFO,
- TMC_MODE_HARDWARE_FIFO,
-};
-
-enum tmc_mem_intf_width {
- TMC_MEM_INTF_WIDTH_32BITS = 0x2,
- TMC_MEM_INTF_WIDTH_64BITS = 0x3,
- TMC_MEM_INTF_WIDTH_128BITS = 0x4,
- TMC_MEM_INTF_WIDTH_256BITS = 0x5,
-};
-
-/**
- * struct tmc_drvdata - specifics associated to an TMC component
- * @base: memory mapped base address for this component.
- * @dev: the device entity associated to this component.
- * @csdev: component vitals needed by the framework.
- * @miscdev: specifics to handle "/dev/xyz.tmc" entry.
- * @spinlock: only one at a time pls.
- * @read_count: manages preparation of buffer for reading.
- * @buf: area of memory where trace data get sent.
- * @paddr: DMA start location in RAM.
- * @vaddr: virtual representation of @paddr.
- * @size: @buf size.
- * @enable: this TMC is being used.
- * @config_type: TMC variant, must be of type @tmc_config_type.
- * @trigger_cntr: amount of words to store after a trigger.
- */
-struct tmc_drvdata {
- void __iomem *base;
- struct device *dev;
- struct coresight_device *csdev;
- struct miscdevice miscdev;
- spinlock_t spinlock;
- int read_count;
- bool reading;
- char *buf;
- dma_addr_t paddr;
- void *vaddr;
- u32 size;
- bool enable;
- enum tmc_config_type config_type;
- u32 trigger_cntr;
-};
-
-static void tmc_wait_for_ready(struct tmc_drvdata *drvdata)
+void tmc_wait_for_tmcready(struct tmc_drvdata *drvdata)
{
/* Ensure formatter, unformatter and hardware fifo are empty */
if (coresight_timeout(drvdata->base,
- TMC_STS, TMC_STS_TRIGGERED_BIT, 1)) {
+ TMC_STS, TMC_STS_TMCREADY_BIT, 1)) {
dev_err(drvdata->dev,
"timeout observed when probing at offset %#x\n",
TMC_STS);
}
}
-static void tmc_flush_and_stop(struct tmc_drvdata *drvdata)
+void tmc_flush_and_stop(struct tmc_drvdata *drvdata)
{
u32 ffcr;
ffcr = readl_relaxed(drvdata->base + TMC_FFCR);
ffcr |= TMC_FFCR_STOP_ON_FLUSH;
writel_relaxed(ffcr, drvdata->base + TMC_FFCR);
- ffcr |= TMC_FFCR_FLUSHMAN;
+ ffcr |= BIT(TMC_FFCR_FLUSHMAN_BIT);
writel_relaxed(ffcr, drvdata->base + TMC_FFCR);
/* Ensure flush completes */
if (coresight_timeout(drvdata->base,
@@ -160,338 +60,73 @@ static void tmc_flush_and_stop(struct tmc_drvdata *drvdata)
TMC_FFCR);
}
- tmc_wait_for_ready(drvdata);
+ tmc_wait_for_tmcready(drvdata);
}
-static void tmc_enable_hw(struct tmc_drvdata *drvdata)
+void tmc_enable_hw(struct tmc_drvdata *drvdata)
{
writel_relaxed(TMC_CTL_CAPT_EN, drvdata->base + TMC_CTL);
}
-static void tmc_disable_hw(struct tmc_drvdata *drvdata)
+void tmc_disable_hw(struct tmc_drvdata *drvdata)
{
writel_relaxed(0x0, drvdata->base + TMC_CTL);
}
-static void tmc_etb_enable_hw(struct tmc_drvdata *drvdata)
-{
- /* Zero out the memory to help with debug */
- memset(drvdata->buf, 0, drvdata->size);
-
- CS_UNLOCK(drvdata->base);
-
- writel_relaxed(TMC_MODE_CIRCULAR_BUFFER, drvdata->base + TMC_MODE);
- writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI |
- TMC_FFCR_FON_FLIN | TMC_FFCR_FON_TRIG_EVT |
- TMC_FFCR_TRIGON_TRIGIN,
- drvdata->base + TMC_FFCR);
-
- writel_relaxed(drvdata->trigger_cntr, drvdata->base + TMC_TRG);
- tmc_enable_hw(drvdata);
-
- CS_LOCK(drvdata->base);
-}
-
-static void tmc_etr_enable_hw(struct tmc_drvdata *drvdata)
-{
- u32 axictl;
-
- /* Zero out the memory to help with debug */
- memset(drvdata->vaddr, 0, drvdata->size);
-
- CS_UNLOCK(drvdata->base);
-
- writel_relaxed(drvdata->size / 4, drvdata->base + TMC_RSZ);
- writel_relaxed(TMC_MODE_CIRCULAR_BUFFER, drvdata->base + TMC_MODE);
-
- axictl = readl_relaxed(drvdata->base + TMC_AXICTL);
- axictl |= TMC_AXICTL_WR_BURST_LEN;
- writel_relaxed(axictl, drvdata->base + TMC_AXICTL);
- axictl &= ~TMC_AXICTL_SCT_GAT_MODE;
- writel_relaxed(axictl, drvdata->base + TMC_AXICTL);
- axictl = (axictl &
- ~(TMC_AXICTL_PROT_CTL_B0 | TMC_AXICTL_PROT_CTL_B1)) |
- TMC_AXICTL_PROT_CTL_B1;
- writel_relaxed(axictl, drvdata->base + TMC_AXICTL);
-
- writel_relaxed(drvdata->paddr, drvdata->base + TMC_DBALO);
- writel_relaxed(0x0, drvdata->base + TMC_DBAHI);
- writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI |
- TMC_FFCR_FON_FLIN | TMC_FFCR_FON_TRIG_EVT |
- TMC_FFCR_TRIGON_TRIGIN,
- drvdata->base + TMC_FFCR);
- writel_relaxed(drvdata->trigger_cntr, drvdata->base + TMC_TRG);
- tmc_enable_hw(drvdata);
-
- CS_LOCK(drvdata->base);
-}
-
-static void tmc_etf_enable_hw(struct tmc_drvdata *drvdata)
-{
- CS_UNLOCK(drvdata->base);
-
- writel_relaxed(TMC_MODE_HARDWARE_FIFO, drvdata->base + TMC_MODE);
- writel_relaxed(TMC_FFCR_EN_FMT | TMC_FFCR_EN_TI,
- drvdata->base + TMC_FFCR);
- writel_relaxed(0x0, drvdata->base + TMC_BUFWM);
- tmc_enable_hw(drvdata);
-
- CS_LOCK(drvdata->base);
-}
-
-static int tmc_enable(struct tmc_drvdata *drvdata, enum tmc_mode mode)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&drvdata->spinlock, flags);
- if (drvdata->reading) {
- spin_unlock_irqrestore(&drvdata->spinlock, flags);
- return -EBUSY;
- }
-
- if (drvdata->config_type == TMC_CONFIG_TYPE_ETB) {
- tmc_etb_enable_hw(drvdata);
- } else if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) {
- tmc_etr_enable_hw(drvdata);
- } else {
- if (mode == TMC_MODE_CIRCULAR_BUFFER)
- tmc_etb_enable_hw(drvdata);
- else
- tmc_etf_enable_hw(drvdata);
- }
- drvdata->enable = true;
- spin_unlock_irqrestore(&drvdata->spinlock, flags);
-
- dev_info(drvdata->dev, "TMC enabled\n");
- return 0;
-}
-
-static int tmc_enable_sink(struct coresight_device *csdev, u32 mode)
-{
- struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
-
- return tmc_enable(drvdata, TMC_MODE_CIRCULAR_BUFFER);
-}
-
-static int tmc_enable_link(struct coresight_device *csdev, int inport,
- int outport)
+static int tmc_read_prepare(struct tmc_drvdata *drvdata)
{
- struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
-
- return tmc_enable(drvdata, TMC_MODE_HARDWARE_FIFO);
-}
+ int ret = 0;
-static void tmc_etb_dump_hw(struct tmc_drvdata *drvdata)
-{
- enum tmc_mem_intf_width memwidth;
- u8 memwords;
- char *bufp;
- u32 read_data;
- int i;
-
- memwidth = BMVAL(readl_relaxed(drvdata->base + CORESIGHT_DEVID), 8, 10);
- if (memwidth == TMC_MEM_INTF_WIDTH_32BITS)
- memwords = 1;
- else if (memwidth == TMC_MEM_INTF_WIDTH_64BITS)
- memwords = 2;
- else if (memwidth == TMC_MEM_INTF_WIDTH_128BITS)
- memwords = 4;
- else
- memwords = 8;
-
- bufp = drvdata->buf;
- while (1) {
- for (i = 0; i < memwords; i++) {
- read_data = readl_relaxed(drvdata->base + TMC_RRD);
- if (read_data == 0xFFFFFFFF)
- return;
- memcpy(bufp, &read_data, 4);
- bufp += 4;
- }
+ switch (drvdata->config_type) {
+ case TMC_CONFIG_TYPE_ETB:
+ case TMC_CONFIG_TYPE_ETF:
+ ret = tmc_read_prepare_etb(drvdata);
+ break;
+ case TMC_CONFIG_TYPE_ETR:
+ ret = tmc_read_prepare_etr(drvdata);
+ break;
+ default:
+ ret = -EINVAL;
}
-}
-
-static void tmc_etb_disable_hw(struct tmc_drvdata *drvdata)
-{
- CS_UNLOCK(drvdata->base);
-
- tmc_flush_and_stop(drvdata);
- tmc_etb_dump_hw(drvdata);
- tmc_disable_hw(drvdata);
-
- CS_LOCK(drvdata->base);
-}
-
-static void tmc_etr_dump_hw(struct tmc_drvdata *drvdata)
-{
- u32 rwp, val;
- rwp = readl_relaxed(drvdata->base + TMC_RWP);
- val = readl_relaxed(drvdata->base + TMC_STS);
-
- /* How much memory do we still have */
- if (val & BIT(0))
- drvdata->buf = drvdata->vaddr + rwp - drvdata->paddr;
- else
- drvdata->buf = drvdata->vaddr;
-}
+ if (!ret)
+ dev_info(drvdata->dev, "TMC read start\n");
-static void tmc_etr_disable_hw(struct tmc_drvdata *drvdata)
-{
- CS_UNLOCK(drvdata->base);
-
- tmc_flush_and_stop(drvdata);
- tmc_etr_dump_hw(drvdata);
- tmc_disable_hw(drvdata);
-
- CS_LOCK(drvdata->base);
-}
-
-static void tmc_etf_disable_hw(struct tmc_drvdata *drvdata)
-{
- CS_UNLOCK(drvdata->base);
-
- tmc_flush_and_stop(drvdata);
- tmc_disable_hw(drvdata);
-
- CS_LOCK(drvdata->base);
+ return ret;
}
-static void tmc_disable(struct tmc_drvdata *drvdata, enum tmc_mode mode)
+static int tmc_read_unprepare(struct tmc_drvdata *drvdata)
{
- unsigned long flags;
-
- spin_lock_irqsave(&drvdata->spinlock, flags);
- if (drvdata->reading)
- goto out;
+ int ret = 0;
- if (drvdata->config_type == TMC_CONFIG_TYPE_ETB) {
- tmc_etb_disable_hw(drvdata);
- } else if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) {
- tmc_etr_disable_hw(drvdata);
- } else {
- if (mode == TMC_MODE_CIRCULAR_BUFFER)
- tmc_etb_disable_hw(drvdata);
- else
- tmc_etf_disable_hw(drvdata);
+ switch (drvdata->config_type) {
+ case TMC_CONFIG_TYPE_ETB:
+ case TMC_CONFIG_TYPE_ETF:
+ ret = tmc_read_unprepare_etb(drvdata);
+ break;
+ case TMC_CONFIG_TYPE_ETR:
+ ret = tmc_read_unprepare_etr(drvdata);
+ break;
+ default:
+ ret = -EINVAL;
}
-out:
- drvdata->enable = false;
- spin_unlock_irqrestore(&drvdata->spinlock, flags);
-
- dev_info(drvdata->dev, "TMC disabled\n");
-}
-
-static void tmc_disable_sink(struct coresight_device *csdev)
-{
- struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
-
- tmc_disable(drvdata, TMC_MODE_CIRCULAR_BUFFER);
-}
-
-static void tmc_disable_link(struct coresight_device *csdev, int inport,
- int outport)
-{
- struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
-
- tmc_disable(drvdata, TMC_MODE_HARDWARE_FIFO);
-}
-
-static const struct coresight_ops_sink tmc_sink_ops = {
- .enable = tmc_enable_sink,
- .disable = tmc_disable_sink,
-};
-
-static const struct coresight_ops_link tmc_link_ops = {
- .enable = tmc_enable_link,
- .disable = tmc_disable_link,
-};
-
-static const struct coresight_ops tmc_etb_cs_ops = {
- .sink_ops = &tmc_sink_ops,
-};
-
-static const struct coresight_ops tmc_etr_cs_ops = {
- .sink_ops = &tmc_sink_ops,
-};
-
-static const struct coresight_ops tmc_etf_cs_ops = {
- .sink_ops = &tmc_sink_ops,
- .link_ops = &tmc_link_ops,
-};
-
-static int tmc_read_prepare(struct tmc_drvdata *drvdata)
-{
- int ret;
- unsigned long flags;
- enum tmc_mode mode;
- spin_lock_irqsave(&drvdata->spinlock, flags);
- if (!drvdata->enable)
- goto out;
+ if (!ret)
+ dev_info(drvdata->dev, "TMC read end\n");
- if (drvdata->config_type == TMC_CONFIG_TYPE_ETB) {
- tmc_etb_disable_hw(drvdata);
- } else if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) {
- tmc_etr_disable_hw(drvdata);
- } else {
- mode = readl_relaxed(drvdata->base + TMC_MODE);
- if (mode == TMC_MODE_CIRCULAR_BUFFER) {
- tmc_etb_disable_hw(drvdata);
- } else {
- ret = -ENODEV;
- goto err;
- }
- }
-out:
- drvdata->reading = true;
- spin_unlock_irqrestore(&drvdata->spinlock, flags);
-
- dev_info(drvdata->dev, "TMC read start\n");
- return 0;
-err:
- spin_unlock_irqrestore(&drvdata->spinlock, flags);
return ret;
}
-static void tmc_read_unprepare(struct tmc_drvdata *drvdata)
-{
- unsigned long flags;
- enum tmc_mode mode;
-
- spin_lock_irqsave(&drvdata->spinlock, flags);
- if (!drvdata->enable)
- goto out;
-
- if (drvdata->config_type == TMC_CONFIG_TYPE_ETB) {
- tmc_etb_enable_hw(drvdata);
- } else if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) {
- tmc_etr_enable_hw(drvdata);
- } else {
- mode = readl_relaxed(drvdata->base + TMC_MODE);
- if (mode == TMC_MODE_CIRCULAR_BUFFER)
- tmc_etb_enable_hw(drvdata);
- }
-out:
- drvdata->reading = false;
- spin_unlock_irqrestore(&drvdata->spinlock, flags);
-
- dev_info(drvdata->dev, "TMC read end\n");
-}
-
static int tmc_open(struct inode *inode, struct file *file)
{
+ int ret;
struct tmc_drvdata *drvdata = container_of(file->private_data,
struct tmc_drvdata, miscdev);
- int ret = 0;
-
- if (drvdata->read_count++)
- goto out;
ret = tmc_read_prepare(drvdata);
if (ret)
return ret;
-out:
+
nonseekable_open(inode, file);
dev_dbg(drvdata->dev, "%s: successfully opened\n", __func__);
@@ -531,19 +166,14 @@ static ssize_t tmc_read(struct file *file, char __user *data, size_t len,
static int tmc_release(struct inode *inode, struct file *file)
{
+ int ret;
struct tmc_drvdata *drvdata = container_of(file->private_data,
struct tmc_drvdata, miscdev);
- if (--drvdata->read_count) {
- if (drvdata->read_count < 0) {
- dev_err(drvdata->dev, "mismatched close\n");
- drvdata->read_count = 0;
- }
- goto out;
- }
+ ret = tmc_read_unprepare(drvdata);
+ if (ret)
+ return ret;
- tmc_read_unprepare(drvdata);
-out:
dev_dbg(drvdata->dev, "%s: released\n", __func__);
return 0;
}
@@ -556,56 +186,71 @@ static const struct file_operations tmc_fops = {
.llseek = no_llseek,
};
-static ssize_t status_show(struct device *dev,
- struct device_attribute *attr, char *buf)
+static enum tmc_mem_intf_width tmc_get_memwidth(u32 devid)
{
- unsigned long flags;
- u32 tmc_rsz, tmc_sts, tmc_rrp, tmc_rwp, tmc_trg;
- u32 tmc_ctl, tmc_ffsr, tmc_ffcr, tmc_mode, tmc_pscr;
- u32 devid;
- struct tmc_drvdata *drvdata = dev_get_drvdata(dev->parent);
+ enum tmc_mem_intf_width memwidth;
- pm_runtime_get_sync(drvdata->dev);
- spin_lock_irqsave(&drvdata->spinlock, flags);
- CS_UNLOCK(drvdata->base);
-
- tmc_rsz = readl_relaxed(drvdata->base + TMC_RSZ);
- tmc_sts = readl_relaxed(drvdata->base + TMC_STS);
- tmc_rrp = readl_relaxed(drvdata->base + TMC_RRP);
- tmc_rwp = readl_relaxed(drvdata->base + TMC_RWP);
- tmc_trg = readl_relaxed(drvdata->base + TMC_TRG);
- tmc_ctl = readl_relaxed(drvdata->base + TMC_CTL);
- tmc_ffsr = readl_relaxed(drvdata->base + TMC_FFSR);
- tmc_ffcr = readl_relaxed(drvdata->base + TMC_FFCR);
- tmc_mode = readl_relaxed(drvdata->base + TMC_MODE);
- tmc_pscr = readl_relaxed(drvdata->base + TMC_PSCR);
- devid = readl_relaxed(drvdata->base + CORESIGHT_DEVID);
+ /*
+ * Excerpt from the TRM:
+ *
+ * DEVID::MEMWIDTH[10:8]
+ * 0x2 Memory interface databus is 32 bits wide.
+ * 0x3 Memory interface databus is 64 bits wide.
+ * 0x4 Memory interface databus is 128 bits wide.
+ * 0x5 Memory interface databus is 256 bits wide.
+ */
+ switch (BMVAL(devid, 8, 10)) {
+ case 0x2:
+ memwidth = TMC_MEM_INTF_WIDTH_32BITS;
+ break;
+ case 0x3:
+ memwidth = TMC_MEM_INTF_WIDTH_64BITS;
+ break;
+ case 0x4:
+ memwidth = TMC_MEM_INTF_WIDTH_128BITS;
+ break;
+ case 0x5:
+ memwidth = TMC_MEM_INTF_WIDTH_256BITS;
+ break;
+ default:
+ memwidth = 0;
+ }
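+	/*
+	 * The enum values double as the number of 32-bit reads needed per
+	 * memory access: e.g. DEVID::MEMWIDTH = 0x3 (64-bit databus) maps to
+	 * TMC_MEM_INTF_WIDTH_64BITS = 2, which is why tmc_etb_dump_hw()
+	 * issues two TMC_RRD reads per pass of its inner loop.
+	 */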
- CS_LOCK(drvdata->base);
- spin_unlock_irqrestore(&drvdata->spinlock, flags);
- pm_runtime_put(drvdata->dev);
-
- return sprintf(buf,
- "Depth:\t\t0x%x\n"
- "Status:\t\t0x%x\n"
- "RAM read ptr:\t0x%x\n"
- "RAM wrt ptr:\t0x%x\n"
- "Trigger cnt:\t0x%x\n"
- "Control:\t0x%x\n"
- "Flush status:\t0x%x\n"
- "Flush ctrl:\t0x%x\n"
- "Mode:\t\t0x%x\n"
- "PSRC:\t\t0x%x\n"
- "DEVID:\t\t0x%x\n",
- tmc_rsz, tmc_sts, tmc_rrp, tmc_rwp, tmc_trg,
- tmc_ctl, tmc_ffsr, tmc_ffcr, tmc_mode, tmc_pscr, devid);
-
- return -EINVAL;
+ return memwidth;
}
-static DEVICE_ATTR_RO(status);
-static ssize_t trigger_cntr_show(struct device *dev,
- struct device_attribute *attr, char *buf)
+#define coresight_tmc_simple_func(name, offset) \
+ coresight_simple_func(struct tmc_drvdata, name, offset)
+
+coresight_tmc_simple_func(rsz, TMC_RSZ);
+coresight_tmc_simple_func(sts, TMC_STS);
+coresight_tmc_simple_func(rrp, TMC_RRP);
+coresight_tmc_simple_func(rwp, TMC_RWP);
+coresight_tmc_simple_func(trg, TMC_TRG);
+coresight_tmc_simple_func(ctl, TMC_CTL);
+coresight_tmc_simple_func(ffsr, TMC_FFSR);
+coresight_tmc_simple_func(ffcr, TMC_FFCR);
+coresight_tmc_simple_func(mode, TMC_MODE);
+coresight_tmc_simple_func(pscr, TMC_PSCR);
+coresight_tmc_simple_func(devid, CORESIGHT_DEVID);
+
+static struct attribute *coresight_tmc_mgmt_attrs[] = {
+ &dev_attr_rsz.attr,
+ &dev_attr_sts.attr,
+ &dev_attr_rrp.attr,
+ &dev_attr_rwp.attr,
+ &dev_attr_trg.attr,
+ &dev_attr_ctl.attr,
+ &dev_attr_ffsr.attr,
+ &dev_attr_ffcr.attr,
+ &dev_attr_mode.attr,
+ &dev_attr_pscr.attr,
+ &dev_attr_devid.attr,
+ NULL,
+};
+
+static ssize_t trigger_cntr_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
{
struct tmc_drvdata *drvdata = dev_get_drvdata(dev->parent);
unsigned long val = drvdata->trigger_cntr;
@@ -630,26 +275,25 @@ static ssize_t trigger_cntr_store(struct device *dev,
}
static DEVICE_ATTR_RW(trigger_cntr);
-static struct attribute *coresight_etb_attrs[] = {
+static struct attribute *coresight_tmc_attrs[] = {
&dev_attr_trigger_cntr.attr,
- &dev_attr_status.attr,
NULL,
};
-ATTRIBUTE_GROUPS(coresight_etb);
-static struct attribute *coresight_etr_attrs[] = {
- &dev_attr_trigger_cntr.attr,
- &dev_attr_status.attr,
- NULL,
+static const struct attribute_group coresight_tmc_group = {
+ .attrs = coresight_tmc_attrs,
};
-ATTRIBUTE_GROUPS(coresight_etr);
-static struct attribute *coresight_etf_attrs[] = {
- &dev_attr_trigger_cntr.attr,
- &dev_attr_status.attr,
+static const struct attribute_group coresight_tmc_mgmt_group = {
+ .attrs = coresight_tmc_mgmt_attrs,
+ .name = "mgmt",
+};
+
+const struct attribute_group *coresight_tmc_groups[] = {
+ &coresight_tmc_group,
+ &coresight_tmc_mgmt_group,
NULL,
};
-ATTRIBUTE_GROUPS(coresight_etf);
static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
{
@@ -688,6 +332,7 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
devid = readl_relaxed(drvdata->base + CORESIGHT_DEVID);
drvdata->config_type = BMVAL(devid, 6, 7);
+ drvdata->memwidth = tmc_get_memwidth(devid);
if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) {
if (np)
@@ -702,20 +347,6 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
pm_runtime_put(&adev->dev);
- if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) {
- drvdata->vaddr = dma_alloc_coherent(dev, drvdata->size,
- &drvdata->paddr, GFP_KERNEL);
- if (!drvdata->vaddr)
- return -ENOMEM;
-
- memset(drvdata->vaddr, 0, drvdata->size);
- drvdata->buf = drvdata->vaddr;
- } else {
- drvdata->buf = devm_kzalloc(dev, drvdata->size, GFP_KERNEL);
- if (!drvdata->buf)
- return -ENOMEM;
- }
-
desc = devm_kzalloc(dev, sizeof(*desc), GFP_KERNEL);
if (!desc) {
ret = -ENOMEM;
@@ -725,20 +356,18 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
desc->pdata = pdata;
desc->dev = dev;
desc->subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_BUFFER;
+ desc->groups = coresight_tmc_groups;
if (drvdata->config_type == TMC_CONFIG_TYPE_ETB) {
desc->type = CORESIGHT_DEV_TYPE_SINK;
desc->ops = &tmc_etb_cs_ops;
- desc->groups = coresight_etb_groups;
} else if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) {
desc->type = CORESIGHT_DEV_TYPE_SINK;
desc->ops = &tmc_etr_cs_ops;
- desc->groups = coresight_etr_groups;
} else {
desc->type = CORESIGHT_DEV_TYPE_LINKSINK;
desc->subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_FIFO;
desc->ops = &tmc_etf_cs_ops;
- desc->groups = coresight_etf_groups;
}
drvdata->csdev = coresight_register(desc);
@@ -754,7 +383,6 @@ static int tmc_probe(struct amba_device *adev, const struct amba_id *id)
if (ret)
goto err_misc_register;
- dev_info(dev, "TMC initialized\n");
return 0;
err_misc_register:
diff --git a/drivers/hwtracing/coresight/coresight-tmc.h b/drivers/hwtracing/coresight/coresight-tmc.h
new file mode 100644
index 000000000000..5c5fe2ad2ca7
--- /dev/null
+++ b/drivers/hwtracing/coresight/coresight-tmc.h
@@ -0,0 +1,140 @@
+/*
+ * Copyright(C) 2015 Linaro Limited. All rights reserved.
+ * Author: Mathieu Poirier <mathieu.poirier@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _CORESIGHT_TMC_H
+#define _CORESIGHT_TMC_H
+
+#include <linux/miscdevice.h>
+
+#define TMC_RSZ 0x004
+#define TMC_STS 0x00c
+#define TMC_RRD 0x010
+#define TMC_RRP 0x014
+#define TMC_RWP 0x018
+#define TMC_TRG 0x01c
+#define TMC_CTL 0x020
+#define TMC_RWD 0x024
+#define TMC_MODE 0x028
+#define TMC_LBUFLEVEL 0x02c
+#define TMC_CBUFLEVEL 0x030
+#define TMC_BUFWM 0x034
+#define TMC_RRPHI 0x038
+#define TMC_RWPHI 0x03c
+#define TMC_AXICTL 0x110
+#define TMC_DBALO 0x118
+#define TMC_DBAHI 0x11c
+#define TMC_FFSR 0x300
+#define TMC_FFCR 0x304
+#define TMC_PSCR 0x308
+#define TMC_ITMISCOP0 0xee0
+#define TMC_ITTRFLIN 0xee8
+#define TMC_ITATBDATA0 0xeec
+#define TMC_ITATBCTR2 0xef0
+#define TMC_ITATBCTR1 0xef4
+#define TMC_ITATBCTR0 0xef8
+
+/* register description */
+/* TMC_CTL - 0x020 */
+#define TMC_CTL_CAPT_EN BIT(0)
+/* TMC_STS - 0x00C */
+#define TMC_STS_TMCREADY_BIT 2
+#define TMC_STS_FULL BIT(0)
+#define TMC_STS_TRIGGERED BIT(1)
+/* TMC_AXICTL - 0x110 */
+#define TMC_AXICTL_PROT_CTL_B0 BIT(0)
+#define TMC_AXICTL_PROT_CTL_B1 BIT(1)
+#define TMC_AXICTL_SCT_GAT_MODE BIT(7)
+#define TMC_AXICTL_WR_BURST_16 0xF00
+/* TMC_FFCR - 0x304 */
+#define TMC_FFCR_FLUSHMAN_BIT 6
+#define TMC_FFCR_EN_FMT BIT(0)
+#define TMC_FFCR_EN_TI BIT(1)
+#define TMC_FFCR_FON_FLIN BIT(4)
+#define TMC_FFCR_FON_TRIG_EVT BIT(5)
+#define TMC_FFCR_TRIGON_TRIGIN BIT(8)
+#define TMC_FFCR_STOP_ON_FLUSH BIT(12)
+
+enum tmc_config_type {
+ TMC_CONFIG_TYPE_ETB,
+ TMC_CONFIG_TYPE_ETR,
+ TMC_CONFIG_TYPE_ETF,
+};
+
+enum tmc_mode {
+ TMC_MODE_CIRCULAR_BUFFER,
+ TMC_MODE_SOFTWARE_FIFO,
+ TMC_MODE_HARDWARE_FIFO,
+};
+
+enum tmc_mem_intf_width {
+ TMC_MEM_INTF_WIDTH_32BITS = 1,
+ TMC_MEM_INTF_WIDTH_64BITS = 2,
+ TMC_MEM_INTF_WIDTH_128BITS = 4,
+ TMC_MEM_INTF_WIDTH_256BITS = 8,
+};
+
+/**
+ * struct tmc_drvdata - specifics associated with a TMC component
+ * @base: memory mapped base address for this component.
+ * @dev: the device entity associated to this component.
+ * @csdev: component vitals needed by the framework.
+ * @miscdev: specifics to handle "/dev/xyz.tmc" entry.
+ * @spinlock:	serialize accesses to this structure and the TMC registers.
+ * @buf: area of memory where trace data get sent.
+ * @paddr: DMA start location in RAM.
+ * @vaddr: virtual representation of @paddr.
+ * @size: @buf size.
+ * @mode: how this TMC is being used.
+ * @config_type: TMC variant, must be of type @tmc_config_type.
+ * @memwidth:	width of the memory interface databus, in 32-bit words.
+ * @trigger_cntr: amount of words to store after a trigger.
+ */
+struct tmc_drvdata {
+ void __iomem *base;
+ struct device *dev;
+ struct coresight_device *csdev;
+ struct miscdevice miscdev;
+ spinlock_t spinlock;
+ bool reading;
+ char *buf;
+ dma_addr_t paddr;
+ void __iomem *vaddr;
+ u32 size;
+ local_t mode;
+ enum tmc_config_type config_type;
+ enum tmc_mem_intf_width memwidth;
+ u32 trigger_cntr;
+};
+
+/* Generic functions */
+void tmc_wait_for_tmcready(struct tmc_drvdata *drvdata);
+void tmc_flush_and_stop(struct tmc_drvdata *drvdata);
+void tmc_enable_hw(struct tmc_drvdata *drvdata);
+void tmc_disable_hw(struct tmc_drvdata *drvdata);
+
+/* ETB/ETF functions */
+int tmc_read_prepare_etb(struct tmc_drvdata *drvdata);
+int tmc_read_unprepare_etb(struct tmc_drvdata *drvdata);
+extern const struct coresight_ops tmc_etb_cs_ops;
+extern const struct coresight_ops tmc_etf_cs_ops;
+
+/* ETR functions */
+int tmc_read_prepare_etr(struct tmc_drvdata *drvdata);
+int tmc_read_unprepare_etr(struct tmc_drvdata *drvdata);
+extern const struct coresight_ops tmc_etr_cs_ops;
+#endif
diff --git a/drivers/hwtracing/coresight/coresight-tpiu.c b/drivers/hwtracing/coresight/coresight-tpiu.c
index 8fb09d9237ab..4e471e2e9d89 100644
--- a/drivers/hwtracing/coresight/coresight-tpiu.c
+++ b/drivers/hwtracing/coresight/coresight-tpiu.c
@@ -167,7 +167,6 @@ static int tpiu_probe(struct amba_device *adev, const struct amba_id *id)
if (IS_ERR(drvdata->csdev))
return PTR_ERR(drvdata->csdev);
- dev_info(dev, "TPIU initialized\n");
return 0;
}
diff --git a/drivers/hwtracing/coresight/coresight.c b/drivers/hwtracing/coresight/coresight.c
index 2ea5961092c1..5443d03a1eec 100644
--- a/drivers/hwtracing/coresight/coresight.c
+++ b/drivers/hwtracing/coresight/coresight.c
@@ -43,7 +43,15 @@ struct coresight_node {
* When operating Coresight drivers from the sysFS interface, only a single
* path can exist from a tracer (associated to a CPU) to a sink.
*/
-static DEFINE_PER_CPU(struct list_head *, sysfs_path);
+static DEFINE_PER_CPU(struct list_head *, tracer_path);
+
+/*
+ * As of this writing only a single STM can be found in CS topologies. Since
+ * there is no way to know if we'll ever see more and what kind of
+ * configuration they will enact, for the time being only define a single path
+ * for STM.
+ */
+static struct list_head *stm_path;
static int coresight_id_match(struct device *dev, void *data)
{
@@ -257,15 +265,27 @@ static void coresight_disable_source(struct coresight_device *csdev)
void coresight_disable_path(struct list_head *path)
{
+ u32 type;
struct coresight_node *nd;
struct coresight_device *csdev, *parent, *child;
list_for_each_entry(nd, path, link) {
csdev = nd->csdev;
+ type = csdev->type;
+
+ /*
+ * ETF devices are tricky... They can be a link or a sink,
+ * depending on how they are configured. If an ETF has been
+ * "activated" it will be configured as a sink, otherwise
+ * go ahead with the link configuration.
+ */
+ if (type == CORESIGHT_DEV_TYPE_LINKSINK)
+ type = (csdev == coresight_get_sink(path)) ?
+ CORESIGHT_DEV_TYPE_SINK :
+ CORESIGHT_DEV_TYPE_LINK;
- switch (csdev->type) {
+ switch (type) {
case CORESIGHT_DEV_TYPE_SINK:
- case CORESIGHT_DEV_TYPE_LINKSINK:
coresight_disable_sink(csdev);
break;
case CORESIGHT_DEV_TYPE_SOURCE:
@@ -286,15 +306,27 @@ int coresight_enable_path(struct list_head *path, u32 mode)
{
int ret = 0;
+ u32 type;
struct coresight_node *nd;
struct coresight_device *csdev, *parent, *child;
list_for_each_entry_reverse(nd, path, link) {
csdev = nd->csdev;
+ type = csdev->type;
- switch (csdev->type) {
+ /*
+ * ETF devices are tricky... They can be a link or a sink,
+ * depending on how they are configured. If an ETF has been
+ * "activated" it will be configured as a sink, otherwise
+ * go ahead with the link configuration.
+ */
+ if (type == CORESIGHT_DEV_TYPE_LINKSINK)
+ type = (csdev == coresight_get_sink(path)) ?
+ CORESIGHT_DEV_TYPE_SINK :
+ CORESIGHT_DEV_TYPE_LINK;
+
+ switch (type) {
case CORESIGHT_DEV_TYPE_SINK:
- case CORESIGHT_DEV_TYPE_LINKSINK:
ret = coresight_enable_sink(csdev, mode);
if (ret)
goto err;
@@ -432,18 +464,45 @@ void coresight_release_path(struct list_head *path)
path = NULL;
}
+/** coresight_validate_source - make sure a source has the right credentials
+ * @csdev: the device structure for a source.
+ * @function: the function this was called from.
+ *
+ * Assumes the coresight_mutex is held.
+ */
+static int coresight_validate_source(struct coresight_device *csdev,
+ const char *function)
+{
+ u32 type, subtype;
+
+ type = csdev->type;
+ subtype = csdev->subtype.source_subtype;
+
+ if (type != CORESIGHT_DEV_TYPE_SOURCE) {
+ dev_err(&csdev->dev, "wrong device type in %s\n", function);
+ return -EINVAL;
+ }
+
+ if (subtype != CORESIGHT_DEV_SUBTYPE_SOURCE_PROC &&
+ subtype != CORESIGHT_DEV_SUBTYPE_SOURCE_SOFTWARE) {
+ dev_err(&csdev->dev, "wrong device subtype in %s\n", function);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
int coresight_enable(struct coresight_device *csdev)
{
- int ret = 0;
- int cpu;
+ int cpu, ret = 0;
struct list_head *path;
mutex_lock(&coresight_mutex);
- if (csdev->type != CORESIGHT_DEV_TYPE_SOURCE) {
- ret = -EINVAL;
- dev_err(&csdev->dev, "wrong device type in %s\n", __func__);
+
+ ret = coresight_validate_source(csdev, __func__);
+ if (ret)
goto out;
- }
+
if (csdev->enable)
goto out;
@@ -461,15 +520,25 @@ int coresight_enable(struct coresight_device *csdev)
if (ret)
goto err_source;
- /*
- * When working from sysFS it is important to keep track
- * of the paths that were created so that they can be
- * undone in 'coresight_disable()'. Since there can only
- * be a single session per tracer (when working from sysFS)
- * a per-cpu variable will do just fine.
- */
- cpu = source_ops(csdev)->cpu_id(csdev);
- per_cpu(sysfs_path, cpu) = path;
+ switch (csdev->subtype.source_subtype) {
+ case CORESIGHT_DEV_SUBTYPE_SOURCE_PROC:
+ /*
+ * When working from sysFS it is important to keep track
+ * of the paths that were created so that they can be
+ * undone in 'coresight_disable()'. Since there can only
+ * be a single session per tracer (when working from sysFS)
+ * a per-cpu variable will do just fine.
+ */
+ cpu = source_ops(csdev)->cpu_id(csdev);
+ per_cpu(tracer_path, cpu) = path;
+ break;
+ case CORESIGHT_DEV_SUBTYPE_SOURCE_SOFTWARE:
+ stm_path = path;
+ break;
+ default:
+ /* We can't be here */
+ break;
+ }
out:
mutex_unlock(&coresight_mutex);
@@ -486,23 +555,36 @@ EXPORT_SYMBOL_GPL(coresight_enable);
void coresight_disable(struct coresight_device *csdev)
{
- int cpu;
- struct list_head *path;
+ int cpu, ret;
+ struct list_head *path = NULL;
mutex_lock(&coresight_mutex);
- if (csdev->type != CORESIGHT_DEV_TYPE_SOURCE) {
- dev_err(&csdev->dev, "wrong device type in %s\n", __func__);
+
+ ret = coresight_validate_source(csdev, __func__);
+ if (ret)
goto out;
- }
+
if (!csdev->enable)
goto out;
- cpu = source_ops(csdev)->cpu_id(csdev);
- path = per_cpu(sysfs_path, cpu);
+ switch (csdev->subtype.source_subtype) {
+ case CORESIGHT_DEV_SUBTYPE_SOURCE_PROC:
+ cpu = source_ops(csdev)->cpu_id(csdev);
+ path = per_cpu(tracer_path, cpu);
+ per_cpu(tracer_path, cpu) = NULL;
+ break;
+ case CORESIGHT_DEV_SUBTYPE_SOURCE_SOFTWARE:
+ path = stm_path;
+ stm_path = NULL;
+ break;
+ default:
+ /* We can't be here */
+ break;
+ }
+
coresight_disable_source(csdev);
coresight_disable_path(path);
coresight_release_path(path);
- per_cpu(sysfs_path, cpu) = NULL;
out:
mutex_unlock(&coresight_mutex);
@@ -514,7 +596,7 @@ static ssize_t enable_sink_show(struct device *dev,
{
struct coresight_device *csdev = to_coresight_device(dev);
- return scnprintf(buf, PAGE_SIZE, "%u\n", (unsigned)csdev->activated);
+ return scnprintf(buf, PAGE_SIZE, "%u\n", csdev->activated);
}
static ssize_t enable_sink_store(struct device *dev,
@@ -544,7 +626,7 @@ static ssize_t enable_source_show(struct device *dev,
{
struct coresight_device *csdev = to_coresight_device(dev);
- return scnprintf(buf, PAGE_SIZE, "%u\n", (unsigned)csdev->enable);
+ return scnprintf(buf, PAGE_SIZE, "%u\n", csdev->enable);
}
static ssize_t enable_source_store(struct device *dev,
diff --git a/drivers/hwtracing/intel_th/core.c b/drivers/hwtracing/intel_th/core.c
index 4272f2ce5f6e..1be543e8e42f 100644
--- a/drivers/hwtracing/intel_th/core.c
+++ b/drivers/hwtracing/intel_th/core.c
@@ -71,6 +71,15 @@ static int intel_th_probe(struct device *dev)
if (ret)
return ret;
+ if (thdrv->attr_group) {
+ ret = sysfs_create_group(&thdev->dev.kobj, thdrv->attr_group);
+ if (ret) {
+ thdrv->remove(thdev);
+
+ return ret;
+ }
+ }
+
if (thdev->type == INTEL_TH_OUTPUT &&
!intel_th_output_assigned(thdev))
ret = hubdrv->assign(hub, thdev);
@@ -91,6 +100,9 @@ static int intel_th_remove(struct device *dev)
return err;
}
+ if (thdrv->attr_group)
+ sysfs_remove_group(&thdev->dev.kobj, thdrv->attr_group);
+
thdrv->remove(thdev);
if (intel_th_output_assigned(thdev)) {
@@ -171,7 +183,14 @@ static DEVICE_ATTR_RO(port);
static int intel_th_output_activate(struct intel_th_device *thdev)
{
- struct intel_th_driver *thdrv = to_intel_th_driver(thdev->dev.driver);
+ struct intel_th_driver *thdrv =
+ to_intel_th_driver_or_null(thdev->dev.driver);
+
+ if (!thdrv)
+ return -ENODEV;
+
+ if (!try_module_get(thdrv->driver.owner))
+ return -ENODEV;
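+	/*
+	 * The reference taken here pins the output driver's module for as
+	 * long as the output stays active; the matching module_put() is in
+	 * intel_th_output_deactivate().
+	 */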
if (thdrv->activate)
return thdrv->activate(thdev);
@@ -183,12 +202,18 @@ static int intel_th_output_activate(struct intel_th_device *thdev)
static void intel_th_output_deactivate(struct intel_th_device *thdev)
{
- struct intel_th_driver *thdrv = to_intel_th_driver(thdev->dev.driver);
+ struct intel_th_driver *thdrv =
+ to_intel_th_driver_or_null(thdev->dev.driver);
+
+ if (!thdrv)
+ return;
if (thdrv->deactivate)
thdrv->deactivate(thdev);
else
intel_th_trace_disable(thdev);
+
+ module_put(thdrv->driver.owner);
}
static ssize_t active_show(struct device *dev, struct device_attribute *attr,
diff --git a/drivers/hwtracing/intel_th/intel_th.h b/drivers/hwtracing/intel_th/intel_th.h
index eedd09332db6..0df22e30673d 100644
--- a/drivers/hwtracing/intel_th/intel_th.h
+++ b/drivers/hwtracing/intel_th/intel_th.h
@@ -115,6 +115,7 @@ intel_th_output_assigned(struct intel_th_device *thdev)
* @enable: enable tracing for a given output device
* @disable: disable tracing for a given output device
* @fops: file operations for device nodes
+ * @attr_group: attributes provided by the driver
*
* Callbacks @probe and @remove are required for all device types.
* Switch device driver needs to fill in @assign, @enable and @disable
@@ -139,6 +140,8 @@ struct intel_th_driver {
void (*deactivate)(struct intel_th_device *thdev);
/* file_operations for those who want a device node */
const struct file_operations *fops;
+ /* optional attributes */
+ struct attribute_group *attr_group;
/* source ops */
int (*set_output)(struct intel_th_device *thdev,
@@ -148,6 +151,9 @@ struct intel_th_driver {
#define to_intel_th_driver(_d) \
container_of((_d), struct intel_th_driver, driver)
+#define to_intel_th_driver_or_null(_d) \
+ ((_d) ? to_intel_th_driver(_d) : NULL)
+
static inline struct intel_th_device *
to_intel_th_hub(struct intel_th_device *thdev)
{
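With @attr_group now created and removed by intel_th_probe()/intel_th_remove() (see the core.c hunks above), a subdriver only has to declare the group. A minimal hypothetical sketch, with demo_* names that are not part of the patch:

	#include <linux/sysfs.h>

	static struct attribute *demo_attrs[] = {
		NULL,	/* device attributes would go here */
	};

	static struct attribute_group demo_group = {
		.attrs	= demo_attrs,
	};

	static int demo_probe(struct intel_th_device *thdev)
	{
		return 0;	/* no sysfs_create_group() needed any more */
	}

	static void demo_remove(struct intel_th_device *thdev)
	{
	}

	static struct intel_th_driver demo_driver = {
		.probe		= demo_probe,
		.remove		= demo_remove,
		.attr_group	= &demo_group,	/* handled by the core */
		.driver		= {
			.name	= "demo",
			.owner	= THIS_MODULE,
		},
	};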
diff --git a/drivers/hwtracing/intel_th/msu.c b/drivers/hwtracing/intel_th/msu.c
index d9d6022c5aca..e8d55a153a65 100644
--- a/drivers/hwtracing/intel_th/msu.c
+++ b/drivers/hwtracing/intel_th/msu.c
@@ -122,7 +122,6 @@ struct msc {
atomic_t mmap_count;
struct mutex buf_mutex;
- struct mutex iter_mutex;
struct list_head iter_list;
/* config */
@@ -257,23 +256,37 @@ static struct msc_iter *msc_iter_install(struct msc *msc)
iter = kzalloc(sizeof(*iter), GFP_KERNEL);
if (!iter)
- return NULL;
+ return ERR_PTR(-ENOMEM);
+
+ mutex_lock(&msc->buf_mutex);
+
+ /*
+ * Reading and tracing are mutually exclusive; if msc is
+ * enabled, open() will fail; otherwise existing readers
+ * will prevent enabling the msc and the rest of fops don't
+ * need to worry about it.
+ */
+ if (msc->enabled) {
+ kfree(iter);
+ iter = ERR_PTR(-EBUSY);
+ goto unlock;
+ }
msc_iter_init(iter);
iter->msc = msc;
- mutex_lock(&msc->iter_mutex);
list_add_tail(&iter->entry, &msc->iter_list);
- mutex_unlock(&msc->iter_mutex);
+unlock:
+ mutex_unlock(&msc->buf_mutex);
return iter;
}
static void msc_iter_remove(struct msc_iter *iter, struct msc *msc)
{
- mutex_lock(&msc->iter_mutex);
+ mutex_lock(&msc->buf_mutex);
list_del(&iter->entry);
- mutex_unlock(&msc->iter_mutex);
+ mutex_unlock(&msc->buf_mutex);
kfree(iter);
}
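The switch from returning NULL to returning ERR_PTR() above lets the open() path propagate a precise error (-ENOMEM vs. -EBUSY). A generic sketch of the idiom, assuming the msc definitions from this file; demo_install() is hypothetical:

	#include <linux/err.h>
	#include <linux/slab.h>

	static struct msc_iter *demo_install(struct msc *msc)
	{
		struct msc_iter *iter = kzalloc(sizeof(*iter), GFP_KERNEL);

		if (!iter)
			return ERR_PTR(-ENOMEM);	/* allocation failure */

		if (msc->enabled) {
			kfree(iter);
			return ERR_PTR(-EBUSY);		/* tracing active: no readers */
		}

		return iter;
	}

	/* caller side, as in intel_th_msc_open():
	 *	iter = demo_install(msc);
	 *	if (IS_ERR(iter))
	 *		return PTR_ERR(iter);
	 */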
@@ -454,7 +467,6 @@ static void msc_buffer_clear_hw_header(struct msc *msc)
{
struct msc_window *win;
- mutex_lock(&msc->buf_mutex);
list_for_each_entry(win, &msc->win_list, entry) {
unsigned int blk;
size_t hw_sz = sizeof(struct msc_block_desc) -
@@ -466,7 +478,6 @@ static void msc_buffer_clear_hw_header(struct msc *msc)
memset(&bdesc->hw_tag, 0, hw_sz);
}
}
- mutex_unlock(&msc->buf_mutex);
}
/**
@@ -474,12 +485,15 @@ static void msc_buffer_clear_hw_header(struct msc *msc)
* @msc: the MSC device to configure
*
* Program storage mode, wrapping, burst length and trace buffer address
- * into a given MSC. If msc::enabled is set, enable the trace, too.
+ * into a given MSC. Then, enable tracing and set msc::enabled.
+ * The latter is serialized on msc::buf_mutex, so make sure to hold it.
*/
static int msc_configure(struct msc *msc)
{
u32 reg;
+ lockdep_assert_held(&msc->buf_mutex);
+
if (msc->mode > MSC_MODE_MULTI)
return -ENOTSUPP;
@@ -497,21 +511,19 @@ static int msc_configure(struct msc *msc)
reg = ioread32(msc->reg_base + REG_MSU_MSC0CTL);
reg &= ~(MSC_MODE | MSC_WRAPEN | MSC_EN | MSC_RD_HDR_OVRD);
+ reg |= MSC_EN;
reg |= msc->mode << __ffs(MSC_MODE);
reg |= msc->burst_len << __ffs(MSC_LEN);
- /*if (msc->mode == MSC_MODE_MULTI)
- reg |= MSC_RD_HDR_OVRD; */
+
if (msc->wrap)
reg |= MSC_WRAPEN;
- if (msc->enabled)
- reg |= MSC_EN;
iowrite32(reg, msc->reg_base + REG_MSU_MSC0CTL);
- if (msc->enabled) {
- msc->thdev->output.multiblock = msc->mode == MSC_MODE_MULTI;
- intel_th_trace_enable(msc->thdev);
- }
+ msc->thdev->output.multiblock = msc->mode == MSC_MODE_MULTI;
+ intel_th_trace_enable(msc->thdev);
+ msc->enabled = 1;
+
return 0;
}
@@ -521,15 +533,14 @@ static int msc_configure(struct msc *msc)
* @msc: MSC device to disable
*
* If @msc is enabled, disable tracing on the switch and then disable MSC
- * storage.
+ * storage. Caller must hold msc::buf_mutex.
*/
static void msc_disable(struct msc *msc)
{
unsigned long count;
u32 reg;
- if (!msc->enabled)
- return;
+ lockdep_assert_held(&msc->buf_mutex);
intel_th_trace_disable(msc->thdev);
@@ -569,33 +580,35 @@ static void msc_disable(struct msc *msc)
static int intel_th_msc_activate(struct intel_th_device *thdev)
{
struct msc *msc = dev_get_drvdata(&thdev->dev);
- int ret = 0;
+ int ret = -EBUSY;
if (!atomic_inc_unless_negative(&msc->user_count))
return -ENODEV;
- mutex_lock(&msc->iter_mutex);
- if (!list_empty(&msc->iter_list))
- ret = -EBUSY;
- mutex_unlock(&msc->iter_mutex);
+ mutex_lock(&msc->buf_mutex);
- if (ret) {
- atomic_dec(&msc->user_count);
- return ret;
- }
+ /* if there are readers, refuse */
+ if (list_empty(&msc->iter_list))
+ ret = msc_configure(msc);
- msc->enabled = 1;
+ mutex_unlock(&msc->buf_mutex);
+
+ if (ret)
+ atomic_dec(&msc->user_count);
- return msc_configure(msc);
+ return ret;
}
static void intel_th_msc_deactivate(struct intel_th_device *thdev)
{
struct msc *msc = dev_get_drvdata(&thdev->dev);
- msc_disable(msc);
-
- atomic_dec(&msc->user_count);
+ mutex_lock(&msc->buf_mutex);
+ if (msc->enabled) {
+ msc_disable(msc);
+ atomic_dec(&msc->user_count);
+ }
+ mutex_unlock(&msc->buf_mutex);
}
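After this change msc::enabled, the iterator list and the hardware enable are all serialized on msc::buf_mutex, and msc_configure()/msc_disable() assert the lock via lockdep. A condensed, hypothetical sketch of the resulting contract (assuming the definitions in msu.c):

	#include <linux/mutex.h>

	static int demo_toggle(struct msc *msc, bool on)
	{
		int ret = 0;

		mutex_lock(&msc->buf_mutex);		/* serializes msc->enabled */
		if (on) {
			if (list_empty(&msc->iter_list))	/* no readers? */
				ret = msc_configure(msc);	/* lockdep-checked */
			else
				ret = -EBUSY;
		} else if (msc->enabled) {
			msc_disable(msc);			/* lockdep-checked */
		}
		mutex_unlock(&msc->buf_mutex);

		return ret;
	}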
/**
@@ -1035,8 +1048,8 @@ static int intel_th_msc_open(struct inode *inode, struct file *file)
return -EPERM;
iter = msc_iter_install(msc);
- if (!iter)
- return -ENOMEM;
+ if (IS_ERR(iter))
+ return PTR_ERR(iter);
file->private_data = iter;
@@ -1101,11 +1114,6 @@ static ssize_t intel_th_msc_read(struct file *file, char __user *buf,
if (!atomic_inc_unless_negative(&msc->user_count))
return 0;
- if (msc->enabled) {
- ret = -EBUSY;
- goto put_count;
- }
-
if (msc->mode == MSC_MODE_SINGLE && !msc->single_wrap)
size = msc->single_sz;
else
@@ -1164,7 +1172,7 @@ static void msc_mmap_close(struct vm_area_struct *vma)
if (!atomic_dec_and_mutex_lock(&msc->mmap_count, &msc->buf_mutex))
return;
- /* drop page _counts */
+ /* drop page _refcounts */
for (pg = 0; pg < msc->nr_pages; pg++) {
struct page *page = msc_buffer_get_page(msc, pg);
@@ -1245,6 +1253,7 @@ static const struct file_operations intel_th_msc_fops = {
.read = intel_th_msc_read,
.mmap = intel_th_msc_mmap,
.llseek = no_llseek,
+ .owner = THIS_MODULE,
};
static int intel_th_msc_init(struct msc *msc)
@@ -1254,8 +1263,6 @@ static int intel_th_msc_init(struct msc *msc)
msc->mode = MSC_MODE_MULTI;
mutex_init(&msc->buf_mutex);
INIT_LIST_HEAD(&msc->win_list);
-
- mutex_init(&msc->iter_mutex);
INIT_LIST_HEAD(&msc->iter_list);
msc->burst_len =
@@ -1393,6 +1400,11 @@ nr_pages_store(struct device *dev, struct device_attribute *attr,
do {
end = memchr(p, ',', len);
s = kstrndup(p, end ? end - p : len, GFP_KERNEL);
+ if (!s) {
+ ret = -ENOMEM;
+ goto free_win;
+ }
+
ret = kstrtoul(s, 10, &val);
kfree(s);
@@ -1473,10 +1485,6 @@ static int intel_th_msc_probe(struct intel_th_device *thdev)
if (err)
return err;
- err = sysfs_create_group(&dev->kobj, &msc_output_group);
- if (err)
- return err;
-
dev_set_drvdata(dev, msc);
return 0;
@@ -1484,7 +1492,18 @@ static int intel_th_msc_probe(struct intel_th_device *thdev)
static void intel_th_msc_remove(struct intel_th_device *thdev)
{
- sysfs_remove_group(&thdev->dev.kobj, &msc_output_group);
+ struct msc *msc = dev_get_drvdata(&thdev->dev);
+ int ret;
+
+ intel_th_msc_deactivate(thdev);
+
+ /*
+ * Buffers should not be used at this point except if the
+ * output character device is still open and the parent
+ * device gets detached from its bus, which is a FIXME.
+ */
+ ret = msc_buffer_free_unless_used(msc);
+ WARN_ON_ONCE(ret);
}
static struct intel_th_driver intel_th_msc_driver = {
@@ -1493,6 +1512,7 @@ static struct intel_th_driver intel_th_msc_driver = {
.activate = intel_th_msc_activate,
.deactivate = intel_th_msc_deactivate,
.fops = &intel_th_msc_fops,
+ .attr_group = &msc_output_group,
.driver = {
.name = "msc",
.owner = THIS_MODULE,
diff --git a/drivers/hwtracing/intel_th/pci.c b/drivers/hwtracing/intel_th/pci.c
index bca7a2ac00d6..5e25c7eb31d3 100644
--- a/drivers/hwtracing/intel_th/pci.c
+++ b/drivers/hwtracing/intel_th/pci.c
@@ -75,6 +75,11 @@ static const struct pci_device_id intel_th_pci_id_table[] = {
PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x0a80),
.driver_data = (kernel_ulong_t)0,
},
+ {
+ /* Broxton B-step */
+ PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x1a8e),
+ .driver_data = (kernel_ulong_t)0,
+ },
{ 0 },
};
diff --git a/drivers/hwtracing/intel_th/pti.c b/drivers/hwtracing/intel_th/pti.c
index 57cbfdcc7ef0..35738b5bfccd 100644
--- a/drivers/hwtracing/intel_th/pti.c
+++ b/drivers/hwtracing/intel_th/pti.c
@@ -200,7 +200,6 @@ static int intel_th_pti_probe(struct intel_th_device *thdev)
struct resource *res;
struct pti_device *pti;
void __iomem *base;
- int ret;
res = intel_th_device_get_resource(thdev, IORESOURCE_MEM, 0);
if (!res)
@@ -219,10 +218,6 @@ static int intel_th_pti_probe(struct intel_th_device *thdev)
read_hw_config(pti);
- ret = sysfs_create_group(&dev->kobj, &pti_output_group);
- if (ret)
- return ret;
-
dev_set_drvdata(dev, pti);
return 0;
@@ -237,6 +232,7 @@ static struct intel_th_driver intel_th_pti_driver = {
.remove = intel_th_pti_remove,
.activate = intel_th_pti_activate,
.deactivate = intel_th_pti_deactivate,
+ .attr_group = &pti_output_group,
.driver = {
.name = "pti",
.owner = THIS_MODULE,
diff --git a/drivers/hwtracing/stm/core.c b/drivers/hwtracing/stm/core.c
index de80d45d8df9..ff31108b066f 100644
--- a/drivers/hwtracing/stm/core.c
+++ b/drivers/hwtracing/stm/core.c
@@ -67,9 +67,24 @@ static ssize_t channels_show(struct device *dev,
static DEVICE_ATTR_RO(channels);
+static ssize_t hw_override_show(struct device *dev,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct stm_device *stm = to_stm_device(dev);
+ int ret;
+
+ ret = sprintf(buf, "%u\n", stm->data->hw_override);
+
+ return ret;
+}
+
+static DEVICE_ATTR_RO(hw_override);
+
static struct attribute *stm_attrs[] = {
&dev_attr_masters.attr,
&dev_attr_channels.attr,
+ &dev_attr_hw_override.attr,
NULL,
};
@@ -546,8 +561,6 @@ static int stm_char_policy_set_ioctl(struct stm_file *stmf, void __user *arg)
if (ret)
goto err_free;
- ret = 0;
-
if (stm->data->link)
ret = stm->data->link(stm->data, stmf->output.master,
stmf->output.channel);
@@ -668,18 +681,11 @@ int stm_register_device(struct device *parent, struct stm_data *stm_data,
stm->dev.parent = parent;
stm->dev.release = stm_device_release;
- err = kobject_set_name(&stm->dev.kobj, "%s", stm_data->name);
- if (err)
- goto err_device;
-
- err = device_add(&stm->dev);
- if (err)
- goto err_device;
-
mutex_init(&stm->link_mutex);
spin_lock_init(&stm->link_lock);
INIT_LIST_HEAD(&stm->link_list);
+ /* initialize the object before it is accessible via sysfs */
spin_lock_init(&stm->mc_lock);
mutex_init(&stm->policy_mutex);
stm->sw_nmasters = nmasters;
@@ -687,9 +693,19 @@ int stm_register_device(struct device *parent, struct stm_data *stm_data,
stm->data = stm_data;
stm_data->stm = stm;
+ err = kobject_set_name(&stm->dev.kobj, "%s", stm_data->name);
+ if (err)
+ goto err_device;
+
+ err = device_add(&stm->dev);
+ if (err)
+ goto err_device;
+
return 0;
err_device:
+ unregister_chrdev(stm->major, stm_data->name);
+
/* matches device_initialize() above */
put_device(&stm->dev);
err_free:
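The reordering above follows the general rule that a device must be fully initialized before device_add() makes it reachable through sysfs, and that the error path must unwind everything registered so far. A minimal, hypothetical sketch of the pattern (demo_register() is not part of the patch):

	#include <linux/device.h>
	#include <linux/mutex.h>

	static int demo_register(struct device *dev, struct mutex *lock)
	{
		int err;

		device_initialize(dev);
		mutex_init(lock);		/* init everything sysfs may touch */

		err = device_add(dev);		/* only now becomes visible */
		if (err) {
			put_device(dev);	/* matches device_initialize() */
			return err;
		}

		return 0;
	}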
diff --git a/drivers/hwtracing/stm/dummy_stm.c b/drivers/hwtracing/stm/dummy_stm.c
index 310adf57e7a1..a86612d989f9 100644
--- a/drivers/hwtracing/stm/dummy_stm.c
+++ b/drivers/hwtracing/stm/dummy_stm.c
@@ -46,9 +46,7 @@ static struct stm_data dummy_stm[DUMMY_STM_MAX];
static int nr_dummies = 4;
-module_param(nr_dummies, int, 0600);
-
-static unsigned int dummy_stm_nr;
+module_param(nr_dummies, int, 0400);
static unsigned int fail_mode;
@@ -65,12 +63,12 @@ static int dummy_stm_link(struct stm_data *data, unsigned int master,
static int dummy_stm_init(void)
{
- int i, ret = -ENOMEM, __nr_dummies = ACCESS_ONCE(nr_dummies);
+ int i, ret = -ENOMEM;
- if (__nr_dummies < 0 || __nr_dummies > DUMMY_STM_MAX)
+ if (nr_dummies < 0 || nr_dummies > DUMMY_STM_MAX)
return -EINVAL;
- for (i = 0; i < __nr_dummies; i++) {
+ for (i = 0; i < nr_dummies; i++) {
dummy_stm[i].name = kasprintf(GFP_KERNEL, "dummy_stm.%d", i);
if (!dummy_stm[i].name)
goto fail_unregister;
@@ -86,8 +84,6 @@ static int dummy_stm_init(void)
goto fail_free;
}
- dummy_stm_nr = __nr_dummies;
-
return 0;
fail_unregister:
@@ -105,7 +101,7 @@ static void dummy_stm_exit(void)
{
int i;
- for (i = 0; i < dummy_stm_nr; i++) {
+ for (i = 0; i < nr_dummies; i++) {
stm_unregister_device(&dummy_stm[i]);
kfree(dummy_stm[i].name);
}
diff --git a/drivers/hwtracing/stm/heartbeat.c b/drivers/hwtracing/stm/heartbeat.c
index 0133571b506f..3da7b673aab2 100644
--- a/drivers/hwtracing/stm/heartbeat.c
+++ b/drivers/hwtracing/stm/heartbeat.c
@@ -26,7 +26,7 @@
static int nr_devs = 4;
static int interval_ms = 10;
-module_param(nr_devs, int, 0600);
+module_param(nr_devs, int, 0400);
module_param(interval_ms, int, 0600);
static struct stm_heartbeat {
@@ -35,8 +35,6 @@ static struct stm_heartbeat {
unsigned int active;
} stm_heartbeat[STM_HEARTBEAT_MAX];
-static unsigned int nr_instances;
-
static const char str[] = "heartbeat stm source driver is here to serve you";
static enum hrtimer_restart stm_heartbeat_hrtimer_handler(struct hrtimer *hr)
@@ -74,12 +72,12 @@ static void stm_heartbeat_unlink(struct stm_source_data *data)
static int stm_heartbeat_init(void)
{
- int i, ret = -ENOMEM, __nr_instances = ACCESS_ONCE(nr_devs);
+ int i, ret = -ENOMEM;
- if (__nr_instances < 0 || __nr_instances > STM_HEARTBEAT_MAX)
+ if (nr_devs < 0 || nr_devs > STM_HEARTBEAT_MAX)
return -EINVAL;
- for (i = 0; i < __nr_instances; i++) {
+ for (i = 0; i < nr_devs; i++) {
stm_heartbeat[i].data.name =
kasprintf(GFP_KERNEL, "heartbeat.%d", i);
if (!stm_heartbeat[i].data.name)
@@ -98,8 +96,6 @@ static int stm_heartbeat_init(void)
goto fail_free;
}
- nr_instances = __nr_instances;
-
return 0;
fail_unregister:
@@ -116,7 +112,7 @@ static void stm_heartbeat_exit(void)
{
int i;
- for (i = 0; i < nr_instances; i++) {
+ for (i = 0; i < nr_devs; i++) {
stm_source_unregister_device(&stm_heartbeat[i].data);
kfree(stm_heartbeat[i].data.name);
}
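Dropping the snapshot variables in dummy_stm and heartbeat works because the parameters are now 0400, i.e. readable in sysfs but not writable at runtime, so init and exit are guaranteed to see the same value. A hypothetical sketch of the idiom:

	#include <linux/module.h>

	static int nr_things = 4;
	module_param(nr_things, int, 0400);	/* read-only after module load */

	static int __init demo_init(void)
	{
		if (nr_things < 0 || nr_things > 16)
			return -EINVAL;
		/* safe to loop over nr_things here ... */
		return 0;
	}

	static void __exit demo_exit(void)
	{
		/* ... and over the same nr_things here */
	}

	module_init(demo_init);
	module_exit(demo_exit);
	MODULE_LICENSE("GPL");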
diff --git a/drivers/hwtracing/stm/policy.c b/drivers/hwtracing/stm/policy.c
index 1db189657b2b..6c0ae2996326 100644
--- a/drivers/hwtracing/stm/policy.c
+++ b/drivers/hwtracing/stm/policy.c
@@ -107,8 +107,7 @@ stp_policy_node_masters_store(struct config_item *item, const char *page,
goto unlock;
/* must be within [sw_start..sw_end], which is an inclusive range */
- if (first > INT_MAX || last > INT_MAX || first > last ||
- first < stm->data->sw_start ||
+ if (first > last || first < stm->data->sw_start ||
last > stm->data->sw_end) {
ret = -ERANGE;
goto unlock;
@@ -342,7 +341,7 @@ stp_policies_make(struct config_group *group, const char *name)
return ERR_PTR(-EINVAL);
}
- *p++ = '\0';
+ *p = '\0';
stm = stm_find_device(devname);
kfree(devname);
diff --git a/drivers/i2c/algos/i2c-algo-bit.c b/drivers/i2c/algos/i2c-algo-bit.c
index 9d233bbde5e1..a8e89df665b9 100644
--- a/drivers/i2c/algos/i2c-algo-bit.c
+++ b/drivers/i2c/algos/i2c-algo-bit.c
@@ -617,7 +617,7 @@ const struct i2c_algorithm i2c_bit_algo = {
};
EXPORT_SYMBOL(i2c_bit_algo);
-const struct i2c_adapter_quirks i2c_bit_quirk_no_clk_stretch = {
+static const struct i2c_adapter_quirks i2c_bit_quirk_no_clk_stretch = {
.flags = I2C_AQ_NO_CLK_STRETCH,
};
diff --git a/drivers/i2c/busses/Kconfig b/drivers/i2c/busses/Kconfig
index 0967e1a5b3a2..f167021b8c21 100644
--- a/drivers/i2c/busses/Kconfig
+++ b/drivers/i2c/busses/Kconfig
@@ -663,7 +663,7 @@ config I2C_MT65XX
config I2C_MV64XXX
tristate "Marvell mv64xxx I2C Controller"
- depends on MV64X60 || PLAT_ORION || ARCH_SUNXI
+ depends on MV64X60 || PLAT_ORION || ARCH_SUNXI || ARCH_MVEBU
help
If you say yes to this option, support will be included for the
built-in I2C interface on the Marvell 64xxx line of host bridges.
@@ -965,7 +965,7 @@ config I2C_XILINX
config I2C_XLR
tristate "Netlogic XLR and Sigma Designs I2C support"
- depends on CPU_XLR || ARCH_TANGOX
+ depends on CPU_XLR || ARCH_TANGO
help
This driver enables support for the on-chip I2C interface of
the Netlogic XLR/XLS MIPS processors and Sigma Designs SOCs.
@@ -985,6 +985,7 @@ config I2C_XLP9XX
config I2C_RCAR
tristate "Renesas R-Car I2C Controller"
+ depends on HAS_DMA
depends on ARCH_RENESAS || COMPILE_TEST
select I2C_SLAVE
help
diff --git a/drivers/i2c/busses/i2c-at91.c b/drivers/i2c/busses/i2c-at91.c
index 921d32bfcda8..f23372669f77 100644
--- a/drivers/i2c/busses/i2c-at91.c
+++ b/drivers/i2c/busses/i2c-at91.c
@@ -1013,7 +1013,7 @@ static int at91_twi_configure_dma(struct at91_twi_dev *dev, u32 phy_addr)
error:
if (ret != -EPROBE_DEFER)
- dev_info(dev->dev, "can't use DMA, error %d\n", ret);
+ dev_info(dev->dev, "can't get DMA channel, continue without DMA support\n");
if (dma->chan_rx)
dma_release_channel(dma->chan_rx);
if (dma->chan_tx)
diff --git a/drivers/i2c/busses/i2c-bcm-iproc.c b/drivers/i2c/busses/i2c-bcm-iproc.c
index b9f0fff4e723..19c843828fe2 100644
--- a/drivers/i2c/busses/i2c-bcm-iproc.c
+++ b/drivers/i2c/busses/i2c-bcm-iproc.c
@@ -267,7 +267,7 @@ static int bcm_iproc_i2c_xfer_single_msg(struct bcm_iproc_i2c_dev *iproc_i2c,
iproc_i2c->msg = msg;
/* format and load slave address into the TX FIFO */
- addr = msg->addr << 1 | (msg->flags & I2C_M_RD ? 1 : 0);
+ addr = i2c_8bit_addr_from_msg(msg);
writel(addr, iproc_i2c->base + M_TX_OFFSET);
/*
diff --git a/drivers/i2c/busses/i2c-bcm-kona.c b/drivers/i2c/busses/i2c-bcm-kona.c
index 2c9d9b1c8e64..ac9f47679c3a 100644
--- a/drivers/i2c/busses/i2c-bcm-kona.c
+++ b/drivers/i2c/busses/i2c-bcm-kona.c
@@ -501,10 +501,7 @@ static int bcm_kona_i2c_do_addr(struct bcm_kona_i2c_dev *dev,
return -EREMOTEIO;
}
} else {
- addr = msg->addr << 1;
-
- if (msg->flags & I2C_M_RD)
- addr |= 1;
+ addr = i2c_8bit_addr_from_msg(msg);
if (bcm_kona_i2c_write_byte(dev, addr, 0) < 0)
return -EREMOTEIO;
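The conversions in this and the following i2c bus drivers all replace the same open-coded pattern with the i2c_8bit_addr_from_msg() helper from <linux/i2c.h>. Its effect is roughly the following (demo_8bit_addr() is only an illustrative equivalent, not the helper itself):

	#include <linux/i2c.h>

	static u8 demo_8bit_addr(const struct i2c_msg *msg)
	{
		u8 addr = msg->addr << 1;	/* 7-bit address in bits 7:1 */

		if (msg->flags & I2C_M_RD)
			addr |= 1;		/* read/write bit in bit 0 */

		return addr;
	}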
diff --git a/drivers/i2c/busses/i2c-brcmstb.c b/drivers/i2c/busses/i2c-brcmstb.c
index 4a45408dd820..6a8cfc1344b2 100644
--- a/drivers/i2c/busses/i2c-brcmstb.c
+++ b/drivers/i2c/busses/i2c-brcmstb.c
@@ -446,9 +446,7 @@ static int brcmstb_i2c_do_addr(struct brcmstb_i2c_dev *dev,
}
} else {
- addr = msg->addr << 1;
- if (msg->flags & I2C_M_RD)
- addr |= 1;
+ addr = i2c_8bit_addr_from_msg(msg);
bsc_writel(dev, addr, chip_address);
}
diff --git a/drivers/i2c/busses/i2c-cpm.c b/drivers/i2c/busses/i2c-cpm.c
index b167ab25310a..ee57c1e865e2 100644
--- a/drivers/i2c/busses/i2c-cpm.c
+++ b/drivers/i2c/busses/i2c-cpm.c
@@ -197,9 +197,7 @@ static void cpm_i2c_parse_message(struct i2c_adapter *adap,
tbdf = cpm->tbase + tx;
rbdf = cpm->rbase + rx;
- addr = pmsg->addr << 1;
- if (pmsg->flags & I2C_M_RD)
- addr |= 1;
+ addr = i2c_8bit_addr_from_msg(pmsg);
tb = cpm->txbuf[tx];
rb = cpm->rxbuf[rx];
diff --git a/drivers/i2c/busses/i2c-dln2.c b/drivers/i2c/busses/i2c-dln2.c
index 1600edd57ce9..f2eb4f76591f 100644
--- a/drivers/i2c/busses/i2c-dln2.c
+++ b/drivers/i2c/busses/i2c-dln2.c
@@ -19,6 +19,7 @@
#include <linux/i2c.h>
#include <linux/platform_device.h>
#include <linux/mfd/dln2.h>
+#include <linux/acpi.h>
#define DLN2_I2C_MODULE_ID 0x03
#define DLN2_I2C_CMD(cmd) DLN2_CMD(cmd, DLN2_I2C_MODULE_ID)
@@ -210,6 +211,7 @@ static int dln2_i2c_probe(struct platform_device *pdev)
dln2->adapter.algo = &dln2_i2c_usb_algorithm;
dln2->adapter.quirks = &dln2_i2c_quirks;
dln2->adapter.dev.parent = dev;
+ ACPI_COMPANION_SET(&dln2->adapter.dev, ACPI_COMPANION(&pdev->dev));
dln2->adapter.dev.of_node = dev->of_node;
i2c_set_adapdata(&dln2->adapter, dln2);
snprintf(dln2->adapter.name, sizeof(dln2->adapter.name), "%s-%s-%d",
diff --git a/drivers/i2c/busses/i2c-exynos5.c b/drivers/i2c/busses/i2c-exynos5.c
index f54ece8fce78..c0e3ada02876 100644
--- a/drivers/i2c/busses/i2c-exynos5.c
+++ b/drivers/i2c/busses/i2c-exynos5.c
@@ -861,14 +861,8 @@ static int exynos5_i2c_resume_noirq(struct device *dev)
#endif
static const struct dev_pm_ops exynos5_i2c_dev_pm_ops = {
-#ifdef CONFIG_PM_SLEEP
- .suspend_noirq = exynos5_i2c_suspend_noirq,
- .resume_noirq = exynos5_i2c_resume_noirq,
- .freeze_noirq = exynos5_i2c_suspend_noirq,
- .thaw_noirq = exynos5_i2c_resume_noirq,
- .poweroff_noirq = exynos5_i2c_suspend_noirq,
- .restore_noirq = exynos5_i2c_resume_noirq,
-#endif
+ SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(exynos5_i2c_suspend_noirq,
+ exynos5_i2c_resume_noirq)
};
static struct platform_driver exynos5_i2c_driver = {
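SET_NOIRQ_SYSTEM_SLEEP_PM_OPS() fills the suspend/resume, freeze/thaw and poweroff/restore _noirq callbacks when CONFIG_PM_SLEEP is enabled and expands to nothing otherwise, which is what the removed #ifdef block did by hand. A hypothetical sketch (demo_* callbacks are not part of the patch):

	#include <linux/pm.h>

	static int __maybe_unused demo_suspend_noirq(struct device *dev)
	{
		return 0;	/* quiesce the controller */
	}

	static int __maybe_unused demo_resume_noirq(struct device *dev)
	{
		return 0;	/* reconfigure the controller */
	}

	static const struct dev_pm_ops demo_pm_ops = {
		SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(demo_suspend_noirq, demo_resume_noirq)
	};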
diff --git a/drivers/i2c/busses/i2c-i801.c b/drivers/i2c/busses/i2c-i801.c
index 585a3b7915bd..64b1208bca5e 100644
--- a/drivers/i2c/busses/i2c-i801.c
+++ b/drivers/i2c/busses/i2c-i801.c
@@ -94,6 +94,7 @@
#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/platform_data/itco_wdt.h>
+#include <linux/pm_runtime.h>
#if (defined CONFIG_I2C_MUX_GPIO || defined CONFIG_I2C_MUX_GPIO_MODULE) && \
defined CONFIG_DMI
@@ -714,9 +715,11 @@ static s32 i801_access(struct i2c_adapter *adap, u16 addr,
{
int hwpec;
int block = 0;
- int ret, xact = 0;
+ int ret = 0, xact = 0;
struct i801_priv *priv = i2c_get_adapdata(adap);
+ pm_runtime_get_sync(&priv->pci_dev->dev);
+
hwpec = (priv->features & FEATURE_SMBUS_PEC) && (flags & I2C_CLIENT_PEC)
&& size != I2C_SMBUS_QUICK
&& size != I2C_SMBUS_I2C_BLOCK_DATA;
@@ -773,7 +776,8 @@ static s32 i801_access(struct i2c_adapter *adap, u16 addr,
default:
dev_err(&priv->pci_dev->dev, "Unsupported transaction %d\n",
size);
- return -EOPNOTSUPP;
+ ret = -EOPNOTSUPP;
+ goto out;
}
if (hwpec) /* enable/disable hardware PEC */
@@ -796,11 +800,11 @@ static s32 i801_access(struct i2c_adapter *adap, u16 addr,
~(SMBAUXCTL_CRC | SMBAUXCTL_E32B), SMBAUXCTL(priv));
if (block)
- return ret;
+ goto out;
if (ret)
- return ret;
+ goto out;
if ((read_write == I2C_SMBUS_WRITE) || (xact == I801_QUICK))
- return 0;
+ goto out;
switch (xact & 0x7f) {
case I801_BYTE: /* Result put in SMBHSTDAT0 */
@@ -812,7 +816,11 @@ static s32 i801_access(struct i2c_adapter *adap, u16 addr,
(inb_p(SMBHSTDAT1(priv)) << 8);
break;
}
- return 0;
+
+out:
+ pm_runtime_mark_last_busy(&priv->pci_dev->dev);
+ pm_runtime_put_autosuspend(&priv->pci_dev->dev);
+ return ret;
}
@@ -1413,6 +1421,11 @@ static int i801_probe(struct pci_dev *dev, const struct pci_device_id *id)
pci_set_drvdata(dev, priv);
+ pm_runtime_set_autosuspend_delay(&dev->dev, 1000);
+ pm_runtime_use_autosuspend(&dev->dev);
+ pm_runtime_put_autosuspend(&dev->dev);
+ pm_runtime_allow(&dev->dev);
+
return 0;
}
@@ -1420,6 +1433,9 @@ static void i801_remove(struct pci_dev *dev)
{
struct i801_priv *priv = pci_get_drvdata(dev);
+ pm_runtime_forbid(&dev->dev);
+ pm_runtime_get_noresume(&dev->dev);
+
i801_del_mux(priv);
i2c_del_adapter(&priv->adapter);
pci_write_config_byte(dev, SMBHSTCFG, priv->original_hstcfg);
@@ -1433,34 +1449,32 @@ static void i801_remove(struct pci_dev *dev)
}
#ifdef CONFIG_PM
-static int i801_suspend(struct pci_dev *dev, pm_message_t mesg)
+static int i801_suspend(struct device *dev)
{
- struct i801_priv *priv = pci_get_drvdata(dev);
+ struct pci_dev *pci_dev = to_pci_dev(dev);
+ struct i801_priv *priv = pci_get_drvdata(pci_dev);
- pci_save_state(dev);
- pci_write_config_byte(dev, SMBHSTCFG, priv->original_hstcfg);
- pci_set_power_state(dev, pci_choose_state(dev, mesg));
+ pci_write_config_byte(pci_dev, SMBHSTCFG, priv->original_hstcfg);
return 0;
}
-static int i801_resume(struct pci_dev *dev)
+static int i801_resume(struct device *dev)
{
- pci_set_power_state(dev, PCI_D0);
- pci_restore_state(dev);
return 0;
}
-#else
-#define i801_suspend NULL
-#define i801_resume NULL
#endif
+static UNIVERSAL_DEV_PM_OPS(i801_pm_ops, i801_suspend,
+ i801_resume, NULL);
+
static struct pci_driver i801_driver = {
.name = "i801_smbus",
.id_table = i801_ids,
.probe = i801_probe,
.remove = i801_remove,
- .suspend = i801_suspend,
- .resume = i801_resume,
+ .driver = {
+ .pm = &i801_pm_ops,
+ },
};
static int __init i2c_i801_init(void)
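The i801 change brackets every transfer with a runtime-PM reference and funnels all returns through a single exit label so the reference is always dropped. A condensed, hypothetical sketch of the pattern:

	#include <linux/pm_runtime.h>

	static int demo_access(struct device *dev)
	{
		int ret;

		pm_runtime_get_sync(dev);	/* resume the controller if suspended */

		ret = 0;			/* ... perform the SMBus transaction ... */
		if (ret)
			goto out;

		/* ... read back results ... */
	out:
		pm_runtime_mark_last_busy(dev);
		pm_runtime_put_autosuspend(dev);	/* re-arm autosuspend */
		return ret;
	}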
diff --git a/drivers/i2c/busses/i2c-ibm_iic.c b/drivers/i2c/busses/i2c-ibm_iic.c
index b6c080334297..cdaa7be2cd1b 100644
--- a/drivers/i2c/busses/i2c-ibm_iic.c
+++ b/drivers/i2c/busses/i2c-ibm_iic.c
@@ -269,7 +269,7 @@ static int iic_smbus_quick(struct ibm_iic_private* dev, const struct i2c_msg* p)
ndelay(t->hd_sta);
/* Send address */
- v = (u8)((p->addr << 1) | ((p->flags & I2C_M_RD) ? 1 : 0));
+ v = i2c_8bit_addr_from_msg(p);
for (i = 0, mask = 0x80; i < 8; ++i, mask >>= 1){
out_8(&iic->directcntl, sda);
ndelay(t->low / 2);
diff --git a/drivers/i2c/busses/i2c-img-scb.c b/drivers/i2c/busses/i2c-img-scb.c
index 379ef9c31664..ea20425b6972 100644
--- a/drivers/i2c/busses/i2c-img-scb.c
+++ b/drivers/i2c/busses/i2c-img-scb.c
@@ -751,9 +751,7 @@ static unsigned int img_i2c_atomic(struct img_i2c *i2c,
switch (i2c->at_cur_cmd) {
case CMD_GEN_START:
next_cmd = CMD_GEN_DATA;
- next_data = (i2c->msg.addr << 1);
- if (i2c->msg.flags & I2C_M_RD)
- next_data |= 0x1;
+ next_data = i2c_8bit_addr_from_msg(&i2c->msg);
break;
case CMD_GEN_DATA:
if (i2c->line_status & LINESTAT_INPUT_HELD_V)
diff --git a/drivers/i2c/busses/i2c-imx.c b/drivers/i2c/busses/i2c-imx.c
index 1ca7ef2314f7..1844bc9f7cd5 100644
--- a/drivers/i2c/busses/i2c-imx.c
+++ b/drivers/i2c/busses/i2c-imx.c
@@ -525,7 +525,7 @@ static int i2c_imx_start(struct imx_i2c_struct *i2c_imx)
imx_i2c_write_reg(i2c_imx->hwdata->i2cr_ien_opcode, i2c_imx, IMX_I2C_I2CR);
/* Wait controller to be stable */
- udelay(50);
+ usleep_range(50, 150);
/* Start I2C transaction */
temp = imx_i2c_read_reg(i2c_imx, IMX_I2C_I2CR);
diff --git a/drivers/i2c/busses/i2c-iop3xx.c b/drivers/i2c/busses/i2c-iop3xx.c
index 72d6161cf77c..85cbe4b55578 100644
--- a/drivers/i2c/busses/i2c-iop3xx.c
+++ b/drivers/i2c/busses/i2c-iop3xx.c
@@ -50,10 +50,7 @@ iic_cook_addr(struct i2c_msg *msg)
{
unsigned char addr;
- addr = (msg->addr << 1);
-
- if (msg->flags & I2C_M_RD)
- addr |= 1;
+ addr = i2c_8bit_addr_from_msg(msg);
return addr;
}
diff --git a/drivers/i2c/busses/i2c-lpc2k.c b/drivers/i2c/busses/i2c-lpc2k.c
index 8560a13bf1b3..586a15205e61 100644
--- a/drivers/i2c/busses/i2c-lpc2k.c
+++ b/drivers/i2c/busses/i2c-lpc2k.c
@@ -133,9 +133,7 @@ static void i2c_lpc2k_pump_msg(struct lpc2k_i2c *i2c)
case M_START:
case M_REPSTART:
/* Start bit was just sent out, send out addr and dir */
- data = i2c->msg->addr << 1;
- if (i2c->msg->flags & I2C_M_RD)
- data |= 1;
+ data = i2c_8bit_addr_from_msg(i2c->msg);
writel(data, i2c->base + LPC24XX_I2DAT);
writel(LPC24XX_STA, i2c->base + LPC24XX_I2CONCLR);
diff --git a/drivers/i2c/busses/i2c-mt65xx.c b/drivers/i2c/busses/i2c-mt65xx.c
index 453358b4d9ca..d9373e60be8a 100644
--- a/drivers/i2c/busses/i2c-mt65xx.c
+++ b/drivers/i2c/busses/i2c-mt65xx.c
@@ -413,10 +413,7 @@ static int mtk_i2c_do_transfer(struct mtk_i2c *i2c, struct i2c_msg *msgs,
else
writew(I2C_FS_START_CON, i2c->base + OFFSET_EXT_CONF);
- addr_reg = msgs->addr << 1;
- if (i2c->op == I2C_MASTER_RD)
- addr_reg |= 0x1;
-
+ addr_reg = i2c_8bit_addr_from_msg(msgs);
writew(addr_reg, i2c->base + OFFSET_SLAVE_ADDR);
/* Clear interrupt status */
diff --git a/drivers/i2c/busses/i2c-mv64xxx.c b/drivers/i2c/busses/i2c-mv64xxx.c
index 43207f52e5a3..b4dec0841bc2 100644
--- a/drivers/i2c/busses/i2c-mv64xxx.c
+++ b/drivers/i2c/busses/i2c-mv64xxx.c
@@ -134,9 +134,7 @@ struct mv64xxx_i2c_data {
int rc;
u32 freq_m;
u32 freq_n;
-#if defined(CONFIG_HAVE_CLK)
struct clk *clk;
-#endif
wait_queue_head_t waitq;
spinlock_t lock;
struct i2c_msg *msg;
@@ -757,7 +755,6 @@ static const struct of_device_id mv64xxx_i2c_of_match_table[] = {
MODULE_DEVICE_TABLE(of, mv64xxx_i2c_of_match_table);
#ifdef CONFIG_OF
-#ifdef CONFIG_HAVE_CLK
static int
mv64xxx_calc_freq(struct mv64xxx_i2c_data *drv_data,
const int tclk, const int n, const int m)
@@ -791,25 +788,20 @@ mv64xxx_find_baud_factors(struct mv64xxx_i2c_data *drv_data,
return false;
return true;
}
-#endif /* CONFIG_HAVE_CLK */
static int
mv64xxx_of_config(struct mv64xxx_i2c_data *drv_data,
struct device *dev)
{
- /* CLK is mandatory when using DT to describe the i2c bus. We
- * need to know tclk in order to calculate bus clock
- * factors.
- */
-#if !defined(CONFIG_HAVE_CLK)
- /* Have OF but no CLK */
- return -ENODEV;
-#else
const struct of_device_id *device;
struct device_node *np = dev->of_node;
u32 bus_freq, tclk;
int rc = 0;
+ /* CLK is mandatory when using DT to describe the i2c bus. We
+ * need to know tclk in order to calculate bus clock
+ * factors.
+ */
if (IS_ERR(drv_data->clk)) {
rc = -ENODEV;
goto out;
@@ -869,7 +861,6 @@ mv64xxx_of_config(struct mv64xxx_i2c_data *drv_data,
out:
return rc;
-#endif
}
#else /* CONFIG_OF */
static int
@@ -907,14 +898,13 @@ mv64xxx_i2c_probe(struct platform_device *pd)
init_waitqueue_head(&drv_data->waitq);
spin_lock_init(&drv_data->lock);
-#if defined(CONFIG_HAVE_CLK)
/* Not all platforms have a clk */
drv_data->clk = devm_clk_get(&pd->dev, NULL);
- if (!IS_ERR(drv_data->clk)) {
- clk_prepare(drv_data->clk);
- clk_enable(drv_data->clk);
- }
-#endif
+ if (IS_ERR(drv_data->clk) && PTR_ERR(drv_data->clk) == -EPROBE_DEFER)
+ return -EPROBE_DEFER;
+ if (!IS_ERR(drv_data->clk))
+ clk_prepare_enable(drv_data->clk);
+
if (pdata) {
drv_data->freq_m = pdata->freq_m;
drv_data->freq_n = pdata->freq_n;
@@ -964,13 +954,10 @@ exit_reset:
if (!IS_ERR_OR_NULL(drv_data->rstc))
reset_control_assert(drv_data->rstc);
exit_clk:
-#if defined(CONFIG_HAVE_CLK)
/* Not all platforms have a clk */
- if (!IS_ERR(drv_data->clk)) {
- clk_disable(drv_data->clk);
- clk_unprepare(drv_data->clk);
- }
-#endif
+ if (!IS_ERR(drv_data->clk))
+ clk_disable_unprepare(drv_data->clk);
+
return rc;
}
@@ -983,13 +970,9 @@ mv64xxx_i2c_remove(struct platform_device *dev)
free_irq(drv_data->irq, drv_data);
if (!IS_ERR_OR_NULL(drv_data->rstc))
reset_control_assert(drv_data->rstc);
-#if defined(CONFIG_HAVE_CLK)
/* Not all platforms have a clk */
- if (!IS_ERR(drv_data->clk)) {
- clk_disable(drv_data->clk);
- clk_unprepare(drv_data->clk);
- }
-#endif
+ if (!IS_ERR(drv_data->clk))
+ clk_disable_unprepare(drv_data->clk);
return 0;
}
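With the CONFIG_HAVE_CLK guards gone, the mv64xxx clock is treated as optional at runtime instead of at build time: a missing clock is tolerated, but -EPROBE_DEFER is propagated so probing can be retried once the provider appears. A hypothetical sketch:

	#include <linux/clk.h>

	static int demo_get_optional_clk(struct device *dev, struct clk **clk)
	{
		*clk = devm_clk_get(dev, NULL);

		if (IS_ERR(*clk) && PTR_ERR(*clk) == -EPROBE_DEFER)
			return -EPROBE_DEFER;	/* clock provider not ready yet */

		if (!IS_ERR(*clk))
			return clk_prepare_enable(*clk);

		return 0;			/* no clock on this platform */
	}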
diff --git a/drivers/i2c/busses/i2c-nforce2.c b/drivers/i2c/busses/i2c-nforce2.c
index 70b3c9158509..42fcc9458432 100644
--- a/drivers/i2c/busses/i2c-nforce2.c
+++ b/drivers/i2c/busses/i2c-nforce2.c
@@ -127,7 +127,7 @@ static struct pci_driver nforce2_driver;
/* For multiplexing support, we need a global reference to the 1st
SMBus channel */
-#if defined CONFIG_I2C_NFORCE2_S4985 || defined CONFIG_I2C_NFORCE2_S4985_MODULE
+#if IS_ENABLED(CONFIG_I2C_NFORCE2_S4985)
struct i2c_adapter *nforce2_smbus;
EXPORT_SYMBOL_GPL(nforce2_smbus);
diff --git a/drivers/i2c/busses/i2c-ocores.c b/drivers/i2c/busses/i2c-ocores.c
index 11b7b87311ed..dfa7a4b4a91d 100644
--- a/drivers/i2c/busses/i2c-ocores.c
+++ b/drivers/i2c/busses/i2c-ocores.c
@@ -178,10 +178,7 @@ static void ocores_process(struct ocores_i2c *i2c)
if (i2c->nmsgs) { /* end? */
/* send start? */
if (!(msg->flags & I2C_M_NOSTART)) {
- u8 addr = (msg->addr << 1);
-
- if (msg->flags & I2C_M_RD)
- addr |= 1;
+ u8 addr = i2c_8bit_addr_from_msg(msg);
i2c->state = STATE_START;
diff --git a/drivers/i2c/busses/i2c-octeon.c b/drivers/i2c/busses/i2c-octeon.c
index 46fb6c42934f..aa5f01efd826 100644
--- a/drivers/i2c/busses/i2c-octeon.c
+++ b/drivers/i2c/busses/i2c-octeon.c
@@ -11,6 +11,7 @@
* warranty of any kind, whether express or implied.
*/
+#include <linux/atomic.h>
#include <linux/platform_device.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
@@ -29,13 +30,23 @@
/* Register offsets */
#define SW_TWSI 0x00
#define TWSI_INT 0x10
+#define SW_TWSI_EXT 0x18
/* Controller command patterns */
#define SW_TWSI_V BIT_ULL(63) /* Valid bit */
+#define SW_TWSI_EIA BIT_ULL(61) /* Extended internal address */
#define SW_TWSI_R BIT_ULL(56) /* Result or read bit */
+#define SW_TWSI_SOVR BIT_ULL(55) /* Size override */
+#define SW_TWSI_SIZE_SHIFT 52
+#define SW_TWSI_ADDR_SHIFT 40
+#define SW_TWSI_IA_SHIFT 32 /* Internal address */
/* Controller opcode word (bits 60:57) */
#define SW_TWSI_OP_SHIFT 57
+#define SW_TWSI_OP_7 (0ULL << SW_TWSI_OP_SHIFT)
+#define SW_TWSI_OP_7_IA (1ULL << SW_TWSI_OP_SHIFT)
+#define SW_TWSI_OP_10 (2ULL << SW_TWSI_OP_SHIFT)
+#define SW_TWSI_OP_10_IA (3ULL << SW_TWSI_OP_SHIFT)
#define SW_TWSI_OP_TWSI_CLK (4ULL << SW_TWSI_OP_SHIFT)
#define SW_TWSI_OP_EOP (6ULL << SW_TWSI_OP_SHIFT) /* Extended opcode */
@@ -48,46 +59,93 @@
#define SW_TWSI_EOP_TWSI_RST (SW_TWSI_OP_EOP | 7ULL << SW_TWSI_EOP_SHIFT)
/* Controller command and status bits */
-#define TWSI_CTL_CE 0x80
+#define TWSI_CTL_CE 0x80 /* High level controller enable */
#define TWSI_CTL_ENAB 0x40 /* Bus enable */
#define TWSI_CTL_STA 0x20 /* Master-mode start, HW clears when done */
#define TWSI_CTL_STP 0x10 /* Master-mode stop, HW clears when done */
#define TWSI_CTL_IFLG 0x08 /* HW event, SW writes 0 to ACK */
#define TWSI_CTL_AAK 0x04 /* Assert ACK */
-/* Some status values */
+/* Status values */
+#define STAT_ERROR 0x00
#define STAT_START 0x08
-#define STAT_RSTART 0x10
+#define STAT_REP_START 0x10
#define STAT_TXADDR_ACK 0x18
+#define STAT_TXADDR_NAK 0x20
#define STAT_TXDATA_ACK 0x28
+#define STAT_TXDATA_NAK 0x30
+#define STAT_LOST_ARB_38 0x38
#define STAT_RXADDR_ACK 0x40
+#define STAT_RXADDR_NAK 0x48
#define STAT_RXDATA_ACK 0x50
+#define STAT_RXDATA_NAK 0x58
+#define STAT_SLAVE_60 0x60
+#define STAT_LOST_ARB_68 0x68
+#define STAT_SLAVE_70 0x70
+#define STAT_LOST_ARB_78 0x78
+#define STAT_SLAVE_80 0x80
+#define STAT_SLAVE_88 0x88
+#define STAT_GENDATA_ACK 0x90
+#define STAT_GENDATA_NAK 0x98
+#define STAT_SLAVE_A0 0xA0
+#define STAT_SLAVE_A8 0xA8
+#define STAT_LOST_ARB_B0 0xB0
+#define STAT_SLAVE_LOST 0xB8
+#define STAT_SLAVE_NAK 0xC0
+#define STAT_SLAVE_ACK 0xC8
+#define STAT_AD2W_ACK 0xD0
+#define STAT_AD2W_NAK 0xD8
#define STAT_IDLE 0xF8
/* TWSI_INT values */
+#define TWSI_INT_ST_INT BIT_ULL(0)
+#define TWSI_INT_TS_INT BIT_ULL(1)
+#define TWSI_INT_CORE_INT BIT_ULL(2)
+#define TWSI_INT_ST_EN BIT_ULL(4)
+#define TWSI_INT_TS_EN BIT_ULL(5)
#define TWSI_INT_CORE_EN BIT_ULL(6)
#define TWSI_INT_SDA_OVR BIT_ULL(8)
#define TWSI_INT_SCL_OVR BIT_ULL(9)
+#define TWSI_INT_SDA BIT_ULL(10)
+#define TWSI_INT_SCL BIT_ULL(11)
+
+#define I2C_OCTEON_EVENT_WAIT 80 /* microseconds */
struct octeon_i2c {
wait_queue_head_t queue;
struct i2c_adapter adap;
int irq;
+ int hlc_irq; /* For cn7890 only */
u32 twsi_freq;
int sys_freq;
void __iomem *twsi_base;
struct device *dev;
+ bool hlc_enabled;
+ bool broken_irq_mode;
+ bool broken_irq_check;
+ void (*int_enable)(struct octeon_i2c *);
+ void (*int_disable)(struct octeon_i2c *);
+ void (*hlc_int_enable)(struct octeon_i2c *);
+ void (*hlc_int_disable)(struct octeon_i2c *);
+ atomic_t int_enable_cnt;
+ atomic_t hlc_int_enable_cnt;
};
+static void octeon_i2c_writeq_flush(u64 val, void __iomem *addr)
+{
+ __raw_writeq(val, addr);
+ __raw_readq(addr); /* wait for write to land */
+}
+
/**
- * octeon_i2c_write_sw - write an I2C core register
+ * octeon_i2c_reg_write - write an I2C core register
* @i2c: The struct octeon_i2c
* @eop_reg: Register selector
* @data: Value to be written
*
* The I2C core registers are accessed indirectly via the SW_TWSI CSR.
*/
-static void octeon_i2c_write_sw(struct octeon_i2c *i2c, u64 eop_reg, u8 data)
+static void octeon_i2c_reg_write(struct octeon_i2c *i2c, u64 eop_reg, u8 data)
{
u64 tmp;
@@ -97,8 +155,13 @@ static void octeon_i2c_write_sw(struct octeon_i2c *i2c, u64 eop_reg, u8 data)
} while ((tmp & SW_TWSI_V) != 0);
}
+#define octeon_i2c_ctl_write(i2c, val) \
+ octeon_i2c_reg_write(i2c, SW_TWSI_EOP_TWSI_CTL, val)
+#define octeon_i2c_data_write(i2c, val) \
+ octeon_i2c_reg_write(i2c, SW_TWSI_EOP_TWSI_DATA, val)
+
/**
- * octeon_i2c_read_sw - read lower bits of an I2C core register
+ * octeon_i2c_reg_read - read lower bits of an I2C core register
* @i2c: The struct octeon_i2c
* @eop_reg: Register selector
*
@@ -106,7 +169,7 @@ static void octeon_i2c_write_sw(struct octeon_i2c *i2c, u64 eop_reg, u8 data)
*
* The I2C core registers are accessed indirectly via the SW_TWSI CSR.
*/
-static u8 octeon_i2c_read_sw(struct octeon_i2c *i2c, u64 eop_reg)
+static u8 octeon_i2c_reg_read(struct octeon_i2c *i2c, u64 eop_reg)
{
u64 tmp;
@@ -118,6 +181,24 @@ static u8 octeon_i2c_read_sw(struct octeon_i2c *i2c, u64 eop_reg)
return tmp & 0xFF;
}
+#define octeon_i2c_ctl_read(i2c) \
+ octeon_i2c_reg_read(i2c, SW_TWSI_EOP_TWSI_CTL)
+#define octeon_i2c_data_read(i2c) \
+ octeon_i2c_reg_read(i2c, SW_TWSI_EOP_TWSI_DATA)
+#define octeon_i2c_stat_read(i2c) \
+ octeon_i2c_reg_read(i2c, SW_TWSI_EOP_TWSI_STAT)
+
+/**
+ * octeon_i2c_read_int - read the TWSI_INT register
+ * @i2c: The struct octeon_i2c
+ *
+ * Returns the value of the register.
+ */
+static u64 octeon_i2c_read_int(struct octeon_i2c *i2c)
+{
+ return __raw_readq(i2c->twsi_base + TWSI_INT);
+}
+
/**
* octeon_i2c_write_int - write the TWSI_INT register
* @i2c: The struct octeon_i2c
@@ -125,8 +206,7 @@ static u8 octeon_i2c_read_sw(struct octeon_i2c *i2c, u64 eop_reg)
*/
static void octeon_i2c_write_int(struct octeon_i2c *i2c, u64 data)
{
- __raw_writeq(data, i2c->twsi_base + TWSI_INT);
- __raw_readq(i2c->twsi_base + TWSI_INT);
+ octeon_i2c_writeq_flush(data, i2c->twsi_base + TWSI_INT);
}
/**
@@ -149,30 +229,96 @@ static void octeon_i2c_int_disable(struct octeon_i2c *i2c)
}
/**
- * octeon_i2c_unblock - unblock the bus
+ * octeon_i2c_int_enable78 - enable the CORE interrupt
* @i2c: The struct octeon_i2c
*
- * If there was a reset while a device was driving 0 to bus, bus is blocked.
- * We toggle it free manually by some clock cycles and send a stop.
+ * The interrupt will be asserted when there is a non-STAT_IDLE state in the
+ * SW_TWSI_EOP_TWSI_STAT register.
*/
-static void octeon_i2c_unblock(struct octeon_i2c *i2c)
+static void octeon_i2c_int_enable78(struct octeon_i2c *i2c)
+{
+ atomic_inc_return(&i2c->int_enable_cnt);
+ enable_irq(i2c->irq);
+}
+
+static void __octeon_i2c_irq_disable(atomic_t *cnt, int irq)
{
- int i;
+ int count;
+
+ /*
+ * The interrupt can be disabled in two places, but we only
+ * want to make the disable_irq_nosync() call once, so keep
+ * track with the atomic variable.
+ */
+ count = atomic_dec_if_positive(cnt);
+ if (count >= 0)
+ disable_irq_nosync(irq);
+}
+
+/* disable the CORE interrupt */
+static void octeon_i2c_int_disable78(struct octeon_i2c *i2c)
+{
+ __octeon_i2c_irq_disable(&i2c->int_enable_cnt, i2c->irq);
+}
+
+/**
+ * octeon_i2c_hlc_int_enable78 - enable the ST interrupt
+ * @i2c: The struct octeon_i2c
+ *
+ * The interrupt will be asserted when there is a non-STAT_IDLE state in
+ * the SW_TWSI_EOP_TWSI_STAT register.
+ */
+static void octeon_i2c_hlc_int_enable78(struct octeon_i2c *i2c)
+{
+ atomic_inc_return(&i2c->hlc_int_enable_cnt);
+ enable_irq(i2c->hlc_irq);
+}
+
+/* disable the ST interrupt */
+static void octeon_i2c_hlc_int_disable78(struct octeon_i2c *i2c)
+{
+ __octeon_i2c_irq_disable(&i2c->hlc_int_enable_cnt, i2c->hlc_irq);
+}
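The atomic counters above implement a disable-once guard, since the interrupt may be disabled from both the ISR and the wait path. A hypothetical sketch of the idiom (demo_irq_disable_once() is illustrative only):

	#include <linux/atomic.h>
	#include <linux/interrupt.h>

	static void demo_irq_disable_once(atomic_t *cnt, int irq)
	{
		/*
		 * atomic_dec_if_positive() returns the old value minus one and
		 * only stores it when the counter was positive, so exactly one
		 * of the competing callers sees a non-negative result.
		 */
		if (atomic_dec_if_positive(cnt) >= 0)
			disable_irq_nosync(irq);
	}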
+
+/*
+ * Cleanup low-level state & enable high-level controller.
+ */
+static void octeon_i2c_hlc_enable(struct octeon_i2c *i2c)
+{
+ int try = 0;
+ u64 val;
+
+ if (i2c->hlc_enabled)
+ return;
+ i2c->hlc_enabled = true;
+
+ while (1) {
+ val = octeon_i2c_ctl_read(i2c);
+ if (!(val & (TWSI_CTL_STA | TWSI_CTL_STP)))
+ break;
+
+ /* clear IFLG event */
+ if (val & TWSI_CTL_IFLG)
+ octeon_i2c_ctl_write(i2c, TWSI_CTL_ENAB);
- dev_dbg(i2c->dev, "%s\n", __func__);
+ if (try++ > 100) {
+ pr_err("%s: giving up\n", __func__);
+ break;
+ }
- for (i = 0; i < 9; i++) {
- octeon_i2c_write_int(i2c, 0);
- udelay(5);
- octeon_i2c_write_int(i2c, TWSI_INT_SCL_OVR);
- udelay(5);
+ /* spin until any start/stop has finished */
+ udelay(10);
}
- /* hand-crank a STOP */
- octeon_i2c_write_int(i2c, TWSI_INT_SDA_OVR | TWSI_INT_SCL_OVR);
- udelay(5);
- octeon_i2c_write_int(i2c, TWSI_INT_SDA_OVR);
- udelay(5);
- octeon_i2c_write_int(i2c, 0);
+ octeon_i2c_ctl_write(i2c, TWSI_CTL_CE | TWSI_CTL_AAK | TWSI_CTL_ENAB);
+}
+
+static void octeon_i2c_hlc_disable(struct octeon_i2c *i2c)
+{
+ if (!i2c->hlc_enabled)
+ return;
+
+ i2c->hlc_enabled = false;
+ octeon_i2c_ctl_write(i2c, TWSI_CTL_ENAB);
}
/* interrupt service routine */
@@ -180,16 +326,44 @@ static irqreturn_t octeon_i2c_isr(int irq, void *dev_id)
{
struct octeon_i2c *i2c = dev_id;
- octeon_i2c_int_disable(i2c);
+ i2c->int_disable(i2c);
+ wake_up(&i2c->queue);
+
+ return IRQ_HANDLED;
+}
+
+/* HLC interrupt service routine */
+static irqreturn_t octeon_i2c_hlc_isr78(int irq, void *dev_id)
+{
+ struct octeon_i2c *i2c = dev_id;
+
+ i2c->hlc_int_disable(i2c);
wake_up(&i2c->queue);
return IRQ_HANDLED;
}
+static bool octeon_i2c_test_iflg(struct octeon_i2c *i2c)
+{
+ return (octeon_i2c_ctl_read(i2c) & TWSI_CTL_IFLG);
+}
-static int octeon_i2c_test_iflg(struct octeon_i2c *i2c)
+static bool octeon_i2c_test_ready(struct octeon_i2c *i2c, bool *first)
{
- return (octeon_i2c_read_sw(i2c, SW_TWSI_EOP_TWSI_CTL) & TWSI_CTL_IFLG) != 0;
+ if (octeon_i2c_test_iflg(i2c))
+ return true;
+
+ if (*first) {
+ *first = false;
+ return false;
+ }
+
+ /*
+ * IRQ has signaled an event but IFLG hasn't changed.
+ * Sleep and retry once.
+ */
+ usleep_range(I2C_OCTEON_EVENT_WAIT, 2 * I2C_OCTEON_EVENT_WAIT);
+ return octeon_i2c_test_iflg(i2c);
}
/**
@@ -201,64 +375,493 @@ static int octeon_i2c_test_iflg(struct octeon_i2c *i2c)
static int octeon_i2c_wait(struct octeon_i2c *i2c)
{
long time_left;
+ bool first = 1;
- octeon_i2c_int_enable(i2c);
- time_left = wait_event_timeout(i2c->queue, octeon_i2c_test_iflg(i2c),
+ /*
+ * Some chip revisions don't assert the irq in the interrupt
+ * controller. So we must poll for the IFLG change.
+ */
+ if (i2c->broken_irq_mode) {
+ u64 end = get_jiffies_64() + i2c->adap.timeout;
+
+ while (!octeon_i2c_test_iflg(i2c) &&
+ time_before64(get_jiffies_64(), end))
+ usleep_range(I2C_OCTEON_EVENT_WAIT / 2, I2C_OCTEON_EVENT_WAIT);
+
+ return octeon_i2c_test_iflg(i2c) ? 0 : -ETIMEDOUT;
+ }
+
+ i2c->int_enable(i2c);
+ time_left = wait_event_timeout(i2c->queue, octeon_i2c_test_ready(i2c, &first),
i2c->adap.timeout);
- octeon_i2c_int_disable(i2c);
- if (!time_left) {
- dev_dbg(i2c->dev, "%s: timeout\n", __func__);
- return -ETIMEDOUT;
+ i2c->int_disable(i2c);
+
+ if (i2c->broken_irq_check && !time_left &&
+ octeon_i2c_test_iflg(i2c)) {
+ dev_err(i2c->dev, "broken irq connection detected, switching to polling mode.\n");
+ i2c->broken_irq_mode = true;
+ return 0;
}
+ if (!time_left)
+ return -ETIMEDOUT;
+
return 0;
}
+static int octeon_i2c_check_status(struct octeon_i2c *i2c, int final_read)
+{
+ u8 stat = octeon_i2c_stat_read(i2c);
+
+ switch (stat) {
+ /* Everything is fine */
+ case STAT_IDLE:
+ case STAT_AD2W_ACK:
+ case STAT_RXADDR_ACK:
+ case STAT_TXADDR_ACK:
+ case STAT_TXDATA_ACK:
+ return 0;
+
+ /* ACK allowed on pre-terminal bytes only */
+ case STAT_RXDATA_ACK:
+ if (!final_read)
+ return 0;
+ return -EIO;
+
+ /* NAK allowed on terminal byte only */
+ case STAT_RXDATA_NAK:
+ if (final_read)
+ return 0;
+ return -EIO;
+
+ /* Arbitration lost */
+ case STAT_LOST_ARB_38:
+ case STAT_LOST_ARB_68:
+ case STAT_LOST_ARB_78:
+ case STAT_LOST_ARB_B0:
+ return -EAGAIN;
+
+ /* Being addressed as slave, should back off & listen */
+ case STAT_SLAVE_60:
+ case STAT_SLAVE_70:
+ case STAT_GENDATA_ACK:
+ case STAT_GENDATA_NAK:
+ return -EOPNOTSUPP;
+
+ /* Core busy as slave */
+ case STAT_SLAVE_80:
+ case STAT_SLAVE_88:
+ case STAT_SLAVE_A0:
+ case STAT_SLAVE_A8:
+ case STAT_SLAVE_LOST:
+ case STAT_SLAVE_NAK:
+ case STAT_SLAVE_ACK:
+ return -EOPNOTSUPP;
+
+ case STAT_TXDATA_NAK:
+ return -EIO;
+ case STAT_TXADDR_NAK:
+ case STAT_RXADDR_NAK:
+ case STAT_AD2W_NAK:
+ return -ENXIO;
+ default:
+ dev_err(i2c->dev, "unhandled state: %d\n", stat);
+ return -EIO;
+ }
+}
+
+static bool octeon_i2c_hlc_test_valid(struct octeon_i2c *i2c)
+{
+ return (__raw_readq(i2c->twsi_base + SW_TWSI) & SW_TWSI_V) == 0;
+}
+
+static bool octeon_i2c_hlc_test_ready(struct octeon_i2c *i2c, bool *first)
+{
+ /* check if valid bit is cleared */
+ if (octeon_i2c_hlc_test_valid(i2c))
+ return true;
+
+ if (*first) {
+ *first = false;
+ return false;
+ }
+
+ /*
+ * IRQ has signaled an event but valid bit isn't cleared.
+ * Sleep and retry once.
+ */
+ usleep_range(I2C_OCTEON_EVENT_WAIT, 2 * I2C_OCTEON_EVENT_WAIT);
+ return octeon_i2c_hlc_test_valid(i2c);
+}
+
+static void octeon_i2c_hlc_int_enable(struct octeon_i2c *i2c)
+{
+ octeon_i2c_write_int(i2c, TWSI_INT_ST_EN);
+}
+
+static void octeon_i2c_hlc_int_clear(struct octeon_i2c *i2c)
+{
+ /* clear ST/TS events, listen for neither */
+ octeon_i2c_write_int(i2c, TWSI_INT_ST_INT | TWSI_INT_TS_INT);
+}
+
/**
- * octeon_i2c_start - send START to the bus
+ * octeon_i2c_hlc_wait - wait for an HLC operation to complete
* @i2c: The struct octeon_i2c
*
- * Returns 0 on success, otherwise a negative errno.
+ * Returns 0 on success, otherwise -ETIMEDOUT.
*/
-static int octeon_i2c_start(struct octeon_i2c *i2c)
+static int octeon_i2c_hlc_wait(struct octeon_i2c *i2c)
{
- int result;
- u8 data;
+ bool first = 1;
+ int time_left;
- octeon_i2c_write_sw(i2c, SW_TWSI_EOP_TWSI_CTL,
- TWSI_CTL_ENAB | TWSI_CTL_STA);
+ /*
+ * Some cn38xx boards don't assert the irq in the interrupt
+ * controller. So we must poll for the valid bit change.
+ */
+ if (i2c->broken_irq_mode) {
+ u64 end = get_jiffies_64() + i2c->adap.timeout;
- result = octeon_i2c_wait(i2c);
- if (result) {
- if (octeon_i2c_read_sw(i2c, SW_TWSI_EOP_TWSI_STAT) == STAT_IDLE) {
+ while (!octeon_i2c_hlc_test_valid(i2c) &&
+ time_before64(get_jiffies_64(), end))
+ usleep_range(I2C_OCTEON_EVENT_WAIT / 2, I2C_OCTEON_EVENT_WAIT);
+
+ return octeon_i2c_hlc_test_valid(i2c) ? 0 : -ETIMEDOUT;
+ }
+
+ i2c->hlc_int_enable(i2c);
+ time_left = wait_event_timeout(i2c->queue,
+ octeon_i2c_hlc_test_ready(i2c, &first),
+ i2c->adap.timeout);
+ i2c->hlc_int_disable(i2c);
+ if (!time_left)
+ octeon_i2c_hlc_int_clear(i2c);
+
+ if (i2c->broken_irq_check && !time_left &&
+ octeon_i2c_hlc_test_valid(i2c)) {
+ dev_err(i2c->dev, "broken irq connection detected, switching to polling mode.\n");
+ i2c->broken_irq_mode = true;
+ return 0;
+ }
+
+ if (!time_left)
+ return -ETIMEDOUT;
+ return 0;
+}
+
+/* high-level-controller pure read of up to 8 bytes */
+static int octeon_i2c_hlc_read(struct octeon_i2c *i2c, struct i2c_msg *msgs)
+{
+ int i, j, ret = 0;
+ u64 cmd;
+
+ octeon_i2c_hlc_enable(i2c);
+ octeon_i2c_hlc_int_clear(i2c);
+
+ cmd = SW_TWSI_V | SW_TWSI_R | SW_TWSI_SOVR;
+ /* SIZE */
+ cmd |= (u64)(msgs[0].len - 1) << SW_TWSI_SIZE_SHIFT;
+ /* A */
+ cmd |= (u64)(msgs[0].addr & 0x7full) << SW_TWSI_ADDR_SHIFT;
+
+ if (msgs[0].flags & I2C_M_TEN)
+ cmd |= SW_TWSI_OP_10;
+ else
+ cmd |= SW_TWSI_OP_7;
+
+ octeon_i2c_writeq_flush(cmd, i2c->twsi_base + SW_TWSI);
+ ret = octeon_i2c_hlc_wait(i2c);
+ if (ret)
+ goto err;
+
+ cmd = __raw_readq(i2c->twsi_base + SW_TWSI);
+ if ((cmd & SW_TWSI_R) == 0)
+ return -EAGAIN;
+
+ for (i = 0, j = msgs[0].len - 1; i < msgs[0].len && i < 4; i++, j--)
+ msgs[0].buf[j] = (cmd >> (8 * i)) & 0xff;
+
+ if (msgs[0].len > 4) {
+ cmd = __raw_readq(i2c->twsi_base + SW_TWSI_EXT);
+ for (i = 0; i < msgs[0].len - 4 && i < 4; i++, j--)
+ msgs[0].buf[j] = (cmd >> (8 * i)) & 0xff;
+ }
+
+err:
+ return ret;
+}
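The high-level controller packs the whole transfer into the 64-bit SW_TWSI command word; the layout used by the read path above can be summarized as follows (macros as added earlier in this patch, 7-bit addressing, 1..8 bytes; demo_hlc_read_cmd() is an illustrative helper, not from the patch):

	static u64 demo_hlc_read_cmd(u8 addr, unsigned int len)
	{
		u64 cmd = SW_TWSI_V | SW_TWSI_R | SW_TWSI_SOVR;

		cmd |= (u64)(len - 1) << SW_TWSI_SIZE_SHIFT;	/* SIZE = length - 1 */
		cmd |= (u64)(addr & 0x7f) << SW_TWSI_ADDR_SHIFT;/* slave address */
		cmd |= SW_TWSI_OP_7;				/* 7-bit address opcode */

		return cmd;	/* data bytes come back in SW_TWSI/SW_TWSI_EXT */
	}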
+
+/* high-level-controller pure write of up to 8 bytes */
+static int octeon_i2c_hlc_write(struct octeon_i2c *i2c, struct i2c_msg *msgs)
+{
+ int i, j, ret = 0;
+ u64 cmd;
+
+ octeon_i2c_hlc_enable(i2c);
+ octeon_i2c_hlc_int_clear(i2c);
+
+ cmd = SW_TWSI_V | SW_TWSI_SOVR;
+ /* SIZE */
+ cmd |= (u64)(msgs[0].len - 1) << SW_TWSI_SIZE_SHIFT;
+ /* A */
+ cmd |= (u64)(msgs[0].addr & 0x7full) << SW_TWSI_ADDR_SHIFT;
+
+ if (msgs[0].flags & I2C_M_TEN)
+ cmd |= SW_TWSI_OP_10;
+ else
+ cmd |= SW_TWSI_OP_7;
+
+ for (i = 0, j = msgs[0].len - 1; i < msgs[0].len && i < 4; i++, j--)
+ cmd |= (u64)msgs[0].buf[j] << (8 * i);
+
+ if (msgs[0].len > 4) {
+ u64 ext = 0;
+
+ for (i = 0; i < msgs[0].len - 4 && i < 4; i++, j--)
+ ext |= (u64)msgs[0].buf[j] << (8 * i);
+ octeon_i2c_writeq_flush(ext, i2c->twsi_base + SW_TWSI_EXT);
+ }
+
+ octeon_i2c_writeq_flush(cmd, i2c->twsi_base + SW_TWSI);
+ ret = octeon_i2c_hlc_wait(i2c);
+ if (ret)
+ goto err;
+
+ cmd = __raw_readq(i2c->twsi_base + SW_TWSI);
+ if ((cmd & SW_TWSI_R) == 0)
+ return -EAGAIN;
+
+ ret = octeon_i2c_check_status(i2c, false);
+
+err:
+ return ret;
+}
+
+/* high-level-controller composite write+read, msg0=addr, msg1=data */
+static int octeon_i2c_hlc_comp_read(struct octeon_i2c *i2c, struct i2c_msg *msgs)
+{
+ int i, j, ret = 0;
+ u64 cmd;
+
+ octeon_i2c_hlc_enable(i2c);
+
+ cmd = SW_TWSI_V | SW_TWSI_R | SW_TWSI_SOVR;
+ /* SIZE */
+ cmd |= (u64)(msgs[1].len - 1) << SW_TWSI_SIZE_SHIFT;
+ /* A */
+ cmd |= (u64)(msgs[0].addr & 0x7full) << SW_TWSI_ADDR_SHIFT;
+
+ if (msgs[0].flags & I2C_M_TEN)
+ cmd |= SW_TWSI_OP_10_IA;
+ else
+ cmd |= SW_TWSI_OP_7_IA;
+
+ if (msgs[0].len == 2) {
+ u64 ext = 0;
+
+ cmd |= SW_TWSI_EIA;
+ ext = (u64)msgs[0].buf[0] << SW_TWSI_IA_SHIFT;
+ cmd |= (u64)msgs[0].buf[1] << SW_TWSI_IA_SHIFT;
+ octeon_i2c_writeq_flush(ext, i2c->twsi_base + SW_TWSI_EXT);
+ } else {
+ cmd |= (u64)msgs[0].buf[0] << SW_TWSI_IA_SHIFT;
+ }
+
+ octeon_i2c_hlc_int_clear(i2c);
+ octeon_i2c_writeq_flush(cmd, i2c->twsi_base + SW_TWSI);
+
+ ret = octeon_i2c_hlc_wait(i2c);
+ if (ret)
+ goto err;
+
+ cmd = __raw_readq(i2c->twsi_base + SW_TWSI);
+ if ((cmd & SW_TWSI_R) == 0)
+ return -EAGAIN;
+
+ for (i = 0, j = msgs[1].len - 1; i < msgs[1].len && i < 4; i++, j--)
+ msgs[1].buf[j] = (cmd >> (8 * i)) & 0xff;
+
+ if (msgs[1].len > 4) {
+ cmd = __raw_readq(i2c->twsi_base + SW_TWSI_EXT);
+ for (i = 0; i < msgs[1].len - 4 && i < 4; i++, j--)
+ msgs[1].buf[j] = (cmd >> (8 * i)) & 0xff;
+ }
+
+err:
+ return ret;
+}
+
+/* high-level-controller composite write+write, m[0]len<=2, m[1]len<=8 */
+static int octeon_i2c_hlc_comp_write(struct octeon_i2c *i2c, struct i2c_msg *msgs)
+{
+ bool set_ext = false;
+ int i, j, ret = 0;
+ u64 cmd, ext = 0;
+
+ octeon_i2c_hlc_enable(i2c);
+
+ cmd = SW_TWSI_V | SW_TWSI_SOVR;
+ /* SIZE */
+ cmd |= (u64)(msgs[1].len - 1) << SW_TWSI_SIZE_SHIFT;
+ /* A */
+ cmd |= (u64)(msgs[0].addr & 0x7full) << SW_TWSI_ADDR_SHIFT;
+
+ if (msgs[0].flags & I2C_M_TEN)
+ cmd |= SW_TWSI_OP_10_IA;
+ else
+ cmd |= SW_TWSI_OP_7_IA;
+
+ if (msgs[0].len == 2) {
+ cmd |= SW_TWSI_EIA;
+ ext |= (u64)msgs[0].buf[0] << SW_TWSI_IA_SHIFT;
+ set_ext = true;
+ cmd |= (u64)msgs[0].buf[1] << SW_TWSI_IA_SHIFT;
+ } else {
+ cmd |= (u64)msgs[0].buf[0] << SW_TWSI_IA_SHIFT;
+ }
+
+ for (i = 0, j = msgs[1].len - 1; i < msgs[1].len && i < 4; i++, j--)
+ cmd |= (u64)msgs[1].buf[j] << (8 * i);
+
+ if (msgs[1].len > 4) {
+ for (i = 0; i < msgs[1].len - 4 && i < 4; i++, j--)
+ ext |= (u64)msgs[1].buf[j] << (8 * i);
+ set_ext = true;
+ }
+ if (set_ext)
+ octeon_i2c_writeq_flush(ext, i2c->twsi_base + SW_TWSI_EXT);
+
+ octeon_i2c_hlc_int_clear(i2c);
+ octeon_i2c_writeq_flush(cmd, i2c->twsi_base + SW_TWSI);
+
+ ret = octeon_i2c_hlc_wait(i2c);
+ if (ret)
+ goto err;
+
+ cmd = __raw_readq(i2c->twsi_base + SW_TWSI);
+ if ((cmd & SW_TWSI_R) == 0)
+ return -EAGAIN;
+
+ ret = octeon_i2c_check_status(i2c, false);
+
+err:
+ return ret;
+}
+
+/* calculate and set clock divisors */
+static void octeon_i2c_set_clock(struct octeon_i2c *i2c)
+{
+ int tclk, thp_base, inc, thp_idx, mdiv_idx, ndiv_idx, foscl, diff;
+ int thp = 0x18, mdiv = 2, ndiv = 0, delta_hz = 1000000;
+
+ for (ndiv_idx = 0; ndiv_idx < 8 && delta_hz != 0; ndiv_idx++) {
+ /*
+ * An mdiv value of less than 2 seems to not work well
+ * with ds1337 RTCs, so we constrain it to larger values.
+ */
+ for (mdiv_idx = 15; mdiv_idx >= 2 && delta_hz != 0; mdiv_idx--) {
/*
- * Controller refused to send start flag May
- * be a client is holding SDA low - let's try
- * to free it.
+ * For given ndiv and mdiv values check the
+ * two closest thp values.
*/
- octeon_i2c_unblock(i2c);
- octeon_i2c_write_sw(i2c, SW_TWSI_EOP_TWSI_CTL,
- TWSI_CTL_ENAB | TWSI_CTL_STA);
- result = octeon_i2c_wait(i2c);
+ tclk = i2c->twsi_freq * (mdiv_idx + 1) * 10;
+ tclk *= (1 << ndiv_idx);
+ thp_base = (i2c->sys_freq / (tclk * 2)) - 1;
+
+ for (inc = 0; inc <= 1; inc++) {
+ thp_idx = thp_base + inc;
+ if (thp_idx < 5 || thp_idx > 0xff)
+ continue;
+
+ foscl = i2c->sys_freq / (2 * (thp_idx + 1));
+ foscl = foscl / (1 << ndiv_idx);
+ foscl = foscl / (mdiv_idx + 1) / 10;
+ diff = abs(foscl - i2c->twsi_freq);
+ if (diff < delta_hz) {
+ delta_hz = diff;
+ thp = thp_idx;
+ mdiv = mdiv_idx;
+ ndiv = ndiv_idx;
+ }
+ }
}
- if (result)
- return result;
}
+ octeon_i2c_reg_write(i2c, SW_TWSI_OP_TWSI_CLK, thp);
+ octeon_i2c_reg_write(i2c, SW_TWSI_EOP_TWSI_CLKCTL, (mdiv << 3) | ndiv);
+}
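The nested search above brute-forces (thp, mdiv, ndiv) and keeps the triple whose resulting frequency is closest to the requested one. The frequency it evaluates is, in condensed form (demo_foscl() is illustrative only, integer math as in the loop above):

	static int demo_foscl(int sys_freq, int thp, int mdiv, int ndiv)
	{
		int foscl = sys_freq / (2 * (thp + 1));

		foscl /= 1 << ndiv;
		foscl /= (mdiv + 1) * 10;

		return foscl;	/* compared against the requested TWSI frequency */
	}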
+
+static int octeon_i2c_init_lowlevel(struct octeon_i2c *i2c)
+{
+ u8 status = 0;
+ int tries;
+
+ /* reset controller */
+ octeon_i2c_reg_write(i2c, SW_TWSI_EOP_TWSI_RST, 0);
- data = octeon_i2c_read_sw(i2c, SW_TWSI_EOP_TWSI_STAT);
- if ((data != STAT_START) && (data != STAT_RSTART)) {
- dev_err(i2c->dev, "%s: bad status (0x%x)\n", __func__, data);
+ for (tries = 10; tries && status != STAT_IDLE; tries--) {
+ udelay(1);
+ status = octeon_i2c_stat_read(i2c);
+ if (status == STAT_IDLE)
+ break;
+ }
+
+ if (status != STAT_IDLE) {
+ dev_err(i2c->dev, "%s: TWSI_RST failed! (0x%x)\n",
+ __func__, status);
return -EIO;
}
+ /* toggle twice to force both teardowns */
+ octeon_i2c_hlc_enable(i2c);
+ octeon_i2c_hlc_disable(i2c);
return 0;
}
+static int octeon_i2c_recovery(struct octeon_i2c *i2c)
+{
+ int ret;
+
+ ret = i2c_recover_bus(&i2c->adap);
+ if (ret)
+ /* recover failed, try hardware re-init */
+ ret = octeon_i2c_init_lowlevel(i2c);
+ return ret;
+}
+
+/**
+ * octeon_i2c_start - send START to the bus
+ * @i2c: The struct octeon_i2c
+ *
+ * Returns 0 on success, otherwise a negative errno.
+ */
+static int octeon_i2c_start(struct octeon_i2c *i2c)
+{
+ int ret;
+ u8 stat;
+
+ octeon_i2c_hlc_disable(i2c);
+
+ octeon_i2c_ctl_write(i2c, TWSI_CTL_ENAB | TWSI_CTL_STA);
+ ret = octeon_i2c_wait(i2c);
+ if (ret)
+ goto error;
+
+ stat = octeon_i2c_stat_read(i2c);
+ if (stat == STAT_START || stat == STAT_REP_START)
+ /* START successful, bail out */
+ return 0;
+
+error:
+ /* START failed, try to recover */
+ ret = octeon_i2c_recovery(i2c);
+ return (ret) ? ret : -EAGAIN;
+}
+
/* send STOP to the bus */
static void octeon_i2c_stop(struct octeon_i2c *i2c)
{
- octeon_i2c_write_sw(i2c, SW_TWSI_EOP_TWSI_CTL,
- TWSI_CTL_ENAB | TWSI_CTL_STP);
+ octeon_i2c_ctl_write(i2c, TWSI_CTL_ENAB | TWSI_CTL_STP);
}
/**
@@ -276,31 +879,21 @@ static int octeon_i2c_write(struct octeon_i2c *i2c, int target,
const u8 *data, int length)
{
int i, result;
- u8 tmp;
-
- result = octeon_i2c_start(i2c);
- if (result)
- return result;
- octeon_i2c_write_sw(i2c, SW_TWSI_EOP_TWSI_DATA, target << 1);
- octeon_i2c_write_sw(i2c, SW_TWSI_EOP_TWSI_CTL, TWSI_CTL_ENAB);
+ octeon_i2c_data_write(i2c, target << 1);
+ octeon_i2c_ctl_write(i2c, TWSI_CTL_ENAB);
result = octeon_i2c_wait(i2c);
if (result)
return result;
for (i = 0; i < length; i++) {
- tmp = octeon_i2c_read_sw(i2c, SW_TWSI_EOP_TWSI_STAT);
-
- if ((tmp != STAT_TXADDR_ACK) && (tmp != STAT_TXDATA_ACK)) {
- dev_err(i2c->dev,
- "%s: bad status before write (0x%x)\n",
- __func__, tmp);
- return -EIO;
- }
+ result = octeon_i2c_check_status(i2c, false);
+ if (result)
+ return result;
- octeon_i2c_write_sw(i2c, SW_TWSI_EOP_TWSI_DATA, data[i]);
- octeon_i2c_write_sw(i2c, SW_TWSI_EOP_TWSI_CTL, TWSI_CTL_ENAB);
+ octeon_i2c_data_write(i2c, data[i]);
+ octeon_i2c_ctl_write(i2c, TWSI_CTL_ENAB);
result = octeon_i2c_wait(i2c);
if (result)
@@ -326,44 +919,36 @@ static int octeon_i2c_read(struct octeon_i2c *i2c, int target,
u8 *data, u16 *rlength, bool recv_len)
{
int i, result, length = *rlength;
- u8 tmp;
+ bool final_read = false;
- if (length < 1)
- return -EINVAL;
+ octeon_i2c_data_write(i2c, (target << 1) | 1);
+ octeon_i2c_ctl_write(i2c, TWSI_CTL_ENAB);
- result = octeon_i2c_start(i2c);
+ result = octeon_i2c_wait(i2c);
if (result)
return result;
- octeon_i2c_write_sw(i2c, SW_TWSI_EOP_TWSI_DATA, (target << 1) | 1);
- octeon_i2c_write_sw(i2c, SW_TWSI_EOP_TWSI_CTL, TWSI_CTL_ENAB);
-
- result = octeon_i2c_wait(i2c);
+ /* address OK ? */
+ result = octeon_i2c_check_status(i2c, false);
if (result)
return result;
for (i = 0; i < length; i++) {
- tmp = octeon_i2c_read_sw(i2c, SW_TWSI_EOP_TWSI_STAT);
+ /* for the last byte TWSI_CTL_AAK must not be set */
+ if (i + 1 == length)
+ final_read = true;
- if ((tmp != STAT_RXDATA_ACK) && (tmp != STAT_RXADDR_ACK)) {
- dev_err(i2c->dev,
- "%s: bad status before read (0x%x)\n",
- __func__, tmp);
- return -EIO;
- }
-
- if (i + 1 < length)
- octeon_i2c_write_sw(i2c, SW_TWSI_EOP_TWSI_CTL,
- TWSI_CTL_ENAB | TWSI_CTL_AAK);
+ /* clear iflg to allow next event */
+ if (final_read)
+ octeon_i2c_ctl_write(i2c, TWSI_CTL_ENAB);
else
- octeon_i2c_write_sw(i2c, SW_TWSI_EOP_TWSI_CTL,
- TWSI_CTL_ENAB);
+ octeon_i2c_ctl_write(i2c, TWSI_CTL_ENAB | TWSI_CTL_AAK);
result = octeon_i2c_wait(i2c);
if (result)
return result;
- data[i] = octeon_i2c_read_sw(i2c, SW_TWSI_EOP_TWSI_DATA);
+ data[i] = octeon_i2c_data_read(i2c);
if (recv_len && i == 0) {
if (data[i] > I2C_SMBUS_BLOCK_MAX + 1) {
dev_err(i2c->dev,
@@ -373,6 +958,10 @@ static int octeon_i2c_read(struct octeon_i2c *i2c, int target,
}
length += data[i];
}
+
+ result = octeon_i2c_check_status(i2c, final_read);
+ if (result)
+ return result;
}
*rlength = length;
return 0;
@@ -392,13 +981,41 @@ static int octeon_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
struct octeon_i2c *i2c = i2c_get_adapdata(adap);
int i, ret = 0;
+ if (num == 1) {
+ if (msgs[0].len > 0 && msgs[0].len <= 8) {
+ if (msgs[0].flags & I2C_M_RD)
+ ret = octeon_i2c_hlc_read(i2c, msgs);
+ else
+ ret = octeon_i2c_hlc_write(i2c, msgs);
+ goto out;
+ }
+ } else if (num == 2) {
+ if ((msgs[0].flags & I2C_M_RD) == 0 &&
+ (msgs[1].flags & I2C_M_RECV_LEN) == 0 &&
+ msgs[0].len > 0 && msgs[0].len <= 2 &&
+ msgs[1].len > 0 && msgs[1].len <= 8 &&
+ msgs[0].addr == msgs[1].addr) {
+ if (msgs[1].flags & I2C_M_RD)
+ ret = octeon_i2c_hlc_comp_read(i2c, msgs);
+ else
+ ret = octeon_i2c_hlc_comp_write(i2c, msgs);
+ goto out;
+ }
+ }
+
for (i = 0; ret == 0 && i < num; i++) {
struct i2c_msg *pmsg = &msgs[i];
- dev_dbg(i2c->dev,
- "Doing %s %d byte(s) to/from 0x%02x - %d of %d messages\n",
- pmsg->flags & I2C_M_RD ? "read" : "write",
- pmsg->len, pmsg->addr, i + 1, num);
+ /* zero-length messages are not supported */
+ if (!pmsg->len) {
+ ret = -EOPNOTSUPP;
+ break;
+ }
+
+ ret = octeon_i2c_start(i2c);
+ if (ret)
+ return ret;
+
if (pmsg->flags & I2C_M_RD)
ret = octeon_i2c_read(i2c, pmsg->addr, pmsg->buf,
&pmsg->len, pmsg->flags & I2C_M_RECV_LEN);
@@ -407,102 +1024,105 @@ static int octeon_i2c_xfer(struct i2c_adapter *adap, struct i2c_msg *msgs,
pmsg->len);
}
octeon_i2c_stop(i2c);
-
+out:
return (ret != 0) ? ret : num;
}
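/*
 * Editor's sketch (not part of the patch): the high-level-controller fast
 * path added above only triggers for two message shapes.  The predicate can
 * be stated stand-alone as below; the struct and function names are
 * hypothetical, while the length limits (up to 8 data bytes, up to 2
 * address/command bytes, same target address) mirror the checks in
 * octeon_i2c_xfer().
 */
#include <stdbool.h>

struct msg_shape {
	unsigned short addr;
	unsigned short len;
	bool is_read;
	bool recv_len;
};

bool hlc_can_handle(const struct msg_shape *m, int num)
{
	if (num == 1)
		return m[0].len > 0 && m[0].len <= 8;
	if (num == 2)
		return !m[0].is_read && !m[1].recv_len &&
		       m[0].len > 0 && m[0].len <= 2 &&
		       m[1].len > 0 && m[1].len <= 8 &&
		       m[0].addr == m[1].addr;
	return false;	/* longer or mismatched transfers use the byte-wise path */
}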
-static u32 octeon_i2c_functionality(struct i2c_adapter *adap)
+static int octeon_i2c_get_scl(struct i2c_adapter *adap)
{
- return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL |
- I2C_FUNC_SMBUS_READ_BLOCK_DATA | I2C_SMBUS_BLOCK_PROC_CALL;
+ struct octeon_i2c *i2c = i2c_get_adapdata(adap);
+ u64 state;
+
+ state = octeon_i2c_read_int(i2c);
+ return state & TWSI_INT_SCL;
}
-static const struct i2c_algorithm octeon_i2c_algo = {
- .master_xfer = octeon_i2c_xfer,
- .functionality = octeon_i2c_functionality,
-};
+static void octeon_i2c_set_scl(struct i2c_adapter *adap, int val)
+{
+ struct octeon_i2c *i2c = i2c_get_adapdata(adap);
-static struct i2c_adapter octeon_i2c_ops = {
- .owner = THIS_MODULE,
- .name = "OCTEON adapter",
- .algo = &octeon_i2c_algo,
- .timeout = HZ / 50,
-};
+ octeon_i2c_write_int(i2c, TWSI_INT_SCL_OVR);
+}
-/* calculate and set clock divisors */
-static void octeon_i2c_set_clock(struct octeon_i2c *i2c)
+static int octeon_i2c_get_sda(struct i2c_adapter *adap)
{
- int tclk, thp_base, inc, thp_idx, mdiv_idx, ndiv_idx, foscl, diff;
- int thp = 0x18, mdiv = 2, ndiv = 0, delta_hz = 1000000;
+ struct octeon_i2c *i2c = i2c_get_adapdata(adap);
+ u64 state;
- for (ndiv_idx = 0; ndiv_idx < 8 && delta_hz != 0; ndiv_idx++) {
- /*
- * An mdiv value of less than 2 seems to not work well
- * with ds1337 RTCs, so we constrain it to larger values.
- */
- for (mdiv_idx = 15; mdiv_idx >= 2 && delta_hz != 0; mdiv_idx--) {
- /*
- * For given ndiv and mdiv values check the
- * two closest thp values.
- */
- tclk = i2c->twsi_freq * (mdiv_idx + 1) * 10;
- tclk *= (1 << ndiv_idx);
- thp_base = (i2c->sys_freq / (tclk * 2)) - 1;
+ state = octeon_i2c_read_int(i2c);
+ return state & TWSI_INT_SDA;
+}
- for (inc = 0; inc <= 1; inc++) {
- thp_idx = thp_base + inc;
- if (thp_idx < 5 || thp_idx > 0xff)
- continue;
+static void octeon_i2c_prepare_recovery(struct i2c_adapter *adap)
+{
+ struct octeon_i2c *i2c = i2c_get_adapdata(adap);
- foscl = i2c->sys_freq / (2 * (thp_idx + 1));
- foscl = foscl / (1 << ndiv_idx);
- foscl = foscl / (mdiv_idx + 1) / 10;
- diff = abs(foscl - i2c->twsi_freq);
- if (diff < delta_hz) {
- delta_hz = diff;
- thp = thp_idx;
- mdiv = mdiv_idx;
- ndiv = ndiv_idx;
- }
- }
- }
- }
- octeon_i2c_write_sw(i2c, SW_TWSI_OP_TWSI_CLK, thp);
- octeon_i2c_write_sw(i2c, SW_TWSI_EOP_TWSI_CLKCTL, (mdiv << 3) | ndiv);
+ /*
+ * The stop resets the state machine but does not _transmit_ STOP
+ * unless the engine was active.
+ */
+ octeon_i2c_stop(i2c);
+
+ octeon_i2c_hlc_disable(i2c);
+ octeon_i2c_write_int(i2c, 0);
}
-static int octeon_i2c_init_lowlevel(struct octeon_i2c *i2c)
+static void octeon_i2c_unprepare_recovery(struct i2c_adapter *adap)
{
- u8 status;
- int tries;
+ struct octeon_i2c *i2c = i2c_get_adapdata(adap);
- /* disable high level controller, enable bus access */
- octeon_i2c_write_sw(i2c, SW_TWSI_EOP_TWSI_CTL, TWSI_CTL_ENAB);
+ octeon_i2c_write_int(i2c, 0);
+}
- /* reset controller */
- octeon_i2c_write_sw(i2c, SW_TWSI_EOP_TWSI_RST, 0);
+static struct i2c_bus_recovery_info octeon_i2c_recovery_info = {
+ .recover_bus = i2c_generic_scl_recovery,
+ .get_scl = octeon_i2c_get_scl,
+ .set_scl = octeon_i2c_set_scl,
+ .get_sda = octeon_i2c_get_sda,
+ .prepare_recovery = octeon_i2c_prepare_recovery,
+ .unprepare_recovery = octeon_i2c_unprepare_recovery,
+};
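/*
 * Editor's sketch (not part of the patch): the struct above only supplies
 * get/set callbacks; the actual clock pulsing is done by the core's
 * i2c_generic_scl_recovery().  This is a rough, self-contained simulation
 * of that idea -- pulse SCL up to nine times until the slave releases
 * SDA -- not the core's actual implementation; all names are hypothetical.
 */
#include <errno.h>
#include <stdbool.h>
#include <stdio.h>

static int stuck_cycles = 3;	/* pretend a slave holds SDA low for three pulses */

static bool sim_get_sda(void)	{ return stuck_cycles <= 0; }
static void sim_pulse_scl(void)	{ stuck_cycles--; }

static int sketch_scl_recovery(void)
{
	int pulse;

	for (pulse = 0; pulse < 9; pulse++) {
		if (sim_get_sda())
			return 0;	/* SDA released: bus recovered */
		sim_pulse_scl();	/* one SCL low/high cycle */
	}
	return sim_get_sda() ? 0 : -EIO;
}

int main(void)
{
	printf("recovery result: %d\n", sketch_scl_recovery());
	return 0;
}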
- for (tries = 10; tries; tries--) {
- udelay(1);
- status = octeon_i2c_read_sw(i2c, SW_TWSI_EOP_TWSI_STAT);
- if (status == STAT_IDLE)
- return 0;
- }
- dev_err(i2c->dev, "%s: TWSI_RST failed! (0x%x)\n", __func__, status);
- return -EIO;
+static u32 octeon_i2c_functionality(struct i2c_adapter *adap)
+{
+ return I2C_FUNC_I2C | (I2C_FUNC_SMBUS_EMUL & ~I2C_FUNC_SMBUS_QUICK) |
+ I2C_FUNC_SMBUS_READ_BLOCK_DATA | I2C_SMBUS_BLOCK_PROC_CALL;
}
+static const struct i2c_algorithm octeon_i2c_algo = {
+ .master_xfer = octeon_i2c_xfer,
+ .functionality = octeon_i2c_functionality,
+};
+
+static struct i2c_adapter octeon_i2c_ops = {
+ .owner = THIS_MODULE,
+ .name = "OCTEON adapter",
+ .algo = &octeon_i2c_algo,
+};
+
static int octeon_i2c_probe(struct platform_device *pdev)
{
struct device_node *node = pdev->dev.of_node;
+ int irq, result = 0, hlc_irq = 0;
struct resource *res_mem;
struct octeon_i2c *i2c;
- int irq, result = 0;
-
- /* All adaptors have an irq. */
- irq = platform_get_irq(pdev, 0);
- if (irq < 0)
- return irq;
+ bool cn78xx_style;
+
+ cn78xx_style = of_device_is_compatible(node, "cavium,octeon-7890-twsi");
+ if (cn78xx_style) {
+ hlc_irq = platform_get_irq(pdev, 0);
+ if (hlc_irq < 0)
+ return hlc_irq;
+
+ irq = platform_get_irq(pdev, 2);
+ if (irq < 0)
+ return irq;
+ } else {
+ /* All adaptors have an irq. */
+ irq = platform_get_irq(pdev, 0);
+ if (irq < 0)
+ return irq;
+ }
i2c = devm_kzalloc(&pdev->dev, sizeof(*i2c), GFP_KERNEL);
if (!i2c) {
@@ -537,6 +1157,31 @@ static int octeon_i2c_probe(struct platform_device *pdev)
i2c->irq = irq;
+ if (cn78xx_style) {
+ i2c->hlc_irq = hlc_irq;
+
+ i2c->int_enable = octeon_i2c_int_enable78;
+ i2c->int_disable = octeon_i2c_int_disable78;
+ i2c->hlc_int_enable = octeon_i2c_hlc_int_enable78;
+ i2c->hlc_int_disable = octeon_i2c_hlc_int_disable78;
+
+ irq_set_status_flags(i2c->irq, IRQ_NOAUTOEN);
+ irq_set_status_flags(i2c->hlc_irq, IRQ_NOAUTOEN);
+
+ result = devm_request_irq(&pdev->dev, i2c->hlc_irq,
+ octeon_i2c_hlc_isr78, 0,
+ DRV_NAME, i2c);
+ if (result < 0) {
+ dev_err(i2c->dev, "failed to attach interrupt\n");
+ goto out;
+ }
+ } else {
+ i2c->int_enable = octeon_i2c_int_enable;
+ i2c->int_disable = octeon_i2c_int_disable;
+ i2c->hlc_int_enable = octeon_i2c_hlc_int_enable;
+ i2c->hlc_int_disable = octeon_i2c_int_disable;
+ }
+
result = devm_request_irq(&pdev->dev, i2c->irq,
octeon_i2c_isr, 0, DRV_NAME, i2c);
if (result < 0) {
@@ -544,6 +1189,9 @@ static int octeon_i2c_probe(struct platform_device *pdev)
goto out;
}
+ if (OCTEON_IS_MODEL(OCTEON_CN38XX))
+ i2c->broken_irq_check = true;
+
result = octeon_i2c_init_lowlevel(i2c);
if (result) {
dev_err(i2c->dev, "init low level failed\n");
@@ -553,6 +1201,9 @@ static int octeon_i2c_probe(struct platform_device *pdev)
octeon_i2c_set_clock(i2c);
i2c->adap = octeon_i2c_ops;
+ i2c->adap.timeout = msecs_to_jiffies(2);
+ i2c->adap.retries = 5;
+ i2c->adap.bus_recovery_info = &octeon_i2c_recovery_info;
i2c->adap.dev.parent = &pdev->dev;
i2c->adap.dev.of_node = node;
i2c_set_adapdata(&i2c->adap, i2c);
@@ -580,6 +1231,7 @@ static int octeon_i2c_remove(struct platform_device *pdev)
static const struct of_device_id octeon_i2c_match[] = {
{ .compatible = "cavium,octeon-3860-twsi", },
+ { .compatible = "cavium,octeon-7890-twsi", },
{},
};
MODULE_DEVICE_TABLE(of, octeon_i2c_match);
diff --git a/drivers/i2c/busses/i2c-omap.c b/drivers/i2c/busses/i2c-omap.c
index 13c45296ce5b..ab1279b8e240 100644
--- a/drivers/i2c/busses/i2c-omap.c
+++ b/drivers/i2c/busses/i2c-omap.c
@@ -185,7 +185,6 @@ enum {
#define OMAP_I2C_IP_V2_INTERRUPTS_MASK 0x6FFF
struct omap_i2c_dev {
- spinlock_t lock; /* IRQ synchronization */
struct device *dev;
void __iomem *base; /* virtual */
int irq;
@@ -995,15 +994,12 @@ omap_i2c_isr(int irq, void *dev_id)
u16 mask;
u16 stat;
- spin_lock(&omap->lock);
- mask = omap_i2c_read_reg(omap, OMAP_I2C_IE_REG);
stat = omap_i2c_read_reg(omap, OMAP_I2C_STAT_REG);
+ mask = omap_i2c_read_reg(omap, OMAP_I2C_IE_REG);
if (stat & mask)
ret = IRQ_WAKE_THREAD;
- spin_unlock(&omap->lock);
-
return ret;
}
@@ -1011,12 +1007,10 @@ static irqreturn_t
omap_i2c_isr_thread(int this_irq, void *dev_id)
{
struct omap_i2c_dev *omap = dev_id;
- unsigned long flags;
u16 bits;
u16 stat;
int err = 0, count = 0;
- spin_lock_irqsave(&omap->lock, flags);
do {
bits = omap_i2c_read_reg(omap, OMAP_I2C_IE_REG);
stat = omap_i2c_read_reg(omap, OMAP_I2C_STAT_REG);
@@ -1142,8 +1136,6 @@ omap_i2c_isr_thread(int this_irq, void *dev_id)
omap_i2c_complete_cmd(omap, err);
out:
- spin_unlock_irqrestore(&omap->lock, flags);
-
return IRQ_HANDLED;
}
@@ -1330,8 +1322,6 @@ omap_i2c_probe(struct platform_device *pdev)
omap->dev = &pdev->dev;
omap->irq = irq;
- spin_lock_init(&omap->lock);
-
platform_set_drvdata(pdev, omap);
init_completion(&omap->cmd_complete);
diff --git a/drivers/i2c/busses/i2c-powermac.c b/drivers/i2c/busses/i2c-powermac.c
index 6abcf696e359..b0d9dee14a7e 100644
--- a/drivers/i2c/busses/i2c-powermac.c
+++ b/drivers/i2c/busses/i2c-powermac.c
@@ -150,13 +150,11 @@ static int i2c_powermac_master_xfer( struct i2c_adapter *adap,
{
struct pmac_i2c_bus *bus = i2c_get_adapdata(adap);
int rc = 0;
- int read;
int addrdir;
if (msgs->flags & I2C_M_TEN)
return -EINVAL;
- read = (msgs->flags & I2C_M_RD) != 0;
- addrdir = (msgs->addr << 1) | read;
+ addrdir = i2c_8bit_addr_from_msg(msgs);
rc = pmac_i2c_open(bus, 0);
if (rc) {
diff --git a/drivers/i2c/busses/i2c-qup.c b/drivers/i2c/busses/i2c-qup.c
index 23eaabb19f96..cc6439ab3f71 100644
--- a/drivers/i2c/busses/i2c-qup.c
+++ b/drivers/i2c/busses/i2c-qup.c
@@ -515,7 +515,7 @@ static int qup_i2c_get_data_len(struct qup_i2c_dev *qup)
static int qup_i2c_set_tags(u8 *tags, struct qup_i2c_dev *qup,
struct i2c_msg *msg, int is_dma)
{
- u16 addr = (msg->addr << 1) | ((msg->flags & I2C_M_RD) == I2C_M_RD);
+ u16 addr = i2c_8bit_addr_from_msg(msg);
int len = 0;
int data_len;
diff --git a/drivers/i2c/busses/i2c-rcar.c b/drivers/i2c/busses/i2c-rcar.c
index 68ecb5630ad5..52407f3c9e1c 100644
--- a/drivers/i2c/busses/i2c-rcar.c
+++ b/drivers/i2c/busses/i2c-rcar.c
@@ -21,6 +21,8 @@
*/
#include <linux/clk.h>
#include <linux/delay.h>
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
#include <linux/err.h>
#include <linux/interrupt.h>
#include <linux/io.h>
@@ -43,6 +45,8 @@
#define ICSAR 0x1C /* slave address */
#define ICMAR 0x20 /* master address */
#define ICRXTX 0x24 /* data port */
+#define ICDMAER 0x3c /* DMA enable */
+#define ICFBSCR 0x38 /* first bit setup cycle */
/* ICSCR */
#define SDBS (1 << 3) /* slave data buffer select */
@@ -78,6 +82,16 @@
#define MDR (1 << 1)
#define MAT (1 << 0) /* slave addr xfer done */
+/* ICDMAER */
+#define RSDMAE (1 << 3) /* DMA Slave Received Enable */
+#define TSDMAE (1 << 2) /* DMA Slave Transmitted Enable */
+#define RMDMAE (1 << 1) /* DMA Master Received Enable */
+#define TMDMAE (1 << 0) /* DMA Master Transmitted Enable */
+
+/* ICFBSCR */
+#define TCYC06 0x04 /* 6*Tcyc delay 1st bit between SDA and SCL */
+#define TCYC17 0x0f /* 17*Tcyc delay 1st bit between SDA and SCL */
+
#define RCAR_BUS_PHASE_START (MDBS | MIE | ESG)
#define RCAR_BUS_PHASE_DATA (MDBS | MIE)
@@ -120,6 +134,12 @@ struct rcar_i2c_priv {
u32 flags;
enum rcar_i2c_type devtype;
struct i2c_client *slave;
+
+ struct resource *res;
+ struct dma_chan *dma_tx;
+ struct dma_chan *dma_rx;
+ struct scatterlist sg;
+ enum dma_data_direction dma_direction;
};
#define rcar_i2c_priv_to_dev(p) ((p)->adap.dev.parent)
@@ -287,6 +307,118 @@ static void rcar_i2c_next_msg(struct rcar_i2c_priv *priv)
/*
* interrupt functions
*/
+static void rcar_i2c_dma_unmap(struct rcar_i2c_priv *priv)
+{
+ struct dma_chan *chan = priv->dma_direction == DMA_FROM_DEVICE
+ ? priv->dma_rx : priv->dma_tx;
+
+ /* Disable DMA Master Received/Transmitted */
+ rcar_i2c_write(priv, ICDMAER, 0);
+
+ /* Reset default delay */
+ rcar_i2c_write(priv, ICFBSCR, TCYC06);
+
+ dma_unmap_single(chan->device->dev, sg_dma_address(&priv->sg),
+ priv->msg->len, priv->dma_direction);
+
+ priv->dma_direction = DMA_NONE;
+}
+
+static void rcar_i2c_cleanup_dma(struct rcar_i2c_priv *priv)
+{
+ if (priv->dma_direction == DMA_NONE)
+ return;
+ else if (priv->dma_direction == DMA_FROM_DEVICE)
+ dmaengine_terminate_all(priv->dma_rx);
+ else if (priv->dma_direction == DMA_TO_DEVICE)
+ dmaengine_terminate_all(priv->dma_tx);
+
+ rcar_i2c_dma_unmap(priv);
+}
+
+static void rcar_i2c_dma_callback(void *data)
+{
+ struct rcar_i2c_priv *priv = data;
+
+ priv->pos += sg_dma_len(&priv->sg);
+
+ rcar_i2c_dma_unmap(priv);
+}
+
+static void rcar_i2c_dma(struct rcar_i2c_priv *priv)
+{
+ struct device *dev = rcar_i2c_priv_to_dev(priv);
+ struct i2c_msg *msg = priv->msg;
+ bool read = msg->flags & I2C_M_RD;
+ enum dma_data_direction dir = read ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
+ struct dma_chan *chan = read ? priv->dma_rx : priv->dma_tx;
+ struct dma_async_tx_descriptor *txdesc;
+ dma_addr_t dma_addr;
+ dma_cookie_t cookie;
+ unsigned char *buf;
+ int len;
+
+ /* Do not use DMA if it's not available or for messages < 8 bytes */
+ if (IS_ERR(chan) || msg->len < 8)
+ return;
+
+ if (read) {
+ /*
+ * The last two bytes need to be fetched using PIO in
+ * order for the STOP phase to work.
+ */
+ buf = priv->msg->buf;
+ len = priv->msg->len - 2;
+ } else {
+ /*
+ * First byte in message was sent using PIO.
+ */
+ buf = priv->msg->buf + 1;
+ len = priv->msg->len - 1;
+ }
+
+ dma_addr = dma_map_single(chan->device->dev, buf, len, dir);
+ if (dma_mapping_error(dev, dma_addr)) {
+ dev_dbg(dev, "dma map failed, using PIO\n");
+ return;
+ }
+
+ sg_dma_len(&priv->sg) = len;
+ sg_dma_address(&priv->sg) = dma_addr;
+
+ priv->dma_direction = dir;
+
+ txdesc = dmaengine_prep_slave_sg(chan, &priv->sg, 1,
+ read ? DMA_DEV_TO_MEM : DMA_MEM_TO_DEV,
+ DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+ if (!txdesc) {
+ dev_dbg(dev, "dma prep slave sg failed, using PIO\n");
+ rcar_i2c_cleanup_dma(priv);
+ return;
+ }
+
+ txdesc->callback = rcar_i2c_dma_callback;
+ txdesc->callback_param = priv;
+
+ cookie = dmaengine_submit(txdesc);
+ if (dma_submit_error(cookie)) {
+ dev_dbg(dev, "submitting dma failed, using PIO\n");
+ rcar_i2c_cleanup_dma(priv);
+ return;
+ }
+
+ /* Set delay for DMA operations */
+ rcar_i2c_write(priv, ICFBSCR, TCYC17);
+
+ /* Enable DMA Master Received/Transmitted */
+ if (read)
+ rcar_i2c_write(priv, ICDMAER, RMDMAE);
+ else
+ rcar_i2c_write(priv, ICDMAER, TMDMAE);
+
+ dma_async_issue_pending(chan);
+}
+
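/*
 * Editor's sketch (not part of the patch): the DMA window above deliberately
 * excludes the bytes that must stay PIO -- the first byte of a write (it was
 * already pushed in the address-done interrupt) and the last two bytes of a
 * read (needed for the STOP handshake).  A hypothetical helper computing that
 * window from the message length:
 */
#include <stdbool.h>
#include <stddef.h>

struct dma_window {
	size_t offset;	/* first byte handled by DMA */
	size_t len;	/* number of bytes handled by DMA */
};

struct dma_window rcar_style_dma_window(size_t msg_len, bool is_read)
{
	struct dma_window w = { 0, 0 };

	if (msg_len < 8)		/* short transfers stay pure PIO */
		return w;

	if (is_read) {
		w.offset = 0;
		w.len = msg_len - 2;	/* last two bytes are read by PIO */
	} else {
		w.offset = 1;		/* first byte was already sent by PIO */
		w.len = msg_len - 1;
	}
	return w;
}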
static void rcar_i2c_irq_send(struct rcar_i2c_priv *priv, u32 msr)
{
struct i2c_msg *msg = priv->msg;
@@ -306,6 +438,12 @@ static void rcar_i2c_irq_send(struct rcar_i2c_priv *priv, u32 msr)
rcar_i2c_write(priv, ICRXTX, msg->buf[priv->pos]);
priv->pos++;
+ /*
+ * Try to use DMA to transmit the rest of the data if
+ * address transfer phase just finished.
+ */
+ if (msr & MAT)
+ rcar_i2c_dma(priv);
} else {
/*
* The last data was pushed to ICRXTX on _PREV_ empty irq.
@@ -340,7 +478,11 @@ static void rcar_i2c_irq_recv(struct rcar_i2c_priv *priv, u32 msr)
return;
if (msr & MAT) {
- /* Address transfer phase finished, but no data at this point. */
+ /*
+ * Address transfer phase finished, but no data at this point.
+ * Try to use DMA to receive data.
+ */
+ rcar_i2c_dma(priv);
} else if (priv->pos < msg->len) {
/* get received data */
msg->buf[priv->pos] = rcar_i2c_read(priv, ICRXTX);
@@ -472,6 +614,81 @@ out:
return IRQ_HANDLED;
}
+static struct dma_chan *rcar_i2c_request_dma_chan(struct device *dev,
+ enum dma_transfer_direction dir,
+ dma_addr_t port_addr)
+{
+ struct dma_chan *chan;
+ struct dma_slave_config cfg;
+ char *chan_name = dir == DMA_MEM_TO_DEV ? "tx" : "rx";
+ int ret;
+
+ chan = dma_request_chan(dev, chan_name);
+ if (IS_ERR(chan)) {
+ ret = PTR_ERR(chan);
+ dev_dbg(dev, "request_channel failed for %s (%d)\n",
+ chan_name, ret);
+ return chan;
+ }
+
+ memset(&cfg, 0, sizeof(cfg));
+ cfg.direction = dir;
+ if (dir == DMA_MEM_TO_DEV) {
+ cfg.dst_addr = port_addr;
+ cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+ } else {
+ cfg.src_addr = port_addr;
+ cfg.src_addr_width = DMA_SLAVE_BUSWIDTH_1_BYTE;
+ }
+
+ ret = dmaengine_slave_config(chan, &cfg);
+ if (ret) {
+ dev_dbg(dev, "slave_config failed for %s (%d)\n",
+ chan_name, ret);
+ dma_release_channel(chan);
+ return ERR_PTR(ret);
+ }
+
+ dev_dbg(dev, "got DMA channel for %s\n", chan_name);
+ return chan;
+}
+
+static void rcar_i2c_request_dma(struct rcar_i2c_priv *priv,
+ struct i2c_msg *msg)
+{
+ struct device *dev = rcar_i2c_priv_to_dev(priv);
+ bool read;
+ struct dma_chan *chan;
+ enum dma_transfer_direction dir;
+
+ read = msg->flags & I2C_M_RD;
+
+ chan = read ? priv->dma_rx : priv->dma_tx;
+ if (PTR_ERR(chan) != -EPROBE_DEFER)
+ return;
+
+ dir = read ? DMA_DEV_TO_MEM : DMA_MEM_TO_DEV;
+ chan = rcar_i2c_request_dma_chan(dev, dir, priv->res->start + ICRXTX);
+
+ if (read)
+ priv->dma_rx = chan;
+ else
+ priv->dma_tx = chan;
+}
+
+static void rcar_i2c_release_dma(struct rcar_i2c_priv *priv)
+{
+ if (!IS_ERR(priv->dma_tx)) {
+ dma_release_channel(priv->dma_tx);
+ priv->dma_tx = ERR_PTR(-EPROBE_DEFER);
+ }
+
+ if (!IS_ERR(priv->dma_rx)) {
+ dma_release_channel(priv->dma_rx);
+ priv->dma_rx = ERR_PTR(-EPROBE_DEFER);
+ }
+}
+
static int rcar_i2c_master_xfer(struct i2c_adapter *adap,
struct i2c_msg *msgs,
int num)
@@ -493,6 +710,7 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap,
ret = -EOPNOTSUPP;
goto out;
}
+ rcar_i2c_request_dma(priv, msgs + i);
}
/* init first message */
@@ -504,6 +722,7 @@ static int rcar_i2c_master_xfer(struct i2c_adapter *adap,
time_left = wait_event_timeout(priv->wait, priv->flags & ID_DONE,
num * adap->timeout);
if (!time_left) {
+ rcar_i2c_cleanup_dma(priv);
rcar_i2c_init(priv);
ret = -ETIMEDOUT;
} else if (priv->flags & ID_NACK) {
@@ -591,7 +810,6 @@ static int rcar_i2c_probe(struct platform_device *pdev)
{
struct rcar_i2c_priv *priv;
struct i2c_adapter *adap;
- struct resource *res;
struct device *dev = &pdev->dev;
struct i2c_timings i2c_t;
int irq, ret;
@@ -606,8 +824,9 @@ static int rcar_i2c_probe(struct platform_device *pdev)
return PTR_ERR(priv->clk);
}
- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- priv->io = devm_ioremap_resource(dev, res);
+ priv->res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+
+ priv->io = devm_ioremap_resource(dev, priv->res);
if (IS_ERR(priv->io))
return PTR_ERR(priv->io);
@@ -626,6 +845,11 @@ static int rcar_i2c_probe(struct platform_device *pdev)
i2c_parse_fw_timings(dev, &i2c_t, false);
+ /* Init DMA */
+ sg_init_table(&priv->sg, 1);
+ priv->dma_direction = DMA_NONE;
+ priv->dma_rx = priv->dma_tx = ERR_PTR(-EPROBE_DEFER);
+
pm_runtime_enable(dev);
pm_runtime_get_sync(dev);
ret = rcar_i2c_clock_calculate(priv, &i2c_t);
@@ -673,6 +897,7 @@ static int rcar_i2c_remove(struct platform_device *pdev)
struct device *dev = &pdev->dev;
i2c_del_adapter(&priv->adap);
+ rcar_i2c_release_dma(priv);
if (priv->flags & ID_P_PM_BLOCKED)
pm_runtime_put(dev);
pm_runtime_disable(dev);
diff --git a/drivers/i2c/busses/i2c-rk3x.c b/drivers/i2c/busses/i2c-rk3x.c
index 3dcc5f3f26cb..80bed02cd942 100644
--- a/drivers/i2c/busses/i2c-rk3x.c
+++ b/drivers/i2c/busses/i2c-rk3x.c
@@ -101,10 +101,7 @@ struct rk3x_i2c {
struct notifier_block clk_rate_nb;
/* Settings */
- unsigned int scl_frequency;
- unsigned int scl_rise_ns;
- unsigned int scl_fall_ns;
- unsigned int sda_fall_ns;
+ struct i2c_timings t;
/* Synchronization & notification */
spinlock_t lock;
@@ -437,10 +434,7 @@ out:
* Calculate divider values for desired SCL frequency
*
* @clk_rate: I2C input clock rate
- * @scl_rate: Desired SCL rate
- * @scl_rise_ns: How many ns it takes for SCL to rise.
- * @scl_fall_ns: How many ns it takes for SCL to fall.
- * @sda_fall_ns: How many ns it takes for SDA to fall.
+ * @t: Known I2C timing information.
* @div_low: Divider output for low
* @div_high: Divider output for high
*
@@ -448,11 +442,10 @@ out:
* a best-effort divider value is returned in divs. If the target rate is
* too high, we silently use the highest possible rate.
*/
-static int rk3x_i2c_calc_divs(unsigned long clk_rate, unsigned long scl_rate,
- unsigned long scl_rise_ns,
- unsigned long scl_fall_ns,
- unsigned long sda_fall_ns,
- unsigned long *div_low, unsigned long *div_high)
+static int rk3x_i2c_calc_divs(unsigned long clk_rate,
+ struct i2c_timings *t,
+ unsigned long *div_low,
+ unsigned long *div_high)
{
unsigned long spec_min_low_ns, spec_min_high_ns;
unsigned long spec_setup_start, spec_max_data_hold_ns;
@@ -472,12 +465,12 @@ static int rk3x_i2c_calc_divs(unsigned long clk_rate, unsigned long scl_rate,
int ret = 0;
/* Only support standard-mode and fast-mode */
- if (WARN_ON(scl_rate > 400000))
- scl_rate = 400000;
+ if (WARN_ON(t->bus_freq_hz > 400000))
+ t->bus_freq_hz = 400000;
/* prevent scl_rate_khz from becoming 0 */
- if (WARN_ON(scl_rate < 1000))
- scl_rate = 1000;
+ if (WARN_ON(t->bus_freq_hz < 1000))
+ t->bus_freq_hz = 1000;
/*
* min_low_ns: The minimum number of ns we need to hold low to
@@ -491,7 +484,7 @@ static int rk3x_i2c_calc_divs(unsigned long clk_rate, unsigned long scl_rate,
* This is because the i2c host on Rockchip holds the data line
* for half the low time.
*/
- if (scl_rate <= 100000) {
+ if (t->bus_freq_hz <= 100000) {
/* Standard-mode */
spec_min_low_ns = 4700;
spec_setup_start = 4700;
@@ -506,7 +499,7 @@ static int rk3x_i2c_calc_divs(unsigned long clk_rate, unsigned long scl_rate,
spec_max_data_hold_ns = 900;
data_hold_buffer_ns = 50;
}
- min_high_ns = scl_rise_ns + spec_min_high_ns;
+ min_high_ns = t->scl_rise_ns + spec_min_high_ns;
/*
* Timings for repeated start:
@@ -517,18 +510,18 @@ static int rk3x_i2c_calc_divs(unsigned long clk_rate, unsigned long scl_rate,
* we meet tSU;STA and tHD;STA times.
*/
min_high_ns = max(min_high_ns,
- DIV_ROUND_UP((scl_rise_ns + spec_setup_start) * 1000, 875));
+ DIV_ROUND_UP((t->scl_rise_ns + spec_setup_start) * 1000, 875));
min_high_ns = max(min_high_ns,
- DIV_ROUND_UP((scl_rise_ns + spec_setup_start +
- sda_fall_ns + spec_min_high_ns), 2));
+ DIV_ROUND_UP((t->scl_rise_ns + spec_setup_start +
+ t->sda_fall_ns + spec_min_high_ns), 2));
- min_low_ns = scl_fall_ns + spec_min_low_ns;
+ min_low_ns = t->scl_fall_ns + spec_min_low_ns;
max_low_ns = spec_max_data_hold_ns * 2 - data_hold_buffer_ns;
min_total_ns = min_low_ns + min_high_ns;
/* Adjust to avoid overflow */
clk_rate_khz = DIV_ROUND_UP(clk_rate, 1000);
- scl_rate_khz = scl_rate / 1000;
+ scl_rate_khz = t->bus_freq_hz / 1000;
/*
* We need the total div to be >= this number
@@ -616,14 +609,13 @@ static int rk3x_i2c_calc_divs(unsigned long clk_rate, unsigned long scl_rate,
static void rk3x_i2c_adapt_div(struct rk3x_i2c *i2c, unsigned long clk_rate)
{
+ struct i2c_timings *t = &i2c->t;
unsigned long div_low, div_high;
u64 t_low_ns, t_high_ns;
int ret;
- ret = rk3x_i2c_calc_divs(clk_rate, i2c->scl_frequency, i2c->scl_rise_ns,
- i2c->scl_fall_ns, i2c->sda_fall_ns,
- &div_low, &div_high);
- WARN_ONCE(ret != 0, "Could not reach SCL freq %u", i2c->scl_frequency);
+ ret = rk3x_i2c_calc_divs(clk_rate, t, &div_low, &div_high);
+ WARN_ONCE(ret != 0, "Could not reach SCL freq %u", t->bus_freq_hz);
clk_enable(i2c->clk);
i2c_writel(i2c, (div_high << 16) | (div_low & 0xffff), REG_CLKDIV);
@@ -634,7 +626,7 @@ static void rk3x_i2c_adapt_div(struct rk3x_i2c *i2c, unsigned long clk_rate)
dev_dbg(i2c->dev,
"CLK %lukhz, Req %uns, Act low %lluns high %lluns\n",
clk_rate / 1000,
- 1000000000 / i2c->scl_frequency,
+ 1000000000 / t->bus_freq_hz,
t_low_ns, t_high_ns);
}
@@ -664,9 +656,7 @@ static int rk3x_i2c_clk_notifier_cb(struct notifier_block *nb, unsigned long
switch (event) {
case PRE_RATE_CHANGE:
- if (rk3x_i2c_calc_divs(ndata->new_rate, i2c->scl_frequency,
- i2c->scl_rise_ns, i2c->scl_fall_ns,
- i2c->sda_fall_ns,
+ if (rk3x_i2c_calc_divs(ndata->new_rate, &i2c->t,
&div_low, &div_high) != 0)
return NOTIFY_STOP;
@@ -880,37 +870,8 @@ static int rk3x_i2c_probe(struct platform_device *pdev)
match = of_match_node(rk3x_i2c_match, np);
i2c->soc_data = (struct rk3x_i2c_soc_data *)match->data;
- if (of_property_read_u32(pdev->dev.of_node, "clock-frequency",
- &i2c->scl_frequency)) {
- dev_info(&pdev->dev, "using default SCL frequency: %d\n",
- DEFAULT_SCL_RATE);
- i2c->scl_frequency = DEFAULT_SCL_RATE;
- }
-
- if (i2c->scl_frequency == 0 || i2c->scl_frequency > 400 * 1000) {
- dev_warn(&pdev->dev, "invalid SCL frequency specified.\n");
- dev_warn(&pdev->dev, "using default SCL frequency: %d\n",
- DEFAULT_SCL_RATE);
- i2c->scl_frequency = DEFAULT_SCL_RATE;
- }
-
- /*
- * Read rise and fall time from device tree. If not available use
- * the default maximum timing from the specification.
- */
- if (of_property_read_u32(pdev->dev.of_node, "i2c-scl-rising-time-ns",
- &i2c->scl_rise_ns)) {
- if (i2c->scl_frequency <= 100000)
- i2c->scl_rise_ns = 1000;
- else
- i2c->scl_rise_ns = 300;
- }
- if (of_property_read_u32(pdev->dev.of_node, "i2c-scl-falling-time-ns",
- &i2c->scl_fall_ns))
- i2c->scl_fall_ns = 300;
- if (of_property_read_u32(pdev->dev.of_node, "i2c-sda-falling-time-ns",
- &i2c->sda_fall_ns))
- i2c->sda_fall_ns = i2c->scl_fall_ns;
+ /* use common interface to get I2C timing properties */
+ i2c_parse_fw_timings(&pdev->dev, &i2c->t, true);
strlcpy(i2c->adap.name, "rk3x-i2c", sizeof(i2c->adap.name));
i2c->adap.owner = THIS_MODULE;
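/*
 * Editor's sketch (not part of the patch): the open-coded parsing removed
 * above boils down to "read each timing property, fall back to the spec
 * default".  The property names come from the deleted lines; the common
 * i2c_parse_fw_timings() helper is assumed to behave along these lines, but
 * this is only an illustration of the removed logic, not its implementation,
 * and the 100 kHz fallback is an assumption.
 */
#include <stdbool.h>

struct sketch_timings {
	unsigned int bus_freq_hz;
	unsigned int scl_rise_ns;
	unsigned int scl_fall_ns;
	unsigned int sda_fall_ns;
};

/* stand-in for of_property_read_u32(); always reports "missing" here */
static bool read_prop(const char *name, unsigned int *val)
{
	(void)name;
	(void)val;
	return false;
}

void fill_timings(struct sketch_timings *t)
{
	if (!read_prop("clock-frequency", &t->bus_freq_hz))
		t->bus_freq_hz = 100000;	/* assumed standard-mode default */
	if (!read_prop("i2c-scl-rising-time-ns", &t->scl_rise_ns))
		t->scl_rise_ns = t->bus_freq_hz <= 100000 ? 1000 : 300;
	if (!read_prop("i2c-scl-falling-time-ns", &t->scl_fall_ns))
		t->scl_fall_ns = 300;
	if (!read_prop("i2c-sda-falling-time-ns", &t->sda_fall_ns))
		t->sda_fall_ns = t->scl_fall_ns;
}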
diff --git a/drivers/i2c/busses/i2c-s3c2410.c b/drivers/i2c/busses/i2c-s3c2410.c
index 362a6de54833..38dc1cacfd8b 100644
--- a/drivers/i2c/busses/i2c-s3c2410.c
+++ b/drivers/i2c/busses/i2c-s3c2410.c
@@ -163,15 +163,14 @@ static const struct of_device_id s3c24xx_i2c_match[] = {
MODULE_DEVICE_TABLE(of, s3c24xx_i2c_match);
#endif
-/* s3c24xx_get_device_quirks
- *
+/*
* Get controller type either from device tree or platform device variant.
-*/
-
+ */
static inline kernel_ulong_t s3c24xx_get_device_quirks(struct platform_device *pdev)
{
if (pdev->dev.of_node) {
const struct of_device_id *match;
+
match = of_match_node(s3c24xx_i2c_match, pdev->dev.of_node);
return (kernel_ulong_t)match->data;
}
@@ -179,12 +178,10 @@ static inline kernel_ulong_t s3c24xx_get_device_quirks(struct platform_device *p
return platform_get_device_id(pdev)->driver_data;
}
-/* s3c24xx_i2c_master_complete
- *
- * complete the message and wake up the caller, using the given return code,
+/*
+ * Complete the message and wake up the caller, using the given return code,
* or zero to mean ok.
-*/
-
+ */
static inline void s3c24xx_i2c_master_complete(struct s3c24xx_i2c *i2c, int ret)
{
dev_dbg(i2c->dev, "master_complete %d\n", ret);
@@ -217,7 +214,6 @@ static inline void s3c24xx_i2c_enable_ack(struct s3c24xx_i2c *i2c)
}
/* irq enable/disable functions */
-
static inline void s3c24xx_i2c_disable_irq(struct s3c24xx_i2c *i2c)
{
unsigned long tmp;
@@ -251,11 +247,9 @@ static bool is_ack(struct s3c24xx_i2c *i2c)
return false;
}
-/* s3c24xx_i2c_message_start
- *
+/*
* put the start of a message onto the bus
-*/
-
+ */
static void s3c24xx_i2c_message_start(struct s3c24xx_i2c *i2c,
struct i2c_msg *msg)
{
@@ -284,9 +278,10 @@ static void s3c24xx_i2c_message_start(struct s3c24xx_i2c *i2c,
dev_dbg(i2c->dev, "START: %08lx to IICSTAT, %02x to DS\n", stat, addr);
writeb(addr, i2c->regs + S3C2410_IICDS);
- /* delay here to ensure the data byte has gotten onto the bus
- * before the transaction is started */
-
+ /*
+ * delay here to ensure the data byte has gotten onto the bus
+ * before the transaction is started
+ */
ndelay(i2c->tx_setup);
dev_dbg(i2c->dev, "iiccon, %08lx\n", iiccon);
@@ -361,50 +356,46 @@ static inline void s3c24xx_i2c_stop(struct s3c24xx_i2c *i2c, int ret)
s3c24xx_i2c_disable_irq(i2c);
}
-/* helper functions to determine the current state in the set of
- * messages we are sending */
+/*
+ * helper functions to determine the current state in the set of
+ * messages we are sending
+ */
-/* is_lastmsg()
- *
+/*
* returns TRUE if the current message is the last in the set
-*/
-
+ */
static inline int is_lastmsg(struct s3c24xx_i2c *i2c)
{
return i2c->msg_idx >= (i2c->msg_num - 1);
}
-/* is_msglast
- *
+/*
* returns TRUE if this is the last byte in the current message
-*/
-
+ */
static inline int is_msglast(struct s3c24xx_i2c *i2c)
{
- /* msg->len is always 1 for the first byte of smbus block read.
+ /*
+ * msg->len is always 1 for the first byte of smbus block read.
* Actual length will be read from slave. More bytes will be
- * read according to the length then. */
+ * read according to the length then.
+ */
if (i2c->msg->flags & I2C_M_RECV_LEN && i2c->msg->len == 1)
return 0;
return i2c->msg_ptr == i2c->msg->len-1;
}
-/* is_msgend
- *
+/*
* returns TRUE if we reached the end of the current message
-*/
-
+ */
static inline int is_msgend(struct s3c24xx_i2c *i2c)
{
return i2c->msg_ptr >= i2c->msg->len;
}
-/* i2c_s3c_irq_nextbyte
- *
+/*
* process an interrupt and work out what to do
*/
-
static int i2c_s3c_irq_nextbyte(struct s3c24xx_i2c *i2c, unsigned long iicstat)
{
unsigned long tmp;
@@ -423,14 +414,13 @@ static int i2c_s3c_irq_nextbyte(struct s3c24xx_i2c *i2c, unsigned long iicstat)
goto out_ack;
case STATE_START:
- /* last thing we did was send a start condition on the
+ /*
+ * last thing we did was send a start condition on the
* bus, or started a new i2c message
*/
-
if (iicstat & S3C2410_IICSTAT_LASTBIT &&
!(i2c->msg->flags & I2C_M_IGNORE_NAK)) {
/* ack was not received... */
-
dev_dbg(i2c->dev, "ack was not received\n");
s3c24xx_i2c_stop(i2c, -ENXIO);
goto out_ack;
@@ -441,9 +431,10 @@ static int i2c_s3c_irq_nextbyte(struct s3c24xx_i2c *i2c, unsigned long iicstat)
else
i2c->state = STATE_WRITE;
- /* terminate the transfer if there is nothing to do
- * as this is used by the i2c probe to find devices. */
-
+ /*
+ * Terminate the transfer if there is nothing to do
+ * as this is used by the i2c probe to find devices.
+ */
if (is_lastmsg(i2c) && i2c->msg->len == 0) {
s3c24xx_i2c_stop(i2c, 0);
goto out_ack;
@@ -452,14 +443,16 @@ static int i2c_s3c_irq_nextbyte(struct s3c24xx_i2c *i2c, unsigned long iicstat)
if (i2c->state == STATE_READ)
goto prepare_read;
- /* fall through to the write state, as we will need to
- * send a byte as well */
+ /*
+ * fall through to the write state, as we will need to
+ * send a byte as well
+ */
case STATE_WRITE:
- /* we are writing data to the device... check for the
+ /*
+ * we are writing data to the device... check for the
* end of the message, and if so, work out what to do
*/
-
if (!(i2c->msg->flags & I2C_M_IGNORE_NAK)) {
if (iicstat & S3C2410_IICSTAT_LASTBIT) {
dev_dbg(i2c->dev, "WRITE: No Ack\n");
@@ -475,12 +468,13 @@ static int i2c_s3c_irq_nextbyte(struct s3c24xx_i2c *i2c, unsigned long iicstat)
byte = i2c->msg->buf[i2c->msg_ptr++];
writeb(byte, i2c->regs + S3C2410_IICDS);
- /* delay after writing the byte to allow the
+ /*
+ * delay after writing the byte to allow the
* data setup time on the bus, as writing the
* data to the register causes the first bit
* to appear on SDA, and SCL will change as
- * soon as the interrupt is acknowledged */
-
+ * soon as the interrupt is acknowledged
+ */
ndelay(i2c->tx_setup);
} else if (!is_lastmsg(i2c)) {
@@ -496,10 +490,11 @@ static int i2c_s3c_irq_nextbyte(struct s3c24xx_i2c *i2c, unsigned long iicstat)
if (i2c->msg->flags & I2C_M_NOSTART) {
if (i2c->msg->flags & I2C_M_RD) {
- /* cannot do this, the controller
+ /*
+ * cannot do this, the controller
* forces us to send a new START
- * when we change direction */
-
+ * when we change direction
+ */
s3c24xx_i2c_stop(i2c, -EINVAL);
}
@@ -512,17 +507,16 @@ static int i2c_s3c_irq_nextbyte(struct s3c24xx_i2c *i2c, unsigned long iicstat)
} else {
/* send stop */
-
s3c24xx_i2c_stop(i2c, 0);
}
break;
case STATE_READ:
- /* we have a byte of data in the data register, do
+ /*
+ * we have a byte of data in the data register, do
* something with it, and then work out whether we are
* going to do any more read/write
*/
-
byte = readb(i2c->regs + S3C2410_IICDS);
i2c->msg->buf[i2c->msg_ptr++] = byte;
@@ -537,9 +531,10 @@ static int i2c_s3c_irq_nextbyte(struct s3c24xx_i2c *i2c, unsigned long iicstat)
s3c24xx_i2c_disable_ack(i2c);
} else if (is_msgend(i2c)) {
- /* ok, we've read the entire buffer, see if there
- * is anything else we need to do */
-
+ /*
+ * ok, we've read the entire buffer, see if there
+ * is anything else we need to do
+ */
if (is_lastmsg(i2c)) {
/* last message, send stop and complete */
dev_dbg(i2c->dev, "READ: Send Stop\n");
@@ -568,11 +563,9 @@ static int i2c_s3c_irq_nextbyte(struct s3c24xx_i2c *i2c, unsigned long iicstat)
return ret;
}
-/* s3c24xx_i2c_irq
- *
+/*
* top level IRQ servicing routine
-*/
-
+ */
static irqreturn_t s3c24xx_i2c_irq(int irqno, void *dev_id)
{
struct s3c24xx_i2c *i2c = dev_id;
@@ -595,9 +588,10 @@ static irqreturn_t s3c24xx_i2c_irq(int irqno, void *dev_id)
goto out;
}
- /* pretty much this leaves us with the fact that we've
- * transmitted or received whatever byte we last sent */
-
+ /*
+ * pretty much this leaves us with the fact that we've
+ * transmitted or received whatever byte we last sent
+ */
i2c_s3c_irq_nextbyte(i2c, status);
out:
@@ -630,11 +624,9 @@ static inline void s3c24xx_i2c_disable_bus(struct s3c24xx_i2c *i2c)
}
-/* s3c24xx_i2c_set_master
- *
+/*
* get the i2c bus for a master transaction
-*/
-
+ */
static int s3c24xx_i2c_set_master(struct s3c24xx_i2c *i2c)
{
unsigned long iicstat;
@@ -652,11 +644,9 @@ static int s3c24xx_i2c_set_master(struct s3c24xx_i2c *i2c)
return -ETIMEDOUT;
}
-/* s3c24xx_i2c_wait_idle
- *
+/*
* wait for the i2c bus to become idle.
-*/
-
+ */
static void s3c24xx_i2c_wait_idle(struct s3c24xx_i2c *i2c)
{
unsigned long iicstat;
@@ -706,11 +696,9 @@ static void s3c24xx_i2c_wait_idle(struct s3c24xx_i2c *i2c)
dev_warn(i2c->dev, "timeout waiting for bus idle\n");
}
-/* s3c24xx_i2c_doxfer
- *
+/*
* this starts an i2c transfer
-*/
-
+ */
static int s3c24xx_i2c_doxfer(struct s3c24xx_i2c *i2c,
struct i2c_msg *msgs, int num)
{
@@ -749,9 +737,10 @@ static int s3c24xx_i2c_doxfer(struct s3c24xx_i2c *i2c,
ret = i2c->msg_idx;
- /* having these next two as dev_err() makes life very
- * noisy when doing an i2cdetect */
-
+ /*
+ * Having these next two as dev_err() makes life very
+ * noisy when doing an i2cdetect
+ */
if (timeout == 0)
dev_dbg(i2c->dev, "timeout\n");
else if (ret != num)
@@ -771,12 +760,10 @@ static int s3c24xx_i2c_doxfer(struct s3c24xx_i2c *i2c,
return ret;
}
-/* s3c24xx_i2c_xfer
- *
+/*
* first port of call from the i2c bus code when a message needs
* transferring across the i2c bus.
-*/
-
+ */
static int s3c24xx_i2c_xfer(struct i2c_adapter *adap,
struct i2c_msg *msgs, int num)
{
@@ -814,17 +801,14 @@ static u32 s3c24xx_i2c_func(struct i2c_adapter *adap)
}
/* i2c bus registration info */
-
static const struct i2c_algorithm s3c24xx_i2c_algorithm = {
.master_xfer = s3c24xx_i2c_xfer,
.functionality = s3c24xx_i2c_func,
};
-/* s3c24xx_i2c_calcdivisor
- *
+/*
* return the divisor settings for a given frequency
-*/
-
+ */
static int s3c24xx_i2c_calcdivisor(unsigned long clkin, unsigned int wanted,
unsigned int *div1, unsigned int *divs)
{
@@ -850,13 +834,11 @@ static int s3c24xx_i2c_calcdivisor(unsigned long clkin, unsigned int wanted,
return clkin / (calc_divs * calc_div1);
}
-/* s3c24xx_i2c_clockrate
- *
+/*
* work out a divisor for the user requested frequency setting,
* either by the requested frequency, or scanning the acceptable
* range of frequencies until something is found
-*/
-
+ */
static int s3c24xx_i2c_clockrate(struct s3c24xx_i2c *i2c, unsigned int *got)
{
struct s3c2410_platform_i2c *pdata = i2c->pdata;
@@ -944,7 +926,7 @@ static int s3c24xx_i2c_cpufreq_transition(struct notifier_block *nb,
i2c_unlock_adapter(&i2c->adap);
if (ret < 0)
- dev_err(i2c->dev, "cannot find frequency\n");
+ dev_err(i2c->dev, "cannot find frequency (%d)\n", ret);
else
dev_info(i2c->dev, "setting freq %d\n", got);
}
@@ -995,7 +977,8 @@ static int s3c24xx_i2c_parse_dt_gpio(struct s3c24xx_i2c *i2c)
ret = gpio_request(gpio, "i2c-bus");
if (ret) {
- dev_err(i2c->dev, "gpio [%d] request failed\n", gpio);
+ dev_err(i2c->dev, "gpio [%d] request failed (%d)\n",
+ gpio, ret);
goto free_gpio;
}
}
@@ -1028,11 +1011,9 @@ static void s3c24xx_i2c_dt_gpio_free(struct s3c24xx_i2c *i2c)
}
#endif
-/* s3c24xx_i2c_init
- *
+/*
* initialise the controller, set the IO lines and frequency
-*/
-
+ */
static int s3c24xx_i2c_init(struct s3c24xx_i2c *i2c)
{
struct s3c2410_platform_i2c *pdata;
@@ -1068,11 +1049,9 @@ static int s3c24xx_i2c_init(struct s3c24xx_i2c *i2c)
}
#ifdef CONFIG_OF
-/* s3c24xx_i2c_parse_dt
- *
+/*
+ * Parse the device tree node and retrieve the platform data.
-*/
-
+ */
static void
s3c24xx_i2c_parse_dt(struct device_node *np, struct s3c24xx_i2c *i2c)
{
@@ -1105,17 +1084,9 @@ s3c24xx_i2c_parse_dt(struct device_node *np, struct s3c24xx_i2c *i2c)
}
#else
static void
-s3c24xx_i2c_parse_dt(struct device_node *np, struct s3c24xx_i2c *i2c)
-{
- return;
-}
+s3c24xx_i2c_parse_dt(struct device_node *np, struct s3c24xx_i2c *i2c) { }
#endif
-/* s3c24xx_i2c_probe
- *
- * called by the bus driver when a suitable device is found
-*/
-
static int s3c24xx_i2c_probe(struct platform_device *pdev)
{
struct s3c24xx_i2c *i2c;
@@ -1156,7 +1127,6 @@ static int s3c24xx_i2c_probe(struct platform_device *pdev)
init_waitqueue_head(&i2c->wait);
/* find the clock and enable it */
-
i2c->dev = &pdev->dev;
i2c->clk = devm_clk_get(&pdev->dev, "i2c");
if (IS_ERR(i2c->clk)) {
@@ -1166,9 +1136,7 @@ static int s3c24xx_i2c_probe(struct platform_device *pdev)
dev_dbg(&pdev->dev, "clock source %p\n", i2c->clk);
-
/* map the registers */
-
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
i2c->regs = devm_ioremap_resource(&pdev->dev, res);
@@ -1179,33 +1147,35 @@ static int s3c24xx_i2c_probe(struct platform_device *pdev)
i2c->regs, res);
/* setup info block for the i2c core */
-
i2c->adap.algo_data = i2c;
i2c->adap.dev.parent = &pdev->dev;
-
i2c->pctrl = devm_pinctrl_get_select_default(i2c->dev);
/* inititalise the i2c gpio lines */
-
- if (i2c->pdata->cfg_gpio) {
+ if (i2c->pdata->cfg_gpio)
i2c->pdata->cfg_gpio(to_platform_device(i2c->dev));
- } else if (IS_ERR(i2c->pctrl) && s3c24xx_i2c_parse_dt_gpio(i2c)) {
+ else if (IS_ERR(i2c->pctrl) && s3c24xx_i2c_parse_dt_gpio(i2c))
return -EINVAL;
- }
/* initialise the i2c controller */
+ ret = clk_prepare_enable(i2c->clk);
+ if (ret) {
+ dev_err(&pdev->dev, "I2C clock enable failed\n");
+ return ret;
+ }
- clk_prepare_enable(i2c->clk);
ret = s3c24xx_i2c_init(i2c);
clk_disable(i2c->clk);
if (ret != 0) {
dev_err(&pdev->dev, "I2C controller init failed\n");
+ clk_unprepare(i2c->clk);
return ret;
}
- /* find the IRQ for this unit (note, this relies on the init call to
+
+ /*
+ * find the IRQ for this unit (note, this relies on the init call to
* ensure no current IRQs pending)
*/
-
if (!(i2c->quirks & QUIRK_POLL)) {
i2c->irq = ret = platform_get_irq(pdev, 0);
if (ret <= 0) {
@@ -1214,9 +1184,8 @@ static int s3c24xx_i2c_probe(struct platform_device *pdev)
return ret;
}
- ret = devm_request_irq(&pdev->dev, i2c->irq, s3c24xx_i2c_irq, 0,
- dev_name(&pdev->dev), i2c);
-
+ ret = devm_request_irq(&pdev->dev, i2c->irq, s3c24xx_i2c_irq,
+ 0, dev_name(&pdev->dev), i2c);
if (ret != 0) {
dev_err(&pdev->dev, "cannot claim IRQ %d\n", i2c->irq);
clk_unprepare(i2c->clk);
@@ -1231,12 +1200,12 @@ static int s3c24xx_i2c_probe(struct platform_device *pdev)
return ret;
}
- /* Note, previous versions of the driver used i2c_add_adapter()
+ /*
+ * Note, previous versions of the driver used i2c_add_adapter()
* to add the bus at any number. We now pass the bus number via
* the platform data, so if unset it will now default to always
* being bus 0.
*/
-
i2c->adap.nr = i2c->pdata->bus_num;
i2c->adap.dev.of_node = pdev->dev.of_node;
@@ -1257,11 +1226,6 @@ static int s3c24xx_i2c_probe(struct platform_device *pdev)
return 0;
}
-/* s3c24xx_i2c_remove
- *
- * called when device is removed from the bus
-*/
-
static int s3c24xx_i2c_remove(struct platform_device *pdev)
{
struct s3c24xx_i2c *i2c = platform_get_drvdata(pdev);
@@ -1316,14 +1280,8 @@ static int s3c24xx_i2c_resume_noirq(struct device *dev)
#ifdef CONFIG_PM
static const struct dev_pm_ops s3c24xx_i2c_dev_pm_ops = {
-#ifdef CONFIG_PM_SLEEP
- .suspend_noirq = s3c24xx_i2c_suspend_noirq,
- .resume_noirq = s3c24xx_i2c_resume_noirq,
- .freeze_noirq = s3c24xx_i2c_suspend_noirq,
- .thaw_noirq = s3c24xx_i2c_resume_noirq,
- .poweroff_noirq = s3c24xx_i2c_suspend_noirq,
- .restore_noirq = s3c24xx_i2c_resume_noirq,
-#endif
+ SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(s3c24xx_i2c_suspend_noirq,
+ s3c24xx_i2c_resume_noirq)
};
#define S3C24XX_DEV_PM_OPS (&s3c24xx_i2c_dev_pm_ops)
@@ -1331,8 +1289,6 @@ static const struct dev_pm_ops s3c24xx_i2c_dev_pm_ops = {
#define S3C24XX_DEV_PM_OPS NULL
#endif
-/* device driver for platform bus bits */
-
static struct platform_driver s3c24xx_i2c_driver = {
.probe = s3c24xx_i2c_probe,
.remove = s3c24xx_i2c_remove,
diff --git a/drivers/i2c/busses/i2c-sh_mobile.c b/drivers/i2c/busses/i2c-sh_mobile.c
index 7d2bd3ec2d2d..6fb3e2645992 100644
--- a/drivers/i2c/busses/i2c-sh_mobile.c
+++ b/drivers/i2c/busses/i2c-sh_mobile.c
@@ -398,8 +398,7 @@ static void sh_mobile_i2c_get_data(struct sh_mobile_i2c_data *pd,
{
switch (pd->pos) {
case -1:
- *buf = (pd->msg->addr & 0x7f) << 1;
- *buf |= (pd->msg->flags & I2C_M_RD) ? 1 : 0;
+ *buf = i2c_8bit_addr_from_msg(pd->msg);
break;
default:
*buf = pd->msg->buf[pd->pos];
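/*
 * Editor's sketch (not part of the patch): every hunk in this series that
 * switches to i2c_8bit_addr_from_msg() replaces the same two lines of
 * open-coded address math shown in the removed lines.  That math, restated
 * stand-alone with hypothetical names:
 */
#include <stdbool.h>
#include <stdint.h>

static inline uint8_t addr_byte(uint16_t seven_bit_addr, bool is_read)
{
	/* 7-bit address in bits 7..1, R/W flag in bit 0 (1 = read) */
	return (uint8_t)((seven_bit_addr << 1) | (is_read ? 1 : 0));
}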
diff --git a/drivers/i2c/busses/i2c-sirf.c b/drivers/i2c/busses/i2c-sirf.c
index 13e51ef6af73..792a42bdd335 100644
--- a/drivers/i2c/busses/i2c-sirf.c
+++ b/drivers/i2c/busses/i2c-sirf.c
@@ -190,9 +190,7 @@ static void i2c_sirfsoc_set_address(struct sirfsoc_i2c *siic,
writel(regval, siic->base + SIRFSOC_I2C_CMD(siic->cmd_ptr++));
- addr = msg->addr << 1; /* Generate address */
- if (msg->flags & I2C_M_RD)
- addr |= 1;
+ addr = i2c_8bit_addr_from_msg(msg);
/* Reverse direction bit */
if (msg->flags & I2C_M_REV_DIR_ADDR)
diff --git a/drivers/i2c/busses/i2c-st.c b/drivers/i2c/busses/i2c-st.c
index 6ee77159ac57..944ec4205084 100644
--- a/drivers/i2c/busses/i2c-st.c
+++ b/drivers/i2c/busses/i2c-st.c
@@ -337,10 +337,42 @@ static void st_i2c_hw_config(struct st_i2c_dev *i2c_dev)
writel_relaxed(val, i2c_dev->base + SSC_NOISE_SUPP_WIDTH_DATAOUT);
}
+static int st_i2c_recover_bus(struct i2c_adapter *i2c_adap)
+{
+ struct st_i2c_dev *i2c_dev = i2c_get_adapdata(i2c_adap);
+ u32 ctl;
+
+ dev_dbg(i2c_dev->dev, "Trying to recover bus\n");
+
+ /*
+ * The SSP IP is dual-role SPI/I2C; to generate 9 clock pulses
+ * we switch to SPI mode, 9-bit words, and write a 0.  This has
+ * been validated with an oscilloscope and is easier than
+ * switching to GPIO mode.
+ */
+
+ /* Disable interrupts */
+ writel_relaxed(0, i2c_dev->base + SSC_IEN);
+
+ st_i2c_hw_config(i2c_dev);
+
+ ctl = SSC_CTL_EN | SSC_CTL_MS | SSC_CTL_EN_RX_FIFO | SSC_CTL_EN_TX_FIFO;
+ st_i2c_set_bits(i2c_dev->base + SSC_CTL, ctl);
+
+ st_i2c_clr_bits(i2c_dev->base + SSC_I2C, SSC_I2C_I2CM);
+ usleep_range(8000, 10000);
+
+ writel_relaxed(0, i2c_dev->base + SSC_TBUF);
+ usleep_range(2000, 4000);
+ st_i2c_set_bits(i2c_dev->base + SSC_I2C, SSC_I2C_I2CM);
+
+ return 0;
+}
+
static int st_i2c_wait_free_bus(struct st_i2c_dev *i2c_dev)
{
u32 sta;
- int i;
+ int i, ret;
for (i = 0; i < 10; i++) {
sta = readl_relaxed(i2c_dev->base + SSC_STA);
@@ -352,6 +384,12 @@ static int st_i2c_wait_free_bus(struct st_i2c_dev *i2c_dev)
dev_err(i2c_dev->dev, "bus not free (status = 0x%08x)\n", sta);
+ ret = i2c_recover_bus(&i2c_dev->adap);
+ if (ret) {
+ dev_err(i2c_dev->dev, "Failed to recover the bus (%d)\n", ret);
+ return ret;
+ }
+
return -EBUSY;
}
@@ -614,8 +652,7 @@ static int st_i2c_xfer_msg(struct st_i2c_dev *i2c_dev, struct i2c_msg *msg,
unsigned long timeout;
int ret;
- c->addr = (u8)(msg->addr << 1);
- c->addr |= (msg->flags & I2C_M_RD);
+ c->addr = i2c_8bit_addr_from_msg(msg);
c->buf = msg->buf;
c->count = msg->len;
c->xfered = 0;
@@ -744,6 +781,10 @@ static struct i2c_algorithm st_i2c_algo = {
.functionality = st_i2c_func,
};
+static struct i2c_bus_recovery_info st_i2c_recovery_info = {
+ .recover_bus = st_i2c_recover_bus,
+};
+
static int st_i2c_of_get_deglitch(struct device_node *np,
struct st_i2c_dev *i2c_dev)
{
@@ -826,6 +867,7 @@ static int st_i2c_probe(struct platform_device *pdev)
adap->timeout = 2 * HZ;
adap->retries = 0;
adap->algo = &st_i2c_algo;
+ adap->bus_recovery_info = &st_i2c_recovery_info;
adap->dev.parent = &pdev->dev;
adap->dev.of_node = pdev->dev.of_node;
diff --git a/drivers/i2c/busses/i2c-tegra.c b/drivers/i2c/busses/i2c-tegra.c
index 929185a7296c..445398c314a3 100644
--- a/drivers/i2c/busses/i2c-tegra.c
+++ b/drivers/i2c/busses/i2c-tegra.c
@@ -38,6 +38,7 @@
#define I2C_CNFG_DEBOUNCE_CNT_SHIFT 12
#define I2C_CNFG_PACKET_MODE_EN (1<<10)
#define I2C_CNFG_NEW_MASTER_FSM (1<<11)
+#define I2C_CNFG_MULTI_MASTER_MODE (1<<17)
#define I2C_STATUS 0x01C
#define I2C_SL_CNFG 0x020
#define I2C_SL_CNFG_NACK (1<<1)
@@ -106,6 +107,9 @@
#define I2C_SLV_CONFIG_LOAD (1 << 1)
#define I2C_TIMEOUT_CONFIG_LOAD (1 << 2)
+#define I2C_CLKEN_OVERRIDE 0x090
+#define I2C_MST_CORE_CLKEN_OVR (1 << 0)
+
/*
* msg_end_type: The bus control which need to be send at end of transfer.
* @MSG_END_STOP: Send stop pulse at end of transfer.
@@ -143,6 +147,8 @@ struct tegra_i2c_hw_feature {
int clk_divisor_hs_mode;
int clk_divisor_std_fast_mode;
u16 clk_divisor_fast_plus_mode;
+ bool has_multi_master_mode;
+ bool has_slcg_override_reg;
};
/**
@@ -184,6 +190,7 @@ struct tegra_i2c_dev {
u32 bus_clk_rate;
u16 clk_divisor_non_hs_mode;
bool is_suspended;
+ bool is_multimaster_mode;
};
static void dvc_writel(struct tegra_i2c_dev *i2c_dev, u32 val, unsigned long reg)
@@ -438,6 +445,10 @@ static int tegra_i2c_init(struct tegra_i2c_dev *i2c_dev)
val = I2C_CNFG_NEW_MASTER_FSM | I2C_CNFG_PACKET_MODE_EN |
(0x2 << I2C_CNFG_DEBOUNCE_CNT_SHIFT);
+
+ if (i2c_dev->hw->has_multi_master_mode)
+ val |= I2C_CNFG_MULTI_MASTER_MODE;
+
i2c_writel(i2c_dev, val, I2C_CNFG);
i2c_writel(i2c_dev, 0, I2C_INT_MASK);
@@ -463,25 +474,29 @@ static int tegra_i2c_init(struct tegra_i2c_dev *i2c_dev)
if (tegra_i2c_flush_fifos(i2c_dev))
err = -ETIMEDOUT;
+ if (i2c_dev->is_multimaster_mode && i2c_dev->hw->has_slcg_override_reg)
+ i2c_writel(i2c_dev, I2C_MST_CORE_CLKEN_OVR, I2C_CLKEN_OVERRIDE);
+
if (i2c_dev->hw->has_config_load_reg) {
i2c_writel(i2c_dev, I2C_MSTR_CONFIG_LOAD, I2C_CONFIG_LOAD);
while (i2c_readl(i2c_dev, I2C_CONFIG_LOAD) != 0) {
if (time_after(jiffies, timeout)) {
dev_warn(i2c_dev->dev,
"timeout waiting for config load\n");
- return -ETIMEDOUT;
+ err = -ETIMEDOUT;
+ goto err;
}
msleep(1);
}
}
- tegra_i2c_clock_disable(i2c_dev);
-
if (i2c_dev->irq_disabled) {
i2c_dev->irq_disabled = 0;
enable_irq(i2c_dev->irq);
}
+err:
+ tegra_i2c_clock_disable(i2c_dev);
return err;
}
@@ -688,6 +703,20 @@ static u32 tegra_i2c_func(struct i2c_adapter *adap)
return ret;
}
+static void tegra_i2c_parse_dt(struct tegra_i2c_dev *i2c_dev)
+{
+ struct device_node *np = i2c_dev->dev->of_node;
+ int ret;
+
+ ret = of_property_read_u32(np, "clock-frequency",
+ &i2c_dev->bus_clk_rate);
+ if (ret)
+ i2c_dev->bus_clk_rate = 100000; /* default clock rate */
+
+ i2c_dev->is_multimaster_mode = of_property_read_bool(np,
+ "multi-master");
+}
+
static const struct i2c_algorithm tegra_i2c_algo = {
.master_xfer = tegra_i2c_xfer,
.functionality = tegra_i2c_func,
@@ -707,6 +736,8 @@ static const struct tegra_i2c_hw_feature tegra20_i2c_hw = {
.clk_divisor_std_fast_mode = 0,
.clk_divisor_fast_plus_mode = 0,
.has_config_load_reg = false,
+ .has_multi_master_mode = false,
+ .has_slcg_override_reg = false,
};
static const struct tegra_i2c_hw_feature tegra30_i2c_hw = {
@@ -717,6 +748,8 @@ static const struct tegra_i2c_hw_feature tegra30_i2c_hw = {
.clk_divisor_std_fast_mode = 0,
.clk_divisor_fast_plus_mode = 0,
.has_config_load_reg = false,
+ .has_multi_master_mode = false,
+ .has_slcg_override_reg = false,
};
static const struct tegra_i2c_hw_feature tegra114_i2c_hw = {
@@ -727,6 +760,8 @@ static const struct tegra_i2c_hw_feature tegra114_i2c_hw = {
.clk_divisor_std_fast_mode = 0x19,
.clk_divisor_fast_plus_mode = 0x10,
.has_config_load_reg = false,
+ .has_multi_master_mode = false,
+ .has_slcg_override_reg = false,
};
static const struct tegra_i2c_hw_feature tegra124_i2c_hw = {
@@ -737,10 +772,25 @@ static const struct tegra_i2c_hw_feature tegra124_i2c_hw = {
.clk_divisor_std_fast_mode = 0x19,
.clk_divisor_fast_plus_mode = 0x10,
.has_config_load_reg = true,
+ .has_multi_master_mode = false,
+ .has_slcg_override_reg = true,
+};
+
+static const struct tegra_i2c_hw_feature tegra210_i2c_hw = {
+ .has_continue_xfer_support = true,
+ .has_per_pkt_xfer_complete_irq = true,
+ .has_single_clk_source = true,
+ .clk_divisor_hs_mode = 1,
+ .clk_divisor_std_fast_mode = 0x19,
+ .clk_divisor_fast_plus_mode = 0x10,
+ .has_config_load_reg = true,
+ .has_multi_master_mode = true,
+ .has_slcg_override_reg = true,
};
/* Match table for of_platform binding */
static const struct of_device_id tegra_i2c_of_match[] = {
+ { .compatible = "nvidia,tegra210-i2c", .data = &tegra210_i2c_hw, },
{ .compatible = "nvidia,tegra124-i2c", .data = &tegra124_i2c_hw, },
{ .compatible = "nvidia,tegra114-i2c", .data = &tegra114_i2c_hw, },
{ .compatible = "nvidia,tegra30-i2c", .data = &tegra30_i2c_hw, },
@@ -797,10 +847,7 @@ static int tegra_i2c_probe(struct platform_device *pdev)
return PTR_ERR(i2c_dev->rst);
}
- ret = of_property_read_u32(i2c_dev->dev->of_node, "clock-frequency",
- &i2c_dev->bus_clk_rate);
- if (ret)
- i2c_dev->bus_clk_rate = 100000; /* default clock rate */
+ tegra_i2c_parse_dt(i2c_dev);
i2c_dev->hw = &tegra20_i2c_hw;
@@ -853,6 +900,15 @@ static int tegra_i2c_probe(struct platform_device *pdev)
goto unprepare_fast_clk;
}
+ if (i2c_dev->is_multimaster_mode) {
+ ret = clk_enable(i2c_dev->div_clk);
+ if (ret < 0) {
+ dev_err(i2c_dev->dev, "div_clk enable failed %d\n",
+ ret);
+ goto unprepare_div_clk;
+ }
+ }
+
ret = tegra_i2c_init(i2c_dev);
if (ret) {
dev_err(&pdev->dev, "Failed to initialize i2c controller");
@@ -863,7 +919,7 @@ static int tegra_i2c_probe(struct platform_device *pdev)
tegra_i2c_isr, 0, dev_name(&pdev->dev), i2c_dev);
if (ret) {
dev_err(&pdev->dev, "Failed to request irq %i\n", i2c_dev->irq);
- goto unprepare_div_clk;
+ goto disable_div_clk;
}
i2c_set_adapdata(&i2c_dev->adapter, i2c_dev);
@@ -878,11 +934,15 @@ static int tegra_i2c_probe(struct platform_device *pdev)
ret = i2c_add_numbered_adapter(&i2c_dev->adapter);
if (ret) {
dev_err(&pdev->dev, "Failed to add I2C adapter\n");
- goto unprepare_div_clk;
+ goto disable_div_clk;
}
return 0;
+disable_div_clk:
+ if (i2c_dev->is_multimaster_mode)
+ clk_disable(i2c_dev->div_clk);
+
unprepare_div_clk:
clk_unprepare(i2c_dev->div_clk);
@@ -898,6 +958,9 @@ static int tegra_i2c_remove(struct platform_device *pdev)
struct tegra_i2c_dev *i2c_dev = platform_get_drvdata(pdev);
i2c_del_adapter(&i2c_dev->adapter);
+ if (i2c_dev->is_multimaster_mode)
+ clk_disable(i2c_dev->div_clk);
+
clk_unprepare(i2c_dev->div_clk);
if (!i2c_dev->hw->has_single_clk_source)
clk_unprepare(i2c_dev->fast_clk);
diff --git a/drivers/i2c/busses/i2c-uniphier-f.c b/drivers/i2c/busses/i2c-uniphier-f.c
index 213ba55e17c3..aeead0d27d10 100644
--- a/drivers/i2c/busses/i2c-uniphier-f.c
+++ b/drivers/i2c/busses/i2c-uniphier-f.c
@@ -524,7 +524,7 @@ static int uniphier_fi2c_probe(struct platform_device *pdev)
irq = platform_get_irq(pdev, 0);
if (irq < 0) {
- dev_err(dev, "failed to get IRQ number");
+ dev_err(dev, "failed to get IRQ number\n");
return irq;
}
diff --git a/drivers/i2c/busses/i2c-uniphier.c b/drivers/i2c/busses/i2c-uniphier.c
index 89eaa8a7e1e0..475a5eb514e2 100644
--- a/drivers/i2c/busses/i2c-uniphier.c
+++ b/drivers/i2c/busses/i2c-uniphier.c
@@ -381,7 +381,7 @@ static int uniphier_i2c_probe(struct platform_device *pdev)
irq = platform_get_irq(pdev, 0);
if (irq < 0) {
- dev_err(dev, "failed to get IRQ number");
+ dev_err(dev, "failed to get IRQ number\n");
return irq;
}
diff --git a/drivers/i2c/i2c-core.c b/drivers/i2c/i2c-core.c
index e584d88ee337..af11b658984d 100644
--- a/drivers/i2c/i2c-core.c
+++ b/drivers/i2c/i2c-core.c
@@ -954,48 +954,40 @@ static int i2c_check_addr_busy(struct i2c_adapter *adapter, int addr)
}
/**
- * i2c_lock_adapter - Get exclusive access to an I2C bus segment
+ * i2c_adapter_lock_bus - Get exclusive access to an I2C bus segment
* @adapter: Target I2C bus segment
+ * @flags: I2C_LOCK_ROOT_ADAPTER locks the root i2c adapter, I2C_LOCK_SEGMENT
+ * locks only this branch in the adapter tree
*/
-void i2c_lock_adapter(struct i2c_adapter *adapter)
+static void i2c_adapter_lock_bus(struct i2c_adapter *adapter,
+ unsigned int flags)
{
- struct i2c_adapter *parent = i2c_parent_is_i2c_adapter(adapter);
-
- if (parent)
- i2c_lock_adapter(parent);
- else
- rt_mutex_lock(&adapter->bus_lock);
+ rt_mutex_lock(&adapter->bus_lock);
}
-EXPORT_SYMBOL_GPL(i2c_lock_adapter);
/**
- * i2c_trylock_adapter - Try to get exclusive access to an I2C bus segment
+ * i2c_adapter_trylock_bus - Try to get exclusive access to an I2C bus segment
* @adapter: Target I2C bus segment
+ * @flags: I2C_LOCK_ROOT_ADAPTER trylocks the root i2c adapter, I2C_LOCK_SEGMENT
+ * trylocks only this branch in the adapter tree
*/
-static int i2c_trylock_adapter(struct i2c_adapter *adapter)
+static int i2c_adapter_trylock_bus(struct i2c_adapter *adapter,
+ unsigned int flags)
{
- struct i2c_adapter *parent = i2c_parent_is_i2c_adapter(adapter);
-
- if (parent)
- return i2c_trylock_adapter(parent);
- else
- return rt_mutex_trylock(&adapter->bus_lock);
+ return rt_mutex_trylock(&adapter->bus_lock);
}
/**
- * i2c_unlock_adapter - Release exclusive access to an I2C bus segment
+ * i2c_adapter_unlock_bus - Release exclusive access to an I2C bus segment
* @adapter: Target I2C bus segment
+ * @flags: I2C_LOCK_ROOT_ADAPTER unlocks the root i2c adapter, I2C_LOCK_SEGMENT
+ * unlocks only this branch in the adapter tree
*/
-void i2c_unlock_adapter(struct i2c_adapter *adapter)
+static void i2c_adapter_unlock_bus(struct i2c_adapter *adapter,
+ unsigned int flags)
{
- struct i2c_adapter *parent = i2c_parent_is_i2c_adapter(adapter);
-
- if (parent)
- i2c_unlock_adapter(parent);
- else
- rt_mutex_unlock(&adapter->bus_lock);
+ rt_mutex_unlock(&adapter->bus_lock);
}
-EXPORT_SYMBOL_GPL(i2c_unlock_adapter);
static void i2c_dev_set_name(struct i2c_adapter *adap,
struct i2c_client *client)
@@ -1541,7 +1533,14 @@ static int i2c_register_adapter(struct i2c_adapter *adap)
return -EINVAL;
}
+ if (!adap->lock_bus) {
+ adap->lock_bus = i2c_adapter_lock_bus;
+ adap->trylock_bus = i2c_adapter_trylock_bus;
+ adap->unlock_bus = i2c_adapter_unlock_bus;
+ }
+
rt_mutex_init(&adap->bus_lock);
+ rt_mutex_init(&adap->mux_lock);
mutex_init(&adap->userspace_clients_lock);
INIT_LIST_HEAD(&adap->userspace_clients);
@@ -1559,6 +1558,7 @@ static int i2c_register_adapter(struct i2c_adapter *adap)
dev_dbg(&adap->dev, "adapter [%s] registered\n", adap->name);
pm_runtime_no_callbacks(&adap->dev);
+ pm_suspend_ignore_children(&adap->dev, true);
pm_runtime_enable(&adap->dev);
#ifdef CONFIG_I2C_COMPAT
@@ -1594,10 +1594,12 @@ static int i2c_register_adapter(struct i2c_adapter *adap)
bri->get_scl = get_scl_gpio_value;
bri->set_scl = set_scl_gpio_value;
- } else if (!bri->set_scl || !bri->get_scl) {
+ } else if (bri->recover_bus == i2c_generic_scl_recovery) {
/* Generic SCL recovery */
- dev_err(&adap->dev, "No {get|set}_gpio() found, not using recovery\n");
- adap->bus_recovery_info = NULL;
+ if (!bri->set_scl || !bri->get_scl) {
+ dev_err(&adap->dev, "No {get|set}_scl() found, not using recovery\n");
+ adap->bus_recovery_info = NULL;
+ }
}
}
@@ -2309,16 +2311,16 @@ int i2c_transfer(struct i2c_adapter *adap, struct i2c_msg *msgs, int num)
#endif
if (in_atomic() || irqs_disabled()) {
- ret = i2c_trylock_adapter(adap);
+ ret = adap->trylock_bus(adap, I2C_LOCK_SEGMENT);
if (!ret)
/* I2C activity is ongoing. */
return -EAGAIN;
} else {
- i2c_lock_adapter(adap);
+ i2c_lock_bus(adap, I2C_LOCK_SEGMENT);
}
ret = __i2c_transfer(adap, msgs, num);
- i2c_unlock_adapter(adap);
+ i2c_unlock_bus(adap, I2C_LOCK_SEGMENT);
return ret;
} else {
@@ -2646,7 +2648,7 @@ static u8 i2c_smbus_pec(u8 crc, u8 *p, size_t count)
static u8 i2c_smbus_msg_pec(u8 pec, struct i2c_msg *msg)
{
/* The address will be sent first */
- u8 addr = (msg->addr << 1) | !!(msg->flags & I2C_M_RD);
+ u8 addr = i2c_8bit_addr_from_msg(msg);
pec = i2c_smbus_pec(pec, &addr, 1);
/* The data buffer follows */
@@ -3093,7 +3095,7 @@ s32 i2c_smbus_xfer(struct i2c_adapter *adapter, u16 addr, unsigned short flags,
flags &= I2C_M_TEN | I2C_CLIENT_PEC | I2C_CLIENT_SCCB;
if (adapter->algo->smbus_xfer) {
- i2c_lock_adapter(adapter);
+ i2c_lock_bus(adapter, I2C_LOCK_SEGMENT);
/* Retry automatically on arbitration loss */
orig_jiffies = jiffies;
@@ -3107,7 +3109,7 @@ s32 i2c_smbus_xfer(struct i2c_adapter *adapter, u16 addr, unsigned short flags,
orig_jiffies + adapter->timeout))
break;
}
- i2c_unlock_adapter(adapter);
+ i2c_unlock_bus(adapter, I2C_LOCK_SEGMENT);
if (res != -EOPNOTSUPP || !adapter->algo->master_xfer)
goto trace;
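
The core now leaves bus locking to per-adapter lock_bus/trylock_bus/unlock_bus callbacks (installed at registration when the driver supplies none) and passes an explicit scope flag, so callers state whether they need the whole root adapter or only their own segment. A minimal sketch of a client grouping two transfers under the new helpers; my_chip_read_two() and its register layout are invented for illustration and are not part of this patch:

#include <linux/i2c.h>

/* Read two consecutive bytes without letting another transaction slip in
 * between the register write and the read.  Only i2c_lock_bus(),
 * __i2c_transfer() and i2c_unlock_bus() come from the patched core. */
static int my_chip_read_two(struct i2c_client *client, u8 reg, u8 buf[2])
{
	struct i2c_adapter *adap = client->adapter;
	struct i2c_msg msgs[2] = {
		{ .addr = client->addr, .len = 1, .buf = &reg },
		{ .addr = client->addr, .flags = I2C_M_RD, .len = 2, .buf = buf },
	};
	int ret;

	/* Lock this segment only; on a mux-locked mux the root adapter is
	 * not pinned for the whole group, just per transfer. */
	i2c_lock_bus(adap, I2C_LOCK_SEGMENT);
	ret = __i2c_transfer(adap, msgs, ARRAY_SIZE(msgs));
	i2c_unlock_bus(adap, I2C_LOCK_SEGMENT);

	return ret < 0 ? ret : (ret == ARRAY_SIZE(msgs) ? 0 : -EIO);
}
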
diff --git a/drivers/i2c/i2c-dev.c b/drivers/i2c/i2c-dev.c
index 0b1108d3c2f3..6ecfd76270f2 100644
--- a/drivers/i2c/i2c-dev.c
+++ b/drivers/i2c/i2c-dev.c
@@ -22,6 +22,7 @@
/* The I2C_RDWR ioctl code is written by Kolja Waschk <waschk@telos.de> */
+#include <linux/cdev.h>
#include <linux/device.h>
#include <linux/fs.h>
#include <linux/i2c-dev.h>
@@ -47,9 +48,10 @@ struct i2c_dev {
struct list_head list;
struct i2c_adapter *adap;
struct device *dev;
+ struct cdev cdev;
};
-#define I2C_MINORS 256
+#define I2C_MINORS MINORMASK
static LIST_HEAD(i2c_dev_list);
static DEFINE_SPINLOCK(i2c_dev_list_lock);
@@ -89,7 +91,7 @@ static struct i2c_dev *get_free_i2c_dev(struct i2c_adapter *adap)
return i2c_dev;
}
-static void return_i2c_dev(struct i2c_dev *i2c_dev)
+static void put_i2c_dev(struct i2c_dev *i2c_dev)
{
spin_lock(&i2c_dev_list_lock);
list_del(&i2c_dev->list);
@@ -552,6 +554,12 @@ static int i2cdev_attach_adapter(struct device *dev, void *dummy)
if (IS_ERR(i2c_dev))
return PTR_ERR(i2c_dev);
+ cdev_init(&i2c_dev->cdev, &i2cdev_fops);
+ i2c_dev->cdev.owner = THIS_MODULE;
+ res = cdev_add(&i2c_dev->cdev, MKDEV(I2C_MAJOR, adap->nr), 1);
+ if (res)
+ goto error_cdev;
+
/* register this i2c device with the driver core */
i2c_dev->dev = device_create(i2c_dev_class, &adap->dev,
MKDEV(I2C_MAJOR, adap->nr), NULL,
@@ -565,7 +573,9 @@ static int i2cdev_attach_adapter(struct device *dev, void *dummy)
adap->name, adap->nr);
return 0;
error:
- return_i2c_dev(i2c_dev);
+ cdev_del(&i2c_dev->cdev);
+error_cdev:
+ put_i2c_dev(i2c_dev);
return res;
}
@@ -582,7 +592,8 @@ static int i2cdev_detach_adapter(struct device *dev, void *dummy)
if (!i2c_dev) /* attach_adapter must have failed */
return 0;
- return_i2c_dev(i2c_dev);
+ cdev_del(&i2c_dev->cdev);
+ put_i2c_dev(i2c_dev);
device_destroy(i2c_dev_class, MKDEV(I2C_MAJOR, adap->nr));
pr_debug("i2c-dev: adapter [%s] unregistered\n", adap->name);
@@ -620,7 +631,7 @@ static int __init i2c_dev_init(void)
printk(KERN_INFO "i2c /dev entries driver\n");
- res = register_chrdev(I2C_MAJOR, "i2c", &i2cdev_fops);
+ res = register_chrdev_region(MKDEV(I2C_MAJOR, 0), I2C_MINORS, "i2c");
if (res)
goto out;
@@ -644,7 +655,7 @@ static int __init i2c_dev_init(void)
out_unreg_class:
class_destroy(i2c_dev_class);
out_unreg_chrdev:
- unregister_chrdev(I2C_MAJOR, "i2c");
+ unregister_chrdev_region(MKDEV(I2C_MAJOR, 0), I2C_MINORS);
out:
printk(KERN_ERR "%s: Driver Initialisation failed\n", __FILE__);
return res;
@@ -655,7 +666,7 @@ static void __exit i2c_dev_exit(void)
bus_unregister_notifier(&i2c_bus_type, &i2cdev_notifier);
i2c_for_each_dev(NULL, i2cdev_detach_adapter);
class_destroy(i2c_dev_class);
- unregister_chrdev(I2C_MAJOR, "i2c");
+ unregister_chrdev_region(MKDEV(I2C_MAJOR, 0), I2C_MINORS);
}
MODULE_AUTHOR("Frodo Looijaard <frodol@dds.nl> and "
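
i2c-dev switches from the old register_chrdev() catch-all to a per-adapter struct cdev on a region claimed with register_chrdev_region(), which is what allows I2C_MINORS to grow to MINORMASK. The same pattern, reduced to a standalone sketch with placeholder names (mydev, MYDEV_MAJOR and mydev_class are not from this patch):

#include <linux/cdev.h>
#include <linux/device.h>
#include <linux/fs.h>
#include <linux/module.h>

#define MYDEV_MAJOR	240		/* placeholder major number */
#define MYDEV_MINORS	MINORMASK

static struct class *mydev_class;
static const struct file_operations mydev_fops;	/* open/read/... elided */

struct mydev {
	struct cdev cdev;
	struct device *dev;
};

static int mydev_add(struct mydev *md, unsigned int minor)
{
	int res;

	/* one cdev per device instance, bound to exactly one minor */
	cdev_init(&md->cdev, &mydev_fops);
	md->cdev.owner = THIS_MODULE;
	res = cdev_add(&md->cdev, MKDEV(MYDEV_MAJOR, minor), 1);
	if (res)
		return res;

	md->dev = device_create(mydev_class, NULL, MKDEV(MYDEV_MAJOR, minor),
				NULL, "mydev-%u", minor);
	if (IS_ERR(md->dev)) {
		cdev_del(&md->cdev);
		return PTR_ERR(md->dev);
	}
	return 0;
}

static int __init mydev_init(void)
{
	int res;

	/* claim the whole minor range up front; no fops at the major level */
	res = register_chrdev_region(MKDEV(MYDEV_MAJOR, 0), MYDEV_MINORS,
				     "mydev");
	if (res)
		return res;

	mydev_class = class_create(THIS_MODULE, "mydev");
	if (IS_ERR(mydev_class)) {
		unregister_chrdev_region(MKDEV(MYDEV_MAJOR, 0), MYDEV_MINORS);
		return PTR_ERR(mydev_class);
	}
	return 0;
}
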
diff --git a/drivers/i2c/i2c-mux.c b/drivers/i2c/i2c-mux.c
index d4022878b2f0..8eee98634cda 100644
--- a/drivers/i2c/i2c-mux.c
+++ b/drivers/i2c/i2c-mux.c
@@ -31,30 +31,66 @@
struct i2c_mux_priv {
struct i2c_adapter adap;
struct i2c_algorithm algo;
-
- struct i2c_adapter *parent;
- struct device *mux_dev;
- void *mux_priv;
+ struct i2c_mux_core *muxc;
u32 chan_id;
-
- int (*select)(struct i2c_adapter *, void *mux_priv, u32 chan_id);
- int (*deselect)(struct i2c_adapter *, void *mux_priv, u32 chan_id);
};
+static int __i2c_mux_master_xfer(struct i2c_adapter *adap,
+ struct i2c_msg msgs[], int num)
+{
+ struct i2c_mux_priv *priv = adap->algo_data;
+ struct i2c_mux_core *muxc = priv->muxc;
+ struct i2c_adapter *parent = muxc->parent;
+ int ret;
+
+ /* Switch to the right mux port and perform the transfer. */
+
+ ret = muxc->select(muxc, priv->chan_id);
+ if (ret >= 0)
+ ret = __i2c_transfer(parent, msgs, num);
+ if (muxc->deselect)
+ muxc->deselect(muxc, priv->chan_id);
+
+ return ret;
+}
+
static int i2c_mux_master_xfer(struct i2c_adapter *adap,
struct i2c_msg msgs[], int num)
{
struct i2c_mux_priv *priv = adap->algo_data;
- struct i2c_adapter *parent = priv->parent;
+ struct i2c_mux_core *muxc = priv->muxc;
+ struct i2c_adapter *parent = muxc->parent;
int ret;
/* Switch to the right mux port and perform the transfer. */
- ret = priv->select(parent, priv->mux_priv, priv->chan_id);
+ ret = muxc->select(muxc, priv->chan_id);
if (ret >= 0)
- ret = __i2c_transfer(parent, msgs, num);
- if (priv->deselect)
- priv->deselect(parent, priv->mux_priv, priv->chan_id);
+ ret = i2c_transfer(parent, msgs, num);
+ if (muxc->deselect)
+ muxc->deselect(muxc, priv->chan_id);
+
+ return ret;
+}
+
+static int __i2c_mux_smbus_xfer(struct i2c_adapter *adap,
+ u16 addr, unsigned short flags,
+ char read_write, u8 command,
+ int size, union i2c_smbus_data *data)
+{
+ struct i2c_mux_priv *priv = adap->algo_data;
+ struct i2c_mux_core *muxc = priv->muxc;
+ struct i2c_adapter *parent = muxc->parent;
+ int ret;
+
+ /* Select the right mux port and perform the transfer. */
+
+ ret = muxc->select(muxc, priv->chan_id);
+ if (ret >= 0)
+ ret = parent->algo->smbus_xfer(parent, addr, flags,
+ read_write, command, size, data);
+ if (muxc->deselect)
+ muxc->deselect(muxc, priv->chan_id);
return ret;
}
@@ -65,17 +101,18 @@ static int i2c_mux_smbus_xfer(struct i2c_adapter *adap,
int size, union i2c_smbus_data *data)
{
struct i2c_mux_priv *priv = adap->algo_data;
- struct i2c_adapter *parent = priv->parent;
+ struct i2c_mux_core *muxc = priv->muxc;
+ struct i2c_adapter *parent = muxc->parent;
int ret;
/* Select the right mux port and perform the transfer. */
- ret = priv->select(parent, priv->mux_priv, priv->chan_id);
+ ret = muxc->select(muxc, priv->chan_id);
if (ret >= 0)
- ret = parent->algo->smbus_xfer(parent, addr, flags,
- read_write, command, size, data);
- if (priv->deselect)
- priv->deselect(parent, priv->mux_priv, priv->chan_id);
+ ret = i2c_smbus_xfer(parent, addr, flags,
+ read_write, command, size, data);
+ if (muxc->deselect)
+ muxc->deselect(muxc, priv->chan_id);
return ret;
}
@@ -84,7 +121,7 @@ static int i2c_mux_smbus_xfer(struct i2c_adapter *adap,
static u32 i2c_mux_functionality(struct i2c_adapter *adap)
{
struct i2c_mux_priv *priv = adap->algo_data;
- struct i2c_adapter *parent = priv->parent;
+ struct i2c_adapter *parent = priv->muxc->parent;
return parent->algo->functionality(parent);
}
@@ -102,38 +139,167 @@ static unsigned int i2c_mux_parent_classes(struct i2c_adapter *parent)
return class;
}
-struct i2c_adapter *i2c_add_mux_adapter(struct i2c_adapter *parent,
- struct device *mux_dev,
- void *mux_priv, u32 force_nr, u32 chan_id,
- unsigned int class,
- int (*select) (struct i2c_adapter *,
- void *, u32),
- int (*deselect) (struct i2c_adapter *,
- void *, u32))
+static void i2c_mux_lock_bus(struct i2c_adapter *adapter, unsigned int flags)
+{
+ struct i2c_mux_priv *priv = adapter->algo_data;
+ struct i2c_adapter *parent = priv->muxc->parent;
+
+ rt_mutex_lock(&parent->mux_lock);
+ if (!(flags & I2C_LOCK_ROOT_ADAPTER))
+ return;
+ i2c_lock_bus(parent, flags);
+}
+
+static int i2c_mux_trylock_bus(struct i2c_adapter *adapter, unsigned int flags)
+{
+ struct i2c_mux_priv *priv = adapter->algo_data;
+ struct i2c_adapter *parent = priv->muxc->parent;
+
+ if (!rt_mutex_trylock(&parent->mux_lock))
+ return 0; /* mux_lock not locked, failure */
+ if (!(flags & I2C_LOCK_ROOT_ADAPTER))
+ return 1; /* we only want mux_lock, success */
+ if (parent->trylock_bus(parent, flags))
+ return 1; /* parent locked too, success */
+ rt_mutex_unlock(&parent->mux_lock);
+ return 0; /* parent not locked, failure */
+}
+
+static void i2c_mux_unlock_bus(struct i2c_adapter *adapter, unsigned int flags)
+{
+ struct i2c_mux_priv *priv = adapter->algo_data;
+ struct i2c_adapter *parent = priv->muxc->parent;
+
+ if (flags & I2C_LOCK_ROOT_ADAPTER)
+ i2c_unlock_bus(parent, flags);
+ rt_mutex_unlock(&parent->mux_lock);
+}
+
+static void i2c_parent_lock_bus(struct i2c_adapter *adapter,
+ unsigned int flags)
+{
+ struct i2c_mux_priv *priv = adapter->algo_data;
+ struct i2c_adapter *parent = priv->muxc->parent;
+
+ rt_mutex_lock(&parent->mux_lock);
+ i2c_lock_bus(parent, flags);
+}
+
+static int i2c_parent_trylock_bus(struct i2c_adapter *adapter,
+ unsigned int flags)
+{
+ struct i2c_mux_priv *priv = adapter->algo_data;
+ struct i2c_adapter *parent = priv->muxc->parent;
+
+ if (!rt_mutex_trylock(&parent->mux_lock))
+ return 0; /* mux_lock not locked, failure */
+ if (parent->trylock_bus(parent, flags))
+ return 1; /* parent locked too, success */
+ rt_mutex_unlock(&parent->mux_lock);
+ return 0; /* parent not locked, failure */
+}
+
+static void i2c_parent_unlock_bus(struct i2c_adapter *adapter,
+ unsigned int flags)
+{
+ struct i2c_mux_priv *priv = adapter->algo_data;
+ struct i2c_adapter *parent = priv->muxc->parent;
+
+ i2c_unlock_bus(parent, flags);
+ rt_mutex_unlock(&parent->mux_lock);
+}
+
+struct i2c_adapter *i2c_root_adapter(struct device *dev)
+{
+ struct device *i2c;
+ struct i2c_adapter *i2c_root;
+
+ /*
+ * Walk up the device tree to find an i2c adapter, indicating
+ * that this is an i2c client device. Check all ancestors to
+ * handle mfd devices etc.
+ */
+ for (i2c = dev; i2c; i2c = i2c->parent) {
+ if (i2c->type == &i2c_adapter_type)
+ break;
+ }
+ if (!i2c)
+ return NULL;
+
+ /* Continue up the tree to find the root i2c adapter */
+ i2c_root = to_i2c_adapter(i2c);
+ while (i2c_parent_is_i2c_adapter(i2c_root))
+ i2c_root = i2c_parent_is_i2c_adapter(i2c_root);
+
+ return i2c_root;
+}
+EXPORT_SYMBOL_GPL(i2c_root_adapter);
+
+struct i2c_mux_core *i2c_mux_alloc(struct i2c_adapter *parent,
+ struct device *dev, int max_adapters,
+ int sizeof_priv, u32 flags,
+ int (*select)(struct i2c_mux_core *, u32),
+ int (*deselect)(struct i2c_mux_core *, u32))
+{
+ struct i2c_mux_core *muxc;
+
+ muxc = devm_kzalloc(dev, sizeof(*muxc)
+ + max_adapters * sizeof(muxc->adapter[0])
+ + sizeof_priv, GFP_KERNEL);
+ if (!muxc)
+ return NULL;
+ if (sizeof_priv)
+ muxc->priv = &muxc->adapter[max_adapters];
+
+ muxc->parent = parent;
+ muxc->dev = dev;
+ if (flags & I2C_MUX_LOCKED)
+ muxc->mux_locked = true;
+ muxc->select = select;
+ muxc->deselect = deselect;
+ muxc->max_adapters = max_adapters;
+
+ return muxc;
+}
+EXPORT_SYMBOL_GPL(i2c_mux_alloc);
+
+int i2c_mux_add_adapter(struct i2c_mux_core *muxc,
+ u32 force_nr, u32 chan_id,
+ unsigned int class)
{
+ struct i2c_adapter *parent = muxc->parent;
struct i2c_mux_priv *priv;
char symlink_name[20];
int ret;
- priv = kzalloc(sizeof(struct i2c_mux_priv), GFP_KERNEL);
+ if (muxc->num_adapters >= muxc->max_adapters) {
+ dev_err(muxc->dev, "No room for more i2c-mux adapters\n");
+ return -EINVAL;
+ }
+
+ priv = kzalloc(sizeof(*priv), GFP_KERNEL);
if (!priv)
- return NULL;
+ return -ENOMEM;
/* Set up private adapter data */
- priv->parent = parent;
- priv->mux_dev = mux_dev;
- priv->mux_priv = mux_priv;
+ priv->muxc = muxc;
priv->chan_id = chan_id;
- priv->select = select;
- priv->deselect = deselect;
/* Need to do algo dynamically because we don't know ahead
* of time what sort of physical adapter we'll be dealing with.
*/
- if (parent->algo->master_xfer)
- priv->algo.master_xfer = i2c_mux_master_xfer;
- if (parent->algo->smbus_xfer)
- priv->algo.smbus_xfer = i2c_mux_smbus_xfer;
+ if (parent->algo->master_xfer) {
+ if (muxc->mux_locked)
+ priv->algo.master_xfer = i2c_mux_master_xfer;
+ else
+ priv->algo.master_xfer = __i2c_mux_master_xfer;
+ }
+ if (parent->algo->smbus_xfer) {
+ if (muxc->mux_locked)
+ priv->algo.smbus_xfer = i2c_mux_smbus_xfer;
+ else
+ priv->algo.smbus_xfer = __i2c_mux_smbus_xfer;
+ }
priv->algo.functionality = i2c_mux_functionality;
/* Now fill out new adapter structure */
@@ -146,6 +312,15 @@ struct i2c_adapter *i2c_add_mux_adapter(struct i2c_adapter *parent,
priv->adap.retries = parent->retries;
priv->adap.timeout = parent->timeout;
priv->adap.quirks = parent->quirks;
+ if (muxc->mux_locked) {
+ priv->adap.lock_bus = i2c_mux_lock_bus;
+ priv->adap.trylock_bus = i2c_mux_trylock_bus;
+ priv->adap.unlock_bus = i2c_mux_unlock_bus;
+ } else {
+ priv->adap.lock_bus = i2c_parent_lock_bus;
+ priv->adap.trylock_bus = i2c_parent_trylock_bus;
+ priv->adap.unlock_bus = i2c_parent_unlock_bus;
+ }
/* Sanity check on class */
if (i2c_mux_parent_classes(parent) & class)
@@ -159,11 +334,11 @@ struct i2c_adapter *i2c_add_mux_adapter(struct i2c_adapter *parent,
* Try to populate the mux adapter's of_node, expands to
* nothing if !CONFIG_OF.
*/
- if (mux_dev->of_node) {
+ if (muxc->dev->of_node) {
struct device_node *child;
u32 reg;
- for_each_child_of_node(mux_dev->of_node, child) {
+ for_each_child_of_node(muxc->dev->of_node, child) {
ret = of_property_read_u32(child, "reg", &reg);
if (ret)
continue;
@@ -177,8 +352,9 @@ struct i2c_adapter *i2c_add_mux_adapter(struct i2c_adapter *parent,
/*
* Associate the mux channel with an ACPI node.
*/
- if (has_acpi_companion(mux_dev))
- acpi_preset_companion(&priv->adap.dev, ACPI_COMPANION(mux_dev),
+ if (has_acpi_companion(muxc->dev))
+ acpi_preset_companion(&priv->adap.dev,
+ ACPI_COMPANION(muxc->dev),
chan_id);
if (force_nr) {
@@ -192,35 +368,45 @@ struct i2c_adapter *i2c_add_mux_adapter(struct i2c_adapter *parent,
"failed to add mux-adapter (error=%d)\n",
ret);
kfree(priv);
- return NULL;
+ return ret;
}
- WARN(sysfs_create_link(&priv->adap.dev.kobj, &mux_dev->kobj, "mux_device"),
- "can't create symlink to mux device\n");
+ WARN(sysfs_create_link(&priv->adap.dev.kobj, &muxc->dev->kobj,
+ "mux_device"),
+ "can't create symlink to mux device\n");
snprintf(symlink_name, sizeof(symlink_name), "channel-%u", chan_id);
- WARN(sysfs_create_link(&mux_dev->kobj, &priv->adap.dev.kobj, symlink_name),
- "can't create symlink for channel %u\n", chan_id);
+ WARN(sysfs_create_link(&muxc->dev->kobj, &priv->adap.dev.kobj,
+ symlink_name),
+ "can't create symlink for channel %u\n", chan_id);
dev_info(&parent->dev, "Added multiplexed i2c bus %d\n",
i2c_adapter_id(&priv->adap));
- return &priv->adap;
+ muxc->adapter[muxc->num_adapters++] = &priv->adap;
+ return 0;
}
-EXPORT_SYMBOL_GPL(i2c_add_mux_adapter);
+EXPORT_SYMBOL_GPL(i2c_mux_add_adapter);
-void i2c_del_mux_adapter(struct i2c_adapter *adap)
+void i2c_mux_del_adapters(struct i2c_mux_core *muxc)
{
- struct i2c_mux_priv *priv = adap->algo_data;
char symlink_name[20];
- snprintf(symlink_name, sizeof(symlink_name), "channel-%u", priv->chan_id);
- sysfs_remove_link(&priv->mux_dev->kobj, symlink_name);
+ while (muxc->num_adapters) {
+ struct i2c_adapter *adap = muxc->adapter[--muxc->num_adapters];
+ struct i2c_mux_priv *priv = adap->algo_data;
+
+ muxc->adapter[muxc->num_adapters] = NULL;
+
+ snprintf(symlink_name, sizeof(symlink_name),
+ "channel-%u", priv->chan_id);
+ sysfs_remove_link(&muxc->dev->kobj, symlink_name);
- sysfs_remove_link(&priv->adap.dev.kobj, "mux_device");
- i2c_del_adapter(adap);
- kfree(priv);
+ sysfs_remove_link(&priv->adap.dev.kobj, "mux_device");
+ i2c_del_adapter(adap);
+ kfree(priv);
+ }
}
-EXPORT_SYMBOL_GPL(i2c_del_mux_adapter);
+EXPORT_SYMBOL_GPL(i2c_mux_del_adapters);
MODULE_AUTHOR("Rodolfo Giometti <giometti@linux.it>");
MODULE_DESCRIPTION("I2C driver for multiplexed I2C busses");
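
After the rewrite above, a mux driver no longer keeps its own array of child adapters: it allocates a single i2c_mux_core holding the parent, the callbacks, the adapter slots and (optionally) its private data, registers channels one by one, and tears them all down with i2c_mux_del_adapters(). A bare-bones, parent-locked user of that API could look roughly like this; the mymux hardware, its memory-mapped select register, the parent bus number and the fixed channel count are assumptions made for the sketch:

#include <linux/i2c.h>
#include <linux/i2c-mux.h>
#include <linux/io.h>
#include <linux/ioport.h>
#include <linux/module.h>
#include <linux/platform_device.h>

#define MYMUX_NCHANS	4	/* made-up channel count */

struct mymux {
	void __iomem *reg;	/* register selecting the downstream channel */
};

static int mymux_select(struct i2c_mux_core *muxc, u32 chan)
{
	struct mymux *mux = i2c_mux_priv(muxc);

	/* control path is MMIO, not the parent bus, so parent-locked is fine */
	writel(chan, mux->reg);
	return 0;
}

static int mymux_probe(struct platform_device *pdev)
{
	struct i2c_adapter *parent = i2c_get_adapter(0);	/* bus number made up */
	struct i2c_mux_core *muxc;
	struct resource *res;
	struct mymux *mux;
	int i, ret;

	if (!parent)
		return -EPROBE_DEFER;

	/* one devm allocation: core + MYMUX_NCHANS adapter slots + priv */
	muxc = i2c_mux_alloc(parent, &pdev->dev, MYMUX_NCHANS, sizeof(*mux),
			     0, mymux_select, NULL);
	if (!muxc) {
		ret = -ENOMEM;
		goto err_put;
	}
	mux = i2c_mux_priv(muxc);
	platform_set_drvdata(pdev, muxc);

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	mux->reg = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(mux->reg)) {
		ret = PTR_ERR(mux->reg);
		goto err_put;
	}

	for (i = 0; i < MYMUX_NCHANS; i++) {
		ret = i2c_mux_add_adapter(muxc, 0, i, 0);
		if (ret) {
			i2c_mux_del_adapters(muxc);
			goto err_put;
		}
	}
	return 0;

err_put:
	i2c_put_adapter(parent);
	return ret;
}

static int mymux_remove(struct platform_device *pdev)
{
	struct i2c_mux_core *muxc = platform_get_drvdata(pdev);

	i2c_mux_del_adapters(muxc);
	i2c_put_adapter(muxc->parent);
	return 0;
}
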
diff --git a/drivers/i2c/muxes/i2c-arb-gpio-challenge.c b/drivers/i2c/muxes/i2c-arb-gpio-challenge.c
index 402e3a6c671a..a90bbc4037dd 100644
--- a/drivers/i2c/muxes/i2c-arb-gpio-challenge.c
+++ b/drivers/i2c/muxes/i2c-arb-gpio-challenge.c
@@ -28,8 +28,6 @@
/**
* struct i2c_arbitrator_data - Driver data for I2C arbitrator
*
- * @parent: Parent adapter
- * @child: Child bus
* @our_gpio: GPIO we'll use to claim.
* @our_gpio_release: 0 if active high; 1 if active low; AKA if the GPIO ==
* this then consider it released.
@@ -42,8 +40,6 @@
*/
struct i2c_arbitrator_data {
- struct i2c_adapter *parent;
- struct i2c_adapter *child;
int our_gpio;
int our_gpio_release;
int their_gpio;
@@ -59,9 +55,9 @@ struct i2c_arbitrator_data {
*
* Use the GPIO-based signalling protocol; return -EBUSY if we fail.
*/
-static int i2c_arbitrator_select(struct i2c_adapter *adap, void *data, u32 chan)
+static int i2c_arbitrator_select(struct i2c_mux_core *muxc, u32 chan)
{
- const struct i2c_arbitrator_data *arb = data;
+ const struct i2c_arbitrator_data *arb = i2c_mux_priv(muxc);
unsigned long stop_retry, stop_time;
/* Start a round of trying to claim the bus */
@@ -93,7 +89,7 @@ static int i2c_arbitrator_select(struct i2c_adapter *adap, void *data, u32 chan)
/* Give up, release our claim */
gpio_set_value(arb->our_gpio, arb->our_gpio_release);
udelay(arb->slew_delay_us);
- dev_err(&adap->dev, "Could not claim bus, timeout\n");
+ dev_err(muxc->dev, "Could not claim bus, timeout\n");
return -EBUSY;
}
@@ -102,10 +98,9 @@ static int i2c_arbitrator_select(struct i2c_adapter *adap, void *data, u32 chan)
*
* Release the I2C bus using the GPIO-based signalling protocol.
*/
-static int i2c_arbitrator_deselect(struct i2c_adapter *adap, void *data,
- u32 chan)
+static int i2c_arbitrator_deselect(struct i2c_mux_core *muxc, u32 chan)
{
- const struct i2c_arbitrator_data *arb = data;
+ const struct i2c_arbitrator_data *arb = i2c_mux_priv(muxc);
/* Release the bus and wait for the other master to notice */
gpio_set_value(arb->our_gpio, arb->our_gpio_release);
@@ -119,6 +114,7 @@ static int i2c_arbitrator_probe(struct platform_device *pdev)
struct device *dev = &pdev->dev;
struct device_node *np = dev->of_node;
struct device_node *parent_np;
+ struct i2c_mux_core *muxc;
struct i2c_arbitrator_data *arb;
enum of_gpio_flags gpio_flags;
unsigned long out_init;
@@ -134,12 +130,13 @@ static int i2c_arbitrator_probe(struct platform_device *pdev)
return -EINVAL;
}
- arb = devm_kzalloc(dev, sizeof(*arb), GFP_KERNEL);
- if (!arb) {
- dev_err(dev, "Cannot allocate i2c_arbitrator_data\n");
+ muxc = i2c_mux_alloc(NULL, dev, 1, sizeof(*arb), 0,
+ i2c_arbitrator_select, i2c_arbitrator_deselect);
+ if (!muxc)
return -ENOMEM;
- }
- platform_set_drvdata(pdev, arb);
+ arb = i2c_mux_priv(muxc);
+
+ platform_set_drvdata(pdev, muxc);
/* Request GPIOs */
ret = of_get_named_gpio_flags(np, "our-claim-gpio", 0, &gpio_flags);
@@ -196,21 +193,18 @@ static int i2c_arbitrator_probe(struct platform_device *pdev)
dev_err(dev, "Cannot parse i2c-parent\n");
return -EINVAL;
}
- arb->parent = of_get_i2c_adapter_by_node(parent_np);
+ muxc->parent = of_get_i2c_adapter_by_node(parent_np);
of_node_put(parent_np);
- if (!arb->parent) {
+ if (!muxc->parent) {
dev_err(dev, "Cannot find parent bus\n");
return -EPROBE_DEFER;
}
/* Actually add the mux adapter */
- arb->child = i2c_add_mux_adapter(arb->parent, dev, arb, 0, 0, 0,
- i2c_arbitrator_select,
- i2c_arbitrator_deselect);
- if (!arb->child) {
+ ret = i2c_mux_add_adapter(muxc, 0, 0, 0);
+ if (ret) {
dev_err(dev, "Failed to add adapter\n");
- ret = -ENODEV;
- i2c_put_adapter(arb->parent);
+ i2c_put_adapter(muxc->parent);
}
return ret;
@@ -218,11 +212,10 @@ static int i2c_arbitrator_probe(struct platform_device *pdev)
static int i2c_arbitrator_remove(struct platform_device *pdev)
{
- struct i2c_arbitrator_data *arb = platform_get_drvdata(pdev);
-
- i2c_del_mux_adapter(arb->child);
- i2c_put_adapter(arb->parent);
+ struct i2c_mux_core *muxc = platform_get_drvdata(pdev);
+ i2c_mux_del_adapters(muxc);
+ i2c_put_adapter(muxc->parent);
return 0;
}
diff --git a/drivers/i2c/muxes/i2c-mux-gpio.c b/drivers/i2c/muxes/i2c-mux-gpio.c
index b8e11c16d98c..e5cf26eefa97 100644
--- a/drivers/i2c/muxes/i2c-mux-gpio.c
+++ b/drivers/i2c/muxes/i2c-mux-gpio.c
@@ -15,11 +15,10 @@
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/gpio.h>
+#include "../../gpio/gpiolib.h"
#include <linux/of_gpio.h>
struct gpiomux {
- struct i2c_adapter *parent;
- struct i2c_adapter **adap; /* child busses */
struct i2c_mux_gpio_platform_data data;
unsigned gpio_base;
};
@@ -33,18 +32,18 @@ static void i2c_mux_gpio_set(const struct gpiomux *mux, unsigned val)
val & (1 << i));
}
-static int i2c_mux_gpio_select(struct i2c_adapter *adap, void *data, u32 chan)
+static int i2c_mux_gpio_select(struct i2c_mux_core *muxc, u32 chan)
{
- struct gpiomux *mux = data;
+ struct gpiomux *mux = i2c_mux_priv(muxc);
i2c_mux_gpio_set(mux, chan);
return 0;
}
-static int i2c_mux_gpio_deselect(struct i2c_adapter *adap, void *data, u32 chan)
+static int i2c_mux_gpio_deselect(struct i2c_mux_core *muxc, u32 chan)
{
- struct gpiomux *mux = data;
+ struct gpiomux *mux = i2c_mux_priv(muxc);
i2c_mux_gpio_set(mux, mux->data.idle);
@@ -136,19 +135,16 @@ static int i2c_mux_gpio_probe_dt(struct gpiomux *mux,
static int i2c_mux_gpio_probe(struct platform_device *pdev)
{
+ struct i2c_mux_core *muxc;
struct gpiomux *mux;
struct i2c_adapter *parent;
- int (*deselect) (struct i2c_adapter *, void *, u32);
+ struct i2c_adapter *root;
unsigned initial_state, gpio_base;
int i, ret;
mux = devm_kzalloc(&pdev->dev, sizeof(*mux), GFP_KERNEL);
- if (!mux) {
- dev_err(&pdev->dev, "Cannot allocate gpiomux structure");
+ if (!mux)
return -ENOMEM;
- }
-
- platform_set_drvdata(pdev, mux);
if (!dev_get_platdata(&pdev->dev)) {
ret = i2c_mux_gpio_probe_dt(mux, pdev);
@@ -180,27 +176,32 @@ static int i2c_mux_gpio_probe(struct platform_device *pdev)
if (!parent)
return -EPROBE_DEFER;
- mux->parent = parent;
- mux->gpio_base = gpio_base;
-
- mux->adap = devm_kzalloc(&pdev->dev,
- sizeof(*mux->adap) * mux->data.n_values,
- GFP_KERNEL);
- if (!mux->adap) {
- dev_err(&pdev->dev, "Cannot allocate i2c_adapter structure");
+ muxc = i2c_mux_alloc(parent, &pdev->dev, mux->data.n_values, 0, 0,
+ i2c_mux_gpio_select, NULL);
+ if (!muxc) {
ret = -ENOMEM;
goto alloc_failed;
}
+ muxc->priv = mux;
+
+ platform_set_drvdata(pdev, muxc);
+
+ root = i2c_root_adapter(&parent->dev);
+
+ muxc->mux_locked = true;
+ mux->gpio_base = gpio_base;
if (mux->data.idle != I2C_MUX_GPIO_NO_IDLE) {
initial_state = mux->data.idle;
- deselect = i2c_mux_gpio_deselect;
+ muxc->deselect = i2c_mux_gpio_deselect;
} else {
initial_state = mux->data.values[0];
- deselect = NULL;
}
for (i = 0; i < mux->data.n_gpios; i++) {
+ struct device *gpio_dev;
+ struct gpio_desc *gpio_desc;
+
ret = gpio_request(gpio_base + mux->data.gpios[i], "i2c-mux-gpio");
if (ret) {
dev_err(&pdev->dev, "Failed to request GPIO %d\n",
@@ -217,17 +218,24 @@ static int i2c_mux_gpio_probe(struct platform_device *pdev)
i++; /* gpio_request above succeeded, so must free */
goto err_request_gpio;
}
+
+ if (!muxc->mux_locked)
+ continue;
+
+ gpio_desc = gpio_to_desc(gpio_base + mux->data.gpios[i]);
+ gpio_dev = &gpio_desc->gdev->dev;
+ muxc->mux_locked = i2c_root_adapter(gpio_dev) == root;
}
+ if (muxc->mux_locked)
+ dev_info(&pdev->dev, "mux-locked i2c mux\n");
+
for (i = 0; i < mux->data.n_values; i++) {
u32 nr = mux->data.base_nr ? (mux->data.base_nr + i) : 0;
unsigned int class = mux->data.classes ? mux->data.classes[i] : 0;
- mux->adap[i] = i2c_add_mux_adapter(parent, &pdev->dev, mux, nr,
- mux->data.values[i], class,
- i2c_mux_gpio_select, deselect);
- if (!mux->adap[i]) {
- ret = -ENODEV;
+ ret = i2c_mux_add_adapter(muxc, nr, mux->data.values[i], class);
+ if (ret) {
dev_err(&pdev->dev, "Failed to add adapter %d\n", i);
goto add_adapter_failed;
}
@@ -239,8 +247,7 @@ static int i2c_mux_gpio_probe(struct platform_device *pdev)
return 0;
add_adapter_failed:
- for (; i > 0; i--)
- i2c_del_mux_adapter(mux->adap[i - 1]);
+ i2c_mux_del_adapters(muxc);
i = mux->data.n_gpios;
err_request_gpio:
for (; i > 0; i--)
@@ -253,16 +260,16 @@ alloc_failed:
static int i2c_mux_gpio_remove(struct platform_device *pdev)
{
- struct gpiomux *mux = platform_get_drvdata(pdev);
+ struct i2c_mux_core *muxc = platform_get_drvdata(pdev);
+ struct gpiomux *mux = i2c_mux_priv(muxc);
int i;
- for (i = 0; i < mux->data.n_values; i++)
- i2c_del_mux_adapter(mux->adap[i]);
+ i2c_mux_del_adapters(muxc);
for (i = 0; i < mux->data.n_gpios; i++)
gpio_free(mux->gpio_base + mux->data.gpios[i]);
- i2c_put_adapter(mux->parent);
+ i2c_put_adapter(muxc->parent);
return 0;
}
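
i2c-mux-gpio can now declare itself mux-locked, but only when every GPIO it toggles is reached through the very I2C root it is muxing (for instance a GPIO expander on the parent bus): in that case select/deselect must themselves be ordinary locked transfers, so the root cannot stay locked across the whole operation. The probe loop above encodes this as i2c_root_adapter(gpio_dev) == root; paraphrased as a helper it amounts to the check below (the function name and the control-device array are illustrative, not driver API):

#include <linux/i2c.h>
#include <linux/i2c-mux.h>

/* mux-locked only if every control device hangs off the same i2c root
 * adapter that the mux itself feeds from; otherwise stay parent-locked. */
static bool mymux_is_mux_locked(struct i2c_adapter *parent,
				struct device **ctrl_devs, int n)
{
	struct i2c_adapter *root = i2c_root_adapter(&parent->dev);
	int i;

	for (i = 0; i < n; i++)
		if (i2c_root_adapter(ctrl_devs[i]) != root)
			return false;	/* control path avoids the muxed bus */

	return true;	/* selecting a channel needs transfers on that bus */
}
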
diff --git a/drivers/i2c/muxes/i2c-mux-pca9541.c b/drivers/i2c/muxes/i2c-mux-pca9541.c
index d0ba424adebc..3cb8af635db5 100644
--- a/drivers/i2c/muxes/i2c-mux-pca9541.c
+++ b/drivers/i2c/muxes/i2c-mux-pca9541.c
@@ -73,7 +73,7 @@
#define SELECT_DELAY_LONG 1000
struct pca9541 {
- struct i2c_adapter *mux_adap;
+ struct i2c_client *client;
unsigned long select_timeout;
unsigned long arb_timeout;
};
@@ -217,7 +217,8 @@ static const u8 pca9541_control[16] = {
*/
static int pca9541_arbitrate(struct i2c_client *client)
{
- struct pca9541 *data = i2c_get_clientdata(client);
+ struct i2c_mux_core *muxc = i2c_get_clientdata(client);
+ struct pca9541 *data = i2c_mux_priv(muxc);
int reg;
reg = pca9541_reg_read(client, PCA9541_CONTROL);
@@ -285,9 +286,10 @@ static int pca9541_arbitrate(struct i2c_client *client)
return 0;
}
-static int pca9541_select_chan(struct i2c_adapter *adap, void *client, u32 chan)
+static int pca9541_select_chan(struct i2c_mux_core *muxc, u32 chan)
{
- struct pca9541 *data = i2c_get_clientdata(client);
+ struct pca9541 *data = i2c_mux_priv(muxc);
+ struct i2c_client *client = data->client;
int ret;
unsigned long timeout = jiffies + ARB2_TIMEOUT;
/* give up after this time */
@@ -309,9 +311,11 @@ static int pca9541_select_chan(struct i2c_adapter *adap, void *client, u32 chan)
return -ETIMEDOUT;
}
-static int pca9541_release_chan(struct i2c_adapter *adap,
- void *client, u32 chan)
+static int pca9541_release_chan(struct i2c_mux_core *muxc, u32 chan)
{
+ struct pca9541 *data = i2c_mux_priv(muxc);
+ struct i2c_client *client = data->client;
+
pca9541_release_bus(client);
return 0;
}
@@ -324,20 +328,13 @@ static int pca9541_probe(struct i2c_client *client,
{
struct i2c_adapter *adap = client->adapter;
struct pca954x_platform_data *pdata = dev_get_platdata(&client->dev);
+ struct i2c_mux_core *muxc;
struct pca9541 *data;
int force;
- int ret = -ENODEV;
+ int ret;
if (!i2c_check_functionality(adap, I2C_FUNC_SMBUS_BYTE_DATA))
- goto err;
-
- data = kzalloc(sizeof(struct pca9541), GFP_KERNEL);
- if (!data) {
- ret = -ENOMEM;
- goto err;
- }
-
- i2c_set_clientdata(client, data);
+ return -ENODEV;
/*
* I2C accesses are unprotected here.
@@ -352,34 +349,33 @@ static int pca9541_probe(struct i2c_client *client,
force = 0;
if (pdata)
force = pdata->modes[0].adap_id;
- data->mux_adap = i2c_add_mux_adapter(adap, &client->dev, client,
- force, 0, 0,
- pca9541_select_chan,
- pca9541_release_chan);
+ muxc = i2c_mux_alloc(adap, &client->dev, 1, sizeof(*data), 0,
+ pca9541_select_chan, pca9541_release_chan);
+ if (!muxc)
+ return -ENOMEM;
- if (data->mux_adap == NULL) {
+ data = i2c_mux_priv(muxc);
+ data->client = client;
+
+ i2c_set_clientdata(client, muxc);
+
+ ret = i2c_mux_add_adapter(muxc, force, 0, 0);
+ if (ret) {
dev_err(&client->dev, "failed to register master selector\n");
- goto exit_free;
+ return ret;
}
dev_info(&client->dev, "registered master selector for I2C %s\n",
client->name);
return 0;
-
-exit_free:
- kfree(data);
-err:
- return ret;
}
static int pca9541_remove(struct i2c_client *client)
{
- struct pca9541 *data = i2c_get_clientdata(client);
-
- i2c_del_mux_adapter(data->mux_adap);
+ struct i2c_mux_core *muxc = i2c_get_clientdata(client);
- kfree(data);
+ i2c_mux_del_adapters(muxc);
return 0;
}
diff --git a/drivers/i2c/muxes/i2c-mux-pca954x.c b/drivers/i2c/muxes/i2c-mux-pca954x.c
index acfcef3d4068..528e755c468f 100644
--- a/drivers/i2c/muxes/i2c-mux-pca954x.c
+++ b/drivers/i2c/muxes/i2c-mux-pca954x.c
@@ -60,9 +60,10 @@ enum pca_type {
struct pca954x {
enum pca_type type;
- struct i2c_adapter *virt_adaps[PCA954X_MAX_NCHANS];
u8 last_chan; /* last register value */
+ u8 deselect;
+ struct i2c_client *client;
};
struct chip_desc {
@@ -146,10 +147,10 @@ static int pca954x_reg_write(struct i2c_adapter *adap,
return ret;
}
-static int pca954x_select_chan(struct i2c_adapter *adap,
- void *client, u32 chan)
+static int pca954x_select_chan(struct i2c_mux_core *muxc, u32 chan)
{
- struct pca954x *data = i2c_get_clientdata(client);
+ struct pca954x *data = i2c_mux_priv(muxc);
+ struct i2c_client *client = data->client;
const struct chip_desc *chip = &chips[data->type];
u8 regval;
int ret = 0;
@@ -162,21 +163,24 @@ static int pca954x_select_chan(struct i2c_adapter *adap,
/* Only select the channel if its different from the last channel */
if (data->last_chan != regval) {
- ret = pca954x_reg_write(adap, client, regval);
+ ret = pca954x_reg_write(muxc->parent, client, regval);
data->last_chan = regval;
}
return ret;
}
-static int pca954x_deselect_mux(struct i2c_adapter *adap,
- void *client, u32 chan)
+static int pca954x_deselect_mux(struct i2c_mux_core *muxc, u32 chan)
{
- struct pca954x *data = i2c_get_clientdata(client);
+ struct pca954x *data = i2c_mux_priv(muxc);
+ struct i2c_client *client = data->client;
+
+ if (!(data->deselect & (1 << chan)))
+ return 0;
/* Deselect active channel */
data->last_chan = 0;
- return pca954x_reg_write(adap, client, data->last_chan);
+ return pca954x_reg_write(muxc->parent, client, data->last_chan);
}
/*
@@ -191,17 +195,22 @@ static int pca954x_probe(struct i2c_client *client,
bool idle_disconnect_dt;
struct gpio_desc *gpio;
int num, force, class;
+ struct i2c_mux_core *muxc;
struct pca954x *data;
int ret;
if (!i2c_check_functionality(adap, I2C_FUNC_SMBUS_BYTE))
return -ENODEV;
- data = devm_kzalloc(&client->dev, sizeof(struct pca954x), GFP_KERNEL);
- if (!data)
+ muxc = i2c_mux_alloc(adap, &client->dev,
+ PCA954X_MAX_NCHANS, sizeof(*data), 0,
+ pca954x_select_chan, pca954x_deselect_mux);
+ if (!muxc)
return -ENOMEM;
+ data = i2c_mux_priv(muxc);
- i2c_set_clientdata(client, data);
+ i2c_set_clientdata(client, muxc);
+ data->client = client;
/* Get the mux out of reset if a reset GPIO is specified. */
gpio = devm_gpiod_get_optional(&client->dev, "reset", GPIOD_OUT_LOW);
@@ -238,16 +247,13 @@ static int pca954x_probe(struct i2c_client *client,
/* discard unconfigured channels */
break;
idle_disconnect_pd = pdata->modes[num].deselect_on_exit;
+ data->deselect |= (idle_disconnect_pd
+ || idle_disconnect_dt) << num;
}
- data->virt_adaps[num] =
- i2c_add_mux_adapter(adap, &client->dev, client,
- force, num, class, pca954x_select_chan,
- (idle_disconnect_pd || idle_disconnect_dt)
- ? pca954x_deselect_mux : NULL);
+ ret = i2c_mux_add_adapter(muxc, force, num, class);
- if (data->virt_adaps[num] == NULL) {
- ret = -ENODEV;
+ if (ret) {
dev_err(&client->dev,
"failed to register multiplexed adapter"
" %d as bus %d\n", num, force);
@@ -263,23 +269,15 @@ static int pca954x_probe(struct i2c_client *client,
return 0;
virt_reg_failed:
- for (num--; num >= 0; num--)
- i2c_del_mux_adapter(data->virt_adaps[num]);
+ i2c_mux_del_adapters(muxc);
return ret;
}
static int pca954x_remove(struct i2c_client *client)
{
- struct pca954x *data = i2c_get_clientdata(client);
- const struct chip_desc *chip = &chips[data->type];
- int i;
-
- for (i = 0; i < chip->nchans; ++i)
- if (data->virt_adaps[i]) {
- i2c_del_mux_adapter(data->virt_adaps[i]);
- data->virt_adaps[i] = NULL;
- }
+ struct i2c_mux_core *muxc = i2c_get_clientdata(client);
+ i2c_mux_del_adapters(muxc);
return 0;
}
@@ -287,7 +285,8 @@ static int pca954x_remove(struct i2c_client *client)
static int pca954x_resume(struct device *dev)
{
struct i2c_client *client = to_i2c_client(dev);
- struct pca954x *data = i2c_get_clientdata(client);
+ struct i2c_mux_core *muxc = i2c_get_clientdata(client);
+ struct pca954x *data = i2c_mux_priv(muxc);
data->last_chan = 0;
return i2c_smbus_write_byte(client, 0);
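
pca954x previously chose, per channel, between registering pca954x_deselect_mux and NULL; with a single deselect callback per mux core, the per-channel idle-disconnect choice moves into a bitmask that the callback consults. A condensed sketch of that shape (mymux, its deselect field and mymux_write_reg() are hypothetical stand-ins, not the driver's names):

#include <linux/bitops.h>
#include <linux/i2c-mux.h>

struct mymux {
	u8 deselect;		/* one bit per channel wanting idle-disconnect */
};

/* hypothetical hardware write, stubbed out for the sketch */
static int mymux_write_reg(struct mymux *data, u8 val)
{
	return 0;
}

static int mymux_deselect(struct i2c_mux_core *muxc, u32 chan)
{
	struct mymux *data = i2c_mux_priv(muxc);

	if (!(data->deselect & BIT(chan)))
		return 0;	/* this channel stays connected when idle */

	return mymux_write_reg(data, 0);	/* disconnect all channels */
}
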
diff --git a/drivers/i2c/muxes/i2c-mux-pinctrl.c b/drivers/i2c/muxes/i2c-mux-pinctrl.c
index b5a982ba8898..35bb775e1b74 100644
--- a/drivers/i2c/muxes/i2c-mux-pinctrl.c
+++ b/drivers/i2c/muxes/i2c-mux-pinctrl.c
@@ -24,36 +24,32 @@
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/of.h>
+#include "../../pinctrl/core.h"
struct i2c_mux_pinctrl {
- struct device *dev;
struct i2c_mux_pinctrl_platform_data *pdata;
struct pinctrl *pinctrl;
struct pinctrl_state **states;
struct pinctrl_state *state_idle;
- struct i2c_adapter *parent;
- struct i2c_adapter **busses;
};
-static int i2c_mux_pinctrl_select(struct i2c_adapter *adap, void *data,
- u32 chan)
+static int i2c_mux_pinctrl_select(struct i2c_mux_core *muxc, u32 chan)
{
- struct i2c_mux_pinctrl *mux = data;
+ struct i2c_mux_pinctrl *mux = i2c_mux_priv(muxc);
return pinctrl_select_state(mux->pinctrl, mux->states[chan]);
}
-static int i2c_mux_pinctrl_deselect(struct i2c_adapter *adap, void *data,
- u32 chan)
+static int i2c_mux_pinctrl_deselect(struct i2c_mux_core *muxc, u32 chan)
{
- struct i2c_mux_pinctrl *mux = data;
+ struct i2c_mux_pinctrl *mux = i2c_mux_priv(muxc);
return pinctrl_select_state(mux->pinctrl, mux->state_idle);
}
#ifdef CONFIG_OF
static int i2c_mux_pinctrl_parse_dt(struct i2c_mux_pinctrl *mux,
- struct platform_device *pdev)
+ struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
int num_names, i, ret;
@@ -64,15 +60,12 @@ static int i2c_mux_pinctrl_parse_dt(struct i2c_mux_pinctrl *mux,
return 0;
mux->pdata = devm_kzalloc(&pdev->dev, sizeof(*mux->pdata), GFP_KERNEL);
- if (!mux->pdata) {
- dev_err(mux->dev,
- "Cannot allocate i2c_mux_pinctrl_platform_data\n");
+ if (!mux->pdata)
return -ENOMEM;
- }
num_names = of_property_count_strings(np, "pinctrl-names");
if (num_names < 0) {
- dev_err(mux->dev, "Cannot parse pinctrl-names: %d\n",
+ dev_err(&pdev->dev, "Cannot parse pinctrl-names: %d\n",
num_names);
return num_names;
}
@@ -80,23 +73,22 @@ static int i2c_mux_pinctrl_parse_dt(struct i2c_mux_pinctrl *mux,
mux->pdata->pinctrl_states = devm_kzalloc(&pdev->dev,
sizeof(*mux->pdata->pinctrl_states) * num_names,
GFP_KERNEL);
- if (!mux->pdata->pinctrl_states) {
- dev_err(mux->dev, "Cannot allocate pinctrl_states\n");
+ if (!mux->pdata->pinctrl_states)
return -ENOMEM;
- }
for (i = 0; i < num_names; i++) {
ret = of_property_read_string_index(np, "pinctrl-names", i,
&mux->pdata->pinctrl_states[mux->pdata->bus_count]);
if (ret < 0) {
- dev_err(mux->dev, "Cannot parse pinctrl-names: %d\n",
+ dev_err(&pdev->dev, "Cannot parse pinctrl-names: %d\n",
ret);
return ret;
}
if (!strcmp(mux->pdata->pinctrl_states[mux->pdata->bus_count],
"idle")) {
if (i != num_names - 1) {
- dev_err(mux->dev, "idle state must be last\n");
+ dev_err(&pdev->dev,
+ "idle state must be last\n");
return -EINVAL;
}
mux->pdata->pinctrl_state_idle = "idle";
@@ -107,13 +99,13 @@ static int i2c_mux_pinctrl_parse_dt(struct i2c_mux_pinctrl *mux,
adapter_np = of_parse_phandle(np, "i2c-parent", 0);
if (!adapter_np) {
- dev_err(mux->dev, "Cannot parse i2c-parent\n");
+ dev_err(&pdev->dev, "Cannot parse i2c-parent\n");
return -ENODEV;
}
adapter = of_find_i2c_adapter_by_node(adapter_np);
of_node_put(adapter_np);
if (!adapter) {
- dev_err(mux->dev, "Cannot find parent bus\n");
+ dev_err(&pdev->dev, "Cannot find parent bus\n");
return -EPROBE_DEFER;
}
mux->pdata->parent_bus_num = i2c_adapter_id(adapter);
@@ -129,21 +121,38 @@ static inline int i2c_mux_pinctrl_parse_dt(struct i2c_mux_pinctrl *mux,
}
#endif
+static struct i2c_adapter *i2c_mux_pinctrl_root_adapter(
+ struct pinctrl_state *state)
+{
+ struct i2c_adapter *root = NULL;
+ struct pinctrl_setting *setting;
+ struct i2c_adapter *pin_root;
+
+ list_for_each_entry(setting, &state->settings, node) {
+ pin_root = i2c_root_adapter(setting->pctldev->dev);
+ if (!pin_root)
+ return NULL;
+ if (!root)
+ root = pin_root;
+ else if (root != pin_root)
+ return NULL;
+ }
+
+ return root;
+}
+
static int i2c_mux_pinctrl_probe(struct platform_device *pdev)
{
+ struct i2c_mux_core *muxc;
struct i2c_mux_pinctrl *mux;
- int (*deselect)(struct i2c_adapter *, void *, u32);
+ struct i2c_adapter *root;
int i, ret;
mux = devm_kzalloc(&pdev->dev, sizeof(*mux), GFP_KERNEL);
if (!mux) {
- dev_err(&pdev->dev, "Cannot allocate i2c_mux_pinctrl\n");
ret = -ENOMEM;
goto err;
}
- platform_set_drvdata(pdev, mux);
-
- mux->dev = &pdev->dev;
mux->pdata = dev_get_platdata(&pdev->dev);
if (!mux->pdata) {
@@ -166,14 +175,15 @@ static int i2c_mux_pinctrl_probe(struct platform_device *pdev)
goto err;
}
- mux->busses = devm_kzalloc(&pdev->dev,
- sizeof(*mux->busses) * mux->pdata->bus_count,
- GFP_KERNEL);
- if (!mux->busses) {
- dev_err(&pdev->dev, "Cannot allocate busses\n");
+ muxc = i2c_mux_alloc(NULL, &pdev->dev, mux->pdata->bus_count, 0, 0,
+ i2c_mux_pinctrl_select, NULL);
+ if (!muxc) {
ret = -ENOMEM;
goto err;
}
+ muxc->priv = mux;
+
+ platform_set_drvdata(pdev, muxc);
mux->pinctrl = devm_pinctrl_get(&pdev->dev);
if (IS_ERR(mux->pinctrl)) {
@@ -184,13 +194,13 @@ static int i2c_mux_pinctrl_probe(struct platform_device *pdev)
for (i = 0; i < mux->pdata->bus_count; i++) {
mux->states[i] = pinctrl_lookup_state(mux->pinctrl,
mux->pdata->pinctrl_states[i]);
- if (IS_ERR(mux->states[i])) {
- ret = PTR_ERR(mux->states[i]);
- dev_err(&pdev->dev,
- "Cannot look up pinctrl state %s: %d\n",
- mux->pdata->pinctrl_states[i], ret);
- goto err;
- }
+ if (IS_ERR(mux->states[i])) {
+ ret = PTR_ERR(mux->states[i]);
+ dev_err(&pdev->dev,
+ "Cannot look up pinctrl state %s: %d\n",
+ mux->pdata->pinctrl_states[i], ret);
+ goto err;
+ }
}
if (mux->pdata->pinctrl_state_idle) {
mux->state_idle = pinctrl_lookup_state(mux->pinctrl,
@@ -203,29 +213,39 @@ static int i2c_mux_pinctrl_probe(struct platform_device *pdev)
goto err;
}
- deselect = i2c_mux_pinctrl_deselect;
- } else {
- deselect = NULL;
+ muxc->deselect = i2c_mux_pinctrl_deselect;
}
- mux->parent = i2c_get_adapter(mux->pdata->parent_bus_num);
- if (!mux->parent) {
+ muxc->parent = i2c_get_adapter(mux->pdata->parent_bus_num);
+ if (!muxc->parent) {
dev_err(&pdev->dev, "Parent adapter (%d) not found\n",
mux->pdata->parent_bus_num);
ret = -EPROBE_DEFER;
goto err;
}
+ root = i2c_root_adapter(&muxc->parent->dev);
+
+ muxc->mux_locked = true;
+ for (i = 0; i < mux->pdata->bus_count; i++) {
+ if (root != i2c_mux_pinctrl_root_adapter(mux->states[i])) {
+ muxc->mux_locked = false;
+ break;
+ }
+ }
+ if (muxc->mux_locked && mux->pdata->pinctrl_state_idle &&
+ root != i2c_mux_pinctrl_root_adapter(mux->state_idle))
+ muxc->mux_locked = false;
+
+ if (muxc->mux_locked)
+ dev_info(&pdev->dev, "mux-locked i2c mux\n");
+
for (i = 0; i < mux->pdata->bus_count; i++) {
u32 bus = mux->pdata->base_bus_num ?
(mux->pdata->base_bus_num + i) : 0;
- mux->busses[i] = i2c_add_mux_adapter(mux->parent, &pdev->dev,
- mux, bus, i, 0,
- i2c_mux_pinctrl_select,
- deselect);
- if (!mux->busses[i]) {
- ret = -ENODEV;
+ ret = i2c_mux_add_adapter(muxc, bus, i, 0);
+ if (ret) {
dev_err(&pdev->dev, "Failed to add adapter %d\n", i);
goto err_del_adapter;
}
@@ -234,23 +254,18 @@ static int i2c_mux_pinctrl_probe(struct platform_device *pdev)
return 0;
err_del_adapter:
- for (; i > 0; i--)
- i2c_del_mux_adapter(mux->busses[i - 1]);
- i2c_put_adapter(mux->parent);
+ i2c_mux_del_adapters(muxc);
+ i2c_put_adapter(muxc->parent);
err:
return ret;
}
static int i2c_mux_pinctrl_remove(struct platform_device *pdev)
{
- struct i2c_mux_pinctrl *mux = platform_get_drvdata(pdev);
- int i;
-
- for (i = 0; i < mux->pdata->bus_count; i++)
- i2c_del_mux_adapter(mux->busses[i]);
-
- i2c_put_adapter(mux->parent);
+ struct i2c_mux_core *muxc = platform_get_drvdata(pdev);
+ i2c_mux_del_adapters(muxc);
+ i2c_put_adapter(muxc->parent);
return 0;
}
diff --git a/drivers/i2c/muxes/i2c-mux-reg.c b/drivers/i2c/muxes/i2c-mux-reg.c
index 5fbd5bd0878f..6773cadf7c9f 100644
--- a/drivers/i2c/muxes/i2c-mux-reg.c
+++ b/drivers/i2c/muxes/i2c-mux-reg.c
@@ -21,8 +21,6 @@
#include <linux/slab.h>
struct regmux {
- struct i2c_adapter *parent;
- struct i2c_adapter **adap; /* child busses */
struct i2c_mux_reg_platform_data data;
};
@@ -64,18 +62,16 @@ static int i2c_mux_reg_set(const struct regmux *mux, unsigned int chan_id)
return 0;
}
-static int i2c_mux_reg_select(struct i2c_adapter *adap, void *data,
- unsigned int chan)
+static int i2c_mux_reg_select(struct i2c_mux_core *muxc, u32 chan)
{
- struct regmux *mux = data;
+ struct regmux *mux = i2c_mux_priv(muxc);
return i2c_mux_reg_set(mux, chan);
}
-static int i2c_mux_reg_deselect(struct i2c_adapter *adap, void *data,
- unsigned int chan)
+static int i2c_mux_reg_deselect(struct i2c_mux_core *muxc, u32 chan)
{
- struct regmux *mux = data;
+ struct regmux *mux = i2c_mux_priv(muxc);
if (mux->data.idle_in_use)
return i2c_mux_reg_set(mux, mux->data.idle);
@@ -85,7 +81,7 @@ static int i2c_mux_reg_deselect(struct i2c_adapter *adap, void *data,
#ifdef CONFIG_OF
static int i2c_mux_reg_probe_dt(struct regmux *mux,
- struct platform_device *pdev)
+ struct platform_device *pdev)
{
struct device_node *np = pdev->dev.of_node;
struct device_node *adapter_np, *child;
@@ -107,7 +103,6 @@ static int i2c_mux_reg_probe_dt(struct regmux *mux,
if (!adapter)
return -EPROBE_DEFER;
- mux->parent = adapter;
mux->data.parent = i2c_adapter_id(adapter);
put_device(&adapter->dev);
@@ -161,7 +156,7 @@ static int i2c_mux_reg_probe_dt(struct regmux *mux,
}
#else
static int i2c_mux_reg_probe_dt(struct regmux *mux,
- struct platform_device *pdev)
+ struct platform_device *pdev)
{
return 0;
}
@@ -169,10 +164,10 @@ static int i2c_mux_reg_probe_dt(struct regmux *mux,
static int i2c_mux_reg_probe(struct platform_device *pdev)
{
+ struct i2c_mux_core *muxc;
struct regmux *mux;
struct i2c_adapter *parent;
struct resource *res;
- int (*deselect)(struct i2c_adapter *, void *, u32);
unsigned int class;
int i, ret, nr;
@@ -180,17 +175,9 @@ static int i2c_mux_reg_probe(struct platform_device *pdev)
if (!mux)
return -ENOMEM;
- platform_set_drvdata(pdev, mux);
-
if (dev_get_platdata(&pdev->dev)) {
memcpy(&mux->data, dev_get_platdata(&pdev->dev),
sizeof(mux->data));
-
- parent = i2c_get_adapter(mux->data.parent);
- if (!parent)
- return -EPROBE_DEFER;
-
- mux->parent = parent;
} else {
ret = i2c_mux_reg_probe_dt(mux, pdev);
if (ret < 0) {
@@ -199,6 +186,10 @@ static int i2c_mux_reg_probe(struct platform_device *pdev)
}
}
+ parent = i2c_get_adapter(mux->data.parent);
+ if (!parent)
+ return -EPROBE_DEFER;
+
if (!mux->data.reg) {
dev_info(&pdev->dev,
"Register not set, using platform resource\n");
@@ -215,55 +206,45 @@ static int i2c_mux_reg_probe(struct platform_device *pdev)
return -EINVAL;
}
- mux->adap = devm_kzalloc(&pdev->dev,
- sizeof(*mux->adap) * mux->data.n_values,
- GFP_KERNEL);
- if (!mux->adap) {
- dev_err(&pdev->dev, "Cannot allocate i2c_adapter structure");
+ muxc = i2c_mux_alloc(parent, &pdev->dev, mux->data.n_values, 0, 0,
+ i2c_mux_reg_select, NULL);
+ if (!muxc)
return -ENOMEM;
- }
+ muxc->priv = mux;
+
+ platform_set_drvdata(pdev, muxc);
if (mux->data.idle_in_use)
- deselect = i2c_mux_reg_deselect;
- else
- deselect = NULL;
+ muxc->deselect = i2c_mux_reg_deselect;
for (i = 0; i < mux->data.n_values; i++) {
nr = mux->data.base_nr ? (mux->data.base_nr + i) : 0;
class = mux->data.classes ? mux->data.classes[i] : 0;
- mux->adap[i] = i2c_add_mux_adapter(mux->parent, &pdev->dev, mux,
- nr, mux->data.values[i],
- class, i2c_mux_reg_select,
- deselect);
- if (!mux->adap[i]) {
- ret = -ENODEV;
+ ret = i2c_mux_add_adapter(muxc, nr, mux->data.values[i], class);
+ if (ret) {
dev_err(&pdev->dev, "Failed to add adapter %d\n", i);
goto add_adapter_failed;
}
}
dev_dbg(&pdev->dev, "%d port mux on %s adapter\n",
- mux->data.n_values, mux->parent->name);
+ mux->data.n_values, muxc->parent->name);
return 0;
add_adapter_failed:
- for (; i > 0; i--)
- i2c_del_mux_adapter(mux->adap[i - 1]);
+ i2c_mux_del_adapters(muxc);
return ret;
}
static int i2c_mux_reg_remove(struct platform_device *pdev)
{
- struct regmux *mux = platform_get_drvdata(pdev);
- int i;
-
- for (i = 0; i < mux->data.n_values; i++)
- i2c_del_mux_adapter(mux->adap[i]);
+ struct i2c_mux_core *muxc = platform_get_drvdata(pdev);
- i2c_put_adapter(mux->parent);
+ i2c_mux_del_adapters(muxc);
+ i2c_put_adapter(muxc->parent);
return 0;
}
diff --git a/drivers/ide/ide-disk.c b/drivers/ide/ide-disk.c
index 37a8a907febe..05dbcce70b0e 100644
--- a/drivers/ide/ide-disk.c
+++ b/drivers/ide/ide-disk.c
@@ -522,7 +522,7 @@ static int ide_do_setfeature(ide_drive_t *drive, u8 feature, u8 nsect)
static void update_flush(ide_drive_t *drive)
{
u16 *id = drive->id;
- unsigned flush = 0;
+ bool wc = false;
if (drive->dev_flags & IDE_DFLAG_WCACHE) {
unsigned long long capacity;
@@ -546,12 +546,12 @@ static void update_flush(ide_drive_t *drive)
drive->name, barrier ? "" : "not ");
if (barrier) {
- flush = REQ_FLUSH;
+ wc = true;
blk_queue_prep_rq(drive->queue, idedisk_prep_fn);
}
}
- blk_queue_flush(drive->queue, flush);
+ blk_queue_write_cache(drive->queue, wc, false);
}
ide_devset_get_flag(wcache, IDE_DFLAG_WCACHE);
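
The ide-disk hunk follows the block-layer API change in which per-queue flush capabilities are no longer passed as REQ_FLUSH/REQ_FUA bits to blk_queue_flush() but as two booleans, write cache and FUA, to blk_queue_write_cache(); ide-disk never advertised FUA, hence the constant false. For hardware reporting both features the equivalent call is simply the following (sketch only, mydisk_configure_cache() is an invented wrapper):

#include <linux/blkdev.h>

/* Tell the block layer whether the device has a volatile write cache and
 * whether it honours FUA writes; replaces blk_queue_flush(q, flush_bits). */
static void mydisk_configure_cache(struct request_queue *q,
				   bool has_wcache, bool has_fua)
{
	blk_queue_write_cache(q, has_wcache, has_fua);
}
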
diff --git a/drivers/idle/intel_idle.c b/drivers/idle/intel_idle.c
index c6935de425fa..c96649292b55 100644
--- a/drivers/idle/intel_idle.c
+++ b/drivers/idle/intel_idle.c
@@ -766,6 +766,67 @@ static struct cpuidle_state knl_cstates[] = {
.enter = NULL }
};
+static struct cpuidle_state bxt_cstates[] = {
+ {
+ .name = "C1-BXT",
+ .desc = "MWAIT 0x00",
+ .flags = MWAIT2flg(0x00),
+ .exit_latency = 2,
+ .target_residency = 2,
+ .enter = &intel_idle,
+ .enter_freeze = intel_idle_freeze, },
+ {
+ .name = "C1E-BXT",
+ .desc = "MWAIT 0x01",
+ .flags = MWAIT2flg(0x01),
+ .exit_latency = 10,
+ .target_residency = 20,
+ .enter = &intel_idle,
+ .enter_freeze = intel_idle_freeze, },
+ {
+ .name = "C6-BXT",
+ .desc = "MWAIT 0x20",
+ .flags = MWAIT2flg(0x20) | CPUIDLE_FLAG_TLB_FLUSHED,
+ .exit_latency = 133,
+ .target_residency = 133,
+ .enter = &intel_idle,
+ .enter_freeze = intel_idle_freeze, },
+ {
+ .name = "C7s-BXT",
+ .desc = "MWAIT 0x31",
+ .flags = MWAIT2flg(0x31) | CPUIDLE_FLAG_TLB_FLUSHED,
+ .exit_latency = 155,
+ .target_residency = 155,
+ .enter = &intel_idle,
+ .enter_freeze = intel_idle_freeze, },
+ {
+ .name = "C8-BXT",
+ .desc = "MWAIT 0x40",
+ .flags = MWAIT2flg(0x40) | CPUIDLE_FLAG_TLB_FLUSHED,
+ .exit_latency = 1000,
+ .target_residency = 1000,
+ .enter = &intel_idle,
+ .enter_freeze = intel_idle_freeze, },
+ {
+ .name = "C9-BXT",
+ .desc = "MWAIT 0x50",
+ .flags = MWAIT2flg(0x50) | CPUIDLE_FLAG_TLB_FLUSHED,
+ .exit_latency = 2000,
+ .target_residency = 2000,
+ .enter = &intel_idle,
+ .enter_freeze = intel_idle_freeze, },
+ {
+ .name = "C10-BXT",
+ .desc = "MWAIT 0x60",
+ .flags = MWAIT2flg(0x60) | CPUIDLE_FLAG_TLB_FLUSHED,
+ .exit_latency = 10000,
+ .target_residency = 10000,
+ .enter = &intel_idle,
+ .enter_freeze = intel_idle_freeze, },
+ {
+ .enter = NULL }
+};
+
/**
* intel_idle
* @dev: cpuidle_device
@@ -950,6 +1011,11 @@ static const struct idle_cpu idle_cpu_knl = {
.state_table = knl_cstates,
};
+static const struct idle_cpu idle_cpu_bxt = {
+ .state_table = bxt_cstates,
+ .disable_promotion_to_c1e = true,
+};
+
#define ICPU(model, cpu) \
{ X86_VENDOR_INTEL, 6, model, X86_FEATURE_MWAIT, (unsigned long)&cpu }
@@ -985,6 +1051,7 @@ static const struct x86_cpu_id intel_idle_ids[] __initconst = {
ICPU(0x9e, idle_cpu_skl),
ICPU(0x55, idle_cpu_skx),
ICPU(0x57, idle_cpu_knl),
+ ICPU(0x5c, idle_cpu_bxt),
{}
};
MODULE_DEVICE_TABLE(x86cpu, intel_idle_ids);
@@ -1075,6 +1142,73 @@ static void ivt_idle_state_table_update(void)
/* else, 1 and 2 socket systems use default ivt_cstates */
}
+
+/*
+ * Translate IRTL (Interrupt Response Time Limit) MSR to usec
+ */
+
+static unsigned int irtl_ns_units[] = {
+ 1, 32, 1024, 32768, 1048576, 33554432, 0, 0 };
+
+static unsigned long long irtl_2_usec(unsigned long long irtl)
+{
+ unsigned long long ns;
+
+ ns = irtl_ns_units[(irtl >> 10) & 0x3];
+
+ return div64_u64((irtl & 0x3FF) * ns, 1000);
+}
+/*
+ * bxt_idle_state_table_update(void)
+ *
+ * On BXT, we trust the IRTL to show the definitive maximum latency
+ * We use the same value for target_residency.
+ */
+static void bxt_idle_state_table_update(void)
+{
+ unsigned long long msr;
+
+ rdmsrl(MSR_PKGC6_IRTL, msr);
+ if (msr) {
+ unsigned int usec = irtl_2_usec(msr);
+
+ bxt_cstates[2].exit_latency = usec;
+ bxt_cstates[2].target_residency = usec;
+ }
+
+ rdmsrl(MSR_PKGC7_IRTL, msr);
+ if (msr) {
+ unsigned int usec = irtl_2_usec(msr);
+
+ bxt_cstates[3].exit_latency = usec;
+ bxt_cstates[3].target_residency = usec;
+ }
+
+ rdmsrl(MSR_PKGC8_IRTL, msr);
+ if (msr) {
+ unsigned int usec = irtl_2_usec(msr);
+
+ bxt_cstates[4].exit_latency = usec;
+ bxt_cstates[4].target_residency = usec;
+ }
+
+ rdmsrl(MSR_PKGC9_IRTL, msr);
+ if (msr) {
+ unsigned int usec = irtl_2_usec(msr);
+
+ bxt_cstates[5].exit_latency = usec;
+ bxt_cstates[5].target_residency = usec;
+ }
+
+ rdmsrl(MSR_PKGC10_IRTL, msr);
+ if (msr) {
+ unsigned int usec = irtl_2_usec(msr);
+
+ bxt_cstates[6].exit_latency = usec;
+ bxt_cstates[6].target_residency = usec;
+ }
+
+}
/*
* sklh_idle_state_table_update(void)
*
@@ -1130,6 +1264,9 @@ static void intel_idle_state_table_update(void)
case 0x3e: /* IVT */
ivt_idle_state_table_update();
break;
+ case 0x5c: /* BXT */
+ bxt_idle_state_table_update();
+ break;
case 0x5e: /* SKL-H */
sklh_idle_state_table_update();
break;
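
bxt_idle_state_table_update() overrides the table defaults with whatever the package C-state IRTL MSRs report, converted by irtl_2_usec(): the low 10 bits of the MSR hold a tick count and the next field selects a nanosecond unit from irtl_ns_units[]. A quick worked example of the conversion, with a made-up MSR value:

/* Worked example for irtl_2_usec() (MSR value invented):
 *   msr  = (2 << 10) | 500   -> time-unit index 2, tick count 500
 *   ns   = irtl_ns_units[2]  -> 1024 ns per tick
 *   usec = 500 * 1024 / 1000 -> 512
 * so a populated MSR_PKGC6_IRTL of that value would set both
 * bxt_cstates[2].exit_latency and .target_residency to 512 us.
 */
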
diff --git a/drivers/iio/accel/Kconfig b/drivers/iio/accel/Kconfig
index b0d3ecf3318b..e4a758cd7d35 100644
--- a/drivers/iio/accel/Kconfig
+++ b/drivers/iio/accel/Kconfig
@@ -64,7 +64,7 @@ config IIO_ST_ACCEL_3AXIS
help
Say yes here to build support for STMicroelectronics accelerometers:
LSM303DLH, LSM303DLHC, LIS3DH, LSM330D, LSM330DL, LSM330DLC,
- LIS331DLH, LSM303DL, LSM303DLM, LSM330, LIS2DH12.
+ LIS331DLH, LSM303DL, LSM303DLM, LSM330, LIS2DH12, H3LIS331DL.
This driver can also be built as a module. If so, these modules
will be created:
@@ -143,7 +143,8 @@ config MMA8452
select IIO_TRIGGERED_BUFFER
help
Say yes here to build support for the following Freescale 3-axis
- accelerometers: MMA8451Q, MMA8452Q, MMA8453Q, MMA8652FC, MMA8653FC.
+ accelerometers: MMA8451Q, MMA8452Q, MMA8453Q, MMA8652FC, MMA8653FC,
+ FXLS8471Q.
To compile this driver as a module, choose M here: the module
will be called mma8452.
diff --git a/drivers/iio/accel/bmc150-accel-core.c b/drivers/iio/accel/bmc150-accel-core.c
index 2072a31e813b..197e693e7e7b 100644
--- a/drivers/iio/accel/bmc150-accel-core.c
+++ b/drivers/iio/accel/bmc150-accel-core.c
@@ -25,7 +25,6 @@
#include <linux/delay.h>
#include <linux/slab.h>
#include <linux/acpi.h>
-#include <linux/gpio/consumer.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include <linux/iio/iio.h>
@@ -138,6 +137,7 @@ enum bmc150_accel_axis {
AXIS_X,
AXIS_Y,
AXIS_Z,
+ AXIS_MAX,
};
enum bmc150_power_modes {
@@ -188,7 +188,6 @@ enum bmc150_accel_trigger_id {
struct bmc150_accel_data {
struct regmap *regmap;
- struct device *dev;
int irq;
struct bmc150_accel_interrupt interrupts[BMC150_ACCEL_INTERRUPTS];
atomic_t active_intr;
@@ -246,16 +245,18 @@ static const struct {
{500000, BMC150_ACCEL_SLEEP_500_MS},
{1000000, BMC150_ACCEL_SLEEP_1_SEC} };
-static const struct regmap_config bmc150_i2c_regmap_conf = {
+const struct regmap_config bmc150_regmap_conf = {
.reg_bits = 8,
.val_bits = 8,
.max_register = 0x3f,
};
+EXPORT_SYMBOL_GPL(bmc150_regmap_conf);
static int bmc150_accel_set_mode(struct bmc150_accel_data *data,
enum bmc150_power_modes mode,
int dur_us)
{
+ struct device *dev = regmap_get_device(data->regmap);
int i;
int ret;
u8 lpw_bits;
@@ -279,11 +280,11 @@ static int bmc150_accel_set_mode(struct bmc150_accel_data *data,
lpw_bits = mode << BMC150_ACCEL_PMU_MODE_SHIFT;
lpw_bits |= (dur_val << BMC150_ACCEL_PMU_BIT_SLEEP_DUR_SHIFT);
- dev_dbg(data->dev, "Set Mode bits %x\n", lpw_bits);
+ dev_dbg(dev, "Set Mode bits %x\n", lpw_bits);
ret = regmap_write(data->regmap, BMC150_ACCEL_REG_PMU_LPW, lpw_bits);
if (ret < 0) {
- dev_err(data->dev, "Error writing reg_pmu_lpw\n");
+ dev_err(dev, "Error writing reg_pmu_lpw\n");
return ret;
}
@@ -316,23 +317,24 @@ static int bmc150_accel_set_bw(struct bmc150_accel_data *data, int val,
static int bmc150_accel_update_slope(struct bmc150_accel_data *data)
{
+ struct device *dev = regmap_get_device(data->regmap);
int ret;
ret = regmap_write(data->regmap, BMC150_ACCEL_REG_INT_6,
data->slope_thres);
if (ret < 0) {
- dev_err(data->dev, "Error writing reg_int_6\n");
+ dev_err(dev, "Error writing reg_int_6\n");
return ret;
}
ret = regmap_update_bits(data->regmap, BMC150_ACCEL_REG_INT_5,
BMC150_ACCEL_SLOPE_DUR_MASK, data->slope_dur);
if (ret < 0) {
- dev_err(data->dev, "Error updating reg_int_5\n");
+ dev_err(dev, "Error updating reg_int_5\n");
return ret;
}
- dev_dbg(data->dev, "%s: %x %x\n", __func__, data->slope_thres,
+ dev_dbg(dev, "%s: %x %x\n", __func__, data->slope_thres,
data->slope_dur);
return ret;
@@ -378,20 +380,21 @@ static int bmc150_accel_get_startup_times(struct bmc150_accel_data *data)
static int bmc150_accel_set_power_state(struct bmc150_accel_data *data, bool on)
{
+ struct device *dev = regmap_get_device(data->regmap);
int ret;
if (on) {
- ret = pm_runtime_get_sync(data->dev);
+ ret = pm_runtime_get_sync(dev);
} else {
- pm_runtime_mark_last_busy(data->dev);
- ret = pm_runtime_put_autosuspend(data->dev);
+ pm_runtime_mark_last_busy(dev);
+ ret = pm_runtime_put_autosuspend(dev);
}
if (ret < 0) {
- dev_err(data->dev,
+ dev_err(dev,
"Failed: bmc150_accel_set_power_state for %d\n", on);
if (on)
- pm_runtime_put_noidle(data->dev);
+ pm_runtime_put_noidle(dev);
return ret;
}
@@ -445,6 +448,7 @@ static void bmc150_accel_interrupts_setup(struct iio_dev *indio_dev,
static int bmc150_accel_set_interrupt(struct bmc150_accel_data *data, int i,
bool state)
{
+ struct device *dev = regmap_get_device(data->regmap);
struct bmc150_accel_interrupt *intr = &data->interrupts[i];
const struct bmc150_accel_interrupt_info *info = intr->info;
int ret;
@@ -474,7 +478,7 @@ static int bmc150_accel_set_interrupt(struct bmc150_accel_data *data, int i,
ret = regmap_update_bits(data->regmap, info->map_reg, info->map_bitmask,
(state ? info->map_bitmask : 0));
if (ret < 0) {
- dev_err(data->dev, "Error updating reg_int_map\n");
+ dev_err(dev, "Error updating reg_int_map\n");
goto out_fix_power_state;
}
@@ -482,7 +486,7 @@ static int bmc150_accel_set_interrupt(struct bmc150_accel_data *data, int i,
ret = regmap_update_bits(data->regmap, info->en_reg, info->en_bitmask,
(state ? info->en_bitmask : 0));
if (ret < 0) {
- dev_err(data->dev, "Error updating reg_int_en\n");
+ dev_err(dev, "Error updating reg_int_en\n");
goto out_fix_power_state;
}
@@ -500,6 +504,7 @@ out_fix_power_state:
static int bmc150_accel_set_scale(struct bmc150_accel_data *data, int val)
{
+ struct device *dev = regmap_get_device(data->regmap);
int ret, i;
for (i = 0; i < ARRAY_SIZE(data->chip_info->scale_table); ++i) {
@@ -508,8 +513,7 @@ static int bmc150_accel_set_scale(struct bmc150_accel_data *data, int val)
BMC150_ACCEL_REG_PMU_RANGE,
data->chip_info->scale_table[i].reg_range);
if (ret < 0) {
- dev_err(data->dev,
- "Error writing pmu_range\n");
+ dev_err(dev, "Error writing pmu_range\n");
return ret;
}
@@ -523,6 +527,7 @@ static int bmc150_accel_set_scale(struct bmc150_accel_data *data, int val)
static int bmc150_accel_get_temp(struct bmc150_accel_data *data, int *val)
{
+ struct device *dev = regmap_get_device(data->regmap);
int ret;
unsigned int value;
@@ -530,7 +535,7 @@ static int bmc150_accel_get_temp(struct bmc150_accel_data *data, int *val)
ret = regmap_read(data->regmap, BMC150_ACCEL_REG_TEMP, &value);
if (ret < 0) {
- dev_err(data->dev, "Error reading reg_temp\n");
+ dev_err(dev, "Error reading reg_temp\n");
mutex_unlock(&data->mutex);
return ret;
}
@@ -545,6 +550,7 @@ static int bmc150_accel_get_axis(struct bmc150_accel_data *data,
struct iio_chan_spec const *chan,
int *val)
{
+ struct device *dev = regmap_get_device(data->regmap);
int ret;
int axis = chan->scan_index;
__le16 raw_val;
@@ -559,7 +565,7 @@ static int bmc150_accel_get_axis(struct bmc150_accel_data *data,
ret = regmap_bulk_read(data->regmap, BMC150_ACCEL_AXIS_TO_REG(axis),
&raw_val, sizeof(raw_val));
if (ret < 0) {
- dev_err(data->dev, "Error reading axis %d\n", axis);
+ dev_err(dev, "Error reading axis %d\n", axis);
bmc150_accel_set_power_state(data, false);
mutex_unlock(&data->mutex);
return ret;
@@ -831,6 +837,7 @@ static int bmc150_accel_set_watermark(struct iio_dev *indio_dev, unsigned val)
static int bmc150_accel_fifo_transfer(struct bmc150_accel_data *data,
char *buffer, int samples)
{
+ struct device *dev = regmap_get_device(data->regmap);
int sample_length = 3 * 2;
int ret;
int total_length = samples * sample_length;
@@ -854,7 +861,8 @@ static int bmc150_accel_fifo_transfer(struct bmc150_accel_data *data,
}
if (ret)
- dev_err(data->dev, "Error transferring data from fifo in single steps of %zu\n",
+ dev_err(dev,
+ "Error transferring data from fifo in single steps of %zu\n",
step);
return ret;
@@ -864,6 +872,7 @@ static int __bmc150_accel_fifo_flush(struct iio_dev *indio_dev,
unsigned samples, bool irq)
{
struct bmc150_accel_data *data = iio_priv(indio_dev);
+ struct device *dev = regmap_get_device(data->regmap);
int ret, i;
u8 count;
u16 buffer[BMC150_ACCEL_FIFO_LENGTH * 3];
@@ -873,7 +882,7 @@ static int __bmc150_accel_fifo_flush(struct iio_dev *indio_dev,
ret = regmap_read(data->regmap, BMC150_ACCEL_REG_FIFO_STATUS, &val);
if (ret < 0) {
- dev_err(data->dev, "Error reading reg_fifo_status\n");
+ dev_err(dev, "Error reading reg_fifo_status\n");
return ret;
}
@@ -1105,27 +1114,23 @@ static const struct iio_info bmc150_accel_info_fifo = {
.driver_module = THIS_MODULE,
};
+static const unsigned long bmc150_accel_scan_masks[] = {
+ BIT(AXIS_X) | BIT(AXIS_Y) | BIT(AXIS_Z),
+ 0};
+
static irqreturn_t bmc150_accel_trigger_handler(int irq, void *p)
{
struct iio_poll_func *pf = p;
struct iio_dev *indio_dev = pf->indio_dev;
struct bmc150_accel_data *data = iio_priv(indio_dev);
- int bit, ret, i = 0;
- unsigned int raw_val;
+ int ret;
mutex_lock(&data->mutex);
- for_each_set_bit(bit, indio_dev->active_scan_mask,
- indio_dev->masklength) {
- ret = regmap_bulk_read(data->regmap,
- BMC150_ACCEL_AXIS_TO_REG(bit), &raw_val,
- 2);
- if (ret < 0) {
- mutex_unlock(&data->mutex);
- goto err_read;
- }
- data->buffer[i++] = raw_val;
- }
+ ret = regmap_bulk_read(data->regmap, BMC150_ACCEL_REG_XOUT_L,
+ data->buffer, AXIS_MAX * 2);
mutex_unlock(&data->mutex);
+ if (ret < 0)
+ goto err_read;
iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
pf->timestamp);
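As a hedged aside on the bulk read above: the handler now fills the scan buffer in one regmap transfer and the IIO core appends the timestamp. A minimal sketch of the resulting per-scan layout, assuming the usual three 16-bit storage channels plus alignment padding (the struct and field names are illustrative, not taken from the driver):

#include <stdint.h>

/* Illustrative per-scan layout; the driver itself uses a plain s16 array. */
struct bmc150_scan_sketch {
	uint16_t axis[3];	/* X, Y, Z, read back-to-back from XOUT_L */
	uint16_t pad;		/* pads the data up to the 8-byte timestamp alignment */
	int64_t timestamp;	/* appended by iio_push_to_buffers_with_timestamp() */
};
/* One scan is therefore 16 bytes: 6 bytes of samples, 2 of padding, 8 of timestamp. */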
@@ -1139,6 +1144,7 @@ static int bmc150_accel_trig_try_reen(struct iio_trigger *trig)
{
struct bmc150_accel_trigger *t = iio_trigger_get_drvdata(trig);
struct bmc150_accel_data *data = t->data;
+ struct device *dev = regmap_get_device(data->regmap);
int ret;
/* new data interrupts don't need ack */
@@ -1152,8 +1158,7 @@ static int bmc150_accel_trig_try_reen(struct iio_trigger *trig)
BMC150_ACCEL_INT_MODE_LATCH_RESET);
mutex_unlock(&data->mutex);
if (ret < 0) {
- dev_err(data->dev,
- "Error writing reg_int_rst_latch\n");
+ dev_err(dev, "Error writing reg_int_rst_latch\n");
return ret;
}
@@ -1204,13 +1209,14 @@ static const struct iio_trigger_ops bmc150_accel_trigger_ops = {
static int bmc150_accel_handle_roc_event(struct iio_dev *indio_dev)
{
struct bmc150_accel_data *data = iio_priv(indio_dev);
+ struct device *dev = regmap_get_device(data->regmap);
int dir;
int ret;
unsigned int val;
ret = regmap_read(data->regmap, BMC150_ACCEL_REG_INT_STATUS_2, &val);
if (ret < 0) {
- dev_err(data->dev, "Error reading reg_int_status_2\n");
+ dev_err(dev, "Error reading reg_int_status_2\n");
return ret;
}
@@ -1253,6 +1259,7 @@ static irqreturn_t bmc150_accel_irq_thread_handler(int irq, void *private)
{
struct iio_dev *indio_dev = private;
struct bmc150_accel_data *data = iio_priv(indio_dev);
+ struct device *dev = regmap_get_device(data->regmap);
bool ack = false;
int ret;
@@ -1276,7 +1283,7 @@ static irqreturn_t bmc150_accel_irq_thread_handler(int irq, void *private)
BMC150_ACCEL_INT_MODE_LATCH_INT |
BMC150_ACCEL_INT_MODE_LATCH_RESET);
if (ret)
- dev_err(data->dev, "Error writing reg_int_rst_latch\n");
+ dev_err(dev, "Error writing reg_int_rst_latch\n");
ret = IRQ_HANDLED;
} else {
@@ -1347,13 +1354,14 @@ static void bmc150_accel_unregister_triggers(struct bmc150_accel_data *data,
static int bmc150_accel_triggers_setup(struct iio_dev *indio_dev,
struct bmc150_accel_data *data)
{
+ struct device *dev = regmap_get_device(data->regmap);
int i, ret;
for (i = 0; i < BMC150_ACCEL_TRIGGERS; i++) {
struct bmc150_accel_trigger *t = &data->triggers[i];
- t->indio_trig = devm_iio_trigger_alloc(data->dev,
- bmc150_accel_triggers[i].name,
+ t->indio_trig = devm_iio_trigger_alloc(dev,
+ bmc150_accel_triggers[i].name,
indio_dev->name,
indio_dev->id);
if (!t->indio_trig) {
@@ -1361,7 +1369,7 @@ static int bmc150_accel_triggers_setup(struct iio_dev *indio_dev,
break;
}
- t->indio_trig->dev.parent = data->dev;
+ t->indio_trig->dev.parent = dev;
t->indio_trig->ops = &bmc150_accel_trigger_ops;
t->intr = bmc150_accel_triggers[i].intr;
t->data = data;
@@ -1385,12 +1393,13 @@ static int bmc150_accel_triggers_setup(struct iio_dev *indio_dev,
static int bmc150_accel_fifo_set_mode(struct bmc150_accel_data *data)
{
+ struct device *dev = regmap_get_device(data->regmap);
u8 reg = BMC150_ACCEL_REG_FIFO_CONFIG1;
int ret;
ret = regmap_write(data->regmap, reg, data->fifo_mode);
if (ret < 0) {
- dev_err(data->dev, "Error writing reg_fifo_config1\n");
+ dev_err(dev, "Error writing reg_fifo_config1\n");
return ret;
}
@@ -1400,7 +1409,7 @@ static int bmc150_accel_fifo_set_mode(struct bmc150_accel_data *data)
ret = regmap_write(data->regmap, BMC150_ACCEL_REG_FIFO_CONFIG0,
data->watermark);
if (ret < 0)
- dev_err(data->dev, "Error writing reg_fifo_config0\n");
+ dev_err(dev, "Error writing reg_fifo_config0\n");
return ret;
}
@@ -1484,17 +1493,17 @@ static const struct iio_buffer_setup_ops bmc150_accel_buffer_ops = {
static int bmc150_accel_chip_init(struct bmc150_accel_data *data)
{
+ struct device *dev = regmap_get_device(data->regmap);
int ret, i;
unsigned int val;
ret = regmap_read(data->regmap, BMC150_ACCEL_REG_CHIP_ID, &val);
if (ret < 0) {
- dev_err(data->dev,
- "Error: Reading chip id\n");
+ dev_err(dev, "Error: Reading chip id\n");
return ret;
}
- dev_dbg(data->dev, "Chip Id %x\n", val);
+ dev_dbg(dev, "Chip Id %x\n", val);
for (i = 0; i < ARRAY_SIZE(bmc150_accel_chip_info_tbl); i++) {
if (bmc150_accel_chip_info_tbl[i].chip_id == val) {
data->chip_info = &bmc150_accel_chip_info_tbl[i];
@@ -1503,7 +1512,7 @@ static int bmc150_accel_chip_init(struct bmc150_accel_data *data)
}
if (!data->chip_info) {
- dev_err(data->dev, "Invalid chip %x\n", val);
+ dev_err(dev, "Invalid chip %x\n", val);
return -ENODEV;
}
@@ -1520,8 +1529,7 @@ static int bmc150_accel_chip_init(struct bmc150_accel_data *data)
ret = regmap_write(data->regmap, BMC150_ACCEL_REG_PMU_RANGE,
BMC150_ACCEL_DEF_RANGE_4G);
if (ret < 0) {
- dev_err(data->dev,
- "Error writing reg_pmu_range\n");
+ dev_err(dev, "Error writing reg_pmu_range\n");
return ret;
}
@@ -1539,8 +1547,7 @@ static int bmc150_accel_chip_init(struct bmc150_accel_data *data)
BMC150_ACCEL_INT_MODE_LATCH_INT |
BMC150_ACCEL_INT_MODE_LATCH_RESET);
if (ret < 0) {
- dev_err(data->dev,
- "Error writing reg_int_rst_latch\n");
+ dev_err(dev, "Error writing reg_int_rst_latch\n");
return ret;
}
@@ -1560,7 +1567,6 @@ int bmc150_accel_core_probe(struct device *dev, struct regmap *regmap, int irq,
data = iio_priv(indio_dev);
dev_set_drvdata(dev, indio_dev);
- data->dev = dev;
data->irq = irq;
data->regmap = regmap;
@@ -1575,6 +1581,7 @@ int bmc150_accel_core_probe(struct device *dev, struct regmap *regmap, int irq,
indio_dev->channels = data->chip_info->channels;
indio_dev->num_channels = data->chip_info->num_channels;
indio_dev->name = name ? name : data->chip_info->name;
+ indio_dev->available_scan_masks = bmc150_accel_scan_masks;
indio_dev->modes = INDIO_DIRECT_MODE;
indio_dev->info = &bmc150_accel_info;
@@ -1583,13 +1590,13 @@ int bmc150_accel_core_probe(struct device *dev, struct regmap *regmap, int irq,
bmc150_accel_trigger_handler,
&bmc150_accel_buffer_ops);
if (ret < 0) {
- dev_err(data->dev, "Failed: iio triggered buffer setup\n");
+ dev_err(dev, "Failed: iio triggered buffer setup\n");
return ret;
}
if (data->irq > 0) {
ret = devm_request_threaded_irq(
- data->dev, data->irq,
+ dev, data->irq,
bmc150_accel_irq_handler,
bmc150_accel_irq_thread_handler,
IRQF_TRIGGER_RISING,
@@ -1607,7 +1614,7 @@ int bmc150_accel_core_probe(struct device *dev, struct regmap *regmap, int irq,
ret = regmap_write(data->regmap, BMC150_ACCEL_REG_INT_RST_LATCH,
BMC150_ACCEL_INT_MODE_LATCH_RESET);
if (ret < 0) {
- dev_err(data->dev, "Error writing reg_int_rst_latch\n");
+ dev_err(dev, "Error writing reg_int_rst_latch\n");
goto err_buffer_cleanup;
}
@@ -1656,9 +1663,9 @@ int bmc150_accel_core_remove(struct device *dev)
iio_device_unregister(indio_dev);
- pm_runtime_disable(data->dev);
- pm_runtime_set_suspended(data->dev);
- pm_runtime_put_noidle(data->dev);
+ pm_runtime_disable(dev);
+ pm_runtime_set_suspended(dev);
+ pm_runtime_put_noidle(dev);
bmc150_accel_unregister_triggers(data, BMC150_ACCEL_TRIGGERS - 1);
@@ -1707,7 +1714,7 @@ static int bmc150_accel_runtime_suspend(struct device *dev)
struct bmc150_accel_data *data = iio_priv(indio_dev);
int ret;
- dev_dbg(data->dev, __func__);
+ dev_dbg(dev, "%s\n", __func__);
ret = bmc150_accel_set_mode(data, BMC150_ACCEL_SLEEP_MODE_SUSPEND, 0);
if (ret < 0)
return -EAGAIN;
@@ -1722,7 +1729,7 @@ static int bmc150_accel_runtime_resume(struct device *dev)
int ret;
int sleep_val;
- dev_dbg(data->dev, __func__);
+ dev_dbg(dev, "%s\n", __func__);
ret = bmc150_accel_set_mode(data, BMC150_ACCEL_SLEEP_MODE_NORMAL, 0);
if (ret < 0)
diff --git a/drivers/iio/accel/bmc150-accel-i2c.c b/drivers/iio/accel/bmc150-accel-i2c.c
index b41404ba32fc..8ca8041267ef 100644
--- a/drivers/iio/accel/bmc150-accel-i2c.c
+++ b/drivers/iio/accel/bmc150-accel-i2c.c
@@ -28,11 +28,6 @@
#include "bmc150-accel.h"
-static const struct regmap_config bmc150_i2c_regmap_conf = {
- .reg_bits = 8,
- .val_bits = 8,
-};
-
static int bmc150_accel_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
@@ -43,7 +38,7 @@ static int bmc150_accel_probe(struct i2c_client *client,
i2c_check_functionality(client->adapter,
I2C_FUNC_SMBUS_READ_I2C_BLOCK);
- regmap = devm_regmap_init_i2c(client, &bmc150_i2c_regmap_conf);
+ regmap = devm_regmap_init_i2c(client, &bmc150_regmap_conf);
if (IS_ERR(regmap)) {
dev_err(&client->dev, "Failed to initialize i2c regmap\n");
return PTR_ERR(regmap);
diff --git a/drivers/iio/accel/bmc150-accel-spi.c b/drivers/iio/accel/bmc150-accel-spi.c
index 16b66f2a7204..006794a70a1f 100644
--- a/drivers/iio/accel/bmc150-accel-spi.c
+++ b/drivers/iio/accel/bmc150-accel-spi.c
@@ -25,18 +25,12 @@
#include "bmc150-accel.h"
-static const struct regmap_config bmc150_spi_regmap_conf = {
- .reg_bits = 8,
- .val_bits = 8,
- .max_register = 0x3f,
-};
-
static int bmc150_accel_probe(struct spi_device *spi)
{
struct regmap *regmap;
const struct spi_device_id *id = spi_get_device_id(spi);
- regmap = devm_regmap_init_spi(spi, &bmc150_spi_regmap_conf);
+ regmap = devm_regmap_init_spi(spi, &bmc150_regmap_conf);
if (IS_ERR(regmap)) {
dev_err(&spi->dev, "Failed to initialize spi regmap\n");
return PTR_ERR(regmap);
diff --git a/drivers/iio/accel/bmc150-accel.h b/drivers/iio/accel/bmc150-accel.h
index ba0335987f94..38a8b11f8c19 100644
--- a/drivers/iio/accel/bmc150-accel.h
+++ b/drivers/iio/accel/bmc150-accel.h
@@ -16,5 +16,6 @@ int bmc150_accel_core_probe(struct device *dev, struct regmap *regmap, int irq,
const char *name, bool block_supported);
int bmc150_accel_core_remove(struct device *dev);
extern const struct dev_pm_ops bmc150_accel_pm_ops;
+extern const struct regmap_config bmc150_regmap_conf;
#endif /* _BMC150_ACCEL_H_ */
diff --git a/drivers/iio/accel/kxcjk-1013.c b/drivers/iio/accel/kxcjk-1013.c
index edec1d099e91..bfe219a8bea2 100644
--- a/drivers/iio/accel/kxcjk-1013.c
+++ b/drivers/iio/accel/kxcjk-1013.c
@@ -20,7 +20,6 @@
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/acpi.h>
-#include <linux/gpio/consumer.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include <linux/iio/iio.h>
@@ -115,6 +114,7 @@ enum kxcjk1013_axis {
AXIS_X,
AXIS_Y,
AXIS_Z,
+ AXIS_MAX,
};
enum kxcjk1013_mode {
@@ -922,7 +922,7 @@ static const struct iio_event_spec kxcjk1013_event = {
.realbits = 12, \
.storagebits = 16, \
.shift = 4, \
- .endianness = IIO_CPU, \
+ .endianness = IIO_LE, \
}, \
.event_spec = &kxcjk1013_event, \
.num_event_specs = 1 \
@@ -953,25 +953,23 @@ static const struct iio_info kxcjk1013_info = {
.driver_module = THIS_MODULE,
};
+static const unsigned long kxcjk1013_scan_masks[] = {0x7, 0};
+
static irqreturn_t kxcjk1013_trigger_handler(int irq, void *p)
{
struct iio_poll_func *pf = p;
struct iio_dev *indio_dev = pf->indio_dev;
struct kxcjk1013_data *data = iio_priv(indio_dev);
- int bit, ret, i = 0;
+ int ret;
mutex_lock(&data->mutex);
-
- for_each_set_bit(bit, indio_dev->active_scan_mask,
- indio_dev->masklength) {
- ret = kxcjk1013_get_acc_reg(data, bit);
- if (ret < 0) {
- mutex_unlock(&data->mutex);
- goto err;
- }
- data->buffer[i++] = ret;
- }
+ ret = i2c_smbus_read_i2c_block_data_or_emulated(data->client,
+ KXCJK1013_REG_XOUT_L,
+ AXIS_MAX * 2,
+ (u8 *)data->buffer);
mutex_unlock(&data->mutex);
+ if (ret < 0)
+ goto err;
iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
data->timestamp);
@@ -1204,6 +1202,7 @@ static int kxcjk1013_probe(struct i2c_client *client,
indio_dev->dev.parent = &client->dev;
indio_dev->channels = kxcjk1013_channels;
indio_dev->num_channels = ARRAY_SIZE(kxcjk1013_channels);
+ indio_dev->available_scan_masks = kxcjk1013_scan_masks;
indio_dev->name = name;
indio_dev->modes = INDIO_DIRECT_MODE;
indio_dev->info = &kxcjk1013_info;
diff --git a/drivers/iio/accel/mma7455_core.c b/drivers/iio/accel/mma7455_core.c
index c633cc2c0789..c902f54c23f5 100644
--- a/drivers/iio/accel/mma7455_core.c
+++ b/drivers/iio/accel/mma7455_core.c
@@ -55,11 +55,11 @@
struct mma7455_data {
struct regmap *regmap;
- struct device *dev;
};
static int mma7455_drdy(struct mma7455_data *mma7455)
{
+ struct device *dev = regmap_get_device(mma7455->regmap);
unsigned int reg;
int tries = 3;
int ret;
@@ -75,7 +75,7 @@ static int mma7455_drdy(struct mma7455_data *mma7455)
msleep(20);
}
- dev_warn(mma7455->dev, "data not ready\n");
+ dev_warn(dev, "data not ready\n");
return -EIO;
}
@@ -260,7 +260,6 @@ int mma7455_core_probe(struct device *dev, struct regmap *regmap,
dev_set_drvdata(dev, indio_dev);
mma7455 = iio_priv(indio_dev);
mma7455->regmap = regmap;
- mma7455->dev = dev;
indio_dev->info = &mma7455_info;
indio_dev->name = name;
diff --git a/drivers/iio/accel/mma8452.c b/drivers/iio/accel/mma8452.c
index 7f4994f32a90..e225d3c53bd5 100644
--- a/drivers/iio/accel/mma8452.c
+++ b/drivers/iio/accel/mma8452.c
@@ -6,6 +6,7 @@
* MMA8453Q (10 bit)
* MMA8652FC (12 bit)
* MMA8653FC (10 bit)
+ * FXLS8471Q (14 bit)
*
* Copyright 2015 Martin Kepplinger <martin.kepplinger@theobroma-systems.com>
* Copyright 2014 Peter Meerwald <pmeerw@pmeerw.net>
@@ -16,7 +17,7 @@
*
* 7-bit I2C slave address 0x1c/0x1d (pin selectable)
*
- * TODO: orientation events, autosleep
+ * TODO: orientation events
*/
#include <linux/module.h>
@@ -31,6 +32,7 @@
#include <linux/delay.h>
#include <linux/of_device.h>
#include <linux/of_irq.h>
+#include <linux/pm_runtime.h>
#define MMA8452_STATUS 0x00
#define MMA8452_STATUS_DRDY (BIT(2) | BIT(1) | BIT(0))
@@ -91,6 +93,9 @@
#define MMA8453_DEVICE_ID 0x3a
#define MMA8652_DEVICE_ID 0x4a
#define MMA8653_DEVICE_ID 0x5a
+#define FXLS8471_DEVICE_ID 0x6a
+
+#define MMA8452_AUTO_SUSPEND_DELAY_MS 2000
struct mma8452_data {
struct i2c_client *client;
@@ -172,6 +177,31 @@ static int mma8452_drdy(struct mma8452_data *data)
return -EIO;
}
+static int mma8452_set_runtime_pm_state(struct i2c_client *client, bool on)
+{
+#ifdef CONFIG_PM
+ int ret;
+
+ if (on) {
+ ret = pm_runtime_get_sync(&client->dev);
+ } else {
+ pm_runtime_mark_last_busy(&client->dev);
+ ret = pm_runtime_put_autosuspend(&client->dev);
+ }
+
+ if (ret < 0) {
+ dev_err(&client->dev,
+ "failed to change power state to %d\n", on);
+ if (on)
+ pm_runtime_put_noidle(&client->dev);
+
+ return ret;
+ }
+#endif
+
+ return 0;
+}
+
static int mma8452_read(struct mma8452_data *data, __be16 buf[3])
{
int ret = mma8452_drdy(data);
@@ -179,8 +209,16 @@ static int mma8452_read(struct mma8452_data *data, __be16 buf[3])
if (ret < 0)
return ret;
- return i2c_smbus_read_i2c_block_data(data->client, MMA8452_OUT_X,
- 3 * sizeof(__be16), (u8 *)buf);
+ ret = mma8452_set_runtime_pm_state(data->client, true);
+ if (ret)
+ return ret;
+
+ ret = i2c_smbus_read_i2c_block_data(data->client, MMA8452_OUT_X,
+ 3 * sizeof(__be16), (u8 *)buf);
+
+ ret = mma8452_set_runtime_pm_state(data->client, false);
+
+ return ret;
}
static ssize_t mma8452_show_int_plus_micros(char *buf, const int (*vals)[2],
@@ -357,7 +395,8 @@ static int mma8452_read_raw(struct iio_dev *indio_dev,
return IIO_VAL_INT_PLUS_MICRO;
case IIO_CHAN_INFO_CALIBBIAS:
ret = i2c_smbus_read_byte_data(data->client,
- MMA8452_OFF_X + chan->scan_index);
+ MMA8452_OFF_X +
+ chan->scan_index);
if (ret < 0)
return ret;
@@ -392,24 +431,47 @@ static int mma8452_active(struct mma8452_data *data)
data->ctrl_reg1);
}
+/* returns >0 if active, 0 if in standby and <0 on error */
+static int mma8452_is_active(struct mma8452_data *data)
+{
+ int reg;
+
+ reg = i2c_smbus_read_byte_data(data->client, MMA8452_CTRL_REG1);
+ if (reg < 0)
+ return reg;
+
+ return reg & MMA8452_CTRL_ACTIVE;
+}
+
static int mma8452_change_config(struct mma8452_data *data, u8 reg, u8 val)
{
int ret;
+ int is_active;
mutex_lock(&data->lock);
- /* config can only be changed when in standby */
- ret = mma8452_standby(data);
- if (ret < 0)
+ is_active = mma8452_is_active(data);
+ if (is_active < 0) {
+ ret = is_active;
goto fail;
+ }
+
+ /* config can only be changed when in standby */
+ if (is_active > 0) {
+ ret = mma8452_standby(data);
+ if (ret < 0)
+ goto fail;
+ }
ret = i2c_smbus_write_byte_data(data->client, reg, val);
if (ret < 0)
goto fail;
- ret = mma8452_active(data);
- if (ret < 0)
- goto fail;
+ if (is_active > 0) {
+ ret = mma8452_active(data);
+ if (ret < 0)
+ goto fail;
+ }
ret = 0;
fail:
@@ -418,7 +480,7 @@ fail:
return ret;
}
-/* returns >0 if in freefall mode, 0 if not or <0 if an error occured */
+/* returns >0 if in freefall mode, 0 if not or <0 if an error occurred */
static int mma8452_freefall_mode_enabled(struct mma8452_data *data)
{
int val;
@@ -668,7 +730,8 @@ static int mma8452_read_event_config(struct iio_dev *indio_dev,
if (ret < 0)
return ret;
- return !!(ret & BIT(chan->scan_index + chip->ev_cfg_chan_shift));
+ return !!(ret & BIT(chan->scan_index +
+ chip->ev_cfg_chan_shift));
default:
return -EINVAL;
}
@@ -682,7 +745,11 @@ static int mma8452_write_event_config(struct iio_dev *indio_dev,
{
struct mma8452_data *data = iio_priv(indio_dev);
const struct mma_chip_info *chip = data->chip_info;
- int val;
+ int val, ret;
+
+ ret = mma8452_set_runtime_pm_state(data->client, state);
+ if (ret)
+ return ret;
switch (dir) {
case IIO_EV_DIR_FALLING:
@@ -990,6 +1057,7 @@ enum {
mma8453,
mma8652,
mma8653,
+ fxls8471,
};
static const struct mma_chip_info mma_chip_info_table[] = {
@@ -1003,7 +1071,7 @@ static const struct mma_chip_info mma_chip_info_table[] = {
* bit.
* The userspace interface uses m/s^2 and we declare micro units
* So scale factor for 12 bit here is given by:
- * g * N * 1000000 / 2048 for N = 2, 4, 8 and g=9.80665
+ * g * N * 1000000 / 2048 for N = 2, 4, 8 and g=9.80665
*/
.mma_scales = { {0, 2394}, {0, 4788}, {0, 9577} },
.ev_cfg = MMA8452_TRANSIENT_CFG,
@@ -1081,6 +1149,22 @@ static const struct mma_chip_info mma_chip_info_table[] = {
.ev_ths_mask = MMA8452_FF_MT_THS_MASK,
.ev_count = MMA8452_FF_MT_COUNT,
},
+ [fxls8471] = {
+ .chip_id = FXLS8471_DEVICE_ID,
+ .channels = mma8451_channels,
+ .num_channels = ARRAY_SIZE(mma8451_channels),
+ .mma_scales = { {0, 2394}, {0, 4788}, {0, 9577} },
+ .ev_cfg = MMA8452_TRANSIENT_CFG,
+ .ev_cfg_ele = MMA8452_TRANSIENT_CFG_ELE,
+ .ev_cfg_chan_shift = 1,
+ .ev_src = MMA8452_TRANSIENT_SRC,
+ .ev_src_xe = MMA8452_TRANSIENT_SRC_XTRANSE,
+ .ev_src_ye = MMA8452_TRANSIENT_SRC_YTRANSE,
+ .ev_src_ze = MMA8452_TRANSIENT_SRC_ZTRANSE,
+ .ev_ths = MMA8452_TRANSIENT_THS,
+ .ev_ths_mask = MMA8452_TRANSIENT_THS_MASK,
+ .ev_count = MMA8452_TRANSIENT_COUNT,
+ },
};
static struct attribute *mma8452_attributes[] = {
@@ -1114,7 +1198,11 @@ static int mma8452_data_rdy_trigger_set_state(struct iio_trigger *trig,
{
struct iio_dev *indio_dev = iio_trigger_get_drvdata(trig);
struct mma8452_data *data = iio_priv(indio_dev);
- int reg;
+ int reg, ret;
+
+ ret = mma8452_set_runtime_pm_state(data->client, state);
+ if (ret)
+ return ret;
reg = i2c_smbus_read_byte_data(data->client, MMA8452_CTRL_REG4);
if (reg < 0)
@@ -1206,6 +1294,7 @@ static const struct of_device_id mma8452_dt_ids[] = {
{ .compatible = "fsl,mma8453", .data = &mma_chip_info_table[mma8453] },
{ .compatible = "fsl,mma8652", .data = &mma_chip_info_table[mma8652] },
{ .compatible = "fsl,mma8653", .data = &mma_chip_info_table[mma8653] },
+ { .compatible = "fsl,fxls8471", .data = &mma_chip_info_table[fxls8471] },
{ }
};
MODULE_DEVICE_TABLE(of, mma8452_dt_ids);
@@ -1243,6 +1332,7 @@ static int mma8452_probe(struct i2c_client *client,
case MMA8453_DEVICE_ID:
case MMA8652_DEVICE_ID:
case MMA8653_DEVICE_ID:
+ case FXLS8471_DEVICE_ID:
if (ret == data->chip_info->chip_id)
break;
default:
@@ -1340,6 +1430,15 @@ static int mma8452_probe(struct i2c_client *client,
goto buffer_cleanup;
}
+ ret = pm_runtime_set_active(&client->dev);
+ if (ret < 0)
+ goto buffer_cleanup;
+
+ pm_runtime_enable(&client->dev);
+ pm_runtime_set_autosuspend_delay(&client->dev,
+ MMA8452_AUTO_SUSPEND_DELAY_MS);
+ pm_runtime_use_autosuspend(&client->dev);
+
ret = iio_device_register(indio_dev);
if (ret < 0)
goto buffer_cleanup;
@@ -1364,6 +1463,11 @@ static int mma8452_remove(struct i2c_client *client)
struct iio_dev *indio_dev = i2c_get_clientdata(client);
iio_device_unregister(indio_dev);
+
+ pm_runtime_disable(&client->dev);
+ pm_runtime_set_suspended(&client->dev);
+ pm_runtime_put_noidle(&client->dev);
+
iio_triggered_buffer_cleanup(indio_dev);
mma8452_trigger_cleanup(indio_dev);
mma8452_standby(iio_priv(indio_dev));
@@ -1371,6 +1475,45 @@ static int mma8452_remove(struct i2c_client *client)
return 0;
}
+#ifdef CONFIG_PM
+static int mma8452_runtime_suspend(struct device *dev)
+{
+ struct iio_dev *indio_dev = i2c_get_clientdata(to_i2c_client(dev));
+ struct mma8452_data *data = iio_priv(indio_dev);
+ int ret;
+
+ mutex_lock(&data->lock);
+ ret = mma8452_standby(data);
+ mutex_unlock(&data->lock);
+ if (ret < 0) {
+ dev_err(&data->client->dev, "powering off device failed\n");
+ return -EAGAIN;
+ }
+
+ return 0;
+}
+
+static int mma8452_runtime_resume(struct device *dev)
+{
+ struct iio_dev *indio_dev = i2c_get_clientdata(to_i2c_client(dev));
+ struct mma8452_data *data = iio_priv(indio_dev);
+ int ret, sleep_val;
+
+ ret = mma8452_active(data);
+ if (ret < 0)
+ return ret;
+
+ ret = mma8452_get_odr_index(data);
+ sleep_val = 1000 / mma8452_samp_freq[ret][0];
+ if (sleep_val < 20)
+ usleep_range(sleep_val * 1000, 20000);
+ else
+ msleep_interruptible(sleep_val);
+
+ return 0;
+}
+#endif
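As a rough worked example of the settle delay in mma8452_runtime_resume() above (the frequencies are assumed from the driver's sampling-frequency table; only the integer part enters the division):

#include <stdio.h>

/* Illustrative only: settle time of roughly one output-data-rate period. */
int main(void)
{
	int odr_int[] = { 800, 100, 50, 12, 1 };	/* assumed integer ODR values, Hz */
	int i;

	for (i = 0; i < 5; i++) {
		int sleep_ms = 1000 / odr_int[i];

		printf("%3d Hz -> %4d ms -> %s\n", odr_int[i], sleep_ms,
		       sleep_ms < 20 ? "usleep_range()" : "msleep_interruptible()");
	}
	return 0;
}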
+
#ifdef CONFIG_PM_SLEEP
static int mma8452_suspend(struct device *dev)
{
@@ -1383,18 +1526,21 @@ static int mma8452_resume(struct device *dev)
return mma8452_active(iio_priv(i2c_get_clientdata(
to_i2c_client(dev))));
}
-
-static SIMPLE_DEV_PM_OPS(mma8452_pm_ops, mma8452_suspend, mma8452_resume);
-#define MMA8452_PM_OPS (&mma8452_pm_ops)
-#else
-#define MMA8452_PM_OPS NULL
#endif
+static const struct dev_pm_ops mma8452_pm_ops = {
+ SET_SYSTEM_SLEEP_PM_OPS(mma8452_suspend, mma8452_resume)
+ SET_RUNTIME_PM_OPS(mma8452_runtime_suspend,
+ mma8452_runtime_resume, NULL)
+};
+
static const struct i2c_device_id mma8452_id[] = {
+ { "mma8451", mma8451 },
{ "mma8452", mma8452 },
{ "mma8453", mma8453 },
{ "mma8652", mma8652 },
{ "mma8653", mma8653 },
+ { "fxls8471", fxls8471 },
{ }
};
MODULE_DEVICE_TABLE(i2c, mma8452_id);
@@ -1403,7 +1549,7 @@ static struct i2c_driver mma8452_driver = {
.driver = {
.name = "mma8452",
.of_match_table = of_match_ptr(mma8452_dt_ids),
- .pm = MMA8452_PM_OPS,
+ .pm = &mma8452_pm_ops,
},
.probe = mma8452_probe,
.remove = mma8452_remove,
diff --git a/drivers/iio/accel/mma9553.c b/drivers/iio/accel/mma9553.c
index fa7d36217c4b..bb05f3efddca 100644
--- a/drivers/iio/accel/mma9553.c
+++ b/drivers/iio/accel/mma9553.c
@@ -17,7 +17,6 @@
#include <linux/interrupt.h>
#include <linux/slab.h>
#include <linux/acpi.h>
-#include <linux/gpio/consumer.h>
#include <linux/iio/iio.h>
#include <linux/iio/sysfs.h>
#include <linux/iio/events.h>
diff --git a/drivers/iio/accel/mxc4005.c b/drivers/iio/accel/mxc4005.c
index e72e218c2696..c23f47af7256 100644
--- a/drivers/iio/accel/mxc4005.c
+++ b/drivers/iio/accel/mxc4005.c
@@ -17,7 +17,6 @@
#include <linux/i2c.h>
#include <linux/iio/iio.h>
#include <linux/acpi.h>
-#include <linux/gpio/consumer.h>
#include <linux/regmap.h>
#include <linux/iio/sysfs.h>
#include <linux/iio/trigger.h>
@@ -380,31 +379,6 @@ static const struct iio_trigger_ops mxc4005_trigger_ops = {
.owner = THIS_MODULE,
};
-static int mxc4005_gpio_probe(struct i2c_client *client,
- struct mxc4005_data *data)
-{
- struct device *dev;
- struct gpio_desc *gpio;
- int ret;
-
- if (!client)
- return -EINVAL;
-
- dev = &client->dev;
-
- gpio = devm_gpiod_get_index(dev, "mxc4005_int", 0, GPIOD_IN);
- if (IS_ERR(gpio)) {
- dev_err(dev, "failed to get acpi gpio index\n");
- return PTR_ERR(gpio);
- }
-
- ret = gpiod_to_irq(gpio);
-
- dev_dbg(dev, "GPIO resource, no:%d irq:%d\n", desc_to_gpio(gpio), ret);
-
- return ret;
-}
-
static int mxc4005_chip_init(struct mxc4005_data *data)
{
int ret;
@@ -470,9 +444,6 @@ static int mxc4005_probe(struct i2c_client *client,
return ret;
}
- if (client->irq < 0)
- client->irq = mxc4005_gpio_probe(client, data);
-
if (client->irq > 0) {
data->dready_trig = devm_iio_trigger_alloc(&client->dev,
"%s-dev%d",
diff --git a/drivers/iio/accel/st_accel.h b/drivers/iio/accel/st_accel.h
index 5d4a1897b293..57f83a67948c 100644
--- a/drivers/iio/accel/st_accel.h
+++ b/drivers/iio/accel/st_accel.h
@@ -14,6 +14,7 @@
#include <linux/types.h>
#include <linux/iio/common/st_sensors.h>
+#define H3LIS331DL_DRIVER_NAME "h3lis331dl_accel"
#define LIS3LV02DL_ACCEL_DEV_NAME "lis3lv02dl_accel"
#define LSM303DLHC_ACCEL_DEV_NAME "lsm303dlhc_accel"
#define LIS3DH_ACCEL_DEV_NAME "lis3dh"
diff --git a/drivers/iio/accel/st_accel_core.c b/drivers/iio/accel/st_accel_core.c
index a03a1417dd63..dc73f2d85e6d 100644
--- a/drivers/iio/accel/st_accel_core.c
+++ b/drivers/iio/accel/st_accel_core.c
@@ -39,6 +39,9 @@
#define ST_ACCEL_FS_AVL_6G 6
#define ST_ACCEL_FS_AVL_8G 8
#define ST_ACCEL_FS_AVL_16G 16
+#define ST_ACCEL_FS_AVL_100G 100
+#define ST_ACCEL_FS_AVL_200G 200
+#define ST_ACCEL_FS_AVL_400G 400
/* CUSTOM VALUES FOR SENSOR 1 */
#define ST_ACCEL_1_WAI_EXP 0x33
@@ -96,6 +99,8 @@
#define ST_ACCEL_2_DRDY_IRQ_INT2_MASK 0x10
#define ST_ACCEL_2_IHL_IRQ_ADDR 0x22
#define ST_ACCEL_2_IHL_IRQ_MASK 0x80
+#define ST_ACCEL_2_OD_IRQ_ADDR 0x22
+#define ST_ACCEL_2_OD_IRQ_MASK 0x40
#define ST_ACCEL_2_MULTIREAD_BIT true
/* CUSTOM VALUES FOR SENSOR 3 */
@@ -177,10 +182,39 @@
#define ST_ACCEL_5_DRDY_IRQ_INT2_MASK 0x20
#define ST_ACCEL_5_IHL_IRQ_ADDR 0x22
#define ST_ACCEL_5_IHL_IRQ_MASK 0x80
+#define ST_ACCEL_5_OD_IRQ_ADDR 0x22
+#define ST_ACCEL_5_OD_IRQ_MASK 0x40
#define ST_ACCEL_5_IG1_EN_ADDR 0x21
#define ST_ACCEL_5_IG1_EN_MASK 0x08
#define ST_ACCEL_5_MULTIREAD_BIT false
+/* CUSTOM VALUES FOR SENSOR 6 */
+#define ST_ACCEL_6_WAI_EXP 0x32
+#define ST_ACCEL_6_ODR_ADDR 0x20
+#define ST_ACCEL_6_ODR_MASK 0x18
+#define ST_ACCEL_6_ODR_AVL_50HZ_VAL 0x00
+#define ST_ACCEL_6_ODR_AVL_100HZ_VAL 0x01
+#define ST_ACCEL_6_ODR_AVL_400HZ_VAL 0x02
+#define ST_ACCEL_6_ODR_AVL_1000HZ_VAL 0x03
+#define ST_ACCEL_6_PW_ADDR 0x20
+#define ST_ACCEL_6_PW_MASK 0x20
+#define ST_ACCEL_6_FS_ADDR 0x23
+#define ST_ACCEL_6_FS_MASK 0x30
+#define ST_ACCEL_6_FS_AVL_100_VAL 0x00
+#define ST_ACCEL_6_FS_AVL_200_VAL 0x01
+#define ST_ACCEL_6_FS_AVL_400_VAL 0x03
+#define ST_ACCEL_6_FS_AVL_100_GAIN IIO_G_TO_M_S_2(49000)
+#define ST_ACCEL_6_FS_AVL_200_GAIN IIO_G_TO_M_S_2(98000)
+#define ST_ACCEL_6_FS_AVL_400_GAIN IIO_G_TO_M_S_2(195000)
+#define ST_ACCEL_6_BDU_ADDR 0x23
+#define ST_ACCEL_6_BDU_MASK 0x80
+#define ST_ACCEL_6_DRDY_IRQ_ADDR 0x22
+#define ST_ACCEL_6_DRDY_IRQ_INT1_MASK 0x02
+#define ST_ACCEL_6_DRDY_IRQ_INT2_MASK 0x10
+#define ST_ACCEL_6_IHL_IRQ_ADDR 0x22
+#define ST_ACCEL_6_IHL_IRQ_MASK 0x80
+#define ST_ACCEL_6_MULTIREAD_BIT true
+
static const struct iio_chan_spec st_accel_8bit_channels[] = {
ST_SENSORS_LSM_CHANNELS(IIO_ACCEL,
BIT(IIO_CHAN_INFO_RAW) | BIT(IIO_CHAN_INFO_SCALE),
@@ -302,6 +336,7 @@ static const struct st_sensor_settings st_accel_sensors_settings[] = {
.mask_int2 = ST_ACCEL_1_DRDY_IRQ_INT2_MASK,
.addr_ihl = ST_ACCEL_1_IHL_IRQ_ADDR,
.mask_ihl = ST_ACCEL_1_IHL_IRQ_MASK,
+ .addr_stat_drdy = ST_SENSORS_DEFAULT_STAT_ADDR,
},
.multi_read_bit = ST_ACCEL_1_MULTIREAD_BIT,
.bootime = 2,
@@ -367,6 +402,9 @@ static const struct st_sensor_settings st_accel_sensors_settings[] = {
.mask_int2 = ST_ACCEL_2_DRDY_IRQ_INT2_MASK,
.addr_ihl = ST_ACCEL_2_IHL_IRQ_ADDR,
.mask_ihl = ST_ACCEL_2_IHL_IRQ_MASK,
+ .addr_od = ST_ACCEL_2_OD_IRQ_ADDR,
+ .mask_od = ST_ACCEL_2_OD_IRQ_MASK,
+ .addr_stat_drdy = ST_SENSORS_DEFAULT_STAT_ADDR,
},
.multi_read_bit = ST_ACCEL_2_MULTIREAD_BIT,
.bootime = 2,
@@ -444,6 +482,7 @@ static const struct st_sensor_settings st_accel_sensors_settings[] = {
.mask_int2 = ST_ACCEL_3_DRDY_IRQ_INT2_MASK,
.addr_ihl = ST_ACCEL_3_IHL_IRQ_ADDR,
.mask_ihl = ST_ACCEL_3_IHL_IRQ_MASK,
+ .addr_stat_drdy = ST_SENSORS_DEFAULT_STAT_ADDR,
.ig1 = {
.en_addr = ST_ACCEL_3_IG1_EN_ADDR,
.en_mask = ST_ACCEL_3_IG1_EN_MASK,
@@ -502,6 +541,7 @@ static const struct st_sensor_settings st_accel_sensors_settings[] = {
.drdy_irq = {
.addr = ST_ACCEL_4_DRDY_IRQ_ADDR,
.mask_int1 = ST_ACCEL_4_DRDY_IRQ_INT1_MASK,
+ .addr_stat_drdy = ST_SENSORS_DEFAULT_STAT_ADDR,
},
.multi_read_bit = ST_ACCEL_4_MULTIREAD_BIT,
.bootime = 2, /* guess */
@@ -553,10 +593,75 @@ static const struct st_sensor_settings st_accel_sensors_settings[] = {
.mask_int2 = ST_ACCEL_5_DRDY_IRQ_INT2_MASK,
.addr_ihl = ST_ACCEL_5_IHL_IRQ_ADDR,
.mask_ihl = ST_ACCEL_5_IHL_IRQ_MASK,
+ .addr_od = ST_ACCEL_5_OD_IRQ_ADDR,
+ .mask_od = ST_ACCEL_5_OD_IRQ_MASK,
+ .addr_stat_drdy = ST_SENSORS_DEFAULT_STAT_ADDR,
},
.multi_read_bit = ST_ACCEL_5_MULTIREAD_BIT,
.bootime = 2, /* guess */
},
+ {
+ .wai = ST_ACCEL_6_WAI_EXP,
+ .wai_addr = ST_SENSORS_DEFAULT_WAI_ADDRESS,
+ .sensors_supported = {
+ [0] = H3LIS331DL_DRIVER_NAME,
+ },
+ .ch = (struct iio_chan_spec *)st_accel_12bit_channels,
+ .odr = {
+ .addr = ST_ACCEL_6_ODR_ADDR,
+ .mask = ST_ACCEL_6_ODR_MASK,
+ .odr_avl = {
+ { 50, ST_ACCEL_6_ODR_AVL_50HZ_VAL },
+ { 100, ST_ACCEL_6_ODR_AVL_100HZ_VAL },
+ { 400, ST_ACCEL_6_ODR_AVL_400HZ_VAL },
+ { 1000, ST_ACCEL_6_ODR_AVL_1000HZ_VAL },
+ },
+ },
+ .pw = {
+ .addr = ST_ACCEL_6_PW_ADDR,
+ .mask = ST_ACCEL_6_PW_MASK,
+ .value_on = ST_SENSORS_DEFAULT_POWER_ON_VALUE,
+ .value_off = ST_SENSORS_DEFAULT_POWER_OFF_VALUE,
+ },
+ .enable_axis = {
+ .addr = ST_SENSORS_DEFAULT_AXIS_ADDR,
+ .mask = ST_SENSORS_DEFAULT_AXIS_MASK,
+ },
+ .fs = {
+ .addr = ST_ACCEL_6_FS_ADDR,
+ .mask = ST_ACCEL_6_FS_MASK,
+ .fs_avl = {
+ [0] = {
+ .num = ST_ACCEL_FS_AVL_100G,
+ .value = ST_ACCEL_6_FS_AVL_100_VAL,
+ .gain = ST_ACCEL_6_FS_AVL_100_GAIN,
+ },
+ [1] = {
+ .num = ST_ACCEL_FS_AVL_200G,
+ .value = ST_ACCEL_6_FS_AVL_200_VAL,
+ .gain = ST_ACCEL_6_FS_AVL_200_GAIN,
+ },
+ [2] = {
+ .num = ST_ACCEL_FS_AVL_400G,
+ .value = ST_ACCEL_6_FS_AVL_400_VAL,
+ .gain = ST_ACCEL_6_FS_AVL_400_GAIN,
+ },
+ },
+ },
+ .bdu = {
+ .addr = ST_ACCEL_6_BDU_ADDR,
+ .mask = ST_ACCEL_6_BDU_MASK,
+ },
+ .drdy_irq = {
+ .addr = ST_ACCEL_6_DRDY_IRQ_ADDR,
+ .mask_int1 = ST_ACCEL_6_DRDY_IRQ_INT1_MASK,
+ .mask_int2 = ST_ACCEL_6_DRDY_IRQ_INT2_MASK,
+ .addr_ihl = ST_ACCEL_6_IHL_IRQ_ADDR,
+ .mask_ihl = ST_ACCEL_6_IHL_IRQ_MASK,
+ },
+ .multi_read_bit = ST_ACCEL_6_MULTIREAD_BIT,
+ .bootime = 2,
+ },
};
static int st_accel_read_raw(struct iio_dev *indio_dev,
diff --git a/drivers/iio/accel/st_accel_i2c.c b/drivers/iio/accel/st_accel_i2c.c
index 294a32f89367..7333ee9fb11b 100644
--- a/drivers/iio/accel/st_accel_i2c.c
+++ b/drivers/iio/accel/st_accel_i2c.c
@@ -76,6 +76,10 @@ static const struct of_device_id st_accel_of_match[] = {
.compatible = "st,lis2dh12-accel",
.data = LIS2DH12_ACCEL_DEV_NAME,
},
+ {
+ .compatible = "st,h3lis331dl-accel",
+ .data = H3LIS331DL_DRIVER_NAME,
+ },
{},
};
MODULE_DEVICE_TABLE(of, st_accel_of_match);
diff --git a/drivers/iio/accel/stk8312.c b/drivers/iio/accel/stk8312.c
index 85fe7f7247c1..e31023dc5f1b 100644
--- a/drivers/iio/accel/stk8312.c
+++ b/drivers/iio/accel/stk8312.c
@@ -11,7 +11,6 @@
*/
#include <linux/acpi.h>
-#include <linux/gpio/consumer.h>
#include <linux/i2c.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
diff --git a/drivers/iio/accel/stk8ba50.c b/drivers/iio/accel/stk8ba50.c
index 5709d9eb8f34..300d955bad00 100644
--- a/drivers/iio/accel/stk8ba50.c
+++ b/drivers/iio/accel/stk8ba50.c
@@ -11,7 +11,6 @@
*/
#include <linux/acpi.h>
-#include <linux/gpio/consumer.h>
#include <linux/i2c.h>
#include <linux/interrupt.h>
#include <linux/kernel.h>
diff --git a/drivers/iio/adc/Kconfig b/drivers/iio/adc/Kconfig
index 82c718c515a0..25378c5882e2 100644
--- a/drivers/iio/adc/Kconfig
+++ b/drivers/iio/adc/Kconfig
@@ -242,6 +242,16 @@ config LP8788_ADC
To compile this driver as a module, choose M here: the module will be
called lp8788_adc.
+config LPC18XX_ADC
+ tristate "NXP LPC18xx ADC driver"
+ depends on ARCH_LPC18XX || COMPILE_TEST
+ depends on OF && HAS_IOMEM
+ help
+ Say yes here to build support for NXP LPC18XX ADC.
+
+ To compile this driver as a module, choose M here: the module will be
+ called lpc18xx_adc.
+
config MAX1027
tristate "Maxim max1027 ADC driver"
depends on SPI
@@ -375,11 +385,11 @@ config ROCKCHIP_SARADC
module will be called rockchip_saradc.
config TI_ADC081C
- tristate "Texas Instruments ADC081C021/027"
+ tristate "Texas Instruments ADC081C/ADC101C/ADC121C family"
depends on I2C
help
- If you say yes here you get support for Texas Instruments ADC081C021
- and ADC081C027 ADC chips.
+ If you say yes here you get support for Texas Instruments ADC081C,
+ ADC101C and ADC121C ADC chips.
This driver can also be built as a module. If so, the module will be
called ti-adc081c.
diff --git a/drivers/iio/adc/Makefile b/drivers/iio/adc/Makefile
index 0cb79210a4b0..38638d46f972 100644
--- a/drivers/iio/adc/Makefile
+++ b/drivers/iio/adc/Makefile
@@ -25,6 +25,7 @@ obj-$(CONFIG_HI8435) += hi8435.o
obj-$(CONFIG_IMX7D_ADC) += imx7d_adc.o
obj-$(CONFIG_INA2XX_ADC) += ina2xx-adc.o
obj-$(CONFIG_LP8788_ADC) += lp8788_adc.o
+obj-$(CONFIG_LPC18XX_ADC) += lpc18xx_adc.o
obj-$(CONFIG_MAX1027) += max1027.o
obj-$(CONFIG_MAX1363) += max1363.o
obj-$(CONFIG_MCP320X) += mcp320x.o
diff --git a/drivers/iio/adc/ad799x.c b/drivers/iio/adc/ad799x.c
index 01d71588d752..a3f5254f4e51 100644
--- a/drivers/iio/adc/ad799x.c
+++ b/drivers/iio/adc/ad799x.c
@@ -477,7 +477,7 @@ static int ad799x_read_event_value(struct iio_dev *indio_dev,
if (ret < 0)
return ret;
*val = (ret >> chan->scan_type.shift) &
- GENMASK(chan->scan_type.realbits - 1 , 0);
+ GENMASK(chan->scan_type.realbits - 1, 0);
return IIO_VAL_INT;
}
diff --git a/drivers/iio/adc/at91-sama5d2_adc.c b/drivers/iio/adc/at91-sama5d2_adc.c
index 2e154cb51685..e10dca3ed74b 100644
--- a/drivers/iio/adc/at91-sama5d2_adc.c
+++ b/drivers/iio/adc/at91-sama5d2_adc.c
@@ -66,8 +66,10 @@
#define AT91_SAMA5D2_MR_PRESCAL(v) ((v) << AT91_SAMA5D2_MR_PRESCAL_OFFSET)
#define AT91_SAMA5D2_MR_PRESCAL_OFFSET 8
#define AT91_SAMA5D2_MR_PRESCAL_MAX 0xff
+#define AT91_SAMA5D2_MR_PRESCAL_MASK GENMASK(15, 8)
/* Startup Time */
#define AT91_SAMA5D2_MR_STARTUP(v) ((v) << 16)
+#define AT91_SAMA5D2_MR_STARTUP_MASK GENMASK(19, 16)
/* Analog Change */
#define AT91_SAMA5D2_MR_ANACH BIT(23)
/* Tracking Time */
@@ -92,13 +94,13 @@
/* Last Converted Data Register */
#define AT91_SAMA5D2_LCDR 0x20
/* Interrupt Enable Register */
-#define AT91_SAMA5D2_IER 0x24
+#define AT91_SAMA5D2_IER 0x24
/* Interrupt Disable Register */
-#define AT91_SAMA5D2_IDR 0x28
+#define AT91_SAMA5D2_IDR 0x28
/* Interrupt Mask Register */
-#define AT91_SAMA5D2_IMR 0x2c
+#define AT91_SAMA5D2_IMR 0x2c
/* Interrupt Status Register */
-#define AT91_SAMA5D2_ISR 0x30
+#define AT91_SAMA5D2_ISR 0x30
/* Last Channel Trigger Mode Register */
#define AT91_SAMA5D2_LCTMR 0x34
/* Last Channel Compare Window Register */
@@ -106,17 +108,20 @@
/* Overrun Status Register */
#define AT91_SAMA5D2_OVER 0x3c
/* Extended Mode Register */
-#define AT91_SAMA5D2_EMR 0x40
+#define AT91_SAMA5D2_EMR 0x40
/* Compare Window Register */
-#define AT91_SAMA5D2_CWR 0x44
+#define AT91_SAMA5D2_CWR 0x44
/* Channel Gain Register */
-#define AT91_SAMA5D2_CGR 0x48
+#define AT91_SAMA5D2_CGR 0x48
+
/* Channel Offset Register */
-#define AT91_SAMA5D2_COR 0x4c
+#define AT91_SAMA5D2_COR 0x4c
+#define AT91_SAMA5D2_COR_DIFF_OFFSET 16
+
/* Channel Data Register 0 */
#define AT91_SAMA5D2_CDR0 0x50
/* Analog Control Register */
-#define AT91_SAMA5D2_ACR 0x94
+#define AT91_SAMA5D2_ACR 0x94
/* Touchscreen Mode Register */
#define AT91_SAMA5D2_TSMR 0xb0
/* Touchscreen X Position Register */
@@ -130,7 +135,7 @@
/* Correction Select Register */
#define AT91_SAMA5D2_COSR 0xd0
/* Correction Value Register */
-#define AT91_SAMA5D2_CVR 0xd4
+#define AT91_SAMA5D2_CVR 0xd4
/* Channel Error Correction Register */
#define AT91_SAMA5D2_CECR 0xd8
/* Write Protection Mode Register */
@@ -140,7 +145,7 @@
/* Version Register */
#define AT91_SAMA5D2_VERSION 0xfc
-#define AT91_AT91_SAMA5D2_CHAN(num, addr) \
+#define AT91_SAMA5D2_CHAN_SINGLE(num, addr) \
{ \
.type = IIO_VOLTAGE, \
.channel = num, \
@@ -156,6 +161,24 @@
.indexed = 1, \
}
+#define AT91_SAMA5D2_CHAN_DIFF(num, num2, addr) \
+ { \
+ .type = IIO_VOLTAGE, \
+ .differential = 1, \
+ .channel = num, \
+ .channel2 = num2, \
+ .address = addr, \
+ .scan_type = { \
+ .sign = 's', \
+ .realbits = 12, \
+ }, \
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \
+ .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), \
+ .info_mask_shared_by_all = BIT(IIO_CHAN_INFO_SAMP_FREQ),\
+ .datasheet_name = "CH"#num"-CH"#num2, \
+ .indexed = 1, \
+ }
+
#define at91_adc_readl(st, reg) readl_relaxed(st->base + reg)
#define at91_adc_writel(st, reg, val) writel_relaxed(val, st->base + reg)
@@ -185,18 +208,24 @@ struct at91_adc_state {
};
static const struct iio_chan_spec at91_adc_channels[] = {
- AT91_AT91_SAMA5D2_CHAN(0, 0x50),
- AT91_AT91_SAMA5D2_CHAN(1, 0x54),
- AT91_AT91_SAMA5D2_CHAN(2, 0x58),
- AT91_AT91_SAMA5D2_CHAN(3, 0x5c),
- AT91_AT91_SAMA5D2_CHAN(4, 0x60),
- AT91_AT91_SAMA5D2_CHAN(5, 0x64),
- AT91_AT91_SAMA5D2_CHAN(6, 0x68),
- AT91_AT91_SAMA5D2_CHAN(7, 0x6c),
- AT91_AT91_SAMA5D2_CHAN(8, 0x70),
- AT91_AT91_SAMA5D2_CHAN(9, 0x74),
- AT91_AT91_SAMA5D2_CHAN(10, 0x78),
- AT91_AT91_SAMA5D2_CHAN(11, 0x7c),
+ AT91_SAMA5D2_CHAN_SINGLE(0, 0x50),
+ AT91_SAMA5D2_CHAN_SINGLE(1, 0x54),
+ AT91_SAMA5D2_CHAN_SINGLE(2, 0x58),
+ AT91_SAMA5D2_CHAN_SINGLE(3, 0x5c),
+ AT91_SAMA5D2_CHAN_SINGLE(4, 0x60),
+ AT91_SAMA5D2_CHAN_SINGLE(5, 0x64),
+ AT91_SAMA5D2_CHAN_SINGLE(6, 0x68),
+ AT91_SAMA5D2_CHAN_SINGLE(7, 0x6c),
+ AT91_SAMA5D2_CHAN_SINGLE(8, 0x70),
+ AT91_SAMA5D2_CHAN_SINGLE(9, 0x74),
+ AT91_SAMA5D2_CHAN_SINGLE(10, 0x78),
+ AT91_SAMA5D2_CHAN_SINGLE(11, 0x7c),
+ AT91_SAMA5D2_CHAN_DIFF(0, 1, 0x50),
+ AT91_SAMA5D2_CHAN_DIFF(2, 3, 0x58),
+ AT91_SAMA5D2_CHAN_DIFF(4, 5, 0x60),
+ AT91_SAMA5D2_CHAN_DIFF(6, 7, 0x68),
+ AT91_SAMA5D2_CHAN_DIFF(8, 9, 0x70),
+ AT91_SAMA5D2_CHAN_DIFF(10, 11, 0x78),
};
static unsigned at91_adc_startup_time(unsigned startup_time_min,
@@ -226,7 +255,7 @@ static unsigned at91_adc_startup_time(unsigned startup_time_min,
static void at91_adc_setup_samp_freq(struct at91_adc_state *st, unsigned freq)
{
struct iio_dev *indio_dev = iio_priv_to_dev(st);
- unsigned f_per, prescal, startup;
+ unsigned f_per, prescal, startup, mr;
f_per = clk_get_rate(st->per_clk);
prescal = (f_per / (2 * freq)) - 1;
@@ -234,10 +263,11 @@ static void at91_adc_setup_samp_freq(struct at91_adc_state *st, unsigned freq)
startup = at91_adc_startup_time(st->soc_info.startup_time,
freq / 1000);
- at91_adc_writel(st, AT91_SAMA5D2_MR,
- AT91_SAMA5D2_MR_TRANSFER(2)
- | AT91_SAMA5D2_MR_STARTUP(startup)
- | AT91_SAMA5D2_MR_PRESCAL(prescal));
+ mr = at91_adc_readl(st, AT91_SAMA5D2_MR);
+ mr &= ~(AT91_SAMA5D2_MR_STARTUP_MASK | AT91_SAMA5D2_MR_PRESCAL_MASK);
+ mr |= AT91_SAMA5D2_MR_STARTUP(startup);
+ mr |= AT91_SAMA5D2_MR_PRESCAL(prescal);
+ at91_adc_writel(st, AT91_SAMA5D2_MR, mr);
dev_dbg(&indio_dev->dev, "freq: %u, startup: %u, prescal: %u\n",
freq, startup, prescal);
@@ -278,6 +308,7 @@ static int at91_adc_read_raw(struct iio_dev *indio_dev,
int *val, int *val2, long mask)
{
struct at91_adc_state *st = iio_priv(indio_dev);
+ u32 cor = 0;
int ret;
switch (mask) {
@@ -286,6 +317,11 @@ static int at91_adc_read_raw(struct iio_dev *indio_dev,
st->chan = chan;
+ if (chan->differential)
+ cor = (BIT(chan->channel) | BIT(chan->channel2)) <<
+ AT91_SAMA5D2_COR_DIFF_OFFSET;
+
+ at91_adc_writel(st, AT91_SAMA5D2_COR, cor);
at91_adc_writel(st, AT91_SAMA5D2_CHER, BIT(chan->channel));
at91_adc_writel(st, AT91_SAMA5D2_IER, BIT(chan->channel));
at91_adc_writel(st, AT91_SAMA5D2_CR, AT91_SAMA5D2_CR_START);
@@ -298,6 +334,8 @@ static int at91_adc_read_raw(struct iio_dev *indio_dev,
if (ret > 0) {
*val = st->conversion_value;
+ if (chan->scan_type.sign == 's')
+ *val = sign_extend32(*val, 11);
ret = IIO_VAL_INT;
st->conversion_done = false;
}
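For illustration of the two differential-mode details in the hunk above (the CH0/CH1 pair is an assumed example; the arithmetic only mirrors the COR macro and the sign_extend32() call shown):

#include <stdio.h>

#define BIT(n)				(1u << (n))
#define AT91_SAMA5D2_COR_DIFF_OFFSET	16

/* Same result as the kernel's sign_extend32(v, 11) for a 12-bit sample. */
static int sign_extend12(int v)
{
	return (v ^ 0x800) - 0x800;
}

int main(void)
{
	unsigned int cor = (BIT(0) | BIT(1)) << AT91_SAMA5D2_COR_DIFF_OFFSET;

	printf("COR for CH0-CH1 = 0x%08x\n", cor);		/* 0x00030000 */
	printf("raw 0xfff -> %d\n", sign_extend12(0xfff));	/* -1 */
	return 0;
}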
@@ -310,6 +348,8 @@ static int at91_adc_read_raw(struct iio_dev *indio_dev,
case IIO_CHAN_INFO_SCALE:
*val = st->vref_uv / 1000;
+ if (chan->differential)
+ *val *= 2;
*val2 = chan->scan_type.realbits;
return IIO_VAL_FRACTIONAL_LOG2;
@@ -444,6 +484,12 @@ static int at91_adc_probe(struct platform_device *pdev)
at91_adc_writel(st, AT91_SAMA5D2_CR, AT91_SAMA5D2_CR_SWRST);
at91_adc_writel(st, AT91_SAMA5D2_IDR, 0xffffffff);
+ /*
+ * Transfer field must be set to 2 according to the datasheet and
+ * allows different analog settings for each channel.
+ */
+ at91_adc_writel(st, AT91_SAMA5D2_MR,
+ AT91_SAMA5D2_MR_TRANSFER(2) | AT91_SAMA5D2_MR_ANACH);
at91_adc_setup_samp_freq(st, st->soc_info.min_sample_rate);
diff --git a/drivers/iio/adc/at91_adc.c b/drivers/iio/adc/at91_adc.c
index f284cd6a93d6..52430ba171f3 100644
--- a/drivers/iio/adc/at91_adc.c
+++ b/drivers/iio/adc/at91_adc.c
@@ -797,8 +797,8 @@ static u32 calc_startup_ticks_9x5(u32 startup_time, u32 adc_clk_khz)
* Startup Time = <lookup_table_value> / ADC Clock
*/
const int startup_lookup[] = {
- 0 , 8 , 16 , 24 ,
- 64 , 80 , 96 , 112,
+ 0, 8, 16, 24,
+ 64, 80, 96, 112,
512, 576, 640, 704,
768, 832, 896, 960
};
@@ -924,14 +924,14 @@ static int at91_adc_probe_dt(struct at91_adc_state *st,
ret = -EINVAL;
goto error_ret;
}
- trig->name = name;
+ trig->name = name;
if (of_property_read_u32(trig_node, "trigger-value", &prop)) {
dev_err(&idev->dev, "Missing trigger-value property in the DT.\n");
ret = -EINVAL;
goto error_ret;
}
- trig->value = prop;
+ trig->value = prop;
trig->is_external = of_property_read_bool(trig_node, "trigger-external");
i++;
}
diff --git a/drivers/iio/adc/ina2xx-adc.c b/drivers/iio/adc/ina2xx-adc.c
index 65909d5858b1..502f2fbe8aef 100644
--- a/drivers/iio/adc/ina2xx-adc.c
+++ b/drivers/iio/adc/ina2xx-adc.c
@@ -185,9 +185,9 @@ static int ina2xx_read_raw(struct iio_dev *indio_dev,
case IIO_CHAN_INFO_SCALE:
switch (chan->address) {
case INA2XX_SHUNT_VOLTAGE:
- /* processed (mV) = raw*1000/shunt_div */
+ /* processed (mV) = raw/shunt_div */
*val2 = chip->config->shunt_div;
- *val = 1000;
+ *val = 1;
return IIO_VAL_FRACTIONAL;
case INA2XX_BUS_VOLTAGE:
@@ -350,6 +350,23 @@ static ssize_t ina2xx_allow_async_readout_store(struct device *dev,
return len;
}
+/*
+ * Set current LSB to 1mA, shunt is in uOhms
+ * (equation 13 in datasheet). We hardcode a Current_LSB
+ * of 1.0 x 10^-3 A (1 mA). The only remaining parameter is RShunt.
+ * There is no need to expose the CALIBRATION register
+ * to the user for now. But we need to reset this register
+ * if the user updates RShunt after driver init, e.g. upon
+ * reading an EEPROM/Probe-type value.
+ */
+static int ina2xx_set_calibration(struct ina2xx_chip_info *chip)
+{
+ u16 regval = DIV_ROUND_CLOSEST(chip->config->calibration_factor,
+ chip->shunt_resistor);
+
+ return regmap_write(chip->regmap, INA2XX_CALIBRATION, regval);
+}
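A rough worked example of the helper above, with assumed numbers (calibration_factor taken as the INA226 value of 5120000 and a 10 mOhm shunt expressed in micro-ohms; both are assumptions, not values stated in this patch):

#include <stdio.h>

int main(void)
{
	unsigned int calibration_factor = 5120000;	/* assumed INA226 factor */
	unsigned int shunt_uohm = 10000;		/* assumed 10 mOhm shunt */
	unsigned int regval = (calibration_factor + shunt_uohm / 2) / shunt_uohm;

	/* DIV_ROUND_CLOSEST(5120000, 10000) == 512 */
	printf("INA2XX_CALIBRATION <- %u\n", regval);
	return 0;
}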
+
static int set_shunt_resistor(struct ina2xx_chip_info *chip, unsigned int val)
{
if (val <= 0 || val > chip->config->calibration_factor)
@@ -385,6 +402,11 @@ static ssize_t ina2xx_shunt_resistor_store(struct device *dev,
if (ret)
return ret;
+ /* Update the Calibration register */
+ ret = ina2xx_set_calibration(chip);
+ if (ret)
+ return ret;
+
return len;
}
@@ -602,24 +624,11 @@ static const struct iio_info ina2xx_info = {
/* Initialize the configuration and calibration registers. */
static int ina2xx_init(struct ina2xx_chip_info *chip, unsigned int config)
{
- u16 regval;
- int ret;
-
- ret = regmap_write(chip->regmap, INA2XX_CONFIG, config);
+ int ret = regmap_write(chip->regmap, INA2XX_CONFIG, config);
if (ret)
return ret;
- /*
- * Set current LSB to 1mA, shunt is in uOhms
- * (equation 13 in datasheet). We hardcode a Current_LSB
- * of 1.0 x10-6. The only remaining parameter is RShunt.
- * There is no need to expose the CALIBRATION register
- * to the user for now.
- */
- regval = DIV_ROUND_CLOSEST(chip->config->calibration_factor,
- chip->shunt_resistor);
-
- return regmap_write(chip->regmap, INA2XX_CALIBRATION, regval);
+ return ina2xx_set_calibration(chip);
}
static int ina2xx_probe(struct i2c_client *client,
diff --git a/drivers/iio/adc/lpc18xx_adc.c b/drivers/iio/adc/lpc18xx_adc.c
new file mode 100644
index 000000000000..3ef18f4b27f0
--- /dev/null
+++ b/drivers/iio/adc/lpc18xx_adc.c
@@ -0,0 +1,231 @@
+/*
+ * IIO ADC driver for NXP LPC18xx ADC
+ *
+ * Copyright (C) 2016 Joachim Eastwood <manabian@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * UNSUPPORTED hardware features:
+ * - Hardware triggers
+ * - Burst mode
+ * - Interrupts
+ * - DMA
+ */
+
+#include <linux/clk.h>
+#include <linux/err.h>
+#include <linux/iio/iio.h>
+#include <linux/iio/driver.h>
+#include <linux/io.h>
+#include <linux/iopoll.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/regulator/consumer.h>
+
+/* LPC18XX ADC registers and bits */
+#define LPC18XX_ADC_CR 0x000
+#define LPC18XX_ADC_CR_CLKDIV_SHIFT 8
+#define LPC18XX_ADC_CR_PDN BIT(21)
+#define LPC18XX_ADC_CR_START_NOW (0x1 << 24)
+#define LPC18XX_ADC_GDR 0x004
+
+/* Data register bits */
+#define LPC18XX_ADC_SAMPLE_SHIFT 6
+#define LPC18XX_ADC_SAMPLE_MASK 0x3ff
+#define LPC18XX_ADC_CONV_DONE BIT(31)
+
+/* Clock should be 4.5 MHz or less */
+#define LPC18XX_ADC_CLK_TARGET 4500000
+
+struct lpc18xx_adc {
+ struct regulator *vref;
+ void __iomem *base;
+ struct device *dev;
+ struct mutex lock;
+ struct clk *clk;
+ u32 cr_reg;
+};
+
+#define LPC18XX_ADC_CHAN(_idx) { \
+ .type = IIO_VOLTAGE, \
+ .indexed = 1, \
+ .channel = _idx, \
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \
+ .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), \
+}
+
+static const struct iio_chan_spec lpc18xx_adc_iio_channels[] = {
+ LPC18XX_ADC_CHAN(0),
+ LPC18XX_ADC_CHAN(1),
+ LPC18XX_ADC_CHAN(2),
+ LPC18XX_ADC_CHAN(3),
+ LPC18XX_ADC_CHAN(4),
+ LPC18XX_ADC_CHAN(5),
+ LPC18XX_ADC_CHAN(6),
+ LPC18XX_ADC_CHAN(7),
+};
+
+static int lpc18xx_adc_read_chan(struct lpc18xx_adc *adc, unsigned int ch)
+{
+ int ret;
+ u32 reg;
+
+ reg = adc->cr_reg | BIT(ch) | LPC18XX_ADC_CR_START_NOW;
+ writel(reg, adc->base + LPC18XX_ADC_CR);
+
+ ret = readl_poll_timeout(adc->base + LPC18XX_ADC_GDR, reg,
+ reg & LPC18XX_ADC_CONV_DONE, 3, 9);
+ if (ret) {
+ dev_warn(adc->dev, "adc read timed out\n");
+ return ret;
+ }
+
+ return (reg >> LPC18XX_ADC_SAMPLE_SHIFT) & LPC18XX_ADC_SAMPLE_MASK;
+}
+
+static int lpc18xx_adc_read_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ int *val, int *val2, long mask)
+{
+ struct lpc18xx_adc *adc = iio_priv(indio_dev);
+
+ switch (mask) {
+ case IIO_CHAN_INFO_RAW:
+ mutex_lock(&adc->lock);
+ *val = lpc18xx_adc_read_chan(adc, chan->channel);
+ mutex_unlock(&adc->lock);
+ if (*val < 0)
+ return *val;
+
+ return IIO_VAL_INT;
+
+ case IIO_CHAN_INFO_SCALE:
+ *val = regulator_get_voltage(adc->vref) / 1000;
+ *val2 = 10;
+
+ return IIO_VAL_FRACTIONAL_LOG2;
+ }
+
+ return -EINVAL;
+}
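To make the IIO_VAL_FRACTIONAL_LOG2 return above concrete: userspace multiplies the raw sample by vref_mV / 2^10. A small sketch, assuming a 3.3 V reference (the supply value is an assumption):

#include <stdio.h>

int main(void)
{
	int raw = 512;		/* example in_voltageX_raw reading */
	int scale_val = 3300;	/* regulator_get_voltage() / 1000, assumed 3.3 V vref */
	int scale_val2 = 10;	/* the ADC is 10 bits wide */
	double mv = raw * ((double)scale_val / (1 << scale_val2));

	printf("%.1f mV\n", mv);	/* 512 * 3300 / 1024 = 1650.0 mV */
	return 0;
}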
+
+static const struct iio_info lpc18xx_adc_info = {
+ .read_raw = lpc18xx_adc_read_raw,
+ .driver_module = THIS_MODULE,
+};
+
+static int lpc18xx_adc_probe(struct platform_device *pdev)
+{
+ struct iio_dev *indio_dev;
+ struct lpc18xx_adc *adc;
+ struct resource *res;
+ unsigned int clkdiv;
+ unsigned long rate;
+ int ret;
+
+ indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*adc));
+ if (!indio_dev)
+ return -ENOMEM;
+
+ platform_set_drvdata(pdev, indio_dev);
+ adc = iio_priv(indio_dev);
+ adc->dev = &pdev->dev;
+ mutex_init(&adc->lock);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ adc->base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(adc->base))
+ return PTR_ERR(adc->base);
+
+ adc->clk = devm_clk_get(&pdev->dev, NULL);
+ if (IS_ERR(adc->clk)) {
+ dev_err(&pdev->dev, "error getting clock\n");
+ return PTR_ERR(adc->clk);
+ }
+
+ rate = clk_get_rate(adc->clk);
+ clkdiv = DIV_ROUND_UP(rate, LPC18XX_ADC_CLK_TARGET);
+
+ adc->vref = devm_regulator_get(&pdev->dev, "vref");
+ if (IS_ERR(adc->vref)) {
+ dev_err(&pdev->dev, "error getting regulator\n");
+ return PTR_ERR(adc->vref);
+ }
+
+ indio_dev->name = dev_name(&pdev->dev);
+ indio_dev->dev.parent = &pdev->dev;
+ indio_dev->info = &lpc18xx_adc_info;
+ indio_dev->modes = INDIO_DIRECT_MODE;
+ indio_dev->channels = lpc18xx_adc_iio_channels;
+ indio_dev->num_channels = ARRAY_SIZE(lpc18xx_adc_iio_channels);
+
+ ret = regulator_enable(adc->vref);
+ if (ret) {
+ dev_err(&pdev->dev, "unable to enable regulator\n");
+ return ret;
+ }
+
+ ret = clk_prepare_enable(adc->clk);
+ if (ret) {
+ dev_err(&pdev->dev, "unable to enable clock\n");
+ goto dis_reg;
+ }
+
+ adc->cr_reg = (clkdiv << LPC18XX_ADC_CR_CLKDIV_SHIFT) |
+ LPC18XX_ADC_CR_PDN;
+ writel(adc->cr_reg, adc->base + LPC18XX_ADC_CR);
+
+ ret = iio_device_register(indio_dev);
+ if (ret) {
+ dev_err(&pdev->dev, "unable to register device\n");
+ goto dis_clk;
+ }
+
+ return 0;
+
+dis_clk:
+ writel(0, adc->base + LPC18XX_ADC_CR);
+ clk_disable_unprepare(adc->clk);
+dis_reg:
+ regulator_disable(adc->vref);
+ return ret;
+}
+
+static int lpc18xx_adc_remove(struct platform_device *pdev)
+{
+ struct iio_dev *indio_dev = platform_get_drvdata(pdev);
+ struct lpc18xx_adc *adc = iio_priv(indio_dev);
+
+ iio_device_unregister(indio_dev);
+
+ writel(0, adc->base + LPC18XX_ADC_CR);
+ clk_disable_unprepare(adc->clk);
+ regulator_disable(adc->vref);
+
+ return 0;
+}
+
+static const struct of_device_id lpc18xx_adc_match[] = {
+ { .compatible = "nxp,lpc1850-adc" },
+ { /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, lpc18xx_adc_match);
+
+static struct platform_driver lpc18xx_adc_driver = {
+ .probe = lpc18xx_adc_probe,
+ .remove = lpc18xx_adc_remove,
+ .driver = {
+ .name = "lpc18xx-adc",
+ .of_match_table = lpc18xx_adc_match,
+ },
+};
+module_platform_driver(lpc18xx_adc_driver);
+
+MODULE_DESCRIPTION("LPC18xx ADC driver");
+MODULE_AUTHOR("Joachim Eastwood <manabian@gmail.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/iio/adc/mcp3422.c b/drivers/iio/adc/mcp3422.c
index d7b36efd2f3c..d1172dc1e8e2 100644
--- a/drivers/iio/adc/mcp3422.c
+++ b/drivers/iio/adc/mcp3422.c
@@ -61,9 +61,9 @@
static const int mcp3422_scales[4][4] = {
{ 1000000, 500000, 250000, 125000 },
- { 250000 , 125000, 62500 , 31250 },
- { 62500 , 31250 , 15625 , 7812 },
- { 15625 , 7812 , 3906 , 1953 } };
+ { 250000, 125000, 62500, 31250 },
+ { 62500, 31250, 15625, 7812 },
+ { 15625, 7812, 3906, 1953 } };
/* Constant msleep times for data acquisitions */
static const int mcp3422_read_times[4] = {
diff --git a/drivers/iio/adc/mxs-lradc.c b/drivers/iio/adc/mxs-lradc.c
index 33051b87aac2..ad26da1edbee 100644
--- a/drivers/iio/adc/mxs-lradc.c
+++ b/drivers/iio/adc/mxs-lradc.c
@@ -686,6 +686,17 @@ static void mxs_lradc_prepare_pressure(struct mxs_lradc *lradc)
static void mxs_lradc_enable_touch_detection(struct mxs_lradc *lradc)
{
+ /* Configure the touchscreen type */
+ if (lradc->soc == IMX28_LRADC) {
+ mxs_lradc_reg_clear(lradc, LRADC_CTRL0_MX28_TOUCH_SCREEN_TYPE,
+ LRADC_CTRL0);
+
+ if (lradc->use_touchscreen == MXS_LRADC_TOUCHSCREEN_5WIRE)
+ mxs_lradc_reg_set(lradc,
+ LRADC_CTRL0_MX28_TOUCH_SCREEN_TYPE,
+ LRADC_CTRL0);
+ }
+
mxs_lradc_setup_touch_detection(lradc);
lradc->cur_plate = LRADC_TOUCH;
@@ -1127,6 +1138,7 @@ static int mxs_lradc_ts_register(struct mxs_lradc *lradc)
__set_bit(EV_ABS, input->evbit);
__set_bit(EV_KEY, input->evbit);
__set_bit(BTN_TOUCH, input->keybit);
+ __set_bit(INPUT_PROP_DIRECT, input->propbit);
input_set_abs_params(input, ABS_X, 0, LRADC_SINGLE_SAMPLE_MASK, 0, 0);
input_set_abs_params(input, ABS_Y, 0, LRADC_SINGLE_SAMPLE_MASK, 0, 0);
input_set_abs_params(input, ABS_PRESSURE, 0, LRADC_SINGLE_SAMPLE_MASK,
@@ -1475,18 +1487,13 @@ static const struct iio_chan_spec mx28_lradc_chan_spec[] = {
MXS_ADC_CHAN(15, IIO_VOLTAGE, "VDD5V"),
};
-static int mxs_lradc_hw_init(struct mxs_lradc *lradc)
+static void mxs_lradc_hw_init(struct mxs_lradc *lradc)
{
/* The ADC always uses DELAY CHANNEL 0. */
const u32 adc_cfg =
(1 << (LRADC_DELAY_TRIGGER_DELAYS_OFFSET + 0)) |
(LRADC_DELAY_TIMER_PER << LRADC_DELAY_DELAY_OFFSET);
- int ret = stmp_reset_block(lradc->base);
-
- if (ret)
- return ret;
-
/* Configure DELAY CHANNEL 0 for generic ADC sampling. */
mxs_lradc_reg_wrt(lradc, adc_cfg, LRADC_DELAY(0));
@@ -1495,20 +1502,8 @@ static int mxs_lradc_hw_init(struct mxs_lradc *lradc)
mxs_lradc_reg_wrt(lradc, 0, LRADC_DELAY(2));
mxs_lradc_reg_wrt(lradc, 0, LRADC_DELAY(3));
- /* Configure the touchscreen type */
- if (lradc->soc == IMX28_LRADC) {
- mxs_lradc_reg_clear(lradc, LRADC_CTRL0_MX28_TOUCH_SCREEN_TYPE,
- LRADC_CTRL0);
-
- if (lradc->use_touchscreen == MXS_LRADC_TOUCHSCREEN_5WIRE)
- mxs_lradc_reg_set(lradc, LRADC_CTRL0_MX28_TOUCH_SCREEN_TYPE,
- LRADC_CTRL0);
- }
-
/* Start internal temperature sensing. */
mxs_lradc_reg_wrt(lradc, 0, LRADC_CTRL2);
-
- return 0;
}
static void mxs_lradc_hw_stop(struct mxs_lradc *lradc)
@@ -1708,11 +1703,13 @@ static int mxs_lradc_probe(struct platform_device *pdev)
}
}
- /* Configure the hardware. */
- ret = mxs_lradc_hw_init(lradc);
+ ret = stmp_reset_block(lradc->base);
if (ret)
goto err_dev;
+ /* Configure the hardware. */
+ mxs_lradc_hw_init(lradc);
+
/* Register the touchscreen input device. */
if (touch_ret == 0) {
ret = mxs_lradc_ts_register(lradc);
diff --git a/drivers/iio/adc/rockchip_saradc.c b/drivers/iio/adc/rockchip_saradc.c
index 9c311c1e1ac7..f9ad6c2d6821 100644
--- a/drivers/iio/adc/rockchip_saradc.c
+++ b/drivers/iio/adc/rockchip_saradc.c
@@ -159,6 +159,22 @@ static const struct rockchip_saradc_data rk3066_tsadc_data = {
.clk_rate = 50000,
};
+static const struct iio_chan_spec rockchip_rk3399_saradc_iio_channels[] = {
+ ADC_CHANNEL(0, "adc0"),
+ ADC_CHANNEL(1, "adc1"),
+ ADC_CHANNEL(2, "adc2"),
+ ADC_CHANNEL(3, "adc3"),
+ ADC_CHANNEL(4, "adc4"),
+ ADC_CHANNEL(5, "adc5"),
+};
+
+static const struct rockchip_saradc_data rk3399_saradc_data = {
+ .num_bits = 10,
+ .channels = rockchip_rk3399_saradc_iio_channels,
+ .num_channels = ARRAY_SIZE(rockchip_rk3399_saradc_iio_channels),
+ .clk_rate = 1000000,
+};
+
static const struct of_device_id rockchip_saradc_match[] = {
{
.compatible = "rockchip,saradc",
@@ -166,6 +182,9 @@ static const struct of_device_id rockchip_saradc_match[] = {
}, {
.compatible = "rockchip,rk3066-tsadc",
.data = &rk3066_tsadc_data,
+ }, {
+ .compatible = "rockchip,rk3399-saradc",
+ .data = &rk3399_saradc_data,
},
{},
};
diff --git a/drivers/iio/adc/ti-adc081c.c b/drivers/iio/adc/ti-adc081c.c
index ecbc12138d58..9fd032d9f402 100644
--- a/drivers/iio/adc/ti-adc081c.c
+++ b/drivers/iio/adc/ti-adc081c.c
@@ -1,9 +1,21 @@
/*
+ * TI ADC081C/ADC101C/ADC121C 8/10/12-bit ADC driver
+ *
* Copyright (C) 2012 Avionic Design GmbH
+ * Copyright (C) 2016 Intel
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
+ *
+ * Datasheets:
+ * http://www.ti.com/lit/ds/symlink/adc081c021.pdf
+ * http://www.ti.com/lit/ds/symlink/adc101c021.pdf
+ * http://www.ti.com/lit/ds/symlink/adc121c021.pdf
+ *
+ * The devices have a very similar interface and differ mostly in the number of
+ * bits handled. For the 8-bit and 10-bit models, the least-significant 4 or
+ * 2 bits of the value register are reserved.
*/
#include <linux/err.h>
@@ -12,11 +24,17 @@
#include <linux/of.h>
#include <linux/iio/iio.h>
+#include <linux/iio/buffer.h>
+#include <linux/iio/trigger_consumer.h>
+#include <linux/iio/triggered_buffer.h>
#include <linux/regulator/consumer.h>
struct adc081c {
struct i2c_client *i2c;
struct regulator *ref;
+
+ /* 8, 10 or 12 */
+ int bits;
};
#define REG_CONV_RES 0x00
@@ -34,7 +52,7 @@ static int adc081c_read_raw(struct iio_dev *iio,
if (err < 0)
return err;
- *value = (err >> 4) & 0xff;
+ *value = (err & 0xFFF) >> (12 - adc->bits);
return IIO_VAL_INT;
case IIO_CHAN_INFO_SCALE:
@@ -43,7 +61,7 @@ static int adc081c_read_raw(struct iio_dev *iio,
return err;
*value = err / 1000;
- *shift = 8;
+ *shift = adc->bits;
return IIO_VAL_FRACTIONAL_LOG2;
@@ -54,10 +72,53 @@ static int adc081c_read_raw(struct iio_dev *iio,
return -EINVAL;
}
-static const struct iio_chan_spec adc081c_channel = {
- .type = IIO_VOLTAGE,
- .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
- .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+#define ADCxx1C_CHAN(_bits) { \
+ .type = IIO_VOLTAGE, \
+ .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), \
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \
+ .scan_type = { \
+ .sign = 'u', \
+ .realbits = (_bits), \
+ .storagebits = 16, \
+ .shift = 12 - (_bits), \
+ .endianness = IIO_CPU, \
+ }, \
+}
+
+#define DEFINE_ADCxx1C_CHANNELS(_name, _bits) \
+ static const struct iio_chan_spec _name ## _channels[] = { \
+ ADCxx1C_CHAN((_bits)), \
+ IIO_CHAN_SOFT_TIMESTAMP(1), \
+ }; \
+
+#define ADC081C_NUM_CHANNELS 2
+
+struct adcxx1c_model {
+	const struct iio_chan_spec *channels;
+ int bits;
+};
+
+#define ADCxx1C_MODEL(_name, _bits) \
+ { \
+ .channels = _name ## _channels, \
+ .bits = (_bits), \
+ }
+
+DEFINE_ADCxx1C_CHANNELS(adc081c, 8);
+DEFINE_ADCxx1C_CHANNELS(adc101c, 10);
+DEFINE_ADCxx1C_CHANNELS(adc121c, 12);
+
+/* Model ids are indexes into the adcxx1c_models array */
+enum adcxx1c_model_id {
+ ADC081C = 0,
+ ADC101C = 1,
+ ADC121C = 2,
+};
+
+static struct adcxx1c_model adcxx1c_models[] = {
+ ADCxx1C_MODEL(adc081c, 8),
+ ADCxx1C_MODEL(adc101c, 10),
+ ADCxx1C_MODEL(adc121c, 12),
};
static const struct iio_info adc081c_info = {
@@ -65,11 +126,30 @@ static const struct iio_info adc081c_info = {
.driver_module = THIS_MODULE,
};
+static irqreturn_t adc081c_trigger_handler(int irq, void *p)
+{
+ struct iio_poll_func *pf = p;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct adc081c *data = iio_priv(indio_dev);
+ u16 buf[8]; /* 2 bytes data + 6 bytes padding + 8 bytes timestamp */
+ int ret;
+
+ ret = i2c_smbus_read_word_swapped(data->i2c, REG_CONV_RES);
+ if (ret < 0)
+ goto out;
+ buf[0] = ret;
+ iio_push_to_buffers_with_timestamp(indio_dev, buf, iio_get_time_ns());
+out:
+ iio_trigger_notify_done(indio_dev->trig);
+ return IRQ_HANDLED;
+}
+
static int adc081c_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
struct iio_dev *iio;
struct adc081c *adc;
+ struct adcxx1c_model *model = &adcxx1c_models[id->driver_data];
int err;
if (!i2c_check_functionality(client->adapter, I2C_FUNC_SMBUS_WORD_DATA))
@@ -81,6 +161,7 @@ static int adc081c_probe(struct i2c_client *client,
adc = iio_priv(iio);
adc->i2c = client;
+ adc->bits = model->bits;
adc->ref = devm_regulator_get(&client->dev, "vref");
if (IS_ERR(adc->ref))
@@ -95,18 +176,26 @@ static int adc081c_probe(struct i2c_client *client,
iio->modes = INDIO_DIRECT_MODE;
iio->info = &adc081c_info;
- iio->channels = &adc081c_channel;
- iio->num_channels = 1;
+ iio->channels = model->channels;
+ iio->num_channels = ADC081C_NUM_CHANNELS;
+
+ err = iio_triggered_buffer_setup(iio, NULL, adc081c_trigger_handler, NULL);
+ if (err < 0) {
+ dev_err(&client->dev, "iio triggered buffer setup failed\n");
+ goto err_regulator_disable;
+ }
err = iio_device_register(iio);
if (err < 0)
- goto regulator_disable;
+ goto err_buffer_cleanup;
i2c_set_clientdata(client, iio);
return 0;
-regulator_disable:
+err_buffer_cleanup:
+ iio_triggered_buffer_cleanup(iio);
+err_regulator_disable:
regulator_disable(adc->ref);
return err;
@@ -118,13 +207,16 @@ static int adc081c_remove(struct i2c_client *client)
struct adc081c *adc = iio_priv(iio);
iio_device_unregister(iio);
+ iio_triggered_buffer_cleanup(iio);
regulator_disable(adc->ref);
return 0;
}
static const struct i2c_device_id adc081c_id[] = {
- { "adc081c", 0 },
+ { "adc081c", ADC081C },
+ { "adc101c", ADC101C },
+ { "adc121c", ADC121C },
{ }
};
MODULE_DEVICE_TABLE(i2c, adc081c_id);
@@ -132,6 +224,8 @@ MODULE_DEVICE_TABLE(i2c, adc081c_id);
#ifdef CONFIG_OF
static const struct of_device_id adc081c_of_match[] = {
{ .compatible = "ti,adc081c" },
+ { .compatible = "ti,adc101c" },
+ { .compatible = "ti,adc121c" },
{ }
};
MODULE_DEVICE_TABLE(of, adc081c_of_match);
@@ -149,5 +243,5 @@ static struct i2c_driver adc081c_driver = {
module_i2c_driver(adc081c_driver);
MODULE_AUTHOR("Thierry Reding <thierry.reding@avionic-design.de>");
-MODULE_DESCRIPTION("Texas Instruments ADC081C021/027 driver");
+MODULE_DESCRIPTION("Texas Instruments ADC081C/ADC101C/ADC121C driver");
MODULE_LICENSE("GPL v2");
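
The raw-value handling introduced above can be condensed into one expression: the conversion word keeps a 12-bit field in bits 11:0, and the 8- and 10-bit parts leave the low 4 or 2 bits reserved, hence the right shift by (12 - bits). A small standalone sketch of that extraction (the example word is made up):

#include <stdio.h>

static unsigned int adcxx1c_raw(unsigned int conv_word, unsigned int bits)
{
	/* bits is 8, 10 or 12 depending on the model */
	return (conv_word & 0xFFF) >> (12 - bits);
}

int main(void)
{
	unsigned int word = 0x0A50;	/* example conversion register word */

	/* the same word as read by an adc081c, adc101c and adc121c */
	printf("%u %u %u\n", adcxx1c_raw(word, 8), adcxx1c_raw(word, 10),
	       adcxx1c_raw(word, 12));	/* prints 165 660 2640 */
	return 0;
}
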
diff --git a/drivers/iio/adc/vf610_adc.c b/drivers/iio/adc/vf610_adc.c
index b10f629cc44b..653bf1379d2e 100644
--- a/drivers/iio/adc/vf610_adc.c
+++ b/drivers/iio/adc/vf610_adc.c
@@ -714,19 +714,19 @@ static int vf610_write_raw(struct iio_dev *indio_dev,
int i;
switch (mask) {
- case IIO_CHAN_INFO_SAMP_FREQ:
- for (i = 0;
- i < ARRAY_SIZE(info->sample_freq_avail);
- i++)
- if (val == info->sample_freq_avail[i]) {
- info->adc_feature.sample_rate = i;
- vf610_adc_sample_set(info);
- return 0;
- }
- break;
+ case IIO_CHAN_INFO_SAMP_FREQ:
+ for (i = 0;
+ i < ARRAY_SIZE(info->sample_freq_avail);
+ i++)
+ if (val == info->sample_freq_avail[i]) {
+ info->adc_feature.sample_rate = i;
+ vf610_adc_sample_set(info);
+ return 0;
+ }
+ break;
- default:
- break;
+ default:
+ break;
}
return -EINVAL;
diff --git a/drivers/iio/common/hid-sensors/hid-sensor-trigger.c b/drivers/iio/common/hid-sensors/hid-sensor-trigger.c
index 595511022795..5b41f9d0d4f3 100644
--- a/drivers/iio/common/hid-sensors/hid-sensor-trigger.c
+++ b/drivers/iio/common/hid-sensors/hid-sensor-trigger.c
@@ -115,7 +115,7 @@ int hid_sensor_power_state(struct hid_sensor_common *st, bool state)
return ret;
}
- return 0;
+ return 0;
#else
atomic_set(&st->user_requested_state, state);
return _hid_sensor_power_state(st, state);
diff --git a/drivers/iio/common/ms_sensors/ms_sensors_i2c.c b/drivers/iio/common/ms_sensors/ms_sensors_i2c.c
index 669dc7c270f5..ecf7721ecaf4 100644
--- a/drivers/iio/common/ms_sensors/ms_sensors_i2c.c
+++ b/drivers/iio/common/ms_sensors/ms_sensors_i2c.c
@@ -106,7 +106,7 @@ int ms_sensors_convert_and_read(void *cli, u8 conv, u8 rd,
unsigned int delay, u32 *adc)
{
int ret;
- __be32 buf = 0;
+ __be32 buf = 0;
struct i2c_client *client = (struct i2c_client *)cli;
/* Trigger conversion */
diff --git a/drivers/iio/common/st_sensors/st_sensors_buffer.c b/drivers/iio/common/st_sensors/st_sensors_buffer.c
index e18bc6782256..c55898543a47 100644
--- a/drivers/iio/common/st_sensors/st_sensors_buffer.c
+++ b/drivers/iio/common/st_sensors/st_sensors_buffer.c
@@ -24,81 +24,30 @@
int st_sensors_get_buffer_element(struct iio_dev *indio_dev, u8 *buf)
{
- u8 *addr;
- int i, n = 0, len;
+ int i, len;
+ int total = 0;
struct st_sensor_data *sdata = iio_priv(indio_dev);
unsigned int num_data_channels = sdata->num_data_channels;
- unsigned int byte_for_channel =
- indio_dev->channels[0].scan_type.storagebits >> 3;
-
- addr = kmalloc(num_data_channels, GFP_KERNEL);
- if (!addr) {
- len = -ENOMEM;
- goto st_sensors_get_buffer_element_error;
- }
for (i = 0; i < num_data_channels; i++) {
+ unsigned int bytes_to_read;
+
if (test_bit(i, indio_dev->active_scan_mask)) {
- addr[n] = indio_dev->channels[i].address;
- n++;
- }
- }
- switch (n) {
- case 1:
- len = sdata->tf->read_multiple_byte(&sdata->tb, sdata->dev,
- addr[0], byte_for_channel, buf, sdata->multiread_bit);
- break;
- case 2:
- if ((addr[1] - addr[0]) == byte_for_channel) {
+ bytes_to_read = indio_dev->channels[i].scan_type.storagebits >> 3;
len = sdata->tf->read_multiple_byte(&sdata->tb,
- sdata->dev, addr[0], byte_for_channel * n,
- buf, sdata->multiread_bit);
- } else {
- u8 *rx_array;
- rx_array = kmalloc(byte_for_channel * num_data_channels,
- GFP_KERNEL);
- if (!rx_array) {
- len = -ENOMEM;
- goto st_sensors_free_memory;
- }
+ sdata->dev, indio_dev->channels[i].address,
+ bytes_to_read,
+ buf + total, sdata->multiread_bit);
- len = sdata->tf->read_multiple_byte(&sdata->tb,
- sdata->dev, addr[0],
- byte_for_channel * num_data_channels,
- rx_array, sdata->multiread_bit);
- if (len < 0) {
- kfree(rx_array);
- goto st_sensors_free_memory;
- }
-
- for (i = 0; i < n * byte_for_channel; i++) {
- if (i < n)
- buf[i] = rx_array[i];
- else
- buf[i] = rx_array[n + i];
- }
- kfree(rx_array);
- len = byte_for_channel * n;
+ if (len < bytes_to_read)
+ return -EIO;
+
+ /* Advance the buffer pointer */
+ total += len;
}
- break;
- case 3:
- len = sdata->tf->read_multiple_byte(&sdata->tb, sdata->dev,
- addr[0], byte_for_channel * num_data_channels,
- buf, sdata->multiread_bit);
- break;
- default:
- len = -EINVAL;
- goto st_sensors_free_memory;
- }
- if (len != byte_for_channel * n) {
- len = -EIO;
- goto st_sensors_free_memory;
}
-st_sensors_free_memory:
- kfree(addr);
-st_sensors_get_buffer_element_error:
- return len;
+ return total;
}
EXPORT_SYMBOL(st_sensors_get_buffer_element);
@@ -109,6 +58,24 @@ irqreturn_t st_sensors_trigger_handler(int irq, void *p)
struct iio_dev *indio_dev = pf->indio_dev;
struct st_sensor_data *sdata = iio_priv(indio_dev);
+ /* If we have a status register, check if this IRQ came from us */
+ if (sdata->sensor_settings->drdy_irq.addr_stat_drdy) {
+ u8 status;
+
+ len = sdata->tf->read_byte(&sdata->tb, sdata->dev,
+ sdata->sensor_settings->drdy_irq.addr_stat_drdy,
+ &status);
+ if (len < 0)
+ dev_err(sdata->dev, "could not read channel status\n");
+
+ /*
+ * If this was not caused by any channels on this sensor,
+ * return IRQ_NONE
+ */
+ if (!(status & (u8)indio_dev->active_scan_mask[0]))
+ return IRQ_NONE;
+ }
+
len = st_sensors_get_buffer_element(indio_dev, sdata->buffer_data);
if (len < 0)
goto st_sensors_get_buffer_element_error;
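
A standalone model of the rewritten loop above: each active channel is read individually, sized from its storagebits, and appended back to back into the scan buffer. The fake_read() helper stands in for the bus-specific read_multiple_byte() callback and exists only for this sketch.

#include <stdio.h>
#include <string.h>

struct chan { unsigned int address; unsigned int storagebits; };

static int fake_read(unsigned int addr, unsigned char *dst, int len)
{
	memset(dst, addr, len);		/* pretend the device returned data */
	return len;
}

int main(void)
{
	struct chan chans[] = { { 0x28, 16 }, { 0x2a, 16 }, { 0x2c, 16 } };
	unsigned long active_mask = 0x5;	/* channels 0 and 2 enabled */
	unsigned char buf[16];
	int i, total = 0;

	for (i = 0; i < 3; i++) {
		int n = chans[i].storagebits >> 3;

		if (!(active_mask & (1u << i)))
			continue;
		if (fake_read(chans[i].address, buf + total, n) < n)
			return -1;
		total += n;			/* advance the buffer pointer */
	}
	printf("scan is %d bytes\n", total);	/* prints 4 */
	return 0;
}
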
diff --git a/drivers/iio/common/st_sensors/st_sensors_core.c b/drivers/iio/common/st_sensors/st_sensors_core.c
index f5a2d445d0c0..dffe00692169 100644
--- a/drivers/iio/common/st_sensors/st_sensors_core.c
+++ b/drivers/iio/common/st_sensors/st_sensors_core.c
@@ -301,6 +301,14 @@ static int st_sensors_set_drdy_int_pin(struct iio_dev *indio_dev,
return -EINVAL;
}
+	if (pdata->open_drain) {
+		if (!sdata->sensor_settings->drdy_irq.addr_od)
+			dev_err(&indio_dev->dev,
+				"open drain requested but unsupported\n");
+		else
+			sdata->int_pin_open_drain = true;
+	}
+
return 0;
}
@@ -321,6 +329,8 @@ static struct st_sensors_platform_data *st_sensors_of_probe(struct device *dev,
else
pdata->drdy_int_pin = defdata ? defdata->drdy_int_pin : 0;
+ pdata->open_drain = of_property_read_bool(np, "drive-open-drain");
+
return pdata;
}
#else
@@ -374,6 +384,16 @@ int st_sensors_init_sensor(struct iio_dev *indio_dev,
return err;
}
+ if (sdata->int_pin_open_drain) {
+ dev_info(&indio_dev->dev,
+ "set interrupt line to open drain mode\n");
+ err = st_sensors_write_data_with_mask(indio_dev,
+ sdata->sensor_settings->drdy_irq.addr_od,
+ sdata->sensor_settings->drdy_irq.mask_od, 1);
+ if (err < 0)
+ return err;
+ }
+
err = st_sensors_set_axis_enable(indio_dev, ST_SENSORS_ENABLE_ALL_AXIS);
return err;
diff --git a/drivers/iio/common/st_sensors/st_sensors_trigger.c b/drivers/iio/common/st_sensors/st_sensors_trigger.c
index 6a8c98327945..da72279fcf99 100644
--- a/drivers/iio/common/st_sensors/st_sensors_trigger.c
+++ b/drivers/iio/common/st_sensors/st_sensors_trigger.c
@@ -64,6 +64,19 @@ int st_sensors_allocate_trigger(struct iio_dev *indio_dev,
"rising edge\n", irq_trig);
irq_trig = IRQF_TRIGGER_RISING;
}
+
+ /*
+ * If the interrupt pin is Open Drain, by definition this
+ * means that the interrupt line may be shared with other
+ * peripherals. But to do this we also need to have a status
+ * register and mask to figure out if this sensor was firing
+	 * the IRQ or not, so we can tell the interrupt handler that
+ * it was "our" interrupt.
+ */
+ if (sdata->int_pin_open_drain &&
+ sdata->sensor_settings->drdy_irq.addr_stat_drdy)
+ irq_trig |= IRQF_SHARED;
+
err = request_threaded_irq(irq,
iio_trigger_generic_data_rdy_poll,
NULL,
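
A toy model of the shared-IRQ decision introduced above: with an open-drain DRDY line shared between devices, each handler checks its own status register against its active channel mask and only claims the interrupt when one of its channels actually flagged new data. Both status values and masks below are made up for illustration.

#include <stdio.h>

enum irqreturn { IRQ_NONE, IRQ_HANDLED };

static enum irqreturn drdy_handler(unsigned char status,
				   unsigned char active_mask)
{
	if (!(status & active_mask))
		return IRQ_NONE;	/* some other device on the line */
	return IRQ_HANDLED;		/* one of our channels has new data */
}

int main(void)
{
	printf("%d %d\n", drdy_handler(0x00, 0x07),	/* 0: not ours */
	       drdy_handler(0x05, 0x07));		/* 1: ours */
	return 0;
}
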
diff --git a/drivers/iio/dac/Kconfig b/drivers/iio/dac/Kconfig
index a995139f907c..e63b957c985f 100644
--- a/drivers/iio/dac/Kconfig
+++ b/drivers/iio/dac/Kconfig
@@ -74,6 +74,33 @@ config AD5449
To compile this driver as a module, choose M here: the
module will be called ad5449.
+config AD5592R_BASE
+ tristate
+
+config AD5592R
+ tristate "Analog Devices AD5592R ADC/DAC driver"
+ depends on SPI_MASTER
+ select GPIOLIB
+ select AD5592R_BASE
+ help
+ Say yes here to build support for Analog Devices AD5592R
+ Digital to Analog / Analog to Digital Converter.
+
+ To compile this driver as a module, choose M here: the
+ module will be called ad5592r.
+
+config AD5593R
+ tristate "Analog Devices AD5593R ADC/DAC driver"
+ depends on I2C
+ select GPIOLIB
+ select AD5592R_BASE
+ help
+ Say yes here to build support for Analog Devices AD5593R
+ Digital to Analog / Analog to Digital Converter.
+
+ To compile this driver as a module, choose M here: the
+ module will be called ad5593r.
+
config AD5504
tristate "Analog Devices AD5504/AD5501 DAC SPI driver"
depends on SPI
@@ -154,6 +181,16 @@ config AD7303
To compile this driver as module choose M here: the module will be called
ad7303.
+config LPC18XX_DAC
+ tristate "NXP LPC18xx DAC driver"
+ depends on ARCH_LPC18XX || COMPILE_TEST
+ depends on OF && HAS_IOMEM
+ help
+ Say yes here to build support for NXP LPC18XX DAC.
+
+ To compile this driver as a module, choose M here: the module will be
+ called lpc18xx_dac.
+
config M62332
tristate "Mitsubishi M62332 DAC driver"
depends on I2C
@@ -210,7 +247,7 @@ config MCP4922
config STX104
tristate "Apex Embedded Systems STX104 DAC driver"
- depends on ISA
+ depends on X86 && ISA
help
Say yes here to build support for the 2-channel DAC on the Apex
Embedded Systems STX104 integrated analog PC/104 card. The base port
diff --git a/drivers/iio/dac/Makefile b/drivers/iio/dac/Makefile
index 67b48429686d..8b78d5ca9b11 100644
--- a/drivers/iio/dac/Makefile
+++ b/drivers/iio/dac/Makefile
@@ -11,12 +11,16 @@ obj-$(CONFIG_AD5064) += ad5064.o
obj-$(CONFIG_AD5504) += ad5504.o
obj-$(CONFIG_AD5446) += ad5446.o
obj-$(CONFIG_AD5449) += ad5449.o
+obj-$(CONFIG_AD5592R_BASE) += ad5592r-base.o
+obj-$(CONFIG_AD5592R) += ad5592r.o
+obj-$(CONFIG_AD5593R) += ad5593r.o
obj-$(CONFIG_AD5755) += ad5755.o
obj-$(CONFIG_AD5761) += ad5761.o
obj-$(CONFIG_AD5764) += ad5764.o
obj-$(CONFIG_AD5791) += ad5791.o
obj-$(CONFIG_AD5686) += ad5686.o
obj-$(CONFIG_AD7303) += ad7303.o
+obj-$(CONFIG_LPC18XX_DAC) += lpc18xx_dac.o
obj-$(CONFIG_M62332) += m62332.o
obj-$(CONFIG_MAX517) += max517.o
obj-$(CONFIG_MAX5821) += max5821.o
diff --git a/drivers/iio/dac/ad5592r-base.c b/drivers/iio/dac/ad5592r-base.c
new file mode 100644
index 000000000000..948f600e7059
--- /dev/null
+++ b/drivers/iio/dac/ad5592r-base.c
@@ -0,0 +1,691 @@
+/*
+ * AD5592R Digital <-> Analog converters driver
+ *
+ * Copyright 2014-2016 Analog Devices Inc.
+ * Author: Paul Cercueil <paul.cercueil@analog.com>
+ *
+ * Licensed under the GPL-2.
+ */
+
+#include <linux/bitops.h>
+#include <linux/delay.h>
+#include <linux/iio/iio.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/of.h>
+#include <linux/regulator/consumer.h>
+#include <linux/gpio/consumer.h>
+#include <linux/gpio/driver.h>
+#include <linux/gpio.h>
+#include <linux/property.h>
+
+#include <dt-bindings/iio/adi,ad5592r.h>
+
+#include "ad5592r-base.h"
+
+static int ad5592r_gpio_get(struct gpio_chip *chip, unsigned offset)
+{
+ struct ad5592r_state *st = gpiochip_get_data(chip);
+ int ret = 0;
+ u8 val;
+
+ mutex_lock(&st->gpio_lock);
+
+ if (st->gpio_out & BIT(offset))
+ val = st->gpio_val;
+ else
+ ret = st->ops->gpio_read(st, &val);
+
+ mutex_unlock(&st->gpio_lock);
+
+ if (ret < 0)
+ return ret;
+
+ return !!(val & BIT(offset));
+}
+
+static void ad5592r_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
+{
+ struct ad5592r_state *st = gpiochip_get_data(chip);
+
+ mutex_lock(&st->gpio_lock);
+
+ if (value)
+ st->gpio_val |= BIT(offset);
+ else
+ st->gpio_val &= ~BIT(offset);
+
+ st->ops->reg_write(st, AD5592R_REG_GPIO_SET, st->gpio_val);
+
+ mutex_unlock(&st->gpio_lock);
+}
+
+static int ad5592r_gpio_direction_input(struct gpio_chip *chip, unsigned offset)
+{
+ struct ad5592r_state *st = gpiochip_get_data(chip);
+ int ret;
+
+ mutex_lock(&st->gpio_lock);
+
+ st->gpio_out &= ~BIT(offset);
+ st->gpio_in |= BIT(offset);
+
+ ret = st->ops->reg_write(st, AD5592R_REG_GPIO_OUT_EN, st->gpio_out);
+ if (ret < 0)
+ goto err_unlock;
+
+ ret = st->ops->reg_write(st, AD5592R_REG_GPIO_IN_EN, st->gpio_in);
+
+err_unlock:
+ mutex_unlock(&st->gpio_lock);
+
+ return ret;
+}
+
+static int ad5592r_gpio_direction_output(struct gpio_chip *chip,
+ unsigned offset, int value)
+{
+ struct ad5592r_state *st = gpiochip_get_data(chip);
+ int ret;
+
+ mutex_lock(&st->gpio_lock);
+
+ if (value)
+ st->gpio_val |= BIT(offset);
+ else
+ st->gpio_val &= ~BIT(offset);
+
+ st->gpio_in &= ~BIT(offset);
+ st->gpio_out |= BIT(offset);
+
+ ret = st->ops->reg_write(st, AD5592R_REG_GPIO_SET, st->gpio_val);
+ if (ret < 0)
+ goto err_unlock;
+
+ ret = st->ops->reg_write(st, AD5592R_REG_GPIO_OUT_EN, st->gpio_out);
+ if (ret < 0)
+ goto err_unlock;
+
+ ret = st->ops->reg_write(st, AD5592R_REG_GPIO_IN_EN, st->gpio_in);
+
+err_unlock:
+ mutex_unlock(&st->gpio_lock);
+
+ return ret;
+}
+
+static int ad5592r_gpio_request(struct gpio_chip *chip, unsigned offset)
+{
+ struct ad5592r_state *st = gpiochip_get_data(chip);
+
+ if (!(st->gpio_map & BIT(offset))) {
+ dev_err(st->dev, "GPIO %d is reserved by alternate function\n",
+ offset);
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static int ad5592r_gpio_init(struct ad5592r_state *st)
+{
+ if (!st->gpio_map)
+ return 0;
+
+ st->gpiochip.label = dev_name(st->dev);
+ st->gpiochip.base = -1;
+ st->gpiochip.ngpio = 8;
+ st->gpiochip.parent = st->dev;
+ st->gpiochip.can_sleep = true;
+ st->gpiochip.direction_input = ad5592r_gpio_direction_input;
+ st->gpiochip.direction_output = ad5592r_gpio_direction_output;
+ st->gpiochip.get = ad5592r_gpio_get;
+ st->gpiochip.set = ad5592r_gpio_set;
+ st->gpiochip.request = ad5592r_gpio_request;
+ st->gpiochip.owner = THIS_MODULE;
+
+ mutex_init(&st->gpio_lock);
+
+ return gpiochip_add_data(&st->gpiochip, st);
+}
+
+static void ad5592r_gpio_cleanup(struct ad5592r_state *st)
+{
+ if (st->gpio_map)
+ gpiochip_remove(&st->gpiochip);
+}
+
+static int ad5592r_reset(struct ad5592r_state *st)
+{
+ struct gpio_desc *gpio;
+ struct iio_dev *iio_dev = iio_priv_to_dev(st);
+
+ gpio = devm_gpiod_get_optional(st->dev, "reset", GPIOD_OUT_LOW);
+ if (IS_ERR(gpio))
+ return PTR_ERR(gpio);
+
+ if (gpio) {
+ udelay(1);
+ gpiod_set_value(gpio, 1);
+ } else {
+ mutex_lock(&iio_dev->mlock);
+ /* Writing this magic value resets the device */
+ st->ops->reg_write(st, AD5592R_REG_RESET, 0xdac);
+ mutex_unlock(&iio_dev->mlock);
+ }
+
+ udelay(250);
+
+ return 0;
+}
+
+static int ad5592r_get_vref(struct ad5592r_state *st)
+{
+ int ret;
+
+ if (st->reg) {
+ ret = regulator_get_voltage(st->reg);
+ if (ret < 0)
+ return ret;
+
+ return ret / 1000;
+ } else {
+ return 2500;
+ }
+}
+
+static int ad5592r_set_channel_modes(struct ad5592r_state *st)
+{
+ const struct ad5592r_rw_ops *ops = st->ops;
+ int ret;
+ unsigned i;
+ struct iio_dev *iio_dev = iio_priv_to_dev(st);
+ u8 pulldown = 0, tristate = 0, dac = 0, adc = 0;
+ u16 read_back;
+
+ for (i = 0; i < st->num_channels; i++) {
+ switch (st->channel_modes[i]) {
+ case CH_MODE_DAC:
+ dac |= BIT(i);
+ break;
+
+ case CH_MODE_ADC:
+ adc |= BIT(i);
+ break;
+
+ case CH_MODE_DAC_AND_ADC:
+ dac |= BIT(i);
+ adc |= BIT(i);
+ break;
+
+ case CH_MODE_GPIO:
+ st->gpio_map |= BIT(i);
+ st->gpio_in |= BIT(i); /* Default to input */
+ break;
+
+ case CH_MODE_UNUSED:
+ /* fall-through */
+ default:
+ switch (st->channel_offstate[i]) {
+ case CH_OFFSTATE_OUT_TRISTATE:
+ tristate |= BIT(i);
+ break;
+
+ case CH_OFFSTATE_OUT_LOW:
+ st->gpio_out |= BIT(i);
+ break;
+
+ case CH_OFFSTATE_OUT_HIGH:
+ st->gpio_out |= BIT(i);
+ st->gpio_val |= BIT(i);
+ break;
+
+ case CH_OFFSTATE_PULLDOWN:
+ /* fall-through */
+ default:
+ pulldown |= BIT(i);
+ break;
+ }
+ }
+ }
+
+ mutex_lock(&iio_dev->mlock);
+
+ /* Pull down unused pins to GND */
+ ret = ops->reg_write(st, AD5592R_REG_PULLDOWN, pulldown);
+ if (ret)
+ goto err_unlock;
+
+ ret = ops->reg_write(st, AD5592R_REG_TRISTATE, tristate);
+ if (ret)
+ goto err_unlock;
+
+ /* Configure pins that we use */
+ ret = ops->reg_write(st, AD5592R_REG_DAC_EN, dac);
+ if (ret)
+ goto err_unlock;
+
+ ret = ops->reg_write(st, AD5592R_REG_ADC_EN, adc);
+ if (ret)
+ goto err_unlock;
+
+ ret = ops->reg_write(st, AD5592R_REG_GPIO_SET, st->gpio_val);
+ if (ret)
+ goto err_unlock;
+
+ ret = ops->reg_write(st, AD5592R_REG_GPIO_OUT_EN, st->gpio_out);
+ if (ret)
+ goto err_unlock;
+
+ ret = ops->reg_write(st, AD5592R_REG_GPIO_IN_EN, st->gpio_in);
+ if (ret)
+ goto err_unlock;
+
+ /* Verify that we can read back at least one register */
+ ret = ops->reg_read(st, AD5592R_REG_ADC_EN, &read_back);
+ if (!ret && (read_back & 0xff) != adc)
+ ret = -EIO;
+
+err_unlock:
+ mutex_unlock(&iio_dev->mlock);
+ return ret;
+}
+
+static int ad5592r_reset_channel_modes(struct ad5592r_state *st)
+{
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(st->channel_modes); i++)
+ st->channel_modes[i] = CH_MODE_UNUSED;
+
+ return ad5592r_set_channel_modes(st);
+}
+
+static int ad5592r_write_raw(struct iio_dev *iio_dev,
+ struct iio_chan_spec const *chan, int val, int val2, long mask)
+{
+ struct ad5592r_state *st = iio_priv(iio_dev);
+ int ret;
+
+ switch (mask) {
+ case IIO_CHAN_INFO_RAW:
+
+ if (val >= (1 << chan->scan_type.realbits) || val < 0)
+ return -EINVAL;
+
+ if (!chan->output)
+ return -EINVAL;
+
+ mutex_lock(&iio_dev->mlock);
+ ret = st->ops->write_dac(st, chan->channel, val);
+ if (!ret)
+ st->cached_dac[chan->channel] = val;
+ mutex_unlock(&iio_dev->mlock);
+ return ret;
+ case IIO_CHAN_INFO_SCALE:
+ if (chan->type == IIO_VOLTAGE) {
+ bool gain;
+
+ if (val == st->scale_avail[0][0] &&
+ val2 == st->scale_avail[0][1])
+ gain = false;
+ else if (val == st->scale_avail[1][0] &&
+ val2 == st->scale_avail[1][1])
+ gain = true;
+ else
+ return -EINVAL;
+
+ mutex_lock(&iio_dev->mlock);
+
+ ret = st->ops->reg_read(st, AD5592R_REG_CTRL,
+ &st->cached_gp_ctrl);
+ if (ret < 0) {
+ mutex_unlock(&iio_dev->mlock);
+ return ret;
+ }
+
+ if (chan->output) {
+ if (gain)
+ st->cached_gp_ctrl |=
+ AD5592R_REG_CTRL_DAC_RANGE;
+ else
+ st->cached_gp_ctrl &=
+ ~AD5592R_REG_CTRL_DAC_RANGE;
+ } else {
+ if (gain)
+ st->cached_gp_ctrl |=
+ AD5592R_REG_CTRL_ADC_RANGE;
+ else
+ st->cached_gp_ctrl &=
+ ~AD5592R_REG_CTRL_ADC_RANGE;
+ }
+
+ ret = st->ops->reg_write(st, AD5592R_REG_CTRL,
+ st->cached_gp_ctrl);
+ mutex_unlock(&iio_dev->mlock);
+
+ return ret;
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int ad5592r_read_raw(struct iio_dev *iio_dev,
+ struct iio_chan_spec const *chan,
+ int *val, int *val2, long m)
+{
+ struct ad5592r_state *st = iio_priv(iio_dev);
+ u16 read_val;
+ int ret;
+
+ switch (m) {
+ case IIO_CHAN_INFO_RAW:
+ mutex_lock(&iio_dev->mlock);
+
+ if (!chan->output) {
+ ret = st->ops->read_adc(st, chan->channel, &read_val);
+ if (ret)
+ goto unlock;
+
+ if ((read_val >> 12 & 0x7) != (chan->channel & 0x7)) {
+ dev_err(st->dev, "Error while reading channel %u\n",
+ chan->channel);
+ ret = -EIO;
+ goto unlock;
+ }
+
+ read_val &= GENMASK(11, 0);
+
+ } else {
+ read_val = st->cached_dac[chan->channel];
+ }
+
+ dev_dbg(st->dev, "Channel %u read: 0x%04hX\n",
+ chan->channel, read_val);
+
+ *val = (int) read_val;
+ ret = IIO_VAL_INT;
+ break;
+ case IIO_CHAN_INFO_SCALE:
+ *val = ad5592r_get_vref(st);
+
+ if (chan->type == IIO_TEMP) {
+ s64 tmp = *val * (3767897513LL / 25LL);
+ *val = div_s64_rem(tmp, 1000000000LL, val2);
+
+ ret = IIO_VAL_INT_PLUS_MICRO;
+ } else {
+ int mult;
+
+ mutex_lock(&iio_dev->mlock);
+
+ if (chan->output)
+ mult = !!(st->cached_gp_ctrl &
+ AD5592R_REG_CTRL_DAC_RANGE);
+ else
+ mult = !!(st->cached_gp_ctrl &
+ AD5592R_REG_CTRL_ADC_RANGE);
+
+ *val *= ++mult;
+
+ *val2 = chan->scan_type.realbits;
+ ret = IIO_VAL_FRACTIONAL_LOG2;
+ }
+ break;
+ case IIO_CHAN_INFO_OFFSET:
+ ret = ad5592r_get_vref(st);
+
+ mutex_lock(&iio_dev->mlock);
+
+ if (st->cached_gp_ctrl & AD5592R_REG_CTRL_ADC_RANGE)
+ *val = (-34365 * 25) / ret;
+ else
+ *val = (-75365 * 25) / ret;
+ ret = IIO_VAL_INT;
+ break;
+ default:
+ ret = -EINVAL;
+ }
+
+unlock:
+ mutex_unlock(&iio_dev->mlock);
+ return ret;
+}
+
+static int ad5592r_write_raw_get_fmt(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan, long mask)
+{
+ switch (mask) {
+ case IIO_CHAN_INFO_SCALE:
+ return IIO_VAL_INT_PLUS_NANO;
+
+ default:
+ return IIO_VAL_INT_PLUS_MICRO;
+ }
+
+ return -EINVAL;
+}
+
+static const struct iio_info ad5592r_info = {
+ .read_raw = ad5592r_read_raw,
+ .write_raw = ad5592r_write_raw,
+ .write_raw_get_fmt = ad5592r_write_raw_get_fmt,
+ .driver_module = THIS_MODULE,
+};
+
+static ssize_t ad5592r_show_scale_available(struct iio_dev *iio_dev,
+ uintptr_t private,
+ const struct iio_chan_spec *chan,
+ char *buf)
+{
+ struct ad5592r_state *st = iio_priv(iio_dev);
+
+ return sprintf(buf, "%d.%09u %d.%09u\n",
+ st->scale_avail[0][0], st->scale_avail[0][1],
+ st->scale_avail[1][0], st->scale_avail[1][1]);
+}
+
+static struct iio_chan_spec_ext_info ad5592r_ext_info[] = {
+ {
+ .name = "scale_available",
+ .read = ad5592r_show_scale_available,
+		.shared = IIO_SHARED_BY_TYPE,
+ },
+ {},
+};
+
+static void ad5592r_setup_channel(struct iio_dev *iio_dev,
+ struct iio_chan_spec *chan, bool output, unsigned id)
+{
+ chan->type = IIO_VOLTAGE;
+ chan->indexed = 1;
+ chan->output = output;
+ chan->channel = id;
+ chan->info_mask_separate = BIT(IIO_CHAN_INFO_RAW);
+ chan->info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE);
+ chan->scan_type.sign = 'u';
+ chan->scan_type.realbits = 12;
+ chan->scan_type.storagebits = 16;
+ chan->ext_info = ad5592r_ext_info;
+}
+
+static int ad5592r_alloc_channels(struct ad5592r_state *st)
+{
+ unsigned i, curr_channel = 0,
+ num_channels = st->num_channels;
+ struct iio_dev *iio_dev = iio_priv_to_dev(st);
+ struct iio_chan_spec *channels;
+ struct fwnode_handle *child;
+ u32 reg, tmp;
+ int ret;
+
+ device_for_each_child_node(st->dev, child) {
+ ret = fwnode_property_read_u32(child, "reg", &reg);
+		if (ret || reg >= ARRAY_SIZE(st->channel_modes))
+ continue;
+
+ ret = fwnode_property_read_u32(child, "adi,mode", &tmp);
+ if (!ret)
+ st->channel_modes[reg] = tmp;
+
+		ret = fwnode_property_read_u32(child, "adi,off-state", &tmp);
+ if (!ret)
+ st->channel_offstate[reg] = tmp;
+ }
+
+ channels = devm_kzalloc(st->dev,
+ (1 + 2 * num_channels) * sizeof(*channels), GFP_KERNEL);
+ if (!channels)
+ return -ENOMEM;
+
+ for (i = 0; i < num_channels; i++) {
+ switch (st->channel_modes[i]) {
+ case CH_MODE_DAC:
+ ad5592r_setup_channel(iio_dev, &channels[curr_channel],
+ true, i);
+ curr_channel++;
+ break;
+
+ case CH_MODE_ADC:
+ ad5592r_setup_channel(iio_dev, &channels[curr_channel],
+ false, i);
+ curr_channel++;
+ break;
+
+ case CH_MODE_DAC_AND_ADC:
+ ad5592r_setup_channel(iio_dev, &channels[curr_channel],
+ true, i);
+ curr_channel++;
+ ad5592r_setup_channel(iio_dev, &channels[curr_channel],
+ false, i);
+ curr_channel++;
+ break;
+
+ default:
+ continue;
+ }
+ }
+
+ channels[curr_channel].type = IIO_TEMP;
+ channels[curr_channel].channel = 8;
+ channels[curr_channel].info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
+ BIT(IIO_CHAN_INFO_SCALE) |
+ BIT(IIO_CHAN_INFO_OFFSET);
+ curr_channel++;
+
+ iio_dev->num_channels = curr_channel;
+ iio_dev->channels = channels;
+
+ return 0;
+}
+
+static void ad5592r_init_scales(struct ad5592r_state *st, int vref_mV)
+{
+ s64 tmp = (s64)vref_mV * 1000000000LL >> 12;
+
+ st->scale_avail[0][0] =
+ div_s64_rem(tmp, 1000000000LL, &st->scale_avail[0][1]);
+ st->scale_avail[1][0] =
+ div_s64_rem(tmp * 2, 1000000000LL, &st->scale_avail[1][1]);
+}
+
+int ad5592r_probe(struct device *dev, const char *name,
+ const struct ad5592r_rw_ops *ops)
+{
+ struct iio_dev *iio_dev;
+ struct ad5592r_state *st;
+ int ret;
+
+ iio_dev = devm_iio_device_alloc(dev, sizeof(*st));
+ if (!iio_dev)
+ return -ENOMEM;
+
+ st = iio_priv(iio_dev);
+ st->dev = dev;
+ st->ops = ops;
+ st->num_channels = 8;
+ dev_set_drvdata(dev, iio_dev);
+
+ st->reg = devm_regulator_get_optional(dev, "vref");
+ if (IS_ERR(st->reg)) {
+ if ((PTR_ERR(st->reg) != -ENODEV) && dev->of_node)
+ return PTR_ERR(st->reg);
+
+ st->reg = NULL;
+ } else {
+ ret = regulator_enable(st->reg);
+ if (ret)
+ return ret;
+ }
+
+ iio_dev->dev.parent = dev;
+ iio_dev->name = name;
+ iio_dev->info = &ad5592r_info;
+ iio_dev->modes = INDIO_DIRECT_MODE;
+
+ ad5592r_init_scales(st, ad5592r_get_vref(st));
+
+ ret = ad5592r_reset(st);
+ if (ret)
+ goto error_disable_reg;
+
+ ret = ops->reg_write(st, AD5592R_REG_PD,
+ (st->reg == NULL) ? AD5592R_REG_PD_EN_REF : 0);
+ if (ret)
+ goto error_disable_reg;
+
+ ret = ad5592r_alloc_channels(st);
+ if (ret)
+ goto error_disable_reg;
+
+ ret = ad5592r_set_channel_modes(st);
+ if (ret)
+ goto error_reset_ch_modes;
+
+ ret = iio_device_register(iio_dev);
+ if (ret)
+ goto error_reset_ch_modes;
+
+ ret = ad5592r_gpio_init(st);
+ if (ret)
+ goto error_dev_unregister;
+
+ return 0;
+
+error_dev_unregister:
+ iio_device_unregister(iio_dev);
+
+error_reset_ch_modes:
+ ad5592r_reset_channel_modes(st);
+
+error_disable_reg:
+ if (st->reg)
+ regulator_disable(st->reg);
+
+ return ret;
+}
+EXPORT_SYMBOL_GPL(ad5592r_probe);
+
+int ad5592r_remove(struct device *dev)
+{
+ struct iio_dev *iio_dev = dev_get_drvdata(dev);
+ struct ad5592r_state *st = iio_priv(iio_dev);
+
+ iio_device_unregister(iio_dev);
+ ad5592r_reset_channel_modes(st);
+ ad5592r_gpio_cleanup(st);
+
+ if (st->reg)
+ regulator_disable(st->reg);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(ad5592r_remove);
+
+MODULE_AUTHOR("Paul Cercueil <paul.cercueil@analog.com>");
+MODULE_DESCRIPTION("Analog Devices AD5592R multi-channel converters");
+MODULE_LICENSE("GPL v2");
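
Worked numbers for ad5592r_init_scales() above, using the 2.5 V internal-reference fallback that ad5592r_get_vref() returns when no regulator is supplied: the two values printed by scale_available come out to about 0.6104 mV and 1.2207 mV per LSB. A minimal userspace sketch of the same arithmetic:

#include <stdio.h>

int main(void)
{
	/* 2500 mV internal reference, 12-bit converter */
	long long tmp = 2500LL * 1000000000LL >> 12;

	/* mirrors ad5592r_init_scales(): prints "0.610351562 1.220703124" */
	printf("%lld.%09lld %lld.%09lld\n",
	       tmp / 1000000000, tmp % 1000000000,
	       (tmp * 2) / 1000000000, (tmp * 2) % 1000000000);
	return 0;
}
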
diff --git a/drivers/iio/dac/ad5592r-base.h b/drivers/iio/dac/ad5592r-base.h
new file mode 100644
index 000000000000..841457e93f85
--- /dev/null
+++ b/drivers/iio/dac/ad5592r-base.h
@@ -0,0 +1,76 @@
+/*
+ * AD5592R / AD5593R Digital <-> Analog converters driver
+ *
+ * Copyright 2015-2016 Analog Devices Inc.
+ * Author: Paul Cercueil <paul.cercueil@analog.com>
+ *
+ * Licensed under the GPL-2.
+ */
+
+#ifndef __DRIVERS_IIO_DAC_AD5592R_BASE_H__
+#define __DRIVERS_IIO_DAC_AD5592R_BASE_H__
+
+#include <linux/types.h>
+#include <linux/cache.h>
+#include <linux/mutex.h>
+#include <linux/gpio/driver.h>
+
+struct device;
+struct ad5592r_state;
+
+enum ad5592r_registers {
+ AD5592R_REG_NOOP = 0x0,
+ AD5592R_REG_DAC_READBACK = 0x1,
+ AD5592R_REG_ADC_SEQ = 0x2,
+ AD5592R_REG_CTRL = 0x3,
+ AD5592R_REG_ADC_EN = 0x4,
+ AD5592R_REG_DAC_EN = 0x5,
+ AD5592R_REG_PULLDOWN = 0x6,
+ AD5592R_REG_LDAC = 0x7,
+ AD5592R_REG_GPIO_OUT_EN = 0x8,
+ AD5592R_REG_GPIO_SET = 0x9,
+ AD5592R_REG_GPIO_IN_EN = 0xA,
+ AD5592R_REG_PD = 0xB,
+ AD5592R_REG_OPEN_DRAIN = 0xC,
+ AD5592R_REG_TRISTATE = 0xD,
+ AD5592R_REG_RESET = 0xF,
+};
+
+#define AD5592R_REG_PD_EN_REF BIT(9)
+#define AD5592R_REG_CTRL_ADC_RANGE BIT(5)
+#define AD5592R_REG_CTRL_DAC_RANGE BIT(4)
+
+struct ad5592r_rw_ops {
+ int (*write_dac)(struct ad5592r_state *st, unsigned chan, u16 value);
+ int (*read_adc)(struct ad5592r_state *st, unsigned chan, u16 *value);
+ int (*reg_write)(struct ad5592r_state *st, u8 reg, u16 value);
+ int (*reg_read)(struct ad5592r_state *st, u8 reg, u16 *value);
+ int (*gpio_read)(struct ad5592r_state *st, u8 *value);
+};
+
+struct ad5592r_state {
+ struct device *dev;
+ struct regulator *reg;
+ struct gpio_chip gpiochip;
+ struct mutex gpio_lock; /* Protect cached gpio_out, gpio_val, etc. */
+ unsigned int num_channels;
+ const struct ad5592r_rw_ops *ops;
+ int scale_avail[2][2];
+ u16 cached_dac[8];
+ u16 cached_gp_ctrl;
+ u8 channel_modes[8];
+ u8 channel_offstate[8];
+ u8 gpio_map;
+ u8 gpio_out;
+ u8 gpio_in;
+ u8 gpio_val;
+
+ __be16 spi_msg ____cacheline_aligned;
+ __be16 spi_msg_nop;
+};
+
+int ad5592r_probe(struct device *dev, const char *name,
+ const struct ad5592r_rw_ops *ops);
+int ad5592r_remove(struct device *dev);
+
+#endif /* __DRIVERS_IIO_DAC_AD5592R_BASE_H__ */
diff --git a/drivers/iio/dac/ad5592r.c b/drivers/iio/dac/ad5592r.c
new file mode 100644
index 000000000000..0b235a2c7359
--- /dev/null
+++ b/drivers/iio/dac/ad5592r.c
@@ -0,0 +1,164 @@
+/*
+ * AD5592R Digital <-> Analog converters driver
+ *
+ * Copyright 2015-2016 Analog Devices Inc.
+ * Author: Paul Cercueil <paul.cercueil@analog.com>
+ *
+ * Licensed under the GPL-2.
+ */
+
+#include "ad5592r-base.h"
+
+#include <linux/bitops.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/spi/spi.h>
+
+#define AD5592R_GPIO_READBACK_EN BIT(10)
+#define AD5592R_LDAC_READBACK_EN BIT(6)
+
+static int ad5592r_spi_wnop_r16(struct ad5592r_state *st, u16 *buf)
+{
+ struct spi_device *spi = container_of(st->dev, struct spi_device, dev);
+ struct spi_transfer t = {
+ .tx_buf = &st->spi_msg_nop,
+ .rx_buf = buf,
+ .len = 2
+ };
+
+ st->spi_msg_nop = 0; /* NOP */
+
+ return spi_sync_transfer(spi, &t, 1);
+}
+
+static int ad5592r_write_dac(struct ad5592r_state *st, unsigned chan, u16 value)
+{
+ struct spi_device *spi = container_of(st->dev, struct spi_device, dev);
+
+ st->spi_msg = cpu_to_be16(BIT(15) | (chan << 12) | value);
+
+ return spi_write(spi, &st->spi_msg, sizeof(st->spi_msg));
+}
+
+static int ad5592r_read_adc(struct ad5592r_state *st, unsigned chan, u16 *value)
+{
+ struct spi_device *spi = container_of(st->dev, struct spi_device, dev);
+ int ret;
+
+ st->spi_msg = cpu_to_be16((AD5592R_REG_ADC_SEQ << 11) | BIT(chan));
+
+ ret = spi_write(spi, &st->spi_msg, sizeof(st->spi_msg));
+ if (ret)
+ return ret;
+
+ /*
+ * Invalid data:
+ * See Figure 40. Single-Channel ADC Conversion Sequence
+ */
+ ret = ad5592r_spi_wnop_r16(st, &st->spi_msg);
+ if (ret)
+ return ret;
+
+ ret = ad5592r_spi_wnop_r16(st, &st->spi_msg);
+ if (ret)
+ return ret;
+
+ *value = be16_to_cpu(st->spi_msg);
+
+ return 0;
+}
+
+static int ad5592r_reg_write(struct ad5592r_state *st, u8 reg, u16 value)
+{
+ struct spi_device *spi = container_of(st->dev, struct spi_device, dev);
+
+ st->spi_msg = cpu_to_be16((reg << 11) | value);
+
+ return spi_write(spi, &st->spi_msg, sizeof(st->spi_msg));
+}
+
+static int ad5592r_reg_read(struct ad5592r_state *st, u8 reg, u16 *value)
+{
+ struct spi_device *spi = container_of(st->dev, struct spi_device, dev);
+ int ret;
+
+ st->spi_msg = cpu_to_be16((AD5592R_REG_LDAC << 11) |
+ AD5592R_LDAC_READBACK_EN | (reg << 2));
+
+ ret = spi_write(spi, &st->spi_msg, sizeof(st->spi_msg));
+ if (ret)
+ return ret;
+
+ ret = ad5592r_spi_wnop_r16(st, &st->spi_msg);
+ if (ret)
+ return ret;
+
+ *value = be16_to_cpu(st->spi_msg);
+
+ return 0;
+}
+
+static int ad5593r_gpio_read(struct ad5592r_state *st, u8 *value)
+{
+ int ret;
+
+ ret = ad5592r_reg_write(st, AD5592R_REG_GPIO_IN_EN,
+ AD5592R_GPIO_READBACK_EN | st->gpio_in);
+ if (ret)
+ return ret;
+
+ ret = ad5592r_spi_wnop_r16(st, &st->spi_msg);
+ if (ret)
+ return ret;
+
+ *value = (u8) be16_to_cpu(st->spi_msg);
+
+ return 0;
+}
+
+static const struct ad5592r_rw_ops ad5592r_rw_ops = {
+ .write_dac = ad5592r_write_dac,
+ .read_adc = ad5592r_read_adc,
+ .reg_write = ad5592r_reg_write,
+ .reg_read = ad5592r_reg_read,
+ .gpio_read = ad5593r_gpio_read,
+};
+
+static int ad5592r_spi_probe(struct spi_device *spi)
+{
+ const struct spi_device_id *id = spi_get_device_id(spi);
+
+ return ad5592r_probe(&spi->dev, id->name, &ad5592r_rw_ops);
+}
+
+static int ad5592r_spi_remove(struct spi_device *spi)
+{
+ return ad5592r_remove(&spi->dev);
+}
+
+static const struct spi_device_id ad5592r_spi_ids[] = {
+ { .name = "ad5592r", },
+ {}
+};
+MODULE_DEVICE_TABLE(spi, ad5592r_spi_ids);
+
+static const struct of_device_id ad5592r_of_match[] = {
+ { .compatible = "adi,ad5592r", },
+ {},
+};
+MODULE_DEVICE_TABLE(of, ad5592r_of_match);
+
+static struct spi_driver ad5592r_spi_driver = {
+ .driver = {
+ .name = "ad5592r",
+ .of_match_table = of_match_ptr(ad5592r_of_match),
+ },
+ .probe = ad5592r_spi_probe,
+ .remove = ad5592r_spi_remove,
+ .id_table = ad5592r_spi_ids,
+};
+module_spi_driver(ad5592r_spi_driver);
+
+MODULE_AUTHOR("Paul Cercueil <paul.cercueil@analog.com>");
+MODULE_DESCRIPTION("Analog Devices AD5592R multi-channel converters");
+MODULE_LICENSE("GPL v2");
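
Putting the SPI ADC read above together with the check in ad5592r_read_raw(): the command word selects the channel in the ADC sequence register, two NOP frames follow (the first readback is documented as invalid), and the returned word carries the channel id in bits 14:12 and the sample in bits 11:0. A sketch of the framing, with a made-up readback word:

#include <stdio.h>

int main(void)
{
	unsigned int chan = 3;
	unsigned int cmd = (0x2u << 11) | (1u << chan);	/* AD5592R_REG_ADC_SEQ is 0x2 */
	unsigned int reply = 0x3456;			/* example readback word */

	/* prints "cmd 0x1008 -> chan 3, raw 1110" */
	printf("cmd 0x%04x -> chan %u, raw %u\n", cmd,
	       (reply >> 12) & 0x7, reply & 0xFFF);
	return 0;
}
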
diff --git a/drivers/iio/dac/ad5593r.c b/drivers/iio/dac/ad5593r.c
new file mode 100644
index 000000000000..dca158a88f47
--- /dev/null
+++ b/drivers/iio/dac/ad5593r.c
@@ -0,0 +1,131 @@
+/*
+ * AD5593R Digital <-> Analog converters driver
+ *
+ * Copyright 2015-2016 Analog Devices Inc.
+ * Author: Paul Cercueil <paul.cercueil@analog.com>
+ *
+ * Licensed under the GPL-2.
+ */
+
+#include "ad5592r-base.h"
+
+#include <linux/bitops.h>
+#include <linux/i2c.h>
+#include <linux/module.h>
+#include <linux/of.h>
+
+#define AD5593R_MODE_CONF (0 << 4)
+#define AD5593R_MODE_DAC_WRITE (1 << 4)
+#define AD5593R_MODE_ADC_READBACK (4 << 4)
+#define AD5593R_MODE_DAC_READBACK (5 << 4)
+#define AD5593R_MODE_GPIO_READBACK (6 << 4)
+#define AD5593R_MODE_REG_READBACK (7 << 4)
+
+static int ad5593r_write_dac(struct ad5592r_state *st, unsigned chan, u16 value)
+{
+ struct i2c_client *i2c = to_i2c_client(st->dev);
+
+ return i2c_smbus_write_word_swapped(i2c,
+ AD5593R_MODE_DAC_WRITE | chan, value);
+}
+
+static int ad5593r_read_adc(struct ad5592r_state *st, unsigned chan, u16 *value)
+{
+ struct i2c_client *i2c = to_i2c_client(st->dev);
+ s32 val;
+
+ val = i2c_smbus_write_word_swapped(i2c,
+ AD5593R_MODE_CONF | AD5592R_REG_ADC_SEQ, BIT(chan));
+ if (val < 0)
+ return (int) val;
+
+ val = i2c_smbus_read_word_swapped(i2c, AD5593R_MODE_ADC_READBACK);
+ if (val < 0)
+ return (int) val;
+
+ *value = (u16) val;
+
+ return 0;
+}
+
+static int ad5593r_reg_write(struct ad5592r_state *st, u8 reg, u16 value)
+{
+ struct i2c_client *i2c = to_i2c_client(st->dev);
+
+ return i2c_smbus_write_word_swapped(i2c,
+ AD5593R_MODE_CONF | reg, value);
+}
+
+static int ad5593r_reg_read(struct ad5592r_state *st, u8 reg, u16 *value)
+{
+ struct i2c_client *i2c = to_i2c_client(st->dev);
+ s32 val;
+
+ val = i2c_smbus_read_word_swapped(i2c, AD5593R_MODE_REG_READBACK | reg);
+ if (val < 0)
+ return (int) val;
+
+ *value = (u16) val;
+
+ return 0;
+}
+
+static int ad5593r_gpio_read(struct ad5592r_state *st, u8 *value)
+{
+ struct i2c_client *i2c = to_i2c_client(st->dev);
+ s32 val;
+
+ val = i2c_smbus_read_word_swapped(i2c, AD5593R_MODE_GPIO_READBACK);
+ if (val < 0)
+ return (int) val;
+
+ *value = (u8) val;
+
+ return 0;
+}
+
+static const struct ad5592r_rw_ops ad5593r_rw_ops = {
+ .write_dac = ad5593r_write_dac,
+ .read_adc = ad5593r_read_adc,
+ .reg_write = ad5593r_reg_write,
+ .reg_read = ad5593r_reg_read,
+ .gpio_read = ad5593r_gpio_read,
+};
+
+static int ad5593r_i2c_probe(struct i2c_client *i2c,
+ const struct i2c_device_id *id)
+{
+ return ad5592r_probe(&i2c->dev, id->name, &ad5593r_rw_ops);
+}
+
+static int ad5593r_i2c_remove(struct i2c_client *i2c)
+{
+ return ad5592r_remove(&i2c->dev);
+}
+
+static const struct i2c_device_id ad5593r_i2c_ids[] = {
+ { .name = "ad5593r", },
+ {},
+};
+MODULE_DEVICE_TABLE(i2c, ad5593r_i2c_ids);
+
+static const struct of_device_id ad5593r_of_match[] = {
+ { .compatible = "adi,ad5593r", },
+ {},
+};
+MODULE_DEVICE_TABLE(of, ad5593r_of_match);
+
+static struct i2c_driver ad5593r_driver = {
+ .driver = {
+ .name = "ad5593r",
+ .of_match_table = of_match_ptr(ad5593r_of_match),
+ },
+ .probe = ad5593r_i2c_probe,
+ .remove = ad5593r_i2c_remove,
+ .id_table = ad5593r_i2c_ids,
+};
+module_i2c_driver(ad5593r_driver);
+
+MODULE_AUTHOR("Paul Cercueil <paul.cercueil@analog.com>");
+MODULE_DESCRIPTION("Analog Devices AD5592R multi-channel converters");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/iio/dac/lpc18xx_dac.c b/drivers/iio/dac/lpc18xx_dac.c
new file mode 100644
index 000000000000..55d1456a059d
--- /dev/null
+++ b/drivers/iio/dac/lpc18xx_dac.c
@@ -0,0 +1,210 @@
+/*
+ * IIO DAC driver for NXP LPC18xx DAC
+ *
+ * Copyright (C) 2016 Joachim Eastwood <manabian@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * UNSUPPORTED hardware features:
+ * - Interrupts
+ * - DMA
+ */
+
+#include <linux/clk.h>
+#include <linux/err.h>
+#include <linux/iio/iio.h>
+#include <linux/iio/driver.h>
+#include <linux/io.h>
+#include <linux/iopoll.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/regulator/consumer.h>
+
+/* LPC18XX DAC registers and bits */
+#define LPC18XX_DAC_CR 0x000
+#define LPC18XX_DAC_CR_VALUE_SHIFT 6
+#define LPC18XX_DAC_CR_VALUE_MASK 0x3ff
+#define LPC18XX_DAC_CR_BIAS BIT(16)
+#define LPC18XX_DAC_CTRL 0x004
+#define LPC18XX_DAC_CTRL_DMA_ENA BIT(3)
+
+struct lpc18xx_dac {
+ struct regulator *vref;
+ void __iomem *base;
+ struct mutex lock;
+ struct clk *clk;
+};
+
+static const struct iio_chan_spec lpc18xx_dac_iio_channels[] = {
+ {
+ .type = IIO_VOLTAGE,
+ .output = 1,
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
+ BIT(IIO_CHAN_INFO_SCALE),
+ },
+};
+
+static int lpc18xx_dac_read_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ int *val, int *val2, long mask)
+{
+ struct lpc18xx_dac *dac = iio_priv(indio_dev);
+ u32 reg;
+
+ switch (mask) {
+ case IIO_CHAN_INFO_RAW:
+ reg = readl(dac->base + LPC18XX_DAC_CR);
+ *val = reg >> LPC18XX_DAC_CR_VALUE_SHIFT;
+ *val &= LPC18XX_DAC_CR_VALUE_MASK;
+
+ return IIO_VAL_INT;
+
+ case IIO_CHAN_INFO_SCALE:
+ *val = regulator_get_voltage(dac->vref) / 1000;
+ *val2 = 10;
+
+ return IIO_VAL_FRACTIONAL_LOG2;
+ }
+
+ return -EINVAL;
+}
+
+static int lpc18xx_dac_write_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ int val, int val2, long mask)
+{
+ struct lpc18xx_dac *dac = iio_priv(indio_dev);
+ u32 reg;
+
+ switch (mask) {
+ case IIO_CHAN_INFO_RAW:
+ if (val < 0 || val > LPC18XX_DAC_CR_VALUE_MASK)
+ return -EINVAL;
+
+ reg = LPC18XX_DAC_CR_BIAS;
+ reg |= val << LPC18XX_DAC_CR_VALUE_SHIFT;
+
+ mutex_lock(&dac->lock);
+ writel(reg, dac->base + LPC18XX_DAC_CR);
+ writel(LPC18XX_DAC_CTRL_DMA_ENA, dac->base + LPC18XX_DAC_CTRL);
+ mutex_unlock(&dac->lock);
+
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static const struct iio_info lpc18xx_dac_info = {
+ .read_raw = lpc18xx_dac_read_raw,
+ .write_raw = lpc18xx_dac_write_raw,
+ .driver_module = THIS_MODULE,
+};
+
+static int lpc18xx_dac_probe(struct platform_device *pdev)
+{
+ struct iio_dev *indio_dev;
+ struct lpc18xx_dac *dac;
+ struct resource *res;
+ int ret;
+
+ indio_dev = devm_iio_device_alloc(&pdev->dev, sizeof(*dac));
+ if (!indio_dev)
+ return -ENOMEM;
+
+ platform_set_drvdata(pdev, indio_dev);
+ dac = iio_priv(indio_dev);
+ mutex_init(&dac->lock);
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ dac->base = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(dac->base))
+ return PTR_ERR(dac->base);
+
+ dac->clk = devm_clk_get(&pdev->dev, NULL);
+ if (IS_ERR(dac->clk)) {
+ dev_err(&pdev->dev, "error getting clock\n");
+ return PTR_ERR(dac->clk);
+ }
+
+ dac->vref = devm_regulator_get(&pdev->dev, "vref");
+ if (IS_ERR(dac->vref)) {
+ dev_err(&pdev->dev, "error getting regulator\n");
+ return PTR_ERR(dac->vref);
+ }
+
+ indio_dev->name = dev_name(&pdev->dev);
+ indio_dev->dev.parent = &pdev->dev;
+ indio_dev->info = &lpc18xx_dac_info;
+ indio_dev->modes = INDIO_DIRECT_MODE;
+ indio_dev->channels = lpc18xx_dac_iio_channels;
+ indio_dev->num_channels = ARRAY_SIZE(lpc18xx_dac_iio_channels);
+
+ ret = regulator_enable(dac->vref);
+ if (ret) {
+ dev_err(&pdev->dev, "unable to enable regulator\n");
+ return ret;
+ }
+
+ ret = clk_prepare_enable(dac->clk);
+ if (ret) {
+ dev_err(&pdev->dev, "unable to enable clock\n");
+ goto dis_reg;
+ }
+
+ writel(0, dac->base + LPC18XX_DAC_CTRL);
+ writel(0, dac->base + LPC18XX_DAC_CR);
+
+ ret = iio_device_register(indio_dev);
+ if (ret) {
+ dev_err(&pdev->dev, "unable to register device\n");
+ goto dis_clk;
+ }
+
+ return 0;
+
+dis_clk:
+ clk_disable_unprepare(dac->clk);
+dis_reg:
+ regulator_disable(dac->vref);
+ return ret;
+}
+
+static int lpc18xx_dac_remove(struct platform_device *pdev)
+{
+ struct iio_dev *indio_dev = platform_get_drvdata(pdev);
+ struct lpc18xx_dac *dac = iio_priv(indio_dev);
+
+ iio_device_unregister(indio_dev);
+
+ writel(0, dac->base + LPC18XX_DAC_CTRL);
+ clk_disable_unprepare(dac->clk);
+ regulator_disable(dac->vref);
+
+ return 0;
+}
+
+static const struct of_device_id lpc18xx_dac_match[] = {
+ { .compatible = "nxp,lpc1850-dac" },
+ { /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, lpc18xx_dac_match);
+
+static struct platform_driver lpc18xx_dac_driver = {
+ .probe = lpc18xx_dac_probe,
+ .remove = lpc18xx_dac_remove,
+ .driver = {
+ .name = "lpc18xx-dac",
+ .of_match_table = lpc18xx_dac_match,
+ },
+};
+module_platform_driver(lpc18xx_dac_driver);
+
+MODULE_DESCRIPTION("LPC18xx DAC driver");
+MODULE_AUTHOR("Joachim Eastwood <manabian@gmail.com>");
+MODULE_LICENSE("GPL v2");
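
Given the vref / 2^10 scale the read path above reports, picking a raw code for a target output voltage is a one-liner. The 3.3 V "vref" supply in this sketch is an assumption for illustration only; the driver simply queries the regulator.

#include <stdio.h>

int main(void)
{
	const unsigned int vref_mv = 3300;	/* assumed vref supply */
	const unsigned int target_mv = 1650;

	/* 10-bit code for the CR VALUE field */
	unsigned int code = target_mv * 1024U / vref_mv;

	printf("raw DAC code for %u mV: %u\n", target_mv, code);	/* 512 */
	return 0;
}
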
diff --git a/drivers/iio/dac/stx104.c b/drivers/iio/dac/stx104.c
index 174f4b75ceed..27941220872f 100644
--- a/drivers/iio/dac/stx104.c
+++ b/drivers/iio/dac/stx104.c
@@ -33,16 +33,9 @@
}
#define STX104_EXTENT 16
-/**
- * The highest base address possible for an ISA device is 0x3FF; this results in
- * 1024 possible base addresses. Dividing the number of possible base addresses
- * by the address extent taken by each device results in the maximum number of
- * devices on a system.
- */
-#define MAX_NUM_STX104 (1024 / STX104_EXTENT)
-static unsigned base[MAX_NUM_STX104];
-static unsigned num_stx104;
+static unsigned int base[max_num_isa_dev(STX104_EXTENT)];
+static unsigned int num_stx104;
module_param_array(base, uint, &num_stx104, 0);
MODULE_PARM_DESC(base, "Apex Embedded Systems STX104 base addresses");
@@ -134,18 +127,7 @@ static struct isa_driver stx104_driver = {
}
};
-static void __exit stx104_exit(void)
-{
- isa_unregister_driver(&stx104_driver);
-}
-
-static int __init stx104_init(void)
-{
- return isa_register_driver(&stx104_driver, num_stx104);
-}
-
-module_init(stx104_init);
-module_exit(stx104_exit);
+module_isa_driver(stx104_driver, num_stx104);
MODULE_AUTHOR("William Breathitt Gray <vilhelm.gray@gmail.com>");
MODULE_DESCRIPTION("Apex Embedded Systems STX104 DAC driver");
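
The arithmetic spelled out in the removed comment, which the max_num_isa_dev() helper presumably captures: the highest ISA base address is 0x3FF, giving 1024 possible bases, and dividing by the 16-byte STX104_EXTENT leaves room for 64 devices. In numbers:

#include <stdio.h>

int main(void)
{
	const unsigned int stx104_extent = 16;
	const unsigned int num_bases = 0x3FF + 1;	/* 1024 possible bases */

	/* prints "max STX104 devices: 64" */
	printf("max STX104 devices: %u\n", num_bases / stx104_extent);
	return 0;
}
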
diff --git a/drivers/iio/frequency/ad9523.c b/drivers/iio/frequency/ad9523.c
index 44a30f286de1..99eba524f6dd 100644
--- a/drivers/iio/frequency/ad9523.c
+++ b/drivers/iio/frequency/ad9523.c
@@ -284,7 +284,7 @@ struct ad9523_state {
} data[2] ____cacheline_aligned;
};
-static int ad9523_read(struct iio_dev *indio_dev, unsigned addr)
+static int ad9523_read(struct iio_dev *indio_dev, unsigned int addr)
{
struct ad9523_state *st = iio_priv(indio_dev);
int ret;
@@ -318,7 +318,8 @@ static int ad9523_read(struct iio_dev *indio_dev, unsigned addr)
return ret;
};
-static int ad9523_write(struct iio_dev *indio_dev, unsigned addr, unsigned val)
+static int ad9523_write(struct iio_dev *indio_dev,
+ unsigned int addr, unsigned int val)
{
struct ad9523_state *st = iio_priv(indio_dev);
int ret;
@@ -351,11 +352,11 @@ static int ad9523_io_update(struct iio_dev *indio_dev)
}
static int ad9523_vco_out_map(struct iio_dev *indio_dev,
- unsigned ch, unsigned out)
+ unsigned int ch, unsigned int out)
{
struct ad9523_state *st = iio_priv(indio_dev);
int ret;
- unsigned mask;
+ unsigned int mask;
switch (ch) {
case 0 ... 3:
@@ -405,7 +406,7 @@ static int ad9523_vco_out_map(struct iio_dev *indio_dev,
}
static int ad9523_set_clock_provider(struct iio_dev *indio_dev,
- unsigned ch, unsigned long freq)
+ unsigned int ch, unsigned long freq)
{
struct ad9523_state *st = iio_priv(indio_dev);
long tmp1, tmp2;
@@ -619,7 +620,7 @@ static int ad9523_read_raw(struct iio_dev *indio_dev,
long m)
{
struct ad9523_state *st = iio_priv(indio_dev);
- unsigned code;
+ unsigned int code;
int ret;
mutex_lock(&indio_dev->mlock);
@@ -655,7 +656,7 @@ static int ad9523_write_raw(struct iio_dev *indio_dev,
long mask)
{
struct ad9523_state *st = iio_priv(indio_dev);
- unsigned reg;
+ unsigned int reg;
int ret, tmp, code;
mutex_lock(&indio_dev->mlock);
@@ -709,8 +710,8 @@ out:
}
static int ad9523_reg_access(struct iio_dev *indio_dev,
- unsigned reg, unsigned writeval,
- unsigned *readval)
+ unsigned int reg, unsigned int writeval,
+ unsigned int *readval)
{
int ret;
diff --git a/drivers/iio/gyro/Kconfig b/drivers/iio/gyro/Kconfig
index e816d29d6a62..205a84420ae9 100644
--- a/drivers/iio/gyro/Kconfig
+++ b/drivers/iio/gyro/Kconfig
@@ -93,7 +93,7 @@ config IIO_ST_GYRO_3AXIS
select IIO_TRIGGERED_BUFFER if (IIO_BUFFER)
help
Say yes here to build support for STMicroelectronics gyroscopes:
- L3G4200D, LSM330DL, L3GD20, LSM330DLC, L3G4IS, LSM330.
+ L3G4200D, LSM330DL, L3GD20, LSM330DLC, L3G4IS, LSM330, LSM9DS0.
This driver can also be built as a module. If so, these modules
will be created:
diff --git a/drivers/iio/gyro/bmg160_core.c b/drivers/iio/gyro/bmg160_core.c
index 4dac567e75b4..7ccc044063f6 100644
--- a/drivers/iio/gyro/bmg160_core.c
+++ b/drivers/iio/gyro/bmg160_core.c
@@ -17,7 +17,6 @@
#include <linux/delay.h>
#include <linux/slab.h>
#include <linux/acpi.h>
-#include <linux/gpio/consumer.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include <linux/iio/iio.h>
@@ -31,7 +30,6 @@
#include "bmg160.h"
#define BMG160_IRQ_NAME "bmg160_event"
-#define BMG160_GPIO_NAME "gpio_int"
#define BMG160_REG_CHIP_ID 0x00
#define BMG160_CHIP_ID_VAL 0x0F
@@ -97,7 +95,6 @@
#define BMG160_AUTO_SUSPEND_DELAY_MS 2000
struct bmg160_data {
- struct device *dev;
struct regmap *regmap;
struct iio_trigger *dready_trig;
struct iio_trigger *motion_trig;
@@ -116,6 +113,7 @@ enum bmg160_axis {
AXIS_X,
AXIS_Y,
AXIS_Z,
+ AXIS_MAX,
};
static const struct {
@@ -138,11 +136,12 @@ static const struct {
static int bmg160_set_mode(struct bmg160_data *data, u8 mode)
{
+ struct device *dev = regmap_get_device(data->regmap);
int ret;
ret = regmap_write(data->regmap, BMG160_REG_PMU_LPW, mode);
if (ret < 0) {
- dev_err(data->dev, "Error writing reg_pmu_lpw\n");
+ dev_err(dev, "Error writing reg_pmu_lpw\n");
return ret;
}
@@ -163,6 +162,7 @@ static int bmg160_convert_freq_to_bit(int val)
static int bmg160_set_bw(struct bmg160_data *data, int val)
{
+ struct device *dev = regmap_get_device(data->regmap);
int ret;
int bw_bits;
@@ -172,7 +172,7 @@ static int bmg160_set_bw(struct bmg160_data *data, int val)
ret = regmap_write(data->regmap, BMG160_REG_PMU_BW, bw_bits);
if (ret < 0) {
- dev_err(data->dev, "Error writing reg_pmu_bw\n");
+ dev_err(dev, "Error writing reg_pmu_bw\n");
return ret;
}
@@ -183,18 +183,19 @@ static int bmg160_set_bw(struct bmg160_data *data, int val)
static int bmg160_chip_init(struct bmg160_data *data)
{
+ struct device *dev = regmap_get_device(data->regmap);
int ret;
unsigned int val;
ret = regmap_read(data->regmap, BMG160_REG_CHIP_ID, &val);
if (ret < 0) {
- dev_err(data->dev, "Error reading reg_chip_id\n");
+ dev_err(dev, "Error reading reg_chip_id\n");
return ret;
}
- dev_dbg(data->dev, "Chip Id %x\n", val);
+ dev_dbg(dev, "Chip Id %x\n", val);
if (val != BMG160_CHIP_ID_VAL) {
- dev_err(data->dev, "invalid chip %x\n", val);
+ dev_err(dev, "invalid chip %x\n", val);
return -ENODEV;
}
@@ -213,14 +214,14 @@ static int bmg160_chip_init(struct bmg160_data *data)
/* Set Default Range */
ret = regmap_write(data->regmap, BMG160_REG_RANGE, BMG160_RANGE_500DPS);
if (ret < 0) {
- dev_err(data->dev, "Error writing reg_range\n");
+ dev_err(dev, "Error writing reg_range\n");
return ret;
}
data->dps_range = BMG160_RANGE_500DPS;
ret = regmap_read(data->regmap, BMG160_REG_SLOPE_THRES, &val);
if (ret < 0) {
- dev_err(data->dev, "Error reading reg_slope_thres\n");
+ dev_err(dev, "Error reading reg_slope_thres\n");
return ret;
}
data->slope_thres = val;
@@ -229,7 +230,7 @@ static int bmg160_chip_init(struct bmg160_data *data)
ret = regmap_update_bits(data->regmap, BMG160_REG_INT_EN_1,
BMG160_INT1_BIT_OD, 0);
if (ret < 0) {
- dev_err(data->dev, "Error updating bits in reg_int_en_1\n");
+ dev_err(dev, "Error updating bits in reg_int_en_1\n");
return ret;
}
@@ -237,7 +238,7 @@ static int bmg160_chip_init(struct bmg160_data *data)
BMG160_INT_MODE_LATCH_INT |
BMG160_INT_MODE_LATCH_RESET);
if (ret < 0) {
- dev_err(data->dev,
+ dev_err(dev,
"Error writing reg_motion_intr\n");
return ret;
}
@@ -248,20 +249,21 @@ static int bmg160_chip_init(struct bmg160_data *data)
static int bmg160_set_power_state(struct bmg160_data *data, bool on)
{
#ifdef CONFIG_PM
+ struct device *dev = regmap_get_device(data->regmap);
int ret;
if (on)
- ret = pm_runtime_get_sync(data->dev);
+ ret = pm_runtime_get_sync(dev);
else {
- pm_runtime_mark_last_busy(data->dev);
- ret = pm_runtime_put_autosuspend(data->dev);
+ pm_runtime_mark_last_busy(dev);
+ ret = pm_runtime_put_autosuspend(dev);
}
if (ret < 0) {
- dev_err(data->dev,
- "Failed: bmg160_set_power_state for %d\n", on);
+ dev_err(dev, "Failed: bmg160_set_power_state for %d\n", on);
+
if (on)
- pm_runtime_put_noidle(data->dev);
+ pm_runtime_put_noidle(dev);
return ret;
}
@@ -273,6 +275,7 @@ static int bmg160_set_power_state(struct bmg160_data *data, bool on)
static int bmg160_setup_any_motion_interrupt(struct bmg160_data *data,
bool status)
{
+ struct device *dev = regmap_get_device(data->regmap);
int ret;
/* Enable/Disable INT_MAP0 mapping */
@@ -280,7 +283,7 @@ static int bmg160_setup_any_motion_interrupt(struct bmg160_data *data,
BMG160_INT_MAP_0_BIT_ANY,
(status ? BMG160_INT_MAP_0_BIT_ANY : 0));
if (ret < 0) {
- dev_err(data->dev, "Error updating bits reg_int_map0\n");
+ dev_err(dev, "Error updating bits reg_int_map0\n");
return ret;
}
@@ -290,8 +293,7 @@ static int bmg160_setup_any_motion_interrupt(struct bmg160_data *data,
ret = regmap_write(data->regmap, BMG160_REG_SLOPE_THRES,
data->slope_thres);
if (ret < 0) {
- dev_err(data->dev,
- "Error writing reg_slope_thres\n");
+ dev_err(dev, "Error writing reg_slope_thres\n");
return ret;
}
@@ -299,8 +301,7 @@ static int bmg160_setup_any_motion_interrupt(struct bmg160_data *data,
BMG160_INT_MOTION_X | BMG160_INT_MOTION_Y |
BMG160_INT_MOTION_Z);
if (ret < 0) {
- dev_err(data->dev,
- "Error writing reg_motion_intr\n");
+ dev_err(dev, "Error writing reg_motion_intr\n");
return ret;
}
@@ -315,8 +316,7 @@ static int bmg160_setup_any_motion_interrupt(struct bmg160_data *data,
BMG160_INT_MODE_LATCH_INT |
BMG160_INT_MODE_LATCH_RESET);
if (ret < 0) {
- dev_err(data->dev,
- "Error writing reg_rst_latch\n");
+ dev_err(dev, "Error writing reg_rst_latch\n");
return ret;
}
}
@@ -329,7 +329,7 @@ static int bmg160_setup_any_motion_interrupt(struct bmg160_data *data,
}
if (ret < 0) {
- dev_err(data->dev, "Error writing reg_int_en0\n");
+ dev_err(dev, "Error writing reg_int_en0\n");
return ret;
}
@@ -339,6 +339,7 @@ static int bmg160_setup_any_motion_interrupt(struct bmg160_data *data,
static int bmg160_setup_new_data_interrupt(struct bmg160_data *data,
bool status)
{
+ struct device *dev = regmap_get_device(data->regmap);
int ret;
/* Enable/Disable INT_MAP1 mapping */
@@ -346,7 +347,7 @@ static int bmg160_setup_new_data_interrupt(struct bmg160_data *data,
BMG160_INT_MAP_1_BIT_NEW_DATA,
(status ? BMG160_INT_MAP_1_BIT_NEW_DATA : 0));
if (ret < 0) {
- dev_err(data->dev, "Error updating bits in reg_int_map1\n");
+ dev_err(dev, "Error updating bits in reg_int_map1\n");
return ret;
}
@@ -355,9 +356,8 @@ static int bmg160_setup_new_data_interrupt(struct bmg160_data *data,
BMG160_INT_MODE_NON_LATCH_INT |
BMG160_INT_MODE_LATCH_RESET);
if (ret < 0) {
- dev_err(data->dev,
- "Error writing reg_rst_latch\n");
- return ret;
+ dev_err(dev, "Error writing reg_rst_latch\n");
+ return ret;
}
ret = regmap_write(data->regmap, BMG160_REG_INT_EN_0,
@@ -369,16 +369,15 @@ static int bmg160_setup_new_data_interrupt(struct bmg160_data *data,
BMG160_INT_MODE_LATCH_INT |
BMG160_INT_MODE_LATCH_RESET);
if (ret < 0) {
- dev_err(data->dev,
- "Error writing reg_rst_latch\n");
- return ret;
+ dev_err(dev, "Error writing reg_rst_latch\n");
+ return ret;
}
ret = regmap_write(data->regmap, BMG160_REG_INT_EN_0, 0);
}
if (ret < 0) {
- dev_err(data->dev, "Error writing reg_int_en0\n");
+ dev_err(dev, "Error writing reg_int_en0\n");
return ret;
}
@@ -401,6 +400,7 @@ static int bmg160_get_bw(struct bmg160_data *data, int *val)
static int bmg160_set_scale(struct bmg160_data *data, int val)
{
+ struct device *dev = regmap_get_device(data->regmap);
int ret, i;
for (i = 0; i < ARRAY_SIZE(bmg160_scale_table); ++i) {
@@ -408,8 +408,7 @@ static int bmg160_set_scale(struct bmg160_data *data, int val)
ret = regmap_write(data->regmap, BMG160_REG_RANGE,
bmg160_scale_table[i].dps_range);
if (ret < 0) {
- dev_err(data->dev,
- "Error writing reg_range\n");
+ dev_err(dev, "Error writing reg_range\n");
return ret;
}
data->dps_range = bmg160_scale_table[i].dps_range;
@@ -422,6 +421,7 @@ static int bmg160_set_scale(struct bmg160_data *data, int val)
static int bmg160_get_temp(struct bmg160_data *data, int *val)
{
+ struct device *dev = regmap_get_device(data->regmap);
int ret;
unsigned int raw_val;
@@ -434,7 +434,7 @@ static int bmg160_get_temp(struct bmg160_data *data, int *val)
ret = regmap_read(data->regmap, BMG160_REG_TEMP, &raw_val);
if (ret < 0) {
- dev_err(data->dev, "Error reading reg_temp\n");
+ dev_err(dev, "Error reading reg_temp\n");
bmg160_set_power_state(data, false);
mutex_unlock(&data->mutex);
return ret;
@@ -451,6 +451,7 @@ static int bmg160_get_temp(struct bmg160_data *data, int *val)
static int bmg160_get_axis(struct bmg160_data *data, int axis, int *val)
{
+ struct device *dev = regmap_get_device(data->regmap);
int ret;
__le16 raw_val;
@@ -464,7 +465,7 @@ static int bmg160_get_axis(struct bmg160_data *data, int axis, int *val)
ret = regmap_bulk_read(data->regmap, BMG160_AXIS_TO_REG(axis), &raw_val,
sizeof(raw_val));
if (ret < 0) {
- dev_err(data->dev, "Error reading axis %d\n", axis);
+ dev_err(dev, "Error reading axis %d\n", axis);
bmg160_set_power_state(data, false);
mutex_unlock(&data->mutex);
return ret;
@@ -764,26 +765,23 @@ static const struct iio_info bmg160_info = {
.driver_module = THIS_MODULE,
};
+static const unsigned long bmg160_accel_scan_masks[] = {
+ BIT(AXIS_X) | BIT(AXIS_Y) | BIT(AXIS_Z),
+ 0};
+
static irqreturn_t bmg160_trigger_handler(int irq, void *p)
{
struct iio_poll_func *pf = p;
struct iio_dev *indio_dev = pf->indio_dev;
struct bmg160_data *data = iio_priv(indio_dev);
- int bit, ret, i = 0;
- unsigned int val;
+ int ret;
mutex_lock(&data->mutex);
- for_each_set_bit(bit, indio_dev->active_scan_mask,
- indio_dev->masklength) {
- ret = regmap_bulk_read(data->regmap, BMG160_AXIS_TO_REG(bit),
- &val, 2);
- if (ret < 0) {
- mutex_unlock(&data->mutex);
- goto err;
- }
- data->buffer[i++] = val;
- }
+ ret = regmap_bulk_read(data->regmap, BMG160_REG_XOUT_L,
+ data->buffer, AXIS_MAX * 2);
mutex_unlock(&data->mutex);
+ if (ret < 0)
+ goto err;
iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
pf->timestamp);
@@ -797,6 +795,7 @@ static int bmg160_trig_try_reen(struct iio_trigger *trig)
{
struct iio_dev *indio_dev = iio_trigger_get_drvdata(trig);
struct bmg160_data *data = iio_priv(indio_dev);
+ struct device *dev = regmap_get_device(data->regmap);
int ret;
/* new data interrupts don't need ack */
@@ -808,7 +807,7 @@ static int bmg160_trig_try_reen(struct iio_trigger *trig)
BMG160_INT_MODE_LATCH_INT |
BMG160_INT_MODE_LATCH_RESET);
if (ret < 0) {
- dev_err(data->dev, "Error writing reg_rst_latch\n");
+ dev_err(dev, "Error writing reg_rst_latch\n");
return ret;
}
@@ -868,13 +867,14 @@ static irqreturn_t bmg160_event_handler(int irq, void *private)
{
struct iio_dev *indio_dev = private;
struct bmg160_data *data = iio_priv(indio_dev);
+ struct device *dev = regmap_get_device(data->regmap);
int ret;
int dir;
unsigned int val;
ret = regmap_read(data->regmap, BMG160_REG_INT_STATUS_2, &val);
if (ret < 0) {
- dev_err(data->dev, "Error reading reg_int_status2\n");
+ dev_err(dev, "Error reading reg_int_status2\n");
goto ack_intr_status;
}
@@ -911,8 +911,7 @@ ack_intr_status:
BMG160_INT_MODE_LATCH_INT |
BMG160_INT_MODE_LATCH_RESET);
if (ret < 0)
- dev_err(data->dev,
- "Error writing reg_rst_latch\n");
+ dev_err(dev, "Error writing reg_rst_latch\n");
}
return IRQ_HANDLED;
@@ -956,29 +955,6 @@ static const struct iio_buffer_setup_ops bmg160_buffer_setup_ops = {
.postdisable = bmg160_buffer_postdisable,
};
-static int bmg160_gpio_probe(struct bmg160_data *data)
-
-{
- struct device *dev;
- struct gpio_desc *gpio;
-
- dev = data->dev;
-
- /* data ready gpio interrupt pin */
- gpio = devm_gpiod_get_index(dev, BMG160_GPIO_NAME, 0, GPIOD_IN);
- if (IS_ERR(gpio)) {
- dev_err(dev, "acpi gpio get index failed\n");
- return PTR_ERR(gpio);
- }
-
- data->irq = gpiod_to_irq(gpio);
-
- dev_dbg(dev, "GPIO resource, no:%d irq:%d\n", desc_to_gpio(gpio),
- data->irq);
-
- return 0;
-}
-
static const char *bmg160_match_acpi_device(struct device *dev)
{
const struct acpi_device_id *id;
@@ -1003,7 +979,6 @@ int bmg160_core_probe(struct device *dev, struct regmap *regmap, int irq,
data = iio_priv(indio_dev);
dev_set_drvdata(dev, indio_dev);
- data->dev = dev;
data->irq = irq;
data->regmap = regmap;
@@ -1020,12 +995,10 @@ int bmg160_core_probe(struct device *dev, struct regmap *regmap, int irq,
indio_dev->channels = bmg160_channels;
indio_dev->num_channels = ARRAY_SIZE(bmg160_channels);
indio_dev->name = name;
+ indio_dev->available_scan_masks = bmg160_accel_scan_masks;
indio_dev->modes = INDIO_DIRECT_MODE;
indio_dev->info = &bmg160_info;
- if (data->irq <= 0)
- bmg160_gpio_probe(data);
-
if (data->irq > 0) {
ret = devm_request_threaded_irq(dev,
data->irq,
@@ -1168,7 +1141,7 @@ static int bmg160_runtime_suspend(struct device *dev)
ret = bmg160_set_mode(data, BMG160_MODE_SUSPEND);
if (ret < 0) {
- dev_err(data->dev, "set mode failed\n");
+ dev_err(dev, "set mode failed\n");
return -EAGAIN;
}
diff --git a/drivers/iio/gyro/st_gyro.h b/drivers/iio/gyro/st_gyro.h
index 5353d6328c54..a5c5c4e29add 100644
--- a/drivers/iio/gyro/st_gyro.h
+++ b/drivers/iio/gyro/st_gyro.h
@@ -21,6 +21,7 @@
#define L3GD20_GYRO_DEV_NAME "l3gd20"
#define L3G4IS_GYRO_DEV_NAME "l3g4is_ui"
#define LSM330_GYRO_DEV_NAME "lsm330_gyro"
+#define LSM9DS0_GYRO_DEV_NAME "lsm9ds0_gyro"
/**
* struct st_sensors_platform_data - gyro platform data
diff --git a/drivers/iio/gyro/st_gyro_core.c b/drivers/iio/gyro/st_gyro_core.c
index 110f95b6e52f..52a3c87c375c 100644
--- a/drivers/iio/gyro/st_gyro_core.c
+++ b/drivers/iio/gyro/st_gyro_core.c
@@ -190,6 +190,7 @@ static const struct st_sensor_settings st_gyro_sensors_settings[] = {
* drain settings, but only for INT1 and not
* for the DRDY line on INT2.
*/
+ .addr_stat_drdy = ST_SENSORS_DEFAULT_STAT_ADDR,
},
.multi_read_bit = ST_GYRO_1_MULTIREAD_BIT,
.bootime = 2,
@@ -203,6 +204,7 @@ static const struct st_sensor_settings st_gyro_sensors_settings[] = {
[2] = LSM330DLC_GYRO_DEV_NAME,
[3] = L3G4IS_GYRO_DEV_NAME,
[4] = LSM330_GYRO_DEV_NAME,
+ [5] = LSM9DS0_GYRO_DEV_NAME,
},
.ch = (struct iio_chan_spec *)st_gyro_16bit_channels,
.odr = {
@@ -258,6 +260,7 @@ static const struct st_sensor_settings st_gyro_sensors_settings[] = {
* drain settings, but only for INT1 and not
* for the DRDY line on INT2.
*/
+ .addr_stat_drdy = ST_SENSORS_DEFAULT_STAT_ADDR,
},
.multi_read_bit = ST_GYRO_2_MULTIREAD_BIT,
.bootime = 2,
@@ -322,6 +325,7 @@ static const struct st_sensor_settings st_gyro_sensors_settings[] = {
* drain settings, but only for INT1 and not
* for the DRDY line on INT2.
*/
+ .addr_stat_drdy = ST_SENSORS_DEFAULT_STAT_ADDR,
},
.multi_read_bit = ST_GYRO_3_MULTIREAD_BIT,
.bootime = 2,
diff --git a/drivers/iio/gyro/st_gyro_i2c.c b/drivers/iio/gyro/st_gyro_i2c.c
index 6848451f817a..40056b821036 100644
--- a/drivers/iio/gyro/st_gyro_i2c.c
+++ b/drivers/iio/gyro/st_gyro_i2c.c
@@ -48,6 +48,10 @@ static const struct of_device_id st_gyro_of_match[] = {
.compatible = "st,lsm330-gyro",
.data = LSM330_GYRO_DEV_NAME,
},
+ {
+ .compatible = "st,lsm9ds0-gyro",
+ .data = LSM9DS0_GYRO_DEV_NAME,
+ },
{},
};
MODULE_DEVICE_TABLE(of, st_gyro_of_match);
@@ -93,6 +97,7 @@ static const struct i2c_device_id st_gyro_id_table[] = {
{ L3GD20_GYRO_DEV_NAME },
{ L3G4IS_GYRO_DEV_NAME },
{ LSM330_GYRO_DEV_NAME },
+ { LSM9DS0_GYRO_DEV_NAME },
{},
};
MODULE_DEVICE_TABLE(i2c, st_gyro_id_table);
diff --git a/drivers/iio/gyro/st_gyro_spi.c b/drivers/iio/gyro/st_gyro_spi.c
index d2b7a5fa344c..fbf2faed501c 100644
--- a/drivers/iio/gyro/st_gyro_spi.c
+++ b/drivers/iio/gyro/st_gyro_spi.c
@@ -54,6 +54,7 @@ static const struct spi_device_id st_gyro_id_table[] = {
{ L3GD20_GYRO_DEV_NAME },
{ L3G4IS_GYRO_DEV_NAME },
{ LSM330_GYRO_DEV_NAME },
+ { LSM9DS0_GYRO_DEV_NAME },
{},
};
MODULE_DEVICE_TABLE(spi, st_gyro_id_table);
diff --git a/drivers/iio/humidity/Kconfig b/drivers/iio/humidity/Kconfig
index 866dda133336..738a86d9e4a9 100644
--- a/drivers/iio/humidity/Kconfig
+++ b/drivers/iio/humidity/Kconfig
@@ -3,6 +3,16 @@
#
menu "Humidity sensors"
+config AM2315
+ tristate "Aosong AM2315 relative humidity and temperature sensor"
+ depends on I2C
+ select IIO_BUFFER
+ select IIO_TRIGGERED_BUFFER
+ help
+ If you say yes here you get support for the Aosong AM2315
+ relative humidity and ambient temperature sensor.
+
+ This driver can also be built as a module. If so, the module will
+ be called am2315.
+
config DHT11
tristate "DHT11 (and compatible sensors) driver"
depends on GPIOLIB || COMPILE_TEST
diff --git a/drivers/iio/humidity/Makefile b/drivers/iio/humidity/Makefile
index c9f089a9a6b8..4a73442fcd9c 100644
--- a/drivers/iio/humidity/Makefile
+++ b/drivers/iio/humidity/Makefile
@@ -2,6 +2,7 @@
# Makefile for IIO humidity sensor drivers
#
+obj-$(CONFIG_AM2315) += am2315.o
obj-$(CONFIG_DHT11) += dht11.o
obj-$(CONFIG_HDC100X) += hdc100x.o
obj-$(CONFIG_HTU21) += htu21.o
diff --git a/drivers/iio/humidity/am2315.c b/drivers/iio/humidity/am2315.c
new file mode 100644
index 000000000000..3be6d209a159
--- /dev/null
+++ b/drivers/iio/humidity/am2315.c
@@ -0,0 +1,303 @@
+/**
+ * Aosong AM2315 relative humidity and temperature
+ *
+ * Copyright (c) 2016, Intel Corporation.
+ *
+ * This file is subject to the terms and conditions of version 2 of
+ * the GNU General Public License. See the file COPYING in the main
+ * directory of this archive for more details.
+ *
+ * 7-bit I2C address: 0x5C.
+ */
+
+#include <linux/acpi.h>
+#include <linux/delay.h>
+#include <linux/i2c.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/iio/buffer.h>
+#include <linux/iio/iio.h>
+#include <linux/iio/sysfs.h>
+#include <linux/iio/trigger_consumer.h>
+#include <linux/iio/triggered_buffer.h>
+
+#define AM2315_REG_HUM_MSB 0x00
+#define AM2315_REG_HUM_LSB 0x01
+#define AM2315_REG_TEMP_MSB 0x02
+#define AM2315_REG_TEMP_LSB 0x03
+
+#define AM2315_FUNCTION_READ 0x03
+#define AM2315_HUM_OFFSET 2
+#define AM2315_TEMP_OFFSET 4
+#define AM2315_ALL_CHANNEL_MASK GENMASK(1, 0)
+
+#define AM2315_DRIVER_NAME "am2315"
+
+struct am2315_data {
+ struct i2c_client *client;
+ struct mutex lock;
+ s16 buffer[8]; /* 2x16-bit channels + 2x16 padding + 4x16 timestamp */
+};
+
+struct am2315_sensor_data {
+ s16 hum_data;
+ s16 temp_data;
+};
+
+static const struct iio_chan_spec am2315_channels[] = {
+ {
+ .type = IIO_HUMIDITYRELATIVE,
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
+ BIT(IIO_CHAN_INFO_SCALE),
+ .scan_index = 0,
+ .scan_type = {
+ .sign = 's',
+ .realbits = 16,
+ .storagebits = 16,
+ .endianness = IIO_CPU,
+ },
+ },
+ {
+ .type = IIO_TEMP,
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
+ BIT(IIO_CHAN_INFO_SCALE),
+ .scan_index = 1,
+ .scan_type = {
+ .sign = 's',
+ .realbits = 16,
+ .storagebits = 16,
+ .endianness = IIO_CPU,
+ },
+ },
+ IIO_CHAN_SOFT_TIMESTAMP(2),
+};
+
+/* CRC calculation algorithm, as specified in the datasheet (page 13). */
+static u16 am2315_crc(u8 *data, u8 nr_bytes)
+{
+ int i;
+ u16 crc = 0xffff;
+
+ while (nr_bytes--) {
+ crc ^= *data++;
+ for (i = 0; i < 8; i++) {
+ if (crc & 0x01) {
+ crc >>= 1;
+ crc ^= 0xA001;
+ } else {
+ crc >>= 1;
+ }
+ }
+ }
+
+ return crc;
+}
+
+/* Simple function that sends a few bytes to the device to wake it up. */
+static void am2315_ping(struct i2c_client *client)
+{
+ i2c_smbus_read_byte_data(client, AM2315_REG_HUM_MSB);
+}
+
+static int am2315_read_data(struct am2315_data *data,
+ struct am2315_sensor_data *sensor_data)
+{
+ int ret;
+ /* tx_buf format: <function code> <start addr> <nr of regs to read> */
+ u8 tx_buf[3] = { AM2315_FUNCTION_READ, AM2315_REG_HUM_MSB, 4 };
+ /*
+ * rx_buf format:
+ * <function code> <number of registers read>
+ * <humidity MSB> <humidity LSB> <temp MSB> <temp LSB>
+ * <CRC LSB> <CRC MSB>
+ */
+ u8 rx_buf[8];
+ u16 crc;
+
+ /* First wake up the device. */
+ am2315_ping(data->client);
+
+ mutex_lock(&data->lock);
+ ret = i2c_master_send(data->client, tx_buf, sizeof(tx_buf));
+ if (ret < 0) {
+ dev_err(&data->client->dev, "failed to send read request\n");
+ goto exit_unlock;
+ }
+ /* Wait 2-3 ms, then read back the data sent by the device. */
+ usleep_range(2000, 3000);
+ /* Do a bulk data read, then pick out what we need. */
+ ret = i2c_master_recv(data->client, rx_buf, sizeof(rx_buf));
+ if (ret < 0) {
+ dev_err(&data->client->dev, "failed to read sensor data\n");
+ goto exit_unlock;
+ }
+ mutex_unlock(&data->lock);
+ /*
+ * Do a CRC check on the data and compare it to the value
+ * calculated by the device.
+ */
+ crc = am2315_crc(rx_buf, sizeof(rx_buf) - 2);
+ if ((crc & 0xff) != rx_buf[6] || (crc >> 8) != rx_buf[7]) {
+ dev_err(&data->client->dev, "failed to verify sensor data\n");
+ return -EIO;
+ }
+
+ sensor_data->hum_data = (rx_buf[AM2315_HUM_OFFSET] << 8) |
+ rx_buf[AM2315_HUM_OFFSET + 1];
+ sensor_data->temp_data = (rx_buf[AM2315_TEMP_OFFSET] << 8) |
+ rx_buf[AM2315_TEMP_OFFSET + 1];
+
+ return ret;
+
+exit_unlock:
+ mutex_unlock(&data->lock);
+ return ret;
+}
+
+static irqreturn_t am2315_trigger_handler(int irq, void *p)
+{
+ int i;
+ int ret;
+ int bit;
+ struct iio_poll_func *pf = p;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct am2315_data *data = iio_priv(indio_dev);
+ struct am2315_sensor_data sensor_data;
+
+ ret = am2315_read_data(data, &sensor_data);
+ if (ret < 0)
+ goto err;
+
+ mutex_lock(&data->lock);
+ if (*(indio_dev->active_scan_mask) == AM2315_ALL_CHANNEL_MASK) {
+ data->buffer[0] = sensor_data.hum_data;
+ data->buffer[1] = sensor_data.temp_data;
+ } else {
+ i = 0;
+ for_each_set_bit(bit, indio_dev->active_scan_mask,
+ indio_dev->masklength) {
+ data->buffer[i] = (bit ? sensor_data.temp_data :
+ sensor_data.hum_data);
+ i++;
+ }
+ }
+ mutex_unlock(&data->lock);
+
+ iio_push_to_buffers_with_timestamp(indio_dev, data->buffer,
+ pf->timestamp);
+err:
+ iio_trigger_notify_done(indio_dev->trig);
+ return IRQ_HANDLED;
+}
+
+static int am2315_read_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ int *val, int *val2, long mask)
+{
+ int ret;
+ struct am2315_sensor_data sensor_data;
+ struct am2315_data *data = iio_priv(indio_dev);
+
+ switch (mask) {
+ case IIO_CHAN_INFO_RAW:
+ ret = am2315_read_data(data, &sensor_data);
+ if (ret < 0)
+ return ret;
+ *val = (chan->type == IIO_HUMIDITYRELATIVE) ?
+ sensor_data.hum_data : sensor_data.temp_data;
+ return IIO_VAL_INT;
+ case IIO_CHAN_INFO_SCALE:
+ *val = 100;
+ return IIO_VAL_INT;
+ }
+
+ return -EINVAL;
+}
+
+static const struct iio_info am2315_info = {
+ .driver_module = THIS_MODULE,
+ .read_raw = am2315_read_raw,
+};
+
+static int am2315_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ int ret;
+ struct iio_dev *indio_dev;
+ struct am2315_data *data;
+
+ indio_dev = devm_iio_device_alloc(&client->dev, sizeof(*data));
+ if (!indio_dev) {
+ dev_err(&client->dev, "iio allocation failed!\n");
+ return -ENOMEM;
+ }
+
+ data = iio_priv(indio_dev);
+ data->client = client;
+ i2c_set_clientdata(client, indio_dev);
+ mutex_init(&data->lock);
+
+ indio_dev->dev.parent = &client->dev;
+ indio_dev->info = &am2315_info;
+ indio_dev->name = AM2315_DRIVER_NAME;
+ indio_dev->modes = INDIO_DIRECT_MODE;
+ indio_dev->channels = am2315_channels;
+ indio_dev->num_channels = ARRAY_SIZE(am2315_channels);
+
+ ret = iio_triggered_buffer_setup(indio_dev, NULL,
+ am2315_trigger_handler, NULL);
+ if (ret < 0) {
+ dev_err(&client->dev, "iio triggered buffer setup failed\n");
+ return ret;
+ }
+
+ ret = iio_device_register(indio_dev);
+ if (ret < 0)
+ goto err_buffer_cleanup;
+
+ return 0;
+
+err_buffer_cleanup:
+ iio_triggered_buffer_cleanup(indio_dev);
+ return ret;
+}
+
+static int am2315_remove(struct i2c_client *client)
+{
+ struct iio_dev *indio_dev = i2c_get_clientdata(client);
+
+ iio_device_unregister(indio_dev);
+ iio_triggered_buffer_cleanup(indio_dev);
+
+ return 0;
+}
+
+static const struct i2c_device_id am2315_i2c_id[] = {
+ {"am2315", 0},
+ {}
+};
+
+static const struct acpi_device_id am2315_acpi_id[] = {
+ {"AOS2315", 0},
+ {}
+};
+
+MODULE_DEVICE_TABLE(acpi, am2315_acpi_id);
+
+static struct i2c_driver am2315_driver = {
+ .driver = {
+ .name = "am2315",
+ .acpi_match_table = ACPI_PTR(am2315_acpi_id),
+ },
+ .probe = am2315_probe,
+ .remove = am2315_remove,
+ .id_table = am2315_i2c_id,
+};
+
+module_i2c_driver(am2315_driver);
+
+MODULE_AUTHOR("Tiberiu Breana <tiberiu.a.breana@intel.com>");
+MODULE_DESCRIPTION("Aosong AM2315 relative humidity and temperature");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/iio/humidity/dht11.c b/drivers/iio/humidity/dht11.c
index 20b500da94db..9c47bc98f3ac 100644
--- a/drivers/iio/humidity/dht11.c
+++ b/drivers/iio/humidity/dht11.c
@@ -96,6 +96,24 @@ struct dht11 {
struct {s64 ts; int value; } edges[DHT11_EDGES_PER_READ];
};
+#ifdef CONFIG_DYNAMIC_DEBUG
+/*
+ * dht11_edges_print: show the data as actually received by the
+ * driver.
+ */
+static void dht11_edges_print(struct dht11 *dht11)
+{
+ int i;
+
+ dev_dbg(dht11->dev, "%d edges detected:\n", dht11->num_edges);
+ for (i = 1; i < dht11->num_edges; ++i) {
+ dev_dbg(dht11->dev, "%d: %lld ns %s\n", i,
+ dht11->edges[i].ts - dht11->edges[i - 1].ts,
+ dht11->edges[i - 1].value ? "high" : "low");
+ }
+}
+#endif /* CONFIG_DYNAMIC_DEBUG */
+
static unsigned char dht11_decode_byte(char *bits)
{
unsigned char ret = 0;
@@ -119,8 +137,12 @@ static int dht11_decode(struct dht11 *dht11, int offset)
for (i = 0; i < DHT11_BITS_PER_READ; ++i) {
t = dht11->edges[offset + 2 * i + 2].ts -
dht11->edges[offset + 2 * i + 1].ts;
- if (!dht11->edges[offset + 2 * i + 1].value)
- return -EIO; /* lost synchronisation */
+ if (!dht11->edges[offset + 2 * i + 1].value) {
+ dev_dbg(dht11->dev,
+ "lost synchronisation at edge %d\n",
+ offset + 2 * i + 1);
+ return -EIO;
+ }
bits[i] = t > DHT11_THRESHOLD;
}
@@ -130,8 +152,10 @@ static int dht11_decode(struct dht11 *dht11, int offset)
temp_dec = dht11_decode_byte(&bits[24]);
checksum = dht11_decode_byte(&bits[32]);
- if (((hum_int + hum_dec + temp_int + temp_dec) & 0xff) != checksum)
+ if (((hum_int + hum_dec + temp_int + temp_dec) & 0xff) != checksum) {
+ dev_dbg(dht11->dev, "invalid checksum\n");
return -EIO;
+ }
dht11->timestamp = ktime_get_boot_ns();
if (hum_int < 20) { /* DHT22 */
@@ -182,6 +206,7 @@ static int dht11_read_raw(struct iio_dev *iio_dev,
mutex_lock(&dht11->lock);
if (dht11->timestamp + DHT11_DATA_VALID_TIME < ktime_get_boot_ns()) {
timeres = ktime_get_resolution_ns();
+ dev_dbg(dht11->dev, "current timeresolution: %dns\n", timeres);
if (timeres > DHT11_MIN_TIMERES) {
dev_err(dht11->dev, "timeresolution %dns too low\n",
timeres);
@@ -219,10 +244,13 @@ static int dht11_read_raw(struct iio_dev *iio_dev,
free_irq(dht11->irq, iio_dev);
+#ifdef CONFIG_DYNAMIC_DEBUG
+ dht11_edges_print(dht11);
+#endif
+
if (ret == 0 && dht11->num_edges < DHT11_EDGES_PER_READ - 1) {
- dev_err(&iio_dev->dev,
- "Only %d signal edges detected\n",
- dht11->num_edges);
+ dev_err(dht11->dev, "Only %d signal edges detected\n",
+ dht11->num_edges);
ret = -ETIMEDOUT;
}
if (ret < 0)
diff --git a/drivers/iio/imu/Kconfig b/drivers/iio/imu/Kconfig
index 5e610f7de5aa..1f1ad41ef881 100644
--- a/drivers/iio/imu/Kconfig
+++ b/drivers/iio/imu/Kconfig
@@ -25,6 +25,8 @@ config ADIS16480
Say yes here to build support for Analog Devices ADIS16375, ADIS16480,
ADIS16485, ADIS16488 inertial sensors.
+source "drivers/iio/imu/bmi160/Kconfig"
+
config KMX61
tristate "Kionix KMX61 6-axis accelerometer and magnetometer"
depends on I2C
diff --git a/drivers/iio/imu/Makefile b/drivers/iio/imu/Makefile
index e1e6e3d70e26..c71bcd30dc38 100644
--- a/drivers/iio/imu/Makefile
+++ b/drivers/iio/imu/Makefile
@@ -13,6 +13,7 @@ adis_lib-$(CONFIG_IIO_ADIS_LIB_BUFFER) += adis_trigger.o
adis_lib-$(CONFIG_IIO_ADIS_LIB_BUFFER) += adis_buffer.o
obj-$(CONFIG_IIO_ADIS_LIB) += adis_lib.o
+obj-y += bmi160/
obj-y += inv_mpu6050/
obj-$(CONFIG_KMX61) += kmx61.o
diff --git a/drivers/iio/imu/adis.c b/drivers/iio/imu/adis.c
index 911255d41c1a..ad6f91d06185 100644
--- a/drivers/iio/imu/adis.c
+++ b/drivers/iio/imu/adis.c
@@ -324,7 +324,12 @@ static int adis_self_test(struct adis *adis)
msleep(adis->data->startup_delay);
- return adis_check_status(adis);
+ ret = adis_check_status(adis);
+
+ if (adis->data->self_test_no_autoclear)
+ adis_write_reg_16(adis, adis->data->msc_ctrl_reg, 0x00);
+
+ return ret;
}
/**
diff --git a/drivers/iio/imu/bmi160/Kconfig b/drivers/iio/imu/bmi160/Kconfig
new file mode 100644
index 000000000000..005c17ccc2b0
--- /dev/null
+++ b/drivers/iio/imu/bmi160/Kconfig
@@ -0,0 +1,32 @@
+#
+# BMI160 IMU driver
+#
+
+config BMI160
+ tristate
+ select IIO_BUFFER
+ select IIO_TRIGGERED_BUFFER
+
+config BMI160_I2C
+ tristate "Bosch BMI160 I2C driver"
+ depends on I2C
+ select BMI160
+ select REGMAP_I2C
+ help
+ If you say yes here you get support for BMI160 IMU on I2C with
+ accelerometer, gyroscope and external BMM150 magnetometer.
+
+ This driver can also be built as a module. If so, the module will be
+ called bmi160_i2c.
+
+config BMI160_SPI
+ tristate "Bosch BMI160 SPI driver"
+ depends on SPI
+ select BMI160
+ select REGMAP_SPI
+ help
+ If you say yes here you get support for BMI160 IMU on SPI with
+ accelerometer, gyroscope and external BMM150 magnetometer.
+
+ This driver can also be built as a module. If so, the module will be
+ called bmi160_spi.
diff --git a/drivers/iio/imu/bmi160/Makefile b/drivers/iio/imu/bmi160/Makefile
new file mode 100644
index 000000000000..10365e493ae2
--- /dev/null
+++ b/drivers/iio/imu/bmi160/Makefile
@@ -0,0 +1,6 @@
+#
+# Makefile for Bosch BMI160 IMU
+#
+obj-$(CONFIG_BMI160) += bmi160_core.o
+obj-$(CONFIG_BMI160_I2C) += bmi160_i2c.o
+obj-$(CONFIG_BMI160_SPI) += bmi160_spi.o
diff --git a/drivers/iio/imu/bmi160/bmi160.h b/drivers/iio/imu/bmi160/bmi160.h
new file mode 100644
index 000000000000..d2ae6ed70271
--- /dev/null
+++ b/drivers/iio/imu/bmi160/bmi160.h
@@ -0,0 +1,10 @@
+#ifndef BMI160_H_
+#define BMI160_H_
+
+extern const struct regmap_config bmi160_regmap_config;
+
+int bmi160_core_probe(struct device *dev, struct regmap *regmap,
+ const char *name, bool use_spi);
+void bmi160_core_remove(struct device *dev);
+
+#endif /* BMI160_H_ */
diff --git a/drivers/iio/imu/bmi160/bmi160_core.c b/drivers/iio/imu/bmi160/bmi160_core.c
new file mode 100644
index 000000000000..0bf92b06d7d8
--- /dev/null
+++ b/drivers/iio/imu/bmi160/bmi160_core.c
@@ -0,0 +1,596 @@
+/*
+ * BMI160 - Bosch IMU (accel, gyro plus external magnetometer)
+ *
+ * Copyright (c) 2016, Intel Corporation.
+ *
+ * This file is subject to the terms and conditions of version 2 of
+ * the GNU General Public License. See the file COPYING in the main
+ * directory of this archive for more details.
+ *
+ * IIO core driver for BMI160, with support for I2C/SPI busses
+ *
+ * TODO: magnetometer, interrupts, hardware FIFO
+ */
+#include <linux/module.h>
+#include <linux/regmap.h>
+#include <linux/acpi.h>
+#include <linux/delay.h>
+
+#include <linux/iio/iio.h>
+#include <linux/iio/triggered_buffer.h>
+#include <linux/iio/trigger_consumer.h>
+#include <linux/iio/buffer.h>
+
+#include "bmi160.h"
+
+#define BMI160_REG_CHIP_ID 0x00
+#define BMI160_CHIP_ID_VAL 0xD1
+
+#define BMI160_REG_PMU_STATUS 0x03
+
+/* X axis data low byte address, the rest can be obtained using axis offset */
+#define BMI160_REG_DATA_MAGN_XOUT_L 0x04
+#define BMI160_REG_DATA_GYRO_XOUT_L 0x0C
+#define BMI160_REG_DATA_ACCEL_XOUT_L 0x12
+
+#define BMI160_REG_ACCEL_CONFIG 0x40
+#define BMI160_ACCEL_CONFIG_ODR_MASK GENMASK(3, 0)
+#define BMI160_ACCEL_CONFIG_BWP_MASK GENMASK(6, 4)
+
+#define BMI160_REG_ACCEL_RANGE 0x41
+#define BMI160_ACCEL_RANGE_2G 0x03
+#define BMI160_ACCEL_RANGE_4G 0x05
+#define BMI160_ACCEL_RANGE_8G 0x08
+#define BMI160_ACCEL_RANGE_16G 0x0C
+
+#define BMI160_REG_GYRO_CONFIG 0x42
+#define BMI160_GYRO_CONFIG_ODR_MASK GENMASK(3, 0)
+#define BMI160_GYRO_CONFIG_BWP_MASK GENMASK(5, 4)
+
+#define BMI160_REG_GYRO_RANGE 0x43
+#define BMI160_GYRO_RANGE_2000DPS 0x00
+#define BMI160_GYRO_RANGE_1000DPS 0x01
+#define BMI160_GYRO_RANGE_500DPS 0x02
+#define BMI160_GYRO_RANGE_250DPS 0x03
+#define BMI160_GYRO_RANGE_125DPS 0x04
+
+#define BMI160_REG_CMD 0x7E
+#define BMI160_CMD_ACCEL_PM_SUSPEND 0x10
+#define BMI160_CMD_ACCEL_PM_NORMAL 0x11
+#define BMI160_CMD_ACCEL_PM_LOW_POWER 0x12
+#define BMI160_CMD_GYRO_PM_SUSPEND 0x14
+#define BMI160_CMD_GYRO_PM_NORMAL 0x15
+#define BMI160_CMD_GYRO_PM_FAST_STARTUP 0x17
+#define BMI160_CMD_SOFTRESET 0xB6
+
+#define BMI160_REG_DUMMY 0x7F
+
+#define BMI160_ACCEL_PMU_MIN_USLEEP 3200
+#define BMI160_ACCEL_PMU_MAX_USLEEP 3800
+#define BMI160_GYRO_PMU_MIN_USLEEP 55000
+#define BMI160_GYRO_PMU_MAX_USLEEP 80000
+#define BMI160_SOFTRESET_USLEEP 1000
+
+#define BMI160_CHANNEL(_type, _axis, _index) { \
+ .type = _type, \
+ .modified = 1, \
+ .channel2 = IIO_MOD_##_axis, \
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \
+ .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE) | \
+ BIT(IIO_CHAN_INFO_SAMP_FREQ), \
+ .scan_index = _index, \
+ .scan_type = { \
+ .sign = 's', \
+ .realbits = 16, \
+ .storagebits = 16, \
+ .endianness = IIO_LE, \
+ }, \
+}
+
+/* scan indexes follow DATA register order */
+enum bmi160_scan_axis {
+ BMI160_SCAN_EXT_MAGN_X = 0,
+ BMI160_SCAN_EXT_MAGN_Y,
+ BMI160_SCAN_EXT_MAGN_Z,
+ BMI160_SCAN_RHALL,
+ BMI160_SCAN_GYRO_X,
+ BMI160_SCAN_GYRO_Y,
+ BMI160_SCAN_GYRO_Z,
+ BMI160_SCAN_ACCEL_X,
+ BMI160_SCAN_ACCEL_Y,
+ BMI160_SCAN_ACCEL_Z,
+ BMI160_SCAN_TIMESTAMP,
+};
+
+enum bmi160_sensor_type {
+ BMI160_ACCEL = 0,
+ BMI160_GYRO,
+ BMI160_EXT_MAGN,
+ BMI160_NUM_SENSORS /* must be last */
+};
+
+struct bmi160_data {
+ struct regmap *regmap;
+};
+
+const struct regmap_config bmi160_regmap_config = {
+ .reg_bits = 8,
+ .val_bits = 8,
+};
+EXPORT_SYMBOL(bmi160_regmap_config);
+
+struct bmi160_regs {
+ u8 data; /* LSB byte register for X-axis */
+ u8 config;
+ u8 config_odr_mask;
+ u8 config_bwp_mask;
+ u8 range;
+ u8 pmu_cmd_normal;
+ u8 pmu_cmd_suspend;
+};
+
+static struct bmi160_regs bmi160_regs[] = {
+ [BMI160_ACCEL] = {
+ .data = BMI160_REG_DATA_ACCEL_XOUT_L,
+ .config = BMI160_REG_ACCEL_CONFIG,
+ .config_odr_mask = BMI160_ACCEL_CONFIG_ODR_MASK,
+ .config_bwp_mask = BMI160_ACCEL_CONFIG_BWP_MASK,
+ .range = BMI160_REG_ACCEL_RANGE,
+ .pmu_cmd_normal = BMI160_CMD_ACCEL_PM_NORMAL,
+ .pmu_cmd_suspend = BMI160_CMD_ACCEL_PM_SUSPEND,
+ },
+ [BMI160_GYRO] = {
+ .data = BMI160_REG_DATA_GYRO_XOUT_L,
+ .config = BMI160_REG_GYRO_CONFIG,
+ .config_odr_mask = BMI160_GYRO_CONFIG_ODR_MASK,
+ .config_bwp_mask = BMI160_GYRO_CONFIG_BWP_MASK,
+ .range = BMI160_REG_GYRO_RANGE,
+ .pmu_cmd_normal = BMI160_CMD_GYRO_PM_NORMAL,
+ .pmu_cmd_suspend = BMI160_CMD_GYRO_PM_SUSPEND,
+ },
+};
+
+struct bmi160_pmu_time {
+ unsigned long min;
+ unsigned long max;
+};
+
+static struct bmi160_pmu_time bmi160_pmu_time[] = {
+ [BMI160_ACCEL] = {
+ .min = BMI160_ACCEL_PMU_MIN_USLEEP,
+ .max = BMI160_ACCEL_PMU_MAX_USLEEP
+ },
+ [BMI160_GYRO] = {
+ .min = BMI160_GYRO_PMU_MIN_USLEEP,
+ .max = BMI160_GYRO_PMU_MAX_USLEEP,
+ },
+};
+
+struct bmi160_scale {
+ u8 bits;
+ int uscale;
+};
+
+struct bmi160_odr {
+ u8 bits;
+ int odr;
+ int uodr;
+};
+
+static const struct bmi160_scale bmi160_accel_scale[] = {
+ { BMI160_ACCEL_RANGE_2G, 598},
+ { BMI160_ACCEL_RANGE_4G, 1197},
+ { BMI160_ACCEL_RANGE_8G, 2394},
+ { BMI160_ACCEL_RANGE_16G, 4788},
+};
+
+static const struct bmi160_scale bmi160_gyro_scale[] = {
+ { BMI160_GYRO_RANGE_2000DPS, 1065},
+ { BMI160_GYRO_RANGE_1000DPS, 532},
+ { BMI160_GYRO_RANGE_500DPS, 266},
+ { BMI160_GYRO_RANGE_250DPS, 133},
+ { BMI160_GYRO_RANGE_125DPS, 66},
+};
+
+struct bmi160_scale_item {
+ const struct bmi160_scale *tbl;
+ int num;
+};
+
+static const struct bmi160_scale_item bmi160_scale_table[] = {
+ [BMI160_ACCEL] = {
+ .tbl = bmi160_accel_scale,
+ .num = ARRAY_SIZE(bmi160_accel_scale),
+ },
+ [BMI160_GYRO] = {
+ .tbl = bmi160_gyro_scale,
+ .num = ARRAY_SIZE(bmi160_gyro_scale),
+ },
+};
+
+static const struct bmi160_odr bmi160_accel_odr[] = {
+ {0x01, 0, 78125},
+ {0x02, 1, 5625},
+ {0x03, 3, 125},
+ {0x04, 6, 25},
+ {0x05, 12, 5},
+ {0x06, 25, 0},
+ {0x07, 50, 0},
+ {0x08, 100, 0},
+ {0x09, 200, 0},
+ {0x0A, 400, 0},
+ {0x0B, 800, 0},
+ {0x0C, 1600, 0},
+};
+
+static const struct bmi160_odr bmi160_gyro_odr[] = {
+ {0x06, 25, 0},
+ {0x07, 50, 0},
+ {0x08, 100, 0},
+ {0x09, 200, 0},
+ {0x0A, 400, 0},
+ {0x0B, 800, 0},
+ {0x0C, 1600, 0},
+ {0x0D, 3200, 0},
+};
+
+struct bmi160_odr_item {
+ const struct bmi160_odr *tbl;
+ int num;
+};
+
+static const struct bmi160_odr_item bmi160_odr_table[] = {
+ [BMI160_ACCEL] = {
+ .tbl = bmi160_accel_odr,
+ .num = ARRAY_SIZE(bmi160_accel_odr),
+ },
+ [BMI160_GYRO] = {
+ .tbl = bmi160_gyro_odr,
+ .num = ARRAY_SIZE(bmi160_gyro_odr),
+ },
+};
+
+static const struct iio_chan_spec bmi160_channels[] = {
+ BMI160_CHANNEL(IIO_ACCEL, X, BMI160_SCAN_ACCEL_X),
+ BMI160_CHANNEL(IIO_ACCEL, Y, BMI160_SCAN_ACCEL_Y),
+ BMI160_CHANNEL(IIO_ACCEL, Z, BMI160_SCAN_ACCEL_Z),
+ BMI160_CHANNEL(IIO_ANGL_VEL, X, BMI160_SCAN_GYRO_X),
+ BMI160_CHANNEL(IIO_ANGL_VEL, Y, BMI160_SCAN_GYRO_Y),
+ BMI160_CHANNEL(IIO_ANGL_VEL, Z, BMI160_SCAN_GYRO_Z),
+ IIO_CHAN_SOFT_TIMESTAMP(BMI160_SCAN_TIMESTAMP),
+};
+
+static enum bmi160_sensor_type bmi160_to_sensor(enum iio_chan_type iio_type)
+{
+ switch (iio_type) {
+ case IIO_ACCEL:
+ return BMI160_ACCEL;
+ case IIO_ANGL_VEL:
+ return BMI160_GYRO;
+ default:
+ return -EINVAL;
+ }
+}
+
+static
+int bmi160_set_mode(struct bmi160_data *data, enum bmi160_sensor_type t,
+ bool mode)
+{
+ int ret;
+ u8 cmd;
+
+ if (mode)
+ cmd = bmi160_regs[t].pmu_cmd_normal;
+ else
+ cmd = bmi160_regs[t].pmu_cmd_suspend;
+
+ ret = regmap_write(data->regmap, BMI160_REG_CMD, cmd);
+ if (ret < 0)
+ return ret;
+
+ usleep_range(bmi160_pmu_time[t].min, bmi160_pmu_time[t].max);
+
+ return 0;
+}
+
+static
+int bmi160_set_scale(struct bmi160_data *data, enum bmi160_sensor_type t,
+ int uscale)
+{
+ int i;
+
+ for (i = 0; i < bmi160_scale_table[t].num; i++)
+ if (bmi160_scale_table[t].tbl[i].uscale == uscale)
+ break;
+
+ if (i == bmi160_scale_table[t].num)
+ return -EINVAL;
+
+ return regmap_write(data->regmap, bmi160_regs[t].range,
+ bmi160_scale_table[t].tbl[i].bits);
+}
+
+static
+int bmi160_get_scale(struct bmi160_data *data, enum bmi160_sensor_type t,
+ int *uscale)
+{
+ int i, ret, val;
+
+ ret = regmap_read(data->regmap, bmi160_regs[t].range, &val);
+ if (ret < 0)
+ return ret;
+
+ for (i = 0; i < bmi160_scale_table[t].num; i++)
+ if (bmi160_scale_table[t].tbl[i].bits == val) {
+ *uscale = bmi160_scale_table[t].tbl[i].uscale;
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static int bmi160_get_data(struct bmi160_data *data, int chan_type,
+ int axis, int *val)
+{
+ u8 reg;
+ int ret;
+ __le16 sample;
+ enum bmi160_sensor_type t = bmi160_to_sensor(chan_type);
+
+ reg = bmi160_regs[t].data + (axis - IIO_MOD_X) * sizeof(__le16);
+
+ ret = regmap_bulk_read(data->regmap, reg, &sample, sizeof(__le16));
+ if (ret < 0)
+ return ret;
+
+ *val = sign_extend32(le16_to_cpu(sample), 15);
+
+ return 0;
+}
+
+static
+int bmi160_set_odr(struct bmi160_data *data, enum bmi160_sensor_type t,
+ int odr, int uodr)
+{
+ int i;
+
+ for (i = 0; i < bmi160_odr_table[t].num; i++)
+ if (bmi160_odr_table[t].tbl[i].odr == odr &&
+ bmi160_odr_table[t].tbl[i].uodr == uodr)
+ break;
+
+ if (i >= bmi160_odr_table[t].num)
+ return -EINVAL;
+
+ return regmap_update_bits(data->regmap,
+ bmi160_regs[t].config,
+ bmi160_regs[t].config_odr_mask,
+ bmi160_odr_table[t].tbl[i].bits);
+}
+
+static int bmi160_get_odr(struct bmi160_data *data, enum bmi160_sensor_type t,
+ int *odr, int *uodr)
+{
+ int i, val, ret;
+
+ ret = regmap_read(data->regmap, bmi160_regs[t].config, &val);
+ if (ret < 0)
+ return ret;
+
+ val &= bmi160_regs[t].config_odr_mask;
+
+ for (i = 0; i < bmi160_odr_table[t].num; i++)
+ if (val == bmi160_odr_table[t].tbl[i].bits)
+ break;
+
+ if (i >= bmi160_odr_table[t].num)
+ return -EINVAL;
+
+ *odr = bmi160_odr_table[t].tbl[i].odr;
+ *uodr = bmi160_odr_table[t].tbl[i].uodr;
+
+ return 0;
+}
+
+static irqreturn_t bmi160_trigger_handler(int irq, void *p)
+{
+ struct iio_poll_func *pf = p;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct bmi160_data *data = iio_priv(indio_dev);
+ s16 buf[16]; /* 3 sens x 3 axis x s16 + 3 x s16 pad + 4 x s16 tstamp */
+ int i, ret, j = 0, base = BMI160_REG_DATA_MAGN_XOUT_L;
+ __le16 sample;
+
+ for_each_set_bit(i, indio_dev->active_scan_mask,
+ indio_dev->masklength) {
+ ret = regmap_bulk_read(data->regmap, base + i * sizeof(__le16),
+ &sample, sizeof(__le16));
+ if (ret < 0)
+ goto done;
+ buf[j++] = sample;
+ }
+
+ iio_push_to_buffers_with_timestamp(indio_dev, buf, iio_get_time_ns());
+done:
+ iio_trigger_notify_done(indio_dev->trig);
+ return IRQ_HANDLED;
+}
+
+static int bmi160_read_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ int *val, int *val2, long mask)
+{
+ int ret;
+ struct bmi160_data *data = iio_priv(indio_dev);
+
+ switch (mask) {
+ case IIO_CHAN_INFO_RAW:
+ ret = bmi160_get_data(data, chan->type, chan->channel2, val);
+ if (ret < 0)
+ return ret;
+ return IIO_VAL_INT;
+ case IIO_CHAN_INFO_SCALE:
+ *val = 0;
+ ret = bmi160_get_scale(data,
+ bmi160_to_sensor(chan->type), val2);
+ return ret < 0 ? ret : IIO_VAL_INT_PLUS_MICRO;
+ case IIO_CHAN_INFO_SAMP_FREQ:
+ ret = bmi160_get_odr(data, bmi160_to_sensor(chan->type),
+ val, val2);
+ return ret < 0 ? ret : IIO_VAL_INT_PLUS_MICRO;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int bmi160_write_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ int val, int val2, long mask)
+{
+ struct bmi160_data *data = iio_priv(indio_dev);
+
+ switch (mask) {
+ case IIO_CHAN_INFO_SCALE:
+ return bmi160_set_scale(data,
+ bmi160_to_sensor(chan->type), val2);
+ break;
+ case IIO_CHAN_INFO_SAMP_FREQ:
+ return bmi160_set_odr(data, bmi160_to_sensor(chan->type),
+ val, val2);
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static const struct iio_info bmi160_info = {
+ .driver_module = THIS_MODULE,
+ .read_raw = bmi160_read_raw,
+ .write_raw = bmi160_write_raw,
+};
+
+static const char *bmi160_match_acpi_device(struct device *dev)
+{
+ const struct acpi_device_id *id;
+
+ id = acpi_match_device(dev->driver->acpi_match_table, dev);
+ if (!id)
+ return NULL;
+
+ return dev_name(dev);
+}
+
+static int bmi160_chip_init(struct bmi160_data *data, bool use_spi)
+{
+ int ret;
+ unsigned int val;
+ struct device *dev = regmap_get_device(data->regmap);
+
+ ret = regmap_write(data->regmap, BMI160_REG_CMD, BMI160_CMD_SOFTRESET);
+ if (ret < 0)
+ return ret;
+
+ usleep_range(BMI160_SOFTRESET_USLEEP, BMI160_SOFTRESET_USLEEP + 1);
+
+ /*
+ * CS rising edge is needed before starting SPI, so do a dummy read
+ * See Section 3.2.1, page 86 of the datasheet
+ */
+ if (use_spi) {
+ ret = regmap_read(data->regmap, BMI160_REG_DUMMY, &val);
+ if (ret < 0)
+ return ret;
+ }
+
+ ret = regmap_read(data->regmap, BMI160_REG_CHIP_ID, &val);
+ if (ret < 0) {
+ dev_err(dev, "Error reading chip id\n");
+ return ret;
+ }
+ if (val != BMI160_CHIP_ID_VAL) {
+ dev_err(dev, "Wrong chip id, got %x expected %x\n",
+ val, BMI160_CHIP_ID_VAL);
+ return -ENODEV;
+ }
+
+ ret = bmi160_set_mode(data, BMI160_ACCEL, true);
+ if (ret < 0)
+ return ret;
+
+ ret = bmi160_set_mode(data, BMI160_GYRO, true);
+ if (ret < 0)
+ return ret;
+
+ return 0;
+}
+
+static void bmi160_chip_uninit(struct bmi160_data *data)
+{
+ bmi160_set_mode(data, BMI160_GYRO, false);
+ bmi160_set_mode(data, BMI160_ACCEL, false);
+}
+
+int bmi160_core_probe(struct device *dev, struct regmap *regmap,
+ const char *name, bool use_spi)
+{
+ struct iio_dev *indio_dev;
+ struct bmi160_data *data;
+ int ret;
+
+ indio_dev = devm_iio_device_alloc(dev, sizeof(*data));
+ if (!indio_dev)
+ return -ENOMEM;
+
+ data = iio_priv(indio_dev);
+ dev_set_drvdata(dev, indio_dev);
+ data->regmap = regmap;
+
+ ret = bmi160_chip_init(data, use_spi);
+ if (ret < 0)
+ return ret;
+
+ if (!name && ACPI_HANDLE(dev))
+ name = bmi160_match_acpi_device(dev);
+
+ indio_dev->dev.parent = dev;
+ indio_dev->channels = bmi160_channels;
+ indio_dev->num_channels = ARRAY_SIZE(bmi160_channels);
+ indio_dev->name = name;
+ indio_dev->modes = INDIO_DIRECT_MODE;
+ indio_dev->info = &bmi160_info;
+
+ ret = iio_triggered_buffer_setup(indio_dev, NULL,
+ bmi160_trigger_handler, NULL);
+ if (ret < 0)
+ goto uninit;
+
+ ret = iio_device_register(indio_dev);
+ if (ret < 0)
+ goto buffer_cleanup;
+
+ return 0;
+buffer_cleanup:
+ iio_triggered_buffer_cleanup(indio_dev);
+uninit:
+ bmi160_chip_uninit(data);
+ return ret;
+}
+EXPORT_SYMBOL_GPL(bmi160_core_probe);
+
+void bmi160_core_remove(struct device *dev)
+{
+ struct iio_dev *indio_dev = dev_get_drvdata(dev);
+ struct bmi160_data *data = iio_priv(indio_dev);
+
+ iio_device_unregister(indio_dev);
+ iio_triggered_buffer_cleanup(indio_dev);
+ bmi160_chip_uninit(data);
+}
+EXPORT_SYMBOL_GPL(bmi160_core_remove);
+
+MODULE_AUTHOR("Daniel Baluta <daniel.baluta@intel.com");
+MODULE_DESCRIPTION("Bosch BMI160 driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/iio/imu/bmi160/bmi160_i2c.c b/drivers/iio/imu/bmi160/bmi160_i2c.c
new file mode 100644
index 000000000000..07a179d8fb48
--- /dev/null
+++ b/drivers/iio/imu/bmi160/bmi160_i2c.c
@@ -0,0 +1,72 @@
+/*
+ * BMI160 - Bosch IMU, I2C bits
+ *
+ * Copyright (c) 2016, Intel Corporation.
+ *
+ * This file is subject to the terms and conditions of version 2 of
+ * the GNU General Public License. See the file COPYING in the main
+ * directory of this archive for more details.
+ *
+ * 7-bit I2C slave address is:
+ * - 0x68 if SDO is pulled to GND
+ * - 0x69 if SDO is pulled to VDDIO
+ */
+#include <linux/module.h>
+#include <linux/i2c.h>
+#include <linux/regmap.h>
+#include <linux/acpi.h>
+
+#include "bmi160.h"
+
+static int bmi160_i2c_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct regmap *regmap;
+ const char *name = NULL;
+
+ regmap = devm_regmap_init_i2c(client, &bmi160_regmap_config);
+ if (IS_ERR(regmap)) {
+ dev_err(&client->dev, "Failed to register i2c regmap %d\n",
+ (int)PTR_ERR(regmap));
+ return PTR_ERR(regmap);
+ }
+
+ if (id)
+ name = id->name;
+
+ return bmi160_core_probe(&client->dev, regmap, name, false);
+}
+
+static int bmi160_i2c_remove(struct i2c_client *client)
+{
+ bmi160_core_remove(&client->dev);
+
+ return 0;
+}
+
+static const struct i2c_device_id bmi160_i2c_id[] = {
+ {"bmi160", 0},
+ {}
+};
+MODULE_DEVICE_TABLE(i2c, bmi160_i2c_id);
+
+static const struct acpi_device_id bmi160_acpi_match[] = {
+ {"BMI0160", 0},
+ { },
+};
+MODULE_DEVICE_TABLE(acpi, bmi160_acpi_match);
+
+static struct i2c_driver bmi160_i2c_driver = {
+ .driver = {
+ .name = "bmi160_i2c",
+ .acpi_match_table = ACPI_PTR(bmi160_acpi_match),
+ },
+ .probe = bmi160_i2c_probe,
+ .remove = bmi160_i2c_remove,
+ .id_table = bmi160_i2c_id,
+};
+module_i2c_driver(bmi160_i2c_driver);
+
+MODULE_AUTHOR("Daniel Baluta <daniel.baluta@intel.com>");
+MODULE_DESCRIPTION("BMI160 I2C driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/iio/imu/bmi160/bmi160_spi.c b/drivers/iio/imu/bmi160/bmi160_spi.c
new file mode 100644
index 000000000000..1ec8b12bd984
--- /dev/null
+++ b/drivers/iio/imu/bmi160/bmi160_spi.c
@@ -0,0 +1,63 @@
+/*
+ * BMI160 - Bosch IMU, SPI bits
+ *
+ * Copyright (c) 2016, Intel Corporation.
+ *
+ * This file is subject to the terms and conditions of version 2 of
+ * the GNU General Public License. See the file COPYING in the main
+ * directory of this archive for more details.
+ */
+#include <linux/module.h>
+#include <linux/spi/spi.h>
+#include <linux/regmap.h>
+#include <linux/acpi.h>
+
+#include "bmi160.h"
+
+static int bmi160_spi_probe(struct spi_device *spi)
+{
+ struct regmap *regmap;
+ const struct spi_device_id *id = spi_get_device_id(spi);
+
+ regmap = devm_regmap_init_spi(spi, &bmi160_regmap_config);
+ if (IS_ERR(regmap)) {
+ dev_err(&spi->dev, "Failed to register spi regmap %d\n",
+ (int)PTR_ERR(regmap));
+ return PTR_ERR(regmap);
+ }
+ return bmi160_core_probe(&spi->dev, regmap, id->name, true);
+}
+
+static int bmi160_spi_remove(struct spi_device *spi)
+{
+ bmi160_core_remove(&spi->dev);
+
+ return 0;
+}
+
+static const struct spi_device_id bmi160_spi_id[] = {
+ {"bmi160", 0},
+ {}
+};
+MODULE_DEVICE_TABLE(spi, bmi160_spi_id);
+
+static const struct acpi_device_id bmi160_acpi_match[] = {
+ {"BMI0160", 0},
+ { },
+};
+MODULE_DEVICE_TABLE(acpi, bmi160_acpi_match);
+
+static struct spi_driver bmi160_spi_driver = {
+ .probe = bmi160_spi_probe,
+ .remove = bmi160_spi_remove,
+ .id_table = bmi160_spi_id,
+ .driver = {
+ .acpi_match_table = ACPI_PTR(bmi160_acpi_match),
+ .name = "bmi160_spi",
+ },
+};
+module_spi_driver(bmi160_spi_driver);
+
+MODULE_AUTHOR("Daniel Baluta <daniel.baluta@intel.com");
+MODULE_DESCRIPTION("Bosch BMI160 SPI driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/iio/imu/inv_mpu6050/Kconfig b/drivers/iio/imu/inv_mpu6050/Kconfig
index 847455a2d6bb..f756feecfa4c 100644
--- a/drivers/iio/imu/inv_mpu6050/Kconfig
+++ b/drivers/iio/imu/inv_mpu6050/Kconfig
@@ -13,10 +13,8 @@ config INV_MPU6050_I2C
select INV_MPU6050_IIO
select REGMAP_I2C
help
- This driver supports the Invensense MPU6050 devices.
- This driver can also support MPU6500 in MPU6050 compatibility mode
- and also in MPU6500 mode with some limitations.
- It is a gyroscope/accelerometer combo device.
+ This driver supports the Invensense MPU6050/6500/9150 motion tracking
+ devices over I2C.
This driver can be built as a module. The module will be called
inv-mpu6050-i2c.
@@ -26,7 +24,7 @@ config INV_MPU6050_SPI
select INV_MPU6050_IIO
select REGMAP_SPI
help
- This driver supports the Invensense MPU6050 devices.
- It is a gyroscope/accelerometer combo device.
+ This driver supports the Invensense MPU6000/6500/9150 motion tracking
+ devices over SPI.
This driver can be built as a module. The module will be called
inv-mpu6050-spi.
diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_acpi.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_acpi.c
index 2771106fd650..f62b8bd9ad7e 100644
--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_acpi.c
+++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_acpi.c
@@ -183,7 +183,7 @@ int inv_mpu_acpi_create_mux_client(struct i2c_client *client)
} else
return 0; /* no secondary addr, which is OK */
}
- st->mux_client = i2c_new_device(st->mux_adapter, &info);
+ st->mux_client = i2c_new_device(st->muxc->adapter[0], &info);
if (!st->mux_client)
return -ENODEV;
}
diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
index d192953e9a38..ee40dae5ab58 100644
--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
+++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_core.c
@@ -23,7 +23,6 @@
#include <linux/kfifo.h>
#include <linux/spinlock.h>
#include <linux/iio/iio.h>
-#include <linux/i2c-mux.h>
#include <linux/acpi.h>
#include "inv_mpu_iio.h"
@@ -88,16 +87,29 @@ static const struct inv_mpu6050_chip_config chip_config_6050 = {
.accl_fs = INV_MPU6050_FS_02G,
};
+/* Indexed by enum inv_devices */
static const struct inv_mpu6050_hw hw_info[] = {
{
- .num_reg = 117,
+ .whoami = INV_MPU6050_WHOAMI_VALUE,
+ .name = "MPU6050",
+ .reg = &reg_set_6050,
+ .config = &chip_config_6050,
+ },
+ {
+ .whoami = INV_MPU6500_WHOAMI_VALUE,
.name = "MPU6500",
.reg = &reg_set_6500,
.config = &chip_config_6050,
},
{
- .num_reg = 117,
- .name = "MPU6050",
+ .whoami = INV_MPU6000_WHOAMI_VALUE,
+ .name = "MPU6000",
+ .reg = &reg_set_6050,
+ .config = &chip_config_6050,
+ },
+ {
+ .whoami = INV_MPU9150_WHOAMI_VALUE,
+ .name = "MPU9150",
.reg = &reg_set_6050,
.config = &chip_config_6050,
},
@@ -600,6 +612,10 @@ inv_fifo_rate_show(struct device *dev, struct device_attribute *attr,
/**
* inv_attr_show() - calling this function will show current
* parameters.
+ *
+ * Deprecated in favor of IIO mounting matrix API.
+ *
+ * See inv_get_mount_matrix()
*/
static ssize_t inv_attr_show(struct device *dev, struct device_attribute *attr,
char *buf)
@@ -644,6 +660,18 @@ static int inv_mpu6050_validate_trigger(struct iio_dev *indio_dev,
return 0;
}
+static const struct iio_mount_matrix *
+inv_get_mount_matrix(const struct iio_dev *indio_dev,
+ const struct iio_chan_spec *chan)
+{
+ return &((struct inv_mpu6050_state *)iio_priv(indio_dev))->orientation;
+}
+
+static const struct iio_chan_spec_ext_info inv_ext_info[] = {
+ IIO_MOUNT_MATRIX(IIO_SHARED_BY_TYPE, inv_get_mount_matrix),
+ { },
+};
+
#define INV_MPU6050_CHAN(_type, _channel2, _index) \
{ \
.type = _type, \
@@ -660,6 +688,7 @@ static int inv_mpu6050_validate_trigger(struct iio_dev *indio_dev,
.shift = 0, \
.endianness = IIO_BE, \
}, \
+ .ext_info = inv_ext_info, \
}
static const struct iio_chan_spec inv_mpu_channels[] = {
@@ -692,14 +721,16 @@ static IIO_CONST_ATTR(in_accel_scale_available,
"0.000598 0.001196 0.002392 0.004785");
static IIO_DEV_ATTR_SAMP_FREQ(S_IRUGO | S_IWUSR, inv_fifo_rate_show,
inv_mpu6050_fifo_rate_store);
+
+/* Deprecated: kept for userspace backward compatibility. */
static IIO_DEVICE_ATTR(in_gyro_matrix, S_IRUGO, inv_attr_show, NULL,
ATTR_GYRO_MATRIX);
static IIO_DEVICE_ATTR(in_accel_matrix, S_IRUGO, inv_attr_show, NULL,
ATTR_ACCL_MATRIX);
static struct attribute *inv_attributes[] = {
- &iio_dev_attr_in_gyro_matrix.dev_attr.attr,
- &iio_dev_attr_in_accel_matrix.dev_attr.attr,
+ &iio_dev_attr_in_gyro_matrix.dev_attr.attr, /* deprecated */
+ &iio_dev_attr_in_accel_matrix.dev_attr.attr, /* deprecated */
&iio_dev_attr_sampling_frequency.dev_attr.attr,
&iio_const_attr_sampling_frequency_available.dev_attr.attr,
&iio_const_attr_in_accel_scale_available.dev_attr.attr,
@@ -726,6 +757,7 @@ static const struct iio_info mpu_info = {
static int inv_check_and_setup_chip(struct inv_mpu6050_state *st)
{
int result;
+ unsigned int regval;
st->hw = &hw_info[st->chip_type];
st->reg = hw_info[st->chip_type].reg;
@@ -736,6 +768,17 @@ static int inv_check_and_setup_chip(struct inv_mpu6050_state *st)
if (result)
return result;
msleep(INV_MPU6050_POWER_UP_TIME);
+
+ /* check chip self-identification */
+ result = regmap_read(st->map, INV_MPU6050_REG_WHOAMI, &regval);
+ if (result)
+ return result;
+ if (regval != st->hw->whoami) {
+ dev_warn(regmap_get_device(st->map),
+ "whoami mismatch got %#02x expected %#02hhx for %s\n",
+ regval, st->hw->whoami, st->hw->name);
+ }
+
/*
* toggle power state. After reset, the sleep bit could be on
* or off depending on the OTP settings. Toggling power would
@@ -774,14 +817,31 @@ int inv_mpu_core_probe(struct regmap *regmap, int irq, const char *name,
if (!indio_dev)
return -ENOMEM;
+ BUILD_BUG_ON(ARRAY_SIZE(hw_info) != INV_NUM_PARTS);
+ if (chip_type < 0 || chip_type >= INV_NUM_PARTS) {
+ dev_err(dev, "Bad invensense chip_type=%d name=%s\n",
+ chip_type, name);
+ return -ENODEV;
+ }
st = iio_priv(indio_dev);
st->chip_type = chip_type;
st->powerup_count = 0;
st->irq = irq;
st->map = regmap;
+
pdata = dev_get_platdata(dev);
- if (pdata)
+ if (!pdata) {
+ result = of_iio_read_mount_matrix(dev, "mount-matrix",
+ &st->orientation);
+ if (result) {
+ dev_err(dev, "Failed to retrieve mounting matrix %d\n",
+ result);
+ return result;
+ }
+ } else {
st->plat_data = *pdata;
+ }
+
/* power is turned on inside check chip type */
result = inv_check_and_setup_chip(st);
if (result)
diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_i2c.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_i2c.c
index 5ee4e0dc093e..e1fd7fa53e3b 100644
--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_i2c.c
+++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_i2c.c
@@ -15,7 +15,6 @@
#include <linux/delay.h>
#include <linux/err.h>
#include <linux/i2c.h>
-#include <linux/i2c-mux.h>
#include <linux/iio/iio.h>
#include <linux/module.h>
#include "inv_mpu_iio.h"
@@ -25,46 +24,16 @@ static const struct regmap_config inv_mpu_regmap_config = {
.val_bits = 8,
};
-/*
- * The i2c read/write needs to happen in unlocked mode. As the parent
- * adapter is common. If we use locked versions, it will fail as
- * the mux adapter will lock the parent i2c adapter, while calling
- * select/deselect functions.
- */
-static int inv_mpu6050_write_reg_unlocked(struct i2c_client *client,
- u8 reg, u8 d)
+static int inv_mpu6050_select_bypass(struct i2c_mux_core *muxc, u32 chan_id)
{
- int ret;
- u8 buf[2] = {reg, d};
- struct i2c_msg msg[1] = {
- {
- .addr = client->addr,
- .flags = 0,
- .len = sizeof(buf),
- .buf = buf,
- }
- };
-
- ret = __i2c_transfer(client->adapter, msg, 1);
- if (ret != 1)
- return ret;
-
- return 0;
-}
-
-static int inv_mpu6050_select_bypass(struct i2c_adapter *adap, void *mux_priv,
- u32 chan_id)
-{
- struct i2c_client *client = mux_priv;
- struct iio_dev *indio_dev = dev_get_drvdata(&client->dev);
+ struct iio_dev *indio_dev = i2c_mux_priv(muxc);
struct inv_mpu6050_state *st = iio_priv(indio_dev);
int ret = 0;
/* Use the same mutex which was used everywhere to protect power-op */
mutex_lock(&indio_dev->mlock);
if (!st->powerup_count) {
- ret = inv_mpu6050_write_reg_unlocked(client,
- st->reg->pwr_mgmt_1, 0);
+ ret = regmap_write(st->map, st->reg->pwr_mgmt_1, 0);
if (ret)
goto write_error;
@@ -73,10 +42,9 @@ static int inv_mpu6050_select_bypass(struct i2c_adapter *adap, void *mux_priv,
}
if (!ret) {
st->powerup_count++;
- ret = inv_mpu6050_write_reg_unlocked(client,
- st->reg->int_pin_cfg,
- INV_MPU6050_INT_PIN_CFG |
- INV_MPU6050_BIT_BYPASS_EN);
+ ret = regmap_write(st->map, st->reg->int_pin_cfg,
+ INV_MPU6050_INT_PIN_CFG |
+ INV_MPU6050_BIT_BYPASS_EN);
}
write_error:
mutex_unlock(&indio_dev->mlock);
@@ -84,21 +52,18 @@ write_error:
return ret;
}
-static int inv_mpu6050_deselect_bypass(struct i2c_adapter *adap,
- void *mux_priv, u32 chan_id)
+static int inv_mpu6050_deselect_bypass(struct i2c_mux_core *muxc, u32 chan_id)
{
- struct i2c_client *client = mux_priv;
- struct iio_dev *indio_dev = dev_get_drvdata(&client->dev);
+ struct iio_dev *indio_dev = i2c_mux_priv(muxc);
struct inv_mpu6050_state *st = iio_priv(indio_dev);
mutex_lock(&indio_dev->mlock);
/* It doesn't really matter if any of the calls fails */
- inv_mpu6050_write_reg_unlocked(client, st->reg->int_pin_cfg,
- INV_MPU6050_INT_PIN_CFG);
+ regmap_write(st->map, st->reg->int_pin_cfg, INV_MPU6050_INT_PIN_CFG);
st->powerup_count--;
if (!st->powerup_count)
- inv_mpu6050_write_reg_unlocked(client, st->reg->pwr_mgmt_1,
- INV_MPU6050_BIT_SLEEP);
+ regmap_write(st->map, st->reg->pwr_mgmt_1,
+ INV_MPU6050_BIT_SLEEP);
mutex_unlock(&indio_dev->mlock);
return 0;
@@ -160,16 +125,18 @@ static int inv_mpu_probe(struct i2c_client *client,
return result;
st = iio_priv(dev_get_drvdata(&client->dev));
- st->mux_adapter = i2c_add_mux_adapter(client->adapter,
- &client->dev,
- client,
- 0, 0, 0,
- inv_mpu6050_select_bypass,
- inv_mpu6050_deselect_bypass);
- if (!st->mux_adapter) {
- result = -ENODEV;
+ st->muxc = i2c_mux_alloc(client->adapter, &client->dev,
+ 1, 0, I2C_MUX_LOCKED,
+ inv_mpu6050_select_bypass,
+ inv_mpu6050_deselect_bypass);
+ if (!st->muxc) {
+ result = -ENOMEM;
goto out_unreg_device;
}
+ st->muxc->priv = dev_get_drvdata(&client->dev);
+ result = i2c_mux_add_adapter(st->muxc, 0, 0, 0);
+ if (result)
+ goto out_unreg_device;
result = inv_mpu_acpi_create_mux_client(client);
if (result)
@@ -178,7 +145,7 @@ static int inv_mpu_probe(struct i2c_client *client,
return 0;
out_del_mux:
- i2c_del_mux_adapter(st->mux_adapter);
+ i2c_mux_del_adapters(st->muxc);
out_unreg_device:
inv_mpu_core_remove(&client->dev);
return result;
@@ -190,7 +157,7 @@ static int inv_mpu_remove(struct i2c_client *client)
struct inv_mpu6050_state *st = iio_priv(indio_dev);
inv_mpu_acpi_delete_mux_client(client);
- i2c_del_mux_adapter(st->mux_adapter);
+ i2c_mux_del_adapters(st->muxc);
return inv_mpu_core_remove(&client->dev);
}
@@ -202,13 +169,14 @@ static int inv_mpu_remove(struct i2c_client *client)
static const struct i2c_device_id inv_mpu_id[] = {
{"mpu6050", INV_MPU6050},
{"mpu6500", INV_MPU6500},
+ {"mpu9150", INV_MPU9150},
{}
};
MODULE_DEVICE_TABLE(i2c, inv_mpu_id);
static const struct acpi_device_id inv_acpi_match[] = {
- {"INVN6500", 0},
+ {"INVN6500", INV_MPU6500},
{ },
};
diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h b/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
index e302a49703bf..3bf8544ccc9f 100644
--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
+++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_iio.h
@@ -11,6 +11,7 @@
* GNU General Public License for more details.
*/
#include <linux/i2c.h>
+#include <linux/i2c-mux.h>
#include <linux/kfifo.h>
#include <linux/spinlock.h>
#include <linux/iio/iio.h>
@@ -68,6 +69,7 @@ enum inv_devices {
INV_MPU6050,
INV_MPU6500,
INV_MPU6000,
+ INV_MPU9150,
INV_NUM_PARTS
};
@@ -93,13 +95,13 @@ struct inv_mpu6050_chip_config {
/**
* struct inv_mpu6050_hw - Other important hardware information.
- * @num_reg: Number of registers on device.
+ * @whoami: Self identification byte from WHO_AM_I register
* @name: name of the chip.
* @reg: register map of the chip.
* @config: configuration of the chip.
*/
struct inv_mpu6050_hw {
- u8 num_reg;
+ u8 whoami;
u8 *name;
const struct inv_mpu6050_reg_map *reg;
const struct inv_mpu6050_chip_config *config;
@@ -114,7 +116,8 @@ struct inv_mpu6050_hw {
* @hw: Other hardware-specific information.
* @chip_type: chip type.
* @time_stamp_lock: spin lock to time stamp.
- * @plat_data: platform data.
+ * @plat_data: platform data (deprecated in favor of @orientation).
+ * @orientation: sensor chip orientation relative to main hardware.
* @timestamps: kfifo queue to store time stamp.
* @map regmap pointer.
* @irq interrupt number.
@@ -127,10 +130,11 @@ struct inv_mpu6050_state {
const struct inv_mpu6050_hw *hw;
enum inv_devices chip_type;
spinlock_t time_stamp_lock;
- struct i2c_adapter *mux_adapter;
+ struct i2c_mux_core *muxc;
struct i2c_client *mux_client;
unsigned int powerup_count;
struct inv_mpu6050_platform_data plat_data;
+ struct iio_mount_matrix orientation;
DECLARE_KFIFO(timestamps, long long, TIMESTAMP_FIFO_SIZE);
struct regmap *map;
int irq;
@@ -215,6 +219,13 @@ struct inv_mpu6050_state {
#define INV_MPU6050_MIN_FIFO_RATE 4
#define INV_MPU6050_ONE_K_HZ 1000
+#define INV_MPU6050_REG_WHOAMI 117
+
+#define INV_MPU6000_WHOAMI_VALUE 0x68
+#define INV_MPU6050_WHOAMI_VALUE 0x68
+#define INV_MPU6500_WHOAMI_VALUE 0x70
+#define INV_MPU9150_WHOAMI_VALUE 0x68
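+/*
+ * Note: MPU6000, MPU6050 and MPU9150 all report 0x68, so the WHOAMI value
+ * alone cannot tell these parts apart; the core therefore only warns on a
+ * mismatch instead of failing the probe.
+ */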
+
/* scan element definition */
enum inv_mpu6050_scan {
INV_MPU6050_SCAN_ACCL_X,
diff --git a/drivers/iio/imu/inv_mpu6050/inv_mpu_spi.c b/drivers/iio/imu/inv_mpu6050/inv_mpu_spi.c
index 7bcb8d839f05..190a4a51c830 100644
--- a/drivers/iio/imu/inv_mpu6050/inv_mpu_spi.c
+++ b/drivers/iio/imu/inv_mpu6050/inv_mpu_spi.c
@@ -44,9 +44,19 @@ static int inv_mpu_i2c_disable(struct iio_dev *indio_dev)
static int inv_mpu_probe(struct spi_device *spi)
{
struct regmap *regmap;
- const struct spi_device_id *id = spi_get_device_id(spi);
- const char *name = id ? id->name : NULL;
- const int chip_type = id ? id->driver_data : 0;
+ const struct spi_device_id *spi_id;
+ const struct acpi_device_id *acpi_id;
+ const char *name = NULL;
+ enum inv_devices chip_type;
+
+ if ((spi_id = spi_get_device_id(spi))) {
+ chip_type = (enum inv_devices)spi_id->driver_data;
+ name = spi_id->name;
+ } else if ((acpi_id = acpi_match_device(spi->dev.driver->acpi_match_table,
+ &spi->dev))) {
+ chip_type = (enum inv_devices)acpi_id->driver_data;
+ } else {
+ return -ENODEV;
+ }
regmap = devm_regmap_init_spi(spi, &inv_mpu_regmap_config);
if (IS_ERR(regmap)) {
@@ -70,13 +80,15 @@ static int inv_mpu_remove(struct spi_device *spi)
*/
static const struct spi_device_id inv_mpu_id[] = {
{"mpu6000", INV_MPU6000},
+ {"mpu6500", INV_MPU6500},
+ {"mpu9150", INV_MPU9150},
{}
};
MODULE_DEVICE_TABLE(spi, inv_mpu_id);
static const struct acpi_device_id inv_acpi_match[] = {
- {"INVN6000", 0},
+ {"INVN6000", INV_MPU6000},
{ },
};
MODULE_DEVICE_TABLE(acpi, inv_acpi_match);
diff --git a/drivers/iio/imu/kmx61.c b/drivers/iio/imu/kmx61.c
index e5306b4e020e..2e7dd5754a56 100644
--- a/drivers/iio/imu/kmx61.c
+++ b/drivers/iio/imu/kmx61.c
@@ -14,7 +14,6 @@
#include <linux/module.h>
#include <linux/i2c.h>
#include <linux/acpi.h>
-#include <linux/gpio/consumer.h>
#include <linux/interrupt.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>
diff --git a/drivers/iio/industrialio-core.c b/drivers/iio/industrialio-core.c
index 70cb7eb0a75c..e6319a9346b2 100644
--- a/drivers/iio/industrialio-core.c
+++ b/drivers/iio/industrialio-core.c
@@ -25,6 +25,7 @@
#include <linux/slab.h>
#include <linux/anon_inodes.h>
#include <linux/debugfs.h>
+#include <linux/mutex.h>
#include <linux/iio/iio.h>
#include "iio_core.h"
#include "iio_core_trigger.h"
@@ -78,6 +79,7 @@ static const char * const iio_chan_type_name_spec[] = {
[IIO_CONCENTRATION] = "concentration",
[IIO_RESISTANCE] = "resistance",
[IIO_PH] = "ph",
+ [IIO_UVINDEX] = "uvindex",
};
static const char * const iio_modifier_names[] = {
@@ -100,6 +102,7 @@ static const char * const iio_modifier_names[] = {
[IIO_MOD_LIGHT_RED] = "red",
[IIO_MOD_LIGHT_GREEN] = "green",
[IIO_MOD_LIGHT_BLUE] = "blue",
+ [IIO_MOD_LIGHT_UV] = "uv",
[IIO_MOD_QUATERNION] = "quaternion",
[IIO_MOD_TEMP_AMBIENT] = "ambient",
[IIO_MOD_TEMP_OBJECT] = "object",
@@ -409,6 +412,88 @@ ssize_t iio_enum_write(struct iio_dev *indio_dev,
}
EXPORT_SYMBOL_GPL(iio_enum_write);
+static const struct iio_mount_matrix iio_mount_idmatrix = {
+ .rotation = {
+ "1", "0", "0",
+ "0", "1", "0",
+ "0", "0", "1"
+ }
+};
+
+static int iio_setup_mount_idmatrix(const struct device *dev,
+ struct iio_mount_matrix *matrix)
+{
+ *matrix = iio_mount_idmatrix;
+ dev_info(dev, "mounting matrix not found: using identity...\n");
+ return 0;
+}
+
+ssize_t iio_show_mount_matrix(struct iio_dev *indio_dev, uintptr_t priv,
+ const struct iio_chan_spec *chan, char *buf)
+{
+ const struct iio_mount_matrix *mtx = ((iio_get_mount_matrix_t *)
+ priv)(indio_dev, chan);
+
+ if (IS_ERR(mtx))
+ return PTR_ERR(mtx);
+
+ if (!mtx)
+ mtx = &iio_mount_idmatrix;
+
+ return snprintf(buf, PAGE_SIZE, "%s, %s, %s; %s, %s, %s; %s, %s, %s\n",
+ mtx->rotation[0], mtx->rotation[1], mtx->rotation[2],
+ mtx->rotation[3], mtx->rotation[4], mtx->rotation[5],
+ mtx->rotation[6], mtx->rotation[7], mtx->rotation[8]);
+}
+EXPORT_SYMBOL_GPL(iio_show_mount_matrix);
+
+/**
+ * of_iio_read_mount_matrix() - retrieve iio device mounting matrix from
+ * device-tree "mount-matrix" property
+ * @dev: device the mounting matrix property is assigned to
+ * @propname: device specific mounting matrix property name
+ * @matrix: where to store retrieved matrix
+ *
+ * If the device has no mounting matrix property, a default 3x3 identity
+ * matrix is filled in instead.
+ *
+ * Return: 0 on success, or a negative error code on failure.
+ */
+#ifdef CONFIG_OF
+int of_iio_read_mount_matrix(const struct device *dev,
+ const char *propname,
+ struct iio_mount_matrix *matrix)
+{
+ if (dev->of_node) {
+ int err = of_property_read_string_array(dev->of_node,
+ propname, matrix->rotation,
+ ARRAY_SIZE(iio_mount_idmatrix.rotation));
+
+ if (err == ARRAY_SIZE(iio_mount_idmatrix.rotation))
+ return 0;
+
+ if (err >= 0)
+ /* Invalid number of matrix entries. */
+ return -EINVAL;
+
+ if (err != -EINVAL)
+ /* Invalid matrix declaration format. */
+ return err;
+ }
+
+ /* Matrix was not declared at all: fallback to identity. */
+ return iio_setup_mount_idmatrix(dev, matrix);
+}
+#else
+int of_iio_read_mount_matrix(const struct device *dev,
+ const char *propname,
+ struct iio_mount_matrix *matrix)
+{
+ return iio_setup_mount_idmatrix(dev, matrix);
+}
+#endif
+EXPORT_SYMBOL(of_iio_read_mount_matrix);
+
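of_iio_read_mount_matrix() is meant to be called once at probe time; the retrieved matrix is then kept in driver state and exposed through a channel's ext_info so userspace can read it back from sysfs. A minimal sketch for a hypothetical driver (the foo_* names are illustrative; IIO_MOUNT_MATRIX and IIO_SHARED_BY_DIR are the helpers added by this series and used by the ak8975 changes further down):

struct foo_data {
	struct iio_mount_matrix orientation;
};

static const struct iio_mount_matrix *
foo_get_mount_matrix(const struct iio_dev *indio_dev,
		     const struct iio_chan_spec *chan)
{
	return &((struct foo_data *)iio_priv(indio_dev))->orientation;
}

static const struct iio_chan_spec_ext_info foo_ext_info[] = {
	IIO_MOUNT_MATRIX(IIO_SHARED_BY_DIR, foo_get_mount_matrix),
	{ },
};

/* in foo_probe(), before iio_device_register(): */
	ret = of_iio_read_mount_matrix(&client->dev, "mount-matrix",
				       &data->orientation);
	if (ret)
		return ret;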
/**
* iio_format_value() - Formats a IIO value into its string representation
* @buf: The buffer to which the formatted value gets written
@@ -1375,6 +1460,44 @@ void devm_iio_device_unregister(struct device *dev, struct iio_dev *indio_dev)
}
EXPORT_SYMBOL_GPL(devm_iio_device_unregister);
+/**
+ * iio_device_claim_direct_mode - Keep device in direct mode
+ * @indio_dev: the iio_dev associated with the device
+ *
+ * If the device is in direct mode it is guaranteed to stay
+ * that way until iio_device_release_direct_mode() is called.
+ *
+ * Use with iio_device_release_direct_mode()
+ *
+ * Returns: 0 on success, -EBUSY on failure
+ */
+int iio_device_claim_direct_mode(struct iio_dev *indio_dev)
+{
+ mutex_lock(&indio_dev->mlock);
+
+ if (iio_buffer_enabled(indio_dev)) {
+ mutex_unlock(&indio_dev->mlock);
+ return -EBUSY;
+ }
+ return 0;
+}
+EXPORT_SYMBOL_GPL(iio_device_claim_direct_mode);
+
+/**
+ * iio_device_release_direct_mode - releases claim on direct mode
+ * @indio_dev: the iio_dev associated with the device
+ *
+ * Release the claim. Device is no longer guaranteed to stay
+ * in direct mode.
+ *
+ * Use with iio_device_claim_direct_mode()
+ */
+void iio_device_release_direct_mode(struct iio_dev *indio_dev)
+{
+ mutex_unlock(&indio_dev->mlock);
+}
+EXPORT_SYMBOL_GPL(iio_device_release_direct_mode);
+
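The claim/release pair is intended to replace the open-coded mutex_lock(&indio_dev->mlock) plus iio_buffer_enabled() checks scattered across drivers. A typical use in a raw read path looks like this (hypothetical foo_* driver, sketch only; foo_single_conversion() stands in for the driver's one-shot measurement):

static int foo_read_raw(struct iio_dev *indio_dev,
			struct iio_chan_spec const *chan,
			int *val, int *val2, long mask)
{
	int ret;

	ret = iio_device_claim_direct_mode(indio_dev);
	if (ret)
		return ret;	/* -EBUSY: buffered capture is running */

	ret = foo_single_conversion(indio_dev, chan, val);

	iio_device_release_direct_mode(indio_dev);
	return ret;
}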
subsys_initcall(iio_init);
module_exit(iio_exit);
diff --git a/drivers/iio/inkern.c b/drivers/iio/inkern.c
index 734a0042de0c..c4757e6367e7 100644
--- a/drivers/iio/inkern.c
+++ b/drivers/iio/inkern.c
@@ -356,6 +356,54 @@ void iio_channel_release(struct iio_channel *channel)
}
EXPORT_SYMBOL_GPL(iio_channel_release);
+static void devm_iio_channel_free(struct device *dev, void *res)
+{
+ struct iio_channel *channel = *(struct iio_channel **)res;
+
+ iio_channel_release(channel);
+}
+
+static int devm_iio_channel_match(struct device *dev, void *res, void *data)
+{
+ struct iio_channel **r = res;
+
+ if (!r || !*r) {
+ WARN_ON(!r || !*r);
+ return 0;
+ }
+
+ return *r == data;
+}
+
+struct iio_channel *devm_iio_channel_get(struct device *dev,
+ const char *channel_name)
+{
+ struct iio_channel **ptr, *channel;
+
+ ptr = devres_alloc(devm_iio_channel_free, sizeof(*ptr), GFP_KERNEL);
+ if (!ptr)
+ return ERR_PTR(-ENOMEM);
+
+ channel = iio_channel_get(dev, channel_name);
+ if (IS_ERR(channel)) {
+ devres_free(ptr);
+ return channel;
+ }
+
+ *ptr = channel;
+ devres_add(dev, ptr);
+
+ return channel;
+}
+EXPORT_SYMBOL_GPL(devm_iio_channel_get);
+
+void devm_iio_channel_release(struct device *dev, struct iio_channel *channel)
+{
+ WARN_ON(devres_release(dev, devm_iio_channel_free,
+ devm_iio_channel_match, channel));
+}
+EXPORT_SYMBOL_GPL(devm_iio_channel_release);
+
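For consumers, the devres variant removes the explicit iio_channel_release() calls from the error and remove paths. A hedged sketch of a hypothetical platform driver using it ("battery-voltage" is an assumed channel mapping name, not one defined by this patch):

static int foo_probe(struct platform_device *pdev)
{
	struct iio_channel *chan;
	int ret, raw;

	chan = devm_iio_channel_get(&pdev->dev, "battery-voltage");
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	ret = iio_read_channel_raw(chan, &raw);
	if (ret < 0)
		return ret;

	/* the channel is released automatically when the device is unbound */
	return 0;
}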
struct iio_channel *iio_channel_get_all(struct device *dev)
{
const char *name;
@@ -441,6 +489,42 @@ void iio_channel_release_all(struct iio_channel *channels)
}
EXPORT_SYMBOL_GPL(iio_channel_release_all);
+static void devm_iio_channel_free_all(struct device *dev, void *res)
+{
+ struct iio_channel *channels = *(struct iio_channel **)res;
+
+ iio_channel_release_all(channels);
+}
+
+struct iio_channel *devm_iio_channel_get_all(struct device *dev)
+{
+ struct iio_channel **ptr, *channels;
+
+ ptr = devres_alloc(devm_iio_channel_free_all, sizeof(*ptr), GFP_KERNEL);
+ if (!ptr)
+ return ERR_PTR(-ENOMEM);
+
+ channels = iio_channel_get_all(dev);
+ if (IS_ERR(channels)) {
+ devres_free(ptr);
+ return channels;
+ }
+
+ *ptr = channels;
+ devres_add(dev, ptr);
+
+ return channels;
+}
+EXPORT_SYMBOL_GPL(devm_iio_channel_get_all);
+
+void devm_iio_channel_release_all(struct device *dev,
+ struct iio_channel *channels)
+{
+ WARN_ON(devres_release(dev, devm_iio_channel_free_all,
+ devm_iio_channel_match, channels));
+}
+EXPORT_SYMBOL_GPL(devm_iio_channel_release_all);
+
static int iio_channel_read(struct iio_channel *chan, int *val, int *val2,
enum iio_chan_info_enum info)
{
@@ -452,7 +536,7 @@ static int iio_channel_read(struct iio_channel *chan, int *val, int *val2,
if (val2 == NULL)
val2 = &unused;
- if(!iio_channel_has_info(chan->channel, info))
+ if (!iio_channel_has_info(chan->channel, info))
return -EINVAL;
if (chan->indio_dev->info->read_raw_multi) {
diff --git a/drivers/iio/light/Kconfig b/drivers/iio/light/Kconfig
index cfd3df8416bb..7c566f516572 100644
--- a/drivers/iio/light/Kconfig
+++ b/drivers/iio/light/Kconfig
@@ -73,6 +73,17 @@ config BH1750
To compile this driver as a module, choose M here: the module will
be called bh1750.
+config BH1780
+ tristate "ROHM BH1780 ambient light sensor"
+ depends on I2C
+ depends on !SENSORS_BH1780
+ help
+ Say Y here to build support for the ROHM BH1780GLI ambient
+ light sensor.
+
+ To compile this driver as a module, choose M here: the module will
+ be called bh1780.
+
config CM32181
depends on I2C
tristate "CM32181 driver"
@@ -223,6 +234,17 @@ config LTR501
This driver can also be built as a module. If so, the module
will be called ltr501.
+config MAX44000
+ tristate "MAX44000 Ambient and Infrared Proximity Sensor"
+ depends on I2C
+ select REGMAP_I2C
+ help
+ Say Y here if you want to build support for Maxim Integrated's
+ MAX44000 ambient and infrared proximity sensor device.
+
+ To compile this driver as a module, choose M here:
+ the module will be called max44000.
+
config OPT3001
tristate "Texas Instruments OPT3001 Light Sensor"
depends on I2C
@@ -320,4 +342,14 @@ config VCNL4000
To compile this driver as a module, choose M here: the
module will be called vcnl4000.
+config VEML6070
+ tristate "VEML6070 UV A light sensor"
+ depends on I2C
+ help
+ Say Y here if you want to build a driver for the Vishay VEML6070 UV A
+ light sensor.
+
+ To compile this driver as a module, choose M here: the
+ module will be called veml6070.
+
endmenu
diff --git a/drivers/iio/light/Makefile b/drivers/iio/light/Makefile
index b2c31053db0c..6f2a3c62de27 100644
--- a/drivers/iio/light/Makefile
+++ b/drivers/iio/light/Makefile
@@ -9,6 +9,7 @@ obj-$(CONFIG_AL3320A) += al3320a.o
obj-$(CONFIG_APDS9300) += apds9300.o
obj-$(CONFIG_APDS9960) += apds9960.o
obj-$(CONFIG_BH1750) += bh1750.o
+obj-$(CONFIG_BH1780) += bh1780.o
obj-$(CONFIG_CM32181) += cm32181.o
obj-$(CONFIG_CM3232) += cm3232.o
obj-$(CONFIG_CM3323) += cm3323.o
@@ -20,6 +21,7 @@ obj-$(CONFIG_ISL29125) += isl29125.o
obj-$(CONFIG_JSA1212) += jsa1212.o
obj-$(CONFIG_SENSORS_LM3533) += lm3533-als.o
obj-$(CONFIG_LTR501) += ltr501.o
+obj-$(CONFIG_MAX44000) += max44000.o
obj-$(CONFIG_OPT3001) += opt3001.o
obj-$(CONFIG_PA12203001) += pa12203001.o
obj-$(CONFIG_RPR0521) += rpr0521.o
@@ -30,3 +32,4 @@ obj-$(CONFIG_TCS3472) += tcs3472.o
obj-$(CONFIG_TSL4531) += tsl4531.o
obj-$(CONFIG_US5182D) += us5182d.o
obj-$(CONFIG_VCNL4000) += vcnl4000.o
+obj-$(CONFIG_VEML6070) += veml6070.o
diff --git a/drivers/iio/light/apds9960.c b/drivers/iio/light/apds9960.c
index a6af56ad10e1..b4dbb3912977 100644
--- a/drivers/iio/light/apds9960.c
+++ b/drivers/iio/light/apds9960.c
@@ -321,8 +321,12 @@ static const struct iio_chan_spec apds9960_channels[] = {
};
/* integration time in us */
-static const int apds9960_int_time[][2] =
- { {28000, 246}, {100000, 219}, {200000, 182}, {700000, 0} };
+static const int apds9960_int_time[][2] = {
+ { 28000, 246},
+ {100000, 219},
+ {200000, 182},
+ {700000, 0}
+};
/* gain mapping */
static const int apds9960_pxs_gain_map[] = {1, 2, 4, 8};
@@ -491,9 +495,10 @@ static int apds9960_read_raw(struct iio_dev *indio_dev,
case IIO_INTENSITY:
ret = regmap_bulk_read(data->regmap, chan->address,
&buf, 2);
- if (!ret)
+ if (!ret) {
ret = IIO_VAL_INT;
- *val = le16_to_cpu(buf);
+ *val = le16_to_cpu(buf);
+ }
break;
default:
ret = -EINVAL;
diff --git a/drivers/iio/light/bh1780.c b/drivers/iio/light/bh1780.c
new file mode 100644
index 000000000000..72b364e4aa72
--- /dev/null
+++ b/drivers/iio/light/bh1780.c
@@ -0,0 +1,297 @@
+/*
+ * ROHM BH1780GLI Ambient Light Sensor Driver
+ *
+ * Copyright (C) 2016 Linaro Ltd.
+ * Author: Linus Walleij <linus.walleij@linaro.org>
+ * Loosely based on the previous BH1780 ALS misc driver
+ * Copyright (C) 2010 Texas Instruments
+ * Author: Hemanth V <hemanthv@ti.com>
+ */
+#include <linux/i2c.h>
+#include <linux/slab.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/pm_runtime.h>
+#include <linux/iio/iio.h>
+#include <linux/iio/sysfs.h>
+#include <linux/bitops.h>
+
+#define BH1780_CMD_BIT BIT(7)
+#define BH1780_REG_CONTROL 0x00
+#define BH1780_REG_PARTID 0x0A
+#define BH1780_REG_MANFID 0x0B
+#define BH1780_REG_DLOW 0x0C
+#define BH1780_REG_DHIGH 0x0D
+
+#define BH1780_REVMASK GENMASK(3, 0)
+#define BH1780_POWMASK GENMASK(1, 0)
+#define BH1780_POFF (0x0)
+#define BH1780_PON (0x3)
+
+/* power on settling time in ms */
+#define BH1780_PON_DELAY 2
+/* max time before value available in ms */
+#define BH1780_INTERVAL 250
+
+struct bh1780_data {
+ struct i2c_client *client;
+};
+
+static int bh1780_write(struct bh1780_data *bh1780, u8 reg, u8 val)
+{
+ int ret = i2c_smbus_write_byte_data(bh1780->client,
+ BH1780_CMD_BIT | reg,
+ val);
+ if (ret < 0)
+ dev_err(&bh1780->client->dev,
+ "i2c_smbus_write_byte_data failed error "
+ "%d, register %01x\n",
+ ret, reg);
+ return ret;
+}
+
+static int bh1780_read(struct bh1780_data *bh1780, u8 reg)
+{
+ int ret = i2c_smbus_read_byte_data(bh1780->client,
+ BH1780_CMD_BIT | reg);
+ if (ret < 0)
+ dev_err(&bh1780->client->dev,
+ "i2c_smbus_read_byte_data failed error "
+ "%d, register %01x\n",
+ ret, reg);
+ return ret;
+}
+
+static int bh1780_read_word(struct bh1780_data *bh1780, u8 reg)
+{
+ int ret = i2c_smbus_read_word_data(bh1780->client,
+ BH1780_CMD_BIT | reg);
+ if (ret < 0)
+ dev_err(&bh1780->client->dev,
+ "i2c_smbus_read_word_data failed error "
+ "%d, register %01x\n",
+ ret, reg);
+ return ret;
+}
+
+static int bh1780_debugfs_reg_access(struct iio_dev *indio_dev,
+ unsigned int reg, unsigned int writeval,
+ unsigned int *readval)
+{
+ struct bh1780_data *bh1780 = iio_priv(indio_dev);
+ int ret;
+
+ if (!readval)
+ return bh1780_write(bh1780, (u8)reg, (u8)writeval);
+
+ ret = bh1780_read(bh1780, (u8)reg);
+ if (ret < 0)
+ return ret;
+
+ *readval = ret;
+
+ return 0;
+}
+
+static int bh1780_read_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ int *val, int *val2, long mask)
+{
+ struct bh1780_data *bh1780 = iio_priv(indio_dev);
+ int value;
+
+ switch (mask) {
+ case IIO_CHAN_INFO_RAW:
+ switch (chan->type) {
+ case IIO_LIGHT:
+ pm_runtime_get_sync(&bh1780->client->dev);
+ value = bh1780_read_word(bh1780, BH1780_REG_DLOW);
+ if (value < 0)
+ return value;
+ pm_runtime_mark_last_busy(&bh1780->client->dev);
+ pm_runtime_put_autosuspend(&bh1780->client->dev);
+ *val = value;
+
+ return IIO_VAL_INT;
+ default:
+ return -EINVAL;
+ }
+ case IIO_CHAN_INFO_INT_TIME:
+ *val = 0;
+ *val2 = BH1780_INTERVAL * 1000;
+ return IIO_VAL_INT_PLUS_MICRO;
+ default:
+ return -EINVAL;
+ }
+}
+
+static const struct iio_info bh1780_info = {
+ .driver_module = THIS_MODULE,
+ .read_raw = bh1780_read_raw,
+ .debugfs_reg_access = bh1780_debugfs_reg_access,
+};
+
+static const struct iio_chan_spec bh1780_channels[] = {
+ {
+ .type = IIO_LIGHT,
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
+ BIT(IIO_CHAN_INFO_INT_TIME)
+ }
+};
+
+static int bh1780_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ int ret;
+ struct bh1780_data *bh1780;
+ struct i2c_adapter *adapter = to_i2c_adapter(client->dev.parent);
+ struct iio_dev *indio_dev;
+
+ if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE))
+ return -EIO;
+
+ indio_dev = devm_iio_device_alloc(&client->dev, sizeof(*bh1780));
+ if (!indio_dev)
+ return -ENOMEM;
+
+ bh1780 = iio_priv(indio_dev);
+ bh1780->client = client;
+ i2c_set_clientdata(client, indio_dev);
+
+ /* Power up the device */
+ ret = bh1780_write(bh1780, BH1780_REG_CONTROL, BH1780_PON);
+ if (ret < 0)
+ return ret;
+ msleep(BH1780_PON_DELAY);
+ pm_runtime_get_noresume(&client->dev);
+ pm_runtime_set_active(&client->dev);
+ pm_runtime_enable(&client->dev);
+
+ ret = bh1780_read(bh1780, BH1780_REG_PARTID);
+ if (ret < 0)
+ goto out_disable_pm;
+ dev_info(&client->dev,
+ "Ambient Light Sensor, Rev : %lu\n",
+ (ret & BH1780_REVMASK));
+
+ /*
+ * As the device takes 250 ms to even come up with a fresh
+ * measurement after power-on, do not shut it down unnecessarily.
+ * Set the autosuspend delay to five seconds.
+ */
+ pm_runtime_set_autosuspend_delay(&client->dev, 5000);
+ pm_runtime_use_autosuspend(&client->dev);
+ pm_runtime_put(&client->dev);
+
+ indio_dev->dev.parent = &client->dev;
+ indio_dev->info = &bh1780_info;
+ indio_dev->name = id->name;
+ indio_dev->channels = bh1780_channels;
+ indio_dev->num_channels = ARRAY_SIZE(bh1780_channels);
+ indio_dev->modes = INDIO_DIRECT_MODE;
+
+ ret = iio_device_register(indio_dev);
+ if (ret)
+ goto out_disable_pm;
+ return 0;
+
+out_disable_pm:
+ pm_runtime_put_noidle(&client->dev);
+ pm_runtime_disable(&client->dev);
+ return ret;
+}
+
+static int bh1780_remove(struct i2c_client *client)
+{
+ struct iio_dev *indio_dev = i2c_get_clientdata(client);
+ struct bh1780_data *bh1780 = iio_priv(indio_dev);
+ int ret;
+
+ iio_device_unregister(indio_dev);
+ pm_runtime_get_sync(&client->dev);
+ pm_runtime_put_noidle(&client->dev);
+ pm_runtime_disable(&client->dev);
+ ret = bh1780_write(bh1780, BH1780_REG_CONTROL, BH1780_POFF);
+ if (ret < 0) {
+ dev_err(&client->dev, "failed to power off\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+#ifdef CONFIG_PM
+static int bh1780_runtime_suspend(struct device *dev)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct bh1780_data *bh1780 = i2c_get_clientdata(client);
+ int ret;
+
+ ret = bh1780_write(bh1780, BH1780_REG_CONTROL, BH1780_POFF);
+ if (ret < 0) {
+ dev_err(dev, "failed to runtime suspend\n");
+ return ret;
+ }
+
+ return 0;
+}
+
+static int bh1780_runtime_resume(struct device *dev)
+{
+ struct i2c_client *client = to_i2c_client(dev);
+ struct bh1780_data *bh1780 = i2c_get_clientdata(client);
+ int ret;
+
+ ret = bh1780_write(bh1780, BH1780_REG_CONTROL, BH1780_PON);
+ if (ret < 0) {
+ dev_err(dev, "failed to runtime resume\n");
+ return ret;
+ }
+
+ /* Wait for power on, then for a value to be available */
+ msleep(BH1780_PON_DELAY + BH1780_INTERVAL);
+
+ return 0;
+}
+#endif /* CONFIG_PM */
+
+static const struct dev_pm_ops bh1780_dev_pm_ops = {
+ SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
+ pm_runtime_force_resume)
+ SET_RUNTIME_PM_OPS(bh1780_runtime_suspend,
+ bh1780_runtime_resume, NULL)
+};
+
+static const struct i2c_device_id bh1780_id[] = {
+ { "bh1780", 0 },
+ { },
+};
+
+MODULE_DEVICE_TABLE(i2c, bh1780_id);
+
+#ifdef CONFIG_OF
+static const struct of_device_id of_bh1780_match[] = {
+ { .compatible = "rohm,bh1780gli", },
+ {},
+};
+MODULE_DEVICE_TABLE(of, of_bh1780_match);
+#endif
+
+static struct i2c_driver bh1780_driver = {
+ .probe = bh1780_probe,
+ .remove = bh1780_remove,
+ .id_table = bh1780_id,
+ .driver = {
+ .name = "bh1780",
+ .pm = &bh1780_dev_pm_ops,
+ .of_match_table = of_match_ptr(of_bh1780_match),
+ },
+};
+
+module_i2c_driver(bh1780_driver);
+
+MODULE_DESCRIPTION("ROHM BH1780GLI Ambient Light Sensor Driver");
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Linus Walleij <linus.walleij@linaro.org>");
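The read path above leans on runtime PM autosuspend: every measurement takes a reference, and the sensor is only powered down once it has been idle for the five-second autosuspend delay configured in probe. The general shape of such an access wrapper is sketched below (foo_hw_read() is a hypothetical helper, not part of the driver; unlike the driver's read_raw path, the sketch also checks the pm_runtime_get_sync() return value):

static int foo_read_measurement(struct device *dev)
{
	int ret;

	ret = pm_runtime_get_sync(dev);		/* resume if suspended */
	if (ret < 0) {
		pm_runtime_put_noidle(dev);
		return ret;
	}

	ret = foo_hw_read(dev);			/* the actual bus access */

	pm_runtime_mark_last_busy(dev);
	pm_runtime_put_autosuspend(dev);	/* power down after the delay */
	return ret;
}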
diff --git a/drivers/iio/light/max44000.c b/drivers/iio/light/max44000.c
new file mode 100644
index 000000000000..e01e58a9bd14
--- /dev/null
+++ b/drivers/iio/light/max44000.c
@@ -0,0 +1,639 @@
+/*
+ * MAX44000 Ambient and Infrared Proximity Sensor
+ *
+ * Copyright (c) 2016, Intel Corporation.
+ *
+ * This file is subject to the terms and conditions of version 2 of
+ * the GNU General Public License. See the file COPYING in the main
+ * directory of this archive for more details.
+ *
+ * Data sheet: https://datasheets.maximintegrated.com/en/ds/MAX44000.pdf
+ *
+ * 7-bit I2C slave address 0x4a
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/i2c.h>
+#include <linux/regmap.h>
+#include <linux/util_macros.h>
+#include <linux/iio/iio.h>
+#include <linux/iio/sysfs.h>
+#include <linux/iio/buffer.h>
+#include <linux/iio/trigger_consumer.h>
+#include <linux/iio/triggered_buffer.h>
+#include <linux/acpi.h>
+
+#define MAX44000_DRV_NAME "max44000"
+
+/* Registers in datasheet order */
+#define MAX44000_REG_STATUS 0x00
+#define MAX44000_REG_CFG_MAIN 0x01
+#define MAX44000_REG_CFG_RX 0x02
+#define MAX44000_REG_CFG_TX 0x03
+#define MAX44000_REG_ALS_DATA_HI 0x04
+#define MAX44000_REG_ALS_DATA_LO 0x05
+#define MAX44000_REG_PRX_DATA 0x16
+#define MAX44000_REG_ALS_UPTHR_HI 0x06
+#define MAX44000_REG_ALS_UPTHR_LO 0x07
+#define MAX44000_REG_ALS_LOTHR_HI 0x08
+#define MAX44000_REG_ALS_LOTHR_LO 0x09
+#define MAX44000_REG_PST 0x0a
+#define MAX44000_REG_PRX_IND 0x0b
+#define MAX44000_REG_PRX_THR 0x0c
+#define MAX44000_REG_TRIM_GAIN_GREEN 0x0f
+#define MAX44000_REG_TRIM_GAIN_IR 0x10
+
+/* REG_CFG bits */
+#define MAX44000_CFG_ALSINTE 0x01
+#define MAX44000_CFG_PRXINTE 0x02
+#define MAX44000_CFG_MASK 0x1c
+#define MAX44000_CFG_MODE_SHUTDOWN 0x00
+#define MAX44000_CFG_MODE_ALS_GIR 0x04
+#define MAX44000_CFG_MODE_ALS_G 0x08
+#define MAX44000_CFG_MODE_ALS_IR 0x0c
+#define MAX44000_CFG_MODE_ALS_PRX 0x10
+#define MAX44000_CFG_MODE_PRX 0x14
+#define MAX44000_CFG_TRIM 0x20
+
+/*
+ * The upper 4 bits are not documented but start as 1 on power-up.
+ * Setting them to 0 causes proximity to misbehave, so set them to 1.
+ */
+#define MAX44000_REG_CFG_RX_DEFAULT 0xf0
+
+/* REG_RX bits */
+#define MAX44000_CFG_RX_ALSTIM_MASK 0x0c
+#define MAX44000_CFG_RX_ALSTIM_SHIFT 2
+#define MAX44000_CFG_RX_ALSPGA_MASK 0x03
+#define MAX44000_CFG_RX_ALSPGA_SHIFT 0
+
+/* REG_TX bits */
+#define MAX44000_LED_CURRENT_MASK 0xf
+#define MAX44000_LED_CURRENT_MAX 11
+#define MAX44000_LED_CURRENT_DEFAULT 6
+
+#define MAX44000_ALSDATA_OVERFLOW 0x4000
+
+struct max44000_data {
+ struct mutex lock;
+ struct regmap *regmap;
+};
+
+/* Default scale is the minimum, 0.03125 lux, i.e. 1 / (1 << 5) lux */
+#define MAX44000_ALS_TO_LUX_DEFAULT_FRACTION_LOG2 5
+
+/* Scale can be multiplied by up to 128x via ALSPGA for measurement gain */
+static const int max44000_alspga_shift[] = {0, 2, 4, 7};
+#define MAX44000_ALSPGA_MAX_SHIFT 7
+
+/*
+ * Scale can be multiplied by up to 64x via ALSTIM because of lost resolution
+ *
+ * This scaling factor is hidden from userspace and instead accounted for when
+ * reading raw values from the device.
+ *
+ * This makes it possible to cleanly expose ALSPGA as IIO_CHAN_INFO_SCALE and
+ * ALSTIM as IIO_CHAN_INFO_INT_TIME without the values affecting each other.
+ *
+ * Handling this internally is also required for buffer support because the
+ * channel's scan_type can't be modified dynamically.
+ */
+static const int max44000_alstim_shift[] = {0, 2, 4, 6};
+#define MAX44000_ALSTIM_SHIFT(alstim) (2 * (alstim))
+
+/* Available integration times with pretty manual alignment: */
+static const int max44000_int_time_avail_ns_array[] = {
+ 100000000,
+ 25000000,
+ 6250000,
+ 1562500,
+};
+static const char max44000_int_time_avail_str[] =
+ "0.100 "
+ "0.025 "
+ "0.00625 "
+ "0.001625";
+
+/* Available scales (internal to ulux) with pretty manual alignment: */
+static const int max44000_scale_avail_ulux_array[] = {
+ 31250,
+ 125000,
+ 500000,
+ 4000000,
+};
+static const char max44000_scale_avail_str[] =
+ "0.03125 "
+ "0.125 "
+ "0.5 "
+ "4";
+
+#define MAX44000_SCAN_INDEX_ALS 0
+#define MAX44000_SCAN_INDEX_PRX 1
+
+static const struct iio_chan_spec max44000_channels[] = {
+ {
+ .type = IIO_LIGHT,
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE) |
+ BIT(IIO_CHAN_INFO_INT_TIME),
+ .scan_index = MAX44000_SCAN_INDEX_ALS,
+ .scan_type = {
+ .sign = 'u',
+ .realbits = 14,
+ .storagebits = 16,
+ }
+ },
+ {
+ .type = IIO_PROXIMITY,
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
+ .scan_index = MAX44000_SCAN_INDEX_PRX,
+ .scan_type = {
+ .sign = 'u',
+ .realbits = 8,
+ .storagebits = 16,
+ }
+ },
+ IIO_CHAN_SOFT_TIMESTAMP(2),
+ {
+ .type = IIO_CURRENT,
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
+ BIT(IIO_CHAN_INFO_SCALE),
+ .extend_name = "led",
+ .output = 1,
+ .scan_index = -1,
+ },
+};
+
+static int max44000_read_alstim(struct max44000_data *data)
+{
+ unsigned int val;
+ int ret;
+
+ ret = regmap_read(data->regmap, MAX44000_REG_CFG_RX, &val);
+ if (ret < 0)
+ return ret;
+ return (val & MAX44000_CFG_RX_ALSTIM_MASK) >> MAX44000_CFG_RX_ALSTIM_SHIFT;
+}
+
+static int max44000_write_alstim(struct max44000_data *data, int val)
+{
+ return regmap_write_bits(data->regmap, MAX44000_REG_CFG_RX,
+ MAX44000_CFG_RX_ALSTIM_MASK,
+ val << MAX44000_CFG_RX_ALSTIM_SHIFT);
+}
+
+static int max44000_read_alspga(struct max44000_data *data)
+{
+ unsigned int val;
+ int ret;
+
+ ret = regmap_read(data->regmap, MAX44000_REG_CFG_RX, &val);
+ if (ret < 0)
+ return ret;
+ return (val & MAX44000_CFG_RX_ALSPGA_MASK) >> MAX44000_CFG_RX_ALSPGA_SHIFT;
+}
+
+static int max44000_write_alspga(struct max44000_data *data, int val)
+{
+ return regmap_write_bits(data->regmap, MAX44000_REG_CFG_RX,
+ MAX44000_CFG_RX_ALSPGA_MASK,
+ val << MAX44000_CFG_RX_ALSPGA_SHIFT);
+}
+
+static int max44000_read_alsval(struct max44000_data *data)
+{
+ u16 regval;
+ int alstim, ret;
+
+ ret = regmap_bulk_read(data->regmap, MAX44000_REG_ALS_DATA_HI,
+ &regval, sizeof(regval));
+ if (ret < 0)
+ return ret;
+ alstim = ret = max44000_read_alstim(data);
+ if (ret < 0)
+ return ret;
+
+ regval = be16_to_cpu(regval);
+
+ /*
+ * Overflow is explained on datasheet page 17.
+ *
+ * It's a warning that either the G or IR channel has become saturated
+ * and that the value in the register is likely incorrect.
+ *
+ * The recommendation is to change the scale (ALSPGA).
+ * The driver just returns the max representable value.
+ */
+ if (regval & MAX44000_ALSDATA_OVERFLOW)
+ return 0x3FFF;
+
+ return regval << MAX44000_ALSTIM_SHIFT(alstim);
+}
+
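To make the "hidden" ALSTIM scaling applied by max44000_read_alsval() above concrete, here is a worked example; the register value is made up, and the reduced resolution at ALSTIM = 2 is taken from the datasheet rather than from this patch:

/*
 * ALSTIM = 2  ->  6.25 ms integration, roughly 10 valid bits
 *
 *   raw register value: 0x0123
 *   reported value:     0x0123 << MAX44000_ALSTIM_SHIFT(2)
 *                     = 0x0123 << 4
 *                     = 0x1230
 *
 * Userspace therefore always sees readings on the 14-bit / 100 ms scale;
 * reprogramming ALSTIM only changes IIO_CHAN_INFO_INT_TIME, not the scale.
 */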
+static int max44000_write_led_current_raw(struct max44000_data *data, int val)
+{
+ /* Maybe we should clamp the value instead? */
+ if (val < 0 || val > MAX44000_LED_CURRENT_MAX)
+ return -ERANGE;
+ if (val >= 8)
+ val += 4;
+ return regmap_write_bits(data->regmap, MAX44000_REG_CFG_TX,
+ MAX44000_LED_CURRENT_MASK, val);
+}
+
+static int max44000_read_led_current_raw(struct max44000_data *data)
+{
+ unsigned int regval;
+ int ret;
+
+ ret = regmap_read(data->regmap, MAX44000_REG_CFG_TX, &regval);
+ if (ret < 0)
+ return ret;
+ regval &= MAX44000_LED_CURRENT_MASK;
+ if (regval >= 8)
+ regval -= 4;
+ return regval;
+}
+
+static int max44000_read_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ int *val, int *val2, long mask)
+{
+ struct max44000_data *data = iio_priv(indio_dev);
+ int alstim, alspga;
+ unsigned int regval;
+ int ret;
+
+ switch (mask) {
+ case IIO_CHAN_INFO_RAW:
+ switch (chan->type) {
+ case IIO_LIGHT:
+ mutex_lock(&data->lock);
+ ret = max44000_read_alsval(data);
+ mutex_unlock(&data->lock);
+ if (ret < 0)
+ return ret;
+ *val = ret;
+ return IIO_VAL_INT;
+
+ case IIO_PROXIMITY:
+ mutex_lock(&data->lock);
+ ret = regmap_read(data->regmap, MAX44000_REG_PRX_DATA, &regval);
+ mutex_unlock(&data->lock);
+ if (ret < 0)
+ return ret;
+ *val = regval;
+ return IIO_VAL_INT;
+
+ case IIO_CURRENT:
+ mutex_lock(&data->lock);
+ ret = max44000_read_led_current_raw(data);
+ mutex_unlock(&data->lock);
+ if (ret < 0)
+ return ret;
+ *val = ret;
+ return IIO_VAL_INT;
+
+ default:
+ return -EINVAL;
+ }
+
+ case IIO_CHAN_INFO_SCALE:
+ switch (chan->type) {
+ case IIO_CURRENT:
+ /* Output register is in units of 10 milliamps */
+ *val = 10;
+ return IIO_VAL_INT;
+
+ case IIO_LIGHT:
+ mutex_lock(&data->lock);
+ alspga = ret = max44000_read_alspga(data);
+ mutex_unlock(&data->lock);
+ if (ret < 0)
+ return ret;
+
+ /* Avoid negative shifts */
+ *val = (1 << MAX44000_ALSPGA_MAX_SHIFT);
+ *val2 = MAX44000_ALS_TO_LUX_DEFAULT_FRACTION_LOG2
+ + MAX44000_ALSPGA_MAX_SHIFT
+ - max44000_alspga_shift[alspga];
+ return IIO_VAL_FRACTIONAL_LOG2;
+
+ default:
+ return -EINVAL;
+ }
+
+ case IIO_CHAN_INFO_INT_TIME:
+ mutex_lock(&data->lock);
+ alstim = ret = max44000_read_alstim(data);
+ mutex_unlock(&data->lock);
+
+ if (ret < 0)
+ return ret;
+ *val = 0;
+ *val2 = max44000_int_time_avail_ns_array[alstim];
+ return IIO_VAL_INT_PLUS_NANO;
+
+ default:
+ return -EINVAL;
+ }
+}
+
+static int max44000_write_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ int val, int val2, long mask)
+{
+ struct max44000_data *data = iio_priv(indio_dev);
+ int ret;
+
+ if (mask == IIO_CHAN_INFO_RAW && chan->type == IIO_CURRENT) {
+ mutex_lock(&data->lock);
+ ret = max44000_write_led_current_raw(data, val);
+ mutex_unlock(&data->lock);
+ return ret;
+ } else if (mask == IIO_CHAN_INFO_INT_TIME && chan->type == IIO_LIGHT) {
+ s64 valns = val * NSEC_PER_SEC + val2;
+ int alstim = find_closest_descending(valns,
+ max44000_int_time_avail_ns_array,
+ ARRAY_SIZE(max44000_int_time_avail_ns_array));
+ mutex_lock(&data->lock);
+ ret = max44000_write_alstim(data, alstim);
+ mutex_unlock(&data->lock);
+ return ret;
+ } else if (mask == IIO_CHAN_INFO_SCALE && chan->type == IIO_LIGHT) {
+ s64 valus = val * USEC_PER_SEC + val2;
+ int alspga = find_closest(valus,
+ max44000_scale_avail_ulux_array,
+ ARRAY_SIZE(max44000_scale_avail_ulux_array));
+ mutex_lock(&data->lock);
+ ret = max44000_write_alspga(data, alspga);
+ mutex_unlock(&data->lock);
+ return ret;
+ }
+
+ return -EINVAL;
+}
+
+static int max44000_write_raw_get_fmt(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ long mask)
+{
+ if (mask == IIO_CHAN_INFO_INT_TIME && chan->type == IIO_LIGHT)
+ return IIO_VAL_INT_PLUS_NANO;
+ else if (mask == IIO_CHAN_INFO_SCALE && chan->type == IIO_LIGHT)
+ return IIO_VAL_INT_PLUS_MICRO;
+ else
+ return IIO_VAL_INT;
+}
+
+static IIO_CONST_ATTR(illuminance_integration_time_available, max44000_int_time_avail_str);
+static IIO_CONST_ATTR(illuminance_scale_available, max44000_scale_avail_str);
+
+static struct attribute *max44000_attributes[] = {
+ &iio_const_attr_illuminance_integration_time_available.dev_attr.attr,
+ &iio_const_attr_illuminance_scale_available.dev_attr.attr,
+ NULL
+};
+
+static const struct attribute_group max44000_attribute_group = {
+ .attrs = max44000_attributes,
+};
+
+static const struct iio_info max44000_info = {
+ .driver_module = THIS_MODULE,
+ .read_raw = max44000_read_raw,
+ .write_raw = max44000_write_raw,
+ .write_raw_get_fmt = max44000_write_raw_get_fmt,
+ .attrs = &max44000_attribute_group,
+};
+
+static bool max44000_readable_reg(struct device *dev, unsigned int reg)
+{
+ switch (reg) {
+ case MAX44000_REG_STATUS:
+ case MAX44000_REG_CFG_MAIN:
+ case MAX44000_REG_CFG_RX:
+ case MAX44000_REG_CFG_TX:
+ case MAX44000_REG_ALS_DATA_HI:
+ case MAX44000_REG_ALS_DATA_LO:
+ case MAX44000_REG_PRX_DATA:
+ case MAX44000_REG_ALS_UPTHR_HI:
+ case MAX44000_REG_ALS_UPTHR_LO:
+ case MAX44000_REG_ALS_LOTHR_HI:
+ case MAX44000_REG_ALS_LOTHR_LO:
+ case MAX44000_REG_PST:
+ case MAX44000_REG_PRX_IND:
+ case MAX44000_REG_PRX_THR:
+ case MAX44000_REG_TRIM_GAIN_GREEN:
+ case MAX44000_REG_TRIM_GAIN_IR:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static bool max44000_writeable_reg(struct device *dev, unsigned int reg)
+{
+ switch (reg) {
+ case MAX44000_REG_CFG_MAIN:
+ case MAX44000_REG_CFG_RX:
+ case MAX44000_REG_CFG_TX:
+ case MAX44000_REG_ALS_UPTHR_HI:
+ case MAX44000_REG_ALS_UPTHR_LO:
+ case MAX44000_REG_ALS_LOTHR_HI:
+ case MAX44000_REG_ALS_LOTHR_LO:
+ case MAX44000_REG_PST:
+ case MAX44000_REG_PRX_IND:
+ case MAX44000_REG_PRX_THR:
+ case MAX44000_REG_TRIM_GAIN_GREEN:
+ case MAX44000_REG_TRIM_GAIN_IR:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static bool max44000_volatile_reg(struct device *dev, unsigned int reg)
+{
+ switch (reg) {
+ case MAX44000_REG_STATUS:
+ case MAX44000_REG_ALS_DATA_HI:
+ case MAX44000_REG_ALS_DATA_LO:
+ case MAX44000_REG_PRX_DATA:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static bool max44000_precious_reg(struct device *dev, unsigned int reg)
+{
+ return reg == MAX44000_REG_STATUS;
+}
+
+static const struct regmap_config max44000_regmap_config = {
+ .reg_bits = 8,
+ .val_bits = 8,
+
+ .max_register = MAX44000_REG_PRX_DATA,
+ .readable_reg = max44000_readable_reg,
+ .writeable_reg = max44000_writeable_reg,
+ .volatile_reg = max44000_volatile_reg,
+ .precious_reg = max44000_precious_reg,
+
+ .use_single_rw = 1,
+ .cache_type = REGCACHE_RBTREE,
+};
+
+static irqreturn_t max44000_trigger_handler(int irq, void *p)
+{
+ struct iio_poll_func *pf = p;
+ struct iio_dev *indio_dev = pf->indio_dev;
+ struct max44000_data *data = iio_priv(indio_dev);
+ u16 buf[8]; /* 2x u16 + padding + 8 bytes timestamp */
+ int index = 0;
+ unsigned int regval;
+ int ret;
+
+ mutex_lock(&data->lock);
+ if (test_bit(MAX44000_SCAN_INDEX_ALS, indio_dev->active_scan_mask)) {
+ ret = max44000_read_alsval(data);
+ if (ret < 0)
+ goto out_unlock;
+ buf[index++] = ret;
+ }
+ if (test_bit(MAX44000_SCAN_INDEX_PRX, indio_dev->active_scan_mask)) {
+ ret = regmap_read(data->regmap, MAX44000_REG_PRX_DATA, &regval);
+ if (ret < 0)
+ goto out_unlock;
+ buf[index] = regval;
+ }
+ mutex_unlock(&data->lock);
+
+ iio_push_to_buffers_with_timestamp(indio_dev, buf, iio_get_time_ns());
+ iio_trigger_notify_done(indio_dev->trig);
+ return IRQ_HANDLED;
+
+out_unlock:
+ mutex_unlock(&data->lock);
+ iio_trigger_notify_done(indio_dev->trig);
+ return IRQ_HANDLED;
+}
+
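When both scan elements are enabled, the handler above pushes samples with the layout below (a named struct is shown only for illustration; the driver itself uses a bare u16 array):

struct max44000_scan {
	u16 als;	/* 14 valid bits */
	u16 prx;	/* 8 valid bits */
	/* 4 bytes of padding here so the timestamp is 8-byte aligned */
	s64 timestamp;	/* appended by iio_push_to_buffers_with_timestamp() */
};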
+static int max44000_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct max44000_data *data;
+ struct iio_dev *indio_dev;
+ int ret, reg;
+
+ indio_dev = devm_iio_device_alloc(&client->dev, sizeof(*data));
+ if (!indio_dev)
+ return -ENOMEM;
+ data = iio_priv(indio_dev);
+ data->regmap = devm_regmap_init_i2c(client, &max44000_regmap_config);
+ if (IS_ERR(data->regmap)) {
+ dev_err(&client->dev, "regmap_init failed!\n");
+ return PTR_ERR(data->regmap);
+ }
+
+ i2c_set_clientdata(client, indio_dev);
+ mutex_init(&data->lock);
+ indio_dev->dev.parent = &client->dev;
+ indio_dev->info = &max44000_info;
+ indio_dev->name = MAX44000_DRV_NAME;
+ indio_dev->channels = max44000_channels;
+ indio_dev->num_channels = ARRAY_SIZE(max44000_channels);
+
+ /*
+ * The device doesn't have a reset function so we just clear some
+ * important bits at probe time to ensure sane operation.
+ *
+ * Since we don't support interrupts/events the threshold values are
+ * not important. We also don't touch trim values.
+ */
+
+ /* Reset ALS scaling bits */
+ ret = regmap_write(data->regmap, MAX44000_REG_CFG_RX,
+ MAX44000_REG_CFG_RX_DEFAULT);
+ if (ret < 0) {
+ dev_err(&client->dev, "failed to write default CFG_RX: %d\n",
+ ret);
+ return ret;
+ }
+
+ /*
+ * By default the LED pulse used for the proximity sensor is disabled.
+ * Set a middle value so that we get some sort of valid data by default.
+ */
+ ret = max44000_write_led_current_raw(data, MAX44000_LED_CURRENT_DEFAULT);
+ if (ret < 0) {
+ dev_err(&client->dev, "failed to write init config: %d\n", ret);
+ return ret;
+ }
+
+ /* Reset CFG bits to ALS_PRX mode which allows easy reading of both values. */
+ reg = MAX44000_CFG_TRIM | MAX44000_CFG_MODE_ALS_PRX;
+ ret = regmap_write(data->regmap, MAX44000_REG_CFG_MAIN, reg);
+ if (ret < 0) {
+ dev_err(&client->dev, "failed to write init config: %d\n", ret);
+ return ret;
+ }
+
+ /* Read status at least once to clear any stale interrupt bits. */
+ ret = regmap_read(data->regmap, MAX44000_REG_STATUS, &reg);
+ if (ret < 0) {
+ dev_err(&client->dev, "failed to read init status: %d\n", ret);
+ return ret;
+ }
+
+ ret = iio_triggered_buffer_setup(indio_dev, NULL, max44000_trigger_handler, NULL);
+ if (ret < 0) {
+ dev_err(&client->dev, "iio triggered buffer setup failed\n");
+ return ret;
+ }
+
+ return iio_device_register(indio_dev);
+}
+
+static int max44000_remove(struct i2c_client *client)
+{
+ struct iio_dev *indio_dev = i2c_get_clientdata(client);
+
+ iio_device_unregister(indio_dev);
+ iio_triggered_buffer_cleanup(indio_dev);
+
+ return 0;
+}
+
+static const struct i2c_device_id max44000_id[] = {
+ {"max44000", 0},
+ { }
+};
+MODULE_DEVICE_TABLE(i2c, max44000_id);
+
+#ifdef CONFIG_ACPI
+static const struct acpi_device_id max44000_acpi_match[] = {
+ {"MAX44000", 0},
+ { }
+};
+MODULE_DEVICE_TABLE(acpi, max44000_acpi_match);
+#endif
+
+static struct i2c_driver max44000_driver = {
+ .driver = {
+ .name = MAX44000_DRV_NAME,
+ .acpi_match_table = ACPI_PTR(max44000_acpi_match),
+ },
+ .probe = max44000_probe,
+ .remove = max44000_remove,
+ .id_table = max44000_id,
+};
+
+module_i2c_driver(max44000_driver);
+
+MODULE_AUTHOR("Crestez Dan Leonard <leonard.crestez@intel.com>");
+MODULE_DESCRIPTION("MAX44000 Ambient and Infrared Proximity Sensor");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/iio/light/stk3310.c b/drivers/iio/light/stk3310.c
index 42d334ba612e..9e847f8f4f0c 100644
--- a/drivers/iio/light/stk3310.c
+++ b/drivers/iio/light/stk3310.c
@@ -16,7 +16,6 @@
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/regmap.h>
-#include <linux/gpio/consumer.h>
#include <linux/iio/events.h>
#include <linux/iio/iio.h>
#include <linux/iio/sysfs.h>
diff --git a/drivers/iio/light/tsl2563.c b/drivers/iio/light/tsl2563.c
index 12731d6b89ec..57b108c30e98 100644
--- a/drivers/iio/light/tsl2563.c
+++ b/drivers/iio/light/tsl2563.c
@@ -806,8 +806,7 @@ static int tsl2563_probe(struct i2c_client *client,
return 0;
fail:
- cancel_delayed_work(&chip->poweroff_work);
- flush_scheduled_work();
+ cancel_delayed_work_sync(&chip->poweroff_work);
return err;
}
diff --git a/drivers/iio/light/veml6070.c b/drivers/iio/light/veml6070.c
new file mode 100644
index 000000000000..bc1c4cb782cd
--- /dev/null
+++ b/drivers/iio/light/veml6070.c
@@ -0,0 +1,218 @@
+/*
+ * veml6070.c - Support for Vishay VEML6070 UV A light sensor
+ *
+ * Copyright 2016 Peter Meerwald-Stadler <pmeerw@pmeerw.net>
+ *
+ * This file is subject to the terms and conditions of version 2 of
+ * the GNU General Public License. See the file COPYING in the main
+ * directory of this archive for more details.
+ *
+ * IIO driver for VEML6070 (7-bit I2C slave addresses 0x38 and 0x39)
+ *
+ * TODO: integration time, ACK signal
+ */
+
+#include <linux/module.h>
+#include <linux/i2c.h>
+#include <linux/mutex.h>
+#include <linux/err.h>
+#include <linux/delay.h>
+
+#include <linux/iio/iio.h>
+#include <linux/iio/sysfs.h>
+
+#define VEML6070_DRV_NAME "veml6070"
+
+#define VEML6070_ADDR_CONFIG_DATA_MSB 0x38 /* read: MSB data, write: config */
+#define VEML6070_ADDR_DATA_LSB 0x39 /* LSB data */
+
+#define VEML6070_COMMAND_ACK BIT(5) /* raise interrupt when over threshold */
+#define VEML6070_COMMAND_IT GENMASK(3, 2) /* bit mask integration time */
+#define VEML6070_COMMAND_RSRVD BIT(1) /* reserved, set to 1 */
+#define VEML6070_COMMAND_SD BIT(0) /* shutdown mode when set */
+
+#define VEML6070_IT_10 0x04 /* integration time 1x */
+
+struct veml6070_data {
+ struct i2c_client *client1;
+ struct i2c_client *client2;
+ u8 config;
+ struct mutex lock;
+};
+
+static int veml6070_read(struct veml6070_data *data)
+{
+ int ret;
+ u8 msb, lsb;
+
+ mutex_lock(&data->lock);
+
+ /* disable shutdown */
+ ret = i2c_smbus_write_byte(data->client1,
+ data->config & ~VEML6070_COMMAND_SD);
+ if (ret < 0)
+ goto out;
+
+ msleep(125 + 10); /* measurement takes up to 125 ms for IT 1x */
+
+ ret = i2c_smbus_read_byte(data->client2); /* read MSB, address 0x39 */
+ if (ret < 0)
+ goto out;
+ msb = ret;
+
+ ret = i2c_smbus_read_byte(data->client1); /* read LSB, address 0x38 */
+ if (ret < 0)
+ goto out;
+ lsb = ret;
+
+ /* shutdown again */
+ ret = i2c_smbus_write_byte(data->client1, data->config);
+ if (ret < 0)
+ goto out;
+
+ ret = (msb << 8) | lsb;
+
+out:
+ mutex_unlock(&data->lock);
+ return ret;
+}
+
+static const struct iio_chan_spec veml6070_channels[] = {
+ {
+ .type = IIO_INTENSITY,
+ .modified = 1,
+ .channel2 = IIO_MOD_LIGHT_UV,
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ },
+ {
+ .type = IIO_UVINDEX,
+ .info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED),
+ }
+};
+
+static int veml6070_to_uv_index(unsigned val)
+{
+ /*
+ * conversion of raw UV intensity values to UV index depends on
+ * integration time (IT) and value of the resistor connected to
+ * the RSET pin (default: 270 KOhm)
+ */
+ static const unsigned int uvi[11] = {
+ 187, 373, 560, /* low */
+ 746, 933, 1120, /* moderate */
+ 1308, 1494, /* high */
+ 1681, 1868, 2054}; /* very high */
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(uvi); i++)
+ if (val <= uvi[i])
+ return i;
+
+ return 11; /* extreme */
+}
+
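A quick worked example of the table lookup above, assuming the default 270 kOhm RSET and the 1x integration time programmed in probe:

/*
 *   veml6070_to_uv_index(150)  -> 0   (low)
 *   veml6070_to_uv_index(700)  -> 3   (moderate: 560 < 700 <= 746)
 *   veml6070_to_uv_index(3000) -> 11  (extreme, above the table)
 */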
+static int veml6070_read_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ int *val, int *val2, long mask)
+{
+ struct veml6070_data *data = iio_priv(indio_dev);
+ int ret;
+
+ switch (mask) {
+ case IIO_CHAN_INFO_RAW:
+ case IIO_CHAN_INFO_PROCESSED:
+ ret = veml6070_read(data);
+ if (ret < 0)
+ return ret;
+ if (mask == IIO_CHAN_INFO_PROCESSED)
+ *val = veml6070_to_uv_index(ret);
+ else
+ *val = ret;
+ return IIO_VAL_INT;
+ default:
+ return -EINVAL;
+ }
+}
+
+static const struct iio_info veml6070_info = {
+ .read_raw = veml6070_read_raw,
+ .driver_module = THIS_MODULE,
+};
+
+static int veml6070_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct veml6070_data *data;
+ struct iio_dev *indio_dev;
+ int ret;
+
+ indio_dev = devm_iio_device_alloc(&client->dev, sizeof(*data));
+ if (!indio_dev)
+ return -ENOMEM;
+
+ data = iio_priv(indio_dev);
+ i2c_set_clientdata(client, indio_dev);
+ data->client1 = client;
+ mutex_init(&data->lock);
+
+ indio_dev->dev.parent = &client->dev;
+ indio_dev->info = &veml6070_info;
+ indio_dev->channels = veml6070_channels;
+ indio_dev->num_channels = ARRAY_SIZE(veml6070_channels);
+ indio_dev->name = VEML6070_DRV_NAME;
+ indio_dev->modes = INDIO_DIRECT_MODE;
+
+ data->client2 = i2c_new_dummy(client->adapter, VEML6070_ADDR_DATA_LSB);
+ if (!data->client2) {
+ dev_err(&client->dev, "i2c device for second chip address failed\n");
+ return -ENODEV;
+ }
+
+ data->config = VEML6070_IT_10 | VEML6070_COMMAND_RSRVD |
+ VEML6070_COMMAND_SD;
+ ret = i2c_smbus_write_byte(data->client1, data->config);
+ if (ret < 0)
+ goto fail;
+
+ ret = iio_device_register(indio_dev);
+ if (ret < 0)
+ goto fail;
+
+ return ret;
+
+fail:
+ i2c_unregister_device(data->client2);
+ return ret;
+}
+
+static int veml6070_remove(struct i2c_client *client)
+{
+ struct iio_dev *indio_dev = i2c_get_clientdata(client);
+ struct veml6070_data *data = iio_priv(indio_dev);
+
+ iio_device_unregister(indio_dev);
+ i2c_unregister_device(data->client2);
+
+ return 0;
+}
+
+static const struct i2c_device_id veml6070_id[] = {
+ { "veml6070", 0 },
+ { }
+};
+MODULE_DEVICE_TABLE(i2c, veml6070_id);
+
+static struct i2c_driver veml6070_driver = {
+ .driver = {
+ .name = VEML6070_DRV_NAME,
+ },
+ .probe = veml6070_probe,
+ .remove = veml6070_remove,
+ .id_table = veml6070_id,
+};
+
+module_i2c_driver(veml6070_driver);
+
+MODULE_AUTHOR("Peter Meerwald-Stadler <pmeerw@pmeerw.net>");
+MODULE_DESCRIPTION("Vishay VEML6070 UV A light sensor driver");
+MODULE_LICENSE("GPL");
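The driver binds to the 0x38 address only, and the I2C core hands a driver just the client it was probed with, so the second data address has to be reserved explicitly with i2c_new_dummy() and handed back with i2c_unregister_device() on every exit path. Reduced to its essentials (sketch, error handling trimmed):

	data->client2 = i2c_new_dummy(client->adapter, VEML6070_ADDR_DATA_LSB);
	if (!data->client2)
		return -ENODEV;

	/* both halves of the measurement are now readable with
	 * i2c_smbus_read_byte() on client1 and client2 */

	i2c_unregister_device(data->client2);	/* in remove() and on errors */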
diff --git a/drivers/iio/magnetometer/Kconfig b/drivers/iio/magnetometer/Kconfig
index 021dc5361f53..84e6559ccc65 100644
--- a/drivers/iio/magnetometer/Kconfig
+++ b/drivers/iio/magnetometer/Kconfig
@@ -9,6 +9,8 @@ config AK8975
tristate "Asahi Kasei AK 3-Axis Magnetometer"
depends on I2C
depends on GPIOLIB || COMPILE_TEST
+ select IIO_BUFFER
+ select IIO_TRIGGERED_BUFFER
help
Say yes here to build support for Asahi Kasei AK8975, AK8963,
AK09911 or AK09912 3-Axis Magnetometer.
@@ -25,22 +27,41 @@ config AK09911
Deprecated: AK09911 is now supported by AK8975 driver.
config BMC150_MAGN
- tristate "Bosch BMC150 Magnetometer Driver"
- depends on I2C
- select REGMAP_I2C
+ tristate
select IIO_BUFFER
select IIO_TRIGGERED_BUFFER
+
+config BMC150_MAGN_I2C
+ tristate "Bosch BMC150 I2C Magnetometer Driver"
+ depends on I2C
+ select BMC150_MAGN
+ select REGMAP_I2C
help
- Say yes here to build support for the BMC150 magnetometer.
+ Say yes here to build support for the BMC150 magnetometer with
+ I2C interface.
- Currently this only supports the device via an i2c interface.
+ This is a combo module with both accelerometer and magnetometer.
+ This driver is only implementing magnetometer part, which has
+ its own address and register map.
+
+ To compile this driver as a module, choose M here: the module will be
+ called bmc150_magn_i2c.
+
+config BMC150_MAGN_SPI
+ tristate "Bosch BMC150 SPI Magnetometer Driver"
+ depends on SPI
+ select BMC150_MAGN
+ select REGMAP_SPI
+ help
+ Say yes here to build support for the BMC150 magnetometer with
+ SPI interface.
This is a combo module with both accelerometer and magnetometer.
This driver is only implementing magnetometer part, which has
its own address and register map.
To compile this driver as a module, choose M here: the module will be
- called bmc150_magn.
+ called bmc150_magn_spi.
config MAG3110
tristate "Freescale MAG3110 3-Axis Magnetometer"
diff --git a/drivers/iio/magnetometer/Makefile b/drivers/iio/magnetometer/Makefile
index dd03fe524481..92a745c9a6e8 100644
--- a/drivers/iio/magnetometer/Makefile
+++ b/drivers/iio/magnetometer/Makefile
@@ -5,6 +5,9 @@
# When adding new entries keep the list in alphabetical order
obj-$(CONFIG_AK8975) += ak8975.o
obj-$(CONFIG_BMC150_MAGN) += bmc150_magn.o
+obj-$(CONFIG_BMC150_MAGN_I2C) += bmc150_magn_i2c.o
+obj-$(CONFIG_BMC150_MAGN_SPI) += bmc150_magn_spi.o
+
obj-$(CONFIG_MAG3110) += mag3110.o
obj-$(CONFIG_HID_SENSOR_MAGNETOMETER_3D) += hid-sensor-magn-3d.o
obj-$(CONFIG_MMC35240) += mmc35240.o
diff --git a/drivers/iio/magnetometer/ak8975.c b/drivers/iio/magnetometer/ak8975.c
index 0e931a9a1669..609a2c401b5d 100644
--- a/drivers/iio/magnetometer/ak8975.c
+++ b/drivers/iio/magnetometer/ak8975.c
@@ -32,9 +32,17 @@
#include <linux/gpio.h>
#include <linux/of_gpio.h>
#include <linux/acpi.h>
+#include <linux/regulator/consumer.h>
#include <linux/iio/iio.h>
#include <linux/iio/sysfs.h>
+#include <linux/iio/buffer.h>
+#include <linux/iio/trigger.h>
+#include <linux/iio/trigger_consumer.h>
+#include <linux/iio/triggered_buffer.h>
+
+#include <linux/iio/magnetometer/ak8975.h>
+
/*
* Register definitions, as well as various shifts and masks to get at the
* individual fields of the registers.
@@ -361,7 +369,6 @@ static const struct ak_def ak_def_array[AK_MAX_TYPE] = {
struct ak8975_data {
struct i2c_client *client;
const struct ak_def *def;
- struct attribute_group attrs;
struct mutex lock;
u8 asa[3];
long raw_to_gauss[3];
@@ -370,8 +377,41 @@ struct ak8975_data {
wait_queue_head_t data_ready_queue;
unsigned long flags;
u8 cntl_cache;
+ struct iio_mount_matrix orientation;
+ struct regulator *vdd;
};
+/* Enable attached power regulator if any. */
+static int ak8975_power_on(struct i2c_client *client)
+{
+ const struct iio_dev *indio_dev = i2c_get_clientdata(client);
+ struct ak8975_data *data = iio_priv(indio_dev);
+ int ret;
+
+ data->vdd = devm_regulator_get(&client->dev, "vdd");
+ if (IS_ERR_OR_NULL(data->vdd)) {
+ ret = PTR_ERR(data->vdd);
+ if (ret == -ENODEV)
+ ret = 0;
+ } else {
+ ret = regulator_enable(data->vdd);
+ }
+
+ if (ret)
+ dev_err(&client->dev, "failed to enable Vdd supply: %d\n", ret);
+ return ret;
+}
+
+/* Disable attached power regulator if any. */
+static void ak8975_power_off(const struct i2c_client *client)
+{
+ const struct iio_dev *indio_dev = i2c_get_clientdata(client);
+ const struct ak8975_data *data = iio_priv(indio_dev);
+
+ if (!IS_ERR_OR_NULL(data->vdd))
+ regulator_disable(data->vdd);
+}
+
/*
* Return 0 if the i2c device is the one we expect.
* return a negative error number otherwise
@@ -601,22 +641,15 @@ static int wait_conversion_complete_interrupt(struct ak8975_data *data)
return ret > 0 ? 0 : -ETIME;
}
-/*
- * Emits the raw flux value for the x, y, or z axis.
- */
-static int ak8975_read_axis(struct iio_dev *indio_dev, int index, int *val)
+static int ak8975_start_read_axis(struct ak8975_data *data,
+ const struct i2c_client *client)
{
- struct ak8975_data *data = iio_priv(indio_dev);
- struct i2c_client *client = data->client;
- int ret;
-
- mutex_lock(&data->lock);
-
/* Set up the device for taking a sample. */
- ret = ak8975_set_mode(data, MODE_ONCE);
+ int ret = ak8975_set_mode(data, MODE_ONCE);
+
if (ret < 0) {
dev_err(&client->dev, "Error in setting operating mode\n");
- goto exit;
+ return ret;
}
/* Wait for the conversion to complete. */
@@ -627,7 +660,7 @@ static int ak8975_read_axis(struct iio_dev *indio_dev, int index, int *val)
else
ret = wait_conversion_complete_polled(data);
if (ret < 0)
- goto exit;
+ return ret;
/* This will be executed only for non-interrupt based waiting case */
if (ret & data->def->ctrl_masks[ST1_DRDY]) {
@@ -635,32 +668,45 @@ static int ak8975_read_axis(struct iio_dev *indio_dev, int index, int *val)
data->def->ctrl_regs[ST2]);
if (ret < 0) {
dev_err(&client->dev, "Error in reading ST2\n");
- goto exit;
+ return ret;
}
if (ret & (data->def->ctrl_masks[ST2_DERR] |
data->def->ctrl_masks[ST2_HOFL])) {
dev_err(&client->dev, "ST2 status error 0x%x\n", ret);
- ret = -EINVAL;
- goto exit;
+ return -EINVAL;
}
}
- /* Read the flux value from the appropriate register
- (the register is specified in the iio device attributes). */
- ret = i2c_smbus_read_word_data(client, data->def->data_regs[index]);
- if (ret < 0) {
- dev_err(&client->dev, "Read axis data fails\n");
+ return 0;
+}
+
+/* Retrieve the raw flux value for one of the x, y, or z axes. */
+static int ak8975_read_axis(struct iio_dev *indio_dev, int index, int *val)
+{
+ struct ak8975_data *data = iio_priv(indio_dev);
+ const struct i2c_client *client = data->client;
+ const struct ak_def *def = data->def;
+ int ret;
+
+ mutex_lock(&data->lock);
+
+ ret = ak8975_start_read_axis(data, client);
+ if (ret)
+ goto exit;
+
+ ret = i2c_smbus_read_word_data(client, def->data_regs[index]);
+ if (ret < 0)
goto exit;
- }
mutex_unlock(&data->lock);
/* Clamp to valid range. */
- *val = clamp_t(s16, ret, -data->def->range, data->def->range);
+ *val = clamp_t(s16, ret, -def->range, def->range);
return IIO_VAL_INT;
exit:
mutex_unlock(&data->lock);
+ dev_err(&client->dev, "Error in reading axis\n");
return ret;
}
@@ -682,6 +728,18 @@ static int ak8975_read_raw(struct iio_dev *indio_dev,
return -EINVAL;
}
+static const struct iio_mount_matrix *
+ak8975_get_mount_matrix(const struct iio_dev *indio_dev,
+ const struct iio_chan_spec *chan)
+{
+ return &((struct ak8975_data *)iio_priv(indio_dev))->orientation;
+}
+
+static const struct iio_chan_spec_ext_info ak8975_ext_info[] = {
+ IIO_MOUNT_MATRIX(IIO_SHARED_BY_DIR, ak8975_get_mount_matrix),
+ { },
+};
+
#define AK8975_CHANNEL(axis, index) \
{ \
.type = IIO_MAGN, \
@@ -690,12 +748,23 @@ static int ak8975_read_raw(struct iio_dev *indio_dev,
.info_mask_separate = BIT(IIO_CHAN_INFO_RAW) | \
BIT(IIO_CHAN_INFO_SCALE), \
.address = index, \
+ .scan_index = index, \
+ .scan_type = { \
+ .sign = 's', \
+ .realbits = 16, \
+ .storagebits = 16, \
+ .endianness = IIO_CPU \
+ }, \
+ .ext_info = ak8975_ext_info, \
}
static const struct iio_chan_spec ak8975_channels[] = {
AK8975_CHANNEL(X, 0), AK8975_CHANNEL(Y, 1), AK8975_CHANNEL(Z, 2),
+ IIO_CHAN_SOFT_TIMESTAMP(3),
};
+static const unsigned long ak8975_scan_masks[] = { 0x7, 0 };
+
static const struct iio_info ak8975_info = {
.read_raw = &ak8975_read_raw,
.driver_module = THIS_MODULE,
@@ -724,6 +793,56 @@ static const char *ak8975_match_acpi_device(struct device *dev,
return dev_name(dev);
}
+static void ak8975_fill_buffer(struct iio_dev *indio_dev)
+{
+ struct ak8975_data *data = iio_priv(indio_dev);
+ const struct i2c_client *client = data->client;
+ const struct ak_def *def = data->def;
+ int ret;
+ s16 buff[8]; /* 3 x 16 bits axis values + 1 aligned 64 bits timestamp */
+
+ mutex_lock(&data->lock);
+
+ ret = ak8975_start_read_axis(data, client);
+ if (ret)
+ goto unlock;
+
+ /*
+ * Read the flux values for all three axes in a single block transfer,
+ * starting at the first data register.
+ */
+ ret = i2c_smbus_read_i2c_block_data_or_emulated(client,
+ def->data_regs[0],
+ 3 * sizeof(buff[0]),
+ (u8 *)buff);
+ if (ret < 0)
+ goto unlock;
+
+ mutex_unlock(&data->lock);
+
+ /* Clamp to valid range. */
+ buff[0] = clamp_t(s16, le16_to_cpu(buff[0]), -def->range, def->range);
+ buff[1] = clamp_t(s16, le16_to_cpu(buff[1]), -def->range, def->range);
+ buff[2] = clamp_t(s16, le16_to_cpu(buff[2]), -def->range, def->range);
+
+ iio_push_to_buffers_with_timestamp(indio_dev, buff, iio_get_time_ns());
+ return;
+
+unlock:
+ mutex_unlock(&data->lock);
+ dev_err(&client->dev, "Error in reading axes block\n");
+}
+
+static irqreturn_t ak8975_handle_trigger(int irq, void *p)
+{
+ const struct iio_poll_func *pf = p;
+ struct iio_dev *indio_dev = pf->indio_dev;
+
+ ak8975_fill_buffer(indio_dev);
+ iio_trigger_notify_done(indio_dev->trig);
+ return IRQ_HANDLED;
+}
+
static int ak8975_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
@@ -733,10 +852,12 @@ static int ak8975_probe(struct i2c_client *client,
int err;
const char *name = NULL;
enum asahi_compass_chipset chipset = AK_MAX_TYPE;
+ const struct ak8975_platform_data *pdata =
+ dev_get_platdata(&client->dev);
/* Grab and set up the supplied GPIO. */
- if (client->dev.platform_data)
- eoc_gpio = *(int *)(client->dev.platform_data);
+ if (pdata)
+ eoc_gpio = pdata->eoc_gpio;
else if (client->dev.of_node)
eoc_gpio = of_get_gpio(client->dev.of_node, 0);
else
@@ -770,13 +891,24 @@ static int ak8975_probe(struct i2c_client *client,
data->eoc_gpio = eoc_gpio;
data->eoc_irq = 0;
+ if (!pdata) {
+ err = of_iio_read_mount_matrix(&client->dev,
+ "mount-matrix",
+ &data->orientation);
+ if (err)
+ return err;
+ } else
+ data->orientation = pdata->orientation;
+
/* id will be NULL when enumerated via ACPI */
if (id) {
chipset = (enum asahi_compass_chipset)(id->driver_data);
name = id->name;
- } else if (ACPI_HANDLE(&client->dev))
+ } else if (ACPI_HANDLE(&client->dev)) {
name = ak8975_match_acpi_device(&client->dev, &chipset);
- else
+ if (!name)
+ return -ENODEV;
+ } else
return -ENOSYS;
if (chipset >= AK_MAX_TYPE) {
@@ -786,10 +918,15 @@ static int ak8975_probe(struct i2c_client *client,
}
data->def = &ak_def_array[chipset];
+
+ err = ak8975_power_on(client);
+ if (err)
+ return err;
+
err = ak8975_who_i_am(client, data->def->type);
if (err < 0) {
dev_err(&client->dev, "Unexpected device\n");
- return err;
+ goto power_off;
}
dev_dbg(&client->dev, "Asahi compass chip %s\n", name);
@@ -797,7 +934,7 @@ static int ak8975_probe(struct i2c_client *client,
err = ak8975_setup(client);
if (err < 0) {
dev_err(&client->dev, "%s initialization fails\n", name);
- return err;
+ goto power_off;
}
mutex_init(&data->lock);
@@ -805,9 +942,41 @@ static int ak8975_probe(struct i2c_client *client,
indio_dev->channels = ak8975_channels;
indio_dev->num_channels = ARRAY_SIZE(ak8975_channels);
indio_dev->info = &ak8975_info;
+ indio_dev->available_scan_masks = ak8975_scan_masks;
indio_dev->modes = INDIO_DIRECT_MODE;
indio_dev->name = name;
- return devm_iio_device_register(&client->dev, indio_dev);
+
+ err = iio_triggered_buffer_setup(indio_dev, NULL, ak8975_handle_trigger,
+ NULL);
+ if (err) {
+ dev_err(&client->dev, "triggered buffer setup failed\n");
+ goto power_off;
+ }
+
+ err = iio_device_register(indio_dev);
+ if (err) {
+ dev_err(&client->dev, "device register failed\n");
+ goto cleanup_buffer;
+ }
+
+ return 0;
+
+cleanup_buffer:
+ iio_triggered_buffer_cleanup(indio_dev);
+power_off:
+ ak8975_power_off(client);
+ return err;
+}
+
+static int ak8975_remove(struct i2c_client *client)
+{
+ struct iio_dev *indio_dev = i2c_get_clientdata(client);
+
+ iio_device_unregister(indio_dev);
+ iio_triggered_buffer_cleanup(indio_dev);
+ ak8975_power_off(client);
+
+ return 0;
}
static const struct i2c_device_id ak8975_id[] = {
@@ -841,6 +1010,7 @@ static struct i2c_driver ak8975_driver = {
.acpi_match_table = ACPI_PTR(ak_acpi_match),
},
.probe = ak8975_probe,
+ .remove = ak8975_remove,
.id_table = ak8975_id,
};
module_i2c_driver(ak8975_driver);
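ak8975_power_on() above treats a missing "vdd" supply as optional by mapping -ENODEV to success. An alternative shape for the same idea, using the API intended for optional supplies, is sketched below; this is not the driver's code (note that plain devm_regulator_get() normally returns a dummy regulator rather than an error when no supply is described):

	data->vdd = devm_regulator_get_optional(&client->dev, "vdd");
	if (IS_ERR(data->vdd)) {
		if (PTR_ERR(data->vdd) != -ENODEV)
			return PTR_ERR(data->vdd);
		data->vdd = NULL;		/* no supply described */
	} else {
		ret = regulator_enable(data->vdd);
		if (ret)
			return ret;
	}

The matching power-off path then only calls regulator_disable() when data->vdd is non-NULL.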
diff --git a/drivers/iio/magnetometer/bmc150_magn.c b/drivers/iio/magnetometer/bmc150_magn.c
index ffcb75ea64fb..d104fb8d9379 100644
--- a/drivers/iio/magnetometer/bmc150_magn.c
+++ b/drivers/iio/magnetometer/bmc150_magn.c
@@ -23,7 +23,6 @@
#include <linux/delay.h>
#include <linux/slab.h>
#include <linux/acpi.h>
-#include <linux/gpio/consumer.h>
#include <linux/pm.h>
#include <linux/pm_runtime.h>
#include <linux/iio/iio.h>
@@ -35,6 +34,8 @@
#include <linux/iio/triggered_buffer.h>
#include <linux/regmap.h>
+#include "bmc150_magn.h"
+
#define BMC150_MAGN_DRV_NAME "bmc150_magn"
#define BMC150_MAGN_IRQ_NAME "bmc150_magn_event"
@@ -135,7 +136,7 @@ struct bmc150_magn_trim_regs {
} __packed;
struct bmc150_magn_data {
- struct i2c_client *client;
+ struct device *dev;
/*
* 1. Protect this structure.
* 2. Serialize sequences that power on/off the device and access HW.
@@ -147,6 +148,7 @@ struct bmc150_magn_data {
struct iio_trigger *dready_trig;
bool dready_trigger_on;
int max_odr;
+ int irq;
};
static const struct {
@@ -216,7 +218,7 @@ static bool bmc150_magn_is_volatile_reg(struct device *dev, unsigned int reg)
}
}
-static const struct regmap_config bmc150_magn_regmap_config = {
+const struct regmap_config bmc150_magn_regmap_config = {
.reg_bits = 8,
.val_bits = 8,
@@ -226,6 +228,7 @@ static const struct regmap_config bmc150_magn_regmap_config = {
.writeable_reg = bmc150_magn_is_writeable_reg,
.volatile_reg = bmc150_magn_is_volatile_reg,
};
+EXPORT_SYMBOL(bmc150_magn_regmap_config);
static int bmc150_magn_set_power_mode(struct bmc150_magn_data *data,
enum bmc150_magn_power_modes mode,
@@ -264,17 +267,17 @@ static int bmc150_magn_set_power_state(struct bmc150_magn_data *data, bool on)
int ret;
if (on) {
- ret = pm_runtime_get_sync(&data->client->dev);
+ ret = pm_runtime_get_sync(data->dev);
} else {
- pm_runtime_mark_last_busy(&data->client->dev);
- ret = pm_runtime_put_autosuspend(&data->client->dev);
+ pm_runtime_mark_last_busy(data->dev);
+ ret = pm_runtime_put_autosuspend(data->dev);
}
if (ret < 0) {
- dev_err(&data->client->dev,
+ dev_err(data->dev,
"failed to change power state to %d\n", on);
if (on)
- pm_runtime_put_noidle(&data->client->dev);
+ pm_runtime_put_noidle(data->dev);
return ret;
}
@@ -351,7 +354,7 @@ static int bmc150_magn_set_max_odr(struct bmc150_magn_data *data, int rep_xy,
/* the maximum selectable read-out frequency from datasheet */
max_odr = 1000000 / (145 * rep_xy + 500 * rep_z + 980);
if (odr > max_odr) {
- dev_err(&data->client->dev,
+ dev_err(data->dev,
"Can't set oversampling with sampling freq %d\n",
odr);
return -EINVAL;
@@ -685,27 +688,27 @@ static int bmc150_magn_init(struct bmc150_magn_data *data)
ret = bmc150_magn_set_power_mode(data, BMC150_MAGN_POWER_MODE_SUSPEND,
false);
if (ret < 0) {
- dev_err(&data->client->dev,
+ dev_err(data->dev,
"Failed to bring up device from suspend mode\n");
return ret;
}
ret = regmap_read(data->regmap, BMC150_MAGN_REG_CHIP_ID, &chip_id);
if (ret < 0) {
- dev_err(&data->client->dev, "Failed reading chip id\n");
+ dev_err(data->dev, "Failed reading chip id\n");
goto err_poweroff;
}
if (chip_id != BMC150_MAGN_CHIP_ID_VAL) {
- dev_err(&data->client->dev, "Invalid chip id 0x%x\n", chip_id);
+ dev_err(data->dev, "Invalid chip id 0x%x\n", chip_id);
ret = -ENODEV;
goto err_poweroff;
}
- dev_dbg(&data->client->dev, "Chip id %x\n", chip_id);
+ dev_dbg(data->dev, "Chip id %x\n", chip_id);
preset = bmc150_magn_presets_table[BMC150_MAGN_DEFAULT_PRESET];
ret = bmc150_magn_set_odr(data, preset.odr);
if (ret < 0) {
- dev_err(&data->client->dev, "Failed to set ODR to %d\n",
+ dev_err(data->dev, "Failed to set ODR to %d\n",
preset.odr);
goto err_poweroff;
}
@@ -713,7 +716,7 @@ static int bmc150_magn_init(struct bmc150_magn_data *data)
ret = regmap_write(data->regmap, BMC150_MAGN_REG_REP_XY,
BMC150_MAGN_REPXY_TO_REGVAL(preset.rep_xy));
if (ret < 0) {
- dev_err(&data->client->dev, "Failed to set REP XY to %d\n",
+ dev_err(data->dev, "Failed to set REP XY to %d\n",
preset.rep_xy);
goto err_poweroff;
}
@@ -721,7 +724,7 @@ static int bmc150_magn_init(struct bmc150_magn_data *data)
ret = regmap_write(data->regmap, BMC150_MAGN_REG_REP_Z,
BMC150_MAGN_REPZ_TO_REGVAL(preset.rep_z));
if (ret < 0) {
- dev_err(&data->client->dev, "Failed to set REP Z to %d\n",
+ dev_err(data->dev, "Failed to set REP Z to %d\n",
preset.rep_z);
goto err_poweroff;
}
@@ -734,7 +737,7 @@ static int bmc150_magn_init(struct bmc150_magn_data *data)
ret = bmc150_magn_set_power_mode(data, BMC150_MAGN_POWER_MODE_NORMAL,
true);
if (ret < 0) {
- dev_err(&data->client->dev, "Failed to power on device\n");
+ dev_err(data->dev, "Failed to power on device\n");
goto err_poweroff;
}
@@ -843,41 +846,33 @@ static const char *bmc150_magn_match_acpi_device(struct device *dev)
return dev_name(dev);
}
-static int bmc150_magn_probe(struct i2c_client *client,
- const struct i2c_device_id *id)
+int bmc150_magn_probe(struct device *dev, struct regmap *regmap,
+ int irq, const char *name)
{
struct bmc150_magn_data *data;
struct iio_dev *indio_dev;
- const char *name = NULL;
int ret;
- indio_dev = devm_iio_device_alloc(&client->dev, sizeof(*data));
+ indio_dev = devm_iio_device_alloc(dev, sizeof(*data));
if (!indio_dev)
return -ENOMEM;
data = iio_priv(indio_dev);
- i2c_set_clientdata(client, indio_dev);
- data->client = client;
+ dev_set_drvdata(dev, indio_dev);
+ data->regmap = regmap;
+ data->irq = irq;
+ data->dev = dev;
- if (id)
- name = id->name;
- else if (ACPI_HANDLE(&client->dev))
- name = bmc150_magn_match_acpi_device(&client->dev);
- else
- return -ENOSYS;
+ if (!name && ACPI_HANDLE(dev))
+ name = bmc150_magn_match_acpi_device(dev);
mutex_init(&data->mutex);
- data->regmap = devm_regmap_init_i2c(client, &bmc150_magn_regmap_config);
- if (IS_ERR(data->regmap)) {
- dev_err(&client->dev, "Failed to allocate register map\n");
- return PTR_ERR(data->regmap);
- }
ret = bmc150_magn_init(data);
if (ret < 0)
return ret;
- indio_dev->dev.parent = &client->dev;
+ indio_dev->dev.parent = dev;
indio_dev->channels = bmc150_magn_channels;
indio_dev->num_channels = ARRAY_SIZE(bmc150_magn_channels);
indio_dev->available_scan_masks = bmc150_magn_scan_masks;
@@ -885,35 +880,34 @@ static int bmc150_magn_probe(struct i2c_client *client,
indio_dev->modes = INDIO_DIRECT_MODE;
indio_dev->info = &bmc150_magn_info;
- if (client->irq > 0) {
- data->dready_trig = devm_iio_trigger_alloc(&client->dev,
+ if (irq > 0) {
+ data->dready_trig = devm_iio_trigger_alloc(dev,
"%s-dev%d",
indio_dev->name,
indio_dev->id);
if (!data->dready_trig) {
ret = -ENOMEM;
- dev_err(&client->dev, "iio trigger alloc failed\n");
+ dev_err(dev, "iio trigger alloc failed\n");
goto err_poweroff;
}
- data->dready_trig->dev.parent = &client->dev;
+ data->dready_trig->dev.parent = dev;
data->dready_trig->ops = &bmc150_magn_trigger_ops;
iio_trigger_set_drvdata(data->dready_trig, indio_dev);
ret = iio_trigger_register(data->dready_trig);
if (ret) {
- dev_err(&client->dev, "iio trigger register failed\n");
+ dev_err(dev, "iio trigger register failed\n");
goto err_poweroff;
}
- ret = request_threaded_irq(client->irq,
+ ret = request_threaded_irq(irq,
iio_trigger_generic_data_rdy_poll,
NULL,
IRQF_TRIGGER_RISING | IRQF_ONESHOT,
BMC150_MAGN_IRQ_NAME,
data->dready_trig);
if (ret < 0) {
- dev_err(&client->dev, "request irq %d failed\n",
- client->irq);
+ dev_err(dev, "request irq %d failed\n", irq);
goto err_trigger_unregister;
}
}
@@ -923,34 +917,33 @@ static int bmc150_magn_probe(struct i2c_client *client,
bmc150_magn_trigger_handler,
&bmc150_magn_buffer_setup_ops);
if (ret < 0) {
- dev_err(&client->dev,
- "iio triggered buffer setup failed\n");
+ dev_err(dev, "iio triggered buffer setup failed\n");
goto err_free_irq;
}
- ret = pm_runtime_set_active(&client->dev);
+ ret = pm_runtime_set_active(dev);
if (ret)
goto err_buffer_cleanup;
- pm_runtime_enable(&client->dev);
- pm_runtime_set_autosuspend_delay(&client->dev,
+ pm_runtime_enable(dev);
+ pm_runtime_set_autosuspend_delay(dev,
BMC150_MAGN_AUTO_SUSPEND_DELAY_MS);
- pm_runtime_use_autosuspend(&client->dev);
+ pm_runtime_use_autosuspend(dev);
ret = iio_device_register(indio_dev);
if (ret < 0) {
- dev_err(&client->dev, "unable to register iio device\n");
+ dev_err(dev, "unable to register iio device\n");
goto err_buffer_cleanup;
}
- dev_dbg(&indio_dev->dev, "Registered device %s\n", name);
+ dev_dbg(dev, "Registered device %s\n", name);
return 0;
err_buffer_cleanup:
iio_triggered_buffer_cleanup(indio_dev);
err_free_irq:
- if (client->irq > 0)
- free_irq(client->irq, data->dready_trig);
+ if (irq > 0)
+ free_irq(irq, data->dready_trig);
err_trigger_unregister:
if (data->dready_trig)
iio_trigger_unregister(data->dready_trig);
@@ -958,22 +951,23 @@ err_poweroff:
bmc150_magn_set_power_mode(data, BMC150_MAGN_POWER_MODE_SUSPEND, true);
return ret;
}
+EXPORT_SYMBOL(bmc150_magn_probe);
-static int bmc150_magn_remove(struct i2c_client *client)
+int bmc150_magn_remove(struct device *dev)
{
- struct iio_dev *indio_dev = i2c_get_clientdata(client);
+ struct iio_dev *indio_dev = dev_get_drvdata(dev);
struct bmc150_magn_data *data = iio_priv(indio_dev);
iio_device_unregister(indio_dev);
- pm_runtime_disable(&client->dev);
- pm_runtime_set_suspended(&client->dev);
- pm_runtime_put_noidle(&client->dev);
+ pm_runtime_disable(dev);
+ pm_runtime_set_suspended(dev);
+ pm_runtime_put_noidle(dev);
iio_triggered_buffer_cleanup(indio_dev);
- if (client->irq > 0)
- free_irq(data->client->irq, data->dready_trig);
+ if (data->irq > 0)
+ free_irq(data->irq, data->dready_trig);
if (data->dready_trig)
iio_trigger_unregister(data->dready_trig);
@@ -984,11 +978,12 @@ static int bmc150_magn_remove(struct i2c_client *client)
return 0;
}
+EXPORT_SYMBOL(bmc150_magn_remove);
#ifdef CONFIG_PM
static int bmc150_magn_runtime_suspend(struct device *dev)
{
- struct iio_dev *indio_dev = i2c_get_clientdata(to_i2c_client(dev));
+ struct iio_dev *indio_dev = dev_get_drvdata(dev);
struct bmc150_magn_data *data = iio_priv(indio_dev);
int ret;
@@ -997,7 +992,7 @@ static int bmc150_magn_runtime_suspend(struct device *dev)
true);
mutex_unlock(&data->mutex);
if (ret < 0) {
- dev_err(&data->client->dev, "powering off device failed\n");
+ dev_err(dev, "powering off device failed\n");
return ret;
}
return 0;
@@ -1008,7 +1003,7 @@ static int bmc150_magn_runtime_suspend(struct device *dev)
*/
static int bmc150_magn_runtime_resume(struct device *dev)
{
- struct iio_dev *indio_dev = i2c_get_clientdata(to_i2c_client(dev));
+ struct iio_dev *indio_dev = dev_get_drvdata(dev);
struct bmc150_magn_data *data = iio_priv(indio_dev);
return bmc150_magn_set_power_mode(data, BMC150_MAGN_POWER_MODE_NORMAL,
@@ -1019,7 +1014,7 @@ static int bmc150_magn_runtime_resume(struct device *dev)
#ifdef CONFIG_PM_SLEEP
static int bmc150_magn_suspend(struct device *dev)
{
- struct iio_dev *indio_dev = i2c_get_clientdata(to_i2c_client(dev));
+ struct iio_dev *indio_dev = dev_get_drvdata(dev);
struct bmc150_magn_data *data = iio_priv(indio_dev);
int ret;
@@ -1033,7 +1028,7 @@ static int bmc150_magn_suspend(struct device *dev)
static int bmc150_magn_resume(struct device *dev)
{
- struct iio_dev *indio_dev = i2c_get_clientdata(to_i2c_client(dev));
+ struct iio_dev *indio_dev = dev_get_drvdata(dev);
struct bmc150_magn_data *data = iio_priv(indio_dev);
int ret;
@@ -1046,38 +1041,13 @@ static int bmc150_magn_resume(struct device *dev)
}
#endif
-static const struct dev_pm_ops bmc150_magn_pm_ops = {
+const struct dev_pm_ops bmc150_magn_pm_ops = {
SET_SYSTEM_SLEEP_PM_OPS(bmc150_magn_suspend, bmc150_magn_resume)
SET_RUNTIME_PM_OPS(bmc150_magn_runtime_suspend,
bmc150_magn_runtime_resume, NULL)
};
-
-static const struct acpi_device_id bmc150_magn_acpi_match[] = {
- {"BMC150B", 0},
- {"BMC156B", 0},
- {},
-};
-MODULE_DEVICE_TABLE(acpi, bmc150_magn_acpi_match);
-
-static const struct i2c_device_id bmc150_magn_id[] = {
- {"bmc150_magn", 0},
- {"bmc156_magn", 0},
- {},
-};
-MODULE_DEVICE_TABLE(i2c, bmc150_magn_id);
-
-static struct i2c_driver bmc150_magn_driver = {
- .driver = {
- .name = BMC150_MAGN_DRV_NAME,
- .acpi_match_table = ACPI_PTR(bmc150_magn_acpi_match),
- .pm = &bmc150_magn_pm_ops,
- },
- .probe = bmc150_magn_probe,
- .remove = bmc150_magn_remove,
- .id_table = bmc150_magn_id,
-};
-module_i2c_driver(bmc150_magn_driver);
+EXPORT_SYMBOL(bmc150_magn_pm_ops);
MODULE_AUTHOR("Irina Tirdea <irina.tirdea@intel.com>");
MODULE_LICENSE("GPL v2");
-MODULE_DESCRIPTION("BMC150 magnetometer driver");
+MODULE_DESCRIPTION("BMC150 magnetometer core driver");
diff --git a/drivers/iio/magnetometer/bmc150_magn.h b/drivers/iio/magnetometer/bmc150_magn.h
new file mode 100644
index 000000000000..9a8e26812ca8
--- /dev/null
+++ b/drivers/iio/magnetometer/bmc150_magn.h
@@ -0,0 +1,11 @@
+#ifndef _BMC150_MAGN_H_
+#define _BMC150_MAGN_H_
+
+extern const struct regmap_config bmc150_magn_regmap_config;
+extern const struct dev_pm_ops bmc150_magn_pm_ops;
+
+int bmc150_magn_probe(struct device *dev, struct regmap *regmap, int irq,
+ const char *name);
+int bmc150_magn_remove(struct device *dev);
+
+#endif /* _BMC150_MAGN_H_ */
diff --git a/drivers/iio/magnetometer/bmc150_magn_i2c.c b/drivers/iio/magnetometer/bmc150_magn_i2c.c
new file mode 100644
index 000000000000..eddc7f0d0096
--- /dev/null
+++ b/drivers/iio/magnetometer/bmc150_magn_i2c.c
@@ -0,0 +1,77 @@
+/*
+ * 3-axis magnetometer driver supporting the following I2C Bosch-Sensortec chips:
+ * - BMC150
+ * - BMC156
+ *
+ * Copyright (c) 2016, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ */
+#include <linux/device.h>
+#include <linux/mod_devicetable.h>
+#include <linux/i2c.h>
+#include <linux/module.h>
+#include <linux/acpi.h>
+#include <linux/regmap.h>
+
+#include "bmc150_magn.h"
+
+static int bmc150_magn_i2c_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct regmap *regmap;
+ const char *name = NULL;
+
+ regmap = devm_regmap_init_i2c(client, &bmc150_magn_regmap_config);
+ if (IS_ERR(regmap)) {
+ dev_err(&client->dev, "Failed to initialize i2c regmap\n");
+ return PTR_ERR(regmap);
+ }
+
+ if (id)
+ name = id->name;
+
+ return bmc150_magn_probe(&client->dev, regmap, client->irq, name);
+}
+
+static int bmc150_magn_i2c_remove(struct i2c_client *client)
+{
+ return bmc150_magn_remove(&client->dev);
+}
+
+static const struct acpi_device_id bmc150_magn_acpi_match[] = {
+ {"BMC150B", 0},
+ {"BMC156B", 0},
+ {},
+};
+MODULE_DEVICE_TABLE(acpi, bmc150_magn_acpi_match);
+
+static const struct i2c_device_id bmc150_magn_i2c_id[] = {
+ {"bmc150_magn", 0},
+ {"bmc156_magn", 0},
+ {}
+};
+MODULE_DEVICE_TABLE(i2c, bmc150_magn_i2c_id);
+
+static struct i2c_driver bmc150_magn_driver = {
+ .driver = {
+ .name = "bmc150_magn_i2c",
+ .acpi_match_table = ACPI_PTR(bmc150_magn_acpi_match),
+ .pm = &bmc150_magn_pm_ops,
+ },
+ .probe = bmc150_magn_i2c_probe,
+ .remove = bmc150_magn_i2c_remove,
+ .id_table = bmc150_magn_i2c_id,
+};
+module_i2c_driver(bmc150_magn_driver);
+
+MODULE_AUTHOR("Daniel Baluta <daniel.baluta@intel.com");
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("BMC150 I2C magnetometer driver");
diff --git a/drivers/iio/magnetometer/bmc150_magn_spi.c b/drivers/iio/magnetometer/bmc150_magn_spi.c
new file mode 100644
index 000000000000..c4c738a07695
--- /dev/null
+++ b/drivers/iio/magnetometer/bmc150_magn_spi.c
@@ -0,0 +1,68 @@
+/*
+ * 3-axis magnetometer driver supporting the following SPI Bosch-Sensortec chips:
+ * - BMC150
+ * - BMC156
+ *
+ * Copyright (c) 2016, Intel Corporation.
+ *
+ * This file is subject to the terms and conditions of version 2 of
+ * the GNU General Public License. See the file COPYING in the main
+ * directory of this archive for more details.
+ */
+#include <linux/module.h>
+#include <linux/mod_devicetable.h>
+#include <linux/spi/spi.h>
+#include <linux/acpi.h>
+#include <linux/regmap.h>
+
+#include "bmc150_magn.h"
+
+static int bmc150_magn_spi_probe(struct spi_device *spi)
+{
+ struct regmap *regmap;
+ const struct spi_device_id *id = spi_get_device_id(spi);
+
+ regmap = devm_regmap_init_spi(spi, &bmc150_magn_regmap_config);
+ if (IS_ERR(regmap)) {
+ dev_err(&spi->dev, "Failed to register spi regmap %d\n",
+ (int)PTR_ERR(regmap));
+ return PTR_ERR(regmap);
+ }
+ return bmc150_magn_probe(&spi->dev, regmap, spi->irq, id->name);
+}
+
+static int bmc150_magn_spi_remove(struct spi_device *spi)
+{
+ bmc150_magn_remove(&spi->dev);
+
+ return 0;
+}
+
+static const struct spi_device_id bmc150_magn_spi_id[] = {
+ {"bmc150_magn", 0},
+ {"bmc156_magn", 0},
+ {}
+};
+MODULE_DEVICE_TABLE(spi, bmc150_magn_spi_id);
+
+static const struct acpi_device_id bmc150_magn_acpi_match[] = {
+ {"BMC150B", 0},
+ {"BMC156B", 0},
+ {},
+};
+MODULE_DEVICE_TABLE(acpi, bmc150_magn_acpi_match);
+
+static struct spi_driver bmc150_magn_spi_driver = {
+ .probe = bmc150_magn_spi_probe,
+ .remove = bmc150_magn_spi_remove,
+ .id_table = bmc150_magn_spi_id,
+ .driver = {
+ .acpi_match_table = ACPI_PTR(bmc150_magn_acpi_match),
+ .name = "bmc150_magn_spi",
+ },
+};
+module_spi_driver(bmc150_magn_spi_driver);
+
+MODULE_AUTHOR("Daniel Baluta <daniel.baluta@intel.com");
+MODULE_DESCRIPTION("BMC150 magnetometer SPI driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/iio/magnetometer/st_magn_core.c b/drivers/iio/magnetometer/st_magn_core.c
index 501f858df413..62036d2a9956 100644
--- a/drivers/iio/magnetometer/st_magn_core.c
+++ b/drivers/iio/magnetometer/st_magn_core.c
@@ -484,6 +484,7 @@ static const struct st_sensor_settings st_magn_sensors_settings[] = {
.mask_int1 = ST_MAGN_3_DRDY_INT_MASK,
.addr_ihl = ST_MAGN_3_IHL_IRQ_ADDR,
.mask_ihl = ST_MAGN_3_IHL_IRQ_MASK,
+ .addr_stat_drdy = ST_SENSORS_DEFAULT_STAT_ADDR,
},
.multi_read_bit = ST_MAGN_3_MULTIREAD_BIT,
.bootime = 2,
diff --git a/drivers/iio/potentiometer/Kconfig b/drivers/iio/potentiometer/Kconfig
index ffc735c168fb..6acb23810bb4 100644
--- a/drivers/iio/potentiometer/Kconfig
+++ b/drivers/iio/potentiometer/Kconfig
@@ -5,6 +5,34 @@
menu "Digital potentiometers"
+config DS1803
+ tristate "Maxim Integrated DS1803 Digital Potentiometer driver"
+ depends on I2C
+ help
+ Say yes here to build support for the Maxim Integrated DS1803
+ digital potentiometer chip.
+
+ To compile this driver as a module, choose M here: the
+ module will be called ds1803.
+
+config MCP4131
+ tristate "Microchip MCP413X/414X/415X/416X/423X/424X/425X/426X Digital Potentiometer driver"
+ depends on SPI
+ help
+ Say yes here to build support for the Microchip
+ MCP4131, MCP4132,
+ MCP4141, MCP4142,
+ MCP4151, MCP4152,
+ MCP4161, MCP4162,
+ MCP4231, MCP4232,
+ MCP4241, MCP4242,
+ MCP4251, MCP4252,
+ MCP4261, MCP4262,
+ digital potentiometer chips.
+
+ To compile this driver as a module, choose M here: the
+ module will be called mcp4131.
+
config MCP4531
tristate "Microchip MCP45xx/MCP46xx Digital Potentiometer driver"
depends on I2C
diff --git a/drivers/iio/potentiometer/Makefile b/drivers/iio/potentiometer/Makefile
index b563b492b486..6007faa2fb02 100644
--- a/drivers/iio/potentiometer/Makefile
+++ b/drivers/iio/potentiometer/Makefile
@@ -3,5 +3,7 @@
#
# When adding new entries keep the list in alphabetical order
+obj-$(CONFIG_DS1803) += ds1803.o
+obj-$(CONFIG_MCP4131) += mcp4131.o
obj-$(CONFIG_MCP4531) += mcp4531.o
obj-$(CONFIG_TPL0102) += tpl0102.o
diff --git a/drivers/iio/potentiometer/ds1803.c b/drivers/iio/potentiometer/ds1803.c
new file mode 100644
index 000000000000..fb9e2a337dc2
--- /dev/null
+++ b/drivers/iio/potentiometer/ds1803.c
@@ -0,0 +1,173 @@
+/*
+ * Maxim Integrated DS1803 digital potentiometer driver
+ * Copyright (c) 2016 Slawomir Stepien
+ *
+ * Datasheet: https://datasheets.maximintegrated.com/en/ds/DS1803.pdf
+ *
+ * DEVID #Wipers #Positions Resistor Opts (kOhm) i2c address
+ * ds1803 2 256 10, 50, 100 0101xxx
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ */
+
+#include <linux/err.h>
+#include <linux/export.h>
+#include <linux/i2c.h>
+#include <linux/iio/iio.h>
+#include <linux/module.h>
+#include <linux/of.h>
+
+#define DS1803_MAX_POS 255
+#define DS1803_WRITE(chan) (0xa8 | ((chan) + 1))
+
+enum ds1803_type {
+ DS1803_010,
+ DS1803_050,
+ DS1803_100,
+};
+
+struct ds1803_cfg {
+ int kohms;
+};
+
+static const struct ds1803_cfg ds1803_cfg[] = {
+ [DS1803_010] = { .kohms = 10, },
+ [DS1803_050] = { .kohms = 50, },
+ [DS1803_100] = { .kohms = 100, },
+};
+
+struct ds1803_data {
+ struct i2c_client *client;
+ const struct ds1803_cfg *cfg;
+};
+
+#define DS1803_CHANNEL(ch) { \
+ .type = IIO_RESISTANCE, \
+ .indexed = 1, \
+ .output = 1, \
+ .channel = (ch), \
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \
+ .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), \
+}
+
+static const struct iio_chan_spec ds1803_channels[] = {
+ DS1803_CHANNEL(0),
+ DS1803_CHANNEL(1),
+};
+
+static int ds1803_read_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ int *val, int *val2, long mask)
+{
+ struct ds1803_data *data = iio_priv(indio_dev);
+ int pot = chan->channel;
+ int ret;
+ u8 result[indio_dev->num_channels];
+
+ switch (mask) {
+ case IIO_CHAN_INFO_RAW:
+ ret = i2c_master_recv(data->client, result,
+ indio_dev->num_channels);
+ if (ret < 0)
+ return ret;
+
+ *val = result[pot];
+ return IIO_VAL_INT;
+
+ case IIO_CHAN_INFO_SCALE:
+ *val = 1000 * data->cfg->kohms;
+ *val2 = DS1803_MAX_POS;
+ return IIO_VAL_FRACTIONAL;
+ }
+
+ return -EINVAL;
+}
+
+static int ds1803_write_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ int val, int val2, long mask)
+{
+ struct ds1803_data *data = iio_priv(indio_dev);
+ int pot = chan->channel;
+
+ if (val2 != 0)
+ return -EINVAL;
+
+ switch (mask) {
+ case IIO_CHAN_INFO_RAW:
+ if (val > DS1803_MAX_POS || val < 0)
+ return -EINVAL;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return i2c_smbus_write_byte_data(data->client, DS1803_WRITE(pot), val);
+}
+
+static const struct iio_info ds1803_info = {
+ .read_raw = ds1803_read_raw,
+ .write_raw = ds1803_write_raw,
+ .driver_module = THIS_MODULE,
+};
+
+static int ds1803_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct device *dev = &client->dev;
+ struct ds1803_data *data;
+ struct iio_dev *indio_dev;
+
+ indio_dev = devm_iio_device_alloc(dev, sizeof(*data));
+ if (!indio_dev)
+ return -ENOMEM;
+
+ i2c_set_clientdata(client, indio_dev);
+
+ data = iio_priv(indio_dev);
+ data->client = client;
+ data->cfg = &ds1803_cfg[id->driver_data];
+
+ indio_dev->dev.parent = dev;
+ indio_dev->info = &ds1803_info;
+ indio_dev->channels = ds1803_channels;
+ indio_dev->num_channels = ARRAY_SIZE(ds1803_channels);
+ indio_dev->name = client->name;
+
+ return devm_iio_device_register(dev, indio_dev);
+}
+
+#if defined(CONFIG_OF)
+static const struct of_device_id ds1803_dt_ids[] = {
+ { .compatible = "maxim,ds1803-010", .data = &ds1803_cfg[DS1803_010] },
+ { .compatible = "maxim,ds1803-050", .data = &ds1803_cfg[DS1803_050] },
+ { .compatible = "maxim,ds1803-100", .data = &ds1803_cfg[DS1803_100] },
+ {}
+};
+MODULE_DEVICE_TABLE(of, ds1803_dt_ids);
+#endif /* CONFIG_OF */
+
+static const struct i2c_device_id ds1803_id[] = {
+ { "ds1803-010", DS1803_010 },
+ { "ds1803-050", DS1803_050 },
+ { "ds1803-100", DS1803_100 },
+ {}
+};
+MODULE_DEVICE_TABLE(i2c, ds1803_id);
+
+static struct i2c_driver ds1803_driver = {
+ .driver = {
+ .name = "ds1803",
+ .of_match_table = of_match_ptr(ds1803_dt_ids),
+ },
+ .probe = ds1803_probe,
+ .id_table = ds1803_id,
+};
+
+module_i2c_driver(ds1803_driver);
+
+MODULE_AUTHOR("Slawomir Stepien <sst@poczta.fm>");
+MODULE_DESCRIPTION("DS1803 digital potentiometer");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/iio/potentiometer/mcp4131.c b/drivers/iio/potentiometer/mcp4131.c
new file mode 100644
index 000000000000..4e7e2c6c522c
--- /dev/null
+++ b/drivers/iio/potentiometer/mcp4131.c
@@ -0,0 +1,494 @@
+/*
+ * Industrial I/O driver for Microchip digital potentiometers
+ *
+ * Copyright (c) 2016 Slawomir Stepien
+ * Based on: Peter Rosin's code from mcp4531.c
+ *
+ * Datasheet: http://ww1.microchip.com/downloads/en/DeviceDoc/22060b.pdf
+ *
+ * DEVID #Wipers #Positions Resistor Opts (kOhm)
+ * mcp4131 1 129 5, 10, 50, 100
+ * mcp4132 1 129 5, 10, 50, 100
+ * mcp4141 1 129 5, 10, 50, 100
+ * mcp4142 1 129 5, 10, 50, 100
+ * mcp4151 1 257 5, 10, 50, 100
+ * mcp4152 1 257 5, 10, 50, 100
+ * mcp4161 1 257 5, 10, 50, 100
+ * mcp4162 1 257 5, 10, 50, 100
+ * mcp4231 2 129 5, 10, 50, 100
+ * mcp4232 2 129 5, 10, 50, 100
+ * mcp4241 2 129 5, 10, 50, 100
+ * mcp4242 2 129 5, 10, 50, 100
+ * mcp4251 2 257 5, 10, 50, 100
+ * mcp4252 2 257 5, 10, 50, 100
+ * mcp4261 2 257 5, 10, 50, 100
+ * mcp4262 2 257 5, 10, 50, 100
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ */
+
+/*
+ * TODO:
+ * 1. Write wiper setting to EEPROM for EEPROM capable models.
+ */
+
+#include <linux/cache.h>
+#include <linux/err.h>
+#include <linux/export.h>
+#include <linux/iio/iio.h>
+#include <linux/iio/types.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/of.h>
+#include <linux/spi/spi.h>
+
+#define MCP4131_WRITE (0x00 << 2)
+#define MCP4131_READ (0x03 << 2)
+
+#define MCP4131_WIPER_SHIFT 4
+#define MCP4131_CMDERR(r) ((r[0]) & 0x02)
+#define MCP4131_RAW(r) ((r[0]) == 0xff ? 0x100 : (r[1]))
+
+struct mcp4131_cfg {
+ int wipers;
+ int max_pos;
+ int kohms;
+};
+
+enum mcp4131_type {
+ MCP413x_502 = 0,
+ MCP413x_103,
+ MCP413x_503,
+ MCP413x_104,
+ MCP414x_502,
+ MCP414x_103,
+ MCP414x_503,
+ MCP414x_104,
+ MCP415x_502,
+ MCP415x_103,
+ MCP415x_503,
+ MCP415x_104,
+ MCP416x_502,
+ MCP416x_103,
+ MCP416x_503,
+ MCP416x_104,
+ MCP423x_502,
+ MCP423x_103,
+ MCP423x_503,
+ MCP423x_104,
+ MCP424x_502,
+ MCP424x_103,
+ MCP424x_503,
+ MCP424x_104,
+ MCP425x_502,
+ MCP425x_103,
+ MCP425x_503,
+ MCP425x_104,
+ MCP426x_502,
+ MCP426x_103,
+ MCP426x_503,
+ MCP426x_104,
+};
+
+static const struct mcp4131_cfg mcp4131_cfg[] = {
+ [MCP413x_502] = { .wipers = 1, .max_pos = 128, .kohms = 5, },
+ [MCP413x_103] = { .wipers = 1, .max_pos = 128, .kohms = 10, },
+ [MCP413x_503] = { .wipers = 1, .max_pos = 128, .kohms = 50, },
+ [MCP413x_104] = { .wipers = 1, .max_pos = 128, .kohms = 100, },
+ [MCP414x_502] = { .wipers = 1, .max_pos = 128, .kohms = 5, },
+ [MCP414x_103] = { .wipers = 1, .max_pos = 128, .kohms = 10, },
+ [MCP414x_503] = { .wipers = 1, .max_pos = 128, .kohms = 50, },
+ [MCP414x_104] = { .wipers = 1, .max_pos = 128, .kohms = 100, },
+ [MCP415x_502] = { .wipers = 1, .max_pos = 256, .kohms = 5, },
+ [MCP415x_103] = { .wipers = 1, .max_pos = 256, .kohms = 10, },
+ [MCP415x_503] = { .wipers = 1, .max_pos = 256, .kohms = 50, },
+ [MCP415x_104] = { .wipers = 1, .max_pos = 256, .kohms = 100, },
+ [MCP416x_502] = { .wipers = 1, .max_pos = 256, .kohms = 5, },
+ [MCP416x_103] = { .wipers = 1, .max_pos = 256, .kohms = 10, },
+ [MCP416x_503] = { .wipers = 1, .max_pos = 256, .kohms = 50, },
+ [MCP416x_104] = { .wipers = 1, .max_pos = 256, .kohms = 100, },
+ [MCP423x_502] = { .wipers = 2, .max_pos = 128, .kohms = 5, },
+ [MCP423x_103] = { .wipers = 2, .max_pos = 128, .kohms = 10, },
+ [MCP423x_503] = { .wipers = 2, .max_pos = 128, .kohms = 50, },
+ [MCP423x_104] = { .wipers = 2, .max_pos = 128, .kohms = 100, },
+ [MCP424x_502] = { .wipers = 2, .max_pos = 128, .kohms = 5, },
+ [MCP424x_103] = { .wipers = 2, .max_pos = 128, .kohms = 10, },
+ [MCP424x_503] = { .wipers = 2, .max_pos = 128, .kohms = 50, },
+ [MCP424x_104] = { .wipers = 2, .max_pos = 128, .kohms = 100, },
+ [MCP425x_502] = { .wipers = 2, .max_pos = 256, .kohms = 5, },
+ [MCP425x_103] = { .wipers = 2, .max_pos = 256, .kohms = 10, },
+ [MCP425x_503] = { .wipers = 2, .max_pos = 256, .kohms = 50, },
+ [MCP425x_104] = { .wipers = 2, .max_pos = 256, .kohms = 100, },
+ [MCP426x_502] = { .wipers = 2, .max_pos = 256, .kohms = 5, },
+ [MCP426x_103] = { .wipers = 2, .max_pos = 256, .kohms = 10, },
+ [MCP426x_503] = { .wipers = 2, .max_pos = 256, .kohms = 50, },
+ [MCP426x_104] = { .wipers = 2, .max_pos = 256, .kohms = 100, },
+};
+
+struct mcp4131_data {
+ struct spi_device *spi;
+ const struct mcp4131_cfg *cfg;
+ struct mutex lock;
+ u8 buf[2] ____cacheline_aligned;
+};
+
+#define MCP4131_CHANNEL(ch) { \
+ .type = IIO_RESISTANCE, \
+ .indexed = 1, \
+ .output = 1, \
+ .channel = (ch), \
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW), \
+ .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE), \
+}
+
+static const struct iio_chan_spec mcp4131_channels[] = {
+ MCP4131_CHANNEL(0),
+ MCP4131_CHANNEL(1),
+};
+
+static int mcp4131_read(struct spi_device *spi, void *buf, size_t len)
+{
+ struct spi_transfer t = {
+ .tx_buf = buf, /* We need to send addr, cmd and 12 bits */
+ .rx_buf = buf,
+ .len = len,
+ };
+ struct spi_message m;
+
+ spi_message_init(&m);
+ spi_message_add_tail(&t, &m);
+
+ return spi_sync(spi, &m);
+}
+
+static int mcp4131_read_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ int *val, int *val2, long mask)
+{
+ int err;
+ struct mcp4131_data *data = iio_priv(indio_dev);
+ int address = chan->channel;
+
+ switch (mask) {
+ case IIO_CHAN_INFO_RAW:
+ mutex_lock(&data->lock);
+
+ data->buf[0] = (address << MCP4131_WIPER_SHIFT) | MCP4131_READ;
+ data->buf[1] = 0;
+
+ err = mcp4131_read(data->spi, data->buf, 2);
+ if (err) {
+ mutex_unlock(&data->lock);
+ return err;
+ }
+
+ /* Error, bad address/command combination */
+ if (!MCP4131_CMDERR(data->buf)) {
+ mutex_unlock(&data->lock);
+ return -EIO;
+ }
+
+ *val = MCP4131_RAW(data->buf);
+ mutex_unlock(&data->lock);
+
+ return IIO_VAL_INT;
+
+ case IIO_CHAN_INFO_SCALE:
+ *val = 1000 * data->cfg->kohms;
+ *val2 = data->cfg->max_pos;
+ return IIO_VAL_FRACTIONAL;
+ }
+
+ return -EINVAL;
+}
+
+static int mcp4131_write_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ int val, int val2, long mask)
+{
+ int err;
+ struct mcp4131_data *data = iio_priv(indio_dev);
+ int address = chan->channel << MCP4131_WIPER_SHIFT;
+
+ switch (mask) {
+ case IIO_CHAN_INFO_RAW:
+ if (val > data->cfg->max_pos || val < 0)
+ return -EINVAL;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ mutex_lock(&data->lock);
+
+ data->buf[0] = address << MCP4131_WIPER_SHIFT;
+ data->buf[0] |= MCP4131_WRITE | (val >> 8);
+ data->buf[1] = val & 0xFF; /* 8 bits here */
+
+ err = spi_write(data->spi, data->buf, 2);
+ mutex_unlock(&data->lock);
+
+ return err;
+}
+
+static const struct iio_info mcp4131_info = {
+ .read_raw = mcp4131_read_raw,
+ .write_raw = mcp4131_write_raw,
+ .driver_module = THIS_MODULE,
+};
+
+static int mcp4131_probe(struct spi_device *spi)
+{
+ int err;
+ struct device *dev = &spi->dev;
+ unsigned long devid = spi_get_device_id(spi)->driver_data;
+ struct mcp4131_data *data;
+ struct iio_dev *indio_dev;
+
+ indio_dev = devm_iio_device_alloc(dev, sizeof(*data));
+ if (!indio_dev)
+ return -ENOMEM;
+
+ data = iio_priv(indio_dev);
+ spi_set_drvdata(spi, indio_dev);
+ data->spi = spi;
+ data->cfg = &mcp4131_cfg[devid];
+
+ mutex_init(&data->lock);
+
+ indio_dev->dev.parent = dev;
+ indio_dev->info = &mcp4131_info;
+ indio_dev->channels = mcp4131_channels;
+ indio_dev->num_channels = data->cfg->wipers;
+ indio_dev->name = spi_get_device_id(spi)->name;
+
+ err = devm_iio_device_register(dev, indio_dev);
+ if (err) {
+ dev_info(&spi->dev, "Unable to register %s\n", indio_dev->name);
+ return err;
+ }
+
+ return 0;
+}
+
+#if defined(CONFIG_OF)
+static const struct of_device_id mcp4131_dt_ids[] = {
+ { .compatible = "microchip,mcp4131-502",
+ .data = &mcp4131_cfg[MCP413x_502] },
+ { .compatible = "microchip,mcp4131-103",
+ .data = &mcp4131_cfg[MCP413x_103] },
+ { .compatible = "microchip,mcp4131-503",
+ .data = &mcp4131_cfg[MCP413x_503] },
+ { .compatible = "microchip,mcp4131-104",
+ .data = &mcp4131_cfg[MCP413x_104] },
+ { .compatible = "microchip,mcp4132-502",
+ .data = &mcp4131_cfg[MCP413x_502] },
+ { .compatible = "microchip,mcp4132-103",
+ .data = &mcp4131_cfg[MCP413x_103] },
+ { .compatible = "microchip,mcp4132-503",
+ .data = &mcp4131_cfg[MCP413x_503] },
+ { .compatible = "microchip,mcp4132-104",
+ .data = &mcp4131_cfg[MCP413x_104] },
+ { .compatible = "microchip,mcp4141-502",
+ .data = &mcp4131_cfg[MCP414x_502] },
+ { .compatible = "microchip,mcp4141-103",
+ .data = &mcp4131_cfg[MCP414x_103] },
+ { .compatible = "microchip,mcp4141-503",
+ .data = &mcp4131_cfg[MCP414x_503] },
+ { .compatible = "microchip,mcp4141-104",
+ .data = &mcp4131_cfg[MCP414x_104] },
+ { .compatible = "microchip,mcp4142-502",
+ .data = &mcp4131_cfg[MCP414x_502] },
+ { .compatible = "microchip,mcp4142-103",
+ .data = &mcp4131_cfg[MCP414x_103] },
+ { .compatible = "microchip,mcp4142-503",
+ .data = &mcp4131_cfg[MCP414x_503] },
+ { .compatible = "microchip,mcp4142-104",
+ .data = &mcp4131_cfg[MCP414x_104] },
+ { .compatible = "microchip,mcp4151-502",
+ .data = &mcp4131_cfg[MCP415x_502] },
+ { .compatible = "microchip,mcp4151-103",
+ .data = &mcp4131_cfg[MCP415x_103] },
+ { .compatible = "microchip,mcp4151-503",
+ .data = &mcp4131_cfg[MCP415x_503] },
+ { .compatible = "microchip,mcp4151-104",
+ .data = &mcp4131_cfg[MCP415x_104] },
+ { .compatible = "microchip,mcp4152-502",
+ .data = &mcp4131_cfg[MCP415x_502] },
+ { .compatible = "microchip,mcp4152-103",
+ .data = &mcp4131_cfg[MCP415x_103] },
+ { .compatible = "microchip,mcp4152-503",
+ .data = &mcp4131_cfg[MCP415x_503] },
+ { .compatible = "microchip,mcp4152-104",
+ .data = &mcp4131_cfg[MCP415x_104] },
+ { .compatible = "microchip,mcp4161-502",
+ .data = &mcp4131_cfg[MCP416x_502] },
+ { .compatible = "microchip,mcp4161-103",
+ .data = &mcp4131_cfg[MCP416x_103] },
+ { .compatible = "microchip,mcp4161-503",
+ .data = &mcp4131_cfg[MCP416x_503] },
+ { .compatible = "microchip,mcp4161-104",
+ .data = &mcp4131_cfg[MCP416x_104] },
+ { .compatible = "microchip,mcp4162-502",
+ .data = &mcp4131_cfg[MCP416x_502] },
+ { .compatible = "microchip,mcp4162-103",
+ .data = &mcp4131_cfg[MCP416x_103] },
+ { .compatible = "microchip,mcp4162-503",
+ .data = &mcp4131_cfg[MCP416x_503] },
+ { .compatible = "microchip,mcp4162-104",
+ .data = &mcp4131_cfg[MCP416x_104] },
+ { .compatible = "microchip,mcp4231-502",
+ .data = &mcp4131_cfg[MCP423x_502] },
+ { .compatible = "microchip,mcp4231-103",
+ .data = &mcp4131_cfg[MCP423x_103] },
+ { .compatible = "microchip,mcp4231-503",
+ .data = &mcp4131_cfg[MCP423x_503] },
+ { .compatible = "microchip,mcp4231-104",
+ .data = &mcp4131_cfg[MCP423x_104] },
+ { .compatible = "microchip,mcp4232-502",
+ .data = &mcp4131_cfg[MCP423x_502] },
+ { .compatible = "microchip,mcp4232-103",
+ .data = &mcp4131_cfg[MCP423x_103] },
+ { .compatible = "microchip,mcp4232-503",
+ .data = &mcp4131_cfg[MCP423x_503] },
+ { .compatible = "microchip,mcp4232-104",
+ .data = &mcp4131_cfg[MCP423x_104] },
+ { .compatible = "microchip,mcp4241-502",
+ .data = &mcp4131_cfg[MCP424x_502] },
+ { .compatible = "microchip,mcp4241-103",
+ .data = &mcp4131_cfg[MCP424x_103] },
+ { .compatible = "microchip,mcp4241-503",
+ .data = &mcp4131_cfg[MCP424x_503] },
+ { .compatible = "microchip,mcp4241-104",
+ .data = &mcp4131_cfg[MCP424x_104] },
+ { .compatible = "microchip,mcp4242-502",
+ .data = &mcp4131_cfg[MCP424x_502] },
+ { .compatible = "microchip,mcp4242-103",
+ .data = &mcp4131_cfg[MCP424x_103] },
+ { .compatible = "microchip,mcp4242-503",
+ .data = &mcp4131_cfg[MCP424x_503] },
+ { .compatible = "microchip,mcp4242-104",
+ .data = &mcp4131_cfg[MCP424x_104] },
+ { .compatible = "microchip,mcp4251-502",
+ .data = &mcp4131_cfg[MCP425x_502] },
+ { .compatible = "microchip,mcp4251-103",
+ .data = &mcp4131_cfg[MCP425x_103] },
+ { .compatible = "microchip,mcp4251-503",
+ .data = &mcp4131_cfg[MCP425x_503] },
+ { .compatible = "microchip,mcp4251-104",
+ .data = &mcp4131_cfg[MCP425x_104] },
+ { .compatible = "microchip,mcp4252-502",
+ .data = &mcp4131_cfg[MCP425x_502] },
+ { .compatible = "microchip,mcp4252-103",
+ .data = &mcp4131_cfg[MCP425x_103] },
+ { .compatible = "microchip,mcp4252-503",
+ .data = &mcp4131_cfg[MCP425x_503] },
+ { .compatible = "microchip,mcp4252-104",
+ .data = &mcp4131_cfg[MCP425x_104] },
+ { .compatible = "microchip,mcp4261-502",
+ .data = &mcp4131_cfg[MCP426x_502] },
+ { .compatible = "microchip,mcp4261-103",
+ .data = &mcp4131_cfg[MCP426x_103] },
+ { .compatible = "microchip,mcp4261-503",
+ .data = &mcp4131_cfg[MCP426x_503] },
+ { .compatible = "microchip,mcp4261-104",
+ .data = &mcp4131_cfg[MCP426x_104] },
+ { .compatible = "microchip,mcp4262-502",
+ .data = &mcp4131_cfg[MCP426x_502] },
+ { .compatible = "microchip,mcp4262-103",
+ .data = &mcp4131_cfg[MCP426x_103] },
+ { .compatible = "microchip,mcp4262-503",
+ .data = &mcp4131_cfg[MCP426x_503] },
+ { .compatible = "microchip,mcp4262-104",
+ .data = &mcp4131_cfg[MCP426x_104] },
+ {}
+};
+MODULE_DEVICE_TABLE(of, mcp4131_dt_ids);
+#endif /* CONFIG_OF */
+
+static const struct spi_device_id mcp4131_id[] = {
+ { "mcp4131-502", MCP413x_502 },
+ { "mcp4131-103", MCP413x_103 },
+ { "mcp4131-503", MCP413x_503 },
+ { "mcp4131-104", MCP413x_104 },
+ { "mcp4132-502", MCP413x_502 },
+ { "mcp4132-103", MCP413x_103 },
+ { "mcp4132-503", MCP413x_503 },
+ { "mcp4132-104", MCP413x_104 },
+ { "mcp4141-502", MCP414x_502 },
+ { "mcp4141-103", MCP414x_103 },
+ { "mcp4141-503", MCP414x_503 },
+ { "mcp4141-104", MCP414x_104 },
+ { "mcp4142-502", MCP414x_502 },
+ { "mcp4142-103", MCP414x_103 },
+ { "mcp4142-503", MCP414x_503 },
+ { "mcp4142-104", MCP414x_104 },
+ { "mcp4151-502", MCP415x_502 },
+ { "mcp4151-103", MCP415x_103 },
+ { "mcp4151-503", MCP415x_503 },
+ { "mcp4151-104", MCP415x_104 },
+ { "mcp4152-502", MCP415x_502 },
+ { "mcp4152-103", MCP415x_103 },
+ { "mcp4152-503", MCP415x_503 },
+ { "mcp4152-104", MCP415x_104 },
+ { "mcp4161-502", MCP416x_502 },
+ { "mcp4161-103", MCP416x_103 },
+ { "mcp4161-503", MCP416x_503 },
+ { "mcp4161-104", MCP416x_104 },
+ { "mcp4162-502", MCP416x_502 },
+ { "mcp4162-103", MCP416x_103 },
+ { "mcp4162-503", MCP416x_503 },
+ { "mcp4162-104", MCP416x_104 },
+ { "mcp4231-502", MCP423x_502 },
+ { "mcp4231-103", MCP423x_103 },
+ { "mcp4231-503", MCP423x_503 },
+ { "mcp4231-104", MCP423x_104 },
+ { "mcp4232-502", MCP423x_502 },
+ { "mcp4232-103", MCP423x_103 },
+ { "mcp4232-503", MCP423x_503 },
+ { "mcp4232-104", MCP423x_104 },
+ { "mcp4241-502", MCP424x_502 },
+ { "mcp4241-103", MCP424x_103 },
+ { "mcp4241-503", MCP424x_503 },
+ { "mcp4241-104", MCP424x_104 },
+ { "mcp4242-502", MCP424x_502 },
+ { "mcp4242-103", MCP424x_103 },
+ { "mcp4242-503", MCP424x_503 },
+ { "mcp4242-104", MCP424x_104 },
+ { "mcp4251-502", MCP425x_502 },
+ { "mcp4251-103", MCP425x_103 },
+ { "mcp4251-503", MCP425x_503 },
+ { "mcp4251-104", MCP425x_104 },
+ { "mcp4252-502", MCP425x_502 },
+ { "mcp4252-103", MCP425x_103 },
+ { "mcp4252-503", MCP425x_503 },
+ { "mcp4252-104", MCP425x_104 },
+ { "mcp4261-502", MCP426x_502 },
+ { "mcp4261-103", MCP426x_103 },
+ { "mcp4261-503", MCP426x_503 },
+ { "mcp4261-104", MCP426x_104 },
+ { "mcp4262-502", MCP426x_502 },
+ { "mcp4262-103", MCP426x_103 },
+ { "mcp4262-503", MCP426x_503 },
+ { "mcp4262-104", MCP426x_104 },
+ {}
+};
+MODULE_DEVICE_TABLE(spi, mcp4131_id);
+
+static struct spi_driver mcp4131_driver = {
+ .driver = {
+ .name = "mcp4131",
+ .of_match_table = of_match_ptr(mcp4131_dt_ids),
+ },
+ .probe = mcp4131_probe,
+ .id_table = mcp4131_id,
+};
+
+module_spi_driver(mcp4131_driver);
+
+MODULE_AUTHOR("Slawomir Stepien <sst@poczta.fm>");
+MODULE_DESCRIPTION("MCP4131 digital potentiometer");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/iio/potentiometer/mcp4531.c b/drivers/iio/potentiometer/mcp4531.c
index 0db67fe14766..3b72e1a595db 100644
--- a/drivers/iio/potentiometer/mcp4531.c
+++ b/drivers/iio/potentiometer/mcp4531.c
@@ -79,7 +79,7 @@ static const struct mcp4531_cfg mcp4531_cfg[] = {
struct mcp4531_data {
struct i2c_client *client;
- unsigned long devid;
+ const struct mcp4531_cfg *cfg;
};
#define MCP4531_CHANNEL(ch) { \
@@ -113,8 +113,8 @@ static int mcp4531_read_raw(struct iio_dev *indio_dev,
*val = ret;
return IIO_VAL_INT;
case IIO_CHAN_INFO_SCALE:
- *val = 1000 * mcp4531_cfg[data->devid].kohms;
- *val2 = mcp4531_cfg[data->devid].max_pos;
+ *val = 1000 * data->cfg->kohms;
+ *val2 = data->cfg->max_pos;
return IIO_VAL_FRACTIONAL;
}
@@ -130,7 +130,7 @@ static int mcp4531_write_raw(struct iio_dev *indio_dev,
switch (mask) {
case IIO_CHAN_INFO_RAW:
- if (val > mcp4531_cfg[data->devid].max_pos || val < 0)
+ if (val > data->cfg->max_pos || val < 0)
return -EINVAL;
break;
default:
@@ -152,7 +152,6 @@ static int mcp4531_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
struct device *dev = &client->dev;
- unsigned long devid = id->driver_data;
struct mcp4531_data *data;
struct iio_dev *indio_dev;
@@ -168,12 +167,12 @@ static int mcp4531_probe(struct i2c_client *client,
data = iio_priv(indio_dev);
i2c_set_clientdata(client, indio_dev);
data->client = client;
- data->devid = devid;
+ data->cfg = &mcp4531_cfg[id->driver_data];
indio_dev->dev.parent = dev;
indio_dev->info = &mcp4531_info;
indio_dev->channels = mcp4531_channels;
- indio_dev->num_channels = mcp4531_cfg[devid].wipers;
+ indio_dev->num_channels = data->cfg->wipers;
indio_dev->name = client->name;
return devm_iio_device_register(dev, indio_dev);
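
The mcp4531 hunks above stop storing the table index and instead keep a pointer to the per-variant entry, so the read/write paths dereference data->cfg rather than re-indexing mcp4531_cfg[] with devid. A minimal sketch of that per-variant lookup pattern, with hypothetical chip_* names:

struct chip_cfg {
	int wipers;
	int max_pos;
	int kohms;
};

static const struct chip_cfg chip_cfg_table[] = {
	{ .wipers = 1, .max_pos = 256, .kohms = 50 },
	/* ... one entry per supported variant ... */
};

struct chip_data {
	const struct chip_cfg *cfg;	/* points into chip_cfg_table */
};

/* id->driver_data (or of_device_id->data) selects the variant at probe time */
static void chip_bind_variant(struct chip_data *data, unsigned long driver_data)
{
	data->cfg = &chip_cfg_table[driver_data];
}
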
diff --git a/drivers/iio/potentiometer/tpl0102.c b/drivers/iio/potentiometer/tpl0102.c
index 313124b6fd59..5c304d42d713 100644
--- a/drivers/iio/potentiometer/tpl0102.c
+++ b/drivers/iio/potentiometer/tpl0102.c
@@ -118,7 +118,7 @@ static int tpl0102_probe(struct i2c_client *client,
if (!i2c_check_functionality(client->adapter,
I2C_FUNC_SMBUS_WORD_DATA))
- return -ENOTSUPP;
+ return -EOPNOTSUPP;
indio_dev = devm_iio_device_alloc(dev, sizeof(*data));
if (!indio_dev)
diff --git a/drivers/iio/pressure/Kconfig b/drivers/iio/pressure/Kconfig
index 31c0e1fd2202..cda9f128f3a4 100644
--- a/drivers/iio/pressure/Kconfig
+++ b/drivers/iio/pressure/Kconfig
@@ -6,12 +6,13 @@
menu "Pressure sensors"
config BMP280
- tristate "Bosch Sensortec BMP280 pressure sensor driver"
+ tristate "Bosch Sensortec BMP180 and BMP280 pressure sensor driver"
depends on I2C
+ depends on !(BMP085_I2C=y || BMP085_I2C=m)
select REGMAP_I2C
help
- Say yes here to build support for Bosch Sensortec BMP280
- pressure and temperature sensor.
+ Say yes here to build support for Bosch Sensortec BMP180 and BMP280
+ pressure and temperature sensors.
To compile this driver as a module, choose M here: the module
will be called bmp280.
@@ -30,6 +31,17 @@ config HID_SENSOR_PRESS
To compile this driver as a module, choose M here: the module
will be called hid-sensor-press.
+config HP03
+ tristate "Hope RF HP03 temperature and pressure sensor driver"
+ depends on I2C
+ select REGMAP_I2C
+ help
+ Say yes here to build support for Hope RF HP03 pressure and
+ temperature sensor.
+
+ To compile this driver as a module, choose M here: the module
+ will be called hp03.
+
config MPL115
tristate
@@ -148,4 +160,14 @@ config T5403
To compile this driver as a module, choose M here: the module
will be called t5403.
+config HP206C
+ tristate "HOPERF HP206C precision barometer and altimeter sensor"
+ depends on I2C
+ help
+ Say yes here to build support for the HOPERF HP206C precision
+ barometer and altimeter sensor.
+
+ This driver can also be built as a module. If so, the module will
+ be called hp206c.
+
endmenu
diff --git a/drivers/iio/pressure/Makefile b/drivers/iio/pressure/Makefile
index d336af14f3fe..17d6e7afa1ff 100644
--- a/drivers/iio/pressure/Makefile
+++ b/drivers/iio/pressure/Makefile
@@ -5,6 +5,7 @@
# When adding new entries keep the list in alphabetical order
obj-$(CONFIG_BMP280) += bmp280.o
obj-$(CONFIG_HID_SENSOR_PRESS) += hid-sensor-press.o
+obj-$(CONFIG_HP03) += hp03.o
obj-$(CONFIG_MPL115) += mpl115.o
obj-$(CONFIG_MPL115_I2C) += mpl115_i2c.o
obj-$(CONFIG_MPL115_SPI) += mpl115_spi.o
@@ -17,6 +18,7 @@ obj-$(CONFIG_IIO_ST_PRESS) += st_pressure.o
st_pressure-y := st_pressure_core.o
st_pressure-$(CONFIG_IIO_BUFFER) += st_pressure_buffer.o
obj-$(CONFIG_T5403) += t5403.o
+obj-$(CONFIG_HP206C) += hp206c.o
obj-$(CONFIG_IIO_ST_PRESS_I2C) += st_pressure_i2c.o
obj-$(CONFIG_IIO_ST_PRESS_SPI) += st_pressure_spi.o
diff --git a/drivers/iio/pressure/bmp280.c b/drivers/iio/pressure/bmp280.c
index a2602d8dd6d5..2f1498e12bb2 100644
--- a/drivers/iio/pressure/bmp280.c
+++ b/drivers/iio/pressure/bmp280.c
@@ -1,12 +1,15 @@
/*
* Copyright (c) 2014 Intel Corporation
*
- * Driver for Bosch Sensortec BMP280 digital pressure sensor.
+ * Driver for Bosch Sensortec BMP180 and BMP280 digital pressure sensor.
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License version 2 as
* published by the Free Software Foundation.
*
+ * Datasheet:
+ * https://ae-bst.resource.bosch.com/media/_tech/media/datasheets/BST-BMP180-DS000-121.pdf
+ * https://ae-bst.resource.bosch.com/media/_tech/media/datasheets/BST-BMP280-DS001-12.pdf
*/
#define pr_fmt(fmt) "bmp280: " fmt
@@ -15,9 +18,11 @@
#include <linux/i2c.h>
#include <linux/acpi.h>
#include <linux/regmap.h>
+#include <linux/delay.h>
#include <linux/iio/iio.h>
#include <linux/iio/sysfs.h>
+/* BMP280 specific registers */
#define BMP280_REG_TEMP_XLSB 0xFC
#define BMP280_REG_TEMP_LSB 0xFB
#define BMP280_REG_TEMP_MSB 0xFA
@@ -26,10 +31,7 @@
#define BMP280_REG_PRESS_MSB 0xF7
#define BMP280_REG_CONFIG 0xF5
-#define BMP280_REG_CTRL_MEAS 0xF4
#define BMP280_REG_STATUS 0xF3
-#define BMP280_REG_RESET 0xE0
-#define BMP280_REG_ID 0xD0
#define BMP280_REG_COMP_TEMP_START 0x88
#define BMP280_COMP_TEMP_REG_COUNT 6
@@ -46,25 +48,49 @@
#define BMP280_OSRS_TEMP_MASK (BIT(7) | BIT(6) | BIT(5))
#define BMP280_OSRS_TEMP_SKIP 0
-#define BMP280_OSRS_TEMP_1X BIT(5)
-#define BMP280_OSRS_TEMP_2X BIT(6)
-#define BMP280_OSRS_TEMP_4X (BIT(6) | BIT(5))
-#define BMP280_OSRS_TEMP_8X BIT(7)
-#define BMP280_OSRS_TEMP_16X (BIT(7) | BIT(5))
+#define BMP280_OSRS_TEMP_X(osrs_t) ((osrs_t) << 5)
+#define BMP280_OSRS_TEMP_1X BMP280_OSRS_TEMP_X(1)
+#define BMP280_OSRS_TEMP_2X BMP280_OSRS_TEMP_X(2)
+#define BMP280_OSRS_TEMP_4X BMP280_OSRS_TEMP_X(3)
+#define BMP280_OSRS_TEMP_8X BMP280_OSRS_TEMP_X(4)
+#define BMP280_OSRS_TEMP_16X BMP280_OSRS_TEMP_X(5)
#define BMP280_OSRS_PRESS_MASK (BIT(4) | BIT(3) | BIT(2))
#define BMP280_OSRS_PRESS_SKIP 0
-#define BMP280_OSRS_PRESS_1X BIT(2)
-#define BMP280_OSRS_PRESS_2X BIT(3)
-#define BMP280_OSRS_PRESS_4X (BIT(3) | BIT(2))
-#define BMP280_OSRS_PRESS_8X BIT(4)
-#define BMP280_OSRS_PRESS_16X (BIT(4) | BIT(2))
+#define BMP280_OSRS_PRESS_X(osrs_p) ((osrs_p) << 2)
+#define BMP280_OSRS_PRESS_1X BMP280_OSRS_PRESS_X(1)
+#define BMP280_OSRS_PRESS_2X BMP280_OSRS_PRESS_X(2)
+#define BMP280_OSRS_PRESS_4X BMP280_OSRS_PRESS_X(3)
+#define BMP280_OSRS_PRESS_8X BMP280_OSRS_PRESS_X(4)
+#define BMP280_OSRS_PRESS_16X BMP280_OSRS_PRESS_X(5)
#define BMP280_MODE_MASK (BIT(1) | BIT(0))
#define BMP280_MODE_SLEEP 0
#define BMP280_MODE_FORCED BIT(0)
#define BMP280_MODE_NORMAL (BIT(1) | BIT(0))
+/* BMP180 specific registers */
+#define BMP180_REG_OUT_XLSB 0xF8
+#define BMP180_REG_OUT_LSB 0xF7
+#define BMP180_REG_OUT_MSB 0xF6
+
+#define BMP180_REG_CALIB_START 0xAA
+#define BMP180_REG_CALIB_COUNT 22
+
+#define BMP180_MEAS_SCO BIT(5)
+#define BMP180_MEAS_TEMP (0x0E | BMP180_MEAS_SCO)
+#define BMP180_MEAS_PRESS_X(oss) ((oss) << 6 | 0x14 | BMP180_MEAS_SCO)
+#define BMP180_MEAS_PRESS_1X BMP180_MEAS_PRESS_X(0)
+#define BMP180_MEAS_PRESS_2X BMP180_MEAS_PRESS_X(1)
+#define BMP180_MEAS_PRESS_4X BMP180_MEAS_PRESS_X(2)
+#define BMP180_MEAS_PRESS_8X BMP180_MEAS_PRESS_X(3)
+
+/* BMP180 and BMP280 common registers */
+#define BMP280_REG_CTRL_MEAS 0xF4
+#define BMP280_REG_RESET 0xE0
+#define BMP280_REG_ID 0xD0
+
+#define BMP180_CHIP_ID 0x55
#define BMP280_CHIP_ID 0x58
#define BMP280_SOFT_RESET_VAL 0xB6
@@ -72,6 +98,11 @@ struct bmp280_data {
struct i2c_client *client;
struct mutex lock;
struct regmap *regmap;
+ const struct bmp280_chip_info *chip_info;
+
+ /* log of base 2 of oversampling rate */
+ u8 oversampling_press;
+ u8 oversampling_temp;
/*
* Carryover value from temperature conversion, used in pressure
@@ -80,9 +111,23 @@ struct bmp280_data {
s32 t_fine;
};
+struct bmp280_chip_info {
+ const struct regmap_config *regmap_config;
+
+ const int *oversampling_temp_avail;
+ int num_oversampling_temp_avail;
+
+ const int *oversampling_press_avail;
+ int num_oversampling_press_avail;
+
+ int (*chip_config)(struct bmp280_data *);
+ int (*read_temp)(struct bmp280_data *, int *);
+ int (*read_press)(struct bmp280_data *, int *, int *);
+};
+
/*
* These enums are used for indexing into the array of compensation
- * parameters.
+ * parameters for BMP280.
*/
enum { T1, T2, T3 };
enum { P1, P2, P3, P4, P5, P6, P7, P8, P9 };
@@ -90,11 +135,13 @@ enum { P1, P2, P3, P4, P5, P6, P7, P8, P9 };
static const struct iio_chan_spec bmp280_channels[] = {
{
.type = IIO_PRESSURE,
- .info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED),
+ .info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED) |
+ BIT(IIO_CHAN_INFO_OVERSAMPLING_RATIO),
},
{
.type = IIO_TEMP,
- .info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED),
+ .info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED) |
+ BIT(IIO_CHAN_INFO_OVERSAMPLING_RATIO),
},
};
@@ -290,10 +337,25 @@ static int bmp280_read_raw(struct iio_dev *indio_dev,
case IIO_CHAN_INFO_PROCESSED:
switch (chan->type) {
case IIO_PRESSURE:
- ret = bmp280_read_press(data, val, val2);
+ ret = data->chip_info->read_press(data, val, val2);
break;
case IIO_TEMP:
- ret = bmp280_read_temp(data, val);
+ ret = data->chip_info->read_temp(data, val);
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+ }
+ break;
+ case IIO_CHAN_INFO_OVERSAMPLING_RATIO:
+ switch (chan->type) {
+ case IIO_PRESSURE:
+ *val = 1 << data->oversampling_press;
+ ret = IIO_VAL_INT;
+ break;
+ case IIO_TEMP:
+ *val = 1 << data->oversampling_temp;
+ ret = IIO_VAL_INT;
break;
default:
ret = -EINVAL;
@@ -310,22 +372,135 @@ static int bmp280_read_raw(struct iio_dev *indio_dev,
return ret;
}
+static int bmp280_write_oversampling_ratio_temp(struct bmp280_data *data,
+ int val)
+{
+ int i;
+ const int *avail = data->chip_info->oversampling_temp_avail;
+ const int n = data->chip_info->num_oversampling_temp_avail;
+
+ for (i = 0; i < n; i++) {
+ if (avail[i] == val) {
+ data->oversampling_temp = ilog2(val);
+
+ return data->chip_info->chip_config(data);
+ }
+ }
+ return -EINVAL;
+}
+
+static int bmp280_write_oversampling_ratio_press(struct bmp280_data *data,
+ int val)
+{
+ int i;
+ const int *avail = data->chip_info->oversampling_press_avail;
+ const int n = data->chip_info->num_oversampling_press_avail;
+
+ for (i = 0; i < n; i++) {
+ if (avail[i] == val) {
+ data->oversampling_press = ilog2(val);
+
+ return data->chip_info->chip_config(data);
+ }
+ }
+ return -EINVAL;
+}
+
+static int bmp280_write_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ int val, int val2, long mask)
+{
+ int ret = 0;
+ struct bmp280_data *data = iio_priv(indio_dev);
+
+ switch (mask) {
+ case IIO_CHAN_INFO_OVERSAMPLING_RATIO:
+ mutex_lock(&data->lock);
+ switch (chan->type) {
+ case IIO_PRESSURE:
+ ret = bmp280_write_oversampling_ratio_press(data, val);
+ break;
+ case IIO_TEMP:
+ ret = bmp280_write_oversampling_ratio_temp(data, val);
+ break;
+ default:
+ ret = -EINVAL;
+ break;
+ }
+ mutex_unlock(&data->lock);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return ret;
+}
+
+static ssize_t bmp280_show_avail(char *buf, const int *vals, const int n)
+{
+ size_t len = 0;
+ int i;
+
+ for (i = 0; i < n; i++)
+ len += scnprintf(buf + len, PAGE_SIZE - len, "%d ", vals[i]);
+
+ buf[len - 1] = '\n';
+
+ return len;
+}
+
+static ssize_t bmp280_show_temp_oversampling_avail(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct bmp280_data *data = iio_priv(dev_to_iio_dev(dev));
+
+ return bmp280_show_avail(buf, data->chip_info->oversampling_temp_avail,
+ data->chip_info->num_oversampling_temp_avail);
+}
+
+static ssize_t bmp280_show_press_oversampling_avail(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct bmp280_data *data = iio_priv(dev_to_iio_dev(dev));
+
+ return bmp280_show_avail(buf, data->chip_info->oversampling_press_avail,
+ data->chip_info->num_oversampling_press_avail);
+}
+
+static IIO_DEVICE_ATTR(in_temp_oversampling_ratio_available,
+ S_IRUGO, bmp280_show_temp_oversampling_avail, NULL, 0);
+
+static IIO_DEVICE_ATTR(in_pressure_oversampling_ratio_available,
+ S_IRUGO, bmp280_show_press_oversampling_avail, NULL, 0);
+
+static struct attribute *bmp280_attributes[] = {
+ &iio_dev_attr_in_temp_oversampling_ratio_available.dev_attr.attr,
+ &iio_dev_attr_in_pressure_oversampling_ratio_available.dev_attr.attr,
+ NULL,
+};
+
+static const struct attribute_group bmp280_attrs_group = {
+ .attrs = bmp280_attributes,
+};
+
static const struct iio_info bmp280_info = {
.driver_module = THIS_MODULE,
.read_raw = &bmp280_read_raw,
+ .write_raw = &bmp280_write_raw,
+ .attrs = &bmp280_attrs_group,
};
-static int bmp280_chip_init(struct bmp280_data *data)
+static int bmp280_chip_config(struct bmp280_data *data)
{
int ret;
+ u8 osrs = BMP280_OSRS_TEMP_X(data->oversampling_temp + 1) |
+ BMP280_OSRS_PRESS_X(data->oversampling_press + 1);
ret = regmap_update_bits(data->regmap, BMP280_REG_CTRL_MEAS,
BMP280_OSRS_TEMP_MASK |
BMP280_OSRS_PRESS_MASK |
BMP280_MODE_MASK,
- BMP280_OSRS_TEMP_2X |
- BMP280_OSRS_PRESS_16X |
- BMP280_MODE_NORMAL);
+ osrs | BMP280_MODE_NORMAL);
if (ret < 0) {
dev_err(&data->client->dev,
"failed to write ctrl_meas register\n");
@@ -344,6 +519,317 @@ static int bmp280_chip_init(struct bmp280_data *data)
return ret;
}
+static const int bmp280_oversampling_avail[] = { 1, 2, 4, 8, 16 };
+
+static const struct bmp280_chip_info bmp280_chip_info = {
+ .regmap_config = &bmp280_regmap_config,
+
+ .oversampling_temp_avail = bmp280_oversampling_avail,
+ .num_oversampling_temp_avail = ARRAY_SIZE(bmp280_oversampling_avail),
+
+ .oversampling_press_avail = bmp280_oversampling_avail,
+ .num_oversampling_press_avail = ARRAY_SIZE(bmp280_oversampling_avail),
+
+ .chip_config = bmp280_chip_config,
+ .read_temp = bmp280_read_temp,
+ .read_press = bmp280_read_press,
+};
+
+static bool bmp180_is_writeable_reg(struct device *dev, unsigned int reg)
+{
+ switch (reg) {
+ case BMP280_REG_CTRL_MEAS:
+ case BMP280_REG_RESET:
+ return true;
+ default:
+ return false;
+ };
+}
+
+static bool bmp180_is_volatile_reg(struct device *dev, unsigned int reg)
+{
+ switch (reg) {
+ case BMP180_REG_OUT_XLSB:
+ case BMP180_REG_OUT_LSB:
+ case BMP180_REG_OUT_MSB:
+ case BMP280_REG_CTRL_MEAS:
+ return true;
+ default:
+ return false;
+ }
+}
+
+static const struct regmap_config bmp180_regmap_config = {
+ .reg_bits = 8,
+ .val_bits = 8,
+
+ .max_register = BMP180_REG_OUT_XLSB,
+ .cache_type = REGCACHE_RBTREE,
+
+ .writeable_reg = bmp180_is_writeable_reg,
+ .volatile_reg = bmp180_is_volatile_reg,
+};
+
+static int bmp180_measure(struct bmp280_data *data, u8 ctrl_meas)
+{
+ int ret;
+ const int conversion_time_max[] = { 4500, 7500, 13500, 25500 };
+ unsigned int delay_us;
+ unsigned int ctrl;
+
+ ret = regmap_write(data->regmap, BMP280_REG_CTRL_MEAS, ctrl_meas);
+ if (ret)
+ return ret;
+
+ if (ctrl_meas == BMP180_MEAS_TEMP)
+ delay_us = 4500;
+ else
+ delay_us = conversion_time_max[data->oversampling_press];
+
+ usleep_range(delay_us, delay_us + 1000);
+
+ ret = regmap_read(data->regmap, BMP280_REG_CTRL_MEAS, &ctrl);
+ if (ret)
+ return ret;
+
+ /* The value of this bit resets to "0" after the conversion is complete */
+ if (ctrl & BMP180_MEAS_SCO)
+ return -EIO;
+
+ return 0;
+}
+
+static int bmp180_read_adc_temp(struct bmp280_data *data, int *val)
+{
+ int ret;
+ __be16 tmp = 0;
+
+ ret = bmp180_measure(data, BMP180_MEAS_TEMP);
+ if (ret)
+ return ret;
+
+ ret = regmap_bulk_read(data->regmap, BMP180_REG_OUT_MSB, (u8 *)&tmp, 2);
+ if (ret)
+ return ret;
+
+ *val = be16_to_cpu(tmp);
+
+ return 0;
+}
+
+/*
+ * These enums are used for indexing into the array of calibration
+ * coefficients for BMP180.
+ */
+enum { AC1, AC2, AC3, AC4, AC5, AC6, B1, B2, MB, MC, MD };
+
+struct bmp180_calib {
+ s16 AC1;
+ s16 AC2;
+ s16 AC3;
+ u16 AC4;
+ u16 AC5;
+ u16 AC6;
+ s16 B1;
+ s16 B2;
+ s16 MB;
+ s16 MC;
+ s16 MD;
+};
+
+static int bmp180_read_calib(struct bmp280_data *data,
+ struct bmp180_calib *calib)
+{
+ int ret;
+ int i;
+ __be16 buf[BMP180_REG_CALIB_COUNT / 2];
+
+ ret = regmap_bulk_read(data->regmap, BMP180_REG_CALIB_START, buf,
+ sizeof(buf));
+
+ if (ret < 0)
+ return ret;
+
+ /* None of the words has the value 0 or 0xFFFF */
+ for (i = 0; i < ARRAY_SIZE(buf); i++) {
+ if (buf[i] == cpu_to_be16(0) || buf[i] == cpu_to_be16(0xffff))
+ return -EIO;
+ }
+
+ calib->AC1 = be16_to_cpu(buf[AC1]);
+ calib->AC2 = be16_to_cpu(buf[AC2]);
+ calib->AC3 = be16_to_cpu(buf[AC3]);
+ calib->AC4 = be16_to_cpu(buf[AC4]);
+ calib->AC5 = be16_to_cpu(buf[AC5]);
+ calib->AC6 = be16_to_cpu(buf[AC6]);
+ calib->B1 = be16_to_cpu(buf[B1]);
+ calib->B2 = be16_to_cpu(buf[B2]);
+ calib->MB = be16_to_cpu(buf[MB]);
+ calib->MC = be16_to_cpu(buf[MC]);
+ calib->MD = be16_to_cpu(buf[MD]);
+
+ return 0;
+}
+
+/*
+ * Returns temperature in DegC, resolution is 0.1 DegC.
+ * t_fine carries fine temperature as global value.
+ *
+ * Taken from datasheet, Section 3.5, "Calculating pressure and temperature".
+ */
+static s32 bmp180_compensate_temp(struct bmp280_data *data, s32 adc_temp)
+{
+ int ret;
+ s32 x1, x2;
+ struct bmp180_calib calib;
+
+ ret = bmp180_read_calib(data, &calib);
+ if (ret < 0) {
+ dev_err(&data->client->dev,
+ "failed to read calibration coefficients\n");
+ return ret;
+ }
+
+ x1 = ((adc_temp - calib.AC6) * calib.AC5) >> 15;
+ x2 = (calib.MC << 11) / (x1 + calib.MD);
+ data->t_fine = x1 + x2;
+
+ return (data->t_fine + 8) >> 4;
+}
+
+static int bmp180_read_temp(struct bmp280_data *data, int *val)
+{
+ int ret;
+ s32 adc_temp, comp_temp;
+
+ ret = bmp180_read_adc_temp(data, &adc_temp);
+ if (ret)
+ return ret;
+
+ comp_temp = bmp180_compensate_temp(data, adc_temp);
+
+ /*
+ * val might be NULL if we're called by the read_press routine,
+ * which only cares about the carried-over t_fine value.
+ */
+ if (val) {
+ *val = comp_temp * 100;
+ return IIO_VAL_INT;
+ }
+
+ return 0;
+}
+
+static int bmp180_read_adc_press(struct bmp280_data *data, int *val)
+{
+ int ret;
+ __be32 tmp = 0;
+ u8 oss = data->oversampling_press;
+
+ ret = bmp180_measure(data, BMP180_MEAS_PRESS_X(oss));
+ if (ret)
+ return ret;
+
+ ret = regmap_bulk_read(data->regmap, BMP180_REG_OUT_MSB, (u8 *)&tmp, 3);
+ if (ret)
+ return ret;
+
+ *val = (be32_to_cpu(tmp) >> 8) >> (8 - oss);
+
+ return 0;
+}
+
+/*
+ * Returns pressure in Pa, resolution is 1 Pa.
+ *
+ * Taken from datasheet, Section 3.5, "Calculating pressure and temperature".
+ */
+static u32 bmp180_compensate_press(struct bmp280_data *data, s32 adc_press)
+{
+ int ret;
+ s32 x1, x2, x3, p;
+ s32 b3, b6;
+ u32 b4, b7;
+ s32 oss = data->oversampling_press;
+ struct bmp180_calib calib;
+
+ ret = bmp180_read_calib(data, &calib);
+ if (ret < 0) {
+ dev_err(&data->client->dev,
+ "failed to read calibration coefficients\n");
+ return ret;
+ }
+
+ b6 = data->t_fine - 4000;
+ x1 = (calib.B2 * (b6 * b6 >> 12)) >> 11;
+ x2 = calib.AC2 * b6 >> 11;
+ x3 = x1 + x2;
+ b3 = ((((s32)calib.AC1 * 4 + x3) << oss) + 2) / 4;
+ x1 = calib.AC3 * b6 >> 13;
+ x2 = (calib.B1 * ((b6 * b6) >> 12)) >> 16;
+ x3 = (x1 + x2 + 2) >> 2;
+ b4 = calib.AC4 * (u32)(x3 + 32768) >> 15;
+ b7 = ((u32)adc_press - b3) * (50000 >> oss);
+ if (b7 < 0x80000000)
+ p = (b7 * 2) / b4;
+ else
+ p = (b7 / b4) * 2;
+
+ x1 = (p >> 8) * (p >> 8);
+ x1 = (x1 * 3038) >> 16;
+ x2 = (-7357 * p) >> 16;
+
+ return p + ((x1 + x2 + 3791) >> 4);
+}
+
+static int bmp180_read_press(struct bmp280_data *data,
+ int *val, int *val2)
+{
+ int ret;
+ s32 adc_press;
+ u32 comp_press;
+
+ /* Read and compensate temperature so we get a reading of t_fine. */
+ ret = bmp180_read_temp(data, NULL);
+ if (ret)
+ return ret;
+
+ ret = bmp180_read_adc_press(data, &adc_press);
+ if (ret)
+ return ret;
+
+ comp_press = bmp180_compensate_press(data, adc_press);
+
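+ /* comp_press is in Pa; report kPa to IIO via the 1/1000 fraction. */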
+ *val = comp_press;
+ *val2 = 1000;
+
+ return IIO_VAL_FRACTIONAL;
+}
+
+static int bmp180_chip_config(struct bmp280_data *data)
+{
+ return 0;
+}
+
+static const int bmp180_oversampling_temp_avail[] = { 1 };
+static const int bmp180_oversampling_press_avail[] = { 1, 2, 4, 8 };
+
+static const struct bmp280_chip_info bmp180_chip_info = {
+ .regmap_config = &bmp180_regmap_config,
+
+ .oversampling_temp_avail = bmp180_oversampling_temp_avail,
+ .num_oversampling_temp_avail =
+ ARRAY_SIZE(bmp180_oversampling_temp_avail),
+
+ .oversampling_press_avail = bmp180_oversampling_press_avail,
+ .num_oversampling_press_avail =
+ ARRAY_SIZE(bmp180_oversampling_press_avail),
+
+ .chip_config = bmp180_chip_config,
+ .read_temp = bmp180_read_temp,
+ .read_press = bmp180_read_press,
+};
+
static int bmp280_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
@@ -367,7 +853,23 @@ static int bmp280_probe(struct i2c_client *client,
indio_dev->info = &bmp280_info;
indio_dev->modes = INDIO_DIRECT_MODE;
- data->regmap = devm_regmap_init_i2c(client, &bmp280_regmap_config);
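+ /*
+ * Select the chip-specific info and default oversampling ratios;
+ * the ratios are stored as their ilog2() value.
+ */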
+ switch (id->driver_data) {
+ case BMP180_CHIP_ID:
+ data->chip_info = &bmp180_chip_info;
+ data->oversampling_press = ilog2(8);
+ data->oversampling_temp = ilog2(1);
+ break;
+ case BMP280_CHIP_ID:
+ data->chip_info = &bmp280_chip_info;
+ data->oversampling_press = ilog2(16);
+ data->oversampling_temp = ilog2(2);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ data->regmap = devm_regmap_init_i2c(client,
+ data->chip_info->regmap_config);
if (IS_ERR(data->regmap)) {
dev_err(&client->dev, "failed to allocate register map\n");
return PTR_ERR(data->regmap);
@@ -376,13 +878,13 @@ static int bmp280_probe(struct i2c_client *client,
ret = regmap_read(data->regmap, BMP280_REG_ID, &chip_id);
if (ret < 0)
return ret;
- if (chip_id != BMP280_CHIP_ID) {
+ if (chip_id != id->driver_data) {
dev_err(&client->dev, "bad chip id. expected %x got %x\n",
BMP280_CHIP_ID, chip_id);
return -EINVAL;
}
- ret = bmp280_chip_init(data);
+ ret = data->chip_info->chip_config(data);
if (ret < 0)
return ret;
@@ -390,13 +892,17 @@ static int bmp280_probe(struct i2c_client *client,
}
static const struct acpi_device_id bmp280_acpi_match[] = {
- {"BMP0280", 0},
+ {"BMP0280", BMP280_CHIP_ID },
+ {"BMP0180", BMP180_CHIP_ID },
+ {"BMP0085", BMP180_CHIP_ID },
{ },
};
MODULE_DEVICE_TABLE(acpi, bmp280_acpi_match);
static const struct i2c_device_id bmp280_id[] = {
- {"bmp280", 0},
+ {"bmp280", BMP280_CHIP_ID },
+ {"bmp180", BMP180_CHIP_ID },
+ {"bmp085", BMP180_CHIP_ID },
{ },
};
MODULE_DEVICE_TABLE(i2c, bmp280_id);
@@ -412,5 +918,5 @@ static struct i2c_driver bmp280_driver = {
module_i2c_driver(bmp280_driver);
MODULE_AUTHOR("Vlad Dogaru <vlad.dogaru@intel.com>");
-MODULE_DESCRIPTION("Driver for Bosch Sensortec BMP280 pressure and temperature sensor");
+MODULE_DESCRIPTION("Driver for Bosch Sensortec BMP180/BMP280 pressure and temperature sensor");
MODULE_LICENSE("GPL v2");
diff --git a/drivers/iio/pressure/hp03.c b/drivers/iio/pressure/hp03.c
new file mode 100644
index 000000000000..ac76515d5d49
--- /dev/null
+++ b/drivers/iio/pressure/hp03.c
@@ -0,0 +1,312 @@
+/*
+ * Copyright (c) 2016 Marek Vasut <marex@denx.de>
+ *
+ * Driver for Hope RF HP03 digital temperature and pressure sensor.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#define pr_fmt(fmt) "hp03: " fmt
+
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/gpio/consumer.h>
+#include <linux/i2c.h>
+#include <linux/regmap.h>
+#include <linux/iio/iio.h>
+#include <linux/iio/sysfs.h>
+
+/*
+ * The HP03 sensor occupies two fixed I2C addresses:
+ * 0x50 ... read-only EEPROM with calibration data
+ * 0x77 ... read-write ADC for pressure and temperature
+ */
+#define HP03_EEPROM_ADDR 0x50
+#define HP03_ADC_ADDR 0x77
+
+#define HP03_EEPROM_CX_OFFSET 0x10
+#define HP03_EEPROM_AB_OFFSET 0x1e
+#define HP03_EEPROM_CD_OFFSET 0x20
+
+#define HP03_ADC_WRITE_REG 0xff
+#define HP03_ADC_READ_REG 0xfd
+#define HP03_ADC_READ_PRESSURE 0xf0 /* D1 in datasheet */
+#define HP03_ADC_READ_TEMP 0xe8 /* D2 in datasheet */
+
+struct hp03_priv {
+ struct i2c_client *client;
+ struct mutex lock;
+ struct gpio_desc *xclr_gpio;
+
+ struct i2c_client *eeprom_client;
+ struct regmap *eeprom_regmap;
+
+ s32 pressure; /* kPa */
+ s32 temp; /* Deg. C */
+};
+
+static const struct iio_chan_spec hp03_channels[] = {
+ {
+ .type = IIO_PRESSURE,
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
+ },
+ {
+ .type = IIO_TEMP,
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW),
+ .info_mask_shared_by_type = BIT(IIO_CHAN_INFO_SCALE),
+ },
+};
+
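+/*
+ * The calibration EEPROM is read-only and its contents never change,
+ * so regmap can serve every read from its cache.
+ */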
+static bool hp03_is_writeable_reg(struct device *dev, unsigned int reg)
+{
+ return false;
+}
+
+static bool hp03_is_volatile_reg(struct device *dev, unsigned int reg)
+{
+ return false;
+}
+
+static const struct regmap_config hp03_regmap_config = {
+ .reg_bits = 8,
+ .val_bits = 8,
+
+ .max_register = HP03_EEPROM_CD_OFFSET + 1,
+ .cache_type = REGCACHE_RBTREE,
+
+ .writeable_reg = hp03_is_writeable_reg,
+ .volatile_reg = hp03_is_volatile_reg,
+};
+
+static int hp03_get_temp_pressure(struct hp03_priv *priv, const u8 reg)
+{
+ int ret;
+
+ ret = i2c_smbus_write_byte_data(priv->client, HP03_ADC_WRITE_REG, reg);
+ if (ret < 0)
+ return ret;
+
+ msleep(50); /* Wait for conversion to finish */
+
+ return i2c_smbus_read_word_data(priv->client, HP03_ADC_READ_REG);
+}
+
+static int hp03_update_temp_pressure(struct hp03_priv *priv)
+{
+ struct device *dev = &priv->client->dev;
+ u8 coefs[18];
+ u16 cx_val[7];
+ int ab_val, d1_val, d2_val, diff_val, dut, off, sens, x;
+ int i, ret;
+
+ /* Sample coefficients from EEPROM */
+ ret = regmap_bulk_read(priv->eeprom_regmap, HP03_EEPROM_CX_OFFSET,
+ coefs, sizeof(coefs));
+ if (ret < 0) {
+ dev_err(dev, "Failed to read EEPROM (reg=%02x)\n",
+ HP03_EEPROM_CX_OFFSET);
+ return ret;
+ }
+
+ /* Sample Temperature and Pressure */
+ gpiod_set_value_cansleep(priv->xclr_gpio, 1);
+
+ ret = hp03_get_temp_pressure(priv, HP03_ADC_READ_PRESSURE);
+ if (ret < 0) {
+ dev_err(dev, "Failed to read pressure\n");
+ goto err_adc;
+ }
+ d1_val = ret;
+
+ ret = hp03_get_temp_pressure(priv, HP03_ADC_READ_TEMP);
+ if (ret < 0) {
+ dev_err(dev, "Failed to read temperature\n");
+ goto err_adc;
+ }
+ d2_val = ret;
+
+ gpiod_set_value_cansleep(priv->xclr_gpio, 0);
+
+ /* The Cx coefficients and Temp/Pressure values are MSB first. */
+ for (i = 0; i < 7; i++)
+ cx_val[i] = (coefs[2 * i] << 8) | (coefs[(2 * i) + 1] << 0);
+ d1_val = ((d1_val >> 8) & 0xff) | ((d1_val & 0xff) << 8);
+ d2_val = ((d2_val >> 8) & 0xff) | ((d2_val & 0xff) << 8);
+
+ /* Coefficient voodoo from the HP03 datasheet. */
+ if (d2_val >= cx_val[4])
+ ab_val = coefs[14]; /* A-value */
+ else
+ ab_val = coefs[15]; /* B-value */
+
+ diff_val = d2_val - cx_val[4];
+ dut = (ab_val * (diff_val >> 7) * (diff_val >> 7)) >> coefs[16];
+ dut = diff_val - dut;
+
+ off = (cx_val[1] + (((cx_val[3] - 1024) * dut) >> 14)) * 4;
+ sens = cx_val[0] + ((cx_val[2] * dut) >> 10);
+ x = ((sens * (d1_val - 7168)) >> 14) - off;
+
+ priv->pressure = ((x * 100) >> 5) + (cx_val[6] * 10);
+ priv->temp = 250 + ((dut * cx_val[5]) >> 16) - (dut >> coefs[17]);
+
+ return 0;
+
+err_adc:
+ gpiod_set_value_cansleep(priv->xclr_gpio, 0);
+ return ret;
+}
+
+static int hp03_read_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ int *val, int *val2, long mask)
+{
+ struct hp03_priv *priv = iio_priv(indio_dev);
+ int ret;
+
+ mutex_lock(&priv->lock);
+ ret = hp03_update_temp_pressure(priv);
+ mutex_unlock(&priv->lock);
+
+ if (ret)
+ return ret;
+
+ switch (mask) {
+ case IIO_CHAN_INFO_RAW:
+ switch (chan->type) {
+ case IIO_PRESSURE:
+ *val = priv->pressure;
+ return IIO_VAL_INT;
+ case IIO_TEMP:
+ *val = priv->temp;
+ return IIO_VAL_INT;
+ default:
+ return -EINVAL;
+ }
+ break;
+ case IIO_CHAN_INFO_SCALE:
+ switch (chan->type) {
+ case IIO_PRESSURE:
+ *val = 0;
+ *val2 = 1000;
+ return IIO_VAL_INT_PLUS_MICRO;
+ case IIO_TEMP:
+ *val = 10;
+ return IIO_VAL_INT;
+ default:
+ return -EINVAL;
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return -EINVAL;
+}
+
+static const struct iio_info hp03_info = {
+ .driver_module = THIS_MODULE,
+ .read_raw = &hp03_read_raw,
+};
+
+static int hp03_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct device *dev = &client->dev;
+ struct iio_dev *indio_dev;
+ struct hp03_priv *priv;
+ int ret;
+
+ indio_dev = devm_iio_device_alloc(dev, sizeof(*priv));
+ if (!indio_dev)
+ return -ENOMEM;
+
+ priv = iio_priv(indio_dev);
+ priv->client = client;
+ mutex_init(&priv->lock);
+
+ indio_dev->dev.parent = dev;
+ indio_dev->name = id->name;
+ indio_dev->channels = hp03_channels;
+ indio_dev->num_channels = ARRAY_SIZE(hp03_channels);
+ indio_dev->info = &hp03_info;
+ indio_dev->modes = INDIO_DIRECT_MODE;
+
+ priv->xclr_gpio = devm_gpiod_get_index(dev, "xclr", 0, GPIOD_OUT_HIGH);
+ if (IS_ERR(priv->xclr_gpio)) {
+ dev_err(dev, "Failed to claim XCLR GPIO\n");
+ ret = PTR_ERR(priv->xclr_gpio);
+ return ret;
+ }
+
+ /*
+ * Allocate another device for the on-sensor EEPROM,
+ * which has its own dedicated I2C address and contains
+ * the calibration constants for the sensor.
+ */
+ priv->eeprom_client = i2c_new_dummy(client->adapter, HP03_EEPROM_ADDR);
+ if (!priv->eeprom_client) {
+ dev_err(dev, "New EEPROM I2C device failed\n");
+ return -ENODEV;
+ }
+
+ priv->eeprom_regmap = regmap_init_i2c(priv->eeprom_client,
+ &hp03_regmap_config);
+ if (IS_ERR(priv->eeprom_regmap)) {
+ dev_err(dev, "Failed to allocate EEPROM regmap\n");
+ ret = PTR_ERR(priv->eeprom_regmap);
+ goto err_cleanup_eeprom_client;
+ }
+
+ ret = iio_device_register(indio_dev);
+ if (ret) {
+ dev_err(dev, "Failed to register IIO device\n");
+ goto err_cleanup_eeprom_regmap;
+ }
+
+ i2c_set_clientdata(client, indio_dev);
+
+ return 0;
+
+err_cleanup_eeprom_regmap:
+ regmap_exit(priv->eeprom_regmap);
+
+err_cleanup_eeprom_client:
+ i2c_unregister_device(priv->eeprom_client);
+ return ret;
+}
+
+static int hp03_remove(struct i2c_client *client)
+{
+ struct iio_dev *indio_dev = i2c_get_clientdata(client);
+ struct hp03_priv *priv = iio_priv(indio_dev);
+
+ iio_device_unregister(indio_dev);
+ regmap_exit(priv->eeprom_regmap);
+ i2c_unregister_device(priv->eeprom_client);
+
+ return 0;
+}
+
+static const struct i2c_device_id hp03_id[] = {
+ { "hp03", 0 },
+ { },
+};
+MODULE_DEVICE_TABLE(i2c, hp03_id);
+
+static struct i2c_driver hp03_driver = {
+ .driver = {
+ .name = "hp03",
+ },
+ .probe = hp03_probe,
+ .remove = hp03_remove,
+ .id_table = hp03_id,
+};
+module_i2c_driver(hp03_driver);
+
+MODULE_AUTHOR("Marek Vasut <marex@denx.de>");
+MODULE_DESCRIPTION("Driver for Hope RF HP03 pressure and temperature sensor");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/iio/pressure/hp206c.c b/drivers/iio/pressure/hp206c.c
new file mode 100644
index 000000000000..90f2b6e4a920
--- /dev/null
+++ b/drivers/iio/pressure/hp206c.c
@@ -0,0 +1,426 @@
+/*
+ * hp206c.c - HOPERF HP206C precision barometer and altimeter sensor
+ *
+ * Copyright (c) 2016, Intel Corporation.
+ *
+ * This file is subject to the terms and conditions of version 2 of
+ * the GNU General Public License. See the file COPYING in the main
+ * directory of this archive for more details.
+ *
+ * (7-bit I2C slave address 0x76)
+ *
+ * Datasheet:
+ * http://www.hoperf.com/upload/sensor/HP206C_DataSheet_EN_V2.0.pdf
+ */
+
+#include <linux/module.h>
+#include <linux/i2c.h>
+#include <linux/iio/iio.h>
+#include <linux/iio/sysfs.h>
+#include <linux/delay.h>
+#include <linux/util_macros.h>
+#include <linux/acpi.h>
+
+/* I2C commands: */
+#define HP206C_CMD_SOFT_RST 0x06
+
+#define HP206C_CMD_ADC_CVT 0x40
+
+#define HP206C_CMD_ADC_CVT_OSR_4096 0x00
+#define HP206C_CMD_ADC_CVT_OSR_2048 0x04
+#define HP206C_CMD_ADC_CVT_OSR_1024 0x08
+#define HP206C_CMD_ADC_CVT_OSR_512 0x0c
+#define HP206C_CMD_ADC_CVT_OSR_256 0x10
+#define HP206C_CMD_ADC_CVT_OSR_128 0x14
+
+#define HP206C_CMD_ADC_CVT_CHNL_PT 0x00
+#define HP206C_CMD_ADC_CVT_CHNL_T 0x02
+
+#define HP206C_CMD_READ_P 0x30
+#define HP206C_CMD_READ_T 0x32
+
+#define HP206C_CMD_READ_REG 0x80
+#define HP206C_CMD_WRITE_REG 0xc0
+
+#define HP206C_REG_INT_EN 0x0b
+#define HP206C_REG_INT_CFG 0x0c
+
+#define HP206C_REG_INT_SRC 0x0d
+#define HP206C_FLAG_DEV_RDY 0x40
+
+#define HP206C_REG_PARA 0x0f
+#define HP206C_FLAG_CMPS_EN 0x80
+
+/* Maximum spin for DEV_RDY */
+#define HP206C_MAX_DEV_RDY_WAIT_COUNT 20
+#define HP206C_DEV_RDY_WAIT_US 20000
+
+struct hp206c_data {
+ struct mutex mutex;
+ struct i2c_client *client;
+ int temp_osr_index;
+ int pres_osr_index;
+};
+
+struct hp206c_osr_setting {
+ u8 osr_mask;
+ unsigned int temp_conv_time_us;
+ unsigned int pres_conv_time_us;
+};
+
+/* Data from Table 5 in datasheet. */
+static const struct hp206c_osr_setting hp206c_osr_settings[] = {
+ { HP206C_CMD_ADC_CVT_OSR_4096, 65600, 131100 },
+ { HP206C_CMD_ADC_CVT_OSR_2048, 32800, 65600 },
+ { HP206C_CMD_ADC_CVT_OSR_1024, 16400, 32800 },
+ { HP206C_CMD_ADC_CVT_OSR_512, 8200, 16400 },
+ { HP206C_CMD_ADC_CVT_OSR_256, 4100, 8200 },
+ { HP206C_CMD_ADC_CVT_OSR_128, 2100, 4100 },
+};
+static const int hp206c_osr_rates[] = { 4096, 2048, 1024, 512, 256, 128 };
+static const char hp206c_osr_rates_str[] = "4096 2048 1024 512 256 128";
+
+static inline int hp206c_read_reg(struct i2c_client *client, u8 reg)
+{
+ return i2c_smbus_read_byte_data(client, HP206C_CMD_READ_REG | reg);
+}
+
+static inline int hp206c_write_reg(struct i2c_client *client, u8 reg, u8 val)
+{
+ return i2c_smbus_write_byte_data(client,
+ HP206C_CMD_WRITE_REG | reg, val);
+}
+
+static int hp206c_read_20bit(struct i2c_client *client, u8 cmd)
+{
+ int ret;
+ u8 values[3];
+
+ ret = i2c_smbus_read_i2c_block_data(client, cmd, 3, values);
+ if (ret < 0)
+ return ret;
+ if (ret != 3)
+ return -EIO;
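+ /* The 20-bit result is MSB first; only the low nibble of the first byte is used. */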
+ return ((values[0] & 0xF) << 16) | (values[1] << 8) | (values[2]);
+}
+
+/* Poll DEV_RDY up to 20 times (20 ms apart); time out with an error if it never reads 1. */
+static int hp206c_wait_dev_rdy(struct iio_dev *indio_dev)
+{
+ int ret;
+ int count = 0;
+ struct hp206c_data *data = iio_priv(indio_dev);
+ struct i2c_client *client = data->client;
+
+ while (++count <= HP206C_MAX_DEV_RDY_WAIT_COUNT) {
+ ret = hp206c_read_reg(client, HP206C_REG_INT_SRC);
+ if (ret < 0) {
+ dev_err(&indio_dev->dev, "Failed READ_REG INT_SRC: %d\n", ret);
+ return ret;
+ }
+ if (ret & HP206C_FLAG_DEV_RDY)
+ return 0;
+ usleep_range(HP206C_DEV_RDY_WAIT_US, HP206C_DEV_RDY_WAIT_US * 3 / 2);
+ }
+ return -ETIMEDOUT;
+}
+
+static int hp206c_set_compensation(struct i2c_client *client, bool enabled)
+{
+ int val;
+
+ val = hp206c_read_reg(client, HP206C_REG_PARA);
+ if (val < 0)
+ return val;
+ if (enabled)
+ val |= HP206C_FLAG_CMPS_EN;
+ else
+ val &= ~HP206C_FLAG_CMPS_EN;
+
+ return hp206c_write_reg(client, HP206C_REG_PARA, val);
+}
+
+/* Do a soft reset */
+static int hp206c_soft_reset(struct iio_dev *indio_dev)
+{
+ int ret;
+ struct hp206c_data *data = iio_priv(indio_dev);
+ struct i2c_client *client = data->client;
+
+ ret = i2c_smbus_write_byte(client, HP206C_CMD_SOFT_RST);
+ if (ret) {
+ dev_err(&client->dev, "Failed to reset device: %d\n", ret);
+ return ret;
+ }
+
+ usleep_range(400, 600);
+
+ ret = hp206c_wait_dev_rdy(indio_dev);
+ if (ret) {
+ dev_err(&client->dev, "Device not ready after soft reset: %d\n", ret);
+ return ret;
+ }
+
+ ret = hp206c_set_compensation(client, true);
+ if (ret)
+ dev_err(&client->dev, "Failed to enable compensation: %d\n", ret);
+ return ret;
+}
+
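+/*
+ * Start a conversion, wait for the device to become ready again and
+ * read back the 20-bit result.
+ */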
+static int hp206c_conv_and_read(struct iio_dev *indio_dev,
+ u8 conv_cmd, u8 read_cmd,
+ unsigned int sleep_us)
+{
+ int ret;
+ struct hp206c_data *data = iio_priv(indio_dev);
+ struct i2c_client *client = data->client;
+
+ ret = hp206c_wait_dev_rdy(indio_dev);
+ if (ret < 0) {
+ dev_err(&indio_dev->dev, "Device not ready: %d\n", ret);
+ return ret;
+ }
+
+ ret = i2c_smbus_write_byte(client, conv_cmd);
+ if (ret < 0) {
+ dev_err(&indio_dev->dev, "Failed convert: %d\n", ret);
+ return ret;
+ }
+
+ usleep_range(sleep_us, sleep_us * 3 / 2);
+
+ ret = hp206c_wait_dev_rdy(indio_dev);
+ if (ret < 0) {
+ dev_err(&indio_dev->dev, "Device not ready: %d\n", ret);
+ return ret;
+ }
+
+ ret = hp206c_read_20bit(client, read_cmd);
+ if (ret < 0)
+ dev_err(&indio_dev->dev, "Failed read: %d\n", ret);
+
+ return ret;
+}
+
+static int hp206c_read_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan, int *val,
+ int *val2, long mask)
+{
+ int ret;
+ struct hp206c_data *data = iio_priv(indio_dev);
+ const struct hp206c_osr_setting *osr_setting;
+ u8 conv_cmd;
+
+ mutex_lock(&data->mutex);
+
+ switch (mask) {
+ case IIO_CHAN_INFO_OVERSAMPLING_RATIO:
+ switch (chan->type) {
+ case IIO_TEMP:
+ *val = hp206c_osr_rates[data->temp_osr_index];
+ ret = IIO_VAL_INT;
+ break;
+
+ case IIO_PRESSURE:
+ *val = hp206c_osr_rates[data->pres_osr_index];
+ ret = IIO_VAL_INT;
+ break;
+ default:
+ ret = -EINVAL;
+ }
+ break;
+
+ case IIO_CHAN_INFO_RAW:
+ switch (chan->type) {
+ case IIO_TEMP:
+ osr_setting = &hp206c_osr_settings[data->temp_osr_index];
+ conv_cmd = HP206C_CMD_ADC_CVT |
+ osr_setting->osr_mask |
+ HP206C_CMD_ADC_CVT_CHNL_T;
+ ret = hp206c_conv_and_read(indio_dev,
+ conv_cmd,
+ HP206C_CMD_READ_T,
+ osr_setting->temp_conv_time_us);
+ if (ret >= 0) {
+ /* 20 significant bits are provided.
+ * Extend sign over the rest.
+ */
+ *val = sign_extend32(ret, 19);
+ ret = IIO_VAL_INT;
+ }
+ break;
+
+ case IIO_PRESSURE:
+ osr_setting = &hp206c_osr_settings[data->pres_osr_index];
+ conv_cmd = HP206C_CMD_ADC_CVT |
+ osr_setting->osr_mask |
+ HP206C_CMD_ADC_CVT_CHNL_PT;
+ ret = hp206c_conv_and_read(indio_dev,
+ conv_cmd,
+ HP206C_CMD_READ_P,
+ osr_setting->pres_conv_time_us);
+ if (ret >= 0) {
+ *val = ret;
+ ret = IIO_VAL_INT;
+ }
+ break;
+ default:
+ ret = -EINVAL;
+ }
+ break;
+
+ case IIO_CHAN_INFO_SCALE:
+ switch (chan->type) {
+ case IIO_TEMP:
+ *val = 0;
+ *val2 = 10000;
+ ret = IIO_VAL_INT_PLUS_MICRO;
+ break;
+
+ case IIO_PRESSURE:
+ *val = 0;
+ *val2 = 1000;
+ ret = IIO_VAL_INT_PLUS_MICRO;
+ break;
+ default:
+ ret = -EINVAL;
+ }
+ break;
+
+ default:
+ ret = -EINVAL;
+ }
+
+ mutex_unlock(&data->mutex);
+ return ret;
+}
+
+static int hp206c_write_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ int val, int val2, long mask)
+{
+ int ret = 0;
+ struct hp206c_data *data = iio_priv(indio_dev);
+
+ if (mask != IIO_CHAN_INFO_OVERSAMPLING_RATIO)
+ return -EINVAL;
+ mutex_lock(&data->mutex);
+ switch (chan->type) {
+ case IIO_TEMP:
+ data->temp_osr_index = find_closest_descending(val,
+ hp206c_osr_rates, ARRAY_SIZE(hp206c_osr_rates));
+ break;
+ case IIO_PRESSURE:
+ data->pres_osr_index = find_closest_descending(val,
+ hp206c_osr_rates, ARRAY_SIZE(hp206c_osr_rates));
+ break;
+ default:
+ ret = -EINVAL;
+ }
+ mutex_unlock(&data->mutex);
+ return ret;
+}
+
+static const struct iio_chan_spec hp206c_channels[] = {
+ {
+ .type = IIO_TEMP,
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
+ BIT(IIO_CHAN_INFO_SCALE) |
+ BIT(IIO_CHAN_INFO_OVERSAMPLING_RATIO),
+ },
+ {
+ .type = IIO_PRESSURE,
+ .info_mask_separate = BIT(IIO_CHAN_INFO_RAW) |
+ BIT(IIO_CHAN_INFO_SCALE) |
+ BIT(IIO_CHAN_INFO_OVERSAMPLING_RATIO),
+ }
+};
+
+static IIO_CONST_ATTR_SAMP_FREQ_AVAIL(hp206c_osr_rates_str);
+
+static struct attribute *hp206c_attributes[] = {
+ &iio_const_attr_sampling_frequency_available.dev_attr.attr,
+ NULL,
+};
+
+static const struct attribute_group hp206c_attribute_group = {
+ .attrs = hp206c_attributes,
+};
+
+static const struct iio_info hp206c_info = {
+ .attrs = &hp206c_attribute_group,
+ .read_raw = hp206c_read_raw,
+ .write_raw = hp206c_write_raw,
+ .driver_module = THIS_MODULE,
+};
+
+static int hp206c_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct iio_dev *indio_dev;
+ struct hp206c_data *data;
+ int ret;
+
+ if (!i2c_check_functionality(client->adapter,
+ I2C_FUNC_SMBUS_BYTE |
+ I2C_FUNC_SMBUS_BYTE_DATA |
+ I2C_FUNC_SMBUS_READ_I2C_BLOCK)) {
+ dev_err(&client->dev, "Adapter does not support "
+ "all required i2c functionality\n");
+ return -ENODEV;
+ }
+
+ indio_dev = devm_iio_device_alloc(&client->dev, sizeof(*data));
+ if (!indio_dev)
+ return -ENOMEM;
+
+ data = iio_priv(indio_dev);
+ data->client = client;
+ mutex_init(&data->mutex);
+
+ indio_dev->info = &hp206c_info;
+ indio_dev->name = id->name;
+ indio_dev->dev.parent = &client->dev;
+ indio_dev->modes = INDIO_DIRECT_MODE;
+ indio_dev->channels = hp206c_channels;
+ indio_dev->num_channels = ARRAY_SIZE(hp206c_channels);
+
+ i2c_set_clientdata(client, indio_dev);
+
+ /* Do a soft reset on probe */
+ ret = hp206c_soft_reset(indio_dev);
+ if (ret) {
+ dev_err(&client->dev, "Failed to reset on startup: %d\n", ret);
+ return -ENODEV;
+ }
+
+ return devm_iio_device_register(&client->dev, indio_dev);
+}
+
+static const struct i2c_device_id hp206c_id[] = {
+ {"hp206c"},
+ {}
+};
+
+#ifdef CONFIG_ACPI
+static const struct acpi_device_id hp206c_acpi_match[] = {
+ {"HOP206C", 0},
+ { },
+};
+MODULE_DEVICE_TABLE(acpi, hp206c_acpi_match);
+#endif
+
+static struct i2c_driver hp206c_driver = {
+ .probe = hp206c_probe,
+ .id_table = hp206c_id,
+ .driver = {
+ .name = "hp206c",
+ .acpi_match_table = ACPI_PTR(hp206c_acpi_match),
+ },
+};
+
+module_i2c_driver(hp206c_driver);
+
+MODULE_DESCRIPTION("HOPERF HP206C precision barometer and altimeter sensor");
+MODULE_AUTHOR("Leonard Crestez <leonard.crestez@intel.com>");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/iio/pressure/ms5611.h b/drivers/iio/pressure/ms5611.h
index 8b08e4b7e3a9..ccda63c5b3c3 100644
--- a/drivers/iio/pressure/ms5611.h
+++ b/drivers/iio/pressure/ms5611.h
@@ -16,15 +16,11 @@
#include <linux/iio/iio.h>
#include <linux/mutex.h>
+struct regulator;
+
#define MS5611_RESET 0x1e
#define MS5611_READ_ADC 0x00
#define MS5611_READ_PROM_WORD 0xA0
-#define MS5611_START_TEMP_CONV 0x58
-#define MS5611_START_PRESSURE_CONV 0x48
-
-#define MS5611_CONV_TIME_MIN 9040
-#define MS5611_CONV_TIME_MAX 10000
-
#define MS5611_PROM_WORDS_NB 8
enum {
@@ -39,16 +35,31 @@ struct ms5611_chip_info {
s32 *temp, s32 *pressure);
};
+/*
+ * OverSampling Rate descriptor.
+ * Warning: cmd MUST be kept aligned on a word boundary (see
+ * ms5611_spi_read_adc_temp_and_pressure in ms5611_spi.c).
+ */
+struct ms5611_osr {
+ unsigned long conv_usec;
+ u8 cmd;
+ unsigned short rate;
+};
+
struct ms5611_state {
void *client;
struct mutex lock;
+ const struct ms5611_osr *pressure_osr;
+ const struct ms5611_osr *temp_osr;
+
int (*reset)(struct device *dev);
int (*read_prom_word)(struct device *dev, int index, u16 *word);
int (*read_adc_temp_and_pressure)(struct device *dev,
s32 *temp, s32 *pressure);
struct ms5611_chip_info *chip_info;
+ struct regulator *vdd;
};
int ms5611_probe(struct iio_dev *indio_dev, struct device *dev,
diff --git a/drivers/iio/pressure/ms5611_core.c b/drivers/iio/pressure/ms5611_core.c
index 992ad8d3b67a..76578b07bb6e 100644
--- a/drivers/iio/pressure/ms5611_core.c
+++ b/drivers/iio/pressure/ms5611_core.c
@@ -18,11 +18,44 @@
#include <linux/delay.h>
#include <linux/regulator/consumer.h>
+#include <linux/iio/sysfs.h>
#include <linux/iio/buffer.h>
#include <linux/iio/triggered_buffer.h>
#include <linux/iio/trigger_consumer.h>
#include "ms5611.h"
+#define MS5611_INIT_OSR(_cmd, _conv_usec, _rate) \
+ { .cmd = _cmd, .conv_usec = _conv_usec, .rate = _rate }
+
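+/* Conversion command and conversion time (in microseconds) for each supported oversampling ratio. */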
+static const struct ms5611_osr ms5611_avail_pressure_osr[] = {
+ MS5611_INIT_OSR(0x40, 600, 256),
+ MS5611_INIT_OSR(0x42, 1170, 512),
+ MS5611_INIT_OSR(0x44, 2280, 1024),
+ MS5611_INIT_OSR(0x46, 4540, 2048),
+ MS5611_INIT_OSR(0x48, 9040, 4096)
+};
+
+static const struct ms5611_osr ms5611_avail_temp_osr[] = {
+ MS5611_INIT_OSR(0x50, 600, 256),
+ MS5611_INIT_OSR(0x52, 1170, 512),
+ MS5611_INIT_OSR(0x54, 2280, 1024),
+ MS5611_INIT_OSR(0x56, 4540, 2048),
+ MS5611_INIT_OSR(0x58, 9040, 4096)
+};
+
+static const char ms5611_show_osr[] = "256 512 1024 2048 4096";
+
+static IIO_CONST_ATTR(oversampling_ratio_available, ms5611_show_osr);
+
+static struct attribute *ms5611_attributes[] = {
+ &iio_const_attr_oversampling_ratio_available.dev_attr.attr,
+ NULL,
+};
+
+static const struct attribute_group ms5611_attribute_group = {
+ .attrs = ms5611_attributes,
+};
+
static bool ms5611_prom_is_valid(u16 *prom, size_t len)
{
int i, j;
@@ -239,11 +272,70 @@ static int ms5611_read_raw(struct iio_dev *indio_dev,
default:
return -EINVAL;
}
+ case IIO_CHAN_INFO_OVERSAMPLING_RATIO:
+ if (chan->type != IIO_TEMP && chan->type != IIO_PRESSURE)
+ break;
+ mutex_lock(&st->lock);
+ if (chan->type == IIO_TEMP)
+ *val = (int)st->temp_osr->rate;
+ else
+ *val = (int)st->pressure_osr->rate;
+ mutex_unlock(&st->lock);
+ return IIO_VAL_INT;
}
return -EINVAL;
}
+static const struct ms5611_osr *ms5611_find_osr(int rate,
+ const struct ms5611_osr *osr,
+ size_t count)
+{
+ unsigned int r;
+
+ for (r = 0; r < count; r++)
+ if ((unsigned short)rate == osr[r].rate)
+ break;
+ if (r >= count)
+ return NULL;
+ return &osr[r];
+}
+
+static int ms5611_write_raw(struct iio_dev *indio_dev,
+ struct iio_chan_spec const *chan,
+ int val, int val2, long mask)
+{
+ struct ms5611_state *st = iio_priv(indio_dev);
+ const struct ms5611_osr *osr = NULL;
+
+ if (mask != IIO_CHAN_INFO_OVERSAMPLING_RATIO)
+ return -EINVAL;
+
+ if (chan->type == IIO_TEMP)
+ osr = ms5611_find_osr(val, ms5611_avail_temp_osr,
+ ARRAY_SIZE(ms5611_avail_temp_osr));
+ else if (chan->type == IIO_PRESSURE)
+ osr = ms5611_find_osr(val, ms5611_avail_pressure_osr,
+ ARRAY_SIZE(ms5611_avail_pressure_osr));
+ if (!osr)
+ return -EINVAL;
+
+ mutex_lock(&st->lock);
+
+ if (iio_buffer_enabled(indio_dev)) {
+ mutex_unlock(&st->lock);
+ return -EBUSY;
+ }
+
+ if (chan->type == IIO_TEMP)
+ st->temp_osr = osr;
+ else
+ st->pressure_osr = osr;
+
+ mutex_unlock(&st->lock);
+ return 0;
+}
+
static const unsigned long ms5611_scan_masks[] = {0x3, 0};
static struct ms5611_chip_info chip_info_tbl[] = {
@@ -259,7 +351,8 @@ static const struct iio_chan_spec ms5611_channels[] = {
{
.type = IIO_PRESSURE,
.info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED) |
- BIT(IIO_CHAN_INFO_SCALE),
+ BIT(IIO_CHAN_INFO_SCALE) |
+ BIT(IIO_CHAN_INFO_OVERSAMPLING_RATIO),
.scan_index = 0,
.scan_type = {
.sign = 's',
@@ -271,7 +364,8 @@ static const struct iio_chan_spec ms5611_channels[] = {
{
.type = IIO_TEMP,
.info_mask_separate = BIT(IIO_CHAN_INFO_PROCESSED) |
- BIT(IIO_CHAN_INFO_SCALE),
+ BIT(IIO_CHAN_INFO_SCALE) |
+ BIT(IIO_CHAN_INFO_OVERSAMPLING_RATIO),
.scan_index = 1,
.scan_type = {
.sign = 's',
@@ -285,40 +379,68 @@ static const struct iio_chan_spec ms5611_channels[] = {
static const struct iio_info ms5611_info = {
.read_raw = &ms5611_read_raw,
+ .write_raw = &ms5611_write_raw,
+ .attrs = &ms5611_attribute_group,
.driver_module = THIS_MODULE,
};
static int ms5611_init(struct iio_dev *indio_dev)
{
int ret;
- struct regulator *vdd = devm_regulator_get(indio_dev->dev.parent,
- "vdd");
+ struct ms5611_state *st = iio_priv(indio_dev);
/* Enable attached regulator if any. */
- if (!IS_ERR(vdd)) {
- ret = regulator_enable(vdd);
+ st->vdd = devm_regulator_get(indio_dev->dev.parent, "vdd");
+ if (!IS_ERR(st->vdd)) {
+ ret = regulator_enable(st->vdd);
if (ret) {
dev_err(indio_dev->dev.parent,
- "failed to enable Vdd supply: %d\n", ret);
+ "failed to enable Vdd supply: %d\n", ret);
return ret;
}
+ } else {
+ ret = PTR_ERR(st->vdd);
+ if (ret != -ENODEV)
+ return ret;
}
ret = ms5611_reset(indio_dev);
if (ret < 0)
- return ret;
+ goto err_regulator_disable;
- return ms5611_read_prom(indio_dev);
+ ret = ms5611_read_prom(indio_dev);
+ if (ret < 0)
+ goto err_regulator_disable;
+
+ return 0;
+
+err_regulator_disable:
+ if (!IS_ERR_OR_NULL(st->vdd))
+ regulator_disable(st->vdd);
+ return ret;
+}
+
+static void ms5611_fini(const struct iio_dev *indio_dev)
+{
+ const struct ms5611_state *st = iio_priv(indio_dev);
+
+ if (!IS_ERR_OR_NULL(st->vdd))
+ regulator_disable(st->vdd);
}
int ms5611_probe(struct iio_dev *indio_dev, struct device *dev,
- const char *name, int type)
+ const char *name, int type)
{
int ret;
struct ms5611_state *st = iio_priv(indio_dev);
mutex_init(&st->lock);
st->chip_info = &chip_info_tbl[type];
+ st->temp_osr =
+ &ms5611_avail_temp_osr[ARRAY_SIZE(ms5611_avail_temp_osr) - 1];
+ st->pressure_osr =
+ &ms5611_avail_pressure_osr[ARRAY_SIZE(ms5611_avail_pressure_osr)
+ - 1];
indio_dev->dev.parent = dev;
indio_dev->name = name;
indio_dev->info = &ms5611_info;
@@ -335,7 +457,7 @@ int ms5611_probe(struct iio_dev *indio_dev, struct device *dev,
ms5611_trigger_handler, NULL);
if (ret < 0) {
dev_err(dev, "iio triggered buffer setup failed\n");
- return ret;
+ goto err_fini;
}
ret = iio_device_register(indio_dev);
@@ -348,7 +470,8 @@ int ms5611_probe(struct iio_dev *indio_dev, struct device *dev,
err_buffer_cleanup:
iio_triggered_buffer_cleanup(indio_dev);
-
+err_fini:
+ ms5611_fini(indio_dev);
return ret;
}
EXPORT_SYMBOL(ms5611_probe);
@@ -357,6 +480,7 @@ int ms5611_remove(struct iio_dev *indio_dev)
{
iio_device_unregister(indio_dev);
iio_triggered_buffer_cleanup(indio_dev);
+ ms5611_fini(indio_dev);
return 0;
}
diff --git a/drivers/iio/pressure/ms5611_i2c.c b/drivers/iio/pressure/ms5611_i2c.c
index 7f6fc8eee922..55fb5fc0b6ea 100644
--- a/drivers/iio/pressure/ms5611_i2c.c
+++ b/drivers/iio/pressure/ms5611_i2c.c
@@ -17,6 +17,7 @@
#include <linux/delay.h>
#include <linux/i2c.h>
#include <linux/module.h>
+#include <linux/of_device.h>
#include "ms5611.h"
@@ -62,23 +63,23 @@ static int ms5611_i2c_read_adc_temp_and_pressure(struct device *dev,
{
int ret;
struct ms5611_state *st = iio_priv(dev_to_iio_dev(dev));
+ const struct ms5611_osr *osr = st->temp_osr;
- ret = i2c_smbus_write_byte(st->client, MS5611_START_TEMP_CONV);
+ ret = i2c_smbus_write_byte(st->client, osr->cmd);
if (ret < 0)
return ret;
- usleep_range(MS5611_CONV_TIME_MIN, MS5611_CONV_TIME_MAX);
-
+ usleep_range(osr->conv_usec, osr->conv_usec + (osr->conv_usec / 10UL));
ret = ms5611_i2c_read_adc(st, temp);
if (ret < 0)
return ret;
- ret = i2c_smbus_write_byte(st->client, MS5611_START_PRESSURE_CONV);
+ osr = st->pressure_osr;
+ ret = i2c_smbus_write_byte(st->client, osr->cmd);
if (ret < 0)
return ret;
- usleep_range(MS5611_CONV_TIME_MIN, MS5611_CONV_TIME_MAX);
-
+ usleep_range(osr->conv_usec, osr->conv_usec + (osr->conv_usec / 10UL));
return ms5611_i2c_read_adc(st, pressure);
}
@@ -113,6 +114,17 @@ static int ms5611_i2c_remove(struct i2c_client *client)
return ms5611_remove(i2c_get_clientdata(client));
}
+#if defined(CONFIG_OF)
+static const struct of_device_id ms5611_i2c_matches[] = {
+ { .compatible = "meas,ms5611" },
+ { .compatible = "ms5611" },
+ { .compatible = "meas,ms5607" },
+ { .compatible = "ms5607" },
+ { }
+};
+MODULE_DEVICE_TABLE(of, ms5611_i2c_matches);
+#endif
+
static const struct i2c_device_id ms5611_id[] = {
{ "ms5611", MS5611 },
{ "ms5607", MS5607 },
@@ -123,6 +135,7 @@ MODULE_DEVICE_TABLE(i2c, ms5611_id);
static struct i2c_driver ms5611_driver = {
.driver = {
.name = "ms5611",
+ .of_match_table = of_match_ptr(ms5611_i2c_matches)
},
.id_table = ms5611_id,
.probe = ms5611_i2c_probe,
diff --git a/drivers/iio/pressure/ms5611_spi.c b/drivers/iio/pressure/ms5611_spi.c
index 5cc009e85f0e..932e05001e1a 100644
--- a/drivers/iio/pressure/ms5611_spi.c
+++ b/drivers/iio/pressure/ms5611_spi.c
@@ -12,6 +12,7 @@
#include <linux/delay.h>
#include <linux/module.h>
#include <linux/spi/spi.h>
+#include <linux/of_device.h>
#include "ms5611.h"
@@ -55,28 +56,29 @@ static int ms5611_spi_read_adc(struct device *dev, s32 *val)
static int ms5611_spi_read_adc_temp_and_pressure(struct device *dev,
s32 *temp, s32 *pressure)
{
- u8 cmd;
int ret;
struct ms5611_state *st = iio_priv(dev_to_iio_dev(dev));
+ const struct ms5611_osr *osr = st->temp_osr;
- cmd = MS5611_START_TEMP_CONV;
- ret = spi_write_then_read(st->client, &cmd, 1, NULL, 0);
+ /*
+ * Warning: &osr->cmd MUST be aligned on a word boundary since it is
+ * used as the second (void *) argument of spi_write_then_read().
+ */
+ ret = spi_write_then_read(st->client, &osr->cmd, 1, NULL, 0);
if (ret < 0)
return ret;
- usleep_range(MS5611_CONV_TIME_MIN, MS5611_CONV_TIME_MAX);
-
+ usleep_range(osr->conv_usec, osr->conv_usec + (osr->conv_usec / 10UL));
ret = ms5611_spi_read_adc(dev, temp);
if (ret < 0)
return ret;
- cmd = MS5611_START_PRESSURE_CONV;
- ret = spi_write_then_read(st->client, &cmd, 1, NULL, 0);
+ osr = st->pressure_osr;
+ ret = spi_write_then_read(st->client, &osr->cmd, 1, NULL, 0);
if (ret < 0)
return ret;
- usleep_range(MS5611_CONV_TIME_MIN, MS5611_CONV_TIME_MAX);
-
+ usleep_range(osr->conv_usec, osr->conv_usec + (osr->conv_usec / 10UL));
return ms5611_spi_read_adc(dev, pressure);
}
@@ -106,7 +108,7 @@ static int ms5611_spi_probe(struct spi_device *spi)
st->client = spi;
return ms5611_probe(indio_dev, &spi->dev, spi_get_device_id(spi)->name,
- spi_get_device_id(spi)->driver_data);
+ spi_get_device_id(spi)->driver_data);
}
static int ms5611_spi_remove(struct spi_device *spi)
@@ -114,6 +116,17 @@ static int ms5611_spi_remove(struct spi_device *spi)
return ms5611_remove(spi_get_drvdata(spi));
}
+#if defined(CONFIG_OF)
+static const struct of_device_id ms5611_spi_matches[] = {
+ { .compatible = "meas,ms5611" },
+ { .compatible = "ms5611" },
+ { .compatible = "meas,ms5607" },
+ { .compatible = "ms5607" },
+ { }
+};
+MODULE_DEVICE_TABLE(of, ms5611_spi_matches);
+#endif
+
static const struct spi_device_id ms5611_id[] = {
{ "ms5611", MS5611 },
{ "ms5607", MS5607 },
@@ -124,6 +137,7 @@ MODULE_DEVICE_TABLE(spi, ms5611_id);
static struct spi_driver ms5611_driver = {
.driver = {
.name = "ms5611",
+ .of_match_table = of_match_ptr(ms5611_spi_matches)
},
.id_table = ms5611_id,
.probe = ms5611_spi_probe,
diff --git a/drivers/iio/pressure/st_pressure_core.c b/drivers/iio/pressure/st_pressure_core.c
index 172393ad34af..9e9b72a8f18f 100644
--- a/drivers/iio/pressure/st_pressure_core.c
+++ b/drivers/iio/pressure/st_pressure_core.c
@@ -64,6 +64,8 @@
#define ST_PRESS_LPS331AP_DRDY_IRQ_INT2_MASK 0x20
#define ST_PRESS_LPS331AP_IHL_IRQ_ADDR 0x22
#define ST_PRESS_LPS331AP_IHL_IRQ_MASK 0x80
+#define ST_PRESS_LPS331AP_OD_IRQ_ADDR 0x22
+#define ST_PRESS_LPS331AP_OD_IRQ_MASK 0x40
#define ST_PRESS_LPS331AP_MULTIREAD_BIT true
#define ST_PRESS_LPS331AP_TEMP_OFFSET 42500
@@ -104,6 +106,8 @@
#define ST_PRESS_LPS25H_DRDY_IRQ_INT2_MASK 0x10
#define ST_PRESS_LPS25H_IHL_IRQ_ADDR 0x22
#define ST_PRESS_LPS25H_IHL_IRQ_MASK 0x80
+#define ST_PRESS_LPS25H_OD_IRQ_ADDR 0x22
+#define ST_PRESS_LPS25H_OD_IRQ_MASK 0x40
#define ST_PRESS_LPS25H_MULTIREAD_BIT true
#define ST_PRESS_LPS25H_TEMP_OFFSET 42500
#define ST_PRESS_LPS25H_OUT_XL_ADDR 0x28
@@ -226,6 +230,9 @@ static const struct st_sensor_settings st_press_sensors_settings[] = {
.mask_int2 = ST_PRESS_LPS331AP_DRDY_IRQ_INT2_MASK,
.addr_ihl = ST_PRESS_LPS331AP_IHL_IRQ_ADDR,
.mask_ihl = ST_PRESS_LPS331AP_IHL_IRQ_MASK,
+ .addr_od = ST_PRESS_LPS331AP_OD_IRQ_ADDR,
+ .mask_od = ST_PRESS_LPS331AP_OD_IRQ_MASK,
+ .addr_stat_drdy = ST_SENSORS_DEFAULT_STAT_ADDR,
},
.multi_read_bit = ST_PRESS_LPS331AP_MULTIREAD_BIT,
.bootime = 2,
@@ -312,6 +319,9 @@ static const struct st_sensor_settings st_press_sensors_settings[] = {
.mask_int2 = ST_PRESS_LPS25H_DRDY_IRQ_INT2_MASK,
.addr_ihl = ST_PRESS_LPS25H_IHL_IRQ_ADDR,
.mask_ihl = ST_PRESS_LPS25H_IHL_IRQ_MASK,
+ .addr_od = ST_PRESS_LPS25H_OD_IRQ_ADDR,
+ .mask_od = ST_PRESS_LPS25H_OD_IRQ_MASK,
+ .addr_stat_drdy = ST_SENSORS_DEFAULT_STAT_ADDR,
},
.multi_read_bit = ST_PRESS_LPS25H_MULTIREAD_BIT,
.bootime = 2,
diff --git a/drivers/infiniband/Kconfig b/drivers/infiniband/Kconfig
index 6425c0e5d18a..2137adfbd8c3 100644
--- a/drivers/infiniband/Kconfig
+++ b/drivers/infiniband/Kconfig
@@ -85,4 +85,6 @@ source "drivers/infiniband/ulp/isert/Kconfig"
source "drivers/infiniband/sw/rdmavt/Kconfig"
+source "drivers/infiniband/hw/hfi1/Kconfig"
+
endif # INFINIBAND
diff --git a/drivers/infiniband/core/Makefile b/drivers/infiniband/core/Makefile
index f818538a7f4e..edaae9f9853c 100644
--- a/drivers/infiniband/core/Makefile
+++ b/drivers/infiniband/core/Makefile
@@ -1,23 +1,19 @@
infiniband-$(CONFIG_INFINIBAND_ADDR_TRANS) := rdma_cm.o
user_access-$(CONFIG_INFINIBAND_ADDR_TRANS) := rdma_ucm.o
-obj-$(CONFIG_INFINIBAND) += ib_core.o ib_mad.o ib_sa.o \
- ib_cm.o iw_cm.o ib_addr.o \
+obj-$(CONFIG_INFINIBAND) += ib_core.o ib_cm.o iw_cm.o \
$(infiniband-y)
obj-$(CONFIG_INFINIBAND_USER_MAD) += ib_umad.o
obj-$(CONFIG_INFINIBAND_USER_ACCESS) += ib_uverbs.o ib_ucm.o \
$(user_access-y)
-ib_core-y := packer.o ud_header.o verbs.o cq.o sysfs.o \
+ib_core-y := packer.o ud_header.o verbs.o cq.o rw.o sysfs.o \
device.o fmr_pool.o cache.o netlink.o \
- roce_gid_mgmt.o
+ roce_gid_mgmt.o mr_pool.o addr.o sa_query.o \
+ multicast.o mad.o smi.o agent.o mad_rmpp.o
ib_core-$(CONFIG_INFINIBAND_USER_MEM) += umem.o
ib_core-$(CONFIG_INFINIBAND_ON_DEMAND_PAGING) += umem_odp.o umem_rbtree.o
-ib_mad-y := mad.o smi.o agent.o mad_rmpp.o
-
-ib_sa-y := sa_query.o multicast.o
-
ib_cm-y := cm.o
iw_cm-y := iwcm.o iwpm_util.o iwpm_msg.o
@@ -28,8 +24,6 @@ rdma_cm-$(CONFIG_INFINIBAND_ADDR_TRANS_CONFIGFS) += cma_configfs.o
rdma_ucm-y := ucma.o
-ib_addr-y := addr.o
-
ib_umad-y := user_mad.o
ib_ucm-y := ucm.o
diff --git a/drivers/infiniband/core/addr.c b/drivers/infiniband/core/addr.c
index 337353d86cfa..1374541a4528 100644
--- a/drivers/infiniband/core/addr.c
+++ b/drivers/infiniband/core/addr.c
@@ -46,10 +46,10 @@
#include <net/ip6_route.h>
#include <rdma/ib_addr.h>
#include <rdma/ib.h>
+#include <rdma/rdma_netlink.h>
+#include <net/netlink.h>
-MODULE_AUTHOR("Sean Hefty");
-MODULE_DESCRIPTION("IB Address Translation");
-MODULE_LICENSE("Dual BSD/GPL");
+#include "core_priv.h"
struct addr_req {
struct list_head list;
@@ -62,8 +62,11 @@ struct addr_req {
struct rdma_dev_addr *addr, void *context);
unsigned long timeout;
int status;
+ u32 seq;
};
+static atomic_t ib_nl_addr_request_seq = ATOMIC_INIT(0);
+
static void process_req(struct work_struct *work);
static DEFINE_MUTEX(lock);
@@ -71,6 +74,126 @@ static LIST_HEAD(req_list);
static DECLARE_DELAYED_WORK(work, process_req);
static struct workqueue_struct *addr_wq;
+static const struct nla_policy ib_nl_addr_policy[LS_NLA_TYPE_MAX] = {
+ [LS_NLA_TYPE_DGID] = {.type = NLA_BINARY,
+ .len = sizeof(struct rdma_nla_ls_gid)},
+};
+
+static inline bool ib_nl_is_good_ip_resp(const struct nlmsghdr *nlh)
+{
+ struct nlattr *tb[LS_NLA_TYPE_MAX] = {};
+ int ret;
+
+ if (nlh->nlmsg_flags & RDMA_NL_LS_F_ERR)
+ return false;
+
+ ret = nla_parse(tb, LS_NLA_TYPE_MAX - 1, nlmsg_data(nlh),
+ nlmsg_len(nlh), ib_nl_addr_policy);
+ if (ret)
+ return false;
+
+ return true;
+}
+
+static void ib_nl_process_good_ip_rsep(const struct nlmsghdr *nlh)
+{
+ const struct nlattr *head, *curr;
+ union ib_gid gid;
+ struct addr_req *req;
+ int len, rem;
+ int found = 0;
+
+ head = (const struct nlattr *)nlmsg_data(nlh);
+ len = nlmsg_len(nlh);
+
+ nla_for_each_attr(curr, head, len, rem) {
+ if (curr->nla_type == LS_NLA_TYPE_DGID)
+ memcpy(&gid, nla_data(curr), nla_len(curr));
+ }
+
+ mutex_lock(&lock);
+ list_for_each_entry(req, &req_list, list) {
+ if (nlh->nlmsg_seq != req->seq)
+ continue;
+ /* We set the DGID part, the rest was set earlier */
+ rdma_addr_set_dgid(req->addr, &gid);
+ req->status = 0;
+ found = 1;
+ break;
+ }
+ mutex_unlock(&lock);
+
+ if (!found)
+ pr_info("Couldn't find request waiting for DGID: %pI6\n",
+ &gid);
+}
+
+int ib_nl_handle_ip_res_resp(struct sk_buff *skb,
+ struct netlink_callback *cb)
+{
+ const struct nlmsghdr *nlh = (struct nlmsghdr *)cb->nlh;
+
+ if ((nlh->nlmsg_flags & NLM_F_REQUEST) ||
+ !(NETLINK_CB(skb).sk) ||
+ !netlink_capable(skb, CAP_NET_ADMIN))
+ return -EPERM;
+
+ if (ib_nl_is_good_ip_resp(nlh))
+ ib_nl_process_good_ip_rsep(nlh);
+
+ return skb->len;
+}
+
+static int ib_nl_ip_send_msg(struct rdma_dev_addr *dev_addr,
+ const void *daddr,
+ u32 seq, u16 family)
+{
+ struct sk_buff *skb = NULL;
+ struct nlmsghdr *nlh;
+ struct rdma_ls_ip_resolve_header *header;
+ void *data;
+ size_t size;
+ int attrtype;
+ int len;
+
+ if (family == AF_INET) {
+ size = sizeof(struct in_addr);
+ attrtype = RDMA_NLA_F_MANDATORY | LS_NLA_TYPE_IPV4;
+ } else {
+ size = sizeof(struct in6_addr);
+ attrtype = RDMA_NLA_F_MANDATORY | LS_NLA_TYPE_IPV6;
+ }
+
+ len = nla_total_size(sizeof(size));
+ len += NLMSG_ALIGN(sizeof(*header));
+
+ skb = nlmsg_new(len, GFP_KERNEL);
+ if (!skb)
+ return -ENOMEM;
+
+ data = ibnl_put_msg(skb, &nlh, seq, 0, RDMA_NL_LS,
+ RDMA_NL_LS_OP_IP_RESOLVE, NLM_F_REQUEST);
+ if (!data) {
+ nlmsg_free(skb);
+ return -ENODATA;
+ }
+
+ /* Construct the family header first */
+ header = (struct rdma_ls_ip_resolve_header *)
+ skb_put(skb, NLMSG_ALIGN(sizeof(*header)));
+ header->ifindex = dev_addr->bound_dev_if;
+ nla_put(skb, attrtype, size, daddr);
+
+ /* Repair the nlmsg header length */
+ nlmsg_end(skb, nlh);
+ ibnl_multicast(skb, nlh, RDMA_NL_GROUP_LS, GFP_KERNEL);
+
+ /* Make the request retry, so that once the response arrives from
+ * userspace we will have something to work with.
+ */
+ return -ENODATA;
+}
+
int rdma_addr_size(struct sockaddr *addr)
{
switch (addr->sa_family) {
@@ -199,6 +322,17 @@ static void queue_req(struct addr_req *req)
mutex_unlock(&lock);
}
+static int ib_nl_fetch_ha(struct dst_entry *dst, struct rdma_dev_addr *dev_addr,
+ const void *daddr, u32 seq, u16 family)
+{
+ if (ibnl_chk_listeners(RDMA_NL_GROUP_LS))
+ return -EADDRNOTAVAIL;
+
+ /* We fill in what we can, the response will fill the rest */
+ rdma_copy_addr(dev_addr, dst->dev, NULL);
+ return ib_nl_ip_send_msg(dev_addr, daddr, seq, family);
+}
+
static int dst_fetch_ha(struct dst_entry *dst, struct rdma_dev_addr *dev_addr,
const void *daddr)
{
@@ -223,6 +357,39 @@ static int dst_fetch_ha(struct dst_entry *dst, struct rdma_dev_addr *dev_addr,
return ret;
}
+static bool has_gateway(struct dst_entry *dst, sa_family_t family)
+{
+ struct rtable *rt;
+ struct rt6_info *rt6;
+
+ if (family == AF_INET) {
+ rt = container_of(dst, struct rtable, dst);
+ return rt->rt_uses_gateway;
+ }
+
+ rt6 = container_of(dst, struct rt6_info, dst);
+ return rt6->rt6i_flags & RTF_GATEWAY;
+}
+
+static int fetch_ha(struct dst_entry *dst, struct rdma_dev_addr *dev_addr,
+ const struct sockaddr *dst_in, u32 seq)
+{
+ const struct sockaddr_in *dst_in4 =
+ (const struct sockaddr_in *)dst_in;
+ const struct sockaddr_in6 *dst_in6 =
+ (const struct sockaddr_in6 *)dst_in;
+ const void *daddr = (dst_in->sa_family == AF_INET) ?
+ (const void *)&dst_in4->sin_addr.s_addr :
+ (const void *)&dst_in6->sin6_addr;
+ sa_family_t family = dst_in->sa_family;
+
+ /* Gateway + ARPHRD_INFINIBAND -> IB router */
+ if (has_gateway(dst, family) && dst->dev->type == ARPHRD_INFINIBAND)
+ return ib_nl_fetch_ha(dst, dev_addr, daddr, seq, family);
+ else
+ return dst_fetch_ha(dst, dev_addr, daddr);
+}
+
static int addr4_resolve(struct sockaddr_in *src_in,
const struct sockaddr_in *dst_in,
struct rdma_dev_addr *addr,
@@ -246,10 +413,11 @@ static int addr4_resolve(struct sockaddr_in *src_in,
src_in->sin_family = AF_INET;
src_in->sin_addr.s_addr = fl4.saddr;
- /* If there's a gateway, we're definitely in RoCE v2 (as RoCE v1 isn't
- * routable) and we could set the network type accordingly.
+ /* If there's a gateway and the device type is not ARPHRD_INFINIBAND,
+ * we're definitely in RoCE v2 (as RoCE v1 isn't routable), so set the
+ * network type accordingly.
*/
- if (rt->rt_uses_gateway)
+ if (rt->rt_uses_gateway && rt->dst.dev->type != ARPHRD_INFINIBAND)
addr->network = RDMA_NETWORK_IPV4;
addr->hoplimit = ip4_dst_hoplimit(&rt->dst);
@@ -291,10 +459,12 @@ static int addr6_resolve(struct sockaddr_in6 *src_in,
src_in->sin6_addr = fl6.saddr;
}
- /* If there's a gateway, we're definitely in RoCE v2 (as RoCE v1 isn't
- * routable) and we could set the network type accordingly.
+ /* If there's a gateway and the device type is not ARPHRD_INFINIBAND,
+ * we're definitely in RoCE v2 (as RoCE v1 isn't routable), so set the
+ * network type accordingly.
*/
- if (rt->rt6i_flags & RTF_GATEWAY)
+ if (rt->rt6i_flags & RTF_GATEWAY &&
+ ip6_dst_idev(dst)->dev->type != ARPHRD_INFINIBAND)
addr->network = RDMA_NETWORK_IPV6;
addr->hoplimit = ip6_dst_hoplimit(dst);
@@ -317,7 +487,8 @@ static int addr6_resolve(struct sockaddr_in6 *src_in,
static int addr_resolve_neigh(struct dst_entry *dst,
const struct sockaddr *dst_in,
- struct rdma_dev_addr *addr)
+ struct rdma_dev_addr *addr,
+ u32 seq)
{
if (dst->dev->flags & IFF_LOOPBACK) {
int ret;
@@ -331,17 +502,8 @@ static int addr_resolve_neigh(struct dst_entry *dst,
}
/* If the device doesn't do ARP internally */
- if (!(dst->dev->flags & IFF_NOARP)) {
- const struct sockaddr_in *dst_in4 =
- (const struct sockaddr_in *)dst_in;
- const struct sockaddr_in6 *dst_in6 =
- (const struct sockaddr_in6 *)dst_in;
-
- return dst_fetch_ha(dst, addr,
- dst_in->sa_family == AF_INET ?
- (const void *)&dst_in4->sin_addr.s_addr :
- (const void *)&dst_in6->sin6_addr);
- }
+ if (!(dst->dev->flags & IFF_NOARP))
+ return fetch_ha(dst, addr, dst_in, seq);
return rdma_copy_addr(addr, dst->dev, NULL);
}
@@ -349,7 +511,8 @@ static int addr_resolve_neigh(struct dst_entry *dst,
static int addr_resolve(struct sockaddr *src_in,
const struct sockaddr *dst_in,
struct rdma_dev_addr *addr,
- bool resolve_neigh)
+ bool resolve_neigh,
+ u32 seq)
{
struct net_device *ndev;
struct dst_entry *dst;
@@ -366,7 +529,7 @@ static int addr_resolve(struct sockaddr *src_in,
return ret;
if (resolve_neigh)
- ret = addr_resolve_neigh(&rt->dst, dst_in, addr);
+ ret = addr_resolve_neigh(&rt->dst, dst_in, addr, seq);
ndev = rt->dst.dev;
dev_hold(ndev);
@@ -383,7 +546,7 @@ static int addr_resolve(struct sockaddr *src_in,
return ret;
if (resolve_neigh)
- ret = addr_resolve_neigh(dst, dst_in, addr);
+ ret = addr_resolve_neigh(dst, dst_in, addr, seq);
ndev = dst->dev;
dev_hold(ndev);
@@ -412,7 +575,7 @@ static void process_req(struct work_struct *work)
src_in = (struct sockaddr *) &req->src_addr;
dst_in = (struct sockaddr *) &req->dst_addr;
req->status = addr_resolve(src_in, dst_in, req->addr,
- true);
+ true, req->seq);
if (req->status && time_after_eq(jiffies, req->timeout))
req->status = -ETIMEDOUT;
else if (req->status == -ENODATA)
@@ -471,8 +634,9 @@ int rdma_resolve_ip(struct rdma_addr_client *client,
req->context = context;
req->client = client;
atomic_inc(&client->refcount);
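+ /* Tag the request so a netlink response from userspace can be matched back to it. */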
+ req->seq = (u32)atomic_inc_return(&ib_nl_addr_request_seq);
- req->status = addr_resolve(src_in, dst_in, addr, true);
+ req->status = addr_resolve(src_in, dst_in, addr, true, req->seq);
switch (req->status) {
case 0:
req->timeout = jiffies;
@@ -510,7 +674,7 @@ int rdma_resolve_ip_route(struct sockaddr *src_addr,
src_in->sa_family = dst_addr->sa_family;
}
- return addr_resolve(src_in, dst_addr, addr, false);
+ return addr_resolve(src_in, dst_addr, addr, false, 0);
}
EXPORT_SYMBOL(rdma_resolve_ip_route);
@@ -634,7 +798,7 @@ static struct notifier_block nb = {
.notifier_call = netevent_callback
};
-static int __init addr_init(void)
+int addr_init(void)
{
addr_wq = create_singlethread_workqueue("ib_addr");
if (!addr_wq)
@@ -642,15 +806,13 @@ static int __init addr_init(void)
register_netevent_notifier(&nb);
rdma_addr_register_client(&self);
+
return 0;
}
-static void __exit addr_cleanup(void)
+void addr_cleanup(void)
{
rdma_addr_unregister_client(&self);
unregister_netevent_notifier(&nb);
destroy_workqueue(addr_wq);
}
-
-module_init(addr_init);
-module_exit(addr_cleanup);
diff --git a/drivers/infiniband/core/cma.c b/drivers/infiniband/core/cma.c
index 93ab0ae97208..f0c91ba3178a 100644
--- a/drivers/infiniband/core/cma.c
+++ b/drivers/infiniband/core/cma.c
@@ -800,6 +800,7 @@ int rdma_create_qp(struct rdma_cm_id *id, struct ib_pd *pd,
if (id->device != pd->device)
return -EINVAL;
+ qp_init_attr->port_num = id->port_num;
qp = ib_create_qp(pd, qp_init_attr);
if (IS_ERR(qp))
return PTR_ERR(qp);
@@ -4294,7 +4295,8 @@ static int __init cma_init(void)
if (ret)
goto err;
- if (ibnl_add_client(RDMA_NL_RDMA_CM, RDMA_NL_RDMA_CM_NUM_OPS, cma_cb_table))
+ if (ibnl_add_client(RDMA_NL_RDMA_CM, ARRAY_SIZE(cma_cb_table),
+ cma_cb_table))
pr_warn("RDMA CMA: failed to add netlink callback\n");
cma_configfs_init();
diff --git a/drivers/infiniband/core/core_priv.h b/drivers/infiniband/core/core_priv.h
index eab32215756b..19d499dcab76 100644
--- a/drivers/infiniband/core/core_priv.h
+++ b/drivers/infiniband/core/core_priv.h
@@ -137,4 +137,20 @@ static inline bool rdma_is_upper_dev_rcu(struct net_device *dev,
return _upper == upper;
}
+int addr_init(void);
+void addr_cleanup(void);
+
+int ib_mad_init(void);
+void ib_mad_cleanup(void);
+
+int ib_sa_init(void);
+void ib_sa_cleanup(void);
+
+int ib_nl_handle_resolve_resp(struct sk_buff *skb,
+ struct netlink_callback *cb);
+int ib_nl_handle_set_timeout(struct sk_buff *skb,
+ struct netlink_callback *cb);
+int ib_nl_handle_ip_res_resp(struct sk_buff *skb,
+ struct netlink_callback *cb);
+
#endif /* _CORE_PRIV_H */
diff --git a/drivers/infiniband/core/device.c b/drivers/infiniband/core/device.c
index 10979844026a..5516fb070344 100644
--- a/drivers/infiniband/core/device.c
+++ b/drivers/infiniband/core/device.c
@@ -955,6 +955,29 @@ struct net_device *ib_get_net_dev_by_params(struct ib_device *dev,
}
EXPORT_SYMBOL(ib_get_net_dev_by_params);
+static struct ibnl_client_cbs ibnl_ls_cb_table[] = {
+ [RDMA_NL_LS_OP_RESOLVE] = {
+ .dump = ib_nl_handle_resolve_resp,
+ .module = THIS_MODULE },
+ [RDMA_NL_LS_OP_SET_TIMEOUT] = {
+ .dump = ib_nl_handle_set_timeout,
+ .module = THIS_MODULE },
+ [RDMA_NL_LS_OP_IP_RESOLVE] = {
+ .dump = ib_nl_handle_ip_res_resp,
+ .module = THIS_MODULE },
+};
+
+static int ib_add_ibnl_clients(void)
+{
+ return ibnl_add_client(RDMA_NL_LS, ARRAY_SIZE(ibnl_ls_cb_table),
+ ibnl_ls_cb_table);
+}
+
+static void ib_remove_ibnl_clients(void)
+{
+ ibnl_remove_client(RDMA_NL_LS);
+}
+
static int __init ib_core_init(void)
{
int ret;
@@ -983,10 +1006,41 @@ static int __init ib_core_init(void)
goto err_sysfs;
}
+ ret = addr_init();
+ if (ret) {
+ pr_warn("Couldn't init IB address resolution\n");
+ goto err_ibnl;
+ }
+
+ ret = ib_mad_init();
+ if (ret) {
+ pr_warn("Couldn't init IB MAD\n");
+ goto err_addr;
+ }
+
+ ret = ib_sa_init();
+ if (ret) {
+ pr_warn("Couldn't init SA\n");
+ goto err_mad;
+ }
+
+ if (ib_add_ibnl_clients()) {
+ pr_warn("Couldn't register ibnl clients\n");
+ goto err_sa;
+ }
+
ib_cache_setup();
return 0;
+err_sa:
+ ib_sa_cleanup();
+err_mad:
+ ib_mad_cleanup();
+err_addr:
+ addr_cleanup();
+err_ibnl:
+ ibnl_cleanup();
err_sysfs:
class_unregister(&ib_class);
err_comp:
@@ -999,6 +1053,10 @@ err:
static void __exit ib_core_cleanup(void)
{
ib_cache_cleanup();
+ ib_remove_ibnl_clients();
+ ib_sa_cleanup();
+ ib_mad_cleanup();
+ addr_cleanup();
ibnl_cleanup();
class_unregister(&ib_class);
destroy_workqueue(ib_comp_wq);
diff --git a/drivers/infiniband/core/iwcm.c b/drivers/infiniband/core/iwcm.c
index e28a160cdab0..f0572049d291 100644
--- a/drivers/infiniband/core/iwcm.c
+++ b/drivers/infiniband/core/iwcm.c
@@ -459,7 +459,7 @@ static void iw_cm_check_wildcard(struct sockaddr_storage *pm_addr,
if (pm_addr->ss_family == AF_INET) {
struct sockaddr_in *pm4_addr = (struct sockaddr_in *)pm_addr;
- if (pm4_addr->sin_addr.s_addr == INADDR_ANY) {
+ if (pm4_addr->sin_addr.s_addr == htonl(INADDR_ANY)) {
struct sockaddr_in *cm4_addr =
(struct sockaddr_in *)cm_addr;
struct sockaddr_in *cm4_outaddr =
@@ -1175,7 +1175,7 @@ static int __init iw_cm_init(void)
if (ret)
pr_err("iw_cm: couldn't init iwpm\n");
- ret = ibnl_add_client(RDMA_NL_IWCM, RDMA_NL_IWPM_NUM_OPS,
+ ret = ibnl_add_client(RDMA_NL_IWCM, ARRAY_SIZE(iwcm_nl_cb_table),
iwcm_nl_cb_table);
if (ret)
pr_err("iw_cm: couldn't register netlink callbacks\n");
diff --git a/drivers/infiniband/core/iwpm_util.c b/drivers/infiniband/core/iwpm_util.c
index 9b2bf2fb2b00..b65e06c560d7 100644
--- a/drivers/infiniband/core/iwpm_util.c
+++ b/drivers/infiniband/core/iwpm_util.c
@@ -634,6 +634,7 @@ static int send_nlmsg_done(struct sk_buff *skb, u8 nl_client, int iwpm_pid)
if (!(ibnl_put_msg(skb, &nlh, 0, 0, nl_client,
RDMA_NL_IWPM_MAPINFO, NLM_F_MULTI))) {
pr_warn("%s Unable to put NLMSG_DONE\n", __func__);
+ dev_kfree_skb(skb);
return -ENOMEM;
}
nlh->nlmsg_type = NLMSG_DONE;
diff --git a/drivers/infiniband/core/mad.c b/drivers/infiniband/core/mad.c
index 9fa5bf33f5a3..82fb511112da 100644
--- a/drivers/infiniband/core/mad.c
+++ b/drivers/infiniband/core/mad.c
@@ -47,11 +47,7 @@
#include "smi.h"
#include "opa_smi.h"
#include "agent.h"
-
-MODULE_LICENSE("Dual BSD/GPL");
-MODULE_DESCRIPTION("kernel IB MAD API");
-MODULE_AUTHOR("Hal Rosenstock");
-MODULE_AUTHOR("Sean Hefty");
+#include "core_priv.h"
static int mad_sendq_size = IB_MAD_QP_SEND_SIZE;
static int mad_recvq_size = IB_MAD_QP_RECV_SIZE;
@@ -3316,7 +3312,7 @@ static struct ib_client mad_client = {
.remove = ib_mad_remove_device
};
-static int __init ib_mad_init_module(void)
+int ib_mad_init(void)
{
mad_recvq_size = min(mad_recvq_size, IB_MAD_QP_MAX_SIZE);
mad_recvq_size = max(mad_recvq_size, IB_MAD_QP_MIN_SIZE);
@@ -3334,10 +3330,7 @@ static int __init ib_mad_init_module(void)
return 0;
}
-static void __exit ib_mad_cleanup_module(void)
+void ib_mad_cleanup(void)
{
ib_unregister_client(&mad_client);
}
-
-module_init(ib_mad_init_module);
-module_exit(ib_mad_cleanup_module);
diff --git a/drivers/infiniband/core/mr_pool.c b/drivers/infiniband/core/mr_pool.c
new file mode 100644
index 000000000000..49d478b2ea94
--- /dev/null
+++ b/drivers/infiniband/core/mr_pool.c
@@ -0,0 +1,86 @@
+/*
+ * Copyright (c) 2016 HGST, a Western Digital Company.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ */
+#include <rdma/ib_verbs.h>
+#include <rdma/mr_pool.h>
+
+struct ib_mr *ib_mr_pool_get(struct ib_qp *qp, struct list_head *list)
+{
+ struct ib_mr *mr;
+ unsigned long flags;
+
+ spin_lock_irqsave(&qp->mr_lock, flags);
+ mr = list_first_entry_or_null(list, struct ib_mr, qp_entry);
+ if (mr) {
+ list_del(&mr->qp_entry);
+ qp->mrs_used++;
+ }
+ spin_unlock_irqrestore(&qp->mr_lock, flags);
+
+ return mr;
+}
+EXPORT_SYMBOL(ib_mr_pool_get);
+
+void ib_mr_pool_put(struct ib_qp *qp, struct list_head *list, struct ib_mr *mr)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&qp->mr_lock, flags);
+ list_add(&mr->qp_entry, list);
+ qp->mrs_used--;
+ spin_unlock_irqrestore(&qp->mr_lock, flags);
+}
+EXPORT_SYMBOL(ib_mr_pool_put);
+
+int ib_mr_pool_init(struct ib_qp *qp, struct list_head *list, int nr,
+ enum ib_mr_type type, u32 max_num_sg)
+{
+ struct ib_mr *mr;
+ unsigned long flags;
+ int ret, i;
+
+ for (i = 0; i < nr; i++) {
+ mr = ib_alloc_mr(qp->pd, type, max_num_sg);
+ if (IS_ERR(mr)) {
+ ret = PTR_ERR(mr);
+ goto out;
+ }
+
+ spin_lock_irqsave(&qp->mr_lock, flags);
+ list_add_tail(&mr->qp_entry, list);
+ spin_unlock_irqrestore(&qp->mr_lock, flags);
+ }
+
+ return 0;
+out:
+ ib_mr_pool_destroy(qp, list);
+ return ret;
+}
+EXPORT_SYMBOL(ib_mr_pool_init);
+
+void ib_mr_pool_destroy(struct ib_qp *qp, struct list_head *list)
+{
+ struct ib_mr *mr;
+ unsigned long flags;
+
+ spin_lock_irqsave(&qp->mr_lock, flags);
+ while (!list_empty(list)) {
+ mr = list_first_entry(list, struct ib_mr, qp_entry);
+ list_del(&mr->qp_entry);
+
+ spin_unlock_irqrestore(&qp->mr_lock, flags);
+ ib_dereg_mr(mr);
+ spin_lock_irqsave(&qp->mr_lock, flags);
+ }
+ spin_unlock_irqrestore(&qp->mr_lock, flags);
+}
+EXPORT_SYMBOL(ib_mr_pool_destroy);
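The pool above is just a spinlock-protected free list hanging off the QP. A minimal consumer sketch, assuming the same includes as mr_pool.c; the function name, the pool size of 32 and the pages_per_mr argument are illustrative, not taken from the patch:

/* Sketch only: pre-populate, borrow and return MRs through the pool API. */
static int example_use_mr_pool(struct ib_qp *qp, u32 pages_per_mr)
{
	struct ib_mr *mr;
	int ret;

	/* Put 32 IB_MR_TYPE_MEM_REG MRs on the QP's rdma_mrs free list. */
	ret = ib_mr_pool_init(qp, &qp->rdma_mrs, 32, IB_MR_TYPE_MEM_REG,
			      pages_per_mr);
	if (ret)
		return ret;

	mr = ib_mr_pool_get(qp, &qp->rdma_mrs);	/* NULL when the pool is empty */
	if (mr) {
		/* ... post an IB_WR_REG_MR and use mr->rkey here ... */
		ib_mr_pool_put(qp, &qp->rdma_mrs, mr);
	}

	ib_mr_pool_destroy(qp, &qp->rdma_mrs);	/* ib_dereg_mr()s every pooled MR */
	return 0;
}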
diff --git a/drivers/infiniband/core/multicast.c b/drivers/infiniband/core/multicast.c
index 250937cb9a1a..a83ec28a147b 100644
--- a/drivers/infiniband/core/multicast.c
+++ b/drivers/infiniband/core/multicast.c
@@ -93,6 +93,18 @@ enum {
struct mcast_member;
+/*
+ * There are 4 types of join states:
+ * FullMember, NonMember, SendOnlyNonMember, SendOnlyFullMember.
+ */
+enum {
+ FULLMEMBER_JOIN,
+ NONMEMBER_JOIN,
+ SENDONLY_NONMEBER_JOIN,
+ SENDONLY_FULLMEMBER_JOIN,
+ NUM_JOIN_MEMBERSHIP_TYPES,
+};
+
struct mcast_group {
struct ib_sa_mcmember_rec rec;
struct rb_node node;
@@ -102,7 +114,7 @@ struct mcast_group {
struct list_head pending_list;
struct list_head active_list;
struct mcast_member *last_join;
- int members[3];
+ int members[NUM_JOIN_MEMBERSHIP_TYPES];
atomic_t refcount;
enum mcast_group_state state;
struct ib_sa_query *query;
@@ -220,8 +232,9 @@ static void queue_join(struct mcast_member *member)
}
/*
- * A multicast group has three types of members: full member, non member, and
- * send only member. We need to keep track of the number of members of each
+ * A multicast group has four types of members: full member, non member,
+ * sendonly non member and sendonly full member.
+ * We need to keep track of the number of members of each
* type based on their join state. Adjust the number of members that belong to
* the specified join states.
*/
@@ -229,7 +242,7 @@ static void adjust_membership(struct mcast_group *group, u8 join_state, int inc)
{
int i;
- for (i = 0; i < 3; i++, join_state >>= 1)
+ for (i = 0; i < NUM_JOIN_MEMBERSHIP_TYPES; i++, join_state >>= 1)
if (join_state & 0x1)
group->members[i] += inc;
}
@@ -245,7 +258,7 @@ static u8 get_leave_state(struct mcast_group *group)
u8 leave_state = 0;
int i;
- for (i = 0; i < 3; i++)
+ for (i = 0; i < NUM_JOIN_MEMBERSHIP_TYPES; i++)
if (!group->members[i])
leave_state |= (0x1 << i);
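As a concrete illustration of the bitmask walked above (the values are made up for the example), a join_state of 0x9 has bit 0 (FullMember) and bit 3 (SendOnlyFullMember) set, so adjust_membership() bumps two of the four counters and get_leave_state() later reports only the bits whose counters have dropped back to zero:

/* Illustration only: mirrors the loop in adjust_membership(). */
static void example_count_join(struct mcast_group *group)
{
	u8 join_state = (1 << FULLMEMBER_JOIN) | (1 << SENDONLY_FULLMEMBER_JOIN);
	int i;

	for (i = 0; i < NUM_JOIN_MEMBERSHIP_TYPES; i++, join_state >>= 1)
		if (join_state & 0x1)
			group->members[i]++;	/* touches slots 0 and 3 */
}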
diff --git a/drivers/infiniband/core/netlink.c b/drivers/infiniband/core/netlink.c
index d47df9356779..9b8c20c8209b 100644
--- a/drivers/infiniband/core/netlink.c
+++ b/drivers/infiniband/core/netlink.c
@@ -151,12 +151,11 @@ static int ibnl_rcv_msg(struct sk_buff *skb, struct nlmsghdr *nlh)
struct ibnl_client *client;
int type = nlh->nlmsg_type;
int index = RDMA_NL_GET_CLIENT(type);
- int op = RDMA_NL_GET_OP(type);
+ unsigned int op = RDMA_NL_GET_OP(type);
list_for_each_entry(client, &client_list, list) {
if (client->index == index) {
- if (op < 0 || op >= client->nops ||
- !client->cb_table[op].dump)
+ if (op >= client->nops || !client->cb_table[op].dump)
return -EINVAL;
/*
diff --git a/drivers/infiniband/core/rw.c b/drivers/infiniband/core/rw.c
new file mode 100644
index 000000000000..1eb9b1294a63
--- /dev/null
+++ b/drivers/infiniband/core/rw.c
@@ -0,0 +1,727 @@
+/*
+ * Copyright (c) 2016 HGST, a Western Digital Company.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
+ * more details.
+ */
+#include <linux/moduleparam.h>
+#include <linux/slab.h>
+#include <rdma/mr_pool.h>
+#include <rdma/rw.h>
+
+enum {
+ RDMA_RW_SINGLE_WR,
+ RDMA_RW_MULTI_WR,
+ RDMA_RW_MR,
+ RDMA_RW_SIG_MR,
+};
+
+static bool rdma_rw_force_mr;
+module_param_named(force_mr, rdma_rw_force_mr, bool, 0);
+MODULE_PARM_DESC(force_mr, "Force usage of MRs for RDMA READ/WRITE operations");
+
+/*
+ * Check if the device might use memory registration. This is currently only
+ * true for iWarp devices. In the future we can hopefully fine tune this based
+ * on HCA driver input.
+ */
+static inline bool rdma_rw_can_use_mr(struct ib_device *dev, u8 port_num)
+{
+ if (rdma_protocol_iwarp(dev, port_num))
+ return true;
+ if (unlikely(rdma_rw_force_mr))
+ return true;
+ return false;
+}
+
+/*
+ * Check if the device will use memory registration for this RW operation.
+ * We currently always use memory registrations for iWarp RDMA READs, and
+ * have a debug option to force usage of MRs.
+ *
+ * XXX: In the future we can hopefully fine tune this based on HCA driver
+ * input.
+ */
+static inline bool rdma_rw_io_needs_mr(struct ib_device *dev, u8 port_num,
+ enum dma_data_direction dir, int dma_nents)
+{
+ if (rdma_protocol_iwarp(dev, port_num) && dir == DMA_FROM_DEVICE)
+ return true;
+ if (unlikely(rdma_rw_force_mr))
+ return true;
+ return false;
+}
+
+static inline u32 rdma_rw_max_sge(struct ib_device *dev,
+ enum dma_data_direction dir)
+{
+ return dir == DMA_TO_DEVICE ?
+ dev->attrs.max_sge : dev->attrs.max_sge_rd;
+}
+
+static inline u32 rdma_rw_fr_page_list_len(struct ib_device *dev)
+{
+ /* arbitrary limit to avoid allocating gigantic resources */
+ return min_t(u32, dev->attrs.max_fast_reg_page_list_len, 256);
+}
+
+static int rdma_rw_init_one_mr(struct ib_qp *qp, u8 port_num,
+ struct rdma_rw_reg_ctx *reg, struct scatterlist *sg,
+ u32 sg_cnt, u32 offset)
+{
+ u32 pages_per_mr = rdma_rw_fr_page_list_len(qp->pd->device);
+ u32 nents = min(sg_cnt, pages_per_mr);
+ int count = 0, ret;
+
+ reg->mr = ib_mr_pool_get(qp, &qp->rdma_mrs);
+ if (!reg->mr)
+ return -EAGAIN;
+
+ if (reg->mr->need_inval) {
+ reg->inv_wr.opcode = IB_WR_LOCAL_INV;
+ reg->inv_wr.ex.invalidate_rkey = reg->mr->lkey;
+ reg->inv_wr.next = &reg->reg_wr.wr;
+ count++;
+ } else {
+ reg->inv_wr.next = NULL;
+ }
+
+ ret = ib_map_mr_sg(reg->mr, sg, nents, &offset, PAGE_SIZE);
+ if (ret < nents) {
+ ib_mr_pool_put(qp, &qp->rdma_mrs, reg->mr);
+ return -EINVAL;
+ }
+
+ reg->reg_wr.wr.opcode = IB_WR_REG_MR;
+ reg->reg_wr.mr = reg->mr;
+ reg->reg_wr.access = IB_ACCESS_LOCAL_WRITE;
+ if (rdma_protocol_iwarp(qp->device, port_num))
+ reg->reg_wr.access |= IB_ACCESS_REMOTE_WRITE;
+ count++;
+
+ reg->sge.addr = reg->mr->iova;
+ reg->sge.length = reg->mr->length;
+ return count;
+}
+
+static int rdma_rw_init_mr_wrs(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
+ u8 port_num, struct scatterlist *sg, u32 sg_cnt, u32 offset,
+ u64 remote_addr, u32 rkey, enum dma_data_direction dir)
+{
+ u32 pages_per_mr = rdma_rw_fr_page_list_len(qp->pd->device);
+ int i, j, ret = 0, count = 0;
+
+ ctx->nr_ops = (sg_cnt + pages_per_mr - 1) / pages_per_mr;
+ ctx->reg = kcalloc(ctx->nr_ops, sizeof(*ctx->reg), GFP_KERNEL);
+ if (!ctx->reg) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ for (i = 0; i < ctx->nr_ops; i++) {
+ struct rdma_rw_reg_ctx *prev = i ? &ctx->reg[i - 1] : NULL;
+ struct rdma_rw_reg_ctx *reg = &ctx->reg[i];
+ u32 nents = min(sg_cnt, pages_per_mr);
+
+ ret = rdma_rw_init_one_mr(qp, port_num, reg, sg, sg_cnt,
+ offset);
+ if (ret < 0)
+ goto out_free;
+ count += ret;
+
+ if (prev) {
+ if (reg->mr->need_inval)
+ prev->wr.wr.next = &reg->inv_wr;
+ else
+ prev->wr.wr.next = &reg->reg_wr.wr;
+ }
+
+ reg->reg_wr.wr.next = &reg->wr.wr;
+
+ reg->wr.wr.sg_list = &reg->sge;
+ reg->wr.wr.num_sge = 1;
+ reg->wr.remote_addr = remote_addr;
+ reg->wr.rkey = rkey;
+ if (dir == DMA_TO_DEVICE) {
+ reg->wr.wr.opcode = IB_WR_RDMA_WRITE;
+ } else if (!rdma_cap_read_inv(qp->device, port_num)) {
+ reg->wr.wr.opcode = IB_WR_RDMA_READ;
+ } else {
+ reg->wr.wr.opcode = IB_WR_RDMA_READ_WITH_INV;
+ reg->wr.wr.ex.invalidate_rkey = reg->mr->lkey;
+ }
+ count++;
+
+ remote_addr += reg->sge.length;
+ sg_cnt -= nents;
+ for (j = 0; j < nents; j++)
+ sg = sg_next(sg);
+ offset = 0;
+ }
+
+ ctx->type = RDMA_RW_MR;
+ return count;
+
+out_free:
+ while (--i >= 0)
+ ib_mr_pool_put(qp, &qp->rdma_mrs, ctx->reg[i].mr);
+ kfree(ctx->reg);
+out:
+ return ret;
+}
+
+static int rdma_rw_init_map_wrs(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
+ struct scatterlist *sg, u32 sg_cnt, u32 offset,
+ u64 remote_addr, u32 rkey, enum dma_data_direction dir)
+{
+ struct ib_device *dev = qp->pd->device;
+ u32 max_sge = rdma_rw_max_sge(dev, dir);
+ struct ib_sge *sge;
+ u32 total_len = 0, i, j;
+
+ ctx->nr_ops = DIV_ROUND_UP(sg_cnt, max_sge);
+
+ ctx->map.sges = sge = kcalloc(sg_cnt, sizeof(*sge), GFP_KERNEL);
+ if (!ctx->map.sges)
+ goto out;
+
+ ctx->map.wrs = kcalloc(ctx->nr_ops, sizeof(*ctx->map.wrs), GFP_KERNEL);
+ if (!ctx->map.wrs)
+ goto out_free_sges;
+
+ for (i = 0; i < ctx->nr_ops; i++) {
+ struct ib_rdma_wr *rdma_wr = &ctx->map.wrs[i];
+ u32 nr_sge = min(sg_cnt, max_sge);
+
+ if (dir == DMA_TO_DEVICE)
+ rdma_wr->wr.opcode = IB_WR_RDMA_WRITE;
+ else
+ rdma_wr->wr.opcode = IB_WR_RDMA_READ;
+ rdma_wr->remote_addr = remote_addr + total_len;
+ rdma_wr->rkey = rkey;
+ rdma_wr->wr.sg_list = sge;
+
+ for (j = 0; j < nr_sge; j++, sg = sg_next(sg)) {
+ rdma_wr->wr.num_sge++;
+
+ sge->addr = ib_sg_dma_address(dev, sg) + offset;
+ sge->length = ib_sg_dma_len(dev, sg) - offset;
+ sge->lkey = qp->pd->local_dma_lkey;
+
+ total_len += sge->length;
+ sge++;
+ sg_cnt--;
+ offset = 0;
+ }
+
+ if (i + 1 < ctx->nr_ops)
+ rdma_wr->wr.next = &ctx->map.wrs[i + 1].wr;
+ }
+
+ ctx->type = RDMA_RW_MULTI_WR;
+ return ctx->nr_ops;
+
+out_free_sges:
+ kfree(ctx->map.sges);
+out:
+ return -ENOMEM;
+}
+
+static int rdma_rw_init_single_wr(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
+ struct scatterlist *sg, u32 offset, u64 remote_addr, u32 rkey,
+ enum dma_data_direction dir)
+{
+ struct ib_device *dev = qp->pd->device;
+ struct ib_rdma_wr *rdma_wr = &ctx->single.wr;
+
+ ctx->nr_ops = 1;
+
+ ctx->single.sge.lkey = qp->pd->local_dma_lkey;
+ ctx->single.sge.addr = ib_sg_dma_address(dev, sg) + offset;
+ ctx->single.sge.length = ib_sg_dma_len(dev, sg) - offset;
+
+ memset(rdma_wr, 0, sizeof(*rdma_wr));
+ if (dir == DMA_TO_DEVICE)
+ rdma_wr->wr.opcode = IB_WR_RDMA_WRITE;
+ else
+ rdma_wr->wr.opcode = IB_WR_RDMA_READ;
+ rdma_wr->wr.sg_list = &ctx->single.sge;
+ rdma_wr->wr.num_sge = 1;
+ rdma_wr->remote_addr = remote_addr;
+ rdma_wr->rkey = rkey;
+
+ ctx->type = RDMA_RW_SINGLE_WR;
+ return 1;
+}
+
+/**
+ * rdma_rw_ctx_init - initialize an RDMA READ/WRITE context
+ * @ctx: context to initialize
+ * @qp: queue pair to operate on
+ * @port_num: port num to which the connection is bound
+ * @sg: scatterlist to READ/WRITE from/to
+ * @sg_cnt: number of entries in @sg
+ * @sg_offset: current byte offset into @sg
+ * @remote_addr: remote address to read/write (relative to @rkey)
+ * @rkey: remote key to operate on
+ * @dir: %DMA_TO_DEVICE for RDMA WRITE, %DMA_FROM_DEVICE for RDMA READ
+ *
+ * Returns the number of WQEs that will be needed on the send queue if
+ * successful, or a negative error code.
+ */
+int rdma_rw_ctx_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num,
+ struct scatterlist *sg, u32 sg_cnt, u32 sg_offset,
+ u64 remote_addr, u32 rkey, enum dma_data_direction dir)
+{
+ struct ib_device *dev = qp->pd->device;
+ int ret;
+
+ ret = ib_dma_map_sg(dev, sg, sg_cnt, dir);
+ if (!ret)
+ return -ENOMEM;
+ sg_cnt = ret;
+
+ /*
+ * Skip to the S/G entry that sg_offset falls into:
+ */
+ for (;;) {
+ u32 len = ib_sg_dma_len(dev, sg);
+
+ if (sg_offset < len)
+ break;
+
+ sg = sg_next(sg);
+ sg_offset -= len;
+ sg_cnt--;
+ }
+
+ ret = -EIO;
+ if (WARN_ON_ONCE(sg_cnt == 0))
+ goto out_unmap_sg;
+
+ if (rdma_rw_io_needs_mr(qp->device, port_num, dir, sg_cnt)) {
+ ret = rdma_rw_init_mr_wrs(ctx, qp, port_num, sg, sg_cnt,
+ sg_offset, remote_addr, rkey, dir);
+ } else if (sg_cnt > 1) {
+ ret = rdma_rw_init_map_wrs(ctx, qp, sg, sg_cnt, sg_offset,
+ remote_addr, rkey, dir);
+ } else {
+ ret = rdma_rw_init_single_wr(ctx, qp, sg, sg_offset,
+ remote_addr, rkey, dir);
+ }
+
+ if (ret < 0)
+ goto out_unmap_sg;
+ return ret;
+
+out_unmap_sg:
+ ib_dma_unmap_sg(dev, sg, sg_cnt, dir);
+ return ret;
+}
+EXPORT_SYMBOL(rdma_rw_ctx_init);
+
+/**
+ * rdma_rw_ctx_signature_init - initialize an RW context with signature offload
+ * @ctx: context to initialize
+ * @qp: queue pair to operate on
+ * @port_num: port num to which the connection is bound
+ * @sg: scatterlist to READ/WRITE from/to
+ * @sg_cnt: number of entries in @sg
+ * @prot_sg: scatterlist to READ/WRITE protection information from/to
+ * @prot_sg_cnt: number of entries in @prot_sg
+ * @sig_attrs: signature offloading algorithms
+ * @remote_addr: remote address to read/write (relative to @rkey)
+ * @rkey: remote key to operate on
+ * @dir: %DMA_TO_DEVICE for RDMA WRITE, %DMA_FROM_DEVICE for RDMA READ
+ *
+ * Returns the number of WQEs that will be needed on the send queue if
+ * successful, or a negative error code.
+ */
+int rdma_rw_ctx_signature_init(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
+ u8 port_num, struct scatterlist *sg, u32 sg_cnt,
+ struct scatterlist *prot_sg, u32 prot_sg_cnt,
+ struct ib_sig_attrs *sig_attrs,
+ u64 remote_addr, u32 rkey, enum dma_data_direction dir)
+{
+ struct ib_device *dev = qp->pd->device;
+ u32 pages_per_mr = rdma_rw_fr_page_list_len(qp->pd->device);
+ struct ib_rdma_wr *rdma_wr;
+ struct ib_send_wr *prev_wr = NULL;
+ int count = 0, ret;
+
+ if (sg_cnt > pages_per_mr || prot_sg_cnt > pages_per_mr) {
+ pr_err("SG count too large\n");
+ return -EINVAL;
+ }
+
+ ret = ib_dma_map_sg(dev, sg, sg_cnt, dir);
+ if (!ret)
+ return -ENOMEM;
+ sg_cnt = ret;
+
+ ret = ib_dma_map_sg(dev, prot_sg, prot_sg_cnt, dir);
+ if (!ret) {
+ ret = -ENOMEM;
+ goto out_unmap_sg;
+ }
+ prot_sg_cnt = ret;
+
+ ctx->type = RDMA_RW_SIG_MR;
+ ctx->nr_ops = 1;
+ ctx->sig = kcalloc(1, sizeof(*ctx->sig), GFP_KERNEL);
+ if (!ctx->sig) {
+ ret = -ENOMEM;
+ goto out_unmap_prot_sg;
+ }
+
+ ret = rdma_rw_init_one_mr(qp, port_num, &ctx->sig->data, sg, sg_cnt, 0);
+ if (ret < 0)
+ goto out_free_ctx;
+ count += ret;
+ prev_wr = &ctx->sig->data.reg_wr.wr;
+
+ if (prot_sg_cnt) {
+ ret = rdma_rw_init_one_mr(qp, port_num, &ctx->sig->prot,
+ prot_sg, prot_sg_cnt, 0);
+ if (ret < 0)
+ goto out_destroy_data_mr;
+ count += ret;
+
+ if (ctx->sig->prot.inv_wr.next)
+ prev_wr->next = &ctx->sig->prot.inv_wr;
+ else
+ prev_wr->next = &ctx->sig->prot.reg_wr.wr;
+ prev_wr = &ctx->sig->prot.reg_wr.wr;
+ } else {
+ ctx->sig->prot.mr = NULL;
+ }
+
+ ctx->sig->sig_mr = ib_mr_pool_get(qp, &qp->sig_mrs);
+ if (!ctx->sig->sig_mr) {
+ ret = -EAGAIN;
+ goto out_destroy_prot_mr;
+ }
+
+ if (ctx->sig->sig_mr->need_inval) {
+ memset(&ctx->sig->sig_inv_wr, 0, sizeof(ctx->sig->sig_inv_wr));
+
+ ctx->sig->sig_inv_wr.opcode = IB_WR_LOCAL_INV;
+ ctx->sig->sig_inv_wr.ex.invalidate_rkey = ctx->sig->sig_mr->rkey;
+
+ prev_wr->next = &ctx->sig->sig_inv_wr;
+ prev_wr = &ctx->sig->sig_inv_wr;
+ }
+
+ ctx->sig->sig_wr.wr.opcode = IB_WR_REG_SIG_MR;
+ ctx->sig->sig_wr.wr.wr_cqe = NULL;
+ ctx->sig->sig_wr.wr.sg_list = &ctx->sig->data.sge;
+ ctx->sig->sig_wr.wr.num_sge = 1;
+ ctx->sig->sig_wr.access_flags = IB_ACCESS_LOCAL_WRITE;
+ ctx->sig->sig_wr.sig_attrs = sig_attrs;
+ ctx->sig->sig_wr.sig_mr = ctx->sig->sig_mr;
+ if (prot_sg_cnt)
+ ctx->sig->sig_wr.prot = &ctx->sig->prot.sge;
+ prev_wr->next = &ctx->sig->sig_wr.wr;
+ prev_wr = &ctx->sig->sig_wr.wr;
+ count++;
+
+ ctx->sig->sig_sge.addr = 0;
+ ctx->sig->sig_sge.length = ctx->sig->data.sge.length;
+ if (sig_attrs->wire.sig_type != IB_SIG_TYPE_NONE)
+ ctx->sig->sig_sge.length += ctx->sig->prot.sge.length;
+
+ rdma_wr = &ctx->sig->data.wr;
+ rdma_wr->wr.sg_list = &ctx->sig->sig_sge;
+ rdma_wr->wr.num_sge = 1;
+ rdma_wr->remote_addr = remote_addr;
+ rdma_wr->rkey = rkey;
+ if (dir == DMA_TO_DEVICE)
+ rdma_wr->wr.opcode = IB_WR_RDMA_WRITE;
+ else
+ rdma_wr->wr.opcode = IB_WR_RDMA_READ;
+ prev_wr->next = &rdma_wr->wr;
+ prev_wr = &rdma_wr->wr;
+ count++;
+
+ return count;
+
+out_destroy_prot_mr:
+ if (prot_sg_cnt)
+ ib_mr_pool_put(qp, &qp->rdma_mrs, ctx->sig->prot.mr);
+out_destroy_data_mr:
+ ib_mr_pool_put(qp, &qp->rdma_mrs, ctx->sig->data.mr);
+out_free_ctx:
+ kfree(ctx->sig);
+out_unmap_prot_sg:
+ ib_dma_unmap_sg(dev, prot_sg, prot_sg_cnt, dir);
+out_unmap_sg:
+ ib_dma_unmap_sg(dev, sg, sg_cnt, dir);
+ return ret;
+}
+EXPORT_SYMBOL(rdma_rw_ctx_signature_init);
+
+/*
+ * Now that we are going to post the WRs we can update the lkey and need_inval
+ * state on the MRs. If we were doing this at init time, we would get double
+ * or missing invalidations if a context was initialized but not actually
+ * posted.
+ */
+static void rdma_rw_update_lkey(struct rdma_rw_reg_ctx *reg, bool need_inval)
+{
+ reg->mr->need_inval = need_inval;
+ ib_update_fast_reg_key(reg->mr, ib_inc_rkey(reg->mr->lkey));
+ reg->reg_wr.key = reg->mr->lkey;
+ reg->sge.lkey = reg->mr->lkey;
+}
+
+/**
+ * rdma_rw_ctx_wrs - return chain of WRs for an RDMA READ or WRITE operation
+ * @ctx: context to operate on
+ * @qp: queue pair to operate on
+ * @port_num: port num to which the connection is bound
+ * @cqe: completion queue entry for the last WR
+ * @chain_wr: WR to append to the posted chain
+ *
+ * Return the WR chain for the set of RDMA READ/WRITE operations described by
+ * @ctx, as well as any memory registration operations needed. If @chain_wr
+ * is non-NULL the WR it points to will be appended to the chain of WRs posted.
+ * If @chain_wr is not set @cqe must be set so that the caller gets a
+ * completion notification.
+ */
+struct ib_send_wr *rdma_rw_ctx_wrs(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
+ u8 port_num, struct ib_cqe *cqe, struct ib_send_wr *chain_wr)
+{
+ struct ib_send_wr *first_wr, *last_wr;
+ int i;
+
+ switch (ctx->type) {
+ case RDMA_RW_SIG_MR:
+ rdma_rw_update_lkey(&ctx->sig->data, true);
+ if (ctx->sig->prot.mr)
+ rdma_rw_update_lkey(&ctx->sig->prot, true);
+
+ ctx->sig->sig_mr->need_inval = true;
+ ib_update_fast_reg_key(ctx->sig->sig_mr,
+ ib_inc_rkey(ctx->sig->sig_mr->lkey));
+ ctx->sig->sig_sge.lkey = ctx->sig->sig_mr->lkey;
+
+ if (ctx->sig->data.inv_wr.next)
+ first_wr = &ctx->sig->data.inv_wr;
+ else
+ first_wr = &ctx->sig->data.reg_wr.wr;
+ last_wr = &ctx->sig->data.wr.wr;
+ break;
+ case RDMA_RW_MR:
+ for (i = 0; i < ctx->nr_ops; i++) {
+ rdma_rw_update_lkey(&ctx->reg[i],
+ ctx->reg[i].wr.wr.opcode !=
+ IB_WR_RDMA_READ_WITH_INV);
+ }
+
+ if (ctx->reg[0].inv_wr.next)
+ first_wr = &ctx->reg[0].inv_wr;
+ else
+ first_wr = &ctx->reg[0].reg_wr.wr;
+ last_wr = &ctx->reg[ctx->nr_ops - 1].wr.wr;
+ break;
+ case RDMA_RW_MULTI_WR:
+ first_wr = &ctx->map.wrs[0].wr;
+ last_wr = &ctx->map.wrs[ctx->nr_ops - 1].wr;
+ break;
+ case RDMA_RW_SINGLE_WR:
+ first_wr = &ctx->single.wr.wr;
+ last_wr = &ctx->single.wr.wr;
+ break;
+ default:
+ BUG();
+ }
+
+ if (chain_wr) {
+ last_wr->next = chain_wr;
+ } else {
+ last_wr->wr_cqe = cqe;
+ last_wr->send_flags |= IB_SEND_SIGNALED;
+ }
+
+ return first_wr;
+}
+EXPORT_SYMBOL(rdma_rw_ctx_wrs);
+
+/**
+ * rdma_rw_ctx_post - post an RDMA READ or RDMA WRITE operation
+ * @ctx: context to operate on
+ * @qp: queue pair to operate on
+ * @port_num: port num to which the connection is bound
+ * @cqe: completion queue entry for the last WR
+ * @chain_wr: WR to append to the posted chain
+ *
+ * Post the set of RDMA READ/WRITE operations described by @ctx, as well as
+ * any memory registration operations needed. If @chain_wr is non-NULL the
+ * WR it points to will be appended to the chain of WRs posted. If @chain_wr
+ * is not set @cqe must be set so that the caller gets a completion
+ * notification.
+ */
+int rdma_rw_ctx_post(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num,
+ struct ib_cqe *cqe, struct ib_send_wr *chain_wr)
+{
+ struct ib_send_wr *first_wr, *bad_wr;
+
+ first_wr = rdma_rw_ctx_wrs(ctx, qp, port_num, cqe, chain_wr);
+ return ib_post_send(qp, first_wr, &bad_wr);
+}
+EXPORT_SYMBOL(rdma_rw_ctx_post);
+
+/**
+ * rdma_rw_ctx_destroy - release all resources allocated by rdma_rw_ctx_init
+ * @ctx: context to release
+ * @qp: queue pair to operate on
+ * @port_num: port num to which the connection is bound
+ * @sg: scatterlist that was used for the READ/WRITE
+ * @sg_cnt: number of entries in @sg
+ * @dir: %DMA_TO_DEVICE for RDMA WRITE, %DMA_FROM_DEVICE for RDMA READ
+ */
+void rdma_rw_ctx_destroy(struct rdma_rw_ctx *ctx, struct ib_qp *qp, u8 port_num,
+ struct scatterlist *sg, u32 sg_cnt, enum dma_data_direction dir)
+{
+ int i;
+
+ switch (ctx->type) {
+ case RDMA_RW_MR:
+ for (i = 0; i < ctx->nr_ops; i++)
+ ib_mr_pool_put(qp, &qp->rdma_mrs, ctx->reg[i].mr);
+ kfree(ctx->reg);
+ break;
+ case RDMA_RW_MULTI_WR:
+ kfree(ctx->map.wrs);
+ kfree(ctx->map.sges);
+ break;
+ case RDMA_RW_SINGLE_WR:
+ break;
+ default:
+ BUG();
+ break;
+ }
+
+ ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir);
+}
+EXPORT_SYMBOL(rdma_rw_ctx_destroy);
+
+/**
+ * rdma_rw_ctx_destroy_signature - release all resources allocated by
+ * rdma_rw_ctx_signature_init
+ * @ctx: context to release
+ * @qp: queue pair to operate on
+ * @port_num: port num to which the connection is bound
+ * @sg: scatterlist that was used for the READ/WRITE
+ * @sg_cnt: number of entries in @sg
+ * @prot_sg: scatterlist that was used for the READ/WRITE of the PI
+ * @prot_sg_cnt: number of entries in @prot_sg
+ * @dir: %DMA_TO_DEVICE for RDMA WRITE, %DMA_FROM_DEVICE for RDMA READ
+ */
+void rdma_rw_ctx_destroy_signature(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
+ u8 port_num, struct scatterlist *sg, u32 sg_cnt,
+ struct scatterlist *prot_sg, u32 prot_sg_cnt,
+ enum dma_data_direction dir)
+{
+ if (WARN_ON_ONCE(ctx->type != RDMA_RW_SIG_MR))
+ return;
+
+ ib_mr_pool_put(qp, &qp->rdma_mrs, ctx->sig->data.mr);
+ ib_dma_unmap_sg(qp->pd->device, sg, sg_cnt, dir);
+
+ if (ctx->sig->prot.mr) {
+ ib_mr_pool_put(qp, &qp->rdma_mrs, ctx->sig->prot.mr);
+ ib_dma_unmap_sg(qp->pd->device, prot_sg, prot_sg_cnt, dir);
+ }
+
+ ib_mr_pool_put(qp, &qp->sig_mrs, ctx->sig->sig_mr);
+ kfree(ctx->sig);
+}
+EXPORT_SYMBOL(rdma_rw_ctx_destroy_signature);
+
+void rdma_rw_init_qp(struct ib_device *dev, struct ib_qp_init_attr *attr)
+{
+ u32 factor;
+
+ WARN_ON_ONCE(attr->port_num == 0);
+
+ /*
+ * Each context needs at least one RDMA READ or WRITE WR.
+ *
+ * For some hardware we might need more, eventually we should ask the
+ * HCA driver for a multiplier here.
+ */
+ factor = 1;
+
+ /*
+ * If the device needs MRs to perform RDMA READ or WRITE operations,
+ * we'll need two additional WRs per context for the registration and
+ * the invalidation.
+ */
+ if (attr->create_flags & IB_QP_CREATE_SIGNATURE_EN)
+ factor += 6; /* (inv + reg) * (data + prot + sig) */
+ else if (rdma_rw_can_use_mr(dev, attr->port_num))
+ factor += 2; /* inv + reg */
+
+ attr->cap.max_send_wr += factor * attr->cap.max_rdma_ctxs;
+
+ /*
+ * But maybe we were just too high in the sky and the device doesn't
+ * even support all we need, and we'll have to live with what we get..
+ */
+ attr->cap.max_send_wr =
+ min_t(u32, attr->cap.max_send_wr, dev->attrs.max_qp_wr);
+}
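+
+/*
+ * Worked example of the sizing above, with illustrative numbers rather than
+ * anything from this patch: an iWarp port with max_rdma_ctxs = 16 and
+ * max_send_wr = 64 has rdma_rw_can_use_mr() true, so factor = 1 + 2 = 3
+ * (RDMA WR + IB_WR_REG_MR + IB_WR_LOCAL_INV) and max_send_wr grows to
+ * 64 + 3 * 16 = 112 before the clamp to dev->attrs.max_qp_wr; with
+ * IB_QP_CREATE_SIGNATURE_EN the factor is 1 + 6 = 7.
+ */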
+
+int rdma_rw_init_mrs(struct ib_qp *qp, struct ib_qp_init_attr *attr)
+{
+ struct ib_device *dev = qp->pd->device;
+ u32 nr_mrs = 0, nr_sig_mrs = 0;
+ int ret = 0;
+
+ if (attr->create_flags & IB_QP_CREATE_SIGNATURE_EN) {
+ nr_sig_mrs = attr->cap.max_rdma_ctxs;
+ nr_mrs = attr->cap.max_rdma_ctxs * 2;
+ } else if (rdma_rw_can_use_mr(dev, attr->port_num)) {
+ nr_mrs = attr->cap.max_rdma_ctxs;
+ }
+
+ if (nr_mrs) {
+ ret = ib_mr_pool_init(qp, &qp->rdma_mrs, nr_mrs,
+ IB_MR_TYPE_MEM_REG,
+ rdma_rw_fr_page_list_len(dev));
+ if (ret) {
+ pr_err("%s: failed to allocated %d MRs\n",
+ __func__, nr_mrs);
+ return ret;
+ }
+ }
+
+ if (nr_sig_mrs) {
+ ret = ib_mr_pool_init(qp, &qp->sig_mrs, nr_sig_mrs,
+ IB_MR_TYPE_SIGNATURE, 2);
+ if (ret) {
+ pr_err("%s: failed to allocated %d SIG MRs\n",
+ __func__, nr_mrs);
+ goto out_free_rdma_mrs;
+ }
+ }
+
+ return 0;
+
+out_free_rdma_mrs:
+ ib_mr_pool_destroy(qp, &qp->rdma_mrs);
+ return ret;
+}
+
+void rdma_rw_cleanup_mrs(struct ib_qp *qp)
+{
+ ib_mr_pool_destroy(qp, &qp->sig_mrs);
+ ib_mr_pool_destroy(qp, &qp->rdma_mrs);
+}
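Taken together, rw.c turns RDMA READ/WRITE into a small init/post/destroy pattern for ULPs. A minimal sketch of a single RDMA READ follows; it assumes a connected QP created with cap.max_rdma_ctxs set, the usual rdma includes, and a caller-supplied ib_cqe whose done() handler tears the context down; example_rdma_read() itself is a hypothetical name:

#include <rdma/rw.h>

/* Sketch only: one RDMA READ from remote_addr/rkey into a local S/G list. */
static int example_rdma_read(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
		u8 port_num, struct scatterlist *sg, u32 sg_cnt,
		u64 remote_addr, u32 rkey, struct ib_cqe *done_cqe)
{
	int ret;

	/* Maps the S/G list and picks SINGLE_WR, MULTI_WR or MR mode. */
	ret = rdma_rw_ctx_init(ctx, qp, port_num, sg, sg_cnt, 0,
			       remote_addr, rkey, DMA_FROM_DEVICE);
	if (ret < 0)
		return ret;

	/* Chains any REG_MR/LOCAL_INV WRs in front of the READ and posts. */
	ret = rdma_rw_ctx_post(ctx, qp, port_num, done_cqe, NULL);
	if (ret)
		rdma_rw_ctx_destroy(ctx, qp, port_num, sg, sg_cnt,
				    DMA_FROM_DEVICE);

	/*
	 * ctx lives in the caller's request structure; on success the
	 * done_cqe handler calls rdma_rw_ctx_destroy() when the last WR
	 * completes.
	 */
	return ret;
}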
diff --git a/drivers/infiniband/core/sa_query.c b/drivers/infiniband/core/sa_query.c
index 8a09c0fb268d..e95538650dc6 100644
--- a/drivers/infiniband/core/sa_query.c
+++ b/drivers/infiniband/core/sa_query.c
@@ -53,10 +53,6 @@
#include "sa.h"
#include "core_priv.h"
-MODULE_AUTHOR("Roland Dreier");
-MODULE_DESCRIPTION("InfiniBand subnet administration query support");
-MODULE_LICENSE("Dual BSD/GPL");
-
#define IB_SA_LOCAL_SVC_TIMEOUT_MIN 100
#define IB_SA_LOCAL_SVC_TIMEOUT_DEFAULT 2000
#define IB_SA_LOCAL_SVC_TIMEOUT_MAX 200000
@@ -119,6 +115,12 @@ struct ib_sa_guidinfo_query {
struct ib_sa_query sa_query;
};
+struct ib_sa_classport_info_query {
+ void (*callback)(int, struct ib_class_port_info *, void *);
+ void *context;
+ struct ib_sa_query sa_query;
+};
+
struct ib_sa_mcmember_query {
void (*callback)(int, struct ib_sa_mcmember_rec *, void *);
void *context;
@@ -392,6 +394,82 @@ static const struct ib_field service_rec_table[] = {
.size_bits = 2*64 },
};
+#define CLASSPORTINFO_REC_FIELD(field) \
+ .struct_offset_bytes = offsetof(struct ib_class_port_info, field), \
+ .struct_size_bytes = sizeof((struct ib_class_port_info *)0)->field, \
+ .field_name = "ib_class_port_info:" #field
+
+static const struct ib_field classport_info_rec_table[] = {
+ { CLASSPORTINFO_REC_FIELD(base_version),
+ .offset_words = 0,
+ .offset_bits = 0,
+ .size_bits = 8 },
+ { CLASSPORTINFO_REC_FIELD(class_version),
+ .offset_words = 0,
+ .offset_bits = 8,
+ .size_bits = 8 },
+ { CLASSPORTINFO_REC_FIELD(capability_mask),
+ .offset_words = 0,
+ .offset_bits = 16,
+ .size_bits = 16 },
+ { CLASSPORTINFO_REC_FIELD(cap_mask2_resp_time),
+ .offset_words = 1,
+ .offset_bits = 0,
+ .size_bits = 32 },
+ { CLASSPORTINFO_REC_FIELD(redirect_gid),
+ .offset_words = 2,
+ .offset_bits = 0,
+ .size_bits = 128 },
+ { CLASSPORTINFO_REC_FIELD(redirect_tcslfl),
+ .offset_words = 6,
+ .offset_bits = 0,
+ .size_bits = 32 },
+ { CLASSPORTINFO_REC_FIELD(redirect_lid),
+ .offset_words = 7,
+ .offset_bits = 0,
+ .size_bits = 16 },
+ { CLASSPORTINFO_REC_FIELD(redirect_pkey),
+ .offset_words = 7,
+ .offset_bits = 16,
+ .size_bits = 16 },
+
+ { CLASSPORTINFO_REC_FIELD(redirect_qp),
+ .offset_words = 8,
+ .offset_bits = 0,
+ .size_bits = 32 },
+ { CLASSPORTINFO_REC_FIELD(redirect_qkey),
+ .offset_words = 9,
+ .offset_bits = 0,
+ .size_bits = 32 },
+
+ { CLASSPORTINFO_REC_FIELD(trap_gid),
+ .offset_words = 10,
+ .offset_bits = 0,
+ .size_bits = 128 },
+ { CLASSPORTINFO_REC_FIELD(trap_tcslfl),
+ .offset_words = 14,
+ .offset_bits = 0,
+ .size_bits = 32 },
+
+ { CLASSPORTINFO_REC_FIELD(trap_lid),
+ .offset_words = 15,
+ .offset_bits = 0,
+ .size_bits = 16 },
+ { CLASSPORTINFO_REC_FIELD(trap_pkey),
+ .offset_words = 15,
+ .offset_bits = 16,
+ .size_bits = 16 },
+
+ { CLASSPORTINFO_REC_FIELD(trap_hlqp),
+ .offset_words = 16,
+ .offset_bits = 0,
+ .size_bits = 32 },
+ { CLASSPORTINFO_REC_FIELD(trap_qkey),
+ .offset_words = 17,
+ .offset_bits = 0,
+ .size_bits = 32 },
+};
+
#define GUIDINFO_REC_FIELD(field) \
.struct_offset_bytes = offsetof(struct ib_sa_guidinfo_rec, field), \
.struct_size_bytes = sizeof((struct ib_sa_guidinfo_rec *) 0)->field, \
@@ -536,7 +614,7 @@ static int ib_nl_send_msg(struct ib_sa_query *query, gfp_t gfp_mask)
data = ibnl_put_msg(skb, &nlh, query->seq, 0, RDMA_NL_LS,
RDMA_NL_LS_OP_RESOLVE, NLM_F_REQUEST);
if (!data) {
- kfree_skb(skb);
+ nlmsg_free(skb);
return -EMSGSIZE;
}
@@ -705,8 +783,8 @@ static void ib_nl_request_timeout(struct work_struct *work)
spin_unlock_irqrestore(&ib_nl_request_lock, flags);
}
-static int ib_nl_handle_set_timeout(struct sk_buff *skb,
- struct netlink_callback *cb)
+int ib_nl_handle_set_timeout(struct sk_buff *skb,
+ struct netlink_callback *cb)
{
const struct nlmsghdr *nlh = (struct nlmsghdr *)cb->nlh;
int timeout, delta, abs_delta;
@@ -782,8 +860,8 @@ static inline int ib_nl_is_good_resolve_resp(const struct nlmsghdr *nlh)
return 1;
}
-static int ib_nl_handle_resolve_resp(struct sk_buff *skb,
- struct netlink_callback *cb)
+int ib_nl_handle_resolve_resp(struct sk_buff *skb,
+ struct netlink_callback *cb)
{
const struct nlmsghdr *nlh = (struct nlmsghdr *)cb->nlh;
unsigned long flags;
@@ -838,15 +916,6 @@ resp_out:
return skb->len;
}
-static struct ibnl_client_cbs ib_sa_cb_table[] = {
- [RDMA_NL_LS_OP_RESOLVE] = {
- .dump = ib_nl_handle_resolve_resp,
- .module = THIS_MODULE },
- [RDMA_NL_LS_OP_SET_TIMEOUT] = {
- .dump = ib_nl_handle_set_timeout,
- .module = THIS_MODULE },
-};
-
static void free_sm_ah(struct kref *kref)
{
struct ib_sa_sm_ah *sm_ah = container_of(kref, struct ib_sa_sm_ah, ref);
@@ -1645,6 +1714,97 @@ err1:
}
EXPORT_SYMBOL(ib_sa_guid_info_rec_query);
+/* Support GET of the SA ClassPortInfo record */
+static void ib_sa_classport_info_rec_callback(struct ib_sa_query *sa_query,
+ int status,
+ struct ib_sa_mad *mad)
+{
+ struct ib_sa_classport_info_query *query =
+ container_of(sa_query, struct ib_sa_classport_info_query, sa_query);
+
+ if (mad) {
+ struct ib_class_port_info rec;
+
+ ib_unpack(classport_info_rec_table,
+ ARRAY_SIZE(classport_info_rec_table),
+ mad->data, &rec);
+ query->callback(status, &rec, query->context);
+ } else {
+ query->callback(status, NULL, query->context);
+ }
+}
+
+static void ib_sa_portclass_info_rec_release(struct ib_sa_query *sa_query)
+{
+ kfree(container_of(sa_query, struct ib_sa_classport_info_query,
+ sa_query));
+}
+
+int ib_sa_classport_info_rec_query(struct ib_sa_client *client,
+ struct ib_device *device, u8 port_num,
+ int timeout_ms, gfp_t gfp_mask,
+ void (*callback)(int status,
+ struct ib_class_port_info *resp,
+ void *context),
+ void *context,
+ struct ib_sa_query **sa_query)
+{
+ struct ib_sa_classport_info_query *query;
+ struct ib_sa_device *sa_dev = ib_get_client_data(device, &sa_client);
+ struct ib_sa_port *port;
+ struct ib_mad_agent *agent;
+ struct ib_sa_mad *mad;
+ int ret;
+
+ if (!sa_dev)
+ return -ENODEV;
+
+ port = &sa_dev->port[port_num - sa_dev->start_port];
+ agent = port->agent;
+
+ query = kzalloc(sizeof(*query), gfp_mask);
+ if (!query)
+ return -ENOMEM;
+
+ query->sa_query.port = port;
+ ret = alloc_mad(&query->sa_query, gfp_mask);
+ if (ret)
+ goto err1;
+
+ ib_sa_client_get(client);
+ query->sa_query.client = client;
+ query->callback = callback;
+ query->context = context;
+
+ mad = query->sa_query.mad_buf->mad;
+ init_mad(mad, agent);
+
+ query->sa_query.callback = callback ? ib_sa_classport_info_rec_callback : NULL;
+
+ query->sa_query.release = ib_sa_portclass_info_rec_release;
+ /* support GET only */
+ mad->mad_hdr.method = IB_MGMT_METHOD_GET;
+ mad->mad_hdr.attr_id = cpu_to_be16(IB_SA_ATTR_CLASS_PORTINFO);
+ mad->sa_hdr.comp_mask = 0;
+ *sa_query = &query->sa_query;
+
+ ret = send_mad(&query->sa_query, timeout_ms, gfp_mask);
+ if (ret < 0)
+ goto err2;
+
+ return ret;
+
+err2:
+ *sa_query = NULL;
+ ib_sa_client_put(query->sa_query.client);
+ free_mad(&query->sa_query);
+
+err1:
+ kfree(query);
+ return ret;
+}
+EXPORT_SYMBOL(ib_sa_classport_info_rec_query);
+
static void send_handler(struct ib_mad_agent *agent,
struct ib_mad_send_wc *mad_send_wc)
{
@@ -1794,7 +1954,7 @@ static void ib_sa_remove_one(struct ib_device *device, void *client_data)
kfree(sa_dev);
}
-static int __init ib_sa_init(void)
+int ib_sa_init(void)
{
int ret;
@@ -1820,17 +1980,10 @@ static int __init ib_sa_init(void)
goto err3;
}
- if (ibnl_add_client(RDMA_NL_LS, RDMA_NL_LS_NUM_OPS,
- ib_sa_cb_table)) {
- pr_err("Failed to add netlink callback\n");
- ret = -EINVAL;
- goto err4;
- }
INIT_DELAYED_WORK(&ib_nl_timed_work, ib_nl_request_timeout);
return 0;
-err4:
- destroy_workqueue(ib_nl_wq);
+
err3:
mcast_cleanup();
err2:
@@ -1839,9 +1992,8 @@ err1:
return ret;
}
-static void __exit ib_sa_cleanup(void)
+void ib_sa_cleanup(void)
{
- ibnl_remove_client(RDMA_NL_LS);
cancel_delayed_work(&ib_nl_timed_work);
flush_workqueue(ib_nl_wq);
destroy_workqueue(ib_nl_wq);
@@ -1849,6 +2001,3 @@ static void __exit ib_sa_cleanup(void)
ib_unregister_client(&sa_client);
idr_destroy(&query_idr);
}
-
-module_init(ib_sa_init);
-module_exit(ib_sa_cleanup);
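A caller sketch for the ClassPortInfo query added above; the callback body, the 3000 ms timeout and the surrounding ib_sa_client registration are assumptions made for illustration, and the capability_mask access follows the field layout in classport_info_rec_table:

/* Sketch only: issue a one-shot SA ClassPortInfo GET on one port. */
static void example_cpi_done(int status, struct ib_class_port_info *cpi,
			     void *context)
{
	if (status || !cpi)
		pr_warn("classport info query failed: %d\n", status);
	else
		pr_info("SA capability_mask 0x%x\n",
			be16_to_cpu(cpi->capability_mask));
}

static int example_query_cpi(struct ib_sa_client *client,
			     struct ib_device *device, u8 port_num)
{
	struct ib_sa_query *query;
	int ret;

	/* Returns the query id (>= 0) on success or a negative errno. */
	ret = ib_sa_classport_info_rec_query(client, device, port_num,
					     3000, GFP_KERNEL,
					     example_cpi_done, NULL, &query);
	return ret < 0 ? ret : 0;
}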
diff --git a/drivers/infiniband/core/sysfs.c b/drivers/infiniband/core/sysfs.c
index 14606afbfaa8..5e573bb18660 100644
--- a/drivers/infiniband/core/sysfs.c
+++ b/drivers/infiniband/core/sysfs.c
@@ -56,8 +56,10 @@ struct ib_port {
struct gid_attr_group *gid_attr_group;
struct attribute_group gid_group;
struct attribute_group pkey_group;
- u8 port_num;
struct attribute_group *pma_table;
+ struct attribute_group *hw_stats_ag;
+ struct rdma_hw_stats *hw_stats;
+ u8 port_num;
};
struct port_attribute {
@@ -80,6 +82,18 @@ struct port_table_attribute {
__be16 attr_id;
};
+struct hw_stats_attribute {
+ struct attribute attr;
+ ssize_t (*show)(struct kobject *kobj,
+ struct attribute *attr, char *buf);
+ ssize_t (*store)(struct kobject *kobj,
+ struct attribute *attr,
+ const char *buf,
+ size_t count);
+ int index;
+ u8 port_num;
+};
+
static ssize_t port_attr_show(struct kobject *kobj,
struct attribute *attr, char *buf)
{
@@ -733,6 +747,212 @@ static struct attribute_group *get_counter_table(struct ib_device *dev,
return &pma_group;
}
+static int update_hw_stats(struct ib_device *dev, struct rdma_hw_stats *stats,
+ u8 port_num, int index)
+{
+ int ret;
+
+ if (time_is_after_eq_jiffies(stats->timestamp + stats->lifespan))
+ return 0;
+ ret = dev->get_hw_stats(dev, stats, port_num, index);
+ if (ret < 0)
+ return ret;
+ if (ret == stats->num_counters)
+ stats->timestamp = jiffies;
+
+ return 0;
+}
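+
+/*
+ * Illustration of the caching above, with an example lifespan rather than a
+ * value from this patch: if lifespan is msecs_to_jiffies(10), a burst of
+ * sysfs reads inside a 10 ms window is served from the cached
+ * stats->value[] and only the first read invokes the driver's
+ * get_hw_stats() hook.
+ */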
+
+static ssize_t print_hw_stat(struct rdma_hw_stats *stats, int index, char *buf)
+{
+ return sprintf(buf, "%llu\n", stats->value[index]);
+}
+
+static ssize_t show_hw_stats(struct kobject *kobj, struct attribute *attr,
+ char *buf)
+{
+ struct ib_device *dev;
+ struct ib_port *port;
+ struct hw_stats_attribute *hsa;
+ struct rdma_hw_stats *stats;
+ int ret;
+
+ hsa = container_of(attr, struct hw_stats_attribute, attr);
+ if (!hsa->port_num) {
+ dev = container_of((struct device *)kobj,
+ struct ib_device, dev);
+ stats = dev->hw_stats;
+ } else {
+ port = container_of(kobj, struct ib_port, kobj);
+ dev = port->ibdev;
+ stats = port->hw_stats;
+ }
+ ret = update_hw_stats(dev, stats, hsa->port_num, hsa->index);
+ if (ret)
+ return ret;
+ return print_hw_stat(stats, hsa->index, buf);
+}
+
+static ssize_t show_stats_lifespan(struct kobject *kobj,
+ struct attribute *attr,
+ char *buf)
+{
+ struct hw_stats_attribute *hsa;
+ int msecs;
+
+ hsa = container_of(attr, struct hw_stats_attribute, attr);
+ if (!hsa->port_num) {
+ struct ib_device *dev = container_of((struct device *)kobj,
+ struct ib_device, dev);
+ msecs = jiffies_to_msecs(dev->hw_stats->lifespan);
+ } else {
+ struct ib_port *p = container_of(kobj, struct ib_port, kobj);
+ msecs = jiffies_to_msecs(p->hw_stats->lifespan);
+ }
+ return sprintf(buf, "%d\n", msecs);
+}
+
+static ssize_t set_stats_lifespan(struct kobject *kobj,
+ struct attribute *attr,
+ const char *buf, size_t count)
+{
+ struct hw_stats_attribute *hsa;
+ int msecs;
+ int jiffies;
+ int ret;
+
+ ret = kstrtoint(buf, 10, &msecs);
+ if (ret)
+ return ret;
+ if (msecs < 0 || msecs > 10000)
+ return -EINVAL;
+ jiffies = msecs_to_jiffies(msecs);
+ hsa = container_of(attr, struct hw_stats_attribute, attr);
+ if (!hsa->port_num) {
+ struct ib_device *dev = container_of((struct device *)kobj,
+ struct ib_device, dev);
+ dev->hw_stats->lifespan = jiffies;
+ } else {
+ struct ib_port *p = container_of(kobj, struct ib_port, kobj);
+ p->hw_stats->lifespan = jiffies;
+ }
+ return count;
+}
+
+static void free_hsag(struct kobject *kobj, struct attribute_group *attr_group)
+{
+ struct attribute **attr;
+
+ sysfs_remove_group(kobj, attr_group);
+
+ for (attr = attr_group->attrs; *attr; attr++)
+ kfree(*attr);
+ kfree(attr_group);
+}
+
+static struct attribute *alloc_hsa(int index, u8 port_num, const char *name)
+{
+ struct hw_stats_attribute *hsa;
+
+ hsa = kmalloc(sizeof(*hsa), GFP_KERNEL);
+ if (!hsa)
+ return NULL;
+
+ hsa->attr.name = (char *)name;
+ hsa->attr.mode = S_IRUGO;
+ hsa->show = show_hw_stats;
+ hsa->store = NULL;
+ hsa->index = index;
+ hsa->port_num = port_num;
+
+ return &hsa->attr;
+}
+
+static struct attribute *alloc_hsa_lifespan(char *name, u8 port_num)
+{
+ struct hw_stats_attribute *hsa;
+
+ hsa = kmalloc(sizeof(*hsa), GFP_KERNEL);
+ if (!hsa)
+ return NULL;
+
+ hsa->attr.name = name;
+ hsa->attr.mode = S_IWUSR | S_IRUGO;
+ hsa->show = show_stats_lifespan;
+ hsa->store = set_stats_lifespan;
+ hsa->index = 0;
+ hsa->port_num = port_num;
+
+ return &hsa->attr;
+}
+
+static void setup_hw_stats(struct ib_device *device, struct ib_port *port,
+ u8 port_num)
+{
+ struct attribute_group *hsag = NULL;
+ struct rdma_hw_stats *stats;
+ int i = 0, ret;
+
+ stats = device->alloc_hw_stats(device, port_num);
+
+ if (!stats)
+ return;
+
+ if (!stats->names || stats->num_counters <= 0)
+ goto err;
+
+ hsag = kzalloc(sizeof(*hsag) +
+ // 1 extra for the lifespan config entry
+ sizeof(void *) * (stats->num_counters + 1),
+ GFP_KERNEL);
+ if (!hsag)
+ goto err;
+
+ ret = device->get_hw_stats(device, stats, port_num,
+ stats->num_counters);
+ if (ret != stats->num_counters)
+ goto err;
+
+ stats->timestamp = jiffies;
+
+ hsag->name = "hw_counters";
+ hsag->attrs = (void *)hsag + sizeof(*hsag);
+
+ for (i = 0; i < stats->num_counters; i++) {
+ hsag->attrs[i] = alloc_hsa(i, port_num, stats->names[i]);
+ if (!hsag->attrs[i])
+ goto err;
+ }
+
+ /* treat an error here as non-fatal */
+ hsag->attrs[i] = alloc_hsa_lifespan("lifespan", port_num);
+
+ if (port) {
+ struct kobject *kobj = &port->kobj;
+ ret = sysfs_create_group(kobj, hsag);
+ if (ret)
+ goto err;
+ port->hw_stats_ag = hsag;
+ port->hw_stats = stats;
+ } else {
+ struct kobject *kobj = &device->dev.kobj;
+ ret = sysfs_create_group(kobj, hsag);
+ if (ret)
+ goto err;
+ device->hw_stats_ag = hsag;
+ device->hw_stats = stats;
+ }
+
+ return;
+
+err:
+ kfree(stats);
+ if (hsag) {
+ for (; i >= 0; i--)
+ kfree(hsag->attrs[i]);
+ kfree(hsag);
+ }
+ return;
+}
+
static int add_port(struct ib_device *device, int port_num,
int (*port_callback)(struct ib_device *,
u8, struct kobject *))
@@ -835,6 +1055,14 @@ static int add_port(struct ib_device *device, int port_num,
goto err_remove_pkey;
}
+ /*
+ * If port == 0, it means we have only one port and the parent
+ * device, not this port device, should be the holder of the
+ * hw_counters
+ */
+ if (device->alloc_hw_stats && port_num)
+ setup_hw_stats(device, p, port_num);
+
list_add_tail(&p->kobj.entry, &device->port_list);
kobject_uevent(&p->kobj, KOBJ_ADD);
@@ -972,120 +1200,6 @@ static struct device_attribute *ib_class_attributes[] = {
&dev_attr_node_desc
};
-/* Show a given an attribute in the statistics group */
-static ssize_t show_protocol_stat(const struct device *device,
- struct device_attribute *attr, char *buf,
- unsigned offset)
-{
- struct ib_device *dev = container_of(device, struct ib_device, dev);
- union rdma_protocol_stats stats;
- ssize_t ret;
-
- ret = dev->get_protocol_stats(dev, &stats);
- if (ret)
- return ret;
-
- return sprintf(buf, "%llu\n",
- (unsigned long long) ((u64 *) &stats)[offset]);
-}
-
-/* generate a read-only iwarp statistics attribute */
-#define IW_STATS_ENTRY(name) \
-static ssize_t show_##name(struct device *device, \
- struct device_attribute *attr, char *buf) \
-{ \
- return show_protocol_stat(device, attr, buf, \
- offsetof(struct iw_protocol_stats, name) / \
- sizeof (u64)); \
-} \
-static DEVICE_ATTR(name, S_IRUGO, show_##name, NULL)
-
-IW_STATS_ENTRY(ipInReceives);
-IW_STATS_ENTRY(ipInHdrErrors);
-IW_STATS_ENTRY(ipInTooBigErrors);
-IW_STATS_ENTRY(ipInNoRoutes);
-IW_STATS_ENTRY(ipInAddrErrors);
-IW_STATS_ENTRY(ipInUnknownProtos);
-IW_STATS_ENTRY(ipInTruncatedPkts);
-IW_STATS_ENTRY(ipInDiscards);
-IW_STATS_ENTRY(ipInDelivers);
-IW_STATS_ENTRY(ipOutForwDatagrams);
-IW_STATS_ENTRY(ipOutRequests);
-IW_STATS_ENTRY(ipOutDiscards);
-IW_STATS_ENTRY(ipOutNoRoutes);
-IW_STATS_ENTRY(ipReasmTimeout);
-IW_STATS_ENTRY(ipReasmReqds);
-IW_STATS_ENTRY(ipReasmOKs);
-IW_STATS_ENTRY(ipReasmFails);
-IW_STATS_ENTRY(ipFragOKs);
-IW_STATS_ENTRY(ipFragFails);
-IW_STATS_ENTRY(ipFragCreates);
-IW_STATS_ENTRY(ipInMcastPkts);
-IW_STATS_ENTRY(ipOutMcastPkts);
-IW_STATS_ENTRY(ipInBcastPkts);
-IW_STATS_ENTRY(ipOutBcastPkts);
-IW_STATS_ENTRY(tcpRtoAlgorithm);
-IW_STATS_ENTRY(tcpRtoMin);
-IW_STATS_ENTRY(tcpRtoMax);
-IW_STATS_ENTRY(tcpMaxConn);
-IW_STATS_ENTRY(tcpActiveOpens);
-IW_STATS_ENTRY(tcpPassiveOpens);
-IW_STATS_ENTRY(tcpAttemptFails);
-IW_STATS_ENTRY(tcpEstabResets);
-IW_STATS_ENTRY(tcpCurrEstab);
-IW_STATS_ENTRY(tcpInSegs);
-IW_STATS_ENTRY(tcpOutSegs);
-IW_STATS_ENTRY(tcpRetransSegs);
-IW_STATS_ENTRY(tcpInErrs);
-IW_STATS_ENTRY(tcpOutRsts);
-
-static struct attribute *iw_proto_stats_attrs[] = {
- &dev_attr_ipInReceives.attr,
- &dev_attr_ipInHdrErrors.attr,
- &dev_attr_ipInTooBigErrors.attr,
- &dev_attr_ipInNoRoutes.attr,
- &dev_attr_ipInAddrErrors.attr,
- &dev_attr_ipInUnknownProtos.attr,
- &dev_attr_ipInTruncatedPkts.attr,
- &dev_attr_ipInDiscards.attr,
- &dev_attr_ipInDelivers.attr,
- &dev_attr_ipOutForwDatagrams.attr,
- &dev_attr_ipOutRequests.attr,
- &dev_attr_ipOutDiscards.attr,
- &dev_attr_ipOutNoRoutes.attr,
- &dev_attr_ipReasmTimeout.attr,
- &dev_attr_ipReasmReqds.attr,
- &dev_attr_ipReasmOKs.attr,
- &dev_attr_ipReasmFails.attr,
- &dev_attr_ipFragOKs.attr,
- &dev_attr_ipFragFails.attr,
- &dev_attr_ipFragCreates.attr,
- &dev_attr_ipInMcastPkts.attr,
- &dev_attr_ipOutMcastPkts.attr,
- &dev_attr_ipInBcastPkts.attr,
- &dev_attr_ipOutBcastPkts.attr,
- &dev_attr_tcpRtoAlgorithm.attr,
- &dev_attr_tcpRtoMin.attr,
- &dev_attr_tcpRtoMax.attr,
- &dev_attr_tcpMaxConn.attr,
- &dev_attr_tcpActiveOpens.attr,
- &dev_attr_tcpPassiveOpens.attr,
- &dev_attr_tcpAttemptFails.attr,
- &dev_attr_tcpEstabResets.attr,
- &dev_attr_tcpCurrEstab.attr,
- &dev_attr_tcpInSegs.attr,
- &dev_attr_tcpOutSegs.attr,
- &dev_attr_tcpRetransSegs.attr,
- &dev_attr_tcpInErrs.attr,
- &dev_attr_tcpOutRsts.attr,
- NULL
-};
-
-static struct attribute_group iw_stats_group = {
- .name = "proto_stats",
- .attrs = iw_proto_stats_attrs,
-};
-
static void free_port_list_attributes(struct ib_device *device)
{
struct kobject *p, *t;
@@ -1093,6 +1207,10 @@ static void free_port_list_attributes(struct ib_device *device)
list_for_each_entry_safe(p, t, &device->port_list, entry) {
struct ib_port *port = container_of(p, struct ib_port, kobj);
list_del(&p->entry);
+ if (port->hw_stats) {
+ kfree(port->hw_stats);
+ free_hsag(&port->kobj, port->hw_stats_ag);
+ }
sysfs_remove_group(p, port->pma_table);
sysfs_remove_group(p, &port->pkey_group);
sysfs_remove_group(p, &port->gid_group);
@@ -1149,11 +1267,8 @@ int ib_device_register_sysfs(struct ib_device *device,
}
}
- if (device->node_type == RDMA_NODE_RNIC && device->get_protocol_stats) {
- ret = sysfs_create_group(&class_dev->kobj, &iw_stats_group);
- if (ret)
- goto err_put;
- }
+ if (device->alloc_hw_stats)
+ setup_hw_stats(device, NULL, 0);
return 0;
@@ -1169,15 +1284,18 @@ err:
void ib_device_unregister_sysfs(struct ib_device *device)
{
- /* Hold kobject until ib_dealloc_device() */
- struct kobject *kobj_dev = kobject_get(&device->dev.kobj);
int i;
- if (device->node_type == RDMA_NODE_RNIC && device->get_protocol_stats)
- sysfs_remove_group(kobj_dev, &iw_stats_group);
+ /* Hold kobject until ib_dealloc_device() */
+ kobject_get(&device->dev.kobj);
free_port_list_attributes(device);
+ if (device->hw_stats) {
+ kfree(device->hw_stats);
+ free_hsag(&device->dev.kobj, device->hw_stats_ag);
+ }
+
for (i = 0; i < ARRAY_SIZE(ib_class_attributes); ++i)
device_remove_file(&device->dev, ib_class_attributes[i]);
diff --git a/drivers/infiniband/core/uverbs_cmd.c b/drivers/infiniband/core/uverbs_cmd.c
index 6fdc7ecdaca0..1a8babb8ee3c 100644
--- a/drivers/infiniband/core/uverbs_cmd.c
+++ b/drivers/infiniband/core/uverbs_cmd.c
@@ -1833,7 +1833,8 @@ static int create_qp(struct ib_uverbs_file *file,
if (attr.create_flags & ~(IB_QP_CREATE_BLOCK_MULTICAST_LOOPBACK |
IB_QP_CREATE_CROSS_CHANNEL |
IB_QP_CREATE_MANAGED_SEND |
- IB_QP_CREATE_MANAGED_RECV)) {
+ IB_QP_CREATE_MANAGED_RECV |
+ IB_QP_CREATE_SCATTER_FCS)) {
ret = -EINVAL;
goto err_put;
}
@@ -3088,8 +3089,7 @@ int ib_uverbs_ex_create_flow(struct ib_uverbs_file *file,
if (cmd.comp_mask)
return -EINVAL;
- if ((cmd.flow_attr.type == IB_FLOW_ATTR_SNIFFER &&
- !capable(CAP_NET_ADMIN)) || !capable(CAP_NET_RAW))
+ if (!capable(CAP_NET_RAW))
return -EPERM;
if (cmd.flow_attr.flags >= IB_FLOW_ATTR_FLAGS_RESERVED)
@@ -3655,6 +3655,11 @@ int ib_uverbs_ex_query_device(struct ib_uverbs_file *file,
resp.hca_core_clock = attr.hca_core_clock;
resp.response_length += sizeof(resp.hca_core_clock);
+ if (ucore->outlen < resp.response_length + sizeof(resp.device_cap_flags_ex))
+ goto end;
+
+ resp.device_cap_flags_ex = attr.device_cap_flags;
+ resp.response_length += sizeof(resp.device_cap_flags_ex);
end:
err = ib_copy_to_udata(ucore, &resp, resp.response_length);
return err;
diff --git a/drivers/infiniband/core/verbs.c b/drivers/infiniband/core/verbs.c
index b65b3541e732..1d7d4cf442e3 100644
--- a/drivers/infiniband/core/verbs.c
+++ b/drivers/infiniband/core/verbs.c
@@ -48,6 +48,7 @@
#include <rdma/ib_verbs.h>
#include <rdma/ib_cache.h>
#include <rdma/ib_addr.h>
+#include <rdma/rw.h>
#include "core_priv.h"
@@ -723,59 +724,89 @@ struct ib_qp *ib_open_qp(struct ib_xrcd *xrcd,
}
EXPORT_SYMBOL(ib_open_qp);
+static struct ib_qp *ib_create_xrc_qp(struct ib_qp *qp,
+ struct ib_qp_init_attr *qp_init_attr)
+{
+ struct ib_qp *real_qp = qp;
+
+ qp->event_handler = __ib_shared_qp_event_handler;
+ qp->qp_context = qp;
+ qp->pd = NULL;
+ qp->send_cq = qp->recv_cq = NULL;
+ qp->srq = NULL;
+ qp->xrcd = qp_init_attr->xrcd;
+ atomic_inc(&qp_init_attr->xrcd->usecnt);
+ INIT_LIST_HEAD(&qp->open_list);
+
+ qp = __ib_open_qp(real_qp, qp_init_attr->event_handler,
+ qp_init_attr->qp_context);
+ if (!IS_ERR(qp))
+ __ib_insert_xrcd_qp(qp_init_attr->xrcd, real_qp);
+ else
+ real_qp->device->destroy_qp(real_qp);
+ return qp;
+}
+
struct ib_qp *ib_create_qp(struct ib_pd *pd,
struct ib_qp_init_attr *qp_init_attr)
{
- struct ib_qp *qp, *real_qp;
- struct ib_device *device;
+ struct ib_device *device = pd ? pd->device : qp_init_attr->xrcd->device;
+ struct ib_qp *qp;
+ int ret;
+
+ /*
+ * If the caller is using the RDMA API, calculate the resources
+ * needed for the RDMA READ/WRITE operations.
+ *
+ * Note that these callers need to pass in a port number.
+ */
+ if (qp_init_attr->cap.max_rdma_ctxs)
+ rdma_rw_init_qp(device, qp_init_attr);
- device = pd ? pd->device : qp_init_attr->xrcd->device;
qp = device->create_qp(pd, qp_init_attr, NULL);
+ if (IS_ERR(qp))
+ return qp;
+
+ qp->device = device;
+ qp->real_qp = qp;
+ qp->uobject = NULL;
+ qp->qp_type = qp_init_attr->qp_type;
+
+ atomic_set(&qp->usecnt, 0);
+ qp->mrs_used = 0;
+ spin_lock_init(&qp->mr_lock);
+ INIT_LIST_HEAD(&qp->rdma_mrs);
+ INIT_LIST_HEAD(&qp->sig_mrs);
+
+ if (qp_init_attr->qp_type == IB_QPT_XRC_TGT)
+ return ib_create_xrc_qp(qp, qp_init_attr);
+
+ qp->event_handler = qp_init_attr->event_handler;
+ qp->qp_context = qp_init_attr->qp_context;
+ if (qp_init_attr->qp_type == IB_QPT_XRC_INI) {
+ qp->recv_cq = NULL;
+ qp->srq = NULL;
+ } else {
+ qp->recv_cq = qp_init_attr->recv_cq;
+ atomic_inc(&qp_init_attr->recv_cq->usecnt);
+ qp->srq = qp_init_attr->srq;
+ if (qp->srq)
+ atomic_inc(&qp_init_attr->srq->usecnt);
+ }
- if (!IS_ERR(qp)) {
- qp->device = device;
- qp->real_qp = qp;
- qp->uobject = NULL;
- qp->qp_type = qp_init_attr->qp_type;
-
- atomic_set(&qp->usecnt, 0);
- if (qp_init_attr->qp_type == IB_QPT_XRC_TGT) {
- qp->event_handler = __ib_shared_qp_event_handler;
- qp->qp_context = qp;
- qp->pd = NULL;
- qp->send_cq = qp->recv_cq = NULL;
- qp->srq = NULL;
- qp->xrcd = qp_init_attr->xrcd;
- atomic_inc(&qp_init_attr->xrcd->usecnt);
- INIT_LIST_HEAD(&qp->open_list);
-
- real_qp = qp;
- qp = __ib_open_qp(real_qp, qp_init_attr->event_handler,
- qp_init_attr->qp_context);
- if (!IS_ERR(qp))
- __ib_insert_xrcd_qp(qp_init_attr->xrcd, real_qp);
- else
- real_qp->device->destroy_qp(real_qp);
- } else {
- qp->event_handler = qp_init_attr->event_handler;
- qp->qp_context = qp_init_attr->qp_context;
- if (qp_init_attr->qp_type == IB_QPT_XRC_INI) {
- qp->recv_cq = NULL;
- qp->srq = NULL;
- } else {
- qp->recv_cq = qp_init_attr->recv_cq;
- atomic_inc(&qp_init_attr->recv_cq->usecnt);
- qp->srq = qp_init_attr->srq;
- if (qp->srq)
- atomic_inc(&qp_init_attr->srq->usecnt);
- }
+ qp->pd = pd;
+ qp->send_cq = qp_init_attr->send_cq;
+ qp->xrcd = NULL;
- qp->pd = pd;
- qp->send_cq = qp_init_attr->send_cq;
- qp->xrcd = NULL;
+ atomic_inc(&pd->usecnt);
+ atomic_inc(&qp_init_attr->send_cq->usecnt);
- atomic_inc(&pd->usecnt);
- atomic_inc(&qp_init_attr->send_cq->usecnt);
+ if (qp_init_attr->cap.max_rdma_ctxs) {
+ ret = rdma_rw_init_mrs(qp, qp_init_attr);
+ if (ret) {
+ pr_err("failed to init MR pool ret= %d\n", ret);
+ ib_destroy_qp(qp);
+ qp = ERR_PTR(ret);
}
}
@@ -1250,6 +1281,8 @@ int ib_destroy_qp(struct ib_qp *qp)
struct ib_srq *srq;
int ret;
+ WARN_ON_ONCE(qp->mrs_used > 0);
+
if (atomic_read(&qp->usecnt))
return -EBUSY;
@@ -1261,6 +1294,9 @@ int ib_destroy_qp(struct ib_qp *qp)
rcq = qp->recv_cq;
srq = qp->srq;
+ if (!qp->uobject)
+ rdma_rw_cleanup_mrs(qp);
+
ret = qp->device->destroy_qp(qp);
if (!ret) {
if (pd)
@@ -1343,6 +1379,7 @@ struct ib_mr *ib_get_dma_mr(struct ib_pd *pd, int mr_access_flags)
mr->pd = pd;
mr->uobject = NULL;
atomic_inc(&pd->usecnt);
+ mr->need_inval = false;
}
return mr;
@@ -1389,6 +1426,7 @@ struct ib_mr *ib_alloc_mr(struct ib_pd *pd,
mr->pd = pd;
mr->uobject = NULL;
atomic_inc(&pd->usecnt);
+ mr->need_inval = false;
}
return mr;
@@ -1597,6 +1635,7 @@ EXPORT_SYMBOL(ib_set_vf_guid);
* @mr: memory region
* @sg: dma mapped scatterlist
* @sg_nents: number of entries in sg
+ * @sg_offset: offset in bytes into sg
* @page_size: page vector desired page size
*
* Constraints:
@@ -1615,17 +1654,15 @@ EXPORT_SYMBOL(ib_set_vf_guid);
* After this completes successfully, the memory region
* is ready for registration.
*/
-int ib_map_mr_sg(struct ib_mr *mr,
- struct scatterlist *sg,
- int sg_nents,
- unsigned int page_size)
+int ib_map_mr_sg(struct ib_mr *mr, struct scatterlist *sg, int sg_nents,
+ unsigned int *sg_offset, unsigned int page_size)
{
if (unlikely(!mr->device->map_mr_sg))
return -ENOSYS;
mr->page_size = page_size;
- return mr->device->map_mr_sg(mr, sg, sg_nents);
+ return mr->device->map_mr_sg(mr, sg, sg_nents, sg_offset);
}
EXPORT_SYMBOL(ib_map_mr_sg);
@@ -1635,6 +1672,10 @@ EXPORT_SYMBOL(ib_map_mr_sg);
* @mr: memory region
* @sgl: dma mapped scatterlist
* @sg_nents: number of entries in sg
+ * @sg_offset_p: IN: start offset in bytes into sg
+ * OUT: offset in bytes for element n of the sg of the first
+ * byte that has not been processed where n is the return
+ * value of this function.
* @set_page: driver page assignment function pointer
*
* Core service helper for drivers to convert the largest
@@ -1645,23 +1686,26 @@ EXPORT_SYMBOL(ib_map_mr_sg);
* Returns the number of sg elements that were assigned to
* a page vector.
*/
-int ib_sg_to_pages(struct ib_mr *mr,
- struct scatterlist *sgl,
- int sg_nents,
- int (*set_page)(struct ib_mr *, u64))
+int ib_sg_to_pages(struct ib_mr *mr, struct scatterlist *sgl, int sg_nents,
+ unsigned int *sg_offset_p, int (*set_page)(struct ib_mr *, u64))
{
struct scatterlist *sg;
u64 last_end_dma_addr = 0;
+ unsigned int sg_offset = sg_offset_p ? *sg_offset_p : 0;
unsigned int last_page_off = 0;
u64 page_mask = ~((u64)mr->page_size - 1);
int i, ret;
- mr->iova = sg_dma_address(&sgl[0]);
+ if (unlikely(sg_nents <= 0 || sg_offset > sg_dma_len(&sgl[0])))
+ return -EINVAL;
+
+ mr->iova = sg_dma_address(&sgl[0]) + sg_offset;
mr->length = 0;
for_each_sg(sgl, sg, sg_nents, i) {
- u64 dma_addr = sg_dma_address(sg);
- unsigned int dma_len = sg_dma_len(sg);
+ u64 dma_addr = sg_dma_address(sg) + sg_offset;
+ u64 prev_addr = dma_addr;
+ unsigned int dma_len = sg_dma_len(sg) - sg_offset;
u64 end_dma_addr = dma_addr + dma_len;
u64 page_addr = dma_addr & page_mask;
@@ -1685,8 +1729,14 @@ int ib_sg_to_pages(struct ib_mr *mr,
do {
ret = set_page(mr, page_addr);
- if (unlikely(ret < 0))
- return i ? : ret;
+ if (unlikely(ret < 0)) {
+ sg_offset = prev_addr - sg_dma_address(sg);
+ mr->length += prev_addr - dma_addr;
+ if (sg_offset_p)
+ *sg_offset_p = sg_offset;
+ return i || sg_offset ? i : ret;
+ }
+ prev_addr = page_addr;
next_page:
page_addr += mr->page_size;
} while (page_addr < end_dma_addr);
@@ -1694,8 +1744,12 @@ next_page:
mr->length += dma_len;
last_end_dma_addr = end_dma_addr;
last_page_off = end_dma_addr & ~page_mask;
+
+ sg_offset = 0;
}
+ if (sg_offset_p)
+ *sg_offset_p = 0;
return i;
}
EXPORT_SYMBOL(ib_sg_to_pages);
diff --git a/drivers/infiniband/hw/Makefile b/drivers/infiniband/hw/Makefile
index c7ad0a4c8b15..c0c7cf8af3f4 100644
--- a/drivers/infiniband/hw/Makefile
+++ b/drivers/infiniband/hw/Makefile
@@ -8,3 +8,4 @@ obj-$(CONFIG_MLX5_INFINIBAND) += mlx5/
obj-$(CONFIG_INFINIBAND_NES) += nes/
obj-$(CONFIG_INFINIBAND_OCRDMA) += ocrdma/
obj-$(CONFIG_INFINIBAND_USNIC) += usnic/
+obj-$(CONFIG_INFINIBAND_HFI1) += hfi1/
diff --git a/drivers/infiniband/hw/cxgb3/cxio_hal.c b/drivers/infiniband/hw/cxgb3/cxio_hal.c
index de1c61b417d6..ada2e5009c86 100644
--- a/drivers/infiniband/hw/cxgb3/cxio_hal.c
+++ b/drivers/infiniband/hw/cxgb3/cxio_hal.c
@@ -327,7 +327,7 @@ int cxio_destroy_cq(struct cxio_rdev *rdev_p, struct t3_cq *cq)
kfree(cq->sw_queue);
dma_free_coherent(&(rdev_p->rnic_info.pdev->dev),
(1UL << (cq->size_log2))
- * sizeof(struct t3_cqe), cq->queue,
+ * sizeof(struct t3_cqe) + 1, cq->queue,
dma_unmap_addr(cq, mapping));
cxio_hal_put_cqid(rdev_p->rscp, cq->cqid);
return err;
diff --git a/drivers/infiniband/hw/cxgb3/iwch_cm.c b/drivers/infiniband/hw/cxgb3/iwch_cm.c
index d403231a4aff..3e8431b5cad7 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_cm.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_cm.c
@@ -367,7 +367,7 @@ static void arp_failure_discard(struct t3cdev *dev, struct sk_buff *skb)
*/
static void act_open_req_arp_failure(struct t3cdev *dev, struct sk_buff *skb)
{
- printk(KERN_ERR MOD "ARP failure duing connect\n");
+ printk(KERN_ERR MOD "ARP failure during connect\n");
kfree_skb(skb);
}
diff --git a/drivers/infiniband/hw/cxgb3/iwch_provider.c b/drivers/infiniband/hw/cxgb3/iwch_provider.c
index 3234a8be16f6..bb1a839d4d6d 100644
--- a/drivers/infiniband/hw/cxgb3/iwch_provider.c
+++ b/drivers/infiniband/hw/cxgb3/iwch_provider.c
@@ -783,15 +783,14 @@ static int iwch_set_page(struct ib_mr *ibmr, u64 addr)
return 0;
}
-static int iwch_map_mr_sg(struct ib_mr *ibmr,
- struct scatterlist *sg,
- int sg_nents)
+static int iwch_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
+ int sg_nents, unsigned int *sg_offset)
{
struct iwch_mr *mhp = to_iwch_mr(ibmr);
mhp->npages = 0;
- return ib_sg_to_pages(ibmr, sg, sg_nents, iwch_set_page);
+ return ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, iwch_set_page);
}
static int iwch_destroy_qp(struct ib_qp *ib_qp)
@@ -1219,59 +1218,119 @@ static ssize_t show_board(struct device *dev, struct device_attribute *attr,
iwch_dev->rdev.rnic_info.pdev->device);
}
-static int iwch_get_mib(struct ib_device *ibdev,
- union rdma_protocol_stats *stats)
+enum counters {
+ IPINRECEIVES,
+ IPINHDRERRORS,
+ IPINADDRERRORS,
+ IPINUNKNOWNPROTOS,
+ IPINDISCARDS,
+ IPINDELIVERS,
+ IPOUTREQUESTS,
+ IPOUTDISCARDS,
+ IPOUTNOROUTES,
+ IPREASMTIMEOUT,
+ IPREASMREQDS,
+ IPREASMOKS,
+ IPREASMFAILS,
+ TCPACTIVEOPENS,
+ TCPPASSIVEOPENS,
+ TCPATTEMPTFAILS,
+ TCPESTABRESETS,
+ TCPCURRESTAB,
+ TCPINSEGS,
+ TCPOUTSEGS,
+ TCPRETRANSSEGS,
+ TCPINERRS,
+ TCPOUTRSTS,
+ TCPRTOMIN,
+ TCPRTOMAX,
+ NR_COUNTERS
+};
+
+static const char * const names[] = {
+ [IPINRECEIVES] = "ipInReceives",
+ [IPINHDRERRORS] = "ipInHdrErrors",
+ [IPINADDRERRORS] = "ipInAddrErrors",
+ [IPINUNKNOWNPROTOS] = "ipInUnknownProtos",
+ [IPINDISCARDS] = "ipInDiscards",
+ [IPINDELIVERS] = "ipInDelivers",
+ [IPOUTREQUESTS] = "ipOutRequests",
+ [IPOUTDISCARDS] = "ipOutDiscards",
+ [IPOUTNOROUTES] = "ipOutNoRoutes",
+ [IPREASMTIMEOUT] = "ipReasmTimeout",
+ [IPREASMREQDS] = "ipReasmReqds",
+ [IPREASMOKS] = "ipReasmOKs",
+ [IPREASMFAILS] = "ipReasmFails",
+ [TCPACTIVEOPENS] = "tcpActiveOpens",
+ [TCPPASSIVEOPENS] = "tcpPassiveOpens",
+ [TCPATTEMPTFAILS] = "tcpAttemptFails",
+ [TCPESTABRESETS] = "tcpEstabResets",
+ [TCPCURRESTAB] = "tcpCurrEstab",
+ [TCPINSEGS] = "tcpInSegs",
+ [TCPOUTSEGS] = "tcpOutSegs",
+ [TCPRETRANSSEGS] = "tcpRetransSegs",
+ [TCPINERRS] = "tcpInErrs",
+ [TCPOUTRSTS] = "tcpOutRsts",
+ [TCPRTOMIN] = "tcpRtoMin",
+ [TCPRTOMAX] = "tcpRtoMax",
+};
+
+static struct rdma_hw_stats *iwch_alloc_stats(struct ib_device *ibdev,
+ u8 port_num)
+{
+ BUILD_BUG_ON(ARRAY_SIZE(names) != NR_COUNTERS);
+
+ /* Our driver only supports device level stats */
+ if (port_num != 0)
+ return NULL;
+
+ return rdma_alloc_hw_stats_struct(names, NR_COUNTERS,
+ RDMA_HW_STATS_DEFAULT_LIFESPAN);
+}
+
+static int iwch_get_mib(struct ib_device *ibdev, struct rdma_hw_stats *stats,
+ u8 port, int index)
{
struct iwch_dev *dev;
struct tp_mib_stats m;
int ret;
+ if (port != 0 || !stats)
+ return -ENOSYS;
+
PDBG("%s ibdev %p\n", __func__, ibdev);
dev = to_iwch_dev(ibdev);
ret = dev->rdev.t3cdev_p->ctl(dev->rdev.t3cdev_p, RDMA_GET_MIB, &m);
if (ret)
return -ENOSYS;
- memset(stats, 0, sizeof *stats);
- stats->iw.ipInReceives = ((u64) m.ipInReceive_hi << 32) +
- m.ipInReceive_lo;
- stats->iw.ipInHdrErrors = ((u64) m.ipInHdrErrors_hi << 32) +
- m.ipInHdrErrors_lo;
- stats->iw.ipInAddrErrors = ((u64) m.ipInAddrErrors_hi << 32) +
- m.ipInAddrErrors_lo;
- stats->iw.ipInUnknownProtos = ((u64) m.ipInUnknownProtos_hi << 32) +
- m.ipInUnknownProtos_lo;
- stats->iw.ipInDiscards = ((u64) m.ipInDiscards_hi << 32) +
- m.ipInDiscards_lo;
- stats->iw.ipInDelivers = ((u64) m.ipInDelivers_hi << 32) +
- m.ipInDelivers_lo;
- stats->iw.ipOutRequests = ((u64) m.ipOutRequests_hi << 32) +
- m.ipOutRequests_lo;
- stats->iw.ipOutDiscards = ((u64) m.ipOutDiscards_hi << 32) +
- m.ipOutDiscards_lo;
- stats->iw.ipOutNoRoutes = ((u64) m.ipOutNoRoutes_hi << 32) +
- m.ipOutNoRoutes_lo;
- stats->iw.ipReasmTimeout = (u64) m.ipReasmTimeout;
- stats->iw.ipReasmReqds = (u64) m.ipReasmReqds;
- stats->iw.ipReasmOKs = (u64) m.ipReasmOKs;
- stats->iw.ipReasmFails = (u64) m.ipReasmFails;
- stats->iw.tcpActiveOpens = (u64) m.tcpActiveOpens;
- stats->iw.tcpPassiveOpens = (u64) m.tcpPassiveOpens;
- stats->iw.tcpAttemptFails = (u64) m.tcpAttemptFails;
- stats->iw.tcpEstabResets = (u64) m.tcpEstabResets;
- stats->iw.tcpOutRsts = (u64) m.tcpOutRsts;
- stats->iw.tcpCurrEstab = (u64) m.tcpCurrEstab;
- stats->iw.tcpInSegs = ((u64) m.tcpInSegs_hi << 32) +
- m.tcpInSegs_lo;
- stats->iw.tcpOutSegs = ((u64) m.tcpOutSegs_hi << 32) +
- m.tcpOutSegs_lo;
- stats->iw.tcpRetransSegs = ((u64) m.tcpRetransSeg_hi << 32) +
- m.tcpRetransSeg_lo;
- stats->iw.tcpInErrs = ((u64) m.tcpInErrs_hi << 32) +
- m.tcpInErrs_lo;
- stats->iw.tcpRtoMin = (u64) m.tcpRtoMin;
- stats->iw.tcpRtoMax = (u64) m.tcpRtoMax;
- return 0;
+ stats->value[IPINRECEIVES] = ((u64)m.ipInReceive_hi << 32) + m.ipInReceive_lo;
+ stats->value[IPINHDRERRORS] = ((u64)m.ipInHdrErrors_hi << 32) + m.ipInHdrErrors_lo;
+ stats->value[IPINADDRERRORS] = ((u64)m.ipInAddrErrors_hi << 32) + m.ipInAddrErrors_lo;
+ stats->value[IPINUNKNOWNPROTOS] = ((u64)m.ipInUnknownProtos_hi << 32) + m.ipInUnknownProtos_lo;
+ stats->value[IPINDISCARDS] = ((u64)m.ipInDiscards_hi << 32) + m.ipInDiscards_lo;
+ stats->value[IPINDELIVERS] = ((u64)m.ipInDelivers_hi << 32) + m.ipInDelivers_lo;
+ stats->value[IPOUTREQUESTS] = ((u64)m.ipOutRequests_hi << 32) + m.ipOutRequests_lo;
+ stats->value[IPOUTDISCARDS] = ((u64)m.ipOutDiscards_hi << 32) + m.ipOutDiscards_lo;
+ stats->value[IPOUTNOROUTES] = ((u64)m.ipOutNoRoutes_hi << 32) + m.ipOutNoRoutes_lo;
+ stats->value[IPREASMTIMEOUT] = m.ipReasmTimeout;
+ stats->value[IPREASMREQDS] = m.ipReasmReqds;
+ stats->value[IPREASMOKS] = m.ipReasmOKs;
+ stats->value[IPREASMFAILS] = m.ipReasmFails;
+ stats->value[TCPACTIVEOPENS] = m.tcpActiveOpens;
+ stats->value[TCPPASSIVEOPENS] = m.tcpPassiveOpens;
+ stats->value[TCPATTEMPTFAILS] = m.tcpAttemptFails;
+ stats->value[TCPESTABRESETS] = m.tcpEstabResets;
+ stats->value[TCPCURRESTAB] = m.tcpCurrEstab;
+ stats->value[TCPINSEGS] = ((u64)m.tcpInSegs_hi << 32) + m.tcpInSegs_lo;
+ stats->value[TCPOUTSEGS] = ((u64)m.tcpOutSegs_hi << 32) + m.tcpOutSegs_lo;
+ stats->value[TCPRETRANSSEGS] = ((u64)m.tcpRetransSeg_hi << 32) + m.tcpRetransSeg_lo;
+ stats->value[TCPINERRS] = ((u64)m.tcpInErrs_hi << 32) + m.tcpInErrs_lo;
+ stats->value[TCPOUTRSTS] = ((u64)m.tcpOutRsts;
+ stats->value[TCPRTOMIN] = m.tcpRtoMin;
+ stats->value[TCPRTOMAX] = m.tcpRtoMax;
+
+ return stats->num_counters;
}
static DEVICE_ATTR(hw_rev, S_IRUGO, show_rev, NULL);
@@ -1374,7 +1433,8 @@ int iwch_register_device(struct iwch_dev *dev)
dev->ibdev.req_notify_cq = iwch_arm_cq;
dev->ibdev.post_send = iwch_post_send;
dev->ibdev.post_recv = iwch_post_receive;
- dev->ibdev.get_protocol_stats = iwch_get_mib;
+ dev->ibdev.alloc_hw_stats = iwch_alloc_stats;
+ dev->ibdev.get_hw_stats = iwch_get_mib;
dev->ibdev.uverbs_abi_ver = IWCH_UVERBS_ABI_VERSION;
dev->ibdev.get_port_immutable = iwch_port_immutable;
diff --git a/drivers/infiniband/hw/cxgb4/cm.c b/drivers/infiniband/hw/cxgb4/cm.c
index 651711370d55..a3a67216bce6 100644
--- a/drivers/infiniband/hw/cxgb4/cm.c
+++ b/drivers/infiniband/hw/cxgb4/cm.c
@@ -119,7 +119,7 @@ MODULE_PARM_DESC(ep_timeout_secs, "CM Endpoint operation timeout "
static int mpa_rev = 2;
module_param(mpa_rev, int, 0644);
MODULE_PARM_DESC(mpa_rev, "MPA Revision, 0 supports amso1100, "
- "1 is RFC0544 spec compliant, 2 is IETF MPA Peer Connect Draft"
+ "1 is RFC5044 spec compliant, 2 is IETF MPA Peer Connect Draft"
" compliant (default=2)");
static int markers_enabled;
@@ -145,19 +145,35 @@ static struct sk_buff_head rxq;
static struct sk_buff *get_skb(struct sk_buff *skb, int len, gfp_t gfp);
static void ep_timeout(unsigned long arg);
static void connect_reply_upcall(struct c4iw_ep *ep, int status);
+static int sched(struct c4iw_dev *dev, struct sk_buff *skb);
static LIST_HEAD(timeout_list);
static spinlock_t timeout_lock;
+static void deref_cm_id(struct c4iw_ep_common *epc)
+{
+ epc->cm_id->rem_ref(epc->cm_id);
+ epc->cm_id = NULL;
+ set_bit(CM_ID_DEREFED, &epc->history);
+}
+
+static void ref_cm_id(struct c4iw_ep_common *epc)
+{
+ set_bit(CM_ID_REFED, &epc->history);
+ epc->cm_id->add_ref(epc->cm_id);
+}
+
static void deref_qp(struct c4iw_ep *ep)
{
c4iw_qp_rem_ref(&ep->com.qp->ibqp);
clear_bit(QP_REFERENCED, &ep->com.flags);
+ set_bit(QP_DEREFED, &ep->com.history);
}
static void ref_qp(struct c4iw_ep *ep)
{
set_bit(QP_REFERENCED, &ep->com.flags);
+ set_bit(QP_REFED, &ep->com.history);
c4iw_qp_add_ref(&ep->com.qp->ibqp);
}
@@ -201,6 +217,8 @@ static int c4iw_l2t_send(struct c4iw_rdev *rdev, struct sk_buff *skb,
error = cxgb4_l2t_send(rdev->lldi.ports[0], skb, l2e);
if (error < 0)
kfree_skb(skb);
+ else if (error == NET_XMIT_DROP)
+ return -ENOMEM;
return error < 0 ? error : 0;
}
@@ -290,12 +308,63 @@ static void *alloc_ep(int size, gfp_t gfp)
return epc;
}
+static void remove_ep_tid(struct c4iw_ep *ep)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&ep->com.dev->lock, flags);
+ _remove_handle(ep->com.dev, &ep->com.dev->hwtid_idr, ep->hwtid, 0);
+ spin_unlock_irqrestore(&ep->com.dev->lock, flags);
+}
+
+static void insert_ep_tid(struct c4iw_ep *ep)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&ep->com.dev->lock, flags);
+ _insert_handle(ep->com.dev, &ep->com.dev->hwtid_idr, ep, ep->hwtid, 0);
+ spin_unlock_irqrestore(&ep->com.dev->lock, flags);
+}
+
+/*
+ * Atomically lookup the ep ptr given the tid and grab a reference on the ep.
+ */
+static struct c4iw_ep *get_ep_from_tid(struct c4iw_dev *dev, unsigned int tid)
+{
+ struct c4iw_ep *ep;
+ unsigned long flags;
+
+ spin_lock_irqsave(&dev->lock, flags);
+ ep = idr_find(&dev->hwtid_idr, tid);
+ if (ep)
+ c4iw_get_ep(&ep->com);
+ spin_unlock_irqrestore(&dev->lock, flags);
+ return ep;
+}
+
+/*
+ * Atomically lookup the ep ptr given the stid and grab a reference on the ep.
+ */
+static struct c4iw_listen_ep *get_ep_from_stid(struct c4iw_dev *dev,
+ unsigned int stid)
+{
+ struct c4iw_listen_ep *ep;
+ unsigned long flags;
+
+ spin_lock_irqsave(&dev->lock, flags);
+ ep = idr_find(&dev->stid_idr, stid);
+ if (ep)
+ c4iw_get_ep(&ep->com);
+ spin_unlock_irqrestore(&dev->lock, flags);
+ return ep;
+}
+
void _c4iw_free_ep(struct kref *kref)
{
struct c4iw_ep *ep;
ep = container_of(kref, struct c4iw_ep, com.kref);
- PDBG("%s ep %p state %s\n", __func__, ep, states[state_read(&ep->com)]);
+ PDBG("%s ep %p state %s\n", __func__, ep, states[ep->com.state]);
if (test_bit(QP_REFERENCED, &ep->com.flags))
deref_qp(ep);
if (test_bit(RELEASE_RESOURCES, &ep->com.flags)) {
@@ -309,10 +378,11 @@ void _c4iw_free_ep(struct kref *kref)
(const u32 *)&sin6->sin6_addr.s6_addr,
1);
}
- remove_handle(ep->com.dev, &ep->com.dev->hwtid_idr, ep->hwtid);
cxgb4_remove_tid(ep->com.dev->rdev.lldi.tids, 0, ep->hwtid);
dst_release(ep->dst);
cxgb4_l2t_release(ep->l2t);
+ if (ep->mpa_skb)
+ kfree_skb(ep->mpa_skb);
}
kfree(ep);
}
@@ -320,6 +390,15 @@ void _c4iw_free_ep(struct kref *kref)
static void release_ep_resources(struct c4iw_ep *ep)
{
set_bit(RELEASE_RESOURCES, &ep->com.flags);
+
+ /*
+ * If we have a hwtid, then remove it from the idr table
+ * so lookups will no longer find this endpoint. Otherwise
+ * we have a race where one thread finds the ep ptr just
+ * before the other thread is freeing the ep memory.
+ */
+ if (ep->hwtid != -1)
+ remove_ep_tid(ep);
c4iw_put_ep(&ep->com);
}
@@ -432,10 +511,74 @@ static struct dst_entry *find_route(struct c4iw_dev *dev, __be32 local_ip,
static void arp_failure_discard(void *handle, struct sk_buff *skb)
{
- PDBG("%s c4iw_dev %p\n", __func__, handle);
+ pr_err(MOD "ARP failure\n");
kfree_skb(skb);
}
+static void mpa_start_arp_failure(void *handle, struct sk_buff *skb)
+{
+ pr_err("ARP failure during MPA Negotiation - Closing Connection\n");
+}
+
+enum {
+ NUM_FAKE_CPLS = 2,
+ FAKE_CPL_PUT_EP_SAFE = NUM_CPL_CMDS + 0,
+ FAKE_CPL_PASS_PUT_EP_SAFE = NUM_CPL_CMDS + 1,
+};
+
+static int _put_ep_safe(struct c4iw_dev *dev, struct sk_buff *skb)
+{
+ struct c4iw_ep *ep;
+
+ ep = *((struct c4iw_ep **)(skb->cb + 2 * sizeof(void *)));
+ release_ep_resources(ep);
+ return 0;
+}
+
+static int _put_pass_ep_safe(struct c4iw_dev *dev, struct sk_buff *skb)
+{
+ struct c4iw_ep *ep;
+
+ ep = *((struct c4iw_ep **)(skb->cb + 2 * sizeof(void *)));
+ c4iw_put_ep(&ep->parent_ep->com);
+ release_ep_resources(ep);
+ return 0;
+}
+
+/*
+ * Fake up a special CPL opcode and call sched() so process_work() will call
+ * _put_ep_safe() in a safe context to free the ep resources. This is needed
+ * because ARP error handlers are called in an ATOMIC context, and
+ * _c4iw_free_ep() needs to block.
+ */
+static void queue_arp_failure_cpl(struct c4iw_ep *ep, struct sk_buff *skb,
+ int cpl)
+{
+ struct cpl_act_establish *rpl = cplhdr(skb);
+
+ /* Set our special ARP_FAILURE opcode */
+ rpl->ot.opcode = cpl;
+
+ /*
+ * Save ep in the skb->cb area, after where sched() will save the dev
+ * ptr.
+ */
+ *((struct c4iw_ep **)(skb->cb + 2 * sizeof(void *))) = ep;
+ sched(ep->com.dev, skb);
+}
+
+/* Handle an ARP failure for an accept */
+static void pass_accept_rpl_arp_failure(void *handle, struct sk_buff *skb)
+{
+ struct c4iw_ep *ep = handle;
+
+ pr_err(MOD "ARP failure during accept - tid %u -dropping connection\n",
+ ep->hwtid);
+
+ __state_set(&ep->com, DEAD);
+ queue_arp_failure_cpl(ep, skb, FAKE_CPL_PASS_PUT_EP_SAFE);
+}
+
/*
* Handle an ARP failure for an active open.
*/
@@ -444,9 +587,8 @@ static void act_open_req_arp_failure(void *handle, struct sk_buff *skb)
struct c4iw_ep *ep = handle;
printk(KERN_ERR MOD "ARP failure during connect\n");
- kfree_skb(skb);
connect_reply_upcall(ep, -EHOSTUNREACH);
- state_set(&ep->com, DEAD);
+ __state_set(&ep->com, DEAD);
if (ep->com.remote_addr.ss_family == AF_INET6) {
struct sockaddr_in6 *sin6 =
(struct sockaddr_in6 *)&ep->com.local_addr;
@@ -455,9 +597,7 @@ static void act_open_req_arp_failure(void *handle, struct sk_buff *skb)
}
remove_handle(ep->com.dev, &ep->com.dev->atid_idr, ep->atid);
cxgb4_free_atid(ep->com.dev->rdev.lldi.tids, ep->atid);
- dst_release(ep->dst);
- cxgb4_l2t_release(ep->l2t);
- c4iw_put_ep(&ep->com);
+ queue_arp_failure_cpl(ep, skb, FAKE_CPL_PUT_EP_SAFE);
}
/*
@@ -466,15 +606,21 @@ static void act_open_req_arp_failure(void *handle, struct sk_buff *skb)
*/
static void abort_arp_failure(void *handle, struct sk_buff *skb)
{
- struct c4iw_rdev *rdev = handle;
+ int ret;
+ struct c4iw_ep *ep = handle;
+ struct c4iw_rdev *rdev = &ep->com.dev->rdev;
struct cpl_abort_req *req = cplhdr(skb);
PDBG("%s rdev %p\n", __func__, rdev);
req->cmd = CPL_ABORT_NO_RST;
- c4iw_ofld_send(rdev, skb);
+ ret = c4iw_ofld_send(rdev, skb);
+ if (ret) {
+ __state_set(&ep->com, DEAD);
+ queue_arp_failure_cpl(ep, skb, FAKE_CPL_PUT_EP_SAFE);
+ }
}
-static void send_flowc(struct c4iw_ep *ep, struct sk_buff *skb)
+static int send_flowc(struct c4iw_ep *ep, struct sk_buff *skb)
{
unsigned int flowclen = 80;
struct fw_flowc_wr *flowc;
@@ -530,7 +676,7 @@ static void send_flowc(struct c4iw_ep *ep, struct sk_buff *skb)
}
set_wr_txq(skb, CPL_PRIORITY_DATA, ep->txq_idx);
- c4iw_ofld_send(&ep->com.dev->rdev, skb);
+ return c4iw_ofld_send(&ep->com.dev->rdev, skb);
}
static int send_halfclose(struct c4iw_ep *ep, gfp_t gfp)
@@ -568,7 +714,7 @@ static int send_abort(struct c4iw_ep *ep, struct sk_buff *skb, gfp_t gfp)
return -ENOMEM;
}
set_wr_txq(skb, CPL_PRIORITY_DATA, ep->txq_idx);
- t4_set_arp_err_handler(skb, &ep->com.dev->rdev, abort_arp_failure);
+ t4_set_arp_err_handler(skb, ep, abort_arp_failure);
req = (struct cpl_abort_req *) skb_put(skb, wrlen);
memset(req, 0, wrlen);
INIT_TP_WR(req, ep->hwtid);
@@ -807,10 +953,10 @@ clip_release:
return ret;
}
-static void send_mpa_req(struct c4iw_ep *ep, struct sk_buff *skb,
- u8 mpa_rev_to_use)
+static int send_mpa_req(struct c4iw_ep *ep, struct sk_buff *skb,
+ u8 mpa_rev_to_use)
{
- int mpalen, wrlen;
+ int mpalen, wrlen, ret;
struct fw_ofld_tx_data_wr *req;
struct mpa_message *mpa;
struct mpa_v2_conn_params mpa_v2_params;
@@ -826,7 +972,7 @@ static void send_mpa_req(struct c4iw_ep *ep, struct sk_buff *skb,
skb = get_skb(skb, wrlen, GFP_KERNEL);
if (!skb) {
connect_reply_upcall(ep, -ENOMEM);
- return;
+ return -ENOMEM;
}
set_wr_txq(skb, CPL_PRIORITY_DATA, ep->txq_idx);
@@ -894,12 +1040,14 @@ static void send_mpa_req(struct c4iw_ep *ep, struct sk_buff *skb,
t4_set_arp_err_handler(skb, NULL, arp_failure_discard);
BUG_ON(ep->mpa_skb);
ep->mpa_skb = skb;
- c4iw_l2t_send(&ep->com.dev->rdev, skb, ep->l2t);
+ ret = c4iw_l2t_send(&ep->com.dev->rdev, skb, ep->l2t);
+ if (ret)
+ return ret;
start_ep_timer(ep);
__state_set(&ep->com, MPA_REQ_SENT);
ep->mpa_attr.initiator = 1;
ep->snd_seq += mpalen;
- return;
+ return ret;
}
static int send_mpa_reject(struct c4iw_ep *ep, const void *pdata, u8 plen)
@@ -975,7 +1123,7 @@ static int send_mpa_reject(struct c4iw_ep *ep, const void *pdata, u8 plen)
*/
skb_get(skb);
set_wr_txq(skb, CPL_PRIORITY_DATA, ep->txq_idx);
- t4_set_arp_err_handler(skb, NULL, arp_failure_discard);
+ t4_set_arp_err_handler(skb, NULL, mpa_start_arp_failure);
BUG_ON(ep->mpa_skb);
ep->mpa_skb = skb;
ep->snd_seq += mpalen;
@@ -1060,7 +1208,7 @@ static int send_mpa_reply(struct c4iw_ep *ep, const void *pdata, u8 plen)
* Function fw4_ack() will deref it.
*/
skb_get(skb);
- t4_set_arp_err_handler(skb, NULL, arp_failure_discard);
+ t4_set_arp_err_handler(skb, NULL, mpa_start_arp_failure);
ep->mpa_skb = skb;
__state_set(&ep->com, MPA_REP_SENT);
ep->snd_seq += mpalen;
@@ -1074,6 +1222,7 @@ static int act_establish(struct c4iw_dev *dev, struct sk_buff *skb)
unsigned int tid = GET_TID(req);
unsigned int atid = TID_TID_G(ntohl(req->tos_atid));
struct tid_info *t = dev->rdev.lldi.tids;
+ int ret;
ep = lookup_atid(t, atid);
@@ -1086,7 +1235,7 @@ static int act_establish(struct c4iw_dev *dev, struct sk_buff *skb)
/* setup the hwtid for this connection */
ep->hwtid = tid;
cxgb4_insert_tid(t, ep, tid);
- insert_handle(dev, &dev->hwtid_idr, ep, ep->hwtid);
+ insert_ep_tid(ep);
ep->snd_seq = be32_to_cpu(req->snd_isn);
ep->rcv_seq = be32_to_cpu(req->rcv_isn);
@@ -1099,13 +1248,22 @@ static int act_establish(struct c4iw_dev *dev, struct sk_buff *skb)
set_bit(ACT_ESTAB, &ep->com.history);
/* start MPA negotiation */
- send_flowc(ep, NULL);
+ ret = send_flowc(ep, NULL);
+ if (ret)
+ goto err;
if (ep->retry_with_mpa_v1)
- send_mpa_req(ep, skb, 1);
+ ret = send_mpa_req(ep, skb, 1);
else
- send_mpa_req(ep, skb, mpa_rev);
+ ret = send_mpa_req(ep, skb, mpa_rev);
+ if (ret)
+ goto err;
mutex_unlock(&ep->com.mutex);
return 0;
+err:
+ mutex_unlock(&ep->com.mutex);
+ connect_reply_upcall(ep, -ENOMEM);
+ c4iw_ep_disconnect(ep, 0, GFP_KERNEL);
+ return 0;
}
static void close_complete_upcall(struct c4iw_ep *ep, int status)
@@ -1120,20 +1278,11 @@ static void close_complete_upcall(struct c4iw_ep *ep, int status)
PDBG("close complete delivered ep %p cm_id %p tid %u\n",
ep, ep->com.cm_id, ep->hwtid);
ep->com.cm_id->event_handler(ep->com.cm_id, &event);
- ep->com.cm_id->rem_ref(ep->com.cm_id);
- ep->com.cm_id = NULL;
+ deref_cm_id(&ep->com);
set_bit(CLOSE_UPCALL, &ep->com.history);
}
}
-static int abort_connection(struct c4iw_ep *ep, struct sk_buff *skb, gfp_t gfp)
-{
- PDBG("%s ep %p tid %u\n", __func__, ep, ep->hwtid);
- __state_set(&ep->com, ABORTING);
- set_bit(ABORT_CONN, &ep->com.history);
- return send_abort(ep, skb, gfp);
-}
-
static void peer_close_upcall(struct c4iw_ep *ep)
{
struct iw_cm_event event;
@@ -1161,8 +1310,7 @@ static void peer_abort_upcall(struct c4iw_ep *ep)
PDBG("abort delivered ep %p cm_id %p tid %u\n", ep,
ep->com.cm_id, ep->hwtid);
ep->com.cm_id->event_handler(ep->com.cm_id, &event);
- ep->com.cm_id->rem_ref(ep->com.cm_id);
- ep->com.cm_id = NULL;
+ deref_cm_id(&ep->com);
set_bit(ABORT_UPCALL, &ep->com.history);
}
}
@@ -1205,10 +1353,8 @@ static void connect_reply_upcall(struct c4iw_ep *ep, int status)
set_bit(CONN_RPL_UPCALL, &ep->com.history);
ep->com.cm_id->event_handler(ep->com.cm_id, &event);
- if (status < 0) {
- ep->com.cm_id->rem_ref(ep->com.cm_id);
- ep->com.cm_id = NULL;
- }
+ if (status < 0)
+ deref_cm_id(&ep->com);
}
static int connect_request_upcall(struct c4iw_ep *ep)
@@ -1301,6 +1447,18 @@ static int update_rx_credits(struct c4iw_ep *ep, u32 credits)
#define RELAXED_IRD_NEGOTIATION 1
+/*
+ * process_mpa_reply - process streaming mode MPA reply
+ *
+ * Returns:
+ *
+ * 0 upon success indicating a connect reply was delivered to the ULP
+ * or the mpa reply is incomplete but valid so far.
+ *
+ * 1 if a failure requires the caller to close the connection.
+ *
+ * 2 if a failure requires the caller to abort the connection.
+ */
static int process_mpa_reply(struct c4iw_ep *ep, struct sk_buff *skb)
{
struct mpa_message *mpa;
@@ -1316,20 +1474,12 @@ static int process_mpa_reply(struct c4iw_ep *ep, struct sk_buff *skb)
PDBG("%s ep %p tid %u\n", __func__, ep, ep->hwtid);
/*
- * Stop mpa timer. If it expired, then
- * we ignore the MPA reply. process_timeout()
- * will abort the connection.
- */
- if (stop_ep_timer(ep))
- return 0;
-
- /*
* If we get more than the supported amount of private data
* then we must fail this connection.
*/
if (ep->mpa_pkt_len + skb->len > sizeof(ep->mpa_pkt)) {
err = -EINVAL;
- goto err;
+ goto err_stop_timer;
}
/*
@@ -1351,11 +1501,11 @@ static int process_mpa_reply(struct c4iw_ep *ep, struct sk_buff *skb)
printk(KERN_ERR MOD "%s MPA version mismatch. Local = %d,"
" Received = %d\n", __func__, mpa_rev, mpa->revision);
err = -EPROTO;
- goto err;
+ goto err_stop_timer;
}
if (memcmp(mpa->key, MPA_KEY_REP, sizeof(mpa->key))) {
err = -EPROTO;
- goto err;
+ goto err_stop_timer;
}
plen = ntohs(mpa->private_data_size);
@@ -1365,7 +1515,7 @@ static int process_mpa_reply(struct c4iw_ep *ep, struct sk_buff *skb)
*/
if (plen > MPA_MAX_PRIVATE_DATA) {
err = -EPROTO;
- goto err;
+ goto err_stop_timer;
}
/*
@@ -1373,7 +1523,7 @@ static int process_mpa_reply(struct c4iw_ep *ep, struct sk_buff *skb)
*/
if (ep->mpa_pkt_len > (sizeof(*mpa) + plen)) {
err = -EPROTO;
- goto err;
+ goto err_stop_timer;
}
ep->plen = (u8) plen;
@@ -1387,10 +1537,18 @@ static int process_mpa_reply(struct c4iw_ep *ep, struct sk_buff *skb)
if (mpa->flags & MPA_REJECT) {
err = -ECONNREFUSED;
- goto err;
+ goto err_stop_timer;
}
/*
+ * Stop mpa timer. If it expired, then
+ * we ignore the MPA reply. process_timeout()
+ * will abort the connection.
+ */
+ if (stop_ep_timer(ep))
+ return 0;
+
+ /*
* If we get here we have accumulated the entire mpa
* start reply message including private data. And
* the MPA header is valid.
@@ -1529,15 +1687,28 @@ static int process_mpa_reply(struct c4iw_ep *ep, struct sk_buff *skb)
goto out;
}
goto out;
+err_stop_timer:
+ stop_ep_timer(ep);
err:
- __state_set(&ep->com, ABORTING);
- send_abort(ep, skb, GFP_KERNEL);
+ disconnect = 2;
out:
connect_reply_upcall(ep, err);
return disconnect;
}
-static void process_mpa_request(struct c4iw_ep *ep, struct sk_buff *skb)
+/*
+ * process_mpa_request - process streaming mode MPA request
+ *
+ * Returns:
+ *
+ * 0 upon success indicating a connect request was delivered to the ULP
+ * or the mpa request is incomplete but valid so far.
+ *
+ * 1 if a failure requires the caller to close the connection.
+ *
+ * 2 if a failure requires the caller to abort the connection.
+ */
+static int process_mpa_request(struct c4iw_ep *ep, struct sk_buff *skb)
{
struct mpa_message *mpa;
struct mpa_v2_conn_params *mpa_v2_params;
@@ -1549,11 +1720,8 @@ static void process_mpa_request(struct c4iw_ep *ep, struct sk_buff *skb)
* If we get more than the supported amount of private data
* then we must fail this connection.
*/
- if (ep->mpa_pkt_len + skb->len > sizeof(ep->mpa_pkt)) {
- (void)stop_ep_timer(ep);
- abort_connection(ep, skb, GFP_KERNEL);
- return;
- }
+ if (ep->mpa_pkt_len + skb->len > sizeof(ep->mpa_pkt))
+ goto err_stop_timer;
PDBG("%s enter (%s line %u)\n", __func__, __FILE__, __LINE__);
@@ -1569,7 +1737,7 @@ static void process_mpa_request(struct c4iw_ep *ep, struct sk_buff *skb)
* We'll continue process when more data arrives.
*/
if (ep->mpa_pkt_len < sizeof(*mpa))
- return;
+ return 0;
PDBG("%s enter (%s line %u)\n", __func__, __FILE__, __LINE__);
mpa = (struct mpa_message *) ep->mpa_pkt;
@@ -1580,43 +1748,32 @@ static void process_mpa_request(struct c4iw_ep *ep, struct sk_buff *skb)
if (mpa->revision > mpa_rev) {
printk(KERN_ERR MOD "%s MPA version mismatch. Local = %d,"
" Received = %d\n", __func__, mpa_rev, mpa->revision);
- (void)stop_ep_timer(ep);
- abort_connection(ep, skb, GFP_KERNEL);
- return;
+ goto err_stop_timer;
}
- if (memcmp(mpa->key, MPA_KEY_REQ, sizeof(mpa->key))) {
- (void)stop_ep_timer(ep);
- abort_connection(ep, skb, GFP_KERNEL);
- return;
- }
+ if (memcmp(mpa->key, MPA_KEY_REQ, sizeof(mpa->key)))
+ goto err_stop_timer;
plen = ntohs(mpa->private_data_size);
/*
* Fail if there's too much private data.
*/
- if (plen > MPA_MAX_PRIVATE_DATA) {
- (void)stop_ep_timer(ep);
- abort_connection(ep, skb, GFP_KERNEL);
- return;
- }
+ if (plen > MPA_MAX_PRIVATE_DATA)
+ goto err_stop_timer;
/*
* If plen does not account for pkt size
*/
- if (ep->mpa_pkt_len > (sizeof(*mpa) + plen)) {
- (void)stop_ep_timer(ep);
- abort_connection(ep, skb, GFP_KERNEL);
- return;
- }
+ if (ep->mpa_pkt_len > (sizeof(*mpa) + plen))
+ goto err_stop_timer;
ep->plen = (u8) plen;
/*
* If we don't have all the pdata yet, then bail.
*/
if (ep->mpa_pkt_len < (sizeof(*mpa) + plen))
- return;
+ return 0;
/*
* If we get here we have accumulated the entire mpa
@@ -1665,26 +1822,26 @@ static void process_mpa_request(struct c4iw_ep *ep, struct sk_buff *skb)
ep->mpa_attr.xmit_marker_enabled, ep->mpa_attr.version,
ep->mpa_attr.p2p_type);
- /*
- * If the endpoint timer already expired, then we ignore
- * the start request. process_timeout() will abort
- * the connection.
- */
- if (!stop_ep_timer(ep)) {
- __state_set(&ep->com, MPA_REQ_RCVD);
-
- /* drive upcall */
- mutex_lock_nested(&ep->parent_ep->com.mutex,
- SINGLE_DEPTH_NESTING);
- if (ep->parent_ep->com.state != DEAD) {
- if (connect_request_upcall(ep))
- abort_connection(ep, skb, GFP_KERNEL);
- } else {
- abort_connection(ep, skb, GFP_KERNEL);
- }
- mutex_unlock(&ep->parent_ep->com.mutex);
+ __state_set(&ep->com, MPA_REQ_RCVD);
+
+ /* drive upcall */
+ mutex_lock_nested(&ep->parent_ep->com.mutex, SINGLE_DEPTH_NESTING);
+ if (ep->parent_ep->com.state != DEAD) {
+ if (connect_request_upcall(ep))
+ goto err_unlock_parent;
+ } else {
+ goto err_unlock_parent;
}
- return;
+ mutex_unlock(&ep->parent_ep->com.mutex);
+ return 0;
+
+err_unlock_parent:
+ mutex_unlock(&ep->parent_ep->com.mutex);
+ goto err_out;
+err_stop_timer:
+ (void)stop_ep_timer(ep);
+err_out:
+ return 2;
}
static int rx_data(struct c4iw_dev *dev, struct sk_buff *skb)
@@ -1693,11 +1850,10 @@ static int rx_data(struct c4iw_dev *dev, struct sk_buff *skb)
struct cpl_rx_data *hdr = cplhdr(skb);
unsigned int dlen = ntohs(hdr->len);
unsigned int tid = GET_TID(hdr);
- struct tid_info *t = dev->rdev.lldi.tids;
__u8 status = hdr->status;
int disconnect = 0;
- ep = lookup_tid(t, tid);
+ ep = get_ep_from_tid(dev, tid);
if (!ep)
return 0;
PDBG("%s ep %p tid %u dlen %u\n", __func__, ep, ep->hwtid, dlen);
@@ -1715,7 +1871,7 @@ static int rx_data(struct c4iw_dev *dev, struct sk_buff *skb)
break;
case MPA_REQ_WAIT:
ep->rcv_seq += dlen;
- process_mpa_request(ep, skb);
+ disconnect = process_mpa_request(ep, skb);
break;
case FPDU_MODE: {
struct c4iw_qp_attributes attrs;
@@ -1736,7 +1892,8 @@ static int rx_data(struct c4iw_dev *dev, struct sk_buff *skb)
}
mutex_unlock(&ep->com.mutex);
if (disconnect)
- c4iw_ep_disconnect(ep, 0, GFP_KERNEL);
+ c4iw_ep_disconnect(ep, disconnect == 2, GFP_KERNEL);
+ c4iw_put_ep(&ep->com);
return 0;
}
@@ -1746,9 +1903,8 @@ static int abort_rpl(struct c4iw_dev *dev, struct sk_buff *skb)
struct cpl_abort_rpl_rss *rpl = cplhdr(skb);
int release = 0;
unsigned int tid = GET_TID(rpl);
- struct tid_info *t = dev->rdev.lldi.tids;
- ep = lookup_tid(t, tid);
+ ep = get_ep_from_tid(dev, tid);
if (!ep) {
printk(KERN_WARNING MOD "Abort rpl to freed endpoint\n");
return 0;
@@ -1770,10 +1926,11 @@ static int abort_rpl(struct c4iw_dev *dev, struct sk_buff *skb)
if (release)
release_ep_resources(ep);
+ c4iw_put_ep(&ep->com);
return 0;
}
-static void send_fw_act_open_req(struct c4iw_ep *ep, unsigned int atid)
+static int send_fw_act_open_req(struct c4iw_ep *ep, unsigned int atid)
{
struct sk_buff *skb;
struct fw_ofld_connection_wr *req;
@@ -1843,7 +2000,7 @@ static void send_fw_act_open_req(struct c4iw_ep *ep, unsigned int atid)
req->tcb.opt2 = cpu_to_be32((__force u32)req->tcb.opt2);
set_wr_txq(skb, CPL_PRIORITY_CONTROL, ep->ctrlq_idx);
set_bit(ACT_OFLD_CONN, &ep->com.history);
- c4iw_l2t_send(&ep->com.dev->rdev, skb, ep->l2t);
+ return c4iw_l2t_send(&ep->com.dev->rdev, skb, ep->l2t);
}
/*
@@ -1986,6 +2143,7 @@ static int c4iw_reconnect(struct c4iw_ep *ep)
PDBG("%s qp %p cm_id %p\n", __func__, ep->com.qp, ep->com.cm_id);
init_timer(&ep->timer);
+ c4iw_init_wr_wait(&ep->com.wr_wait);
/*
* Allocate an active TID to initiate a TCP connection.
@@ -2069,6 +2227,7 @@ static int act_open_rpl(struct c4iw_dev *dev, struct sk_buff *skb)
struct sockaddr_in *ra;
struct sockaddr_in6 *la6;
struct sockaddr_in6 *ra6;
+ int ret = 0;
ep = lookup_atid(t, atid);
la = (struct sockaddr_in *)&ep->com.local_addr;
@@ -2104,9 +2263,10 @@ static int act_open_rpl(struct c4iw_dev *dev, struct sk_buff *skb)
mutex_unlock(&dev->rdev.stats.lock);
if (ep->com.local_addr.ss_family == AF_INET &&
dev->rdev.lldi.enable_fw_ofld_conn) {
- send_fw_act_open_req(ep,
- TID_TID_G(AOPEN_ATID_G(
- ntohl(rpl->atid_status))));
+ ret = send_fw_act_open_req(ep, TID_TID_G(AOPEN_ATID_G(
+ ntohl(rpl->atid_status))));
+ if (ret)
+ goto fail;
return 0;
}
break;
@@ -2146,6 +2306,7 @@ static int act_open_rpl(struct c4iw_dev *dev, struct sk_buff *skb)
break;
}
+fail:
connect_reply_upcall(ep, status2errno(status));
state_set(&ep->com, DEAD);
@@ -2170,9 +2331,8 @@ static int act_open_rpl(struct c4iw_dev *dev, struct sk_buff *skb)
static int pass_open_rpl(struct c4iw_dev *dev, struct sk_buff *skb)
{
struct cpl_pass_open_rpl *rpl = cplhdr(skb);
- struct tid_info *t = dev->rdev.lldi.tids;
unsigned int stid = GET_TID(rpl);
- struct c4iw_listen_ep *ep = lookup_stid(t, stid);
+ struct c4iw_listen_ep *ep = get_ep_from_stid(dev, stid);
if (!ep) {
PDBG("%s stid %d lookup failure!\n", __func__, stid);
@@ -2181,7 +2341,7 @@ static int pass_open_rpl(struct c4iw_dev *dev, struct sk_buff *skb)
PDBG("%s ep %p status %d error %d\n", __func__, ep,
rpl->status, status2errno(rpl->status));
c4iw_wake_up(&ep->com.wr_wait, status2errno(rpl->status));
-
+ c4iw_put_ep(&ep->com);
out:
return 0;
}
@@ -2189,17 +2349,17 @@ out:
static int close_listsrv_rpl(struct c4iw_dev *dev, struct sk_buff *skb)
{
struct cpl_close_listsvr_rpl *rpl = cplhdr(skb);
- struct tid_info *t = dev->rdev.lldi.tids;
unsigned int stid = GET_TID(rpl);
- struct c4iw_listen_ep *ep = lookup_stid(t, stid);
+ struct c4iw_listen_ep *ep = get_ep_from_stid(dev, stid);
PDBG("%s ep %p\n", __func__, ep);
c4iw_wake_up(&ep->com.wr_wait, status2errno(rpl->status));
+ c4iw_put_ep(&ep->com);
return 0;
}
-static void accept_cr(struct c4iw_ep *ep, struct sk_buff *skb,
- struct cpl_pass_accept_req *req)
+static int accept_cr(struct c4iw_ep *ep, struct sk_buff *skb,
+ struct cpl_pass_accept_req *req)
{
struct cpl_pass_accept_rpl *rpl;
unsigned int mtu_idx;
@@ -2287,10 +2447,9 @@ static void accept_cr(struct c4iw_ep *ep, struct sk_buff *skb,
rpl->opt0 = cpu_to_be64(opt0);
rpl->opt2 = cpu_to_be32(opt2);
set_wr_txq(skb, CPL_PRIORITY_SETUP, ep->ctrlq_idx);
- t4_set_arp_err_handler(skb, NULL, arp_failure_discard);
- c4iw_l2t_send(&ep->com.dev->rdev, skb, ep->l2t);
+ t4_set_arp_err_handler(skb, ep, pass_accept_rpl_arp_failure);
- return;
+ return c4iw_l2t_send(&ep->com.dev->rdev, skb, ep->l2t);
}
static void reject_cr(struct c4iw_dev *dev, u32 hwtid, struct sk_buff *skb)
@@ -2355,7 +2514,7 @@ static int pass_accept_req(struct c4iw_dev *dev, struct sk_buff *skb)
unsigned short hdrs;
u8 tos = PASS_OPEN_TOS_G(ntohl(req->tos_stid));
- parent_ep = lookup_stid(t, stid);
+ parent_ep = (struct c4iw_ep *)get_ep_from_stid(dev, stid);
if (!parent_ep) {
PDBG("%s connect request on invalid stid %d\n", __func__, stid);
goto reject;
@@ -2468,9 +2627,13 @@ static int pass_accept_req(struct c4iw_dev *dev, struct sk_buff *skb)
init_timer(&child_ep->timer);
cxgb4_insert_tid(t, child_ep, hwtid);
- insert_handle(dev, &dev->hwtid_idr, child_ep, child_ep->hwtid);
- accept_cr(child_ep, skb, req);
- set_bit(PASS_ACCEPT_REQ, &child_ep->com.history);
+ insert_ep_tid(child_ep);
+ if (accept_cr(child_ep, skb, req)) {
+ c4iw_put_ep(&parent_ep->com);
+ release_ep_resources(child_ep);
+ } else {
+ set_bit(PASS_ACCEPT_REQ, &child_ep->com.history);
+ }
if (iptype == 6) {
sin6 = (struct sockaddr_in6 *)&child_ep->com.local_addr;
cxgb4_clip_get(child_ep->com.dev->rdev.lldi.ports[0],
@@ -2479,6 +2642,8 @@ static int pass_accept_req(struct c4iw_dev *dev, struct sk_buff *skb)
goto out;
reject:
reject_cr(dev, hwtid, skb);
+ if (parent_ep)
+ c4iw_put_ep(&parent_ep->com);
out:
return 0;
}
@@ -2487,10 +2652,10 @@ static int pass_establish(struct c4iw_dev *dev, struct sk_buff *skb)
{
struct c4iw_ep *ep;
struct cpl_pass_establish *req = cplhdr(skb);
- struct tid_info *t = dev->rdev.lldi.tids;
unsigned int tid = GET_TID(req);
+ int ret;
- ep = lookup_tid(t, tid);
+ ep = get_ep_from_tid(dev, tid);
PDBG("%s ep %p tid %u\n", __func__, ep, ep->hwtid);
ep->snd_seq = be32_to_cpu(req->snd_isn);
ep->rcv_seq = be32_to_cpu(req->rcv_isn);
@@ -2501,10 +2666,15 @@ static int pass_establish(struct c4iw_dev *dev, struct sk_buff *skb)
set_emss(ep, ntohs(req->tcp_opt));
dst_confirm(ep->dst);
- state_set(&ep->com, MPA_REQ_WAIT);
+ mutex_lock(&ep->com.mutex);
+ ep->com.state = MPA_REQ_WAIT;
start_ep_timer(ep);
- send_flowc(ep, skb);
set_bit(PASS_ESTAB, &ep->com.history);
+ ret = send_flowc(ep, skb);
+ mutex_unlock(&ep->com.mutex);
+ if (ret)
+ c4iw_ep_disconnect(ep, 1, GFP_KERNEL);
+ c4iw_put_ep(&ep->com);
return 0;
}
@@ -2516,11 +2686,13 @@ static int peer_close(struct c4iw_dev *dev, struct sk_buff *skb)
struct c4iw_qp_attributes attrs;
int disconnect = 1;
int release = 0;
- struct tid_info *t = dev->rdev.lldi.tids;
unsigned int tid = GET_TID(hdr);
int ret;
- ep = lookup_tid(t, tid);
+ ep = get_ep_from_tid(dev, tid);
+ if (!ep)
+ return 0;
+
PDBG("%s ep %p tid %u\n", __func__, ep, ep->hwtid);
dst_confirm(ep->dst);
@@ -2592,6 +2764,7 @@ static int peer_close(struct c4iw_dev *dev, struct sk_buff *skb)
c4iw_ep_disconnect(ep, 0, GFP_KERNEL);
if (release)
release_ep_resources(ep);
+ c4iw_put_ep(&ep->com);
return 0;
}
@@ -2604,10 +2777,12 @@ static int peer_abort(struct c4iw_dev *dev, struct sk_buff *skb)
struct c4iw_qp_attributes attrs;
int ret;
int release = 0;
- struct tid_info *t = dev->rdev.lldi.tids;
unsigned int tid = GET_TID(req);
- ep = lookup_tid(t, tid);
+ ep = get_ep_from_tid(dev, tid);
+ if (!ep)
+ return 0;
+
if (is_neg_adv(req->status)) {
PDBG("%s Negative advice on abort- tid %u status %d (%s)\n",
__func__, ep->hwtid, req->status,
@@ -2616,7 +2791,7 @@ static int peer_abort(struct c4iw_dev *dev, struct sk_buff *skb)
mutex_lock(&dev->rdev.stats.lock);
dev->rdev.stats.neg_adv++;
mutex_unlock(&dev->rdev.stats.lock);
- return 0;
+ goto deref_ep;
}
PDBG("%s ep %p tid %u state %u\n", __func__, ep, ep->hwtid,
ep->com.state);
@@ -2633,6 +2808,7 @@ static int peer_abort(struct c4iw_dev *dev, struct sk_buff *skb)
mutex_lock(&ep->com.mutex);
switch (ep->com.state) {
case CONNECTING:
+ c4iw_put_ep(&ep->parent_ep->com);
break;
case MPA_REQ_WAIT:
(void)stop_ep_timer(ep);
@@ -2681,7 +2857,7 @@ static int peer_abort(struct c4iw_dev *dev, struct sk_buff *skb)
case DEAD:
PDBG("%s PEER_ABORT IN DEAD STATE!!!!\n", __func__);
mutex_unlock(&ep->com.mutex);
- return 0;
+ goto deref_ep;
default:
BUG_ON(1);
break;
@@ -2728,6 +2904,10 @@ out:
c4iw_reconnect(ep);
}
+deref_ep:
+ c4iw_put_ep(&ep->com);
+ /* Dereferencing ep, referenced in peer_abort_intr() */
+ c4iw_put_ep(&ep->com);
return 0;
}
@@ -2737,16 +2917,18 @@ static int close_con_rpl(struct c4iw_dev *dev, struct sk_buff *skb)
struct c4iw_qp_attributes attrs;
struct cpl_close_con_rpl *rpl = cplhdr(skb);
int release = 0;
- struct tid_info *t = dev->rdev.lldi.tids;
unsigned int tid = GET_TID(rpl);
- ep = lookup_tid(t, tid);
+ ep = get_ep_from_tid(dev, tid);
+ if (!ep)
+ return 0;
PDBG("%s ep %p tid %u\n", __func__, ep, ep->hwtid);
BUG_ON(!ep);
/* The cm_id may be null if we failed to connect */
mutex_lock(&ep->com.mutex);
+ set_bit(CLOSE_CON_RPL, &ep->com.history);
switch (ep->com.state) {
case CLOSING:
__state_set(&ep->com, MORIBUND);
@@ -2774,18 +2956,18 @@ static int close_con_rpl(struct c4iw_dev *dev, struct sk_buff *skb)
mutex_unlock(&ep->com.mutex);
if (release)
release_ep_resources(ep);
+ c4iw_put_ep(&ep->com);
return 0;
}
static int terminate(struct c4iw_dev *dev, struct sk_buff *skb)
{
struct cpl_rdma_terminate *rpl = cplhdr(skb);
- struct tid_info *t = dev->rdev.lldi.tids;
unsigned int tid = GET_TID(rpl);
struct c4iw_ep *ep;
struct c4iw_qp_attributes attrs;
- ep = lookup_tid(t, tid);
+ ep = get_ep_from_tid(dev, tid);
BUG_ON(!ep);
if (ep && ep->com.qp) {
@@ -2796,6 +2978,7 @@ static int terminate(struct c4iw_dev *dev, struct sk_buff *skb)
C4IW_QP_ATTR_NEXT_STATE, &attrs, 1);
} else
printk(KERN_WARNING MOD "TERM received tid %u no ep/qp\n", tid);
+ c4iw_put_ep(&ep->com);
return 0;
}
@@ -2811,15 +2994,16 @@ static int fw4_ack(struct c4iw_dev *dev, struct sk_buff *skb)
struct cpl_fw4_ack *hdr = cplhdr(skb);
u8 credits = hdr->credits;
unsigned int tid = GET_TID(hdr);
- struct tid_info *t = dev->rdev.lldi.tids;
- ep = lookup_tid(t, tid);
+ ep = get_ep_from_tid(dev, tid);
+ if (!ep)
+ return 0;
PDBG("%s ep %p tid %u credits %u\n", __func__, ep, ep->hwtid, credits);
if (credits == 0) {
PDBG("%s 0 credit ack ep %p tid %u state %u\n",
__func__, ep, ep->hwtid, state_read(&ep->com));
- return 0;
+ goto out;
}
dst_confirm(ep->dst);
@@ -2829,7 +3013,13 @@ static int fw4_ack(struct c4iw_dev *dev, struct sk_buff *skb)
state_read(&ep->com), ep->mpa_attr.initiator ? 1 : 0);
kfree_skb(ep->mpa_skb);
ep->mpa_skb = NULL;
+ mutex_lock(&ep->com.mutex);
+ if (test_bit(STOP_MPA_TIMER, &ep->com.flags))
+ stop_ep_timer(ep);
+ mutex_unlock(&ep->com.mutex);
}
+out:
+ c4iw_put_ep(&ep->com);
return 0;
}
@@ -2841,22 +3031,23 @@ int c4iw_reject_cr(struct iw_cm_id *cm_id, const void *pdata, u8 pdata_len)
PDBG("%s ep %p tid %u\n", __func__, ep, ep->hwtid);
mutex_lock(&ep->com.mutex);
- if (ep->com.state == DEAD) {
+ if (ep->com.state != MPA_REQ_RCVD) {
mutex_unlock(&ep->com.mutex);
c4iw_put_ep(&ep->com);
return -ECONNRESET;
}
set_bit(ULP_REJECT, &ep->com.history);
- BUG_ON(ep->com.state != MPA_REQ_RCVD);
if (mpa_rev == 0)
- abort_connection(ep, NULL, GFP_KERNEL);
+ disconnect = 2;
else {
err = send_mpa_reject(ep, pdata, pdata_len);
disconnect = 1;
}
mutex_unlock(&ep->com.mutex);
- if (disconnect)
- err = c4iw_ep_disconnect(ep, 0, GFP_KERNEL);
+ if (disconnect) {
+ stop_ep_timer(ep);
+ err = c4iw_ep_disconnect(ep, disconnect == 2, GFP_KERNEL);
+ }
c4iw_put_ep(&ep->com);
return 0;
}
@@ -2869,24 +3060,23 @@ int c4iw_accept_cr(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
struct c4iw_ep *ep = to_ep(cm_id);
struct c4iw_dev *h = to_c4iw_dev(cm_id->device);
struct c4iw_qp *qp = get_qhp(h, conn_param->qpn);
+ int abort = 0;
PDBG("%s ep %p tid %u\n", __func__, ep, ep->hwtid);
mutex_lock(&ep->com.mutex);
- if (ep->com.state == DEAD) {
+ if (ep->com.state != MPA_REQ_RCVD) {
err = -ECONNRESET;
- goto err;
+ goto err_out;
}
- BUG_ON(ep->com.state != MPA_REQ_RCVD);
BUG_ON(!qp);
set_bit(ULP_ACCEPT, &ep->com.history);
if ((conn_param->ord > cur_max_read_depth(ep->com.dev)) ||
(conn_param->ird > cur_max_read_depth(ep->com.dev))) {
- abort_connection(ep, NULL, GFP_KERNEL);
err = -EINVAL;
- goto err;
+ goto err_abort;
}
if (ep->mpa_attr.version == 2 && ep->mpa_attr.enhanced_rdma_conn) {
@@ -2898,9 +3088,8 @@ int c4iw_accept_cr(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
ep->ord = conn_param->ord;
send_mpa_reject(ep, conn_param->private_data,
conn_param->private_data_len);
- abort_connection(ep, NULL, GFP_KERNEL);
err = -ENOMEM;
- goto err;
+ goto err_abort;
}
}
if (conn_param->ird < ep->ord) {
@@ -2908,9 +3097,8 @@ int c4iw_accept_cr(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
ep->ord <= h->rdev.lldi.max_ordird_qp) {
conn_param->ird = ep->ord;
} else {
- abort_connection(ep, NULL, GFP_KERNEL);
err = -ENOMEM;
- goto err;
+ goto err_abort;
}
}
}
@@ -2929,8 +3117,8 @@ int c4iw_accept_cr(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
PDBG("%s %d ird %d ord %d\n", __func__, __LINE__, ep->ird, ep->ord);
- cm_id->add_ref(cm_id);
ep->com.cm_id = cm_id;
+ ref_cm_id(&ep->com);
ep->com.qp = qp;
ref_qp(ep);
@@ -2951,23 +3139,27 @@ int c4iw_accept_cr(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
err = c4iw_modify_qp(ep->com.qp->rhp,
ep->com.qp, mask, &attrs, 1);
if (err)
- goto err1;
+ goto err_deref_cm_id;
+
+ set_bit(STOP_MPA_TIMER, &ep->com.flags);
err = send_mpa_reply(ep, conn_param->private_data,
conn_param->private_data_len);
if (err)
- goto err1;
+ goto err_deref_cm_id;
__state_set(&ep->com, FPDU_MODE);
established_upcall(ep);
mutex_unlock(&ep->com.mutex);
c4iw_put_ep(&ep->com);
return 0;
-err1:
- ep->com.cm_id = NULL;
- abort_connection(ep, NULL, GFP_KERNEL);
- cm_id->rem_ref(cm_id);
-err:
+err_deref_cm_id:
+ deref_cm_id(&ep->com);
+err_abort:
+ abort = 1;
+err_out:
mutex_unlock(&ep->com.mutex);
+ if (abort)
+ c4iw_ep_disconnect(ep, 1, GFP_KERNEL);
c4iw_put_ep(&ep->com);
return err;
}
@@ -3067,9 +3259,9 @@ int c4iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
if (peer2peer && ep->ord == 0)
ep->ord = 1;
- cm_id->add_ref(cm_id);
- ep->com.dev = dev;
ep->com.cm_id = cm_id;
+ ref_cm_id(&ep->com);
+ ep->com.dev = dev;
ep->com.qp = get_qhp(dev, conn_param->qpn);
if (!ep->com.qp) {
PDBG("%s qpn 0x%x not found!\n", __func__, conn_param->qpn);
@@ -3108,7 +3300,7 @@ int c4iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
/*
* Handle loopback requests to INADDR_ANY.
*/
- if ((__force int)raddr->sin_addr.s_addr == INADDR_ANY) {
+ if (raddr->sin_addr.s_addr == htonl(INADDR_ANY)) {
err = pick_local_ipaddrs(dev, cm_id);
if (err)
goto fail1;
@@ -3176,7 +3368,7 @@ fail2:
remove_handle(ep->com.dev, &ep->com.dev->atid_idr, ep->atid);
cxgb4_free_atid(ep->com.dev->rdev.lldi.tids, ep->atid);
fail1:
- cm_id->rem_ref(cm_id);
+ deref_cm_id(&ep->com);
c4iw_put_ep(&ep->com);
out:
return err;
@@ -3270,8 +3462,8 @@ int c4iw_create_listen(struct iw_cm_id *cm_id, int backlog)
goto fail1;
}
PDBG("%s ep %p\n", __func__, ep);
- cm_id->add_ref(cm_id);
ep->com.cm_id = cm_id;
+ ref_cm_id(&ep->com);
ep->com.dev = dev;
ep->backlog = backlog;
memcpy(&ep->com.local_addr, &cm_id->m_local_addr,
@@ -3311,7 +3503,7 @@ int c4iw_create_listen(struct iw_cm_id *cm_id, int backlog)
cxgb4_free_stid(ep->com.dev->rdev.lldi.tids, ep->stid,
ep->com.local_addr.ss_family);
fail2:
- cm_id->rem_ref(cm_id);
+ deref_cm_id(&ep->com);
c4iw_put_ep(&ep->com);
fail1:
out:
@@ -3350,7 +3542,7 @@ int c4iw_destroy_listen(struct iw_cm_id *cm_id)
cxgb4_free_stid(ep->com.dev->rdev.lldi.tids, ep->stid,
ep->com.local_addr.ss_family);
done:
- cm_id->rem_ref(cm_id);
+ deref_cm_id(&ep->com);
c4iw_put_ep(&ep->com);
return err;
}
@@ -3367,6 +3559,12 @@ int c4iw_ep_disconnect(struct c4iw_ep *ep, int abrupt, gfp_t gfp)
PDBG("%s ep %p state %s, abrupt %d\n", __func__, ep,
states[ep->com.state], abrupt);
+ /*
+ * Ref the ep here in case we have fatal errors causing the
+ * ep to be released and freed.
+ */
+ c4iw_get_ep(&ep->com);
+
rdev = &ep->com.dev->rdev;
if (c4iw_fatal_error(rdev)) {
fatal = 1;
@@ -3418,10 +3616,30 @@ int c4iw_ep_disconnect(struct c4iw_ep *ep, int abrupt, gfp_t gfp)
set_bit(EP_DISC_CLOSE, &ep->com.history);
ret = send_halfclose(ep, gfp);
}
- if (ret)
+ if (ret) {
+ set_bit(EP_DISC_FAIL, &ep->com.history);
+ if (!abrupt) {
+ stop_ep_timer(ep);
+ close_complete_upcall(ep, -EIO);
+ }
+ if (ep->com.qp) {
+ struct c4iw_qp_attributes attrs;
+
+ attrs.next_state = C4IW_QP_STATE_ERROR;
+ ret = c4iw_modify_qp(ep->com.qp->rhp,
+ ep->com.qp,
+ C4IW_QP_ATTR_NEXT_STATE,
+ &attrs, 1);
+ if (ret)
+ pr_err(MOD
+ "%s - qp <- error failed!\n",
+ __func__);
+ }
fatal = 1;
+ }
}
mutex_unlock(&ep->com.mutex);
+ c4iw_put_ep(&ep->com);
if (fatal)
release_ep_resources(ep);
return ret;
@@ -3676,7 +3894,7 @@ static int rx_pkt(struct c4iw_dev *dev, struct sk_buff *skb)
struct cpl_pass_accept_req *req = (void *)(rss + 1);
struct l2t_entry *e;
struct dst_entry *dst;
- struct c4iw_ep *lep;
+ struct c4iw_ep *lep = NULL;
u16 window;
struct port_info *pi;
struct net_device *pdev;
@@ -3701,7 +3919,7 @@ static int rx_pkt(struct c4iw_dev *dev, struct sk_buff *skb)
*/
stid = (__force int) cpu_to_be32((__force u32) rss->hash_val);
- lep = (struct c4iw_ep *)lookup_stid(dev->rdev.lldi.tids, stid);
+ lep = (struct c4iw_ep *)get_ep_from_stid(dev, stid);
if (!lep) {
PDBG("%s connect request on invalid stid %d\n", __func__, stid);
goto reject;
@@ -3802,6 +4020,8 @@ static int rx_pkt(struct c4iw_dev *dev, struct sk_buff *skb)
free_dst:
dst_release(dst);
reject:
+ if (lep)
+ c4iw_put_ep(&lep->com);
return 0;
}
@@ -3809,7 +4029,7 @@ reject:
* These are the real handlers that are called from a
* work queue.
*/
-static c4iw_handler_func work_handlers[NUM_CPL_CMDS] = {
+static c4iw_handler_func work_handlers[NUM_CPL_CMDS + NUM_FAKE_CPLS] = {
[CPL_ACT_ESTABLISH] = act_establish,
[CPL_ACT_OPEN_RPL] = act_open_rpl,
[CPL_RX_DATA] = rx_data,
@@ -3825,7 +4045,9 @@ static c4iw_handler_func work_handlers[NUM_CPL_CMDS] = {
[CPL_RDMA_TERMINATE] = terminate,
[CPL_FW4_ACK] = fw4_ack,
[CPL_FW6_MSG] = deferred_fw6_msg,
- [CPL_RX_PKT] = rx_pkt
+ [CPL_RX_PKT] = rx_pkt,
+ [FAKE_CPL_PUT_EP_SAFE] = _put_ep_safe,
+ [FAKE_CPL_PASS_PUT_EP_SAFE] = _put_pass_ep_safe
};
static void process_timeout(struct c4iw_ep *ep)
@@ -3839,11 +4061,12 @@ static void process_timeout(struct c4iw_ep *ep)
set_bit(TIMEDOUT, &ep->com.history);
switch (ep->com.state) {
case MPA_REQ_SENT:
- __state_set(&ep->com, ABORTING);
connect_reply_upcall(ep, -ETIMEDOUT);
break;
case MPA_REQ_WAIT:
- __state_set(&ep->com, ABORTING);
+ case MPA_REQ_RCVD:
+ case MPA_REP_SENT:
+ case FPDU_MODE:
break;
case CLOSING:
case MORIBUND:
@@ -3853,7 +4076,6 @@ static void process_timeout(struct c4iw_ep *ep)
ep->com.qp, C4IW_QP_ATTR_NEXT_STATE,
&attrs, 1);
}
- __state_set(&ep->com, ABORTING);
close_complete_upcall(ep, -ETIMEDOUT);
break;
case ABORTING:
@@ -3871,9 +4093,9 @@ static void process_timeout(struct c4iw_ep *ep)
__func__, ep, ep->hwtid, ep->com.state);
abort = 0;
}
- if (abort)
- abort_connection(ep, NULL, GFP_KERNEL);
mutex_unlock(&ep->com.mutex);
+ if (abort)
+ c4iw_ep_disconnect(ep, 1, GFP_KERNEL);
c4iw_put_ep(&ep->com);
}
@@ -4006,10 +4228,10 @@ static int peer_abort_intr(struct c4iw_dev *dev, struct sk_buff *skb)
{
struct cpl_abort_req_rss *req = cplhdr(skb);
struct c4iw_ep *ep;
- struct tid_info *t = dev->rdev.lldi.tids;
unsigned int tid = GET_TID(req);
- ep = lookup_tid(t, tid);
+ ep = get_ep_from_tid(dev, tid);
+ /* This EP will be dereferenced in peer_abort() */
if (!ep) {
printk(KERN_WARNING MOD
"Abort on non-existent endpoint, tid %d\n", tid);
@@ -4020,24 +4242,13 @@ static int peer_abort_intr(struct c4iw_dev *dev, struct sk_buff *skb)
PDBG("%s Negative advice on abort- tid %u status %d (%s)\n",
__func__, ep->hwtid, req->status,
neg_adv_str(req->status));
- ep->stats.abort_neg_adv++;
- dev->rdev.stats.neg_adv++;
- kfree_skb(skb);
- return 0;
+ goto out;
}
PDBG("%s ep %p tid %u state %u\n", __func__, ep, ep->hwtid,
ep->com.state);
- /*
- * Wake up any threads in rdma_init() or rdma_fini().
- * However, if we are on MPAv2 and want to retry with MPAv1
- * then, don't wake up yet.
- */
- if (mpa_rev == 2 && !ep->tried_with_mpa_v1) {
- if (ep->com.state != MPA_REQ_SENT)
- c4iw_wake_up(&ep->com.wr_wait, -ECONNRESET);
- } else
- c4iw_wake_up(&ep->com.wr_wait, -ECONNRESET);
+ c4iw_wake_up(&ep->com.wr_wait, -ECONNRESET);
+out:
sched(dev, skb);
return 0;
}
diff --git a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
index df43f871ab61..f6f34a75af27 100644
--- a/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
+++ b/drivers/infiniband/hw/cxgb4/iw_cxgb4.h
@@ -755,6 +755,7 @@ enum c4iw_ep_flags {
CLOSE_SENT = 3,
TIMEOUT = 4,
QP_REFERENCED = 5,
+ STOP_MPA_TIMER = 7,
};
enum c4iw_ep_history {
@@ -779,7 +780,13 @@ enum c4iw_ep_history {
EP_DISC_ABORT = 18,
CONN_RPL_UPCALL = 19,
ACT_RETRY_NOMEM = 20,
- ACT_RETRY_INUSE = 21
+ ACT_RETRY_INUSE = 21,
+ CLOSE_CON_RPL = 22,
+ EP_DISC_FAIL = 24,
+ QP_REFED = 25,
+ QP_DEREFED = 26,
+ CM_ID_REFED = 27,
+ CM_ID_DEREFED = 28,
};
struct c4iw_ep_common {
@@ -917,9 +924,8 @@ void c4iw_qp_rem_ref(struct ib_qp *qp);
struct ib_mr *c4iw_alloc_mr(struct ib_pd *pd,
enum ib_mr_type mr_type,
u32 max_num_sg);
-int c4iw_map_mr_sg(struct ib_mr *ibmr,
- struct scatterlist *sg,
- int sg_nents);
+int c4iw_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
+ unsigned int *sg_offset);
int c4iw_dealloc_mw(struct ib_mw *mw);
struct ib_mw *c4iw_alloc_mw(struct ib_pd *pd, enum ib_mw_type type,
struct ib_udata *udata);
diff --git a/drivers/infiniband/hw/cxgb4/mem.c b/drivers/infiniband/hw/cxgb4/mem.c
index 008be07d5604..55d0651ee4de 100644
--- a/drivers/infiniband/hw/cxgb4/mem.c
+++ b/drivers/infiniband/hw/cxgb4/mem.c
@@ -86,8 +86,9 @@ static int _c4iw_write_mem_dma_aligned(struct c4iw_rdev *rdev, u32 addr,
(wait ? FW_WR_COMPL_F : 0));
req->wr.wr_lo = wait ? (__force __be64)(unsigned long) &wr_wait : 0L;
req->wr.wr_mid = cpu_to_be32(FW_WR_LEN16_V(DIV_ROUND_UP(wr_len, 16)));
- req->cmd = cpu_to_be32(ULPTX_CMD_V(ULP_TX_MEM_WRITE));
- req->cmd |= cpu_to_be32(T5_ULP_MEMIO_ORDER_V(1));
+ req->cmd = cpu_to_be32(ULPTX_CMD_V(ULP_TX_MEM_WRITE) |
+ T5_ULP_MEMIO_ORDER_V(1) |
+ T5_ULP_MEMIO_FID_V(rdev->lldi.rxq_ids[0]));
req->dlen = cpu_to_be32(ULP_MEMIO_DATA_LEN_V(len>>5));
req->len16 = cpu_to_be32(DIV_ROUND_UP(wr_len-sizeof(req->wr), 16));
req->lock_addr = cpu_to_be32(ULP_MEMIO_ADDR_V(addr));
@@ -690,15 +691,14 @@ static int c4iw_set_page(struct ib_mr *ibmr, u64 addr)
return 0;
}
-int c4iw_map_mr_sg(struct ib_mr *ibmr,
- struct scatterlist *sg,
- int sg_nents)
+int c4iw_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
+ unsigned int *sg_offset)
{
struct c4iw_mr *mhp = to_c4iw_mr(ibmr);
mhp->mpl_len = 0;
- return ib_sg_to_pages(ibmr, sg, sg_nents, c4iw_set_page);
+ return ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, c4iw_set_page);
}
int c4iw_dereg_mr(struct ib_mr *ib_mr)
diff --git a/drivers/infiniband/hw/cxgb4/provider.c b/drivers/infiniband/hw/cxgb4/provider.c
index 7574f394fdac..dd8a86b726d2 100644
--- a/drivers/infiniband/hw/cxgb4/provider.c
+++ b/drivers/infiniband/hw/cxgb4/provider.c
@@ -446,20 +446,59 @@ static ssize_t show_board(struct device *dev, struct device_attribute *attr,
c4iw_dev->rdev.lldi.pdev->device);
}
+enum counters {
+ IP4INSEGS,
+ IP4OUTSEGS,
+ IP4RETRANSSEGS,
+ IP4OUTRSTS,
+ IP6INSEGS,
+ IP6OUTSEGS,
+ IP6RETRANSSEGS,
+ IP6OUTRSTS,
+ NR_COUNTERS
+};
+
+static const char * const names[] = {
+ [IP4INSEGS] = "ip4InSegs",
+ [IP4OUTSEGS] = "ip4OutSegs",
+ [IP4RETRANSSEGS] = "ip4RetransSegs",
+ [IP4OUTRSTS] = "ip4OutRsts",
+ [IP6INSEGS] = "ip6InSegs",
+ [IP6OUTSEGS] = "ip6OutSegs",
+ [IP6RETRANSSEGS] = "ip6RetransSegs",
+ [IP6OUTRSTS] = "ip6OutRsts"
+};
+
+static struct rdma_hw_stats *c4iw_alloc_stats(struct ib_device *ibdev,
+ u8 port_num)
+{
+ BUILD_BUG_ON(ARRAY_SIZE(names) != NR_COUNTERS);
+
+ if (port_num != 0)
+ return NULL;
+
+ return rdma_alloc_hw_stats_struct(names, NR_COUNTERS,
+ RDMA_HW_STATS_DEFAULT_LIFESPAN);
+}
+
static int c4iw_get_mib(struct ib_device *ibdev,
- union rdma_protocol_stats *stats)
+ struct rdma_hw_stats *stats,
+ u8 port, int index)
{
struct tp_tcp_stats v4, v6;
struct c4iw_dev *c4iw_dev = to_c4iw_dev(ibdev);
cxgb4_get_tcp_stats(c4iw_dev->rdev.lldi.pdev, &v4, &v6);
- memset(stats, 0, sizeof *stats);
- stats->iw.tcpInSegs = v4.tcp_in_segs + v6.tcp_in_segs;
- stats->iw.tcpOutSegs = v4.tcp_out_segs + v6.tcp_out_segs;
- stats->iw.tcpRetransSegs = v4.tcp_retrans_segs + v6.tcp_retrans_segs;
- stats->iw.tcpOutRsts = v4.tcp_out_rsts + v6.tcp_out_rsts;
-
- return 0;
+ stats->value[IP4INSEGS] = v4.tcp_in_segs;
+ stats->value[IP4OUTSEGS] = v4.tcp_out_segs;
+ stats->value[IP4RETRANSSEGS] = v4.tcp_retrans_segs;
+ stats->value[IP4OUTRSTS] = v4.tcp_out_rsts;
+ stats->value[IP6INSEGS] = v6.tcp_in_segs;
+ stats->value[IP6OUTSEGS] = v6.tcp_out_segs;
+ stats->value[IP6RETRANSSEGS] = v6.tcp_retrans_segs;
+ stats->value[IP6OUTRSTS] = v6.tcp_out_rsts;
+
+ return stats->num_counters;
}
static DEVICE_ATTR(hw_rev, S_IRUGO, show_rev, NULL);
@@ -562,7 +601,8 @@ int c4iw_register_device(struct c4iw_dev *dev)
dev->ibdev.req_notify_cq = c4iw_arm_cq;
dev->ibdev.post_send = c4iw_post_send;
dev->ibdev.post_recv = c4iw_post_receive;
- dev->ibdev.get_protocol_stats = c4iw_get_mib;
+ dev->ibdev.alloc_hw_stats = c4iw_alloc_stats;
+ dev->ibdev.get_hw_stats = c4iw_get_mib;
dev->ibdev.uverbs_abi_ver = C4IW_UVERBS_ABI_VERSION;
dev->ibdev.get_port_immutable = c4iw_port_immutable;
dev->ibdev.drain_sq = c4iw_drain_sq;
diff --git a/drivers/staging/rdma/hfi1/Kconfig b/drivers/infiniband/hw/hfi1/Kconfig
index a925fb0db706..a925fb0db706 100644
--- a/drivers/staging/rdma/hfi1/Kconfig
+++ b/drivers/infiniband/hw/hfi1/Kconfig
diff --git a/drivers/staging/rdma/hfi1/Makefile b/drivers/infiniband/hw/hfi1/Makefile
index 8dc59382ee96..9b5382c94b0c 100644
--- a/drivers/staging/rdma/hfi1/Makefile
+++ b/drivers/infiniband/hw/hfi1/Makefile
@@ -7,7 +7,7 @@
#
obj-$(CONFIG_INFINIBAND_HFI1) += hfi1.o
-hfi1-y := affinity.o chip.o device.o diag.o driver.o efivar.o \
+hfi1-y := affinity.o chip.o device.o driver.o efivar.o \
eprom.o file_ops.o firmware.o \
init.o intr.o mad.o mmu_rb.o pcie.o pio.o pio_copy.o platform.o \
qp.o qsfp.o rc.o ruc.o sdma.o sysfs.o trace.o twsi.o \
diff --git a/drivers/staging/rdma/hfi1/affinity.c b/drivers/infiniband/hw/hfi1/affinity.c
index 2cb8ca77f876..6e7050ab9e16 100644
--- a/drivers/staging/rdma/hfi1/affinity.c
+++ b/drivers/infiniband/hw/hfi1/affinity.c
@@ -53,20 +53,6 @@
#include "sdma.h"
#include "trace.h"
-struct cpu_mask_set {
- struct cpumask mask;
- struct cpumask used;
- uint gen;
-};
-
-struct hfi1_affinity {
- struct cpu_mask_set def_intr;
- struct cpu_mask_set rcv_intr;
- struct cpu_mask_set proc;
- /* spin lock to protect affinity struct */
- spinlock_t lock;
-};
-
/* Name of IRQ types, indexed by enum irq_type */
static const char * const irq_type_names[] = {
"SDMA",
@@ -82,6 +68,48 @@ static inline void init_cpu_mask_set(struct cpu_mask_set *set)
set->gen = 0;
}
+/* Initialize non-HT cpu cores mask */
+int init_real_cpu_mask(struct hfi1_devdata *dd)
+{
+ struct hfi1_affinity *info;
+ int possible, curr_cpu, i, ht;
+
+ info = kzalloc(sizeof(*info), GFP_KERNEL);
+ if (!info)
+ return -ENOMEM;
+
+ cpumask_clear(&info->real_cpu_mask);
+
+ /* Start with cpu online mask as the real cpu mask */
+ cpumask_copy(&info->real_cpu_mask, cpu_online_mask);
+
+ /*
+ * Remove HT cores from the real cpu mask. Do this in two steps below.
+ */
+ possible = cpumask_weight(&info->real_cpu_mask);
+ ht = cpumask_weight(topology_sibling_cpumask(
+ cpumask_first(&info->real_cpu_mask)));
+ /*
+ * Step 1. Skip over the first N HT siblings and use them as the
+ * "real" cores. Assumes that HT cores are not enumerated in
+ * succession (except in the single core case).
+ */
+ curr_cpu = cpumask_first(&info->real_cpu_mask);
+ for (i = 0; i < possible / ht; i++)
+ curr_cpu = cpumask_next(curr_cpu, &info->real_cpu_mask);
+ /*
+ * Step 2. Remove the remaining HT siblings. Use cpumask_next() to
+ * skip any gaps.
+ */
+ for (; i < possible; i++) {
+ cpumask_clear_cpu(curr_cpu, &info->real_cpu_mask);
+ curr_cpu = cpumask_next(curr_cpu, &info->real_cpu_mask);
+ }
+
+ dd->affinity = info;
+ return 0;
+}
+
/*
* Interrupt affinity.
*
@@ -93,20 +121,17 @@ static inline void init_cpu_mask_set(struct cpu_mask_set *set)
* to the node relative 1 as necessary.
*
*/
-int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
+void hfi1_dev_affinity_init(struct hfi1_devdata *dd)
{
int node = pcibus_to_node(dd->pcidev->bus);
- struct hfi1_affinity *info;
+ struct hfi1_affinity *info = dd->affinity;
const struct cpumask *local_mask;
- int curr_cpu, possible, i, ht;
+ int curr_cpu, possible, i;
if (node < 0)
node = numa_node_id();
dd->node = node;
- info = kzalloc(sizeof(*info), GFP_KERNEL);
- if (!info)
- return -ENOMEM;
spin_lock_init(&info->lock);
init_cpu_mask_set(&info->def_intr);
@@ -116,30 +141,8 @@ int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
local_mask = cpumask_of_node(dd->node);
if (cpumask_first(local_mask) >= nr_cpu_ids)
local_mask = topology_core_cpumask(0);
- /* use local mask as default */
- cpumask_copy(&info->def_intr.mask, local_mask);
- /*
- * Remove HT cores from the default mask. Do this in two steps below.
- */
- possible = cpumask_weight(&info->def_intr.mask);
- ht = cpumask_weight(topology_sibling_cpumask(
- cpumask_first(&info->def_intr.mask)));
- /*
- * Step 1. Skip over the first N HT siblings and use them as the
- * "real" cores. Assumes that HT cores are not enumerated in
- * succession (except in the single core case).
- */
- curr_cpu = cpumask_first(&info->def_intr.mask);
- for (i = 0; i < possible / ht; i++)
- curr_cpu = cpumask_next(curr_cpu, &info->def_intr.mask);
- /*
- * Step 2. Remove the remaining HT siblings. Use cpumask_next() to
- * skip any gaps.
- */
- for (; i < possible; i++) {
- cpumask_clear_cpu(curr_cpu, &info->def_intr.mask);
- curr_cpu = cpumask_next(curr_cpu, &info->def_intr.mask);
- }
+ /* Use the "real" cpu mask of this node as the default */
+ cpumask_and(&info->def_intr.mask, &info->real_cpu_mask, local_mask);
/* fill in the receive list */
possible = cpumask_weight(&info->def_intr.mask);
@@ -167,8 +170,6 @@ int hfi1_dev_affinity_init(struct hfi1_devdata *dd)
}
cpumask_copy(&info->proc.mask, cpu_online_mask);
- dd->affinity = info;
- return 0;
}
void hfi1_dev_affinity_free(struct hfi1_devdata *dd)
diff --git a/drivers/staging/rdma/hfi1/affinity.h b/drivers/infiniband/hw/hfi1/affinity.h
index b287e4963024..20f52fe74091 100644
--- a/drivers/staging/rdma/hfi1/affinity.h
+++ b/drivers/infiniband/hw/hfi1/affinity.h
@@ -64,10 +64,27 @@ enum affinity_flags {
AFF_IRQ_LOCAL
};
+struct cpu_mask_set {
+ struct cpumask mask;
+ struct cpumask used;
+ uint gen;
+};
+
+struct hfi1_affinity {
+ struct cpu_mask_set def_intr;
+ struct cpu_mask_set rcv_intr;
+ struct cpu_mask_set proc;
+ struct cpumask real_cpu_mask;
+ /* spin lock to protect affinity struct */
+ spinlock_t lock;
+};
+
struct hfi1_msix_entry;
+/* Initialize non-HT cpu cores mask */
+int init_real_cpu_mask(struct hfi1_devdata *);
/* Initialize driver affinity data */
-int hfi1_dev_affinity_init(struct hfi1_devdata *);
+void hfi1_dev_affinity_init(struct hfi1_devdata *);
/* Free driver affinity data */
void hfi1_dev_affinity_free(struct hfi1_devdata *);
/*
diff --git a/drivers/staging/rdma/hfi1/aspm.h b/drivers/infiniband/hw/hfi1/aspm.h
index 0d58fe3b49b5..0d58fe3b49b5 100644
--- a/drivers/staging/rdma/hfi1/aspm.h
+++ b/drivers/infiniband/hw/hfi1/aspm.h
diff --git a/drivers/staging/rdma/hfi1/chip.c b/drivers/infiniband/hw/hfi1/chip.c
index 16eb653903e0..3b876da745a1 100644
--- a/drivers/staging/rdma/hfi1/chip.c
+++ b/drivers/infiniband/hw/hfi1/chip.c
@@ -123,6 +123,8 @@ struct flag_table {
#define MIN_KERNEL_KCTXTS 2
#define FIRST_KERNEL_KCTXT 1
+/* sizes for both the QP and RSM map tables */
+#define NUM_MAP_ENTRIES 256
#define NUM_MAP_REGS 32
/* Bit offset into the GUID which carries HFI id information */
@@ -1029,9 +1031,13 @@ static int thermal_init(struct hfi1_devdata *dd);
static int wait_logical_linkstate(struct hfi1_pportdata *ppd, u32 state,
int msecs);
static void read_planned_down_reason_code(struct hfi1_devdata *dd, u8 *pdrrc);
+static void read_link_down_reason(struct hfi1_devdata *dd, u8 *ldr);
static void handle_temp_err(struct hfi1_devdata *);
static void dc_shutdown(struct hfi1_devdata *);
static void dc_start(struct hfi1_devdata *);
+static int qos_rmt_entries(struct hfi1_devdata *dd, unsigned int *mp,
+ unsigned int *np);
+static void remove_full_mgmt_pkey(struct hfi1_pportdata *ppd);
/*
* Error interrupt table entry. This is used as input to the interrupt
@@ -5661,7 +5667,7 @@ static int sc_to_vl(struct hfi1_devdata *dd, int sw_index)
sci = &dd->send_contexts[sw_index];
/* there is no information for user (PSM) and ack contexts */
- if (sci->type != SC_KERNEL)
+ if ((sci->type != SC_KERNEL) && (sci->type != SC_VL15))
return -1;
sc = sci->sc;
@@ -6100,7 +6106,7 @@ int acquire_lcb_access(struct hfi1_devdata *dd, int sleep_ok)
}
/* this access is valid only when the link is up */
- if ((ppd->host_link_state & HLS_UP) == 0) {
+ if (ppd->host_link_state & HLS_DOWN) {
dd_dev_info(dd, "%s: link state %s not up\n",
__func__, link_state_name(ppd->host_link_state));
ret = -EBUSY;
@@ -6199,18 +6205,13 @@ static void hreq_response(struct hfi1_devdata *dd, u8 return_code, u16 rsp_data)
/*
* Handle host requests from the 8051.
- *
- * This is a work-queue function outside of the interrupt.
*/
-void handle_8051_request(struct work_struct *work)
+static void handle_8051_request(struct hfi1_pportdata *ppd)
{
- struct hfi1_pportdata *ppd = container_of(work, struct hfi1_pportdata,
- dc_host_req_work);
struct hfi1_devdata *dd = ppd->dd;
u64 reg;
u16 data = 0;
- u8 type, i, lanes, *cache = ppd->qsfp_info.cache;
- u8 cdr_ctrl_byte = cache[QSFP_CDR_CTRL_BYTE_OFFS];
+ u8 type;
reg = read_csr(dd, DC_DC8051_CFG_EXT_DEV_1);
if ((reg & DC_DC8051_CFG_EXT_DEV_1_REQ_NEW_SMASK) == 0)
@@ -6231,46 +6232,11 @@ void handle_8051_request(struct work_struct *work)
case HREQ_READ_CONFIG:
case HREQ_SET_TX_EQ_ABS:
case HREQ_SET_TX_EQ_REL:
+ case HREQ_ENABLE:
dd_dev_info(dd, "8051 request: request 0x%x not supported\n",
type);
hreq_response(dd, HREQ_NOT_SUPPORTED, 0);
break;
-
- case HREQ_ENABLE:
- lanes = data & 0xF;
- for (i = 0; lanes; lanes >>= 1, i++) {
- if (!(lanes & 1))
- continue;
- if (data & 0x200) {
- /* enable TX CDR */
- if (cache[QSFP_MOD_PWR_OFFS] & 0x8 &&
- cache[QSFP_CDR_INFO_OFFS] & 0x80)
- cdr_ctrl_byte |= (1 << (i + 4));
- } else {
- /* disable TX CDR */
- if (cache[QSFP_MOD_PWR_OFFS] & 0x8 &&
- cache[QSFP_CDR_INFO_OFFS] & 0x80)
- cdr_ctrl_byte &= ~(1 << (i + 4));
- }
-
- if (data & 0x800) {
- /* enable RX CDR */
- if (cache[QSFP_MOD_PWR_OFFS] & 0x4 &&
- cache[QSFP_CDR_INFO_OFFS] & 0x40)
- cdr_ctrl_byte |= (1 << i);
- } else {
- /* disable RX CDR */
- if (cache[QSFP_MOD_PWR_OFFS] & 0x4 &&
- cache[QSFP_CDR_INFO_OFFS] & 0x40)
- cdr_ctrl_byte &= ~(1 << i);
- }
- }
- one_qsfp_write(ppd, dd->hfi1_id, QSFP_CDR_CTRL_BYTE_OFFS,
- &cdr_ctrl_byte, 1);
- hreq_response(dd, HREQ_SUCCESS, data);
- refresh_qsfp_cache(ppd, &ppd->qsfp_info);
- break;
-
case HREQ_CONFIG_DONE:
hreq_response(dd, HREQ_SUCCESS, 0);
break;
@@ -6278,7 +6244,6 @@ void handle_8051_request(struct work_struct *work)
case HREQ_INTERFACE_TEST:
hreq_response(dd, HREQ_SUCCESS, data);
break;
-
default:
dd_dev_err(dd, "8051 request: unknown request 0x%x\n", type);
hreq_response(dd, HREQ_NOT_SUPPORTED, 0);
@@ -6849,6 +6814,75 @@ static void reset_neighbor_info(struct hfi1_pportdata *ppd)
ppd->neighbor_fm_security = 0;
}
+static const char * const link_down_reason_strs[] = {
+ [OPA_LINKDOWN_REASON_NONE] = "None",
+ [OPA_LINKDOWN_REASON_RCV_ERROR_0] = "Receive error 0",
+ [OPA_LINKDOWN_REASON_BAD_PKT_LEN] = "Bad packet length",
+ [OPA_LINKDOWN_REASON_PKT_TOO_LONG] = "Packet too long",
+ [OPA_LINKDOWN_REASON_PKT_TOO_SHORT] = "Packet too short",
+ [OPA_LINKDOWN_REASON_BAD_SLID] = "Bad SLID",
+ [OPA_LINKDOWN_REASON_BAD_DLID] = "Bad DLID",
+ [OPA_LINKDOWN_REASON_BAD_L2] = "Bad L2",
+ [OPA_LINKDOWN_REASON_BAD_SC] = "Bad SC",
+ [OPA_LINKDOWN_REASON_RCV_ERROR_8] = "Receive error 8",
+ [OPA_LINKDOWN_REASON_BAD_MID_TAIL] = "Bad mid tail",
+ [OPA_LINKDOWN_REASON_RCV_ERROR_10] = "Receive error 10",
+ [OPA_LINKDOWN_REASON_PREEMPT_ERROR] = "Preempt error",
+ [OPA_LINKDOWN_REASON_PREEMPT_VL15] = "Preempt vl15",
+ [OPA_LINKDOWN_REASON_BAD_VL_MARKER] = "Bad VL marker",
+ [OPA_LINKDOWN_REASON_RCV_ERROR_14] = "Receive error 14",
+ [OPA_LINKDOWN_REASON_RCV_ERROR_15] = "Receive error 15",
+ [OPA_LINKDOWN_REASON_BAD_HEAD_DIST] = "Bad head distance",
+ [OPA_LINKDOWN_REASON_BAD_TAIL_DIST] = "Bad tail distance",
+ [OPA_LINKDOWN_REASON_BAD_CTRL_DIST] = "Bad control distance",
+ [OPA_LINKDOWN_REASON_BAD_CREDIT_ACK] = "Bad credit ack",
+ [OPA_LINKDOWN_REASON_UNSUPPORTED_VL_MARKER] = "Unsupported VL marker",
+ [OPA_LINKDOWN_REASON_BAD_PREEMPT] = "Bad preempt",
+ [OPA_LINKDOWN_REASON_BAD_CONTROL_FLIT] = "Bad control flit",
+ [OPA_LINKDOWN_REASON_EXCEED_MULTICAST_LIMIT] = "Exceed multicast limit",
+ [OPA_LINKDOWN_REASON_RCV_ERROR_24] = "Receive error 24",
+ [OPA_LINKDOWN_REASON_RCV_ERROR_25] = "Receive error 25",
+ [OPA_LINKDOWN_REASON_RCV_ERROR_26] = "Receive error 26",
+ [OPA_LINKDOWN_REASON_RCV_ERROR_27] = "Receive error 27",
+ [OPA_LINKDOWN_REASON_RCV_ERROR_28] = "Receive error 28",
+ [OPA_LINKDOWN_REASON_RCV_ERROR_29] = "Receive error 29",
+ [OPA_LINKDOWN_REASON_RCV_ERROR_30] = "Receive error 30",
+ [OPA_LINKDOWN_REASON_EXCESSIVE_BUFFER_OVERRUN] =
+ "Excessive buffer overrun",
+ [OPA_LINKDOWN_REASON_UNKNOWN] = "Unknown",
+ [OPA_LINKDOWN_REASON_REBOOT] = "Reboot",
+ [OPA_LINKDOWN_REASON_NEIGHBOR_UNKNOWN] = "Neighbor unknown",
+ [OPA_LINKDOWN_REASON_FM_BOUNCE] = "FM bounce",
+ [OPA_LINKDOWN_REASON_SPEED_POLICY] = "Speed policy",
+ [OPA_LINKDOWN_REASON_WIDTH_POLICY] = "Width policy",
+ [OPA_LINKDOWN_REASON_DISCONNECTED] = "Disconnected",
+ [OPA_LINKDOWN_REASON_LOCAL_MEDIA_NOT_INSTALLED] =
+ "Local media not installed",
+ [OPA_LINKDOWN_REASON_NOT_INSTALLED] = "Not installed",
+ [OPA_LINKDOWN_REASON_CHASSIS_CONFIG] = "Chassis config",
+ [OPA_LINKDOWN_REASON_END_TO_END_NOT_INSTALLED] =
+ "End to end not installed",
+ [OPA_LINKDOWN_REASON_POWER_POLICY] = "Power policy",
+ [OPA_LINKDOWN_REASON_LINKSPEED_POLICY] = "Link speed policy",
+ [OPA_LINKDOWN_REASON_LINKWIDTH_POLICY] = "Link width policy",
+ [OPA_LINKDOWN_REASON_SWITCH_MGMT] = "Switch management",
+ [OPA_LINKDOWN_REASON_SMA_DISABLED] = "SMA disabled",
+ [OPA_LINKDOWN_REASON_TRANSIENT] = "Transient"
+};
+
+/* return the neighbor link down reason string */
+static const char *link_down_reason_str(u8 reason)
+{
+ const char *str = NULL;
+
+ if (reason < ARRAY_SIZE(link_down_reason_strs))
+ str = link_down_reason_strs[reason];
+ if (!str)
+ str = "(invalid)";
+
+ return str;
+}
+
/*
* Handle a link down interrupt from the 8051.
*
@@ -6857,8 +6891,11 @@ static void reset_neighbor_info(struct hfi1_pportdata *ppd)
void handle_link_down(struct work_struct *work)
{
u8 lcl_reason, neigh_reason = 0;
+ u8 link_down_reason;
struct hfi1_pportdata *ppd = container_of(work, struct hfi1_pportdata,
- link_down_work);
+ link_down_work);
+ int was_up;
+ static const char ldr_str[] = "Link down reason: ";
if ((ppd->host_link_state &
(HLS_DN_POLL | HLS_VERIFY_CAP | HLS_GOING_UP)) &&
@@ -6867,21 +6904,66 @@ void handle_link_down(struct work_struct *work)
HFI1_ODR_MASK(OPA_LINKDOWN_REASON_NOT_INSTALLED);
/* Go offline first, then deal with reading/writing through 8051 */
+ was_up = !!(ppd->host_link_state & HLS_UP);
set_link_state(ppd, HLS_DN_OFFLINE);
- lcl_reason = 0;
- read_planned_down_reason_code(ppd->dd, &neigh_reason);
+ if (was_up) {
+ lcl_reason = 0;
+ /* link down reason is only valid if the link was up */
+ read_link_down_reason(ppd->dd, &link_down_reason);
+ switch (link_down_reason) {
+ case LDR_LINK_TRANSFER_ACTIVE_LOW:
+ /* the link went down, no idle message reason */
+ dd_dev_info(ppd->dd, "%sUnexpected link down\n",
+ ldr_str);
+ break;
+ case LDR_RECEIVED_LINKDOWN_IDLE_MSG:
+ /*
+ * The neighbor reason is only valid if an idle message
+ * was received for it.
+ */
+ read_planned_down_reason_code(ppd->dd, &neigh_reason);
+ dd_dev_info(ppd->dd,
+ "%sNeighbor link down message %d, %s\n",
+ ldr_str, neigh_reason,
+ link_down_reason_str(neigh_reason));
+ break;
+ case LDR_RECEIVED_HOST_OFFLINE_REQ:
+ dd_dev_info(ppd->dd,
+ "%sHost requested link to go offline\n",
+ ldr_str);
+ break;
+ default:
+ dd_dev_info(ppd->dd, "%sUnknown reason 0x%x\n",
+ ldr_str, link_down_reason);
+ break;
+ }
- /*
- * If no reason, assume peer-initiated but missed
- * LinkGoingDown idle flits.
- */
- if (neigh_reason == 0)
- lcl_reason = OPA_LINKDOWN_REASON_NEIGHBOR_UNKNOWN;
+ /*
+ * If no reason, assume peer-initiated but missed
+ * LinkGoingDown idle flits.
+ */
+ if (neigh_reason == 0)
+ lcl_reason = OPA_LINKDOWN_REASON_NEIGHBOR_UNKNOWN;
+ } else {
+ /* went down while polling or going up */
+ lcl_reason = OPA_LINKDOWN_REASON_TRANSIENT;
+ }
set_link_down_reason(ppd, lcl_reason, neigh_reason, 0);
+ /* inform the SMA when the link transitions from up to down */
+ if (was_up && ppd->local_link_down_reason.sma == 0 &&
+ ppd->neigh_link_down_reason.sma == 0) {
+ ppd->local_link_down_reason.sma =
+ ppd->local_link_down_reason.latest;
+ ppd->neigh_link_down_reason.sma =
+ ppd->neigh_link_down_reason.latest;
+ }
+
reset_neighbor_info(ppd);
+ if (ppd->mgmt_allowed)
+ remove_full_mgmt_pkey(ppd);
/* disable the port */
clear_rcvctrl(ppd->dd, RCV_CTRL_RCV_PORT_ENABLE_SMASK);
@@ -6890,7 +6972,7 @@ void handle_link_down(struct work_struct *work)
* If there is no cable attached, turn the DC off. Otherwise,
* start the link bring up.
*/
- if (!qsfp_mod_present(ppd)) {
+ if (ppd->port_type == PORT_TYPE_QSFP && !qsfp_mod_present(ppd)) {
dc_shutdown(ppd->dd);
} else {
tune_serdes(ppd);
@@ -6990,6 +7072,12 @@ static void add_full_mgmt_pkey(struct hfi1_pportdata *ppd)
(void)hfi1_set_ib_cfg(ppd, HFI1_IB_CFG_PKEYS, 0);
}
+static void remove_full_mgmt_pkey(struct hfi1_pportdata *ppd)
+{
+ ppd->pkeys[2] = 0;
+ (void)hfi1_set_ib_cfg(ppd, HFI1_IB_CFG_PKEYS, 0);
+}
+
/*
* Convert the given link width to the OPA link width bitmask.
*/
@@ -7350,7 +7438,7 @@ void apply_link_downgrade_policy(struct hfi1_pportdata *ppd, int refresh_widths)
retry:
mutex_lock(&ppd->hls_lock);
/* only apply if the link is up */
- if (!(ppd->host_link_state & HLS_UP)) {
+ if (ppd->host_link_state & HLS_DOWN) {
/* still going up..wait and retry */
if (ppd->host_link_state & HLS_GOING_UP) {
if (++tries < 1000) {
@@ -7373,7 +7461,11 @@ retry:
ppd->link_width_downgrade_rx_active = rx;
}
- if (lwde == 0) {
+ if (ppd->link_width_downgrade_tx_active == 0 ||
+ ppd->link_width_downgrade_rx_active == 0) {
+ /* the 8051 reported a dead link as a downgrade */
+ dd_dev_err(ppd->dd, "Link downgrade is really a link down, ignoring\n");
+ } else if (lwde == 0) {
/* downgrade is disabled */
/* bounce if not at starting active width */
@@ -7534,7 +7626,7 @@ static void handle_8051_interrupt(struct hfi1_devdata *dd, u32 unused, u64 reg)
host_msg &= ~(u64)LINKUP_ACHIEVED;
}
if (host_msg & EXT_DEVICE_CFG_REQ) {
- queue_work(ppd->hfi1_wq, &ppd->dc_host_req_work);
+ handle_8051_request(ppd);
host_msg &= ~(u64)EXT_DEVICE_CFG_REQ;
}
if (host_msg & VERIFY_CAP_FRAME) {
@@ -8660,6 +8752,14 @@ static void read_planned_down_reason_code(struct hfi1_devdata *dd, u8 *pdrrc)
*pdrrc = (frame >> DOWN_REMOTE_REASON_SHIFT) & DOWN_REMOTE_REASON_MASK;
}
+static void read_link_down_reason(struct hfi1_devdata *dd, u8 *ldr)
+{
+ u32 frame;
+
+ read_8051_config(dd, LINK_DOWN_REASON, GENERAL_CONFIG, &frame);
+ *ldr = (frame & 0xff);
+}
+
static int read_tx_settings(struct hfi1_devdata *dd,
u8 *enable_lane_tx,
u8 *tx_polarity_inversion,
@@ -9049,9 +9149,9 @@ set_local_link_attributes_fail:
}
/*
- * Call this to start the link. Schedule a retry if the cable is not
- * present or if unable to start polling. Do not do anything if the
- * link is disabled. Returns 0 if link is disabled or moved to polling
+ * Call this to start the link.
+ * Do not do anything if the link is disabled.
+ * Returns 0 if link is disabled, moved to polling, or the driver is not ready.
*/
int start_link(struct hfi1_pportdata *ppd)
{
@@ -9068,15 +9168,7 @@ int start_link(struct hfi1_pportdata *ppd)
return 0;
}
- if (qsfp_mod_present(ppd) || loopback == LOOPBACK_SERDES ||
- loopback == LOOPBACK_LCB ||
- ppd->dd->icode == ICODE_FUNCTIONAL_SIMULATOR)
- return set_link_state(ppd, HLS_DN_POLL);
-
- dd_dev_info(ppd->dd,
- "%s: stopping link start because no cable is present\n",
- __func__);
- return -EAGAIN;
+ return set_link_state(ppd, HLS_DN_POLL);
}
static void wait_for_qsfp_init(struct hfi1_pportdata *ppd)
@@ -9129,9 +9221,6 @@ void reset_qsfp(struct hfi1_pportdata *ppd)
/* Reset the QSFP */
mask = (u64)QSFP_HFI0_RESET_N;
- qsfp_mask = read_csr(dd, dd->hfi1_id ? ASIC_QSFP2_OE : ASIC_QSFP1_OE);
- qsfp_mask |= mask;
- write_csr(dd, dd->hfi1_id ? ASIC_QSFP2_OE : ASIC_QSFP1_OE, qsfp_mask);
qsfp_mask = read_csr(dd,
dd->hfi1_id ? ASIC_QSFP2_OUT : ASIC_QSFP1_OUT);
@@ -9169,6 +9258,12 @@ static int handle_qsfp_error_conditions(struct hfi1_pportdata *ppd,
dd_dev_info(dd, "%s: QSFP cable temperature too low\n",
__func__);
+ /*
+ * The remaining alarms/warnings don't matter if the link is down.
+ */
+ if (ppd->host_link_state & HLS_DOWN)
+ return 0;
+
if ((qsfp_interrupt_status[1] & QSFP_HIGH_VCC_ALARM) ||
(qsfp_interrupt_status[1] & QSFP_HIGH_VCC_WARNING))
dd_dev_info(dd, "%s: QSFP supply voltage too high\n",
@@ -9247,7 +9342,7 @@ static int handle_qsfp_error_conditions(struct hfi1_pportdata *ppd,
return 0;
}
-/* This routine will only be scheduled if the QSFP module is present */
+/* This routine will only be scheduled if the QSFP module present is asserted */
void qsfp_event(struct work_struct *work)
{
struct qsfp_data *qd;
@@ -9263,9 +9358,8 @@ void qsfp_event(struct work_struct *work)
return;
/*
- * Turn DC back on after cables has been
- * re-inserted. Up until now, the DC has been in
- * reset to save power.
+ * Turn DC back on after cable has been re-inserted. Up until
+ * now, the DC has been in reset to save power.
*/
dc_start(dd);
@@ -9397,7 +9491,15 @@ int bringup_serdes(struct hfi1_pportdata *ppd)
return ret;
}
- /* tune the SERDES to a ballpark setting for
+ get_port_type(ppd);
+ if (ppd->port_type == PORT_TYPE_QSFP) {
+ set_qsfp_int_n(ppd, 0);
+ wait_for_qsfp_init(ppd);
+ set_qsfp_int_n(ppd, 1);
+ }
+
+ /*
+ * Tune the SerDes to a ballpark setting for
* optimal signal and bit error rate
* Needs to be done before starting the link
*/
@@ -9676,6 +9778,7 @@ static void set_send_length(struct hfi1_pportdata *ppd)
& SEND_LEN_CHECK1_LEN_VL15_MASK) <<
SEND_LEN_CHECK1_LEN_VL15_SHIFT;
int i;
+ u32 thres;
for (i = 0; i < ppd->vls_supported; i++) {
if (dd->vld[i].mtu > maxvlmtu)
@@ -9694,16 +9797,17 @@ static void set_send_length(struct hfi1_pportdata *ppd)
/* adjust kernel credit return thresholds based on new MTUs */
/* all kernel receive contexts have the same hdrqentsize */
for (i = 0; i < ppd->vls_supported; i++) {
- sc_set_cr_threshold(dd->vld[i].sc,
- sc_mtu_to_threshold(dd->vld[i].sc,
- dd->vld[i].mtu,
- dd->rcd[0]->
- rcvhdrqentsize));
- }
- sc_set_cr_threshold(dd->vld[15].sc,
- sc_mtu_to_threshold(dd->vld[15].sc,
- dd->vld[15].mtu,
+ thres = min(sc_percent_to_threshold(dd->vld[i].sc, 50),
+ sc_mtu_to_threshold(dd->vld[i].sc,
+ dd->vld[i].mtu,
dd->rcd[0]->rcvhdrqentsize));
+ sc_set_cr_threshold(dd->vld[i].sc, thres);
+ }
+ thres = min(sc_percent_to_threshold(dd->vld[15].sc, 50),
+ sc_mtu_to_threshold(dd->vld[15].sc,
+ dd->vld[15].mtu,
+ dd->rcd[0]->rcvhdrqentsize));
+ sc_set_cr_threshold(dd->vld[15].sc, thres);
/* Adjust maximum MTU for the port in DC */
dcmtu = maxvlmtu == 10240 ? DCC_CFG_PORT_MTU_CAP_10240 :
@@ -9989,7 +10093,7 @@ u32 driver_physical_state(struct hfi1_pportdata *ppd)
*/
u32 driver_logical_state(struct hfi1_pportdata *ppd)
{
- if (ppd->host_link_state && !(ppd->host_link_state & HLS_UP))
+ if (ppd->host_link_state && (ppd->host_link_state & HLS_DOWN))
return IB_PORT_DOWN;
switch (ppd->host_link_state & HLS_UP) {
@@ -10030,7 +10134,6 @@ int set_link_state(struct hfi1_pportdata *ppd, u32 state)
struct hfi1_devdata *dd = ppd->dd;
struct ib_event event = {.device = NULL};
int ret1, ret = 0;
- int was_up, is_down;
int orig_new_state, poll_bounce;
mutex_lock(&ppd->hls_lock);
@@ -10049,8 +10152,6 @@ int set_link_state(struct hfi1_pportdata *ppd, u32 state)
poll_bounce ? "(bounce) " : "",
link_state_reason_name(ppd, state));
- was_up = !!(ppd->host_link_state & HLS_UP);
-
/*
* If we're going to a (HLS_*) link state that implies the logical
* link state is neither of (IB_PORT_ARMED, IB_PORT_ACTIVE), then
@@ -10261,17 +10362,6 @@ int set_link_state(struct hfi1_pportdata *ppd, u32 state)
break;
}
- is_down = !!(ppd->host_link_state & (HLS_DN_POLL |
- HLS_DN_DISABLE | HLS_DN_OFFLINE));
-
- if (was_up && is_down && ppd->local_link_down_reason.sma == 0 &&
- ppd->neigh_link_down_reason.sma == 0) {
- ppd->local_link_down_reason.sma =
- ppd->local_link_down_reason.latest;
- ppd->neigh_link_down_reason.sma =
- ppd->neigh_link_down_reason.latest;
- }
-
goto done;
unexpected:
@@ -12673,22 +12763,24 @@ static int set_up_context_variables(struct hfi1_devdata *dd)
int total_contexts;
int ret;
unsigned ngroups;
+ int qos_rmt_count;
+ int user_rmt_reduced;
/*
- * Kernel contexts: (to be fixed later):
- * - min or 2 or 1 context/numa
+ * Kernel receive contexts:
+ * - min of 2 or 1 context/numa (excluding control context)
* - Context 0 - control context (VL15/multicast/error)
- * - Context 1 - default context
+ * - Context 1 - first kernel context
+ * - Context 2 - second kernel context
+ * ...
*/
if (n_krcvqs)
/*
- * Don't count context 0 in n_krcvqs since
- * is isn't used for normal verbs traffic.
- *
- * krcvqs will reflect number of kernel
- * receive contexts above 0.
+ * n_krcvqs is the sum of module parameter kernel receive
+ * contexts, krcvqs[]. It does not include the control
+ * context, so add that.
*/
- num_kernel_contexts = n_krcvqs + MIN_KERNEL_KCTXTS - 1;
+ num_kernel_contexts = n_krcvqs + 1;
else
num_kernel_contexts = num_online_nodes() + 1;
num_kernel_contexts =
@@ -12705,12 +12797,13 @@ static int set_up_context_variables(struct hfi1_devdata *dd)
num_kernel_contexts = dd->chip_send_contexts - num_vls - 1;
}
/*
- * User contexts: (to be fixed later)
- * - default to 1 user context per CPU if num_user_contexts is
- * negative
+ * User contexts:
+ * - default to 1 user context per real (non-HT) CPU core if
+ * num_user_contexts is negative
*/
if (num_user_contexts < 0)
- num_user_contexts = num_online_cpus();
+ num_user_contexts =
+ cpumask_weight(&dd->affinity->real_cpu_mask);
total_contexts = num_kernel_contexts + num_user_contexts;
@@ -12727,6 +12820,19 @@ static int set_up_context_variables(struct hfi1_devdata *dd)
total_contexts = num_kernel_contexts + num_user_contexts;
}
+ /* each user context requires an entry in the RMT */
+ qos_rmt_count = qos_rmt_entries(dd, NULL, NULL);
+ if (qos_rmt_count + num_user_contexts > NUM_MAP_ENTRIES) {
+ user_rmt_reduced = NUM_MAP_ENTRIES - qos_rmt_count;
+ dd_dev_err(dd,
+ "RMT size is reducing the number of user receive contexts from %d to %d\n",
+ (int)num_user_contexts,
+ user_rmt_reduced);
+ /* recalculate */
+ num_user_contexts = user_rmt_reduced;
+ total_contexts = num_kernel_contexts + num_user_contexts;
+ }
+
/* the first N are kernel contexts, the rest are user contexts */
dd->num_rcv_contexts = total_contexts;
dd->n_krcv_queues = num_kernel_contexts;
@@ -12776,12 +12882,13 @@ static int set_up_context_variables(struct hfi1_devdata *dd)
dd->num_send_contexts = ret;
dd_dev_info(
dd,
- "send contexts: chip %d, used %d (kernel %d, ack %d, user %d)\n",
+ "send contexts: chip %d, used %d (kernel %d, ack %d, user %d, vl15 %d)\n",
dd->chip_send_contexts,
dd->num_send_contexts,
dd->sc_sizes[SC_KERNEL].count,
dd->sc_sizes[SC_ACK].count,
- dd->sc_sizes[SC_USER].count);
+ dd->sc_sizes[SC_USER].count,
+ dd->sc_sizes[SC_VL15].count);
ret = 0; /* success */
}
@@ -13451,122 +13558,224 @@ static void init_qpmap_table(struct hfi1_devdata *dd,
int i;
u64 ctxt = first_ctxt;
- for (i = 0; i < 256;) {
+ for (i = 0; i < 256; i++) {
reg |= ctxt << (8 * (i % 8));
- i++;
ctxt++;
if (ctxt > last_ctxt)
ctxt = first_ctxt;
- if (i % 8 == 0) {
+ if (i % 8 == 7) {
write_csr(dd, regno, reg);
reg = 0;
regno += 8;
}
}
- if (i % 8)
- write_csr(dd, regno, reg);
add_rcvctrl(dd, RCV_CTRL_RCV_QP_MAP_ENABLE_SMASK
| RCV_CTRL_RCV_BYPASS_ENABLE_SMASK);
}
-/**
- * init_qos - init RX qos
- * @dd - device data
- * @first_context
- *
- * This routine initializes Rule 0 and the
- * RSM map table to implement qos.
- *
- * If all of the limit tests succeed,
- * qos is applied based on the array
- * interpretation of krcvqs where
- * entry 0 is VL0.
- *
- * The number of vl bits (n) and the number of qpn
- * bits (m) are computed to feed both the RSM map table
- * and the single rule.
- *
+struct rsm_map_table {
+ u64 map[NUM_MAP_REGS];
+ unsigned int used;
+};
+
+struct rsm_rule_data {
+ u8 offset;
+ u8 pkt_type;
+ u32 field1_off;
+ u32 field2_off;
+ u32 index1_off;
+ u32 index1_width;
+ u32 index2_off;
+ u32 index2_width;
+ u32 mask1;
+ u32 value1;
+ u32 mask2;
+ u32 value2;
+};
+
+/*
+ * Return an initialized RMT map table for users to fill in. OK if it
+ * returns NULL, indicating no table.
+ */
+static struct rsm_map_table *alloc_rsm_map_table(struct hfi1_devdata *dd)
+{
+ struct rsm_map_table *rmt;
+ u8 rxcontext = is_ax(dd) ? 0 : 0xff; /* 0 is default if a0 ver. */
+
+ rmt = kmalloc(sizeof(*rmt), GFP_KERNEL);
+ if (rmt) {
+ memset(rmt->map, rxcontext, sizeof(rmt->map));
+ rmt->used = 0;
+ }
+
+ return rmt;
+}
+
+/*
+ * Write the final RMT map table to the chip and free the table. OK if
+ * table is NULL.
*/
-static void init_qos(struct hfi1_devdata *dd, u32 first_ctxt)
+static void complete_rsm_map_table(struct hfi1_devdata *dd,
+ struct rsm_map_table *rmt)
{
+ int i;
+
+ if (rmt) {
+ /* write table to chip */
+ for (i = 0; i < NUM_MAP_REGS; i++)
+ write_csr(dd, RCV_RSM_MAP_TABLE + (8 * i), rmt->map[i]);
+
+ /* enable RSM */
+ add_rcvctrl(dd, RCV_CTRL_RCV_RSM_ENABLE_SMASK);
+ }
+}
+
+/*
+ * Add a receive side mapping rule.
+ */
+static void add_rsm_rule(struct hfi1_devdata *dd, u8 rule_index,
+ struct rsm_rule_data *rrd)
+{
+ write_csr(dd, RCV_RSM_CFG + (8 * rule_index),
+ (u64)rrd->offset << RCV_RSM_CFG_OFFSET_SHIFT |
+ 1ull << rule_index | /* enable bit */
+ (u64)rrd->pkt_type << RCV_RSM_CFG_PACKET_TYPE_SHIFT);
+ write_csr(dd, RCV_RSM_SELECT + (8 * rule_index),
+ (u64)rrd->field1_off << RCV_RSM_SELECT_FIELD1_OFFSET_SHIFT |
+ (u64)rrd->field2_off << RCV_RSM_SELECT_FIELD2_OFFSET_SHIFT |
+ (u64)rrd->index1_off << RCV_RSM_SELECT_INDEX1_OFFSET_SHIFT |
+ (u64)rrd->index1_width << RCV_RSM_SELECT_INDEX1_WIDTH_SHIFT |
+ (u64)rrd->index2_off << RCV_RSM_SELECT_INDEX2_OFFSET_SHIFT |
+ (u64)rrd->index2_width << RCV_RSM_SELECT_INDEX2_WIDTH_SHIFT);
+ write_csr(dd, RCV_RSM_MATCH + (8 * rule_index),
+ (u64)rrd->mask1 << RCV_RSM_MATCH_MASK1_SHIFT |
+ (u64)rrd->value1 << RCV_RSM_MATCH_VALUE1_SHIFT |
+ (u64)rrd->mask2 << RCV_RSM_MATCH_MASK2_SHIFT |
+ (u64)rrd->value2 << RCV_RSM_MATCH_VALUE2_SHIFT);
+}
+
+/* return the number of RSM map table entries that will be used for QOS */
+static int qos_rmt_entries(struct hfi1_devdata *dd, unsigned int *mp,
+ unsigned int *np)
+{
+ int i;
+ unsigned int m, n;
u8 max_by_vl = 0;
- unsigned qpns_per_vl, ctxt, i, qpn, n = 1, m;
- u64 *rsmmap;
- u64 reg;
- u8 rxcontext = is_ax(dd) ? 0 : 0xff; /* 0 is default if a0 ver. */
- /* validate */
+ /* is QOS active at all? */
if (dd->n_krcv_queues <= MIN_KERNEL_KCTXTS ||
num_vls == 1 ||
krcvqsset <= 1)
- goto bail;
- for (i = 0; i < min_t(unsigned, num_vls, krcvqsset); i++)
+ goto no_qos;
+
+ /* determine bits for qpn */
+ for (i = 0; i < min_t(unsigned int, num_vls, krcvqsset); i++)
if (krcvqs[i] > max_by_vl)
max_by_vl = krcvqs[i];
if (max_by_vl > 32)
- goto bail;
- qpns_per_vl = __roundup_pow_of_two(max_by_vl);
- /* determine bits vl */
- n = ilog2(num_vls);
- /* determine bits for qpn */
- m = ilog2(qpns_per_vl);
+ goto no_qos;
+ m = ilog2(__roundup_pow_of_two(max_by_vl));
+
+ /* determine bits for vl */
+ n = ilog2(__roundup_pow_of_two(num_vls));
+
+ /* reject if too much is used */
if ((m + n) > 7)
+ goto no_qos;
+
+ if (mp)
+ *mp = m;
+ if (np)
+ *np = n;
+
+ return 1 << (m + n);
+
+no_qos:
+ if (mp)
+ *mp = 0;
+ if (np)
+ *np = 0;
+ return 0;
+}
+
+/**
+ * init_qos - init RX qos
+ * @dd - device data
+ * @rmt - RSM map table
+ *
+ * This routine initializes Rule 0 and the RSM map table to implement
+ * quality of service (qos).
+ *
+ * If all of the limit tests succeed, qos is applied based on the array
+ * interpretation of krcvqs where entry 0 is VL0.
+ *
+ * The number of vl bits (n) and the number of qpn bits (m) are computed to
+ * feed both the RSM map table and the single rule.
+ */
+static void init_qos(struct hfi1_devdata *dd, struct rsm_map_table *rmt)
+{
+ struct rsm_rule_data rrd;
+ unsigned qpns_per_vl, ctxt, i, qpn, n = 1, m;
+ unsigned int rmt_entries;
+ u64 reg;
+
+ if (!rmt)
goto bail;
- if (num_vls * qpns_per_vl > dd->chip_rcv_contexts)
+ rmt_entries = qos_rmt_entries(dd, &m, &n);
+ if (rmt_entries == 0)
goto bail;
- rsmmap = kmalloc_array(NUM_MAP_REGS, sizeof(u64), GFP_KERNEL);
- if (!rsmmap)
+ qpns_per_vl = 1 << m;
+
+ /* enough room in the map table? */
+ rmt_entries = 1 << (m + n);
+ if (rmt->used + rmt_entries >= NUM_MAP_ENTRIES)
goto bail;
- memset(rsmmap, rxcontext, NUM_MAP_REGS * sizeof(u64));
- /* init the local copy of the table */
- for (i = 0, ctxt = first_ctxt; i < num_vls; i++) {
+
+ /* add qos entries to the RSM map table */
+ for (i = 0, ctxt = FIRST_KERNEL_KCTXT; i < num_vls; i++) {
unsigned tctxt;
for (qpn = 0, tctxt = ctxt;
krcvqs[i] && qpn < qpns_per_vl; qpn++) {
unsigned idx, regoff, regidx;
- /* generate index <= 128 */
- idx = (qpn << n) ^ i;
+ /* generate the index the hardware will produce */
+ idx = rmt->used + ((qpn << n) ^ i);
regoff = (idx % 8) * 8;
regidx = idx / 8;
- reg = rsmmap[regidx];
- /* replace 0xff with context number */
+ /* replace default with context number */
+ reg = rmt->map[regidx];
reg &= ~(RCV_RSM_MAP_TABLE_RCV_CONTEXT_A_MASK
<< regoff);
reg |= (u64)(tctxt++) << regoff;
- rsmmap[regidx] = reg;
+ rmt->map[regidx] = reg;
if (tctxt == ctxt + krcvqs[i])
tctxt = ctxt;
}
ctxt += krcvqs[i];
}
- /* flush cached copies to chip */
- for (i = 0; i < NUM_MAP_REGS; i++)
- write_csr(dd, RCV_RSM_MAP_TABLE + (8 * i), rsmmap[i]);
- /* add rule0 */
- write_csr(dd, RCV_RSM_CFG /* + (8 * 0) */,
- RCV_RSM_CFG_ENABLE_OR_CHAIN_RSM0_MASK <<
- RCV_RSM_CFG_ENABLE_OR_CHAIN_RSM0_SHIFT |
- 2ull << RCV_RSM_CFG_PACKET_TYPE_SHIFT);
- write_csr(dd, RCV_RSM_SELECT /* + (8 * 0) */,
- LRH_BTH_MATCH_OFFSET << RCV_RSM_SELECT_FIELD1_OFFSET_SHIFT |
- LRH_SC_MATCH_OFFSET << RCV_RSM_SELECT_FIELD2_OFFSET_SHIFT |
- LRH_SC_SELECT_OFFSET << RCV_RSM_SELECT_INDEX1_OFFSET_SHIFT |
- ((u64)n) << RCV_RSM_SELECT_INDEX1_WIDTH_SHIFT |
- QPN_SELECT_OFFSET << RCV_RSM_SELECT_INDEX2_OFFSET_SHIFT |
- ((u64)m + (u64)n) << RCV_RSM_SELECT_INDEX2_WIDTH_SHIFT);
- write_csr(dd, RCV_RSM_MATCH /* + (8 * 0) */,
- LRH_BTH_MASK << RCV_RSM_MATCH_MASK1_SHIFT |
- LRH_BTH_VALUE << RCV_RSM_MATCH_VALUE1_SHIFT |
- LRH_SC_MASK << RCV_RSM_MATCH_MASK2_SHIFT |
- LRH_SC_VALUE << RCV_RSM_MATCH_VALUE2_SHIFT);
- /* Enable RSM */
- add_rcvctrl(dd, RCV_CTRL_RCV_RSM_ENABLE_SMASK);
- kfree(rsmmap);
- /* map everything else to first context */
- init_qpmap_table(dd, FIRST_KERNEL_KCTXT, MIN_KERNEL_KCTXTS - 1);
+
+ rrd.offset = rmt->used;
+ rrd.pkt_type = 2;
+ rrd.field1_off = LRH_BTH_MATCH_OFFSET;
+ rrd.field2_off = LRH_SC_MATCH_OFFSET;
+ rrd.index1_off = LRH_SC_SELECT_OFFSET;
+ rrd.index1_width = n;
+ rrd.index2_off = QPN_SELECT_OFFSET;
+ rrd.index2_width = m + n;
+ rrd.mask1 = LRH_BTH_MASK;
+ rrd.value1 = LRH_BTH_VALUE;
+ rrd.mask2 = LRH_SC_MASK;
+ rrd.value2 = LRH_SC_VALUE;
+
+ /* add rule 0 */
+ add_rsm_rule(dd, 0, &rrd);
+
+ /* mark RSM map entries as used */
+ rmt->used += rmt_entries;
+ /* map everything else to the mcast/err/vl15 context */
+ init_qpmap_table(dd, HFI1_CTRL_CTXT, HFI1_CTRL_CTXT);
dd->qos_shift = n + 1;
return;
bail:
@@ -13574,13 +13783,86 @@ bail:
init_qpmap_table(dd, FIRST_KERNEL_KCTXT, dd->n_krcv_queues - 1);
}
+static void init_user_fecn_handling(struct hfi1_devdata *dd,
+ struct rsm_map_table *rmt)
+{
+ struct rsm_rule_data rrd;
+ u64 reg;
+ int i, idx, regoff, regidx;
+ u8 offset;
+
+ /* there needs to be enough room in the map table */
+ if (rmt->used + dd->num_user_contexts >= NUM_MAP_ENTRIES) {
+ dd_dev_err(dd, "User FECN handling disabled - too many user contexts allocated\n");
+ return;
+ }
+
+ /*
+ * RSM will extract the destination context as an index into the
+ * map table. The destination contexts are a sequential block
+ * in the range first_user_ctxt...num_rcv_contexts-1 (inclusive).
+ * Map entries are accessed as offset + extracted value. Adjust
+ * the added offset so this sequence can be placed anywhere in
+ * the table - as long as the entries themselves do not wrap.
+ * There are only enough bits in offset for the table size, so
+ * start with that to allow for a "negative" offset.
+ */
+ offset = (u8)(NUM_MAP_ENTRIES + (int)rmt->used -
+ (int)dd->first_user_ctxt);
+
+ for (i = dd->first_user_ctxt, idx = rmt->used;
+ i < dd->num_rcv_contexts; i++, idx++) {
+ /* replace with identity mapping */
+ regoff = (idx % 8) * 8;
+ regidx = idx / 8;
+ reg = rmt->map[regidx];
+ reg &= ~(RCV_RSM_MAP_TABLE_RCV_CONTEXT_A_MASK << regoff);
+ reg |= (u64)i << regoff;
+ rmt->map[regidx] = reg;
+ }
+
+ /*
+ * For RSM intercept of Expected FECN packets:
+ * o packet type 0 - expected
+ * o match on F (bit 95), using select/match 1, and
+ * o match on SH (bit 133), using select/match 2.
+ *
+ * Use index 1 to extract the 8-bit receive context from DestQP
+ * (start at bit 64). Use that as the RSM map table index.
+ */
+ rrd.offset = offset;
+ rrd.pkt_type = 0;
+ rrd.field1_off = 95;
+ rrd.field2_off = 133;
+ rrd.index1_off = 64;
+ rrd.index1_width = 8;
+ rrd.index2_off = 0;
+ rrd.index2_width = 0;
+ rrd.mask1 = 1;
+ rrd.value1 = 1;
+ rrd.mask2 = 1;
+ rrd.value2 = 1;
+
+ /* add rule 1 */
+ add_rsm_rule(dd, 1, &rrd);
+
+ rmt->used += dd->num_user_contexts;
+}
+
static void init_rxe(struct hfi1_devdata *dd)
{
+ struct rsm_map_table *rmt;
+
/* enable all receive errors */
write_csr(dd, RCV_ERR_MASK, ~0ull);
- /* setup QPN map table - start where VL15 context leaves off */
- init_qos(dd, dd->n_krcv_queues > MIN_KERNEL_KCTXTS ?
- MIN_KERNEL_KCTXTS : 0);
+
+ rmt = alloc_rsm_map_table(dd);
+ /* set up QOS, including the QPN map table */
+ init_qos(dd, rmt);
+ init_user_fecn_handling(dd, rmt);
+ complete_rsm_map_table(dd, rmt);
+ kfree(rmt);
+
/*
* make sure RcvCtrl.RcvWcb <= PCIe Device Control
* Register Max_Payload_Size (PCI_EXP_DEVCTL in Linux PCIe config
@@ -13762,6 +14044,7 @@ int hfi1_set_ctxt_pkey(struct hfi1_devdata *dd, unsigned ctxt, u16 pkey)
write_kctxt_csr(dd, sctxt, SEND_CTXT_CHECK_PARTITION_KEY, reg);
reg = read_kctxt_csr(dd, sctxt, SEND_CTXT_CHECK_ENABLE);
reg |= SEND_CTXT_CHECK_ENABLE_CHECK_PARTITION_KEY_SMASK;
+ reg &= ~SEND_CTXT_CHECK_ENABLE_DISALLOW_KDETH_PACKETS_SMASK;
write_kctxt_csr(dd, sctxt, SEND_CTXT_CHECK_ENABLE, reg);
done:
return ret;
@@ -14148,6 +14431,19 @@ struct hfi1_devdata *hfi1_init_dd(struct pci_dev *pdev,
(dd->revision >> CCE_REVISION_SW_SHIFT)
& CCE_REVISION_SW_MASK);
+ /*
+ * The real cpu mask is part of the affinity struct but has to be
+ * initialized earlier than the rest of the affinity struct because it
+ * is needed to calculate the number of user contexts in
+ * set_up_context_variables(). However, hfi1_dev_affinity_init(),
+ * which initializes the rest of the affinity struct members,
+ * depends on set_up_context_variables() for the number of kernel
+ * contexts, so it cannot be called before set_up_context_variables().
+ */
+ ret = init_real_cpu_mask(dd);
+ if (ret)
+ goto bail_cleanup;
+
ret = set_up_context_variables(dd);
if (ret)
goto bail_cleanup;
@@ -14161,9 +14457,7 @@ struct hfi1_devdata *hfi1_init_dd(struct pci_dev *pdev,
/* set up KDETH QP prefix in both RX and TX CSRs */
init_kdeth_qp(dd);
- ret = hfi1_dev_affinity_init(dd);
- if (ret)
- goto bail_cleanup;
+ hfi1_dev_affinity_init(dd);
/* send contexts must be set up before receive contexts */
ret = init_send_contexts(dd);
@@ -14303,7 +14597,7 @@ u64 create_pbc(struct hfi1_pportdata *ppd, u64 flags, int srate_mbs, u32 vl,
(reason), (ret))
/*
- * Initialize the Avago Thermal sensor.
+ * Initialize the thermal sensor.
*
* After initialization, enable polling of thermal sensor through
* SBus interface. In order for this to work, the SBus Master
diff --git a/drivers/staging/rdma/hfi1/chip.h b/drivers/infiniband/hw/hfi1/chip.h
index 4f3b878e43eb..66a327978739 100644
--- a/drivers/staging/rdma/hfi1/chip.h
+++ b/drivers/infiniband/hw/hfi1/chip.h
@@ -389,6 +389,7 @@
#define LAST_REMOTE_STATE_COMPLETE 0x13
#define LINK_QUALITY_INFO 0x14
#define REMOTE_DEVICE_ID 0x15
+#define LINK_DOWN_REASON 0x16
/* 8051 lane specific register field IDs */
#define TX_EQ_SETTINGS 0x00
@@ -397,6 +398,12 @@
/* Lane ID for general configuration registers */
#define GENERAL_CONFIG 4
+/* LINK_TUNING_PARAMETERS fields */
+#define TUNING_METHOD_SHIFT 24
+
+/* LINK_OPTIMIZATION_SETTINGS fields */
+#define ENABLE_EXT_DEV_CONFIG_SHIFT 24
+
/* LOAD_DATA 8051 command shifts and fields */
#define LOAD_DATA_FIELD_ID_SHIFT 40
#define LOAD_DATA_FIELD_ID_MASK 0xfull
@@ -497,6 +504,11 @@
#define PWRM_BER_CONTROL 0x1
#define PWRM_BANDWIDTH_CONTROL 0x2
+/* 8051 link down reasons */
+#define LDR_LINK_TRANSFER_ACTIVE_LOW 0xa
+#define LDR_RECEIVED_LINKDOWN_IDLE_MSG 0xb
+#define LDR_RECEIVED_HOST_OFFLINE_REQ 0xc
+
/* verify capability fabric CRC size bits */
enum {
CAP_CRC_14B = (1 << 0), /* 14b CRC */
@@ -691,7 +703,6 @@ void handle_verify_cap(struct work_struct *work);
void handle_freeze(struct work_struct *work);
void handle_link_up(struct work_struct *work);
void handle_link_down(struct work_struct *work);
-void handle_8051_request(struct work_struct *work);
void handle_link_downgrade(struct work_struct *work);
void handle_link_bounce(struct work_struct *work);
void handle_sma_message(struct work_struct *work);
diff --git a/drivers/staging/rdma/hfi1/chip_registers.h b/drivers/infiniband/hw/hfi1/chip_registers.h
index 770f05c9b8de..8744de6667c2 100644
--- a/drivers/staging/rdma/hfi1/chip_registers.h
+++ b/drivers/infiniband/hw/hfi1/chip_registers.h
@@ -771,6 +771,7 @@
#define RCV_RSM_CFG_ENABLE_OR_CHAIN_RSM0_MASK 0x1ull
#define RCV_RSM_CFG_ENABLE_OR_CHAIN_RSM0_SHIFT 0
#define RCV_RSM_CFG_PACKET_TYPE_SHIFT 60
+#define RCV_RSM_CFG_OFFSET_SHIFT 32
#define RCV_RSM_MAP_TABLE (RXE + 0x000000000900)
#define RCV_RSM_MAP_TABLE_RCV_CONTEXT_A_MASK 0xFFull
#define RCV_RSM_MATCH (RXE + 0x000000000800)
diff --git a/drivers/staging/rdma/hfi1/common.h b/drivers/infiniband/hw/hfi1/common.h
index e9b6bb322025..fcc9c217a97a 100644
--- a/drivers/staging/rdma/hfi1/common.h
+++ b/drivers/infiniband/hw/hfi1/common.h
@@ -178,7 +178,8 @@
HFI1_CAP_PKEY_CHECK | \
HFI1_CAP_NO_INTEGRITY)
-#define HFI1_USER_SWVERSION ((HFI1_USER_SWMAJOR << 16) | HFI1_USER_SWMINOR)
+#define HFI1_USER_SWVERSION ((HFI1_USER_SWMAJOR << HFI1_SWMAJOR_SHIFT) | \
+ HFI1_USER_SWMINOR)
#ifndef HFI1_KERN_TYPE
#define HFI1_KERN_TYPE 0
@@ -349,6 +350,8 @@ struct hfi1_message_header {
#define HFI1_BECN_MASK 1
#define HFI1_BECN_SMASK BIT(HFI1_BECN_SHIFT)
+#define HFI1_PSM_IOC_BASE_SEQ 0x0
+
static inline __u64 rhf_to_cpu(const __le32 *rbuf)
{
return __le64_to_cpu(*((__le64 *)rbuf));
diff --git a/drivers/staging/rdma/hfi1/debugfs.c b/drivers/infiniband/hw/hfi1/debugfs.c
index dbab9d9cc288..dbab9d9cc288 100644
--- a/drivers/staging/rdma/hfi1/debugfs.c
+++ b/drivers/infiniband/hw/hfi1/debugfs.c
diff --git a/drivers/staging/rdma/hfi1/debugfs.h b/drivers/infiniband/hw/hfi1/debugfs.h
index b6fb6814f1b8..b6fb6814f1b8 100644
--- a/drivers/staging/rdma/hfi1/debugfs.h
+++ b/drivers/infiniband/hw/hfi1/debugfs.h
diff --git a/drivers/staging/rdma/hfi1/device.c b/drivers/infiniband/hw/hfi1/device.c
index c05c39da83b1..bf64b5a7bfd7 100644
--- a/drivers/staging/rdma/hfi1/device.c
+++ b/drivers/infiniband/hw/hfi1/device.c
@@ -60,7 +60,8 @@ static dev_t hfi1_dev;
int hfi1_cdev_init(int minor, const char *name,
const struct file_operations *fops,
struct cdev *cdev, struct device **devp,
- bool user_accessible)
+ bool user_accessible,
+ struct kobject *parent)
{
const dev_t dev = MKDEV(MAJOR(hfi1_dev), minor);
struct device *device = NULL;
@@ -68,6 +69,7 @@ int hfi1_cdev_init(int minor, const char *name,
cdev_init(cdev, fops);
cdev->owner = THIS_MODULE;
+ cdev->kobj.parent = parent;
kobject_set_name(&cdev->kobj, name);
ret = cdev_add(cdev, dev, 1);
@@ -82,13 +84,13 @@ int hfi1_cdev_init(int minor, const char *name,
else
device = device_create(class, NULL, dev, NULL, "%s", name);
- if (!IS_ERR(device))
- goto done;
- ret = PTR_ERR(device);
- device = NULL;
- pr_err("Could not create device for minor %d, %s (err %d)\n",
- minor, name, -ret);
- cdev_del(cdev);
+ if (IS_ERR(device)) {
+ ret = PTR_ERR(device);
+ device = NULL;
+ pr_err("Could not create device for minor %d, %s (err %d)\n",
+ minor, name, -ret);
+ cdev_del(cdev);
+ }
done:
*devp = device;
return ret;
diff --git a/drivers/staging/rdma/hfi1/device.h b/drivers/infiniband/hw/hfi1/device.h
index 5bb3e83cf2da..c3ec19cb0ac9 100644
--- a/drivers/staging/rdma/hfi1/device.h
+++ b/drivers/infiniband/hw/hfi1/device.h
@@ -50,7 +50,8 @@
int hfi1_cdev_init(int minor, const char *name,
const struct file_operations *fops,
struct cdev *cdev, struct device **devp,
- bool user_accessible);
+ bool user_accessible,
+ struct kobject *parent);
void hfi1_cdev_cleanup(struct cdev *cdev, struct device **devp);
const char *class_name(void);
int __init dev_init(void);
diff --git a/drivers/staging/rdma/hfi1/dma.c b/drivers/infiniband/hw/hfi1/dma.c
index 7e8dab892848..7e8dab892848 100644
--- a/drivers/staging/rdma/hfi1/dma.c
+++ b/drivers/infiniband/hw/hfi1/dma.c
diff --git a/drivers/staging/rdma/hfi1/driver.c b/drivers/infiniband/hw/hfi1/driver.c
index 34511e5df1d5..c75b0ae688f8 100644
--- a/drivers/staging/rdma/hfi1/driver.c
+++ b/drivers/infiniband/hw/hfi1/driver.c
@@ -75,7 +75,8 @@ DEFINE_MUTEX(hfi1_mutex); /* general driver use */
unsigned int hfi1_max_mtu = HFI1_DEFAULT_MAX_MTU;
module_param_named(max_mtu, hfi1_max_mtu, uint, S_IRUGO);
-MODULE_PARM_DESC(max_mtu, "Set max MTU bytes, default is 8192");
+MODULE_PARM_DESC(max_mtu, "Set max MTU bytes, default is " __stringify(
+ HFI1_DEFAULT_MAX_MTU));
unsigned int hfi1_cu = 1;
module_param_named(cu, hfi1_cu, uint, S_IRUGO);
@@ -1160,7 +1161,7 @@ int hfi1_set_lid(struct hfi1_pportdata *ppd, u32 lid, u8 lmc)
ppd->lmc = lmc;
hfi1_set_ib_cfg(ppd, HFI1_IB_CFG_LIDLMC, 0);
- dd_dev_info(dd, "IB%u:%u got a lid: 0x%x\n", dd->unit, ppd->port, lid);
+ dd_dev_info(dd, "port %u: got a lid: 0x%x\n", ppd->port, lid);
return 0;
}
diff --git a/drivers/staging/rdma/hfi1/efivar.c b/drivers/infiniband/hw/hfi1/efivar.c
index 106349fc1fb9..106349fc1fb9 100644
--- a/drivers/staging/rdma/hfi1/efivar.c
+++ b/drivers/infiniband/hw/hfi1/efivar.c
diff --git a/drivers/staging/rdma/hfi1/efivar.h b/drivers/infiniband/hw/hfi1/efivar.h
index 94e9e70de568..94e9e70de568 100644
--- a/drivers/staging/rdma/hfi1/efivar.h
+++ b/drivers/infiniband/hw/hfi1/efivar.h
diff --git a/drivers/infiniband/hw/hfi1/eprom.c b/drivers/infiniband/hw/hfi1/eprom.c
new file mode 100644
index 000000000000..36b77943cbfd
--- /dev/null
+++ b/drivers/infiniband/hw/hfi1/eprom.c
@@ -0,0 +1,102 @@
+/*
+ * Copyright(c) 2015, 2016 Intel Corporation.
+ *
+ * This file is provided under a dual BSD/GPLv2 license. When using or
+ * redistributing this file, you may do so under either license.
+ *
+ * GPL LICENSE SUMMARY
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of version 2 of the GNU General Public License as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+ * General Public License for more details.
+ *
+ * BSD LICENSE
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ * - Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * - Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in
+ * the documentation and/or other materials provided with the
+ * distribution.
+ * - Neither the name of Intel Corporation nor the names of its
+ * contributors may be used to endorse or promote products derived
+ * from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ *
+ */
+#include <linux/delay.h>
+#include "hfi.h"
+#include "common.h"
+#include "eprom.h"
+
+#define CMD_SHIFT 24
+#define CMD_RELEASE_POWERDOWN_NOID ((0xab << CMD_SHIFT))
+
+/* controller interface speeds */
+#define EP_SPEED_FULL 0x2 /* full speed */
+
+/*
+ * How long to wait for the EPROM to become available, in ms.
+ * The spec 32 Mb EPROM takes around 40s to erase then write.
+ * Double it for safety.
+ */
+#define EPROM_TIMEOUT 80000 /* ms */
+/*
+ * Initialize the EPROM handler.
+ */
+int eprom_init(struct hfi1_devdata *dd)
+{
+ int ret = 0;
+
+ /* only the discrete chip has an EPROM */
+ if (dd->pcidev->device != PCI_DEVICE_ID_INTEL0)
+ return 0;
+
+ /*
+ * It is OK if both HFIs reset the EPROM as long as they don't
+ * do it at the same time.
+ */
+ ret = acquire_chip_resource(dd, CR_EPROM, EPROM_TIMEOUT);
+ if (ret) {
+ dd_dev_err(dd,
+ "%s: unable to acquire EPROM resource, no EPROM support\n",
+ __func__);
+ goto done_asic;
+ }
+
+ /* reset EPROM to be sure it is in a good state */
+
+ /* set reset */
+ write_csr(dd, ASIC_EEP_CTL_STAT, ASIC_EEP_CTL_STAT_EP_RESET_SMASK);
+ /* clear reset, set speed */
+ write_csr(dd, ASIC_EEP_CTL_STAT,
+ EP_SPEED_FULL << ASIC_EEP_CTL_STAT_RATE_SPI_SHIFT);
+
+ /* wake the device with command "release powerdown NoID" */
+ write_csr(dd, ASIC_EEP_ADDR_CMD, CMD_RELEASE_POWERDOWN_NOID);
+
+ dd->eprom_available = true;
+ release_chip_resource(dd, CR_EPROM);
+done_asic:
+ return ret;
+}
diff --git a/drivers/staging/rdma/hfi1/eprom.h b/drivers/infiniband/hw/hfi1/eprom.h
index d41f0b1afb15..d41f0b1afb15 100644
--- a/drivers/staging/rdma/hfi1/eprom.h
+++ b/drivers/infiniband/hw/hfi1/eprom.h
diff --git a/drivers/staging/rdma/hfi1/file_ops.c b/drivers/infiniband/hw/hfi1/file_ops.c
index c1c5bf82addb..7a5b0e676cc7 100644
--- a/drivers/staging/rdma/hfi1/file_ops.c
+++ b/drivers/infiniband/hw/hfi1/file_ops.c
@@ -72,8 +72,6 @@
*/
static int hfi1_file_open(struct inode *, struct file *);
static int hfi1_file_close(struct inode *, struct file *);
-static ssize_t hfi1_file_write(struct file *, const char __user *,
- size_t, loff_t *);
static ssize_t hfi1_write_iter(struct kiocb *, struct iov_iter *);
static unsigned int hfi1_poll(struct file *, struct poll_table_struct *);
static int hfi1_file_mmap(struct file *, struct vm_area_struct *);
@@ -86,8 +84,7 @@ static int get_ctxt_info(struct file *, void __user *, __u32);
static int get_base_info(struct file *, void __user *, __u32);
static int setup_ctxt(struct file *);
static int setup_subctxt(struct hfi1_ctxtdata *);
-static int get_user_context(struct file *, struct hfi1_user_info *,
- int, unsigned);
+static int get_user_context(struct file *, struct hfi1_user_info *, int);
static int find_shared_ctxt(struct file *, const struct hfi1_user_info *);
static int allocate_ctxt(struct file *, struct hfi1_devdata *,
struct hfi1_user_info *);
@@ -97,13 +94,15 @@ static int user_event_ack(struct hfi1_ctxtdata *, int, unsigned long);
static int set_ctxt_pkey(struct hfi1_ctxtdata *, unsigned, u16);
static int manage_rcvq(struct hfi1_ctxtdata *, unsigned, int);
static int vma_fault(struct vm_area_struct *, struct vm_fault *);
+static long hfi1_file_ioctl(struct file *fp, unsigned int cmd,
+ unsigned long arg);
static const struct file_operations hfi1_file_ops = {
.owner = THIS_MODULE,
- .write = hfi1_file_write,
.write_iter = hfi1_write_iter,
.open = hfi1_file_open,
.release = hfi1_file_close,
+ .unlocked_ioctl = hfi1_file_ioctl,
.poll = hfi1_poll,
.mmap = hfi1_file_mmap,
.llseek = noop_llseek,
@@ -169,6 +168,13 @@ static inline int is_valid_mmap(u64 token)
static int hfi1_file_open(struct inode *inode, struct file *fp)
{
+ struct hfi1_devdata *dd = container_of(inode->i_cdev,
+ struct hfi1_devdata,
+ user_cdev);
+
+ /* Just take a ref now. Not all opens result in a context assign */
+ kobject_get(&dd->kobj);
+
/* The real work is performed later in assign_ctxt() */
fp->private_data = kzalloc(sizeof(struct hfi1_filedata), GFP_KERNEL);
if (fp->private_data) /* no cpu affinity by default */
@@ -176,127 +182,59 @@ static int hfi1_file_open(struct inode *inode, struct file *fp)
return fp->private_data ? 0 : -ENOMEM;
}
-static ssize_t hfi1_file_write(struct file *fp, const char __user *data,
- size_t count, loff_t *offset)
+static long hfi1_file_ioctl(struct file *fp, unsigned int cmd,
+ unsigned long arg)
{
- const struct hfi1_cmd __user *ucmd;
struct hfi1_filedata *fd = fp->private_data;
struct hfi1_ctxtdata *uctxt = fd->uctxt;
- struct hfi1_cmd cmd;
struct hfi1_user_info uinfo;
struct hfi1_tid_info tinfo;
+ int ret = 0;
unsigned long addr;
- ssize_t consumed = 0, copy = 0, ret = 0;
- void *dest = NULL;
- __u64 user_val = 0;
- int uctxt_required = 1;
- int must_be_root = 0;
-
- /* FIXME: This interface cannot continue out of staging */
- if (WARN_ON_ONCE(!ib_safe_file_access(fp)))
- return -EACCES;
-
- if (count < sizeof(cmd)) {
- ret = -EINVAL;
- goto bail;
- }
-
- ucmd = (const struct hfi1_cmd __user *)data;
- if (copy_from_user(&cmd, ucmd, sizeof(cmd))) {
- ret = -EFAULT;
- goto bail;
- }
-
- consumed = sizeof(cmd);
-
- switch (cmd.type) {
- case HFI1_CMD_ASSIGN_CTXT:
- uctxt_required = 0; /* assigned user context not required */
- copy = sizeof(uinfo);
- dest = &uinfo;
- break;
- case HFI1_CMD_SDMA_STATUS_UPD:
- case HFI1_CMD_CREDIT_UPD:
- copy = 0;
- break;
- case HFI1_CMD_TID_UPDATE:
- case HFI1_CMD_TID_FREE:
- case HFI1_CMD_TID_INVAL_READ:
- copy = sizeof(tinfo);
- dest = &tinfo;
- break;
- case HFI1_CMD_USER_INFO:
- case HFI1_CMD_RECV_CTRL:
- case HFI1_CMD_POLL_TYPE:
- case HFI1_CMD_ACK_EVENT:
- case HFI1_CMD_CTXT_INFO:
- case HFI1_CMD_SET_PKEY:
- case HFI1_CMD_CTXT_RESET:
- copy = 0;
- user_val = cmd.addr;
- break;
- case HFI1_CMD_EP_INFO:
- case HFI1_CMD_EP_ERASE_CHIP:
- case HFI1_CMD_EP_ERASE_RANGE:
- case HFI1_CMD_EP_READ_RANGE:
- case HFI1_CMD_EP_WRITE_RANGE:
- uctxt_required = 0; /* assigned user context not required */
- must_be_root = 1; /* validate user */
- copy = 0;
- break;
- default:
- ret = -EINVAL;
- goto bail;
- }
-
- /* If the command comes with user data, copy it. */
- if (copy) {
- if (copy_from_user(dest, (void __user *)cmd.addr, copy)) {
- ret = -EFAULT;
- goto bail;
- }
- consumed += copy;
- }
-
- /*
- * Make sure there is a uctxt when needed.
- */
- if (uctxt_required && !uctxt) {
- ret = -EINVAL;
- goto bail;
- }
+ int uval = 0;
+ unsigned long ul_uval = 0;
+ u16 uval16 = 0;
+
+ hfi1_cdbg(IOCTL, "IOCTL recv: 0x%x", cmd);
+ if (cmd != HFI1_IOCTL_ASSIGN_CTXT &&
+ cmd != HFI1_IOCTL_GET_VERS &&
+ !uctxt)
+ return -EINVAL;
- /* only root can do these operations */
- if (must_be_root && !capable(CAP_SYS_ADMIN)) {
- ret = -EPERM;
- goto bail;
- }
+ switch (cmd) {
+ case HFI1_IOCTL_ASSIGN_CTXT:
+ if (copy_from_user(&uinfo,
+ (struct hfi1_user_info __user *)arg,
+ sizeof(uinfo)))
+ return -EFAULT;
- switch (cmd.type) {
- case HFI1_CMD_ASSIGN_CTXT:
ret = assign_ctxt(fp, &uinfo);
if (ret < 0)
- goto bail;
- ret = setup_ctxt(fp);
+ return ret;
+ ret = setup_ctxt(fp);
if (ret)
- goto bail;
+ return ret;
ret = user_init(fp);
break;
- case HFI1_CMD_CTXT_INFO:
- ret = get_ctxt_info(fp, (void __user *)(unsigned long)
- user_val, cmd.len);
- break;
- case HFI1_CMD_USER_INFO:
- ret = get_base_info(fp, (void __user *)(unsigned long)
- user_val, cmd.len);
+ case HFI1_IOCTL_CTXT_INFO:
+ ret = get_ctxt_info(fp, (void __user *)(unsigned long)arg,
+ sizeof(struct hfi1_ctxt_info));
break;
- case HFI1_CMD_SDMA_STATUS_UPD:
+ case HFI1_IOCTL_USER_INFO:
+ ret = get_base_info(fp, (void __user *)(unsigned long)arg,
+ sizeof(struct hfi1_base_info));
break;
- case HFI1_CMD_CREDIT_UPD:
+ case HFI1_IOCTL_CREDIT_UPD:
if (uctxt && uctxt->sc)
sc_return_credits(uctxt->sc);
break;
- case HFI1_CMD_TID_UPDATE:
+
+ case HFI1_IOCTL_TID_UPDATE:
+ if (copy_from_user(&tinfo,
+ (struct hfi1_tid_info __user *)arg,
+ sizeof(tinfo)))
+ return -EFAULT;
+
ret = hfi1_user_exp_rcv_setup(fp, &tinfo);
if (!ret) {
/*
@@ -305,57 +243,82 @@ static ssize_t hfi1_file_write(struct file *fp, const char __user *data,
* These fields are adjacent in the structure so
* we can copy them at the same time.
*/
- addr = (unsigned long)cmd.addr +
- offsetof(struct hfi1_tid_info, tidcnt);
+ addr = arg + offsetof(struct hfi1_tid_info, tidcnt);
if (copy_to_user((void __user *)addr, &tinfo.tidcnt,
sizeof(tinfo.tidcnt) +
sizeof(tinfo.length)))
ret = -EFAULT;
}
break;
- case HFI1_CMD_TID_INVAL_READ:
- ret = hfi1_user_exp_rcv_invalid(fp, &tinfo);
+
+ case HFI1_IOCTL_TID_FREE:
+ if (copy_from_user(&tinfo,
+ (struct hfi1_tid_info __user *)arg,
+ sizeof(tinfo)))
+ return -EFAULT;
+
+ ret = hfi1_user_exp_rcv_clear(fp, &tinfo);
if (ret)
break;
- addr = (unsigned long)cmd.addr +
- offsetof(struct hfi1_tid_info, tidcnt);
+ addr = arg + offsetof(struct hfi1_tid_info, tidcnt);
if (copy_to_user((void __user *)addr, &tinfo.tidcnt,
sizeof(tinfo.tidcnt)))
ret = -EFAULT;
break;
- case HFI1_CMD_TID_FREE:
- ret = hfi1_user_exp_rcv_clear(fp, &tinfo);
+
+ case HFI1_IOCTL_TID_INVAL_READ:
+ if (copy_from_user(&tinfo,
+ (struct hfi1_tid_info __user *)arg,
+ sizeof(tinfo)))
+ return -EFAULT;
+
+ ret = hfi1_user_exp_rcv_invalid(fp, &tinfo);
if (ret)
break;
- addr = (unsigned long)cmd.addr +
- offsetof(struct hfi1_tid_info, tidcnt);
+ addr = arg + offsetof(struct hfi1_tid_info, tidcnt);
if (copy_to_user((void __user *)addr, &tinfo.tidcnt,
sizeof(tinfo.tidcnt)))
ret = -EFAULT;
break;
- case HFI1_CMD_RECV_CTRL:
- ret = manage_rcvq(uctxt, fd->subctxt, (int)user_val);
+
+ case HFI1_IOCTL_RECV_CTRL:
+ ret = get_user(uval, (int __user *)arg);
+ if (ret != 0)
+ return -EFAULT;
+ ret = manage_rcvq(uctxt, fd->subctxt, uval);
break;
- case HFI1_CMD_POLL_TYPE:
- uctxt->poll_type = (typeof(uctxt->poll_type))user_val;
+
+ case HFI1_IOCTL_POLL_TYPE:
+ ret = get_user(uval, (int __user *)arg);
+ if (ret != 0)
+ return -EFAULT;
+ uctxt->poll_type = (typeof(uctxt->poll_type))uval;
break;
- case HFI1_CMD_ACK_EVENT:
- ret = user_event_ack(uctxt, fd->subctxt, user_val);
+
+ case HFI1_IOCTL_ACK_EVENT:
+ ret = get_user(ul_uval, (unsigned long __user *)arg);
+ if (ret != 0)
+ return -EFAULT;
+ ret = user_event_ack(uctxt, fd->subctxt, ul_uval);
break;
- case HFI1_CMD_SET_PKEY:
+
+ case HFI1_IOCTL_SET_PKEY:
+ ret = get_user(uval16, (u16 __user *)arg);
+ if (ret != 0)
+ return -EFAULT;
if (HFI1_CAP_IS_USET(PKEY_CHECK))
- ret = set_ctxt_pkey(uctxt, fd->subctxt, user_val);
+ ret = set_ctxt_pkey(uctxt, fd->subctxt, uval16);
else
- ret = -EPERM;
+ return -EPERM;
break;
- case HFI1_CMD_CTXT_RESET: {
+
+ case HFI1_IOCTL_CTXT_RESET: {
struct send_context *sc;
struct hfi1_devdata *dd;
- if (!uctxt || !uctxt->dd || !uctxt->sc) {
- ret = -EINVAL;
- break;
- }
+ if (!uctxt || !uctxt->dd || !uctxt->sc)
+ return -EINVAL;
+
/*
* There is no protection here. User level has to
* guarantee that no one will be writing to the send
@@ -373,10 +336,9 @@ static ssize_t hfi1_file_write(struct file *fp, const char __user *data,
wait_event_interruptible_timeout(
sc->halt_wait, (sc->flags & SCF_HALTED),
msecs_to_jiffies(SEND_CTXT_HALT_TIMEOUT));
- if (!(sc->flags & SCF_HALTED)) {
- ret = -ENOLCK;
- break;
- }
+ if (!(sc->flags & SCF_HALTED))
+ return -ENOLCK;
+
/*
* If the send context was halted due to a Freeze,
* wait until the device has been "unfrozen" before
@@ -387,18 +349,16 @@ static ssize_t hfi1_file_write(struct file *fp, const char __user *data,
dd->event_queue,
!(ACCESS_ONCE(dd->flags) & HFI1_FROZEN),
msecs_to_jiffies(SEND_CTXT_HALT_TIMEOUT));
- if (dd->flags & HFI1_FROZEN) {
- ret = -ENOLCK;
- break;
- }
- if (dd->flags & HFI1_FORCED_FREEZE) {
+ if (dd->flags & HFI1_FROZEN)
+ return -ENOLCK;
+
+ if (dd->flags & HFI1_FORCED_FREEZE)
/*
* Don't allow context reset if we are into
* forced freeze
*/
- ret = -ENODEV;
- break;
- }
+ return -ENODEV;
+
sc_disable(sc);
ret = sc_enable(sc);
hfi1_rcvctrl(dd, HFI1_RCVCTRL_CTXT_ENB,
@@ -410,18 +370,17 @@ static ssize_t hfi1_file_write(struct file *fp, const char __user *data,
sc_return_credits(sc);
break;
}
- case HFI1_CMD_EP_INFO:
- case HFI1_CMD_EP_ERASE_CHIP:
- case HFI1_CMD_EP_ERASE_RANGE:
- case HFI1_CMD_EP_READ_RANGE:
- case HFI1_CMD_EP_WRITE_RANGE:
- ret = handle_eprom_command(fp, &cmd);
+
+ case HFI1_IOCTL_GET_VERS:
+ uval = HFI1_USER_SWVERSION;
+ if (put_user(uval, (int __user *)arg))
+ return -EFAULT;
break;
+
+ default:
+ return -EINVAL;
}
- if (ret >= 0)
- ret = consumed;
-bail:
return ret;
}
@@ -738,7 +697,9 @@ static int hfi1_file_close(struct inode *inode, struct file *fp)
{
struct hfi1_filedata *fdata = fp->private_data;
struct hfi1_ctxtdata *uctxt = fdata->uctxt;
- struct hfi1_devdata *dd;
+ struct hfi1_devdata *dd = container_of(inode->i_cdev,
+ struct hfi1_devdata,
+ user_cdev);
unsigned long flags, *ev;
fp->private_data = NULL;
@@ -747,7 +708,6 @@ static int hfi1_file_close(struct inode *inode, struct file *fp)
goto done;
hfi1_cdbg(PROC, "freeing ctxt %u:%u", uctxt->ctxt, fdata->subctxt);
- dd = uctxt->dd;
mutex_lock(&hfi1_mutex);
flush_wc();
@@ -813,6 +773,7 @@ static int hfi1_file_close(struct inode *inode, struct file *fp)
mutex_unlock(&hfi1_mutex);
hfi1_free_ctxtdata(dd, uctxt);
done:
+ kobject_put(&dd->kobj);
kfree(fdata);
return 0;
}
@@ -836,7 +797,7 @@ static u64 kvirt_to_phys(void *addr)
static int assign_ctxt(struct file *fp, struct hfi1_user_info *uinfo)
{
int i_minor, ret = 0;
- unsigned swmajor, swminor, alg = HFI1_ALG_ACROSS;
+ unsigned int swmajor, swminor;
swmajor = uinfo->userversion >> 16;
if (swmajor != HFI1_USER_SWMAJOR) {
@@ -846,9 +807,6 @@ static int assign_ctxt(struct file *fp, struct hfi1_user_info *uinfo)
swminor = uinfo->userversion & 0xffff;
- if (uinfo->hfi1_alg < HFI1_ALG_COUNT)
- alg = uinfo->hfi1_alg;
-
mutex_lock(&hfi1_mutex);
/* First, lets check if we need to setup a shared context? */
if (uinfo->subctxt_cnt) {
@@ -868,7 +826,7 @@ static int assign_ctxt(struct file *fp, struct hfi1_user_info *uinfo)
*/
if (!ret) {
i_minor = iminor(file_inode(fp)) - HFI1_USER_MINOR_BASE;
- ret = get_user_context(fp, uinfo, i_minor - 1, alg);
+ ret = get_user_context(fp, uinfo, i_minor);
}
done_unlock:
mutex_unlock(&hfi1_mutex);
@@ -876,71 +834,26 @@ done:
return ret;
}
-/* return true if the device available for general use */
-static int usable_device(struct hfi1_devdata *dd)
-{
- struct hfi1_pportdata *ppd = dd->pport;
-
- return driver_lstate(ppd) == IB_PORT_ACTIVE;
-}
-
static int get_user_context(struct file *fp, struct hfi1_user_info *uinfo,
- int devno, unsigned alg)
+ int devno)
{
struct hfi1_devdata *dd = NULL;
- int ret = 0, devmax, npresent, nup, dev;
+ int devmax, npresent, nup;
devmax = hfi1_count_units(&npresent, &nup);
- if (!npresent) {
- ret = -ENXIO;
- goto done;
- }
- if (!nup) {
- ret = -ENETDOWN;
- goto done;
- }
- if (devno >= 0) {
- dd = hfi1_lookup(devno);
- if (!dd)
- ret = -ENODEV;
- else if (!dd->freectxts)
- ret = -EBUSY;
- } else {
- struct hfi1_devdata *pdd;
-
- if (alg == HFI1_ALG_ACROSS) {
- unsigned free = 0U;
-
- for (dev = 0; dev < devmax; dev++) {
- pdd = hfi1_lookup(dev);
- if (!pdd)
- continue;
- if (!usable_device(pdd))
- continue;
- if (pdd->freectxts &&
- pdd->freectxts > free) {
- dd = pdd;
- free = pdd->freectxts;
- }
- }
- } else {
- for (dev = 0; dev < devmax; dev++) {
- pdd = hfi1_lookup(dev);
- if (!pdd)
- continue;
- if (!usable_device(pdd))
- continue;
- if (pdd->freectxts) {
- dd = pdd;
- break;
- }
- }
- }
- if (!dd)
- ret = -EBUSY;
- }
-done:
- return ret ? ret : allocate_ctxt(fp, dd, uinfo);
+ if (!npresent)
+ return -ENXIO;
+
+ if (!nup)
+ return -ENETDOWN;
+
+ dd = hfi1_lookup(devno);
+ if (!dd)
+ return -ENODEV;
+ else if (!dd->freectxts)
+ return -EBUSY;
+
+ return allocate_ctxt(fp, dd, uinfo);
}
static int find_shared_ctxt(struct file *fp,
@@ -1546,170 +1459,10 @@ done:
return ret;
}
-static int ui_open(struct inode *inode, struct file *filp)
-{
- struct hfi1_devdata *dd;
-
- dd = container_of(inode->i_cdev, struct hfi1_devdata, ui_cdev);
- filp->private_data = dd; /* for other methods */
- return 0;
-}
-
-static int ui_release(struct inode *inode, struct file *filp)
-{
- /* nothing to do */
- return 0;
-}
-
-static loff_t ui_lseek(struct file *filp, loff_t offset, int whence)
-{
- struct hfi1_devdata *dd = filp->private_data;
-
- return fixed_size_llseek(filp, offset, whence,
- (dd->kregend - dd->kregbase) + DC8051_DATA_MEM_SIZE);
-}
-
-/* NOTE: assumes unsigned long is 8 bytes */
-static ssize_t ui_read(struct file *filp, char __user *buf, size_t count,
- loff_t *f_pos)
-{
- struct hfi1_devdata *dd = filp->private_data;
- void __iomem *base = dd->kregbase;
- unsigned long total, csr_off,
- barlen = (dd->kregend - dd->kregbase);
- u64 data;
-
- /* only read 8 byte quantities */
- if ((count % 8) != 0)
- return -EINVAL;
- /* offset must be 8-byte aligned */
- if ((*f_pos % 8) != 0)
- return -EINVAL;
- /* destination buffer must be 8-byte aligned */
- if ((unsigned long)buf % 8 != 0)
- return -EINVAL;
- /* must be in range */
- if (*f_pos + count > (barlen + DC8051_DATA_MEM_SIZE))
- return -EINVAL;
- /* only set the base if we are not starting past the BAR */
- if (*f_pos < barlen)
- base += *f_pos;
- csr_off = *f_pos;
- for (total = 0; total < count; total += 8, csr_off += 8) {
- /* accessing LCB CSRs requires more checks */
- if (is_lcb_offset(csr_off)) {
- if (read_lcb_csr(dd, csr_off, (u64 *)&data))
- break; /* failed */
- }
- /*
- * Cannot read ASIC GPIO/QSFP* clear and force CSRs without a
- * false parity error. Avoid the whole issue by not reading
- * them. These registers are defined as having a read value
- * of 0.
- */
- else if (csr_off == ASIC_GPIO_CLEAR ||
- csr_off == ASIC_GPIO_FORCE ||
- csr_off == ASIC_QSFP1_CLEAR ||
- csr_off == ASIC_QSFP1_FORCE ||
- csr_off == ASIC_QSFP2_CLEAR ||
- csr_off == ASIC_QSFP2_FORCE)
- data = 0;
- else if (csr_off >= barlen) {
- /*
- * read_8051_data can read more than just 8 bytes at
- * a time. However, folding this into the loop and
- * handling the reads in 8 byte increments allows us
- * to smoothly transition from chip memory to 8051
- * memory.
- */
- if (read_8051_data(dd,
- (u32)(csr_off - barlen),
- sizeof(data), &data))
- break; /* failed */
- } else
- data = readq(base + total);
- if (put_user(data, (unsigned long __user *)(buf + total)))
- break;
- }
- *f_pos += total;
- return total;
-}
-
-/* NOTE: assumes unsigned long is 8 bytes */
-static ssize_t ui_write(struct file *filp, const char __user *buf,
- size_t count, loff_t *f_pos)
-{
- struct hfi1_devdata *dd = filp->private_data;
- void __iomem *base;
- unsigned long total, data, csr_off;
- int in_lcb;
-
- /* only write 8 byte quantities */
- if ((count % 8) != 0)
- return -EINVAL;
- /* offset must be 8-byte aligned */
- if ((*f_pos % 8) != 0)
- return -EINVAL;
- /* source buffer must be 8-byte aligned */
- if ((unsigned long)buf % 8 != 0)
- return -EINVAL;
- /* must be in range */
- if (*f_pos + count > dd->kregend - dd->kregbase)
- return -EINVAL;
-
- base = (void __iomem *)dd->kregbase + *f_pos;
- csr_off = *f_pos;
- in_lcb = 0;
- for (total = 0; total < count; total += 8, csr_off += 8) {
- if (get_user(data, (unsigned long __user *)(buf + total)))
- break;
- /* accessing LCB CSRs requires a special procedure */
- if (is_lcb_offset(csr_off)) {
- if (!in_lcb) {
- int ret = acquire_lcb_access(dd, 1);
-
- if (ret)
- break;
- in_lcb = 1;
- }
- } else {
- if (in_lcb) {
- release_lcb_access(dd, 1);
- in_lcb = 0;
- }
- }
- writeq(data, base + total);
- }
- if (in_lcb)
- release_lcb_access(dd, 1);
- *f_pos += total;
- return total;
-}
-
-static const struct file_operations ui_file_ops = {
- .owner = THIS_MODULE,
- .llseek = ui_lseek,
- .read = ui_read,
- .write = ui_write,
- .open = ui_open,
- .release = ui_release,
-};
-
-#define UI_OFFSET 192 /* device minor offset for UI devices */
-static int create_ui = 1;
-
-static struct cdev wildcard_cdev;
-static struct device *wildcard_device;
-
-static atomic_t user_count = ATOMIC_INIT(0);
-
static void user_remove(struct hfi1_devdata *dd)
{
- if (atomic_dec_return(&user_count) == 0)
- hfi1_cdev_cleanup(&wildcard_cdev, &wildcard_device);
hfi1_cdev_cleanup(&dd->user_cdev, &dd->user_device);
- hfi1_cdev_cleanup(&dd->ui_cdev, &dd->ui_device);
}
static int user_add(struct hfi1_devdata *dd)
@@ -1717,34 +1470,13 @@ static int user_add(struct hfi1_devdata *dd)
char name[10];
int ret;
- if (atomic_inc_return(&user_count) == 1) {
- ret = hfi1_cdev_init(0, class_name(), &hfi1_file_ops,
- &wildcard_cdev, &wildcard_device,
- true);
- if (ret)
- goto done;
- }
-
snprintf(name, sizeof(name), "%s_%d", class_name(), dd->unit);
- ret = hfi1_cdev_init(dd->unit + 1, name, &hfi1_file_ops,
+ ret = hfi1_cdev_init(dd->unit, name, &hfi1_file_ops,
&dd->user_cdev, &dd->user_device,
- true);
+ true, &dd->kobj);
if (ret)
- goto done;
+ user_remove(dd);
- if (create_ui) {
- snprintf(name, sizeof(name),
- "%s_ui%d", class_name(), dd->unit);
- ret = hfi1_cdev_init(dd->unit + UI_OFFSET, name, &ui_file_ops,
- &dd->ui_cdev, &dd->ui_device,
- false);
- if (ret)
- goto done;
- }
-
- return 0;
-done:
- user_remove(dd);
return ret;
}
@@ -1753,13 +1485,7 @@ done:
*/
int hfi1_device_create(struct hfi1_devdata *dd)
{
- int r, ret;
-
- r = user_add(dd);
- ret = hfi1_diag_add(dd);
- if (r && !ret)
- ret = r;
- return ret;
+ return user_add(dd);
}
/*
@@ -1769,5 +1495,4 @@ int hfi1_device_create(struct hfi1_devdata *dd)
void hfi1_device_remove(struct hfi1_devdata *dd)
{
user_remove(dd);
- hfi1_diag_remove(dd);
}
diff --git a/drivers/staging/rdma/hfi1/firmware.c b/drivers/infiniband/hw/hfi1/firmware.c
index 3040162cb326..ed680fda611d 100644
--- a/drivers/staging/rdma/hfi1/firmware.c
+++ b/drivers/infiniband/hw/hfi1/firmware.c
@@ -1413,8 +1413,15 @@ static int __acquire_chip_resource(struct hfi1_devdata *dd, u32 resource)
if (resource & CR_DYN_MASK) {
/* a dynamic resource is in use if either HFI has set the bit */
- all_bits = resource_mask(0, resource) |
+ if (dd->pcidev->device == PCI_DEVICE_ID_INTEL0 &&
+ (resource & (CR_I2C1 | CR_I2C2))) {
+ /* discrete devices must serialize across both chains */
+ all_bits = resource_mask(0, CR_I2C1 | CR_I2C2) |
+ resource_mask(1, CR_I2C1 | CR_I2C2);
+ } else {
+ all_bits = resource_mask(0, resource) |
resource_mask(1, resource);
+ }
my_bit = resource_mask(dd->hfi1_id, resource);
} else {
/* non-dynamic resources are not split between HFIs */
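The firmware.c change makes the two QSFP i2c chains one serialization domain on discrete devices: when either CR_I2C1 or CR_I2C2 is requested on a PCI_DEVICE_ID_INTEL0 part, the busy check covers both chains on both HFIs. A standalone sketch of how the combined mask grows, assuming (this is an assumption, not taken from the driver) that resource_mask() shifts each HFI's bits into its own byte, and using made-up bit values for the two chains:

#include <stdio.h>

#define CR_I2C1 0x1u	/* hypothetical bit values, for illustration only */
#define CR_I2C2 0x2u
#define resource_mask(hfi1_id, resource) ((resource) << ((hfi1_id) * 8))

int main(void)
{
	unsigned int one_chain   = resource_mask(0, CR_I2C1) |
				   resource_mask(1, CR_I2C1);
	unsigned int both_chains = resource_mask(0, CR_I2C1 | CR_I2C2) |
				   resource_mask(1, CR_I2C1 | CR_I2C2);

	/* 0x00000101: only the I2C1 bit, once per HFI */
	printf("per-chain mask: 0x%08x\n", one_chain);
	/* 0x00000303: both chains on both HFIs, what discrete parts wait on */
	printf("discrete mask:  0x%08x\n", both_chains);
	return 0;
}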
diff --git a/drivers/staging/rdma/hfi1/hfi.h b/drivers/infiniband/hw/hfi1/hfi.h
index 16cbdc4073e0..4417a0fd3ef9 100644
--- a/drivers/staging/rdma/hfi1/hfi.h
+++ b/drivers/infiniband/hw/hfi1/hfi.h
@@ -453,11 +453,12 @@ struct rvt_sge_state;
#define HLS_LINK_COOLDOWN BIT(__HLS_LINK_COOLDOWN_BP)
#define HLS_UP (HLS_UP_INIT | HLS_UP_ARMED | HLS_UP_ACTIVE)
+#define HLS_DOWN ~(HLS_UP)
/* use this MTU size if none other is given */
-#define HFI1_DEFAULT_ACTIVE_MTU 8192
+#define HFI1_DEFAULT_ACTIVE_MTU 10240
/* use this MTU size as the default maximum */
-#define HFI1_DEFAULT_MAX_MTU 8192
+#define HFI1_DEFAULT_MAX_MTU 10240
/* default partition key */
#define DEFAULT_PKEY 0xffff
@@ -606,7 +607,6 @@ struct hfi1_pportdata {
struct work_struct link_vc_work;
struct work_struct link_up_work;
struct work_struct link_down_work;
- struct work_struct dc_host_req_work;
struct work_struct sma_message_work;
struct work_struct freeze_work;
struct work_struct link_downgrade_work;
@@ -1169,6 +1169,7 @@ struct hfi1_devdata {
atomic_t aspm_disabled_cnt;
struct hfi1_affinity *affinity;
+ struct kobject kobj;
};
/* 8051 firmware version helper */
@@ -1258,7 +1259,7 @@ void receive_interrupt_work(struct work_struct *work);
static inline int hdr2sc(struct hfi1_message_header *hdr, u64 rhf)
{
return ((be16_to_cpu(hdr->lrh[0]) >> 12) & 0xf) |
- ((!!(rhf & RHF_DC_INFO_MASK)) << 4);
+ ((!!(rhf & RHF_DC_INFO_SMASK)) << 4);
}
static inline u16 generate_jkey(kuid_t uid)
@@ -1333,6 +1334,9 @@ void process_becn(struct hfi1_pportdata *ppd, u8 sl, u16 rlid, u32 lqpn,
void return_cnp(struct hfi1_ibport *ibp, struct rvt_qp *qp, u32 remote_qpn,
u32 pkey, u32 slid, u32 dlid, u8 sc5,
const struct ib_grh *old_grh);
+#define PKEY_CHECK_INVALID -1
+int egress_pkey_check(struct hfi1_pportdata *ppd, __be16 *lrh, __be32 *bth,
+ u8 sc5, int8_t s_pkey_index);
#define PACKET_EGRESS_TIMEOUT 350
static inline void pause_for_credit_return(struct hfi1_devdata *dd)
@@ -1776,6 +1780,7 @@ extern struct mutex hfi1_mutex;
#define HFI1_PKT_USER_SC_INTEGRITY \
(SEND_CTXT_CHECK_ENABLE_DISALLOW_NON_KDETH_PACKETS_SMASK \
+ | SEND_CTXT_CHECK_ENABLE_DISALLOW_KDETH_PACKETS_SMASK \
| SEND_CTXT_CHECK_ENABLE_DISALLOW_BYPASS_SMASK \
| SEND_CTXT_CHECK_ENABLE_DISALLOW_GRH_SMASK)
@@ -1879,9 +1884,8 @@ static inline u64 hfi1_pkt_base_sdma_integrity(struct hfi1_devdata *dd)
get_unit_name((dd)->unit), ##__VA_ARGS__)
#define hfi1_dev_porterr(dd, port, fmt, ...) \
- dev_err(&(dd)->pcidev->dev, "%s: IB%u:%u " fmt, \
- get_unit_name((dd)->unit), (dd)->unit, (port), \
- ##__VA_ARGS__)
+ dev_err(&(dd)->pcidev->dev, "%s: port %u: " fmt, \
+ get_unit_name((dd)->unit), (port), ##__VA_ARGS__)
/*
* this is used for formatting hw error messages...
diff --git a/drivers/staging/rdma/hfi1/init.c b/drivers/infiniband/hw/hfi1/init.c
index cfcdc16b41c3..5cc492e5776d 100644
--- a/drivers/staging/rdma/hfi1/init.c
+++ b/drivers/infiniband/hw/hfi1/init.c
@@ -422,9 +422,10 @@ static enum hrtimer_restart cca_timer_fn(struct hrtimer *t)
struct cca_timer *cca_timer;
struct hfi1_pportdata *ppd;
int sl;
- u16 ccti, ccti_timer, ccti_min;
+ u16 ccti_timer, ccti_min;
struct cc_state *cc_state;
unsigned long flags;
+ enum hrtimer_restart ret = HRTIMER_NORESTART;
cca_timer = container_of(t, struct cca_timer, hrtimer);
ppd = cca_timer->ppd;
@@ -450,24 +451,21 @@ static enum hrtimer_restart cca_timer_fn(struct hrtimer *t)
spin_lock_irqsave(&ppd->cca_timer_lock, flags);
- ccti = cca_timer->ccti;
-
- if (ccti > ccti_min) {
+ if (cca_timer->ccti > ccti_min) {
cca_timer->ccti--;
set_link_ipg(ppd);
}
- spin_unlock_irqrestore(&ppd->cca_timer_lock, flags);
-
- rcu_read_unlock();
-
- if (ccti > ccti_min) {
+ if (cca_timer->ccti > ccti_min) {
unsigned long nsec = 1024 * ccti_timer;
/* ccti_timer is in units of 1.024 usec */
hrtimer_forward_now(t, ns_to_ktime(nsec));
- return HRTIMER_RESTART;
+ ret = HRTIMER_RESTART;
}
- return HRTIMER_NORESTART;
+
+ spin_unlock_irqrestore(&ppd->cca_timer_lock, flags);
+ rcu_read_unlock();
+ return ret;
}
/*
@@ -496,7 +494,6 @@ void hfi1_init_pportdata(struct pci_dev *pdev, struct hfi1_pportdata *ppd,
INIT_WORK(&ppd->link_vc_work, handle_verify_cap);
INIT_WORK(&ppd->link_up_work, handle_link_up);
INIT_WORK(&ppd->link_down_work, handle_link_down);
- INIT_WORK(&ppd->dc_host_req_work, handle_8051_request);
INIT_WORK(&ppd->freeze_work, handle_freeze);
INIT_WORK(&ppd->link_downgrade_work, handle_link_downgrade);
INIT_WORK(&ppd->sma_message_work, handle_sma_message);
@@ -735,12 +732,12 @@ int hfi1_init(struct hfi1_devdata *dd, int reinit)
lastfail = hfi1_create_rcvhdrq(dd, rcd);
if (!lastfail)
lastfail = hfi1_setup_eagerbufs(rcd);
- if (lastfail)
+ if (lastfail) {
dd_dev_err(dd,
"failed to allocate kernel ctxt's rcvhdrq and/or egr bufs\n");
+ ret = lastfail;
+ }
}
- if (lastfail)
- ret = lastfail;
/* Allocate enough memory for user event notification. */
len = PAGE_ALIGN(dd->chip_rcv_contexts * HFI1_MAX_SHARED_CTXTS *
@@ -992,8 +989,10 @@ static void release_asic_data(struct hfi1_devdata *dd)
dd->asic_data = NULL;
}
-void hfi1_free_devdata(struct hfi1_devdata *dd)
+static void __hfi1_free_devdata(struct kobject *kobj)
{
+ struct hfi1_devdata *dd =
+ container_of(kobj, struct hfi1_devdata, kobj);
unsigned long flags;
spin_lock_irqsave(&hfi1_devs_lock, flags);
@@ -1007,7 +1006,16 @@ void hfi1_free_devdata(struct hfi1_devdata *dd)
free_percpu(dd->rcv_limit);
hfi1_dev_affinity_free(dd);
free_percpu(dd->send_schedule);
- ib_dealloc_device(&dd->verbs_dev.rdi.ibdev);
+ rvt_dealloc_device(&dd->verbs_dev.rdi);
+}
+
+static struct kobj_type hfi1_devdata_type = {
+ .release = __hfi1_free_devdata,
+};
+
+void hfi1_free_devdata(struct hfi1_devdata *dd)
+{
+ kobject_put(&dd->kobj);
}
/*
@@ -1105,12 +1113,13 @@ struct hfi1_devdata *hfi1_alloc_devdata(struct pci_dev *pdev, size_t extra)
&pdev->dev,
"Could not alloc cpulist info, cpu affinity might be wrong\n");
}
+ kobject_init(&dd->kobj, &hfi1_devdata_type);
return dd;
bail:
if (!list_empty(&dd->list))
list_del_init(&dd->list);
- ib_dealloc_device(&dd->verbs_dev.rdi.ibdev);
+ rvt_dealloc_device(&dd->verbs_dev.rdi);
return ERR_PTR(ret);
}
@@ -1303,7 +1312,7 @@ static void cleanup_device_data(struct hfi1_devdata *dd)
spin_lock(&ppd->cc_state_lock);
cc_state = get_cc_state(ppd);
- rcu_assign_pointer(ppd->cc_state, NULL);
+ RCU_INIT_POINTER(ppd->cc_state, NULL);
spin_unlock(&ppd->cc_state_lock);
if (cc_state)
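The devdata is now reference counted through the kobject embedded in struct hfi1_devdata: hfi1_alloc_devdata() initializes it with a ktype whose release callback is the old free path, hfi1_free_devdata() becomes a kobject_put(), and hfi1_file_close() above drops another reference the same way, so the structure stays alive as long as a user context still has the file open. A minimal sketch of that lifetime pattern with illustrative names, not the driver's structures:

#include <linux/kobject.h>
#include <linux/slab.h>

struct demo_dev {
	struct kobject kobj;
	/* ... device state ... */
};

/* Runs only when the last reference is dropped. */
static void demo_dev_release(struct kobject *kobj)
{
	kfree(container_of(kobj, struct demo_dev, kobj));
}

static struct kobj_type demo_dev_ktype = {
	.release = demo_dev_release,
};

static struct demo_dev *demo_dev_alloc(void)
{
	struct demo_dev *dev = kzalloc(sizeof(*dev), GFP_KERNEL);

	if (dev)
		kobject_init(&dev->kobj, &demo_dev_ktype);	/* refcount = 1 */
	return dev;
}

/* Pin the device for as long as an opener needs it... */
static void demo_dev_get(struct demo_dev *dev)
{
	kobject_get(&dev->kobj);
}

/* ...and both the close path and the driver's own free path just drop one. */
static void demo_dev_put(struct demo_dev *dev)
{
	kobject_put(&dev->kobj);
}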
diff --git a/drivers/staging/rdma/hfi1/intr.c b/drivers/infiniband/hw/hfi1/intr.c
index 65348d16ab2f..65348d16ab2f 100644
--- a/drivers/staging/rdma/hfi1/intr.c
+++ b/drivers/infiniband/hw/hfi1/intr.c
diff --git a/drivers/staging/rdma/hfi1/iowait.h b/drivers/infiniband/hw/hfi1/iowait.h
index 2ec6ef38d389..2ec6ef38d389 100644
--- a/drivers/staging/rdma/hfi1/iowait.h
+++ b/drivers/infiniband/hw/hfi1/iowait.h
diff --git a/drivers/staging/rdma/hfi1/mad.c b/drivers/infiniband/hw/hfi1/mad.c
index d1e7f4d7cf6f..219029576ba0 100644
--- a/drivers/staging/rdma/hfi1/mad.c
+++ b/drivers/infiniband/hw/hfi1/mad.c
@@ -999,7 +999,21 @@ static int set_port_states(struct hfi1_pportdata *ppd, struct opa_smp *smp,
break;
}
- set_link_state(ppd, link_state);
+ if ((link_state == HLS_DN_POLL ||
+ link_state == HLS_DN_DOWNDEF)) {
+ /*
+ * Going to poll. No matter what the current state,
+ * always move offline first, then tune and start the
+ * link. This correctly handles a FM link bounce and
+ * a link enable. Going offline is a no-op if already
+ * offline.
+ */
+ set_link_state(ppd, HLS_DN_OFFLINE);
+ tune_serdes(ppd);
+ start_link(ppd);
+ } else {
+ set_link_state(ppd, link_state);
+ }
if (link_state == HLS_DN_DISABLE &&
(ppd->offline_disabled_reason >
HFI1_ODR_MASK(OPA_LINKDOWN_REASON_SMA_DISABLED) ||
@@ -1389,6 +1403,12 @@ static int set_pkeys(struct hfi1_devdata *dd, u8 port, u16 *pkeys)
if (key == okey)
continue;
/*
+	 * Don't update pkeys[2] when the neighbor node is a switch and
+	 * MgmtAllowed is not set for this HFI port.
+ */
+ if (i == 2 && !ppd->mgmt_allowed && ppd->neighbor_type == 1)
+ continue;
+ /*
* The SM gives us the complete PKey table. We have
* to ensure that we put the PKeys in the matching
* slots.
@@ -3349,6 +3369,50 @@ static int __subn_get_opa_cong_setting(struct opa_smp *smp, u32 am,
return reply((struct ib_mad_hdr *)smp);
}
+/*
+ * Apply congestion control information stored in the ppd to the
+ * active structure.
+ */
+static void apply_cc_state(struct hfi1_pportdata *ppd)
+{
+ struct cc_state *old_cc_state, *new_cc_state;
+
+ new_cc_state = kzalloc(sizeof(*new_cc_state), GFP_KERNEL);
+ if (!new_cc_state)
+ return;
+
+ /*
+ * Hold the lock for updating *and* to prevent ppd information
+ * from changing during the update.
+ */
+ spin_lock(&ppd->cc_state_lock);
+
+ old_cc_state = get_cc_state(ppd);
+ if (!old_cc_state) {
+ /* never active, or shutting down */
+ spin_unlock(&ppd->cc_state_lock);
+ kfree(new_cc_state);
+ return;
+ }
+
+ *new_cc_state = *old_cc_state;
+
+ new_cc_state->cct.ccti_limit = ppd->total_cct_entry - 1;
+ memcpy(new_cc_state->cct.entries, ppd->ccti_entries,
+ ppd->total_cct_entry * sizeof(struct ib_cc_table_entry));
+
+ new_cc_state->cong_setting.port_control = IB_CC_CCS_PC_SL_BASED;
+ new_cc_state->cong_setting.control_map = ppd->cc_sl_control_map;
+ memcpy(new_cc_state->cong_setting.entries, ppd->congestion_entries,
+ OPA_MAX_SLS * sizeof(struct opa_congestion_setting_entry));
+
+ rcu_assign_pointer(ppd->cc_state, new_cc_state);
+
+ spin_unlock(&ppd->cc_state_lock);
+
+ call_rcu(&old_cc_state->rcu, cc_state_reclaim);
+}
+
static int __subn_set_opa_cong_setting(struct opa_smp *smp, u32 am, u8 *data,
struct ib_device *ibdev, u8 port,
u32 *resp_len)
@@ -3360,6 +3424,11 @@ static int __subn_set_opa_cong_setting(struct opa_smp *smp, u32 am, u8 *data,
struct opa_congestion_setting_entry_shadow *entries;
int i;
+ /*
+ * Save details from packet into the ppd. Hold the cc_state_lock so
+ * our information is consistent with anyone trying to apply the state.
+ */
+ spin_lock(&ppd->cc_state_lock);
ppd->cc_sl_control_map = be32_to_cpu(p->control_map);
entries = ppd->congestion_entries;
@@ -3370,6 +3439,10 @@ static int __subn_set_opa_cong_setting(struct opa_smp *smp, u32 am, u8 *data,
p->entries[i].trigger_threshold;
entries[i].ccti_min = p->entries[i].ccti_min;
}
+ spin_unlock(&ppd->cc_state_lock);
+
+ /* now apply the information */
+ apply_cc_state(ppd);
return __subn_get_opa_cong_setting(smp, am, data, ibdev, port,
resp_len);
@@ -3512,7 +3585,6 @@ static int __subn_set_opa_cc_table(struct opa_smp *smp, u32 am, u8 *data,
int i, j;
u32 sentry, eentry;
u16 ccti_limit;
- struct cc_state *old_cc_state, *new_cc_state;
/* sanity check n_blocks, start_block */
if (n_blocks == 0 ||
@@ -3532,45 +3604,20 @@ static int __subn_set_opa_cc_table(struct opa_smp *smp, u32 am, u8 *data,
return reply((struct ib_mad_hdr *)smp);
}
- new_cc_state = kzalloc(sizeof(*new_cc_state), GFP_KERNEL);
- if (!new_cc_state)
- goto getit;
-
+ /*
+ * Save details from packet into the ppd. Hold the cc_state_lock so
+ * our information is consistent with anyone trying to apply the state.
+ */
spin_lock(&ppd->cc_state_lock);
-
- old_cc_state = get_cc_state(ppd);
-
- if (!old_cc_state) {
- spin_unlock(&ppd->cc_state_lock);
- kfree(new_cc_state);
- return reply((struct ib_mad_hdr *)smp);
- }
-
- *new_cc_state = *old_cc_state;
-
- new_cc_state->cct.ccti_limit = ccti_limit;
-
- entries = ppd->ccti_entries;
ppd->total_cct_entry = ccti_limit + 1;
-
+ entries = ppd->ccti_entries;
for (j = 0, i = sentry; i < eentry; j++, i++)
entries[i].entry = be16_to_cpu(p->ccti_entries[j].entry);
-
- memcpy(new_cc_state->cct.entries, entries,
- eentry * sizeof(struct ib_cc_table_entry));
-
- new_cc_state->cong_setting.port_control = IB_CC_CCS_PC_SL_BASED;
- new_cc_state->cong_setting.control_map = ppd->cc_sl_control_map;
- memcpy(new_cc_state->cong_setting.entries, ppd->congestion_entries,
- OPA_MAX_SLS * sizeof(struct opa_congestion_setting_entry));
-
- rcu_assign_pointer(ppd->cc_state, new_cc_state);
-
spin_unlock(&ppd->cc_state_lock);
- call_rcu(&old_cc_state->rcu, cc_state_reclaim);
+ /* now apply the information */
+ apply_cc_state(ppd);
-getit:
return __subn_get_opa_cc_table(smp, am, data, ibdev, port, resp_len);
}
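Both congestion-control set handlers now only record the incoming data under cc_state_lock and call apply_cc_state(), which follows the usual RCU copy-update-publish sequence: allocate a new cc_state, copy the live one, update the copy, rcu_assign_pointer() it into place, and free the old copy after a grace period. A stripped-down sketch of that pattern with illustrative structure and lock names (not the driver's own):

#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/types.h>

struct demo_state {
	struct rcu_head rcu;
	u32 value;
};

static struct demo_state __rcu *live_state;
static DEFINE_SPINLOCK(state_lock);

static void demo_state_free(struct rcu_head *rcu)
{
	kfree(container_of(rcu, struct demo_state, rcu));
}

/* Copy-update-publish: readers always see a complete, consistent snapshot. */
static void demo_apply(u32 new_value)
{
	struct demo_state *old_state, *new_state;

	new_state = kzalloc(sizeof(*new_state), GFP_KERNEL);
	if (!new_state)
		return;

	spin_lock(&state_lock);
	old_state = rcu_dereference_protected(live_state,
					      lockdep_is_held(&state_lock));
	if (old_state)
		*new_state = *old_state;	/* start from the live snapshot */
	new_state->value = new_value;
	rcu_assign_pointer(live_state, new_state);
	spin_unlock(&state_lock);

	if (old_state)
		call_rcu(&old_state->rcu, demo_state_free);
}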
diff --git a/drivers/staging/rdma/hfi1/mad.h b/drivers/infiniband/hw/hfi1/mad.h
index 55ee08675333..55ee08675333 100644
--- a/drivers/staging/rdma/hfi1/mad.h
+++ b/drivers/infiniband/hw/hfi1/mad.h
diff --git a/drivers/staging/rdma/hfi1/mmu_rb.c b/drivers/infiniband/hw/hfi1/mmu_rb.c
index b3f0682a36c9..b7a80aa1ae30 100644
--- a/drivers/staging/rdma/hfi1/mmu_rb.c
+++ b/drivers/infiniband/hw/hfi1/mmu_rb.c
@@ -45,6 +45,7 @@
*
*/
#include <linux/list.h>
+#include <linux/rculist.h>
#include <linux/mmu_notifier.h>
#include <linux/interval_tree_generic.h>
@@ -91,13 +92,12 @@ static unsigned long mmu_node_start(struct mmu_rb_node *node)
static unsigned long mmu_node_last(struct mmu_rb_node *node)
{
- return PAGE_ALIGN((node->addr & PAGE_MASK) + node->len) - 1;
+ return PAGE_ALIGN(node->addr + node->len) - 1;
}
int hfi1_mmu_rb_register(struct rb_root *root, struct mmu_rb_ops *ops)
{
struct mmu_rb_handler *handlr;
- unsigned long flags;
if (!ops->invalidate)
return -EINVAL;
@@ -111,9 +111,9 @@ int hfi1_mmu_rb_register(struct rb_root *root, struct mmu_rb_ops *ops)
INIT_HLIST_NODE(&handlr->mn.hlist);
spin_lock_init(&handlr->lock);
handlr->mn.ops = &mn_opts;
- spin_lock_irqsave(&mmu_rb_lock, flags);
- list_add_tail(&handlr->list, &mmu_rb_handlers);
- spin_unlock_irqrestore(&mmu_rb_lock, flags);
+ spin_lock(&mmu_rb_lock);
+ list_add_tail_rcu(&handlr->list, &mmu_rb_handlers);
+ spin_unlock(&mmu_rb_lock);
return mmu_notifier_register(&handlr->mn, current->mm);
}
@@ -126,10 +126,16 @@ void hfi1_mmu_rb_unregister(struct rb_root *root)
if (!handler)
return;
- spin_lock_irqsave(&mmu_rb_lock, flags);
- list_del(&handler->list);
- spin_unlock_irqrestore(&mmu_rb_lock, flags);
+ /* Unregister first so we don't get any more notifications. */
+ if (current->mm)
+ mmu_notifier_unregister(&handler->mn, current->mm);
+
+ spin_lock(&mmu_rb_lock);
+ list_del_rcu(&handler->list);
+ spin_unlock(&mmu_rb_lock);
+ synchronize_rcu();
+ spin_lock_irqsave(&handler->lock, flags);
if (!RB_EMPTY_ROOT(root)) {
struct rb_node *node;
struct mmu_rb_node *rbnode;
@@ -141,9 +147,8 @@ void hfi1_mmu_rb_unregister(struct rb_root *root)
handler->ops->remove(root, rbnode, NULL);
}
}
+ spin_unlock_irqrestore(&handler->lock, flags);
- if (current->mm)
- mmu_notifier_unregister(&handler->mn, current->mm);
kfree(handler);
}
@@ -235,6 +240,25 @@ struct mmu_rb_node *hfi1_mmu_rb_search(struct rb_root *root, unsigned long addr,
return node;
}
+struct mmu_rb_node *hfi1_mmu_rb_extract(struct rb_root *root,
+ unsigned long addr, unsigned long len)
+{
+ struct mmu_rb_handler *handler = find_mmu_handler(root);
+ struct mmu_rb_node *node;
+ unsigned long flags;
+
+ if (!handler)
+ return ERR_PTR(-EINVAL);
+
+ spin_lock_irqsave(&handler->lock, flags);
+ node = __mmu_rb_search(handler, addr, len);
+ if (node)
+ __mmu_int_rb_remove(node, handler->root);
+ spin_unlock_irqrestore(&handler->lock, flags);
+
+ return node;
+}
+
void hfi1_mmu_rb_remove(struct rb_root *root, struct mmu_rb_node *node)
{
struct mmu_rb_handler *handler = find_mmu_handler(root);
@@ -248,16 +272,15 @@ void hfi1_mmu_rb_remove(struct rb_root *root, struct mmu_rb_node *node)
static struct mmu_rb_handler *find_mmu_handler(struct rb_root *root)
{
struct mmu_rb_handler *handler;
- unsigned long flags;
- spin_lock_irqsave(&mmu_rb_lock, flags);
- list_for_each_entry(handler, &mmu_rb_handlers, list) {
+ rcu_read_lock();
+ list_for_each_entry_rcu(handler, &mmu_rb_handlers, list) {
if (handler->root == root)
goto unlock;
}
handler = NULL;
unlock:
- spin_unlock_irqrestore(&mmu_rb_lock, flags);
+ rcu_read_unlock();
return handler;
}
@@ -293,9 +316,9 @@ static void mmu_notifier_mem_invalidate(struct mmu_notifier *mn,
hfi1_cdbg(MMU, "Invalidating node addr 0x%llx, len %u",
node->addr, node->len);
if (handler->ops->invalidate(root, node)) {
- spin_unlock_irqrestore(&handler->lock, flags);
- __mmu_rb_remove(handler, node, mm);
- spin_lock_irqsave(&handler->lock, flags);
+ __mmu_int_rb_remove(node, root);
+ if (handler->ops->remove)
+ handler->ops->remove(root, node, mm);
}
}
spin_unlock_irqrestore(&handler->lock, flags);
diff --git a/drivers/staging/rdma/hfi1/mmu_rb.h b/drivers/infiniband/hw/hfi1/mmu_rb.h
index 19a306e83c7d..7a57b9c49d27 100644
--- a/drivers/staging/rdma/hfi1/mmu_rb.h
+++ b/drivers/infiniband/hw/hfi1/mmu_rb.h
@@ -70,5 +70,7 @@ int hfi1_mmu_rb_insert(struct rb_root *, struct mmu_rb_node *);
void hfi1_mmu_rb_remove(struct rb_root *, struct mmu_rb_node *);
struct mmu_rb_node *hfi1_mmu_rb_search(struct rb_root *, unsigned long,
unsigned long);
+struct mmu_rb_node *hfi1_mmu_rb_extract(struct rb_root *, unsigned long,
+ unsigned long);
#endif /* _HFI1_MMU_RB_H */
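hfi1_mmu_rb_extract() is a combined lookup-and-unlink: the search and the __mmu_int_rb_remove() happen under a single hold of the handler lock, so the caller gets back a node that is already out of the interval tree instead of calling hfi1_mmu_rb_search() and hfi1_mmu_rb_remove() separately. A sketch of the caller side (the wrapper below is illustrative, not from the driver):

#include <linux/err.h>

/*
 * Take ownership of the node covering [addr, addr + len). The helper
 * returns ERR_PTR(-EINVAL) when no handler is registered for this root,
 * NULL when nothing matches, or the node already removed from the tree.
 */
static struct mmu_rb_node *demo_take_node(struct rb_root *root,
					  unsigned long addr,
					  unsigned long len)
{
	struct mmu_rb_node *node = hfi1_mmu_rb_extract(root, addr, len);

	if (IS_ERR_OR_NULL(node))
		return NULL;

	return node;	/* caller owns it; it is no longer in the tree */
}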
diff --git a/drivers/staging/rdma/hfi1/opa_compat.h b/drivers/infiniband/hw/hfi1/opa_compat.h
index 6ef3c1cbdcd7..6ef3c1cbdcd7 100644
--- a/drivers/staging/rdma/hfi1/opa_compat.h
+++ b/drivers/infiniband/hw/hfi1/opa_compat.h
diff --git a/drivers/staging/rdma/hfi1/pcie.c b/drivers/infiniband/hw/hfi1/pcie.c
index 0bac21e6a658..0bac21e6a658 100644
--- a/drivers/staging/rdma/hfi1/pcie.c
+++ b/drivers/infiniband/hw/hfi1/pcie.c
diff --git a/drivers/staging/rdma/hfi1/pio.c b/drivers/infiniband/hw/hfi1/pio.c
index c6849ce9e5eb..d5edb1afbb8f 100644
--- a/drivers/staging/rdma/hfi1/pio.c
+++ b/drivers/infiniband/hw/hfi1/pio.c
@@ -139,23 +139,30 @@ void pio_send_control(struct hfi1_devdata *dd, int op)
/* Send Context Size (SCS) wildcards */
#define SCS_POOL_0 -1
#define SCS_POOL_1 -2
+
/* Send Context Count (SCC) wildcards */
#define SCC_PER_VL -1
#define SCC_PER_CPU -2
-
#define SCC_PER_KRCVQ -3
-#define SCC_ACK_CREDITS 32
+
+/* Send Context Size (SCS) constants */
+#define SCS_ACK_CREDITS 32
+#define SCS_VL15_CREDITS 102 /* 3 pkts of 2048B data + 128B header */
+
+#define PIO_THRESHOLD_CEILING 4096
#define PIO_WAIT_BATCH_SIZE 5
/* default send context sizes */
static struct sc_config_sizes sc_config_sizes[SC_MAX] = {
[SC_KERNEL] = { .size = SCS_POOL_0, /* even divide, pool 0 */
- .count = SCC_PER_VL },/* one per NUMA */
- [SC_ACK] = { .size = SCC_ACK_CREDITS,
+ .count = SCC_PER_VL }, /* one per NUMA */
+ [SC_ACK] = { .size = SCS_ACK_CREDITS,
.count = SCC_PER_KRCVQ },
[SC_USER] = { .size = SCS_POOL_0, /* even divide, pool 0 */
.count = SCC_PER_CPU }, /* one per CPU */
+ [SC_VL15] = { .size = SCS_VL15_CREDITS,
+ .count = 1 },
};
@@ -202,7 +209,8 @@ static int wildcard_to_pool(int wc)
static const char *sc_type_names[SC_MAX] = {
"kernel",
"ack",
- "user"
+ "user",
+ "vl15"
};
static const char *sc_type_name(int index)
@@ -231,6 +239,22 @@ int init_sc_pools_and_sizes(struct hfi1_devdata *dd)
int i;
/*
+ * When SDMA is enabled, kernel context pio packet size is capped by
+ * "piothreshold". Reduce pio buffer allocation for kernel context by
+ * setting it to a fixed size. The allocation allows 3-deep buffering
+ * of the largest pio packets plus up to 128 bytes header, sufficient
+ * to maintain verbs performance.
+ *
+ * When SDMA is disabled, keep the default pooling allocation.
+ */
+ if (HFI1_CAP_IS_KSET(SDMA)) {
+ u16 max_pkt_size = (piothreshold < PIO_THRESHOLD_CEILING) ?
+ piothreshold : PIO_THRESHOLD_CEILING;
+ sc_config_sizes[SC_KERNEL].size =
+ 3 * (max_pkt_size + 128) / PIO_BLOCK_SIZE;
+ }
+
+ /*
* Step 0:
* - copy the centipercents/absolute sizes from the pool config
* - sanity check these values
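When SDMA is enabled the kernel send-context size is no longer pool based; it is sized for three of the largest PIO packets plus up to 128 bytes of header, with the packet size capped by piothreshold and PIO_THRESHOLD_CEILING. Using the 64-byte credit size implied by the SCS_VL15_CREDITS comment above (102 credits = 3 x (2048 + 128) / 64), a quick standalone check of the formula:

#include <assert.h>
#include <stdio.h>

#define PIO_BLOCK_SIZE 64u	/* credit size implied by the VL15 comment */

/* 3-deep buffering of the largest PIO packet plus up to 128B of header */
static unsigned int kernel_sc_credits(unsigned int piothreshold,
				      unsigned int ceiling)
{
	unsigned int max_pkt = piothreshold < ceiling ? piothreshold : ceiling;

	return 3 * (max_pkt + 128) / PIO_BLOCK_SIZE;
}

int main(void)
{
	/* 2048B packets -> 102 credits, the same number used for VL15 */
	assert(kernel_sc_credits(2048, 4096) == 102);
	/* capped by the 4096B ceiling when piothreshold is larger */
	printf("%u\n", kernel_sc_credits(8192, 4096));	/* prints 198 */
	return 0;
}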
@@ -311,7 +335,7 @@ int init_sc_pools_and_sizes(struct hfi1_devdata *dd)
if (i == SC_ACK) {
count = dd->n_krcv_queues;
} else if (i == SC_KERNEL) {
- count = (INIT_SC_PER_VL * num_vls) + 1 /* VL15 */;
+ count = INIT_SC_PER_VL * num_vls;
} else if (count == SCC_PER_CPU) {
count = dd->num_rcv_contexts - dd->n_krcv_queues;
} else if (count < 0) {
@@ -596,7 +620,7 @@ u32 sc_mtu_to_threshold(struct send_context *sc, u32 mtu, u32 hdrqentsize)
* Return value is what to write into the CSR: trigger return when
* unreturned credits pass this count.
*/
-static u32 sc_percent_to_threshold(struct send_context *sc, u32 percent)
+u32 sc_percent_to_threshold(struct send_context *sc, u32 percent)
{
return (sc->credits * percent) / 100;
}
@@ -790,7 +814,10 @@ struct send_context *sc_alloc(struct hfi1_devdata *dd, int type,
* For Ack contexts, set a threshold for half the credits.
* For User contexts use the given percentage. This has been
* sanitized on driver start-up.
- * For Kernel contexts, use the default MTU plus a header.
+ * For Kernel contexts, use the default MTU plus a header
+ * or half the credits, whichever is smaller. This should
+ * work for both the 3-deep buffering allocation and the
+ * pooling allocation.
*/
if (type == SC_ACK) {
thresh = sc_percent_to_threshold(sc, 50);
@@ -798,7 +825,9 @@ struct send_context *sc_alloc(struct hfi1_devdata *dd, int type,
thresh = sc_percent_to_threshold(sc,
user_credit_return_threshold);
} else { /* kernel */
- thresh = sc_mtu_to_threshold(sc, hfi1_max_mtu, hdrqentsize);
+ thresh = min(sc_percent_to_threshold(sc, 50),
+ sc_mtu_to_threshold(sc, hfi1_max_mtu,
+ hdrqentsize));
}
reg = thresh << SC(CREDIT_CTRL_THRESHOLD_SHIFT);
/* add in early return */
@@ -1531,7 +1560,8 @@ static void sc_piobufavail(struct send_context *sc)
unsigned long flags;
unsigned i, n = 0;
- if (dd->send_contexts[sc->sw_index].type != SC_KERNEL)
+ if (dd->send_contexts[sc->sw_index].type != SC_KERNEL &&
+ dd->send_contexts[sc->sw_index].type != SC_VL15)
return;
list = &sc->piowait;
/*
@@ -1805,8 +1835,7 @@ int pio_map_init(struct hfi1_devdata *dd, u8 port, u8 num_vls, u8 *vl_scontexts)
struct pio_vl_map *oldmap, *newmap;
if (!vl_scontexts) {
- /* send context 0 reserved for VL15 */
- for (i = 1; i < dd->num_send_contexts; i++)
+ for (i = 0; i < dd->num_send_contexts; i++)
if (dd->send_contexts[i].type == SC_KERNEL)
num_kernel_send_contexts++;
/* truncate divide */
@@ -1900,7 +1929,7 @@ int init_pervl_scs(struct hfi1_devdata *dd)
u32 ctxt;
struct hfi1_pportdata *ppd = dd->pport;
- dd->vld[15].sc = sc_alloc(dd, SC_KERNEL,
+ dd->vld[15].sc = sc_alloc(dd, SC_VL15,
dd->rcd[0]->rcvhdrqentsize, dd->node);
if (!dd->vld[15].sc)
goto nomem;
diff --git a/drivers/staging/rdma/hfi1/pio.h b/drivers/infiniband/hw/hfi1/pio.h
index 0026976ce4f6..464cbd27b975 100644
--- a/drivers/staging/rdma/hfi1/pio.h
+++ b/drivers/infiniband/hw/hfi1/pio.h
@@ -49,9 +49,10 @@
/* send context types */
#define SC_KERNEL 0
-#define SC_ACK 1
-#define SC_USER 2
-#define SC_MAX 3
+#define SC_VL15 1
+#define SC_ACK 2
+#define SC_USER 3 /* must be the last one: it may take all left */
+#define SC_MAX 4 /* count of send context types */
/* invalid send context index */
#define INVALID_SCI 0xff
@@ -293,6 +294,7 @@ void sc_group_release_update(struct hfi1_devdata *dd, u32 hw_context);
void sc_add_credit_return_intr(struct send_context *sc);
void sc_del_credit_return_intr(struct send_context *sc);
void sc_set_cr_threshold(struct send_context *sc, u32 new_threshold);
+u32 sc_percent_to_threshold(struct send_context *sc, u32 percent);
u32 sc_mtu_to_threshold(struct send_context *sc, u32 mtu, u32 hdrqentsize);
void hfi1_sc_wantpiobuf_intr(struct send_context *sc, u32 needint);
void sc_wait(struct hfi1_devdata *dd);
diff --git a/drivers/staging/rdma/hfi1/pio_copy.c b/drivers/infiniband/hw/hfi1/pio_copy.c
index 8c25e1b58849..8c25e1b58849 100644
--- a/drivers/staging/rdma/hfi1/pio_copy.c
+++ b/drivers/infiniband/hw/hfi1/pio_copy.c
diff --git a/drivers/staging/rdma/hfi1/platform.c b/drivers/infiniband/hw/hfi1/platform.c
index 0a1d074583e4..03df9322f862 100644
--- a/drivers/staging/rdma/hfi1/platform.c
+++ b/drivers/infiniband/hw/hfi1/platform.c
@@ -87,6 +87,17 @@ void free_platform_config(struct hfi1_devdata *dd)
*/
}
+void get_port_type(struct hfi1_pportdata *ppd)
+{
+ int ret;
+
+ ret = get_platform_config_field(ppd->dd, PLATFORM_CONFIG_PORT_TABLE, 0,
+ PORT_TABLE_PORT_TYPE, &ppd->port_type,
+ 4);
+ if (ret)
+ ppd->port_type = PORT_TYPE_UNKNOWN;
+}
+
int set_qsfp_tx(struct hfi1_pportdata *ppd, int on)
{
u8 tx_ctrl_byte = on ? 0x0 : 0xF;
@@ -114,21 +125,11 @@ static int qual_power(struct hfi1_pportdata *ppd)
if (ret)
return ret;
- if (QSFP_HIGH_PWR(cache[QSFP_MOD_PWR_OFFS]) != 4)
- cable_power_class = QSFP_HIGH_PWR(cache[QSFP_MOD_PWR_OFFS]);
- else
- cable_power_class = QSFP_PWR(cache[QSFP_MOD_PWR_OFFS]);
+ cable_power_class = get_qsfp_power_class(cache[QSFP_MOD_PWR_OFFS]);
- if (cable_power_class <= 3 && cable_power_class > (power_class_max - 1))
- ppd->offline_disabled_reason =
- HFI1_ODR_MASK(OPA_LINKDOWN_REASON_POWER_POLICY);
- else if (cable_power_class > 4 && cable_power_class > (power_class_max))
+ if (cable_power_class > power_class_max)
ppd->offline_disabled_reason =
HFI1_ODR_MASK(OPA_LINKDOWN_REASON_POWER_POLICY);
- /*
- * cable_power_class will never have value 4 as this simply
- * means the high power settings are unused
- */
if (ppd->offline_disabled_reason ==
HFI1_ODR_MASK(OPA_LINKDOWN_REASON_POWER_POLICY)) {
@@ -173,12 +174,9 @@ static int set_qsfp_high_power(struct hfi1_pportdata *ppd)
u8 *cache = ppd->qsfp_info.cache;
int ret;
- if (QSFP_HIGH_PWR(cache[QSFP_MOD_PWR_OFFS]) != 4)
- cable_power_class = QSFP_HIGH_PWR(cache[QSFP_MOD_PWR_OFFS]);
- else
- cable_power_class = QSFP_PWR(cache[QSFP_MOD_PWR_OFFS]);
+ cable_power_class = get_qsfp_power_class(cache[QSFP_MOD_PWR_OFFS]);
- if (cable_power_class) {
+ if (cable_power_class > QSFP_POWER_CLASS_1) {
power_ctrl_byte = cache[QSFP_PWR_CTRL_BYTE_OFFS];
power_ctrl_byte |= 1;
@@ -190,8 +188,7 @@ static int set_qsfp_high_power(struct hfi1_pportdata *ppd)
if (ret != 1)
return -EIO;
- if (cable_power_class > 3) {
- /* > power class 4*/
+ if (cable_power_class > QSFP_POWER_CLASS_4) {
power_ctrl_byte |= (1 << 2);
ret = qsfp_write(ppd, ppd->dd->hfi1_id,
QSFP_PWR_CTRL_BYTE_OFFS,
@@ -212,12 +209,21 @@ static void apply_rx_cdr(struct hfi1_pportdata *ppd,
{
u32 rx_preset;
u8 *cache = ppd->qsfp_info.cache;
+ int cable_power_class;
if (!((cache[QSFP_MOD_PWR_OFFS] & 0x4) &&
(cache[QSFP_CDR_INFO_OFFS] & 0x40)))
return;
- /* rx_preset preset to zero to catch error */
+ /* RX CDR present, bypass supported */
+ cable_power_class = get_qsfp_power_class(cache[QSFP_MOD_PWR_OFFS]);
+
+ if (cable_power_class <= QSFP_POWER_CLASS_3) {
+ /* Power class <= 3, ignore config & turn RX CDR on */
+ *cdr_ctrl_byte |= 0xF;
+ return;
+ }
+
get_platform_config_field(
ppd->dd, PLATFORM_CONFIG_RX_PRESET_TABLE,
rx_preset_index, RX_PRESET_TABLE_QSFP_RX_CDR_APPLY,
@@ -250,15 +256,25 @@ static void apply_rx_cdr(struct hfi1_pportdata *ppd,
static void apply_tx_cdr(struct hfi1_pportdata *ppd,
u32 tx_preset_index,
- u8 *ctr_ctrl_byte)
+ u8 *cdr_ctrl_byte)
{
u32 tx_preset;
u8 *cache = ppd->qsfp_info.cache;
+ int cable_power_class;
if (!((cache[QSFP_MOD_PWR_OFFS] & 0x8) &&
(cache[QSFP_CDR_INFO_OFFS] & 0x80)))
return;
+ /* TX CDR present, bypass supported */
+ cable_power_class = get_qsfp_power_class(cache[QSFP_MOD_PWR_OFFS]);
+
+ if (cable_power_class <= QSFP_POWER_CLASS_3) {
+ /* Power class <= 3, ignore config & turn TX CDR on */
+ *cdr_ctrl_byte |= 0xF0;
+ return;
+ }
+
get_platform_config_field(
ppd->dd,
PLATFORM_CONFIG_TX_PRESET_TABLE, tx_preset_index,
@@ -282,10 +298,10 @@ static void apply_tx_cdr(struct hfi1_pportdata *ppd,
(tx_preset << 2) | (tx_preset << 3));
if (tx_preset)
- *ctr_ctrl_byte |= (tx_preset << 4);
+ *cdr_ctrl_byte |= (tx_preset << 4);
else
/* Preserve current/determined RX CDR status */
- *ctr_ctrl_byte &= ((tx_preset << 4) | 0xF);
+ *cdr_ctrl_byte &= ((tx_preset << 4) | 0xF);
}
static void apply_cdr_settings(
@@ -524,7 +540,8 @@ static void apply_tunings(
/* Enable external device config if channel is limiting active */
read_8051_config(ppd->dd, LINK_OPTIMIZATION_SETTINGS,
GENERAL_CONFIG, &config_data);
- config_data |= limiting_active;
+ config_data &= ~(0xff << ENABLE_EXT_DEV_CONFIG_SHIFT);
+ config_data |= ((u32)limiting_active << ENABLE_EXT_DEV_CONFIG_SHIFT);
ret = load_8051_config(ppd->dd, LINK_OPTIMIZATION_SETTINGS,
GENERAL_CONFIG, config_data);
if (ret != HCMD_SUCCESS)
@@ -537,7 +554,8 @@ static void apply_tunings(
/* Pass tuning method to 8051 */
read_8051_config(ppd->dd, LINK_TUNING_PARAMETERS, GENERAL_CONFIG,
&config_data);
- config_data |= tuning_method;
+ config_data &= ~(0xff << TUNING_METHOD_SHIFT);
+ config_data |= ((u32)tuning_method << TUNING_METHOD_SHIFT);
ret = load_8051_config(ppd->dd, LINK_TUNING_PARAMETERS, GENERAL_CONFIG,
config_data);
if (ret != HCMD_SUCCESS)
@@ -559,8 +577,8 @@ static void apply_tunings(
ret = read_8051_config(ppd->dd, DC_HOST_COMM_SETTINGS,
GENERAL_CONFIG, &config_data);
/* Clear, then set the external device config field */
- config_data &= ~(0xFF << 24);
- config_data |= (external_device_config << 24);
+ config_data &= ~(u32)0xFF;
+ config_data |= external_device_config;
ret = load_8051_config(ppd->dd, DC_HOST_COMM_SETTINGS,
GENERAL_CONFIG, config_data);
if (ret != HCMD_SUCCESS)
@@ -598,6 +616,7 @@ static void apply_tunings(
"Applying TX settings");
}
+/* Must be holding the QSFP i2c resource */
static int tune_active_qsfp(struct hfi1_pportdata *ppd, u32 *ptr_tx_preset,
u32 *ptr_rx_preset, u32 *ptr_total_atten)
{
@@ -605,26 +624,19 @@ static int tune_active_qsfp(struct hfi1_pportdata *ppd, u32 *ptr_tx_preset,
u16 lss = ppd->link_speed_supported, lse = ppd->link_speed_enabled;
u8 *cache = ppd->qsfp_info.cache;
- ret = acquire_chip_resource(ppd->dd, qsfp_resource(ppd->dd), QSFP_WAIT);
- if (ret) {
- dd_dev_err(ppd->dd, "%s: hfi%d: cannot lock i2c chain\n",
- __func__, (int)ppd->dd->hfi1_id);
- return ret;
- }
-
ppd->qsfp_info.limiting_active = 1;
ret = set_qsfp_tx(ppd, 0);
if (ret)
- goto bail_unlock;
+ return ret;
ret = qual_power(ppd);
if (ret)
- goto bail_unlock;
+ return ret;
ret = qual_bitrate(ppd);
if (ret)
- goto bail_unlock;
+ return ret;
if (ppd->qsfp_info.reset_needed) {
reset_qsfp(ppd);
@@ -636,7 +648,7 @@ static int tune_active_qsfp(struct hfi1_pportdata *ppd, u32 *ptr_tx_preset,
ret = set_qsfp_high_power(ppd);
if (ret)
- goto bail_unlock;
+ return ret;
if (cache[QSFP_EQ_INFO_OFFS] & 0x4) {
ret = get_platform_config_field(
@@ -646,7 +658,7 @@ static int tune_active_qsfp(struct hfi1_pportdata *ppd, u32 *ptr_tx_preset,
ptr_tx_preset, 4);
if (ret) {
*ptr_tx_preset = OPA_INVALID_INDEX;
- goto bail_unlock;
+ return ret;
}
} else {
ret = get_platform_config_field(
@@ -656,7 +668,7 @@ static int tune_active_qsfp(struct hfi1_pportdata *ppd, u32 *ptr_tx_preset,
ptr_tx_preset, 4);
if (ret) {
*ptr_tx_preset = OPA_INVALID_INDEX;
- goto bail_unlock;
+ return ret;
}
}
@@ -665,7 +677,7 @@ static int tune_active_qsfp(struct hfi1_pportdata *ppd, u32 *ptr_tx_preset,
PORT_TABLE_RX_PRESET_IDX, ptr_rx_preset, 4);
if (ret) {
*ptr_rx_preset = OPA_INVALID_INDEX;
- goto bail_unlock;
+ return ret;
}
if ((lss & OPA_LINK_SPEED_25G) && (lse & OPA_LINK_SPEED_25G))
@@ -685,8 +697,6 @@ static int tune_active_qsfp(struct hfi1_pportdata *ppd, u32 *ptr_tx_preset,
ret = set_qsfp_tx(ppd, 1);
-bail_unlock:
- release_chip_resource(ppd->dd, qsfp_resource(ppd->dd));
return ret;
}
@@ -787,12 +797,6 @@ void tune_serdes(struct hfi1_pportdata *ppd)
return;
}
- ret = get_platform_config_field(ppd->dd, PLATFORM_CONFIG_PORT_TABLE, 0,
- PORT_TABLE_PORT_TYPE, &ppd->port_type,
- 4);
- if (ret)
- ppd->port_type = PORT_TYPE_UNKNOWN;
-
switch (ppd->port_type) {
case PORT_TYPE_DISCONNECTED:
ppd->offline_disabled_reason =
@@ -833,12 +837,22 @@ void tune_serdes(struct hfi1_pportdata *ppd)
total_atten = platform_atten + remote_atten;
tuning_method = OPA_PASSIVE_TUNING;
- } else
+ } else {
ppd->offline_disabled_reason =
HFI1_ODR_MASK(OPA_LINKDOWN_REASON_CHASSIS_CONFIG);
+ goto bail;
+ }
break;
case PORT_TYPE_QSFP:
if (qsfp_mod_present(ppd)) {
+ ret = acquire_chip_resource(ppd->dd,
+ qsfp_resource(ppd->dd),
+ QSFP_WAIT);
+ if (ret) {
+ dd_dev_err(ppd->dd, "%s: hfi%d: cannot lock i2c chain\n",
+ __func__, (int)ppd->dd->hfi1_id);
+ goto bail;
+ }
refresh_qsfp_cache(ppd, &ppd->qsfp_info);
if (ppd->qsfp_info.cache_valid) {
@@ -853,21 +867,23 @@ void tune_serdes(struct hfi1_pportdata *ppd)
* update the cache to reflect the changes
*/
refresh_qsfp_cache(ppd, &ppd->qsfp_info);
- if (ret)
- goto bail;
-
limiting_active =
ppd->qsfp_info.limiting_active;
} else {
dd_dev_err(dd,
"%s: Reading QSFP memory failed\n",
__func__);
- goto bail;
+ ret = -EINVAL; /* a fail indication */
}
- } else
+ release_chip_resource(ppd->dd, qsfp_resource(ppd->dd));
+ if (ret)
+ goto bail;
+ } else {
ppd->offline_disabled_reason =
HFI1_ODR_MASK(
OPA_LINKDOWN_REASON_LOCAL_MEDIA_NOT_INSTALLED);
+ goto bail;
+ }
break;
default:
dd_dev_info(ppd->dd, "%s: Unknown port type\n", __func__);
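Two of the apply_tunings() writes above used to OR the new value into the 8051 configuration word without clearing the old field, so a stale tuning method or external-device-config value could survive a retune; the fix is an ordinary clear-then-set of an 8-bit field. A generic sketch of that read-modify-write step (the helper name is made up, not a driver function):

#include <stdint.h>

/*
 * Replace the 8-bit field at 'shift' in 'word' with 'val'. Clearing
 * before OR-ing is what keeps a second call from leaving bits of the
 * previous value behind.
 */
static inline uint32_t set_u8_field(uint32_t word, unsigned int shift,
				    uint8_t val)
{
	word &= ~(0xffu << shift);
	return word | ((uint32_t)val << shift);
}

/* e.g. set_u8_field(0x12345678, 8, 0xAB) == 0x1234AB78 */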
diff --git a/drivers/staging/rdma/hfi1/platform.h b/drivers/infiniband/hw/hfi1/platform.h
index 19620cf546d5..e2c21613c326 100644
--- a/drivers/staging/rdma/hfi1/platform.h
+++ b/drivers/infiniband/hw/hfi1/platform.h
@@ -298,6 +298,7 @@ enum link_tuning_encoding {
/* platform.c */
void get_platform_config(struct hfi1_devdata *dd);
void free_platform_config(struct hfi1_devdata *dd);
+void get_port_type(struct hfi1_pportdata *ppd);
int set_qsfp_tx(struct hfi1_pportdata *ppd, int on);
void tune_serdes(struct hfi1_pportdata *ppd);
diff --git a/drivers/staging/rdma/hfi1/qp.c b/drivers/infiniband/hw/hfi1/qp.c
index dc9119e1b458..1a942ffba4cb 100644
--- a/drivers/staging/rdma/hfi1/qp.c
+++ b/drivers/infiniband/hw/hfi1/qp.c
@@ -49,7 +49,6 @@
#include <linux/vmalloc.h>
#include <linux/hash.h>
#include <linux/module.h>
-#include <linux/random.h>
#include <linux/seq_file.h>
#include <rdma/rdma_vt.h>
#include <rdma/rdmavt_qp.h>
@@ -161,14 +160,15 @@ static inline int opa_mtu_enum_to_int(int mtu)
* This function is what we would push to the core layer if we wanted to be a
* "first class citizen". Instead we hide this here and rely on Verbs ULPs
* to blindly pass the MTU enum value from the PathRecord to us.
- *
- * The actual flag used to determine "8k MTU" will change and is currently
- * unknown.
*/
static inline int verbs_mtu_enum_to_int(struct ib_device *dev, enum ib_mtu mtu)
{
- int val = opa_mtu_enum_to_int((int)mtu);
+ int val;
+ /* Constraining 10KB packets to 8KB packets */
+ if (mtu == (enum ib_mtu)OPA_MTU_10240)
+ mtu = OPA_MTU_8192;
+ val = opa_mtu_enum_to_int((int)mtu);
if (val > 0)
return val;
return ib_mtu_enum_to_int(mtu);
@@ -512,6 +512,7 @@ static void iowait_wakeup(struct iowait *wait, int reason)
static void iowait_sdma_drained(struct iowait *wait)
{
struct rvt_qp *qp = iowait_to_qp(wait);
+ unsigned long flags;
/*
* This happens when the send engine notes
@@ -519,12 +520,12 @@ static void iowait_sdma_drained(struct iowait *wait)
* do the flush work until that QP's
* sdma work has finished.
*/
- spin_lock(&qp->s_lock);
+ spin_lock_irqsave(&qp->s_lock, flags);
if (qp->s_flags & RVT_S_WAIT_DMA) {
qp->s_flags &= ~RVT_S_WAIT_DMA;
hfi1_schedule_send(qp);
}
- spin_unlock(&qp->s_lock);
+ spin_unlock_irqrestore(&qp->s_lock, flags);
}
/**
diff --git a/drivers/staging/rdma/hfi1/qp.h b/drivers/infiniband/hw/hfi1/qp.h
index e7bc8d6cf681..e7bc8d6cf681 100644
--- a/drivers/staging/rdma/hfi1/qp.h
+++ b/drivers/infiniband/hw/hfi1/qp.h
diff --git a/drivers/staging/rdma/hfi1/qsfp.c b/drivers/infiniband/hw/hfi1/qsfp.c
index 9ed1963010fe..2441669f0817 100644
--- a/drivers/staging/rdma/hfi1/qsfp.c
+++ b/drivers/infiniband/hw/hfi1/qsfp.c
@@ -96,7 +96,7 @@ int i2c_write(struct hfi1_pportdata *ppd, u32 target, int i2c_addr, int offset,
{
int ret;
- if (!check_chip_resource(ppd->dd, qsfp_resource(ppd->dd), __func__))
+ if (!check_chip_resource(ppd->dd, i2c_target(target), __func__))
return -EACCES;
/* make sure the TWSI bus is in a sane state */
@@ -162,7 +162,7 @@ int i2c_read(struct hfi1_pportdata *ppd, u32 target, int i2c_addr, int offset,
{
int ret;
- if (!check_chip_resource(ppd->dd, qsfp_resource(ppd->dd), __func__))
+ if (!check_chip_resource(ppd->dd, i2c_target(target), __func__))
return -EACCES;
/* make sure the TWSI bus is in a sane state */
@@ -192,7 +192,7 @@ int qsfp_write(struct hfi1_pportdata *ppd, u32 target, int addr, void *bp,
int ret;
u8 page;
- if (!check_chip_resource(ppd->dd, qsfp_resource(ppd->dd), __func__))
+ if (!check_chip_resource(ppd->dd, i2c_target(target), __func__))
return -EACCES;
/* make sure the TWSI bus is in a sane state */
@@ -276,7 +276,7 @@ int qsfp_read(struct hfi1_pportdata *ppd, u32 target, int addr, void *bp,
int ret;
u8 page;
- if (!check_chip_resource(ppd->dd, qsfp_resource(ppd->dd), __func__))
+ if (!check_chip_resource(ppd->dd, i2c_target(target), __func__))
return -EACCES;
/* make sure the TWSI bus is in a sane state */
@@ -355,6 +355,8 @@ int one_qsfp_read(struct hfi1_pportdata *ppd, u32 target, int addr, void *bp,
* The calls to qsfp_{read,write} in this function correctly handle the
* address map difference between this mapping and the mapping implemented
* by those functions
+ *
+ * The caller must be holding the QSFP i2c chain resource.
*/
int refresh_qsfp_cache(struct hfi1_pportdata *ppd, struct qsfp_data *cp)
{
@@ -371,13 +373,9 @@ int refresh_qsfp_cache(struct hfi1_pportdata *ppd, struct qsfp_data *cp)
if (!qsfp_mod_present(ppd)) {
ret = -ENODEV;
- goto bail_no_release;
+ goto bail;
}
- ret = acquire_chip_resource(ppd->dd, qsfp_resource(ppd->dd), QSFP_WAIT);
- if (ret)
- goto bail_no_release;
-
ret = qsfp_read(ppd, target, 0, cache, QSFP_PAGESIZE);
if (ret != QSFP_PAGESIZE) {
dd_dev_info(ppd->dd,
@@ -440,8 +438,6 @@ int refresh_qsfp_cache(struct hfi1_pportdata *ppd, struct qsfp_data *cp)
}
}
- release_chip_resource(ppd->dd, qsfp_resource(ppd->dd));
-
spin_lock_irqsave(&ppd->qsfp_info.qsfp_lock, flags);
ppd->qsfp_info.cache_valid = 1;
ppd->qsfp_info.cache_refresh_required = 0;
@@ -450,8 +446,6 @@ int refresh_qsfp_cache(struct hfi1_pportdata *ppd, struct qsfp_data *cp)
return 0;
bail:
- release_chip_resource(ppd->dd, qsfp_resource(ppd->dd));
-bail_no_release:
memset(cache, 0, (QSFP_MAX_NUM_PAGES * 128));
return ret;
}
@@ -466,7 +460,28 @@ const char * const hfi1_qsfp_devtech[16] = {
#define QSFP_DUMP_CHUNK 16 /* Holds longest string */
#define QSFP_DEFAULT_HDR_CNT 224
-static const char *pwr_codes = "1.5W2.0W2.5W3.5W";
+#define QSFP_PWR(pbyte) (((pbyte) >> 6) & 3)
+#define QSFP_HIGH_PWR(pbyte) ((pbyte) & 3)
+/* For use with QSFP_HIGH_PWR macro */
+#define QSFP_HIGH_PWR_UNUSED 0 /* Bits [1:0] = 00 implies low power module */
+
+/*
+ * Takes power class byte [Page 00 Byte 129] in SFF 8636
+ * Returns power class as integer (1 through 7, per SFF 8636 rev 2.4)
+ */
+int get_qsfp_power_class(u8 power_byte)
+{
+ if (QSFP_HIGH_PWR(power_byte) == QSFP_HIGH_PWR_UNUSED)
+ /* power classes count from 1, their bit encodings from 0 */
+ return (QSFP_PWR(power_byte) + 1);
+ /*
+	 * A high-power field of 00 means it is unused; the non-zero
+	 * encodings (1-3) map to power classes 5-7, so add 4 to bridge
+	 * the gap between the low and high power groups.
+ */
+ return (QSFP_HIGH_PWR(power_byte) + 4);
+}
int qsfp_mod_present(struct hfi1_pportdata *ppd)
{
@@ -537,6 +552,16 @@ set_zeroes:
return ret;
}
+static const char *pwr_codes[8] = {"N/AW",
+ "1.5W",
+ "2.0W",
+ "2.5W",
+ "3.5W",
+ "4.0W",
+ "4.5W",
+ "5.0W"
+ };
+
int qsfp_dump(struct hfi1_pportdata *ppd, char *buf, int len)
{
u8 *cache = &ppd->qsfp_info.cache[0];
@@ -546,6 +571,7 @@ int qsfp_dump(struct hfi1_pportdata *ppd, char *buf, int len)
int bidx = 0;
u8 *atten = &cache[QSFP_ATTEN_OFFS];
u8 *vendor_oui = &cache[QSFP_VOUI_OFFS];
+ u8 power_byte = 0;
sofar = 0;
lenstr[0] = ' ';
@@ -555,9 +581,9 @@ int qsfp_dump(struct hfi1_pportdata *ppd, char *buf, int len)
if (QSFP_IS_CU(cache[QSFP_MOD_TECH_OFFS]))
sprintf(lenstr, "%dM ", cache[QSFP_MOD_LEN_OFFS]);
+ power_byte = cache[QSFP_MOD_PWR_OFFS];
sofar += scnprintf(buf + sofar, len - sofar, "PWR:%.3sW\n",
- pwr_codes +
- (QSFP_PWR(cache[QSFP_MOD_PWR_OFFS]) * 4));
+ pwr_codes[get_qsfp_power_class(power_byte)]);
sofar += scnprintf(buf + sofar, len - sofar, "TECH:%s%s\n",
lenstr,
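get_qsfp_power_class() folds the two SFF-8636 power-class fields of byte 129 into one number from 1 to 7: bits [7:6] encode classes 1 through 4, and bits [1:0], when non-zero, encode classes 5 through 7. A standalone sketch of the same decode with a few illustrative byte values:

#include <assert.h>

/* Same decode as get_qsfp_power_class() above, outside the driver. */
static int power_class(unsigned char byte129)
{
	int low  = (byte129 >> 6) & 3;	/* bits [7:6]: classes 1-4 */
	int high = byte129 & 3;		/* bits [1:0]: 0 = unused, 1-3 = classes 5-7 */

	return high ? high + 4 : low + 1;
}

int main(void)
{
	assert(power_class(0x00) == 1);	/* 1.5 W */
	assert(power_class(0xc0) == 4);	/* 3.5 W */
	assert(power_class(0x03) == 7);	/* 5.0 W */
	return 0;
}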
diff --git a/drivers/staging/rdma/hfi1/qsfp.h b/drivers/infiniband/hw/hfi1/qsfp.h
index 831fe4cf1345..dadc66c442b9 100644
--- a/drivers/staging/rdma/hfi1/qsfp.h
+++ b/drivers/infiniband/hw/hfi1/qsfp.h
@@ -82,8 +82,9 @@
/* Byte 128 is Identifier: must be 0x0c for QSFP, or 0x0d for QSFP+ */
#define QSFP_MOD_ID_OFFS 128
/*
- * Byte 129 is "Extended Identifier". We only care about D7,D6: Power class
- * 0:1.5W, 1:2.0W, 2:2.5W, 3:3.5W
+ * Byte 129 is "Extended Identifier".
+ * For bits [7:6]: 0:1.5W, 1:2.0W, 2:2.5W, 3:3.5W
+ * For bits [1:0]: 0:Unused, 1:4W, 2:4.5W, 3:5W
*/
#define QSFP_MOD_PWR_OFFS 129
/* Byte 130 is Connector type. Not Intel req'd */
@@ -190,6 +191,9 @@ extern const char *const hfi1_qsfp_devtech[16];
#define QSFP_HIGH_BIAS_WARNING 0x22
#define QSFP_LOW_BIAS_WARNING 0x11
+#define QSFP_ATTEN_SDR(attenarray) (attenarray[0])
+#define QSFP_ATTEN_DDR(attenarray) (attenarray[1])
+
/*
* struct qsfp_data encapsulates state of QSFP device for one port.
* it will be part of port-specific data if a board supports QSFP.
@@ -201,12 +205,6 @@ extern const char *const hfi1_qsfp_devtech[16];
* and let the qsfp_lock arbitrate access to common resources.
*
*/
-
-#define QSFP_PWR(pbyte) (((pbyte) >> 6) & 3)
-#define QSFP_HIGH_PWR(pbyte) (((pbyte) & 3) | 4)
-#define QSFP_ATTEN_SDR(attenarray) (attenarray[0])
-#define QSFP_ATTEN_DDR(attenarray) (attenarray[1])
-
struct qsfp_data {
/* Helps to find our way */
struct hfi1_pportdata *ppd;
@@ -223,6 +221,7 @@ struct qsfp_data {
int refresh_qsfp_cache(struct hfi1_pportdata *ppd,
struct qsfp_data *cp);
+int get_qsfp_power_class(u8 power_byte);
int qsfp_mod_present(struct hfi1_pportdata *ppd);
int get_cable_info(struct hfi1_devdata *dd, u32 port_num, u32 addr,
u32 len, u8 *data);
diff --git a/drivers/staging/rdma/hfi1/rc.c b/drivers/infiniband/hw/hfi1/rc.c
index 0d7e1017f3cb..792f15eb8efe 100644
--- a/drivers/staging/rdma/hfi1/rc.c
+++ b/drivers/infiniband/hw/hfi1/rc.c
@@ -1497,7 +1497,7 @@ reserved:
/* Ignore reserved NAK codes. */
goto bail_stop;
}
- return ret;
+ /* cannot be reached */
bail_stop:
hfi1_stop_rc_timers(qp);
return ret;
@@ -2021,8 +2021,6 @@ void process_becn(struct hfi1_pportdata *ppd, u8 sl, u16 rlid, u32 lqpn,
if (sl >= OPA_MAX_SLS)
return;
- cca_timer = &ppd->cca_timer[sl];
-
cc_state = get_cc_state(ppd);
if (!cc_state)
@@ -2041,6 +2039,7 @@ void process_becn(struct hfi1_pportdata *ppd, u8 sl, u16 rlid, u32 lqpn,
spin_lock_irqsave(&ppd->cca_timer_lock, flags);
+ cca_timer = &ppd->cca_timer[sl];
if (cca_timer->ccti < ccti_limit) {
if (cca_timer->ccti + ccti_incr <= ccti_limit)
cca_timer->ccti += ccti_incr;
@@ -2049,8 +2048,6 @@ void process_becn(struct hfi1_pportdata *ppd, u8 sl, u16 rlid, u32 lqpn,
set_link_ipg(ppd);
}
- spin_unlock_irqrestore(&ppd->cca_timer_lock, flags);
-
ccti = cca_timer->ccti;
if (!hrtimer_active(&cca_timer->hrtimer)) {
@@ -2061,6 +2058,8 @@ void process_becn(struct hfi1_pportdata *ppd, u8 sl, u16 rlid, u32 lqpn,
HRTIMER_MODE_REL);
}
+ spin_unlock_irqrestore(&ppd->cca_timer_lock, flags);
+
if ((trigger_threshold != 0) && (ccti >= trigger_threshold))
log_cca_event(ppd, sl, rlid, lqpn, rqpn, svc_type);
}
diff --git a/drivers/staging/rdma/hfi1/ruc.c b/drivers/infiniband/hw/hfi1/ruc.c
index 08813cdbd475..a659aec3c3c6 100644
--- a/drivers/staging/rdma/hfi1/ruc.c
+++ b/drivers/infiniband/hw/hfi1/ruc.c
@@ -831,7 +831,6 @@ void hfi1_do_send(struct rvt_qp *qp)
struct hfi1_pkt_state ps;
struct hfi1_qp_priv *priv = qp->priv;
int (*make_req)(struct rvt_qp *qp, struct hfi1_pkt_state *ps);
- unsigned long flags;
unsigned long timeout;
unsigned long timeout_int;
int cpu;
@@ -866,11 +865,11 @@ void hfi1_do_send(struct rvt_qp *qp)
timeout_int = SEND_RESCHED_TIMEOUT;
}
- spin_lock_irqsave(&qp->s_lock, flags);
+ spin_lock_irqsave(&qp->s_lock, ps.flags);
/* Return if we are already busy processing a work request. */
if (!hfi1_send_ok(qp)) {
- spin_unlock_irqrestore(&qp->s_lock, flags);
+ spin_unlock_irqrestore(&qp->s_lock, ps.flags);
return;
}
@@ -884,7 +883,7 @@ void hfi1_do_send(struct rvt_qp *qp)
do {
/* Check for a constructed packet to be sent. */
if (qp->s_hdrwords != 0) {
- spin_unlock_irqrestore(&qp->s_lock, flags);
+ spin_unlock_irqrestore(&qp->s_lock, ps.flags);
/*
* If the packet cannot be sent now, return and
* the send tasklet will be woken up later.
@@ -897,11 +896,14 @@ void hfi1_do_send(struct rvt_qp *qp)
if (unlikely(time_after(jiffies, timeout))) {
if (workqueue_congested(cpu,
ps.ppd->hfi1_wq)) {
- spin_lock_irqsave(&qp->s_lock, flags);
+ spin_lock_irqsave(
+ &qp->s_lock,
+ ps.flags);
qp->s_flags &= ~RVT_S_BUSY;
hfi1_schedule_send(qp);
- spin_unlock_irqrestore(&qp->s_lock,
- flags);
+ spin_unlock_irqrestore(
+ &qp->s_lock,
+ ps.flags);
this_cpu_inc(
*ps.ppd->dd->send_schedule);
return;
@@ -913,11 +915,11 @@ void hfi1_do_send(struct rvt_qp *qp)
}
timeout = jiffies + (timeout_int) / 8;
}
- spin_lock_irqsave(&qp->s_lock, flags);
+ spin_lock_irqsave(&qp->s_lock, ps.flags);
}
} while (make_req(qp, &ps));
- spin_unlock_irqrestore(&qp->s_lock, flags);
+ spin_unlock_irqrestore(&qp->s_lock, ps.flags);
}
/*
diff --git a/drivers/staging/rdma/hfi1/sdma.c b/drivers/infiniband/hw/hfi1/sdma.c
index abb8ebc1fcac..f9befc05b349 100644
--- a/drivers/staging/rdma/hfi1/sdma.c
+++ b/drivers/infiniband/hw/hfi1/sdma.c
@@ -134,6 +134,7 @@ static const char * const sdma_state_names[] = {
[sdma_state_s99_running] = "s99_Running",
};
+#ifdef CONFIG_SDMA_VERBOSITY
static const char * const sdma_event_names[] = {
[sdma_event_e00_go_hw_down] = "e00_GoHwDown",
[sdma_event_e10_go_hw_start] = "e10_GoHwStart",
@@ -150,6 +151,7 @@ static const char * const sdma_event_names[] = {
[sdma_event_e85_link_down] = "e85_LinkDown",
[sdma_event_e90_sw_halted] = "e90_SwHalted",
};
+#endif
static const struct sdma_set_state_action sdma_action_table[] = {
[sdma_state_s00_hw_down] = {
@@ -376,7 +378,7 @@ static inline void complete_tx(struct sdma_engine *sde,
sdma_txclean(sde->dd, tx);
if (complete)
(*complete)(tx, res);
- if (iowait_sdma_dec(wait) && wait)
+ if (wait && iowait_sdma_dec(wait))
iowait_drain_wakeup(wait);
}
diff --git a/drivers/staging/rdma/hfi1/sdma.h b/drivers/infiniband/hw/hfi1/sdma.h
index 8f50c99fe711..8f50c99fe711 100644
--- a/drivers/staging/rdma/hfi1/sdma.h
+++ b/drivers/infiniband/hw/hfi1/sdma.h
diff --git a/drivers/staging/rdma/hfi1/sdma_txreq.h b/drivers/infiniband/hw/hfi1/sdma_txreq.h
index bf7d777d756e..bf7d777d756e 100644
--- a/drivers/staging/rdma/hfi1/sdma_txreq.h
+++ b/drivers/infiniband/hw/hfi1/sdma_txreq.h
diff --git a/drivers/staging/rdma/hfi1/sysfs.c b/drivers/infiniband/hw/hfi1/sysfs.c
index c7f1271190af..91fc2aed6aed 100644
--- a/drivers/staging/rdma/hfi1/sysfs.c
+++ b/drivers/infiniband/hw/hfi1/sysfs.c
@@ -84,7 +84,7 @@ static ssize_t read_cc_table_bin(struct file *filp, struct kobject *kobj,
rcu_read_unlock();
return -EINVAL;
}
- memcpy(buf, &cc_state->cct, count);
+ memcpy(buf, (void *)&cc_state->cct + pos, count);
rcu_read_unlock();
return count;
@@ -131,7 +131,7 @@ static ssize_t read_cc_setting_bin(struct file *filp, struct kobject *kobj,
rcu_read_unlock();
return -EINVAL;
}
- memcpy(buf, &cc_state->cong_setting, count);
+ memcpy(buf, (void *)&cc_state->cong_setting + pos, count);
rcu_read_unlock();
return count;
@@ -721,8 +721,8 @@ int hfi1_create_port_files(struct ib_device *ibdev, u8 port_num,
}
dd_dev_info(dd,
- "IB%u: Congestion Control Agent enabled for port %d\n",
- dd->unit, port_num);
+ "Congestion Control Agent enabled for port %d\n",
+ port_num);
return 0;
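The congestion-control binary attributes were copying from the start of the structure regardless of the read offset, so a read() that did not complete in one call returned the first bytes again; the fix indexes the source buffer by pos. A minimal sketch of a bin_attribute read callback with the same offset handling (the attribute and its backing table are hypothetical):

#include <linux/string.h>
#include <linux/sysfs.h>
#include <linux/types.h>

static u8 demo_table[1024];	/* hypothetical backing data */

static ssize_t demo_read(struct file *filp, struct kobject *kobj,
			 struct bin_attribute *attr, char *buf,
			 loff_t pos, size_t count)
{
	if (pos >= sizeof(demo_table))
		return 0;
	if (count > sizeof(demo_table) - pos)
		count = sizeof(demo_table) - pos;

	/* copy from the requested offset, not always from the start */
	memcpy(buf, demo_table + pos, count);
	return count;
}

/* This callback would be wired into a struct bin_attribute's .read hook. */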
diff --git a/drivers/staging/rdma/hfi1/trace.c b/drivers/infiniband/hw/hfi1/trace.c
index 8b62fefcf903..79b2952c0dfb 100644
--- a/drivers/staging/rdma/hfi1/trace.c
+++ b/drivers/infiniband/hw/hfi1/trace.c
@@ -66,6 +66,7 @@ u8 ibhdr_exhdr_len(struct hfi1_ib_header *hdr)
#define RETH_PRN "reth vaddr 0x%.16llx rkey 0x%.8x dlen 0x%.8x"
#define AETH_PRN "aeth syn 0x%.2x %s msn 0x%.8x"
#define DETH_PRN "deth qkey 0x%.8x sqpn 0x%.6x"
+#define IETH_PRN "ieth rkey 0x%.8x"
#define ATOMICACKETH_PRN "origdata %lld"
#define ATOMICETH_PRN "vaddr 0x%llx rkey 0x%.8x sdata %lld cdata %lld"
@@ -166,6 +167,12 @@ const char *parse_everbs_hdrs(
be32_to_cpu(eh->ud.deth[0]),
be32_to_cpu(eh->ud.deth[1]) & RVT_QPN_MASK);
break;
+ /* ieth */
+ case OP(RC, SEND_LAST_WITH_INVALIDATE):
+ case OP(RC, SEND_ONLY_WITH_INVALIDATE):
+ trace_seq_printf(p, IETH_PRN,
+ be32_to_cpu(eh->ieth));
+ break;
}
trace_seq_putc(p, 0);
return ret;
@@ -233,3 +240,4 @@ __hfi1_trace_fn(FIRMWARE);
__hfi1_trace_fn(RCVCTRL);
__hfi1_trace_fn(TID);
__hfi1_trace_fn(MMU);
+__hfi1_trace_fn(IOCTL);
diff --git a/drivers/staging/rdma/hfi1/trace.h b/drivers/infiniband/hw/hfi1/trace.h
index 963dc948c38a..28c1d0832886 100644
--- a/drivers/staging/rdma/hfi1/trace.h
+++ b/drivers/infiniband/hw/hfi1/trace.h
@@ -74,8 +74,8 @@ __print_symbolic(etype, \
TRACE_EVENT(hfi1_rcvhdr,
TP_PROTO(struct hfi1_devdata *dd,
- u64 eflags,
u32 ctxt,
+ u64 eflags,
u32 etype,
u32 hlen,
u32 tlen,
@@ -392,6 +392,8 @@ __print_symbolic(opcode, \
ib_opcode_name(RC_ATOMIC_ACKNOWLEDGE), \
ib_opcode_name(RC_COMPARE_SWAP), \
ib_opcode_name(RC_FETCH_ADD), \
+ ib_opcode_name(RC_SEND_LAST_WITH_INVALIDATE), \
+ ib_opcode_name(RC_SEND_ONLY_WITH_INVALIDATE), \
ib_opcode_name(UC_SEND_FIRST), \
ib_opcode_name(UC_SEND_MIDDLE), \
ib_opcode_name(UC_SEND_LAST), \
@@ -1341,6 +1343,7 @@ __hfi1_trace_def(FIRMWARE);
__hfi1_trace_def(RCVCTRL);
__hfi1_trace_def(TID);
__hfi1_trace_def(MMU);
+__hfi1_trace_def(IOCTL);
#define hfi1_cdbg(which, fmt, ...) \
__hfi1_trace_##which(__func__, fmt, ##__VA_ARGS__)
diff --git a/drivers/staging/rdma/hfi1/twsi.c b/drivers/infiniband/hw/hfi1/twsi.c
index e82e52a63d35..e82e52a63d35 100644
--- a/drivers/staging/rdma/hfi1/twsi.c
+++ b/drivers/infiniband/hw/hfi1/twsi.c
diff --git a/drivers/staging/rdma/hfi1/twsi.h b/drivers/infiniband/hw/hfi1/twsi.h
index 5b8a5b5e7eae..5b8a5b5e7eae 100644
--- a/drivers/staging/rdma/hfi1/twsi.h
+++ b/drivers/infiniband/hw/hfi1/twsi.h
diff --git a/drivers/staging/rdma/hfi1/uc.c b/drivers/infiniband/hw/hfi1/uc.c
index df773d433297..df773d433297 100644
--- a/drivers/staging/rdma/hfi1/uc.c
+++ b/drivers/infiniband/hw/hfi1/uc.c
diff --git a/drivers/staging/rdma/hfi1/ud.c b/drivers/infiniband/hw/hfi1/ud.c
index ae8a70f703eb..1e503ad0bebb 100644
--- a/drivers/staging/rdma/hfi1/ud.c
+++ b/drivers/infiniband/hw/hfi1/ud.c
@@ -322,7 +322,7 @@ int hfi1_make_ud_req(struct rvt_qp *qp, struct hfi1_pkt_state *ps)
(lid == ppd->lid ||
(lid == be16_to_cpu(IB_LID_PERMISSIVE) &&
qp->ibqp.qp_type == IB_QPT_GSI)))) {
- unsigned long flags;
+ unsigned long tflags = ps->flags;
/*
* If DMAs are in progress, we can't generate
* a completion for the loopback packet since
@@ -335,10 +335,10 @@ int hfi1_make_ud_req(struct rvt_qp *qp, struct hfi1_pkt_state *ps)
goto bail;
}
qp->s_cur = next_cur;
- local_irq_save(flags);
- spin_unlock_irqrestore(&qp->s_lock, flags);
+ spin_unlock_irqrestore(&qp->s_lock, tflags);
ud_loopback(qp, wqe);
- spin_lock_irqsave(&qp->s_lock, flags);
+ spin_lock_irqsave(&qp->s_lock, tflags);
+ ps->flags = tflags;
hfi1_send_complete(qp, wqe, IB_WC_SUCCESS);
goto done_free_tx;
}
diff --git a/drivers/staging/rdma/hfi1/user_exp_rcv.c b/drivers/infiniband/hw/hfi1/user_exp_rcv.c
index 8bd56d5c783d..1b640a35b3fe 100644
--- a/drivers/staging/rdma/hfi1/user_exp_rcv.c
+++ b/drivers/infiniband/hw/hfi1/user_exp_rcv.c
@@ -399,8 +399,11 @@ int hfi1_user_exp_rcv_setup(struct file *fp, struct hfi1_tid_info *tinfo)
* pages, accept the amount pinned so far and program only that.
* User space knows how to deal with partially programmed buffers.
*/
- if (!hfi1_can_pin_pages(dd, fd->tid_n_pinned, npages))
- return -ENOMEM;
+ if (!hfi1_can_pin_pages(dd, fd->tid_n_pinned, npages)) {
+ ret = -ENOMEM;
+ goto bail;
+ }
+
pinned = hfi1_acquire_user_pages(vaddr, npages, true, pages);
if (pinned <= 0) {
ret = pinned;
diff --git a/drivers/staging/rdma/hfi1/user_exp_rcv.h b/drivers/infiniband/hw/hfi1/user_exp_rcv.h
index 9bc8d9fba87e..9bc8d9fba87e 100644
--- a/drivers/staging/rdma/hfi1/user_exp_rcv.h
+++ b/drivers/infiniband/hw/hfi1/user_exp_rcv.h
diff --git a/drivers/staging/rdma/hfi1/user_pages.c b/drivers/infiniband/hw/hfi1/user_pages.c
index 88e10b5f55f1..88e10b5f55f1 100644
--- a/drivers/staging/rdma/hfi1/user_pages.c
+++ b/drivers/infiniband/hw/hfi1/user_pages.c
diff --git a/drivers/staging/rdma/hfi1/user_sdma.c b/drivers/infiniband/hw/hfi1/user_sdma.c
index d53a659548e0..29f4795f866c 100644
--- a/drivers/staging/rdma/hfi1/user_sdma.c
+++ b/drivers/infiniband/hw/hfi1/user_sdma.c
@@ -166,6 +166,8 @@ static unsigned initial_pkt_count = 8;
#define SDMA_IOWAIT_TIMEOUT 1000 /* in milliseconds */
+struct sdma_mmu_node;
+
struct user_sdma_iovec {
struct list_head list;
struct iovec iov;
@@ -178,8 +180,11 @@ struct user_sdma_iovec {
* which we last left off.
*/
u64 offset;
+ struct sdma_mmu_node *node;
};
+#define SDMA_CACHE_NODE_EVICT BIT(0)
+
struct sdma_mmu_node {
struct mmu_rb_node rb;
struct list_head list;
@@ -187,6 +192,7 @@ struct sdma_mmu_node {
atomic_t refcount;
struct page **pages;
unsigned npages;
+ unsigned long flags;
};
struct user_sdma_request {
@@ -504,6 +510,7 @@ int hfi1_user_sdma_process_request(struct file *fp, struct iovec *iovec,
struct sdma_req_info info;
struct user_sdma_request *req;
u8 opcode, sc, vl;
+ int req_queued = 0;
if (iovec[idx].iov_len < sizeof(info) + sizeof(req->hdr)) {
hfi1_cdbg(
@@ -597,6 +604,13 @@ int hfi1_user_sdma_process_request(struct file *fp, struct iovec *iovec,
goto free_req;
}
+ /* Checking P_KEY for requests from user-space */
+ if (egress_pkey_check(dd->pport, req->hdr.lrh, req->hdr.bth, sc,
+ PKEY_CHECK_INVALID)) {
+ ret = -EINVAL;
+ goto free_req;
+ }
+
/*
* We should also check the BTH.lnh. If it says the next header is GRH then
* the RXE parsing will be off and will land in the middle of the KDETH
@@ -693,6 +707,7 @@ int hfi1_user_sdma_process_request(struct file *fp, struct iovec *iovec,
set_comp_state(pq, cq, info.comp_idx, QUEUED, 0);
atomic_inc(&pq->n_reqs);
+ req_queued = 1;
/* Send the first N packets in the request to buy us some time */
ret = user_sdma_send_pkts(req, pcount);
if (unlikely(ret < 0 && ret != -EBUSY)) {
@@ -737,7 +752,8 @@ int hfi1_user_sdma_process_request(struct file *fp, struct iovec *iovec,
return 0;
free_req:
user_sdma_free_request(req, true);
- pq_update(pq);
+ if (req_queued)
+ pq_update(pq);
set_comp_state(pq, cq, info.comp_idx, ERROR, req->status);
return ret;
}
@@ -1030,27 +1046,29 @@ static inline int num_user_pages(const struct iovec *iov)
return 1 + ((epage - spage) >> PAGE_SHIFT);
}
-/* Caller must hold pq->evict_lock */
static u32 sdma_cache_evict(struct hfi1_user_sdma_pkt_q *pq, u32 npages)
{
u32 cleared = 0;
struct sdma_mmu_node *node, *ptr;
+ struct list_head to_evict = LIST_HEAD_INIT(to_evict);
+ spin_lock(&pq->evict_lock);
list_for_each_entry_safe_reverse(node, ptr, &pq->evict, list) {
/* Make sure that no one is still using the node. */
if (!atomic_read(&node->refcount)) {
- /*
- * Need to use the page count now as the remove callback
- * will free the node.
- */
+ set_bit(SDMA_CACHE_NODE_EVICT, &node->flags);
+ list_del_init(&node->list);
+ list_add(&node->list, &to_evict);
cleared += node->npages;
- spin_unlock(&pq->evict_lock);
- hfi1_mmu_rb_remove(&pq->sdma_rb_root, &node->rb);
- spin_lock(&pq->evict_lock);
if (cleared >= npages)
break;
}
}
+ spin_unlock(&pq->evict_lock);
+
+ list_for_each_entry_safe(node, ptr, &to_evict, list)
+ hfi1_mmu_rb_remove(&pq->sdma_rb_root, &node->rb);
+
return cleared;
}
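
    The rework above turns eviction into two phases: idle candidates are unlinked onto a
    private list while pq->evict_lock is held, and the RB-tree removal (which may free the
    node) happens only after the lock is dropped, so the lock is never released and
    re-taken in the middle of the walk. A self-contained sketch of the same
    "collect under the lock, reap outside it" pattern, using pthreads and invented names:

    #include <pthread.h>
    #include <stdlib.h>

    struct node {
        struct node *next;
        int refcount;
        unsigned npages;
    };

    struct cache {
        pthread_mutex_t lock;
        struct node *evict_list;
    };

    static unsigned evict(struct cache *c, unsigned want)
    {
        struct node *victims = NULL;
        struct node **pp = &c->evict_list;
        unsigned cleared = 0;

        pthread_mutex_lock(&c->lock);
        while (*pp && cleared < want) {
            struct node *n = *pp;

            if (n->refcount) {          /* still referenced - leave it alone */
                pp = &n->next;
                continue;
            }
            *pp = n->next;              /* unlink from the cache list */
            n->next = victims;          /* park on a private list */
            victims = n;
            cleared += n->npages;
        }
        pthread_mutex_unlock(&c->lock);

        while (victims) {               /* reap without holding the lock */
            struct node *n = victims;

            victims = n->next;
            free(n);
        }
        return cleared;
    }
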
@@ -1062,9 +1080,9 @@ static int pin_vector_pages(struct user_sdma_request *req,
struct sdma_mmu_node *node = NULL;
struct mmu_rb_node *rb_node;
- rb_node = hfi1_mmu_rb_search(&pq->sdma_rb_root,
- (unsigned long)iovec->iov.iov_base,
- iovec->iov.iov_len);
+ rb_node = hfi1_mmu_rb_extract(&pq->sdma_rb_root,
+ (unsigned long)iovec->iov.iov_base,
+ iovec->iov.iov_len);
if (rb_node && !IS_ERR(rb_node))
node = container_of(rb_node, struct sdma_mmu_node, rb);
else
@@ -1076,7 +1094,6 @@ static int pin_vector_pages(struct user_sdma_request *req,
return -ENOMEM;
node->rb.addr = (unsigned long)iovec->iov.iov_base;
- node->rb.len = iovec->iov.iov_len;
node->pq = pq;
atomic_set(&node->refcount, 0);
INIT_LIST_HEAD(&node->list);
@@ -1093,11 +1110,25 @@ static int pin_vector_pages(struct user_sdma_request *req,
memcpy(pages, node->pages, node->npages * sizeof(*pages));
npages -= node->npages;
+
+ /*
+ * If rb_node is NULL, it means that this is a brand new node
+ * and is, therefore, not on the eviction list.
+ * If, however, the rb_node is non-NULL, it means that the
+ * node is already in the RB tree and, therefore, on the eviction
+ * list (nodes are unconditionally inserted in the eviction
+ * list). In that case, we have to remove the node prior to
+ * calling the eviction function in order to prevent it from
+ * freeing this node.
+ */
+ if (rb_node) {
+ spin_lock(&pq->evict_lock);
+ list_del_init(&node->list);
+ spin_unlock(&pq->evict_lock);
+ }
retry:
if (!hfi1_can_pin_pages(pq->dd, pq->n_locked, npages)) {
- spin_lock(&pq->evict_lock);
cleared = sdma_cache_evict(pq, npages);
- spin_unlock(&pq->evict_lock);
if (cleared >= npages)
goto retry;
}
@@ -1117,37 +1148,33 @@ retry:
goto bail;
}
kfree(node->pages);
+ node->rb.len = iovec->iov.iov_len;
node->pages = pages;
node->npages += pinned;
npages = node->npages;
spin_lock(&pq->evict_lock);
- if (!rb_node)
- list_add(&node->list, &pq->evict);
- else
- list_move(&node->list, &pq->evict);
+ list_add(&node->list, &pq->evict);
pq->n_locked += pinned;
spin_unlock(&pq->evict_lock);
}
iovec->pages = node->pages;
iovec->npages = npages;
+ iovec->node = node;
- if (!rb_node) {
- ret = hfi1_mmu_rb_insert(&req->pq->sdma_rb_root, &node->rb);
- if (ret) {
- spin_lock(&pq->evict_lock);
+ ret = hfi1_mmu_rb_insert(&req->pq->sdma_rb_root, &node->rb);
+ if (ret) {
+ spin_lock(&pq->evict_lock);
+ if (!list_empty(&node->list))
list_del(&node->list);
- pq->n_locked -= node->npages;
- spin_unlock(&pq->evict_lock);
- ret = 0;
- goto bail;
- }
- } else {
- atomic_inc(&node->refcount);
+ pq->n_locked -= node->npages;
+ spin_unlock(&pq->evict_lock);
+ goto bail;
}
return 0;
bail:
- if (!rb_node)
- kfree(node);
+ if (rb_node)
+ unpin_vector_pages(current->mm, node->pages, 0, node->npages);
+ kfree(node);
return ret;
}
@@ -1499,18 +1526,13 @@ static void user_sdma_free_request(struct user_sdma_request *req, bool unpin)
}
if (req->data_iovs) {
struct sdma_mmu_node *node;
- struct mmu_rb_node *mnode;
int i;
for (i = 0; i < req->data_iovs; i++) {
- mnode = hfi1_mmu_rb_search(
- &req->pq->sdma_rb_root,
- (unsigned long)req->iovs[i].iov.iov_base,
- req->iovs[i].iov.iov_len);
- if (!mnode || IS_ERR(mnode))
+ node = req->iovs[i].node;
+ if (!node)
continue;
- node = container_of(mnode, struct sdma_mmu_node, rb);
if (unpin)
hfi1_mmu_rb_remove(&req->pq->sdma_rb_root,
&node->rb);
@@ -1558,7 +1580,20 @@ static void sdma_rb_remove(struct rb_root *root, struct mmu_rb_node *mnode,
container_of(mnode, struct sdma_mmu_node, rb);
spin_lock(&node->pq->evict_lock);
- list_del(&node->list);
+ /*
+ * We've been called by the MMU notifier but this node has been
+ * scheduled for eviction. The eviction function will take care
+ * of freeing this node.
+ * We have to take the above lock first because we are racing
+ * against the setting of the bit in the eviction function.
+ */
+ if (mm && test_bit(SDMA_CACHE_NODE_EVICT, &node->flags)) {
+ spin_unlock(&node->pq->evict_lock);
+ return;
+ }
+
+ if (!list_empty(&node->list))
+ list_del(&node->list);
node->pq->n_locked -= node->npages;
spin_unlock(&node->pq->evict_lock);
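
    The comment above describes the race this hunk closes: both the MMU-notifier path and
    the eviction path may try to tear down the same node, and the SDMA_CACHE_NODE_EVICT
    bit, tested under the shared evict lock, decides which path owns the free. A compact
    illustration of that hand-off, with hypothetical names and a pthread mutex standing in
    for the spinlock:

    #include <pthread.h>
    #include <stdbool.h>

    struct tracked_node {
        pthread_mutex_t *evict_lock;   /* shared with the eviction path */
        bool evict_claimed;            /* analogue of SDMA_CACHE_NODE_EVICT */
        bool on_evict_list;
    };

    /*
     * Notifier-side removal: returns true when the eviction path has already
     * claimed the node, in which case this path must not free it.
     */
    static bool notifier_remove(struct tracked_node *n)
    {
        bool claimed;

        pthread_mutex_lock(n->evict_lock);
        claimed = n->evict_claimed;
        if (!claimed)
            n->on_evict_list = false;   /* take it off the list ourselves */
        pthread_mutex_unlock(n->evict_lock);

        return claimed;                 /* caller frees only when this is false */
    }
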
diff --git a/drivers/staging/rdma/hfi1/user_sdma.h b/drivers/infiniband/hw/hfi1/user_sdma.h
index b9240e351161..b9240e351161 100644
--- a/drivers/staging/rdma/hfi1/user_sdma.h
+++ b/drivers/infiniband/hw/hfi1/user_sdma.h
diff --git a/drivers/staging/rdma/hfi1/verbs.c b/drivers/infiniband/hw/hfi1/verbs.c
index 89f2aad45c1b..849c4b9399d4 100644
--- a/drivers/staging/rdma/hfi1/verbs.c
+++ b/drivers/infiniband/hw/hfi1/verbs.c
@@ -52,7 +52,6 @@
#include <linux/utsname.h>
#include <linux/rculist.h>
#include <linux/mm.h>
-#include <linux/random.h>
#include <linux/vmalloc.h>
#include "hfi.h"
@@ -336,6 +335,8 @@ const u8 hdr_len_by_opcode[256] = {
[IB_OPCODE_RC_ATOMIC_ACKNOWLEDGE] = 12 + 8 + 4,
[IB_OPCODE_RC_COMPARE_SWAP] = 12 + 8 + 28,
[IB_OPCODE_RC_FETCH_ADD] = 12 + 8 + 28,
+ [IB_OPCODE_RC_SEND_LAST_WITH_INVALIDATE] = 12 + 8 + 4,
+ [IB_OPCODE_RC_SEND_ONLY_WITH_INVALIDATE] = 12 + 8 + 4,
/* UC */
[IB_OPCODE_UC_SEND_FIRST] = 12 + 8,
[IB_OPCODE_UC_SEND_MIDDLE] = 12 + 8,
@@ -545,7 +546,7 @@ static inline int qp_ok(int opcode, struct hfi1_packet *packet)
if (!(ib_rvt_state_ops[packet->qp->state] & RVT_PROCESS_RECV_OK))
goto dropit;
- if (((opcode & OPCODE_QP_MASK) == packet->qp->allowed_ops) ||
+ if (((opcode & RVT_OPCODE_QP_MASK) == packet->qp->allowed_ops) ||
(opcode == IB_OPCODE_CNP))
return 1;
dropit:
@@ -946,7 +947,6 @@ static int pio_wait(struct rvt_qp *qp,
dev->n_piowait += !!(flag & RVT_S_WAIT_PIO);
dev->n_piodrain += !!(flag & RVT_S_WAIT_PIO_DRAIN);
- dev->n_piowait++;
qp->s_flags |= flag;
was_empty = list_empty(&sc->piowait);
list_add_tail(&priv->s_iowait.list, &sc->piowait);
@@ -1089,16 +1089,16 @@ bail:
/*
* egress_pkey_matches_entry - return 1 if the pkey matches ent (ent
- * being an entry from the ingress partition key table), return 0
+ * being an entry from the partition key table), return 0
* otherwise. Use the matching criteria for egress partition keys
* specified in the OPAv1 spec., section 9.11.7.
*/
static inline int egress_pkey_matches_entry(u16 pkey, u16 ent)
{
u16 mkey = pkey & PKEY_LOW_15_MASK;
- u16 ment = ent & PKEY_LOW_15_MASK;
+ u16 mentry = ent & PKEY_LOW_15_MASK;
- if (mkey == ment) {
+ if (mkey == mentry) {
/*
* If pkey[15] is set (full partition member),
* is bit 15 in the corresponding table element
@@ -1111,32 +1111,32 @@ static inline int egress_pkey_matches_entry(u16 pkey, u16 ent)
return 0;
}
-/*
- * egress_pkey_check - return 0 if hdr's pkey matches according to the
- * criteria in the OPAv1 spec., section 9.11.7.
+/**
+ * egress_pkey_check - check P_KEY of a packet
+ * @ppd: Physical IB port data
+ * @lrh: Local route header
+ * @bth: Base transport header
+ * @sc5: SC for packet
+ * @s_pkey_index: used as a lookup optimization for kernel contexts only.
+ * A negative value means that a user context is calling this
+ * function.
+ *
+ * It checks if hdr's pkey is valid.
+ *
+ * Return: 0 on success, otherwise, 1
*/
-static inline int egress_pkey_check(struct hfi1_pportdata *ppd,
- struct hfi1_ib_header *hdr,
- struct rvt_qp *qp)
+int egress_pkey_check(struct hfi1_pportdata *ppd, __be16 *lrh, __be32 *bth,
+ u8 sc5, int8_t s_pkey_index)
{
- struct hfi1_qp_priv *priv = qp->priv;
- struct hfi1_other_headers *ohdr;
struct hfi1_devdata *dd;
- int i = 0;
+ int i;
u16 pkey;
- u8 lnh, sc5 = priv->s_sc;
+ int is_user_ctxt_mechanism = (s_pkey_index < 0);
if (!(ppd->part_enforce & HFI1_PART_ENFORCE_OUT))
return 0;
- /* locate the pkey within the headers */
- lnh = be16_to_cpu(hdr->lrh[0]) & 3;
- if (lnh == HFI1_LRH_GRH)
- ohdr = &hdr->u.l.oth;
- else
- ohdr = &hdr->u.oth;
-
- pkey = (u16)be32_to_cpu(ohdr->bth[0]);
+ pkey = (u16)be32_to_cpu(bth[0]);
/* If SC15, pkey[0:14] must be 0x7fff */
if ((sc5 == 0xf) && ((pkey & PKEY_LOW_15_MASK) != PKEY_LOW_15_MASK))
@@ -1146,28 +1146,37 @@ static inline int egress_pkey_check(struct hfi1_pportdata *ppd,
if ((pkey & PKEY_LOW_15_MASK) == 0)
goto bad;
- /* The most likely matching pkey has index qp->s_pkey_index */
- if (unlikely(!egress_pkey_matches_entry(pkey,
- ppd->pkeys
- [qp->s_pkey_index]))) {
- /* no match - try the entire table */
- for (; i < MAX_PKEY_VALUES; i++) {
- if (egress_pkey_matches_entry(pkey, ppd->pkeys[i]))
- break;
- }
+ /*
+ * For the kernel contexts only, if a qp is passed into the function,
+ * the most likely matching pkey has index qp->s_pkey_index
+ */
+ if (!is_user_ctxt_mechanism &&
+ egress_pkey_matches_entry(pkey, ppd->pkeys[s_pkey_index])) {
+ return 0;
}
- if (i < MAX_PKEY_VALUES)
- return 0;
+ for (i = 0; i < MAX_PKEY_VALUES; i++) {
+ if (egress_pkey_matches_entry(pkey, ppd->pkeys[i]))
+ return 0;
+ }
bad:
- incr_cntr64(&ppd->port_xmit_constraint_errors);
- dd = ppd->dd;
- if (!(dd->err_info_xmit_constraint.status & OPA_EI_STATUS_SMASK)) {
- u16 slid = be16_to_cpu(hdr->lrh[3]);
-
- dd->err_info_xmit_constraint.status |= OPA_EI_STATUS_SMASK;
- dd->err_info_xmit_constraint.slid = slid;
- dd->err_info_xmit_constraint.pkey = pkey;
+ /*
+ * For the user-context mechanism, the P_KEY check would only happen
+ * once per SDMA request, not once per packet. Therefore, there's no
+ * need to increment the counter for the user-context mechanism.
+ */
+ if (!is_user_ctxt_mechanism) {
+ incr_cntr64(&ppd->port_xmit_constraint_errors);
+ dd = ppd->dd;
+ if (!(dd->err_info_xmit_constraint.status &
+ OPA_EI_STATUS_SMASK)) {
+ u16 slid = be16_to_cpu(lrh[3]);
+
+ dd->err_info_xmit_constraint.status |=
+ OPA_EI_STATUS_SMASK;
+ dd->err_info_xmit_constraint.slid = slid;
+ dd->err_info_xmit_constraint.pkey = pkey;
+ }
}
return 1;
}
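
    For reference, the egress check now boils down to: compare the low 15 bits, let a
    limited-member P_KEY match any entry with the same base, and require a full-member
    P_KEY (bit 15 set) to hit a full-member table entry; the hinted index is tried first
    only for kernel contexts. A standalone sketch of that matching rule and lookup order,
    with illustrative constants and names rather than the driver's API:

    #include <stdint.h>
    #include <stddef.h>

    #define PKEY_LOW_15_MASK 0x7fff
    #define PKEY_MEMBER_MASK 0x8000

    static int pkey_matches_entry(uint16_t pkey, uint16_t ent)
    {
        if ((pkey & PKEY_LOW_15_MASK) != (ent & PKEY_LOW_15_MASK))
            return 0;
        /* a full-member pkey only matches a full-member table entry */
        if (pkey & PKEY_MEMBER_MASK)
            return !!(ent & PKEY_MEMBER_MASK);
        return 1;
    }

    /* Check a pkey against a small table, trying a hinted index first. */
    static int pkey_allowed(uint16_t pkey, const uint16_t *table, size_t n, int hint)
    {
        size_t i;

        if (hint >= 0 && (size_t)hint < n && pkey_matches_entry(pkey, table[hint]))
            return 1;
        for (i = 0; i < n; i++)
            if (pkey_matches_entry(pkey, table[i]))
                return 1;
        return 0;
    }
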
@@ -1227,11 +1236,26 @@ int hfi1_verbs_send(struct rvt_qp *qp, struct hfi1_pkt_state *ps)
{
struct hfi1_devdata *dd = dd_from_ibdev(qp->ibqp.device);
struct hfi1_qp_priv *priv = qp->priv;
+ struct hfi1_other_headers *ohdr;
+ struct hfi1_ib_header *hdr;
send_routine sr;
int ret;
+ u8 lnh;
+
+ hdr = &ps->s_txreq->phdr.hdr;
+ /* locate the pkey within the headers */
+ lnh = be16_to_cpu(hdr->lrh[0]) & 3;
+ if (lnh == HFI1_LRH_GRH)
+ ohdr = &hdr->u.l.oth;
+ else
+ ohdr = &hdr->u.oth;
sr = get_send_routine(qp, ps->s_txreq);
- ret = egress_pkey_check(dd->pport, &ps->s_txreq->phdr.hdr, qp);
+ ret = egress_pkey_check(dd->pport,
+ hdr->lrh,
+ ohdr->bth,
+ priv->s_sc,
+ qp->s_pkey_index);
if (unlikely(ret)) {
/*
* The value we are returning here does not get propagated to
diff --git a/drivers/staging/rdma/hfi1/verbs.h b/drivers/infiniband/hw/hfi1/verbs.h
index 6c4670fffdbb..488356775627 100644
--- a/drivers/staging/rdma/hfi1/verbs.h
+++ b/drivers/infiniband/hw/hfi1/verbs.h
@@ -152,6 +152,7 @@ union ib_ehdrs {
} at;
__be32 imm_data;
__be32 aeth;
+ __be32 ieth;
struct ib_atomic_eth atomic_eth;
} __packed;
@@ -215,6 +216,7 @@ struct hfi1_pkt_state {
struct hfi1_ibport *ibp;
struct hfi1_pportdata *ppd;
struct verbs_txreq *s_txreq;
+ unsigned long flags;
};
#define HFI1_PSN_CREDIT 16
@@ -334,9 +336,6 @@ int hfi1_process_mad(struct ib_device *ibdev, int mad_flags, u8 port,
#endif
#define PSN_MODIFY_MASK 0xFFFFFF
-/* Number of bits to pay attention to in the opcode for checking qp type */
-#define OPCODE_QP_MASK 0xE0
-
/*
* Compare the lower 24 bits of the msn values.
* Returns an integer <, ==, or > than zero.
diff --git a/drivers/staging/rdma/hfi1/verbs_txreq.c b/drivers/infiniband/hw/hfi1/verbs_txreq.c
index bc95c4112c61..bc95c4112c61 100644
--- a/drivers/staging/rdma/hfi1/verbs_txreq.c
+++ b/drivers/infiniband/hw/hfi1/verbs_txreq.c
diff --git a/drivers/staging/rdma/hfi1/verbs_txreq.h b/drivers/infiniband/hw/hfi1/verbs_txreq.h
index 1cf69b2fe4a5..1cf69b2fe4a5 100644
--- a/drivers/staging/rdma/hfi1/verbs_txreq.h
+++ b/drivers/infiniband/hw/hfi1/verbs_txreq.h
diff --git a/drivers/infiniband/hw/i40iw/i40iw.h b/drivers/infiniband/hw/i40iw/i40iw.h
index 819767681445..8b9532034558 100644
--- a/drivers/infiniband/hw/i40iw/i40iw.h
+++ b/drivers/infiniband/hw/i40iw/i40iw.h
@@ -50,8 +50,6 @@
#include <rdma/ib_pack.h>
#include <rdma/rdma_cm.h>
#include <rdma/iw_cm.h>
-#include <rdma/iw_portmap.h>
-#include <rdma/rdma_netlink.h>
#include <crypto/hash.h>
#include "i40iw_status.h"
@@ -254,6 +252,7 @@ struct i40iw_device {
u32 arp_table_size;
u32 next_arp_index;
spinlock_t resource_lock; /* hw resource access */
+ spinlock_t qptable_lock;
u32 vendor_id;
u32 vendor_part_id;
u32 of_device_registered;
@@ -392,7 +391,7 @@ void i40iw_flush_wqes(struct i40iw_device *iwdev,
void i40iw_manage_arp_cache(struct i40iw_device *iwdev,
unsigned char *mac_addr,
- __be32 *ip_addr,
+ u32 *ip_addr,
bool ipv4,
u32 action);
@@ -550,7 +549,7 @@ enum i40iw_status_code i40iw_hw_flush_wqes(struct i40iw_device *iwdev,
struct i40iw_qp_flush_info *info,
bool wait);
-void i40iw_copy_ip_ntohl(u32 *dst, u32 *src);
+void i40iw_copy_ip_ntohl(u32 *dst, __be32 *src);
struct ib_mr *i40iw_reg_phys_mr(struct ib_pd *ib_pd,
u64 addr,
u64 size,
diff --git a/drivers/infiniband/hw/i40iw/i40iw_cm.c b/drivers/infiniband/hw/i40iw/i40iw_cm.c
index 38f917a6c778..d2fa72516960 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_cm.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_cm.c
@@ -771,6 +771,7 @@ static void i40iw_build_mpa_v2(struct i40iw_cm_node *cm_node,
{
struct ietf_mpa_v2 *mpa_frame = (struct ietf_mpa_v2 *)start_addr;
struct ietf_rtr_msg *rtr_msg = &mpa_frame->rtr_msg;
+ u16 ctrl_ird, ctrl_ord;
/* initialize the upper 5 bytes of the frame */
i40iw_build_mpa_v1(cm_node, start_addr, mpa_key);
@@ -779,38 +780,38 @@ static void i40iw_build_mpa_v2(struct i40iw_cm_node *cm_node,
/* initialize RTR msg */
if (cm_node->mpav2_ird_ord == IETF_NO_IRD_ORD) {
- rtr_msg->ctrl_ird = IETF_NO_IRD_ORD;
- rtr_msg->ctrl_ord = IETF_NO_IRD_ORD;
+ ctrl_ird = IETF_NO_IRD_ORD;
+ ctrl_ord = IETF_NO_IRD_ORD;
} else {
- rtr_msg->ctrl_ird = (cm_node->ird_size > IETF_NO_IRD_ORD) ?
+ ctrl_ird = (cm_node->ird_size > IETF_NO_IRD_ORD) ?
IETF_NO_IRD_ORD : cm_node->ird_size;
- rtr_msg->ctrl_ord = (cm_node->ord_size > IETF_NO_IRD_ORD) ?
+ ctrl_ord = (cm_node->ord_size > IETF_NO_IRD_ORD) ?
IETF_NO_IRD_ORD : cm_node->ord_size;
}
- rtr_msg->ctrl_ird |= IETF_PEER_TO_PEER;
- rtr_msg->ctrl_ird |= IETF_FLPDU_ZERO_LEN;
+ ctrl_ird |= IETF_PEER_TO_PEER;
+ ctrl_ird |= IETF_FLPDU_ZERO_LEN;
switch (mpa_key) {
case MPA_KEY_REQUEST:
- rtr_msg->ctrl_ord |= IETF_RDMA0_WRITE;
- rtr_msg->ctrl_ord |= IETF_RDMA0_READ;
+ ctrl_ord |= IETF_RDMA0_WRITE;
+ ctrl_ord |= IETF_RDMA0_READ;
break;
case MPA_KEY_REPLY:
switch (cm_node->send_rdma0_op) {
case SEND_RDMA_WRITE_ZERO:
- rtr_msg->ctrl_ord |= IETF_RDMA0_WRITE;
+ ctrl_ord |= IETF_RDMA0_WRITE;
break;
case SEND_RDMA_READ_ZERO:
- rtr_msg->ctrl_ord |= IETF_RDMA0_READ;
+ ctrl_ord |= IETF_RDMA0_READ;
break;
}
break;
default:
break;
}
- rtr_msg->ctrl_ird = htons(rtr_msg->ctrl_ird);
- rtr_msg->ctrl_ord = htons(rtr_msg->ctrl_ord);
+ rtr_msg->ctrl_ird = htons(ctrl_ird);
+ rtr_msg->ctrl_ord = htons(ctrl_ord);
}
/**
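
    The change above stops OR-ing flag bits into fields that are already in network byte
    order; the control words are now accumulated in host order and converted exactly once
    at the end. A tiny sketch of that pattern (the flag values and struct layout are made
    up for illustration):

    #include <stdint.h>
    #include <arpa/inet.h>

    struct rtr_hdr { uint16_t ctrl_ird; uint16_t ctrl_ord; };

    static void fill_rtr(struct rtr_hdr *hdr, uint16_t ird, uint16_t ord)
    {
        uint16_t ctrl_ird = ird;
        uint16_t ctrl_ord = ord;

        ctrl_ird |= 0x4000;               /* e.g. peer-to-peer flag (illustrative) */
        ctrl_ord |= 0x8000;               /* e.g. RDMA0 write flag (illustrative) */

        hdr->ctrl_ird = htons(ctrl_ird);  /* single, final byte-order conversion */
        hdr->ctrl_ord = htons(ctrl_ord);
    }
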
@@ -2107,7 +2108,7 @@ static bool i40iw_ipv6_is_loopback(u32 *loc_addr, u32 *rem_addr)
struct in6_addr raddr6;
i40iw_copy_ip_htonl(raddr6.in6_u.u6_addr32, rem_addr);
- return (!memcmp(loc_addr, rem_addr, 16) || ipv6_addr_loopback(&raddr6));
+ return !memcmp(loc_addr, rem_addr, 16) || ipv6_addr_loopback(&raddr6);
}
/**
@@ -2160,7 +2161,7 @@ static struct i40iw_cm_node *i40iw_make_cm_node(
cm_node->tcp_cntxt.rcv_wnd =
I40IW_CM_DEFAULT_RCV_WND_SCALED >> I40IW_CM_DEFAULT_RCV_WND_SCALE;
ts = current_kernel_time();
- cm_node->tcp_cntxt.loc_seq_num = htonl(ts.tv_nsec);
+ cm_node->tcp_cntxt.loc_seq_num = ts.tv_nsec;
cm_node->tcp_cntxt.mss = iwdev->mss;
cm_node->iwdev = iwdev;
@@ -2234,7 +2235,7 @@ static void i40iw_rem_ref_cm_node(struct i40iw_cm_node *cm_node)
if (cm_node->listener) {
i40iw_dec_refcnt_listen(cm_core, cm_node->listener, 0, true);
} else {
- if (!i40iw_listen_port_in_use(cm_core, htons(cm_node->loc_port)) &&
+ if (!i40iw_listen_port_in_use(cm_core, cm_node->loc_port) &&
cm_node->apbvt_set && cm_node->iwdev) {
i40iw_manage_apbvt(cm_node->iwdev,
cm_node->loc_port,
@@ -2852,7 +2853,6 @@ static struct i40iw_cm_node *i40iw_create_cm_node(
void *private_data,
struct i40iw_cm_info *cm_info)
{
- int ret;
struct i40iw_cm_node *cm_node;
struct i40iw_cm_listener *loopback_remotelistener;
struct i40iw_cm_node *loopback_remotenode;
@@ -2922,30 +2922,6 @@ static struct i40iw_cm_node *i40iw_create_cm_node(
memcpy(cm_node->pdata_buf, private_data, private_data_len);
cm_node->state = I40IW_CM_STATE_SYN_SENT;
- ret = i40iw_send_syn(cm_node, 0);
-
- if (ret) {
- if (cm_node->ipv4)
- i40iw_debug(cm_node->dev,
- I40IW_DEBUG_CM,
- "Api - connect() FAILED: dest addr=%pI4",
- cm_node->rem_addr);
- else
- i40iw_debug(cm_node->dev, I40IW_DEBUG_CM,
- "Api - connect() FAILED: dest addr=%pI6",
- cm_node->rem_addr);
- i40iw_rem_ref_cm_node(cm_node);
- cm_node = NULL;
- }
-
- if (cm_node)
- i40iw_debug(cm_node->dev,
- I40IW_DEBUG_CM,
- "Api - connect(): port=0x%04x, cm_node=%p, cm_id = %p.\n",
- cm_node->rem_port,
- cm_node,
- cm_node->cm_id);
-
return cm_node;
}
@@ -3266,11 +3242,13 @@ static void i40iw_init_tcp_ctx(struct i40iw_cm_node *cm_node,
tcp_info->dest_ip_addr3 = cpu_to_le32(cm_node->rem_addr[0]);
tcp_info->local_ipaddr3 = cpu_to_le32(cm_node->loc_addr[0]);
- tcp_info->arp_idx = cpu_to_le32(i40iw_arp_table(iwqp->iwdev,
- &tcp_info->dest_ip_addr3,
- true,
- NULL,
- I40IW_ARP_RESOLVE));
+ tcp_info->arp_idx =
+ cpu_to_le16((u16)i40iw_arp_table(
+ iwqp->iwdev,
+ &tcp_info->dest_ip_addr3,
+ true,
+ NULL,
+ I40IW_ARP_RESOLVE));
} else {
tcp_info->src_port = cpu_to_le16(cm_node->loc_port);
tcp_info->dst_port = cpu_to_le16(cm_node->rem_port);
@@ -3282,12 +3260,13 @@ static void i40iw_init_tcp_ctx(struct i40iw_cm_node *cm_node,
tcp_info->local_ipaddr1 = cpu_to_le32(cm_node->loc_addr[1]);
tcp_info->local_ipaddr2 = cpu_to_le32(cm_node->loc_addr[2]);
tcp_info->local_ipaddr3 = cpu_to_le32(cm_node->loc_addr[3]);
- tcp_info->arp_idx = cpu_to_le32(i40iw_arp_table(
- iwqp->iwdev,
- &tcp_info->dest_ip_addr0,
- false,
- NULL,
- I40IW_ARP_RESOLVE));
+ tcp_info->arp_idx =
+ cpu_to_le16((u16)i40iw_arp_table(
+ iwqp->iwdev,
+ &tcp_info->dest_ip_addr0,
+ false,
+ NULL,
+ I40IW_ARP_RESOLVE));
}
}
@@ -3564,7 +3543,6 @@ int i40iw_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
struct i40iw_cm_node *cm_node;
struct ib_qp_attr attr;
int passive_state;
- struct i40iw_ib_device *iwibdev;
struct ib_mr *ibmr;
struct i40iw_pd *iwpd;
u16 buf_len = 0;
@@ -3627,7 +3605,6 @@ int i40iw_accept(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
!i40iw_ipv4_is_loopback(cm_node->loc_addr[0], cm_node->rem_addr[0])) ||
(!cm_node->ipv4 &&
!i40iw_ipv6_is_loopback(cm_node->loc_addr, cm_node->rem_addr))) {
- iwibdev = iwdev->iwibdev;
iwpd = iwqp->iwpd;
tagged_offset = (uintptr_t)iwqp->ietf_mem.va;
ibmr = i40iw_reg_phys_mr(&iwpd->ibpd,
@@ -3752,6 +3729,7 @@ int i40iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
struct sockaddr_in *raddr;
struct sockaddr_in6 *laddr6;
struct sockaddr_in6 *raddr6;
+ bool qhash_set = false;
int apbvt_set = 0;
enum i40iw_status_code status;
@@ -3810,6 +3788,7 @@ int i40iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
true);
if (status)
return -EINVAL;
+ qhash_set = true;
}
status = i40iw_manage_apbvt(iwdev, cm_info.loc_port, I40IW_MANAGE_APBVT_ADD);
if (status) {
@@ -3828,23 +3807,8 @@ int i40iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
conn_param->private_data_len,
(void *)conn_param->private_data,
&cm_info);
- if (!cm_node) {
- i40iw_manage_qhash(iwdev,
- &cm_info,
- I40IW_QHASH_TYPE_TCP_ESTABLISHED,
- I40IW_QHASH_MANAGE_TYPE_DELETE,
- NULL,
- false);
-
- if (apbvt_set && !i40iw_listen_port_in_use(&iwdev->cm_core,
- cm_info.loc_port))
- i40iw_manage_apbvt(iwdev,
- cm_info.loc_port,
- I40IW_MANAGE_APBVT_DEL);
- cm_id->rem_ref(cm_id);
- iwdev->cm_core.stats_connect_errs++;
- return -ENOMEM;
- }
+ if (!cm_node)
+ goto err;
i40iw_record_ird_ord(cm_node, (u16)conn_param->ird, (u16)conn_param->ord);
if (cm_node->send_rdma0_op == SEND_RDMA_READ_ZERO &&
@@ -3852,12 +3816,54 @@ int i40iw_connect(struct iw_cm_id *cm_id, struct iw_cm_conn_param *conn_param)
cm_node->ord_size = 1;
cm_node->apbvt_set = apbvt_set;
- cm_node->qhash_set = true;
+ cm_node->qhash_set = qhash_set;
iwqp->cm_node = cm_node;
cm_node->iwqp = iwqp;
iwqp->cm_id = cm_id;
i40iw_add_ref(&iwqp->ibqp);
+
+ if (cm_node->state == I40IW_CM_STATE_SYN_SENT) {
+ if (i40iw_send_syn(cm_node, 0)) {
+ i40iw_rem_ref_cm_node(cm_node);
+ goto err;
+ }
+ }
+
+ i40iw_debug(cm_node->dev,
+ I40IW_DEBUG_CM,
+ "Api - connect(): port=0x%04x, cm_node=%p, cm_id = %p.\n",
+ cm_node->rem_port,
+ cm_node,
+ cm_node->cm_id);
return 0;
+
+err:
+ if (cm_node) {
+ if (cm_node->ipv4)
+ i40iw_debug(cm_node->dev,
+ I40IW_DEBUG_CM,
+ "Api - connect() FAILED: dest addr=%pI4",
+ cm_node->rem_addr);
+ else
+ i40iw_debug(cm_node->dev, I40IW_DEBUG_CM,
+ "Api - connect() FAILED: dest addr=%pI6",
+ cm_node->rem_addr);
+ }
+ i40iw_manage_qhash(iwdev,
+ &cm_info,
+ I40IW_QHASH_TYPE_TCP_ESTABLISHED,
+ I40IW_QHASH_MANAGE_TYPE_DELETE,
+ NULL,
+ false);
+
+ if (apbvt_set && !i40iw_listen_port_in_use(&iwdev->cm_core,
+ cm_info.loc_port))
+ i40iw_manage_apbvt(iwdev,
+ cm_info.loc_port,
+ I40IW_MANAGE_APBVT_DEL);
+ cm_id->rem_ref(cm_id);
+ iwdev->cm_core.stats_connect_errs++;
+ return -ENOMEM;
}
/**
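
    The connect path above now funnels every failure through a single err: label so that
    the qhash entry, the APBVT port and the cm_id reference are all unwound in one place.
    The same single-exit idiom in miniature; the resources and names here are placeholders:

    #include <stdlib.h>

    struct ctx { void *a; void *b; };

    /* Acquire two resources, undo both on any failure, keep them on success. */
    static int ctx_setup(struct ctx *c)
    {
        void *a = NULL, *b = NULL;

        a = malloc(64);
        if (!a)
            goto err;
        b = malloc(64);
        if (!b)
            goto err;

        c->a = a;
        c->b = b;
        return 0;

    err:
        free(b);    /* free(NULL) is a no-op, so partial setup unwinds cleanly */
        free(a);
        return -1;
    }
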
diff --git a/drivers/infiniband/hw/i40iw/i40iw_cm.h b/drivers/infiniband/hw/i40iw/i40iw_cm.h
index 5f8ceb4a8e84..e9046d9f9645 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_cm.h
+++ b/drivers/infiniband/hw/i40iw/i40iw_cm.h
@@ -1,6 +1,6 @@
/*******************************************************************************
*
-* Copyright (c) 2015 Intel Corporation. All rights reserved.
+* Copyright (c) 2015-2016 Intel Corporation. All rights reserved.
*
* This software is available to you under a choice of one of two
* licenses. You may choose to be licensed under the terms of the GNU
@@ -291,8 +291,6 @@ struct i40iw_cm_listener {
u8 loc_mac[ETH_ALEN];
u32 loc_addr[4];
u16 loc_port;
- u32 map_loc_addr[4];
- u16 map_loc_port;
struct iw_cm_id *cm_id;
atomic_t ref_count;
struct i40iw_device *iwdev;
@@ -317,8 +315,6 @@ struct i40iw_kmem_info {
struct i40iw_cm_node {
u32 loc_addr[4], rem_addr[4];
u16 loc_port, rem_port;
- u32 map_loc_addr[4], map_rem_addr[4];
- u16 map_loc_port, map_rem_port;
u16 vlan_id;
enum i40iw_cm_node_state state;
u8 loc_mac[ETH_ALEN];
@@ -370,10 +366,6 @@ struct i40iw_cm_info {
u16 rem_port;
u32 loc_addr[4];
u32 rem_addr[4];
- u16 map_loc_port;
- u16 map_rem_port;
- u32 map_loc_addr[4];
- u32 map_rem_addr[4];
u16 vlan_id;
int backlog;
u16 user_pri;
diff --git a/drivers/infiniband/hw/i40iw/i40iw_ctrl.c b/drivers/infiniband/hw/i40iw/i40iw_ctrl.c
index f05802bf6ca0..2c4b4d072d6a 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_ctrl.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_ctrl.c
@@ -114,16 +114,21 @@ static enum i40iw_status_code i40iw_cqp_poll_registers(
* i40iw_sc_parse_fpm_commit_buf - parse fpm commit buffer
* @buf: ptr to fpm commit buffer
* @info: ptr to i40iw_hmc_obj_info struct
+ * @sd: number of SDs for HMC objects
*
* parses fpm commit info and copies the base values
* of hmc objects in hmc_info
*/
static enum i40iw_status_code i40iw_sc_parse_fpm_commit_buf(
u64 *buf,
- struct i40iw_hmc_obj_info *info)
+ struct i40iw_hmc_obj_info *info,
+ u32 *sd)
{
u64 temp;
+ u64 size;
+ u64 base = 0;
u32 i, j;
+ u32 k = 0;
u32 low;
/* copy base values in obj_info */
@@ -131,10 +136,20 @@ static enum i40iw_status_code i40iw_sc_parse_fpm_commit_buf(
i <= I40IW_HMC_IW_PBLE; i++, j += 8) {
get_64bit_val(buf, j, &temp);
info[i].base = RS_64_1(temp, 32) * 512;
+ if (info[i].base > base) {
+ base = info[i].base;
+ k = i;
+ }
low = (u32)(temp);
if (low)
info[i].cnt = low;
}
+ size = info[k].cnt * info[k].size + info[k].base;
+ if (size & 0x1FFFFF)
+ *sd = (u32)((size >> 21) + 1); /* add 1 for remainder */
+ else
+ *sd = (u32)(size >> 21);
+
return 0;
}
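
    The new code derives the segment-descriptor count from the HMC object with the highest
    base: total footprint = base + cnt * size of that object, rounded up to 2 MB segments
    (one SD covers 1 << 21 bytes, hence the >> 21 with a +1 whenever the low 21 bits are
    non-zero). A worked sketch of that rounding with made-up values:

    #include <stdint.h>
    #include <stdio.h>

    #define SD_SEGMENT_SHIFT 21          /* one SD covers 2 MB */
    #define SD_SEGMENT_MASK  ((1ULL << SD_SEGMENT_SHIFT) - 1)

    static uint32_t sds_needed(uint64_t bytes)
    {
        /* round up to whole 2 MB segments */
        return (uint32_t)((bytes >> SD_SEGMENT_SHIFT) +
                          ((bytes & SD_SEGMENT_MASK) ? 1 : 0));
    }

    int main(void)
    {
        /* e.g. 5 MB of HMC objects -> 3 segment descriptors */
        printf("%u\n", sds_needed(5ULL << 20));   /* prints 3 */
        return 0;
    }
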
@@ -2909,6 +2924,65 @@ static enum i40iw_status_code i40iw_sc_mw_alloc(
}
/**
+ * i40iw_sc_mr_fast_register - Posts RDMA fast register mr WR to iwarp qp
+ * @qp: sc qp struct
+ * @info: fast mr info
+ * @post_sq: flag for cqp db to ring
+ */
+enum i40iw_status_code i40iw_sc_mr_fast_register(
+ struct i40iw_sc_qp *qp,
+ struct i40iw_fast_reg_stag_info *info,
+ bool post_sq)
+{
+ u64 temp, header;
+ u64 *wqe;
+ u32 wqe_idx;
+
+ wqe = i40iw_qp_get_next_send_wqe(&qp->qp_uk, &wqe_idx, I40IW_QP_WQE_MIN_SIZE,
+ 0, info->wr_id);
+ if (!wqe)
+ return I40IW_ERR_QP_TOOMANY_WRS_POSTED;
+
+ i40iw_debug(qp->dev, I40IW_DEBUG_MR, "%s: wr_id[%llxh] wqe_idx[%04d] location[%p]\n",
+ __func__, info->wr_id, wqe_idx,
+ &qp->qp_uk.sq_wrtrk_array[wqe_idx].wrid);
+ temp = (info->addr_type == I40IW_ADDR_TYPE_VA_BASED) ? (uintptr_t)info->va : info->fbo;
+ set_64bit_val(wqe, 0, temp);
+
+ temp = RS_64(info->first_pm_pbl_index >> 16, I40IWQPSQ_FIRSTPMPBLIDXHI);
+ set_64bit_val(wqe,
+ 8,
+ LS_64(temp, I40IWQPSQ_FIRSTPMPBLIDXHI) |
+ LS_64(info->reg_addr_pa >> I40IWQPSQ_PBLADDR_SHIFT, I40IWQPSQ_PBLADDR));
+
+ set_64bit_val(wqe,
+ 16,
+ info->total_len |
+ LS_64(info->first_pm_pbl_index, I40IWQPSQ_FIRSTPMPBLIDXLO));
+
+ header = LS_64(info->stag_key, I40IWQPSQ_STAGKEY) |
+ LS_64(info->stag_idx, I40IWQPSQ_STAGINDEX) |
+ LS_64(I40IWQP_OP_FAST_REGISTER, I40IWQPSQ_OPCODE) |
+ LS_64(info->chunk_size, I40IWQPSQ_LPBLSIZE) |
+ LS_64(info->page_size, I40IWQPSQ_HPAGESIZE) |
+ LS_64(info->access_rights, I40IWQPSQ_STAGRIGHTS) |
+ LS_64(info->addr_type, I40IWQPSQ_VABASEDTO) |
+ LS_64(info->read_fence, I40IWQPSQ_READFENCE) |
+ LS_64(info->local_fence, I40IWQPSQ_LOCALFENCE) |
+ LS_64(info->signaled, I40IWQPSQ_SIGCOMPL) |
+ LS_64(qp->qp_uk.swqe_polarity, I40IWQPSQ_VALID);
+
+ i40iw_insert_wqe_hdr(wqe, header);
+
+ i40iw_debug_buf(qp->dev, I40IW_DEBUG_WQE, "FAST_REG WQE",
+ wqe, I40IW_QP_WQE_MIN_SIZE);
+
+ if (post_sq)
+ i40iw_qp_post_wr(&qp->qp_uk);
+ return 0;
+}
+
+/**
* i40iw_sc_send_lsmm - send last streaming mode message
* @qp: sc qp struct
* @lsmm_buf: buffer with lsmm message
@@ -3147,7 +3221,7 @@ enum i40iw_status_code i40iw_sc_init_iw_hmc(struct i40iw_sc_dev *dev, u8 hmc_fn_
i40iw_cqp_commit_fpm_values_cmd(dev, &query_fpm_mem, hmc_fn_id);
/* parse the fpm_commit_buf and fill hmc obj info */
- i40iw_sc_parse_fpm_commit_buf((u64 *)query_fpm_mem.va, hmc_info->hmc_obj);
+ i40iw_sc_parse_fpm_commit_buf((u64 *)query_fpm_mem.va, hmc_info->hmc_obj, &hmc_info->sd_table.sd_cnt);
mem_size = sizeof(struct i40iw_hmc_sd_entry) *
(hmc_info->sd_table.sd_cnt + hmc_info->first_sd_index);
ret_code = i40iw_allocate_virt_mem(dev->hw, &virt_mem, mem_size);
@@ -3221,7 +3295,9 @@ static enum i40iw_status_code i40iw_sc_configure_iw_fpm(struct i40iw_sc_dev *dev
/* parse the fpm_commit_buf and fill hmc obj info */
if (!ret_code)
- ret_code = i40iw_sc_parse_fpm_commit_buf(dev->fpm_commit_buf, hmc_info->hmc_obj);
+ ret_code = i40iw_sc_parse_fpm_commit_buf(dev->fpm_commit_buf,
+ hmc_info->hmc_obj,
+ &hmc_info->sd_table.sd_cnt);
i40iw_debug_buf(dev, I40IW_DEBUG_HMC, "COMMIT FPM BUFFER",
commit_fpm_mem.va, I40IW_COMMIT_FPM_BUF_SIZE);
@@ -3469,6 +3545,40 @@ static bool i40iw_ring_full(struct i40iw_sc_cqp *cqp)
}
/**
+ * i40iw_est_sd - returns approximate number of SDs for HMC
+ * @dev: sc device struct
+ * @hmc_info: hmc structure, size and count for HMC objects
+ */
+static u64 i40iw_est_sd(struct i40iw_sc_dev *dev, struct i40iw_hmc_info *hmc_info)
+{
+ int i;
+ u64 size = 0;
+ u64 sd;
+
+ for (i = I40IW_HMC_IW_QP; i < I40IW_HMC_IW_PBLE; i++)
+ size += hmc_info->hmc_obj[i].cnt * hmc_info->hmc_obj[i].size;
+
+ if (dev->is_pf)
+ size += hmc_info->hmc_obj[I40IW_HMC_IW_PBLE].cnt * hmc_info->hmc_obj[I40IW_HMC_IW_PBLE].size;
+
+ if (size & 0x1FFFFF)
+ sd = (size >> 21) + 1; /* add 1 for remainder */
+ else
+ sd = size >> 21;
+
+ if (!dev->is_pf) {
+ /* 2MB alignment for VF PBLE HMC */
+ size = hmc_info->hmc_obj[I40IW_HMC_IW_PBLE].cnt * hmc_info->hmc_obj[I40IW_HMC_IW_PBLE].size;
+ if (size & 0x1FFFFF)
+ sd += (size >> 21) + 1; /* add 1 for remainder */
+ else
+ sd += size >> 21;
+ }
+
+ return sd;
+}
+
+/**
* i40iw_config_fpm_values - configure HMC objects
* @dev: sc device struct
* @qp_count: desired qp count
@@ -3479,7 +3589,7 @@ enum i40iw_status_code i40iw_config_fpm_values(struct i40iw_sc_dev *dev, u32 qp_
u32 i, mem_size;
u32 qpwantedoriginal, qpwanted, mrwanted, pblewanted;
u32 powerof2;
- u64 sd_needed, bytes_needed;
+ u64 sd_needed;
u32 loop_count = 0;
struct i40iw_hmc_info *hmc_info;
@@ -3497,23 +3607,15 @@ enum i40iw_status_code i40iw_config_fpm_values(struct i40iw_sc_dev *dev, u32 qp_
return ret_code;
}
- bytes_needed = 0;
- for (i = I40IW_HMC_IW_QP; i < I40IW_HMC_IW_MAX; i++) {
+ for (i = I40IW_HMC_IW_QP; i < I40IW_HMC_IW_MAX; i++)
hmc_info->hmc_obj[i].cnt = hmc_info->hmc_obj[i].max_cnt;
- bytes_needed +=
- (hmc_info->hmc_obj[i].max_cnt) * (hmc_info->hmc_obj[i].size);
- i40iw_debug(dev, I40IW_DEBUG_HMC,
- "%s i[%04d] max_cnt[0x%04X] size[0x%04llx]\n",
- __func__, i, hmc_info->hmc_obj[i].max_cnt,
- hmc_info->hmc_obj[i].size);
- }
- sd_needed = (bytes_needed / I40IW_HMC_DIRECT_BP_SIZE) + 1; /* round up */
+ sd_needed = i40iw_est_sd(dev, hmc_info);
i40iw_debug(dev, I40IW_DEBUG_HMC,
"%s: FW initial max sd_count[%08lld] first_sd_index[%04d]\n",
__func__, sd_needed, hmc_info->first_sd_index);
i40iw_debug(dev, I40IW_DEBUG_HMC,
- "%s: bytes_needed=0x%llx sd count %d where max sd is %d\n",
- __func__, bytes_needed, hmc_info->sd_table.sd_cnt,
+ "%s: sd count %d where max sd is %d\n",
+ __func__, hmc_info->sd_table.sd_cnt,
hmc_fpm_misc->max_sds);
qpwanted = min(qp_count, hmc_info->hmc_obj[I40IW_HMC_IW_QP].max_cnt);
@@ -3555,11 +3657,7 @@ enum i40iw_status_code i40iw_config_fpm_values(struct i40iw_sc_dev *dev, u32 qp_
hmc_info->hmc_obj[I40IW_HMC_IW_PBLE].cnt = pblewanted;
/* How much memory is needed for all the objects. */
- bytes_needed = 0;
- for (i = I40IW_HMC_IW_QP; i < I40IW_HMC_IW_MAX; i++)
- bytes_needed +=
- (hmc_info->hmc_obj[i].cnt) * (hmc_info->hmc_obj[i].size);
- sd_needed = (bytes_needed / I40IW_HMC_DIRECT_BP_SIZE) + 1;
+ sd_needed = i40iw_est_sd(dev, hmc_info);
if ((loop_count > 1000) ||
((!(loop_count % 10)) &&
(qpwanted > qpwantedoriginal * 2 / 3))) {
@@ -3580,15 +3678,7 @@ enum i40iw_status_code i40iw_config_fpm_values(struct i40iw_sc_dev *dev, u32 qp_
pblewanted -= FPM_MULTIPLIER * 1000;
} while (sd_needed > hmc_fpm_misc->max_sds && loop_count < 2000);
- bytes_needed = 0;
- for (i = I40IW_HMC_IW_QP; i < I40IW_HMC_IW_MAX; i++) {
- bytes_needed += (hmc_info->hmc_obj[i].cnt) * (hmc_info->hmc_obj[i].size);
- i40iw_debug(dev, I40IW_DEBUG_HMC,
- "%s i[%04d] cnt[0x%04x] size[0x%04llx]\n",
- __func__, i, hmc_info->hmc_obj[i].cnt,
- hmc_info->hmc_obj[i].size);
- }
- sd_needed = (bytes_needed / I40IW_HMC_DIRECT_BP_SIZE) + 1; /* round up not truncate. */
+ sd_needed = i40iw_est_sd(dev, hmc_info);
i40iw_debug(dev, I40IW_DEBUG_HMC,
"loop_cnt=%d, sd_needed=%lld, qpcnt = %d, cqcnt=%d, mrcnt=%d, pblecnt=%d\n",
@@ -3606,8 +3696,6 @@ enum i40iw_status_code i40iw_config_fpm_values(struct i40iw_sc_dev *dev, u32 qp_
return ret_code;
}
- hmc_info->sd_table.sd_cnt = (u32)sd_needed;
-
mem_size = sizeof(struct i40iw_hmc_sd_entry) *
(hmc_info->sd_table.sd_cnt + hmc_info->first_sd_index + 1);
ret_code = i40iw_allocate_virt_mem(dev->hw, &virt_mem, mem_size);
@@ -3911,11 +3999,11 @@ enum i40iw_status_code i40iw_process_bh(struct i40iw_sc_dev *dev)
*/
static u32 i40iw_iwarp_opcode(struct i40iw_aeqe_info *info, u8 *pkt)
{
- u16 *mpa;
+ __be16 *mpa;
u32 opcode = 0xffffffff;
if (info->q2_data_written) {
- mpa = (u16 *)pkt;
+ mpa = (__be16 *)pkt;
opcode = ntohs(mpa[1]) & 0xf;
}
return opcode;
@@ -3977,7 +4065,7 @@ static int i40iw_bld_terminate_hdr(struct i40iw_sc_qp *qp,
if (info->q2_data_written) {
/* Use data from offending packet to fill in ddp & rdma hdrs */
pkt = i40iw_locate_mpa(pkt);
- ddp_seg_len = ntohs(*(u16 *)pkt);
+ ddp_seg_len = ntohs(*(__be16 *)pkt);
if (ddp_seg_len) {
copy_len = 2;
termhdr->hdrct = DDP_LEN_FLAG;
@@ -4188,13 +4276,13 @@ void i40iw_terminate_connection(struct i40iw_sc_qp *qp, struct i40iw_aeqe_info *
void i40iw_terminate_received(struct i40iw_sc_qp *qp, struct i40iw_aeqe_info *info)
{
u8 *pkt = qp->q2_buf + Q2_BAD_FRAME_OFFSET;
- u32 *mpa;
+ __be32 *mpa;
u8 ddp_ctl;
u8 rdma_ctl;
u16 aeq_id = 0;
struct i40iw_terminate_hdr *termhdr;
- mpa = (u32 *)i40iw_locate_mpa(pkt);
+ mpa = (__be32 *)i40iw_locate_mpa(pkt);
if (info->q2_data_written) {
/* did not validate the frame - do it now */
ddp_ctl = (ntohl(mpa[0]) >> 8) & 0xff;
@@ -4559,17 +4647,18 @@ static struct i40iw_pd_ops iw_pd_ops = {
};
static struct i40iw_priv_qp_ops iw_priv_qp_ops = {
- i40iw_sc_qp_init,
- i40iw_sc_qp_create,
- i40iw_sc_qp_modify,
- i40iw_sc_qp_destroy,
- i40iw_sc_qp_flush_wqes,
- i40iw_sc_qp_upload_context,
- i40iw_sc_qp_setctx,
- i40iw_sc_send_lsmm,
- i40iw_sc_send_lsmm_nostag,
- i40iw_sc_send_rtt,
- i40iw_sc_post_wqe0,
+ .qp_init = i40iw_sc_qp_init,
+ .qp_create = i40iw_sc_qp_create,
+ .qp_modify = i40iw_sc_qp_modify,
+ .qp_destroy = i40iw_sc_qp_destroy,
+ .qp_flush_wqes = i40iw_sc_qp_flush_wqes,
+ .qp_upload_context = i40iw_sc_qp_upload_context,
+ .qp_setctx = i40iw_sc_qp_setctx,
+ .qp_send_lsmm = i40iw_sc_send_lsmm,
+ .qp_send_lsmm_nostag = i40iw_sc_send_lsmm_nostag,
+ .qp_send_rtt = i40iw_sc_send_rtt,
+ .qp_post_wqe0 = i40iw_sc_post_wqe0,
+ .iw_mr_fast_register = i40iw_sc_mr_fast_register
};
static struct i40iw_priv_cq_ops iw_priv_cq_ops = {
diff --git a/drivers/infiniband/hw/i40iw/i40iw_d.h b/drivers/infiniband/hw/i40iw/i40iw_d.h
index aab88d65f805..bd942da91a27 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_d.h
+++ b/drivers/infiniband/hw/i40iw/i40iw_d.h
@@ -1290,7 +1290,7 @@
/* wqe size considering 32 bytes per wqe*/
#define I40IWQP_SW_MIN_WQSIZE 4 /* 128 bytes */
-#define I40IWQP_SW_MAX_WQSIZE 16384 /* 524288 bytes */
+#define I40IWQP_SW_MAX_WQSIZE 2048 /* 2048 bytes */
#define I40IWQP_OP_RDMA_WRITE 0
#define I40IWQP_OP_RDMA_READ 1
@@ -1512,6 +1512,8 @@ enum i40iw_alignment {
I40IW_SD_BUF_ALIGNMENT = 0x100
};
+#define I40IW_WQE_SIZE_64 64
+
#define I40IW_QP_WQE_MIN_SIZE 32
#define I40IW_QP_WQE_MAX_SIZE 128
diff --git a/drivers/infiniband/hw/i40iw/i40iw_hw.c b/drivers/infiniband/hw/i40iw/i40iw_hw.c
index 9fd302425563..3ee0cad96bc6 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_hw.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_hw.c
@@ -106,7 +106,9 @@ u32 i40iw_initialize_hw_resources(struct i40iw_device *iwdev)
set_bit(2, iwdev->allocated_pds);
spin_lock_init(&iwdev->resource_lock);
- mrdrvbits = 24 - get_count_order(iwdev->max_mr);
+ spin_lock_init(&iwdev->qptable_lock);
+ /* stag index mask has a minimum of 14 bits */
+ mrdrvbits = 24 - max(get_count_order(iwdev->max_mr), 14);
iwdev->mr_stagmask = ~(((1 << mrdrvbits) - 1) << (32 - mrdrvbits));
return 0;
}
@@ -301,11 +303,15 @@ void i40iw_process_aeq(struct i40iw_device *iwdev)
"%s ae_id = 0x%x bool qp=%d qp_id = %d\n",
__func__, info->ae_id, info->qp, info->qp_cq_id);
if (info->qp) {
+ spin_lock_irqsave(&iwdev->qptable_lock, flags);
iwqp = iwdev->qp_table[info->qp_cq_id];
if (!iwqp) {
+ spin_unlock_irqrestore(&iwdev->qptable_lock, flags);
i40iw_pr_err("qp_id %d is already freed\n", info->qp_cq_id);
continue;
}
+ i40iw_add_ref(&iwqp->ibqp);
+ spin_unlock_irqrestore(&iwdev->qptable_lock, flags);
qp = &iwqp->sc_qp;
spin_lock_irqsave(&iwqp->lock, flags);
iwqp->hw_tcp_state = info->tcp_state;
@@ -411,6 +417,8 @@ void i40iw_process_aeq(struct i40iw_device *iwdev)
i40iw_terminate_connection(qp, info);
break;
}
+ if (info->qp)
+ i40iw_rem_ref(&iwqp->ibqp);
} while (1);
if (aeqcnt)
@@ -460,7 +468,7 @@ int i40iw_manage_apbvt(struct i40iw_device *iwdev, u16 accel_local_port, bool ad
*/
void i40iw_manage_arp_cache(struct i40iw_device *iwdev,
unsigned char *mac_addr,
- __be32 *ip_addr,
+ u32 *ip_addr,
bool ipv4,
u32 action)
{
@@ -481,7 +489,7 @@ void i40iw_manage_arp_cache(struct i40iw_device *iwdev,
cqp_info->cqp_cmd = OP_ADD_ARP_CACHE_ENTRY;
info = &cqp_info->in.u.add_arp_cache_entry.info;
memset(info, 0, sizeof(*info));
- info->arp_index = cpu_to_le32(arp_index);
+ info->arp_index = cpu_to_le16((u16)arp_index);
info->permanent = true;
ether_addr_copy(info->mac_addr, mac_addr);
cqp_info->in.u.add_arp_cache_entry.scratch = (uintptr_t)cqp_request;
diff --git a/drivers/infiniband/hw/i40iw/i40iw_main.c b/drivers/infiniband/hw/i40iw/i40iw_main.c
index 90e5af21737e..c963cad92f5a 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_main.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_main.c
@@ -270,7 +270,6 @@ static void i40iw_disable_irq(struct i40iw_sc_dev *dev,
i40iw_wr32(dev->hw, I40E_PFINT_DYN_CTLN(msix_vec->idx - 1), 0);
else
i40iw_wr32(dev->hw, I40E_VFINT_DYN_CTLN1(msix_vec->idx - 1), 0);
- synchronize_irq(msix_vec->irq);
free_irq(msix_vec->irq, dev_id);
}
@@ -1147,10 +1146,7 @@ static enum i40iw_status_code i40iw_alloc_set_mac_ipaddr(struct i40iw_device *iw
if (!status) {
status = i40iw_add_mac_ipaddr_entry(iwdev, macaddr,
(u8)iwdev->mac_ip_table_idx);
- if (!status)
- status = i40iw_add_mac_ipaddr_entry(iwdev, macaddr,
- (u8)iwdev->mac_ip_table_idx);
- else
+ if (status)
i40iw_del_macip_entry(iwdev, (u8)iwdev->mac_ip_table_idx);
}
return status;
@@ -1165,7 +1161,7 @@ static void i40iw_add_ipv6_addr(struct i40iw_device *iwdev)
struct net_device *ip_dev;
struct inet6_dev *idev;
struct inet6_ifaddr *ifp;
- __be32 local_ipaddr6[4];
+ u32 local_ipaddr6[4];
rcu_read_lock();
for_each_netdev_rcu(&init_net, ip_dev) {
@@ -1512,6 +1508,7 @@ static enum i40iw_status_code i40iw_setup_init_state(struct i40iw_handler *hdl,
I40IW_HMC_PROFILE_DEFAULT;
iwdev->max_rdma_vfs =
(iwdev->resource_profile != I40IW_HMC_PROFILE_DEFAULT) ? max_rdma_vfs : 0;
+ iwdev->max_enabled_vfs = iwdev->max_rdma_vfs;
iwdev->netdev = ldev->netdev;
hdl->client = client;
iwdev->mss = (!ldev->params.mtu) ? I40IW_DEFAULT_MSS : ldev->params.mtu - I40IW_MTU_TO_MSS;
@@ -1531,7 +1528,10 @@ static enum i40iw_status_code i40iw_setup_init_state(struct i40iw_handler *hdl,
goto exit;
iwdev->obj_next = iwdev->obj_mem;
iwdev->push_mode = push_mode;
+
init_waitqueue_head(&iwdev->vchnl_waitq);
+ init_waitqueue_head(&dev->vf_reqs);
+
status = i40iw_initialize_dev(iwdev, ldev);
exit:
if (status) {
@@ -1710,7 +1710,6 @@ static void i40iw_vf_reset(struct i40e_info *ldev, struct i40e_client *client, u
for (i = 0; i < I40IW_MAX_PE_ENABLED_VF_COUNT; i++) {
if (!dev->vf_dev[i] || (dev->vf_dev[i]->vf_id != vf_id))
continue;
-
/* free all resources allocated on behalf of vf */
tmp_vfdev = dev->vf_dev[i];
spin_lock_irqsave(&dev->dev_pestat.stats_lock, flags);
@@ -1819,8 +1818,6 @@ static int i40iw_virtchnl_receive(struct i40e_info *ldev,
dev = &hdl->device.sc_dev;
iwdev = dev->back_dev;
- i40iw_debug(dev, I40IW_DEBUG_VIRT, "msg %p, message length %u\n", msg, len);
-
if (dev->vchnl_if.vchnl_recv) {
ret_code = dev->vchnl_if.vchnl_recv(dev, vf_id, msg, len);
if (!dev->is_pf) {
@@ -1832,6 +1829,39 @@ static int i40iw_virtchnl_receive(struct i40e_info *ldev,
}
/**
+ * i40iw_vf_clear_to_send - wait to send virtual channel message
+ * @dev: iwarp device
+ * Wait until the virtual channel is clear
+ * before sending the next message
+ *
+ * Returns false on error
+ * Returns true if clear to send
+ */
+bool i40iw_vf_clear_to_send(struct i40iw_sc_dev *dev)
+{
+ struct i40iw_device *iwdev;
+ wait_queue_t wait;
+
+ iwdev = dev->back_dev;
+
+ if (!wq_has_sleeper(&dev->vf_reqs) &&
+ (atomic_read(&iwdev->vchnl_msgs) == 0))
+ return true; /* virtual channel is clear */
+
+ init_wait(&wait);
+ add_wait_queue_exclusive(&dev->vf_reqs, &wait);
+
+ if (!wait_event_timeout(dev->vf_reqs,
+ (atomic_read(&iwdev->vchnl_msgs) == 0),
+ I40IW_VCHNL_EVENT_TIMEOUT))
+ dev->vchnl_up = false;
+
+ remove_wait_queue(&dev->vf_reqs, &wait);
+
+ return dev->vchnl_up;
+}
+
+/**
* i40iw_virtchnl_send - send a message through the virtual channel
* @dev: iwarp device
* @vf_id: virtual function id associated with the message
@@ -1848,22 +1878,20 @@ static enum i40iw_status_code i40iw_virtchnl_send(struct i40iw_sc_dev *dev,
{
struct i40iw_device *iwdev;
struct i40e_info *ldev;
- enum i40iw_status_code ret_code = I40IW_ERR_BAD_PTR;
if (!dev || !dev->back_dev)
- return ret_code;
+ return I40IW_ERR_BAD_PTR;
iwdev = dev->back_dev;
ldev = iwdev->ldev;
if (ldev && ldev->ops && ldev->ops->virtchnl_send)
- ret_code = ldev->ops->virtchnl_send(ldev, &i40iw_client, vf_id, msg, len);
-
- return ret_code;
+ return ldev->ops->virtchnl_send(ldev, &i40iw_client, vf_id, msg, len);
+ return I40IW_ERR_BAD_PTR;
}
/* client interface functions */
-static struct i40e_client_ops i40e_ops = {
+static const struct i40e_client_ops i40e_ops = {
.open = i40iw_open,
.close = i40iw_close,
.l2_param_change = i40iw_l2param_change,
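
    i40iw_vf_clear_to_send() above gates the next virtual-channel message on the in-flight
    counter draining to zero, with a timeout that marks the channel down if it never does.
    A rough user-space analogue of that waiting pattern using a condition variable; the
    names and timeout handling are illustrative only:

    #include <pthread.h>
    #include <stdbool.h>
    #include <time.h>

    struct channel {
        pthread_mutex_t lock;
        pthread_cond_t cond;     /* signalled when msgs_in_flight drops to 0 */
        int msgs_in_flight;      /* analogue of iwdev->vchnl_msgs */
        bool up;                 /* analogue of dev->vchnl_up */
    };

    static bool clear_to_send(struct channel *ch, unsigned timeout_ms)
    {
        struct timespec ts;
        int rc = 0;

        clock_gettime(CLOCK_REALTIME, &ts);
        ts.tv_sec += timeout_ms / 1000;
        ts.tv_nsec += (long)(timeout_ms % 1000) * 1000000L;
        if (ts.tv_nsec >= 1000000000L) {
            ts.tv_sec++;
            ts.tv_nsec -= 1000000000L;
        }

        pthread_mutex_lock(&ch->lock);
        while (ch->msgs_in_flight && rc == 0)
            rc = pthread_cond_timedwait(&ch->cond, &ch->lock, &ts);
        if (ch->msgs_in_flight)      /* never drained: give up on the channel */
            ch->up = false;
        pthread_mutex_unlock(&ch->lock);

        return ch->up;
    }
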
diff --git a/drivers/infiniband/hw/i40iw/i40iw_osdep.h b/drivers/infiniband/hw/i40iw/i40iw_osdep.h
index 7e20493510e8..80f422bf3967 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_osdep.h
+++ b/drivers/infiniband/hw/i40iw/i40iw_osdep.h
@@ -172,6 +172,7 @@ struct i40iw_hw;
u8 __iomem *i40iw_get_hw_addr(void *dev);
void i40iw_ieq_mpa_crc_ae(struct i40iw_sc_dev *dev, struct i40iw_sc_qp *qp);
enum i40iw_status_code i40iw_vf_wait_vchnl_resp(struct i40iw_sc_dev *dev);
+bool i40iw_vf_clear_to_send(struct i40iw_sc_dev *dev);
enum i40iw_status_code i40iw_ieq_check_mpacrc(struct shash_desc *desc, void *addr,
u32 length, u32 value);
struct i40iw_sc_qp *i40iw_ieq_get_qp(struct i40iw_sc_dev *dev, struct i40iw_puda_buf *buf);
diff --git a/drivers/infiniband/hw/i40iw/i40iw_pble.c b/drivers/infiniband/hw/i40iw/i40iw_pble.c
index ded853d2fad8..85993dc44f6e 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_pble.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_pble.c
@@ -404,13 +404,14 @@ static enum i40iw_status_code add_pble_pool(struct i40iw_sc_dev *dev,
sd_entry->u.pd_table.pd_page_addr.pa : sd_entry->u.bp.addr.pa;
if (sd_entry->valid)
return 0;
- if (dev->is_pf)
+ if (dev->is_pf) {
ret_code = i40iw_hmc_sd_one(dev, hmc_info->hmc_fn_id,
sd_reg_val, idx->sd_idx,
sd_entry->entry_type, true);
- if (ret_code) {
- i40iw_pr_err("cqp cmd failed for sd (pbles)\n");
- goto error;
+ if (ret_code) {
+ i40iw_pr_err("cqp cmd failed for sd (pbles)\n");
+ goto error;
+ }
}
sd_entry->valid = true;
diff --git a/drivers/infiniband/hw/i40iw/i40iw_puda.c b/drivers/infiniband/hw/i40iw/i40iw_puda.c
index 8eb400d8a7a0..e9c6e82af9c7 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_puda.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_puda.c
@@ -1194,7 +1194,7 @@ static enum i40iw_status_code i40iw_ieq_process_buf(struct i40iw_puda_rsrc *ieq,
ioffset = (u16)(buf->data - (u8 *)buf->mem.va);
while (datalen) {
- fpdu_len = i40iw_ieq_get_fpdu_length(ntohs(*(u16 *)datap));
+ fpdu_len = i40iw_ieq_get_fpdu_length(ntohs(*(__be16 *)datap));
if (fpdu_len > pfpdu->max_fpdu_data) {
i40iw_debug(ieq->dev, I40IW_DEBUG_IEQ,
"%s: error bad fpdu_len\n", __func__);
diff --git a/drivers/infiniband/hw/i40iw/i40iw_status.h b/drivers/infiniband/hw/i40iw/i40iw_status.h
index b0110c15e044..91c421762f06 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_status.h
+++ b/drivers/infiniband/hw/i40iw/i40iw_status.h
@@ -95,6 +95,7 @@ enum i40iw_status_code {
I40IW_ERR_INVALID_MAC_ADDR = -65,
I40IW_ERR_BAD_STAG = -66,
I40IW_ERR_CQ_COMPL_ERROR = -67,
+ I40IW_ERR_QUEUE_DESTROYED = -68
};
#endif
diff --git a/drivers/infiniband/hw/i40iw/i40iw_type.h b/drivers/infiniband/hw/i40iw/i40iw_type.h
index edb3a8c8267a..16cc61720b53 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_type.h
+++ b/drivers/infiniband/hw/i40iw/i40iw_type.h
@@ -479,16 +479,17 @@ struct i40iw_sc_dev {
struct i40iw_virt_mem ieq_mem;
struct i40iw_puda_rsrc *ieq;
- struct i40iw_vf_cqp_ops *iw_vf_cqp_ops;
+ const struct i40iw_vf_cqp_ops *iw_vf_cqp_ops;
struct i40iw_hmc_fpm_misc hmc_fpm_misc;
u16 qs_handle;
- u32 debug_mask;
+ u32 debug_mask;
u16 exception_lan_queue;
u8 hmc_fn_id;
bool is_pf;
bool vchnl_up;
u8 vf_id;
+ wait_queue_head_t vf_reqs;
u64 cqp_cmd_stats[OP_SIZE_CQP_STAT_ARRAY];
struct i40iw_vchnl_vf_msg_buffer vchnl_vf_msg_buf;
u8 hw_rev;
@@ -889,8 +890,8 @@ struct i40iw_qhash_table_info {
u32 qp_num;
u32 dest_ip[4];
u32 src_ip[4];
- u32 dest_port;
- u32 src_port;
+ u16 dest_port;
+ u16 src_port;
};
struct i40iw_local_mac_ipaddr_entry_info {
@@ -1040,6 +1041,9 @@ struct i40iw_priv_qp_ops {
void (*qp_send_lsmm_nostag)(struct i40iw_sc_qp *, void *, u32);
void (*qp_send_rtt)(struct i40iw_sc_qp *, bool);
enum i40iw_status_code (*qp_post_wqe0)(struct i40iw_sc_qp *, u8);
+ enum i40iw_status_code (*iw_mr_fast_register)(struct i40iw_sc_qp *,
+ struct i40iw_fast_reg_stag_info *,
+ bool);
};
struct i40iw_priv_cq_ops {
@@ -1108,7 +1112,7 @@ struct i40iw_hmc_ops {
enum i40iw_status_code (*parse_fpm_query_buf)(u64 *, struct i40iw_hmc_info *,
struct i40iw_hmc_fpm_misc *);
enum i40iw_status_code (*configure_iw_fpm)(struct i40iw_sc_dev *, u8);
- enum i40iw_status_code (*parse_fpm_commit_buf)(u64 *, struct i40iw_hmc_obj_info *);
+ enum i40iw_status_code (*parse_fpm_commit_buf)(u64 *, struct i40iw_hmc_obj_info *, u32 *sd);
enum i40iw_status_code (*create_hmc_object)(struct i40iw_sc_dev *dev,
struct i40iw_hmc_create_obj_info *);
enum i40iw_status_code (*del_hmc_object)(struct i40iw_sc_dev *dev,
diff --git a/drivers/infiniband/hw/i40iw/i40iw_uk.c b/drivers/infiniband/hw/i40iw/i40iw_uk.c
index f78c3dc8bdb2..e35faea88c13 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_uk.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_uk.c
@@ -56,6 +56,9 @@ static enum i40iw_status_code i40iw_nop_1(struct i40iw_qp_uk *qp)
wqe_idx = I40IW_RING_GETCURRENT_HEAD(qp->sq_ring);
wqe = qp->sq_base[wqe_idx].elem;
+
+ qp->sq_wrtrk_array[wqe_idx].wqe_size = I40IW_QP_WQE_MIN_SIZE;
+
peek_head = (qp->sq_ring.head + 1) % qp->sq_ring.size;
wqe_0 = qp->sq_base[peek_head].elem;
if (peek_head)
@@ -130,7 +133,10 @@ static void i40iw_qp_ring_push_db(struct i40iw_qp_uk *qp, u32 wqe_idx)
*/
u64 *i40iw_qp_get_next_send_wqe(struct i40iw_qp_uk *qp,
u32 *wqe_idx,
- u8 wqe_size)
+ u8 wqe_size,
+ u32 total_size,
+ u64 wr_id
+ )
{
u64 *wqe = NULL;
u64 wqe_ptr;
@@ -159,6 +165,17 @@ u64 *i40iw_qp_get_next_send_wqe(struct i40iw_qp_uk *qp,
if (!*wqe_idx)
qp->swqe_polarity = !qp->swqe_polarity;
}
+
+ if (((*wqe_idx & 3) == 1) && (wqe_size == I40IW_WQE_SIZE_64)) {
+ i40iw_nop_1(qp);
+ I40IW_RING_MOVE_HEAD(qp->sq_ring, ret_code);
+ if (ret_code)
+ return NULL;
+ *wqe_idx = I40IW_RING_GETCURRENT_HEAD(qp->sq_ring);
+ if (!*wqe_idx)
+ qp->swqe_polarity = !qp->swqe_polarity;
+ }
+
for (i = 0; i < wqe_size / I40IW_QP_WQE_MIN_SIZE; i++) {
I40IW_RING_MOVE_HEAD(qp->sq_ring, ret_code);
if (ret_code)
@@ -169,8 +186,15 @@ u64 *i40iw_qp_get_next_send_wqe(struct i40iw_qp_uk *qp,
peek_head = I40IW_RING_GETCURRENT_HEAD(qp->sq_ring);
wqe_0 = qp->sq_base[peek_head].elem;
- if (peek_head & 0x3)
- wqe_0[3] = LS_64(!qp->swqe_polarity, I40IWQPSQ_VALID);
+
+ if (((peek_head & 3) == 1) || ((peek_head & 3) == 3)) {
+ if (RS_64(wqe_0[3], I40IWQPSQ_VALID) != !qp->swqe_polarity)
+ wqe_0[3] = LS_64(!qp->swqe_polarity, I40IWQPSQ_VALID);
+ }
+
+ qp->sq_wrtrk_array[*wqe_idx].wrid = wr_id;
+ qp->sq_wrtrk_array[*wqe_idx].wr_len = total_size;
+ qp->sq_wrtrk_array[*wqe_idx].wqe_size = wqe_size;
return wqe;
}
@@ -249,12 +273,9 @@ static enum i40iw_status_code i40iw_rdma_write(struct i40iw_qp_uk *qp,
if (ret_code)
return ret_code;
- wqe = i40iw_qp_get_next_send_wqe(qp, &wqe_idx, wqe_size);
+ wqe = i40iw_qp_get_next_send_wqe(qp, &wqe_idx, wqe_size, total_size, info->wr_id);
if (!wqe)
return I40IW_ERR_QP_TOOMANY_WRS_POSTED;
-
- qp->sq_wrtrk_array[wqe_idx].wrid = info->wr_id;
- qp->sq_wrtrk_array[wqe_idx].wr_len = total_size;
set_64bit_val(wqe, 16,
LS_64(op_info->rem_addr.tag_off, I40IWQPSQ_FRAG_TO));
if (!op_info->rem_addr.stag)
@@ -309,12 +330,9 @@ static enum i40iw_status_code i40iw_rdma_read(struct i40iw_qp_uk *qp,
ret_code = i40iw_fragcnt_to_wqesize_sq(1, &wqe_size);
if (ret_code)
return ret_code;
- wqe = i40iw_qp_get_next_send_wqe(qp, &wqe_idx, wqe_size);
+ wqe = i40iw_qp_get_next_send_wqe(qp, &wqe_idx, wqe_size, op_info->lo_addr.len, info->wr_id);
if (!wqe)
return I40IW_ERR_QP_TOOMANY_WRS_POSTED;
-
- qp->sq_wrtrk_array[wqe_idx].wrid = info->wr_id;
- qp->sq_wrtrk_array[wqe_idx].wr_len = op_info->lo_addr.len;
local_fence |= info->local_fence;
set_64bit_val(wqe, 16, LS_64(op_info->rem_addr.tag_off, I40IWQPSQ_FRAG_TO));
@@ -366,13 +384,11 @@ static enum i40iw_status_code i40iw_send(struct i40iw_qp_uk *qp,
if (ret_code)
return ret_code;
- wqe = i40iw_qp_get_next_send_wqe(qp, &wqe_idx, wqe_size);
+ wqe = i40iw_qp_get_next_send_wqe(qp, &wqe_idx, wqe_size, total_size, info->wr_id);
if (!wqe)
return I40IW_ERR_QP_TOOMANY_WRS_POSTED;
read_fence |= info->read_fence;
- qp->sq_wrtrk_array[wqe_idx].wrid = info->wr_id;
- qp->sq_wrtrk_array[wqe_idx].wr_len = total_size;
set_64bit_val(wqe, 16, 0);
header = LS_64(stag_to_inv, I40IWQPSQ_REMSTAG) |
LS_64(info->op_type, I40IWQPSQ_OPCODE) |
@@ -427,13 +443,11 @@ static enum i40iw_status_code i40iw_inline_rdma_write(struct i40iw_qp_uk *qp,
if (ret_code)
return ret_code;
- wqe = i40iw_qp_get_next_send_wqe(qp, &wqe_idx, wqe_size);
+ wqe = i40iw_qp_get_next_send_wqe(qp, &wqe_idx, wqe_size, op_info->len, info->wr_id);
if (!wqe)
return I40IW_ERR_QP_TOOMANY_WRS_POSTED;
read_fence |= info->read_fence;
- qp->sq_wrtrk_array[wqe_idx].wrid = info->wr_id;
- qp->sq_wrtrk_array[wqe_idx].wr_len = op_info->len;
set_64bit_val(wqe, 16,
LS_64(op_info->rem_addr.tag_off, I40IWQPSQ_FRAG_TO));
@@ -507,14 +521,11 @@ static enum i40iw_status_code i40iw_inline_send(struct i40iw_qp_uk *qp,
if (ret_code)
return ret_code;
- wqe = i40iw_qp_get_next_send_wqe(qp, &wqe_idx, wqe_size);
+ wqe = i40iw_qp_get_next_send_wqe(qp, &wqe_idx, wqe_size, op_info->len, info->wr_id);
if (!wqe)
return I40IW_ERR_QP_TOOMANY_WRS_POSTED;
read_fence |= info->read_fence;
-
- qp->sq_wrtrk_array[wqe_idx].wrid = info->wr_id;
- qp->sq_wrtrk_array[wqe_idx].wr_len = op_info->len;
header = LS_64(stag_to_inv, I40IWQPSQ_REMSTAG) |
LS_64(info->op_type, I40IWQPSQ_OPCODE) |
LS_64(op_info->len, I40IWQPSQ_INLINEDATALEN) |
@@ -574,12 +585,9 @@ static enum i40iw_status_code i40iw_stag_local_invalidate(struct i40iw_qp_uk *qp
op_info = &info->op.inv_local_stag;
local_fence = info->local_fence;
- wqe = i40iw_qp_get_next_send_wqe(qp, &wqe_idx, I40IW_QP_WQE_MIN_SIZE);
+ wqe = i40iw_qp_get_next_send_wqe(qp, &wqe_idx, I40IW_QP_WQE_MIN_SIZE, 0, info->wr_id);
if (!wqe)
return I40IW_ERR_QP_TOOMANY_WRS_POSTED;
-
- qp->sq_wrtrk_array[wqe_idx].wrid = info->wr_id;
- qp->sq_wrtrk_array[wqe_idx].wr_len = 0;
set_64bit_val(wqe, 0, 0);
set_64bit_val(wqe, 8,
LS_64(op_info->target_stag, I40IWQPSQ_LOCSTAG));
@@ -619,12 +627,9 @@ static enum i40iw_status_code i40iw_mw_bind(struct i40iw_qp_uk *qp,
op_info = &info->op.bind_window;
local_fence |= info->local_fence;
- wqe = i40iw_qp_get_next_send_wqe(qp, &wqe_idx, I40IW_QP_WQE_MIN_SIZE);
+ wqe = i40iw_qp_get_next_send_wqe(qp, &wqe_idx, I40IW_QP_WQE_MIN_SIZE, 0, info->wr_id);
if (!wqe)
return I40IW_ERR_QP_TOOMANY_WRS_POSTED;
-
- qp->sq_wrtrk_array[wqe_idx].wrid = info->wr_id;
- qp->sq_wrtrk_array[wqe_idx].wr_len = 0;
set_64bit_val(wqe, 0, (uintptr_t)op_info->va);
set_64bit_val(wqe, 8,
LS_64(op_info->mr_stag, I40IWQPSQ_PARENTMRSTAG) |
@@ -760,7 +765,7 @@ static enum i40iw_status_code i40iw_cq_poll_completion(struct i40iw_cq_uk *cq,
enum i40iw_status_code ret_code2 = 0;
bool move_cq_head = true;
u8 polarity;
- u8 addl_frag_cnt, addl_wqes = 0;
+ u8 addl_wqes = 0;
if (cq->avoid_mem_cflct)
cqe = (u64 *)I40IW_GET_CURRENT_EXTENDED_CQ_ELEMENT(cq);
@@ -797,6 +802,10 @@ static enum i40iw_status_code i40iw_cq_poll_completion(struct i40iw_cq_uk *cq,
info->is_srq = (bool)RS_64(qword3, I40IWCQ_SRQ);
qp = (struct i40iw_qp_uk *)(unsigned long)comp_ctx;
+ if (!qp) {
+ ret_code = I40IW_ERR_QUEUE_DESTROYED;
+ goto exit;
+ }
wqe_idx = (u32)RS_64(qword3, I40IW_CQ_WQEIDX);
info->qp_handle = (i40iw_qp_handle)(unsigned long)qp;
@@ -827,11 +836,8 @@ static enum i40iw_status_code i40iw_cq_poll_completion(struct i40iw_cq_uk *cq,
info->op_type = (u8)RS_64(qword3, I40IWCQ_OP);
sw_wqe = qp->sq_base[wqe_idx].elem;
get_64bit_val(sw_wqe, 24, &wqe_qword);
- addl_frag_cnt =
- (u8)RS_64(wqe_qword, I40IWQPSQ_ADDFRAGCNT);
- i40iw_fragcnt_to_wqesize_sq(addl_frag_cnt + 1, &addl_wqes);
- addl_wqes = (addl_wqes / I40IW_QP_WQE_MIN_SIZE);
+ addl_wqes = qp->sq_wrtrk_array[wqe_idx].wqe_size / I40IW_QP_WQE_MIN_SIZE;
I40IW_RING_SET_TAIL(qp->sq_ring, (wqe_idx + addl_wqes));
} else {
do {
@@ -843,9 +849,7 @@ static enum i40iw_status_code i40iw_cq_poll_completion(struct i40iw_cq_uk *cq,
get_64bit_val(sw_wqe, 24, &wqe_qword);
op_type = (u8)RS_64(wqe_qword, I40IWQPSQ_OPCODE);
info->op_type = op_type;
- addl_frag_cnt = (u8)RS_64(wqe_qword, I40IWQPSQ_ADDFRAGCNT);
- i40iw_fragcnt_to_wqesize_sq(addl_frag_cnt + 1, &addl_wqes);
- addl_wqes = (addl_wqes / I40IW_QP_WQE_MIN_SIZE);
+ addl_wqes = qp->sq_wrtrk_array[tail].wqe_size / I40IW_QP_WQE_MIN_SIZE;
I40IW_RING_SET_TAIL(qp->sq_ring, (tail + addl_wqes));
if (op_type != I40IWQP_OP_NOP) {
info->wr_id = qp->sq_wrtrk_array[tail].wrid;
@@ -859,6 +863,7 @@ static enum i40iw_status_code i40iw_cq_poll_completion(struct i40iw_cq_uk *cq,
ret_code = 0;
+exit:
if (!ret_code &&
(info->comp_status == I40IW_COMPL_STATUS_FLUSHED))
if (pring && (I40IW_RING_MORE_WORK(*pring)))
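The completion path above now advances the SQ ring tail by the quanta count recorded at post time (sq_wrtrk_array[].wqe_size) instead of re-decoding the additional fragment count from the WQE. A minimal, compilable userspace model of that arithmetic; WQE_MIN_SIZE and struct wr_trk are illustrative stand-ins for I40IW_QP_WQE_MIN_SIZE and i40iw_sq_uk_wr_trk_info, not the driver's types:

#include <stdint.h>
#include <stdio.h>

#define WQE_MIN_SIZE 32u

struct wr_trk {
	uint64_t wrid;
	uint32_t wr_len;
	uint8_t  wqe_size;   /* recorded when the WQE was posted */
};

static uint32_t advance_tail(uint32_t tail, const struct wr_trk *trk)
{
	uint32_t quanta = trk->wqe_size / WQE_MIN_SIZE; /* 32-byte quanta consumed */

	return tail + quanta; /* the driver wraps this with its ring macros */
}

int main(void)
{
	struct wr_trk trk = { .wrid = 7, .wr_len = 100, .wqe_size = 64 };

	printf("new tail = %u\n", advance_tail(10, &trk)); /* prints 12 */
	return 0;
}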
@@ -893,19 +898,21 @@ static enum i40iw_status_code i40iw_cq_poll_completion(struct i40iw_cq_uk *cq,
* i40iw_get_wqe_shift - get shift count for maximum wqe size
* @wqdepth: depth of wq required.
* @sge: Maximum Scatter Gather Elements wqe
+ * @inline_data: Maximum inline data size
* @shift: Returns the shift needed based on sge
*
- * Shift can be used to left shift the wqe size based on sge.
- * If sge, == 1, shift =0 (wqe_size of 32 bytes), for sge=2 and 3, shift =1
- * (64 bytes wqes) and 2 otherwise (128 bytes wqe).
+ * Shift can be used to left shift the wqe size based on number of SGEs and inline data size.
+ * For 1 SGE and inline data <= 16, shift = 0 (wqe size of 32 bytes).
+ * For 2 or 3 SGEs or inline data <= 48, shift = 1 (wqe size of 64 bytes).
+ * Shift of 2 otherwise (wqe size of 128 bytes).
*/
-enum i40iw_status_code i40iw_get_wqe_shift(u32 wqdepth, u8 sge, u8 *shift)
+enum i40iw_status_code i40iw_get_wqe_shift(u32 wqdepth, u32 sge, u32 inline_data, u8 *shift)
{
u32 size;
*shift = 0;
- if (sge > 1)
- *shift = (sge < 4) ? 1 : 2;
+ if (sge > 1 || inline_data > 16)
+ *shift = (sge < 4 && inline_data <= 48) ? 1 : 2;
/* check if wqdepth is multiple of 2 or not */
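A small, compilable sketch of the selection rule documented above (the 16- and 48-byte thresholds are the inline payloads that fit a 32-byte and a 64-byte WQE); this models the logic added in the hunk and is not the kernel function itself:

#include <stdint.h>
#include <stdio.h>

/* Mirrors the shift selection above: 0 -> 32B WQE, 1 -> 64B, 2 -> 128B. */
static uint8_t wqe_shift(uint32_t sge, uint32_t inline_data)
{
	uint8_t shift = 0;

	if (sge > 1 || inline_data > 16)
		shift = (sge < 4 && inline_data <= 48) ? 1 : 2;
	return shift;
}

int main(void)
{
	printf("1 SGE,  0B inline -> %u\n", wqe_shift(1, 0));  /* 0 */
	printf("3 SGE,  0B inline -> %u\n", wqe_shift(3, 0));  /* 1 */
	printf("1 SGE, 48B inline -> %u\n", wqe_shift(1, 48)); /* 1 */
	printf("4 SGE,  0B inline -> %u\n", wqe_shift(4, 0));  /* 2 */
	return 0;
}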
@@ -968,11 +975,11 @@ enum i40iw_status_code i40iw_qp_uk_init(struct i40iw_qp_uk *qp,
if (info->max_rq_frag_cnt > I40IW_MAX_WQ_FRAGMENT_COUNT)
return I40IW_ERR_INVALID_FRAG_COUNT;
- ret_code = i40iw_get_wqe_shift(info->sq_size, info->max_sq_frag_cnt, &sqshift);
+ ret_code = i40iw_get_wqe_shift(info->sq_size, info->max_sq_frag_cnt, info->max_inline_data, &sqshift);
if (ret_code)
return ret_code;
- ret_code = i40iw_get_wqe_shift(info->rq_size, info->max_rq_frag_cnt, &rqshift);
+ ret_code = i40iw_get_wqe_shift(info->rq_size, info->max_rq_frag_cnt, 0, &rqshift);
if (ret_code)
return ret_code;
@@ -1097,12 +1104,9 @@ enum i40iw_status_code i40iw_nop(struct i40iw_qp_uk *qp,
u64 header, *wqe;
u32 wqe_idx;
- wqe = i40iw_qp_get_next_send_wqe(qp, &wqe_idx, I40IW_QP_WQE_MIN_SIZE);
+ wqe = i40iw_qp_get_next_send_wqe(qp, &wqe_idx, I40IW_QP_WQE_MIN_SIZE, 0, wr_id);
if (!wqe)
return I40IW_ERR_QP_TOOMANY_WRS_POSTED;
-
- qp->sq_wrtrk_array[wqe_idx].wrid = wr_id;
- qp->sq_wrtrk_array[wqe_idx].wr_len = 0;
set_64bit_val(wqe, 0, 0);
set_64bit_val(wqe, 8, 0);
set_64bit_val(wqe, 16, 0);
@@ -1125,7 +1129,7 @@ enum i40iw_status_code i40iw_nop(struct i40iw_qp_uk *qp,
* @frag_cnt: number of fragments
* @wqe_size: size of sq wqe returned
*/
-enum i40iw_status_code i40iw_fragcnt_to_wqesize_sq(u8 frag_cnt, u8 *wqe_size)
+enum i40iw_status_code i40iw_fragcnt_to_wqesize_sq(u32 frag_cnt, u8 *wqe_size)
{
switch (frag_cnt) {
case 0:
@@ -1156,7 +1160,7 @@ enum i40iw_status_code i40iw_fragcnt_to_wqesize_sq(u8 frag_cnt, u8 *wqe_size)
* @frag_cnt: number of fragments
* @wqe_size: size of rq wqe returned
*/
-enum i40iw_status_code i40iw_fragcnt_to_wqesize_rq(u8 frag_cnt, u8 *wqe_size)
+enum i40iw_status_code i40iw_fragcnt_to_wqesize_rq(u32 frag_cnt, u8 *wqe_size)
{
switch (frag_cnt) {
case 0:
diff --git a/drivers/infiniband/hw/i40iw/i40iw_user.h b/drivers/infiniband/hw/i40iw/i40iw_user.h
index 5cd971bb8cc7..4627646fe8cd 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_user.h
+++ b/drivers/infiniband/hw/i40iw/i40iw_user.h
@@ -61,7 +61,7 @@ enum i40iw_device_capabilities_const {
I40IW_MAX_CQ_SIZE = 1048575,
I40IW_MAX_AEQ_ALLOCATE_COUNT = 255,
I40IW_DB_ID_ZERO = 0,
- I40IW_MAX_WQ_FRAGMENT_COUNT = 6,
+ I40IW_MAX_WQ_FRAGMENT_COUNT = 3,
I40IW_MAX_SGE_RD = 1,
I40IW_MAX_OUTBOUND_MESSAGE_SIZE = 2147483647,
I40IW_MAX_INBOUND_MESSAGE_SIZE = 2147483647,
@@ -70,8 +70,8 @@ enum i40iw_device_capabilities_const {
I40IW_MAX_VF_FPM_ID = 47,
I40IW_MAX_VF_PER_PF = 127,
I40IW_MAX_SQ_PAYLOAD_SIZE = 2145386496,
- I40IW_MAX_INLINE_DATA_SIZE = 112,
- I40IW_MAX_PUSHMODE_INLINE_DATA_SIZE = 112,
+ I40IW_MAX_INLINE_DATA_SIZE = 48,
+ I40IW_MAX_PUSHMODE_INLINE_DATA_SIZE = 48,
I40IW_MAX_IRD_SIZE = 32,
I40IW_QPCTX_ENCD_MAXIRD = 3,
I40IW_MAX_WQ_ENTRIES = 2048,
@@ -102,6 +102,8 @@ enum i40iw_device_capabilities_const {
#define I40IW_STAG_INDEX_FROM_STAG(stag) (((stag) && 0xFFFFFF00) >> 8)
+#define I40IW_MAX_MR_SIZE 0x10000000000L
+
struct i40iw_qp_uk;
struct i40iw_cq_uk;
struct i40iw_srq_uk;
@@ -198,7 +200,7 @@ enum i40iw_completion_notify {
struct i40iw_post_send {
i40iw_sgl sg_list;
- u8 num_sges;
+ u32 num_sges;
};
struct i40iw_post_inline_send {
@@ -220,7 +222,7 @@ struct i40iw_post_inline_send_w_inv {
struct i40iw_rdma_write {
i40iw_sgl lo_sg_list;
- u8 num_lo_sges;
+ u32 num_lo_sges;
struct i40iw_sge rem_addr;
};
@@ -345,7 +347,9 @@ struct i40iw_dev_uk {
struct i40iw_sq_uk_wr_trk_info {
u64 wrid;
- u64 wr_len;
+ u32 wr_len;
+ u8 wqe_size;
+ u8 reserved[3];
};
struct i40iw_qp_quanta {
@@ -367,6 +371,8 @@ struct i40iw_qp_uk {
u32 qp_id;
u32 sq_size;
u32 rq_size;
+ u32 max_sq_frag_cnt;
+ u32 max_rq_frag_cnt;
struct i40iw_qp_uk_ops ops;
bool use_srq;
u8 swqe_polarity;
@@ -374,8 +380,6 @@ struct i40iw_qp_uk {
u8 rwqe_polarity;
u8 rq_wqe_size;
u8 rq_wqe_size_multiplier;
- u8 max_sq_frag_cnt;
- u8 max_rq_frag_cnt;
bool deferred_flag;
};
@@ -404,8 +408,9 @@ struct i40iw_qp_uk_init_info {
u32 qp_id;
u32 sq_size;
u32 rq_size;
- u8 max_sq_frag_cnt;
- u8 max_rq_frag_cnt;
+ u32 max_sq_frag_cnt;
+ u32 max_rq_frag_cnt;
+ u32 max_inline_data;
};
@@ -422,7 +427,10 @@ void i40iw_device_init_uk(struct i40iw_dev_uk *dev);
void i40iw_qp_post_wr(struct i40iw_qp_uk *qp);
u64 *i40iw_qp_get_next_send_wqe(struct i40iw_qp_uk *qp, u32 *wqe_idx,
- u8 wqe_size);
+ u8 wqe_size,
+ u32 total_size,
+ u64 wr_id
+ );
u64 *i40iw_qp_get_next_recv_wqe(struct i40iw_qp_uk *qp, u32 *wqe_idx);
u64 *i40iw_qp_get_next_srq_wqe(struct i40iw_srq_uk *srq, u32 *wqe_idx);
@@ -434,9 +442,9 @@ enum i40iw_status_code i40iw_qp_uk_init(struct i40iw_qp_uk *qp,
void i40iw_clean_cq(void *queue, struct i40iw_cq_uk *cq);
enum i40iw_status_code i40iw_nop(struct i40iw_qp_uk *qp, u64 wr_id,
bool signaled, bool post_sq);
-enum i40iw_status_code i40iw_fragcnt_to_wqesize_sq(u8 frag_cnt, u8 *wqe_size);
-enum i40iw_status_code i40iw_fragcnt_to_wqesize_rq(u8 frag_cnt, u8 *wqe_size);
+enum i40iw_status_code i40iw_fragcnt_to_wqesize_sq(u32 frag_cnt, u8 *wqe_size);
+enum i40iw_status_code i40iw_fragcnt_to_wqesize_rq(u32 frag_cnt, u8 *wqe_size);
enum i40iw_status_code i40iw_inline_data_size_to_wqesize(u32 data_size,
u8 *wqe_size);
-enum i40iw_status_code i40iw_get_wqe_shift(u32 wqdepth, u8 sge, u8 *shift);
+enum i40iw_status_code i40iw_get_wqe_shift(u32 wqdepth, u32 sge, u32 inline_data, u8 *shift);
#endif
diff --git a/drivers/infiniband/hw/i40iw/i40iw_utils.c b/drivers/infiniband/hw/i40iw/i40iw_utils.c
index 1ceec81bd8eb..0e8db0a35141 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_utils.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_utils.c
@@ -59,7 +59,7 @@
* @action: modify, delete or add
*/
int i40iw_arp_table(struct i40iw_device *iwdev,
- __be32 *ip_addr,
+ u32 *ip_addr,
bool ipv4,
u8 *mac_addr,
u32 action)
@@ -152,7 +152,7 @@ int i40iw_inetaddr_event(struct notifier_block *notifier,
struct net_device *upper_dev;
struct i40iw_device *iwdev;
struct i40iw_handler *hdl;
- __be32 local_ipaddr;
+ u32 local_ipaddr;
hdl = i40iw_find_netdev(event_netdev);
if (!hdl)
@@ -167,11 +167,10 @@ int i40iw_inetaddr_event(struct notifier_block *notifier,
switch (event) {
case NETDEV_DOWN:
if (upper_dev)
- local_ipaddr =
- ((struct in_device *)upper_dev->ip_ptr)->ifa_list->ifa_address;
+ local_ipaddr = ntohl(
+ ((struct in_device *)upper_dev->ip_ptr)->ifa_list->ifa_address);
else
- local_ipaddr = ifa->ifa_address;
- local_ipaddr = ntohl(local_ipaddr);
+ local_ipaddr = ntohl(ifa->ifa_address);
i40iw_manage_arp_cache(iwdev,
netdev->dev_addr,
&local_ipaddr,
@@ -180,11 +179,10 @@ int i40iw_inetaddr_event(struct notifier_block *notifier,
return NOTIFY_OK;
case NETDEV_UP:
if (upper_dev)
- local_ipaddr =
- ((struct in_device *)upper_dev->ip_ptr)->ifa_list->ifa_address;
+ local_ipaddr = ntohl(
+ ((struct in_device *)upper_dev->ip_ptr)->ifa_list->ifa_address);
else
- local_ipaddr = ifa->ifa_address;
- local_ipaddr = ntohl(local_ipaddr);
+ local_ipaddr = ntohl(ifa->ifa_address);
i40iw_manage_arp_cache(iwdev,
netdev->dev_addr,
&local_ipaddr,
@@ -194,12 +192,11 @@ int i40iw_inetaddr_event(struct notifier_block *notifier,
case NETDEV_CHANGEADDR:
/* Add the address to the IP table */
if (upper_dev)
- local_ipaddr =
- ((struct in_device *)upper_dev->ip_ptr)->ifa_list->ifa_address;
+ local_ipaddr = ntohl(
+ ((struct in_device *)upper_dev->ip_ptr)->ifa_list->ifa_address);
else
- local_ipaddr = ifa->ifa_address;
+ local_ipaddr = ntohl(ifa->ifa_address);
- local_ipaddr = ntohl(local_ipaddr);
i40iw_manage_arp_cache(iwdev,
netdev->dev_addr,
&local_ipaddr,
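The three hunks above all reduce to the same pattern: ifa_address is a big-endian __be32, and the driver now keeps a host-order copy for the ARP cache at the point of assignment. A quick userspace check of that conversion (the address value is only an example):

#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t be_addr = htonl(0xc0a80001);   /* 192.168.0.1 as it appears in ifa_address */
	uint32_t local_ipaddr = ntohl(be_addr); /* host-order value handed to the ARP cache */

	printf("local_ipaddr = 0x%08x\n", local_ipaddr); /* 0xc0a80001 on any endianness */
	return 0;
}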
@@ -227,7 +224,7 @@ int i40iw_inet6addr_event(struct notifier_block *notifier,
struct net_device *netdev;
struct i40iw_device *iwdev;
struct i40iw_handler *hdl;
- __be32 local_ipaddr6[4];
+ u32 local_ipaddr6[4];
hdl = i40iw_find_netdev(event_netdev);
if (!hdl)
@@ -506,14 +503,19 @@ void i40iw_rem_ref(struct ib_qp *ibqp)
struct cqp_commands_info *cqp_info;
struct i40iw_device *iwdev;
u32 qp_num;
+ unsigned long flags;
iwqp = to_iwqp(ibqp);
- if (!atomic_dec_and_test(&iwqp->refcount))
+ iwdev = iwqp->iwdev;
+ spin_lock_irqsave(&iwdev->qptable_lock, flags);
+ if (!atomic_dec_and_test(&iwqp->refcount)) {
+ spin_unlock_irqrestore(&iwdev->qptable_lock, flags);
return;
+ }
- iwdev = iwqp->iwdev;
qp_num = iwqp->ibqp.qp_num;
iwdev->qp_table[qp_num] = NULL;
+ spin_unlock_irqrestore(&iwdev->qptable_lock, flags);
cqp_request = i40iw_get_cqp_request(&iwdev->cqp, false);
if (!cqp_request)
return;
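The point of the hunk above is ordering: the final reference drop and the qp_table[] clear now happen under the same qptable_lock, so a concurrent lookup either finds a live QP or no entry at all. A compilable userspace model of that pattern, with a pthread mutex and a C11 atomic standing in for the spinlock and refcount (names are illustrative):

#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static void *qp_table[16];
static atomic_int refcount = 2;

static bool put_ref(unsigned int qp_num)
{
	pthread_mutex_lock(&table_lock);
	if (atomic_fetch_sub(&refcount, 1) != 1) {   /* not the last reference */
		pthread_mutex_unlock(&table_lock);
		return false;
	}
	qp_table[qp_num] = NULL;                     /* last ref: clear the lookup entry */
	pthread_mutex_unlock(&table_lock);
	return true;                                 /* caller may now free the QP */
}

int main(void)
{
	int dummy_qp;
	bool first, last;

	qp_table[3] = &dummy_qp;                     /* QP registered for lookups */
	first = put_ref(3);
	last = put_ref(3);
	printf("first drop frees: %d, last drop frees: %d\n", first, last); /* 0, 1 */
	return 0;
}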
@@ -985,21 +987,24 @@ enum i40iw_status_code i40iw_cqp_commit_fpm_values_cmd(struct i40iw_sc_dev *dev,
enum i40iw_status_code i40iw_vf_wait_vchnl_resp(struct i40iw_sc_dev *dev)
{
struct i40iw_device *iwdev = dev->back_dev;
- enum i40iw_status_code err_code = 0;
int timeout_ret;
i40iw_debug(dev, I40IW_DEBUG_VIRT, "%s[%u] dev %p, iwdev %p\n",
__func__, __LINE__, dev, iwdev);
- atomic_add(2, &iwdev->vchnl_msgs);
+
+ atomic_set(&iwdev->vchnl_msgs, 2);
timeout_ret = wait_event_timeout(iwdev->vchnl_waitq,
(atomic_read(&iwdev->vchnl_msgs) == 1),
I40IW_VCHNL_EVENT_TIMEOUT);
atomic_dec(&iwdev->vchnl_msgs);
if (!timeout_ret) {
i40iw_pr_err("virt channel completion timeout = 0x%x\n", timeout_ret);
- err_code = I40IW_ERR_TIMEOUT;
+ atomic_set(&iwdev->vchnl_msgs, 0);
+ dev->vchnl_up = false;
+ return I40IW_ERR_TIMEOUT;
}
- return err_code;
+ wake_up(&dev->vf_reqs);
+ return 0;
}
/**
diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.c b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
index 1fe3b84a06e4..02a735b64208 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_verbs.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.c
@@ -63,8 +63,8 @@ static int i40iw_query_device(struct ib_device *ibdev,
ether_addr_copy((u8 *)&props->sys_image_guid, iwdev->netdev->dev_addr);
props->fw_ver = I40IW_FW_VERSION;
props->device_cap_flags = iwdev->device_cap_flags;
- props->vendor_id = iwdev->vendor_id;
- props->vendor_part_id = iwdev->vendor_part_id;
+ props->vendor_id = iwdev->ldev->pcidev->vendor;
+ props->vendor_part_id = iwdev->ldev->pcidev->device;
props->hw_ver = (u32)iwdev->sc_dev.hw_rev;
props->max_mr_size = I40IW_MAX_OUTBOUND_MESSAGE_SIZE;
props->max_qp = iwdev->max_qp;
@@ -74,7 +74,7 @@ static int i40iw_query_device(struct ib_device *ibdev,
props->max_cqe = iwdev->max_cqe;
props->max_mr = iwdev->max_mr;
props->max_pd = iwdev->max_pd;
- props->max_sge_rd = 1;
+ props->max_sge_rd = I40IW_MAX_SGE_RD;
props->max_qp_rd_atom = I40IW_MAX_IRD_SIZE;
props->max_qp_init_rd_atom = props->max_qp_rd_atom;
props->atomic_cap = IB_ATOMIC_NONE;
@@ -120,7 +120,7 @@ static int i40iw_query_port(struct ib_device *ibdev,
props->pkey_tbl_len = 1;
props->active_width = IB_WIDTH_4X;
props->active_speed = 1;
- props->max_msg_sz = 0x80000000;
+ props->max_msg_sz = I40IW_MAX_OUTBOUND_MESSAGE_SIZE;
return 0;
}
@@ -437,7 +437,6 @@ void i40iw_free_qp_resources(struct i40iw_device *iwdev,
kfree(iwqp->kqp.wrid_mem);
iwqp->kqp.wrid_mem = NULL;
kfree(iwqp->allocated_buffer);
- iwqp->allocated_buffer = NULL;
}
/**
@@ -521,14 +520,12 @@ static int i40iw_setup_kmode_qp(struct i40iw_device *iwdev,
enum i40iw_status_code status;
struct i40iw_qp_uk_init_info *ukinfo = &info->qp_uk_init_info;
- ukinfo->max_sq_frag_cnt = I40IW_MAX_WQ_FRAGMENT_COUNT;
-
sq_size = i40iw_qp_roundup(ukinfo->sq_size + 1);
rq_size = i40iw_qp_roundup(ukinfo->rq_size + 1);
- status = i40iw_get_wqe_shift(sq_size, ukinfo->max_sq_frag_cnt, &sqshift);
+ status = i40iw_get_wqe_shift(sq_size, ukinfo->max_sq_frag_cnt, ukinfo->max_inline_data, &sqshift);
if (!status)
- status = i40iw_get_wqe_shift(rq_size, ukinfo->max_rq_frag_cnt, &rqshift);
+ status = i40iw_get_wqe_shift(rq_size, ukinfo->max_rq_frag_cnt, 0, &rqshift);
if (status)
return -ENOSYS;
@@ -609,6 +606,9 @@ static struct ib_qp *i40iw_create_qp(struct ib_pd *ibpd,
if (init_attr->cap.max_inline_data > I40IW_MAX_INLINE_DATA_SIZE)
init_attr->cap.max_inline_data = I40IW_MAX_INLINE_DATA_SIZE;
+ if (init_attr->cap.max_send_sge > I40IW_MAX_WQ_FRAGMENT_COUNT)
+ init_attr->cap.max_send_sge = I40IW_MAX_WQ_FRAGMENT_COUNT;
+
memset(&init_info, 0, sizeof(init_info));
sq_size = init_attr->cap.max_send_wr;
@@ -618,6 +618,7 @@ static struct ib_qp *i40iw_create_qp(struct ib_pd *ibpd,
init_info.qp_uk_init_info.rq_size = rq_size;
init_info.qp_uk_init_info.max_sq_frag_cnt = init_attr->cap.max_send_sge;
init_info.qp_uk_init_info.max_rq_frag_cnt = init_attr->cap.max_recv_sge;
+ init_info.qp_uk_init_info.max_inline_data = init_attr->cap.max_inline_data;
mem = kzalloc(sizeof(*iwqp), GFP_KERNEL);
if (!mem)
@@ -722,8 +723,10 @@ static struct ib_qp *i40iw_create_qp(struct ib_pd *ibpd,
iwarp_info = &iwqp->iwarp_info;
iwarp_info->rd_enable = true;
iwarp_info->wr_rdresp_en = true;
- if (!iwqp->user_mode)
+ if (!iwqp->user_mode) {
+ iwarp_info->fast_reg_en = true;
iwarp_info->priv_mode_en = true;
+ }
iwarp_info->ddp_ver = 1;
iwarp_info->rdmap_ver = 1;
@@ -784,6 +787,8 @@ static struct ib_qp *i40iw_create_qp(struct ib_pd *ibpd,
return ERR_PTR(err_code);
}
}
+ init_completion(&iwqp->sq_drained);
+ init_completion(&iwqp->rq_drained);
return &iwqp->ibqp;
error:
@@ -1444,6 +1449,166 @@ static int i40iw_handle_q_mem(struct i40iw_device *iwdev,
}
/**
+ * i40iw_hw_alloc_stag - cqp command to allocate stag
+ * @iwdev: iwarp device
+ * @iwmr: iwarp mr pointer
+ */
+static int i40iw_hw_alloc_stag(struct i40iw_device *iwdev, struct i40iw_mr *iwmr)
+{
+ struct i40iw_allocate_stag_info *info;
+ struct i40iw_pd *iwpd = to_iwpd(iwmr->ibmr.pd);
+ enum i40iw_status_code status;
+ int err = 0;
+ struct i40iw_cqp_request *cqp_request;
+ struct cqp_commands_info *cqp_info;
+
+ cqp_request = i40iw_get_cqp_request(&iwdev->cqp, true);
+ if (!cqp_request)
+ return -ENOMEM;
+
+ cqp_info = &cqp_request->info;
+ info = &cqp_info->in.u.alloc_stag.info;
+ memset(info, 0, sizeof(*info));
+ info->page_size = PAGE_SIZE;
+ info->stag_idx = iwmr->stag >> I40IW_CQPSQ_STAG_IDX_SHIFT;
+ info->pd_id = iwpd->sc_pd.pd_id;
+ info->total_len = iwmr->length;
+ cqp_info->cqp_cmd = OP_ALLOC_STAG;
+ cqp_info->post_sq = 1;
+ cqp_info->in.u.alloc_stag.dev = &iwdev->sc_dev;
+ cqp_info->in.u.alloc_stag.scratch = (uintptr_t)cqp_request;
+
+ status = i40iw_handle_cqp_op(iwdev, cqp_request);
+ if (status) {
+ err = -ENOMEM;
+ i40iw_pr_err("CQP-OP MR Reg fail");
+ }
+ return err;
+}
+
+/**
+ * i40iw_alloc_mr - register stag for fast memory registration
+ * @pd: ibpd pointer
+ * @mr_type: type of memory for stag registration
+ * @max_num_sg: max number of pages
+ */
+static struct ib_mr *i40iw_alloc_mr(struct ib_pd *pd,
+ enum ib_mr_type mr_type,
+ u32 max_num_sg)
+{
+ struct i40iw_pd *iwpd = to_iwpd(pd);
+ struct i40iw_device *iwdev = to_iwdev(pd->device);
+ struct i40iw_pble_alloc *palloc;
+ struct i40iw_pbl *iwpbl;
+ struct i40iw_mr *iwmr;
+ enum i40iw_status_code status;
+ u32 stag;
+ int err_code = -ENOMEM;
+
+ iwmr = kzalloc(sizeof(*iwmr), GFP_KERNEL);
+ if (!iwmr)
+ return ERR_PTR(-ENOMEM);
+
+ stag = i40iw_create_stag(iwdev);
+ if (!stag) {
+ err_code = -EOVERFLOW;
+ goto err;
+ }
+ iwmr->stag = stag;
+ iwmr->ibmr.rkey = stag;
+ iwmr->ibmr.lkey = stag;
+ iwmr->ibmr.pd = pd;
+ iwmr->ibmr.device = pd->device;
+ iwpbl = &iwmr->iwpbl;
+ iwpbl->iwmr = iwmr;
+ iwmr->type = IW_MEMREG_TYPE_MEM;
+ palloc = &iwpbl->pble_alloc;
+ iwmr->page_cnt = max_num_sg;
+ mutex_lock(&iwdev->pbl_mutex);
+ status = i40iw_get_pble(&iwdev->sc_dev, iwdev->pble_rsrc, palloc, iwmr->page_cnt);
+ mutex_unlock(&iwdev->pbl_mutex);
+ if (status)
+ goto err1;
+
+ if (palloc->level != I40IW_LEVEL_1)
+ goto err2;
+ err_code = i40iw_hw_alloc_stag(iwdev, iwmr);
+ if (err_code)
+ goto err2;
+ iwpbl->pbl_allocated = true;
+ i40iw_add_pdusecount(iwpd);
+ return &iwmr->ibmr;
+err2:
+ i40iw_free_pble(iwdev->pble_rsrc, palloc);
+err1:
+ i40iw_free_stag(iwdev, stag);
+err:
+ kfree(iwmr);
+ return ERR_PTR(err_code);
+}
+
+/**
+ * i40iw_set_page - populate pbl list for fmr
+ * @ibmr: ib mem to access iwarp mr pointer
+ * @addr: page dma address for pbl list
+ */
+static int i40iw_set_page(struct ib_mr *ibmr, u64 addr)
+{
+ struct i40iw_mr *iwmr = to_iwmr(ibmr);
+ struct i40iw_pbl *iwpbl = &iwmr->iwpbl;
+ struct i40iw_pble_alloc *palloc = &iwpbl->pble_alloc;
+ u64 *pbl;
+
+ if (unlikely(iwmr->npages == iwmr->page_cnt))
+ return -ENOMEM;
+
+ pbl = (u64 *)palloc->level1.addr;
+ pbl[iwmr->npages++] = cpu_to_le64(addr);
+ return 0;
+}
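i40iw_set_page() is the per-page callback that ib_sg_to_pages() invokes while walking the scatter list; it only appends to the level-1 PBL and refuses to overrun the descriptor count chosen at allocation time. A userspace model of that bounded append (struct fake_mr and its fields are illustrative, not the driver's):

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

struct fake_mr {
	uint64_t pbl[4];     /* stands in for the level-1 PBLE array */
	uint32_t npages;
	uint32_t page_cnt;   /* max_num_sg requested at allocation time */
};

static int set_page(struct fake_mr *mr, uint64_t addr)
{
	if (mr->npages == mr->page_cnt)
		return -ENOMEM;           /* all descriptors already consumed */

	mr->pbl[mr->npages++] = addr;     /* the driver stores this little-endian */
	return 0;
}

int main(void)
{
	struct fake_mr mr = { .page_cnt = 2 };
	int a = set_page(&mr, 0x1000);
	int b = set_page(&mr, 0x2000);
	int c = set_page(&mr, 0x3000);

	printf("%d %d %d\n", a, b, c); /* 0 0 -12: third page has nowhere to go */
	return 0;
}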
+
+/**
+ * i40iw_map_mr_sg - map sg list for fmr
+ * @ibmr: ib mem to access iwarp mr pointer
+ * @sg: scatter gather list for fmr
+ * @sg_nents: number of sg entries
+ */
+static int i40iw_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
+ int sg_nents, unsigned int *sg_offset)
+{
+ struct i40iw_mr *iwmr = to_iwmr(ibmr);
+
+ iwmr->npages = 0;
+ return ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, i40iw_set_page);
+}
+
+/**
+ * i40iw_drain_sq - drain the send queue
+ * @ibqp: ib qp pointer
+ */
+static void i40iw_drain_sq(struct ib_qp *ibqp)
+{
+ struct i40iw_qp *iwqp = to_iwqp(ibqp);
+ struct i40iw_sc_qp *qp = &iwqp->sc_qp;
+
+ if (I40IW_RING_MORE_WORK(qp->qp_uk.sq_ring))
+ wait_for_completion(&iwqp->sq_drained);
+}
+
+/**
+ * i40iw_drain_rq - drain the receive queue
+ * @ibqp: ib qp pointer
+ */
+static void i40iw_drain_rq(struct ib_qp *ibqp)
+{
+ struct i40iw_qp *iwqp = to_iwqp(ibqp);
+ struct i40iw_sc_qp *qp = &iwqp->sc_qp;
+
+ if (I40IW_RING_MORE_WORK(qp->qp_uk.rq_ring))
+ wait_for_completion(&iwqp->rq_drained);
+}
+
+/**
* i40iw_hwreg_mr - send cqp command for memory registration
* @iwdev: iwarp device
* @iwmr: iwarp mr pointer
@@ -1526,14 +1691,16 @@ static struct ib_mr *i40iw_reg_user_mr(struct ib_pd *pd,
struct i40iw_mr *iwmr;
struct ib_umem *region;
struct i40iw_mem_reg_req req;
- u32 pbl_depth = 0;
+ u64 pbl_depth = 0;
u32 stag = 0;
u16 access;
- u32 region_length;
+ u64 region_length;
bool use_pbles = false;
unsigned long flags;
int err = -ENOSYS;
+ if (length > I40IW_MAX_MR_SIZE)
+ return ERR_PTR(-EINVAL);
region = ib_umem_get(pd->uobject->context, start, length, acc, 0);
if (IS_ERR(region))
return (struct ib_mr *)region;
@@ -1564,7 +1731,7 @@ static struct ib_mr *i40iw_reg_user_mr(struct ib_pd *pd,
palloc = &iwpbl->pble_alloc;
iwmr->type = req.reg_type;
- iwmr->page_cnt = pbl_depth;
+ iwmr->page_cnt = (u32)pbl_depth;
switch (req.reg_type) {
case IW_MEMREG_TYPE_QP:
@@ -1881,12 +2048,14 @@ static int i40iw_post_send(struct ib_qp *ibqp,
enum i40iw_status_code ret;
int err = 0;
unsigned long flags;
+ bool inv_stag;
iwqp = (struct i40iw_qp *)ibqp;
ukqp = &iwqp->sc_qp.qp_uk;
spin_lock_irqsave(&iwqp->lock, flags);
while (ib_wr) {
+ inv_stag = false;
memset(&info, 0, sizeof(info));
info.wr_id = (u64)(ib_wr->wr_id);
if ((ib_wr->send_flags & IB_SEND_SIGNALED) || iwqp->sig_all)
@@ -1896,19 +2065,28 @@ static int i40iw_post_send(struct ib_qp *ibqp,
switch (ib_wr->opcode) {
case IB_WR_SEND:
- if (ib_wr->send_flags & IB_SEND_SOLICITED)
- info.op_type = I40IW_OP_TYPE_SEND_SOL;
- else
- info.op_type = I40IW_OP_TYPE_SEND;
+ /* fall-through */
+ case IB_WR_SEND_WITH_INV:
+ if (ib_wr->opcode == IB_WR_SEND) {
+ if (ib_wr->send_flags & IB_SEND_SOLICITED)
+ info.op_type = I40IW_OP_TYPE_SEND_SOL;
+ else
+ info.op_type = I40IW_OP_TYPE_SEND;
+ } else {
+ if (ib_wr->send_flags & IB_SEND_SOLICITED)
+ info.op_type = I40IW_OP_TYPE_SEND_SOL_INV;
+ else
+ info.op_type = I40IW_OP_TYPE_SEND_INV;
+ }
if (ib_wr->send_flags & IB_SEND_INLINE) {
info.op.inline_send.data = (void *)(unsigned long)ib_wr->sg_list[0].addr;
info.op.inline_send.len = ib_wr->sg_list[0].length;
- ret = ukqp->ops.iw_inline_send(ukqp, &info, rdma_wr(ib_wr)->rkey, false);
+ ret = ukqp->ops.iw_inline_send(ukqp, &info, ib_wr->ex.invalidate_rkey, false);
} else {
info.op.send.num_sges = ib_wr->num_sge;
info.op.send.sg_list = (struct i40iw_sge *)ib_wr->sg_list;
- ret = ukqp->ops.iw_send(ukqp, &info, rdma_wr(ib_wr)->rkey, false);
+ ret = ukqp->ops.iw_send(ukqp, &info, ib_wr->ex.invalidate_rkey, false);
}
if (ret)
@@ -1936,7 +2114,14 @@ static int i40iw_post_send(struct ib_qp *ibqp,
if (ret)
err = -EIO;
break;
+ case IB_WR_RDMA_READ_WITH_INV:
+ inv_stag = true;
+ /* fall-through */
case IB_WR_RDMA_READ:
+ if (ib_wr->num_sge > I40IW_MAX_SGE_RD) {
+ err = -EINVAL;
+ break;
+ }
info.op_type = I40IW_OP_TYPE_RDMA_READ;
info.op.rdma_read.rem_addr.tag_off = rdma_wr(ib_wr)->remote_addr;
info.op.rdma_read.rem_addr.stag = rdma_wr(ib_wr)->rkey;
@@ -1944,10 +2129,47 @@ static int i40iw_post_send(struct ib_qp *ibqp,
info.op.rdma_read.lo_addr.tag_off = ib_wr->sg_list->addr;
info.op.rdma_read.lo_addr.stag = ib_wr->sg_list->lkey;
info.op.rdma_read.lo_addr.len = ib_wr->sg_list->length;
- ret = ukqp->ops.iw_rdma_read(ukqp, &info, false, false);
+ ret = ukqp->ops.iw_rdma_read(ukqp, &info, inv_stag, false);
+ if (ret)
+ err = -EIO;
+ break;
+ case IB_WR_LOCAL_INV:
+ info.op_type = I40IW_OP_TYPE_INV_STAG;
+ info.op.inv_local_stag.target_stag = ib_wr->ex.invalidate_rkey;
+ ret = ukqp->ops.iw_stag_local_invalidate(ukqp, &info, true);
+ if (ret)
+ err = -EIO;
+ break;
+ case IB_WR_REG_MR:
+ {
+ struct i40iw_mr *iwmr = to_iwmr(reg_wr(ib_wr)->mr);
+ int page_shift = ilog2(reg_wr(ib_wr)->mr->page_size);
+ int flags = reg_wr(ib_wr)->access;
+ struct i40iw_pble_alloc *palloc = &iwmr->iwpbl.pble_alloc;
+ struct i40iw_sc_dev *dev = &iwqp->iwdev->sc_dev;
+ struct i40iw_fast_reg_stag_info info;
+
+ info.access_rights = I40IW_ACCESS_FLAGS_LOCALREAD;
+ info.access_rights |= i40iw_get_user_access(flags);
+ info.stag_key = reg_wr(ib_wr)->key & 0xff;
+ info.stag_idx = reg_wr(ib_wr)->key >> 8;
+ info.wr_id = ib_wr->wr_id;
+
+ info.addr_type = I40IW_ADDR_TYPE_VA_BASED;
+ info.va = (void *)(uintptr_t)iwmr->ibmr.iova;
+ info.total_len = iwmr->ibmr.length;
+ info.first_pm_pbl_index = palloc->level1.idx;
+ info.local_fence = ib_wr->send_flags & IB_SEND_FENCE;
+ info.signaled = ib_wr->send_flags & IB_SEND_SIGNALED;
+
+ if (page_shift == 21)
+ info.page_size = 1; /* 2M page */
+
+ ret = dev->iw_priv_qp_ops->iw_mr_fast_register(&iwqp->sc_qp, &info, true);
if (ret)
err = -EIO;
break;
+ }
default:
err = -EINVAL;
i40iw_pr_err(" upost_send bad opcode = 0x%x\n",
@@ -2027,6 +2249,7 @@ static int i40iw_poll_cq(struct ib_cq *ibcq,
enum i40iw_status_code ret;
struct i40iw_cq_uk *ukcq;
struct i40iw_sc_qp *qp;
+ struct i40iw_qp *iwqp;
unsigned long flags;
iwcq = (struct i40iw_cq *)ibcq;
@@ -2037,6 +2260,8 @@ static int i40iw_poll_cq(struct ib_cq *ibcq,
ret = ukcq->ops.iw_cq_poll_completion(ukcq, &cq_poll_info, true);
if (ret == I40IW_ERR_QUEUE_EMPTY) {
break;
+ } else if (ret == I40IW_ERR_QUEUE_DESTROYED) {
+ continue;
} else if (ret) {
if (!cqe_count)
cqe_count = -1;
@@ -2044,10 +2269,12 @@ static int i40iw_poll_cq(struct ib_cq *ibcq,
}
entry->wc_flags = 0;
entry->wr_id = cq_poll_info.wr_id;
- if (!cq_poll_info.error)
- entry->status = IB_WC_SUCCESS;
- else
+ if (cq_poll_info.error) {
entry->status = IB_WC_WR_FLUSH_ERR;
+ entry->vendor_err = cq_poll_info.major_err << 16 | cq_poll_info.minor_err;
+ } else {
+ entry->status = IB_WC_SUCCESS;
+ }
switch (cq_poll_info.op_type) {
case I40IW_OP_TYPE_RDMA_WRITE:
@@ -2071,12 +2298,17 @@ static int i40iw_poll_cq(struct ib_cq *ibcq,
break;
}
- entry->vendor_err =
- cq_poll_info.major_err << 16 | cq_poll_info.minor_err;
entry->ex.imm_data = 0;
qp = (struct i40iw_sc_qp *)cq_poll_info.qp_handle;
entry->qp = (struct ib_qp *)qp->back_qp;
entry->src_qp = cq_poll_info.qp_id;
+ iwqp = (struct i40iw_qp *)qp->back_qp;
+ if (iwqp->iwarp_state > I40IW_QP_STATE_RTS) {
+ if (!I40IW_RING_MORE_WORK(qp->qp_uk.sq_ring))
+ complete(&iwqp->sq_drained);
+ if (!I40IW_RING_MORE_WORK(qp->qp_uk.rq_ring))
+ complete(&iwqp->rq_drained);
+ }
entry->byte_len = cq_poll_info.bytes_xfered;
entry++;
cqe_count++;
@@ -2129,62 +2361,130 @@ static int i40iw_port_immutable(struct ib_device *ibdev, u8 port_num,
return 0;
}
+static const char * const i40iw_hw_stat_names[] = {
+ /* 32bit names */
+ [I40IW_HW_STAT_INDEX_IP4RXDISCARD] = "ip4InDiscards",
+ [I40IW_HW_STAT_INDEX_IP4RXTRUNC] = "ip4InTruncatedPkts",
+ [I40IW_HW_STAT_INDEX_IP4TXNOROUTE] = "ip4OutNoRoutes",
+ [I40IW_HW_STAT_INDEX_IP6RXDISCARD] = "ip6InDiscards",
+ [I40IW_HW_STAT_INDEX_IP6RXTRUNC] = "ip6InTruncatedPkts",
+ [I40IW_HW_STAT_INDEX_IP6TXNOROUTE] = "ip6OutNoRoutes",
+ [I40IW_HW_STAT_INDEX_TCPRTXSEG] = "tcpRetransSegs",
+ [I40IW_HW_STAT_INDEX_TCPRXOPTERR] = "tcpInOptErrors",
+ [I40IW_HW_STAT_INDEX_TCPRXPROTOERR] = "tcpInProtoErrors",
+ /* 64bit names */
+ [I40IW_HW_STAT_INDEX_IP4RXOCTS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "ip4InOctets",
+ [I40IW_HW_STAT_INDEX_IP4RXPKTS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "ip4InPkts",
+ [I40IW_HW_STAT_INDEX_IP4RXFRAGS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "ip4InReasmRqd",
+ [I40IW_HW_STAT_INDEX_IP4RXMCPKTS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "ip4InMcastPkts",
+ [I40IW_HW_STAT_INDEX_IP4TXOCTS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "ip4OutOctets",
+ [I40IW_HW_STAT_INDEX_IP4TXPKTS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "ip4OutPkts",
+ [I40IW_HW_STAT_INDEX_IP4TXFRAGS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "ip4OutSegRqd",
+ [I40IW_HW_STAT_INDEX_IP4TXMCPKTS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "ip4OutMcastPkts",
+ [I40IW_HW_STAT_INDEX_IP6RXOCTS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "ip6InOctets",
+ [I40IW_HW_STAT_INDEX_IP6RXPKTS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "ip6InPkts",
+ [I40IW_HW_STAT_INDEX_IP6RXFRAGS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "ip6InReasmRqd",
+ [I40IW_HW_STAT_INDEX_IP6RXMCPKTS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "ip6InMcastPkts",
+ [I40IW_HW_STAT_INDEX_IP6TXOCTS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "ip6OutOctets",
+ [I40IW_HW_STAT_INDEX_IP6TXPKTS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "ip6OutPkts",
+ [I40IW_HW_STAT_INDEX_IP6TXFRAGS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "ip6OutSegRqd",
+ [I40IW_HW_STAT_INDEX_IP6TXMCPKTS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "ip6OutMcastPkts",
+ [I40IW_HW_STAT_INDEX_TCPRXSEGS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "tcpInSegs",
+ [I40IW_HW_STAT_INDEX_TCPTXSEG + I40IW_HW_STAT_INDEX_MAX_32] =
+ "tcpOutSegs",
+ [I40IW_HW_STAT_INDEX_RDMARXRDS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "iwInRdmaReads",
+ [I40IW_HW_STAT_INDEX_RDMARXSNDS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "iwInRdmaSends",
+ [I40IW_HW_STAT_INDEX_RDMARXWRS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "iwInRdmaWrites",
+ [I40IW_HW_STAT_INDEX_RDMATXRDS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "iwOutRdmaReads",
+ [I40IW_HW_STAT_INDEX_RDMATXSNDS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "iwOutRdmaSends",
+ [I40IW_HW_STAT_INDEX_RDMATXWRS + I40IW_HW_STAT_INDEX_MAX_32] =
+ "iwOutRdmaWrites",
+ [I40IW_HW_STAT_INDEX_RDMAVBND + I40IW_HW_STAT_INDEX_MAX_32] =
+ "iwRdmaBnd",
+ [I40IW_HW_STAT_INDEX_RDMAVINV + I40IW_HW_STAT_INDEX_MAX_32] =
+ "iwRdmaInv"
+};
+
+/**
+ * i40iw_alloc_hw_stats - Allocate a hw stats structure
+ * @ibdev: device pointer from stack
+ * @port_num: port number
+ */
+static struct rdma_hw_stats *i40iw_alloc_hw_stats(struct ib_device *ibdev,
+ u8 port_num)
+{
+ struct i40iw_device *iwdev = to_iwdev(ibdev);
+ struct i40iw_sc_dev *dev = &iwdev->sc_dev;
+ int num_counters = I40IW_HW_STAT_INDEX_MAX_32 +
+ I40IW_HW_STAT_INDEX_MAX_64;
+ unsigned long lifespan = RDMA_HW_STATS_DEFAULT_LIFESPAN;
+
+ BUILD_BUG_ON(ARRAY_SIZE(i40iw_hw_stat_names) !=
+ (I40IW_HW_STAT_INDEX_MAX_32 +
+ I40IW_HW_STAT_INDEX_MAX_64));
+
+ /*
+ * PFs get the default update lifespan, but VFs only update once
+ * per second
+ */
+ if (!dev->is_pf)
+ lifespan = 1000;
+ return rdma_alloc_hw_stats_struct(i40iw_hw_stat_names, num_counters,
+ lifespan);
+}
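The name table and the rdma_alloc_hw_stats_struct() call above rely on one flattening rule: the 64-bit counters are laid out directly after the 32-bit ones, so a 64-bit counter's slot is its own enum value plus I40IW_HW_STAT_INDEX_MAX_32. A toy model of that indexing, using made-up enum subsets rather than the driver's full lists:

#include <stdio.h>

/* Toy subsets of the driver's 32-bit and 64-bit counter enums. */
enum { STAT32_IP4RXDISCARD, STAT32_IP4RXTRUNC, STAT32_MAX };
enum { STAT64_IP4RXOCTS, STAT64_IP4RXPKTS, STAT64_MAX };

static const char * const stat_names[STAT32_MAX + STAT64_MAX] = {
	[STAT32_IP4RXDISCARD]           = "ip4InDiscards",
	[STAT32_IP4RXTRUNC]             = "ip4InTruncatedPkts",
	/* 64-bit counters start right after the last 32-bit slot */
	[STAT64_IP4RXOCTS + STAT32_MAX] = "ip4InOctets",
	[STAT64_IP4RXPKTS + STAT32_MAX] = "ip4InPkts",
};

int main(void)
{
	printf("%s\n", stat_names[STAT64_IP4RXPKTS + STAT32_MAX]); /* ip4InPkts */
	return 0;
}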
+
/**
- * i40iw_get_protocol_stats - Populates the rdma_stats structure
- * @ibdev: ib dev struct
- * @stats: iw protocol stats struct
+ * i40iw_get_hw_stats - Populates the rdma_hw_stats structure
+ * @ibdev: device pointer from stack
+ * @stats: stats pointer from stack
+ * @port_num: port number
+ * @index: which hw counter the stack is requesting we update
*/
-static int i40iw_get_protocol_stats(struct ib_device *ibdev,
- union rdma_protocol_stats *stats)
+static int i40iw_get_hw_stats(struct ib_device *ibdev,
+ struct rdma_hw_stats *stats,
+ u8 port_num, int index)
{
struct i40iw_device *iwdev = to_iwdev(ibdev);
struct i40iw_sc_dev *dev = &iwdev->sc_dev;
struct i40iw_dev_pestat *devstat = &dev->dev_pestat;
struct i40iw_dev_hw_stats *hw_stats = &devstat->hw_stats;
- struct timespec curr_time;
- static struct timespec last_rd_time = {0, 0};
- enum i40iw_status_code status = 0;
unsigned long flags;
- curr_time = current_kernel_time();
- memset(stats, 0, sizeof(*stats));
-
if (dev->is_pf) {
spin_lock_irqsave(&devstat->stats_lock, flags);
devstat->ops.iw_hw_stat_read_all(devstat,
&devstat->hw_stats);
spin_unlock_irqrestore(&devstat->stats_lock, flags);
} else {
- if (((u64)curr_time.tv_sec - (u64)last_rd_time.tv_sec) > 1)
- status = i40iw_vchnl_vf_get_pe_stats(dev,
- &devstat->hw_stats);
-
- if (status)
+ if (i40iw_vchnl_vf_get_pe_stats(dev, &devstat->hw_stats))
return -ENOSYS;
}
- stats->iw.ipInReceives = hw_stats->stat_value_64[I40IW_HW_STAT_INDEX_IP4RXPKTS] +
- hw_stats->stat_value_64[I40IW_HW_STAT_INDEX_IP6RXPKTS];
- stats->iw.ipInTruncatedPkts = hw_stats->stat_value_32[I40IW_HW_STAT_INDEX_IP4RXTRUNC] +
- hw_stats->stat_value_32[I40IW_HW_STAT_INDEX_IP6RXTRUNC];
- stats->iw.ipInDiscards = hw_stats->stat_value_32[I40IW_HW_STAT_INDEX_IP4RXDISCARD] +
- hw_stats->stat_value_32[I40IW_HW_STAT_INDEX_IP6RXDISCARD];
- stats->iw.ipOutNoRoutes = hw_stats->stat_value_32[I40IW_HW_STAT_INDEX_IP4TXNOROUTE] +
- hw_stats->stat_value_32[I40IW_HW_STAT_INDEX_IP6TXNOROUTE];
- stats->iw.ipReasmReqds = hw_stats->stat_value_64[I40IW_HW_STAT_INDEX_IP4RXFRAGS] +
- hw_stats->stat_value_64[I40IW_HW_STAT_INDEX_IP6RXFRAGS];
- stats->iw.ipFragCreates = hw_stats->stat_value_64[I40IW_HW_STAT_INDEX_IP4TXFRAGS] +
- hw_stats->stat_value_64[I40IW_HW_STAT_INDEX_IP6TXFRAGS];
- stats->iw.ipInMcastPkts = hw_stats->stat_value_64[I40IW_HW_STAT_INDEX_IP4RXMCPKTS] +
- hw_stats->stat_value_64[I40IW_HW_STAT_INDEX_IP6RXMCPKTS];
- stats->iw.ipOutMcastPkts = hw_stats->stat_value_64[I40IW_HW_STAT_INDEX_IP4TXMCPKTS] +
- hw_stats->stat_value_64[I40IW_HW_STAT_INDEX_IP6TXMCPKTS];
- stats->iw.tcpOutSegs = hw_stats->stat_value_64[I40IW_HW_STAT_INDEX_TCPTXSEG];
- stats->iw.tcpInSegs = hw_stats->stat_value_64[I40IW_HW_STAT_INDEX_TCPRXSEGS];
- stats->iw.tcpRetransSegs = hw_stats->stat_value_32[I40IW_HW_STAT_INDEX_TCPRTXSEG];
-
- last_rd_time = curr_time;
- return 0;
+ memcpy(&stats->value[0], hw_stats, sizeof(*hw_stats));
+
+ return stats->num_counters;
}
/**
@@ -2323,10 +2623,15 @@ static struct i40iw_ib_device *i40iw_init_rdma_device(struct i40iw_device *iwdev
iwibdev->ibdev.get_dma_mr = i40iw_get_dma_mr;
iwibdev->ibdev.reg_user_mr = i40iw_reg_user_mr;
iwibdev->ibdev.dereg_mr = i40iw_dereg_mr;
- iwibdev->ibdev.get_protocol_stats = i40iw_get_protocol_stats;
+ iwibdev->ibdev.alloc_hw_stats = i40iw_alloc_hw_stats;
+ iwibdev->ibdev.get_hw_stats = i40iw_get_hw_stats;
iwibdev->ibdev.query_device = i40iw_query_device;
iwibdev->ibdev.create_ah = i40iw_create_ah;
iwibdev->ibdev.destroy_ah = i40iw_destroy_ah;
+ iwibdev->ibdev.drain_sq = i40iw_drain_sq;
+ iwibdev->ibdev.drain_rq = i40iw_drain_rq;
+ iwibdev->ibdev.alloc_mr = i40iw_alloc_mr;
+ iwibdev->ibdev.map_mr_sg = i40iw_map_mr_sg;
iwibdev->ibdev.iwcm = kzalloc(sizeof(*iwibdev->ibdev.iwcm), GFP_KERNEL);
if (!iwibdev->ibdev.iwcm) {
ib_dealloc_device(&iwibdev->ibdev);
diff --git a/drivers/infiniband/hw/i40iw/i40iw_verbs.h b/drivers/infiniband/hw/i40iw/i40iw_verbs.h
index 1101f77080e6..0069be8a5a38 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_verbs.h
+++ b/drivers/infiniband/hw/i40iw/i40iw_verbs.h
@@ -92,6 +92,7 @@ struct i40iw_mr {
struct ib_umem *region;
u16 type;
u32 page_cnt;
+ u32 npages;
u32 stag;
u64 length;
u64 pgaddrmem[MAX_SAVE_PAGE_ADDRS];
@@ -169,5 +170,7 @@ struct i40iw_qp {
struct i40iw_pbl *iwpbl;
struct i40iw_dma_mem q2_ctx_mem;
struct i40iw_dma_mem ietf_mem;
+ struct completion sq_drained;
+ struct completion rq_drained;
};
#endif
diff --git a/drivers/infiniband/hw/i40iw/i40iw_vf.c b/drivers/infiniband/hw/i40iw/i40iw_vf.c
index cb0f18340e14..e33d4810965c 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_vf.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_vf.c
@@ -80,6 +80,6 @@ enum i40iw_status_code i40iw_manage_vf_pble_bp(struct i40iw_sc_cqp *cqp,
return 0;
}
-struct i40iw_vf_cqp_ops iw_vf_cqp_ops = {
+const struct i40iw_vf_cqp_ops iw_vf_cqp_ops = {
i40iw_manage_vf_pble_bp
};
diff --git a/drivers/infiniband/hw/i40iw/i40iw_vf.h b/drivers/infiniband/hw/i40iw/i40iw_vf.h
index f649f3a62e13..4359559ece9c 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_vf.h
+++ b/drivers/infiniband/hw/i40iw/i40iw_vf.h
@@ -57,6 +57,6 @@ enum i40iw_status_code i40iw_manage_vf_pble_bp(struct i40iw_sc_cqp *cqp,
u64 scratch,
bool post_sq);
-extern struct i40iw_vf_cqp_ops iw_vf_cqp_ops;
+extern const struct i40iw_vf_cqp_ops iw_vf_cqp_ops;
#endif
diff --git a/drivers/infiniband/hw/i40iw/i40iw_virtchnl.c b/drivers/infiniband/hw/i40iw/i40iw_virtchnl.c
index 6b68f7890b76..3041003c94d2 100644
--- a/drivers/infiniband/hw/i40iw/i40iw_virtchnl.c
+++ b/drivers/infiniband/hw/i40iw/i40iw_virtchnl.c
@@ -254,7 +254,7 @@ static void vchnl_pf_send_get_hmc_fcn_resp(struct i40iw_sc_dev *dev,
static void vchnl_pf_send_get_pe_stats_resp(struct i40iw_sc_dev *dev,
u32 vf_id,
struct i40iw_virtchnl_op_buf *vchnl_msg,
- struct i40iw_dev_hw_stats hw_stats)
+ struct i40iw_dev_hw_stats *hw_stats)
{
enum i40iw_status_code ret_code;
u8 resp_buffer[sizeof(struct i40iw_virtchnl_resp_buf) + sizeof(struct i40iw_dev_hw_stats) - 1];
@@ -264,7 +264,7 @@ static void vchnl_pf_send_get_pe_stats_resp(struct i40iw_sc_dev *dev,
vchnl_msg_resp->iw_chnl_op_ctx = vchnl_msg->iw_chnl_op_ctx;
vchnl_msg_resp->iw_chnl_buf_len = sizeof(resp_buffer);
vchnl_msg_resp->iw_op_ret_code = I40IW_SUCCESS;
- *((struct i40iw_dev_hw_stats *)vchnl_msg_resp->iw_chnl_buf) = hw_stats;
+ *((struct i40iw_dev_hw_stats *)vchnl_msg_resp->iw_chnl_buf) = *hw_stats;
ret_code = dev->vchnl_if.vchnl_send(dev, vf_id, resp_buffer, sizeof(resp_buffer));
if (ret_code)
i40iw_debug(dev, I40IW_DEBUG_VIRT,
@@ -437,11 +437,9 @@ enum i40iw_status_code i40iw_vchnl_recv_pf(struct i40iw_sc_dev *dev,
vchnl_pf_send_get_ver_resp(dev, vf_id, vchnl_msg);
return I40IW_SUCCESS;
}
- for (iw_vf_idx = 0; iw_vf_idx < I40IW_MAX_PE_ENABLED_VF_COUNT;
- iw_vf_idx++) {
+ for (iw_vf_idx = 0; iw_vf_idx < I40IW_MAX_PE_ENABLED_VF_COUNT; iw_vf_idx++) {
if (!dev->vf_dev[iw_vf_idx]) {
- if (first_avail_iw_vf ==
- I40IW_MAX_PE_ENABLED_VF_COUNT)
+ if (first_avail_iw_vf == I40IW_MAX_PE_ENABLED_VF_COUNT)
first_avail_iw_vf = iw_vf_idx;
continue;
}
@@ -541,7 +539,7 @@ enum i40iw_status_code i40iw_vchnl_recv_pf(struct i40iw_sc_dev *dev,
devstat->ops.iw_hw_stat_read_all(devstat, &devstat->hw_stats);
spin_unlock_irqrestore(&dev->dev_pestat.stats_lock, flags);
vf_dev->msg_count--;
- vchnl_pf_send_get_pe_stats_resp(dev, vf_id, vchnl_msg, devstat->hw_stats);
+ vchnl_pf_send_get_pe_stats_resp(dev, vf_id, vchnl_msg, &devstat->hw_stats);
break;
default:
i40iw_debug(dev, I40IW_DEBUG_VIRT,
@@ -596,23 +594,25 @@ enum i40iw_status_code i40iw_vchnl_vf_get_ver(struct i40iw_sc_dev *dev,
struct i40iw_virtchnl_req vchnl_req;
enum i40iw_status_code ret_code;
+ if (!i40iw_vf_clear_to_send(dev))
+ return I40IW_ERR_TIMEOUT;
memset(&vchnl_req, 0, sizeof(vchnl_req));
vchnl_req.dev = dev;
vchnl_req.parm = vchnl_ver;
vchnl_req.parm_len = sizeof(*vchnl_ver);
vchnl_req.vchnl_msg = &dev->vchnl_vf_msg_buf.vchnl_msg;
+
ret_code = vchnl_vf_send_get_ver_req(dev, &vchnl_req);
- if (!ret_code) {
- ret_code = i40iw_vf_wait_vchnl_resp(dev);
- if (!ret_code)
- ret_code = vchnl_req.ret_code;
- else
- dev->vchnl_up = false;
- } else {
+ if (ret_code) {
i40iw_debug(dev, I40IW_DEBUG_VIRT,
"%s Send message failed 0x%0x\n", __func__, ret_code);
+ return ret_code;
}
- return ret_code;
+ ret_code = i40iw_vf_wait_vchnl_resp(dev);
+ if (ret_code)
+ return ret_code;
+ else
+ return vchnl_req.ret_code;
}
/**
@@ -626,23 +626,25 @@ enum i40iw_status_code i40iw_vchnl_vf_get_hmc_fcn(struct i40iw_sc_dev *dev,
struct i40iw_virtchnl_req vchnl_req;
enum i40iw_status_code ret_code;
+ if (!i40iw_vf_clear_to_send(dev))
+ return I40IW_ERR_TIMEOUT;
memset(&vchnl_req, 0, sizeof(vchnl_req));
vchnl_req.dev = dev;
vchnl_req.parm = hmc_fcn;
vchnl_req.parm_len = sizeof(*hmc_fcn);
vchnl_req.vchnl_msg = &dev->vchnl_vf_msg_buf.vchnl_msg;
+
ret_code = vchnl_vf_send_get_hmc_fcn_req(dev, &vchnl_req);
- if (!ret_code) {
- ret_code = i40iw_vf_wait_vchnl_resp(dev);
- if (!ret_code)
- ret_code = vchnl_req.ret_code;
- else
- dev->vchnl_up = false;
- } else {
+ if (ret_code) {
i40iw_debug(dev, I40IW_DEBUG_VIRT,
"%s Send message failed 0x%0x\n", __func__, ret_code);
+ return ret_code;
}
- return ret_code;
+ ret_code = i40iw_vf_wait_vchnl_resp(dev);
+ if (ret_code)
+ return ret_code;
+ else
+ return vchnl_req.ret_code;
}
/**
@@ -660,25 +662,27 @@ enum i40iw_status_code i40iw_vchnl_vf_add_hmc_objs(struct i40iw_sc_dev *dev,
struct i40iw_virtchnl_req vchnl_req;
enum i40iw_status_code ret_code;
+ if (!i40iw_vf_clear_to_send(dev))
+ return I40IW_ERR_TIMEOUT;
memset(&vchnl_req, 0, sizeof(vchnl_req));
vchnl_req.dev = dev;
vchnl_req.vchnl_msg = &dev->vchnl_vf_msg_buf.vchnl_msg;
+
ret_code = vchnl_vf_send_add_hmc_objs_req(dev,
&vchnl_req,
rsrc_type,
start_index,
rsrc_count);
- if (!ret_code) {
- ret_code = i40iw_vf_wait_vchnl_resp(dev);
- if (!ret_code)
- ret_code = vchnl_req.ret_code;
- else
- dev->vchnl_up = false;
- } else {
+ if (ret_code) {
i40iw_debug(dev, I40IW_DEBUG_VIRT,
"%s Send message failed 0x%0x\n", __func__, ret_code);
+ return ret_code;
}
- return ret_code;
+ ret_code = i40iw_vf_wait_vchnl_resp(dev);
+ if (ret_code)
+ return ret_code;
+ else
+ return vchnl_req.ret_code;
}
/**
@@ -696,25 +700,27 @@ enum i40iw_status_code i40iw_vchnl_vf_del_hmc_obj(struct i40iw_sc_dev *dev,
struct i40iw_virtchnl_req vchnl_req;
enum i40iw_status_code ret_code;
+ if (!i40iw_vf_clear_to_send(dev))
+ return I40IW_ERR_TIMEOUT;
memset(&vchnl_req, 0, sizeof(vchnl_req));
vchnl_req.dev = dev;
vchnl_req.vchnl_msg = &dev->vchnl_vf_msg_buf.vchnl_msg;
+
ret_code = vchnl_vf_send_del_hmc_objs_req(dev,
&vchnl_req,
rsrc_type,
start_index,
rsrc_count);
- if (!ret_code) {
- ret_code = i40iw_vf_wait_vchnl_resp(dev);
- if (!ret_code)
- ret_code = vchnl_req.ret_code;
- else
- dev->vchnl_up = false;
- } else {
+ if (ret_code) {
i40iw_debug(dev, I40IW_DEBUG_VIRT,
"%s Send message failed 0x%0x\n", __func__, ret_code);
+ return ret_code;
}
- return ret_code;
+ ret_code = i40iw_vf_wait_vchnl_resp(dev);
+ if (ret_code)
+ return ret_code;
+ else
+ return vchnl_req.ret_code;
}
/**
@@ -728,21 +734,23 @@ enum i40iw_status_code i40iw_vchnl_vf_get_pe_stats(struct i40iw_sc_dev *dev,
struct i40iw_virtchnl_req vchnl_req;
enum i40iw_status_code ret_code;
+ if (!i40iw_vf_clear_to_send(dev))
+ return I40IW_ERR_TIMEOUT;
memset(&vchnl_req, 0, sizeof(vchnl_req));
vchnl_req.dev = dev;
vchnl_req.parm = hw_stats;
vchnl_req.parm_len = sizeof(*hw_stats);
vchnl_req.vchnl_msg = &dev->vchnl_vf_msg_buf.vchnl_msg;
+
ret_code = vchnl_vf_send_get_pe_stats_req(dev, &vchnl_req);
- if (!ret_code) {
- ret_code = i40iw_vf_wait_vchnl_resp(dev);
- if (!ret_code)
- ret_code = vchnl_req.ret_code;
- else
- dev->vchnl_up = false;
- } else {
+ if (ret_code) {
i40iw_debug(dev, I40IW_DEBUG_VIRT,
"%s Send message failed 0x%0x\n", __func__, ret_code);
+ return ret_code;
}
- return ret_code;
+ ret_code = i40iw_vf_wait_vchnl_resp(dev);
+ if (ret_code)
+ return ret_code;
+ else
+ return vchnl_req.ret_code;
}
diff --git a/drivers/infiniband/hw/mlx4/main.c b/drivers/infiniband/hw/mlx4/main.c
index f014eaf5969b..b01ef6eee6e8 100644
--- a/drivers/infiniband/hw/mlx4/main.c
+++ b/drivers/infiniband/hw/mlx4/main.c
@@ -1601,7 +1601,7 @@ static int __mlx4_ib_create_flow(struct ib_qp *qp, struct ib_flow_attr *flow_att
else if (ret == -ENXIO)
pr_err("Device managed flow steering is disabled. Fail to register network rule.\n");
else if (ret)
- pr_err("Invalid argumant. Fail to register network rule.\n");
+ pr_err("Invalid argument. Fail to register network rule.\n");
mlx4_free_cmd_mailbox(mdev->dev, mailbox);
return ret;
diff --git a/drivers/infiniband/hw/mlx4/mcg.c b/drivers/infiniband/hw/mlx4/mcg.c
index 99451d887266..8f7ad07915b0 100644
--- a/drivers/infiniband/hw/mlx4/mcg.c
+++ b/drivers/infiniband/hw/mlx4/mcg.c
@@ -96,7 +96,7 @@ struct ib_sa_mcmember_data {
u8 scope_join_state;
u8 proxy_join;
u8 reserved[2];
-};
+} __packed __aligned(4);
struct mcast_group {
struct ib_sa_mcmember_data rec;
@@ -747,14 +747,11 @@ static struct mcast_group *search_relocate_mgid0_group(struct mlx4_ib_demux_ctx
__be64 tid,
union ib_gid *new_mgid)
{
- struct mcast_group *group = NULL, *cur_group;
+ struct mcast_group *group = NULL, *cur_group, *n;
struct mcast_req *req;
- struct list_head *pos;
- struct list_head *n;
mutex_lock(&ctx->mcg_table_lock);
- list_for_each_safe(pos, n, &ctx->mcg_mgid0_list) {
- group = list_entry(pos, struct mcast_group, mgid0_list);
+ list_for_each_entry_safe(group, n, &ctx->mcg_mgid0_list, mgid0_list) {
mutex_lock(&group->lock);
if (group->last_req_tid == tid) {
if (memcmp(new_mgid, &mgid0, sizeof mgid0)) {
diff --git a/drivers/infiniband/hw/mlx4/mlx4_ib.h b/drivers/infiniband/hw/mlx4/mlx4_ib.h
index 1eca01cebe51..6c5ac5d8f32f 100644
--- a/drivers/infiniband/hw/mlx4/mlx4_ib.h
+++ b/drivers/infiniband/hw/mlx4/mlx4_ib.h
@@ -717,9 +717,8 @@ int mlx4_ib_dealloc_mw(struct ib_mw *mw);
struct ib_mr *mlx4_ib_alloc_mr(struct ib_pd *pd,
enum ib_mr_type mr_type,
u32 max_num_sg);
-int mlx4_ib_map_mr_sg(struct ib_mr *ibmr,
- struct scatterlist *sg,
- int sg_nents);
+int mlx4_ib_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
+ unsigned int *sg_offset);
int mlx4_ib_modify_cq(struct ib_cq *cq, u16 cq_count, u16 cq_period);
int mlx4_ib_resize_cq(struct ib_cq *ibcq, int entries, struct ib_udata *udata);
struct ib_cq *mlx4_ib_create_cq(struct ib_device *ibdev,
diff --git a/drivers/infiniband/hw/mlx4/mr.c b/drivers/infiniband/hw/mlx4/mr.c
index ce0b5aa8eb9b..631272172a0b 100644
--- a/drivers/infiniband/hw/mlx4/mr.c
+++ b/drivers/infiniband/hw/mlx4/mr.c
@@ -528,9 +528,8 @@ static int mlx4_set_page(struct ib_mr *ibmr, u64 addr)
return 0;
}
-int mlx4_ib_map_mr_sg(struct ib_mr *ibmr,
- struct scatterlist *sg,
- int sg_nents)
+int mlx4_ib_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
+ unsigned int *sg_offset)
{
struct mlx4_ib_mr *mr = to_mmr(ibmr);
int rc;
@@ -541,7 +540,7 @@ int mlx4_ib_map_mr_sg(struct ib_mr *ibmr,
sizeof(u64) * mr->max_pages,
DMA_TO_DEVICE);
- rc = ib_sg_to_pages(ibmr, sg, sg_nents, mlx4_set_page);
+ rc = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, mlx4_set_page);
ib_dma_sync_single_for_device(ibmr->device, mr->page_map,
sizeof(u64) * mr->max_pages,
diff --git a/drivers/infiniband/hw/mlx4/qp.c b/drivers/infiniband/hw/mlx4/qp.c
index fd97534762b8..81b0e1fbec1d 100644
--- a/drivers/infiniband/hw/mlx4/qp.c
+++ b/drivers/infiniband/hw/mlx4/qp.c
@@ -419,7 +419,8 @@ static int set_rq_size(struct mlx4_ib_dev *dev, struct ib_qp_cap *cap,
}
static int set_kernel_sq_size(struct mlx4_ib_dev *dev, struct ib_qp_cap *cap,
- enum mlx4_ib_qp_type type, struct mlx4_ib_qp *qp)
+ enum mlx4_ib_qp_type type, struct mlx4_ib_qp *qp,
+ bool shrink_wqe)
{
int s;
@@ -477,7 +478,7 @@ static int set_kernel_sq_size(struct mlx4_ib_dev *dev, struct ib_qp_cap *cap,
* We set WQE size to at least 64 bytes, this way stamping
* invalidates each WQE.
*/
- if (dev->dev->caps.fw_ver >= MLX4_FW_VER_WQE_CTRL_NEC &&
+ if (shrink_wqe && dev->dev->caps.fw_ver >= MLX4_FW_VER_WQE_CTRL_NEC &&
qp->sq_signal_bits && BITS_PER_LONG == 64 &&
type != MLX4_IB_QPT_SMI && type != MLX4_IB_QPT_GSI &&
!(type & (MLX4_IB_QPT_PROXY_SMI_OWNER | MLX4_IB_QPT_PROXY_SMI |
@@ -642,6 +643,7 @@ static int create_qp_common(struct mlx4_ib_dev *dev, struct ib_pd *pd,
{
int qpn;
int err;
+ struct ib_qp_cap backup_cap;
struct mlx4_ib_sqp *sqp;
struct mlx4_ib_qp *qp;
enum mlx4_ib_qp_type qp_type = (enum mlx4_ib_qp_type) init_attr->qp_type;
@@ -775,7 +777,9 @@ static int create_qp_common(struct mlx4_ib_dev *dev, struct ib_pd *pd,
goto err;
}
- err = set_kernel_sq_size(dev, &init_attr->cap, qp_type, qp);
+ memcpy(&backup_cap, &init_attr->cap, sizeof(backup_cap));
+ err = set_kernel_sq_size(dev, &init_attr->cap,
+ qp_type, qp, true);
if (err)
goto err;
@@ -787,9 +791,20 @@ static int create_qp_common(struct mlx4_ib_dev *dev, struct ib_pd *pd,
*qp->db.db = 0;
}
- if (mlx4_buf_alloc(dev->dev, qp->buf_size, PAGE_SIZE * 2, &qp->buf, gfp)) {
- err = -ENOMEM;
- goto err_db;
+ if (mlx4_buf_alloc(dev->dev, qp->buf_size, qp->buf_size,
+ &qp->buf, gfp)) {
+ memcpy(&init_attr->cap, &backup_cap,
+ sizeof(backup_cap));
+ err = set_kernel_sq_size(dev, &init_attr->cap, qp_type,
+ qp, false);
+ if (err)
+ goto err_db;
+
+ if (mlx4_buf_alloc(dev->dev, qp->buf_size,
+ PAGE_SIZE * 2, &qp->buf, gfp)) {
+ err = -ENOMEM;
+ goto err_db;
+ }
}
err = mlx4_mtt_init(dev->dev, qp->buf.npages, qp->buf.page_shift,
diff --git a/drivers/infiniband/hw/mlx5/cq.c b/drivers/infiniband/hw/mlx5/cq.c
index a00ba4418de9..dabcc65bd65e 100644
--- a/drivers/infiniband/hw/mlx5/cq.c
+++ b/drivers/infiniband/hw/mlx5/cq.c
@@ -879,7 +879,10 @@ struct ib_cq *mlx5_ib_create_cq(struct ib_device *ibdev,
mlx5_ib_dbg(dev, "cqn 0x%x\n", cq->mcq.cqn);
cq->mcq.irqn = irqn;
- cq->mcq.comp = mlx5_ib_cq_comp;
+ if (context)
+ cq->mcq.tasklet_ctx.comp = mlx5_ib_cq_comp;
+ else
+ cq->mcq.comp = mlx5_ib_cq_comp;
cq->mcq.event = mlx5_ib_cq_event;
INIT_LIST_HEAD(&cq->wc_list);
diff --git a/drivers/infiniband/hw/mlx5/main.c b/drivers/infiniband/hw/mlx5/main.c
index 6ad0489cb3c5..c72797cd9e4f 100644
--- a/drivers/infiniband/hw/mlx5/main.c
+++ b/drivers/infiniband/hw/mlx5/main.c
@@ -38,6 +38,9 @@
#include <linux/dma-mapping.h>
#include <linux/slab.h>
#include <linux/io-mapping.h>
+#if defined(CONFIG_X86)
+#include <asm/pat.h>
+#endif
#include <linux/sched.h>
#include <rdma/ib_user_verbs.h>
#include <rdma/ib_addr.h>
@@ -517,6 +520,10 @@ static int mlx5_ib_query_device(struct ib_device *ibdev,
props->device_cap_flags |= IB_DEVICE_UD_TSO;
}
+ if (MLX5_CAP_GEN(dev->mdev, eth_net_offloads) &&
+ MLX5_CAP_ETH(dev->mdev, scatter_fcs))
+ props->device_cap_flags |= IB_DEVICE_RAW_SCATTER_FCS;
+
props->vendor_part_id = mdev->pdev->device;
props->hw_ver = mdev->pdev->revision;
@@ -1068,38 +1075,89 @@ static int get_index(unsigned long offset)
return get_arg(offset);
}
+static inline char *mmap_cmd2str(enum mlx5_ib_mmap_cmd cmd)
+{
+ switch (cmd) {
+ case MLX5_IB_MMAP_WC_PAGE:
+ return "WC";
+ case MLX5_IB_MMAP_REGULAR_PAGE:
+ return "best effort WC";
+ case MLX5_IB_MMAP_NC_PAGE:
+ return "NC";
+ default:
+ return NULL;
+ }
+}
+
+static int uar_mmap(struct mlx5_ib_dev *dev, enum mlx5_ib_mmap_cmd cmd,
+ struct vm_area_struct *vma, struct mlx5_uuar_info *uuari)
+{
+ int err;
+ unsigned long idx;
+ phys_addr_t pfn, pa;
+ pgprot_t prot;
+
+ switch (cmd) {
+ case MLX5_IB_MMAP_WC_PAGE:
+/* Some architectures don't support WC memory */
+#if defined(CONFIG_X86)
+ if (!pat_enabled())
+ return -EPERM;
+#elif !(defined(CONFIG_PPC) || (defined(CONFIG_ARM) && defined(CONFIG_MMU)))
+ return -EPERM;
+#endif
+ /* fall through */
+ case MLX5_IB_MMAP_REGULAR_PAGE:
+ /* For MLX5_IB_MMAP_REGULAR_PAGE do the best effort to get WC */
+ prot = pgprot_writecombine(vma->vm_page_prot);
+ break;
+ case MLX5_IB_MMAP_NC_PAGE:
+ prot = pgprot_noncached(vma->vm_page_prot);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (vma->vm_end - vma->vm_start != PAGE_SIZE)
+ return -EINVAL;
+
+ idx = get_index(vma->vm_pgoff);
+ if (idx >= uuari->num_uars)
+ return -EINVAL;
+
+ pfn = uar_index2pfn(dev, uuari->uars[idx].index);
+ mlx5_ib_dbg(dev, "uar idx 0x%lx, pfn %pa\n", idx, &pfn);
+
+ vma->vm_page_prot = prot;
+ err = io_remap_pfn_range(vma, vma->vm_start, pfn,
+ PAGE_SIZE, vma->vm_page_prot);
+ if (err) {
+ mlx5_ib_err(dev, "io_remap_pfn_range failed with error=%d, vm_start=0x%lx, pfn=%pa, mmap_cmd=%s\n",
+ err, vma->vm_start, &pfn, mmap_cmd2str(cmd));
+ return -EAGAIN;
+ }
+
+ pa = pfn << PAGE_SHIFT;
+ mlx5_ib_dbg(dev, "mapped %s at 0x%lx, PA %pa\n", mmap_cmd2str(cmd),
+ vma->vm_start, &pa);
+
+ return 0;
+}
+
static int mlx5_ib_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vma)
{
struct mlx5_ib_ucontext *context = to_mucontext(ibcontext);
struct mlx5_ib_dev *dev = to_mdev(ibcontext->device);
struct mlx5_uuar_info *uuari = &context->uuari;
unsigned long command;
- unsigned long idx;
phys_addr_t pfn;
command = get_command(vma->vm_pgoff);
switch (command) {
+ case MLX5_IB_MMAP_WC_PAGE:
+ case MLX5_IB_MMAP_NC_PAGE:
case MLX5_IB_MMAP_REGULAR_PAGE:
- if (vma->vm_end - vma->vm_start != PAGE_SIZE)
- return -EINVAL;
-
- idx = get_index(vma->vm_pgoff);
- if (idx >= uuari->num_uars)
- return -EINVAL;
-
- pfn = uar_index2pfn(dev, uuari->uars[idx].index);
- mlx5_ib_dbg(dev, "uar idx 0x%lx, pfn 0x%llx\n", idx,
- (unsigned long long)pfn);
-
- vma->vm_page_prot = pgprot_writecombine(vma->vm_page_prot);
- if (io_remap_pfn_range(vma, vma->vm_start, pfn,
- PAGE_SIZE, vma->vm_page_prot))
- return -EAGAIN;
-
- mlx5_ib_dbg(dev, "mapped WC at 0x%lx, PA 0x%llx\n",
- vma->vm_start,
- (unsigned long long)pfn << PAGE_SHIFT);
- break;
+ return uar_mmap(dev, command, vma, uuari);
case MLX5_IB_MMAP_GET_CONTIGUOUS_PAGES:
return -ENOSYS;
@@ -1108,7 +1166,7 @@ static int mlx5_ib_mmap(struct ib_ucontext *ibcontext, struct vm_area_struct *vm
if (vma->vm_end - vma->vm_start != PAGE_SIZE)
return -EINVAL;
- if (vma->vm_flags & (VM_WRITE | VM_EXEC))
+ if (vma->vm_flags & VM_WRITE)
return -EPERM;
/* Don't expose to user-space information it shouldn't have */
@@ -1438,7 +1496,8 @@ static struct mlx5_ib_flow_prio *get_flow_table(struct mlx5_ib_dev *dev,
if (!ft) {
ft = mlx5_create_auto_grouped_flow_table(ns, priority,
num_entries,
- num_groups);
+ num_groups,
+ 0);
if (!IS_ERR(ft)) {
prio->refcount = 0;
diff --git a/drivers/infiniband/hw/mlx5/mlx5_ib.h b/drivers/infiniband/hw/mlx5/mlx5_ib.h
index b46c25542a7c..c4a9825828bc 100644
--- a/drivers/infiniband/hw/mlx5/mlx5_ib.h
+++ b/drivers/infiniband/hw/mlx5/mlx5_ib.h
@@ -70,6 +70,8 @@ enum {
enum mlx5_ib_mmap_cmd {
MLX5_IB_MMAP_REGULAR_PAGE = 0,
MLX5_IB_MMAP_GET_CONTIGUOUS_PAGES = 1,
+ MLX5_IB_MMAP_WC_PAGE = 2,
+ MLX5_IB_MMAP_NC_PAGE = 3,
/* 5 is chosen in order to be compatible with old versions of libmlx5 */
MLX5_IB_MMAP_CORE_CLOCK = 5,
};
@@ -356,6 +358,7 @@ enum mlx5_ib_qp_flags {
MLX5_IB_QP_SIGNATURE_HANDLING = 1 << 5,
/* QP uses 1 as its source QP number */
MLX5_IB_QP_SQPN_QP1 = 1 << 6,
+ MLX5_IB_QP_CAP_SCATTER_FCS = 1 << 7,
};
struct mlx5_umr_wr {
@@ -712,9 +715,8 @@ int mlx5_ib_dereg_mr(struct ib_mr *ibmr);
struct ib_mr *mlx5_ib_alloc_mr(struct ib_pd *pd,
enum ib_mr_type mr_type,
u32 max_num_sg);
-int mlx5_ib_map_mr_sg(struct ib_mr *ibmr,
- struct scatterlist *sg,
- int sg_nents);
+int mlx5_ib_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
+ unsigned int *sg_offset);
int mlx5_ib_process_mad(struct ib_device *ibdev, int mad_flags, u8 port_num,
const struct ib_wc *in_wc, const struct ib_grh *in_grh,
const struct ib_mad_hdr *in, size_t in_mad_size,
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 4d5bff151cdf..8cf2ce50511f 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -1751,26 +1751,33 @@ done:
static int
mlx5_ib_sg_to_klms(struct mlx5_ib_mr *mr,
struct scatterlist *sgl,
- unsigned short sg_nents)
+ unsigned short sg_nents,
+ unsigned int *sg_offset_p)
{
struct scatterlist *sg = sgl;
struct mlx5_klm *klms = mr->descs;
+ unsigned int sg_offset = sg_offset_p ? *sg_offset_p : 0;
u32 lkey = mr->ibmr.pd->local_dma_lkey;
int i;
- mr->ibmr.iova = sg_dma_address(sg);
+ mr->ibmr.iova = sg_dma_address(sg) + sg_offset;
mr->ibmr.length = 0;
mr->ndescs = sg_nents;
for_each_sg(sgl, sg, sg_nents, i) {
if (unlikely(i > mr->max_descs))
break;
- klms[i].va = cpu_to_be64(sg_dma_address(sg));
- klms[i].bcount = cpu_to_be32(sg_dma_len(sg));
+ klms[i].va = cpu_to_be64(sg_dma_address(sg) + sg_offset);
+ klms[i].bcount = cpu_to_be32(sg_dma_len(sg) - sg_offset);
klms[i].key = cpu_to_be32(lkey);
mr->ibmr.length += sg_dma_len(sg);
+
+ sg_offset = 0;
}
+ if (sg_offset_p)
+ *sg_offset_p = sg_offset;
+
return i;
}
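The sg_offset handling above trims only the first scatter element and then zeroes the offset so later elements are taken whole. A standalone model of that loop (struct seg is illustrative; the real code builds KLM entries and also writes the residual offset back through *sg_offset_p):

#include <stdint.h>
#include <stdio.h>

struct seg {
	uint64_t addr;
	uint32_t len;
};

static void apply_first_offset(struct seg *segs, int n, unsigned int offset)
{
	for (int i = 0; i < n; i++) {
		segs[i].addr += offset;
		segs[i].len  -= offset;
		offset = 0;            /* only the first segment is trimmed */
	}
}

int main(void)
{
	struct seg segs[2] = { { 0x1000, 4096 }, { 0x3000, 4096 } };

	apply_first_offset(segs, 2, 512);
	printf("seg0 0x%llx/%u, seg1 0x%llx/%u\n",
	       (unsigned long long)segs[0].addr, segs[0].len,
	       (unsigned long long)segs[1].addr, segs[1].len);
	/* seg0 0x1200/3584, seg1 0x3000/4096 */
	return 0;
}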
@@ -1788,9 +1795,8 @@ static int mlx5_set_page(struct ib_mr *ibmr, u64 addr)
return 0;
}
-int mlx5_ib_map_mr_sg(struct ib_mr *ibmr,
- struct scatterlist *sg,
- int sg_nents)
+int mlx5_ib_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
+ unsigned int *sg_offset)
{
struct mlx5_ib_mr *mr = to_mmr(ibmr);
int n;
@@ -1802,9 +1808,10 @@ int mlx5_ib_map_mr_sg(struct ib_mr *ibmr,
DMA_TO_DEVICE);
if (mr->access_mode == MLX5_ACCESS_MODE_KLM)
- n = mlx5_ib_sg_to_klms(mr, sg, sg_nents);
+ n = mlx5_ib_sg_to_klms(mr, sg, sg_nents, sg_offset);
else
- n = ib_sg_to_pages(ibmr, sg, sg_nents, mlx5_set_page);
+ n = ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset,
+ mlx5_set_page);
ib_dma_sync_single_for_device(ibmr->device, mr->desc_map,
mr->desc_size * mr->max_descs,
diff --git a/drivers/infiniband/hw/mlx5/qp.c b/drivers/infiniband/hw/mlx5/qp.c
index 8dee8bc1e0fe..504117657d41 100644
--- a/drivers/infiniband/hw/mlx5/qp.c
+++ b/drivers/infiniband/hw/mlx5/qp.c
@@ -1028,6 +1028,7 @@ static int get_rq_pas_size(void *qpc)
static int create_raw_packet_qp_rq(struct mlx5_ib_dev *dev,
struct mlx5_ib_rq *rq, void *qpin)
{
+ struct mlx5_ib_qp *mqp = rq->base.container_mibqp;
__be64 *pas;
__be64 *qp_pas;
void *in;
@@ -1051,6 +1052,9 @@ static int create_raw_packet_qp_rq(struct mlx5_ib_dev *dev,
MLX5_SET(rqc, rqc, user_index, MLX5_GET(qpc, qpc, user_index));
MLX5_SET(rqc, rqc, cqn, MLX5_GET(qpc, qpc, cqn_rcv));
+ if (mqp->flags & MLX5_IB_QP_CAP_SCATTER_FCS)
+ MLX5_SET(rqc, rqc, scatter_fcs, 1);
+
wq = MLX5_ADDR_OF(rqc, rqc, wq);
MLX5_SET(wq, wq, wq_type, MLX5_WQ_TYPE_CYCLIC);
MLX5_SET(wq, wq, end_padding_mode,
@@ -1136,11 +1140,12 @@ static int create_raw_packet_qp(struct mlx5_ib_dev *dev, struct mlx5_ib_qp *qp,
}
if (qp->rq.wqe_cnt) {
+ rq->base.container_mibqp = qp;
+
err = create_raw_packet_qp_rq(dev, rq, in);
if (err)
goto err_destroy_sq;
- rq->base.container_mibqp = qp;
err = create_raw_packet_qp_tir(dev, rq, tdn);
if (err)
@@ -1252,6 +1257,19 @@ static int create_qp_common(struct mlx5_ib_dev *dev, struct ib_pd *pd,
return -EOPNOTSUPP;
}
+ if (init_attr->create_flags & IB_QP_CREATE_SCATTER_FCS) {
+ if (init_attr->qp_type != IB_QPT_RAW_PACKET) {
+ mlx5_ib_dbg(dev, "Scatter FCS is supported only for Raw Packet QPs");
+ return -EOPNOTSUPP;
+ }
+ if (!MLX5_CAP_GEN(dev->mdev, eth_net_offloads) ||
+ !MLX5_CAP_ETH(dev->mdev, scatter_fcs)) {
+ mlx5_ib_dbg(dev, "Scatter FCS isn't supported\n");
+ return -EOPNOTSUPP;
+ }
+ qp->flags |= MLX5_IB_QP_CAP_SCATTER_FCS;
+ }
+
if (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR)
qp->sq_signal_bits = MLX5_WQE_CTRL_CQ_UPDATE;
diff --git a/drivers/infiniband/hw/nes/nes_nic.c b/drivers/infiniband/hw/nes/nes_nic.c
index 92914539edc7..2b27d1351cf7 100644
--- a/drivers/infiniband/hw/nes/nes_nic.c
+++ b/drivers/infiniband/hw/nes/nes_nic.c
@@ -356,7 +356,7 @@ static int nes_netdev_stop(struct net_device *netdev)
/**
* nes_nic_send
*/
-static int nes_nic_send(struct sk_buff *skb, struct net_device *netdev)
+static bool nes_nic_send(struct sk_buff *skb, struct net_device *netdev)
{
struct nes_vnic *nesvnic = netdev_priv(netdev);
struct nes_device *nesdev = nesvnic->nesdev;
@@ -413,7 +413,7 @@ static int nes_nic_send(struct sk_buff *skb, struct net_device *netdev)
netdev->name, skb_shinfo(skb)->nr_frags + 2, skb_headlen(skb));
kfree_skb(skb);
nesvnic->tx_sw_dropped++;
- return NETDEV_TX_LOCKED;
+ return false;
}
set_bit(nesnic->sq_head, nesnic->first_frag_overflow);
bus_address = pci_map_single(nesdev->pcidev, skb->data + NES_FIRST_FRAG_SIZE,
@@ -454,8 +454,7 @@ static int nes_nic_send(struct sk_buff *skb, struct net_device *netdev)
set_wqe_32bit_value(nic_sqe->wqe_words, NES_NIC_SQ_WQE_MISC_IDX, wqe_misc);
nesnic->sq_head++;
nesnic->sq_head &= nesnic->sq_size - 1;
-
- return NETDEV_TX_OK;
+ return true;
}
@@ -479,7 +478,6 @@ static int nes_netdev_start_xmit(struct sk_buff *skb, struct net_device *netdev)
u32 tso_wqe_length;
u32 curr_tcp_seq;
u32 wqe_count=1;
- u32 send_rc;
struct iphdr *iph;
__le16 *wqe_fragment_length;
u32 nr_frags;
@@ -670,13 +668,11 @@ tso_sq_no_longer_full:
skb_linearize(skb);
skb_set_transport_header(skb, hoffset);
skb_set_network_header(skb, nhoffset);
- send_rc = nes_nic_send(skb, netdev);
- if (send_rc != NETDEV_TX_OK)
+ if (!nes_nic_send(skb, netdev))
return NETDEV_TX_OK;
}
} else {
- send_rc = nes_nic_send(skb, netdev);
- if (send_rc != NETDEV_TX_OK)
+ if (!nes_nic_send(skb, netdev))
return NETDEV_TX_OK;
}
@@ -686,7 +682,7 @@ tso_sq_no_longer_full:
nes_write32(nesdev->regs+NES_WQE_ALLOC,
(wqe_count << 24) | (1 << 23) | nesvnic->nic.qp_id);
- netdev->trans_start = jiffies;
+ netif_trans_update(netdev);
return NETDEV_TX_OK;
}
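Writes of dev->trans_start are replaced here (and in the ipoib hunks later in this series) by netif_trans_update(), with reads going through dev_trans_start(). A minimal sketch of the pattern, with illustrative names:

#include <linux/jiffies.h>
#include <linux/netdevice.h>

/* Sketch: record the last transmit time with the helper instead of
 * touching dev->trans_start directly, and read it back the same way. */
static netdev_tx_t example_xmit(struct sk_buff *skb, struct net_device *dev)
{
	/* ... hand skb to the hardware queue (omitted) ... */
	netif_trans_update(dev);	/* was: dev->trans_start = jiffies; */
	return NETDEV_TX_OK;
}

static void example_tx_timeout(struct net_device *dev)
{
	netdev_warn(dev, "tx timeout after %u ms\n",
		    jiffies_to_msecs(jiffies - dev_trans_start(dev)));
}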
diff --git a/drivers/infiniband/hw/nes/nes_utils.c b/drivers/infiniband/hw/nes/nes_utils.c
index 6d3a169c049b..37331e2fdc5f 100644
--- a/drivers/infiniband/hw/nes/nes_utils.c
+++ b/drivers/infiniband/hw/nes/nes_utils.c
@@ -44,6 +44,7 @@
#include <linux/ip.h>
#include <linux/tcp.h>
#include <linux/init.h>
+#include <linux/kernel.h>
#include <asm/io.h>
#include <asm/irq.h>
@@ -903,70 +904,15 @@ void nes_clc(unsigned long parm)
*/
void nes_dump_mem(unsigned int dump_debug_level, void *addr, int length)
{
- char xlate[] = {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9',
- 'a', 'b', 'c', 'd', 'e', 'f'};
- char *ptr;
- char hex_buf[80];
- char ascii_buf[20];
- int num_char;
- int num_ascii;
- int num_hex;
-
if (!(nes_debug_level & dump_debug_level)) {
return;
}
- ptr = addr;
if (length > 0x100) {
nes_debug(dump_debug_level, "Length truncated from %x to %x\n", length, 0x100);
length = 0x100;
}
- nes_debug(dump_debug_level, "Address=0x%p, length=0x%x (%d)\n", ptr, length, length);
-
- memset(ascii_buf, 0, 20);
- memset(hex_buf, 0, 80);
-
- num_ascii = 0;
- num_hex = 0;
- for (num_char = 0; num_char < length; num_char++) {
- if (num_ascii == 8) {
- ascii_buf[num_ascii++] = ' ';
- hex_buf[num_hex++] = '-';
- hex_buf[num_hex++] = ' ';
- }
-
- if (*ptr < 0x20 || *ptr > 0x7e)
- ascii_buf[num_ascii++] = '.';
- else
- ascii_buf[num_ascii++] = *ptr;
- hex_buf[num_hex++] = xlate[((*ptr & 0xf0) >> 4)];
- hex_buf[num_hex++] = xlate[*ptr & 0x0f];
- hex_buf[num_hex++] = ' ';
- ptr++;
-
- if (num_ascii >= 17) {
- /* output line and reset */
- nes_debug(dump_debug_level, " %s | %s\n", hex_buf, ascii_buf);
- memset(ascii_buf, 0, 20);
- memset(hex_buf, 0, 80);
- num_ascii = 0;
- num_hex = 0;
- }
- }
+ nes_debug(dump_debug_level, "Address=0x%p, length=0x%x (%d)\n", addr, length, length);
- /* output the rest */
- if (num_ascii) {
- while (num_ascii < 17) {
- if (num_ascii == 8) {
- hex_buf[num_hex++] = ' ';
- hex_buf[num_hex++] = ' ';
- }
- hex_buf[num_hex++] = ' ';
- hex_buf[num_hex++] = ' ';
- hex_buf[num_hex++] = ' ';
- num_ascii++;
- }
-
- nes_debug(dump_debug_level, " %s | %s\n", hex_buf, ascii_buf);
- }
+ print_hex_dump(KERN_ERR, PFX, DUMP_PREFIX_NONE, 16, 1, addr, length, true);
}
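The open-coded hex/ASCII formatter above is replaced by the kernel's generic helper; for reference, a hedged sketch of print_hex_dump() usage (prefix string, log level and prefix type here are arbitrary; the call above uses KERN_ERR, the driver's PFX and DUMP_PREFIX_NONE):

#include <linux/printk.h>

/* Sketch: dump a buffer 16 bytes per row, in 1-byte groups, with the
 * trailing ASCII column enabled. */
static void example_dump(const void *buf, size_t len)
{
	print_hex_dump(KERN_DEBUG, "example: ", DUMP_PREFIX_OFFSET,
		       16, 1, buf, len, true);
}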
diff --git a/drivers/infiniband/hw/nes/nes_verbs.c b/drivers/infiniband/hw/nes/nes_verbs.c
index fba69a39a7eb..464d6da5fe91 100644
--- a/drivers/infiniband/hw/nes/nes_verbs.c
+++ b/drivers/infiniband/hw/nes/nes_verbs.c
@@ -402,15 +402,14 @@ static int nes_set_page(struct ib_mr *ibmr, u64 addr)
return 0;
}
-static int nes_map_mr_sg(struct ib_mr *ibmr,
- struct scatterlist *sg,
- int sg_nents)
+static int nes_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg,
+ int sg_nents, unsigned int *sg_offset)
{
struct nes_mr *nesmr = to_nesmr(ibmr);
nesmr->npages = 0;
- return ib_sg_to_pages(ibmr, sg, sg_nents, nes_set_page);
+ return ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, nes_set_page);
}
/**
@@ -981,7 +980,7 @@ static int nes_setup_mmap_qp(struct nes_qp *nesqp, struct nes_vnic *nesvnic,
/**
* nes_free_qp_mem() is to free up the qp's pci_alloc_consistent() memory.
*/
-static inline void nes_free_qp_mem(struct nes_device *nesdev,
+static void nes_free_qp_mem(struct nes_device *nesdev,
struct nes_qp *nesqp, int virt_wqs)
{
unsigned long flags;
@@ -1315,6 +1314,8 @@ static struct ib_qp *nes_create_qp(struct ib_pd *ibpd,
nes_debug(NES_DBG_QP, "Invalid QP type: %d\n", init_attr->qp_type);
return ERR_PTR(-EINVAL);
}
+ init_completion(&nesqp->sq_drained);
+ init_completion(&nesqp->rq_drained);
nesqp->sig_all = (init_attr->sq_sig_type == IB_SIGNAL_ALL_WR);
init_timer(&nesqp->terminate_timer);
@@ -3452,6 +3453,29 @@ out:
return err;
}
+/**
+ * nes_drain_sq - drain sq
+ * @ibqp: pointer to ibqp
+ */
+static void nes_drain_sq(struct ib_qp *ibqp)
+{
+ struct nes_qp *nesqp = to_nesqp(ibqp);
+
+ if (nesqp->hwqp.sq_tail != nesqp->hwqp.sq_head)
+ wait_for_completion(&nesqp->sq_drained);
+}
+
+/**
+ * nes_drain_rq - drain rq
+ * @ibqp: pointer to ibqp
+ */
+static void nes_drain_rq(struct ib_qp *ibqp)
+{
+ struct nes_qp *nesqp = to_nesqp(ibqp);
+
+ if (nesqp->hwqp.rq_tail != nesqp->hwqp.rq_head)
+ wait_for_completion(&nesqp->rq_drained);
+}
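With the drain_sq/drain_rq verbs wired up further down in nes_init_ofa_device(), ULPs can quiesce a nes QP through the generic drain helpers instead of open-coding the wait. A hedged caller-side sketch (teardown ordering simplified; names are illustrative):

#include <rdma/ib_verbs.h>

/* Sketch: flush all outstanding work before destroying a QP.
 * ib_drain_qp() transitions the QP as needed and, when the provider
 * supplies drain_sq/drain_rq callbacks (as nes now does), defers to
 * them to wait for the queues to empty. */
static void example_teardown_qp(struct ib_qp *qp)
{
	ib_drain_qp(qp);
	ib_destroy_qp(qp);
}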
/**
* nes_poll_cq
@@ -3582,6 +3606,13 @@ static int nes_poll_cq(struct ib_cq *ibcq, int num_entries, struct ib_wc *entry)
}
}
+ if (nesqp->iwarp_state > NES_CQP_QP_IWARP_STATE_RTS) {
+ if (nesqp->hwqp.sq_tail == nesqp->hwqp.sq_head)
+ complete(&nesqp->sq_drained);
+ if (nesqp->hwqp.rq_tail == nesqp->hwqp.rq_head)
+ complete(&nesqp->rq_drained);
+ }
+
entry->wr_id = wrid;
entry++;
cqe_count++;
@@ -3754,6 +3785,8 @@ struct nes_ib_device *nes_init_ofa_device(struct net_device *netdev)
nesibdev->ibdev.req_notify_cq = nes_req_notify_cq;
nesibdev->ibdev.post_send = nes_post_send;
nesibdev->ibdev.post_recv = nes_post_recv;
+ nesibdev->ibdev.drain_sq = nes_drain_sq;
+ nesibdev->ibdev.drain_rq = nes_drain_rq;
nesibdev->ibdev.iwcm = kzalloc(sizeof(*nesibdev->ibdev.iwcm), GFP_KERNEL);
if (nesibdev->ibdev.iwcm == NULL) {
diff --git a/drivers/infiniband/hw/nes/nes_verbs.h b/drivers/infiniband/hw/nes/nes_verbs.h
index 70290883d067..e02a5662dc20 100644
--- a/drivers/infiniband/hw/nes/nes_verbs.h
+++ b/drivers/infiniband/hw/nes/nes_verbs.h
@@ -189,6 +189,8 @@ struct nes_qp {
u8 pau_pending;
u8 pau_state;
__u64 nesuqp_addr;
+ struct completion sq_drained;
+ struct completion rq_drained;
};
struct ib_mr *nes_reg_phys_mr(struct ib_pd *ib_pd,
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
index a8496a18e20d..b1a3d91fe8b9 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.c
@@ -3081,13 +3081,12 @@ static int ocrdma_set_page(struct ib_mr *ibmr, u64 addr)
return 0;
}
-int ocrdma_map_mr_sg(struct ib_mr *ibmr,
- struct scatterlist *sg,
- int sg_nents)
+int ocrdma_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
+ unsigned int *sg_offset)
{
struct ocrdma_mr *mr = get_ocrdma_mr(ibmr);
mr->npages = 0;
- return ib_sg_to_pages(ibmr, sg, sg_nents, ocrdma_set_page);
+ return ib_sg_to_pages(ibmr, sg, sg_nents, sg_offset, ocrdma_set_page);
}
diff --git a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h
index 8b517fd36779..704ef1e9271b 100644
--- a/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h
+++ b/drivers/infiniband/hw/ocrdma/ocrdma_verbs.h
@@ -122,8 +122,7 @@ struct ib_mr *ocrdma_reg_user_mr(struct ib_pd *, u64 start, u64 length,
struct ib_mr *ocrdma_alloc_mr(struct ib_pd *pd,
enum ib_mr_type mr_type,
u32 max_num_sg);
-int ocrdma_map_mr_sg(struct ib_mr *ibmr,
- struct scatterlist *sg,
- int sg_nents);
+int ocrdma_map_mr_sg(struct ib_mr *ibmr, struct scatterlist *sg, int sg_nents,
+ unsigned int *sg_offset);
#endif /* __OCRDMA_VERBS_H__ */
diff --git a/drivers/infiniband/hw/qib/qib_file_ops.c b/drivers/infiniband/hw/qib/qib_file_ops.c
index 24f4a782e0f4..ff946d5f59e4 100644
--- a/drivers/infiniband/hw/qib/qib_file_ops.c
+++ b/drivers/infiniband/hw/qib/qib_file_ops.c
@@ -824,10 +824,7 @@ static int mmap_piobufs(struct vm_area_struct *vma,
phys = dd->physaddr + piobufs;
#if defined(__powerpc__)
- /* There isn't a generic way to specify writethrough mappings */
- pgprot_val(vma->vm_page_prot) |= _PAGE_NO_CACHE;
- pgprot_val(vma->vm_page_prot) |= _PAGE_WRITETHRU;
- pgprot_val(vma->vm_page_prot) &= ~_PAGE_GUARDED;
+ vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
#endif
/*
diff --git a/drivers/infiniband/hw/qib/qib_iba7322.c b/drivers/infiniband/hw/qib/qib_iba7322.c
index 82d7c4bf5970..ce4034071f9c 100644
--- a/drivers/infiniband/hw/qib/qib_iba7322.c
+++ b/drivers/infiniband/hw/qib/qib_iba7322.c
@@ -1308,21 +1308,6 @@ static const struct qib_hwerror_msgs qib_7322p_error_msgs[] = {
SYM_LSB(IntMask, fldname##17IntMask)), \
.msg = #fldname "_C", .sz = sizeof(#fldname "_C") }
-static const struct qib_hwerror_msgs qib_7322_intr_msgs[] = {
- INTR_AUTO_P(SDmaInt),
- INTR_AUTO_P(SDmaProgressInt),
- INTR_AUTO_P(SDmaIdleInt),
- INTR_AUTO_P(SDmaCleanupDone),
- INTR_AUTO_C(RcvUrg),
- INTR_AUTO_P(ErrInt),
- INTR_AUTO(ErrInt), /* non-port-specific errs */
- INTR_AUTO(AssertGPIOInt),
- INTR_AUTO_P(SendDoneInt),
- INTR_AUTO(SendBufAvailInt),
- INTR_AUTO_C(RcvAvail),
- { .mask = 0, .sz = 0 }
-};
-
#define TXSYMPTOM_AUTO_P(fldname) \
{ .mask = SYM_MASK(SendHdrErrSymptom_0, fldname), \
.msg = #fldname, .sz = sizeof(#fldname) }
diff --git a/drivers/infiniband/hw/qib/qib_init.c b/drivers/infiniband/hw/qib/qib_init.c
index 3f062f0dd9d8..f253111e682e 100644
--- a/drivers/infiniband/hw/qib/qib_init.c
+++ b/drivers/infiniband/hw/qib/qib_init.c
@@ -1090,7 +1090,7 @@ void qib_free_devdata(struct qib_devdata *dd)
qib_dbg_ibdev_exit(&dd->verbs_dev);
#endif
free_percpu(dd->int_counter);
- ib_dealloc_device(&dd->verbs_dev.rdi.ibdev);
+ rvt_dealloc_device(&dd->verbs_dev.rdi);
}
u64 qib_int_counter(struct qib_devdata *dd)
@@ -1183,7 +1183,7 @@ struct qib_devdata *qib_alloc_devdata(struct pci_dev *pdev, size_t extra)
bail:
if (!list_empty(&dd->list))
list_del_init(&dd->list);
- ib_dealloc_device(&dd->verbs_dev.rdi.ibdev);
+ rvt_dealloc_device(&dd->verbs_dev.rdi);
return ERR_PTR(ret);
}
diff --git a/drivers/infiniband/hw/qib/qib_mad.c b/drivers/infiniband/hw/qib/qib_mad.c
index 0bd18375d7df..d2ac29861af5 100644
--- a/drivers/infiniband/hw/qib/qib_mad.c
+++ b/drivers/infiniband/hw/qib/qib_mad.c
@@ -1172,11 +1172,13 @@ static int pma_get_classportinfo(struct ib_pma_mad *pmp,
* Set the most significant bit of CM2 to indicate support for
* congestion statistics
*/
- p->reserved[0] = dd->psxmitwait_supported << 7;
+ ib_set_cpi_capmask2(p,
+ dd->psxmitwait_supported <<
+ (31 - IB_CLASS_PORT_INFO_RESP_TIME_FIELD_SIZE));
/*
* Expected response time is 4.096 usec. * 2^18 == 1.073741824 sec.
*/
- p->resp_time_value = 18;
+ ib_set_cpi_resp_time(p, 18);
return reply((struct ib_smp *) pmp);
}
diff --git a/drivers/infiniband/hw/qib/qib_pcie.c b/drivers/infiniband/hw/qib/qib_pcie.c
index 4758a3801ae8..6abe1c621aa4 100644
--- a/drivers/infiniband/hw/qib/qib_pcie.c
+++ b/drivers/infiniband/hw/qib/qib_pcie.c
@@ -144,13 +144,7 @@ int qib_pcie_ddinit(struct qib_devdata *dd, struct pci_dev *pdev,
addr = pci_resource_start(pdev, 0);
len = pci_resource_len(pdev, 0);
-#if defined(__powerpc__)
- /* There isn't a generic way to specify writethrough mappings */
- dd->kregbase = __ioremap(addr, len, _PAGE_NO_CACHE | _PAGE_WRITETHRU);
-#else
dd->kregbase = ioremap_nocache(addr, len);
-#endif
-
if (!dd->kregbase)
return -ENOMEM;
diff --git a/drivers/infiniband/hw/qib/qib_rc.c b/drivers/infiniband/hw/qib/qib_rc.c
index 9088e26d3ac8..444028a3582a 100644
--- a/drivers/infiniband/hw/qib/qib_rc.c
+++ b/drivers/infiniband/hw/qib/qib_rc.c
@@ -230,7 +230,7 @@ bail:
*
* Return 1 if constructed; otherwise, return 0.
*/
-int qib_make_rc_req(struct rvt_qp *qp)
+int qib_make_rc_req(struct rvt_qp *qp, unsigned long *flags)
{
struct qib_qp_priv *priv = qp->priv;
struct qib_ibdev *dev = to_idev(qp->ibqp.device);
diff --git a/drivers/infiniband/hw/qib/qib_ruc.c b/drivers/infiniband/hw/qib/qib_ruc.c
index a5f07a64b228..b67779256297 100644
--- a/drivers/infiniband/hw/qib/qib_ruc.c
+++ b/drivers/infiniband/hw/qib/qib_ruc.c
@@ -739,7 +739,7 @@ void qib_do_send(struct rvt_qp *qp)
struct qib_qp_priv *priv = qp->priv;
struct qib_ibport *ibp = to_iport(qp->ibqp.device, qp->port_num);
struct qib_pportdata *ppd = ppd_from_ibp(ibp);
- int (*make_req)(struct rvt_qp *qp);
+ int (*make_req)(struct rvt_qp *qp, unsigned long *flags);
unsigned long flags;
if ((qp->ibqp.qp_type == IB_QPT_RC ||
@@ -781,7 +781,7 @@ void qib_do_send(struct rvt_qp *qp)
qp->s_hdrwords = 0;
spin_lock_irqsave(&qp->s_lock, flags);
}
- } while (make_req(qp));
+ } while (make_req(qp, &flags));
spin_unlock_irqrestore(&qp->s_lock, flags);
}
diff --git a/drivers/infiniband/hw/qib/qib_uc.c b/drivers/infiniband/hw/qib/qib_uc.c
index 7bdbc79ceaa3..1d61bd04f449 100644
--- a/drivers/infiniband/hw/qib/qib_uc.c
+++ b/drivers/infiniband/hw/qib/qib_uc.c
@@ -45,7 +45,7 @@
*
* Return 1 if constructed; otherwise, return 0.
*/
-int qib_make_uc_req(struct rvt_qp *qp)
+int qib_make_uc_req(struct rvt_qp *qp, unsigned long *flags)
{
struct qib_qp_priv *priv = qp->priv;
struct qib_other_headers *ohdr;
diff --git a/drivers/infiniband/hw/qib/qib_ud.c b/drivers/infiniband/hw/qib/qib_ud.c
index d9502137de62..846e6c726df7 100644
--- a/drivers/infiniband/hw/qib/qib_ud.c
+++ b/drivers/infiniband/hw/qib/qib_ud.c
@@ -238,7 +238,7 @@ drop:
*
* Return 1 if constructed; otherwise, return 0.
*/
-int qib_make_ud_req(struct rvt_qp *qp)
+int qib_make_ud_req(struct rvt_qp *qp, unsigned long *flags)
{
struct qib_qp_priv *priv = qp->priv;
struct qib_other_headers *ohdr;
@@ -294,7 +294,7 @@ int qib_make_ud_req(struct rvt_qp *qp)
this_cpu_inc(ibp->pmastats->n_unicast_xmit);
lid = ah_attr->dlid & ~((1 << ppd->lmc) - 1);
if (unlikely(lid == ppd->lid)) {
- unsigned long flags;
+ unsigned long tflags = *flags;
/*
* If DMAs are in progress, we can't generate
* a completion for the loopback packet since
@@ -307,10 +307,10 @@ int qib_make_ud_req(struct rvt_qp *qp)
goto bail;
}
qp->s_cur = next_cur;
- local_irq_save(flags);
- spin_unlock_irqrestore(&qp->s_lock, flags);
+ spin_unlock_irqrestore(&qp->s_lock, tflags);
qib_ud_loopback(qp, wqe);
- spin_lock_irqsave(&qp->s_lock, flags);
+ spin_lock_irqsave(&qp->s_lock, tflags);
+ *flags = tflags;
qib_send_complete(qp, wqe, IB_WC_SUCCESS);
goto done;
}
diff --git a/drivers/infiniband/hw/qib/qib_verbs.h b/drivers/infiniband/hw/qib/qib_verbs.h
index 4b76a8d59337..4f878151f81f 100644
--- a/drivers/infiniband/hw/qib/qib_verbs.h
+++ b/drivers/infiniband/hw/qib/qib_verbs.h
@@ -159,6 +159,7 @@ struct qib_other_headers {
} at;
__be32 imm_data;
__be32 aeth;
+ __be32 ieth;
struct ib_atomic_eth atomic_eth;
} u;
} __packed;
@@ -430,11 +431,11 @@ void qib_send_complete(struct rvt_qp *qp, struct rvt_swqe *wqe,
void qib_send_rc_ack(struct rvt_qp *qp);
-int qib_make_rc_req(struct rvt_qp *qp);
+int qib_make_rc_req(struct rvt_qp *qp, unsigned long *flags);
-int qib_make_uc_req(struct rvt_qp *qp);
+int qib_make_uc_req(struct rvt_qp *qp, unsigned long *flags);
-int qib_make_ud_req(struct rvt_qp *qp);
+int qib_make_ud_req(struct rvt_qp *qp, unsigned long *flags);
int qib_register_ib_device(struct qib_devdata *);
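The extra unsigned long *flags argument lets qib_make_ud_req() drop and retake qp->s_lock around the loopback path (see the qib_ud.c hunk above) using the IRQ flags saved by qib_do_send(), instead of nesting a second local_irq_save(). A generic sketch of that pattern, with illustrative names:

#include <linux/spinlock.h>

/* Sketch: a callee that temporarily releases a lock its caller took
 * with spin_lock_irqsave(), without clobbering the caller's saved
 * flags; the updated flags are handed back through the pointer. */
static void example_unlocked_section(spinlock_t *lock, unsigned long *flags)
{
	unsigned long tflags = *flags;

	spin_unlock_irqrestore(lock, tflags);
	/* ... work that must run without the lock held ... */
	spin_lock_irqsave(lock, tflags);
	*flags = tflags;
}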
diff --git a/drivers/infiniband/sw/rdmavt/cq.c b/drivers/infiniband/sw/rdmavt/cq.c
index b1ffc8b4a6c0..6ca6fa80dd6e 100644
--- a/drivers/infiniband/sw/rdmavt/cq.c
+++ b/drivers/infiniband/sw/rdmavt/cq.c
@@ -525,6 +525,7 @@ int rvt_driver_cq_init(struct rvt_dev_info *rdi)
return PTR_ERR(task);
}
+ set_user_nice(task, MIN_NICE);
cpu = cpumask_first(cpumask_of_node(rdi->dparms.node));
kthread_bind(task, cpu);
wake_up_process(task);
diff --git a/drivers/infiniband/sw/rdmavt/mr.c b/drivers/infiniband/sw/rdmavt/mr.c
index 0ff765bfd619..0f4d4500f45e 100644
--- a/drivers/infiniband/sw/rdmavt/mr.c
+++ b/drivers/infiniband/sw/rdmavt/mr.c
@@ -124,11 +124,13 @@ static int rvt_init_mregion(struct rvt_mregion *mr, struct ib_pd *pd,
int count)
{
int m, i = 0;
+ struct rvt_dev_info *dev = ib_to_rvt(pd->device);
mr->mapsz = 0;
m = (count + RVT_SEGSZ - 1) / RVT_SEGSZ;
for (; i < m; i++) {
- mr->map[i] = kzalloc(sizeof(*mr->map[0]), GFP_KERNEL);
+ mr->map[i] = kzalloc_node(sizeof(*mr->map[0]), GFP_KERNEL,
+ dev->dparms.node);
if (!mr->map[i]) {
rvt_deinit_mregion(mr);
return -ENOMEM;
diff --git a/drivers/infiniband/sw/rdmavt/qp.c b/drivers/infiniband/sw/rdmavt/qp.c
index a9e3bcc522c4..5fa4d4d81ee0 100644
--- a/drivers/infiniband/sw/rdmavt/qp.c
+++ b/drivers/infiniband/sw/rdmavt/qp.c
@@ -397,6 +397,7 @@ static void free_qpn(struct rvt_qpn_table *qpt, u32 qpn)
static void rvt_clear_mr_refs(struct rvt_qp *qp, int clr_sends)
{
unsigned n;
+ struct rvt_dev_info *rdi = ib_to_rvt(qp->ibqp.device);
if (test_and_clear_bit(RVT_R_REWIND_SGE, &qp->r_aflags))
rvt_put_ss(&qp->s_rdma_read_sge);
@@ -431,7 +432,7 @@ static void rvt_clear_mr_refs(struct rvt_qp *qp, int clr_sends)
if (qp->ibqp.qp_type != IB_QPT_RC)
return;
- for (n = 0; n < ARRAY_SIZE(qp->s_ack_queue); n++) {
+ for (n = 0; n < rvt_max_atomic(rdi); n++) {
struct rvt_ack_entry *e = &qp->s_ack_queue[n];
if (e->opcode == IB_OPCODE_RC_RDMA_READ_REQUEST &&
@@ -569,7 +570,12 @@ static void rvt_reset_qp(struct rvt_dev_info *rdi, struct rvt_qp *qp,
qp->s_ssn = 1;
qp->s_lsn = 0;
qp->s_mig_state = IB_MIG_MIGRATED;
- memset(qp->s_ack_queue, 0, sizeof(qp->s_ack_queue));
+ if (qp->s_ack_queue)
+ memset(
+ qp->s_ack_queue,
+ 0,
+ rvt_max_atomic(rdi) *
+ sizeof(*qp->s_ack_queue));
qp->r_head_ack_queue = 0;
qp->s_tail_ack_queue = 0;
qp->s_num_rd_atomic = 0;
@@ -653,9 +659,9 @@ struct ib_qp *rvt_create_qp(struct ib_pd *ibpd,
if (gfp == GFP_NOIO)
swq = __vmalloc(
(init_attr->cap.max_send_wr + 1) * sz,
- gfp, PAGE_KERNEL);
+ gfp | __GFP_ZERO, PAGE_KERNEL);
else
- swq = vmalloc_node(
+ swq = vzalloc_node(
(init_attr->cap.max_send_wr + 1) * sz,
rdi->dparms.node);
if (!swq)
@@ -677,6 +683,16 @@ struct ib_qp *rvt_create_qp(struct ib_pd *ibpd,
goto bail_swq;
RCU_INIT_POINTER(qp->next, NULL);
+ if (init_attr->qp_type == IB_QPT_RC) {
+ qp->s_ack_queue =
+ kzalloc_node(
+ sizeof(*qp->s_ack_queue) *
+ rvt_max_atomic(rdi),
+ gfp,
+ rdi->dparms.node);
+ if (!qp->s_ack_queue)
+ goto bail_qp;
+ }
/*
* Driver needs to set up it's private QP structure and do any
@@ -704,9 +720,9 @@ struct ib_qp *rvt_create_qp(struct ib_pd *ibpd,
qp->r_rq.wq = __vmalloc(
sizeof(struct rvt_rwq) +
qp->r_rq.size * sz,
- gfp, PAGE_KERNEL);
+ gfp | __GFP_ZERO, PAGE_KERNEL);
else
- qp->r_rq.wq = vmalloc_node(
+ qp->r_rq.wq = vzalloc_node(
sizeof(struct rvt_rwq) +
qp->r_rq.size * sz,
rdi->dparms.node);
@@ -829,13 +845,13 @@ struct ib_qp *rvt_create_qp(struct ib_pd *ibpd,
case IB_QPT_SMI:
case IB_QPT_GSI:
case IB_QPT_UD:
- qp->allowed_ops = IB_OPCODE_UD_SEND_ONLY & RVT_OPCODE_QP_MASK;
+ qp->allowed_ops = IB_OPCODE_UD;
break;
case IB_QPT_RC:
- qp->allowed_ops = IB_OPCODE_RC_SEND_ONLY & RVT_OPCODE_QP_MASK;
+ qp->allowed_ops = IB_OPCODE_RC;
break;
case IB_QPT_UC:
- qp->allowed_ops = IB_OPCODE_UC_SEND_ONLY & RVT_OPCODE_QP_MASK;
+ qp->allowed_ops = IB_OPCODE_UC;
break;
default:
ret = ERR_PTR(-EINVAL);
@@ -857,6 +873,7 @@ bail_driver_priv:
rdi->driver_f.qp_priv_free(rdi, qp);
bail_qp:
+ kfree(qp->s_ack_queue);
kfree(qp);
bail_swq:
@@ -1284,6 +1301,7 @@ int rvt_destroy_qp(struct ib_qp *ibqp)
vfree(qp->r_rq.wq);
vfree(qp->s_wq);
rdi->driver_f.qp_priv_free(rdi, qp);
+ kfree(qp->s_ack_queue);
kfree(qp);
return 0;
}
diff --git a/drivers/infiniband/sw/rdmavt/vt.c b/drivers/infiniband/sw/rdmavt/vt.c
index 6caf5272ba1f..e1cc2cc42f25 100644
--- a/drivers/infiniband/sw/rdmavt/vt.c
+++ b/drivers/infiniband/sw/rdmavt/vt.c
@@ -106,6 +106,19 @@ struct rvt_dev_info *rvt_alloc_device(size_t size, int nports)
}
EXPORT_SYMBOL(rvt_alloc_device);
+/**
+ * rvt_dealloc_device - deallocate rdi
+ * @rdi: structure to free
+ *
+ * Free a structure allocated with rvt_alloc_device()
+ */
+void rvt_dealloc_device(struct rvt_dev_info *rdi)
+{
+ kfree(rdi->ports);
+ ib_dealloc_device(&rdi->ibdev);
+}
+EXPORT_SYMBOL(rvt_dealloc_device);
+
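rvt_dealloc_device() is the teardown counterpart of rvt_alloc_device(); the qib hunks above switch to it so the per-port array allocated alongside the ib_device is not leaked. A hedged sketch of the pairing in a driver (struct and function names are illustrative):

#include <linux/kernel.h>
#include <rdma/rdma_vt.h>

/* Sketch: a rdmavt-based driver embeds struct rvt_dev_info at the
 * start of its device structure and pairs the two helpers. */
struct example_dd {
	struct rvt_dev_info rdi;	/* must come first */
	/* ... driver-private state ... */
};

static struct example_dd *example_alloc(int nports)
{
	struct rvt_dev_info *rdi;

	rdi = rvt_alloc_device(sizeof(struct example_dd), nports);
	if (!rdi)
		return NULL;
	return container_of(rdi, struct example_dd, rdi);
}

static void example_free(struct example_dd *dd)
{
	rvt_dealloc_device(&dd->rdi);	/* frees rdi->ports, then the ibdev */
}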
static int rvt_query_device(struct ib_device *ibdev,
struct ib_device_attr *props,
struct ib_udata *uhw)
diff --git a/drivers/infiniband/ulp/ipoib/ipoib.h b/drivers/infiniband/ulp/ipoib/ipoib.h
index caec8e9c4666..bab7db6fa9ab 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib.h
+++ b/drivers/infiniband/ulp/ipoib/ipoib.h
@@ -92,6 +92,8 @@ enum {
IPOIB_FLAG_UMCAST = 10,
IPOIB_STOP_NEIGH_GC = 11,
IPOIB_NEIGH_TBL_FLUSH = 12,
+ IPOIB_FLAG_DEV_ADDR_SET = 13,
+ IPOIB_FLAG_DEV_ADDR_CTRL = 14,
IPOIB_MAX_BACKOFF_SECONDS = 16,
@@ -392,6 +394,7 @@ struct ipoib_dev_priv {
struct ipoib_ethtool_st ethtool;
struct timer_list poll_timer;
unsigned max_send_sge;
+ bool sm_fullmember_sendonly_support;
};
struct ipoib_ah {
@@ -476,6 +479,7 @@ void ipoib_reap_ah(struct work_struct *work);
void ipoib_mark_paths_invalid(struct net_device *dev);
void ipoib_flush_paths(struct net_device *dev);
+int ipoib_check_sm_sendonly_fullmember_support(struct ipoib_dev_priv *priv);
struct ipoib_dev_priv *ipoib_intf_alloc(const char *format);
int ipoib_ib_dev_init(struct net_device *dev, struct ib_device *ca, int port);
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_cm.c b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
index c8ed53562c9b..b2f42835d76d 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_cm.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_cm.c
@@ -766,7 +766,7 @@ void ipoib_cm_send(struct net_device *dev, struct sk_buff *skb, struct ipoib_cm_
ipoib_dma_unmap_tx(priv, tx_req);
dev_kfree_skb_any(skb);
} else {
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
++tx->tx_head;
if (++priv->tx_outstanding == ipoib_sendq_size) {
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ethtool.c b/drivers/infiniband/ulp/ipoib/ipoib_ethtool.c
index a53fa5fc0dec..1502199c8e56 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_ethtool.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_ethtool.c
@@ -36,6 +36,27 @@
#include "ipoib.h"
+struct ipoib_stats {
+ char stat_string[ETH_GSTRING_LEN];
+ int stat_offset;
+};
+
+#define IPOIB_NETDEV_STAT(m) { \
+ .stat_string = #m, \
+ .stat_offset = offsetof(struct rtnl_link_stats64, m) }
+
+static const struct ipoib_stats ipoib_gstrings_stats[] = {
+ IPOIB_NETDEV_STAT(rx_packets),
+ IPOIB_NETDEV_STAT(tx_packets),
+ IPOIB_NETDEV_STAT(rx_bytes),
+ IPOIB_NETDEV_STAT(tx_bytes),
+ IPOIB_NETDEV_STAT(tx_errors),
+ IPOIB_NETDEV_STAT(rx_dropped),
+ IPOIB_NETDEV_STAT(tx_dropped)
+};
+
+#define IPOIB_GLOBAL_STATS_LEN ARRAY_SIZE(ipoib_gstrings_stats)
+
static void ipoib_get_drvinfo(struct net_device *netdev,
struct ethtool_drvinfo *drvinfo)
{
@@ -92,11 +113,57 @@ static int ipoib_set_coalesce(struct net_device *dev,
return 0;
}
+static void ipoib_get_ethtool_stats(struct net_device *dev,
+ struct ethtool_stats __always_unused *stats,
+ u64 *data)
+{
+ int i;
+ struct net_device_stats *net_stats = &dev->stats;
+ u8 *p = (u8 *)net_stats;
+
+ for (i = 0; i < IPOIB_GLOBAL_STATS_LEN; i++)
+ data[i] = *(u64 *)(p + ipoib_gstrings_stats[i].stat_offset);
+
+}
+static void ipoib_get_strings(struct net_device __always_unused *dev,
+ u32 stringset, u8 *data)
+{
+ u8 *p = data;
+ int i;
+
+ switch (stringset) {
+ case ETH_SS_STATS:
+ for (i = 0; i < IPOIB_GLOBAL_STATS_LEN; i++) {
+ memcpy(p, ipoib_gstrings_stats[i].stat_string,
+ ETH_GSTRING_LEN);
+ p += ETH_GSTRING_LEN;
+ }
+ break;
+ case ETH_SS_TEST:
+ default:
+ break;
+ }
+}
+static int ipoib_get_sset_count(struct net_device __always_unused *dev,
+ int sset)
+{
+ switch (sset) {
+ case ETH_SS_STATS:
+ return IPOIB_GLOBAL_STATS_LEN;
+ case ETH_SS_TEST:
+ default:
+ break;
+ }
+ return -EOPNOTSUPP;
+}
static const struct ethtool_ops ipoib_ethtool_ops = {
.get_drvinfo = ipoib_get_drvinfo,
.get_coalesce = ipoib_get_coalesce,
.set_coalesce = ipoib_set_coalesce,
+ .get_strings = ipoib_get_strings,
+ .get_ethtool_stats = ipoib_get_ethtool_stats,
+ .get_sset_count = ipoib_get_sset_count,
};
void ipoib_set_ethtool_ops(struct net_device *dev)
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_ib.c b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
index f0e55e47eb54..45c40a17d6a6 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_ib.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_ib.c
@@ -51,8 +51,6 @@ MODULE_PARM_DESC(data_debug_level,
"Enable data path debug tracing if > 0");
#endif
-static DEFINE_MUTEX(pkey_mutex);
-
struct ipoib_ah *ipoib_create_ah(struct net_device *dev,
struct ib_pd *pd, struct ib_ah_attr *attr)
{
@@ -637,7 +635,7 @@ void ipoib_send(struct net_device *dev, struct sk_buff *skb,
if (netif_queue_stopped(dev))
netif_wake_queue(dev);
} else {
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
address->last_send = priv->tx_head;
++priv->tx_head;
@@ -999,6 +997,106 @@ static inline int update_child_pkey(struct ipoib_dev_priv *priv)
return 0;
}
+/*
+ * Returns true if the device address of the ipoib interface has changed and the
+ * new address is a valid one (i.e. it is in the GID table); returns false otherwise.
+ */
+static bool ipoib_dev_addr_changed_valid(struct ipoib_dev_priv *priv)
+{
+ union ib_gid search_gid;
+ union ib_gid gid0;
+ union ib_gid *netdev_gid;
+ int err;
+ u16 index;
+ u8 port;
+ bool ret = false;
+
+ netdev_gid = (union ib_gid *)(priv->dev->dev_addr + 4);
+ if (ib_query_gid(priv->ca, priv->port, 0, &gid0, NULL))
+ return false;
+
+ netif_addr_lock(priv->dev);
+
+ /* The subnet prefix may have changed, update it now so we won't have
+ * to do it later
+ */
+ priv->local_gid.global.subnet_prefix = gid0.global.subnet_prefix;
+ netdev_gid->global.subnet_prefix = gid0.global.subnet_prefix;
+ search_gid.global.subnet_prefix = gid0.global.subnet_prefix;
+
+ search_gid.global.interface_id = priv->local_gid.global.interface_id;
+
+ netif_addr_unlock(priv->dev);
+
+ err = ib_find_gid(priv->ca, &search_gid, IB_GID_TYPE_IB,
+ priv->dev, &port, &index);
+
+ netif_addr_lock(priv->dev);
+
+ if (search_gid.global.interface_id !=
+ priv->local_gid.global.interface_id)
+ /* There was a change while we were looking up the gid, bail
+ * here and let the next work sort this out
+ */
+ goto out;
+
+ /* The next section of code needs some background:
+ * Per IB spec the port GUID can't change if the HCA is powered on.
+ * port GUID is the basis for GID at index 0 which is the basis for
+ * the default device address of an ipoib interface.
+ *
+ * so it seems the flow should be:
+ * if user_changed_dev_addr && gid in gid tbl
+ * set bit dev_addr_set
+ * return true
+ * else
+ * return false
+ *
+ * The issue is that there are devices that don't follow the spec,
+ * they change the port GUID when the HCA is powered, so in order
+ * not to break userspace applications, we need to check if the
+ * user wanted to control the device address and we assume that
+ * if he sets the device address back to be based on GID index 0,
+ * he no longer wishes to control it.
+ *
+ * If the user doesn't control the device address,
+ * IPOIB_FLAG_DEV_ADDR_SET is set and ib_find_gid failed, it means
+ * the port GUID has changed and GID at index 0 has changed
+ * so we need to change priv->local_gid and priv->dev->dev_addr
+ * to reflect the new GID.
+ */
+ if (!test_bit(IPOIB_FLAG_DEV_ADDR_SET, &priv->flags)) {
+ if (!err && port == priv->port) {
+ set_bit(IPOIB_FLAG_DEV_ADDR_SET, &priv->flags);
+ if (index == 0)
+ clear_bit(IPOIB_FLAG_DEV_ADDR_CTRL,
+ &priv->flags);
+ else
+ set_bit(IPOIB_FLAG_DEV_ADDR_CTRL, &priv->flags);
+ ret = true;
+ } else {
+ ret = false;
+ }
+ } else {
+ if (!err && port == priv->port) {
+ ret = true;
+ } else {
+ if (!test_bit(IPOIB_FLAG_DEV_ADDR_CTRL, &priv->flags)) {
+ memcpy(&priv->local_gid, &gid0,
+ sizeof(priv->local_gid));
+ memcpy(priv->dev->dev_addr + 4, &gid0,
+ sizeof(priv->local_gid));
+ ret = true;
+ }
+ }
+ }
+
+out:
+ netif_addr_unlock(priv->dev);
+
+ return ret;
+}
+
static void __ipoib_ib_dev_flush(struct ipoib_dev_priv *priv,
enum ipoib_flush_level level,
int nesting)
@@ -1020,6 +1118,9 @@ static void __ipoib_ib_dev_flush(struct ipoib_dev_priv *priv,
if (!test_bit(IPOIB_FLAG_INITIALIZED, &priv->flags) &&
level != IPOIB_FLUSH_HEAVY) {
+ /* Make sure the dev_addr is set even if not flushing */
+ if (level == IPOIB_FLUSH_LIGHT)
+ ipoib_dev_addr_changed_valid(priv);
ipoib_dbg(priv, "Not flushing - IPOIB_FLAG_INITIALIZED not set.\n");
return;
}
@@ -1031,7 +1132,8 @@ static void __ipoib_ib_dev_flush(struct ipoib_dev_priv *priv,
update_parent_pkey(priv);
else
update_child_pkey(priv);
- }
+ } else if (level == IPOIB_FLUSH_LIGHT)
+ ipoib_dev_addr_changed_valid(priv);
ipoib_dbg(priv, "Not flushing - IPOIB_FLAG_ADMIN_UP not set.\n");
return;
}
@@ -1083,7 +1185,8 @@ static void __ipoib_ib_dev_flush(struct ipoib_dev_priv *priv,
if (test_bit(IPOIB_FLAG_ADMIN_UP, &priv->flags)) {
if (level >= IPOIB_FLUSH_NORMAL)
ipoib_ib_dev_up(dev);
- ipoib_mcast_restart_task(&priv->restart_task);
+ if (ipoib_dev_addr_changed_valid(priv))
+ ipoib_mcast_restart_task(&priv->restart_task);
}
}
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_main.c b/drivers/infiniband/ulp/ipoib/ipoib_main.c
index 80807d6e5c4c..2d7c16346648 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_main.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_main.c
@@ -99,6 +99,7 @@ static struct net_device *ipoib_get_net_dev_by_params(
struct ib_device *dev, u8 port, u16 pkey,
const union ib_gid *gid, const struct sockaddr *addr,
void *client_data);
+static int ipoib_set_mac(struct net_device *dev, void *addr);
static struct ib_client ipoib_client = {
.name = "ipoib",
@@ -117,6 +118,8 @@ int ipoib_open(struct net_device *dev)
set_bit(IPOIB_FLAG_ADMIN_UP, &priv->flags);
+ priv->sm_fullmember_sendonly_support = false;
+
if (ipoib_ib_dev_open(dev)) {
if (!test_bit(IPOIB_PKEY_ASSIGNED, &priv->flags))
return 0;
@@ -629,6 +632,77 @@ void ipoib_mark_paths_invalid(struct net_device *dev)
spin_unlock_irq(&priv->lock);
}
+struct classport_info_context {
+ struct ipoib_dev_priv *priv;
+ struct completion done;
+ struct ib_sa_query *sa_query;
+};
+
+static void classport_info_query_cb(int status, struct ib_class_port_info *rec,
+ void *context)
+{
+ struct classport_info_context *cb_ctx = context;
+ struct ipoib_dev_priv *priv;
+
+ WARN_ON(!context);
+
+ priv = cb_ctx->priv;
+
+ if (status || !rec) {
+ pr_debug("device: %s failed query classport_info status: %d\n",
+ priv->dev->name, status);
+ /* keeps the default, will try next mcast_restart */
+ priv->sm_fullmember_sendonly_support = false;
+ goto out;
+ }
+
+ if (ib_get_cpi_capmask2(rec) &
+ IB_SA_CAP_MASK2_SENDONLY_FULL_MEM_SUPPORT) {
+ pr_debug("device: %s enabled fullmember-sendonly for sendonly MCG\n",
+ priv->dev->name);
+ priv->sm_fullmember_sendonly_support = true;
+ } else {
+ pr_debug("device: %s disabled fullmember-sendonly for sendonly MCG\n",
+ priv->dev->name);
+ priv->sm_fullmember_sendonly_support = false;
+ }
+
+out:
+ complete(&cb_ctx->done);
+}
+
+int ipoib_check_sm_sendonly_fullmember_support(struct ipoib_dev_priv *priv)
+{
+ struct classport_info_context *callback_context;
+ int ret;
+
+ callback_context = kmalloc(sizeof(*callback_context), GFP_KERNEL);
+ if (!callback_context)
+ return -ENOMEM;
+
+ callback_context->priv = priv;
+ init_completion(&callback_context->done);
+
+ ret = ib_sa_classport_info_rec_query(&ipoib_sa_client,
+ priv->ca, priv->port, 3000,
+ GFP_KERNEL,
+ classport_info_query_cb,
+ callback_context,
+ &callback_context->sa_query);
+ if (ret < 0) {
+ pr_info("%s failed to send ib_sa_classport_info query, ret: %d\n",
+ priv->dev->name, ret);
+ kfree(callback_context);
+ return ret;
+ }
+
+ /* wait for the callback to finish before returning */
+ wait_for_completion(&callback_context->done);
+ kfree(callback_context);
+
+ return ret;
+}
+
void ipoib_flush_paths(struct net_device *dev)
{
struct ipoib_dev_priv *priv = netdev_priv(dev);
@@ -1036,7 +1110,7 @@ static void ipoib_timeout(struct net_device *dev)
struct ipoib_dev_priv *priv = netdev_priv(dev);
ipoib_warn(priv, "transmit timeout: latency %d msecs\n",
- jiffies_to_msecs(jiffies - dev->trans_start));
+ jiffies_to_msecs(jiffies - dev_trans_start(dev)));
ipoib_warn(priv, "queue stopped %d, tx_head %u, tx_tail %u\n",
netif_queue_stopped(dev),
priv->tx_head, priv->tx_tail);
@@ -1649,6 +1723,7 @@ static const struct net_device_ops ipoib_netdev_ops_pf = {
.ndo_get_vf_config = ipoib_get_vf_config,
.ndo_get_vf_stats = ipoib_get_vf_stats,
.ndo_set_vf_guid = ipoib_set_vf_guid,
+ .ndo_set_mac_address = ipoib_set_mac,
};
static const struct net_device_ops ipoib_netdev_ops_vf = {
@@ -1771,6 +1846,70 @@ int ipoib_add_umcast_attr(struct net_device *dev)
return device_create_file(&dev->dev, &dev_attr_umcast);
}
+static void set_base_guid(struct ipoib_dev_priv *priv, union ib_gid *gid)
+{
+ struct ipoib_dev_priv *child_priv;
+ struct net_device *netdev = priv->dev;
+
+ netif_addr_lock(netdev);
+
+ memcpy(&priv->local_gid.global.interface_id,
+ &gid->global.interface_id,
+ sizeof(gid->global.interface_id));
+ memcpy(netdev->dev_addr + 4, &priv->local_gid, sizeof(priv->local_gid));
+ clear_bit(IPOIB_FLAG_DEV_ADDR_SET, &priv->flags);
+
+ netif_addr_unlock(netdev);
+
+ if (!test_bit(IPOIB_FLAG_SUBINTERFACE, &priv->flags)) {
+ down_read(&priv->vlan_rwsem);
+ list_for_each_entry(child_priv, &priv->child_intfs, list)
+ set_base_guid(child_priv, gid);
+ up_read(&priv->vlan_rwsem);
+ }
+}
+
+static int ipoib_check_lladdr(struct net_device *dev,
+ struct sockaddr_storage *ss)
+{
+ union ib_gid *gid = (union ib_gid *)(ss->__data + 4);
+ int ret = 0;
+
+ netif_addr_lock(dev);
+
+ /* Make sure the QPN, reserved and subnet prefix match the current
+ * lladdr; it also makes sure the lladdr is unicast.
+ */
+ if (memcmp(dev->dev_addr, ss->__data,
+ 4 + sizeof(gid->global.subnet_prefix)) ||
+ gid->global.interface_id == 0)
+ ret = -EINVAL;
+
+ netif_addr_unlock(dev);
+
+ return ret;
+}
+
+static int ipoib_set_mac(struct net_device *dev, void *addr)
+{
+ struct ipoib_dev_priv *priv = netdev_priv(dev);
+ struct sockaddr_storage *ss = addr;
+ int ret;
+
+ if (!(dev->priv_flags & IFF_LIVE_ADDR_CHANGE) && netif_running(dev))
+ return -EBUSY;
+
+ ret = ipoib_check_lladdr(dev, ss);
+ if (ret)
+ return ret;
+
+ set_base_guid(priv, (union ib_gid *)(ss->__data + 4));
+
+ queue_work(ipoib_workqueue, &priv->flush_light);
+
+ return 0;
+}
+
static ssize_t create_child(struct device *dev,
struct device_attribute *attr,
const char *buf, size_t count)
@@ -1894,6 +2033,7 @@ static struct net_device *ipoib_add_port(const char *format,
goto device_init_failed;
} else
memcpy(priv->dev->dev_addr + 4, priv->local_gid.raw, sizeof (union ib_gid));
+ set_bit(IPOIB_FLAG_DEV_ADDR_SET, &priv->flags);
result = ipoib_dev_init(priv->dev, hca, port);
if (result < 0) {
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
index 25889311b1e9..82fbc9442608 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_multicast.c
@@ -64,6 +64,9 @@ struct ipoib_mcast_iter {
unsigned int send_only;
};
+/* join state that allows creating mcg with sendonly member request */
+#define SENDONLY_FULLMEMBER_JOIN 8
+
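For reference, the value 8 corresponds to the SendOnlyFullMember bit of the MCMemberRecord JoinState field, while the older, disabled code removed further down used 4 (SendOnlyNonMember). A small sketch of the bit layout as commonly described; the enum and its names are illustrative, not kernel definitions:

/* Sketch: MCMemberRecord JoinState bits (illustrative names). */
enum example_mcast_join_state {
	EXAMPLE_JOIN_FULL_MEMBER		= 1 << 0,
	EXAMPLE_JOIN_NON_MEMBER			= 1 << 1,
	EXAMPLE_JOIN_SENDONLY_NON_MEMBER	= 1 << 2,
	EXAMPLE_JOIN_SENDONLY_FULL_MEMBER	= 1 << 3, /* == SENDONLY_FULLMEMBER_JOIN */
};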
/*
* This should be called with the priv->lock held
*/
@@ -326,12 +329,23 @@ void ipoib_mcast_carrier_on_task(struct work_struct *work)
struct ipoib_dev_priv *priv = container_of(work, struct ipoib_dev_priv,
carrier_on_task);
struct ib_port_attr attr;
+ int ret;
if (ib_query_port(priv->ca, priv->port, &attr) ||
attr.state != IB_PORT_ACTIVE) {
ipoib_dbg(priv, "Keeping carrier off until IB port is active\n");
return;
}
+ /*
+ * Check if we can send sendonly MCGs with the sendonly-fullmember join state.
+ * It is done here, after the successful join to the broadcast group,
+ * because the broadcast group must always be joined first and is always
+ * re-joined if the SM changes substantially.
+ */
+ ret = ipoib_check_sm_sendonly_fullmember_support(priv);
+ if (ret < 0)
+ pr_debug("%s failed query sm support for sendonly-fullmember (ret: %d)\n",
+ priv->dev->name, ret);
/*
* Take rtnl_lock to avoid racing with ipoib_stop() and
@@ -515,22 +529,20 @@ static int ipoib_mcast_join(struct net_device *dev, struct ipoib_mcast *mcast)
rec.hop_limit = priv->broadcast->mcmember.hop_limit;
/*
- * Send-only IB Multicast joins do not work at the core
- * IB layer yet, so we can't use them here. However,
- * we are emulating an Ethernet multicast send, which
- * does not require a multicast subscription and will
- * still send properly. The most appropriate thing to
+ * Send-only IB Multicast joins work at the core IB layer but
+ * require specific SM support.
+ * We can use such joins here only if the current SM supports that feature.
+ * However, if not, we emulate an Ethernet multicast send,
+ * which does not require a multicast subscription and will
+ * still send properly. The most appropriate thing to
* do is to create the group if it doesn't exist as that
* most closely emulates the behavior, from a user space
- * application perspecitive, of Ethernet multicast
- * operation. For now, we do a full join, maybe later
- * when the core IB layers support send only joins we
- * will use them.
+ * application perspective, of Ethernet multicast operation.
*/
-#if 0
- if (test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags))
- rec.join_state = 4;
-#endif
+ if (test_bit(IPOIB_MCAST_FLAG_SENDONLY, &mcast->flags) &&
+ priv->sm_fullmember_sendonly_support)
+ /* The SM supports sendonly-fullmember, otherwise fall back to full-member */
+ rec.join_state = SENDONLY_FULLMEMBER_JOIN;
}
spin_unlock_irq(&priv->lock);
@@ -570,11 +582,13 @@ void ipoib_mcast_join_task(struct work_struct *work)
return;
}
priv->local_lid = port_attr.lid;
+ netif_addr_lock(dev);
- if (ib_query_gid(priv->ca, priv->port, 0, &priv->local_gid, NULL))
- ipoib_warn(priv, "ib_query_gid() failed\n");
- else
- memcpy(priv->dev->dev_addr + 4, priv->local_gid.raw, sizeof (union ib_gid));
+ if (!test_bit(IPOIB_FLAG_DEV_ADDR_SET, &priv->flags)) {
+ netif_addr_unlock(dev);
+ return;
+ }
+ netif_addr_unlock(dev);
spin_lock_irq(&priv->lock);
if (!test_bit(IPOIB_FLAG_OPER_UP, &priv->flags))
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_verbs.c b/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
index b809c373e40e..1e7cbbaa15bd 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_verbs.c
@@ -307,5 +307,8 @@ void ipoib_event(struct ib_event_handler *handler,
queue_work(ipoib_workqueue, &priv->flush_normal);
} else if (record->event == IB_EVENT_PKEY_CHANGE) {
queue_work(ipoib_workqueue, &priv->flush_heavy);
+ } else if (record->event == IB_EVENT_GID_CHANGE &&
+ !test_bit(IPOIB_FLAG_DEV_ADDR_SET, &priv->flags)) {
+ queue_work(ipoib_workqueue, &priv->flush_light);
}
}
diff --git a/drivers/infiniband/ulp/ipoib/ipoib_vlan.c b/drivers/infiniband/ulp/ipoib/ipoib_vlan.c
index fca1a882de27..64a35595eab8 100644
--- a/drivers/infiniband/ulp/ipoib/ipoib_vlan.c
+++ b/drivers/infiniband/ulp/ipoib/ipoib_vlan.c
@@ -68,6 +68,8 @@ int __ipoib_vlan_add(struct ipoib_dev_priv *ppriv, struct ipoib_dev_priv *priv,
priv->pkey = pkey;
memcpy(priv->dev->dev_addr, ppriv->dev->dev_addr, INFINIBAND_ALEN);
+ memcpy(&priv->local_gid, &ppriv->local_gid, sizeof(priv->local_gid));
+ set_bit(IPOIB_FLAG_DEV_ADDR_SET, &priv->flags);
priv->dev->broadcast[8] = pkey >> 8;
priv->dev->broadcast[9] = pkey & 0xff;
diff --git a/drivers/infiniband/ulp/iser/iser_memory.c b/drivers/infiniband/ulp/iser/iser_memory.c
index 9a391cc5b9b3..90be56893414 100644
--- a/drivers/infiniband/ulp/iser/iser_memory.c
+++ b/drivers/infiniband/ulp/iser/iser_memory.c
@@ -236,7 +236,7 @@ int iser_fast_reg_fmr(struct iscsi_iser_task *iser_task,
page_vec->npages = 0;
page_vec->fake_mr.page_size = SIZE_4K;
plen = ib_sg_to_pages(&page_vec->fake_mr, mem->sg,
- mem->size, iser_set_page);
+ mem->size, NULL, iser_set_page);
if (unlikely(plen < mem->size)) {
iser_err("page vec too short to hold this SG\n");
iser_data_buf_dump(mem, device->ib_device);
@@ -446,7 +446,7 @@ static int iser_fast_reg_mr(struct iscsi_iser_task *iser_task,
ib_update_fast_reg_key(mr, ib_inc_rkey(mr->rkey));
- n = ib_map_mr_sg(mr, mem->sg, mem->size, SIZE_4K);
+ n = ib_map_mr_sg(mr, mem->sg, mem->size, NULL, SIZE_4K);
if (unlikely(n != mem->size)) {
iser_err("failed to map sg (%d/%d)\n",
n, mem->size);
diff --git a/drivers/infiniband/ulp/isert/ib_isert.c b/drivers/infiniband/ulp/isert/ib_isert.c
index 411e4464ca23..a990c04208c9 100644
--- a/drivers/infiniband/ulp/isert/ib_isert.c
+++ b/drivers/infiniband/ulp/isert/ib_isert.c
@@ -33,7 +33,8 @@
#define ISERT_MAX_CONN 8
#define ISER_MAX_RX_CQ_LEN (ISERT_QP_MAX_RECV_DTOS * ISERT_MAX_CONN)
-#define ISER_MAX_TX_CQ_LEN (ISERT_QP_MAX_REQ_DTOS * ISERT_MAX_CONN)
+#define ISER_MAX_TX_CQ_LEN \
+ ((ISERT_QP_MAX_REQ_DTOS + ISCSI_DEF_XMIT_CMDS_MAX) * ISERT_MAX_CONN)
#define ISER_MAX_CQ_LEN (ISER_MAX_RX_CQ_LEN + ISER_MAX_TX_CQ_LEN + \
ISERT_MAX_CONN)
@@ -46,14 +47,6 @@ static LIST_HEAD(device_list);
static struct workqueue_struct *isert_comp_wq;
static struct workqueue_struct *isert_release_wq;
-static void
-isert_unmap_cmd(struct isert_cmd *isert_cmd, struct isert_conn *isert_conn);
-static int
-isert_map_rdma(struct isert_cmd *isert_cmd, struct iscsi_conn *conn);
-static void
-isert_unreg_rdma(struct isert_cmd *isert_cmd, struct isert_conn *isert_conn);
-static int
-isert_reg_rdma(struct isert_cmd *isert_cmd, struct iscsi_conn *conn);
static int
isert_put_response(struct iscsi_conn *conn, struct iscsi_cmd *cmd);
static int
@@ -142,6 +135,7 @@ isert_create_qp(struct isert_conn *isert_conn,
attr.recv_cq = comp->cq;
attr.cap.max_send_wr = ISERT_QP_MAX_REQ_DTOS + 1;
attr.cap.max_recv_wr = ISERT_QP_MAX_RECV_DTOS + 1;
+ attr.cap.max_rdma_ctxs = ISCSI_DEF_XMIT_CMDS_MAX;
attr.cap.max_send_sge = device->ib_device->attrs.max_sge;
isert_conn->max_sge = min(device->ib_device->attrs.max_sge,
device->ib_device->attrs.max_sge_rd);
@@ -270,9 +264,9 @@ isert_alloc_comps(struct isert_device *device)
device->ib_device->num_comp_vectors));
isert_info("Using %d CQs, %s supports %d vectors support "
- "Fast registration %d pi_capable %d\n",
+ "pi_capable %d\n",
device->comps_used, device->ib_device->name,
- device->ib_device->num_comp_vectors, device->use_fastreg,
+ device->ib_device->num_comp_vectors,
device->pi_capable);
device->comps = kcalloc(device->comps_used, sizeof(struct isert_comp),
@@ -313,18 +307,6 @@ isert_create_device_ib_res(struct isert_device *device)
isert_dbg("devattr->max_sge: %d\n", ib_dev->attrs.max_sge);
isert_dbg("devattr->max_sge_rd: %d\n", ib_dev->attrs.max_sge_rd);
- /* asign function handlers */
- if (ib_dev->attrs.device_cap_flags & IB_DEVICE_MEM_MGT_EXTENSIONS &&
- ib_dev->attrs.device_cap_flags & IB_DEVICE_SIGNATURE_HANDOVER) {
- device->use_fastreg = 1;
- device->reg_rdma_mem = isert_reg_rdma;
- device->unreg_rdma_mem = isert_unreg_rdma;
- } else {
- device->use_fastreg = 0;
- device->reg_rdma_mem = isert_map_rdma;
- device->unreg_rdma_mem = isert_unmap_cmd;
- }
-
ret = isert_alloc_comps(device);
if (ret)
goto out;
@@ -417,146 +399,6 @@ isert_device_get(struct rdma_cm_id *cma_id)
}
static void
-isert_conn_free_fastreg_pool(struct isert_conn *isert_conn)
-{
- struct fast_reg_descriptor *fr_desc, *tmp;
- int i = 0;
-
- if (list_empty(&isert_conn->fr_pool))
- return;
-
- isert_info("Freeing conn %p fastreg pool", isert_conn);
-
- list_for_each_entry_safe(fr_desc, tmp,
- &isert_conn->fr_pool, list) {
- list_del(&fr_desc->list);
- ib_dereg_mr(fr_desc->data_mr);
- if (fr_desc->pi_ctx) {
- ib_dereg_mr(fr_desc->pi_ctx->prot_mr);
- ib_dereg_mr(fr_desc->pi_ctx->sig_mr);
- kfree(fr_desc->pi_ctx);
- }
- kfree(fr_desc);
- ++i;
- }
-
- if (i < isert_conn->fr_pool_size)
- isert_warn("Pool still has %d regions registered\n",
- isert_conn->fr_pool_size - i);
-}
-
-static int
-isert_create_pi_ctx(struct fast_reg_descriptor *desc,
- struct ib_device *device,
- struct ib_pd *pd)
-{
- struct pi_context *pi_ctx;
- int ret;
-
- pi_ctx = kzalloc(sizeof(*desc->pi_ctx), GFP_KERNEL);
- if (!pi_ctx) {
- isert_err("Failed to allocate pi context\n");
- return -ENOMEM;
- }
-
- pi_ctx->prot_mr = ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG,
- ISCSI_ISER_SG_TABLESIZE);
- if (IS_ERR(pi_ctx->prot_mr)) {
- isert_err("Failed to allocate prot frmr err=%ld\n",
- PTR_ERR(pi_ctx->prot_mr));
- ret = PTR_ERR(pi_ctx->prot_mr);
- goto err_pi_ctx;
- }
- desc->ind |= ISERT_PROT_KEY_VALID;
-
- pi_ctx->sig_mr = ib_alloc_mr(pd, IB_MR_TYPE_SIGNATURE, 2);
- if (IS_ERR(pi_ctx->sig_mr)) {
- isert_err("Failed to allocate signature enabled mr err=%ld\n",
- PTR_ERR(pi_ctx->sig_mr));
- ret = PTR_ERR(pi_ctx->sig_mr);
- goto err_prot_mr;
- }
-
- desc->pi_ctx = pi_ctx;
- desc->ind |= ISERT_SIG_KEY_VALID;
- desc->ind &= ~ISERT_PROTECTED;
-
- return 0;
-
-err_prot_mr:
- ib_dereg_mr(pi_ctx->prot_mr);
-err_pi_ctx:
- kfree(pi_ctx);
-
- return ret;
-}
-
-static int
-isert_create_fr_desc(struct ib_device *ib_device, struct ib_pd *pd,
- struct fast_reg_descriptor *fr_desc)
-{
- fr_desc->data_mr = ib_alloc_mr(pd, IB_MR_TYPE_MEM_REG,
- ISCSI_ISER_SG_TABLESIZE);
- if (IS_ERR(fr_desc->data_mr)) {
- isert_err("Failed to allocate data frmr err=%ld\n",
- PTR_ERR(fr_desc->data_mr));
- return PTR_ERR(fr_desc->data_mr);
- }
- fr_desc->ind |= ISERT_DATA_KEY_VALID;
-
- isert_dbg("Created fr_desc %p\n", fr_desc);
-
- return 0;
-}
-
-static int
-isert_conn_create_fastreg_pool(struct isert_conn *isert_conn)
-{
- struct fast_reg_descriptor *fr_desc;
- struct isert_device *device = isert_conn->device;
- struct se_session *se_sess = isert_conn->conn->sess->se_sess;
- struct se_node_acl *se_nacl = se_sess->se_node_acl;
- int i, ret, tag_num;
- /*
- * Setup the number of FRMRs based upon the number of tags
- * available to session in iscsi_target_locate_portal().
- */
- tag_num = max_t(u32, ISCSIT_MIN_TAGS, se_nacl->queue_depth);
- tag_num = (tag_num * 2) + ISCSIT_EXTRA_TAGS;
-
- isert_conn->fr_pool_size = 0;
- for (i = 0; i < tag_num; i++) {
- fr_desc = kzalloc(sizeof(*fr_desc), GFP_KERNEL);
- if (!fr_desc) {
- isert_err("Failed to allocate fast_reg descriptor\n");
- ret = -ENOMEM;
- goto err;
- }
-
- ret = isert_create_fr_desc(device->ib_device,
- device->pd, fr_desc);
- if (ret) {
- isert_err("Failed to create fastreg descriptor err=%d\n",
- ret);
- kfree(fr_desc);
- goto err;
- }
-
- list_add_tail(&fr_desc->list, &isert_conn->fr_pool);
- isert_conn->fr_pool_size++;
- }
-
- isert_dbg("Creating conn %p fastreg pool size=%d",
- isert_conn, isert_conn->fr_pool_size);
-
- return 0;
-
-err:
- isert_conn_free_fastreg_pool(isert_conn);
- return ret;
-}
-
-static void
isert_init_conn(struct isert_conn *isert_conn)
{
isert_conn->state = ISER_CONN_INIT;
@@ -565,8 +407,6 @@ isert_init_conn(struct isert_conn *isert_conn)
init_completion(&isert_conn->login_req_comp);
kref_init(&isert_conn->kref);
mutex_init(&isert_conn->mutex);
- spin_lock_init(&isert_conn->pool_lock);
- INIT_LIST_HEAD(&isert_conn->fr_pool);
INIT_WORK(&isert_conn->release_work, isert_release_work);
}
@@ -739,9 +579,6 @@ isert_connect_release(struct isert_conn *isert_conn)
BUG_ON(!device);
- if (device->use_fastreg)
- isert_conn_free_fastreg_pool(isert_conn);
-
isert_free_rx_descriptors(isert_conn);
if (isert_conn->cm_id)
rdma_destroy_id(isert_conn->cm_id);
@@ -1080,7 +917,6 @@ isert_init_send_wr(struct isert_conn *isert_conn, struct isert_cmd *isert_cmd,
{
struct iser_tx_desc *tx_desc = &isert_cmd->tx_desc;
- isert_cmd->iser_ib_op = ISER_IB_SEND;
tx_desc->tx_cqe.done = isert_send_done;
send_wr->wr_cqe = &tx_desc->tx_cqe;
@@ -1160,16 +996,6 @@ isert_put_login_tx(struct iscsi_conn *conn, struct iscsi_login *login,
}
if (!login->login_failed) {
if (login->login_complete) {
- if (!conn->sess->sess_ops->SessionType &&
- isert_conn->device->use_fastreg) {
- ret = isert_conn_create_fastreg_pool(isert_conn);
- if (ret) {
- isert_err("Conn: %p failed to create"
- " fastreg pool\n", isert_conn);
- return ret;
- }
- }
-
ret = isert_alloc_rx_descriptors(isert_conn);
if (ret)
return ret;
@@ -1633,97 +1459,26 @@ isert_login_recv_done(struct ib_cq *cq, struct ib_wc *wc)
ISER_RX_PAYLOAD_SIZE, DMA_FROM_DEVICE);
}
-static int
-isert_map_data_buf(struct isert_conn *isert_conn, struct isert_cmd *isert_cmd,
- struct scatterlist *sg, u32 nents, u32 length, u32 offset,
- enum iser_ib_op_code op, struct isert_data_buf *data)
-{
- struct ib_device *ib_dev = isert_conn->cm_id->device;
-
- data->dma_dir = op == ISER_IB_RDMA_WRITE ?
- DMA_TO_DEVICE : DMA_FROM_DEVICE;
-
- data->len = length - offset;
- data->offset = offset;
- data->sg_off = data->offset / PAGE_SIZE;
-
- data->sg = &sg[data->sg_off];
- data->nents = min_t(unsigned int, nents - data->sg_off,
- ISCSI_ISER_SG_TABLESIZE);
- data->len = min_t(unsigned int, data->len, ISCSI_ISER_SG_TABLESIZE *
- PAGE_SIZE);
-
- data->dma_nents = ib_dma_map_sg(ib_dev, data->sg, data->nents,
- data->dma_dir);
- if (unlikely(!data->dma_nents)) {
- isert_err("Cmd: unable to dma map SGs %p\n", sg);
- return -EINVAL;
- }
-
- isert_dbg("Mapped cmd: %p count: %u sg: %p sg_nents: %u rdma_len %d\n",
- isert_cmd, data->dma_nents, data->sg, data->nents, data->len);
-
- return 0;
-}
-
-static void
-isert_unmap_data_buf(struct isert_conn *isert_conn, struct isert_data_buf *data)
-{
- struct ib_device *ib_dev = isert_conn->cm_id->device;
-
- ib_dma_unmap_sg(ib_dev, data->sg, data->nents, data->dma_dir);
- memset(data, 0, sizeof(*data));
-}
-
-
-
static void
-isert_unmap_cmd(struct isert_cmd *isert_cmd, struct isert_conn *isert_conn)
+isert_rdma_rw_ctx_destroy(struct isert_cmd *cmd, struct isert_conn *conn)
{
- isert_dbg("Cmd %p\n", isert_cmd);
+ struct se_cmd *se_cmd = &cmd->iscsi_cmd->se_cmd;
+ enum dma_data_direction dir = target_reverse_dma_direction(se_cmd);
- if (isert_cmd->data.sg) {
- isert_dbg("Cmd %p unmap_sg op\n", isert_cmd);
- isert_unmap_data_buf(isert_conn, &isert_cmd->data);
- }
-
- if (isert_cmd->rdma_wr) {
- isert_dbg("Cmd %p free send_wr\n", isert_cmd);
- kfree(isert_cmd->rdma_wr);
- isert_cmd->rdma_wr = NULL;
- }
-
- if (isert_cmd->ib_sge) {
- isert_dbg("Cmd %p free ib_sge\n", isert_cmd);
- kfree(isert_cmd->ib_sge);
- isert_cmd->ib_sge = NULL;
- }
-}
-
-static void
-isert_unreg_rdma(struct isert_cmd *isert_cmd, struct isert_conn *isert_conn)
-{
- isert_dbg("Cmd %p\n", isert_cmd);
-
- if (isert_cmd->fr_desc) {
- isert_dbg("Cmd %p free fr_desc %p\n", isert_cmd, isert_cmd->fr_desc);
- if (isert_cmd->fr_desc->ind & ISERT_PROTECTED) {
- isert_unmap_data_buf(isert_conn, &isert_cmd->prot);
- isert_cmd->fr_desc->ind &= ~ISERT_PROTECTED;
- }
- spin_lock_bh(&isert_conn->pool_lock);
- list_add_tail(&isert_cmd->fr_desc->list, &isert_conn->fr_pool);
- spin_unlock_bh(&isert_conn->pool_lock);
- isert_cmd->fr_desc = NULL;
- }
+ if (!cmd->rw.nr_ops)
+ return;
- if (isert_cmd->data.sg) {
- isert_dbg("Cmd %p unmap_sg op\n", isert_cmd);
- isert_unmap_data_buf(isert_conn, &isert_cmd->data);
+ if (isert_prot_cmd(conn, se_cmd)) {
+ rdma_rw_ctx_destroy_signature(&cmd->rw, conn->qp,
+ conn->cm_id->port_num, se_cmd->t_data_sg,
+ se_cmd->t_data_nents, se_cmd->t_prot_sg,
+ se_cmd->t_prot_nents, dir);
+ } else {
+ rdma_rw_ctx_destroy(&cmd->rw, conn->qp, conn->cm_id->port_num,
+ se_cmd->t_data_sg, se_cmd->t_data_nents, dir);
}
- isert_cmd->ib_sge = NULL;
- isert_cmd->rdma_wr = NULL;
+ cmd->rw.nr_ops = 0;
}
static void
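isert_rdma_rw_ctx_destroy() above is the teardown half of the rdma_rw API that replaces the removed FRWR pool; the matching setup/post side (not visible in this excerpt) follows roughly the shape below. A hedged sketch only, with illustrative names; isert's real code also handles signature MRs and chains the response work request:

#include <linux/dma-direction.h>
#include <rdma/rw.h>

/* Sketch: map a scatterlist for an RDMA READ, post the generated work
 * requests, and rely on the completion handler to call
 * rdma_rw_ctx_destroy() (as isert_rdma_rw_ctx_destroy() does above). */
static int example_rdma_read(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
			     u8 port_num, struct scatterlist *sg, u32 sg_cnt,
			     u64 remote_addr, u32 rkey, struct ib_cqe *cqe)
{
	int ret;

	ret = rdma_rw_ctx_init(ctx, qp, port_num, sg, sg_cnt, 0,
			       remote_addr, rkey, DMA_FROM_DEVICE);
	if (ret < 0)
		return ret;

	ret = rdma_rw_ctx_post(ctx, qp, port_num, cqe, NULL);
	if (ret)
		rdma_rw_ctx_destroy(ctx, qp, port_num, sg, sg_cnt,
				    DMA_FROM_DEVICE);
	return ret;
}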
@@ -1732,7 +1487,6 @@ isert_put_cmd(struct isert_cmd *isert_cmd, bool comp_err)
struct iscsi_cmd *cmd = isert_cmd->iscsi_cmd;
struct isert_conn *isert_conn = isert_cmd->conn;
struct iscsi_conn *conn = isert_conn->conn;
- struct isert_device *device = isert_conn->device;
struct iscsi_text_rsp *hdr;
isert_dbg("Cmd %p\n", isert_cmd);
@@ -1760,7 +1514,7 @@ isert_put_cmd(struct isert_cmd *isert_cmd, bool comp_err)
}
}
- device->unreg_rdma_mem(isert_cmd, isert_conn);
+ isert_rdma_rw_ctx_destroy(isert_cmd, isert_conn);
transport_generic_free_cmd(&cmd->se_cmd, 0);
break;
case ISCSI_OP_SCSI_TMFUNC:
@@ -1894,14 +1648,9 @@ isert_rdma_write_done(struct ib_cq *cq, struct ib_wc *wc)
isert_dbg("Cmd %p\n", isert_cmd);
- if (isert_cmd->fr_desc && isert_cmd->fr_desc->ind & ISERT_PROTECTED) {
- ret = isert_check_pi_status(cmd,
- isert_cmd->fr_desc->pi_ctx->sig_mr);
- isert_cmd->fr_desc->ind &= ~ISERT_PROTECTED;
- }
+ ret = isert_check_pi_status(cmd, isert_cmd->rw.sig->sig_mr);
+ isert_rdma_rw_ctx_destroy(isert_cmd, isert_conn);
- device->unreg_rdma_mem(isert_cmd, isert_conn);
- isert_cmd->rdma_wr_num = 0;
if (ret)
transport_send_check_condition_and_sense(cmd, cmd->pi_err, 0);
else
@@ -1929,16 +1678,12 @@ isert_rdma_read_done(struct ib_cq *cq, struct ib_wc *wc)
isert_dbg("Cmd %p\n", isert_cmd);
- if (isert_cmd->fr_desc && isert_cmd->fr_desc->ind & ISERT_PROTECTED) {
- ret = isert_check_pi_status(se_cmd,
- isert_cmd->fr_desc->pi_ctx->sig_mr);
- isert_cmd->fr_desc->ind &= ~ISERT_PROTECTED;
- }
-
iscsit_stop_dataout_timer(cmd);
- device->unreg_rdma_mem(isert_cmd, isert_conn);
- cmd->write_data_done = isert_cmd->data.len;
- isert_cmd->rdma_wr_num = 0;
+
+ if (isert_prot_cmd(isert_conn, se_cmd))
+ ret = isert_check_pi_status(se_cmd, isert_cmd->rw.sig->sig_mr);
+ isert_rdma_rw_ctx_destroy(isert_cmd, isert_conn);
+ cmd->write_data_done = 0;
isert_dbg("Cmd: %p RDMA_READ comp calling execute_cmd\n", isert_cmd);
spin_lock_bh(&cmd->istate_lock);
@@ -2111,7 +1856,6 @@ isert_aborted_task(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
{
struct isert_cmd *isert_cmd = iscsit_priv_cmd(cmd);
struct isert_conn *isert_conn = conn->context;
- struct isert_device *device = isert_conn->device;
spin_lock_bh(&conn->cmd_lock);
if (!list_empty(&cmd->i_conn_node))
@@ -2120,8 +1864,7 @@ isert_aborted_task(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
if (cmd->data_direction == DMA_TO_DEVICE)
iscsit_stop_dataout_timer(cmd);
-
- device->unreg_rdma_mem(isert_cmd, isert_conn);
+ isert_rdma_rw_ctx_destroy(isert_cmd, isert_conn);
}
static enum target_prot_op
@@ -2274,234 +2017,6 @@ isert_put_text_rsp(struct iscsi_cmd *cmd, struct iscsi_conn *conn)
return isert_post_response(isert_conn, isert_cmd);
}
-static int
-isert_build_rdma_wr(struct isert_conn *isert_conn, struct isert_cmd *isert_cmd,
- struct ib_sge *ib_sge, struct ib_rdma_wr *rdma_wr,
- u32 data_left, u32 offset)
-{
- struct iscsi_cmd *cmd = isert_cmd->iscsi_cmd;
- struct scatterlist *sg_start, *tmp_sg;
- struct isert_device *device = isert_conn->device;
- struct ib_device *ib_dev = device->ib_device;
- u32 sg_off, page_off;
- int i = 0, sg_nents;
-
- sg_off = offset / PAGE_SIZE;
- sg_start = &cmd->se_cmd.t_data_sg[sg_off];
- sg_nents = min(cmd->se_cmd.t_data_nents - sg_off, isert_conn->max_sge);
- page_off = offset % PAGE_SIZE;
-
- rdma_wr->wr.sg_list = ib_sge;
- rdma_wr->wr.wr_cqe = &isert_cmd->tx_desc.tx_cqe;
-
- /*
- * Perform mapping of TCM scatterlist memory ib_sge dma_addr.
- */
- for_each_sg(sg_start, tmp_sg, sg_nents, i) {
- isert_dbg("RDMA from SGL dma_addr: 0x%llx dma_len: %u, "
- "page_off: %u\n",
- (unsigned long long)tmp_sg->dma_address,
- tmp_sg->length, page_off);
-
- ib_sge->addr = ib_sg_dma_address(ib_dev, tmp_sg) + page_off;
- ib_sge->length = min_t(u32, data_left,
- ib_sg_dma_len(ib_dev, tmp_sg) - page_off);
- ib_sge->lkey = device->pd->local_dma_lkey;
-
- isert_dbg("RDMA ib_sge: addr: 0x%llx length: %u lkey: %x\n",
- ib_sge->addr, ib_sge->length, ib_sge->lkey);
- page_off = 0;
- data_left -= ib_sge->length;
- if (!data_left)
- break;
- ib_sge++;
- isert_dbg("Incrementing ib_sge pointer to %p\n", ib_sge);
- }
-
- rdma_wr->wr.num_sge = ++i;
- isert_dbg("Set outgoing sg_list: %p num_sg: %u from TCM SGLs\n",
- rdma_wr->wr.sg_list, rdma_wr->wr.num_sge);
-
- return rdma_wr->wr.num_sge;
-}
-
-static int
-isert_map_rdma(struct isert_cmd *isert_cmd, struct iscsi_conn *conn)
-{
- struct iscsi_cmd *cmd = isert_cmd->iscsi_cmd;
- struct se_cmd *se_cmd = &cmd->se_cmd;
- struct isert_conn *isert_conn = conn->context;
- struct isert_data_buf *data = &isert_cmd->data;
- struct ib_rdma_wr *rdma_wr;
- struct ib_sge *ib_sge;
- u32 offset, data_len, data_left, rdma_write_max, va_offset = 0;
- int ret = 0, i, ib_sge_cnt;
-
- offset = isert_cmd->iser_ib_op == ISER_IB_RDMA_READ ?
- cmd->write_data_done : 0;
- ret = isert_map_data_buf(isert_conn, isert_cmd, se_cmd->t_data_sg,
- se_cmd->t_data_nents, se_cmd->data_length,
- offset, isert_cmd->iser_ib_op,
- &isert_cmd->data);
- if (ret)
- return ret;
-
- data_left = data->len;
- offset = data->offset;
-
- ib_sge = kzalloc(sizeof(struct ib_sge) * data->nents, GFP_KERNEL);
- if (!ib_sge) {
- isert_warn("Unable to allocate ib_sge\n");
- ret = -ENOMEM;
- goto unmap_cmd;
- }
- isert_cmd->ib_sge = ib_sge;
-
- isert_cmd->rdma_wr_num = DIV_ROUND_UP(data->nents, isert_conn->max_sge);
- isert_cmd->rdma_wr = kzalloc(sizeof(struct ib_rdma_wr) *
- isert_cmd->rdma_wr_num, GFP_KERNEL);
- if (!isert_cmd->rdma_wr) {
- isert_dbg("Unable to allocate isert_cmd->rdma_wr\n");
- ret = -ENOMEM;
- goto unmap_cmd;
- }
-
- rdma_write_max = isert_conn->max_sge * PAGE_SIZE;
-
- for (i = 0; i < isert_cmd->rdma_wr_num; i++) {
- rdma_wr = &isert_cmd->rdma_wr[i];
- data_len = min(data_left, rdma_write_max);
-
- rdma_wr->wr.send_flags = 0;
- if (isert_cmd->iser_ib_op == ISER_IB_RDMA_WRITE) {
- isert_cmd->tx_desc.tx_cqe.done = isert_rdma_write_done;
-
- rdma_wr->wr.opcode = IB_WR_RDMA_WRITE;
- rdma_wr->remote_addr = isert_cmd->read_va + offset;
- rdma_wr->rkey = isert_cmd->read_stag;
- if (i + 1 == isert_cmd->rdma_wr_num)
- rdma_wr->wr.next = &isert_cmd->tx_desc.send_wr;
- else
- rdma_wr->wr.next = &isert_cmd->rdma_wr[i + 1].wr;
- } else {
- isert_cmd->tx_desc.tx_cqe.done = isert_rdma_read_done;
-
- rdma_wr->wr.opcode = IB_WR_RDMA_READ;
- rdma_wr->remote_addr = isert_cmd->write_va + va_offset;
- rdma_wr->rkey = isert_cmd->write_stag;
- if (i + 1 == isert_cmd->rdma_wr_num)
- rdma_wr->wr.send_flags = IB_SEND_SIGNALED;
- else
- rdma_wr->wr.next = &isert_cmd->rdma_wr[i + 1].wr;
- }
-
- ib_sge_cnt = isert_build_rdma_wr(isert_conn, isert_cmd, ib_sge,
- rdma_wr, data_len, offset);
- ib_sge += ib_sge_cnt;
-
- offset += data_len;
- va_offset += data_len;
- data_left -= data_len;
- }
-
- return 0;
-unmap_cmd:
- isert_unmap_data_buf(isert_conn, data);
-
- return ret;
-}
-
-static inline void
-isert_inv_rkey(struct ib_send_wr *inv_wr, struct ib_mr *mr)
-{
- u32 rkey;
-
- memset(inv_wr, 0, sizeof(*inv_wr));
- inv_wr->wr_cqe = NULL;
- inv_wr->opcode = IB_WR_LOCAL_INV;
- inv_wr->ex.invalidate_rkey = mr->rkey;
-
- /* Bump the key */
- rkey = ib_inc_rkey(mr->rkey);
- ib_update_fast_reg_key(mr, rkey);
-}
-
-static int
-isert_fast_reg_mr(struct isert_conn *isert_conn,
- struct fast_reg_descriptor *fr_desc,
- struct isert_data_buf *mem,
- enum isert_indicator ind,
- struct ib_sge *sge)
-{
- struct isert_device *device = isert_conn->device;
- struct ib_device *ib_dev = device->ib_device;
- struct ib_mr *mr;
- struct ib_reg_wr reg_wr;
- struct ib_send_wr inv_wr, *bad_wr, *wr = NULL;
- int ret, n;
-
- if (mem->dma_nents == 1) {
- sge->lkey = device->pd->local_dma_lkey;
- sge->addr = ib_sg_dma_address(ib_dev, &mem->sg[0]);
- sge->length = ib_sg_dma_len(ib_dev, &mem->sg[0]);
- isert_dbg("sge: addr: 0x%llx length: %u lkey: %x\n",
- sge->addr, sge->length, sge->lkey);
- return 0;
- }
-
- if (ind == ISERT_DATA_KEY_VALID)
- /* Registering data buffer */
- mr = fr_desc->data_mr;
- else
- /* Registering protection buffer */
- mr = fr_desc->pi_ctx->prot_mr;
-
- if (!(fr_desc->ind & ind)) {
- isert_inv_rkey(&inv_wr, mr);
- wr = &inv_wr;
- }
-
- n = ib_map_mr_sg(mr, mem->sg, mem->nents, PAGE_SIZE);
- if (unlikely(n != mem->nents)) {
- isert_err("failed to map mr sg (%d/%d)\n",
- n, mem->nents);
- return n < 0 ? n : -EINVAL;
- }
-
- isert_dbg("Use fr_desc %p sg_nents %d offset %u\n",
- fr_desc, mem->nents, mem->offset);
-
- reg_wr.wr.next = NULL;
- reg_wr.wr.opcode = IB_WR_REG_MR;
- reg_wr.wr.wr_cqe = NULL;
- reg_wr.wr.send_flags = 0;
- reg_wr.wr.num_sge = 0;
- reg_wr.mr = mr;
- reg_wr.key = mr->lkey;
- reg_wr.access = IB_ACCESS_LOCAL_WRITE;
-
- if (!wr)
- wr = &reg_wr.wr;
- else
- wr->next = &reg_wr.wr;
-
- ret = ib_post_send(isert_conn->qp, wr, &bad_wr);
- if (ret) {
- isert_err("fast registration failed, ret:%d\n", ret);
- return ret;
- }
- fr_desc->ind &= ~ind;
-
- sge->lkey = mr->lkey;
- sge->addr = mr->iova;
- sge->length = mr->length;
-
- isert_dbg("sge: addr: 0x%llx length: %u lkey: %x\n",
- sge->addr, sge->length, sge->lkey);
-
- return ret;
-}
-
static inline void
isert_set_dif_domain(struct se_cmd *se_cmd, struct ib_sig_attrs *sig_attrs,
struct ib_sig_domain *domain)
@@ -2526,6 +2041,8 @@ isert_set_dif_domain(struct se_cmd *se_cmd, struct ib_sig_attrs *sig_attrs,
static int
isert_set_sig_attrs(struct se_cmd *se_cmd, struct ib_sig_attrs *sig_attrs)
{
+ memset(sig_attrs, 0, sizeof(*sig_attrs));
+
switch (se_cmd->prot_op) {
case TARGET_PROT_DIN_INSERT:
case TARGET_PROT_DOUT_STRIP:
@@ -2547,228 +2064,59 @@ isert_set_sig_attrs(struct se_cmd *se_cmd, struct ib_sig_attrs *sig_attrs)
return -EINVAL;
}
+ sig_attrs->check_mask =
+ (se_cmd->prot_checks & TARGET_DIF_CHECK_GUARD ? 0xc0 : 0) |
+ (se_cmd->prot_checks & TARGET_DIF_CHECK_REFTAG ? 0x30 : 0) |
+ (se_cmd->prot_checks & TARGET_DIF_CHECK_REFTAG ? 0x0f : 0);
return 0;
}
-static inline u8
-isert_set_prot_checks(u8 prot_checks)
-{
- return (prot_checks & TARGET_DIF_CHECK_GUARD ? 0xc0 : 0) |
- (prot_checks & TARGET_DIF_CHECK_REFTAG ? 0x30 : 0) |
- (prot_checks & TARGET_DIF_CHECK_REFTAG ? 0x0f : 0);
-}
-
static int
-isert_reg_sig_mr(struct isert_conn *isert_conn,
- struct isert_cmd *isert_cmd,
- struct fast_reg_descriptor *fr_desc)
-{
- struct se_cmd *se_cmd = &isert_cmd->iscsi_cmd->se_cmd;
- struct ib_sig_handover_wr sig_wr;
- struct ib_send_wr inv_wr, *bad_wr, *wr = NULL;
- struct pi_context *pi_ctx = fr_desc->pi_ctx;
- struct ib_sig_attrs sig_attrs;
+isert_rdma_rw_ctx_post(struct isert_cmd *cmd, struct isert_conn *conn,
+ struct ib_cqe *cqe, struct ib_send_wr *chain_wr)
+{
+ struct se_cmd *se_cmd = &cmd->iscsi_cmd->se_cmd;
+ enum dma_data_direction dir = target_reverse_dma_direction(se_cmd);
+ u8 port_num = conn->cm_id->port_num;
+ u64 addr;
+ u32 rkey, offset;
int ret;
- memset(&sig_attrs, 0, sizeof(sig_attrs));
- ret = isert_set_sig_attrs(se_cmd, &sig_attrs);
- if (ret)
- goto err;
-
- sig_attrs.check_mask = isert_set_prot_checks(se_cmd->prot_checks);
-
- if (!(fr_desc->ind & ISERT_SIG_KEY_VALID)) {
- isert_inv_rkey(&inv_wr, pi_ctx->sig_mr);
- wr = &inv_wr;
- }
-
- memset(&sig_wr, 0, sizeof(sig_wr));
- sig_wr.wr.opcode = IB_WR_REG_SIG_MR;
- sig_wr.wr.wr_cqe = NULL;
- sig_wr.wr.sg_list = &isert_cmd->ib_sg[DATA];
- sig_wr.wr.num_sge = 1;
- sig_wr.access_flags = IB_ACCESS_LOCAL_WRITE;
- sig_wr.sig_attrs = &sig_attrs;
- sig_wr.sig_mr = pi_ctx->sig_mr;
- if (se_cmd->t_prot_sg)
- sig_wr.prot = &isert_cmd->ib_sg[PROT];
-
- if (!wr)
- wr = &sig_wr.wr;
- else
- wr->next = &sig_wr.wr;
-
- ret = ib_post_send(isert_conn->qp, wr, &bad_wr);
- if (ret) {
- isert_err("fast registration failed, ret:%d\n", ret);
- goto err;
- }
- fr_desc->ind &= ~ISERT_SIG_KEY_VALID;
-
- isert_cmd->ib_sg[SIG].lkey = pi_ctx->sig_mr->lkey;
- isert_cmd->ib_sg[SIG].addr = 0;
- isert_cmd->ib_sg[SIG].length = se_cmd->data_length;
- if (se_cmd->prot_op != TARGET_PROT_DIN_STRIP &&
- se_cmd->prot_op != TARGET_PROT_DOUT_INSERT)
- /*
- * We have protection guards on the wire
- * so we need to set a larget transfer
- */
- isert_cmd->ib_sg[SIG].length += se_cmd->prot_length;
-
- isert_dbg("sig_sge: addr: 0x%llx length: %u lkey: %x\n",
- isert_cmd->ib_sg[SIG].addr, isert_cmd->ib_sg[SIG].length,
- isert_cmd->ib_sg[SIG].lkey);
-err:
- return ret;
-}
-
-static int
-isert_handle_prot_cmd(struct isert_conn *isert_conn,
- struct isert_cmd *isert_cmd)
-{
- struct isert_device *device = isert_conn->device;
- struct se_cmd *se_cmd = &isert_cmd->iscsi_cmd->se_cmd;
- int ret;
-
- if (!isert_cmd->fr_desc->pi_ctx) {
- ret = isert_create_pi_ctx(isert_cmd->fr_desc,
- device->ib_device,
- device->pd);
- if (ret) {
- isert_err("conn %p failed to allocate pi_ctx\n",
- isert_conn);
- return ret;
- }
- }
-
- if (se_cmd->t_prot_sg) {
- ret = isert_map_data_buf(isert_conn, isert_cmd,
- se_cmd->t_prot_sg,
- se_cmd->t_prot_nents,
- se_cmd->prot_length,
- 0,
- isert_cmd->iser_ib_op,
- &isert_cmd->prot);
- if (ret) {
- isert_err("conn %p failed to map protection buffer\n",
- isert_conn);
- return ret;
- }
-
- memset(&isert_cmd->ib_sg[PROT], 0, sizeof(isert_cmd->ib_sg[PROT]));
- ret = isert_fast_reg_mr(isert_conn, isert_cmd->fr_desc,
- &isert_cmd->prot,
- ISERT_PROT_KEY_VALID,
- &isert_cmd->ib_sg[PROT]);
- if (ret) {
- isert_err("conn %p failed to fast reg mr\n",
- isert_conn);
- goto unmap_prot_cmd;
- }
- }
-
- ret = isert_reg_sig_mr(isert_conn, isert_cmd, isert_cmd->fr_desc);
- if (ret) {
- isert_err("conn %p failed to fast reg mr\n",
- isert_conn);
- goto unmap_prot_cmd;
- }
- isert_cmd->fr_desc->ind |= ISERT_PROTECTED;
-
- return 0;
-
-unmap_prot_cmd:
- if (se_cmd->t_prot_sg)
- isert_unmap_data_buf(isert_conn, &isert_cmd->prot);
-
- return ret;
-}
-
-static int
-isert_reg_rdma(struct isert_cmd *isert_cmd, struct iscsi_conn *conn)
-{
- struct iscsi_cmd *cmd = isert_cmd->iscsi_cmd;
- struct se_cmd *se_cmd = &cmd->se_cmd;
- struct isert_conn *isert_conn = conn->context;
- struct fast_reg_descriptor *fr_desc = NULL;
- struct ib_rdma_wr *rdma_wr;
- struct ib_sge *ib_sg;
- u32 offset;
- int ret = 0;
- unsigned long flags;
-
- offset = isert_cmd->iser_ib_op == ISER_IB_RDMA_READ ?
- cmd->write_data_done : 0;
- ret = isert_map_data_buf(isert_conn, isert_cmd, se_cmd->t_data_sg,
- se_cmd->t_data_nents, se_cmd->data_length,
- offset, isert_cmd->iser_ib_op,
- &isert_cmd->data);
- if (ret)
- return ret;
-
- if (isert_cmd->data.dma_nents != 1 ||
- isert_prot_cmd(isert_conn, se_cmd)) {
- spin_lock_irqsave(&isert_conn->pool_lock, flags);
- fr_desc = list_first_entry(&isert_conn->fr_pool,
- struct fast_reg_descriptor, list);
- list_del(&fr_desc->list);
- spin_unlock_irqrestore(&isert_conn->pool_lock, flags);
- isert_cmd->fr_desc = fr_desc;
- }
-
- ret = isert_fast_reg_mr(isert_conn, fr_desc, &isert_cmd->data,
- ISERT_DATA_KEY_VALID, &isert_cmd->ib_sg[DATA]);
- if (ret)
- goto unmap_cmd;
-
- if (isert_prot_cmd(isert_conn, se_cmd)) {
- ret = isert_handle_prot_cmd(isert_conn, isert_cmd);
- if (ret)
- goto unmap_cmd;
-
- ib_sg = &isert_cmd->ib_sg[SIG];
+ if (dir == DMA_FROM_DEVICE) {
+ addr = cmd->write_va;
+ rkey = cmd->write_stag;
+ offset = cmd->iscsi_cmd->write_data_done;
} else {
- ib_sg = &isert_cmd->ib_sg[DATA];
+ addr = cmd->read_va;
+ rkey = cmd->read_stag;
+ offset = 0;
}
- memcpy(&isert_cmd->s_ib_sge, ib_sg, sizeof(*ib_sg));
- isert_cmd->ib_sge = &isert_cmd->s_ib_sge;
- isert_cmd->rdma_wr_num = 1;
- memset(&isert_cmd->s_rdma_wr, 0, sizeof(isert_cmd->s_rdma_wr));
- isert_cmd->rdma_wr = &isert_cmd->s_rdma_wr;
+ if (isert_prot_cmd(conn, se_cmd)) {
+ struct ib_sig_attrs sig_attrs;
- rdma_wr = &isert_cmd->s_rdma_wr;
- rdma_wr->wr.sg_list = &isert_cmd->s_ib_sge;
- rdma_wr->wr.num_sge = 1;
- rdma_wr->wr.wr_cqe = &isert_cmd->tx_desc.tx_cqe;
- if (isert_cmd->iser_ib_op == ISER_IB_RDMA_WRITE) {
- isert_cmd->tx_desc.tx_cqe.done = isert_rdma_write_done;
+ ret = isert_set_sig_attrs(se_cmd, &sig_attrs);
+ if (ret)
+ return ret;
- rdma_wr->wr.opcode = IB_WR_RDMA_WRITE;
- rdma_wr->remote_addr = isert_cmd->read_va;
- rdma_wr->rkey = isert_cmd->read_stag;
- rdma_wr->wr.send_flags = !isert_prot_cmd(isert_conn, se_cmd) ?
- 0 : IB_SEND_SIGNALED;
+ WARN_ON_ONCE(offset);
+ ret = rdma_rw_ctx_signature_init(&cmd->rw, conn->qp, port_num,
+ se_cmd->t_data_sg, se_cmd->t_data_nents,
+ se_cmd->t_prot_sg, se_cmd->t_prot_nents,
+ &sig_attrs, addr, rkey, dir);
} else {
- isert_cmd->tx_desc.tx_cqe.done = isert_rdma_read_done;
-
- rdma_wr->wr.opcode = IB_WR_RDMA_READ;
- rdma_wr->remote_addr = isert_cmd->write_va;
- rdma_wr->rkey = isert_cmd->write_stag;
- rdma_wr->wr.send_flags = IB_SEND_SIGNALED;
+ ret = rdma_rw_ctx_init(&cmd->rw, conn->qp, port_num,
+ se_cmd->t_data_sg, se_cmd->t_data_nents,
+ offset, addr, rkey, dir);
}
-
- return 0;
-
-unmap_cmd:
- if (fr_desc) {
- spin_lock_irqsave(&isert_conn->pool_lock, flags);
- list_add_tail(&fr_desc->list, &isert_conn->fr_pool);
- spin_unlock_irqrestore(&isert_conn->pool_lock, flags);
+ if (ret < 0) {
+ isert_err("Cmd: %p failed to prepare RDMA res\n", cmd);
+ return ret;
}
- isert_unmap_data_buf(isert_conn, &isert_cmd->data);
+ ret = rdma_rw_ctx_post(&cmd->rw, conn->qp, port_num, cqe, chain_wr);
+ if (ret < 0)
+ isert_err("Cmd: %p failed to post RDMA res\n", cmd);
return ret;
}
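
For orientation, the new isert_rdma_rw_ctx_post() above drives the generic rdma_rw_ctx API from <rdma/rw.h> instead of hand-built RDMA work requests. The fragment below is a minimal sketch of that init/post/destroy lifecycle for a non-signature transfer; the function and variable names are illustrative only and are not part of this patch.

#include <linux/scatterlist.h>
#include <rdma/ib_verbs.h>
#include <rdma/rw.h>

/*
 * Minimal sketch (illustrative, not from the patch): rdma_rw_ctx_init()
 * maps @sg and builds the RDMA READ/WRITE WR chain, rdma_rw_ctx_post()
 * posts it with @cqe signalled on the last WR, and rdma_rw_ctx_destroy()
 * is the matching teardown, normally run from the completion or
 * command-release path.
 */
static int example_rw_post(struct rdma_rw_ctx *ctx, struct ib_qp *qp,
			   u8 port_num, struct scatterlist *sg, u32 sg_cnt,
			   u64 remote_addr, u32 rkey,
			   enum dma_data_direction dir, struct ib_cqe *cqe)
{
	int ret;

	ret = rdma_rw_ctx_init(ctx, qp, port_num, sg, sg_cnt, 0,
			       remote_addr, rkey, dir);
	if (ret < 0)
		return ret;

	ret = rdma_rw_ctx_post(ctx, qp, port_num, cqe, NULL);
	if (ret < 0)
		rdma_rw_ctx_destroy(ctx, qp, port_num, sg, sg_cnt, dir);
	return ret;
}
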
@@ -2778,21 +2126,17 @@ isert_put_datain(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
struct se_cmd *se_cmd = &cmd->se_cmd;
struct isert_cmd *isert_cmd = iscsit_priv_cmd(cmd);
struct isert_conn *isert_conn = conn->context;
- struct isert_device *device = isert_conn->device;
- struct ib_send_wr *wr_failed;
+ struct ib_cqe *cqe = NULL;
+ struct ib_send_wr *chain_wr = NULL;
int rc;
isert_dbg("Cmd: %p RDMA_WRITE data_length: %u\n",
isert_cmd, se_cmd->data_length);
- isert_cmd->iser_ib_op = ISER_IB_RDMA_WRITE;
- rc = device->reg_rdma_mem(isert_cmd, conn);
- if (rc) {
- isert_err("Cmd: %p failed to prepare RDMA res\n", isert_cmd);
- return rc;
- }
-
- if (!isert_prot_cmd(isert_conn, se_cmd)) {
+ if (isert_prot_cmd(isert_conn, se_cmd)) {
+ isert_cmd->tx_desc.tx_cqe.done = isert_rdma_write_done;
+ cqe = &isert_cmd->tx_desc.tx_cqe;
+ } else {
/*
* Build isert_conn->tx_desc for iSCSI response PDU and attach
*/
@@ -2803,56 +2147,35 @@ isert_put_datain(struct iscsi_conn *conn, struct iscsi_cmd *cmd)
isert_init_tx_hdrs(isert_conn, &isert_cmd->tx_desc);
isert_init_send_wr(isert_conn, isert_cmd,
&isert_cmd->tx_desc.send_wr);
- isert_cmd->s_rdma_wr.wr.next = &isert_cmd->tx_desc.send_wr;
- isert_cmd->rdma_wr_num += 1;
rc = isert_post_recv(isert_conn, isert_cmd->rx_desc);
if (rc) {
isert_err("ib_post_recv failed with %d\n", rc);
return rc;
}
- }
- rc = ib_post_send(isert_conn->qp, &isert_cmd->rdma_wr->wr, &wr_failed);
- if (rc)
- isert_warn("ib_post_send() failed for IB_WR_RDMA_WRITE\n");
-
- if (!isert_prot_cmd(isert_conn, se_cmd))
- isert_dbg("Cmd: %p posted RDMA_WRITE + Response for iSER Data "
- "READ\n", isert_cmd);
- else
- isert_dbg("Cmd: %p posted RDMA_WRITE for iSER Data READ\n",
- isert_cmd);
+ chain_wr = &isert_cmd->tx_desc.send_wr;
+ }
+ isert_rdma_rw_ctx_post(isert_cmd, isert_conn, cqe, chain_wr);
+ isert_dbg("Cmd: %p posted RDMA_WRITE for iSER Data READ\n", isert_cmd);
return 1;
}
static int
isert_get_dataout(struct iscsi_conn *conn, struct iscsi_cmd *cmd, bool recovery)
{
- struct se_cmd *se_cmd = &cmd->se_cmd;
struct isert_cmd *isert_cmd = iscsit_priv_cmd(cmd);
- struct isert_conn *isert_conn = conn->context;
- struct isert_device *device = isert_conn->device;
- struct ib_send_wr *wr_failed;
- int rc;
isert_dbg("Cmd: %p RDMA_READ data_length: %u write_data_done: %u\n",
- isert_cmd, se_cmd->data_length, cmd->write_data_done);
- isert_cmd->iser_ib_op = ISER_IB_RDMA_READ;
- rc = device->reg_rdma_mem(isert_cmd, conn);
- if (rc) {
- isert_err("Cmd: %p failed to prepare RDMA res\n", isert_cmd);
- return rc;
- }
+ isert_cmd, cmd->se_cmd.data_length, cmd->write_data_done);
- rc = ib_post_send(isert_conn->qp, &isert_cmd->rdma_wr->wr, &wr_failed);
- if (rc)
- isert_warn("ib_post_send() failed for IB_WR_RDMA_READ\n");
+ isert_cmd->tx_desc.tx_cqe.done = isert_rdma_read_done;
+ isert_rdma_rw_ctx_post(isert_cmd, conn->context,
+ &isert_cmd->tx_desc.tx_cqe, NULL);
isert_dbg("Cmd: %p posted RDMA_READ memory for ISER Data WRITE\n",
isert_cmd);
-
return 0;
}
@@ -3273,9 +2596,19 @@ static void isert_free_conn(struct iscsi_conn *conn)
isert_put_conn(isert_conn);
}
+static void isert_get_rx_pdu(struct iscsi_conn *conn)
+{
+ struct completion comp;
+
+ init_completion(&comp);
+
+ wait_for_completion_interruptible(&comp);
+}
+
static struct iscsit_transport iser_target_transport = {
.name = "IB/iSER",
.transport_type = ISCSI_INFINIBAND,
+ .rdma_shutdown = true,
.priv_size = sizeof(struct isert_cmd),
.owner = THIS_MODULE,
.iscsit_setup_np = isert_setup_np,
@@ -3291,6 +2624,7 @@ static struct iscsit_transport iser_target_transport = {
.iscsit_queue_data_in = isert_put_datain,
.iscsit_queue_status = isert_put_response,
.iscsit_aborted_task = isert_aborted_task,
+ .iscsit_get_rx_pdu = isert_get_rx_pdu,
.iscsit_get_sup_prot_ops = isert_get_sup_prot_ops,
};
diff --git a/drivers/infiniband/ulp/isert/ib_isert.h b/drivers/infiniband/ulp/isert/ib_isert.h
index 147900cbb578..e512ba941f2f 100644
--- a/drivers/infiniband/ulp/isert/ib_isert.h
+++ b/drivers/infiniband/ulp/isert/ib_isert.h
@@ -3,6 +3,7 @@
#include <linux/in6.h>
#include <rdma/ib_verbs.h>
#include <rdma/rdma_cm.h>
+#include <rdma/rw.h>
#include <scsi/iser.h>
@@ -53,10 +54,7 @@
#define ISERT_MIN_POSTED_RX (ISCSI_DEF_XMIT_CMDS_MAX >> 2)
-#define ISERT_INFLIGHT_DATAOUTS 8
-
-#define ISERT_QP_MAX_REQ_DTOS (ISCSI_DEF_XMIT_CMDS_MAX * \
- (1 + ISERT_INFLIGHT_DATAOUTS) + \
+#define ISERT_QP_MAX_REQ_DTOS (ISCSI_DEF_XMIT_CMDS_MAX + \
ISERT_MAX_TX_MISC_PDUS + \
ISERT_MAX_RX_MISC_PDUS)
@@ -71,13 +69,6 @@ enum isert_desc_type {
ISCSI_TX_DATAIN
};
-enum iser_ib_op_code {
- ISER_IB_RECV,
- ISER_IB_SEND,
- ISER_IB_RDMA_WRITE,
- ISER_IB_RDMA_READ,
-};
-
enum iser_conn_state {
ISER_CONN_INIT,
ISER_CONN_UP,
@@ -118,42 +109,6 @@ static inline struct iser_tx_desc *cqe_to_tx_desc(struct ib_cqe *cqe)
return container_of(cqe, struct iser_tx_desc, tx_cqe);
}
-
-enum isert_indicator {
- ISERT_PROTECTED = 1 << 0,
- ISERT_DATA_KEY_VALID = 1 << 1,
- ISERT_PROT_KEY_VALID = 1 << 2,
- ISERT_SIG_KEY_VALID = 1 << 3,
-};
-
-struct pi_context {
- struct ib_mr *prot_mr;
- struct ib_mr *sig_mr;
-};
-
-struct fast_reg_descriptor {
- struct list_head list;
- struct ib_mr *data_mr;
- u8 ind;
- struct pi_context *pi_ctx;
-};
-
-struct isert_data_buf {
- struct scatterlist *sg;
- int nents;
- u32 sg_off;
- u32 len; /* cur_rdma_length */
- u32 offset;
- unsigned int dma_nents;
- enum dma_data_direction dma_dir;
-};
-
-enum {
- DATA = 0,
- PROT = 1,
- SIG = 2,
-};
-
struct isert_cmd {
uint32_t read_stag;
uint32_t write_stag;
@@ -166,16 +121,7 @@ struct isert_cmd {
struct iscsi_cmd *iscsi_cmd;
struct iser_tx_desc tx_desc;
struct iser_rx_desc *rx_desc;
- enum iser_ib_op_code iser_ib_op;
- struct ib_sge *ib_sge;
- struct ib_sge s_ib_sge;
- int rdma_wr_num;
- struct ib_rdma_wr *rdma_wr;
- struct ib_rdma_wr s_rdma_wr;
- struct ib_sge ib_sg[3];
- struct isert_data_buf data;
- struct isert_data_buf prot;
- struct fast_reg_descriptor *fr_desc;
+ struct rdma_rw_ctx rw;
struct work_struct comp_work;
struct scatterlist sg;
};
@@ -210,10 +156,6 @@ struct isert_conn {
struct isert_device *device;
struct mutex mutex;
struct kref kref;
- struct list_head fr_pool;
- int fr_pool_size;
- /* lock to protect fastreg pool */
- spinlock_t pool_lock;
struct work_struct release_work;
bool logout_posted;
bool snd_w_inv;
@@ -236,7 +178,6 @@ struct isert_comp {
};
struct isert_device {
- int use_fastreg;
bool pi_capable;
int refcount;
struct ib_device *ib_device;
@@ -244,10 +185,6 @@ struct isert_device {
struct isert_comp *comps;
int comps_used;
struct list_head dev_node;
- int (*reg_rdma_mem)(struct isert_cmd *isert_cmd,
- struct iscsi_conn *conn);
- void (*unreg_rdma_mem)(struct isert_cmd *isert_cmd,
- struct isert_conn *isert_conn);
};
struct isert_np {
diff --git a/drivers/infiniband/ulp/srp/ib_srp.c b/drivers/infiniband/ulp/srp/ib_srp.c
index b6bf20496021..646de170ec12 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.c
+++ b/drivers/infiniband/ulp/srp/ib_srp.c
@@ -70,6 +70,7 @@ static unsigned int indirect_sg_entries;
static bool allow_ext_sg;
static bool prefer_fr = true;
static bool register_always = true;
+static bool never_register;
static int topspin_workarounds = 1;
module_param(srp_sg_tablesize, uint, 0444);
@@ -81,7 +82,7 @@ MODULE_PARM_DESC(cmd_sg_entries,
module_param(indirect_sg_entries, uint, 0444);
MODULE_PARM_DESC(indirect_sg_entries,
- "Default max number of gather/scatter entries (default is 12, max is " __stringify(SCSI_MAX_SG_CHAIN_SEGMENTS) ")");
+ "Default max number of gather/scatter entries (default is 12, max is " __stringify(SG_MAX_SEGMENTS) ")");
module_param(allow_ext_sg, bool, 0444);
MODULE_PARM_DESC(allow_ext_sg,
@@ -99,6 +100,9 @@ module_param(register_always, bool, 0444);
MODULE_PARM_DESC(register_always,
"Use memory registration even for contiguous memory regions");
+module_param(never_register, bool, 0444);
+MODULE_PARM_DESC(never_register, "Never register memory");
+
static const struct kernel_param_ops srp_tmo_ops;
static int srp_reconnect_delay = 10;
@@ -316,7 +320,7 @@ static struct ib_fmr_pool *srp_alloc_fmr_pool(struct srp_target_port *target)
struct ib_fmr_pool_param fmr_param;
memset(&fmr_param, 0, sizeof(fmr_param));
- fmr_param.pool_size = target->scsi_host->can_queue;
+ fmr_param.pool_size = target->mr_pool_size;
fmr_param.dirty_watermark = fmr_param.pool_size / 4;
fmr_param.cache = 1;
fmr_param.max_pages_per_fmr = dev->max_pages_per_mr;
@@ -441,23 +445,22 @@ static struct srp_fr_pool *srp_alloc_fr_pool(struct srp_target_port *target)
{
struct srp_device *dev = target->srp_host->srp_dev;
- return srp_create_fr_pool(dev->dev, dev->pd,
- target->scsi_host->can_queue,
+ return srp_create_fr_pool(dev->dev, dev->pd, target->mr_pool_size,
dev->max_pages_per_mr);
}
/**
* srp_destroy_qp() - destroy an RDMA queue pair
- * @ch: SRP RDMA channel.
+ * @qp: RDMA queue pair.
*
* Drain the qp before destroying it. This avoids that the receive
* completion handler can access the queue pair while it is
* being destroyed.
*/
-static void srp_destroy_qp(struct srp_rdma_ch *ch)
+static void srp_destroy_qp(struct ib_qp *qp)
{
- ib_drain_rq(ch->qp);
- ib_destroy_qp(ch->qp);
+ ib_drain_rq(qp);
+ ib_destroy_qp(qp);
}
static int srp_create_ch_ib(struct srp_rdma_ch *ch)
@@ -469,7 +472,7 @@ static int srp_create_ch_ib(struct srp_rdma_ch *ch)
struct ib_qp *qp;
struct ib_fmr_pool *fmr_pool = NULL;
struct srp_fr_pool *fr_pool = NULL;
- const int m = dev->use_fast_reg ? 3 : 1;
+ const int m = 1 + dev->use_fast_reg * target->mr_per_cmd * 2;
int ret;
init_attr = kzalloc(sizeof *init_attr, GFP_KERNEL);
@@ -530,7 +533,7 @@ static int srp_create_ch_ib(struct srp_rdma_ch *ch)
}
if (ch->qp)
- srp_destroy_qp(ch);
+ srp_destroy_qp(ch->qp);
if (ch->recv_cq)
ib_free_cq(ch->recv_cq);
if (ch->send_cq)
@@ -554,7 +557,7 @@ static int srp_create_ch_ib(struct srp_rdma_ch *ch)
return 0;
err_qp:
- srp_destroy_qp(ch);
+ srp_destroy_qp(qp);
err_send_cq:
ib_free_cq(send_cq);
@@ -597,7 +600,7 @@ static void srp_free_ch_ib(struct srp_target_port *target,
ib_destroy_fmr_pool(ch->fmr_pool);
}
- srp_destroy_qp(ch);
+ srp_destroy_qp(ch->qp);
ib_free_cq(ch->send_cq);
ib_free_cq(ch->recv_cq);
@@ -850,7 +853,7 @@ static int srp_alloc_req_data(struct srp_rdma_ch *ch)
for (i = 0; i < target->req_ring_size; ++i) {
req = &ch->req_ring[i];
- mr_list = kmalloc(target->cmd_sg_cnt * sizeof(void *),
+ mr_list = kmalloc(target->mr_per_cmd * sizeof(void *),
GFP_KERNEL);
if (!mr_list)
goto out;
@@ -1112,7 +1115,7 @@ static struct scsi_cmnd *srp_claim_req(struct srp_rdma_ch *ch,
}
/**
- * srp_free_req() - Unmap data and add request to the free request list.
+ * srp_free_req() - Unmap data and adjust ch->req_lim.
* @ch: SRP RDMA channel.
* @req: Request to be freed.
* @scmnd: SCSI command associated with @req.
@@ -1299,9 +1302,16 @@ static void srp_reg_mr_err_done(struct ib_cq *cq, struct ib_wc *wc)
srp_handle_qp_err(cq, wc, "FAST REG");
}
+/*
+ * Map up to sg_nents elements of state->sg where *sg_offset_p is the offset
+ * where to start in the first element. If sg_offset_p != NULL then
+ * *sg_offset_p is updated to the offset in state->sg[retval] of the first
+ * byte that has not yet been mapped.
+ */
static int srp_map_finish_fr(struct srp_map_state *state,
struct srp_request *req,
- struct srp_rdma_ch *ch, int sg_nents)
+ struct srp_rdma_ch *ch, int sg_nents,
+ unsigned int *sg_offset_p)
{
struct srp_target_port *target = ch->target;
struct srp_device *dev = target->srp_host->srp_dev;
@@ -1316,13 +1326,14 @@ static int srp_map_finish_fr(struct srp_map_state *state,
WARN_ON_ONCE(!dev->use_fast_reg);
- if (sg_nents == 0)
- return 0;
-
if (sg_nents == 1 && target->global_mr) {
- srp_map_desc(state, sg_dma_address(state->sg),
- sg_dma_len(state->sg),
+ unsigned int sg_offset = sg_offset_p ? *sg_offset_p : 0;
+
+ srp_map_desc(state, sg_dma_address(state->sg) + sg_offset,
+ sg_dma_len(state->sg) - sg_offset,
target->global_mr->rkey);
+ if (sg_offset_p)
+ *sg_offset_p = 0;
return 1;
}
@@ -1333,9 +1344,17 @@ static int srp_map_finish_fr(struct srp_map_state *state,
rkey = ib_inc_rkey(desc->mr->rkey);
ib_update_fast_reg_key(desc->mr, rkey);
- n = ib_map_mr_sg(desc->mr, state->sg, sg_nents, dev->mr_page_size);
- if (unlikely(n < 0))
+ n = ib_map_mr_sg(desc->mr, state->sg, sg_nents, sg_offset_p,
+ dev->mr_page_size);
+ if (unlikely(n < 0)) {
+ srp_fr_pool_put(ch->fr_pool, &desc, 1);
+ pr_debug("%s: ib_map_mr_sg(%d, %d) returned %d.\n",
+ dev_name(&req->scmnd->device->sdev_gendev), sg_nents,
+ sg_offset_p ? *sg_offset_p : -1, n);
return n;
+ }
+
+ WARN_ON_ONCE(desc->mr->length == 0);
req->reg_cqe.done = srp_reg_mr_err_done;
@@ -1357,8 +1376,10 @@ static int srp_map_finish_fr(struct srp_map_state *state,
desc->mr->length, desc->mr->rkey);
err = ib_post_send(ch->qp, &wr.wr, &bad_wr);
- if (unlikely(err))
+ if (unlikely(err)) {
+ WARN_ON_ONCE(err == -ENOMEM);
return err;
+ }
return n;
}
@@ -1398,7 +1419,7 @@ static int srp_map_sg_entry(struct srp_map_state *state,
/*
* If the last entry of the MR wasn't a full page, then we need to
* close it out and start a new one -- we can only merge at page
- * boundries.
+ * boundaries.
*/
ret = 0;
if (len != dev->mr_page_size)
@@ -1413,10 +1434,9 @@ static int srp_map_sg_fmr(struct srp_map_state *state, struct srp_rdma_ch *ch,
struct scatterlist *sg;
int i, ret;
- state->desc = req->indirect_desc;
state->pages = req->map_page;
state->fmr.next = req->fmr_list;
- state->fmr.end = req->fmr_list + ch->target->cmd_sg_cnt;
+ state->fmr.end = req->fmr_list + ch->target->mr_per_cmd;
for_each_sg(scat, sg, count, i) {
ret = srp_map_sg_entry(state, ch, sg, i);
@@ -1428,8 +1448,6 @@ static int srp_map_sg_fmr(struct srp_map_state *state, struct srp_rdma_ch *ch,
if (ret)
return ret;
- req->nmdesc = state->nmdesc;
-
return 0;
}
@@ -1437,15 +1455,20 @@ static int srp_map_sg_fr(struct srp_map_state *state, struct srp_rdma_ch *ch,
struct srp_request *req, struct scatterlist *scat,
int count)
{
+ unsigned int sg_offset = 0;
+
state->desc = req->indirect_desc;
state->fr.next = req->fr_list;
- state->fr.end = req->fr_list + ch->target->cmd_sg_cnt;
+ state->fr.end = req->fr_list + ch->target->mr_per_cmd;
state->sg = scat;
+ if (count == 0)
+ return 0;
+
while (count) {
int i, n;
- n = srp_map_finish_fr(state, req, ch, count);
+ n = srp_map_finish_fr(state, req, ch, count, &sg_offset);
if (unlikely(n < 0))
return n;
@@ -1454,8 +1477,6 @@ static int srp_map_sg_fr(struct srp_map_state *state, struct srp_rdma_ch *ch,
state->sg = sg_next(state->sg);
}
- req->nmdesc = state->nmdesc;
-
return 0;
}
@@ -1475,8 +1496,6 @@ static int srp_map_sg_dma(struct srp_map_state *state, struct srp_rdma_ch *ch,
target->global_mr->rkey);
}
- req->nmdesc = state->nmdesc;
-
return 0;
}
@@ -1509,14 +1528,15 @@ static int srp_map_idb(struct srp_rdma_ch *ch, struct srp_request *req,
if (dev->use_fast_reg) {
state.sg = idb_sg;
- sg_set_buf(idb_sg, req->indirect_desc, idb_len);
+ sg_init_one(idb_sg, req->indirect_desc, idb_len);
idb_sg->dma_address = req->indirect_dma_addr; /* hack! */
#ifdef CONFIG_NEED_SG_DMA_LENGTH
idb_sg->dma_length = idb_sg->length; /* hack^2 */
#endif
- ret = srp_map_finish_fr(&state, req, ch, 1);
+ ret = srp_map_finish_fr(&state, req, ch, 1, NULL);
if (ret < 0)
return ret;
+ WARN_ON_ONCE(ret < 1);
} else if (dev->use_fmr) {
state.pages = idb_pages;
state.pages[0] = (req->indirect_dma_addr &
@@ -1534,6 +1554,41 @@ static int srp_map_idb(struct srp_rdma_ch *ch, struct srp_request *req,
return 0;
}
+#if defined(DYNAMIC_DATA_DEBUG)
+static void srp_check_mapping(struct srp_map_state *state,
+ struct srp_rdma_ch *ch, struct srp_request *req,
+ struct scatterlist *scat, int count)
+{
+ struct srp_device *dev = ch->target->srp_host->srp_dev;
+ struct srp_fr_desc **pfr;
+ u64 desc_len = 0, mr_len = 0;
+ int i;
+
+ for (i = 0; i < state->ndesc; i++)
+ desc_len += be32_to_cpu(req->indirect_desc[i].len);
+ if (dev->use_fast_reg)
+ for (i = 0, pfr = req->fr_list; i < state->nmdesc; i++, pfr++)
+ mr_len += (*pfr)->mr->length;
+ else if (dev->use_fmr)
+ for (i = 0; i < state->nmdesc; i++)
+ mr_len += be32_to_cpu(req->indirect_desc[i].len);
+ if (desc_len != scsi_bufflen(req->scmnd) ||
+ mr_len > scsi_bufflen(req->scmnd))
+ pr_err("Inconsistent: scsi len %d <> desc len %lld <> mr len %lld; ndesc %d; nmdesc = %d\n",
+ scsi_bufflen(req->scmnd), desc_len, mr_len,
+ state->ndesc, state->nmdesc);
+}
+#endif
+
+/**
+ * srp_map_data() - map SCSI data buffer onto an SRP request
+ * @scmnd: SCSI command to map
+ * @ch: SRP RDMA channel
+ * @req: SRP request
+ *
+ * Returns the length in bytes of the SRP_CMD IU or a negative value if
+ * mapping failed.
+ */
static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch,
struct srp_request *req)
{
@@ -1601,11 +1656,23 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch,
memset(&state, 0, sizeof(state));
if (dev->use_fast_reg)
- srp_map_sg_fr(&state, ch, req, scat, count);
+ ret = srp_map_sg_fr(&state, ch, req, scat, count);
else if (dev->use_fmr)
- srp_map_sg_fmr(&state, ch, req, scat, count);
+ ret = srp_map_sg_fmr(&state, ch, req, scat, count);
else
- srp_map_sg_dma(&state, ch, req, scat, count);
+ ret = srp_map_sg_dma(&state, ch, req, scat, count);
+ req->nmdesc = state.nmdesc;
+ if (ret < 0)
+ goto unmap;
+
+#if defined(DYNAMIC_DEBUG)
+ {
+ DEFINE_DYNAMIC_DEBUG_METADATA(ddm,
+ "Memory mapping consistency check");
+ if (unlikely(ddm.flags & _DPRINTK_FLAGS_PRINT))
+ srp_check_mapping(&state, ch, req, scat, count);
+ }
+#endif
/* We've mapped the request, now pull as much of the indirect
* descriptor table as we can into the command buffer. If this
@@ -1628,7 +1695,8 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch,
!target->allow_ext_sg)) {
shost_printk(KERN_ERR, target->scsi_host,
"Could not fit S/G list into SRP_CMD\n");
- return -EIO;
+ ret = -EIO;
+ goto unmap;
}
count = min(state.ndesc, target->cmd_sg_cnt);
@@ -1646,7 +1714,7 @@ static int srp_map_data(struct scsi_cmnd *scmnd, struct srp_rdma_ch *ch,
ret = srp_map_idb(ch, req, state.gen.next, state.gen.end,
idb_len, &idb_rkey);
if (ret < 0)
- return ret;
+ goto unmap;
req->nmdesc++;
} else {
idb_rkey = cpu_to_be32(target->global_mr->rkey);
@@ -1672,6 +1740,12 @@ map_complete:
cmd->buf_fmt = fmt;
return len;
+
+unmap:
+ srp_unmap_data(scmnd, ch, req);
+ if (ret == -ENOMEM && req->nmdesc >= target->mr_pool_size)
+ ret = -E2BIG;
+ return ret;
}
/*
@@ -2564,6 +2638,20 @@ static int srp_reset_host(struct scsi_cmnd *scmnd)
return srp_reconnect_rport(target->rport) == 0 ? SUCCESS : FAILED;
}
+static int srp_slave_alloc(struct scsi_device *sdev)
+{
+ struct Scsi_Host *shost = sdev->host;
+ struct srp_target_port *target = host_to_target(shost);
+ struct srp_device *srp_dev = target->srp_host->srp_dev;
+ struct ib_device *ibdev = srp_dev->dev;
+
+ if (!(ibdev->attrs.device_cap_flags & IB_DEVICE_SG_GAPS_REG))
+ blk_queue_virt_boundary(sdev->request_queue,
+ ~srp_dev->mr_page_mask);
+
+ return 0;
+}
+
static int srp_slave_configure(struct scsi_device *sdev)
{
struct Scsi_Host *shost = sdev->host;
@@ -2755,6 +2843,7 @@ static struct scsi_host_template srp_template = {
.module = THIS_MODULE,
.name = "InfiniBand SRP initiator",
.proc_name = DRV_NAME,
+ .slave_alloc = srp_slave_alloc,
.slave_configure = srp_slave_configure,
.info = srp_target_info,
.queuecommand = srp_queuecommand,
@@ -2819,7 +2908,7 @@ static int srp_add_target(struct srp_host *host, struct srp_target_port *target)
spin_unlock(&host->target_lock);
scsi_scan_target(&target->scsi_host->shost_gendev,
- 0, target->scsi_id, SCAN_WILD_CARD, 0);
+ 0, target->scsi_id, SCAN_WILD_CARD, SCSI_SCAN_INITIAL);
if (srp_connected_ch(target) < target->ch_count ||
target->qp_in_error) {
@@ -2829,7 +2918,7 @@ static int srp_add_target(struct srp_host *host, struct srp_target_port *target)
goto out;
}
- pr_debug(PFX "%s: SCSI scan succeeded - detected %d LUNs\n",
+ pr_debug("%s: SCSI scan succeeded - detected %d LUNs\n",
dev_name(&target->scsi_host->shost_gendev),
srp_sdev_count(target->scsi_host));
@@ -3097,7 +3186,7 @@ static int srp_parse_options(const char *buf, struct srp_target_port *target)
case SRP_OPT_SG_TABLESIZE:
if (match_int(args, &token) || token < 1 ||
- token > SCSI_MAX_SG_CHAIN_SEGMENTS) {
+ token > SG_MAX_SEGMENTS) {
pr_warn("bad max sg_tablesize parameter '%s'\n",
p);
goto out;
@@ -3161,6 +3250,7 @@ static ssize_t srp_create_target(struct device *dev,
struct srp_device *srp_dev = host->srp_dev;
struct ib_device *ibdev = srp_dev->dev;
int ret, node_idx, node, cpu, i;
+ unsigned int max_sectors_per_mr, mr_per_cmd = 0;
bool multich = false;
target_host = scsi_host_alloc(&srp_template,
@@ -3217,7 +3307,33 @@ static ssize_t srp_create_target(struct device *dev,
target->sg_tablesize = target->cmd_sg_cnt;
}
+ if (srp_dev->use_fast_reg || srp_dev->use_fmr) {
+ /*
+ * FR and FMR can only map one HCA page per entry. If the
+ * start address is not aligned on a HCA page boundary two
+ * entries will be used for the head and the tail although
+ * these two entries combined contain at most one HCA page of
+ * data. Hence the "+ 1" in the calculation below.
+ *
+ * The indirect data buffer descriptor is contiguous so the
+ * memory for that buffer will only be registered if
+ * register_always is true. Hence add one to mr_per_cmd if
+ * register_always has been set.
+ */
+ max_sectors_per_mr = srp_dev->max_pages_per_mr <<
+ (ilog2(srp_dev->mr_page_size) - 9);
+ mr_per_cmd = register_always +
+ (target->scsi_host->max_sectors + 1 +
+ max_sectors_per_mr - 1) / max_sectors_per_mr;
+ pr_debug("max_sectors = %u; max_pages_per_mr = %u; mr_page_size = %u; max_sectors_per_mr = %u; mr_per_cmd = %u\n",
+ target->scsi_host->max_sectors,
+ srp_dev->max_pages_per_mr, srp_dev->mr_page_size,
+ max_sectors_per_mr, mr_per_cmd);
+ }
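
To make the "+ 1" comment above concrete, a worked example with illustrative numbers (not taken from this patch): with max_pages_per_mr = 512, mr_page_size = 4096, scsi_host->max_sectors = 1024 and register_always set,

  max_sectors_per_mr = 512 << (ilog2(4096) - 9) = 512 << 3 = 4096
  mr_per_cmd         = 1 + (1024 + 1 + 4096 - 1) / 4096 = 1 + 1 = 2
  mr_pool_size       = can_queue * mr_per_cmd
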
+
target_host->sg_tablesize = target->sg_tablesize;
+ target->mr_pool_size = target->scsi_host->can_queue * mr_per_cmd;
+ target->mr_per_cmd = mr_per_cmd;
target->indirect_size = target->sg_tablesize *
sizeof (struct srp_direct_buf);
target->max_iu_len = sizeof (struct srp_cmd) +
@@ -3414,17 +3530,6 @@ static void srp_add_one(struct ib_device *device)
if (!srp_dev)
return;
- srp_dev->has_fmr = (device->alloc_fmr && device->dealloc_fmr &&
- device->map_phys_fmr && device->unmap_fmr);
- srp_dev->has_fr = (device->attrs.device_cap_flags &
- IB_DEVICE_MEM_MGT_EXTENSIONS);
- if (!srp_dev->has_fmr && !srp_dev->has_fr)
- dev_warn(&device->dev, "neither FMR nor FR is supported\n");
-
- srp_dev->use_fast_reg = (srp_dev->has_fr &&
- (!srp_dev->has_fmr || prefer_fr));
- srp_dev->use_fmr = !srp_dev->use_fast_reg && srp_dev->has_fmr;
-
/*
* Use the smallest page size supported by the HCA, down to a
* minimum of 4096 bytes. We're unlikely to build large sglists
@@ -3435,8 +3540,25 @@ static void srp_add_one(struct ib_device *device)
srp_dev->mr_page_mask = ~((u64) srp_dev->mr_page_size - 1);
max_pages_per_mr = device->attrs.max_mr_size;
do_div(max_pages_per_mr, srp_dev->mr_page_size);
+ pr_debug("%s: %llu / %u = %llu <> %u\n", __func__,
+ device->attrs.max_mr_size, srp_dev->mr_page_size,
+ max_pages_per_mr, SRP_MAX_PAGES_PER_MR);
srp_dev->max_pages_per_mr = min_t(u64, SRP_MAX_PAGES_PER_MR,
max_pages_per_mr);
+
+ srp_dev->has_fmr = (device->alloc_fmr && device->dealloc_fmr &&
+ device->map_phys_fmr && device->unmap_fmr);
+ srp_dev->has_fr = (device->attrs.device_cap_flags &
+ IB_DEVICE_MEM_MGT_EXTENSIONS);
+ if (!never_register && !srp_dev->has_fmr && !srp_dev->has_fr) {
+ dev_warn(&device->dev, "neither FMR nor FR is supported\n");
+ } else if (!never_register &&
+ device->attrs.max_mr_size >= 2 * srp_dev->mr_page_size) {
+ srp_dev->use_fast_reg = (srp_dev->has_fr &&
+ (!srp_dev->has_fmr || prefer_fr));
+ srp_dev->use_fmr = !srp_dev->use_fast_reg && srp_dev->has_fmr;
+ }
+
if (srp_dev->use_fast_reg) {
srp_dev->max_pages_per_mr =
min_t(u32, srp_dev->max_pages_per_mr,
@@ -3456,7 +3578,8 @@ static void srp_add_one(struct ib_device *device)
if (IS_ERR(srp_dev->pd))
goto free_dev;
- if (!register_always || (!srp_dev->has_fmr && !srp_dev->has_fr)) {
+ if (never_register || !register_always ||
+ (!srp_dev->has_fmr && !srp_dev->has_fr)) {
srp_dev->global_mr = ib_get_dma_mr(srp_dev->pd,
IB_ACCESS_LOCAL_WRITE |
IB_ACCESS_REMOTE_READ |
diff --git a/drivers/infiniband/ulp/srp/ib_srp.h b/drivers/infiniband/ulp/srp/ib_srp.h
index 9e05ce4a04fd..26bb9b0a7a63 100644
--- a/drivers/infiniband/ulp/srp/ib_srp.h
+++ b/drivers/infiniband/ulp/srp/ib_srp.h
@@ -202,6 +202,8 @@ struct srp_target_port {
char target_name[32];
unsigned int scsi_id;
unsigned int sg_tablesize;
+ int mr_pool_size;
+ int mr_per_cmd;
int queue_size;
int req_ring_size;
int comp_vector;
diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.c b/drivers/infiniband/ulp/srpt/ib_srpt.c
index 8b42401d4795..e68b20cba70b 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.c
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.c
@@ -254,8 +254,8 @@ static void srpt_get_class_port_info(struct ib_dm_mad *mad)
memset(cif, 0, sizeof(*cif));
cif->base_version = 1;
cif->class_version = 1;
- cif->resp_time_value = 20;
+ ib_set_cpi_resp_time(cif, 20);
mad->mad_hdr.status = 0;
}
@@ -765,52 +765,6 @@ static int srpt_post_recv(struct srpt_device *sdev,
}
/**
- * srpt_post_send() - Post an IB send request.
- *
- * Returns zero upon success and a non-zero value upon failure.
- */
-static int srpt_post_send(struct srpt_rdma_ch *ch,
- struct srpt_send_ioctx *ioctx, int len)
-{
- struct ib_sge list;
- struct ib_send_wr wr, *bad_wr;
- struct srpt_device *sdev = ch->sport->sdev;
- int ret;
-
- atomic_inc(&ch->req_lim);
-
- ret = -ENOMEM;
- if (unlikely(atomic_dec_return(&ch->sq_wr_avail) < 0)) {
- pr_warn("IB send queue full (needed 1)\n");
- goto out;
- }
-
- ib_dma_sync_single_for_device(sdev->device, ioctx->ioctx.dma, len,
- DMA_TO_DEVICE);
-
- list.addr = ioctx->ioctx.dma;
- list.length = len;
- list.lkey = sdev->pd->local_dma_lkey;
-
- ioctx->ioctx.cqe.done = srpt_send_done;
- wr.next = NULL;
- wr.wr_cqe = &ioctx->ioctx.cqe;
- wr.sg_list = &list;
- wr.num_sge = 1;
- wr.opcode = IB_WR_SEND;
- wr.send_flags = IB_SEND_SIGNALED;
-
- ret = ib_post_send(ch->qp, &wr, &bad_wr);
-
-out:
- if (ret < 0) {
- atomic_inc(&ch->sq_wr_avail);
- atomic_dec(&ch->req_lim);
- }
- return ret;
-}
-
-/**
* srpt_zerolength_write() - Perform a zero-length RDMA write.
*
* A quote from the InfiniBand specification: C9-88: For an HCA responder
@@ -843,6 +797,110 @@ static void srpt_zerolength_write_done(struct ib_cq *cq, struct ib_wc *wc)
}
}
+static int srpt_alloc_rw_ctxs(struct srpt_send_ioctx *ioctx,
+ struct srp_direct_buf *db, int nbufs, struct scatterlist **sg,
+ unsigned *sg_cnt)
+{
+ enum dma_data_direction dir = target_reverse_dma_direction(&ioctx->cmd);
+ struct srpt_rdma_ch *ch = ioctx->ch;
+ struct scatterlist *prev = NULL;
+ unsigned prev_nents;
+ int ret, i;
+
+ if (nbufs == 1) {
+ ioctx->rw_ctxs = &ioctx->s_rw_ctx;
+ } else {
+ ioctx->rw_ctxs = kmalloc_array(nbufs, sizeof(*ioctx->rw_ctxs),
+ GFP_KERNEL);
+ if (!ioctx->rw_ctxs)
+ return -ENOMEM;
+ }
+
+ for (i = ioctx->n_rw_ctx; i < nbufs; i++, db++) {
+ struct srpt_rw_ctx *ctx = &ioctx->rw_ctxs[i];
+ u64 remote_addr = be64_to_cpu(db->va);
+ u32 size = be32_to_cpu(db->len);
+ u32 rkey = be32_to_cpu(db->key);
+
+ ret = target_alloc_sgl(&ctx->sg, &ctx->nents, size, false,
+ i < nbufs - 1);
+ if (ret)
+ goto unwind;
+
+ ret = rdma_rw_ctx_init(&ctx->rw, ch->qp, ch->sport->port,
+ ctx->sg, ctx->nents, 0, remote_addr, rkey, dir);
+ if (ret < 0) {
+ target_free_sgl(ctx->sg, ctx->nents);
+ goto unwind;
+ }
+
+ ioctx->n_rdma += ret;
+ ioctx->n_rw_ctx++;
+
+ if (prev) {
+ sg_unmark_end(&prev[prev_nents - 1]);
+ sg_chain(prev, prev_nents + 1, ctx->sg);
+ } else {
+ *sg = ctx->sg;
+ }
+
+ prev = ctx->sg;
+ prev_nents = ctx->nents;
+
+ *sg_cnt += ctx->nents;
+ }
+
+ return 0;
+
+unwind:
+ while (--i >= 0) {
+ struct srpt_rw_ctx *ctx = &ioctx->rw_ctxs[i];
+
+ rdma_rw_ctx_destroy(&ctx->rw, ch->qp, ch->sport->port,
+ ctx->sg, ctx->nents, dir);
+ target_free_sgl(ctx->sg, ctx->nents);
+ }
+ if (ioctx->rw_ctxs != &ioctx->s_rw_ctx)
+ kfree(ioctx->rw_ctxs);
+ return ret;
+}
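
srpt_alloc_rw_ctxs() above folds the per-descriptor SG tables into one chained scatterlist so the whole command can be passed to the target core as a single list. A minimal sketch of that chaining idiom follows; the names are illustrative, and it assumes the first table was allocated with one spare entry for the chain link (which is what the "chainable" argument to target_alloc_sgl() provides).

#include <linux/scatterlist.h>

/*
 * Illustrative only: append @second to @first, where @first has
 * @first_nents used entries plus one spare slot for the chain link.
 */
static void example_chain_sg(struct scatterlist *first,
			     unsigned int first_nents,
			     struct scatterlist *second)
{
	/* The last data entry must no longer terminate the list ... */
	sg_unmark_end(&first[first_nents - 1]);
	/* ... and the spare slot first[first_nents] becomes the chain link. */
	sg_chain(first, first_nents + 1, second);
}
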
+
+static void srpt_free_rw_ctxs(struct srpt_rdma_ch *ch,
+ struct srpt_send_ioctx *ioctx)
+{
+ enum dma_data_direction dir = target_reverse_dma_direction(&ioctx->cmd);
+ int i;
+
+ for (i = 0; i < ioctx->n_rw_ctx; i++) {
+ struct srpt_rw_ctx *ctx = &ioctx->rw_ctxs[i];
+
+ rdma_rw_ctx_destroy(&ctx->rw, ch->qp, ch->sport->port,
+ ctx->sg, ctx->nents, dir);
+ target_free_sgl(ctx->sg, ctx->nents);
+ }
+
+ if (ioctx->rw_ctxs != &ioctx->s_rw_ctx)
+ kfree(ioctx->rw_ctxs);
+}
+
+static inline void *srpt_get_desc_buf(struct srp_cmd *srp_cmd)
+{
+ /*
+ * The pointer computations below will only be compiled correctly
+ * if srp_cmd::add_data is declared as s8*, u8*, s8[] or u8[], so check
+ * whether srp_cmd::add_data has been declared as a byte pointer.
+ */
+ BUILD_BUG_ON(!__same_type(srp_cmd->add_data[0], (s8)0) &&
+ !__same_type(srp_cmd->add_data[0], (u8)0));
+
+ /*
+ * According to the SRP spec, the lower two bits of the 'ADDITIONAL
+ * CDB LENGTH' field are reserved and the size in bytes of this field
+ * is four times the value specified in bits 3..7. Hence the "& ~3".
+ */
+ return srp_cmd->add_data + (srp_cmd->add_cdb_len & ~3);
+}
+
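As a concrete reading of the "& ~3" above (illustrative value): an add_cdb_len byte of 0x0a has its two reserved low bits masked off, 0x0a & ~3 = 8, so the data-in/data-out descriptors start 8 bytes into srp_cmd->add_data; with no additional CDB they start at srp_cmd->add_data itself.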
/**
* srpt_get_desc_tbl() - Parse the data descriptors of an SRP_CMD request.
* @ioctx: Pointer to the I/O context associated with the request.
@@ -858,94 +916,59 @@ static void srpt_zerolength_write_done(struct ib_cq *cq, struct ib_wc *wc)
* -ENOMEM when memory allocation fails and zero upon success.
*/
static int srpt_get_desc_tbl(struct srpt_send_ioctx *ioctx,
- struct srp_cmd *srp_cmd,
- enum dma_data_direction *dir, u64 *data_len)
+ struct srp_cmd *srp_cmd, enum dma_data_direction *dir,
+ struct scatterlist **sg, unsigned *sg_cnt, u64 *data_len)
{
- struct srp_indirect_buf *idb;
- struct srp_direct_buf *db;
- unsigned add_cdb_offset;
- int ret;
-
- /*
- * The pointer computations below will only be compiled correctly
- * if srp_cmd::add_data is declared as s8*, u8*, s8[] or u8[], so check
- * whether srp_cmd::add_data has been declared as a byte pointer.
- */
- BUILD_BUG_ON(!__same_type(srp_cmd->add_data[0], (s8)0)
- && !__same_type(srp_cmd->add_data[0], (u8)0));
-
BUG_ON(!dir);
BUG_ON(!data_len);
- ret = 0;
- *data_len = 0;
-
/*
* The lower four bits of the buffer format field contain the DATA-IN
* buffer descriptor format, and the highest four bits contain the
* DATA-OUT buffer descriptor format.
*/
- *dir = DMA_NONE;
if (srp_cmd->buf_fmt & 0xf)
/* DATA-IN: transfer data from target to initiator (read). */
*dir = DMA_FROM_DEVICE;
else if (srp_cmd->buf_fmt >> 4)
/* DATA-OUT: transfer data from initiator to target (write). */
*dir = DMA_TO_DEVICE;
+ else
+ *dir = DMA_NONE;
+
+ /* initialize data_direction early as srpt_alloc_rw_ctxs needs it */
+ ioctx->cmd.data_direction = *dir;
- /*
- * According to the SRP spec, the lower two bits of the 'ADDITIONAL
- * CDB LENGTH' field are reserved and the size in bytes of this field
- * is four times the value specified in bits 3..7. Hence the "& ~3".
- */
- add_cdb_offset = srp_cmd->add_cdb_len & ~3;
if (((srp_cmd->buf_fmt & 0xf) == SRP_DATA_DESC_DIRECT) ||
((srp_cmd->buf_fmt >> 4) == SRP_DATA_DESC_DIRECT)) {
- ioctx->n_rbuf = 1;
- ioctx->rbufs = &ioctx->single_rbuf;
+ struct srp_direct_buf *db = srpt_get_desc_buf(srp_cmd);
- db = (struct srp_direct_buf *)(srp_cmd->add_data
- + add_cdb_offset);
- memcpy(ioctx->rbufs, db, sizeof(*db));
*data_len = be32_to_cpu(db->len);
+ return srpt_alloc_rw_ctxs(ioctx, db, 1, sg, sg_cnt);
} else if (((srp_cmd->buf_fmt & 0xf) == SRP_DATA_DESC_INDIRECT) ||
((srp_cmd->buf_fmt >> 4) == SRP_DATA_DESC_INDIRECT)) {
- idb = (struct srp_indirect_buf *)(srp_cmd->add_data
- + add_cdb_offset);
-
- ioctx->n_rbuf = be32_to_cpu(idb->table_desc.len) / sizeof(*db);
+ struct srp_indirect_buf *idb = srpt_get_desc_buf(srp_cmd);
+ int nbufs = be32_to_cpu(idb->table_desc.len) /
+ sizeof(struct srp_direct_buf);
- if (ioctx->n_rbuf >
+ if (nbufs >
(srp_cmd->data_out_desc_cnt + srp_cmd->data_in_desc_cnt)) {
pr_err("received unsupported SRP_CMD request"
" type (%u out + %u in != %u / %zu)\n",
srp_cmd->data_out_desc_cnt,
srp_cmd->data_in_desc_cnt,
be32_to_cpu(idb->table_desc.len),
- sizeof(*db));
- ioctx->n_rbuf = 0;
- ret = -EINVAL;
- goto out;
+ sizeof(struct srp_direct_buf));
+ return -EINVAL;
}
- if (ioctx->n_rbuf == 1)
- ioctx->rbufs = &ioctx->single_rbuf;
- else {
- ioctx->rbufs =
- kmalloc(ioctx->n_rbuf * sizeof(*db), GFP_ATOMIC);
- if (!ioctx->rbufs) {
- ioctx->n_rbuf = 0;
- ret = -ENOMEM;
- goto out;
- }
- }
-
- db = idb->desc_list;
- memcpy(ioctx->rbufs, db, ioctx->n_rbuf * sizeof(*db));
*data_len = be32_to_cpu(idb->len);
+ return srpt_alloc_rw_ctxs(ioctx, idb->desc_list, nbufs,
+ sg, sg_cnt);
+ } else {
+ *data_len = 0;
+ return 0;
}
-out:
- return ret;
}
/**
@@ -1049,217 +1072,6 @@ static int srpt_ch_qp_err(struct srpt_rdma_ch *ch)
}
/**
- * srpt_unmap_sg_to_ib_sge() - Unmap an IB SGE list.
- */
-static void srpt_unmap_sg_to_ib_sge(struct srpt_rdma_ch *ch,
- struct srpt_send_ioctx *ioctx)
-{
- struct scatterlist *sg;
- enum dma_data_direction dir;
-
- BUG_ON(!ch);
- BUG_ON(!ioctx);
- BUG_ON(ioctx->n_rdma && !ioctx->rdma_wrs);
-
- while (ioctx->n_rdma)
- kfree(ioctx->rdma_wrs[--ioctx->n_rdma].wr.sg_list);
-
- kfree(ioctx->rdma_wrs);
- ioctx->rdma_wrs = NULL;
-
- if (ioctx->mapped_sg_count) {
- sg = ioctx->sg;
- WARN_ON(!sg);
- dir = ioctx->cmd.data_direction;
- BUG_ON(dir == DMA_NONE);
- ib_dma_unmap_sg(ch->sport->sdev->device, sg, ioctx->sg_cnt,
- target_reverse_dma_direction(&ioctx->cmd));
- ioctx->mapped_sg_count = 0;
- }
-}
-
-/**
- * srpt_map_sg_to_ib_sge() - Map an SG list to an IB SGE list.
- */
-static int srpt_map_sg_to_ib_sge(struct srpt_rdma_ch *ch,
- struct srpt_send_ioctx *ioctx)
-{
- struct ib_device *dev = ch->sport->sdev->device;
- struct se_cmd *cmd;
- struct scatterlist *sg, *sg_orig;
- int sg_cnt;
- enum dma_data_direction dir;
- struct ib_rdma_wr *riu;
- struct srp_direct_buf *db;
- dma_addr_t dma_addr;
- struct ib_sge *sge;
- u64 raddr;
- u32 rsize;
- u32 tsize;
- u32 dma_len;
- int count, nrdma;
- int i, j, k;
-
- BUG_ON(!ch);
- BUG_ON(!ioctx);
- cmd = &ioctx->cmd;
- dir = cmd->data_direction;
- BUG_ON(dir == DMA_NONE);
-
- ioctx->sg = sg = sg_orig = cmd->t_data_sg;
- ioctx->sg_cnt = sg_cnt = cmd->t_data_nents;
-
- count = ib_dma_map_sg(ch->sport->sdev->device, sg, sg_cnt,
- target_reverse_dma_direction(cmd));
- if (unlikely(!count))
- return -EAGAIN;
-
- ioctx->mapped_sg_count = count;
-
- if (ioctx->rdma_wrs && ioctx->n_rdma_wrs)
- nrdma = ioctx->n_rdma_wrs;
- else {
- nrdma = (count + SRPT_DEF_SG_PER_WQE - 1) / SRPT_DEF_SG_PER_WQE
- + ioctx->n_rbuf;
-
- ioctx->rdma_wrs = kcalloc(nrdma, sizeof(*ioctx->rdma_wrs),
- GFP_KERNEL);
- if (!ioctx->rdma_wrs)
- goto free_mem;
-
- ioctx->n_rdma_wrs = nrdma;
- }
-
- db = ioctx->rbufs;
- tsize = cmd->data_length;
- dma_len = ib_sg_dma_len(dev, &sg[0]);
- riu = ioctx->rdma_wrs;
-
- /*
- * For each remote desc - calculate the #ib_sge.
- * If #ib_sge < SRPT_DEF_SG_PER_WQE per rdma operation then
- * each remote desc rdma_iu is required a rdma wr;
- * else
- * we need to allocate extra rdma_iu to carry extra #ib_sge in
- * another rdma wr
- */
- for (i = 0, j = 0;
- j < count && i < ioctx->n_rbuf && tsize > 0; ++i, ++riu, ++db) {
- rsize = be32_to_cpu(db->len);
- raddr = be64_to_cpu(db->va);
- riu->remote_addr = raddr;
- riu->rkey = be32_to_cpu(db->key);
- riu->wr.num_sge = 0;
-
- /* calculate how many sge required for this remote_buf */
- while (rsize > 0 && tsize > 0) {
-
- if (rsize >= dma_len) {
- tsize -= dma_len;
- rsize -= dma_len;
- raddr += dma_len;
-
- if (tsize > 0) {
- ++j;
- if (j < count) {
- sg = sg_next(sg);
- dma_len = ib_sg_dma_len(
- dev, sg);
- }
- }
- } else {
- tsize -= rsize;
- dma_len -= rsize;
- rsize = 0;
- }
-
- ++riu->wr.num_sge;
-
- if (rsize > 0 &&
- riu->wr.num_sge == SRPT_DEF_SG_PER_WQE) {
- ++ioctx->n_rdma;
- riu->wr.sg_list = kmalloc_array(riu->wr.num_sge,
- sizeof(*riu->wr.sg_list),
- GFP_KERNEL);
- if (!riu->wr.sg_list)
- goto free_mem;
-
- ++riu;
- riu->wr.num_sge = 0;
- riu->remote_addr = raddr;
- riu->rkey = be32_to_cpu(db->key);
- }
- }
-
- ++ioctx->n_rdma;
- riu->wr.sg_list = kmalloc_array(riu->wr.num_sge,
- sizeof(*riu->wr.sg_list),
- GFP_KERNEL);
- if (!riu->wr.sg_list)
- goto free_mem;
- }
-
- db = ioctx->rbufs;
- tsize = cmd->data_length;
- riu = ioctx->rdma_wrs;
- sg = sg_orig;
- dma_len = ib_sg_dma_len(dev, &sg[0]);
- dma_addr = ib_sg_dma_address(dev, &sg[0]);
-
- /* this second loop is really mapped sg_addres to rdma_iu->ib_sge */
- for (i = 0, j = 0;
- j < count && i < ioctx->n_rbuf && tsize > 0; ++i, ++riu, ++db) {
- rsize = be32_to_cpu(db->len);
- sge = riu->wr.sg_list;
- k = 0;
-
- while (rsize > 0 && tsize > 0) {
- sge->addr = dma_addr;
- sge->lkey = ch->sport->sdev->pd->local_dma_lkey;
-
- if (rsize >= dma_len) {
- sge->length =
- (tsize < dma_len) ? tsize : dma_len;
- tsize -= dma_len;
- rsize -= dma_len;
-
- if (tsize > 0) {
- ++j;
- if (j < count) {
- sg = sg_next(sg);
- dma_len = ib_sg_dma_len(
- dev, sg);
- dma_addr = ib_sg_dma_address(
- dev, sg);
- }
- }
- } else {
- sge->length = (tsize < rsize) ? tsize : rsize;
- tsize -= rsize;
- dma_len -= rsize;
- dma_addr += rsize;
- rsize = 0;
- }
-
- ++k;
- if (k == riu->wr.num_sge && rsize > 0 && tsize > 0) {
- ++riu;
- sge = riu->wr.sg_list;
- k = 0;
- } else if (rsize > 0 && tsize > 0)
- ++sge;
- }
- }
-
- return 0;
-
-free_mem:
- srpt_unmap_sg_to_ib_sge(ch, ioctx);
-
- return -ENOMEM;
-}
-
-/**
* srpt_get_send_ioctx() - Obtain an I/O context for sending to the initiator.
*/
static struct srpt_send_ioctx *srpt_get_send_ioctx(struct srpt_rdma_ch *ch)
@@ -1284,12 +1096,8 @@ static struct srpt_send_ioctx *srpt_get_send_ioctx(struct srpt_rdma_ch *ch)
BUG_ON(ioctx->ch != ch);
spin_lock_init(&ioctx->spinlock);
ioctx->state = SRPT_STATE_NEW;
- ioctx->n_rbuf = 0;
- ioctx->rbufs = NULL;
ioctx->n_rdma = 0;
- ioctx->n_rdma_wrs = 0;
- ioctx->rdma_wrs = NULL;
- ioctx->mapped_sg_count = 0;
+ ioctx->n_rw_ctx = 0;
init_completion(&ioctx->tx_done);
ioctx->queue_status_only = false;
/*
@@ -1359,7 +1167,6 @@ static int srpt_abort_cmd(struct srpt_send_ioctx *ioctx)
* SRP_RSP sending failed or the SRP_RSP send completion has
* not been received in time.
*/
- srpt_unmap_sg_to_ib_sge(ioctx->ch, ioctx);
transport_generic_free_cmd(&ioctx->cmd, 0);
break;
case SRPT_STATE_MGMT_RSP_SENT:
@@ -1387,6 +1194,7 @@ static void srpt_rdma_read_done(struct ib_cq *cq, struct ib_wc *wc)
WARN_ON(ioctx->n_rdma <= 0);
atomic_add(ioctx->n_rdma, &ch->sq_wr_avail);
+ ioctx->n_rdma = 0;
if (unlikely(wc->status != IB_WC_SUCCESS)) {
pr_info("RDMA_READ for ioctx 0x%p failed with status %d\n",
@@ -1403,23 +1211,6 @@ static void srpt_rdma_read_done(struct ib_cq *cq, struct ib_wc *wc)
__LINE__, srpt_get_cmd_state(ioctx));
}
-static void srpt_rdma_write_done(struct ib_cq *cq, struct ib_wc *wc)
-{
- struct srpt_send_ioctx *ioctx =
- container_of(wc->wr_cqe, struct srpt_send_ioctx, rdma_cqe);
-
- if (unlikely(wc->status != IB_WC_SUCCESS)) {
- /*
- * Note: if an RDMA write error completion is received that
- * means that a SEND also has been posted. Defer further
- * processing of the associated command until the send error
- * completion has been received.
- */
- pr_info("RDMA_WRITE for ioctx 0x%p failed with status %d\n",
- ioctx, wc->status);
- }
-}
-
/**
* srpt_build_cmd_rsp() - Build an SRP_RSP response.
* @ch: RDMA channel through which the request has been received.
@@ -1537,6 +1328,8 @@ static void srpt_handle_cmd(struct srpt_rdma_ch *ch,
{
struct se_cmd *cmd;
struct srp_cmd *srp_cmd;
+ struct scatterlist *sg = NULL;
+ unsigned sg_cnt = 0;
u64 data_len;
enum dma_data_direction dir;
int rc;
@@ -1563,16 +1356,21 @@ static void srpt_handle_cmd(struct srpt_rdma_ch *ch,
break;
}
- if (srpt_get_desc_tbl(send_ioctx, srp_cmd, &dir, &data_len)) {
- pr_err("0x%llx: parsing SRP descriptor table failed.\n",
- srp_cmd->tag);
+ rc = srpt_get_desc_tbl(send_ioctx, srp_cmd, &dir, &sg, &sg_cnt,
+ &data_len);
+ if (rc) {
+ if (rc != -EAGAIN) {
+ pr_err("0x%llx: parsing SRP descriptor table failed.\n",
+ srp_cmd->tag);
+ }
goto release_ioctx;
}
- rc = target_submit_cmd(cmd, ch->sess, srp_cmd->cdb,
+ rc = target_submit_cmd_map_sgls(cmd, ch->sess, srp_cmd->cdb,
&send_ioctx->sense_data[0],
scsilun_to_int(&srp_cmd->lun), data_len,
- TCM_SIMPLE_TAG, dir, TARGET_SCF_ACK_KREF);
+ TCM_SIMPLE_TAG, dir, TARGET_SCF_ACK_KREF,
+ sg, sg_cnt, NULL, 0, NULL, 0);
if (rc != 0) {
pr_debug("target_submit_cmd() returned %d for tag %#llx\n", rc,
srp_cmd->tag);
@@ -1664,23 +1462,21 @@ static void srpt_handle_new_iu(struct srpt_rdma_ch *ch,
recv_ioctx->ioctx.dma, srp_max_req_size,
DMA_FROM_DEVICE);
- if (unlikely(ch->state == CH_CONNECTING)) {
- list_add_tail(&recv_ioctx->wait_list, &ch->cmd_wait_list);
- goto out;
- }
+ if (unlikely(ch->state == CH_CONNECTING))
+ goto out_wait;
if (unlikely(ch->state != CH_LIVE))
- goto out;
+ return;
srp_cmd = recv_ioctx->ioctx.buf;
if (srp_cmd->opcode == SRP_CMD || srp_cmd->opcode == SRP_TSK_MGMT) {
- if (!send_ioctx)
+ if (!send_ioctx) {
+ if (!list_empty(&ch->cmd_wait_list))
+ goto out_wait;
send_ioctx = srpt_get_send_ioctx(ch);
- if (unlikely(!send_ioctx)) {
- list_add_tail(&recv_ioctx->wait_list,
- &ch->cmd_wait_list);
- goto out;
}
+ if (unlikely(!send_ioctx))
+ goto out_wait;
}
switch (srp_cmd->opcode) {
@@ -1709,8 +1505,10 @@ static void srpt_handle_new_iu(struct srpt_rdma_ch *ch,
}
srpt_post_recv(ch->sport->sdev, recv_ioctx);
-out:
return;
+
+out_wait:
+ list_add_tail(&recv_ioctx->wait_list, &ch->cmd_wait_list);
}
static void srpt_recv_done(struct ib_cq *cq, struct ib_wc *wc)
@@ -1779,14 +1577,13 @@ static void srpt_send_done(struct ib_cq *cq, struct ib_wc *wc)
WARN_ON(state != SRPT_STATE_CMD_RSP_SENT &&
state != SRPT_STATE_MGMT_RSP_SENT);
- atomic_inc(&ch->sq_wr_avail);
+ atomic_add(1 + ioctx->n_rdma, &ch->sq_wr_avail);
if (wc->status != IB_WC_SUCCESS)
pr_info("sending response for ioctx 0x%p failed"
" with status %d\n", ioctx, wc->status);
if (state != SRPT_STATE_DONE) {
- srpt_unmap_sg_to_ib_sge(ch, ioctx);
transport_generic_free_cmd(&ioctx->cmd, 0);
} else {
pr_err("IB completion has been received too late for"
@@ -1832,8 +1629,18 @@ retry:
qp_init->srq = sdev->srq;
qp_init->sq_sig_type = IB_SIGNAL_REQ_WR;
qp_init->qp_type = IB_QPT_RC;
- qp_init->cap.max_send_wr = srp_sq_size;
- qp_init->cap.max_send_sge = SRPT_DEF_SG_PER_WQE;
+ /*
+ * We divide up our send queue size into half SEND WRs to send the
+ * completions, and half R/W contexts to actually do the RDMA
+ * READ/WRITE transfers. Note that we need to allocate CQ slots for
+ * both, as RDMA contexts will also post completions for the
+ * RDMA READ case.
+ */
+ qp_init->cap.max_send_wr = srp_sq_size / 2;
+ qp_init->cap.max_rdma_ctxs = srp_sq_size / 2;
+ qp_init->cap.max_send_sge = max(sdev->device->attrs.max_sge_rd,
+ sdev->device->attrs.max_sge);
+ qp_init->port_num = ch->sport->port;
ch->qp = ib_create_qp(sdev->pd, qp_init);
if (IS_ERR(ch->qp)) {
@@ -1960,14 +1767,6 @@ static void __srpt_close_all_ch(struct srpt_device *sdev)
}
}
-/**
- * srpt_shutdown_session() - Whether or not a session may be shut down.
- */
-static int srpt_shutdown_session(struct se_session *se_sess)
-{
- return 1;
-}
-
static void srpt_free_ch(struct kref *kref)
{
struct srpt_rdma_ch *ch = container_of(kref, struct srpt_rdma_ch, kref);
@@ -2386,95 +2185,6 @@ static int srpt_cm_handler(struct ib_cm_id *cm_id, struct ib_cm_event *event)
return ret;
}
-/**
- * srpt_perform_rdmas() - Perform IB RDMA.
- *
- * Returns zero upon success or a negative number upon failure.
- */
-static int srpt_perform_rdmas(struct srpt_rdma_ch *ch,
- struct srpt_send_ioctx *ioctx)
-{
- struct ib_send_wr *bad_wr;
- int sq_wr_avail, ret, i;
- enum dma_data_direction dir;
- const int n_rdma = ioctx->n_rdma;
-
- dir = ioctx->cmd.data_direction;
- if (dir == DMA_TO_DEVICE) {
- /* write */
- ret = -ENOMEM;
- sq_wr_avail = atomic_sub_return(n_rdma, &ch->sq_wr_avail);
- if (sq_wr_avail < 0) {
- pr_warn("IB send queue full (needed %d)\n",
- n_rdma);
- goto out;
- }
- }
-
- for (i = 0; i < n_rdma; i++) {
- struct ib_send_wr *wr = &ioctx->rdma_wrs[i].wr;
-
- wr->opcode = (dir == DMA_FROM_DEVICE) ?
- IB_WR_RDMA_WRITE : IB_WR_RDMA_READ;
-
- if (i == n_rdma - 1) {
- /* only get completion event for the last rdma read */
- if (dir == DMA_TO_DEVICE) {
- wr->send_flags = IB_SEND_SIGNALED;
- ioctx->rdma_cqe.done = srpt_rdma_read_done;
- } else {
- ioctx->rdma_cqe.done = srpt_rdma_write_done;
- }
- wr->wr_cqe = &ioctx->rdma_cqe;
- wr->next = NULL;
- } else {
- wr->wr_cqe = NULL;
- wr->next = &ioctx->rdma_wrs[i + 1].wr;
- }
- }
-
- ret = ib_post_send(ch->qp, &ioctx->rdma_wrs->wr, &bad_wr);
- if (ret)
- pr_err("%s[%d]: ib_post_send() returned %d for %d/%d\n",
- __func__, __LINE__, ret, i, n_rdma);
-out:
- if (unlikely(dir == DMA_TO_DEVICE && ret < 0))
- atomic_add(n_rdma, &ch->sq_wr_avail);
- return ret;
-}
-
-/**
- * srpt_xfer_data() - Start data transfer from initiator to target.
- */
-static int srpt_xfer_data(struct srpt_rdma_ch *ch,
- struct srpt_send_ioctx *ioctx)
-{
- int ret;
-
- ret = srpt_map_sg_to_ib_sge(ch, ioctx);
- if (ret) {
- pr_err("%s[%d] ret=%d\n", __func__, __LINE__, ret);
- goto out;
- }
-
- ret = srpt_perform_rdmas(ch, ioctx);
- if (ret) {
- if (ret == -EAGAIN || ret == -ENOMEM)
- pr_info("%s[%d] queue full -- ret=%d\n",
- __func__, __LINE__, ret);
- else
- pr_err("%s[%d] fatal error -- ret=%d\n",
- __func__, __LINE__, ret);
- goto out_unmap;
- }
-
-out:
- return ret;
-out_unmap:
- srpt_unmap_sg_to_ib_sge(ch, ioctx);
- goto out;
-}
-
static int srpt_write_pending_status(struct se_cmd *se_cmd)
{
struct srpt_send_ioctx *ioctx;
@@ -2491,11 +2201,42 @@ static int srpt_write_pending(struct se_cmd *se_cmd)
struct srpt_send_ioctx *ioctx =
container_of(se_cmd, struct srpt_send_ioctx, cmd);
struct srpt_rdma_ch *ch = ioctx->ch;
+ struct ib_send_wr *first_wr = NULL, *bad_wr;
+ struct ib_cqe *cqe = &ioctx->rdma_cqe;
enum srpt_command_state new_state;
+ int ret, i;
new_state = srpt_set_cmd_state(ioctx, SRPT_STATE_NEED_DATA);
WARN_ON(new_state == SRPT_STATE_DONE);
- return srpt_xfer_data(ch, ioctx);
+
+ if (atomic_sub_return(ioctx->n_rdma, &ch->sq_wr_avail) < 0) {
+ pr_warn("%s: IB send queue full (needed %d)\n",
+ __func__, ioctx->n_rdma);
+ ret = -ENOMEM;
+ goto out_undo;
+ }
+
+ cqe->done = srpt_rdma_read_done;
+ for (i = ioctx->n_rw_ctx - 1; i >= 0; i--) {
+ struct srpt_rw_ctx *ctx = &ioctx->rw_ctxs[i];
+
+ first_wr = rdma_rw_ctx_wrs(&ctx->rw, ch->qp, ch->sport->port,
+ cqe, first_wr);
+ cqe = NULL;
+ }
+
+ ret = ib_post_send(ch->qp, first_wr, &bad_wr);
+ if (ret) {
+ pr_err("%s: ib_post_send() returned %d for %d (avail: %d)\n",
+ __func__, ret, ioctx->n_rdma,
+ atomic_read(&ch->sq_wr_avail));
+ goto out_undo;
+ }
+
+ return 0;
+out_undo:
+ atomic_add(ioctx->n_rdma, &ch->sq_wr_avail);
+ return ret;
}
static u8 tcm_to_srp_tsk_mgmt_status(const int tcm_mgmt_status)
@@ -2517,17 +2258,17 @@ static u8 tcm_to_srp_tsk_mgmt_status(const int tcm_mgmt_status)
*/
static void srpt_queue_response(struct se_cmd *cmd)
{
- struct srpt_rdma_ch *ch;
- struct srpt_send_ioctx *ioctx;
+ struct srpt_send_ioctx *ioctx =
+ container_of(cmd, struct srpt_send_ioctx, cmd);
+ struct srpt_rdma_ch *ch = ioctx->ch;
+ struct srpt_device *sdev = ch->sport->sdev;
+ struct ib_send_wr send_wr, *first_wr = NULL, *bad_wr;
+ struct ib_sge sge;
enum srpt_command_state state;
unsigned long flags;
- int ret;
- enum dma_data_direction dir;
- int resp_len;
+ int resp_len, ret, i;
u8 srp_tm_status;
- ioctx = container_of(cmd, struct srpt_send_ioctx, cmd);
- ch = ioctx->ch;
BUG_ON(!ch);
spin_lock_irqsave(&ioctx->spinlock, flags);
@@ -2554,17 +2295,19 @@ static void srpt_queue_response(struct se_cmd *cmd)
return;
}
- dir = ioctx->cmd.data_direction;
-
/* For read commands, transfer the data to the initiator. */
- if (dir == DMA_FROM_DEVICE && ioctx->cmd.data_length &&
+ if (ioctx->cmd.data_direction == DMA_FROM_DEVICE &&
+ ioctx->cmd.data_length &&
!ioctx->queue_status_only) {
- ret = srpt_xfer_data(ch, ioctx);
- if (ret) {
- pr_err("xfer_data failed for tag %llu\n",
- ioctx->cmd.tag);
- return;
+ for (i = ioctx->n_rw_ctx - 1; i >= 0; i--) {
+ struct srpt_rw_ctx *ctx = &ioctx->rw_ctxs[i];
+
+ first_wr = rdma_rw_ctx_wrs(&ctx->rw, ch->qp,
+ ch->sport->port, NULL,
+ first_wr ? first_wr : &send_wr);
}
+ } else {
+ first_wr = &send_wr;
}
if (state != SRPT_STATE_MGMT)
@@ -2576,14 +2319,46 @@ static void srpt_queue_response(struct se_cmd *cmd)
resp_len = srpt_build_tskmgmt_rsp(ch, ioctx, srp_tm_status,
ioctx->cmd.tag);
}
- ret = srpt_post_send(ch, ioctx, resp_len);
- if (ret) {
- pr_err("sending cmd response failed for tag %llu\n",
- ioctx->cmd.tag);
- srpt_unmap_sg_to_ib_sge(ch, ioctx);
- srpt_set_cmd_state(ioctx, SRPT_STATE_DONE);
- target_put_sess_cmd(&ioctx->cmd);
+
+ atomic_inc(&ch->req_lim);
+
+ if (unlikely(atomic_sub_return(1 + ioctx->n_rdma,
+ &ch->sq_wr_avail) < 0)) {
+ pr_warn("%s: IB send queue full (needed %d)\n",
+ __func__, ioctx->n_rdma);
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ ib_dma_sync_single_for_device(sdev->device, ioctx->ioctx.dma, resp_len,
+ DMA_TO_DEVICE);
+
+ sge.addr = ioctx->ioctx.dma;
+ sge.length = resp_len;
+ sge.lkey = sdev->pd->local_dma_lkey;
+
+ ioctx->ioctx.cqe.done = srpt_send_done;
+ send_wr.next = NULL;
+ send_wr.wr_cqe = &ioctx->ioctx.cqe;
+ send_wr.sg_list = &sge;
+ send_wr.num_sge = 1;
+ send_wr.opcode = IB_WR_SEND;
+ send_wr.send_flags = IB_SEND_SIGNALED;
+
+ ret = ib_post_send(ch->qp, first_wr, &bad_wr);
+ if (ret < 0) {
+ pr_err("%s: sending cmd response failed for tag %llu (%d)\n",
+ __func__, ioctx->cmd.tag, ret);
+ goto out;
}
+
+ return;
+
+out:
+ atomic_add(1 + ioctx->n_rdma, &ch->sq_wr_avail);
+ atomic_dec(&ch->req_lim);
+ srpt_set_cmd_state(ioctx, SRPT_STATE_DONE);
+ target_put_sess_cmd(&ioctx->cmd);
}
static int srpt_queue_data_in(struct se_cmd *cmd)
@@ -2599,10 +2374,6 @@ static void srpt_queue_tm_rsp(struct se_cmd *cmd)
static void srpt_aborted_task(struct se_cmd *cmd)
{
- struct srpt_send_ioctx *ioctx = container_of(cmd,
- struct srpt_send_ioctx, cmd);
-
- srpt_unmap_sg_to_ib_sge(ioctx->ch, ioctx);
}
static int srpt_queue_status(struct se_cmd *cmd)
@@ -2903,12 +2674,10 @@ static void srpt_release_cmd(struct se_cmd *se_cmd)
unsigned long flags;
WARN_ON(ioctx->state != SRPT_STATE_DONE);
- WARN_ON(ioctx->mapped_sg_count != 0);
- if (ioctx->n_rbuf > 1) {
- kfree(ioctx->rbufs);
- ioctx->rbufs = NULL;
- ioctx->n_rbuf = 0;
+ if (ioctx->n_rw_ctx) {
+ srpt_free_rw_ctxs(ch, ioctx);
+ ioctx->n_rw_ctx = 0;
}
spin_lock_irqsave(&ch->spinlock, flags);
@@ -3287,7 +3056,6 @@ static const struct target_core_fabric_ops srpt_template = {
.tpg_get_inst_index = srpt_tpg_get_inst_index,
.release_cmd = srpt_release_cmd,
.check_stop_free = srpt_check_stop_free,
- .shutdown_session = srpt_shutdown_session,
.close_session = srpt_close_session,
.sess_get_index = srpt_sess_get_index,
.sess_get_initiator_sid = NULL,
diff --git a/drivers/infiniband/ulp/srpt/ib_srpt.h b/drivers/infiniband/ulp/srpt/ib_srpt.h
index af9b8b527340..fee6bfd7ca21 100644
--- a/drivers/infiniband/ulp/srpt/ib_srpt.h
+++ b/drivers/infiniband/ulp/srpt/ib_srpt.h
@@ -42,6 +42,7 @@
#include <rdma/ib_verbs.h>
#include <rdma/ib_sa.h>
#include <rdma/ib_cm.h>
+#include <rdma/rw.h>
#include <scsi/srp.h>
@@ -105,7 +106,6 @@ enum {
SRP_LOGIN_RSP_MULTICHAN_MAINTAINED = 0x2,
SRPT_DEF_SG_TABLESIZE = 128,
- SRPT_DEF_SG_PER_WQE = 16,
MIN_SRPT_SQ_SIZE = 16,
DEF_SRPT_SQ_SIZE = 4096,
@@ -174,21 +174,17 @@ struct srpt_recv_ioctx {
struct srpt_ioctx ioctx;
struct list_head wait_list;
};
+
+struct srpt_rw_ctx {
+ struct rdma_rw_ctx rw;
+ struct scatterlist *sg;
+ unsigned int nents;
+};
/**
* struct srpt_send_ioctx - SRPT send I/O context.
* @ioctx: See above.
* @ch: Channel pointer.
- * @free_list: Node in srpt_rdma_ch.free_list.
- * @n_rbuf: Number of data buffers in the received SRP command.
- * @rbufs: Pointer to SRP data buffer array.
- * @single_rbuf: SRP data buffer if the command has only a single buffer.
- * @sg: Pointer to sg-list associated with this I/O context.
- * @sg_cnt: SG-list size.
- * @mapped_sg_count: ib_dma_map_sg() return value.
- * @n_rdma_wrs: Number of elements in the rdma_wrs array.
- * @rdma_wrs: Array with information about the RDMA mapping.
- * @tag: Tag of the received SRP information unit.
* @spinlock: Protects 'state'.
* @state: I/O context state.
* @cmd: Target core command data structure.
@@ -197,21 +193,18 @@ struct srpt_recv_ioctx {
struct srpt_send_ioctx {
struct srpt_ioctx ioctx;
struct srpt_rdma_ch *ch;
- struct ib_rdma_wr *rdma_wrs;
+
+ struct srpt_rw_ctx s_rw_ctx;
+ struct srpt_rw_ctx *rw_ctxs;
+
struct ib_cqe rdma_cqe;
- struct srp_direct_buf *rbufs;
- struct srp_direct_buf single_rbuf;
- struct scatterlist *sg;
struct list_head free_list;
spinlock_t spinlock;
enum srpt_command_state state;
struct se_cmd cmd;
struct completion tx_done;
- int sg_cnt;
- int mapped_sg_count;
- u16 n_rdma_wrs;
u8 n_rdma;
- u8 n_rbuf;
+ u8 n_rw_ctx;
bool queue_status_only;
u8 sense_data[TRANSPORT_SENSE_BUFFER];
};
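The rewritten srpt_write_pending() and srpt_queue_response() above reserve send-queue slots up front with atomic_sub_return() (n_rdma for the RDMA reads, 1 + n_rdma for the response plus its RDMA writes) and give them back on error or from srpt_send_done(). A minimal user-space model of that credit accounting, with invented helper names, just to make the full-queue handling concrete:

/* Stand-alone C11 sketch of the sq_wr_avail credit scheme; reserve_sq() and
 * release_sq() are made-up names, not driver functions. */
#include <stdatomic.h>
#include <stdio.h>

static atomic_int sq_wr_avail = 64;              /* e.g. srp_sq_size / 2 */

/* Take n send-queue slots; undo and fail if the count would go negative,
 * mirroring the atomic_sub_return() < 0 checks in the driver. */
static int reserve_sq(int n)
{
	if (atomic_fetch_sub(&sq_wr_avail, n) - n < 0) {
		atomic_fetch_add(&sq_wr_avail, n);   /* queue full, give back */
		return -1;
	}
	return 0;
}

/* The send completion (srpt_send_done() above) returns 1 + n_rdma slots. */
static void release_sq(int n)
{
	atomic_fetch_add(&sq_wr_avail, n);
}

int main(void)
{
	int n_rdma = 3;

	if (reserve_sq(1 + n_rdma) == 0) {           /* response + RDMA WRs */
		printf("posted, %d slots left\n", atomic_load(&sq_wr_avail));
		release_sq(1 + n_rdma);              /* as if the completion fired */
	}
	printf("after completion: %d slots\n", atomic_load(&sq_wr_avail));
	return 0;
}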
diff --git a/drivers/input/joystick/analog.c b/drivers/input/joystick/analog.c
index 6f8b084e13d0..3d8ff09eba57 100644
--- a/drivers/input/joystick/analog.c
+++ b/drivers/input/joystick/analog.c
@@ -143,9 +143,9 @@ struct analog_port {
#include <linux/i8253.h>
-#define GET_TIME(x) do { if (cpu_has_tsc) x = (unsigned int)rdtsc(); else x = get_time_pit(); } while (0)
-#define DELTA(x,y) (cpu_has_tsc ? ((y) - (x)) : ((x) - (y) + ((x) < (y) ? PIT_TICK_RATE / HZ : 0)))
-#define TIME_NAME (cpu_has_tsc?"TSC":"PIT")
+#define GET_TIME(x) do { if (boot_cpu_has(X86_FEATURE_TSC)) x = (unsigned int)rdtsc(); else x = get_time_pit(); } while (0)
+#define DELTA(x,y) (boot_cpu_has(X86_FEATURE_TSC) ? ((y) - (x)) : ((x) - (y) + ((x) < (y) ? PIT_TICK_RATE / HZ : 0)))
+#define TIME_NAME (boot_cpu_has(X86_FEATURE_TSC)?"TSC":"PIT")
static unsigned int get_time_pit(void)
{
unsigned long flags;
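In the analog.c hunk above only the TSC test changes; the PIT fallback still has to cope with the counter reloading every timer tick, which is what the ((x) < (y) ? PIT_TICK_RATE / HZ : 0) term compensates for. A standalone check of that arithmetic (PIT_TICK_RATE is the i8253 input clock; the HZ value and the sample values are made up for the sketch):

#include <stdio.h>

#define PIT_TICK_RATE 1193182UL    /* i8253 input clock in Hz */
#define HZ 250                     /* assumed tick rate, illustration only */

/* PIT branch of DELTA(): when the samples straddle a counter reload
 * ((x) < (y)), add one reload interval (PIT_TICK_RATE / HZ counts) back in. */
#define DELTA(x, y) ((x) - (y) + ((x) < (y) ? PIT_TICK_RATE / HZ : 0))

int main(void)
{
	unsigned int x = 100, y = 4500;    /* samples straddling a reload */
	unsigned int d = DELTA(x, y);      /* truncation to unsigned int mirrors the driver */

	printf("delta = %u counts (reload interval = %lu)\n", d, PIT_TICK_RATE / HZ);
	return 0;
}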
diff --git a/drivers/input/joystick/xpad.c b/drivers/input/joystick/xpad.c
index 1142a93dd90b..804dbcc37d3f 100644
--- a/drivers/input/joystick/xpad.c
+++ b/drivers/input/joystick/xpad.c
@@ -87,7 +87,7 @@
#define DRIVER_AUTHOR "Marko Friedemann <mfr@bmx-chemnitz.de>"
#define DRIVER_DESC "X-Box pad driver"
-#define XPAD_PKT_LEN 32
+#define XPAD_PKT_LEN 64
/* xbox d-pads should map to buttons, as is required for DDR pads
but we map them to axes when possible to simplify things */
@@ -129,6 +129,7 @@ static const struct xpad_device {
{ 0x045e, 0x028e, "Microsoft X-Box 360 pad", 0, XTYPE_XBOX360 },
{ 0x045e, 0x02d1, "Microsoft X-Box One pad", 0, XTYPE_XBOXONE },
{ 0x045e, 0x02dd, "Microsoft X-Box One pad (Firmware 2015)", 0, XTYPE_XBOXONE },
+ { 0x045e, 0x02e3, "Microsoft X-Box One Elite pad", 0, XTYPE_XBOXONE },
{ 0x045e, 0x0291, "Xbox 360 Wireless Receiver (XBOX)", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360W },
{ 0x045e, 0x0719, "Xbox 360 Wireless Receiver", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360W },
{ 0x044f, 0x0f07, "Thrustmaster, Inc. Controller", 0, XTYPE_XBOX },
@@ -173,9 +174,11 @@ static const struct xpad_device {
{ 0x0e6f, 0x0006, "Edge wireless Controller", 0, XTYPE_XBOX },
{ 0x0e6f, 0x0105, "HSM3 Xbox360 dancepad", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 },
{ 0x0e6f, 0x0113, "Afterglow AX.1 Gamepad for Xbox 360", 0, XTYPE_XBOX360 },
+ { 0x0e6f, 0x0139, "Afterglow Prismatic Wired Controller", 0, XTYPE_XBOXONE },
{ 0x0e6f, 0x0201, "Pelican PL-3601 'TSZ' Wired Xbox 360 Controller", 0, XTYPE_XBOX360 },
{ 0x0e6f, 0x0213, "Afterglow Gamepad for Xbox 360", 0, XTYPE_XBOX360 },
{ 0x0e6f, 0x021f, "Rock Candy Gamepad for Xbox 360", 0, XTYPE_XBOX360 },
+ { 0x0e6f, 0x0146, "Rock Candy Wired Controller for Xbox One", 0, XTYPE_XBOXONE },
{ 0x0e6f, 0x0301, "Logic3 Controller", 0, XTYPE_XBOX360 },
{ 0x0e6f, 0x0401, "Logic3 Controller", 0, XTYPE_XBOX360 },
{ 0x0e8f, 0x0201, "SmartJoy Frag Xpad/PS2 adaptor", 0, XTYPE_XBOX },
@@ -183,6 +186,7 @@ static const struct xpad_device {
{ 0x0f0d, 0x000a, "Hori Co. DOA4 FightStick", 0, XTYPE_XBOX360 },
{ 0x0f0d, 0x000d, "Hori Fighting Stick EX2", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
{ 0x0f0d, 0x0016, "Hori Real Arcade Pro.EX", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
+ { 0x0f0d, 0x0067, "HORIPAD ONE", 0, XTYPE_XBOXONE },
{ 0x0f30, 0x0202, "Joytech Advanced Controller", 0, XTYPE_XBOX },
{ 0x0f30, 0x8888, "BigBen XBMiniPad Controller", 0, XTYPE_XBOX },
{ 0x102c, 0xff0c, "Joytech Wireless Advanced Controller", 0, XTYPE_XBOX },
@@ -199,6 +203,7 @@ static const struct xpad_device {
{ 0x162e, 0xbeef, "Joytech Neo-Se Take2", 0, XTYPE_XBOX360 },
{ 0x1689, 0xfd00, "Razer Onza Tournament Edition", 0, XTYPE_XBOX360 },
{ 0x1689, 0xfd01, "Razer Onza Classic Edition", 0, XTYPE_XBOX360 },
+ { 0x24c6, 0x542a, "Xbox ONE spectra", 0, XTYPE_XBOXONE },
{ 0x24c6, 0x5d04, "Razer Sabertooth", 0, XTYPE_XBOX360 },
{ 0x1bad, 0x0002, "Harmonix Rock Band Guitar", 0, XTYPE_XBOX360 },
{ 0x1bad, 0x0003, "Harmonix Rock Band Drumkit", MAP_DPAD_TO_BUTTONS, XTYPE_XBOX360 },
@@ -212,6 +217,8 @@ static const struct xpad_device {
{ 0x24c6, 0x5000, "Razer Atrox Arcade Stick", MAP_TRIGGERS_TO_BUTTONS, XTYPE_XBOX360 },
{ 0x24c6, 0x5300, "PowerA MINI PROEX Controller", 0, XTYPE_XBOX360 },
{ 0x24c6, 0x5303, "Xbox Airflo wired controller", 0, XTYPE_XBOX360 },
+ { 0x24c6, 0x541a, "PowerA Xbox One Mini Wired Controller", 0, XTYPE_XBOXONE },
+ { 0x24c6, 0x543a, "PowerA Xbox One wired controller", 0, XTYPE_XBOXONE },
{ 0x24c6, 0x5500, "Hori XBOX 360 EX 2 with Turbo", 0, XTYPE_XBOX360 },
{ 0x24c6, 0x5501, "Hori Real Arcade Pro VX-SA", 0, XTYPE_XBOX360 },
{ 0x24c6, 0x5506, "Hori SOULCALIBUR V Stick", 0, XTYPE_XBOX360 },
@@ -307,13 +314,16 @@ static struct usb_device_id xpad_table[] = {
{ USB_DEVICE(0x0738, 0x4540) }, /* Mad Catz Beat Pad */
XPAD_XBOXONE_VENDOR(0x0738), /* Mad Catz FightStick TE 2 */
XPAD_XBOX360_VENDOR(0x0e6f), /* 0x0e6f X-Box 360 controllers */
+ XPAD_XBOXONE_VENDOR(0x0e6f), /* 0x0e6f X-Box One controllers */
XPAD_XBOX360_VENDOR(0x12ab), /* X-Box 360 dance pads */
XPAD_XBOX360_VENDOR(0x1430), /* RedOctane X-Box 360 controllers */
XPAD_XBOX360_VENDOR(0x146b), /* BigBen Interactive Controllers */
XPAD_XBOX360_VENDOR(0x1bad), /* Harmonix Rock Band Guitar and Drums */
XPAD_XBOX360_VENDOR(0x0f0d), /* Hori Controllers */
+ XPAD_XBOXONE_VENDOR(0x0f0d), /* Hori Controllers */
XPAD_XBOX360_VENDOR(0x1689), /* Razer Onza */
XPAD_XBOX360_VENDOR(0x24c6), /* PowerA Controllers */
+ XPAD_XBOXONE_VENDOR(0x24c6), /* PowerA Controllers */
XPAD_XBOX360_VENDOR(0x1532), /* Razer Sabertooth */
XPAD_XBOX360_VENDOR(0x15e4), /* Numark X-Box 360 controllers */
XPAD_XBOX360_VENDOR(0x162e), /* Joytech X-Box 360 controllers */
@@ -457,6 +467,10 @@ static void xpad_process_packet(struct usb_xpad *xpad, u16 cmd, unsigned char *d
static void xpad360_process_packet(struct usb_xpad *xpad, struct input_dev *dev,
u16 cmd, unsigned char *data)
{
+ /* ignore packets without valid pad data */
+ if (data[0] != 0x00)
+ return;
+
/* digital pad */
if (xpad->mapping & MAP_DPAD_TO_BUTTONS) {
/* dpad as buttons (left, right, up, down) */
@@ -756,6 +770,7 @@ static bool xpad_prepare_next_out_packet(struct usb_xpad *xpad)
if (packet) {
memcpy(xpad->odata, packet->data, packet->len);
xpad->irq_out->transfer_buffer_length = packet->len;
+ packet->pending = false;
return true;
}
@@ -797,7 +812,6 @@ static void xpad_irq_out(struct urb *urb)
switch (status) {
case 0:
/* success */
- xpad->out_packets[xpad->last_out_packet].pending = false;
xpad->irq_out_active = xpad_prepare_next_out_packet(xpad);
break;
diff --git a/drivers/input/keyboard/adp5588-keys.c b/drivers/input/keyboard/adp5588-keys.c
index 21a62d0fa764..53fe9a3fb620 100644
--- a/drivers/input/keyboard/adp5588-keys.c
+++ b/drivers/input/keyboard/adp5588-keys.c
@@ -73,7 +73,7 @@ static int adp5588_write(struct i2c_client *client, u8 reg, u8 val)
#ifdef CONFIG_GPIOLIB
static int adp5588_gpio_get_value(struct gpio_chip *chip, unsigned off)
{
- struct adp5588_kpad *kpad = container_of(chip, struct adp5588_kpad, gc);
+ struct adp5588_kpad *kpad = gpiochip_get_data(chip);
unsigned int bank = ADP5588_BANK(kpad->gpiomap[off]);
unsigned int bit = ADP5588_BIT(kpad->gpiomap[off]);
int val;
@@ -93,7 +93,7 @@ static int adp5588_gpio_get_value(struct gpio_chip *chip, unsigned off)
static void adp5588_gpio_set_value(struct gpio_chip *chip,
unsigned off, int val)
{
- struct adp5588_kpad *kpad = container_of(chip, struct adp5588_kpad, gc);
+ struct adp5588_kpad *kpad = gpiochip_get_data(chip);
unsigned int bank = ADP5588_BANK(kpad->gpiomap[off]);
unsigned int bit = ADP5588_BIT(kpad->gpiomap[off]);
@@ -112,7 +112,7 @@ static void adp5588_gpio_set_value(struct gpio_chip *chip,
static int adp5588_gpio_direction_input(struct gpio_chip *chip, unsigned off)
{
- struct adp5588_kpad *kpad = container_of(chip, struct adp5588_kpad, gc);
+ struct adp5588_kpad *kpad = gpiochip_get_data(chip);
unsigned int bank = ADP5588_BANK(kpad->gpiomap[off]);
unsigned int bit = ADP5588_BIT(kpad->gpiomap[off]);
int ret;
@@ -130,7 +130,7 @@ static int adp5588_gpio_direction_input(struct gpio_chip *chip, unsigned off)
static int adp5588_gpio_direction_output(struct gpio_chip *chip,
unsigned off, int val)
{
- struct adp5588_kpad *kpad = container_of(chip, struct adp5588_kpad, gc);
+ struct adp5588_kpad *kpad = gpiochip_get_data(chip);
unsigned int bank = ADP5588_BANK(kpad->gpiomap[off]);
unsigned int bit = ADP5588_BIT(kpad->gpiomap[off]);
int ret;
@@ -210,7 +210,7 @@ static int adp5588_gpio_add(struct adp5588_kpad *kpad)
mutex_init(&kpad->gpio_lock);
- error = gpiochip_add(&kpad->gc);
+ error = gpiochip_add_data(&kpad->gc, kpad);
if (error) {
dev_err(dev, "gpiochip_add failed, err: %d\n", error);
return error;
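The adp5588 change above (and the adp5589/ad7879 ones further down) swaps open-coded container_of() lookups for gpiochip_add_data()/gpiochip_get_data(), which stash and return a driver pointer instead of relying on the gpio_chip being embedded in the driver struct. A plain user-space illustration of the two lookup styles, with struct names invented for the sketch:

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct chip { void *data; };              /* stand-in for struct gpio_chip */
struct kpad { int id; struct chip gc; };  /* driver data embedding the chip */

/* Old style: recover the parent from the embedded member's address. */
static struct kpad *kpad_from_embedded(struct chip *c)
{
	return container_of(c, struct kpad, gc);
}

/* New style: the core keeps a back-pointer registered up front, which is
 * conceptually what gpiochip_add_data()/gpiochip_get_data() provide. */
static struct kpad *kpad_from_data(struct chip *c)
{
	return c->data;
}

int main(void)
{
	struct kpad k = { .id = 42 };

	k.gc.data = &k;    /* analogue of gpiochip_add_data(&k.gc, &k) */
	printf("%d %d\n", kpad_from_embedded(&k.gc)->id, kpad_from_data(&k.gc)->id);
	return 0;
}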
diff --git a/drivers/input/keyboard/adp5589-keys.c b/drivers/input/keyboard/adp5589-keys.c
index c01a1d648f9f..32d94c63dc33 100644
--- a/drivers/input/keyboard/adp5589-keys.c
+++ b/drivers/input/keyboard/adp5589-keys.c
@@ -387,7 +387,7 @@ static int adp5589_write(struct i2c_client *client, u8 reg, u8 val)
#ifdef CONFIG_GPIOLIB
static int adp5589_gpio_get_value(struct gpio_chip *chip, unsigned off)
{
- struct adp5589_kpad *kpad = container_of(chip, struct adp5589_kpad, gc);
+ struct adp5589_kpad *kpad = gpiochip_get_data(chip);
unsigned int bank = kpad->var->bank(kpad->gpiomap[off]);
unsigned int bit = kpad->var->bit(kpad->gpiomap[off]);
@@ -399,7 +399,7 @@ static int adp5589_gpio_get_value(struct gpio_chip *chip, unsigned off)
static void adp5589_gpio_set_value(struct gpio_chip *chip,
unsigned off, int val)
{
- struct adp5589_kpad *kpad = container_of(chip, struct adp5589_kpad, gc);
+ struct adp5589_kpad *kpad = gpiochip_get_data(chip);
unsigned int bank = kpad->var->bank(kpad->gpiomap[off]);
unsigned int bit = kpad->var->bit(kpad->gpiomap[off]);
@@ -418,7 +418,7 @@ static void adp5589_gpio_set_value(struct gpio_chip *chip,
static int adp5589_gpio_direction_input(struct gpio_chip *chip, unsigned off)
{
- struct adp5589_kpad *kpad = container_of(chip, struct adp5589_kpad, gc);
+ struct adp5589_kpad *kpad = gpiochip_get_data(chip);
unsigned int bank = kpad->var->bank(kpad->gpiomap[off]);
unsigned int bit = kpad->var->bit(kpad->gpiomap[off]);
int ret;
@@ -438,7 +438,7 @@ static int adp5589_gpio_direction_input(struct gpio_chip *chip, unsigned off)
static int adp5589_gpio_direction_output(struct gpio_chip *chip,
unsigned off, int val)
{
- struct adp5589_kpad *kpad = container_of(chip, struct adp5589_kpad, gc);
+ struct adp5589_kpad *kpad = gpiochip_get_data(chip);
unsigned int bank = kpad->var->bank(kpad->gpiomap[off]);
unsigned int bit = kpad->var->bit(kpad->gpiomap[off]);
int ret;
@@ -525,9 +525,9 @@ static int adp5589_gpio_add(struct adp5589_kpad *kpad)
mutex_init(&kpad->gpio_lock);
- error = gpiochip_add(&kpad->gc);
+ error = gpiochip_add_data(&kpad->gc, kpad);
if (error) {
- dev_err(dev, "gpiochip_add failed, err: %d\n", error);
+ dev_err(dev, "gpiochip_add_data() failed, err: %d\n", error);
return error;
}
diff --git a/drivers/input/keyboard/omap-keypad.c b/drivers/input/keyboard/omap-keypad.c
index e0d72c8c01e4..146b26f665f6 100644
--- a/drivers/input/keyboard/omap-keypad.c
+++ b/drivers/input/keyboard/omap-keypad.c
@@ -64,31 +64,6 @@ static DECLARE_TASKLET_DISABLED(kp_tasklet, omap_kp_tasklet, 0);
static unsigned int *row_gpios;
static unsigned int *col_gpios;
-#ifdef CONFIG_ARCH_OMAP2
-static void set_col_gpio_val(struct omap_kp *omap_kp, u8 value)
-{
- int col;
-
- for (col = 0; col < omap_kp->cols; col++)
- gpio_set_value(col_gpios[col], value & (1 << col));
-}
-
-static u8 get_row_gpio_val(struct omap_kp *omap_kp)
-{
- int row;
- u8 value = 0;
-
- for (row = 0; row < omap_kp->rows; row++) {
- if (gpio_get_value(row_gpios[row]))
- value |= (1 << row);
- }
- return value;
-}
-#else
-#define set_col_gpio_val(x, y) do {} while (0)
-#define get_row_gpio_val(x) 0
-#endif
-
static irqreturn_t omap_kp_interrupt(int irq, void *dev_id)
{
/* disable keyboard interrupt and schedule for handling */
@@ -133,7 +108,6 @@ static void omap_kp_tasklet(unsigned long data)
unsigned int row_shift = get_count_order(omap_kp_data->cols);
unsigned char new_state[8], changed, key_down = 0;
int col, row;
- int spurious = 0;
/* check for any changes */
omap_kp_scan_keypad(omap_kp_data, new_state);
@@ -170,12 +144,9 @@ static void omap_kp_tasklet(unsigned long data)
memcpy(keypad_state, new_state, sizeof(keypad_state));
if (key_down) {
- int delay = HZ / 20;
/* some key is pressed - keep irq disabled and use timer
* to poll the keypad */
- if (spurious)
- delay = 2 * HZ;
- mod_timer(&omap_kp_data->timer, jiffies + delay);
+ mod_timer(&omap_kp_data->timer, jiffies + HZ / 20);
} else {
/* enable interrupts */
omap_writew(0, OMAP1_MPUIO_BASE + OMAP_MPUIO_KBD_MASKIT);
@@ -216,25 +187,6 @@ static ssize_t omap_kp_enable_store(struct device *dev, struct device_attribute
static DEVICE_ATTR(enable, S_IRUGO | S_IWUSR, omap_kp_enable_show, omap_kp_enable_store);
-#ifdef CONFIG_PM
-static int omap_kp_suspend(struct platform_device *dev, pm_message_t state)
-{
- /* Nothing yet */
-
- return 0;
-}
-
-static int omap_kp_resume(struct platform_device *dev)
-{
- /* Nothing yet */
-
- return 0;
-}
-#else
-#define omap_kp_suspend NULL
-#define omap_kp_resume NULL
-#endif
-
static int omap_kp_probe(struct platform_device *pdev)
{
struct omap_kp *omap_kp;
@@ -371,8 +323,6 @@ static int omap_kp_remove(struct platform_device *pdev)
static struct platform_driver omap_kp_driver = {
.probe = omap_kp_probe,
.remove = omap_kp_remove,
- .suspend = omap_kp_suspend,
- .resume = omap_kp_resume,
.driver = {
.name = "omap-keypad",
},
diff --git a/drivers/input/keyboard/twl4030_keypad.c b/drivers/input/keyboard/twl4030_keypad.c
index bbcccd67247d..323a0fb575a4 100644
--- a/drivers/input/keyboard/twl4030_keypad.c
+++ b/drivers/input/keyboard/twl4030_keypad.c
@@ -61,9 +61,9 @@ struct twl4030_keypad {
unsigned short keymap[TWL4030_KEYMAP_SIZE];
u16 kp_state[TWL4030_MAX_ROWS];
bool autorepeat;
- unsigned n_rows;
- unsigned n_cols;
- unsigned irq;
+ unsigned int n_rows;
+ unsigned int n_cols;
+ unsigned int irq;
struct device *dbg_dev;
struct input_dev *input;
@@ -110,7 +110,7 @@ struct twl4030_keypad {
#define KEYP_CTRL_KBD_ON BIT(6)
/* KEYP_DEB, KEYP_LONG_KEY, KEYP_TIMEOUT_x*/
-#define KEYP_PERIOD_US(t, prescale) ((t) / (31 << (prescale + 1)) - 1)
+#define KEYP_PERIOD_US(t, prescale) ((t) / (31 << ((prescale) + 1)) - 1)
/* KEYP_LK_PTV_REG Fields */
#define KEYP_LK_PTV_PTV_SHIFT 5
@@ -162,9 +162,10 @@ static int twl4030_kpwrite_u8(struct twl4030_keypad *kp, u8 data, u32 reg)
static inline u16 twl4030_col_xlate(struct twl4030_keypad *kp, u8 col)
{
- /* If all bits in a row are active for all coloumns then
+ /*
+ * If all bits in a row are active for all columns then
* we have that row line connected to gnd. Mark this
- * key on as if it was on matrix position n_cols (ie
+ * key on as if it was on matrix position n_cols (i.e.
* one higher than the size of the matrix).
*/
if (col == 0xFF)
@@ -209,9 +210,9 @@ static void twl4030_kp_scan(struct twl4030_keypad *kp, bool release_all)
u16 new_state[TWL4030_MAX_ROWS];
int col, row;
- if (release_all)
+ if (release_all) {
memset(new_state, 0, sizeof(new_state));
- else {
+ } else {
/* check for any changes */
int ret = twl4030_read_kp_matrix_state(kp, new_state);
@@ -262,8 +263,10 @@ static irqreturn_t do_kp_irq(int irq, void *_kp)
/* Read & Clear TWL4030 pending interrupt */
ret = twl4030_kpread(kp, &reg, KEYP_ISR1, 1);
- /* Release all keys if I2C has gone bad or
- * the KEYP has gone to idle state */
+ /*
+ * Release all keys if I2C has gone bad or
+ * the KEYP has gone to idle state.
+ */
if (ret >= 0 && (reg & KEYP_IMR1_KP))
twl4030_kp_scan(kp, false);
else
@@ -283,7 +286,8 @@ static int twl4030_kp_program(struct twl4030_keypad *kp)
if (twl4030_kpwrite_u8(kp, reg, KEYP_CTRL) < 0)
return -EIO;
- /* NOTE: we could use sih_setup() here to package keypad
+ /*
+ * NOTE: we could use sih_setup() here to package keypad
* event sources as four different IRQs ... but we don't.
*/
@@ -312,7 +316,7 @@ static int twl4030_kp_program(struct twl4030_keypad *kp)
/*
* Enable Clear-on-Read; disable remembering events that fire
- * after the IRQ but before our handler acks (reads) them,
+ * after the IRQ but before our handler acks (reads) them.
*/
reg = TWL4030_SIH_CTRL_COR_MASK | TWL4030_SIH_CTRL_PENDDIS_MASK;
if (twl4030_kpwrite_u8(kp, reg, KEYP_SIH_CTRL) < 0)
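The parenthesised KEYP_PERIOD_US() fix above only matters when the macro is called with a compound prescale expression; the register value itself is just t / (31 << (prescale + 1)) - 1. A quick standalone check of the arithmetic (the 10 ms debounce value is an arbitrary example, not taken from the driver):

#include <stdio.h>

#define KEYP_PERIOD_US(t, prescale) ((t) / (31 << ((prescale) + 1)) - 1)

int main(void)
{
	/* 10000 us with prescale 2: 10000 / (31 << 3) - 1 = 10000 / 248 - 1 = 39 */
	printf("reg = %d\n", KEYP_PERIOD_US(10000, 2));
	/* Without the added parentheses, a prescale argument such as "p ? 1 : 2"
	 * would have expanded to 31 << (p ? 1 : 2 + 1) instead of
	 * 31 << ((p ? 1 : 2) + 1). */
	return 0;
}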
diff --git a/drivers/input/misc/cm109.c b/drivers/input/misc/cm109.c
index 9365535ba7f1..9cc6d057c302 100644
--- a/drivers/input/misc/cm109.c
+++ b/drivers/input/misc/cm109.c
@@ -76,8 +76,8 @@ enum {
BUZZER_ON = 1 << 5,
- /* up to 256 normal keys, up to 16 special keys */
- KEYMAP_SIZE = 256 + 16,
+ /* up to 256 normal keys, up to 15 special key combinations */
+ KEYMAP_SIZE = 256 + 15,
};
/* CM109 protocol packet */
@@ -139,7 +139,7 @@ static unsigned short special_keymap(int code)
{
if (code > 0xff) {
switch (code - 0xff) {
- case RECORD_MUTE: return KEY_MUTE;
+ case RECORD_MUTE: return KEY_MICMUTE;
case PLAYBACK_MUTE: return KEY_MUTE;
case VOLUME_DOWN: return KEY_VOLUMEDOWN;
case VOLUME_UP: return KEY_VOLUMEUP;
@@ -312,6 +312,32 @@ static void report_key(struct cm109_dev *dev, int key)
input_sync(idev);
}
+/*
+ * Converts data of special key presses (volume, mute) into events
+ * for the input subsystem, sends press-n-release for mute keys.
+ */
+static void cm109_report_special(struct cm109_dev *dev)
+{
+ static const u8 autorelease = RECORD_MUTE | PLAYBACK_MUTE;
+ struct input_dev *idev = dev->idev;
+ u8 data = dev->irq_data->byte[HID_IR0];
+ unsigned short keycode;
+ int i;
+
+ for (i = 0; i < 4; i++) {
+ keycode = dev->keymap[0xff + BIT(i)];
+ if (keycode == KEY_RESERVED)
+ continue;
+
+ input_report_key(idev, keycode, data & BIT(i));
+ if (data & autorelease & BIT(i)) {
+ input_sync(idev);
+ input_report_key(idev, keycode, 0);
+ }
+ }
+ input_sync(idev);
+}
+
/******************************************************************************
* CM109 usb communication interface
*****************************************************************************/
@@ -340,6 +366,7 @@ static void cm109_urb_irq_callback(struct urb *urb)
struct cm109_dev *dev = urb->context;
const int status = urb->status;
int error;
+ unsigned long flags;
dev_dbg(&dev->intf->dev, "### URB IRQ: [0x%02x 0x%02x 0x%02x 0x%02x] keybit=0x%02x\n",
dev->irq_data->byte[0],
@@ -357,10 +384,7 @@ static void cm109_urb_irq_callback(struct urb *urb)
}
/* Special keys */
- if (dev->irq_data->byte[HID_IR0] & 0x0f) {
- const int code = (dev->irq_data->byte[HID_IR0] & 0x0f);
- report_key(dev, dev->keymap[0xff + code]);
- }
+ cm109_report_special(dev);
/* Scan key column */
if (dev->keybit == 0xf) {
@@ -381,7 +405,7 @@ static void cm109_urb_irq_callback(struct urb *urb)
out:
- spin_lock(&dev->ctl_submit_lock);
+ spin_lock_irqsave(&dev->ctl_submit_lock, flags);
dev->irq_urb_pending = 0;
@@ -405,7 +429,7 @@ static void cm109_urb_irq_callback(struct urb *urb)
__func__, error);
}
- spin_unlock(&dev->ctl_submit_lock);
+ spin_unlock_irqrestore(&dev->ctl_submit_lock, flags);
}
static void cm109_urb_ctl_callback(struct urb *urb)
@@ -413,6 +437,7 @@ static void cm109_urb_ctl_callback(struct urb *urb)
struct cm109_dev *dev = urb->context;
const int status = urb->status;
int error;
+ unsigned long flags;
dev_dbg(&dev->intf->dev, "### URB CTL: [0x%02x 0x%02x 0x%02x 0x%02x]\n",
dev->ctl_data->byte[0],
@@ -427,7 +452,7 @@ static void cm109_urb_ctl_callback(struct urb *urb)
__func__, status);
}
- spin_lock(&dev->ctl_submit_lock);
+ spin_lock_irqsave(&dev->ctl_submit_lock, flags);
dev->ctl_urb_pending = 0;
@@ -448,7 +473,7 @@ static void cm109_urb_ctl_callback(struct urb *urb)
}
}
- spin_unlock(&dev->ctl_submit_lock);
+ spin_unlock_irqrestore(&dev->ctl_submit_lock, flags);
}
static void cm109_toggle_buzzer_async(struct cm109_dev *dev)
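cm109_report_special() above walks the low four bits of HID_IR0, looks up keymap[0xff + BIT(i)], and synthesises an immediate release for the two mute bits. A user-space sketch of that bit walk; the bit-to-key ordering and the example status byte are invented here, since the RECORD_MUTE/PLAYBACK_MUTE bit values are not part of this hunk:

#include <stdio.h>

#define BIT(n) (1U << (n))

int main(void)
{
	/* Hypothetical stand-in for keymap[0xff + BIT(i)]; the real order differs. */
	const char *key[4] = { "KEY_MICMUTE", "KEY_MUTE",
			       "KEY_VOLUMEDOWN", "KEY_VOLUMEUP" };
	unsigned autorelease = BIT(0) | BIT(1);   /* mute keys get press + release */
	unsigned data = BIT(1) | BIT(3);          /* pretend HID_IR0 status byte */

	for (int i = 0; i < 4; i++) {
		printf("%-14s %s\n", key[i], (data & BIT(i)) ? "pressed" : "released");
		if (data & autorelease & BIT(i))
			printf("%-14s released (auto)\n", key[i]);
	}
	return 0;
}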
diff --git a/drivers/input/misc/max77693-haptic.c b/drivers/input/misc/max77693-haptic.c
index 6d96bff32a0e..29ddeb7be84b 100644
--- a/drivers/input/misc/max77693-haptic.c
+++ b/drivers/input/misc/max77693-haptic.c
@@ -70,10 +70,13 @@ struct max77693_haptic {
static int max77693_haptic_set_duty_cycle(struct max77693_haptic *haptic)
{
- int delta = (haptic->pwm_dev->period + haptic->pwm_duty) / 2;
+ struct pwm_args pargs;
+ int delta;
int error;
- error = pwm_config(haptic->pwm_dev, delta, haptic->pwm_dev->period);
+ pwm_get_args(haptic->pwm_dev, &pargs);
+ delta = (pargs.period + haptic->pwm_duty) / 2;
+ error = pwm_config(haptic->pwm_dev, delta, pargs.period);
if (error) {
dev_err(haptic->dev, "failed to configure pwm: %d\n", error);
return error;
@@ -234,6 +237,7 @@ static int max77693_haptic_play_effect(struct input_dev *dev, void *data,
struct ff_effect *effect)
{
struct max77693_haptic *haptic = input_get_drvdata(dev);
+ struct pwm_args pargs;
u64 period_mag_multi;
haptic->magnitude = effect->u.rumble.strong_magnitude;
@@ -245,7 +249,8 @@ static int max77693_haptic_play_effect(struct input_dev *dev, void *data,
* The formula to convert magnitude to pwm_duty as follows:
* - pwm_duty = (magnitude * pwm_period) / MAX_MAGNITUDE(0xFFFF)
*/
- period_mag_multi = (u64)haptic->pwm_dev->period * haptic->magnitude;
+ pwm_get_args(haptic->pwm_dev, &pargs);
+ period_mag_multi = (u64)pargs.period * haptic->magnitude;
haptic->pwm_duty = (unsigned int)(period_mag_multi >>
MAX_MAGNITUDE_SHIFT);
@@ -329,6 +334,12 @@ static int max77693_haptic_probe(struct platform_device *pdev)
return PTR_ERR(haptic->pwm_dev);
}
+ /*
+ * FIXME: pwm_apply_args() should be removed when switching to the
+ * atomic PWM API.
+ */
+ pwm_apply_args(haptic->pwm_dev);
+
haptic->motor_reg = devm_regulator_get(&pdev->dev, "haptic");
if (IS_ERR(haptic->motor_reg)) {
dev_err(&pdev->dev, "failed to get regulator\n");
diff --git a/drivers/input/misc/max8997_haptic.c b/drivers/input/misc/max8997_haptic.c
index a806ba3818f7..99bc762881d5 100644
--- a/drivers/input/misc/max8997_haptic.c
+++ b/drivers/input/misc/max8997_haptic.c
@@ -255,12 +255,14 @@ static int max8997_haptic_probe(struct platform_device *pdev)
struct max8997_dev *iodev = dev_get_drvdata(pdev->dev.parent);
const struct max8997_platform_data *pdata =
dev_get_platdata(iodev->dev);
- const struct max8997_haptic_platform_data *haptic_pdata =
- pdata->haptic_pdata;
+ const struct max8997_haptic_platform_data *haptic_pdata = NULL;
struct max8997_haptic *chip;
struct input_dev *input_dev;
int error;
+ if (pdata)
+ haptic_pdata = pdata->haptic_pdata;
+
if (!haptic_pdata) {
dev_err(&pdev->dev, "no haptic platform data\n");
return -EINVAL;
@@ -304,6 +306,12 @@ static int max8997_haptic_probe(struct platform_device *pdev)
error);
goto err_free_mem;
}
+
+ /*
+ * FIXME: pwm_apply_args() should be removed when switching to
+ * the atomic PWM API.
+ */
+ pwm_apply_args(chip->pwm);
break;
default:
diff --git a/drivers/input/misc/pwm-beeper.c b/drivers/input/misc/pwm-beeper.c
index f2261ab54701..5f9655d49a65 100644
--- a/drivers/input/misc/pwm-beeper.c
+++ b/drivers/input/misc/pwm-beeper.c
@@ -20,21 +20,40 @@
#include <linux/platform_device.h>
#include <linux/pwm.h>
#include <linux/slab.h>
+#include <linux/workqueue.h>
struct pwm_beeper {
struct input_dev *input;
struct pwm_device *pwm;
+ struct work_struct work;
unsigned long period;
};
#define HZ_TO_NANOSECONDS(x) (1000000000UL/(x))
+static void __pwm_beeper_set(struct pwm_beeper *beeper)
+{
+ unsigned long period = beeper->period;
+
+ if (period) {
+ pwm_config(beeper->pwm, period / 2, period);
+ pwm_enable(beeper->pwm);
+ } else
+ pwm_disable(beeper->pwm);
+}
+
+static void pwm_beeper_work(struct work_struct *work)
+{
+ struct pwm_beeper *beeper =
+ container_of(work, struct pwm_beeper, work);
+
+ __pwm_beeper_set(beeper);
+}
+
static int pwm_beeper_event(struct input_dev *input,
unsigned int type, unsigned int code, int value)
{
- int ret = 0;
struct pwm_beeper *beeper = input_get_drvdata(input);
- unsigned long period;
if (type != EV_SND || value < 0)
return -EINVAL;
@@ -49,22 +68,31 @@ static int pwm_beeper_event(struct input_dev *input,
return -EINVAL;
}
- if (value == 0) {
- pwm_disable(beeper->pwm);
- } else {
- period = HZ_TO_NANOSECONDS(value);
- ret = pwm_config(beeper->pwm, period / 2, period);
- if (ret)
- return ret;
- ret = pwm_enable(beeper->pwm);
- if (ret)
- return ret;
- beeper->period = period;
- }
+ if (value == 0)
+ beeper->period = 0;
+ else
+ beeper->period = HZ_TO_NANOSECONDS(value);
+
+ schedule_work(&beeper->work);
return 0;
}
+static void pwm_beeper_stop(struct pwm_beeper *beeper)
+{
+ cancel_work_sync(&beeper->work);
+
+ if (beeper->period)
+ pwm_disable(beeper->pwm);
+}
+
+static void pwm_beeper_close(struct input_dev *input)
+{
+ struct pwm_beeper *beeper = input_get_drvdata(input);
+
+ pwm_beeper_stop(beeper);
+}
+
static int pwm_beeper_probe(struct platform_device *pdev)
{
unsigned long pwm_id = (unsigned long)dev_get_platdata(&pdev->dev);
@@ -87,6 +115,14 @@ static int pwm_beeper_probe(struct platform_device *pdev)
goto err_free;
}
+ /*
+ * FIXME: pwm_apply_args() should be removed when switching to
+ * the atomic PWM API.
+ */
+ pwm_apply_args(beeper->pwm);
+
+ INIT_WORK(&beeper->work, pwm_beeper_work);
+
beeper->input = input_allocate_device();
if (!beeper->input) {
dev_err(&pdev->dev, "Failed to allocate input device\n");
@@ -106,6 +142,7 @@ static int pwm_beeper_probe(struct platform_device *pdev)
beeper->input->sndbit[0] = BIT(SND_TONE) | BIT(SND_BELL);
beeper->input->event = pwm_beeper_event;
+ beeper->input->close = pwm_beeper_close;
input_set_drvdata(beeper->input, beeper);
@@ -135,7 +172,6 @@ static int pwm_beeper_remove(struct platform_device *pdev)
input_unregister_device(beeper->input);
- pwm_disable(beeper->pwm);
pwm_free(beeper->pwm);
kfree(beeper);
@@ -147,8 +183,7 @@ static int __maybe_unused pwm_beeper_suspend(struct device *dev)
{
struct pwm_beeper *beeper = dev_get_drvdata(dev);
- if (beeper->period)
- pwm_disable(beeper->pwm);
+ pwm_beeper_stop(beeper);
return 0;
}
@@ -157,10 +192,8 @@ static int __maybe_unused pwm_beeper_resume(struct device *dev)
{
struct pwm_beeper *beeper = dev_get_drvdata(dev);
- if (beeper->period) {
- pwm_config(beeper->pwm, beeper->period / 2, beeper->period);
- pwm_enable(beeper->pwm);
- }
+ if (beeper->period)
+ __pwm_beeper_set(beeper);
return 0;
}
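pwm_beeper_event() now only records the requested period and lets pwm_beeper_work() issue the pwm_config()/pwm_enable() pair, presumably because those calls can sleep while the event callback may run in atomic context. The period itself is just HZ_TO_NANOSECONDS() with a 50% duty cycle; a quick standalone check (440 Hz chosen arbitrarily):

#include <stdio.h>

#define HZ_TO_NANOSECONDS(x) (1000000000UL / (x))

int main(void)
{
	unsigned long value = 440;                        /* SND_TONE frequency in Hz */
	unsigned long period = HZ_TO_NANOSECONDS(value);  /* 2272727 ns */

	/* The work handler calls pwm_config(pwm, period / 2, period): 50% duty. */
	printf("period=%lu ns, duty=%lu ns\n", period, period / 2);
	return 0;
}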
diff --git a/drivers/input/misc/rotary_encoder.c b/drivers/input/misc/rotary_encoder.c
index 96c486de49e0..c7fc8d4fb080 100644
--- a/drivers/input/misc/rotary_encoder.c
+++ b/drivers/input/misc/rotary_encoder.c
@@ -47,13 +47,13 @@ struct rotary_encoder {
bool armed;
signed char dir; /* 1 - clockwise, -1 - CCW */
- unsigned last_stable;
+ unsigned int last_stable;
};
-static unsigned rotary_encoder_get_state(struct rotary_encoder *encoder)
+static unsigned int rotary_encoder_get_state(struct rotary_encoder *encoder)
{
int i;
- unsigned ret = 0;
+ unsigned int ret = 0;
for (i = 0; i < encoder->gpios->ndescs; ++i) {
int val = gpiod_get_value_cansleep(encoder->gpios->desc[i]);
@@ -100,7 +100,7 @@ static void rotary_encoder_report_event(struct rotary_encoder *encoder)
static irqreturn_t rotary_encoder_irq(int irq, void *dev_id)
{
struct rotary_encoder *encoder = dev_id;
- unsigned state;
+ unsigned int state;
mutex_lock(&encoder->access_mutex);
diff --git a/drivers/input/misc/twl6040-vibra.c b/drivers/input/misc/twl6040-vibra.c
index df3581f60628..5690eb7ff954 100644
--- a/drivers/input/misc/twl6040-vibra.c
+++ b/drivers/input/misc/twl6040-vibra.c
@@ -46,7 +46,7 @@ struct vibra_info {
struct device *dev;
struct input_dev *input_dev;
struct work_struct play_work;
- struct mutex mutex;
+
int irq;
bool enabled;
@@ -190,8 +190,6 @@ static void vibra_play_work(struct work_struct *work)
return;
}
- mutex_lock(&info->mutex);
-
if (info->weak_speed || info->strong_speed) {
if (!info->enabled)
twl6040_vibra_enable(info);
@@ -200,7 +198,6 @@ static void vibra_play_work(struct work_struct *work)
} else if (info->enabled)
twl6040_vibra_disable(info);
- mutex_unlock(&info->mutex);
}
static int vibra_play(struct input_dev *input, void *data,
@@ -223,12 +220,8 @@ static void twl6040_vibra_close(struct input_dev *input)
cancel_work_sync(&info->play_work);
- mutex_lock(&info->mutex);
-
if (info->enabled)
twl6040_vibra_disable(info);
-
- mutex_unlock(&info->mutex);
}
static int __maybe_unused twl6040_vibra_suspend(struct device *dev)
@@ -236,13 +229,11 @@ static int __maybe_unused twl6040_vibra_suspend(struct device *dev)
struct platform_device *pdev = to_platform_device(dev);
struct vibra_info *info = platform_get_drvdata(pdev);
- mutex_lock(&info->mutex);
+ cancel_work_sync(&info->play_work);
if (info->enabled)
twl6040_vibra_disable(info);
- mutex_unlock(&info->mutex);
-
return 0;
}
@@ -257,6 +248,7 @@ static int twl6040_vibra_probe(struct platform_device *pdev)
int vddvibr_uV = 0;
int error;
+ of_node_get(twl6040_core_dev->of_node);
twl6040_core_node = of_find_node_by_name(twl6040_core_dev->of_node,
"vibra");
if (!twl6040_core_node) {
@@ -300,8 +292,6 @@ static int twl6040_vibra_probe(struct platform_device *pdev)
return -EINVAL;
}
- mutex_init(&info->mutex);
-
error = devm_request_threaded_irq(&pdev->dev, info->irq, NULL,
twl6040_vib_irq_handler,
IRQF_ONESHOT,
diff --git a/drivers/input/misc/uinput.c b/drivers/input/misc/uinput.c
index abe1a927b332..65ebbd111702 100644
--- a/drivers/input/misc/uinput.c
+++ b/drivers/input/misc/uinput.c
@@ -981,9 +981,15 @@ static long uinput_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
}
#ifdef CONFIG_COMPAT
+
+#define UI_SET_PHYS_COMPAT _IOW(UINPUT_IOCTL_BASE, 108, compat_uptr_t)
+
static long uinput_compat_ioctl(struct file *file,
unsigned int cmd, unsigned long arg)
{
+ if (cmd == UI_SET_PHYS_COMPAT)
+ cmd = UI_SET_PHYS;
+
return uinput_ioctl_handler(file, cmd, arg, compat_ptr(arg));
}
#endif
diff --git a/drivers/input/mouse/byd.c b/drivers/input/mouse/byd.c
index fdc243ca93ed..b27aa637f877 100644
--- a/drivers/input/mouse/byd.c
+++ b/drivers/input/mouse/byd.c
@@ -2,6 +2,10 @@
* BYD TouchPad PS/2 mouse driver
*
* Copyright (C) 2015 Chris Diamand <chris@diamand.org>
+ * Copyright (C) 2015 Richard Pospesel
+ * Copyright (C) 2015 Tai Chi Minh Ralph Eastwood
+ * Copyright (C) 2015 Martin Wimpress
+ * Copyright (C) 2015 Jay Kuri
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License version 2 as published by
@@ -474,7 +478,6 @@ int byd_init(struct psmouse *psmouse)
if (!priv)
return -ENOMEM;
- memset(priv, 0, sizeof(*priv));
setup_timer(&priv->timer, byd_clear_touch, (unsigned long) psmouse);
psmouse->private = priv;
diff --git a/drivers/input/tablet/acecad.c b/drivers/input/tablet/acecad.c
index 889f6b77e8cb..e86e377a90f5 100644
--- a/drivers/input/tablet/acecad.c
+++ b/drivers/input/tablet/acecad.c
@@ -49,7 +49,6 @@ MODULE_LICENSE(DRIVER_LICENSE);
struct usb_acecad {
char name[128];
char phys[64];
- struct usb_device *usbdev;
struct usb_interface *intf;
struct input_dev *input;
struct urb *irq;
@@ -64,6 +63,7 @@ static void usb_acecad_irq(struct urb *urb)
unsigned char *data = acecad->data;
struct input_dev *dev = acecad->input;
struct usb_interface *intf = acecad->intf;
+ struct usb_device *udev = interface_to_usbdev(intf);
int prox, status;
switch (urb->status) {
@@ -110,15 +110,15 @@ resubmit:
if (status)
dev_err(&intf->dev,
"can't resubmit intr, %s-%s/input0, status %d\n",
- acecad->usbdev->bus->bus_name,
- acecad->usbdev->devpath, status);
+ udev->bus->bus_name,
+ udev->devpath, status);
}
static int usb_acecad_open(struct input_dev *dev)
{
struct usb_acecad *acecad = input_get_drvdata(dev);
- acecad->irq->dev = acecad->usbdev;
+ acecad->irq->dev = interface_to_usbdev(acecad->intf);
if (usb_submit_urb(acecad->irq, GFP_KERNEL))
return -EIO;
@@ -172,7 +172,6 @@ static int usb_acecad_probe(struct usb_interface *intf, const struct usb_device_
goto fail2;
}
- acecad->usbdev = dev;
acecad->intf = intf;
acecad->input = input_dev;
@@ -251,12 +250,13 @@ static int usb_acecad_probe(struct usb_interface *intf, const struct usb_device_
static void usb_acecad_disconnect(struct usb_interface *intf)
{
struct usb_acecad *acecad = usb_get_intfdata(intf);
+ struct usb_device *udev = interface_to_usbdev(intf);
usb_set_intfdata(intf, NULL);
input_unregister_device(acecad->input);
usb_free_urb(acecad->irq);
- usb_free_coherent(acecad->usbdev, 8, acecad->data, acecad->data_dma);
+ usb_free_coherent(udev, 8, acecad->data, acecad->data_dma);
kfree(acecad);
}
diff --git a/drivers/input/tablet/aiptek.c b/drivers/input/tablet/aiptek.c
index 78ca44840d60..4613f0aefd08 100644
--- a/drivers/input/tablet/aiptek.c
+++ b/drivers/input/tablet/aiptek.c
@@ -307,7 +307,6 @@ struct aiptek_settings {
struct aiptek {
struct input_dev *inputdev; /* input device struct */
- struct usb_device *usbdev; /* usb device struct */
struct usb_interface *intf; /* usb interface struct */
struct urb *urb; /* urb for incoming reports */
dma_addr_t data_dma; /* our dma stuffage */
@@ -847,7 +846,7 @@ static int aiptek_open(struct input_dev *inputdev)
{
struct aiptek *aiptek = input_get_drvdata(inputdev);
- aiptek->urb->dev = aiptek->usbdev;
+ aiptek->urb->dev = interface_to_usbdev(aiptek->intf);
if (usb_submit_urb(aiptek->urb, GFP_KERNEL) != 0)
return -EIO;
@@ -873,8 +872,10 @@ aiptek_set_report(struct aiptek *aiptek,
unsigned char report_type,
unsigned char report_id, void *buffer, int size)
{
- return usb_control_msg(aiptek->usbdev,
- usb_sndctrlpipe(aiptek->usbdev, 0),
+ struct usb_device *udev = interface_to_usbdev(aiptek->intf);
+
+ return usb_control_msg(udev,
+ usb_sndctrlpipe(udev, 0),
USB_REQ_SET_REPORT,
USB_TYPE_CLASS | USB_RECIP_INTERFACE |
USB_DIR_OUT, (report_type << 8) + report_id,
@@ -886,8 +887,10 @@ aiptek_get_report(struct aiptek *aiptek,
unsigned char report_type,
unsigned char report_id, void *buffer, int size)
{
- return usb_control_msg(aiptek->usbdev,
- usb_rcvctrlpipe(aiptek->usbdev, 0),
+ struct usb_device *udev = interface_to_usbdev(aiptek->intf);
+
+ return usb_control_msg(udev,
+ usb_rcvctrlpipe(udev, 0),
USB_REQ_GET_REPORT,
USB_TYPE_CLASS | USB_RECIP_INTERFACE |
USB_DIR_IN, (report_type << 8) + report_id,
@@ -1729,7 +1732,6 @@ aiptek_probe(struct usb_interface *intf, const struct usb_device_id *id)
}
aiptek->inputdev = inputdev;
- aiptek->usbdev = usbdev;
aiptek->intf = intf;
aiptek->ifnum = intf->altsetting[0].desc.bInterfaceNumber;
aiptek->inDelay = 0;
@@ -1833,8 +1835,8 @@ aiptek_probe(struct usb_interface *intf, const struct usb_device_id *id)
* input.
*/
usb_fill_int_urb(aiptek->urb,
- aiptek->usbdev,
- usb_rcvintpipe(aiptek->usbdev,
+ usbdev,
+ usb_rcvintpipe(usbdev,
endpoint->bEndpointAddress),
aiptek->data, 8, aiptek_irq, aiptek,
endpoint->bInterval);
diff --git a/drivers/input/tablet/gtco.c b/drivers/input/tablet/gtco.c
index 7c18249d6c8e..abf09ac42ce4 100644
--- a/drivers/input/tablet/gtco.c
+++ b/drivers/input/tablet/gtco.c
@@ -104,7 +104,6 @@ MODULE_DEVICE_TABLE (usb, gtco_usbid_table);
struct gtco {
struct input_dev *inputdevice; /* input device struct pointer */
- struct usb_device *usbdev; /* the usb device for this device */
struct usb_interface *intf; /* the usb interface for this device */
struct urb *urbinfo; /* urb for incoming reports */
dma_addr_t buf_dma; /* dma addr of the data buffer*/
@@ -540,7 +539,7 @@ static int gtco_input_open(struct input_dev *inputdev)
{
struct gtco *device = input_get_drvdata(inputdev);
- device->urbinfo->dev = device->usbdev;
+ device->urbinfo->dev = interface_to_usbdev(device->intf);
if (usb_submit_urb(device->urbinfo, GFP_KERNEL))
return -EIO;
@@ -824,6 +823,7 @@ static int gtco_probe(struct usb_interface *usbinterface,
int result = 0, retry;
int error;
struct usb_endpoint_descriptor *endpoint;
+ struct usb_device *udev = interface_to_usbdev(usbinterface);
/* Allocate memory for device structure */
gtco = kzalloc(sizeof(struct gtco), GFP_KERNEL);
@@ -838,11 +838,10 @@ static int gtco_probe(struct usb_interface *usbinterface,
gtco->inputdevice = input_dev;
/* Save interface information */
- gtco->usbdev = interface_to_usbdev(usbinterface);
gtco->intf = usbinterface;
/* Allocate some data for incoming reports */
- gtco->buffer = usb_alloc_coherent(gtco->usbdev, REPORT_MAX_SIZE,
+ gtco->buffer = usb_alloc_coherent(udev, REPORT_MAX_SIZE,
GFP_KERNEL, &gtco->buf_dma);
if (!gtco->buffer) {
dev_err(&usbinterface->dev, "No more memory for us buffers\n");
@@ -907,8 +906,8 @@ static int gtco_probe(struct usb_interface *usbinterface,
/* Couple of tries to get reply */
for (retry = 0; retry < 3; retry++) {
- result = usb_control_msg(gtco->usbdev,
- usb_rcvctrlpipe(gtco->usbdev, 0),
+ result = usb_control_msg(udev,
+ usb_rcvctrlpipe(udev, 0),
USB_REQ_GET_DESCRIPTOR,
USB_RECIP_INTERFACE | USB_DIR_IN,
REPORT_DEVICE_TYPE << 8,
@@ -936,7 +935,7 @@ static int gtco_probe(struct usb_interface *usbinterface,
}
/* Create a device file node */
- usb_make_path(gtco->usbdev, gtco->usbpath, sizeof(gtco->usbpath));
+ usb_make_path(udev, gtco->usbpath, sizeof(gtco->usbpath));
strlcat(gtco->usbpath, "/input0", sizeof(gtco->usbpath));
/* Set Input device functions */
@@ -953,15 +952,15 @@ static int gtco_probe(struct usb_interface *usbinterface,
gtco_setup_caps(input_dev);
/* Set input device required ID information */
- usb_to_input_id(gtco->usbdev, &input_dev->id);
+ usb_to_input_id(udev, &input_dev->id);
input_dev->dev.parent = &usbinterface->dev;
/* Setup the URB, it will be posted later on open of input device */
endpoint = &usbinterface->altsetting[0].endpoint[0].desc;
usb_fill_int_urb(gtco->urbinfo,
- gtco->usbdev,
- usb_rcvintpipe(gtco->usbdev,
+ udev,
+ usb_rcvintpipe(udev,
endpoint->bEndpointAddress),
gtco->buffer,
REPORT_MAX_SIZE,
@@ -985,7 +984,7 @@ static int gtco_probe(struct usb_interface *usbinterface,
err_free_urb:
usb_free_urb(gtco->urbinfo);
err_free_buf:
- usb_free_coherent(gtco->usbdev, REPORT_MAX_SIZE,
+ usb_free_coherent(udev, REPORT_MAX_SIZE,
gtco->buffer, gtco->buf_dma);
err_free_devs:
input_free_device(input_dev);
@@ -1002,13 +1001,14 @@ static void gtco_disconnect(struct usb_interface *interface)
{
/* Grab private device ptr */
struct gtco *gtco = usb_get_intfdata(interface);
+ struct usb_device *udev = interface_to_usbdev(interface);
/* Now reverse all the registration stuff */
if (gtco) {
input_unregister_device(gtco->inputdevice);
usb_kill_urb(gtco->urbinfo);
usb_free_urb(gtco->urbinfo);
- usb_free_coherent(gtco->usbdev, REPORT_MAX_SIZE,
+ usb_free_coherent(udev, REPORT_MAX_SIZE,
gtco->buffer, gtco->buf_dma);
kfree(gtco);
}
diff --git a/drivers/input/tablet/kbtab.c b/drivers/input/tablet/kbtab.c
index d2ac7c2b5b82..e850d7e8afbc 100644
--- a/drivers/input/tablet/kbtab.c
+++ b/drivers/input/tablet/kbtab.c
@@ -31,7 +31,6 @@ struct kbtab {
unsigned char *data;
dma_addr_t data_dma;
struct input_dev *dev;
- struct usb_device *usbdev;
struct usb_interface *intf;
struct urb *irq;
char phys[32];
@@ -99,8 +98,9 @@ MODULE_DEVICE_TABLE(usb, kbtab_ids);
static int kbtab_open(struct input_dev *dev)
{
struct kbtab *kbtab = input_get_drvdata(dev);
+ struct usb_device *udev = interface_to_usbdev(kbtab->intf);
- kbtab->irq->dev = kbtab->usbdev;
+ kbtab->irq->dev = udev;
if (usb_submit_urb(kbtab->irq, GFP_KERNEL))
return -EIO;
@@ -135,7 +135,6 @@ static int kbtab_probe(struct usb_interface *intf, const struct usb_device_id *i
if (!kbtab->irq)
goto fail2;
- kbtab->usbdev = dev;
kbtab->intf = intf;
kbtab->dev = input_dev;
@@ -188,12 +187,13 @@ static int kbtab_probe(struct usb_interface *intf, const struct usb_device_id *i
static void kbtab_disconnect(struct usb_interface *intf)
{
struct kbtab *kbtab = usb_get_intfdata(intf);
+ struct usb_device *udev = interface_to_usbdev(intf);
usb_set_intfdata(intf, NULL);
input_unregister_device(kbtab->dev);
usb_free_urb(kbtab->irq);
- usb_free_coherent(kbtab->usbdev, 8, kbtab->data, kbtab->data_dma);
+ usb_free_coherent(udev, 8, kbtab->data, kbtab->data_dma);
kfree(kbtab);
}
diff --git a/drivers/input/touchscreen/ad7879.c b/drivers/input/touchscreen/ad7879.c
index 69d299d5dd00..e4bf1103e6f8 100644
--- a/drivers/input/touchscreen/ad7879.c
+++ b/drivers/input/touchscreen/ad7879.c
@@ -379,7 +379,7 @@ static const struct attribute_group ad7879_attr_group = {
static int ad7879_gpio_direction_input(struct gpio_chip *chip,
unsigned gpio)
{
- struct ad7879 *ts = container_of(chip, struct ad7879, gc);
+ struct ad7879 *ts = gpiochip_get_data(chip);
int err;
mutex_lock(&ts->mutex);
@@ -393,7 +393,7 @@ static int ad7879_gpio_direction_input(struct gpio_chip *chip,
static int ad7879_gpio_direction_output(struct gpio_chip *chip,
unsigned gpio, int level)
{
- struct ad7879 *ts = container_of(chip, struct ad7879, gc);
+ struct ad7879 *ts = gpiochip_get_data(chip);
int err;
mutex_lock(&ts->mutex);
@@ -412,7 +412,7 @@ static int ad7879_gpio_direction_output(struct gpio_chip *chip,
static int ad7879_gpio_get_value(struct gpio_chip *chip, unsigned gpio)
{
- struct ad7879 *ts = container_of(chip, struct ad7879, gc);
+ struct ad7879 *ts = gpiochip_get_data(chip);
u16 val;
mutex_lock(&ts->mutex);
@@ -425,7 +425,7 @@ static int ad7879_gpio_get_value(struct gpio_chip *chip, unsigned gpio)
static void ad7879_gpio_set_value(struct gpio_chip *chip,
unsigned gpio, int value)
{
- struct ad7879 *ts = container_of(chip, struct ad7879, gc);
+ struct ad7879 *ts = gpiochip_get_data(chip);
mutex_lock(&ts->mutex);
if (value)
@@ -456,7 +456,7 @@ static int ad7879_gpio_add(struct ad7879 *ts,
ts->gc.owner = THIS_MODULE;
ts->gc.parent = ts->dev;
- ret = gpiochip_add(&ts->gc);
+ ret = gpiochip_add_data(&ts->gc, ts);
if (ret)
dev_err(ts->dev, "failed to register gpio %d\n",
ts->gc.base);
diff --git a/drivers/input/touchscreen/bcm_iproc_tsc.c b/drivers/input/touchscreen/bcm_iproc_tsc.c
index ae460a5c93d5..4d11b27c7c43 100644
--- a/drivers/input/touchscreen/bcm_iproc_tsc.c
+++ b/drivers/input/touchscreen/bcm_iproc_tsc.c
@@ -23,6 +23,8 @@
#include <linux/io.h>
#include <linux/clk.h>
#include <linux/serio.h>
+#include <linux/mfd/syscon.h>
+#include <linux/regmap.h>
#define IPROC_TS_NAME "iproc-ts"
@@ -88,7 +90,11 @@
#define TS_WIRE_MODE_BIT BIT(1)
#define dbg_reg(dev, priv, reg) \
- dev_dbg(dev, "%20s= 0x%08x\n", #reg, readl((priv)->regs + reg))
+do { \
+ u32 val; \
+ regmap_read(priv->regmap, reg, &val); \
+ dev_dbg(dev, "%20s= 0x%08x\n", #reg, val); \
+} while (0)
struct tsc_param {
/* Each step is 1024 us. Valid 1-256 */
@@ -141,7 +147,7 @@ struct iproc_ts_priv {
struct platform_device *pdev;
struct input_dev *idev;
- void __iomem *regs;
+ struct regmap *regmap;
struct clk *tsc_clk;
int pen_status;
@@ -196,22 +202,22 @@ static irqreturn_t iproc_touchscreen_interrupt(int irq, void *data)
int i;
bool needs_sync = false;
- intr_status = readl(priv->regs + INTERRUPT_STATUS);
+ regmap_read(priv->regmap, INTERRUPT_STATUS, &intr_status);
intr_status &= TS_PEN_INTR_MASK | TS_FIFO_INTR_MASK;
if (intr_status == 0)
return IRQ_NONE;
/* Clear all interrupt status bits, write-1-clear */
- writel(intr_status, priv->regs + INTERRUPT_STATUS);
-
+ regmap_write(priv->regmap, INTERRUPT_STATUS, intr_status);
/* Pen up/down */
if (intr_status & TS_PEN_INTR_MASK) {
- if (readl(priv->regs + CONTROLLER_STATUS) & TS_PEN_DOWN)
+ regmap_read(priv->regmap, CONTROLLER_STATUS, &priv->pen_status);
+ if (priv->pen_status & TS_PEN_DOWN)
priv->pen_status = PEN_DOWN_STATUS;
else
priv->pen_status = PEN_UP_STATUS;
- input_report_key(priv->idev, BTN_TOUCH, priv->pen_status);
+ input_report_key(priv->idev, BTN_TOUCH, priv->pen_status);
needs_sync = true;
dev_dbg(&priv->pdev->dev,
@@ -221,7 +227,7 @@ static irqreturn_t iproc_touchscreen_interrupt(int irq, void *data)
/* coordinates in FIFO exceed the theshold */
if (intr_status & TS_FIFO_INTR_MASK) {
for (i = 0; i < priv->cfg_params.fifo_threshold; i++) {
- raw_coordinate = readl(priv->regs + FIFO_DATA);
+ regmap_read(priv->regmap, FIFO_DATA, &raw_coordinate);
if (raw_coordinate == INVALID_COORD)
continue;
@@ -239,7 +245,7 @@ static irqreturn_t iproc_touchscreen_interrupt(int irq, void *data)
x = (x >> 4) & 0x0FFF;
y = (y >> 4) & 0x0FFF;
- /* adjust x y according to lcd tsc mount angle */
+ /* Adjust x y according to LCD tsc mount angle. */
if (priv->cfg_params.invert_x)
x = priv->cfg_params.max_x - x;
@@ -262,9 +268,10 @@ static irqreturn_t iproc_touchscreen_interrupt(int irq, void *data)
static int iproc_ts_start(struct input_dev *idev)
{
- struct iproc_ts_priv *priv = input_get_drvdata(idev);
u32 val;
+ u32 mask;
int error;
+ struct iproc_ts_priv *priv = input_get_drvdata(idev);
/* Enable clock */
error = clk_prepare_enable(priv->tsc_clk);
@@ -279,9 +286,10 @@ static int iproc_ts_start(struct input_dev *idev)
* FIFO reaches the int_th value, and pen event(up/down)
*/
val = TS_PEN_INTR_MASK | TS_FIFO_INTR_MASK;
- writel(val, priv->regs + INTERRUPT_MASK);
+ regmap_update_bits(priv->regmap, INTERRUPT_MASK, val, val);
- writel(priv->cfg_params.fifo_threshold, priv->regs + INTERRUPT_THRES);
+ val = priv->cfg_params.fifo_threshold;
+ regmap_write(priv->regmap, INTERRUPT_THRES, val);
/* Initialize control reg1 */
val = 0;
@@ -289,26 +297,23 @@ static int iproc_ts_start(struct input_dev *idev)
val |= priv->cfg_params.debounce_timeout << DEBOUNCE_TIMEOUT_SHIFT;
val |= priv->cfg_params.settling_timeout << SETTLING_TIMEOUT_SHIFT;
val |= priv->cfg_params.touch_timeout << TOUCH_TIMEOUT_SHIFT;
- writel(val, priv->regs + REGCTL1);
+ regmap_write(priv->regmap, REGCTL1, val);
/* Try to clear all interrupt status */
- val = readl(priv->regs + INTERRUPT_STATUS);
- val |= TS_FIFO_INTR_MASK | TS_PEN_INTR_MASK;
- writel(val, priv->regs + INTERRUPT_STATUS);
+ val = TS_FIFO_INTR_MASK | TS_PEN_INTR_MASK;
+ regmap_update_bits(priv->regmap, INTERRUPT_STATUS, val, val);
/* Initialize control reg2 */
- val = readl(priv->regs + REGCTL2);
- val |= TS_CONTROLLER_EN_BIT | TS_WIRE_MODE_BIT;
-
- val &= ~TS_CONTROLLER_AVGDATA_MASK;
+ val = TS_CONTROLLER_EN_BIT | TS_WIRE_MODE_BIT;
val |= priv->cfg_params.average_data << TS_CONTROLLER_AVGDATA_SHIFT;
- val &= ~(TS_CONTROLLER_PWR_LDO | /* PWR up LDO */
+ mask = (TS_CONTROLLER_AVGDATA_MASK);
+ mask |= (TS_CONTROLLER_PWR_LDO | /* PWR up LDO */
TS_CONTROLLER_PWR_ADC | /* PWR up ADC */
TS_CONTROLLER_PWR_BGP | /* PWR up BGP */
TS_CONTROLLER_PWR_TS); /* PWR up TS */
-
- writel(val, priv->regs + REGCTL2);
+ mask |= val;
+ regmap_update_bits(priv->regmap, REGCTL2, mask, val);
ts_reg_dump(priv);
@@ -320,12 +325,17 @@ static void iproc_ts_stop(struct input_dev *dev)
u32 val;
struct iproc_ts_priv *priv = input_get_drvdata(dev);
- writel(0, priv->regs + INTERRUPT_MASK); /* Disable all interrupts */
+ /*
+ * Disable only the FIFO int_th and pen event (up/down) interrupts
+ * as the interrupt mask register is shared between ADC, TS and
+ * flextimer.
+ */
+ val = TS_PEN_INTR_MASK | TS_FIFO_INTR_MASK;
+ regmap_update_bits(priv->regmap, INTERRUPT_MASK, val, 0);
/* Only power down touch screen controller */
- val = readl(priv->regs + REGCTL2);
- val |= TS_CONTROLLER_PWR_TS;
- writel(val, priv->regs + REGCTL2);
+ val = TS_CONTROLLER_PWR_TS;
+ regmap_update_bits(priv->regmap, REGCTL2, val, val);
clk_disable(priv->tsc_clk);
}
@@ -414,7 +424,6 @@ static int iproc_ts_probe(struct platform_device *pdev)
{
struct iproc_ts_priv *priv;
struct input_dev *idev;
- struct resource *res;
int irq;
int error;
@@ -422,12 +431,12 @@ static int iproc_ts_probe(struct platform_device *pdev)
if (!priv)
return -ENOMEM;
- /* touchscreen controller memory mapped regs */
- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- priv->regs = devm_ioremap_resource(&pdev->dev, res);
- if (IS_ERR(priv->regs)) {
- error = PTR_ERR(priv->regs);
- dev_err(&pdev->dev, "unable to map I/O memory: %d\n", error);
+ /* touchscreen controller memory mapped regs via syscon */
+ priv->regmap = syscon_regmap_lookup_by_phandle(pdev->dev.of_node,
+ "ts_syscon");
+ if (IS_ERR(priv->regmap)) {
+ error = PTR_ERR(priv->regmap);
+ dev_err(&pdev->dev, "unable to map I/O memory:%d\n", error);
return error;
}
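The bcm_iproc_tsc conversion above replaces readl()/writel() pairs with regmap_update_bits(map, reg, mask, val), which performs the read-modify-write in one call: only the bits set in mask are touched, and they take the corresponding bits of val. A standalone model of that update (plain C, no regmap):

#include <stdio.h>
#include <stdint.h>

/* What regmap_update_bits() boils down to for a single register value. */
static uint32_t update_bits(uint32_t old, uint32_t mask, uint32_t val)
{
	return (old & ~mask) | (val & mask);
}

int main(void)
{
	uint32_t regctl2 = 0xff;             /* pretend current register value */
	uint32_t mask = 0x0f, val = 0x05;    /* touch only the low nibble */

	/* prints 0xff -> 0xf5: bits outside the mask are preserved */
	printf("0x%02x -> 0x%02x\n", regctl2, update_bits(regctl2, mask, val));
	return 0;
}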
diff --git a/drivers/input/touchscreen/cyttsp4_core.c b/drivers/input/touchscreen/cyttsp4_core.c
index 5ed31057430c..44deca88c579 100644
--- a/drivers/input/touchscreen/cyttsp4_core.c
+++ b/drivers/input/touchscreen/cyttsp4_core.c
@@ -1499,7 +1499,7 @@ static int cyttsp4_core_sleep_(struct cyttsp4 *cd)
if (IS_BOOTLOADER(mode[0], mode[1])) {
mutex_unlock(&cd->system_lock);
- dev_err(cd->dev, "%s: Device in BOOTLADER mode.\n", __func__);
+ dev_err(cd->dev, "%s: Device in BOOTLOADER mode.\n", __func__);
rc = -EINVAL;
goto error;
}
diff --git a/drivers/input/touchscreen/sun4i-ts.c b/drivers/input/touchscreen/sun4i-ts.c
index 485794376ee5..d07dd29d4848 100644
--- a/drivers/input/touchscreen/sun4i-ts.c
+++ b/drivers/input/touchscreen/sun4i-ts.c
@@ -115,7 +115,6 @@
struct sun4i_ts_data {
struct device *dev;
struct input_dev *input;
- struct thermal_zone_device *tz;
void __iomem *base;
unsigned int irq;
bool ignore_fifo_data;
@@ -366,10 +365,7 @@ static int sun4i_ts_probe(struct platform_device *pdev)
if (IS_ERR(hwmon))
return PTR_ERR(hwmon);
- ts->tz = thermal_zone_of_sensor_register(ts->dev, 0, ts,
- &sun4i_ts_tz_ops);
- if (IS_ERR(ts->tz))
- ts->tz = NULL;
+ devm_thermal_zone_of_sensor_register(ts->dev, 0, ts, &sun4i_ts_tz_ops);
writel(TEMP_IRQ_EN(1), ts->base + TP_INT_FIFOC);
@@ -377,7 +373,6 @@ static int sun4i_ts_probe(struct platform_device *pdev)
error = input_register_device(ts->input);
if (error) {
writel(0, ts->base + TP_INT_FIFOC);
- thermal_zone_of_sensor_unregister(ts->dev, ts->tz);
return error;
}
}
@@ -394,8 +389,6 @@ static int sun4i_ts_remove(struct platform_device *pdev)
if (ts->input)
input_unregister_device(ts->input);
- thermal_zone_of_sensor_unregister(ts->dev, ts->tz);
-
/* Deactivate all IRQs */
writel(0, ts->base + TP_INT_FIFOC);
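For context, an editor's sketch (not the driver's actual code) of why the devm_thermal_zone_of_sensor_register() conversion above lets both the error-path and remove()-time unregister calls disappear: the registration is tied to the struct device through devres, so the core releases the zone automatically when the device is unbound.

/* Illustrative probe fragment using the managed variant. */
static int sketch_probe(struct platform_device *pdev)
{
	struct sun4i_ts_data *ts = platform_get_drvdata(pdev);
	struct thermal_zone_device *tz;

	tz = devm_thermal_zone_of_sensor_register(&pdev->dev, 0, ts,
						  &sun4i_ts_tz_ops);
	if (IS_ERR(tz))
		return PTR_ERR(tz);

	/* No thermal_zone_of_sensor_unregister() needed anywhere:
	 * devres drops the registration on driver unbind.
	 */
	return 0;
}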
diff --git a/drivers/input/touchscreen/ti_am335x_tsc.c b/drivers/input/touchscreen/ti_am335x_tsc.c
index a21a07c3ab6d..8b3f15ca7725 100644
--- a/drivers/input/touchscreen/ti_am335x_tsc.c
+++ b/drivers/input/touchscreen/ti_am335x_tsc.c
@@ -487,8 +487,7 @@ static int titsc_remove(struct platform_device *pdev)
return 0;
}
-#ifdef CONFIG_PM
-static int titsc_suspend(struct device *dev)
+static int __maybe_unused titsc_suspend(struct device *dev)
{
struct titsc *ts_dev = dev_get_drvdata(dev);
struct ti_tscadc_dev *tscadc_dev;
@@ -504,7 +503,7 @@ static int titsc_suspend(struct device *dev)
return 0;
}
-static int titsc_resume(struct device *dev)
+static int __maybe_unused titsc_resume(struct device *dev)
{
struct titsc *ts_dev = dev_get_drvdata(dev);
struct ti_tscadc_dev *tscadc_dev;
@@ -521,14 +520,7 @@ static int titsc_resume(struct device *dev)
return 0;
}
-static const struct dev_pm_ops titsc_pm_ops = {
- .suspend = titsc_suspend,
- .resume = titsc_resume,
-};
-#define TITSC_PM_OPS (&titsc_pm_ops)
-#else
-#define TITSC_PM_OPS NULL
-#endif
+static SIMPLE_DEV_PM_OPS(titsc_pm_ops, titsc_suspend, titsc_resume);
static const struct of_device_id ti_tsc_dt_ids[] = {
{ .compatible = "ti,am3359-tsc", },
@@ -541,7 +533,7 @@ static struct platform_driver ti_tsc_driver = {
.remove = titsc_remove,
.driver = {
.name = "TI-am335x-tsc",
- .pm = TITSC_PM_OPS,
+ .pm = &titsc_pm_ops,
.of_match_table = ti_tsc_dt_ids,
},
};
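An editor's simplified expansion (not the exact <linux/pm.h> definition) of what the SIMPLE_DEV_PM_OPS() conversion above relies on: the sleep callbacks are only referenced when CONFIG_PM_SLEEP is enabled, which is why titsc_suspend()/titsc_resume() gain __maybe_unused instead of sitting inside #ifdef CONFIG_PM.

/* Simplified sketch: SET_SYSTEM_SLEEP_PM_OPS() fills in the
 * .suspend/.resume (and freeze/thaw/poweroff/restore) entries only
 * under CONFIG_PM_SLEEP; otherwise the struct stays empty and the two
 * handlers would be defined but unused.
 */
#define SKETCH_SIMPLE_DEV_PM_OPS(name, suspend_fn, resume_fn)	\
	const struct dev_pm_ops name = {			\
		SET_SYSTEM_SLEEP_PM_OPS(suspend_fn, resume_fn)	\
	}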
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index dd1dc39f84ff..ad0860383cb3 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -76,8 +76,7 @@ config IOMMU_DMA
config FSL_PAMU
bool "Freescale IOMMU support"
- depends on PPC32
- depends on PPC_E500MC || COMPILE_TEST
+ depends on PPC_E500MC || (COMPILE_TEST && PPC)
select IOMMU_API
select GENERIC_ALLOCATOR
help
@@ -124,16 +123,6 @@ config AMD_IOMMU
your BIOS for an option to enable it or if you have an IVRS ACPI
table.
-config AMD_IOMMU_STATS
- bool "Export AMD IOMMU statistics to debugfs"
- depends on AMD_IOMMU
- select DEBUG_FS
- ---help---
- This option enables code in the AMD IOMMU driver to collect various
- statistics about whats happening in the driver and exports that
- information to userspace via debugfs.
- If unsure, say N.
-
config AMD_IOMMU_V2
tristate "AMD IOMMU Version 2 driver"
depends on AMD_IOMMU
diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
index 5efadad4615b..634f636393d5 100644
--- a/drivers/iommu/amd_iommu.c
+++ b/drivers/iommu/amd_iommu.c
@@ -19,6 +19,8 @@
#include <linux/ratelimit.h>
#include <linux/pci.h>
+#include <linux/acpi.h>
+#include <linux/amba/bus.h>
#include <linux/pci-ats.h>
#include <linux/bitmap.h>
#include <linux/slab.h>
@@ -72,6 +74,7 @@ static DEFINE_SPINLOCK(dev_data_list_lock);
LIST_HEAD(ioapic_map);
LIST_HEAD(hpet_map);
+LIST_HEAD(acpihid_map);
/*
* Domain for untranslated devices - only allocated
@@ -162,18 +165,65 @@ struct dma_ops_domain {
*
****************************************************************************/
-static struct protection_domain *to_pdomain(struct iommu_domain *dom)
+static inline int match_hid_uid(struct device *dev,
+ struct acpihid_map_entry *entry)
{
- return container_of(dom, struct protection_domain, domain);
+ const char *hid, *uid;
+
+ hid = acpi_device_hid(ACPI_COMPANION(dev));
+ uid = acpi_device_uid(ACPI_COMPANION(dev));
+
+ if (!hid || !(*hid))
+ return -ENODEV;
+
+ if (!uid || !(*uid))
+ return strcmp(hid, entry->hid);
+
+ if (!(*entry->uid))
+ return strcmp(hid, entry->hid);
+
+ return (strcmp(hid, entry->hid) || strcmp(uid, entry->uid));
}
-static inline u16 get_device_id(struct device *dev)
+static inline u16 get_pci_device_id(struct device *dev)
{
struct pci_dev *pdev = to_pci_dev(dev);
return PCI_DEVID(pdev->bus->number, pdev->devfn);
}
+static inline int get_acpihid_device_id(struct device *dev,
+ struct acpihid_map_entry **entry)
+{
+ struct acpihid_map_entry *p;
+
+ list_for_each_entry(p, &acpihid_map, list) {
+ if (!match_hid_uid(dev, p)) {
+ if (entry)
+ *entry = p;
+ return p->devid;
+ }
+ }
+ return -EINVAL;
+}
+
+static inline int get_device_id(struct device *dev)
+{
+ int devid;
+
+ if (dev_is_pci(dev))
+ devid = get_pci_device_id(dev);
+ else
+ devid = get_acpihid_device_id(dev, NULL);
+
+ return devid;
+}
+
+static struct protection_domain *to_pdomain(struct iommu_domain *dom)
+{
+ return container_of(dom, struct protection_domain, domain);
+}
+
static struct iommu_dev_data *alloc_dev_data(u16 devid)
{
struct iommu_dev_data *dev_data;
@@ -222,6 +272,7 @@ static u16 get_alias(struct device *dev)
struct pci_dev *pdev = to_pci_dev(dev);
u16 devid, ivrs_alias, pci_alias;
+ /* The callers make sure that get_device_id() does not fail here */
devid = get_device_id(dev);
ivrs_alias = amd_iommu_alias_table[devid];
pci_for_each_dma_alias(pdev, __last_alias, &pci_alias);
@@ -263,8 +314,7 @@ static u16 get_alias(struct device *dev)
*/
if (pci_alias == devid &&
PCI_BUS_NUM(ivrs_alias) == pdev->bus->number) {
- pdev->dev_flags |= PCI_DEV_FLAGS_DMA_ALIAS_DEVFN;
- pdev->dma_alias_devfn = ivrs_alias & 0xff;
+ pci_add_dma_alias(pdev, ivrs_alias & 0xff);
pr_info("AMD-Vi: Added PCI DMA alias %02x.%d for %s\n",
PCI_SLOT(ivrs_alias), PCI_FUNC(ivrs_alias),
dev_name(dev));
@@ -290,6 +340,29 @@ static struct iommu_dev_data *get_dev_data(struct device *dev)
return dev->archdata.iommu;
}
+/*
+ * Find or create an IOMMU group for an acpihid device.
+ */
+static struct iommu_group *acpihid_device_group(struct device *dev)
+{
+ struct acpihid_map_entry *p, *entry = NULL;
+ int devid;
+
+ devid = get_acpihid_device_id(dev, &entry);
+ if (devid < 0)
+ return ERR_PTR(devid);
+
+ list_for_each_entry(p, &acpihid_map, list) {
+ if ((devid == p->devid) && p->group)
+ entry->group = p->group;
+ }
+
+ if (!entry->group)
+ entry->group = generic_device_group(dev);
+
+ return entry->group;
+}
+
static bool pci_iommuv2_capable(struct pci_dev *pdev)
{
static const int caps[] = {
@@ -341,9 +414,11 @@ static void init_unity_mappings_for_device(struct device *dev,
struct dma_ops_domain *dma_dom)
{
struct unity_map_entry *e;
- u16 devid;
+ int devid;
devid = get_device_id(dev);
+ if (devid < 0)
+ return;
list_for_each_entry(e, &amd_iommu_unity_map, list) {
if (!(devid >= e->devid_start && devid <= e->devid_end))
@@ -358,16 +433,14 @@ static void init_unity_mappings_for_device(struct device *dev,
*/
static bool check_device(struct device *dev)
{
- u16 devid;
+ int devid;
if (!dev || !dev->dma_mask)
return false;
- /* No PCI device */
- if (!dev_is_pci(dev))
- return false;
-
devid = get_device_id(dev);
+ if (devid < 0)
+ return false;
/* Out of our scope? */
if (devid > amd_iommu_last_bdf)
@@ -402,22 +475,26 @@ out:
static int iommu_init_device(struct device *dev)
{
- struct pci_dev *pdev = to_pci_dev(dev);
struct iommu_dev_data *dev_data;
+ int devid;
if (dev->archdata.iommu)
return 0;
- dev_data = find_dev_data(get_device_id(dev));
+ devid = get_device_id(dev);
+ if (devid < 0)
+ return devid;
+
+ dev_data = find_dev_data(devid);
if (!dev_data)
return -ENOMEM;
dev_data->alias = get_alias(dev);
- if (pci_iommuv2_capable(pdev)) {
+ if (dev_is_pci(dev) && pci_iommuv2_capable(to_pci_dev(dev))) {
struct amd_iommu *iommu;
- iommu = amd_iommu_rlookup_table[dev_data->devid];
+ iommu = amd_iommu_rlookup_table[dev_data->devid];
dev_data->iommu_v2 = iommu->is_iommu_v2;
}
@@ -431,9 +508,13 @@ static int iommu_init_device(struct device *dev)
static void iommu_ignore_device(struct device *dev)
{
- u16 devid, alias;
+ u16 alias;
+ int devid;
devid = get_device_id(dev);
+ if (devid < 0)
+ return;
+
alias = get_alias(dev);
memset(&amd_iommu_dev_table[devid], 0, sizeof(struct dev_table_entry));
@@ -445,8 +526,14 @@ static void iommu_ignore_device(struct device *dev)
static void iommu_uninit_device(struct device *dev)
{
- struct iommu_dev_data *dev_data = search_dev_data(get_device_id(dev));
+ int devid;
+ struct iommu_dev_data *dev_data;
+ devid = get_device_id(dev);
+ if (devid < 0)
+ return;
+
+ dev_data = search_dev_data(devid);
if (!dev_data)
return;
@@ -467,70 +554,6 @@ static void iommu_uninit_device(struct device *dev)
*/
}
-#ifdef CONFIG_AMD_IOMMU_STATS
-
-/*
- * Initialization code for statistics collection
- */
-
-DECLARE_STATS_COUNTER(compl_wait);
-DECLARE_STATS_COUNTER(cnt_map_single);
-DECLARE_STATS_COUNTER(cnt_unmap_single);
-DECLARE_STATS_COUNTER(cnt_map_sg);
-DECLARE_STATS_COUNTER(cnt_unmap_sg);
-DECLARE_STATS_COUNTER(cnt_alloc_coherent);
-DECLARE_STATS_COUNTER(cnt_free_coherent);
-DECLARE_STATS_COUNTER(cross_page);
-DECLARE_STATS_COUNTER(domain_flush_single);
-DECLARE_STATS_COUNTER(domain_flush_all);
-DECLARE_STATS_COUNTER(alloced_io_mem);
-DECLARE_STATS_COUNTER(total_map_requests);
-DECLARE_STATS_COUNTER(complete_ppr);
-DECLARE_STATS_COUNTER(invalidate_iotlb);
-DECLARE_STATS_COUNTER(invalidate_iotlb_all);
-DECLARE_STATS_COUNTER(pri_requests);
-
-static struct dentry *stats_dir;
-static struct dentry *de_fflush;
-
-static void amd_iommu_stats_add(struct __iommu_counter *cnt)
-{
- if (stats_dir == NULL)
- return;
-
- cnt->dent = debugfs_create_u64(cnt->name, 0444, stats_dir,
- &cnt->value);
-}
-
-static void amd_iommu_stats_init(void)
-{
- stats_dir = debugfs_create_dir("amd-iommu", NULL);
- if (stats_dir == NULL)
- return;
-
- de_fflush = debugfs_create_bool("fullflush", 0444, stats_dir,
- &amd_iommu_unmap_flush);
-
- amd_iommu_stats_add(&compl_wait);
- amd_iommu_stats_add(&cnt_map_single);
- amd_iommu_stats_add(&cnt_unmap_single);
- amd_iommu_stats_add(&cnt_map_sg);
- amd_iommu_stats_add(&cnt_unmap_sg);
- amd_iommu_stats_add(&cnt_alloc_coherent);
- amd_iommu_stats_add(&cnt_free_coherent);
- amd_iommu_stats_add(&cross_page);
- amd_iommu_stats_add(&domain_flush_single);
- amd_iommu_stats_add(&domain_flush_all);
- amd_iommu_stats_add(&alloced_io_mem);
- amd_iommu_stats_add(&total_map_requests);
- amd_iommu_stats_add(&complete_ppr);
- amd_iommu_stats_add(&invalidate_iotlb);
- amd_iommu_stats_add(&invalidate_iotlb_all);
- amd_iommu_stats_add(&pri_requests);
-}
-
-#endif
-
/****************************************************************************
*
* Interrupt handling functions
@@ -653,8 +676,6 @@ static void iommu_handle_ppr_entry(struct amd_iommu *iommu, u64 *raw)
{
struct amd_iommu_fault fault;
- INC_STATS_COUNTER(pri_requests);
-
if (PPR_REQ_TYPE(raw[0]) != PPR_REQ_FAULT) {
pr_err_ratelimited("AMD-Vi: Unknown PPR request received\n");
return;
@@ -2284,13 +2305,17 @@ static bool pci_pri_tlp_required(struct pci_dev *pdev)
static int attach_device(struct device *dev,
struct protection_domain *domain)
{
- struct pci_dev *pdev = to_pci_dev(dev);
+ struct pci_dev *pdev;
struct iommu_dev_data *dev_data;
unsigned long flags;
int ret;
dev_data = get_dev_data(dev);
+ if (!dev_is_pci(dev))
+ goto skip_ats_check;
+
+ pdev = to_pci_dev(dev);
if (domain->flags & PD_IOMMUV2_MASK) {
if (!dev_data->passthrough)
return -EINVAL;
@@ -2309,6 +2334,7 @@ static int attach_device(struct device *dev,
dev_data->ats.qdep = pci_ats_queue_depth(pdev);
}
+skip_ats_check:
write_lock_irqsave(&amd_iommu_devtable_lock, flags);
ret = __attach_device(dev_data, domain);
write_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
@@ -2365,6 +2391,9 @@ static void detach_device(struct device *dev)
__detach_device(dev_data);
write_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
+ if (!dev_is_pci(dev))
+ return;
+
if (domain->flags & PD_IOMMUV2_MASK && dev_data->iommu_v2)
pdev_iommuv2_disable(to_pci_dev(dev));
else if (dev_data->ats.enabled)
@@ -2378,13 +2407,15 @@ static int amd_iommu_add_device(struct device *dev)
struct iommu_dev_data *dev_data;
struct iommu_domain *domain;
struct amd_iommu *iommu;
- u16 devid;
- int ret;
+ int ret, devid;
if (!check_device(dev) || get_dev_data(dev))
return 0;
devid = get_device_id(dev);
+ if (devid < 0)
+ return devid;
+
iommu = amd_iommu_rlookup_table[devid];
ret = iommu_init_device(dev);
@@ -2422,18 +2453,29 @@ out:
static void amd_iommu_remove_device(struct device *dev)
{
struct amd_iommu *iommu;
- u16 devid;
+ int devid;
if (!check_device(dev))
return;
devid = get_device_id(dev);
+ if (devid < 0)
+ return;
+
iommu = amd_iommu_rlookup_table[devid];
iommu_uninit_device(dev);
iommu_completion_wait(iommu);
}
+static struct iommu_group *amd_iommu_device_group(struct device *dev)
+{
+ if (dev_is_pci(dev))
+ return pci_device_group(dev);
+
+ return acpihid_device_group(dev);
+}
+
/*****************************************************************************
*
* The next functions belong to the dma_ops mapping/unmapping code.
@@ -2598,11 +2640,6 @@ static dma_addr_t __map_single(struct device *dev,
pages = iommu_num_pages(paddr, size, PAGE_SIZE);
paddr &= PAGE_MASK;
- INC_STATS_COUNTER(total_map_requests);
-
- if (pages > 1)
- INC_STATS_COUNTER(cross_page);
-
if (align)
align_mask = (1UL << get_order(size)) - 1;
@@ -2623,8 +2660,6 @@ static dma_addr_t __map_single(struct device *dev,
}
address += offset;
- ADD_STATS_COUNTER(alloced_io_mem, size);
-
if (unlikely(amd_iommu_np_cache)) {
domain_flush_pages(&dma_dom->domain, address, size);
domain_flush_complete(&dma_dom->domain);
@@ -2672,8 +2707,6 @@ static void __unmap_single(struct dma_ops_domain *dma_dom,
start += PAGE_SIZE;
}
- SUB_STATS_COUNTER(alloced_io_mem, size);
-
dma_ops_free_addresses(dma_dom, dma_addr, pages);
}
@@ -2689,8 +2722,6 @@ static dma_addr_t map_page(struct device *dev, struct page *page,
struct protection_domain *domain;
u64 dma_mask;
- INC_STATS_COUNTER(cnt_map_single);
-
domain = get_domain(dev);
if (PTR_ERR(domain) == -EINVAL)
return (dma_addr_t)paddr;
@@ -2711,8 +2742,6 @@ static void unmap_page(struct device *dev, dma_addr_t dma_addr, size_t size,
{
struct protection_domain *domain;
- INC_STATS_COUNTER(cnt_unmap_single);
-
domain = get_domain(dev);
if (IS_ERR(domain))
return;
@@ -2735,8 +2764,6 @@ static int map_sg(struct device *dev, struct scatterlist *sglist,
int mapped_elems = 0;
u64 dma_mask;
- INC_STATS_COUNTER(cnt_map_sg);
-
domain = get_domain(dev);
if (IS_ERR(domain))
return 0;
@@ -2782,8 +2809,6 @@ static void unmap_sg(struct device *dev, struct scatterlist *sglist,
struct scatterlist *s;
int i;
- INC_STATS_COUNTER(cnt_unmap_sg);
-
domain = get_domain(dev);
if (IS_ERR(domain))
return;
@@ -2806,8 +2831,6 @@ static void *alloc_coherent(struct device *dev, size_t size,
struct protection_domain *domain;
struct page *page;
- INC_STATS_COUNTER(cnt_alloc_coherent);
-
domain = get_domain(dev);
if (PTR_ERR(domain) == -EINVAL) {
page = alloc_pages(flag, get_order(size));
@@ -2861,8 +2884,6 @@ static void free_coherent(struct device *dev, size_t size,
struct protection_domain *domain;
struct page *page;
- INC_STATS_COUNTER(cnt_free_coherent);
-
page = virt_to_page(virt_addr);
size = PAGE_ALIGN(size);
@@ -2927,7 +2948,17 @@ static struct dma_map_ops amd_iommu_dma_ops = {
int __init amd_iommu_init_api(void)
{
- return bus_set_iommu(&pci_bus_type, &amd_iommu_ops);
+ int err = 0;
+
+ err = bus_set_iommu(&pci_bus_type, &amd_iommu_ops);
+ if (err)
+ return err;
+#ifdef CONFIG_ARM_AMBA
+ err = bus_set_iommu(&amba_bustype, &amd_iommu_ops);
+ if (err)
+ return err;
+#endif
+ return 0;
}
int __init amd_iommu_init_dma_ops(void)
@@ -2944,8 +2975,6 @@ int __init amd_iommu_init_dma_ops(void)
if (!swiotlb)
dma_ops = &nommu_dma_ops;
- amd_iommu_stats_init();
-
if (amd_iommu_unmap_flush)
pr_info("AMD-Vi: IO/TLB flush on unmap enabled\n");
else
@@ -3099,12 +3128,14 @@ static void amd_iommu_detach_device(struct iommu_domain *dom,
{
struct iommu_dev_data *dev_data = dev->archdata.iommu;
struct amd_iommu *iommu;
- u16 devid;
+ int devid;
if (!check_device(dev))
return;
devid = get_device_id(dev);
+ if (devid < 0)
+ return;
if (dev_data->domain != NULL)
detach_device(dev);
@@ -3222,9 +3253,11 @@ static void amd_iommu_get_dm_regions(struct device *dev,
struct list_head *head)
{
struct unity_map_entry *entry;
- u16 devid;
+ int devid;
devid = get_device_id(dev);
+ if (devid < 0)
+ return;
list_for_each_entry(entry, &amd_iommu_unity_map, list) {
struct iommu_dm_region *region;
@@ -3271,7 +3304,7 @@ static const struct iommu_ops amd_iommu_ops = {
.iova_to_phys = amd_iommu_iova_to_phys,
.add_device = amd_iommu_add_device,
.remove_device = amd_iommu_remove_device,
- .device_group = pci_device_group,
+ .device_group = amd_iommu_device_group,
.get_dm_regions = amd_iommu_get_dm_regions,
.put_dm_regions = amd_iommu_put_dm_regions,
.pgsize_bitmap = AMD_IOMMU_PGSIZES,
@@ -3432,8 +3465,6 @@ out:
static int __amd_iommu_flush_page(struct protection_domain *domain, int pasid,
u64 address)
{
- INC_STATS_COUNTER(invalidate_iotlb);
-
return __flush_pasid(domain, pasid, address, false);
}
@@ -3454,8 +3485,6 @@ EXPORT_SYMBOL(amd_iommu_flush_page);
static int __amd_iommu_flush_tlb(struct protection_domain *domain, int pasid)
{
- INC_STATS_COUNTER(invalidate_iotlb_all);
-
return __flush_pasid(domain, pasid, CMD_INV_IOMMU_ALL_PAGES_ADDRESS,
true);
}
@@ -3575,8 +3604,6 @@ int amd_iommu_complete_ppr(struct pci_dev *pdev, int pasid,
struct amd_iommu *iommu;
struct iommu_cmd cmd;
- INC_STATS_COUNTER(complete_ppr);
-
dev_data = get_dev_data(&pdev->dev);
iommu = amd_iommu_rlookup_table[dev_data->devid];
@@ -3926,6 +3953,9 @@ static struct irq_domain *get_irq_domain(struct irq_alloc_info *info)
case X86_IRQ_ALLOC_TYPE_MSI:
case X86_IRQ_ALLOC_TYPE_MSIX:
devid = get_device_id(&info->msi_dev->dev);
+ if (devid < 0)
+ return NULL;
+
iommu = amd_iommu_rlookup_table[devid];
if (iommu)
return iommu->msi_domain;
diff --git a/drivers/iommu/amd_iommu_init.c b/drivers/iommu/amd_iommu_init.c
index bf4959f4225b..9e0034196e10 100644
--- a/drivers/iommu/amd_iommu_init.c
+++ b/drivers/iommu/amd_iommu_init.c
@@ -44,7 +44,7 @@
*/
#define IVRS_HEADER_LENGTH 48
-#define ACPI_IVHD_TYPE 0x10
+#define ACPI_IVHD_TYPE_MAX_SUPPORTED 0x40
#define ACPI_IVMD_TYPE_ALL 0x20
#define ACPI_IVMD_TYPE 0x21
#define ACPI_IVMD_TYPE_RANGE 0x22
@@ -58,6 +58,11 @@
#define IVHD_DEV_EXT_SELECT 0x46
#define IVHD_DEV_EXT_SELECT_RANGE 0x47
#define IVHD_DEV_SPECIAL 0x48
+#define IVHD_DEV_ACPI_HID 0xf0
+
+#define UID_NOT_PRESENT 0
+#define UID_IS_INTEGER 1
+#define UID_IS_CHARACTER 2
#define IVHD_SPECIAL_IOAPIC 1
#define IVHD_SPECIAL_HPET 2
@@ -99,7 +104,11 @@ struct ivhd_header {
u64 mmio_phys;
u16 pci_seg;
u16 info;
- u32 efr;
+ u32 efr_attr;
+
+ /* Following only valid on IVHD type 11h and 40h */
+ u64 efr_reg; /* Exact copy of MMIO_EXT_FEATURES */
+ u64 res;
} __attribute__((packed));
/*
@@ -111,6 +120,11 @@ struct ivhd_entry {
u16 devid;
u8 flags;
u32 ext;
+ u32 hidh;
+ u64 cid;
+ u8 uidf;
+ u8 uidl;
+ u8 uid;
} __attribute__((packed));
/*
@@ -133,6 +147,7 @@ bool amd_iommu_irq_remap __read_mostly;
static bool amd_iommu_detected;
static bool __initdata amd_iommu_disabled;
+static int amd_iommu_target_ivhd_type;
u16 amd_iommu_last_bdf; /* largest PCI device id we have
to handle */
@@ -218,8 +233,12 @@ enum iommu_init_state {
#define EARLY_MAP_SIZE 4
static struct devid_map __initdata early_ioapic_map[EARLY_MAP_SIZE];
static struct devid_map __initdata early_hpet_map[EARLY_MAP_SIZE];
+static struct acpihid_map_entry __initdata early_acpihid_map[EARLY_MAP_SIZE];
+
static int __initdata early_ioapic_map_size;
static int __initdata early_hpet_map_size;
+static int __initdata early_acpihid_map_size;
+
static bool __initdata cmdline_maps;
static enum iommu_init_state init_state = IOMMU_START_STATE;
@@ -394,6 +413,22 @@ static void __init iommu_unmap_mmio_space(struct amd_iommu *iommu)
release_mem_region(iommu->mmio_phys, iommu->mmio_phys_end);
}
+static inline u32 get_ivhd_header_size(struct ivhd_header *h)
+{
+ u32 size = 0;
+
+ switch (h->type) {
+ case 0x10:
+ size = 24;
+ break;
+ case 0x11:
+ case 0x40:
+ size = 40;
+ break;
+ }
+ return size;
+}
+
/****************************************************************************
*
* The functions below belong to the first pass of AMD IOMMU ACPI table
@@ -408,7 +443,15 @@ static void __init iommu_unmap_mmio_space(struct amd_iommu *iommu)
*/
static inline int ivhd_entry_length(u8 *ivhd)
{
- return 0x04 << (*ivhd >> 6);
+ u32 type = ((struct ivhd_entry *)ivhd)->type;
+
+ if (type < 0x80) {
+ return 0x04 << (*ivhd >> 6);
+ } else if (type == IVHD_DEV_ACPI_HID) {
+ /* For ACPI_HID, offset 21 is uid len */
+ return *((u8 *)ivhd + 21) + 22;
+ }
+ return 0;
}
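/*
 * Editor's worked example (not part of the patch): a fixed 4-byte entry
 * such as IVHD_DEV_SELECT (type 0x02) has the top two type bits clear,
 * so the length is 0x04 << 0 == 4.  An IVHD_DEV_ACPI_HID entry (0xf0)
 * is variable-length: 22 fixed bytes (type, devid, flags, 8-byte HID,
 * 8-byte CID, UID format and UID length) plus the UID length byte kept
 * at offset 21 -- e.g. a one-character UID of "0" gives 22 + 1 = 23.
 */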
/*
@@ -420,7 +463,14 @@ static int __init find_last_devid_from_ivhd(struct ivhd_header *h)
u8 *p = (void *)h, *end = (void *)h;
struct ivhd_entry *dev;
- p += sizeof(*h);
+ u32 ivhd_size = get_ivhd_header_size(h);
+
+ if (!ivhd_size) {
+ pr_err("AMD-Vi: Unsupported IVHD type %#x\n", h->type);
+ return -EINVAL;
+ }
+
+ p += ivhd_size;
end += h->length;
while (p < end) {
@@ -448,6 +498,22 @@ static int __init find_last_devid_from_ivhd(struct ivhd_header *h)
return 0;
}
+static int __init check_ivrs_checksum(struct acpi_table_header *table)
+{
+ int i;
+ u8 checksum = 0, *p = (u8 *)table;
+
+ for (i = 0; i < table->length; ++i)
+ checksum += p[i];
+ if (checksum != 0) {
+ /* ACPI table corrupt */
+ pr_err(FW_BUG "AMD-Vi: IVRS invalid checksum\n");
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
/*
* Iterate over all IVHD entries in the ACPI table and find the highest device
* id which we need to handle. This is the first of three functions which parse
@@ -455,31 +521,19 @@ static int __init find_last_devid_from_ivhd(struct ivhd_header *h)
*/
static int __init find_last_devid_acpi(struct acpi_table_header *table)
{
- int i;
- u8 checksum = 0, *p = (u8 *)table, *end = (u8 *)table;
+ u8 *p = (u8 *)table, *end = (u8 *)table;
struct ivhd_header *h;
- /*
- * Validate checksum here so we don't need to do it when
- * we actually parse the table
- */
- for (i = 0; i < table->length; ++i)
- checksum += p[i];
- if (checksum != 0)
- /* ACPI table corrupt */
- return -ENODEV;
-
p += IVRS_HEADER_LENGTH;
end += table->length;
while (p < end) {
h = (struct ivhd_header *)p;
- switch (h->type) {
- case ACPI_IVHD_TYPE:
- find_last_devid_from_ivhd(h);
- break;
- default:
- break;
+ if (h->type == amd_iommu_target_ivhd_type) {
+ int ret = find_last_devid_from_ivhd(h);
+
+ if (ret)
+ return ret;
}
p += h->length;
}
@@ -724,6 +778,42 @@ static int __init add_special_device(u8 type, u8 id, u16 *devid, bool cmd_line)
return 0;
}
+static int __init add_acpi_hid_device(u8 *hid, u8 *uid, u16 *devid,
+ bool cmd_line)
+{
+ struct acpihid_map_entry *entry;
+ struct list_head *list = &acpihid_map;
+
+ list_for_each_entry(entry, list, list) {
+ if (strcmp(entry->hid, hid) ||
+ (*uid && *entry->uid && strcmp(entry->uid, uid)) ||
+ !entry->cmd_line)
+ continue;
+
+ pr_info("AMD-Vi: Command-line override for hid:%s uid:%s\n",
+ hid, uid);
+ *devid = entry->devid;
+ return 0;
+ }
+
+ entry = kzalloc(sizeof(*entry), GFP_KERNEL);
+ if (!entry)
+ return -ENOMEM;
+
+ memcpy(entry->uid, uid, strlen(uid));
+ memcpy(entry->hid, hid, strlen(hid));
+ entry->devid = *devid;
+ entry->cmd_line = cmd_line;
+ entry->root_devid = (entry->devid & (~0x7));
+
+ pr_info("AMD-Vi: %s, add hid:%s, uid:%s, rdevid:%d\n",
+ entry->cmd_line ? "cmd" : "ivrs",
+ entry->hid, entry->uid, entry->root_devid);
+
+ list_add_tail(&entry->list, list);
+ return 0;
+}
+
static int __init add_early_maps(void)
{
int i, ret;
@@ -746,6 +836,15 @@ static int __init add_early_maps(void)
return ret;
}
+ for (i = 0; i < early_acpihid_map_size; ++i) {
+ ret = add_acpi_hid_device(early_acpihid_map[i].hid,
+ early_acpihid_map[i].uid,
+ &early_acpihid_map[i].devid,
+ early_acpihid_map[i].cmd_line);
+ if (ret)
+ return ret;
+ }
+
return 0;
}
@@ -785,6 +884,7 @@ static int __init init_iommu_from_acpi(struct amd_iommu *iommu,
u32 dev_i, ext_flags = 0;
bool alias = false;
struct ivhd_entry *e;
+ u32 ivhd_size;
int ret;
@@ -800,7 +900,14 @@ static int __init init_iommu_from_acpi(struct amd_iommu *iommu,
/*
* Done. Now parse the device entries
*/
- p += sizeof(struct ivhd_header);
+ ivhd_size = get_ivhd_header_size(h);
+ if (!ivhd_size) {
+ pr_err("AMD-Vi: Unsupported IVHD type %#x\n", h->type);
+ return -EINVAL;
+ }
+
+ p += ivhd_size;
+
end += h->length;
@@ -958,6 +1065,70 @@ static int __init init_iommu_from_acpi(struct amd_iommu *iommu,
break;
}
+ case IVHD_DEV_ACPI_HID: {
+ u16 devid;
+ u8 hid[ACPIHID_HID_LEN] = {0};
+ u8 uid[ACPIHID_UID_LEN] = {0};
+ int ret;
+
+ if (h->type != 0x40) {
+ pr_err(FW_BUG "Invalid IVHD device type %#x\n",
+ e->type);
+ break;
+ }
+
+ memcpy(hid, (u8 *)(&e->ext), ACPIHID_HID_LEN - 1);
+ hid[ACPIHID_HID_LEN - 1] = '\0';
+
+ if (!(*hid)) {
+ pr_err(FW_BUG "Invalid HID.\n");
+ break;
+ }
+
+ switch (e->uidf) {
+ case UID_NOT_PRESENT:
+
+ if (e->uidl != 0)
+ pr_warn(FW_BUG "Invalid UID length.\n");
+
+ break;
+ case UID_IS_INTEGER:
+
+ sprintf(uid, "%d", e->uid);
+
+ break;
+ case UID_IS_CHARACTER:
+
+ memcpy(uid, (u8 *)(&e->uid), ACPIHID_UID_LEN - 1);
+ uid[ACPIHID_UID_LEN - 1] = '\0';
+
+ break;
+ default:
+ break;
+ }
+
+ devid = e->devid;
+ flags = e->flags;
+
+ DUMP_printk(" DEV_ACPI_HID(%s[%s])\t\tdevid: %02x:%02x.%x\n",
+ hid, uid,
+ PCI_BUS_NUM(devid),
+ PCI_SLOT(devid),
+ PCI_FUNC(devid));
+
+ ret = add_acpi_hid_device(hid, uid, &devid, false);
+ if (ret)
+ return ret;
+
+ /*
+ * add_acpi_hid_device() might update the devid in case a
+ * command-line override is present, so call
+ * set_dev_entry_from_acpi() after add_acpi_hid_device().
+ */
+ set_dev_entry_from_acpi(iommu, devid, e->flags, 0);
+
+ break;
+ }
default:
break;
}
@@ -1078,13 +1249,25 @@ static int __init init_iommu_one(struct amd_iommu *iommu, struct ivhd_header *h)
iommu->pci_seg = h->pci_seg;
iommu->mmio_phys = h->mmio_phys;
- /* Check if IVHD EFR contains proper max banks/counters */
- if ((h->efr != 0) &&
- ((h->efr & (0xF << 13)) != 0) &&
- ((h->efr & (0x3F << 17)) != 0)) {
- iommu->mmio_phys_end = MMIO_REG_END_OFFSET;
- } else {
- iommu->mmio_phys_end = MMIO_CNTR_CONF_OFFSET;
+ switch (h->type) {
+ case 0x10:
+ /* Check if IVHD EFR contains proper max banks/counters */
+ if ((h->efr_attr != 0) &&
+ ((h->efr_attr & (0xF << 13)) != 0) &&
+ ((h->efr_attr & (0x3F << 17)) != 0))
+ iommu->mmio_phys_end = MMIO_REG_END_OFFSET;
+ else
+ iommu->mmio_phys_end = MMIO_CNTR_CONF_OFFSET;
+ break;
+ case 0x11:
+ case 0x40:
+ if (h->efr_reg & (1 << 9))
+ iommu->mmio_phys_end = MMIO_REG_END_OFFSET;
+ else
+ iommu->mmio_phys_end = MMIO_CNTR_CONF_OFFSET;
+ break;
+ default:
+ return -EINVAL;
}
iommu->mmio_base = iommu_map_mmio_space(iommu->mmio_phys,
@@ -1117,6 +1300,32 @@ static int __init init_iommu_one(struct amd_iommu *iommu, struct ivhd_header *h)
return 0;
}
+/**
+ * get_highest_supported_ivhd_type - Look up the appropriate IVHD type
+ * @ivrs: Pointer to the IVRS header
+ *
+ * This function searches through all IVHD blocks in the IVRS table and
+ * returns the highest supported IVHD type it finds.
+ */
+static u8 get_highest_supported_ivhd_type(struct acpi_table_header *ivrs)
+{
+ u8 *base = (u8 *)ivrs;
+ struct ivhd_header *ivhd = (struct ivhd_header *)
+ (base + IVRS_HEADER_LENGTH);
+ u8 last_type = ivhd->type;
+ u16 devid = ivhd->devid;
+
+ while (((u8 *)ivhd - base < ivrs->length) &&
+ (ivhd->type <= ACPI_IVHD_TYPE_MAX_SUPPORTED)) {
+ u8 *p = (u8 *) ivhd;
+
+ if (ivhd->devid == devid)
+ last_type = ivhd->type;
+ ivhd = (struct ivhd_header *)(p + ivhd->length);
+ }
+
+ return last_type;
+}
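/*
 * Editor's example (not part of the patch): if the IVRS table carries
 * IVHD blocks of type 0x10, 0x11 and 0x40 for the same IOMMU (same
 * devid), the loop above leaves last_type == 0x40, so only the most
 * feature-complete block is parsed later and the legacy 0x10 copy is
 * skipped.
 */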
+
/*
* Iterates over all IOMMU entries in the ACPI table, allocates the
* IOMMU structure and initializes it with init_iommu_one()
@@ -1133,8 +1342,7 @@ static int __init init_iommu_all(struct acpi_table_header *table)
while (p < end) {
h = (struct ivhd_header *)p;
- switch (*p) {
- case ACPI_IVHD_TYPE:
+ if (*p == amd_iommu_target_ivhd_type) {
DUMP_printk("device: %02x:%02x.%01x cap: %04x "
"seg: %d flags: %01x info %04x\n",
@@ -1151,9 +1359,6 @@ static int __init init_iommu_all(struct acpi_table_header *table)
ret = init_iommu_one(iommu, h);
if (ret)
return ret;
- break;
- default:
- break;
}
p += h->length;
@@ -1818,18 +2023,20 @@ static void __init free_dma_resources(void)
* remapping setup code.
*
* This function basically parses the ACPI table for AMD IOMMU (IVRS)
- * three times:
+ * four times:
+ *
+ * 1 pass) Discover the most comprehensive IVHD type to use.
*
- * 1 pass) Find the highest PCI device id the driver has to handle.
+ * 2 pass) Find the highest PCI device id the driver has to handle.
* Upon this information the size of the data structures is
* determined that needs to be allocated.
*
- * 2 pass) Initialize the data structures just allocated with the
+ * 3 pass) Initialize the data structures just allocated with the
* information in the ACPI table about available AMD IOMMUs
* in the system. It also maps the PCI devices in the
* system to specific IOMMUs
*
- * 3 pass) After the basic data structures are allocated and
+ * 4 pass) After the basic data structures are allocated and
* initialized we update them with information about memory
* remapping requirements parsed out of the ACPI table in
* this last pass.
@@ -1857,6 +2064,17 @@ static int __init early_amd_iommu_init(void)
}
/*
+ * Validate checksum here so we don't need to do it when
+ * we actually parse the table
+ */
+ ret = check_ivrs_checksum(ivrs_base);
+ if (ret)
+ return ret;
+
+ amd_iommu_target_ivhd_type = get_highest_supported_ivhd_type(ivrs_base);
+ DUMP_printk("Using IVHD type %#x\n", amd_iommu_target_ivhd_type);
+
+ /*
* First parse ACPI tables to find the largest Bus/Dev/Func
* we need to handle. Upon this information the shared data
* structures for the IOMMUs in the system will be allocated
@@ -2259,10 +2477,43 @@ static int __init parse_ivrs_hpet(char *str)
return 1;
}
+static int __init parse_ivrs_acpihid(char *str)
+{
+ u32 bus, dev, fn;
+ char *hid, *uid, *p;
+ char acpiid[ACPIHID_UID_LEN + ACPIHID_HID_LEN] = {0};
+ int ret, i;
+
+ ret = sscanf(str, "[%x:%x.%x]=%s", &bus, &dev, &fn, acpiid);
+ if (ret != 4) {
+ pr_err("AMD-Vi: Invalid command line: ivrs_acpihid(%s)\n", str);
+ return 1;
+ }
+
+ p = acpiid;
+ hid = strsep(&p, ":");
+ uid = p;
+
+ if (!hid || !(*hid) || !uid) {
+ pr_err("AMD-Vi: Invalid command line: hid or uid\n");
+ return 1;
+ }
+
+ i = early_acpihid_map_size++;
+ memcpy(early_acpihid_map[i].hid, hid, strlen(hid));
+ memcpy(early_acpihid_map[i].uid, uid, strlen(uid));
+ early_acpihid_map[i].devid =
+ ((bus & 0xff) << 8) | ((dev & 0x1f) << 3) | (fn & 0x7);
+ early_acpihid_map[i].cmd_line = true;
+
+ return 1;
+}
+
__setup("amd_iommu_dump", parse_amd_iommu_dump);
__setup("amd_iommu=", parse_amd_iommu_options);
__setup("ivrs_ioapic", parse_ivrs_ioapic);
__setup("ivrs_hpet", parse_ivrs_hpet);
+__setup("ivrs_acpihid", parse_ivrs_acpihid);
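/*
 * Editor's usage sketch, matching the "[%x:%x.%x]=%s" format parsed
 * above (the exact value is illustrative): booting with
 *
 *     ivrs_acpihid[00:14.5]=AMD0020:0
 *
 * maps the ACPI device with HID "AMD0020" and UID "0" to PCI device
 * 00:14.5 (bus 0x00, slot 0x14, function 0x5) for IOMMU purposes.
 */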
IOMMU_INIT_FINISH(amd_iommu_detect,
gart_iommu_hole_init,
diff --git a/drivers/iommu/amd_iommu_types.h b/drivers/iommu/amd_iommu_types.h
index 9d32b20a5e9a..590956ac704e 100644
--- a/drivers/iommu/amd_iommu_types.h
+++ b/drivers/iommu/amd_iommu_types.h
@@ -527,6 +527,19 @@ struct amd_iommu {
#endif
};
+#define ACPIHID_UID_LEN 256
+#define ACPIHID_HID_LEN 9
+
+struct acpihid_map_entry {
+ struct list_head list;
+ u8 uid[ACPIHID_UID_LEN];
+ u8 hid[ACPIHID_HID_LEN];
+ u16 devid;
+ u16 root_devid;
+ bool cmd_line;
+ struct iommu_group *group;
+};
+
struct devid_map {
struct list_head list;
u8 id;
@@ -537,6 +550,7 @@ struct devid_map {
/* Map HPET and IOAPIC ids to the devid used by the IOMMU */
extern struct list_head ioapic_map;
extern struct list_head hpet_map;
+extern struct list_head acpihid_map;
/*
* List with all IOMMUs in the system. This list is not locked because it is
@@ -668,30 +682,4 @@ static inline int get_hpet_devid(int id)
return -EINVAL;
}
-#ifdef CONFIG_AMD_IOMMU_STATS
-
-struct __iommu_counter {
- char *name;
- struct dentry *dent;
- u64 value;
-};
-
-#define DECLARE_STATS_COUNTER(nm) \
- static struct __iommu_counter nm = { \
- .name = #nm, \
- }
-
-#define INC_STATS_COUNTER(name) name.value += 1
-#define ADD_STATS_COUNTER(name, x) name.value += (x)
-#define SUB_STATS_COUNTER(name, x) name.value -= (x)
-
-#else /* CONFIG_AMD_IOMMU_STATS */
-
-#define DECLARE_STATS_COUNTER(name)
-#define INC_STATS_COUNTER(name)
-#define ADD_STATS_COUNTER(name, x)
-#define SUB_STATS_COUNTER(name, x)
-
-#endif /* CONFIG_AMD_IOMMU_STATS */
-
#endif /* _ASM_X86_AMD_IOMMU_TYPES_H */
diff --git a/drivers/iommu/arm-smmu-v3.c b/drivers/iommu/arm-smmu-v3.c
index 4ff73ff64e49..94b68213c50d 100644
--- a/drivers/iommu/arm-smmu-v3.c
+++ b/drivers/iommu/arm-smmu-v3.c
@@ -590,6 +590,7 @@ struct arm_smmu_device {
unsigned long ias; /* IPA */
unsigned long oas; /* PA */
+ unsigned long pgsize_bitmap;
#define ARM_SMMU_MAX_ASIDS (1 << 16)
unsigned int asid_bits;
@@ -1476,7 +1477,7 @@ static int arm_smmu_domain_finalise_s1(struct arm_smmu_domain *smmu_domain,
struct arm_smmu_s1_cfg *cfg = &smmu_domain->s1_cfg;
asid = arm_smmu_bitmap_alloc(smmu->asid_map, smmu->asid_bits);
- if (IS_ERR_VALUE(asid))
+ if (asid < 0)
return asid;
cfg->cdptr = dmam_alloc_coherent(smmu->dev, CTXDESC_CD_DWORDS << 3,
@@ -1507,7 +1508,7 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
struct arm_smmu_s2_cfg *cfg = &smmu_domain->s2_cfg;
vmid = arm_smmu_bitmap_alloc(smmu->vmid_map, smmu->vmid_bits);
- if (IS_ERR_VALUE(vmid))
+ if (vmid < 0)
return vmid;
cfg->vmid = (u16)vmid;
@@ -1516,8 +1517,6 @@ static int arm_smmu_domain_finalise_s2(struct arm_smmu_domain *smmu_domain,
return 0;
}
-static struct iommu_ops arm_smmu_ops;
-
static int arm_smmu_domain_finalise(struct iommu_domain *domain)
{
int ret;
@@ -1555,7 +1554,7 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain)
}
pgtbl_cfg = (struct io_pgtable_cfg) {
- .pgsize_bitmap = arm_smmu_ops.pgsize_bitmap,
+ .pgsize_bitmap = smmu->pgsize_bitmap,
.ias = ias,
.oas = oas,
.tlb = &arm_smmu_gather_ops,
@@ -1566,11 +1565,11 @@ static int arm_smmu_domain_finalise(struct iommu_domain *domain)
if (!pgtbl_ops)
return -ENOMEM;
- arm_smmu_ops.pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
+ domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
smmu_domain->pgtbl_ops = pgtbl_ops;
ret = finalise_stage_fn(smmu_domain, &pgtbl_cfg);
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
free_io_pgtable_ops(pgtbl_ops);
return ret;
@@ -1643,7 +1642,7 @@ static void arm_smmu_detach_dev(struct device *dev)
struct arm_smmu_group *smmu_group = arm_smmu_group_get(dev);
smmu_group->ste.bypass = true;
- if (IS_ERR_VALUE(arm_smmu_install_ste_for_group(smmu_group)))
+ if (arm_smmu_install_ste_for_group(smmu_group) < 0)
dev_warn(dev, "failed to install bypass STE\n");
smmu_group->domain = NULL;
@@ -1695,7 +1694,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
smmu_group->ste.bypass = domain->type == IOMMU_DOMAIN_DMA;
ret = arm_smmu_install_ste_for_group(smmu_group);
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
smmu_group->domain = NULL;
out_unlock:
@@ -2236,7 +2235,7 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
arm_smmu_evtq_handler,
arm_smmu_evtq_thread,
0, "arm-smmu-v3-evtq", smmu);
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
dev_warn(smmu->dev, "failed to enable evtq irq\n");
}
@@ -2245,7 +2244,7 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
ret = devm_request_irq(smmu->dev, irq,
arm_smmu_cmdq_sync_handler, 0,
"arm-smmu-v3-cmdq-sync", smmu);
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
dev_warn(smmu->dev, "failed to enable cmdq-sync irq\n");
}
@@ -2253,7 +2252,7 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
if (irq) {
ret = devm_request_irq(smmu->dev, irq, arm_smmu_gerror_handler,
0, "arm-smmu-v3-gerror", smmu);
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
dev_warn(smmu->dev, "failed to enable gerror irq\n");
}
@@ -2265,7 +2264,7 @@ static int arm_smmu_setup_irqs(struct arm_smmu_device *smmu)
arm_smmu_priq_thread,
0, "arm-smmu-v3-priq",
smmu);
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
dev_warn(smmu->dev,
"failed to enable priq irq\n");
else
@@ -2410,7 +2409,6 @@ static int arm_smmu_device_probe(struct arm_smmu_device *smmu)
{
u32 reg;
bool coherent;
- unsigned long pgsize_bitmap = 0;
/* IDR0 */
reg = readl_relaxed(smmu->base + ARM_SMMU_IDR0);
@@ -2541,13 +2539,16 @@ static int arm_smmu_device_probe(struct arm_smmu_device *smmu)
/* Page sizes */
if (reg & IDR5_GRAN64K)
- pgsize_bitmap |= SZ_64K | SZ_512M;
+ smmu->pgsize_bitmap |= SZ_64K | SZ_512M;
if (reg & IDR5_GRAN16K)
- pgsize_bitmap |= SZ_16K | SZ_32M;
+ smmu->pgsize_bitmap |= SZ_16K | SZ_32M;
if (reg & IDR5_GRAN4K)
- pgsize_bitmap |= SZ_4K | SZ_2M | SZ_1G;
+ smmu->pgsize_bitmap |= SZ_4K | SZ_2M | SZ_1G;
- arm_smmu_ops.pgsize_bitmap &= pgsize_bitmap;
+ if (arm_smmu_ops.pgsize_bitmap == -1UL)
+ arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
+ else
+ arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
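/*
 * Editor's note on the -1UL sentinel handling above (values are
 * illustrative): arm_smmu_ops.pgsize_bitmap is assumed to start at
 * -1UL ("nothing probed yet"); the first SMMU to probe replaces it
 * with, say, SZ_4K | SZ_2M | SZ_1G, and a later SMMU that also does
 * 64K granules ORs in SZ_64K | SZ_512M.  The ops thus advertise the
 * union, while each domain keeps the per-SMMU bitmap picked at
 * domain-finalise time.
 */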
/* Output address size */
switch (reg & IDR5_OAS_MASK << IDR5_OAS_SHIFT) {
diff --git a/drivers/iommu/arm-smmu.c b/drivers/iommu/arm-smmu.c
index 7c39ac4b9c53..9345a3fcb706 100644
--- a/drivers/iommu/arm-smmu.c
+++ b/drivers/iommu/arm-smmu.c
@@ -34,6 +34,7 @@
#include <linux/err.h>
#include <linux/interrupt.h>
#include <linux/io.h>
+#include <linux/io-64-nonatomic-hi-lo.h>
#include <linux/iommu.h>
#include <linux/iopoll.h>
#include <linux/module.h>
@@ -49,7 +50,7 @@
#include "io-pgtable.h"
/* Maximum number of stream IDs assigned to a single device */
-#define MAX_MASTER_STREAMIDS MAX_PHANDLE_ARGS
+#define MAX_MASTER_STREAMIDS 128
/* Maximum number of context banks per SMMU */
#define ARM_SMMU_MAX_CBS 128
@@ -71,16 +72,15 @@
((smmu->options & ARM_SMMU_OPT_SECURE_CFG_ACCESS) \
? 0x400 : 0))
+/*
+ * Some 64-bit registers only make sense to write atomically, but in such
+ * cases all the data relevant to AArch32 formats lies within the lower word,
+ * therefore this actually makes more sense than it might first appear.
+ */
#ifdef CONFIG_64BIT
-#define smmu_writeq writeq_relaxed
+#define smmu_write_atomic_lq writeq_relaxed
#else
-#define smmu_writeq(reg64, addr) \
- do { \
- u64 __val = (reg64); \
- void __iomem *__addr = (addr); \
- writel_relaxed(__val >> 32, __addr + 4); \
- writel_relaxed(__val, __addr); \
- } while (0)
+#define smmu_write_atomic_lq writel_relaxed
#endif
/* Configuration registers */
@@ -94,9 +94,13 @@
#define sCR0_VMIDPNE (1 << 11)
#define sCR0_PTM (1 << 12)
#define sCR0_FB (1 << 13)
+#define sCR0_VMID16EN (1 << 31)
#define sCR0_BSU_SHIFT 14
#define sCR0_BSU_MASK 0x3
+/* Auxiliary Configuration register */
+#define ARM_SMMU_GR0_sACR 0x10
+
/* Identification registers */
#define ARM_SMMU_GR0_ID0 0x20
#define ARM_SMMU_GR0_ID1 0x24
@@ -116,6 +120,8 @@
#define ID0_NTS (1 << 28)
#define ID0_SMS (1 << 27)
#define ID0_ATOSNS (1 << 26)
+#define ID0_PTFS_NO_AARCH32 (1 << 25)
+#define ID0_PTFS_NO_AARCH32S (1 << 24)
#define ID0_CTTW (1 << 14)
#define ID0_NUMIRPT_SHIFT 16
#define ID0_NUMIRPT_MASK 0xff
@@ -141,6 +147,10 @@
#define ID2_PTFS_4K (1 << 12)
#define ID2_PTFS_16K (1 << 13)
#define ID2_PTFS_64K (1 << 14)
+#define ID2_VMID16 (1 << 15)
+
+#define ID7_MAJOR_SHIFT 4
+#define ID7_MAJOR_MASK 0xf
/* Global TLB invalidation */
#define ARM_SMMU_GR0_TLBIVMID 0x64
@@ -193,12 +203,15 @@
#define ARM_SMMU_GR1_CBA2R(n) (0x800 + ((n) << 2))
#define CBA2R_RW64_32BIT (0 << 0)
#define CBA2R_RW64_64BIT (1 << 0)
+#define CBA2R_VMID_SHIFT 16
+#define CBA2R_VMID_MASK 0xffff
/* Translation context bank */
#define ARM_SMMU_CB_BASE(smmu) ((smmu)->base + ((smmu)->size >> 1))
#define ARM_SMMU_CB(smmu, n) ((n) * (1 << (smmu)->pgshift))
#define ARM_SMMU_CB_SCTLR 0x0
+#define ARM_SMMU_CB_ACTLR 0x4
#define ARM_SMMU_CB_RESUME 0x8
#define ARM_SMMU_CB_TTBCR2 0x10
#define ARM_SMMU_CB_TTBR0 0x20
@@ -206,11 +219,9 @@
#define ARM_SMMU_CB_TTBCR 0x30
#define ARM_SMMU_CB_S1_MAIR0 0x38
#define ARM_SMMU_CB_S1_MAIR1 0x3c
-#define ARM_SMMU_CB_PAR_LO 0x50
-#define ARM_SMMU_CB_PAR_HI 0x54
+#define ARM_SMMU_CB_PAR 0x50
#define ARM_SMMU_CB_FSR 0x58
-#define ARM_SMMU_CB_FAR_LO 0x60
-#define ARM_SMMU_CB_FAR_HI 0x64
+#define ARM_SMMU_CB_FAR 0x60
#define ARM_SMMU_CB_FSYNR0 0x68
#define ARM_SMMU_CB_S1_TLBIVA 0x600
#define ARM_SMMU_CB_S1_TLBIASID 0x610
@@ -230,6 +241,10 @@
#define SCTLR_M (1 << 0)
#define SCTLR_EAE_SBOP (SCTLR_AFE | SCTLR_TRE)
+#define ARM_MMU500_ACTLR_CPRE (1 << 1)
+
+#define ARM_MMU500_ACR_CACHE_LOCK (1 << 26)
+
#define CB_PAR_F (1 << 0)
#define ATSR_ACTIVE (1 << 0)
@@ -270,10 +285,17 @@ MODULE_PARM_DESC(disable_bypass,
"Disable bypass streams such that incoming transactions from devices that are not attached to an iommu domain will report an abort back to the device and will not be allowed to pass through the SMMU.");
enum arm_smmu_arch_version {
- ARM_SMMU_V1 = 1,
+ ARM_SMMU_V1,
+ ARM_SMMU_V1_64K,
ARM_SMMU_V2,
};
+enum arm_smmu_implementation {
+ GENERIC_SMMU,
+ ARM_MMU500,
+ CAVIUM_SMMUV2,
+};
+
struct arm_smmu_smr {
u8 idx;
u16 mask;
@@ -305,11 +327,18 @@ struct arm_smmu_device {
#define ARM_SMMU_FEAT_TRANS_S2 (1 << 3)
#define ARM_SMMU_FEAT_TRANS_NESTED (1 << 4)
#define ARM_SMMU_FEAT_TRANS_OPS (1 << 5)
+#define ARM_SMMU_FEAT_VMID16 (1 << 6)
+#define ARM_SMMU_FEAT_FMT_AARCH64_4K (1 << 7)
+#define ARM_SMMU_FEAT_FMT_AARCH64_16K (1 << 8)
+#define ARM_SMMU_FEAT_FMT_AARCH64_64K (1 << 9)
+#define ARM_SMMU_FEAT_FMT_AARCH32_L (1 << 10)
+#define ARM_SMMU_FEAT_FMT_AARCH32_S (1 << 11)
u32 features;
#define ARM_SMMU_OPT_SECURE_CFG_ACCESS (1 << 0)
u32 options;
enum arm_smmu_arch_version version;
+ enum arm_smmu_implementation model;
u32 num_context_banks;
u32 num_s2_context_banks;
@@ -322,6 +351,7 @@ struct arm_smmu_device {
unsigned long va_size;
unsigned long ipa_size;
unsigned long pa_size;
+ unsigned long pgsize_bitmap;
u32 num_global_irqs;
u32 num_context_irqs;
@@ -329,17 +359,27 @@ struct arm_smmu_device {
struct list_head list;
struct rb_root masters;
+
+ u32 cavium_id_base; /* Specific to Cavium */
+};
+
+enum arm_smmu_context_fmt {
+ ARM_SMMU_CTX_FMT_NONE,
+ ARM_SMMU_CTX_FMT_AARCH64,
+ ARM_SMMU_CTX_FMT_AARCH32_L,
+ ARM_SMMU_CTX_FMT_AARCH32_S,
};
struct arm_smmu_cfg {
u8 cbndx;
u8 irptndx;
u32 cbar;
+ enum arm_smmu_context_fmt fmt;
};
#define INVALID_IRPTNDX 0xff
-#define ARM_SMMU_CB_ASID(cfg) ((cfg)->cbndx)
-#define ARM_SMMU_CB_VMID(cfg) ((cfg)->cbndx + 1)
+#define ARM_SMMU_CB_ASID(smmu, cfg) ((u16)(smmu)->cavium_id_base + (cfg)->cbndx)
+#define ARM_SMMU_CB_VMID(smmu, cfg) ((u16)(smmu)->cavium_id_base + (cfg)->cbndx + 1)
enum arm_smmu_domain_stage {
ARM_SMMU_DOMAIN_S1 = 0,
@@ -357,7 +397,11 @@ struct arm_smmu_domain {
struct iommu_domain domain;
};
-static struct iommu_ops arm_smmu_ops;
+struct arm_smmu_phandle_args {
+ struct device_node *np;
+ int args_count;
+ uint32_t args[MAX_MASTER_STREAMIDS];
+};
static DEFINE_SPINLOCK(arm_smmu_devices_lock);
static LIST_HEAD(arm_smmu_devices);
@@ -367,6 +411,8 @@ struct arm_smmu_option_prop {
const char *prop;
};
+static atomic_t cavium_smmu_context_count = ATOMIC_INIT(0);
+
static struct arm_smmu_option_prop arm_smmu_options[] = {
{ ARM_SMMU_OPT_SECURE_CFG_ACCESS, "calxeda,smmu-secure-config-access" },
{ 0, NULL},
@@ -466,7 +512,7 @@ static int insert_smmu_master(struct arm_smmu_device *smmu,
static int register_smmu_master(struct arm_smmu_device *smmu,
struct device *dev,
- struct of_phandle_args *masterspec)
+ struct arm_smmu_phandle_args *masterspec)
{
int i;
struct arm_smmu_master *master;
@@ -578,11 +624,11 @@ static void arm_smmu_tlb_inv_context(void *cookie)
if (stage1) {
base = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
- writel_relaxed(ARM_SMMU_CB_ASID(cfg),
+ writel_relaxed(ARM_SMMU_CB_ASID(smmu, cfg),
base + ARM_SMMU_CB_S1_TLBIASID);
} else {
base = ARM_SMMU_GR0(smmu);
- writel_relaxed(ARM_SMMU_CB_VMID(cfg),
+ writel_relaxed(ARM_SMMU_CB_VMID(smmu, cfg),
base + ARM_SMMU_GR0_TLBIVMID);
}
@@ -602,37 +648,33 @@ static void arm_smmu_tlb_inv_range_nosync(unsigned long iova, size_t size,
reg = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
reg += leaf ? ARM_SMMU_CB_S1_TLBIVAL : ARM_SMMU_CB_S1_TLBIVA;
- if (!IS_ENABLED(CONFIG_64BIT) || smmu->version == ARM_SMMU_V1) {
+ if (cfg->fmt != ARM_SMMU_CTX_FMT_AARCH64) {
iova &= ~12UL;
- iova |= ARM_SMMU_CB_ASID(cfg);
+ iova |= ARM_SMMU_CB_ASID(smmu, cfg);
do {
writel_relaxed(iova, reg);
iova += granule;
} while (size -= granule);
-#ifdef CONFIG_64BIT
} else {
iova >>= 12;
- iova |= (u64)ARM_SMMU_CB_ASID(cfg) << 48;
+ iova |= (u64)ARM_SMMU_CB_ASID(smmu, cfg) << 48;
do {
writeq_relaxed(iova, reg);
iova += granule >> 12;
} while (size -= granule);
-#endif
}
-#ifdef CONFIG_64BIT
} else if (smmu->version == ARM_SMMU_V2) {
reg = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
reg += leaf ? ARM_SMMU_CB_S2_TLBIIPAS2L :
ARM_SMMU_CB_S2_TLBIIPAS2;
iova >>= 12;
do {
- writeq_relaxed(iova, reg);
+ smmu_write_atomic_lq(iova, reg);
iova += granule >> 12;
} while (size -= granule);
-#endif
} else {
reg = ARM_SMMU_GR0(smmu) + ARM_SMMU_GR0_TLBIVMID;
- writel_relaxed(ARM_SMMU_CB_VMID(cfg), reg);
+ writel_relaxed(ARM_SMMU_CB_VMID(smmu, cfg), reg);
}
}
@@ -645,7 +687,7 @@ static struct iommu_gather_ops arm_smmu_gather_ops = {
static irqreturn_t arm_smmu_context_fault(int irq, void *dev)
{
int flags, ret;
- u32 fsr, far, fsynr, resume;
+ u32 fsr, fsynr, resume;
unsigned long iova;
struct iommu_domain *domain = dev;
struct arm_smmu_domain *smmu_domain = to_smmu_domain(domain);
@@ -667,13 +709,7 @@ static irqreturn_t arm_smmu_context_fault(int irq, void *dev)
fsynr = readl_relaxed(cb_base + ARM_SMMU_CB_FSYNR0);
flags = fsynr & FSYNR0_WNR ? IOMMU_FAULT_WRITE : IOMMU_FAULT_READ;
- far = readl_relaxed(cb_base + ARM_SMMU_CB_FAR_LO);
- iova = far;
-#ifdef CONFIG_64BIT
- far = readl_relaxed(cb_base + ARM_SMMU_CB_FAR_HI);
- iova |= ((unsigned long)far << 32);
-#endif
-
+ iova = readq_relaxed(cb_base + ARM_SMMU_CB_FAR);
if (!report_iommu_fault(domain, smmu->dev, iova, flags)) {
ret = IRQ_HANDLED;
resume = RESUME_RETRY;
@@ -734,22 +770,20 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain,
cb_base = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, cfg->cbndx);
if (smmu->version > ARM_SMMU_V1) {
- /*
- * CBA2R.
- * *Must* be initialised before CBAR thanks to VMID16
- * architectural oversight affected some implementations.
- */
-#ifdef CONFIG_64BIT
- reg = CBA2R_RW64_64BIT;
-#else
- reg = CBA2R_RW64_32BIT;
-#endif
+ if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH64)
+ reg = CBA2R_RW64_64BIT;
+ else
+ reg = CBA2R_RW64_32BIT;
+ /* 16-bit VMIDs live in CBA2R */
+ if (smmu->features & ARM_SMMU_FEAT_VMID16)
+ reg |= ARM_SMMU_CB_VMID(smmu, cfg) << CBA2R_VMID_SHIFT;
+
writel_relaxed(reg, gr1_base + ARM_SMMU_GR1_CBA2R(cfg->cbndx));
}
/* CBAR */
reg = cfg->cbar;
- if (smmu->version == ARM_SMMU_V1)
+ if (smmu->version < ARM_SMMU_V2)
reg |= cfg->irptndx << CBAR_IRPTNDX_SHIFT;
/*
@@ -759,8 +793,9 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain,
if (stage1) {
reg |= (CBAR_S1_BPSHCFG_NSH << CBAR_S1_BPSHCFG_SHIFT) |
(CBAR_S1_MEMATTR_WB << CBAR_S1_MEMATTR_SHIFT);
- } else {
- reg |= ARM_SMMU_CB_VMID(cfg) << CBAR_VMID_SHIFT;
+ } else if (!(smmu->features & ARM_SMMU_FEAT_VMID16)) {
+ /* 8-bit VMIDs live in CBAR */
+ reg |= ARM_SMMU_CB_VMID(smmu, cfg) << CBAR_VMID_SHIFT;
}
writel_relaxed(reg, gr1_base + ARM_SMMU_GR1_CBAR(cfg->cbndx));
@@ -768,15 +803,15 @@ static void arm_smmu_init_context_bank(struct arm_smmu_domain *smmu_domain,
if (stage1) {
reg64 = pgtbl_cfg->arm_lpae_s1_cfg.ttbr[0];
- reg64 |= ((u64)ARM_SMMU_CB_ASID(cfg)) << TTBRn_ASID_SHIFT;
- smmu_writeq(reg64, cb_base + ARM_SMMU_CB_TTBR0);
+ reg64 |= ((u64)ARM_SMMU_CB_ASID(smmu, cfg)) << TTBRn_ASID_SHIFT;
+ writeq_relaxed(reg64, cb_base + ARM_SMMU_CB_TTBR0);
reg64 = pgtbl_cfg->arm_lpae_s1_cfg.ttbr[1];
- reg64 |= ((u64)ARM_SMMU_CB_ASID(cfg)) << TTBRn_ASID_SHIFT;
- smmu_writeq(reg64, cb_base + ARM_SMMU_CB_TTBR1);
+ reg64 |= ((u64)ARM_SMMU_CB_ASID(smmu, cfg)) << TTBRn_ASID_SHIFT;
+ writeq_relaxed(reg64, cb_base + ARM_SMMU_CB_TTBR1);
} else {
reg64 = pgtbl_cfg->arm_lpae_s2_cfg.vttbr;
- smmu_writeq(reg64, cb_base + ARM_SMMU_CB_TTBR0);
+ writeq_relaxed(reg64, cb_base + ARM_SMMU_CB_TTBR0);
}
/* TTBCR */
@@ -855,16 +890,40 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
if (!(smmu->features & ARM_SMMU_FEAT_TRANS_S2))
smmu_domain->stage = ARM_SMMU_DOMAIN_S1;
+ /*
+ * Choosing a suitable context format is even more fiddly. Until we
+ * grow some way for the caller to express a preference, and/or move
+ * the decision into the io-pgtable code where it arguably belongs,
+ * just aim for the closest thing to the rest of the system, and hope
+ * that the hardware isn't esoteric enough that we can't assume AArch64
+ * support to be a superset of AArch32 support...
+ */
+ if (smmu->features & ARM_SMMU_FEAT_FMT_AARCH32_L)
+ cfg->fmt = ARM_SMMU_CTX_FMT_AARCH32_L;
+ if ((IS_ENABLED(CONFIG_64BIT) || cfg->fmt == ARM_SMMU_CTX_FMT_NONE) &&
+ (smmu->features & (ARM_SMMU_FEAT_FMT_AARCH64_64K |
+ ARM_SMMU_FEAT_FMT_AARCH64_16K |
+ ARM_SMMU_FEAT_FMT_AARCH64_4K)))
+ cfg->fmt = ARM_SMMU_CTX_FMT_AARCH64;
+
+ if (cfg->fmt == ARM_SMMU_CTX_FMT_NONE) {
+ ret = -EINVAL;
+ goto out_unlock;
+ }
+
switch (smmu_domain->stage) {
case ARM_SMMU_DOMAIN_S1:
cfg->cbar = CBAR_TYPE_S1_TRANS_S2_BYPASS;
start = smmu->num_s2_context_banks;
ias = smmu->va_size;
oas = smmu->ipa_size;
- if (IS_ENABLED(CONFIG_64BIT))
+ if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH64) {
fmt = ARM_64_LPAE_S1;
- else
+ } else {
fmt = ARM_32_LPAE_S1;
+ ias = min(ias, 32UL);
+ oas = min(oas, 40UL);
+ }
break;
case ARM_SMMU_DOMAIN_NESTED:
/*
@@ -876,10 +935,13 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
start = 0;
ias = smmu->ipa_size;
oas = smmu->pa_size;
- if (IS_ENABLED(CONFIG_64BIT))
+ if (cfg->fmt == ARM_SMMU_CTX_FMT_AARCH64) {
fmt = ARM_64_LPAE_S2;
- else
+ } else {
fmt = ARM_32_LPAE_S2;
+ ias = min(ias, 40UL);
+ oas = min(oas, 40UL);
+ }
break;
default:
ret = -EINVAL;
@@ -888,11 +950,11 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
ret = __arm_smmu_alloc_bitmap(smmu->context_map, start,
smmu->num_context_banks);
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
goto out_unlock;
cfg->cbndx = ret;
- if (smmu->version == ARM_SMMU_V1) {
+ if (smmu->version < ARM_SMMU_V2) {
cfg->irptndx = atomic_inc_return(&smmu->irptndx);
cfg->irptndx %= smmu->num_context_irqs;
} else {
@@ -900,7 +962,7 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
}
pgtbl_cfg = (struct io_pgtable_cfg) {
- .pgsize_bitmap = arm_smmu_ops.pgsize_bitmap,
+ .pgsize_bitmap = smmu->pgsize_bitmap,
.ias = ias,
.oas = oas,
.tlb = &arm_smmu_gather_ops,
@@ -914,8 +976,8 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
goto out_clear_smmu;
}
- /* Update our support page sizes to reflect the page table format */
- arm_smmu_ops.pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
+ /* Update the domain's page sizes to reflect the page table format */
+ domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
/* Initialise the context bank with our page table cfg */
arm_smmu_init_context_bank(smmu_domain, &pgtbl_cfg);
@@ -927,7 +989,7 @@ static int arm_smmu_init_domain_context(struct iommu_domain *domain,
irq = smmu->irqs[smmu->num_global_irqs + cfg->irptndx];
ret = request_irq(irq, arm_smmu_context_fault, IRQF_SHARED,
"arm-smmu-context-fault", domain);
- if (IS_ERR_VALUE(ret)) {
+ if (ret < 0) {
dev_err(smmu->dev, "failed to request context IRQ %d (%u)\n",
cfg->irptndx, irq);
cfg->irptndx = INVALID_IRPTNDX;
@@ -1037,7 +1099,7 @@ static int arm_smmu_master_configure_smrs(struct arm_smmu_device *smmu,
for (i = 0; i < cfg->num_streamids; ++i) {
int idx = __arm_smmu_alloc_bitmap(smmu->smr_map, 0,
smmu->num_mapping_groups);
- if (IS_ERR_VALUE(idx)) {
+ if (idx < 0) {
dev_err(smmu->dev, "failed to allocate free SMR\n");
goto err_free_smrs;
}
@@ -1171,7 +1233,7 @@ static int arm_smmu_attach_dev(struct iommu_domain *domain, struct device *dev)
/* Ensure that the domain is finalised */
ret = arm_smmu_init_domain_context(domain, smmu);
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
return ret;
/*
@@ -1252,8 +1314,8 @@ static phys_addr_t arm_smmu_iova_to_phys_hard(struct iommu_domain *domain,
/* ATS1 registers can only be written atomically */
va = iova & ~0xfffUL;
if (smmu->version == ARM_SMMU_V2)
- smmu_writeq(va, cb_base + ARM_SMMU_CB_ATS1PR);
- else
+ smmu_write_atomic_lq(va, cb_base + ARM_SMMU_CB_ATS1PR);
+ else /* Register is only 32-bit in v1 */
writel_relaxed(va, cb_base + ARM_SMMU_CB_ATS1PR);
if (readl_poll_timeout_atomic(cb_base + ARM_SMMU_CB_ATSR, tmp,
@@ -1264,9 +1326,7 @@ static phys_addr_t arm_smmu_iova_to_phys_hard(struct iommu_domain *domain,
return ops->iova_to_phys(ops, iova);
}
- phys = readl_relaxed(cb_base + ARM_SMMU_CB_PAR_LO);
- phys |= ((u64)readl_relaxed(cb_base + ARM_SMMU_CB_PAR_HI)) << 32;
-
+ phys = readq_relaxed(cb_base + ARM_SMMU_CB_PAR);
if (phys & CB_PAR_F) {
dev_err(dev, "translation fault!\n");
dev_err(dev, "PAR = 0x%llx\n", phys);
@@ -1492,7 +1552,7 @@ static void arm_smmu_device_reset(struct arm_smmu_device *smmu)
void __iomem *gr0_base = ARM_SMMU_GR0(smmu);
void __iomem *cb_base;
int i = 0;
- u32 reg;
+ u32 reg, major;
/* clear global FSR */
reg = readl_relaxed(ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sGFSR);
@@ -1505,11 +1565,33 @@ static void arm_smmu_device_reset(struct arm_smmu_device *smmu)
writel_relaxed(reg, gr0_base + ARM_SMMU_GR0_S2CR(i));
}
+ /*
+ * Before clearing ARM_MMU500_ACTLR_CPRE, the CACHE_LOCK bit of
+ * the ACR must be cleared first. The CACHE_LOCK bit is only
+ * present from MMU-500 r2 onwards.
+ */
+ reg = readl_relaxed(gr0_base + ARM_SMMU_GR0_ID7);
+ major = (reg >> ID7_MAJOR_SHIFT) & ID7_MAJOR_MASK;
+ if ((smmu->model == ARM_MMU500) && (major >= 2)) {
+ reg = readl_relaxed(gr0_base + ARM_SMMU_GR0_sACR);
+ reg &= ~ARM_MMU500_ACR_CACHE_LOCK;
+ writel_relaxed(reg, gr0_base + ARM_SMMU_GR0_sACR);
+ }
+
/* Make sure all context banks are disabled and clear CB_FSR */
for (i = 0; i < smmu->num_context_banks; ++i) {
cb_base = ARM_SMMU_CB_BASE(smmu) + ARM_SMMU_CB(smmu, i);
writel_relaxed(0, cb_base + ARM_SMMU_CB_SCTLR);
writel_relaxed(FSR_FAULT, cb_base + ARM_SMMU_CB_FSR);
+ /*
+ * Disable MMU-500's not-particularly-beneficial next-page
+ * prefetcher for the sake of errata #841119 and #826419.
+ */
+ if (smmu->model == ARM_MMU500) {
+ reg = readl_relaxed(cb_base + ARM_SMMU_CB_ACTLR);
+ reg &= ~ARM_MMU500_ACTLR_CPRE;
+ writel_relaxed(reg, cb_base + ARM_SMMU_CB_ACTLR);
+ }
}
/* Invalidate the TLB, just in case */
@@ -1537,6 +1619,9 @@ static void arm_smmu_device_reset(struct arm_smmu_device *smmu)
/* Don't upgrade barriers */
reg &= ~(sCR0_BSU_MASK << sCR0_BSU_SHIFT);
+ if (smmu->features & ARM_SMMU_FEAT_VMID16)
+ reg |= sCR0_VMID16EN;
+
/* Push the button */
__arm_smmu_tlb_sync(smmu);
writel(reg, ARM_SMMU_GR0_NS(smmu) + ARM_SMMU_GR0_sCR0);
@@ -1569,7 +1654,8 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
bool cttw_dt, cttw_reg;
dev_notice(smmu->dev, "probing hardware configuration...\n");
- dev_notice(smmu->dev, "SMMUv%d with:\n", smmu->version);
+ dev_notice(smmu->dev, "SMMUv%d with:\n",
+ smmu->version == ARM_SMMU_V2 ? 2 : 1);
/* ID0 */
id = readl_relaxed(gr0_base + ARM_SMMU_GR0_ID0);
@@ -1601,7 +1687,8 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
return -ENODEV;
}
- if ((id & ID0_S1TS) && ((smmu->version == 1) || !(id & ID0_ATOSNS))) {
+ if ((id & ID0_S1TS) &&
+ ((smmu->version < ARM_SMMU_V2) || !(id & ID0_ATOSNS))) {
smmu->features |= ARM_SMMU_FEAT_TRANS_OPS;
dev_notice(smmu->dev, "\taddress translation ops\n");
}
@@ -1657,6 +1744,12 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
ID0_NUMSIDB_MASK;
}
+ if (smmu->version < ARM_SMMU_V2 || !(id & ID0_PTFS_NO_AARCH32)) {
+ smmu->features |= ARM_SMMU_FEAT_FMT_AARCH32_L;
+ if (!(id & ID0_PTFS_NO_AARCH32S))
+ smmu->features |= ARM_SMMU_FEAT_FMT_AARCH32_S;
+ }
+
/* ID1 */
id = readl_relaxed(gr0_base + ARM_SMMU_GR0_ID1);
smmu->pgshift = (id & ID1_PAGESIZE) ? 16 : 12;
@@ -1677,6 +1770,17 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
}
dev_notice(smmu->dev, "\t%u context banks (%u stage-2 only)\n",
smmu->num_context_banks, smmu->num_s2_context_banks);
+ /*
+ * Cavium CN88xx erratum #27704.
+ * Ensure ASID and VMID allocation is unique across all SMMUs in
+ * the system.
+ */
+ if (smmu->model == CAVIUM_SMMUV2) {
+ smmu->cavium_id_base =
+ atomic_add_return(smmu->num_context_banks,
+ &cavium_smmu_context_count);
+ smmu->cavium_id_base -= smmu->num_context_banks;
+ }
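/*
 * Editor's worked example (values illustrative): with two CAVIUM_SMMUV2
 * instances of 128 context banks each, the first to probe gets
 * cavium_id_base = 0 and the second gets 128, so the
 * ARM_SMMU_CB_ASID()/ARM_SMMU_CB_VMID() macros yield disjoint ranges
 * (0..127 / 1..128 vs 128..255 / 129..256) and TLB entries can never
 * alias across the two SMMUs, as erratum #27704 requires.
 */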
/* ID2 */
id = readl_relaxed(gr0_base + ARM_SMMU_GR0_ID2);
@@ -1687,6 +1791,9 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
size = arm_smmu_id_size_to_bits((id >> ID2_OAS_SHIFT) & ID2_OAS_MASK);
smmu->pa_size = size;
+ if (id & ID2_VMID16)
+ smmu->features |= ARM_SMMU_FEAT_VMID16;
+
/*
* What the page table walker can address actually depends on which
* descriptor format is in use, but since a) we don't know that yet,
@@ -1696,26 +1803,39 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
dev_warn(smmu->dev,
"failed to set DMA mask for table walker\n");
- if (smmu->version == ARM_SMMU_V1) {
+ if (smmu->version < ARM_SMMU_V2) {
smmu->va_size = smmu->ipa_size;
- size = SZ_4K | SZ_2M | SZ_1G;
+ if (smmu->version == ARM_SMMU_V1_64K)
+ smmu->features |= ARM_SMMU_FEAT_FMT_AARCH64_64K;
} else {
size = (id >> ID2_UBS_SHIFT) & ID2_UBS_MASK;
smmu->va_size = arm_smmu_id_size_to_bits(size);
-#ifndef CONFIG_64BIT
- smmu->va_size = min(32UL, smmu->va_size);
-#endif
- size = 0;
if (id & ID2_PTFS_4K)
- size |= SZ_4K | SZ_2M | SZ_1G;
+ smmu->features |= ARM_SMMU_FEAT_FMT_AARCH64_4K;
if (id & ID2_PTFS_16K)
- size |= SZ_16K | SZ_32M;
+ smmu->features |= ARM_SMMU_FEAT_FMT_AARCH64_16K;
if (id & ID2_PTFS_64K)
- size |= SZ_64K | SZ_512M;
+ smmu->features |= ARM_SMMU_FEAT_FMT_AARCH64_64K;
}
- arm_smmu_ops.pgsize_bitmap &= size;
- dev_notice(smmu->dev, "\tSupported page sizes: 0x%08lx\n", size);
+ /* Now we've corralled the various formats, what'll it do? */
+ if (smmu->features & ARM_SMMU_FEAT_FMT_AARCH32_S)
+ smmu->pgsize_bitmap |= SZ_4K | SZ_64K | SZ_1M | SZ_16M;
+ if (smmu->features &
+ (ARM_SMMU_FEAT_FMT_AARCH32_L | ARM_SMMU_FEAT_FMT_AARCH64_4K))
+ smmu->pgsize_bitmap |= SZ_4K | SZ_2M | SZ_1G;
+ if (smmu->features & ARM_SMMU_FEAT_FMT_AARCH64_16K)
+ smmu->pgsize_bitmap |= SZ_16K | SZ_32M;
+ if (smmu->features & ARM_SMMU_FEAT_FMT_AARCH64_64K)
+ smmu->pgsize_bitmap |= SZ_64K | SZ_512M;
+
+ if (arm_smmu_ops.pgsize_bitmap == -1UL)
+ arm_smmu_ops.pgsize_bitmap = smmu->pgsize_bitmap;
+ else
+ arm_smmu_ops.pgsize_bitmap |= smmu->pgsize_bitmap;
+ dev_notice(smmu->dev, "\tSupported page sizes: 0x%08lx\n",
+ smmu->pgsize_bitmap);
+
if (smmu->features & ARM_SMMU_FEAT_TRANS_S1)
dev_notice(smmu->dev, "\tStage-1: %lu-bit VA -> %lu-bit IPA\n",
@@ -1728,12 +1848,27 @@ static int arm_smmu_device_cfg_probe(struct arm_smmu_device *smmu)
return 0;
}
+struct arm_smmu_match_data {
+ enum arm_smmu_arch_version version;
+ enum arm_smmu_implementation model;
+};
+
+#define ARM_SMMU_MATCH_DATA(name, ver, imp) \
+static struct arm_smmu_match_data name = { .version = ver, .model = imp }
+
+ARM_SMMU_MATCH_DATA(smmu_generic_v1, ARM_SMMU_V1, GENERIC_SMMU);
+ARM_SMMU_MATCH_DATA(smmu_generic_v2, ARM_SMMU_V2, GENERIC_SMMU);
+ARM_SMMU_MATCH_DATA(arm_mmu401, ARM_SMMU_V1_64K, GENERIC_SMMU);
+ARM_SMMU_MATCH_DATA(arm_mmu500, ARM_SMMU_V2, ARM_MMU500);
+ARM_SMMU_MATCH_DATA(cavium_smmuv2, ARM_SMMU_V2, CAVIUM_SMMUV2);
+
static const struct of_device_id arm_smmu_of_match[] = {
- { .compatible = "arm,smmu-v1", .data = (void *)ARM_SMMU_V1 },
- { .compatible = "arm,smmu-v2", .data = (void *)ARM_SMMU_V2 },
- { .compatible = "arm,mmu-400", .data = (void *)ARM_SMMU_V1 },
- { .compatible = "arm,mmu-401", .data = (void *)ARM_SMMU_V1 },
- { .compatible = "arm,mmu-500", .data = (void *)ARM_SMMU_V2 },
+ { .compatible = "arm,smmu-v1", .data = &smmu_generic_v1 },
+ { .compatible = "arm,smmu-v2", .data = &smmu_generic_v2 },
+ { .compatible = "arm,mmu-400", .data = &smmu_generic_v1 },
+ { .compatible = "arm,mmu-401", .data = &arm_mmu401 },
+ { .compatible = "arm,mmu-500", .data = &arm_mmu500 },
+ { .compatible = "cavium,smmu-v2", .data = &cavium_smmuv2 },
{ },
};
MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
@@ -1741,11 +1876,13 @@ MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
static int arm_smmu_device_dt_probe(struct platform_device *pdev)
{
const struct of_device_id *of_id;
+ const struct arm_smmu_match_data *data;
struct resource *res;
struct arm_smmu_device *smmu;
struct device *dev = &pdev->dev;
struct rb_node *node;
- struct of_phandle_args masterspec;
+ struct of_phandle_iterator it;
+ struct arm_smmu_phandle_args *masterspec;
int num_irqs, i, err;
smmu = devm_kzalloc(dev, sizeof(*smmu), GFP_KERNEL);
@@ -1756,7 +1893,9 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev)
smmu->dev = dev;
of_id = of_match_node(arm_smmu_of_match, dev->of_node);
- smmu->version = (enum arm_smmu_arch_version)of_id->data;
+ data = of_id->data;
+ smmu->version = data->version;
+ smmu->model = data->model;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
smmu->base = devm_ioremap_resource(dev, res);
@@ -1806,23 +1945,38 @@ static int arm_smmu_device_dt_probe(struct platform_device *pdev)
i = 0;
smmu->masters = RB_ROOT;
- while (!of_parse_phandle_with_args(dev->of_node, "mmu-masters",
- "#stream-id-cells", i,
- &masterspec)) {
- err = register_smmu_master(smmu, dev, &masterspec);
+
+ err = -ENOMEM;
+ /* No need to zero the memory for masterspec */
+ masterspec = kmalloc(sizeof(*masterspec), GFP_KERNEL);
+ if (!masterspec)
+ goto out_put_masters;
+
+ of_for_each_phandle(&it, err, dev->of_node,
+ "mmu-masters", "#stream-id-cells", 0) {
+ int count = of_phandle_iterator_args(&it, masterspec->args,
+ MAX_MASTER_STREAMIDS);
+ masterspec->np = of_node_get(it.node);
+ masterspec->args_count = count;
+
+ err = register_smmu_master(smmu, dev, masterspec);
if (err) {
dev_err(dev, "failed to add master %s\n",
- masterspec.np->name);
+ masterspec->np->name);
+ kfree(masterspec);
goto out_put_masters;
}
i++;
}
+
dev_notice(dev, "registered %d master devices\n", i);
+ kfree(masterspec);
+
parse_driver_options(smmu);
- if (smmu->version > ARM_SMMU_V1 &&
+ if (smmu->version == ARM_SMMU_V2 &&
smmu->num_context_banks != smmu->num_context_irqs) {
dev_err(dev,
"found only %d context interrupt(s) but %d required\n",
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 58f2fe687a24..ea5a9ebf0f78 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -94,7 +94,7 @@ int iommu_dma_init_domain(struct iommu_domain *domain, dma_addr_t base, u64 size
return -ENODEV;
/* Use the smallest supported page size for IOVA granularity */
- order = __ffs(domain->ops->pgsize_bitmap);
+ order = __ffs(domain->pgsize_bitmap);
base_pfn = max_t(unsigned long, 1, base >> order);
end_pfn = (base + size - 1) >> order;
@@ -190,11 +190,15 @@ static void __iommu_dma_free_pages(struct page **pages, int count)
kvfree(pages);
}
-static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp)
+static struct page **__iommu_dma_alloc_pages(unsigned int count,
+ unsigned long order_mask, gfp_t gfp)
{
struct page **pages;
unsigned int i = 0, array_size = count * sizeof(*pages);
- unsigned int order = MAX_ORDER;
+
+ order_mask &= (2U << MAX_ORDER) - 1;
+ if (!order_mask)
+ return NULL;
if (array_size <= PAGE_SIZE)
pages = kzalloc(array_size, GFP_KERNEL);
@@ -208,36 +212,38 @@ static struct page **__iommu_dma_alloc_pages(unsigned int count, gfp_t gfp)
while (count) {
struct page *page = NULL;
- int j;
+ unsigned int order_size;
/*
* Higher-order allocations are a convenience rather
* than a necessity, hence using __GFP_NORETRY until
- * falling back to single-page allocations.
+ * falling back to minimum-order allocations.
*/
- for (order = min_t(unsigned int, order, __fls(count));
- order > 0; order--) {
- page = alloc_pages(gfp | __GFP_NORETRY, order);
+ for (order_mask &= (2U << __fls(count)) - 1;
+ order_mask; order_mask &= ~order_size) {
+ unsigned int order = __fls(order_mask);
+
+ order_size = 1U << order;
+ page = alloc_pages((order_mask - order_size) ?
+ gfp | __GFP_NORETRY : gfp, order);
if (!page)
continue;
- if (PageCompound(page)) {
- if (!split_huge_page(page))
- break;
- __free_pages(page, order);
- } else {
+ if (!order)
+ break;
+ if (!PageCompound(page)) {
split_page(page, order);
break;
+ } else if (!split_huge_page(page)) {
+ break;
}
+ __free_pages(page, order);
}
- if (!page)
- page = alloc_page(gfp);
if (!page) {
__iommu_dma_free_pages(pages, i);
return NULL;
}
- j = 1 << order;
- count -= j;
- while (j--)
+ count -= order_size;
+ while (order_size--)
pages[i++] = page++;
}
return pages;
@@ -267,6 +273,7 @@ void iommu_dma_free(struct device *dev, struct page **pages, size_t size,
* attached to an iommu_dma_domain
* @size: Size of buffer in bytes
* @gfp: Allocation flags
+ * @attrs: DMA attributes for this allocation
* @prot: IOMMU mapping flags
* @handle: Out argument for allocated DMA handle
* @flush_page: Arch callback which must ensure PAGE_SIZE bytes from the
@@ -278,8 +285,8 @@ void iommu_dma_free(struct device *dev, struct page **pages, size_t size,
* Return: Array of struct page pointers describing the buffer,
* or NULL on failure.
*/
-struct page **iommu_dma_alloc(struct device *dev, size_t size,
- gfp_t gfp, int prot, dma_addr_t *handle,
+struct page **iommu_dma_alloc(struct device *dev, size_t size, gfp_t gfp,
+ struct dma_attrs *attrs, int prot, dma_addr_t *handle,
void (*flush_page)(struct device *, const void *, phys_addr_t))
{
struct iommu_domain *domain = iommu_get_domain_for_dev(dev);
@@ -288,11 +295,22 @@ struct page **iommu_dma_alloc(struct device *dev, size_t size,
struct page **pages;
struct sg_table sgt;
dma_addr_t dma_addr;
- unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+ unsigned int count, min_size, alloc_sizes = domain->pgsize_bitmap;
*handle = DMA_ERROR_CODE;
- pages = __iommu_dma_alloc_pages(count, gfp);
+ min_size = alloc_sizes & -alloc_sizes;
+ if (min_size < PAGE_SIZE) {
+ min_size = PAGE_SIZE;
+ alloc_sizes |= PAGE_SIZE;
+ } else {
+ size = ALIGN(size, min_size);
+ }
+ if (dma_get_attr(DMA_ATTR_ALLOC_SINGLE_PAGES, attrs))
+ alloc_sizes = min_size;
+
+ count = PAGE_ALIGN(size) >> PAGE_SHIFT;
+ pages = __iommu_dma_alloc_pages(count, alloc_sizes >> PAGE_SHIFT, gfp);
if (!pages)
return NULL;
@@ -389,26 +407,58 @@ void iommu_dma_unmap_page(struct device *dev, dma_addr_t handle, size_t size,
/*
* Prepare a successfully-mapped scatterlist to give back to the caller.
- * Handling IOVA concatenation can come later, if needed
+ *
+ * At this point the segments are already laid out by iommu_dma_map_sg() to
+ * avoid individually crossing any boundaries, so we merely need to check a
+ * segment's start address to avoid concatenating across one.
*/
static int __finalise_sg(struct device *dev, struct scatterlist *sg, int nents,
dma_addr_t dma_addr)
{
- struct scatterlist *s;
- int i;
+ struct scatterlist *s, *cur = sg;
+ unsigned long seg_mask = dma_get_seg_boundary(dev);
+ unsigned int cur_len = 0, max_len = dma_get_max_seg_size(dev);
+ int i, count = 0;
for_each_sg(sg, s, nents, i) {
- /* Un-swizzling the fields here, hence the naming mismatch */
- unsigned int s_offset = sg_dma_address(s);
+ /* Restore this segment's original unaligned fields first */
+ unsigned int s_iova_off = sg_dma_address(s);
unsigned int s_length = sg_dma_len(s);
- unsigned int s_dma_len = s->length;
+ unsigned int s_iova_len = s->length;
- s->offset += s_offset;
+ s->offset += s_iova_off;
s->length = s_length;
- sg_dma_address(s) = dma_addr + s_offset;
- dma_addr += s_dma_len;
+ sg_dma_address(s) = DMA_ERROR_CODE;
+ sg_dma_len(s) = 0;
+
+ /*
+ * Now fill in the real DMA data. If...
+ * - there is a valid output segment to append to
+ * - and this segment starts on an IOVA page boundary
+ * - but doesn't fall at a segment boundary
+ * - and wouldn't make the resulting output segment too long
+ */
+ if (cur_len && !s_iova_off && (dma_addr & seg_mask) &&
+ (cur_len + s_length <= max_len)) {
+ /* ...then concatenate it with the previous one */
+ cur_len += s_length;
+ } else {
+ /* Otherwise start the next output segment */
+ if (i > 0)
+ cur = sg_next(cur);
+ cur_len = s_length;
+ count++;
+
+ sg_dma_address(cur) = dma_addr + s_iova_off;
+ }
+
+ sg_dma_len(cur) = cur_len;
+ dma_addr += s_iova_len;
+
+ if (s_length + s_iova_off < s_iova_len)
+ cur_len = 0;
}
- return i;
+ return count;
}
/*
@@ -446,34 +496,40 @@ int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg,
struct scatterlist *s, *prev = NULL;
dma_addr_t dma_addr;
size_t iova_len = 0;
+ unsigned long mask = dma_get_seg_boundary(dev);
int i;
/*
* Work out how much IOVA space we need, and align the segments to
* IOVA granules for the IOMMU driver to handle. With some clever
* trickery we can modify the list in-place, but reversibly, by
- * hiding the original data in the as-yet-unused DMA fields.
+ * stashing the unaligned parts in the as-yet-unused DMA fields.
*/
for_each_sg(sg, s, nents, i) {
- size_t s_offset = iova_offset(iovad, s->offset);
+ size_t s_iova_off = iova_offset(iovad, s->offset);
size_t s_length = s->length;
+ size_t pad_len = (mask - iova_len + 1) & mask;
- sg_dma_address(s) = s_offset;
+ sg_dma_address(s) = s_iova_off;
sg_dma_len(s) = s_length;
- s->offset -= s_offset;
- s_length = iova_align(iovad, s_length + s_offset);
+ s->offset -= s_iova_off;
+ s_length = iova_align(iovad, s_length + s_iova_off);
s->length = s_length;
/*
- * The simple way to avoid the rare case of a segment
- * crossing the boundary mask is to pad the previous one
- * to end at a naturally-aligned IOVA for this one's size,
- * at the cost of potentially over-allocating a little.
+ * Due to the alignment of our single IOVA allocation, we can
+ * depend on these assumptions about the segment boundary mask:
+ * - If mask size >= IOVA size, then the IOVA range cannot
+ * possibly fall across a boundary, so we don't care.
+ * - If mask size < IOVA size, then the IOVA range must start
+ * exactly on a boundary, therefore we can lay things out
+ * based purely on segment lengths without needing to know
+ * the actual addresses beforehand.
+ * - The mask must be a power of 2, so pad_len == 0 if
+ * iova_len == 0, thus we cannot dereference prev the first
+ * time through here (i.e. before it has a meaningful value).
*/
- if (prev) {
- size_t pad_len = roundup_pow_of_two(s_length);
-
- pad_len = (pad_len - iova_len) & (pad_len - 1);
+ if (pad_len && pad_len < s_length - 1) {
prev->length += pad_len;
iova_len += pad_len;
}
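
The __iommu_dma_alloc_pages() rework above walks an order mask instead of a single order: it always tries the largest block that both the hardware supports and the remaining count can absorb, drops an order from the mask once it fails, and gives up only when the minimum order fails as well. A small user-space model of that loop, with a fake allocator standing in for alloc_pages() and arbitrary example values, could look like this.

#include <stdio.h>

/* stand-in for the kernel's __fls(): index of the highest set bit */
static unsigned int fls_idx(unsigned int x)
{
        unsigned int r = 0;

        while (x >>= 1)
                r++;
        return r;
}

/* pretend the system can satisfy nothing larger than this order */
static int fake_alloc_ok(unsigned int order, unsigned int max_ok_order)
{
        return order <= max_ok_order;
}

int main(void)
{
        unsigned int count = 21;                        /* pages still needed */
        unsigned int order_mask = (1u << 0) | (1u << 4);/* 4K and 64K "granules" */
        unsigned int max_ok_order = 4;

        while (count) {
                unsigned int got = 0;

                /* never consider a block bigger than what is left to allocate */
                order_mask &= (2u << fls_idx(count)) - 1;
                while (order_mask) {
                        unsigned int order = fls_idx(order_mask);
                        unsigned int order_size = 1u << order;

                        if (fake_alloc_ok(order, max_ok_order)) {
                                printf("allocated %2u page(s) at order %u\n",
                                       order_size, order);
                                got = order_size;
                                break;
                        }
                        order_mask &= ~order_size;      /* drop the failed order */
                }
                if (!got)
                        return 1;                       /* even order 0 failed */
                count -= got;
        }
        return 0;
}
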
diff --git a/drivers/iommu/dmar.c b/drivers/iommu/dmar.c
index 8ffd7568fc91..6a86b5d1defa 100644
--- a/drivers/iommu/dmar.c
+++ b/drivers/iommu/dmar.c
@@ -1579,18 +1579,14 @@ static int dmar_fault_do_one(struct intel_iommu *iommu, int type,
reason = dmar_get_fault_reason(fault_reason, &fault_type);
if (fault_type == INTR_REMAP)
- pr_err("INTR-REMAP: Request device [[%02x:%02x.%d] "
- "fault index %llx\n"
- "INTR-REMAP:[fault reason %02d] %s\n",
- (source_id >> 8), PCI_SLOT(source_id & 0xFF),
+ pr_err("[INTR-REMAP] Request device [%02x:%02x.%d] fault index %llx [fault reason %02d] %s\n",
+ source_id >> 8, PCI_SLOT(source_id & 0xFF),
PCI_FUNC(source_id & 0xFF), addr >> 48,
fault_reason, reason);
else
- pr_err("DMAR:[%s] Request device [%02x:%02x.%d] "
- "fault addr %llx \n"
- "DMAR:[fault reason %02d] %s\n",
- (type ? "DMA Read" : "DMA Write"),
- (source_id >> 8), PCI_SLOT(source_id & 0xFF),
+ pr_err("[%s] Request device [%02x:%02x.%d] fault addr %llx [fault reason %02d] %s\n",
+ type ? "DMA Read" : "DMA Write",
+ source_id >> 8, PCI_SLOT(source_id & 0xFF),
PCI_FUNC(source_id & 0xFF), addr, fault_reason, reason);
return 0;
}
@@ -1602,10 +1598,17 @@ irqreturn_t dmar_fault(int irq, void *dev_id)
int reg, fault_index;
u32 fault_status;
unsigned long flag;
+ bool ratelimited;
+ static DEFINE_RATELIMIT_STATE(rs,
+ DEFAULT_RATELIMIT_INTERVAL,
+ DEFAULT_RATELIMIT_BURST);
+
+ /* Disable printing, simply clear the fault when ratelimited */
+ ratelimited = !__ratelimit(&rs);
raw_spin_lock_irqsave(&iommu->register_lock, flag);
fault_status = readl(iommu->reg + DMAR_FSTS_REG);
- if (fault_status)
+ if (fault_status && !ratelimited)
pr_err("DRHD: handling fault status reg %x\n", fault_status);
/* TBD: ignore advanced fault log currently */
@@ -1627,24 +1630,28 @@ irqreturn_t dmar_fault(int irq, void *dev_id)
if (!(data & DMA_FRCD_F))
break;
- fault_reason = dma_frcd_fault_reason(data);
- type = dma_frcd_type(data);
+ if (!ratelimited) {
+ fault_reason = dma_frcd_fault_reason(data);
+ type = dma_frcd_type(data);
- data = readl(iommu->reg + reg +
- fault_index * PRIMARY_FAULT_REG_LEN + 8);
- source_id = dma_frcd_source_id(data);
+ data = readl(iommu->reg + reg +
+ fault_index * PRIMARY_FAULT_REG_LEN + 8);
+ source_id = dma_frcd_source_id(data);
+
+ guest_addr = dmar_readq(iommu->reg + reg +
+ fault_index * PRIMARY_FAULT_REG_LEN);
+ guest_addr = dma_frcd_page_addr(guest_addr);
+ }
- guest_addr = dmar_readq(iommu->reg + reg +
- fault_index * PRIMARY_FAULT_REG_LEN);
- guest_addr = dma_frcd_page_addr(guest_addr);
/* clear the fault */
writel(DMA_FRCD_F, iommu->reg + reg +
fault_index * PRIMARY_FAULT_REG_LEN + 12);
raw_spin_unlock_irqrestore(&iommu->register_lock, flag);
- dmar_fault_do_one(iommu, type, fault_reason,
- source_id, guest_addr);
+ if (!ratelimited)
+ dmar_fault_do_one(iommu, type, fault_reason,
+ source_id, guest_addr);
fault_index++;
if (fault_index >= cap_num_fault_regs(iommu->cap))
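
dmar_fault() above keeps clearing fault records even when printing is rate-limited, so the hardware fault queue cannot fill up silently. A toy user-space model of the __ratelimit() budget it relies on (burst reports per interval, with plain integers standing in for jiffies) behaves as follows.

#include <stdio.h>

struct ratelimit {
        int interval;   /* window length */
        int burst;      /* max reports per window */
        int begin;      /* start of the current window */
        int printed;    /* reports already made in it */
};

static int ratelimit_ok(struct ratelimit *rs, int now)
{
        if (now - rs->begin >= rs->interval) {  /* start a new window */
                rs->begin = now;
                rs->printed = 0;
        }
        if (rs->printed < rs->burst) {
                rs->printed++;
                return 1;
        }
        return 0;                               /* suppressed, like !__ratelimit() */
}

int main(void)
{
        struct ratelimit rs = { .interval = 5, .burst = 2 };
        int t;

        for (t = 0; t < 12; t++) {
                if (ratelimit_ok(&rs, t))
                        printf("t=%2d: DMAR fault reported\n", t);
                else
                        printf("t=%2d: fault cleared, report suppressed\n", t);
        }
        return 0;
}
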
diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index e1852e845d21..a644d0cec2d8 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -33,6 +33,7 @@
#include <linux/dma-mapping.h>
#include <linux/mempool.h>
#include <linux/memory.h>
+#include <linux/cpu.h>
#include <linux/timer.h>
#include <linux/io.h>
#include <linux/iova.h>
@@ -390,6 +391,7 @@ struct dmar_domain {
* domain ids are 16 bit wide according
* to VT-d spec, section 9.3 */
+ bool has_iotlb_device;
struct list_head devices; /* all devices' list */
struct iova_domain iovad; /* iova's that belong to this domain */
@@ -456,27 +458,32 @@ static LIST_HEAD(dmar_rmrr_units);
static void flush_unmaps_timeout(unsigned long data);
-static DEFINE_TIMER(unmap_timer, flush_unmaps_timeout, 0, 0);
+struct deferred_flush_entry {
+ unsigned long iova_pfn;
+ unsigned long nrpages;
+ struct dmar_domain *domain;
+ struct page *freelist;
+};
#define HIGH_WATER_MARK 250
-struct deferred_flush_tables {
+struct deferred_flush_table {
int next;
- struct iova *iova[HIGH_WATER_MARK];
- struct dmar_domain *domain[HIGH_WATER_MARK];
- struct page *freelist[HIGH_WATER_MARK];
+ struct deferred_flush_entry entries[HIGH_WATER_MARK];
+};
+
+struct deferred_flush_data {
+ spinlock_t lock;
+ int timer_on;
+ struct timer_list timer;
+ long size;
+ struct deferred_flush_table *tables;
};
-static struct deferred_flush_tables *deferred_flush;
+DEFINE_PER_CPU(struct deferred_flush_data, deferred_flush);
/* bitmap for indexing intel_iommus */
static int g_num_of_iommus;
-static DEFINE_SPINLOCK(async_umap_flush_lock);
-static LIST_HEAD(unmaps_to_do);
-
-static int timer_on;
-static long list_size;
-
static void domain_exit(struct dmar_domain *domain);
static void domain_remove_dev_info(struct dmar_domain *domain);
static void dmar_remove_one_dev_info(struct dmar_domain *domain,
@@ -1143,7 +1150,7 @@ next:
} while (!first_pte_in_page(++pte) && pfn <= last_pfn);
}
-/* free page table pages. last level pte should already be cleared */
+/* clear last level (leaf) ptes and free page table pages. */
static void dma_pte_free_pagetable(struct dmar_domain *domain,
unsigned long start_pfn,
unsigned long last_pfn)
@@ -1458,10 +1465,35 @@ iommu_support_dev_iotlb (struct dmar_domain *domain, struct intel_iommu *iommu,
return NULL;
}
+static void domain_update_iotlb(struct dmar_domain *domain)
+{
+ struct device_domain_info *info;
+ bool has_iotlb_device = false;
+
+ assert_spin_locked(&device_domain_lock);
+
+ list_for_each_entry(info, &domain->devices, link) {
+ struct pci_dev *pdev;
+
+ if (!info->dev || !dev_is_pci(info->dev))
+ continue;
+
+ pdev = to_pci_dev(info->dev);
+ if (pdev->ats_enabled) {
+ has_iotlb_device = true;
+ break;
+ }
+ }
+
+ domain->has_iotlb_device = has_iotlb_device;
+}
+
static void iommu_enable_dev_iotlb(struct device_domain_info *info)
{
struct pci_dev *pdev;
+ assert_spin_locked(&device_domain_lock);
+
if (!info || !dev_is_pci(info->dev))
return;
@@ -1481,6 +1513,7 @@ static void iommu_enable_dev_iotlb(struct device_domain_info *info)
#endif
if (info->ats_supported && !pci_enable_ats(pdev, VTD_PAGE_SHIFT)) {
info->ats_enabled = 1;
+ domain_update_iotlb(info->domain);
info->ats_qdep = pci_ats_queue_depth(pdev);
}
}
@@ -1489,6 +1522,8 @@ static void iommu_disable_dev_iotlb(struct device_domain_info *info)
{
struct pci_dev *pdev;
+ assert_spin_locked(&device_domain_lock);
+
if (!dev_is_pci(info->dev))
return;
@@ -1497,6 +1532,7 @@ static void iommu_disable_dev_iotlb(struct device_domain_info *info)
if (info->ats_enabled) {
pci_disable_ats(pdev);
info->ats_enabled = 0;
+ domain_update_iotlb(info->domain);
}
#ifdef CONFIG_INTEL_IOMMU_SVM
if (info->pri_enabled) {
@@ -1517,6 +1553,9 @@ static void iommu_flush_dev_iotlb(struct dmar_domain *domain,
unsigned long flags;
struct device_domain_info *info;
+ if (!domain->has_iotlb_device)
+ return;
+
spin_lock_irqsave(&device_domain_lock, flags);
list_for_each_entry(info, &domain->devices, link) {
if (!info->ats_enabled)
@@ -1734,6 +1773,7 @@ static struct dmar_domain *alloc_domain(int flags)
memset(domain, 0, sizeof(*domain));
domain->nid = -1;
domain->flags = flags;
+ domain->has_iotlb_device = false;
INIT_LIST_HEAD(&domain->devices);
return domain;
@@ -1918,8 +1958,12 @@ static void domain_exit(struct dmar_domain *domain)
return;
/* Flush any lazy unmaps that may reference this domain */
- if (!intel_iommu_strict)
- flush_unmaps_timeout(0);
+ if (!intel_iommu_strict) {
+ int cpu;
+
+ for_each_possible_cpu(cpu)
+ flush_unmaps_timeout(cpu);
+ }
/* Remove associated devices and clear attached or cached domains */
rcu_read_lock();
@@ -3077,7 +3121,7 @@ static int __init init_dmars(void)
bool copied_tables = false;
struct device *dev;
struct intel_iommu *iommu;
- int i, ret;
+ int i, ret, cpu;
/*
* for each drhd
@@ -3110,11 +3154,20 @@ static int __init init_dmars(void)
goto error;
}
- deferred_flush = kzalloc(g_num_of_iommus *
- sizeof(struct deferred_flush_tables), GFP_KERNEL);
- if (!deferred_flush) {
- ret = -ENOMEM;
- goto free_g_iommus;
+ for_each_possible_cpu(cpu) {
+ struct deferred_flush_data *dfd = per_cpu_ptr(&deferred_flush,
+ cpu);
+
+ dfd->tables = kzalloc(g_num_of_iommus *
+ sizeof(struct deferred_flush_table),
+ GFP_KERNEL);
+ if (!dfd->tables) {
+ ret = -ENOMEM;
+ goto free_g_iommus;
+ }
+
+ spin_lock_init(&dfd->lock);
+ setup_timer(&dfd->timer, flush_unmaps_timeout, cpu);
}
for_each_active_iommu(iommu, drhd) {
@@ -3291,19 +3344,20 @@ free_iommu:
disable_dmar_iommu(iommu);
free_dmar_iommu(iommu);
}
- kfree(deferred_flush);
free_g_iommus:
+ for_each_possible_cpu(cpu)
+ kfree(per_cpu_ptr(&deferred_flush, cpu)->tables);
kfree(g_iommus);
error:
return ret;
}
/* This takes a number of _MM_ pages, not VTD pages */
-static struct iova *intel_alloc_iova(struct device *dev,
+static unsigned long intel_alloc_iova(struct device *dev,
struct dmar_domain *domain,
unsigned long nrpages, uint64_t dma_mask)
{
- struct iova *iova = NULL;
+ unsigned long iova_pfn = 0;
/* Restrict dma_mask to the width that the iommu can handle */
dma_mask = min_t(uint64_t, DOMAIN_MAX_ADDR(domain->gaw), dma_mask);
@@ -3316,19 +3370,19 @@ static struct iova *intel_alloc_iova(struct device *dev,
* DMA_BIT_MASK(32) and if that fails then try allocating
* from higher range
*/
- iova = alloc_iova(&domain->iovad, nrpages,
- IOVA_PFN(DMA_BIT_MASK(32)), 1);
- if (iova)
- return iova;
+ iova_pfn = alloc_iova_fast(&domain->iovad, nrpages,
+ IOVA_PFN(DMA_BIT_MASK(32)));
+ if (iova_pfn)
+ return iova_pfn;
}
- iova = alloc_iova(&domain->iovad, nrpages, IOVA_PFN(dma_mask), 1);
- if (unlikely(!iova)) {
+ iova_pfn = alloc_iova_fast(&domain->iovad, nrpages, IOVA_PFN(dma_mask));
+ if (unlikely(!iova_pfn)) {
pr_err("Allocating %ld-page iova for %s failed",
nrpages, dev_name(dev));
- return NULL;
+ return 0;
}
- return iova;
+ return iova_pfn;
}
static struct dmar_domain *__get_valid_domain_for_dev(struct device *dev)
@@ -3426,7 +3480,7 @@ static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr,
{
struct dmar_domain *domain;
phys_addr_t start_paddr;
- struct iova *iova;
+ unsigned long iova_pfn;
int prot = 0;
int ret;
struct intel_iommu *iommu;
@@ -3444,8 +3498,8 @@ static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr,
iommu = domain_get_iommu(domain);
size = aligned_nrpages(paddr, size);
- iova = intel_alloc_iova(dev, domain, dma_to_mm_pfn(size), dma_mask);
- if (!iova)
+ iova_pfn = intel_alloc_iova(dev, domain, dma_to_mm_pfn(size), dma_mask);
+ if (!iova_pfn)
goto error;
/*
@@ -3463,7 +3517,7 @@ static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr,
* might have two guest_addr mapping to the same host paddr, but this
* is not a big problem
*/
- ret = domain_pfn_mapping(domain, mm_to_dma_pfn(iova->pfn_lo),
+ ret = domain_pfn_mapping(domain, mm_to_dma_pfn(iova_pfn),
mm_to_dma_pfn(paddr_pfn), size, prot);
if (ret)
goto error;
@@ -3471,18 +3525,18 @@ static dma_addr_t __intel_map_single(struct device *dev, phys_addr_t paddr,
/* it's a non-present to present mapping. Only flush if caching mode */
if (cap_caching_mode(iommu->cap))
iommu_flush_iotlb_psi(iommu, domain,
- mm_to_dma_pfn(iova->pfn_lo),
+ mm_to_dma_pfn(iova_pfn),
size, 0, 1);
else
iommu_flush_write_buffer(iommu);
- start_paddr = (phys_addr_t)iova->pfn_lo << PAGE_SHIFT;
+ start_paddr = (phys_addr_t)iova_pfn << PAGE_SHIFT;
start_paddr += paddr & ~PAGE_MASK;
return start_paddr;
error:
- if (iova)
- __free_iova(&domain->iovad, iova);
+ if (iova_pfn)
+ free_iova_fast(&domain->iovad, iova_pfn, dma_to_mm_pfn(size));
pr_err("Device %s request: %zx@%llx dir %d --- failed\n",
dev_name(dev), size, (unsigned long long)paddr, dir);
return 0;
@@ -3497,91 +3551,120 @@ static dma_addr_t intel_map_page(struct device *dev, struct page *page,
dir, *dev->dma_mask);
}
-static void flush_unmaps(void)
+static void flush_unmaps(struct deferred_flush_data *flush_data)
{
int i, j;
- timer_on = 0;
+ flush_data->timer_on = 0;
/* just flush them all */
for (i = 0; i < g_num_of_iommus; i++) {
struct intel_iommu *iommu = g_iommus[i];
+ struct deferred_flush_table *flush_table =
+ &flush_data->tables[i];
if (!iommu)
continue;
- if (!deferred_flush[i].next)
+ if (!flush_table->next)
continue;
/* In caching mode, global flushes turn emulation expensive */
if (!cap_caching_mode(iommu->cap))
iommu->flush.flush_iotlb(iommu, 0, 0, 0,
DMA_TLB_GLOBAL_FLUSH);
- for (j = 0; j < deferred_flush[i].next; j++) {
+ for (j = 0; j < flush_table->next; j++) {
unsigned long mask;
- struct iova *iova = deferred_flush[i].iova[j];
- struct dmar_domain *domain = deferred_flush[i].domain[j];
+ struct deferred_flush_entry *entry =
+ &flush_table->entries[j];
+ unsigned long iova_pfn = entry->iova_pfn;
+ unsigned long nrpages = entry->nrpages;
+ struct dmar_domain *domain = entry->domain;
+ struct page *freelist = entry->freelist;
/* On real hardware multiple invalidations are expensive */
if (cap_caching_mode(iommu->cap))
iommu_flush_iotlb_psi(iommu, domain,
- iova->pfn_lo, iova_size(iova),
- !deferred_flush[i].freelist[j], 0);
+ mm_to_dma_pfn(iova_pfn),
+ nrpages, !freelist, 0);
else {
- mask = ilog2(mm_to_dma_pfn(iova_size(iova)));
- iommu_flush_dev_iotlb(deferred_flush[i].domain[j],
- (uint64_t)iova->pfn_lo << PAGE_SHIFT, mask);
+ mask = ilog2(nrpages);
+ iommu_flush_dev_iotlb(domain,
+ (uint64_t)iova_pfn << PAGE_SHIFT, mask);
}
- __free_iova(&deferred_flush[i].domain[j]->iovad, iova);
- if (deferred_flush[i].freelist[j])
- dma_free_pagelist(deferred_flush[i].freelist[j]);
+ free_iova_fast(&domain->iovad, iova_pfn, nrpages);
+ if (freelist)
+ dma_free_pagelist(freelist);
}
- deferred_flush[i].next = 0;
+ flush_table->next = 0;
}
- list_size = 0;
+ flush_data->size = 0;
}
-static void flush_unmaps_timeout(unsigned long data)
+static void flush_unmaps_timeout(unsigned long cpuid)
{
+ struct deferred_flush_data *flush_data = per_cpu_ptr(&deferred_flush, cpuid);
unsigned long flags;
- spin_lock_irqsave(&async_umap_flush_lock, flags);
- flush_unmaps();
- spin_unlock_irqrestore(&async_umap_flush_lock, flags);
+ spin_lock_irqsave(&flush_data->lock, flags);
+ flush_unmaps(flush_data);
+ spin_unlock_irqrestore(&flush_data->lock, flags);
}
-static void add_unmap(struct dmar_domain *dom, struct iova *iova, struct page *freelist)
+static void add_unmap(struct dmar_domain *dom, unsigned long iova_pfn,
+ unsigned long nrpages, struct page *freelist)
{
unsigned long flags;
- int next, iommu_id;
+ int entry_id, iommu_id;
struct intel_iommu *iommu;
+ struct deferred_flush_entry *entry;
+ struct deferred_flush_data *flush_data;
+ unsigned int cpuid;
- spin_lock_irqsave(&async_umap_flush_lock, flags);
- if (list_size == HIGH_WATER_MARK)
- flush_unmaps();
+ cpuid = get_cpu();
+ flush_data = per_cpu_ptr(&deferred_flush, cpuid);
+
+ /* Flush all CPUs' entries to avoid deferring too much. If
+ * this becomes a bottleneck, can just flush us, and rely on
+ * flush timer for the rest.
+ */
+ if (flush_data->size == HIGH_WATER_MARK) {
+ int cpu;
+
+ for_each_online_cpu(cpu)
+ flush_unmaps_timeout(cpu);
+ }
+
+ spin_lock_irqsave(&flush_data->lock, flags);
iommu = domain_get_iommu(dom);
iommu_id = iommu->seq_id;
- next = deferred_flush[iommu_id].next;
- deferred_flush[iommu_id].domain[next] = dom;
- deferred_flush[iommu_id].iova[next] = iova;
- deferred_flush[iommu_id].freelist[next] = freelist;
- deferred_flush[iommu_id].next++;
+ entry_id = flush_data->tables[iommu_id].next;
+ ++(flush_data->tables[iommu_id].next);
- if (!timer_on) {
- mod_timer(&unmap_timer, jiffies + msecs_to_jiffies(10));
- timer_on = 1;
+ entry = &flush_data->tables[iommu_id].entries[entry_id];
+ entry->domain = dom;
+ entry->iova_pfn = iova_pfn;
+ entry->nrpages = nrpages;
+ entry->freelist = freelist;
+
+ if (!flush_data->timer_on) {
+ mod_timer(&flush_data->timer, jiffies + msecs_to_jiffies(10));
+ flush_data->timer_on = 1;
}
- list_size++;
- spin_unlock_irqrestore(&async_umap_flush_lock, flags);
+ flush_data->size++;
+ spin_unlock_irqrestore(&flush_data->lock, flags);
+
+ put_cpu();
}
-static void intel_unmap(struct device *dev, dma_addr_t dev_addr)
+static void intel_unmap(struct device *dev, dma_addr_t dev_addr, size_t size)
{
struct dmar_domain *domain;
unsigned long start_pfn, last_pfn;
- struct iova *iova;
+ unsigned long nrpages;
+ unsigned long iova_pfn;
struct intel_iommu *iommu;
struct page *freelist;
@@ -3593,13 +3676,11 @@ static void intel_unmap(struct device *dev, dma_addr_t dev_addr)
iommu = domain_get_iommu(domain);
- iova = find_iova(&domain->iovad, IOVA_PFN(dev_addr));
- if (WARN_ONCE(!iova, "Driver unmaps unmatched page at PFN %llx\n",
- (unsigned long long)dev_addr))
- return;
+ iova_pfn = IOVA_PFN(dev_addr);
- start_pfn = mm_to_dma_pfn(iova->pfn_lo);
- last_pfn = mm_to_dma_pfn(iova->pfn_hi + 1) - 1;
+ nrpages = aligned_nrpages(dev_addr, size);
+ start_pfn = mm_to_dma_pfn(iova_pfn);
+ last_pfn = start_pfn + nrpages - 1;
pr_debug("Device %s unmapping: pfn %lx-%lx\n",
dev_name(dev), start_pfn, last_pfn);
@@ -3608,12 +3689,12 @@ static void intel_unmap(struct device *dev, dma_addr_t dev_addr)
if (intel_iommu_strict) {
iommu_flush_iotlb_psi(iommu, domain, start_pfn,
- last_pfn - start_pfn + 1, !freelist, 0);
+ nrpages, !freelist, 0);
/* free iova */
- __free_iova(&domain->iovad, iova);
+ free_iova_fast(&domain->iovad, iova_pfn, dma_to_mm_pfn(nrpages));
dma_free_pagelist(freelist);
} else {
- add_unmap(domain, iova, freelist);
+ add_unmap(domain, iova_pfn, nrpages, freelist);
/*
* queue up the release of the unmap to save the 1/6th of the
* cpu used up by the iotlb flush operation...
@@ -3625,7 +3706,7 @@ static void intel_unmap_page(struct device *dev, dma_addr_t dev_addr,
size_t size, enum dma_data_direction dir,
struct dma_attrs *attrs)
{
- intel_unmap(dev, dev_addr);
+ intel_unmap(dev, dev_addr, size);
}
static void *intel_alloc_coherent(struct device *dev, size_t size,
@@ -3684,7 +3765,7 @@ static void intel_free_coherent(struct device *dev, size_t size, void *vaddr,
size = PAGE_ALIGN(size);
order = get_order(size);
- intel_unmap(dev, dma_handle);
+ intel_unmap(dev, dma_handle, size);
if (!dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT))
__free_pages(page, order);
}
@@ -3693,7 +3774,16 @@ static void intel_unmap_sg(struct device *dev, struct scatterlist *sglist,
int nelems, enum dma_data_direction dir,
struct dma_attrs *attrs)
{
- intel_unmap(dev, sglist[0].dma_address);
+ dma_addr_t startaddr = sg_dma_address(sglist) & PAGE_MASK;
+ unsigned long nrpages = 0;
+ struct scatterlist *sg;
+ int i;
+
+ for_each_sg(sglist, sg, nelems, i) {
+ nrpages += aligned_nrpages(sg_dma_address(sg), sg_dma_len(sg));
+ }
+
+ intel_unmap(dev, startaddr, nrpages << VTD_PAGE_SHIFT);
}
static int intel_nontranslate_map_sg(struct device *hddev,
@@ -3717,7 +3807,7 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nele
struct dmar_domain *domain;
size_t size = 0;
int prot = 0;
- struct iova *iova = NULL;
+ unsigned long iova_pfn;
int ret;
struct scatterlist *sg;
unsigned long start_vpfn;
@@ -3736,9 +3826,9 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nele
for_each_sg(sglist, sg, nelems, i)
size += aligned_nrpages(sg->offset, sg->length);
- iova = intel_alloc_iova(dev, domain, dma_to_mm_pfn(size),
+ iova_pfn = intel_alloc_iova(dev, domain, dma_to_mm_pfn(size),
*dev->dma_mask);
- if (!iova) {
+ if (!iova_pfn) {
sglist->dma_length = 0;
return 0;
}
@@ -3753,13 +3843,13 @@ static int intel_map_sg(struct device *dev, struct scatterlist *sglist, int nele
if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
prot |= DMA_PTE_WRITE;
- start_vpfn = mm_to_dma_pfn(iova->pfn_lo);
+ start_vpfn = mm_to_dma_pfn(iova_pfn);
ret = domain_sg_mapping(domain, start_vpfn, sglist, size, prot);
if (unlikely(ret)) {
dma_pte_free_pagetable(domain, start_vpfn,
start_vpfn + size - 1);
- __free_iova(&domain->iovad, iova);
+ free_iova_fast(&domain->iovad, iova_pfn, dma_to_mm_pfn(size));
return 0;
}
@@ -4498,6 +4588,46 @@ static struct notifier_block intel_iommu_memory_nb = {
.priority = 0
};
+static void free_all_cpu_cached_iovas(unsigned int cpu)
+{
+ int i;
+
+ for (i = 0; i < g_num_of_iommus; i++) {
+ struct intel_iommu *iommu = g_iommus[i];
+ struct dmar_domain *domain;
+ u16 did;
+
+ if (!iommu)
+ continue;
+
+ for (did = 0; did < 0xffff; did++) {
+ domain = get_iommu_domain(iommu, did);
+
+ if (!domain)
+ continue;
+ free_cpu_cached_iovas(cpu, &domain->iovad);
+ }
+ }
+}
+
+static int intel_iommu_cpu_notifier(struct notifier_block *nfb,
+ unsigned long action, void *v)
+{
+ unsigned int cpu = (unsigned long)v;
+
+ switch (action) {
+ case CPU_DEAD:
+ case CPU_DEAD_FROZEN:
+ free_all_cpu_cached_iovas(cpu);
+ flush_unmaps_timeout(cpu);
+ break;
+ }
+ return NOTIFY_OK;
+}
+
+static struct notifier_block intel_iommu_cpu_nb = {
+ .notifier_call = intel_iommu_cpu_notifier,
+};
static ssize_t intel_iommu_show_version(struct device *dev,
struct device_attribute *attr,
@@ -4631,7 +4761,6 @@ int __init intel_iommu_init(void)
up_write(&dmar_global_lock);
pr_info("Intel(R) Virtualization Technology for Directed I/O\n");
- init_timer(&unmap_timer);
#ifdef CONFIG_SWIOTLB
swiotlb = 0;
#endif
@@ -4648,6 +4777,7 @@ int __init intel_iommu_init(void)
bus_register_notifier(&pci_bus_type, &device_nb);
if (si_domain && !hw_pass_through)
register_memory_notifier(&intel_iommu_memory_nb);
+ register_hotcpu_notifier(&intel_iommu_cpu_nb);
intel_iommu_enabled = 1;
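
The intel-iommu changes above turn the single global deferred-unmap list into per-CPU tables that are drained when any CPU's batch reaches HIGH_WATER_MARK or its timer fires. The sketch below is a stripped-down user-space model of just that batching; the timer, per-IOMMU indexing, locking and the tiny NR_CPUS/HIGH_WATER_MARK values are simplifications, not the driver's.

#include <stdio.h>

#define NR_CPUS         4
#define HIGH_WATER_MARK 3       /* tiny, so the example flushes quickly */

struct entry { unsigned long iova_pfn, nrpages; };

struct flush_data {
        int size;
        struct entry entries[HIGH_WATER_MARK];
};

static struct flush_data deferred[NR_CPUS];

static void flush_unmaps(int cpu)
{
        struct flush_data *fd = &deferred[cpu];
        int i;

        for (i = 0; i < fd->size; i++)
                printf("cpu%d: flush IOTLB for pfn %lu (+%lu pages)\n",
                       cpu, fd->entries[i].iova_pfn, fd->entries[i].nrpages);
        fd->size = 0;
}

static void add_unmap(int cpu, unsigned long iova_pfn, unsigned long nrpages)
{
        struct flush_data *fd = &deferred[cpu];

        if (fd->size == HIGH_WATER_MARK) {      /* flush everyone, as above */
                int c;

                for (c = 0; c < NR_CPUS; c++)
                        flush_unmaps(c);
        }
        fd->entries[fd->size].iova_pfn = iova_pfn;
        fd->entries[fd->size].nrpages = nrpages;
        fd->size++;
}

int main(void)
{
        unsigned long pfn;

        for (pfn = 0; pfn < 8; pfn++)
                add_unmap((int)pfn % 2, pfn * 16, 16);
        flush_unmaps(0);
        flush_unmaps(1);
        return 0;
}
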
diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
index 9488e3c97bcb..8c6139986d7d 100644
--- a/drivers/iommu/io-pgtable-arm-v7s.c
+++ b/drivers/iommu/io-pgtable-arm-v7s.c
@@ -121,6 +121,8 @@
#define ARM_V7S_TEX_MASK 0x7
#define ARM_V7S_ATTR_TEX(val) (((val) & ARM_V7S_TEX_MASK) << ARM_V7S_TEX_SHIFT)
+#define ARM_V7S_ATTR_MTK_4GB BIT(9) /* MTK extend it for 4GB mode */
+
/* *well, except for TEX on level 2 large pages, of course :( */
#define ARM_V7S_CONT_PAGE_TEX_SHIFT 6
#define ARM_V7S_CONT_PAGE_TEX_MASK (ARM_V7S_TEX_MASK << ARM_V7S_CONT_PAGE_TEX_SHIFT)
@@ -258,9 +260,10 @@ static arm_v7s_iopte arm_v7s_prot_to_pte(int prot, int lvl,
struct io_pgtable_cfg *cfg)
{
bool ap = !(cfg->quirks & IO_PGTABLE_QUIRK_NO_PERMS);
- arm_v7s_iopte pte = ARM_V7S_ATTR_NG | ARM_V7S_ATTR_S |
- ARM_V7S_ATTR_TEX(1);
+ arm_v7s_iopte pte = ARM_V7S_ATTR_NG | ARM_V7S_ATTR_S;
+ if (!(prot & IOMMU_MMIO))
+ pte |= ARM_V7S_ATTR_TEX(1);
if (ap) {
pte |= ARM_V7S_PTE_AF | ARM_V7S_PTE_AP_UNPRIV;
if (!(prot & IOMMU_WRITE))
@@ -270,7 +273,9 @@ static arm_v7s_iopte arm_v7s_prot_to_pte(int prot, int lvl,
if ((prot & IOMMU_NOEXEC) && ap)
pte |= ARM_V7S_ATTR_XN(lvl);
- if (prot & IOMMU_CACHE)
+ if (prot & IOMMU_MMIO)
+ pte |= ARM_V7S_ATTR_B;
+ else if (prot & IOMMU_CACHE)
pte |= ARM_V7S_ATTR_B | ARM_V7S_ATTR_C;
return pte;
@@ -279,10 +284,13 @@ static arm_v7s_iopte arm_v7s_prot_to_pte(int prot, int lvl,
static int arm_v7s_pte_to_prot(arm_v7s_iopte pte, int lvl)
{
int prot = IOMMU_READ;
+ arm_v7s_iopte attr = pte >> ARM_V7S_ATTR_SHIFT(lvl);
- if (pte & (ARM_V7S_PTE_AP_RDONLY << ARM_V7S_ATTR_SHIFT(lvl)))
+ if (attr & ARM_V7S_PTE_AP_RDONLY)
prot |= IOMMU_WRITE;
- if (pte & ARM_V7S_ATTR_C)
+ if ((attr & (ARM_V7S_TEX_MASK << ARM_V7S_TEX_SHIFT)) == 0)
+ prot |= IOMMU_MMIO;
+ else if (pte & ARM_V7S_ATTR_C)
prot |= IOMMU_CACHE;
return prot;
@@ -364,6 +372,9 @@ static int arm_v7s_init_pte(struct arm_v7s_io_pgtable *data,
if (lvl == 1 && (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS))
pte |= ARM_V7S_ATTR_NS_SECTION;
+ if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_4GB)
+ pte |= ARM_V7S_ATTR_MTK_4GB;
+
if (num_entries > 1)
pte = arm_v7s_pte_to_cont(pte, lvl);
@@ -625,9 +636,15 @@ static struct io_pgtable *arm_v7s_alloc_pgtable(struct io_pgtable_cfg *cfg,
if (cfg->quirks & ~(IO_PGTABLE_QUIRK_ARM_NS |
IO_PGTABLE_QUIRK_NO_PERMS |
- IO_PGTABLE_QUIRK_TLBI_ON_MAP))
+ IO_PGTABLE_QUIRK_TLBI_ON_MAP |
+ IO_PGTABLE_QUIRK_ARM_MTK_4GB))
return NULL;
+ /* If ARM_MTK_4GB is enabled, the NO_PERMS is also expected. */
+ if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_MTK_4GB &&
+ !(cfg->quirks & IO_PGTABLE_QUIRK_NO_PERMS))
+ return NULL;
+
data = kmalloc(sizeof(*data), GFP_KERNEL);
if (!data)
return NULL;
diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index f433b516098a..a1ed1b73fed4 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -355,7 +355,10 @@ static arm_lpae_iopte arm_lpae_prot_to_pte(struct arm_lpae_io_pgtable *data,
if (!(prot & IOMMU_WRITE) && (prot & IOMMU_READ))
pte |= ARM_LPAE_PTE_AP_RDONLY;
- if (prot & IOMMU_CACHE)
+ if (prot & IOMMU_MMIO)
+ pte |= (ARM_LPAE_MAIR_ATTR_IDX_DEV
+ << ARM_LPAE_PTE_ATTRINDX_SHIFT);
+ else if (prot & IOMMU_CACHE)
pte |= (ARM_LPAE_MAIR_ATTR_IDX_CACHE
<< ARM_LPAE_PTE_ATTRINDX_SHIFT);
} else {
@@ -364,7 +367,9 @@ static arm_lpae_iopte arm_lpae_prot_to_pte(struct arm_lpae_io_pgtable *data,
pte |= ARM_LPAE_PTE_HAP_READ;
if (prot & IOMMU_WRITE)
pte |= ARM_LPAE_PTE_HAP_WRITE;
- if (prot & IOMMU_CACHE)
+ if (prot & IOMMU_MMIO)
+ pte |= ARM_LPAE_PTE_MEMATTR_DEV;
+ else if (prot & IOMMU_CACHE)
pte |= ARM_LPAE_PTE_MEMATTR_OIWB;
else
pte |= ARM_LPAE_PTE_MEMATTR_NC;
diff --git a/drivers/iommu/io-pgtable.c b/drivers/iommu/io-pgtable.c
index 876f6a76d288..127558d83667 100644
--- a/drivers/iommu/io-pgtable.c
+++ b/drivers/iommu/io-pgtable.c
@@ -25,8 +25,7 @@
#include "io-pgtable.h"
static const struct io_pgtable_init_fns *
-io_pgtable_init_table[IO_PGTABLE_NUM_FMTS] =
-{
+io_pgtable_init_table[IO_PGTABLE_NUM_FMTS] = {
#ifdef CONFIG_IOMMU_IO_PGTABLE_LPAE
[ARM_32_LPAE_S1] = &io_pgtable_arm_32_lpae_s1_init_fns,
[ARM_32_LPAE_S2] = &io_pgtable_arm_32_lpae_s2_init_fns,
diff --git a/drivers/iommu/io-pgtable.h b/drivers/iommu/io-pgtable.h
index d4f502742e3b..969d82cc92ca 100644
--- a/drivers/iommu/io-pgtable.h
+++ b/drivers/iommu/io-pgtable.h
@@ -60,10 +60,16 @@ struct io_pgtable_cfg {
* IO_PGTABLE_QUIRK_TLBI_ON_MAP: If the format forbids caching invalid
* (unmapped) entries but the hardware might do so anyway, perform
* TLB maintenance when mapping as well as when unmapping.
+ *
+ * IO_PGTABLE_QUIRK_ARM_MTK_4GB: (ARM v7s format) Set bit 9 in all
+ * PTEs, for Mediatek IOMMUs which treat it as a 33rd address bit
+ * when the SoC is in "4GB mode" and they can only access the high
+ * remap of DRAM (0x1_00000000 to 0x1_ffffffff).
*/
#define IO_PGTABLE_QUIRK_ARM_NS BIT(0)
#define IO_PGTABLE_QUIRK_NO_PERMS BIT(1)
#define IO_PGTABLE_QUIRK_TLBI_ON_MAP BIT(2)
+ #define IO_PGTABLE_QUIRK_ARM_MTK_4GB BIT(3)
unsigned long quirks;
unsigned long pgsize_bitmap;
unsigned int ias;
diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
index b9df1411c894..3000051f48b4 100644
--- a/drivers/iommu/iommu.c
+++ b/drivers/iommu/iommu.c
@@ -337,9 +337,9 @@ static int iommu_group_create_direct_mappings(struct iommu_group *group,
if (!domain || domain->type != IOMMU_DOMAIN_DMA)
return 0;
- BUG_ON(!domain->ops->pgsize_bitmap);
+ BUG_ON(!domain->pgsize_bitmap);
- pg_size = 1UL << __ffs(domain->ops->pgsize_bitmap);
+ pg_size = 1UL << __ffs(domain->pgsize_bitmap);
INIT_LIST_HEAD(&mappings);
iommu_get_dm_regions(dev, &mappings);
@@ -660,8 +660,8 @@ static struct iommu_group *get_pci_function_alias_group(struct pci_dev *pdev,
}
/*
- * Look for aliases to or from the given device for exisiting groups. The
- * dma_alias_devfn only supports aliases on the same bus, therefore the search
+ * Look for aliases to or from the given device for existing groups. DMA
+ * aliases are only supported on the same bus, therefore the search
* space is quite small (especially since we're really only looking at pcie
* device, and therefore only expect multiple slots on the root complex or
* downstream switch ports). It's conceivable though that a pair of
@@ -686,11 +686,7 @@ static struct iommu_group *get_pci_alias_group(struct pci_dev *pdev,
continue;
/* We alias them or they alias us */
- if (((pdev->dev_flags & PCI_DEV_FLAGS_DMA_ALIAS_DEVFN) &&
- pdev->dma_alias_devfn == tmp->devfn) ||
- ((tmp->dev_flags & PCI_DEV_FLAGS_DMA_ALIAS_DEVFN) &&
- tmp->dma_alias_devfn == pdev->devfn)) {
-
+ if (pci_devs_are_dma_aliases(pdev, tmp)) {
group = get_pci_alias_group(tmp, devfns);
if (group) {
pci_dev_put(tmp);
@@ -1073,6 +1069,8 @@ static struct iommu_domain *__iommu_domain_alloc(struct bus_type *bus,
domain->ops = bus->iommu_ops;
domain->type = type;
+ /* Assume all sizes by default; the driver may override this later */
+ domain->pgsize_bitmap = bus->iommu_ops->pgsize_bitmap;
return domain;
}
@@ -1297,7 +1295,7 @@ static size_t iommu_pgsize(struct iommu_domain *domain,
pgsize = (1UL << (pgsize_idx + 1)) - 1;
/* throw away page sizes not supported by the hardware */
- pgsize &= domain->ops->pgsize_bitmap;
+ pgsize &= domain->pgsize_bitmap;
/* make sure we're still sane */
BUG_ON(!pgsize);
@@ -1319,14 +1317,14 @@ int iommu_map(struct iommu_domain *domain, unsigned long iova,
int ret = 0;
if (unlikely(domain->ops->map == NULL ||
- domain->ops->pgsize_bitmap == 0UL))
+ domain->pgsize_bitmap == 0UL))
return -ENODEV;
if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
return -EINVAL;
/* find out the minimum page size supported */
- min_pagesz = 1 << __ffs(domain->ops->pgsize_bitmap);
+ min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
/*
* both the virtual address and the physical one, as well as
@@ -1373,14 +1371,14 @@ size_t iommu_unmap(struct iommu_domain *domain, unsigned long iova, size_t size)
unsigned long orig_iova = iova;
if (unlikely(domain->ops->unmap == NULL ||
- domain->ops->pgsize_bitmap == 0UL))
+ domain->pgsize_bitmap == 0UL))
return -ENODEV;
if (unlikely(!(domain->type & __IOMMU_DOMAIN_PAGING)))
return -EINVAL;
/* find out the minimum page size supported */
- min_pagesz = 1 << __ffs(domain->ops->pgsize_bitmap);
+ min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
/*
* The virtual address, as well as the size of the mapping, must be
@@ -1426,10 +1424,10 @@ size_t default_iommu_map_sg(struct iommu_domain *domain, unsigned long iova,
unsigned int i, min_pagesz;
int ret;
- if (unlikely(domain->ops->pgsize_bitmap == 0UL))
+ if (unlikely(domain->pgsize_bitmap == 0UL))
return 0;
- min_pagesz = 1 << __ffs(domain->ops->pgsize_bitmap);
+ min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
for_each_sg(sg, s, nents, i) {
phys_addr_t phys = page_to_phys(sg_page(s)) + s->offset;
@@ -1510,7 +1508,7 @@ int iommu_domain_get_attr(struct iommu_domain *domain,
break;
case DOMAIN_ATTR_PAGING:
paging = data;
- *paging = (domain->ops->pgsize_bitmap != 0UL);
+ *paging = (domain->pgsize_bitmap != 0UL);
break;
case DOMAIN_ATTR_WINDOWS:
count = data;
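
With pgsize_bitmap now held in struct iommu_domain, iommu_map()/iommu_unmap() above derive the minimum page size and the per-chunk size from the domain rather than from the bus ops. Below is a user-space rendition of that size-selection logic; the helpers stand in for the kernel's __ffs()/__fls() and the example bitmap and addresses are arbitrary.

#include <stdio.h>

/* stand-ins for __ffs()/__fls(); callers below only pass non-zero values */
static unsigned int ffs_idx(unsigned long x)
{
        unsigned int r = 0;

        while (!(x & 1UL)) {
                x >>= 1;
                r++;
        }
        return r;
}

static unsigned int fls_idx(unsigned long x)
{
        unsigned int r = 0;

        while (x >>= 1)
                r++;
        return r;
}

static unsigned long pick_pgsize(unsigned long pgsize_bitmap,
                                 unsigned long iova, unsigned long paddr,
                                 unsigned long size)
{
        unsigned long addr_merge = iova | paddr;
        unsigned int pgsize_idx = fls_idx(size);        /* biggest size that fits */
        unsigned long pgsize;

        if (addr_merge) {                               /* alignment constrains it too */
                unsigned int align_idx = ffs_idx(addr_merge);

                if (align_idx < pgsize_idx)
                        pgsize_idx = align_idx;
        }

        pgsize = (1UL << (pgsize_idx + 1)) - 1;         /* all acceptable sizes... */
        pgsize &= pgsize_bitmap;                        /* ...that the domain supports */
        if (!pgsize)                                    /* the kernel BUG_ON()s here */
                return 0;
        return 1UL << fls_idx(pgsize);                  /* pick the biggest one */
}

int main(void)
{
        unsigned long bitmap = 0x1000UL | 0x200000UL | 0x40000000UL; /* 4K|2M|1G */

        /* 2M-aligned iova/paddr, 4M long: maps with 2M pages */
        printf("0x%lx\n", pick_pgsize(bitmap, 0x40200000UL, 0x80000000UL, 0x400000UL));
        /* the same mapping shifted by 4K: falls back to 4K pages */
        printf("0x%lx\n", pick_pgsize(bitmap, 0x40201000UL, 0x80001000UL, 0x400000UL));
        return 0;
}
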
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index fa0adef32bd6..ba764a0835d3 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -20,6 +20,17 @@
#include <linux/iova.h>
#include <linux/module.h>
#include <linux/slab.h>
+#include <linux/smp.h>
+#include <linux/bitops.h>
+
+static bool iova_rcache_insert(struct iova_domain *iovad,
+ unsigned long pfn,
+ unsigned long size);
+static unsigned long iova_rcache_get(struct iova_domain *iovad,
+ unsigned long size,
+ unsigned long limit_pfn);
+static void init_iova_rcaches(struct iova_domain *iovad);
+static void free_iova_rcaches(struct iova_domain *iovad);
void
init_iova_domain(struct iova_domain *iovad, unsigned long granule,
@@ -38,6 +49,7 @@ init_iova_domain(struct iova_domain *iovad, unsigned long granule,
iovad->granule = granule;
iovad->start_pfn = start_pfn;
iovad->dma_32bit_pfn = pfn_32bit;
+ init_iova_rcaches(iovad);
}
EXPORT_SYMBOL_GPL(init_iova_domain);
@@ -291,33 +303,18 @@ alloc_iova(struct iova_domain *iovad, unsigned long size,
}
EXPORT_SYMBOL_GPL(alloc_iova);
-/**
- * find_iova - find's an iova for a given pfn
- * @iovad: - iova domain in question.
- * @pfn: - page frame number
- * This function finds and returns an iova belonging to the
- * given doamin which matches the given pfn.
- */
-struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn)
+static struct iova *
+private_find_iova(struct iova_domain *iovad, unsigned long pfn)
{
- unsigned long flags;
- struct rb_node *node;
+ struct rb_node *node = iovad->rbroot.rb_node;
+
+ assert_spin_locked(&iovad->iova_rbtree_lock);
- /* Take the lock so that no other thread is manipulating the rbtree */
- spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
- node = iovad->rbroot.rb_node;
while (node) {
struct iova *iova = container_of(node, struct iova, node);
/* If pfn falls within iova's range, return iova */
if ((pfn >= iova->pfn_lo) && (pfn <= iova->pfn_hi)) {
- spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
- /* We are not holding the lock while this iova
- * is referenced by the caller as the same thread
- * which called this function also calls __free_iova()
- * and it is by design that only one thread can possibly
- * reference a particular iova and hence no conflict.
- */
return iova;
}
@@ -327,9 +324,35 @@ struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn)
node = node->rb_right;
}
- spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
return NULL;
}
+
+static void private_free_iova(struct iova_domain *iovad, struct iova *iova)
+{
+ assert_spin_locked(&iovad->iova_rbtree_lock);
+ __cached_rbnode_delete_update(iovad, iova);
+ rb_erase(&iova->node, &iovad->rbroot);
+ free_iova_mem(iova);
+}
+
+/**
+ * find_iova - finds an iova for a given pfn
+ * @iovad: - iova domain in question.
+ * @pfn: - page frame number
+ * This function finds and returns an iova belonging to the
+ * given domain which matches the given pfn.
+ */
+struct iova *find_iova(struct iova_domain *iovad, unsigned long pfn)
+{
+ unsigned long flags;
+ struct iova *iova;
+
+ /* Take the lock so that no other thread is manipulating the rbtree */
+ spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
+ iova = private_find_iova(iovad, pfn);
+ spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+ return iova;
+}
EXPORT_SYMBOL_GPL(find_iova);
/**
@@ -344,10 +367,8 @@ __free_iova(struct iova_domain *iovad, struct iova *iova)
unsigned long flags;
spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
- __cached_rbnode_delete_update(iovad, iova);
- rb_erase(&iova->node, &iovad->rbroot);
+ private_free_iova(iovad, iova);
spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
- free_iova_mem(iova);
}
EXPORT_SYMBOL_GPL(__free_iova);
@@ -370,6 +391,63 @@ free_iova(struct iova_domain *iovad, unsigned long pfn)
EXPORT_SYMBOL_GPL(free_iova);
/**
+ * alloc_iova_fast - allocates an iova from rcache
+ * @iovad: - iova domain in question
+ * @size: - size of page frames to allocate
+ * @limit_pfn: - max limit address
+ * This function tries to satisfy an iova allocation from the rcache,
+ * and falls back to regular allocation on failure.
+*/
+unsigned long
+alloc_iova_fast(struct iova_domain *iovad, unsigned long size,
+ unsigned long limit_pfn)
+{
+ bool flushed_rcache = false;
+ unsigned long iova_pfn;
+ struct iova *new_iova;
+
+ iova_pfn = iova_rcache_get(iovad, size, limit_pfn);
+ if (iova_pfn)
+ return iova_pfn;
+
+retry:
+ new_iova = alloc_iova(iovad, size, limit_pfn, true);
+ if (!new_iova) {
+ unsigned int cpu;
+
+ if (flushed_rcache)
+ return 0;
+
+ /* Try replenishing IOVAs by flushing rcache. */
+ flushed_rcache = true;
+ for_each_online_cpu(cpu)
+ free_cpu_cached_iovas(cpu, iovad);
+ goto retry;
+ }
+
+ return new_iova->pfn_lo;
+}
+EXPORT_SYMBOL_GPL(alloc_iova_fast);
+
+/**
+ * free_iova_fast - free iova pfn range into rcache
+ * @iovad: - iova domain in question.
+ * @pfn: - pfn that is allocated previously
+ * @size: - # of pages in range
+ * This function frees an iova range by trying to put it into the rcache,
+ * falling back to regular iova deallocation via free_iova() if this fails.
+ */
+void
+free_iova_fast(struct iova_domain *iovad, unsigned long pfn, unsigned long size)
+{
+ if (iova_rcache_insert(iovad, pfn, size))
+ return;
+
+ free_iova(iovad, pfn);
+}
+EXPORT_SYMBOL_GPL(free_iova_fast);
+
+/**
* put_iova_domain - destroys the iova doamin
* @iovad: - iova domain in question.
* All the iova's in that domain are destroyed.
@@ -379,6 +457,7 @@ void put_iova_domain(struct iova_domain *iovad)
struct rb_node *node;
unsigned long flags;
+ free_iova_rcaches(iovad);
spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
node = rb_first(&iovad->rbroot);
while (node) {
@@ -550,5 +629,295 @@ error:
return NULL;
}
+/*
+ * Magazine caches for IOVA ranges. For an introduction to magazines,
+ * see the USENIX 2001 paper "Magazines and Vmem: Extending the Slab
+ * Allocator to Many CPUs and Arbitrary Resources" by Bonwick and Adams.
+ * For simplicity, we use a static magazine size and don't implement the
+ * dynamic size tuning described in the paper.
+ */
+
+#define IOVA_MAG_SIZE 128
+
+struct iova_magazine {
+ unsigned long size;
+ unsigned long pfns[IOVA_MAG_SIZE];
+};
+
+struct iova_cpu_rcache {
+ spinlock_t lock;
+ struct iova_magazine *loaded;
+ struct iova_magazine *prev;
+};
+
+static struct iova_magazine *iova_magazine_alloc(gfp_t flags)
+{
+ return kzalloc(sizeof(struct iova_magazine), flags);
+}
+
+static void iova_magazine_free(struct iova_magazine *mag)
+{
+ kfree(mag);
+}
+
+static void
+iova_magazine_free_pfns(struct iova_magazine *mag, struct iova_domain *iovad)
+{
+ unsigned long flags;
+ int i;
+
+ if (!mag)
+ return;
+
+ spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
+
+ for (i = 0 ; i < mag->size; ++i) {
+ struct iova *iova = private_find_iova(iovad, mag->pfns[i]);
+
+ BUG_ON(!iova);
+ private_free_iova(iovad, iova);
+ }
+
+ spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+
+ mag->size = 0;
+}
+
+static bool iova_magazine_full(struct iova_magazine *mag)
+{
+ return (mag && mag->size == IOVA_MAG_SIZE);
+}
+
+static bool iova_magazine_empty(struct iova_magazine *mag)
+{
+ return (!mag || mag->size == 0);
+}
+
+static unsigned long iova_magazine_pop(struct iova_magazine *mag,
+ unsigned long limit_pfn)
+{
+ BUG_ON(iova_magazine_empty(mag));
+
+ if (mag->pfns[mag->size - 1] >= limit_pfn)
+ return 0;
+
+ return mag->pfns[--mag->size];
+}
+
+static void iova_magazine_push(struct iova_magazine *mag, unsigned long pfn)
+{
+ BUG_ON(iova_magazine_full(mag));
+
+ mag->pfns[mag->size++] = pfn;
+}
+
+static void init_iova_rcaches(struct iova_domain *iovad)
+{
+ struct iova_cpu_rcache *cpu_rcache;
+ struct iova_rcache *rcache;
+ unsigned int cpu;
+ int i;
+
+ for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
+ rcache = &iovad->rcaches[i];
+ spin_lock_init(&rcache->lock);
+ rcache->depot_size = 0;
+ rcache->cpu_rcaches = __alloc_percpu(sizeof(*cpu_rcache), cache_line_size());
+ if (WARN_ON(!rcache->cpu_rcaches))
+ continue;
+ for_each_possible_cpu(cpu) {
+ cpu_rcache = per_cpu_ptr(rcache->cpu_rcaches, cpu);
+ spin_lock_init(&cpu_rcache->lock);
+ cpu_rcache->loaded = iova_magazine_alloc(GFP_KERNEL);
+ cpu_rcache->prev = iova_magazine_alloc(GFP_KERNEL);
+ }
+ }
+}
+
+/*
+ * Try inserting IOVA range starting with 'iova_pfn' into 'rcache', and
+ * return true on success. Can fail if rcache is full and we can't free
+ * space, and free_iova() (our only caller) will then return the IOVA
+ * range to the rbtree instead.
+ */
+static bool __iova_rcache_insert(struct iova_domain *iovad,
+ struct iova_rcache *rcache,
+ unsigned long iova_pfn)
+{
+ struct iova_magazine *mag_to_free = NULL;
+ struct iova_cpu_rcache *cpu_rcache;
+ bool can_insert = false;
+ unsigned long flags;
+
+ cpu_rcache = this_cpu_ptr(rcache->cpu_rcaches);
+ spin_lock_irqsave(&cpu_rcache->lock, flags);
+
+ if (!iova_magazine_full(cpu_rcache->loaded)) {
+ can_insert = true;
+ } else if (!iova_magazine_full(cpu_rcache->prev)) {
+ swap(cpu_rcache->prev, cpu_rcache->loaded);
+ can_insert = true;
+ } else {
+ struct iova_magazine *new_mag = iova_magazine_alloc(GFP_ATOMIC);
+
+ if (new_mag) {
+ spin_lock(&rcache->lock);
+ if (rcache->depot_size < MAX_GLOBAL_MAGS) {
+ rcache->depot[rcache->depot_size++] =
+ cpu_rcache->loaded;
+ } else {
+ mag_to_free = cpu_rcache->loaded;
+ }
+ spin_unlock(&rcache->lock);
+
+ cpu_rcache->loaded = new_mag;
+ can_insert = true;
+ }
+ }
+
+ if (can_insert)
+ iova_magazine_push(cpu_rcache->loaded, iova_pfn);
+
+ spin_unlock_irqrestore(&cpu_rcache->lock, flags);
+
+ if (mag_to_free) {
+ iova_magazine_free_pfns(mag_to_free, iovad);
+ iova_magazine_free(mag_to_free);
+ }
+
+ return can_insert;
+}
+
+static bool iova_rcache_insert(struct iova_domain *iovad, unsigned long pfn,
+ unsigned long size)
+{
+ unsigned int log_size = order_base_2(size);
+
+ if (log_size >= IOVA_RANGE_CACHE_MAX_SIZE)
+ return false;
+
+ return __iova_rcache_insert(iovad, &iovad->rcaches[log_size], pfn);
+}
+
+/*
+ * Caller wants to allocate a new IOVA range from 'rcache'. If we can
+ * satisfy the request, return a matching non-NULL range and remove
+ * it from the 'rcache'.
+ */
+static unsigned long __iova_rcache_get(struct iova_rcache *rcache,
+ unsigned long limit_pfn)
+{
+ struct iova_cpu_rcache *cpu_rcache;
+ unsigned long iova_pfn = 0;
+ bool has_pfn = false;
+ unsigned long flags;
+
+ cpu_rcache = this_cpu_ptr(rcache->cpu_rcaches);
+ spin_lock_irqsave(&cpu_rcache->lock, flags);
+
+ if (!iova_magazine_empty(cpu_rcache->loaded)) {
+ has_pfn = true;
+ } else if (!iova_magazine_empty(cpu_rcache->prev)) {
+ swap(cpu_rcache->prev, cpu_rcache->loaded);
+ has_pfn = true;
+ } else {
+ spin_lock(&rcache->lock);
+ if (rcache->depot_size > 0) {
+ iova_magazine_free(cpu_rcache->loaded);
+ cpu_rcache->loaded = rcache->depot[--rcache->depot_size];
+ has_pfn = true;
+ }
+ spin_unlock(&rcache->lock);
+ }
+
+ if (has_pfn)
+ iova_pfn = iova_magazine_pop(cpu_rcache->loaded, limit_pfn);
+
+ spin_unlock_irqrestore(&cpu_rcache->lock, flags);
+
+ return iova_pfn;
+}
+
+/*
+ * Try to satisfy IOVA allocation range from rcache. Fail if requested
+ * size is too big or the DMA limit we are given isn't satisfied by the
+ * top element in the magazine.
+ */
+static unsigned long iova_rcache_get(struct iova_domain *iovad,
+ unsigned long size,
+ unsigned long limit_pfn)
+{
+ unsigned int log_size = order_base_2(size);
+
+ if (log_size >= IOVA_RANGE_CACHE_MAX_SIZE)
+ return 0;
+
+ return __iova_rcache_get(&iovad->rcaches[log_size], limit_pfn);
+}
+
+/*
+ * Free a cpu's rcache.
+ */
+static void free_cpu_iova_rcache(unsigned int cpu, struct iova_domain *iovad,
+ struct iova_rcache *rcache)
+{
+ struct iova_cpu_rcache *cpu_rcache = per_cpu_ptr(rcache->cpu_rcaches, cpu);
+ unsigned long flags;
+
+ spin_lock_irqsave(&cpu_rcache->lock, flags);
+
+ iova_magazine_free_pfns(cpu_rcache->loaded, iovad);
+ iova_magazine_free(cpu_rcache->loaded);
+
+ iova_magazine_free_pfns(cpu_rcache->prev, iovad);
+ iova_magazine_free(cpu_rcache->prev);
+
+ spin_unlock_irqrestore(&cpu_rcache->lock, flags);
+}
+
+/*
+ * free rcache data structures.
+ */
+static void free_iova_rcaches(struct iova_domain *iovad)
+{
+ struct iova_rcache *rcache;
+ unsigned long flags;
+ unsigned int cpu;
+ int i, j;
+
+ for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
+ rcache = &iovad->rcaches[i];
+ for_each_possible_cpu(cpu)
+ free_cpu_iova_rcache(cpu, iovad, rcache);
+ spin_lock_irqsave(&rcache->lock, flags);
+ free_percpu(rcache->cpu_rcaches);
+ for (j = 0; j < rcache->depot_size; ++j) {
+ iova_magazine_free_pfns(rcache->depot[j], iovad);
+ iova_magazine_free(rcache->depot[j]);
+ }
+ spin_unlock_irqrestore(&rcache->lock, flags);
+ }
+}
+
+/*
+ * free all the IOVA ranges cached by a cpu (used when cpu is unplugged)
+ */
+void free_cpu_cached_iovas(unsigned int cpu, struct iova_domain *iovad)
+{
+ struct iova_cpu_rcache *cpu_rcache;
+ struct iova_rcache *rcache;
+ unsigned long flags;
+ int i;
+
+ for (i = 0; i < IOVA_RANGE_CACHE_MAX_SIZE; ++i) {
+ rcache = &iovad->rcaches[i];
+ cpu_rcache = per_cpu_ptr(rcache->cpu_rcaches, cpu);
+ spin_lock_irqsave(&cpu_rcache->lock, flags);
+ iova_magazine_free_pfns(cpu_rcache->loaded, iovad);
+ iova_magazine_free_pfns(cpu_rcache->prev, iovad);
+ spin_unlock_irqrestore(&cpu_rcache->lock, flags);
+ }
+}
+
MODULE_AUTHOR("Anil S Keshavamurthy <anil.s.keshavamurthy@intel.com>");
MODULE_LICENSE("GPL");
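
For readers skimming the diff, the new caching layer in iova.c boils down to a
two-magazine-per-CPU front end backed by a shared depot. Below is a minimal
user-space sketch of the lookup path only (try 'loaded', swap in 'prev', refill
from the depot); the struct names, sizes and the values in main() are invented
for illustration, and the locking, magazine freeing and pfn bookkeeping of the
real code are deliberately left out.

/*
 * Toy model of the rcache lookup: pop from the loaded magazine,
 * falling back to the previous magazine and then the depot.
 */
#include <stddef.h>
#include <stdio.h>

#define MAG_SIZE 8

struct magazine {
	unsigned long pfns[MAG_SIZE];
	size_t size;
};

struct cpu_cache {
	struct magazine *loaded;
	struct magazine *prev;
};

struct depot {
	struct magazine *mags[4];
	size_t depth;
};

static unsigned long mag_pop(struct magazine *mag)
{
	return mag->size ? mag->pfns[--mag->size] : 0;
}

static unsigned long cache_get(struct cpu_cache *cc, struct depot *d)
{
	if (cc->loaded->size == 0 && cc->prev->size != 0) {
		struct magazine *tmp = cc->loaded;	/* swap(prev, loaded) */

		cc->loaded = cc->prev;
		cc->prev = tmp;
	} else if (cc->loaded->size == 0 && d->depth > 0) {
		cc->loaded = d->mags[--d->depth];	/* refill from depot */
	}

	return mag_pop(cc->loaded);	/* 0 means "cache miss" */
}

int main(void)
{
	struct magazine loaded = { .size = 0 };
	struct magazine prev = { .pfns = { 0x1000, 0x2000 }, .size = 2 };
	struct cpu_cache cc = { .loaded = &loaded, .prev = &prev };
	struct depot d = { .depth = 0 };

	printf("got pfn 0x%lx\n", cache_get(&cc, &d));	/* 0x2000 */
	return 0;
}
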
diff --git a/drivers/iommu/irq_remapping.c b/drivers/iommu/irq_remapping.c
index 8adaaeae3268..49721b4e1975 100644
--- a/drivers/iommu/irq_remapping.c
+++ b/drivers/iommu/irq_remapping.c
@@ -36,7 +36,7 @@ static void irq_remapping_disable_io_apic(void)
* As this gets called during crash dump, keep this simple for
* now.
*/
- if (cpu_has_apic || apic_from_smp_config())
+ if (boot_cpu_has(X86_FEATURE_APIC) || apic_from_smp_config())
disconnect_bsp_APIC(0);
}
diff --git a/drivers/iommu/mtk_iommu.c b/drivers/iommu/mtk_iommu.c
index 929a66a81b2b..c3043d8754e3 100644
--- a/drivers/iommu/mtk_iommu.c
+++ b/drivers/iommu/mtk_iommu.c
@@ -11,6 +11,7 @@
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*/
+#include <linux/bootmem.h>
#include <linux/bug.h>
#include <linux/clk.h>
#include <linux/component.h>
@@ -56,7 +57,7 @@
#define F_MMU_TF_PROTECT_SEL(prot) (((prot) & 0x3) << 5)
#define REG_MMU_IVRP_PADDR 0x114
-#define F_MMU_IVRP_PA_SET(pa) ((pa) >> 1)
+#define F_MMU_IVRP_PA_SET(pa, ext) (((pa) >> 1) | ((!!(ext)) << 31))
#define REG_MMU_INT_CONTROL0 0x120
#define F_L2_MULIT_HIT_EN BIT(0)
@@ -125,6 +126,7 @@ struct mtk_iommu_data {
struct mtk_iommu_domain *m4u_dom;
struct iommu_group *m4u_group;
struct mtk_smi_iommu smi_imu; /* SMI larb iommu info */
+ bool enable_4GB;
};
static struct iommu_ops mtk_iommu_ops;
@@ -257,6 +259,9 @@ static int mtk_iommu_domain_finalise(struct mtk_iommu_data *data)
.iommu_dev = data->dev,
};
+ if (data->enable_4GB)
+ dom->cfg.quirks |= IO_PGTABLE_QUIRK_ARM_MTK_4GB;
+
dom->iop = alloc_io_pgtable_ops(ARM_V7S, &dom->cfg, data);
if (!dom->iop) {
dev_err(data->dev, "Failed to alloc io pgtable\n");
@@ -264,7 +269,7 @@ static int mtk_iommu_domain_finalise(struct mtk_iommu_data *data)
}
/* Update our support page sizes bitmap */
- mtk_iommu_ops.pgsize_bitmap = dom->cfg.pgsize_bitmap;
+ dom->domain.pgsize_bitmap = dom->cfg.pgsize_bitmap;
writel(data->m4u_dom->cfg.arm_v7s_cfg.ttbr[0],
data->base + REG_MMU_PT_BASE_ADDR);
@@ -530,7 +535,7 @@ static int mtk_iommu_hw_init(const struct mtk_iommu_data *data)
F_INT_PRETETCH_TRANSATION_FIFO_FAULT;
writel_relaxed(regval, data->base + REG_MMU_INT_MAIN_CONTROL);
- writel_relaxed(F_MMU_IVRP_PA_SET(data->protect_base),
+ writel_relaxed(F_MMU_IVRP_PA_SET(data->protect_base, data->enable_4GB),
data->base + REG_MMU_IVRP_PADDR);
writel_relaxed(0, data->base + REG_MMU_DCM_DIS);
@@ -591,6 +596,9 @@ static int mtk_iommu_probe(struct platform_device *pdev)
return -ENOMEM;
data->protect_base = ALIGN(virt_to_phys(protect), MTK_PROTECT_PA_ALIGN);
+ /* Whether the current DRAM is over 4GB */
+ data->enable_4GB = !!(max_pfn > (0xffffffffUL >> PAGE_SHIFT));
+
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
data->base = devm_ioremap_resource(dev, res);
if (IS_ERR(data->base))
@@ -690,7 +698,7 @@ static int __maybe_unused mtk_iommu_resume(struct device *dev)
writel_relaxed(reg->ctrl_reg, base + REG_MMU_CTRL_REG);
writel_relaxed(reg->int_control0, base + REG_MMU_INT_CONTROL0);
writel_relaxed(reg->int_main_control, base + REG_MMU_INT_MAIN_CONTROL);
- writel_relaxed(F_MMU_IVRP_PA_SET(data->protect_base),
+ writel_relaxed(F_MMU_IVRP_PA_SET(data->protect_base, data->enable_4GB),
base + REG_MMU_IVRP_PADDR);
return 0;
}
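
The interesting part of the mtk_iommu.c change is the new 4GB mode:
enable_4GB is derived from max_pfn and folded into the IVRP register encoding
as bit 31. A small standalone sketch of that encoding follows; the max_pfn and
protect-buffer address in main() are assumptions, not values from the driver.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12

/* mirrors F_MMU_IVRP_PA_SET(pa, ext): pa >> 1, with bit 31 flagging 4GB mode */
static uint32_t ivrp_pa_set(uint64_t pa, bool ext)
{
	return (uint32_t)(pa >> 1) | ((uint32_t)!!ext << 31);
}

int main(void)
{
	uint64_t max_pfn = 0x140000;		/* assumed: 5 GiB of DRAM */
	bool enable_4gb = max_pfn > (0xffffffffULL >> PAGE_SHIFT);
	uint64_t protect_base = 0x80000000ULL;	/* assumed protect buffer PA */

	printf("enable_4GB=%d IVRP=0x%08x\n", (int)enable_4gb,
	       ivrp_pa_set(protect_base, enable_4gb));
	return 0;
}
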
diff --git a/drivers/iommu/of_iommu.c b/drivers/iommu/of_iommu.c
index 5fea665af99d..af499aea0a1a 100644
--- a/drivers/iommu/of_iommu.c
+++ b/drivers/iommu/of_iommu.c
@@ -98,12 +98,12 @@ EXPORT_SYMBOL_GPL(of_get_dma_window);
struct of_iommu_node {
struct list_head list;
struct device_node *np;
- struct iommu_ops *ops;
+ const struct iommu_ops *ops;
};
static LIST_HEAD(of_iommu_list);
static DEFINE_SPINLOCK(of_iommu_lock);
-void of_iommu_set_ops(struct device_node *np, struct iommu_ops *ops)
+void of_iommu_set_ops(struct device_node *np, const struct iommu_ops *ops)
{
struct of_iommu_node *iommu = kzalloc(sizeof(*iommu), GFP_KERNEL);
@@ -119,10 +119,10 @@ void of_iommu_set_ops(struct device_node *np, struct iommu_ops *ops)
spin_unlock(&of_iommu_lock);
}
-struct iommu_ops *of_iommu_get_ops(struct device_node *np)
+const struct iommu_ops *of_iommu_get_ops(struct device_node *np)
{
struct of_iommu_node *node;
- struct iommu_ops *ops = NULL;
+ const struct iommu_ops *ops = NULL;
spin_lock(&of_iommu_lock);
list_for_each_entry(node, &of_iommu_list, list)
@@ -134,12 +134,12 @@ struct iommu_ops *of_iommu_get_ops(struct device_node *np)
return ops;
}
-struct iommu_ops *of_iommu_configure(struct device *dev,
- struct device_node *master_np)
+const struct iommu_ops *of_iommu_configure(struct device *dev,
+ struct device_node *master_np)
{
struct of_phandle_args iommu_spec;
struct device_node *np;
- struct iommu_ops *ops = NULL;
+ const struct iommu_ops *ops = NULL;
int idx = 0;
/*
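
The of_iommu.c hunks only const-ify the iommu_ops pointers, but the registry
pattern they touch is simple to model: device-tree nodes are associated with
immutable ops tables, and lookups hand back a pointer-to-const. The toy,
lock-free user-space model below uses invented struct names and is not the
kernel API.

#include <stddef.h>
#include <stdio.h>

struct node { const char *name; };		/* stand-in for device_node */
struct ops { const char *provider; };		/* stand-in for iommu_ops */

struct entry {
	const struct node *np;
	const struct ops *ops;	/* const: callers may use, never modify */
};

static struct entry registry[8];
static size_t nr_entries;

static void set_ops(const struct node *np, const struct ops *ops)
{
	if (nr_entries < 8)
		registry[nr_entries++] = (struct entry){ np, ops };
}

static const struct ops *get_ops(const struct node *np)
{
	for (size_t i = 0; i < nr_entries; i++)
		if (registry[i].np == np)
			return registry[i].ops;
	return NULL;
}

int main(void)
{
	static const struct node smmu = { "iommu@40000000" };
	static const struct ops smmu_ops = { "example-iommu" };

	set_ops(&smmu, &smmu_ops);
	printf("%s\n", get_ops(&smmu) ? get_ops(&smmu)->provider : "none");
	return 0;
}
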
diff --git a/drivers/iommu/omap-iommu-debug.c b/drivers/iommu/omap-iommu-debug.c
index 9bc20e2119a3..505548aafeff 100644
--- a/drivers/iommu/omap-iommu-debug.c
+++ b/drivers/iommu/omap-iommu-debug.c
@@ -136,7 +136,7 @@ static ssize_t iotlb_dump_cr(struct omap_iommu *obj, struct cr_regs *cr,
struct seq_file *s)
{
seq_printf(s, "%08x %08x %01x\n", cr->cam, cr->ram,
- (cr->cam & MMU_CAM_P) ? 1 : 0);
+ (cr->cam & MMU_CAM_P) ? 1 : 0);
return 0;
}
diff --git a/drivers/iommu/omap-iommu.c b/drivers/iommu/omap-iommu.c
index 3dc5b65f3990..e2583cce2cc1 100644
--- a/drivers/iommu/omap-iommu.c
+++ b/drivers/iommu/omap-iommu.c
@@ -628,10 +628,12 @@ iopgtable_store_entry_core(struct omap_iommu *obj, struct iotlb_entry *e)
break;
default:
fn = NULL;
- BUG();
break;
}
+ if (WARN_ON(!fn))
+ return -EINVAL;
+
prot = get_iopte_attr(e);
spin_lock(&obj->page_table_lock);
@@ -987,7 +989,6 @@ static int omap_iommu_remove(struct platform_device *pdev)
{
struct omap_iommu *obj = platform_get_drvdata(pdev);
- iopgtable_clear_entry_all(obj);
omap_iommu_debugfs_remove(obj);
pm_runtime_disable(obj->dev);
@@ -1161,7 +1162,8 @@ static struct iommu_domain *omap_iommu_domain_alloc(unsigned type)
* should never fail, but please keep this around to ensure
* we keep the hardware happy
*/
- BUG_ON(!IS_ALIGNED((long)omap_domain->pgtable, IOPGD_TABLE_SIZE));
+ if (WARN_ON(!IS_ALIGNED((long)omap_domain->pgtable, IOPGD_TABLE_SIZE)))
+ goto fail_align;
clean_dcache_area(omap_domain->pgtable, IOPGD_TABLE_SIZE);
spin_lock_init(&omap_domain->lock);
@@ -1172,6 +1174,8 @@ static struct iommu_domain *omap_iommu_domain_alloc(unsigned type)
return &omap_domain->domain;
+fail_align:
+ kfree(omap_domain->pgtable);
fail_nomem:
kfree(omap_domain);
out:
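
The omap-iommu.c hunks replace BUG()/BUG_ON() with warnings plus error returns
and add a fail_align unwind label. The sketch below shows only that
error-handling shape in plain C; the allocation sizes, the alignment constant
and the warning substitute are assumptions made for the example.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define TABLE_SIZE 4096

static void *domain_alloc(void)
{
	void *domain, *pgtable;

	domain = calloc(1, 64);
	if (!domain)
		goto out;

	pgtable = calloc(1, TABLE_SIZE);
	if (!pgtable)
		goto fail_nomem;

	/* stand-in for WARN_ON(!IS_ALIGNED(...)); plain calloc() is rarely
	 * page-aligned, so this will usually exercise the unwind path */
	if ((uintptr_t)pgtable & (TABLE_SIZE - 1)) {
		fprintf(stderr, "pgtable not aligned, unwinding\n");
		goto fail_align;
	}

	return domain;

fail_align:
	free(pgtable);
fail_nomem:
	free(domain);
out:
	return NULL;
}

int main(void)
{
	void *d = domain_alloc();

	printf("domain %s\n", d ? "allocated" : "not allocated");
	free(d);
	return 0;
}
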
diff --git a/drivers/iommu/rockchip-iommu.c b/drivers/iommu/rockchip-iommu.c
index 5710a06c3049..c7d6156ff536 100644
--- a/drivers/iommu/rockchip-iommu.c
+++ b/drivers/iommu/rockchip-iommu.c
@@ -1049,6 +1049,8 @@ static int rk_iommu_probe(struct platform_device *pdev)
for (i = 0; i < pdev->num_resources; i++) {
res = platform_get_resource(pdev, IORESOURCE_MEM, i);
+ if (!res)
+ continue;
iommu->bases[i] = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(iommu->bases[i]))
continue;
diff --git a/drivers/ipack/devices/ipoctal.c b/drivers/ipack/devices/ipoctal.c
index 035d5449227e..75dd15d66df6 100644
--- a/drivers/ipack/devices/ipoctal.c
+++ b/drivers/ipack/devices/ipoctal.c
@@ -629,8 +629,7 @@ static void ipoctal_hangup(struct tty_struct *tty)
tty_port_hangup(&channel->tty_port);
ipoctal_reset_channel(channel);
-
- clear_bit(ASYNCB_INITIALIZED, &channel->tty_port.flags);
+ tty_port_set_initialized(&channel->tty_port, 0);
wake_up_interruptible(&channel->tty_port.open_wait);
}
@@ -642,7 +641,7 @@ static void ipoctal_shutdown(struct tty_struct *tty)
return;
ipoctal_reset_channel(channel);
- clear_bit(ASYNCB_INITIALIZED, &channel->tty_port.flags);
+ tty_port_set_initialized(&channel->tty_port, 0);
}
static void ipoctal_cleanup(struct tty_struct *tty)
diff --git a/drivers/irqchip/Kconfig b/drivers/irqchip/Kconfig
index 3e124793e224..fa33c50b0e5a 100644
--- a/drivers/irqchip/Kconfig
+++ b/drivers/irqchip/Kconfig
@@ -27,6 +27,7 @@ config ARM_GIC_V3
select IRQ_DOMAIN
select MULTI_IRQ_HANDLER
select IRQ_DOMAIN_HIERARCHY
+ select PARTITION_PERCPU
config ARM_GIC_V3_ITS
bool
@@ -244,3 +245,18 @@ config IRQ_MXS
config MVEBU_ODMI
bool
select GENERIC_MSI_IRQ_DOMAIN
+
+config LS_SCFG_MSI
+ def_bool y if SOC_LS1021A || ARCH_LAYERSCAPE
+ depends on PCI && PCI_MSI
+ select PCI_MSI_IRQ_DOMAIN
+
+config PARTITION_PERCPU
+ bool
+
+config EZNPS_GIC
+ bool "NPS400 Global Interrupt Manager (GIM)"
+ depends on ARC || (COMPILE_TEST && !64BIT)
+ select IRQ_DOMAIN
+ help
+ Support the EZchip NPS400 global interrupt controller
diff --git a/drivers/irqchip/Makefile b/drivers/irqchip/Makefile
index b03cfcbbac6b..38853a187607 100644
--- a/drivers/irqchip/Makefile
+++ b/drivers/irqchip/Makefile
@@ -7,6 +7,7 @@ obj-$(CONFIG_ARCH_BCM2835) += irq-bcm2835.o
obj-$(CONFIG_ARCH_BCM2835) += irq-bcm2836.o
obj-$(CONFIG_ARCH_EXYNOS) += exynos-combiner.o
obj-$(CONFIG_ARCH_HIP04) += irq-hip04.o
+obj-$(CONFIG_ARCH_LPC32XX) += irq-lpc32xx.o
obj-$(CONFIG_ARCH_MMP) += irq-mmp.o
obj-$(CONFIG_IRQ_MXS) += irq-mxs.o
obj-$(CONFIG_ARCH_TEGRA) += irq-tegra.o
@@ -27,6 +28,7 @@ obj-$(CONFIG_REALVIEW_DT) += irq-gic-realview.o
obj-$(CONFIG_ARM_GIC_V2M) += irq-gic-v2m.o
obj-$(CONFIG_ARM_GIC_V3) += irq-gic-v3.o irq-gic-common.o
obj-$(CONFIG_ARM_GIC_V3_ITS) += irq-gic-v3-its.o irq-gic-v3-its-pci-msi.o irq-gic-v3-its-platform-msi.o
+obj-$(CONFIG_PARTITION_PERCPU) += irq-partition-percpu.o
obj-$(CONFIG_HISILICON_IRQ_MBIGEN) += irq-mbigen.o
obj-$(CONFIG_ARM_NVIC) += irq-nvic.o
obj-$(CONFIG_ARM_VIC) += irq-vic.o
@@ -65,3 +67,5 @@ obj-$(CONFIG_INGENIC_IRQ) += irq-ingenic.o
obj-$(CONFIG_IMX_GPCV2) += irq-imx-gpcv2.o
obj-$(CONFIG_PIC32_EVIC) += irq-pic32-evic.o
obj-$(CONFIG_MVEBU_ODMI) += irq-mvebu-odmi.o
+obj-$(CONFIG_LS_SCFG_MSI) += irq-ls-scfg-msi.o
+obj-$(CONFIG_EZNPS_GIC) += irq-eznps.o
diff --git a/drivers/irqchip/irq-alpine-msi.c b/drivers/irqchip/irq-alpine-msi.c
index 25384255b30f..63d980995d17 100644
--- a/drivers/irqchip/irq-alpine-msi.c
+++ b/drivers/irqchip/irq-alpine-msi.c
@@ -23,7 +23,7 @@
#include <linux/slab.h>
#include <asm/irq.h>
-#include <asm-generic/msi.h>
+#include <asm/msi.h>
/* MSIX message address format: local GIC target */
#define ALPINE_MSIX_SPI_TARGET_CLUSTER0 BIT(16)
diff --git a/drivers/irqchip/irq-bcm2836.c b/drivers/irqchip/irq-bcm2836.c
index b6e950d4782a..72ff1d5c5de6 100644
--- a/drivers/irqchip/irq-bcm2836.c
+++ b/drivers/irqchip/irq-bcm2836.c
@@ -195,7 +195,7 @@ static void bcm2836_arm_irqchip_send_ipi(const struct cpumask *mask,
* Ensure that stores to normal memory are visible to the
* other CPUs before issuing the IPI.
*/
- dsb();
+ smp_wmb();
for_each_cpu(cpu, mask) {
writel(1 << ipi, mailbox0_base + 16 * cpu);
@@ -223,6 +223,7 @@ static struct notifier_block bcm2836_arm_irqchip_cpu_notifier = {
.priority = 100,
};
+#ifdef CONFIG_ARM
int __init bcm2836_smp_boot_secondary(unsigned int cpu,
struct task_struct *idle)
{
@@ -238,7 +239,7 @@ int __init bcm2836_smp_boot_secondary(unsigned int cpu,
static const struct smp_operations bcm2836_smp_ops __initconst = {
.smp_boot_secondary = bcm2836_smp_boot_secondary,
};
-
+#endif
#endif
static const struct irq_domain_ops bcm2836_arm_irqchip_intc_ops = {
@@ -252,12 +253,15 @@ bcm2836_arm_irqchip_smp_init(void)
/* Unmask IPIs to the boot CPU. */
bcm2836_arm_irqchip_cpu_notify(&bcm2836_arm_irqchip_cpu_notifier,
CPU_STARTING,
- (void *)smp_processor_id());
+ (void *)(uintptr_t)smp_processor_id());
register_cpu_notifier(&bcm2836_arm_irqchip_cpu_notifier);
set_smp_cross_call(bcm2836_arm_irqchip_send_ipi);
+
+#ifdef CONFIG_ARM
smp_set_ops(&bcm2836_smp_ops);
#endif
+#endif
}
/*
diff --git a/drivers/irqchip/irq-clps711x.c b/drivers/irqchip/irq-clps711x.c
index eb5eb0cd414d..2223b3f15d68 100644
--- a/drivers/irqchip/irq-clps711x.c
+++ b/drivers/irqchip/irq-clps711x.c
@@ -182,7 +182,7 @@ static int __init _clps711x_intc_init(struct device_node *np,
writel_relaxed(0, clps711x_intc->intmr[2]);
err = irq_alloc_descs(-1, 0, ARRAY_SIZE(clps711x_irqs), numa_node_id());
- if (IS_ERR_VALUE(err))
+ if (err < 0)
goto out_iounmap;
clps711x_intc->ops.map = clps711x_intc_irq_map;
diff --git a/drivers/irqchip/irq-crossbar.c b/drivers/irqchip/irq-crossbar.c
index 75573fa431ba..1eef56a89b1f 100644
--- a/drivers/irqchip/irq-crossbar.c
+++ b/drivers/irqchip/irq-crossbar.c
@@ -183,7 +183,7 @@ static int crossbar_domain_translate(struct irq_domain *d,
return -EINVAL;
*hwirq = fwspec->param[1];
- *type = fwspec->param[2];
+ *type = fwspec->param[2] & IRQ_TYPE_SENSE_MASK;
return 0;
}
diff --git a/drivers/irqchip/irq-eznps.c b/drivers/irqchip/irq-eznps.c
new file mode 100644
index 000000000000..efbf0e4304b7
--- /dev/null
+++ b/drivers/irqchip/irq-eznps.c
@@ -0,0 +1,165 @@
+/*
+ * Copyright (c) 2016, Mellanox Technologies. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/interrupt.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/irq.h>
+#include <linux/irqdomain.h>
+#include <linux/irqchip.h>
+#include <soc/nps/common.h>
+
+#define NPS_NR_CPU_IRQS 8 /* number of interrupt lines of NPS400 CPU */
+#define NPS_TIMER0_IRQ 3
+
+/*
+ * The NPS400 core includes Interrupt Controller (IC) support.
+ * All cores can deactivate level IRQs at the first level of control,
+ * at the cores' mesh layer, called MTM.
+ * For devices outside the chip, e.g. UART and network, there is
+ * another level called the Global Interrupt Manager (GIM).
+ * This second level can control both level and edge interrupts.
+ *
+ * NOTE: AUX_IENABLE and CTOP_AUX_IACK are auxiliary registers
+ * with private HW copy per CPU.
+ */
+
+static void nps400_irq_mask(struct irq_data *irqd)
+{
+ unsigned int ienb;
+ unsigned int irq = irqd_to_hwirq(irqd);
+
+ ienb = read_aux_reg(AUX_IENABLE);
+ ienb &= ~(1 << irq);
+ write_aux_reg(AUX_IENABLE, ienb);
+}
+
+static void nps400_irq_unmask(struct irq_data *irqd)
+{
+ unsigned int ienb;
+ unsigned int irq = irqd_to_hwirq(irqd);
+
+ ienb = read_aux_reg(AUX_IENABLE);
+ ienb |= (1 << irq);
+ write_aux_reg(AUX_IENABLE, ienb);
+}
+
+static void nps400_irq_eoi_global(struct irq_data *irqd)
+{
+ unsigned int __maybe_unused irq = irqd_to_hwirq(irqd);
+
+ write_aux_reg(CTOP_AUX_IACK, 1 << irq);
+
+ /* Don't ack GIC before all device access attempts are done */
+ mb();
+
+ nps_ack_gic();
+}
+
+static void nps400_irq_eoi(struct irq_data *irqd)
+{
+ unsigned int __maybe_unused irq = irqd_to_hwirq(irqd);
+
+ write_aux_reg(CTOP_AUX_IACK, 1 << irq);
+}
+
+static struct irq_chip nps400_irq_chip_fasteoi = {
+ .name = "NPS400 IC Global",
+ .irq_mask = nps400_irq_mask,
+ .irq_unmask = nps400_irq_unmask,
+ .irq_eoi = nps400_irq_eoi_global,
+};
+
+static struct irq_chip nps400_irq_chip_percpu = {
+ .name = "NPS400 IC",
+ .irq_mask = nps400_irq_mask,
+ .irq_unmask = nps400_irq_unmask,
+ .irq_eoi = nps400_irq_eoi,
+};
+
+static int nps400_irq_map(struct irq_domain *d, unsigned int virq,
+ irq_hw_number_t hw)
+{
+ switch (hw) {
+ case NPS_TIMER0_IRQ:
+#ifdef CONFIG_SMP
+ case NPS_IPI_IRQ:
+#endif
+ irq_set_percpu_devid(virq);
+ irq_set_chip_and_handler(virq, &nps400_irq_chip_percpu,
+ handle_percpu_devid_irq);
+ break;
+ default:
+ irq_set_chip_and_handler(virq, &nps400_irq_chip_fasteoi,
+ handle_fasteoi_irq);
+ break;
+ }
+
+ return 0;
+}
+
+static const struct irq_domain_ops nps400_irq_ops = {
+ .xlate = irq_domain_xlate_onecell,
+ .map = nps400_irq_map,
+};
+
+static int __init nps400_of_init(struct device_node *node,
+ struct device_node *parent)
+{
+ static struct irq_domain *nps400_root_domain;
+
+ if (parent) {
+ pr_err("DeviceTree incore ic not a root irq controller\n");
+ return -EINVAL;
+ }
+
+ nps400_root_domain = irq_domain_add_linear(node, NPS_NR_CPU_IRQS,
+ &nps400_irq_ops, NULL);
+
+ if (!nps400_root_domain) {
+ pr_err("nps400 root irq domain not avail\n");
+ return -ENOMEM;
+ }
+
+ /*
+ * Needed for primary domain lookup to succeed
+ * This is a primary irqchip, and can never have a parent
+ */
+ irq_set_default_host(nps400_root_domain);
+
+#ifdef CONFIG_SMP
+ irq_create_mapping(nps400_root_domain, NPS_IPI_IRQ);
+#endif
+
+ return 0;
+}
+IRQCHIP_DECLARE(ezchip_nps400_ic, "ezchip,nps400-ic", nps400_of_init);
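
The new driver's mask/unmask callbacks are a plain read-modify-write of a
per-CPU interrupt-enable word, one bit per hardware line. The user-space model
below replaces AUX_IENABLE with an ordinary variable and touches no real
NPS400 auxiliary registers.

#include <stdint.h>
#include <stdio.h>

static uint32_t aux_ienable;	/* stands in for the AUX_IENABLE register */

static void irq_mask(unsigned int irq)
{
	aux_ienable &= ~(1u << irq);
}

static void irq_unmask(unsigned int irq)
{
	aux_ienable |= 1u << irq;
}

int main(void)
{
	irq_unmask(3);	/* e.g. the timer line */
	irq_unmask(5);
	irq_mask(3);
	printf("enable mask: 0x%08x\n", (unsigned)aux_ienable);	/* 0x00000020 */
	return 0;
}
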
diff --git a/drivers/irqchip/irq-gic-common.c b/drivers/irqchip/irq-gic-common.c
index f174ce0ca361..89e7423f0ebb 100644
--- a/drivers/irqchip/irq-gic-common.c
+++ b/drivers/irqchip/irq-gic-common.c
@@ -21,6 +21,19 @@
#include "irq-gic-common.h"
+static const struct gic_kvm_info *gic_kvm_info;
+
+const struct gic_kvm_info *gic_get_kvm_info(void)
+{
+ return gic_kvm_info;
+}
+
+void gic_set_kvm_info(const struct gic_kvm_info *info)
+{
+ BUG_ON(gic_kvm_info != NULL);
+ gic_kvm_info = info;
+}
+
void gic_enable_quirks(u32 iidr, const struct gic_quirk *quirks,
void *data)
{
@@ -50,14 +63,26 @@ int gic_configure_irq(unsigned int irq, unsigned int type,
else if (type & IRQ_TYPE_EDGE_BOTH)
val |= confmask;
+ /* If the current configuration is the same, then we are done */
+ if (val == oldval)
+ return 0;
+
/*
* Write back the new configuration, and possibly re-enable
- * the interrupt. If we tried to write a new configuration and failed,
- * return an error.
+ * the interrupt. If we fail to write a new configuration for
+ * an SPI then WARN and return an error. If we fail to write the
+ * configuration for a PPI this is most likely because the GIC
+ * does not allow us to set the configuration or we are in a
+ * non-secure mode, and hence it may not be catastrophic.
*/
writel_relaxed(val, base + GIC_DIST_CONFIG + confoff);
- if (readl_relaxed(base + GIC_DIST_CONFIG + confoff) != val && val != oldval)
- ret = -EINVAL;
+ if (readl_relaxed(base + GIC_DIST_CONFIG + confoff) != val) {
+ if (WARN_ON(irq >= 32))
+ ret = -EINVAL;
+ else
+ pr_warn("GIC: PPI%d is secure or misconfigured\n",
+ irq - 16);
+ }
if (sync_access)
sync_access();
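
The gic_configure_irq() rework above encodes a small policy: skip the write
when the configuration is unchanged, treat a failed write as an error only for
SPIs (hwirq >= 32), and merely warn for PPIs. The condensed standalone model
below passes the register read-back in from the caller instead of reading
GIC_DIST_CONFIG, so the numbers in main() are purely illustrative.

#include <stdio.h>

static int configure_irq(unsigned int irq, unsigned int val,
			 unsigned int oldval, unsigned int readback)
{
	if (val == oldval)
		return 0;			/* nothing to change */

	if (readback != val) {
		if (irq >= 32) {
			fprintf(stderr, "SPI%u: config write failed\n", irq - 32);
			return -1;		/* -EINVAL in the kernel */
		}
		fprintf(stderr, "GIC: PPI%u is secure or misconfigured\n", irq - 16);
	}

	return 0;
}

int main(void)
{
	printf("%d\n", configure_irq(40, 0x2, 0x0, 0x0));	/* SPI: error */
	printf("%d\n", configure_irq(20, 0x2, 0x0, 0x0));	/* PPI: warn only */
	printf("%d\n", configure_irq(40, 0x2, 0x2, 0x2));	/* unchanged: ok */
	return 0;
}
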
diff --git a/drivers/irqchip/irq-gic-common.h b/drivers/irqchip/irq-gic-common.h
index fff697db8e22..205e5fddf6da 100644
--- a/drivers/irqchip/irq-gic-common.h
+++ b/drivers/irqchip/irq-gic-common.h
@@ -19,6 +19,7 @@
#include <linux/of.h>
#include <linux/irqdomain.h>
+#include <linux/irqchip/arm-gic-common.h>
struct gic_quirk {
const char *desc;
@@ -35,4 +36,6 @@ void gic_cpu_config(void __iomem *base, void (*sync_access)(void));
void gic_enable_quirks(u32 iidr, const struct gic_quirk *quirks,
void *data);
+void gic_set_kvm_info(const struct gic_kvm_info *info);
+
#endif /* _IRQ_GIC_COMMON_H */
diff --git a/drivers/irqchip/irq-gic-v2m.c b/drivers/irqchip/irq-gic-v2m.c
index 28f047c61baa..ad0d2960b664 100644
--- a/drivers/irqchip/irq-gic-v2m.c
+++ b/drivers/irqchip/irq-gic-v2m.c
@@ -49,6 +49,9 @@
/* APM X-Gene with GICv2m MSI_IIDR register value */
#define XGENE_GICV2M_MSI_IIDR 0x06000170
+/* Broadcom NS2 GICv2m MSI_IIDR register value */
+#define BCM_NS2_GICV2M_MSI_IIDR 0x0000013f
+
/* List of flags for specific v2m implementation */
#define GICV2M_NEEDS_SPI_OFFSET 0x00000001
@@ -62,6 +65,7 @@ struct v2m_data {
void __iomem *base; /* GICv2m virt address */
u32 spi_start; /* The SPI number that MSIs start */
u32 nr_spis; /* The number of SPIs for MSIs */
+ u32 spi_offset; /* offset to be subtracted from SPI number */
unsigned long *bm; /* MSI vector bitmap */
u32 flags; /* v2m flags for specific implementation */
};
@@ -102,7 +106,7 @@ static void gicv2m_compose_msi_msg(struct irq_data *data, struct msi_msg *msg)
msg->data = data->hwirq;
if (v2m->flags & GICV2M_NEEDS_SPI_OFFSET)
- msg->data -= v2m->spi_start;
+ msg->data -= v2m->spi_offset;
}
static struct irq_chip gicv2m_irq_chip = {
@@ -340,9 +344,20 @@ static int __init gicv2m_init_one(struct fwnode_handle *fwnode,
* different from the standard GICv2m implementation where
* the MSI data is the absolute value within the range from
* spi_start to (spi_start + num_spis).
+ *
+ * The Broadcom NS2 GICv2m implementation has an erratum where the MSI
+ * data is 'spi_number - 32'.
*/
- if (readl_relaxed(v2m->base + V2M_MSI_IIDR) == XGENE_GICV2M_MSI_IIDR)
+ switch (readl_relaxed(v2m->base + V2M_MSI_IIDR)) {
+ case XGENE_GICV2M_MSI_IIDR:
+ v2m->flags |= GICV2M_NEEDS_SPI_OFFSET;
+ v2m->spi_offset = v2m->spi_start;
+ break;
+ case BCM_NS2_GICV2M_MSI_IIDR:
v2m->flags |= GICV2M_NEEDS_SPI_OFFSET;
+ v2m->spi_offset = 32;
+ break;
+ }
v2m->bm = kzalloc(sizeof(long) * BITS_TO_LONGS(v2m->nr_spis),
GFP_KERNEL);
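
This hunk generalises the SPI-offset quirk: the MSI payload is the raw hwirq
unless the implementation needs an offset, which is spi_start on APM X-Gene
and a fixed 32 on Broadcom NS2. The self-contained sketch below mirrors that
computation; the IIDR constants come from the diff, while the hwirq and
spi_start values in main() are made up.

#include <stdint.h>
#include <stdio.h>

#define XGENE_GICV2M_MSI_IIDR	0x06000170
#define BCM_NS2_GICV2M_MSI_IIDR	0x0000013f

static uint32_t v2m_msi_data(uint32_t iidr, uint32_t hwirq, uint32_t spi_start)
{
	uint32_t offset = 0;

	switch (iidr) {
	case XGENE_GICV2M_MSI_IIDR:
		offset = spi_start;		/* relative to spi_start */
		break;
	case BCM_NS2_GICV2M_MSI_IIDR:
		offset = 32;			/* erratum: spi_number - 32 */
		break;
	}

	return hwirq - offset;
}

int main(void)
{
	printf("xgene: %u\n", (unsigned)v2m_msi_data(XGENE_GICV2M_MSI_IIDR, 96, 64));	/* 32 */
	printf("ns2:   %u\n", (unsigned)v2m_msi_data(BCM_NS2_GICV2M_MSI_IIDR, 96, 64));	/* 64 */
	printf("plain: %u\n", (unsigned)v2m_msi_data(0, 96, 64));			/* 96 */
	return 0;
}
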
diff --git a/drivers/irqchip/irq-gic-v3-its.c b/drivers/irqchip/irq-gic-v3-its.c
index 39261798c59f..6bd881be24ea 100644
--- a/drivers/irqchip/irq-gic-v3-its.c
+++ b/drivers/irqchip/irq-gic-v3-its.c
@@ -55,6 +55,16 @@ struct its_collection {
};
/*
+ * The ITS_BASER structure - contains memory information and cached
+ * value of BASER register configuration.
+ */
+struct its_baser {
+ void *base;
+ u64 val;
+ u32 order;
+};
+
+/*
* The ITS structure - contains most of the infrastructure, with the
* top-level MSI domain, the command queue, the collections, and the
* list of devices writing to it.
@@ -66,14 +76,12 @@ struct its_node {
unsigned long phys_base;
struct its_cmd_block *cmd_base;
struct its_cmd_block *cmd_write;
- struct {
- void *base;
- u32 order;
- } tables[GITS_BASER_NR_REGS];
+ struct its_baser tables[GITS_BASER_NR_REGS];
struct its_collection *collections;
struct list_head its_device_list;
u64 flags;
u32 ite_size;
+ u32 device_ids;
};
#define ITS_ITT_ALIGN SZ_256
@@ -838,6 +846,8 @@ static int its_alloc_tables(const char *node_name, struct its_node *its)
ids = GITS_TYPER_DEVBITS(typer);
}
+ its->device_ids = ids;
+
for (i = 0; i < GITS_BASER_NR_REGS; i++) {
u64 val = readq_relaxed(its->base + GITS_BASER + i * 8);
u64 type = GITS_BASER_TYPE(val);
@@ -913,6 +923,7 @@ retry_baser:
}
val |= alloc_pages - 1;
+ its->tables[i].val = val;
writeq_relaxed(val, its->base + GITS_BASER + i * 8);
tmp = readq_relaxed(its->base + GITS_BASER + i * 8);
@@ -1138,9 +1149,22 @@ static struct its_device *its_find_device(struct its_node *its, u32 dev_id)
return its_dev;
}
+static struct its_baser *its_get_baser(struct its_node *its, u32 type)
+{
+ int i;
+
+ for (i = 0; i < GITS_BASER_NR_REGS; i++) {
+ if (GITS_BASER_TYPE(its->tables[i].val) == type)
+ return &its->tables[i];
+ }
+
+ return NULL;
+}
+
static struct its_device *its_create_device(struct its_node *its, u32 dev_id,
int nvecs)
{
+ struct its_baser *baser;
struct its_device *dev;
unsigned long *lpi_map;
unsigned long flags;
@@ -1151,6 +1175,16 @@ static struct its_device *its_create_device(struct its_node *its, u32 dev_id,
int nr_ites;
int sz;
+ baser = its_get_baser(its, GITS_BASER_TYPE_DEVICE);
+
+ /* Don't allow 'dev_id' that exceeds single, flat table limit */
+ if (baser) {
+ if (dev_id >= (PAGE_ORDER_TO_SIZE(baser->order) /
+ GITS_BASER_ENTRY_SIZE(baser->val)))
+ return NULL;
+ } else if (ilog2(dev_id) >= its->device_ids)
+ return NULL;
+
dev = kzalloc(sizeof(*dev), GFP_KERNEL);
/*
* At least one bit of EventID is being used, hence a minimum
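
These ITS hunks add a bound check on DeviceIDs: with a flat device table the
ID must index an existing entry, otherwise it must fit in the Devbits width
advertised by GITS_TYPER. The standalone function below mirrors that check;
the table size and entry size used in main() are assumptions for the example.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static bool dev_id_ok(uint32_t dev_id, bool have_device_table,
		      uint64_t table_bytes, uint32_t entry_size,
		      uint32_t device_id_bits)
{
	if (have_device_table)
		return dev_id < table_bytes / entry_size;

	/* ilog2(dev_id) >= device_ids  <=>  dev_id >= 2^device_ids */
	return dev_id < (1u << device_id_bits);
}

int main(void)
{
	/* assumed: one 64KiB device table with 8-byte entries -> 8192 IDs */
	printf("%d\n", dev_id_ok(8000, true, 64 * 1024, 8, 0));	/* 1 */
	printf("%d\n", dev_id_ok(9000, true, 64 * 1024, 8, 0));	/* 0 */
	printf("%d\n", dev_id_ok(9000, false, 0, 0, 16));		/* 1 */
	return 0;
}
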
diff --git a/drivers/irqchip/irq-gic-v3.c b/drivers/irqchip/irq-gic-v3.c
index 5b7d3c2129d8..fb042ba9a3db 100644
--- a/drivers/irqchip/irq-gic-v3.c
+++ b/drivers/irqchip/irq-gic-v3.c
@@ -15,6 +15,8 @@
* along with this program. If not, see <http://www.gnu.org/licenses/>.
*/
+#define pr_fmt(fmt) "GICv3: " fmt
+
#include <linux/acpi.h>
#include <linux/cpu.h>
#include <linux/cpu_pm.h>
@@ -28,7 +30,9 @@
#include <linux/slab.h>
#include <linux/irqchip.h>
+#include <linux/irqchip/arm-gic-common.h>
#include <linux/irqchip/arm-gic-v3.h>
+#include <linux/irqchip/irq-partition-percpu.h>
#include <asm/cputype.h>
#include <asm/exception.h>
@@ -44,6 +48,7 @@ struct redist_region {
};
struct gic_chip_data {
+ struct fwnode_handle *fwnode;
void __iomem *dist_base;
struct redist_region *redist_regions;
struct rdists rdists;
@@ -51,11 +56,14 @@ struct gic_chip_data {
u64 redist_stride;
u32 nr_redist_regions;
unsigned int irq_nr;
+ struct partition_desc *ppi_descs[16];
};
static struct gic_chip_data gic_data __read_mostly;
static struct static_key supports_deactivate = STATIC_KEY_INIT_TRUE;
+static struct gic_kvm_info gic_v3_kvm_info;
+
#define gic_data_rdist() (this_cpu_ptr(gic_data.rdists.rdist))
#define gic_data_rdist_rd_base() (gic_data_rdist()->rd_base)
#define gic_data_rdist_sgi_base() (gic_data_rdist_rd_base() + SZ_64K)
@@ -364,6 +372,13 @@ static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs
if (static_key_true(&supports_deactivate))
gic_write_dir(irqnr);
#ifdef CONFIG_SMP
+ /*
+ * Unlike GICv2, we don't need an smp_rmb() here.
+ * The control dependency from gic_read_iar to
+ * the ISB in gic_write_eoir is enough to ensure
+ * that any shared data read by handle_IPI will
+ * be read after the ACK.
+ */
handle_IPI(irqnr, regs);
#else
WARN_ONCE(true, "Unexpected SGI received!\n");
@@ -383,6 +398,15 @@ static void __init gic_dist_init(void)
writel_relaxed(0, base + GICD_CTLR);
gic_dist_wait_for_rwp();
+ /*
+ * Configure SPIs as non-secure Group-1. This will only matter
+ * if the GIC only has a single security state. This will not
+ * do the right thing if the kernel is running in secure mode,
+ * but that's not the intended use case anyway.
+ */
+ for (i = 32; i < gic_data.irq_nr; i += 32)
+ writel_relaxed(~0, base + GICD_IGROUPR + i / 8);
+
gic_dist_config(base, gic_data.irq_nr, gic_dist_wait_for_rwp);
/* Enable distributor with ARE, Group1 */
@@ -500,6 +524,9 @@ static void gic_cpu_init(void)
rbase = gic_data_rdist_sgi_base();
+ /* Configure SGIs/PPIs as non-secure Group-1 */
+ writel_relaxed(~0, rbase + GICR_IGROUPR0);
+
gic_cpu_config(rbase, gic_redist_wait_for_rwp);
/* Give LPIs a spin */
@@ -812,10 +839,62 @@ static void gic_irq_domain_free(struct irq_domain *domain, unsigned int virq,
}
}
+static int gic_irq_domain_select(struct irq_domain *d,
+ struct irq_fwspec *fwspec,
+ enum irq_domain_bus_token bus_token)
+{
+ /* Not for us */
+ if (fwspec->fwnode != d->fwnode)
+ return 0;
+
+ /* If this is not DT, then we have a single domain */
+ if (!is_of_node(fwspec->fwnode))
+ return 1;
+
+ /*
+ * If this is a PPI and we have a 4th (non-null) parameter,
+ * then we need to match the partition domain.
+ */
+ if (fwspec->param_count >= 4 &&
+ fwspec->param[0] == 1 && fwspec->param[3] != 0)
+ return d == partition_get_domain(gic_data.ppi_descs[fwspec->param[1]]);
+
+ return d == gic_data.domain;
+}
+
static const struct irq_domain_ops gic_irq_domain_ops = {
.translate = gic_irq_domain_translate,
.alloc = gic_irq_domain_alloc,
.free = gic_irq_domain_free,
+ .select = gic_irq_domain_select,
+};
+
+static int partition_domain_translate(struct irq_domain *d,
+ struct irq_fwspec *fwspec,
+ unsigned long *hwirq,
+ unsigned int *type)
+{
+ struct device_node *np;
+ int ret;
+
+ np = of_find_node_by_phandle(fwspec->param[3]);
+ if (WARN_ON(!np))
+ return -EINVAL;
+
+ ret = partition_translate_id(gic_data.ppi_descs[fwspec->param[1]],
+ of_node_to_fwnode(np));
+ if (ret < 0)
+ return ret;
+
+ *hwirq = ret;
+ *type = fwspec->param[2] & IRQ_TYPE_SENSE_MASK;
+
+ return 0;
+}
+
+static const struct irq_domain_ops partition_domain_ops = {
+ .translate = partition_domain_translate,
+ .select = gic_irq_domain_select,
};
static void gicv3_enable_quirks(void)
@@ -843,6 +922,7 @@ static int __init gic_init_bases(void __iomem *dist_base,
if (static_key_true(&supports_deactivate))
pr_info("GIC: Using split EOI/Deactivate mode\n");
+ gic_data.fwnode = handle;
gic_data.dist_base = dist_base;
gic_data.redist_regions = rdist_regs;
gic_data.nr_redist_regions = nr_redist_regions;
@@ -901,6 +981,143 @@ static int __init gic_validate_dist_version(void __iomem *dist_base)
return 0;
}
+static int get_cpu_number(struct device_node *dn)
+{
+ const __be32 *cell;
+ u64 hwid;
+ int i;
+
+ cell = of_get_property(dn, "reg", NULL);
+ if (!cell)
+ return -1;
+
+ hwid = of_read_number(cell, of_n_addr_cells(dn));
+
+ /*
+ * Non-affinity bits must be set to 0 in the DT
+ */
+ if (hwid & ~MPIDR_HWID_BITMASK)
+ return -1;
+
+ for (i = 0; i < num_possible_cpus(); i++)
+ if (cpu_logical_map(i) == hwid)
+ return i;
+
+ return -1;
+}
+
+/* Create all possible partitions at boot time */
+static void __init gic_populate_ppi_partitions(struct device_node *gic_node)
+{
+ struct device_node *parts_node, *child_part;
+ int part_idx = 0, i;
+ int nr_parts;
+ struct partition_affinity *parts;
+
+ parts_node = of_find_node_by_name(gic_node, "ppi-partitions");
+ if (!parts_node)
+ return;
+
+ nr_parts = of_get_child_count(parts_node);
+
+ if (!nr_parts)
+ return;
+
+ parts = kzalloc(sizeof(*parts) * nr_parts, GFP_KERNEL);
+ if (WARN_ON(!parts))
+ return;
+
+ for_each_child_of_node(parts_node, child_part) {
+ struct partition_affinity *part;
+ int n;
+
+ part = &parts[part_idx];
+
+ part->partition_id = of_node_to_fwnode(child_part);
+
+ pr_info("GIC: PPI partition %s[%d] { ",
+ child_part->name, part_idx);
+
+ n = of_property_count_elems_of_size(child_part, "affinity",
+ sizeof(u32));
+ WARN_ON(n <= 0);
+
+ for (i = 0; i < n; i++) {
+ int err, cpu;
+ u32 cpu_phandle;
+ struct device_node *cpu_node;
+
+ err = of_property_read_u32_index(child_part, "affinity",
+ i, &cpu_phandle);
+ if (WARN_ON(err))
+ continue;
+
+ cpu_node = of_find_node_by_phandle(cpu_phandle);
+ if (WARN_ON(!cpu_node))
+ continue;
+
+ cpu = get_cpu_number(cpu_node);
+ if (WARN_ON(cpu == -1))
+ continue;
+
+ pr_cont("%s[%d] ", cpu_node->full_name, cpu);
+
+ cpumask_set_cpu(cpu, &part->mask);
+ }
+
+ pr_cont("}\n");
+ part_idx++;
+ }
+
+ for (i = 0; i < 16; i++) {
+ unsigned int irq;
+ struct partition_desc *desc;
+ struct irq_fwspec ppi_fwspec = {
+ .fwnode = gic_data.fwnode,
+ .param_count = 3,
+ .param = {
+ [0] = 1,
+ [1] = i,
+ [2] = IRQ_TYPE_NONE,
+ },
+ };
+
+ irq = irq_create_fwspec_mapping(&ppi_fwspec);
+ if (WARN_ON(!irq))
+ continue;
+ desc = partition_create_desc(gic_data.fwnode, parts, nr_parts,
+ irq, &partition_domain_ops);
+ if (WARN_ON(!desc))
+ continue;
+
+ gic_data.ppi_descs[i] = desc;
+ }
+}
+
+static void __init gic_of_setup_kvm_info(struct device_node *node)
+{
+ int ret;
+ struct resource r;
+ u32 gicv_idx;
+
+ gic_v3_kvm_info.type = GIC_V3;
+
+ gic_v3_kvm_info.maint_irq = irq_of_parse_and_map(node, 0);
+ if (!gic_v3_kvm_info.maint_irq)
+ return;
+
+ if (of_property_read_u32(node, "#redistributor-regions",
+ &gicv_idx))
+ gicv_idx = 1;
+
+ gicv_idx += 3; /* Also skip GICD, GICC, GICH */
+ ret = of_address_to_resource(node, gicv_idx, &r);
+ if (!ret)
+ gic_v3_kvm_info.vcpu = r;
+
+ gic_set_kvm_info(&gic_v3_kvm_info);
+}
+
static int __init gic_of_init(struct device_node *node, struct device_node *parent)
{
void __iomem *dist_base;
@@ -952,8 +1169,12 @@ static int __init gic_of_init(struct device_node *node, struct device_node *pare
err = gic_init_bases(dist_base, rdist_regs, nr_redist_regions,
redist_stride, &node->fwnode);
- if (!err)
- return 0;
+ if (err)
+ goto out_unmap_rdist;
+
+ gic_populate_ppi_partitions(node);
+ gic_of_setup_kvm_info(node);
+ return 0;
out_unmap_rdist:
for (i = 0; i < nr_redist_regions; i++)
@@ -968,19 +1189,25 @@ out_unmap_dist:
IRQCHIP_DECLARE(gic_v3, "arm,gic-v3", gic_of_init);
#ifdef CONFIG_ACPI
-static void __iomem *dist_base;
-static struct redist_region *redist_regs __initdata;
-static u32 nr_redist_regions __initdata;
-static bool single_redist;
+static struct
+{
+ void __iomem *dist_base;
+ struct redist_region *redist_regs;
+ u32 nr_redist_regions;
+ bool single_redist;
+ u32 maint_irq;
+ int maint_irq_mode;
+ phys_addr_t vcpu_base;
+} acpi_data __initdata;
static void __init
gic_acpi_register_redist(phys_addr_t phys_base, void __iomem *redist_base)
{
static int count = 0;
- redist_regs[count].phys_base = phys_base;
- redist_regs[count].redist_base = redist_base;
- redist_regs[count].single_redist = single_redist;
+ acpi_data.redist_regs[count].phys_base = phys_base;
+ acpi_data.redist_regs[count].redist_base = redist_base;
+ acpi_data.redist_regs[count].single_redist = acpi_data.single_redist;
count++;
}
@@ -1008,7 +1235,7 @@ gic_acpi_parse_madt_gicc(struct acpi_subtable_header *header,
{
struct acpi_madt_generic_interrupt *gicc =
(struct acpi_madt_generic_interrupt *)header;
- u32 reg = readl_relaxed(dist_base + GICD_PIDR2) & GIC_PIDR2_ARCH_MASK;
+ u32 reg = readl_relaxed(acpi_data.dist_base + GICD_PIDR2) & GIC_PIDR2_ARCH_MASK;
u32 size = reg == GIC_PIDR2_ARCH_GICv4 ? SZ_64K * 4 : SZ_64K * 2;
void __iomem *redist_base;
@@ -1025,7 +1252,7 @@ static int __init gic_acpi_collect_gicr_base(void)
acpi_tbl_entry_handler redist_parser;
enum acpi_madt_type type;
- if (single_redist) {
+ if (acpi_data.single_redist) {
type = ACPI_MADT_TYPE_GENERIC_INTERRUPT;
redist_parser = gic_acpi_parse_madt_gicc;
} else {
@@ -1076,14 +1303,14 @@ static int __init gic_acpi_count_gicr_regions(void)
count = acpi_table_parse_madt(ACPI_MADT_TYPE_GENERIC_REDISTRIBUTOR,
gic_acpi_match_gicr, 0);
if (count > 0) {
- single_redist = false;
+ acpi_data.single_redist = false;
return count;
}
count = acpi_table_parse_madt(ACPI_MADT_TYPE_GENERIC_INTERRUPT,
gic_acpi_match_gicc, 0);
if (count > 0)
- single_redist = true;
+ acpi_data.single_redist = true;
return count;
}
@@ -1103,36 +1330,117 @@ static bool __init acpi_validate_gic_table(struct acpi_subtable_header *header,
if (count <= 0)
return false;
- nr_redist_regions = count;
+ acpi_data.nr_redist_regions = count;
return true;
}
+static int __init gic_acpi_parse_virt_madt_gicc(struct acpi_subtable_header *header,
+ const unsigned long end)
+{
+ struct acpi_madt_generic_interrupt *gicc =
+ (struct acpi_madt_generic_interrupt *)header;
+ int maint_irq_mode;
+ static int first_madt = true;
+
+ /* Skip unusable CPUs */
+ if (!(gicc->flags & ACPI_MADT_ENABLED))
+ return 0;
+
+ maint_irq_mode = (gicc->flags & ACPI_MADT_VGIC_IRQ_MODE) ?
+ ACPI_EDGE_SENSITIVE : ACPI_LEVEL_SENSITIVE;
+
+ if (first_madt) {
+ first_madt = false;
+
+ acpi_data.maint_irq = gicc->vgic_interrupt;
+ acpi_data.maint_irq_mode = maint_irq_mode;
+ acpi_data.vcpu_base = gicc->gicv_base_address;
+
+ return 0;
+ }
+
+ /*
+ * The maintenance interrupt and GICV should be the same for every CPU
+ */
+ if ((acpi_data.maint_irq != gicc->vgic_interrupt) ||
+ (acpi_data.maint_irq_mode != maint_irq_mode) ||
+ (acpi_data.vcpu_base != gicc->gicv_base_address))
+ return -EINVAL;
+
+ return 0;
+}
+
+static bool __init gic_acpi_collect_virt_info(void)
+{
+ int count;
+
+ count = acpi_table_parse_madt(ACPI_MADT_TYPE_GENERIC_INTERRUPT,
+ gic_acpi_parse_virt_madt_gicc, 0);
+
+ return (count > 0);
+}
+
#define ACPI_GICV3_DIST_MEM_SIZE (SZ_64K)
+#define ACPI_GICV2_VCTRL_MEM_SIZE (SZ_4K)
+#define ACPI_GICV2_VCPU_MEM_SIZE (SZ_8K)
+
+static void __init gic_acpi_setup_kvm_info(void)
+{
+ int irq;
+
+ if (!gic_acpi_collect_virt_info()) {
+ pr_warn("Unable to get hardware information used for virtualization\n");
+ return;
+ }
+
+ gic_v3_kvm_info.type = GIC_V3;
+
+ irq = acpi_register_gsi(NULL, acpi_data.maint_irq,
+ acpi_data.maint_irq_mode,
+ ACPI_ACTIVE_HIGH);
+ if (irq <= 0)
+ return;
+
+ gic_v3_kvm_info.maint_irq = irq;
+
+ if (acpi_data.vcpu_base) {
+ struct resource *vcpu = &gic_v3_kvm_info.vcpu;
+
+ vcpu->flags = IORESOURCE_MEM;
+ vcpu->start = acpi_data.vcpu_base;
+ vcpu->end = vcpu->start + ACPI_GICV2_VCPU_MEM_SIZE - 1;
+ }
+
+ gic_set_kvm_info(&gic_v3_kvm_info);
+}
static int __init
gic_acpi_init(struct acpi_subtable_header *header, const unsigned long end)
{
struct acpi_madt_generic_distributor *dist;
struct fwnode_handle *domain_handle;
+ size_t size;
int i, err;
/* Get distributor base address */
dist = (struct acpi_madt_generic_distributor *)header;
- dist_base = ioremap(dist->base_address, ACPI_GICV3_DIST_MEM_SIZE);
- if (!dist_base) {
+ acpi_data.dist_base = ioremap(dist->base_address,
+ ACPI_GICV3_DIST_MEM_SIZE);
+ if (!acpi_data.dist_base) {
pr_err("Unable to map GICD registers\n");
return -ENOMEM;
}
- err = gic_validate_dist_version(dist_base);
+ err = gic_validate_dist_version(acpi_data.dist_base);
if (err) {
- pr_err("No distributor detected at @%p, giving up", dist_base);
+ pr_err("No distributor detected at @%p, giving up",
+ acpi_data.dist_base);
goto out_dist_unmap;
}
- redist_regs = kzalloc(sizeof(*redist_regs) * nr_redist_regions,
- GFP_KERNEL);
- if (!redist_regs) {
+ size = sizeof(*acpi_data.redist_regs) * acpi_data.nr_redist_regions;
+ acpi_data.redist_regs = kzalloc(size, GFP_KERNEL);
+ if (!acpi_data.redist_regs) {
err = -ENOMEM;
goto out_dist_unmap;
}
@@ -1141,29 +1449,31 @@ gic_acpi_init(struct acpi_subtable_header *header, const unsigned long end)
if (err)
goto out_redist_unmap;
- domain_handle = irq_domain_alloc_fwnode(dist_base);
+ domain_handle = irq_domain_alloc_fwnode(acpi_data.dist_base);
if (!domain_handle) {
err = -ENOMEM;
goto out_redist_unmap;
}
- err = gic_init_bases(dist_base, redist_regs, nr_redist_regions, 0,
- domain_handle);
+ err = gic_init_bases(acpi_data.dist_base, acpi_data.redist_regs,
+ acpi_data.nr_redist_regions, 0, domain_handle);
if (err)
goto out_fwhandle_free;
acpi_set_irq_model(ACPI_IRQ_MODEL_GIC, domain_handle);
+ gic_acpi_setup_kvm_info();
+
return 0;
out_fwhandle_free:
irq_domain_free_fwnode(domain_handle);
out_redist_unmap:
- for (i = 0; i < nr_redist_regions; i++)
- if (redist_regs[i].redist_base)
- iounmap(redist_regs[i].redist_base);
- kfree(redist_regs);
+ for (i = 0; i < acpi_data.nr_redist_regions; i++)
+ if (acpi_data.redist_regs[i].redist_base)
+ iounmap(acpi_data.redist_regs[i].redist_base);
+ kfree(acpi_data.redist_regs);
out_dist_unmap:
- iounmap(dist_base);
+ iounmap(acpi_data.dist_base);
return err;
}
IRQCHIP_ACPI_DECLARE(gic_v3, ACPI_MADT_TYPE_GENERIC_DISTRIBUTOR,
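
Much of the GICv3 diff wires up PPI partitions, and the key routing decision
is the new gic_irq_domain_select(): a four-cell specifier whose first cell is
1 (PPI) and whose fourth cell is a non-zero partition phandle is steered to
that PPI's partition domain, while everything else stays with the main GIC
domain. The toy predicate below uses simplified stand-ins for struct
irq_fwspec and the domain objects.

#include <stdbool.h>
#include <stdio.h>

struct fwspec {
	int param_count;
	unsigned int param[4];
};

static bool wants_partition_domain(const struct fwspec *fw)
{
	return fw->param_count >= 4 && fw->param[0] == 1 && fw->param[3] != 0;
}

int main(void)
{
	struct fwspec spi = { 3, { 0, 42, 4, 0 } };	/* plain SPI, 3 cells */
	struct fwspec ppi = { 4, { 1, 9, 4, 0x17 } };	/* PPI 9 with partition phandle */

	printf("spi -> %s\n", wants_partition_domain(&spi) ?
	       "partition domain" : "gic domain");
	printf("ppi -> %s\n", wants_partition_domain(&ppi) ?
	       "partition domain" : "gic domain");
	return 0;
}
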
diff --git a/drivers/irqchip/irq-gic.c b/drivers/irqchip/irq-gic.c
index 282344b95ec2..fbc4ae2afd29 100644
--- a/drivers/irqchip/irq-gic.c
+++ b/drivers/irqchip/irq-gic.c
@@ -55,7 +55,7 @@
static void gic_check_cpu_features(void)
{
- WARN_TAINT_ONCE(cpus_have_cap(ARM64_HAS_SYSREG_GIC_CPUIF),
+ WARN_TAINT_ONCE(this_cpu_has_cap(ARM64_HAS_SYSREG_GIC_CPUIF),
TAINT_CPU_OUT_OF_SPEC,
"GICv3 system registers enabled, broken firmware!\n");
}
@@ -72,6 +72,9 @@ struct gic_chip_data {
struct irq_chip chip;
union gic_base dist_base;
union gic_base cpu_base;
+ void __iomem *raw_dist_base;
+ void __iomem *raw_cpu_base;
+ u32 percpu_offset;
#ifdef CONFIG_CPU_PM
u32 saved_spi_enable[DIV_ROUND_UP(1020, 32)];
u32 saved_spi_active[DIV_ROUND_UP(1020, 32)];
@@ -102,6 +105,8 @@ static struct static_key supports_deactivate = STATIC_KEY_INIT_TRUE;
static struct gic_chip_data gic_data[CONFIG_ARM_GIC_MAX_NR] __read_mostly;
+static struct gic_kvm_info gic_v2_kvm_info;
+
#ifdef CONFIG_GIC_NON_BANKED
static void __iomem *gic_get_percpu_base(union gic_base *base)
{
@@ -344,6 +349,14 @@ static void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
if (static_key_true(&supports_deactivate))
writel_relaxed(irqstat, cpu_base + GIC_CPU_DEACTIVATE);
#ifdef CONFIG_SMP
+ /*
+ * Ensure any shared data written by the CPU sending
+ * the IPI is read after we've read the ACK register
+ * on the GIC.
+ *
+ * Pairs with the write barrier in gic_raise_softirq
+ */
+ smp_rmb();
handle_IPI(irqnr, regs);
#endif
continue;
@@ -391,20 +404,6 @@ static struct irq_chip gic_chip = {
IRQCHIP_MASK_ON_SUSPEND,
};
-static struct irq_chip gic_eoimode1_chip = {
- .name = "GICv2",
- .irq_mask = gic_eoimode1_mask_irq,
- .irq_unmask = gic_unmask_irq,
- .irq_eoi = gic_eoimode1_eoi_irq,
- .irq_set_type = gic_set_type,
- .irq_get_irqchip_state = gic_irq_get_irqchip_state,
- .irq_set_irqchip_state = gic_irq_set_irqchip_state,
- .irq_set_vcpu_affinity = gic_irq_set_vcpu_affinity,
- .flags = IRQCHIP_SET_TYPE_MASKED |
- IRQCHIP_SKIP_SET_WAKE |
- IRQCHIP_MASK_ON_SUSPEND,
-};
-
void __init gic_cascade_irq(unsigned int gic_nr, unsigned int irq)
{
BUG_ON(gic_nr >= CONFIG_ARM_GIC_MAX_NR);
@@ -473,7 +472,7 @@ static void __init gic_dist_init(struct gic_chip_data *gic)
writel_relaxed(GICD_ENABLE, base + GIC_DIST_CTRL);
}
-static void gic_cpu_init(struct gic_chip_data *gic)
+static int gic_cpu_init(struct gic_chip_data *gic)
{
void __iomem *dist_base = gic_data_dist_base(gic);
void __iomem *base = gic_data_cpu_base(gic);
@@ -489,7 +488,10 @@ static void gic_cpu_init(struct gic_chip_data *gic)
/*
* Get what the GIC says our CPU mask is.
*/
- BUG_ON(cpu >= NR_GIC_CPU_IF);
+ if (WARN_ON(cpu >= NR_GIC_CPU_IF))
+ return -EINVAL;
+
+ gic_check_cpu_features();
cpu_mask = gic_get_cpumask(gic);
gic_cpu_map[cpu] = cpu_mask;
@@ -506,6 +508,8 @@ static void gic_cpu_init(struct gic_chip_data *gic)
writel_relaxed(GICC_INT_PRI_THRESHOLD, base + GIC_CPU_PRIMASK);
gic_cpu_if_up(gic);
+
+ return 0;
}
int gic_cpu_if_down(unsigned int gic_nr)
@@ -531,34 +535,35 @@ int gic_cpu_if_down(unsigned int gic_nr)
* this function, no interrupts will be delivered by the GIC, and another
* platform-specific wakeup source must be enabled.
*/
-static void gic_dist_save(unsigned int gic_nr)
+static void gic_dist_save(struct gic_chip_data *gic)
{
unsigned int gic_irqs;
void __iomem *dist_base;
int i;
- BUG_ON(gic_nr >= CONFIG_ARM_GIC_MAX_NR);
+ if (WARN_ON(!gic))
+ return;
- gic_irqs = gic_data[gic_nr].gic_irqs;
- dist_base = gic_data_dist_base(&gic_data[gic_nr]);
+ gic_irqs = gic->gic_irqs;
+ dist_base = gic_data_dist_base(gic);
if (!dist_base)
return;
for (i = 0; i < DIV_ROUND_UP(gic_irqs, 16); i++)
- gic_data[gic_nr].saved_spi_conf[i] =
+ gic->saved_spi_conf[i] =
readl_relaxed(dist_base + GIC_DIST_CONFIG + i * 4);
for (i = 0; i < DIV_ROUND_UP(gic_irqs, 4); i++)
- gic_data[gic_nr].saved_spi_target[i] =
+ gic->saved_spi_target[i] =
readl_relaxed(dist_base + GIC_DIST_TARGET + i * 4);
for (i = 0; i < DIV_ROUND_UP(gic_irqs, 32); i++)
- gic_data[gic_nr].saved_spi_enable[i] =
+ gic->saved_spi_enable[i] =
readl_relaxed(dist_base + GIC_DIST_ENABLE_SET + i * 4);
for (i = 0; i < DIV_ROUND_UP(gic_irqs, 32); i++)
- gic_data[gic_nr].saved_spi_active[i] =
+ gic->saved_spi_active[i] =
readl_relaxed(dist_base + GIC_DIST_ACTIVE_SET + i * 4);
}
@@ -569,16 +574,17 @@ static void gic_dist_save(unsigned int gic_nr)
* handled normally, but any edge interrupts that occurred will not be seen by
* the GIC and need to be handled by the platform-specific wakeup source.
*/
-static void gic_dist_restore(unsigned int gic_nr)
+static void gic_dist_restore(struct gic_chip_data *gic)
{
unsigned int gic_irqs;
unsigned int i;
void __iomem *dist_base;
- BUG_ON(gic_nr >= CONFIG_ARM_GIC_MAX_NR);
+ if (WARN_ON(!gic))
+ return;
- gic_irqs = gic_data[gic_nr].gic_irqs;
- dist_base = gic_data_dist_base(&gic_data[gic_nr]);
+ gic_irqs = gic->gic_irqs;
+ dist_base = gic_data_dist_base(gic);
if (!dist_base)
return;
@@ -586,7 +592,7 @@ static void gic_dist_restore(unsigned int gic_nr)
writel_relaxed(GICD_DISABLE, dist_base + GIC_DIST_CTRL);
for (i = 0; i < DIV_ROUND_UP(gic_irqs, 16); i++)
- writel_relaxed(gic_data[gic_nr].saved_spi_conf[i],
+ writel_relaxed(gic->saved_spi_conf[i],
dist_base + GIC_DIST_CONFIG + i * 4);
for (i = 0; i < DIV_ROUND_UP(gic_irqs, 4); i++)
@@ -594,85 +600,87 @@ static void gic_dist_restore(unsigned int gic_nr)
dist_base + GIC_DIST_PRI + i * 4);
for (i = 0; i < DIV_ROUND_UP(gic_irqs, 4); i++)
- writel_relaxed(gic_data[gic_nr].saved_spi_target[i],
+ writel_relaxed(gic->saved_spi_target[i],
dist_base + GIC_DIST_TARGET + i * 4);
for (i = 0; i < DIV_ROUND_UP(gic_irqs, 32); i++) {
writel_relaxed(GICD_INT_EN_CLR_X32,
dist_base + GIC_DIST_ENABLE_CLEAR + i * 4);
- writel_relaxed(gic_data[gic_nr].saved_spi_enable[i],
+ writel_relaxed(gic->saved_spi_enable[i],
dist_base + GIC_DIST_ENABLE_SET + i * 4);
}
for (i = 0; i < DIV_ROUND_UP(gic_irqs, 32); i++) {
writel_relaxed(GICD_INT_EN_CLR_X32,
dist_base + GIC_DIST_ACTIVE_CLEAR + i * 4);
- writel_relaxed(gic_data[gic_nr].saved_spi_active[i],
+ writel_relaxed(gic->saved_spi_active[i],
dist_base + GIC_DIST_ACTIVE_SET + i * 4);
}
writel_relaxed(GICD_ENABLE, dist_base + GIC_DIST_CTRL);
}
-static void gic_cpu_save(unsigned int gic_nr)
+static void gic_cpu_save(struct gic_chip_data *gic)
{
int i;
u32 *ptr;
void __iomem *dist_base;
void __iomem *cpu_base;
- BUG_ON(gic_nr >= CONFIG_ARM_GIC_MAX_NR);
+ if (WARN_ON(!gic))
+ return;
- dist_base = gic_data_dist_base(&gic_data[gic_nr]);
- cpu_base = gic_data_cpu_base(&gic_data[gic_nr]);
+ dist_base = gic_data_dist_base(gic);
+ cpu_base = gic_data_cpu_base(gic);
if (!dist_base || !cpu_base)
return;
- ptr = raw_cpu_ptr(gic_data[gic_nr].saved_ppi_enable);
+ ptr = raw_cpu_ptr(gic->saved_ppi_enable);
for (i = 0; i < DIV_ROUND_UP(32, 32); i++)
ptr[i] = readl_relaxed(dist_base + GIC_DIST_ENABLE_SET + i * 4);
- ptr = raw_cpu_ptr(gic_data[gic_nr].saved_ppi_active);
+ ptr = raw_cpu_ptr(gic->saved_ppi_active);
for (i = 0; i < DIV_ROUND_UP(32, 32); i++)
ptr[i] = readl_relaxed(dist_base + GIC_DIST_ACTIVE_SET + i * 4);
- ptr = raw_cpu_ptr(gic_data[gic_nr].saved_ppi_conf);
+ ptr = raw_cpu_ptr(gic->saved_ppi_conf);
for (i = 0; i < DIV_ROUND_UP(32, 16); i++)
ptr[i] = readl_relaxed(dist_base + GIC_DIST_CONFIG + i * 4);
}
-static void gic_cpu_restore(unsigned int gic_nr)
+static void gic_cpu_restore(struct gic_chip_data *gic)
{
int i;
u32 *ptr;
void __iomem *dist_base;
void __iomem *cpu_base;
- BUG_ON(gic_nr >= CONFIG_ARM_GIC_MAX_NR);
+ if (WARN_ON(!gic))
+ return;
- dist_base = gic_data_dist_base(&gic_data[gic_nr]);
- cpu_base = gic_data_cpu_base(&gic_data[gic_nr]);
+ dist_base = gic_data_dist_base(gic);
+ cpu_base = gic_data_cpu_base(gic);
if (!dist_base || !cpu_base)
return;
- ptr = raw_cpu_ptr(gic_data[gic_nr].saved_ppi_enable);
+ ptr = raw_cpu_ptr(gic->saved_ppi_enable);
for (i = 0; i < DIV_ROUND_UP(32, 32); i++) {
writel_relaxed(GICD_INT_EN_CLR_X32,
dist_base + GIC_DIST_ENABLE_CLEAR + i * 4);
writel_relaxed(ptr[i], dist_base + GIC_DIST_ENABLE_SET + i * 4);
}
- ptr = raw_cpu_ptr(gic_data[gic_nr].saved_ppi_active);
+ ptr = raw_cpu_ptr(gic->saved_ppi_active);
for (i = 0; i < DIV_ROUND_UP(32, 32); i++) {
writel_relaxed(GICD_INT_EN_CLR_X32,
dist_base + GIC_DIST_ACTIVE_CLEAR + i * 4);
writel_relaxed(ptr[i], dist_base + GIC_DIST_ACTIVE_SET + i * 4);
}
- ptr = raw_cpu_ptr(gic_data[gic_nr].saved_ppi_conf);
+ ptr = raw_cpu_ptr(gic->saved_ppi_conf);
for (i = 0; i < DIV_ROUND_UP(32, 16); i++)
writel_relaxed(ptr[i], dist_base + GIC_DIST_CONFIG + i * 4);
@@ -681,7 +689,7 @@ static void gic_cpu_restore(unsigned int gic_nr)
dist_base + GIC_DIST_PRI + i * 4);
writel_relaxed(GICC_INT_PRI_THRESHOLD, cpu_base + GIC_CPU_PRIMASK);
- gic_cpu_if_up(&gic_data[gic_nr]);
+ gic_cpu_if_up(gic);
}
static int gic_notifier(struct notifier_block *self, unsigned long cmd, void *v)
@@ -696,18 +704,18 @@ static int gic_notifier(struct notifier_block *self, unsigned long cmd, void *v)
#endif
switch (cmd) {
case CPU_PM_ENTER:
- gic_cpu_save(i);
+ gic_cpu_save(&gic_data[i]);
break;
case CPU_PM_ENTER_FAILED:
case CPU_PM_EXIT:
- gic_cpu_restore(i);
+ gic_cpu_restore(&gic_data[i]);
break;
case CPU_CLUSTER_PM_ENTER:
- gic_dist_save(i);
+ gic_dist_save(&gic_data[i]);
break;
case CPU_CLUSTER_PM_ENTER_FAILED:
case CPU_CLUSTER_PM_EXIT:
- gic_dist_restore(i);
+ gic_dist_restore(&gic_data[i]);
break;
}
}
@@ -719,26 +727,39 @@ static struct notifier_block gic_notifier_block = {
.notifier_call = gic_notifier,
};
-static void __init gic_pm_init(struct gic_chip_data *gic)
+static int __init gic_pm_init(struct gic_chip_data *gic)
{
gic->saved_ppi_enable = __alloc_percpu(DIV_ROUND_UP(32, 32) * 4,
sizeof(u32));
- BUG_ON(!gic->saved_ppi_enable);
+ if (WARN_ON(!gic->saved_ppi_enable))
+ return -ENOMEM;
gic->saved_ppi_active = __alloc_percpu(DIV_ROUND_UP(32, 32) * 4,
sizeof(u32));
- BUG_ON(!gic->saved_ppi_active);
+ if (WARN_ON(!gic->saved_ppi_active))
+ goto free_ppi_enable;
gic->saved_ppi_conf = __alloc_percpu(DIV_ROUND_UP(32, 16) * 4,
sizeof(u32));
- BUG_ON(!gic->saved_ppi_conf);
+ if (WARN_ON(!gic->saved_ppi_conf))
+ goto free_ppi_active;
if (gic == &gic_data[0])
cpu_pm_register_notifier(&gic_notifier_block);
+
+ return 0;
+
+free_ppi_active:
+ free_percpu(gic->saved_ppi_active);
+free_ppi_enable:
+ free_percpu(gic->saved_ppi_enable);
+
+ return -ENOMEM;
}
#else
-static void __init gic_pm_init(struct gic_chip_data *gic)
+static int __init gic_pm_init(struct gic_chip_data *gic)
{
+ return 0;
}
#endif
@@ -1011,63 +1032,63 @@ static const struct irq_domain_ops gic_irq_domain_ops = {
.unmap = gic_irq_domain_unmap,
};
-static void __init __gic_init_bases(unsigned int gic_nr, int irq_start,
- void __iomem *dist_base, void __iomem *cpu_base,
- u32 percpu_offset, struct fwnode_handle *handle)
+static int __init __gic_init_bases(struct gic_chip_data *gic, int irq_start,
+ struct fwnode_handle *handle)
{
irq_hw_number_t hwirq_base;
- struct gic_chip_data *gic;
- int gic_irqs, irq_base, i;
-
- BUG_ON(gic_nr >= CONFIG_ARM_GIC_MAX_NR);
+ int gic_irqs, irq_base, i, ret;
- gic_check_cpu_features();
-
- gic = &gic_data[gic_nr];
+ if (WARN_ON(!gic || gic->domain))
+ return -EINVAL;
/* Initialize irq_chip */
- if (static_key_true(&supports_deactivate) && gic_nr == 0) {
- gic->chip = gic_eoimode1_chip;
+ gic->chip = gic_chip;
+
+ if (static_key_true(&supports_deactivate) && gic == &gic_data[0]) {
+ gic->chip.irq_mask = gic_eoimode1_mask_irq;
+ gic->chip.irq_eoi = gic_eoimode1_eoi_irq;
+ gic->chip.irq_set_vcpu_affinity = gic_irq_set_vcpu_affinity;
+ gic->chip.name = kasprintf(GFP_KERNEL, "GICv2");
} else {
- gic->chip = gic_chip;
- gic->chip.name = kasprintf(GFP_KERNEL, "GIC-%d", gic_nr);
+ gic->chip.name = kasprintf(GFP_KERNEL, "GIC-%d",
+ (int)(gic - &gic_data[0]));
}
#ifdef CONFIG_SMP
- if (gic_nr == 0)
+ if (gic == &gic_data[0])
gic->chip.irq_set_affinity = gic_set_affinity;
#endif
-#ifdef CONFIG_GIC_NON_BANKED
- if (percpu_offset) { /* Frankein-GIC without banked registers... */
+ if (IS_ENABLED(CONFIG_GIC_NON_BANKED) && gic->percpu_offset) {
+ /* Frankein-GIC without banked registers... */
unsigned int cpu;
gic->dist_base.percpu_base = alloc_percpu(void __iomem *);
gic->cpu_base.percpu_base = alloc_percpu(void __iomem *);
if (WARN_ON(!gic->dist_base.percpu_base ||
!gic->cpu_base.percpu_base)) {
- free_percpu(gic->dist_base.percpu_base);
- free_percpu(gic->cpu_base.percpu_base);
- return;
+ ret = -ENOMEM;
+ goto error;
}
for_each_possible_cpu(cpu) {
u32 mpidr = cpu_logical_map(cpu);
u32 core_id = MPIDR_AFFINITY_LEVEL(mpidr, 0);
- unsigned long offset = percpu_offset * core_id;
- *per_cpu_ptr(gic->dist_base.percpu_base, cpu) = dist_base + offset;
- *per_cpu_ptr(gic->cpu_base.percpu_base, cpu) = cpu_base + offset;
+ unsigned long offset = gic->percpu_offset * core_id;
+ *per_cpu_ptr(gic->dist_base.percpu_base, cpu) =
+ gic->raw_dist_base + offset;
+ *per_cpu_ptr(gic->cpu_base.percpu_base, cpu) =
+ gic->raw_cpu_base + offset;
}
gic_set_base_accessor(gic, gic_get_percpu_base);
- } else
-#endif
- { /* Normal, sane GIC... */
- WARN(percpu_offset,
+ } else {
+ /* Normal, sane GIC... */
+ WARN(gic->percpu_offset,
"GIC_NON_BANKED not enabled, ignoring %08x offset!",
- percpu_offset);
- gic->dist_base.common_base = dist_base;
- gic->cpu_base.common_base = cpu_base;
+ gic->percpu_offset);
+ gic->dist_base.common_base = gic->raw_dist_base;
+ gic->cpu_base.common_base = gic->raw_cpu_base;
gic_set_base_accessor(gic, gic_get_common_base);
}
@@ -1090,7 +1111,7 @@ static void __init __gic_init_bases(unsigned int gic_nr, int irq_start,
* For primary GICs, skip over SGIs.
* For secondary GICs, skip over PPIs, too.
*/
- if (gic_nr == 0 && (irq_start & 31) > 0) {
+ if (gic == &gic_data[0] && (irq_start & 31) > 0) {
hwirq_base = 16;
if (irq_start != -1)
irq_start = (irq_start & ~31) + 16;
@@ -1102,7 +1123,7 @@ static void __init __gic_init_bases(unsigned int gic_nr, int irq_start,
irq_base = irq_alloc_descs(irq_start, 16, gic_irqs,
numa_node_id());
- if (IS_ERR_VALUE(irq_base)) {
+ if (irq_base < 0) {
WARN(1, "Cannot allocate irq_descs @ IRQ%d, assuming pre-allocated\n",
irq_start);
irq_base = irq_start;
@@ -1112,10 +1133,12 @@ static void __init __gic_init_bases(unsigned int gic_nr, int irq_start,
hwirq_base, &gic_irq_domain_ops, gic);
}
- if (WARN_ON(!gic->domain))
- return;
+ if (WARN_ON(!gic->domain)) {
+ ret = -ENODEV;
+ goto error;
+ }
- if (gic_nr == 0) {
+ if (gic == &gic_data[0]) {
/*
* Initialize the CPU interface map to all CPUs.
* It will be refined as each CPU probes its ID.
@@ -1133,19 +1156,57 @@ static void __init __gic_init_bases(unsigned int gic_nr, int irq_start,
}
gic_dist_init(gic);
- gic_cpu_init(gic);
- gic_pm_init(gic);
+ ret = gic_cpu_init(gic);
+ if (ret)
+ goto error;
+
+ ret = gic_pm_init(gic);
+ if (ret)
+ goto error;
+
+ return 0;
+
+error:
+ if (IS_ENABLED(CONFIG_GIC_NON_BANKED) && gic->percpu_offset) {
+ free_percpu(gic->dist_base.percpu_base);
+ free_percpu(gic->cpu_base.percpu_base);
+ }
+
+ kfree(gic->chip.name);
+
+ return ret;
}
void __init gic_init(unsigned int gic_nr, int irq_start,
void __iomem *dist_base, void __iomem *cpu_base)
{
+ struct gic_chip_data *gic;
+
+ if (WARN_ON(gic_nr >= CONFIG_ARM_GIC_MAX_NR))
+ return;
+
/*
* Non-DT/ACPI systems won't run a hypervisor, so let's not
* bother with these...
*/
static_key_slow_dec(&supports_deactivate);
- __gic_init_bases(gic_nr, irq_start, dist_base, cpu_base, 0, NULL);
+
+ gic = &gic_data[gic_nr];
+ gic->raw_dist_base = dist_base;
+ gic->raw_cpu_base = cpu_base;
+
+ __gic_init_bases(gic, irq_start, NULL);
+}
+
+static void gic_teardown(struct gic_chip_data *gic)
+{
+ if (WARN_ON(!gic))
+ return;
+
+ if (gic->raw_dist_base)
+ iounmap(gic->raw_dist_base);
+ if (gic->raw_cpu_base)
+ iounmap(gic->raw_cpu_base);
}
#ifdef CONFIG_OF
@@ -1189,37 +1250,88 @@ static bool gic_check_eoimode(struct device_node *node, void __iomem **base)
return true;
}
+static int __init gic_of_setup(struct gic_chip_data *gic, struct device_node *node)
+{
+ if (!gic || !node)
+ return -EINVAL;
+
+ gic->raw_dist_base = of_iomap(node, 0);
+ if (WARN(!gic->raw_dist_base, "unable to map gic dist registers\n"))
+ goto error;
+
+ gic->raw_cpu_base = of_iomap(node, 1);
+ if (WARN(!gic->raw_cpu_base, "unable to map gic cpu registers\n"))
+ goto error;
+
+ if (of_property_read_u32(node, "cpu-offset", &gic->percpu_offset))
+ gic->percpu_offset = 0;
+
+ return 0;
+
+error:
+ gic_teardown(gic);
+
+ return -ENOMEM;
+}
+
+static void __init gic_of_setup_kvm_info(struct device_node *node)
+{
+ int ret;
+ struct resource *vctrl_res = &gic_v2_kvm_info.vctrl;
+ struct resource *vcpu_res = &gic_v2_kvm_info.vcpu;
+
+ gic_v2_kvm_info.type = GIC_V2;
+
+ gic_v2_kvm_info.maint_irq = irq_of_parse_and_map(node, 0);
+ if (!gic_v2_kvm_info.maint_irq)
+ return;
+
+ ret = of_address_to_resource(node, 2, vctrl_res);
+ if (ret)
+ return;
+
+ ret = of_address_to_resource(node, 3, vcpu_res);
+ if (ret)
+ return;
+
+ gic_set_kvm_info(&gic_v2_kvm_info);
+}
+
int __init
gic_of_init(struct device_node *node, struct device_node *parent)
{
- void __iomem *cpu_base;
- void __iomem *dist_base;
- u32 percpu_offset;
- int irq;
+ struct gic_chip_data *gic;
+ int irq, ret;
if (WARN_ON(!node))
return -ENODEV;
- dist_base = of_iomap(node, 0);
- WARN(!dist_base, "unable to map gic dist registers\n");
+ if (WARN_ON(gic_cnt >= CONFIG_ARM_GIC_MAX_NR))
+ return -EINVAL;
+
+ gic = &gic_data[gic_cnt];
- cpu_base = of_iomap(node, 1);
- WARN(!cpu_base, "unable to map gic cpu registers\n");
+ ret = gic_of_setup(gic, node);
+ if (ret)
+ return ret;
/*
* Disable split EOI/Deactivate if either HYP is not available
* or the CPU interface is too small.
*/
- if (gic_cnt == 0 && !gic_check_eoimode(node, &cpu_base))
+ if (gic_cnt == 0 && !gic_check_eoimode(node, &gic->raw_cpu_base))
static_key_slow_dec(&supports_deactivate);
- if (of_property_read_u32(node, "cpu-offset", &percpu_offset))
- percpu_offset = 0;
+ ret = __gic_init_bases(gic, -1, &node->fwnode);
+ if (ret) {
+ gic_teardown(gic);
+ return ret;
+ }
- __gic_init_bases(gic_cnt, -1, dist_base, cpu_base, percpu_offset,
- &node->fwnode);
- if (!gic_cnt)
+ if (!gic_cnt) {
gic_init_physaddr(node);
+ gic_of_setup_kvm_info(node);
+ }
if (parent) {
irq = irq_of_parse_and_map(node, 0);
@@ -1245,7 +1357,14 @@ IRQCHIP_DECLARE(pl390, "arm,pl390", gic_of_init);
#endif
#ifdef CONFIG_ACPI
-static phys_addr_t cpu_phy_base __initdata;
+static struct
+{
+ phys_addr_t cpu_phys_base;
+ u32 maint_irq;
+ int maint_irq_mode;
+ phys_addr_t vctrl_base;
+ phys_addr_t vcpu_base;
+} acpi_data __initdata;
static int __init
gic_acpi_parse_madt_cpu(struct acpi_subtable_header *header,
@@ -1265,10 +1384,16 @@ gic_acpi_parse_madt_cpu(struct acpi_subtable_header *header,
* All CPU interface addresses have to be the same.
*/
gic_cpu_base = processor->base_address;
- if (cpu_base_assigned && gic_cpu_base != cpu_phy_base)
+ if (cpu_base_assigned && gic_cpu_base != acpi_data.cpu_phys_base)
return -EINVAL;
- cpu_phy_base = gic_cpu_base;
+ acpi_data.cpu_phys_base = gic_cpu_base;
+ acpi_data.maint_irq = processor->vgic_interrupt;
+ acpi_data.maint_irq_mode = (processor->flags & ACPI_MADT_VGIC_IRQ_MODE) ?
+ ACPI_EDGE_SENSITIVE : ACPI_LEVEL_SENSITIVE;
+ acpi_data.vctrl_base = processor->gich_base_address;
+ acpi_data.vcpu_base = processor->gicv_base_address;
+
cpu_base_assigned = 1;
return 0;
}
@@ -1299,14 +1424,49 @@ static bool __init gic_validate_dist(struct acpi_subtable_header *header,
#define ACPI_GICV2_DIST_MEM_SIZE (SZ_4K)
#define ACPI_GIC_CPU_IF_MEM_SIZE (SZ_8K)
+#define ACPI_GICV2_VCTRL_MEM_SIZE (SZ_4K)
+#define ACPI_GICV2_VCPU_MEM_SIZE (SZ_8K)
+
+static void __init gic_acpi_setup_kvm_info(void)
+{
+ int irq;
+ struct resource *vctrl_res = &gic_v2_kvm_info.vctrl;
+ struct resource *vcpu_res = &gic_v2_kvm_info.vcpu;
+
+ gic_v2_kvm_info.type = GIC_V2;
+
+ if (!acpi_data.vctrl_base)
+ return;
+
+ vctrl_res->flags = IORESOURCE_MEM;
+ vctrl_res->start = acpi_data.vctrl_base;
+ vctrl_res->end = vctrl_res->start + ACPI_GICV2_VCTRL_MEM_SIZE - 1;
+
+ if (!acpi_data.vcpu_base)
+ return;
+
+ vcpu_res->flags = IORESOURCE_MEM;
+ vcpu_res->start = acpi_data.vcpu_base;
+ vcpu_res->end = vcpu_res->start + ACPI_GICV2_VCPU_MEM_SIZE - 1;
+
+ irq = acpi_register_gsi(NULL, acpi_data.maint_irq,
+ acpi_data.maint_irq_mode,
+ ACPI_ACTIVE_HIGH);
+ if (irq <= 0)
+ return;
+
+ gic_v2_kvm_info.maint_irq = irq;
+
+ gic_set_kvm_info(&gic_v2_kvm_info);
+}
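/*
 * Recap of where the DT and ACPI helpers above take the virtualisation
 * data from (same gic_v2_kvm_info fields in both paths; summary only):
 *
 *   maint_irq: DT interrupt 0         / ACPI GICC vgic_interrupt (as a GSI)
 *   vctrl:     DT reg entry 2 (GICH)  / ACPI gich_base_address, SZ_4K window
 *   vcpu:      DT reg entry 3 (GICV)  / ACPI gicv_base_address, SZ_8K window
 */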
static int __init gic_v2_acpi_init(struct acpi_subtable_header *header,
const unsigned long end)
{
struct acpi_madt_generic_distributor *dist;
- void __iomem *cpu_base, *dist_base;
struct fwnode_handle *domain_handle;
- int count;
+ struct gic_chip_data *gic = &gic_data[0];
+ int count, ret;
/* Collect CPU base addresses */
count = acpi_table_parse_madt(ACPI_MADT_TYPE_GENERIC_INTERRUPT,
@@ -1316,17 +1476,18 @@ static int __init gic_v2_acpi_init(struct acpi_subtable_header *header,
return -EINVAL;
}
- cpu_base = ioremap(cpu_phy_base, ACPI_GIC_CPU_IF_MEM_SIZE);
- if (!cpu_base) {
+ gic->raw_cpu_base = ioremap(acpi_data.cpu_phys_base, ACPI_GIC_CPU_IF_MEM_SIZE);
+ if (!gic->raw_cpu_base) {
pr_err("Unable to map GICC registers\n");
return -ENOMEM;
}
dist = (struct acpi_madt_generic_distributor *)header;
- dist_base = ioremap(dist->base_address, ACPI_GICV2_DIST_MEM_SIZE);
- if (!dist_base) {
+ gic->raw_dist_base = ioremap(dist->base_address,
+ ACPI_GICV2_DIST_MEM_SIZE);
+ if (!gic->raw_dist_base) {
pr_err("Unable to map GICD registers\n");
- iounmap(cpu_base);
+ gic_teardown(gic);
return -ENOMEM;
}
@@ -1341,21 +1502,28 @@ static int __init gic_v2_acpi_init(struct acpi_subtable_header *header,
/*
* Initialize GIC instance zero (no multi-GIC support).
*/
- domain_handle = irq_domain_alloc_fwnode(dist_base);
+ domain_handle = irq_domain_alloc_fwnode(gic->raw_dist_base);
if (!domain_handle) {
pr_err("Unable to allocate domain handle\n");
- iounmap(cpu_base);
- iounmap(dist_base);
+ gic_teardown(gic);
return -ENOMEM;
}
- __gic_init_bases(0, -1, dist_base, cpu_base, 0, domain_handle);
+ ret = __gic_init_bases(gic, -1, domain_handle);
+ if (ret) {
+ pr_err("Failed to initialise GIC\n");
+ irq_domain_free_fwnode(domain_handle);
+ gic_teardown(gic);
+ return ret;
+ }
acpi_set_irq_model(ACPI_IRQ_MODEL_GIC, domain_handle);
if (IS_ENABLED(CONFIG_ARM_GIC_V2M))
gicv2m_init(NULL, gic_data[0].domain);
+ gic_acpi_setup_kvm_info();
+
return 0;
}
IRQCHIP_ACPI_DECLARE(gic_v2, ACPI_MADT_TYPE_GENERIC_DISTRIBUTOR,
diff --git a/drivers/irqchip/irq-hip04.c b/drivers/irqchip/irq-hip04.c
index 9688d2e2a636..9e25d8ce08e5 100644
--- a/drivers/irqchip/irq-hip04.c
+++ b/drivers/irqchip/irq-hip04.c
@@ -402,7 +402,7 @@ hip04_of_init(struct device_node *node, struct device_node *parent)
nr_irqs -= hwirq_base; /* calculate # of irqs to allocate */
irq_base = irq_alloc_descs(-1, hwirq_base, nr_irqs, numa_node_id());
- if (IS_ERR_VALUE(irq_base)) {
+ if (irq_base < 0) {
pr_err("failed to allocate IRQ numbers\n");
return -EINVAL;
}
diff --git a/drivers/irqchip/irq-lpc32xx.c b/drivers/irqchip/irq-lpc32xx.c
new file mode 100644
index 000000000000..1034aeb2e98a
--- /dev/null
+++ b/drivers/irqchip/irq-lpc32xx.c
@@ -0,0 +1,238 @@
+/*
+ * Copyright 2015-2016 Vladimir Zapolskiy <vz@mleia.com>
+ *
+ * The code contained herein is licensed under the GNU General Public
+ * License. You may obtain a copy of the GNU General Public License
+ * Version 2 or later at the following locations:
+ *
+ * http://www.opensource.org/licenses/gpl-license.html
+ * http://www.gnu.org/copyleft/gpl.html
+ */
+
+#define pr_fmt(fmt) "%s: " fmt, __func__
+
+#include <linux/io.h>
+#include <linux/irqchip.h>
+#include <linux/irqchip/chained_irq.h>
+#include <linux/of_address.h>
+#include <linux/of_irq.h>
+#include <linux/of_platform.h>
+#include <linux/slab.h>
+#include <asm/exception.h>
+
+#define LPC32XX_INTC_MASK 0x00
+#define LPC32XX_INTC_RAW 0x04
+#define LPC32XX_INTC_STAT 0x08
+#define LPC32XX_INTC_POL 0x0C
+#define LPC32XX_INTC_TYPE 0x10
+#define LPC32XX_INTC_FIQ 0x14
+
+#define NR_LPC32XX_IC_IRQS 32
+
+struct lpc32xx_irq_chip {
+ void __iomem *base;
+ struct irq_domain *domain;
+ struct irq_chip chip;
+};
+
+static struct lpc32xx_irq_chip *lpc32xx_mic_irqc;
+
+static inline u32 lpc32xx_ic_read(struct lpc32xx_irq_chip *ic, u32 reg)
+{
+ return readl_relaxed(ic->base + reg);
+}
+
+static inline void lpc32xx_ic_write(struct lpc32xx_irq_chip *ic,
+ u32 reg, u32 val)
+{
+ writel_relaxed(val, ic->base + reg);
+}
+
+static void lpc32xx_irq_mask(struct irq_data *d)
+{
+ struct lpc32xx_irq_chip *ic = irq_data_get_irq_chip_data(d);
+ u32 val, mask = BIT(d->hwirq);
+
+ val = lpc32xx_ic_read(ic, LPC32XX_INTC_MASK) & ~mask;
+ lpc32xx_ic_write(ic, LPC32XX_INTC_MASK, val);
+}
+
+static void lpc32xx_irq_unmask(struct irq_data *d)
+{
+ struct lpc32xx_irq_chip *ic = irq_data_get_irq_chip_data(d);
+ u32 val, mask = BIT(d->hwirq);
+
+ val = lpc32xx_ic_read(ic, LPC32XX_INTC_MASK) | mask;
+ lpc32xx_ic_write(ic, LPC32XX_INTC_MASK, val);
+}
+
+static void lpc32xx_irq_ack(struct irq_data *d)
+{
+ struct lpc32xx_irq_chip *ic = irq_data_get_irq_chip_data(d);
+ u32 mask = BIT(d->hwirq);
+
+ lpc32xx_ic_write(ic, LPC32XX_INTC_RAW, mask);
+}
+
+static int lpc32xx_irq_set_type(struct irq_data *d, unsigned int type)
+{
+ struct lpc32xx_irq_chip *ic = irq_data_get_irq_chip_data(d);
+ u32 val, mask = BIT(d->hwirq);
+ bool high, edge;
+
+ switch (type) {
+ case IRQ_TYPE_EDGE_RISING:
+ edge = true;
+ high = true;
+ break;
+ case IRQ_TYPE_EDGE_FALLING:
+ edge = true;
+ high = false;
+ break;
+ case IRQ_TYPE_LEVEL_HIGH:
+ edge = false;
+ high = true;
+ break;
+ case IRQ_TYPE_LEVEL_LOW:
+ edge = false;
+ high = false;
+ break;
+ default:
+ pr_info("unsupported irq type %d\n", type);
+ return -EINVAL;
+ }
+
+ irqd_set_trigger_type(d, type);
+
+ val = lpc32xx_ic_read(ic, LPC32XX_INTC_POL);
+ if (high)
+ val |= mask;
+ else
+ val &= ~mask;
+ lpc32xx_ic_write(ic, LPC32XX_INTC_POL, val);
+
+ val = lpc32xx_ic_read(ic, LPC32XX_INTC_TYPE);
+ if (edge) {
+ val |= mask;
+ irq_set_handler_locked(d, handle_edge_irq);
+ } else {
+ val &= ~mask;
+ irq_set_handler_locked(d, handle_level_irq);
+ }
+ lpc32xx_ic_write(ic, LPC32XX_INTC_TYPE, val);
+
+ return 0;
+}
+
+static void __exception_irq_entry lpc32xx_handle_irq(struct pt_regs *regs)
+{
+ struct lpc32xx_irq_chip *ic = lpc32xx_mic_irqc;
+ u32 hwirq = lpc32xx_ic_read(ic, LPC32XX_INTC_STAT), irq;
+
+ while (hwirq) {
+ irq = __ffs(hwirq);
+ hwirq &= ~BIT(irq);
+ handle_domain_irq(lpc32xx_mic_irqc->domain, irq, regs);
+ }
+}
+
+static void lpc32xx_sic_handler(struct irq_desc *desc)
+{
+ struct lpc32xx_irq_chip *ic = irq_desc_get_handler_data(desc);
+ struct irq_chip *chip = irq_desc_get_chip(desc);
+ u32 hwirq = lpc32xx_ic_read(ic, LPC32XX_INTC_STAT), irq;
+
+ chained_irq_enter(chip, desc);
+
+ while (hwirq) {
+ irq = __ffs(hwirq);
+ hwirq &= ~BIT(irq);
+ generic_handle_irq(irq_find_mapping(ic->domain, irq));
+ }
+
+ chained_irq_exit(chip, desc);
+}
+
+static int lpc32xx_irq_domain_map(struct irq_domain *id, unsigned int virq,
+ irq_hw_number_t hw)
+{
+ struct lpc32xx_irq_chip *ic = id->host_data;
+
+ irq_set_chip_data(virq, ic);
+ irq_set_chip_and_handler(virq, &ic->chip, handle_level_irq);
+ irq_set_status_flags(virq, IRQ_LEVEL);
+ irq_set_noprobe(virq);
+
+ return 0;
+}
+
+static void lpc32xx_irq_domain_unmap(struct irq_domain *id, unsigned int virq)
+{
+ irq_set_chip_and_handler(virq, NULL, NULL);
+}
+
+static const struct irq_domain_ops lpc32xx_irq_domain_ops = {
+ .map = lpc32xx_irq_domain_map,
+ .unmap = lpc32xx_irq_domain_unmap,
+ .xlate = irq_domain_xlate_twocell,
+};
+
+static int __init lpc32xx_of_ic_init(struct device_node *node,
+ struct device_node *parent)
+{
+ struct lpc32xx_irq_chip *irqc;
+ bool is_mic = of_device_is_compatible(node, "nxp,lpc3220-mic");
+ const __be32 *reg = of_get_property(node, "reg", NULL);
+ u32 parent_irq, i, addr = reg ? be32_to_cpu(*reg) : 0;
+
+ irqc = kzalloc(sizeof(*irqc), GFP_KERNEL);
+ if (!irqc)
+ return -ENOMEM;
+
+ irqc->base = of_iomap(node, 0);
+ if (!irqc->base) {
+ pr_err("%s: unable to map registers\n", node->full_name);
+ kfree(irqc);
+ return -EINVAL;
+ }
+
+ irqc->chip.irq_ack = lpc32xx_irq_ack;
+ irqc->chip.irq_mask = lpc32xx_irq_mask;
+ irqc->chip.irq_unmask = lpc32xx_irq_unmask;
+ irqc->chip.irq_set_type = lpc32xx_irq_set_type;
+ if (is_mic)
+ irqc->chip.name = kasprintf(GFP_KERNEL, "%08x.mic", addr);
+ else
+ irqc->chip.name = kasprintf(GFP_KERNEL, "%08x.sic", addr);
+
+ irqc->domain = irq_domain_add_linear(node, NR_LPC32XX_IC_IRQS,
+ &lpc32xx_irq_domain_ops, irqc);
+ if (!irqc->domain) {
+ pr_err("unable to add irq domain\n");
+ iounmap(irqc->base);
+ kfree(irqc->chip.name);
+ kfree(irqc);
+ return -ENODEV;
+ }
+
+ if (is_mic) {
+ lpc32xx_mic_irqc = irqc;
+ set_handle_irq(lpc32xx_handle_irq);
+ } else {
+ for (i = 0; i < of_irq_count(node); i++) {
+ parent_irq = irq_of_parse_and_map(node, i);
+ if (parent_irq)
+ irq_set_chained_handler_and_data(parent_irq,
+ lpc32xx_sic_handler, irqc);
+ }
+ }
+
+ lpc32xx_ic_write(irqc, LPC32XX_INTC_MASK, 0x00);
+ lpc32xx_ic_write(irqc, LPC32XX_INTC_POL, 0x00);
+ lpc32xx_ic_write(irqc, LPC32XX_INTC_TYPE, 0x00);
+
+ return 0;
+}
+
+IRQCHIP_DECLARE(nxp_lpc32xx_mic, "nxp,lpc3220-mic", lpc32xx_of_ic_init);
+IRQCHIP_DECLARE(nxp_lpc32xx_sic, "nxp,lpc3220-sic", lpc32xx_of_ic_init);
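For reference, lpc32xx_irq_set_type() above encodes the four supported
trigger types into the POL and TYPE registers as follows (a recap of the
code, per interrupt bit):

	/*
	 *   IRQ_TYPE_EDGE_RISING   ->  TYPE bit = 1, POL bit = 1
	 *   IRQ_TYPE_EDGE_FALLING  ->  TYPE bit = 1, POL bit = 0
	 *   IRQ_TYPE_LEVEL_HIGH    ->  TYPE bit = 0, POL bit = 1
	 *   IRQ_TYPE_LEVEL_LOW     ->  TYPE bit = 0, POL bit = 0
	 */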
diff --git a/drivers/irqchip/irq-ls-scfg-msi.c b/drivers/irqchip/irq-ls-scfg-msi.c
new file mode 100644
index 000000000000..02cca74cab94
--- /dev/null
+++ b/drivers/irqchip/irq-ls-scfg-msi.c
@@ -0,0 +1,240 @@
+/*
+ * Freescale SCFG MSI(-X) support
+ *
+ * Copyright (C) 2016 Freescale Semiconductor.
+ *
+ * Author: Minghuan Lian <Minghuan.Lian@nxp.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/msi.h>
+#include <linux/interrupt.h>
+#include <linux/irq.h>
+#include <linux/irqchip/chained_irq.h>
+#include <linux/irqdomain.h>
+#include <linux/of_pci.h>
+#include <linux/of_platform.h>
+#include <linux/spinlock.h>
+
+#define MSI_MAX_IRQS 32
+#define MSI_IBS_SHIFT 3
+#define MSIR 4
+
+struct ls_scfg_msi {
+ spinlock_t lock;
+ struct platform_device *pdev;
+ struct irq_domain *parent;
+ struct irq_domain *msi_domain;
+ void __iomem *regs;
+ phys_addr_t msiir_addr;
+ int irq;
+ DECLARE_BITMAP(used, MSI_MAX_IRQS);
+};
+
+static struct irq_chip ls_scfg_msi_irq_chip = {
+ .name = "MSI",
+ .irq_mask = pci_msi_mask_irq,
+ .irq_unmask = pci_msi_unmask_irq,
+};
+
+static struct msi_domain_info ls_scfg_msi_domain_info = {
+ .flags = (MSI_FLAG_USE_DEF_DOM_OPS |
+ MSI_FLAG_USE_DEF_CHIP_OPS |
+ MSI_FLAG_PCI_MSIX),
+ .chip = &ls_scfg_msi_irq_chip,
+};
+
+static void ls_scfg_msi_compose_msg(struct irq_data *data, struct msi_msg *msg)
+{
+ struct ls_scfg_msi *msi_data = irq_data_get_irq_chip_data(data);
+
+ msg->address_hi = upper_32_bits(msi_data->msiir_addr);
+ msg->address_lo = lower_32_bits(msi_data->msiir_addr);
+ msg->data = data->hwirq << MSI_IBS_SHIFT;
+}
+
+static int ls_scfg_msi_set_affinity(struct irq_data *irq_data,
+ const struct cpumask *mask, bool force)
+{
+ return -EINVAL;
+}
+
+static struct irq_chip ls_scfg_msi_parent_chip = {
+ .name = "SCFG",
+ .irq_compose_msi_msg = ls_scfg_msi_compose_msg,
+ .irq_set_affinity = ls_scfg_msi_set_affinity,
+};
+
+static int ls_scfg_msi_domain_irq_alloc(struct irq_domain *domain,
+ unsigned int virq,
+ unsigned int nr_irqs,
+ void *args)
+{
+ struct ls_scfg_msi *msi_data = domain->host_data;
+ int pos, err = 0;
+
+ WARN_ON(nr_irqs != 1);
+
+ spin_lock(&msi_data->lock);
+ pos = find_first_zero_bit(msi_data->used, MSI_MAX_IRQS);
+ if (pos < MSI_MAX_IRQS)
+ __set_bit(pos, msi_data->used);
+ else
+ err = -ENOSPC;
+ spin_unlock(&msi_data->lock);
+
+ if (err)
+ return err;
+
+ irq_domain_set_info(domain, virq, pos,
+ &ls_scfg_msi_parent_chip, msi_data,
+ handle_simple_irq, NULL, NULL);
+
+ return 0;
+}
+
+static void ls_scfg_msi_domain_irq_free(struct irq_domain *domain,
+ unsigned int virq, unsigned int nr_irqs)
+{
+ struct irq_data *d = irq_domain_get_irq_data(domain, virq);
+ struct ls_scfg_msi *msi_data = irq_data_get_irq_chip_data(d);
+ int pos;
+
+ pos = d->hwirq;
+ if (pos < 0 || pos >= MSI_MAX_IRQS) {
+ pr_err("failed to teardown msi. Invalid hwirq %d\n", pos);
+ return;
+ }
+
+ spin_lock(&msi_data->lock);
+ __clear_bit(pos, msi_data->used);
+ spin_unlock(&msi_data->lock);
+}
+
+static const struct irq_domain_ops ls_scfg_msi_domain_ops = {
+ .alloc = ls_scfg_msi_domain_irq_alloc,
+ .free = ls_scfg_msi_domain_irq_free,
+};
+
+static void ls_scfg_msi_irq_handler(struct irq_desc *desc)
+{
+ struct ls_scfg_msi *msi_data = irq_desc_get_handler_data(desc);
+ unsigned long val;
+ int pos, virq;
+
+ chained_irq_enter(irq_desc_get_chip(desc), desc);
+
+ val = ioread32be(msi_data->regs + MSIR);
+ for_each_set_bit(pos, &val, MSI_MAX_IRQS) {
+ virq = irq_find_mapping(msi_data->parent, (31 - pos));
+ if (virq)
+ generic_handle_irq(virq);
+ }
+
+ chained_irq_exit(irq_desc_get_chip(desc), desc);
+}
+
+static int ls_scfg_msi_domains_init(struct ls_scfg_msi *msi_data)
+{
+ /* Initialize MSI domain parent */
+ msi_data->parent = irq_domain_add_linear(NULL,
+ MSI_MAX_IRQS,
+ &ls_scfg_msi_domain_ops,
+ msi_data);
+ if (!msi_data->parent) {
+ dev_err(&msi_data->pdev->dev, "failed to create IRQ domain\n");
+ return -ENOMEM;
+ }
+
+ msi_data->msi_domain = pci_msi_create_irq_domain(
+ of_node_to_fwnode(msi_data->pdev->dev.of_node),
+ &ls_scfg_msi_domain_info,
+ msi_data->parent);
+ if (!msi_data->msi_domain) {
+ dev_err(&msi_data->pdev->dev, "failed to create MSI domain\n");
+ irq_domain_remove(msi_data->parent);
+ return -ENOMEM;
+ }
+
+ return 0;
+}
+
+static int ls_scfg_msi_probe(struct platform_device *pdev)
+{
+ struct ls_scfg_msi *msi_data;
+ struct resource *res;
+ int ret;
+
+ msi_data = devm_kzalloc(&pdev->dev, sizeof(*msi_data), GFP_KERNEL);
+ if (!msi_data)
+ return -ENOMEM;
+
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ msi_data->regs = devm_ioremap_resource(&pdev->dev, res);
+ if (IS_ERR(msi_data->regs)) {
+ dev_err(&pdev->dev, "failed to initialize 'regs'\n");
+ return PTR_ERR(msi_data->regs);
+ }
+ msi_data->msiir_addr = res->start;
+
+ msi_data->irq = platform_get_irq(pdev, 0);
+ if (msi_data->irq <= 0) {
+ dev_err(&pdev->dev, "failed to get MSI irq\n");
+ return -ENODEV;
+ }
+
+ msi_data->pdev = pdev;
+ spin_lock_init(&msi_data->lock);
+
+ ret = ls_scfg_msi_domains_init(msi_data);
+ if (ret)
+ return ret;
+
+ irq_set_chained_handler_and_data(msi_data->irq,
+ ls_scfg_msi_irq_handler,
+ msi_data);
+
+ platform_set_drvdata(pdev, msi_data);
+
+ return 0;
+}
+
+static int ls_scfg_msi_remove(struct platform_device *pdev)
+{
+ struct ls_scfg_msi *msi_data = platform_get_drvdata(pdev);
+
+ irq_set_chained_handler_and_data(msi_data->irq, NULL, NULL);
+
+ irq_domain_remove(msi_data->msi_domain);
+ irq_domain_remove(msi_data->parent);
+
+ platform_set_drvdata(pdev, NULL);
+
+ return 0;
+}
+
+static const struct of_device_id ls_scfg_msi_id[] = {
+ { .compatible = "fsl,1s1021a-msi", },
+ { .compatible = "fsl,1s1043a-msi", },
+ {},
+};
+
+static struct platform_driver ls_scfg_msi_driver = {
+ .driver = {
+ .name = "ls-scfg-msi",
+ .of_match_table = ls_scfg_msi_id,
+ },
+ .probe = ls_scfg_msi_probe,
+ .remove = ls_scfg_msi_remove,
+};
+
+module_platform_driver(ls_scfg_msi_driver);
+
+MODULE_AUTHOR("Minghuan Lian <Minghuan.Lian@nxp.com>");
+MODULE_DESCRIPTION("Freescale Layerscape SCFG MSI controller driver");
+MODULE_LICENSE("GPL v2");
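A short worked example of the hwirq encoding the driver above relies on
(MSI_IBS_SHIFT is 3 and MSIR is read big-endian; the numbers are
illustrative):

	/*
	 * For hwirq 5, ls_scfg_msi_compose_msg() advertises MSI data
	 * 5 << 3 = 0x28.  When the interrupt fires, the SCFG block is
	 * expected to latch bit (31 - 5) = 26 in MSIR, and
	 * ls_scfg_msi_irq_handler() maps that bit back to hwirq
	 * 31 - 26 = 5 before calling generic_handle_irq().
	 */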
diff --git a/drivers/irqchip/irq-mbigen.c b/drivers/irqchip/irq-mbigen.c
index d67baa231c13..03b79b061d24 100644
--- a/drivers/irqchip/irq-mbigen.c
+++ b/drivers/irqchip/irq-mbigen.c
@@ -263,8 +263,8 @@ static int mbigen_device_probe(struct platform_device *pdev)
parent = platform_bus_type.dev_root;
child = of_platform_device_create(np, NULL, parent);
- if (IS_ERR(child))
- return PTR_ERR(child);
+ if (!child)
+ return -ENOMEM;
if (of_property_read_u32(child->dev.of_node, "num-pins",
&num_pins) < 0) {
diff --git a/drivers/irqchip/irq-mips-gic.c b/drivers/irqchip/irq-mips-gic.c
index 4dffccf532a2..3b5e10aa48ab 100644
--- a/drivers/irqchip/irq-mips-gic.c
+++ b/drivers/irqchip/irq-mips-gic.c
@@ -197,7 +197,7 @@ void gic_write_cpu_compare(cycle_t cnt, int cpu)
local_irq_save(flags);
- gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_OTHER_ADDR), cpu);
+ gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_OTHER_ADDR), mips_cm_vp_id(cpu));
if (mips_cm_is64) {
gic_write(GIC_REG(VPE_OTHER, GIC_VPE_COMPARE), cnt);
@@ -246,6 +246,14 @@ void gic_stop_count(void)
#endif
+unsigned gic_read_local_vp_id(void)
+{
+ unsigned long ident;
+
+ ident = gic_read(GIC_REG(VPE_LOCAL, GIC_VP_IDENT));
+ return ident & GIC_VP_IDENT_VCNUM_MSK;
+}
+
static bool gic_local_irq_is_routable(int intr)
{
u32 vpe_ctl;
@@ -553,7 +561,8 @@ static void gic_mask_local_irq_all_vpes(struct irq_data *d)
spin_lock_irqsave(&gic_lock, flags);
for (i = 0; i < gic_vpes; i++) {
- gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_OTHER_ADDR), i);
+ gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_OTHER_ADDR),
+ mips_cm_vp_id(i));
gic_write32(GIC_REG(VPE_OTHER, GIC_VPE_RMASK), 1 << intr);
}
spin_unlock_irqrestore(&gic_lock, flags);
@@ -567,7 +576,8 @@ static void gic_unmask_local_irq_all_vpes(struct irq_data *d)
spin_lock_irqsave(&gic_lock, flags);
for (i = 0; i < gic_vpes; i++) {
- gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_OTHER_ADDR), i);
+ gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_OTHER_ADDR),
+ mips_cm_vp_id(i));
gic_write32(GIC_REG(VPE_OTHER, GIC_VPE_SMASK), 1 << intr);
}
spin_unlock_irqrestore(&gic_lock, flags);
@@ -607,7 +617,8 @@ static void __init gic_basic_init(void)
for (i = 0; i < gic_vpes; i++) {
unsigned int j;
- gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_OTHER_ADDR), i);
+ gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_OTHER_ADDR),
+ mips_cm_vp_id(i));
for (j = 0; j < GIC_NUM_LOCAL_INTRS; j++) {
if (!gic_local_irq_is_routable(j))
continue;
@@ -652,7 +663,8 @@ static int gic_local_irq_domain_map(struct irq_domain *d, unsigned int virq,
for (i = 0; i < gic_vpes; i++) {
u32 val = GIC_MAP_TO_PIN_MSK | gic_cpu_pin;
- gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_OTHER_ADDR), i);
+ gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_OTHER_ADDR),
+ mips_cm_vp_id(i));
switch (intr) {
case GIC_LOCAL_INT_WD:
@@ -956,7 +968,7 @@ static void __init __gic_init(unsigned long gic_base_addr,
unsigned int cpu_vec, unsigned int irqbase,
struct device_node *node)
{
- unsigned int gicconfig;
+ unsigned int gicconfig, cpu;
unsigned int v[2];
__gic_base_addr = gic_base_addr;
@@ -973,6 +985,14 @@ static void __init __gic_init(unsigned long gic_base_addr,
gic_vpes = gic_vpes + 1;
if (cpu_has_veic) {
+ /* Set EIC mode for all VPEs */
+ for_each_present_cpu(cpu) {
+ gic_write(GIC_REG(VPE_LOCAL, GIC_VPE_OTHER_ADDR),
+ mips_cm_vp_id(cpu));
+ gic_write(GIC_REG(VPE_OTHER, GIC_VPE_CTL),
+ GIC_VPE_CTL_EIC_MODE_MSK);
+ }
+
/* Always use vector 1 in EIC mode */
gic_cpu_pin = 0;
timer_cpu_pin = gic_cpu_pin;
diff --git a/drivers/irqchip/irq-partition-percpu.c b/drivers/irqchip/irq-partition-percpu.c
new file mode 100644
index 000000000000..ccd72c2cbc23
--- /dev/null
+++ b/drivers/irqchip/irq-partition-percpu.c
@@ -0,0 +1,256 @@
+/*
+ * Copyright (C) 2016 ARM Limited, All Rights Reserved.
+ * Author: Marc Zyngier <marc.zyngier@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program. If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <linux/bitops.h>
+#include <linux/interrupt.h>
+#include <linux/irqchip.h>
+#include <linux/irqchip/chained_irq.h>
+#include <linux/irqchip/irq-partition-percpu.h>
+#include <linux/irqdomain.h>
+#include <linux/seq_file.h>
+#include <linux/slab.h>
+
+struct partition_desc {
+ int nr_parts;
+ struct partition_affinity *parts;
+ struct irq_domain *domain;
+ struct irq_desc *chained_desc;
+ unsigned long *bitmap;
+ struct irq_domain_ops ops;
+};
+
+static bool partition_check_cpu(struct partition_desc *part,
+ unsigned int cpu, unsigned int hwirq)
+{
+ return cpumask_test_cpu(cpu, &part->parts[hwirq].mask);
+}
+
+static void partition_irq_mask(struct irq_data *d)
+{
+ struct partition_desc *part = irq_data_get_irq_chip_data(d);
+ struct irq_chip *chip = irq_desc_get_chip(part->chained_desc);
+ struct irq_data *data = irq_desc_get_irq_data(part->chained_desc);
+
+ if (partition_check_cpu(part, smp_processor_id(), d->hwirq) &&
+ chip->irq_mask)
+ chip->irq_mask(data);
+}
+
+static void partition_irq_unmask(struct irq_data *d)
+{
+ struct partition_desc *part = irq_data_get_irq_chip_data(d);
+ struct irq_chip *chip = irq_desc_get_chip(part->chained_desc);
+ struct irq_data *data = irq_desc_get_irq_data(part->chained_desc);
+
+ if (partition_check_cpu(part, smp_processor_id(), d->hwirq) &&
+ chip->irq_unmask)
+ chip->irq_unmask(data);
+}
+
+static int partition_irq_set_irqchip_state(struct irq_data *d,
+ enum irqchip_irq_state which,
+ bool val)
+{
+ struct partition_desc *part = irq_data_get_irq_chip_data(d);
+ struct irq_chip *chip = irq_desc_get_chip(part->chained_desc);
+ struct irq_data *data = irq_desc_get_irq_data(part->chained_desc);
+
+ if (partition_check_cpu(part, smp_processor_id(), d->hwirq) &&
+ chip->irq_set_irqchip_state)
+ return chip->irq_set_irqchip_state(data, which, val);
+
+ return -EINVAL;
+}
+
+static int partition_irq_get_irqchip_state(struct irq_data *d,
+ enum irqchip_irq_state which,
+ bool *val)
+{
+ struct partition_desc *part = irq_data_get_irq_chip_data(d);
+ struct irq_chip *chip = irq_desc_get_chip(part->chained_desc);
+ struct irq_data *data = irq_desc_get_irq_data(part->chained_desc);
+
+ if (partition_check_cpu(part, smp_processor_id(), d->hwirq) &&
+ chip->irq_get_irqchip_state)
+ return chip->irq_get_irqchip_state(data, which, val);
+
+ return -EINVAL;
+}
+
+static int partition_irq_set_type(struct irq_data *d, unsigned int type)
+{
+ struct partition_desc *part = irq_data_get_irq_chip_data(d);
+ struct irq_chip *chip = irq_desc_get_chip(part->chained_desc);
+ struct irq_data *data = irq_desc_get_irq_data(part->chained_desc);
+
+ if (chip->irq_set_type)
+ return chip->irq_set_type(data, type);
+
+ return -EINVAL;
+}
+
+static void partition_irq_print_chip(struct irq_data *d, struct seq_file *p)
+{
+ struct partition_desc *part = irq_data_get_irq_chip_data(d);
+ struct irq_chip *chip = irq_desc_get_chip(part->chained_desc);
+ struct irq_data *data = irq_desc_get_irq_data(part->chained_desc);
+
+ seq_printf(p, " %5s-%lu", chip->name, data->hwirq);
+}
+
+static struct irq_chip partition_irq_chip = {
+ .irq_mask = partition_irq_mask,
+ .irq_unmask = partition_irq_unmask,
+ .irq_set_type = partition_irq_set_type,
+ .irq_get_irqchip_state = partition_irq_get_irqchip_state,
+ .irq_set_irqchip_state = partition_irq_set_irqchip_state,
+ .irq_print_chip = partition_irq_print_chip,
+};
+
+static void partition_handle_irq(struct irq_desc *desc)
+{
+ struct partition_desc *part = irq_desc_get_handler_data(desc);
+ struct irq_chip *chip = irq_desc_get_chip(desc);
+ int cpu = smp_processor_id();
+ int hwirq;
+
+ chained_irq_enter(chip, desc);
+
+ for_each_set_bit(hwirq, part->bitmap, part->nr_parts) {
+ if (partition_check_cpu(part, cpu, hwirq))
+ break;
+ }
+
+ if (unlikely(hwirq == part->nr_parts)) {
+ handle_bad_irq(desc);
+ } else {
+ unsigned int irq;
+ irq = irq_find_mapping(part->domain, hwirq);
+ generic_handle_irq(irq);
+ }
+
+ chained_irq_exit(chip, desc);
+}
+
+static int partition_domain_alloc(struct irq_domain *domain, unsigned int virq,
+ unsigned int nr_irqs, void *arg)
+{
+ int ret;
+ irq_hw_number_t hwirq;
+ unsigned int type;
+ struct irq_fwspec *fwspec = arg;
+ struct partition_desc *part;
+
+ BUG_ON(nr_irqs != 1);
+ ret = domain->ops->translate(domain, fwspec, &hwirq, &type);
+ if (ret)
+ return ret;
+
+ part = domain->host_data;
+
+ set_bit(hwirq, part->bitmap);
+ irq_set_chained_handler_and_data(irq_desc_get_irq(part->chained_desc),
+ partition_handle_irq, part);
+ irq_set_percpu_devid_partition(virq, &part->parts[hwirq].mask);
+ irq_domain_set_info(domain, virq, hwirq, &partition_irq_chip, part,
+ handle_percpu_devid_irq, NULL, NULL);
+ irq_set_status_flags(virq, IRQ_NOAUTOEN);
+
+ return 0;
+}
+
+static void partition_domain_free(struct irq_domain *domain, unsigned int virq,
+ unsigned int nr_irqs)
+{
+ struct irq_data *d;
+
+ BUG_ON(nr_irqs != 1);
+
+ d = irq_domain_get_irq_data(domain, virq);
+ irq_set_handler(virq, NULL);
+ irq_domain_reset_irq_data(d);
+}
+
+int partition_translate_id(struct partition_desc *desc, void *partition_id)
+{
+ struct partition_affinity *part = NULL;
+ int i;
+
+ for (i = 0; i < desc->nr_parts; i++) {
+ if (desc->parts[i].partition_id == partition_id) {
+ part = &desc->parts[i];
+ break;
+ }
+ }
+
+ if (WARN_ON(!part)) {
+ pr_err("Failed to find partition\n");
+ return -EINVAL;
+ }
+
+ return i;
+}
+
+struct partition_desc *partition_create_desc(struct fwnode_handle *fwnode,
+ struct partition_affinity *parts,
+ int nr_parts,
+ int chained_irq,
+ const struct irq_domain_ops *ops)
+{
+ struct partition_desc *desc;
+ struct irq_domain *d;
+
+ BUG_ON(!ops->select || !ops->translate);
+
+ desc = kzalloc(sizeof(*desc), GFP_KERNEL);
+ if (!desc)
+ return NULL;
+
+ desc->ops = *ops;
+ desc->ops.free = partition_domain_free;
+ desc->ops.alloc = partition_domain_alloc;
+
+ d = irq_domain_create_linear(fwnode, nr_parts, &desc->ops, desc);
+ if (!d)
+ goto out;
+ desc->domain = d;
+
+ desc->bitmap = kzalloc(sizeof(long) * BITS_TO_LONGS(nr_parts),
+ GFP_KERNEL);
+ if (WARN_ON(!desc->bitmap))
+ goto out;
+
+ desc->chained_desc = irq_to_desc(chained_irq);
+ desc->nr_parts = nr_parts;
+ desc->parts = parts;
+
+ return desc;
+out:
+ if (d)
+ irq_domain_remove(d);
+ kfree(desc);
+
+ return NULL;
+}
+
+struct irq_domain *partition_get_domain(struct partition_desc *dsc)
+{
+ if (dsc)
+ return dsc->domain;
+
+ return NULL;
+}
diff --git a/drivers/irqchip/irq-tegra.c b/drivers/irqchip/irq-tegra.c
index 50be9639e27e..e902f081e16c 100644
--- a/drivers/irqchip/irq-tegra.c
+++ b/drivers/irqchip/irq-tegra.c
@@ -235,7 +235,7 @@ static int tegra_ictlr_domain_translate(struct irq_domain *d,
return -EINVAL;
*hwirq = fwspec->param[1];
- *type = fwspec->param[2];
+ *type = fwspec->param[2] & IRQ_TYPE_SENSE_MASK;
return 0;
}
diff --git a/drivers/irqchip/irq-versatile-fpga.c b/drivers/irqchip/irq-versatile-fpga.c
index 598ab3f0e0ac..37dd4645bf18 100644
--- a/drivers/irqchip/irq-versatile-fpga.c
+++ b/drivers/irqchip/irq-versatile-fpga.c
@@ -227,4 +227,5 @@ int __init fpga_irq_of_init(struct device_node *node,
}
IRQCHIP_DECLARE(arm_fpga, "arm,versatile-fpga-irq", fpga_irq_of_init);
IRQCHIP_DECLARE(arm_fpga_sic, "arm,versatile-sic", fpga_irq_of_init);
+IRQCHIP_DECLARE(ox810se_rps, "oxsemi,ox810se-rps-irq", fpga_irq_of_init);
#endif
diff --git a/drivers/irqchip/spear-shirq.c b/drivers/irqchip/spear-shirq.c
index 1ccd2abed65f..1518ba31a80c 100644
--- a/drivers/irqchip/spear-shirq.c
+++ b/drivers/irqchip/spear-shirq.c
@@ -232,7 +232,7 @@ static int __init shirq_init(struct spear_shirq **shirq_blocks, int block_nr,
nr_irqs += shirq_blocks[i]->nr_irqs;
virq_base = irq_alloc_descs(-1, 0, nr_irqs, 0);
- if (IS_ERR_VALUE(virq_base)) {
+ if (virq_base < 0) {
pr_err("%s: irq desc alloc failed\n", __func__);
goto err_unmap;
}
diff --git a/drivers/isdn/hardware/eicon/message.c b/drivers/isdn/hardware/eicon/message.c
index d7c286656a25..1a1d99704fe6 100644
--- a/drivers/isdn/hardware/eicon/message.c
+++ b/drivers/isdn/hardware/eicon/message.c
@@ -1147,8 +1147,6 @@ static byte test_c_ind_mask_bit(PLCI *plci, word b)
static void dump_c_ind_mask(PLCI *plci)
{
- static char hex_digit_table[0x10] =
- {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f'};
word i, j, k;
dword d;
char *p;
@@ -1165,7 +1163,7 @@ static void dump_c_ind_mask(PLCI *plci)
d = plci->c_ind_mask_table[i + j];
for (k = 0; k < 8; k++)
{
- *(--p) = hex_digit_table[d & 0xf];
+ *(--p) = hex_asc_lo(d);
d >>= 4;
}
}
@@ -10507,7 +10505,6 @@ static void mixer_set_bchannel_id(PLCI *plci, byte *chi)
static void mixer_calculate_coefs(DIVA_CAPI_ADAPTER *a)
{
- static char hex_digit_table[0x10] = {'0', '1', '2', '3', '4', '5', '6', '7', '8', '9', 'a', 'b', 'c', 'd', 'e', 'f'};
word n, i, j;
char *p;
char hex_line[2 * MIXER_MAX_DUMP_CHANNELS + MIXER_MAX_DUMP_CHANNELS / 8 + 4];
@@ -10690,13 +10687,13 @@ static void mixer_calculate_coefs(DIVA_CAPI_ADAPTER *a)
n = li_total_channels;
if (n > MIXER_MAX_DUMP_CHANNELS)
n = MIXER_MAX_DUMP_CHANNELS;
+
p = hex_line;
for (j = 0; j < n; j++)
{
if ((j & 0x7) == 0)
*(p++) = ' ';
- *(p++) = hex_digit_table[li_config_table[j].curchnl >> 4];
- *(p++) = hex_digit_table[li_config_table[j].curchnl & 0xf];
+ p = hex_byte_pack(p, li_config_table[j].curchnl);
}
*p = '\0';
dbug(1, dprintf("[%06lx] CURRENT %s",
@@ -10706,8 +10703,7 @@ static void mixer_calculate_coefs(DIVA_CAPI_ADAPTER *a)
{
if ((j & 0x7) == 0)
*(p++) = ' ';
- *(p++) = hex_digit_table[li_config_table[j].channel >> 4];
- *(p++) = hex_digit_table[li_config_table[j].channel & 0xf];
+ p = hex_byte_pack(p, li_config_table[j].channel);
}
*p = '\0';
dbug(1, dprintf("[%06lx] CHANNEL %s",
@@ -10717,8 +10713,7 @@ static void mixer_calculate_coefs(DIVA_CAPI_ADAPTER *a)
{
if ((j & 0x7) == 0)
*(p++) = ' ';
- *(p++) = hex_digit_table[li_config_table[j].chflags >> 4];
- *(p++) = hex_digit_table[li_config_table[j].chflags & 0xf];
+ p = hex_byte_pack(p, li_config_table[j].chflags);
}
*p = '\0';
dbug(1, dprintf("[%06lx] CHFLAG %s",
@@ -10730,8 +10725,7 @@ static void mixer_calculate_coefs(DIVA_CAPI_ADAPTER *a)
{
if ((j & 0x7) == 0)
*(p++) = ' ';
- *(p++) = hex_digit_table[li_config_table[i].flag_table[j] >> 4];
- *(p++) = hex_digit_table[li_config_table[i].flag_table[j] & 0xf];
+ p = hex_byte_pack(p, li_config_table[i].flag_table[j]);
}
*p = '\0';
dbug(1, dprintf("[%06lx] FLAG[%02x]%s",
@@ -10744,8 +10738,7 @@ static void mixer_calculate_coefs(DIVA_CAPI_ADAPTER *a)
{
if ((j & 0x7) == 0)
*(p++) = ' ';
- *(p++) = hex_digit_table[li_config_table[i].coef_table[j] >> 4];
- *(p++) = hex_digit_table[li_config_table[i].coef_table[j] & 0xf];
+ p = hex_byte_pack(p, li_config_table[i].coef_table[j]);
}
*p = '\0';
dbug(1, dprintf("[%06lx] COEF[%02x]%s",
diff --git a/drivers/isdn/hysdn/hysdn_net.c b/drivers/isdn/hysdn/hysdn_net.c
index a0efb4cefa1c..5609deee7cd3 100644
--- a/drivers/isdn/hysdn/hysdn_net.c
+++ b/drivers/isdn/hysdn/hysdn_net.c
@@ -127,7 +127,7 @@ net_send_packet(struct sk_buff *skb, struct net_device *dev)
if (lp->in_idx >= MAX_SKB_BUFFERS)
lp->in_idx = 0; /* wrap around */
lp->sk_count++; /* adjust counter */
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
/* If we just used up the very last entry in the
* TX ring on this device, tell the queueing
diff --git a/drivers/isdn/i4l/isdn_net.c b/drivers/isdn/i4l/isdn_net.c
index aa5dd5668528..c151c6daa67e 100644
--- a/drivers/isdn/i4l/isdn_net.c
+++ b/drivers/isdn/i4l/isdn_net.c
@@ -1153,7 +1153,7 @@ static void isdn_net_tx_timeout(struct net_device *ndev)
* ever called --KG
*/
}
- ndev->trans_start = jiffies;
+ netif_trans_update(ndev);
netif_wake_queue(ndev);
}
@@ -1291,7 +1291,7 @@ isdn_net_start_xmit(struct sk_buff *skb, struct net_device *ndev)
}
} else {
/* Device is connected to an ISDN channel */
- ndev->trans_start = jiffies;
+ netif_trans_update(ndev);
if (!lp->dialstate) {
/* ISDN connection is established, try sending */
int ret;
diff --git a/drivers/isdn/i4l/isdn_tty.c b/drivers/isdn/i4l/isdn_tty.c
index 947d5c978b8f..63eaa0a9f8a1 100644
--- a/drivers/isdn/i4l/isdn_tty.c
+++ b/drivers/isdn/i4l/isdn_tty.c
@@ -1043,17 +1043,13 @@ isdn_tty_change_speed(modem_info *info)
if (!(cflag & PARODD))
cval |= UART_LCR_EPAR;
- if (cflag & CLOCAL)
- port->flags &= ~ASYNC_CHECK_CD;
- else {
- port->flags |= ASYNC_CHECK_CD;
- }
+ tty_port_set_check_carrier(port, ~cflag & CLOCAL);
}
static int
isdn_tty_startup(modem_info *info)
{
- if (info->port.flags & ASYNC_INITIALIZED)
+ if (tty_port_initialized(&info->port))
return 0;
isdn_lock_drivers();
#ifdef ISDN_DEBUG_MODEM_OPEN
@@ -1070,7 +1066,7 @@ isdn_tty_startup(modem_info *info)
*/
isdn_tty_change_speed(info);
- info->port.flags |= ASYNC_INITIALIZED;
+ tty_port_set_initialized(&info->port, 1);
info->msr |= (UART_MSR_DSR | UART_MSR_CTS);
info->send_outstanding = 0;
return 0;
@@ -1083,7 +1079,7 @@ isdn_tty_startup(modem_info *info)
static void
isdn_tty_shutdown(modem_info *info)
{
- if (!(info->port.flags & ASYNC_INITIALIZED))
+ if (!tty_port_initialized(&info->port))
return;
#ifdef ISDN_DEBUG_MODEM_OPEN
printk(KERN_DEBUG "Shutting down isdnmodem port %d ....\n", info->line);
@@ -1103,7 +1099,7 @@ isdn_tty_shutdown(modem_info *info)
if (info->port.tty)
set_bit(TTY_IO_ERROR, &info->port.tty->flags);
- info->port.flags &= ~ASYNC_INITIALIZED;
+ tty_port_set_initialized(&info->port, 0);
}
/* isdn_tty_write() is the main send-routine. It is called from the upper
@@ -1351,7 +1347,7 @@ isdn_tty_tiocmget(struct tty_struct *tty)
if (isdn_tty_paranoia_check(info, tty->name, __func__))
return -ENODEV;
- if (tty->flags & (1 << TTY_IO_ERROR))
+ if (tty_io_error(tty))
return -EIO;
mutex_lock(&modem_info_mutex);
@@ -1378,7 +1374,7 @@ isdn_tty_tiocmset(struct tty_struct *tty,
if (isdn_tty_paranoia_check(info, tty->name, __func__))
return -ENODEV;
- if (tty->flags & (1 << TTY_IO_ERROR))
+ if (tty_io_error(tty))
return -EIO;
#ifdef ISDN_DEBUG_MODEM_IOCTL
@@ -1419,7 +1415,7 @@ isdn_tty_ioctl(struct tty_struct *tty, uint cmd, ulong arg)
if (isdn_tty_paranoia_check(info, tty->name, "isdn_tty_ioctl"))
return -ENODEV;
- if (tty->flags & (1 << TTY_IO_ERROR))
+ if (tty_io_error(tty))
return -EIO;
switch (cmd) {
case TCSBRK: /* SVID version: non-zero arg --> no break */
@@ -1581,7 +1577,7 @@ isdn_tty_close(struct tty_struct *tty, struct file *filp)
* interrupt driver to stop checking the data ready bit in the
* line status register.
*/
- if (port->flags & ASYNC_INITIALIZED) {
+ if (tty_port_initialized(port)) {
tty_wait_until_sent(tty, 3000); /* 30 seconds timeout */
/*
* Before we drop DTR, make sure the UART transmitter
@@ -1622,7 +1618,7 @@ isdn_tty_hangup(struct tty_struct *tty)
return;
isdn_tty_shutdown(info);
port->count = 0;
- port->flags &= ~ASYNC_NORMAL_ACTIVE;
+ tty_port_set_active(port, 0);
port->tty = NULL;
wake_up_interruptible(&port->open_wait);
}
@@ -1979,7 +1975,7 @@ isdn_tty_find_icall(int di, int ch, setup_parm *setup)
#endif
if (
#ifndef FIX_FILE_TRANSFER
- (info->port.flags & ASYNC_NORMAL_ACTIVE) &&
+ tty_port_active(&info->port) &&
#endif
(info->isdn_driver == -1) &&
(info->isdn_channel == -1) &&
@@ -2018,8 +2014,6 @@ isdn_tty_find_icall(int di, int ch, setup_parm *setup)
return (wret == 2) ? 3 : 0;
}
-#define TTY_IS_ACTIVE(info) (info->port.flags & ASYNC_NORMAL_ACTIVE)
-
int
isdn_tty_stat_callback(int i, isdn_ctrl *c)
{
@@ -2077,7 +2071,7 @@ isdn_tty_stat_callback(int i, isdn_ctrl *c)
#ifdef ISDN_TTY_STAT_DEBUG
printk(KERN_DEBUG "tty_STAT_DCONN ttyI%d\n", info->line);
#endif
- if (TTY_IS_ACTIVE(info)) {
+ if (tty_port_active(&info->port)) {
if (info->dialing == 1) {
info->dialing = 2;
return 1;
@@ -2088,7 +2082,7 @@ isdn_tty_stat_callback(int i, isdn_ctrl *c)
#ifdef ISDN_TTY_STAT_DEBUG
printk(KERN_DEBUG "tty_STAT_DHUP ttyI%d\n", info->line);
#endif
- if (TTY_IS_ACTIVE(info)) {
+ if (tty_port_active(&info->port)) {
if (info->dialing == 1)
isdn_tty_modem_result(RESULT_BUSY, info);
if (info->dialing > 1)
@@ -2118,7 +2112,7 @@ isdn_tty_stat_callback(int i, isdn_ctrl *c)
* waiting for it and
* set DCD-bit of its modem-status.
*/
- if (TTY_IS_ACTIVE(info) ||
+ if (tty_port_active(&info->port) ||
(info->port.blocked_open &&
(info->emu.mdmreg[REG_DCD] & BIT_DCD))) {
info->msr |= UART_MSR_DCD;
@@ -2145,7 +2139,7 @@ isdn_tty_stat_callback(int i, isdn_ctrl *c)
#ifdef ISDN_TTY_STAT_DEBUG
printk(KERN_DEBUG "tty_STAT_BHUP ttyI%d\n", info->line);
#endif
- if (TTY_IS_ACTIVE(info)) {
+ if (tty_port_active(&info->port)) {
#ifdef ISDN_DEBUG_MODEM_HUP
printk(KERN_DEBUG "Mhup in ISDN_STAT_BHUP\n");
#endif
@@ -2157,7 +2151,7 @@ isdn_tty_stat_callback(int i, isdn_ctrl *c)
#ifdef ISDN_TTY_STAT_DEBUG
printk(KERN_DEBUG "tty_STAT_NODCH ttyI%d\n", info->line);
#endif
- if (TTY_IS_ACTIVE(info)) {
+ if (tty_port_active(&info->port)) {
if (info->dialing) {
info->dialing = 0;
info->last_l2 = -1;
@@ -2183,14 +2177,14 @@ isdn_tty_stat_callback(int i, isdn_ctrl *c)
return 1;
#ifdef CONFIG_ISDN_TTY_FAX
case ISDN_STAT_FAXIND:
- if (TTY_IS_ACTIVE(info)) {
+ if (tty_port_active(&info->port)) {
isdn_tty_fax_command(info, c);
}
break;
#endif
#ifdef CONFIG_ISDN_AUDIO
case ISDN_STAT_AUDIO:
- if (TTY_IS_ACTIVE(info)) {
+ if (tty_port_active(&info->port)) {
switch (c->parm.num[0]) {
case ISDN_AUDIO_DTMF:
if (info->vonline) {
@@ -2528,7 +2522,7 @@ isdn_tty_modem_result(int code, modem_info *info)
if (info->closing || (!info->port.tty))
return;
- if (info->port.flags & ASYNC_CHECK_CD)
+ if (tty_port_check_carrier(&info->port))
tty_hangup(info->port.tty);
}
}
diff --git a/drivers/isdn/i4l/isdn_x25iface.c b/drivers/isdn/i4l/isdn_x25iface.c
index e2d4e58230f5..0c5d8de41b23 100644
--- a/drivers/isdn/i4l/isdn_x25iface.c
+++ b/drivers/isdn/i4l/isdn_x25iface.c
@@ -278,7 +278,7 @@ static int isdn_x25iface_xmit(struct concap_proto *cprot, struct sk_buff *skb)
case X25_IFACE_DATA:
if (*state == WAN_CONNECTED) {
skb_pull(skb, 1);
- cprot->net_dev->trans_start = jiffies;
+ netif_trans_update(cprot->net_dev);
ret = (cprot->dops->data_req(cprot, skb));
/* prepare for future retransmissions */
if (ret) skb_push(skb, 1);
diff --git a/drivers/leds/Kconfig b/drivers/leds/Kconfig
index 225147863e02..5ae28340a98b 100644
--- a/drivers/leds/Kconfig
+++ b/drivers/leds/Kconfig
@@ -413,10 +413,11 @@ config LEDS_INTEL_SS4200
tristate "LED driver for Intel NAS SS4200 series"
depends on LEDS_CLASS
depends on PCI && DMI
+ depends on X86
help
This option enables support for the Intel SS4200 series of
- Network Attached Storage servers. You may control the hard
- drive or power LEDs on the front panel. Using this driver
+ Network Attached Storage servers. You may control the hard
+ drive or power LEDs on the front panel. Using this driver
can stop the front LED from blinking after startup.
config LEDS_LT3593
diff --git a/drivers/leds/led-triggers.c b/drivers/leds/led-triggers.c
index 2181581795d3..55fa65e1ae03 100644
--- a/drivers/leds/led-triggers.c
+++ b/drivers/leds/led-triggers.c
@@ -26,7 +26,7 @@
* Nests outside led_cdev->trigger_lock
*/
static DECLARE_RWSEM(triggers_list_lock);
-static LIST_HEAD(trigger_list);
+LIST_HEAD(trigger_list);
/* Used by LED Class */
diff --git a/drivers/leds/leds-gpio.c b/drivers/leds/leds-gpio.c
index 61143f55597e..8229f063b483 100644
--- a/drivers/leds/leds-gpio.c
+++ b/drivers/leds/leds-gpio.c
@@ -127,6 +127,8 @@ static int create_gpio_led(const struct gpio_led *template,
led_dat->cdev.brightness = state ? LED_FULL : LED_OFF;
if (!template->retain_state_suspended)
led_dat->cdev.flags |= LED_CORE_SUSPENDRESUME;
+ if (template->panic_indicator)
+ led_dat->cdev.flags |= LED_PANIC_INDICATOR;
ret = gpiod_direction_output(led_dat->gpiod, state);
if (ret < 0)
@@ -200,6 +202,8 @@ static struct gpio_leds_priv *gpio_leds_create(struct platform_device *pdev)
if (fwnode_property_present(child, "retain-state-suspended"))
led.retain_state_suspended = 1;
+ if (fwnode_property_present(child, "panic-indicator"))
+ led.panic_indicator = 1;
ret = create_gpio_led(&led, &priv->leds[priv->num_leds],
dev, NULL);
diff --git a/drivers/leds/leds-pwm.c b/drivers/leds/leds-pwm.c
index 4783bacb2e9d..a9145aa7f36a 100644
--- a/drivers/leds/leds-pwm.c
+++ b/drivers/leds/leds-pwm.c
@@ -91,6 +91,7 @@ static int led_pwm_add(struct device *dev, struct led_pwm_priv *priv,
struct led_pwm *led, struct device_node *child)
{
struct led_pwm_data *led_data = &priv->leds[priv->num_leds];
+ struct pwm_args pargs;
int ret;
led_data->active_low = led->active_low;
@@ -117,7 +118,15 @@ static int led_pwm_add(struct device *dev, struct led_pwm_priv *priv,
else
led_data->cdev.brightness_set_blocking = led_pwm_set_blocking;
- led_data->period = pwm_get_period(led_data->pwm);
+ /*
+ * FIXME: pwm_apply_args() should be removed when switching to the
+ * atomic PWM API.
+ */
+ pwm_apply_args(led_data->pwm);
+
+ pwm_get_args(led_data->pwm, &pargs);
+
+ led_data->period = pargs.period;
if (!led_data->period && (led->pwm_period_ns > 0))
led_data->period = led->pwm_period_ns;
diff --git a/drivers/leds/leds-ss4200.c b/drivers/leds/leds-ss4200.c
index 046cb7008745..732eb86bc1a5 100644
--- a/drivers/leds/leds-ss4200.c
+++ b/drivers/leds/leds-ss4200.c
@@ -101,6 +101,19 @@ static struct dmi_system_id nas_led_whitelist[] __initdata = {
DMI_MATCH(DMI_PRODUCT_VERSION, "1.00.00")
}
},
+ {
+ /*
+ * FUJITSU SIEMENS SCALEO Home Server/SS4200-E
+ * BIOS V090L 12/19/2007
+ */
+ .callback = ss4200_led_dmi_callback,
+ .ident = "Fujitsu Siemens SCALEO Home Server",
+ .matches = {
+ DMI_MATCH(DMI_SYS_VENDOR, "FUJITSU SIEMENS"),
+ DMI_MATCH(DMI_PRODUCT_NAME, "SCALEO Home Server"),
+ DMI_MATCH(DMI_PRODUCT_VERSION, "1.00.00")
+ }
+ },
{}
};
diff --git a/drivers/leds/leds-tca6507.c b/drivers/leds/leds-tca6507.c
index c548ea10f0f0..45222a7f4f75 100644
--- a/drivers/leds/leds-tca6507.c
+++ b/drivers/leds/leds-tca6507.c
@@ -327,6 +327,8 @@ static void set_times(struct tca6507_chip *tca, int bank)
int result;
result = choose_times(tca->bank[bank].ontime, &c1, &c2);
+ if (result < 0)
+ return;
dev_dbg(&tca->client->dev,
"Chose on times %d(%d) %d(%d) for %dms\n",
c1, time_codes[c1],
diff --git a/drivers/leds/leds.h b/drivers/leds/leds.h
index db3f20da7221..7d38e6b9a740 100644
--- a/drivers/leds/leds.h
+++ b/drivers/leds/leds.h
@@ -30,5 +30,6 @@ void led_set_brightness_nosleep(struct led_classdev *led_cdev,
extern struct rw_semaphore leds_list_lock;
extern struct list_head leds_list;
+extern struct list_head trigger_list;
#endif /* __LEDS_H_INCLUDED */
diff --git a/drivers/leds/trigger/Kconfig b/drivers/leds/trigger/Kconfig
index 5bda6a9b56bb..9893d911390d 100644
--- a/drivers/leds/trigger/Kconfig
+++ b/drivers/leds/trigger/Kconfig
@@ -41,6 +41,14 @@ config LEDS_TRIGGER_IDE_DISK
This allows LEDs to be controlled by IDE disk activity.
If unsure, say Y.
+config LEDS_TRIGGER_MTD
+ bool "LED MTD (NAND/NOR) Trigger"
+ depends on MTD
+ depends on LEDS_TRIGGERS
+ help
+ This allows LEDs to be controlled by MTD activity.
+ If unsure, say N.
+
config LEDS_TRIGGER_HEARTBEAT
tristate "LED Heartbeat Trigger"
depends on LEDS_TRIGGERS
@@ -108,4 +116,14 @@ config LEDS_TRIGGER_CAMERA
This enables direct flash/torch on/off by the driver, kernel space.
If unsure, say Y.
+config LEDS_TRIGGER_PANIC
+ bool "LED Panic Trigger"
+ depends on LEDS_TRIGGERS
+ help
+ This allows LEDs to be configured to blink on a kernel panic.
+ Enabling this option will allow you to mark certain LEDs as panic
+ indicators, so that they blink on a kernel panic even if they are
+ set to a different trigger.
+ If unsure, say Y.
+
endif # LEDS_TRIGGERS
diff --git a/drivers/leds/trigger/Makefile b/drivers/leds/trigger/Makefile
index 1abf48dacf7e..8cc64a4f4e25 100644
--- a/drivers/leds/trigger/Makefile
+++ b/drivers/leds/trigger/Makefile
@@ -1,6 +1,7 @@
obj-$(CONFIG_LEDS_TRIGGER_TIMER) += ledtrig-timer.o
obj-$(CONFIG_LEDS_TRIGGER_ONESHOT) += ledtrig-oneshot.o
obj-$(CONFIG_LEDS_TRIGGER_IDE_DISK) += ledtrig-ide-disk.o
+obj-$(CONFIG_LEDS_TRIGGER_MTD) += ledtrig-mtd.o
obj-$(CONFIG_LEDS_TRIGGER_HEARTBEAT) += ledtrig-heartbeat.o
obj-$(CONFIG_LEDS_TRIGGER_BACKLIGHT) += ledtrig-backlight.o
obj-$(CONFIG_LEDS_TRIGGER_GPIO) += ledtrig-gpio.o
@@ -8,3 +9,4 @@ obj-$(CONFIG_LEDS_TRIGGER_CPU) += ledtrig-cpu.o
obj-$(CONFIG_LEDS_TRIGGER_DEFAULT_ON) += ledtrig-default-on.o
obj-$(CONFIG_LEDS_TRIGGER_TRANSIENT) += ledtrig-transient.o
obj-$(CONFIG_LEDS_TRIGGER_CAMERA) += ledtrig-camera.o
+obj-$(CONFIG_LEDS_TRIGGER_PANIC) += ledtrig-panic.o
diff --git a/drivers/leds/trigger/ledtrig-ide-disk.c b/drivers/leds/trigger/ledtrig-ide-disk.c
index c02a3ac3cd2b..15123d389240 100644
--- a/drivers/leds/trigger/ledtrig-ide-disk.c
+++ b/drivers/leds/trigger/ledtrig-ide-disk.c
@@ -18,10 +18,11 @@
#define BLINK_DELAY 30
DEFINE_LED_TRIGGER(ledtrig_ide);
-static unsigned long ide_blink_delay = BLINK_DELAY;
void ledtrig_ide_activity(void)
{
+ unsigned long ide_blink_delay = BLINK_DELAY;
+
led_trigger_blink_oneshot(ledtrig_ide,
&ide_blink_delay, &ide_blink_delay, 0);
}
diff --git a/drivers/leds/trigger/ledtrig-mtd.c b/drivers/leds/trigger/ledtrig-mtd.c
new file mode 100644
index 000000000000..99b5b0a4d826
--- /dev/null
+++ b/drivers/leds/trigger/ledtrig-mtd.c
@@ -0,0 +1,45 @@
+/*
+ * LED MTD trigger
+ *
+ * Copyright 2016 Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
+ *
+ * Based on LED IDE-Disk Activity Trigger
+ *
+ * Copyright 2006 Openedhand Ltd.
+ *
+ * Author: Richard Purdie <rpurdie@openedhand.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/leds.h>
+
+#define BLINK_DELAY 30
+
+DEFINE_LED_TRIGGER(ledtrig_mtd);
+DEFINE_LED_TRIGGER(ledtrig_nand);
+
+void ledtrig_mtd_activity(void)
+{
+ unsigned long blink_delay = BLINK_DELAY;
+
+ led_trigger_blink_oneshot(ledtrig_mtd,
+ &blink_delay, &blink_delay, 0);
+ led_trigger_blink_oneshot(ledtrig_nand,
+ &blink_delay, &blink_delay, 0);
+}
+EXPORT_SYMBOL(ledtrig_mtd_activity);
+
+static int __init ledtrig_mtd_init(void)
+{
+ led_trigger_register_simple("mtd", &ledtrig_mtd);
+ led_trigger_register_simple("nand-disk", &ledtrig_nand);
+
+ return 0;
+}
+device_initcall(ledtrig_mtd_init);
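A minimal sketch of a caller, assuming the ledtrig_mtd_activity()
declaration is made visible through <linux/leds.h> (the exported function
is the one above; the surrounding helper is hypothetical):

	#include <linux/leds.h>

	static void example_nand_program_page(void)
	{
		/* One-shot blink on both the "mtd" and "nand-disk" triggers */
		ledtrig_mtd_activity();

		/* ... issue the actual program/erase/read operation here ... */
	}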
diff --git a/drivers/leds/trigger/ledtrig-panic.c b/drivers/leds/trigger/ledtrig-panic.c
new file mode 100644
index 000000000000..d735526b9db4
--- /dev/null
+++ b/drivers/leds/trigger/ledtrig-panic.c
@@ -0,0 +1,77 @@
+/*
+ * Kernel Panic LED Trigger
+ *
+ * Copyright 2016 Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/notifier.h>
+#include <linux/leds.h>
+#include "../leds.h"
+
+static struct led_trigger *trigger;
+
+/*
+ * This is called in a special context by the atomic panic
+ * notifier. This means the trigger can be changed without
+ * worrying about locking.
+ */
+static void led_trigger_set_panic(struct led_classdev *led_cdev)
+{
+ struct led_trigger *trig;
+
+ list_for_each_entry(trig, &trigger_list, next_trig) {
+ if (strcmp("panic", trig->name))
+ continue;
+ if (led_cdev->trigger)
+ list_del(&led_cdev->trig_list);
+ list_add_tail(&led_cdev->trig_list, &trig->led_cdevs);
+
+ /* Avoid the delayed blink path */
+ led_cdev->blink_delay_on = 0;
+ led_cdev->blink_delay_off = 0;
+
+ led_cdev->trigger = trig;
+ if (trig->activate)
+ trig->activate(led_cdev);
+ break;
+ }
+}
+
+static int led_trigger_panic_notifier(struct notifier_block *nb,
+ unsigned long code, void *unused)
+{
+ struct led_classdev *led_cdev;
+
+ list_for_each_entry(led_cdev, &leds_list, node)
+ if (led_cdev->flags & LED_PANIC_INDICATOR)
+ led_trigger_set_panic(led_cdev);
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block led_trigger_panic_nb = {
+ .notifier_call = led_trigger_panic_notifier,
+};
+
+static long led_panic_blink(int state)
+{
+ led_trigger_event(trigger, state ? LED_FULL : LED_OFF);
+ return 0;
+}
+
+static int __init ledtrig_panic_init(void)
+{
+ atomic_notifier_chain_register(&panic_notifier_list,
+ &led_trigger_panic_nb);
+
+ led_trigger_register_simple("panic", &trigger);
+ panic_blink = led_panic_blink;
+ return 0;
+}
+device_initcall(ledtrig_panic_init);
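A minimal sketch of how an LED is opted into this panic blink path: the
LED_PANIC_INDICATOR flag is the one tested in led_trigger_panic_notifier()
above (and the one leds-gpio sets for the "panic-indicator" DT property);
the names and probe context are illustrative:

	#include <linux/leds.h>
	#include <linux/platform_device.h>

	static void example_set(struct led_classdev *cdev, enum led_brightness b)
	{
		/* drive the physical LED here */
	}

	static struct led_classdev example_led = {
		.name		= "example:red:status",
		.brightness_set	= example_set,
		.flags		= LED_PANIC_INDICATOR,	/* blink on panic */
	};

	static int example_probe(struct platform_device *pdev)
	{
		return devm_led_classdev_register(&pdev->dev, &example_led);
	}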
diff --git a/drivers/lguest/x86/core.c b/drivers/lguest/x86/core.c
index adc162c7040d..6e9042e3d2a9 100644
--- a/drivers/lguest/x86/core.c
+++ b/drivers/lguest/x86/core.c
@@ -603,7 +603,7 @@ void __init lguest_arch_host_init(void)
* doing this.
*/
get_online_cpus();
- if (cpu_has_pge) { /* We have a broader idea of "global". */
+ if (boot_cpu_has(X86_FEATURE_PGE)) { /* We have a broader idea of "global". */
/* Remember that this was originally set (for cleanup). */
cpu_had_pge = 1;
/*
diff --git a/drivers/lightnvm/core.c b/drivers/lightnvm/core.c
index 0dc9a80adb94..160c1a6838e1 100644
--- a/drivers/lightnvm/core.c
+++ b/drivers/lightnvm/core.c
@@ -30,23 +30,35 @@
#include <linux/sched/sysctl.h>
#include <uapi/linux/lightnvm.h>
-static LIST_HEAD(nvm_targets);
+static LIST_HEAD(nvm_tgt_types);
static LIST_HEAD(nvm_mgrs);
static LIST_HEAD(nvm_devices);
+static LIST_HEAD(nvm_targets);
static DECLARE_RWSEM(nvm_lock);
+static struct nvm_target *nvm_find_target(const char *name)
+{
+ struct nvm_target *tgt;
+
+ list_for_each_entry(tgt, &nvm_targets, list)
+ if (!strcmp(name, tgt->disk->disk_name))
+ return tgt;
+
+ return NULL;
+}
+
static struct nvm_tgt_type *nvm_find_target_type(const char *name)
{
struct nvm_tgt_type *tt;
- list_for_each_entry(tt, &nvm_targets, list)
+ list_for_each_entry(tt, &nvm_tgt_types, list)
if (!strcmp(name, tt->name))
return tt;
return NULL;
}
-int nvm_register_target(struct nvm_tgt_type *tt)
+int nvm_register_tgt_type(struct nvm_tgt_type *tt)
{
int ret = 0;
@@ -54,14 +66,14 @@ int nvm_register_target(struct nvm_tgt_type *tt)
if (nvm_find_target_type(tt->name))
ret = -EEXIST;
else
- list_add(&tt->list, &nvm_targets);
+ list_add(&tt->list, &nvm_tgt_types);
up_write(&nvm_lock);
return ret;
}
-EXPORT_SYMBOL(nvm_register_target);
+EXPORT_SYMBOL(nvm_register_tgt_type);
-void nvm_unregister_target(struct nvm_tgt_type *tt)
+void nvm_unregister_tgt_type(struct nvm_tgt_type *tt)
{
if (!tt)
return;
@@ -70,20 +82,20 @@ void nvm_unregister_target(struct nvm_tgt_type *tt)
list_del(&tt->list);
up_write(&nvm_lock);
}
-EXPORT_SYMBOL(nvm_unregister_target);
+EXPORT_SYMBOL(nvm_unregister_tgt_type);
void *nvm_dev_dma_alloc(struct nvm_dev *dev, gfp_t mem_flags,
dma_addr_t *dma_handler)
{
- return dev->ops->dev_dma_alloc(dev, dev->ppalist_pool, mem_flags,
+ return dev->ops->dev_dma_alloc(dev, dev->dma_pool, mem_flags,
dma_handler);
}
EXPORT_SYMBOL(nvm_dev_dma_alloc);
-void nvm_dev_dma_free(struct nvm_dev *dev, void *ppa_list,
+void nvm_dev_dma_free(struct nvm_dev *dev, void *addr,
dma_addr_t dma_handler)
{
- dev->ops->dev_dma_free(dev->ppalist_pool, ppa_list, dma_handler);
+ dev->ops->dev_dma_free(dev->dma_pool, addr, dma_handler);
}
EXPORT_SYMBOL(nvm_dev_dma_free);
@@ -214,8 +226,8 @@ void nvm_addr_to_generic_mode(struct nvm_dev *dev, struct nvm_rq *rqd)
{
int i;
- if (rqd->nr_pages > 1) {
- for (i = 0; i < rqd->nr_pages; i++)
+ if (rqd->nr_ppas > 1) {
+ for (i = 0; i < rqd->nr_ppas; i++)
rqd->ppa_list[i] = dev_to_generic_addr(dev,
rqd->ppa_list[i]);
} else {
@@ -228,8 +240,8 @@ void nvm_generic_to_addr_mode(struct nvm_dev *dev, struct nvm_rq *rqd)
{
int i;
- if (rqd->nr_pages > 1) {
- for (i = 0; i < rqd->nr_pages; i++)
+ if (rqd->nr_ppas > 1) {
+ for (i = 0; i < rqd->nr_ppas; i++)
rqd->ppa_list[i] = generic_to_dev_addr(dev,
rqd->ppa_list[i]);
} else {
@@ -239,33 +251,36 @@ void nvm_generic_to_addr_mode(struct nvm_dev *dev, struct nvm_rq *rqd)
EXPORT_SYMBOL(nvm_generic_to_addr_mode);
int nvm_set_rqd_ppalist(struct nvm_dev *dev, struct nvm_rq *rqd,
- struct ppa_addr *ppas, int nr_ppas)
+ struct ppa_addr *ppas, int nr_ppas, int vblk)
{
int i, plane_cnt, pl_idx;
- if (dev->plane_mode == NVM_PLANE_SINGLE && nr_ppas == 1) {
- rqd->nr_pages = 1;
+ if ((!vblk || dev->plane_mode == NVM_PLANE_SINGLE) && nr_ppas == 1) {
+ rqd->nr_ppas = nr_ppas;
rqd->ppa_addr = ppas[0];
return 0;
}
- plane_cnt = dev->plane_mode;
- rqd->nr_pages = plane_cnt * nr_ppas;
-
- if (dev->ops->max_phys_sect < rqd->nr_pages)
- return -EINVAL;
-
+ rqd->nr_ppas = nr_ppas;
rqd->ppa_list = nvm_dev_dma_alloc(dev, GFP_KERNEL, &rqd->dma_ppa_list);
if (!rqd->ppa_list) {
pr_err("nvm: failed to allocate dma memory\n");
return -ENOMEM;
}
- for (pl_idx = 0; pl_idx < plane_cnt; pl_idx++) {
+ if (!vblk) {
+ for (i = 0; i < nr_ppas; i++)
+ rqd->ppa_list[i] = ppas[i];
+ } else {
+ plane_cnt = dev->plane_mode;
+ rqd->nr_ppas *= plane_cnt;
+
for (i = 0; i < nr_ppas; i++) {
- ppas[i].g.pl = pl_idx;
- rqd->ppa_list[(pl_idx * nr_ppas) + i] = ppas[i];
+ for (pl_idx = 0; pl_idx < plane_cnt; pl_idx++) {
+ ppas[i].g.pl = pl_idx;
+ rqd->ppa_list[(pl_idx * nr_ppas) + i] = ppas[i];
+ }
}
}
@@ -292,7 +307,7 @@ int nvm_erase_ppa(struct nvm_dev *dev, struct ppa_addr *ppas, int nr_ppas)
memset(&rqd, 0, sizeof(struct nvm_rq));
- ret = nvm_set_rqd_ppalist(dev, &rqd, ppas, nr_ppas);
+ ret = nvm_set_rqd_ppalist(dev, &rqd, ppas, nr_ppas, 1);
if (ret)
return ret;
@@ -322,11 +337,10 @@ static void nvm_end_io_sync(struct nvm_rq *rqd)
complete(waiting);
}
-int nvm_submit_ppa(struct nvm_dev *dev, struct ppa_addr *ppa, int nr_ppas,
- int opcode, int flags, void *buf, int len)
+int __nvm_submit_ppa(struct nvm_dev *dev, struct nvm_rq *rqd, int opcode,
+ int flags, void *buf, int len)
{
DECLARE_COMPLETION_ONSTACK(wait);
- struct nvm_rq rqd;
struct bio *bio;
int ret;
unsigned long hang_check;
@@ -335,23 +349,21 @@ int nvm_submit_ppa(struct nvm_dev *dev, struct ppa_addr *ppa, int nr_ppas,
if (IS_ERR_OR_NULL(bio))
return -ENOMEM;
- memset(&rqd, 0, sizeof(struct nvm_rq));
- ret = nvm_set_rqd_ppalist(dev, &rqd, ppa, nr_ppas);
+ nvm_generic_to_addr_mode(dev, rqd);
+
+ rqd->dev = dev;
+ rqd->opcode = opcode;
+ rqd->flags = flags;
+ rqd->bio = bio;
+ rqd->wait = &wait;
+ rqd->end_io = nvm_end_io_sync;
+
+ ret = dev->ops->submit_io(dev, rqd);
if (ret) {
bio_put(bio);
return ret;
}
- rqd.opcode = opcode;
- rqd.bio = bio;
- rqd.wait = &wait;
- rqd.dev = dev;
- rqd.end_io = nvm_end_io_sync;
- rqd.flags = flags;
- nvm_generic_to_addr_mode(dev, &rqd);
-
- ret = dev->ops->submit_io(dev, &rqd);
-
/* Prevent hang_check timer from firing at us during very long I/O */
hang_check = sysctl_hung_task_timeout_secs;
if (hang_check)
@@ -359,12 +371,113 @@ int nvm_submit_ppa(struct nvm_dev *dev, struct ppa_addr *ppa, int nr_ppas,
else
wait_for_completion_io(&wait);
+ return rqd->error;
+}
+
+/**
+ * nvm_submit_ppa_list - submit user-defined ppa list to device. The user must
+ * take care to free the ppa list if necessary.
+ * @dev: device
+ * @ppa_list: user created ppa_list
+ * @nr_ppas: length of ppa_list
+ * @opcode: device opcode
+ * @flags: device flags
+ * @buf: data buffer
+ * @len: data buffer length
+ */
+int nvm_submit_ppa_list(struct nvm_dev *dev, struct ppa_addr *ppa_list,
+ int nr_ppas, int opcode, int flags, void *buf, int len)
+{
+ struct nvm_rq rqd;
+
+ if (dev->ops->max_phys_sect < nr_ppas)
+ return -EINVAL;
+
+ memset(&rqd, 0, sizeof(struct nvm_rq));
+
+ rqd.nr_ppas = nr_ppas;
+ if (nr_ppas > 1)
+ rqd.ppa_list = ppa_list;
+ else
+ rqd.ppa_addr = ppa_list[0];
+
+ return __nvm_submit_ppa(dev, &rqd, opcode, flags, buf, len);
+}
+EXPORT_SYMBOL(nvm_submit_ppa_list);
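
A minimal sketch of a hypothetical caller, only to illustrate the ownership rule stated in the kernel-doc above; the helper name and the two-sector read are invented, while nvm_dev_dma_alloc()/nvm_dev_dma_free() are the allocation helpers used elsewhere in this series:

static int example_read_two(struct nvm_dev *dev, struct ppa_addr a,
			    struct ppa_addr b, void *buf, int len)
{
	struct ppa_addr *ppa_list;
	dma_addr_t dma_ppa_list;
	int ret;

	/* the caller builds and owns the ppa list */
	ppa_list = nvm_dev_dma_alloc(dev, GFP_KERNEL, &dma_ppa_list);
	if (!ppa_list)
		return -ENOMEM;

	ppa_list[0] = a;
	ppa_list[1] = b;

	ret = nvm_submit_ppa_list(dev, ppa_list, 2, NVM_OP_PREAD,
				  NVM_IO_SLC_MODE, buf, len);

	/* nvm_submit_ppa_list() does not free the list; the caller does */
	nvm_dev_dma_free(dev, ppa_list, dma_ppa_list);
	return ret;
}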
+
+/**
+ * nvm_submit_ppa - submit PPAs to device. PPAs will automatically be unfolded
+ * as single, dual, quad plane PPAs depending on device type.
+ * @dev: device
+ * @ppa: user created ppa_list
+ * @nr_ppas: length of ppa_list
+ * @opcode: device opcode
+ * @flags: device flags
+ * @buf: data buffer
+ * @len: data buffer length
+ */
+int nvm_submit_ppa(struct nvm_dev *dev, struct ppa_addr *ppa, int nr_ppas,
+ int opcode, int flags, void *buf, int len)
+{
+ struct nvm_rq rqd;
+ int ret;
+
+ memset(&rqd, 0, sizeof(struct nvm_rq));
+ ret = nvm_set_rqd_ppalist(dev, &rqd, ppa, nr_ppas, 1);
+ if (ret)
+ return ret;
+
+ ret = __nvm_submit_ppa(dev, &rqd, opcode, flags, buf, len);
+
nvm_free_rqd_ppalist(dev, &rqd);
- return rqd.error;
+ return ret;
}
EXPORT_SYMBOL(nvm_submit_ppa);
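
For a concrete use, the nvm_scan_block() hunk later in this series reads a single SLC page through this helper; a reduced sketch with an invented function name:

static int example_read_slc_page(struct nvm_dev *dev, struct ppa_addr ppa,
				 void *buf)
{
	/* one generic-addressed PPA; unfolding to planes happens internally */
	return nvm_submit_ppa(dev, &ppa, 1, NVM_OP_PREAD, NVM_IO_SLC_MODE,
			      buf, dev->pfpg_size);
}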
+/*
+ * folds a bad block list from its plane representation to its virtual
+ * block representation. The fold is done in place and the reduced size is
+ * returned.
+ *
+ * If any of the planes' statuses is bad or grown bad, the virtual block
+ * is marked bad. Otherwise, the first plane's state acts as the block state.
+ */
+int nvm_bb_tbl_fold(struct nvm_dev *dev, u8 *blks, int nr_blks)
+{
+ int blk, offset, pl, blktype;
+
+ if (nr_blks != dev->blks_per_lun * dev->plane_mode)
+ return -EINVAL;
+
+ for (blk = 0; blk < dev->blks_per_lun; blk++) {
+ offset = blk * dev->plane_mode;
+ blktype = blks[offset];
+
+ /* Bad blocks on any planes take precedence over other types */
+ for (pl = 0; pl < dev->plane_mode; pl++) {
+ if (blks[offset + pl] &
+ (NVM_BLK_T_BAD|NVM_BLK_T_GRWN_BAD)) {
+ blktype = blks[offset + pl];
+ break;
+ }
+ }
+
+ blks[blk] = blktype;
+ }
+
+ return dev->blks_per_lun;
+}
+EXPORT_SYMBOL(nvm_bb_tbl_fold);
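
The fold itself can be illustrated in isolation; a stand-alone user-space sketch with made-up block-type constants standing in for the NVM_BLK_T_* values:

#include <stdio.h>

#define T_FREE		0x0	/* stand-ins for NVM_BLK_T_*, values illustrative */
#define T_BAD		0x1
#define T_GRWN_BAD	0x2
#define T_HOST		0x4

int main(void)
{
	/* 4 virtual blocks on a dual-plane LUN: plane 0 then plane 1 per block */
	unsigned char blks[8] = { T_FREE, T_FREE,  T_FREE, T_GRWN_BAD,
				  T_HOST, T_FREE,  T_FREE, T_FREE };
	int plane_mode = 2, nr_blks = 4;
	int blk, pl, type;

	for (blk = 0; blk < nr_blks; blk++) {
		type = blks[blk * plane_mode];
		for (pl = 0; pl < plane_mode; pl++) {
			if (blks[blk * plane_mode + pl] & (T_BAD | T_GRWN_BAD)) {
				type = blks[blk * plane_mode + pl];
				break;
			}
		}
		blks[blk] = type;	/* fold in place */
	}

	for (blk = 0; blk < nr_blks; blk++)
		printf("vblk %d -> type %d\n", blk, blks[blk]);
	/* prints 0, 2 (grown bad wins), 4 (first plane state), 0 */
	return 0;
}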
+
+int nvm_get_bb_tbl(struct nvm_dev *dev, struct ppa_addr ppa, u8 *blks)
+{
+ ppa = generic_to_dev_addr(dev, ppa);
+
+ return dev->ops->get_bb_tbl(dev, ppa, blks);
+}
+EXPORT_SYMBOL(nvm_get_bb_tbl);
+
static int nvm_init_slc_tbl(struct nvm_dev *dev, struct nvm_id_group *grp)
{
int i;
@@ -414,6 +527,7 @@ static int nvm_core_init(struct nvm_dev *dev)
{
struct nvm_id *id = &dev->identity;
struct nvm_id_group *grp = &id->groups[0];
+ int ret;
/* device values */
dev->nr_chnls = grp->num_ch;
@@ -421,6 +535,8 @@ static int nvm_core_init(struct nvm_dev *dev)
dev->pgs_per_blk = grp->num_pg;
dev->blks_per_lun = grp->num_blk;
dev->nr_planes = grp->num_pln;
+ dev->fpg_size = grp->fpg_sz;
+ dev->pfpg_size = grp->fpg_sz * grp->num_pln;
dev->sec_size = grp->csecs;
dev->oob_size = grp->sos;
dev->sec_per_pg = grp->fpg_sz / grp->csecs;
@@ -430,33 +546,16 @@ static int nvm_core_init(struct nvm_dev *dev)
dev->plane_mode = NVM_PLANE_SINGLE;
dev->max_rq_size = dev->ops->max_phys_sect * dev->sec_size;
- if (grp->mtype != 0) {
- pr_err("nvm: memory type not supported\n");
- return -EINVAL;
- }
-
- switch (grp->fmtype) {
- case NVM_ID_FMTYPE_SLC:
- if (nvm_init_slc_tbl(dev, grp))
- return -ENOMEM;
- break;
- case NVM_ID_FMTYPE_MLC:
- if (nvm_init_mlc_tbl(dev, grp))
- return -ENOMEM;
- break;
- default:
- pr_err("nvm: flash type not supported\n");
- return -EINVAL;
- }
-
- if (!dev->lps_per_blk)
- pr_info("nvm: lower page programming table missing\n");
-
if (grp->mpos & 0x020202)
dev->plane_mode = NVM_PLANE_DOUBLE;
if (grp->mpos & 0x040404)
dev->plane_mode = NVM_PLANE_QUAD;
+ if (grp->mtype != 0) {
+ pr_err("nvm: memory type not supported\n");
+ return -EINVAL;
+ }
+
/* calculated values */
dev->sec_per_pl = dev->sec_per_pg * dev->nr_planes;
dev->sec_per_blk = dev->sec_per_pl * dev->pgs_per_blk;
@@ -468,11 +567,73 @@ static int nvm_core_init(struct nvm_dev *dev)
sizeof(unsigned long), GFP_KERNEL);
if (!dev->lun_map)
return -ENOMEM;
- INIT_LIST_HEAD(&dev->online_targets);
+
+ switch (grp->fmtype) {
+ case NVM_ID_FMTYPE_SLC:
+ if (nvm_init_slc_tbl(dev, grp)) {
+ ret = -ENOMEM;
+ goto err_fmtype;
+ }
+ break;
+ case NVM_ID_FMTYPE_MLC:
+ if (nvm_init_mlc_tbl(dev, grp)) {
+ ret = -ENOMEM;
+ goto err_fmtype;
+ }
+ break;
+ default:
+ pr_err("nvm: flash type not supported\n");
+ ret = -EINVAL;
+ goto err_fmtype;
+ }
+
mutex_init(&dev->mlock);
spin_lock_init(&dev->lock);
return 0;
+err_fmtype:
+ kfree(dev->lun_map);
+ return ret;
+}
+
+static void nvm_remove_target(struct nvm_target *t)
+{
+ struct nvm_tgt_type *tt = t->type;
+ struct gendisk *tdisk = t->disk;
+ struct request_queue *q = tdisk->queue;
+
+ lockdep_assert_held(&nvm_lock);
+
+ del_gendisk(tdisk);
+ blk_cleanup_queue(q);
+
+ if (tt->exit)
+ tt->exit(tdisk->private_data);
+
+ put_disk(tdisk);
+
+ list_del(&t->list);
+ kfree(t);
+}
+
+static void nvm_free_mgr(struct nvm_dev *dev)
+{
+ struct nvm_target *tgt, *tmp;
+
+ if (!dev->mt)
+ return;
+
+ down_write(&nvm_lock);
+ list_for_each_entry_safe(tgt, tmp, &nvm_targets, list) {
+ if (tgt->dev != dev)
+ continue;
+
+ nvm_remove_target(tgt);
+ }
+ up_write(&nvm_lock);
+
+ dev->mt->unregister_mgr(dev);
+ dev->mt = NULL;
}
static void nvm_free(struct nvm_dev *dev)
@@ -480,10 +641,10 @@ static void nvm_free(struct nvm_dev *dev)
if (!dev)
return;
- if (dev->mt)
- dev->mt->unregister_mgr(dev);
+ nvm_free_mgr(dev);
kfree(dev->lptbl);
+ kfree(dev->lun_map);
}
static int nvm_init(struct nvm_dev *dev)
@@ -530,8 +691,8 @@ err:
static void nvm_exit(struct nvm_dev *dev)
{
- if (dev->ppalist_pool)
- dev->ops->destroy_dma_pool(dev->ppalist_pool);
+ if (dev->dma_pool)
+ dev->ops->destroy_dma_pool(dev->dma_pool);
nvm_free(dev);
pr_info("nvm: successfully unloaded\n");
@@ -565,9 +726,9 @@ int nvm_register(struct request_queue *q, char *disk_name,
}
if (dev->ops->max_phys_sect > 1) {
- dev->ppalist_pool = dev->ops->create_dma_pool(dev, "ppalist");
- if (!dev->ppalist_pool) {
- pr_err("nvm: could not create ppa pool\n");
+ dev->dma_pool = dev->ops->create_dma_pool(dev, "ppalist");
+ if (!dev->dma_pool) {
+ pr_err("nvm: could not create dma pool\n");
ret = -ENOMEM;
goto err_init;
}
@@ -613,7 +774,6 @@ void nvm_unregister(char *disk_name)
up_write(&nvm_lock);
nvm_exit(dev);
- kfree(dev->lun_map);
kfree(dev);
}
EXPORT_SYMBOL(nvm_unregister);
@@ -645,12 +805,11 @@ static int nvm_create_target(struct nvm_dev *dev,
return -EINVAL;
}
- list_for_each_entry(t, &dev->online_targets, list) {
- if (!strcmp(create->tgtname, t->disk->disk_name)) {
- pr_err("nvm: target name already exists.\n");
- up_write(&nvm_lock);
- return -EINVAL;
- }
+ t = nvm_find_target(create->tgtname);
+ if (t) {
+ pr_err("nvm: target name already exists.\n");
+ up_write(&nvm_lock);
+ return -EINVAL;
}
up_write(&nvm_lock);
@@ -688,9 +847,10 @@ static int nvm_create_target(struct nvm_dev *dev,
t->type = tt;
t->disk = tdisk;
+ t->dev = dev;
down_write(&nvm_lock);
- list_add_tail(&t->list, &dev->online_targets);
+ list_add_tail(&t->list, &nvm_targets);
up_write(&nvm_lock);
return 0;
@@ -703,26 +863,6 @@ err_t:
return -ENOMEM;
}
-static void nvm_remove_target(struct nvm_target *t)
-{
- struct nvm_tgt_type *tt = t->type;
- struct gendisk *tdisk = t->disk;
- struct request_queue *q = tdisk->queue;
-
- lockdep_assert_held(&nvm_lock);
-
- del_gendisk(tdisk);
- blk_cleanup_queue(q);
-
- if (tt->exit)
- tt->exit(tdisk->private_data);
-
- put_disk(tdisk);
-
- list_del(&t->list);
- kfree(t);
-}
-
static int __nvm_configure_create(struct nvm_ioctl_create *create)
{
struct nvm_dev *dev;
@@ -753,26 +893,19 @@ static int __nvm_configure_create(struct nvm_ioctl_create *create)
static int __nvm_configure_remove(struct nvm_ioctl_remove *remove)
{
- struct nvm_target *t = NULL;
- struct nvm_dev *dev;
- int ret = -1;
+ struct nvm_target *t;
down_write(&nvm_lock);
- list_for_each_entry(dev, &nvm_devices, devices)
- list_for_each_entry(t, &dev->online_targets, list) {
- if (!strcmp(remove->tgtname, t->disk->disk_name)) {
- nvm_remove_target(t);
- ret = 0;
- break;
- }
- }
- up_write(&nvm_lock);
-
- if (ret) {
+ t = nvm_find_target(remove->tgtname);
+ if (!t) {
pr_err("nvm: target \"%s\" doesn't exist.\n", remove->tgtname);
+ up_write(&nvm_lock);
return -EINVAL;
}
+ nvm_remove_target(t);
+ up_write(&nvm_lock);
+
return 0;
}
@@ -921,7 +1054,7 @@ static long nvm_ioctl_info(struct file *file, void __user *arg)
info->version[2] = NVM_VERSION_PATCH;
down_write(&nvm_lock);
- list_for_each_entry(tt, &nvm_targets, list) {
+ list_for_each_entry(tt, &nvm_tgt_types, list) {
struct nvm_ioctl_info_tgt *tgt = &info->tgts[tgt_iter];
tgt->version[0] = tt->version[0];
@@ -1118,10 +1251,7 @@ static long nvm_ioctl_dev_factory(struct file *file, void __user *arg)
return -EINVAL;
}
- if (dev->mt) {
- dev->mt->unregister_mgr(dev);
- dev->mt = NULL;
- }
+ nvm_free_mgr(dev);
if (dev->identity.cap & NVM_ID_DCAP_BBLKMGMT)
return nvm_dev_factory(dev, fact.flags);
diff --git a/drivers/lightnvm/gennvm.c b/drivers/lightnvm/gennvm.c
index 72e124a3927d..ec9fb6876e38 100644
--- a/drivers/lightnvm/gennvm.c
+++ b/drivers/lightnvm/gennvm.c
@@ -129,27 +129,25 @@ static int gennvm_luns_init(struct nvm_dev *dev, struct gen_nvm *gn)
return 0;
}
-static int gennvm_block_bb(struct ppa_addr ppa, int nr_blocks, u8 *blks,
- void *private)
+static int gennvm_block_bb(struct gen_nvm *gn, struct ppa_addr ppa,
+ u8 *blks, int nr_blks)
{
- struct gen_nvm *gn = private;
struct nvm_dev *dev = gn->dev;
struct gen_lun *lun;
struct nvm_block *blk;
int i;
+ nr_blks = nvm_bb_tbl_fold(dev, blks, nr_blks);
+ if (nr_blks < 0)
+ return nr_blks;
+
lun = &gn->luns[(dev->luns_per_chnl * ppa.g.ch) + ppa.g.lun];
- for (i = 0; i < nr_blocks; i++) {
+ for (i = 0; i < nr_blks; i++) {
if (blks[i] == 0)
continue;
blk = &lun->vlun.blocks[i];
- if (!blk) {
- pr_err("gennvm: BB data is out of bounds.\n");
- return -EINVAL;
- }
-
list_move_tail(&blk->list, &lun->bb_list);
lun->vlun.nr_bad_blocks++;
lun->vlun.nr_free_blocks--;
@@ -216,13 +214,21 @@ static int gennvm_blocks_init(struct nvm_dev *dev, struct gen_nvm *gn)
struct gen_lun *lun;
struct nvm_block *block;
sector_t lun_iter, blk_iter, cur_block_id = 0;
- int ret;
+ int ret, nr_blks;
+ u8 *blks;
+
+ nr_blks = dev->blks_per_lun * dev->plane_mode;
+ blks = kmalloc(nr_blks, GFP_KERNEL);
+ if (!blks)
+ return -ENOMEM;
gennvm_for_each_lun(gn, lun, lun_iter) {
lun->vlun.blocks = vzalloc(sizeof(struct nvm_block) *
dev->blks_per_lun);
- if (!lun->vlun.blocks)
+ if (!lun->vlun.blocks) {
+ kfree(blks);
return -ENOMEM;
+ }
for (blk_iter = 0; blk_iter < dev->blks_per_lun; blk_iter++) {
block = &lun->vlun.blocks[blk_iter];
@@ -246,14 +252,15 @@ static int gennvm_blocks_init(struct nvm_dev *dev, struct gen_nvm *gn)
ppa.ppa = 0;
ppa.g.ch = lun->vlun.chnl_id;
- ppa.g.lun = lun->vlun.id;
- ppa = generic_to_dev_addr(dev, ppa);
+ ppa.g.lun = lun->vlun.lun_id;
+
+ ret = nvm_get_bb_tbl(dev, ppa, blks);
+ if (ret)
+ pr_err("gennvm: could not get BB table\n");
- ret = dev->ops->get_bb_tbl(dev, ppa,
- dev->blks_per_lun,
- gennvm_block_bb, gn);
+ ret = gennvm_block_bb(gn, ppa, blks, nr_blks);
if (ret)
- pr_err("gennvm: could not read BB table\n");
+ pr_err("gennvm: BB table map failed\n");
}
}
@@ -266,6 +273,7 @@ static int gennvm_blocks_init(struct nvm_dev *dev, struct gen_nvm *gn)
}
}
+ kfree(blks);
return 0;
}
@@ -399,64 +407,60 @@ static void gennvm_put_blk(struct nvm_dev *dev, struct nvm_block *blk)
spin_unlock(&vlun->lock);
}
-static void gennvm_blk_set_type(struct nvm_dev *dev, struct ppa_addr *ppa,
- int type)
+static void gennvm_mark_blk(struct nvm_dev *dev, struct ppa_addr ppa, int type)
{
struct gen_nvm *gn = dev->mp;
struct gen_lun *lun;
struct nvm_block *blk;
- if (unlikely(ppa->g.ch > dev->nr_chnls ||
- ppa->g.lun > dev->luns_per_chnl ||
- ppa->g.blk > dev->blks_per_lun)) {
+ pr_debug("gennvm: ppa (ch: %u lun: %u blk: %u pg: %u) -> %u\n",
+ ppa.g.ch, ppa.g.lun, ppa.g.blk, ppa.g.pg, type);
+
+ if (unlikely(ppa.g.ch > dev->nr_chnls ||
+ ppa.g.lun > dev->luns_per_chnl ||
+ ppa.g.blk > dev->blks_per_lun)) {
WARN_ON_ONCE(1);
pr_err("gennvm: ppa broken (ch: %u > %u lun: %u > %u blk: %u > %u",
- ppa->g.ch, dev->nr_chnls,
- ppa->g.lun, dev->luns_per_chnl,
- ppa->g.blk, dev->blks_per_lun);
+ ppa.g.ch, dev->nr_chnls,
+ ppa.g.lun, dev->luns_per_chnl,
+ ppa.g.blk, dev->blks_per_lun);
return;
}
- lun = &gn->luns[ppa->g.lun * ppa->g.ch];
- blk = &lun->vlun.blocks[ppa->g.blk];
+ lun = &gn->luns[ppa.g.lun * ppa.g.ch];
+ blk = &lun->vlun.blocks[ppa.g.blk];
/* will be moved to bb list on put_blk from target */
blk->state = type;
}
-/* mark block bad. It is expected the target recover from the error. */
+/*
+ * Mark a block bad in gennvm. It is expected that the target recovers separately.
+ */
static void gennvm_mark_blk_bad(struct nvm_dev *dev, struct nvm_rq *rqd)
{
- int i;
-
- if (!dev->ops->set_bb_tbl)
- return;
-
- if (dev->ops->set_bb_tbl(dev, rqd, 1))
- return;
+ int bit = -1;
+ int max_secs = dev->ops->max_phys_sect;
+ void *comp_bits = &rqd->ppa_status;
nvm_addr_to_generic_mode(dev, rqd);
/* look up blocks and mark them as bad */
- if (rqd->nr_pages > 1)
- for (i = 0; i < rqd->nr_pages; i++)
- gennvm_blk_set_type(dev, &rqd->ppa_list[i],
- NVM_BLK_ST_BAD);
- else
- gennvm_blk_set_type(dev, &rqd->ppa_addr, NVM_BLK_ST_BAD);
+ if (rqd->nr_ppas == 1) {
+ gennvm_mark_blk(dev, rqd->ppa_addr, NVM_BLK_ST_BAD);
+ return;
+ }
+
+ while ((bit = find_next_bit(comp_bits, max_secs, bit + 1)) < max_secs)
+ gennvm_mark_blk(dev, rqd->ppa_list[bit], NVM_BLK_ST_BAD);
}
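/*
 * Aside, not part of the patch, assuming ppa_status is the device-reported
 * per-sector error bitmap: if ppa_status were 0b0101, the loop above would
 * visit bits 0 and 2, so only the blocks behind ppa_list[0] and ppa_list[2]
 * get marked bad, rather than every block touched by the request.
 */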
static void gennvm_end_io(struct nvm_rq *rqd)
{
struct nvm_tgt_instance *ins = rqd->ins;
- switch (rqd->error) {
- case NVM_RSP_SUCCESS:
- case NVM_RSP_ERR_EMPTYPAGE:
- break;
- case NVM_RSP_ERR_FAILWRITE:
+ if (rqd->error == NVM_RSP_ERR_FAILWRITE)
gennvm_mark_blk_bad(rqd->dev, rqd);
- }
ins->tt->end_io(rqd);
}
@@ -539,6 +543,8 @@ static struct nvmm_type gennvm = {
.submit_io = gennvm_submit_io,
.erase_blk = gennvm_erase_blk,
+ .mark_blk = gennvm_mark_blk,
+
.get_lun = gennvm_get_lun,
.reserve_lun = gennvm_reserve_lun,
.release_lun = gennvm_release_lun,
diff --git a/drivers/lightnvm/rrpc.c b/drivers/lightnvm/rrpc.c
index 3ab6495c3fd8..2103e97a974f 100644
--- a/drivers/lightnvm/rrpc.c
+++ b/drivers/lightnvm/rrpc.c
@@ -405,9 +405,8 @@ static void rrpc_block_gc(struct work_struct *work)
ws_gc);
struct rrpc *rrpc = gcb->rrpc;
struct rrpc_block *rblk = gcb->rblk;
+ struct rrpc_lun *rlun = rblk->rlun;
struct nvm_dev *dev = rrpc->dev;
- struct nvm_lun *lun = rblk->parent->lun;
- struct rrpc_lun *rlun = &rrpc->luns[lun->id - rrpc->lun_offset];
mempool_free(gcb, rrpc->gcb_pool);
pr_debug("nvm: block '%lu' being reclaimed\n", rblk->parent->id);
@@ -508,9 +507,9 @@ static void rrpc_gc_queue(struct work_struct *work)
ws_gc);
struct rrpc *rrpc = gcb->rrpc;
struct rrpc_block *rblk = gcb->rblk;
+ struct rrpc_lun *rlun = rblk->rlun;
struct nvm_lun *lun = rblk->parent->lun;
struct nvm_block *blk = rblk->parent;
- struct rrpc_lun *rlun = &rrpc->luns[lun->id - rrpc->lun_offset];
spin_lock(&rlun->lock);
list_add_tail(&rblk->prio, &rlun->prio_list);
@@ -696,7 +695,7 @@ static void rrpc_end_io(struct nvm_rq *rqd)
{
struct rrpc *rrpc = container_of(rqd->ins, struct rrpc, instance);
struct rrpc_rq *rrqd = nvm_rq_to_pdu(rqd);
- uint8_t npages = rqd->nr_pages;
+ uint8_t npages = rqd->nr_ppas;
sector_t laddr = rrpc_get_laddr(rqd->bio) - npages;
if (bio_data_dir(rqd->bio) == WRITE)
@@ -711,8 +710,6 @@ static void rrpc_end_io(struct nvm_rq *rqd)
if (npages > 1)
nvm_dev_dma_free(rrpc->dev, rqd->ppa_list, rqd->dma_ppa_list);
- if (rqd->metadata)
- nvm_dev_dma_free(rrpc->dev, rqd->metadata, rqd->dma_metadata);
mempool_free(rqd, rrpc->rq_pool);
}
@@ -886,7 +883,7 @@ static int rrpc_submit_io(struct rrpc *rrpc, struct bio *bio,
bio_get(bio);
rqd->bio = bio;
rqd->ins = &rrpc->instance;
- rqd->nr_pages = nr_pages;
+ rqd->nr_ppas = nr_pages;
rrq->flags = flags;
err = nvm_submit_io(rrpc->dev, rqd);
@@ -895,7 +892,7 @@ static int rrpc_submit_io(struct rrpc *rrpc, struct bio *bio,
bio_put(bio);
if (!(flags & NVM_IOTYPE_GC)) {
rrpc_unlock_rq(rrpc, rqd);
- if (rqd->nr_pages > 1)
+ if (rqd->nr_ppas > 1)
nvm_dev_dma_free(rrpc->dev,
rqd->ppa_list, rqd->dma_ppa_list);
}
@@ -1039,11 +1036,8 @@ static int rrpc_map_init(struct rrpc *rrpc)
{
struct nvm_dev *dev = rrpc->dev;
sector_t i;
- u64 slba;
int ret;
- slba = rrpc->soffset >> (ilog2(dev->sec_size) - 9);
-
rrpc->trans_map = vzalloc(sizeof(struct rrpc_addr) * rrpc->nr_sects);
if (!rrpc->trans_map)
return -ENOMEM;
@@ -1065,8 +1059,8 @@ static int rrpc_map_init(struct rrpc *rrpc)
return 0;
/* Bring up the mapping table from device */
- ret = dev->ops->get_l2p_tbl(dev, slba, rrpc->nr_sects, rrpc_l2p_update,
- rrpc);
+ ret = dev->ops->get_l2p_tbl(dev, rrpc->soffset, rrpc->nr_sects,
+ rrpc_l2p_update, rrpc);
if (ret) {
pr_err("nvm: rrpc: could not read L2P table.\n");
return -EINVAL;
@@ -1207,10 +1201,6 @@ static int rrpc_luns_init(struct rrpc *rrpc, int lun_begin, int lun_end)
INIT_WORK(&rlun->ws_gc, rrpc_lun_gc);
spin_lock_init(&rlun->lock);
-
- rrpc->total_blocks += dev->blks_per_lun;
- rrpc->nr_sects += dev->sec_per_lun;
-
}
return 0;
@@ -1224,18 +1214,24 @@ static int rrpc_area_init(struct rrpc *rrpc, sector_t *begin)
struct nvm_dev *dev = rrpc->dev;
struct nvmm_type *mt = dev->mt;
sector_t size = rrpc->nr_sects * dev->sec_size;
+ int ret;
size >>= 9;
- return mt->get_area(dev, begin, size);
+ ret = mt->get_area(dev, begin, size);
+ if (!ret)
+ *begin >>= (ilog2(dev->sec_size) - 9);
+
+ return ret;
}
static void rrpc_area_free(struct rrpc *rrpc)
{
struct nvm_dev *dev = rrpc->dev;
struct nvmm_type *mt = dev->mt;
+ sector_t begin = rrpc->soffset << (ilog2(dev->sec_size) - 9);
- mt->put_area(dev, rrpc->soffset);
+ mt->put_area(dev, begin);
}
static void rrpc_free(struct rrpc *rrpc)
@@ -1268,7 +1264,7 @@ static sector_t rrpc_capacity(void *private)
sector_t reserved, provisioned;
/* cur, gc, and two emergency blocks for each lun */
- reserved = rrpc->nr_luns * dev->max_pages_per_blk * 4;
+ reserved = rrpc->nr_luns * dev->sec_per_blk * 4;
provisioned = rrpc->nr_sects - reserved;
if (reserved > rrpc->nr_sects) {
@@ -1388,6 +1384,8 @@ static void *rrpc_init(struct nvm_dev *dev, struct gendisk *tdisk,
INIT_WORK(&rrpc->ws_requeue, rrpc_requeue);
rrpc->nr_luns = lun_end - lun_begin + 1;
+ rrpc->total_blocks = (unsigned long)dev->blks_per_lun * rrpc->nr_luns;
+ rrpc->nr_sects = (unsigned long long)dev->sec_per_lun * rrpc->nr_luns;
/* simple round-robin strategy */
atomic_set(&rrpc->next_lun, -1);
@@ -1468,12 +1466,12 @@ static struct nvm_tgt_type tt_rrpc = {
static int __init rrpc_module_init(void)
{
- return nvm_register_target(&tt_rrpc);
+ return nvm_register_tgt_type(&tt_rrpc);
}
static void rrpc_module_exit(void)
{
- nvm_unregister_target(&tt_rrpc);
+ nvm_unregister_tgt_type(&tt_rrpc);
}
module_init(rrpc_module_init);
diff --git a/drivers/lightnvm/rrpc.h b/drivers/lightnvm/rrpc.h
index 2653484a3b40..87e84b5fc1cc 100644
--- a/drivers/lightnvm/rrpc.h
+++ b/drivers/lightnvm/rrpc.h
@@ -251,7 +251,7 @@ static inline void rrpc_unlock_laddr(struct rrpc *rrpc,
static inline void rrpc_unlock_rq(struct rrpc *rrpc, struct nvm_rq *rqd)
{
struct rrpc_inflight_rq *r = rrpc_get_inflight_rq(rqd);
- uint8_t pages = rqd->nr_pages;
+ uint8_t pages = rqd->nr_ppas;
BUG_ON((r->l_start + pages) > rrpc->nr_sects);
diff --git a/drivers/lightnvm/sysblk.c b/drivers/lightnvm/sysblk.c
index 321de1f154c5..994697ac786e 100644
--- a/drivers/lightnvm/sysblk.c
+++ b/drivers/lightnvm/sysblk.c
@@ -93,12 +93,51 @@ void nvm_setup_sysblk_scan(struct nvm_dev *dev, struct sysblk_scan *s,
s->nr_rows = nvm_setup_sysblks(dev, sysblk_ppas);
}
-static int sysblk_get_host_blks(struct ppa_addr ppa, int nr_blks, u8 *blks,
- void *private)
+static int sysblk_get_free_blks(struct nvm_dev *dev, struct ppa_addr ppa,
+ u8 *blks, int nr_blks,
+ struct sysblk_scan *s)
+{
+ struct ppa_addr *sppa;
+ int i, blkid = 0;
+
+ nr_blks = nvm_bb_tbl_fold(dev, blks, nr_blks);
+ if (nr_blks < 0)
+ return nr_blks;
+
+ for (i = 0; i < nr_blks; i++) {
+ if (blks[i] == NVM_BLK_T_HOST)
+ return -EEXIST;
+
+ if (blks[i] != NVM_BLK_T_FREE)
+ continue;
+
+ sppa = &s->ppas[scan_ppa_idx(s->row, blkid)];
+ sppa->g.ch = ppa.g.ch;
+ sppa->g.lun = ppa.g.lun;
+ sppa->g.blk = i;
+ s->nr_ppas++;
+ blkid++;
+
+ pr_debug("nvm: use (%u %u %u) as sysblk\n",
+ sppa->g.ch, sppa->g.lun, sppa->g.blk);
+ if (blkid > MAX_BLKS_PR_SYSBLK - 1)
+ return 0;
+ }
+
+ pr_err("nvm: sysblk failed get sysblk\n");
+ return -EINVAL;
+}
+
+static int sysblk_get_host_blks(struct nvm_dev *dev, struct ppa_addr ppa,
+ u8 *blks, int nr_blks,
+ struct sysblk_scan *s)
{
- struct sysblk_scan *s = private;
int i, nr_sysblk = 0;
+ nr_blks = nvm_bb_tbl_fold(dev, blks, nr_blks);
+ if (nr_blks < 0)
+ return nr_blks;
+
for (i = 0; i < nr_blks; i++) {
if (blks[i] != NVM_BLK_T_HOST)
continue;
@@ -119,26 +158,42 @@ static int sysblk_get_host_blks(struct ppa_addr ppa, int nr_blks, u8 *blks,
}
static int nvm_get_all_sysblks(struct nvm_dev *dev, struct sysblk_scan *s,
- struct ppa_addr *ppas, nvm_bb_update_fn *fn)
+ struct ppa_addr *ppas, int get_free)
{
- struct ppa_addr dppa;
- int i, ret;
+ int i, nr_blks, ret = 0;
+ u8 *blks;
s->nr_ppas = 0;
+ nr_blks = dev->blks_per_lun * dev->plane_mode;
+
+ blks = kmalloc(nr_blks, GFP_KERNEL);
+ if (!blks)
+ return -ENOMEM;
for (i = 0; i < s->nr_rows; i++) {
- dppa = generic_to_dev_addr(dev, ppas[i]);
s->row = i;
- ret = dev->ops->get_bb_tbl(dev, dppa, dev->blks_per_lun, fn, s);
+ ret = nvm_get_bb_tbl(dev, ppas[i], blks);
if (ret) {
pr_err("nvm: failed bb tbl for ppa (%u %u)\n",
ppas[i].g.ch,
ppas[i].g.blk);
- return ret;
+ goto err_get;
}
+
+ if (get_free)
+ ret = sysblk_get_free_blks(dev, ppas[i], blks, nr_blks,
+ s);
+ else
+ ret = sysblk_get_host_blks(dev, ppas[i], blks, nr_blks,
+ s);
+
+ if (ret)
+ goto err_get;
}
+err_get:
+ kfree(blks);
return ret;
}
@@ -154,13 +209,12 @@ static int nvm_scan_block(struct nvm_dev *dev, struct ppa_addr *ppa,
struct nvm_system_block *sblk)
{
struct nvm_system_block *cur;
- int pg, cursz, ret, found = 0;
+ int pg, ret, found = 0;
	/* the full buffer for a flash page is allocated. Only the first part of it
	 * contains the system block information
*/
- cursz = dev->sec_size * dev->sec_per_pg * dev->nr_planes;
- cur = kmalloc(cursz, GFP_KERNEL);
+ cur = kmalloc(dev->pfpg_size, GFP_KERNEL);
if (!cur)
return -ENOMEM;
@@ -169,7 +223,7 @@ static int nvm_scan_block(struct nvm_dev *dev, struct ppa_addr *ppa,
ppa->g.pg = ppa_to_slc(dev, pg);
ret = nvm_submit_ppa(dev, ppa, 1, NVM_OP_PREAD, NVM_IO_SLC_MODE,
- cur, cursz);
+ cur, dev->pfpg_size);
if (ret) {
if (ret == NVM_RSP_ERR_EMPTYPAGE) {
pr_debug("nvm: sysblk scan empty ppa (%u %u %u %u)\n",
@@ -223,10 +277,10 @@ static int nvm_set_bb_tbl(struct nvm_dev *dev, struct sysblk_scan *s, int type)
memset(&rqd, 0, sizeof(struct nvm_rq));
- nvm_set_rqd_ppalist(dev, &rqd, s->ppas, s->nr_ppas);
+ nvm_set_rqd_ppalist(dev, &rqd, s->ppas, s->nr_ppas, 1);
nvm_generic_to_addr_mode(dev, &rqd);
- ret = dev->ops->set_bb_tbl(dev, &rqd, type);
+ ret = dev->ops->set_bb_tbl(dev, &rqd.ppa_addr, rqd.nr_ppas, type);
nvm_free_rqd_ppalist(dev, &rqd);
if (ret) {
pr_err("nvm: sysblk failed bb mark\n");
@@ -236,50 +290,17 @@ static int nvm_set_bb_tbl(struct nvm_dev *dev, struct sysblk_scan *s, int type)
return 0;
}
-static int sysblk_get_free_blks(struct ppa_addr ppa, int nr_blks, u8 *blks,
- void *private)
-{
- struct sysblk_scan *s = private;
- struct ppa_addr *sppa;
- int i, blkid = 0;
-
- for (i = 0; i < nr_blks; i++) {
- if (blks[i] == NVM_BLK_T_HOST)
- return -EEXIST;
-
- if (blks[i] != NVM_BLK_T_FREE)
- continue;
-
- sppa = &s->ppas[scan_ppa_idx(s->row, blkid)];
- sppa->g.ch = ppa.g.ch;
- sppa->g.lun = ppa.g.lun;
- sppa->g.blk = i;
- s->nr_ppas++;
- blkid++;
-
- pr_debug("nvm: use (%u %u %u) as sysblk\n",
- sppa->g.ch, sppa->g.lun, sppa->g.blk);
- if (blkid > MAX_BLKS_PR_SYSBLK - 1)
- return 0;
- }
-
- pr_err("nvm: sysblk failed get sysblk\n");
- return -EINVAL;
-}
-
static int nvm_write_and_verify(struct nvm_dev *dev, struct nvm_sb_info *info,
struct sysblk_scan *s)
{
struct nvm_system_block nvmsb;
void *buf;
- int i, sect, ret, bufsz;
+ int i, sect, ret = 0;
struct ppa_addr *ppas;
nvm_cpu_to_sysblk(&nvmsb, info);
- /* buffer for flash page */
- bufsz = dev->sec_size * dev->sec_per_pg * dev->nr_planes;
- buf = kzalloc(bufsz, GFP_KERNEL);
+ buf = kzalloc(dev->pfpg_size, GFP_KERNEL);
if (!buf)
return -ENOMEM;
memcpy(buf, &nvmsb, sizeof(struct nvm_system_block));
@@ -309,7 +330,7 @@ static int nvm_write_and_verify(struct nvm_dev *dev, struct nvm_sb_info *info,
}
ret = nvm_submit_ppa(dev, ppas, dev->sec_per_pg, NVM_OP_PWRITE,
- NVM_IO_SLC_MODE, buf, bufsz);
+ NVM_IO_SLC_MODE, buf, dev->pfpg_size);
if (ret) {
pr_err("nvm: sysblk failed program (%u %u %u)\n",
ppas[0].g.ch,
@@ -319,7 +340,7 @@ static int nvm_write_and_verify(struct nvm_dev *dev, struct nvm_sb_info *info,
}
ret = nvm_submit_ppa(dev, ppas, dev->sec_per_pg, NVM_OP_PREAD,
- NVM_IO_SLC_MODE, buf, bufsz);
+ NVM_IO_SLC_MODE, buf, dev->pfpg_size);
if (ret) {
pr_err("nvm: sysblk failed read (%u %u %u)\n",
ppas[0].g.ch,
@@ -388,7 +409,7 @@ int nvm_get_sysblock(struct nvm_dev *dev, struct nvm_sb_info *info)
nvm_setup_sysblk_scan(dev, &s, sysblk_ppas);
mutex_lock(&dev->mlock);
- ret = nvm_get_all_sysblks(dev, &s, sysblk_ppas, sysblk_get_host_blks);
+ ret = nvm_get_all_sysblks(dev, &s, sysblk_ppas, 0);
if (ret)
goto err_sysblk;
@@ -448,7 +469,7 @@ int nvm_update_sysblock(struct nvm_dev *dev, struct nvm_sb_info *new)
nvm_setup_sysblk_scan(dev, &s, sysblk_ppas);
mutex_lock(&dev->mlock);
- ret = nvm_get_all_sysblks(dev, &s, sysblk_ppas, sysblk_get_host_blks);
+ ret = nvm_get_all_sysblks(dev, &s, sysblk_ppas, 0);
if (ret)
goto err_sysblk;
@@ -546,7 +567,7 @@ int nvm_init_sysblock(struct nvm_dev *dev, struct nvm_sb_info *info)
nvm_setup_sysblk_scan(dev, &s, sysblk_ppas);
mutex_lock(&dev->mlock);
- ret = nvm_get_all_sysblks(dev, &s, sysblk_ppas, sysblk_get_free_blks);
+ ret = nvm_get_all_sysblks(dev, &s, sysblk_ppas, 1);
if (ret)
goto err_mark;
@@ -561,52 +582,49 @@ err_mark:
return ret;
}
-struct factory_blks {
- struct nvm_dev *dev;
- int flags;
- unsigned long *blks;
-};
-
static int factory_nblks(int nblks)
{
/* Round up to nearest BITS_PER_LONG */
return (nblks + (BITS_PER_LONG - 1)) & ~(BITS_PER_LONG - 1);
}
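
The round-up can be checked in isolation; a throwaway user-space sketch assuming BITS_PER_LONG is 64:

#include <stdio.h>

#define BITS_PER_LONG 64	/* assumption for this example */

static int factory_nblks(int nblks)
{
	/* round up to the nearest multiple of BITS_PER_LONG */
	return (nblks + (BITS_PER_LONG - 1)) & ~(BITS_PER_LONG - 1);
}

int main(void)
{
	/* each LUN's bitmap then spans a whole number of longs */
	printf("%d %d %d\n", factory_nblks(1), factory_nblks(64),
	       factory_nblks(1020));	/* prints: 64 64 1024 */
	return 0;
}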
-static unsigned int factory_blk_offset(struct nvm_dev *dev, int ch, int lun)
+static unsigned int factory_blk_offset(struct nvm_dev *dev, struct ppa_addr ppa)
{
int nblks = factory_nblks(dev->blks_per_lun);
- return ((ch * dev->luns_per_chnl * nblks) + (lun * nblks)) /
+ return ((ppa.g.ch * dev->luns_per_chnl * nblks) + (ppa.g.lun * nblks)) /
BITS_PER_LONG;
}
-static int nvm_factory_blks(struct ppa_addr ppa, int nr_blks, u8 *blks,
- void *private)
+static int nvm_factory_blks(struct nvm_dev *dev, struct ppa_addr ppa,
+ u8 *blks, int nr_blks,
+ unsigned long *blk_bitmap, int flags)
{
- struct factory_blks *f = private;
- struct nvm_dev *dev = f->dev;
int i, lunoff;
- lunoff = factory_blk_offset(dev, ppa.g.ch, ppa.g.lun);
+ nr_blks = nvm_bb_tbl_fold(dev, blks, nr_blks);
+ if (nr_blks < 0)
+ return nr_blks;
+
+ lunoff = factory_blk_offset(dev, ppa);
	/* non-set bits correspond to blocks that must be erased */
for (i = 0; i < nr_blks; i++) {
switch (blks[i]) {
case NVM_BLK_T_FREE:
- if (f->flags & NVM_FACTORY_ERASE_ONLY_USER)
- set_bit(i, &f->blks[lunoff]);
+ if (flags & NVM_FACTORY_ERASE_ONLY_USER)
+ set_bit(i, &blk_bitmap[lunoff]);
break;
case NVM_BLK_T_HOST:
- if (!(f->flags & NVM_FACTORY_RESET_HOST_BLKS))
- set_bit(i, &f->blks[lunoff]);
+ if (!(flags & NVM_FACTORY_RESET_HOST_BLKS))
+ set_bit(i, &blk_bitmap[lunoff]);
break;
case NVM_BLK_T_GRWN_BAD:
- if (!(f->flags & NVM_FACTORY_RESET_GRWN_BBLKS))
- set_bit(i, &f->blks[lunoff]);
+ if (!(flags & NVM_FACTORY_RESET_GRWN_BBLKS))
+ set_bit(i, &blk_bitmap[lunoff]);
break;
default:
- set_bit(i, &f->blks[lunoff]);
+ set_bit(i, &blk_bitmap[lunoff]);
break;
}
}
@@ -615,7 +633,7 @@ static int nvm_factory_blks(struct ppa_addr ppa, int nr_blks, u8 *blks,
}
static int nvm_fact_get_blks(struct nvm_dev *dev, struct ppa_addr *erase_list,
- int max_ppas, struct factory_blks *f)
+ int max_ppas, unsigned long *blk_bitmap)
{
struct ppa_addr ppa;
int ch, lun, blkid, idx, done = 0, ppa_cnt = 0;
@@ -623,111 +641,95 @@ static int nvm_fact_get_blks(struct nvm_dev *dev, struct ppa_addr *erase_list,
while (!done) {
done = 1;
- for (ch = 0; ch < dev->nr_chnls; ch++) {
- for (lun = 0; lun < dev->luns_per_chnl; lun++) {
- idx = factory_blk_offset(dev, ch, lun);
- offset = &f->blks[idx];
-
- blkid = find_first_zero_bit(offset,
- dev->blks_per_lun);
- if (blkid >= dev->blks_per_lun)
- continue;
- set_bit(blkid, offset);
-
- ppa.ppa = 0;
- ppa.g.ch = ch;
- ppa.g.lun = lun;
- ppa.g.blk = blkid;
- pr_debug("nvm: erase ppa (%u %u %u)\n",
- ppa.g.ch,
- ppa.g.lun,
- ppa.g.blk);
-
- erase_list[ppa_cnt] = ppa;
- ppa_cnt++;
- done = 0;
-
- if (ppa_cnt == max_ppas)
- return ppa_cnt;
- }
+ nvm_for_each_lun_ppa(dev, ppa, ch, lun) {
+ idx = factory_blk_offset(dev, ppa);
+ offset = &blk_bitmap[idx];
+
+ blkid = find_first_zero_bit(offset,
+ dev->blks_per_lun);
+ if (blkid >= dev->blks_per_lun)
+ continue;
+ set_bit(blkid, offset);
+
+ ppa.g.blk = blkid;
+ pr_debug("nvm: erase ppa (%u %u %u)\n",
+ ppa.g.ch,
+ ppa.g.lun,
+ ppa.g.blk);
+
+ erase_list[ppa_cnt] = ppa;
+ ppa_cnt++;
+ done = 0;
+
+ if (ppa_cnt == max_ppas)
+ return ppa_cnt;
}
}
return ppa_cnt;
}
-static int nvm_fact_get_bb_tbl(struct nvm_dev *dev, struct ppa_addr ppa,
- nvm_bb_update_fn *fn, void *priv)
+static int nvm_fact_select_blks(struct nvm_dev *dev, unsigned long *blk_bitmap,
+ int flags)
{
- struct ppa_addr dev_ppa;
- int ret;
+ struct ppa_addr ppa;
+ int ch, lun, nr_blks, ret = 0;
+ u8 *blks;
- dev_ppa = generic_to_dev_addr(dev, ppa);
+ nr_blks = dev->blks_per_lun * dev->plane_mode;
+ blks = kmalloc(nr_blks, GFP_KERNEL);
+ if (!blks)
+ return -ENOMEM;
- ret = dev->ops->get_bb_tbl(dev, dev_ppa, dev->blks_per_lun, fn, priv);
- if (ret)
- pr_err("nvm: failed bb tbl for ch%u lun%u\n",
+ nvm_for_each_lun_ppa(dev, ppa, ch, lun) {
+ ret = nvm_get_bb_tbl(dev, ppa, blks);
+ if (ret)
+ pr_err("nvm: failed bb tbl for ch%u lun%u\n",
ppa.g.ch, ppa.g.blk);
- return ret;
-}
-static int nvm_fact_select_blks(struct nvm_dev *dev, struct factory_blks *f)
-{
- int ch, lun, ret;
- struct ppa_addr ppa;
-
- ppa.ppa = 0;
- for (ch = 0; ch < dev->nr_chnls; ch++) {
- for (lun = 0; lun < dev->luns_per_chnl; lun++) {
- ppa.g.ch = ch;
- ppa.g.lun = lun;
-
- ret = nvm_fact_get_bb_tbl(dev, ppa, nvm_factory_blks,
- f);
- if (ret)
- return ret;
- }
+ ret = nvm_factory_blks(dev, ppa, blks, nr_blks, blk_bitmap,
+ flags);
+ if (ret)
+ break;
}
- return 0;
+ kfree(blks);
+ return ret;
}
int nvm_dev_factory(struct nvm_dev *dev, int flags)
{
- struct factory_blks f;
struct ppa_addr *ppas;
int ppa_cnt, ret = -ENOMEM;
int max_ppas = dev->ops->max_phys_sect / dev->nr_planes;
struct ppa_addr sysblk_ppas[MAX_SYSBLKS];
struct sysblk_scan s;
+ unsigned long *blk_bitmap;
- f.blks = kzalloc(factory_nblks(dev->blks_per_lun) * dev->nr_luns,
+ blk_bitmap = kzalloc(factory_nblks(dev->blks_per_lun) * dev->nr_luns,
GFP_KERNEL);
- if (!f.blks)
+ if (!blk_bitmap)
return ret;
ppas = kcalloc(max_ppas, sizeof(struct ppa_addr), GFP_KERNEL);
if (!ppas)
goto err_blks;
- f.dev = dev;
- f.flags = flags;
-
/* create list of blks to be erased */
- ret = nvm_fact_select_blks(dev, &f);
+ ret = nvm_fact_select_blks(dev, blk_bitmap, flags);
if (ret)
goto err_ppas;
	/* continue to erase until the list of blks is empty */
- while ((ppa_cnt = nvm_fact_get_blks(dev, ppas, max_ppas, &f)) > 0)
+ while ((ppa_cnt =
+ nvm_fact_get_blks(dev, ppas, max_ppas, blk_bitmap)) > 0)
nvm_erase_ppa(dev, ppas, ppa_cnt);
/* mark host reserved blocks free */
if (flags & NVM_FACTORY_RESET_HOST_BLKS) {
nvm_setup_sysblk_scan(dev, &s, sysblk_ppas);
mutex_lock(&dev->mlock);
- ret = nvm_get_all_sysblks(dev, &s, sysblk_ppas,
- sysblk_get_host_blks);
+ ret = nvm_get_all_sysblks(dev, &s, sysblk_ppas, 0);
if (!ret)
ret = nvm_set_bb_tbl(dev, &s, NVM_BLK_T_FREE);
mutex_unlock(&dev->mlock);
@@ -735,7 +737,7 @@ int nvm_dev_factory(struct nvm_dev *dev, int flags)
err_ppas:
kfree(ppas);
err_blks:
- kfree(f.blks);
+ kfree(blk_bitmap);
return ret;
}
EXPORT_SYMBOL(nvm_dev_factory);
diff --git a/drivers/macintosh/rack-meter.c b/drivers/macintosh/rack-meter.c
index caaec654d7ea..465c52219639 100644
--- a/drivers/macintosh/rack-meter.c
+++ b/drivers/macintosh/rack-meter.c
@@ -154,8 +154,8 @@ static void rackmeter_do_pause(struct rackmeter *rm, int pause)
DBDMA_DO_STOP(rm->dma_regs);
return;
}
- memset(rdma->buf1, 0, SAMPLE_COUNT & sizeof(u32));
- memset(rdma->buf2, 0, SAMPLE_COUNT & sizeof(u32));
+ memset(rdma->buf1, 0, ARRAY_SIZE(rdma->buf1));
+ memset(rdma->buf2, 0, ARRAY_SIZE(rdma->buf2));
rm->dma_buf_v->mark = 0;
@@ -227,6 +227,7 @@ static void rackmeter_do_timer(struct work_struct *work)
total_idle_ticks = get_cpu_idle_time(cpu);
idle_ticks = (unsigned int) (total_idle_ticks - rcpu->prev_idle);
+ idle_ticks = min(idle_ticks, total_ticks);
rcpu->prev_idle = total_idle_ticks;
/* We do a very dumb calculation to update the LEDs for now,
diff --git a/drivers/macintosh/via-pmu.c b/drivers/macintosh/via-pmu.c
index 01ee736fe0ef..f8b6d1403c16 100644
--- a/drivers/macintosh/via-pmu.c
+++ b/drivers/macintosh/via-pmu.c
@@ -1851,7 +1851,7 @@ static int powerbook_sleep_grackle(void)
_set_L2CR(save_l2cr);
/* Restore userland MMU context */
- switch_mmu_context(NULL, current->active_mm);
+ switch_mmu_context(NULL, current->active_mm, NULL);
/* Power things up */
pmu_unlock();
@@ -1940,7 +1940,7 @@ powerbook_sleep_Core99(void)
_set_L3CR(save_l3cr);
/* Restore userland MMU context */
- switch_mmu_context(NULL, current->active_mm);
+ switch_mmu_context(NULL, current->active_mm, NULL);
/* Tell PMU we are ready */
pmu_unlock();
diff --git a/drivers/mailbox/mailbox-sti.c b/drivers/mailbox/mailbox-sti.c
index 2394cfe892b6..a334db5c9f1c 100644
--- a/drivers/mailbox/mailbox-sti.c
+++ b/drivers/mailbox/mailbox-sti.c
@@ -430,8 +430,8 @@ static int sti_mbox_probe(struct platform_device *pdev)
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
mdev->base = devm_ioremap_resource(&pdev->dev, res);
- if (!mdev->base)
- return -ENOMEM;
+ if (IS_ERR(mdev->base))
+ return PTR_ERR(mdev->base);
ret = of_property_read_string(np, "mbox-name", &mdev->name);
if (ret)
diff --git a/drivers/mailbox/omap-mailbox.c b/drivers/mailbox/omap-mailbox.c
index b7f636f15cac..c5e8b9cb170d 100644
--- a/drivers/mailbox/omap-mailbox.c
+++ b/drivers/mailbox/omap-mailbox.c
@@ -2,7 +2,7 @@
* OMAP mailbox driver
*
* Copyright (C) 2006-2009 Nokia Corporation. All rights reserved.
- * Copyright (C) 2013-2014 Texas Instruments Inc.
+ * Copyright (C) 2013-2016 Texas Instruments Incorporated - http://www.ti.com
*
* Contact: Hiroshi DOYU <Hiroshi.DOYU@nokia.com>
* Suman Anna <s-anna@ti.com>
@@ -15,12 +15,6 @@
* WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, write to the Free Software
- * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
- * 02110-1301 USA
- *
*/
#include <linux/interrupt.h>
@@ -33,7 +27,6 @@
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
-#include <linux/platform_data/mailbox-omap.h>
#include <linux/omap-mailbox.h>
#include <linux/mailbox_controller.h>
#include <linux/mailbox_client.h>
@@ -62,12 +55,9 @@
#define MAILBOX_IRQ_NEWMSG(m) (1 << (2 * (m)))
#define MAILBOX_IRQ_NOTFULL(m) (1 << (2 * (m) + 1))
-#define MBOX_REG_SIZE 0x120
-
-#define OMAP4_MBOX_REG_SIZE 0x130
-
-#define MBOX_NR_REGS (MBOX_REG_SIZE / sizeof(u32))
-#define OMAP4_MBOX_NR_REGS (OMAP4_MBOX_REG_SIZE / sizeof(u32))
+/* Interrupt register configuration types */
+#define MBOX_INTR_CFG_TYPE1 0
+#define MBOX_INTR_CFG_TYPE2 1
struct omap_mbox_fifo {
unsigned long msg;
@@ -91,8 +81,10 @@ struct omap_mbox_device {
struct device *dev;
struct mutex cfg_lock;
void __iomem *mbox_base;
+ u32 *irq_ctx;
u32 num_users;
u32 num_fifos;
+ u32 intr_type;
struct omap_mbox **mboxes;
struct mbox_controller controller;
struct list_head elem;
@@ -119,7 +111,6 @@ struct omap_mbox {
struct omap_mbox_device *parent;
struct omap_mbox_fifo tx_fifo;
struct omap_mbox_fifo rx_fifo;
- u32 ctx[OMAP4_MBOX_NR_REGS];
u32 intr_type;
struct mbox_chan *chan;
bool send_no_irq;
@@ -157,24 +148,28 @@ void mbox_write_reg(struct omap_mbox_device *mdev, u32 val, size_t ofs)
static mbox_msg_t mbox_fifo_read(struct omap_mbox *mbox)
{
struct omap_mbox_fifo *fifo = &mbox->rx_fifo;
- return (mbox_msg_t) mbox_read_reg(mbox->parent, fifo->msg);
+
+ return (mbox_msg_t)mbox_read_reg(mbox->parent, fifo->msg);
}
static void mbox_fifo_write(struct omap_mbox *mbox, mbox_msg_t msg)
{
struct omap_mbox_fifo *fifo = &mbox->tx_fifo;
+
mbox_write_reg(mbox->parent, msg, fifo->msg);
}
static int mbox_fifo_empty(struct omap_mbox *mbox)
{
struct omap_mbox_fifo *fifo = &mbox->rx_fifo;
+
return (mbox_read_reg(mbox->parent, fifo->msg_stat) == 0);
}
static int mbox_fifo_full(struct omap_mbox *mbox)
{
struct omap_mbox_fifo *fifo = &mbox->tx_fifo;
+
return mbox_read_reg(mbox->parent, fifo->fifo_stat);
}
@@ -206,49 +201,6 @@ static int is_mbox_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq)
return (int)(enable & status & bit);
}
-void omap_mbox_save_ctx(struct mbox_chan *chan)
-{
- int i;
- int nr_regs;
- struct omap_mbox *mbox = mbox_chan_to_omap_mbox(chan);
-
- if (WARN_ON(!mbox))
- return;
-
- if (mbox->intr_type)
- nr_regs = OMAP4_MBOX_NR_REGS;
- else
- nr_regs = MBOX_NR_REGS;
- for (i = 0; i < nr_regs; i++) {
- mbox->ctx[i] = mbox_read_reg(mbox->parent, i * sizeof(u32));
-
- dev_dbg(mbox->dev, "%s: [%02x] %08x\n", __func__,
- i, mbox->ctx[i]);
- }
-}
-EXPORT_SYMBOL(omap_mbox_save_ctx);
-
-void omap_mbox_restore_ctx(struct mbox_chan *chan)
-{
- int i;
- int nr_regs;
- struct omap_mbox *mbox = mbox_chan_to_omap_mbox(chan);
-
- if (WARN_ON(!mbox))
- return;
-
- if (mbox->intr_type)
- nr_regs = OMAP4_MBOX_NR_REGS;
- else
- nr_regs = MBOX_NR_REGS;
- for (i = 0; i < nr_regs; i++) {
- mbox_write_reg(mbox->parent, mbox->ctx[i], i * sizeof(u32));
- dev_dbg(mbox->dev, "%s: [%02x] %08x\n", __func__,
- i, mbox->ctx[i]);
- }
-}
-EXPORT_SYMBOL(omap_mbox_restore_ctx);
-
static void _omap_mbox_enable_irq(struct omap_mbox *mbox, omap_mbox_irq_t irq)
{
u32 l;
@@ -381,7 +333,7 @@ static struct omap_mbox_queue *mbox_queue_alloc(struct omap_mbox *mbox,
if (!work)
return NULL;
- mq = kzalloc(sizeof(struct omap_mbox_queue), GFP_KERNEL);
+ mq = kzalloc(sizeof(*mq), GFP_KERNEL);
if (!mq)
return NULL;
@@ -525,6 +477,7 @@ static int omap_mbox_register(struct omap_mbox_device *mdev)
mboxes = mdev->mboxes;
for (i = 0; mboxes[i]; i++) {
struct omap_mbox *mbox = mboxes[i];
+
mbox->dev = device_create(&omap_mbox_class, mdev->dev,
0, mbox, "%s", mbox->name);
if (IS_ERR(mbox->dev)) {
@@ -647,6 +600,52 @@ static const struct mbox_chan_ops omap_mbox_chan_ops = {
.shutdown = omap_mbox_chan_shutdown,
};
+#ifdef CONFIG_PM_SLEEP
+static int omap_mbox_suspend(struct device *dev)
+{
+ struct omap_mbox_device *mdev = dev_get_drvdata(dev);
+ u32 usr, fifo, reg;
+
+ if (pm_runtime_status_suspended(dev))
+ return 0;
+
+ for (fifo = 0; fifo < mdev->num_fifos; fifo++) {
+ if (mbox_read_reg(mdev, MAILBOX_MSGSTATUS(fifo))) {
+ dev_err(mdev->dev, "fifo %d has unexpected unread messages\n",
+ fifo);
+ return -EBUSY;
+ }
+ }
+
+ for (usr = 0; usr < mdev->num_users; usr++) {
+ reg = MAILBOX_IRQENABLE(mdev->intr_type, usr);
+ mdev->irq_ctx[usr] = mbox_read_reg(mdev, reg);
+ }
+
+ return 0;
+}
+
+static int omap_mbox_resume(struct device *dev)
+{
+ struct omap_mbox_device *mdev = dev_get_drvdata(dev);
+ u32 usr, reg;
+
+ if (pm_runtime_status_suspended(dev))
+ return 0;
+
+ for (usr = 0; usr < mdev->num_users; usr++) {
+ reg = MAILBOX_IRQENABLE(mdev->intr_type, usr);
+ mbox_write_reg(mdev, mdev->irq_ctx[usr], reg);
+ }
+
+ return 0;
+}
+#endif
+
+static const struct dev_pm_ops omap_mbox_pm_ops = {
+ SET_SYSTEM_SLEEP_PM_OPS(omap_mbox_suspend, omap_mbox_resume)
+};
+
static const struct of_device_id omap_mailbox_of_match[] = {
{
.compatible = "ti,omap2-mailbox",
@@ -696,8 +695,6 @@ static int omap_mbox_probe(struct platform_device *pdev)
int ret;
struct mbox_chan *chnls;
struct omap_mbox **list, *mbox, *mboxblk;
- struct omap_mbox_pdata *pdata = pdev->dev.platform_data;
- struct omap_mbox_dev_info *info = NULL;
struct omap_mbox_fifo_info *finfo, *finfoblk;
struct omap_mbox_device *mdev;
struct omap_mbox_fifo *fifo;
@@ -710,36 +707,26 @@ static int omap_mbox_probe(struct platform_device *pdev)
u32 l;
int i;
- if (!node && (!pdata || !pdata->info_cnt || !pdata->info)) {
- pr_err("%s: platform not supported\n", __func__);
+ if (!node) {
+ pr_err("%s: only DT-based devices are supported\n", __func__);
return -ENODEV;
}
- if (node) {
- match = of_match_device(omap_mailbox_of_match, &pdev->dev);
- if (!match)
- return -ENODEV;
- intr_type = (u32)match->data;
+ match = of_match_device(omap_mailbox_of_match, &pdev->dev);
+ if (!match)
+ return -ENODEV;
+ intr_type = (u32)match->data;
- if (of_property_read_u32(node, "ti,mbox-num-users",
- &num_users))
- return -ENODEV;
+ if (of_property_read_u32(node, "ti,mbox-num-users", &num_users))
+ return -ENODEV;
- if (of_property_read_u32(node, "ti,mbox-num-fifos",
- &num_fifos))
- return -ENODEV;
+ if (of_property_read_u32(node, "ti,mbox-num-fifos", &num_fifos))
+ return -ENODEV;
- info_count = of_get_available_child_count(node);
- if (!info_count) {
- dev_err(&pdev->dev, "no available mbox devices found\n");
- return -ENODEV;
- }
- } else { /* non-DT device creation */
- info_count = pdata->info_cnt;
- info = pdata->info;
- intr_type = pdata->intr_type;
- num_users = pdata->num_users;
- num_fifos = pdata->num_fifos;
+ info_count = of_get_available_child_count(node);
+ if (!info_count) {
+ dev_err(&pdev->dev, "no available mbox devices found\n");
+ return -ENODEV;
}
finfoblk = devm_kzalloc(&pdev->dev, info_count * sizeof(*finfoblk),
@@ -750,38 +737,28 @@ static int omap_mbox_probe(struct platform_device *pdev)
finfo = finfoblk;
child = NULL;
for (i = 0; i < info_count; i++, finfo++) {
- if (node) {
- child = of_get_next_available_child(node, child);
- ret = of_property_read_u32_array(child, "ti,mbox-tx",
- tmp, ARRAY_SIZE(tmp));
- if (ret)
- return ret;
- finfo->tx_id = tmp[0];
- finfo->tx_irq = tmp[1];
- finfo->tx_usr = tmp[2];
-
- ret = of_property_read_u32_array(child, "ti,mbox-rx",
- tmp, ARRAY_SIZE(tmp));
- if (ret)
- return ret;
- finfo->rx_id = tmp[0];
- finfo->rx_irq = tmp[1];
- finfo->rx_usr = tmp[2];
-
- finfo->name = child->name;
-
- if (of_find_property(child, "ti,mbox-send-noirq", NULL))
- finfo->send_no_irq = true;
- } else {
- finfo->tx_id = info->tx_id;
- finfo->rx_id = info->rx_id;
- finfo->tx_usr = info->usr_id;
- finfo->tx_irq = info->irq_id;
- finfo->rx_usr = info->usr_id;
- finfo->rx_irq = info->irq_id;
- finfo->name = info->name;
- info++;
- }
+ child = of_get_next_available_child(node, child);
+ ret = of_property_read_u32_array(child, "ti,mbox-tx", tmp,
+ ARRAY_SIZE(tmp));
+ if (ret)
+ return ret;
+ finfo->tx_id = tmp[0];
+ finfo->tx_irq = tmp[1];
+ finfo->tx_usr = tmp[2];
+
+ ret = of_property_read_u32_array(child, "ti,mbox-rx", tmp,
+ ARRAY_SIZE(tmp));
+ if (ret)
+ return ret;
+ finfo->rx_id = tmp[0];
+ finfo->rx_irq = tmp[1];
+ finfo->rx_usr = tmp[2];
+
+ finfo->name = child->name;
+
+ if (of_find_property(child, "ti,mbox-send-noirq", NULL))
+ finfo->send_no_irq = true;
+
if (finfo->tx_id >= num_fifos || finfo->rx_id >= num_fifos ||
finfo->tx_usr >= num_users || finfo->rx_usr >= num_users)
return -EINVAL;
@@ -796,6 +773,11 @@ static int omap_mbox_probe(struct platform_device *pdev)
if (IS_ERR(mdev->mbox_base))
return PTR_ERR(mdev->mbox_base);
+ mdev->irq_ctx = devm_kzalloc(&pdev->dev, num_users * sizeof(u32),
+ GFP_KERNEL);
+ if (!mdev->irq_ctx)
+ return -ENOMEM;
+
/* allocate one extra for marking end of list */
list = devm_kzalloc(&pdev->dev, (info_count + 1) * sizeof(*list),
GFP_KERNEL);
@@ -848,6 +830,7 @@ static int omap_mbox_probe(struct platform_device *pdev)
mdev->dev = &pdev->dev;
mdev->num_users = num_users;
mdev->num_fifos = num_fifos;
+ mdev->intr_type = intr_type;
mdev->mboxes = list;
/* OMAP does not have a Tx-Done IRQ, but rather a Tx-Ready IRQ */
@@ -905,6 +888,7 @@ static struct platform_driver omap_mbox_driver = {
.remove = omap_mbox_remove,
.driver = {
.name = "omap-mailbox",
+ .pm = &omap_mbox_pm_ops,
.of_match_table = of_match_ptr(omap_mailbox_of_match),
},
};
diff --git a/drivers/mcb/mcb-core.c b/drivers/mcb/mcb-core.c
index a4be451074e5..b73c6e7d28e4 100644
--- a/drivers/mcb/mcb-core.c
+++ b/drivers/mcb/mcb-core.c
@@ -83,13 +83,67 @@ static int mcb_remove(struct device *dev)
static void mcb_shutdown(struct device *dev)
{
+ struct mcb_driver *mdrv = to_mcb_driver(dev->driver);
struct mcb_device *mdev = to_mcb_device(dev);
- struct mcb_driver *mdrv = mdev->driver;
if (mdrv && mdrv->shutdown)
mdrv->shutdown(mdev);
}
+static ssize_t revision_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct mcb_bus *bus = to_mcb_bus(dev);
+
+ return scnprintf(buf, PAGE_SIZE, "%d\n", bus->revision);
+}
+static DEVICE_ATTR_RO(revision);
+
+static ssize_t model_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct mcb_bus *bus = to_mcb_bus(dev);
+
+ return scnprintf(buf, PAGE_SIZE, "%c\n", bus->model);
+}
+static DEVICE_ATTR_RO(model);
+
+static ssize_t minor_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct mcb_bus *bus = to_mcb_bus(dev);
+
+ return scnprintf(buf, PAGE_SIZE, "%d\n", bus->minor);
+}
+static DEVICE_ATTR_RO(minor);
+
+static ssize_t name_show(struct device *dev, struct device_attribute *attr,
+ char *buf)
+{
+ struct mcb_bus *bus = to_mcb_bus(dev);
+
+ return scnprintf(buf, PAGE_SIZE, "%s\n", bus->name);
+}
+static DEVICE_ATTR_RO(name);
+
+static struct attribute *mcb_bus_attrs[] = {
+ &dev_attr_revision.attr,
+ &dev_attr_model.attr,
+ &dev_attr_minor.attr,
+ &dev_attr_name.attr,
+ NULL,
+};
+
+static const struct attribute_group mcb_carrier_group = {
+ .attrs = mcb_bus_attrs,
+};
+
+static const struct attribute_group *mcb_carrier_groups[] = {
+ &mcb_carrier_group,
+ NULL,
+};
+
+
static struct bus_type mcb_bus_type = {
.name = "mcb",
.match = mcb_match,
@@ -99,6 +153,11 @@ static struct bus_type mcb_bus_type = {
.shutdown = mcb_shutdown,
};
+static struct device_type mcb_carrier_device_type = {
+ .name = "mcb-carrier",
+ .groups = mcb_carrier_groups,
+};
+
/**
* __mcb_register_driver() - Register a @mcb_driver at the system
* @drv: The @mcb_driver
@@ -155,6 +214,7 @@ int mcb_device_register(struct mcb_bus *bus, struct mcb_device *dev)
int device_id;
device_initialize(&dev->dev);
+ mcb_bus_get(bus);
dev->dev.bus = &mcb_bus_type;
dev->dev.parent = bus->dev.parent;
dev->dev.release = mcb_release_dev;
@@ -178,6 +238,15 @@ out:
}
EXPORT_SYMBOL_GPL(mcb_device_register);
+static void mcb_free_bus(struct device *dev)
+{
+ struct mcb_bus *bus = to_mcb_bus(dev);
+
+ put_device(bus->carrier);
+ ida_simple_remove(&mcb_ida, bus->bus_nr);
+ kfree(bus);
+}
+
/**
* mcb_alloc_bus() - Allocate a new @mcb_bus
*
@@ -187,6 +256,7 @@ struct mcb_bus *mcb_alloc_bus(struct device *carrier)
{
struct mcb_bus *bus;
int bus_nr;
+ int rc;
bus = kzalloc(sizeof(struct mcb_bus), GFP_KERNEL);
if (!bus)
@@ -194,14 +264,29 @@ struct mcb_bus *mcb_alloc_bus(struct device *carrier)
bus_nr = ida_simple_get(&mcb_ida, 0, 0, GFP_KERNEL);
if (bus_nr < 0) {
- kfree(bus);
- return ERR_PTR(bus_nr);
+ rc = bus_nr;
+ goto err_free;
}
- INIT_LIST_HEAD(&bus->children);
bus->bus_nr = bus_nr;
- bus->carrier = carrier;
+ bus->carrier = get_device(carrier);
+
+ device_initialize(&bus->dev);
+ bus->dev.parent = carrier;
+ bus->dev.bus = &mcb_bus_type;
+ bus->dev.type = &mcb_carrier_device_type;
+ bus->dev.release = &mcb_free_bus;
+
+ dev_set_name(&bus->dev, "mcb:%d", bus_nr);
+ rc = device_add(&bus->dev);
+ if (rc)
+ goto err_free;
+
return bus;
+err_free:
+ put_device(carrier);
+ kfree(bus);
+ return ERR_PTR(rc);
}
EXPORT_SYMBOL_GPL(mcb_alloc_bus);
@@ -224,10 +309,6 @@ static void mcb_devices_unregister(struct mcb_bus *bus)
void mcb_release_bus(struct mcb_bus *bus)
{
mcb_devices_unregister(bus);
-
- ida_simple_remove(&mcb_ida, bus->bus_nr);
-
- kfree(bus);
}
EXPORT_SYMBOL_GPL(mcb_release_bus);
diff --git a/drivers/mcb/mcb-internal.h b/drivers/mcb/mcb-internal.h
index fb7493dcfb79..5254e0285725 100644
--- a/drivers/mcb/mcb-internal.h
+++ b/drivers/mcb/mcb-internal.h
@@ -5,7 +5,6 @@
#define PCI_VENDOR_ID_MEN 0x1a88
#define PCI_DEVICE_ID_MEN_CHAMELEON 0x4d45
-#define CHAMELEON_FILENAME_LEN 12
#define CHAMELEONV2_MAGIC 0xabce
#define CHAM_HEADER_SIZE 0x200
diff --git a/drivers/mcb/mcb-parse.c b/drivers/mcb/mcb-parse.c
index 004926955263..dbecbed0d258 100644
--- a/drivers/mcb/mcb-parse.c
+++ b/drivers/mcb/mcb-parse.c
@@ -57,7 +57,7 @@ static int chameleon_parse_gdd(struct mcb_bus *bus,
mdev->id = GDD_DEV(reg1);
mdev->rev = GDD_REV(reg1);
mdev->var = GDD_VAR(reg1);
- mdev->bar = GDD_BAR(reg1);
+ mdev->bar = GDD_BAR(reg2);
mdev->group = GDD_GRP(reg2);
mdev->inst = GDD_INS(reg2);
@@ -113,16 +113,11 @@ int chameleon_parse_cells(struct mcb_bus *bus, phys_addr_t mapbase,
}
p += hsize;
- pr_debug("header->revision = %d\n", header->revision);
- pr_debug("header->model = 0x%x ('%c')\n", header->model,
- header->model);
- pr_debug("header->minor = %d\n", header->minor);
- pr_debug("header->bus_type = 0x%x\n", header->bus_type);
-
-
- pr_debug("header->magic = 0x%x\n", header->magic);
- pr_debug("header->filename = \"%.*s\"\n", CHAMELEON_FILENAME_LEN,
- header->filename);
+ bus->revision = header->revision;
+ bus->model = header->model;
+ bus->minor = header->minor;
+ snprintf(bus->name, CHAMELEON_FILENAME_LEN + 1, "%s",
+ header->filename);
for_each_chameleon_cell(dtype, p) {
switch (dtype) {
diff --git a/drivers/mcb/mcb-pci.c b/drivers/mcb/mcb-pci.c
index 67d5e7d08df6..b15a0349cd97 100644
--- a/drivers/mcb/mcb-pci.c
+++ b/drivers/mcb/mcb-pci.c
@@ -35,7 +35,6 @@ static int mcb_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
struct resource *res;
struct priv *priv;
int ret;
- int num_cells;
unsigned long flags;
priv = devm_kzalloc(&pdev->dev, sizeof(struct priv), GFP_KERNEL);
@@ -55,19 +54,20 @@ static int mcb_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
goto out_disable;
}
- res = request_mem_region(priv->mapbase, CHAM_HEADER_SIZE,
- KBUILD_MODNAME);
+ res = devm_request_mem_region(&pdev->dev, priv->mapbase,
+ CHAM_HEADER_SIZE,
+ KBUILD_MODNAME);
if (!res) {
dev_err(&pdev->dev, "Failed to request PCI memory\n");
ret = -EBUSY;
goto out_disable;
}
- priv->base = ioremap(priv->mapbase, CHAM_HEADER_SIZE);
+ priv->base = devm_ioremap(&pdev->dev, priv->mapbase, CHAM_HEADER_SIZE);
if (!priv->base) {
dev_err(&pdev->dev, "Cannot ioremap\n");
ret = -ENOMEM;
- goto out_release;
+ goto out_disable;
}
flags = pci_resource_flags(pdev, 0);
@@ -75,7 +75,7 @@ static int mcb_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
ret = -ENOTSUPP;
dev_err(&pdev->dev,
"IO mapped PCI devices are not supported\n");
- goto out_iounmap;
+ goto out_disable;
}
pci_set_drvdata(pdev, priv);
@@ -83,7 +83,7 @@ static int mcb_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
priv->bus = mcb_alloc_bus(&pdev->dev);
if (IS_ERR(priv->bus)) {
ret = PTR_ERR(priv->bus);
- goto out_iounmap;
+ goto out_disable;
}
priv->bus->get_irq = mcb_pci_get_irq;
@@ -91,9 +91,8 @@ static int mcb_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
ret = chameleon_parse_cells(priv->bus, priv->mapbase, priv->base);
if (ret < 0)
goto out_mcb_bus;
- num_cells = ret;
- dev_dbg(&pdev->dev, "Found %d cells\n", num_cells);
+ dev_dbg(&pdev->dev, "Found %d cells\n", ret);
mcb_bus_add_devices(priv->bus);
@@ -101,10 +100,6 @@ static int mcb_pci_probe(struct pci_dev *pdev, const struct pci_device_id *id)
out_mcb_bus:
mcb_release_bus(priv->bus);
-out_iounmap:
- iounmap(priv->base);
-out_release:
- pci_release_region(pdev, 0);
out_disable:
pci_disable_device(pdev);
return ret;
@@ -116,8 +111,6 @@ static void mcb_pci_remove(struct pci_dev *pdev)
mcb_release_bus(priv->bus);
- iounmap(priv->base);
- release_region(priv->mapbase, CHAM_HEADER_SIZE);
pci_disable_device(pdev);
}
diff --git a/drivers/md/bcache/alloc.c b/drivers/md/bcache/alloc.c
index 8eeab72b93e2..ca4abe1ccd8d 100644
--- a/drivers/md/bcache/alloc.c
+++ b/drivers/md/bcache/alloc.c
@@ -64,7 +64,6 @@
#include "btree.h"
#include <linux/blkdev.h>
-#include <linux/freezer.h>
#include <linux/kthread.h>
#include <linux/random.h>
#include <trace/events/bcache.h>
@@ -288,7 +287,6 @@ do { \
if (kthread_should_stop()) \
return 0; \
\
- try_to_freeze(); \
schedule(); \
mutex_lock(&(ca)->set->bucket_lock); \
} \
diff --git a/drivers/md/bcache/btree.c b/drivers/md/bcache/btree.c
index 22b9e34ceb75..eab505ee0027 100644
--- a/drivers/md/bcache/btree.c
+++ b/drivers/md/bcache/btree.c
@@ -27,7 +27,6 @@
#include <linux/slab.h>
#include <linux/bitops.h>
-#include <linux/freezer.h>
#include <linux/hash.h>
#include <linux/kthread.h>
#include <linux/prefetch.h>
@@ -1787,7 +1786,6 @@ again:
mutex_unlock(&c->bucket_lock);
- try_to_freeze();
schedule();
}
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index a296425a7270..f5dbb4e884d8 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -816,7 +816,7 @@ static int bcache_device_init(struct bcache_device *d, unsigned block_size,
clear_bit(QUEUE_FLAG_ADD_RANDOM, &d->disk->queue->queue_flags);
set_bit(QUEUE_FLAG_DISCARD, &d->disk->queue->queue_flags);
- blk_queue_flush(q, REQ_FLUSH|REQ_FUA);
+ blk_queue_write_cache(q, true, true);
return 0;
}
diff --git a/drivers/md/bcache/writeback.c b/drivers/md/bcache/writeback.c
index b9346cd9cda1..60123677b382 100644
--- a/drivers/md/bcache/writeback.c
+++ b/drivers/md/bcache/writeback.c
@@ -12,7 +12,6 @@
#include "writeback.h"
#include <linux/delay.h>
-#include <linux/freezer.h>
#include <linux/kthread.h>
#include <trace/events/bcache.h>
@@ -228,7 +227,6 @@ static void read_dirty(struct cached_dev *dc)
*/
while (!kthread_should_stop()) {
- try_to_freeze();
w = bch_keybuf_next(&dc->writeback_keys);
if (!w)
@@ -433,7 +431,6 @@ static int bch_writeback_thread(void *arg)
if (kthread_should_stop())
return 0;
- try_to_freeze();
schedule();
continue;
}
diff --git a/drivers/md/bitmap.c b/drivers/md/bitmap.c
index 3fe86b54d50b..d8129ec93ebd 100644
--- a/drivers/md/bitmap.c
+++ b/drivers/md/bitmap.c
@@ -46,7 +46,7 @@ static inline char *bmname(struct bitmap *bitmap)
* allocated while we're using it
*/
static int bitmap_checkpage(struct bitmap_counts *bitmap,
- unsigned long page, int create)
+ unsigned long page, int create, int no_hijack)
__releases(bitmap->lock)
__acquires(bitmap->lock)
{
@@ -90,6 +90,9 @@ __acquires(bitmap->lock)
if (mappage == NULL) {
pr_debug("md/bitmap: map page allocation failed, hijacking\n");
+ /* We don't support hijack for cluster raid */
+ if (no_hijack)
+ return -ENOMEM;
/* failed - set the hijacked flag so that we can use the
* pointer as a counter */
if (!bitmap->bp[page].map)
@@ -756,7 +759,7 @@ static int bitmap_storage_alloc(struct bitmap_storage *store,
bytes += sizeof(bitmap_super_t);
num_pages = DIV_ROUND_UP(bytes, PAGE_SIZE);
- offset = slot_number * (num_pages - 1);
+ offset = slot_number * num_pages;
store->filemap = kmalloc(sizeof(struct page *)
* num_pages, GFP_KERNEL);
@@ -900,6 +903,11 @@ static void bitmap_file_set_bit(struct bitmap *bitmap, sector_t block)
struct page *page;
void *kaddr;
unsigned long chunk = block >> bitmap->counts.chunkshift;
+ struct bitmap_storage *store = &bitmap->storage;
+ unsigned long node_offset = 0;
+
+ if (mddev_is_clustered(bitmap->mddev))
+ node_offset = bitmap->cluster_slot * store->file_pages;
page = filemap_get_page(&bitmap->storage, chunk);
if (!page)
@@ -915,7 +923,7 @@ static void bitmap_file_set_bit(struct bitmap *bitmap, sector_t block)
kunmap_atomic(kaddr);
pr_debug("set file bit %lu page %lu\n", bit, page->index);
/* record page number so it gets flushed to disk when unplug occurs */
- set_page_attr(bitmap, page->index, BITMAP_PAGE_DIRTY);
+ set_page_attr(bitmap, page->index - node_offset, BITMAP_PAGE_DIRTY);
}
static void bitmap_file_clear_bit(struct bitmap *bitmap, sector_t block)
@@ -924,6 +932,11 @@ static void bitmap_file_clear_bit(struct bitmap *bitmap, sector_t block)
struct page *page;
void *paddr;
unsigned long chunk = block >> bitmap->counts.chunkshift;
+ struct bitmap_storage *store = &bitmap->storage;
+ unsigned long node_offset = 0;
+
+ if (mddev_is_clustered(bitmap->mddev))
+ node_offset = bitmap->cluster_slot * store->file_pages;
page = filemap_get_page(&bitmap->storage, chunk);
if (!page)
@@ -935,8 +948,8 @@ static void bitmap_file_clear_bit(struct bitmap *bitmap, sector_t block)
else
clear_bit_le(bit, paddr);
kunmap_atomic(paddr);
- if (!test_page_attr(bitmap, page->index, BITMAP_PAGE_NEEDWRITE)) {
- set_page_attr(bitmap, page->index, BITMAP_PAGE_PENDING);
+ if (!test_page_attr(bitmap, page->index - node_offset, BITMAP_PAGE_NEEDWRITE)) {
+ set_page_attr(bitmap, page->index - node_offset, BITMAP_PAGE_PENDING);
bitmap->allclean = 0;
}
}
@@ -1321,7 +1334,7 @@ __acquires(bitmap->lock)
sector_t csize;
int err;
- err = bitmap_checkpage(bitmap, page, create);
+ err = bitmap_checkpage(bitmap, page, create, 0);
if (bitmap->bp[page].hijacked ||
bitmap->bp[page].map == NULL)
@@ -1594,6 +1607,27 @@ void bitmap_cond_end_sync(struct bitmap *bitmap, sector_t sector, bool force)
}
EXPORT_SYMBOL(bitmap_cond_end_sync);
+void bitmap_sync_with_cluster(struct mddev *mddev,
+ sector_t old_lo, sector_t old_hi,
+ sector_t new_lo, sector_t new_hi)
+{
+ struct bitmap *bitmap = mddev->bitmap;
+ sector_t sector, blocks = 0;
+
+ for (sector = old_lo; sector < new_lo; ) {
+ bitmap_end_sync(bitmap, sector, &blocks, 0);
+ sector += blocks;
+ }
+ WARN((blocks > new_lo) && old_lo, "alignment is not correct for lo\n");
+
+ for (sector = old_hi; sector < new_hi; ) {
+ bitmap_start_sync(bitmap, sector, &blocks, 0);
+ sector += blocks;
+ }
+ WARN((blocks > new_hi) && old_hi, "alignment is not correct for hi\n");
+}
+EXPORT_SYMBOL(bitmap_sync_with_cluster);
+
static void bitmap_set_memory_bits(struct bitmap *bitmap, sector_t offset, int needed)
{
/* For each chunk covered by any of these sectors, set the
@@ -1814,6 +1848,9 @@ int bitmap_load(struct mddev *mddev)
if (!bitmap)
goto out;
+ if (mddev_is_clustered(mddev))
+ md_cluster_ops->load_bitmaps(mddev, mddev->bitmap_info.nodes);
+
/* Clear out old bitmap info first: Either there is none, or we
* are resuming after someone else has possibly changed things,
* so we should forget old cached info.
@@ -1890,14 +1927,14 @@ int bitmap_copy_from_slot(struct mddev *mddev, int slot,
if (clear_bits) {
bitmap_update_sb(bitmap);
- /* Setting this for the ev_page should be enough.
- * And we do not require both write_all and PAGE_DIRT either
- */
+ /* BITMAP_PAGE_PENDING is set, but bitmap_unplug needs
+ * BITMAP_PAGE_DIRTY or _NEEDWRITE to write ... */
for (i = 0; i < bitmap->storage.file_pages; i++)
- set_page_attr(bitmap, i, BITMAP_PAGE_DIRTY);
- bitmap_write_all(bitmap);
+ if (test_page_attr(bitmap, i, BITMAP_PAGE_PENDING))
+ set_page_attr(bitmap, i, BITMAP_PAGE_NEEDWRITE);
bitmap_unplug(bitmap);
}
+ bitmap_unplug(mddev->bitmap);
*low = lo;
*high = hi;
err:
@@ -2032,6 +2069,35 @@ int bitmap_resize(struct bitmap *bitmap, sector_t blocks,
chunks << chunkshift);
spin_lock_irq(&bitmap->counts.lock);
+ /* For cluster raid, need to pre-allocate bitmap */
+ if (mddev_is_clustered(bitmap->mddev)) {
+ unsigned long page;
+ for (page = 0; page < pages; page++) {
+ ret = bitmap_checkpage(&bitmap->counts, page, 1, 1);
+ if (ret) {
+ unsigned long k;
+
+ /* deallocate the page memory */
+ for (k = 0; k < page; k++) {
+ kfree(new_bp[k].map);
+ }
+
+ /* restore some fields from old_counts */
+ bitmap->counts.bp = old_counts.bp;
+ bitmap->counts.pages = old_counts.pages;
+ bitmap->counts.missing_pages = old_counts.pages;
+ bitmap->counts.chunkshift = old_counts.chunkshift;
+ bitmap->counts.chunks = old_counts.chunks;
+ bitmap->mddev->bitmap_info.chunksize = 1 << (old_counts.chunkshift +
+ BITMAP_BLOCK_SHIFT);
+ blocks = old_counts.chunks << old_counts.chunkshift;
+ pr_err("Could not pre-allocate in-memory bitmap for cluster raid\n");
+ break;
+ } else
+ bitmap->counts.bp[page].count += 1;
+ }
+ }
+
for (block = 0; block < blocks; ) {
bitmap_counter_t *bmc_old, *bmc_new;
int set;
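The bitmap.c changes above teach the in-memory bitmap about clustered MD: each node's storage now starts at slot_number * num_pages, page attributes are indexed per node via page->index - node_offset, and bitmap_checkpage() gains a no_hijack argument so bitmap_resize() can pre-allocate counter pages for cluster raid. A small sketch of the per-node page arithmetic, with made-up slot and page counts.

#include <stdio.h>

static unsigned long attr_index(unsigned long page_index,
				unsigned long cluster_slot,
				unsigned long file_pages)
{
	/* Each node's bitmap storage starts at slot * file_pages, so the
	 * per-node attribute arrays are indexed relative to that base. */
	unsigned long node_offset = cluster_slot * file_pages;

	return page_index - node_offset;
}

int main(void)
{
	unsigned long file_pages = 4;	/* assumed pages per node */
	unsigned long slot = 2;		/* assumed cluster slot */
	unsigned long page_index = slot * file_pages + 1;

	printf("attribute index = %lu\n",
	       attr_index(page_index, slot, file_pages));
	return 0;
}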
diff --git a/drivers/md/bitmap.h b/drivers/md/bitmap.h
index 5e3fcd6ecf77..5b6dd63dda91 100644
--- a/drivers/md/bitmap.h
+++ b/drivers/md/bitmap.h
@@ -258,6 +258,9 @@ int bitmap_start_sync(struct bitmap *bitmap, sector_t offset, sector_t *blocks,
void bitmap_end_sync(struct bitmap *bitmap, sector_t offset, sector_t *blocks, int aborted);
void bitmap_close_sync(struct bitmap *bitmap);
void bitmap_cond_end_sync(struct bitmap *bitmap, sector_t sector, bool force);
+void bitmap_sync_with_cluster(struct mddev *mddev,
+ sector_t old_lo, sector_t old_hi,
+ sector_t new_lo, sector_t new_hi);
void bitmap_unplug(struct bitmap *bitmap);
void bitmap_daemon_work(struct mddev *mddev);
diff --git a/drivers/md/dm-ioctl.c b/drivers/md/dm-ioctl.c
index 2adf81d81fca..2c7ca258c4e4 100644
--- a/drivers/md/dm-ioctl.c
+++ b/drivers/md/dm-ioctl.c
@@ -1723,7 +1723,7 @@ static int copy_params(struct dm_ioctl __user *user, struct dm_ioctl *param_kern
if (!dmi) {
unsigned noio_flag;
noio_flag = memalloc_noio_save();
- dmi = __vmalloc(param_kernel->data_size, GFP_NOIO | __GFP_REPEAT | __GFP_HIGH | __GFP_HIGHMEM, PAGE_KERNEL);
+ dmi = __vmalloc(param_kernel->data_size, GFP_NOIO | __GFP_HIGH | __GFP_HIGHMEM, PAGE_KERNEL);
memalloc_noio_restore(noio_flag);
if (dmi)
*param_flags |= DM_PARAMS_VMALLOC;
diff --git a/drivers/md/dm-mpath.c b/drivers/md/dm-mpath.c
index 677ba223e2ae..52baf8a5b0f4 100644
--- a/drivers/md/dm-mpath.c
+++ b/drivers/md/dm-mpath.c
@@ -76,26 +76,18 @@ struct multipath {
wait_queue_head_t pg_init_wait; /* Wait for pg_init completion */
- unsigned pg_init_in_progress; /* Only one pg_init allowed at once */
-
- unsigned nr_valid_paths; /* Total number of usable paths */
struct pgpath *current_pgpath;
struct priority_group *current_pg;
struct priority_group *next_pg; /* Switch to this PG if set */
- bool queue_io:1; /* Must we queue all I/O? */
- bool queue_if_no_path:1; /* Queue I/O if last path fails? */
- bool saved_queue_if_no_path:1; /* Saved state during suspension */
- bool retain_attached_hw_handler:1; /* If there's already a hw_handler present, don't change it. */
- bool pg_init_disabled:1; /* pg_init is not currently allowed */
- bool pg_init_required:1; /* pg_init needs calling? */
- bool pg_init_delay_retry:1; /* Delay pg_init retry? */
+ unsigned long flags; /* Multipath state flags */
unsigned pg_init_retries; /* Number of times to retry pg_init */
- unsigned pg_init_count; /* Number of times pg_init called */
unsigned pg_init_delay_msecs; /* Number of msecs before pg_init retry */
- struct work_struct trigger_event;
+ atomic_t nr_valid_paths; /* Total number of usable paths */
+ atomic_t pg_init_in_progress; /* Only one pg_init allowed at once */
+ atomic_t pg_init_count; /* Number of times pg_init called */
/*
* We must use a mempool of dm_mpath_io structs so that we
@@ -104,6 +96,7 @@ struct multipath {
mempool_t *mpio_pool;
struct mutex work_mutex;
+ struct work_struct trigger_event;
};
/*
@@ -122,6 +115,17 @@ static struct workqueue_struct *kmultipathd, *kmpath_handlerd;
static void trigger_event(struct work_struct *work);
static void activate_path(struct work_struct *work);
+/*-----------------------------------------------
+ * Multipath state flags.
+ *-----------------------------------------------*/
+
+#define MPATHF_QUEUE_IO 0 /* Must we queue all I/O? */
+#define MPATHF_QUEUE_IF_NO_PATH 1 /* Queue I/O if last path fails? */
+#define MPATHF_SAVED_QUEUE_IF_NO_PATH 2 /* Saved state during suspension */
+#define MPATHF_RETAIN_ATTACHED_HW_HANDLER 3 /* If there's already a hw_handler present, don't change it. */
+#define MPATHF_PG_INIT_DISABLED 4 /* pg_init is not currently allowed */
+#define MPATHF_PG_INIT_REQUIRED 5 /* pg_init needs calling? */
+#define MPATHF_PG_INIT_DELAY_RETRY 6 /* Delay pg_init retry? */
/*-----------------------------------------------
* Allocation routines
@@ -189,7 +193,10 @@ static struct multipath *alloc_multipath(struct dm_target *ti, bool use_blk_mq)
if (m) {
INIT_LIST_HEAD(&m->priority_groups);
spin_lock_init(&m->lock);
- m->queue_io = true;
+ set_bit(MPATHF_QUEUE_IO, &m->flags);
+ atomic_set(&m->nr_valid_paths, 0);
+ atomic_set(&m->pg_init_in_progress, 0);
+ atomic_set(&m->pg_init_count, 0);
m->pg_init_delay_msecs = DM_PG_INIT_DELAY_DEFAULT;
INIT_WORK(&m->trigger_event, trigger_event);
init_waitqueue_head(&m->pg_init_wait);
@@ -274,17 +281,17 @@ static int __pg_init_all_paths(struct multipath *m)
struct pgpath *pgpath;
unsigned long pg_init_delay = 0;
- if (m->pg_init_in_progress || m->pg_init_disabled)
+ if (atomic_read(&m->pg_init_in_progress) || test_bit(MPATHF_PG_INIT_DISABLED, &m->flags))
return 0;
- m->pg_init_count++;
- m->pg_init_required = false;
+ atomic_inc(&m->pg_init_count);
+ clear_bit(MPATHF_PG_INIT_REQUIRED, &m->flags);
/* Check here to reset pg_init_required */
if (!m->current_pg)
return 0;
- if (m->pg_init_delay_retry)
+ if (test_bit(MPATHF_PG_INIT_DELAY_RETRY, &m->flags))
pg_init_delay = msecs_to_jiffies(m->pg_init_delay_msecs != DM_PG_INIT_DELAY_DEFAULT ?
m->pg_init_delay_msecs : DM_PG_INIT_DELAY_MSECS);
list_for_each_entry(pgpath, &m->current_pg->pgpaths, list) {
@@ -293,65 +300,99 @@ static int __pg_init_all_paths(struct multipath *m)
continue;
if (queue_delayed_work(kmpath_handlerd, &pgpath->activate_path,
pg_init_delay))
- m->pg_init_in_progress++;
+ atomic_inc(&m->pg_init_in_progress);
}
- return m->pg_init_in_progress;
+ return atomic_read(&m->pg_init_in_progress);
+}
+
+static int pg_init_all_paths(struct multipath *m)
+{
+ int r;
+ unsigned long flags;
+
+ spin_lock_irqsave(&m->lock, flags);
+ r = __pg_init_all_paths(m);
+ spin_unlock_irqrestore(&m->lock, flags);
+
+ return r;
}
-static void __switch_pg(struct multipath *m, struct pgpath *pgpath)
+static void __switch_pg(struct multipath *m, struct priority_group *pg)
{
- m->current_pg = pgpath->pg;
+ m->current_pg = pg;
/* Must we initialise the PG first, and queue I/O till it's ready? */
if (m->hw_handler_name) {
- m->pg_init_required = true;
- m->queue_io = true;
+ set_bit(MPATHF_PG_INIT_REQUIRED, &m->flags);
+ set_bit(MPATHF_QUEUE_IO, &m->flags);
} else {
- m->pg_init_required = false;
- m->queue_io = false;
+ clear_bit(MPATHF_PG_INIT_REQUIRED, &m->flags);
+ clear_bit(MPATHF_QUEUE_IO, &m->flags);
}
- m->pg_init_count = 0;
+ atomic_set(&m->pg_init_count, 0);
}
-static int __choose_path_in_pg(struct multipath *m, struct priority_group *pg,
- size_t nr_bytes)
+static struct pgpath *choose_path_in_pg(struct multipath *m,
+ struct priority_group *pg,
+ size_t nr_bytes)
{
+ unsigned long flags;
struct dm_path *path;
+ struct pgpath *pgpath;
path = pg->ps.type->select_path(&pg->ps, nr_bytes);
if (!path)
- return -ENXIO;
+ return ERR_PTR(-ENXIO);
- m->current_pgpath = path_to_pgpath(path);
+ pgpath = path_to_pgpath(path);
- if (m->current_pg != pg)
- __switch_pg(m, m->current_pgpath);
+ if (unlikely(lockless_dereference(m->current_pg) != pg)) {
+ /* Only update current_pgpath if pg changed */
+ spin_lock_irqsave(&m->lock, flags);
+ m->current_pgpath = pgpath;
+ __switch_pg(m, pg);
+ spin_unlock_irqrestore(&m->lock, flags);
+ }
- return 0;
+ return pgpath;
}
-static void __choose_pgpath(struct multipath *m, size_t nr_bytes)
+static struct pgpath *choose_pgpath(struct multipath *m, size_t nr_bytes)
{
+ unsigned long flags;
struct priority_group *pg;
+ struct pgpath *pgpath;
bool bypassed = true;
- if (!m->nr_valid_paths) {
- m->queue_io = false;
+ if (!atomic_read(&m->nr_valid_paths)) {
+ clear_bit(MPATHF_QUEUE_IO, &m->flags);
goto failed;
}
/* Were we instructed to switch PG? */
- if (m->next_pg) {
+ if (lockless_dereference(m->next_pg)) {
+ spin_lock_irqsave(&m->lock, flags);
pg = m->next_pg;
+ if (!pg) {
+ spin_unlock_irqrestore(&m->lock, flags);
+ goto check_current_pg;
+ }
m->next_pg = NULL;
- if (!__choose_path_in_pg(m, pg, nr_bytes))
- return;
+ spin_unlock_irqrestore(&m->lock, flags);
+ pgpath = choose_path_in_pg(m, pg, nr_bytes);
+ if (!IS_ERR_OR_NULL(pgpath))
+ return pgpath;
}
/* Don't change PG until it has no remaining paths */
- if (m->current_pg && !__choose_path_in_pg(m, m->current_pg, nr_bytes))
- return;
+check_current_pg:
+ pg = lockless_dereference(m->current_pg);
+ if (pg) {
+ pgpath = choose_path_in_pg(m, pg, nr_bytes);
+ if (!IS_ERR_OR_NULL(pgpath))
+ return pgpath;
+ }
/*
* Loop through priority groups until we find a valid path.
@@ -363,34 +404,38 @@ static void __choose_pgpath(struct multipath *m, size_t nr_bytes)
list_for_each_entry(pg, &m->priority_groups, list) {
if (pg->bypassed == bypassed)
continue;
- if (!__choose_path_in_pg(m, pg, nr_bytes)) {
+ pgpath = choose_path_in_pg(m, pg, nr_bytes);
+ if (!IS_ERR_OR_NULL(pgpath)) {
if (!bypassed)
- m->pg_init_delay_retry = true;
- return;
+ set_bit(MPATHF_PG_INIT_DELAY_RETRY, &m->flags);
+ return pgpath;
}
}
} while (bypassed--);
failed:
+ spin_lock_irqsave(&m->lock, flags);
m->current_pgpath = NULL;
m->current_pg = NULL;
+ spin_unlock_irqrestore(&m->lock, flags);
+
+ return NULL;
}
/*
* Check whether bios must be queued in the device-mapper core rather
* than here in the target.
*
- * m->lock must be held on entry.
- *
* If m->queue_if_no_path and m->saved_queue_if_no_path hold the
* same value then we are not between multipath_presuspend()
* and multipath_resume() calls and we have no need to check
* for the DMF_NOFLUSH_SUSPENDING flag.
*/
-static int __must_push_back(struct multipath *m)
+static int must_push_back(struct multipath *m)
{
- return (m->queue_if_no_path ||
- (m->queue_if_no_path != m->saved_queue_if_no_path &&
+ return (test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags) ||
+ ((test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags) !=
+ test_bit(MPATHF_SAVED_QUEUE_IF_NO_PATH, &m->flags)) &&
dm_noflush_suspending(m->ti)));
}
@@ -408,35 +453,31 @@ static int __multipath_map(struct dm_target *ti, struct request *clone,
struct block_device *bdev;
struct dm_mpath_io *mpio;
- spin_lock_irq(&m->lock);
-
/* Do we need to select a new pgpath? */
- if (!m->current_pgpath || !m->queue_io)
- __choose_pgpath(m, nr_bytes);
-
- pgpath = m->current_pgpath;
+ pgpath = lockless_dereference(m->current_pgpath);
+ if (!pgpath || !test_bit(MPATHF_QUEUE_IO, &m->flags))
+ pgpath = choose_pgpath(m, nr_bytes);
if (!pgpath) {
- if (!__must_push_back(m))
+ if (!must_push_back(m))
r = -EIO; /* Failed */
- goto out_unlock;
- } else if (m->queue_io || m->pg_init_required) {
- __pg_init_all_paths(m);
- goto out_unlock;
+ return r;
+ } else if (test_bit(MPATHF_QUEUE_IO, &m->flags) ||
+ test_bit(MPATHF_PG_INIT_REQUIRED, &m->flags)) {
+ pg_init_all_paths(m);
+ return r;
}
mpio = set_mpio(m, map_context);
if (!mpio)
/* ENOMEM, requeue */
- goto out_unlock;
+ return r;
mpio->pgpath = pgpath;
mpio->nr_bytes = nr_bytes;
bdev = pgpath->path.dev->bdev;
- spin_unlock_irq(&m->lock);
-
if (clone) {
/*
* Old request-based interface: allocated clone is passed in.
@@ -468,11 +509,6 @@ static int __multipath_map(struct dm_target *ti, struct request *clone,
&pgpath->path,
nr_bytes);
return DM_MAPIO_REMAPPED;
-
-out_unlock:
- spin_unlock_irq(&m->lock);
-
- return r;
}
static int multipath_map(struct dm_target *ti, struct request *clone,
@@ -503,11 +539,22 @@ static int queue_if_no_path(struct multipath *m, bool queue_if_no_path,
spin_lock_irqsave(&m->lock, flags);
- if (save_old_value)
- m->saved_queue_if_no_path = m->queue_if_no_path;
+ if (save_old_value) {
+ if (test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags))
+ set_bit(MPATHF_SAVED_QUEUE_IF_NO_PATH, &m->flags);
+ else
+ clear_bit(MPATHF_SAVED_QUEUE_IF_NO_PATH, &m->flags);
+ } else {
+ if (queue_if_no_path)
+ set_bit(MPATHF_SAVED_QUEUE_IF_NO_PATH, &m->flags);
+ else
+ clear_bit(MPATHF_SAVED_QUEUE_IF_NO_PATH, &m->flags);
+ }
+ if (queue_if_no_path)
+ set_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags);
else
- m->saved_queue_if_no_path = queue_if_no_path;
- m->queue_if_no_path = queue_if_no_path;
+ clear_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags);
+
spin_unlock_irqrestore(&m->lock, flags);
if (!queue_if_no_path)
@@ -600,10 +647,10 @@ static struct pgpath *parse_path(struct dm_arg_set *as, struct path_selector *ps
goto bad;
}
- if (m->retain_attached_hw_handler || m->hw_handler_name)
+ if (test_bit(MPATHF_RETAIN_ATTACHED_HW_HANDLER, &m->flags) || m->hw_handler_name)
q = bdev_get_queue(p->path.dev->bdev);
- if (m->retain_attached_hw_handler) {
+ if (test_bit(MPATHF_RETAIN_ATTACHED_HW_HANDLER, &m->flags)) {
retain:
attached_handler_name = scsi_dh_attached_handler_name(q, GFP_KERNEL);
if (attached_handler_name) {
@@ -808,7 +855,7 @@ static int parse_features(struct dm_arg_set *as, struct multipath *m)
}
if (!strcasecmp(arg_name, "retain_attached_hw_handler")) {
- m->retain_attached_hw_handler = true;
+ set_bit(MPATHF_RETAIN_ATTACHED_HW_HANDLER, &m->flags);
continue;
}
@@ -884,6 +931,7 @@ static int multipath_ctr(struct dm_target *ti, unsigned int argc,
/* parse the priority groups */
while (as.argc) {
struct priority_group *pg;
+ unsigned nr_valid_paths = atomic_read(&m->nr_valid_paths);
pg = parse_priority_group(&as, m);
if (IS_ERR(pg)) {
@@ -891,7 +939,9 @@ static int multipath_ctr(struct dm_target *ti, unsigned int argc,
goto bad;
}
- m->nr_valid_paths += pg->nr_pgpaths;
+ nr_valid_paths += pg->nr_pgpaths;
+ atomic_set(&m->nr_valid_paths, nr_valid_paths);
+
list_add_tail(&pg->list, &m->priority_groups);
pg_count++;
pg->pg_num = pg_count;
@@ -921,19 +971,14 @@ static int multipath_ctr(struct dm_target *ti, unsigned int argc,
static void multipath_wait_for_pg_init_completion(struct multipath *m)
{
DECLARE_WAITQUEUE(wait, current);
- unsigned long flags;
add_wait_queue(&m->pg_init_wait, &wait);
while (1) {
set_current_state(TASK_UNINTERRUPTIBLE);
- spin_lock_irqsave(&m->lock, flags);
- if (!m->pg_init_in_progress) {
- spin_unlock_irqrestore(&m->lock, flags);
+ if (!atomic_read(&m->pg_init_in_progress))
break;
- }
- spin_unlock_irqrestore(&m->lock, flags);
io_schedule();
}
@@ -944,20 +989,16 @@ static void multipath_wait_for_pg_init_completion(struct multipath *m)
static void flush_multipath_work(struct multipath *m)
{
- unsigned long flags;
-
- spin_lock_irqsave(&m->lock, flags);
- m->pg_init_disabled = true;
- spin_unlock_irqrestore(&m->lock, flags);
+ set_bit(MPATHF_PG_INIT_DISABLED, &m->flags);
+ smp_mb__after_atomic();
flush_workqueue(kmpath_handlerd);
multipath_wait_for_pg_init_completion(m);
flush_workqueue(kmultipathd);
flush_work(&m->trigger_event);
- spin_lock_irqsave(&m->lock, flags);
- m->pg_init_disabled = false;
- spin_unlock_irqrestore(&m->lock, flags);
+ clear_bit(MPATHF_PG_INIT_DISABLED, &m->flags);
+ smp_mb__after_atomic();
}
static void multipath_dtr(struct dm_target *ti)
@@ -987,13 +1028,13 @@ static int fail_path(struct pgpath *pgpath)
pgpath->is_active = false;
pgpath->fail_count++;
- m->nr_valid_paths--;
+ atomic_dec(&m->nr_valid_paths);
if (pgpath == m->current_pgpath)
m->current_pgpath = NULL;
dm_path_uevent(DM_UEVENT_PATH_FAILED, m->ti,
- pgpath->path.dev->name, m->nr_valid_paths);
+ pgpath->path.dev->name, atomic_read(&m->nr_valid_paths));
schedule_work(&m->trigger_event);
@@ -1011,6 +1052,7 @@ static int reinstate_path(struct pgpath *pgpath)
int r = 0, run_queue = 0;
unsigned long flags;
struct multipath *m = pgpath->pg->m;
+ unsigned nr_valid_paths;
spin_lock_irqsave(&m->lock, flags);
@@ -1025,16 +1067,17 @@ static int reinstate_path(struct pgpath *pgpath)
pgpath->is_active = true;
- if (!m->nr_valid_paths++) {
+ nr_valid_paths = atomic_inc_return(&m->nr_valid_paths);
+ if (nr_valid_paths == 1) {
m->current_pgpath = NULL;
run_queue = 1;
} else if (m->hw_handler_name && (m->current_pg == pgpath->pg)) {
if (queue_work(kmpath_handlerd, &pgpath->activate_path.work))
- m->pg_init_in_progress++;
+ atomic_inc(&m->pg_init_in_progress);
}
dm_path_uevent(DM_UEVENT_PATH_REINSTATED, m->ti,
- pgpath->path.dev->name, m->nr_valid_paths);
+ pgpath->path.dev->name, nr_valid_paths);
schedule_work(&m->trigger_event);
@@ -1152,8 +1195,9 @@ static bool pg_init_limit_reached(struct multipath *m, struct pgpath *pgpath)
spin_lock_irqsave(&m->lock, flags);
- if (m->pg_init_count <= m->pg_init_retries && !m->pg_init_disabled)
- m->pg_init_required = true;
+ if (atomic_read(&m->pg_init_count) <= m->pg_init_retries &&
+ !test_bit(MPATHF_PG_INIT_DISABLED, &m->flags))
+ set_bit(MPATHF_PG_INIT_REQUIRED, &m->flags);
else
limit_reached = true;
@@ -1219,19 +1263,23 @@ static void pg_init_done(void *data, int errors)
m->current_pgpath = NULL;
m->current_pg = NULL;
}
- } else if (!m->pg_init_required)
+ } else if (!test_bit(MPATHF_PG_INIT_REQUIRED, &m->flags))
pg->bypassed = false;
- if (--m->pg_init_in_progress)
+ if (atomic_dec_return(&m->pg_init_in_progress) > 0)
/* Activations of other paths are still ongoing */
goto out;
- if (m->pg_init_required) {
- m->pg_init_delay_retry = delay_retry;
+ if (test_bit(MPATHF_PG_INIT_REQUIRED, &m->flags)) {
+ if (delay_retry)
+ set_bit(MPATHF_PG_INIT_DELAY_RETRY, &m->flags);
+ else
+ clear_bit(MPATHF_PG_INIT_DELAY_RETRY, &m->flags);
+
if (__pg_init_all_paths(m))
goto out;
}
- m->queue_io = false;
+ clear_bit(MPATHF_QUEUE_IO, &m->flags);
/*
* Wake up any thread waiting to suspend.
@@ -1287,7 +1335,6 @@ static int do_end_io(struct multipath *m, struct request *clone,
* clone bios for it and resubmit it later.
*/
int r = DM_ENDIO_REQUEUE;
- unsigned long flags;
if (!error && !clone->errors)
return 0; /* I/O complete */
@@ -1298,17 +1345,15 @@ static int do_end_io(struct multipath *m, struct request *clone,
if (mpio->pgpath)
fail_path(mpio->pgpath);
- spin_lock_irqsave(&m->lock, flags);
- if (!m->nr_valid_paths) {
- if (!m->queue_if_no_path) {
- if (!__must_push_back(m))
+ if (!atomic_read(&m->nr_valid_paths)) {
+ if (!test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags)) {
+ if (!must_push_back(m))
r = -EIO;
} else {
if (error == -EBADE)
r = error;
}
}
- spin_unlock_irqrestore(&m->lock, flags);
return r;
}
@@ -1364,11 +1409,12 @@ static void multipath_postsuspend(struct dm_target *ti)
static void multipath_resume(struct dm_target *ti)
{
struct multipath *m = ti->private;
- unsigned long flags;
- spin_lock_irqsave(&m->lock, flags);
- m->queue_if_no_path = m->saved_queue_if_no_path;
- spin_unlock_irqrestore(&m->lock, flags);
+ if (test_bit(MPATHF_SAVED_QUEUE_IF_NO_PATH, &m->flags))
+ set_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags);
+ else
+ clear_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags);
+ smp_mb__after_atomic();
}
/*
@@ -1402,19 +1448,20 @@ static void multipath_status(struct dm_target *ti, status_type_t type,
/* Features */
if (type == STATUSTYPE_INFO)
- DMEMIT("2 %u %u ", m->queue_io, m->pg_init_count);
+ DMEMIT("2 %u %u ", test_bit(MPATHF_QUEUE_IO, &m->flags),
+ atomic_read(&m->pg_init_count));
else {
- DMEMIT("%u ", m->queue_if_no_path +
+ DMEMIT("%u ", test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags) +
(m->pg_init_retries > 0) * 2 +
(m->pg_init_delay_msecs != DM_PG_INIT_DELAY_DEFAULT) * 2 +
- m->retain_attached_hw_handler);
- if (m->queue_if_no_path)
+ test_bit(MPATHF_RETAIN_ATTACHED_HW_HANDLER, &m->flags));
+ if (test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags))
DMEMIT("queue_if_no_path ");
if (m->pg_init_retries)
DMEMIT("pg_init_retries %u ", m->pg_init_retries);
if (m->pg_init_delay_msecs != DM_PG_INIT_DELAY_DEFAULT)
DMEMIT("pg_init_delay_msecs %u ", m->pg_init_delay_msecs);
- if (m->retain_attached_hw_handler)
+ if (test_bit(MPATHF_RETAIN_ATTACHED_HW_HANDLER, &m->flags))
DMEMIT("retain_attached_hw_handler ");
}
@@ -1563,18 +1610,17 @@ static int multipath_prepare_ioctl(struct dm_target *ti,
struct block_device **bdev, fmode_t *mode)
{
struct multipath *m = ti->private;
- unsigned long flags;
+ struct pgpath *current_pgpath;
int r;
- spin_lock_irqsave(&m->lock, flags);
+ current_pgpath = lockless_dereference(m->current_pgpath);
+ if (!current_pgpath)
+ current_pgpath = choose_pgpath(m, 0);
- if (!m->current_pgpath)
- __choose_pgpath(m, 0);
-
- if (m->current_pgpath) {
- if (!m->queue_io) {
- *bdev = m->current_pgpath->path.dev->bdev;
- *mode = m->current_pgpath->path.dev->mode;
+ if (current_pgpath) {
+ if (!test_bit(MPATHF_QUEUE_IO, &m->flags)) {
+ *bdev = current_pgpath->path.dev->bdev;
+ *mode = current_pgpath->path.dev->mode;
r = 0;
} else {
/* pg_init has not started or completed */
@@ -1582,23 +1628,19 @@ static int multipath_prepare_ioctl(struct dm_target *ti,
}
} else {
/* No path is available */
- if (m->queue_if_no_path)
+ if (test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags))
r = -ENOTCONN;
else
r = -EIO;
}
- spin_unlock_irqrestore(&m->lock, flags);
-
if (r == -ENOTCONN) {
- spin_lock_irqsave(&m->lock, flags);
- if (!m->current_pg) {
+ if (!lockless_dereference(m->current_pg)) {
/* Path status changed, redo selection */
- __choose_pgpath(m, 0);
+ (void) choose_pgpath(m, 0);
}
- if (m->pg_init_required)
- __pg_init_all_paths(m);
- spin_unlock_irqrestore(&m->lock, flags);
+ if (test_bit(MPATHF_PG_INIT_REQUIRED, &m->flags))
+ pg_init_all_paths(m);
dm_table_run_md_queue_async(m->ti->table);
}
@@ -1649,39 +1691,37 @@ static int multipath_busy(struct dm_target *ti)
{
bool busy = false, has_active = false;
struct multipath *m = ti->private;
- struct priority_group *pg;
+ struct priority_group *pg, *next_pg;
struct pgpath *pgpath;
- unsigned long flags;
-
- spin_lock_irqsave(&m->lock, flags);
/* pg_init in progress or no paths available */
- if (m->pg_init_in_progress ||
- (!m->nr_valid_paths && m->queue_if_no_path)) {
- busy = true;
- goto out;
- }
+ if (atomic_read(&m->pg_init_in_progress) ||
+ (!atomic_read(&m->nr_valid_paths) && test_bit(MPATHF_QUEUE_IF_NO_PATH, &m->flags)))
+ return true;
+
/* Guess which priority_group will be used at next mapping time */
- if (unlikely(!m->current_pgpath && m->next_pg))
- pg = m->next_pg;
- else if (likely(m->current_pg))
- pg = m->current_pg;
- else
+ pg = lockless_dereference(m->current_pg);
+ next_pg = lockless_dereference(m->next_pg);
+ if (unlikely(!lockless_dereference(m->current_pgpath) && next_pg))
+ pg = next_pg;
+
+ if (!pg) {
/*
* We don't know which pg will be used at next mapping time.
- * We don't call __choose_pgpath() here to avoid to trigger
+ * We don't call choose_pgpath() here to avoid triggering
* pg_init just by busy checking.
* So we don't know whether underlying devices we will be using
* at next mapping time are busy or not. Just try mapping.
*/
- goto out;
+ return busy;
+ }
/*
* If there is one non-busy active path at least, the path selector
* will be able to select it. So we consider such a pg as not busy.
*/
busy = true;
- list_for_each_entry(pgpath, &pg->pgpaths, list)
+ list_for_each_entry(pgpath, &pg->pgpaths, list) {
if (pgpath->is_active) {
has_active = true;
if (!pgpath_busy(pgpath)) {
@@ -1689,17 +1729,16 @@ static int multipath_busy(struct dm_target *ti)
break;
}
}
+ }
- if (!has_active)
+ if (!has_active) {
/*
* No active path in this pg, so this pg won't be used and
* the current_pg will be changed at next mapping time.
* We need to try mapping to determine it.
*/
busy = false;
-
-out:
- spin_unlock_irqrestore(&m->lock, flags);
+ }
return busy;
}
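The dm-mpath rework above takes m->lock off the fast path by turning the one-bit struct members into bits of a single m->flags word, manipulated with test_bit/set_bit/clear_bit, and by converting the counters to atomic_t. A rough userspace analogue of that shape using C11 atomics rather than the kernel helpers.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define MPATHF_QUEUE_IO		0
#define MPATHF_PG_INIT_REQUIRED	1

static atomic_ulong flags;			/* stands in for m->flags */
static atomic_int pg_init_in_progress;		/* stands in for the counter */

static void set_flag(int bit)   { atomic_fetch_or(&flags, 1UL << bit); }
static void clear_flag(int bit) { atomic_fetch_and(&flags, ~(1UL << bit)); }
static bool test_flag(int bit)  { return (atomic_load(&flags) >> bit) & 1; }

int main(void)
{
	set_flag(MPATHF_QUEUE_IO);
	atomic_fetch_add(&pg_init_in_progress, 1);

	/* Mirrors the lock-free check in __multipath_map(). */
	if (test_flag(MPATHF_QUEUE_IO) || test_flag(MPATHF_PG_INIT_REQUIRED))
		printf("%d pg_init in flight, would queue I/O\n",
		       atomic_load(&pg_init_in_progress));

	clear_flag(MPATHF_QUEUE_IO);
	return 0;
}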
diff --git a/drivers/md/dm-raid.c b/drivers/md/dm-raid.c
index a0901214aef5..52532745a50f 100644
--- a/drivers/md/dm-raid.c
+++ b/drivers/md/dm-raid.c
@@ -1037,6 +1037,11 @@ static int super_validate(struct raid_set *rs, struct md_rdev *rdev)
if (!mddev->events && super_init_validation(mddev, rdev))
return -EINVAL;
+ if (le32_to_cpu(sb->features)) {
+ rs->ti->error = "Unable to assemble array: No feature flags supported yet";
+ return -EINVAL;
+ }
+
/* Enable bitmap creation for RAID levels != 0 */
mddev->bitmap_info.offset = (rs->raid_type->level) ? to_sector(4096) : 0;
rdev->mddev->bitmap_info.default_offset = mddev->bitmap_info.offset;
@@ -1718,7 +1723,7 @@ static void raid_resume(struct dm_target *ti)
static struct target_type raid_target = {
.name = "raid",
- .version = {1, 7, 0},
+ .version = {1, 8, 0},
.module = THIS_MODULE,
.ctr = raid_ctr,
.dtr = raid_dtr,
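The dm-raid hunk above makes super_validate() refuse superblocks that carry any feature flag, since none are implemented yet, and bumps the target version accordingly. A compact sketch of that gate; the superblock struct is invented for illustration.

#include <stdint.h>
#include <stdio.h>

struct sb { uint32_t features; };

static int validate(const struct sb *sb)
{
	if (sb->features) {	/* no feature flags are supported yet */
		fprintf(stderr, "unable to assemble array: unsupported feature flags\n");
		return -1;
	}
	return 0;
}

int main(void)
{
	struct sb clean = { 0 }, flagged = { 0x1 };

	printf("clean=%d flagged=%d\n", validate(&clean), validate(&flagged));
	return 0;
}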
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index f9e8f0bef332..626a5ec04466 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1348,13 +1348,13 @@ static void dm_table_verify_integrity(struct dm_table *t)
static int device_flush_capable(struct dm_target *ti, struct dm_dev *dev,
sector_t start, sector_t len, void *data)
{
- unsigned flush = (*(unsigned *)data);
+ unsigned long flush = (unsigned long) data;
struct request_queue *q = bdev_get_queue(dev->bdev);
- return q && (q->flush_flags & flush);
+ return q && (q->queue_flags & flush);
}
-static bool dm_table_supports_flush(struct dm_table *t, unsigned flush)
+static bool dm_table_supports_flush(struct dm_table *t, unsigned long flush)
{
struct dm_target *ti;
unsigned i = 0;
@@ -1375,7 +1375,7 @@ static bool dm_table_supports_flush(struct dm_table *t, unsigned flush)
return true;
if (ti->type->iterate_devices &&
- ti->type->iterate_devices(ti, device_flush_capable, &flush))
+ ti->type->iterate_devices(ti, device_flush_capable, (void *) flush))
return true;
}
@@ -1506,7 +1506,7 @@ static bool dm_table_supports_discards(struct dm_table *t)
void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
struct queue_limits *limits)
{
- unsigned flush = 0;
+ bool wc = false, fua = false;
/*
* Copy table's limits to the DM device's request_queue
@@ -1518,12 +1518,12 @@ void dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
else
queue_flag_set_unlocked(QUEUE_FLAG_DISCARD, q);
- if (dm_table_supports_flush(t, REQ_FLUSH)) {
- flush |= REQ_FLUSH;
- if (dm_table_supports_flush(t, REQ_FUA))
- flush |= REQ_FUA;
+ if (dm_table_supports_flush(t, (1UL << QUEUE_FLAG_WC))) {
+ wc = true;
+ if (dm_table_supports_flush(t, (1UL << QUEUE_FLAG_FUA)))
+ fua = true;
}
- blk_queue_flush(q, flush);
+ blk_queue_write_cache(q, wc, fua);
if (!dm_table_discard_zeroes_data(t))
q->limits.discard_zeroes_data = 0;
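dm-table now probes targets for QUEUE_FLAG_WC and QUEUE_FLAG_FUA instead of the removed flush_flags, and it passes the flag mask through iterate_devices()'s opaque void * argument by casting an unsigned long. A sketch of that cast-through-a-callback pattern; the bit position used for the flag is invented.

#include <stdio.h>

typedef int (*iter_fn)(void *data);

static int flush_capable(void *data)
{
	unsigned long flag_mask = (unsigned long)data;	/* decode the mask */
	unsigned long queue_flags = 1UL << 3;		/* pretend QUEUE_FLAG_WC is set */

	return (queue_flags & flag_mask) != 0;
}

static int iterate(iter_fn fn, void *data)
{
	return fn(data);
}

int main(void)
{
	unsigned long mask = 1UL << 3;	/* stand-in for 1 << QUEUE_FLAG_WC */

	printf("flush capable: %d\n", iterate(flush_capable, (void *)mask));
	return 0;
}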
diff --git a/drivers/md/dm-thin.c b/drivers/md/dm-thin.c
index 92237b6fa8cd..fc803d50f9f0 100644
--- a/drivers/md/dm-thin.c
+++ b/drivers/md/dm-thin.c
@@ -322,56 +322,6 @@ struct thin_c {
/*----------------------------------------------------------------*/
-/**
- * __blkdev_issue_discard_async - queue a discard with async completion
- * @bdev: blockdev to issue discard for
- * @sector: start sector
- * @nr_sects: number of sectors to discard
- * @gfp_mask: memory allocation flags (for bio_alloc)
- * @flags: BLKDEV_IFL_* flags to control behaviour
- * @parent_bio: parent discard bio that all sub discards get chained to
- *
- * Description:
- * Asynchronously issue a discard request for the sectors in question.
- */
-static int __blkdev_issue_discard_async(struct block_device *bdev, sector_t sector,
- sector_t nr_sects, gfp_t gfp_mask, unsigned long flags,
- struct bio *parent_bio)
-{
- struct request_queue *q = bdev_get_queue(bdev);
- int type = REQ_WRITE | REQ_DISCARD;
- struct bio *bio;
-
- if (!q || !nr_sects)
- return -ENXIO;
-
- if (!blk_queue_discard(q))
- return -EOPNOTSUPP;
-
- if (flags & BLKDEV_DISCARD_SECURE) {
- if (!blk_queue_secdiscard(q))
- return -EOPNOTSUPP;
- type |= REQ_SECURE;
- }
-
- /*
- * Required bio_put occurs in bio_endio thanks to bio_chain below
- */
- bio = bio_alloc(gfp_mask, 1);
- if (!bio)
- return -ENOMEM;
-
- bio_chain(bio, parent_bio);
-
- bio->bi_iter.bi_sector = sector;
- bio->bi_bdev = bdev;
- bio->bi_iter.bi_size = nr_sects << 9;
-
- submit_bio(type, bio);
-
- return 0;
-}
-
static bool block_size_is_power_of_two(struct pool *pool)
{
return pool->sectors_per_block_shift >= 0;
@@ -384,14 +334,55 @@ static sector_t block_to_sectors(struct pool *pool, dm_block_t b)
(b * pool->sectors_per_block);
}
-static int issue_discard(struct thin_c *tc, dm_block_t data_b, dm_block_t data_e,
- struct bio *parent_bio)
+/*----------------------------------------------------------------*/
+
+struct discard_op {
+ struct thin_c *tc;
+ struct blk_plug plug;
+ struct bio *parent_bio;
+ struct bio *bio;
+};
+
+static void begin_discard(struct discard_op *op, struct thin_c *tc, struct bio *parent)
+{
+ BUG_ON(!parent);
+
+ op->tc = tc;
+ blk_start_plug(&op->plug);
+ op->parent_bio = parent;
+ op->bio = NULL;
+}
+
+static int issue_discard(struct discard_op *op, dm_block_t data_b, dm_block_t data_e)
{
+ struct thin_c *tc = op->tc;
sector_t s = block_to_sectors(tc->pool, data_b);
sector_t len = block_to_sectors(tc->pool, data_e - data_b);
- return __blkdev_issue_discard_async(tc->pool_dev->bdev, s, len,
- GFP_NOWAIT, 0, parent_bio);
+ return __blkdev_issue_discard(tc->pool_dev->bdev, s, len,
+ GFP_NOWAIT, REQ_WRITE | REQ_DISCARD, &op->bio);
+}
+
+static void end_discard(struct discard_op *op, int r)
+{
+ if (op->bio) {
+ /*
+ * Even if one of the calls to issue_discard failed, we
+ * need to wait for the chain to complete.
+ */
+ bio_chain(op->bio, op->parent_bio);
+ submit_bio(REQ_WRITE | REQ_DISCARD, op->bio);
+ }
+
+ blk_finish_plug(&op->plug);
+
+ /*
+ * Even if r is set, there could be sub discards in flight that we
+ * need to wait for.
+ */
+ if (r && !op->parent_bio->bi_error)
+ op->parent_bio->bi_error = r;
+ bio_endio(op->parent_bio);
}
/*----------------------------------------------------------------*/
@@ -632,7 +623,7 @@ static void error_retry_list(struct pool *pool)
{
int error = get_pool_io_error_code(pool);
- return error_retry_list_with_code(pool, error);
+ error_retry_list_with_code(pool, error);
}
/*
@@ -1006,24 +997,28 @@ static void process_prepared_discard_no_passdown(struct dm_thin_new_mapping *m)
mempool_free(m, tc->pool->mapping_pool);
}
-static int passdown_double_checking_shared_status(struct dm_thin_new_mapping *m)
+/*----------------------------------------------------------------*/
+
+static void passdown_double_checking_shared_status(struct dm_thin_new_mapping *m)
{
/*
* We've already unmapped this range of blocks, but before we
* passdown we have to check that these blocks are now unused.
*/
- int r;
+ int r = 0;
bool used = true;
struct thin_c *tc = m->tc;
struct pool *pool = tc->pool;
dm_block_t b = m->data_block, e, end = m->data_block + m->virt_end - m->virt_begin;
+ struct discard_op op;
+ begin_discard(&op, tc, m->bio);
while (b != end) {
/* find start of unmapped run */
for (; b < end; b++) {
r = dm_pool_block_is_used(pool->pmd, b, &used);
if (r)
- return r;
+ goto out;
if (!used)
break;
@@ -1036,20 +1031,20 @@ static int passdown_double_checking_shared_status(struct dm_thin_new_mapping *m)
for (e = b + 1; e != end; e++) {
r = dm_pool_block_is_used(pool->pmd, e, &used);
if (r)
- return r;
+ goto out;
if (used)
break;
}
- r = issue_discard(tc, b, e, m->bio);
+ r = issue_discard(&op, b, e);
if (r)
- return r;
+ goto out;
b = e;
}
-
- return 0;
+out:
+ end_discard(&op, r);
}
static void process_prepared_discard_passdown(struct dm_thin_new_mapping *m)
@@ -1059,20 +1054,21 @@ static void process_prepared_discard_passdown(struct dm_thin_new_mapping *m)
struct pool *pool = tc->pool;
r = dm_thin_remove_range(tc->td, m->virt_begin, m->virt_end);
- if (r)
+ if (r) {
metadata_operation_failed(pool, "dm_thin_remove_range", r);
+ bio_io_error(m->bio);
- else if (m->maybe_shared)
- r = passdown_double_checking_shared_status(m);
- else
- r = issue_discard(tc, m->data_block, m->data_block + (m->virt_end - m->virt_begin), m->bio);
+ } else if (m->maybe_shared) {
+ passdown_double_checking_shared_status(m);
+
+ } else {
+ struct discard_op op;
+ begin_discard(&op, tc, m->bio);
+ r = issue_discard(&op, m->data_block,
+ m->data_block + (m->virt_end - m->virt_begin));
+ end_discard(&op, r);
+ }
- /*
- * Even if r is set, there could be sub discards in flight that we
- * need to wait for.
- */
- m->bio->bi_error = r;
- bio_endio(m->bio);
cell_defer_no_holder(tc, m->cell);
mempool_free(m, pool->mapping_pool);
}
@@ -1494,17 +1490,6 @@ static void process_discard_cell_no_passdown(struct thin_c *tc,
pool->process_prepared_discard(m);
}
-/*
- * __bio_inc_remaining() is used to defer parent bios's end_io until
- * we _know_ all chained sub range discard bios have completed.
- */
-static inline void __bio_inc_remaining(struct bio *bio)
-{
- bio->bi_flags |= (1 << BIO_CHAIN);
- smp_mb__before_atomic();
- atomic_inc(&bio->__bi_remaining);
-}
-
static void break_up_discard_bio(struct thin_c *tc, dm_block_t begin, dm_block_t end,
struct bio *bio)
{
@@ -1554,13 +1539,13 @@ static void break_up_discard_bio(struct thin_c *tc, dm_block_t begin, dm_block_t
/*
* The parent bio must not complete before sub discard bios are
- * chained to it (see __blkdev_issue_discard_async's bio_chain)!
+ * chained to it (see end_discard's bio_chain)!
*
* This per-mapping bi_remaining increment is paired with
* the implicit decrement that occurs via bio_endio() in
- * process_prepared_discard_{passdown,no_passdown}.
+ * end_discard().
*/
- __bio_inc_remaining(bio);
+ bio_inc_remaining(bio);
if (!dm_deferred_set_add_work(pool->all_io_ds, &m->list))
pool->process_prepared_discard(m);
@@ -3899,7 +3884,7 @@ static struct target_type pool_target = {
.name = "thin-pool",
.features = DM_TARGET_SINGLETON | DM_TARGET_ALWAYS_WRITEABLE |
DM_TARGET_IMMUTABLE,
- .version = {1, 18, 0},
+ .version = {1, 19, 0},
.module = THIS_MODULE,
.ctr = pool_ctr,
.dtr = pool_dtr,
@@ -4273,7 +4258,7 @@ static void thin_io_hints(struct dm_target *ti, struct queue_limits *limits)
static struct target_type thin_target = {
.name = "thin",
- .version = {1, 18, 0},
+ .version = {1, 19, 0},
.module = THIS_MODULE,
.ctr = thin_ctr,
.dtr = thin_dtr,
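The dm-thin changes above replace the open-coded async discard helper with a small discard_op that batches sub-discards under a block plug and completes the parent bio exactly once in end_discard(), even when an intermediate call fails. A userspace sketch of that begin/issue/end control flow, with no real bios or plugging.

#include <stdio.h>

struct discard_op {
	int nr_issued;
	int parent_error;
};

static void begin_discard(struct discard_op *op)
{
	op->nr_issued = 0;
	op->parent_error = 0;
}

static int issue_discard(struct discard_op *op, unsigned long b, unsigned long e)
{
	/* The driver queues a chained bio per range; here we just count. */
	printf("discard blocks [%lu, %lu)\n", b, e);
	op->nr_issued++;
	return 0;
}

static void end_discard(struct discard_op *op, int r)
{
	if (r && !op->parent_error)
		op->parent_error = r;	/* keep the first error */
	printf("parent completed: %d sub-discard(s), error=%d\n",
	       op->nr_issued, op->parent_error);
}

int main(void)
{
	struct discard_op op;

	begin_discard(&op);
	end_discard(&op, issue_discard(&op, 0, 128));
	return 0;
}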
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 3d3ac13287a4..1b2f96205361 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -674,7 +674,7 @@ static void free_io(struct mapped_device *md, struct dm_io *io)
mempool_free(io, md->io_pool);
}
-static void free_tio(struct mapped_device *md, struct dm_target_io *tio)
+static void free_tio(struct dm_target_io *tio)
{
bio_put(&tio->clone);
}
@@ -1055,7 +1055,7 @@ static void clone_endio(struct bio *bio)
!bdev_get_queue(bio->bi_bdev)->limits.max_write_same_sectors))
disable_write_same(md);
- free_tio(md, tio);
+ free_tio(tio);
dec_pending(io, error);
}
@@ -1517,7 +1517,6 @@ static void __map_bio(struct dm_target_io *tio)
{
int r;
sector_t sector;
- struct mapped_device *md;
struct bio *clone = &tio->clone;
struct dm_target *ti = tio->ti;
@@ -1540,9 +1539,8 @@ static void __map_bio(struct dm_target_io *tio)
generic_make_request(clone);
} else if (r < 0 || r == DM_MAPIO_REQUEUE) {
/* error the io and bail out, or requeue it if needed */
- md = tio->io->md;
dec_pending(tio->io, r);
- free_tio(md, tio);
+ free_tio(tio);
} else if (r != DM_MAPIO_SUBMITTED) {
DMWARN("unimplemented target map return value: %d", r);
BUG();
@@ -1663,7 +1661,7 @@ static int __clone_and_map_data_bio(struct clone_info *ci, struct dm_target *ti,
tio->len_ptr = len;
r = clone_bio(tio, bio, sector, *len);
if (r < 0) {
- free_tio(ci->md, tio);
+ free_tio(tio);
break;
}
__map_bio(tio);
diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
index dd97d4245822..41573f1f626f 100644
--- a/drivers/md/md-cluster.c
+++ b/drivers/md/md-cluster.c
@@ -61,6 +61,10 @@ struct resync_info {
* the lock.
*/
#define MD_CLUSTER_SEND_LOCKED_ALREADY 5
+/* We should receive message after node joined cluster and
+ * set up all the related infos such as bitmap and personality */
+#define MD_CLUSTER_ALREADY_IN_CLUSTER 6
+#define MD_CLUSTER_PENDING_RECV_EVENT 7
struct md_cluster_info {
@@ -85,6 +89,9 @@ struct md_cluster_info {
struct completion newdisk_completion;
wait_queue_head_t wait;
unsigned long state;
+ /* record the region in RESYNCING message */
+ sector_t sync_low;
+ sector_t sync_hi;
};
enum msg_type {
@@ -284,11 +291,14 @@ static void recover_bitmaps(struct md_thread *thread)
goto dlm_unlock;
}
if (hi > 0) {
- /* TODO:Wait for current resync to get over */
- set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
if (lo < mddev->recovery_cp)
mddev->recovery_cp = lo;
- md_check_recovery(mddev);
+ /* wake up thread to continue resync in case resync
+ * is not finished */
+ if (mddev->recovery_cp != MaxSector) {
+ set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+ md_wakeup_thread(mddev->thread);
+ }
}
dlm_unlock:
dlm_unlock_sync(bm_lockres);
@@ -370,8 +380,12 @@ static void ack_bast(void *arg, int mode)
struct dlm_lock_resource *res = arg;
struct md_cluster_info *cinfo = res->mddev->cluster_info;
- if (mode == DLM_LOCK_EX)
- md_wakeup_thread(cinfo->recv_thread);
+ if (mode == DLM_LOCK_EX) {
+ if (test_bit(MD_CLUSTER_ALREADY_IN_CLUSTER, &cinfo->state))
+ md_wakeup_thread(cinfo->recv_thread);
+ else
+ set_bit(MD_CLUSTER_PENDING_RECV_EVENT, &cinfo->state);
+ }
}
static void __remove_suspend_info(struct md_cluster_info *cinfo, int slot)
@@ -408,6 +422,30 @@ static void process_suspend_info(struct mddev *mddev,
md_wakeup_thread(mddev->thread);
return;
}
+
+ /*
+	 * The bitmaps are not the same on different nodes.
+	 * If RESYNCING is happening on one node, the node
+	 * that received the RESYNCING message would otherwise
+	 * resync the region [lo, hi] again, so resync time
+	 * can be reduced a lot if we keep the bitmaps of the
+	 * different nodes matched up.
+	 *
+	 * sync_low/hi record the region that arrived in the
+	 * previous RESYNCING message.
+	 *
+	 * Call bitmap_sync_with_cluster to clear NEEDED_MASK
+	 * and set RESYNC_MASK, since the resync thread is
+	 * running on another node and we do not need to do
+	 * the resync again for the same section */
+ bitmap_sync_with_cluster(mddev, cinfo->sync_low,
+ cinfo->sync_hi,
+ lo, hi);
+ cinfo->sync_low = lo;
+ cinfo->sync_hi = hi;
+
s = kzalloc(sizeof(struct suspend_info), GFP_KERNEL);
if (!s)
return;
@@ -482,11 +520,13 @@ static void process_readd_disk(struct mddev *mddev, struct cluster_msg *msg)
__func__, __LINE__, le32_to_cpu(msg->raid_slot));
}
-static void process_recvd_msg(struct mddev *mddev, struct cluster_msg *msg)
+static int process_recvd_msg(struct mddev *mddev, struct cluster_msg *msg)
{
+ int ret = 0;
+
if (WARN(mddev->cluster_info->slot_number - 1 == le32_to_cpu(msg->slot),
"node %d received it's own msg\n", le32_to_cpu(msg->slot)))
- return;
+ return -1;
switch (le32_to_cpu(msg->type)) {
case METADATA_UPDATED:
process_metadata_update(mddev, msg);
@@ -509,9 +549,11 @@ static void process_recvd_msg(struct mddev *mddev, struct cluster_msg *msg)
__recover_slot(mddev, le32_to_cpu(msg->slot));
break;
default:
+ ret = -1;
pr_warn("%s:%d Received unknown message from %d\n",
__func__, __LINE__, msg->slot);
}
+ return ret;
}
/*
@@ -535,7 +577,9 @@ static void recv_daemon(struct md_thread *thread)
/* read lvb and wake up thread to process this message_lockres */
memcpy(&msg, message_lockres->lksb.sb_lvbptr, sizeof(struct cluster_msg));
- process_recvd_msg(thread->mddev, &msg);
+ ret = process_recvd_msg(thread->mddev, &msg);
+ if (ret)
+ goto out;
/*release CR on ack_lockres*/
ret = dlm_unlock_sync(ack_lockres);
@@ -549,6 +593,7 @@ static void recv_daemon(struct md_thread *thread)
ret = dlm_lock_sync(ack_lockres, DLM_LOCK_CR);
if (unlikely(ret != 0))
pr_info("lock CR on ack failed return %d\n", ret);
+out:
/*release CR on message_lockres*/
ret = dlm_unlock_sync(message_lockres);
if (unlikely(ret != 0))
@@ -778,17 +823,24 @@ static int join(struct mddev *mddev, int nodes)
cinfo->token_lockres = lockres_init(mddev, "token", NULL, 0);
if (!cinfo->token_lockres)
goto err;
- cinfo->ack_lockres = lockres_init(mddev, "ack", ack_bast, 0);
- if (!cinfo->ack_lockres)
- goto err;
cinfo->no_new_dev_lockres = lockres_init(mddev, "no-new-dev", NULL, 0);
if (!cinfo->no_new_dev_lockres)
goto err;
+ ret = dlm_lock_sync(cinfo->token_lockres, DLM_LOCK_EX);
+ if (ret) {
+ ret = -EAGAIN;
+ pr_err("md-cluster: can't join cluster to avoid lock issue\n");
+ goto err;
+ }
+ cinfo->ack_lockres = lockres_init(mddev, "ack", ack_bast, 0);
+ if (!cinfo->ack_lockres)
+ goto err;
/* get sync CR lock on ACK. */
if (dlm_lock_sync(cinfo->ack_lockres, DLM_LOCK_CR))
pr_err("md-cluster: failed to get a sync CR lock on ACK!(%d)\n",
ret);
+ dlm_unlock_sync(cinfo->token_lockres);
/* get sync CR lock on no-new-dev. */
if (dlm_lock_sync(cinfo->no_new_dev_lockres, DLM_LOCK_CR))
pr_err("md-cluster: failed to get a sync CR lock on no-new-dev!(%d)\n", ret);
@@ -809,12 +861,10 @@ static int join(struct mddev *mddev, int nodes)
if (!cinfo->resync_lockres)
goto err;
- ret = gather_all_resync_info(mddev, nodes);
- if (ret)
- goto err;
-
return 0;
err:
+ md_unregister_thread(&cinfo->recovery_thread);
+ md_unregister_thread(&cinfo->recv_thread);
lockres_free(cinfo->message_lockres);
lockres_free(cinfo->token_lockres);
lockres_free(cinfo->ack_lockres);
@@ -828,6 +878,19 @@ err:
return ret;
}
+static void load_bitmaps(struct mddev *mddev, int total_slots)
+{
+ struct md_cluster_info *cinfo = mddev->cluster_info;
+
+ /* load all the node's bitmap info for resync */
+ if (gather_all_resync_info(mddev, total_slots))
+ pr_err("md-cluster: failed to gather all resyn infos\n");
+ set_bit(MD_CLUSTER_ALREADY_IN_CLUSTER, &cinfo->state);
+ /* wake up recv thread in case something need to be handled */
+ if (test_and_clear_bit(MD_CLUSTER_PENDING_RECV_EVENT, &cinfo->state))
+ md_wakeup_thread(cinfo->recv_thread);
+}
+
static void resync_bitmap(struct mddev *mddev)
{
struct md_cluster_info *cinfo = mddev->cluster_info;
@@ -937,7 +1000,6 @@ static void metadata_update_cancel(struct mddev *mddev)
static int resync_start(struct mddev *mddev)
{
struct md_cluster_info *cinfo = mddev->cluster_info;
- cinfo->resync_lockres->flags |= DLM_LKF_NOQUEUE;
return dlm_lock_sync(cinfo->resync_lockres, DLM_LOCK_EX);
}
@@ -967,7 +1029,6 @@ static int resync_info_update(struct mddev *mddev, sector_t lo, sector_t hi)
static int resync_finish(struct mddev *mddev)
{
struct md_cluster_info *cinfo = mddev->cluster_info;
- cinfo->resync_lockres->flags &= ~DLM_LKF_NOQUEUE;
dlm_unlock_sync(cinfo->resync_lockres);
return resync_info_update(mddev, 0, 0);
}
@@ -1171,6 +1232,7 @@ static struct md_cluster_operations cluster_ops = {
.add_new_disk_cancel = add_new_disk_cancel,
.new_disk_ack = new_disk_ack,
.remove_disk = remove_disk,
+ .load_bitmaps = load_bitmaps,
.gather_bitmaps = gather_bitmaps,
.lock_all_bitmaps = lock_all_bitmaps,
.unlock_all_bitmaps = unlock_all_bitmaps,
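md-cluster above moves gather_all_resync_info() into a new load_bitmaps() hook and adds two state bits so that an ack bast arriving before the node has finished joining is remembered via MD_CLUSTER_PENDING_RECV_EVENT and replayed once MD_CLUSTER_ALREADY_IN_CLUSTER is set. A simplified userspace model of that handshake.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool in_cluster;		/* MD_CLUSTER_ALREADY_IN_CLUSTER */
static atomic_bool pending_event;	/* MD_CLUSTER_PENDING_RECV_EVENT */

static void ack_bast(void)
{
	if (atomic_load(&in_cluster))
		printf("wake receive thread now\n");
	else
		atomic_store(&pending_event, true);	/* defer until joined */
}

static void load_bitmaps_done(void)
{
	atomic_store(&in_cluster, true);
	/* Replay anything that arrived while we were still joining. */
	if (atomic_exchange(&pending_event, false))
		printf("wake receive thread for deferred event\n");
}

int main(void)
{
	ack_bast();		/* too early: gets deferred */
	load_bitmaps_done();	/* replays the deferred wakeup */
	return 0;
}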
diff --git a/drivers/md/md-cluster.h b/drivers/md/md-cluster.h
index 45ce6c97d8bd..e765499ba591 100644
--- a/drivers/md/md-cluster.h
+++ b/drivers/md/md-cluster.h
@@ -23,6 +23,7 @@ struct md_cluster_operations {
void (*add_new_disk_cancel)(struct mddev *mddev);
int (*new_disk_ack)(struct mddev *mddev, bool ack);
int (*remove_disk)(struct mddev *mddev, struct md_rdev *rdev);
+ void (*load_bitmaps)(struct mddev *mddev, int total_slots);
int (*gather_bitmaps)(struct md_rdev *rdev);
int (*lock_all_bitmaps)(struct mddev *mddev);
void (*unlock_all_bitmaps)(struct mddev *mddev);
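The header change just adds the load_bitmaps callback to md_cluster_operations, which bitmap_load() invokes for clustered arrays. A tiny illustration of the hook-in-an-ops-table pattern, with invented names.

#include <stdio.h>

struct cluster_ops {
	void (*load_bitmaps)(int total_slots);
};

static void load_bitmaps(int total_slots)
{
	printf("loading bitmap info for %d slot(s)\n", total_slots);
}

static const struct cluster_ops ops = { .load_bitmaps = load_bitmaps };

int main(void)
{
	int clustered = 1;	/* stands in for mddev_is_clustered() */

	if (clustered && ops.load_bitmaps)
		ops.load_bitmaps(4);
	return 0;
}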
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 14d3b37944df..866825f10b4c 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -307,7 +307,7 @@ static blk_qc_t md_make_request(struct request_queue *q, struct bio *bio)
*/
void mddev_suspend(struct mddev *mddev)
{
- WARN_ON_ONCE(current == mddev->thread->tsk);
+ WARN_ON_ONCE(mddev->thread && current == mddev->thread->tsk);
if (mddev->suspended++)
return;
synchronize_rcu();
@@ -2291,19 +2291,24 @@ void md_update_sb(struct mddev *mddev, int force_change)
return;
}
+repeat:
if (mddev_is_clustered(mddev)) {
if (test_and_clear_bit(MD_CHANGE_DEVS, &mddev->flags))
force_change = 1;
+ if (test_and_clear_bit(MD_CHANGE_CLEAN, &mddev->flags))
+ nospares = 1;
ret = md_cluster_ops->metadata_update_start(mddev);
/* Has someone else updated the sb? */
if (!does_sb_need_changing(mddev)) {
if (ret == 0)
md_cluster_ops->metadata_update_cancel(mddev);
- clear_bit(MD_CHANGE_PENDING, &mddev->flags);
+ bit_clear_unless(&mddev->flags, BIT(MD_CHANGE_PENDING),
+ BIT(MD_CHANGE_DEVS) |
+ BIT(MD_CHANGE_CLEAN));
return;
}
}
-repeat:
+
/* First make sure individual recovery_offsets are correct */
rdev_for_each(rdev, mddev) {
if (rdev->raid_disk >= 0 &&
@@ -2430,15 +2435,14 @@ repeat:
md_super_wait(mddev);
/* if there was a failure, MD_CHANGE_DEVS was set, and we re-write super */
- spin_lock(&mddev->lock);
+ if (mddev_is_clustered(mddev) && ret == 0)
+ md_cluster_ops->metadata_update_finish(mddev);
+
if (mddev->in_sync != sync_req ||
- test_bit(MD_CHANGE_DEVS, &mddev->flags)) {
+ !bit_clear_unless(&mddev->flags, BIT(MD_CHANGE_PENDING),
+ BIT(MD_CHANGE_DEVS) | BIT(MD_CHANGE_CLEAN)))
/* have to write it out again */
- spin_unlock(&mddev->lock);
goto repeat;
- }
- clear_bit(MD_CHANGE_PENDING, &mddev->flags);
- spin_unlock(&mddev->lock);
wake_up(&mddev->sb_wait);
if (test_bit(MD_RECOVERY_RUNNING, &mddev->recovery))
sysfs_notify(&mddev->kobj, NULL, "sync_completed");
@@ -2452,9 +2456,6 @@ repeat:
clear_bit(BlockedBadBlocks, &rdev->flags);
wake_up(&rdev->blocked_wait);
}
-
- if (mddev_is_clustered(mddev) && ret == 0)
- md_cluster_ops->metadata_update_finish(mddev);
}
EXPORT_SYMBOL(md_update_sb);
@@ -4816,6 +4817,10 @@ array_size_store(struct mddev *mddev, const char *buf, size_t len)
if (err)
return err;
+ /* cluster raid doesn't support changing array_sectors */
+ if (mddev_is_clustered(mddev))
+ return -EINVAL;
+
if (strncmp(buf, "default", 7) == 0) {
if (mddev->pers)
sectors = mddev->pers->size(mddev, 0, 0);
@@ -5039,7 +5044,7 @@ static int md_alloc(dev_t dev, char *name)
disk->fops = &md_fops;
disk->private_data = mddev;
disk->queue = mddev->queue;
- blk_queue_flush(mddev->queue, REQ_FLUSH | REQ_FUA);
+ blk_queue_write_cache(mddev->queue, true, true);
/* Allow extended partitions. This makes the
* 'mdp' device redundant, but we can't really
* remove it now.
@@ -6437,6 +6442,10 @@ static int update_size(struct mddev *mddev, sector_t num_sectors)
int rv;
int fit = (num_sectors == 0);
+ /* cluster raid doesn't support updating the size */
+ if (mddev_is_clustered(mddev))
+ return -EINVAL;
+
if (mddev->pers->resize == NULL)
return -EINVAL;
/* The "num_sectors" is the number of sectors of each device that
@@ -7785,7 +7794,7 @@ void md_do_sync(struct md_thread *thread)
struct md_rdev *rdev;
char *desc, *action = NULL;
struct blk_plug plug;
- bool cluster_resync_finished = false;
+ int ret;
/* just in case thread restarts... */
if (test_bit(MD_RECOVERY_DONE, &mddev->recovery))
@@ -7795,6 +7804,19 @@ void md_do_sync(struct md_thread *thread)
return;
}
+ if (mddev_is_clustered(mddev)) {
+ ret = md_cluster_ops->resync_start(mddev);
+ if (ret)
+ goto skip;
+
+ if (!(test_bit(MD_RECOVERY_SYNC, &mddev->recovery) ||
+ test_bit(MD_RECOVERY_RESHAPE, &mddev->recovery) ||
+ test_bit(MD_RECOVERY_RECOVER, &mddev->recovery))
+ && ((unsigned long long)mddev->curr_resync_completed
+ < (unsigned long long)mddev->resync_max_sectors))
+ goto skip;
+ }
+
if (test_bit(MD_RECOVERY_SYNC, &mddev->recovery)) {
if (test_bit(MD_RECOVERY_CHECK, &mddev->recovery)) {
desc = "data-check";
@@ -8089,11 +8111,6 @@ void md_do_sync(struct md_thread *thread)
mddev->curr_resync_completed = mddev->curr_resync;
sysfs_notify(&mddev->kobj, NULL, "sync_completed");
}
- /* tell personality and other nodes that we are finished */
- if (mddev_is_clustered(mddev)) {
- md_cluster_ops->resync_finish(mddev);
- cluster_resync_finished = true;
- }
mddev->pers->sync_request(mddev, max_sectors, &skipped);
if (!test_bit(MD_RECOVERY_CHECK, &mddev->recovery) &&
@@ -8130,12 +8147,18 @@ void md_do_sync(struct md_thread *thread)
}
}
skip:
- set_bit(MD_CHANGE_DEVS, &mddev->flags);
-
if (mddev_is_clustered(mddev) &&
- test_bit(MD_RECOVERY_INTR, &mddev->recovery) &&
- !cluster_resync_finished)
+ ret == 0) {
+ /* set CHANGE_PENDING here since another update may
+ * be needed, so other nodes are informed */
+ set_mask_bits(&mddev->flags, 0,
+ BIT(MD_CHANGE_PENDING) | BIT(MD_CHANGE_DEVS));
+ md_wakeup_thread(mddev->thread);
+ wait_event(mddev->sb_wait,
+ !test_bit(MD_CHANGE_PENDING, &mddev->flags));
md_cluster_ops->resync_finish(mddev);
+ } else
+ set_bit(MD_CHANGE_DEVS, &mddev->flags);
spin_lock(&mddev->lock);
if (!test_bit(MD_RECOVERY_INTR, &mddev->recovery)) {
@@ -8226,18 +8249,9 @@ static void md_start_sync(struct work_struct *ws)
struct mddev *mddev = container_of(ws, struct mddev, del_work);
int ret = 0;
- if (mddev_is_clustered(mddev)) {
- ret = md_cluster_ops->resync_start(mddev);
- if (ret) {
- mddev->sync_thread = NULL;
- goto out;
- }
- }
-
mddev->sync_thread = md_register_thread(md_do_sync,
mddev,
"resync");
-out:
if (!mddev->sync_thread) {
if (!(mddev_is_clustered(mddev) && ret == -EAGAIN))
printk(KERN_ERR "%s: could not start resync"
@@ -8536,6 +8550,7 @@ EXPORT_SYMBOL(md_finish_reshape);
int rdev_set_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
int is_new)
{
+ struct mddev *mddev = rdev->mddev;
int rv;
if (is_new)
s += rdev->new_data_offset;
@@ -8545,8 +8560,8 @@ int rdev_set_badblocks(struct md_rdev *rdev, sector_t s, int sectors,
if (rv == 0) {
/* Make sure they get written out promptly */
sysfs_notify_dirent_safe(rdev->sysfs_state);
- set_bit(MD_CHANGE_CLEAN, &rdev->mddev->flags);
- set_bit(MD_CHANGE_PENDING, &rdev->mddev->flags);
+ set_mask_bits(&mddev->flags, 0,
+ BIT(MD_CHANGE_CLEAN) | BIT(MD_CHANGE_PENDING));
md_wakeup_thread(rdev->mddev->thread);
return 1;
} else
@@ -8680,6 +8695,11 @@ static void check_sb_changes(struct mddev *mddev, struct md_rdev *rdev)
ret = remove_and_add_spares(mddev, rdev2);
pr_info("Activated spare: %s\n",
bdevname(rdev2->bdev,b));
+ /* wake up mddev->thread here so the array can
+ * perform a resync with the newly activated disk */
+ set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
+ md_wakeup_thread(mddev->thread);
+
}
/* device faulty
* We just want to do the minimum to mark the disk
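The md.c rework above moves superblock-flag handling from mddev->lock to atomic bit operations: set_mask_bits() raises MD_CHANGE_* flags together, and bit_clear_unless() drops MD_CHANGE_PENDING only if no new MD_CHANGE_DEVS/MD_CHANGE_CLEAN dirtying happened, otherwise md_update_sb() loops back to repeat. A userspace approximation of the clear-unless idea using a compare-and-swap loop; the exact kernel semantics are inferred from the call sites above.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static bool clear_bits_unless(atomic_ulong *flags, unsigned long clear,
			      unsigned long unless)
{
	unsigned long old = atomic_load(flags), new;

	do {
		if (old & unless)
			return false;	/* flags were re-dirtied, caller retries */
		new = old & ~clear;
	} while (!atomic_compare_exchange_weak(flags, &old, new));

	return true;
}

int main(void)
{
	atomic_ulong flags = 0;
	const unsigned long PENDING = 1UL << 0, DEVS = 1UL << 1;

	atomic_fetch_or(&flags, PENDING);
	printf("cleared: %d\n", clear_bits_unless(&flags, PENDING, DEVS));

	atomic_fetch_or(&flags, PENDING | DEVS);
	printf("cleared: %d\n", clear_bits_unless(&flags, PENDING, DEVS));
	return 0;
}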
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index a7f2b9c9f8a0..c7c8cde0ab21 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -1474,8 +1474,8 @@ static void raid1_error(struct mddev *mddev, struct md_rdev *rdev)
* if recovery is running, make sure it aborts.
*/
set_bit(MD_RECOVERY_INTR, &mddev->recovery);
- set_bit(MD_CHANGE_DEVS, &mddev->flags);
- set_bit(MD_CHANGE_PENDING, &mddev->flags);
+ set_mask_bits(&mddev->flags, 0,
+ BIT(MD_CHANGE_DEVS) | BIT(MD_CHANGE_PENDING));
printk(KERN_ALERT
"md/raid1:%s: Disk failure on %s, disabling device.\n"
"md/raid1:%s: Operation continuing on %d devices.\n",
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index e3fd725d5c4d..c7de2a53e625 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -1102,8 +1102,8 @@ static void __make_request(struct mddev *mddev, struct bio *bio)
bio->bi_iter.bi_sector < conf->reshape_progress))) {
/* Need to update reshape_position in metadata */
mddev->reshape_position = conf->reshape_progress;
- set_bit(MD_CHANGE_DEVS, &mddev->flags);
- set_bit(MD_CHANGE_PENDING, &mddev->flags);
+ set_mask_bits(&mddev->flags, 0,
+ BIT(MD_CHANGE_DEVS) | BIT(MD_CHANGE_PENDING));
md_wakeup_thread(mddev->thread);
wait_event(mddev->sb_wait,
!test_bit(MD_CHANGE_PENDING, &mddev->flags));
@@ -1591,8 +1591,8 @@ static void raid10_error(struct mddev *mddev, struct md_rdev *rdev)
set_bit(MD_RECOVERY_INTR, &mddev->recovery);
set_bit(Blocked, &rdev->flags);
set_bit(Faulty, &rdev->flags);
- set_bit(MD_CHANGE_DEVS, &mddev->flags);
- set_bit(MD_CHANGE_PENDING, &mddev->flags);
+ set_mask_bits(&mddev->flags, 0,
+ BIT(MD_CHANGE_DEVS) | BIT(MD_CHANGE_PENDING));
spin_unlock_irqrestore(&conf->device_lock, flags);
printk(KERN_ALERT
"md/raid10:%s: Disk failure on %s, disabling device.\n"
@@ -3782,8 +3782,10 @@ static int raid10_resize(struct mddev *mddev, sector_t sectors)
return ret;
}
md_set_array_sectors(mddev, size);
- set_capacity(mddev->gendisk, mddev->array_sectors);
- revalidate_disk(mddev->gendisk);
+ if (mddev->queue) {
+ set_capacity(mddev->gendisk, mddev->array_sectors);
+ revalidate_disk(mddev->gendisk);
+ }
if (sectors > mddev->dev_sectors &&
mddev->recovery_cp > oldsize) {
mddev->recovery_cp = oldsize;
@@ -4593,8 +4595,10 @@ static void raid10_finish_reshape(struct mddev *mddev)
set_bit(MD_RECOVERY_NEEDED, &mddev->recovery);
}
mddev->resync_max_sectors = size;
- set_capacity(mddev->gendisk, mddev->array_sectors);
- revalidate_disk(mddev->gendisk);
+ if (mddev->queue) {
+ set_capacity(mddev->gendisk, mddev->array_sectors);
+ revalidate_disk(mddev->gendisk);
+ }
} else {
int d;
for (d = conf->geo.raid_disks ;
diff --git a/drivers/md/raid5-cache.c b/drivers/md/raid5-cache.c
index 9531f5f05b93..e889e2deb7b3 100644
--- a/drivers/md/raid5-cache.c
+++ b/drivers/md/raid5-cache.c
@@ -712,8 +712,8 @@ static void r5l_write_super_and_discard_space(struct r5l_log *log,
* in_teardown check workaround this issue.
*/
if (!log->in_teardown) {
- set_bit(MD_CHANGE_DEVS, &mddev->flags);
- set_bit(MD_CHANGE_PENDING, &mddev->flags);
+ set_mask_bits(&mddev->flags, 0,
+ BIT(MD_CHANGE_DEVS) | BIT(MD_CHANGE_PENDING));
md_wakeup_thread(mddev->thread);
wait_event(mddev->sb_wait,
!test_bit(MD_CHANGE_PENDING, &mddev->flags) ||
@@ -1188,6 +1188,7 @@ ioerr:
int r5l_init_log(struct r5conf *conf, struct md_rdev *rdev)
{
+ struct request_queue *q = bdev_get_queue(rdev->bdev);
struct r5l_log *log;
if (PAGE_SIZE != 4096)
@@ -1197,7 +1198,7 @@ int r5l_init_log(struct r5conf *conf, struct md_rdev *rdev)
return -ENOMEM;
log->rdev = rdev;
- log->need_cache_flush = (rdev->bdev->bd_disk->queue->flush_flags != 0);
+ log->need_cache_flush = test_bit(QUEUE_FLAG_WC, &q->queue_flags) != 0;
log->uuid_checksum = crc32c_le(~0, rdev->mddev->uuid,
sizeof(rdev->mddev->uuid));
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index e48c262ce032..8959e6dd31dd 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -2514,8 +2514,8 @@ static void raid5_error(struct mddev *mddev, struct md_rdev *rdev)
set_bit(Blocked, &rdev->flags);
set_bit(Faulty, &rdev->flags);
- set_bit(MD_CHANGE_DEVS, &mddev->flags);
- set_bit(MD_CHANGE_PENDING, &mddev->flags);
+ set_mask_bits(&mddev->flags, 0,
+ BIT(MD_CHANGE_DEVS) | BIT(MD_CHANGE_PENDING));
printk(KERN_ALERT
"md/raid:%s: Disk failure on %s, disabling device.\n"
"md/raid:%s: Operation continuing on %d devices.\n",
@@ -7572,8 +7572,10 @@ static void raid5_finish_reshape(struct mddev *mddev)
if (mddev->delta_disks > 0) {
md_set_array_sectors(mddev, raid5_size(mddev, 0, 0));
- set_capacity(mddev->gendisk, mddev->array_sectors);
- revalidate_disk(mddev->gendisk);
+ if (mddev->queue) {
+ set_capacity(mddev->gendisk, mddev->array_sectors);
+ revalidate_disk(mddev->gendisk);
+ }
} else {
int d;
spin_lock_irq(&conf->device_lock);
diff --git a/drivers/media/common/Kconfig b/drivers/media/common/Kconfig
index 21154dd87b0b..326df0ad75c0 100644
--- a/drivers/media/common/Kconfig
+++ b/drivers/media/common/Kconfig
@@ -19,3 +19,4 @@ config CYPRESS_FIRMWARE
source "drivers/media/common/b2c2/Kconfig"
source "drivers/media/common/saa7146/Kconfig"
source "drivers/media/common/siano/Kconfig"
+source "drivers/media/common/v4l2-tpg/Kconfig"
diff --git a/drivers/media/common/Makefile b/drivers/media/common/Makefile
index 89b795df2cdd..2d1b0a025084 100644
--- a/drivers/media/common/Makefile
+++ b/drivers/media/common/Makefile
@@ -1,4 +1,4 @@
-obj-y += b2c2/ saa7146/ siano/
+obj-y += b2c2/ saa7146/ siano/ v4l2-tpg/
obj-$(CONFIG_VIDEO_CX2341X) += cx2341x.o
obj-$(CONFIG_VIDEO_TVEEPROM) += tveeprom.o
obj-$(CONFIG_CYPRESS_FIRMWARE) += cypress_firmware.o
diff --git a/drivers/media/common/v4l2-tpg/Kconfig b/drivers/media/common/v4l2-tpg/Kconfig
new file mode 100644
index 000000000000..7456fc1c41ed
--- /dev/null
+++ b/drivers/media/common/v4l2-tpg/Kconfig
@@ -0,0 +1,2 @@
+config VIDEO_V4L2_TPG
+ tristate
diff --git a/drivers/media/common/v4l2-tpg/Makefile b/drivers/media/common/v4l2-tpg/Makefile
new file mode 100644
index 000000000000..f588df466ae3
--- /dev/null
+++ b/drivers/media/common/v4l2-tpg/Makefile
@@ -0,0 +1,3 @@
+v4l2-tpg-objs := v4l2-tpg-core.o v4l2-tpg-colors.o
+
+obj-$(CONFIG_VIDEO_V4L2_TPG) += v4l2-tpg.o
diff --git a/drivers/media/platform/vivid/vivid-tpg-colors.c b/drivers/media/common/v4l2-tpg/v4l2-tpg-colors.c
index 2299f0ce47c8..9bcbd318489b 100644
--- a/drivers/media/platform/vivid/vivid-tpg-colors.c
+++ b/drivers/media/common/v4l2-tpg/v4l2-tpg-colors.c
@@ -1,5 +1,5 @@
/*
- * vivid-color.c - A table that converts colors to various colorspaces
+ * v4l2-tpg-colors.c - A table that converts colors to various colorspaces
*
* The test pattern generator uses the tpg_colors for its test patterns.
* For testing colorspaces the first 8 colors of that table need to be
@@ -12,7 +12,7 @@
* This source also contains the code used to generate the tpg_csc_colors
* table. Run the following command to compile it:
*
- * gcc vivid-tpg-colors.c -DCOMPILE_APP -o gen-colors -lm
+ * gcc v4l2-tpg-colors.c -DCOMPILE_APP -o gen-colors -lm
*
* and run the utility.
*
@@ -36,8 +36,7 @@
*/
#include <linux/videodev2.h>
-
-#include "vivid-tpg-colors.h"
+#include <media/v4l2-tpg-colors.h>
/* sRGB colors with range [0-255] */
const struct color tpg_colors[TPG_COLOR_MAX] = {
diff --git a/drivers/media/platform/vivid/vivid-tpg.c b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
index da862bb2e5f8..cf1dadd0be9e 100644
--- a/drivers/media/platform/vivid/vivid-tpg.c
+++ b/drivers/media/common/v4l2-tpg/v4l2-tpg-core.c
@@ -1,5 +1,5 @@
/*
- * vivid-tpg.c - Test Pattern Generator
+ * v4l2-tpg-core.c - Test Pattern Generator
*
* Note: gen_twopix and tpg_gen_text are based on code from vivi.c. See the
* vivi.c source for the copyright information of those functions.
@@ -20,7 +20,8 @@
* SOFTWARE.
*/
-#include "vivid-tpg.h"
+#include <linux/module.h>
+#include <media/v4l2-tpg.h>
/* Must remain in sync with enum tpg_pattern */
const char * const tpg_pattern_strings[] = {
@@ -48,6 +49,7 @@ const char * const tpg_pattern_strings[] = {
"Noise",
NULL
};
+EXPORT_SYMBOL_GPL(tpg_pattern_strings);
/* Must remain in sync with enum tpg_aspect */
const char * const tpg_aspect_strings[] = {
@@ -58,6 +60,7 @@ const char * const tpg_aspect_strings[] = {
"16x9 Anamorphic",
NULL
};
+EXPORT_SYMBOL_GPL(tpg_aspect_strings);
/*
* Sine table: sin[0] = 127 * sin(-180 degrees)
@@ -93,6 +96,7 @@ void tpg_set_font(const u8 *f)
{
font8x16 = f;
}
+EXPORT_SYMBOL_GPL(tpg_set_font);
void tpg_init(struct tpg_data *tpg, unsigned w, unsigned h)
{
@@ -114,6 +118,7 @@ void tpg_init(struct tpg_data *tpg, unsigned w, unsigned h)
tpg->colorspace = V4L2_COLORSPACE_SRGB;
tpg->perc_fill = 100;
}
+EXPORT_SYMBOL_GPL(tpg_init);
int tpg_alloc(struct tpg_data *tpg, unsigned max_w)
{
@@ -150,6 +155,7 @@ int tpg_alloc(struct tpg_data *tpg, unsigned max_w)
}
return 0;
}
+EXPORT_SYMBOL_GPL(tpg_alloc);
void tpg_free(struct tpg_data *tpg)
{
@@ -174,6 +180,7 @@ void tpg_free(struct tpg_data *tpg)
tpg->random_line[plane] = NULL;
}
}
+EXPORT_SYMBOL_GPL(tpg_free);
bool tpg_s_fourcc(struct tpg_data *tpg, u32 fourcc)
{
@@ -403,6 +410,7 @@ bool tpg_s_fourcc(struct tpg_data *tpg, u32 fourcc)
}
return true;
}
+EXPORT_SYMBOL_GPL(tpg_s_fourcc);
void tpg_s_crop_compose(struct tpg_data *tpg, const struct v4l2_rect *crop,
const struct v4l2_rect *compose)
@@ -418,6 +426,7 @@ void tpg_s_crop_compose(struct tpg_data *tpg, const struct v4l2_rect *crop,
tpg->scaled_width = 2;
tpg->recalc_lines = true;
}
+EXPORT_SYMBOL_GPL(tpg_s_crop_compose);
void tpg_reset_source(struct tpg_data *tpg, unsigned width, unsigned height,
u32 field)
@@ -442,6 +451,7 @@ void tpg_reset_source(struct tpg_data *tpg, unsigned width, unsigned height,
(2 * tpg->hdownsampling[p]);
tpg->recalc_square_border = true;
}
+EXPORT_SYMBOL_GPL(tpg_reset_source);
static enum tpg_color tpg_get_textbg_color(struct tpg_data *tpg)
{
@@ -1250,6 +1260,7 @@ unsigned tpg_g_interleaved_plane(const struct tpg_data *tpg, unsigned buf_line)
return 0;
}
}
+EXPORT_SYMBOL_GPL(tpg_g_interleaved_plane);
/* Return how many pattern lines are used by the current pattern. */
static unsigned tpg_get_pat_lines(const struct tpg_data *tpg)
@@ -1725,6 +1736,7 @@ void tpg_gen_text(const struct tpg_data *tpg, u8 *basep[TPG_MAX_PLANES][2],
}
}
}
+EXPORT_SYMBOL_GPL(tpg_gen_text);
void tpg_update_mv_step(struct tpg_data *tpg)
{
@@ -1773,6 +1785,7 @@ void tpg_update_mv_step(struct tpg_data *tpg)
if (factor < 0)
tpg->mv_vert_step = tpg->src_height - tpg->mv_vert_step;
}
+EXPORT_SYMBOL_GPL(tpg_update_mv_step);
/* Map the line number relative to the crop rectangle to a frame line number */
static unsigned tpg_calc_frameline(const struct tpg_data *tpg, unsigned src_y,
@@ -1862,6 +1875,7 @@ void tpg_calc_text_basep(struct tpg_data *tpg,
if (p == 0 && tpg->interleaved)
tpg_calc_text_basep(tpg, basep, 1, vbuf);
}
+EXPORT_SYMBOL_GPL(tpg_calc_text_basep);
static int tpg_pattern_avg(const struct tpg_data *tpg,
unsigned pat1, unsigned pat2)
@@ -1891,6 +1905,7 @@ void tpg_log_status(struct tpg_data *tpg)
pr_info("tpg quantization: %d/%d\n", tpg->quantization, tpg->real_quantization);
pr_info("tpg RGB range: %d/%d\n", tpg->rgb_range, tpg->real_rgb_range);
}
+EXPORT_SYMBOL_GPL(tpg_log_status);
/*
* This struct contains common parameters used by both the drawing of the
@@ -2296,6 +2311,7 @@ void tpg_fill_plane_buffer(struct tpg_data *tpg, v4l2_std_id std,
vbuf + buf_line * params.stride);
}
}
+EXPORT_SYMBOL_GPL(tpg_fill_plane_buffer);
void tpg_fillbuffer(struct tpg_data *tpg, v4l2_std_id std, unsigned p, u8 *vbuf)
{
@@ -2312,3 +2328,8 @@ void tpg_fillbuffer(struct tpg_data *tpg, v4l2_std_id std, unsigned p, u8 *vbuf)
offset += tpg_calc_plane_size(tpg, i);
}
}
+EXPORT_SYMBOL_GPL(tpg_fillbuffer);
+
+MODULE_DESCRIPTION("V4L2 Test Pattern Generator");
+MODULE_AUTHOR("Hans Verkuil");
+MODULE_LICENSE("GPL");
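With the test pattern generator moved into drivers/media/common/v4l2-tpg/ and its entry points exported above, any driver that selects VIDEO_V4L2_TPG can link against it. The sketch below is a rough consumer written only from the signatures visible in these hunks; the resolution, fourcc and the fill_test_frame() wrapper are made-up examples, and a real user such as vivid keeps a long-lived tpg_data rather than allocating one per frame:

/*
 * Hedged sketch of a VIDEO_V4L2_TPG consumer. Signatures are taken from
 * the hunks above; buffer sizing and error handling are simplified.
 */
#include <linux/videodev2.h>
#include <media/v4l2-tpg.h>

static int fill_test_frame(struct tpg_data *tpg, u8 *vbuf)
{
        int ret;

        tpg_init(tpg, 1280, 720);       /* defaults: sRGB, 100% fill */
        ret = tpg_alloc(tpg, 1920);     /* pre-allocate lines up to max width */
        if (ret)
                return ret;

        if (!tpg_s_fourcc(tpg, V4L2_PIX_FMT_YUYV)) {
                tpg_free(tpg);
                return -EINVAL;
        }

        tpg_reset_source(tpg, 1280, 720, V4L2_FIELD_NONE);
        tpg_fill_plane_buffer(tpg, V4L2_STD_UNKNOWN, 0, vbuf);  /* plane 0 */

        tpg_free(tpg);
        return 0;
}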
diff --git a/drivers/media/dvb-core/dvb-usb-ids.h b/drivers/media/dvb-core/dvb-usb-ids.h
index 0afad395ef97..a7a4674ccc40 100644
--- a/drivers/media/dvb-core/dvb-usb-ids.h
+++ b/drivers/media/dvb-core/dvb-usb-ids.h
@@ -58,6 +58,14 @@
#define USB_VID_TELESTAR 0x10b9
#define USB_VID_VISIONPLUS 0x13d3
#define USB_VID_SONY 0x1415
+#define USB_PID_TEVII_S421 0xd421
+#define USB_PID_TEVII_S480_1 0xd481
+#define USB_PID_TEVII_S480_2 0xd482
+#define USB_PID_TEVII_S630 0xd630
+#define USB_PID_TEVII_S632 0xd632
+#define USB_PID_TEVII_S650 0xd650
+#define USB_PID_TEVII_S660 0xd660
+#define USB_PID_TEVII_S662 0xd662
#define USB_VID_TWINHAN 0x1822
#define USB_VID_ULTIMA_ELECTRONIC 0x05d8
#define USB_VID_UNIWILL 0x1584
@@ -141,6 +149,7 @@
#define USB_PID_GENIUS_TVGO_DVB_T03 0x4012
#define USB_PID_GRANDTEC_DVBT_USB_COLD 0x0fa0
#define USB_PID_GRANDTEC_DVBT_USB_WARM 0x0fa1
+#define USB_PID_GOTVIEW_SAT_HD 0x5456
#define USB_PID_INTEL_CE9500 0x9500
#define USB_PID_ITETECH_IT9135 0x9135
#define USB_PID_ITETECH_IT9135_9005 0x9005
@@ -159,6 +168,8 @@
#define USB_PID_KWORLD_UB499_2T_T09 0xe409
#define USB_PID_KWORLD_VSTREAM_COLD 0x17de
#define USB_PID_KWORLD_VSTREAM_WARM 0x17df
+#define USB_PID_PROF_1100 0xb012
+#define USB_PID_TERRATEC_CINERGY_S 0x0064
#define USB_PID_TERRATEC_CINERGY_T_USB_XE 0x0055
#define USB_PID_TERRATEC_CINERGY_T_USB_XE_REV2 0x0069
#define USB_PID_TERRATEC_CINERGY_T_STICK 0x0093
@@ -361,6 +372,8 @@
#define USB_PID_YUAN_STK7700D 0x1efc
#define USB_PID_YUAN_STK7700D_2 0x1e8c
#define USB_PID_DW2102 0x2102
+#define USB_PID_DW2104 0x2104
+#define USB_PID_DW3101 0x3101
#define USB_PID_XTENSIONS_XD_380 0x0381
#define USB_PID_TELESTAR_STARSTICK_2 0x8000
#define USB_PID_MSI_DIGI_VOX_MINI_III 0x8807
@@ -373,6 +386,7 @@
#define USB_PID_ELGATO_EYETV_DTT_Dlx 0x0020
#define USB_PID_ELGATO_EYETV_SAT 0x002a
#define USB_PID_ELGATO_EYETV_SAT_V2 0x0025
+#define USB_PID_ELGATO_EYETV_SAT_V3 0x0036
#define USB_PID_DVB_T_USB_STICK_HIGH_SPEED_COLD 0x5000
#define USB_PID_DVB_T_USB_STICK_HIGH_SPEED_WARM 0x5001
#define USB_PID_FRIIO_WHITE 0x0001
diff --git a/drivers/media/dvb-core/dvbdev.c b/drivers/media/dvb-core/dvbdev.c
index e1684c570e2f..75a3f4b57fd4 100644
--- a/drivers/media/dvb-core/dvbdev.c
+++ b/drivers/media/dvb-core/dvbdev.c
@@ -676,13 +676,13 @@ int dvb_create_media_graph(struct dvb_adapter *adap,
demux, 0, MEDIA_LNK_FL_ENABLED,
false);
if (ret)
- return -ENOMEM;
+ return ret;
}
if (demux && ca) {
ret = media_create_pad_link(demux, 1, ca,
0, MEDIA_LNK_FL_ENABLED);
if (ret)
- return -ENOMEM;
+ return ret;
}
/* Create demux links for each ringbuffer/pad */
diff --git a/drivers/media/dvb-frontends/dib0090.c b/drivers/media/dvb-frontends/dib0090.c
index dc2d41e144fd..d879dc0607f4 100644
--- a/drivers/media/dvb-frontends/dib0090.c
+++ b/drivers/media/dvb-frontends/dib0090.c
@@ -1121,7 +1121,7 @@ void dib0090_pwm_gain_reset(struct dvb_frontend *fe)
(state->current_band == BAND_CBAND) ? "CBAND" : "NOT CBAND",
state->identity.version & 0x1f);
- if (rf_ramp && ((state->rf_ramp[0] == 0) ||
+ if (rf_ramp && ((state->rf_ramp && state->rf_ramp[0] == 0) ||
(state->current_band == BAND_CBAND &&
(state->identity.version & 0x1f) <= P1D_E_F))) {
dprintk("DE-Engage mux for direct gain reg control");
diff --git a/drivers/media/dvb-frontends/ds3000.c b/drivers/media/dvb-frontends/ds3000.c
index e8fc0329ea64..addffc33993a 100644
--- a/drivers/media/dvb-frontends/ds3000.c
+++ b/drivers/media/dvb-frontends/ds3000.c
@@ -458,7 +458,7 @@ static int ds3000_read_status(struct dvb_frontend *fe, enum fe_status *status)
break;
default:
- return 1;
+ return -EINVAL;
}
if (state->config->set_lock_led)
@@ -528,7 +528,7 @@ static int ds3000_read_ber(struct dvb_frontend *fe, u32* ber)
*ber = 0xffffffff;
break;
default:
- return 1;
+ return -EINVAL;
}
return 0;
@@ -623,7 +623,7 @@ static int ds3000_read_snr(struct dvb_frontend *fe, u16 *snr)
snr_reading, *snr);
break;
default:
- return 1;
+ return -EINVAL;
}
return 0;
@@ -661,7 +661,7 @@ static int ds3000_read_ucblocks(struct dvb_frontend *fe, u32 *ucblocks)
state->prevUCBS2 = _ucblocks;
break;
default:
- return 1;
+ return -EINVAL;
}
return 0;
@@ -754,7 +754,7 @@ static int ds3000_send_diseqc_msg(struct dvb_frontend *fe,
data |= 0x80;
ds3000_writereg(state, 0xa2, data);
- return 1;
+ return -ETIMEDOUT;
}
data = ds3000_readreg(state, 0xa2);
@@ -808,7 +808,7 @@ static int ds3000_diseqc_send_burst(struct dvb_frontend *fe,
data |= 0x80;
ds3000_writereg(state, 0xa2, data);
- return 1;
+ return -ETIMEDOUT;
}
data = ds3000_readreg(state, 0xa2);
@@ -951,7 +951,7 @@ static int ds3000_set_frontend(struct dvb_frontend *fe)
ds3000_writereg(state, 0xfe, 0x98);
break;
default:
- return 1;
+ return -EINVAL;
}
/* enable 27MHz clock output */
diff --git a/drivers/media/dvb-frontends/m88ds3103.c b/drivers/media/dvb-frontends/m88ds3103.c
index 76883600ec6f..5557ef8fc704 100644
--- a/drivers/media/dvb-frontends/m88ds3103.c
+++ b/drivers/media/dvb-frontends/m88ds3103.c
@@ -1251,9 +1251,9 @@ static void m88ds3103_release(struct dvb_frontend *fe)
i2c_unregister_device(client);
}
-static int m88ds3103_select(struct i2c_adapter *adap, void *mux_priv, u32 chan)
+static int m88ds3103_select(struct i2c_mux_core *muxc, u32 chan)
{
- struct m88ds3103_dev *dev = mux_priv;
+ struct m88ds3103_dev *dev = i2c_mux_priv(muxc);
struct i2c_client *client = dev->client;
int ret;
struct i2c_msg msg = {
@@ -1374,7 +1374,7 @@ static struct i2c_adapter *m88ds3103_get_i2c_adapter(struct i2c_client *client)
dev_dbg(&client->dev, "\n");
- return dev->i2c_adapter;
+ return dev->muxc->adapter[0];
}
static int m88ds3103_probe(struct i2c_client *client,
@@ -1467,13 +1467,16 @@ static int m88ds3103_probe(struct i2c_client *client,
goto err_kfree;
/* create mux i2c adapter for tuner */
- dev->i2c_adapter = i2c_add_mux_adapter(client->adapter, &client->dev,
- dev, 0, 0, 0, m88ds3103_select,
- NULL);
- if (dev->i2c_adapter == NULL) {
+ dev->muxc = i2c_mux_alloc(client->adapter, &client->dev, 1, 0, 0,
+ m88ds3103_select, NULL);
+ if (!dev->muxc) {
ret = -ENOMEM;
goto err_kfree;
}
+ dev->muxc->priv = dev;
+ ret = i2c_mux_add_adapter(dev->muxc, 0, 0, 0);
+ if (ret)
+ goto err_kfree;
/* create dvb_frontend */
memcpy(&dev->fe.ops, &m88ds3103_ops, sizeof(struct dvb_frontend_ops));
@@ -1502,7 +1505,7 @@ static int m88ds3103_remove(struct i2c_client *client)
dev_dbg(&client->dev, "\n");
- i2c_del_mux_adapter(dev->i2c_adapter);
+ i2c_mux_del_adapters(dev->muxc);
kfree(dev);
return 0;
diff --git a/drivers/media/dvb-frontends/m88ds3103_priv.h b/drivers/media/dvb-frontends/m88ds3103_priv.h
index eee8c22c51ec..d78e467295d2 100644
--- a/drivers/media/dvb-frontends/m88ds3103_priv.h
+++ b/drivers/media/dvb-frontends/m88ds3103_priv.h
@@ -42,11 +42,11 @@ struct m88ds3103_dev {
enum fe_status fe_status;
u32 dvbv3_ber; /* for old DVBv3 API read_ber */
bool warm; /* FW running */
- struct i2c_adapter *i2c_adapter;
+ struct i2c_mux_core *muxc;
/* auto detect chip id to do different config */
u8 chip_id;
/* main mclk is calculated for M88RS6000 dynamically */
- u32 mclk_khz;
+ s32 mclk_khz;
u64 post_bit_error;
u64 post_bit_count;
};
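The m88ds3103, rtl2830, rtl2832 and si2168 changes in this series all perform the same conversion, from the removed i2c_add_mux_adapter()/i2c_del_mux_adapter() pair to the i2c_mux_core API. Condensed to its skeleton (my_dev and my_select are placeholders; the calls and argument order mirror the hunks), the new pattern looks like this:

/*
 * Skeleton of the i2c_mux_core registration pattern used by the demod
 * conversions above. Names prefixed with my_ are placeholders.
 */
#include <linux/i2c.h>
#include <linux/i2c-mux.h>

struct my_dev {
        struct i2c_client *client;
        struct i2c_mux_core *muxc;
};

static int my_select(struct i2c_mux_core *muxc, u32 chan)
{
        struct my_dev *dev = i2c_mux_priv(muxc);   /* was the mux_priv argument */

        /* open the tuner I2C gate on dev->client here */
        return 0;
}

static int my_probe_mux(struct my_dev *dev)
{
        int ret;

        /* one child adapter, no extra priv, default (parent-locked) locking */
        dev->muxc = i2c_mux_alloc(dev->client->adapter, &dev->client->dev,
                                  1, 0, 0, my_select, NULL);
        if (!dev->muxc)
                return -ENOMEM;

        dev->muxc->priv = dev;
        ret = i2c_mux_add_adapter(dev->muxc, 0, 0, 0);
        if (ret)
                return ret;

        /* the tuner now binds against dev->muxc->adapter[0] */
        return 0;
}

static void my_remove_mux(struct my_dev *dev)
{
        i2c_mux_del_adapters(dev->muxc);   /* replaces i2c_del_mux_adapter() */
}

The rtl2832 and si2168 conversions additionally pass I2C_MUX_LOCKED to i2c_mux_alloc(), which makes the mux "mux-locked" so that the select/deselect callbacks may issue ordinary locked I2C transfers; that appears to be what allows their *_unlocked helper routines and hand-rolled locking to be deleted.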
diff --git a/drivers/media/dvb-frontends/rtl2830.c b/drivers/media/dvb-frontends/rtl2830.c
index 3f96429af0e5..d25d1e0cd4ca 100644
--- a/drivers/media/dvb-frontends/rtl2830.c
+++ b/drivers/media/dvb-frontends/rtl2830.c
@@ -677,9 +677,9 @@ err:
* adapter lock is already taken by tuner driver.
* Gate is closed automatically after single I2C transfer.
*/
-static int rtl2830_select(struct i2c_adapter *adap, void *mux_priv, u32 chan_id)
+static int rtl2830_select(struct i2c_mux_core *muxc, u32 chan_id)
{
- struct i2c_client *client = mux_priv;
+ struct i2c_client *client = i2c_mux_priv(muxc);
struct rtl2830_dev *dev = i2c_get_clientdata(client);
int ret;
@@ -712,7 +712,7 @@ static struct i2c_adapter *rtl2830_get_i2c_adapter(struct i2c_client *client)
dev_dbg(&client->dev, "\n");
- return dev->adapter;
+ return dev->muxc->adapter[0];
}
/*
@@ -865,12 +865,16 @@ static int rtl2830_probe(struct i2c_client *client,
goto err_regmap_exit;
/* create muxed i2c adapter for tuner */
- dev->adapter = i2c_add_mux_adapter(client->adapter, &client->dev,
- client, 0, 0, 0, rtl2830_select, NULL);
- if (dev->adapter == NULL) {
- ret = -ENODEV;
+ dev->muxc = i2c_mux_alloc(client->adapter, &client->dev, 1, 0, 0,
+ rtl2830_select, NULL);
+ if (!dev->muxc) {
+ ret = -ENOMEM;
goto err_regmap_exit;
}
+ dev->muxc->priv = client;
+ ret = i2c_mux_add_adapter(dev->muxc, 0, 0, 0);
+ if (ret)
+ goto err_regmap_exit;
/* create dvb frontend */
memcpy(&dev->fe.ops, &rtl2830_ops, sizeof(dev->fe.ops));
@@ -903,7 +907,7 @@ static int rtl2830_remove(struct i2c_client *client)
/* stop statistics polling */
cancel_delayed_work_sync(&dev->stat_work);
- i2c_del_mux_adapter(dev->adapter);
+ i2c_mux_del_adapters(dev->muxc);
regmap_exit(dev->regmap);
kfree(dev);
diff --git a/drivers/media/dvb-frontends/rtl2830_priv.h b/drivers/media/dvb-frontends/rtl2830_priv.h
index cf793f39a09b..da4909543da2 100644
--- a/drivers/media/dvb-frontends/rtl2830_priv.h
+++ b/drivers/media/dvb-frontends/rtl2830_priv.h
@@ -29,7 +29,7 @@ struct rtl2830_dev {
struct rtl2830_platform_data *pdata;
struct i2c_client *client;
struct regmap *regmap;
- struct i2c_adapter *adapter;
+ struct i2c_mux_core *muxc;
struct dvb_frontend fe;
bool sleeping;
unsigned long filters;
diff --git a/drivers/media/dvb-frontends/rtl2832.c b/drivers/media/dvb-frontends/rtl2832.c
index 7c96f7679669..bfb6beedd40b 100644
--- a/drivers/media/dvb-frontends/rtl2832.c
+++ b/drivers/media/dvb-frontends/rtl2832.c
@@ -153,43 +153,6 @@ static const struct rtl2832_reg_entry registers[] = {
[DVBT_REG_4MSEL] = {0x013, 0, 0},
};
-/* Our regmap is bypassing I2C adapter lock, thus we do it! */
-static int rtl2832_bulk_write(struct i2c_client *client, unsigned int reg,
- const void *val, size_t val_count)
-{
- struct rtl2832_dev *dev = i2c_get_clientdata(client);
- int ret;
-
- i2c_lock_adapter(client->adapter);
- ret = regmap_bulk_write(dev->regmap, reg, val, val_count);
- i2c_unlock_adapter(client->adapter);
- return ret;
-}
-
-static int rtl2832_update_bits(struct i2c_client *client, unsigned int reg,
- unsigned int mask, unsigned int val)
-{
- struct rtl2832_dev *dev = i2c_get_clientdata(client);
- int ret;
-
- i2c_lock_adapter(client->adapter);
- ret = regmap_update_bits(dev->regmap, reg, mask, val);
- i2c_unlock_adapter(client->adapter);
- return ret;
-}
-
-static int rtl2832_bulk_read(struct i2c_client *client, unsigned int reg,
- void *val, size_t val_count)
-{
- struct rtl2832_dev *dev = i2c_get_clientdata(client);
- int ret;
-
- i2c_lock_adapter(client->adapter);
- ret = regmap_bulk_read(dev->regmap, reg, val, val_count);
- i2c_unlock_adapter(client->adapter);
- return ret;
-}
-
static int rtl2832_rd_demod_reg(struct rtl2832_dev *dev, int reg, u32 *val)
{
struct i2c_client *client = dev->client;
@@ -204,7 +167,7 @@ static int rtl2832_rd_demod_reg(struct rtl2832_dev *dev, int reg, u32 *val)
len = (msb >> 3) + 1;
mask = REG_MASK(msb - lsb);
- ret = rtl2832_bulk_read(client, reg_start_addr, reading, len);
+ ret = regmap_bulk_read(dev->regmap, reg_start_addr, reading, len);
if (ret)
goto err;
@@ -234,7 +197,7 @@ static int rtl2832_wr_demod_reg(struct rtl2832_dev *dev, int reg, u32 val)
len = (msb >> 3) + 1;
mask = REG_MASK(msb - lsb);
- ret = rtl2832_bulk_read(client, reg_start_addr, reading, len);
+ ret = regmap_bulk_read(dev->regmap, reg_start_addr, reading, len);
if (ret)
goto err;
@@ -248,7 +211,7 @@ static int rtl2832_wr_demod_reg(struct rtl2832_dev *dev, int reg, u32 val)
for (i = 0; i < len; i++)
writing[i] = (writing_tmp >> ((len - 1 - i) * 8)) & 0xff;
- ret = rtl2832_bulk_write(client, reg_start_addr, writing, len);
+ ret = regmap_bulk_write(dev->regmap, reg_start_addr, writing, len);
if (ret)
goto err;
@@ -525,7 +488,8 @@ static int rtl2832_set_frontend(struct dvb_frontend *fe)
}
for (j = 0; j < sizeof(bw_params[0]); j++) {
- ret = rtl2832_bulk_write(client, 0x11c + j, &bw_params[i][j], 1);
+ ret = regmap_bulk_write(dev->regmap,
+ 0x11c + j, &bw_params[i][j], 1);
if (ret)
goto err;
}
@@ -581,11 +545,11 @@ static int rtl2832_get_frontend(struct dvb_frontend *fe,
if (dev->sleeping)
return 0;
- ret = rtl2832_bulk_read(client, 0x33c, buf, 2);
+ ret = regmap_bulk_read(dev->regmap, 0x33c, buf, 2);
if (ret)
goto err;
- ret = rtl2832_bulk_read(client, 0x351, &buf[2], 1);
+ ret = regmap_bulk_read(dev->regmap, 0x351, &buf[2], 1);
if (ret)
goto err;
@@ -716,7 +680,7 @@ static int rtl2832_read_status(struct dvb_frontend *fe, enum fe_status *status)
/* signal strength */
if (dev->fe_status & FE_HAS_SIGNAL) {
/* read digital AGC */
- ret = rtl2832_bulk_read(client, 0x305, &u8tmp, 1);
+ ret = regmap_bulk_read(dev->regmap, 0x305, &u8tmp, 1);
if (ret)
goto err;
@@ -742,7 +706,7 @@ static int rtl2832_read_status(struct dvb_frontend *fe, enum fe_status *status)
{87659938, 87659938, 87885178, 88241743},
};
- ret = rtl2832_bulk_read(client, 0x33c, &u8tmp, 1);
+ ret = regmap_bulk_read(dev->regmap, 0x33c, &u8tmp, 1);
if (ret)
goto err;
@@ -754,7 +718,7 @@ static int rtl2832_read_status(struct dvb_frontend *fe, enum fe_status *status)
if (hierarchy > HIERARCHY_NUM - 1)
goto err;
- ret = rtl2832_bulk_read(client, 0x40c, buf, 2);
+ ret = regmap_bulk_read(dev->regmap, 0x40c, buf, 2);
if (ret)
goto err;
@@ -775,7 +739,7 @@ static int rtl2832_read_status(struct dvb_frontend *fe, enum fe_status *status)
/* BER */
if (dev->fe_status & FE_HAS_LOCK) {
- ret = rtl2832_bulk_read(client, 0x34e, buf, 2);
+ ret = regmap_bulk_read(dev->regmap, 0x34e, buf, 2);
if (ret)
goto err;
@@ -825,8 +789,6 @@ static int rtl2832_read_ber(struct dvb_frontend *fe, u32 *ber)
/*
* I2C gate/mux/repeater logic
- * We must use unlocked __i2c_transfer() here (through regmap) because of I2C
- * adapter lock is already taken by tuner driver.
* There is delay mechanism to avoid unneeded I2C gate open / close. Gate close
* is delayed here a little bit in order to see if there is sequence of I2C
* messages sent to same I2C bus.
@@ -838,7 +800,7 @@ static void rtl2832_i2c_gate_work(struct work_struct *work)
int ret;
/* close gate */
- ret = rtl2832_update_bits(dev->client, 0x101, 0x08, 0x00);
+ ret = regmap_update_bits(dev->regmap, 0x101, 0x08, 0x00);
if (ret)
goto err;
@@ -847,19 +809,16 @@ err:
dev_dbg(&client->dev, "failed=%d\n", ret);
}
-static int rtl2832_select(struct i2c_adapter *adap, void *mux_priv, u32 chan_id)
+static int rtl2832_select(struct i2c_mux_core *muxc, u32 chan_id)
{
- struct rtl2832_dev *dev = mux_priv;
+ struct rtl2832_dev *dev = i2c_mux_priv(muxc);
struct i2c_client *client = dev->client;
int ret;
/* terminate possible gate closing */
cancel_delayed_work(&dev->i2c_gate_work);
- /*
- * I2C adapter lock is already taken and due to that we will use
- * regmap_update_bits() which does not lock again I2C adapter.
- */
+ /* open gate */
ret = regmap_update_bits(dev->regmap, 0x101, 0x08, 0x08);
if (ret)
goto err;
@@ -870,10 +829,9 @@ err:
return ret;
}
-static int rtl2832_deselect(struct i2c_adapter *adap, void *mux_priv,
- u32 chan_id)
+static int rtl2832_deselect(struct i2c_mux_core *muxc, u32 chan_id)
{
- struct rtl2832_dev *dev = mux_priv;
+ struct rtl2832_dev *dev = i2c_mux_priv(muxc);
schedule_delayed_work(&dev->i2c_gate_work, usecs_to_jiffies(100));
return 0;
@@ -932,120 +890,6 @@ static bool rtl2832_volatile_reg(struct device *dev, unsigned int reg)
return false;
}
-/*
- * We implement own I2C access routines for regmap in order to get manual access
- * to I2C adapter lock, which is needed for I2C mux adapter.
- */
-static int rtl2832_regmap_read(void *context, const void *reg_buf,
- size_t reg_size, void *val_buf, size_t val_size)
-{
- struct i2c_client *client = context;
- int ret;
- struct i2c_msg msg[2] = {
- {
- .addr = client->addr,
- .flags = 0,
- .len = reg_size,
- .buf = (u8 *)reg_buf,
- }, {
- .addr = client->addr,
- .flags = I2C_M_RD,
- .len = val_size,
- .buf = val_buf,
- }
- };
-
- ret = __i2c_transfer(client->adapter, msg, 2);
- if (ret != 2) {
- dev_warn(&client->dev, "i2c reg read failed %d reg %02x\n",
- ret, *(u8 *)reg_buf);
- if (ret >= 0)
- ret = -EREMOTEIO;
- return ret;
- }
- return 0;
-}
-
-static int rtl2832_regmap_write(void *context, const void *data, size_t count)
-{
- struct i2c_client *client = context;
- int ret;
- struct i2c_msg msg[1] = {
- {
- .addr = client->addr,
- .flags = 0,
- .len = count,
- .buf = (u8 *)data,
- }
- };
-
- ret = __i2c_transfer(client->adapter, msg, 1);
- if (ret != 1) {
- dev_warn(&client->dev, "i2c reg write failed %d reg %02x\n",
- ret, *(u8 *)data);
- if (ret >= 0)
- ret = -EREMOTEIO;
- return ret;
- }
- return 0;
-}
-
-static int rtl2832_regmap_gather_write(void *context, const void *reg,
- size_t reg_len, const void *val,
- size_t val_len)
-{
- struct i2c_client *client = context;
- int ret;
- u8 buf[256];
- struct i2c_msg msg[1] = {
- {
- .addr = client->addr,
- .flags = 0,
- .len = 1 + val_len,
- .buf = buf,
- }
- };
-
- buf[0] = *(u8 const *)reg;
- memcpy(&buf[1], val, val_len);
-
- ret = __i2c_transfer(client->adapter, msg, 1);
- if (ret != 1) {
- dev_warn(&client->dev, "i2c reg write failed %d reg %02x\n",
- ret, *(u8 const *)reg);
- if (ret >= 0)
- ret = -EREMOTEIO;
- return ret;
- }
- return 0;
-}
-
-/*
- * FIXME: Hack. Implement own regmap locking in order to silence lockdep
- * recursive lock warning. That happens when regmap I2C client calls I2C mux
- * adapter, which leads demod I2C repeater enable via demod regmap. Operation
- * takes two regmap locks recursively - but those are different regmap instances
- * in a two different I2C drivers, so it is not deadlock. Proper fix is to make
- * regmap aware of lockdep.
- */
-static void rtl2832_regmap_lock(void *__dev)
-{
- struct rtl2832_dev *dev = __dev;
- struct i2c_client *client = dev->client;
-
- dev_dbg(&client->dev, "\n");
- mutex_lock(&dev->regmap_mutex);
-}
-
-static void rtl2832_regmap_unlock(void *__dev)
-{
- struct rtl2832_dev *dev = __dev;
- struct i2c_client *client = dev->client;
-
- dev_dbg(&client->dev, "\n");
- mutex_unlock(&dev->regmap_mutex);
-}
-
static struct dvb_frontend *rtl2832_get_dvb_frontend(struct i2c_client *client)
{
struct rtl2832_dev *dev = i2c_get_clientdata(client);
@@ -1059,7 +903,7 @@ static struct i2c_adapter *rtl2832_get_i2c_adapter(struct i2c_client *client)
struct rtl2832_dev *dev = i2c_get_clientdata(client);
dev_dbg(&client->dev, "\n");
- return dev->i2c_adapter_tuner;
+ return dev->muxc->adapter[0];
}
static int rtl2832_slave_ts_ctrl(struct i2c_client *client, bool enable)
@@ -1073,29 +917,29 @@ static int rtl2832_slave_ts_ctrl(struct i2c_client *client, bool enable)
ret = rtl2832_wr_demod_reg(dev, DVBT_SOFT_RST, 0x0);
if (ret)
goto err;
- ret = rtl2832_bulk_write(client, 0x10c, "\x5f\xff", 2);
+ ret = regmap_bulk_write(dev->regmap, 0x10c, "\x5f\xff", 2);
if (ret)
goto err;
ret = rtl2832_wr_demod_reg(dev, DVBT_PIP_ON, 0x1);
if (ret)
goto err;
- ret = rtl2832_bulk_write(client, 0x0bc, "\x18", 1);
+ ret = regmap_bulk_write(dev->regmap, 0x0bc, "\x18", 1);
if (ret)
goto err;
- ret = rtl2832_bulk_write(client, 0x192, "\x7f\xf7\xff", 3);
+ ret = regmap_bulk_write(dev->regmap, 0x192, "\x7f\xf7\xff", 3);
if (ret)
goto err;
} else {
- ret = rtl2832_bulk_write(client, 0x192, "\x00\x0f\xff", 3);
+ ret = regmap_bulk_write(dev->regmap, 0x192, "\x00\x0f\xff", 3);
if (ret)
goto err;
- ret = rtl2832_bulk_write(client, 0x0bc, "\x08", 1);
+ ret = regmap_bulk_write(dev->regmap, 0x0bc, "\x08", 1);
if (ret)
goto err;
ret = rtl2832_wr_demod_reg(dev, DVBT_PIP_ON, 0x0);
if (ret)
goto err;
- ret = rtl2832_bulk_write(client, 0x10c, "\x00\x00", 2);
+ ret = regmap_bulk_write(dev->regmap, 0x10c, "\x00\x00", 2);
if (ret)
goto err;
ret = rtl2832_wr_demod_reg(dev, DVBT_SOFT_RST, 0x1);
@@ -1124,7 +968,7 @@ static int rtl2832_pid_filter_ctrl(struct dvb_frontend *fe, int onoff)
else
u8tmp = 0x00;
- ret = rtl2832_update_bits(client, 0x061, 0xc0, u8tmp);
+ ret = regmap_update_bits(dev->regmap, 0x061, 0xc0, u8tmp);
if (ret)
goto err;
@@ -1159,14 +1003,14 @@ static int rtl2832_pid_filter(struct dvb_frontend *fe, u8 index, u16 pid,
buf[1] = (dev->filters >> 8) & 0xff;
buf[2] = (dev->filters >> 16) & 0xff;
buf[3] = (dev->filters >> 24) & 0xff;
- ret = rtl2832_bulk_write(client, 0x062, buf, 4);
+ ret = regmap_bulk_write(dev->regmap, 0x062, buf, 4);
if (ret)
goto err;
/* add PID */
buf[0] = (pid >> 8) & 0xff;
buf[1] = (pid >> 0) & 0xff;
- ret = rtl2832_bulk_write(client, 0x066 + 2 * index, buf, 2);
+ ret = regmap_bulk_write(dev->regmap, 0x066 + 2 * index, buf, 2);
if (ret)
goto err;
@@ -1184,12 +1028,6 @@ static int rtl2832_probe(struct i2c_client *client,
struct rtl2832_dev *dev;
int ret;
u8 tmp;
- static const struct regmap_bus regmap_bus = {
- .read = rtl2832_regmap_read,
- .write = rtl2832_regmap_write,
- .gather_write = rtl2832_regmap_gather_write,
- .val_format_endian_default = REGMAP_ENDIAN_NATIVE,
- };
static const struct regmap_range_cfg regmap_range_cfg[] = {
{
.selector_reg = 0x00,
@@ -1218,36 +1056,35 @@ static int rtl2832_probe(struct i2c_client *client,
dev->sleeping = true;
INIT_DELAYED_WORK(&dev->i2c_gate_work, rtl2832_i2c_gate_work);
/* create regmap */
- mutex_init(&dev->regmap_mutex);
dev->regmap_config.reg_bits = 8,
dev->regmap_config.val_bits = 8,
- dev->regmap_config.lock = rtl2832_regmap_lock,
- dev->regmap_config.unlock = rtl2832_regmap_unlock,
- dev->regmap_config.lock_arg = dev,
dev->regmap_config.volatile_reg = rtl2832_volatile_reg,
dev->regmap_config.max_register = 5 * 0x100,
dev->regmap_config.ranges = regmap_range_cfg,
dev->regmap_config.num_ranges = ARRAY_SIZE(regmap_range_cfg),
dev->regmap_config.cache_type = REGCACHE_NONE,
- dev->regmap = regmap_init(&client->dev, &regmap_bus, client,
- &dev->regmap_config);
+ dev->regmap = regmap_init_i2c(client, &dev->regmap_config);
if (IS_ERR(dev->regmap)) {
ret = PTR_ERR(dev->regmap);
goto err_kfree;
}
/* check if the demod is there */
- ret = rtl2832_bulk_read(client, 0x000, &tmp, 1);
+ ret = regmap_bulk_read(dev->regmap, 0x000, &tmp, 1);
if (ret)
goto err_regmap_exit;
/* create muxed i2c adapter for demod tuner bus */
- dev->i2c_adapter_tuner = i2c_add_mux_adapter(i2c, &i2c->dev, dev,
- 0, 0, 0, rtl2832_select, rtl2832_deselect);
- if (dev->i2c_adapter_tuner == NULL) {
- ret = -ENODEV;
+ dev->muxc = i2c_mux_alloc(i2c, &i2c->dev, 1, 0, I2C_MUX_LOCKED,
+ rtl2832_select, rtl2832_deselect);
+ if (!dev->muxc) {
+ ret = -ENOMEM;
goto err_regmap_exit;
}
+ dev->muxc->priv = dev;
+ ret = i2c_mux_add_adapter(dev->muxc, 0, 0, 0);
+ if (ret)
+ goto err_regmap_exit;
/* create dvb_frontend */
memcpy(&dev->fe.ops, &rtl2832_ops, sizeof(struct dvb_frontend_ops));
@@ -1259,9 +1096,7 @@ static int rtl2832_probe(struct i2c_client *client,
pdata->slave_ts_ctrl = rtl2832_slave_ts_ctrl;
pdata->pid_filter = rtl2832_pid_filter;
pdata->pid_filter_ctrl = rtl2832_pid_filter_ctrl;
- pdata->bulk_read = rtl2832_bulk_read;
- pdata->bulk_write = rtl2832_bulk_write;
- pdata->update_bits = rtl2832_update_bits;
+ pdata->regmap = dev->regmap;
dev_info(&client->dev, "Realtek RTL2832 successfully attached\n");
return 0;
@@ -1282,7 +1117,7 @@ static int rtl2832_remove(struct i2c_client *client)
cancel_delayed_work_sync(&dev->i2c_gate_work);
- i2c_del_mux_adapter(dev->i2c_adapter_tuner);
+ i2c_mux_del_adapters(dev->muxc);
regmap_exit(dev->regmap);
diff --git a/drivers/media/dvb-frontends/rtl2832.h b/drivers/media/dvb-frontends/rtl2832.h
index 6390af64cf45..03c0de039fa9 100644
--- a/drivers/media/dvb-frontends/rtl2832.h
+++ b/drivers/media/dvb-frontends/rtl2832.h
@@ -57,9 +57,7 @@ struct rtl2832_platform_data {
int (*pid_filter)(struct dvb_frontend *, u8, u16, int);
int (*pid_filter_ctrl)(struct dvb_frontend *, int);
/* private: Register access for SDR module use only */
- int (*bulk_read)(struct i2c_client *, unsigned int, void *, size_t);
- int (*bulk_write)(struct i2c_client *, unsigned int, const void *, size_t);
- int (*update_bits)(struct i2c_client *, unsigned int, unsigned int, unsigned int);
+ struct regmap *regmap;
};
#endif /* RTL2832_H */
diff --git a/drivers/media/dvb-frontends/rtl2832_priv.h b/drivers/media/dvb-frontends/rtl2832_priv.h
index 6b875f462f8b..c1a8a69e9015 100644
--- a/drivers/media/dvb-frontends/rtl2832_priv.h
+++ b/drivers/media/dvb-frontends/rtl2832_priv.h
@@ -33,10 +33,9 @@
struct rtl2832_dev {
struct rtl2832_platform_data *pdata;
struct i2c_client *client;
- struct mutex regmap_mutex;
struct regmap_config regmap_config;
struct regmap *regmap;
- struct i2c_adapter *i2c_adapter_tuner;
+ struct i2c_mux_core *muxc;
struct dvb_frontend fe;
enum fe_status fe_status;
u64 post_bit_error_prev; /* for old DVBv3 read_ber() calculation */
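The rtl2832 hunks drop the driver's private regmap bus, whose only purpose (per the deleted comments) was to bypass and manually manage the I2C adapter lock for the old mux adapter. With the tuner bus now behind a mux-locked i2c_mux_core, a stock I2C regmap is apparently sufficient, so the lock/unlock callbacks and regmap_mutex disappear. A stripped-down sketch of the resulting setup, with the driver's volatile_reg hook and register ranges omitted for brevity:

/*
 * Minimal regmap_init_i2c() setup, assuming 8-bit registers as in the
 * hunks above. demo_* names are placeholders, not driver code.
 */
#include <linux/i2c.h>
#include <linux/regmap.h>

static const struct regmap_config demo_regmap_config = {
        .reg_bits = 8,
        .val_bits = 8,
};

static int demo_init_regmap(struct i2c_client *client, struct regmap **out)
{
        struct regmap *map = regmap_init_i2c(client, &demo_regmap_config);

        if (IS_ERR(map))
                return PTR_ERR(map);

        *out = map;
        /*
         * Callers then use regmap_bulk_read/write() and regmap_update_bits()
         * directly, as in the hunks above; regmap_exit() releases the map.
         */
        return 0;
}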
diff --git a/drivers/media/dvb-frontends/rtl2832_sdr.c b/drivers/media/dvb-frontends/rtl2832_sdr.c
index b860f02a4e55..47a480a7d46c 100644
--- a/drivers/media/dvb-frontends/rtl2832_sdr.c
+++ b/drivers/media/dvb-frontends/rtl2832_sdr.c
@@ -35,6 +35,7 @@
#include <linux/platform_device.h>
#include <linux/jiffies.h>
#include <linux/math64.h>
+#include <linux/regmap.h>
static bool rtl2832_sdr_emulated_fmt;
module_param_named(emulated_formats, rtl2832_sdr_emulated_fmt, bool, 0644);
@@ -119,6 +120,7 @@ struct rtl2832_sdr_dev {
unsigned long flags;
struct platform_device *pdev;
+ struct regmap *regmap;
struct video_device vdev;
struct v4l2_device v4l2_dev;
@@ -163,47 +165,6 @@ struct rtl2832_sdr_dev {
unsigned long jiffies_next;
};
-/* write multiple registers */
-static int rtl2832_sdr_wr_regs(struct rtl2832_sdr_dev *dev, u16 reg,
- const u8 *val, int len)
-{
- struct platform_device *pdev = dev->pdev;
- struct rtl2832_sdr_platform_data *pdata = pdev->dev.platform_data;
- struct i2c_client *client = pdata->i2c_client;
-
- return pdata->bulk_write(client, reg, val, len);
-}
-
-#if 0
-/* read multiple registers */
-static int rtl2832_sdr_rd_regs(struct rtl2832_sdr_dev *dev, u16 reg, u8 *val,
- int len)
-{
- struct platform_device *pdev = dev->pdev;
- struct rtl2832_sdr_platform_data *pdata = pdev->dev.platform_data;
- struct i2c_client *client = pdata->i2c_client;
-
- return pdata->bulk_read(client, reg, val, len);
-}
-#endif
-
-/* write single register */
-static int rtl2832_sdr_wr_reg(struct rtl2832_sdr_dev *dev, u16 reg, u8 val)
-{
- return rtl2832_sdr_wr_regs(dev, reg, &val, 1);
-}
-
-/* write single register with mask */
-static int rtl2832_sdr_wr_reg_mask(struct rtl2832_sdr_dev *dev, u16 reg,
- u8 val, u8 mask)
-{
- struct platform_device *pdev = dev->pdev;
- struct rtl2832_sdr_platform_data *pdata = pdev->dev.platform_data;
- struct i2c_client *client = pdata->i2c_client;
-
- return pdata->update_bits(client, reg, mask, val);
-}
-
/* Private functions */
static struct rtl2832_sdr_frame_buf *rtl2832_sdr_get_next_fill_buf(
struct rtl2832_sdr_dev *dev)
@@ -558,11 +519,11 @@ static int rtl2832_sdr_set_adc(struct rtl2832_sdr_dev *dev)
f_sr = dev->f_adc;
- ret = rtl2832_sdr_wr_regs(dev, 0x13e, "\x00\x00", 2);
+ ret = regmap_bulk_write(dev->regmap, 0x13e, "\x00\x00", 2);
if (ret)
goto err;
- ret = rtl2832_sdr_wr_regs(dev, 0x115, "\x00\x00\x00\x00", 4);
+ ret = regmap_bulk_write(dev->regmap, 0x115, "\x00\x00\x00\x00", 4);
if (ret)
goto err;
@@ -588,7 +549,7 @@ static int rtl2832_sdr_set_adc(struct rtl2832_sdr_dev *dev)
buf[1] = (u32tmp >> 8) & 0xff;
buf[2] = (u32tmp >> 0) & 0xff;
- ret = rtl2832_sdr_wr_regs(dev, 0x119, buf, 3);
+ ret = regmap_bulk_write(dev->regmap, 0x119, buf, 3);
if (ret)
goto err;
@@ -602,15 +563,15 @@ static int rtl2832_sdr_set_adc(struct rtl2832_sdr_dev *dev)
u8tmp2 = 0xcd; /* enable ADC I, ADC Q */
}
- ret = rtl2832_sdr_wr_reg(dev, 0x1b1, u8tmp1);
+ ret = regmap_write(dev->regmap, 0x1b1, u8tmp1);
if (ret)
goto err;
- ret = rtl2832_sdr_wr_reg(dev, 0x008, u8tmp2);
+ ret = regmap_write(dev->regmap, 0x008, u8tmp2);
if (ret)
goto err;
- ret = rtl2832_sdr_wr_reg(dev, 0x006, 0x80);
+ ret = regmap_write(dev->regmap, 0x006, 0x80);
if (ret)
goto err;
@@ -621,168 +582,169 @@ static int rtl2832_sdr_set_adc(struct rtl2832_sdr_dev *dev)
buf[1] = (u32tmp >> 16) & 0xff;
buf[2] = (u32tmp >> 8) & 0xff;
buf[3] = (u32tmp >> 0) & 0xff;
- ret = rtl2832_sdr_wr_regs(dev, 0x19f, buf, 4);
+ ret = regmap_bulk_write(dev->regmap, 0x19f, buf, 4);
if (ret)
goto err;
/* low-pass filter */
- ret = rtl2832_sdr_wr_regs(dev, 0x11c,
- "\xca\xdc\xd7\xd8\xe0\xf2\x0e\x35\x06\x50\x9c\x0d\x71\x11\x14\x71\x74\x19\x41\xa5",
- 20);
+ ret = regmap_bulk_write(dev->regmap, 0x11c,
+ "\xca\xdc\xd7\xd8\xe0\xf2\x0e\x35\x06\x50\x9c\x0d\x71\x11\x14\x71\x74\x19\x41\xa5",
+ 20);
if (ret)
goto err;
- ret = rtl2832_sdr_wr_regs(dev, 0x017, "\x11\x10", 2);
+ ret = regmap_bulk_write(dev->regmap, 0x017, "\x11\x10", 2);
if (ret)
goto err;
/* mode */
- ret = rtl2832_sdr_wr_regs(dev, 0x019, "\x05", 1);
+ ret = regmap_write(dev->regmap, 0x019, 0x05);
if (ret)
goto err;
- ret = rtl2832_sdr_wr_regs(dev, 0x01a, "\x1b\x16\x0d\x06\x01\xff", 6);
+ ret = regmap_bulk_write(dev->regmap, 0x01a,
+ "\x1b\x16\x0d\x06\x01\xff", 6);
if (ret)
goto err;
/* FSM */
- ret = rtl2832_sdr_wr_regs(dev, 0x192, "\x00\xf0\x0f", 3);
+ ret = regmap_bulk_write(dev->regmap, 0x192, "\x00\xf0\x0f", 3);
if (ret)
goto err;
/* PID filter */
- ret = rtl2832_sdr_wr_regs(dev, 0x061, "\x60", 1);
+ ret = regmap_write(dev->regmap, 0x061, 0x60);
if (ret)
goto err;
/* used RF tuner based settings */
switch (pdata->tuner) {
case RTL2832_SDR_TUNER_E4000:
- ret = rtl2832_sdr_wr_regs(dev, 0x112, "\x5a", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x102, "\x40", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x103, "\x5a", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1c7, "\x30", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x104, "\xd0", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x105, "\xbe", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1c8, "\x18", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x106, "\x35", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1c9, "\x21", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1ca, "\x21", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1cb, "\x00", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x107, "\x40", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1cd, "\x10", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1ce, "\x10", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x108, "\x80", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x109, "\x7f", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x10a, "\x80", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x10b, "\x7f", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x00e, "\xfc", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x00e, "\xfc", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x011, "\xd4", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1e5, "\xf0", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1d9, "\x00", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1db, "\x00", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1dd, "\x14", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1de, "\xec", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1d8, "\x0c", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1e6, "\x02", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1d7, "\x09", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x00d, "\x83", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x010, "\x49", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x00d, "\x87", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x00d, "\x85", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x013, "\x02", 1);
+ ret = regmap_write(dev->regmap, 0x112, 0x5a);
+ ret = regmap_write(dev->regmap, 0x102, 0x40);
+ ret = regmap_write(dev->regmap, 0x103, 0x5a);
+ ret = regmap_write(dev->regmap, 0x1c7, 0x30);
+ ret = regmap_write(dev->regmap, 0x104, 0xd0);
+ ret = regmap_write(dev->regmap, 0x105, 0xbe);
+ ret = regmap_write(dev->regmap, 0x1c8, 0x18);
+ ret = regmap_write(dev->regmap, 0x106, 0x35);
+ ret = regmap_write(dev->regmap, 0x1c9, 0x21);
+ ret = regmap_write(dev->regmap, 0x1ca, 0x21);
+ ret = regmap_write(dev->regmap, 0x1cb, 0x00);
+ ret = regmap_write(dev->regmap, 0x107, 0x40);
+ ret = regmap_write(dev->regmap, 0x1cd, 0x10);
+ ret = regmap_write(dev->regmap, 0x1ce, 0x10);
+ ret = regmap_write(dev->regmap, 0x108, 0x80);
+ ret = regmap_write(dev->regmap, 0x109, 0x7f);
+ ret = regmap_write(dev->regmap, 0x10a, 0x80);
+ ret = regmap_write(dev->regmap, 0x10b, 0x7f);
+ ret = regmap_write(dev->regmap, 0x00e, 0xfc);
+ ret = regmap_write(dev->regmap, 0x00e, 0xfc);
+ ret = regmap_write(dev->regmap, 0x011, 0xd4);
+ ret = regmap_write(dev->regmap, 0x1e5, 0xf0);
+ ret = regmap_write(dev->regmap, 0x1d9, 0x00);
+ ret = regmap_write(dev->regmap, 0x1db, 0x00);
+ ret = regmap_write(dev->regmap, 0x1dd, 0x14);
+ ret = regmap_write(dev->regmap, 0x1de, 0xec);
+ ret = regmap_write(dev->regmap, 0x1d8, 0x0c);
+ ret = regmap_write(dev->regmap, 0x1e6, 0x02);
+ ret = regmap_write(dev->regmap, 0x1d7, 0x09);
+ ret = regmap_write(dev->regmap, 0x00d, 0x83);
+ ret = regmap_write(dev->regmap, 0x010, 0x49);
+ ret = regmap_write(dev->regmap, 0x00d, 0x87);
+ ret = regmap_write(dev->regmap, 0x00d, 0x85);
+ ret = regmap_write(dev->regmap, 0x013, 0x02);
break;
case RTL2832_SDR_TUNER_FC0012:
case RTL2832_SDR_TUNER_FC0013:
- ret = rtl2832_sdr_wr_regs(dev, 0x112, "\x5a", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x102, "\x40", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x103, "\x5a", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1c7, "\x2c", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x104, "\xcc", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x105, "\xbe", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1c8, "\x16", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x106, "\x35", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1c9, "\x21", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1ca, "\x21", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1cb, "\x00", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x107, "\x40", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1cd, "\x10", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1ce, "\x10", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x108, "\x80", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x109, "\x7f", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x10a, "\x80", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x10b, "\x7f", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x00e, "\xfc", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x00e, "\xfc", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x011, "\xe9\xbf", 2);
- ret = rtl2832_sdr_wr_regs(dev, 0x1e5, "\xf0", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1d9, "\x00", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1db, "\x00", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1dd, "\x11", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1de, "\xef", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1d8, "\x0c", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1e6, "\x02", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1d7, "\x09", 1);
+ ret = regmap_write(dev->regmap, 0x112, 0x5a);
+ ret = regmap_write(dev->regmap, 0x102, 0x40);
+ ret = regmap_write(dev->regmap, 0x103, 0x5a);
+ ret = regmap_write(dev->regmap, 0x1c7, 0x2c);
+ ret = regmap_write(dev->regmap, 0x104, 0xcc);
+ ret = regmap_write(dev->regmap, 0x105, 0xbe);
+ ret = regmap_write(dev->regmap, 0x1c8, 0x16);
+ ret = regmap_write(dev->regmap, 0x106, 0x35);
+ ret = regmap_write(dev->regmap, 0x1c9, 0x21);
+ ret = regmap_write(dev->regmap, 0x1ca, 0x21);
+ ret = regmap_write(dev->regmap, 0x1cb, 0x00);
+ ret = regmap_write(dev->regmap, 0x107, 0x40);
+ ret = regmap_write(dev->regmap, 0x1cd, 0x10);
+ ret = regmap_write(dev->regmap, 0x1ce, 0x10);
+ ret = regmap_write(dev->regmap, 0x108, 0x80);
+ ret = regmap_write(dev->regmap, 0x109, 0x7f);
+ ret = regmap_write(dev->regmap, 0x10a, 0x80);
+ ret = regmap_write(dev->regmap, 0x10b, 0x7f);
+ ret = regmap_write(dev->regmap, 0x00e, 0xfc);
+ ret = regmap_write(dev->regmap, 0x00e, 0xfc);
+ ret = regmap_bulk_write(dev->regmap, 0x011, "\xe9\xbf", 2);
+ ret = regmap_write(dev->regmap, 0x1e5, 0xf0);
+ ret = regmap_write(dev->regmap, 0x1d9, 0x00);
+ ret = regmap_write(dev->regmap, 0x1db, 0x00);
+ ret = regmap_write(dev->regmap, 0x1dd, 0x11);
+ ret = regmap_write(dev->regmap, 0x1de, 0xef);
+ ret = regmap_write(dev->regmap, 0x1d8, 0x0c);
+ ret = regmap_write(dev->regmap, 0x1e6, 0x02);
+ ret = regmap_write(dev->regmap, 0x1d7, 0x09);
break;
case RTL2832_SDR_TUNER_R820T:
case RTL2832_SDR_TUNER_R828D:
- ret = rtl2832_sdr_wr_regs(dev, 0x112, "\x5a", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x102, "\x40", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x115, "\x01", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x103, "\x80", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1c7, "\x24", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x104, "\xcc", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x105, "\xbe", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1c8, "\x14", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x106, "\x35", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1c9, "\x21", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1ca, "\x21", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1cb, "\x00", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x107, "\x40", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1cd, "\x10", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1ce, "\x10", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x108, "\x80", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x109, "\x7f", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x10a, "\x80", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x10b, "\x7f", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x00e, "\xfc", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x00e, "\xfc", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x011, "\xf4", 1);
+ ret = regmap_write(dev->regmap, 0x112, 0x5a);
+ ret = regmap_write(dev->regmap, 0x102, 0x40);
+ ret = regmap_write(dev->regmap, 0x115, 0x01);
+ ret = regmap_write(dev->regmap, 0x103, 0x80);
+ ret = regmap_write(dev->regmap, 0x1c7, 0x24);
+ ret = regmap_write(dev->regmap, 0x104, 0xcc);
+ ret = regmap_write(dev->regmap, 0x105, 0xbe);
+ ret = regmap_write(dev->regmap, 0x1c8, 0x14);
+ ret = regmap_write(dev->regmap, 0x106, 0x35);
+ ret = regmap_write(dev->regmap, 0x1c9, 0x21);
+ ret = regmap_write(dev->regmap, 0x1ca, 0x21);
+ ret = regmap_write(dev->regmap, 0x1cb, 0x00);
+ ret = regmap_write(dev->regmap, 0x107, 0x40);
+ ret = regmap_write(dev->regmap, 0x1cd, 0x10);
+ ret = regmap_write(dev->regmap, 0x1ce, 0x10);
+ ret = regmap_write(dev->regmap, 0x108, 0x80);
+ ret = regmap_write(dev->regmap, 0x109, 0x7f);
+ ret = regmap_write(dev->regmap, 0x10a, 0x80);
+ ret = regmap_write(dev->regmap, 0x10b, 0x7f);
+ ret = regmap_write(dev->regmap, 0x00e, 0xfc);
+ ret = regmap_write(dev->regmap, 0x00e, 0xfc);
+ ret = regmap_write(dev->regmap, 0x011, 0xf4);
break;
case RTL2832_SDR_TUNER_FC2580:
- ret = rtl2832_sdr_wr_regs(dev, 0x112, "\x39", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x102, "\x40", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x103, "\x5a", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1c7, "\x2c", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x104, "\xcc", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x105, "\xbe", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1c8, "\x16", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x106, "\x35", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1c9, "\x21", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1ca, "\x21", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1cb, "\x00", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x107, "\x40", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1cd, "\x10", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x1ce, "\x10", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x108, "\x80", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x109, "\x7f", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x10a, "\x9c", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x10b, "\x7f", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x00e, "\xfc", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x00e, "\xfc", 1);
- ret = rtl2832_sdr_wr_regs(dev, 0x011, "\xe9\xf4", 2);
+ ret = regmap_write(dev->regmap, 0x112, 0x39);
+ ret = regmap_write(dev->regmap, 0x102, 0x40);
+ ret = regmap_write(dev->regmap, 0x103, 0x5a);
+ ret = regmap_write(dev->regmap, 0x1c7, 0x2c);
+ ret = regmap_write(dev->regmap, 0x104, 0xcc);
+ ret = regmap_write(dev->regmap, 0x105, 0xbe);
+ ret = regmap_write(dev->regmap, 0x1c8, 0x16);
+ ret = regmap_write(dev->regmap, 0x106, 0x35);
+ ret = regmap_write(dev->regmap, 0x1c9, 0x21);
+ ret = regmap_write(dev->regmap, 0x1ca, 0x21);
+ ret = regmap_write(dev->regmap, 0x1cb, 0x00);
+ ret = regmap_write(dev->regmap, 0x107, 0x40);
+ ret = regmap_write(dev->regmap, 0x1cd, 0x10);
+ ret = regmap_write(dev->regmap, 0x1ce, 0x10);
+ ret = regmap_write(dev->regmap, 0x108, 0x80);
+ ret = regmap_write(dev->regmap, 0x109, 0x7f);
+ ret = regmap_write(dev->regmap, 0x10a, 0x9c);
+ ret = regmap_write(dev->regmap, 0x10b, 0x7f);
+ ret = regmap_write(dev->regmap, 0x00e, 0xfc);
+ ret = regmap_write(dev->regmap, 0x00e, 0xfc);
+ ret = regmap_bulk_write(dev->regmap, 0x011, "\xe9\xf4", 2);
break;
default:
dev_notice(&pdev->dev, "Unsupported tuner\n");
}
/* software reset */
- ret = rtl2832_sdr_wr_reg_mask(dev, 0x101, 0x04, 0x04);
+ ret = regmap_update_bits(dev->regmap, 0x101, 0x04, 0x04);
if (ret)
goto err;
- ret = rtl2832_sdr_wr_reg_mask(dev, 0x101, 0x00, 0x04);
+ ret = regmap_update_bits(dev->regmap, 0x101, 0x04, 0x00);
if (ret)
goto err;
err:
@@ -797,29 +759,29 @@ static void rtl2832_sdr_unset_adc(struct rtl2832_sdr_dev *dev)
dev_dbg(&pdev->dev, "\n");
/* PID filter */
- ret = rtl2832_sdr_wr_regs(dev, 0x061, "\xe0", 1);
+ ret = regmap_write(dev->regmap, 0x061, 0xe0);
if (ret)
goto err;
/* mode */
- ret = rtl2832_sdr_wr_regs(dev, 0x019, "\x20", 1);
+ ret = regmap_write(dev->regmap, 0x019, 0x20);
if (ret)
goto err;
- ret = rtl2832_sdr_wr_regs(dev, 0x017, "\x11\x10", 2);
+ ret = regmap_bulk_write(dev->regmap, 0x017, "\x11\x10", 2);
if (ret)
goto err;
/* FSM */
- ret = rtl2832_sdr_wr_regs(dev, 0x192, "\x00\x0f\xff", 3);
+ ret = regmap_bulk_write(dev->regmap, 0x192, "\x00\x0f\xff", 3);
if (ret)
goto err;
- ret = rtl2832_sdr_wr_regs(dev, 0x13e, "\x40\x00", 2);
+ ret = regmap_bulk_write(dev->regmap, 0x13e, "\x40\x00", 2);
if (ret)
goto err;
- ret = rtl2832_sdr_wr_regs(dev, 0x115, "\x06\x3f\xce\xcc", 4);
+ ret = regmap_bulk_write(dev->regmap, 0x115, "\x06\x3f\xce\xcc", 4);
if (ret)
goto err;
err:
@@ -1399,6 +1361,7 @@ static int rtl2832_sdr_probe(struct platform_device *pdev)
subdev = pdata->v4l2_subdev;
dev->v4l2_subdev = pdata->v4l2_subdev;
dev->pdev = pdev;
+ dev->regmap = pdata->regmap;
dev->udev = pdata->dvb_usb_device->udev;
dev->f_adc = bands_adc[0].rangelow;
dev->f_tuner = bands_fm[0].rangelow;
diff --git a/drivers/media/dvb-frontends/rtl2832_sdr.h b/drivers/media/dvb-frontends/rtl2832_sdr.h
index 342ea84860df..d8fc7e7212e3 100644
--- a/drivers/media/dvb-frontends/rtl2832_sdr.h
+++ b/drivers/media/dvb-frontends/rtl2832_sdr.h
@@ -56,10 +56,7 @@ struct rtl2832_sdr_platform_data {
#define RTL2832_SDR_TUNER_R828D 0x2b
u8 tuner;
- struct i2c_client *i2c_client;
- int (*bulk_read)(struct i2c_client *, unsigned int, void *, size_t);
- int (*bulk_write)(struct i2c_client *, unsigned int, const void *, size_t);
- int (*update_bits)(struct i2c_client *, unsigned int, unsigned int, unsigned int);
+ struct regmap *regmap;
struct dvb_frontend *dvb_frontend;
struct v4l2_subdev *v4l2_subdev;
struct dvb_usb_device *dvb_usb_device;
diff --git a/drivers/media/dvb-frontends/si2168.c b/drivers/media/dvb-frontends/si2168.c
index 821a8f481507..108a069fa1ae 100644
--- a/drivers/media/dvb-frontends/si2168.c
+++ b/drivers/media/dvb-frontends/si2168.c
@@ -18,53 +18,23 @@
static const struct dvb_frontend_ops si2168_ops;
-/* Own I2C adapter locking is needed because of I2C gate logic. */
-static int si2168_i2c_master_send_unlocked(const struct i2c_client *client,
- const char *buf, int count)
-{
- int ret;
- struct i2c_msg msg = {
- .addr = client->addr,
- .flags = 0,
- .len = count,
- .buf = (char *)buf,
- };
-
- ret = __i2c_transfer(client->adapter, &msg, 1);
- return (ret == 1) ? count : ret;
-}
-
-static int si2168_i2c_master_recv_unlocked(const struct i2c_client *client,
- char *buf, int count)
-{
- int ret;
- struct i2c_msg msg = {
- .addr = client->addr,
- .flags = I2C_M_RD,
- .len = count,
- .buf = buf,
- };
-
- ret = __i2c_transfer(client->adapter, &msg, 1);
- return (ret == 1) ? count : ret;
-}
-
/* execute firmware command */
-static int si2168_cmd_execute_unlocked(struct i2c_client *client,
- struct si2168_cmd *cmd)
+static int si2168_cmd_execute(struct i2c_client *client, struct si2168_cmd *cmd)
{
+ struct si2168_dev *dev = i2c_get_clientdata(client);
int ret;
unsigned long timeout;
+ mutex_lock(&dev->i2c_mutex);
+
if (cmd->wlen) {
/* write cmd and args for firmware */
- ret = si2168_i2c_master_send_unlocked(client, cmd->args,
- cmd->wlen);
+ ret = i2c_master_send(client, cmd->args, cmd->wlen);
if (ret < 0) {
- goto err;
+ goto err_mutex_unlock;
} else if (ret != cmd->wlen) {
ret = -EREMOTEIO;
- goto err;
+ goto err_mutex_unlock;
}
}
@@ -73,13 +43,12 @@ static int si2168_cmd_execute_unlocked(struct i2c_client *client,
#define TIMEOUT 70
timeout = jiffies + msecs_to_jiffies(TIMEOUT);
while (!time_after(jiffies, timeout)) {
- ret = si2168_i2c_master_recv_unlocked(client, cmd->args,
- cmd->rlen);
+ ret = i2c_master_recv(client, cmd->args, cmd->rlen);
if (ret < 0) {
- goto err;
+ goto err_mutex_unlock;
} else if (ret != cmd->rlen) {
ret = -EREMOTEIO;
- goto err;
+ goto err_mutex_unlock;
}
/* firmware ready? */
@@ -94,32 +63,23 @@ static int si2168_cmd_execute_unlocked(struct i2c_client *client,
/* error bit set? */
if ((cmd->args[0] >> 6) & 0x01) {
ret = -EREMOTEIO;
- goto err;
+ goto err_mutex_unlock;
}
if (!((cmd->args[0] >> 7) & 0x01)) {
ret = -ETIMEDOUT;
- goto err;
+ goto err_mutex_unlock;
}
}
+ mutex_unlock(&dev->i2c_mutex);
return 0;
-err:
+err_mutex_unlock:
+ mutex_unlock(&dev->i2c_mutex);
dev_dbg(&client->dev, "failed=%d\n", ret);
return ret;
}
-static int si2168_cmd_execute(struct i2c_client *client, struct si2168_cmd *cmd)
-{
- int ret;
-
- i2c_lock_adapter(client->adapter);
- ret = si2168_cmd_execute_unlocked(client, cmd);
- i2c_unlock_adapter(client->adapter);
-
- return ret;
-}
-
static int si2168_read_status(struct dvb_frontend *fe, enum fe_status *status)
{
struct i2c_client *client = fe->demodulator_priv;
@@ -610,14 +570,9 @@ static int si2168_get_tune_settings(struct dvb_frontend *fe,
return 0;
}
-/*
- * I2C gate logic
- * We must use unlocked I2C I/O because I2C adapter lock is already taken
- * by the caller (usually tuner driver).
- */
-static int si2168_select(struct i2c_adapter *adap, void *mux_priv, u32 chan)
+static int si2168_select(struct i2c_mux_core *muxc, u32 chan)
{
- struct i2c_client *client = mux_priv;
+ struct i2c_client *client = i2c_mux_priv(muxc);
int ret;
struct si2168_cmd cmd;
@@ -625,7 +580,7 @@ static int si2168_select(struct i2c_adapter *adap, void *mux_priv, u32 chan)
memcpy(cmd.args, "\xc0\x0d\x01", 3);
cmd.wlen = 3;
cmd.rlen = 0;
- ret = si2168_cmd_execute_unlocked(client, &cmd);
+ ret = si2168_cmd_execute(client, &cmd);
if (ret)
goto err;
@@ -635,9 +590,9 @@ err:
return ret;
}
-static int si2168_deselect(struct i2c_adapter *adap, void *mux_priv, u32 chan)
+static int si2168_deselect(struct i2c_mux_core *muxc, u32 chan)
{
- struct i2c_client *client = mux_priv;
+ struct i2c_client *client = i2c_mux_priv(muxc);
int ret;
struct si2168_cmd cmd;
@@ -645,7 +600,7 @@ static int si2168_deselect(struct i2c_adapter *adap, void *mux_priv, u32 chan)
memcpy(cmd.args, "\xc0\x0d\x00", 3);
cmd.wlen = 3;
cmd.rlen = 0;
- ret = si2168_cmd_execute_unlocked(client, &cmd);
+ ret = si2168_cmd_execute(client, &cmd);
if (ret)
goto err;
@@ -708,18 +663,25 @@ static int si2168_probe(struct i2c_client *client,
goto err;
}
+ mutex_init(&dev->i2c_mutex);
+
/* create mux i2c adapter for tuner */
- dev->adapter = i2c_add_mux_adapter(client->adapter, &client->dev,
- client, 0, 0, 0, si2168_select, si2168_deselect);
- if (dev->adapter == NULL) {
- ret = -ENODEV;
+ dev->muxc = i2c_mux_alloc(client->adapter, &client->dev,
+ 1, 0, I2C_MUX_LOCKED,
+ si2168_select, si2168_deselect);
+ if (!dev->muxc) {
+ ret = -ENOMEM;
goto err_kfree;
}
+ dev->muxc->priv = client;
+ ret = i2c_mux_add_adapter(dev->muxc, 0, 0, 0);
+ if (ret)
+ goto err_kfree;
/* create dvb_frontend */
memcpy(&dev->fe.ops, &si2168_ops, sizeof(struct dvb_frontend_ops));
dev->fe.demodulator_priv = client;
- *config->i2c_adapter = dev->adapter;
+ *config->i2c_adapter = dev->muxc->adapter[0];
*config->fe = &dev->fe;
dev->ts_mode = config->ts_mode;
dev->ts_clock_inv = config->ts_clock_inv;
@@ -743,7 +705,7 @@ static int si2168_remove(struct i2c_client *client)
dev_dbg(&client->dev, "\n");
- i2c_del_mux_adapter(dev->adapter);
+ i2c_mux_del_adapters(dev->muxc);
dev->fe.ops.release = NULL;
dev->fe.demodulator_priv = NULL;
diff --git a/drivers/media/dvb-frontends/si2168_priv.h b/drivers/media/dvb-frontends/si2168_priv.h
index c07e6fe2cb10..8a1f36d2014d 100644
--- a/drivers/media/dvb-frontends/si2168_priv.h
+++ b/drivers/media/dvb-frontends/si2168_priv.h
@@ -29,7 +29,8 @@
/* state struct */
struct si2168_dev {
- struct i2c_adapter *adapter;
+ struct mutex i2c_mutex;
+ struct i2c_mux_core *muxc;
struct dvb_frontend fe;
enum fe_delivery_system delivery_system;
enum fe_status fe_status;
diff --git a/drivers/media/dvb-frontends/zl10353.c b/drivers/media/dvb-frontends/zl10353.c
index 1832c2f7695c..3b08176d7bec 100644
--- a/drivers/media/dvb-frontends/zl10353.c
+++ b/drivers/media/dvb-frontends/zl10353.c
@@ -135,8 +135,7 @@ static void zl10353_calc_nominal_rate(struct dvb_frontend *fe,
value = (u64)10 * (1 << 23) / 7 * 125;
value = (bw * value) + adc_clock / 2;
- do_div(value, adc_clock);
- *nominal_rate = value;
+ *nominal_rate = div_u64(value, adc_clock);
dprintk("%s: bw %d, adc_clock %d => 0x%x\n",
__func__, bw, adc_clock, *nominal_rate);
@@ -163,8 +162,7 @@ static void zl10353_calc_input_freq(struct dvb_frontend *fe,
if (ife > adc_clock / 2)
ife = adc_clock - ife;
}
- value = (u64)65536 * ife + adc_clock / 2;
- do_div(value, adc_clock);
+ value = div_u64((u64)65536 * ife + adc_clock / 2, adc_clock);
*input_freq = -value;
dprintk("%s: if2 %d, ife %d, adc_clock %d => %d / 0x%x\n",
diff --git a/drivers/media/i2c/ad9389b.c b/drivers/media/i2c/ad9389b.c
index 788967dadd29..0462f461e679 100644
--- a/drivers/media/i2c/ad9389b.c
+++ b/drivers/media/i2c/ad9389b.c
@@ -1130,8 +1130,6 @@ static int ad9389b_probe(struct i2c_client *client, const struct i2c_device_id *
hdl = &state->hdl;
v4l2_ctrl_handler_init(hdl, 5);
- /* private controls */
-
state->hdmi_mode_ctrl = v4l2_ctrl_new_std_menu(hdl, &ad9389b_ctrl_ops,
V4L2_CID_DV_TX_MODE, V4L2_DV_TX_MODE_HDMI,
0, V4L2_DV_TX_MODE_DVI_D);
@@ -1151,12 +1149,6 @@ static int ad9389b_probe(struct i2c_client *client, const struct i2c_device_id *
goto err_hdl;
}
- state->hdmi_mode_ctrl->is_private = true;
- state->hotplug_ctrl->is_private = true;
- state->rx_sense_ctrl->is_private = true;
- state->have_edid0_ctrl->is_private = true;
- state->rgb_quantization_range_ctrl->is_private = true;
-
state->pad.flags = MEDIA_PAD_FL_SINK;
err = media_entity_pads_init(&sd->entity, 1, &state->pad);
if (err)
diff --git a/drivers/media/i2c/adp1653.c b/drivers/media/i2c/adp1653.c
index fb7ed730d932..e191e295c951 100644
--- a/drivers/media/i2c/adp1653.c
+++ b/drivers/media/i2c/adp1653.c
@@ -95,7 +95,7 @@ static int adp1653_get_fault(struct adp1653_flash *flash)
int rval;
fault = i2c_smbus_read_byte_data(client, ADP1653_REG_FAULT);
- if (IS_ERR_VALUE(fault))
+ if (fault < 0)
return fault;
flash->fault |= fault;
@@ -105,13 +105,13 @@ static int adp1653_get_fault(struct adp1653_flash *flash)
/* Clear faults. */
rval = i2c_smbus_write_byte_data(client, ADP1653_REG_OUT_SEL, 0);
- if (IS_ERR_VALUE(rval))
+ if (rval < 0)
return rval;
flash->led_mode->val = V4L2_FLASH_LED_MODE_NONE;
rval = adp1653_update_hw(flash);
- if (IS_ERR_VALUE(rval))
+ if (rval)
return rval;
return flash->fault;
@@ -158,7 +158,7 @@ static int adp1653_get_ctrl(struct v4l2_ctrl *ctrl)
int rval;
rval = adp1653_get_fault(flash);
- if (IS_ERR_VALUE(rval))
+ if (rval)
return rval;
ctrl->cur.val = 0;
@@ -184,7 +184,7 @@ static int adp1653_set_ctrl(struct v4l2_ctrl *ctrl)
int rval;
rval = adp1653_get_fault(flash);
- if (IS_ERR_VALUE(rval))
+ if (rval)
return rval;
if ((rval & (ADP1653_REG_FAULT_FLT_SCP |
ADP1653_REG_FAULT_FLT_OT |
@@ -466,9 +466,9 @@ static int adp1653_of_init(struct i2c_client *client,
of_node_put(child);
pd->enable_gpio = devm_gpiod_get(&client->dev, "enable", GPIOD_OUT_LOW);
- if (!pd->enable_gpio) {
+ if (IS_ERR(pd->enable_gpio)) {
dev_err(&client->dev, "Error getting GPIO\n");
- return -EINVAL;
+ return PTR_ERR(pd->enable_gpio);
}
return 0;
diff --git a/drivers/media/i2c/adv7180.c b/drivers/media/i2c/adv7180.c
index ff57c1dcb8af..b77b0a4dbf68 100644
--- a/drivers/media/i2c/adv7180.c
+++ b/drivers/media/i2c/adv7180.c
@@ -26,8 +26,9 @@
#include <linux/i2c.h>
#include <linux/slab.h>
#include <linux/of.h>
-#include <media/v4l2-ioctl.h>
#include <linux/videodev2.h>
+#include <media/v4l2-ioctl.h>
+#include <media/v4l2-event.h>
#include <media/v4l2-device.h>
#include <media/v4l2-ctrls.h>
#include <linux/mutex.h>
@@ -192,8 +193,8 @@ struct adv7180_state {
struct mutex mutex; /* mutual excl. when accessing chip */
int irq;
v4l2_std_id curr_norm;
- bool autodetect;
bool powered;
+ bool streaming;
u8 input;
struct i2c_client *client;
@@ -338,12 +339,26 @@ static int adv7180_querystd(struct v4l2_subdev *sd, v4l2_std_id *std)
if (err)
return err;
- /* when we are interrupt driven we know the state */
- if (!state->autodetect || state->irq > 0)
- *std = state->curr_norm;
- else
- err = __adv7180_status(state, NULL, std);
+ if (state->streaming) {
+ err = -EBUSY;
+ goto unlock;
+ }
+
+ err = adv7180_set_video_standard(state,
+ ADV7180_STD_AD_PAL_BG_NTSC_J_SECAM);
+ if (err)
+ goto unlock;
+
+ msleep(100);
+ __adv7180_status(state, NULL, std);
+
+ err = v4l2_std_to_adv7180(state->curr_norm);
+ if (err < 0)
+ goto unlock;
+ err = adv7180_set_video_standard(state, err);
+
+unlock:
mutex_unlock(&state->mutex);
return err;
}
@@ -387,23 +402,13 @@ static int adv7180_program_std(struct adv7180_state *state)
{
int ret;
- if (state->autodetect) {
- ret = adv7180_set_video_standard(state,
- ADV7180_STD_AD_PAL_BG_NTSC_J_SECAM);
- if (ret < 0)
- return ret;
-
- __adv7180_status(state, NULL, &state->curr_norm);
- } else {
- ret = v4l2_std_to_adv7180(state->curr_norm);
- if (ret < 0)
- return ret;
-
- ret = adv7180_set_video_standard(state, ret);
- if (ret < 0)
- return ret;
- }
+ ret = v4l2_std_to_adv7180(state->curr_norm);
+ if (ret < 0)
+ return ret;
+ ret = adv7180_set_video_standard(state, ret);
+ if (ret < 0)
+ return ret;
return 0;
}
@@ -415,18 +420,12 @@ static int adv7180_s_std(struct v4l2_subdev *sd, v4l2_std_id std)
if (ret)
return ret;
- /* all standards -> autodetect */
- if (std == V4L2_STD_ALL) {
- state->autodetect = true;
- } else {
- /* Make sure we can support this std */
- ret = v4l2_std_to_adv7180(std);
- if (ret < 0)
- goto out;
+ /* Make sure we can support this std */
+ ret = v4l2_std_to_adv7180(std);
+ if (ret < 0)
+ goto out;
- state->curr_norm = std;
- state->autodetect = false;
- }
+ state->curr_norm = std;
ret = adv7180_program_std(state);
out:
@@ -434,6 +433,15 @@ out:
return ret;
}
+static int adv7180_g_std(struct v4l2_subdev *sd, v4l2_std_id *norm)
+{
+ struct adv7180_state *state = to_state(sd);
+
+ *norm = state->curr_norm;
+
+ return 0;
+}
+
static int adv7180_set_power(struct adv7180_state *state, bool on)
{
u8 val;
@@ -717,17 +725,77 @@ static int adv7180_g_mbus_config(struct v4l2_subdev *sd,
return 0;
}
+static int adv7180_cropcap(struct v4l2_subdev *sd, struct v4l2_cropcap *cropcap)
+{
+ struct adv7180_state *state = to_state(sd);
+
+ if (state->curr_norm & V4L2_STD_525_60) {
+ cropcap->pixelaspect.numerator = 11;
+ cropcap->pixelaspect.denominator = 10;
+ } else {
+ cropcap->pixelaspect.numerator = 54;
+ cropcap->pixelaspect.denominator = 59;
+ }
+
+ return 0;
+}
+
+static int adv7180_g_tvnorms(struct v4l2_subdev *sd, v4l2_std_id *norm)
+{
+ *norm = V4L2_STD_ALL;
+ return 0;
+}
+
+static int adv7180_s_stream(struct v4l2_subdev *sd, int enable)
+{
+ struct adv7180_state *state = to_state(sd);
+ int ret;
+
+ /* It's always safe to stop streaming, no need to take the lock */
+ if (!enable) {
+ state->streaming = enable;
+ return 0;
+ }
+
+ /* Must wait until querystd released the lock */
+ ret = mutex_lock_interruptible(&state->mutex);
+ if (ret)
+ return ret;
+ state->streaming = enable;
+ mutex_unlock(&state->mutex);
+ return 0;
+}
+
+static int adv7180_subscribe_event(struct v4l2_subdev *sd,
+ struct v4l2_fh *fh,
+ struct v4l2_event_subscription *sub)
+{
+ switch (sub->type) {
+ case V4L2_EVENT_SOURCE_CHANGE:
+ return v4l2_src_change_event_subdev_subscribe(sd, fh, sub);
+ case V4L2_EVENT_CTRL:
+ return v4l2_ctrl_subdev_subscribe_event(sd, fh, sub);
+ default:
+ return -EINVAL;
+ }
+}
+
static const struct v4l2_subdev_video_ops adv7180_video_ops = {
.s_std = adv7180_s_std,
+ .g_std = adv7180_g_std,
.querystd = adv7180_querystd,
.g_input_status = adv7180_g_input_status,
.s_routing = adv7180_s_routing,
.g_mbus_config = adv7180_g_mbus_config,
+ .cropcap = adv7180_cropcap,
+ .g_tvnorms = adv7180_g_tvnorms,
+ .s_stream = adv7180_s_stream,
};
-
static const struct v4l2_subdev_core_ops adv7180_core_ops = {
.s_power = adv7180_s_power,
+ .subscribe_event = adv7180_subscribe_event,
+ .unsubscribe_event = v4l2_event_subdev_unsubscribe,
};
static const struct v4l2_subdev_pad_ops adv7180_pad_ops = {
@@ -752,8 +820,14 @@ static irqreturn_t adv7180_irq(int irq, void *devid)
/* clear */
adv7180_write(state, ADV7180_REG_ICR3, isr3);
- if (isr3 & ADV7180_IRQ3_AD_CHANGE && state->autodetect)
- __adv7180_status(state, NULL, &state->curr_norm);
+ if (isr3 & ADV7180_IRQ3_AD_CHANGE) {
+ static const struct v4l2_event src_ch = {
+ .type = V4L2_EVENT_SOURCE_CHANGE,
+ .u.src_change.changes = V4L2_EVENT_SRC_CH_RESOLUTION,
+ };
+
+ v4l2_subdev_notify_event(&state->sd, &src_ch);
+ }
mutex_unlock(&state->mutex);
return IRQ_HANDLED;
@@ -1198,7 +1272,7 @@ static int adv7180_probe(struct i2c_client *client,
state->irq = client->irq;
mutex_init(&state->mutex);
- state->autodetect = true;
+ state->curr_norm = V4L2_STD_NTSC;
if (state->chip_info->flags & ADV7180_FLAG_RESET_POWERED)
state->powered = true;
else
@@ -1206,7 +1280,7 @@ static int adv7180_probe(struct i2c_client *client,
state->input = 0;
sd = &state->sd;
v4l2_i2c_subdev_init(sd, client, &adv7180_ops);
- sd->flags = V4L2_SUBDEV_FL_HAS_DEVNODE;
+ sd->flags = V4L2_SUBDEV_FL_HAS_DEVNODE | V4L2_SUBDEV_FL_HAS_EVENTS;
ret = adv7180_init_controls(state);
if (ret)
@@ -1328,6 +1402,14 @@ static SIMPLE_DEV_PM_OPS(adv7180_pm_ops, adv7180_suspend, adv7180_resume);
#ifdef CONFIG_OF
static const struct of_device_id adv7180_of_id[] = {
{ .compatible = "adi,adv7180", },
+ { .compatible = "adi,adv7182", },
+ { .compatible = "adi,adv7280", },
+ { .compatible = "adi,adv7280-m", },
+ { .compatible = "adi,adv7281", },
+ { .compatible = "adi,adv7281-m", },
+ { .compatible = "adi,adv7281-ma", },
+ { .compatible = "adi,adv7282", },
+ { .compatible = "adi,adv7282-m", },
{ },
};
diff --git a/drivers/media/i2c/adv7511.c b/drivers/media/i2c/adv7511.c
index bd822f032b08..39271c35da48 100644
--- a/drivers/media/i2c/adv7511.c
+++ b/drivers/media/i2c/adv7511.c
@@ -1502,12 +1502,6 @@ static int adv7511_probe(struct i2c_client *client, const struct i2c_device_id *
err = hdl->error;
goto err_hdl;
}
- state->hdmi_mode_ctrl->is_private = true;
- state->hotplug_ctrl->is_private = true;
- state->rx_sense_ctrl->is_private = true;
- state->have_edid0_ctrl->is_private = true;
- state->rgb_quantization_range_ctrl->is_private = true;
-
state->pad.flags = MEDIA_PAD_FL_SINK;
err = media_entity_pads_init(&sd->entity, 1, &state->pad);
if (err)
diff --git a/drivers/media/i2c/adv7604.c b/drivers/media/i2c/adv7604.c
index 41a1bfc5eaa7..beb2841ceae5 100644
--- a/drivers/media/i2c/adv7604.c
+++ b/drivers/media/i2c/adv7604.c
@@ -3141,7 +3141,6 @@ static int adv76xx_probe(struct i2c_client *client,
if (ctrl)
ctrl->flags |= V4L2_CTRL_FLAG_VOLATILE;
- /* private controls */
state->detect_tx_5v_ctrl = v4l2_ctrl_new_std(hdl, NULL,
V4L2_CID_DV_RX_POWER_PRESENT, 0,
(1 << state->info->num_dv_ports) - 1, 0, 0);
@@ -3164,13 +3163,6 @@ static int adv76xx_probe(struct i2c_client *client,
err = hdl->error;
goto err_hdl;
}
- state->detect_tx_5v_ctrl->is_private = true;
- state->rgb_quantization_range_ctrl->is_private = true;
- if (adv76xx_has_afe(state))
- state->analog_sampling_phase_ctrl->is_private = true;
- state->free_run_color_manual_ctrl->is_private = true;
- state->free_run_color_ctrl->is_private = true;
-
if (adv76xx_s_detect_tx_5v_ctrl(sd)) {
err = -ENODEV;
goto err_hdl;
diff --git a/drivers/media/i2c/adv7842.c b/drivers/media/i2c/adv7842.c
index 7ccb85d45224..ecaacb0a6fa1 100644
--- a/drivers/media/i2c/adv7842.c
+++ b/drivers/media/i2c/adv7842.c
@@ -3300,12 +3300,6 @@ static int adv7842_probe(struct i2c_client *client,
err = hdl->error;
goto err_hdl;
}
- state->detect_tx_5v_ctrl->is_private = true;
- state->rgb_quantization_range_ctrl->is_private = true;
- state->analog_sampling_phase_ctrl->is_private = true;
- state->free_run_color_ctrl_manual->is_private = true;
- state->free_run_color_ctrl->is_private = true;
-
if (adv7842_s_detect_tx_5v_ctrl(sd)) {
err = -ENODEV;
goto err_hdl;
diff --git a/drivers/media/i2c/m5mols/m5mols_controls.c b/drivers/media/i2c/m5mols/m5mols_controls.c
index a60931e66312..c2218c0a9e6f 100644
--- a/drivers/media/i2c/m5mols/m5mols_controls.c
+++ b/drivers/media/i2c/m5mols/m5mols_controls.c
@@ -405,7 +405,7 @@ static int m5mols_g_volatile_ctrl(struct v4l2_ctrl *ctrl)
struct v4l2_subdev *sd = to_sd(ctrl);
struct m5mols_info *info = to_m5mols(sd);
int ret = 0;
- u8 status;
+ u8 status = REG_ISO_AUTO;
v4l2_dbg(1, m5mols_debug, sd, "%s: ctrl: %s (%d)\n",
__func__, ctrl->name, info->isp_ready);
diff --git a/drivers/media/i2c/saa7115.c b/drivers/media/i2c/saa7115.c
index d2a1ce2bc7f5..bd3526bdd539 100644
--- a/drivers/media/i2c/saa7115.c
+++ b/drivers/media/i2c/saa7115.c
@@ -1798,6 +1798,21 @@ static int saa711x_detect_chip(struct i2c_client *client,
return GM7113C;
}
+ /* Check if it is a CJC7113 */
+ if (!memcmp(name, "1111111111111111", CHIP_VER_SIZE)) {
+ strlcpy(name, "cjc7113", CHIP_VER_SIZE);
+
+ if (!autodetect && strcmp(name, id->name))
+ return -EINVAL;
+
+ v4l_dbg(1, debug, client,
+ "It seems to be a %s chip (%*ph) @ 0x%x.\n",
+ name, 16, chip_ver, client->addr << 1);
+
+ /* CJC7113 seems to be SAA7113-compatible */
+ return SAA7113;
+ }
+
/* Chip was not discovered. Return its ID and don't bind */
v4l_dbg(1, debug, client, "chip %*ph @ 0x%x is unknown.\n",
16, chip_ver, client->addr << 1);
diff --git a/drivers/media/i2c/smiapp/smiapp-core.c b/drivers/media/i2c/smiapp/smiapp-core.c
index a215efe7a8ba..3dfe387abf6e 100644
--- a/drivers/media/i2c/smiapp/smiapp-core.c
+++ b/drivers/media/i2c/smiapp/smiapp-core.c
@@ -188,6 +188,8 @@ static int smiapp_read_frame_fmt(struct smiapp_sensor *sensor)
embedded_end = 0;
}
+ sensor->image_start = image_start;
+
dev_dbg(&client->dev, "embedded data from lines %d to %d\n",
embedded_start, embedded_end);
dev_dbg(&client->dev, "image data starts at line %d\n", image_start);
@@ -2280,6 +2282,15 @@ static int smiapp_get_skip_frames(struct v4l2_subdev *subdev, u32 *frames)
return 0;
}
+static int smiapp_get_skip_top_lines(struct v4l2_subdev *subdev, u32 *lines)
+{
+ struct smiapp_sensor *sensor = to_smiapp_sensor(subdev);
+
+ *lines = sensor->image_start;
+
+ return 0;
+}
+
/* -----------------------------------------------------------------------------
* sysfs attributes
*/
@@ -2890,6 +2901,7 @@ static const struct v4l2_subdev_pad_ops smiapp_pad_ops = {
static const struct v4l2_subdev_sensor_ops smiapp_sensor_ops = {
.g_skip_frames = smiapp_get_skip_frames,
+ .g_skip_top_lines = smiapp_get_skip_top_lines,
};
static const struct v4l2_subdev_ops smiapp_ops = {
diff --git a/drivers/media/i2c/smiapp/smiapp.h b/drivers/media/i2c/smiapp/smiapp.h
index f6af0cc4a256..2174f89a00db 100644
--- a/drivers/media/i2c/smiapp/smiapp.h
+++ b/drivers/media/i2c/smiapp/smiapp.h
@@ -217,6 +217,7 @@ struct smiapp_sensor {
u8 hvflip_inv_mask; /* H/VFLIP inversion due to sensor orientation */
u8 frame_skip;
+ u16 image_start; /* Offset to first line after metadata lines */
int power_count;
diff --git a/drivers/media/i2c/tc358743.c b/drivers/media/i2c/tc358743.c
index 972e0d47259d..6cf6d06737a5 100644
--- a/drivers/media/i2c/tc358743.c
+++ b/drivers/media/i2c/tc358743.c
@@ -1551,6 +1551,8 @@ static int tc358743_g_edid(struct v4l2_subdev *sd,
{
struct tc358743_state *state = to_state(sd);
+ memset(edid->reserved, 0, sizeof(edid->reserved));
+
if (edid->pad != 0)
return -EINVAL;
@@ -1585,6 +1587,8 @@ static int tc358743_s_edid(struct v4l2_subdev *sd,
v4l2_dbg(2, debug, sd, "%s, pad %d, start block %d, blocks %d\n",
__func__, edid->pad, edid->start_block, edid->blocks);
+ memset(edid->reserved, 0, sizeof(edid->reserved));
+
if (edid->pad != 0)
return -EINVAL;
@@ -1859,7 +1863,6 @@ static int tc358743_probe(struct i2c_client *client,
/* control handlers */
v4l2_ctrl_handler_init(&state->hdl, 3);
- /* private controls */
state->detect_tx_5v_ctrl = v4l2_ctrl_new_std(&state->hdl, NULL,
V4L2_CID_DV_RX_POWER_PRESENT, 0, 1, 0, 0);
diff --git a/drivers/media/i2c/ths7303.c b/drivers/media/i2c/ths7303.c
index 5bbfcab01c75..71a31352135c 100644
--- a/drivers/media/i2c/ths7303.c
+++ b/drivers/media/i2c/ths7303.c
@@ -285,7 +285,7 @@ static int ths7303_log_status(struct v4l2_subdev *sd)
v4l2_info(sd, "stream %s\n", state->stream_on ? "On" : "Off");
if (state->bt.pixelclock) {
- struct v4l2_bt_timings *bt = bt = &state->bt;
+ struct v4l2_bt_timings *bt = &state->bt;
u32 frame_width, frame_height;
frame_width = V4L2_DV_BT_FRAME_WIDTH(bt);
diff --git a/drivers/media/i2c/tvp5150.c b/drivers/media/i2c/tvp5150.c
index ff18444e19e4..0b6d46c453bf 100644
--- a/drivers/media/i2c/tvp5150.c
+++ b/drivers/media/i2c/tvp5150.c
@@ -83,7 +83,7 @@ static int tvp5150_read(struct v4l2_subdev *sd, unsigned char addr)
return rc;
}
-static inline void tvp5150_write(struct v4l2_subdev *sd, unsigned char addr,
+static int tvp5150_write(struct v4l2_subdev *sd, unsigned char addr,
unsigned char value)
{
struct i2c_client *c = v4l2_get_subdevdata(sd);
@@ -92,7 +92,9 @@ static inline void tvp5150_write(struct v4l2_subdev *sd, unsigned char addr,
v4l2_dbg(2, debug, sd, "tvp5150: writing 0x%02x 0x%02x\n", addr, value);
rc = i2c_smbus_write_byte_data(c, addr, value);
if (rc < 0)
- v4l2_dbg(0, debug, sd, "i2c i/o error: rc == %d\n", rc);
+ v4l2_err(sd, "i2c i/o error: rc == %d\n", rc);
+
+ return rc;
}
static void dump_reg_range(struct v4l2_subdev *sd, char *s, u8 init,
@@ -1159,8 +1161,7 @@ static int tvp5150_g_register(struct v4l2_subdev *sd, struct v4l2_dbg_register *
static int tvp5150_s_register(struct v4l2_subdev *sd, const struct v4l2_dbg_register *reg)
{
- tvp5150_write(sd, reg->reg & 0xff, reg->val & 0xff);
- return 0;
+ return tvp5150_write(sd, reg->reg & 0xff, reg->val & 0xff);
}
#endif
diff --git a/drivers/media/media-device.c b/drivers/media/media-device.c
index 3cfd7af8c5ca..a1cd50f331f1 100644
--- a/drivers/media/media-device.c
+++ b/drivers/media/media-device.c
@@ -90,18 +90,13 @@ static struct media_entity *find_entity(struct media_device *mdev, u32 id)
id &= ~MEDIA_ENT_ID_FLAG_NEXT;
- spin_lock(&mdev->lock);
-
media_device_for_each_entity(entity, mdev) {
if (((media_entity_id(entity) == id) && !next) ||
((media_entity_id(entity) > id) && next)) {
- spin_unlock(&mdev->lock);
return entity;
}
}
- spin_unlock(&mdev->lock);
-
return NULL;
}
@@ -431,6 +426,7 @@ static long media_device_ioctl(struct file *filp, unsigned int cmd,
struct media_device *dev = to_media_device(devnode);
long ret;
+ mutex_lock(&dev->graph_mutex);
switch (cmd) {
case MEDIA_IOC_DEVICE_INFO:
ret = media_device_get_info(dev,
@@ -443,29 +439,24 @@ static long media_device_ioctl(struct file *filp, unsigned int cmd,
break;
case MEDIA_IOC_ENUM_LINKS:
- mutex_lock(&dev->graph_mutex);
ret = media_device_enum_links(dev,
(struct media_links_enum __user *)arg);
- mutex_unlock(&dev->graph_mutex);
break;
case MEDIA_IOC_SETUP_LINK:
- mutex_lock(&dev->graph_mutex);
ret = media_device_setup_link(dev,
(struct media_link_desc __user *)arg);
- mutex_unlock(&dev->graph_mutex);
break;
case MEDIA_IOC_G_TOPOLOGY:
- mutex_lock(&dev->graph_mutex);
ret = media_device_get_topology(dev,
(struct media_v2_topology __user *)arg);
- mutex_unlock(&dev->graph_mutex);
break;
default:
ret = -ENOIOCTLCMD;
}
+ mutex_unlock(&dev->graph_mutex);
return ret;
}
@@ -508,12 +499,6 @@ static long media_device_compat_ioctl(struct file *filp, unsigned int cmd,
long ret;
switch (cmd) {
- case MEDIA_IOC_DEVICE_INFO:
- case MEDIA_IOC_ENUM_ENTITIES:
- case MEDIA_IOC_SETUP_LINK:
- case MEDIA_IOC_G_TOPOLOGY:
- return media_device_ioctl(filp, cmd, arg);
-
case MEDIA_IOC_ENUM_LINKS32:
mutex_lock(&dev->graph_mutex);
ret = media_device_enum_links32(dev,
@@ -522,7 +507,7 @@ static long media_device_compat_ioctl(struct file *filp, unsigned int cmd,
break;
default:
- ret = -ENOIOCTLCMD;
+ return media_device_ioctl(filp, cmd, arg);
}
return ret;
@@ -590,12 +575,12 @@ int __must_check media_device_register_entity(struct media_device *mdev,
if (!ida_pre_get(&mdev->entity_internal_idx, GFP_KERNEL))
return -ENOMEM;
- spin_lock(&mdev->lock);
+ mutex_lock(&mdev->graph_mutex);
ret = ida_get_new_above(&mdev->entity_internal_idx, 1,
&entity->internal_idx);
if (ret < 0) {
- spin_unlock(&mdev->lock);
+ mutex_unlock(&mdev->graph_mutex);
return ret;
}
@@ -615,9 +600,6 @@ int __must_check media_device_register_entity(struct media_device *mdev,
(notify)->notify(entity, notify->notify_data);
}
- spin_unlock(&mdev->lock);
-
- mutex_lock(&mdev->graph_mutex);
if (mdev->entity_internal_idx_max
>= mdev->pm_count_walk.ent_enum.idx_max) {
struct media_entity_graph new = { .top = 0 };
@@ -680,9 +662,9 @@ void media_device_unregister_entity(struct media_entity *entity)
if (mdev == NULL)
return;
- spin_lock(&mdev->lock);
+ mutex_lock(&mdev->graph_mutex);
__media_device_unregister_entity(entity);
- spin_unlock(&mdev->lock);
+ mutex_unlock(&mdev->graph_mutex);
}
EXPORT_SYMBOL_GPL(media_device_unregister_entity);
@@ -703,7 +685,6 @@ void media_device_init(struct media_device *mdev)
INIT_LIST_HEAD(&mdev->pads);
INIT_LIST_HEAD(&mdev->links);
INIT_LIST_HEAD(&mdev->entity_notify);
- spin_lock_init(&mdev->lock);
mutex_init(&mdev->graph_mutex);
ida_init(&mdev->entity_internal_idx);
@@ -752,9 +733,9 @@ EXPORT_SYMBOL_GPL(__media_device_register);
int __must_check media_device_register_entity_notify(struct media_device *mdev,
struct media_entity_notify *nptr)
{
- spin_lock(&mdev->lock);
+ mutex_lock(&mdev->graph_mutex);
list_add_tail(&nptr->list, &mdev->entity_notify);
- spin_unlock(&mdev->lock);
+ mutex_unlock(&mdev->graph_mutex);
return 0;
}
EXPORT_SYMBOL_GPL(media_device_register_entity_notify);
@@ -771,9 +752,9 @@ static void __media_device_unregister_entity_notify(struct media_device *mdev,
void media_device_unregister_entity_notify(struct media_device *mdev,
struct media_entity_notify *nptr)
{
- spin_lock(&mdev->lock);
+ mutex_lock(&mdev->graph_mutex);
__media_device_unregister_entity_notify(mdev, nptr);
- spin_unlock(&mdev->lock);
+ mutex_unlock(&mdev->graph_mutex);
}
EXPORT_SYMBOL_GPL(media_device_unregister_entity_notify);
@@ -787,11 +768,11 @@ void media_device_unregister(struct media_device *mdev)
if (mdev == NULL)
return;
- spin_lock(&mdev->lock);
+ mutex_lock(&mdev->graph_mutex);
/* Check if mdev was ever registered at all */
if (!media_devnode_is_registered(&mdev->devnode)) {
- spin_unlock(&mdev->lock);
+ mutex_unlock(&mdev->graph_mutex);
return;
}
@@ -811,12 +792,11 @@ void media_device_unregister(struct media_device *mdev)
kfree(intf);
}
- spin_unlock(&mdev->lock);
+ mutex_unlock(&mdev->graph_mutex);
device_remove_file(&mdev->devnode.dev, &dev_attr_model);
+ dev_dbg(mdev->dev, "Media device unregistering\n");
media_devnode_unregister(&mdev->devnode);
-
- dev_dbg(mdev->dev, "Media device unregistered\n");
}
EXPORT_SYMBOL_GPL(media_device_unregister);
diff --git a/drivers/media/media-devnode.c b/drivers/media/media-devnode.c
index 29409f440f1c..b66dc9d0766b 100644
--- a/drivers/media/media-devnode.c
+++ b/drivers/media/media-devnode.c
@@ -197,10 +197,11 @@ static int media_release(struct inode *inode, struct file *filp)
if (mdev->fops->release)
mdev->fops->release(filp);
+ filp->private_data = NULL;
+
/* decrease the refcount unconditionally since the release()
return value is ignored. */
put_device(&mdev->dev);
- filp->private_data = NULL;
return 0;
}
@@ -267,8 +268,11 @@ int __must_check media_devnode_register(struct media_devnode *mdev,
return 0;
error:
+ mutex_lock(&media_devnode_lock);
cdev_del(&mdev->cdev);
clear_bit(mdev->minor, media_devnode_nums);
+ mutex_unlock(&media_devnode_lock);
+
return ret;
}
diff --git a/drivers/media/media-entity.c b/drivers/media/media-entity.c
index e95070b3a3d4..d8a2299f0c2a 100644
--- a/drivers/media/media-entity.c
+++ b/drivers/media/media-entity.c
@@ -219,7 +219,7 @@ int media_entity_pads_init(struct media_entity *entity, u16 num_pads,
entity->pads = pads;
if (mdev)
- spin_lock(&mdev->lock);
+ mutex_lock(&mdev->graph_mutex);
for (i = 0; i < num_pads; i++) {
pads[i].entity = entity;
@@ -230,7 +230,7 @@ int media_entity_pads_init(struct media_entity *entity, u16 num_pads,
}
if (mdev)
- spin_unlock(&mdev->lock);
+ mutex_unlock(&mdev->graph_mutex);
return 0;
}
@@ -445,7 +445,7 @@ __must_check int __media_entity_pipeline_start(struct media_entity *entity,
bitmap_or(active, active, has_no_links, entity->num_pads);
if (!bitmap_full(active, entity->num_pads)) {
- ret = -EPIPE;
+ ret = -ENOLINK;
dev_dbg(entity->graph_obj.mdev->dev,
"\"%s\":%u must be connected by an enabled link\n",
entity->name,
@@ -747,9 +747,9 @@ void media_entity_remove_links(struct media_entity *entity)
if (mdev == NULL)
return;
- spin_lock(&mdev->lock);
+ mutex_lock(&mdev->graph_mutex);
__media_entity_remove_links(entity);
- spin_unlock(&mdev->lock);
+ mutex_unlock(&mdev->graph_mutex);
}
EXPORT_SYMBOL_GPL(media_entity_remove_links);
@@ -951,9 +951,9 @@ void media_remove_intf_link(struct media_link *link)
if (mdev == NULL)
return;
- spin_lock(&mdev->lock);
+ mutex_lock(&mdev->graph_mutex);
__media_remove_intf_link(link);
- spin_unlock(&mdev->lock);
+ mutex_unlock(&mdev->graph_mutex);
}
EXPORT_SYMBOL_GPL(media_remove_intf_link);
@@ -975,8 +975,8 @@ void media_remove_intf_links(struct media_interface *intf)
if (mdev == NULL)
return;
- spin_lock(&mdev->lock);
+ mutex_lock(&mdev->graph_mutex);
__media_remove_intf_links(intf);
- spin_unlock(&mdev->lock);
+ mutex_unlock(&mdev->graph_mutex);
}
EXPORT_SYMBOL_GPL(media_remove_intf_links);
diff --git a/drivers/media/pci/Kconfig b/drivers/media/pci/Kconfig
index 48a611bc3e18..4f6467fbaeb4 100644
--- a/drivers/media/pci/Kconfig
+++ b/drivers/media/pci/Kconfig
@@ -14,6 +14,7 @@ source "drivers/media/pci/meye/Kconfig"
source "drivers/media/pci/solo6x10/Kconfig"
source "drivers/media/pci/sta2x11/Kconfig"
source "drivers/media/pci/tw68/Kconfig"
+source "drivers/media/pci/tw686x/Kconfig"
source "drivers/media/pci/zoran/Kconfig"
endif
diff --git a/drivers/media/pci/Makefile b/drivers/media/pci/Makefile
index 5f8aacb8b9b8..2e54c36441f7 100644
--- a/drivers/media/pci/Makefile
+++ b/drivers/media/pci/Makefile
@@ -25,6 +25,7 @@ obj-$(CONFIG_VIDEO_BT848) += bt8xx/
obj-$(CONFIG_VIDEO_SAA7134) += saa7134/
obj-$(CONFIG_VIDEO_SAA7164) += saa7164/
obj-$(CONFIG_VIDEO_TW68) += tw68/
+obj-$(CONFIG_VIDEO_TW686X) += tw686x/
obj-$(CONFIG_VIDEO_DT3155) += dt3155/
obj-$(CONFIG_VIDEO_MEYE) += meye/
obj-$(CONFIG_STA2X11_VIP) += sta2x11/
diff --git a/drivers/media/pci/cobalt/Kconfig b/drivers/media/pci/cobalt/Kconfig
index a01f0cc745cc..70343829a125 100644
--- a/drivers/media/pci/cobalt/Kconfig
+++ b/drivers/media/pci/cobalt/Kconfig
@@ -4,6 +4,7 @@ config VIDEO_COBALT
depends on PCI_MSI && MTD_COMPLEX_MAPPINGS
depends on GPIOLIB || COMPILE_TEST
depends on SND
+ depends on MTD
select I2C_ALGOBIT
select VIDEO_ADV7604
select VIDEO_ADV7511
diff --git a/drivers/media/pci/cx18/cx18-driver.h b/drivers/media/pci/cx18/cx18-driver.h
index 7e31f2a2e085..47ce80fa73b9 100644
--- a/drivers/media/pci/cx18/cx18-driver.h
+++ b/drivers/media/pci/cx18/cx18-driver.h
@@ -707,11 +707,7 @@ static inline int cx18_raw_vbi(const struct cx18 *cx)
/* Call the specified callback for all subdevs with a grp_id bit matching the
* mask in hw (if 0, then match them all). Ignore any errors. */
#define cx18_call_hw(cx, hw, o, f, args...) \
- do { \
- struct v4l2_subdev *__sd; \
- __v4l2_device_call_subdevs_p(&(cx)->v4l2_dev, __sd, \
- !(hw) || (__sd->grp_id & (hw)), o, f , ##args); \
- } while (0)
+ v4l2_device_mask_call_all(&(cx)->v4l2_dev, hw, o, f, ##args)
#define cx18_call_all(cx, o, f, args...) cx18_call_hw(cx, 0, o, f , ##args)
@@ -719,12 +715,7 @@ static inline int cx18_raw_vbi(const struct cx18 *cx)
* mask in hw (if 0, then match them all). If the callback returns an error
* other than 0 or -ENOIOCTLCMD, then return with that error code. */
#define cx18_call_hw_err(cx, hw, o, f, args...) \
-({ \
- struct v4l2_subdev *__sd; \
- __v4l2_device_call_subdevs_until_err_p(&(cx)->v4l2_dev, \
- __sd, !(hw) || (__sd->grp_id & (hw)), o, f, \
- ##args); \
-})
+ v4l2_device_mask_call_until_err(&(cx)->v4l2_dev, hw, o, f, ##args)
#define cx18_call_all_err(cx, o, f, args...) \
cx18_call_hw_err(cx, 0, o, f , ##args)
diff --git a/drivers/media/pci/cx23885/cx23885-av.c b/drivers/media/pci/cx23885/cx23885-av.c
index 877dad89107e..e7d4406f9abd 100644
--- a/drivers/media/pci/cx23885/cx23885-av.c
+++ b/drivers/media/pci/cx23885/cx23885-av.c
@@ -24,7 +24,7 @@ void cx23885_av_work_handler(struct work_struct *work)
{
struct cx23885_dev *dev =
container_of(work, struct cx23885_dev, cx25840_work);
- bool handled;
+ bool handled = false;
v4l2_subdev_call(dev->sd_cx25840, core, interrupt_service_routine,
PCI_MSK_AV_CORE, &handled);
diff --git a/drivers/media/pci/ivtv/ivtv-driver.h b/drivers/media/pci/ivtv/ivtv-driver.h
index 6c08dae67a73..10cba305dbd2 100644
--- a/drivers/media/pci/ivtv/ivtv-driver.h
+++ b/drivers/media/pci/ivtv/ivtv-driver.h
@@ -827,12 +827,7 @@ static inline int ivtv_raw_vbi(const struct ivtv *itv)
/* Call the specified callback for all subdevs matching hw (if 0, then
match them all). Ignore any errors. */
#define ivtv_call_hw(itv, hw, o, f, args...) \
- do { \
- struct v4l2_subdev *__sd; \
- __v4l2_device_call_subdevs_p(&(itv)->v4l2_dev, __sd, \
- !(hw) ? true : (__sd->grp_id & (hw)), \
- o, f, ##args); \
- } while (0)
+ v4l2_device_mask_call_all(&(itv)->v4l2_dev, hw, o, f, ##args)
#define ivtv_call_all(itv, o, f, args...) ivtv_call_hw(itv, 0, o, f , ##args)
@@ -840,11 +835,7 @@ static inline int ivtv_raw_vbi(const struct ivtv *itv)
match them all). If the callback returns an error other than 0 or
-ENOIOCTLCMD, then return with that error code. */
#define ivtv_call_hw_err(itv, hw, o, f, args...) \
-({ \
- struct v4l2_subdev *__sd; \
- __v4l2_device_call_subdevs_until_err_p(&(itv)->v4l2_dev, __sd, \
- !(hw) || (__sd->grp_id & (hw)), o, f , ##args); \
-})
+ v4l2_device_mask_call_until_err(&(itv)->v4l2_dev, hw, o, f, ##args)
#define ivtv_call_all_err(itv, o, f, args...) ivtv_call_hw_err(itv, 0, o, f , ##args)
diff --git a/drivers/media/pci/smipcie/smipcie-ir.c b/drivers/media/pci/smipcie/smipcie-ir.c
index d018673c71f6..826c7c75e64d 100644
--- a/drivers/media/pci/smipcie/smipcie-ir.c
+++ b/drivers/media/pci/smipcie/smipcie-ir.c
@@ -203,7 +203,7 @@ int smi_ir_init(struct smi_dev *dev)
rc_dev->dev.parent = &dev->pci_dev->dev;
rc_dev->driver_type = RC_DRIVER_SCANCODE;
- rc_dev->map_name = RC_MAP_DVBSKY;
+ rc_dev->map_name = dev->info->rc_map;
ir->rc_dev = rc_dev;
ir->dev = dev;
diff --git a/drivers/media/pci/smipcie/smipcie-main.c b/drivers/media/pci/smipcie/smipcie-main.c
index b039a229b7d2..83981d611a79 100644
--- a/drivers/media/pci/smipcie/smipcie-main.c
+++ b/drivers/media/pci/smipcie/smipcie-main.c
@@ -716,7 +716,8 @@ static int smi_fe_init(struct smi_port *port)
/* init MAC.*/
ret = smi_read_eeprom(&dev->i2c_bus[0], 0xc0, mac_ee, 16);
dev_info(&port->dev->pci_dev->dev,
- "DVBSky SMI PCIe MAC= %pM\n", mac_ee + (port->idx)*8);
+ "%s port %d MAC: %pM\n", dev->info->name,
+ port->idx, mac_ee + (port->idx)*8);
memcpy(adap->proposed_mac, mac_ee + (port->idx)*8, 6);
return ret;
}
@@ -1066,6 +1067,7 @@ static struct smi_cfg_info dvbsky_s950_cfg = {
.ts_1 = SMI_TS_DMA_BOTH,
.fe_0 = DVBSKY_FE_NULL,
.fe_1 = DVBSKY_FE_M88DS3103,
+ .rc_map = RC_MAP_DVBSKY,
};
static struct smi_cfg_info dvbsky_s952_cfg = {
@@ -1075,6 +1077,7 @@ static struct smi_cfg_info dvbsky_s952_cfg = {
.ts_1 = SMI_TS_DMA_BOTH,
.fe_0 = DVBSKY_FE_M88RS6000,
.fe_1 = DVBSKY_FE_M88RS6000,
+ .rc_map = RC_MAP_DVBSKY,
};
static struct smi_cfg_info dvbsky_t9580_cfg = {
@@ -1084,6 +1087,17 @@ static struct smi_cfg_info dvbsky_t9580_cfg = {
.ts_1 = SMI_TS_DMA_BOTH,
.fe_0 = DVBSKY_FE_SIT2,
.fe_1 = DVBSKY_FE_M88DS3103,
+ .rc_map = RC_MAP_DVBSKY,
+};
+
+static struct smi_cfg_info technotrend_s2_4200_cfg = {
+ .type = SMI_TECHNOTREND_S2_4200,
+ .name = "TechnoTrend TT-budget S2-4200 Twin",
+ .ts_0 = SMI_TS_DMA_BOTH,
+ .ts_1 = SMI_TS_DMA_BOTH,
+ .fe_0 = DVBSKY_FE_M88RS6000,
+ .fe_1 = DVBSKY_FE_M88RS6000,
+ .rc_map = RC_MAP_TT_1500,
};
/* PCI IDs */
@@ -1096,6 +1110,7 @@ static const struct pci_device_id smi_id_table[] = {
SMI_ID(0x4254, 0x0550, dvbsky_s950_cfg),
SMI_ID(0x4254, 0x0552, dvbsky_s952_cfg),
SMI_ID(0x4254, 0x5580, dvbsky_t9580_cfg),
+ SMI_ID(0x13c2, 0x3016, technotrend_s2_4200_cfg),
{0}
};
MODULE_DEVICE_TABLE(pci, smi_id_table);
diff --git a/drivers/media/pci/smipcie/smipcie.h b/drivers/media/pci/smipcie/smipcie.h
index 68cdda28fd98..611e4f02cadd 100644
--- a/drivers/media/pci/smipcie/smipcie.h
+++ b/drivers/media/pci/smipcie/smipcie.h
@@ -216,6 +216,7 @@ struct smi_cfg_info {
#define SMI_DVBSKY_S950 1
#define SMI_DVBSKY_T9580 2
#define SMI_DVBSKY_T982 3
+#define SMI_TECHNOTREND_S2_4200 4
int type;
char *name;
#define SMI_TS_NULL 0
@@ -232,6 +233,7 @@ struct smi_cfg_info {
#define DVBSKY_FE_SIT2 3
int fe_0;
int fe_1;
+ char *rc_map;
};
struct smi_rc {
diff --git a/drivers/media/pci/sta2x11/sta2x11_vip.c b/drivers/media/pci/sta2x11/sta2x11_vip.c
index 753411cbbc9a..1fc195f89686 100644
--- a/drivers/media/pci/sta2x11/sta2x11_vip.c
+++ b/drivers/media/pci/sta2x11/sta2x11_vip.c
@@ -444,27 +444,19 @@ static int vidioc_querycap(struct file *file, void *priv,
static int vidioc_s_std(struct file *file, void *priv, v4l2_std_id std)
{
struct sta2x11_vip *vip = video_drvdata(file);
- v4l2_std_id oldstd = vip->std, newstd;
- int status;
-
- if (V4L2_STD_ALL == std) {
- v4l2_subdev_call(vip->decoder, video, s_std, std);
- ssleep(2);
- v4l2_subdev_call(vip->decoder, video, querystd, &newstd);
- v4l2_subdev_call(vip->decoder, video, g_input_status, &status);
- if (status & V4L2_IN_ST_NO_SIGNAL)
+
+ /*
+ * This is here for backwards compatibility only.
+ * The use of V4L2_STD_ALL to trigger a querystd is non-standard.
+ */
+ if (std == V4L2_STD_ALL) {
+ v4l2_subdev_call(vip->decoder, video, querystd, &std);
+ if (std == V4L2_STD_UNKNOWN)
return -EIO;
- std = vip->std = newstd;
- if (oldstd != std) {
- if (V4L2_STD_525_60 & std)
- vip->format = formats_60[0];
- else
- vip->format = formats_50[0];
- }
- return 0;
}
- if (oldstd != std) {
+ if (vip->std != std) {
+ vip->std = std;
if (V4L2_STD_525_60 & std)
vip->format = formats_60[0];
else
diff --git a/drivers/media/pci/tw686x/Kconfig b/drivers/media/pci/tw686x/Kconfig
new file mode 100644
index 000000000000..fb8536974052
--- /dev/null
+++ b/drivers/media/pci/tw686x/Kconfig
@@ -0,0 +1,18 @@
+config VIDEO_TW686X
+ tristate "Intersil/Techwell TW686x video capture cards"
+ depends on PCI && VIDEO_DEV && VIDEO_V4L2 && SND
+ depends on HAS_DMA
+ select VIDEOBUF2_VMALLOC
+ select SND_PCM
+ help
+ Support for Intersil/Techwell TW686x-based frame grabber cards.
+
+ Currently supported chips:
+ - TW6864 (4 video channels),
+ - TW6865 (4 video channels, not tested, second generation chip),
+ - TW6868 (8 video channels, but only the first 4 channels, which use the
+ built-in video decoder, are supported; not tested),
+ - TW6869 (8 video channels, second generation chip).
+
+ To compile this driver as a module, choose M here: the module
+ will be named tw686x.
diff --git a/drivers/media/pci/tw686x/Makefile b/drivers/media/pci/tw686x/Makefile
new file mode 100644
index 000000000000..99819542b733
--- /dev/null
+++ b/drivers/media/pci/tw686x/Makefile
@@ -0,0 +1,3 @@
+tw686x-objs := tw686x-core.o tw686x-video.o tw686x-audio.o
+
+obj-$(CONFIG_VIDEO_TW686X) += tw686x.o
diff --git a/drivers/media/pci/tw686x/tw686x-audio.c b/drivers/media/pci/tw686x/tw686x-audio.c
new file mode 100644
index 000000000000..91459ab715b2
--- /dev/null
+++ b/drivers/media/pci/tw686x/tw686x-audio.c
@@ -0,0 +1,386 @@
+/*
+ * Copyright (C) 2015 VanguardiaSur - www.vanguardiasur.com.ar
+ *
+ * Based on the audio support from the tw6869 driver:
+ * Copyright 2015 www.starterkit.ru <info@starterkit.ru>
+ *
+ * Based on:
+ * Driver for Intersil|Techwell TW6869 based DVR cards
+ * (c) 2011-12 liran <jli11@intersil.com> [Intersil|Techwell China]
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ */
+
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/kmod.h>
+#include <linux/mutex.h>
+#include <linux/pci.h>
+#include <linux/delay.h>
+
+#include <sound/core.h>
+#include <sound/initval.h>
+#include <sound/pcm.h>
+#include <sound/control.h>
+#include "tw686x.h"
+#include "tw686x-regs.h"
+
+#define AUDIO_CHANNEL_OFFSET 8
+
+void tw686x_audio_irq(struct tw686x_dev *dev, unsigned long requests,
+ unsigned int pb_status)
+{
+ unsigned long flags;
+ unsigned int ch, pb;
+
+ for_each_set_bit(ch, &requests, max_channels(dev)) {
+ struct tw686x_audio_channel *ac = &dev->audio_channels[ch];
+ struct tw686x_audio_buf *done = NULL;
+ struct tw686x_audio_buf *next = NULL;
+ struct tw686x_dma_desc *desc;
+
+ pb = !!(pb_status & BIT(AUDIO_CHANNEL_OFFSET + ch));
+
+ spin_lock_irqsave(&ac->lock, flags);
+
+ /* Sanity check */
+ if (!ac->ss || !ac->curr_bufs[0] || !ac->curr_bufs[1]) {
+ spin_unlock_irqrestore(&ac->lock, flags);
+ continue;
+ }
+
+ if (!list_empty(&ac->buf_list)) {
+ next = list_first_entry(&ac->buf_list,
+ struct tw686x_audio_buf, list);
+ list_move_tail(&next->list, &ac->buf_list);
+ done = ac->curr_bufs[!pb];
+ ac->curr_bufs[pb] = next;
+ }
+ spin_unlock_irqrestore(&ac->lock, flags);
+
+ desc = &ac->dma_descs[pb];
+ if (done && next && desc->virt) {
+ memcpy(done->virt, desc->virt, desc->size);
+ ac->ptr = done->dma - ac->buf[0].dma;
+ snd_pcm_period_elapsed(ac->ss);
+ }
+ }
+}
+
+static int tw686x_pcm_hw_params(struct snd_pcm_substream *ss,
+ struct snd_pcm_hw_params *hw_params)
+{
+ return snd_pcm_lib_malloc_pages(ss, params_buffer_bytes(hw_params));
+}
+
+static int tw686x_pcm_hw_free(struct snd_pcm_substream *ss)
+{
+ return snd_pcm_lib_free_pages(ss);
+}
+
+/*
+ * The audio device rate is global and shared among all
+ * capture channels. The driver makes no effort to prevent
+ * rate modifications. The user is free to change the rate, but it
+ * means changing the rate for all capture sub-devices.
+ */
+static const struct snd_pcm_hardware tw686x_capture_hw = {
+ .info = (SNDRV_PCM_INFO_MMAP |
+ SNDRV_PCM_INFO_INTERLEAVED |
+ SNDRV_PCM_INFO_BLOCK_TRANSFER |
+ SNDRV_PCM_INFO_MMAP_VALID),
+ .formats = SNDRV_PCM_FMTBIT_S16_LE,
+ .rates = SNDRV_PCM_RATE_8000_48000,
+ .rate_min = 8000,
+ .rate_max = 48000,
+ .channels_min = 1,
+ .channels_max = 1,
+ .buffer_bytes_max = TW686X_AUDIO_PAGE_MAX * TW686X_AUDIO_PAGE_SZ,
+ .period_bytes_min = TW686X_AUDIO_PAGE_SZ,
+ .period_bytes_max = TW686X_AUDIO_PAGE_SZ,
+ .periods_min = TW686X_AUDIO_PERIODS_MIN,
+ .periods_max = TW686X_AUDIO_PERIODS_MAX,
+};
+
+static int tw686x_pcm_open(struct snd_pcm_substream *ss)
+{
+ struct tw686x_dev *dev = snd_pcm_substream_chip(ss);
+ struct tw686x_audio_channel *ac = &dev->audio_channels[ss->number];
+ struct snd_pcm_runtime *rt = ss->runtime;
+ int err;
+
+ ac->ss = ss;
+ rt->hw = tw686x_capture_hw;
+
+ err = snd_pcm_hw_constraint_integer(rt, SNDRV_PCM_HW_PARAM_PERIODS);
+ if (err < 0)
+ return err;
+
+ return 0;
+}
+
+static int tw686x_pcm_close(struct snd_pcm_substream *ss)
+{
+ struct tw686x_dev *dev = snd_pcm_substream_chip(ss);
+ struct tw686x_audio_channel *ac = &dev->audio_channels[ss->number];
+
+ ac->ss = NULL;
+ return 0;
+}
+
+static int tw686x_pcm_prepare(struct snd_pcm_substream *ss)
+{
+ struct tw686x_dev *dev = snd_pcm_substream_chip(ss);
+ struct tw686x_audio_channel *ac = &dev->audio_channels[ss->number];
+ struct snd_pcm_runtime *rt = ss->runtime;
+ unsigned int period_size = snd_pcm_lib_period_bytes(ss);
+ struct tw686x_audio_buf *p_buf, *b_buf;
+ unsigned long flags;
+ int i;
+
+ spin_lock_irqsave(&dev->lock, flags);
+ tw686x_disable_channel(dev, AUDIO_CHANNEL_OFFSET + ac->ch);
+ spin_unlock_irqrestore(&dev->lock, flags);
+
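+ /*
+ * The AUDIO_CONTROL2 value computed below is 125000000 / rate in 16.16
+ * fixed point: the integer part in the upper half and the remainder
+ * scaled by 65536 in the lower half (0x0a2c2aaa for a 48000 Hz rate).
+ * Reading the 125000000 constant as a 125 MHz reference clock is an
+ * assumption based only on this computation.
+ */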
+ if (dev->audio_rate != rt->rate) {
+ u32 reg;
+
+ dev->audio_rate = rt->rate;
+ reg = ((125000000 / rt->rate) << 16) +
+ ((125000000 % rt->rate) << 16) / rt->rate;
+
+ reg_write(dev, AUDIO_CONTROL2, reg);
+ }
+
+ if (period_size != TW686X_AUDIO_PAGE_SZ ||
+ rt->periods < TW686X_AUDIO_PERIODS_MIN ||
+ rt->periods > TW686X_AUDIO_PERIODS_MAX) {
+ return -EINVAL;
+ }
+
+ spin_lock_irqsave(&ac->lock, flags);
+ INIT_LIST_HEAD(&ac->buf_list);
+
+ for (i = 0; i < rt->periods; i++) {
+ ac->buf[i].dma = rt->dma_addr + period_size * i;
+ ac->buf[i].virt = rt->dma_area + period_size * i;
+ INIT_LIST_HEAD(&ac->buf[i].list);
+ list_add_tail(&ac->buf[i].list, &ac->buf_list);
+ }
+
+ p_buf = list_first_entry(&ac->buf_list, struct tw686x_audio_buf, list);
+ list_move_tail(&p_buf->list, &ac->buf_list);
+
+ b_buf = list_first_entry(&ac->buf_list, struct tw686x_audio_buf, list);
+ list_move_tail(&b_buf->list, &ac->buf_list);
+
+ ac->curr_bufs[0] = p_buf;
+ ac->curr_bufs[1] = b_buf;
+ ac->ptr = 0;
+ spin_unlock_irqrestore(&ac->lock, flags);
+
+ return 0;
+}
+
+static int tw686x_pcm_trigger(struct snd_pcm_substream *ss, int cmd)
+{
+ struct tw686x_dev *dev = snd_pcm_substream_chip(ss);
+ struct tw686x_audio_channel *ac = &dev->audio_channels[ss->number];
+ unsigned long flags;
+ int err = 0;
+
+ switch (cmd) {
+ case SNDRV_PCM_TRIGGER_START:
+ if (ac->curr_bufs[0] && ac->curr_bufs[1]) {
+ spin_lock_irqsave(&dev->lock, flags);
+ tw686x_enable_channel(dev,
+ AUDIO_CHANNEL_OFFSET + ac->ch);
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+ mod_timer(&dev->dma_delay_timer,
+ jiffies + msecs_to_jiffies(100));
+ } else {
+ err = -EIO;
+ }
+ break;
+ case SNDRV_PCM_TRIGGER_STOP:
+ spin_lock_irqsave(&dev->lock, flags);
+ tw686x_disable_channel(dev, AUDIO_CHANNEL_OFFSET + ac->ch);
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+ spin_lock_irqsave(&ac->lock, flags);
+ ac->curr_bufs[0] = NULL;
+ ac->curr_bufs[1] = NULL;
+ spin_unlock_irqrestore(&ac->lock, flags);
+ break;
+ default:
+ err = -EINVAL;
+ }
+ return err;
+}
+
+static snd_pcm_uframes_t tw686x_pcm_pointer(struct snd_pcm_substream *ss)
+{
+ struct tw686x_dev *dev = snd_pcm_substream_chip(ss);
+ struct tw686x_audio_channel *ac = &dev->audio_channels[ss->number];
+
+ return bytes_to_frames(ss->runtime, ac->ptr);
+}
+
+static struct snd_pcm_ops tw686x_pcm_ops = {
+ .open = tw686x_pcm_open,
+ .close = tw686x_pcm_close,
+ .ioctl = snd_pcm_lib_ioctl,
+ .hw_params = tw686x_pcm_hw_params,
+ .hw_free = tw686x_pcm_hw_free,
+ .prepare = tw686x_pcm_prepare,
+ .trigger = tw686x_pcm_trigger,
+ .pointer = tw686x_pcm_pointer,
+};
+
+static int tw686x_snd_pcm_init(struct tw686x_dev *dev)
+{
+ struct snd_card *card = dev->snd_card;
+ struct snd_pcm *pcm;
+ struct snd_pcm_substream *ss;
+ unsigned int i;
+ int err;
+
+ err = snd_pcm_new(card, card->driver, 0, 0, max_channels(dev), &pcm);
+ if (err < 0)
+ return err;
+
+ snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &tw686x_pcm_ops);
+ snd_pcm_chip(pcm) = dev;
+ pcm->info_flags = 0;
+ strlcpy(pcm->name, "tw686x PCM", sizeof(pcm->name));
+
+ for (i = 0, ss = pcm->streams[SNDRV_PCM_STREAM_CAPTURE].substream;
+ ss; ss = ss->next, i++)
+ snprintf(ss->name, sizeof(ss->name), "vch%u audio", i);
+
+ return snd_pcm_lib_preallocate_pages_for_all(pcm,
+ SNDRV_DMA_TYPE_DEV,
+ snd_dma_pci_data(dev->pci_dev),
+ TW686X_AUDIO_PAGE_MAX * TW686X_AUDIO_PAGE_SZ,
+ TW686X_AUDIO_PAGE_MAX * TW686X_AUDIO_PAGE_SZ);
+}
+
+static void tw686x_audio_dma_free(struct tw686x_dev *dev,
+ struct tw686x_audio_channel *ac)
+{
+ int pb;
+
+ for (pb = 0; pb < 2; pb++) {
+ if (!ac->dma_descs[pb].virt)
+ continue;
+ pci_free_consistent(dev->pci_dev, ac->dma_descs[pb].size,
+ ac->dma_descs[pb].virt,
+ ac->dma_descs[pb].phys);
+ ac->dma_descs[pb].virt = NULL;
+ }
+}
+
+static int tw686x_audio_dma_alloc(struct tw686x_dev *dev,
+ struct tw686x_audio_channel *ac)
+{
+ int pb;
+
+ for (pb = 0; pb < 2; pb++) {
+ u32 reg = pb ? ADMA_B_ADDR[ac->ch] : ADMA_P_ADDR[ac->ch];
+ void *virt;
+
+ virt = pci_alloc_consistent(dev->pci_dev, TW686X_AUDIO_PAGE_SZ,
+ &ac->dma_descs[pb].phys);
+ if (!virt) {
+ dev_err(&dev->pci_dev->dev,
+ "dma%d: unable to allocate audio DMA %s-buffer\n",
+ ac->ch, pb ? "B" : "P");
+ return -ENOMEM;
+ }
+ ac->dma_descs[pb].virt = virt;
+ ac->dma_descs[pb].size = TW686X_AUDIO_PAGE_SZ;
+ reg_write(dev, reg, ac->dma_descs[pb].phys);
+ }
+ return 0;
+}
+
+void tw686x_audio_free(struct tw686x_dev *dev)
+{
+ unsigned long flags;
+ u32 dma_ch_mask;
+ u32 dma_cmd;
+
+ spin_lock_irqsave(&dev->lock, flags);
+ dma_cmd = reg_read(dev, DMA_CMD);
+ dma_ch_mask = reg_read(dev, DMA_CHANNEL_ENABLE);
+ reg_write(dev, DMA_CMD, dma_cmd & ~0xff00);
+ reg_write(dev, DMA_CHANNEL_ENABLE, dma_ch_mask & ~0xff00);
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+ if (!dev->snd_card)
+ return;
+ snd_card_free(dev->snd_card);
+ dev->snd_card = NULL;
+}
+
+int tw686x_audio_init(struct tw686x_dev *dev)
+{
+ struct pci_dev *pci_dev = dev->pci_dev;
+ struct snd_card *card;
+ int err, ch;
+
+ /*
+ * AUDIO_CONTROL1
+ * DMA byte length [31:19] = 4096 (i.e. ALSA period)
+ * External audio enable [0] = enabled
+ */
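+ /* i.e. (4096 << 19) | 0x1 == 0x80000001 */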
+ reg_write(dev, AUDIO_CONTROL1, 0x80000001);
+
+ err = snd_card_new(&pci_dev->dev, SNDRV_DEFAULT_IDX1,
+ SNDRV_DEFAULT_STR1,
+ THIS_MODULE, 0, &card);
+ if (err < 0)
+ return err;
+
+ dev->snd_card = card;
+ strlcpy(card->driver, "tw686x", sizeof(card->driver));
+ strlcpy(card->shortname, "tw686x", sizeof(card->shortname));
+ strlcpy(card->longname, pci_name(pci_dev), sizeof(card->longname));
+ snd_card_set_dev(card, &pci_dev->dev);
+
+ for (ch = 0; ch < max_channels(dev); ch++) {
+ struct tw686x_audio_channel *ac;
+
+ ac = &dev->audio_channels[ch];
+ spin_lock_init(&ac->lock);
+ ac->dev = dev;
+ ac->ch = ch;
+
+ err = tw686x_audio_dma_alloc(dev, ac);
+ if (err < 0)
+ goto err_cleanup;
+ }
+
+ err = tw686x_snd_pcm_init(dev);
+ if (err < 0)
+ goto err_cleanup;
+
+ err = snd_card_register(card);
+ if (!err)
+ return 0;
+
+err_cleanup:
+ for (ch = 0; ch < max_channels(dev); ch++) {
+ if (!dev->audio_channels[ch].dev)
+ continue;
+ tw686x_audio_dma_free(dev, &dev->audio_channels[ch]);
+ }
+ snd_card_free(card);
+ dev->snd_card = NULL;
+ return err;
+}
diff --git a/drivers/media/pci/tw686x/tw686x-core.c b/drivers/media/pci/tw686x/tw686x-core.c
new file mode 100644
index 000000000000..cf53b0e97be2
--- /dev/null
+++ b/drivers/media/pci/tw686x/tw686x-core.c
@@ -0,0 +1,415 @@
+/*
+ * Copyright (C) 2015 VanguardiaSur - www.vanguardiasur.com.ar
+ *
+ * Based on original driver by Krzysztof Hałasa:
+ * Copyright (C) 2015 Industrial Research Institute for Automation
+ * and Measurements PIAP
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ *
+ * Notes
+ * -----
+ *
+ * 1. Under stress-testing, it has been observed that the PCIe link
+ * goes down, without reason. Therefore, the driver takes special care
+ * to allow device hot-unplugging.
+ *
+ * 2. TW686X devices are capable of setting a few different DMA modes,
+ * including: scatter-gather, field and frame modes. However,
+ * under stress testings it has been found that the machine can
+ * freeze completely if DMA registers are programmed while streaming
+ * is active.
+ * This driver tries to access hardware registers as infrequently
+ * as possible by:
+ * i. allocating fixed DMA buffers and memcpy'ing into
+ * vmalloc'ed buffers
+ * ii. using a timer to mitigate the rate of DMA reset operations,
+ * on DMA channels error.
+ */
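+/*
+ * Within this driver, the timer of note 2.ii is dev->dma_delay_timer,
+ * serviced by tw686x_dma_delay(); the copy-out-of-fixed-DMA-buffers scheme
+ * of note 2.i can be seen in, e.g., tw686x_audio_irq() in tw686x-audio.c.
+ */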
+
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/delay.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/pci_ids.h>
+#include <linux/slab.h>
+#include <linux/timer.h>
+
+#include "tw686x.h"
+#include "tw686x-regs.h"
+
+/*
+ * This module parameter allows to control the DMA_TIMER_INTERVAL value.
+ * The DMA_TIMER_INTERVAL register controls the minimum DMA interrupt
+ * time span (iow, the maximum DMA interrupt rate) thus allowing for
+ * IRQ coalescing.
+ *
+ * The chip datasheet does not mention a time unit for this value, so
+ * users wanting fine-grain control over the interrupt rate should
+ * determine the desired value through testing.
+ */
+static u32 dma_interval = 0x00098968;
+module_param(dma_interval, int, 0444);
+MODULE_PARM_DESC(dma_interval, "Minimum time span for DMA interrupting host");
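+/*
+ * The value can be overridden at module load time, e.g.
+ * "modprobe tw686x dma_interval=<value>"; as noted above, suitable values
+ * have to be determined through testing.
+ */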
+
+void tw686x_disable_channel(struct tw686x_dev *dev, unsigned int channel)
+{
+ u32 dma_en = reg_read(dev, DMA_CHANNEL_ENABLE);
+ u32 dma_cmd = reg_read(dev, DMA_CMD);
+
+ dma_en &= ~BIT(channel);
+ dma_cmd &= ~BIT(channel);
+
+ /* Must remove it from pending too */
+ dev->pending_dma_en &= ~BIT(channel);
+ dev->pending_dma_cmd &= ~BIT(channel);
+
+ /* Stop DMA if no channels are enabled */
+ if (!dma_en)
+ dma_cmd = 0;
+ reg_write(dev, DMA_CHANNEL_ENABLE, dma_en);
+ reg_write(dev, DMA_CMD, dma_cmd);
+}
+
+void tw686x_enable_channel(struct tw686x_dev *dev, unsigned int channel)
+{
+ u32 dma_en = reg_read(dev, DMA_CHANNEL_ENABLE);
+ u32 dma_cmd = reg_read(dev, DMA_CMD);
+
+ dev->pending_dma_en |= dma_en | BIT(channel);
+ dev->pending_dma_cmd |= dma_cmd | DMA_CMD_ENABLE | BIT(channel);
+}
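+
+/*
+ * Note that tw686x_enable_channel() above only records the desired bits in
+ * dev->pending_dma_en / dev->pending_dma_cmd; the actual register writes are
+ * deferred to tw686x_dma_delay() below, which callers arm via
+ * dev->dma_delay_timer (see e.g. tw686x_pcm_trigger()).
+ */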
+
+/*
+ * The purpose of this awful hack is to avoid enabling the DMA
+ * channels "too fast" which makes some TW686x devices very
+ * angry and freeze the CPU (see note 1).
+ */
+static void tw686x_dma_delay(unsigned long data)
+{
+ struct tw686x_dev *dev = (struct tw686x_dev *)data;
+ unsigned long flags;
+
+ spin_lock_irqsave(&dev->lock, flags);
+
+ reg_write(dev, DMA_CHANNEL_ENABLE, dev->pending_dma_en);
+ reg_write(dev, DMA_CMD, dev->pending_dma_cmd);
+ dev->pending_dma_en = 0;
+ dev->pending_dma_cmd = 0;
+
+ spin_unlock_irqrestore(&dev->lock, flags);
+}
+
+static void tw686x_reset_channels(struct tw686x_dev *dev, unsigned int ch_mask)
+{
+ u32 dma_en, dma_cmd;
+
+ dma_en = reg_read(dev, DMA_CHANNEL_ENABLE);
+ dma_cmd = reg_read(dev, DMA_CMD);
+
+ /*
+ * Save pending register status, the timer will
+ * restore them.
+ */
+ dev->pending_dma_en |= dma_en;
+ dev->pending_dma_cmd |= dma_cmd;
+
+ /* Disable the reset channels */
+ reg_write(dev, DMA_CHANNEL_ENABLE, dma_en & ~ch_mask);
+
+ if ((dma_en & ~ch_mask) == 0) {
+ dev_dbg(&dev->pci_dev->dev, "reset: stopping DMA\n");
+ dma_cmd &= ~DMA_CMD_ENABLE;
+ }
+ reg_write(dev, DMA_CMD, dma_cmd & ~ch_mask);
+}
+
+static irqreturn_t tw686x_irq(int irq, void *dev_id)
+{
+ struct tw686x_dev *dev = (struct tw686x_dev *)dev_id;
+ unsigned int video_requests, audio_requests, reset_ch;
+ u32 fifo_status, fifo_signal, fifo_ov, fifo_bad, fifo_errors;
+ u32 int_status, dma_en, video_en, pb_status;
+ unsigned long flags;
+
+ int_status = reg_read(dev, INT_STATUS); /* cleared on read */
+ fifo_status = reg_read(dev, VIDEO_FIFO_STATUS);
+
+ /* INT_STATUS does not include FIFO_STATUS errors! */
+ if (!int_status && !TW686X_FIFO_ERROR(fifo_status))
+ return IRQ_NONE;
+
+ if (int_status & INT_STATUS_DMA_TOUT) {
+ dev_dbg(&dev->pci_dev->dev,
+ "DMA timeout. Resetting DMA for all channels\n");
+ reset_ch = ~0;
+ goto reset_channels;
+ }
+
+ spin_lock_irqsave(&dev->lock, flags);
+ dma_en = reg_read(dev, DMA_CHANNEL_ENABLE);
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+ video_en = dma_en & 0xff;
+ fifo_signal = ~(fifo_status & 0xff) & video_en;
+ fifo_ov = fifo_status >> 24;
+ fifo_bad = fifo_status >> 16;
+
+ /* Mask of channels with signal and FIFO errors */
+ fifo_errors = fifo_signal & (fifo_ov | fifo_bad);
+
+ reset_ch = 0;
+ pb_status = reg_read(dev, PB_STATUS);
+
+ /* Coalesce video frame/error events */
+ video_requests = (int_status & video_en) | fifo_errors;
+ audio_requests = (int_status & dma_en) >> 8;
+
+ if (video_requests)
+ tw686x_video_irq(dev, video_requests, pb_status,
+ fifo_status, &reset_ch);
+ if (audio_requests)
+ tw686x_audio_irq(dev, audio_requests, pb_status);
+
+reset_channels:
+ if (reset_ch) {
+ spin_lock_irqsave(&dev->lock, flags);
+ tw686x_reset_channels(dev, reset_ch);
+ spin_unlock_irqrestore(&dev->lock, flags);
+ mod_timer(&dev->dma_delay_timer,
+ jiffies + msecs_to_jiffies(100));
+ }
+
+ return IRQ_HANDLED;
+}
+
+static void tw686x_dev_release(struct v4l2_device *v4l2_dev)
+{
+ struct tw686x_dev *dev = container_of(v4l2_dev, struct tw686x_dev,
+ v4l2_dev);
+ unsigned int ch;
+
+ for (ch = 0; ch < max_channels(dev); ch++)
+ v4l2_ctrl_handler_free(&dev->video_channels[ch].ctrl_handler);
+
+ v4l2_device_unregister(&dev->v4l2_dev);
+
+ kfree(dev->audio_channels);
+ kfree(dev->video_channels);
+ kfree(dev);
+}
+
+static int tw686x_probe(struct pci_dev *pci_dev,
+ const struct pci_device_id *pci_id)
+{
+ struct tw686x_dev *dev;
+ int err;
+
+ dev = kzalloc(sizeof(*dev), GFP_KERNEL);
+ if (!dev)
+ return -ENOMEM;
+ dev->type = pci_id->driver_data;
+ sprintf(dev->name, "tw%04X", pci_dev->device);
+
+ dev->video_channels = kcalloc(max_channels(dev),
+ sizeof(*dev->video_channels), GFP_KERNEL);
+ if (!dev->video_channels) {
+ err = -ENOMEM;
+ goto free_dev;
+ }
+
+ dev->audio_channels = kcalloc(max_channels(dev),
+ sizeof(*dev->audio_channels), GFP_KERNEL);
+ if (!dev->audio_channels) {
+ err = -ENOMEM;
+ goto free_video;
+ }
+
+ pr_info("%s: PCI %s, IRQ %d, MMIO 0x%lx\n", dev->name,
+ pci_name(pci_dev), pci_dev->irq,
+ (unsigned long)pci_resource_start(pci_dev, 0));
+
+ dev->pci_dev = pci_dev;
+ if (pci_enable_device(pci_dev)) {
+ err = -EIO;
+ goto free_audio;
+ }
+
+ pci_set_master(pci_dev);
+ err = pci_set_dma_mask(pci_dev, DMA_BIT_MASK(32));
+ if (err) {
+ dev_err(&pci_dev->dev, "32-bit PCI DMA not supported\n");
+ err = -EIO;
+ goto disable_pci;
+ }
+
+ err = pci_request_regions(pci_dev, dev->name);
+ if (err) {
+ dev_err(&pci_dev->dev, "unable to request PCI region\n");
+ goto disable_pci;
+ }
+
+ dev->mmio = pci_ioremap_bar(pci_dev, 0);
+ if (!dev->mmio) {
+ dev_err(&pci_dev->dev, "unable to remap PCI region\n");
+ err = -ENOMEM;
+ goto free_region;
+ }
+
+ /* Reset all subsystems */
+ reg_write(dev, SYS_SOFT_RST, 0x0f);
+ mdelay(1);
+
+ reg_write(dev, SRST[0], 0x3f);
+ if (max_channels(dev) > 4)
+ reg_write(dev, SRST[1], 0x3f);
+
+ /* Disable the DMA engine */
+ reg_write(dev, DMA_CMD, 0);
+ reg_write(dev, DMA_CHANNEL_ENABLE, 0);
+
+ /* Enable DMA FIFO overflow and pointer check */
+ reg_write(dev, DMA_CONFIG, 0xffffff04);
+ reg_write(dev, DMA_CHANNEL_TIMEOUT, 0x140c8584);
+ reg_write(dev, DMA_TIMER_INTERVAL, dma_interval);
+
+ spin_lock_init(&dev->lock);
+
+ err = request_irq(pci_dev->irq, tw686x_irq, IRQF_SHARED,
+ dev->name, dev);
+ if (err < 0) {
+ dev_err(&pci_dev->dev, "unable to request interrupt\n");
+ goto iounmap;
+ }
+
+ setup_timer(&dev->dma_delay_timer,
+ tw686x_dma_delay, (unsigned long) dev);
+
+ /*
+ * This must be set right before initializing v4l2_dev.
+ * It's used to release resources after the last handle
+ * held is released.
+ */
+ dev->v4l2_dev.release = tw686x_dev_release;
+ err = tw686x_video_init(dev);
+ if (err) {
+ dev_err(&pci_dev->dev, "can't register video\n");
+ goto free_irq;
+ }
+
+ err = tw686x_audio_init(dev);
+ if (err)
+ dev_warn(&pci_dev->dev, "can't register audio\n");
+
+ pci_set_drvdata(pci_dev, dev);
+ return 0;
+
+free_irq:
+ free_irq(pci_dev->irq, dev);
+iounmap:
+ pci_iounmap(pci_dev, dev->mmio);
+free_region:
+ pci_release_regions(pci_dev);
+disable_pci:
+ pci_disable_device(pci_dev);
+free_audio:
+ kfree(dev->audio_channels);
+free_video:
+ kfree(dev->video_channels);
+free_dev:
+ kfree(dev);
+ return err;
+}
+
+static void tw686x_remove(struct pci_dev *pci_dev)
+{
+ struct tw686x_dev *dev = pci_get_drvdata(pci_dev);
+ unsigned long flags;
+
+ /* This guarantees the IRQ handler is no longer running,
+ * which means we can kiss good-bye some resources.
+ */
+ free_irq(pci_dev->irq, dev);
+
+ tw686x_video_free(dev);
+ tw686x_audio_free(dev);
+ del_timer_sync(&dev->dma_delay_timer);
+
+ pci_iounmap(pci_dev, dev->mmio);
+ pci_release_regions(pci_dev);
+ pci_disable_device(pci_dev);
+
+ /*
+ * Setting pci_dev to NULL lets the vb2_ops detect that the hardware is
+ * no longer available. This is required because
+ * the device sometimes hot-unplugs itself as the result of a PCIe
+ * link down.
+ * The lock is really important here.
+ */
+ spin_lock_irqsave(&dev->lock, flags);
+ dev->pci_dev = NULL;
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+ /*
+ * This calls tw686x_dev_release if it's the last reference.
+ * Otherwise, release is postponed until there are no users left.
+ */
+ v4l2_device_put(&dev->v4l2_dev);
+}
+
+/*
+ * On TW6864 and TW6868, all channels share the pair of video DMA SG tables,
+ * with 10-bit start_idx and end_idx determining start and end of frame buffer
+ * for a particular channel.
+ * TW6868 with all its 8 channels would be problematic (only 127 SG entries per
+ * channel) but we support only 4 channels on this chip anyway (the first
+ * 4 channels are driven with internal video decoder, the other 4 would require
+ * an external TW286x part).
+ *
+ * On TW6865 and TW6869, each channel has its own DMA SG table, with indexes
+ * starting with 0. Both chips have complete sets of internal video decoders
+ * (respectively 4 or 8-channel).
+ *
+ * All chips have separate SG tables for two video frames.
+ */
+
+/* driver_data is number of A/V channels */
+static const struct pci_device_id tw686x_pci_tbl[] = {
+ {
+ PCI_DEVICE(PCI_VENDOR_ID_TECHWELL, 0x6864),
+ .driver_data = 4
+ },
+ {
+ PCI_DEVICE(PCI_VENDOR_ID_TECHWELL, 0x6865), /* not tested */
+ .driver_data = 4 | TYPE_SECOND_GEN
+ },
+ /*
+ * TW6868 supports 8 A/V channels with an external TW2865 chip;
+ * not supported by the driver.
+ */
+ {
+ PCI_DEVICE(PCI_VENDOR_ID_TECHWELL, 0x6868), /* not tested */
+ .driver_data = 4
+ },
+ {
+ PCI_DEVICE(PCI_VENDOR_ID_TECHWELL, 0x6869),
+ .driver_data = 8 | TYPE_SECOND_GEN
+ },
+ {}
+};
+MODULE_DEVICE_TABLE(pci, tw686x_pci_tbl);
+
+static struct pci_driver tw686x_pci_driver = {
+ .name = "tw686x",
+ .id_table = tw686x_pci_tbl,
+ .probe = tw686x_probe,
+ .remove = tw686x_remove,
+};
+module_pci_driver(tw686x_pci_driver);
+
+MODULE_DESCRIPTION("Driver for video frame grabber cards based on Intersil/Techwell TW686[4589]");
+MODULE_AUTHOR("Ezequiel Garcia <ezequiel@vanguardiasur.com.ar>");
+MODULE_AUTHOR("Krzysztof Ha?asa <khalasa@piap.pl>");
+MODULE_LICENSE("GPL v2");
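Aside (illustration only, not part of the patch): the driver_data value in
tw686x_pci_tbl packs the A/V channel count into its low nibble and a chip
generation flag into bit 4, matching the TYPE_MAX_CHANNELS and TYPE_SECOND_GEN
masks defined later in tw686x.h. A minimal userspace sketch of the decoding:

#include <stdio.h>

#define TYPE_MAX_CHANNELS 0x0f
#define TYPE_SECOND_GEN   0x10

int main(void)
{
	unsigned int driver_data = 8 | TYPE_SECOND_GEN;	/* e.g. TW6869 */

	/* Low nibble: number of A/V channels handled by the driver */
	printf("channels: %u\n", driver_data & TYPE_MAX_CHANNELS);
	/* Bit 4: second-generation chip (TW6865/TW6869) */
	printf("second gen: %s\n",
	       (driver_data & TYPE_SECOND_GEN) ? "yes" : "no");
	return 0;
}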
diff --git a/drivers/media/pci/tw686x/tw686x-regs.h b/drivers/media/pci/tw686x/tw686x-regs.h
new file mode 100644
index 000000000000..fcef586a4c8c
--- /dev/null
+++ b/drivers/media/pci/tw686x/tw686x-regs.h
@@ -0,0 +1,122 @@
+/* DMA controller registers */
+#define REG8_1(a0) ((const u16[8]) { a0, a0 + 1, a0 + 2, a0 + 3, \
+ a0 + 4, a0 + 5, a0 + 6, a0 + 7})
+#define REG8_2(a0) ((const u16[8]) { a0, a0 + 2, a0 + 4, a0 + 6, \
+ a0 + 8, a0 + 0xa, a0 + 0xc, a0 + 0xe})
+#define REG8_8(a0) ((const u16[8]) { a0, a0 + 8, a0 + 0x10, a0 + 0x18, \
+ a0 + 0x20, a0 + 0x28, a0 + 0x30, \
+ a0 + 0x38})
+#define INT_STATUS 0x00
+#define PB_STATUS 0x01
+#define DMA_CMD 0x02
+#define VIDEO_FIFO_STATUS 0x03
+#define VIDEO_CHANNEL_ID 0x04
+#define VIDEO_PARSER_STATUS 0x05
+#define SYS_SOFT_RST 0x06
+#define DMA_PAGE_TABLE0_ADDR ((const u16[8]) { 0x08, 0xd0, 0xd2, 0xd4, \
+ 0xd6, 0xd8, 0xda, 0xdc })
+#define DMA_PAGE_TABLE1_ADDR ((const u16[8]) { 0x09, 0xd1, 0xd3, 0xd5, \
+ 0xd7, 0xd9, 0xdb, 0xdd })
+#define DMA_CHANNEL_ENABLE 0x0a
+#define DMA_CONFIG 0x0b
+#define DMA_TIMER_INTERVAL 0x0c
+#define DMA_CHANNEL_TIMEOUT 0x0d
+#define VDMA_CHANNEL_CONFIG REG8_1(0x10)
+#define ADMA_P_ADDR REG8_2(0x18)
+#define ADMA_B_ADDR REG8_2(0x19)
+#define DMA10_P_ADDR 0x28
+#define DMA10_B_ADDR 0x29
+#define VIDEO_CONTROL1 0x2a
+#define VIDEO_CONTROL2 0x2b
+#define AUDIO_CONTROL1 0x2c
+#define AUDIO_CONTROL2 0x2d
+#define PHASE_REF 0x2e
+#define GPIO_REG 0x2f
+#define INTL_HBAR_CTRL REG8_1(0x30)
+#define AUDIO_CONTROL3 0x38
+#define VIDEO_FIELD_CTRL REG8_1(0x39)
+#define HSCALER_CTRL REG8_1(0x42)
+#define VIDEO_SIZE REG8_1(0x4A)
+#define VIDEO_SIZE_F2 REG8_1(0x52)
+#define MD_CONF REG8_1(0x60)
+#define MD_INIT REG8_1(0x68)
+#define MD_MAP0 REG8_1(0x70)
+#define VDMA_P_ADDR REG8_8(0x80) /* not used in DMA SG mode */
+#define VDMA_WHP REG8_8(0x81)
+#define VDMA_B_ADDR REG8_8(0x82)
+#define VDMA_F2_P_ADDR REG8_8(0x84)
+#define VDMA_F2_WHP REG8_8(0x85)
+#define VDMA_F2_B_ADDR REG8_8(0x86)
+#define EP_REG_ADDR 0xfe
+#define EP_REG_DATA 0xff
+
+/* Video decoder registers */
+#define VDREG8(a0) ((const u16[8]) { \
+ a0 + 0x000, a0 + 0x010, a0 + 0x020, a0 + 0x030, \
+ a0 + 0x100, a0 + 0x110, a0 + 0x120, a0 + 0x130})
+#define VIDSTAT VDREG8(0x100)
+#define BRIGHT VDREG8(0x101)
+#define CONTRAST VDREG8(0x102)
+#define SHARPNESS VDREG8(0x103)
+#define SAT_U VDREG8(0x104)
+#define SAT_V VDREG8(0x105)
+#define HUE VDREG8(0x106)
+#define CROP_HI VDREG8(0x107)
+#define VDELAY_LO VDREG8(0x108)
+#define VACTIVE_LO VDREG8(0x109)
+#define HDELAY_LO VDREG8(0x10a)
+#define HACTIVE_LO VDREG8(0x10b)
+#define MVSN VDREG8(0x10c)
+#define STATUS2 VDREG8(0x10d)
+#define SDT VDREG8(0x10e)
+#define SDT_EN VDREG8(0x10f)
+
+#define VSCALE_LO VDREG8(0x144)
+#define SCALE_HI VDREG8(0x145)
+#define HSCALE_LO VDREG8(0x146)
+#define F2CROP_HI VDREG8(0x147)
+#define F2VDELAY_LO VDREG8(0x148)
+#define F2VACTIVE_LO VDREG8(0x149)
+#define F2HDELAY_LO VDREG8(0x14a)
+#define F2HACTIVE_LO VDREG8(0x14b)
+#define F2VSCALE_LO VDREG8(0x14c)
+#define F2SCALE_HI VDREG8(0x14d)
+#define F2HSCALE_LO VDREG8(0x14e)
+#define F2CNT VDREG8(0x14f)
+
+#define VDREG2(a0) ((const u16[2]) { a0, a0 + 0x100 })
+#define SRST VDREG2(0x180)
+#define ACNTL VDREG2(0x181)
+#define ACNTL2 VDREG2(0x182)
+#define CNTRL1 VDREG2(0x183)
+#define CKHY VDREG2(0x184)
+#define SHCOR VDREG2(0x185)
+#define CORING VDREG2(0x186)
+#define CLMPG VDREG2(0x187)
+#define IAGC VDREG2(0x188)
+#define VCTRL1 VDREG2(0x18f)
+#define MISC1 VDREG2(0x194)
+#define LOOP VDREG2(0x195)
+#define MISC2 VDREG2(0x196)
+
+#define CLMD VDREG2(0x197)
+#define ANPWRDOWN VDREG2(0x1ce)
+#define AIGAIN ((const u16[8]) { 0x1d0, 0x1d1, 0x1d2, 0x1d3, \
+ 0x2d0, 0x2d1, 0x2d2, 0x2d3 })
+
+#define SYS_MODE_DMA_SHIFT 13
+
+#define DMA_CMD_ENABLE BIT(31)
+#define INT_STATUS_DMA_TOUT BIT(17)
+#define TW686X_VIDSTAT_HLOCK BIT(6)
+#define TW686X_VIDSTAT_VDLOSS BIT(7)
+
+#define TW686X_STD_NTSC_M 0
+#define TW686X_STD_PAL 1
+#define TW686X_STD_SECAM 2
+#define TW686X_STD_NTSC_443 3
+#define TW686X_STD_PAL_M 4
+#define TW686X_STD_PAL_CN 5
+#define TW686X_STD_PAL_60 6
+
+#define TW686X_FIFO_ERROR(x) ((x) & ~0xff)
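Aside (illustration only, not part of the patch): the REG8_* and VDREG* macros
above expand to anonymous compound-literal arrays, so a per-channel register is
obtained by simply indexing the macro with the channel number, as in
VDMA_CHANNEL_CONFIG[ch]. A standalone sketch of the same idiom:

#include <stdio.h>

typedef unsigned short u16;

#define REG8_1(a0) ((const u16[8]) { a0, a0 + 1, a0 + 2, a0 + 3, \
				     a0 + 4, a0 + 5, a0 + 6, a0 + 7})
#define VDMA_CHANNEL_CONFIG REG8_1(0x10)

int main(void)
{
	unsigned int ch;

	/* Prints registers 0x10 through 0x17, one per channel */
	for (ch = 0; ch < 8; ch++)
		printf("channel %u config register: 0x%02x\n",
		       ch, VDMA_CHANNEL_CONFIG[ch]);
	return 0;
}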
diff --git a/drivers/media/pci/tw686x/tw686x-video.c b/drivers/media/pci/tw686x/tw686x-video.c
new file mode 100644
index 000000000000..253e10823ba3
--- /dev/null
+++ b/drivers/media/pci/tw686x/tw686x-video.c
@@ -0,0 +1,937 @@
+/*
+ * Copyright (C) 2015 VanguardiaSur - www.vanguardiasur.com.ar
+ *
+ * Based on original driver by Krzysztof Hałasa:
+ * Copyright (C) 2015 Industrial Research Institute for Automation
+ * and Measurements PIAP
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ *
+ */
+
+#include <linux/init.h>
+#include <linux/delay.h>
+#include <linux/list.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <media/v4l2-common.h>
+#include <media/v4l2-event.h>
+#include <media/videobuf2-vmalloc.h>
+#include "tw686x.h"
+#include "tw686x-regs.h"
+
+#define TW686X_INPUTS_PER_CH 4
+#define TW686X_VIDEO_WIDTH 720
+#define TW686X_VIDEO_HEIGHT(id) ((id & V4L2_STD_525_60) ? 480 : 576)
+
+static const struct tw686x_format formats[] = {
+ {
+ .fourcc = V4L2_PIX_FMT_UYVY,
+ .mode = 0,
+ .depth = 16,
+ }, {
+ .fourcc = V4L2_PIX_FMT_RGB565,
+ .mode = 5,
+ .depth = 16,
+ }, {
+ .fourcc = V4L2_PIX_FMT_YUYV,
+ .mode = 6,
+ .depth = 16,
+ }
+};
+
+static unsigned int tw686x_fields_map(v4l2_std_id std, unsigned int fps)
+{
+ static const unsigned int map[15] = {
+ 0x00000000, 0x00000001, 0x00004001, 0x00104001, 0x00404041,
+ 0x01041041, 0x01104411, 0x01111111, 0x04444445, 0x04511445,
+ 0x05145145, 0x05151515, 0x05515455, 0x05551555, 0x05555555
+ };
+
+ static const unsigned int std_625_50[26] = {
+ 0, 1, 1, 2, 3, 3, 4, 4, 5, 5, 6, 7, 7,
+ 8, 8, 9, 10, 10, 11, 11, 12, 13, 13, 14, 14, 0
+ };
+
+ static const unsigned int std_525_60[31] = {
+ 0, 1, 1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7,
+ 8, 8, 9, 9, 10, 10, 11, 11, 12, 12, 13, 13, 14, 0, 0
+ };
+
+ unsigned int i;
+
+ if (std & V4L2_STD_525_60) {
+ if (fps >= ARRAY_SIZE(std_525_60))
+ fps = 30;
+ i = std_525_60[fps];
+ } else {
+ if (fps >= ARRAY_SIZE(std_625_50))
+ fps = 25;
+ i = std_625_50[fps];
+ }
+
+ return map[i];
+}
+
+static void tw686x_set_framerate(struct tw686x_video_channel *vc,
+ unsigned int fps)
+{
+ unsigned int map;
+
+ if (vc->fps == fps)
+ return;
+
+ map = tw686x_fields_map(vc->video_standard, fps) << 1;
+ map |= map << 1;
+ if (map > 0)
+ map |= BIT(31);
+ reg_write(vc->dev, VIDEO_FIELD_CTRL[vc->ch], map);
+ vc->fps = fps;
+}
+
+static const struct tw686x_format *format_by_fourcc(unsigned int fourcc)
+{
+ unsigned int cnt;
+
+ for (cnt = 0; cnt < ARRAY_SIZE(formats); cnt++)
+ if (formats[cnt].fourcc == fourcc)
+ return &formats[cnt];
+ return NULL;
+}
+
+static int tw686x_queue_setup(struct vb2_queue *vq,
+ unsigned int *nbuffers, unsigned int *nplanes,
+ unsigned int sizes[], void *alloc_ctxs[])
+{
+ struct tw686x_video_channel *vc = vb2_get_drv_priv(vq);
+ unsigned int szimage =
+ (vc->width * vc->height * vc->format->depth) >> 3;
+
+ /*
+ * Let's request at least three buffers: two for the
+ * DMA engine and one for userspace.
+ */
+ if (vq->num_buffers + *nbuffers < 3)
+ *nbuffers = 3 - vq->num_buffers;
+
+ if (*nplanes) {
+ if (*nplanes != 1 || sizes[0] < szimage)
+ return -EINVAL;
+ return 0;
+ }
+
+ sizes[0] = szimage;
+ *nplanes = 1;
+ return 0;
+}
+
+static void tw686x_buf_queue(struct vb2_buffer *vb)
+{
+ struct tw686x_video_channel *vc = vb2_get_drv_priv(vb->vb2_queue);
+ struct tw686x_dev *dev = vc->dev;
+ struct pci_dev *pci_dev;
+ unsigned long flags;
+ struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
+ struct tw686x_v4l2_buf *buf =
+ container_of(vbuf, struct tw686x_v4l2_buf, vb);
+
+ /* Check device presence */
+ spin_lock_irqsave(&dev->lock, flags);
+ pci_dev = dev->pci_dev;
+ spin_unlock_irqrestore(&dev->lock, flags);
+ if (!pci_dev) {
+ vb2_buffer_done(&buf->vb.vb2_buf, VB2_BUF_STATE_ERROR);
+ return;
+ }
+
+ spin_lock_irqsave(&vc->qlock, flags);
+ list_add_tail(&buf->list, &vc->vidq_queued);
+ spin_unlock_irqrestore(&vc->qlock, flags);
+}
+
+/*
+ * We can call this even when alloc_dma failed for the given channel
+ */
+static void tw686x_free_dma(struct tw686x_video_channel *vc, unsigned int pb)
+{
+ struct tw686x_dma_desc *desc = &vc->dma_descs[pb];
+ struct tw686x_dev *dev = vc->dev;
+ struct pci_dev *pci_dev;
+ unsigned long flags;
+
+ /* Check device presence. Shouldn't really happen! */
+ spin_lock_irqsave(&dev->lock, flags);
+ pci_dev = dev->pci_dev;
+ spin_unlock_irqrestore(&dev->lock, flags);
+ if (!pci_dev) {
+ WARN(1, "trying to deallocate on missing device\n");
+ return;
+ }
+
+ if (desc->virt) {
+ pci_free_consistent(dev->pci_dev, desc->size,
+ desc->virt, desc->phys);
+ desc->virt = NULL;
+ }
+}
+
+static int tw686x_alloc_dma(struct tw686x_video_channel *vc, unsigned int pb)
+{
+ struct tw686x_dev *dev = vc->dev;
+ u32 reg = pb ? VDMA_B_ADDR[vc->ch] : VDMA_P_ADDR[vc->ch];
+ unsigned int len;
+ void *virt;
+
+ WARN(vc->dma_descs[pb].virt,
+ "Allocating buffer but previous still here\n");
+
+ len = (vc->width * vc->height * vc->format->depth) >> 3;
+ virt = pci_alloc_consistent(dev->pci_dev, len,
+ &vc->dma_descs[pb].phys);
+ if (!virt) {
+ v4l2_err(&dev->v4l2_dev,
+ "dma%d: unable to allocate %s-buffer\n",
+ vc->ch, pb ? "B" : "P");
+ return -ENOMEM;
+ }
+ vc->dma_descs[pb].size = len;
+ vc->dma_descs[pb].virt = virt;
+ reg_write(dev, reg, vc->dma_descs[pb].phys);
+
+ return 0;
+}
+
+static void tw686x_buffer_refill(struct tw686x_video_channel *vc,
+ unsigned int pb)
+{
+ struct tw686x_v4l2_buf *buf;
+
+ while (!list_empty(&vc->vidq_queued)) {
+
+ buf = list_first_entry(&vc->vidq_queued,
+ struct tw686x_v4l2_buf, list);
+ list_del(&buf->list);
+
+ vc->curr_bufs[pb] = buf;
+ return;
+ }
+ vc->curr_bufs[pb] = NULL;
+}
+
+static void tw686x_clear_queue(struct tw686x_video_channel *vc,
+ enum vb2_buffer_state state)
+{
+ unsigned int pb;
+
+ while (!list_empty(&vc->vidq_queued)) {
+ struct tw686x_v4l2_buf *buf;
+
+ buf = list_first_entry(&vc->vidq_queued,
+ struct tw686x_v4l2_buf, list);
+ list_del(&buf->list);
+ vb2_buffer_done(&buf->vb.vb2_buf, state);
+ }
+
+ for (pb = 0; pb < 2; pb++) {
+ if (vc->curr_bufs[pb])
+ vb2_buffer_done(&vc->curr_bufs[pb]->vb.vb2_buf, state);
+ vc->curr_bufs[pb] = NULL;
+ }
+}
+
+static int tw686x_start_streaming(struct vb2_queue *vq, unsigned int count)
+{
+ struct tw686x_video_channel *vc = vb2_get_drv_priv(vq);
+ struct tw686x_dev *dev = vc->dev;
+ struct pci_dev *pci_dev;
+ unsigned long flags;
+ int pb, err;
+
+ /* Check device presence */
+ spin_lock_irqsave(&dev->lock, flags);
+ pci_dev = dev->pci_dev;
+ spin_unlock_irqrestore(&dev->lock, flags);
+ if (!pci_dev) {
+ err = -ENODEV;
+ goto err_clear_queue;
+ }
+
+ spin_lock_irqsave(&vc->qlock, flags);
+
+ /* Sanity check */
+ if (!vc->dma_descs[0].virt || !vc->dma_descs[1].virt) {
+ spin_unlock_irqrestore(&vc->qlock, flags);
+ v4l2_err(&dev->v4l2_dev,
+ "video%d: refusing to start without DMA buffers\n",
+ vc->num);
+ err = -ENOMEM;
+ goto err_clear_queue;
+ }
+
+ for (pb = 0; pb < 2; pb++)
+ tw686x_buffer_refill(vc, pb);
+ spin_unlock_irqrestore(&vc->qlock, flags);
+
+ vc->sequence = 0;
+ vc->pb = 0;
+
+ spin_lock_irqsave(&dev->lock, flags);
+ tw686x_enable_channel(dev, vc->ch);
+ spin_unlock_irqrestore(&dev->lock, flags);
+
+ mod_timer(&dev->dma_delay_timer, jiffies + msecs_to_jiffies(100));
+
+ return 0;
+
+err_clear_queue:
+ spin_lock_irqsave(&vc->qlock, flags);
+ tw686x_clear_queue(vc, VB2_BUF_STATE_QUEUED);
+ spin_unlock_irqrestore(&vc->qlock, flags);
+ return err;
+}
+
+static void tw686x_stop_streaming(struct vb2_queue *vq)
+{
+ struct tw686x_video_channel *vc = vb2_get_drv_priv(vq);
+ struct tw686x_dev *dev = vc->dev;
+ struct pci_dev *pci_dev;
+ unsigned long flags;
+
+ /* Check device presence */
+ spin_lock_irqsave(&dev->lock, flags);
+ pci_dev = dev->pci_dev;
+ spin_unlock_irqrestore(&dev->lock, flags);
+ if (pci_dev)
+ tw686x_disable_channel(dev, vc->ch);
+
+ spin_lock_irqsave(&vc->qlock, flags);
+ tw686x_clear_queue(vc, VB2_BUF_STATE_ERROR);
+ spin_unlock_irqrestore(&vc->qlock, flags);
+}
+
+static int tw686x_buf_prepare(struct vb2_buffer *vb)
+{
+ struct tw686x_video_channel *vc = vb2_get_drv_priv(vb->vb2_queue);
+ unsigned int size =
+ (vc->width * vc->height * vc->format->depth) >> 3;
+
+ if (vb2_plane_size(vb, 0) < size)
+ return -EINVAL;
+ vb2_set_plane_payload(vb, 0, size);
+ return 0;
+}
+
+static struct vb2_ops tw686x_video_qops = {
+ .queue_setup = tw686x_queue_setup,
+ .buf_queue = tw686x_buf_queue,
+ .buf_prepare = tw686x_buf_prepare,
+ .start_streaming = tw686x_start_streaming,
+ .stop_streaming = tw686x_stop_streaming,
+ .wait_prepare = vb2_ops_wait_prepare,
+ .wait_finish = vb2_ops_wait_finish,
+};
+
+static int tw686x_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+ struct tw686x_video_channel *vc;
+ struct tw686x_dev *dev;
+ unsigned int ch;
+
+ vc = container_of(ctrl->handler, struct tw686x_video_channel,
+ ctrl_handler);
+ dev = vc->dev;
+ ch = vc->ch;
+
+ switch (ctrl->id) {
+ case V4L2_CID_BRIGHTNESS:
+ reg_write(dev, BRIGHT[ch], ctrl->val & 0xff);
+ return 0;
+
+ case V4L2_CID_CONTRAST:
+ reg_write(dev, CONTRAST[ch], ctrl->val);
+ return 0;
+
+ case V4L2_CID_SATURATION:
+ reg_write(dev, SAT_U[ch], ctrl->val);
+ reg_write(dev, SAT_V[ch], ctrl->val);
+ return 0;
+
+ case V4L2_CID_HUE:
+ reg_write(dev, HUE[ch], ctrl->val & 0xff);
+ return 0;
+ }
+
+ return -EINVAL;
+}
+
+static const struct v4l2_ctrl_ops ctrl_ops = {
+ .s_ctrl = tw686x_s_ctrl,
+};
+
+static int tw686x_g_fmt_vid_cap(struct file *file, void *priv,
+ struct v4l2_format *f)
+{
+ struct tw686x_video_channel *vc = video_drvdata(file);
+
+ f->fmt.pix.width = vc->width;
+ f->fmt.pix.height = vc->height;
+ f->fmt.pix.field = V4L2_FIELD_INTERLACED;
+ f->fmt.pix.pixelformat = vc->format->fourcc;
+ f->fmt.pix.colorspace = V4L2_COLORSPACE_SMPTE170M;
+ f->fmt.pix.bytesperline = (f->fmt.pix.width * vc->format->depth) / 8;
+ f->fmt.pix.sizeimage = f->fmt.pix.height * f->fmt.pix.bytesperline;
+ return 0;
+}
+
+static int tw686x_try_fmt_vid_cap(struct file *file, void *priv,
+ struct v4l2_format *f)
+{
+ struct tw686x_video_channel *vc = video_drvdata(file);
+ unsigned int video_height = TW686X_VIDEO_HEIGHT(vc->video_standard);
+ const struct tw686x_format *format;
+
+ format = format_by_fourcc(f->fmt.pix.pixelformat);
+ if (!format) {
+ format = &formats[0];
+ f->fmt.pix.pixelformat = format->fourcc;
+ }
+
+ if (f->fmt.pix.width <= TW686X_VIDEO_WIDTH / 2)
+ f->fmt.pix.width = TW686X_VIDEO_WIDTH / 2;
+ else
+ f->fmt.pix.width = TW686X_VIDEO_WIDTH;
+
+ if (f->fmt.pix.height <= video_height / 2)
+ f->fmt.pix.height = video_height / 2;
+ else
+ f->fmt.pix.height = video_height;
+
+ f->fmt.pix.bytesperline = (f->fmt.pix.width * format->depth) / 8;
+ f->fmt.pix.sizeimage = f->fmt.pix.height * f->fmt.pix.bytesperline;
+ f->fmt.pix.colorspace = V4L2_COLORSPACE_SMPTE170M;
+ f->fmt.pix.field = V4L2_FIELD_INTERLACED;
+
+ return 0;
+}
+
+static int tw686x_s_fmt_vid_cap(struct file *file, void *priv,
+ struct v4l2_format *f)
+{
+ struct tw686x_video_channel *vc = video_drvdata(file);
+ u32 val, width, line_width, height;
+ unsigned long bitsperframe;
+ int err, pb;
+
+ if (vb2_is_busy(&vc->vidq))
+ return -EBUSY;
+
+ bitsperframe = vc->width * vc->height * vc->format->depth;
+ err = tw686x_try_fmt_vid_cap(file, priv, f);
+ if (err)
+ return err;
+
+ vc->format = format_by_fourcc(f->fmt.pix.pixelformat);
+ vc->width = f->fmt.pix.width;
+ vc->height = f->fmt.pix.height;
+
+ /* We need new DMA buffers if the framesize has changed */
+ if (bitsperframe != vc->width * vc->height * vc->format->depth) {
+ for (pb = 0; pb < 2; pb++)
+ tw686x_free_dma(vc, pb);
+
+ for (pb = 0; pb < 2; pb++) {
+ err = tw686x_alloc_dma(vc, pb);
+ if (err) {
+ if (pb > 0)
+ tw686x_free_dma(vc, 0);
+ return err;
+ }
+ }
+ }
+
+ val = reg_read(vc->dev, VDMA_CHANNEL_CONFIG[vc->ch]);
+
+ if (vc->width <= TW686X_VIDEO_WIDTH / 2)
+ val |= BIT(23);
+ else
+ val &= ~BIT(23);
+
+ if (vc->height <= TW686X_VIDEO_HEIGHT(vc->video_standard) / 2)
+ val |= BIT(24);
+ else
+ val &= ~BIT(24);
+
+ val &= ~(0x7 << 20);
+ val |= vc->format->mode << 20;
+ reg_write(vc->dev, VDMA_CHANNEL_CONFIG[vc->ch], val);
+
+ /* Program the DMA frame size */
+ width = (vc->width * 2) & 0x7ff;
+ height = vc->height / 2;
+ line_width = (vc->width * 2) & 0x7ff;
+ val = (height << 22) | (line_width << 11) | width;
+ reg_write(vc->dev, VDMA_WHP[vc->ch], val);
+ return 0;
+}
+
+static int tw686x_querycap(struct file *file, void *priv,
+ struct v4l2_capability *cap)
+{
+ struct tw686x_video_channel *vc = video_drvdata(file);
+ struct tw686x_dev *dev = vc->dev;
+
+ strlcpy(cap->driver, "tw686x", sizeof(cap->driver));
+ strlcpy(cap->card, dev->name, sizeof(cap->card));
+ snprintf(cap->bus_info, sizeof(cap->bus_info),
+ "PCI:%s", pci_name(dev->pci_dev));
+ cap->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING |
+ V4L2_CAP_READWRITE;
+ cap->capabilities = cap->device_caps | V4L2_CAP_DEVICE_CAPS;
+ return 0;
+}
+
+static int tw686x_s_std(struct file *file, void *priv, v4l2_std_id id)
+{
+ struct tw686x_video_channel *vc = video_drvdata(file);
+ struct v4l2_format f;
+ u32 val;
+ int ret;
+
+ if (vc->video_standard == id)
+ return 0;
+
+ if (vb2_is_busy(&vc->vidq))
+ return -EBUSY;
+
+ if (id & V4L2_STD_NTSC)
+ val = 0;
+ else if (id & V4L2_STD_PAL)
+ val = 1;
+ else if (id & V4L2_STD_SECAM)
+ val = 2;
+ else if (id & V4L2_STD_NTSC_443)
+ val = 3;
+ else if (id & V4L2_STD_PAL_M)
+ val = 4;
+ else if (id & V4L2_STD_PAL_Nc)
+ val = 5;
+ else if (id & V4L2_STD_PAL_60)
+ val = 6;
+ else
+ return -EINVAL;
+
+ vc->video_standard = id;
+ reg_write(vc->dev, SDT[vc->ch], val);
+
+ val = reg_read(vc->dev, VIDEO_CONTROL1);
+ if (id & V4L2_STD_525_60)
+ val &= ~(1 << (SYS_MODE_DMA_SHIFT + vc->ch));
+ else
+ val |= (1 << (SYS_MODE_DMA_SHIFT + vc->ch));
+ reg_write(vc->dev, VIDEO_CONTROL1, val);
+
+ /*
+ * Adjust the format after a V4L2_STD_525_60/V4L2_STD_625_50 change;
+ * calling g_fmt and s_fmt sanitizes the height according to the
+ * new standard.
+ */
+ ret = tw686x_g_fmt_vid_cap(file, priv, &f);
+ if (!ret)
+ tw686x_s_fmt_vid_cap(file, priv, &f);
+ return 0;
+}
+
+static int tw686x_querystd(struct file *file, void *priv, v4l2_std_id *std)
+{
+ struct tw686x_video_channel *vc = video_drvdata(file);
+ struct tw686x_dev *dev = vc->dev;
+ unsigned int old_std, detected_std = 0;
+ unsigned long end;
+
+ if (vb2_is_streaming(&vc->vidq))
+ return -EBUSY;
+
+ /* Enable and start standard detection */
+ old_std = reg_read(dev, SDT[vc->ch]);
+ reg_write(dev, SDT[vc->ch], 0x7);
+ reg_write(dev, SDT_EN[vc->ch], 0xff);
+
+ end = jiffies + msecs_to_jiffies(500);
+ while (time_is_after_jiffies(end)) {
+
+ detected_std = reg_read(dev, SDT[vc->ch]);
+ if (!(detected_std & BIT(7)))
+ break;
+ msleep(100);
+ }
+ reg_write(dev, SDT[vc->ch], old_std);
+
+ /* Exit if still busy */
+ if (detected_std & BIT(7))
+ return 0;
+
+ detected_std = (detected_std >> 4) & 0x7;
+ switch (detected_std) {
+ case TW686X_STD_NTSC_M:
+ *std &= V4L2_STD_NTSC;
+ break;
+ case TW686X_STD_NTSC_443:
+ *std &= V4L2_STD_NTSC_443;
+ break;
+ case TW686X_STD_PAL_M:
+ *std &= V4L2_STD_PAL_M;
+ break;
+ case TW686X_STD_PAL_60:
+ *std &= V4L2_STD_PAL_60;
+ break;
+ case TW686X_STD_PAL:
+ *std &= V4L2_STD_PAL;
+ break;
+ case TW686X_STD_PAL_CN:
+ *std &= V4L2_STD_PAL_Nc;
+ break;
+ case TW686X_STD_SECAM:
+ *std &= V4L2_STD_SECAM;
+ break;
+ default:
+ *std = 0;
+ }
+ return 0;
+}
+
+static int tw686x_g_std(struct file *file, void *priv, v4l2_std_id *id)
+{
+ struct tw686x_video_channel *vc = video_drvdata(file);
+
+ *id = vc->video_standard;
+ return 0;
+}
+
+static int tw686x_enum_fmt_vid_cap(struct file *file, void *priv,
+ struct v4l2_fmtdesc *f)
+{
+ if (f->index >= ARRAY_SIZE(formats))
+ return -EINVAL;
+ f->pixelformat = formats[f->index].fourcc;
+ return 0;
+}
+
+static int tw686x_s_input(struct file *file, void *priv, unsigned int i)
+{
+ struct tw686x_video_channel *vc = video_drvdata(file);
+ u32 val;
+
+ if (i >= TW686X_INPUTS_PER_CH)
+ return -EINVAL;
+ if (i == vc->input)
+ return 0;
+ /*
+ * It is not clear that the hardware supports changing the input
+ * on the fly, so refuse while the queue is busy.
+ */
+ if (vb2_is_busy(&vc->vidq))
+ return -EBUSY;
+
+ vc->input = i;
+
+ val = reg_read(vc->dev, VDMA_CHANNEL_CONFIG[vc->ch]);
+ val &= ~(0x3 << 30);
+ val |= i << 30;
+ reg_write(vc->dev, VDMA_CHANNEL_CONFIG[vc->ch], val);
+ return 0;
+}
+
+static int tw686x_g_input(struct file *file, void *priv, unsigned int *i)
+{
+ struct tw686x_video_channel *vc = video_drvdata(file);
+
+ *i = vc->input;
+ return 0;
+}
+
+static int tw686x_enum_input(struct file *file, void *priv,
+ struct v4l2_input *i)
+{
+ struct tw686x_video_channel *vc = video_drvdata(file);
+ unsigned int vidstat;
+
+ if (i->index >= TW686X_INPUTS_PER_CH)
+ return -EINVAL;
+
+ snprintf(i->name, sizeof(i->name), "Composite%d", i->index);
+ i->type = V4L2_INPUT_TYPE_CAMERA;
+ i->std = vc->device->tvnorms;
+ i->capabilities = V4L2_IN_CAP_STD;
+
+ vidstat = reg_read(vc->dev, VIDSTAT[vc->ch]);
+ i->status = 0;
+ if (vidstat & TW686X_VIDSTAT_VDLOSS)
+ i->status |= V4L2_IN_ST_NO_SIGNAL;
+ if (!(vidstat & TW686X_VIDSTAT_HLOCK))
+ i->status |= V4L2_IN_ST_NO_H_LOCK;
+
+ return 0;
+}
+
+static const struct v4l2_file_operations tw686x_video_fops = {
+ .owner = THIS_MODULE,
+ .open = v4l2_fh_open,
+ .unlocked_ioctl = video_ioctl2,
+ .release = vb2_fop_release,
+ .poll = vb2_fop_poll,
+ .read = vb2_fop_read,
+ .mmap = vb2_fop_mmap,
+};
+
+static const struct v4l2_ioctl_ops tw686x_video_ioctl_ops = {
+ .vidioc_querycap = tw686x_querycap,
+ .vidioc_g_fmt_vid_cap = tw686x_g_fmt_vid_cap,
+ .vidioc_s_fmt_vid_cap = tw686x_s_fmt_vid_cap,
+ .vidioc_enum_fmt_vid_cap = tw686x_enum_fmt_vid_cap,
+ .vidioc_try_fmt_vid_cap = tw686x_try_fmt_vid_cap,
+
+ .vidioc_querystd = tw686x_querystd,
+ .vidioc_g_std = tw686x_g_std,
+ .vidioc_s_std = tw686x_s_std,
+
+ .vidioc_enum_input = tw686x_enum_input,
+ .vidioc_g_input = tw686x_g_input,
+ .vidioc_s_input = tw686x_s_input,
+
+ .vidioc_reqbufs = vb2_ioctl_reqbufs,
+ .vidioc_querybuf = vb2_ioctl_querybuf,
+ .vidioc_qbuf = vb2_ioctl_qbuf,
+ .vidioc_dqbuf = vb2_ioctl_dqbuf,
+ .vidioc_create_bufs = vb2_ioctl_create_bufs,
+ .vidioc_streamon = vb2_ioctl_streamon,
+ .vidioc_streamoff = vb2_ioctl_streamoff,
+ .vidioc_prepare_buf = vb2_ioctl_prepare_buf,
+
+ .vidioc_log_status = v4l2_ctrl_log_status,
+ .vidioc_subscribe_event = v4l2_ctrl_subscribe_event,
+ .vidioc_unsubscribe_event = v4l2_event_unsubscribe,
+};
+
+static void tw686x_buffer_copy(struct tw686x_video_channel *vc,
+ unsigned int pb, struct vb2_v4l2_buffer *vb)
+{
+ struct tw686x_dma_desc *desc = &vc->dma_descs[pb];
+ struct vb2_buffer *vb2_buf = &vb->vb2_buf;
+
+ vb->field = V4L2_FIELD_INTERLACED;
+ vb->sequence = vc->sequence++;
+
+ memcpy(vb2_plane_vaddr(vb2_buf, 0), desc->virt, desc->size);
+ vb2_buf->timestamp = ktime_get_ns();
+ vb2_buffer_done(vb2_buf, VB2_BUF_STATE_DONE);
+}
+
+void tw686x_video_irq(struct tw686x_dev *dev, unsigned long requests,
+ unsigned int pb_status, unsigned int fifo_status,
+ unsigned int *reset_ch)
+{
+ struct tw686x_video_channel *vc;
+ struct vb2_v4l2_buffer *vb;
+ unsigned long flags;
+ unsigned int ch, pb;
+
+ for_each_set_bit(ch, &requests, max_channels(dev)) {
+ vc = &dev->video_channels[ch];
+
+ /*
+ * This can either be a blue frame (with the signal-lost bit set)
+ * or a good frame (with the signal-lost bit clear). If the signal
+ * has just been recovered, this channel needs resetting.
+ */
+ if (vc->no_signal && !(fifo_status & BIT(ch))) {
+ v4l2_printk(KERN_DEBUG, &dev->v4l2_dev,
+ "video%d: signal recovered\n", vc->num);
+ vc->no_signal = false;
+ *reset_ch |= BIT(ch);
+ vc->pb = 0;
+ continue;
+ }
+ vc->no_signal = !!(fifo_status & BIT(ch));
+
+ /* Check FIFO errors only if there's signal */
+ if (!vc->no_signal) {
+ u32 fifo_ov, fifo_bad;
+
+ fifo_ov = (fifo_status >> 24) & BIT(ch);
+ fifo_bad = (fifo_status >> 16) & BIT(ch);
+ if (fifo_ov || fifo_bad) {
+ /* Mark this channel for reset */
+ v4l2_printk(KERN_DEBUG, &dev->v4l2_dev,
+ "video%d: FIFO error\n", vc->num);
+ *reset_ch |= BIT(ch);
+ vc->pb = 0;
+ continue;
+ }
+ }
+
+ pb = !!(pb_status & BIT(ch));
+ if (vc->pb != pb) {
+ /* Mark this channel for reset */
+ v4l2_printk(KERN_DEBUG, &dev->v4l2_dev,
+ "video%d: unexpected p-b buffer!\n",
+ vc->num);
+ *reset_ch |= BIT(ch);
+ vc->pb = 0;
+ continue;
+ }
+
+ /* handle video stream */
+ spin_lock_irqsave(&vc->qlock, flags);
+ if (vc->curr_bufs[pb]) {
+ vb = &vc->curr_bufs[pb]->vb;
+ tw686x_buffer_copy(vc, pb, vb);
+ }
+ vc->pb = !pb;
+ tw686x_buffer_refill(vc, pb);
+ spin_unlock_irqrestore(&vc->qlock, flags);
+ }
+}
+
+void tw686x_video_free(struct tw686x_dev *dev)
+{
+ unsigned int ch, pb;
+
+ for (ch = 0; ch < max_channels(dev); ch++) {
+ struct tw686x_video_channel *vc = &dev->video_channels[ch];
+
+ if (vc->device)
+ video_unregister_device(vc->device);
+
+ for (pb = 0; pb < 2; pb++)
+ tw686x_free_dma(vc, pb);
+ }
+}
+
+int tw686x_video_init(struct tw686x_dev *dev)
+{
+ unsigned int ch, val, pb;
+ int err;
+
+ err = v4l2_device_register(&dev->pci_dev->dev, &dev->v4l2_dev);
+ if (err)
+ return err;
+
+ for (ch = 0; ch < max_channels(dev); ch++) {
+ struct tw686x_video_channel *vc = &dev->video_channels[ch];
+ struct video_device *vdev;
+
+ mutex_init(&vc->vb_mutex);
+ spin_lock_init(&vc->qlock);
+ INIT_LIST_HEAD(&vc->vidq_queued);
+
+ vc->dev = dev;
+ vc->ch = ch;
+
+ /* default settings */
+ vc->format = &formats[0];
+ vc->video_standard = V4L2_STD_NTSC;
+ vc->width = TW686X_VIDEO_WIDTH;
+ vc->height = TW686X_VIDEO_HEIGHT(vc->video_standard);
+ vc->input = 0;
+
+ reg_write(vc->dev, SDT[ch], 0);
+ tw686x_set_framerate(vc, 30);
+
+ reg_write(dev, VDELAY_LO[ch], 0x14);
+ reg_write(dev, HACTIVE_LO[ch], 0xd0);
+ reg_write(dev, VIDEO_SIZE[ch], 0);
+
+ for (pb = 0; pb < 2; pb++) {
+ err = tw686x_alloc_dma(vc, pb);
+ if (err)
+ goto error;
+ }
+
+ vc->vidq.io_modes = VB2_READ | VB2_MMAP | VB2_DMABUF;
+ vc->vidq.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
+ vc->vidq.drv_priv = vc;
+ vc->vidq.buf_struct_size = sizeof(struct tw686x_v4l2_buf);
+ vc->vidq.ops = &tw686x_video_qops;
+ vc->vidq.mem_ops = &vb2_vmalloc_memops;
+ vc->vidq.timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC;
+ vc->vidq.min_buffers_needed = 2;
+ vc->vidq.lock = &vc->vb_mutex;
+ vc->vidq.gfp_flags = GFP_DMA32;
+
+ err = vb2_queue_init(&vc->vidq);
+ if (err) {
+ v4l2_err(&dev->v4l2_dev,
+ "dma%d: cannot init vb2 queue\n", ch);
+ goto error;
+ }
+
+ err = v4l2_ctrl_handler_init(&vc->ctrl_handler, 4);
+ if (err) {
+ v4l2_err(&dev->v4l2_dev,
+ "dma%d: cannot init ctrl handler\n", ch);
+ goto error;
+ }
+ v4l2_ctrl_new_std(&vc->ctrl_handler, &ctrl_ops,
+ V4L2_CID_BRIGHTNESS, -128, 127, 1, 0);
+ v4l2_ctrl_new_std(&vc->ctrl_handler, &ctrl_ops,
+ V4L2_CID_CONTRAST, 0, 255, 1, 100);
+ v4l2_ctrl_new_std(&vc->ctrl_handler, &ctrl_ops,
+ V4L2_CID_SATURATION, 0, 255, 1, 128);
+ v4l2_ctrl_new_std(&vc->ctrl_handler, &ctrl_ops,
+ V4L2_CID_HUE, -128, 127, 1, 0);
+ err = vc->ctrl_handler.error;
+ if (err)
+ goto error;
+
+ err = v4l2_ctrl_handler_setup(&vc->ctrl_handler);
+ if (err)
+ goto error;
+
+ vdev = video_device_alloc();
+ if (!vdev) {
+ v4l2_err(&dev->v4l2_dev,
+ "dma%d: unable to allocate device\n", ch);
+ err = -ENOMEM;
+ goto error;
+ }
+
+ snprintf(vdev->name, sizeof(vdev->name), "%s video", dev->name);
+ vdev->fops = &tw686x_video_fops;
+ vdev->ioctl_ops = &tw686x_video_ioctl_ops;
+ vdev->release = video_device_release;
+ vdev->v4l2_dev = &dev->v4l2_dev;
+ vdev->queue = &vc->vidq;
+ vdev->tvnorms = V4L2_STD_525_60 | V4L2_STD_625_50;
+ vdev->minor = -1;
+ vdev->lock = &vc->vb_mutex;
+ vdev->ctrl_handler = &vc->ctrl_handler;
+ vc->device = vdev;
+ video_set_drvdata(vdev, vc);
+
+ err = video_register_device(vdev, VFL_TYPE_GRABBER, -1);
+ if (err < 0)
+ goto error;
+ vc->num = vdev->num;
+ }
+
+ /* Set DMA frame mode on all channels. Only supported mode for now. */
+ val = TW686X_DEF_PHASE_REF;
+ for (ch = 0; ch < max_channels(dev); ch++)
+ val |= TW686X_FRAME_MODE << (16 + ch * 2);
+ reg_write(dev, PHASE_REF, val);
+
+ reg_write(dev, MISC2[0], 0xe7);
+ reg_write(dev, VCTRL1[0], 0xcc);
+ reg_write(dev, LOOP[0], 0xa5);
+ if (max_channels(dev) > 4) {
+ reg_write(dev, VCTRL1[1], 0xcc);
+ reg_write(dev, LOOP[1], 0xa5);
+ reg_write(dev, MISC2[1], 0xe7);
+ }
+ return 0;
+
+error:
+ tw686x_video_free(dev);
+ return err;
+}
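Aside (illustration only, not part of the patch): tw686x_s_fmt_vid_cap() packs
the DMA frame geometry into the VDMA_WHP register as an 11-bit byte width, an
11-bit line width and the per-field height in the upper bits. Worked out for
the 720x480 NTSC default with a 16-bpp format:

#include <stdio.h>

int main(void)
{
	unsigned int width_px = 720, height_px = 480;
	unsigned int width, line_width, height, val;

	width = (width_px * 2) & 0x7ff;		/* 1440 bytes per line at 16 bpp */
	line_width = (width_px * 2) & 0x7ff;
	height = height_px / 2;			/* 240 lines per field */
	val = (height << 22) | (line_width << 11) | width;

	printf("VDMA_WHP = 0x%08x\n", val);	/* 0x3c2d05a0 */
	return 0;
}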
diff --git a/drivers/media/pci/tw686x/tw686x.h b/drivers/media/pci/tw686x/tw686x.h
new file mode 100644
index 000000000000..44b5755acf02
--- /dev/null
+++ b/drivers/media/pci/tw686x/tw686x.h
@@ -0,0 +1,158 @@
+/*
+ * Copyright (C) 2015 VanguardiaSur - www.vanguardiasur.com.ar
+ *
+ * Copyright (C) 2015 Industrial Research Institute for Automation
+ * and Measurements PIAP
+ * Written by Krzysztof Hałasa
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of version 2 of the GNU General Public License
+ * as published by the Free Software Foundation.
+ */
+
+#include <linux/mutex.h>
+#include <linux/pci.h>
+#include <linux/timer.h>
+#include <linux/videodev2.h>
+#include <media/v4l2-common.h>
+#include <media/v4l2-ctrls.h>
+#include <media/v4l2-device.h>
+#include <media/v4l2-ioctl.h>
+#include <media/videobuf2-v4l2.h>
+#include <sound/pcm.h>
+
+#include "tw686x-regs.h"
+
+#define TYPE_MAX_CHANNELS 0x0f
+#define TYPE_SECOND_GEN 0x10
+#define TW686X_DEF_PHASE_REF 0x1518
+
+#define TW686X_FIELD_MODE 0x3
+#define TW686X_FRAME_MODE 0x2
+/* 0x1 is reserved */
+#define TW686X_SG_MODE 0x0
+
+#define TW686X_AUDIO_PAGE_SZ 4096
+#define TW686X_AUDIO_PAGE_MAX 16
+#define TW686X_AUDIO_PERIODS_MIN 2
+#define TW686X_AUDIO_PERIODS_MAX TW686X_AUDIO_PAGE_MAX
+
+struct tw686x_format {
+ char *name;
+ unsigned int fourcc;
+ unsigned int depth;
+ unsigned int mode;
+};
+
+struct tw686x_dma_desc {
+ dma_addr_t phys;
+ void *virt;
+ unsigned int size;
+};
+
+struct tw686x_audio_buf {
+ dma_addr_t dma;
+ void *virt;
+ struct list_head list;
+};
+
+struct tw686x_v4l2_buf {
+ struct vb2_v4l2_buffer vb;
+ struct list_head list;
+};
+
+struct tw686x_audio_channel {
+ struct tw686x_dev *dev;
+ struct snd_pcm_substream *ss;
+ unsigned int ch;
+ struct tw686x_audio_buf *curr_bufs[2];
+ struct tw686x_dma_desc dma_descs[2];
+ dma_addr_t ptr;
+
+ struct tw686x_audio_buf buf[TW686X_AUDIO_PAGE_MAX];
+ struct list_head buf_list;
+ spinlock_t lock;
+};
+
+struct tw686x_video_channel {
+ struct tw686x_dev *dev;
+
+ struct vb2_queue vidq;
+ struct list_head vidq_queued;
+ struct video_device *device;
+ struct tw686x_v4l2_buf *curr_bufs[2];
+ struct tw686x_dma_desc dma_descs[2];
+
+ struct v4l2_ctrl_handler ctrl_handler;
+ const struct tw686x_format *format;
+ struct mutex vb_mutex;
+ spinlock_t qlock;
+ v4l2_std_id video_standard;
+ unsigned int width, height;
+ unsigned int h_halve, v_halve;
+ unsigned int ch;
+ unsigned int num;
+ unsigned int fps;
+ unsigned int input;
+ unsigned int sequence;
+ unsigned int pb;
+ bool no_signal;
+};
+
+/**
+ * struct tw686x_dev - global device status
+ * @lock: spinlock controlling access to the
+ * shared device registers (DMA enable/disable).
+ */
+struct tw686x_dev {
+ spinlock_t lock;
+
+ struct v4l2_device v4l2_dev;
+ struct snd_card *snd_card;
+
+ char name[32];
+ unsigned int type;
+ struct pci_dev *pci_dev;
+ __u32 __iomem *mmio;
+
+ void *alloc_ctx;
+
+ struct tw686x_video_channel *video_channels;
+ struct tw686x_audio_channel *audio_channels;
+
+ int audio_rate; /* per-device value */
+
+ struct timer_list dma_delay_timer;
+ u32 pending_dma_en; /* must be protected by lock */
+ u32 pending_dma_cmd; /* must be protected by lock */
+};
+
+static inline uint32_t reg_read(struct tw686x_dev *dev, unsigned int reg)
+{
+ return readl(dev->mmio + reg);
+}
+
+static inline void reg_write(struct tw686x_dev *dev, unsigned int reg,
+ uint32_t value)
+{
+ writel(value, dev->mmio + reg);
+}
+
+static inline unsigned int max_channels(struct tw686x_dev *dev)
+{
+ return dev->type & TYPE_MAX_CHANNELS; /* 4 or 8 channels */
+}
+
+void tw686x_enable_channel(struct tw686x_dev *dev, unsigned int channel);
+void tw686x_disable_channel(struct tw686x_dev *dev, unsigned int channel);
+
+int tw686x_video_init(struct tw686x_dev *dev);
+void tw686x_video_free(struct tw686x_dev *dev);
+void tw686x_video_irq(struct tw686x_dev *dev, unsigned long requests,
+ unsigned int pb_status, unsigned int fifo_status,
+ unsigned int *reset_ch);
+
+int tw686x_audio_init(struct tw686x_dev *dev);
+void tw686x_audio_free(struct tw686x_dev *dev);
+void tw686x_audio_irq(struct tw686x_dev *dev, unsigned long requests,
+ unsigned int pb_status);
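Aside (illustration only, not part of the patch): dev->mmio is declared as a
__u32 pointer, so the register numbers in tw686x-regs.h are 32-bit word
indices; reg_read()/reg_write() therefore address the MMIO region at a byte
offset of reg * 4:

#include <stdio.h>

int main(void)
{
	unsigned int reg = 0x0b;	/* DMA_CONFIG */
	unsigned long byte_offset = (unsigned long)reg * 4;

	/* DMA_CONFIG (word index 0x0b) sits at byte offset 0x2c */
	printf("byte offset: 0x%lx\n", byte_offset);
	return 0;
}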
diff --git a/drivers/media/pci/zoran/videocodec.c b/drivers/media/pci/zoran/videocodec.c
index c01071635290..13a3c07cd259 100644
--- a/drivers/media/pci/zoran/videocodec.c
+++ b/drivers/media/pci/zoran/videocodec.c
@@ -116,8 +116,9 @@ videocodec_attach (struct videocodec_master *master)
goto out_module_put;
}
- snprintf(codec->name, sizeof(codec->name),
- "%s[%d]", codec->name, h->attached);
+ res = strlen(codec->name);
+ snprintf(codec->name + res, sizeof(codec->name) - res,
+ "[%d]", h->attached);
codec->master_data = master;
res = codec->setup(codec);
if (res == 0) {
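Aside (illustration only, not part of the patch): the videocodec fix above
avoids passing codec->name as both the destination and a source argument of
snprintf(), which is undefined behaviour; instead it appends at the current
end of the string. The same pattern in a standalone form:

#include <stdio.h>
#include <string.h>

int main(void)
{
	char name[32] = "codec";
	size_t len = strlen(name);

	/* Append "[3]" without reading and writing name in one snprintf() */
	snprintf(name + len, sizeof(name) - len, "[%d]", 3);
	printf("%s\n", name);	/* codec[3] */
	return 0;
}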
diff --git a/drivers/media/platform/Kconfig b/drivers/media/platform/Kconfig
index 201f5c296a95..84e041c0a70e 100644
--- a/drivers/media/platform/Kconfig
+++ b/drivers/media/platform/Kconfig
@@ -238,7 +238,7 @@ config VIDEO_SH_VEU
config VIDEO_RENESAS_JPU
tristate "Renesas JPEG Processing Unit"
depends on VIDEO_DEV && VIDEO_V4L2 && HAS_DMA
- depends on ARCH_SHMOBILE || COMPILE_TEST
+ depends on ARCH_RENESAS || COMPILE_TEST
select VIDEOBUF2_DMA_CONTIG
select V4L2_MEM2MEM_DEV
---help---
@@ -250,7 +250,7 @@ config VIDEO_RENESAS_JPU
config VIDEO_RENESAS_VSP1
tristate "Renesas VSP1 Video Processing Engine"
depends on VIDEO_V4L2 && VIDEO_V4L2_SUBDEV_API && HAS_DMA
- depends on (ARCH_SHMOBILE && OF) || COMPILE_TEST
+ depends on (ARCH_RENESAS && OF) || COMPILE_TEST
select VIDEOBUF2_DMA_CONTIG
---help---
This is a V4L2 driver for the Renesas VSP1 video processing engine.
diff --git a/drivers/media/platform/am437x/am437x-vpfe.c b/drivers/media/platform/am437x/am437x-vpfe.c
index de32e3a3d4d1..e749eb7c3be9 100644
--- a/drivers/media/platform/am437x/am437x-vpfe.c
+++ b/drivers/media/platform/am437x/am437x-vpfe.c
@@ -1047,7 +1047,7 @@ static int vpfe_get_ccdc_image_format(struct vpfe_device *vpfe,
static int vpfe_config_ccdc_image_format(struct vpfe_device *vpfe)
{
enum ccdc_frmfmt frm_fmt = CCDC_FRMFMT_INTERLACED;
- int ret;
+ int ret = 0;
vpfe_dbg(2, vpfe, "vpfe_config_ccdc_image_format\n");
@@ -1706,7 +1706,7 @@ static int vpfe_get_app_input_index(struct vpfe_device *vpfe,
sdinfo = &cfg->sub_devs[i];
client = v4l2_get_subdevdata(sdinfo->sd);
if (client->addr == curr_client->addr &&
- client->adapter->nr == client->adapter->nr) {
+ client->adapter->nr == curr_client->adapter->nr) {
if (vpfe->current_input >= 1)
return -1;
*app_input_index = j + vpfe->current_input;
diff --git a/drivers/media/platform/exynos-gsc/gsc-core.c b/drivers/media/platform/exynos-gsc/gsc-core.c
index 9b9e423e4fc4..c04973669a47 100644
--- a/drivers/media/platform/exynos-gsc/gsc-core.c
+++ b/drivers/media/platform/exynos-gsc/gsc-core.c
@@ -967,15 +967,6 @@ static struct gsc_driverdata gsc_v_100_drvdata = {
.lclk_frequency = 266000000UL,
};
-static const struct platform_device_id gsc_driver_ids[] = {
- {
- .name = "exynos-gsc",
- .driver_data = (unsigned long)&gsc_v_100_drvdata,
- },
- {},
-};
-MODULE_DEVICE_TABLE(platform, gsc_driver_ids);
-
static const struct of_device_id exynos_gsc_match[] = {
{
.compatible = "samsung,exynos5-gsc",
@@ -988,17 +979,11 @@ MODULE_DEVICE_TABLE(of, exynos_gsc_match);
static void *gsc_get_drv_data(struct platform_device *pdev)
{
struct gsc_driverdata *driver_data = NULL;
+ const struct of_device_id *match;
- if (pdev->dev.of_node) {
- const struct of_device_id *match;
- match = of_match_node(exynos_gsc_match,
- pdev->dev.of_node);
- if (match)
- driver_data = (struct gsc_driverdata *)match->data;
- } else {
- driver_data = (struct gsc_driverdata *)
- platform_get_device_id(pdev)->driver_data;
- }
+ match = of_match_node(exynos_gsc_match, pdev->dev.of_node);
+ if (match)
+ driver_data = (struct gsc_driverdata *)match->data;
return driver_data;
}
@@ -1078,17 +1063,17 @@ static int gsc_probe(struct platform_device *pdev)
struct resource *res;
struct gsc_driverdata *drv_data = gsc_get_drv_data(pdev);
struct device *dev = &pdev->dev;
- int ret = 0;
+ int ret;
gsc = devm_kzalloc(dev, sizeof(struct gsc_dev), GFP_KERNEL);
if (!gsc)
return -ENOMEM;
- if (dev->of_node)
- gsc->id = of_alias_get_id(pdev->dev.of_node, "gsc");
- else
- gsc->id = pdev->id;
+ ret = of_alias_get_id(pdev->dev.of_node, "gsc");
+ if (ret < 0)
+ return ret;
+ gsc->id = ret;
if (gsc->id >= drv_data->num_entities) {
dev_err(dev, "Invalid platform device id: %d\n", gsc->id);
return -EINVAL;
@@ -1096,7 +1081,6 @@ static int gsc_probe(struct platform_device *pdev)
gsc->variant = drv_data->variant[gsc->id];
gsc->pdev = pdev;
- gsc->pdata = dev->platform_data;
init_waitqueue_head(&gsc->irq_queue);
spin_lock_init(&gsc->slock);
@@ -1253,7 +1237,6 @@ static const struct dev_pm_ops gsc_pm_ops = {
static struct platform_driver gsc_driver = {
.probe = gsc_probe,
.remove = gsc_remove,
- .id_table = gsc_driver_ids,
.driver = {
.name = GSC_MODULE_NAME,
.pm = &gsc_pm_ops,
diff --git a/drivers/media/platform/exynos-gsc/gsc-core.h b/drivers/media/platform/exynos-gsc/gsc-core.h
index e93a2336cfa2..ec4000c72172 100644
--- a/drivers/media/platform/exynos-gsc/gsc-core.h
+++ b/drivers/media/platform/exynos-gsc/gsc-core.h
@@ -340,7 +340,6 @@ struct gsc_dev {
void __iomem *regs;
wait_queue_head_t irq_queue;
struct gsc_m2m_device m2m;
- struct exynos_platform_gscaler *pdata;
unsigned long state;
struct vb2_alloc_ctx *alloc_ctx;
struct video_device vdev;
diff --git a/drivers/media/platform/exynos4-is/fimc-core.c b/drivers/media/platform/exynos4-is/fimc-core.c
index cef2a7f07cdb..b1c1cea82a27 100644
--- a/drivers/media/platform/exynos4-is/fimc-core.c
+++ b/drivers/media/platform/exynos4-is/fimc-core.c
@@ -1154,26 +1154,6 @@ static const struct fimc_pix_limit s5p_pix_limit[4] = {
},
};
-static const struct fimc_variant fimc0_variant_s5p = {
- .has_inp_rot = 1,
- .has_out_rot = 1,
- .has_cam_if = 1,
- .min_inp_pixsize = 16,
- .min_out_pixsize = 16,
- .hor_offs_align = 8,
- .min_vsize_align = 16,
- .pix_limit = &s5p_pix_limit[0],
-};
-
-static const struct fimc_variant fimc2_variant_s5p = {
- .has_cam_if = 1,
- .min_inp_pixsize = 16,
- .min_out_pixsize = 16,
- .hor_offs_align = 8,
- .min_vsize_align = 16,
- .pix_limit = &s5p_pix_limit[1],
-};
-
static const struct fimc_variant fimc0_variant_s5pv210 = {
.has_inp_rot = 1,
.has_out_rot = 1,
@@ -1206,18 +1186,6 @@ static const struct fimc_variant fimc2_variant_s5pv210 = {
.pix_limit = &s5p_pix_limit[2],
};
-/* S5PC100 */
-static const struct fimc_drvdata fimc_drvdata_s5p = {
- .variant = {
- [0] = &fimc0_variant_s5p,
- [1] = &fimc0_variant_s5p,
- [2] = &fimc2_variant_s5p,
- },
- .num_entities = 3,
- .lclk_frequency = 133000000UL,
- .out_buf_count = 4,
-};
-
/* S5PV210, S5PC110 */
static const struct fimc_drvdata fimc_drvdata_s5pv210 = {
.variant = {
@@ -1251,23 +1219,6 @@ static const struct fimc_drvdata fimc_drvdata_exynos4x12 = {
.out_buf_count = 32,
};
-static const struct platform_device_id fimc_driver_ids[] = {
- {
- .name = "s5p-fimc",
- .driver_data = (unsigned long)&fimc_drvdata_s5p,
- }, {
- .name = "s5pv210-fimc",
- .driver_data = (unsigned long)&fimc_drvdata_s5pv210,
- }, {
- .name = "exynos4-fimc",
- .driver_data = (unsigned long)&fimc_drvdata_exynos4210,
- }, {
- .name = "exynos4x12-fimc",
- .driver_data = (unsigned long)&fimc_drvdata_exynos4x12,
- },
- { },
-};
-
static const struct of_device_id fimc_of_match[] = {
{
.compatible = "samsung,s5pv210-fimc",
@@ -1290,7 +1241,6 @@ static const struct dev_pm_ops fimc_pm_ops = {
static struct platform_driver fimc_driver = {
.probe = fimc_probe,
.remove = fimc_remove,
- .id_table = fimc_driver_ids,
.driver = {
.of_match_table = fimc_of_match,
.name = FIMC_DRIVER_NAME,
diff --git a/drivers/media/platform/exynos4-is/media-dev.c b/drivers/media/platform/exynos4-is/media-dev.c
index 4f494acd8150..891625e77ef5 100644
--- a/drivers/media/platform/exynos4-is/media-dev.c
+++ b/drivers/media/platform/exynos4-is/media-dev.c
@@ -446,8 +446,10 @@ static int fimc_md_parse_port_node(struct fimc_md *fmd,
else
pd->fimc_bus_type = pd->sensor_bus_type;
- if (WARN_ON(index >= ARRAY_SIZE(fmd->sensor)))
+ if (WARN_ON(index >= ARRAY_SIZE(fmd->sensor))) {
+ of_node_put(rem);
return -EINVAL;
+ }
fmd->sensor[index].asd.match_type = V4L2_ASYNC_MATCH_OF;
fmd->sensor[index].asd.match.of.node = rem;
@@ -1130,7 +1132,7 @@ static int __fimc_md_modify_pipelines(struct media_entity *entity, bool enable,
media_entity_graph_walk_start(graph, entity);
while ((entity = media_entity_graph_walk_next(graph))) {
- if (!is_media_entity_v4l2_io(entity))
+ if (!is_media_entity_v4l2_video_device(entity))
continue;
ret = __fimc_md_modify_pipeline(entity, enable);
@@ -1145,7 +1147,7 @@ err:
media_entity_graph_walk_start(graph, entity_err);
while ((entity_err = media_entity_graph_walk_next(graph))) {
- if (!is_media_entity_v4l2_io(entity_err))
+ if (!is_media_entity_v4l2_video_device(entity_err))
continue;
__fimc_md_modify_pipeline(entity_err, !enable);
diff --git a/drivers/media/platform/exynos4-is/mipi-csis.c b/drivers/media/platform/exynos4-is/mipi-csis.c
index bd5c46c3d4b7..bf954424e7be 100644
--- a/drivers/media/platform/exynos4-is/mipi-csis.c
+++ b/drivers/media/platform/exynos4-is/mipi-csis.c
@@ -757,8 +757,10 @@ static int s5pcsis_parse_dt(struct platform_device *pdev,
goto err;
state->index = endpoint.base.port - FIMC_INPUT_MIPI_CSI2_0;
- if (state->index >= CSIS_MAX_ENTITIES)
- return -ENXIO;
+ if (state->index >= CSIS_MAX_ENTITIES) {
+ ret = -ENXIO;
+ goto err;
+ }
/* Get MIPI CSI-2 bus configuration from the endpoint node. */
of_property_read_u32(node, "samsung,csis-hs-settle",
diff --git a/drivers/media/platform/omap3isp/ispvideo.c b/drivers/media/platform/omap3isp/ispvideo.c
index ac76d2901501..1b1a95d546f6 100644
--- a/drivers/media/platform/omap3isp/ispvideo.c
+++ b/drivers/media/platform/omap3isp/ispvideo.c
@@ -251,7 +251,7 @@ static int isp_video_get_graph_data(struct isp_video *video,
if (entity == &video->video.entity)
continue;
- if (!is_media_entity_v4l2_io(entity))
+ if (!is_media_entity_v4l2_video_device(entity))
continue;
__video = to_isp_video(media_entity_to_video_device(entity));
diff --git a/drivers/media/platform/s5p-g2d/g2d.c b/drivers/media/platform/s5p-g2d/g2d.c
index 74bd46ca7942..612d1ea514f1 100644
--- a/drivers/media/platform/s5p-g2d/g2d.c
+++ b/drivers/media/platform/s5p-g2d/g2d.c
@@ -719,16 +719,12 @@ static int g2d_probe(struct platform_device *pdev)
def_frame.stride = (def_frame.width * def_frame.fmt->depth) >> 3;
- if (!pdev->dev.of_node) {
- dev->variant = g2d_get_drv_data(pdev);
- } else {
- of_id = of_match_node(exynos_g2d_match, pdev->dev.of_node);
- if (!of_id) {
- ret = -ENODEV;
- goto unreg_video_dev;
- }
- dev->variant = (struct g2d_variant *)of_id->data;
+ of_id = of_match_node(exynos_g2d_match, pdev->dev.of_node);
+ if (!of_id) {
+ ret = -ENODEV;
+ goto unreg_video_dev;
}
+ dev->variant = (struct g2d_variant *)of_id->data;
return 0;
@@ -788,22 +784,9 @@ static const struct of_device_id exynos_g2d_match[] = {
};
MODULE_DEVICE_TABLE(of, exynos_g2d_match);
-static const struct platform_device_id g2d_driver_ids[] = {
- {
- .name = "s5p-g2d",
- .driver_data = (unsigned long)&g2d_drvdata_v3x,
- }, {
- .name = "s5p-g2d-v4x",
- .driver_data = (unsigned long)&g2d_drvdata_v4x,
- },
- {},
-};
-MODULE_DEVICE_TABLE(platform, g2d_driver_ids);
-
static struct platform_driver g2d_pdrv = {
.probe = g2d_probe,
.remove = g2d_remove,
- .id_table = g2d_driver_ids,
.driver = {
.name = G2D_NAME,
.of_match_table = exynos_g2d_match,
diff --git a/drivers/media/platform/s5p-g2d/g2d.h b/drivers/media/platform/s5p-g2d/g2d.h
index b0e52ab7ecdb..e31df541aa62 100644
--- a/drivers/media/platform/s5p-g2d/g2d.h
+++ b/drivers/media/platform/s5p-g2d/g2d.h
@@ -89,8 +89,3 @@ void g2d_set_flip(struct g2d_dev *d, u32 r);
void g2d_set_v41_stretch(struct g2d_dev *d,
struct g2d_frame *src, struct g2d_frame *dst);
void g2d_set_cmd(struct g2d_dev *d, u32 c);
-
-static inline struct g2d_variant *g2d_get_drv_data(struct platform_device *pdev)
-{
- return (struct g2d_variant *)platform_get_device_id(pdev)->driver_data;
-}
diff --git a/drivers/media/platform/s5p-jpeg/jpeg-core.c b/drivers/media/platform/s5p-jpeg/jpeg-core.c
index c3b13a630edf..caa19b408551 100644
--- a/drivers/media/platform/s5p-jpeg/jpeg-core.c
+++ b/drivers/media/platform/s5p-jpeg/jpeg-core.c
@@ -1548,8 +1548,10 @@ static int exynos4_jpeg_get_output_buffer_size(struct s5p_jpeg_ctx *ctx,
struct v4l2_pix_format *pix = &f->fmt.pix;
u32 pix_fmt = f->fmt.pix.pixelformat;
int w = pix->width, h = pix->height, wh_align;
+ int padding = 0;
if (pix_fmt == V4L2_PIX_FMT_RGB32 ||
+ pix_fmt == V4L2_PIX_FMT_RGB565 ||
pix_fmt == V4L2_PIX_FMT_NV24 ||
pix_fmt == V4L2_PIX_FMT_NV42 ||
pix_fmt == V4L2_PIX_FMT_NV12 ||
@@ -1564,7 +1566,10 @@ static int exynos4_jpeg_get_output_buffer_size(struct s5p_jpeg_ctx *ctx,
&h, S5P_JPEG_MIN_HEIGHT,
S5P_JPEG_MAX_HEIGHT, wh_align);
- return w * h * fmt_depth >> 3;
+ if (ctx->jpeg->variant->version == SJPEG_EXYNOS4)
+ padding = PAGE_SIZE;
+
+ return (w * h * fmt_depth >> 3) + padding;
}
static int exynos3250_jpeg_try_downscale(struct s5p_jpeg_ctx *ctx,
diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc.c b/drivers/media/platform/s5p-mfc/s5p_mfc.c
index 927ab4928779..b16466fe35ee 100644
--- a/drivers/media/platform/s5p-mfc/s5p_mfc.c
+++ b/drivers/media/platform/s5p-mfc/s5p_mfc.c
@@ -1489,27 +1489,6 @@ static struct s5p_mfc_variant mfc_drvdata_v8 = {
.fw_name[0] = "s5p-mfc-v8.fw",
};
-static const struct platform_device_id mfc_driver_ids[] = {
- {
- .name = "s5p-mfc",
- .driver_data = (unsigned long)&mfc_drvdata_v5,
- }, {
- .name = "s5p-mfc-v5",
- .driver_data = (unsigned long)&mfc_drvdata_v5,
- }, {
- .name = "s5p-mfc-v6",
- .driver_data = (unsigned long)&mfc_drvdata_v6,
- }, {
- .name = "s5p-mfc-v7",
- .driver_data = (unsigned long)&mfc_drvdata_v7,
- }, {
- .name = "s5p-mfc-v8",
- .driver_data = (unsigned long)&mfc_drvdata_v8,
- },
- {},
-};
-MODULE_DEVICE_TABLE(platform, mfc_driver_ids);
-
static const struct of_device_id exynos_mfc_match[] = {
{
.compatible = "samsung,mfc-v5",
@@ -1531,24 +1510,18 @@ MODULE_DEVICE_TABLE(of, exynos_mfc_match);
static void *mfc_get_drv_data(struct platform_device *pdev)
{
struct s5p_mfc_variant *driver_data = NULL;
+ const struct of_device_id *match;
+
+ match = of_match_node(exynos_mfc_match, pdev->dev.of_node);
+ if (match)
+ driver_data = (struct s5p_mfc_variant *)match->data;
- if (pdev->dev.of_node) {
- const struct of_device_id *match;
- match = of_match_node(exynos_mfc_match,
- pdev->dev.of_node);
- if (match)
- driver_data = (struct s5p_mfc_variant *)match->data;
- } else {
- driver_data = (struct s5p_mfc_variant *)
- platform_get_device_id(pdev)->driver_data;
- }
return driver_data;
}
static struct platform_driver s5p_mfc_driver = {
.probe = s5p_mfc_probe,
.remove = s5p_mfc_remove,
- .id_table = mfc_driver_ids,
.driver = {
.name = S5P_MFC_NAME,
.pm = &s5p_mfc_pm_ops,
diff --git a/drivers/media/platform/s5p-tv/mixer.h b/drivers/media/platform/s5p-tv/mixer.h
index 42cd2709c41c..4dd62a918fcf 100644
--- a/drivers/media/platform/s5p-tv/mixer.h
+++ b/drivers/media/platform/s5p-tv/mixer.h
@@ -300,7 +300,7 @@ void mxr_release_video(struct mxr_device *mdev);
struct mxr_layer *mxr_graph_layer_create(struct mxr_device *mdev, int idx);
struct mxr_layer *mxr_vp_layer_create(struct mxr_device *mdev, int idx);
struct mxr_layer *mxr_base_layer_create(struct mxr_device *mdev,
- int idx, char *name, struct mxr_layer_ops *ops);
+ int idx, char *name, const struct mxr_layer_ops *ops);
void mxr_base_layer_release(struct mxr_layer *layer);
void mxr_layer_release(struct mxr_layer *layer);
diff --git a/drivers/media/platform/s5p-tv/mixer_drv.c b/drivers/media/platform/s5p-tv/mixer_drv.c
index 5ef67774971d..8a5d19469ddc 100644
--- a/drivers/media/platform/s5p-tv/mixer_drv.c
+++ b/drivers/media/platform/s5p-tv/mixer_drv.c
@@ -146,7 +146,7 @@ int mxr_power_get(struct mxr_device *mdev)
/* returning 1 means that power is already enabled,
 * so zero is returned here to indicate success */
- if (IS_ERR_VALUE(ret))
+ if (ret < 0)
return ret;
return 0;
}
diff --git a/drivers/media/platform/s5p-tv/mixer_grp_layer.c b/drivers/media/platform/s5p-tv/mixer_grp_layer.c
index db3163b23ea0..d4d2564f7de7 100644
--- a/drivers/media/platform/s5p-tv/mixer_grp_layer.c
+++ b/drivers/media/platform/s5p-tv/mixer_grp_layer.c
@@ -235,7 +235,7 @@ struct mxr_layer *mxr_graph_layer_create(struct mxr_device *mdev, int idx)
{
struct mxr_layer *layer;
int ret;
- struct mxr_layer_ops ops = {
+ const struct mxr_layer_ops ops = {
.release = mxr_graph_layer_release,
.buffer_set = mxr_graph_buffer_set,
.stream_set = mxr_graph_stream_set,
diff --git a/drivers/media/platform/s5p-tv/mixer_video.c b/drivers/media/platform/s5p-tv/mixer_video.c
index d9e7f030294c..7ab5578a0405 100644
--- a/drivers/media/platform/s5p-tv/mixer_video.c
+++ b/drivers/media/platform/s5p-tv/mixer_video.c
@@ -1070,7 +1070,7 @@ static void mxr_vfd_release(struct video_device *vdev)
}
struct mxr_layer *mxr_base_layer_create(struct mxr_device *mdev,
- int idx, char *name, struct mxr_layer_ops *ops)
+ int idx, char *name, const struct mxr_layer_ops *ops)
{
struct mxr_layer *layer;
diff --git a/drivers/media/platform/s5p-tv/mixer_vp_layer.c b/drivers/media/platform/s5p-tv/mixer_vp_layer.c
index dd002a497dbb..6fa6f673f53b 100644
--- a/drivers/media/platform/s5p-tv/mixer_vp_layer.c
+++ b/drivers/media/platform/s5p-tv/mixer_vp_layer.c
@@ -207,7 +207,7 @@ struct mxr_layer *mxr_vp_layer_create(struct mxr_device *mdev, int idx)
{
struct mxr_layer *layer;
int ret;
- struct mxr_layer_ops ops = {
+ const struct mxr_layer_ops ops = {
.release = mxr_vp_layer_release,
.buffer_set = mxr_vp_buffer_set,
.stream_set = mxr_vp_stream_set,
diff --git a/drivers/media/platform/soc_camera/Kconfig b/drivers/media/platform/soc_camera/Kconfig
index 355298989dd8..83029a4854ae 100644
--- a/drivers/media/platform/soc_camera/Kconfig
+++ b/drivers/media/platform/soc_camera/Kconfig
@@ -28,7 +28,7 @@ config VIDEO_PXA27x
config VIDEO_RCAR_VIN
tristate "R-Car Video Input (VIN) support"
depends on VIDEO_DEV && SOC_CAMERA
- depends on ARCH_SHMOBILE || COMPILE_TEST
+ depends on ARCH_RENESAS || COMPILE_TEST
depends on HAS_DMA
select VIDEOBUF2_DMA_CONTIG
select SOC_CAMERA_SCALE_CROP
@@ -45,7 +45,7 @@ config VIDEO_SH_MOBILE_CSI2
config VIDEO_SH_MOBILE_CEU
tristate "SuperH Mobile CEU Interface driver"
depends on VIDEO_DEV && SOC_CAMERA && HAS_DMA && HAVE_CLK
- depends on ARCH_SHMOBILE || SUPERH || COMPILE_TEST
+ depends on ARCH_SHMOBILE || COMPILE_TEST
depends on HAS_DMA
select VIDEOBUF2_DMA_CONTIG
select SOC_CAMERA_SCALE_CROP
diff --git a/drivers/media/platform/soc_camera/rcar_vin.c b/drivers/media/platform/soc_camera/rcar_vin.c
index 3b8edf458964..3f9c1b8456c3 100644
--- a/drivers/media/platform/soc_camera/rcar_vin.c
+++ b/drivers/media/platform/soc_camera/rcar_vin.c
@@ -1845,6 +1845,8 @@ static const struct of_device_id rcar_vin_of_table[] = {
{ .compatible = "renesas,vin-r8a7790", .data = (void *)RCAR_GEN2 },
{ .compatible = "renesas,vin-r8a7779", .data = (void *)RCAR_H1 },
{ .compatible = "renesas,vin-r8a7778", .data = (void *)RCAR_M1 },
+ { .compatible = "renesas,rcar-gen3-vin", .data = (void *)RCAR_GEN3 },
+ { .compatible = "renesas,rcar-gen2-vin", .data = (void *)RCAR_GEN2 },
{ },
};
MODULE_DEVICE_TABLE(of, rcar_vin_of_table);
diff --git a/drivers/media/platform/sti/c8sectpfe/c8sectpfe-core.c b/drivers/media/platform/sti/c8sectpfe/c8sectpfe-core.c
index 78e3cb9a628f..7dddf77a62cf 100644
--- a/drivers/media/platform/sti/c8sectpfe/c8sectpfe-core.c
+++ b/drivers/media/platform/sti/c8sectpfe/c8sectpfe-core.c
@@ -49,7 +49,7 @@ MODULE_FIRMWARE(FIRMWARE_MEMDMA);
#define PID_TABLE_SIZE 1024
#define POLL_MSECS 50
-static int load_c8sectpfe_fw_step1(struct c8sectpfei *fei);
+static int load_c8sectpfe_fw(struct c8sectpfei *fei);
#define TS_PKT_SIZE 188
#define HEADER_SIZE (4)
@@ -130,7 +130,7 @@ static void channel_swdemux_tsklet(unsigned long data)
writel(channel->back_buffer_busaddr, channel->irec +
DMA_PRDS_BUSRP_TP(0));
else
- writel(wp, channel->irec + DMA_PRDS_BUSWP_TP(0));
+ writel(wp, channel->irec + DMA_PRDS_BUSRP_TP(0));
}
static int c8sectpfe_start_feed(struct dvb_demux_feed *dvbdmxfeed)
@@ -141,6 +141,7 @@ static int c8sectpfe_start_feed(struct dvb_demux_feed *dvbdmxfeed)
struct channel_info *channel;
u32 tmp;
unsigned long *bitmap;
+ int ret;
switch (dvbdmxfeed->type) {
case DMX_TYPE_TS:
@@ -169,8 +170,9 @@ static int c8sectpfe_start_feed(struct dvb_demux_feed *dvbdmxfeed)
}
if (!atomic_read(&fei->fw_loaded)) {
- dev_err(fei->dev, "%s: c8sectpfe fw not loaded\n", __func__);
- return -EINVAL;
+ ret = load_c8sectpfe_fw(fei);
+ if (ret)
+ return ret;
}
mutex_lock(&fei->lock);
@@ -265,8 +267,9 @@ static int c8sectpfe_stop_feed(struct dvb_demux_feed *dvbdmxfeed)
unsigned long *bitmap;
if (!atomic_read(&fei->fw_loaded)) {
- dev_err(fei->dev, "%s: c8sectpfe fw not loaded\n", __func__);
- return -EINVAL;
+ ret = load_c8sectpfe_fw(fei);
+ if (ret)
+ return ret;
}
mutex_lock(&fei->lock);
@@ -585,7 +588,7 @@ static int configure_memdma_and_inputblock(struct c8sectpfei *fei,
writel(tsin->pid_buffer_busaddr,
fei->io + PIDF_BASE(tsin->tsin_id));
- dev_info(fei->dev, "chan=%d PIDF_BASE=0x%x pid_bus_addr=%pad\n",
+ dev_dbg(fei->dev, "chan=%d PIDF_BASE=0x%x pid_bus_addr=%pad\n",
tsin->tsin_id, readl(fei->io + PIDF_BASE(tsin->tsin_id)),
&tsin->pid_buffer_busaddr);
@@ -880,13 +883,6 @@ static int c8sectpfe_probe(struct platform_device *pdev)
goto err_clk_disable;
}
- /* ensure all other init has been done before requesting firmware */
- ret = load_c8sectpfe_fw_step1(fei);
- if (ret) {
- dev_err(dev, "Couldn't load slim core firmware\n");
- goto err_clk_disable;
- }
-
c8sectpfe_debugfs_init(fei);
return 0;
@@ -1091,15 +1087,14 @@ static void load_dmem_segment(struct c8sectpfei *fei, Elf32_Phdr *phdr,
phdr->p_memsz - phdr->p_filesz);
}
-static int load_slim_core_fw(const struct firmware *fw, void *context)
+static int load_slim_core_fw(const struct firmware *fw, struct c8sectpfei *fei)
{
- struct c8sectpfei *fei = context;
Elf32_Ehdr *ehdr;
Elf32_Phdr *phdr;
u8 __iomem *dst;
int err = 0, i;
- if (!fw || !context)
+ if (!fw || !fei)
return -EINVAL;
ehdr = (Elf32_Ehdr *)fw->data;
@@ -1151,29 +1146,35 @@ static int load_slim_core_fw(const struct firmware *fw, void *context)
return err;
}
-static void load_c8sectpfe_fw_cb(const struct firmware *fw, void *context)
+static int load_c8sectpfe_fw(struct c8sectpfei *fei)
{
- struct c8sectpfei *fei = context;
+ const struct firmware *fw;
int err;
+ dev_info(fei->dev, "Loading firmware: %s\n", FIRMWARE_MEMDMA);
+
+ err = request_firmware(&fw, FIRMWARE_MEMDMA, fei->dev);
+ if (err)
+ return err;
+
err = c8sectpfe_elf_sanity_check(fei, fw);
if (err) {
dev_err(fei->dev, "c8sectpfe_elf_sanity_check failed err=(%d)\n"
, err);
- goto err;
+ return err;
}
- err = load_slim_core_fw(fw, context);
+ err = load_slim_core_fw(fw, fei);
if (err) {
dev_err(fei->dev, "load_slim_core_fw failed err=(%d)\n", err);
- goto err;
+ return err;
}
/* now that the firmware is loaded, configure the input blocks */
err = configure_channels(fei);
if (err) {
dev_err(fei->dev, "configure_channels failed err=(%d)\n", err);
- goto err;
+ return err;
}
/*
@@ -1186,28 +1187,6 @@ static void load_c8sectpfe_fw_cb(const struct firmware *fw, void *context)
writel(0x1, fei->io + DMA_CPU_RUN);
atomic_set(&fei->fw_loaded, 1);
-err:
- complete_all(&fei->fw_ack);
-}
-
-static int load_c8sectpfe_fw_step1(struct c8sectpfei *fei)
-{
- int err;
-
- dev_info(fei->dev, "Loading firmware: %s\n", FIRMWARE_MEMDMA);
-
- init_completion(&fei->fw_ack);
- atomic_set(&fei->fw_loaded, 0);
-
- err = request_firmware_nowait(THIS_MODULE, FW_ACTION_HOTPLUG,
- FIRMWARE_MEMDMA, fei->dev, GFP_KERNEL, fei,
- load_c8sectpfe_fw_cb);
-
- if (err) {
- dev_err(fei->dev, "request_firmware_nowait err: %d.\n", err);
- complete_all(&fei->fw_ack);
- return err;
- }
return 0;
}
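
A minimal sketch of the on-demand pattern the hunks above switch to: the firmware is requested synchronously the first time a feed needs it instead of asynchronously at probe time. All names below (foo_dev, foo_parse_and_program, the firmware path) are illustrative stand-ins, not symbols from this driver.

#include <linux/atomic.h>
#include <linux/device.h>
#include <linux/errno.h>
#include <linux/firmware.h>

/* Illustrative driver context. */
struct foo_dev {
        struct device *dev;
        atomic_t fw_loaded;
};

/* Stand-in for the ELF sanity check and segment programming. */
static int foo_parse_and_program(struct foo_dev *foo, const struct firmware *fw)
{
        return fw->size ? 0 : -EINVAL;
}

static int foo_load_fw_on_demand(struct foo_dev *foo)
{
        const struct firmware *fw;
        int ret;

        if (atomic_read(&foo->fw_loaded))
                return 0;

        /* Synchronous request: may sleep, so call from process context only. */
        ret = request_firmware(&fw, "foo/memdma.elf", foo->dev);
        if (ret)
                return ret;

        ret = foo_parse_and_program(foo, fw);
        release_firmware(fw);
        if (ret)
                return ret;

        atomic_set(&foo->fw_loaded, 1);
        return 0;
}
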
diff --git a/drivers/media/platform/vivid/Kconfig b/drivers/media/platform/vivid/Kconfig
index 0885e93ad436..f535f576913d 100644
--- a/drivers/media/platform/vivid/Kconfig
+++ b/drivers/media/platform/vivid/Kconfig
@@ -7,6 +7,7 @@ config VIDEO_VIVID
select FB_CFB_COPYAREA
select FB_CFB_IMAGEBLIT
select VIDEOBUF2_VMALLOC
+ select VIDEO_V4L2_TPG
default n
---help---
Enables a virtual video driver. This driver emulates a webcam,
diff --git a/drivers/media/platform/vivid/Makefile b/drivers/media/platform/vivid/Makefile
index 756fc12851df..633c8a1b2c27 100644
--- a/drivers/media/platform/vivid/Makefile
+++ b/drivers/media/platform/vivid/Makefile
@@ -2,5 +2,5 @@ vivid-objs := vivid-core.o vivid-ctrls.o vivid-vid-common.o vivid-vbi-gen.o \
vivid-vid-cap.o vivid-vid-out.o vivid-kthread-cap.o vivid-kthread-out.o \
vivid-radio-rx.o vivid-radio-tx.o vivid-radio-common.o \
vivid-rds-gen.o vivid-sdr-cap.o vivid-vbi-cap.o vivid-vbi-out.o \
- vivid-osd.o vivid-tpg.o vivid-tpg-colors.o
+ vivid-osd.o
obj-$(CONFIG_VIDEO_VIVID) += vivid.o
diff --git a/drivers/media/platform/vivid/vivid-core.c b/drivers/media/platform/vivid/vivid-core.c
index ec125becb7af..c14da84af09b 100644
--- a/drivers/media/platform/vivid/vivid-core.c
+++ b/drivers/media/platform/vivid/vivid-core.c
@@ -200,27 +200,12 @@ static int vidioc_querycap(struct file *file, void *priv,
struct v4l2_capability *cap)
{
struct vivid_dev *dev = video_drvdata(file);
- struct video_device *vdev = video_devdata(file);
strcpy(cap->driver, "vivid");
strcpy(cap->card, "vivid");
snprintf(cap->bus_info, sizeof(cap->bus_info),
"platform:%s", dev->v4l2_dev.name);
- if (vdev->vfl_type == VFL_TYPE_GRABBER && vdev->vfl_dir == VFL_DIR_RX)
- cap->device_caps = dev->vid_cap_caps;
- if (vdev->vfl_type == VFL_TYPE_GRABBER && vdev->vfl_dir == VFL_DIR_TX)
- cap->device_caps = dev->vid_out_caps;
- else if (vdev->vfl_type == VFL_TYPE_VBI && vdev->vfl_dir == VFL_DIR_RX)
- cap->device_caps = dev->vbi_cap_caps;
- else if (vdev->vfl_type == VFL_TYPE_VBI && vdev->vfl_dir == VFL_DIR_TX)
- cap->device_caps = dev->vbi_out_caps;
- else if (vdev->vfl_type == VFL_TYPE_SDR)
- cap->device_caps = dev->sdr_cap_caps;
- else if (vdev->vfl_type == VFL_TYPE_RADIO && vdev->vfl_dir == VFL_DIR_RX)
- cap->device_caps = dev->radio_rx_caps;
- else if (vdev->vfl_type == VFL_TYPE_RADIO && vdev->vfl_dir == VFL_DIR_TX)
- cap->device_caps = dev->radio_tx_caps;
cap->capabilities = dev->vid_cap_caps | dev->vid_out_caps |
dev->vbi_cap_caps | dev->vbi_out_caps |
dev->radio_rx_caps | dev->radio_tx_caps |
@@ -1135,6 +1120,7 @@ static int vivid_create_instance(struct platform_device *pdev, int inst)
strlcpy(vfd->name, "vivid-vid-cap", sizeof(vfd->name));
vfd->fops = &vivid_fops;
vfd->ioctl_ops = &vivid_ioctl_ops;
+ vfd->device_caps = dev->vid_cap_caps;
vfd->release = video_device_release_empty;
vfd->v4l2_dev = &dev->v4l2_dev;
vfd->queue = &dev->vb_vid_cap_q;
@@ -1160,6 +1146,7 @@ static int vivid_create_instance(struct platform_device *pdev, int inst)
vfd->vfl_dir = VFL_DIR_TX;
vfd->fops = &vivid_fops;
vfd->ioctl_ops = &vivid_ioctl_ops;
+ vfd->device_caps = dev->vid_out_caps;
vfd->release = video_device_release_empty;
vfd->v4l2_dev = &dev->v4l2_dev;
vfd->queue = &dev->vb_vid_out_q;
@@ -1184,6 +1171,7 @@ static int vivid_create_instance(struct platform_device *pdev, int inst)
strlcpy(vfd->name, "vivid-vbi-cap", sizeof(vfd->name));
vfd->fops = &vivid_fops;
vfd->ioctl_ops = &vivid_ioctl_ops;
+ vfd->device_caps = dev->vbi_cap_caps;
vfd->release = video_device_release_empty;
vfd->v4l2_dev = &dev->v4l2_dev;
vfd->queue = &dev->vb_vbi_cap_q;
@@ -1207,6 +1195,7 @@ static int vivid_create_instance(struct platform_device *pdev, int inst)
vfd->vfl_dir = VFL_DIR_TX;
vfd->fops = &vivid_fops;
vfd->ioctl_ops = &vivid_ioctl_ops;
+ vfd->device_caps = dev->vbi_out_caps;
vfd->release = video_device_release_empty;
vfd->v4l2_dev = &dev->v4l2_dev;
vfd->queue = &dev->vb_vbi_out_q;
@@ -1229,6 +1218,7 @@ static int vivid_create_instance(struct platform_device *pdev, int inst)
strlcpy(vfd->name, "vivid-sdr-cap", sizeof(vfd->name));
vfd->fops = &vivid_fops;
vfd->ioctl_ops = &vivid_ioctl_ops;
+ vfd->device_caps = dev->sdr_cap_caps;
vfd->release = video_device_release_empty;
vfd->v4l2_dev = &dev->v4l2_dev;
vfd->queue = &dev->vb_sdr_cap_q;
@@ -1247,6 +1237,7 @@ static int vivid_create_instance(struct platform_device *pdev, int inst)
strlcpy(vfd->name, "vivid-rad-rx", sizeof(vfd->name));
vfd->fops = &vivid_radio_fops;
vfd->ioctl_ops = &vivid_ioctl_ops;
+ vfd->device_caps = dev->radio_rx_caps;
vfd->release = video_device_release_empty;
vfd->v4l2_dev = &dev->v4l2_dev;
vfd->lock = &dev->mutex;
@@ -1265,6 +1256,7 @@ static int vivid_create_instance(struct platform_device *pdev, int inst)
vfd->vfl_dir = VFL_DIR_TX;
vfd->fops = &vivid_radio_fops;
vfd->ioctl_ops = &vivid_ioctl_ops;
+ vfd->device_caps = dev->radio_tx_caps;
vfd->release = video_device_release_empty;
vfd->v4l2_dev = &dev->v4l2_dev;
vfd->lock = &dev->mutex;
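
The device_caps assignments added above move per-node capability reporting from vidioc_querycap() to registration time. A minimal sketch of that registration-time pattern, with made-up node names and capability values (only the device_caps field and the helpers already used in the hunks come from the source):

#include <linux/string.h>
#include <linux/videodev2.h>
#include <media/v4l2-dev.h>
#include <media/v4l2-device.h>
#include <media/v4l2-ioctl.h>

/* Illustrative capture-node setup; the "example" names are not from vivid. */
static int example_register_vid_cap(struct video_device *vfd,
                                    struct v4l2_device *v4l2_dev,
                                    const struct v4l2_file_operations *fops,
                                    const struct v4l2_ioctl_ops *ioctl_ops)
{
        strlcpy(vfd->name, "example-vid-cap", sizeof(vfd->name));
        vfd->v4l2_dev = v4l2_dev;
        vfd->fops = fops;
        vfd->ioctl_ops = ioctl_ops;
        vfd->release = video_device_release_empty;
        /* Fixed per-node capabilities, set once instead of in querycap(). */
        vfd->device_caps = V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_STREAMING |
                           V4L2_CAP_READWRITE;

        return video_register_device(vfd, VFL_TYPE_GRABBER, -1);
}
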
diff --git a/drivers/media/platform/vivid/vivid-core.h b/drivers/media/platform/vivid/vivid-core.h
index 751c1ba391e9..776783bec227 100644
--- a/drivers/media/platform/vivid/vivid-core.h
+++ b/drivers/media/platform/vivid/vivid-core.h
@@ -25,7 +25,7 @@
#include <media/v4l2-device.h>
#include <media/v4l2-dev.h>
#include <media/v4l2-ctrls.h>
-#include "vivid-tpg.h"
+#include <media/v4l2-tpg.h>
#include "vivid-rds-gen.h"
#include "vivid-vbi-gen.h"
diff --git a/drivers/media/platform/vivid/vivid-kthread-cap.c b/drivers/media/platform/vivid/vivid-kthread-cap.c
index 9034281944a4..3b8c10108dfa 100644
--- a/drivers/media/platform/vivid/vivid-kthread-cap.c
+++ b/drivers/media/platform/vivid/vivid-kthread-cap.c
@@ -36,6 +36,7 @@
#include <media/v4l2-ioctl.h>
#include <media/v4l2-fh.h>
#include <media/v4l2-event.h>
+#include <media/v4l2-rect.h>
#include "vivid-core.h"
#include "vivid-vid-common.h"
@@ -184,15 +185,15 @@ static void vivid_precalc_copy_rects(struct vivid_dev *dev)
dev->compose_out.width, dev->compose_out.height
};
- dev->loop_vid_copy = rect_intersect(&dev->crop_cap, &dev->compose_out);
+ v4l2_rect_intersect(&dev->loop_vid_copy, &dev->crop_cap, &dev->compose_out);
dev->loop_vid_out = dev->loop_vid_copy;
- rect_scale(&dev->loop_vid_out, &dev->compose_out, &dev->crop_out);
+ v4l2_rect_scale(&dev->loop_vid_out, &dev->compose_out, &dev->crop_out);
dev->loop_vid_out.left += dev->crop_out.left;
dev->loop_vid_out.top += dev->crop_out.top;
dev->loop_vid_cap = dev->loop_vid_copy;
- rect_scale(&dev->loop_vid_cap, &dev->crop_cap, &dev->compose_cap);
+ v4l2_rect_scale(&dev->loop_vid_cap, &dev->crop_cap, &dev->compose_cap);
dprintk(dev, 1,
"loop_vid_copy: %dx%d@%dx%d loop_vid_out: %dx%d@%dx%d loop_vid_cap: %dx%d@%dx%d\n",
@@ -203,13 +204,13 @@ static void vivid_precalc_copy_rects(struct vivid_dev *dev)
dev->loop_vid_cap.width, dev->loop_vid_cap.height,
dev->loop_vid_cap.left, dev->loop_vid_cap.top);
- r_overlay = rect_intersect(&r_fb, &r_overlay);
+ v4l2_rect_intersect(&r_overlay, &r_fb, &r_overlay);
/* shift r_overlay to the same origin as compose_out */
r_overlay.left += dev->compose_out.left - dev->overlay_out_left;
r_overlay.top += dev->compose_out.top - dev->overlay_out_top;
- dev->loop_vid_overlay = rect_intersect(&r_overlay, &dev->loop_vid_copy);
+ v4l2_rect_intersect(&dev->loop_vid_overlay, &r_overlay, &dev->loop_vid_copy);
dev->loop_fb_copy = dev->loop_vid_overlay;
/* shift dev->loop_fb_copy back again to the fb origin */
@@ -217,7 +218,7 @@ static void vivid_precalc_copy_rects(struct vivid_dev *dev)
dev->loop_fb_copy.top -= dev->compose_out.top - dev->overlay_out_top;
dev->loop_vid_overlay_cap = dev->loop_vid_overlay;
- rect_scale(&dev->loop_vid_overlay_cap, &dev->crop_cap, &dev->compose_cap);
+ v4l2_rect_scale(&dev->loop_vid_overlay_cap, &dev->crop_cap, &dev->compose_cap);
dprintk(dev, 1,
"loop_fb_copy: %dx%d@%dx%d loop_vid_overlay: %dx%d@%dx%d loop_vid_overlay_cap: %dx%d@%dx%d\n",
diff --git a/drivers/media/platform/vivid/vivid-rds-gen.c b/drivers/media/platform/vivid/vivid-rds-gen.c
index c382343fdb66..53c7777dc001 100644
--- a/drivers/media/platform/vivid/vivid-rds-gen.c
+++ b/drivers/media/platform/vivid/vivid-rds-gen.c
@@ -55,6 +55,7 @@ void vivid_rds_generate(struct vivid_rds_gen *rds)
{
struct v4l2_rds_data *data = rds->data;
unsigned grp;
+ unsigned idx;
struct tm tm;
unsigned date;
unsigned time;
@@ -73,24 +74,26 @@ void vivid_rds_generate(struct vivid_rds_gen *rds)
case 0 ... 3:
case 22 ... 25:
case 44 ... 47: /* Group 0B */
+ idx = (grp % 22) % 4;
data[1].lsb |= (rds->ta << 4) | (rds->ms << 3);
- data[1].lsb |= vivid_get_di(rds, grp % 22);
+ data[1].lsb |= vivid_get_di(rds, idx);
data[1].msb |= 1 << 3;
data[2].lsb = rds->picode & 0xff;
data[2].msb = rds->picode >> 8;
data[2].block = V4L2_RDS_BLOCK_C_ALT | (V4L2_RDS_BLOCK_C_ALT << 3);
- data[3].lsb = rds->psname[2 * (grp % 22) + 1];
- data[3].msb = rds->psname[2 * (grp % 22)];
+ data[3].lsb = rds->psname[2 * idx + 1];
+ data[3].msb = rds->psname[2 * idx];
break;
case 4 ... 19:
case 26 ... 41: /* Group 2A */
- data[1].lsb |= (grp - 4) % 22;
+ idx = ((grp - 4) % 22) % 16;
+ data[1].lsb |= idx;
data[1].msb |= 4 << 3;
- data[2].msb = rds->radiotext[4 * ((grp - 4) % 22)];
- data[2].lsb = rds->radiotext[4 * ((grp - 4) % 22) + 1];
+ data[2].msb = rds->radiotext[4 * idx];
+ data[2].lsb = rds->radiotext[4 * idx + 1];
data[2].block = V4L2_RDS_BLOCK_C | (V4L2_RDS_BLOCK_C << 3);
- data[3].msb = rds->radiotext[4 * ((grp - 4) % 22) + 2];
- data[3].lsb = rds->radiotext[4 * ((grp - 4) % 22) + 3];
+ data[3].msb = rds->radiotext[4 * idx + 2];
+ data[3].lsb = rds->radiotext[4 * idx + 3];
break;
case 56:
/*
diff --git a/drivers/media/platform/vivid/vivid-vid-cap.c b/drivers/media/platform/vivid/vivid-vid-cap.c
index b84f081c1b92..4f730f355a17 100644
--- a/drivers/media/platform/vivid/vivid-vid-cap.c
+++ b/drivers/media/platform/vivid/vivid-vid-cap.c
@@ -26,6 +26,7 @@
#include <media/v4l2-common.h>
#include <media/v4l2-event.h>
#include <media/v4l2-dv-timings.h>
+#include <media/v4l2-rect.h>
#include "vivid-core.h"
#include "vivid-vid-common.h"
@@ -590,16 +591,16 @@ int vivid_try_fmt_vid_cap(struct file *file, void *priv,
} else {
struct v4l2_rect r = { 0, 0, mp->width, mp->height * factor };
- rect_set_min_size(&r, &vivid_min_rect);
- rect_set_max_size(&r, &vivid_max_rect);
+ v4l2_rect_set_min_size(&r, &vivid_min_rect);
+ v4l2_rect_set_max_size(&r, &vivid_max_rect);
if (dev->has_scaler_cap && !dev->has_compose_cap) {
struct v4l2_rect max_r = { 0, 0, MAX_ZOOM * w, MAX_ZOOM * h };
- rect_set_max_size(&r, &max_r);
+ v4l2_rect_set_max_size(&r, &max_r);
} else if (!dev->has_scaler_cap && dev->has_crop_cap && !dev->has_compose_cap) {
- rect_set_max_size(&r, &dev->src_rect);
+ v4l2_rect_set_max_size(&r, &dev->src_rect);
} else if (!dev->has_scaler_cap && !dev->has_crop_cap) {
- rect_set_min_size(&r, &dev->src_rect);
+ v4l2_rect_set_min_size(&r, &dev->src_rect);
}
mp->width = r.width;
mp->height = r.height / factor;
@@ -668,7 +669,7 @@ int vivid_s_fmt_vid_cap(struct file *file, void *priv,
if (dev->has_scaler_cap) {
if (dev->has_compose_cap)
- rect_map_inside(compose, &r);
+ v4l2_rect_map_inside(compose, &r);
else
*compose = r;
if (dev->has_crop_cap && !dev->has_compose_cap) {
@@ -683,9 +684,9 @@ int vivid_s_fmt_vid_cap(struct file *file, void *priv,
factor * r.height * MAX_ZOOM
};
- rect_set_min_size(crop, &min_r);
- rect_set_max_size(crop, &max_r);
- rect_map_inside(crop, &dev->crop_bounds_cap);
+ v4l2_rect_set_min_size(crop, &min_r);
+ v4l2_rect_set_max_size(crop, &max_r);
+ v4l2_rect_map_inside(crop, &dev->crop_bounds_cap);
} else if (dev->has_crop_cap) {
struct v4l2_rect min_r = {
0, 0,
@@ -698,27 +699,27 @@ int vivid_s_fmt_vid_cap(struct file *file, void *priv,
factor * compose->height * MAX_ZOOM
};
- rect_set_min_size(crop, &min_r);
- rect_set_max_size(crop, &max_r);
- rect_map_inside(crop, &dev->crop_bounds_cap);
+ v4l2_rect_set_min_size(crop, &min_r);
+ v4l2_rect_set_max_size(crop, &max_r);
+ v4l2_rect_map_inside(crop, &dev->crop_bounds_cap);
}
} else if (dev->has_crop_cap && !dev->has_compose_cap) {
r.height *= factor;
- rect_set_size_to(crop, &r);
- rect_map_inside(crop, &dev->crop_bounds_cap);
+ v4l2_rect_set_size_to(crop, &r);
+ v4l2_rect_map_inside(crop, &dev->crop_bounds_cap);
r = *crop;
r.height /= factor;
- rect_set_size_to(compose, &r);
+ v4l2_rect_set_size_to(compose, &r);
} else if (!dev->has_crop_cap) {
- rect_map_inside(compose, &r);
+ v4l2_rect_map_inside(compose, &r);
} else {
r.height *= factor;
- rect_set_max_size(crop, &r);
- rect_map_inside(crop, &dev->crop_bounds_cap);
+ v4l2_rect_set_max_size(crop, &r);
+ v4l2_rect_map_inside(crop, &dev->crop_bounds_cap);
compose->top *= factor;
compose->height *= factor;
- rect_set_size_to(compose, crop);
- rect_map_inside(compose, &r);
+ v4l2_rect_set_size_to(compose, crop);
+ v4l2_rect_map_inside(compose, &r);
compose->top /= factor;
compose->height /= factor;
}
@@ -735,9 +736,9 @@ int vivid_s_fmt_vid_cap(struct file *file, void *priv,
} else {
struct v4l2_rect r = { 0, 0, mp->width, mp->height };
- rect_set_size_to(compose, &r);
+ v4l2_rect_set_size_to(compose, &r);
r.height *= factor;
- rect_set_size_to(crop, &r);
+ v4l2_rect_set_size_to(crop, &r);
}
dev->fmt_cap_rect.width = mp->width;
@@ -886,9 +887,9 @@ int vivid_vid_cap_s_selection(struct file *file, void *fh, struct v4l2_selection
ret = vivid_vid_adjust_sel(s->flags, &s->r);
if (ret)
return ret;
- rect_set_min_size(&s->r, &vivid_min_rect);
- rect_set_max_size(&s->r, &dev->src_rect);
- rect_map_inside(&s->r, &dev->crop_bounds_cap);
+ v4l2_rect_set_min_size(&s->r, &vivid_min_rect);
+ v4l2_rect_set_max_size(&s->r, &dev->src_rect);
+ v4l2_rect_map_inside(&s->r, &dev->crop_bounds_cap);
s->r.top /= factor;
s->r.height /= factor;
if (dev->has_scaler_cap) {
@@ -904,36 +905,36 @@ int vivid_vid_cap_s_selection(struct file *file, void *fh, struct v4l2_selection
s->r.height / MAX_ZOOM
};
- rect_set_min_size(&fmt, &min_rect);
+ v4l2_rect_set_min_size(&fmt, &min_rect);
if (!dev->has_compose_cap)
- rect_set_max_size(&fmt, &max_rect);
- if (!rect_same_size(&dev->fmt_cap_rect, &fmt) &&
+ v4l2_rect_set_max_size(&fmt, &max_rect);
+ if (!v4l2_rect_same_size(&dev->fmt_cap_rect, &fmt) &&
vb2_is_busy(&dev->vb_vid_cap_q))
return -EBUSY;
if (dev->has_compose_cap) {
- rect_set_min_size(compose, &min_rect);
- rect_set_max_size(compose, &max_rect);
+ v4l2_rect_set_min_size(compose, &min_rect);
+ v4l2_rect_set_max_size(compose, &max_rect);
}
dev->fmt_cap_rect = fmt;
tpg_s_buf_height(&dev->tpg, fmt.height);
} else if (dev->has_compose_cap) {
struct v4l2_rect fmt = dev->fmt_cap_rect;
- rect_set_min_size(&fmt, &s->r);
- if (!rect_same_size(&dev->fmt_cap_rect, &fmt) &&
+ v4l2_rect_set_min_size(&fmt, &s->r);
+ if (!v4l2_rect_same_size(&dev->fmt_cap_rect, &fmt) &&
vb2_is_busy(&dev->vb_vid_cap_q))
return -EBUSY;
dev->fmt_cap_rect = fmt;
tpg_s_buf_height(&dev->tpg, fmt.height);
- rect_set_size_to(compose, &s->r);
- rect_map_inside(compose, &dev->fmt_cap_rect);
+ v4l2_rect_set_size_to(compose, &s->r);
+ v4l2_rect_map_inside(compose, &dev->fmt_cap_rect);
} else {
- if (!rect_same_size(&s->r, &dev->fmt_cap_rect) &&
+ if (!v4l2_rect_same_size(&s->r, &dev->fmt_cap_rect) &&
vb2_is_busy(&dev->vb_vid_cap_q))
return -EBUSY;
- rect_set_size_to(&dev->fmt_cap_rect, &s->r);
- rect_set_size_to(compose, &s->r);
- rect_map_inside(compose, &dev->fmt_cap_rect);
+ v4l2_rect_set_size_to(&dev->fmt_cap_rect, &s->r);
+ v4l2_rect_set_size_to(compose, &s->r);
+ v4l2_rect_map_inside(compose, &dev->fmt_cap_rect);
tpg_s_buf_height(&dev->tpg, dev->fmt_cap_rect.height);
}
s->r.top *= factor;
@@ -946,8 +947,8 @@ int vivid_vid_cap_s_selection(struct file *file, void *fh, struct v4l2_selection
ret = vivid_vid_adjust_sel(s->flags, &s->r);
if (ret)
return ret;
- rect_set_min_size(&s->r, &vivid_min_rect);
- rect_set_max_size(&s->r, &dev->fmt_cap_rect);
+ v4l2_rect_set_min_size(&s->r, &vivid_min_rect);
+ v4l2_rect_set_max_size(&s->r, &dev->fmt_cap_rect);
if (dev->has_scaler_cap) {
struct v4l2_rect max_rect = {
0, 0,
@@ -955,7 +956,7 @@ int vivid_vid_cap_s_selection(struct file *file, void *fh, struct v4l2_selection
(dev->src_rect.height / factor) * MAX_ZOOM
};
- rect_set_max_size(&s->r, &max_rect);
+ v4l2_rect_set_max_size(&s->r, &max_rect);
if (dev->has_crop_cap) {
struct v4l2_rect min_rect = {
0, 0,
@@ -968,23 +969,23 @@ int vivid_vid_cap_s_selection(struct file *file, void *fh, struct v4l2_selection
(s->r.height * factor) * MAX_ZOOM
};
- rect_set_min_size(crop, &min_rect);
- rect_set_max_size(crop, &max_rect);
- rect_map_inside(crop, &dev->crop_bounds_cap);
+ v4l2_rect_set_min_size(crop, &min_rect);
+ v4l2_rect_set_max_size(crop, &max_rect);
+ v4l2_rect_map_inside(crop, &dev->crop_bounds_cap);
}
} else if (dev->has_crop_cap) {
s->r.top *= factor;
s->r.height *= factor;
- rect_set_max_size(&s->r, &dev->src_rect);
- rect_set_size_to(crop, &s->r);
- rect_map_inside(crop, &dev->crop_bounds_cap);
+ v4l2_rect_set_max_size(&s->r, &dev->src_rect);
+ v4l2_rect_set_size_to(crop, &s->r);
+ v4l2_rect_map_inside(crop, &dev->crop_bounds_cap);
s->r.top /= factor;
s->r.height /= factor;
} else {
- rect_set_size_to(&s->r, &dev->src_rect);
+ v4l2_rect_set_size_to(&s->r, &dev->src_rect);
s->r.height /= factor;
}
- rect_map_inside(&s->r, &dev->fmt_cap_rect);
+ v4l2_rect_map_inside(&s->r, &dev->fmt_cap_rect);
if (dev->bitmap_cap && (compose->width != s->r.width ||
compose->height != s->r.height)) {
kfree(dev->bitmap_cap);
@@ -1124,7 +1125,7 @@ int vidioc_try_fmt_vid_overlay(struct file *file, void *priv,
for (j = i + 1; j < win->clipcount; j++) {
struct v4l2_rect *r2 = &dev->try_clips_cap[j].c;
- if (rect_overlap(r1, r2))
+ if (v4l2_rect_overlap(r1, r2))
return -EINVAL;
}
}
diff --git a/drivers/media/platform/vivid/vivid-vid-common.c b/drivers/media/platform/vivid/vivid-vid-common.c
index b0d4e3a0acf0..39ea2284789c 100644
--- a/drivers/media/platform/vivid/vivid-vid-common.c
+++ b/drivers/media/platform/vivid/vivid-vid-common.c
@@ -653,103 +653,6 @@ int fmt_sp2mp_func(struct file *file, void *priv,
return ret;
}
-/* v4l2_rect helper function: copy the width/height values */
-void rect_set_size_to(struct v4l2_rect *r, const struct v4l2_rect *size)
-{
- r->width = size->width;
- r->height = size->height;
-}
-
-/* v4l2_rect helper function: width and height of r should be >= min_size */
-void rect_set_min_size(struct v4l2_rect *r, const struct v4l2_rect *min_size)
-{
- if (r->width < min_size->width)
- r->width = min_size->width;
- if (r->height < min_size->height)
- r->height = min_size->height;
-}
-
-/* v4l2_rect helper function: width and height of r should be <= max_size */
-void rect_set_max_size(struct v4l2_rect *r, const struct v4l2_rect *max_size)
-{
- if (r->width > max_size->width)
- r->width = max_size->width;
- if (r->height > max_size->height)
- r->height = max_size->height;
-}
-
-/* v4l2_rect helper function: r should be inside boundary */
-void rect_map_inside(struct v4l2_rect *r, const struct v4l2_rect *boundary)
-{
- rect_set_max_size(r, boundary);
- if (r->left < boundary->left)
- r->left = boundary->left;
- if (r->top < boundary->top)
- r->top = boundary->top;
- if (r->left + r->width > boundary->width)
- r->left = boundary->width - r->width;
- if (r->top + r->height > boundary->height)
- r->top = boundary->height - r->height;
-}
-
-/* v4l2_rect helper function: return true if r1 has the same size as r2 */
-bool rect_same_size(const struct v4l2_rect *r1, const struct v4l2_rect *r2)
-{
- return r1->width == r2->width && r1->height == r2->height;
-}
-
-/* v4l2_rect helper function: calculate the intersection of two rects */
-struct v4l2_rect rect_intersect(const struct v4l2_rect *a, const struct v4l2_rect *b)
-{
- struct v4l2_rect r;
- int right, bottom;
-
- r.top = max(a->top, b->top);
- r.left = max(a->left, b->left);
- bottom = min(a->top + a->height, b->top + b->height);
- right = min(a->left + a->width, b->left + b->width);
- r.height = max(0, bottom - r.top);
- r.width = max(0, right - r.left);
- return r;
-}
-
-/*
- * v4l2_rect helper function: scale rect r by to->width / from->width and
- * to->height / from->height.
- */
-void rect_scale(struct v4l2_rect *r, const struct v4l2_rect *from,
- const struct v4l2_rect *to)
-{
- if (from->width == 0 || from->height == 0) {
- r->left = r->top = r->width = r->height = 0;
- return;
- }
- r->left = (((r->left - from->left) * to->width) / from->width) & ~1;
- r->width = ((r->width * to->width) / from->width) & ~1;
- r->top = ((r->top - from->top) * to->height) / from->height;
- r->height = (r->height * to->height) / from->height;
-}
-
-bool rect_overlap(const struct v4l2_rect *r1, const struct v4l2_rect *r2)
-{
- /*
- * IF the left side of r1 is to the right of the right side of r2 OR
- * the left side of r2 is to the right of the right side of r1 THEN
- * they do not overlap.
- */
- if (r1->left >= r2->left + r2->width ||
- r2->left >= r1->left + r1->width)
- return false;
- /*
- * IF the top side of r1 is below the bottom of r2 OR
- * the top side of r2 is below the bottom of r1 THEN
- * they do not overlap.
- */
- if (r1->top >= r2->top + r2->height ||
- r2->top >= r1->top + r1->height)
- return false;
- return true;
-}
int vivid_vid_adjust_sel(unsigned flags, struct v4l2_rect *r)
{
unsigned w = r->width;
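
The helpers removed above now live in <media/v4l2-rect.h> with a v4l2_rect_ prefix; the one behavioural difference visible in the hunks is that the intersection helper fills a destination rectangle instead of returning one by value. A short sketch of the same clamping sequence the driver code performs, with illustrative sizes:

#include <linux/videodev2.h>
#include <media/v4l2-rect.h>

/* Clamp a requested crop to a frame, as the vivid code above does; the
 * 16x16 minimum is illustrative.
 */
static struct v4l2_rect example_clamp_crop(struct v4l2_rect *crop,
                                           const struct v4l2_rect *frame)
{
        const struct v4l2_rect min_r = { 0, 0, 16, 16 };
        struct v4l2_rect visible;

        v4l2_rect_set_min_size(crop, &min_r);   /* at least 16x16 */
        v4l2_rect_set_max_size(crop, frame);    /* no larger than the frame */
        v4l2_rect_map_inside(crop, frame);      /* keep it inside the frame */

        /* Unlike the old helper, the result comes back through the first
         * argument rather than by value.
         */
        v4l2_rect_intersect(&visible, crop, frame);
        return visible;
}
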
diff --git a/drivers/media/platform/vivid/vivid-vid-common.h b/drivers/media/platform/vivid/vivid-vid-common.h
index 3ec4fa85c9b9..4b6175eab8a2 100644
--- a/drivers/media/platform/vivid/vivid-vid-common.h
+++ b/drivers/media/platform/vivid/vivid-vid-common.h
@@ -37,15 +37,6 @@ const struct vivid_fmt *vivid_get_format(struct vivid_dev *dev, u32 pixelformat)
bool vivid_vid_can_loop(struct vivid_dev *dev);
void vivid_send_source_change(struct vivid_dev *dev, unsigned type);
-bool rect_overlap(const struct v4l2_rect *r1, const struct v4l2_rect *r2);
-void rect_set_size_to(struct v4l2_rect *r, const struct v4l2_rect *size);
-void rect_set_min_size(struct v4l2_rect *r, const struct v4l2_rect *min_size);
-void rect_set_max_size(struct v4l2_rect *r, const struct v4l2_rect *max_size);
-void rect_map_inside(struct v4l2_rect *r, const struct v4l2_rect *boundary);
-bool rect_same_size(const struct v4l2_rect *r1, const struct v4l2_rect *r2);
-struct v4l2_rect rect_intersect(const struct v4l2_rect *a, const struct v4l2_rect *b);
-void rect_scale(struct v4l2_rect *r, const struct v4l2_rect *from,
- const struct v4l2_rect *to);
int vivid_vid_adjust_sel(unsigned flags, struct v4l2_rect *r);
int vivid_enum_fmt_vid(struct file *file, void *priv, struct v4l2_fmtdesc *f);
diff --git a/drivers/media/platform/vivid/vivid-vid-out.c b/drivers/media/platform/vivid/vivid-vid-out.c
index 64e4d66482c1..f92f4496d527 100644
--- a/drivers/media/platform/vivid/vivid-vid-out.c
+++ b/drivers/media/platform/vivid/vivid-vid-out.c
@@ -25,6 +25,7 @@
#include <media/v4l2-common.h>
#include <media/v4l2-event.h>
#include <media/v4l2-dv-timings.h>
+#include <media/v4l2-rect.h>
#include "vivid-core.h"
#include "vivid-vid-common.h"
@@ -376,16 +377,16 @@ int vivid_try_fmt_vid_out(struct file *file, void *priv,
} else {
struct v4l2_rect r = { 0, 0, mp->width, mp->height * factor };
- rect_set_min_size(&r, &vivid_min_rect);
- rect_set_max_size(&r, &vivid_max_rect);
+ v4l2_rect_set_min_size(&r, &vivid_min_rect);
+ v4l2_rect_set_max_size(&r, &vivid_max_rect);
if (dev->has_scaler_out && !dev->has_crop_out) {
struct v4l2_rect max_r = { 0, 0, MAX_ZOOM * w, MAX_ZOOM * h };
- rect_set_max_size(&r, &max_r);
+ v4l2_rect_set_max_size(&r, &max_r);
} else if (!dev->has_scaler_out && dev->has_compose_out && !dev->has_crop_out) {
- rect_set_max_size(&r, &dev->sink_rect);
+ v4l2_rect_set_max_size(&r, &dev->sink_rect);
} else if (!dev->has_scaler_out && !dev->has_compose_out) {
- rect_set_min_size(&r, &dev->sink_rect);
+ v4l2_rect_set_min_size(&r, &dev->sink_rect);
}
mp->width = r.width;
mp->height = r.height / factor;
@@ -473,7 +474,7 @@ int vivid_s_fmt_vid_out(struct file *file, void *priv,
if (dev->has_scaler_out) {
if (dev->has_crop_out)
- rect_map_inside(crop, &r);
+ v4l2_rect_map_inside(crop, &r);
else
*crop = r;
if (dev->has_compose_out && !dev->has_crop_out) {
@@ -488,9 +489,9 @@ int vivid_s_fmt_vid_out(struct file *file, void *priv,
factor * r.height * MAX_ZOOM
};
- rect_set_min_size(compose, &min_r);
- rect_set_max_size(compose, &max_r);
- rect_map_inside(compose, &dev->compose_bounds_out);
+ v4l2_rect_set_min_size(compose, &min_r);
+ v4l2_rect_set_max_size(compose, &max_r);
+ v4l2_rect_map_inside(compose, &dev->compose_bounds_out);
} else if (dev->has_compose_out) {
struct v4l2_rect min_r = {
0, 0,
@@ -503,36 +504,36 @@ int vivid_s_fmt_vid_out(struct file *file, void *priv,
factor * crop->height * MAX_ZOOM
};
- rect_set_min_size(compose, &min_r);
- rect_set_max_size(compose, &max_r);
- rect_map_inside(compose, &dev->compose_bounds_out);
+ v4l2_rect_set_min_size(compose, &min_r);
+ v4l2_rect_set_max_size(compose, &max_r);
+ v4l2_rect_map_inside(compose, &dev->compose_bounds_out);
}
} else if (dev->has_compose_out && !dev->has_crop_out) {
- rect_set_size_to(crop, &r);
+ v4l2_rect_set_size_to(crop, &r);
r.height *= factor;
- rect_set_size_to(compose, &r);
- rect_map_inside(compose, &dev->compose_bounds_out);
+ v4l2_rect_set_size_to(compose, &r);
+ v4l2_rect_map_inside(compose, &dev->compose_bounds_out);
} else if (!dev->has_compose_out) {
- rect_map_inside(crop, &r);
+ v4l2_rect_map_inside(crop, &r);
r.height /= factor;
- rect_set_size_to(compose, &r);
+ v4l2_rect_set_size_to(compose, &r);
} else {
r.height *= factor;
- rect_set_max_size(compose, &r);
- rect_map_inside(compose, &dev->compose_bounds_out);
+ v4l2_rect_set_max_size(compose, &r);
+ v4l2_rect_map_inside(compose, &dev->compose_bounds_out);
crop->top *= factor;
crop->height *= factor;
- rect_set_size_to(crop, compose);
- rect_map_inside(crop, &r);
+ v4l2_rect_set_size_to(crop, compose);
+ v4l2_rect_map_inside(crop, &r);
crop->top /= factor;
crop->height /= factor;
}
} else {
struct v4l2_rect r = { 0, 0, mp->width, mp->height };
- rect_set_size_to(crop, &r);
+ v4l2_rect_set_size_to(crop, &r);
r.height /= factor;
- rect_set_size_to(compose, &r);
+ v4l2_rect_set_size_to(compose, &r);
}
dev->fmt_out_rect.width = mp->width;
@@ -683,8 +684,8 @@ int vivid_vid_out_s_selection(struct file *file, void *fh, struct v4l2_selection
ret = vivid_vid_adjust_sel(s->flags, &s->r);
if (ret)
return ret;
- rect_set_min_size(&s->r, &vivid_min_rect);
- rect_set_max_size(&s->r, &dev->fmt_out_rect);
+ v4l2_rect_set_min_size(&s->r, &vivid_min_rect);
+ v4l2_rect_set_max_size(&s->r, &dev->fmt_out_rect);
if (dev->has_scaler_out) {
struct v4l2_rect max_rect = {
0, 0,
@@ -692,7 +693,7 @@ int vivid_vid_out_s_selection(struct file *file, void *fh, struct v4l2_selection
(dev->sink_rect.height / factor) * MAX_ZOOM
};
- rect_set_max_size(&s->r, &max_rect);
+ v4l2_rect_set_max_size(&s->r, &max_rect);
if (dev->has_compose_out) {
struct v4l2_rect min_rect = {
0, 0,
@@ -705,23 +706,23 @@ int vivid_vid_out_s_selection(struct file *file, void *fh, struct v4l2_selection
(s->r.height * factor) * MAX_ZOOM
};
- rect_set_min_size(compose, &min_rect);
- rect_set_max_size(compose, &max_rect);
- rect_map_inside(compose, &dev->compose_bounds_out);
+ v4l2_rect_set_min_size(compose, &min_rect);
+ v4l2_rect_set_max_size(compose, &max_rect);
+ v4l2_rect_map_inside(compose, &dev->compose_bounds_out);
}
} else if (dev->has_compose_out) {
s->r.top *= factor;
s->r.height *= factor;
- rect_set_max_size(&s->r, &dev->sink_rect);
- rect_set_size_to(compose, &s->r);
- rect_map_inside(compose, &dev->compose_bounds_out);
+ v4l2_rect_set_max_size(&s->r, &dev->sink_rect);
+ v4l2_rect_set_size_to(compose, &s->r);
+ v4l2_rect_map_inside(compose, &dev->compose_bounds_out);
s->r.top /= factor;
s->r.height /= factor;
} else {
- rect_set_size_to(&s->r, &dev->sink_rect);
+ v4l2_rect_set_size_to(&s->r, &dev->sink_rect);
s->r.height /= factor;
}
- rect_map_inside(&s->r, &dev->fmt_out_rect);
+ v4l2_rect_map_inside(&s->r, &dev->fmt_out_rect);
*crop = s->r;
break;
case V4L2_SEL_TGT_COMPOSE:
@@ -730,9 +731,9 @@ int vivid_vid_out_s_selection(struct file *file, void *fh, struct v4l2_selection
ret = vivid_vid_adjust_sel(s->flags, &s->r);
if (ret)
return ret;
- rect_set_min_size(&s->r, &vivid_min_rect);
- rect_set_max_size(&s->r, &dev->sink_rect);
- rect_map_inside(&s->r, &dev->compose_bounds_out);
+ v4l2_rect_set_min_size(&s->r, &vivid_min_rect);
+ v4l2_rect_set_max_size(&s->r, &dev->sink_rect);
+ v4l2_rect_map_inside(&s->r, &dev->compose_bounds_out);
s->r.top /= factor;
s->r.height /= factor;
if (dev->has_scaler_out) {
@@ -748,35 +749,35 @@ int vivid_vid_out_s_selection(struct file *file, void *fh, struct v4l2_selection
s->r.height / MAX_ZOOM
};
- rect_set_min_size(&fmt, &min_rect);
+ v4l2_rect_set_min_size(&fmt, &min_rect);
if (!dev->has_crop_out)
- rect_set_max_size(&fmt, &max_rect);
- if (!rect_same_size(&dev->fmt_out_rect, &fmt) &&
+ v4l2_rect_set_max_size(&fmt, &max_rect);
+ if (!v4l2_rect_same_size(&dev->fmt_out_rect, &fmt) &&
vb2_is_busy(&dev->vb_vid_out_q))
return -EBUSY;
if (dev->has_crop_out) {
- rect_set_min_size(crop, &min_rect);
- rect_set_max_size(crop, &max_rect);
+ v4l2_rect_set_min_size(crop, &min_rect);
+ v4l2_rect_set_max_size(crop, &max_rect);
}
dev->fmt_out_rect = fmt;
} else if (dev->has_crop_out) {
struct v4l2_rect fmt = dev->fmt_out_rect;
- rect_set_min_size(&fmt, &s->r);
- if (!rect_same_size(&dev->fmt_out_rect, &fmt) &&
+ v4l2_rect_set_min_size(&fmt, &s->r);
+ if (!v4l2_rect_same_size(&dev->fmt_out_rect, &fmt) &&
vb2_is_busy(&dev->vb_vid_out_q))
return -EBUSY;
dev->fmt_out_rect = fmt;
- rect_set_size_to(crop, &s->r);
- rect_map_inside(crop, &dev->fmt_out_rect);
+ v4l2_rect_set_size_to(crop, &s->r);
+ v4l2_rect_map_inside(crop, &dev->fmt_out_rect);
} else {
- if (!rect_same_size(&s->r, &dev->fmt_out_rect) &&
+ if (!v4l2_rect_same_size(&s->r, &dev->fmt_out_rect) &&
vb2_is_busy(&dev->vb_vid_out_q))
return -EBUSY;
- rect_set_size_to(&dev->fmt_out_rect, &s->r);
- rect_set_size_to(crop, &s->r);
+ v4l2_rect_set_size_to(&dev->fmt_out_rect, &s->r);
+ v4l2_rect_set_size_to(crop, &s->r);
crop->height /= factor;
- rect_map_inside(crop, &dev->fmt_out_rect);
+ v4l2_rect_map_inside(crop, &dev->fmt_out_rect);
}
s->r.top *= factor;
s->r.height *= factor;
@@ -901,7 +902,7 @@ int vidioc_try_fmt_vid_out_overlay(struct file *file, void *priv,
for (j = i + 1; j < win->clipcount; j++) {
struct v4l2_rect *r2 = &dev->try_clips_out[j].c;
- if (rect_overlap(r1, r2))
+ if (v4l2_rect_overlap(r1, r2))
return -EINVAL;
}
}
diff --git a/drivers/media/platform/vsp1/vsp1.h b/drivers/media/platform/vsp1/vsp1.h
index 910d6b8e8b50..46738b6c5f72 100644
--- a/drivers/media/platform/vsp1/vsp1.h
+++ b/drivers/media/platform/vsp1/vsp1.h
@@ -26,7 +26,6 @@
struct clk;
struct device;
-struct vsp1_dl;
struct vsp1_drm;
struct vsp1_entity;
struct vsp1_platform_data;
@@ -49,6 +48,7 @@ struct vsp1_uds;
struct vsp1_device_info {
u32 version;
+ unsigned int gen;
unsigned int features;
unsigned int rpf_count;
unsigned int uds_count;
@@ -85,8 +85,6 @@ struct vsp1_device {
struct media_entity_operations media_ops;
struct vsp1_drm *drm;
-
- bool use_dl;
};
int vsp1_device_get(struct vsp1_device *vsp1);
@@ -104,14 +102,4 @@ static inline void vsp1_write(struct vsp1_device *vsp1, u32 reg, u32 data)
iowrite32(data, vsp1->mmio + reg);
}
-#include "vsp1_dl.h"
-
-static inline void vsp1_mod_write(struct vsp1_entity *e, u32 reg, u32 data)
-{
- if (e->vsp1->use_dl)
- vsp1_dl_add(e, reg, data);
- else
- vsp1_write(e->vsp1, reg, data);
-}
-
#endif /* __VSP1_H__ */
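
With use_dl and vsp1_mod_write() gone, register writes are no longer routed through a per-device flag: the pipeline hands each entity an explicit display list and the entity's .configure() operation queues its writes to it, as the bru_configure() hunk further down shows. A minimal sketch of that callback shape, assuming the driver's private headers; the entity name and register offset are illustrative, only vsp1_dl_list_write() and the callback signature come from this series.

#include "vsp1.h"
#include "vsp1_dl.h"
#include "vsp1_entity.h"
#include "vsp1_pipe.h"

#define EXAMPLE_XYZ_CTRL 0x2000         /* illustrative register offset */

/* Illustrative entity configure callback: every write goes to the display
 * list it is handed, never straight to the MMIO space.
 */
static void example_xyz_configure(struct vsp1_entity *entity,
                                  struct vsp1_pipeline *pipe,
                                  struct vsp1_dl_list *dl)
{
        vsp1_dl_list_write(dl, EXAMPLE_XYZ_CTRL, 0);
}
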
diff --git a/drivers/media/platform/vsp1/vsp1_bru.c b/drivers/media/platform/vsp1/vsp1_bru.c
index cb0dbc15ddad..b1068c018011 100644
--- a/drivers/media/platform/vsp1/vsp1_bru.c
+++ b/drivers/media/platform/vsp1/vsp1_bru.c
@@ -18,6 +18,8 @@
#include "vsp1.h"
#include "vsp1_bru.h"
+#include "vsp1_dl.h"
+#include "vsp1_pipe.h"
#include "vsp1_rwpf.h"
#include "vsp1_video.h"
@@ -28,9 +30,10 @@
* Device Access
*/
-static inline void vsp1_bru_write(struct vsp1_bru *bru, u32 reg, u32 data)
+static inline void vsp1_bru_write(struct vsp1_bru *bru, struct vsp1_dl_list *dl,
+ u32 reg, u32 data)
{
- vsp1_mod_write(&bru->entity, reg, data);
+ vsp1_dl_list_write(dl, reg, data);
}
/* -----------------------------------------------------------------------------
@@ -42,13 +45,9 @@ static int bru_s_ctrl(struct v4l2_ctrl *ctrl)
struct vsp1_bru *bru =
container_of(ctrl->handler, struct vsp1_bru, ctrls);
- if (!vsp1_entity_is_streaming(&bru->entity))
- return 0;
-
switch (ctrl->id) {
case V4L2_CID_BG_COLOR:
- vsp1_bru_write(bru, VI6_BRU_VIRRPF_COL, ctrl->val |
- (0xff << VI6_BRU_VIRRPF_COL_A_SHIFT));
+ bru->bgcolor = ctrl->val;
break;
}
@@ -60,116 +59,7 @@ static const struct v4l2_ctrl_ops bru_ctrl_ops = {
};
/* -----------------------------------------------------------------------------
- * V4L2 Subdevice Core Operations
- */
-
-static int bru_s_stream(struct v4l2_subdev *subdev, int enable)
-{
- struct vsp1_pipeline *pipe = to_vsp1_pipeline(&subdev->entity);
- struct vsp1_bru *bru = to_bru(subdev);
- struct v4l2_mbus_framefmt *format;
- unsigned int flags;
- unsigned int i;
- int ret;
-
- ret = vsp1_entity_set_streaming(&bru->entity, enable);
- if (ret < 0)
- return ret;
-
- if (!enable)
- return 0;
-
- format = &bru->entity.formats[bru->entity.source_pad];
-
- /* The hardware is extremely flexible but we have no userspace API to
- * expose all the parameters, nor is it clear whether we would have use
- * cases for all the supported modes. Let's just harcode the parameters
- * to sane default values for now.
- */
-
- /* Disable dithering and enable color data normalization unless the
- * format at the pipeline output is premultiplied.
- */
- flags = pipe->output ? pipe->output->format.flags : 0;
- vsp1_bru_write(bru, VI6_BRU_INCTRL,
- flags & V4L2_PIX_FMT_FLAG_PREMUL_ALPHA ?
- 0 : VI6_BRU_INCTRL_NRM);
-
- /* Set the background position to cover the whole output image. */
- vsp1_bru_write(bru, VI6_BRU_VIRRPF_SIZE,
- (format->width << VI6_BRU_VIRRPF_SIZE_HSIZE_SHIFT) |
- (format->height << VI6_BRU_VIRRPF_SIZE_VSIZE_SHIFT));
- vsp1_bru_write(bru, VI6_BRU_VIRRPF_LOC, 0);
-
- /* Route BRU input 1 as SRC input to the ROP unit and configure the ROP
- * unit with a NOP operation to make BRU input 1 available as the
- * Blend/ROP unit B SRC input.
- */
- vsp1_bru_write(bru, VI6_BRU_ROP, VI6_BRU_ROP_DSTSEL_BRUIN(1) |
- VI6_BRU_ROP_CROP(VI6_ROP_NOP) |
- VI6_BRU_ROP_AROP(VI6_ROP_NOP));
-
- for (i = 0; i < bru->entity.source_pad; ++i) {
- bool premultiplied = false;
- u32 ctrl = 0;
-
- /* Configure all Blend/ROP units corresponding to an enabled BRU
- * input for alpha blending. Blend/ROP units corresponding to
- * disabled BRU inputs are used in ROP NOP mode to ignore the
- * SRC input.
- */
- if (bru->inputs[i].rpf) {
- ctrl |= VI6_BRU_CTRL_RBC;
-
- premultiplied = bru->inputs[i].rpf->format.flags
- & V4L2_PIX_FMT_FLAG_PREMUL_ALPHA;
- } else {
- ctrl |= VI6_BRU_CTRL_CROP(VI6_ROP_NOP)
- | VI6_BRU_CTRL_AROP(VI6_ROP_NOP);
- }
-
- /* Select the virtual RPF as the Blend/ROP unit A DST input to
- * serve as a background color.
- */
- if (i == 0)
- ctrl |= VI6_BRU_CTRL_DSTSEL_VRPF;
-
- /* Route BRU inputs 0 to 3 as SRC inputs to Blend/ROP units A to
- * D in that order. The Blend/ROP unit B SRC is hardwired to the
- * ROP unit output, the corresponding register bits must be set
- * to 0.
- */
- if (i != 1)
- ctrl |= VI6_BRU_CTRL_SRCSEL_BRUIN(i);
-
- vsp1_bru_write(bru, VI6_BRU_CTRL(i), ctrl);
-
- /* Harcode the blending formula to
- *
- * DSTc = DSTc * (1 - SRCa) + SRCc * SRCa
- * DSTa = DSTa * (1 - SRCa) + SRCa
- *
- * when the SRC input isn't premultiplied, and to
- *
- * DSTc = DSTc * (1 - SRCa) + SRCc
- * DSTa = DSTa * (1 - SRCa) + SRCa
- *
- * otherwise.
- */
- vsp1_bru_write(bru, VI6_BRU_BLD(i),
- VI6_BRU_BLD_CCMDX_255_SRC_A |
- (premultiplied ? VI6_BRU_BLD_CCMDY_COEFY :
- VI6_BRU_BLD_CCMDY_SRC_A) |
- VI6_BRU_BLD_ACMDX_255_SRC_A |
- VI6_BRU_BLD_ACMDY_COEFY |
- (0xff << VI6_BRU_BLD_COEFY_SHIFT));
- }
-
- return 0;
-}
-
-/* -----------------------------------------------------------------------------
- * V4L2 Subdevice Pad Operations
+ * V4L2 Subdevice Operations
*/
/*
@@ -186,24 +76,9 @@ static int bru_enum_mbus_code(struct v4l2_subdev *subdev,
MEDIA_BUS_FMT_ARGB8888_1X32,
MEDIA_BUS_FMT_AYUV8_1X32,
};
- struct vsp1_bru *bru = to_bru(subdev);
- struct v4l2_mbus_framefmt *format;
-
- if (code->pad == BRU_PAD_SINK(0)) {
- if (code->index >= ARRAY_SIZE(codes))
- return -EINVAL;
-
- code->code = codes[code->index];
- } else {
- if (code->index)
- return -EINVAL;
-
- format = vsp1_entity_get_pad_format(&bru->entity, cfg,
- BRU_PAD_SINK(0), code->which);
- code->code = format->code;
- }
- return 0;
+ return vsp1_subdev_enum_mbus_code(subdev, cfg, code, codes,
+ ARRAY_SIZE(codes));
}
static int bru_enum_frame_size(struct v4l2_subdev *subdev,
@@ -227,32 +102,14 @@ static int bru_enum_frame_size(struct v4l2_subdev *subdev,
static struct v4l2_rect *bru_get_compose(struct vsp1_bru *bru,
struct v4l2_subdev_pad_config *cfg,
- unsigned int pad, u32 which)
-{
- switch (which) {
- case V4L2_SUBDEV_FORMAT_TRY:
- return v4l2_subdev_get_try_crop(&bru->entity.subdev, cfg, pad);
- case V4L2_SUBDEV_FORMAT_ACTIVE:
- return &bru->inputs[pad].compose;
- default:
- return NULL;
- }
-}
-
-static int bru_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_format *fmt)
+ unsigned int pad)
{
- struct vsp1_bru *bru = to_bru(subdev);
-
- fmt->format = *vsp1_entity_get_pad_format(&bru->entity, cfg, fmt->pad,
- fmt->which);
-
- return 0;
+ return v4l2_subdev_get_try_compose(&bru->entity.subdev, cfg, pad);
}
-static void bru_try_format(struct vsp1_bru *bru, struct v4l2_subdev_pad_config *cfg,
- unsigned int pad, struct v4l2_mbus_framefmt *fmt,
- enum v4l2_subdev_format_whence which)
+static void bru_try_format(struct vsp1_bru *bru,
+ struct v4l2_subdev_pad_config *config,
+ unsigned int pad, struct v4l2_mbus_framefmt *fmt)
{
struct v4l2_mbus_framefmt *format;
@@ -266,8 +123,8 @@ static void bru_try_format(struct vsp1_bru *bru, struct v4l2_subdev_pad_config *
default:
/* The BRU can't perform format conversion. */
- format = vsp1_entity_get_pad_format(&bru->entity, cfg,
- BRU_PAD_SINK(0), which);
+ format = vsp1_entity_get_pad_format(&bru->entity, config,
+ BRU_PAD_SINK(0));
fmt->code = format->code;
break;
}
@@ -278,23 +135,28 @@ static void bru_try_format(struct vsp1_bru *bru, struct v4l2_subdev_pad_config *
fmt->colorspace = V4L2_COLORSPACE_SRGB;
}
-static int bru_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
+static int bru_set_format(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vsp1_bru *bru = to_bru(subdev);
+ struct v4l2_subdev_pad_config *config;
struct v4l2_mbus_framefmt *format;
- bru_try_format(bru, cfg, fmt->pad, &fmt->format, fmt->which);
+ config = vsp1_entity_get_pad_config(&bru->entity, cfg, fmt->which);
+ if (!config)
+ return -EINVAL;
+
+ bru_try_format(bru, config, fmt->pad, &fmt->format);
- format = vsp1_entity_get_pad_format(&bru->entity, cfg, fmt->pad,
- fmt->which);
+ format = vsp1_entity_get_pad_format(&bru->entity, config, fmt->pad);
*format = fmt->format;
/* Reset the compose rectangle */
if (fmt->pad != bru->entity.source_pad) {
struct v4l2_rect *compose;
- compose = bru_get_compose(bru, cfg, fmt->pad, fmt->which);
+ compose = bru_get_compose(bru, config, fmt->pad);
compose->left = 0;
compose->top = 0;
compose->width = format->width;
@@ -306,8 +168,8 @@ static int bru_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_con
unsigned int i;
for (i = 0; i <= bru->entity.source_pad; ++i) {
- format = vsp1_entity_get_pad_format(&bru->entity, cfg,
- i, fmt->which);
+ format = vsp1_entity_get_pad_format(&bru->entity,
+ config, i);
format->code = fmt->format.code;
}
}
@@ -320,6 +182,7 @@ static int bru_get_selection(struct v4l2_subdev *subdev,
struct v4l2_subdev_selection *sel)
{
struct vsp1_bru *bru = to_bru(subdev);
+ struct v4l2_subdev_pad_config *config;
if (sel->pad == bru->entity.source_pad)
return -EINVAL;
@@ -333,7 +196,12 @@ static int bru_get_selection(struct v4l2_subdev *subdev,
return 0;
case V4L2_SEL_TGT_COMPOSE:
- sel->r = *bru_get_compose(bru, cfg, sel->pad, sel->which);
+ config = vsp1_entity_get_pad_config(&bru->entity, cfg,
+ sel->which);
+ if (!config)
+ return -EINVAL;
+
+ sel->r = *bru_get_compose(bru, config, sel->pad);
return 0;
default:
@@ -346,6 +214,7 @@ static int bru_set_selection(struct v4l2_subdev *subdev,
struct v4l2_subdev_selection *sel)
{
struct vsp1_bru *bru = to_bru(subdev);
+ struct v4l2_subdev_pad_config *config;
struct v4l2_mbus_framefmt *format;
struct v4l2_rect *compose;
@@ -355,57 +224,161 @@ static int bru_set_selection(struct v4l2_subdev *subdev,
if (sel->target != V4L2_SEL_TGT_COMPOSE)
return -EINVAL;
+ config = vsp1_entity_get_pad_config(&bru->entity, cfg, sel->which);
+ if (!config)
+ return -EINVAL;
+
/* The compose rectangle top left corner must be inside the output
* frame.
*/
- format = vsp1_entity_get_pad_format(&bru->entity, cfg,
- bru->entity.source_pad, sel->which);
+ format = vsp1_entity_get_pad_format(&bru->entity, config,
+ bru->entity.source_pad);
sel->r.left = clamp_t(unsigned int, sel->r.left, 0, format->width - 1);
sel->r.top = clamp_t(unsigned int, sel->r.top, 0, format->height - 1);
/* Scaling isn't supported, the compose rectangle size must be identical
* to the sink format size.
*/
- format = vsp1_entity_get_pad_format(&bru->entity, cfg, sel->pad,
- sel->which);
+ format = vsp1_entity_get_pad_format(&bru->entity, config, sel->pad);
sel->r.width = format->width;
sel->r.height = format->height;
- compose = bru_get_compose(bru, cfg, sel->pad, sel->which);
+ compose = bru_get_compose(bru, config, sel->pad);
*compose = sel->r;
return 0;
}
-/* -----------------------------------------------------------------------------
- * V4L2 Subdevice Operations
- */
-
-static struct v4l2_subdev_video_ops bru_video_ops = {
- .s_stream = bru_s_stream,
-};
-
static struct v4l2_subdev_pad_ops bru_pad_ops = {
+ .init_cfg = vsp1_entity_init_cfg,
.enum_mbus_code = bru_enum_mbus_code,
.enum_frame_size = bru_enum_frame_size,
- .get_fmt = bru_get_format,
+ .get_fmt = vsp1_subdev_get_pad_format,
.set_fmt = bru_set_format,
.get_selection = bru_get_selection,
.set_selection = bru_set_selection,
};
static struct v4l2_subdev_ops bru_ops = {
- .video = &bru_video_ops,
.pad = &bru_pad_ops,
};
/* -----------------------------------------------------------------------------
+ * VSP1 Entity Operations
+ */
+
+static void bru_configure(struct vsp1_entity *entity,
+ struct vsp1_pipeline *pipe,
+ struct vsp1_dl_list *dl)
+{
+ struct vsp1_bru *bru = to_bru(&entity->subdev);
+ struct v4l2_mbus_framefmt *format;
+ unsigned int flags;
+ unsigned int i;
+
+ format = vsp1_entity_get_pad_format(&bru->entity, bru->entity.config,
+ bru->entity.source_pad);
+
+ /* The hardware is extremely flexible but we have no userspace API to
+ * expose all the parameters, nor is it clear whether we would have use
+ * cases for all the supported modes. Let's just hardcode the parameters
+ * to sane default values for now.
+ */
+
+ /* Disable dithering and enable color data normalization unless the
+ * format at the pipeline output is premultiplied.
+ */
+ flags = pipe->output ? pipe->output->format.flags : 0;
+ vsp1_bru_write(bru, dl, VI6_BRU_INCTRL,
+ flags & V4L2_PIX_FMT_FLAG_PREMUL_ALPHA ?
+ 0 : VI6_BRU_INCTRL_NRM);
+
+ /* Set the background position to cover the whole output image and
+ * configure its color.
+ */
+ vsp1_bru_write(bru, dl, VI6_BRU_VIRRPF_SIZE,
+ (format->width << VI6_BRU_VIRRPF_SIZE_HSIZE_SHIFT) |
+ (format->height << VI6_BRU_VIRRPF_SIZE_VSIZE_SHIFT));
+ vsp1_bru_write(bru, dl, VI6_BRU_VIRRPF_LOC, 0);
+
+ vsp1_bru_write(bru, dl, VI6_BRU_VIRRPF_COL, bru->bgcolor |
+ (0xff << VI6_BRU_VIRRPF_COL_A_SHIFT));
+
+ /* Route BRU input 1 as SRC input to the ROP unit and configure the ROP
+ * unit with a NOP operation to make BRU input 1 available as the
+ * Blend/ROP unit B SRC input.
+ */
+ vsp1_bru_write(bru, dl, VI6_BRU_ROP, VI6_BRU_ROP_DSTSEL_BRUIN(1) |
+ VI6_BRU_ROP_CROP(VI6_ROP_NOP) |
+ VI6_BRU_ROP_AROP(VI6_ROP_NOP));
+
+ for (i = 0; i < bru->entity.source_pad; ++i) {
+ bool premultiplied = false;
+ u32 ctrl = 0;
+
+ /* Configure all Blend/ROP units corresponding to an enabled BRU
+ * input for alpha blending. Blend/ROP units corresponding to
+ * disabled BRU inputs are used in ROP NOP mode to ignore the
+ * SRC input.
+ */
+ if (bru->inputs[i].rpf) {
+ ctrl |= VI6_BRU_CTRL_RBC;
+
+ premultiplied = bru->inputs[i].rpf->format.flags
+ & V4L2_PIX_FMT_FLAG_PREMUL_ALPHA;
+ } else {
+ ctrl |= VI6_BRU_CTRL_CROP(VI6_ROP_NOP)
+ | VI6_BRU_CTRL_AROP(VI6_ROP_NOP);
+ }
+
+ /* Select the virtual RPF as the Blend/ROP unit A DST input to
+ * serve as a background color.
+ */
+ if (i == 0)
+ ctrl |= VI6_BRU_CTRL_DSTSEL_VRPF;
+
+ /* Route BRU inputs 0 to 3 as SRC inputs to Blend/ROP units A to
+ * D in that order. The Blend/ROP unit B SRC is hardwired to the
+ * ROP unit output, the corresponding register bits must be set
+ * to 0.
+ */
+ if (i != 1)
+ ctrl |= VI6_BRU_CTRL_SRCSEL_BRUIN(i);
+
+ vsp1_bru_write(bru, dl, VI6_BRU_CTRL(i), ctrl);
+
+ /* Hardcode the blending formula to
+ *
+ * DSTc = DSTc * (1 - SRCa) + SRCc * SRCa
+ * DSTa = DSTa * (1 - SRCa) + SRCa
+ *
+ * when the SRC input isn't premultiplied, and to
+ *
+ * DSTc = DSTc * (1 - SRCa) + SRCc
+ * DSTa = DSTa * (1 - SRCa) + SRCa
+ *
+ * otherwise.
+ */
+ vsp1_bru_write(bru, dl, VI6_BRU_BLD(i),
+ VI6_BRU_BLD_CCMDX_255_SRC_A |
+ (premultiplied ? VI6_BRU_BLD_CCMDY_COEFY :
+ VI6_BRU_BLD_CCMDY_SRC_A) |
+ VI6_BRU_BLD_ACMDX_255_SRC_A |
+ VI6_BRU_BLD_ACMDY_COEFY |
+ (0xff << VI6_BRU_BLD_COEFY_SHIFT));
+ }
+}
+
+static const struct vsp1_entity_operations bru_entity_ops = {
+ .configure = bru_configure,
+};
+
+/* -----------------------------------------------------------------------------
* Initialization and Cleanup
*/
struct vsp1_bru *vsp1_bru_create(struct vsp1_device *vsp1)
{
- struct v4l2_subdev *subdev;
struct vsp1_bru *bru;
int ret;
@@ -413,31 +386,21 @@ struct vsp1_bru *vsp1_bru_create(struct vsp1_device *vsp1)
if (bru == NULL)
return ERR_PTR(-ENOMEM);
+ bru->entity.ops = &bru_entity_ops;
bru->entity.type = VSP1_ENTITY_BRU;
- ret = vsp1_entity_init(vsp1, &bru->entity,
- vsp1->info->num_bru_inputs + 1);
+ ret = vsp1_entity_init(vsp1, &bru->entity, "bru",
+ vsp1->info->num_bru_inputs + 1, &bru_ops);
if (ret < 0)
return ERR_PTR(ret);
- /* Initialize the V4L2 subdev. */
- subdev = &bru->entity.subdev;
- v4l2_subdev_init(subdev, &bru_ops);
-
- subdev->entity.ops = &vsp1->media_ops;
- subdev->internal_ops = &vsp1_subdev_internal_ops;
- snprintf(subdev->name, sizeof(subdev->name), "%s bru",
- dev_name(vsp1->dev));
- v4l2_set_subdevdata(subdev, bru);
- subdev->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
-
- vsp1_entity_init_formats(subdev, NULL);
-
/* Initialize the control handler. */
v4l2_ctrl_handler_init(&bru->ctrls, 1);
v4l2_ctrl_new_std(&bru->ctrls, &bru_ctrl_ops, V4L2_CID_BG_COLOR,
0, 0xffffff, 1, 0);
+ bru->bgcolor = 0;
+
bru->entity.subdev.ctrl_handler = &bru->ctrls;
if (bru->ctrls.error) {
diff --git a/drivers/media/platform/vsp1/vsp1_bru.h b/drivers/media/platform/vsp1/vsp1_bru.h
index dbac9686ea69..828a3fcadea8 100644
--- a/drivers/media/platform/vsp1/vsp1_bru.h
+++ b/drivers/media/platform/vsp1/vsp1_bru.h
@@ -31,8 +31,9 @@ struct vsp1_bru {
struct {
struct vsp1_rwpf *rpf;
- struct v4l2_rect compose;
} inputs[VSP1_MAX_RPF];
+
+ u32 bgcolor;
};
static inline struct vsp1_bru *to_bru(struct v4l2_subdev *subdev)
diff --git a/drivers/media/platform/vsp1/vsp1_dl.c b/drivers/media/platform/vsp1/vsp1_dl.c
index 1a9a58588f84..e238d9b9376b 100644
--- a/drivers/media/platform/vsp1/vsp1_dl.c
+++ b/drivers/media/platform/vsp1/vsp1_dl.c
@@ -18,139 +18,406 @@
#include "vsp1.h"
#include "vsp1_dl.h"
-#include "vsp1_pipe.h"
-/*
- * Global resources
- *
- * - Display-related interrupts (can be used for vblank evasion ?)
- * - Display-list enable
- * - Header-less for WPF0
- * - DL swap
- */
-
-#define VSP1_DL_BODY_SIZE (2 * 4 * 256)
+#define VSP1_DL_NUM_ENTRIES 256
#define VSP1_DL_NUM_LISTS 3
+#define VSP1_DLH_INT_ENABLE (1 << 1)
+#define VSP1_DLH_AUTO_START (1 << 0)
+
+struct vsp1_dl_header_list {
+ u32 num_bytes;
+ u32 addr;
+} __attribute__((__packed__));
+
+struct vsp1_dl_header {
+ u32 num_lists;
+ struct vsp1_dl_header_list lists[8];
+ u32 next_header;
+ u32 flags;
+} __attribute__((__packed__));
+
struct vsp1_dl_entry {
u32 addr;
u32 data;
} __attribute__((__packed__));
-struct vsp1_dl_list {
+/**
+ * struct vsp1_dl_body - Display list body
+ * @list: entry in the display list's list of bodies
+ * @vsp1: the VSP1 device
+ * @entries: array of entries
+ * @dma: DMA address of the entries
+ * @size: size of the DMA memory in bytes
+ * @num_entries: number of stored entries
+ */
+struct vsp1_dl_body {
+ struct list_head list;
+ struct vsp1_device *vsp1;
+
+ struct vsp1_dl_entry *entries;
+ dma_addr_t dma;
size_t size;
- int reg_count;
- bool in_use;
+ unsigned int num_entries;
+};
- struct vsp1_dl_entry *body;
+/**
+ * struct vsp1_dl_list - Display list
+ * @list: entry in the display list manager lists
+ * @dlm: the display list manager
+ * @header: display list header, NULL for headerless lists
+ * @dma: DMA address for the header
+ * @body0: first display list body
+ * @fragments: list of extra display list bodies
+ */
+struct vsp1_dl_list {
+ struct list_head list;
+ struct vsp1_dl_manager *dlm;
+
+ struct vsp1_dl_header *header;
dma_addr_t dma;
+
+ struct vsp1_dl_body body0;
+ struct list_head fragments;
+};
+
+enum vsp1_dl_mode {
+ VSP1_DL_MODE_HEADER,
+ VSP1_DL_MODE_HEADERLESS,
};
/**
- * struct vsp1_dl - Display List manager
+ * struct vsp1_dl_manager - Display List manager
+ * @index: index of the related WPF
+ * @mode: display list operation mode (header or headerless)
* @vsp1: the VSP1 device
* @lock: protects the active, queued and pending lists
- * @lists.all: array of all allocate display lists
- * @lists.active: list currently being processed (loaded) by hardware
- * @lists.queued: list queued to the hardware (written to the DL registers)
- * @lists.pending: list waiting to be queued to the hardware
- * @lists.write: list being written to by software
+ * @free: list of all free display lists
+ * @active: list currently being processed (loaded) by hardware
+ * @queued: list queued to the hardware (written to the DL registers)
+ * @pending: list waiting to be queued to the hardware
*/
-struct vsp1_dl {
+struct vsp1_dl_manager {
+ unsigned int index;
+ enum vsp1_dl_mode mode;
struct vsp1_device *vsp1;
spinlock_t lock;
+ struct list_head free;
+ struct vsp1_dl_list *active;
+ struct vsp1_dl_list *queued;
+ struct vsp1_dl_list *pending;
+};
- size_t size;
- dma_addr_t dma;
- void *mem;
+/* -----------------------------------------------------------------------------
+ * Display List Body Management
+ */
+
+/*
+ * Initialize a display list body object and allocate DMA memory for the body
+ * data. The display list body object is expected to have been initialized to
+ * 0 when allocated.
+ */
+static int vsp1_dl_body_init(struct vsp1_device *vsp1,
+ struct vsp1_dl_body *dlb, unsigned int num_entries,
+ size_t extra_size)
+{
+ size_t size = num_entries * sizeof(*dlb->entries) + extra_size;
- struct {
- struct vsp1_dl_list all[VSP1_DL_NUM_LISTS];
+ dlb->vsp1 = vsp1;
+ dlb->size = size;
- struct vsp1_dl_list *active;
- struct vsp1_dl_list *queued;
- struct vsp1_dl_list *pending;
- struct vsp1_dl_list *write;
- } lists;
-};
+ dlb->entries = dma_alloc_wc(vsp1->dev, dlb->size, &dlb->dma,
+ GFP_KERNEL);
+ if (!dlb->entries)
+ return -ENOMEM;
+
+ return 0;
+}
+
+/*
+ * Clean up a display list body and free the DMA memory allocated for it.
+ */
+static void vsp1_dl_body_cleanup(struct vsp1_dl_body *dlb)
+{
+ dma_free_wc(dlb->vsp1->dev, dlb->size, dlb->entries, dlb->dma);
+}
+
+/**
+ * vsp1_dl_fragment_alloc - Allocate a display list fragment
+ * @vsp1: The VSP1 device
+ * @num_entries: The maximum number of entries that the fragment can contain
+ *
+ * Allocate a display list fragment with enough memory to contain the requested
+ * number of entries.
+ *
+ * Return a pointer to a fragment on success or NULL if memory can't be
+ * allocated.
+ */
+struct vsp1_dl_body *vsp1_dl_fragment_alloc(struct vsp1_device *vsp1,
+ unsigned int num_entries)
+{
+ struct vsp1_dl_body *dlb;
+ int ret;
+
+ dlb = kzalloc(sizeof(*dlb), GFP_KERNEL);
+ if (!dlb)
+ return NULL;
+
+ ret = vsp1_dl_body_init(vsp1, dlb, num_entries, 0);
+ if (ret < 0) {
+ kfree(dlb);
+ return NULL;
+ }
+
+ return dlb;
+}
+
+/**
+ * vsp1_dl_fragment_free - Free a display list fragment
+ * @dlb: The fragment
+ *
+ * Free the given display list fragment and the associated DMA memory.
+ *
+ * Fragments must only be freed explicitly if they are not added to a display
+ * list, as the display list will take ownership of them and free them
+ * otherwise. Manual free typically happens at cleanup time for fragments that
+ * have been allocated but not used.
+ *
+ * Passing a NULL pointer to this function is safe, in that case no operation
+ * will be performed.
+ */
+void vsp1_dl_fragment_free(struct vsp1_dl_body *dlb)
+{
+ if (!dlb)
+ return;
+
+ vsp1_dl_body_cleanup(dlb);
+ kfree(dlb);
+}
+
+/**
+ * vsp1_dl_fragment_write - Write a register to a display list fragment
+ * @dlb: The fragment
+ * @reg: The register address
+ * @data: The register value
+ *
+ * Write the given register and value to the display list fragment. The maximum
+ * number of entries that can be written in a fragment is specified when the
+ * fragment is allocated by vsp1_dl_fragment_alloc().
+ */
+void vsp1_dl_fragment_write(struct vsp1_dl_body *dlb, u32 reg, u32 data)
+{
+ dlb->entries[dlb->num_entries].addr = reg;
+ dlb->entries[dlb->num_entries].data = data;
+ dlb->num_entries++;
+}
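
A brief sketch of how a caller might use the fragment helpers above, assuming the driver's private headers are included; the 16-entry budget and register offsets are illustrative. A fragment that never gets attached to a display list must still be released with vsp1_dl_fragment_free().

static struct vsp1_dl_body *example_make_fragment(struct vsp1_device *vsp1)
{
        struct vsp1_dl_body *dlb;

        dlb = vsp1_dl_fragment_alloc(vsp1, 16);
        if (!dlb)
                return NULL;

        /* At most 16 register writes may be queued to this fragment. */
        vsp1_dl_fragment_write(dlb, 0x0100, 0x1);
        vsp1_dl_fragment_write(dlb, 0x0104, 0xff);

        return dlb;
}
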
/* -----------------------------------------------------------------------------
* Display List Transaction Management
*/
-static void vsp1_dl_free_list(struct vsp1_dl_list *list)
+static struct vsp1_dl_list *vsp1_dl_list_alloc(struct vsp1_dl_manager *dlm)
{
- if (!list)
- return;
+ struct vsp1_dl_list *dl;
+ size_t header_size;
+ int ret;
+
+ dl = kzalloc(sizeof(*dl), GFP_KERNEL);
+ if (!dl)
+ return NULL;
- list->in_use = false;
+ INIT_LIST_HEAD(&dl->fragments);
+ dl->dlm = dlm;
+
+ /* Initialize the display list body and allocate DMA memory for the body
+ * and the optional header. Both are allocated together to avoid memory
+ * fragmentation, with the header located right after the body in
+ * memory.
+ */
+ header_size = dlm->mode == VSP1_DL_MODE_HEADER
+ ? ALIGN(sizeof(struct vsp1_dl_header), 8)
+ : 0;
+
+ ret = vsp1_dl_body_init(dlm->vsp1, &dl->body0, VSP1_DL_NUM_ENTRIES,
+ header_size);
+ if (ret < 0) {
+ kfree(dl);
+ return NULL;
+ }
+
+ if (dlm->mode == VSP1_DL_MODE_HEADER) {
+ size_t header_offset = VSP1_DL_NUM_ENTRIES
+ * sizeof(*dl->body0.entries);
+
+ dl->header = ((void *)dl->body0.entries) + header_offset;
+ dl->dma = dl->body0.dma + header_offset;
+
+ memset(dl->header, 0, sizeof(*dl->header));
+ dl->header->lists[0].addr = dl->body0.dma;
+ dl->header->flags = VSP1_DLH_INT_ENABLE;
+ }
+
+ return dl;
}
-void vsp1_dl_reset(struct vsp1_dl *dl)
+static void vsp1_dl_list_free_fragments(struct vsp1_dl_list *dl)
{
- unsigned int i;
+ struct vsp1_dl_body *dlb, *next;
- dl->lists.active = NULL;
- dl->lists.queued = NULL;
- dl->lists.pending = NULL;
- dl->lists.write = NULL;
+ list_for_each_entry_safe(dlb, next, &dl->fragments, list) {
+ list_del(&dlb->list);
+ vsp1_dl_body_cleanup(dlb);
+ kfree(dlb);
+ }
+}
- for (i = 0; i < ARRAY_SIZE(dl->lists.all); ++i)
- dl->lists.all[i].in_use = false;
+static void vsp1_dl_list_free(struct vsp1_dl_list *dl)
+{
+ vsp1_dl_body_cleanup(&dl->body0);
+ vsp1_dl_list_free_fragments(dl);
+ kfree(dl);
}
-void vsp1_dl_begin(struct vsp1_dl *dl)
+/**
+ * vsp1_dl_list_get - Get a free display list
+ * @dlm: The display list manager
+ *
+ * Get a display list from the pool of free lists and return it.
+ *
+ * This function must be called without the display list manager lock held.
+ */
+struct vsp1_dl_list *vsp1_dl_list_get(struct vsp1_dl_manager *dlm)
{
- struct vsp1_dl_list *list = NULL;
+ struct vsp1_dl_list *dl = NULL;
unsigned long flags;
- unsigned int i;
- spin_lock_irqsave(&dl->lock, flags);
+ spin_lock_irqsave(&dlm->lock, flags);
- for (i = 0; i < ARRAY_SIZE(dl->lists.all); ++i) {
- if (!dl->lists.all[i].in_use) {
- list = &dl->lists.all[i];
- break;
- }
+ if (!list_empty(&dlm->free)) {
+ dl = list_first_entry(&dlm->free, struct vsp1_dl_list, list);
+ list_del(&dl->list);
}
- if (!list) {
- list = dl->lists.pending;
- dl->lists.pending = NULL;
- }
+ spin_unlock_irqrestore(&dlm->lock, flags);
+
+ return dl;
+}
+
+/* This function must be called with the display list manager lock held. */
+static void __vsp1_dl_list_put(struct vsp1_dl_list *dl)
+{
+ if (!dl)
+ return;
+
+ vsp1_dl_list_free_fragments(dl);
+ dl->body0.num_entries = 0;
+
+ list_add_tail(&dl->list, &dl->dlm->free);
+}
+
+/**
+ * vsp1_dl_list_put - Release a display list
+ * @dl: The display list
+ *
+ * Release the display list and return it to the pool of free lists.
+ *
+ * Passing a NULL pointer to this function is safe, in which case no operation
+ * will be performed.
+ */
+void vsp1_dl_list_put(struct vsp1_dl_list *dl)
+{
+ unsigned long flags;
- spin_unlock_irqrestore(&dl->lock, flags);
+ if (!dl)
+ return;
- dl->lists.write = list;
+ spin_lock_irqsave(&dl->dlm->lock, flags);
+ __vsp1_dl_list_put(dl);
+ spin_unlock_irqrestore(&dl->dlm->lock, flags);
+}
- list->in_use = true;
- list->reg_count = 0;
+/**
+ * vsp1_dl_list_write - Write a register to the display list
+ * @dl: The display list
+ * @reg: The register address
+ * @data: The register value
+ *
+ * Write the given register and value to the display list. Up to 256 registers
+ * can be written per display list.
+ */
+void vsp1_dl_list_write(struct vsp1_dl_list *dl, u32 reg, u32 data)
+{
+ vsp1_dl_fragment_write(&dl->body0, reg, data);
}
-void vsp1_dl_add(struct vsp1_entity *e, u32 reg, u32 data)
+/**
+ * vsp1_dl_list_add_fragment - Add a fragment to the display list
+ * @dl: The display list
+ * @dlb: The fragment
+ *
+ * Add a display list body as a fragment to a display list. Registers contained
+ * in fragments are processed after registers contained in the main display
+ * list, in the order in which fragments are added.
+ *
+ * Adding a fragment to a display list passes ownership of the fragment to the
+ * list. The caller must not touch the fragment after this call, and must not
+ * free it explicitly with vsp1_dl_fragment_free().
+ *
+ * Fragments are only usable for display lists in header mode. Attempting to
+ * add a fragment to a header-less display list will return an error.
+ */
+int vsp1_dl_list_add_fragment(struct vsp1_dl_list *dl,
+ struct vsp1_dl_body *dlb)
{
- struct vsp1_pipeline *pipe = to_vsp1_pipeline(&e->subdev.entity);
- struct vsp1_dl *dl = pipe->dl;
- struct vsp1_dl_list *list = dl->lists.write;
+ /* Multi-body lists are only available in header mode. */
+ if (dl->dlm->mode != VSP1_DL_MODE_HEADER)
+ return -EINVAL;
- list->body[list->reg_count].addr = reg;
- list->body[list->reg_count].data = data;
- list->reg_count++;
+ list_add_tail(&dlb->list, &dl->fragments);
+ return 0;
}
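A hedged sketch of the ownership rule spelled out in the comment above; the helper is hypothetical and the single register write purely illustrative:

static int example_attach_fragment(struct vsp1_device *vsp1,
				   struct vsp1_dl_list *dl)
{
	struct vsp1_dl_body *dlb;
	int ret;

	dlb = vsp1_dl_fragment_alloc(vsp1, 1);
	if (!dlb)
		return -ENOMEM;

	/* Illustrative write: park the SRU routing node. */
	vsp1_dl_fragment_write(dlb, VI6_DPR_SRU_ROUTE, VI6_DPR_NODE_UNUSED);

	ret = vsp1_dl_list_add_fragment(dl, dlb);
	if (ret < 0) {
		/* Header-less lists reject fragments; we still own dlb. */
		vsp1_dl_fragment_free(dlb);
		return ret;
	}

	/* Ownership has moved to the list; do not touch or free dlb here. */
	return 0;
}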
-void vsp1_dl_commit(struct vsp1_dl *dl)
+void vsp1_dl_list_commit(struct vsp1_dl_list *dl)
{
- struct vsp1_device *vsp1 = dl->vsp1;
- struct vsp1_dl_list *list;
+ struct vsp1_dl_manager *dlm = dl->dlm;
+ struct vsp1_device *vsp1 = dlm->vsp1;
unsigned long flags;
bool update;
- list = dl->lists.write;
- dl->lists.write = NULL;
+ spin_lock_irqsave(&dlm->lock, flags);
+
+ if (dl->dlm->mode == VSP1_DL_MODE_HEADER) {
+ struct vsp1_dl_header_list *hdr = dl->header->lists;
+ struct vsp1_dl_body *dlb;
+ unsigned int num_lists = 0;
+
+ /* Fill the header with the display list bodies' addresses and
+ * sizes. The address of the first body has already been filled
+ * when the display list was allocated.
+ *
+ * In header mode the caller guarantees that the hardware is
+ * idle at this point.
+ */
+ hdr->num_bytes = dl->body0.num_entries
+ * sizeof(*dl->header->lists);
+
+ list_for_each_entry(dlb, &dl->fragments, list) {
+ num_lists++;
+ hdr++;
+
+ hdr->addr = dlb->dma;
+ hdr->num_bytes = dlb->num_entries
+ * sizeof(*dl->header->lists);
+ }
+
+ dl->header->num_lists = num_lists;
+ vsp1_write(vsp1, VI6_DL_HDR_ADDR(dlm->index), dl->dma);
- spin_lock_irqsave(&dl->lock, flags);
+ dlm->active = dl;
+ goto done;
+ }
/* Once the UPD bit has been set the hardware can start processing the
* display list at any time and we can't touch the address and size
@@ -159,8 +426,8 @@ void vsp1_dl_commit(struct vsp1_dl *dl)
*/
update = !!(vsp1_read(vsp1, VI6_DL_BODY_SIZE) & VI6_DL_BODY_SIZE_UPD);
if (update) {
- vsp1_dl_free_list(dl->lists.pending);
- dl->lists.pending = list;
+ __vsp1_dl_list_put(dlm->pending);
+ dlm->pending = dl;
goto done;
}
@@ -168,42 +435,51 @@ void vsp1_dl_commit(struct vsp1_dl *dl)
* The UPD bit will be cleared by the device when the display list is
* processed.
*/
- vsp1_write(vsp1, VI6_DL_HDR_ADDR(0), list->dma);
+ vsp1_write(vsp1, VI6_DL_HDR_ADDR(0), dl->body0.dma);
vsp1_write(vsp1, VI6_DL_BODY_SIZE, VI6_DL_BODY_SIZE_UPD |
- (list->reg_count * 8));
+ (dl->body0.num_entries * sizeof(*dl->header->lists)));
- vsp1_dl_free_list(dl->lists.queued);
- dl->lists.queued = list;
+ __vsp1_dl_list_put(dlm->queued);
+ dlm->queued = dl;
done:
- spin_unlock_irqrestore(&dl->lock, flags);
+ spin_unlock_irqrestore(&dlm->lock, flags);
}
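To tie the pieces together, a minimal sketch of one complete transaction against the new API; the single register write is illustrative only, and a list obtained but never committed would have to be returned with vsp1_dl_list_put():

static int example_run_transaction(struct vsp1_dl_manager *dlm)
{
	struct vsp1_dl_list *dl;

	/* NULL means the free pool is currently empty. */
	dl = vsp1_dl_list_get(dlm);
	if (!dl)
		return -ENOMEM;

	/* Queue register writes; they are applied when the hardware
	 * processes the list, not immediately.
	 */
	vsp1_dl_list_write(dl, VI6_DPR_SRU_ROUTE, VI6_DPR_NODE_UNUSED);

	/* Commit hands the list to the manager; it is recycled into the
	 * free pool from the frame end / display start interrupt handlers.
	 */
	vsp1_dl_list_commit(dl);

	return 0;
}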
/* -----------------------------------------------------------------------------
- * Interrupt Handling
+ * Display List Manager
*/
-void vsp1_dl_irq_display_start(struct vsp1_dl *dl)
+/* Interrupt Handling */
+void vsp1_dlm_irq_display_start(struct vsp1_dl_manager *dlm)
{
- spin_lock(&dl->lock);
+ spin_lock(&dlm->lock);
/* The display start interrupt signals the end of the display list
* processing by the device. The active display list, if any, won't be
* accessed anymore and can be reused.
*/
- if (dl->lists.active) {
- vsp1_dl_free_list(dl->lists.active);
- dl->lists.active = NULL;
- }
+ __vsp1_dl_list_put(dlm->active);
+ dlm->active = NULL;
- spin_unlock(&dl->lock);
+ spin_unlock(&dlm->lock);
}
-void vsp1_dl_irq_frame_end(struct vsp1_dl *dl)
+void vsp1_dlm_irq_frame_end(struct vsp1_dl_manager *dlm)
{
- struct vsp1_device *vsp1 = dl->vsp1;
+ struct vsp1_device *vsp1 = dlm->vsp1;
+
+ spin_lock(&dlm->lock);
+
+ __vsp1_dl_list_put(dlm->active);
+ dlm->active = NULL;
- spin_lock(&dl->lock);
+ /* Header mode is used for mem-to-mem pipelines only. We don't need to
+ * perform any operation as there can't be any new display list queued
+ * in that case.
+ */
+ if (dlm->mode == VSP1_DL_MODE_HEADER)
+ goto done;
/* The UPD bit set indicates that the commit operation raced with the
* interrupt and occurred after the frame end event and UPD clear but
@@ -216,42 +492,39 @@ void vsp1_dl_irq_frame_end(struct vsp1_dl *dl)
/* The device starts processing the queued display list right after the
* frame end interrupt. The display list thus becomes active.
*/
- if (dl->lists.queued) {
- WARN_ON(dl->lists.active);
- dl->lists.active = dl->lists.queued;
- dl->lists.queued = NULL;
+ if (dlm->queued) {
+ dlm->active = dlm->queued;
+ dlm->queued = NULL;
}
/* Now that the UPD bit has been cleared we can queue the next display
* list to the hardware if one has been prepared.
*/
- if (dl->lists.pending) {
- struct vsp1_dl_list *list = dl->lists.pending;
+ if (dlm->pending) {
+ struct vsp1_dl_list *dl = dlm->pending;
- vsp1_write(vsp1, VI6_DL_HDR_ADDR(0), list->dma);
+ vsp1_write(vsp1, VI6_DL_HDR_ADDR(0), dl->body0.dma);
vsp1_write(vsp1, VI6_DL_BODY_SIZE, VI6_DL_BODY_SIZE_UPD |
- (list->reg_count * 8));
+ (dl->body0.num_entries *
+ sizeof(*dl->header->lists)));
- dl->lists.queued = list;
- dl->lists.pending = NULL;
+ dlm->queued = dl;
+ dlm->pending = NULL;
}
done:
- spin_unlock(&dl->lock);
+ spin_unlock(&dlm->lock);
}
-/* -----------------------------------------------------------------------------
- * Hardware Setup
- */
-
-void vsp1_dl_setup(struct vsp1_device *vsp1)
+/* Hardware Setup */
+void vsp1_dlm_setup(struct vsp1_device *vsp1)
{
u32 ctrl = (256 << VI6_DL_CTRL_AR_WAIT_SHIFT)
| VI6_DL_CTRL_DC2 | VI6_DL_CTRL_DC1 | VI6_DL_CTRL_DC0
| VI6_DL_CTRL_DLE;
- /* The DRM pipeline operates with header-less display lists in
- * Continuous Frame Mode.
+ /* The DRM pipeline operates with display lists in Continuous Frame
+ * Mode; all other pipelines use manual start.
*/
if (vsp1->drm)
ctrl |= VI6_DL_CTRL_CFM0 | VI6_DL_CTRL_NH0;
@@ -260,46 +533,64 @@ void vsp1_dl_setup(struct vsp1_device *vsp1)
vsp1_write(vsp1, VI6_DL_SWAP, VI6_DL_SWAP_LWS);
}
-/* -----------------------------------------------------------------------------
- * Initialization and Cleanup
- */
+void vsp1_dlm_reset(struct vsp1_dl_manager *dlm)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&dlm->lock, flags);
-struct vsp1_dl *vsp1_dl_create(struct vsp1_device *vsp1)
+ __vsp1_dl_list_put(dlm->active);
+ __vsp1_dl_list_put(dlm->queued);
+ __vsp1_dl_list_put(dlm->pending);
+
+ spin_unlock_irqrestore(&dlm->lock, flags);
+
+ dlm->active = NULL;
+ dlm->queued = NULL;
+ dlm->pending = NULL;
+}
+
+struct vsp1_dl_manager *vsp1_dlm_create(struct vsp1_device *vsp1,
+ unsigned int index,
+ unsigned int prealloc)
{
- struct vsp1_dl *dl;
+ struct vsp1_dl_manager *dlm;
unsigned int i;
- dl = kzalloc(sizeof(*dl), GFP_KERNEL);
- if (!dl)
+ dlm = devm_kzalloc(vsp1->dev, sizeof(*dlm), GFP_KERNEL);
+ if (!dlm)
return NULL;
- spin_lock_init(&dl->lock);
+ dlm->index = index;
+ dlm->mode = index == 0 && !vsp1->info->uapi
+ ? VSP1_DL_MODE_HEADERLESS : VSP1_DL_MODE_HEADER;
+ dlm->vsp1 = vsp1;
- dl->vsp1 = vsp1;
- dl->size = VSP1_DL_BODY_SIZE * ARRAY_SIZE(dl->lists.all);
+ spin_lock_init(&dlm->lock);
+ INIT_LIST_HEAD(&dlm->free);
- dl->mem = dma_alloc_wc(vsp1->dev, dl->size, &dl->dma,
- GFP_KERNEL);
- if (!dl->mem) {
- kfree(dl);
- return NULL;
- }
+ for (i = 0; i < prealloc; ++i) {
+ struct vsp1_dl_list *dl;
- for (i = 0; i < ARRAY_SIZE(dl->lists.all); ++i) {
- struct vsp1_dl_list *list = &dl->lists.all[i];
+ dl = vsp1_dl_list_alloc(dlm);
+ if (!dl)
+ return NULL;
- list->size = VSP1_DL_BODY_SIZE;
- list->reg_count = 0;
- list->in_use = false;
- list->dma = dl->dma + VSP1_DL_BODY_SIZE * i;
- list->body = dl->mem + VSP1_DL_BODY_SIZE * i;
+ list_add_tail(&dl->list, &dlm->free);
}
- return dl;
+ return dlm;
}
-void vsp1_dl_destroy(struct vsp1_dl *dl)
+void vsp1_dlm_destroy(struct vsp1_dl_manager *dlm)
{
- dma_free_wc(dl->vsp1->dev, dl->size, dl->mem, dl->dma);
- kfree(dl);
+ struct vsp1_dl_list *dl, *next;
+
+ if (!dlm)
+ return;
+
+ list_for_each_entry_safe(dl, next, &dlm->free, list) {
+ list_del(&dl->list);
+ vsp1_dl_list_free(dl);
+ }
}
diff --git a/drivers/media/platform/vsp1/vsp1_dl.h b/drivers/media/platform/vsp1/vsp1_dl.h
index 448c4250e54c..de387cd4d745 100644
--- a/drivers/media/platform/vsp1/vsp1_dl.h
+++ b/drivers/media/platform/vsp1/vsp1_dl.h
@@ -13,30 +13,33 @@
#ifndef __VSP1_DL_H__
#define __VSP1_DL_H__
-#include "vsp1_entity.h"
+#include <linux/types.h>
struct vsp1_device;
-struct vsp1_dl;
-
-struct vsp1_dl *vsp1_dl_create(struct vsp1_device *vsp1);
-void vsp1_dl_destroy(struct vsp1_dl *dl);
-
-void vsp1_dl_setup(struct vsp1_device *vsp1);
-
-void vsp1_dl_reset(struct vsp1_dl *dl);
-void vsp1_dl_begin(struct vsp1_dl *dl);
-void vsp1_dl_add(struct vsp1_entity *e, u32 reg, u32 data);
-void vsp1_dl_commit(struct vsp1_dl *dl);
-
-void vsp1_dl_irq_display_start(struct vsp1_dl *dl);
-void vsp1_dl_irq_frame_end(struct vsp1_dl *dl);
-
-static inline void vsp1_dl_mod_write(struct vsp1_entity *e, u32 reg, u32 data)
-{
- if (e->vsp1->use_dl)
- vsp1_dl_add(e, reg, data);
- else
- vsp1_write(e->vsp1, reg, data);
-}
+struct vsp1_dl_fragment;
+struct vsp1_dl_list;
+struct vsp1_dl_manager;
+
+void vsp1_dlm_setup(struct vsp1_device *vsp1);
+
+struct vsp1_dl_manager *vsp1_dlm_create(struct vsp1_device *vsp1,
+ unsigned int index,
+ unsigned int prealloc);
+void vsp1_dlm_destroy(struct vsp1_dl_manager *dlm);
+void vsp1_dlm_reset(struct vsp1_dl_manager *dlm);
+void vsp1_dlm_irq_display_start(struct vsp1_dl_manager *dlm);
+void vsp1_dlm_irq_frame_end(struct vsp1_dl_manager *dlm);
+
+struct vsp1_dl_list *vsp1_dl_list_get(struct vsp1_dl_manager *dlm);
+void vsp1_dl_list_put(struct vsp1_dl_list *dl);
+void vsp1_dl_list_write(struct vsp1_dl_list *dl, u32 reg, u32 data);
+void vsp1_dl_list_commit(struct vsp1_dl_list *dl);
+
+struct vsp1_dl_body *vsp1_dl_fragment_alloc(struct vsp1_device *vsp1,
+ unsigned int num_entries);
+void vsp1_dl_fragment_free(struct vsp1_dl_body *dlb);
+void vsp1_dl_fragment_write(struct vsp1_dl_body *dlb, u32 reg, u32 data);
+int vsp1_dl_list_add_fragment(struct vsp1_dl_list *dl,
+ struct vsp1_dl_body *dlb);
#endif /* __VSP1_DL_H__ */
diff --git a/drivers/media/platform/vsp1/vsp1_drm.c b/drivers/media/platform/vsp1/vsp1_drm.c
index 021fe5778cd1..fc4bbc401e67 100644
--- a/drivers/media/platform/vsp1/vsp1_drm.c
+++ b/drivers/media/platform/vsp1/vsp1_drm.c
@@ -13,10 +13,10 @@
#include <linux/device.h>
#include <linux/slab.h>
-#include <linux/vsp1.h>
#include <media/media-entity.h>
#include <media/v4l2-subdev.h>
+#include <media/vsp1.h>
#include "vsp1.h"
#include "vsp1_bru.h"
@@ -26,18 +26,14 @@
#include "vsp1_pipe.h"
#include "vsp1_rwpf.h"
+
/* -----------------------------------------------------------------------------
- * Runtime Handling
+ * Interrupt Handling
*/
-static void vsp1_drm_pipeline_frame_end(struct vsp1_pipeline *pipe)
+void vsp1_drm_display_start(struct vsp1_device *vsp1)
{
- unsigned long flags;
-
- spin_lock_irqsave(&pipe->irqlock, flags);
- if (pipe->num_inputs)
- vsp1_pipeline_run(pipe);
- spin_unlock_irqrestore(&pipe->irqlock, flags);
+ vsp1_dlm_irq_display_start(vsp1->drm->pipe.output->dlm);
}
/* -----------------------------------------------------------------------------
@@ -97,12 +93,14 @@ int vsp1_du_setup_lif(struct device *dev, unsigned int width,
media_entity_pipeline_stop(&pipe->output->entity.subdev.entity);
for (i = 0; i < bru->entity.source_pad; ++i) {
+ vsp1->drm->inputs[i].enabled = false;
bru->inputs[i].rpf = NULL;
pipe->inputs[i] = NULL;
}
pipe->num_inputs = 0;
+ vsp1_dlm_reset(pipe->output->dlm);
vsp1_device_put(vsp1);
dev_dbg(vsp1->dev, "%s: pipeline disabled\n", __func__);
@@ -110,8 +108,6 @@ int vsp1_du_setup_lif(struct device *dev, unsigned int width,
return 0;
}
- vsp1_dl_reset(vsp1->drm->dl);
-
/* Configure the format at the BRU sinks and propagate it through the
* pipeline.
*/
@@ -222,16 +218,11 @@ void vsp1_du_atomic_begin(struct device *dev)
{
struct vsp1_device *vsp1 = dev_get_drvdata(dev);
struct vsp1_pipeline *pipe = &vsp1->drm->pipe;
- unsigned long flags;
-
- spin_lock_irqsave(&pipe->irqlock, flags);
vsp1->drm->num_inputs = pipe->num_inputs;
- spin_unlock_irqrestore(&pipe->irqlock, flags);
-
/* Prepare the display list. */
- vsp1_dl_begin(vsp1->drm->dl);
+ pipe->dl = vsp1_dl_list_get(pipe->output->dlm);
}
EXPORT_SYMBOL_GPL(vsp1_du_atomic_begin);
@@ -244,10 +235,13 @@ EXPORT_SYMBOL_GPL(vsp1_du_atomic_begin);
* @mem: DMA addresses of the memory buffers (one per plane)
* @src: the source crop rectangle for the RPF
* @dst: the destination compose rectangle for the BRU input
+ * @alpha: global alpha value for the input
+ * @zpos: the Z-order position of the input
*
* Configure the VSP to perform composition of the image referenced by @mem
* through RPF @rpf_index, using the @src crop rectangle and the @dst
- * composition rectangle. The Z-order is fixed with RPF 0 at the bottom.
+ * composition rectangle. The Z-order is configurable, with higher @zpos values
+ * displayed on top.
*
* Image format as stored in memory is expressed as a V4L2 @pixelformat value.
* As a special case, setting the pixel format to 0 will disable the RPF. The
@@ -265,25 +259,17 @@ EXPORT_SYMBOL_GPL(vsp1_du_atomic_begin);
*
* This function isn't reentrant, the caller needs to serialize calls.
*
- * TODO: Implement Z-order control by decoupling the RPF index from the BRU
- * input index.
- *
* Return 0 on success or a negative error code on failure.
*/
-int vsp1_du_atomic_update(struct device *dev, unsigned int rpf_index,
- u32 pixelformat, unsigned int pitch,
- dma_addr_t mem[2], const struct v4l2_rect *src,
- const struct v4l2_rect *dst)
+int vsp1_du_atomic_update_ext(struct device *dev, unsigned int rpf_index,
+ u32 pixelformat, unsigned int pitch,
+ dma_addr_t mem[2], const struct v4l2_rect *src,
+ const struct v4l2_rect *dst, unsigned int alpha,
+ unsigned int zpos)
{
struct vsp1_device *vsp1 = dev_get_drvdata(dev);
- struct vsp1_pipeline *pipe = &vsp1->drm->pipe;
const struct vsp1_format_info *fmtinfo;
- struct v4l2_subdev_selection sel;
- struct v4l2_subdev_format format;
- struct vsp1_rwpf_memory memory;
struct vsp1_rwpf *rpf;
- unsigned long flags;
- int ret;
if (rpf_index >= vsp1->info->rpf_count)
return -EINVAL;
@@ -294,31 +280,20 @@ int vsp1_du_atomic_update(struct device *dev, unsigned int rpf_index,
dev_dbg(vsp1->dev, "%s: RPF%u: disable requested\n", __func__,
rpf_index);
- spin_lock_irqsave(&pipe->irqlock, flags);
-
- if (pipe->inputs[rpf_index]) {
- /* Remove the RPF from the pipeline if it was previously
- * enabled.
- */
- vsp1->bru->inputs[rpf_index].rpf = NULL;
- pipe->inputs[rpf_index] = NULL;
-
- pipe->num_inputs--;
- }
-
- spin_unlock_irqrestore(&pipe->irqlock, flags);
-
+ vsp1->drm->inputs[rpf_index].enabled = false;
return 0;
}
dev_dbg(vsp1->dev,
- "%s: RPF%u: (%u,%u)/%ux%u -> (%u,%u)/%ux%u (%08x), pitch %u dma { %pad, %pad }\n",
+ "%s: RPF%u: (%u,%u)/%ux%u -> (%u,%u)/%ux%u (%08x), pitch %u dma { %pad, %pad } zpos %u\n",
__func__, rpf_index,
src->left, src->top, src->width, src->height,
dst->left, dst->top, dst->width, dst->height,
- pixelformat, pitch, &mem[0], &mem[1]);
+ pixelformat, pitch, &mem[0], &mem[1], zpos);
- /* Set the stride at the RPF input. */
+ /* Store the format, stride, memory buffer address, crop and compose
+ * rectangles, and Z-order position for the input.
+ */
fmtinfo = vsp1_get_format_info(pixelformat);
if (!fmtinfo) {
 dev_dbg(vsp1->dev, "Unsupported pixel format %08x for RPF\n",
@@ -330,16 +305,40 @@ int vsp1_du_atomic_update(struct device *dev, unsigned int rpf_index,
rpf->format.num_planes = fmtinfo->planes;
rpf->format.plane_fmt[0].bytesperline = pitch;
rpf->format.plane_fmt[1].bytesperline = pitch;
+ rpf->alpha = alpha;
+
+ rpf->mem.addr[0] = mem[0];
+ rpf->mem.addr[1] = mem[1];
+ rpf->mem.addr[2] = 0;
+
+ vsp1->drm->inputs[rpf_index].crop = *src;
+ vsp1->drm->inputs[rpf_index].compose = *dst;
+ vsp1->drm->inputs[rpf_index].zpos = zpos;
+ vsp1->drm->inputs[rpf_index].enabled = true;
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(vsp1_du_atomic_update_ext);
+
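Seen from the DU driver side, a minimal sketch of one atomic update through the extended entry point added here; the geometry, pixel format, pitch, alpha value and DMA address are placeholders:

#include <linux/videodev2.h>
#include <media/vsp1.h>

static void example_du_commit_plane(struct device *vsp1_dev, dma_addr_t dma)
{
	struct v4l2_rect src = { .left = 0, .top = 0, .width = 1920, .height = 1080 };
	struct v4l2_rect dst = src;
	dma_addr_t mem[2] = { dma, 0 };

	vsp1_du_atomic_begin(vsp1_dev);

	/* RPF 0, assumed opaque (alpha 255), bottom-most plane (zpos 0).
	 * Passing pixelformat == 0 instead would disable the RPF.
	 */
	vsp1_du_atomic_update_ext(vsp1_dev, 0, V4L2_PIX_FMT_ARGB32, 1920 * 4,
				  mem, &src, &dst, 255, 0);

	vsp1_du_atomic_flush(vsp1_dev);
}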
+static int vsp1_du_setup_rpf_pipe(struct vsp1_device *vsp1,
+ struct vsp1_rwpf *rpf, unsigned int bru_input)
+{
+ struct v4l2_subdev_selection sel;
+ struct v4l2_subdev_format format;
+ const struct v4l2_rect *crop;
+ int ret;
/* Configure the format on the RPF sink pad and propagate it up to the
* BRU sink pad.
*/
+ crop = &vsp1->drm->inputs[rpf->entity.index].crop;
+
memset(&format, 0, sizeof(format));
format.which = V4L2_SUBDEV_FORMAT_ACTIVE;
format.pad = RWPF_PAD_SINK;
- format.format.width = src->width + src->left;
- format.format.height = src->height + src->top;
- format.format.code = fmtinfo->mbus;
+ format.format.width = crop->width + crop->left;
+ format.format.height = crop->height + crop->top;
+ format.format.code = rpf->fmtinfo->mbus;
format.format.field = V4L2_FIELD_NONE;
ret = v4l2_subdev_call(&rpf->entity.subdev, pad, set_fmt, NULL,
@@ -356,7 +355,7 @@ int vsp1_du_atomic_update(struct device *dev, unsigned int rpf_index,
sel.which = V4L2_SUBDEV_FORMAT_ACTIVE;
sel.pad = RWPF_PAD_SINK;
sel.target = V4L2_SEL_TGT_CROP;
- sel.r = *src;
+ sel.r = *crop;
ret = v4l2_subdev_call(&rpf->entity.subdev, pad, set_selection, NULL,
&sel);
@@ -391,7 +390,7 @@ int vsp1_du_atomic_update(struct device *dev, unsigned int rpf_index,
return ret;
/* BRU sink, propagate the format from the RPF source. */
- format.pad = rpf->entity.index;
+ format.pad = bru_input;
ret = v4l2_subdev_call(&vsp1->bru->entity.subdev, pad, set_fmt, NULL,
&format);
@@ -402,9 +401,9 @@ int vsp1_du_atomic_update(struct device *dev, unsigned int rpf_index,
__func__, format.format.width, format.format.height,
format.format.code, format.pad);
- sel.pad = rpf->entity.index;
+ sel.pad = bru_input;
sel.target = V4L2_SEL_TGT_COMPOSE;
- sel.r = *dst;
+ sel.r = vsp1->drm->inputs[rpf->entity.index].compose;
ret = v4l2_subdev_call(&vsp1->bru->entity.subdev, pad, set_selection,
NULL, &sel);
@@ -416,33 +415,13 @@ int vsp1_du_atomic_update(struct device *dev, unsigned int rpf_index,
__func__, sel.r.left, sel.r.top, sel.r.width, sel.r.height,
sel.pad);
- /* Store the compose rectangle coordinates in the RPF. */
- rpf->location.left = dst->left;
- rpf->location.top = dst->top;
-
- /* Set the memory buffer address. */
- memory.num_planes = fmtinfo->planes;
- memory.addr[0] = mem[0];
- memory.addr[1] = mem[1];
-
- rpf->ops->set_memory(rpf, &memory);
-
- spin_lock_irqsave(&pipe->irqlock, flags);
-
- /* If the RPF was previously stopped set the BRU input to the RPF and
- * store the RPF in the pipeline inputs array.
- */
- if (!pipe->inputs[rpf->entity.index]) {
- vsp1->bru->inputs[rpf_index].rpf = rpf;
- pipe->inputs[rpf->entity.index] = rpf;
- pipe->num_inputs++;
- }
-
- spin_unlock_irqrestore(&pipe->irqlock, flags);
-
return 0;
}
-EXPORT_SYMBOL_GPL(vsp1_du_atomic_update);
+
+static unsigned int rpf_zpos(struct vsp1_device *vsp1, struct vsp1_rwpf *rpf)
+{
+ return vsp1->drm->inputs[rpf->entity.index].zpos;
+}
/**
* vsp1_du_atomic_flush - Commit an atomic update
@@ -452,51 +431,96 @@ void vsp1_du_atomic_flush(struct device *dev)
{
struct vsp1_device *vsp1 = dev_get_drvdata(dev);
struct vsp1_pipeline *pipe = &vsp1->drm->pipe;
+ struct vsp1_rwpf *inputs[VSP1_MAX_RPF] = { NULL, };
struct vsp1_entity *entity;
unsigned long flags;
- bool stop = false;
+ unsigned int i;
int ret;
+ /* Count the number of enabled inputs and sort them by Z-order. */
+ pipe->num_inputs = 0;
+
+ for (i = 0; i < vsp1->info->rpf_count; ++i) {
+ struct vsp1_rwpf *rpf = vsp1->rpf[i];
+ unsigned int j;
+
+ if (!vsp1->drm->inputs[i].enabled) {
+ pipe->inputs[i] = NULL;
+ continue;
+ }
+
+ pipe->inputs[i] = rpf;
+
+ /* Insert the RPF in the sorted RPFs array. */
+ for (j = pipe->num_inputs++; j > 0; --j) {
+ if (rpf_zpos(vsp1, inputs[j-1]) <= rpf_zpos(vsp1, rpf))
+ break;
+ inputs[j] = inputs[j-1];
+ }
+
+ inputs[j] = rpf;
+ }
+
+ /* Setup the RPF input pipeline for every enabled input. */
+ for (i = 0; i < vsp1->info->num_bru_inputs; ++i) {
+ struct vsp1_rwpf *rpf = inputs[i];
+
+ if (!rpf) {
+ vsp1->bru->inputs[i].rpf = NULL;
+ continue;
+ }
+
+ vsp1->bru->inputs[i].rpf = rpf;
+ rpf->bru_input = i;
+ rpf->entity.sink_pad = i;
+
+ dev_dbg(vsp1->dev, "%s: connecting RPF.%u to BRU:%u\n",
+ __func__, rpf->entity.index, i);
+
+ ret = vsp1_du_setup_rpf_pipe(vsp1, rpf, i);
+ if (ret < 0)
+ dev_err(vsp1->dev,
+ "%s: failed to setup RPF.%u\n",
+ __func__, rpf->entity.index);
+ }
+
+ /* Configure all entities in the pipeline. */
list_for_each_entry(entity, &pipe->entities, list_pipe) {
/* Disconnect unused RPFs from the pipeline. */
if (entity->type == VSP1_ENTITY_RPF) {
struct vsp1_rwpf *rpf = to_rwpf(&entity->subdev);
if (!pipe->inputs[rpf->entity.index]) {
- vsp1_mod_write(entity, entity->route->reg,
- VI6_DPR_NODE_UNUSED);
+ vsp1_dl_list_write(pipe->dl, entity->route->reg,
+ VI6_DPR_NODE_UNUSED);
continue;
}
}
- vsp1_entity_route_setup(entity);
+ vsp1_entity_route_setup(entity, pipe->dl);
- ret = v4l2_subdev_call(&entity->subdev, video,
- s_stream, 1);
- if (ret < 0) {
- dev_err(vsp1->dev,
- "DRM pipeline start failure on entity %s\n",
- entity->subdev.name);
- return;
- }
- }
+ if (entity->ops->configure)
+ entity->ops->configure(entity, pipe, pipe->dl);
- vsp1_dl_commit(vsp1->drm->dl);
+ /* The memory buffer address must be applied after configuring
+ * the RPF to make sure the crop offsets are computed.
+ */
+ if (entity->type == VSP1_ENTITY_RPF)
+ vsp1_rwpf_set_memory(to_rwpf(&entity->subdev),
+ pipe->dl);
+ }
- spin_lock_irqsave(&pipe->irqlock, flags);
+ vsp1_dl_list_commit(pipe->dl);
+ pipe->dl = NULL;
/* Start or stop the pipeline if needed. */
if (!vsp1->drm->num_inputs && pipe->num_inputs) {
vsp1_write(vsp1, VI6_DISP_IRQ_STA, 0);
vsp1_write(vsp1, VI6_DISP_IRQ_ENB, VI6_DISP_IRQ_ENB_DSTE);
+ spin_lock_irqsave(&pipe->irqlock, flags);
vsp1_pipeline_run(pipe);
+ spin_unlock_irqrestore(&pipe->irqlock, flags);
} else if (vsp1->drm->num_inputs && !pipe->num_inputs) {
- stop = true;
- }
-
- spin_unlock_irqrestore(&pipe->irqlock, flags);
-
- if (stop) {
vsp1_write(vsp1, VI6_DISP_IRQ_ENB, 0);
vsp1_pipeline_stop(pipe);
}
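The loop above is a stable insertion sort of the enabled RPFs by zpos; as a reading aid, the same step isolated into a hypothetical helper, using rpf_zpos() as defined in this patch:

static unsigned int example_sort_inputs(struct vsp1_device *vsp1,
					struct vsp1_rwpf **sorted)
{
	unsigned int count = 0;
	unsigned int i, j;

	for (i = 0; i < vsp1->info->rpf_count; ++i) {
		struct vsp1_rwpf *rpf = vsp1->rpf[i];

		if (!vsp1->drm->inputs[i].enabled)
			continue;

		/* Shift entries with a higher zpos up one slot, then drop
		 * the new RPF into the hole. Equal zpos keeps RPF order, so
		 * the sort is stable; sorted[0] ends up bottom-most.
		 */
		for (j = count++; j > 0; --j) {
			if (rpf_zpos(vsp1, sorted[j - 1]) <= rpf_zpos(vsp1, rpf))
				break;
			sorted[j] = sorted[j - 1];
		}

		sorted[j] = rpf;
	}

	return count;
}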
@@ -562,14 +586,9 @@ int vsp1_drm_init(struct vsp1_device *vsp1)
if (!vsp1->drm)
return -ENOMEM;
- vsp1->drm->dl = vsp1_dl_create(vsp1);
- if (!vsp1->drm->dl)
- return -ENOMEM;
-
pipe = &vsp1->drm->pipe;
vsp1_pipeline_init(pipe);
- pipe->frame_end = vsp1_drm_pipeline_frame_end;
/* The DRM pipeline is static, add entities manually. */
for (i = 0; i < vsp1->info->rpf_count; ++i) {
@@ -586,12 +605,9 @@ int vsp1_drm_init(struct vsp1_device *vsp1)
pipe->lif = &vsp1->lif->entity;
pipe->output = vsp1->wpf[0];
- pipe->dl = vsp1->drm->dl;
-
return 0;
}
void vsp1_drm_cleanup(struct vsp1_device *vsp1)
{
- vsp1_dl_destroy(vsp1->drm->dl);
}
diff --git a/drivers/media/platform/vsp1/vsp1_drm.h b/drivers/media/platform/vsp1/vsp1_drm.h
index f68056838319..9e28ab9254ba 100644
--- a/drivers/media/platform/vsp1/vsp1_drm.h
+++ b/drivers/media/platform/vsp1/vsp1_drm.h
@@ -13,37 +13,32 @@
#ifndef __VSP1_DRM_H__
#define __VSP1_DRM_H__
-#include "vsp1_pipe.h"
+#include <linux/videodev2.h>
-struct vsp1_dl;
+#include "vsp1_pipe.h"
/**
* vsp1_drm - State for the API exposed to the DRM driver
- * @dl: display list for DRM pipeline operation
* @pipe: the VSP1 pipeline used for display
* @num_inputs: number of active pipeline inputs at the beginning of an update
- * @update: the pipeline configuration has been updated
+ * @inputs: source crop rectangle, destination compose rectangle and z-order
+ * position for every input
*/
struct vsp1_drm {
- struct vsp1_dl *dl;
struct vsp1_pipeline pipe;
unsigned int num_inputs;
- bool update;
+ struct {
+ bool enabled;
+ struct v4l2_rect crop;
+ struct v4l2_rect compose;
+ unsigned int zpos;
+ } inputs[VSP1_MAX_RPF];
};
int vsp1_drm_init(struct vsp1_device *vsp1);
void vsp1_drm_cleanup(struct vsp1_device *vsp1);
int vsp1_drm_create_links(struct vsp1_device *vsp1);
-int vsp1_du_init(struct device *dev);
-int vsp1_du_setup_lif(struct device *dev, unsigned int width,
- unsigned int height);
-void vsp1_du_atomic_begin(struct device *dev);
-int vsp1_du_atomic_update(struct device *dev, unsigned int rpf_index,
- u32 pixelformat, unsigned int pitch,
- dma_addr_t mem[2], const struct v4l2_rect *src,
- const struct v4l2_rect *dst);
-void vsp1_du_atomic_flush(struct device *dev);
-
+void vsp1_drm_display_start(struct vsp1_device *vsp1);
#endif /* __VSP1_DRM_H__ */
diff --git a/drivers/media/platform/vsp1/vsp1_drv.c b/drivers/media/platform/vsp1/vsp1_drv.c
index 25750a0e4631..e2d779fac0eb 100644
--- a/drivers/media/platform/vsp1/vsp1_drv.c
+++ b/drivers/media/platform/vsp1/vsp1_drv.c
@@ -30,6 +30,7 @@
#include "vsp1_hsit.h"
#include "vsp1_lif.h"
#include "vsp1_lut.h"
+#include "vsp1_pipe.h"
#include "vsp1_rwpf.h"
#include "vsp1_sru.h"
#include "vsp1_uds.h"
@@ -49,17 +50,15 @@ static irqreturn_t vsp1_irq_handler(int irq, void *data)
for (i = 0; i < vsp1->info->wpf_count; ++i) {
struct vsp1_rwpf *wpf = vsp1->wpf[i];
- struct vsp1_pipeline *pipe;
if (wpf == NULL)
continue;
- pipe = to_vsp1_pipeline(&wpf->entity.subdev.entity);
status = vsp1_read(vsp1, VI6_WPF_IRQ_STA(i));
vsp1_write(vsp1, VI6_WPF_IRQ_STA(i), ~status & mask);
if (status & VI6_WFP_IRQ_STA_FRE) {
- vsp1_pipeline_frame_end(pipe);
+ vsp1_pipeline_frame_end(wpf->pipe);
ret = IRQ_HANDLED;
}
}
@@ -68,14 +67,7 @@ static irqreturn_t vsp1_irq_handler(int irq, void *data)
vsp1_write(vsp1, VI6_DISP_IRQ_STA, ~status & VI6_DISP_IRQ_STA_DST);
if (status & VI6_DISP_IRQ_STA_DST) {
- struct vsp1_rwpf *wpf = vsp1->wpf[0];
- struct vsp1_pipeline *pipe;
-
- if (wpf) {
- pipe = to_vsp1_pipeline(&wpf->entity.subdev.entity);
- vsp1_pipeline_display_start(pipe);
- }
-
+ vsp1_drm_display_start(vsp1);
ret = IRQ_HANDLED;
}
@@ -387,13 +379,10 @@ static int vsp1_create_entities(struct vsp1_device *vsp1)
/* Register subdev nodes if the userspace API is enabled or initialize
* the DRM pipeline otherwise.
*/
- if (vsp1->info->uapi) {
- vsp1->use_dl = false;
+ if (vsp1->info->uapi)
ret = v4l2_device_register_subdev_nodes(&vsp1->v4l2_dev);
- } else {
- vsp1->use_dl = true;
+ else
ret = vsp1_drm_init(vsp1);
- }
if (ret < 0)
goto done;
@@ -465,8 +454,7 @@ static int vsp1_device_init(struct vsp1_device *vsp1)
vsp1_write(vsp1, VI6_DPR_HGT_SMPPT, (7 << VI6_DPR_SMPPT_TGW_SHIFT) |
(VI6_DPR_NODE_UNUSED << VI6_DPR_SMPPT_PT_SHIFT));
- if (vsp1->use_dl)
- vsp1_dl_setup(vsp1);
+ vsp1_dlm_setup(vsp1);
return 0;
}
@@ -570,6 +558,7 @@ static const struct dev_pm_ops vsp1_pm_ops = {
static const struct vsp1_device_info vsp1_device_infos[] = {
{
.version = VI6_IP_VERSION_MODEL_VSPS_H2,
+ .gen = 2,
.features = VSP1_HAS_BRU | VSP1_HAS_LUT | VSP1_HAS_SRU,
.rpf_count = 5,
.uds_count = 3,
@@ -578,6 +567,7 @@ static const struct vsp1_device_info vsp1_device_infos[] = {
.uapi = true,
}, {
.version = VI6_IP_VERSION_MODEL_VSPR_H2,
+ .gen = 2,
.features = VSP1_HAS_BRU | VSP1_HAS_SRU,
.rpf_count = 5,
.uds_count = 1,
@@ -586,6 +576,7 @@ static const struct vsp1_device_info vsp1_device_infos[] = {
.uapi = true,
}, {
.version = VI6_IP_VERSION_MODEL_VSPD_GEN2,
+ .gen = 2,
.features = VSP1_HAS_BRU | VSP1_HAS_LIF | VSP1_HAS_LUT,
.rpf_count = 4,
.uds_count = 1,
@@ -594,6 +585,7 @@ static const struct vsp1_device_info vsp1_device_infos[] = {
.uapi = true,
}, {
.version = VI6_IP_VERSION_MODEL_VSPS_M2,
+ .gen = 2,
.features = VSP1_HAS_BRU | VSP1_HAS_LUT | VSP1_HAS_SRU,
.rpf_count = 5,
.uds_count = 3,
@@ -602,6 +594,7 @@ static const struct vsp1_device_info vsp1_device_infos[] = {
.uapi = true,
}, {
.version = VI6_IP_VERSION_MODEL_VSPI_GEN3,
+ .gen = 3,
.features = VSP1_HAS_LUT | VSP1_HAS_SRU,
.rpf_count = 1,
.uds_count = 1,
@@ -609,6 +602,7 @@ static const struct vsp1_device_info vsp1_device_infos[] = {
.uapi = true,
}, {
.version = VI6_IP_VERSION_MODEL_VSPBD_GEN3,
+ .gen = 3,
.features = VSP1_HAS_BRU,
.rpf_count = 5,
.wpf_count = 1,
@@ -616,6 +610,7 @@ static const struct vsp1_device_info vsp1_device_infos[] = {
.uapi = true,
}, {
.version = VI6_IP_VERSION_MODEL_VSPBC_GEN3,
+ .gen = 3,
.features = VSP1_HAS_BRU | VSP1_HAS_LUT,
.rpf_count = 5,
.wpf_count = 1,
@@ -623,7 +618,8 @@ static const struct vsp1_device_info vsp1_device_infos[] = {
.uapi = true,
}, {
.version = VI6_IP_VERSION_MODEL_VSPD_GEN3,
- .features = VSP1_HAS_BRU | VSP1_HAS_LIF | VSP1_HAS_LUT,
+ .gen = 3,
+ .features = VSP1_HAS_BRU | VSP1_HAS_LIF,
.rpf_count = 5,
.wpf_count = 2,
.num_bru_inputs = 5,
diff --git a/drivers/media/platform/vsp1/vsp1_entity.c b/drivers/media/platform/vsp1/vsp1_entity.c
index 20a78fbd3691..3d070bcc6053 100644
--- a/drivers/media/platform/vsp1/vsp1_entity.c
+++ b/drivers/media/platform/vsp1/vsp1_entity.c
@@ -19,46 +19,11 @@
#include <media/v4l2-subdev.h>
#include "vsp1.h"
+#include "vsp1_dl.h"
#include "vsp1_entity.h"
-bool vsp1_entity_is_streaming(struct vsp1_entity *entity)
-{
- unsigned long flags;
- bool streaming;
-
- spin_lock_irqsave(&entity->lock, flags);
- streaming = entity->streaming;
- spin_unlock_irqrestore(&entity->lock, flags);
-
- return streaming;
-}
-
-int vsp1_entity_set_streaming(struct vsp1_entity *entity, bool streaming)
-{
- unsigned long flags;
- int ret;
-
- spin_lock_irqsave(&entity->lock, flags);
- entity->streaming = streaming;
- spin_unlock_irqrestore(&entity->lock, flags);
-
- if (!streaming)
- return 0;
-
- if (!entity->vsp1->info->uapi || !entity->subdev.ctrl_handler)
- return 0;
-
- ret = v4l2_ctrl_handler_setup(entity->subdev.ctrl_handler);
- if (ret < 0) {
- spin_lock_irqsave(&entity->lock, flags);
- entity->streaming = false;
- spin_unlock_irqrestore(&entity->lock, flags);
- }
-
- return ret;
-}
-
-void vsp1_entity_route_setup(struct vsp1_entity *source)
+void vsp1_entity_route_setup(struct vsp1_entity *source,
+ struct vsp1_dl_list *dl)
{
struct vsp1_entity *sink;
@@ -66,40 +31,74 @@ void vsp1_entity_route_setup(struct vsp1_entity *source)
return;
sink = container_of(source->sink, struct vsp1_entity, subdev.entity);
- vsp1_mod_write(source, source->route->reg,
- sink->route->inputs[source->sink_pad]);
+ vsp1_dl_list_write(dl, source->route->reg,
+ sink->route->inputs[source->sink_pad]);
}
/* -----------------------------------------------------------------------------
* V4L2 Subdevice Operations
*/
-struct v4l2_mbus_framefmt *
-vsp1_entity_get_pad_format(struct vsp1_entity *entity,
+/**
+ * vsp1_entity_get_pad_config - Get the pad configuration for an entity
+ * @entity: the entity
+ * @cfg: the TRY pad configuration
+ * @which: configuration selector (ACTIVE or TRY)
+ *
+ * Return the pad configuration requested by the which argument. The TRY
+ * configuration is passed explicitly to the function through the cfg argument
+ * and simply returned when requested. The ACTIVE configuration comes from the
+ * entity structure.
+ */
+struct v4l2_subdev_pad_config *
+vsp1_entity_get_pad_config(struct vsp1_entity *entity,
struct v4l2_subdev_pad_config *cfg,
- unsigned int pad, u32 which)
+ enum v4l2_subdev_format_whence which)
{
switch (which) {
- case V4L2_SUBDEV_FORMAT_TRY:
- return v4l2_subdev_get_try_format(&entity->subdev, cfg, pad);
case V4L2_SUBDEV_FORMAT_ACTIVE:
- return &entity->formats[pad];
+ return entity->config;
+ case V4L2_SUBDEV_FORMAT_TRY:
default:
- return NULL;
+ return cfg;
}
}
+/**
+ * vsp1_entity_get_pad_format - Get a pad format from storage for an entity
+ * @entity: the entity
+ * @cfg: the configuration storage
+ * @pad: the pad number
+ *
+ * Return the format stored in the given configuration for an entity's pad. The
+ * configuration can be an ACTIVE or TRY configuration.
+ */
+struct v4l2_mbus_framefmt *
+vsp1_entity_get_pad_format(struct vsp1_entity *entity,
+ struct v4l2_subdev_pad_config *cfg,
+ unsigned int pad)
+{
+ return v4l2_subdev_get_try_format(&entity->subdev, cfg, pad);
+}
+
+struct v4l2_rect *
+vsp1_entity_get_pad_compose(struct vsp1_entity *entity,
+ struct v4l2_subdev_pad_config *cfg,
+ unsigned int pad)
+{
+ return v4l2_subdev_get_try_compose(&entity->subdev, cfg, pad);
+}
+
/*
- * vsp1_entity_init_formats - Initialize formats on all pads
+ * vsp1_entity_init_cfg - Initialize formats on all pads
* @subdev: V4L2 subdevice
* @cfg: V4L2 subdev pad configuration
*
- * Initialize all pad formats with default values. If cfg is not NULL, try
- * formats are initialized on the file handle. Otherwise active formats are
- * initialized on the device.
+ * Initialize all pad formats with default values in the given pad config. This
+ * function can be used as a handler for the subdev pad::init_cfg operation.
*/
-void vsp1_entity_init_formats(struct v4l2_subdev *subdev,
- struct v4l2_subdev_pad_config *cfg)
+int vsp1_entity_init_cfg(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg)
{
struct v4l2_subdev_format format;
unsigned int pad;
@@ -113,19 +112,132 @@ void vsp1_entity_init_formats(struct v4l2_subdev *subdev,
v4l2_subdev_call(subdev, pad, set_fmt, cfg, &format);
}
+
+ return 0;
}
-static int vsp1_entity_open(struct v4l2_subdev *subdev,
- struct v4l2_subdev_fh *fh)
+/*
+ * vsp1_subdev_get_pad_format - Subdev pad get_fmt handler
+ * @subdev: V4L2 subdevice
+ * @cfg: V4L2 subdev pad configuration
+ * @fmt: V4L2 subdev format
+ *
+ * This function implements the subdev get_fmt pad operation. It can be used as
+ * a direct drop-in for the operation handler.
+ */
+int vsp1_subdev_get_pad_format(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_format *fmt)
{
- vsp1_entity_init_formats(subdev, fh->pad);
+ struct vsp1_entity *entity = to_vsp1_entity(subdev);
+ struct v4l2_subdev_pad_config *config;
+
+ config = vsp1_entity_get_pad_config(entity, cfg, fmt->which);
+ if (!config)
+ return -EINVAL;
+
+ fmt->format = *vsp1_entity_get_pad_format(entity, config, fmt->pad);
return 0;
}
-const struct v4l2_subdev_internal_ops vsp1_subdev_internal_ops = {
- .open = vsp1_entity_open,
-};
+/*
+ * vsp1_subdev_enum_mbus_code - Subdev pad enum_mbus_code handler
+ * @subdev: V4L2 subdevice
+ * @cfg: V4L2 subdev pad configuration
+ * @code: Media bus code enumeration
+ * @codes: Array of supported media bus codes
+ * @ncodes: Number of supported media bus codes
+ *
+ * This function implements the subdev enum_mbus_code pad operation for entities
+ * that do not support format conversion. It enumerates the given supported
+ * media bus codes on the sink pad and reports a source pad format identical to
+ * the sink pad.
+ */
+int vsp1_subdev_enum_mbus_code(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_mbus_code_enum *code,
+ const unsigned int *codes, unsigned int ncodes)
+{
+ struct vsp1_entity *entity = to_vsp1_entity(subdev);
+
+ if (code->pad == 0) {
+ if (code->index >= ncodes)
+ return -EINVAL;
+
+ code->code = codes[code->index];
+ } else {
+ struct v4l2_subdev_pad_config *config;
+ struct v4l2_mbus_framefmt *format;
+
+ /* The entity can't perform format conversion; the sink format
+ * is always identical to the source format.
+ */
+ if (code->index)
+ return -EINVAL;
+
+ config = vsp1_entity_get_pad_config(entity, cfg, code->which);
+ if (!config)
+ return -EINVAL;
+
+ format = vsp1_entity_get_pad_format(entity, config, 0);
+ code->code = format->code;
+ }
+
+ return 0;
+}
+
+/*
+ * vsp1_subdev_enum_frame_size - Subdev pad enum_frame_size handler
+ * @subdev: V4L2 subdevice
+ * @cfg: V4L2 subdev pad configuration
+ * @fse: Frame size enumeration
+ * @min_width: Minimum image width
+ * @min_height: Minimum image height
+ * @max_width: Maximum image width
+ * @max_height: Maximum image height
+ *
+ * This function implements the subdev enum_frame_size pad operation for
+ * entities that do not support scaling or cropping. It reports the given
+ * minimum and maximum frame width and height on the sink pad, and a fixed
+ * source pad size identical to the sink pad.
+ */
+int vsp1_subdev_enum_frame_size(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_frame_size_enum *fse,
+ unsigned int min_width, unsigned int min_height,
+ unsigned int max_width, unsigned int max_height)
+{
+ struct vsp1_entity *entity = to_vsp1_entity(subdev);
+ struct v4l2_subdev_pad_config *config;
+ struct v4l2_mbus_framefmt *format;
+
+ config = vsp1_entity_get_pad_config(entity, cfg, fse->which);
+ if (!config)
+ return -EINVAL;
+
+ format = vsp1_entity_get_pad_format(entity, config, fse->pad);
+
+ if (fse->index || fse->code != format->code)
+ return -EINVAL;
+
+ if (fse->pad == 0) {
+ fse->min_width = min_width;
+ fse->max_width = max_width;
+ fse->min_height = min_height;
+ fse->max_height = max_height;
+ } else {
+ /* The size on the source pad is fixed and always identical to
+ * the size on the sink pad.
+ */
+ fse->min_width = format->width;
+ fse->max_width = format->width;
+ fse->min_height = format->height;
+ fse->max_height = format->height;
+ }
+
+ return 0;
+}
/* -----------------------------------------------------------------------------
* Media Operations
@@ -171,11 +283,11 @@ static const struct vsp1_route vsp1_routes[] = {
{ VSP1_ENTITY_HST, 0, VI6_DPR_HST_ROUTE, { VI6_DPR_NODE_HST, } },
{ VSP1_ENTITY_LIF, 0, 0, { VI6_DPR_NODE_LIF, } },
{ VSP1_ENTITY_LUT, 0, VI6_DPR_LUT_ROUTE, { VI6_DPR_NODE_LUT, } },
- { VSP1_ENTITY_RPF, 0, VI6_DPR_RPF_ROUTE(0), { VI6_DPR_NODE_RPF(0), } },
- { VSP1_ENTITY_RPF, 1, VI6_DPR_RPF_ROUTE(1), { VI6_DPR_NODE_RPF(1), } },
- { VSP1_ENTITY_RPF, 2, VI6_DPR_RPF_ROUTE(2), { VI6_DPR_NODE_RPF(2), } },
- { VSP1_ENTITY_RPF, 3, VI6_DPR_RPF_ROUTE(3), { VI6_DPR_NODE_RPF(3), } },
- { VSP1_ENTITY_RPF, 4, VI6_DPR_RPF_ROUTE(4), { VI6_DPR_NODE_RPF(4), } },
+ { VSP1_ENTITY_RPF, 0, VI6_DPR_RPF_ROUTE(0), { 0, } },
+ { VSP1_ENTITY_RPF, 1, VI6_DPR_RPF_ROUTE(1), { 0, } },
+ { VSP1_ENTITY_RPF, 2, VI6_DPR_RPF_ROUTE(2), { 0, } },
+ { VSP1_ENTITY_RPF, 3, VI6_DPR_RPF_ROUTE(3), { 0, } },
+ { VSP1_ENTITY_RPF, 4, VI6_DPR_RPF_ROUTE(4), { 0, } },
{ VSP1_ENTITY_SRU, 0, VI6_DPR_SRU_ROUTE, { VI6_DPR_NODE_SRU, } },
{ VSP1_ENTITY_UDS, 0, VI6_DPR_UDS_ROUTE(0), { VI6_DPR_NODE_UDS(0), } },
{ VSP1_ENTITY_UDS, 1, VI6_DPR_UDS_ROUTE(1), { VI6_DPR_NODE_UDS(1), } },
@@ -187,9 +299,12 @@ static const struct vsp1_route vsp1_routes[] = {
};
int vsp1_entity_init(struct vsp1_device *vsp1, struct vsp1_entity *entity,
- unsigned int num_pads)
+ const char *name, unsigned int num_pads,
+ const struct v4l2_subdev_ops *ops)
{
+ struct v4l2_subdev *subdev;
unsigned int i;
+ int ret;
for (i = 0; i < ARRAY_SIZE(vsp1_routes); ++i) {
if (vsp1_routes[i].type == entity->type &&
@@ -202,37 +317,56 @@ int vsp1_entity_init(struct vsp1_device *vsp1, struct vsp1_entity *entity,
if (i == ARRAY_SIZE(vsp1_routes))
return -EINVAL;
- spin_lock_init(&entity->lock);
-
entity->vsp1 = vsp1;
entity->source_pad = num_pads - 1;
- /* Allocate formats and pads. */
- entity->formats = devm_kzalloc(vsp1->dev,
- num_pads * sizeof(*entity->formats),
- GFP_KERNEL);
- if (entity->formats == NULL)
- return -ENOMEM;
-
+ /* Allocate and initialize pads. */
entity->pads = devm_kzalloc(vsp1->dev, num_pads * sizeof(*entity->pads),
GFP_KERNEL);
if (entity->pads == NULL)
return -ENOMEM;
- /* Initialize pads. */
for (i = 0; i < num_pads - 1; ++i)
entity->pads[i].flags = MEDIA_PAD_FL_SINK;
entity->pads[num_pads - 1].flags = MEDIA_PAD_FL_SOURCE;
/* Initialize the media entity. */
- return media_entity_pads_init(&entity->subdev.entity, num_pads,
- entity->pads);
+ ret = media_entity_pads_init(&entity->subdev.entity, num_pads,
+ entity->pads);
+ if (ret < 0)
+ return ret;
+
+ /* Initialize the V4L2 subdev. */
+ subdev = &entity->subdev;
+ v4l2_subdev_init(subdev, ops);
+
+ subdev->entity.ops = &vsp1->media_ops;
+ subdev->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
+
+ snprintf(subdev->name, sizeof(subdev->name), "%s %s",
+ dev_name(vsp1->dev), name);
+
+ vsp1_entity_init_cfg(subdev, NULL);
+
+ /* Allocate the pad configuration to store formats and selection
+ * rectangles.
+ */
+ entity->config = v4l2_subdev_alloc_pad_config(&entity->subdev);
+ if (entity->config == NULL) {
+ media_entity_cleanup(&entity->subdev.entity);
+ return -ENOMEM;
+ }
+
+ return 0;
}
void vsp1_entity_destroy(struct vsp1_entity *entity)
{
+ if (entity->ops && entity->ops->destroy)
+ entity->ops->destroy(entity);
if (entity->subdev.ctrl_handler)
v4l2_ctrl_handler_free(entity->subdev.ctrl_handler);
+ v4l2_subdev_free_pad_config(entity->config);
media_entity_cleanup(&entity->subdev.entity);
}
diff --git a/drivers/media/platform/vsp1/vsp1_entity.h b/drivers/media/platform/vsp1/vsp1_entity.h
index 83570dfde8ec..69eff4e17350 100644
--- a/drivers/media/platform/vsp1/vsp1_entity.h
+++ b/drivers/media/platform/vsp1/vsp1_entity.h
@@ -19,6 +19,8 @@
#include <media/v4l2-subdev.h>
struct vsp1_device;
+struct vsp1_dl_list;
+struct vsp1_pipeline;
enum vsp1_entity_type {
VSP1_ENTITY_BRU,
@@ -53,9 +55,27 @@ struct vsp1_route {
unsigned int inputs[VSP1_ENTITY_MAX_INPUTS];
};
+/**
+ * struct vsp1_entity_operations - Entity operations
+ * @destroy: Destroy the entity.
+ * @set_memory: Setup memory buffer access. This operation applies the settings
+ * stored in the rwpf mem field to the display list. Valid for RPF
+ * and WPF only.
+ * @configure: Setup the hardware based on the entity state (pipeline, formats,
+ * selection rectangles, ...)
+ */
+struct vsp1_entity_operations {
+ void (*destroy)(struct vsp1_entity *);
+ void (*set_memory)(struct vsp1_entity *, struct vsp1_dl_list *dl);
+ void (*configure)(struct vsp1_entity *, struct vsp1_pipeline *,
+ struct vsp1_dl_list *);
+};
+
struct vsp1_entity {
struct vsp1_device *vsp1;
+ const struct vsp1_entity_operations *ops;
+
enum vsp1_entity_type type;
unsigned int index;
const struct vsp1_route *route;
@@ -70,10 +90,7 @@ struct vsp1_entity {
unsigned int sink_pad;
struct v4l2_subdev subdev;
- struct v4l2_mbus_framefmt *formats;
-
- spinlock_t lock; /* Protects the streaming field */
- bool streaming;
+ struct v4l2_subdev_pad_config *config;
};
static inline struct vsp1_entity *to_vsp1_entity(struct v4l2_subdev *subdev)
@@ -82,7 +99,8 @@ static inline struct vsp1_entity *to_vsp1_entity(struct v4l2_subdev *subdev)
}
int vsp1_entity_init(struct vsp1_device *vsp1, struct vsp1_entity *entity,
- unsigned int num_pads);
+ const char *name, unsigned int num_pads,
+ const struct v4l2_subdev_ops *ops);
void vsp1_entity_destroy(struct vsp1_entity *entity);
extern const struct v4l2_subdev_internal_ops vsp1_subdev_internal_ops;
@@ -91,16 +109,35 @@ int vsp1_entity_link_setup(struct media_entity *entity,
const struct media_pad *local,
const struct media_pad *remote, u32 flags);
+struct v4l2_subdev_pad_config *
+vsp1_entity_get_pad_config(struct vsp1_entity *entity,
+ struct v4l2_subdev_pad_config *cfg,
+ enum v4l2_subdev_format_whence which);
struct v4l2_mbus_framefmt *
vsp1_entity_get_pad_format(struct vsp1_entity *entity,
struct v4l2_subdev_pad_config *cfg,
- unsigned int pad, u32 which);
-void vsp1_entity_init_formats(struct v4l2_subdev *subdev,
- struct v4l2_subdev_pad_config *cfg);
-
-bool vsp1_entity_is_streaming(struct vsp1_entity *entity);
-int vsp1_entity_set_streaming(struct vsp1_entity *entity, bool streaming);
-
-void vsp1_entity_route_setup(struct vsp1_entity *source);
+ unsigned int pad);
+struct v4l2_rect *
+vsp1_entity_get_pad_compose(struct vsp1_entity *entity,
+ struct v4l2_subdev_pad_config *cfg,
+ unsigned int pad);
+int vsp1_entity_init_cfg(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg);
+
+void vsp1_entity_route_setup(struct vsp1_entity *source,
+ struct vsp1_dl_list *dl);
+
+int vsp1_subdev_get_pad_format(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_format *fmt);
+int vsp1_subdev_enum_mbus_code(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_mbus_code_enum *code,
+ const unsigned int *codes, unsigned int ncodes);
+int vsp1_subdev_enum_frame_size(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_frame_size_enum *fse,
+ unsigned int min_w, unsigned int min_h,
+ unsigned int max_w, unsigned int max_h);
#endif /* __VSP1_ENTITY_H__ */
diff --git a/drivers/media/platform/vsp1/vsp1_hsit.c b/drivers/media/platform/vsp1/vsp1_hsit.c
index c1087cff31a0..68b8567b374d 100644
--- a/drivers/media/platform/vsp1/vsp1_hsit.c
+++ b/drivers/media/platform/vsp1/vsp1_hsit.c
@@ -17,6 +17,7 @@
#include <media/v4l2-subdev.h>
#include "vsp1.h"
+#include "vsp1_dl.h"
#include "vsp1_hsit.h"
#define HSIT_MIN_SIZE 4U
@@ -26,32 +27,14 @@
* Device Access
*/
-static inline void vsp1_hsit_write(struct vsp1_hsit *hsit, u32 reg, u32 data)
+static inline void vsp1_hsit_write(struct vsp1_hsit *hsit,
+ struct vsp1_dl_list *dl, u32 reg, u32 data)
{
- vsp1_write(hsit->entity.vsp1, reg, data);
+ vsp1_dl_list_write(dl, reg, data);
}
/* -----------------------------------------------------------------------------
- * V4L2 Subdevice Core Operations
- */
-
-static int hsit_s_stream(struct v4l2_subdev *subdev, int enable)
-{
- struct vsp1_hsit *hsit = to_hsit(subdev);
-
- if (!enable)
- return 0;
-
- if (hsit->inverse)
- vsp1_hsit_write(hsit, VI6_HSI_CTRL, VI6_HSI_CTRL_EN);
- else
- vsp1_hsit_write(hsit, VI6_HST_CTRL, VI6_HST_CTRL_EN);
-
- return 0;
-}
-
-/* -----------------------------------------------------------------------------
- * V4L2 Subdevice Pad Operations
+ * V4L2 Subdevice Operations
*/
static int hsit_enum_mbus_code(struct v4l2_subdev *subdev,
@@ -76,43 +59,9 @@ static int hsit_enum_frame_size(struct v4l2_subdev *subdev,
struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
- struct vsp1_hsit *hsit = to_hsit(subdev);
- struct v4l2_mbus_framefmt *format;
-
- format = vsp1_entity_get_pad_format(&hsit->entity, cfg, fse->pad,
- fse->which);
-
- if (fse->index || fse->code != format->code)
- return -EINVAL;
-
- if (fse->pad == HSIT_PAD_SINK) {
- fse->min_width = HSIT_MIN_SIZE;
- fse->max_width = HSIT_MAX_SIZE;
- fse->min_height = HSIT_MIN_SIZE;
- fse->max_height = HSIT_MAX_SIZE;
- } else {
- /* The size on the source pad are fixed and always identical to
- * the size on the sink pad.
- */
- fse->min_width = format->width;
- fse->max_width = format->width;
- fse->min_height = format->height;
- fse->max_height = format->height;
- }
-
- return 0;
-}
-
-static int hsit_get_format(struct v4l2_subdev *subdev,
- struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_format *fmt)
-{
- struct vsp1_hsit *hsit = to_hsit(subdev);
-
- fmt->format = *vsp1_entity_get_pad_format(&hsit->entity, cfg, fmt->pad,
- fmt->which);
-
- return 0;
+ return vsp1_subdev_enum_frame_size(subdev, cfg, fse, HSIT_MIN_SIZE,
+ HSIT_MIN_SIZE, HSIT_MAX_SIZE,
+ HSIT_MAX_SIZE);
}
static int hsit_set_format(struct v4l2_subdev *subdev,
@@ -120,10 +69,14 @@ static int hsit_set_format(struct v4l2_subdev *subdev,
struct v4l2_subdev_format *fmt)
{
struct vsp1_hsit *hsit = to_hsit(subdev);
+ struct v4l2_subdev_pad_config *config;
struct v4l2_mbus_framefmt *format;
- format = vsp1_entity_get_pad_format(&hsit->entity, cfg, fmt->pad,
- fmt->which);
+ config = vsp1_entity_get_pad_config(&hsit->entity, cfg, fmt->which);
+ if (!config)
+ return -EINVAL;
+
+ format = vsp1_entity_get_pad_format(&hsit->entity, config, fmt->pad);
if (fmt->pad == HSIT_PAD_SOURCE) {
/* The HST and HSI output format code and resolution can't be
@@ -145,8 +98,8 @@ static int hsit_set_format(struct v4l2_subdev *subdev,
fmt->format = *format;
/* Propagate the format to the source pad. */
- format = vsp1_entity_get_pad_format(&hsit->entity, cfg, HSIT_PAD_SOURCE,
- fmt->which);
+ format = vsp1_entity_get_pad_format(&hsit->entity, config,
+ HSIT_PAD_SOURCE);
*format = fmt->format;
format->code = hsit->inverse ? MEDIA_BUS_FMT_ARGB8888_1X32
: MEDIA_BUS_FMT_AHSV8888_1X32;
@@ -154,33 +107,44 @@ static int hsit_set_format(struct v4l2_subdev *subdev,
return 0;
}
-/* -----------------------------------------------------------------------------
- * V4L2 Subdevice Operations
- */
-
-static struct v4l2_subdev_video_ops hsit_video_ops = {
- .s_stream = hsit_s_stream,
-};
-
static struct v4l2_subdev_pad_ops hsit_pad_ops = {
+ .init_cfg = vsp1_entity_init_cfg,
.enum_mbus_code = hsit_enum_mbus_code,
.enum_frame_size = hsit_enum_frame_size,
- .get_fmt = hsit_get_format,
+ .get_fmt = vsp1_subdev_get_pad_format,
.set_fmt = hsit_set_format,
};
static struct v4l2_subdev_ops hsit_ops = {
- .video = &hsit_video_ops,
.pad = &hsit_pad_ops,
};
/* -----------------------------------------------------------------------------
+ * VSP1 Entity Operations
+ */
+
+static void hsit_configure(struct vsp1_entity *entity,
+ struct vsp1_pipeline *pipe,
+ struct vsp1_dl_list *dl)
+{
+ struct vsp1_hsit *hsit = to_hsit(&entity->subdev);
+
+ if (hsit->inverse)
+ vsp1_hsit_write(hsit, dl, VI6_HSI_CTRL, VI6_HSI_CTRL_EN);
+ else
+ vsp1_hsit_write(hsit, dl, VI6_HST_CTRL, VI6_HST_CTRL_EN);
+}
+
+static const struct vsp1_entity_operations hsit_entity_ops = {
+ .configure = hsit_configure,
+};
+
+/* -----------------------------------------------------------------------------
* Initialization and Cleanup
*/
struct vsp1_hsit *vsp1_hsit_create(struct vsp1_device *vsp1, bool inverse)
{
- struct v4l2_subdev *subdev;
struct vsp1_hsit *hsit;
int ret;
@@ -190,27 +154,17 @@ struct vsp1_hsit *vsp1_hsit_create(struct vsp1_device *vsp1, bool inverse)
hsit->inverse = inverse;
+ hsit->entity.ops = &hsit_entity_ops;
+
if (inverse)
hsit->entity.type = VSP1_ENTITY_HSI;
else
hsit->entity.type = VSP1_ENTITY_HST;
- ret = vsp1_entity_init(vsp1, &hsit->entity, 2);
+ ret = vsp1_entity_init(vsp1, &hsit->entity, inverse ? "hsi" : "hst", 2,
+ &hsit_ops);
if (ret < 0)
return ERR_PTR(ret);
- /* Initialize the V4L2 subdev. */
- subdev = &hsit->entity.subdev;
- v4l2_subdev_init(subdev, &hsit_ops);
-
- subdev->entity.ops = &vsp1->media_ops;
- subdev->internal_ops = &vsp1_subdev_internal_ops;
- snprintf(subdev->name, sizeof(subdev->name), "%s %s",
- dev_name(vsp1->dev), inverse ? "hsi" : "hst");
- v4l2_set_subdevdata(subdev, hsit);
- subdev->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
-
- vsp1_entity_init_formats(subdev, NULL);
-
return hsit;
}
diff --git a/drivers/media/platform/vsp1/vsp1_lif.c b/drivers/media/platform/vsp1/vsp1_lif.c
index 433853ce8dbf..0217393f22df 100644
--- a/drivers/media/platform/vsp1/vsp1_lif.c
+++ b/drivers/media/platform/vsp1/vsp1_lif.c
@@ -17,55 +17,24 @@
#include <media/v4l2-subdev.h>
#include "vsp1.h"
+#include "vsp1_dl.h"
#include "vsp1_lif.h"
#define LIF_MIN_SIZE 2U
-#define LIF_MAX_SIZE 2048U
+#define LIF_MAX_SIZE 8190U
/* -----------------------------------------------------------------------------
* Device Access
*/
-static inline void vsp1_lif_write(struct vsp1_lif *lif, u32 reg, u32 data)
+static inline void vsp1_lif_write(struct vsp1_lif *lif, struct vsp1_dl_list *dl,
+ u32 reg, u32 data)
{
- vsp1_mod_write(&lif->entity, reg, data);
+ vsp1_dl_list_write(dl, reg, data);
}
/* -----------------------------------------------------------------------------
- * V4L2 Subdevice Core Operations
- */
-
-static int lif_s_stream(struct v4l2_subdev *subdev, int enable)
-{
- const struct v4l2_mbus_framefmt *format;
- struct vsp1_lif *lif = to_lif(subdev);
- unsigned int hbth = 1300;
- unsigned int obth = 400;
- unsigned int lbth = 200;
-
- if (!enable) {
- vsp1_write(lif->entity.vsp1, VI6_LIF_CTRL, 0);
- return 0;
- }
-
- format = &lif->entity.formats[LIF_PAD_SOURCE];
-
- obth = min(obth, (format->width + 1) / 2 * format->height - 4);
-
- vsp1_lif_write(lif, VI6_LIF_CSBTH,
- (hbth << VI6_LIF_CSBTH_HBTH_SHIFT) |
- (lbth << VI6_LIF_CSBTH_LBTH_SHIFT));
-
- vsp1_lif_write(lif, VI6_LIF_CTRL,
- (obth << VI6_LIF_CTRL_OBTH_SHIFT) |
- (format->code == 0 ? VI6_LIF_CTRL_CFMT : 0) |
- VI6_LIF_CTRL_REQSEL | VI6_LIF_CTRL_LIF_EN);
-
- return 0;
-}
-
-/* -----------------------------------------------------------------------------
- * V4L2 Subdevice Pad Operations
+ * V4L2 Subdevice Operations
*/
static int lif_enum_mbus_code(struct v4l2_subdev *subdev,
@@ -76,82 +45,38 @@ static int lif_enum_mbus_code(struct v4l2_subdev *subdev,
MEDIA_BUS_FMT_ARGB8888_1X32,
MEDIA_BUS_FMT_AYUV8_1X32,
};
- struct vsp1_lif *lif = to_lif(subdev);
- if (code->pad == LIF_PAD_SINK) {
- if (code->index >= ARRAY_SIZE(codes))
- return -EINVAL;
-
- code->code = codes[code->index];
- } else {
- struct v4l2_mbus_framefmt *format;
-
- /* The LIF can't perform format conversion, the sink format is
- * always identical to the source format.
- */
- if (code->index)
- return -EINVAL;
-
- format = vsp1_entity_get_pad_format(&lif->entity, cfg,
- LIF_PAD_SINK, code->which);
- code->code = format->code;
- }
-
- return 0;
+ return vsp1_subdev_enum_mbus_code(subdev, cfg, code, codes,
+ ARRAY_SIZE(codes));
}
static int lif_enum_frame_size(struct v4l2_subdev *subdev,
struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
- struct vsp1_lif *lif = to_lif(subdev);
- struct v4l2_mbus_framefmt *format;
-
- format = vsp1_entity_get_pad_format(&lif->entity, cfg, LIF_PAD_SINK,
- fse->which);
-
- if (fse->index || fse->code != format->code)
- return -EINVAL;
-
- if (fse->pad == LIF_PAD_SINK) {
- fse->min_width = LIF_MIN_SIZE;
- fse->max_width = LIF_MAX_SIZE;
- fse->min_height = LIF_MIN_SIZE;
- fse->max_height = LIF_MAX_SIZE;
- } else {
- fse->min_width = format->width;
- fse->max_width = format->width;
- fse->min_height = format->height;
- fse->max_height = format->height;
- }
-
- return 0;
+ return vsp1_subdev_enum_frame_size(subdev, cfg, fse, LIF_MIN_SIZE,
+ LIF_MIN_SIZE, LIF_MAX_SIZE,
+ LIF_MAX_SIZE);
}
-static int lif_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_format *fmt)
-{
- struct vsp1_lif *lif = to_lif(subdev);
-
- fmt->format = *vsp1_entity_get_pad_format(&lif->entity, cfg, fmt->pad,
- fmt->which);
-
- return 0;
-}
-
-static int lif_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
+static int lif_set_format(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vsp1_lif *lif = to_lif(subdev);
+ struct v4l2_subdev_pad_config *config;
struct v4l2_mbus_framefmt *format;
+ config = vsp1_entity_get_pad_config(&lif->entity, cfg, fmt->which);
+ if (!config)
+ return -EINVAL;
+
/* Default to YUV if the requested format is not supported. */
if (fmt->format.code != MEDIA_BUS_FMT_ARGB8888_1X32 &&
fmt->format.code != MEDIA_BUS_FMT_AYUV8_1X32)
fmt->format.code = MEDIA_BUS_FMT_AYUV8_1X32;
- format = vsp1_entity_get_pad_format(&lif->entity, cfg, fmt->pad,
- fmt->which);
+ format = vsp1_entity_get_pad_format(&lif->entity, config, fmt->pad);
if (fmt->pad == LIF_PAD_SOURCE) {
/* The LIF source format is always identical to its sink
@@ -172,40 +97,64 @@ static int lif_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_con
fmt->format = *format;
/* Propagate the format to the source pad. */
- format = vsp1_entity_get_pad_format(&lif->entity, cfg, LIF_PAD_SOURCE,
- fmt->which);
+ format = vsp1_entity_get_pad_format(&lif->entity, config,
+ LIF_PAD_SOURCE);
*format = fmt->format;
return 0;
}
-/* -----------------------------------------------------------------------------
- * V4L2 Subdevice Operations
- */
-
-static struct v4l2_subdev_video_ops lif_video_ops = {
- .s_stream = lif_s_stream,
-};
-
static struct v4l2_subdev_pad_ops lif_pad_ops = {
+ .init_cfg = vsp1_entity_init_cfg,
.enum_mbus_code = lif_enum_mbus_code,
.enum_frame_size = lif_enum_frame_size,
- .get_fmt = lif_get_format,
+ .get_fmt = vsp1_subdev_get_pad_format,
.set_fmt = lif_set_format,
};
static struct v4l2_subdev_ops lif_ops = {
- .video = &lif_video_ops,
.pad = &lif_pad_ops,
};
/* -----------------------------------------------------------------------------
+ * VSP1 Entity Operations
+ */
+
+static void lif_configure(struct vsp1_entity *entity,
+ struct vsp1_pipeline *pipe,
+ struct vsp1_dl_list *dl)
+{
+ const struct v4l2_mbus_framefmt *format;
+ struct vsp1_lif *lif = to_lif(&entity->subdev);
+ unsigned int hbth = 1300;
+ unsigned int obth = 400;
+ unsigned int lbth = 200;
+
+ format = vsp1_entity_get_pad_format(&lif->entity, lif->entity.config,
+ LIF_PAD_SOURCE);
+
+ obth = min(obth, (format->width + 1) / 2 * format->height - 4);
+
+ vsp1_lif_write(lif, dl, VI6_LIF_CSBTH,
+ (hbth << VI6_LIF_CSBTH_HBTH_SHIFT) |
+ (lbth << VI6_LIF_CSBTH_LBTH_SHIFT));
+
+ vsp1_lif_write(lif, dl, VI6_LIF_CTRL,
+ (obth << VI6_LIF_CTRL_OBTH_SHIFT) |
+ (format->code == 0 ? VI6_LIF_CTRL_CFMT : 0) |
+ VI6_LIF_CTRL_REQSEL | VI6_LIF_CTRL_LIF_EN);
+}
+
+static const struct vsp1_entity_operations lif_entity_ops = {
+ .configure = lif_configure,
+};
+
+/* -----------------------------------------------------------------------------
* Initialization and Cleanup
*/
struct vsp1_lif *vsp1_lif_create(struct vsp1_device *vsp1)
{
- struct v4l2_subdev *subdev;
struct vsp1_lif *lif;
int ret;
@@ -213,24 +162,12 @@ struct vsp1_lif *vsp1_lif_create(struct vsp1_device *vsp1)
if (lif == NULL)
return ERR_PTR(-ENOMEM);
+ lif->entity.ops = &lif_entity_ops;
lif->entity.type = VSP1_ENTITY_LIF;
- ret = vsp1_entity_init(vsp1, &lif->entity, 2);
+ ret = vsp1_entity_init(vsp1, &lif->entity, "lif", 2, &lif_ops);
if (ret < 0)
return ERR_PTR(ret);
- /* Initialize the V4L2 subdev. */
- subdev = &lif->entity.subdev;
- v4l2_subdev_init(subdev, &lif_ops);
-
- subdev->entity.ops = &vsp1->media_ops;
- subdev->internal_ops = &vsp1_subdev_internal_ops;
- snprintf(subdev->name, sizeof(subdev->name), "%s lif",
- dev_name(vsp1->dev));
- v4l2_set_subdevdata(subdev, lif);
- subdev->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
-
- vsp1_entity_init_formats(subdev, NULL);
-
return lif;
}
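
The OBTH clamp in lif_configure() above reduces to simple integer arithmetic on the source pad format: the default threshold of 400 is capped by the amount of data a small frame can ever queue. A minimal standalone sketch of that arithmetic (plain C, hypothetical frame sizes, not driver code):

	#include <stdio.h>

	/* Mirror of the clamp in lif_configure(): half the width (rounded
	 * up) times the height, minus a margin of 4, bounds the default
	 * threshold of 400.
	 */
	static unsigned int lif_obth(unsigned int width, unsigned int height)
	{
		unsigned int obth = 400;
		unsigned int limit = (width + 1) / 2 * height - 4;

		return obth < limit ? obth : limit;
	}

	int main(void)
	{
		printf("1920x1080 -> %u\n", lif_obth(1920, 1080)); /* stays at 400 */
		printf("16x16     -> %u\n", lif_obth(16, 16));     /* clamped to 124 */
		return 0;
	}
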
diff --git a/drivers/media/platform/vsp1/vsp1_lut.c b/drivers/media/platform/vsp1/vsp1_lut.c
index 4b89095e7b5f..aa09e59f0ab8 100644
--- a/drivers/media/platform/vsp1/vsp1_lut.c
+++ b/drivers/media/platform/vsp1/vsp1_lut.c
@@ -18,6 +18,7 @@
#include <media/v4l2-subdev.h>
#include "vsp1.h"
+#include "vsp1_dl.h"
#include "vsp1_lut.h"
#define LUT_MIN_SIZE 4U
@@ -27,19 +28,35 @@
* Device Access
*/
-static inline void vsp1_lut_write(struct vsp1_lut *lut, u32 reg, u32 data)
+static inline void vsp1_lut_write(struct vsp1_lut *lut, struct vsp1_dl_list *dl,
+ u32 reg, u32 data)
{
- vsp1_write(lut->entity.vsp1, reg, data);
+ vsp1_dl_list_write(dl, reg, data);
}
/* -----------------------------------------------------------------------------
* V4L2 Subdevice Core Operations
*/
-static void lut_configure(struct vsp1_lut *lut, struct vsp1_lut_config *config)
+static int lut_set_table(struct vsp1_lut *lut, struct vsp1_lut_config *config)
{
- memcpy_toio(lut->entity.vsp1->mmio + VI6_LUT_TABLE, config->lut,
- sizeof(config->lut));
+ struct vsp1_dl_body *dlb;
+ unsigned int i;
+
+ dlb = vsp1_dl_fragment_alloc(lut->entity.vsp1, ARRAY_SIZE(config->lut));
+ if (!dlb)
+ return -ENOMEM;
+
+ for (i = 0; i < ARRAY_SIZE(config->lut); ++i)
+ vsp1_dl_fragment_write(dlb, VI6_LUT_TABLE + 4 * i,
+ config->lut[i]);
+
+ mutex_lock(&lut->lock);
+ swap(lut->lut, dlb);
+ mutex_unlock(&lut->lock);
+
+ vsp1_dl_fragment_free(dlb);
+ return 0;
}
static long lut_ioctl(struct v4l2_subdev *subdev, unsigned int cmd, void *arg)
@@ -48,8 +65,7 @@ static long lut_ioctl(struct v4l2_subdev *subdev, unsigned int cmd, void *arg)
switch (cmd) {
case VIDIOC_VSP1_LUT_CONFIG:
- lut_configure(lut, arg);
- return 0;
+ return lut_set_table(lut, arg);
default:
return -ENOIOCTLCMD;
@@ -57,22 +73,6 @@ static long lut_ioctl(struct v4l2_subdev *subdev, unsigned int cmd, void *arg)
}
/* -----------------------------------------------------------------------------
- * V4L2 Subdevice Video Operations
- */
-
-static int lut_s_stream(struct v4l2_subdev *subdev, int enable)
-{
- struct vsp1_lut *lut = to_lut(subdev);
-
- if (!enable)
- return 0;
-
- vsp1_lut_write(lut, VI6_LUT_CTRL, VI6_LUT_CTRL_EN);
-
- return 0;
-}
-
-/* -----------------------------------------------------------------------------
* V4L2 Subdevice Pad Operations
*/
@@ -85,85 +85,39 @@ static int lut_enum_mbus_code(struct v4l2_subdev *subdev,
MEDIA_BUS_FMT_AHSV8888_1X32,
MEDIA_BUS_FMT_AYUV8_1X32,
};
- struct vsp1_lut *lut = to_lut(subdev);
- struct v4l2_mbus_framefmt *format;
-
- if (code->pad == LUT_PAD_SINK) {
- if (code->index >= ARRAY_SIZE(codes))
- return -EINVAL;
-
- code->code = codes[code->index];
- } else {
- /* The LUT can't perform format conversion, the sink format is
- * always identical to the source format.
- */
- if (code->index)
- return -EINVAL;
-
- format = vsp1_entity_get_pad_format(&lut->entity, cfg,
- LUT_PAD_SINK, code->which);
- code->code = format->code;
- }
- return 0;
+ return vsp1_subdev_enum_mbus_code(subdev, cfg, code, codes,
+ ARRAY_SIZE(codes));
}
static int lut_enum_frame_size(struct v4l2_subdev *subdev,
struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_frame_size_enum *fse)
{
- struct vsp1_lut *lut = to_lut(subdev);
- struct v4l2_mbus_framefmt *format;
-
- format = vsp1_entity_get_pad_format(&lut->entity, cfg,
- fse->pad, fse->which);
-
- if (fse->index || fse->code != format->code)
- return -EINVAL;
-
- if (fse->pad == LUT_PAD_SINK) {
- fse->min_width = LUT_MIN_SIZE;
- fse->max_width = LUT_MAX_SIZE;
- fse->min_height = LUT_MIN_SIZE;
- fse->max_height = LUT_MAX_SIZE;
- } else {
- /* The size on the source pad are fixed and always identical to
- * the size on the sink pad.
- */
- fse->min_width = format->width;
- fse->max_width = format->width;
- fse->min_height = format->height;
- fse->max_height = format->height;
- }
-
- return 0;
-}
-
-static int lut_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_format *fmt)
-{
- struct vsp1_lut *lut = to_lut(subdev);
-
- fmt->format = *vsp1_entity_get_pad_format(&lut->entity, cfg, fmt->pad,
- fmt->which);
-
- return 0;
+ return vsp1_subdev_enum_frame_size(subdev, cfg, fse, LUT_MIN_SIZE,
+ LUT_MIN_SIZE, LUT_MAX_SIZE,
+ LUT_MAX_SIZE);
}
-static int lut_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
+static int lut_set_format(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vsp1_lut *lut = to_lut(subdev);
+ struct v4l2_subdev_pad_config *config;
struct v4l2_mbus_framefmt *format;
+ config = vsp1_entity_get_pad_config(&lut->entity, cfg, fmt->which);
+ if (!config)
+ return -EINVAL;
+
/* Default to YUV if the requested format is not supported. */
if (fmt->format.code != MEDIA_BUS_FMT_ARGB8888_1X32 &&
fmt->format.code != MEDIA_BUS_FMT_AHSV8888_1X32 &&
fmt->format.code != MEDIA_BUS_FMT_AYUV8_1X32)
fmt->format.code = MEDIA_BUS_FMT_AYUV8_1X32;
- format = vsp1_entity_get_pad_format(&lut->entity, cfg, fmt->pad,
- fmt->which);
+ format = vsp1_entity_get_pad_format(&lut->entity, config, fmt->pad);
if (fmt->pad == LUT_PAD_SOURCE) {
/* The LUT output format can't be modified. */
@@ -171,6 +125,7 @@ static int lut_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_con
return 0;
}
+ format->code = fmt->format.code;
format->width = clamp_t(unsigned int, fmt->format.width,
LUT_MIN_SIZE, LUT_MAX_SIZE);
format->height = clamp_t(unsigned int, fmt->format.height,
@@ -181,8 +136,8 @@ static int lut_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_con
fmt->format = *format;
/* Propagate the format to the source pad. */
- format = vsp1_entity_get_pad_format(&lut->entity, cfg, LUT_PAD_SOURCE,
- fmt->which);
+ format = vsp1_entity_get_pad_format(&lut->entity, config,
+ LUT_PAD_SOURCE);
*format = fmt->format;
return 0;
@@ -196,30 +151,49 @@ static struct v4l2_subdev_core_ops lut_core_ops = {
.ioctl = lut_ioctl,
};
-static struct v4l2_subdev_video_ops lut_video_ops = {
- .s_stream = lut_s_stream,
-};
-
static struct v4l2_subdev_pad_ops lut_pad_ops = {
+ .init_cfg = vsp1_entity_init_cfg,
.enum_mbus_code = lut_enum_mbus_code,
.enum_frame_size = lut_enum_frame_size,
- .get_fmt = lut_get_format,
+ .get_fmt = vsp1_subdev_get_pad_format,
.set_fmt = lut_set_format,
};
static struct v4l2_subdev_ops lut_ops = {
.core = &lut_core_ops,
- .video = &lut_video_ops,
.pad = &lut_pad_ops,
};
/* -----------------------------------------------------------------------------
+ * VSP1 Entity Operations
+ */
+
+static void lut_configure(struct vsp1_entity *entity,
+ struct vsp1_pipeline *pipe,
+ struct vsp1_dl_list *dl)
+{
+ struct vsp1_lut *lut = to_lut(&entity->subdev);
+
+ vsp1_lut_write(lut, dl, VI6_LUT_CTRL, VI6_LUT_CTRL_EN);
+
+ mutex_lock(&lut->lock);
+ if (lut->lut) {
+ vsp1_dl_list_add_fragment(dl, lut->lut);
+ lut->lut = NULL;
+ }
+ mutex_unlock(&lut->lock);
+}
+
+static const struct vsp1_entity_operations lut_entity_ops = {
+ .configure = lut_configure,
+};
+
+/* -----------------------------------------------------------------------------
* Initialization and Cleanup
*/
struct vsp1_lut *vsp1_lut_create(struct vsp1_device *vsp1)
{
- struct v4l2_subdev *subdev;
struct vsp1_lut *lut;
int ret;
@@ -227,24 +201,12 @@ struct vsp1_lut *vsp1_lut_create(struct vsp1_device *vsp1)
if (lut == NULL)
return ERR_PTR(-ENOMEM);
+ lut->entity.ops = &lut_entity_ops;
lut->entity.type = VSP1_ENTITY_LUT;
- ret = vsp1_entity_init(vsp1, &lut->entity, 2);
+ ret = vsp1_entity_init(vsp1, &lut->entity, "lut", 2, &lut_ops);
if (ret < 0)
return ERR_PTR(ret);
- /* Initialize the V4L2 subdev. */
- subdev = &lut->entity.subdev;
- v4l2_subdev_init(subdev, &lut_ops);
-
- subdev->entity.ops = &vsp1->media_ops;
- subdev->internal_ops = &vsp1_subdev_internal_ops;
- snprintf(subdev->name, sizeof(subdev->name), "%s lut",
- dev_name(vsp1->dev));
- v4l2_set_subdevdata(subdev, lut);
- subdev->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
-
- vsp1_entity_init_formats(subdev, NULL);
-
return lut;
}
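
The LUT update path above stages the new table as a display-list fragment: lut_set_table() swaps it into a pending slot under the mutex, and lut_configure() later detaches it and attaches it to the display list being built, so the hardware picks the table up atomically with the next frame. A rough userspace model of that stage-and-consume pattern (names and types are illustrative only):

	#include <stdio.h>
	#include <stdlib.h>
	#include <pthread.h>

	struct fragment { int id; };

	static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
	static struct fragment *pending;

	/* Userspace hands in a new table: stage it, drop any table that was
	 * staged earlier but never consumed (as the swap()+free in
	 * lut_set_table() does).
	 */
	static void set_table(struct fragment *newfrag)
	{
		struct fragment *old;

		pthread_mutex_lock(&lock);
		old = pending;
		pending = newfrag;
		pthread_mutex_unlock(&lock);

		free(old);
	}

	/* Frame-end configure step: detach the pending entry and hand it to
	 * the display list (modelled here by printing and freeing it).
	 */
	static void configure(void)
	{
		struct fragment *frag;

		pthread_mutex_lock(&lock);
		frag = pending;
		pending = NULL;
		pthread_mutex_unlock(&lock);

		if (frag) {
			printf("adding fragment %d to display list\n", frag->id);
			free(frag);
		}
	}

	int main(void)
	{
		struct fragment *f = malloc(sizeof(*f));

		f->id = 1;
		set_table(f);
		configure();	/* consumes fragment 1 */
		configure();	/* nothing pending, no-op */
		return 0;
	}
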
diff --git a/drivers/media/platform/vsp1/vsp1_lut.h b/drivers/media/platform/vsp1/vsp1_lut.h
index f92ffb867350..cef874f22b6a 100644
--- a/drivers/media/platform/vsp1/vsp1_lut.h
+++ b/drivers/media/platform/vsp1/vsp1_lut.h
@@ -13,6 +13,8 @@
#ifndef __VSP1_LUT_H__
#define __VSP1_LUT_H__
+#include <linux/mutex.h>
+
#include <media/media-entity.h>
#include <media/v4l2-subdev.h>
@@ -25,7 +27,9 @@ struct vsp1_device;
struct vsp1_lut {
struct vsp1_entity entity;
- u32 lut[256];
+
+ struct mutex lock;
+ struct vsp1_dl_body *lut;
};
static inline struct vsp1_lut *to_lut(struct v4l2_subdev *subdev)
diff --git a/drivers/media/platform/vsp1/vsp1_pipe.c b/drivers/media/platform/vsp1/vsp1_pipe.c
index 6659f06b1643..4f3b4a1d028a 100644
--- a/drivers/media/platform/vsp1/vsp1_pipe.c
+++ b/drivers/media/platform/vsp1/vsp1_pipe.c
@@ -43,7 +43,7 @@ static const struct vsp1_format_info vsp1_video_formats[] = {
{ V4L2_PIX_FMT_XRGB444, MEDIA_BUS_FMT_ARGB8888_1X32,
VI6_FMT_XRGB_4444, VI6_RPF_DSWAP_P_LLS | VI6_RPF_DSWAP_P_LWS |
VI6_RPF_DSWAP_P_WDS,
- 1, { 16, 0, 0 }, false, false, 1, 1, true },
+ 1, { 16, 0, 0 }, false, false, 1, 1, false },
{ V4L2_PIX_FMT_ARGB555, MEDIA_BUS_FMT_ARGB8888_1X32,
VI6_FMT_ARGB_1555, VI6_RPF_DSWAP_P_LLS | VI6_RPF_DSWAP_P_LWS |
VI6_RPF_DSWAP_P_WDS,
@@ -172,14 +172,18 @@ void vsp1_pipeline_reset(struct vsp1_pipeline *pipe)
bru->inputs[i].rpf = NULL;
}
- for (i = 0; i < ARRAY_SIZE(pipe->inputs); ++i)
+ for (i = 0; i < pipe->num_inputs; ++i) {
+ pipe->inputs[i]->pipe = NULL;
pipe->inputs[i] = NULL;
+ }
+
+ pipe->output->pipe = NULL;
+ pipe->output = NULL;
INIT_LIST_HEAD(&pipe->entities);
pipe->state = VSP1_PIPELINE_STOPPED;
pipe->buffers_ready = 0;
pipe->num_inputs = 0;
- pipe->output = NULL;
pipe->bru = NULL;
pipe->lif = NULL;
pipe->uds = NULL;
@@ -190,11 +194,13 @@ void vsp1_pipeline_init(struct vsp1_pipeline *pipe)
mutex_init(&pipe->lock);
spin_lock_init(&pipe->irqlock);
init_waitqueue_head(&pipe->wq);
+ kref_init(&pipe->kref);
INIT_LIST_HEAD(&pipe->entities);
pipe->state = VSP1_PIPELINE_STOPPED;
}
+/* Must be called with the pipe irqlock held. */
void vsp1_pipeline_run(struct vsp1_pipeline *pipe)
{
struct vsp1_device *vsp1 = pipe->output->entity.vsp1;
@@ -226,7 +232,7 @@ int vsp1_pipeline_stop(struct vsp1_pipeline *pipe)
unsigned long flags;
int ret;
- if (pipe->dl) {
+ if (pipe->lif) {
/* When using display lists in continuous frame mode the only
* way to stop the pipeline is to reset the hardware.
*/
@@ -253,10 +259,10 @@ int vsp1_pipeline_stop(struct vsp1_pipeline *pipe)
if (entity->route && entity->route->reg)
vsp1_write(entity->vsp1, entity->route->reg,
VI6_DPR_NODE_UNUSED);
-
- v4l2_subdev_call(&entity->subdev, video, s_stream, 0);
}
+ v4l2_subdev_call(&pipe->output->entity.subdev, video, s_stream, 0);
+
return ret;
}
@@ -271,50 +277,15 @@ bool vsp1_pipeline_ready(struct vsp1_pipeline *pipe)
return pipe->buffers_ready == mask;
}
-void vsp1_pipeline_display_start(struct vsp1_pipeline *pipe)
-{
- if (pipe->dl)
- vsp1_dl_irq_display_start(pipe->dl);
-}
-
void vsp1_pipeline_frame_end(struct vsp1_pipeline *pipe)
{
- enum vsp1_pipeline_state state;
- unsigned long flags;
-
if (pipe == NULL)
return;
- if (pipe->dl)
- vsp1_dl_irq_frame_end(pipe->dl);
+ vsp1_dlm_irq_frame_end(pipe->output->dlm);
- /* Signal frame end to the pipeline handler. */
- pipe->frame_end(pipe);
-
- spin_lock_irqsave(&pipe->irqlock, flags);
-
- state = pipe->state;
-
- /* When using display lists in continuous frame mode the pipeline is
- * automatically restarted by the hardware.
- */
- if (!pipe->dl)
- pipe->state = VSP1_PIPELINE_STOPPED;
-
- /* If a stop has been requested, mark the pipeline as stopped and
- * return.
- */
- if (state == VSP1_PIPELINE_STOPPING) {
- wake_up(&pipe->wq);
- goto done;
- }
-
- /* Restart the pipeline if ready. */
- if (vsp1_pipeline_ready(pipe))
- vsp1_pipeline_run(pipe);
-
-done:
- spin_unlock_irqrestore(&pipe->irqlock, flags);
+ if (pipe->frame_end)
+ pipe->frame_end(pipe);
}
/*
@@ -324,9 +295,13 @@ done:
* to be scaled, we disable alpha scaling when the UDS input has a fixed alpha
* value. The UDS then outputs a fixed alpha value which needs to be programmed
* from the input RPF alpha.
+ *
+ * This function can only be called from a subdev s_stream handler as it
+ * requires a valid display list context.
*/
void vsp1_pipeline_propagate_alpha(struct vsp1_pipeline *pipe,
struct vsp1_entity *input,
+ struct vsp1_dl_list *dl,
unsigned int alpha)
{
struct vsp1_entity *entity;
@@ -349,7 +324,7 @@ void vsp1_pipeline_propagate_alpha(struct vsp1_pipeline *pipe,
if (entity->type == VSP1_ENTITY_UDS) {
struct vsp1_uds *uds = to_uds(&entity->subdev);
- vsp1_uds_set_alpha(uds, alpha);
+ vsp1_uds_set_alpha(uds, dl, alpha);
break;
}
@@ -375,7 +350,7 @@ void vsp1_pipelines_suspend(struct vsp1_device *vsp1)
if (wpf == NULL)
continue;
- pipe = to_vsp1_pipeline(&wpf->entity.subdev.entity);
+ pipe = wpf->pipe;
if (pipe == NULL)
continue;
@@ -392,7 +367,7 @@ void vsp1_pipelines_suspend(struct vsp1_device *vsp1)
if (wpf == NULL)
continue;
- pipe = to_vsp1_pipeline(&wpf->entity.subdev.entity);
+ pipe = wpf->pipe;
if (pipe == NULL)
continue;
@@ -416,7 +391,7 @@ void vsp1_pipelines_resume(struct vsp1_device *vsp1)
if (wpf == NULL)
continue;
- pipe = to_vsp1_pipeline(&wpf->entity.subdev.entity);
+ pipe = wpf->pipe;
if (pipe == NULL)
continue;
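
The kref_init() call added to vsp1_pipeline_init() above replaces the old use_count field (see the vsp1_pipe.h hunk below): the last put now triggers a release callback instead of the callers checking a counter under the pipeline mutex. A rough single-threaded model of that life cycle, with hypothetical call sites; the kernel's kref is built on refcount_t and is safe against concurrent get/put:

	#include <stdio.h>

	struct pipeline {
		int refcount;
	};

	static void pipeline_release(struct pipeline *pipe)
	{
		printf("last reference dropped, resetting pipeline\n");
	}

	static void pipeline_get(struct pipeline *pipe)
	{
		pipe->refcount++;
	}

	static void pipeline_put(struct pipeline *pipe)
	{
		if (--pipe->refcount == 0)
			pipeline_release(pipe);
	}

	int main(void)
	{
		struct pipeline pipe = { .refcount = 1 };	/* kref_init() */

		pipeline_get(&pipe);	/* a second video node joins */
		pipeline_put(&pipe);	/* it leaves again */
		pipeline_put(&pipe);	/* last user: release runs */
		return 0;
	}
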
diff --git a/drivers/media/platform/vsp1/vsp1_pipe.h b/drivers/media/platform/vsp1/vsp1_pipe.h
index b2f3a8a896c9..7b56113511dd 100644
--- a/drivers/media/platform/vsp1/vsp1_pipe.h
+++ b/drivers/media/platform/vsp1/vsp1_pipe.h
@@ -13,13 +13,14 @@
#ifndef __VSP1_PIPE_H__
#define __VSP1_PIPE_H__
+#include <linux/kref.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/wait.h>
#include <media/media-entity.h>
-struct vsp1_dl;
+struct vsp1_dl_list;
struct vsp1_rwpf;
/*
@@ -63,7 +64,7 @@ enum vsp1_pipeline_state {
* @wq: work queue to wait for state change completion
* @frame_end: frame end interrupt handler
* @lock: protects the pipeline use count and stream count
- * @use_count: number of video nodes using the pipeline
+ * @kref: pipeline reference count
* @stream_count: number of streaming video nodes
* @buffers_ready: bitmask of RPFs and WPFs with at least one buffer available
* @num_inputs: number of RPFs
@@ -86,7 +87,7 @@ struct vsp1_pipeline {
void (*frame_end)(struct vsp1_pipeline *pipe);
struct mutex lock;
- unsigned int use_count;
+ struct kref kref;
unsigned int stream_count;
unsigned int buffers_ready;
@@ -100,17 +101,9 @@ struct vsp1_pipeline {
struct list_head entities;
- struct vsp1_dl *dl;
+ struct vsp1_dl_list *dl;
};
-static inline struct vsp1_pipeline *to_vsp1_pipeline(struct media_entity *e)
-{
- if (likely(e->pipe))
- return container_of(e->pipe, struct vsp1_pipeline, pipe);
- else
- return NULL;
-}
-
void vsp1_pipeline_reset(struct vsp1_pipeline *pipe);
void vsp1_pipeline_init(struct vsp1_pipeline *pipe);
@@ -119,11 +112,11 @@ bool vsp1_pipeline_stopped(struct vsp1_pipeline *pipe);
int vsp1_pipeline_stop(struct vsp1_pipeline *pipe);
bool vsp1_pipeline_ready(struct vsp1_pipeline *pipe);
-void vsp1_pipeline_display_start(struct vsp1_pipeline *pipe);
void vsp1_pipeline_frame_end(struct vsp1_pipeline *pipe);
void vsp1_pipeline_propagate_alpha(struct vsp1_pipeline *pipe,
struct vsp1_entity *input,
+ struct vsp1_dl_list *dl,
unsigned int alpha);
void vsp1_pipelines_suspend(struct vsp1_device *vsp1);
diff --git a/drivers/media/platform/vsp1/vsp1_regs.h b/drivers/media/platform/vsp1/vsp1_regs.h
index 069216f0eb44..927b5fb94c48 100644
--- a/drivers/media/platform/vsp1/vsp1_regs.h
+++ b/drivers/media/platform/vsp1/vsp1_regs.h
@@ -217,6 +217,16 @@
#define VI6_RPF_SRCM_ADDR_C1 0x0344
#define VI6_RPF_SRCM_ADDR_AI 0x0348
+#define VI6_RPF_MULT_ALPHA 0x036c
+#define VI6_RPF_MULT_ALPHA_A_MMD_NONE (0 << 12)
+#define VI6_RPF_MULT_ALPHA_A_MMD_RATIO (1 << 12)
+#define VI6_RPF_MULT_ALPHA_P_MMD_NONE (0 << 8)
+#define VI6_RPF_MULT_ALPHA_P_MMD_RATIO (1 << 8)
+#define VI6_RPF_MULT_ALPHA_P_MMD_IMAGE (2 << 8)
+#define VI6_RPF_MULT_ALPHA_P_MMD_BOTH (3 << 8)
+#define VI6_RPF_MULT_ALPHA_RATIO_MASK (0xff << 0)
+#define VI6_RPF_MULT_ALPHA_RATIO_SHIFT 0
+
/* -----------------------------------------------------------------------------
* WPF Control Registers
*/
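
The new VI6_RPF_MULT_ALPHA field macros compose into a single register value; the example below mirrors what rpf_configure() writes on Gen3 for an input with an alpha channel that is not premultiplied (global alpha applied to the alpha channel only). The masking of the ratio field is added here for illustration, the driver relies on the control range instead:

	#include <stdio.h>

	/* Field macros reproduced from the hunk above. */
	#define VI6_RPF_MULT_ALPHA_A_MMD_RATIO	(1 << 12)
	#define VI6_RPF_MULT_ALPHA_P_MMD_NONE	(0 << 8)
	#define VI6_RPF_MULT_ALPHA_RATIO_MASK	(0xff << 0)
	#define VI6_RPF_MULT_ALPHA_RATIO_SHIFT	0

	int main(void)
	{
		unsigned int alpha = 0x80;	/* global alpha value */
		unsigned int mult;

		mult = VI6_RPF_MULT_ALPHA_A_MMD_RATIO
		     | VI6_RPF_MULT_ALPHA_P_MMD_NONE
		     | ((alpha << VI6_RPF_MULT_ALPHA_RATIO_SHIFT)
			& VI6_RPF_MULT_ALPHA_RATIO_MASK);

		printf("VI6_RPF_MULT_ALPHA = 0x%04x\n", mult);	/* 0x1080 */
		return 0;
	}
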
diff --git a/drivers/media/platform/vsp1/vsp1_rpf.c b/drivers/media/platform/vsp1/vsp1_rpf.c
index 5bc1d1574a43..49168db3f529 100644
--- a/drivers/media/platform/vsp1/vsp1_rpf.c
+++ b/drivers/media/platform/vsp1/vsp1_rpf.c
@@ -16,6 +16,8 @@
#include <media/v4l2-subdev.h>
#include "vsp1.h"
+#include "vsp1_dl.h"
+#include "vsp1_pipe.h"
#include "vsp1_rwpf.h"
#include "vsp1_video.h"
@@ -26,64 +28,50 @@
* Device Access
*/
-static inline void vsp1_rpf_write(struct vsp1_rwpf *rpf, u32 reg, u32 data)
+static inline void vsp1_rpf_write(struct vsp1_rwpf *rpf,
+ struct vsp1_dl_list *dl, u32 reg, u32 data)
{
- vsp1_mod_write(&rpf->entity, reg + rpf->entity.index * VI6_RPF_OFFSET,
- data);
+ vsp1_dl_list_write(dl, reg + rpf->entity.index * VI6_RPF_OFFSET, data);
}
/* -----------------------------------------------------------------------------
- * Controls
+ * V4L2 Subdevice Operations
*/
-static int rpf_s_ctrl(struct v4l2_ctrl *ctrl)
-{
- struct vsp1_rwpf *rpf =
- container_of(ctrl->handler, struct vsp1_rwpf, ctrls);
- struct vsp1_pipeline *pipe;
-
- if (!vsp1_entity_is_streaming(&rpf->entity))
- return 0;
-
- switch (ctrl->id) {
- case V4L2_CID_ALPHA_COMPONENT:
- vsp1_rpf_write(rpf, VI6_RPF_VRTCOL_SET,
- ctrl->val << VI6_RPF_VRTCOL_SET_LAYA_SHIFT);
-
- pipe = to_vsp1_pipeline(&rpf->entity.subdev.entity);
- vsp1_pipeline_propagate_alpha(pipe, &rpf->entity, ctrl->val);
- break;
- }
-
- return 0;
-}
-
-static const struct v4l2_ctrl_ops rpf_ctrl_ops = {
- .s_ctrl = rpf_s_ctrl,
+static struct v4l2_subdev_ops rpf_ops = {
+ .pad = &vsp1_rwpf_pad_ops,
};
/* -----------------------------------------------------------------------------
- * V4L2 Subdevice Core Operations
+ * VSP1 Entity Operations
*/
-static int rpf_s_stream(struct v4l2_subdev *subdev, int enable)
+static void rpf_set_memory(struct vsp1_entity *entity, struct vsp1_dl_list *dl)
+{
+ struct vsp1_rwpf *rpf = entity_to_rwpf(entity);
+
+ vsp1_rpf_write(rpf, dl, VI6_RPF_SRCM_ADDR_Y,
+ rpf->mem.addr[0] + rpf->offsets[0]);
+ vsp1_rpf_write(rpf, dl, VI6_RPF_SRCM_ADDR_C0,
+ rpf->mem.addr[1] + rpf->offsets[1]);
+ vsp1_rpf_write(rpf, dl, VI6_RPF_SRCM_ADDR_C1,
+ rpf->mem.addr[2] + rpf->offsets[1]);
+}
+
+static void rpf_configure(struct vsp1_entity *entity,
+ struct vsp1_pipeline *pipe,
+ struct vsp1_dl_list *dl)
{
- struct vsp1_pipeline *pipe = to_vsp1_pipeline(&subdev->entity);
- struct vsp1_rwpf *rpf = to_rwpf(subdev);
- struct vsp1_device *vsp1 = rpf->entity.vsp1;
+ struct vsp1_rwpf *rpf = to_rwpf(&entity->subdev);
const struct vsp1_format_info *fmtinfo = rpf->fmtinfo;
const struct v4l2_pix_format_mplane *format = &rpf->format;
- const struct v4l2_rect *crop = &rpf->crop;
+ const struct v4l2_mbus_framefmt *source_format;
+ const struct v4l2_mbus_framefmt *sink_format;
+ const struct v4l2_rect *crop;
+ unsigned int left = 0;
+ unsigned int top = 0;
u32 pstride;
u32 infmt;
- int ret;
-
- ret = vsp1_entity_set_streaming(&rpf->entity, enable);
- if (ret < 0)
- return ret;
-
- if (!enable)
- return 0;
/* Source size, stride and crop offsets.
*
@@ -91,10 +79,12 @@ static int rpf_s_stream(struct v4l2_subdev *subdev, int enable)
* left corner in the plane buffer. Only two offsets are needed, as
* planes 2 and 3 always have identical strides.
*/
- vsp1_rpf_write(rpf, VI6_RPF_SRC_BSIZE,
+ crop = vsp1_rwpf_get_crop(rpf, rpf->entity.config);
+
+ vsp1_rpf_write(rpf, dl, VI6_RPF_SRC_BSIZE,
(crop->width << VI6_RPF_SRC_BSIZE_BHSIZE_SHIFT) |
(crop->height << VI6_RPF_SRC_BSIZE_BVSIZE_SHIFT));
- vsp1_rpf_write(rpf, VI6_RPF_SRC_ESIZE,
+ vsp1_rpf_write(rpf, dl, VI6_RPF_SRC_ESIZE,
(crop->width << VI6_RPF_SRC_ESIZE_EHSIZE_SHIFT) |
(crop->height << VI6_RPF_SRC_ESIZE_EVSIZE_SHIFT));
@@ -103,26 +93,25 @@ static int rpf_s_stream(struct v4l2_subdev *subdev, int enable)
pstride = format->plane_fmt[0].bytesperline
<< VI6_RPF_SRCM_PSTRIDE_Y_SHIFT;
- vsp1_rpf_write(rpf, VI6_RPF_SRCM_ADDR_Y,
- rpf->buf_addr[0] + rpf->offsets[0]);
-
if (format->num_planes > 1) {
rpf->offsets[1] = crop->top * format->plane_fmt[1].bytesperline
+ crop->left * fmtinfo->bpp[1] / 8;
pstride |= format->plane_fmt[1].bytesperline
<< VI6_RPF_SRCM_PSTRIDE_C_SHIFT;
-
- vsp1_rpf_write(rpf, VI6_RPF_SRCM_ADDR_C0,
- rpf->buf_addr[1] + rpf->offsets[1]);
-
- if (format->num_planes > 2)
- vsp1_rpf_write(rpf, VI6_RPF_SRCM_ADDR_C1,
- rpf->buf_addr[2] + rpf->offsets[1]);
+ } else {
+ rpf->offsets[1] = 0;
}
- vsp1_rpf_write(rpf, VI6_RPF_SRCM_PSTRIDE, pstride);
+ vsp1_rpf_write(rpf, dl, VI6_RPF_SRCM_PSTRIDE, pstride);
/* Format */
+ sink_format = vsp1_entity_get_pad_format(&rpf->entity,
+ rpf->entity.config,
+ RWPF_PAD_SINK);
+ source_format = vsp1_entity_get_pad_format(&rpf->entity,
+ rpf->entity.config,
+ RWPF_PAD_SOURCE);
+
infmt = VI6_RPF_INFMT_CIPM
| (fmtinfo->hwfmt << VI6_RPF_INFMT_RDFMT_SHIFT);
@@ -131,88 +120,98 @@ static int rpf_s_stream(struct v4l2_subdev *subdev, int enable)
if (fmtinfo->swap_uv)
infmt |= VI6_RPF_INFMT_SPUVS;
- if (rpf->entity.formats[RWPF_PAD_SINK].code !=
- rpf->entity.formats[RWPF_PAD_SOURCE].code)
+ if (sink_format->code != source_format->code)
infmt |= VI6_RPF_INFMT_CSC;
- vsp1_rpf_write(rpf, VI6_RPF_INFMT, infmt);
- vsp1_rpf_write(rpf, VI6_RPF_DSWAP, fmtinfo->swap);
+ vsp1_rpf_write(rpf, dl, VI6_RPF_INFMT, infmt);
+ vsp1_rpf_write(rpf, dl, VI6_RPF_DSWAP, fmtinfo->swap);
/* Output location */
- vsp1_rpf_write(rpf, VI6_RPF_LOC,
- (rpf->location.left << VI6_RPF_LOC_HCOORD_SHIFT) |
- (rpf->location.top << VI6_RPF_LOC_VCOORD_SHIFT));
+ if (pipe->bru) {
+ const struct v4l2_rect *compose;
+
+ compose = vsp1_entity_get_pad_compose(pipe->bru,
+ pipe->bru->config,
+ rpf->bru_input);
+ left = compose->left;
+ top = compose->top;
+ }
- /* Use the alpha channel (extended to 8 bits) when available or an
- * alpha value set through the V4L2_CID_ALPHA_COMPONENT control
- * otherwise. Disable color keying.
+ vsp1_rpf_write(rpf, dl, VI6_RPF_LOC,
+ (left << VI6_RPF_LOC_HCOORD_SHIFT) |
+ (top << VI6_RPF_LOC_VCOORD_SHIFT));
+
+ /* On Gen2 use the alpha channel (extended to 8 bits) when available or
+ * a fixed alpha value set through the V4L2_CID_ALPHA_COMPONENT control
+ * otherwise.
+ *
+ * The Gen3 RPF has extended alpha capability and can both multiply the
+ * alpha channel by a fixed global alpha value, and multiply the pixel
+ * components to convert the input to premultiplied alpha.
+ *
+ * As alpha premultiplication is available in the BRU for both Gen2 and
+ * Gen3 we handle it there and use the Gen3 alpha multiplier for global
+ * alpha multiplication only. This however prevents conversion to
+ * premultiplied alpha if no BRU is present in the pipeline. If that use
+ * case turns out to be useful we will revisit the implementation (for
+ * Gen3 only).
+ *
+ * We enable alpha multiplication on Gen3 using the fixed alpha value
+ * set through the V4L2_CID_ALPHA_COMPONENT control when the input
+ * contains an alpha channel. On Gen2 the global alpha is ignored in
+ * that case.
+ *
+ * In all cases, disable color keying.
*/
- vsp1_rpf_write(rpf, VI6_RPF_ALPH_SEL, VI6_RPF_ALPH_SEL_AEXT_EXT |
+ vsp1_rpf_write(rpf, dl, VI6_RPF_ALPH_SEL, VI6_RPF_ALPH_SEL_AEXT_EXT |
(fmtinfo->alpha ? VI6_RPF_ALPH_SEL_ASEL_PACKED
: VI6_RPF_ALPH_SEL_ASEL_FIXED));
- if (vsp1->info->uapi)
- mutex_lock(rpf->ctrls.lock);
- vsp1_rpf_write(rpf, VI6_RPF_VRTCOL_SET,
- rpf->alpha->cur.val << VI6_RPF_VRTCOL_SET_LAYA_SHIFT);
- vsp1_pipeline_propagate_alpha(pipe, &rpf->entity, rpf->alpha->cur.val);
- if (vsp1->info->uapi)
- mutex_unlock(rpf->ctrls.lock);
-
- vsp1_rpf_write(rpf, VI6_RPF_MSK_CTRL, 0);
- vsp1_rpf_write(rpf, VI6_RPF_CKEY_CTRL, 0);
-
- return 0;
-}
-
-/* -----------------------------------------------------------------------------
- * V4L2 Subdevice Operations
- */
-
-static struct v4l2_subdev_video_ops rpf_video_ops = {
- .s_stream = rpf_s_stream,
-};
-
-static struct v4l2_subdev_pad_ops rpf_pad_ops = {
- .enum_mbus_code = vsp1_rwpf_enum_mbus_code,
- .enum_frame_size = vsp1_rwpf_enum_frame_size,
- .get_fmt = vsp1_rwpf_get_format,
- .set_fmt = vsp1_rwpf_set_format,
- .get_selection = vsp1_rwpf_get_selection,
- .set_selection = vsp1_rwpf_set_selection,
-};
+ vsp1_rpf_write(rpf, dl, VI6_RPF_VRTCOL_SET,
+ rpf->alpha << VI6_RPF_VRTCOL_SET_LAYA_SHIFT);
+
+ if (entity->vsp1->info->gen == 3) {
+ u32 mult;
+
+ if (fmtinfo->alpha) {
+ /* When the input contains an alpha channel enable the
+ * alpha multiplier. If the input is premultiplied we
+ * need to multiply both the alpha channel and the pixel
+ * components by the global alpha value to keep them
+ * premultiplied. Otherwise multiply the alpha channel
+ * only.
+ */
+ bool premultiplied = format->flags
+ & V4L2_PIX_FMT_FLAG_PREMUL_ALPHA;
+
+ mult = VI6_RPF_MULT_ALPHA_A_MMD_RATIO
+ | (premultiplied ?
+ VI6_RPF_MULT_ALPHA_P_MMD_RATIO :
+ VI6_RPF_MULT_ALPHA_P_MMD_NONE)
+ | (rpf->alpha << VI6_RPF_MULT_ALPHA_RATIO_SHIFT);
+ } else {
+ /* When the input doesn't contain an alpha channel the
+ * global alpha value is applied in the unpacking unit,
+ * the alpha multiplier isn't needed and must be
+ * disabled.
+ */
+ mult = VI6_RPF_MULT_ALPHA_A_MMD_NONE
+ | VI6_RPF_MULT_ALPHA_P_MMD_NONE;
+ }
+
+ vsp1_rpf_write(rpf, dl, VI6_RPF_MULT_ALPHA, mult);
+ }
-static struct v4l2_subdev_ops rpf_ops = {
- .video = &rpf_video_ops,
- .pad = &rpf_pad_ops,
-};
+ vsp1_pipeline_propagate_alpha(pipe, &rpf->entity, dl, rpf->alpha);
-/* -----------------------------------------------------------------------------
- * Video Device Operations
- */
+ vsp1_rpf_write(rpf, dl, VI6_RPF_MSK_CTRL, 0);
+ vsp1_rpf_write(rpf, dl, VI6_RPF_CKEY_CTRL, 0);
-static void rpf_set_memory(struct vsp1_rwpf *rpf, struct vsp1_rwpf_memory *mem)
-{
- unsigned int i;
-
- for (i = 0; i < 3; ++i)
- rpf->buf_addr[i] = mem->addr[i];
-
- if (!vsp1_entity_is_streaming(&rpf->entity))
- return;
-
- vsp1_rpf_write(rpf, VI6_RPF_SRCM_ADDR_Y,
- mem->addr[0] + rpf->offsets[0]);
- if (mem->num_planes > 1)
- vsp1_rpf_write(rpf, VI6_RPF_SRCM_ADDR_C0,
- mem->addr[1] + rpf->offsets[1]);
- if (mem->num_planes > 2)
- vsp1_rpf_write(rpf, VI6_RPF_SRCM_ADDR_C1,
- mem->addr[2] + rpf->offsets[1]);
}
-static const struct vsp1_rwpf_operations rpf_vdev_ops = {
+static const struct vsp1_entity_operations rpf_entity_ops = {
.set_memory = rpf_set_memory,
+ .configure = rpf_configure,
};
/* -----------------------------------------------------------------------------
@@ -221,51 +220,31 @@ static const struct vsp1_rwpf_operations rpf_vdev_ops = {
struct vsp1_rwpf *vsp1_rpf_create(struct vsp1_device *vsp1, unsigned int index)
{
- struct v4l2_subdev *subdev;
struct vsp1_rwpf *rpf;
+ char name[6];
int ret;
rpf = devm_kzalloc(vsp1->dev, sizeof(*rpf), GFP_KERNEL);
if (rpf == NULL)
return ERR_PTR(-ENOMEM);
- rpf->ops = &rpf_vdev_ops;
-
rpf->max_width = RPF_MAX_WIDTH;
rpf->max_height = RPF_MAX_HEIGHT;
+ rpf->entity.ops = &rpf_entity_ops;
rpf->entity.type = VSP1_ENTITY_RPF;
rpf->entity.index = index;
- ret = vsp1_entity_init(vsp1, &rpf->entity, 2);
+ sprintf(name, "rpf.%u", index);
+ ret = vsp1_entity_init(vsp1, &rpf->entity, name, 2, &rpf_ops);
if (ret < 0)
return ERR_PTR(ret);
- /* Initialize the V4L2 subdev. */
- subdev = &rpf->entity.subdev;
- v4l2_subdev_init(subdev, &rpf_ops);
-
- subdev->entity.ops = &vsp1->media_ops;
- subdev->internal_ops = &vsp1_subdev_internal_ops;
- snprintf(subdev->name, sizeof(subdev->name), "%s rpf.%u",
- dev_name(vsp1->dev), index);
- v4l2_set_subdevdata(subdev, rpf);
- subdev->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
-
- vsp1_entity_init_formats(subdev, NULL);
-
/* Initialize the control handler. */
- v4l2_ctrl_handler_init(&rpf->ctrls, 1);
- rpf->alpha = v4l2_ctrl_new_std(&rpf->ctrls, &rpf_ctrl_ops,
- V4L2_CID_ALPHA_COMPONENT,
- 0, 255, 1, 255);
-
- rpf->entity.subdev.ctrl_handler = &rpf->ctrls;
-
- if (rpf->ctrls.error) {
+ ret = vsp1_rwpf_init_ctrls(rpf);
+ if (ret < 0) {
dev_err(vsp1->dev, "rpf%u: failed to initialize controls\n",
index);
- ret = rpf->ctrls.error;
goto error;
}
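
The long comment in rpf_configure() above describes which stages of the Gen3 alpha multiplier are enabled depending on whether the input carries an alpha channel and whether it is premultiplied. A condensed model of that decision, not driver code, just a restatement of the comment as a small table:

	#include <stdio.h>
	#include <stdbool.h>

	/* A = alpha channel stage, P = pixel component stage. */
	static const char *mult_alpha_mode(bool has_alpha, bool premultiplied)
	{
		if (!has_alpha)
			return "A: none, P: none (unpacking unit applies global alpha)";
		if (premultiplied)
			return "A: ratio, P: ratio (keep pixels premultiplied)";
		return "A: ratio, P: none (multiply the alpha channel only)";
	}

	int main(void)
	{
		printf("XRGB input:         %s\n", mult_alpha_mode(false, false));
		printf("ARGB input:         %s\n", mult_alpha_mode(true, false));
		printf("premultiplied ARGB: %s\n", mult_alpha_mode(true, true));
		return 0;
	}
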
diff --git a/drivers/media/platform/vsp1/vsp1_rwpf.c b/drivers/media/platform/vsp1/vsp1_rwpf.c
index 9688c219b30e..3b6e032e7806 100644
--- a/drivers/media/platform/vsp1/vsp1_rwpf.c
+++ b/drivers/media/platform/vsp1/vsp1_rwpf.c
@@ -20,13 +20,20 @@
#define RWPF_MIN_WIDTH 1
#define RWPF_MIN_HEIGHT 1
+struct v4l2_rect *vsp1_rwpf_get_crop(struct vsp1_rwpf *rwpf,
+ struct v4l2_subdev_pad_config *config)
+{
+ return v4l2_subdev_get_try_crop(&rwpf->entity.subdev, config,
+ RWPF_PAD_SINK);
+}
+
/* -----------------------------------------------------------------------------
* V4L2 Subdevice Pad Operations
*/
-int vsp1_rwpf_enum_mbus_code(struct v4l2_subdev *subdev,
- struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_mbus_code_enum *code)
+static int vsp1_rwpf_enum_mbus_code(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_mbus_code_enum *code)
{
static const unsigned int codes[] = {
MEDIA_BUS_FMT_ARGB8888_1X32,
@@ -41,75 +48,36 @@ int vsp1_rwpf_enum_mbus_code(struct v4l2_subdev *subdev,
return 0;
}
-int vsp1_rwpf_enum_frame_size(struct v4l2_subdev *subdev,
- struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_frame_size_enum *fse)
-{
- struct vsp1_rwpf *rwpf = to_rwpf(subdev);
- struct v4l2_mbus_framefmt *format;
-
- format = vsp1_entity_get_pad_format(&rwpf->entity, cfg, fse->pad,
- fse->which);
-
- if (fse->index || fse->code != format->code)
- return -EINVAL;
-
- if (fse->pad == RWPF_PAD_SINK) {
- fse->min_width = RWPF_MIN_WIDTH;
- fse->max_width = rwpf->max_width;
- fse->min_height = RWPF_MIN_HEIGHT;
- fse->max_height = rwpf->max_height;
- } else {
- /* The size on the source pad are fixed and always identical to
- * the size on the sink pad.
- */
- fse->min_width = format->width;
- fse->max_width = format->width;
- fse->min_height = format->height;
- fse->max_height = format->height;
- }
-
- return 0;
-}
-
-static struct v4l2_rect *
-vsp1_rwpf_get_crop(struct vsp1_rwpf *rwpf, struct v4l2_subdev_pad_config *cfg, u32 which)
-{
- switch (which) {
- case V4L2_SUBDEV_FORMAT_TRY:
- return v4l2_subdev_get_try_crop(&rwpf->entity.subdev, cfg, RWPF_PAD_SINK);
- case V4L2_SUBDEV_FORMAT_ACTIVE:
- return &rwpf->crop;
- default:
- return NULL;
- }
-}
-
-int vsp1_rwpf_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_format *fmt)
+static int vsp1_rwpf_enum_frame_size(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_frame_size_enum *fse)
{
struct vsp1_rwpf *rwpf = to_rwpf(subdev);
- fmt->format = *vsp1_entity_get_pad_format(&rwpf->entity, cfg, fmt->pad,
- fmt->which);
-
- return 0;
+ return vsp1_subdev_enum_frame_size(subdev, cfg, fse, RWPF_MIN_WIDTH,
+ RWPF_MIN_HEIGHT, rwpf->max_width,
+ rwpf->max_height);
}
-int vsp1_rwpf_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_format *fmt)
+static int vsp1_rwpf_set_format(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_format *fmt)
{
struct vsp1_rwpf *rwpf = to_rwpf(subdev);
+ struct v4l2_subdev_pad_config *config;
struct v4l2_mbus_framefmt *format;
struct v4l2_rect *crop;
+ config = vsp1_entity_get_pad_config(&rwpf->entity, cfg, fmt->which);
+ if (!config)
+ return -EINVAL;
+
/* Default to YUV if the requested format is not supported. */
if (fmt->format.code != MEDIA_BUS_FMT_ARGB8888_1X32 &&
fmt->format.code != MEDIA_BUS_FMT_AYUV8_1X32)
fmt->format.code = MEDIA_BUS_FMT_AYUV8_1X32;
- format = vsp1_entity_get_pad_format(&rwpf->entity, cfg, fmt->pad,
- fmt->which);
+ format = vsp1_entity_get_pad_format(&rwpf->entity, config, fmt->pad);
if (fmt->pad == RWPF_PAD_SOURCE) {
/* The RWPF performs format conversion but can't scale, only the
@@ -131,39 +99,44 @@ int vsp1_rwpf_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_conf
fmt->format = *format;
/* Update the sink crop rectangle. */
- crop = vsp1_rwpf_get_crop(rwpf, cfg, fmt->which);
+ crop = vsp1_rwpf_get_crop(rwpf, config);
crop->left = 0;
crop->top = 0;
crop->width = fmt->format.width;
crop->height = fmt->format.height;
/* Propagate the format to the source pad. */
- format = vsp1_entity_get_pad_format(&rwpf->entity, cfg, RWPF_PAD_SOURCE,
- fmt->which);
+ format = vsp1_entity_get_pad_format(&rwpf->entity, config,
+ RWPF_PAD_SOURCE);
*format = fmt->format;
return 0;
}
-int vsp1_rwpf_get_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_selection *sel)
+static int vsp1_rwpf_get_selection(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_selection *sel)
{
struct vsp1_rwpf *rwpf = to_rwpf(subdev);
+ struct v4l2_subdev_pad_config *config;
struct v4l2_mbus_framefmt *format;
/* Cropping is implemented on the sink pad. */
if (sel->pad != RWPF_PAD_SINK)
return -EINVAL;
+ config = vsp1_entity_get_pad_config(&rwpf->entity, cfg, sel->which);
+ if (!config)
+ return -EINVAL;
+
switch (sel->target) {
case V4L2_SEL_TGT_CROP:
- sel->r = *vsp1_rwpf_get_crop(rwpf, cfg, sel->which);
+ sel->r = *vsp1_rwpf_get_crop(rwpf, config);
break;
case V4L2_SEL_TGT_CROP_BOUNDS:
- format = vsp1_entity_get_pad_format(&rwpf->entity, cfg,
- RWPF_PAD_SINK, sel->which);
+ format = vsp1_entity_get_pad_format(&rwpf->entity, config,
+ RWPF_PAD_SINK);
sel->r.left = 0;
sel->r.top = 0;
sel->r.width = format->width;
@@ -177,11 +150,12 @@ int vsp1_rwpf_get_selection(struct v4l2_subdev *subdev,
return 0;
}
-int vsp1_rwpf_set_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_selection *sel)
+static int vsp1_rwpf_set_selection(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
+ struct v4l2_subdev_selection *sel)
{
struct vsp1_rwpf *rwpf = to_rwpf(subdev);
+ struct v4l2_subdev_pad_config *config;
struct v4l2_mbus_framefmt *format;
struct v4l2_rect *crop;
@@ -192,11 +166,15 @@ int vsp1_rwpf_set_selection(struct v4l2_subdev *subdev,
if (sel->target != V4L2_SEL_TGT_CROP)
return -EINVAL;
+ config = vsp1_entity_get_pad_config(&rwpf->entity, cfg, sel->which);
+ if (!config)
+ return -EINVAL;
+
/* Make sure the crop rectangle is entirely contained in the image. The
* WPF top and left offsets are limited to 255.
*/
- format = vsp1_entity_get_pad_format(&rwpf->entity, cfg, RWPF_PAD_SINK,
- sel->which);
+ format = vsp1_entity_get_pad_format(&rwpf->entity, config,
+ RWPF_PAD_SINK);
/* Restrict the crop rectangle coordinates to multiples of 2 to avoid
* shifting the color plane.
@@ -219,14 +197,59 @@ int vsp1_rwpf_set_selection(struct v4l2_subdev *subdev,
sel->r.height = min_t(unsigned int, sel->r.height,
format->height - sel->r.top);
- crop = vsp1_rwpf_get_crop(rwpf, cfg, sel->which);
+ crop = vsp1_rwpf_get_crop(rwpf, config);
*crop = sel->r;
/* Propagate the format to the source pad. */
- format = vsp1_entity_get_pad_format(&rwpf->entity, cfg, RWPF_PAD_SOURCE,
- sel->which);
+ format = vsp1_entity_get_pad_format(&rwpf->entity, config,
+ RWPF_PAD_SOURCE);
format->width = crop->width;
format->height = crop->height;
return 0;
}
+
+const struct v4l2_subdev_pad_ops vsp1_rwpf_pad_ops = {
+ .init_cfg = vsp1_entity_init_cfg,
+ .enum_mbus_code = vsp1_rwpf_enum_mbus_code,
+ .enum_frame_size = vsp1_rwpf_enum_frame_size,
+ .get_fmt = vsp1_subdev_get_pad_format,
+ .set_fmt = vsp1_rwpf_set_format,
+ .get_selection = vsp1_rwpf_get_selection,
+ .set_selection = vsp1_rwpf_set_selection,
+};
+
+/* -----------------------------------------------------------------------------
+ * Controls
+ */
+
+static int vsp1_rwpf_s_ctrl(struct v4l2_ctrl *ctrl)
+{
+ struct vsp1_rwpf *rwpf =
+ container_of(ctrl->handler, struct vsp1_rwpf, ctrls);
+
+ switch (ctrl->id) {
+ case V4L2_CID_ALPHA_COMPONENT:
+ rwpf->alpha = ctrl->val;
+ break;
+ }
+
+ return 0;
+}
+
+static const struct v4l2_ctrl_ops vsp1_rwpf_ctrl_ops = {
+ .s_ctrl = vsp1_rwpf_s_ctrl,
+};
+
+int vsp1_rwpf_init_ctrls(struct vsp1_rwpf *rwpf)
+{
+ rwpf->alpha = 255;
+
+ v4l2_ctrl_handler_init(&rwpf->ctrls, 1);
+ v4l2_ctrl_new_std(&rwpf->ctrls, &vsp1_rwpf_ctrl_ops,
+ V4L2_CID_ALPHA_COMPONENT, 0, 255, 1, 255);
+
+ rwpf->entity.subdev.ctrl_handler = &rwpf->ctrls;
+
+ return rwpf->ctrls.error;
+}
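
The comments in vsp1_rwpf_set_selection() above explain that the sink crop is snapped to even coordinates (to avoid shifting the chroma plane) and clipped so it stays inside the sink format. A rough, simplified model of that sanitisation under those assumptions; the driver's exact limits differ (including the 255-pixel WPF offset cap mentioned in the comment, which is left out here):

	#include <stdio.h>

	struct rect { unsigned int left, top, width, height; };

	static void clamp_crop(struct rect *r, unsigned int fw, unsigned int fh)
	{
		/* Snap the top-left corner to even coordinates. */
		r->left &= ~1U;
		r->top &= ~1U;

		/* Keep the rectangle inside the sink format. */
		if (r->left > fw - 2)
			r->left = fw - 2;
		if (r->top > fh - 2)
			r->top = fh - 2;
		if (r->width > fw - r->left)
			r->width = fw - r->left;
		if (r->height > fh - r->top)
			r->height = fh - r->top;
	}

	int main(void)
	{
		struct rect r = { .left = 3, .top = 5, .width = 1920, .height = 1080 };

		clamp_crop(&r, 1280, 720);
		printf("crop: %ux%u @ (%u,%u)\n", r.width, r.height, r.left, r.top);
		return 0;
	}
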
diff --git a/drivers/media/platform/vsp1/vsp1_rwpf.h b/drivers/media/platform/vsp1/vsp1_rwpf.h
index 8e8235682ada..9ff7c78f239e 100644
--- a/drivers/media/platform/vsp1/vsp1_rwpf.h
+++ b/drivers/media/platform/vsp1/vsp1_rwpf.h
@@ -24,42 +24,35 @@
#define RWPF_PAD_SOURCE 1
struct v4l2_ctrl;
+struct vsp1_dl_manager;
+struct vsp1_pipeline;
struct vsp1_rwpf;
struct vsp1_video;
struct vsp1_rwpf_memory {
- unsigned int num_planes;
dma_addr_t addr[3];
- unsigned int length[3];
-};
-
-struct vsp1_rwpf_operations {
- void (*set_memory)(struct vsp1_rwpf *rwpf,
- struct vsp1_rwpf_memory *mem);
};
struct vsp1_rwpf {
struct vsp1_entity entity;
struct v4l2_ctrl_handler ctrls;
- struct v4l2_ctrl *alpha;
+ struct vsp1_pipeline *pipe;
struct vsp1_video *video;
- const struct vsp1_rwpf_operations *ops;
-
unsigned int max_width;
unsigned int max_height;
struct v4l2_pix_format_mplane format;
const struct vsp1_format_info *fmtinfo;
- struct {
- unsigned int left;
- unsigned int top;
- } location;
- struct v4l2_rect crop;
+ unsigned int bru_input;
+
+ unsigned int alpha;
unsigned int offsets[2];
- dma_addr_t buf_addr[3];
+ struct vsp1_rwpf_memory mem;
+
+ struct vsp1_dl_manager *dlm;
};
static inline struct vsp1_rwpf *to_rwpf(struct v4l2_subdev *subdev)
@@ -67,24 +60,31 @@ static inline struct vsp1_rwpf *to_rwpf(struct v4l2_subdev *subdev)
return container_of(subdev, struct vsp1_rwpf, entity.subdev);
}
+static inline struct vsp1_rwpf *entity_to_rwpf(struct vsp1_entity *entity)
+{
+ return container_of(entity, struct vsp1_rwpf, entity);
+}
+
struct vsp1_rwpf *vsp1_rpf_create(struct vsp1_device *vsp1, unsigned int index);
struct vsp1_rwpf *vsp1_wpf_create(struct vsp1_device *vsp1, unsigned int index);
-int vsp1_rwpf_enum_mbus_code(struct v4l2_subdev *subdev,
- struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_mbus_code_enum *code);
-int vsp1_rwpf_enum_frame_size(struct v4l2_subdev *subdev,
- struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_frame_size_enum *fse);
-int vsp1_rwpf_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_format *fmt);
-int vsp1_rwpf_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_format *fmt);
-int vsp1_rwpf_get_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_selection *sel);
-int vsp1_rwpf_set_selection(struct v4l2_subdev *subdev,
- struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_selection *sel);
+int vsp1_rwpf_init_ctrls(struct vsp1_rwpf *rwpf);
+
+extern const struct v4l2_subdev_pad_ops vsp1_rwpf_pad_ops;
+
+struct v4l2_rect *vsp1_rwpf_get_crop(struct vsp1_rwpf *rwpf,
+ struct v4l2_subdev_pad_config *config);
+/**
+ * vsp1_rwpf_set_memory - Configure DMA addresses for a [RW]PF
+ * @rwpf: the [RW]PF instance
+ * @dl: the display list
+ *
+ * This function applies the cached memory buffer address to the display list.
+ */
+static inline void vsp1_rwpf_set_memory(struct vsp1_rwpf *rwpf,
+ struct vsp1_dl_list *dl)
+{
+ rwpf->entity.ops->set_memory(&rwpf->entity, dl);
+}
#endif /* __VSP1_RWPF_H__ */
diff --git a/drivers/media/platform/vsp1/vsp1_sru.c b/drivers/media/platform/vsp1/vsp1_sru.c
index cc09efbfb24f..97ef997ae735 100644
--- a/drivers/media/platform/vsp1/vsp1_sru.c
+++ b/drivers/media/platform/vsp1/vsp1_sru.c
@@ -17,6 +17,7 @@
#include <media/v4l2-subdev.h>
#include "vsp1.h"
+#include "vsp1_dl.h"
#include "vsp1_sru.h"
#define SRU_MIN_SIZE 4U
@@ -26,14 +27,10 @@
* Device Access
*/
-static inline u32 vsp1_sru_read(struct vsp1_sru *sru, u32 reg)
+static inline void vsp1_sru_write(struct vsp1_sru *sru, struct vsp1_dl_list *dl,
+ u32 reg, u32 data)
{
- return vsp1_read(sru->entity.vsp1, reg);
-}
-
-static inline void vsp1_sru_write(struct vsp1_sru *sru, u32 reg, u32 data)
-{
- vsp1_write(sru->entity.vsp1, reg, data);
+ vsp1_dl_list_write(dl, reg, data);
}
/* -----------------------------------------------------------------------------
@@ -82,20 +79,10 @@ static int sru_s_ctrl(struct v4l2_ctrl *ctrl)
{
struct vsp1_sru *sru =
container_of(ctrl->handler, struct vsp1_sru, ctrls);
- const struct vsp1_sru_param *param;
- u32 value;
switch (ctrl->id) {
case V4L2_CID_VSP1_SRU_INTENSITY:
- param = &vsp1_sru_params[ctrl->val - 1];
-
- value = vsp1_sru_read(sru, VI6_SRU_CTRL0);
- value &= ~(VI6_SRU_CTRL0_PARAM0_MASK |
- VI6_SRU_CTRL0_PARAM1_MASK);
- value |= param->ctrl0;
- vsp1_sru_write(sru, VI6_SRU_CTRL0, value);
-
- vsp1_sru_write(sru, VI6_SRU_CTRL2, param->ctrl2);
+ sru->intensity = ctrl->val;
break;
}
@@ -118,54 +105,7 @@ static const struct v4l2_ctrl_config sru_intensity_control = {
};
/* -----------------------------------------------------------------------------
- * V4L2 Subdevice Core Operations
- */
-
-static int sru_s_stream(struct v4l2_subdev *subdev, int enable)
-{
- struct vsp1_sru *sru = to_sru(subdev);
- struct v4l2_mbus_framefmt *input;
- struct v4l2_mbus_framefmt *output;
- u32 ctrl0;
- int ret;
-
- ret = vsp1_entity_set_streaming(&sru->entity, enable);
- if (ret < 0)
- return ret;
-
- if (!enable)
- return 0;
-
- input = &sru->entity.formats[SRU_PAD_SINK];
- output = &sru->entity.formats[SRU_PAD_SOURCE];
-
- if (input->code == MEDIA_BUS_FMT_ARGB8888_1X32)
- ctrl0 = VI6_SRU_CTRL0_PARAM2 | VI6_SRU_CTRL0_PARAM3
- | VI6_SRU_CTRL0_PARAM4;
- else
- ctrl0 = VI6_SRU_CTRL0_PARAM3;
-
- if (input->width != output->width)
- ctrl0 |= VI6_SRU_CTRL0_MODE_UPSCALE;
-
- /* Take the control handler lock to ensure that the CTRL0 value won't be
- * changed behind our back by a set control operation.
- */
- if (sru->entity.vsp1->info->uapi)
- mutex_lock(sru->ctrls.lock);
- ctrl0 |= vsp1_sru_read(sru, VI6_SRU_CTRL0)
- & (VI6_SRU_CTRL0_PARAM0_MASK | VI6_SRU_CTRL0_PARAM1_MASK);
- vsp1_sru_write(sru, VI6_SRU_CTRL0, ctrl0);
- if (sru->entity.vsp1->info->uapi)
- mutex_unlock(sru->ctrls.lock);
-
- vsp1_sru_write(sru, VI6_SRU_CTRL1, VI6_SRU_CTRL1_PARAM5);
-
- return 0;
-}
-
-/* -----------------------------------------------------------------------------
- * V4L2 Subdevice Pad Operations
+ * V4L2 Subdevice Operations
*/
static int sru_enum_mbus_code(struct v4l2_subdev *subdev,
@@ -176,27 +116,9 @@ static int sru_enum_mbus_code(struct v4l2_subdev *subdev,
MEDIA_BUS_FMT_ARGB8888_1X32,
MEDIA_BUS_FMT_AYUV8_1X32,
};
- struct vsp1_sru *sru = to_sru(subdev);
- struct v4l2_mbus_framefmt *format;
-
- if (code->pad == SRU_PAD_SINK) {
- if (code->index >= ARRAY_SIZE(codes))
- return -EINVAL;
-
- code->code = codes[code->index];
- } else {
- /* The SRU can't perform format conversion, the sink format is
- * always identical to the source format.
- */
- if (code->index)
- return -EINVAL;
- format = vsp1_entity_get_pad_format(&sru->entity, cfg,
- SRU_PAD_SINK, code->which);
- code->code = format->code;
- }
-
- return 0;
+ return vsp1_subdev_enum_mbus_code(subdev, cfg, code, codes,
+ ARRAY_SIZE(codes));
}
static int sru_enum_frame_size(struct v4l2_subdev *subdev,
@@ -204,10 +126,14 @@ static int sru_enum_frame_size(struct v4l2_subdev *subdev,
struct v4l2_subdev_frame_size_enum *fse)
{
struct vsp1_sru *sru = to_sru(subdev);
+ struct v4l2_subdev_pad_config *config;
struct v4l2_mbus_framefmt *format;
- format = vsp1_entity_get_pad_format(&sru->entity, cfg,
- SRU_PAD_SINK, fse->which);
+ config = vsp1_entity_get_pad_config(&sru->entity, cfg, fse->which);
+ if (!config)
+ return -EINVAL;
+
+ format = vsp1_entity_get_pad_format(&sru->entity, config, SRU_PAD_SINK);
if (fse->index || fse->code != format->code)
return -EINVAL;
@@ -233,20 +159,9 @@ static int sru_enum_frame_size(struct v4l2_subdev *subdev,
return 0;
}
-static int sru_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_format *fmt)
-{
- struct vsp1_sru *sru = to_sru(subdev);
-
- fmt->format = *vsp1_entity_get_pad_format(&sru->entity, cfg, fmt->pad,
- fmt->which);
-
- return 0;
-}
-
-static void sru_try_format(struct vsp1_sru *sru, struct v4l2_subdev_pad_config *cfg,
- unsigned int pad, struct v4l2_mbus_framefmt *fmt,
- enum v4l2_subdev_format_whence which)
+static void sru_try_format(struct vsp1_sru *sru,
+ struct v4l2_subdev_pad_config *config,
+ unsigned int pad, struct v4l2_mbus_framefmt *fmt)
{
struct v4l2_mbus_framefmt *format;
unsigned int input_area;
@@ -265,8 +180,8 @@ static void sru_try_format(struct vsp1_sru *sru, struct v4l2_subdev_pad_config *
case SRU_PAD_SOURCE:
/* The SRU can't perform format conversion. */
- format = vsp1_entity_get_pad_format(&sru->entity, cfg,
- SRU_PAD_SINK, which);
+ format = vsp1_entity_get_pad_format(&sru->entity, config,
+ SRU_PAD_SINK);
fmt->code = format->code;
/* We can upscale by 2 in both direction, but not independently.
@@ -295,57 +210,94 @@ static void sru_try_format(struct vsp1_sru *sru, struct v4l2_subdev_pad_config *
fmt->colorspace = V4L2_COLORSPACE_SRGB;
}
-static int sru_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
+static int sru_set_format(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vsp1_sru *sru = to_sru(subdev);
+ struct v4l2_subdev_pad_config *config;
struct v4l2_mbus_framefmt *format;
- sru_try_format(sru, cfg, fmt->pad, &fmt->format, fmt->which);
+ config = vsp1_entity_get_pad_config(&sru->entity, cfg, fmt->which);
+ if (!config)
+ return -EINVAL;
+
+ sru_try_format(sru, config, fmt->pad, &fmt->format);
- format = vsp1_entity_get_pad_format(&sru->entity, cfg, fmt->pad,
- fmt->which);
+ format = vsp1_entity_get_pad_format(&sru->entity, config, fmt->pad);
*format = fmt->format;
if (fmt->pad == SRU_PAD_SINK) {
/* Propagate the format to the source pad. */
- format = vsp1_entity_get_pad_format(&sru->entity, cfg,
- SRU_PAD_SOURCE, fmt->which);
+ format = vsp1_entity_get_pad_format(&sru->entity, config,
+ SRU_PAD_SOURCE);
*format = fmt->format;
- sru_try_format(sru, cfg, SRU_PAD_SOURCE, format, fmt->which);
+ sru_try_format(sru, config, SRU_PAD_SOURCE, format);
}
return 0;
}
-/* -----------------------------------------------------------------------------
- * V4L2 Subdevice Operations
- */
-
-static struct v4l2_subdev_video_ops sru_video_ops = {
- .s_stream = sru_s_stream,
-};
-
static struct v4l2_subdev_pad_ops sru_pad_ops = {
+ .init_cfg = vsp1_entity_init_cfg,
.enum_mbus_code = sru_enum_mbus_code,
.enum_frame_size = sru_enum_frame_size,
- .get_fmt = sru_get_format,
+ .get_fmt = vsp1_subdev_get_pad_format,
.set_fmt = sru_set_format,
};
static struct v4l2_subdev_ops sru_ops = {
- .video = &sru_video_ops,
.pad = &sru_pad_ops,
};
/* -----------------------------------------------------------------------------
+ * VSP1 Entity Operations
+ */
+
+static void sru_configure(struct vsp1_entity *entity,
+ struct vsp1_pipeline *pipe,
+ struct vsp1_dl_list *dl)
+{
+ const struct vsp1_sru_param *param;
+ struct vsp1_sru *sru = to_sru(&entity->subdev);
+ struct v4l2_mbus_framefmt *input;
+ struct v4l2_mbus_framefmt *output;
+ u32 ctrl0;
+
+ input = vsp1_entity_get_pad_format(&sru->entity, sru->entity.config,
+ SRU_PAD_SINK);
+ output = vsp1_entity_get_pad_format(&sru->entity, sru->entity.config,
+ SRU_PAD_SOURCE);
+
+ if (input->code == MEDIA_BUS_FMT_ARGB8888_1X32)
+ ctrl0 = VI6_SRU_CTRL0_PARAM2 | VI6_SRU_CTRL0_PARAM3
+ | VI6_SRU_CTRL0_PARAM4;
+ else
+ ctrl0 = VI6_SRU_CTRL0_PARAM3;
+
+ if (input->width != output->width)
+ ctrl0 |= VI6_SRU_CTRL0_MODE_UPSCALE;
+
+ param = &vsp1_sru_params[sru->intensity - 1];
+
+ ctrl0 |= param->ctrl0;
+
+ vsp1_sru_write(sru, dl, VI6_SRU_CTRL0, ctrl0);
+ vsp1_sru_write(sru, dl, VI6_SRU_CTRL1, VI6_SRU_CTRL1_PARAM5);
+ vsp1_sru_write(sru, dl, VI6_SRU_CTRL2, param->ctrl2);
+}
+
+static const struct vsp1_entity_operations sru_entity_ops = {
+ .configure = sru_configure,
+};
+
+/* -----------------------------------------------------------------------------
* Initialization and Cleanup
*/
struct vsp1_sru *vsp1_sru_create(struct vsp1_device *vsp1)
{
- struct v4l2_subdev *subdev;
struct vsp1_sru *sru;
int ret;
@@ -353,29 +305,19 @@ struct vsp1_sru *vsp1_sru_create(struct vsp1_device *vsp1)
if (sru == NULL)
return ERR_PTR(-ENOMEM);
+ sru->entity.ops = &sru_entity_ops;
sru->entity.type = VSP1_ENTITY_SRU;
- ret = vsp1_entity_init(vsp1, &sru->entity, 2);
+ ret = vsp1_entity_init(vsp1, &sru->entity, "sru", 2, &sru_ops);
if (ret < 0)
return ERR_PTR(ret);
- /* Initialize the V4L2 subdev. */
- subdev = &sru->entity.subdev;
- v4l2_subdev_init(subdev, &sru_ops);
-
- subdev->entity.ops = &vsp1->media_ops;
- subdev->internal_ops = &vsp1_subdev_internal_ops;
- snprintf(subdev->name, sizeof(subdev->name), "%s sru",
- dev_name(vsp1->dev));
- v4l2_set_subdevdata(subdev, sru);
- subdev->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
-
- vsp1_entity_init_formats(subdev, NULL);
-
/* Initialize the control handler. */
v4l2_ctrl_handler_init(&sru->ctrls, 1);
v4l2_ctrl_new_custom(&sru->ctrls, &sru_intensity_control, NULL);
+ sru->intensity = 1;
+
sru->entity.subdev.ctrl_handler = &sru->ctrls;
if (sru->ctrls.error) {
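
With the intensity control now merely cached in sru_s_ctrl(), sru_configure() rebuilds VI6_SRU_CTRL0 from scratch instead of reading the register back. A sketch of that composition under placeholder bit values (not the real VI6_SRU_* constants); the PARAM0/PARAM1 bits stand for the vsp1_sru_params[] entry selected by the cached intensity:

	#include <stdio.h>
	#include <stdbool.h>

	#define PARAM2	(1 << 2)	/* placeholder bit positions */
	#define PARAM3	(1 << 3)
	#define PARAM4	(1 << 4)
	#define UPSCALE	(1 << 8)

	static unsigned int sru_ctrl0(bool rgb_input, bool upscaling,
				      unsigned int intensity_bits)
	{
		unsigned int ctrl0;

		ctrl0 = rgb_input ? PARAM2 | PARAM3 | PARAM4 : PARAM3;
		if (upscaling)
			ctrl0 |= UPSCALE;

		/* Intensity-dependent bits from the parameter table. */
		return ctrl0 | intensity_bits;
	}

	int main(void)
	{
		printf("YUV, 2x upscale: 0x%x\n", sru_ctrl0(false, true, 0x30));
		return 0;
	}
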
diff --git a/drivers/media/platform/vsp1/vsp1_sru.h b/drivers/media/platform/vsp1/vsp1_sru.h
index b6768bf3dc47..85e241457af2 100644
--- a/drivers/media/platform/vsp1/vsp1_sru.h
+++ b/drivers/media/platform/vsp1/vsp1_sru.h
@@ -28,6 +28,8 @@ struct vsp1_sru {
struct vsp1_entity entity;
struct v4l2_ctrl_handler ctrls;
+
+ unsigned int intensity;
};
static inline struct vsp1_sru *to_sru(struct v4l2_subdev *subdev)
diff --git a/drivers/media/platform/vsp1/vsp1_uds.c b/drivers/media/platform/vsp1/vsp1_uds.c
index bba67770cf95..1875e29da184 100644
--- a/drivers/media/platform/vsp1/vsp1_uds.c
+++ b/drivers/media/platform/vsp1/vsp1_uds.c
@@ -17,6 +17,7 @@
#include <media/v4l2-subdev.h>
#include "vsp1.h"
+#include "vsp1_dl.h"
#include "vsp1_uds.h"
#define UDS_MIN_SIZE 4U
@@ -29,19 +30,21 @@
* Device Access
*/
-static inline void vsp1_uds_write(struct vsp1_uds *uds, u32 reg, u32 data)
+static inline void vsp1_uds_write(struct vsp1_uds *uds, struct vsp1_dl_list *dl,
+ u32 reg, u32 data)
{
- vsp1_write(uds->entity.vsp1,
- reg + uds->entity.index * VI6_UDS_OFFSET, data);
+ vsp1_dl_list_write(dl, reg + uds->entity.index * VI6_UDS_OFFSET, data);
}
/* -----------------------------------------------------------------------------
* Scaling Computation
*/
-void vsp1_uds_set_alpha(struct vsp1_uds *uds, unsigned int alpha)
+void vsp1_uds_set_alpha(struct vsp1_uds *uds, struct vsp1_dl_list *dl,
+ unsigned int alpha)
{
- vsp1_uds_write(uds, VI6_UDS_ALPVAL, alpha << VI6_UDS_ALPVAL_VAL0_SHIFT);
+ vsp1_uds_write(uds, dl, VI6_UDS_ALPVAL,
+ alpha << VI6_UDS_ALPVAL_VAL0_SHIFT);
}
/*
@@ -105,60 +108,6 @@ static unsigned int uds_compute_ratio(unsigned int input, unsigned int output)
}
/* -----------------------------------------------------------------------------
- * V4L2 Subdevice Core Operations
- */
-
-static int uds_s_stream(struct v4l2_subdev *subdev, int enable)
-{
- struct vsp1_uds *uds = to_uds(subdev);
- const struct v4l2_mbus_framefmt *output;
- const struct v4l2_mbus_framefmt *input;
- unsigned int hscale;
- unsigned int vscale;
- bool multitap;
-
- if (!enable)
- return 0;
-
- input = &uds->entity.formats[UDS_PAD_SINK];
- output = &uds->entity.formats[UDS_PAD_SOURCE];
-
- hscale = uds_compute_ratio(input->width, output->width);
- vscale = uds_compute_ratio(input->height, output->height);
-
- dev_dbg(uds->entity.vsp1->dev, "hscale %u vscale %u\n", hscale, vscale);
-
- /* Multi-tap scaling can't be enabled along with alpha scaling when
- * scaling down with a factor lower than or equal to 1/2 in either
- * direction.
- */
- if (uds->scale_alpha && (hscale >= 8192 || vscale >= 8192))
- multitap = false;
- else
- multitap = true;
-
- vsp1_uds_write(uds, VI6_UDS_CTRL,
- (uds->scale_alpha ? VI6_UDS_CTRL_AON : 0) |
- (multitap ? VI6_UDS_CTRL_BC : 0));
-
- vsp1_uds_write(uds, VI6_UDS_PASS_BWIDTH,
- (uds_passband_width(hscale)
- << VI6_UDS_PASS_BWIDTH_H_SHIFT) |
- (uds_passband_width(vscale)
- << VI6_UDS_PASS_BWIDTH_V_SHIFT));
-
- /* Set the scaling ratios and the output size. */
- vsp1_uds_write(uds, VI6_UDS_SCALE,
- (hscale << VI6_UDS_SCALE_HFRAC_SHIFT) |
- (vscale << VI6_UDS_SCALE_VFRAC_SHIFT));
- vsp1_uds_write(uds, VI6_UDS_CLIP_SIZE,
- (output->width << VI6_UDS_CLIP_SIZE_HSIZE_SHIFT) |
- (output->height << VI6_UDS_CLIP_SIZE_VSIZE_SHIFT));
-
- return 0;
-}
-
-/* -----------------------------------------------------------------------------
* V4L2 Subdevice Pad Operations
*/
@@ -170,28 +119,9 @@ static int uds_enum_mbus_code(struct v4l2_subdev *subdev,
MEDIA_BUS_FMT_ARGB8888_1X32,
MEDIA_BUS_FMT_AYUV8_1X32,
};
- struct vsp1_uds *uds = to_uds(subdev);
-
- if (code->pad == UDS_PAD_SINK) {
- if (code->index >= ARRAY_SIZE(codes))
- return -EINVAL;
-
- code->code = codes[code->index];
- } else {
- struct v4l2_mbus_framefmt *format;
-
- /* The UDS can't perform format conversion, the sink format is
- * always identical to the source format.
- */
- if (code->index)
- return -EINVAL;
- format = vsp1_entity_get_pad_format(&uds->entity, cfg,
- UDS_PAD_SINK, code->which);
- code->code = format->code;
- }
-
- return 0;
+ return vsp1_subdev_enum_mbus_code(subdev, cfg, code, codes,
+ ARRAY_SIZE(codes));
}
static int uds_enum_frame_size(struct v4l2_subdev *subdev,
@@ -199,10 +129,15 @@ static int uds_enum_frame_size(struct v4l2_subdev *subdev,
struct v4l2_subdev_frame_size_enum *fse)
{
struct vsp1_uds *uds = to_uds(subdev);
+ struct v4l2_subdev_pad_config *config;
struct v4l2_mbus_framefmt *format;
- format = vsp1_entity_get_pad_format(&uds->entity, cfg,
- UDS_PAD_SINK, fse->which);
+ config = vsp1_entity_get_pad_config(&uds->entity, cfg, fse->which);
+ if (!config)
+ return -EINVAL;
+
+ format = vsp1_entity_get_pad_format(&uds->entity, config,
+ UDS_PAD_SINK);
if (fse->index || fse->code != format->code)
return -EINVAL;
@@ -222,20 +157,9 @@ static int uds_enum_frame_size(struct v4l2_subdev *subdev,
return 0;
}
-static int uds_get_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
- struct v4l2_subdev_format *fmt)
-{
- struct vsp1_uds *uds = to_uds(subdev);
-
- fmt->format = *vsp1_entity_get_pad_format(&uds->entity, cfg, fmt->pad,
- fmt->which);
-
- return 0;
-}
-
-static void uds_try_format(struct vsp1_uds *uds, struct v4l2_subdev_pad_config *cfg,
- unsigned int pad, struct v4l2_mbus_framefmt *fmt,
- enum v4l2_subdev_format_whence which)
+static void uds_try_format(struct vsp1_uds *uds,
+ struct v4l2_subdev_pad_config *config,
+ unsigned int pad, struct v4l2_mbus_framefmt *fmt)
{
struct v4l2_mbus_framefmt *format;
unsigned int minimum;
@@ -254,8 +178,8 @@ static void uds_try_format(struct vsp1_uds *uds, struct v4l2_subdev_pad_config *
case UDS_PAD_SOURCE:
/* The UDS scales but can't perform format conversion. */
- format = vsp1_entity_get_pad_format(&uds->entity, cfg,
- UDS_PAD_SINK, which);
+ format = vsp1_entity_get_pad_format(&uds->entity, config,
+ UDS_PAD_SINK);
fmt->code = format->code;
uds_output_limits(format->width, &minimum, &maximum);
@@ -269,25 +193,30 @@ static void uds_try_format(struct vsp1_uds *uds, struct v4l2_subdev_pad_config *
fmt->colorspace = V4L2_COLORSPACE_SRGB;
}
-static int uds_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_config *cfg,
+static int uds_set_format(struct v4l2_subdev *subdev,
+ struct v4l2_subdev_pad_config *cfg,
struct v4l2_subdev_format *fmt)
{
struct vsp1_uds *uds = to_uds(subdev);
+ struct v4l2_subdev_pad_config *config;
struct v4l2_mbus_framefmt *format;
- uds_try_format(uds, cfg, fmt->pad, &fmt->format, fmt->which);
+ config = vsp1_entity_get_pad_config(&uds->entity, cfg, fmt->which);
+ if (!config)
+ return -EINVAL;
+
+ uds_try_format(uds, config, fmt->pad, &fmt->format);
- format = vsp1_entity_get_pad_format(&uds->entity, cfg, fmt->pad,
- fmt->which);
+ format = vsp1_entity_get_pad_format(&uds->entity, config, fmt->pad);
*format = fmt->format;
if (fmt->pad == UDS_PAD_SINK) {
/* Propagate the format to the source pad. */
- format = vsp1_entity_get_pad_format(&uds->entity, cfg,
- UDS_PAD_SOURCE, fmt->which);
+ format = vsp1_entity_get_pad_format(&uds->entity, config,
+ UDS_PAD_SOURCE);
*format = fmt->format;
- uds_try_format(uds, cfg, UDS_PAD_SOURCE, format, fmt->which);
+ uds_try_format(uds, config, UDS_PAD_SOURCE, format);
}
return 0;
@@ -297,55 +226,97 @@ static int uds_set_format(struct v4l2_subdev *subdev, struct v4l2_subdev_pad_con
* V4L2 Subdevice Operations
*/
-static struct v4l2_subdev_video_ops uds_video_ops = {
- .s_stream = uds_s_stream,
-};
-
static struct v4l2_subdev_pad_ops uds_pad_ops = {
+ .init_cfg = vsp1_entity_init_cfg,
.enum_mbus_code = uds_enum_mbus_code,
.enum_frame_size = uds_enum_frame_size,
- .get_fmt = uds_get_format,
+ .get_fmt = vsp1_subdev_get_pad_format,
.set_fmt = uds_set_format,
};
static struct v4l2_subdev_ops uds_ops = {
- .video = &uds_video_ops,
.pad = &uds_pad_ops,
};
/* -----------------------------------------------------------------------------
+ * VSP1 Entity Operations
+ */
+
+static void uds_configure(struct vsp1_entity *entity,
+ struct vsp1_pipeline *pipe,
+ struct vsp1_dl_list *dl)
+{
+ struct vsp1_uds *uds = to_uds(&entity->subdev);
+ const struct v4l2_mbus_framefmt *output;
+ const struct v4l2_mbus_framefmt *input;
+ unsigned int hscale;
+ unsigned int vscale;
+ bool multitap;
+
+ input = vsp1_entity_get_pad_format(&uds->entity, uds->entity.config,
+ UDS_PAD_SINK);
+ output = vsp1_entity_get_pad_format(&uds->entity, uds->entity.config,
+ UDS_PAD_SOURCE);
+
+ hscale = uds_compute_ratio(input->width, output->width);
+ vscale = uds_compute_ratio(input->height, output->height);
+
+ dev_dbg(uds->entity.vsp1->dev, "hscale %u vscale %u\n", hscale, vscale);
+
+ /* Multi-tap scaling can't be enabled along with alpha scaling when
+ * scaling down with a factor lower than or equal to 1/2 in either
+ * direction.
+ */
+ if (uds->scale_alpha && (hscale >= 8192 || vscale >= 8192))
+ multitap = false;
+ else
+ multitap = true;
+
+ vsp1_uds_write(uds, dl, VI6_UDS_CTRL,
+ (uds->scale_alpha ? VI6_UDS_CTRL_AON : 0) |
+ (multitap ? VI6_UDS_CTRL_BC : 0));
+
+ vsp1_uds_write(uds, dl, VI6_UDS_PASS_BWIDTH,
+ (uds_passband_width(hscale)
+ << VI6_UDS_PASS_BWIDTH_H_SHIFT) |
+ (uds_passband_width(vscale)
+ << VI6_UDS_PASS_BWIDTH_V_SHIFT));
+
+ /* Set the scaling ratios and the output size. */
+ vsp1_uds_write(uds, dl, VI6_UDS_SCALE,
+ (hscale << VI6_UDS_SCALE_HFRAC_SHIFT) |
+ (vscale << VI6_UDS_SCALE_VFRAC_SHIFT));
+ vsp1_uds_write(uds, dl, VI6_UDS_CLIP_SIZE,
+ (output->width << VI6_UDS_CLIP_SIZE_HSIZE_SHIFT) |
+ (output->height << VI6_UDS_CLIP_SIZE_VSIZE_SHIFT));
+}
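The multi-tap decision above relies on the UDS scale encoding: the values written to VI6_UDS_SCALE are 4.12 fixed-point ratios where 4096 means 1:1, so 8192 or more means downscaling by a factor of 2 or more, which matches the "lower than or equal to 1/2" wording in the comment. A minimal sketch of that encoding, assuming uds_compute_ratio() rounds roughly as shown here (the exact rounding is not visible in this hunk):

	/* Illustrative only: 4.12 fixed point, 4096 == 1:1. */
	static unsigned int example_uds_ratio(unsigned int input, unsigned int output)
	{
		return input * 4096 / output;	/* >= 8192: scaling down by 2 or more */
	}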
+
+static const struct vsp1_entity_operations uds_entity_ops = {
+ .configure = uds_configure,
+};
+
+/* -----------------------------------------------------------------------------
* Initialization and Cleanup
*/
struct vsp1_uds *vsp1_uds_create(struct vsp1_device *vsp1, unsigned int index)
{
- struct v4l2_subdev *subdev;
struct vsp1_uds *uds;
+ char name[6];
int ret;
uds = devm_kzalloc(vsp1->dev, sizeof(*uds), GFP_KERNEL);
if (uds == NULL)
return ERR_PTR(-ENOMEM);
+ uds->entity.ops = &uds_entity_ops;
uds->entity.type = VSP1_ENTITY_UDS;
uds->entity.index = index;
- ret = vsp1_entity_init(vsp1, &uds->entity, 2);
+ sprintf(name, "uds.%u", index);
+ ret = vsp1_entity_init(vsp1, &uds->entity, name, 2, &uds_ops);
if (ret < 0)
return ERR_PTR(ret);
- /* Initialize the V4L2 subdev. */
- subdev = &uds->entity.subdev;
- v4l2_subdev_init(subdev, &uds_ops);
-
- subdev->entity.ops = &vsp1->media_ops;
- subdev->internal_ops = &vsp1_subdev_internal_ops;
- snprintf(subdev->name, sizeof(subdev->name), "%s uds.%u",
- dev_name(vsp1->dev), index);
- v4l2_set_subdevdata(subdev, uds);
- subdev->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
-
- vsp1_entity_init_formats(subdev, NULL);
-
return uds;
}
diff --git a/drivers/media/platform/vsp1/vsp1_uds.h b/drivers/media/platform/vsp1/vsp1_uds.h
index 031ac0da1b66..5c8cbfcad4cc 100644
--- a/drivers/media/platform/vsp1/vsp1_uds.h
+++ b/drivers/media/platform/vsp1/vsp1_uds.h
@@ -35,6 +35,7 @@ static inline struct vsp1_uds *to_uds(struct v4l2_subdev *subdev)
struct vsp1_uds *vsp1_uds_create(struct vsp1_device *vsp1, unsigned int index);
-void vsp1_uds_set_alpha(struct vsp1_uds *uds, unsigned int alpha);
+void vsp1_uds_set_alpha(struct vsp1_uds *uds, struct vsp1_dl_list *dl,
+ unsigned int alpha);
#endif /* __VSP1_UDS_H__ */
diff --git a/drivers/media/platform/vsp1/vsp1_video.c b/drivers/media/platform/vsp1/vsp1_video.c
index 72cc7d3729f8..a9aec5c0bec6 100644
--- a/drivers/media/platform/vsp1/vsp1_video.c
+++ b/drivers/media/platform/vsp1/vsp1_video.c
@@ -29,6 +29,7 @@
#include "vsp1.h"
#include "vsp1_bru.h"
+#include "vsp1_dl.h"
#include "vsp1_entity.h"
#include "vsp1_pipe.h"
#include "vsp1_rwpf.h"
@@ -171,53 +172,178 @@ static int __vsp1_video_try_format(struct vsp1_video *video,
* Pipeline Management
*/
-static int vsp1_video_pipeline_validate_branch(struct vsp1_pipeline *pipe,
- struct vsp1_rwpf *input,
- struct vsp1_rwpf *output)
+/*
+ * vsp1_video_complete_buffer - Complete the current buffer
+ * @video: the video node
+ *
+ * This function completes the current buffer by filling its sequence number,
+ * time stamp and payload size, and hands it back to the videobuf core.
+ *
+ * When operating in DU output mode (deep pipeline to the DU through the LIF),
+ * the VSP1 needs to constantly supply frames to the display. In that case, if
+ * no other buffer is queued, reuse the one that has just been processed instead
+ * of handing it back to the videobuf core.
+ *
+ * Return the next queued buffer or NULL if the queue is empty.
+ */
+static struct vsp1_vb2_buffer *
+vsp1_video_complete_buffer(struct vsp1_video *video)
+{
+ struct vsp1_pipeline *pipe = video->rwpf->pipe;
+ struct vsp1_vb2_buffer *next = NULL;
+ struct vsp1_vb2_buffer *done;
+ unsigned long flags;
+ unsigned int i;
+
+ spin_lock_irqsave(&video->irqlock, flags);
+
+ if (list_empty(&video->irqqueue)) {
+ spin_unlock_irqrestore(&video->irqlock, flags);
+ return NULL;
+ }
+
+ done = list_first_entry(&video->irqqueue,
+ struct vsp1_vb2_buffer, queue);
+
+ /* In DU output mode reuse the buffer if the list is singular. */
+ if (pipe->lif && list_is_singular(&video->irqqueue)) {
+ spin_unlock_irqrestore(&video->irqlock, flags);
+ return done;
+ }
+
+ list_del(&done->queue);
+
+ if (!list_empty(&video->irqqueue))
+ next = list_first_entry(&video->irqqueue,
+ struct vsp1_vb2_buffer, queue);
+
+ spin_unlock_irqrestore(&video->irqlock, flags);
+
+ done->buf.sequence = video->sequence++;
+ done->buf.vb2_buf.timestamp = ktime_get_ns();
+ for (i = 0; i < done->buf.vb2_buf.num_planes; ++i)
+ vb2_set_plane_payload(&done->buf.vb2_buf, i,
+ vb2_plane_size(&done->buf.vb2_buf, i));
+ vb2_buffer_done(&done->buf.vb2_buf, VB2_BUF_STATE_DONE);
+
+ return next;
+}
+
+static void vsp1_video_frame_end(struct vsp1_pipeline *pipe,
+ struct vsp1_rwpf *rwpf)
+{
+ struct vsp1_video *video = rwpf->video;
+ struct vsp1_vb2_buffer *buf;
+ unsigned long flags;
+
+ buf = vsp1_video_complete_buffer(video);
+ if (buf == NULL)
+ return;
+
+ spin_lock_irqsave(&pipe->irqlock, flags);
+
+ video->rwpf->mem = buf->mem;
+ pipe->buffers_ready |= 1 << video->pipe_index;
+
+ spin_unlock_irqrestore(&pipe->irqlock, flags);
+}
+
+static void vsp1_video_pipeline_run(struct vsp1_pipeline *pipe)
+{
+ struct vsp1_device *vsp1 = pipe->output->entity.vsp1;
+ unsigned int i;
+
+ if (!pipe->dl)
+ pipe->dl = vsp1_dl_list_get(pipe->output->dlm);
+
+ for (i = 0; i < vsp1->info->rpf_count; ++i) {
+ struct vsp1_rwpf *rwpf = pipe->inputs[i];
+
+ if (rwpf)
+ vsp1_rwpf_set_memory(rwpf, pipe->dl);
+ }
+
+ if (!pipe->lif)
+ vsp1_rwpf_set_memory(pipe->output, pipe->dl);
+
+ vsp1_dl_list_commit(pipe->dl);
+ pipe->dl = NULL;
+
+ vsp1_pipeline_run(pipe);
+}
+
+static void vsp1_video_pipeline_frame_end(struct vsp1_pipeline *pipe)
+{
+ struct vsp1_device *vsp1 = pipe->output->entity.vsp1;
+ enum vsp1_pipeline_state state;
+ unsigned long flags;
+ unsigned int i;
+
+ /* Complete buffers on all video nodes. */
+ for (i = 0; i < vsp1->info->rpf_count; ++i) {
+ if (!pipe->inputs[i])
+ continue;
+
+ vsp1_video_frame_end(pipe, pipe->inputs[i]);
+ }
+
+ vsp1_video_frame_end(pipe, pipe->output);
+
+ spin_lock_irqsave(&pipe->irqlock, flags);
+
+ state = pipe->state;
+ pipe->state = VSP1_PIPELINE_STOPPED;
+
+ /* If a stop has been requested, mark the pipeline as stopped and
+ * return. Otherwise restart the pipeline if ready.
+ */
+ if (state == VSP1_PIPELINE_STOPPING)
+ wake_up(&pipe->wq);
+ else if (vsp1_pipeline_ready(pipe))
+ vsp1_video_pipeline_run(pipe);
+
+ spin_unlock_irqrestore(&pipe->irqlock, flags);
+}
+
+static int vsp1_video_pipeline_build_branch(struct vsp1_pipeline *pipe,
+ struct vsp1_rwpf *input,
+ struct vsp1_rwpf *output)
{
- struct vsp1_entity *entity;
struct media_entity_enum ent_enum;
+ struct vsp1_entity *entity;
struct media_pad *pad;
- int rval;
bool bru_found = false;
+ int ret;
- input->location.left = 0;
- input->location.top = 0;
-
- rval = media_entity_enum_init(
- &ent_enum, input->entity.pads[RWPF_PAD_SOURCE].graph_obj.mdev);
- if (rval)
- return rval;
+ ret = media_entity_enum_init(&ent_enum, &input->entity.vsp1->media_dev);
+ if (ret < 0)
+ return ret;
pad = media_entity_remote_pad(&input->entity.pads[RWPF_PAD_SOURCE]);
while (1) {
if (pad == NULL) {
- rval = -EPIPE;
+ ret = -EPIPE;
goto out;
}
/* We've reached a video node, that shouldn't have happened. */
if (!is_media_entity_v4l2_subdev(pad->entity)) {
- rval = -EPIPE;
+ ret = -EPIPE;
goto out;
}
entity = to_vsp1_entity(
media_entity_to_v4l2_subdev(pad->entity));
- /* A BRU is present in the pipeline, store the compose rectangle
- * location in the input RPF for use when configuring the RPF.
+ /* A BRU is present in the pipeline, store the BRU input pad
+ * number in the input RPF for use when configuring the RPF.
*/
if (entity->type == VSP1_ENTITY_BRU) {
struct vsp1_bru *bru = to_bru(&entity->subdev);
- struct v4l2_rect *rect =
- &bru->inputs[pad->index].compose;
bru->inputs[pad->index].rpf = input;
-
- input->location.left = rect->left;
- input->location.top = rect->top;
+ input->bru_input = pad->index;
bru_found = true;
}
@@ -229,14 +355,14 @@ static int vsp1_video_pipeline_validate_branch(struct vsp1_pipeline *pipe,
/* Ensure the branch has no loop. */
if (media_entity_enum_test_and_set(&ent_enum,
&entity->subdev.entity)) {
- rval = -EPIPE;
+ ret = -EPIPE;
goto out;
}
/* UDS can't be chained. */
if (entity->type == VSP1_ENTITY_UDS) {
if (pipe->uds) {
- rval = -EPIPE;
+ ret = -EPIPE;
goto out;
}
@@ -256,16 +382,16 @@ static int vsp1_video_pipeline_validate_branch(struct vsp1_pipeline *pipe,
/* The last entity must be the output WPF. */
if (entity != &output->entity)
- rval = -EPIPE;
+ ret = -EPIPE;
out:
media_entity_enum_cleanup(&ent_enum);
- return rval;
+ return ret;
}
-static int vsp1_video_pipeline_validate(struct vsp1_pipeline *pipe,
- struct vsp1_video *video)
+static int vsp1_video_pipeline_build(struct vsp1_pipeline *pipe,
+ struct vsp1_video *video)
{
struct media_entity_graph graph;
struct media_entity *entity = &video->video.entity;
@@ -273,14 +399,10 @@ static int vsp1_video_pipeline_validate(struct vsp1_pipeline *pipe,
unsigned int i;
int ret;
- mutex_lock(&mdev->graph_mutex);
-
/* Walk the graph to locate the entities and video nodes. */
ret = media_entity_graph_walk_init(&graph, mdev);
- if (ret) {
- mutex_unlock(&mdev->graph_mutex);
+ if (ret)
return ret;
- }
media_entity_graph_walk_start(&graph, entity);
@@ -300,10 +422,12 @@ static int vsp1_video_pipeline_validate(struct vsp1_pipeline *pipe,
rwpf = to_rwpf(subdev);
pipe->inputs[rwpf->entity.index] = rwpf;
rwpf->video->pipe_index = ++pipe->num_inputs;
+ rwpf->pipe = pipe;
} else if (e->type == VSP1_ENTITY_WPF) {
rwpf = to_rwpf(subdev);
pipe->output = rwpf;
rwpf->video->pipe_index = 0;
+ rwpf->pipe = pipe;
} else if (e->type == VSP1_ENTITY_LIF) {
pipe->lif = e;
} else if (e->type == VSP1_ENTITY_BRU) {
@@ -311,15 +435,11 @@ static int vsp1_video_pipeline_validate(struct vsp1_pipeline *pipe,
}
}
- mutex_unlock(&mdev->graph_mutex);
-
media_entity_graph_walk_cleanup(&graph);
/* We need one output and at least one input. */
- if (pipe->num_inputs == 0 || !pipe->output) {
- ret = -EPIPE;
- goto error;
- }
+ if (pipe->num_inputs == 0 || !pipe->output)
+ return -EPIPE;
/* Follow links downstream for each input and make sure the graph
* contains no loop and that all branches end at the output WPF.
@@ -328,143 +448,69 @@ static int vsp1_video_pipeline_validate(struct vsp1_pipeline *pipe,
if (!pipe->inputs[i])
continue;
- ret = vsp1_video_pipeline_validate_branch(pipe, pipe->inputs[i],
- pipe->output);
+ ret = vsp1_video_pipeline_build_branch(pipe, pipe->inputs[i],
+ pipe->output);
if (ret < 0)
- goto error;
+ return ret;
}
return 0;
-
-error:
- vsp1_pipeline_reset(pipe);
- return ret;
}
static int vsp1_video_pipeline_init(struct vsp1_pipeline *pipe,
struct vsp1_video *video)
{
- int ret;
+ vsp1_pipeline_init(pipe);
- mutex_lock(&pipe->lock);
-
- /* If we're the first user validate and initialize the pipeline. */
- if (pipe->use_count == 0) {
- ret = vsp1_video_pipeline_validate(pipe, video);
- if (ret < 0)
- goto done;
- }
+ pipe->frame_end = vsp1_video_pipeline_frame_end;
- pipe->use_count++;
- ret = 0;
-
-done:
- mutex_unlock(&pipe->lock);
- return ret;
+ return vsp1_video_pipeline_build(pipe, video);
}
-static void vsp1_video_pipeline_cleanup(struct vsp1_pipeline *pipe)
-{
- mutex_lock(&pipe->lock);
-
- /* If we're the last user clean up the pipeline. */
- if (--pipe->use_count == 0)
- vsp1_pipeline_reset(pipe);
-
- mutex_unlock(&pipe->lock);
-}
-
-/*
- * vsp1_video_complete_buffer - Complete the current buffer
- * @video: the video node
- *
- * This function completes the current buffer by filling its sequence number,
- * time stamp and payload size, and hands it back to the videobuf core.
- *
- * When operating in DU output mode (deep pipeline to the DU through the LIF),
- * the VSP1 needs to constantly supply frames to the display. In that case, if
- * no other buffer is queued, reuse the one that has just been processed instead
- * of handing it back to the videobuf core.
- *
- * Return the next queued buffer or NULL if the queue is empty.
- */
-static struct vsp1_vb2_buffer *
-vsp1_video_complete_buffer(struct vsp1_video *video)
+static struct vsp1_pipeline *vsp1_video_pipeline_get(struct vsp1_video *video)
{
- struct vsp1_pipeline *pipe = to_vsp1_pipeline(&video->video.entity);
- struct vsp1_vb2_buffer *next = NULL;
- struct vsp1_vb2_buffer *done;
- unsigned long flags;
- unsigned int i;
-
- spin_lock_irqsave(&video->irqlock, flags);
-
- if (list_empty(&video->irqqueue)) {
- spin_unlock_irqrestore(&video->irqlock, flags);
- return NULL;
- }
-
- done = list_first_entry(&video->irqqueue,
- struct vsp1_vb2_buffer, queue);
+ struct vsp1_pipeline *pipe;
+ int ret;
- /* In DU output mode reuse the buffer if the list is singular. */
- if (pipe->lif && list_is_singular(&video->irqqueue)) {
- spin_unlock_irqrestore(&video->irqlock, flags);
- return done;
+ /* Get a pipeline object for the video node. If a pipeline has already
+ * been allocated just increment its reference count and return it.
+ * Otherwise allocate a new pipeline and initialize it; it will be freed

+ * when the last reference is released.
+ */
+ if (!video->rwpf->pipe) {
+ pipe = kzalloc(sizeof(*pipe), GFP_KERNEL);
+ if (!pipe)
+ return ERR_PTR(-ENOMEM);
+
+ ret = vsp1_video_pipeline_init(pipe, video);
+ if (ret < 0) {
+ vsp1_pipeline_reset(pipe);
+ kfree(pipe);
+ return ERR_PTR(ret);
+ }
+ } else {
+ pipe = video->rwpf->pipe;
+ kref_get(&pipe->kref);
}
- list_del(&done->queue);
-
- if (!list_empty(&video->irqqueue))
- next = list_first_entry(&video->irqqueue,
- struct vsp1_vb2_buffer, queue);
-
- spin_unlock_irqrestore(&video->irqlock, flags);
-
- done->buf.sequence = video->sequence++;
- done->buf.vb2_buf.timestamp = ktime_get_ns();
- for (i = 0; i < done->buf.vb2_buf.num_planes; ++i)
- vb2_set_plane_payload(&done->buf.vb2_buf, i,
- done->mem.length[i]);
- vb2_buffer_done(&done->buf.vb2_buf, VB2_BUF_STATE_DONE);
-
- return next;
+ return pipe;
}
-static void vsp1_video_frame_end(struct vsp1_pipeline *pipe,
- struct vsp1_rwpf *rwpf)
+static void vsp1_video_pipeline_release(struct kref *kref)
{
- struct vsp1_video *video = rwpf->video;
- struct vsp1_vb2_buffer *buf;
- unsigned long flags;
+ struct vsp1_pipeline *pipe = container_of(kref, typeof(*pipe), kref);
- buf = vsp1_video_complete_buffer(video);
- if (buf == NULL)
- return;
-
- spin_lock_irqsave(&pipe->irqlock, flags);
-
- video->rwpf->ops->set_memory(video->rwpf, &buf->mem);
- pipe->buffers_ready |= 1 << video->pipe_index;
-
- spin_unlock_irqrestore(&pipe->irqlock, flags);
+ vsp1_pipeline_reset(pipe);
+ kfree(pipe);
}
-static void vsp1_video_pipeline_frame_end(struct vsp1_pipeline *pipe)
+static void vsp1_video_pipeline_put(struct vsp1_pipeline *pipe)
{
- struct vsp1_device *vsp1 = pipe->output->entity.vsp1;
- unsigned int i;
-
- /* Complete buffers on all video nodes. */
- for (i = 0; i < vsp1->info->rpf_count; ++i) {
- if (!pipe->inputs[i])
- continue;
+ struct media_device *mdev = &pipe->output->entity.vsp1->media_dev;
- vsp1_video_frame_end(pipe, pipe->inputs[i]);
- }
-
- if (!pipe->lif)
- vsp1_video_frame_end(pipe, pipe->output);
+ mutex_lock(&mdev->graph_mutex);
+ kref_put(&pipe->kref, vsp1_video_pipeline_release);
+ mutex_unlock(&mdev->graph_mutex);
}
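The pipeline object is now reference counted: vsp1_video_pipeline_get() either allocates and builds it or takes another kref, and vsp1_video_pipeline_put() drops the reference under graph_mutex, with vsp1_video_pipeline_release() resetting and freeing it on the last put. A self-contained sketch of the same kref pattern, with hypothetical names and assuming kref_init() is done at allocation time (in the driver that presumably happens in the init path not shown in this hunk):

	#include <linux/kref.h>
	#include <linux/slab.h>

	struct example_pipe {
		struct kref kref;
		/* ... pipeline state ... */
	};

	static void example_pipe_release(struct kref *kref)
	{
		kfree(container_of(kref, struct example_pipe, kref));
	}

	static struct example_pipe *example_pipe_get(struct example_pipe *existing)
	{
		struct example_pipe *pipe = existing;

		if (!pipe) {
			/* First user: allocate and take the initial reference. */
			pipe = kzalloc(sizeof(*pipe), GFP_KERNEL);
			if (pipe)
				kref_init(&pipe->kref);
		} else {
			kref_get(&pipe->kref);
		}
		return pipe;
	}

	static void example_pipe_put(struct example_pipe *pipe)
	{
		kref_put(&pipe->kref, example_pipe_release);
	}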
/* -----------------------------------------------------------------------------
@@ -513,16 +559,16 @@ static int vsp1_video_buffer_prepare(struct vb2_buffer *vb)
if (vb->num_planes < format->num_planes)
return -EINVAL;
- buf->mem.num_planes = vb->num_planes;
-
for (i = 0; i < vb->num_planes; ++i) {
buf->mem.addr[i] = vb2_dma_contig_plane_dma_addr(vb, i);
- buf->mem.length[i] = vb2_plane_size(vb, i);
- if (buf->mem.length[i] < format->plane_fmt[i].sizeimage)
+ if (vb2_plane_size(vb, i) < format->plane_fmt[i].sizeimage)
return -EINVAL;
}
+ for ( ; i < 3; ++i)
+ buf->mem.addr[i] = 0;
+
return 0;
}
@@ -530,7 +576,7 @@ static void vsp1_video_buffer_queue(struct vb2_buffer *vb)
{
struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb);
struct vsp1_video *video = vb2_get_drv_priv(vb->vb2_queue);
- struct vsp1_pipeline *pipe = to_vsp1_pipeline(&video->video.entity);
+ struct vsp1_pipeline *pipe = video->rwpf->pipe;
struct vsp1_vb2_buffer *buf = to_vsp1_vb2_buffer(vbuf);
unsigned long flags;
bool empty;
@@ -545,54 +591,66 @@ static void vsp1_video_buffer_queue(struct vb2_buffer *vb)
spin_lock_irqsave(&pipe->irqlock, flags);
- video->rwpf->ops->set_memory(video->rwpf, &buf->mem);
+ video->rwpf->mem = buf->mem;
pipe->buffers_ready |= 1 << video->pipe_index;
if (vb2_is_streaming(&video->queue) &&
vsp1_pipeline_ready(pipe))
- vsp1_pipeline_run(pipe);
+ vsp1_video_pipeline_run(pipe);
spin_unlock_irqrestore(&pipe->irqlock, flags);
}
+static int vsp1_video_setup_pipeline(struct vsp1_pipeline *pipe)
+{
+ struct vsp1_entity *entity;
+
+ /* Prepare the display list. */
+ pipe->dl = vsp1_dl_list_get(pipe->output->dlm);
+ if (!pipe->dl)
+ return -ENOMEM;
+
+ if (pipe->uds) {
+ struct vsp1_uds *uds = to_uds(&pipe->uds->subdev);
+
+ /* If a BRU is present in the pipeline before the UDS, the alpha
+ * component doesn't need to be scaled as the BRU output alpha
+ * value is fixed to 255. Otherwise we need to scale the alpha
+ * component only when available at the input RPF.
+ */
+ if (pipe->uds_input->type == VSP1_ENTITY_BRU) {
+ uds->scale_alpha = false;
+ } else {
+ struct vsp1_rwpf *rpf =
+ to_rwpf(&pipe->uds_input->subdev);
+
+ uds->scale_alpha = rpf->fmtinfo->alpha;
+ }
+ }
+
+ list_for_each_entry(entity, &pipe->entities, list_pipe) {
+ vsp1_entity_route_setup(entity, pipe->dl);
+
+ if (entity->ops->configure)
+ entity->ops->configure(entity, pipe, pipe->dl);
+ }
+
+ return 0;
+}
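With the configure() entity operation, register programming moves out of s_stream and into display lists: vsp1_video_setup_pipeline() gets a list from the output WPF's display list manager, every entity queues its writes into it, and vsp1_video_pipeline_run() commits the list before starting the pipeline (vsp1_dl_list_put() is only needed on the stop path). A condensed sketch of that lifecycle using the calls visible in this patch (error handling trimmed, register value illustrative):

	/* Sketch, not a complete driver path. */
	static int example_frame(struct vsp1_pipeline *pipe)
	{
		struct vsp1_dl_list *dl = vsp1_dl_list_get(pipe->output->dlm);

		if (!dl)
			return -ENOMEM;

		/* Entities queue writes instead of touching the hardware directly. */
		vsp1_dl_list_write(dl, VI6_WPF_WRBCK_CTRL, 0);

		vsp1_dl_list_commit(dl);	/* applied by the hardware on the next frame */
		return 0;
	}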
+
static int vsp1_video_start_streaming(struct vb2_queue *vq, unsigned int count)
{
struct vsp1_video *video = vb2_get_drv_priv(vq);
- struct vsp1_pipeline *pipe = to_vsp1_pipeline(&video->video.entity);
- struct vsp1_entity *entity;
+ struct vsp1_pipeline *pipe = video->rwpf->pipe;
unsigned long flags;
int ret;
mutex_lock(&pipe->lock);
if (pipe->stream_count == pipe->num_inputs) {
- if (pipe->uds) {
- struct vsp1_uds *uds = to_uds(&pipe->uds->subdev);
-
- /* If a BRU is present in the pipeline before the UDS,
- * the alpha component doesn't need to be scaled as the
- * BRU output alpha value is fixed to 255. Otherwise we
- * need to scale the alpha component only when available
- * at the input RPF.
- */
- if (pipe->uds_input->type == VSP1_ENTITY_BRU) {
- uds->scale_alpha = false;
- } else {
- struct vsp1_rwpf *rpf =
- to_rwpf(&pipe->uds_input->subdev);
-
- uds->scale_alpha = rpf->fmtinfo->alpha;
- }
- }
-
- list_for_each_entry(entity, &pipe->entities, list_pipe) {
- vsp1_entity_route_setup(entity);
-
- ret = v4l2_subdev_call(&entity->subdev, video,
- s_stream, 1);
- if (ret < 0) {
- mutex_unlock(&pipe->lock);
- return ret;
- }
+ ret = vsp1_video_setup_pipeline(pipe);
+ if (ret < 0) {
+ mutex_unlock(&pipe->lock);
+ return ret;
}
}
@@ -601,7 +659,7 @@ static int vsp1_video_start_streaming(struct vb2_queue *vq, unsigned int count)
spin_lock_irqsave(&pipe->irqlock, flags);
if (vsp1_pipeline_ready(pipe))
- vsp1_pipeline_run(pipe);
+ vsp1_video_pipeline_run(pipe);
spin_unlock_irqrestore(&pipe->irqlock, flags);
return 0;
@@ -610,7 +668,7 @@ static int vsp1_video_start_streaming(struct vb2_queue *vq, unsigned int count)
static void vsp1_video_stop_streaming(struct vb2_queue *vq)
{
struct vsp1_video *video = vb2_get_drv_priv(vq);
- struct vsp1_pipeline *pipe = to_vsp1_pipeline(&video->video.entity);
+ struct vsp1_pipeline *pipe = video->rwpf->pipe;
struct vsp1_vb2_buffer *buffer;
unsigned long flags;
int ret;
@@ -621,11 +679,14 @@ static void vsp1_video_stop_streaming(struct vb2_queue *vq)
ret = vsp1_pipeline_stop(pipe);
if (ret == -ETIMEDOUT)
dev_err(video->vsp1->dev, "pipeline stop timeout\n");
+
+ vsp1_dl_list_put(pipe->dl);
+ pipe->dl = NULL;
}
mutex_unlock(&pipe->lock);
- vsp1_video_pipeline_cleanup(pipe);
media_entity_pipeline_stop(&video->video.entity);
+ vsp1_video_pipeline_put(pipe);
/* Remove all buffers from the IRQ queue. */
spin_lock_irqsave(&video->irqlock, flags);
@@ -737,6 +798,7 @@ vsp1_video_streamon(struct file *file, void *fh, enum v4l2_buf_type type)
{
struct v4l2_fh *vfh = file->private_data;
struct vsp1_video *video = to_vsp1_video(vfh->vdev);
+ struct media_device *mdev = &video->vsp1->media_dev;
struct vsp1_pipeline *pipe;
int ret;
@@ -745,18 +807,25 @@ vsp1_video_streamon(struct file *file, void *fh, enum v4l2_buf_type type)
video->sequence = 0;
- /* Start streaming on the pipeline. No link touching an entity in the
- * pipeline can be activated or deactivated once streaming is started.
- *
- * Use the VSP1 pipeline object embedded in the first video object that
- * starts streaming.
+ /* Get a pipeline for the video node and start streaming on it. No link
+ * touching an entity in the pipeline can be activated or deactivated
+ * once streaming is started.
*/
- pipe = video->video.entity.pipe
- ? to_vsp1_pipeline(&video->video.entity) : &video->pipe;
+ mutex_lock(&mdev->graph_mutex);
- ret = media_entity_pipeline_start(&video->video.entity, &pipe->pipe);
- if (ret < 0)
- return ret;
+ pipe = vsp1_video_pipeline_get(video);
+ if (IS_ERR(pipe)) {
+ mutex_unlock(&mdev->graph_mutex);
+ return PTR_ERR(pipe);
+ }
+
+ ret = __media_entity_pipeline_start(&video->video.entity, &pipe->pipe);
+ if (ret < 0) {
+ mutex_unlock(&mdev->graph_mutex);
+ goto err_pipe;
+ }
+
+ mutex_unlock(&mdev->graph_mutex);
/* Verify that the configured format matches the output of the connected
* subdev.
@@ -765,21 +834,17 @@ vsp1_video_streamon(struct file *file, void *fh, enum v4l2_buf_type type)
if (ret < 0)
goto err_stop;
- ret = vsp1_video_pipeline_init(pipe, video);
- if (ret < 0)
- goto err_stop;
-
/* Start the queue. */
ret = vb2_streamon(&video->queue, type);
if (ret < 0)
- goto err_cleanup;
+ goto err_stop;
return 0;
-err_cleanup:
- vsp1_video_pipeline_cleanup(pipe);
err_stop:
media_entity_pipeline_stop(&video->video.entity);
+err_pipe:
+ vsp1_video_pipeline_put(pipe);
return ret;
}
@@ -895,26 +960,16 @@ struct vsp1_video *vsp1_video_create(struct vsp1_device *vsp1,
spin_lock_init(&video->irqlock);
INIT_LIST_HEAD(&video->irqqueue);
- vsp1_pipeline_init(&video->pipe);
- video->pipe.frame_end = vsp1_video_pipeline_frame_end;
-
/* Initialize the media entity... */
ret = media_entity_pads_init(&video->video.entity, 1, &video->pad);
if (ret < 0)
return ERR_PTR(ret);
/* ... and the format ... */
- rwpf->fmtinfo = vsp1_get_format_info(VSP1_VIDEO_DEF_FORMAT);
- rwpf->format.pixelformat = rwpf->fmtinfo->fourcc;
- rwpf->format.colorspace = V4L2_COLORSPACE_SRGB;
- rwpf->format.field = V4L2_FIELD_NONE;
+ rwpf->format.pixelformat = VSP1_VIDEO_DEF_FORMAT;
rwpf->format.width = VSP1_VIDEO_DEF_WIDTH;
rwpf->format.height = VSP1_VIDEO_DEF_HEIGHT;
- rwpf->format.num_planes = 1;
- rwpf->format.plane_fmt[0].bytesperline =
- rwpf->format.width * rwpf->fmtinfo->bpp[0] / 8;
- rwpf->format.plane_fmt[0].sizeimage =
- rwpf->format.plane_fmt[0].bytesperline * rwpf->format.height;
+ __vsp1_video_try_format(video, &rwpf->format, &rwpf->fmtinfo);
/* ... and the video node... */
video->video.v4l2_dev = &video->vsp1->v4l2_dev;
diff --git a/drivers/media/platform/vsp1/vsp1_video.h b/drivers/media/platform/vsp1/vsp1_video.h
index 64abd39ee1e7..867b00807c46 100644
--- a/drivers/media/platform/vsp1/vsp1_video.h
+++ b/drivers/media/platform/vsp1/vsp1_video.h
@@ -18,7 +18,6 @@
#include <media/videobuf2-v4l2.h>
-#include "vsp1_pipe.h"
#include "vsp1_rwpf.h"
struct vsp1_vb2_buffer {
@@ -44,7 +43,6 @@ struct vsp1_video {
struct mutex lock;
- struct vsp1_pipeline pipe;
unsigned int pipe_index;
struct vb2_queue queue;
diff --git a/drivers/media/platform/vsp1/vsp1_wpf.c b/drivers/media/platform/vsp1/vsp1_wpf.c
index c78d4af50fcf..6c91eaa35e75 100644
--- a/drivers/media/platform/vsp1/vsp1_wpf.c
+++ b/drivers/media/platform/vsp1/vsp1_wpf.c
@@ -16,124 +16,114 @@
#include <media/v4l2-subdev.h>
#include "vsp1.h"
+#include "vsp1_dl.h"
+#include "vsp1_pipe.h"
#include "vsp1_rwpf.h"
#include "vsp1_video.h"
-#define WPF_MAX_WIDTH 2048
-#define WPF_MAX_HEIGHT 2048
+#define WPF_GEN2_MAX_WIDTH 2048U
+#define WPF_GEN2_MAX_HEIGHT 2048U
+#define WPF_GEN3_MAX_WIDTH 8190U
+#define WPF_GEN3_MAX_HEIGHT 8190U
/* -----------------------------------------------------------------------------
* Device Access
*/
-static inline u32 vsp1_wpf_read(struct vsp1_rwpf *wpf, u32 reg)
+static inline void vsp1_wpf_write(struct vsp1_rwpf *wpf,
+ struct vsp1_dl_list *dl, u32 reg, u32 data)
{
- return vsp1_read(wpf->entity.vsp1,
- reg + wpf->entity.index * VI6_WPF_OFFSET);
-}
-
-static inline void vsp1_wpf_write(struct vsp1_rwpf *wpf, u32 reg, u32 data)
-{
- vsp1_mod_write(&wpf->entity,
- reg + wpf->entity.index * VI6_WPF_OFFSET, data);
+ vsp1_dl_list_write(dl, reg + wpf->entity.index * VI6_WPF_OFFSET, data);
}
/* -----------------------------------------------------------------------------
- * Controls
+ * V4L2 Subdevice Core Operations
*/
-static int wpf_s_ctrl(struct v4l2_ctrl *ctrl)
+static int wpf_s_stream(struct v4l2_subdev *subdev, int enable)
{
- struct vsp1_rwpf *wpf =
- container_of(ctrl->handler, struct vsp1_rwpf, ctrls);
- u32 value;
+ struct vsp1_rwpf *wpf = to_rwpf(subdev);
+ struct vsp1_device *vsp1 = wpf->entity.vsp1;
- if (!vsp1_entity_is_streaming(&wpf->entity))
+ if (enable)
return 0;
- switch (ctrl->id) {
- case V4L2_CID_ALPHA_COMPONENT:
- value = vsp1_wpf_read(wpf, VI6_WPF_OUTFMT);
- value &= ~VI6_WPF_OUTFMT_PDV_MASK;
- value |= ctrl->val << VI6_WPF_OUTFMT_PDV_SHIFT;
- vsp1_wpf_write(wpf, VI6_WPF_OUTFMT, value);
- break;
- }
+ /* Write to registers directly when stopping the stream as there will be
+ * no pipeline run to apply the display list.
+ */
+ vsp1_write(vsp1, VI6_WPF_IRQ_ENB(wpf->entity.index), 0);
+ vsp1_write(vsp1, wpf->entity.index * VI6_WPF_OFFSET +
+ VI6_WPF_SRCRPF, 0);
return 0;
}
-static const struct v4l2_ctrl_ops wpf_ctrl_ops = {
- .s_ctrl = wpf_s_ctrl,
-};
-
/* -----------------------------------------------------------------------------
- * V4L2 Subdevice Core Operations
+ * V4L2 Subdevice Operations
*/
-static int wpf_s_stream(struct v4l2_subdev *subdev, int enable)
-{
- struct vsp1_pipeline *pipe = to_vsp1_pipeline(&subdev->entity);
- struct vsp1_rwpf *wpf = to_rwpf(subdev);
- struct vsp1_device *vsp1 = wpf->entity.vsp1;
- const struct v4l2_rect *crop = &wpf->crop;
- unsigned int i;
- u32 srcrpf = 0;
- u32 outfmt = 0;
- int ret;
-
- ret = vsp1_entity_set_streaming(&wpf->entity, enable);
- if (ret < 0)
- return ret;
+static struct v4l2_subdev_video_ops wpf_video_ops = {
+ .s_stream = wpf_s_stream,
+};
- if (!enable) {
- vsp1_write(vsp1, VI6_WPF_IRQ_ENB(wpf->entity.index), 0);
- vsp1_write(vsp1, wpf->entity.index * VI6_WPF_OFFSET +
- VI6_WPF_SRCRPF, 0);
- return 0;
- }
+static struct v4l2_subdev_ops wpf_ops = {
+ .video = &wpf_video_ops,
+ .pad = &vsp1_rwpf_pad_ops,
+};
- /* Sources. If the pipeline has a single input and BRU is not used,
- * configure it as the master layer. Otherwise configure all
- * inputs as sub-layers and select the virtual RPF as the master
- * layer.
- */
- for (i = 0; i < vsp1->info->rpf_count; ++i) {
- struct vsp1_rwpf *input = pipe->inputs[i];
+/* -----------------------------------------------------------------------------
+ * VSP1 Entity Operations
+ */
- if (!input)
- continue;
+static void vsp1_wpf_destroy(struct vsp1_entity *entity)
+{
+ struct vsp1_rwpf *wpf = entity_to_rwpf(entity);
- srcrpf |= (!pipe->bru && pipe->num_inputs == 1)
- ? VI6_WPF_SRCRPF_RPF_ACT_MST(input->entity.index)
- : VI6_WPF_SRCRPF_RPF_ACT_SUB(input->entity.index);
- }
+ vsp1_dlm_destroy(wpf->dlm);
+}
- if (pipe->bru || pipe->num_inputs > 1)
- srcrpf |= VI6_WPF_SRCRPF_VIRACT_MST;
+static void wpf_set_memory(struct vsp1_entity *entity, struct vsp1_dl_list *dl)
+{
+ struct vsp1_rwpf *wpf = entity_to_rwpf(entity);
- vsp1_wpf_write(wpf, VI6_WPF_SRCRPF, srcrpf);
+ vsp1_wpf_write(wpf, dl, VI6_WPF_DSTM_ADDR_Y, wpf->mem.addr[0]);
+ vsp1_wpf_write(wpf, dl, VI6_WPF_DSTM_ADDR_C0, wpf->mem.addr[1]);
+ vsp1_wpf_write(wpf, dl, VI6_WPF_DSTM_ADDR_C1, wpf->mem.addr[2]);
+}
- /* Destination stride. */
- if (!pipe->lif) {
- struct v4l2_pix_format_mplane *format = &wpf->format;
+static void wpf_configure(struct vsp1_entity *entity,
+ struct vsp1_pipeline *pipe,
+ struct vsp1_dl_list *dl)
+{
+ struct vsp1_rwpf *wpf = to_rwpf(&entity->subdev);
+ struct vsp1_device *vsp1 = wpf->entity.vsp1;
+ const struct v4l2_mbus_framefmt *source_format;
+ const struct v4l2_mbus_framefmt *sink_format;
+ const struct v4l2_rect *crop;
+ unsigned int i;
+ u32 outfmt = 0;
+ u32 srcrpf = 0;
- vsp1_wpf_write(wpf, VI6_WPF_DSTM_STRIDE_Y,
- format->plane_fmt[0].bytesperline);
- if (format->num_planes > 1)
- vsp1_wpf_write(wpf, VI6_WPF_DSTM_STRIDE_C,
- format->plane_fmt[1].bytesperline);
- }
+ /* Cropping */
+ crop = vsp1_rwpf_get_crop(wpf, wpf->entity.config);
- vsp1_wpf_write(wpf, VI6_WPF_HSZCLIP, VI6_WPF_SZCLIP_EN |
+ vsp1_wpf_write(wpf, dl, VI6_WPF_HSZCLIP, VI6_WPF_SZCLIP_EN |
(crop->left << VI6_WPF_SZCLIP_OFST_SHIFT) |
(crop->width << VI6_WPF_SZCLIP_SIZE_SHIFT));
- vsp1_wpf_write(wpf, VI6_WPF_VSZCLIP, VI6_WPF_SZCLIP_EN |
+ vsp1_wpf_write(wpf, dl, VI6_WPF_VSZCLIP, VI6_WPF_SZCLIP_EN |
(crop->top << VI6_WPF_SZCLIP_OFST_SHIFT) |
(crop->height << VI6_WPF_SZCLIP_SIZE_SHIFT));
/* Format */
+ sink_format = vsp1_entity_get_pad_format(&wpf->entity,
+ wpf->entity.config,
+ RWPF_PAD_SINK);
+ source_format = vsp1_entity_get_pad_format(&wpf->entity,
+ wpf->entity.config,
+ RWPF_PAD_SOURCE);
+
if (!pipe->lif) {
+ const struct v4l2_pix_format_mplane *format = &wpf->format;
const struct vsp1_format_info *fmtinfo = wpf->fmtinfo;
outfmt = fmtinfo->hwfmt << VI6_WPF_OUTFMT_WRFMT_SHIFT;
@@ -145,73 +135,58 @@ static int wpf_s_stream(struct v4l2_subdev *subdev, int enable)
if (fmtinfo->swap_uv)
outfmt |= VI6_WPF_OUTFMT_SPUVS;
- vsp1_wpf_write(wpf, VI6_WPF_DSWAP, fmtinfo->swap);
+ /* Destination stride and byte swapping. */
+ vsp1_wpf_write(wpf, dl, VI6_WPF_DSTM_STRIDE_Y,
+ format->plane_fmt[0].bytesperline);
+ if (format->num_planes > 1)
+ vsp1_wpf_write(wpf, dl, VI6_WPF_DSTM_STRIDE_C,
+ format->plane_fmt[1].bytesperline);
+
+ vsp1_wpf_write(wpf, dl, VI6_WPF_DSWAP, fmtinfo->swap);
}
- if (wpf->entity.formats[RWPF_PAD_SINK].code !=
- wpf->entity.formats[RWPF_PAD_SOURCE].code)
+ if (sink_format->code != source_format->code)
outfmt |= VI6_WPF_OUTFMT_CSC;
- /* Take the control handler lock to ensure that the PDV value won't be
- * changed behind our back by a set control operation.
- */
- if (vsp1->info->uapi)
- mutex_lock(wpf->ctrls.lock);
- outfmt |= wpf->alpha->cur.val << VI6_WPF_OUTFMT_PDV_SHIFT;
- vsp1_wpf_write(wpf, VI6_WPF_OUTFMT, outfmt);
- if (vsp1->info->uapi)
- mutex_unlock(wpf->ctrls.lock);
-
- vsp1_mod_write(&wpf->entity, VI6_DPR_WPF_FPORCH(wpf->entity.index),
- VI6_DPR_WPF_FPORCH_FP_WPFN);
+ outfmt |= wpf->alpha << VI6_WPF_OUTFMT_PDV_SHIFT;
+ vsp1_wpf_write(wpf, dl, VI6_WPF_OUTFMT, outfmt);
- vsp1_mod_write(&wpf->entity, VI6_WPF_WRBCK_CTRL, 0);
+ vsp1_dl_list_write(dl, VI6_DPR_WPF_FPORCH(wpf->entity.index),
+ VI6_DPR_WPF_FPORCH_FP_WPFN);
- /* Enable interrupts */
- vsp1_write(vsp1, VI6_WPF_IRQ_STA(wpf->entity.index), 0);
- vsp1_write(vsp1, VI6_WPF_IRQ_ENB(wpf->entity.index),
- VI6_WFP_IRQ_ENB_FREE);
-
- return 0;
-}
+ vsp1_dl_list_write(dl, VI6_WPF_WRBCK_CTRL, 0);
-/* -----------------------------------------------------------------------------
- * V4L2 Subdevice Operations
- */
+ /* Sources. If the pipeline has a single input and BRU is not used,
+ * configure it as the master layer. Otherwise configure all
+ * inputs as sub-layers and select the virtual RPF as the master
+ * layer.
+ */
+ for (i = 0; i < vsp1->info->rpf_count; ++i) {
+ struct vsp1_rwpf *input = pipe->inputs[i];
-static struct v4l2_subdev_video_ops wpf_video_ops = {
- .s_stream = wpf_s_stream,
-};
+ if (!input)
+ continue;
-static struct v4l2_subdev_pad_ops wpf_pad_ops = {
- .enum_mbus_code = vsp1_rwpf_enum_mbus_code,
- .enum_frame_size = vsp1_rwpf_enum_frame_size,
- .get_fmt = vsp1_rwpf_get_format,
- .set_fmt = vsp1_rwpf_set_format,
- .get_selection = vsp1_rwpf_get_selection,
- .set_selection = vsp1_rwpf_set_selection,
-};
+ srcrpf |= (!pipe->bru && pipe->num_inputs == 1)
+ ? VI6_WPF_SRCRPF_RPF_ACT_MST(input->entity.index)
+ : VI6_WPF_SRCRPF_RPF_ACT_SUB(input->entity.index);
+ }
-static struct v4l2_subdev_ops wpf_ops = {
- .video = &wpf_video_ops,
- .pad = &wpf_pad_ops,
-};
+ if (pipe->bru || pipe->num_inputs > 1)
+ srcrpf |= VI6_WPF_SRCRPF_VIRACT_MST;
-/* -----------------------------------------------------------------------------
- * Video Device Operations
- */
+ vsp1_wpf_write(wpf, dl, VI6_WPF_SRCRPF, srcrpf);
-static void wpf_set_memory(struct vsp1_rwpf *wpf, struct vsp1_rwpf_memory *mem)
-{
- vsp1_wpf_write(wpf, VI6_WPF_DSTM_ADDR_Y, mem->addr[0]);
- if (mem->num_planes > 1)
- vsp1_wpf_write(wpf, VI6_WPF_DSTM_ADDR_C0, mem->addr[1]);
- if (mem->num_planes > 2)
- vsp1_wpf_write(wpf, VI6_WPF_DSTM_ADDR_C1, mem->addr[2]);
+ /* Enable interrupts */
+ vsp1_dl_list_write(dl, VI6_WPF_IRQ_STA(wpf->entity.index), 0);
+ vsp1_dl_list_write(dl, VI6_WPF_IRQ_ENB(wpf->entity.index),
+ VI6_WFP_IRQ_ENB_FREE);
}
-static const struct vsp1_rwpf_operations wpf_vdev_ops = {
+static const struct vsp1_entity_operations wpf_entity_ops = {
+ .destroy = vsp1_wpf_destroy,
.set_memory = wpf_set_memory,
+ .configure = wpf_configure,
};
/* -----------------------------------------------------------------------------
@@ -220,51 +195,43 @@ static const struct vsp1_rwpf_operations wpf_vdev_ops = {
struct vsp1_rwpf *vsp1_wpf_create(struct vsp1_device *vsp1, unsigned int index)
{
- struct v4l2_subdev *subdev;
struct vsp1_rwpf *wpf;
+ char name[6];
int ret;
wpf = devm_kzalloc(vsp1->dev, sizeof(*wpf), GFP_KERNEL);
if (wpf == NULL)
return ERR_PTR(-ENOMEM);
- wpf->ops = &wpf_vdev_ops;
-
- wpf->max_width = WPF_MAX_WIDTH;
- wpf->max_height = WPF_MAX_HEIGHT;
+ if (vsp1->info->gen == 2) {
+ wpf->max_width = WPF_GEN2_MAX_WIDTH;
+ wpf->max_height = WPF_GEN2_MAX_HEIGHT;
+ } else {
+ wpf->max_width = WPF_GEN3_MAX_WIDTH;
+ wpf->max_height = WPF_GEN3_MAX_HEIGHT;
+ }
+ wpf->entity.ops = &wpf_entity_ops;
wpf->entity.type = VSP1_ENTITY_WPF;
wpf->entity.index = index;
- ret = vsp1_entity_init(vsp1, &wpf->entity, 2);
+ sprintf(name, "wpf.%u", index);
+ ret = vsp1_entity_init(vsp1, &wpf->entity, name, 2, &wpf_ops);
if (ret < 0)
return ERR_PTR(ret);
- /* Initialize the V4L2 subdev. */
- subdev = &wpf->entity.subdev;
- v4l2_subdev_init(subdev, &wpf_ops);
-
- subdev->entity.ops = &vsp1->media_ops;
- subdev->internal_ops = &vsp1_subdev_internal_ops;
- snprintf(subdev->name, sizeof(subdev->name), "%s wpf.%u",
- dev_name(vsp1->dev), index);
- v4l2_set_subdevdata(subdev, wpf);
- subdev->flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;
-
- vsp1_entity_init_formats(subdev, NULL);
+ /* Initialize the display list manager. */
+ wpf->dlm = vsp1_dlm_create(vsp1, index, 4);
+ if (!wpf->dlm) {
+ ret = -ENOMEM;
+ goto error;
+ }
/* Initialize the control handler. */
- v4l2_ctrl_handler_init(&wpf->ctrls, 1);
- wpf->alpha = v4l2_ctrl_new_std(&wpf->ctrls, &wpf_ctrl_ops,
- V4L2_CID_ALPHA_COMPONENT,
- 0, 255, 1, 255);
-
- wpf->entity.subdev.ctrl_handler = &wpf->ctrls;
-
- if (wpf->ctrls.error) {
+ ret = vsp1_rwpf_init_ctrls(wpf);
+ if (ret < 0) {
dev_err(vsp1->dev, "wpf%u: failed to initialize controls\n",
index);
- ret = wpf->ctrls.error;
goto error;
}
diff --git a/drivers/media/platform/xilinx/xilinx-vipp.c b/drivers/media/platform/xilinx/xilinx-vipp.c
index e795a4501e8b..feb3b2f1d874 100644
--- a/drivers/media/platform/xilinx/xilinx-vipp.c
+++ b/drivers/media/platform/xilinx/xilinx-vipp.c
@@ -351,19 +351,15 @@ static int xvip_graph_parse_one(struct xvip_composite_device *xdev,
struct xvip_graph_entity *entity;
struct device_node *remote;
struct device_node *ep = NULL;
- struct device_node *next;
int ret = 0;
dev_dbg(xdev->dev, "parsing node %s\n", node->full_name);
while (1) {
- next = of_graph_get_next_endpoint(node, ep);
- if (next == NULL)
+ ep = of_graph_get_next_endpoint(node, ep);
+ if (ep == NULL)
break;
- of_node_put(ep);
- ep = next;
-
dev_dbg(xdev->dev, "handling endpoint %s\n", ep->full_name);
remote = of_graph_get_remote_port_parent(ep);
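The simplification works because of_graph_get_next_endpoint() already drops the reference on the endpoint node passed in as the previous one, so the separate next/of_node_put() bookkeeping was redundant. The resulting idiom, with a hypothetical handle_endpoint() standing in for the loop body:

	struct device_node *ep = NULL;

	while ((ep = of_graph_get_next_endpoint(node, ep)) != NULL) {
		/* the 'ep' from the previous iteration has already been put */
		handle_endpoint(ep);
	}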
diff --git a/drivers/media/rc/ati_remote.c b/drivers/media/rc/ati_remote.c
index 3f61d77d4147..9f5b59706741 100644
--- a/drivers/media/rc/ati_remote.c
+++ b/drivers/media/rc/ati_remote.c
@@ -873,13 +873,10 @@ static int ati_remote_probe(struct usb_interface *interface,
strlcat(ati_remote->rc_phys, "/input0", sizeof(ati_remote->rc_phys));
strlcat(ati_remote->mouse_phys, "/input1", sizeof(ati_remote->mouse_phys));
- if (udev->manufacturer)
- strlcpy(ati_remote->rc_name, udev->manufacturer,
- sizeof(ati_remote->rc_name));
-
- if (udev->product)
- snprintf(ati_remote->rc_name, sizeof(ati_remote->rc_name),
- "%s %s", ati_remote->rc_name, udev->product);
+ snprintf(ati_remote->rc_name, sizeof(ati_remote->rc_name), "%s%s%s",
+ udev->manufacturer ?: "",
+ udev->manufacturer && udev->product ? " " : "",
+ udev->product ?: "");
if (!strlen(ati_remote->rc_name))
snprintf(ati_remote->rc_name, sizeof(ati_remote->rc_name),
diff --git a/drivers/media/rc/mceusb.c b/drivers/media/rc/mceusb.c
index 35155ae500c7..5cf2e749b9eb 100644
--- a/drivers/media/rc/mceusb.c
+++ b/drivers/media/rc/mceusb.c
@@ -188,6 +188,7 @@
#define VENDOR_TWISTEDMELON 0x2596
#define VENDOR_HAUPPAUGE 0x2040
#define VENDOR_PCTV 0x2013
+#define VENDOR_ADAPTEC 0x03f3
enum mceusb_model_type {
MCE_GEN2 = 0, /* Most boards */
@@ -302,6 +303,9 @@ static struct usb_device_id mceusb_dev_table[] = {
/* SMK/I-O Data GV-MC7/RCKIT Receiver */
{ USB_DEVICE(VENDOR_SMK, 0x0353),
.driver_info = MCE_GEN2_NO_TX },
+ /* SMK RXX6000 Infrared Receiver */
+ { USB_DEVICE(VENDOR_SMK, 0x0357),
+ .driver_info = MCE_GEN2_NO_TX },
/* Tatung eHome Infrared Transceiver */
{ USB_DEVICE(VENDOR_TATUNG, 0x9150) },
/* Shuttle eHome Infrared Transceiver */
@@ -405,6 +409,8 @@ static struct usb_device_id mceusb_dev_table[] = {
.driver_info = HAUPPAUGE_CX_HYBRID_TV },
{ USB_DEVICE(VENDOR_PCTV, 0x025e),
.driver_info = HAUPPAUGE_CX_HYBRID_TV },
+ /* Adaptec / HP eHome Receiver */
+ { USB_DEVICE(VENDOR_ADAPTEC, 0x0094) },
/* Terminating entry */
{ }
diff --git a/drivers/media/rc/rc-main.c b/drivers/media/rc/rc-main.c
index 4e9bbe735ae9..7dfc7c2188f0 100644
--- a/drivers/media/rc/rc-main.c
+++ b/drivers/media/rc/rc-main.c
@@ -1263,6 +1263,9 @@ unlock:
static void rc_dev_release(struct device *device)
{
+ struct rc_dev *dev = to_rc_dev(device);
+
+ kfree(dev);
}
#define ADD_HOTPLUG_VAR(fmt, val...) \
@@ -1384,7 +1387,9 @@ void rc_free_device(struct rc_dev *dev)
put_device(&dev->dev);
- kfree(dev);
+ /* kfree(dev) will be called by the callback function
+ rc_dev_release() */
+
module_put(THIS_MODULE);
}
EXPORT_SYMBOL_GPL(rc_free_device);
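Freeing the rc_dev now happens in the struct device .release() callback instead of directly in rc_free_device(), so the memory stays valid for anyone still holding a reference taken with get_device() and is only released after the final put_device(). A generic sketch of the pattern with hypothetical names (dev.release is assumed to be set to example_release() when the structure is initialised):

	struct example_dev {
		struct device dev;
		/* ... */
	};

	static void example_release(struct device *d)
	{
		/* Runs on the last reference drop, not at unregister time. */
		kfree(container_of(d, struct example_dev, dev));
	}

	static void example_free(struct example_dev *edev)
	{
		if (!edev)
			return;
		put_device(&edev->dev);	/* example_release() will free it */
	}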
@@ -1492,9 +1497,7 @@ int rc_register_device(struct rc_dev *dev)
}
/* Allow the RC sysfs nodes to be accessible */
- mutex_lock(&dev->lock);
atomic_set(&dev->initialized, 1);
- mutex_unlock(&dev->lock);
IR_dprintk(1, "Registered rc%u (driver: %s, remote: %s, mode %s)\n",
dev->minor,
diff --git a/drivers/media/tuners/qm1d1c0042.c b/drivers/media/tuners/qm1d1c0042.c
index 18bc745ed108..9af2a155cfca 100644
--- a/drivers/media/tuners/qm1d1c0042.c
+++ b/drivers/media/tuners/qm1d1c0042.c
@@ -32,14 +32,24 @@
#include "qm1d1c0042.h"
#define QM1D1C0042_NUM_REGS 0x20
-
-static const u8 reg_initval[QM1D1C0042_NUM_REGS] = {
- 0x48, 0x1c, 0xa0, 0x10, 0xbc, 0xc5, 0x20, 0x33,
- 0x06, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00,
- 0x00, 0xff, 0xf3, 0x00, 0x2a, 0x64, 0xa6, 0x86,
- 0x8c, 0xcf, 0xb8, 0xf1, 0xa8, 0xf2, 0x89, 0x00
+#define QM1D1C0042_NUM_REG_ROWS 2
+
+static const u8
+reg_initval[QM1D1C0042_NUM_REG_ROWS][QM1D1C0042_NUM_REGS] = { {
+ 0x48, 0x1c, 0xa0, 0x10, 0xbc, 0xc5, 0x20, 0x33,
+ 0x06, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00,
+ 0x00, 0xff, 0xf3, 0x00, 0x2a, 0x64, 0xa6, 0x86,
+ 0x8c, 0xcf, 0xb8, 0xf1, 0xa8, 0xf2, 0x89, 0x00
+ }, {
+ 0x68, 0x1c, 0xc0, 0x10, 0xbc, 0xc1, 0x11, 0x33,
+ 0x03, 0x00, 0x00, 0x00, 0x03, 0x00, 0x00, 0x00,
+ 0x00, 0xff, 0xf3, 0x00, 0x3f, 0x25, 0x5c, 0xd6,
+ 0x55, 0xcf, 0x95, 0xf6, 0x36, 0xf2, 0x09, 0x00
+ }
};
+static int reg_index;
+
static const struct qm1d1c0042_config default_cfg = {
.xtal_freq = 16000,
.lpf = 1,
@@ -320,7 +330,6 @@ static int qm1d1c0042_init(struct dvb_frontend *fe)
int i, ret;
state = fe->tuner_priv;
- memcpy(state->regs, reg_initval, sizeof(reg_initval));
reg_write(state, 0x01, 0x0c);
reg_write(state, 0x01, 0x0c);
@@ -330,15 +339,22 @@ static int qm1d1c0042_init(struct dvb_frontend *fe)
goto failed;
usleep_range(2000, 3000);
- val = state->regs[0x01] | 0x10;
- ret = reg_write(state, 0x01, val); /* soft reset off */
+ ret = reg_write(state, 0x01, 0x1c); /* soft reset off */
if (ret < 0)
goto failed;
- /* check ID */
+ /* check ID and choose the initial register set corresponding to the ID */
ret = reg_read(state, 0x00, &val);
- if (ret < 0 || val != 0x48)
+ if (ret < 0)
+ goto failed;
+ for (reg_index = 0; reg_index < QM1D1C0042_NUM_REG_ROWS;
+ reg_index++) {
+ if (val == reg_initval[reg_index][0x00])
+ break;
+ }
+ if (reg_index >= QM1D1C0042_NUM_REG_ROWS)
goto failed;
+ memcpy(state->regs, reg_initval[reg_index], QM1D1C0042_NUM_REGS);
usleep_range(2000, 3000);
state->regs[0x0c] |= 0x40;
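Register 0x00 holds the chip ID (0x48 on the original part, 0x68 on the variant covered by the second table row), and the init sequence now picks the row whose first byte matches it instead of rejecting anything other than 0x48. The selection condensed into a helper, purely as an illustration:

	/* Sketch: return the init-table row matching the chip ID, or -ENODEV. */
	static int example_pick_row(u8 chip_id)
	{
		int i;

		for (i = 0; i < QM1D1C0042_NUM_REG_ROWS; i++)
			if (reg_initval[i][0x00] == chip_id)
				return i;
		return -ENODEV;
	}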
diff --git a/drivers/media/tuners/si2157.c b/drivers/media/tuners/si2157.c
index 243ac3816028..b07a681f3fbc 100644
--- a/drivers/media/tuners/si2157.c
+++ b/drivers/media/tuners/si2157.c
@@ -84,11 +84,22 @@ static int si2157_init(struct dvb_frontend *fe)
struct si2157_cmd cmd;
const struct firmware *fw;
const char *fw_name;
- unsigned int chip_id;
+ unsigned int uitmp, chip_id;
dev_dbg(&client->dev, "\n");
- if (dev->fw_loaded)
+ /* Returned IF frequency is garbage when firmware is not running */
+ memcpy(cmd.args, "\x15\x00\x06\x07", 4);
+ cmd.wlen = 4;
+ cmd.rlen = 4;
+ ret = si2157_cmd_execute(client, &cmd);
+ if (ret)
+ goto err;
+
+ uitmp = cmd.args[2] << 0 | cmd.args[3] << 8;
+ dev_dbg(&client->dev, "if_frequency kHz=%u\n", uitmp);
+
+ if (uitmp == dev->if_frequency / 1000)
goto warm;
/* power up */
@@ -203,9 +214,6 @@ skip_fw_download:
dev_info(&client->dev, "firmware version: %c.%c.%d\n",
cmd.args[6], cmd.args[7], cmd.args[8]);
-
- dev->fw_loaded = true;
-
warm:
/* init statistics in order signal app which are supported */
c->strength.len = 1;
@@ -422,7 +430,6 @@ static int si2157_probe(struct i2c_client *client,
dev->fe = cfg->fe;
dev->inversion = cfg->inversion;
dev->if_port = cfg->if_port;
- dev->fw_loaded = false;
dev->chiptype = (u8)id->driver_data;
dev->if_frequency = 5000000; /* default value of property 0x0706 */
mutex_init(&dev->i2c_mutex);
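Rather than tracking a fw_loaded flag, the init path now reads back property 0x0706 (the programmed IF frequency); the reply is only meaningful when the firmware is running, so a value matching dev->if_frequency is treated as "warm" and the download is skipped. The byte handling, pulled out into a hypothetical helper:

	/* Sketch: args[2]/args[3] hold the IF frequency in kHz, little endian. */
	static bool example_is_warm(const u8 *args, unsigned int if_frequency_hz)
	{
		unsigned int if_khz = args[2] << 0 | args[3] << 8;

		return if_khz == if_frequency_hz / 1000;
	}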
diff --git a/drivers/media/tuners/si2157_priv.h b/drivers/media/tuners/si2157_priv.h
index 589d558d381c..d6b2c7b44053 100644
--- a/drivers/media/tuners/si2157_priv.h
+++ b/drivers/media/tuners/si2157_priv.h
@@ -26,7 +26,6 @@ struct si2157_dev {
struct mutex i2c_mutex;
struct dvb_frontend *fe;
bool active;
- bool fw_loaded;
bool inversion;
u8 chiptype;
u8 if_port;
diff --git a/drivers/media/usb/au0828/au0828-core.c b/drivers/media/usb/au0828/au0828-core.c
index cc22b32776ad..321ea5cf1329 100644
--- a/drivers/media/usb/au0828/au0828-core.c
+++ b/drivers/media/usb/au0828/au0828-core.c
@@ -131,22 +131,36 @@ static int recv_control_msg(struct au0828_dev *dev, u16 request, u32 value,
return status;
}
+#ifdef CONFIG_MEDIA_CONTROLLER
+static void au0828_media_graph_notify(struct media_entity *new,
+ void *notify_data);
+#endif
+
static void au0828_unregister_media_device(struct au0828_dev *dev)
{
-
#ifdef CONFIG_MEDIA_CONTROLLER
- if (dev->media_dev &&
- media_devnode_is_registered(&dev->media_dev->devnode)) {
- /* clear enable_source, disable_source */
- dev->media_dev->source_priv = NULL;
- dev->media_dev->enable_source = NULL;
- dev->media_dev->disable_source = NULL;
-
- media_device_unregister(dev->media_dev);
- media_device_cleanup(dev->media_dev);
- kfree(dev->media_dev);
- dev->media_dev = NULL;
+ struct media_device *mdev = dev->media_dev;
+ struct media_entity_notify *notify, *nextp;
+
+ if (!mdev || !media_devnode_is_registered(&mdev->devnode))
+ return;
+
+ /* Remove au0828 entity_notify callbacks */
+ list_for_each_entry_safe(notify, nextp, &mdev->entity_notify, list) {
+ if (notify->notify != au0828_media_graph_notify)
+ continue;
+ media_device_unregister_entity_notify(mdev, notify);
}
+
+ /* clear enable_source, disable_source */
+ dev->media_dev->source_priv = NULL;
+ dev->media_dev->enable_source = NULL;
+ dev->media_dev->disable_source = NULL;
+
+ media_device_unregister(dev->media_dev);
+ media_device_cleanup(dev->media_dev);
+ kfree(dev->media_dev);
+ dev->media_dev = NULL;
#endif
}
diff --git a/drivers/media/usb/au0828/au0828-video.c b/drivers/media/usb/au0828/au0828-video.c
index 32d7db96479c..7d0ec4cb248c 100644
--- a/drivers/media/usb/au0828/au0828-video.c
+++ b/drivers/media/usb/au0828/au0828-video.c
@@ -679,8 +679,6 @@ int au0828_v4l2_device_register(struct usb_interface *interface,
if (retval) {
pr_err("%s() v4l2_device_register failed\n",
__func__);
- mutex_unlock(&dev->lock);
- kfree(dev);
return retval;
}
@@ -691,8 +689,6 @@ int au0828_v4l2_device_register(struct usb_interface *interface,
if (retval) {
pr_err("%s() v4l2_ctrl_handler_init failed\n",
__func__);
- mutex_unlock(&dev->lock);
- kfree(dev);
return retval;
}
dev->v4l2_dev.ctrl_handler = &dev->v4l2_ctrl_hdl;
diff --git a/drivers/media/usb/au0828/au0828.h b/drivers/media/usb/au0828/au0828.h
index 87f32846f1c0..dd7b378fe070 100644
--- a/drivers/media/usb/au0828/au0828.h
+++ b/drivers/media/usb/au0828/au0828.h
@@ -55,7 +55,6 @@
#define NTSC_STD_H 480
#define AU0828_INTERLACED_DEFAULT 1
-#define V4L2_CID_PRIVATE_SHARPNESS (V4L2_CID_PRIVATE_BASE + 0)
/* Definition for AU0828 USB transfer */
#define AU0828_MAX_ISO_BUFS 12 /* maybe resize this value in the future */
diff --git a/drivers/media/usb/cx231xx/cx231xx-417.c b/drivers/media/usb/cx231xx/cx231xx-417.c
index c9320d6c6131..00da024b47a6 100644
--- a/drivers/media/usb/cx231xx/cx231xx-417.c
+++ b/drivers/media/usb/cx231xx/cx231xx-417.c
@@ -360,7 +360,7 @@ static int wait_for_mci_complete(struct cx231xx *dev)
if (count++ > 100) {
dprintk(3, "ERROR: Timeout - gpio=%x\n", gpio);
- return -1;
+ return -EIO;
}
}
return 0;
@@ -856,7 +856,7 @@ static int cx231xx_find_mailbox(struct cx231xx *dev)
}
}
dprintk(3, "Mailbox signature values not found!\n");
- return -1;
+ return -EIO;
}
static void mci_write_memory_to_gpio(struct cx231xx *dev, u32 address, u32 value,
@@ -960,13 +960,14 @@ static int cx231xx_load_firmware(struct cx231xx *dev)
p_fw = p_current_fw;
if (p_current_fw == NULL) {
dprintk(2, "FAIL!!!\n");
- return -1;
+ return -ENOMEM;
}
p_buffer = vmalloc(4096);
if (p_buffer == NULL) {
dprintk(2, "FAIL!!!\n");
- return -1;
+ vfree(p_current_fw);
+ return -ENOMEM;
}
dprintk(2, "%s()\n", __func__);
@@ -989,7 +990,9 @@ static int cx231xx_load_firmware(struct cx231xx *dev)
if (retval != 0) {
dev_err(dev->dev,
"%s: Error with mc417_register_write\n", __func__);
- return -1;
+ vfree(p_current_fw);
+ vfree(p_buffer);
+ return retval;
}
retval = request_firmware(&firmware, CX231xx_FIRM_IMAGE_NAME,
@@ -1001,7 +1004,9 @@ static int cx231xx_load_firmware(struct cx231xx *dev)
CX231xx_FIRM_IMAGE_NAME);
dev_err(dev->dev,
"Please fix your hotplug setup, the board will not work without firmware loaded!\n");
- return -1;
+ vfree(p_current_fw);
+ vfree(p_buffer);
+ return retval;
}
if (firmware->size != CX231xx_FIRM_IMAGE_SIZE) {
@@ -1009,14 +1014,18 @@ static int cx231xx_load_firmware(struct cx231xx *dev)
"ERROR: Firmware size mismatch (have %zd, expected %d)\n",
firmware->size, CX231xx_FIRM_IMAGE_SIZE);
release_firmware(firmware);
- return -1;
+ vfree(p_current_fw);
+ vfree(p_buffer);
+ return -EINVAL;
}
if (0 != memcmp(firmware->data, magic, 8)) {
dev_err(dev->dev,
"ERROR: Firmware magic mismatch, wrong file?\n");
release_firmware(firmware);
- return -1;
+ vfree(p_current_fw);
+ vfree(p_buffer);
+ return -EINVAL;
}
initGPIO(dev);
@@ -1131,21 +1140,21 @@ static int cx231xx_initialize_codec(struct cx231xx *dev)
if (retval < 0) {
dev_err(dev->dev, "%s: mailbox < 0, error\n",
__func__);
- return -1;
+ return retval;
}
dev->cx23417_mailbox = retval;
retval = cx231xx_api_cmd(dev, CX2341X_ENC_PING_FW, 0, 0);
if (retval < 0) {
dev_err(dev->dev,
"ERROR: cx23417 firmware ping failed!\n");
- return -1;
+ return retval;
}
retval = cx231xx_api_cmd(dev, CX2341X_ENC_GET_VERSION, 0, 1,
&version);
if (retval < 0) {
dev_err(dev->dev,
"ERROR: cx23417 firmware get encoder: version failed!\n");
- return -1;
+ return retval;
}
dprintk(1, "cx23417 firmware version is 0x%08x\n", version);
msleep(200);
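The firmware-load error paths now return real errno values and release both vmalloc() buffers before each early return. An equivalent shape with a single cleanup label is sketched below as a generic illustration (hypothetical function, not a drop-in for cx231xx_load_firmware(); vfree(NULL) being a no-op is what lets one exit path cover every case):

	#include <linux/vmalloc.h>

	static int example_load(size_t fw_size)
	{
		u8 *fw_copy = vmalloc(fw_size);
		u8 *bounce = vmalloc(4096);
		int ret = 0;

		if (!fw_copy || !bounce) {
			ret = -ENOMEM;
			goto out;
		}

		/* ... copy / transfer loop would go here ... */

	out:
		vfree(bounce);
		vfree(fw_copy);
		return ret;
	}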
diff --git a/drivers/media/usb/cx231xx/cx231xx-core.c b/drivers/media/usb/cx231xx/cx231xx-core.c
index f497888d94bf..630f4fc5155f 100644
--- a/drivers/media/usb/cx231xx/cx231xx-core.c
+++ b/drivers/media/usb/cx231xx/cx231xx-core.c
@@ -752,7 +752,8 @@ EXPORT_SYMBOL_GPL(cx231xx_set_mode);
int cx231xx_ep5_bulkout(struct cx231xx *dev, u8 *firmware, u16 size)
{
int errCode = 0;
- int actlen, ret = -ENOMEM;
+ int actlen = -1;
+ int ret = -ENOMEM;
u32 *buffer;
buffer = kzalloc(4096, GFP_KERNEL);
@@ -1304,6 +1305,9 @@ int cx231xx_dev_init(struct cx231xx *dev)
cx231xx_i2c_register(&dev->i2c_bus[1]);
cx231xx_i2c_register(&dev->i2c_bus[2]);
+ errCode = cx231xx_i2c_mux_create(dev);
+ if (errCode < 0)
+ return errCode;
cx231xx_i2c_mux_register(dev, 0);
cx231xx_i2c_mux_register(dev, 1);
@@ -1426,8 +1430,7 @@ EXPORT_SYMBOL_GPL(cx231xx_dev_init);
void cx231xx_dev_uninit(struct cx231xx *dev)
{
/* Un Initialize I2C bus */
- cx231xx_i2c_mux_unregister(dev, 1);
- cx231xx_i2c_mux_unregister(dev, 0);
+ cx231xx_i2c_mux_unregister(dev);
cx231xx_i2c_unregister(&dev->i2c_bus[2]);
cx231xx_i2c_unregister(&dev->i2c_bus[1]);
cx231xx_i2c_unregister(&dev->i2c_bus[0]);
diff --git a/drivers/media/usb/cx231xx/cx231xx-i2c.c b/drivers/media/usb/cx231xx/cx231xx-i2c.c
index a29c345b027d..473cd3433fe5 100644
--- a/drivers/media/usb/cx231xx/cx231xx-i2c.c
+++ b/drivers/media/usb/cx231xx/cx231xx-i2c.c
@@ -557,40 +557,41 @@ int cx231xx_i2c_unregister(struct cx231xx_i2c *bus)
* cx231xx_i2c_mux_select()
* switch i2c master number 1 between port1 and port3
*/
-static int cx231xx_i2c_mux_select(struct i2c_adapter *adap,
- void *mux_priv, u32 chan_id)
+static int cx231xx_i2c_mux_select(struct i2c_mux_core *muxc, u32 chan_id)
{
- struct cx231xx *dev = mux_priv;
+ struct cx231xx *dev = i2c_mux_priv(muxc);
return cx231xx_enable_i2c_port_3(dev, chan_id);
}
+int cx231xx_i2c_mux_create(struct cx231xx *dev)
+{
+ dev->muxc = i2c_mux_alloc(&dev->i2c_bus[1].i2c_adap, dev->dev, 2, 0, 0,
+ cx231xx_i2c_mux_select, NULL);
+ if (!dev->muxc)
+ return -ENOMEM;
+ dev->muxc->priv = dev;
+ return 0;
+}
+
int cx231xx_i2c_mux_register(struct cx231xx *dev, int mux_no)
{
- struct i2c_adapter *i2c_parent = &dev->i2c_bus[1].i2c_adap;
- /* what is the correct mux_dev? */
- struct device *mux_dev = dev->dev;
-
- dev->i2c_mux_adap[mux_no] = i2c_add_mux_adapter(i2c_parent,
- mux_dev,
- dev /* mux_priv */,
- 0,
- mux_no /* chan_id */,
- 0 /* class */,
- &cx231xx_i2c_mux_select,
- NULL);
-
- if (!dev->i2c_mux_adap[mux_no])
+ int rc;
+
+ rc = i2c_mux_add_adapter(dev->muxc,
+ 0,
+ mux_no /* chan_id */,
+ 0 /* class */);
+ if (rc)
dev_warn(dev->dev,
"i2c mux %d register FAILED\n", mux_no);
- return 0;
+ return rc;
}
-void cx231xx_i2c_mux_unregister(struct cx231xx *dev, int mux_no)
+void cx231xx_i2c_mux_unregister(struct cx231xx *dev)
{
- i2c_del_mux_adapter(dev->i2c_mux_adap[mux_no]);
- dev->i2c_mux_adap[mux_no] = NULL;
+ i2c_mux_del_adapters(dev->muxc);
}
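The conversion replaces the old per-adapter i2c_add_mux_adapter() API with the shared i2c_mux_core: allocate it once, point muxc->priv at the driver context, then add one child adapter per channel; i2c_mux_del_adapters() tears them all down in one call. Condensed from the hunks above (error handling abbreviated; channel 0 selects port 1 and channel 1 selects port 3 via cx231xx_i2c_mux_select()):

	/* Sketch, not a drop-in replacement for the functions above. */
	static int example_mux_setup(struct cx231xx *dev)
	{
		int ret;

		dev->muxc = i2c_mux_alloc(&dev->i2c_bus[1].i2c_adap, dev->dev,
					  2, 0, 0, cx231xx_i2c_mux_select, NULL);
		if (!dev->muxc)
			return -ENOMEM;
		dev->muxc->priv = dev;

		ret = i2c_mux_add_adapter(dev->muxc, 0, 0, 0);	/* chan 0 */
		if (!ret)
			ret = i2c_mux_add_adapter(dev->muxc, 0, 1, 0); /* chan 1 */
		if (ret)
			i2c_mux_del_adapters(dev->muxc);
		return ret;
	}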
struct i2c_adapter *cx231xx_get_i2c_adap(struct cx231xx *dev, int i2c_port)
@@ -603,9 +604,9 @@ struct i2c_adapter *cx231xx_get_i2c_adap(struct cx231xx *dev, int i2c_port)
case I2C_2:
return &dev->i2c_bus[2].i2c_adap;
case I2C_1_MUX_1:
- return dev->i2c_mux_adap[0];
+ return dev->muxc->adapter[0];
case I2C_1_MUX_3:
- return dev->i2c_mux_adap[1];
+ return dev->muxc->adapter[1];
default:
return NULL;
}
diff --git a/drivers/media/usb/cx231xx/cx231xx.h b/drivers/media/usb/cx231xx/cx231xx.h
index 69f6d20870f5..90c867683076 100644
--- a/drivers/media/usb/cx231xx/cx231xx.h
+++ b/drivers/media/usb/cx231xx/cx231xx.h
@@ -624,6 +624,7 @@ struct cx231xx {
/* I2C adapters: Master 1 & 2 (External) & Master 3 (Internal only) */
struct cx231xx_i2c i2c_bus[3];
+ struct i2c_mux_core *muxc;
struct i2c_adapter *i2c_mux_adap[2];
unsigned int xc_fw_load_done:1;
@@ -760,8 +761,9 @@ int cx231xx_reset_analog_tuner(struct cx231xx *dev);
void cx231xx_do_i2c_scan(struct cx231xx *dev, int i2c_port);
int cx231xx_i2c_register(struct cx231xx_i2c *bus);
int cx231xx_i2c_unregister(struct cx231xx_i2c *bus);
+int cx231xx_i2c_mux_create(struct cx231xx *dev);
int cx231xx_i2c_mux_register(struct cx231xx *dev, int mux_no);
-void cx231xx_i2c_mux_unregister(struct cx231xx *dev, int mux_no);
+void cx231xx_i2c_mux_unregister(struct cx231xx *dev);
struct i2c_adapter *cx231xx_get_i2c_adap(struct cx231xx *dev, int i2c_port);
/* Internal block control functions */
diff --git a/drivers/media/usb/dvb-usb-v2/af9015.c b/drivers/media/usb/dvb-usb-v2/af9015.c
index 95a7388e89d4..09e0f58f6bb7 100644
--- a/drivers/media/usb/dvb-usb-v2/af9015.c
+++ b/drivers/media/usb/dvb-usb-v2/af9015.c
@@ -398,6 +398,8 @@ error:
}
#define AF9015_EEPROM_SIZE 256
+/* 2^31 + 2^29 - 2^25 + 2^22 - 2^19 - 2^16 + 1 */
+#define GOLDEN_RATIO_PRIME_32 0x9e370001UL
/* hash (and dump) eeprom */
static int af9015_eeprom_hash(struct dvb_usb_device *d)
diff --git a/drivers/media/usb/dvb-usb-v2/af9035.h b/drivers/media/usb/dvb-usb-v2/af9035.h
index df22001f9e41..89e629a24aec 100644
--- a/drivers/media/usb/dvb-usb-v2/af9035.h
+++ b/drivers/media/usb/dvb-usb-v2/af9035.h
@@ -118,20 +118,20 @@ static const u32 clock_lut_it9135[] = {
* Values 0, 3 and 5 are seen to this day. 0 for single TS and 3/5 for dual TS.
*/
-#define EEPROM_BASE_AF9035 0x42fd
-#define EEPROM_BASE_IT9135 0x499c
+#define EEPROM_BASE_AF9035 0x42f5
+#define EEPROM_BASE_IT9135 0x4994
#define EEPROM_SHIFT 0x10
-#define EEPROM_IR_MODE 0x10
-#define EEPROM_TS_MODE 0x29
-#define EEPROM_2ND_DEMOD_ADDR 0x2a
-#define EEPROM_IR_TYPE 0x2c
-#define EEPROM_1_IF_L 0x30
-#define EEPROM_1_IF_H 0x31
-#define EEPROM_1_TUNER_ID 0x34
-#define EEPROM_2_IF_L 0x40
-#define EEPROM_2_IF_H 0x41
-#define EEPROM_2_TUNER_ID 0x44
+#define EEPROM_IR_MODE 0x18
+#define EEPROM_TS_MODE 0x31
+#define EEPROM_2ND_DEMOD_ADDR 0x32
+#define EEPROM_IR_TYPE 0x34
+#define EEPROM_1_IF_L 0x38
+#define EEPROM_1_IF_H 0x39
+#define EEPROM_1_TUNER_ID 0x3c
+#define EEPROM_2_IF_L 0x48
+#define EEPROM_2_IF_H 0x49
+#define EEPROM_2_TUNER_ID 0x4c
/* USB commands */
#define CMD_MEM_RD 0x00
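
Although every EEPROM offset above changes, the absolute addresses do not, assuming the driver computes them as base + offset (the usual pattern for these defines): the base drops by 8 and every offset grows by 8. Two spot checks with the values in this hunk:

        old: EEPROM_BASE_AF9035 + EEPROM_TS_MODE = 0x42fd + 0x29 = 0x4326
        new:                                       0x42f5 + 0x31 = 0x4326
        old: EEPROM_BASE_IT9135 + EEPROM_IR_MODE = 0x499c + 0x10 = 0x49ac
        new:                                       0x4994 + 0x18 = 0x49ac

So the hunk re-bases the layout without moving any field in the EEPROM itself.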
diff --git a/drivers/media/usb/dvb-usb-v2/rtl28xxu.c b/drivers/media/usb/dvb-usb-v2/rtl28xxu.c
index fa72642d41f3..eb7af8cb8aca 100644
--- a/drivers/media/usb/dvb-usb-v2/rtl28xxu.c
+++ b/drivers/media/usb/dvb-usb-v2/rtl28xxu.c
@@ -1333,10 +1333,7 @@ static int rtl2832u_tuner_attach(struct dvb_usb_adapter *adap)
case TUNER_RTL2832_R828D:
pdata.clk = dev->rtl2832_platform_data.clk;
pdata.tuner = dev->tuner;
- pdata.i2c_client = dev->i2c_client_demod;
- pdata.bulk_read = dev->rtl2832_platform_data.bulk_read;
- pdata.bulk_write = dev->rtl2832_platform_data.bulk_write;
- pdata.update_bits = dev->rtl2832_platform_data.update_bits;
+ pdata.regmap = dev->rtl2832_platform_data.regmap;
pdata.dvb_frontend = adap->fe[0];
pdata.dvb_usb_device = d;
pdata.v4l2_subdev = subdev;
diff --git a/drivers/media/usb/dvb-usb/az6027.c b/drivers/media/usb/dvb-usb/az6027.c
index 92e47d6c3ee3..2e711362847e 100644
--- a/drivers/media/usb/dvb-usb/az6027.c
+++ b/drivers/media/usb/dvb-usb/az6027.c
@@ -1090,6 +1090,7 @@ static struct usb_device_id az6027_usb_table[] = {
{ USB_DEVICE(USB_VID_TECHNISAT, USB_PID_TECHNISAT_USB2_HDCI_V2) },
{ USB_DEVICE(USB_VID_ELGATO, USB_PID_ELGATO_EYETV_SAT) },
{ USB_DEVICE(USB_VID_ELGATO, USB_PID_ELGATO_EYETV_SAT_V2) },
+ { USB_DEVICE(USB_VID_ELGATO, USB_PID_ELGATO_EYETV_SAT_V3) },
{ },
};
@@ -1138,7 +1139,7 @@ static struct dvb_usb_device_properties az6027_properties = {
.i2c_algo = &az6027_i2c_algo,
- .num_device_descs = 7,
+ .num_device_descs = 8,
.devices = {
{
.name = "AZUREWAVE DVB-S/S2 USB2.0 (AZ6027)",
@@ -1168,6 +1169,10 @@ static struct dvb_usb_device_properties az6027_properties = {
.name = "Elgato EyeTV Sat",
.cold_ids = { &az6027_usb_table[6], NULL },
.warm_ids = { NULL },
+ }, {
+ .name = "Elgato EyeTV Sat",
+ .cold_ids = { &az6027_usb_table[7], NULL },
+ .warm_ids = { NULL },
},
{ NULL },
}
diff --git a/drivers/media/usb/dvb-usb/dib0700_core.c b/drivers/media/usb/dvb-usb/dib0700_core.c
index c16f999b9d7c..bf890c3d9cda 100644
--- a/drivers/media/usb/dvb-usb/dib0700_core.c
+++ b/drivers/media/usb/dvb-usb/dib0700_core.c
@@ -517,7 +517,7 @@ int dib0700_download_firmware(struct usb_device *udev, const struct firmware *fw
if (nb_packet_buffer_size < 1)
nb_packet_buffer_size = 1;
- /* get the fimware version */
+ /* get the firmware version */
usb_control_msg(udev, usb_rcvctrlpipe(udev, 0),
REQUEST_GET_VERSION,
USB_TYPE_VENDOR | USB_DIR_IN, 0, 0,
diff --git a/drivers/media/usb/dvb-usb/dib0700_devices.c b/drivers/media/usb/dvb-usb/dib0700_devices.c
index ea0391e32d23..0857b56e652c 100644
--- a/drivers/media/usb/dvb-usb/dib0700_devices.c
+++ b/drivers/media/usb/dvb-usb/dib0700_devices.c
@@ -3814,6 +3814,7 @@ struct usb_device_id dib0700_usb_id_table[] = {
{ USB_DEVICE(USB_VID_PCTV, USB_PID_PCTV_2002E) },
{ USB_DEVICE(USB_VID_PCTV, USB_PID_PCTV_2002E_SE) },
{ USB_DEVICE(USB_VID_PCTV, USB_PID_DIBCOM_STK8096PVR) },
+ { USB_DEVICE(USB_VID_DIBCOM, USB_PID_DIBCOM_STK8096PVR) },
{ 0 } /* Terminating entry */
};
MODULE_DEVICE_TABLE(usb, dib0700_usb_id_table);
@@ -5017,7 +5018,8 @@ struct dvb_usb_device_properties dib0700_devices[] = {
.num_device_descs = 1,
.devices = {
{ "DiBcom STK8096-PVR reference design",
- { &dib0700_usb_id_table[83], NULL },
+ { &dib0700_usb_id_table[83],
+ &dib0700_usb_id_table[84], NULL},
{ NULL },
},
},
diff --git a/drivers/media/usb/dvb-usb/dibusb-common.c b/drivers/media/usb/dvb-usb/dibusb-common.c
index 35de6095926d..6eea4e68891d 100644
--- a/drivers/media/usb/dvb-usb/dibusb-common.c
+++ b/drivers/media/usb/dvb-usb/dibusb-common.c
@@ -184,6 +184,8 @@ int dibusb_read_eeprom_byte(struct dvb_usb_device *d, u8 offs, u8 *val)
}
EXPORT_SYMBOL(dibusb_read_eeprom_byte);
+#if IS_ENABLED(CONFIG_DVB_DIB3000MC)
+
/* 3000MC/P stuff */
// Config Adjacent channels Perf -cal22
static struct dibx000_agc_config dib3000p_mt2060_agc_config = {
@@ -242,8 +244,6 @@ static struct dibx000_agc_config dib3000p_panasonic_agc_config = {
.agc2_slope2 = 0x1e,
};
-#if IS_ENABLED(CONFIG_DVB_DIB3000MC)
-
static struct dib3000mc_config mod3000p_dib3000p_config = {
&dib3000p_panasonic_agc_config,
diff --git a/drivers/media/usb/dvb-usb/dw2102.c b/drivers/media/usb/dvb-usb/dw2102.c
index 6d0dd859d684..49b55d7069b1 100644
--- a/drivers/media/usb/dvb-usb/dw2102.c
+++ b/drivers/media/usb/dvb-usb/dw2102.c
@@ -13,6 +13,7 @@
*
* see Documentation/dvb/README.dvb-usb for more information
*/
+#include "dvb-usb-ids.h"
#include "dw2102.h"
#include "si21xx.h"
#include "stv0299.h"
@@ -38,61 +39,6 @@
/* Max transfer size done by I2C transfer functions */
#define MAX_XFER_SIZE 64
-#ifndef USB_PID_DW2102
-#define USB_PID_DW2102 0x2102
-#endif
-
-#ifndef USB_PID_DW2104
-#define USB_PID_DW2104 0x2104
-#endif
-
-#ifndef USB_PID_DW3101
-#define USB_PID_DW3101 0x3101
-#endif
-
-#ifndef USB_PID_CINERGY_S
-#define USB_PID_CINERGY_S 0x0064
-#endif
-
-#ifndef USB_PID_TEVII_S630
-#define USB_PID_TEVII_S630 0xd630
-#endif
-
-#ifndef USB_PID_TEVII_S650
-#define USB_PID_TEVII_S650 0xd650
-#endif
-
-#ifndef USB_PID_TEVII_S660
-#define USB_PID_TEVII_S660 0xd660
-#endif
-
-#ifndef USB_PID_TEVII_S662
-#define USB_PID_TEVII_S662 0xd662
-#endif
-
-#ifndef USB_PID_TEVII_S480_1
-#define USB_PID_TEVII_S480_1 0xd481
-#endif
-
-#ifndef USB_PID_TEVII_S480_2
-#define USB_PID_TEVII_S480_2 0xd482
-#endif
-
-#ifndef USB_PID_PROF_1100
-#define USB_PID_PROF_1100 0xb012
-#endif
-
-#ifndef USB_PID_TEVII_S421
-#define USB_PID_TEVII_S421 0xd421
-#endif
-
-#ifndef USB_PID_TEVII_S632
-#define USB_PID_TEVII_S632 0xd632
-#endif
-
-#ifndef USB_PID_GOTVIEW_SAT_HD
-#define USB_PID_GOTVIEW_SAT_HD 0x5456
-#endif
#define DW210X_READ_MSG 0
#define DW210X_WRITE_MSG 1
@@ -1709,7 +1655,7 @@ static struct usb_device_id dw2102_table[] = {
[CYPRESS_DW2101] = {USB_DEVICE(USB_VID_CYPRESS, 0x2101)},
[CYPRESS_DW2104] = {USB_DEVICE(USB_VID_CYPRESS, USB_PID_DW2104)},
[TEVII_S650] = {USB_DEVICE(0x9022, USB_PID_TEVII_S650)},
- [TERRATEC_CINERGY_S] = {USB_DEVICE(USB_VID_TERRATEC, USB_PID_CINERGY_S)},
+ [TERRATEC_CINERGY_S] = {USB_DEVICE(USB_VID_TERRATEC, USB_PID_TERRATEC_CINERGY_S)},
[CYPRESS_DW3101] = {USB_DEVICE(USB_VID_CYPRESS, USB_PID_DW3101)},
[TEVII_S630] = {USB_DEVICE(0x9022, USB_PID_TEVII_S630)},
[PROF_1100] = {USB_DEVICE(0x3011, USB_PID_PROF_1100)},
@@ -1801,7 +1747,7 @@ static int dw2102_load_firmware(struct usb_device *dev,
dw210x_op_rw(dev, 0xbf, 0x0040, 0, &reset, 0,
DW210X_WRITE_MSG);
break;
- case USB_PID_CINERGY_S:
+ case USB_PID_TERRATEC_CINERGY_S:
case USB_PID_DW2102:
dw210x_op_rw(dev, 0xbf, 0x0040, 0, &reset, 0,
DW210X_WRITE_MSG);
@@ -1843,6 +1789,9 @@ static int dw2102_load_firmware(struct usb_device *dev,
msleep(100);
kfree(p);
}
+
+ if (le16_to_cpu(dev->descriptor.idProduct) == 0x2101)
+ release_firmware(fw);
return ret;
}
diff --git a/drivers/media/usb/dvb-usb/pctv452e.c b/drivers/media/usb/dvb-usb/pctv452e.c
index ec397c4b7cc8..c05de1b088a4 100644
--- a/drivers/media/usb/dvb-usb/pctv452e.c
+++ b/drivers/media/usb/dvb-usb/pctv452e.c
@@ -995,11 +995,11 @@ static struct dvb_usb_device_properties tt_connect_s2_3600_properties = {
/* parameter for the MPEG2-data transfer */
.stream = {
.type = USB_ISOC,
- .count = 7,
+ .count = 4,
.endpoint = 0x02,
.u = {
.isoc = {
- .framesperurb = 4,
+ .framesperurb = 64,
.framesize = 940,
.interval = 1
}
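
For orientation, simple arithmetic on the values in this hunk: the old setup queued 7 isochronous URBs of 4 frames x 940 bytes = 3760 bytes each, while the new one queues 4 URBs of 64 frames x 940 bytes = 60160 bytes each, so each URB carries far more payload and completes much less often.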
diff --git a/drivers/media/usb/em28xx/Kconfig b/drivers/media/usb/em28xx/Kconfig
index e382210c4ada..d917b0a2beb1 100644
--- a/drivers/media/usb/em28xx/Kconfig
+++ b/drivers/media/usb/em28xx/Kconfig
@@ -59,6 +59,8 @@ config VIDEO_EM28XX_DVB
select DVB_DRX39XYJ if MEDIA_SUBDRV_AUTOSELECT
select DVB_SI2168 if MEDIA_SUBDRV_AUTOSELECT
select MEDIA_TUNER_SI2157 if MEDIA_SUBDRV_AUTOSELECT
+ select DVB_TC90522 if MEDIA_SUBDRV_AUTOSELECT
+ select MEDIA_TUNER_QM1D1C0042 if MEDIA_SUBDRV_AUTOSELECT
---help---
This adds support for DVB cards based on the
Empiatech em28xx chips.
diff --git a/drivers/media/usb/em28xx/em28xx-cards.c b/drivers/media/usb/em28xx/em28xx-cards.c
index 930e3e3fc948..e397f544f108 100644
--- a/drivers/media/usb/em28xx/em28xx-cards.c
+++ b/drivers/media/usb/em28xx/em28xx-cards.c
@@ -492,6 +492,44 @@ static struct em28xx_reg_seq terratec_t2_stick_hd[] = {
{-1, -1, -1, -1},
};
+static struct em28xx_reg_seq plex_px_bcud[] = {
+ {EM2874_R80_GPIO_P0_CTRL, 0xff, 0xff, 0},
+ {0x0d, 0xff, 0xff, 0},
+ {EM2874_R50_IR_CONFIG, 0x01, 0xff, 0},
+ {EM28XX_R06_I2C_CLK, 0x40, 0xff, 0},
+ {EM2874_R80_GPIO_P0_CTRL, 0xfd, 0xff, 100},
+ {EM28XX_R12_VINENABLE, 0x20, 0x20, 0},
+ {0x0d, 0x42, 0xff, 1000},
+ {EM2874_R80_GPIO_P0_CTRL, 0xfc, 0xff, 10},
+ {EM2874_R80_GPIO_P0_CTRL, 0xfd, 0xff, 10},
+ {0x73, 0xfd, 0xff, 100},
+ {-1, -1, -1, -1},
+};
+
+/*
+ * 2040:0265 Hauppauge WinTV-dualHD DVB
+ * reg 0x80/0x84:
+ * GPIO_0: Yellow LED tuner 1, 0=on, 1=off
+ * GPIO_1: Green LED tuner 1, 0=on, 1=off
+ * GPIO_2: Yellow LED tuner 2, 0=on, 1=off
+ * GPIO_3: Green LED tuner 2, 0=on, 1=off
+ * GPIO_5: Reset #2, 0=active
+ * GPIO_6: Reset #1, 0=active
+ */
+static struct em28xx_reg_seq hauppauge_dualhd_dvb[] = {
+ {EM2874_R80_GPIO_P0_CTRL, 0xff, 0xff, 0},
+ {0x0d, 0xff, 0xff, 200},
+ {0x50, 0x04, 0xff, 300},
+ {EM2874_R80_GPIO_P0_CTRL, 0xbf, 0xff, 100}, /* demod 1 reset */
+ {EM2874_R80_GPIO_P0_CTRL, 0xff, 0xff, 100},
+ {EM2874_R80_GPIO_P0_CTRL, 0xdf, 0xff, 100}, /* demod 2 reset */
+ {EM2874_R80_GPIO_P0_CTRL, 0xff, 0xff, 100},
+ {EM2874_R5F_TS_ENABLE, 0x44, 0xff, 50},
+ {EM2874_R5D_TS1_PKT_SIZE, 0x05, 0xff, 50},
+ {EM2874_R5E_TS2_PKT_SIZE, 0x05, 0xff, 50},
+ {-1, -1, -1, -1},
+};
+
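
These em28xx_reg_seq tables are plain {reg, val, mask, sleep} tuples terminated by a -1 entry. A minimal sketch of how such a table can be walked, using only the helpers this patch set already relies on (em28xx_write_reg_bits() and msleep()); the wrapper name apply_reg_seq is an illustrative assumption, not the driver's own helper:

static void apply_reg_seq(struct em28xx *dev, const struct em28xx_reg_seq *seq)
{
        int i;

        for (i = 0; seq[i].reg >= 0; i++) {
                /* update only the bits selected by the mask */
                em28xx_write_reg_bits(dev, seq[i].reg, seq[i].val,
                                      seq[i].mask);
                if (seq[i].sleep > 0)
                        msleep(seq[i].sleep);
        }
}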
/*
* Button definitions
*/
@@ -571,6 +609,22 @@ static struct em28xx_led terratec_grabby_leds[] = {
{-1, 0, 0, 0},
};
+static struct em28xx_led hauppauge_dualhd_leds[] = {
+ {
+ .role = EM28XX_LED_DIGITAL_CAPTURING,
+ .gpio_reg = EM2874_R80_GPIO_P0_CTRL,
+ .gpio_mask = EM_GPIO_1,
+ .inverted = 1,
+ },
+ {
+ .role = EM28XX_LED_DIGITAL_CAPTURING_TS2,
+ .gpio_reg = EM2874_R80_GPIO_P0_CTRL,
+ .gpio_mask = EM_GPIO_3,
+ .inverted = 1,
+ },
+ {-1, 0, 0, 0},
+};
+
/*
* Board definitions
*/
@@ -2306,6 +2360,35 @@ struct em28xx_board em28xx_boards[] = {
.has_dvb = 1,
.ir_codes = RC_MAP_TERRATEC_SLIM_2,
},
+
+ /*
+ * 3275:0085 PLEX PX-BCUD.
+ * Empia EM28178, TOSHIBA TC90532XBG, Sharp QM1D1C0042
+ */
+ [EM28178_BOARD_PLEX_PX_BCUD] = {
+ .name = "PLEX PX-BCUD",
+ .xclk = EM28XX_XCLK_FREQUENCY_4_3MHZ,
+ .def_i2c_bus = 1,
+ .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE,
+ .tuner_type = TUNER_ABSENT,
+ .tuner_gpio = plex_px_bcud,
+ .has_dvb = 1,
+ },
+ /*
+ * 2040:0265 Hauppauge WinTV-dualHD (DVB version).
+ * Empia EM28274, 2x Silicon Labs Si2168, 2x Silicon Labs Si2157
+ */
+ [EM28174_BOARD_HAUPPAUGE_WINTV_DUALHD_DVB] = {
+ .name = "Hauppauge WinTV-dualHD DVB",
+ .def_i2c_bus = 1,
+ .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE |
+ EM28XX_I2C_FREQ_400_KHZ,
+ .tuner_type = TUNER_ABSENT,
+ .tuner_gpio = hauppauge_dualhd_dvb,
+ .has_dvb = 1,
+ .ir_codes = RC_MAP_HAUPPAUGE,
+ .leds = hauppauge_dualhd_leds,
+ },
};
EXPORT_SYMBOL_GPL(em28xx_boards);
@@ -2429,6 +2512,8 @@ struct usb_device_id em28xx_id_table[] = {
.driver_info = EM2883_BOARD_HAUPPAUGE_WINTV_HVR_950 },
{ USB_DEVICE(0x2040, 0x651f),
.driver_info = EM2883_BOARD_HAUPPAUGE_WINTV_HVR_850 },
+ { USB_DEVICE(0x2040, 0x0265),
+ .driver_info = EM28174_BOARD_HAUPPAUGE_WINTV_DUALHD_DVB },
{ USB_DEVICE(0x0438, 0xb002),
.driver_info = EM2880_BOARD_AMD_ATI_TV_WONDER_HD_600 },
{ USB_DEVICE(0x2001, 0xf112),
@@ -2495,6 +2580,8 @@ struct usb_device_id em28xx_id_table[] = {
.driver_info = EM2861_BOARD_LEADTEK_VC100 },
{ USB_DEVICE(0xeb1a, 0x8179),
.driver_info = EM28178_BOARD_TERRATEC_T2_STICK_HD },
+ { USB_DEVICE(0x3275, 0x0085),
+ .driver_info = EM28178_BOARD_PLEX_PX_BCUD },
{ },
};
MODULE_DEVICE_TABLE(usb, em28xx_id_table);
@@ -2861,6 +2948,7 @@ static void em28xx_card_setup(struct em28xx *dev)
case EM2883_BOARD_HAUPPAUGE_WINTV_HVR_850:
case EM2883_BOARD_HAUPPAUGE_WINTV_HVR_950:
case EM2884_BOARD_HAUPPAUGE_WINTV_HVR_930C:
+ case EM28174_BOARD_HAUPPAUGE_WINTV_DUALHD_DVB:
{
struct tveeprom tv;
diff --git a/drivers/media/usb/em28xx/em28xx-dvb.c b/drivers/media/usb/em28xx/em28xx-dvb.c
index 5d209c7c54d5..1a5c01202f73 100644
--- a/drivers/media/usb/em28xx/em28xx-dvb.c
+++ b/drivers/media/usb/em28xx/em28xx-dvb.c
@@ -58,6 +58,8 @@
#include "ts2020.h"
#include "si2168.h"
#include "si2157.h"
+#include "tc90522.h"
+#include "qm1d1c0042.h"
MODULE_AUTHOR("Mauro Carvalho Chehab <mchehab@infradead.org>");
MODULE_LICENSE("GPL");
@@ -787,6 +789,68 @@ static int em28xx_mt352_terratec_xs_init(struct dvb_frontend *fe)
return 0;
}
+static void px_bcud_init(struct em28xx *dev)
+{
+ int i;
+ struct {
+ unsigned char r[4];
+ int len;
+ } regs1[] = {
+ {{ 0x0e, 0x77 }, 2},
+ {{ 0x0f, 0x77 }, 2},
+ {{ 0x03, 0x90 }, 2},
+ }, regs2[] = {
+ {{ 0x07, 0x01 }, 2},
+ {{ 0x08, 0x10 }, 2},
+ {{ 0x13, 0x00 }, 2},
+ {{ 0x17, 0x00 }, 2},
+ {{ 0x03, 0x01 }, 2},
+ {{ 0x10, 0xb1 }, 2},
+ {{ 0x11, 0x40 }, 2},
+ {{ 0x85, 0x7a }, 2},
+ {{ 0x87, 0x04 }, 2},
+ };
+ static struct em28xx_reg_seq gpio[] = {
+ {EM28XX_R06_I2C_CLK, 0x40, 0xff, 300},
+ {EM2874_R80_GPIO_P0_CTRL, 0xfd, 0xff, 60},
+ {EM28XX_R15_RGAIN, 0x20, 0xff, 0},
+ {EM28XX_R16_GGAIN, 0x20, 0xff, 0},
+ {EM28XX_R17_BGAIN, 0x20, 0xff, 0},
+ {EM28XX_R18_ROFFSET, 0x00, 0xff, 0},
+ {EM28XX_R19_GOFFSET, 0x00, 0xff, 0},
+ {EM28XX_R1A_BOFFSET, 0x00, 0xff, 0},
+ {EM28XX_R23_UOFFSET, 0x00, 0xff, 0},
+ {EM28XX_R24_VOFFSET, 0x00, 0xff, 0},
+ {EM28XX_R26_COMPR, 0x00, 0xff, 0},
+ {0x13, 0x08, 0xff, 0},
+ {EM28XX_R12_VINENABLE, 0x27, 0xff, 0},
+ {EM28XX_R0C_USBSUSP, 0x10, 0xff, 0},
+ {EM28XX_R27_OUTFMT, 0x00, 0xff, 0},
+ {EM28XX_R10_VINMODE, 0x00, 0xff, 0},
+ {EM28XX_R11_VINCTRL, 0x11, 0xff, 0},
+ {EM2874_R50_IR_CONFIG, 0x01, 0xff, 0},
+ {EM2874_R5F_TS_ENABLE, 0x80, 0xff, 0},
+ {EM28XX_R06_I2C_CLK, 0x46, 0xff, 0},
+ };
+ em28xx_write_reg(dev, EM28XX_R06_I2C_CLK, 0x46);
+ /* sleeping ISDB-T */
+ dev->dvb->i2c_client_demod->addr = 0x14;
+ for (i = 0; i < ARRAY_SIZE(regs1); i++)
+ i2c_master_send(dev->dvb->i2c_client_demod, regs1[i].r,
+ regs1[i].len);
+ /* sleeping ISDB-S */
+ dev->dvb->i2c_client_demod->addr = 0x15;
+ for (i = 0; i < ARRAY_SIZE(regs2); i++)
+ i2c_master_send(dev->dvb->i2c_client_demod, regs2[i].r,
+ regs2[i].len);
+ for (i = 0; i < ARRAY_SIZE(gpio); i++) {
+ em28xx_write_reg_bits(dev, gpio[i].reg, gpio[i].val,
+ gpio[i].mask);
+ if (gpio[i].sleep > 0)
+ msleep(gpio[i].sleep);
+ }
+};
+
static struct mt352_config terratec_xs_mt352_cfg = {
.demod_address = (0x1e >> 1),
.no_tuner = 1,
@@ -1762,6 +1826,127 @@ static int em28xx_dvb_init(struct em28xx *dev)
dvb->i2c_client_tuner = client;
}
break;
+
+ case EM28178_BOARD_PLEX_PX_BCUD:
+ {
+ struct i2c_client *client;
+ struct i2c_board_info info;
+ struct tc90522_config tc90522_config;
+ struct qm1d1c0042_config qm1d1c0042_config;
+
+ /* attach demod */
+ memset(&tc90522_config, 0, sizeof(tc90522_config));
+ memset(&info, 0, sizeof(struct i2c_board_info));
+ strlcpy(info.type, "tc90522sat", I2C_NAME_SIZE);
+ info.addr = 0x15;
+ info.platform_data = &tc90522_config;
+ request_module("tc90522");
+ client = i2c_new_device(&dev->i2c_adap[dev->def_i2c_bus], &info);
+ if (client == NULL || client->dev.driver == NULL) {
+ result = -ENODEV;
+ goto out_free;
+ }
+ dvb->i2c_client_demod = client;
+ if (!try_module_get(client->dev.driver->owner)) {
+ i2c_unregister_device(client);
+ result = -ENODEV;
+ goto out_free;
+ }
+
+ /* attach tuner */
+ memset(&qm1d1c0042_config, 0,
+ sizeof(qm1d1c0042_config));
+ qm1d1c0042_config.fe = tc90522_config.fe;
+ qm1d1c0042_config.lpf = 1;
+ memset(&info, 0, sizeof(struct i2c_board_info));
+ strlcpy(info.type, "qm1d1c0042", I2C_NAME_SIZE);
+ info.addr = 0x61;
+ info.platform_data = &qm1d1c0042_config;
+ request_module(info.type);
+ client = i2c_new_device(tc90522_config.tuner_i2c,
+ &info);
+ if (client == NULL || client->dev.driver == NULL) {
+ module_put(dvb->i2c_client_demod->dev.driver->owner);
+ i2c_unregister_device(dvb->i2c_client_demod);
+ result = -ENODEV;
+ goto out_free;
+ }
+ dvb->i2c_client_tuner = client;
+ if (!try_module_get(client->dev.driver->owner)) {
+ i2c_unregister_device(client);
+ module_put(dvb->i2c_client_demod->dev.driver->owner);
+ i2c_unregister_device(dvb->i2c_client_demod);
+ result = -ENODEV;
+ goto out_free;
+ }
+ dvb->fe[0] = tc90522_config.fe;
+ px_bcud_init(dev);
+ }
+ break;
+ case EM28174_BOARD_HAUPPAUGE_WINTV_DUALHD_DVB:
+ {
+ struct i2c_adapter *adapter;
+ struct i2c_client *client;
+ struct i2c_board_info info;
+ struct si2168_config si2168_config;
+ struct si2157_config si2157_config;
+
+ /* attach demod */
+ memset(&si2168_config, 0, sizeof(si2168_config));
+ si2168_config.i2c_adapter = &adapter;
+ si2168_config.fe = &dvb->fe[0];
+ si2168_config.ts_mode = SI2168_TS_SERIAL;
+ memset(&info, 0, sizeof(struct i2c_board_info));
+ strlcpy(info.type, "si2168", I2C_NAME_SIZE);
+ info.addr = 0x64;
+ info.platform_data = &si2168_config;
+ request_module(info.type);
+ client = i2c_new_device(&dev->i2c_adap[dev->def_i2c_bus], &info);
+ if (client == NULL || client->dev.driver == NULL) {
+ result = -ENODEV;
+ goto out_free;
+ }
+
+ if (!try_module_get(client->dev.driver->owner)) {
+ i2c_unregister_device(client);
+ result = -ENODEV;
+ goto out_free;
+ }
+
+ dvb->i2c_client_demod = client;
+
+ /* attach tuner */
+ memset(&si2157_config, 0, sizeof(si2157_config));
+ si2157_config.fe = dvb->fe[0];
+ si2157_config.if_port = 1;
+#ifdef CONFIG_MEDIA_CONTROLLER_DVB
+ si2157_config.mdev = dev->media_dev;
+#endif
+ memset(&info, 0, sizeof(struct i2c_board_info));
+ strlcpy(info.type, "si2157", I2C_NAME_SIZE);
+ info.addr = 0x60;
+ info.platform_data = &si2157_config;
+ request_module(info.type);
+ client = i2c_new_device(adapter, &info);
+ if (client == NULL || client->dev.driver == NULL) {
+ module_put(dvb->i2c_client_demod->dev.driver->owner);
+ i2c_unregister_device(dvb->i2c_client_demod);
+ result = -ENODEV;
+ goto out_free;
+ }
+
+ if (!try_module_get(client->dev.driver->owner)) {
+ i2c_unregister_device(client);
+ module_put(dvb->i2c_client_demod->dev.driver->owner);
+ i2c_unregister_device(dvb->i2c_client_demod);
+ result = -ENODEV;
+ goto out_free;
+ }
+
+ dvb->i2c_client_tuner = client;
+
+ }
+ break;
default:
em28xx_errdev("/2: The frontend of your DVB/ATSC card"
" isn't supported yet\n");
diff --git a/drivers/media/usb/em28xx/em28xx-reg.h b/drivers/media/usb/em28xx/em28xx-reg.h
index 13cbb7f3ea10..afe7a66d7dc8 100644
--- a/drivers/media/usb/em28xx/em28xx-reg.h
+++ b/drivers/media/usb/em28xx/em28xx-reg.h
@@ -193,6 +193,19 @@
/* em2874 registers */
#define EM2874_R50_IR_CONFIG 0x50
#define EM2874_R51_IR 0x51
+#define EM2874_R5D_TS1_PKT_SIZE 0x5d
+#define EM2874_R5E_TS2_PKT_SIZE 0x5e
+ /*
+ * For both TS1 and TS2, in isochronous mode:
+ * 0x01 188 bytes
+ * 0x02 376 bytes
+ * 0x03 564 bytes
+ * 0x04 752 bytes
+ * 0x05 940 bytes
+ * In bulk mode:
+ * 0x01..0xff total packet count in 188-byte units
+ */
+
#define EM2874_R5F_TS_ENABLE 0x5f
/* em2874/174/84, em25xx, em276x/7x/8x GPIO registers */
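
Reading the comment above together with the Hauppauge WinTV-dualHD sequence earlier in this patch: in isochronous mode the register value N selects N x 188 bytes per TS packet, so the 0x05 written there selects 5 x 188 = 940 bytes. A tiny illustration with the em28xx_write_reg() helper already used in this series:

        /* isochronous mode: value N => N * 188 bytes per TS packet */
        em28xx_write_reg(dev, EM2874_R5D_TS1_PKT_SIZE, 0x05); /* 940 bytes */
        em28xx_write_reg(dev, EM2874_R5E_TS2_PKT_SIZE, 0x05); /* 940 bytes */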
diff --git a/drivers/media/usb/em28xx/em28xx.h b/drivers/media/usb/em28xx/em28xx.h
index 267444961775..d148463b22c1 100644
--- a/drivers/media/usb/em28xx/em28xx.h
+++ b/drivers/media/usb/em28xx/em28xx.h
@@ -145,6 +145,8 @@
#define EM2861_BOARD_LEADTEK_VC100 95
#define EM28178_BOARD_TERRATEC_T2_STICK_HD 96
#define EM2884_BOARD_ELGATO_EYETV_HYBRID_2008 97
+#define EM28178_BOARD_PLEX_PX_BCUD 98
+#define EM28174_BOARD_HAUPPAUGE_WINTV_DUALHD_DVB 99
/* Limits minimum and default number of buffers */
#define EM28XX_MIN_BUF 4
@@ -406,6 +408,7 @@ enum em28xx_adecoder {
enum em28xx_led_role {
EM28XX_LED_ANALOG_CAPTURING = 0,
EM28XX_LED_DIGITAL_CAPTURING,
+ EM28XX_LED_DIGITAL_CAPTURING_TS2,
EM28XX_LED_ILLUMINATION,
EM28XX_NUM_LED_ROLES, /* must be the last */
};
diff --git a/drivers/media/usb/go7007/go7007-v4l2.c b/drivers/media/usb/go7007/go7007-v4l2.c
index 358c1c186d03..ea01ee5df60a 100644
--- a/drivers/media/usb/go7007/go7007-v4l2.c
+++ b/drivers/media/usb/go7007/go7007-v4l2.c
@@ -1125,7 +1125,7 @@ int go7007_v4l2_init(struct go7007 *go)
vdev->queue = &go->vidq;
video_set_drvdata(vdev, go);
vdev->v4l2_dev = &go->v4l2_dev;
- if (!v4l2_device_has_op(&go->v4l2_dev, video, querystd))
+ if (!v4l2_device_has_op(&go->v4l2_dev, 0, video, querystd))
v4l2_disable_ioctl(vdev, VIDIOC_QUERYSTD);
if (!(go->board_info->flags & GO7007_BOARD_HAS_TUNER)) {
v4l2_disable_ioctl(vdev, VIDIOC_S_FREQUENCY);
diff --git a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
index 1a093e5953fd..83e9a3eb3859 100644
--- a/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
+++ b/drivers/media/usb/pvrusb2/pvrusb2-hdw.c
@@ -3672,11 +3672,10 @@ static int pvr2_send_request_ex(struct pvr2_hdw *hdw,
hdw->cmd_debug_state = 1;
- if (write_len) {
+ if (write_len && write_data)
hdw->cmd_debug_code = ((unsigned char *)write_data)[0];
- } else {
+ else
hdw->cmd_debug_code = 0;
- }
hdw->cmd_debug_write_len = write_len;
hdw->cmd_debug_read_len = read_len;
@@ -3688,7 +3687,7 @@ static int pvr2_send_request_ex(struct pvr2_hdw *hdw,
setup_timer(&timer, pvr2_ctl_timeout, (unsigned long)hdw);
timer.expires = jiffies + timeout;
- if (write_len) {
+ if (write_len && write_data) {
hdw->cmd_debug_state = 2;
/* Transfer write data to internal buffer */
for (idx = 0; idx < write_len; idx++) {
@@ -3795,7 +3794,7 @@ static int pvr2_send_request_ex(struct pvr2_hdw *hdw,
goto done;
}
}
- if (read_len) {
+ if (read_len && read_data) {
/* Validate results of read request */
if ((hdw->ctl_read_urb->status != 0) &&
(hdw->ctl_read_urb->status != -ENOENT) &&
diff --git a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
index 019644ff627d..bacecbd68a6d 100644
--- a/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
+++ b/drivers/media/v4l2-core/v4l2-compat-ioctl32.c
@@ -280,7 +280,8 @@ static int put_v4l2_format32(struct v4l2_format *kp, struct v4l2_format32 __user
static int put_v4l2_create32(struct v4l2_create_buffers *kp, struct v4l2_create_buffers32 __user *up)
{
if (!access_ok(VERIFY_WRITE, up, sizeof(struct v4l2_create_buffers32)) ||
- copy_to_user(up, kp, offsetof(struct v4l2_create_buffers32, format)))
+ copy_to_user(up, kp, offsetof(struct v4l2_create_buffers32, format)) ||
+ copy_to_user(up->reserved, kp->reserved, sizeof(kp->reserved)))
return -EFAULT;
return __put_v4l2_format32(&kp->format, &up->format);
}
diff --git a/drivers/media/v4l2-core/v4l2-dev.c b/drivers/media/v4l2-core/v4l2-dev.c
index d8e5994cccf1..70b559d7ca80 100644
--- a/drivers/media/v4l2-core/v4l2-dev.c
+++ b/drivers/media/v4l2-core/v4l2-dev.c
@@ -735,6 +735,7 @@ static int video_register_media_controller(struct video_device *vdev, int type)
if (!vdev->v4l2_dev->mdev)
return 0;
+ vdev->entity.obj_type = MEDIA_ENTITY_TYPE_VIDEO_DEVICE;
vdev->entity.function = MEDIA_ENT_F_UNKNOWN;
switch (type) {
diff --git a/drivers/media/v4l2-core/v4l2-ioctl.c b/drivers/media/v4l2-core/v4l2-ioctl.c
index 170dd68d27f4..28e5be2c2eef 100644
--- a/drivers/media/v4l2-core/v4l2-ioctl.c
+++ b/drivers/media/v4l2-core/v4l2-ioctl.c
@@ -1020,9 +1020,12 @@ static int v4l_querycap(const struct v4l2_ioctl_ops *ops,
struct file *file, void *fh, void *arg)
{
struct v4l2_capability *cap = (struct v4l2_capability *)arg;
+ struct video_device *vfd = video_devdata(file);
int ret;
cap->version = LINUX_VERSION_CODE;
+ cap->device_caps = vfd->device_caps;
+ cap->capabilities = vfd->device_caps | V4L2_CAP_DEVICE_CAPS;
ret = ops->vidioc_querycap(file, fh, cap);
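
With this change the V4L2 core fills in device_caps and sets V4L2_CAP_DEVICE_CAPS before calling the driver's vidioc_querycap, so the split between per-node and whole-device capabilities is always reported. A small stand-alone user-space check (the device path is an assumption):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
        struct v4l2_capability cap = {0};
        int fd = open("/dev/video0", O_RDWR);

        if (fd < 0 || ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {
                perror("VIDIOC_QUERYCAP");
                return 1;
        }
        /* capabilities: the whole device; device_caps: this video node */
        printf("driver=%s caps=0x%08x device_caps=0x%08x\n",
               (char *)cap.driver, cap.capabilities, cap.device_caps);
        return 0;
}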
@@ -2157,40 +2160,56 @@ static int v4l_cropcap(const struct v4l2_ioctl_ops *ops,
struct file *file, void *fh, void *arg)
{
struct v4l2_cropcap *p = arg;
+ struct v4l2_selection s = { .type = p->type };
+ int ret = 0;
- if (ops->vidioc_g_selection) {
- struct v4l2_selection s = { .type = p->type };
- int ret;
+ /* setting trivial pixelaspect */
+ p->pixelaspect.numerator = 1;
+ p->pixelaspect.denominator = 1;
- /* obtaining bounds */
- if (V4L2_TYPE_IS_OUTPUT(p->type))
- s.target = V4L2_SEL_TGT_COMPOSE_BOUNDS;
- else
- s.target = V4L2_SEL_TGT_CROP_BOUNDS;
+ /*
+ * The determine_valid_ioctls() call already should ensure
+ * that this can never happen, but just in case...
+ */
+ if (WARN_ON(!ops->vidioc_cropcap && !ops->vidioc_g_selection))
+ return -ENOTTY;
- ret = ops->vidioc_g_selection(file, fh, &s);
- if (ret)
- return ret;
- p->bounds = s.r;
+ if (ops->vidioc_cropcap)
+ ret = ops->vidioc_cropcap(file, fh, p);
- /* obtaining defrect */
- if (V4L2_TYPE_IS_OUTPUT(p->type))
- s.target = V4L2_SEL_TGT_COMPOSE_DEFAULT;
- else
- s.target = V4L2_SEL_TGT_CROP_DEFAULT;
+ if (!ops->vidioc_g_selection)
+ return ret;
- ret = ops->vidioc_g_selection(file, fh, &s);
- if (ret)
- return ret;
- p->defrect = s.r;
- }
+ /*
+ * Ignore ENOTTY or ENOIOCTLCMD error returns, just use the
+ * square pixel aspect ratio in that case.
+ */
+ if (ret && ret != -ENOTTY && ret != -ENOIOCTLCMD)
+ return ret;
- /* setting trivial pixelaspect */
- p->pixelaspect.numerator = 1;
- p->pixelaspect.denominator = 1;
+ /* Use g_selection() to fill in the bounds and defrect rectangles */
- if (ops->vidioc_cropcap)
- return ops->vidioc_cropcap(file, fh, p);
+ /* obtaining bounds */
+ if (V4L2_TYPE_IS_OUTPUT(p->type))
+ s.target = V4L2_SEL_TGT_COMPOSE_BOUNDS;
+ else
+ s.target = V4L2_SEL_TGT_CROP_BOUNDS;
+
+ ret = ops->vidioc_g_selection(file, fh, &s);
+ if (ret)
+ return ret;
+ p->bounds = s.r;
+
+ /* obtaining defrect */
+ if (V4L2_TYPE_IS_OUTPUT(p->type))
+ s.target = V4L2_SEL_TGT_COMPOSE_DEFAULT;
+ else
+ s.target = V4L2_SEL_TGT_CROP_DEFAULT;
+
+ ret = ops->vidioc_g_selection(file, fh, &s);
+ if (ret)
+ return ret;
+ p->defrect = s.r;
return 0;
}
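
After this rework, a driver that implements only vidioc_g_selection still answers VIDIOC_CROPCAP: the core supplies a 1:1 pixel aspect and fills bounds/defrect from the selection targets. A rough sketch of the driver-side callback the helper now leans on (the rectangle sizes are placeholders):

static int my_g_selection(struct file *file, void *fh,
                          struct v4l2_selection *s)
{
        switch (s->target) {
        case V4L2_SEL_TGT_CROP_BOUNDS:
        case V4L2_SEL_TGT_CROP_DEFAULT:
                s->r.left = 0;
                s->r.top = 0;
                s->r.width = 720;       /* placeholder */
                s->r.height = 576;      /* placeholder */
                return 0;
        default:
                return -EINVAL;
        }
}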
diff --git a/drivers/media/v4l2-core/v4l2-mc.c b/drivers/media/v4l2-core/v4l2-mc.c
index 2228cd3a846e..ca94bded3386 100644
--- a/drivers/media/v4l2-core/v4l2-mc.c
+++ b/drivers/media/v4l2-core/v4l2-mc.c
@@ -263,7 +263,7 @@ static int pipeline_pm_use_count(struct media_entity *entity,
media_entity_graph_walk_start(graph, entity);
while ((entity = media_entity_graph_walk_next(graph))) {
- if (is_media_entity_v4l2_io(entity))
+ if (is_media_entity_v4l2_video_device(entity))
use += entity->use_count;
}
diff --git a/drivers/media/v4l2-core/v4l2-subdev.c b/drivers/media/v4l2-core/v4l2-subdev.c
index d63083803144..953eab08e420 100644
--- a/drivers/media/v4l2-core/v4l2-subdev.c
+++ b/drivers/media/v4l2-core/v4l2-subdev.c
@@ -35,9 +35,11 @@
static int subdev_fh_init(struct v4l2_subdev_fh *fh, struct v4l2_subdev *sd)
{
#if defined(CONFIG_VIDEO_V4L2_SUBDEV_API)
- fh->pad = kzalloc(sizeof(*fh->pad) * sd->entity.num_pads, GFP_KERNEL);
- if (fh->pad == NULL)
- return -ENOMEM;
+ if (sd->entity.num_pads) {
+ fh->pad = v4l2_subdev_alloc_pad_config(sd);
+ if (fh->pad == NULL)
+ return -ENOMEM;
+ }
#endif
return 0;
}
@@ -45,7 +47,7 @@ static int subdev_fh_init(struct v4l2_subdev_fh *fh, struct v4l2_subdev *sd)
static void subdev_fh_free(struct v4l2_subdev_fh *fh)
{
#if defined(CONFIG_VIDEO_V4L2_SUBDEV_API)
- kfree(fh->pad);
+ v4l2_subdev_free_pad_config(fh->pad);
fh->pad = NULL;
#endif
}
@@ -508,7 +510,7 @@ int v4l2_subdev_link_validate_default(struct v4l2_subdev *sd,
if (source_fmt->format.width != sink_fmt->format.width
|| source_fmt->format.height != sink_fmt->format.height
|| source_fmt->format.code != sink_fmt->format.code)
- return -EINVAL;
+ return -EPIPE;
/* The field order must match, or the sink field order must be NONE
* to support interlaced hardware connected to bridges that support
@@ -516,7 +518,7 @@ int v4l2_subdev_link_validate_default(struct v4l2_subdev *sd,
*/
if (source_fmt->format.field != sink_fmt->format.field &&
sink_fmt->format.field != V4L2_FIELD_NONE)
- return -EINVAL;
+ return -EPIPE;
return 0;
}
@@ -569,6 +571,35 @@ int v4l2_subdev_link_validate(struct media_link *link)
sink, link, &source_fmt, &sink_fmt);
}
EXPORT_SYMBOL_GPL(v4l2_subdev_link_validate);
+
+struct v4l2_subdev_pad_config *
+v4l2_subdev_alloc_pad_config(struct v4l2_subdev *sd)
+{
+ struct v4l2_subdev_pad_config *cfg;
+ int ret;
+
+ if (!sd->entity.num_pads)
+ return NULL;
+
+ cfg = kcalloc(sd->entity.num_pads, sizeof(*cfg), GFP_KERNEL);
+ if (!cfg)
+ return NULL;
+
+ ret = v4l2_subdev_call(sd, pad, init_cfg, cfg);
+ if (ret < 0 && ret != -ENOIOCTLCMD) {
+ kfree(cfg);
+ return NULL;
+ }
+
+ return cfg;
+}
+EXPORT_SYMBOL_GPL(v4l2_subdev_alloc_pad_config);
+
+void v4l2_subdev_free_pad_config(struct v4l2_subdev_pad_config *cfg)
+{
+ kfree(cfg);
+}
+EXPORT_SYMBOL_GPL(v4l2_subdev_free_pad_config);
#endif /* CONFIG_MEDIA_CONTROLLER */
void v4l2_subdev_init(struct v4l2_subdev *sd, const struct v4l2_subdev_ops *ops)
@@ -584,6 +615,7 @@ void v4l2_subdev_init(struct v4l2_subdev *sd, const struct v4l2_subdev_ops *ops)
sd->host_priv = NULL;
#if defined(CONFIG_MEDIA_CONTROLLER)
sd->entity.name = sd->name;
+ sd->entity.obj_type = MEDIA_ENTITY_TYPE_V4L2_SUBDEV;
sd->entity.function = MEDIA_ENT_F_V4L2_SUBDEV_UNKNOWN;
#endif
}
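
The two helpers exported above replace the open-coded kzalloc of the per-file pad array and also give callers a way to obtain a scratch pad configuration (with the subdev's init_cfg applied) for TRY-format negotiation. A rough sketch of that second use, with placeholder format values:

#include <media/v4l2-subdev.h>

static int try_720p(struct v4l2_subdev *sd)
{
        struct v4l2_subdev_pad_config *cfg;
        struct v4l2_subdev_format fmt = {
                .which = V4L2_SUBDEV_FORMAT_TRY,
                .pad = 0,
        };
        int ret;

        cfg = v4l2_subdev_alloc_pad_config(sd);
        if (!cfg)
                return -ENOMEM;

        fmt.format.width = 1280;        /* placeholder */
        fmt.format.height = 720;        /* placeholder */
        ret = v4l2_subdev_call(sd, pad, set_fmt, cfg, &fmt);

        v4l2_subdev_free_pad_config(cfg);
        return ret;
}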
diff --git a/drivers/media/v4l2-core/videobuf2-v4l2.c b/drivers/media/v4l2-core/videobuf2-v4l2.c
index 7f366f1b0377..0b1b8c7b6ce5 100644
--- a/drivers/media/v4l2-core/videobuf2-v4l2.c
+++ b/drivers/media/v4l2-core/videobuf2-v4l2.c
@@ -74,11 +74,6 @@ static int __verify_planes_array(struct vb2_buffer *vb, const struct v4l2_buffer
return 0;
}
-static int __verify_planes_array_core(struct vb2_buffer *vb, const void *pb)
-{
- return __verify_planes_array(vb, pb);
-}
-
/**
* __verify_length() - Verify that the bytesused value for each plane fits in
* the plane length and that the data offset doesn't exceed the bytesused value.
@@ -442,7 +437,6 @@ static int __fill_vb2_buffer(struct vb2_buffer *vb,
}
static const struct vb2_buf_ops v4l2_buf_ops = {
- .verify_planes_array = __verify_planes_array_core,
.fill_user_buffer = __fill_v4l2_buffer,
.fill_vb2_buffer = __fill_vb2_buffer,
.copy_timestamp = __copy_timestamp,
diff --git a/drivers/memory/Kconfig b/drivers/memory/Kconfig
index 51d5cd20c26a..81ddb17575a9 100644
--- a/drivers/memory/Kconfig
+++ b/drivers/memory/Kconfig
@@ -51,6 +51,7 @@ config TI_EMIF
config OMAP_GPMC
bool
+ select GPIOLIB
help
This driver is for the General Purpose Memory Controller (GPMC)
present on Texas Instruments SoCs (e.g. OMAP2+). GPMC allows
@@ -122,6 +123,7 @@ config MTK_SMI
mainly help enable/disable iommu and control the power domain and
clocks for each local arbiter.
+source "drivers/memory/samsung/Kconfig"
source "drivers/memory/tegra/Kconfig"
endif
diff --git a/drivers/memory/Makefile b/drivers/memory/Makefile
index 890bdf402449..cb0b7a1df11a 100644
--- a/drivers/memory/Makefile
+++ b/drivers/memory/Makefile
@@ -17,4 +17,5 @@ obj-$(CONFIG_TEGRA20_MC) += tegra20-mc.o
obj-$(CONFIG_JZ4780_NEMC) += jz4780-nemc.o
obj-$(CONFIG_MTK_SMI) += mtk-smi.o
+obj-$(CONFIG_SAMSUNG_MC) += samsung/
obj-$(CONFIG_TEGRA_MC) += tegra/
diff --git a/drivers/memory/fsl_ifc.c b/drivers/memory/fsl_ifc.c
index 2a691da8c1c7..904b4af5f142 100644
--- a/drivers/memory/fsl_ifc.c
+++ b/drivers/memory/fsl_ifc.c
@@ -59,11 +59,11 @@ int fsl_ifc_find(phys_addr_t addr_base)
{
int i = 0;
- if (!fsl_ifc_ctrl_dev || !fsl_ifc_ctrl_dev->regs)
+ if (!fsl_ifc_ctrl_dev || !fsl_ifc_ctrl_dev->gregs)
return -ENODEV;
for (i = 0; i < fsl_ifc_ctrl_dev->banks; i++) {
- u32 cspr = ifc_in32(&fsl_ifc_ctrl_dev->regs->cspr_cs[i].cspr);
+ u32 cspr = ifc_in32(&fsl_ifc_ctrl_dev->gregs->cspr_cs[i].cspr);
if (cspr & CSPR_V && (cspr & CSPR_BA) ==
convert_ifc_address(addr_base))
return i;
@@ -75,7 +75,7 @@ EXPORT_SYMBOL(fsl_ifc_find);
static int fsl_ifc_ctrl_init(struct fsl_ifc_ctrl *ctrl)
{
- struct fsl_ifc_regs __iomem *ifc = ctrl->regs;
+ struct fsl_ifc_global __iomem *ifc = ctrl->gregs;
/*
* Clear all the common status and event registers
@@ -104,7 +104,7 @@ static int fsl_ifc_ctrl_remove(struct platform_device *dev)
irq_dispose_mapping(ctrl->nand_irq);
irq_dispose_mapping(ctrl->irq);
- iounmap(ctrl->regs);
+ iounmap(ctrl->gregs);
dev_set_drvdata(&dev->dev, NULL);
kfree(ctrl);
@@ -122,7 +122,7 @@ static DEFINE_SPINLOCK(nand_irq_lock);
static u32 check_nand_stat(struct fsl_ifc_ctrl *ctrl)
{
- struct fsl_ifc_regs __iomem *ifc = ctrl->regs;
+ struct fsl_ifc_runtime __iomem *ifc = ctrl->rregs;
unsigned long flags;
u32 stat;
@@ -157,7 +157,7 @@ static irqreturn_t fsl_ifc_nand_irq(int irqno, void *data)
static irqreturn_t fsl_ifc_ctrl_irq(int irqno, void *data)
{
struct fsl_ifc_ctrl *ctrl = data;
- struct fsl_ifc_regs __iomem *ifc = ctrl->regs;
+ struct fsl_ifc_global __iomem *ifc = ctrl->gregs;
u32 err_axiid, err_srcid, status, cs_err, err_addr;
irqreturn_t ret = IRQ_NONE;
@@ -215,6 +215,7 @@ static int fsl_ifc_ctrl_probe(struct platform_device *dev)
{
int ret = 0;
int version, banks;
+ void __iomem *addr;
dev_info(&dev->dev, "Freescale Integrated Flash Controller\n");
@@ -225,22 +226,13 @@ static int fsl_ifc_ctrl_probe(struct platform_device *dev)
dev_set_drvdata(&dev->dev, fsl_ifc_ctrl_dev);
/* IOMAP the entire IFC region */
- fsl_ifc_ctrl_dev->regs = of_iomap(dev->dev.of_node, 0);
- if (!fsl_ifc_ctrl_dev->regs) {
+ fsl_ifc_ctrl_dev->gregs = of_iomap(dev->dev.of_node, 0);
+ if (!fsl_ifc_ctrl_dev->gregs) {
dev_err(&dev->dev, "failed to get memory region\n");
ret = -ENODEV;
goto err;
}
- version = ifc_in32(&fsl_ifc_ctrl_dev->regs->ifc_rev) &
- FSL_IFC_VERSION_MASK;
- banks = (version == FSL_IFC_VERSION_1_0_0) ? 4 : 8;
- dev_info(&dev->dev, "IFC version %d.%d, %d banks\n",
- version >> 24, (version >> 16) & 0xf, banks);
-
- fsl_ifc_ctrl_dev->version = version;
- fsl_ifc_ctrl_dev->banks = banks;
-
if (of_property_read_bool(dev->dev.of_node, "little-endian")) {
fsl_ifc_ctrl_dev->little_endian = true;
dev_dbg(&dev->dev, "IFC REGISTERS are LITTLE endian\n");
@@ -249,8 +241,9 @@ static int fsl_ifc_ctrl_probe(struct platform_device *dev)
dev_dbg(&dev->dev, "IFC REGISTERS are BIG endian\n");
}
- version = ioread32be(&fsl_ifc_ctrl_dev->regs->ifc_rev) &
+ version = ifc_in32(&fsl_ifc_ctrl_dev->gregs->ifc_rev) &
FSL_IFC_VERSION_MASK;
+
banks = (version == FSL_IFC_VERSION_1_0_0) ? 4 : 8;
dev_info(&dev->dev, "IFC version %d.%d, %d banks\n",
version >> 24, (version >> 16) & 0xf, banks);
@@ -258,6 +251,13 @@ static int fsl_ifc_ctrl_probe(struct platform_device *dev)
fsl_ifc_ctrl_dev->version = version;
fsl_ifc_ctrl_dev->banks = banks;
+ addr = fsl_ifc_ctrl_dev->gregs;
+ if (version >= FSL_IFC_VERSION_2_0_0)
+ addr += PGOFFSET_64K;
+ else
+ addr += PGOFFSET_4K;
+ fsl_ifc_ctrl_dev->rregs = addr;
+
/* get the Controller level irq */
fsl_ifc_ctrl_dev->irq = irq_of_parse_and_map(dev->dev.of_node, 0);
if (fsl_ifc_ctrl_dev->irq == 0) {
diff --git a/drivers/memory/of_memory.c b/drivers/memory/of_memory.c
index 60074351f17e..9daf94bb8f27 100644
--- a/drivers/memory/of_memory.c
+++ b/drivers/memory/of_memory.c
@@ -109,7 +109,7 @@ const struct lpddr2_timings *of_get_ddr_timings(struct device_node *np_ddr,
struct lpddr2_timings *timings = NULL;
u32 arr_sz = 0, i = 0;
struct device_node *np_tim;
- char *tim_compat;
+ char *tim_compat = NULL;
switch (device_type) {
case DDR_TYPE_LPDDR2_S2:
diff --git a/drivers/memory/omap-gpmc.c b/drivers/memory/omap-gpmc.c
index 21825ddce4a3..af4884ba6b7c 100644
--- a/drivers/memory/omap-gpmc.c
+++ b/drivers/memory/omap-gpmc.c
@@ -21,15 +21,15 @@
#include <linux/spinlock.h>
#include <linux/io.h>
#include <linux/module.h>
+#include <linux/gpio/driver.h>
#include <linux/interrupt.h>
+#include <linux/irqdomain.h>
#include <linux/platform_device.h>
#include <linux/of.h>
#include <linux/of_address.h>
-#include <linux/of_mtd.h>
#include <linux/of_device.h>
#include <linux/of_platform.h>
#include <linux/omap-gpmc.h>
-#include <linux/mtd/nand.h>
#include <linux/pm_runtime.h>
#include <linux/platform_data/mtd-nand-omap2.h>
@@ -81,6 +81,8 @@
#define GPMC_CONFIG_LIMITEDADDRESS BIT(1)
+#define GPMC_STATUS_EMPTYWRITEBUFFERSTATUS BIT(0)
+
#define GPMC_CONFIG2_CSEXTRADELAY BIT(7)
#define GPMC_CONFIG3_ADVEXTRADELAY BIT(7)
#define GPMC_CONFIG4_OEEXTRADELAY BIT(7)
@@ -92,6 +94,14 @@
#define GPMC_CS_SIZE 0x30
#define GPMC_BCH_SIZE 0x10
+/*
+ * The first 1MB of GPMC address space is typically mapped to
+ * the internal ROM. Never allocate the first page, to
+ * facilitate bug detection; even if we didn't boot from ROM.
+ * As GPMC minimum partition size is 16MB we can only start from
+ * there.
+ */
+#define GPMC_MEM_START 0x1000000
#define GPMC_MEM_END 0x3FFFFFFF
#define GPMC_CHUNK_SHIFT 24 /* 16 MB */
@@ -125,7 +135,6 @@
#define GPMC_CONFIG_RDY_BSY 0x00000001
#define GPMC_CONFIG_DEV_SIZE 0x00000002
#define GPMC_CONFIG_DEV_TYPE 0x00000003
-#define GPMC_SET_IRQ_STATUS 0x00000004
#define GPMC_CONFIG1_WRAPBURST_SUPP (1 << 31)
#define GPMC_CONFIG1_READMULTIPLE_SUPP (1 << 30)
@@ -174,16 +183,12 @@
#define GPMC_CONFIG_WRITEPROTECT 0x00000010
#define WR_RD_PIN_MONITORING 0x00600000
-#define GPMC_ENABLE_IRQ 0x0000000d
-
/* ECC commands */
#define GPMC_ECC_READ 0 /* Reset Hardware ECC for read */
#define GPMC_ECC_WRITE 1 /* Reset Hardware ECC for write */
#define GPMC_ECC_READSYN 2 /* Reset before syndrom is read back */
-/* XXX: Only NAND irq has been considered,currently these are the only ones used
- */
-#define GPMC_NR_IRQ 2
+#define GPMC_NR_NAND_IRQS 2 /* number of NAND specific IRQs */
enum gpmc_clk_domain {
GPMC_CD_FCLK,
@@ -199,11 +204,6 @@ struct gpmc_cs_data {
struct resource mem;
};
-struct gpmc_client_irq {
- unsigned irq;
- u32 bitmask;
-};
-
/* Structure to save gpmc cs context */
struct gpmc_cs_config {
u32 config1;
@@ -231,9 +231,15 @@ struct omap3_gpmc_regs {
struct gpmc_cs_config cs_context[GPMC_CS_NUM];
};
-static struct gpmc_client_irq gpmc_client_irq[GPMC_NR_IRQ];
-static struct irq_chip gpmc_irq_chip;
-static int gpmc_irq_start;
+struct gpmc_device {
+ struct device *dev;
+ int irq;
+ struct irq_chip irq_chip;
+ struct gpio_chip gpio_chip;
+ int nirqs;
+};
+
+static struct irq_domain *gpmc_irq_domain;
static struct resource gpmc_mem_root;
static struct gpmc_cs_data gpmc_cs[GPMC_CS_NUM];
@@ -241,8 +247,6 @@ static DEFINE_SPINLOCK(gpmc_mem_lock);
/* Define chip-selects as reserved by default until probe completes */
static unsigned int gpmc_cs_num = GPMC_CS_NUM;
static unsigned int gpmc_nr_waitpins;
-static struct device *gpmc_dev;
-static int gpmc_irq;
static resource_size_t phys_base, mem_size;
static unsigned gpmc_capability;
static void __iomem *gpmc_base;
@@ -1054,14 +1058,6 @@ int gpmc_configure(int cmd, int wval)
u32 regval;
switch (cmd) {
- case GPMC_ENABLE_IRQ:
- gpmc_write_reg(GPMC_IRQENABLE, wval);
- break;
-
- case GPMC_SET_IRQ_STATUS:
- gpmc_write_reg(GPMC_IRQSTATUS, wval);
- break;
-
case GPMC_CONFIG_WP:
regval = gpmc_read_reg(GPMC_CONFIG);
if (wval)
@@ -1084,7 +1080,7 @@ void gpmc_update_nand_reg(struct gpmc_nand_regs *reg, int cs)
{
int i;
- reg->gpmc_status = gpmc_base + GPMC_STATUS;
+ reg->gpmc_status = NULL; /* deprecated */
reg->gpmc_nand_command = gpmc_base + GPMC_CS0_OFFSET +
GPMC_CS_NAND_COMMAND + GPMC_CS_SIZE * cs;
reg->gpmc_nand_address = gpmc_base + GPMC_CS0_OFFSET +
@@ -1118,87 +1114,201 @@ void gpmc_update_nand_reg(struct gpmc_nand_regs *reg, int cs)
}
}
-int gpmc_get_client_irq(unsigned irq_config)
+static bool gpmc_nand_writebuffer_empty(void)
{
- int i;
+ if (gpmc_read_reg(GPMC_STATUS) & GPMC_STATUS_EMPTYWRITEBUFFERSTATUS)
+ return true;
- if (hweight32(irq_config) > 1)
+ return false;
+}
+
+static struct gpmc_nand_ops nand_ops = {
+ .nand_writebuffer_empty = gpmc_nand_writebuffer_empty,
+};
+
+/**
+ * gpmc_omap_get_nand_ops - Get the GPMC NAND interface
+ * @regs: the GPMC NAND register map exclusive for NAND use.
+ * @cs: GPMC chip select number on which the NAND sits. The
+ * register map returned will be specific to this chip select.
+ *
+ * Returns NULL on error e.g. invalid cs.
+ */
+struct gpmc_nand_ops *gpmc_omap_get_nand_ops(struct gpmc_nand_regs *reg, int cs)
+{
+ if (cs >= gpmc_cs_num)
+ return NULL;
+
+ gpmc_update_nand_reg(reg, cs);
+
+ return &nand_ops;
+}
+EXPORT_SYMBOL_GPL(gpmc_omap_get_nand_ops);
+
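
gpmc_omap_get_nand_ops() is the interface a NAND driver is now expected to use instead of reaching into GPMC registers directly. A rough consumer-side sketch, assuming the declarations live in include/linux/omap-gpmc.h as in mainline and that the caller owns the gpmc_nand_regs storage:

#include <linux/omap-gpmc.h>

static int my_nand_bind(int cs)
{
        static struct gpmc_nand_regs regs;
        struct gpmc_nand_ops *ops;

        ops = gpmc_omap_get_nand_ops(&regs, cs);
        if (!ops)
                return -ENODEV;         /* invalid chip-select */

        /* wait for the GPMC write buffer to drain before the next command */
        while (!ops->nand_writebuffer_empty())
                cpu_relax();

        return 0;
}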
+int gpmc_get_client_irq(unsigned irq_config)
+{
+ if (!gpmc_irq_domain) {
+ pr_warn("%s called before GPMC IRQ domain available\n",
+ __func__);
return 0;
+ }
- for (i = 0; i < GPMC_NR_IRQ; i++)
- if (gpmc_client_irq[i].bitmask & irq_config)
- return gpmc_client_irq[i].irq;
+ /* we restrict this to NAND IRQs only */
+ if (irq_config >= GPMC_NR_NAND_IRQS)
+ return 0;
- return 0;
+ return irq_create_mapping(gpmc_irq_domain, irq_config);
}
-static int gpmc_irq_endis(unsigned irq, bool endis)
+static int gpmc_irq_endis(unsigned long hwirq, bool endis)
{
- int i;
u32 regval;
- for (i = 0; i < GPMC_NR_IRQ; i++)
- if (irq == gpmc_client_irq[i].irq) {
- regval = gpmc_read_reg(GPMC_IRQENABLE);
- if (endis)
- regval |= gpmc_client_irq[i].bitmask;
- else
- regval &= ~gpmc_client_irq[i].bitmask;
- gpmc_write_reg(GPMC_IRQENABLE, regval);
- break;
- }
+ /* bits GPMC_NR_NAND_IRQS to 8 are reserved */
+ if (hwirq >= GPMC_NR_NAND_IRQS)
+ hwirq += 8 - GPMC_NR_NAND_IRQS;
+
+ regval = gpmc_read_reg(GPMC_IRQENABLE);
+ if (endis)
+ regval |= BIT(hwirq);
+ else
+ regval &= ~BIT(hwirq);
+ gpmc_write_reg(GPMC_IRQENABLE, regval);
return 0;
}
static void gpmc_irq_disable(struct irq_data *p)
{
- gpmc_irq_endis(p->irq, false);
+ gpmc_irq_endis(p->hwirq, false);
}
static void gpmc_irq_enable(struct irq_data *p)
{
- gpmc_irq_endis(p->irq, true);
+ gpmc_irq_endis(p->hwirq, true);
}
-static void gpmc_irq_noop(struct irq_data *data) { }
+static void gpmc_irq_mask(struct irq_data *d)
+{
+ gpmc_irq_endis(d->hwirq, false);
+}
-static unsigned int gpmc_irq_noop_ret(struct irq_data *data) { return 0; }
+static void gpmc_irq_unmask(struct irq_data *d)
+{
+ gpmc_irq_endis(d->hwirq, true);
+}
-static int gpmc_setup_irq(void)
+static void gpmc_irq_edge_config(unsigned long hwirq, bool rising_edge)
{
- int i;
u32 regval;
- if (!gpmc_irq)
+ /* NAND IRQs polarity is not configurable */
+ if (hwirq < GPMC_NR_NAND_IRQS)
+ return;
+
+ /* WAITPIN starts at BIT 8 */
+ hwirq += 8 - GPMC_NR_NAND_IRQS;
+
+ regval = gpmc_read_reg(GPMC_CONFIG);
+ if (rising_edge)
+ regval &= ~BIT(hwirq);
+ else
+ regval |= BIT(hwirq);
+
+ gpmc_write_reg(GPMC_CONFIG, regval);
+}
+
+static void gpmc_irq_ack(struct irq_data *d)
+{
+ unsigned int hwirq = d->hwirq;
+
+ /* skip reserved bits */
+ if (hwirq >= GPMC_NR_NAND_IRQS)
+ hwirq += 8 - GPMC_NR_NAND_IRQS;
+
+ /* Setting bit to 1 clears (or Acks) the interrupt */
+ gpmc_write_reg(GPMC_IRQSTATUS, BIT(hwirq));
+}
+
+static int gpmc_irq_set_type(struct irq_data *d, unsigned int trigger)
+{
+ /* can't set type for NAND IRQs */
+ if (d->hwirq < GPMC_NR_NAND_IRQS)
return -EINVAL;
- gpmc_irq_start = irq_alloc_descs(-1, 0, GPMC_NR_IRQ, 0);
- if (gpmc_irq_start < 0) {
- pr_err("irq_alloc_descs failed\n");
- return gpmc_irq_start;
+ /* We can support either rising or falling edge at a time */
+ if (trigger == IRQ_TYPE_EDGE_FALLING)
+ gpmc_irq_edge_config(d->hwirq, false);
+ else if (trigger == IRQ_TYPE_EDGE_RISING)
+ gpmc_irq_edge_config(d->hwirq, true);
+ else
+ return -EINVAL;
+
+ return 0;
+}
+
+static int gpmc_irq_map(struct irq_domain *d, unsigned int virq,
+ irq_hw_number_t hw)
+{
+ struct gpmc_device *gpmc = d->host_data;
+
+ irq_set_chip_data(virq, gpmc);
+ if (hw < GPMC_NR_NAND_IRQS) {
+ irq_modify_status(virq, IRQ_NOREQUEST, IRQ_NOAUTOEN);
+ irq_set_chip_and_handler(virq, &gpmc->irq_chip,
+ handle_simple_irq);
+ } else {
+ irq_set_chip_and_handler(virq, &gpmc->irq_chip,
+ handle_edge_irq);
}
- gpmc_irq_chip.name = "gpmc";
- gpmc_irq_chip.irq_startup = gpmc_irq_noop_ret;
- gpmc_irq_chip.irq_enable = gpmc_irq_enable;
- gpmc_irq_chip.irq_disable = gpmc_irq_disable;
- gpmc_irq_chip.irq_shutdown = gpmc_irq_noop;
- gpmc_irq_chip.irq_ack = gpmc_irq_noop;
- gpmc_irq_chip.irq_mask = gpmc_irq_noop;
- gpmc_irq_chip.irq_unmask = gpmc_irq_noop;
-
- gpmc_client_irq[0].bitmask = GPMC_IRQ_FIFOEVENTENABLE;
- gpmc_client_irq[1].bitmask = GPMC_IRQ_COUNT_EVENT;
-
- for (i = 0; i < GPMC_NR_IRQ; i++) {
- gpmc_client_irq[i].irq = gpmc_irq_start + i;
- irq_set_chip_and_handler(gpmc_client_irq[i].irq,
- &gpmc_irq_chip, handle_simple_irq);
- irq_modify_status(gpmc_client_irq[i].irq, IRQ_NOREQUEST,
- IRQ_NOAUTOEN);
+ return 0;
+}
+
+static const struct irq_domain_ops gpmc_irq_domain_ops = {
+ .map = gpmc_irq_map,
+ .xlate = irq_domain_xlate_twocell,
+};
+
+static irqreturn_t gpmc_handle_irq(int irq, void *data)
+{
+ int hwirq, virq;
+ u32 regval, regvalx;
+ struct gpmc_device *gpmc = data;
+
+ regval = gpmc_read_reg(GPMC_IRQSTATUS);
+ regvalx = regval;
+
+ if (!regval)
+ return IRQ_NONE;
+
+ for (hwirq = 0; hwirq < gpmc->nirqs; hwirq++) {
+ /* skip reserved status bits */
+ if (hwirq == GPMC_NR_NAND_IRQS)
+ regvalx >>= 8 - GPMC_NR_NAND_IRQS;
+
+ if (regvalx & BIT(hwirq)) {
+ virq = irq_find_mapping(gpmc_irq_domain, hwirq);
+ if (!virq) {
+ dev_warn(gpmc->dev,
+ "spurious irq detected hwirq %d, virq %d\n",
+ hwirq, virq);
+ }
+
+ generic_handle_irq(virq);
+ }
}
+ gpmc_write_reg(GPMC_IRQSTATUS, regval);
+
+ return IRQ_HANDLED;
+}
+
+static int gpmc_setup_irq(struct gpmc_device *gpmc)
+{
+ u32 regval;
+ int rc;
+
/* Disable interrupts */
gpmc_write_reg(GPMC_IRQENABLE, 0);
@@ -1206,22 +1316,45 @@ static int gpmc_setup_irq(void)
regval = gpmc_read_reg(GPMC_IRQSTATUS);
gpmc_write_reg(GPMC_IRQSTATUS, regval);
- return request_irq(gpmc_irq, gpmc_handle_irq, 0, "gpmc", NULL);
+ gpmc->irq_chip.name = "gpmc";
+ gpmc->irq_chip.irq_enable = gpmc_irq_enable;
+ gpmc->irq_chip.irq_disable = gpmc_irq_disable;
+ gpmc->irq_chip.irq_ack = gpmc_irq_ack;
+ gpmc->irq_chip.irq_mask = gpmc_irq_mask;
+ gpmc->irq_chip.irq_unmask = gpmc_irq_unmask;
+ gpmc->irq_chip.irq_set_type = gpmc_irq_set_type;
+
+ gpmc_irq_domain = irq_domain_add_linear(gpmc->dev->of_node,
+ gpmc->nirqs,
+ &gpmc_irq_domain_ops,
+ gpmc);
+ if (!gpmc_irq_domain) {
+ dev_err(gpmc->dev, "IRQ domain add failed\n");
+ return -ENODEV;
+ }
+
+ rc = request_irq(gpmc->irq, gpmc_handle_irq, 0, "gpmc", gpmc);
+ if (rc) {
+ dev_err(gpmc->dev, "failed to request irq %d: %d\n",
+ gpmc->irq, rc);
+ irq_domain_remove(gpmc_irq_domain);
+ gpmc_irq_domain = NULL;
+ }
+
+ return rc;
}
-static int gpmc_free_irq(void)
+static int gpmc_free_irq(struct gpmc_device *gpmc)
{
- int i;
+ int hwirq;
- if (gpmc_irq)
- free_irq(gpmc_irq, NULL);
+ free_irq(gpmc->irq, gpmc);
- for (i = 0; i < GPMC_NR_IRQ; i++) {
- irq_set_handler(gpmc_client_irq[i].irq, NULL);
- irq_set_chip(gpmc_client_irq[i].irq, &no_irq_chip);
- }
+ for (hwirq = 0; hwirq < gpmc->nirqs; hwirq++)
+ irq_dispose_mapping(irq_find_mapping(gpmc_irq_domain, hwirq));
- irq_free_descs(gpmc_irq_start, GPMC_NR_IRQ);
+ irq_domain_remove(gpmc_irq_domain);
+ gpmc_irq_domain = NULL;
return 0;
}
@@ -1242,12 +1375,7 @@ static void gpmc_mem_init(void)
{
int cs;
- /*
- * The first 1MB of GPMC address space is typically mapped to
- * the internal ROM. Never allocate the first page, to
- * facilitate bug detection; even if we didn't boot from ROM.
- */
- gpmc_mem_root.start = SZ_1M;
+ gpmc_mem_root.start = GPMC_MEM_START;
gpmc_mem_root.end = GPMC_MEM_END;
/* Reserve all regions that has been set up by bootloader */
@@ -1796,105 +1924,6 @@ static void __maybe_unused gpmc_read_timings_dt(struct device_node *np,
of_property_read_bool(np, "gpmc,time-para-granularity");
}
-#if IS_ENABLED(CONFIG_MTD_NAND)
-
-static const char * const nand_xfer_types[] = {
- [NAND_OMAP_PREFETCH_POLLED] = "prefetch-polled",
- [NAND_OMAP_POLLED] = "polled",
- [NAND_OMAP_PREFETCH_DMA] = "prefetch-dma",
- [NAND_OMAP_PREFETCH_IRQ] = "prefetch-irq",
-};
-
-static int gpmc_probe_nand_child(struct platform_device *pdev,
- struct device_node *child)
-{
- u32 val;
- const char *s;
- struct gpmc_timings gpmc_t;
- struct omap_nand_platform_data *gpmc_nand_data;
-
- if (of_property_read_u32(child, "reg", &val) < 0) {
- dev_err(&pdev->dev, "%s has no 'reg' property\n",
- child->full_name);
- return -ENODEV;
- }
-
- gpmc_nand_data = devm_kzalloc(&pdev->dev, sizeof(*gpmc_nand_data),
- GFP_KERNEL);
- if (!gpmc_nand_data)
- return -ENOMEM;
-
- gpmc_nand_data->cs = val;
- gpmc_nand_data->of_node = child;
-
- /* Detect availability of ELM module */
- gpmc_nand_data->elm_of_node = of_parse_phandle(child, "ti,elm-id", 0);
- if (gpmc_nand_data->elm_of_node == NULL)
- gpmc_nand_data->elm_of_node =
- of_parse_phandle(child, "elm_id", 0);
-
- /* select ecc-scheme for NAND */
- if (of_property_read_string(child, "ti,nand-ecc-opt", &s)) {
- pr_err("%s: ti,nand-ecc-opt not found\n", __func__);
- return -ENODEV;
- }
-
- if (!strcmp(s, "sw"))
- gpmc_nand_data->ecc_opt = OMAP_ECC_HAM1_CODE_SW;
- else if (!strcmp(s, "ham1") ||
- !strcmp(s, "hw") || !strcmp(s, "hw-romcode"))
- gpmc_nand_data->ecc_opt =
- OMAP_ECC_HAM1_CODE_HW;
- else if (!strcmp(s, "bch4"))
- if (gpmc_nand_data->elm_of_node)
- gpmc_nand_data->ecc_opt =
- OMAP_ECC_BCH4_CODE_HW;
- else
- gpmc_nand_data->ecc_opt =
- OMAP_ECC_BCH4_CODE_HW_DETECTION_SW;
- else if (!strcmp(s, "bch8"))
- if (gpmc_nand_data->elm_of_node)
- gpmc_nand_data->ecc_opt =
- OMAP_ECC_BCH8_CODE_HW;
- else
- gpmc_nand_data->ecc_opt =
- OMAP_ECC_BCH8_CODE_HW_DETECTION_SW;
- else if (!strcmp(s, "bch16"))
- if (gpmc_nand_data->elm_of_node)
- gpmc_nand_data->ecc_opt =
- OMAP_ECC_BCH16_CODE_HW;
- else
- pr_err("%s: BCH16 requires ELM support\n", __func__);
- else
- pr_err("%s: ti,nand-ecc-opt invalid value\n", __func__);
-
- /* select data transfer mode for NAND controller */
- if (!of_property_read_string(child, "ti,nand-xfer-type", &s))
- for (val = 0; val < ARRAY_SIZE(nand_xfer_types); val++)
- if (!strcasecmp(s, nand_xfer_types[val])) {
- gpmc_nand_data->xfer_type = val;
- break;
- }
-
- gpmc_nand_data->flash_bbt = of_get_nand_on_flash_bbt(child);
-
- val = of_get_nand_bus_width(child);
- if (val == 16)
- gpmc_nand_data->devsize = NAND_BUSWIDTH_16;
-
- gpmc_read_timings_dt(child, &gpmc_t);
- gpmc_nand_init(gpmc_nand_data, &gpmc_t);
-
- return 0;
-}
-#else
-static int gpmc_probe_nand_child(struct platform_device *pdev,
- struct device_node *child)
-{
- return 0;
-}
-#endif
-
#if IS_ENABLED(CONFIG_MTD_ONENAND)
static int gpmc_probe_onenand_child(struct platform_device *pdev,
struct device_node *child)
@@ -1950,6 +1979,8 @@ static int gpmc_probe_generic_child(struct platform_device *pdev,
const char *name;
int ret, cs;
u32 val;
+ struct gpio_desc *waitpin_desc = NULL;
+ struct gpmc_device *gpmc = platform_get_drvdata(pdev);
if (of_property_read_u32(child, "reg", &cs) < 0) {
dev_err(&pdev->dev, "%s has no 'reg' property\n",
@@ -2010,23 +2041,80 @@ static int gpmc_probe_generic_child(struct platform_device *pdev,
if (ret < 0) {
dev_err(&pdev->dev, "cannot remap GPMC CS %d to %pa\n",
cs, &res.start);
+ if (res.start < GPMC_MEM_START) {
+ dev_info(&pdev->dev,
+ "GPMC CS %d start cannot be lesser than 0x%x\n",
+ cs, GPMC_MEM_START);
+ } else if (res.end > GPMC_MEM_END) {
+ dev_info(&pdev->dev,
+ "GPMC CS %d end cannot be greater than 0x%x\n",
+ cs, GPMC_MEM_END);
+ }
goto err;
}
- ret = of_property_read_u32(child, "bank-width", &gpmc_s.device_width);
- if (ret < 0)
- goto err;
+ if (of_node_cmp(child->name, "nand") == 0) {
+ /* Warn about older DT blobs with no compatible property */
+ if (!of_property_read_bool(child, "compatible")) {
+ dev_warn(&pdev->dev,
+ "Incompatible NAND node: missing compatible");
+ ret = -EINVAL;
+ goto err;
+ }
+ }
+
+ if (of_device_is_compatible(child, "ti,omap2-nand")) {
+ /* NAND specific setup */
+ val = 8;
+ of_property_read_u32(child, "nand-bus-width", &val);
+ switch (val) {
+ case 8:
+ gpmc_s.device_width = GPMC_DEVWIDTH_8BIT;
+ break;
+ case 16:
+ gpmc_s.device_width = GPMC_DEVWIDTH_16BIT;
+ break;
+ default:
+ dev_err(&pdev->dev, "%s: invalid 'nand-bus-width'\n",
+ child->name);
+ ret = -EINVAL;
+ goto err;
+ }
+
+ /* disable write protect */
+ gpmc_configure(GPMC_CONFIG_WP, 0);
+ gpmc_s.device_nand = true;
+ } else {
+ ret = of_property_read_u32(child, "bank-width",
+ &gpmc_s.device_width);
+ if (ret < 0)
+ goto err;
+ }
+
+ /* Reserve wait pin if it is required and valid */
+ if (gpmc_s.wait_on_read || gpmc_s.wait_on_write) {
+ unsigned int wait_pin = gpmc_s.wait_pin;
+
+ waitpin_desc = gpiochip_request_own_desc(&gpmc->gpio_chip,
+ wait_pin, "WAITPIN");
+ if (IS_ERR(waitpin_desc)) {
+ dev_err(&pdev->dev, "invalid wait-pin: %d\n", wait_pin);
+ ret = PTR_ERR(waitpin_desc);
+ goto err;
+ }
+ }
gpmc_cs_show_timings(cs, "before gpmc_cs_program_settings");
+
ret = gpmc_cs_program_settings(cs, &gpmc_s);
if (ret < 0)
- goto err;
+ goto err_cs;
ret = gpmc_cs_set_timings(cs, &gpmc_t, &gpmc_s);
if (ret) {
dev_err(&pdev->dev, "failed to set gpmc timings for: %s\n",
child->name);
- goto err;
+ goto err_cs;
}
/* Clear limited address i.e. enable A26-A11 */
@@ -2057,16 +2145,81 @@ err_child_fail:
dev_err(&pdev->dev, "failed to create gpmc child %s\n", child->name);
ret = -ENODEV;
+err_cs:
+ if (waitpin_desc)
+ gpiochip_free_own_desc(waitpin_desc);
+
err:
gpmc_cs_free(cs);
return ret;
}
+static int gpmc_gpio_get_direction(struct gpio_chip *chip, unsigned int offset)
+{
+ return 1; /* we're input only */
+}
+
+static int gpmc_gpio_direction_input(struct gpio_chip *chip,
+ unsigned int offset)
+{
+ return 0; /* we're input only */
+}
+
+static int gpmc_gpio_direction_output(struct gpio_chip *chip,
+ unsigned int offset, int value)
+{
+ return -EINVAL; /* we're input only */
+}
+
+static void gpmc_gpio_set(struct gpio_chip *chip, unsigned int offset,
+ int value)
+{
+}
+
+static int gpmc_gpio_get(struct gpio_chip *chip, unsigned int offset)
+{
+ u32 reg;
+
+ offset += 8;
+
+ reg = gpmc_read_reg(GPMC_STATUS) & BIT(offset);
+
+ return !!reg;
+}
+
+static int gpmc_gpio_init(struct gpmc_device *gpmc)
+{
+ int ret;
+
+ gpmc->gpio_chip.parent = gpmc->dev;
+ gpmc->gpio_chip.owner = THIS_MODULE;
+ gpmc->gpio_chip.label = DEVICE_NAME;
+ gpmc->gpio_chip.ngpio = gpmc_nr_waitpins;
+ gpmc->gpio_chip.get_direction = gpmc_gpio_get_direction;
+ gpmc->gpio_chip.direction_input = gpmc_gpio_direction_input;
+ gpmc->gpio_chip.direction_output = gpmc_gpio_direction_output;
+ gpmc->gpio_chip.set = gpmc_gpio_set;
+ gpmc->gpio_chip.get = gpmc_gpio_get;
+ gpmc->gpio_chip.base = -1;
+
+ ret = gpiochip_add(&gpmc->gpio_chip);
+ if (ret < 0) {
+ dev_err(gpmc->dev, "could not register gpio chip: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+static void gpmc_gpio_exit(struct gpmc_device *gpmc)
+{
+ gpiochip_remove(&gpmc->gpio_chip);
+}
+
static int gpmc_probe_dt(struct platform_device *pdev)
{
int ret;
- struct device_node *child;
const struct of_device_id *of_id =
of_match_device(gpmc_dt_ids, &pdev->dev);
@@ -2094,17 +2247,26 @@ static int gpmc_probe_dt(struct platform_device *pdev)
return ret;
}
+ return 0;
+}
+
+static int gpmc_probe_dt_children(struct platform_device *pdev)
+{
+ int ret;
+ struct device_node *child;
+
for_each_available_child_of_node(pdev->dev.of_node, child) {
if (!child->name)
continue;
- if (of_node_cmp(child->name, "nand") == 0)
- ret = gpmc_probe_nand_child(pdev, child);
- else if (of_node_cmp(child->name, "onenand") == 0)
+ if (of_node_cmp(child->name, "onenand") == 0)
ret = gpmc_probe_onenand_child(pdev, child);
else
ret = gpmc_probe_generic_child(pdev, child);
+
+ if (ret)
+ return ret;
}
return 0;
@@ -2114,6 +2276,11 @@ static int gpmc_probe_dt(struct platform_device *pdev)
{
return 0;
}
+
+static int gpmc_probe_dt_children(struct platform_device *pdev)
+{
+ return 0;
+}
#endif
static int gpmc_probe(struct platform_device *pdev)
@@ -2121,6 +2288,14 @@ static int gpmc_probe(struct platform_device *pdev)
int rc;
u32 l;
struct resource *res;
+ struct gpmc_device *gpmc;
+
+ gpmc = devm_kzalloc(&pdev->dev, sizeof(*gpmc), GFP_KERNEL);
+ if (!gpmc)
+ return -ENOMEM;
+
+ gpmc->dev = &pdev->dev;
+ platform_set_drvdata(pdev, gpmc);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (res == NULL)
@@ -2134,15 +2309,16 @@ static int gpmc_probe(struct platform_device *pdev)
return PTR_ERR(gpmc_base);
res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
- if (res == NULL)
- dev_warn(&pdev->dev, "Failed to get resource: irq\n");
- else
- gpmc_irq = res->start;
+ if (!res) {
+ dev_err(&pdev->dev, "Failed to get resource: irq\n");
+ return -ENOENT;
+ }
+
+ gpmc->irq = res->start;
gpmc_l3_clk = devm_clk_get(&pdev->dev, "fck");
if (IS_ERR(gpmc_l3_clk)) {
dev_err(&pdev->dev, "Failed to get GPMC fck\n");
- gpmc_irq = 0;
return PTR_ERR(gpmc_l3_clk);
}
@@ -2151,11 +2327,18 @@ static int gpmc_probe(struct platform_device *pdev)
return -EINVAL;
}
+ if (pdev->dev.of_node) {
+ rc = gpmc_probe_dt(pdev);
+ if (rc)
+ return rc;
+ } else {
+ gpmc_cs_num = GPMC_CS_NUM;
+ gpmc_nr_waitpins = GPMC_NR_WAITPINS;
+ }
+
pm_runtime_enable(&pdev->dev);
pm_runtime_get_sync(&pdev->dev);
- gpmc_dev = &pdev->dev;
-
l = gpmc_read_reg(GPMC_REVISION);
/*
@@ -2174,36 +2357,51 @@ static int gpmc_probe(struct platform_device *pdev)
gpmc_capability = GPMC_HAS_WR_ACCESS | GPMC_HAS_WR_DATA_MUX_BUS;
if (GPMC_REVISION_MAJOR(l) > 0x5)
gpmc_capability |= GPMC_HAS_MUX_AAD;
- dev_info(gpmc_dev, "GPMC revision %d.%d\n", GPMC_REVISION_MAJOR(l),
+ dev_info(gpmc->dev, "GPMC revision %d.%d\n", GPMC_REVISION_MAJOR(l),
GPMC_REVISION_MINOR(l));
gpmc_mem_init();
-
- if (gpmc_setup_irq() < 0)
- dev_warn(gpmc_dev, "gpmc_setup_irq failed\n");
-
- if (!pdev->dev.of_node) {
- gpmc_cs_num = GPMC_CS_NUM;
- gpmc_nr_waitpins = GPMC_NR_WAITPINS;
+ rc = gpmc_gpio_init(gpmc);
+ if (rc)
+ goto gpio_init_failed;
+
+ gpmc->nirqs = GPMC_NR_NAND_IRQS + gpmc_nr_waitpins;
+ rc = gpmc_setup_irq(gpmc);
+ if (rc) {
+ dev_err(gpmc->dev, "gpmc_setup_irq failed\n");
+ goto setup_irq_failed;
}
- rc = gpmc_probe_dt(pdev);
+ rc = gpmc_probe_dt_children(pdev);
if (rc < 0) {
- pm_runtime_put_sync(&pdev->dev);
- dev_err(gpmc_dev, "failed to probe DT parameters\n");
- return rc;
+ dev_err(gpmc->dev, "failed to probe DT children\n");
+ goto dt_children_failed;
}
return 0;
+
+dt_children_failed:
+ gpmc_free_irq(gpmc);
+setup_irq_failed:
+ gpmc_gpio_exit(gpmc);
+gpio_init_failed:
+ gpmc_mem_exit();
+ pm_runtime_put_sync(&pdev->dev);
+ pm_runtime_disable(&pdev->dev);
+
+ return rc;
}
static int gpmc_remove(struct platform_device *pdev)
{
- gpmc_free_irq();
+ struct gpmc_device *gpmc = platform_get_drvdata(pdev);
+
+ gpmc_free_irq(gpmc);
+ gpmc_gpio_exit(gpmc);
gpmc_mem_exit();
pm_runtime_put_sync(&pdev->dev);
pm_runtime_disable(&pdev->dev);
- gpmc_dev = NULL;
+
return 0;
}
@@ -2249,25 +2447,6 @@ static __exit void gpmc_exit(void)
postcore_initcall(gpmc_init);
module_exit(gpmc_exit);
-static irqreturn_t gpmc_handle_irq(int irq, void *dev)
-{
- int i;
- u32 regval;
-
- regval = gpmc_read_reg(GPMC_IRQSTATUS);
-
- if (!regval)
- return IRQ_NONE;
-
- for (i = 0; i < GPMC_NR_IRQ; i++)
- if (regval & gpmc_client_irq[i].bitmask)
- generic_handle_irq(gpmc_client_irq[i].irq);
-
- gpmc_write_reg(GPMC_IRQSTATUS, regval);
-
- return IRQ_HANDLED;
-}
-
static struct omap3_gpmc_regs gpmc_context;
void omap3_gpmc_save_context(void)
diff --git a/drivers/memory/samsung/Kconfig b/drivers/memory/samsung/Kconfig
new file mode 100644
index 000000000000..9de12222061c
--- /dev/null
+++ b/drivers/memory/samsung/Kconfig
@@ -0,0 +1,13 @@
+config SAMSUNG_MC
+ bool "Samsung Exynos Memory Controller support" if COMPILE_TEST
+ help
+ Support for the Memory Controller (MC) devices found on
+ Samsung Exynos SoCs.
+
+if SAMSUNG_MC
+
+config EXYNOS_SROM
+ bool "Exynos SROM controller driver" if COMPILE_TEST
+ depends on (ARM && ARCH_EXYNOS) || (COMPILE_TEST && HAS_IOMEM)
+
+endif
diff --git a/drivers/memory/samsung/Makefile b/drivers/memory/samsung/Makefile
new file mode 100644
index 000000000000..9c554d5522ad
--- /dev/null
+++ b/drivers/memory/samsung/Makefile
@@ -0,0 +1 @@
+obj-$(CONFIG_EXYNOS_SROM) += exynos-srom.o
diff --git a/drivers/memory/samsung/exynos-srom.c b/drivers/memory/samsung/exynos-srom.c
new file mode 100644
index 000000000000..96756fb4d6bd
--- /dev/null
+++ b/drivers/memory/samsung/exynos-srom.c
@@ -0,0 +1,231 @@
+/*
+ * Copyright (c) 2015 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com/
+ *
+ * EXYNOS - SROM Controller support
+ * Author: Pankaj Dubey <pankaj.dubey@samsung.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+#include "exynos-srom.h"
+
+static const unsigned long exynos_srom_offsets[] = {
+ /* SROM side */
+ EXYNOS_SROM_BW,
+ EXYNOS_SROM_BC0,
+ EXYNOS_SROM_BC1,
+ EXYNOS_SROM_BC2,
+ EXYNOS_SROM_BC3,
+};
+
+/**
+ * struct exynos_srom_reg_dump: register dump of SROM Controller registers.
+ * @offset: srom register offset from the controller base address.
+ * @value: the value of register under the offset.
+ */
+struct exynos_srom_reg_dump {
+ u32 offset;
+ u32 value;
+};
+
+/**
+ * struct exynos_srom: platform data for exynos srom controller driver.
+ * @dev: platform device pointer
+ * @reg_base: srom base address
+ * @reg_offset: exynos_srom_reg_dump pointer to hold offset and its value.
+ */
+struct exynos_srom {
+ struct device *dev;
+ void __iomem *reg_base;
+ struct exynos_srom_reg_dump *reg_offset;
+};
+
+static struct exynos_srom_reg_dump *exynos_srom_alloc_reg_dump(
+ const unsigned long *rdump,
+ unsigned long nr_rdump)
+{
+ struct exynos_srom_reg_dump *rd;
+ unsigned int i;
+
+ rd = kcalloc(nr_rdump, sizeof(*rd), GFP_KERNEL);
+ if (!rd)
+ return NULL;
+
+ for (i = 0; i < nr_rdump; ++i)
+ rd[i].offset = rdump[i];
+
+ return rd;
+}
+
+static int exynos_srom_configure_bank(struct exynos_srom *srom,
+ struct device_node *np)
+{
+ u32 bank, width, pmc = 0;
+ u32 timing[6];
+ u32 cs, bw;
+
+ if (of_property_read_u32(np, "reg", &bank))
+ return -EINVAL;
+ if (of_property_read_u32(np, "reg-io-width", &width))
+ width = 1;
+ if (of_property_read_bool(np, "samsung,srom-page-mode"))
+ pmc = 1 << EXYNOS_SROM_BCX__PMC__SHIFT;
+ if (of_property_read_u32_array(np, "samsung,srom-timing", timing,
+ ARRAY_SIZE(timing)))
+ return -EINVAL;
+
+ bank *= 4; /* Convert bank into shift/offset */
+
+ cs = 1 << EXYNOS_SROM_BW__BYTEENABLE__SHIFT;
+ if (width == 2)
+ cs |= 1 << EXYNOS_SROM_BW__DATAWIDTH__SHIFT;
+
+ bw = __raw_readl(srom->reg_base + EXYNOS_SROM_BW);
+ bw = (bw & ~(EXYNOS_SROM_BW__CS_MASK << bank)) | (cs << bank);
+ __raw_writel(bw, srom->reg_base + EXYNOS_SROM_BW);
+
+ __raw_writel(pmc | (timing[0] << EXYNOS_SROM_BCX__TACP__SHIFT) |
+ (timing[1] << EXYNOS_SROM_BCX__TCAH__SHIFT) |
+ (timing[2] << EXYNOS_SROM_BCX__TCOH__SHIFT) |
+ (timing[3] << EXYNOS_SROM_BCX__TACC__SHIFT) |
+ (timing[4] << EXYNOS_SROM_BCX__TCOS__SHIFT) |
+ (timing[5] << EXYNOS_SROM_BCX__TACS__SHIFT),
+ srom->reg_base + EXYNOS_SROM_BC0 + bank);
+
+ return 0;
+}
+
+static int exynos_srom_probe(struct platform_device *pdev)
+{
+ struct device_node *np, *child;
+ struct exynos_srom *srom;
+ struct device *dev = &pdev->dev;
+ bool bad_bank_config = false;
+
+ np = dev->of_node;
+ if (!np) {
+ dev_err(&pdev->dev, "could not find device info\n");
+ return -EINVAL;
+ }
+
+ srom = devm_kzalloc(&pdev->dev,
+ sizeof(struct exynos_srom), GFP_KERNEL);
+ if (!srom)
+ return -ENOMEM;
+
+ srom->dev = dev;
+ srom->reg_base = of_iomap(np, 0);
+ if (!srom->reg_base) {
+ dev_err(&pdev->dev, "iomap of exynos srom controller failed\n");
+ return -ENOMEM;
+ }
+
+ platform_set_drvdata(pdev, srom);
+
+ srom->reg_offset = exynos_srom_alloc_reg_dump(exynos_srom_offsets,
+ sizeof(exynos_srom_offsets));
+ if (!srom->reg_offset) {
+ iounmap(srom->reg_base);
+ return -ENOMEM;
+ }
+
+ for_each_child_of_node(np, child) {
+ if (exynos_srom_configure_bank(srom, child)) {
+ dev_err(dev,
+ "Could not decode bank configuration for %s\n",
+ child->name);
+ bad_bank_config = true;
+ }
+ }
+
+ /*
+ * If any bank failed to configure, we still provide suspend/resume,
+ * but do not probe child devices
+ */
+ if (bad_bank_config)
+ return 0;
+
+ return of_platform_populate(np, NULL, NULL, dev);
+}
+
+static int exynos_srom_remove(struct platform_device *pdev)
+{
+ struct exynos_srom *srom = platform_get_drvdata(pdev);
+
+ kfree(srom->reg_offset);
+ iounmap(srom->reg_base);
+
+ return 0;
+}
+
+#ifdef CONFIG_PM_SLEEP
+static void exynos_srom_save(void __iomem *base,
+ struct exynos_srom_reg_dump *rd,
+ unsigned int num_regs)
+{
+ for (; num_regs > 0; --num_regs, ++rd)
+ rd->value = readl(base + rd->offset);
+}
+
+static void exynos_srom_restore(void __iomem *base,
+ const struct exynos_srom_reg_dump *rd,
+ unsigned int num_regs)
+{
+ for (; num_regs > 0; --num_regs, ++rd)
+ writel(rd->value, base + rd->offset);
+}
+
+static int exynos_srom_suspend(struct device *dev)
+{
+ struct exynos_srom *srom = dev_get_drvdata(dev);
+
+ exynos_srom_save(srom->reg_base, srom->reg_offset,
+ ARRAY_SIZE(exynos_srom_offsets));
+ return 0;
+}
+
+static int exynos_srom_resume(struct device *dev)
+{
+ struct exynos_srom *srom = dev_get_drvdata(dev);
+
+ exynos_srom_restore(srom->reg_base, srom->reg_offset,
+ ARRAY_SIZE(exynos_srom_offsets));
+ return 0;
+}
+#endif
+
+static const struct of_device_id of_exynos_srom_ids[] = {
+ {
+ .compatible = "samsung,exynos4210-srom",
+ },
+ {},
+};
+MODULE_DEVICE_TABLE(of, of_exynos_srom_ids);
+
+static SIMPLE_DEV_PM_OPS(exynos_srom_pm_ops, exynos_srom_suspend, exynos_srom_resume);
+
+static struct platform_driver exynos_srom_driver = {
+ .probe = exynos_srom_probe,
+ .remove = exynos_srom_remove,
+ .driver = {
+ .name = "exynos-srom",
+ .of_match_table = of_exynos_srom_ids,
+ .pm = &exynos_srom_pm_ops,
+ },
+};
+module_platform_driver(exynos_srom_driver);
+
+MODULE_AUTHOR("Pankaj Dubey <pankaj.dubey@samsung.com>");
+MODULE_DESCRIPTION("Exynos SROM Controller Driver");
+MODULE_LICENSE("GPL");
diff --git a/drivers/memory/samsung/exynos-srom.h b/drivers/memory/samsung/exynos-srom.h
new file mode 100644
index 000000000000..34660c6a57a9
--- /dev/null
+++ b/drivers/memory/samsung/exynos-srom.h
@@ -0,0 +1,51 @@
+/*
+ * Copyright (c) 2015 Samsung Electronics Co., Ltd.
+ * http://www.samsung.com
+ *
+ * Exynos SROMC register definitions
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+*/
+
+#ifndef __EXYNOS_SROM_H
+#define __EXYNOS_SROM_H __FILE__
+
+#define EXYNOS_SROMREG(x) (x)
+
+#define EXYNOS_SROM_BW EXYNOS_SROMREG(0x0)
+#define EXYNOS_SROM_BC0 EXYNOS_SROMREG(0x4)
+#define EXYNOS_SROM_BC1 EXYNOS_SROMREG(0x8)
+#define EXYNOS_SROM_BC2 EXYNOS_SROMREG(0xc)
+#define EXYNOS_SROM_BC3 EXYNOS_SROMREG(0x10)
+#define EXYNOS_SROM_BC4 EXYNOS_SROMREG(0x14)
+#define EXYNOS_SROM_BC5 EXYNOS_SROMREG(0x18)
+
+/* one register BW holds 4 x 4-bit packed settings for NCS0 - NCS3 */
+
+#define EXYNOS_SROM_BW__DATAWIDTH__SHIFT 0
+#define EXYNOS_SROM_BW__ADDRMODE__SHIFT 1
+#define EXYNOS_SROM_BW__WAITENABLE__SHIFT 2
+#define EXYNOS_SROM_BW__BYTEENABLE__SHIFT 3
+
+#define EXYNOS_SROM_BW__CS_MASK 0xf
+
+#define EXYNOS_SROM_BW__NCS0__SHIFT 0
+#define EXYNOS_SROM_BW__NCS1__SHIFT 4
+#define EXYNOS_SROM_BW__NCS2__SHIFT 8
+#define EXYNOS_SROM_BW__NCS3__SHIFT 12
+#define EXYNOS_SROM_BW__NCS4__SHIFT 16
+#define EXYNOS_SROM_BW__NCS5__SHIFT 20
+
+/* the same field layout applies to each of BC0 - BC3 */
+
+#define EXYNOS_SROM_BCX__PMC__SHIFT 0
+#define EXYNOS_SROM_BCX__TACP__SHIFT 4
+#define EXYNOS_SROM_BCX__TCAH__SHIFT 8
+#define EXYNOS_SROM_BCX__TCOH__SHIFT 12
+#define EXYNOS_SROM_BCX__TACC__SHIFT 16
+#define EXYNOS_SROM_BCX__TCOS__SHIFT 24
+#define EXYNOS_SROM_BCX__TACS__SHIFT 28
+
+#endif /* __EXYNOS_SROM_H */
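
For quick reference, the bank setup in exynos_srom_configure_bank() above reduces to two packing steps: a 4-bit chip-select nibble merged into EXYNOS_SROM_BW, and a timing word written to the matching EXYNOS_SROM_BCn register. The sketch below restates that packing as plain helpers using only the macros from exynos-srom.h; the helper names (srom_pack_bw, srom_pack_bc) and the standalone includes are illustrative and not part of the driver.

#include <stdint.h>
#include "exynos-srom.h"	/* register/field macros introduced above */

/*
 * Compute the EXYNOS_SROM_BW value for one bank: each chip select owns a
 * 4-bit nibble, so bank N occupies bits [4N+3:4N] of the BW register.
 */
static uint32_t srom_pack_bw(uint32_t old_bw, unsigned int bank, int is_16bit)
{
	uint32_t cs = 1 << EXYNOS_SROM_BW__BYTEENABLE__SHIFT;
	unsigned int shift = bank * 4;

	if (is_16bit)
		cs |= 1 << EXYNOS_SROM_BW__DATAWIDTH__SHIFT;

	return (old_bw & ~(EXYNOS_SROM_BW__CS_MASK << shift)) | (cs << shift);
}

/* Assemble one EXYNOS_SROM_BCn timing word from the six timing fields. */
static uint32_t srom_pack_bc(const uint32_t t[6], int page_mode)
{
	uint32_t pmc = page_mode ? (1 << EXYNOS_SROM_BCX__PMC__SHIFT) : 0;

	return pmc |
	       (t[0] << EXYNOS_SROM_BCX__TACP__SHIFT) |
	       (t[1] << EXYNOS_SROM_BCX__TCAH__SHIFT) |
	       (t[2] << EXYNOS_SROM_BCX__TCOH__SHIFT) |
	       (t[3] << EXYNOS_SROM_BCX__TACC__SHIFT) |
	       (t[4] << EXYNOS_SROM_BCX__TCOS__SHIFT) |
	       (t[5] << EXYNOS_SROM_BCX__TACS__SHIFT);
}

As in the driver, the timing word for bank N lands at byte offset EXYNOS_SROM_BC0 + 4 * N, and the six cells of the "samsung,srom-timing" property map to Tacp, Tcah, Tcoh, Tacc, Tcos and Tacs in that order.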
diff --git a/drivers/memstick/core/ms_block.c b/drivers/memstick/core/ms_block.c
index 84abf9d3c24e..3cd68152ddf8 100644
--- a/drivers/memstick/core/ms_block.c
+++ b/drivers/memstick/core/ms_block.c
@@ -1220,7 +1220,7 @@ static int msb_read_boot_blocks(struct msb_data *msb)
}
if (extra.management_flag & MEMSTICK_MANAGEMENT_SYSFLG) {
- dbg("managment flag doesn't indicate boot block %d",
+ dbg("management flag doesn't indicate boot block %d",
pba);
continue;
}
@@ -1367,7 +1367,7 @@ static int msb_ftl_initialize(struct msb_data *msb)
static int msb_ftl_scan(struct msb_data *msb)
{
u16 pba, lba, other_block;
- u8 overwrite_flag, managment_flag, other_overwrite_flag;
+ u8 overwrite_flag, management_flag, other_overwrite_flag;
int error;
struct ms_extra_data_register extra;
u8 *overwrite_flags = kzalloc(msb->block_count, GFP_KERNEL);
@@ -1409,7 +1409,7 @@ static int msb_ftl_scan(struct msb_data *msb)
}
lba = be16_to_cpu(extra.logical_address);
- managment_flag = extra.management_flag;
+ management_flag = extra.management_flag;
overwrite_flag = extra.overwrite_flag;
overwrite_flags[pba] = overwrite_flag;
@@ -1421,16 +1421,16 @@ static int msb_ftl_scan(struct msb_data *msb)
}
/* Skip system/drm blocks */
- if ((managment_flag & MEMSTICK_MANAGMENT_FLAG_NORMAL) !=
- MEMSTICK_MANAGMENT_FLAG_NORMAL) {
- dbg("pba %05d -> [reserved managment flag %02x]",
- pba, managment_flag);
+ if ((management_flag & MEMSTICK_MANAGEMENT_FLAG_NORMAL) !=
+ MEMSTICK_MANAGEMENT_FLAG_NORMAL) {
+ dbg("pba %05d -> [reserved management flag %02x]",
+ pba, management_flag);
msb_mark_block_used(msb, pba);
continue;
}
/* Erase temporary tables */
- if (!(managment_flag & MEMSTICK_MANAGEMENT_ATFLG)) {
+ if (!(management_flag & MEMSTICK_MANAGEMENT_ATFLG)) {
dbg("pba %05d -> [temp table] - will erase", pba);
msb_mark_block_used(msb, pba);
diff --git a/drivers/memstick/core/ms_block.h b/drivers/memstick/core/ms_block.h
index c75198dbf139..53962c3b21df 100644
--- a/drivers/memstick/core/ms_block.h
+++ b/drivers/memstick/core/ms_block.h
@@ -47,7 +47,7 @@
#define MEMSTICK_OV_PG_NORMAL \
(MEMSTICK_OVERWRITE_PGST1 | MEMSTICK_OVERWRITE_PGST0)
-#define MEMSTICK_MANAGMENT_FLAG_NORMAL \
+#define MEMSTICK_MANAGEMENT_FLAG_NORMAL \
(MEMSTICK_MANAGEMENT_SYSFLG | \
MEMSTICK_MANAGEMENT_SCMS1 | \
MEMSTICK_MANAGEMENT_SCMS0) \
diff --git a/drivers/memstick/core/mspro_block.c b/drivers/memstick/core/mspro_block.c
index 922a750640e8..0fb27d338811 100644
--- a/drivers/memstick/core/mspro_block.c
+++ b/drivers/memstick/core/mspro_block.c
@@ -1033,12 +1033,11 @@ static int mspro_block_read_attributes(struct memstick_dev *card)
}
msb->attr_group.name = "media_attributes";
- buffer = kmalloc(attr_len, GFP_KERNEL);
+ buffer = kmemdup(attr, attr_len, GFP_KERNEL);
if (!buffer) {
rc = -ENOMEM;
goto out_free_attr;
}
- memcpy(buffer, (char *)attr, attr_len);
for (cnt = 0; cnt < attr_count; ++cnt) {
s_attr = kzalloc(sizeof(struct mspro_sys_attr), GFP_KERNEL);
diff --git a/drivers/memstick/host/rtsx_usb_ms.c b/drivers/memstick/host/rtsx_usb_ms.c
index 1105db2355d2..d34bc3530385 100644
--- a/drivers/memstick/host/rtsx_usb_ms.c
+++ b/drivers/memstick/host/rtsx_usb_ms.c
@@ -706,7 +706,7 @@ poll_again:
if (host->eject)
break;
- msleep(1000);
+ schedule_timeout_idle(HZ);
}
complete(&host->detect_ms_exit);
diff --git a/drivers/message/fusion/mptlan.c b/drivers/message/fusion/mptlan.c
index cbe96072a6cc..6955c9e22d57 100644
--- a/drivers/message/fusion/mptlan.c
+++ b/drivers/message/fusion/mptlan.c
@@ -791,7 +791,7 @@ mpt_lan_sdu_send (struct sk_buff *skb, struct net_device *dev)
pSimple->Address.High = 0;
mpt_put_msg_frame (LanCtx, mpt_dev, mf);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
dioprintk((KERN_INFO MYNAM ": %s/%s: Sending packet. FlagsLength = %08x.\n",
IOC_AND_NETDEV_NAMES_s_s(dev),
diff --git a/drivers/message/fusion/mptsas.c b/drivers/message/fusion/mptsas.c
index 7ebccfa8072a..7ee1667acde4 100644
--- a/drivers/message/fusion/mptsas.c
+++ b/drivers/message/fusion/mptsas.c
@@ -2281,7 +2281,7 @@ static int mptsas_smp_handler(struct Scsi_Host *shost, struct sas_rphy *rphy,
dma_addr_out = pci_map_single(ioc->pcidev, bio_data(req->bio),
blk_rq_bytes(req), PCI_DMA_BIDIRECTIONAL);
- if (!dma_addr_out)
+ if (pci_dma_mapping_error(ioc->pcidev, dma_addr_out))
goto put_mf;
ioc->add_sge(psge, flagsLength, dma_addr_out);
psge += ioc->SGE_size;
@@ -2296,7 +2296,7 @@ static int mptsas_smp_handler(struct Scsi_Host *shost, struct sas_rphy *rphy,
flagsLength |= blk_rq_bytes(rsp) + 4;
dma_addr_in = pci_map_single(ioc->pcidev, bio_data(rsp->bio),
blk_rq_bytes(rsp), PCI_DMA_BIDIRECTIONAL);
- if (!dma_addr_in)
+ if (pci_dma_mapping_error(ioc->pcidev, dma_addr_in))
goto unmap;
ioc->add_sge(psge, flagsLength, dma_addr_in);
diff --git a/drivers/message/fusion/mptspi.c b/drivers/message/fusion/mptspi.c
index 613231c16194..031e088edb5e 100644
--- a/drivers/message/fusion/mptspi.c
+++ b/drivers/message/fusion/mptspi.c
@@ -1150,7 +1150,7 @@ static void mpt_work_wrapper(struct work_struct *work)
}
shost_printk(KERN_INFO, shost, MYIOC_s_FMT
"Integrated RAID detects new device %d\n", ioc->name, disk);
- scsi_scan_target(&ioc->sh->shost_gendev, 1, disk, 0, 1);
+ scsi_scan_target(&ioc->sh->shost_gendev, 1, disk, 0, SCSI_SCAN_RESCAN);
}
diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
index eea61e349e26..1bcf601de5bc 100644
--- a/drivers/mfd/Kconfig
+++ b/drivers/mfd/Kconfig
@@ -134,7 +134,7 @@ config MFD_CROS_EC
select MFD_CORE
select CHROME_PLATFORMS
select CROS_EC_PROTO
- depends on X86 || ARM || COMPILE_TEST
+ depends on X86 || ARM || ARM64 || COMPILE_TEST
help
If you say Y here you get support for the ChromeOS Embedded
Controller (EC) providing keyboard, battery and power services.
@@ -319,6 +319,16 @@ config MFD_HI6421_PMIC
menus in order to enable them.
We communicate with the Hi6421 via memory-mapped I/O.
+config MFD_HI655X_PMIC
+ tristate "HiSilicon Hi655X series PMU/Codec IC"
+ depends on ARCH_HISI || COMPILE_TEST
+ depends on OF
+ select MFD_CORE
+ select REGMAP_MMIO
+ select REGMAP_IRQ
+ help
+ Select this option to enable the HiSilicon Hi655x series PMIC driver.
+
config HTC_EGPIO
bool "HTC EGPIO support"
depends on GPIOLIB && ARM
@@ -527,6 +537,21 @@ config MFD_MAX14577
additional drivers must be enabled in order to use the functionality
of the device.
+config MFD_MAX77620
+ bool "Maxim Semiconductor MAX77620 and MAX20024 PMIC Support"
+ depends on I2C=y
+ depends on OF
+ select MFD_CORE
+ select REGMAP_I2C
+ select REGMAP_IRQ
+ select IRQ_DOMAIN
+ help
+ Say yes here to add support for the Maxim Semiconductor MAX77620 and
+ MAX20024, which are power management ICs with general-purpose pins,
+ an RTC, regulators, a clock generator, a watchdog, etc. This driver
+ provides common support for accessing the device; additional drivers
+ must be enabled in order to use the functionality of the device.
+
config MFD_MAX77686
tristate "Maxim Semiconductor MAX77686/802 PMIC Support"
depends on I2C
@@ -543,8 +568,8 @@ config MFD_MAX77686
of the device.
config MFD_MAX77693
- bool "Maxim Semiconductor MAX77693 PMIC Support"
- depends on I2C=y
+ tristate "Maxim Semiconductor MAX77693 PMIC Support"
+ depends on I2C
select MFD_CORE
select REGMAP_I2C
select REGMAP_IRQ
@@ -1568,7 +1593,7 @@ endmenu
config MFD_VEXPRESS_SYSREG
bool "Versatile Express System Registers"
- depends on VEXPRESS_CONFIG && GPIOLIB
+ depends on VEXPRESS_CONFIG && GPIOLIB && !ARCH_USES_GETTIMEOFFSET
default y
select CLKSRC_MMIO
select GPIO_GENERIC_PLATFORM
diff --git a/drivers/mfd/Makefile b/drivers/mfd/Makefile
index 5eaa6465d0a6..42a66e19e191 100644
--- a/drivers/mfd/Makefile
+++ b/drivers/mfd/Makefile
@@ -128,6 +128,7 @@ obj-$(CONFIG_MFD_DA9063) += da9063.o
obj-$(CONFIG_MFD_DA9150) += da9150-core.o
obj-$(CONFIG_MFD_MAX14577) += max14577.o
+obj-$(CONFIG_MFD_MAX77620) += max77620.o
obj-$(CONFIG_MFD_MAX77686) += max77686.o
obj-$(CONFIG_MFD_MAX77693) += max77693.o
obj-$(CONFIG_MFD_MAX77843) += max77843.o
@@ -195,6 +196,7 @@ obj-$(CONFIG_MFD_STW481X) += stw481x.o
obj-$(CONFIG_MFD_IPAQ_MICRO) += ipaq-micro.o
obj-$(CONFIG_MFD_MENF21BMC) += menf21bmc.o
obj-$(CONFIG_MFD_HI6421_PMIC) += hi6421-pmic-core.o
+obj-$(CONFIG_MFD_HI655X_PMIC) += hi655x-pmic.o
obj-$(CONFIG_MFD_DLN2) += dln2.o
obj-$(CONFIG_MFD_RT5033) += rt5033.o
obj-$(CONFIG_MFD_SKY81452) += sky81452.o
diff --git a/drivers/mfd/ab8500-debugfs.c b/drivers/mfd/ab8500-debugfs.c
index 69d9fffe5b5c..0aecd7bd35f8 100644
--- a/drivers/mfd/ab8500-debugfs.c
+++ b/drivers/mfd/ab8500-debugfs.c
@@ -2563,7 +2563,7 @@ static ssize_t ab8500_gpadc_trig_timer_write(struct file *file,
if (user_trig_timer & ~0xFF) {
dev_err(dev,
- "debugfs error input: should be beetween 0 to 255\n");
+ "debugfs error input: should be between 0 to 255\n");
return -EINVAL;
}
diff --git a/drivers/mfd/act8945a.c b/drivers/mfd/act8945a.c
index 525b546ba42f..10c6d2da8822 100644
--- a/drivers/mfd/act8945a.c
+++ b/drivers/mfd/act8945a.c
@@ -46,8 +46,9 @@ static int act8945a_i2c_probe(struct i2c_client *i2c,
i2c_set_clientdata(i2c, regmap);
- ret = mfd_add_devices(&i2c->dev, PLATFORM_DEVID_NONE, act8945a_devs,
- ARRAY_SIZE(act8945a_devs), NULL, 0, NULL);
+ ret = devm_mfd_add_devices(&i2c->dev, PLATFORM_DEVID_NONE,
+ act8945a_devs, ARRAY_SIZE(act8945a_devs),
+ NULL, 0, NULL);
if (ret) {
dev_err(&i2c->dev, "Failed to add sub devices\n");
return ret;
@@ -56,13 +57,6 @@ static int act8945a_i2c_probe(struct i2c_client *i2c,
return 0;
}
-static int act8945a_i2c_remove(struct i2c_client *i2c)
-{
- mfd_remove_devices(&i2c->dev);
-
- return 0;
-}
-
static const struct i2c_device_id act8945a_i2c_id[] = {
{ "act8945a", 0 },
{}
@@ -81,7 +75,6 @@ static struct i2c_driver act8945a_i2c_driver = {
.of_match_table = of_match_ptr(act8945a_of_match),
},
.probe = act8945a_i2c_probe,
- .remove = act8945a_i2c_remove,
.id_table = act8945a_i2c_id,
};
diff --git a/drivers/mfd/arizona-core.c b/drivers/mfd/arizona-core.c
index 5319f252790b..bf2717967597 100644
--- a/drivers/mfd/arizona-core.c
+++ b/drivers/mfd/arizona-core.c
@@ -908,12 +908,12 @@ static const char * const wm5102_supplies[] = {
static const struct mfd_cell wm5102_devs[] = {
{ .name = "arizona-micsupp" },
+ { .name = "arizona-gpio" },
{
.name = "arizona-extcon",
.parent_supplies = wm5102_supplies,
.num_parent_supplies = 1, /* We only need MICVDD */
},
- { .name = "arizona-gpio" },
{ .name = "arizona-haptics" },
{ .name = "arizona-pwm" },
{
@@ -925,12 +925,12 @@ static const struct mfd_cell wm5102_devs[] = {
static const struct mfd_cell wm5110_devs[] = {
{ .name = "arizona-micsupp" },
+ { .name = "arizona-gpio" },
{
.name = "arizona-extcon",
.parent_supplies = wm5102_supplies,
.num_parent_supplies = 1, /* We only need MICVDD */
},
- { .name = "arizona-gpio" },
{ .name = "arizona-haptics" },
{ .name = "arizona-pwm" },
{
@@ -966,12 +966,12 @@ static const char * const wm8997_supplies[] = {
static const struct mfd_cell wm8997_devs[] = {
{ .name = "arizona-micsupp" },
+ { .name = "arizona-gpio" },
{
.name = "arizona-extcon",
.parent_supplies = wm8997_supplies,
.num_parent_supplies = 1, /* We only need MICVDD */
},
- { .name = "arizona-gpio" },
{ .name = "arizona-haptics" },
{ .name = "arizona-pwm" },
{
@@ -982,12 +982,13 @@ static const struct mfd_cell wm8997_devs[] = {
};
static const struct mfd_cell wm8998_devs[] = {
+ { .name = "arizona-micsupp" },
+ { .name = "arizona-gpio" },
{
.name = "arizona-extcon",
.parent_supplies = wm5102_supplies,
.num_parent_supplies = 1, /* We only need MICVDD */
},
- { .name = "arizona-gpio" },
{ .name = "arizona-haptics" },
{ .name = "arizona-pwm" },
{
@@ -995,7 +996,6 @@ static const struct mfd_cell wm8998_devs[] = {
.parent_supplies = wm5102_supplies,
.num_parent_supplies = ARRAY_SIZE(wm5102_supplies),
},
- { .name = "arizona-micsupp" },
};
int arizona_dev_init(struct arizona *arizona)
diff --git a/drivers/mfd/arizona-irq.c b/drivers/mfd/arizona-irq.c
index 5fef014920a3..edeb4951366a 100644
--- a/drivers/mfd/arizona-irq.c
+++ b/drivers/mfd/arizona-irq.c
@@ -168,12 +168,15 @@ static struct irq_chip arizona_irq_chip = {
.irq_set_wake = arizona_irq_set_wake,
};
+static struct lock_class_key arizona_irq_lock_class;
+
static int arizona_irq_map(struct irq_domain *h, unsigned int virq,
irq_hw_number_t hw)
{
struct arizona *data = h->host_data;
irq_set_chip_data(virq, data);
+ irq_set_lockdep_class(virq, &arizona_irq_lock_class);
irq_set_chip_and_handler(virq, &arizona_irq_chip, handle_simple_irq);
irq_set_nested_thread(virq, 1);
irq_set_noprobe(virq);
diff --git a/drivers/mfd/as3711.c b/drivers/mfd/as3711.c
index 09e1483b99bc..67b12417585d 100644
--- a/drivers/mfd/as3711.c
+++ b/drivers/mfd/as3711.c
@@ -189,22 +189,14 @@ static int as3711_i2c_probe(struct i2c_client *client,
as3711_subdevs[AS3711_BACKLIGHT].pdata_size = 0;
}
- ret = mfd_add_devices(as3711->dev, -1, as3711_subdevs,
- ARRAY_SIZE(as3711_subdevs), NULL, 0, NULL);
+ ret = devm_mfd_add_devices(as3711->dev, -1, as3711_subdevs,
+ ARRAY_SIZE(as3711_subdevs), NULL, 0, NULL);
if (ret < 0)
dev_err(&client->dev, "add mfd devices failed: %d\n", ret);
return ret;
}
-static int as3711_i2c_remove(struct i2c_client *client)
-{
- struct as3711 *as3711 = i2c_get_clientdata(client);
-
- mfd_remove_devices(as3711->dev);
- return 0;
-}
-
static const struct i2c_device_id as3711_i2c_id[] = {
{.name = "as3711", .driver_data = 0},
{}
@@ -218,7 +210,6 @@ static struct i2c_driver as3711_i2c_driver = {
.of_match_table = of_match_ptr(as3711_of_match),
},
.probe = as3711_i2c_probe,
- .remove = as3711_i2c_remove,
.id_table = as3711_i2c_id,
};
diff --git a/drivers/mfd/as3722.c b/drivers/mfd/as3722.c
index e1f597f97f86..f87342c211bc 100644
--- a/drivers/mfd/as3722.c
+++ b/drivers/mfd/as3722.c
@@ -385,9 +385,10 @@ static int as3722_i2c_probe(struct i2c_client *i2c,
return ret;
irq_flags = as3722->irq_flags | IRQF_ONESHOT;
- ret = regmap_add_irq_chip(as3722->regmap, as3722->chip_irq,
- irq_flags, -1, &as3722_irq_chip,
- &as3722->irq_data);
+ ret = devm_regmap_add_irq_chip(as3722->dev, as3722->regmap,
+ as3722->chip_irq,
+ irq_flags, -1, &as3722_irq_chip,
+ &as3722->irq_data);
if (ret < 0) {
dev_err(as3722->dev, "Failed to add regmap irq: %d\n", ret);
return ret;
@@ -395,33 +396,20 @@ static int as3722_i2c_probe(struct i2c_client *i2c,
ret = as3722_configure_pullups(as3722);
if (ret < 0)
- goto scrub;
+ return ret;
- ret = mfd_add_devices(&i2c->dev, -1, as3722_devs,
- ARRAY_SIZE(as3722_devs), NULL, 0,
- regmap_irq_get_domain(as3722->irq_data));
+ ret = devm_mfd_add_devices(&i2c->dev, -1, as3722_devs,
+ ARRAY_SIZE(as3722_devs), NULL, 0,
+ regmap_irq_get_domain(as3722->irq_data));
if (ret) {
dev_err(as3722->dev, "Failed to add MFD devices: %d\n", ret);
- goto scrub;
+ return ret;
}
device_init_wakeup(as3722->dev, true);
dev_dbg(as3722->dev, "AS3722 core driver initialized successfully\n");
return 0;
-
-scrub:
- regmap_del_irq_chip(as3722->chip_irq, as3722->irq_data);
- return ret;
-}
-
-static int as3722_i2c_remove(struct i2c_client *i2c)
-{
- struct as3722 *as3722 = i2c_get_clientdata(i2c);
-
- mfd_remove_devices(as3722->dev);
- regmap_del_irq_chip(as3722->chip_irq, as3722->irq_data);
- return 0;
}
static int __maybe_unused as3722_i2c_suspend(struct device *dev)
@@ -470,7 +458,6 @@ static struct i2c_driver as3722_i2c_driver = {
.pm = &as3722_pm_ops,
},
.probe = as3722_i2c_probe,
- .remove = as3722_i2c_remove,
.id_table = as3722_i2c_id,
};
diff --git a/drivers/mfd/asic3.c b/drivers/mfd/asic3.c
index 4dca6bc61f5b..0413c8159551 100644
--- a/drivers/mfd/asic3.c
+++ b/drivers/mfd/asic3.c
@@ -446,7 +446,7 @@ static int asic3_gpio_direction(struct gpio_chip *chip,
unsigned long flags;
struct asic3 *asic;
- asic = container_of(chip, struct asic3, gpio);
+ asic = gpiochip_get_data(chip);
gpio_base = ASIC3_GPIO_TO_BASE(offset);
if (gpio_base > ASIC3_GPIO_D_BASE) {
@@ -492,7 +492,7 @@ static int asic3_gpio_get(struct gpio_chip *chip,
u32 mask = ASIC3_GPIO_TO_MASK(offset);
struct asic3 *asic;
- asic = container_of(chip, struct asic3, gpio);
+ asic = gpiochip_get_data(chip);
gpio_base = ASIC3_GPIO_TO_BASE(offset);
if (gpio_base > ASIC3_GPIO_D_BASE) {
@@ -513,7 +513,7 @@ static void asic3_gpio_set(struct gpio_chip *chip,
unsigned long flags;
struct asic3 *asic;
- asic = container_of(chip, struct asic3, gpio);
+ asic = gpiochip_get_data(chip);
gpio_base = ASIC3_GPIO_TO_BASE(offset);
if (gpio_base > ASIC3_GPIO_D_BASE) {
@@ -540,7 +540,7 @@ static void asic3_gpio_set(struct gpio_chip *chip,
static int asic3_gpio_to_irq(struct gpio_chip *chip, unsigned offset)
{
- struct asic3 *asic = container_of(chip, struct asic3, gpio);
+ struct asic3 *asic = gpiochip_get_data(chip);
return asic->irq_base + offset;
}
@@ -595,7 +595,7 @@ static __init int asic3_gpio_probe(struct platform_device *pdev,
alt_reg[i]);
}
- return gpiochip_add(&asic->gpio);
+ return gpiochip_add_data(&asic->gpio, asic);
}
static int asic3_gpio_remove(struct platform_device *pdev)
diff --git a/drivers/mfd/atmel-hlcdc.c b/drivers/mfd/atmel-hlcdc.c
index 06c205868573..eca7ea69b81c 100644
--- a/drivers/mfd/atmel-hlcdc.c
+++ b/drivers/mfd/atmel-hlcdc.c
@@ -128,16 +128,9 @@ static int atmel_hlcdc_probe(struct platform_device *pdev)
dev_set_drvdata(dev, hlcdc);
- return mfd_add_devices(dev, -1, atmel_hlcdc_cells,
- ARRAY_SIZE(atmel_hlcdc_cells),
- NULL, 0, NULL);
-}
-
-static int atmel_hlcdc_remove(struct platform_device *pdev)
-{
- mfd_remove_devices(&pdev->dev);
-
- return 0;
+ return devm_mfd_add_devices(dev, -1, atmel_hlcdc_cells,
+ ARRAY_SIZE(atmel_hlcdc_cells),
+ NULL, 0, NULL);
}
static const struct of_device_id atmel_hlcdc_match[] = {
@@ -152,7 +145,6 @@ MODULE_DEVICE_TABLE(of, atmel_hlcdc_match);
static struct platform_driver atmel_hlcdc_driver = {
.probe = atmel_hlcdc_probe,
- .remove = atmel_hlcdc_remove,
.driver = {
.name = "atmel-hlcdc",
.of_match_table = atmel_hlcdc_match,
diff --git a/drivers/mfd/axp20x-rsb.c b/drivers/mfd/axp20x-rsb.c
index 28c20247c112..a407527bcd09 100644
--- a/drivers/mfd/axp20x-rsb.c
+++ b/drivers/mfd/axp20x-rsb.c
@@ -61,6 +61,7 @@ static int axp20x_rsb_remove(struct sunxi_rsb_device *rdev)
static const struct of_device_id axp20x_rsb_of_match[] = {
{ .compatible = "x-powers,axp223", .data = (void *)AXP223_ID },
+ { .compatible = "x-powers,axp809", .data = (void *)AXP809_ID },
{ },
};
MODULE_DEVICE_TABLE(of, axp20x_rsb_of_match);
diff --git a/drivers/mfd/axp20x.c b/drivers/mfd/axp20x.c
index a57d6e940610..e4e32978c377 100644
--- a/drivers/mfd/axp20x.c
+++ b/drivers/mfd/axp20x.c
@@ -37,6 +37,7 @@ static const char * const axp20x_model_names[] = {
"AXP221",
"AXP223",
"AXP288",
+ "AXP809",
};
static const struct regmap_range axp152_writeable_ranges[] = {
@@ -85,6 +86,7 @@ static const struct regmap_access_table axp20x_volatile_table = {
.n_yes_ranges = ARRAY_SIZE(axp20x_volatile_ranges),
};
+/* AXP22x ranges are shared with the AXP809, as they cover the same range */
static const struct regmap_range axp22x_writeable_ranges[] = {
regmap_reg_range(AXP20X_DATACACHE(0), AXP20X_IRQ5_STATE),
regmap_reg_range(AXP20X_DCDC_MODE, AXP22X_BATLOW_THRES1),
@@ -128,6 +130,12 @@ static struct resource axp152_pek_resources[] = {
DEFINE_RES_IRQ_NAMED(AXP152_IRQ_PEK_FAL_EDGE, "PEK_DBF"),
};
+static struct resource axp20x_ac_power_supply_resources[] = {
+ DEFINE_RES_IRQ_NAMED(AXP20X_IRQ_ACIN_PLUGIN, "ACIN_PLUGIN"),
+ DEFINE_RES_IRQ_NAMED(AXP20X_IRQ_ACIN_REMOVAL, "ACIN_REMOVAL"),
+ DEFINE_RES_IRQ_NAMED(AXP20X_IRQ_ACIN_OVER_V, "ACIN_OVER_V"),
+};
+
static struct resource axp20x_pek_resources[] = {
{
.name = "PEK_DBR",
@@ -211,6 +219,20 @@ static struct resource axp288_fuel_gauge_resources[] = {
},
};
+static struct resource axp809_pek_resources[] = {
+ {
+ .name = "PEK_DBR",
+ .start = AXP809_IRQ_PEK_RIS_EDGE,
+ .end = AXP809_IRQ_PEK_RIS_EDGE,
+ .flags = IORESOURCE_IRQ,
+ }, {
+ .name = "PEK_DBF",
+ .start = AXP809_IRQ_PEK_FAL_EDGE,
+ .end = AXP809_IRQ_PEK_FAL_EDGE,
+ .flags = IORESOURCE_IRQ,
+ },
+};
+
static const struct regmap_config axp152_regmap_config = {
.reg_bits = 8,
.val_bits = 8,
@@ -378,6 +400,41 @@ static const struct regmap_irq axp288_regmap_irqs[] = {
INIT_REGMAP_IRQ(AXP288, BC_USB_CHNG, 5, 1),
};
+static const struct regmap_irq axp809_regmap_irqs[] = {
+ INIT_REGMAP_IRQ(AXP809, ACIN_OVER_V, 0, 7),
+ INIT_REGMAP_IRQ(AXP809, ACIN_PLUGIN, 0, 6),
+ INIT_REGMAP_IRQ(AXP809, ACIN_REMOVAL, 0, 5),
+ INIT_REGMAP_IRQ(AXP809, VBUS_OVER_V, 0, 4),
+ INIT_REGMAP_IRQ(AXP809, VBUS_PLUGIN, 0, 3),
+ INIT_REGMAP_IRQ(AXP809, VBUS_REMOVAL, 0, 2),
+ INIT_REGMAP_IRQ(AXP809, VBUS_V_LOW, 0, 1),
+ INIT_REGMAP_IRQ(AXP809, BATT_PLUGIN, 1, 7),
+ INIT_REGMAP_IRQ(AXP809, BATT_REMOVAL, 1, 6),
+ INIT_REGMAP_IRQ(AXP809, BATT_ENT_ACT_MODE, 1, 5),
+ INIT_REGMAP_IRQ(AXP809, BATT_EXIT_ACT_MODE, 1, 4),
+ INIT_REGMAP_IRQ(AXP809, CHARG, 1, 3),
+ INIT_REGMAP_IRQ(AXP809, CHARG_DONE, 1, 2),
+ INIT_REGMAP_IRQ(AXP809, BATT_CHG_TEMP_HIGH, 2, 7),
+ INIT_REGMAP_IRQ(AXP809, BATT_CHG_TEMP_HIGH_END, 2, 6),
+ INIT_REGMAP_IRQ(AXP809, BATT_CHG_TEMP_LOW, 2, 5),
+ INIT_REGMAP_IRQ(AXP809, BATT_CHG_TEMP_LOW_END, 2, 4),
+ INIT_REGMAP_IRQ(AXP809, BATT_ACT_TEMP_HIGH, 2, 3),
+ INIT_REGMAP_IRQ(AXP809, BATT_ACT_TEMP_HIGH_END, 2, 2),
+ INIT_REGMAP_IRQ(AXP809, BATT_ACT_TEMP_LOW, 2, 1),
+ INIT_REGMAP_IRQ(AXP809, BATT_ACT_TEMP_LOW_END, 2, 0),
+ INIT_REGMAP_IRQ(AXP809, DIE_TEMP_HIGH, 3, 7),
+ INIT_REGMAP_IRQ(AXP809, LOW_PWR_LVL1, 3, 1),
+ INIT_REGMAP_IRQ(AXP809, LOW_PWR_LVL2, 3, 0),
+ INIT_REGMAP_IRQ(AXP809, TIMER, 4, 7),
+ INIT_REGMAP_IRQ(AXP809, PEK_RIS_EDGE, 4, 6),
+ INIT_REGMAP_IRQ(AXP809, PEK_FAL_EDGE, 4, 5),
+ INIT_REGMAP_IRQ(AXP809, PEK_SHORT, 4, 4),
+ INIT_REGMAP_IRQ(AXP809, PEK_LONG, 4, 3),
+ INIT_REGMAP_IRQ(AXP809, PEK_OVER_OFF, 4, 2),
+ INIT_REGMAP_IRQ(AXP809, GPIO1_INPUT, 4, 1),
+ INIT_REGMAP_IRQ(AXP809, GPIO0_INPUT, 4, 0),
+};
+
static const struct regmap_irq_chip axp152_regmap_irq_chip = {
.name = "axp152_irq_chip",
.status_base = AXP152_IRQ1_STATE,
@@ -428,6 +485,18 @@ static const struct regmap_irq_chip axp288_regmap_irq_chip = {
};
+static const struct regmap_irq_chip axp809_regmap_irq_chip = {
+ .name = "axp809",
+ .status_base = AXP20X_IRQ1_STATE,
+ .ack_base = AXP20X_IRQ1_STATE,
+ .mask_base = AXP20X_IRQ1_EN,
+ .mask_invert = true,
+ .init_ack_masked = true,
+ .irqs = axp809_regmap_irqs,
+ .num_irqs = ARRAY_SIZE(axp809_regmap_irqs),
+ .num_regs = 5,
+};
+
static struct mfd_cell axp20x_cells[] = {
{
.name = "axp20x-pek",
@@ -436,6 +505,11 @@ static struct mfd_cell axp20x_cells[] = {
}, {
.name = "axp20x-regulator",
}, {
+ .name = "axp20x-ac-power-supply",
+ .of_compatible = "x-powers,axp202-ac-power-supply",
+ .num_resources = ARRAY_SIZE(axp20x_ac_power_supply_resources),
+ .resources = axp20x_ac_power_supply_resources,
+ }, {
.name = "axp20x-usb-power-supply",
.of_compatible = "x-powers,axp202-usb-power-supply",
.num_resources = ARRAY_SIZE(axp20x_usb_power_supply_resources),
@@ -572,6 +646,16 @@ static struct mfd_cell axp288_cells[] = {
},
};
+static struct mfd_cell axp809_cells[] = {
+ {
+ .name = "axp20x-pek",
+ .num_resources = ARRAY_SIZE(axp809_pek_resources),
+ .resources = axp809_pek_resources,
+ }, {
+ .name = "axp20x-regulator",
+ },
+};
+
static struct axp20x_dev *axp20x_pm_power_off;
static void axp20x_power_off(void)
{
@@ -631,6 +715,12 @@ int axp20x_match_device(struct axp20x_dev *axp20x)
axp20x->regmap_cfg = &axp288_regmap_config;
axp20x->regmap_irq_chip = &axp288_regmap_irq_chip;
break;
+ case AXP809_ID:
+ axp20x->nr_cells = ARRAY_SIZE(axp809_cells);
+ axp20x->cells = axp809_cells;
+ axp20x->regmap_cfg = &axp22x_regmap_config;
+ axp20x->regmap_irq_chip = &axp809_regmap_irq_chip;
+ break;
default:
dev_err(dev, "unsupported AXP20X ID %lu\n", axp20x->variant);
return -EINVAL;
diff --git a/drivers/mfd/bcm590xx.c b/drivers/mfd/bcm590xx.c
index 320aaefee718..0d76d690176b 100644
--- a/drivers/mfd/bcm590xx.c
+++ b/drivers/mfd/bcm590xx.c
@@ -82,8 +82,8 @@ static int bcm590xx_i2c_probe(struct i2c_client *i2c_pri,
goto err;
}
- ret = mfd_add_devices(&i2c_pri->dev, -1, bcm590xx_devs,
- ARRAY_SIZE(bcm590xx_devs), NULL, 0, NULL);
+ ret = devm_mfd_add_devices(&i2c_pri->dev, -1, bcm590xx_devs,
+ ARRAY_SIZE(bcm590xx_devs), NULL, 0, NULL);
if (ret < 0) {
dev_err(&i2c_pri->dev, "failed to add sub-devices: %d\n", ret);
goto err;
@@ -96,12 +96,6 @@ err:
return ret;
}
-static int bcm590xx_i2c_remove(struct i2c_client *i2c)
-{
- mfd_remove_devices(&i2c->dev);
- return 0;
-}
-
static const struct of_device_id bcm590xx_of_match[] = {
{ .compatible = "brcm,bcm59056" },
{ }
@@ -120,7 +114,6 @@ static struct i2c_driver bcm590xx_i2c_driver = {
.of_match_table = of_match_ptr(bcm590xx_of_match),
},
.probe = bcm590xx_i2c_probe,
- .remove = bcm590xx_i2c_remove,
.id_table = bcm590xx_i2c_id,
};
module_i2c_driver(bcm590xx_i2c_driver);
diff --git a/drivers/mfd/da9063-irq.c b/drivers/mfd/da9063-irq.c
index 26302634633c..7e903fcb8813 100644
--- a/drivers/mfd/da9063-irq.c
+++ b/drivers/mfd/da9063-irq.c
@@ -25,14 +25,6 @@
#define DA9063_REG_EVENT_B_OFFSET 1
#define DA9063_REG_EVENT_C_OFFSET 2
#define DA9063_REG_EVENT_D_OFFSET 3
-#define EVENTS_BUF_LEN 4
-
-static const u8 mask_events_buf[] = { [0 ... (EVENTS_BUF_LEN - 1)] = ~0 };
-
-struct da9063_irq_data {
- u16 reg;
- u8 mask;
-};
static const struct regmap_irq da9063_irqs[] = {
/* DA9063 event A register */
diff --git a/drivers/mfd/dm355evm_msp.c b/drivers/mfd/dm355evm_msp.c
index ec4438ed2faf..14661ec5ef7f 100644
--- a/drivers/mfd/dm355evm_msp.c
+++ b/drivers/mfd/dm355evm_msp.c
@@ -33,25 +33,25 @@
* This driver was tested with firmware revision A4.
*/
-#if defined(CONFIG_INPUT_DM355EVM) || defined(CONFIG_INPUT_DM355EVM_MODULE)
+#if IS_ENABLED(CONFIG_INPUT_DM355EVM)
#define msp_has_keyboard() true
#else
#define msp_has_keyboard() false
#endif
-#if defined(CONFIG_LEDS_GPIO) || defined(CONFIG_LEDS_GPIO_MODULE)
+#if IS_ENABLED(CONFIG_LEDS_GPIO)
#define msp_has_leds() true
#else
#define msp_has_leds() false
#endif
-#if defined(CONFIG_RTC_DRV_DM355EVM) || defined(CONFIG_RTC_DRV_DM355EVM_MODULE)
+#if IS_ENABLED(CONFIG_RTC_DRV_DM355EVM)
#define msp_has_rtc() true
#else
#define msp_has_rtc() false
#endif
-#if defined(CONFIG_VIDEO_TVP514X) || defined(CONFIG_VIDEO_TVP514X_MODULE)
+#if IS_ENABLED(CONFIG_VIDEO_TVP514X)
#define msp_has_tvp() true
#else
#define msp_has_tvp() false
@@ -260,7 +260,7 @@ static int add_children(struct i2c_client *client)
/* GPIO-ish stuff */
dm355evm_msp_gpio.parent = &client->dev;
- status = gpiochip_add(&dm355evm_msp_gpio);
+ status = gpiochip_add_data(&dm355evm_msp_gpio, NULL);
if (status < 0)
return status;
diff --git a/drivers/mfd/hi6421-pmic-core.c b/drivers/mfd/hi6421-pmic-core.c
index f9ded45a992d..3fd703fe3aba 100644
--- a/drivers/mfd/hi6421-pmic-core.c
+++ b/drivers/mfd/hi6421-pmic-core.c
@@ -76,8 +76,8 @@ static int hi6421_pmic_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, pmic);
- ret = mfd_add_devices(&pdev->dev, 0, hi6421_devs,
- ARRAY_SIZE(hi6421_devs), NULL, 0, NULL);
+ ret = devm_mfd_add_devices(&pdev->dev, 0, hi6421_devs,
+ ARRAY_SIZE(hi6421_devs), NULL, 0, NULL);
if (ret) {
dev_err(&pdev->dev, "add mfd devices failed: %d\n", ret);
return ret;
@@ -86,13 +86,6 @@ static int hi6421_pmic_probe(struct platform_device *pdev)
return 0;
}
-static int hi6421_pmic_remove(struct platform_device *pdev)
-{
- mfd_remove_devices(&pdev->dev);
-
- return 0;
-}
-
static const struct of_device_id of_hi6421_pmic_match_tbl[] = {
{ .compatible = "hisilicon,hi6421-pmic", },
{ },
@@ -105,7 +98,6 @@ static struct platform_driver hi6421_pmic_driver = {
.of_match_table = of_hi6421_pmic_match_tbl,
},
.probe = hi6421_pmic_probe,
- .remove = hi6421_pmic_remove,
};
module_platform_driver(hi6421_pmic_driver);
diff --git a/drivers/mfd/hi655x-pmic.c b/drivers/mfd/hi655x-pmic.c
new file mode 100644
index 000000000000..05ddc7882362
--- /dev/null
+++ b/drivers/mfd/hi655x-pmic.c
@@ -0,0 +1,162 @@
+/*
+ * Device driver for MFD hi655x PMIC
+ *
+ * Copyright (c) 2016 Hisilicon.
+ *
+ * Authors:
+ * Chen Feng <puck.chen@hisilicon.com>
+ * Fei Wang <w.f@huawei.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+#include <linux/gpio.h>
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/init.h>
+#include <linux/mfd/core.h>
+#include <linux/mfd/hi655x-pmic.h>
+#include <linux/module.h>
+#include <linux/of_gpio.h>
+#include <linux/of_platform.h>
+#include <linux/platform_device.h>
+#include <linux/regmap.h>
+
+static const struct mfd_cell hi655x_pmic_devs[] = {
+ { .name = "hi655x-regulator", },
+};
+
+static const struct regmap_irq hi655x_irqs[] = {
+ { .reg_offset = 0, .mask = OTMP_D1R_INT },
+ { .reg_offset = 0, .mask = VSYS_2P5_R_INT },
+ { .reg_offset = 0, .mask = VSYS_UV_D3R_INT },
+ { .reg_offset = 0, .mask = VSYS_6P0_D200UR_INT },
+ { .reg_offset = 0, .mask = PWRON_D4SR_INT },
+ { .reg_offset = 0, .mask = PWRON_D20F_INT },
+ { .reg_offset = 0, .mask = PWRON_D20R_INT },
+ { .reg_offset = 0, .mask = RESERVE_INT },
+};
+
+static const struct regmap_irq_chip hi655x_irq_chip = {
+ .name = "hi655x-pmic",
+ .irqs = hi655x_irqs,
+ .num_regs = 1,
+ .num_irqs = ARRAY_SIZE(hi655x_irqs),
+ .status_base = HI655X_IRQ_STAT_BASE,
+ .mask_base = HI655X_IRQ_MASK_BASE,
+};
+
+static struct regmap_config hi655x_regmap_config = {
+ .reg_bits = 32,
+ .reg_stride = HI655X_STRIDE,
+ .val_bits = 8,
+ .max_register = HI655X_BUS_ADDR(0xFFF),
+};
+
+static void hi655x_local_irq_clear(struct regmap *map)
+{
+ int i;
+
+ regmap_write(map, HI655X_ANA_IRQM_BASE, HI655X_IRQ_CLR);
+ for (i = 0; i < HI655X_IRQ_ARRAY; i++) {
+ regmap_write(map, HI655X_IRQ_STAT_BASE + i * HI655X_STRIDE,
+ HI655X_IRQ_CLR);
+ }
+}
+
+static int hi655x_pmic_probe(struct platform_device *pdev)
+{
+ int ret;
+ struct hi655x_pmic *pmic;
+ struct device *dev = &pdev->dev;
+ struct device_node *np = dev->of_node;
+ void __iomem *base;
+
+ pmic = devm_kzalloc(dev, sizeof(*pmic), GFP_KERNEL);
+ if (!pmic)
+ return -ENOMEM;
+ pmic->dev = dev;
+
+ pmic->res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!pmic->res)
+ return -ENOENT;
+
+ base = devm_ioremap_resource(dev, pmic->res);
+ if (IS_ERR(base))
+  return PTR_ERR(base);
+
+ pmic->regmap = devm_regmap_init_mmio_clk(dev, NULL, base,
+ &hi655x_regmap_config);
+
+ regmap_read(pmic->regmap, HI655X_BUS_ADDR(HI655X_VER_REG), &pmic->ver);
+ if ((pmic->ver < PMU_VER_START) || (pmic->ver > PMU_VER_END)) {
+ dev_warn(dev, "PMU version %d unsupported\n", pmic->ver);
+ return -EINVAL;
+ }
+
+ hi655x_local_irq_clear(pmic->regmap);
+
+ pmic->gpio = of_get_named_gpio(np, "pmic-gpios", 0);
+ if (!gpio_is_valid(pmic->gpio)) {
+ dev_err(dev, "Failed to get the pmic-gpios\n");
+ return -ENODEV;
+ }
+
+ ret = devm_gpio_request_one(dev, pmic->gpio, GPIOF_IN,
+ "hi655x_pmic_irq");
+ if (ret < 0) {
+ dev_err(dev, "Failed to request gpio %d ret = %d\n",
+ pmic->gpio, ret);
+ return ret;
+ }
+
+ ret = regmap_add_irq_chip(pmic->regmap, gpio_to_irq(pmic->gpio),
+ IRQF_TRIGGER_LOW | IRQF_NO_SUSPEND, 0,
+ &hi655x_irq_chip, &pmic->irq_data);
+ if (ret) {
+ dev_err(dev, "Failed to obtain 'hi655x_pmic_irq' %d\n", ret);
+ return ret;
+ }
+
+ platform_set_drvdata(pdev, pmic);
+
+ ret = mfd_add_devices(dev, PLATFORM_DEVID_AUTO, hi655x_pmic_devs,
+ ARRAY_SIZE(hi655x_pmic_devs), NULL, 0, NULL);
+ if (ret) {
+ dev_err(dev, "Failed to register device %d\n", ret);
+ regmap_del_irq_chip(gpio_to_irq(pmic->gpio), pmic->irq_data);
+ return ret;
+ }
+
+ return 0;
+}
+
+static int hi655x_pmic_remove(struct platform_device *pdev)
+{
+ struct hi655x_pmic *pmic = platform_get_drvdata(pdev);
+
+ regmap_del_irq_chip(gpio_to_irq(pmic->gpio), pmic->irq_data);
+ mfd_remove_devices(&pdev->dev);
+ return 0;
+}
+
+static const struct of_device_id hi655x_pmic_match[] = {
+ { .compatible = "hisilicon,hi655x-pmic", },
+ {},
+};
+
+static struct platform_driver hi655x_pmic_driver = {
+ .driver = {
+ .name = "hi655x-pmic",
+ .of_match_table = of_match_ptr(hi655x_pmic_match),
+ },
+ .probe = hi655x_pmic_probe,
+ .remove = hi655x_pmic_remove,
+};
+module_platform_driver(hi655x_pmic_driver);
+
+MODULE_AUTHOR("Chen Feng <puck.chen@hisilicon.com>");
+MODULE_DESCRIPTION("Hisilicon hi655x PMIC driver");
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/mfd/htc-egpio.c b/drivers/mfd/htc-egpio.c
index c636b5f83cfb..513cfc5c8fb6 100644
--- a/drivers/mfd/htc-egpio.c
+++ b/drivers/mfd/htc-egpio.c
@@ -155,7 +155,7 @@ static int egpio_get(struct gpio_chip *chip, unsigned offset)
pr_debug("egpio_get_value(%d)\n", chip->base + offset);
- egpio = container_of(chip, struct egpio_chip, chip);
+ egpio = gpiochip_get_data(chip);
ei = dev_get_drvdata(egpio->dev);
bit = egpio_bit(ei, offset);
reg = egpio->reg_start + egpio_pos(ei, offset);
@@ -170,7 +170,7 @@ static int egpio_direction_input(struct gpio_chip *chip, unsigned offset)
{
struct egpio_chip *egpio;
- egpio = container_of(chip, struct egpio_chip, chip);
+ egpio = gpiochip_get_data(chip);
return test_bit(offset, &egpio->is_out) ? -EINVAL : 0;
}
@@ -192,7 +192,7 @@ static void egpio_set(struct gpio_chip *chip, unsigned offset, int value)
pr_debug("egpio_set(%s, %d(%d), %d)\n",
chip->label, offset, offset+chip->base, value);
- egpio = container_of(chip, struct egpio_chip, chip);
+ egpio = gpiochip_get_data(chip);
ei = dev_get_drvdata(egpio->dev);
bit = egpio_bit(ei, offset);
pos = egpio_pos(ei, offset);
@@ -216,7 +216,7 @@ static int egpio_direction_output(struct gpio_chip *chip,
{
struct egpio_chip *egpio;
- egpio = container_of(chip, struct egpio_chip, chip);
+ egpio = gpiochip_get_data(chip);
if (test_bit(offset, &egpio->is_out)) {
egpio_set(chip, offset, value);
return 0;
@@ -330,7 +330,7 @@ static int __init egpio_probe(struct platform_device *pdev)
chip->base = pdata->chip[i].gpio_base;
chip->ngpio = pdata->chip[i].num_gpios;
- gpiochip_add(chip);
+ gpiochip_add_data(chip, &ei->chip[i]);
}
/* Set initial pin values */
diff --git a/drivers/mfd/htc-i2cpld.c b/drivers/mfd/htc-i2cpld.c
index bd6b96d07ab8..3f9eee5f8fb9 100644
--- a/drivers/mfd/htc-i2cpld.c
+++ b/drivers/mfd/htc-i2cpld.c
@@ -227,8 +227,7 @@ static irqreturn_t htcpld_handler(int irq, void *dev)
static void htcpld_chip_set(struct gpio_chip *chip, unsigned offset, int val)
{
struct i2c_client *client;
- struct htcpld_chip *chip_data =
- container_of(chip, struct htcpld_chip, chip_out);
+ struct htcpld_chip *chip_data = gpiochip_get_data(chip);
unsigned long flags;
client = chip_data->client;
@@ -257,14 +256,12 @@ static void htcpld_chip_set_ni(struct work_struct *work)
static int htcpld_chip_get(struct gpio_chip *chip, unsigned offset)
{
- struct htcpld_chip *chip_data;
+ struct htcpld_chip *chip_data = gpiochip_get_data(chip);
u8 cache;
if (!strncmp(chip->label, "htcpld-out", 10)) {
- chip_data = container_of(chip, struct htcpld_chip, chip_out);
cache = chip_data->cache_out;
} else if (!strncmp(chip->label, "htcpld-in", 9)) {
- chip_data = container_of(chip, struct htcpld_chip, chip_in);
cache = chip_data->cache_in;
} else
return -EINVAL;
@@ -291,9 +288,7 @@ static int htcpld_direction_input(struct gpio_chip *chip,
static int htcpld_chip_to_irq(struct gpio_chip *chip, unsigned offset)
{
- struct htcpld_chip *chip_data;
-
- chip_data = container_of(chip, struct htcpld_chip, chip_in);
+ struct htcpld_chip *chip_data = gpiochip_get_data(chip);
if (offset < chip_data->nirqs)
return chip_data->irq_start + offset;
@@ -451,14 +446,14 @@ static int htcpld_register_chip_gpio(
gpio_chip->ngpio = plat_chip_data->num_gpios;
/* Add the GPIO chips */
- ret = gpiochip_add(&(chip->chip_out));
+ ret = gpiochip_add_data(&(chip->chip_out), chip);
if (ret) {
dev_warn(dev, "Unable to register output GPIOs for 0x%x: %d\n",
plat_chip_data->addr, ret);
return ret;
}
- ret = gpiochip_add(&(chip->chip_in));
+ ret = gpiochip_add_data(&(chip->chip_in), chip);
if (ret) {
dev_warn(dev, "Unable to register input GPIOs for 0x%x: %d\n",
plat_chip_data->addr, ret);
diff --git a/drivers/mfd/intel-lpss-acpi.c b/drivers/mfd/intel-lpss-acpi.c
index 5a8d9c766633..7ddc4a9563ea 100644
--- a/drivers/mfd/intel-lpss-acpi.c
+++ b/drivers/mfd/intel-lpss-acpi.c
@@ -31,13 +31,9 @@ static struct property_entry spt_i2c_properties[] = {
{ },
};
-static struct property_set spt_i2c_pset = {
- .properties = spt_i2c_properties,
-};
-
static const struct intel_lpss_platform_info spt_i2c_info = {
.clk_rate = 120000000,
- .pset = &spt_i2c_pset,
+ .properties = spt_i2c_properties,
};
static const struct intel_lpss_platform_info bxt_info = {
@@ -51,13 +47,9 @@ static struct property_entry bxt_i2c_properties[] = {
{ },
};
-static struct property_set bxt_i2c_pset = {
- .properties = bxt_i2c_properties,
-};
-
static const struct intel_lpss_platform_info bxt_i2c_info = {
.clk_rate = 133000000,
- .pset = &bxt_i2c_pset,
+ .properties = bxt_i2c_properties,
};
static const struct acpi_device_id intel_lpss_acpi_ids[] = {
diff --git a/drivers/mfd/intel-lpss-pci.c b/drivers/mfd/intel-lpss-pci.c
index a19e57118641..1d79a3c9370f 100644
--- a/drivers/mfd/intel-lpss-pci.c
+++ b/drivers/mfd/intel-lpss-pci.c
@@ -71,13 +71,9 @@ static struct property_entry spt_i2c_properties[] = {
{ },
};
-static struct property_set spt_i2c_pset = {
- .properties = spt_i2c_properties,
-};
-
static const struct intel_lpss_platform_info spt_i2c_info = {
.clk_rate = 120000000,
- .pset = &spt_i2c_pset,
+ .properties = spt_i2c_properties,
};
static struct property_entry uart_properties[] = {
@@ -87,14 +83,10 @@ static struct property_entry uart_properties[] = {
{ },
};
-static struct property_set uart_pset = {
- .properties = uart_properties,
-};
-
static const struct intel_lpss_platform_info spt_uart_info = {
.clk_rate = 120000000,
.clk_con_id = "baudclk",
- .pset = &uart_pset,
+ .properties = uart_properties,
};
static const struct intel_lpss_platform_info bxt_info = {
@@ -104,7 +96,7 @@ static const struct intel_lpss_platform_info bxt_info = {
static const struct intel_lpss_platform_info bxt_uart_info = {
.clk_rate = 100000000,
.clk_con_id = "baudclk",
- .pset = &uart_pset,
+ .properties = uart_properties,
};
static struct property_entry bxt_i2c_properties[] = {
@@ -114,13 +106,9 @@ static struct property_entry bxt_i2c_properties[] = {
{ },
};
-static struct property_set bxt_i2c_pset = {
- .properties = bxt_i2c_properties,
-};
-
static const struct intel_lpss_platform_info bxt_i2c_info = {
.clk_rate = 133000000,
- .pset = &bxt_i2c_pset,
+ .properties = bxt_i2c_properties,
};
static const struct pci_device_id intel_lpss_pci_ids[] = {
diff --git a/drivers/mfd/intel-lpss.c b/drivers/mfd/intel-lpss.c
index 1bbbe877ba7e..41b113875d64 100644
--- a/drivers/mfd/intel-lpss.c
+++ b/drivers/mfd/intel-lpss.c
@@ -34,6 +34,7 @@
#define LPSS_DEV_SIZE 0x200
#define LPSS_PRIV_OFFSET 0x200
#define LPSS_PRIV_SIZE 0x100
+#define LPSS_PRIV_REG_COUNT (LPSS_PRIV_SIZE / 4)
#define LPSS_IDMA64_OFFSET 0x800
#define LPSS_IDMA64_SIZE 0x800
@@ -76,6 +77,7 @@ struct intel_lpss {
struct mfd_cell *cell;
struct device *dev;
void __iomem *priv;
+ u32 priv_ctx[LPSS_PRIV_REG_COUNT];
int devid;
u32 caps;
u32 active_ltr;
@@ -336,8 +338,8 @@ static int intel_lpss_register_clock(struct intel_lpss *lpss)
return 0;
/* Root clock */
- clk = clk_register_fixed_rate(NULL, dev_name(lpss->dev), NULL,
- CLK_IS_ROOT, lpss->info->clk_rate);
+ clk = clk_register_fixed_rate(NULL, dev_name(lpss->dev), NULL, 0,
+ lpss->info->clk_rate);
if (IS_ERR(clk))
return PTR_ERR(clk);
@@ -407,7 +409,7 @@ int intel_lpss_probe(struct device *dev,
if (ret)
return ret;
- lpss->cell->pset = info->pset;
+ lpss->cell->properties = info->properties;
intel_lpss_init_dev(lpss);
@@ -493,6 +495,16 @@ EXPORT_SYMBOL_GPL(intel_lpss_prepare);
int intel_lpss_suspend(struct device *dev)
{
+ struct intel_lpss *lpss = dev_get_drvdata(dev);
+ unsigned int i;
+
+ /* Save device context */
+ for (i = 0; i < LPSS_PRIV_REG_COUNT; i++)
+ lpss->priv_ctx[i] = readl(lpss->priv + i * 4);
+
+ /* Put the device into reset state */
+ writel(0, lpss->priv + LPSS_PRIV_RESETS);
+
return 0;
}
EXPORT_SYMBOL_GPL(intel_lpss_suspend);
@@ -500,8 +512,13 @@ EXPORT_SYMBOL_GPL(intel_lpss_suspend);
int intel_lpss_resume(struct device *dev)
{
struct intel_lpss *lpss = dev_get_drvdata(dev);
+ unsigned int i;
- intel_lpss_init_dev(lpss);
+ intel_lpss_deassert_reset(lpss);
+
+ /* Restore device context */
+ for (i = 0; i < LPSS_PRIV_REG_COUNT; i++)
+ writel(lpss->priv_ctx[i], lpss->priv + i * 4);
return 0;
}
diff --git a/drivers/mfd/intel-lpss.h b/drivers/mfd/intel-lpss.h
index 0dcea9eb2d03..694116630ffa 100644
--- a/drivers/mfd/intel-lpss.h
+++ b/drivers/mfd/intel-lpss.h
@@ -16,14 +16,14 @@
struct device;
struct resource;
-struct property_set;
+struct property_entry;
struct intel_lpss_platform_info {
struct resource *mem;
int irq;
unsigned long clk_rate;
const char *clk_con_id;
- struct property_set *pset;
+ struct property_entry *properties;
};
int intel_lpss_probe(struct device *dev,
diff --git a/drivers/mfd/intel_quark_i2c_gpio.c b/drivers/mfd/intel_quark_i2c_gpio.c
index bdc5e27222c0..7946d6e38b87 100644
--- a/drivers/mfd/intel_quark_i2c_gpio.c
+++ b/drivers/mfd/intel_quark_i2c_gpio.c
@@ -53,7 +53,7 @@
#define INTEL_QUARK_I2C_CLK_HZ 33000000
struct intel_quark_mfd {
- struct pci_dev *pdev;
+ struct device *dev;
struct clk *i2c_clk;
struct clk_lookup *i2c_clk_lookup;
};
@@ -123,14 +123,14 @@ static const struct pci_device_id intel_quark_mfd_ids[] = {
};
MODULE_DEVICE_TABLE(pci, intel_quark_mfd_ids);
-static int intel_quark_register_i2c_clk(struct intel_quark_mfd *quark_mfd)
+static int intel_quark_register_i2c_clk(struct device *dev)
{
- struct pci_dev *pdev = quark_mfd->pdev;
+ struct intel_quark_mfd *quark_mfd = dev_get_drvdata(dev);
struct clk *i2c_clk;
- i2c_clk = clk_register_fixed_rate(&pdev->dev,
+ i2c_clk = clk_register_fixed_rate(dev,
INTEL_QUARK_I2C_CONTROLLER_CLK, NULL,
- CLK_IS_ROOT, INTEL_QUARK_I2C_CLK_HZ);
+ 0, INTEL_QUARK_I2C_CLK_HZ);
if (IS_ERR(i2c_clk))
return PTR_ERR(i2c_clk);
@@ -139,18 +139,19 @@ static int intel_quark_register_i2c_clk(struct intel_quark_mfd *quark_mfd)
INTEL_QUARK_I2C_CONTROLLER_CLK);
if (!quark_mfd->i2c_clk_lookup) {
- dev_err(&pdev->dev, "Fixed clk register failed\n");
+ clk_unregister(quark_mfd->i2c_clk);
+ dev_err(dev, "Fixed clk register failed\n");
return -ENOMEM;
}
return 0;
}
-static void intel_quark_unregister_i2c_clk(struct pci_dev *pdev)
+static void intel_quark_unregister_i2c_clk(struct device *dev)
{
- struct intel_quark_mfd *quark_mfd = dev_get_drvdata(&pdev->dev);
+ struct intel_quark_mfd *quark_mfd = dev_get_drvdata(dev);
- if (!quark_mfd->i2c_clk || !quark_mfd->i2c_clk_lookup)
+ if (!quark_mfd->i2c_clk_lookup)
return;
clkdev_drop(quark_mfd->i2c_clk_lookup);
@@ -219,8 +220,7 @@ static int intel_quark_gpio_setup(struct pci_dev *pdev, struct mfd_cell *cell)
return -ENOMEM;
/* Set the properties for portA */
- pdata->properties->node = NULL;
- pdata->properties->name = "intel-quark-x1000-gpio-portA";
+ pdata->properties->fwnode = NULL;
pdata->properties->idx = 0;
pdata->properties->ngpio = INTEL_QUARK_MFD_NGPIO;
pdata->properties->gpio_base = INTEL_QUARK_MFD_GPIO_BASE;
@@ -246,30 +246,38 @@ static int intel_quark_mfd_probe(struct pci_dev *pdev,
quark_mfd = devm_kzalloc(&pdev->dev, sizeof(*quark_mfd), GFP_KERNEL);
if (!quark_mfd)
return -ENOMEM;
- quark_mfd->pdev = pdev;
- ret = intel_quark_register_i2c_clk(quark_mfd);
+ quark_mfd->dev = &pdev->dev;
+ dev_set_drvdata(&pdev->dev, quark_mfd);
+
+ ret = intel_quark_register_i2c_clk(&pdev->dev);
if (ret)
return ret;
- dev_set_drvdata(&pdev->dev, quark_mfd);
-
ret = intel_quark_i2c_setup(pdev, &intel_quark_mfd_cells[1]);
if (ret)
- return ret;
+ goto err_unregister_i2c_clk;
ret = intel_quark_gpio_setup(pdev, &intel_quark_mfd_cells[0]);
if (ret)
- return ret;
+ goto err_unregister_i2c_clk;
+
+ ret = mfd_add_devices(&pdev->dev, 0, intel_quark_mfd_cells,
+ ARRAY_SIZE(intel_quark_mfd_cells), NULL, 0,
+ NULL);
+ if (ret)
+ goto err_unregister_i2c_clk;
+
+ return 0;
- return mfd_add_devices(&pdev->dev, 0, intel_quark_mfd_cells,
- ARRAY_SIZE(intel_quark_mfd_cells), NULL, 0,
- NULL);
+err_unregister_i2c_clk:
+ intel_quark_unregister_i2c_clk(&pdev->dev);
+ return ret;
}
static void intel_quark_mfd_remove(struct pci_dev *pdev)
{
- intel_quark_unregister_i2c_clk(pdev);
+ intel_quark_unregister_i2c_clk(&pdev->dev);
mfd_remove_devices(&pdev->dev);
}
diff --git a/drivers/mfd/intel_soc_pmic_core.c b/drivers/mfd/intel_soc_pmic_core.c
index d9e15cf7c6c8..12d6ebb4ae5d 100644
--- a/drivers/mfd/intel_soc_pmic_core.c
+++ b/drivers/mfd/intel_soc_pmic_core.c
@@ -35,6 +35,7 @@ static struct gpiod_lookup_table panel_gpio_table = {
.table = {
/* Panel EN/DISABLE */
GPIO_LOOKUP("gpio_crystalcove", 94, "panel", GPIO_ACTIVE_HIGH),
+ { },
},
};
diff --git a/drivers/mfd/lp3943.c b/drivers/mfd/lp3943.c
index eecbb13de1bd..65a2a8f14e74 100644
--- a/drivers/mfd/lp3943.c
+++ b/drivers/mfd/lp3943.c
@@ -123,16 +123,9 @@ static int lp3943_probe(struct i2c_client *cl, const struct i2c_device_id *id)
lp3943->mux_cfg = lp3943_mux_cfg;
i2c_set_clientdata(cl, lp3943);
- return mfd_add_devices(dev, -1, lp3943_devs, ARRAY_SIZE(lp3943_devs),
- NULL, 0, NULL);
-}
-
-static int lp3943_remove(struct i2c_client *cl)
-{
- struct lp3943 *lp3943 = i2c_get_clientdata(cl);
-
- mfd_remove_devices(lp3943->dev);
- return 0;
+ return devm_mfd_add_devices(dev, -1, lp3943_devs,
+ ARRAY_SIZE(lp3943_devs),
+ NULL, 0, NULL);
}
static const struct i2c_device_id lp3943_ids[] = {
@@ -151,7 +144,6 @@ MODULE_DEVICE_TABLE(of, lp3943_of_match);
static struct i2c_driver lp3943_driver = {
.probe = lp3943_probe,
- .remove = lp3943_remove,
.driver = {
.name = "lp3943",
.of_match_table = of_match_ptr(lp3943_of_match),
diff --git a/drivers/mfd/lp8788-irq.c b/drivers/mfd/lp8788-irq.c
index c7a9825aa4ce..792d51bae20f 100644
--- a/drivers/mfd/lp8788-irq.c
+++ b/drivers/mfd/lp8788-irq.c
@@ -112,7 +112,7 @@ static irqreturn_t lp8788_irq_handler(int irq, void *ptr)
struct lp8788_irq_data *irqd = ptr;
struct lp8788 *lp = irqd->lp;
u8 status[NUM_REGS], addr, mask;
- bool handled;
+ bool handled = false;
int i;
if (lp8788_read_multi_bytes(lp, LP8788_INT_1, status, NUM_REGS))
diff --git a/drivers/mfd/max77620.c b/drivers/mfd/max77620.c
new file mode 100644
index 000000000000..199d261990be
--- /dev/null
+++ b/drivers/mfd/max77620.c
@@ -0,0 +1,590 @@
+/*
+ * Maxim MAX77620 MFD Driver
+ *
+ * Copyright (C) 2016 NVIDIA CORPORATION. All rights reserved.
+ *
+ * Author:
+ * Laxman Dewangan <ldewangan@nvidia.com>
+ * Chaitanya Bandi <bandik@nvidia.com>
+ * Mallikarjun Kasoju <mkasoju@nvidia.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ */
+
+/****************** Terminology used in driver ********************
+ * Here is some terminology from the datasheet for quick reference:
+ * Flexible Power Sequence (FPS):
+ * The Flexible Power Sequencer (FPS) allows each regulator to power up under
+ * hardware or software control. Additionally, each regulator can power on
+ * independently or among a group of other regulators with adjustable
+ * power-up and power-down delays (sequencing). GPIO1, GPIO2, and GPIO3 can
+ * be programmed to be part of a sequence, allowing external regulators to be
+ * sequenced along with internal regulators. The 32KHz clock can be programmed
+ * to be part of a sequence.
+ * There are 3 FPS configuration registers and each resource is assigned to
+ * one of these FPS groups or to no FPS group.
+ */
+
+#include <linux/i2c.h>
+#include <linux/interrupt.h>
+#include <linux/mfd/core.h>
+#include <linux/mfd/max77620.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/regmap.h>
+#include <linux/slab.h>
+
+static struct resource gpio_resources[] = {
+ DEFINE_RES_IRQ(MAX77620_IRQ_TOP_GPIO),
+};
+
+static struct resource power_resources[] = {
+ DEFINE_RES_IRQ(MAX77620_IRQ_LBT_MBATLOW),
+};
+
+static struct resource rtc_resources[] = {
+ DEFINE_RES_IRQ(MAX77620_IRQ_TOP_RTC),
+};
+
+static struct resource thermal_resources[] = {
+ DEFINE_RES_IRQ(MAX77620_IRQ_LBT_TJALRM1),
+ DEFINE_RES_IRQ(MAX77620_IRQ_LBT_TJALRM2),
+};
+
+static const struct regmap_irq max77620_top_irqs[] = {
+ REGMAP_IRQ_REG(MAX77620_IRQ_TOP_GLBL, 0, MAX77620_IRQ_TOP_GLBL_MASK),
+ REGMAP_IRQ_REG(MAX77620_IRQ_TOP_SD, 0, MAX77620_IRQ_TOP_SD_MASK),
+ REGMAP_IRQ_REG(MAX77620_IRQ_TOP_LDO, 0, MAX77620_IRQ_TOP_LDO_MASK),
+ REGMAP_IRQ_REG(MAX77620_IRQ_TOP_GPIO, 0, MAX77620_IRQ_TOP_GPIO_MASK),
+ REGMAP_IRQ_REG(MAX77620_IRQ_TOP_RTC, 0, MAX77620_IRQ_TOP_RTC_MASK),
+ REGMAP_IRQ_REG(MAX77620_IRQ_TOP_32K, 0, MAX77620_IRQ_TOP_32K_MASK),
+ REGMAP_IRQ_REG(MAX77620_IRQ_TOP_ONOFF, 0, MAX77620_IRQ_TOP_ONOFF_MASK),
+ REGMAP_IRQ_REG(MAX77620_IRQ_LBT_MBATLOW, 1, MAX77620_IRQ_LBM_MASK),
+ REGMAP_IRQ_REG(MAX77620_IRQ_LBT_TJALRM1, 1, MAX77620_IRQ_TJALRM1_MASK),
+ REGMAP_IRQ_REG(MAX77620_IRQ_LBT_TJALRM2, 1, MAX77620_IRQ_TJALRM2_MASK),
+};
+
+static const struct mfd_cell max77620_children[] = {
+ { .name = "max77620-pinctrl", },
+ { .name = "max77620-clock", },
+ { .name = "max77620-pmic", },
+ { .name = "max77620-watchdog", },
+ {
+ .name = "max77620-gpio",
+ .resources = gpio_resources,
+ .num_resources = ARRAY_SIZE(gpio_resources),
+ }, {
+ .name = "max77620-rtc",
+ .resources = rtc_resources,
+ .num_resources = ARRAY_SIZE(rtc_resources),
+ }, {
+ .name = "max77620-power",
+ .resources = power_resources,
+ .num_resources = ARRAY_SIZE(power_resources),
+ }, {
+ .name = "max77620-thermal",
+ .resources = thermal_resources,
+ .num_resources = ARRAY_SIZE(thermal_resources),
+ },
+};
+
+static const struct mfd_cell max20024_children[] = {
+ { .name = "max20024-pinctrl", },
+ { .name = "max77620-clock", },
+ { .name = "max20024-pmic", },
+ { .name = "max77620-watchdog", },
+ {
+ .name = "max77620-gpio",
+ .resources = gpio_resources,
+ .num_resources = ARRAY_SIZE(gpio_resources),
+ }, {
+ .name = "max77620-rtc",
+ .resources = rtc_resources,
+ .num_resources = ARRAY_SIZE(rtc_resources),
+ }, {
+ .name = "max20024-power",
+ .resources = power_resources,
+ .num_resources = ARRAY_SIZE(power_resources),
+ },
+};
+
+static struct regmap_irq_chip max77620_top_irq_chip = {
+ .name = "max77620-top",
+ .irqs = max77620_top_irqs,
+ .num_irqs = ARRAY_SIZE(max77620_top_irqs),
+ .num_regs = 2,
+ .status_base = MAX77620_REG_IRQTOP,
+ .mask_base = MAX77620_REG_IRQTOPM,
+};
+
+static const struct regmap_range max77620_readable_ranges[] = {
+ regmap_reg_range(MAX77620_REG_CNFGGLBL1, MAX77620_REG_DVSSD4),
+};
+
+static const struct regmap_access_table max77620_readable_table = {
+ .yes_ranges = max77620_readable_ranges,
+ .n_yes_ranges = ARRAY_SIZE(max77620_readable_ranges),
+};
+
+static const struct regmap_range max20024_readable_ranges[] = {
+ regmap_reg_range(MAX77620_REG_CNFGGLBL1, MAX77620_REG_DVSSD4),
+ regmap_reg_range(MAX20024_REG_MAX_ADD, MAX20024_REG_MAX_ADD),
+};
+
+static const struct regmap_access_table max20024_readable_table = {
+ .yes_ranges = max20024_readable_ranges,
+ .n_yes_ranges = ARRAY_SIZE(max20024_readable_ranges),
+};
+
+static const struct regmap_range max77620_writable_ranges[] = {
+ regmap_reg_range(MAX77620_REG_CNFGGLBL1, MAX77620_REG_DVSSD4),
+};
+
+static const struct regmap_access_table max77620_writable_table = {
+ .yes_ranges = max77620_writable_ranges,
+ .n_yes_ranges = ARRAY_SIZE(max77620_writable_ranges),
+};
+
+static const struct regmap_range max77620_cacheable_ranges[] = {
+ regmap_reg_range(MAX77620_REG_SD0_CFG, MAX77620_REG_LDO_CFG3),
+ regmap_reg_range(MAX77620_REG_FPS_CFG0, MAX77620_REG_FPS_SD3),
+};
+
+static const struct regmap_access_table max77620_volatile_table = {
+ .no_ranges = max77620_cacheable_ranges,
+ .n_no_ranges = ARRAY_SIZE(max77620_cacheable_ranges),
+};
+
+static const struct regmap_config max77620_regmap_config = {
+ .name = "power-slave",
+ .reg_bits = 8,
+ .val_bits = 8,
+ .max_register = MAX77620_REG_DVSSD4 + 1,
+ .cache_type = REGCACHE_RBTREE,
+ .rd_table = &max77620_readable_table,
+ .wr_table = &max77620_writable_table,
+ .volatile_table = &max77620_volatile_table,
+};
+
+static const struct regmap_config max20024_regmap_config = {
+ .name = "power-slave",
+ .reg_bits = 8,
+ .val_bits = 8,
+ .max_register = MAX20024_REG_MAX_ADD + 1,
+ .cache_type = REGCACHE_RBTREE,
+ .rd_table = &max20024_readable_table,
+ .wr_table = &max77620_writable_table,
+ .volatile_table = &max77620_volatile_table,
+};
+
+/* max77620_get_fps_period_reg_value: Get the FPS bit-field value for a
+ * requested period.
+ * MAX77620 supports FPS periods of 40, 80, 160, 320, 640, 1280, 2560
+ * and 5120 microseconds. MAX20024 supports FPS periods of 20, 40, 80,
+ * 160, 320, 640, 1280 and 2560 microseconds.
+ * The FPS register has a 3-bit field to set the FPS period as
+ * bits max77620 max20024
+ * 000 40 20
+ * 001 80 40
+ * :::
+ */
+static int max77620_get_fps_period_reg_value(struct max77620_chip *chip,
+ int tperiod)
+{
+ int fps_min_period;
+ int i;
+
+ switch (chip->chip_id) {
+ case MAX20024:
+ fps_min_period = MAX20024_FPS_PERIOD_MIN_US;
+ break;
+ case MAX77620:
+ fps_min_period = MAX77620_FPS_PERIOD_MIN_US;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ for (i = 0; i < 7; i++) {
+ if (fps_min_period >= tperiod)
+ return i;
+ fps_min_period *= 2;
+ }
+
+ return i;
+}
+
+/* max77620_config_fps: Configure FPS configuration registers
+ * based on platform specific information.
+ */
+static int max77620_config_fps(struct max77620_chip *chip,
+ struct device_node *fps_np)
+{
+ struct device *dev = chip->dev;
+ unsigned int mask = 0, config = 0;
+ u32 fps_max_period;
+ u32 param_val;
+ int tperiod, fps_id;
+ int ret;
+ char fps_name[10];
+
+ switch (chip->chip_id) {
+ case MAX20024:
+ fps_max_period = MAX20024_FPS_PERIOD_MAX_US;
+ break;
+ case MAX77620:
+ fps_max_period = MAX77620_FPS_PERIOD_MAX_US;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ for (fps_id = 0; fps_id < MAX77620_FPS_COUNT; fps_id++) {
+ sprintf(fps_name, "fps%d", fps_id);
+ if (!strcmp(fps_np->name, fps_name))
+ break;
+ }
+
+ if (fps_id == MAX77620_FPS_COUNT) {
+ dev_err(dev, "FPS node name %s is not valid\n", fps_np->name);
+ return -EINVAL;
+ }
+
+ ret = of_property_read_u32(fps_np, "maxim,shutdown-fps-time-period-us",
+ &param_val);
+ if (!ret) {
+ mask |= MAX77620_FPS_TIME_PERIOD_MASK;
+ chip->shutdown_fps_period[fps_id] = min(param_val,
+ fps_max_period);
+ tperiod = max77620_get_fps_period_reg_value(chip,
+ chip->shutdown_fps_period[fps_id]);
+ config |= tperiod << MAX77620_FPS_TIME_PERIOD_SHIFT;
+ }
+
+ ret = of_property_read_u32(fps_np, "maxim,suspend-fps-time-period-us",
+ &param_val);
+ if (!ret)
+ chip->suspend_fps_period[fps_id] = min(param_val,
+ fps_max_period);
+
+ ret = of_property_read_u32(fps_np, "maxim,fps-event-source",
+ &param_val);
+ if (!ret) {
+ if (param_val > 2) {
+ dev_err(dev, "FPS%d event-source invalid\n", fps_id);
+ return -EINVAL;
+ }
+ mask |= MAX77620_FPS_EN_SRC_MASK;
+ config |= param_val << MAX77620_FPS_EN_SRC_SHIFT;
+ if (param_val == 2) {
+ mask |= MAX77620_FPS_ENFPS_SW_MASK;
+ config |= MAX77620_FPS_ENFPS_SW;
+ }
+ }
+
+ if (!chip->sleep_enable && !chip->enable_global_lpm) {
+ ret = of_property_read_u32(fps_np,
+ "maxim,device-state-on-disabled-event",
+ &param_val);
+ if (!ret) {
+ if (param_val == 0)
+ chip->sleep_enable = true;
+ else if (param_val == 1)
+ chip->enable_global_lpm = true;
+ }
+ }
+
+ ret = regmap_update_bits(chip->rmap, MAX77620_REG_FPS_CFG0 + fps_id,
+ mask, config);
+ if (ret < 0) {
+ dev_err(dev, "Failed to update FPS CFG: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+static int max77620_initialise_fps(struct max77620_chip *chip)
+{
+ struct device *dev = chip->dev;
+ struct device_node *fps_np, *fps_child;
+ u8 config;
+ int fps_id;
+ int ret;
+
+ for (fps_id = 0; fps_id < MAX77620_FPS_COUNT; fps_id++) {
+ chip->shutdown_fps_period[fps_id] = -1;
+ chip->suspend_fps_period[fps_id] = -1;
+ }
+
+ fps_np = of_get_child_by_name(dev->of_node, "fps");
+ if (!fps_np)
+ goto skip_fps;
+
+ for_each_child_of_node(fps_np, fps_child) {
+ ret = max77620_config_fps(chip, fps_child);
+ if (ret < 0)
+ return ret;
+ }
+
+ config = chip->enable_global_lpm ? MAX77620_ONOFFCNFG2_SLP_LPM_MSK : 0;
+ ret = regmap_update_bits(chip->rmap, MAX77620_REG_ONOFFCNFG2,
+ MAX77620_ONOFFCNFG2_SLP_LPM_MSK, config);
+ if (ret < 0) {
+ dev_err(dev, "Failed to update SLP_LPM: %d\n", ret);
+ return ret;
+ }
+
+skip_fps:
+ /* Enable wake on EN0 pin */
+ ret = regmap_update_bits(chip->rmap, MAX77620_REG_ONOFFCNFG2,
+ MAX77620_ONOFFCNFG2_WK_EN0,
+ MAX77620_ONOFFCNFG2_WK_EN0);
+ if (ret < 0) {
+ dev_err(dev, "Failed to update WK_EN0: %d\n", ret);
+ return ret;
+ }
+
+ /* For MAX20024, SLPEN will be POR reset if CLRSE is b11 */
+ if ((chip->chip_id == MAX20024) && chip->sleep_enable) {
+ config = MAX77620_ONOFFCNFG1_SLPEN | MAX20024_ONOFFCNFG1_CLRSE;
+ ret = regmap_update_bits(chip->rmap, MAX77620_REG_ONOFFCNFG1,
+ config, config);
+ if (ret < 0) {
+ dev_err(dev, "Failed to update SLPEN: %d\n", ret);
+ return ret;
+ }
+ }
+
+ return 0;
+}
+
+static int max77620_read_es_version(struct max77620_chip *chip)
+{
+ unsigned int val;
+ u8 cid_val[6];
+ int i;
+ int ret;
+
+ for (i = MAX77620_REG_CID0; i <= MAX77620_REG_CID5; i++) {
+ ret = regmap_read(chip->rmap, i, &val);
+ if (ret < 0) {
+ dev_err(chip->dev, "Failed to read CID: %d\n", ret);
+ return ret;
+ }
+ dev_dbg(chip->dev, "CID%d: 0x%02x\n",
+ i - MAX77620_REG_CID0, val);
+ cid_val[i - MAX77620_REG_CID0] = val;
+ }
+
+ /* CID4 is OTP Version and CID5 is ES version */
+ dev_info(chip->dev, "PMIC Version OTP:0x%02X and ES:0x%X\n",
+ cid_val[4], MAX77620_CID5_DIDM(cid_val[5]));
+
+ return ret;
+}
+
+static int max77620_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ const struct regmap_config *rmap_config;
+ struct max77620_chip *chip;
+ const struct mfd_cell *mfd_cells;
+ int n_mfd_cells;
+ int ret;
+
+ chip = devm_kzalloc(&client->dev, sizeof(*chip), GFP_KERNEL);
+ if (!chip)
+ return -ENOMEM;
+
+ i2c_set_clientdata(client, chip);
+ chip->dev = &client->dev;
+ chip->irq_base = -1;
+ chip->chip_irq = client->irq;
+ chip->chip_id = (enum max77620_chip_id)id->driver_data;
+
+ switch (chip->chip_id) {
+ case MAX77620:
+ mfd_cells = max77620_children;
+ n_mfd_cells = ARRAY_SIZE(max77620_children);
+ rmap_config = &max77620_regmap_config;
+ break;
+ case MAX20024:
+ mfd_cells = max20024_children;
+ n_mfd_cells = ARRAY_SIZE(max20024_children);
+ rmap_config = &max20024_regmap_config;
+ break;
+ default:
+ dev_err(chip->dev, "ChipID is invalid %d\n", chip->chip_id);
+ return -EINVAL;
+ }
+
+ chip->rmap = devm_regmap_init_i2c(client, rmap_config);
+ if (IS_ERR(chip->rmap)) {
+ ret = PTR_ERR(chip->rmap);
+ dev_err(chip->dev, "Failed to intialise regmap: %d\n", ret);
+ return ret;
+ }
+
+ ret = max77620_read_es_version(chip);
+ if (ret < 0)
+ return ret;
+
+ ret = devm_regmap_add_irq_chip(chip->dev, chip->rmap, client->irq,
+ IRQF_ONESHOT | IRQF_SHARED,
+ chip->irq_base, &max77620_top_irq_chip,
+ &chip->top_irq_data);
+ if (ret < 0) {
+ dev_err(chip->dev, "Failed to add regmap irq: %d\n", ret);
+ return ret;
+ }
+
+ ret = max77620_initialise_fps(chip);
+ if (ret < 0)
+ return ret;
+
+ ret = devm_mfd_add_devices(chip->dev, PLATFORM_DEVID_NONE,
+ mfd_cells, n_mfd_cells, NULL, 0,
+ regmap_irq_get_domain(chip->top_irq_data));
+ if (ret < 0) {
+ dev_err(chip->dev, "Failed to add MFD children: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+#ifdef CONFIG_PM_SLEEP
+static int max77620_set_fps_period(struct max77620_chip *chip,
+ int fps_id, int time_period)
+{
+ int period = max77620_get_fps_period_reg_value(chip, time_period);
+ int ret;
+
+ ret = regmap_update_bits(chip->rmap, MAX77620_REG_FPS_CFG0 + fps_id,
+ MAX77620_FPS_TIME_PERIOD_MASK,
+ period << MAX77620_FPS_TIME_PERIOD_SHIFT);
+ if (ret < 0) {
+ dev_err(chip->dev, "Failed to update FPS period: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
+static int max77620_i2c_suspend(struct device *dev)
+{
+ struct max77620_chip *chip = dev_get_drvdata(dev);
+ struct i2c_client *client = to_i2c_client(dev);
+ unsigned int config;
+ int fps;
+ int ret;
+
+ for (fps = 0; fps < MAX77620_FPS_COUNT; fps++) {
+ if (chip->suspend_fps_period[fps] < 0)
+ continue;
+
+ ret = max77620_set_fps_period(chip, fps,
+ chip->suspend_fps_period[fps]);
+ if (ret < 0)
+ return ret;
+ }
+
+ /*
+ * For MAX20024: no need to configure SLPEN on suspend as
+ * it will be configured during init.
+ */
+ if (chip->chip_id == MAX20024)
+ goto out;
+
+ config = (chip->sleep_enable) ? MAX77620_ONOFFCNFG1_SLPEN : 0;
+ ret = regmap_update_bits(chip->rmap, MAX77620_REG_ONOFFCNFG1,
+ MAX77620_ONOFFCNFG1_SLPEN,
+ config);
+ if (ret < 0) {
+ dev_err(dev, "Failed to configure sleep in suspend: %d\n", ret);
+ return ret;
+ }
+
+ /* Disable WK_EN0 */
+ ret = regmap_update_bits(chip->rmap, MAX77620_REG_ONOFFCNFG2,
+ MAX77620_ONOFFCNFG2_WK_EN0, 0);
+ if (ret < 0) {
+ dev_err(dev, "Failed to configure WK_EN in suspend: %d\n", ret);
+ return ret;
+ }
+
+out:
+ disable_irq(client->irq);
+
+ return 0;
+}
+
+static int max77620_i2c_resume(struct device *dev)
+{
+ struct max77620_chip *chip = dev_get_drvdata(dev);
+ struct i2c_client *client = to_i2c_client(dev);
+ int ret;
+ int fps;
+
+ for (fps = 0; fps < MAX77620_FPS_COUNT; fps++) {
+ if (chip->shutdown_fps_period[fps] < 0)
+ continue;
+
+ ret = max77620_set_fps_period(chip, fps,
+ chip->shutdown_fps_period[fps]);
+ if (ret < 0)
+ return ret;
+ }
+
+ /*
+ * For MAX20024: no need to configure WK_EN0 on resume as
+ * it is configured during init.
+ */
+ if (chip->chip_id == MAX20024)
+ goto out;
+
+ /* Enable WK_EN0 */
+ ret = regmap_update_bits(chip->rmap, MAX77620_REG_ONOFFCNFG2,
+ MAX77620_ONOFFCNFG2_WK_EN0,
+ MAX77620_ONOFFCNFG2_WK_EN0);
+ if (ret < 0) {
+ dev_err(dev, "Failed to configure WK_EN0 n resume: %d\n", ret);
+ return ret;
+ }
+
+out:
+ enable_irq(client->irq);
+
+ return 0;
+}
+#endif
+
+static const struct i2c_device_id max77620_id[] = {
+ {"max77620", MAX77620},
+ {"max20024", MAX20024},
+ {},
+};
+MODULE_DEVICE_TABLE(i2c, max77620_id);
+
+static const struct dev_pm_ops max77620_pm_ops = {
+ SET_SYSTEM_SLEEP_PM_OPS(max77620_i2c_suspend, max77620_i2c_resume)
+};
+
+static struct i2c_driver max77620_driver = {
+ .driver = {
+ .name = "max77620",
+ .pm = &max77620_pm_ops,
+ },
+ .probe = max77620_probe,
+ .id_table = max77620_id,
+};
+
+module_i2c_driver(max77620_driver);
+
+MODULE_DESCRIPTION("MAX77620/MAX20024 Multi Function Device Core Driver");
+MODULE_AUTHOR("Laxman Dewangan <ldewangan@nvidia.com>");
+MODULE_AUTHOR("Chaitanya Bandi <bandik@nvidia.com>");
+MODULE_AUTHOR("Mallikarjun Kasoju <mkasoju@nvidia.com>");
+MODULE_LICENSE("GPL v2");
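
The 3-bit FPS period field encodes a doubling series that starts at a chip-specific minimum; max77620_get_fps_period_reg_value() above walks that series and returns the first step whose period covers the request, i.e. it rounds up. A standalone sketch of the same mapping (the 40 us and 20 us minimums are assumptions taken from the period list in the comment above; the driver reads them from MAX77620_FPS_PERIOD_MIN_US and MAX20024_FPS_PERIOD_MIN_US):

        /* Illustrative re-statement of the encoding, not driver code. */
        static int example_fps_field(int min_period_us, int requested_us)
        {
                int period = min_period_us;
                int i;

                for (i = 0; i < 7; i++) {
                        if (period >= requested_us)
                                return i;   /* first step covering the request */
                        period *= 2;
                }

                return i;                   /* 0b111: longest supported period */
        }

        /*
         * example_fps_field(40, 300)  == 3 -> 320 us slot on MAX77620
         * example_fps_field(20, 2000) == 7 -> 2560 us slot on MAX20024
         */
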
diff --git a/drivers/mfd/max77686.c b/drivers/mfd/max77686.c
index c1aff46e89d9..7b68ed72e9cb 100644
--- a/drivers/mfd/max77686.c
+++ b/drivers/mfd/max77686.c
@@ -2,7 +2,7 @@
* max77686.c - mfd core driver for the Maxim 77686/802
*
* Copyright (C) 2012 Samsung Electronics
- * Chiwoong Byun <woong.byun@smasung.com>
+ * Chiwoong Byun <woong.byun@samsung.com>
* Jonghwa Lee <jonghwa3.lee@samsung.com>
*
* This program is free software; you can redistribute it and/or modify
@@ -230,38 +230,24 @@ static int max77686_i2c_probe(struct i2c_client *i2c,
return -ENODEV;
}
- ret = regmap_add_irq_chip(max77686->regmap, max77686->irq,
- IRQF_TRIGGER_FALLING | IRQF_ONESHOT |
- IRQF_SHARED, 0, irq_chip,
- &max77686->irq_data);
+ ret = devm_regmap_add_irq_chip(&i2c->dev, max77686->regmap,
+ max77686->irq,
+ IRQF_TRIGGER_FALLING | IRQF_ONESHOT |
+ IRQF_SHARED, 0, irq_chip,
+ &max77686->irq_data);
if (ret < 0) {
dev_err(&i2c->dev, "failed to add PMIC irq chip: %d\n", ret);
return ret;
}
- ret = mfd_add_devices(max77686->dev, -1, cells, n_devs, NULL, 0, NULL);
+ ret = devm_mfd_add_devices(max77686->dev, -1, cells, n_devs, NULL,
+ 0, NULL);
if (ret < 0) {
dev_err(&i2c->dev, "failed to add MFD devices: %d\n", ret);
- goto err_del_irqc;
+ return ret;
}
return 0;
-
-err_del_irqc:
- regmap_del_irq_chip(max77686->irq, max77686->irq_data);
-
- return ret;
-}
-
-static int max77686_i2c_remove(struct i2c_client *i2c)
-{
- struct max77686_dev *max77686 = i2c_get_clientdata(i2c);
-
- mfd_remove_devices(max77686->dev);
-
- regmap_del_irq_chip(max77686->irq, max77686->irq_data);
-
- return 0;
}
static const struct i2c_device_id max77686_i2c_id[] = {
@@ -317,22 +303,10 @@ static struct i2c_driver max77686_i2c_driver = {
.of_match_table = of_match_ptr(max77686_pmic_dt_match),
},
.probe = max77686_i2c_probe,
- .remove = max77686_i2c_remove,
.id_table = max77686_i2c_id,
};
-static int __init max77686_i2c_init(void)
-{
- return i2c_add_driver(&max77686_i2c_driver);
-}
-/* init early so consumer devices can complete system boot */
-subsys_initcall(max77686_i2c_init);
-
-static void __exit max77686_i2c_exit(void)
-{
- i2c_del_driver(&max77686_i2c_driver);
-}
-module_exit(max77686_i2c_exit);
+module_i2c_driver(max77686_i2c_driver);
MODULE_DESCRIPTION("MAXIM 77686/802 multi-function core driver");
MODULE_AUTHOR("Chiwoong Byun <woong.byun@samsung.com>");
diff --git a/drivers/mfd/max77693.c b/drivers/mfd/max77693.c
index b83b7a7da1ae..662ae0d9e334 100644
--- a/drivers/mfd/max77693.c
+++ b/drivers/mfd/max77693.c
@@ -2,7 +2,7 @@
* max77693.c - mfd core driver for the MAX 77693
*
* Copyright (C) 2012 Samsung Electronics
- * SangYoung Son <hello.son@smasung.com>
+ * SangYoung Son <hello.son@samsung.com>
*
* This program is not provided / owned by Maxim Integrated Products.
*
@@ -368,6 +368,7 @@ static const struct of_device_id max77693_dt_match[] = {
{ .compatible = "maxim,max77693" },
{},
};
+MODULE_DEVICE_TABLE(of, max77693_dt_match);
#endif
static struct i2c_driver max77693_i2c_driver = {
@@ -381,18 +382,7 @@ static struct i2c_driver max77693_i2c_driver = {
.id_table = max77693_i2c_id,
};
-static int __init max77693_i2c_init(void)
-{
- return i2c_add_driver(&max77693_i2c_driver);
-}
-/* init early so consumer devices can complete system boot */
-subsys_initcall(max77693_i2c_init);
-
-static void __exit max77693_i2c_exit(void)
-{
- i2c_del_driver(&max77693_i2c_driver);
-}
-module_exit(max77693_i2c_exit);
+module_i2c_driver(max77693_i2c_driver);
MODULE_DESCRIPTION("MAXIM 77693 multi-function core driver");
MODULE_AUTHOR("SangYoung, Son <hello.son@samsung.com>");
diff --git a/drivers/mfd/menf21bmc.c b/drivers/mfd/menf21bmc.c
index 1c274345820c..3ad2def947d8 100644
--- a/drivers/mfd/menf21bmc.c
+++ b/drivers/mfd/menf21bmc.c
@@ -96,8 +96,8 @@ menf21bmc_probe(struct i2c_client *client, const struct i2c_device_id *ids)
return ret;
}
- ret = mfd_add_devices(&client->dev, 0, menf21bmc_cell,
- ARRAY_SIZE(menf21bmc_cell), NULL, 0, NULL);
+ ret = devm_mfd_add_devices(&client->dev, 0, menf21bmc_cell,
+ ARRAY_SIZE(menf21bmc_cell), NULL, 0, NULL);
if (ret < 0) {
dev_err(&client->dev, "failed to add BMC sub-devices\n");
return ret;
@@ -106,12 +106,6 @@ menf21bmc_probe(struct i2c_client *client, const struct i2c_device_id *ids)
return 0;
}
-static int menf21bmc_remove(struct i2c_client *client)
-{
- mfd_remove_devices(&client->dev);
- return 0;
-}
-
static const struct i2c_device_id menf21bmc_id_table[] = {
{ "menf21bmc" },
{ }
@@ -122,7 +116,6 @@ static struct i2c_driver menf21bmc_driver = {
.driver.name = "menf21bmc",
.id_table = menf21bmc_id_table,
.probe = menf21bmc_probe,
- .remove = menf21bmc_remove,
};
module_i2c_driver(menf21bmc_driver);
diff --git a/drivers/mfd/mfd-core.c b/drivers/mfd/mfd-core.c
index 88bd1b1e47be..3ac486a597f3 100644
--- a/drivers/mfd/mfd-core.c
+++ b/drivers/mfd/mfd-core.c
@@ -107,7 +107,7 @@ static void mfd_acpi_add_device(const struct mfd_cell *cell,
strlcpy(ids[0].id, match->pnpid, sizeof(ids[0].id));
list_for_each_entry(child, &parent->children, node) {
- if (acpi_match_device_ids(child, ids)) {
+ if (!acpi_match_device_ids(child, ids)) {
adev = child;
break;
}
@@ -193,8 +193,8 @@ static int mfd_add_device(struct device *parent, int id,
goto fail_alias;
}
- if (cell->pset) {
- ret = platform_device_add_properties(pdev, cell->pset);
+ if (cell->properties) {
+ ret = platform_device_add_properties(pdev, cell->properties);
if (ret)
goto fail_alias;
}
@@ -334,6 +334,44 @@ void mfd_remove_devices(struct device *parent)
}
EXPORT_SYMBOL(mfd_remove_devices);
+static void devm_mfd_dev_release(struct device *dev, void *res)
+{
+ mfd_remove_devices(dev);
+}
+
+/**
+ * devm_mfd_add_devices - Resource managed version of mfd_add_devices()
+ *
+ * Returns 0 on success or an appropriate negative error number on failure.
+ * All child devices of the MFD are removed automatically when the parent
+ * device is unbound.
+ */
+int devm_mfd_add_devices(struct device *dev, int id,
+ const struct mfd_cell *cells, int n_devs,
+ struct resource *mem_base,
+ int irq_base, struct irq_domain *domain)
+{
+ struct device **ptr;
+ int ret;
+
+ ptr = devres_alloc(devm_mfd_dev_release, sizeof(*ptr), GFP_KERNEL);
+ if (!ptr)
+ return -ENOMEM;
+
+ ret = mfd_add_devices(dev, id, cells, n_devs, mem_base,
+ irq_base, domain);
+ if (ret < 0) {
+ devres_free(ptr);
+ return ret;
+ }
+
+ *ptr = dev;
+ devres_add(dev, ptr);
+
+ return ret;
+}
+EXPORT_SYMBOL(devm_mfd_add_devices);
+
int mfd_clone_cell(const char *cell, const char **clones, size_t n_clones)
{
struct mfd_cell cell_entry;
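
With the devres release hook above, devm_mfd_add_devices() ties the lifetime of the MFD children to the parent device, which is what lets the conversions elsewhere in this series drop their .remove callbacks. A minimal sketch of a caller, with hypothetical cell names:

        #include <linux/mfd/core.h>
        #include <linux/platform_device.h>

        /* Hypothetical consumer of devm_mfd_add_devices(). */
        static const struct mfd_cell example_cells[] = {
                { .name = "example-regulator", },
                { .name = "example-gpio", },
        };

        static int example_probe(struct platform_device *pdev)
        {
                /* Children are removed automatically when pdev is unbound,
                 * so no .remove handler is needed for this. */
                return devm_mfd_add_devices(&pdev->dev, PLATFORM_DEVID_NONE,
                                            example_cells,
                                            ARRAY_SIZE(example_cells),
                                            NULL, 0, NULL);
        }
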
diff --git a/drivers/mfd/mt6397-core.c b/drivers/mfd/mt6397-core.c
index 8e8d93249c09..e14d8b058f0c 100644
--- a/drivers/mfd/mt6397-core.c
+++ b/drivers/mfd/mt6397-core.c
@@ -267,17 +267,26 @@ static int mt6397_probe(struct platform_device *pdev)
ret = regmap_read(pmic->regmap, MT6397_CID, &id);
if (ret) {
dev_err(pmic->dev, "Failed to read chip id: %d\n", ret);
- goto fail_irq;
+ return ret;
}
+ pmic->irq = platform_get_irq(pdev, 0);
+ if (pmic->irq <= 0)
+ return pmic->irq;
+
switch (id & 0xff) {
case MT6323_CID_CODE:
pmic->int_con[0] = MT6323_INT_CON0;
pmic->int_con[1] = MT6323_INT_CON1;
pmic->int_status[0] = MT6323_INT_STATUS0;
pmic->int_status[1] = MT6323_INT_STATUS1;
- ret = mfd_add_devices(&pdev->dev, -1, mt6323_devs,
- ARRAY_SIZE(mt6323_devs), NULL, 0, NULL);
+ ret = mt6397_irq_init(pmic);
+ if (ret)
+ return ret;
+
+ ret = devm_mfd_add_devices(&pdev->dev, -1, mt6323_devs,
+ ARRAY_SIZE(mt6323_devs), NULL,
+ 0, NULL);
break;
case MT6397_CID_CODE:
@@ -286,8 +295,13 @@ static int mt6397_probe(struct platform_device *pdev)
pmic->int_con[1] = MT6397_INT_CON1;
pmic->int_status[0] = MT6397_INT_STATUS0;
pmic->int_status[1] = MT6397_INT_STATUS1;
- ret = mfd_add_devices(&pdev->dev, -1, mt6397_devs,
- ARRAY_SIZE(mt6397_devs), NULL, 0, NULL);
+ ret = mt6397_irq_init(pmic);
+ if (ret)
+ return ret;
+
+ ret = devm_mfd_add_devices(&pdev->dev, -1, mt6397_devs,
+ ARRAY_SIZE(mt6397_devs), NULL,
+ 0, NULL);
break;
default:
@@ -296,14 +310,6 @@ static int mt6397_probe(struct platform_device *pdev)
break;
}
- pmic->irq = platform_get_irq(pdev, 0);
- if (pmic->irq > 0) {
- ret = mt6397_irq_init(pmic);
- if (ret)
- return ret;
- }
-
-fail_irq:
if (ret) {
irq_domain_remove(pmic->irq_domain);
dev_err(&pdev->dev, "failed to add child devices: %d\n", ret);
@@ -312,13 +318,6 @@ fail_irq:
return ret;
}
-static int mt6397_remove(struct platform_device *pdev)
-{
- mfd_remove_devices(&pdev->dev);
-
- return 0;
-}
-
static const struct of_device_id mt6397_of_match[] = {
{ .compatible = "mediatek,mt6397" },
{ .compatible = "mediatek,mt6323" },
@@ -334,7 +333,6 @@ MODULE_DEVICE_TABLE(platform, mt6397_id);
static struct platform_driver mt6397_driver = {
.probe = mt6397_probe,
- .remove = mt6397_remove,
.driver = {
.name = "mt6397",
.of_match_table = of_match_ptr(mt6397_of_match),
diff --git a/drivers/mfd/omap-usb-tll.c b/drivers/mfd/omap-usb-tll.c
index b7b3e8ee64f2..c30290f33430 100644
--- a/drivers/mfd/omap-usb-tll.c
+++ b/drivers/mfd/omap-usb-tll.c
@@ -269,6 +269,8 @@ static int usbtll_omap_probe(struct platform_device *pdev)
if (IS_ERR(tll->ch_clk[i]))
dev_dbg(dev, "can't get clock : %s\n", clkname);
+ else
+ clk_prepare(tll->ch_clk[i]);
}
pm_runtime_put_sync(dev);
@@ -301,9 +303,12 @@ static int usbtll_omap_remove(struct platform_device *pdev)
tll_dev = NULL;
spin_unlock(&tll_lock);
- for (i = 0; i < tll->nch; i++)
- if (!IS_ERR(tll->ch_clk[i]))
+ for (i = 0; i < tll->nch; i++) {
+ if (!IS_ERR(tll->ch_clk[i])) {
+ clk_unprepare(tll->ch_clk[i]);
clk_put(tll->ch_clk[i]);
+ }
+ }
pm_runtime_disable(&pdev->dev);
return 0;
@@ -420,7 +425,7 @@ int omap_tll_enable(struct usbhs_omap_platform_data *pdata)
if (IS_ERR(tll->ch_clk[i]))
continue;
- r = clk_prepare_enable(tll->ch_clk[i]);
+ r = clk_enable(tll->ch_clk[i]);
if (r) {
dev_err(tll_dev,
"Error enabling ch %d clock: %d\n", i, r);
@@ -448,7 +453,7 @@ int omap_tll_disable(struct usbhs_omap_platform_data *pdata)
for (i = 0; i < tll->nch; i++) {
if (omap_usb_mode_needs_tll(pdata->port_mode[i])) {
if (!IS_ERR(tll->ch_clk[i]))
- clk_disable_unprepare(tll->ch_clk[i]);
+ clk_disable(tll->ch_clk[i]);
}
}
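
The change above splits clock handling into a prepare step at probe/remove time and bare enable/disable at runtime: clk_prepare()/clk_unprepare() may sleep, while clk_enable()/clk_disable() are usable from contexts that cannot sleep, which is what the omap_tll_enable()/omap_tll_disable() paths need. A minimal sketch of the split, with hypothetical names:

        #include <linux/clk.h>

        /* Sketch of the prepare-at-probe / enable-at-runtime split. */
        static struct clk *example_clk;

        static int example_probe_clk(struct device *dev, const char *name)
        {
                example_clk = clk_get(dev, name);
                if (IS_ERR(example_clk))
                        return PTR_ERR(example_clk);

                return clk_prepare(example_clk); /* may sleep: do it once here */
        }

        static int example_runtime_on(void)
        {
                return clk_enable(example_clk);  /* safe in atomic context */
        }

        static void example_runtime_off(void)
        {
                clk_disable(example_clk);        /* safe in atomic context */
        }

        static void example_teardown_clk(void)
        {
                clk_unprepare(example_clk);
                clk_put(example_clk);
        }
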
diff --git a/drivers/mfd/rc5t583-irq.c b/drivers/mfd/rc5t583-irq.c
index 3f8812daa304..f8dde59ea6af 100644
--- a/drivers/mfd/rc5t583-irq.c
+++ b/drivers/mfd/rc5t583-irq.c
@@ -389,17 +389,10 @@ int rc5t583_irq_init(struct rc5t583 *rc5t583, int irq, int irq_base)
irq_clear_status_flags(__irq, IRQ_NOREQUEST);
}
- ret = request_threaded_irq(irq, NULL, rc5t583_irq, IRQF_ONESHOT,
- "rc5t583", rc5t583);
+ ret = devm_request_threaded_irq(rc5t583->dev, irq, NULL, rc5t583_irq,
+ IRQF_ONESHOT, "rc5t583", rc5t583);
if (ret < 0)
dev_err(rc5t583->dev,
"Error in registering interrupt error: %d\n", ret);
return ret;
}
-
-int rc5t583_irq_exit(struct rc5t583 *rc5t583)
-{
- if (rc5t583->chip_irq)
- free_irq(rc5t583->chip_irq, rc5t583);
- return 0;
-}
diff --git a/drivers/mfd/rc5t583.c b/drivers/mfd/rc5t583.c
index fc2b2d93f354..d12243d5ecb8 100644
--- a/drivers/mfd/rc5t583.c
+++ b/drivers/mfd/rc5t583.c
@@ -252,7 +252,6 @@ static int rc5t583_i2c_probe(struct i2c_client *i2c,
struct rc5t583 *rc5t583;
struct rc5t583_platform_data *pdata = dev_get_platdata(&i2c->dev);
int ret;
- bool irq_init_success = false;
if (!pdata) {
dev_err(&i2c->dev, "Err: Platform data not found\n");
@@ -284,32 +283,16 @@ static int rc5t583_i2c_probe(struct i2c_client *i2c,
/* Still continue with warning, if irq init fails */
if (ret)
dev_warn(&i2c->dev, "IRQ init failed: %d\n", ret);
- else
- irq_init_success = true;
}
- ret = mfd_add_devices(rc5t583->dev, -1, rc5t583_subdevs,
- ARRAY_SIZE(rc5t583_subdevs), NULL, 0, NULL);
+ ret = devm_mfd_add_devices(rc5t583->dev, -1, rc5t583_subdevs,
+ ARRAY_SIZE(rc5t583_subdevs), NULL, 0, NULL);
if (ret) {
dev_err(&i2c->dev, "add mfd devices failed: %d\n", ret);
- goto err_add_devs;
+ return ret;
}
return 0;
-
-err_add_devs:
- if (irq_init_success)
- rc5t583_irq_exit(rc5t583);
- return ret;
-}
-
-static int rc5t583_i2c_remove(struct i2c_client *i2c)
-{
- struct rc5t583 *rc5t583 = i2c_get_clientdata(i2c);
-
- mfd_remove_devices(rc5t583->dev);
- rc5t583_irq_exit(rc5t583);
- return 0;
}
static const struct i2c_device_id rc5t583_i2c_id[] = {
@@ -324,7 +307,6 @@ static struct i2c_driver rc5t583_i2c_driver = {
.name = "rc5t583",
},
.probe = rc5t583_i2c_probe,
- .remove = rc5t583_i2c_remove,
.id_table = rc5t583_i2c_id,
};
diff --git a/drivers/mfd/rdc321x-southbridge.c b/drivers/mfd/rdc321x-southbridge.c
index 6575585f1d1f..2bd8c5b6d600 100644
--- a/drivers/mfd/rdc321x-southbridge.c
+++ b/drivers/mfd/rdc321x-southbridge.c
@@ -85,14 +85,10 @@ static int rdc321x_sb_probe(struct pci_dev *pdev,
rdc321x_gpio_pdata.sb_pdev = pdev;
rdc321x_wdt_pdata.sb_pdev = pdev;
- return mfd_add_devices(&pdev->dev, -1,
- rdc321x_sb_cells, ARRAY_SIZE(rdc321x_sb_cells),
- NULL, 0, NULL);
-}
-
-static void rdc321x_sb_remove(struct pci_dev *pdev)
-{
- mfd_remove_devices(&pdev->dev);
+ return devm_mfd_add_devices(&pdev->dev, -1,
+ rdc321x_sb_cells,
+ ARRAY_SIZE(rdc321x_sb_cells),
+ NULL, 0, NULL);
}
static const struct pci_device_id rdc321x_sb_table[] = {
@@ -105,7 +101,6 @@ static struct pci_driver rdc321x_sb_driver = {
.name = "RDC321x Southbridge",
.id_table = rdc321x_sb_table,
.probe = rdc321x_sb_probe,
- .remove = rdc321x_sb_remove,
};
module_pci_driver(rdc321x_sb_driver);
diff --git a/drivers/mfd/rk808.c b/drivers/mfd/rk808.c
index 4b1e4399754b..49d7f624fc94 100644
--- a/drivers/mfd/rk808.c
+++ b/drivers/mfd/rk808.c
@@ -213,9 +213,9 @@ static int rk808_probe(struct i2c_client *client,
rk808->i2c = client;
i2c_set_clientdata(client, rk808);
- ret = mfd_add_devices(&client->dev, -1,
- rk808s, ARRAY_SIZE(rk808s),
- NULL, 0, regmap_irq_get_domain(rk808->irq_data));
+ ret = devm_mfd_add_devices(&client->dev, -1,
+ rk808s, ARRAY_SIZE(rk808s), NULL, 0,
+ regmap_irq_get_domain(rk808->irq_data));
if (ret) {
dev_err(&client->dev, "failed to add MFD devices %d\n", ret);
goto err_irq;
@@ -240,7 +240,6 @@ static int rk808_remove(struct i2c_client *client)
struct rk808 *rk808 = i2c_get_clientdata(client);
regmap_del_irq_chip(client->irq, rk808->irq_data);
- mfd_remove_devices(&client->dev);
pm_power_off = NULL;
return 0;
diff --git a/drivers/mfd/rn5t618.c b/drivers/mfd/rn5t618.c
index 666857192dbe..0ad51d792feb 100644
--- a/drivers/mfd/rn5t618.c
+++ b/drivers/mfd/rn5t618.c
@@ -78,8 +78,8 @@ static int rn5t618_i2c_probe(struct i2c_client *i2c,
return ret;
}
- ret = mfd_add_devices(&i2c->dev, -1, rn5t618_cells,
- ARRAY_SIZE(rn5t618_cells), NULL, 0, NULL);
+ ret = devm_mfd_add_devices(&i2c->dev, -1, rn5t618_cells,
+ ARRAY_SIZE(rn5t618_cells), NULL, 0, NULL);
if (ret) {
dev_err(&i2c->dev, "failed to add sub-devices: %d\n", ret);
return ret;
@@ -102,7 +102,6 @@ static int rn5t618_i2c_remove(struct i2c_client *i2c)
pm_power_off = NULL;
}
- mfd_remove_devices(&i2c->dev);
return 0;
}
diff --git a/drivers/mfd/rt5033.c b/drivers/mfd/rt5033.c
index 2b95485f0057..9bd089c56375 100644
--- a/drivers/mfd/rt5033.c
+++ b/drivers/mfd/rt5033.c
@@ -97,9 +97,9 @@ static int rt5033_i2c_probe(struct i2c_client *i2c,
return ret;
}
- ret = mfd_add_devices(rt5033->dev, -1, rt5033_devs,
- ARRAY_SIZE(rt5033_devs), NULL, 0,
- regmap_irq_get_domain(rt5033->irq_data));
+ ret = devm_mfd_add_devices(rt5033->dev, -1, rt5033_devs,
+ ARRAY_SIZE(rt5033_devs), NULL, 0,
+ regmap_irq_get_domain(rt5033->irq_data));
if (ret < 0) {
dev_err(&i2c->dev, "Failed to add RT5033 child devices.\n");
return ret;
@@ -110,13 +110,6 @@ static int rt5033_i2c_probe(struct i2c_client *i2c,
return 0;
}
-static int rt5033_i2c_remove(struct i2c_client *i2c)
-{
- mfd_remove_devices(&i2c->dev);
-
- return 0;
-}
-
static const struct i2c_device_id rt5033_i2c_id[] = {
{ "rt5033", },
{ }
@@ -135,7 +128,6 @@ static struct i2c_driver rt5033_driver = {
.of_match_table = of_match_ptr(rt5033_dt_match),
},
.probe = rt5033_i2c_probe,
- .remove = rt5033_i2c_remove,
.id_table = rt5033_i2c_id,
};
module_i2c_driver(rt5033_driver);
diff --git a/drivers/mfd/sec-core.c b/drivers/mfd/sec-core.c
index 400e1d7d8d08..ca6b80d08ffc 100644
--- a/drivers/mfd/sec-core.c
+++ b/drivers/mfd/sec-core.c
@@ -481,29 +481,16 @@ static int sec_pmic_probe(struct i2c_client *i2c,
/* If this happens the probe function is problem */
BUG();
}
- ret = mfd_add_devices(sec_pmic->dev, -1, sec_devs, num_sec_devs, NULL,
- 0, NULL);
+ ret = devm_mfd_add_devices(sec_pmic->dev, -1, sec_devs, num_sec_devs,
+ NULL, 0, NULL);
if (ret)
- goto err_mfd;
+ return ret;
device_init_wakeup(sec_pmic->dev, sec_pmic->wakeup);
sec_pmic_configure(sec_pmic);
sec_pmic_dump_rev(sec_pmic);
return ret;
-
-err_mfd:
- sec_irq_exit(sec_pmic);
- return ret;
-}
-
-static int sec_pmic_remove(struct i2c_client *i2c)
-{
- struct sec_pmic_dev *sec_pmic = i2c_get_clientdata(i2c);
-
- mfd_remove_devices(sec_pmic->dev);
- sec_irq_exit(sec_pmic);
- return 0;
}
static void sec_pmic_shutdown(struct i2c_client *i2c)
@@ -583,7 +570,6 @@ static struct i2c_driver sec_pmic_driver = {
.of_match_table = of_match_ptr(sec_dt_match),
},
.probe = sec_pmic_probe,
- .remove = sec_pmic_remove,
.shutdown = sec_pmic_shutdown,
.id_table = sec_pmic_id,
};
diff --git a/drivers/mfd/sec-irq.c b/drivers/mfd/sec-irq.c
index d77de431cc50..5eb59c233d52 100644
--- a/drivers/mfd/sec-irq.c
+++ b/drivers/mfd/sec-irq.c
@@ -483,10 +483,11 @@ int sec_irq_init(struct sec_pmic_dev *sec_pmic)
return -EINVAL;
}
- ret = regmap_add_irq_chip(sec_pmic->regmap_pmic, sec_pmic->irq,
- IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
- sec_pmic->irq_base, sec_irq_chip,
- &sec_pmic->irq_data);
+ ret = devm_regmap_add_irq_chip(sec_pmic->dev, sec_pmic->regmap_pmic,
+ sec_pmic->irq,
+ IRQF_TRIGGER_FALLING | IRQF_ONESHOT,
+ sec_pmic->irq_base, sec_irq_chip,
+ &sec_pmic->irq_data);
if (ret != 0) {
dev_err(sec_pmic->dev, "Failed to register IRQ chip: %d\n", ret);
return ret;
@@ -500,8 +501,3 @@ int sec_irq_init(struct sec_pmic_dev *sec_pmic)
return 0;
}
-
-void sec_irq_exit(struct sec_pmic_dev *sec_pmic)
-{
- regmap_del_irq_chip(sec_pmic->irq, sec_pmic->irq_data);
-}
diff --git a/drivers/mfd/sky81452.c b/drivers/mfd/sky81452.c
index b0c9b0415650..30a2a677100f 100644
--- a/drivers/mfd/sky81452.c
+++ b/drivers/mfd/sky81452.c
@@ -64,19 +64,14 @@ static int sky81452_probe(struct i2c_client *client,
cells[1].platform_data = pdata->regulator_init_data;
cells[1].pdata_size = sizeof(*pdata->regulator_init_data);
- ret = mfd_add_devices(dev, -1, cells, ARRAY_SIZE(cells), NULL, 0, NULL);
+ ret = devm_mfd_add_devices(dev, -1, cells, ARRAY_SIZE(cells),
+ NULL, 0, NULL);
if (ret)
dev_err(dev, "failed to add child devices. err=%d\n", ret);
return ret;
}
-static int sky81452_remove(struct i2c_client *client)
-{
- mfd_remove_devices(&client->dev);
- return 0;
-}
-
static const struct i2c_device_id sky81452_ids[] = {
{ "sky81452" },
{ }
@@ -97,7 +92,6 @@ static struct i2c_driver sky81452_driver = {
.of_match_table = of_match_ptr(sky81452_of_match),
},
.probe = sky81452_probe,
- .remove = sky81452_remove,
.id_table = sky81452_ids,
};
diff --git a/drivers/mfd/sm501.c b/drivers/mfd/sm501.c
index c646784c5a7d..65cd0d2a822a 100644
--- a/drivers/mfd/sm501.c
+++ b/drivers/mfd/sm501.c
@@ -879,11 +879,6 @@ static int sm501_register_display(struct sm501_devdata *sm,
#ifdef CONFIG_MFD_SM501_GPIO
-static inline struct sm501_gpio_chip *to_sm501_gpio(struct gpio_chip *gc)
-{
- return container_of(gc, struct sm501_gpio_chip, gpio);
-}
-
static inline struct sm501_devdata *sm501_gpio_to_dev(struct sm501_gpio *gpio)
{
return container_of(gpio, struct sm501_devdata, gpio);
@@ -892,7 +887,7 @@ static inline struct sm501_devdata *sm501_gpio_to_dev(struct sm501_gpio *gpio)
static int sm501_gpio_get(struct gpio_chip *chip, unsigned offset)
{
- struct sm501_gpio_chip *smgpio = to_sm501_gpio(chip);
+ struct sm501_gpio_chip *smgpio = gpiochip_get_data(chip);
unsigned long result;
result = smc501_readl(smgpio->regbase + SM501_GPIO_DATA_LOW);
@@ -923,7 +918,7 @@ static void sm501_gpio_ensure_gpio(struct sm501_gpio_chip *smchip,
static void sm501_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
{
- struct sm501_gpio_chip *smchip = to_sm501_gpio(chip);
+ struct sm501_gpio_chip *smchip = gpiochip_get_data(chip);
struct sm501_gpio *smgpio = smchip->ourgpio;
unsigned long bit = 1 << offset;
void __iomem *regs = smchip->regbase;
@@ -948,7 +943,7 @@ static void sm501_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
static int sm501_gpio_input(struct gpio_chip *chip, unsigned offset)
{
- struct sm501_gpio_chip *smchip = to_sm501_gpio(chip);
+ struct sm501_gpio_chip *smchip = gpiochip_get_data(chip);
struct sm501_gpio *smgpio = smchip->ourgpio;
void __iomem *regs = smchip->regbase;
unsigned long bit = 1 << offset;
@@ -974,7 +969,7 @@ static int sm501_gpio_input(struct gpio_chip *chip, unsigned offset)
static int sm501_gpio_output(struct gpio_chip *chip,
unsigned offset, int value)
{
- struct sm501_gpio_chip *smchip = to_sm501_gpio(chip);
+ struct sm501_gpio_chip *smchip = gpiochip_get_data(chip);
struct sm501_gpio *smgpio = smchip->ourgpio;
unsigned long bit = 1 << offset;
void __iomem *regs = smchip->regbase;
@@ -1039,7 +1034,7 @@ static int sm501_gpio_register_chip(struct sm501_devdata *sm,
gchip->base = base;
chip->ourgpio = gpio;
- return gpiochip_add(gchip);
+ return gpiochip_add_data(gchip, chip);
}
static int sm501_register_gpio(struct sm501_devdata *sm)
diff --git a/drivers/mfd/smsc-ece1099.c b/drivers/mfd/smsc-ece1099.c
index a4c0df71c8b3..7f89e89b8a5e 100644
--- a/drivers/mfd/smsc-ece1099.c
+++ b/drivers/mfd/smsc-ece1099.c
@@ -80,15 +80,6 @@ err:
return ret;
}
-static int smsc_i2c_remove(struct i2c_client *i2c)
-{
- struct smsc *smsc = i2c_get_clientdata(i2c);
-
- mfd_remove_devices(smsc->dev);
-
- return 0;
-}
-
static const struct i2c_device_id smsc_i2c_id[] = {
{ "smscece1099", 0},
{},
@@ -100,7 +91,6 @@ static struct i2c_driver smsc_i2c_driver = {
.name = "smsc",
},
.probe = smsc_i2c_probe,
- .remove = smsc_i2c_remove,
.id_table = smsc_i2c_id,
};
diff --git a/drivers/mfd/stw481x.c b/drivers/mfd/stw481x.c
index ca613df36143..ab949eaca6ad 100644
--- a/drivers/mfd/stw481x.c
+++ b/drivers/mfd/stw481x.c
@@ -206,8 +206,8 @@ static int stw481x_probe(struct i2c_client *client,
stw481x_cells[i].pdata_size = sizeof(*stw481x);
}
- ret = mfd_add_devices(&client->dev, 0, stw481x_cells,
- ARRAY_SIZE(stw481x_cells), NULL, 0, NULL);
+ ret = devm_mfd_add_devices(&client->dev, 0, stw481x_cells,
+ ARRAY_SIZE(stw481x_cells), NULL, 0, NULL);
if (ret)
return ret;
@@ -216,12 +216,6 @@ static int stw481x_probe(struct i2c_client *client,
return ret;
}
-static int stw481x_remove(struct i2c_client *client)
-{
- mfd_remove_devices(&client->dev);
- return 0;
-}
-
/*
* This ID table is completely unused, as this is a pure
* device-tree probed driver, but it has to be here due to
@@ -246,7 +240,6 @@ static struct i2c_driver stw481x_driver = {
.of_match_table = stw481x_match,
},
.probe = stw481x_probe,
- .remove = stw481x_remove,
.id_table = stw481x_id,
};
diff --git a/drivers/mfd/tc6393xb.c b/drivers/mfd/tc6393xb.c
index 1ecbfa40d1b3..d42d322ac7ca 100644
--- a/drivers/mfd/tc6393xb.c
+++ b/drivers/mfd/tc6393xb.c
@@ -24,7 +24,7 @@
#include <linux/mfd/core.h>
#include <linux/mfd/tmio.h>
#include <linux/mfd/tc6393xb.h>
-#include <linux/gpio.h>
+#include <linux/gpio/driver.h>
#include <linux/slab.h>
#define SCR_REVID 0x08 /* b Revision ID */
@@ -434,7 +434,7 @@ static struct mfd_cell tc6393xb_cells[] = {
static int tc6393xb_gpio_get(struct gpio_chip *chip,
unsigned offset)
{
- struct tc6393xb *tc6393xb = container_of(chip, struct tc6393xb, gpio);
+ struct tc6393xb *tc6393xb = gpiochip_get_data(chip);
/* XXX: does dsr also represent inputs? */
return !!(tmio_ioread8(tc6393xb->scr + SCR_GPO_DSR(offset / 8))
@@ -444,7 +444,7 @@ static int tc6393xb_gpio_get(struct gpio_chip *chip,
static void __tc6393xb_gpio_set(struct gpio_chip *chip,
unsigned offset, int value)
{
- struct tc6393xb *tc6393xb = container_of(chip, struct tc6393xb, gpio);
+ struct tc6393xb *tc6393xb = gpiochip_get_data(chip);
u8 dsr;
dsr = tmio_ioread8(tc6393xb->scr + SCR_GPO_DSR(offset / 8));
@@ -459,7 +459,7 @@ static void __tc6393xb_gpio_set(struct gpio_chip *chip,
static void tc6393xb_gpio_set(struct gpio_chip *chip,
unsigned offset, int value)
{
- struct tc6393xb *tc6393xb = container_of(chip, struct tc6393xb, gpio);
+ struct tc6393xb *tc6393xb = gpiochip_get_data(chip);
unsigned long flags;
spin_lock_irqsave(&tc6393xb->lock, flags);
@@ -472,7 +472,7 @@ static void tc6393xb_gpio_set(struct gpio_chip *chip,
static int tc6393xb_gpio_direction_input(struct gpio_chip *chip,
unsigned offset)
{
- struct tc6393xb *tc6393xb = container_of(chip, struct tc6393xb, gpio);
+ struct tc6393xb *tc6393xb = gpiochip_get_data(chip);
unsigned long flags;
u8 doecr;
@@ -490,7 +490,7 @@ static int tc6393xb_gpio_direction_input(struct gpio_chip *chip,
static int tc6393xb_gpio_direction_output(struct gpio_chip *chip,
unsigned offset, int value)
{
- struct tc6393xb *tc6393xb = container_of(chip, struct tc6393xb, gpio);
+ struct tc6393xb *tc6393xb = gpiochip_get_data(chip);
unsigned long flags;
u8 doecr;
@@ -517,7 +517,7 @@ static int tc6393xb_register_gpio(struct tc6393xb *tc6393xb, int gpio_base)
tc6393xb->gpio.direction_input = tc6393xb_gpio_direction_input;
tc6393xb->gpio.direction_output = tc6393xb_gpio_direction_output;
- return gpiochip_add(&tc6393xb->gpio);
+ return gpiochip_add_data(&tc6393xb->gpio, tc6393xb);
}
/*--------------------------------------------------------------------------*/
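
This file follows the same conversion pattern applied to sm501 above and to tps65010, ucb1x00 and vexpress-sysreg below: the private structure is handed to gpiochip_add_data() at registration time, and each gpio_chip callback retrieves it with gpiochip_get_data() instead of container_of(). A minimal sketch, with hypothetical names:

        #include <linux/bitops.h>
        #include <linux/gpio/driver.h>
        #include <linux/io.h>

        struct example_chip {
                struct gpio_chip gpio;
                void __iomem *regs;
        };

        static int example_gpio_get(struct gpio_chip *chip, unsigned int offset)
        {
                /* Returns the pointer passed to gpiochip_add_data() below. */
                struct example_chip *ec = gpiochip_get_data(chip);

                return !!(readl(ec->regs) & BIT(offset));
        }

        static int example_register(struct example_chip *ec)
        {
                ec->gpio.get = example_gpio_get;
                /* ... other gpio_chip fields ... */

                return gpiochip_add_data(&ec->gpio, ec);
        }
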
diff --git a/drivers/mfd/tps6105x.c b/drivers/mfd/tps6105x.c
index 51c54951c220..baa12ea666fb 100644
--- a/drivers/mfd/tps6105x.c
+++ b/drivers/mfd/tps6105x.c
@@ -21,7 +21,6 @@
#include <linux/spinlock.h>
#include <linux/slab.h>
#include <linux/err.h>
-#include <linux/regulator/driver.h>
#include <linux/mfd/core.h>
#include <linux/mfd/tps6105x.h>
diff --git a/drivers/mfd/tps65010.c b/drivers/mfd/tps65010.c
index 495e4518fc29..d829a6131f09 100644
--- a/drivers/mfd/tps65010.c
+++ b/drivers/mfd/tps65010.c
@@ -34,7 +34,7 @@
#include <linux/i2c/tps65010.h>
-#include <linux/gpio.h>
+#include <linux/gpio/driver.h>
/*-------------------------------------------------------------------------*/
@@ -477,7 +477,7 @@ tps65010_output(struct gpio_chip *chip, unsigned offset, int value)
if (offset < 4) {
struct tps65010 *tps;
- tps = container_of(chip, struct tps65010, chip);
+ tps = gpiochip_get_data(chip);
if (!(tps->outmask & (1 << offset)))
return -EINVAL;
tps65010_set_gpio_out_value(offset + 1, value);
@@ -494,7 +494,7 @@ static int tps65010_gpio_get(struct gpio_chip *chip, unsigned offset)
int value;
struct tps65010 *tps;
- tps = container_of(chip, struct tps65010, chip);
+ tps = gpiochip_get_data(chip);
if (offset < 4) {
value = i2c_smbus_read_byte_data(tps->client, TPS_DEFGPIO);
@@ -651,7 +651,7 @@ static int tps65010_probe(struct i2c_client *client,
tps->chip.ngpio = 7;
tps->chip.can_sleep = 1;
- status = gpiochip_add(&tps->chip);
+ status = gpiochip_add_data(&tps->chip, tps);
if (status < 0)
dev_err(&client->dev, "can't add gpiochip, err %d\n",
status);
diff --git a/drivers/mfd/tps6507x.c b/drivers/mfd/tps6507x.c
index 1ab3dd6c8adf..40beb2f4350c 100644
--- a/drivers/mfd/tps6507x.c
+++ b/drivers/mfd/tps6507x.c
@@ -100,16 +100,8 @@ static int tps6507x_i2c_probe(struct i2c_client *i2c,
tps6507x->read_dev = tps6507x_i2c_read_device;
tps6507x->write_dev = tps6507x_i2c_write_device;
- return mfd_add_devices(tps6507x->dev, -1, tps6507x_devs,
- ARRAY_SIZE(tps6507x_devs), NULL, 0, NULL);
-}
-
-static int tps6507x_i2c_remove(struct i2c_client *i2c)
-{
- struct tps6507x_dev *tps6507x = i2c_get_clientdata(i2c);
-
- mfd_remove_devices(tps6507x->dev);
- return 0;
+ return devm_mfd_add_devices(tps6507x->dev, -1, tps6507x_devs,
+ ARRAY_SIZE(tps6507x_devs), NULL, 0, NULL);
}
static const struct i2c_device_id tps6507x_i2c_id[] = {
@@ -132,7 +124,6 @@ static struct i2c_driver tps6507x_i2c_driver = {
.of_match_table = of_match_ptr(tps6507x_of_match),
},
.probe = tps6507x_i2c_probe,
- .remove = tps6507x_i2c_remove,
.id_table = tps6507x_i2c_id,
};
diff --git a/drivers/mfd/tps65217.c b/drivers/mfd/tps65217.c
index d32b54426b70..049a6fcac651 100644
--- a/drivers/mfd/tps65217.c
+++ b/drivers/mfd/tps65217.c
@@ -205,8 +205,8 @@ static int tps65217_probe(struct i2c_client *client,
return ret;
}
- ret = mfd_add_devices(tps->dev, -1, tps65217s,
- ARRAY_SIZE(tps65217s), NULL, 0, NULL);
+ ret = devm_mfd_add_devices(tps->dev, -1, tps65217s,
+ ARRAY_SIZE(tps65217s), NULL, 0, NULL);
if (ret < 0) {
dev_err(tps->dev, "mfd_add_devices failed: %d\n", ret);
return ret;
@@ -235,15 +235,6 @@ static int tps65217_probe(struct i2c_client *client,
return 0;
}
-static int tps65217_remove(struct i2c_client *client)
-{
- struct tps65217 *tps = i2c_get_clientdata(client);
-
- mfd_remove_devices(tps->dev);
-
- return 0;
-}
-
static const struct i2c_device_id tps65217_id_table[] = {
{"tps65217", TPS65217},
{ /* sentinel */ }
@@ -257,7 +248,6 @@ static struct i2c_driver tps65217_driver = {
},
.id_table = tps65217_id_table,
.probe = tps65217_probe,
- .remove = tps65217_remove,
};
static int __init tps65217_init(void)
diff --git a/drivers/mfd/tps65910.c b/drivers/mfd/tps65910.c
index f7ab115483a9..11cab1582f2f 100644
--- a/drivers/mfd/tps65910.c
+++ b/drivers/mfd/tps65910.c
@@ -252,9 +252,10 @@ static int tps65910_irq_init(struct tps65910 *tps65910, int irq,
}
tps65910->chip_irq = irq;
- ret = regmap_add_irq_chip(tps65910->regmap, tps65910->chip_irq,
- IRQF_ONESHOT, pdata->irq_base,
- tps6591x_irqs_chip, &tps65910->irq_data);
+ ret = devm_regmap_add_irq_chip(tps65910->dev, tps65910->regmap,
+ tps65910->chip_irq,
+ IRQF_ONESHOT, pdata->irq_base,
+ tps6591x_irqs_chip, &tps65910->irq_data);
if (ret < 0) {
dev_warn(tps65910->dev, "Failed to add irq_chip %d\n", ret);
tps65910->chip_irq = 0;
@@ -262,13 +263,6 @@ static int tps65910_irq_init(struct tps65910 *tps65910, int irq,
return ret;
}
-static int tps65910_irq_exit(struct tps65910 *tps65910)
-{
- if (tps65910->chip_irq > 0)
- regmap_del_irq_chip(tps65910->chip_irq, tps65910->irq_data);
- return 0;
-}
-
static bool is_volatile_reg(struct device *dev, unsigned int reg)
{
struct tps65910 *tps65910 = dev_get_drvdata(dev);
@@ -510,29 +504,18 @@ static int tps65910_i2c_probe(struct i2c_client *i2c,
pm_power_off = tps65910_power_off;
}
- ret = mfd_add_devices(tps65910->dev, -1,
- tps65910s, ARRAY_SIZE(tps65910s),
- NULL, 0,
- regmap_irq_get_domain(tps65910->irq_data));
+ ret = devm_mfd_add_devices(tps65910->dev, -1,
+ tps65910s, ARRAY_SIZE(tps65910s),
+ NULL, 0,
+ regmap_irq_get_domain(tps65910->irq_data));
if (ret < 0) {
dev_err(&i2c->dev, "mfd_add_devices failed: %d\n", ret);
- tps65910_irq_exit(tps65910);
return ret;
}
return ret;
}
-static int tps65910_i2c_remove(struct i2c_client *i2c)
-{
- struct tps65910 *tps65910 = i2c_get_clientdata(i2c);
-
- tps65910_irq_exit(tps65910);
- mfd_remove_devices(tps65910->dev);
-
- return 0;
-}
-
static const struct i2c_device_id tps65910_i2c_id[] = {
{ "tps65910", TPS65910 },
{ "tps65911", TPS65911 },
@@ -547,7 +530,6 @@ static struct i2c_driver tps65910_i2c_driver = {
.of_match_table = of_match_ptr(tps65910_of_match),
},
.probe = tps65910_i2c_probe,
- .remove = tps65910_i2c_remove,
.id_table = tps65910_i2c_id,
};
diff --git a/drivers/mfd/twl4030-irq.c b/drivers/mfd/twl4030-irq.c
index 40e51b0baa46..b46c0cfc27d9 100644
--- a/drivers/mfd/twl4030-irq.c
+++ b/drivers/mfd/twl4030-irq.c
@@ -696,7 +696,7 @@ int twl4030_init_irq(struct device *dev, int irq_num)
nr_irqs = TWL4030_PWR_NR_IRQS + TWL4030_CORE_NR_IRQS;
irq_base = irq_alloc_descs(-1, 0, nr_irqs, 0);
- if (IS_ERR_VALUE(irq_base)) {
+ if (irq_base < 0) {
dev_err(dev, "Fail to allocate IRQ descs\n");
return irq_base;
}
diff --git a/drivers/mfd/twl4030-power.c b/drivers/mfd/twl4030-power.c
index 04b539850e72..1beb722f6080 100644
--- a/drivers/mfd/twl4030-power.c
+++ b/drivers/mfd/twl4030-power.c
@@ -1,5 +1,4 @@
/*
- * linux/drivers/i2c/chips/twl4030-power.c
*
* Handle TWL4030 Power initialization
*
diff --git a/drivers/mfd/twl6040.c b/drivers/mfd/twl6040.c
index 08a693cd38cc..852d5874aabb 100644
--- a/drivers/mfd/twl6040.c
+++ b/drivers/mfd/twl6040.c
@@ -291,7 +291,11 @@ int twl6040_power(struct twl6040 *twl6040, int on)
if (twl6040->power_count++)
goto out;
- clk_prepare_enable(twl6040->clk32k);
+ ret = clk_prepare_enable(twl6040->clk32k);
+ if (ret) {
+ twl6040->power_count = 0;
+ goto out;
+ }
/* Allow writes to the chip */
regcache_cache_only(twl6040->regmap, false);
@@ -300,6 +304,7 @@ int twl6040_power(struct twl6040 *twl6040, int on)
/* use automatic power-up sequence */
ret = twl6040_power_up_automatic(twl6040);
if (ret) {
+ clk_disable_unprepare(twl6040->clk32k);
twl6040->power_count = 0;
goto out;
}
@@ -307,6 +312,7 @@ int twl6040_power(struct twl6040 *twl6040, int on)
/* use manual power-up sequence */
ret = twl6040_power_up_manual(twl6040);
if (ret) {
+ clk_disable_unprepare(twl6040->clk32k);
twl6040->power_count = 0;
goto out;
}
diff --git a/drivers/mfd/ucb1x00-core.c b/drivers/mfd/ucb1x00-core.c
index bcafe1ecd71c..9ab9ec47ea75 100644
--- a/drivers/mfd/ucb1x00-core.c
+++ b/drivers/mfd/ucb1x00-core.c
@@ -28,7 +28,7 @@
#include <linux/mutex.h>
#include <linux/mfd/ucb1x00.h>
#include <linux/pm.h>
-#include <linux/gpio.h>
+#include <linux/gpio/driver.h>
static DEFINE_MUTEX(ucb1x00_mutex);
static LIST_HEAD(ucb1x00_drivers);
@@ -109,7 +109,7 @@ unsigned int ucb1x00_io_read(struct ucb1x00 *ucb)
static void ucb1x00_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
{
- struct ucb1x00 *ucb = container_of(chip, struct ucb1x00, gpio);
+ struct ucb1x00 *ucb = gpiochip_get_data(chip);
unsigned long flags;
spin_lock_irqsave(&ucb->io_lock, flags);
@@ -126,7 +126,7 @@ static void ucb1x00_gpio_set(struct gpio_chip *chip, unsigned offset, int value)
static int ucb1x00_gpio_get(struct gpio_chip *chip, unsigned offset)
{
- struct ucb1x00 *ucb = container_of(chip, struct ucb1x00, gpio);
+ struct ucb1x00 *ucb = gpiochip_get_data(chip);
unsigned val;
ucb1x00_enable(ucb);
@@ -138,7 +138,7 @@ static int ucb1x00_gpio_get(struct gpio_chip *chip, unsigned offset)
static int ucb1x00_gpio_direction_input(struct gpio_chip *chip, unsigned offset)
{
- struct ucb1x00 *ucb = container_of(chip, struct ucb1x00, gpio);
+ struct ucb1x00 *ucb = gpiochip_get_data(chip);
unsigned long flags;
spin_lock_irqsave(&ucb->io_lock, flags);
@@ -154,7 +154,7 @@ static int ucb1x00_gpio_direction_input(struct gpio_chip *chip, unsigned offset)
static int ucb1x00_gpio_direction_output(struct gpio_chip *chip, unsigned offset
, int value)
{
- struct ucb1x00 *ucb = container_of(chip, struct ucb1x00, gpio);
+ struct ucb1x00 *ucb = gpiochip_get_data(chip);
unsigned long flags;
unsigned old, mask = 1 << offset;
@@ -181,7 +181,7 @@ static int ucb1x00_gpio_direction_output(struct gpio_chip *chip, unsigned offset
static int ucb1x00_to_irq(struct gpio_chip *chip, unsigned offset)
{
- struct ucb1x00 *ucb = container_of(chip, struct ucb1x00, gpio);
+ struct ucb1x00 *ucb = gpiochip_get_data(chip);
return ucb->irq_base > 0 ? ucb->irq_base + offset : -ENXIO;
}
@@ -579,7 +579,7 @@ static int ucb1x00_probe(struct mcp *mcp)
ucb->gpio.direction_input = ucb1x00_gpio_direction_input;
ucb->gpio.direction_output = ucb1x00_gpio_direction_output;
ucb->gpio.to_irq = ucb1x00_to_irq;
- ret = gpiochip_add(&ucb->gpio);
+ ret = gpiochip_add_data(&ucb->gpio, ucb);
if (ret)
goto err_gpio_add;
} else
diff --git a/drivers/mfd/vexpress-sysreg.c b/drivers/mfd/vexpress-sysreg.c
index 855c0204f09a..201a3ea2a9d3 100644
--- a/drivers/mfd/vexpress-sysreg.c
+++ b/drivers/mfd/vexpress-sysreg.c
@@ -202,7 +202,7 @@ static int vexpress_sysreg_probe(struct platform_device *pdev)
bgpio_init(mmc_gpio_chip, &pdev->dev, 0x4, base + SYS_MCI,
NULL, NULL, NULL, NULL, 0);
mmc_gpio_chip->ngpio = 2;
- gpiochip_add(mmc_gpio_chip);
+ gpiochip_add_data(mmc_gpio_chip, NULL);
return mfd_add_devices(&pdev->dev, PLATFORM_DEVID_AUTO,
vexpress_sysreg_cells,
diff --git a/drivers/mfd/wl1273-core.c b/drivers/mfd/wl1273-core.c
index f7c52d901040..708046592b33 100644
--- a/drivers/mfd/wl1273-core.c
+++ b/drivers/mfd/wl1273-core.c
@@ -170,15 +170,6 @@ static int wl1273_fm_set_volume(struct wl1273_core *core, unsigned int volume)
return 0;
}
-static int wl1273_core_remove(struct i2c_client *client)
-{
- dev_dbg(&client->dev, "%s\n", __func__);
-
- mfd_remove_devices(&client->dev);
-
- return 0;
-}
-
static int wl1273_core_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
@@ -237,8 +228,8 @@ static int wl1273_core_probe(struct i2c_client *client,
dev_dbg(&client->dev, "%s: number of children: %d.\n",
__func__, children);
- r = mfd_add_devices(&client->dev, -1, core->cells,
- children, NULL, 0, NULL);
+ r = devm_mfd_add_devices(&client->dev, -1, core->cells,
+ children, NULL, 0, NULL);
if (r)
goto err;
@@ -258,7 +249,6 @@ static struct i2c_driver wl1273_core_driver = {
},
.probe = wl1273_core_probe,
.id_table = wl1273_driver_id_table,
- .remove = wl1273_core_remove,
};
static int __init wl1273_core_init(void)
diff --git a/drivers/mfd/wm5110-tables.c b/drivers/mfd/wm5110-tables.c
index 8e74e71507e7..1ee68bd440fb 100644
--- a/drivers/mfd/wm5110-tables.c
+++ b/drivers/mfd/wm5110-tables.c
@@ -3066,6 +3066,7 @@ static bool wm5110_volatile_register(struct device *dev, unsigned int reg)
case ARIZONA_AOD_IRQ_RAW_STATUS:
case ARIZONA_FX_CTRL2:
case ARIZONA_ASRC_STATUS:
+ case ARIZONA_CLOCK_CONTROL:
case ARIZONA_DSP_STATUS:
case ARIZONA_DSP1_STATUS_1:
case ARIZONA_DSP1_STATUS_2:
diff --git a/drivers/mfd/wm8400-core.c b/drivers/mfd/wm8400-core.c
index 3bd44a45c378..8a98a2fc74e1 100644
--- a/drivers/mfd/wm8400-core.c
+++ b/drivers/mfd/wm8400-core.c
@@ -35,27 +35,6 @@ static bool wm8400_volatile(struct device *dev, unsigned int reg)
}
}
-/**
- * wm8400_reg_read - Single register read
- *
- * @wm8400: Pointer to wm8400 control structure
- * @reg: Register to read
- *
- * @return Read value
- */
-u16 wm8400_reg_read(struct wm8400 *wm8400, u8 reg)
-{
- unsigned int val;
- int ret;
-
- ret = regmap_read(wm8400->regmap, reg, &val);
- if (ret < 0)
- return ret;
-
- return val;
-}
-EXPORT_SYMBOL_GPL(wm8400_reg_read);
-
int wm8400_block_read(struct wm8400 *wm8400, u8 reg, int count, u16 *data)
{
return regmap_bulk_read(wm8400->regmap, reg, data, count);
@@ -70,7 +49,7 @@ static int wm8400_register_codec(struct wm8400 *wm8400)
.pdata_size = sizeof(*wm8400),
};
- return mfd_add_devices(wm8400->dev, -1, &cell, 1, NULL, 0, NULL);
+ return devm_mfd_add_devices(wm8400->dev, -1, &cell, 1, NULL, 0, NULL);
}
/*
@@ -111,7 +90,7 @@ static int wm8400_init(struct wm8400 *wm8400,
ret = wm8400_register_codec(wm8400);
if (ret != 0) {
dev_err(wm8400->dev, "Failed to register codec\n");
- goto err_children;
+ return ret;
}
if (pdata && pdata->platform_init) {
@@ -119,21 +98,12 @@ static int wm8400_init(struct wm8400 *wm8400,
if (ret != 0) {
dev_err(wm8400->dev, "Platform init failed: %d\n",
ret);
- goto err_children;
+ return ret;
}
} else
dev_warn(wm8400->dev, "No platform initialisation supplied\n");
return 0;
-
-err_children:
- mfd_remove_devices(wm8400->dev);
- return ret;
-}
-
-static void wm8400_release(struct wm8400 *wm8400)
-{
- mfd_remove_devices(wm8400->dev);
}
static const struct regmap_config wm8400_regmap_config = {
@@ -156,7 +126,7 @@ void wm8400_reset_codec_reg_cache(struct wm8400 *wm8400)
}
EXPORT_SYMBOL_GPL(wm8400_reset_codec_reg_cache);
-#if defined(CONFIG_I2C) || defined(CONFIG_I2C_MODULE)
+#if IS_ENABLED(CONFIG_I2C)
static int wm8400_i2c_probe(struct i2c_client *i2c,
const struct i2c_device_id *id)
{
@@ -176,15 +146,6 @@ static int wm8400_i2c_probe(struct i2c_client *i2c,
return wm8400_init(wm8400, dev_get_platdata(&i2c->dev));
}
-static int wm8400_i2c_remove(struct i2c_client *i2c)
-{
- struct wm8400 *wm8400 = i2c_get_clientdata(i2c);
-
- wm8400_release(wm8400);
-
- return 0;
-}
-
static const struct i2c_device_id wm8400_i2c_id[] = {
{ "wm8400", 0 },
{ }
@@ -196,7 +157,6 @@ static struct i2c_driver wm8400_i2c_driver = {
.name = "WM8400",
},
.probe = wm8400_i2c_probe,
- .remove = wm8400_i2c_remove,
.id_table = wm8400_i2c_id,
};
#endif
@@ -205,7 +165,7 @@ static int __init wm8400_module_init(void)
{
int ret = -ENODEV;
-#if defined(CONFIG_I2C) || defined(CONFIG_I2C_MODULE)
+#if IS_ENABLED(CONFIG_I2C)
ret = i2c_add_driver(&wm8400_i2c_driver);
if (ret != 0)
pr_err("Failed to register I2C driver: %d\n", ret);
@@ -217,7 +177,7 @@ subsys_initcall(wm8400_module_init);
static void __exit wm8400_module_exit(void)
{
-#if defined(CONFIG_I2C) || defined(CONFIG_I2C_MODULE)
+#if IS_ENABLED(CONFIG_I2C)
i2c_del_driver(&wm8400_i2c_driver);
#endif
}
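The preprocessor changes above switch to IS_ENABLED(), which evaluates true for both built-in (=y) and modular (=m) configurations and so replaces the older two-macro test. Roughly, with an invented driver as the example:

/* IS_ENABLED(CONFIG_I2C) covers the same cases as the old
 *   defined(CONFIG_I2C) || defined(CONFIG_I2C_MODULE)
 * test, in one readable expression. */
#if IS_ENABLED(CONFIG_I2C)
static struct i2c_driver example_i2c_driver = {
	.driver = { .name = "example" },
};

static int example_register_i2c(void)
{
	return i2c_add_driver(&example_i2c_driver);
}
#else
static inline int example_register_i2c(void) { return -ENODEV; }
#endif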
diff --git a/drivers/misc/cxl/api.c b/drivers/misc/cxl/api.c
index 2107c948406d..6d228ccd884d 100644
--- a/drivers/misc/cxl/api.c
+++ b/drivers/misc/cxl/api.c
@@ -68,15 +68,6 @@ struct cxl_context *cxl_get_context(struct pci_dev *dev)
}
EXPORT_SYMBOL_GPL(cxl_get_context);
-struct device *cxl_get_phys_dev(struct pci_dev *dev)
-{
- struct cxl_afu *afu;
-
- afu = cxl_pci_to_afu(dev);
-
- return afu->adapter->dev.parent;
-}
-
int cxl_release_context(struct cxl_context *ctx)
{
if (ctx->status >= STARTED)
@@ -192,6 +183,7 @@ int cxl_start_context(struct cxl_context *ctx, u64 wed,
ctx->pid = get_task_pid(task, PIDTYPE_PID);
ctx->glpid = get_task_pid(task->group_leader, PIDTYPE_PID);
kernel = false;
+ ctx->real_mode = false;
}
cxl_ctx_get();
@@ -228,6 +220,24 @@ void cxl_set_master(struct cxl_context *ctx)
}
EXPORT_SYMBOL_GPL(cxl_set_master);
+int cxl_set_translation_mode(struct cxl_context *ctx, bool real_mode)
+{
+ if (ctx->status == STARTED) {
+ /*
+ * We could potentially update the PE and issue an update LLCMD
+ * to support this, but it doesn't seem to have a good use case
+ * since it's trivial to just create a second kernel context
+ * with different translation modes, so until someone convinces
+ * me otherwise:
+ */
+ return -EBUSY;
+ }
+
+ ctx->real_mode = real_mode;
+ return 0;
+}
+EXPORT_SYMBOL_GPL(cxl_set_translation_mode);
+
/* wrappers around afu_* file ops which are EXPORTED */
int cxl_fd_open(struct inode *inode, struct file *file)
{
diff --git a/drivers/misc/cxl/context.c b/drivers/misc/cxl/context.c
index 7edea9c19199..26d206b1d08c 100644
--- a/drivers/misc/cxl/context.c
+++ b/drivers/misc/cxl/context.c
@@ -297,8 +297,7 @@ static void reclaim_ctx(struct rcu_head *rcu)
if (ctx->kernelapi)
kfree(ctx->mapping);
- if (ctx->irq_bitmap)
- kfree(ctx->irq_bitmap);
+ kfree(ctx->irq_bitmap);
/* Drop ref to the afu device taken during cxl_context_init */
cxl_afu_put(ctx->afu);
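The context.c hunk is the common cleanup simplification: kfree() already treats a NULL pointer as a no-op, so the surrounding NULL check is redundant. Condensed, with made-up field names:

/* needs <linux/slab.h> */
struct example_ctx {
	unsigned long *irq_bitmap;
};

static void example_free_resources(struct example_ctx *ctx)
{
	kfree(ctx->irq_bitmap);	/* safe even when irq_bitmap is NULL */
	ctx->irq_bitmap = NULL;
}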
diff --git a/drivers/misc/cxl/cxl.h b/drivers/misc/cxl/cxl.h
index 73dc2a33da74..4fe50788ff45 100644
--- a/drivers/misc/cxl/cxl.h
+++ b/drivers/misc/cxl/cxl.h
@@ -178,15 +178,6 @@ static const cxl_p2n_reg_t CXL_PSL_WED_An = {0x0A0};
#define CXL_PSL_SR_An_MP (1ull << (63-62)) /* Master Process */
#define CXL_PSL_SR_An_LE (1ull << (63-63)) /* Little Endian */
-/****** CXL_PSL_LLCMD_An ****************************************************/
-#define CXL_LLCMD_TERMINATE 0x0001000000000000ULL
-#define CXL_LLCMD_REMOVE 0x0002000000000000ULL
-#define CXL_LLCMD_SUSPEND 0x0003000000000000ULL
-#define CXL_LLCMD_RESUME 0x0004000000000000ULL
-#define CXL_LLCMD_ADD 0x0005000000000000ULL
-#define CXL_LLCMD_UPDATE 0x0006000000000000ULL
-#define CXL_LLCMD_HANDLE_MASK 0x000000000000ffffULL
-
/****** CXL_PSL_ID_An ****************************************************/
#define CXL_PSL_ID_An_F (1ull << (63-31))
#define CXL_PSL_ID_An_L (1ull << (63-30))
@@ -376,11 +367,13 @@ struct cxl_afu_native {
};
struct cxl_afu_guest {
+ struct cxl_afu *parent;
u64 handle;
phys_addr_t p2n_phys;
u64 p2n_size;
int max_ints;
- struct mutex recovery_lock;
+ bool handle_err;
+ struct delayed_work work_err;
int previous_state;
};
@@ -524,6 +517,7 @@ struct cxl_context {
bool pe_inserted;
bool master;
bool kernel;
+ bool real_mode;
bool pending_irq;
bool pending_fault;
bool pending_afu_err;
@@ -580,6 +574,7 @@ struct cxl {
bool perst_loads_image;
bool perst_select_user;
bool perst_same_image;
+ bool psl_timebase_synced;
};
int cxl_pci_alloc_one_irq(struct cxl *adapter);
diff --git a/drivers/misc/cxl/fault.c b/drivers/misc/cxl/fault.c
index 9a8650bcb042..377e650a2a1d 100644
--- a/drivers/misc/cxl/fault.c
+++ b/drivers/misc/cxl/fault.c
@@ -149,11 +149,13 @@ static void cxl_handle_page_fault(struct cxl_context *ctx,
* update_mmu_cache() will not have loaded the hash since current->trap
* is not a 0x400 or 0x300, so just call hash_page_mm() here.
*/
- access = _PAGE_PRESENT;
+ access = _PAGE_PRESENT | _PAGE_READ;
if (dsisr & CXL_PSL_DSISR_An_S)
- access |= _PAGE_RW;
- if ((!ctx->kernel) || ~(dar & (1ULL << 63)))
- access |= _PAGE_USER;
+ access |= _PAGE_WRITE;
+
+ access |= _PAGE_PRIVILEGED;
+ if ((!ctx->kernel) || (REGION_ID(dar) == USER_REGION_ID))
+ access &= ~_PAGE_PRIVILEGED;
if (dsisr & DSISR_NOHPTE)
inv_flags |= HPTE_NOHPTE_UPDATE;
diff --git a/drivers/misc/cxl/guest.c b/drivers/misc/cxl/guest.c
index 8213372de2b7..bc8d0b9870eb 100644
--- a/drivers/misc/cxl/guest.c
+++ b/drivers/misc/cxl/guest.c
@@ -178,6 +178,9 @@ static int afu_read_error_state(struct cxl_afu *afu, int *state_out)
u64 state;
int rc = 0;
+ if (!afu)
+ return -EIO;
+
rc = cxl_h_read_error_state(afu->guest->handle, &state);
if (!rc) {
WARN_ON(state != H_STATE_NORMAL &&
@@ -552,6 +555,17 @@ static int attach_afu_directed(struct cxl_context *ctx, u64 wed, u64 amr)
elem->common.sstp0 = cpu_to_be64(ctx->sstp0);
elem->common.sstp1 = cpu_to_be64(ctx->sstp1);
+
+ /*
+ * Ensure we have at least one interrupt allocated to take faults for
+ * kernel contexts that may not have allocated any AFU IRQs at all:
+ */
+ if (ctx->irqs.range[0] == 0) {
+ rc = afu_register_irqs(ctx, 0);
+ if (rc)
+ goto out_free;
+ }
+
for (r = 0; r < CXL_IRQ_RANGES; r++) {
for (i = 0; i < ctx->irqs.range[r]; i++) {
if (r == 0 && i == 0) {
@@ -597,6 +611,7 @@ static int attach_afu_directed(struct cxl_context *ctx, u64 wed, u64 amr)
enable_afu_irqs(ctx);
}
+out_free:
free_page((u64)elem);
return rc;
}
@@ -605,6 +620,9 @@ static int guest_attach_process(struct cxl_context *ctx, bool kernel, u64 wed, u
{
pr_devel("in %s\n", __func__);
+ if (ctx->real_mode)
+ return -EPERM;
+
ctx->kernel = kernel;
if (ctx->afu->current_mode == CXL_MODE_DIRECTED)
return attach_afu_directed(ctx, wed, amr);
@@ -818,7 +836,6 @@ static int afu_update_state(struct cxl_afu *afu)
switch (cur_state) {
case H_STATE_NORMAL:
afu->guest->previous_state = cur_state;
- rc = 1;
break;
case H_STATE_DISABLE:
@@ -834,7 +851,6 @@ static int afu_update_state(struct cxl_afu *afu)
pci_error_handlers(afu, CXL_SLOT_RESET_EVENT,
pci_channel_io_normal);
pci_error_handlers(afu, CXL_RESUME_EVENT, 0);
- rc = 1;
}
afu->guest->previous_state = 0;
break;
@@ -859,39 +875,30 @@ static int afu_update_state(struct cxl_afu *afu)
return rc;
}
-static int afu_do_recovery(struct cxl_afu *afu)
+static void afu_handle_errstate(struct work_struct *work)
{
- int rc;
+ struct cxl_afu_guest *afu_guest =
+ container_of(to_delayed_work(work), struct cxl_afu_guest, work_err);
- /* many threads can arrive here, in case of detach_all for example.
- * Only one needs to drive the recovery
- */
- if (mutex_trylock(&afu->guest->recovery_lock)) {
- rc = afu_update_state(afu);
- mutex_unlock(&afu->guest->recovery_lock);
- return rc;
- }
- return 0;
+ if (!afu_update_state(afu_guest->parent) &&
+ afu_guest->previous_state == H_STATE_PERM_UNAVAILABLE)
+ return;
+
+ if (afu_guest->handle_err == true)
+ schedule_delayed_work(&afu_guest->work_err,
+ msecs_to_jiffies(3000));
}
static bool guest_link_ok(struct cxl *cxl, struct cxl_afu *afu)
{
int state;
- if (afu) {
- if (afu_read_error_state(afu, &state) ||
- state != H_STATE_NORMAL) {
- if (afu_do_recovery(afu) > 0) {
- /* check again in case we've just fixed it */
- if (!afu_read_error_state(afu, &state) &&
- state == H_STATE_NORMAL)
- return true;
- }
- return false;
- }
+ if (afu && (!afu_read_error_state(afu, &state))) {
+ if (state == H_STATE_NORMAL)
+ return true;
}
- return true;
+ return false;
}
static int afu_properties_look_ok(struct cxl_afu *afu)
@@ -929,8 +936,6 @@ int cxl_guest_init_afu(struct cxl *adapter, int slice, struct device_node *afu_n
return -ENOMEM;
}
- mutex_init(&afu->guest->recovery_lock);
-
if ((rc = dev_set_name(&afu->dev, "afu%i.%i",
adapter->adapter_num,
slice)))
@@ -986,6 +991,15 @@ int cxl_guest_init_afu(struct cxl *adapter, int slice, struct device_node *afu_n
afu->enabled = true;
+ /*
+	 * Wake up the CPU periodically to check the state
+ * of the AFU using "afu" stored in the guest structure.
+ */
+ afu->guest->parent = afu;
+ afu->guest->handle_err = true;
+ INIT_DELAYED_WORK(&afu->guest->work_err, afu_handle_errstate);
+ schedule_delayed_work(&afu->guest->work_err, msecs_to_jiffies(1000));
+
if ((rc = cxl_pci_vphb_add(afu)))
dev_info(&afu->dev, "Can't register vPHB\n");
@@ -1014,6 +1028,10 @@ void cxl_guest_remove_afu(struct cxl_afu *afu)
if (!afu)
return;
+ /* flush and stop pending job */
+ afu->guest->handle_err = false;
+ flush_delayed_work(&afu->guest->work_err);
+
cxl_pci_vphb_remove(afu);
cxl_sysfs_afu_remove(afu);
@@ -1101,6 +1119,12 @@ struct cxl *cxl_guest_init_adapter(struct device_node *np, struct platform_devic
adapter->dev.release = release_adapter;
dev_set_drvdata(&pdev->dev, adapter);
+ /*
+ * Hypervisor controls PSL timebase initialization (p1 register).
+ * On FW840, PSL is initialized.
+ */
+ adapter->psl_timebase_synced = true;
+
if ((rc = cxl_of_read_adapter_handle(adapter, np)))
goto err1;
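The guest error handling above trades the trylock-guarded recovery path for a self-rearming delayed work item; the remove path clears a flag and flushes the work before tearing anything else down. A rough sketch of that pattern, with invented names and a placeholder state check:

/* needs <linux/workqueue.h> and <linux/jiffies.h> */
struct example_afu {
	bool handle_err;
	struct delayed_work work_err;
};

static void example_check_state(struct example_afu *afu)
{
	/* stand-in for the real error-state query and recovery */
}

static void example_handle_errstate(struct work_struct *work)
{
	struct example_afu *afu =
		container_of(to_delayed_work(work), struct example_afu, work_err);

	example_check_state(afu);

	if (afu->handle_err)	/* re-arm until the remover stops us */
		schedule_delayed_work(&afu->work_err, msecs_to_jiffies(3000));
}

static void example_start(struct example_afu *afu)
{
	afu->handle_err = true;
	INIT_DELAYED_WORK(&afu->work_err, example_handle_errstate);
	schedule_delayed_work(&afu->work_err, msecs_to_jiffies(1000));
}

static void example_stop(struct example_afu *afu)
{
	afu->handle_err = false;		/* stop re-arming ... */
	flush_delayed_work(&afu->work_err);	/* ... then wait for the last run */
}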
diff --git a/drivers/misc/cxl/native.c b/drivers/misc/cxl/native.c
index ecf7557cd657..55d8a1459f28 100644
--- a/drivers/misc/cxl/native.c
+++ b/drivers/misc/cxl/native.c
@@ -186,16 +186,25 @@ static int spa_max_procs(int spa_size)
int cxl_alloc_spa(struct cxl_afu *afu)
{
+ unsigned spa_size;
+
/* Work out how many pages to allocate */
afu->native->spa_order = 0;
do {
afu->native->spa_order++;
- afu->native->spa_size = (1 << afu->native->spa_order) * PAGE_SIZE;
+ spa_size = (1 << afu->native->spa_order) * PAGE_SIZE;
+
+ if (spa_size > 0x100000) {
+ dev_warn(&afu->dev, "num_of_processes too large for the SPA, limiting to %i (0x%x)\n",
+ afu->native->spa_max_procs, afu->native->spa_size);
+ afu->num_procs = afu->native->spa_max_procs;
+ break;
+ }
+
+ afu->native->spa_size = spa_size;
afu->native->spa_max_procs = spa_max_procs(afu->native->spa_size);
} while (afu->native->spa_max_procs < afu->num_procs);
- WARN_ON(afu->native->spa_size > 0x100000); /* Max size supported by the hardware */
-
if (!(afu->native->spa = (struct cxl_process_element *)
__get_free_pages(GFP_KERNEL | __GFP_ZERO, afu->native->spa_order))) {
pr_err("cxl_alloc_spa: Unable to allocate scheduled process area\n");
@@ -486,8 +495,9 @@ static u64 calculate_sr(struct cxl_context *ctx)
if (mfspr(SPRN_LPCR) & LPCR_TC)
sr |= CXL_PSL_SR_An_TC;
if (ctx->kernel) {
- sr |= CXL_PSL_SR_An_R | (mfmsr() & MSR_SF);
- sr |= CXL_PSL_SR_An_HV;
+ if (!ctx->real_mode)
+ sr |= CXL_PSL_SR_An_R;
+ sr |= (mfmsr() & MSR_SF) | CXL_PSL_SR_An_HV;
} else {
sr |= CXL_PSL_SR_An_PR | CXL_PSL_SR_An_R;
sr &= ~(CXL_PSL_SR_An_HV);
@@ -526,6 +536,15 @@ static int attach_afu_directed(struct cxl_context *ctx, u64 wed, u64 amr)
ctx->elem->common.sstp0 = cpu_to_be64(ctx->sstp0);
ctx->elem->common.sstp1 = cpu_to_be64(ctx->sstp1);
+ /*
+ * Ensure we have the multiplexed PSL interrupt set up to take faults
+ * for kernel contexts that may not have allocated any AFU IRQs at all:
+ */
+ if (ctx->irqs.range[0] == 0) {
+ ctx->irqs.offset[0] = ctx->afu->native->psl_hwirq;
+ ctx->irqs.range[0] = 1;
+ }
+
for (r = 0; r < CXL_IRQ_RANGES; r++) {
ctx->elem->ivte_offsets[r] = cpu_to_be16(ctx->irqs.offset[r]);
ctx->elem->ivte_ranges[r] = cpu_to_be16(ctx->irqs.range[r]);
diff --git a/drivers/misc/cxl/pci.c b/drivers/misc/cxl/pci.c
index 2844e975bf79..a08fcc888a71 100644
--- a/drivers/misc/cxl/pci.c
+++ b/drivers/misc/cxl/pci.c
@@ -21,6 +21,7 @@
#include <asm/msi_bitmap.h>
#include <asm/pnv-pci.h>
#include <asm/io.h>
+#include <asm/reg.h>
#include "cxl.h"
#include <misc/cxl.h>
@@ -321,12 +322,43 @@ static void dump_afu_descriptor(struct cxl_afu *afu)
#undef show_reg
}
+#define CAPP_UNIT0_ID 0xBA
+#define CAPP_UNIT1_ID 0XBE
+
+static u64 get_capp_unit_id(struct device_node *np)
+{
+ u32 phb_index;
+
+ /*
+ * For chips other than POWER8NVL, we only have CAPP 0,
+ * irrespective of which PHB is used.
+ */
+ if (!pvr_version_is(PVR_POWER8NVL))
+ return CAPP_UNIT0_ID;
+
+ /*
+ * For POWER8NVL, assume CAPP 0 is attached to PHB0 and
+ * CAPP 1 is attached to PHB1.
+ */
+ if (of_property_read_u32(np, "ibm,phb-index", &phb_index))
+ return 0;
+
+ if (phb_index == 0)
+ return CAPP_UNIT0_ID;
+
+ if (phb_index == 1)
+ return CAPP_UNIT1_ID;
+
+ return 0;
+}
+
static int init_implementation_adapter_regs(struct cxl *adapter, struct pci_dev *dev)
{
struct device_node *np;
const __be32 *prop;
u64 psl_dsnctl;
u64 chipid;
+ u64 capp_unit_id;
if (!(np = pnv_pci_get_phb_node(dev)))
return -ENODEV;
@@ -336,10 +368,19 @@ static int init_implementation_adapter_regs(struct cxl *adapter, struct pci_dev
if (!np)
return -ENODEV;
chipid = be32_to_cpup(prop);
+ capp_unit_id = get_capp_unit_id(np);
of_node_put(np);
+ if (!capp_unit_id) {
+ pr_err("cxl: invalid capp unit id\n");
+ return -ENODEV;
+ }
+ psl_dsnctl = 0x0000900000000000ULL; /* pteupd ttype, scdone */
+ psl_dsnctl |= (0x2ULL << (63-38)); /* MMIO hang pulse: 256 us */
/* Tell PSL where to route data to */
- psl_dsnctl = 0x02E8900002000000ULL | (chipid << (63-5));
+ psl_dsnctl |= (chipid << (63-5));
+ psl_dsnctl |= (capp_unit_id << (63-13));
+
cxl_p1_write(adapter, CXL_PSL_DSNDCTL, psl_dsnctl);
cxl_p1_write(adapter, CXL_PSL_RESLCKTO, 0x20000000200ULL);
/* snoop write mask */
@@ -355,22 +396,24 @@ static int init_implementation_adapter_regs(struct cxl *adapter, struct pci_dev
#define TBSYNC_CNT(n) (((u64)n & 0x7) << (63-6))
#define _2048_250MHZ_CYCLES 1
-static int cxl_setup_psl_timebase(struct cxl *adapter, struct pci_dev *dev)
+static void cxl_setup_psl_timebase(struct cxl *adapter, struct pci_dev *dev)
{
u64 psl_tb;
int delta;
unsigned int retry = 0;
struct device_node *np;
+ adapter->psl_timebase_synced = false;
+
if (!(np = pnv_pci_get_phb_node(dev)))
- return -ENODEV;
+ return;
/* Do not fail when CAPP timebase sync is not supported by OPAL */
of_node_get(np);
if (! of_get_property(np, "ibm,capp-timebase-sync", NULL)) {
of_node_put(np);
- pr_err("PSL: Timebase sync: OPAL support missing\n");
- return 0;
+ dev_info(&dev->dev, "PSL timebase inactive: OPAL support missing\n");
+ return;
}
of_node_put(np);
@@ -389,8 +432,8 @@ static int cxl_setup_psl_timebase(struct cxl *adapter, struct pci_dev *dev)
do {
msleep(1);
if (retry++ > 5) {
- pr_err("PSL: Timebase sync: giving up!\n");
- return -EIO;
+ dev_info(&dev->dev, "PSL timebase can't synchronize\n");
+ return;
}
psl_tb = cxl_p1_read(adapter, CXL_PSL_Timebase);
delta = mftb() - psl_tb;
@@ -398,7 +441,8 @@ static int cxl_setup_psl_timebase(struct cxl *adapter, struct pci_dev *dev)
delta = -delta;
} while (tb_to_ns(delta) > 16000);
- return 0;
+ adapter->psl_timebase_synced = true;
+ return;
}
static int init_implementation_afu_regs(struct cxl_afu *afu)
@@ -1144,8 +1188,8 @@ static int cxl_configure_adapter(struct cxl *adapter, struct pci_dev *dev)
if ((rc = pnv_phb_to_cxl_mode(dev, OPAL_PHB_CAPI_MODE_SNOOP_ON)))
goto err;
- if ((rc = cxl_setup_psl_timebase(adapter, dev)))
- goto err;
+	/* Ignore error, adapter init is not dependent on timebase sync */
+ cxl_setup_psl_timebase(adapter, dev);
if ((rc = cxl_native_register_psl_err_irq(adapter)))
goto err;
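get_capp_unit_id() above is the usual "optional device-tree property with a fallback" idiom: of_property_read_u32() returns 0 on success and a negative errno when the property is missing or malformed, so the caller can substitute a default. Reduced to its core (the fallback policy here is illustrative):

/* needs <linux/of.h> */
static u32 example_phb_index(struct device_node *np)
{
	u32 phb_index;

	/* treat a missing or malformed "ibm,phb-index" as PHB 0 */
	if (of_property_read_u32(np, "ibm,phb-index", &phb_index))
		return 0;

	return phb_index;
}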
diff --git a/drivers/misc/cxl/sysfs.c b/drivers/misc/cxl/sysfs.c
index 25913c08794c..b043c20f158f 100644
--- a/drivers/misc/cxl/sysfs.c
+++ b/drivers/misc/cxl/sysfs.c
@@ -57,6 +57,15 @@ static ssize_t image_loaded_show(struct device *device,
return scnprintf(buf, PAGE_SIZE, "factory\n");
}
+static ssize_t psl_timebase_synced_show(struct device *device,
+ struct device_attribute *attr,
+ char *buf)
+{
+ struct cxl *adapter = to_cxl_adapter(device);
+
+ return scnprintf(buf, PAGE_SIZE, "%i\n", adapter->psl_timebase_synced);
+}
+
static ssize_t reset_adapter_store(struct device *device,
struct device_attribute *attr,
const char *buf, size_t count)
@@ -142,6 +151,7 @@ static struct device_attribute adapter_attrs[] = {
__ATTR_RO(psl_revision),
__ATTR_RO(base_image),
__ATTR_RO(image_loaded),
+ __ATTR_RO(psl_timebase_synced),
__ATTR_RW(load_image_on_perst),
__ATTR_RW(perst_reloads_same_image),
__ATTR(reset, S_IWUSR, NULL, reset_adapter_store),
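The new psl_timebase_synced attribute follows the standard pattern for a read-only sysfs file: a <name>_show() callback formatting into the PAGE_SIZE buffer, plus a read-only attribute entry. A generic sketch (the cxl driver fetches its adapter with its own container_of helper; dev_get_drvdata() is used here only for brevity):

/* needs <linux/device.h> */
struct example_adapter {
	bool flag;
};

static ssize_t example_flag_show(struct device *device,
				 struct device_attribute *attr, char *buf)
{
	struct example_adapter *adapter = dev_get_drvdata(device);

	/* scnprintf() never writes past PAGE_SIZE and returns bytes used */
	return scnprintf(buf, PAGE_SIZE, "%i\n", adapter->flag);
}
static DEVICE_ATTR_RO(example_flag);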
diff --git a/drivers/misc/eeprom/Kconfig b/drivers/misc/eeprom/Kconfig
index cfc493c2e30a..c4e41c26649e 100644
--- a/drivers/misc/eeprom/Kconfig
+++ b/drivers/misc/eeprom/Kconfig
@@ -3,7 +3,6 @@ menu "EEPROM support"
config EEPROM_AT24
tristate "I2C EEPROMs / RAMs / ROMs from most vendors"
depends on I2C && SYSFS
- select REGMAP
select NVMEM
help
Enable this driver to get read/write support to most I2C EEPROMs
@@ -32,7 +31,6 @@ config EEPROM_AT24
config EEPROM_AT25
tristate "SPI EEPROMs from most vendors"
depends on SPI && SYSFS
- select REGMAP
select NVMEM
help
Enable this driver to get read/write support to most SPI EEPROMs,
diff --git a/drivers/misc/eeprom/at24.c b/drivers/misc/eeprom/at24.c
index 089d6943f68a..9ceb63b62be5 100644
--- a/drivers/misc/eeprom/at24.c
+++ b/drivers/misc/eeprom/at24.c
@@ -23,7 +23,6 @@
#include <linux/acpi.h>
#include <linux/i2c.h>
#include <linux/nvmem-provider.h>
-#include <linux/regmap.h>
#include <linux/platform_data/at24.h>
/*
@@ -69,7 +68,6 @@ struct at24_data {
unsigned write_max;
unsigned num_addresses;
- struct regmap_config regmap_config;
struct nvmem_config nvmem_config;
struct nvmem_device *nvmem;
@@ -245,17 +243,16 @@ static ssize_t at24_eeprom_read(struct at24_data *at24, char *buf,
if (status == count)
return count;
- /* REVISIT: at HZ=100, this is sloooow */
- msleep(1);
+ usleep_range(1000, 1500);
} while (time_before(read_time, timeout));
return -ETIMEDOUT;
}
-static ssize_t at24_read(struct at24_data *at24,
- char *buf, loff_t off, size_t count)
+static int at24_read(void *priv, unsigned int off, void *val, size_t count)
{
- ssize_t retval = 0;
+ struct at24_data *at24 = priv;
+ char *buf = val;
if (unlikely(!count))
return count;
@@ -267,23 +264,21 @@ static ssize_t at24_read(struct at24_data *at24,
mutex_lock(&at24->lock);
while (count) {
- ssize_t status;
+ int status;
status = at24_eeprom_read(at24, buf, off, count);
- if (status <= 0) {
- if (retval == 0)
- retval = status;
- break;
+ if (status < 0) {
+ mutex_unlock(&at24->lock);
+ return status;
}
buf += status;
off += status;
count -= status;
- retval += status;
}
mutex_unlock(&at24->lock);
- return retval;
+ return 0;
}
/*
@@ -365,20 +360,19 @@ static ssize_t at24_eeprom_write(struct at24_data *at24, const char *buf,
if (status == count)
return count;
- /* REVISIT: at HZ=100, this is sloooow */
- msleep(1);
+ usleep_range(1000, 1500);
} while (time_before(write_time, timeout));
return -ETIMEDOUT;
}
-static ssize_t at24_write(struct at24_data *at24, const char *buf, loff_t off,
- size_t count)
+static int at24_write(void *priv, unsigned int off, void *val, size_t count)
{
- ssize_t retval = 0;
+ struct at24_data *at24 = priv;
+ char *buf = val;
if (unlikely(!count))
- return count;
+ return -EINVAL;
/*
* Write data to chip, protecting against concurrent updates
@@ -387,70 +381,23 @@ static ssize_t at24_write(struct at24_data *at24, const char *buf, loff_t off,
mutex_lock(&at24->lock);
while (count) {
- ssize_t status;
+ int status;
status = at24_eeprom_write(at24, buf, off, count);
- if (status <= 0) {
- if (retval == 0)
- retval = status;
- break;
+ if (status < 0) {
+ mutex_unlock(&at24->lock);
+ return status;
}
buf += status;
off += status;
count -= status;
- retval += status;
}
mutex_unlock(&at24->lock);
- return retval;
-}
-
-/*-------------------------------------------------------------------------*/
-
-/*
- * Provide a regmap interface, which is registered with the NVMEM
- * framework
-*/
-static int at24_regmap_read(void *context, const void *reg, size_t reg_size,
- void *val, size_t val_size)
-{
- struct at24_data *at24 = context;
- off_t offset = *(u32 *)reg;
- int err;
-
- err = at24_read(at24, val, offset, val_size);
- if (err)
- return err;
return 0;
}
-static int at24_regmap_write(void *context, const void *data, size_t count)
-{
- struct at24_data *at24 = context;
- const char *buf;
- u32 offset;
- size_t len;
- int err;
-
- memcpy(&offset, data, sizeof(offset));
- buf = (const char *)data + sizeof(offset);
- len = count - sizeof(offset);
-
- err = at24_write(at24, buf, offset, len);
- if (err)
- return err;
- return 0;
-}
-
-static const struct regmap_bus at24_regmap_bus = {
- .read = at24_regmap_read,
- .write = at24_regmap_write,
- .reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
-};
-
-/*-------------------------------------------------------------------------*/
-
#ifdef CONFIG_OF
static void at24_get_ofdata(struct i2c_client *client,
struct at24_platform_data *chip)
@@ -482,7 +429,6 @@ static int at24_probe(struct i2c_client *client, const struct i2c_device_id *id)
struct at24_data *at24;
int err;
unsigned i, num_addresses;
- struct regmap *regmap;
if (client->dev.platform_data) {
chip = *(struct at24_platform_data *)client->dev.platform_data;
@@ -544,10 +490,7 @@ static int at24_probe(struct i2c_client *client, const struct i2c_device_id *id)
} else {
return -EPFNOSUPPORT;
}
- }
- /* Use I2C operations unless we're stuck with SMBus extensions. */
- if (!i2c_check_functionality(client->adapter, I2C_FUNC_I2C)) {
if (i2c_check_functionality(client->adapter,
I2C_FUNC_SMBUS_WRITE_I2C_BLOCK)) {
use_smbus_write = I2C_SMBUS_I2C_BLOCK_DATA;
@@ -612,19 +555,6 @@ static int at24_probe(struct i2c_client *client, const struct i2c_device_id *id)
}
}
- at24->regmap_config.reg_bits = 32;
- at24->regmap_config.val_bits = 8;
- at24->regmap_config.reg_stride = 1;
- at24->regmap_config.max_register = chip.byte_len - 1;
-
- regmap = devm_regmap_init(&client->dev, &at24_regmap_bus, at24,
- &at24->regmap_config);
- if (IS_ERR(regmap)) {
- dev_err(&client->dev, "regmap init failed\n");
- err = PTR_ERR(regmap);
- goto err_clients;
- }
-
at24->nvmem_config.name = dev_name(&client->dev);
at24->nvmem_config.dev = &client->dev;
at24->nvmem_config.read_only = !writable;
@@ -632,6 +562,12 @@ static int at24_probe(struct i2c_client *client, const struct i2c_device_id *id)
at24->nvmem_config.owner = THIS_MODULE;
at24->nvmem_config.compat = true;
at24->nvmem_config.base_dev = &client->dev;
+ at24->nvmem_config.reg_read = at24_read;
+ at24->nvmem_config.reg_write = at24_write;
+ at24->nvmem_config.priv = at24;
+ at24->nvmem_config.stride = 4;
+ at24->nvmem_config.word_size = 1;
+ at24->nvmem_config.size = chip.byte_len;
at24->nvmem = nvmem_register(&at24->nvmem_config);
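The at24/at25/93xx46 conversions all drop the intermediate regmap and hand nvmem_register() plain reg_read()/reg_write() callbacks plus a priv pointer, using the nvmem_config fields shown in the hunks above. A sketch of the provider side, with an invented EEPROM type and a stubbed transfer routine:

/* needs <linux/device.h>, <linux/nvmem-provider.h>, <linux/string.h> */
struct example_eeprom {
	struct nvmem_config nvmem_config;
	struct nvmem_device *nvmem;
	size_t byte_len;
};

static int example_eeprom_xfer(struct example_eeprom *ee, void *buf,
			       unsigned int off, size_t count)
{
	memset(buf, 0xff, count);	/* stand-in for the real I2C/SPI transfer */
	return 0;
}

static int example_reg_read(void *priv, unsigned int off, void *val,
			    size_t count)
{
	struct example_eeprom *ee = priv;

	return example_eeprom_xfer(ee, val, off, count);
}

static int example_register_nvmem(struct device *dev,
				  struct example_eeprom *ee)
{
	ee->nvmem_config.name = dev_name(dev);
	ee->nvmem_config.dev = dev;
	ee->nvmem_config.owner = THIS_MODULE;
	ee->nvmem_config.reg_read = example_reg_read;
	ee->nvmem_config.priv = ee;
	ee->nvmem_config.word_size = 1;
	ee->nvmem_config.stride = 4;
	ee->nvmem_config.size = ee->byte_len;

	ee->nvmem = nvmem_register(&ee->nvmem_config);
	return PTR_ERR_OR_ZERO(ee->nvmem);
}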
diff --git a/drivers/misc/eeprom/at25.c b/drivers/misc/eeprom/at25.c
index fa36a6e37084..2c6c7c8e3ead 100644
--- a/drivers/misc/eeprom/at25.c
+++ b/drivers/misc/eeprom/at25.c
@@ -17,7 +17,6 @@
#include <linux/sched.h>
#include <linux/nvmem-provider.h>
-#include <linux/regmap.h>
#include <linux/spi/spi.h>
#include <linux/spi/eeprom.h>
#include <linux/property.h>
@@ -34,7 +33,6 @@ struct at25_data {
struct mutex lock;
struct spi_eeprom chip;
unsigned addrlen;
- struct regmap_config regmap_config;
struct nvmem_config nvmem_config;
struct nvmem_device *nvmem;
};
@@ -65,14 +63,11 @@ struct at25_data {
#define io_limit PAGE_SIZE /* bytes */
-static ssize_t
-at25_ee_read(
- struct at25_data *at25,
- char *buf,
- unsigned offset,
- size_t count
-)
+static int at25_ee_read(void *priv, unsigned int offset,
+ void *val, size_t count)
{
+ struct at25_data *at25 = priv;
+ char *buf = val;
u8 command[EE_MAXADDRLEN + 1];
u8 *cp;
ssize_t status;
@@ -81,11 +76,11 @@ at25_ee_read(
u8 instr;
if (unlikely(offset >= at25->chip.byte_len))
- return 0;
+ return -EINVAL;
if ((offset + count) > at25->chip.byte_len)
count = at25->chip.byte_len - offset;
if (unlikely(!count))
- return count;
+ return -EINVAL;
cp = command;
@@ -131,28 +126,14 @@ at25_ee_read(
count, offset, (int) status);
mutex_unlock(&at25->lock);
- return status ? status : count;
+ return status;
}
-static int at25_regmap_read(void *context, const void *reg, size_t reg_size,
- void *val, size_t val_size)
+static int at25_ee_write(void *priv, unsigned int off, void *val, size_t count)
{
- struct at25_data *at25 = context;
- off_t offset = *(u32 *)reg;
- int err;
-
- err = at25_ee_read(at25, val, offset, val_size);
- if (err)
- return err;
- return 0;
-}
-
-static ssize_t
-at25_ee_write(struct at25_data *at25, const char *buf, loff_t off,
- size_t count)
-{
- ssize_t status = 0;
- unsigned written = 0;
+ struct at25_data *at25 = priv;
+ const char *buf = val;
+ int status = 0;
unsigned buf_size;
u8 *bounce;
@@ -161,7 +142,7 @@ at25_ee_write(struct at25_data *at25, const char *buf, loff_t off,
if ((off + count) > at25->chip.byte_len)
count = at25->chip.byte_len - off;
if (unlikely(!count))
- return count;
+ return -EINVAL;
/* Temp buffer starts with command and address */
buf_size = at25->chip.page_size;
@@ -256,40 +237,15 @@ at25_ee_write(struct at25_data *at25, const char *buf, loff_t off,
off += segment;
buf += segment;
count -= segment;
- written += segment;
} while (count > 0);
mutex_unlock(&at25->lock);
kfree(bounce);
- return written ? written : status;
+ return status;
}
-static int at25_regmap_write(void *context, const void *data, size_t count)
-{
- struct at25_data *at25 = context;
- const char *buf;
- u32 offset;
- size_t len;
- int err;
-
- memcpy(&offset, data, sizeof(offset));
- buf = (const char *)data + sizeof(offset);
- len = count - sizeof(offset);
-
- err = at25_ee_write(at25, buf, offset, len);
- if (err)
- return err;
- return 0;
-}
-
-static const struct regmap_bus at25_regmap_bus = {
- .read = at25_regmap_read,
- .write = at25_regmap_write,
- .reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
-};
-
/*-------------------------------------------------------------------------*/
static int at25_fw_to_chip(struct device *dev, struct spi_eeprom *chip)
@@ -349,7 +305,6 @@ static int at25_probe(struct spi_device *spi)
{
struct at25_data *at25 = NULL;
struct spi_eeprom chip;
- struct regmap *regmap;
int err;
int sr;
int addrlen;
@@ -390,22 +345,10 @@ static int at25_probe(struct spi_device *spi)
mutex_init(&at25->lock);
at25->chip = chip;
- at25->spi = spi_dev_get(spi);
+ at25->spi = spi;
spi_set_drvdata(spi, at25);
at25->addrlen = addrlen;
- at25->regmap_config.reg_bits = 32;
- at25->regmap_config.val_bits = 8;
- at25->regmap_config.reg_stride = 1;
- at25->regmap_config.max_register = chip.byte_len - 1;
-
- regmap = devm_regmap_init(&spi->dev, &at25_regmap_bus, at25,
- &at25->regmap_config);
- if (IS_ERR(regmap)) {
- dev_err(&spi->dev, "regmap init failed\n");
- return PTR_ERR(regmap);
- }
-
at25->nvmem_config.name = dev_name(&spi->dev);
at25->nvmem_config.dev = &spi->dev;
at25->nvmem_config.read_only = chip.flags & EE_READONLY;
@@ -413,6 +356,12 @@ static int at25_probe(struct spi_device *spi)
at25->nvmem_config.owner = THIS_MODULE;
at25->nvmem_config.compat = true;
at25->nvmem_config.base_dev = &spi->dev;
+ at25->nvmem_config.reg_read = at25_ee_read;
+ at25->nvmem_config.reg_write = at25_ee_write;
+ at25->nvmem_config.priv = at25;
+ at25->nvmem_config.stride = 4;
+ at25->nvmem_config.word_size = 1;
+ at25->nvmem_config.size = chip.byte_len;
at25->nvmem = nvmem_register(&at25->nvmem_config);
if (IS_ERR(at25->nvmem))
diff --git a/drivers/misc/eeprom/eeprom_93xx46.c b/drivers/misc/eeprom/eeprom_93xx46.c
index 426fe2fd5238..94cc035aa841 100644
--- a/drivers/misc/eeprom/eeprom_93xx46.c
+++ b/drivers/misc/eeprom/eeprom_93xx46.c
@@ -20,7 +20,6 @@
#include <linux/slab.h>
#include <linux/spi/spi.h>
#include <linux/nvmem-provider.h>
-#include <linux/regmap.h>
#include <linux/eeprom_93xx46.h>
#define OP_START 0x4
@@ -43,7 +42,6 @@ struct eeprom_93xx46_dev {
struct spi_device *spi;
struct eeprom_93xx46_platform_data *pdata;
struct mutex lock;
- struct regmap_config regmap_config;
struct nvmem_config nvmem_config;
struct nvmem_device *nvmem;
int addrlen;
@@ -60,11 +58,12 @@ static inline bool has_quirk_instruction_length(struct eeprom_93xx46_dev *edev)
return edev->pdata->quirks & EEPROM_93XX46_QUIRK_INSTRUCTION_LENGTH;
}
-static ssize_t
-eeprom_93xx46_read(struct eeprom_93xx46_dev *edev, char *buf,
- unsigned off, size_t count)
+static int eeprom_93xx46_read(void *priv, unsigned int off,
+ void *val, size_t count)
{
- ssize_t ret = 0;
+ struct eeprom_93xx46_dev *edev = priv;
+ char *buf = val;
+ int err = 0;
if (unlikely(off >= edev->size))
return 0;
@@ -84,7 +83,6 @@ eeprom_93xx46_read(struct eeprom_93xx46_dev *edev, char *buf,
u16 cmd_addr = OP_READ << edev->addrlen;
size_t nbytes = count;
int bits;
- int err;
if (edev->addrlen == 7) {
cmd_addr |= off & 0x7f;
@@ -120,21 +118,20 @@ eeprom_93xx46_read(struct eeprom_93xx46_dev *edev, char *buf,
if (err) {
dev_err(&edev->spi->dev, "read %zu bytes at %d: err. %d\n",
nbytes, (int)off, err);
- ret = err;
break;
}
buf += nbytes;
off += nbytes;
count -= nbytes;
- ret += nbytes;
}
if (edev->pdata->finish)
edev->pdata->finish(edev);
mutex_unlock(&edev->lock);
- return ret;
+
+ return err;
}
static int eeprom_93xx46_ew(struct eeprom_93xx46_dev *edev, int is_on)
@@ -230,10 +227,11 @@ eeprom_93xx46_write_word(struct eeprom_93xx46_dev *edev,
return ret;
}
-static ssize_t
-eeprom_93xx46_write(struct eeprom_93xx46_dev *edev, const char *buf,
- loff_t off, size_t count)
+static int eeprom_93xx46_write(void *priv, unsigned int off,
+ void *val, size_t count)
{
+ struct eeprom_93xx46_dev *edev = priv;
+ char *buf = val;
int i, ret, step = 1;
if (unlikely(off >= edev->size))
@@ -275,52 +273,9 @@ eeprom_93xx46_write(struct eeprom_93xx46_dev *edev, const char *buf,
/* erase/write disable */
eeprom_93xx46_ew(edev, 0);
- return ret ? : count;
-}
-
-/*
- * Provide a regmap interface, which is registered with the NVMEM
- * framework
-*/
-static int eeprom_93xx46_regmap_read(void *context, const void *reg,
- size_t reg_size, void *val,
- size_t val_size)
-{
- struct eeprom_93xx46_dev *eeprom_93xx46 = context;
- off_t offset = *(u32 *)reg;
- int err;
-
- err = eeprom_93xx46_read(eeprom_93xx46, val, offset, val_size);
- if (err)
- return err;
- return 0;
-}
-
-static int eeprom_93xx46_regmap_write(void *context, const void *data,
- size_t count)
-{
- struct eeprom_93xx46_dev *eeprom_93xx46 = context;
- const char *buf;
- u32 offset;
- size_t len;
- int err;
-
- memcpy(&offset, data, sizeof(offset));
- buf = (const char *)data + sizeof(offset);
- len = count - sizeof(offset);
-
- err = eeprom_93xx46_write(eeprom_93xx46, buf, offset, len);
- if (err)
- return err;
- return 0;
+ return ret;
}
-static const struct regmap_bus eeprom_93xx46_regmap_bus = {
- .read = eeprom_93xx46_regmap_read,
- .write = eeprom_93xx46_regmap_write,
- .reg_format_endian_default = REGMAP_ENDIAN_NATIVE,
-};
-
static int eeprom_93xx46_eral(struct eeprom_93xx46_dev *edev)
{
struct eeprom_93xx46_platform_data *pd = edev->pdata;
@@ -480,7 +435,6 @@ static int eeprom_93xx46_probe(struct spi_device *spi)
{
struct eeprom_93xx46_platform_data *pd;
struct eeprom_93xx46_dev *edev;
- struct regmap *regmap;
int err;
if (spi->dev.of_node) {
@@ -511,24 +465,10 @@ static int eeprom_93xx46_probe(struct spi_device *spi)
mutex_init(&edev->lock);
- edev->spi = spi_dev_get(spi);
+ edev->spi = spi;
edev->pdata = pd;
edev->size = 128;
-
- edev->regmap_config.reg_bits = 32;
- edev->regmap_config.val_bits = 8;
- edev->regmap_config.reg_stride = 1;
- edev->regmap_config.max_register = edev->size - 1;
-
- regmap = devm_regmap_init(&spi->dev, &eeprom_93xx46_regmap_bus, edev,
- &edev->regmap_config);
- if (IS_ERR(regmap)) {
- dev_err(&spi->dev, "regmap init failed\n");
- err = PTR_ERR(regmap);
- goto fail;
- }
-
edev->nvmem_config.name = dev_name(&spi->dev);
edev->nvmem_config.dev = &spi->dev;
edev->nvmem_config.read_only = pd->flags & EE_READONLY;
@@ -536,6 +476,12 @@ static int eeprom_93xx46_probe(struct spi_device *spi)
edev->nvmem_config.owner = THIS_MODULE;
edev->nvmem_config.compat = true;
edev->nvmem_config.base_dev = &spi->dev;
+ edev->nvmem_config.reg_read = eeprom_93xx46_read;
+ edev->nvmem_config.reg_write = eeprom_93xx46_write;
+ edev->nvmem_config.priv = edev;
+ edev->nvmem_config.stride = 4;
+ edev->nvmem_config.word_size = 1;
+ edev->nvmem_config.size = edev->size;
edev->nvmem = nvmem_register(&edev->nvmem_config);
if (IS_ERR(edev->nvmem)) {
diff --git a/drivers/misc/mei/amthif.c b/drivers/misc/mei/amthif.c
index 194360a5f782..a039a5df6f21 100644
--- a/drivers/misc/mei/amthif.c
+++ b/drivers/misc/mei/amthif.c
@@ -380,8 +380,10 @@ int mei_amthif_irq_read_msg(struct mei_cl *cl,
dev = cl->dev;
- if (dev->iamthif_state != MEI_IAMTHIF_READING)
+ if (dev->iamthif_state != MEI_IAMTHIF_READING) {
+ mei_irq_discard_msg(dev, mei_hdr);
return 0;
+ }
ret = mei_cl_irq_read_msg(cl, mei_hdr, cmpl_list);
if (ret)
diff --git a/drivers/misc/mei/bus.c b/drivers/misc/mei/bus.c
index 5d5996e39a67..1f33fea9299f 100644
--- a/drivers/misc/mei/bus.c
+++ b/drivers/misc/mei/bus.c
@@ -220,17 +220,23 @@ EXPORT_SYMBOL_GPL(mei_cldev_recv);
static void mei_cl_bus_event_work(struct work_struct *work)
{
struct mei_cl_device *cldev;
+ struct mei_device *bus;
cldev = container_of(work, struct mei_cl_device, event_work);
+ bus = cldev->bus;
+
if (cldev->event_cb)
cldev->event_cb(cldev, cldev->events, cldev->event_context);
cldev->events = 0;
/* Prepare for the next read */
- if (cldev->events_mask & BIT(MEI_CL_EVENT_RX))
+ if (cldev->events_mask & BIT(MEI_CL_EVENT_RX)) {
+ mutex_lock(&bus->device_lock);
mei_cl_read_start(cldev->cl, 0, NULL);
+ mutex_unlock(&bus->device_lock);
+ }
}
/**
@@ -304,6 +310,7 @@ int mei_cldev_register_event_cb(struct mei_cl_device *cldev,
unsigned long events_mask,
mei_cldev_event_cb_t event_cb, void *context)
{
+ struct mei_device *bus = cldev->bus;
int ret;
if (cldev->event_cb)
@@ -316,15 +323,17 @@ int mei_cldev_register_event_cb(struct mei_cl_device *cldev,
INIT_WORK(&cldev->event_work, mei_cl_bus_event_work);
if (cldev->events_mask & BIT(MEI_CL_EVENT_RX)) {
+ mutex_lock(&bus->device_lock);
ret = mei_cl_read_start(cldev->cl, 0, NULL);
+ mutex_unlock(&bus->device_lock);
if (ret && ret != -EBUSY)
return ret;
}
if (cldev->events_mask & BIT(MEI_CL_EVENT_NOTIF)) {
- mutex_lock(&cldev->cl->dev->device_lock);
+ mutex_lock(&bus->device_lock);
ret = mei_cl_notify_request(cldev->cl, NULL, event_cb ? 1 : 0);
- mutex_unlock(&cldev->cl->dev->device_lock);
+ mutex_unlock(&bus->device_lock);
if (ret)
return ret;
}
@@ -580,6 +589,7 @@ static int mei_cl_device_probe(struct device *dev)
struct mei_cl_device *cldev;
struct mei_cl_driver *cldrv;
const struct mei_cl_device_id *id;
+ int ret;
cldev = to_mei_cl_device(dev);
cldrv = to_mei_cl_driver(dev->driver);
@@ -594,9 +604,12 @@ static int mei_cl_device_probe(struct device *dev)
if (!id)
return -ENODEV;
- __module_get(THIS_MODULE);
+ ret = cldrv->probe(cldev, id);
+ if (ret)
+ return ret;
- return cldrv->probe(cldev, id);
+ __module_get(THIS_MODULE);
+ return 0;
}
/**
@@ -634,11 +647,8 @@ static ssize_t name_show(struct device *dev, struct device_attribute *a,
char *buf)
{
struct mei_cl_device *cldev = to_mei_cl_device(dev);
- size_t len;
-
- len = snprintf(buf, PAGE_SIZE, "%s", cldev->name);
- return (len >= PAGE_SIZE) ? (PAGE_SIZE - 1) : len;
+ return scnprintf(buf, PAGE_SIZE, "%s", cldev->name);
}
static DEVICE_ATTR_RO(name);
@@ -647,11 +657,8 @@ static ssize_t uuid_show(struct device *dev, struct device_attribute *a,
{
struct mei_cl_device *cldev = to_mei_cl_device(dev);
const uuid_le *uuid = mei_me_cl_uuid(cldev->me_cl);
- size_t len;
- len = snprintf(buf, PAGE_SIZE, "%pUl", uuid);
-
- return (len >= PAGE_SIZE) ? (PAGE_SIZE - 1) : len;
+ return scnprintf(buf, PAGE_SIZE, "%pUl", uuid);
}
static DEVICE_ATTR_RO(uuid);
@@ -660,11 +667,8 @@ static ssize_t version_show(struct device *dev, struct device_attribute *a,
{
struct mei_cl_device *cldev = to_mei_cl_device(dev);
u8 version = mei_me_cl_ver(cldev->me_cl);
- size_t len;
-
- len = snprintf(buf, PAGE_SIZE, "%02X", version);
- return (len >= PAGE_SIZE) ? (PAGE_SIZE - 1) : len;
+ return scnprintf(buf, PAGE_SIZE, "%02X", version);
}
static DEVICE_ATTR_RO(version);
@@ -673,10 +677,8 @@ static ssize_t modalias_show(struct device *dev, struct device_attribute *a,
{
struct mei_cl_device *cldev = to_mei_cl_device(dev);
const uuid_le *uuid = mei_me_cl_uuid(cldev->me_cl);
- size_t len;
- len = snprintf(buf, PAGE_SIZE, "mei:%s:%pUl:", cldev->name, uuid);
- return (len >= PAGE_SIZE) ? (PAGE_SIZE - 1) : len;
+ return scnprintf(buf, PAGE_SIZE, "mei:%s:%pUl:", cldev->name, uuid);
}
static DEVICE_ATTR_RO(modalias);
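One of the bus.c changes reorders mei_cl_device_probe() so the module reference is taken only after ->probe() succeeds, which avoids leaking a reference when probe fails. The shape of that fix, in generic terms with invented bus types:

/* needs <linux/module.h> */
struct example_device { int id; };

struct example_driver {
	int (*probe)(struct example_device *edev);
};

static int example_bus_probe(struct example_device *edev,
			     struct example_driver *edrv)
{
	int ret;

	ret = edrv->probe(edev);	/* may fail */
	if (ret)
		return ret;		/* nothing to undo: no reference taken yet */

	__module_get(THIS_MODULE);	/* pin the module only on success */
	return 0;
}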
diff --git a/drivers/misc/mei/client.c b/drivers/misc/mei/client.c
index bab17e4197b6..eed254da63a8 100644
--- a/drivers/misc/mei/client.c
+++ b/drivers/misc/mei/client.c
@@ -727,6 +727,11 @@ static void mei_cl_wake_all(struct mei_cl *cl)
cl_dbg(dev, cl, "Waking up waiting for event clients!\n");
wake_up_interruptible(&cl->ev_wait);
}
+ /* synchronized under device mutex */
+ if (waitqueue_active(&cl->wait)) {
+ cl_dbg(dev, cl, "Waking up ctrl write clients!\n");
+ wake_up_interruptible(&cl->wait);
+ }
}
/**
@@ -879,12 +884,15 @@ static int __mei_cl_disconnect(struct mei_cl *cl)
}
mutex_unlock(&dev->device_lock);
- wait_event_timeout(cl->wait, cl->state == MEI_FILE_DISCONNECT_REPLY,
+ wait_event_timeout(cl->wait,
+ cl->state == MEI_FILE_DISCONNECT_REPLY ||
+ cl->state == MEI_FILE_DISCONNECTED,
mei_secs_to_jiffies(MEI_CL_CONNECT_TIMEOUT));
mutex_lock(&dev->device_lock);
rets = cl->status;
- if (cl->state != MEI_FILE_DISCONNECT_REPLY) {
+ if (cl->state != MEI_FILE_DISCONNECT_REPLY &&
+ cl->state != MEI_FILE_DISCONNECTED) {
cl_dbg(dev, cl, "timeout on disconnect from FW client.\n");
rets = -ETIME;
}
@@ -1085,6 +1093,7 @@ int mei_cl_connect(struct mei_cl *cl, struct mei_me_client *me_cl,
mutex_unlock(&dev->device_lock);
wait_event_timeout(cl->wait,
(cl->state == MEI_FILE_CONNECTED ||
+ cl->state == MEI_FILE_DISCONNECTED ||
cl->state == MEI_FILE_DISCONNECT_REQUIRED ||
cl->state == MEI_FILE_DISCONNECT_REPLY),
mei_secs_to_jiffies(MEI_CL_CONNECT_TIMEOUT));
@@ -1333,16 +1342,13 @@ int mei_cl_notify_request(struct mei_cl *cl,
}
mutex_unlock(&dev->device_lock);
- wait_event_timeout(cl->wait, cl->notify_en == request,
- mei_secs_to_jiffies(MEI_CL_CONNECT_TIMEOUT));
+ wait_event_timeout(cl->wait,
+ cl->notify_en == request || !mei_cl_is_connected(cl),
+ mei_secs_to_jiffies(MEI_CL_CONNECT_TIMEOUT));
mutex_lock(&dev->device_lock);
- if (cl->notify_en != request) {
- mei_io_list_flush(&dev->ctrl_rd_list, cl);
- mei_io_list_flush(&dev->ctrl_wr_list, cl);
- if (!cl->status)
- cl->status = -EFAULT;
- }
+ if (cl->notify_en != request && !cl->status)
+ cl->status = -EFAULT;
rets = cl->status;
@@ -1767,6 +1773,10 @@ void mei_cl_complete(struct mei_cl *cl, struct mei_cl_cb *cb)
wake_up(&cl->wait);
break;
+ case MEI_FOP_DISCONNECT_RSP:
+ mei_io_cb_free(cb);
+ mei_cl_set_disconnected(cl);
+ break;
default:
BUG_ON(0);
}
diff --git a/drivers/misc/mei/hbm.c b/drivers/misc/mei/hbm.c
index 5e305d2605f3..5aa606c8a827 100644
--- a/drivers/misc/mei/hbm.c
+++ b/drivers/misc/mei/hbm.c
@@ -113,8 +113,6 @@ void mei_hbm_idle(struct mei_device *dev)
*/
void mei_hbm_reset(struct mei_device *dev)
{
- dev->me_client_index = 0;
-
mei_me_cl_rm_all(dev);
mei_hbm_idle(dev);
@@ -530,24 +528,22 @@ static void mei_hbm_cl_notify(struct mei_device *dev,
* mei_hbm_prop_req - request property for a single client
*
* @dev: the device structure
+ * @start_idx: client index to start search
*
* Return: 0 on success and < 0 on failure
*/
-
-static int mei_hbm_prop_req(struct mei_device *dev)
+static int mei_hbm_prop_req(struct mei_device *dev, unsigned long start_idx)
{
-
struct mei_msg_hdr *mei_hdr = &dev->wr_msg.hdr;
struct hbm_props_request *prop_req;
const size_t len = sizeof(struct hbm_props_request);
- unsigned long next_client_index;
+ unsigned long addr;
int ret;
- next_client_index = find_next_bit(dev->me_clients_map, MEI_CLIENTS_MAX,
- dev->me_client_index);
+ addr = find_next_bit(dev->me_clients_map, MEI_CLIENTS_MAX, start_idx);
/* We got all client properties */
- if (next_client_index == MEI_CLIENTS_MAX) {
+ if (addr == MEI_CLIENTS_MAX) {
dev->hbm_state = MEI_HBM_STARTED;
mei_host_client_init(dev);
@@ -560,7 +556,7 @@ static int mei_hbm_prop_req(struct mei_device *dev)
memset(prop_req, 0, sizeof(struct hbm_props_request));
prop_req->hbm_cmd = HOST_CLIENT_PROPERTIES_REQ_CMD;
- prop_req->me_addr = next_client_index;
+ prop_req->me_addr = addr;
ret = mei_write_message(dev, mei_hdr, dev->wr_msg.data);
if (ret) {
@@ -570,7 +566,6 @@ static int mei_hbm_prop_req(struct mei_device *dev)
}
dev->init_clients_timer = MEI_CLIENTS_INIT_TIMEOUT;
- dev->me_client_index = next_client_index;
return 0;
}
@@ -882,8 +877,7 @@ static int mei_hbm_fw_disconnect_req(struct mei_device *dev,
cb = mei_io_cb_init(cl, MEI_FOP_DISCONNECT_RSP, NULL);
if (!cb)
return -ENOMEM;
- cl_dbg(dev, cl, "add disconnect response as first\n");
- list_add(&cb->list, &dev->ctrl_wr_list.list);
+ list_add_tail(&cb->list, &dev->ctrl_wr_list.list);
}
return 0;
}
@@ -1152,10 +1146,8 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr)
mei_hbm_me_cl_add(dev, props_res);
- dev->me_client_index++;
-
/* request property for the next client */
- if (mei_hbm_prop_req(dev))
+ if (mei_hbm_prop_req(dev, props_res->me_addr + 1))
return -EIO;
break;
@@ -1181,7 +1173,7 @@ int mei_hbm_dispatch(struct mei_device *dev, struct mei_msg_hdr *hdr)
dev->hbm_state = MEI_HBM_CLIENT_PROPERTIES;
/* first property request */
- if (mei_hbm_prop_req(dev))
+ if (mei_hbm_prop_req(dev, 0))
return -EIO;
break;
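The HBM change above removes the cached me_client_index and instead derives the next client to query from the firmware's own reply (props_res->me_addr + 1), so a reset cannot leave a stale cursor behind. The enumeration step itself is just find_next_bit(), which returns the bitmap size when no further bit is set. Sketch with invented names:

/* needs <linux/bitmap.h> and <linux/bitops.h> */
#define EXAMPLE_CLIENTS_MAX 256

struct example_dev {
	DECLARE_BITMAP(clients_map, EXAMPLE_CLIENTS_MAX);
};

/* returns the next populated client index at or after start_idx,
 * or EXAMPLE_CLIENTS_MAX when the enumeration is finished */
static unsigned long example_next_client(struct example_dev *dev,
					 unsigned long start_idx)
{
	return find_next_bit(dev->clients_map, EXAMPLE_CLIENTS_MAX, start_idx);
}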
diff --git a/drivers/misc/mei/interrupt.c b/drivers/misc/mei/interrupt.c
index 1e5cb1f704f8..3831a7ba2531 100644
--- a/drivers/misc/mei/interrupt.c
+++ b/drivers/misc/mei/interrupt.c
@@ -76,7 +76,6 @@ static inline int mei_cl_hbm_equal(struct mei_cl *cl,
* @dev: mei device
* @hdr: message header
*/
-static inline
void mei_irq_discard_msg(struct mei_device *dev, struct mei_msg_hdr *hdr)
{
/*
@@ -194,10 +193,7 @@ static int mei_cl_irq_disconnect_rsp(struct mei_cl *cl, struct mei_cl_cb *cb,
return -EMSGSIZE;
ret = mei_hbm_cl_disconnect_rsp(dev, cl);
- mei_cl_set_disconnected(cl);
- mei_io_cb_free(cb);
- mei_me_cl_put(cl->me_cl);
- cl->me_cl = NULL;
+ list_move_tail(&cb->list, &cmpl_list->list);
return ret;
}
diff --git a/drivers/misc/mei/mei_dev.h b/drivers/misc/mei/mei_dev.h
index db78e6d99456..c9e01021eadf 100644
--- a/drivers/misc/mei/mei_dev.h
+++ b/drivers/misc/mei/mei_dev.h
@@ -396,7 +396,6 @@ const char *mei_pg_state_str(enum mei_pg_state state);
* @me_clients : list of FW clients
* @me_clients_map : FW clients bit map
* @host_clients_map : host clients id pool
- * @me_client_index : last FW client index in enumeration
*
* @allow_fixed_address: allow user space to connect a fixed client
* @override_fixed_address: force allow fixed address behavior
@@ -486,7 +485,6 @@ struct mei_device {
struct list_head me_clients;
DECLARE_BITMAP(me_clients_map, MEI_CLIENTS_MAX);
DECLARE_BITMAP(host_clients_map, MEI_CLIENTS_MAX);
- unsigned long me_client_index;
bool allow_fixed_address;
bool override_fixed_address;
@@ -704,6 +702,8 @@ bool mei_hbuf_acquire(struct mei_device *dev);
bool mei_write_is_idle(struct mei_device *dev);
+void mei_irq_discard_msg(struct mei_device *dev, struct mei_msg_hdr *hdr);
+
#if IS_ENABLED(CONFIG_DEBUG_FS)
int mei_dbgfs_register(struct mei_device *dev, const char *name);
void mei_dbgfs_deregister(struct mei_device *dev);
diff --git a/drivers/misc/mic/Kconfig b/drivers/misc/mic/Kconfig
index 2e4f3ba75c8e..89e5917e1c33 100644
--- a/drivers/misc/mic/Kconfig
+++ b/drivers/misc/mic/Kconfig
@@ -132,6 +132,7 @@ config VOP
tristate "VOP Driver"
depends on 64BIT && PCI && X86 && VOP_BUS
select VHOST_RING
+ select VIRTIO
help
This enables VOP (Virtio over PCIe) Driver support for the Intel
Many Integrated Core (MIC) family of PCIe form factor coprocessor
diff --git a/drivers/misc/mic/host/mic_boot.c b/drivers/misc/mic/host/mic_boot.c
index 8c91c9950b54..e047efd83f57 100644
--- a/drivers/misc/mic/host/mic_boot.c
+++ b/drivers/misc/mic/host/mic_boot.c
@@ -76,7 +76,7 @@ static void __mic_free_irq(struct vop_device *vpdev,
{
struct mic_device *mdev = vpdev_to_mdev(&vpdev->dev);
- return mic_free_irq(mdev, cookie, data);
+ mic_free_irq(mdev, cookie, data);
}
static void __mic_ack_interrupt(struct vop_device *vpdev, int num)
@@ -272,7 +272,7 @@ ___mic_free_irq(struct scif_hw_dev *scdev,
{
struct mic_device *mdev = scdev_to_mdev(scdev);
- return mic_free_irq(mdev, cookie, data);
+ mic_free_irq(mdev, cookie, data);
}
static void ___mic_ack_interrupt(struct scif_hw_dev *scdev, int num)
@@ -362,7 +362,7 @@ _mic_request_threaded_irq(struct mbus_device *mbdev,
static void _mic_free_irq(struct mbus_device *mbdev,
struct mic_irq *cookie, void *data)
{
- return mic_free_irq(mbdev_to_mdev(mbdev), cookie, data);
+ mic_free_irq(mbdev_to_mdev(mbdev), cookie, data);
}
static void _mic_ack_interrupt(struct mbus_device *mbdev, int num)
diff --git a/drivers/misc/mic/scif/scif_fence.c b/drivers/misc/mic/scif/scif_fence.c
index 7f2c96f57066..cac3bcc308a7 100644
--- a/drivers/misc/mic/scif/scif_fence.c
+++ b/drivers/misc/mic/scif/scif_fence.c
@@ -27,7 +27,8 @@
void scif_recv_mark(struct scif_dev *scifdev, struct scifmsg *msg)
{
struct scif_endpt *ep = (struct scif_endpt *)msg->payload[0];
- int mark, err;
+ int mark = 0;
+ int err;
err = _scif_fence_mark(ep, &mark);
if (err)
diff --git a/drivers/misc/qcom-coincell.c b/drivers/misc/qcom-coincell.c
index 7b4a2da487a5..829a61dbd65f 100644
--- a/drivers/misc/qcom-coincell.c
+++ b/drivers/misc/qcom-coincell.c
@@ -94,7 +94,8 @@ static int qcom_coincell_probe(struct platform_device *pdev)
{
struct device_node *node = pdev->dev.of_node;
struct qcom_coincell chgr;
- u32 rset, vset;
+ u32 rset = 0;
+ u32 vset = 0;
bool enable;
int rc;
diff --git a/drivers/misc/sgi-gru/grukservices.c b/drivers/misc/sgi-gru/grukservices.c
index 967b9dd24fe9..030769018461 100644
--- a/drivers/misc/sgi-gru/grukservices.c
+++ b/drivers/misc/sgi-gru/grukservices.c
@@ -718,8 +718,8 @@ cberr:
static int send_message_put_nacked(void *cb, struct gru_message_queue_desc *mqd,
void *mesg, int lines)
{
- unsigned long m, *val = mesg, gpa, save;
- int ret;
+ unsigned long m;
+ int ret, loops = 200; /* experimentally determined */
m = mqd->mq_gpa + (gru_get_amo_value_head(cb) << 6);
if (lines == 2) {
@@ -735,22 +735,28 @@ static int send_message_put_nacked(void *cb, struct gru_message_queue_desc *mqd,
return MQE_OK;
/*
- * Send a cross-partition interrupt to the SSI that contains the target
- * message queue. Normally, the interrupt is automatically delivered by
- * hardware but some error conditions require explicit delivery.
- * Use the GRU to deliver the interrupt. Otherwise partition failures
+ * Send a noop message in order to deliver a cross-partition interrupt
+ * to the SSI that contains the target message queue. Normally, the
+ * interrupt is automatically delivered by hardware following mesq
+ * operations, but some error conditions require explicit delivery.
+ * The noop message will trigger delivery. Otherwise partition failures
* could cause unrecovered errors.
*/
- gpa = uv_global_gru_mmr_address(mqd->interrupt_pnode, UVH_IPI_INT);
- save = *val;
- *val = uv_hub_ipi_value(mqd->interrupt_apicid, mqd->interrupt_vector,
- dest_Fixed);
- gru_vstore_phys(cb, gpa, gru_get_tri(mesg), IAA_REGISTER, IMA);
- ret = gru_wait(cb);
- *val = save;
- if (ret != CBS_IDLE)
- return MQE_UNEXPECTED_CB_ERR;
- return MQE_OK;
+ do {
+ ret = send_noop_message(cb, mqd, mesg);
+ } while ((ret == MQIE_AGAIN || ret == MQE_CONGESTION) && (loops-- > 0));
+
+ if (ret == MQIE_AGAIN || ret == MQE_CONGESTION) {
+ /*
+ * Don't indicate to the app to resend the message, as it's
+ * already been successfully sent. We simply send an OK
+ * (rather than fail the send with MQE_UNEXPECTED_CB_ERR),
+ * assuming that the other side is receiving enough
+ * interrupts to get this message processed anyway.
+ */
+ ret = MQE_OK;
+ }
+ return ret;
}
/*
diff --git a/drivers/misc/sram.c b/drivers/misc/sram.c
index 69cdabea9c03..f84b53d6ce50 100644
--- a/drivers/misc/sram.c
+++ b/drivers/misc/sram.c
@@ -364,8 +364,8 @@ static int sram_probe(struct platform_device *pdev)
sram->virt_base = devm_ioremap(sram->dev, res->start, size);
else
sram->virt_base = devm_ioremap_wc(sram->dev, res->start, size);
- if (IS_ERR(sram->virt_base))
- return PTR_ERR(sram->virt_base);
+ if (!sram->virt_base)
+ return -ENOMEM;
sram->pool = devm_gen_pool_create(sram->dev, ilog2(SRAM_GRANULARITY),
NUMA_NO_NODE, NULL);
diff --git a/drivers/misc/ti-st/st_kim.c b/drivers/misc/ti-st/st_kim.c
index 71b64550b591..bf0d7708beac 100644
--- a/drivers/misc/ti-st/st_kim.c
+++ b/drivers/misc/ti-st/st_kim.c
@@ -78,7 +78,6 @@ static void validate_firmware_response(struct kim_data_s *kim_gdata)
memcpy(kim_gdata->resp_buffer,
kim_gdata->rx_skb->data,
kim_gdata->rx_skb->len);
- complete_all(&kim_gdata->kim_rcvd);
kim_gdata->rx_state = ST_W4_PACKET_TYPE;
kim_gdata->rx_skb = NULL;
kim_gdata->rx_count = 0;
diff --git a/drivers/mmc/card/block.c b/drivers/mmc/card/block.c
index 8a0147dfed27..e62fde3ac431 100644
--- a/drivers/mmc/card/block.c
+++ b/drivers/mmc/card/block.c
@@ -35,6 +35,7 @@
#include <linux/capability.h>
#include <linux/compat.h>
#include <linux/pm_runtime.h>
+#include <linux/idr.h>
#include <linux/mmc/ioctl.h>
#include <linux/mmc/card.h>
@@ -78,14 +79,14 @@ static int perdev_minors = CONFIG_MMC_BLOCK_MINORS;
/*
* We've only got one major, so number of mmcblk devices is
* limited to (1 << 20) / number of minors per device. It is also
- * currently limited by the size of the static bitmaps below.
+ * limited by the MAX_DEVICES below.
*/
static int max_devices;
#define MAX_DEVICES 256
-/* TODO: Replace these with struct ida */
-static DECLARE_BITMAP(dev_use, MAX_DEVICES);
+static DEFINE_IDA(mmc_blk_ida);
+static DEFINE_SPINLOCK(mmc_blk_lock);
/*
* There is one mmc_blk_data per slot.
@@ -178,7 +179,9 @@ static void mmc_blk_put(struct mmc_blk_data *md)
int devidx = mmc_get_devidx(md->disk);
blk_cleanup_queue(md->queue.queue);
- __clear_bit(devidx, dev_use);
+ spin_lock(&mmc_blk_lock);
+ ida_remove(&mmc_blk_ida, devidx);
+ spin_unlock(&mmc_blk_lock);
put_disk(md->disk);
kfree(md);
@@ -615,6 +618,10 @@ static int mmc_blk_ioctl_cmd(struct block_device *bdev,
ioc_err = __mmc_blk_ioctl_cmd(card, md, idata);
+ /* Always switch back to main area after RPMB access */
+ if (md->area_type & MMC_BLK_DATA_AREA_RPMB)
+ mmc_blk_part_switch(card, dev_get_drvdata(&card->dev));
+
mmc_put_card(card);
err = mmc_blk_ioctl_copy_to_user(ic_ptr, idata);
@@ -682,6 +689,10 @@ static int mmc_blk_ioctl_multi_cmd(struct block_device *bdev,
for (i = 0; i < num_of_cmds && !ioc_err; i++)
ioc_err = __mmc_blk_ioctl_cmd(card, md, idata[i]);
+ /* Always switch back to main area after RPMB access */
+ if (md->area_type & MMC_BLK_DATA_AREA_RPMB)
+ mmc_blk_part_switch(card, dev_get_drvdata(&card->dev));
+
mmc_put_card(card);
/* copy to user if data and response */
@@ -745,16 +756,25 @@ static inline int mmc_blk_part_switch(struct mmc_card *card,
if (mmc_card_mmc(card)) {
u8 part_config = card->ext_csd.part_config;
+ if (md->part_type == EXT_CSD_PART_CONFIG_ACC_RPMB)
+ mmc_retune_pause(card->host);
+
part_config &= ~EXT_CSD_PART_CONFIG_ACC_MASK;
part_config |= md->part_type;
ret = mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
EXT_CSD_PART_CONFIG, part_config,
card->ext_csd.part_time);
- if (ret)
+ if (ret) {
+ if (md->part_type == EXT_CSD_PART_CONFIG_ACC_RPMB)
+ mmc_retune_unpause(card->host);
return ret;
+ }
card->ext_csd.part_config = part_config;
+
+ if (main_md->part_curr == EXT_CSD_PART_CONFIG_ACC_RPMB)
+ mmc_retune_unpause(card->host);
}
main_md->part_curr = md->part_type;
@@ -945,16 +965,22 @@ static int mmc_blk_cmd_error(struct request *req, const char *name, int error,
req->rq_disk->disk_name, "timed out", name, status);
/* If the status cmd initially failed, retry the r/w cmd */
- if (!status_valid)
+ if (!status_valid) {
+ pr_err("%s: status not valid, retrying timeout\n",
+ req->rq_disk->disk_name);
return ERR_RETRY;
+ }
/*
* If it was a r/w cmd crc error, or illegal command
* (eg, issued in wrong state) then retry - we should
* have corrected the state problem above.
*/
- if (status & (R1_COM_CRC_ERROR | R1_ILLEGAL_COMMAND))
+ if (status & (R1_COM_CRC_ERROR | R1_ILLEGAL_COMMAND)) {
+ pr_err("%s: command error, retrying timeout\n",
+ req->rq_disk->disk_name);
return ERR_RETRY;
+ }
/* Otherwise abort the command */
return ERR_ABORT;
@@ -2189,10 +2215,23 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
struct mmc_blk_data *md;
int devidx, ret;
- devidx = find_first_zero_bit(dev_use, max_devices);
- if (devidx >= max_devices)
- return ERR_PTR(-ENOSPC);
- __set_bit(devidx, dev_use);
+again:
+ if (!ida_pre_get(&mmc_blk_ida, GFP_KERNEL))
+ return ERR_PTR(-ENOMEM);
+
+ spin_lock(&mmc_blk_lock);
+ ret = ida_get_new(&mmc_blk_ida, &devidx);
+ spin_unlock(&mmc_blk_lock);
+
+ if (ret == -EAGAIN)
+ goto again;
+ else if (ret)
+ return ERR_PTR(ret);
+
+ if (devidx >= max_devices) {
+ ret = -ENOSPC;
+ goto out;
+ }
md = kzalloc(sizeof(struct mmc_blk_data), GFP_KERNEL);
if (!md) {
@@ -2271,7 +2310,7 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
card->ext_csd.rel_sectors)) {
md->flags |= MMC_BLK_REL_WR;
- blk_queue_flush(md->queue.queue, REQ_FLUSH | REQ_FUA);
+ blk_queue_write_cache(md->queue.queue, true, true);
}
if (mmc_card_mmc(card) &&
@@ -2289,6 +2328,9 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
err_kfree:
kfree(md);
out:
+ spin_lock(&mmc_blk_lock);
+ ida_remove(&mmc_blk_ida, devidx);
+ spin_unlock(&mmc_blk_lock);
return ERR_PTR(ret);
}
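The block.c and host.c conversions replace the static bitmap and the IDR with the IDA allocator; in this kernel the ida_pre_get()/ida_get_new() pair still needs the caller's own lock and an -EAGAIN retry loop. The idiom, reduced to a standalone helper:

/* needs <linux/idr.h> and <linux/spinlock.h> */
static DEFINE_IDA(example_ida);
static DEFINE_SPINLOCK(example_lock);

static int example_get_index(void)
{
	int id, ret;

again:
	if (!ida_pre_get(&example_ida, GFP_KERNEL))	/* preload outside the lock */
		return -ENOMEM;

	spin_lock(&example_lock);
	ret = ida_get_new(&example_ida, &id);
	spin_unlock(&example_lock);

	if (ret == -EAGAIN)	/* preloaded node consumed elsewhere: retry */
		goto again;
	if (ret)
		return ret;

	return id;
}

static void example_put_index(int id)
{
	spin_lock(&example_lock);
	ida_remove(&example_ida, id);
	spin_unlock(&example_lock);
}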
@@ -2494,11 +2536,12 @@ static const struct mmc_fixup blk_fixups[] =
MMC_QUIRK_BLK_NO_CMD23),
/*
- * Some Micron MMC cards needs longer data read timeout than
- * indicated in CSD.
+ * Some MMC cards need longer data read timeout than indicated in CSD.
*/
MMC_FIXUP(CID_NAME_ANY, CID_MANFID_MICRON, 0x200, add_quirk_mmc,
MMC_QUIRK_LONG_READ_TIME),
+ MMC_FIXUP("008GE0", CID_MANFID_TOSHIBA, CID_OEMID_ANY, add_quirk_mmc,
+ MMC_QUIRK_LONG_READ_TIME),
/*
* On these Samsung MoviNAND parts, performing secure erase or
diff --git a/drivers/mmc/card/sdio_uart.c b/drivers/mmc/card/sdio_uart.c
index 5415056f9aa5..5af6fb9a9ce2 100644
--- a/drivers/mmc/card/sdio_uart.c
+++ b/drivers/mmc/card/sdio_uart.c
@@ -895,7 +895,7 @@ static void sdio_uart_set_termios(struct tty_struct *tty,
/* Handle transition away from B0 status */
if (!(old_termios->c_cflag & CBAUD) && (cflag & CBAUD)) {
unsigned int mask = TIOCM_DTR;
- if (!(cflag & CRTSCTS) || !test_bit(TTY_THROTTLED, &tty->flags))
+ if (!(cflag & CRTSCTS) || !tty_throttled(tty))
mask |= TIOCM_RTS;
sdio_uart_set_mctrl(port, mask);
}
diff --git a/drivers/mmc/core/Kconfig b/drivers/mmc/core/Kconfig
index 4c33d7690f2f..250f223aaa80 100644
--- a/drivers/mmc/core/Kconfig
+++ b/drivers/mmc/core/Kconfig
@@ -1,3 +1,24 @@
#
# MMC core configuration
#
+config PWRSEQ_EMMC
+ tristate "HW reset support for eMMC"
+ default y
+ depends on OF
+ help
+	  This selects hardware reset support aka pwrseq-emmc for eMMC
+ devices. By default this option is set to y.
+
+ This driver can also be built as a module. If so, the module
+ will be called pwrseq_emmc.
+
+config PWRSEQ_SIMPLE
+ tristate "Simple HW reset support for MMC"
+ default y
+ depends on OF
+ help
+ This selects simple hardware reset support aka pwrseq-simple for MMC
+ devices. By default this option is set to y.
+
+ This driver can also be built as a module. If so, the module
+ will be called pwrseq_simple.
diff --git a/drivers/mmc/core/Makefile b/drivers/mmc/core/Makefile
index 2c25138f28b7..f007151dfdc6 100644
--- a/drivers/mmc/core/Makefile
+++ b/drivers/mmc/core/Makefile
@@ -8,5 +8,7 @@ mmc_core-y := core.o bus.o host.o \
sdio.o sdio_ops.o sdio_bus.o \
sdio_cis.o sdio_io.o sdio_irq.o \
quirks.o slot-gpio.o
-mmc_core-$(CONFIG_OF) += pwrseq.o pwrseq_simple.o pwrseq_emmc.o
+mmc_core-$(CONFIG_OF) += pwrseq.o
+obj-$(CONFIG_PWRSEQ_SIMPLE) += pwrseq_simple.o
+obj-$(CONFIG_PWRSEQ_EMMC) += pwrseq_emmc.o
mmc_core-$(CONFIG_DEBUG_FS) += debugfs.o
diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 41b1e761965f..8b4dfd45433b 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -36,6 +36,9 @@
#include <linux/mmc/sd.h>
#include <linux/mmc/slot-gpio.h>
+#define CREATE_TRACE_POINTS
+#include <trace/events/mmc.h>
+
#include "core.h"
#include "bus.h"
#include "host.h"
@@ -140,6 +143,8 @@ void mmc_request_done(struct mmc_host *host, struct mmc_request *mrq)
cmd->retries = 0;
}
+ trace_mmc_request_done(host, mrq);
+
if (err && cmd->retries && !mmc_card_removed(host->card)) {
/*
* Request starter must handle retries - see
@@ -215,6 +220,8 @@ static void __mmc_start_request(struct mmc_host *host, struct mmc_request *mrq)
}
}
+ trace_mmc_request_start(host, mrq);
+
host->ops->request(host, mrq);
}
@@ -868,11 +875,11 @@ void mmc_set_data_timeout(struct mmc_data *data, const struct mmc_card *card)
/*
* Some cards require longer data read timeout than indicated in CSD.
* Address this by setting the read timeout to a "reasonably high"
- * value. For the cards tested, 300ms has proven enough. If necessary,
+ * value. For the cards tested, 600ms has proven enough. If necessary,
* this value can be increased if other problematic cards require this.
*/
if (mmc_card_long_read_time(card) && data->flags & MMC_DATA_READ) {
- data->timeout_ns = 300000000;
+ data->timeout_ns = 600000000;
data->timeout_clks = 0;
}
@@ -2449,8 +2456,9 @@ int mmc_hw_reset(struct mmc_host *host)
ret = host->bus_ops->reset(host);
mmc_bus_put(host);
- if (ret != -EOPNOTSUPP)
- pr_warn("%s: tried to reset card\n", mmc_hostname(host));
+ if (ret)
+ pr_warn("%s: tried to reset card, got error %d\n",
+ mmc_hostname(host), ret);
return ret;
}
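[Editor's note on the hunks above] The "#define CREATE_TRACE_POINTS" followed by "#include <trace/events/mmc.h>" pair is the standard tracepoint idiom: exactly one compilation unit defines CREATE_TRACE_POINTS so the trace_mmc_request_start()/trace_mmc_request_done() bodies are emitted there, while every other user includes the header without the define. The sketch below shows the usual shape of such a header with an invented event name; it is not the actual contents of trace/events/mmc.h.

/* Generic TRACE_EVENT header sketch (illustrative names only). */
#undef TRACE_SYSTEM
#define TRACE_SYSTEM mmc_example

#if !defined(_TRACE_MMC_EXAMPLE_H) || defined(TRACE_HEADER_MULTI_READ)
#define _TRACE_MMC_EXAMPLE_H

#include <linux/tracepoint.h>

TRACE_EVENT(mmc_example_request_start,
	TP_PROTO(const char *host_name, unsigned int opcode),
	TP_ARGS(host_name, opcode),
	TP_STRUCT__entry(
		__string(name, host_name)
		__field(unsigned int, opcode)
	),
	TP_fast_assign(
		__assign_str(name, host_name);
		__entry->opcode = opcode;
	),
	TP_printk("host=%s opcode=%u", __get_str(name), __entry->opcode)
);

#endif /* _TRACE_MMC_EXAMPLE_H */

/* Must stay outside the include guard; a header living outside
 * include/trace/events/ would also need TRACE_INCLUDE_PATH/FILE here. */
#include <trace/define_trace.h>
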
diff --git a/drivers/mmc/core/host.c b/drivers/mmc/core/host.c
index 6e4c55a4aab5..1be42fab1a30 100644
--- a/drivers/mmc/core/host.c
+++ b/drivers/mmc/core/host.c
@@ -33,14 +33,14 @@
#define cls_dev_to_mmc_host(d) container_of(d, struct mmc_host, class_dev)
-static DEFINE_IDR(mmc_host_idr);
+static DEFINE_IDA(mmc_host_ida);
static DEFINE_SPINLOCK(mmc_host_lock);
static void mmc_host_classdev_release(struct device *dev)
{
struct mmc_host *host = cls_dev_to_mmc_host(dev);
spin_lock(&mmc_host_lock);
- idr_remove(&mmc_host_idr, host->index);
+ ida_remove(&mmc_host_ida, host->index);
spin_unlock(&mmc_host_lock);
kfree(host);
}
@@ -68,8 +68,32 @@ void mmc_retune_enable(struct mmc_host *host)
jiffies + host->retune_period * HZ);
}
+/*
+ * Pause re-tuning for a small set of operations. The pause begins after the
+ * next command and after first doing re-tuning.
+ */
+void mmc_retune_pause(struct mmc_host *host)
+{
+ if (!host->retune_paused) {
+ host->retune_paused = 1;
+ mmc_retune_needed(host);
+ mmc_retune_hold(host);
+ }
+}
+EXPORT_SYMBOL(mmc_retune_pause);
+
+void mmc_retune_unpause(struct mmc_host *host)
+{
+ if (host->retune_paused) {
+ host->retune_paused = 0;
+ mmc_retune_release(host);
+ }
+}
+EXPORT_SYMBOL(mmc_retune_unpause);
+
void mmc_retune_disable(struct mmc_host *host)
{
+ mmc_retune_unpause(host);
host->can_retune = 0;
del_timer_sync(&host->retune_timer);
host->retune_now = 0;
@@ -321,14 +345,20 @@ struct mmc_host *mmc_alloc_host(int extra, struct device *dev)
/* scanning will be enabled when we're ready */
host->rescan_disable = 1;
- idr_preload(GFP_KERNEL);
+
+again:
+ if (!ida_pre_get(&mmc_host_ida, GFP_KERNEL)) {
+ kfree(host);
+ return NULL;
+ }
+
spin_lock(&mmc_host_lock);
- err = idr_alloc(&mmc_host_idr, host, 0, 0, GFP_NOWAIT);
- if (err >= 0)
- host->index = err;
+ err = ida_get_new(&mmc_host_ida, &host->index);
spin_unlock(&mmc_host_lock);
- idr_preload_end();
- if (err < 0) {
+
+ if (err == -EAGAIN) {
+ goto again;
+ } else if (err) {
kfree(host);
return NULL;
}
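[Editor's note on the hunk above] mmc_alloc_host() moves from the IDR API to the classic ida_pre_get()/ida_get_new() retry idiom (this predates ida_simple_get()): memory is preallocated outside the spinlock, and -EAGAIN means another caller consumed the preallocation, so the loop retries. Condensed sketch with illustrative names:

#include <linux/idr.h>
#include <linux/spinlock.h>

static DEFINE_IDA(example_ida);
static DEFINE_SPINLOCK(example_lock);

static int example_alloc_index(int *index)
{
	int err;

again:
	/* Preallocate outside the lock */
	if (!ida_pre_get(&example_ida, GFP_KERNEL))
		return -ENOMEM;

	spin_lock(&example_lock);
	err = ida_get_new(&example_ida, index);
	spin_unlock(&example_lock);

	/* -EAGAIN: the preallocation was used up by a concurrent caller */
	if (err == -EAGAIN)
		goto again;

	return err;
}
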
diff --git a/drivers/mmc/core/mmc.c b/drivers/mmc/core/mmc.c
index 4dbe3df8024b..c984321d1881 100644
--- a/drivers/mmc/core/mmc.c
+++ b/drivers/mmc/core/mmc.c
@@ -333,6 +333,9 @@ static void mmc_manage_gp_partitions(struct mmc_card *card, u8 *ext_csd)
}
}
+/* Minimum partition switch timeout in milliseconds */
+#define MMC_MIN_PART_SWITCH_TIME 300
+
/*
* Decode extended CSD.
*/
@@ -397,6 +400,10 @@ static int mmc_decode_ext_csd(struct mmc_card *card, u8 *ext_csd)
/* EXT_CSD value is in units of 10ms, but we store in ms */
card->ext_csd.part_time = 10 * ext_csd[EXT_CSD_PART_SWITCH_TIME];
+ /* Some eMMC set the value too low so set a minimum */
+ if (card->ext_csd.part_time &&
+ card->ext_csd.part_time < MMC_MIN_PART_SWITCH_TIME)
+ card->ext_csd.part_time = MMC_MIN_PART_SWITCH_TIME;
/* Sleep / awake timeout in 100ns units */
if (sa_shift > 0 && sa_shift <= 0x17)
@@ -1244,10 +1251,11 @@ static int mmc_select_hs200(struct mmc_card *card)
{
struct mmc_host *host = card->host;
bool send_status = true;
- unsigned int old_timing;
+ unsigned int old_timing, old_signal_voltage;
int err = -EINVAL;
u8 val;
+ old_signal_voltage = host->ios.signal_voltage;
if (card->mmc_avail_type & EXT_CSD_CARD_TYPE_HS200_1_2V)
err = __mmc_set_signal_voltage(host, MMC_SIGNAL_VOLTAGE_120);
@@ -1256,7 +1264,7 @@ static int mmc_select_hs200(struct mmc_card *card)
/* If fails try again during next card power cycle */
if (err)
- goto err;
+ return err;
mmc_select_driver_type(card);
@@ -1268,7 +1276,7 @@ static int mmc_select_hs200(struct mmc_card *card)
* switch to HS200 mode if bus width is set successfully.
*/
err = mmc_select_bus_width(card);
- if (!IS_ERR_VALUE(err)) {
+ if (!err) {
val = EXT_CSD_TIMING_HS200 |
card->drive_strength << EXT_CSD_DRV_STR_SHIFT;
err = __mmc_switch(card, EXT_CSD_CMD_SET_NORMAL,
@@ -1290,9 +1298,14 @@ static int mmc_select_hs200(struct mmc_card *card)
}
}
err:
- if (err)
+ if (err) {
+ /* fall back to the old signal voltage, if fails report error */
+ if (__mmc_set_signal_voltage(host, old_signal_voltage))
+ err = -EIO;
+
pr_err("%s: %s failed, error %d\n", mmc_hostname(card->host),
__func__, err);
+ }
return err;
}
@@ -1314,21 +1327,13 @@ static int mmc_select_timing(struct mmc_card *card)
if (err && err != -EBADMSG)
return err;
- if (err) {
- pr_warn("%s: switch to %s failed\n",
- mmc_card_hs(card) ? "high-speed" :
- (mmc_card_hs200(card) ? "hs200" : ""),
- mmc_hostname(card->host));
- err = 0;
- }
-
bus_speed:
/*
* Set the bus speed to the selected bus timing.
* If timing is not selected, backward compatible is the default.
*/
mmc_set_bus_speed(card);
- return err;
+ return 0;
}
/*
@@ -1483,12 +1488,13 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
if (err)
goto free_card;
- /* If doing byte addressing, check if required to do sector
+ /*
+ * If doing byte addressing, check if required to do sector
* addressing. Handle the case of <2GB cards needing sector
* addressing. See section 8.1 JEDEC Standard JED84-A441;
* ocr register has bit 30 set for sector addressing.
*/
- if (!(mmc_card_blockaddr(card)) && (rocr & (1<<30)))
+ if (rocr & BIT(30))
mmc_card_set_blockaddr(card);
/* Erase size depends on CSD and Extended CSD */
@@ -1577,7 +1583,7 @@ static int mmc_init_card(struct mmc_host *host, u32 ocr,
} else if (mmc_card_hs(card)) {
/* Select the desired bus width optionally */
err = mmc_select_bus_width(card);
- if (!IS_ERR_VALUE(err)) {
+ if (!err) {
err = mmc_select_hs_ddr(card);
if (err)
goto free_card;
@@ -1957,19 +1963,23 @@ static int mmc_reset(struct mmc_host *host)
{
struct mmc_card *card = host->card;
- if (!(host->caps & MMC_CAP_HW_RESET) || !host->ops->hw_reset)
- return -EOPNOTSUPP;
-
- if (!mmc_can_reset(card))
- return -EOPNOTSUPP;
-
- mmc_set_clock(host, host->f_init);
-
- host->ops->hw_reset(host);
-
- /* Set initial state and call mmc_set_ios */
- mmc_set_initial_state(host);
-
+ /*
+ * In the case of recovery, we can't expect flushing the cache to work
+ * always, but we have a go and ignore errors.
+ */
+ mmc_flush_cache(host->card);
+
+ if ((host->caps & MMC_CAP_HW_RESET) && host->ops->hw_reset &&
+ mmc_can_reset(card)) {
+ /* If the card accept RST_n signal, send it. */
+ mmc_set_clock(host, host->f_init);
+ host->ops->hw_reset(host);
+ /* Set initial state and call mmc_set_ios */
+ mmc_set_initial_state(host);
+ } else {
+ /* Do a brute force power cycle */
+ mmc_power_cycle(host, card->ocr);
+ }
return mmc_init_card(host, card->ocr, card);
}
diff --git a/drivers/mmc/core/pwrseq.c b/drivers/mmc/core/pwrseq.c
index 4c1d1757dbf9..9386c4771814 100644
--- a/drivers/mmc/core/pwrseq.c
+++ b/drivers/mmc/core/pwrseq.c
@@ -8,88 +8,55 @@
* MMC power sequence management
*/
#include <linux/kernel.h>
-#include <linux/platform_device.h>
#include <linux/err.h>
+#include <linux/module.h>
#include <linux/of.h>
-#include <linux/of_platform.h>
#include <linux/mmc/host.h>
#include "pwrseq.h"
-struct mmc_pwrseq_match {
- const char *compatible;
- struct mmc_pwrseq *(*alloc)(struct mmc_host *host, struct device *dev);
-};
-
-static struct mmc_pwrseq_match pwrseq_match[] = {
- {
- .compatible = "mmc-pwrseq-simple",
- .alloc = mmc_pwrseq_simple_alloc,
- }, {
- .compatible = "mmc-pwrseq-emmc",
- .alloc = mmc_pwrseq_emmc_alloc,
- },
-};
-
-static struct mmc_pwrseq_match *mmc_pwrseq_find(struct device_node *np)
-{
- struct mmc_pwrseq_match *match = ERR_PTR(-ENODEV);
- int i;
-
- for (i = 0; i < ARRAY_SIZE(pwrseq_match); i++) {
- if (of_device_is_compatible(np, pwrseq_match[i].compatible)) {
- match = &pwrseq_match[i];
- break;
- }
- }
-
- return match;
-}
+static DEFINE_MUTEX(pwrseq_list_mutex);
+static LIST_HEAD(pwrseq_list);
int mmc_pwrseq_alloc(struct mmc_host *host)
{
- struct platform_device *pdev;
struct device_node *np;
- struct mmc_pwrseq_match *match;
- struct mmc_pwrseq *pwrseq;
- int ret = 0;
+ struct mmc_pwrseq *p;
np = of_parse_phandle(host->parent->of_node, "mmc-pwrseq", 0);
if (!np)
return 0;
- pdev = of_find_device_by_node(np);
- if (!pdev) {
- ret = -ENODEV;
- goto err;
- }
+ mutex_lock(&pwrseq_list_mutex);
+ list_for_each_entry(p, &pwrseq_list, pwrseq_node) {
+ if (p->dev->of_node == np) {
+ if (!try_module_get(p->owner))
+ dev_err(host->parent,
+ "increasing module refcount failed\n");
+ else
+ host->pwrseq = p;
- match = mmc_pwrseq_find(np);
- if (IS_ERR(match)) {
- ret = PTR_ERR(match);
- goto err;
+ break;
+ }
}
- pwrseq = match->alloc(host, &pdev->dev);
- if (IS_ERR(pwrseq)) {
- ret = PTR_ERR(pwrseq);
- goto err;
- }
+ of_node_put(np);
+ mutex_unlock(&pwrseq_list_mutex);
+
+ if (!host->pwrseq)
+ return -EPROBE_DEFER;
- host->pwrseq = pwrseq;
dev_info(host->parent, "allocated mmc-pwrseq\n");
-err:
- of_node_put(np);
- return ret;
+ return 0;
}
void mmc_pwrseq_pre_power_on(struct mmc_host *host)
{
struct mmc_pwrseq *pwrseq = host->pwrseq;
- if (pwrseq && pwrseq->ops && pwrseq->ops->pre_power_on)
+ if (pwrseq && pwrseq->ops->pre_power_on)
pwrseq->ops->pre_power_on(host);
}
@@ -97,7 +64,7 @@ void mmc_pwrseq_post_power_on(struct mmc_host *host)
{
struct mmc_pwrseq *pwrseq = host->pwrseq;
- if (pwrseq && pwrseq->ops && pwrseq->ops->post_power_on)
+ if (pwrseq && pwrseq->ops->post_power_on)
pwrseq->ops->post_power_on(host);
}
@@ -105,7 +72,7 @@ void mmc_pwrseq_power_off(struct mmc_host *host)
{
struct mmc_pwrseq *pwrseq = host->pwrseq;
- if (pwrseq && pwrseq->ops && pwrseq->ops->power_off)
+ if (pwrseq && pwrseq->ops->power_off)
pwrseq->ops->power_off(host);
}
@@ -113,8 +80,31 @@ void mmc_pwrseq_free(struct mmc_host *host)
{
struct mmc_pwrseq *pwrseq = host->pwrseq;
- if (pwrseq && pwrseq->ops && pwrseq->ops->free)
- pwrseq->ops->free(host);
+ if (pwrseq) {
+ module_put(pwrseq->owner);
+ host->pwrseq = NULL;
+ }
+}
+
+int mmc_pwrseq_register(struct mmc_pwrseq *pwrseq)
+{
+ if (!pwrseq || !pwrseq->ops || !pwrseq->dev)
+ return -EINVAL;
- host->pwrseq = NULL;
+ mutex_lock(&pwrseq_list_mutex);
+ list_add(&pwrseq->pwrseq_node, &pwrseq_list);
+ mutex_unlock(&pwrseq_list_mutex);
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(mmc_pwrseq_register);
+
+void mmc_pwrseq_unregister(struct mmc_pwrseq *pwrseq)
+{
+ if (pwrseq) {
+ mutex_lock(&pwrseq_list_mutex);
+ list_del(&pwrseq->pwrseq_node);
+ mutex_unlock(&pwrseq_list_mutex);
+ }
}
+EXPORT_SYMBOL_GPL(mmc_pwrseq_unregister);
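[Editor's note on the hunk above] After this rework the power-sequence providers are ordinary platform drivers that announce themselves via mmc_pwrseq_register(), and a host whose mmc-pwrseq phandle points at a provider that has not probed yet sees -EPROBE_DEFER. A hedged consumer-side sketch, not from this patch; the lookup is triggered from the core's DT parsing (mmc_of_parse() in this kernel, as far as I can tell), so a host driver only needs to propagate the error:

#include <linux/mmc/host.h>

/* Illustrative host-driver fragment. */
static int example_host_of_setup(struct mmc_host *mmc)
{
	int ret;

	ret = mmc_of_parse(mmc);
	if (ret)	/* may be -EPROBE_DEFER until pwrseq_simple/pwrseq_emmc probe */
		return ret;

	return 0;
}
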
diff --git a/drivers/mmc/core/pwrseq.h b/drivers/mmc/core/pwrseq.h
index 133de0426687..d69e751f148b 100644
--- a/drivers/mmc/core/pwrseq.h
+++ b/drivers/mmc/core/pwrseq.h
@@ -8,32 +8,39 @@
#ifndef _MMC_CORE_PWRSEQ_H
#define _MMC_CORE_PWRSEQ_H
+#include <linux/mmc/host.h>
+
struct mmc_pwrseq_ops {
void (*pre_power_on)(struct mmc_host *host);
void (*post_power_on)(struct mmc_host *host);
void (*power_off)(struct mmc_host *host);
- void (*free)(struct mmc_host *host);
};
struct mmc_pwrseq {
const struct mmc_pwrseq_ops *ops;
+ struct device *dev;
+ struct list_head pwrseq_node;
+ struct module *owner;
};
#ifdef CONFIG_OF
+int mmc_pwrseq_register(struct mmc_pwrseq *pwrseq);
+void mmc_pwrseq_unregister(struct mmc_pwrseq *pwrseq);
+
int mmc_pwrseq_alloc(struct mmc_host *host);
void mmc_pwrseq_pre_power_on(struct mmc_host *host);
void mmc_pwrseq_post_power_on(struct mmc_host *host);
void mmc_pwrseq_power_off(struct mmc_host *host);
void mmc_pwrseq_free(struct mmc_host *host);
-struct mmc_pwrseq *mmc_pwrseq_simple_alloc(struct mmc_host *host,
- struct device *dev);
-struct mmc_pwrseq *mmc_pwrseq_emmc_alloc(struct mmc_host *host,
- struct device *dev);
-
#else
+static inline int mmc_pwrseq_register(struct mmc_pwrseq *pwrseq)
+{
+ return -ENOSYS;
+}
+static inline void mmc_pwrseq_unregister(struct mmc_pwrseq *pwrseq) {}
static inline int mmc_pwrseq_alloc(struct mmc_host *host) { return 0; }
static inline void mmc_pwrseq_pre_power_on(struct mmc_host *host) {}
static inline void mmc_pwrseq_post_power_on(struct mmc_host *host) {}
diff --git a/drivers/mmc/core/pwrseq_emmc.c b/drivers/mmc/core/pwrseq_emmc.c
index 4a82bc77fe49..adc9c0c614fb 100644
--- a/drivers/mmc/core/pwrseq_emmc.c
+++ b/drivers/mmc/core/pwrseq_emmc.c
@@ -9,6 +9,9 @@
*/
#include <linux/delay.h>
#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/platform_device.h>
+#include <linux/module.h>
#include <linux/slab.h>
#include <linux/device.h>
#include <linux/err.h>
@@ -25,6 +28,8 @@ struct mmc_pwrseq_emmc {
struct gpio_desc *reset_gpio;
};
+#define to_pwrseq_emmc(p) container_of(p, struct mmc_pwrseq_emmc, pwrseq)
+
static void __mmc_pwrseq_emmc_reset(struct mmc_pwrseq_emmc *pwrseq)
{
gpiod_set_value(pwrseq->reset_gpio, 1);
@@ -35,27 +40,11 @@ static void __mmc_pwrseq_emmc_reset(struct mmc_pwrseq_emmc *pwrseq)
static void mmc_pwrseq_emmc_reset(struct mmc_host *host)
{
- struct mmc_pwrseq_emmc *pwrseq = container_of(host->pwrseq,
- struct mmc_pwrseq_emmc, pwrseq);
+ struct mmc_pwrseq_emmc *pwrseq = to_pwrseq_emmc(host->pwrseq);
__mmc_pwrseq_emmc_reset(pwrseq);
}
-static void mmc_pwrseq_emmc_free(struct mmc_host *host)
-{
- struct mmc_pwrseq_emmc *pwrseq = container_of(host->pwrseq,
- struct mmc_pwrseq_emmc, pwrseq);
-
- unregister_restart_handler(&pwrseq->reset_nb);
- gpiod_put(pwrseq->reset_gpio);
- kfree(pwrseq);
-}
-
-static const struct mmc_pwrseq_ops mmc_pwrseq_emmc_ops = {
- .post_power_on = mmc_pwrseq_emmc_reset,
- .free = mmc_pwrseq_emmc_free,
-};
-
static int mmc_pwrseq_emmc_reset_nb(struct notifier_block *this,
unsigned long mode, void *cmd)
{
@@ -66,21 +55,22 @@ static int mmc_pwrseq_emmc_reset_nb(struct notifier_block *this,
return NOTIFY_DONE;
}
-struct mmc_pwrseq *mmc_pwrseq_emmc_alloc(struct mmc_host *host,
- struct device *dev)
+static const struct mmc_pwrseq_ops mmc_pwrseq_emmc_ops = {
+ .post_power_on = mmc_pwrseq_emmc_reset,
+};
+
+static int mmc_pwrseq_emmc_probe(struct platform_device *pdev)
{
struct mmc_pwrseq_emmc *pwrseq;
- int ret = 0;
+ struct device *dev = &pdev->dev;
- pwrseq = kzalloc(sizeof(struct mmc_pwrseq_emmc), GFP_KERNEL);
+ pwrseq = devm_kzalloc(dev, sizeof(*pwrseq), GFP_KERNEL);
if (!pwrseq)
- return ERR_PTR(-ENOMEM);
+ return -ENOMEM;
- pwrseq->reset_gpio = gpiod_get(dev, "reset", GPIOD_OUT_LOW);
- if (IS_ERR(pwrseq->reset_gpio)) {
- ret = PTR_ERR(pwrseq->reset_gpio);
- goto free;
- }
+ pwrseq->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW);
+ if (IS_ERR(pwrseq->reset_gpio))
+ return PTR_ERR(pwrseq->reset_gpio);
/*
* register reset handler to ensure emmc reset also from
@@ -92,9 +82,38 @@ struct mmc_pwrseq *mmc_pwrseq_emmc_alloc(struct mmc_host *host,
register_restart_handler(&pwrseq->reset_nb);
pwrseq->pwrseq.ops = &mmc_pwrseq_emmc_ops;
+ pwrseq->pwrseq.dev = dev;
+ pwrseq->pwrseq.owner = THIS_MODULE;
+ platform_set_drvdata(pdev, pwrseq);
+
+ return mmc_pwrseq_register(&pwrseq->pwrseq);
+}
+
+static int mmc_pwrseq_emmc_remove(struct platform_device *pdev)
+{
+ struct mmc_pwrseq_emmc *pwrseq = platform_get_drvdata(pdev);
+
+ unregister_restart_handler(&pwrseq->reset_nb);
+ mmc_pwrseq_unregister(&pwrseq->pwrseq);
- return &pwrseq->pwrseq;
-free:
- kfree(pwrseq);
- return ERR_PTR(ret);
+ return 0;
}
+
+static const struct of_device_id mmc_pwrseq_emmc_of_match[] = {
+ { .compatible = "mmc-pwrseq-emmc",},
+ {/* sentinel */},
+};
+
+MODULE_DEVICE_TABLE(of, mmc_pwrseq_emmc_of_match);
+
+static struct platform_driver mmc_pwrseq_emmc_driver = {
+ .probe = mmc_pwrseq_emmc_probe,
+ .remove = mmc_pwrseq_emmc_remove,
+ .driver = {
+ .name = "pwrseq_emmc",
+ .of_match_table = mmc_pwrseq_emmc_of_match,
+ },
+};
+
+module_platform_driver(mmc_pwrseq_emmc_driver);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/mmc/core/pwrseq_simple.c b/drivers/mmc/core/pwrseq_simple.c
index bc173e18b71c..450d907c6e6c 100644
--- a/drivers/mmc/core/pwrseq_simple.c
+++ b/drivers/mmc/core/pwrseq_simple.c
@@ -8,7 +8,10 @@
* Simple MMC power sequence management
*/
#include <linux/clk.h>
+#include <linux/init.h>
#include <linux/kernel.h>
+#include <linux/platform_device.h>
+#include <linux/module.h>
#include <linux/slab.h>
#include <linux/device.h>
#include <linux/err.h>
@@ -25,6 +28,8 @@ struct mmc_pwrseq_simple {
struct gpio_descs *reset_gpios;
};
+#define to_pwrseq_simple(p) container_of(p, struct mmc_pwrseq_simple, pwrseq)
+
static void mmc_pwrseq_simple_set_gpios_value(struct mmc_pwrseq_simple *pwrseq,
int value)
{
@@ -44,8 +49,7 @@ static void mmc_pwrseq_simple_set_gpios_value(struct mmc_pwrseq_simple *pwrseq,
static void mmc_pwrseq_simple_pre_power_on(struct mmc_host *host)
{
- struct mmc_pwrseq_simple *pwrseq = container_of(host->pwrseq,
- struct mmc_pwrseq_simple, pwrseq);
+ struct mmc_pwrseq_simple *pwrseq = to_pwrseq_simple(host->pwrseq);
if (!IS_ERR(pwrseq->ext_clk) && !pwrseq->clk_enabled) {
clk_prepare_enable(pwrseq->ext_clk);
@@ -57,16 +61,14 @@ static void mmc_pwrseq_simple_pre_power_on(struct mmc_host *host)
static void mmc_pwrseq_simple_post_power_on(struct mmc_host *host)
{
- struct mmc_pwrseq_simple *pwrseq = container_of(host->pwrseq,
- struct mmc_pwrseq_simple, pwrseq);
+ struct mmc_pwrseq_simple *pwrseq = to_pwrseq_simple(host->pwrseq);
mmc_pwrseq_simple_set_gpios_value(pwrseq, 0);
}
static void mmc_pwrseq_simple_power_off(struct mmc_host *host)
{
- struct mmc_pwrseq_simple *pwrseq = container_of(host->pwrseq,
- struct mmc_pwrseq_simple, pwrseq);
+ struct mmc_pwrseq_simple *pwrseq = to_pwrseq_simple(host->pwrseq);
mmc_pwrseq_simple_set_gpios_value(pwrseq, 1);
@@ -76,59 +78,64 @@ static void mmc_pwrseq_simple_power_off(struct mmc_host *host)
}
}
-static void mmc_pwrseq_simple_free(struct mmc_host *host)
-{
- struct mmc_pwrseq_simple *pwrseq = container_of(host->pwrseq,
- struct mmc_pwrseq_simple, pwrseq);
-
- if (!IS_ERR(pwrseq->reset_gpios))
- gpiod_put_array(pwrseq->reset_gpios);
-
- if (!IS_ERR(pwrseq->ext_clk))
- clk_put(pwrseq->ext_clk);
-
- kfree(pwrseq);
-}
-
static const struct mmc_pwrseq_ops mmc_pwrseq_simple_ops = {
.pre_power_on = mmc_pwrseq_simple_pre_power_on,
.post_power_on = mmc_pwrseq_simple_post_power_on,
.power_off = mmc_pwrseq_simple_power_off,
- .free = mmc_pwrseq_simple_free,
};
-struct mmc_pwrseq *mmc_pwrseq_simple_alloc(struct mmc_host *host,
- struct device *dev)
+static const struct of_device_id mmc_pwrseq_simple_of_match[] = {
+ { .compatible = "mmc-pwrseq-simple",},
+ {/* sentinel */},
+};
+MODULE_DEVICE_TABLE(of, mmc_pwrseq_simple_of_match);
+
+static int mmc_pwrseq_simple_probe(struct platform_device *pdev)
{
struct mmc_pwrseq_simple *pwrseq;
- int ret = 0;
+ struct device *dev = &pdev->dev;
- pwrseq = kzalloc(sizeof(*pwrseq), GFP_KERNEL);
+ pwrseq = devm_kzalloc(dev, sizeof(*pwrseq), GFP_KERNEL);
if (!pwrseq)
- return ERR_PTR(-ENOMEM);
+ return -ENOMEM;
- pwrseq->ext_clk = clk_get(dev, "ext_clock");
- if (IS_ERR(pwrseq->ext_clk) &&
- PTR_ERR(pwrseq->ext_clk) != -ENOENT) {
- ret = PTR_ERR(pwrseq->ext_clk);
- goto free;
- }
+ pwrseq->ext_clk = devm_clk_get(dev, "ext_clock");
+ if (IS_ERR(pwrseq->ext_clk) && PTR_ERR(pwrseq->ext_clk) != -ENOENT)
+ return PTR_ERR(pwrseq->ext_clk);
- pwrseq->reset_gpios = gpiod_get_array(dev, "reset", GPIOD_OUT_HIGH);
+ pwrseq->reset_gpios = devm_gpiod_get_array(dev, "reset",
+ GPIOD_OUT_HIGH);
if (IS_ERR(pwrseq->reset_gpios) &&
PTR_ERR(pwrseq->reset_gpios) != -ENOENT &&
PTR_ERR(pwrseq->reset_gpios) != -ENOSYS) {
- ret = PTR_ERR(pwrseq->reset_gpios);
- goto clk_put;
+ return PTR_ERR(pwrseq->reset_gpios);
}
+ pwrseq->pwrseq.dev = dev;
pwrseq->pwrseq.ops = &mmc_pwrseq_simple_ops;
+ pwrseq->pwrseq.owner = THIS_MODULE;
+ platform_set_drvdata(pdev, pwrseq);
- return &pwrseq->pwrseq;
-clk_put:
- if (!IS_ERR(pwrseq->ext_clk))
- clk_put(pwrseq->ext_clk);
-free:
- kfree(pwrseq);
- return ERR_PTR(ret);
+ return mmc_pwrseq_register(&pwrseq->pwrseq);
}
+
+static int mmc_pwrseq_simple_remove(struct platform_device *pdev)
+{
+ struct mmc_pwrseq_simple *pwrseq = platform_get_drvdata(pdev);
+
+ mmc_pwrseq_unregister(&pwrseq->pwrseq);
+
+ return 0;
+}
+
+static struct platform_driver mmc_pwrseq_simple_driver = {
+ .probe = mmc_pwrseq_simple_probe,
+ .remove = mmc_pwrseq_simple_remove,
+ .driver = {
+ .name = "pwrseq_simple",
+ .of_match_table = mmc_pwrseq_simple_of_match,
+ },
+};
+
+module_platform_driver(mmc_pwrseq_simple_driver);
+MODULE_LICENSE("GPL v2");
diff --git a/drivers/mmc/core/sdio_cis.c b/drivers/mmc/core/sdio_cis.c
index 6f6fc527a263..dcb3dee59fa5 100644
--- a/drivers/mmc/core/sdio_cis.c
+++ b/drivers/mmc/core/sdio_cis.c
@@ -177,8 +177,13 @@ static int cistpl_funce_func(struct mmc_card *card, struct sdio_func *func,
vsn = func->card->cccr.sdio_vsn;
min_size = (vsn == SDIO_SDIO_REV_1_00) ? 28 : 42;
- if (size < min_size)
+ if (size == 28 && vsn == SDIO_SDIO_REV_1_10) {
+ pr_warn("%s: card has broken SDIO 1.1 CIS, forcing SDIO 1.0\n",
+ mmc_hostname(card->host));
+ vsn = SDIO_SDIO_REV_1_00;
+ } else if (size < min_size) {
return -EINVAL;
+ }
/* TPLFE_MAX_BLK_SIZE */
func->max_blksize = buf[12] | (buf[13] << 8);
diff --git a/drivers/mmc/host/Kconfig b/drivers/mmc/host/Kconfig
index e657af0e95fa..0aa484c10c0a 100644
--- a/drivers/mmc/host/Kconfig
+++ b/drivers/mmc/host/Kconfig
@@ -677,9 +677,9 @@ config MMC_SH_MMCIF
depends on HAS_DMA
depends on SUPERH || ARCH_RENESAS || COMPILE_TEST
help
- This selects the MMC Host Interface controller (MMCIF).
+ This selects the MMC Host Interface controller (MMCIF) found in various
+ Renesas SoCs for SH and ARM architectures.
- This driver supports MMCIF in sh7724/sh7757/sh7372.
config MMC_JZ4740
tristate "JZ4740 SD/Multimedia Card Interface support"
diff --git a/drivers/mmc/host/atmel-mci.c b/drivers/mmc/host/atmel-mci.c
index 9268c41a8561..0ad8ef565b74 100644
--- a/drivers/mmc/host/atmel-mci.c
+++ b/drivers/mmc/host/atmel-mci.c
@@ -1410,8 +1410,6 @@ static void atmci_request(struct mmc_host *mmc, struct mmc_request *mrq)
WARN_ON(slot->mrq);
dev_dbg(&host->pdev->dev, "MRQ: cmd %u\n", mrq->cmd->opcode);
- pm_runtime_get_sync(&host->pdev->dev);
-
/*
* We may "know" the card is gone even though there's still an
* electrical connection. If so, we really need to communicate
@@ -1442,8 +1440,6 @@ static void atmci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
struct atmel_mci *host = slot->host;
unsigned int i;
- pm_runtime_get_sync(&host->pdev->dev);
-
slot->sdc_reg &= ~ATMCI_SDCBUS_MASK;
switch (ios->bus_width) {
case MMC_BUS_WIDTH_1:
@@ -1576,8 +1572,6 @@ static void atmci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
break;
}
- pm_runtime_mark_last_busy(&host->pdev->dev);
- pm_runtime_put_autosuspend(&host->pdev->dev);
}
static int atmci_get_ro(struct mmc_host *mmc)
@@ -1669,9 +1663,6 @@ static void atmci_request_end(struct atmel_mci *host, struct mmc_request *mrq)
spin_unlock(&host->lock);
mmc_request_done(prev_mmc, mrq);
spin_lock(&host->lock);
-
- pm_runtime_mark_last_busy(&host->pdev->dev);
- pm_runtime_put_autosuspend(&host->pdev->dev);
}
static void atmci_command_complete(struct atmel_mci *host,
diff --git a/drivers/mmc/host/davinci_mmc.c b/drivers/mmc/host/davinci_mmc.c
index 693144e7427b..a56373c75983 100644
--- a/drivers/mmc/host/davinci_mmc.c
+++ b/drivers/mmc/host/davinci_mmc.c
@@ -32,12 +32,10 @@
#include <linux/delay.h>
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
-#include <linux/edma.h>
#include <linux/mmc/mmc.h>
#include <linux/of.h>
#include <linux/of_device.h>
-#include <linux/platform_data/edma.h>
#include <linux/platform_data/mmc-davinci.h>
/*
@@ -202,7 +200,6 @@ struct mmc_davinci_host {
u32 buffer_bytes_left;
u32 bytes_left;
- u32 rxdma, txdma;
struct dma_chan *dma_tx;
struct dma_chan *dma_rx;
bool use_dma;
@@ -513,35 +510,20 @@ davinci_release_dma_channels(struct mmc_davinci_host *host)
static int __init davinci_acquire_dma_channels(struct mmc_davinci_host *host)
{
- int r;
- dma_cap_mask_t mask;
-
- dma_cap_zero(mask);
- dma_cap_set(DMA_SLAVE, mask);
-
- host->dma_tx =
- dma_request_slave_channel_compat(mask, edma_filter_fn,
- &host->txdma, mmc_dev(host->mmc), "tx");
- if (!host->dma_tx) {
+ host->dma_tx = dma_request_chan(mmc_dev(host->mmc), "tx");
+ if (IS_ERR(host->dma_tx)) {
dev_err(mmc_dev(host->mmc), "Can't get dma_tx channel\n");
- return -ENODEV;
+ return PTR_ERR(host->dma_tx);
}
- host->dma_rx =
- dma_request_slave_channel_compat(mask, edma_filter_fn,
- &host->rxdma, mmc_dev(host->mmc), "rx");
- if (!host->dma_rx) {
+ host->dma_rx = dma_request_chan(mmc_dev(host->mmc), "rx");
+ if (IS_ERR(host->dma_rx)) {
dev_err(mmc_dev(host->mmc), "Can't get dma_rx channel\n");
- r = -ENODEV;
- goto free_master_write;
+ dma_release_channel(host->dma_tx);
+ return PTR_ERR(host->dma_rx);
}
return 0;
-
-free_master_write:
- dma_release_channel(host->dma_tx);
-
- return r;
}
/*----------------------------------------------------------------------*/
@@ -1223,7 +1205,7 @@ static int __init davinci_mmcsd_probe(struct platform_device *pdev)
struct mmc_davinci_host *host = NULL;
struct mmc_host *mmc = NULL;
struct resource *r, *mem = NULL;
- int ret = 0, irq = 0;
+ int ret, irq;
size_t mem_size;
const struct platform_device_id *id_entry;
@@ -1233,50 +1215,40 @@ static int __init davinci_mmcsd_probe(struct platform_device *pdev)
return -ENOENT;
}
- ret = -ENODEV;
r = platform_get_resource(pdev, IORESOURCE_MEM, 0);
irq = platform_get_irq(pdev, 0);
if (!r || irq == NO_IRQ)
- goto out;
+ return -ENODEV;
- ret = -EBUSY;
mem_size = resource_size(r);
- mem = request_mem_region(r->start, mem_size, pdev->name);
+ mem = devm_request_mem_region(&pdev->dev, r->start, mem_size,
+ pdev->name);
if (!mem)
- goto out;
+ return -EBUSY;
- ret = -ENOMEM;
mmc = mmc_alloc_host(sizeof(struct mmc_davinci_host), &pdev->dev);
if (!mmc)
- goto out;
+ return -ENOMEM;
host = mmc_priv(mmc);
host->mmc = mmc; /* Important */
- r = platform_get_resource(pdev, IORESOURCE_DMA, 0);
- if (!r)
- dev_warn(&pdev->dev, "RX DMA resource not specified\n");
- else
- host->rxdma = r->start;
-
- r = platform_get_resource(pdev, IORESOURCE_DMA, 1);
- if (!r)
- dev_warn(&pdev->dev, "TX DMA resource not specified\n");
- else
- host->txdma = r->start;
-
host->mem_res = mem;
- host->base = ioremap(mem->start, mem_size);
- if (!host->base)
- goto out;
+ host->base = devm_ioremap(&pdev->dev, mem->start, mem_size);
+ if (!host->base) {
+ ret = -ENOMEM;
+ goto ioremap_fail;
+ }
- ret = -ENXIO;
- host->clk = clk_get(&pdev->dev, "MMCSDCLK");
+ host->clk = devm_clk_get(&pdev->dev, NULL);
if (IS_ERR(host->clk)) {
ret = PTR_ERR(host->clk);
- goto out;
+ goto clk_get_fail;
}
- clk_enable(host->clk);
+ ret = clk_prepare_enable(host->clk);
+ if (ret)
+ goto clk_prepare_enable_fail;
+
host->mmc_input_clk = clk_get_rate(host->clk);
init_mmcsd_host(host);
@@ -1291,8 +1263,13 @@ static int __init davinci_mmcsd_probe(struct platform_device *pdev)
host->mmc_irq = irq;
host->sdio_irq = platform_get_irq(pdev, 1);
- if (host->use_dma && davinci_acquire_dma_channels(host) != 0)
- host->use_dma = 0;
+ if (host->use_dma) {
+ ret = davinci_acquire_dma_channels(host);
+ if (ret == -EPROBE_DEFER)
+ goto dma_probe_defer;
+ else if (ret)
+ host->use_dma = 0;
+ }
/* REVISIT: someday, support IRQ-driven card detection. */
mmc->caps |= MMC_CAP_NEEDS_POLL;
@@ -1346,15 +1323,17 @@ static int __init davinci_mmcsd_probe(struct platform_device *pdev)
ret = mmc_add_host(mmc);
if (ret < 0)
- goto out;
+ goto mmc_add_host_fail;
- ret = request_irq(irq, mmc_davinci_irq, 0, mmc_hostname(mmc), host);
+ ret = devm_request_irq(&pdev->dev, irq, mmc_davinci_irq, 0,
+ mmc_hostname(mmc), host);
if (ret)
- goto out;
+ goto request_irq_fail;
if (host->sdio_irq >= 0) {
- ret = request_irq(host->sdio_irq, mmc_davinci_sdio_irq, 0,
- mmc_hostname(mmc), host);
+ ret = devm_request_irq(&pdev->dev, host->sdio_irq,
+ mmc_davinci_sdio_irq, 0,
+ mmc_hostname(mmc), host);
if (!ret)
mmc->caps |= MMC_CAP_SDIO_IRQ;
}
@@ -1367,28 +1346,18 @@ static int __init davinci_mmcsd_probe(struct platform_device *pdev)
return 0;
-out:
+request_irq_fail:
+ mmc_remove_host(mmc);
+mmc_add_host_fail:
mmc_davinci_cpufreq_deregister(host);
cpu_freq_fail:
- if (host) {
- davinci_release_dma_channels(host);
-
- if (host->clk) {
- clk_disable(host->clk);
- clk_put(host->clk);
- }
-
- if (host->base)
- iounmap(host->base);
- }
-
- if (mmc)
- mmc_free_host(mmc);
-
- if (mem)
- release_resource(mem);
-
- dev_dbg(&pdev->dev, "probe err %d\n", ret);
+ davinci_release_dma_channels(host);
+dma_probe_defer:
+ clk_disable_unprepare(host->clk);
+clk_prepare_enable_fail:
+clk_get_fail:
+ioremap_fail:
+ mmc_free_host(mmc);
return ret;
}
@@ -1397,25 +1366,11 @@ static int __exit davinci_mmcsd_remove(struct platform_device *pdev)
{
struct mmc_davinci_host *host = platform_get_drvdata(pdev);
- if (host) {
- mmc_davinci_cpufreq_deregister(host);
-
- mmc_remove_host(host->mmc);
- free_irq(host->mmc_irq, host);
- if (host->mmc->caps & MMC_CAP_SDIO_IRQ)
- free_irq(host->sdio_irq, host);
-
- davinci_release_dma_channels(host);
-
- clk_disable(host->clk);
- clk_put(host->clk);
-
- iounmap(host->base);
-
- release_resource(host->mem_res);
-
- mmc_free_host(host->mmc);
- }
+ mmc_remove_host(host->mmc);
+ mmc_davinci_cpufreq_deregister(host);
+ davinci_release_dma_channels(host);
+ clk_disable_unprepare(host->clk);
+ mmc_free_host(host->mmc);
return 0;
}
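[Editor's note on the hunks above] The DaVinci conversion (and the OMAP ones further down) replace dma_request_slave_channel_compat() with dma_request_chan(), which returns an ERR_PTR and can therefore propagate -EPROBE_DEFER from the DMA provider. Generic sketch of the idiom with illustrative names:

#include <linux/dmaengine.h>

static int example_acquire_dma(struct device *dev,
			       struct dma_chan **tx, struct dma_chan **rx)
{
	*tx = dma_request_chan(dev, "tx");
	if (IS_ERR(*tx))
		return PTR_ERR(*tx);	/* may be -EPROBE_DEFER */

	*rx = dma_request_chan(dev, "rx");
	if (IS_ERR(*rx)) {
		dma_release_channel(*tx);
		return PTR_ERR(*rx);
	}

	return 0;
}
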
diff --git a/drivers/mmc/host/dw_mmc-exynos.c b/drivers/mmc/host/dw_mmc-exynos.c
index 8790f2afc057..7e3a3247b852 100644
--- a/drivers/mmc/host/dw_mmc-exynos.c
+++ b/drivers/mmc/host/dw_mmc-exynos.c
@@ -91,10 +91,14 @@ static inline u8 dw_mci_exynos_get_ciu_div(struct dw_mci *host)
return SDMMC_CLKSEL_GET_DIV(mci_readl(host, CLKSEL)) + 1;
}
-static int dw_mci_exynos_priv_init(struct dw_mci *host)
+static void dw_mci_exynos_config_smu(struct dw_mci *host)
{
struct dw_mci_exynos_priv_data *priv = host->priv;
+ /*
+ * If Exynos is provided the Security management,
+ * set for non-ecryption mode at this time.
+ */
if (priv->ctrl_type == DW_MCI_TYPE_EXYNOS5420_SMU ||
priv->ctrl_type == DW_MCI_TYPE_EXYNOS7_SMU) {
mci_writel(host, MPSBEGIN0, 0);
@@ -104,6 +108,13 @@ static int dw_mci_exynos_priv_init(struct dw_mci *host)
SDMMC_MPSCTRL_VALID |
SDMMC_MPSCTRL_NON_SECURE_WRITE_BIT);
}
+}
+
+static int dw_mci_exynos_priv_init(struct dw_mci *host)
+{
+ struct dw_mci_exynos_priv_data *priv = host->priv;
+
+ dw_mci_exynos_config_smu(host);
if (priv->ctrl_type >= DW_MCI_TYPE_EXYNOS5420) {
priv->saved_strobe_ctrl = mci_readl(host, HS400_DLINE_CTRL);
@@ -115,13 +126,6 @@ static int dw_mci_exynos_priv_init(struct dw_mci *host)
DQS_CTRL_GET_RD_DELAY(priv->saved_strobe_ctrl);
}
- return 0;
-}
-
-static int dw_mci_exynos_setup_clock(struct dw_mci *host)
-{
- struct dw_mci_exynos_priv_data *priv = host->priv;
-
host->bus_hz /= (priv->ciu_div + 1);
return 0;
@@ -169,7 +173,7 @@ static int dw_mci_exynos_resume(struct device *dev)
{
struct dw_mci *host = dev_get_drvdata(dev);
- dw_mci_exynos_priv_init(host);
+ dw_mci_exynos_config_smu(host);
return dw_mci_resume(host);
}
@@ -489,7 +493,6 @@ static unsigned long exynos_dwmmc_caps[4] = {
static const struct dw_mci_drv_data exynos_drv_data = {
.caps = exynos_dwmmc_caps,
.init = dw_mci_exynos_priv_init,
- .setup_clock = dw_mci_exynos_setup_clock,
.set_ios = dw_mci_exynos_set_ios,
.parse_dt = dw_mci_exynos_parse_dt,
.execute_tuning = dw_mci_exynos_execute_tuning,
diff --git a/drivers/mmc/host/dw_mmc-rockchip.c b/drivers/mmc/host/dw_mmc-rockchip.c
index 84e50f3a64b6..358b0dc853b0 100644
--- a/drivers/mmc/host/dw_mmc-rockchip.c
+++ b/drivers/mmc/host/dw_mmc-rockchip.c
@@ -26,13 +26,6 @@ struct dw_mci_rockchip_priv_data {
int default_sample_phase;
};
-static int dw_mci_rk3288_setup_clock(struct dw_mci *host)
-{
- host->bus_hz /= RK3288_CLKGEN_DIV;
-
- return 0;
-}
-
static void dw_mci_rk3288_set_ios(struct dw_mci *host, struct mmc_ios *ios)
{
struct dw_mci_rockchip_priv_data *priv = host->priv;
@@ -73,6 +66,70 @@ static void dw_mci_rk3288_set_ios(struct dw_mci *host, struct mmc_ios *ios)
/* Make sure we use phases which we can enumerate with */
if (!IS_ERR(priv->sample_clk))
clk_set_phase(priv->sample_clk, priv->default_sample_phase);
+
+ /*
+ * Set the drive phase offset based on speed mode to achieve hold times.
+ *
+ * NOTE: this is _not_ a value that is dynamically tuned and is also
+ * _not_ a value that will vary from board to board. It is a value
+ * that could vary between different SoC models if they had massively
+ * different output clock delays inside their dw_mmc IP block (delay_o),
+ * but since it's OK to overshoot a little we don't need to do complex
+ * calculations and can pick values that will just work for everyone.
+ *
+ * When picking values we'll stick with picking 0/90/180/270 since
+ * those can be made very accurately on all known Rockchip SoCs.
+ *
+ * Note that these values match values from the DesignWare Databook
+ * tables for the most part except for SDR12 and "ID mode". For those
+ * two modes the databook calculations assume a clock in of 50MHz. As
+ * seen above, we always use a clock in rate that is exactly the
+ * card's input clock (times RK3288_CLKGEN_DIV, but that gets divided
+ * back out before the controller sees it).
+ *
+ * From measurement of a single device, it appears that delay_o is
+ * about .5 ns. Since we try to leave a bit of margin, it's expected
+ * that numbers here will be fine even with much larger delay_o
+ * (the 1.4 ns assumed by the DesignWare Databook would result in the
+ * same results, for instance).
+ */
+ if (!IS_ERR(priv->drv_clk)) {
+ int phase;
+
+ /*
+ * In almost all cases a 90 degree phase offset will provide
+ * sufficient hold times across all valid input clock rates
+ * assuming delay_o is not absurd for a given SoC. We'll use
+ * that as a default.
+ */
+ phase = 90;
+
+ switch (ios->timing) {
+ case MMC_TIMING_MMC_DDR52:
+ /*
+ * Since clock in rate with MMC_DDR52 is doubled when
+ * bus width is 8 we need to double the phase offset
+ * to get the same timings.
+ */
+ if (ios->bus_width == MMC_BUS_WIDTH_8)
+ phase = 180;
+ break;
+ case MMC_TIMING_UHS_SDR104:
+ case MMC_TIMING_MMC_HS200:
+ /*
+ * In the case of 150 MHz clock (typical max for
+ * Rockchip SoCs), 90 degree offset will add a delay
+ * of 1.67 ns. That will meet min hold time of .8 ns
+ * as long as clock output delay is < .87 ns. On
+ * SoCs measured this seems to be OK, but it doesn't
+ * hurt to give margin here, so we use 180.
+ */
+ phase = 180;
+ break;
+ }
+
+ clk_set_phase(priv->drv_clk, phase);
+ }
}
#define NUM_PHASES 360
@@ -231,18 +288,30 @@ static int dw_mci_rockchip_init(struct dw_mci *host)
/* It needs this quirk on all Rockchip SoCs */
host->pdata->quirks |= DW_MCI_QUIRK_BROKEN_DTO;
+ if (of_device_is_compatible(host->dev->of_node,
+ "rockchip,rk3288-dw-mshc"))
+ host->bus_hz /= RK3288_CLKGEN_DIV;
+
return 0;
}
+/* Common capabilities of RK3288 SoC */
+static unsigned long dw_mci_rk3288_dwmmc_caps[4] = {
+ MMC_CAP_ERASE | MMC_CAP_CMD23,
+ MMC_CAP_ERASE | MMC_CAP_CMD23,
+ MMC_CAP_ERASE | MMC_CAP_CMD23,
+ MMC_CAP_ERASE | MMC_CAP_CMD23,
+};
+
static const struct dw_mci_drv_data rk2928_drv_data = {
.init = dw_mci_rockchip_init,
};
static const struct dw_mci_drv_data rk3288_drv_data = {
+ .caps = dw_mci_rk3288_dwmmc_caps,
.set_ios = dw_mci_rk3288_set_ios,
.execute_tuning = dw_mci_rk3288_execute_tuning,
.parse_dt = dw_mci_rk3288_parse_dt,
- .setup_clock = dw_mci_rk3288_setup_clock,
.init = dw_mci_rockchip_init,
};
@@ -269,33 +338,13 @@ static int dw_mci_rockchip_probe(struct platform_device *pdev)
return dw_mci_pltfm_register(pdev, drv_data);
}
-#ifdef CONFIG_PM_SLEEP
-static int dw_mci_rockchip_suspend(struct device *dev)
-{
- struct dw_mci *host = dev_get_drvdata(dev);
-
- return dw_mci_suspend(host);
-}
-
-static int dw_mci_rockchip_resume(struct device *dev)
-{
- struct dw_mci *host = dev_get_drvdata(dev);
-
- return dw_mci_resume(host);
-}
-#endif /* CONFIG_PM_SLEEP */
-
-static SIMPLE_DEV_PM_OPS(dw_mci_rockchip_pmops,
- dw_mci_rockchip_suspend,
- dw_mci_rockchip_resume);
-
static struct platform_driver dw_mci_rockchip_pltfm_driver = {
.probe = dw_mci_rockchip_probe,
.remove = dw_mci_pltfm_remove,
.driver = {
.name = "dwmmc_rockchip",
.of_match_table = dw_mci_rockchip_match,
- .pm = &dw_mci_rockchip_pmops,
+ .pm = &dw_mci_pltfm_pmops,
},
};
diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.c
index 242f9a0769bd..2cc6123b1df9 100644
--- a/drivers/mmc/host/dw_mmc.c
+++ b/drivers/mmc/host/dw_mmc.c
@@ -680,7 +680,7 @@ static const struct dw_mci_dma_ops dw_mci_idmac_ops = {
static void dw_mci_edmac_stop_dma(struct dw_mci *host)
{
- dmaengine_terminate_all(host->dms->ch);
+ dmaengine_terminate_async(host->dms->ch);
}
static int dw_mci_edmac_start_dma(struct dw_mci *host,
@@ -1431,7 +1431,7 @@ static int dw_mci_get_ro(struct mmc_host *mmc)
int gpio_ro = mmc_gpio_get_ro(mmc);
/* Use platform get_ro function, else try on board write protect */
- if (!IS_ERR_VALUE(gpio_ro))
+ if (gpio_ro >= 0)
read_only = gpio_ro;
else
read_only =
@@ -1454,7 +1454,7 @@ static int dw_mci_get_cd(struct mmc_host *mmc)
if ((mmc->caps & MMC_CAP_NEEDS_POLL) ||
(mmc->caps & MMC_CAP_NONREMOVABLE))
present = 1;
- else if (!IS_ERR_VALUE(gpio_cd))
+ else if (gpio_cd >= 0)
present = gpio_cd;
else
present = (mci_readl(slot->host, CDETECT) & (1 << slot->id))
@@ -2595,13 +2595,13 @@ static int dw_mci_init_slot(struct dw_mci *host, unsigned int id)
/* Useful defaults if platform data is unset. */
if (host->use_dma == TRANS_MODE_IDMAC) {
mmc->max_segs = host->ring_size;
- mmc->max_blk_size = 65536;
+ mmc->max_blk_size = 65535;
mmc->max_seg_size = 0x1000;
mmc->max_req_size = mmc->max_seg_size * host->ring_size;
mmc->max_blk_count = mmc->max_req_size / 512;
} else if (host->use_dma == TRANS_MODE_EDMAC) {
mmc->max_segs = 64;
- mmc->max_blk_size = 65536;
+ mmc->max_blk_size = 65535;
mmc->max_blk_count = 65535;
mmc->max_req_size =
mmc->max_blk_size * mmc->max_blk_count;
@@ -2609,7 +2609,7 @@ static int dw_mci_init_slot(struct dw_mci *host, unsigned int id)
} else {
/* TRANS_MODE_PIO */
mmc->max_segs = 64;
- mmc->max_blk_size = 65536; /* BLKSIZ is 16 bits */
+ mmc->max_blk_size = 65535; /* BLKSIZ is 16 bits */
mmc->max_blk_count = 512;
mmc->max_req_size = mmc->max_blk_size *
mmc->max_blk_count;
@@ -2927,7 +2927,7 @@ static void dw_mci_enable_cd(struct dw_mci *host)
if (slot->mmc->caps & MMC_CAP_NEEDS_POLL)
return;
- if (IS_ERR_VALUE(mmc_gpio_get_cd(slot->mmc)))
+ if (mmc_gpio_get_cd(slot->mmc) < 0)
break;
}
if (i == host->num_slots)
@@ -3003,15 +3003,6 @@ int dw_mci_probe(struct dw_mci *host)
}
}
- if (drv_data && drv_data->setup_clock) {
- ret = drv_data->setup_clock(host);
- if (ret) {
- dev_err(host->dev,
- "implementation specific clock setup failed\n");
- goto err_clk_ciu;
- }
- }
-
setup_timer(&host->cmd11_timer,
dw_mci_cmd11_timer, (unsigned long)host);
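[Editor's note on the hunks above] The IS_ERR_VALUE() removals in dw_mmc.c rely on the slot-gpio helpers returning a plain negative errno when no card-detect or write-protect GPIO is described, and 0/1 otherwise, so a simple sign check is enough. Small sketch of that convention:

#include <linux/mmc/host.h>
#include <linux/mmc/slot-gpio.h>

static int example_card_present(struct mmc_host *mmc)
{
	int gpio_cd = mmc_gpio_get_cd(mmc);

	if (gpio_cd < 0)
		return -ENOSYS;	/* no CD line wired up; fall back to polling/registers */

	return gpio_cd;		/* 0 = no card, 1 = card present */
}
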
diff --git a/drivers/mmc/host/dw_mmc.h b/drivers/mmc/host/dw_mmc.h
index 68d5da2dfd19..1e8d8380f9cf 100644
--- a/drivers/mmc/host/dw_mmc.h
+++ b/drivers/mmc/host/dw_mmc.h
@@ -277,7 +277,6 @@ struct dw_mci_slot {
* dw_mci driver data - dw-mshc implementation specific driver data.
* @caps: mmc subsystem specified capabilities of the controller(s).
* @init: early implementation specific initialization.
- * @setup_clock: implementation specific clock configuration.
* @set_ios: handle bus specific extensions.
* @parse_dt: parse implementation specific device tree properties.
* @execute_tuning: implementation specific tuning procedure.
@@ -289,7 +288,6 @@ struct dw_mci_slot {
struct dw_mci_drv_data {
unsigned long *caps;
int (*init)(struct dw_mci *host);
- int (*setup_clock)(struct dw_mci *host);
void (*set_ios)(struct dw_mci *host, struct mmc_ios *ios);
int (*parse_dt)(struct dw_mci *host);
int (*execute_tuning)(struct dw_mci_slot *slot, u32 opcode);
diff --git a/drivers/mmc/host/mmci.c b/drivers/mmc/host/mmci.c
index 2e6c96845c9a..df990bb8c873 100644
--- a/drivers/mmc/host/mmci.c
+++ b/drivers/mmc/host/mmci.c
@@ -226,16 +226,11 @@ static int mmci_card_busy(struct mmc_host *mmc)
unsigned long flags;
int busy = 0;
- pm_runtime_get_sync(mmc_dev(mmc));
-
spin_lock_irqsave(&host->lock, flags);
if (readl(host->base + MMCISTATUS) & MCI_ST_CARDBUSY)
busy = 1;
spin_unlock_irqrestore(&host->lock, flags);
- pm_runtime_mark_last_busy(mmc_dev(mmc));
- pm_runtime_put_autosuspend(mmc_dev(mmc));
-
return busy;
}
@@ -381,9 +376,6 @@ mmci_request_end(struct mmci_host *host, struct mmc_request *mrq)
host->cmd = NULL;
mmc_request_done(host->mmc, mrq);
-
- pm_runtime_mark_last_busy(mmc_dev(host->mmc));
- pm_runtime_put_autosuspend(mmc_dev(host->mmc));
}
static void mmci_set_mask1(struct mmci_host *host, unsigned int mask)
@@ -1290,8 +1282,6 @@ static void mmci_request(struct mmc_host *mmc, struct mmc_request *mrq)
return;
}
- pm_runtime_get_sync(mmc_dev(mmc));
-
spin_lock_irqsave(&host->lock, flags);
host->mrq = mrq;
@@ -1318,8 +1308,6 @@ static void mmci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
unsigned long flags;
int ret;
- pm_runtime_get_sync(mmc_dev(mmc));
-
if (host->plat->ios_handler &&
host->plat->ios_handler(mmc_dev(mmc), ios))
dev_err(mmc_dev(mmc), "platform ios_handler failed\n");
@@ -1414,9 +1402,6 @@ static void mmci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
mmci_reg_delay(host);
spin_unlock_irqrestore(&host->lock, flags);
-
- pm_runtime_mark_last_busy(mmc_dev(mmc));
- pm_runtime_put_autosuspend(mmc_dev(mmc));
}
static int mmci_get_cd(struct mmc_host *mmc)
@@ -1440,8 +1425,6 @@ static int mmci_sig_volt_switch(struct mmc_host *mmc, struct mmc_ios *ios)
if (!IS_ERR(mmc->supply.vqmmc)) {
- pm_runtime_get_sync(mmc_dev(mmc));
-
switch (ios->signal_voltage) {
case MMC_SIGNAL_VOLTAGE_330:
ret = regulator_set_voltage(mmc->supply.vqmmc,
@@ -1459,9 +1442,6 @@ static int mmci_sig_volt_switch(struct mmc_host *mmc, struct mmc_ios *ios)
if (ret)
dev_warn(mmc_dev(mmc), "Voltage switch failed\n");
-
- pm_runtime_mark_last_busy(mmc_dev(mmc));
- pm_runtime_put_autosuspend(mmc_dev(mmc));
}
return ret;
diff --git a/drivers/mmc/host/mtk-sd.c b/drivers/mmc/host/mtk-sd.c
index b17f30da97da..5642f71f8bf0 100644
--- a/drivers/mmc/host/mtk-sd.c
+++ b/drivers/mmc/host/mtk-sd.c
@@ -736,9 +736,6 @@ static void msdc_request_done(struct msdc_host *host, struct mmc_request *mrq)
if (mrq->data)
msdc_unprepare_data(host, mrq);
mmc_request_done(host->mmc, mrq);
-
- pm_runtime_mark_last_busy(host->dev);
- pm_runtime_put_autosuspend(host->dev);
}
/* returns true if command is fully handled; returns false otherwise */
@@ -886,8 +883,6 @@ static void msdc_ops_request(struct mmc_host *mmc, struct mmc_request *mrq)
WARN_ON(host->mrq);
host->mrq = mrq;
- pm_runtime_get_sync(host->dev);
-
if (mrq->data)
msdc_prepare_data(host, mrq);
@@ -1201,8 +1196,6 @@ static void msdc_ops_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
struct msdc_host *host = mmc_priv(mmc);
int ret;
- pm_runtime_get_sync(host->dev);
-
msdc_set_buswidth(host, ios->bus_width);
/* Suspend/Resume will do power off/on */
@@ -1214,7 +1207,7 @@ static void msdc_ops_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
ios->vdd);
if (ret) {
dev_err(host->dev, "Failed to set vmmc power!\n");
- goto end;
+ return;
}
}
break;
@@ -1242,10 +1235,6 @@ static void msdc_ops_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
if (host->mclk != ios->clock || host->timing != ios->timing)
msdc_set_mclk(host, ios->timing, ios->clock);
-
-end:
- pm_runtime_mark_last_busy(host->dev);
- pm_runtime_put_autosuspend(host->dev);
}
static u32 test_delay_bit(u32 delay, u32 bit)
@@ -1408,19 +1397,15 @@ static int msdc_execute_tuning(struct mmc_host *mmc, u32 opcode)
struct msdc_host *host = mmc_priv(mmc);
int ret;
- pm_runtime_get_sync(host->dev);
ret = msdc_tune_response(mmc, opcode);
if (ret == -EIO) {
dev_err(host->dev, "Tune response fail!\n");
- goto out;
+ return ret;
}
ret = msdc_tune_data(mmc, opcode);
if (ret == -EIO)
dev_err(host->dev, "Tune data fail!\n");
-out:
- pm_runtime_mark_last_busy(host->dev);
- pm_runtime_put_autosuspend(host->dev);
return ret;
}
diff --git a/drivers/mmc/host/omap.c b/drivers/mmc/host/omap.c
index b9958a123594..f23d65eb070d 100644
--- a/drivers/mmc/host/omap.c
+++ b/drivers/mmc/host/omap.c
@@ -23,7 +23,6 @@
#include <linux/spinlock.h>
#include <linux/timer.h>
#include <linux/of.h>
-#include <linux/omap-dma.h>
#include <linux/mmc/host.h>
#include <linux/mmc/card.h>
#include <linux/mmc/mmc.h>
@@ -1321,8 +1320,6 @@ static int mmc_omap_probe(struct platform_device *pdev)
struct omap_mmc_platform_data *pdata = pdev->dev.platform_data;
struct mmc_omap_host *host = NULL;
struct resource *res;
- dma_cap_mask_t mask;
- unsigned sig = 0;
int i, ret = 0;
int irq;
@@ -1382,29 +1379,34 @@ static int mmc_omap_probe(struct platform_device *pdev)
goto err_free_iclk;
}
- dma_cap_zero(mask);
- dma_cap_set(DMA_SLAVE, mask);
-
host->dma_tx_burst = -1;
host->dma_rx_burst = -1;
- res = platform_get_resource_byname(pdev, IORESOURCE_DMA, "tx");
- if (res)
- sig = res->start;
- host->dma_tx = dma_request_slave_channel_compat(mask,
- omap_dma_filter_fn, &sig, &pdev->dev, "tx");
- if (!host->dma_tx)
- dev_warn(host->dev, "unable to obtain TX DMA engine channel %u\n",
- sig);
-
- res = platform_get_resource_byname(pdev, IORESOURCE_DMA, "rx");
- if (res)
- sig = res->start;
- host->dma_rx = dma_request_slave_channel_compat(mask,
- omap_dma_filter_fn, &sig, &pdev->dev, "rx");
- if (!host->dma_rx)
- dev_warn(host->dev, "unable to obtain RX DMA engine channel %u\n",
- sig);
+ host->dma_tx = dma_request_chan(&pdev->dev, "tx");
+ if (IS_ERR(host->dma_tx)) {
+ ret = PTR_ERR(host->dma_tx);
+ if (ret == -EPROBE_DEFER) {
+ clk_put(host->fclk);
+ goto err_free_iclk;
+ }
+
+ host->dma_tx = NULL;
+ dev_warn(host->dev, "TX DMA channel request failed\n");
+ }
+
+ host->dma_rx = dma_request_chan(&pdev->dev, "rx");
+ if (IS_ERR(host->dma_rx)) {
+ ret = PTR_ERR(host->dma_rx);
+ if (ret == -EPROBE_DEFER) {
+ if (host->dma_tx)
+ dma_release_channel(host->dma_tx);
+ clk_put(host->fclk);
+ goto err_free_iclk;
+ }
+
+ host->dma_rx = NULL;
+ dev_warn(host->dev, "RX DMA channel request failed\n");
+ }
ret = request_irq(host->irq, mmc_omap_irq, 0, DRIVER_NAME, host);
if (ret)
diff --git a/drivers/mmc/host/omap_hsmmc.c b/drivers/mmc/host/omap_hsmmc.c
index f9ac3bb5d617..24ebc9a8de89 100644
--- a/drivers/mmc/host/omap_hsmmc.c
+++ b/drivers/mmc/host/omap_hsmmc.c
@@ -32,7 +32,6 @@
#include <linux/of_irq.h>
#include <linux/of_gpio.h>
#include <linux/of_device.h>
-#include <linux/omap-dmaengine.h>
#include <linux/mmc/host.h>
#include <linux/mmc/core.h>
#include <linux/mmc/mmc.h>
@@ -351,15 +350,14 @@ static int omap_hsmmc_set_pbias(struct omap_hsmmc_host *host, bool power_on,
return 0;
}
-static int omap_hsmmc_set_power(struct device *dev, int power_on, int vdd)
+static int omap_hsmmc_set_power(struct omap_hsmmc_host *host, int power_on,
+ int vdd)
{
- struct omap_hsmmc_host *host =
- platform_get_drvdata(to_platform_device(dev));
struct mmc_host *mmc = host->mmc;
int ret = 0;
if (mmc_pdata(host)->set_power)
- return mmc_pdata(host)->set_power(dev, power_on, vdd);
+ return mmc_pdata(host)->set_power(host->dev, power_on, vdd);
/*
* If we don't see a Vcc regulator, assume it's a fixed
@@ -369,7 +367,7 @@ static int omap_hsmmc_set_power(struct device *dev, int power_on, int vdd)
return 0;
if (mmc_pdata(host)->before_set_reg)
- mmc_pdata(host)->before_set_reg(dev, power_on, vdd);
+ mmc_pdata(host)->before_set_reg(host->dev, power_on, vdd);
ret = omap_hsmmc_set_pbias(host, false, 0);
if (ret)
@@ -403,7 +401,7 @@ static int omap_hsmmc_set_power(struct device *dev, int power_on, int vdd)
}
if (mmc_pdata(host)->after_set_reg)
- mmc_pdata(host)->after_set_reg(dev, power_on, vdd);
+ mmc_pdata(host)->after_set_reg(host->dev, power_on, vdd);
return 0;
@@ -968,8 +966,6 @@ static void omap_hsmmc_request_done(struct omap_hsmmc_host *host, struct mmc_req
return;
host->mrq = NULL;
mmc_request_done(host->mmc, mrq);
- pm_runtime_mark_last_busy(host->dev);
- pm_runtime_put_autosuspend(host->dev);
}
/*
@@ -1250,17 +1246,15 @@ static int omap_hsmmc_switch_opcond(struct omap_hsmmc_host *host, int vdd)
int ret;
/* Disable the clocks */
- pm_runtime_put_sync(host->dev);
if (host->dbclk)
clk_disable_unprepare(host->dbclk);
/* Turn the power off */
- ret = omap_hsmmc_set_power(host->dev, 0, 0);
+ ret = omap_hsmmc_set_power(host, 0, 0);
/* Turn the power ON with given VDD 1.8 or 3.0v */
if (!ret)
- ret = omap_hsmmc_set_power(host->dev, 1, vdd);
- pm_runtime_get_sync(host->dev);
+ ret = omap_hsmmc_set_power(host, 1, vdd);
if (host->dbclk)
clk_prepare_enable(host->dbclk);
@@ -1368,8 +1362,6 @@ static void omap_hsmmc_dma_callback(void *param)
host->mrq = NULL;
mmc_request_done(host->mmc, mrq);
- pm_runtime_mark_last_busy(host->dev);
- pm_runtime_put_autosuspend(host->dev);
}
}
@@ -1602,7 +1594,6 @@ static void omap_hsmmc_request(struct mmc_host *mmc, struct mmc_request *req)
BUG_ON(host->req_in_progress);
BUG_ON(host->dma_ch != -1);
- pm_runtime_get_sync(host->dev);
if (host->protect_card) {
if (host->reqs_blocked < 3) {
/*
@@ -1619,8 +1610,6 @@ static void omap_hsmmc_request(struct mmc_host *mmc, struct mmc_request *req)
req->data->error = -EBADF;
req->cmd->retries = 0;
mmc_request_done(mmc, req);
- pm_runtime_mark_last_busy(host->dev);
- pm_runtime_put_autosuspend(host->dev);
return;
} else if (host->reqs_blocked)
host->reqs_blocked = 0;
@@ -1634,8 +1623,6 @@ static void omap_hsmmc_request(struct mmc_host *mmc, struct mmc_request *req)
req->data->error = err;
host->mrq = NULL;
mmc_request_done(mmc, req);
- pm_runtime_mark_last_busy(host->dev);
- pm_runtime_put_autosuspend(host->dev);
return;
}
if (req->sbc && !(host->flags & AUTO_CMD23)) {
@@ -1653,15 +1640,13 @@ static void omap_hsmmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
struct omap_hsmmc_host *host = mmc_priv(mmc);
int do_send_init_stream = 0;
- pm_runtime_get_sync(host->dev);
-
if (ios->power_mode != host->power_mode) {
switch (ios->power_mode) {
case MMC_POWER_OFF:
- omap_hsmmc_set_power(host->dev, 0, 0);
+ omap_hsmmc_set_power(host, 0, 0);
break;
case MMC_POWER_UP:
- omap_hsmmc_set_power(host->dev, 1, ios->vdd);
+ omap_hsmmc_set_power(host, 1, ios->vdd);
break;
case MMC_POWER_ON:
do_send_init_stream = 1;
@@ -1698,8 +1683,6 @@ static void omap_hsmmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
send_init_stream(host);
omap_hsmmc_set_bus_mode(host);
-
- pm_runtime_put_autosuspend(host->dev);
}
static int omap_hsmmc_get_cd(struct mmc_host *mmc)
@@ -1962,13 +1945,17 @@ MODULE_DEVICE_TABLE(of, omap_mmc_of_match);
static struct omap_hsmmc_platform_data *of_get_hsmmc_pdata(struct device *dev)
{
- struct omap_hsmmc_platform_data *pdata;
+ struct omap_hsmmc_platform_data *pdata, *legacy;
struct device_node *np = dev->of_node;
pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
if (!pdata)
return ERR_PTR(-ENOMEM); /* out of memory */
+ legacy = dev_get_platdata(dev);
+ if (legacy && legacy->name)
+ pdata->name = legacy->name;
+
if (of_find_property(np, "ti,dual-volt", NULL))
pdata->controller_flags |= OMAP_HSMMC_SUPPORTS_DUAL_VOLT;
@@ -2005,8 +1992,6 @@ static int omap_hsmmc_probe(struct platform_device *pdev)
struct resource *res;
int ret, irq;
const struct of_device_id *match;
- dma_cap_mask_t mask;
- unsigned tx_req, rx_req;
const struct omap_mmc_of_data *data;
void __iomem *base;
@@ -2136,44 +2121,17 @@ static int omap_hsmmc_probe(struct platform_device *pdev)
omap_hsmmc_conf_bus_power(host);
- if (!pdev->dev.of_node) {
- res = platform_get_resource_byname(pdev, IORESOURCE_DMA, "tx");
- if (!res) {
- dev_err(mmc_dev(host->mmc), "cannot get DMA TX channel\n");
- ret = -ENXIO;
- goto err_irq;
- }
- tx_req = res->start;
-
- res = platform_get_resource_byname(pdev, IORESOURCE_DMA, "rx");
- if (!res) {
- dev_err(mmc_dev(host->mmc), "cannot get DMA RX channel\n");
- ret = -ENXIO;
- goto err_irq;
- }
- rx_req = res->start;
- }
-
- dma_cap_zero(mask);
- dma_cap_set(DMA_SLAVE, mask);
-
- host->rx_chan =
- dma_request_slave_channel_compat(mask, omap_dma_filter_fn,
- &rx_req, &pdev->dev, "rx");
-
- if (!host->rx_chan) {
- dev_err(mmc_dev(host->mmc), "unable to obtain RX DMA engine channel\n");
- ret = -ENXIO;
+ host->rx_chan = dma_request_chan(&pdev->dev, "rx");
+ if (IS_ERR(host->rx_chan)) {
+ dev_err(mmc_dev(host->mmc), "RX DMA channel request failed\n");
+ ret = PTR_ERR(host->rx_chan);
goto err_irq;
}
- host->tx_chan =
- dma_request_slave_channel_compat(mask, omap_dma_filter_fn,
- &tx_req, &pdev->dev, "tx");
-
- if (!host->tx_chan) {
- dev_err(mmc_dev(host->mmc), "unable to obtain TX DMA engine channel\n");
- ret = -ENXIO;
+ host->tx_chan = dma_request_chan(&pdev->dev, "tx");
+ if (IS_ERR(host->tx_chan)) {
+ dev_err(mmc_dev(host->mmc), "TX DMA channel request failed\n");
+ ret = PTR_ERR(host->tx_chan);
goto err_irq;
}
@@ -2231,9 +2189,9 @@ err_slot_name:
mmc_remove_host(mmc);
err_irq:
device_init_wakeup(&pdev->dev, false);
- if (host->tx_chan)
+ if (!IS_ERR_OR_NULL(host->tx_chan))
dma_release_channel(host->tx_chan);
- if (host->rx_chan)
+ if (!IS_ERR_OR_NULL(host->rx_chan))
dma_release_channel(host->rx_chan);
pm_runtime_dont_use_autosuspend(host->dev);
pm_runtime_put_sync(host->dev);
diff --git a/drivers/mmc/host/sdhci-acpi.c b/drivers/mmc/host/sdhci-acpi.c
index bed6a494f52c..458ffb7637e5 100644
--- a/drivers/mmc/host/sdhci-acpi.c
+++ b/drivers/mmc/host/sdhci-acpi.c
@@ -200,8 +200,6 @@ static int bxt_get_cd(struct mmc_host *mmc)
if (!gpio_cd)
return 0;
- pm_runtime_get_sync(mmc->parent);
-
spin_lock_irqsave(&host->lock, flags);
if (host->flags & SDHCI_DEVICE_DEAD)
@@ -211,9 +209,6 @@ static int bxt_get_cd(struct mmc_host *mmc)
out:
spin_unlock_irqrestore(&host->lock, flags);
- pm_runtime_mark_last_busy(mmc->parent);
- pm_runtime_put_autosuspend(mmc->parent);
-
return ret;
}
@@ -267,8 +262,10 @@ static int sdhci_acpi_sd_probe_slot(struct platform_device *pdev,
/* Platform specific code during sd probe slot goes here */
- if (hid && !strcmp(hid, "80865ACA"))
+ if (hid && !strcmp(hid, "80865ACA")) {
host->mmc_host_ops.get_cd = bxt_get_cd;
+ host->mmc->caps |= MMC_CAP_AGGRESSIVE_PM;
+ }
return 0;
}
@@ -277,7 +274,7 @@ static const struct sdhci_acpi_slot sdhci_acpi_slot_int_emmc = {
.chip = &sdhci_acpi_chip_int,
.caps = MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE |
MMC_CAP_HW_RESET | MMC_CAP_1_8V_DDR |
- MMC_CAP_BUS_WIDTH_TEST | MMC_CAP_WAIT_WHILE_BUSY,
+ MMC_CAP_WAIT_WHILE_BUSY,
.caps2 = MMC_CAP2_HC_ERASE_SZ,
.flags = SDHCI_ACPI_RUNTIME_PM,
.quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC,
@@ -292,7 +289,7 @@ static const struct sdhci_acpi_slot sdhci_acpi_slot_int_sdio = {
SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC,
.quirks2 = SDHCI_QUIRK2_HOST_OFF_CARD_ON,
.caps = MMC_CAP_NONREMOVABLE | MMC_CAP_POWER_OFF_CARD |
- MMC_CAP_BUS_WIDTH_TEST | MMC_CAP_WAIT_WHILE_BUSY,
+ MMC_CAP_WAIT_WHILE_BUSY,
.flags = SDHCI_ACPI_RUNTIME_PM,
.pm_caps = MMC_PM_KEEP_POWER,
.probe_slot = sdhci_acpi_sdio_probe_slot,
@@ -304,7 +301,7 @@ static const struct sdhci_acpi_slot sdhci_acpi_slot_int_sd = {
.quirks = SDHCI_QUIRK_NO_ENDATTR_IN_NOPDESC,
.quirks2 = SDHCI_QUIRK2_CARD_ON_NEEDS_BUS_ON |
SDHCI_QUIRK2_STOP_WITH_TC,
- .caps = MMC_CAP_BUS_WIDTH_TEST | MMC_CAP_WAIT_WHILE_BUSY,
+ .caps = MMC_CAP_WAIT_WHILE_BUSY,
.probe_slot = sdhci_acpi_sd_probe_slot,
};
@@ -381,7 +378,7 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
{
struct device *dev = &pdev->dev;
acpi_handle handle = ACPI_HANDLE(dev);
- struct acpi_device *device;
+ struct acpi_device *device, *child;
struct sdhci_acpi_host *c;
struct sdhci_host *host;
struct resource *iomem;
@@ -393,6 +390,11 @@ static int sdhci_acpi_probe(struct platform_device *pdev)
if (acpi_bus_get_device(handle, &device))
return -ENODEV;
+ /* Power on the SDHCI controller and its children */
+ acpi_device_fix_up_power(device);
+ list_for_each_entry(child, &device->children, node)
+ acpi_device_fix_up_power(child);
+
if (acpi_bus_get_status(device) || !device->status.present)
return -ENODEV;
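[Editor's note on the hunk above] The sdhci-acpi probe now calls acpi_device_fix_up_power() on the controller and each ACPI child so they are forced into D0 before the driver touches the hardware, since firmware may have left them powered down. Condensed sketch of that step (return values ignored, as in the hunk):

#include <linux/acpi.h>

static void example_power_up_with_children(struct acpi_device *adev)
{
	struct acpi_device *child;

	acpi_device_fix_up_power(adev);
	list_for_each_entry(child, &adev->children, node)
		acpi_device_fix_up_power(child);
}
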
diff --git a/drivers/mmc/host/sdhci-esdhc-imx.c b/drivers/mmc/host/sdhci-esdhc-imx.c
index 2d300d87cda8..9d3ae1f4bd3c 100644
--- a/drivers/mmc/host/sdhci-esdhc-imx.c
+++ b/drivers/mmc/host/sdhci-esdhc-imx.c
@@ -1011,7 +1011,7 @@ sdhci_esdhc_imx_probe_dt(struct platform_device *pdev,
if (ret)
return ret;
- if (!IS_ERR_VALUE(mmc_gpio_get_cd(host->mmc)))
+ if (mmc_gpio_get_cd(host->mmc) >= 0)
host->quirks &= ~SDHCI_QUIRK_BROKEN_CARD_DETECTION;
return 0;
diff --git a/drivers/mmc/host/sdhci-of-arasan.c b/drivers/mmc/host/sdhci-of-arasan.c
index 2e482b13d25e..b6f4c1d41636 100644
--- a/drivers/mmc/host/sdhci-of-arasan.c
+++ b/drivers/mmc/host/sdhci-of-arasan.c
@@ -55,8 +55,32 @@ static unsigned int sdhci_arasan_get_timeout_clock(struct sdhci_host *host)
return freq;
}
+static void sdhci_arasan_set_clock(struct sdhci_host *host, unsigned int clock)
+{
+ struct sdhci_pltfm_host *pltfm_host = sdhci_priv(host);
+ struct sdhci_arasan_data *sdhci_arasan = sdhci_pltfm_priv(pltfm_host);
+ bool ctrl_phy = false;
+
+ if (clock > MMC_HIGH_52_MAX_DTR && (!IS_ERR(sdhci_arasan->phy)))
+ ctrl_phy = true;
+
+ if (ctrl_phy) {
+ spin_unlock_irq(&host->lock);
+ phy_power_off(sdhci_arasan->phy);
+ spin_lock_irq(&host->lock);
+ }
+
+ sdhci_set_clock(host, clock);
+
+ if (ctrl_phy) {
+ spin_unlock_irq(&host->lock);
+ phy_power_on(sdhci_arasan->phy);
+ spin_lock_irq(&host->lock);
+ }
+}
+
static struct sdhci_ops sdhci_arasan_ops = {
- .set_clock = sdhci_set_clock,
+ .set_clock = sdhci_arasan_set_clock,
.get_max_clock = sdhci_pltfm_clk_get_max_clock,
.get_timeout_clock = sdhci_arasan_get_timeout_clock,
.set_bus_width = sdhci_set_bus_width,
diff --git a/drivers/mmc/host/sdhci-of-at91.c b/drivers/mmc/host/sdhci-of-at91.c
index 2703aa90d018..d4cef713d246 100644
--- a/drivers/mmc/host/sdhci-of-at91.c
+++ b/drivers/mmc/host/sdhci-of-at91.c
@@ -15,8 +15,10 @@
*/
#include <linux/clk.h>
+#include <linux/delay.h>
#include <linux/err.h>
#include <linux/io.h>
+#include <linux/kernel.h>
#include <linux/mmc/host.h>
#include <linux/mmc/slot-gpio.h>
#include <linux/module.h>
@@ -31,14 +33,60 @@
#define SDMMC_CACR_CAPWREN BIT(0)
#define SDMMC_CACR_KEY (0x46 << 8)
+#define SDHCI_AT91_PRESET_COMMON_CONF 0x400 /* drv type B, programmable clock mode */
+
struct sdhci_at91_priv {
struct clk *hclock;
struct clk *gck;
struct clk *mainck;
};
+static void sdhci_at91_set_clock(struct sdhci_host *host, unsigned int clock)
+{
+ u16 clk;
+ unsigned long timeout;
+
+ host->mmc->actual_clock = 0;
+
+ /*
+ * There is no requirement to disable the internal clock before
+ * changing the SD clock configuration. Moreover, disabling the
+ * internal clock, changing the configuration and re-enabling the
+ * internal clock causes some bugs. It can prevent the internal
+ * clock stable flag from being set and cause an unexpected switch
+ * to the base clock when using presets.
+ */
+ clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL);
+ clk &= SDHCI_CLOCK_INT_EN;
+ sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);
+
+ if (clock == 0)
+ return;
+
+ clk = sdhci_calc_clk(host, clock, &host->mmc->actual_clock);
+
+ clk |= SDHCI_CLOCK_INT_EN;
+ sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);
+
+ /* Wait max 20 ms */
+ timeout = 20;
+ while (!((clk = sdhci_readw(host, SDHCI_CLOCK_CONTROL))
+ & SDHCI_CLOCK_INT_STABLE)) {
+ if (timeout == 0) {
+ pr_err("%s: Internal clock never stabilised.\n",
+ mmc_hostname(host->mmc));
+ return;
+ }
+ timeout--;
+ mdelay(1);
+ }
+
+ clk |= SDHCI_CLOCK_CARD_EN;
+ sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);
+}
+
static const struct sdhci_ops sdhci_at91_sama5d2_ops = {
- .set_clock = sdhci_set_clock,
+ .set_clock = sdhci_at91_set_clock,
.set_bus_width = sdhci_set_bus_width,
.reset = sdhci_reset,
.set_uhs_signaling = sdhci_set_uhs_signaling,
@@ -46,7 +94,6 @@ static const struct sdhci_ops sdhci_at91_sama5d2_ops = {
static const struct sdhci_pltfm_data soc_data_sama5d2 = {
.ops = &sdhci_at91_sama5d2_ops,
- .quirks2 = SDHCI_QUIRK2_NEED_DELAY_AFTER_INT_CLK_RST,
};
static const struct of_device_id sdhci_at91_dt_match[] = {
@@ -119,6 +166,7 @@ static int sdhci_at91_probe(struct platform_device *pdev)
unsigned int clk_base, clk_mul;
unsigned int gck_rate, real_gck_rate;
int ret;
+ unsigned int preset_div;
match = of_match_device(sdhci_at91_dt_match, &pdev->dev);
if (!match)
@@ -186,6 +234,28 @@ static int sdhci_at91_probe(struct platform_device *pdev)
clk_mul, real_gck_rate);
}
+ /*
+ * We have to set preset values because it depends on the clk_mul
+ * value. Moreover, SDR104 is supported in a degraded mode since the
+ * maximum sd clock value is 120 MHz instead of 208 MHz. For that
+ * reason, we need to use presets to support SDR104.
+ */
+ preset_div = DIV_ROUND_UP(real_gck_rate, 24000000) - 1;
+ writew(SDHCI_AT91_PRESET_COMMON_CONF | preset_div,
+ host->ioaddr + SDHCI_PRESET_FOR_SDR12);
+ preset_div = DIV_ROUND_UP(real_gck_rate, 50000000) - 1;
+ writew(SDHCI_AT91_PRESET_COMMON_CONF | preset_div,
+ host->ioaddr + SDHCI_PRESET_FOR_SDR25);
+ preset_div = DIV_ROUND_UP(real_gck_rate, 100000000) - 1;
+ writew(SDHCI_AT91_PRESET_COMMON_CONF | preset_div,
+ host->ioaddr + SDHCI_PRESET_FOR_SDR50);
+ preset_div = DIV_ROUND_UP(real_gck_rate, 120000000) - 1;
+ writew(SDHCI_AT91_PRESET_COMMON_CONF | preset_div,
+ host->ioaddr + SDHCI_PRESET_FOR_SDR104);
+ preset_div = DIV_ROUND_UP(real_gck_rate, 50000000) - 1;
+ writew(SDHCI_AT91_PRESET_COMMON_CONF | preset_div,
+ host->ioaddr + SDHCI_PRESET_FOR_DDR50);
+
clk_prepare_enable(priv->mainck);
clk_prepare_enable(priv->gck);
@@ -219,7 +289,7 @@ static int sdhci_at91_probe(struct platform_device *pdev)
* to enable polling via device tree with broken-cd property.
*/
if (!(host->mmc->caps & MMC_CAP_NONREMOVABLE) &&
- IS_ERR_VALUE(mmc_gpio_get_cd(host->mmc))) {
+ mmc_gpio_get_cd(host->mmc) < 0) {
host->mmc->caps |= MMC_CAP_NEEDS_POLL;
host->quirks &= ~SDHCI_QUIRK_BROKEN_CARD_DETECTION;
}
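
The preset writes in the probe hunk above pick, for each timing mode, the smallest divider that keeps the generated clock at or below the mode's target rate. Here is a minimal standalone model of that arithmetic; the 240 MHz input is an assumed example (the real value comes from clk_round_rate()), and only the divider field is computed, not the SDHCI_AT91_PRESET_COMMON_CONF bits that get ORed in.

#include <stdio.h>

#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

int main(void)
{
	/* Example generated-clock rate; the driver reads the real one at probe. */
	unsigned long real_gck_rate = 240000000UL;
	const struct { const char *mode; unsigned long target; } tab[] = {
		{ "SDR12",  24000000UL  },
		{ "SDR25",  50000000UL  },
		{ "SDR50",  100000000UL },
		{ "SDR104", 120000000UL },
		{ "DDR50",  50000000UL  },
	};
	unsigned int i;

	for (i = 0; i < sizeof(tab) / sizeof(tab[0]); i++) {
		unsigned long div = DIV_ROUND_UP(real_gck_rate, tab[i].target) - 1;

		/* div is chosen so real_gck_rate / (div + 1) never exceeds the target */
		printf("%-6s target %3lu MHz -> preset divider %2lu (%lu Hz)\n",
		       tab[i].mode, tab[i].target / 1000000, div,
		       real_gck_rate / (div + 1));
	}
	return 0;
}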
diff --git a/drivers/mmc/host/sdhci-pci-core.c b/drivers/mmc/host/sdhci-pci-core.c
index 79e19017343e..a4dbf7421edc 100644
--- a/drivers/mmc/host/sdhci-pci-core.c
+++ b/drivers/mmc/host/sdhci-pci-core.c
@@ -340,8 +340,6 @@ static int bxt_get_cd(struct mmc_host *mmc)
if (!gpio_cd)
return 0;
- pm_runtime_get_sync(mmc->parent);
-
spin_lock_irqsave(&host->lock, flags);
if (host->flags & SDHCI_DEVICE_DEAD)
@@ -351,9 +349,6 @@ static int bxt_get_cd(struct mmc_host *mmc)
out:
spin_unlock_irqrestore(&host->lock, flags);
- pm_runtime_mark_last_busy(mmc->parent);
- pm_runtime_put_autosuspend(mmc->parent);
-
return ret;
}
@@ -361,7 +356,6 @@ static int byt_emmc_probe_slot(struct sdhci_pci_slot *slot)
{
slot->host->mmc->caps |= MMC_CAP_8_BIT_DATA | MMC_CAP_NONREMOVABLE |
MMC_CAP_HW_RESET | MMC_CAP_1_8V_DDR |
- MMC_CAP_BUS_WIDTH_TEST |
MMC_CAP_WAIT_WHILE_BUSY;
slot->host->mmc->caps2 |= MMC_CAP2_HC_ERASE_SZ;
slot->hw_reset = sdhci_pci_int_hw_reset;
@@ -377,22 +371,22 @@ static int byt_emmc_probe_slot(struct sdhci_pci_slot *slot)
static int byt_sdio_probe_slot(struct sdhci_pci_slot *slot)
{
slot->host->mmc->caps |= MMC_CAP_POWER_OFF_CARD | MMC_CAP_NONREMOVABLE |
- MMC_CAP_BUS_WIDTH_TEST |
MMC_CAP_WAIT_WHILE_BUSY;
return 0;
}
static int byt_sd_probe_slot(struct sdhci_pci_slot *slot)
{
- slot->host->mmc->caps |= MMC_CAP_BUS_WIDTH_TEST |
- MMC_CAP_WAIT_WHILE_BUSY;
+ slot->host->mmc->caps |= MMC_CAP_WAIT_WHILE_BUSY;
slot->cd_con_id = NULL;
slot->cd_idx = 0;
slot->cd_override_level = true;
if (slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_BXT_SD ||
slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_BXTM_SD ||
- slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_APL_SD)
+ slot->chip->pdev->device == PCI_DEVICE_ID_INTEL_APL_SD) {
slot->host->mmc_host_ops.get_cd = bxt_get_cd;
+ slot->host->mmc->caps |= MMC_CAP_AGGRESSIVE_PM;
+ }
return 0;
}
diff --git a/drivers/mmc/host/sdhci-pic32.c b/drivers/mmc/host/sdhci-pic32.c
index 059df707a2fe..72c13b6f05f9 100644
--- a/drivers/mmc/host/sdhci-pic32.c
+++ b/drivers/mmc/host/sdhci-pic32.c
@@ -243,7 +243,6 @@ MODULE_DEVICE_TABLE(of, pic32_sdhci_id_table);
static struct platform_driver pic32_sdhci_driver = {
.driver = {
.name = "pic32-sdhci",
- .owner = THIS_MODULE,
.of_match_table = of_match_ptr(pic32_sdhci_id_table),
},
.probe = pic32_sdhci_probe,
diff --git a/drivers/mmc/host/sdhci-pltfm.c b/drivers/mmc/host/sdhci-pltfm.c
index 072bb27a65cf..64f287a03cd3 100644
--- a/drivers/mmc/host/sdhci-pltfm.c
+++ b/drivers/mmc/host/sdhci-pltfm.c
@@ -119,16 +119,22 @@ struct sdhci_host *sdhci_pltfm_init(struct platform_device *pdev,
{
struct sdhci_host *host;
struct resource *iomem;
- int ret;
+ void __iomem *ioaddr;
+ int irq, ret;
iomem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- if (!iomem) {
- ret = -ENOMEM;
+ ioaddr = devm_ioremap_resource(&pdev->dev, iomem);
+ if (IS_ERR(ioaddr)) {
+ ret = PTR_ERR(ioaddr);
goto err;
}
- if (resource_size(iomem) < 0x100)
- dev_err(&pdev->dev, "Invalid iomem size!\n");
+ irq = platform_get_irq(pdev, 0);
+ if (irq < 0) {
+ dev_err(&pdev->dev, "failed to get IRQ number\n");
+ ret = irq;
+ goto err;
+ }
host = sdhci_alloc_host(&pdev->dev,
sizeof(struct sdhci_pltfm_host) + priv_size);
@@ -138,6 +144,8 @@ struct sdhci_host *sdhci_pltfm_init(struct platform_device *pdev,
goto err;
}
+ host->ioaddr = ioaddr;
+ host->irq = irq;
host->hw_name = dev_name(&pdev->dev);
if (pdata && pdata->ops)
host->ops = pdata->ops;
@@ -148,22 +156,6 @@ struct sdhci_host *sdhci_pltfm_init(struct platform_device *pdev,
host->quirks2 = pdata->quirks2;
}
- host->irq = platform_get_irq(pdev, 0);
-
- if (!request_mem_region(iomem->start, resource_size(iomem),
- mmc_hostname(host->mmc))) {
- dev_err(&pdev->dev, "cannot request region\n");
- ret = -EBUSY;
- goto err_request;
- }
-
- host->ioaddr = ioremap(iomem->start, resource_size(iomem));
- if (!host->ioaddr) {
- dev_err(&pdev->dev, "failed to remap registers\n");
- ret = -ENOMEM;
- goto err_remap;
- }
-
/*
* Some platforms need to probe the controller to be able to
* determine which caps should be used.
@@ -174,11 +166,6 @@ struct sdhci_host *sdhci_pltfm_init(struct platform_device *pdev,
platform_set_drvdata(pdev, host);
return host;
-
-err_remap:
- release_mem_region(iomem->start, resource_size(iomem));
-err_request:
- sdhci_free_host(host);
err:
dev_err(&pdev->dev, "%s failed %d\n", __func__, ret);
return ERR_PTR(ret);
@@ -188,10 +175,7 @@ EXPORT_SYMBOL_GPL(sdhci_pltfm_init);
void sdhci_pltfm_free(struct platform_device *pdev)
{
struct sdhci_host *host = platform_get_drvdata(pdev);
- struct resource *iomem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- iounmap(host->ioaddr);
- release_mem_region(iomem->start, resource_size(iomem));
sdhci_free_host(host);
}
EXPORT_SYMBOL_GPL(sdhci_pltfm_free);
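
Taken together, the hunks above leave sdhci_pltfm_init() with a much shorter setup and error path: the mapping is device-managed and the IRQ is validated before the host is allocated, so there is nothing left to unwind by hand. A rough sketch of the resulting flow (not compiled, details trimmed, error codes passed straight through) might look like:

static struct sdhci_host *example_pltfm_init(struct platform_device *pdev,
					     size_t priv_size)
{
	struct resource *iomem;
	void __iomem *ioaddr;
	struct sdhci_host *host;
	int irq;

	iomem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	ioaddr = devm_ioremap_resource(&pdev->dev, iomem);	/* unmapped automatically */
	if (IS_ERR(ioaddr))
		return ERR_CAST(ioaddr);

	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return ERR_PTR(irq);

	host = sdhci_alloc_host(&pdev->dev,
				sizeof(struct sdhci_pltfm_host) + priv_size);
	if (IS_ERR(host))
		return host;

	host->ioaddr = ioaddr;
	host->irq = irq;
	host->hw_name = dev_name(&pdev->dev);
	return host;
}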
diff --git a/drivers/mmc/host/sdhci.c b/drivers/mmc/host/sdhci.c
index 6bd3d1794966..0e3d7c056cb1 100644
--- a/drivers/mmc/host/sdhci.c
+++ b/drivers/mmc/host/sdhci.c
@@ -38,11 +38,6 @@
#define DBG(f, x...) \
pr_debug(DRIVER_NAME " [%s()]: " f, __func__,## x)
-#if defined(CONFIG_LEDS_CLASS) || (defined(CONFIG_LEDS_CLASS_MODULE) && \
- defined(CONFIG_MMC_SDHCI_MODULE))
-#define SDHCI_USE_LEDS_CLASS
-#endif
-
#define MAX_TUNING_LOOP 40
static unsigned int debug_quirks = 0;
@@ -53,29 +48,7 @@ static void sdhci_finish_data(struct sdhci_host *);
static void sdhci_finish_command(struct sdhci_host *);
static int sdhci_execute_tuning(struct mmc_host *mmc, u32 opcode);
static void sdhci_enable_preset_value(struct sdhci_host *host, bool enable);
-static int sdhci_do_get_cd(struct sdhci_host *host);
-
-#ifdef CONFIG_PM
-static int sdhci_runtime_pm_get(struct sdhci_host *host);
-static int sdhci_runtime_pm_put(struct sdhci_host *host);
-static void sdhci_runtime_pm_bus_on(struct sdhci_host *host);
-static void sdhci_runtime_pm_bus_off(struct sdhci_host *host);
-#else
-static inline int sdhci_runtime_pm_get(struct sdhci_host *host)
-{
- return 0;
-}
-static inline int sdhci_runtime_pm_put(struct sdhci_host *host)
-{
- return 0;
-}
-static void sdhci_runtime_pm_bus_on(struct sdhci_host *host)
-{
-}
-static void sdhci_runtime_pm_bus_off(struct sdhci_host *host)
-{
-}
-#endif
+static int sdhci_get_cd(struct mmc_host *mmc);
static void sdhci_dumpregs(struct sdhci_host *host)
{
@@ -171,6 +144,22 @@ static void sdhci_disable_card_detection(struct sdhci_host *host)
sdhci_set_card_detection(host, false);
}
+static void sdhci_runtime_pm_bus_on(struct sdhci_host *host)
+{
+ if (host->bus_on)
+ return;
+ host->bus_on = true;
+ pm_runtime_get_noresume(host->mmc->parent);
+}
+
+static void sdhci_runtime_pm_bus_off(struct sdhci_host *host)
+{
+ if (!host->bus_on)
+ return;
+ host->bus_on = false;
+ pm_runtime_put_noidle(host->mmc->parent);
+}
+
void sdhci_reset(struct sdhci_host *host, u8 mask)
{
unsigned long timeout;
@@ -204,7 +193,7 @@ EXPORT_SYMBOL_GPL(sdhci_reset);
static void sdhci_do_reset(struct sdhci_host *host, u8 mask)
{
if (host->quirks & SDHCI_QUIRK_NO_CARD_NO_RESET) {
- if (!sdhci_do_get_cd(host))
+ if (!sdhci_get_cd(host->mmc))
return;
}
@@ -252,7 +241,7 @@ static void sdhci_reinit(struct sdhci_host *host)
sdhci_enable_card_detection(host);
}
-static void sdhci_activate_led(struct sdhci_host *host)
+static void __sdhci_led_activate(struct sdhci_host *host)
{
u8 ctrl;
@@ -261,7 +250,7 @@ static void sdhci_activate_led(struct sdhci_host *host)
sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL);
}
-static void sdhci_deactivate_led(struct sdhci_host *host)
+static void __sdhci_led_deactivate(struct sdhci_host *host)
{
u8 ctrl;
@@ -270,9 +259,9 @@ static void sdhci_deactivate_led(struct sdhci_host *host)
sdhci_writeb(host, ctrl, SDHCI_HOST_CONTROL);
}
-#ifdef SDHCI_USE_LEDS_CLASS
+#if IS_REACHABLE(CONFIG_LEDS_CLASS)
static void sdhci_led_control(struct led_classdev *led,
- enum led_brightness brightness)
+ enum led_brightness brightness)
{
struct sdhci_host *host = container_of(led, struct sdhci_host, led);
unsigned long flags;
@@ -283,12 +272,62 @@ static void sdhci_led_control(struct led_classdev *led,
goto out;
if (brightness == LED_OFF)
- sdhci_deactivate_led(host);
+ __sdhci_led_deactivate(host);
else
- sdhci_activate_led(host);
+ __sdhci_led_activate(host);
out:
spin_unlock_irqrestore(&host->lock, flags);
}
+
+static int sdhci_led_register(struct sdhci_host *host)
+{
+ struct mmc_host *mmc = host->mmc;
+
+ snprintf(host->led_name, sizeof(host->led_name),
+ "%s::", mmc_hostname(mmc));
+
+ host->led.name = host->led_name;
+ host->led.brightness = LED_OFF;
+ host->led.default_trigger = mmc_hostname(mmc);
+ host->led.brightness_set = sdhci_led_control;
+
+ return led_classdev_register(mmc_dev(mmc), &host->led);
+}
+
+static void sdhci_led_unregister(struct sdhci_host *host)
+{
+ led_classdev_unregister(&host->led);
+}
+
+static inline void sdhci_led_activate(struct sdhci_host *host)
+{
+}
+
+static inline void sdhci_led_deactivate(struct sdhci_host *host)
+{
+}
+
+#else
+
+static inline int sdhci_led_register(struct sdhci_host *host)
+{
+ return 0;
+}
+
+static inline void sdhci_led_unregister(struct sdhci_host *host)
+{
+}
+
+static inline void sdhci_led_activate(struct sdhci_host *host)
+{
+ __sdhci_led_activate(host);
+}
+
+static inline void sdhci_led_deactivate(struct sdhci_host *host)
+{
+ __sdhci_led_deactivate(host);
+}
+
#endif
/*****************************************************************************\
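
The LED rework above replaces the old SDHCI_USE_LEDS_CLASS #ifdefs scattered through the driver with one conditional block of helpers: when CONFIG_LEDS_CLASS is reachable from this code, register/unregister do the real work and activate/deactivate are no-ops (the LED core drives the pin through the mmc-host trigger); otherwise register/unregister are stubs and activate/deactivate poke the host control register directly. A tiny standalone illustration of that stub pattern, with USE_LED_CLASS standing in for IS_REACHABLE(CONFIG_LEDS_CLASS):

#include <stdio.h>

/* #define USE_LED_CLASS */	/* stand-in for IS_REACHABLE(CONFIG_LEDS_CLASS) */

#ifdef USE_LED_CLASS
static int led_register(void)    { puts("register a LED class device"); return 0; }
static void led_unregister(void) { puts("unregister the LED class device"); }
static void led_activate(void)   { /* no-op: LED core toggles it via the trigger */ }
static void led_deactivate(void) { /* no-op */ }
#else
static int led_register(void)    { return 0; /* nothing to register */ }
static void led_unregister(void) { }
static void led_activate(void)   { puts("drive the LED bit directly"); }
static void led_deactivate(void) { puts("clear the LED bit directly"); }
#endif

int main(void)
{
	led_register();
	led_activate();		/* request starts */
	led_deactivate();	/* request finished */
	led_unregister();
	return 0;
}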
@@ -1091,23 +1130,14 @@ static u16 sdhci_get_preset_value(struct sdhci_host *host)
return preset;
}
-void sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
+u16 sdhci_calc_clk(struct sdhci_host *host, unsigned int clock,
+ unsigned int *actual_clock)
{
int div = 0; /* Initialized for compiler warning */
int real_div = div, clk_mul = 1;
u16 clk = 0;
- unsigned long timeout;
bool switch_base_clk = false;
- host->mmc->actual_clock = 0;
-
- sdhci_writew(host, 0, SDHCI_CLOCK_CONTROL);
- if (host->quirks2 & SDHCI_QUIRK2_NEED_DELAY_AFTER_INT_CLK_RST)
- mdelay(1);
-
- if (clock == 0)
- return;
-
if (host->version >= SDHCI_SPEC_300) {
if (host->preset_enabled) {
u16 pre_val;
@@ -1184,10 +1214,29 @@ void sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
clock_set:
if (real_div)
- host->mmc->actual_clock = (host->max_clk * clk_mul) / real_div;
+ *actual_clock = (host->max_clk * clk_mul) / real_div;
clk |= (div & SDHCI_DIV_MASK) << SDHCI_DIVIDER_SHIFT;
clk |= ((div & SDHCI_DIV_HI_MASK) >> SDHCI_DIV_MASK_LEN)
<< SDHCI_DIVIDER_HI_SHIFT;
+
+ return clk;
+}
+EXPORT_SYMBOL_GPL(sdhci_calc_clk);
+
+void sdhci_set_clock(struct sdhci_host *host, unsigned int clock)
+{
+ u16 clk;
+ unsigned long timeout;
+
+ host->mmc->actual_clock = 0;
+
+ sdhci_writew(host, 0, SDHCI_CLOCK_CONTROL);
+
+ if (clock == 0)
+ return;
+
+ clk = sdhci_calc_clk(host, clock, &host->mmc->actual_clock);
+
clk |= SDHCI_CLOCK_INT_EN;
sdhci_writew(host, clk, SDHCI_CLOCK_CONTROL);
@@ -1319,8 +1368,6 @@ static void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
host = mmc_priv(mmc);
- sdhci_runtime_pm_get(host);
-
/* Firstly check card presence */
present = mmc->ops->get_cd(mmc);
@@ -1328,9 +1375,7 @@ static void sdhci_request(struct mmc_host *mmc, struct mmc_request *mrq)
WARN_ON(host->mrq != NULL);
-#ifndef SDHCI_USE_LEDS_CLASS
- sdhci_activate_led(host);
-#endif
+ sdhci_led_activate(host);
/*
* Ensure we don't send the STOP for non-SET_BLOCK_COUNTED
@@ -1405,11 +1450,11 @@ void sdhci_set_uhs_signaling(struct sdhci_host *host, unsigned timing)
}
EXPORT_SYMBOL_GPL(sdhci_set_uhs_signaling);
-static void sdhci_do_set_ios(struct sdhci_host *host, struct mmc_ios *ios)
+static void sdhci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
{
+ struct sdhci_host *host = mmc_priv(mmc);
unsigned long flags;
u8 ctrl;
- struct mmc_host *mmc = host->mmc;
spin_lock_irqsave(&host->lock, flags);
@@ -1563,18 +1608,10 @@ static void sdhci_do_set_ios(struct sdhci_host *host, struct mmc_ios *ios)
spin_unlock_irqrestore(&host->lock, flags);
}
-static void sdhci_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
+static int sdhci_get_cd(struct mmc_host *mmc)
{
struct sdhci_host *host = mmc_priv(mmc);
-
- sdhci_runtime_pm_get(host);
- sdhci_do_set_ios(host, ios);
- sdhci_runtime_pm_put(host);
-}
-
-static int sdhci_do_get_cd(struct sdhci_host *host)
-{
- int gpio_cd = mmc_gpio_get_cd(host->mmc);
+ int gpio_cd = mmc_gpio_get_cd(mmc);
if (host->flags & SDHCI_DEVICE_DEAD)
return 0;
@@ -1587,7 +1624,7 @@ static int sdhci_do_get_cd(struct sdhci_host *host)
* Try slot gpio detect, if defined it takes precedence
* over built-in controller functionality
*/
- if (!IS_ERR_VALUE(gpio_cd))
+ if (gpio_cd >= 0)
return !!gpio_cd;
/* If polling, assume that the card is always present. */
@@ -1598,17 +1635,6 @@ static int sdhci_do_get_cd(struct sdhci_host *host)
return !!(sdhci_readl(host, SDHCI_PRESENT_STATE) & SDHCI_CARD_PRESENT);
}
-static int sdhci_get_cd(struct mmc_host *mmc)
-{
- struct sdhci_host *host = mmc_priv(mmc);
- int ret;
-
- sdhci_runtime_pm_get(host);
- ret = sdhci_do_get_cd(host);
- sdhci_runtime_pm_put(host);
- return ret;
-}
-
static int sdhci_check_ro(struct sdhci_host *host)
{
unsigned long flags;
@@ -1633,8 +1659,9 @@ static int sdhci_check_ro(struct sdhci_host *host)
#define SAMPLE_COUNT 5
-static int sdhci_do_get_ro(struct sdhci_host *host)
+static int sdhci_get_ro(struct mmc_host *mmc)
{
+ struct sdhci_host *host = mmc_priv(mmc);
int i, ro_count;
if (!(host->quirks & SDHCI_QUIRK_UNSTABLE_RO_DETECT))
@@ -1659,17 +1686,6 @@ static void sdhci_hw_reset(struct mmc_host *mmc)
host->ops->hw_reset(host);
}
-static int sdhci_get_ro(struct mmc_host *mmc)
-{
- struct sdhci_host *host = mmc_priv(mmc);
- int ret;
-
- sdhci_runtime_pm_get(host);
- ret = sdhci_do_get_ro(host);
- sdhci_runtime_pm_put(host);
- return ret;
-}
-
static void sdhci_enable_sdio_irq_nolock(struct sdhci_host *host, int enable)
{
if (!(host->flags & SDHCI_DEVICE_DEAD)) {
@@ -1689,8 +1705,6 @@ static void sdhci_enable_sdio_irq(struct mmc_host *mmc, int enable)
struct sdhci_host *host = mmc_priv(mmc);
unsigned long flags;
- sdhci_runtime_pm_get(host);
-
spin_lock_irqsave(&host->lock, flags);
if (enable)
host->flags |= SDHCI_SDIO_IRQ_ENABLED;
@@ -1699,14 +1713,12 @@ static void sdhci_enable_sdio_irq(struct mmc_host *mmc, int enable)
sdhci_enable_sdio_irq_nolock(host, enable);
spin_unlock_irqrestore(&host->lock, flags);
-
- sdhci_runtime_pm_put(host);
}
-static int sdhci_do_start_signal_voltage_switch(struct sdhci_host *host,
- struct mmc_ios *ios)
+static int sdhci_start_signal_voltage_switch(struct mmc_host *mmc,
+ struct mmc_ios *ios)
{
- struct mmc_host *mmc = host->mmc;
+ struct sdhci_host *host = mmc_priv(mmc);
u16 ctrl;
int ret;
@@ -1794,29 +1806,13 @@ static int sdhci_do_start_signal_voltage_switch(struct sdhci_host *host,
}
}
-static int sdhci_start_signal_voltage_switch(struct mmc_host *mmc,
- struct mmc_ios *ios)
-{
- struct sdhci_host *host = mmc_priv(mmc);
- int err;
-
- if (host->version < SDHCI_SPEC_300)
- return 0;
- sdhci_runtime_pm_get(host);
- err = sdhci_do_start_signal_voltage_switch(host, ios);
- sdhci_runtime_pm_put(host);
- return err;
-}
-
static int sdhci_card_busy(struct mmc_host *mmc)
{
struct sdhci_host *host = mmc_priv(mmc);
u32 present_state;
- sdhci_runtime_pm_get(host);
/* Check whether DAT[3:0] is 0000 */
present_state = sdhci_readl(host, SDHCI_PRESENT_STATE);
- sdhci_runtime_pm_put(host);
return !(present_state & SDHCI_DATA_LVL_MASK);
}
@@ -1843,7 +1839,6 @@ static int sdhci_execute_tuning(struct mmc_host *mmc, u32 opcode)
unsigned int tuning_count = 0;
bool hs400_tuning;
- sdhci_runtime_pm_get(host);
spin_lock_irqsave(&host->lock, flags);
hs400_tuning = host->flags & SDHCI_HS400_TUNING;
@@ -1879,8 +1874,7 @@ static int sdhci_execute_tuning(struct mmc_host *mmc, u32 opcode)
break;
case MMC_TIMING_UHS_SDR50:
- if (host->flags & SDHCI_SDR50_NEEDS_TUNING ||
- host->flags & SDHCI_SDR104_NEEDS_TUNING)
+ if (host->flags & SDHCI_SDR50_NEEDS_TUNING)
break;
/* FALLTHROUGH */
@@ -1891,7 +1885,6 @@ static int sdhci_execute_tuning(struct mmc_host *mmc, u32 opcode)
if (host->ops->platform_execute_tuning) {
spin_unlock_irqrestore(&host->lock, flags);
err = host->ops->platform_execute_tuning(host, opcode);
- sdhci_runtime_pm_put(host);
return err;
}
@@ -2023,8 +2016,6 @@ out:
sdhci_writel(host, host->ier, SDHCI_SIGNAL_ENABLE);
out_unlock:
spin_unlock_irqrestore(&host->lock, flags);
- sdhci_runtime_pm_put(host);
-
return err;
}
@@ -2105,7 +2096,7 @@ static void sdhci_card_event(struct mmc_host *mmc)
if (host->ops->card_event)
host->ops->card_event(host);
- present = sdhci_do_get_cd(host);
+ present = sdhci_get_cd(host->mmc);
spin_lock_irqsave(&host->lock, flags);
@@ -2214,15 +2205,12 @@ static void sdhci_tasklet_finish(unsigned long param)
host->cmd = NULL;
host->data = NULL;
-#ifndef SDHCI_USE_LEDS_CLASS
- sdhci_deactivate_led(host);
-#endif
+ sdhci_led_deactivate(host);
mmiowb();
spin_unlock_irqrestore(&host->lock, flags);
mmc_request_done(host->mmc, mrq);
- sdhci_runtime_pm_put(host);
}
static void sdhci_timeout_timer(unsigned long data)
@@ -2679,7 +2667,7 @@ int sdhci_resume_host(struct sdhci_host *host)
sdhci_init(host, 0);
host->pwr = 0;
host->clock = 0;
- sdhci_do_set_ios(host, &host->mmc->ios);
+ sdhci_set_ios(host->mmc, &host->mmc->ios);
} else {
sdhci_init(host, (host->mmc->pm_flags & MMC_PM_KEEP_POWER));
mmiowb();
@@ -2703,33 +2691,6 @@ int sdhci_resume_host(struct sdhci_host *host)
EXPORT_SYMBOL_GPL(sdhci_resume_host);
-static int sdhci_runtime_pm_get(struct sdhci_host *host)
-{
- return pm_runtime_get_sync(host->mmc->parent);
-}
-
-static int sdhci_runtime_pm_put(struct sdhci_host *host)
-{
- pm_runtime_mark_last_busy(host->mmc->parent);
- return pm_runtime_put_autosuspend(host->mmc->parent);
-}
-
-static void sdhci_runtime_pm_bus_on(struct sdhci_host *host)
-{
- if (host->bus_on)
- return;
- host->bus_on = true;
- pm_runtime_get_noresume(host->mmc->parent);
-}
-
-static void sdhci_runtime_pm_bus_off(struct sdhci_host *host)
-{
- if (!host->bus_on)
- return;
- host->bus_on = false;
- pm_runtime_put_noidle(host->mmc->parent);
-}
-
int sdhci_runtime_suspend_host(struct sdhci_host *host)
{
unsigned long flags;
@@ -2768,8 +2729,8 @@ int sdhci_runtime_resume_host(struct sdhci_host *host)
/* Force clock and power re-program */
host->pwr = 0;
host->clock = 0;
- sdhci_do_start_signal_voltage_switch(host, &host->mmc->ios);
- sdhci_do_set_ios(host, &host->mmc->ios);
+ sdhci_start_signal_voltage_switch(host->mmc, &host->mmc->ios);
+ sdhci_set_ios(host->mmc, &host->mmc->ios);
if ((host_flags & SDHCI_PV_ENABLED) &&
!(host->quirks2 & SDHCI_QUIRK2_PRESET_VALUE_BROKEN)) {
@@ -3014,7 +2975,8 @@ int sdhci_add_host(struct sdhci_host *host)
if (!host->ops->get_max_clock) {
pr_err("%s: Hardware doesn't specify base clock frequency.\n",
mmc_hostname(mmc));
- return -ENODEV;
+ ret = -ENODEV;
+ goto undma;
}
host->max_clk = host->ops->get_max_clock(host);
}
@@ -3051,7 +3013,7 @@ int sdhci_add_host(struct sdhci_host *host)
} else
mmc->f_min = host->max_clk / SDHCI_MAX_DIV_SPEC_200;
- if (!mmc->f_max || (mmc->f_max && (mmc->f_max > max_clk)))
+ if (!mmc->f_max || mmc->f_max > max_clk)
mmc->f_max = max_clk;
if (!(host->quirks & SDHCI_QUIRK_DATA_TIMEOUT_USES_SDCLK)) {
@@ -3064,7 +3026,8 @@ int sdhci_add_host(struct sdhci_host *host)
} else {
pr_err("%s: Hardware doesn't specify timeout clock frequency.\n",
mmc_hostname(mmc));
- return -ENODEV;
+ ret = -ENODEV;
+ goto undma;
}
}
@@ -3114,12 +3077,13 @@ int sdhci_add_host(struct sdhci_host *host)
if ((host->quirks & SDHCI_QUIRK_BROKEN_CARD_DETECTION) &&
!(mmc->caps & MMC_CAP_NONREMOVABLE) &&
- IS_ERR_VALUE(mmc_gpio_get_cd(host->mmc)))
+ mmc_gpio_get_cd(host->mmc) < 0)
mmc->caps |= MMC_CAP_NEEDS_POLL;
/* If there are external regulators, get them */
- if (mmc_regulator_get_supply(mmc) == -EPROBE_DEFER)
- return -EPROBE_DEFER;
+ ret = mmc_regulator_get_supply(mmc);
+ if (ret == -EPROBE_DEFER)
+ goto undma;
/* If vqmmc regulator and no 1.8V signalling, then there's no UHS */
if (!IS_ERR(mmc->supply.vqmmc)) {
@@ -3174,10 +3138,6 @@ int sdhci_add_host(struct sdhci_host *host)
if (caps[1] & SDHCI_USE_SDR50_TUNING)
host->flags |= SDHCI_SDR50_NEEDS_TUNING;
- /* Does the host need tuning for SDR104 / HS200? */
- if (mmc->caps2 & MMC_CAP2_HS200)
- host->flags |= SDHCI_SDR104_NEEDS_TUNING;
-
/* Driver Type(s) (A, C, D) supported by the host */
if (caps[1] & SDHCI_DRIVER_TYPE_A)
mmc->caps |= MMC_CAP_DRIVER_TYPE_A;
@@ -3276,7 +3236,8 @@ int sdhci_add_host(struct sdhci_host *host)
if (mmc->ocr_avail == 0) {
pr_err("%s: Hardware doesn't report any support voltages.\n",
mmc_hostname(mmc));
- return -ENODEV;
+ ret = -ENODEV;
+ goto unreg;
}
spin_lock_init(&host->lock);
@@ -3360,25 +3321,18 @@ int sdhci_add_host(struct sdhci_host *host)
sdhci_dumpregs(host);
#endif
-#ifdef SDHCI_USE_LEDS_CLASS
- snprintf(host->led_name, sizeof(host->led_name),
- "%s::", mmc_hostname(mmc));
- host->led.name = host->led_name;
- host->led.brightness = LED_OFF;
- host->led.default_trigger = mmc_hostname(mmc);
- host->led.brightness_set = sdhci_led_control;
-
- ret = led_classdev_register(mmc_dev(mmc), &host->led);
+ ret = sdhci_led_register(host);
if (ret) {
pr_err("%s: Failed to register LED device: %d\n",
mmc_hostname(mmc), ret);
- goto reset;
+ goto unirq;
}
-#endif
mmiowb();
- mmc_add_host(mmc);
+ ret = mmc_add_host(mmc);
+ if (ret)
+ goto unled;
pr_info("%s: SDHCI controller on %s [%s] using %s\n",
mmc_hostname(mmc), host->hw_name, dev_name(mmc_dev(mmc)),
@@ -3390,15 +3344,25 @@ int sdhci_add_host(struct sdhci_host *host)
return 0;
-#ifdef SDHCI_USE_LEDS_CLASS
-reset:
+unled:
+ sdhci_led_unregister(host);
+unirq:
sdhci_do_reset(host, SDHCI_RESET_ALL);
sdhci_writel(host, 0, SDHCI_INT_ENABLE);
sdhci_writel(host, 0, SDHCI_SIGNAL_ENABLE);
free_irq(host->irq, host);
-#endif
untasklet:
tasklet_kill(&host->finish_tasklet);
+unreg:
+ if (!IS_ERR(mmc->supply.vqmmc))
+ regulator_disable(mmc->supply.vqmmc);
+undma:
+ if (host->align_buffer)
+ dma_free_coherent(mmc_dev(mmc), host->align_buffer_sz +
+ host->adma_table_sz, host->align_buffer,
+ host->align_addr);
+ host->adma_table = NULL;
+ host->align_buffer = NULL;
return ret;
}
@@ -3430,9 +3394,7 @@ void sdhci_remove_host(struct sdhci_host *host, int dead)
mmc_remove_host(mmc);
-#ifdef SDHCI_USE_LEDS_CLASS
- led_classdev_unregister(&host->led);
-#endif
+ sdhci_led_unregister(host);
if (!dead)
sdhci_do_reset(host, SDHCI_RESET_ALL);
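
Several hunks above convert sdhci_add_host()'s early error returns into goto labels (undma, unreg, unirq, unled) so that the DMA buffers, the vqmmc regulator, the IRQ and the LED are released in reverse order of acquisition. The standalone sketch below shows the shape of that unwinding, with plain malloc/free standing in for the real resources; the errno values and the forced mmc_add_host() failure are only illustrative.

#include <stdio.h>
#include <stdlib.h>

static int add_host_fails = 1;	/* flip to 0 for the success path */

static int fake_probe(void)
{
	void *align_buffer, *vqmmc;
	int ret;

	align_buffer = malloc(64);	/* paired with dma_free_coherent() in the driver */
	if (!align_buffer)
		return -12;		/* -ENOMEM */

	vqmmc = malloc(1);		/* paired with regulator_disable() in the driver */
	if (!vqmmc) {
		ret = -12;
		goto undma;
	}

	if (add_host_fails) {		/* stands in for mmc_add_host() failing */
		ret = -5;		/* illustrative errno */
		goto unreg;
	}

	printf("probe succeeded; resources stay allocated until remove\n");
	return 0;

unreg:
	free(vqmmc);
undma:
	free(align_buffer);
	return ret;
}

int main(void)
{
	printf("fake_probe() = %d\n", fake_probe());
	return 0;
}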
diff --git a/drivers/mmc/host/sdhci.h b/drivers/mmc/host/sdhci.h
index 0f39f4f84d10..609f87ca536b 100644
--- a/drivers/mmc/host/sdhci.h
+++ b/drivers/mmc/host/sdhci.h
@@ -417,11 +417,6 @@ struct sdhci_host {
#define SDHCI_QUIRK2_ACMD23_BROKEN (1<<14)
/* Broken Clock divider zero in controller */
#define SDHCI_QUIRK2_CLOCK_DIV_ZERO_BROKEN (1<<15)
-/*
- * When internal clock is disabled, a delay is needed before modifying the
- * SD clock frequency or enabling back the internal clock.
- */
-#define SDHCI_QUIRK2_NEED_DELAY_AFTER_INT_CLK_RST (1<<16)
int irq; /* Device IRQ */
void __iomem *ioaddr; /* Mapped address */
@@ -433,7 +428,7 @@ struct sdhci_host {
struct mmc_host_ops mmc_host_ops; /* MMC host ops */
u64 dma_mask; /* custom DMA mask */
-#if defined(CONFIG_LEDS_CLASS) || defined(CONFIG_LEDS_CLASS_MODULE)
+#if IS_ENABLED(CONFIG_LEDS_CLASS)
struct led_classdev led; /* LED control */
char led_name[32];
#endif
@@ -450,7 +445,6 @@ struct sdhci_host {
#define SDHCI_AUTO_CMD23 (1<<7) /* Auto CMD23 support */
#define SDHCI_PV_ENABLED (1<<8) /* Preset value enabled */
#define SDHCI_SDIO_IRQ_ENABLED (1<<9) /* SDIO irq enabled */
-#define SDHCI_SDR104_NEEDS_TUNING (1<<10) /* SDR104/HS200 needs tuning */
#define SDHCI_USE_64_BIT_DMA (1<<12) /* Use 64-bit DMA */
#define SDHCI_HS400_TUNING (1<<13) /* Tuning for HS400 */
@@ -661,6 +655,8 @@ static inline bool sdhci_sdio_irq_enabled(struct sdhci_host *host)
return !!(host->flags & SDHCI_SDIO_IRQ_ENABLED);
}
+u16 sdhci_calc_clk(struct sdhci_host *host, unsigned int clock,
+ unsigned int *actual_clock);
void sdhci_set_clock(struct sdhci_host *host, unsigned int clock);
void sdhci_set_power(struct sdhci_host *host, unsigned char mode,
unsigned short vdd);
diff --git a/drivers/mmc/host/sh_mmcif.c b/drivers/mmc/host/sh_mmcif.c
index d9a655f47d41..dd64b8663984 100644
--- a/drivers/mmc/host/sh_mmcif.c
+++ b/drivers/mmc/host/sh_mmcif.c
@@ -248,7 +248,6 @@ struct sh_mmcif_host {
int sg_idx;
int sg_blkidx;
bool power;
- bool card_present;
bool ccs_enable; /* Command Completion Signal support */
bool clk_ctrl2_enable;
struct mutex thread_lock;
@@ -1064,16 +1063,6 @@ static void sh_mmcif_clk_setup(struct sh_mmcif_host *host)
host->mmc->f_max, host->mmc->f_min);
}
-static void sh_mmcif_set_power(struct sh_mmcif_host *host, struct mmc_ios *ios)
-{
- struct mmc_host *mmc = host->mmc;
-
- if (!IS_ERR(mmc->supply.vmmc))
- /* Errors ignored... */
- mmc_regulator_set_ocr(mmc, mmc->supply.vmmc,
- ios->power_mode ? ios->vdd : 0);
-}
-
static void sh_mmcif_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
{
struct sh_mmcif_host *host = mmc_priv(mmc);
@@ -1091,42 +1080,32 @@ static void sh_mmcif_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
host->state = STATE_IOS;
spin_unlock_irqrestore(&host->lock, flags);
- if (ios->power_mode == MMC_POWER_UP) {
- if (!host->card_present) {
- /* See if we also get DMA */
+ switch (ios->power_mode) {
+ case MMC_POWER_UP:
+ if (!IS_ERR(mmc->supply.vmmc))
+ mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, ios->vdd);
+ if (!host->power) {
+ clk_prepare_enable(host->clk);
+ pm_runtime_get_sync(dev);
+ sh_mmcif_sync_reset(host);
sh_mmcif_request_dma(host);
- host->card_present = true;
- }
- sh_mmcif_set_power(host, ios);
- } else if (ios->power_mode == MMC_POWER_OFF || !ios->clock) {
- /* clock stop */
- sh_mmcif_clock_control(host, 0);
- if (ios->power_mode == MMC_POWER_OFF) {
- if (host->card_present) {
- sh_mmcif_release_dma(host);
- host->card_present = false;
- }
+ host->power = true;
}
+ break;
+ case MMC_POWER_OFF:
+ if (!IS_ERR(mmc->supply.vmmc))
+ mmc_regulator_set_ocr(mmc, mmc->supply.vmmc, 0);
if (host->power) {
- pm_runtime_put_sync(dev);
+ sh_mmcif_clock_control(host, 0);
+ sh_mmcif_release_dma(host);
+ pm_runtime_put(dev);
clk_disable_unprepare(host->clk);
host->power = false;
- if (ios->power_mode == MMC_POWER_OFF)
- sh_mmcif_set_power(host, ios);
- }
- host->state = STATE_IDLE;
- return;
- }
-
- if (ios->clock) {
- if (!host->power) {
- clk_prepare_enable(host->clk);
-
- pm_runtime_get_sync(dev);
- host->power = true;
- sh_mmcif_sync_reset(host);
}
+ break;
+ case MMC_POWER_ON:
sh_mmcif_clock_control(host, ios->clock);
+ break;
}
host->timing = ios->timing;
@@ -1519,23 +1498,23 @@ static int sh_mmcif_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, host);
- pm_runtime_enable(dev);
- host->power = false;
-
host->clk = devm_clk_get(dev, NULL);
if (IS_ERR(host->clk)) {
ret = PTR_ERR(host->clk);
dev_err(dev, "cannot get clock: %d\n", ret);
- goto err_pm;
+ goto err_host;
}
ret = clk_prepare_enable(host->clk);
if (ret < 0)
- goto err_pm;
+ goto err_host;
sh_mmcif_clk_setup(host);
- ret = pm_runtime_resume(dev);
+ pm_runtime_enable(dev);
+ host->power = false;
+
+ ret = pm_runtime_get_sync(dev);
if (ret < 0)
goto err_clk;
@@ -1579,12 +1558,13 @@ static int sh_mmcif_probe(struct platform_device *pdev)
sh_mmcif_readl(host->addr, MMCIF_CE_VERSION) & 0xffff,
clk_get_rate(host->clk) / 1000000UL);
+ pm_runtime_put(dev);
clk_disable_unprepare(host->clk);
return ret;
err_clk:
clk_disable_unprepare(host->clk);
-err_pm:
+ pm_runtime_put_sync(dev);
pm_runtime_disable(dev);
err_host:
mmc_free_host(mmc);
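
The .set_ios rework above replaces the card_present/clock special-casing with a plain switch on the power mode: MMC_POWER_UP brings up the regulator, clock, runtime PM and DMA exactly once, MMC_POWER_OFF tears them down, and MMC_POWER_ON only programs the bus clock. A small runnable model of that state handling (the names and printouts are illustrative only):

#include <stdio.h>
#include <stdbool.h>

enum power_mode { POWER_OFF, POWER_UP, POWER_ON };

static bool powered;

static void set_ios(enum power_mode mode, unsigned int clock)
{
	switch (mode) {
	case POWER_UP:
		if (!powered) {		/* enable clock, runtime PM, reset, DMA once */
			puts("power up: enable resources");
			powered = true;
		}
		break;
	case POWER_OFF:
		if (powered) {		/* stop clock, release DMA, drop runtime PM */
			puts("power off: release resources");
			powered = false;
		}
		break;
	case POWER_ON:
		printf("power on: set bus clock to %u Hz\n", clock);
		break;
	}
}

int main(void)
{
	set_ios(POWER_UP, 0);
	set_ios(POWER_ON, 400000);
	set_ios(POWER_ON, 25000000);
	set_ios(POWER_OFF, 0);
	return 0;
}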
diff --git a/drivers/mmc/host/sh_mobile_sdhi.c b/drivers/mmc/host/sh_mobile_sdhi.c
index 9aa147959276..f750f9494410 100644
--- a/drivers/mmc/host/sh_mobile_sdhi.c
+++ b/drivers/mmc/host/sh_mobile_sdhi.c
@@ -28,10 +28,12 @@
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/mmc/host.h>
-#include <linux/mmc/sh_mobile_sdhi.h>
#include <linux/mfd/tmio.h>
#include <linux/sh_dma.h>
#include <linux/delay.h>
+#include <linux/pinctrl/consumer.h>
+#include <linux/pinctrl/pinctrl-state.h>
+#include <linux/regulator/consumer.h>
#include "tmio_mmc.h"
@@ -48,10 +50,8 @@ struct sh_mobile_sdhi_of_data {
unsigned bus_shift;
};
-static const struct sh_mobile_sdhi_of_data sh_mobile_sdhi_of_cfg[] = {
- {
- .tmio_flags = TMIO_MMC_HAS_IDLE_WAIT,
- },
+static const struct sh_mobile_sdhi_of_data of_default_cfg = {
+ .tmio_flags = TMIO_MMC_HAS_IDLE_WAIT,
};
static const struct sh_mobile_sdhi_of_data of_rcar_gen1_compatible = {
@@ -62,7 +62,7 @@ static const struct sh_mobile_sdhi_of_data of_rcar_gen1_compatible = {
static const struct sh_mobile_sdhi_of_data of_rcar_gen2_compatible = {
.tmio_flags = TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_WRPROTECT_DISABLE |
- TMIO_MMC_CLK_ACTUAL | TMIO_MMC_FAST_CLK_CHG,
+ TMIO_MMC_CLK_ACTUAL | TMIO_MMC_MIN_RCAR2,
.capabilities = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ,
.dma_buswidth = DMA_SLAVE_BUSWIDTH_4_BYTES,
.dma_rx_offset = 0x2000,
@@ -70,17 +70,16 @@ static const struct sh_mobile_sdhi_of_data of_rcar_gen2_compatible = {
static const struct sh_mobile_sdhi_of_data of_rcar_gen3_compatible = {
.tmio_flags = TMIO_MMC_HAS_IDLE_WAIT | TMIO_MMC_WRPROTECT_DISABLE |
- TMIO_MMC_CLK_ACTUAL | TMIO_MMC_FAST_CLK_CHG,
- .capabilities = MMC_CAP_SD_HIGHSPEED,
+ TMIO_MMC_CLK_ACTUAL | TMIO_MMC_MIN_RCAR2,
+ .capabilities = MMC_CAP_SD_HIGHSPEED | MMC_CAP_SDIO_IRQ,
.bus_shift = 2,
};
static const struct of_device_id sh_mobile_sdhi_of_match[] = {
{ .compatible = "renesas,sdhi-shmobile" },
- { .compatible = "renesas,sdhi-sh7372" },
- { .compatible = "renesas,sdhi-sh73a0", .data = &sh_mobile_sdhi_of_cfg[0], },
- { .compatible = "renesas,sdhi-r8a73a4", .data = &sh_mobile_sdhi_of_cfg[0], },
- { .compatible = "renesas,sdhi-r8a7740", .data = &sh_mobile_sdhi_of_cfg[0], },
+ { .compatible = "renesas,sdhi-sh73a0", .data = &of_default_cfg, },
+ { .compatible = "renesas,sdhi-r8a73a4", .data = &of_default_cfg, },
+ { .compatible = "renesas,sdhi-r8a7740", .data = &of_default_cfg, },
{ .compatible = "renesas,sdhi-r8a7778", .data = &of_rcar_gen1_compatible, },
{ .compatible = "renesas,sdhi-r8a7779", .data = &of_rcar_gen1_compatible, },
{ .compatible = "renesas,sdhi-r8a7790", .data = &of_rcar_gen2_compatible, },
@@ -97,6 +96,8 @@ struct sh_mobile_sdhi {
struct clk *clk;
struct tmio_mmc_data mmc_data;
struct tmio_mmc_dma dma_priv;
+ struct pinctrl *pinctrl;
+ struct pinctrl_state *pins_default, *pins_uhs;
};
static void sh_mobile_sdhi_sdbuf_width(struct tmio_mmc_host *host, int width)
@@ -131,16 +132,28 @@ static void sh_mobile_sdhi_sdbuf_width(struct tmio_mmc_host *host, int width)
sd_ctrl_write16(host, EXT_ACC, val);
}
-static int sh_mobile_sdhi_clk_enable(struct platform_device *pdev, unsigned int *f)
+static int sh_mobile_sdhi_clk_enable(struct tmio_mmc_host *host)
{
- struct mmc_host *mmc = platform_get_drvdata(pdev);
- struct tmio_mmc_host *host = mmc_priv(mmc);
+ struct mmc_host *mmc = host->mmc;
struct sh_mobile_sdhi *priv = host_to_priv(host);
int ret = clk_prepare_enable(priv->clk);
if (ret < 0)
return ret;
- *f = clk_get_rate(priv->clk);
+ /*
+ * The clock driver may not know what maximum frequency
+ * actually works, so it should be set with the max-frequency
+ * property which will already have been read to f_max. If it
+ * was missing, assume the current frequency is the maximum.
+ */
+ if (!mmc->f_max)
+ mmc->f_max = clk_get_rate(priv->clk);
+
+ /*
+ * Minimum frequency is the minimum input clock frequency
+ * divided by our maximum divider.
+ */
+ mmc->f_min = max(clk_round_rate(priv->clk, 1) / 512, 1L);
/* enable 16bit data access on SDBUF as default */
sh_mobile_sdhi_sdbuf_width(host, 16);
@@ -148,19 +161,92 @@ static int sh_mobile_sdhi_clk_enable(struct platform_device *pdev, unsigned int
return 0;
}
-static void sh_mobile_sdhi_clk_disable(struct platform_device *pdev)
+static unsigned int sh_mobile_sdhi_clk_update(struct tmio_mmc_host *host,
+ unsigned int new_clock)
+{
+ struct sh_mobile_sdhi *priv = host_to_priv(host);
+ unsigned int freq, diff, best_freq = 0, diff_min = ~0;
+ int i, ret;
+
+ /* tested only on RCar Gen2+ currently; may work for others */
+ if (!(host->pdata->flags & TMIO_MMC_MIN_RCAR2))
+ return clk_get_rate(priv->clk);
+
+ /*
+ * We want the bus clock to be as close as possible to, but no
+ * greater than, new_clock. As we can divide by 1 << i for
+ * any i in [0, 9] we want the input clock to be as close as
+ * possible, but no greater than, new_clock << i.
+ */
+ for (i = min(9, ilog2(UINT_MAX / new_clock)); i >= 0; i--) {
+ freq = clk_round_rate(priv->clk, new_clock << i);
+ if (freq > (new_clock << i)) {
+ /* Too fast; look for a slightly slower option */
+ freq = clk_round_rate(priv->clk,
+ (new_clock << i) / 4 * 3);
+ if (freq > (new_clock << i))
+ continue;
+ }
+
+ diff = new_clock - (freq >> i);
+ if (diff <= diff_min) {
+ best_freq = freq;
+ diff_min = diff;
+ }
+ }
+
+ ret = clk_set_rate(priv->clk, best_freq);
+
+ return ret == 0 ? best_freq : clk_get_rate(priv->clk);
+}
+
+static void sh_mobile_sdhi_clk_disable(struct tmio_mmc_host *host)
{
- struct mmc_host *mmc = platform_get_drvdata(pdev);
- struct tmio_mmc_host *host = mmc_priv(mmc);
struct sh_mobile_sdhi *priv = host_to_priv(host);
+
clk_disable_unprepare(priv->clk);
}
+static int sh_mobile_sdhi_start_signal_voltage_switch(struct mmc_host *mmc,
+ struct mmc_ios *ios)
+{
+ struct tmio_mmc_host *host = mmc_priv(mmc);
+ struct sh_mobile_sdhi *priv = host_to_priv(host);
+ struct pinctrl_state *pin_state;
+ int ret;
+
+ switch (ios->signal_voltage) {
+ case MMC_SIGNAL_VOLTAGE_330:
+ pin_state = priv->pins_default;
+ break;
+ case MMC_SIGNAL_VOLTAGE_180:
+ pin_state = priv->pins_uhs;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ /*
+ * If anything is missing, assume signal voltage is fixed at
+ * 3.3V and succeed/fail accordingly.
+ */
+ if (IS_ERR(priv->pinctrl) || IS_ERR(pin_state))
+ return ios->signal_voltage ==
+ MMC_SIGNAL_VOLTAGE_330 ? 0 : -EINVAL;
+
+ ret = mmc_regulator_set_vqmmc(host->mmc, ios);
+ if (ret)
+ return ret;
+
+ return pinctrl_select_state(priv->pinctrl, pin_state);
+}
+
static int sh_mobile_sdhi_wait_idle(struct tmio_mmc_host *host)
{
int timeout = 1000;
- while (--timeout && !(sd_ctrl_read16(host, CTL_STATUS2) & (1 << 13)))
+ while (--timeout && !(sd_ctrl_read16_and_16_as_32(host, CTL_STATUS)
+ & TMIO_STAT_SCLKDIVEN))
udelay(1);
if (!timeout) {
@@ -226,7 +312,6 @@ static int sh_mobile_sdhi_probe(struct platform_device *pdev)
struct tmio_mmc_host *host;
struct resource *res;
int irq, ret, i = 0;
- bool multiplexed_isr = true;
struct tmio_mmc_dma *dma_priv;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
@@ -247,6 +332,14 @@ static int sh_mobile_sdhi_probe(struct platform_device *pdev)
goto eprobe;
}
+ priv->pinctrl = devm_pinctrl_get(&pdev->dev);
+ if (!IS_ERR(priv->pinctrl)) {
+ priv->pins_default = pinctrl_lookup_state(priv->pinctrl,
+ PINCTRL_STATE_DEFAULT);
+ priv->pins_uhs = pinctrl_lookup_state(priv->pinctrl,
+ "state_uhs");
+ }
+
host = tmio_mmc_host_alloc(pdev);
if (!host) {
ret = -ENOMEM;
@@ -267,8 +360,10 @@ static int sh_mobile_sdhi_probe(struct platform_device *pdev)
host->dma = dma_priv;
host->write16_hook = sh_mobile_sdhi_write16_hook;
host->clk_enable = sh_mobile_sdhi_clk_enable;
+ host->clk_update = sh_mobile_sdhi_clk_update;
host->clk_disable = sh_mobile_sdhi_clk_disable;
host->multi_io_quirk = sh_mobile_sdhi_multi_io_quirk;
+ host->start_signal_voltage_switch = sh_mobile_sdhi_start_signal_voltage_switch;
/* Originally registers were 16 bit apart, could be 32 or 64 nowadays */
if (!host->bus_shift && resource_size(res) > 0x100) /* old way to determine the shift */
@@ -308,63 +403,24 @@ static int sh_mobile_sdhi_probe(struct platform_device *pdev)
if (ret < 0)
goto efree;
- /*
- * Allow one or more specific (named) ISRs or
- * one or more multiplexed (un-named) ISRs.
- */
-
- irq = platform_get_irq_byname(pdev, SH_MOBILE_SDHI_IRQ_CARD_DETECT);
- if (irq >= 0) {
- multiplexed_isr = false;
- ret = devm_request_irq(&pdev->dev, irq, tmio_mmc_card_detect_irq, 0,
- dev_name(&pdev->dev), host);
- if (ret)
- goto eirq;
- }
-
- irq = platform_get_irq_byname(pdev, SH_MOBILE_SDHI_IRQ_SDIO);
- if (irq >= 0) {
- multiplexed_isr = false;
- ret = devm_request_irq(&pdev->dev, irq, tmio_mmc_sdio_irq, 0,
+ while (1) {
+ irq = platform_get_irq(pdev, i);
+ if (irq < 0)
+ break;
+ i++;
+ ret = devm_request_irq(&pdev->dev, irq, tmio_mmc_irq, 0,
dev_name(&pdev->dev), host);
if (ret)
goto eirq;
}
- irq = platform_get_irq_byname(pdev, SH_MOBILE_SDHI_IRQ_SDCARD);
- if (irq >= 0) {
- multiplexed_isr = false;
- ret = devm_request_irq(&pdev->dev, irq, tmio_mmc_sdcard_irq, 0,
- dev_name(&pdev->dev), host);
- if (ret)
- goto eirq;
- } else if (!multiplexed_isr) {
- dev_err(&pdev->dev,
- "Principal SD-card IRQ is missing among named interrupts\n");
+ /* There must be at least one IRQ source */
+ if (!i) {
ret = irq;
goto eirq;
}
- if (multiplexed_isr) {
- while (1) {
- irq = platform_get_irq(pdev, i);
- if (irq < 0)
- break;
- i++;
- ret = devm_request_irq(&pdev->dev, irq, tmio_mmc_irq, 0,
- dev_name(&pdev->dev), host);
- if (ret)
- goto eirq;
- }
-
- /* There must be at least one IRQ source */
- if (!i) {
- ret = irq;
- goto eirq;
- }
- }
-
- dev_info(&pdev->dev, "%s base at 0x%08lx clock rate %u MHz\n",
+ dev_info(&pdev->dev, "%s base at 0x%08lx max clock rate %u MHz\n",
mmc_hostname(host->mmc), (unsigned long)
(platform_get_resource(pdev, IORESOURCE_MEM, 0)->start),
host->mmc->f_max / 1000000);
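
sh_mobile_sdhi_clk_update() above searches, for each divider 1 << i with i in [0, 9], for the parent rate closest to but not above new_clock << i, and keeps the best match. The standalone model below reproduces that search; the parent_rates table and the behaviour of the stand-in clk_round_rate() (round down to an achievable rate) are assumptions, and the driver's retry at 3/4 of the request is omitted for brevity.

#include <stdio.h>
#include <limits.h>

/* Pretend parent clock: only these rates are achievable, and the stand-in
 * round_rate picks the largest one not above the request. */
static const unsigned long parent_rates[] = {
	780000, 1560000, 3120000, 6240000, 12480000,
	24960000, 48750000, 97500000, 195000000,
};

static unsigned long fake_clk_round_rate(unsigned long req)
{
	unsigned long best = parent_rates[0];
	unsigned int i;

	for (i = 0; i < sizeof(parent_rates) / sizeof(parent_rates[0]); i++)
		if (parent_rates[i] <= req)
			best = parent_rates[i];
	return best;
}

static unsigned long pick_input_rate(unsigned int new_clock)
{
	unsigned int diff, diff_min = UINT_MAX;
	unsigned long freq, best_freq = 0;
	int i;

	/* Largest divider is 1 << 9 = 512; also avoid overflowing new_clock << i. */
	for (i = 9; i > 0; i--)
		if (new_clock <= (UINT_MAX >> i))
			break;

	for (; i >= 0; i--) {
		freq = fake_clk_round_rate((unsigned long)new_clock << i);
		if (freq > ((unsigned long)new_clock << i))
			continue;	/* even the rounded rate is too fast here */
		diff = new_clock - (unsigned int)(freq >> i);
		if (diff <= diff_min) {
			best_freq = freq;
			diff_min = diff;
		}
	}
	return best_freq;
}

int main(void)
{
	unsigned int bus = 400000;	/* e.g. the card-identification clock */

	printf("request %u Hz -> run the parent at %lu Hz\n",
	       bus, pick_input_rate(bus));
	return 0;
}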
diff --git a/drivers/mmc/host/tmio_mmc.h b/drivers/mmc/host/tmio_mmc.h
index 4a597f5a53e2..1aac2ad8edf2 100644
--- a/drivers/mmc/host/tmio_mmc.h
+++ b/drivers/mmc/host/tmio_mmc.h
@@ -1,6 +1,8 @@
/*
* linux/drivers/mmc/host/tmio_mmc.h
*
+ * Copyright (C) 2016 Sang Engineering, Wolfram Sang
+ * Copyright (C) 2015-16 Renesas Electronics Corporation
* Copyright (C) 2007 Ian Molton
* Copyright (C) 2004 Ian Molton
*
@@ -18,12 +20,67 @@
#include <linux/dmaengine.h>
#include <linux/highmem.h>
-#include <linux/mmc/tmio.h>
#include <linux/mutex.h>
#include <linux/pagemap.h>
#include <linux/scatterlist.h>
#include <linux/spinlock.h>
+#define CTL_SD_CMD 0x00
+#define CTL_ARG_REG 0x04
+#define CTL_STOP_INTERNAL_ACTION 0x08
+#define CTL_XFER_BLK_COUNT 0xa
+#define CTL_RESPONSE 0x0c
+/* driver merges STATUS and following STATUS2 */
+#define CTL_STATUS 0x1c
+/* driver merges IRQ_MASK and following IRQ_MASK2 */
+#define CTL_IRQ_MASK 0x20
+#define CTL_SD_CARD_CLK_CTL 0x24
+#define CTL_SD_XFER_LEN 0x26
+#define CTL_SD_MEM_CARD_OPT 0x28
+#define CTL_SD_ERROR_DETAIL_STATUS 0x2c
+#define CTL_SD_DATA_PORT 0x30
+#define CTL_TRANSACTION_CTL 0x34
+#define CTL_SDIO_STATUS 0x36
+#define CTL_SDIO_IRQ_MASK 0x38
+#define CTL_DMA_ENABLE 0xd8
+#define CTL_RESET_SD 0xe0
+#define CTL_VERSION 0xe2
+#define CTL_SDIO_REGS 0x100
+#define CTL_CLK_AND_WAIT_CTL 0x138
+#define CTL_RESET_SDIO 0x1e0
+
+/* Definitions for values the CTRL_STATUS register can take. */
+#define TMIO_STAT_CMDRESPEND BIT(0)
+#define TMIO_STAT_DATAEND BIT(2)
+#define TMIO_STAT_CARD_REMOVE BIT(3)
+#define TMIO_STAT_CARD_INSERT BIT(4)
+#define TMIO_STAT_SIGSTATE BIT(5)
+#define TMIO_STAT_WRPROTECT BIT(7)
+#define TMIO_STAT_CARD_REMOVE_A BIT(8)
+#define TMIO_STAT_CARD_INSERT_A BIT(9)
+#define TMIO_STAT_SIGSTATE_A BIT(10)
+
+/* These belong technically to CTRL_STATUS2, but the driver merges them */
+#define TMIO_STAT_CMD_IDX_ERR BIT(16)
+#define TMIO_STAT_CRCFAIL BIT(17)
+#define TMIO_STAT_STOPBIT_ERR BIT(18)
+#define TMIO_STAT_DATATIMEOUT BIT(19)
+#define TMIO_STAT_RXOVERFLOW BIT(20)
+#define TMIO_STAT_TXUNDERRUN BIT(21)
+#define TMIO_STAT_CMDTIMEOUT BIT(22)
+#define TMIO_STAT_DAT0 BIT(23) /* only known on R-Car so far */
+#define TMIO_STAT_RXRDY BIT(24)
+#define TMIO_STAT_TXRQ BIT(25)
+#define TMIO_STAT_ILL_FUNC BIT(29) /* only when !TMIO_MMC_HAS_IDLE_WAIT */
+#define TMIO_STAT_SCLKDIVEN BIT(29) /* only when TMIO_MMC_HAS_IDLE_WAIT */
+#define TMIO_STAT_CMD_BUSY BIT(30)
+#define TMIO_STAT_ILL_ACCESS BIT(31)
+
+#define CLK_CTL_DIV_MASK 0xff
+#define CLK_CTL_SCLKEN BIT(8)
+
+#define TMIO_BBS 512 /* Boot block size */
+
/* Definitions for values the CTRL_SDIO_STATUS register can take. */
#define TMIO_SDIO_STAT_IOIRQ 0x0001
#define TMIO_SDIO_STAT_EXPUB52 0x4000
@@ -95,10 +152,14 @@ struct tmio_mmc_host {
bool sdio_irq_enabled;
int (*write16_hook)(struct tmio_mmc_host *host, int addr);
- int (*clk_enable)(struct platform_device *pdev, unsigned int *f);
- void (*clk_disable)(struct platform_device *pdev);
+ int (*clk_enable)(struct tmio_mmc_host *host);
+ unsigned int (*clk_update)(struct tmio_mmc_host *host,
+ unsigned int new_clock);
+ void (*clk_disable)(struct tmio_mmc_host *host);
int (*multi_io_quirk)(struct mmc_card *card,
unsigned int direction, int blk_size);
+ int (*start_signal_voltage_switch)(struct mmc_host *mmc,
+ struct mmc_ios *ios);
};
struct tmio_mmc_host *tmio_mmc_host_alloc(struct platform_device *pdev);
@@ -111,9 +172,6 @@ void tmio_mmc_do_data_irq(struct tmio_mmc_host *host);
void tmio_mmc_enable_mmc_irqs(struct tmio_mmc_host *host, u32 i);
void tmio_mmc_disable_mmc_irqs(struct tmio_mmc_host *host, u32 i);
irqreturn_t tmio_mmc_irq(int irq, void *devid);
-irqreturn_t tmio_mmc_sdcard_irq(int irq, void *devid);
-irqreturn_t tmio_mmc_card_detect_irq(int irq, void *devid);
-irqreturn_t tmio_mmc_sdio_irq(int irq, void *devid);
static inline char *tmio_mmc_kmap_atomic(struct scatterlist *sg,
unsigned long *flags)
@@ -177,7 +235,7 @@ static inline void sd_ctrl_read16_rep(struct tmio_mmc_host *host, int addr,
readsw(host->ctl + (addr << host->bus_shift), buf, count);
}
-static inline u32 sd_ctrl_read32(struct tmio_mmc_host *host, int addr)
+static inline u32 sd_ctrl_read16_and_16_as_32(struct tmio_mmc_host *host, int addr)
{
return readw(host->ctl + (addr << host->bus_shift)) |
readw(host->ctl + ((addr + 2) << host->bus_shift)) << 16;
@@ -199,11 +257,10 @@ static inline void sd_ctrl_write16_rep(struct tmio_mmc_host *host, int addr,
writesw(host->ctl + (addr << host->bus_shift), buf, count);
}
-static inline void sd_ctrl_write32(struct tmio_mmc_host *host, int addr, u32 val)
+static inline void sd_ctrl_write32_as_16_and_16(struct tmio_mmc_host *host, int addr, u32 val)
{
writew(val, host->ctl + (addr << host->bus_shift));
writew(val >> 16, host->ctl + ((addr + 2) << host->bus_shift));
}
-
#endif
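
The renamed accessors above make explicit that STATUS/STATUS2 and IRQ_MASK/IRQ_MASK2 are pairs of adjacent 16-bit registers handled as one 32-bit value. A tiny runnable model of that combining and splitting (the register indices and the sample value are arbitrary):

#include <stdio.h>
#include <stdint.h>

static uint16_t regs[4];	/* fake 16-bit register file: low half at idx, high half at idx + 1 */

static uint32_t read16_and_16_as_32(int idx)
{
	return (uint32_t)regs[idx] | ((uint32_t)regs[idx + 1] << 16);
}

static void write32_as_16_and_16(int idx, uint32_t val)
{
	regs[idx]     = (uint16_t)val;
	regs[idx + 1] = (uint16_t)(val >> 16);
}

int main(void)
{
	write32_as_16_and_16(0, 0x80FF0005);	/* sample value spanning both halves */
	printf("combined: 0x%08x (low 0x%04x, high 0x%04x)\n",
	       read16_and_16_as_32(0), regs[0], regs[1]);
	return 0;
}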
diff --git a/drivers/mmc/host/tmio_mmc_dma.c b/drivers/mmc/host/tmio_mmc_dma.c
index 7fb0c034dcb6..fa8a936a3d9b 100644
--- a/drivers/mmc/host/tmio_mmc_dma.c
+++ b/drivers/mmc/host/tmio_mmc_dma.c
@@ -15,7 +15,6 @@
#include <linux/dmaengine.h>
#include <linux/mfd/tmio.h>
#include <linux/mmc/host.h>
-#include <linux/mmc/tmio.h>
#include <linux/pagemap.h>
#include <linux/scatterlist.h>
diff --git a/drivers/mmc/host/tmio_mmc_pio.c b/drivers/mmc/host/tmio_mmc_pio.c
index 0521b4662748..f44e2ab7aea2 100644
--- a/drivers/mmc/host/tmio_mmc_pio.c
+++ b/drivers/mmc/host/tmio_mmc_pio.c
@@ -39,7 +39,6 @@
#include <linux/mmc/host.h>
#include <linux/mmc/mmc.h>
#include <linux/mmc/slot-gpio.h>
-#include <linux/mmc/tmio.h>
#include <linux/module.h>
#include <linux/pagemap.h>
#include <linux/platform_device.h>
@@ -56,18 +55,18 @@
void tmio_mmc_enable_mmc_irqs(struct tmio_mmc_host *host, u32 i)
{
host->sdcard_irq_mask &= ~(i & TMIO_MASK_IRQ);
- sd_ctrl_write32(host, CTL_IRQ_MASK, host->sdcard_irq_mask);
+ sd_ctrl_write32_as_16_and_16(host, CTL_IRQ_MASK, host->sdcard_irq_mask);
}
void tmio_mmc_disable_mmc_irqs(struct tmio_mmc_host *host, u32 i)
{
host->sdcard_irq_mask |= (i & TMIO_MASK_IRQ);
- sd_ctrl_write32(host, CTL_IRQ_MASK, host->sdcard_irq_mask);
+ sd_ctrl_write32_as_16_and_16(host, CTL_IRQ_MASK, host->sdcard_irq_mask);
}
static void tmio_mmc_ack_mmc_irqs(struct tmio_mmc_host *host, u32 i)
{
- sd_ctrl_write32(host, CTL_STATUS, ~i);
+ sd_ctrl_write32_as_16_and_16(host, CTL_STATUS, ~i);
}
static void tmio_mmc_init_sg(struct tmio_mmc_host *host, struct mmc_data *data)
@@ -154,31 +153,16 @@ static void tmio_mmc_enable_sdio_irq(struct mmc_host *mmc, int enable)
}
}
-static void tmio_mmc_set_clock(struct tmio_mmc_host *host,
- unsigned int new_clock)
+static void tmio_mmc_clk_start(struct tmio_mmc_host *host)
{
- u32 clk = 0, clock;
-
- if (new_clock) {
- for (clock = host->mmc->f_min, clk = 0x80000080;
- new_clock >= (clock << 1);
- clk >>= 1)
- clock <<= 1;
-
- /* 1/1 clock is option */
- if ((host->pdata->flags & TMIO_MMC_CLK_ACTUAL) &&
- ((clk >> 22) & 0x1))
- clk |= 0xff;
- }
-
- if (host->set_clk_div)
- host->set_clk_div(host->pdev, (clk >> 22) & 1);
+ sd_ctrl_write16(host, CTL_SD_CARD_CLK_CTL, CLK_CTL_SCLKEN |
+ sd_ctrl_read16(host, CTL_SD_CARD_CLK_CTL));
+ msleep(host->pdata->flags & TMIO_MMC_MIN_RCAR2 ? 1 : 10);
- sd_ctrl_write16(host, CTL_SD_CARD_CLK_CTL, ~CLK_CTL_SCLKEN &
- sd_ctrl_read16(host, CTL_SD_CARD_CLK_CTL));
- sd_ctrl_write16(host, CTL_SD_CARD_CLK_CTL, clk & CLK_CTL_DIV_MASK);
- if (!(host->pdata->flags & TMIO_MMC_FAST_CLK_CHG))
+ if (host->pdata->flags & TMIO_MMC_HAVE_HIGH_REG) {
+ sd_ctrl_write16(host, CTL_CLK_AND_WAIT_CTL, 0x0100);
msleep(10);
+ }
}
static void tmio_mmc_clk_stop(struct tmio_mmc_host *host)
@@ -190,19 +174,41 @@ static void tmio_mmc_clk_stop(struct tmio_mmc_host *host)
sd_ctrl_write16(host, CTL_SD_CARD_CLK_CTL, ~CLK_CTL_SCLKEN &
sd_ctrl_read16(host, CTL_SD_CARD_CLK_CTL));
- msleep(host->pdata->flags & TMIO_MMC_FAST_CLK_CHG ? 5 : 10);
+ msleep(host->pdata->flags & TMIO_MMC_MIN_RCAR2 ? 5 : 10);
}
-static void tmio_mmc_clk_start(struct tmio_mmc_host *host)
+static void tmio_mmc_set_clock(struct tmio_mmc_host *host,
+ unsigned int new_clock)
{
- sd_ctrl_write16(host, CTL_SD_CARD_CLK_CTL, CLK_CTL_SCLKEN |
- sd_ctrl_read16(host, CTL_SD_CARD_CLK_CTL));
- msleep(host->pdata->flags & TMIO_MMC_FAST_CLK_CHG ? 1 : 10);
+ u32 clk = 0, clock;
- if (host->pdata->flags & TMIO_MMC_HAVE_HIGH_REG) {
- sd_ctrl_write16(host, CTL_CLK_AND_WAIT_CTL, 0x0100);
- msleep(10);
+ if (new_clock == 0) {
+ tmio_mmc_clk_stop(host);
+ return;
}
+
+ if (host->clk_update)
+ clock = host->clk_update(host, new_clock) / 512;
+ else
+ clock = host->mmc->f_min;
+
+ for (clk = 0x80000080; new_clock >= (clock << 1); clk >>= 1)
+ clock <<= 1;
+
+ /* 1/1 clock is option */
+ if ((host->pdata->flags & TMIO_MMC_CLK_ACTUAL) && ((clk >> 22) & 0x1))
+ clk |= 0xff;
+
+ if (host->set_clk_div)
+ host->set_clk_div(host->pdev, (clk >> 22) & 1);
+
+ sd_ctrl_write16(host, CTL_SD_CARD_CLK_CTL, ~CLK_CTL_SCLKEN &
+ sd_ctrl_read16(host, CTL_SD_CARD_CLK_CTL));
+ sd_ctrl_write16(host, CTL_SD_CARD_CLK_CTL, clk & CLK_CTL_DIV_MASK);
+ if (!(host->pdata->flags & TMIO_MMC_MIN_RCAR2))
+ msleep(10);
+
+ tmio_mmc_clk_start(host);
}
static void tmio_mmc_reset(struct tmio_mmc_host *host)
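
After the reordering above, tmio_mmc_set_clock() starts from the slowest selectable bus clock (the input rate divided by 512) and doubles it until doubling again would exceed the request; bit 22 of the shifted pattern marks the 1/1 case that TMIO_MMC_CLK_ACTUAL then turns into the 0xff encoding. The standalone model below reproduces only that search; the 195 MHz input rate is an assumed example and the hardware meaning of the low byte is not modelled.

#include <stdio.h>

static void pick_divider(unsigned int input, unsigned int new_clock)
{
	unsigned int clk = 0x80000080;
	unsigned int clock = input / 512;	/* slowest selectable bus clock */

	for (; new_clock >= (clock << 1); clk >>= 1)
		clock <<= 1;

	printf("req %9u Hz -> bus %9u Hz, low byte 0x%02x, 1/1 bit %u\n",
	       new_clock, clock, clk & 0xff, (clk >> 22) & 1);
}

int main(void)
{
	pick_divider(195000000, 400000);	/* identification phase */
	pick_divider(195000000, 25000000);	/* default-speed SD */
	pick_divider(195000000, 195000000);	/* full rate: 1/1 bit set */
	return 0;
}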
@@ -264,9 +270,6 @@ static void tmio_mmc_reset_work(struct work_struct *work)
tmio_mmc_abort_dma(host);
mmc_request_done(host->mmc, mrq);
-
- pm_runtime_mark_last_busy(mmc_dev(host->mmc));
- pm_runtime_put_autosuspend(mmc_dev(host->mmc));
}
/* called with host->lock held, interrupts disabled */
@@ -296,9 +299,6 @@ static void tmio_mmc_finish_request(struct tmio_mmc_host *host)
tmio_mmc_abort_dma(host);
mmc_request_done(host->mmc, mrq);
-
- pm_runtime_mark_last_busy(mmc_dev(host->mmc));
- pm_runtime_put_autosuspend(mmc_dev(host->mmc));
}
static void tmio_mmc_done_work(struct work_struct *work)
@@ -375,7 +375,7 @@ static int tmio_mmc_start_command(struct tmio_mmc_host *host, struct mmc_command
tmio_mmc_enable_mmc_irqs(host, irq_mask);
/* Fire off the command */
- sd_ctrl_write32(host, CTL_ARG_REG, cmd->arg);
+ sd_ctrl_write32_as_16_and_16(host, CTL_ARG_REG, cmd->arg);
sd_ctrl_write16(host, CTL_SD_CMD, c);
return 0;
@@ -530,7 +530,7 @@ static void tmio_mmc_data_irq(struct tmio_mmc_host *host)
goto out;
if (host->chan_tx && (data->flags & MMC_DATA_WRITE) && !host->force_pio) {
- u32 status = sd_ctrl_read32(host, CTL_STATUS);
+ u32 status = sd_ctrl_read16_and_16_as_32(host, CTL_STATUS);
bool done = false;
/*
@@ -542,7 +542,7 @@ static void tmio_mmc_data_irq(struct tmio_mmc_host *host)
* waiting for one more interrupt fixes the problem.
*/
if (host->pdata->flags & TMIO_MMC_HAS_IDLE_WAIT) {
- if (status & TMIO_STAT_ILL_FUNC)
+ if (status & TMIO_STAT_SCLKDIVEN)
done = true;
} else {
if (!(status & TMIO_STAT_CMD_BUSY))
@@ -585,7 +585,7 @@ static void tmio_mmc_cmd_irq(struct tmio_mmc_host *host,
*/
for (i = 3, addr = CTL_RESPONSE ; i >= 0 ; i--, addr += 4)
- cmd->resp[i] = sd_ctrl_read32(host, addr);
+ cmd->resp[i] = sd_ctrl_read16_and_16_as_32(host, addr);
if (cmd->flags & MMC_RSP_136) {
cmd->resp[0] = (cmd->resp[0] << 8) | (cmd->resp[1] >> 24);
@@ -625,19 +625,6 @@ out:
spin_unlock(&host->lock);
}
-static void tmio_mmc_card_irq_status(struct tmio_mmc_host *host,
- int *ireg, int *status)
-{
- *status = sd_ctrl_read32(host, CTL_STATUS);
- *ireg = *status & TMIO_MASK_IRQ & ~host->sdcard_irq_mask;
-
- pr_debug_status(*status);
- pr_debug_status(*ireg);
-
- /* Clear the status except the interrupt status */
- sd_ctrl_write32(host, CTL_STATUS, TMIO_MASK_IRQ);
-}
-
static bool __tmio_mmc_card_detect_irq(struct tmio_mmc_host *host,
int ireg, int status)
{
@@ -657,18 +644,6 @@ static bool __tmio_mmc_card_detect_irq(struct tmio_mmc_host *host,
return false;
}
-irqreturn_t tmio_mmc_card_detect_irq(int irq, void *devid)
-{
- unsigned int ireg, status;
- struct tmio_mmc_host *host = devid;
-
- tmio_mmc_card_irq_status(host, &ireg, &status);
- __tmio_mmc_card_detect_irq(host, ireg, status);
-
- return IRQ_HANDLED;
-}
-EXPORT_SYMBOL(tmio_mmc_card_detect_irq);
-
static bool __tmio_mmc_sdcard_irq(struct tmio_mmc_host *host,
int ireg, int status)
{
@@ -698,19 +673,7 @@ static bool __tmio_mmc_sdcard_irq(struct tmio_mmc_host *host,
return false;
}
-irqreturn_t tmio_mmc_sdcard_irq(int irq, void *devid)
-{
- unsigned int ireg, status;
- struct tmio_mmc_host *host = devid;
-
- tmio_mmc_card_irq_status(host, &ireg, &status);
- __tmio_mmc_sdcard_irq(host, ireg, status);
-
- return IRQ_HANDLED;
-}
-EXPORT_SYMBOL(tmio_mmc_sdcard_irq);
-
-irqreturn_t tmio_mmc_sdio_irq(int irq, void *devid)
+static void tmio_mmc_sdio_irq(int irq, void *devid)
{
struct tmio_mmc_host *host = devid;
struct mmc_host *mmc = host->mmc;
@@ -719,7 +682,7 @@ irqreturn_t tmio_mmc_sdio_irq(int irq, void *devid)
unsigned int sdio_status;
if (!(pdata->flags & TMIO_MMC_SDIO_IRQ))
- return IRQ_HANDLED;
+ return;
status = sd_ctrl_read16(host, CTL_SDIO_STATUS);
ireg = status & TMIO_SDIO_MASK_ALL & ~host->sdcard_irq_mask;
@@ -732,19 +695,22 @@ irqreturn_t tmio_mmc_sdio_irq(int irq, void *devid)
if (mmc->caps & MMC_CAP_SDIO_IRQ && ireg & TMIO_SDIO_STAT_IOIRQ)
mmc_signal_sdio_irq(mmc);
-
- return IRQ_HANDLED;
}
-EXPORT_SYMBOL(tmio_mmc_sdio_irq);
irqreturn_t tmio_mmc_irq(int irq, void *devid)
{
struct tmio_mmc_host *host = devid;
unsigned int ireg, status;
- pr_debug("MMC IRQ begin\n");
+ status = sd_ctrl_read16_and_16_as_32(host, CTL_STATUS);
+ ireg = status & TMIO_MASK_IRQ & ~host->sdcard_irq_mask;
+
+ pr_debug_status(status);
+ pr_debug_status(ireg);
+
+ /* Clear the status except the interrupt status */
+ sd_ctrl_write32_as_16_and_16(host, CTL_STATUS, TMIO_MASK_IRQ);
- tmio_mmc_card_irq_status(host, &ireg, &status);
if (__tmio_mmc_card_detect_irq(host, ireg, status))
return IRQ_HANDLED;
if (__tmio_mmc_sdcard_irq(host, ireg, status))
@@ -812,8 +778,6 @@ static void tmio_mmc_request(struct mmc_host *mmc, struct mmc_request *mrq)
spin_unlock_irqrestore(&host->lock, flags);
- pm_runtime_get_sync(mmc_dev(mmc));
-
if (mrq->data) {
ret = tmio_mmc_start_data(host, mrq->data);
if (ret)
@@ -832,24 +796,14 @@ fail:
host->mrq = NULL;
mrq->cmd->error = ret;
mmc_request_done(mmc, mrq);
-
- pm_runtime_mark_last_busy(mmc_dev(mmc));
- pm_runtime_put_autosuspend(mmc_dev(mmc));
}
-static int tmio_mmc_clk_update(struct tmio_mmc_host *host)
+static int tmio_mmc_clk_enable(struct tmio_mmc_host *host)
{
- struct mmc_host *mmc = host->mmc;
- int ret;
-
if (!host->clk_enable)
return -ENOTSUPP;
- ret = host->clk_enable(host->pdev, &mmc->f_max);
- if (!ret)
- mmc->f_min = mmc->f_max / 512;
-
- return ret;
+ return host->clk_enable(host);
}
static void tmio_mmc_power_on(struct tmio_mmc_host *host, unsigned short vdd)
@@ -925,8 +879,6 @@ static void tmio_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
struct device *dev = &host->pdev->dev;
unsigned long flags;
- pm_runtime_get_sync(mmc_dev(mmc));
-
mutex_lock(&host->ios_lock);
spin_lock_irqsave(&host->lock, flags);
@@ -959,14 +911,12 @@ static void tmio_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
tmio_mmc_clk_stop(host);
break;
case MMC_POWER_UP:
- tmio_mmc_set_clock(host, ios->clock);
tmio_mmc_power_on(host, ios->vdd);
- tmio_mmc_clk_start(host);
+ tmio_mmc_set_clock(host, ios->clock);
tmio_mmc_set_bus_width(host, ios->bus_width);
break;
case MMC_POWER_ON:
tmio_mmc_set_clock(host, ios->clock);
- tmio_mmc_clk_start(host);
tmio_mmc_set_bus_width(host, ios->bus_width);
break;
}
@@ -983,9 +933,6 @@ static void tmio_mmc_set_ios(struct mmc_host *mmc, struct mmc_ios *ios)
host->clk_cache = ios->clock;
mutex_unlock(&host->ios_lock);
-
- pm_runtime_mark_last_busy(mmc_dev(mmc));
- pm_runtime_put_autosuspend(mmc_dev(mmc));
}
static int tmio_mmc_get_ro(struct mmc_host *mmc)
@@ -996,11 +943,8 @@ static int tmio_mmc_get_ro(struct mmc_host *mmc)
if (ret >= 0)
return ret;
- pm_runtime_get_sync(mmc_dev(mmc));
ret = !((pdata->flags & TMIO_MMC_WRPROTECT_DISABLE) ||
- (sd_ctrl_read32(host, CTL_STATUS) & TMIO_STAT_WRPROTECT));
- pm_runtime_mark_last_busy(mmc_dev(mmc));
- pm_runtime_put_autosuspend(mmc_dev(mmc));
+ (sd_ctrl_read16_and_16_as_32(host, CTL_STATUS) & TMIO_STAT_WRPROTECT));
return ret;
}
@@ -1016,12 +960,20 @@ static int tmio_multi_io_quirk(struct mmc_card *card,
return blk_size;
}
-static const struct mmc_host_ops tmio_mmc_ops = {
+static int tmio_mmc_card_busy(struct mmc_host *mmc)
+{
+ struct tmio_mmc_host *host = mmc_priv(mmc);
+
+ return !(sd_ctrl_read16_and_16_as_32(host, CTL_STATUS) & TMIO_STAT_DAT0);
+}
+
+static struct mmc_host_ops tmio_mmc_ops = {
.request = tmio_mmc_request,
.set_ios = tmio_mmc_set_ios,
.get_ro = tmio_mmc_get_ro,
.get_cd = mmc_gpio_get_cd,
.enable_sdio_irq = tmio_mmc_enable_sdio_irq,
+ .card_busy = tmio_mmc_card_busy,
.multi_io_quirk = tmio_multi_io_quirk,
};
@@ -1120,7 +1072,9 @@ int tmio_mmc_host_probe(struct tmio_mmc_host *_host,
goto host_free;
}
+ tmio_mmc_ops.start_signal_voltage_switch = _host->start_signal_voltage_switch;
mmc->ops = &tmio_mmc_ops;
+
mmc->caps |= MMC_CAP_4_BIT_DATA | pdata->capabilities;
mmc->caps2 |= pdata->capabilities2;
mmc->max_segs = 32;
@@ -1135,7 +1089,7 @@ int tmio_mmc_host_probe(struct tmio_mmc_host *_host,
mmc->caps & MMC_CAP_NONREMOVABLE ||
mmc->slot.cd_irq >= 0);
- if (tmio_mmc_clk_update(_host) < 0) {
+ if (tmio_mmc_clk_enable(_host) < 0) {
mmc->f_max = pdata->hclk;
mmc->f_min = mmc->f_max / 512;
}
@@ -1159,7 +1113,7 @@ int tmio_mmc_host_probe(struct tmio_mmc_host *_host,
tmio_mmc_clk_stop(_host);
tmio_mmc_reset(_host);
- _host->sdcard_irq_mask = sd_ctrl_read32(_host, CTL_IRQ_MASK);
+ _host->sdcard_irq_mask = sd_ctrl_read16_and_16_as_32(_host, CTL_IRQ_MASK);
tmio_mmc_disable_mmc_irqs(_host, TMIO_MASK_ALL);
/* Unmask the IRQs we want to know about */
@@ -1251,7 +1205,7 @@ int tmio_mmc_host_runtime_suspend(struct device *dev)
tmio_mmc_clk_stop(host);
if (host->clk_disable)
- host->clk_disable(host->pdev);
+ host->clk_disable(host);
return 0;
}
@@ -1263,12 +1217,10 @@ int tmio_mmc_host_runtime_resume(struct device *dev)
struct tmio_mmc_host *host = mmc_priv(mmc);
tmio_mmc_reset(host);
- tmio_mmc_clk_update(host);
+ tmio_mmc_clk_enable(host);
- if (host->clk_cache) {
+ if (host->clk_cache)
tmio_mmc_set_clock(host, host->clk_cache);
- tmio_mmc_clk_start(host);
- }
tmio_mmc_enable_dma(host, true);
diff --git a/drivers/mmc/host/toshsd.c b/drivers/mmc/host/toshsd.c
index e2cdd5fb1423..553ef41bb806 100644
--- a/drivers/mmc/host/toshsd.c
+++ b/drivers/mmc/host/toshsd.c
@@ -21,6 +21,7 @@
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/pm.h>
+#include <linux/pm_runtime.h>
#include <linux/mmc/host.h>
#include <linux/mmc/mmc.h>
diff --git a/drivers/mmc/host/usdhi6rol0.c b/drivers/mmc/host/usdhi6rol0.c
index 807c06e203c3..1bd5f1a18d4e 100644
--- a/drivers/mmc/host/usdhi6rol0.c
+++ b/drivers/mmc/host/usdhi6rol0.c
@@ -22,6 +22,7 @@
#include <linux/mmc/sdio.h>
#include <linux/module.h>
#include <linux/pagemap.h>
+#include <linux/pinctrl/consumer.h>
#include <linux/platform_device.h>
#include <linux/scatterlist.h>
#include <linux/string.h>
@@ -198,6 +199,11 @@ struct usdhi6_host {
struct dma_chan *chan_rx;
struct dma_chan *chan_tx;
bool dma_active;
+
+ /* Pin control */
+ struct pinctrl *pinctrl;
+ struct pinctrl_state *pins_default;
+ struct pinctrl_state *pins_uhs;
};
/* I/O primitives */
@@ -1147,12 +1153,45 @@ static void usdhi6_enable_sdio_irq(struct mmc_host *mmc, int enable)
}
}
+static int usdhi6_set_pinstates(struct usdhi6_host *host, int voltage)
+{
+ if (IS_ERR(host->pins_uhs))
+ return 0;
+
+ switch (voltage) {
+ case MMC_SIGNAL_VOLTAGE_180:
+ case MMC_SIGNAL_VOLTAGE_120:
+ return pinctrl_select_state(host->pinctrl,
+ host->pins_uhs);
+
+ default:
+ return pinctrl_select_state(host->pinctrl,
+ host->pins_default);
+ }
+}
+
+static int usdhi6_sig_volt_switch(struct mmc_host *mmc, struct mmc_ios *ios)
+{
+ int ret;
+
+ ret = mmc_regulator_set_vqmmc(mmc, ios);
+ if (ret < 0)
+ return ret;
+
+ ret = usdhi6_set_pinstates(mmc_priv(mmc), ios->signal_voltage);
+ if (ret)
+ dev_warn_once(mmc_dev(mmc),
+ "Failed to set pinstate err=%d\n", ret);
+ return ret;
+}
+
static struct mmc_host_ops usdhi6_ops = {
.request = usdhi6_request,
.set_ios = usdhi6_set_ios,
.get_cd = usdhi6_get_cd,
.get_ro = usdhi6_get_ro,
.enable_sdio_irq = usdhi6_enable_sdio_irq,
+ .start_signal_voltage_switch = usdhi6_sig_volt_switch,
};
/* State machine handlers */
@@ -1730,6 +1769,25 @@ static int usdhi6_probe(struct platform_device *pdev)
host->wait = USDHI6_WAIT_FOR_REQUEST;
host->timeout = msecs_to_jiffies(4000);
+ host->pinctrl = devm_pinctrl_get(&pdev->dev);
+ if (IS_ERR(host->pinctrl)) {
+ ret = PTR_ERR(host->pinctrl);
+ goto e_free_mmc;
+ }
+
+ host->pins_uhs = pinctrl_lookup_state(host->pinctrl, "state_uhs");
+ if (!IS_ERR(host->pins_uhs)) {
+ host->pins_default = pinctrl_lookup_state(host->pinctrl,
+ PINCTRL_STATE_DEFAULT);
+
+ if (IS_ERR(host->pins_default)) {
+ dev_err(dev,
+ "UHS pinctrl requires a default pin state.\n");
+ ret = PTR_ERR(host->pins_default);
+ goto e_free_mmc;
+ }
+ }
+
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
host->base = devm_ioremap_resource(dev, res);
if (IS_ERR(host->base)) {
@@ -1785,7 +1843,7 @@ static int usdhi6_probe(struct platform_device *pdev)
mmc->ops = &usdhi6_ops;
mmc->caps |= MMC_CAP_SD_HIGHSPEED | MMC_CAP_MMC_HIGHSPEED |
- MMC_CAP_UHS_SDR50 | MMC_CAP_UHS_DDR50 | MMC_CAP_SDIO_IRQ;
+ MMC_CAP_SDIO_IRQ;
/* Set .max_segs to some random number. Feel free to adjust. */
mmc->max_segs = 32;
mmc->max_blk_size = 512;
diff --git a/drivers/mtd/chips/Kconfig b/drivers/mtd/chips/Kconfig
index 3b3dabce58de..bbfa1f129266 100644
--- a/drivers/mtd/chips/Kconfig
+++ b/drivers/mtd/chips/Kconfig
@@ -115,6 +115,7 @@ config MTD_MAP_BANK_WIDTH_16
config MTD_MAP_BANK_WIDTH_32
bool "Support 256-bit buswidth" if MTD_CFI_GEOMETRY
+ select MTD_COMPLEX_MAPPINGS if HAS_IOMEM
default n
help
If you wish to support CFI devices on a physical bus which is
diff --git a/drivers/mtd/devices/bcm47xxsflash.c b/drivers/mtd/devices/bcm47xxsflash.c
index 347bb83db864..1c65c15b31a1 100644
--- a/drivers/mtd/devices/bcm47xxsflash.c
+++ b/drivers/mtd/devices/bcm47xxsflash.c
@@ -2,6 +2,7 @@
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/delay.h>
+#include <linux/ioport.h>
#include <linux/mtd/mtd.h>
#include <linux/platform_device.h>
#include <linux/bcma/bcma.h>
@@ -109,8 +110,7 @@ static int bcm47xxsflash_read(struct mtd_info *mtd, loff_t from, size_t len,
if ((from + len) > mtd->size)
return -EINVAL;
- memcpy_fromio(buf, (void __iomem *)KSEG0ADDR(b47s->window + from),
- len);
+ memcpy_fromio(buf, b47s->window + from, len);
*retlen = len;
return len;
@@ -275,15 +275,33 @@ static void bcm47xxsflash_bcma_cc_write(struct bcm47xxsflash *b47s, u16 offset,
static int bcm47xxsflash_bcma_probe(struct platform_device *pdev)
{
- struct bcma_sflash *sflash = dev_get_platdata(&pdev->dev);
+ struct device *dev = &pdev->dev;
+ struct bcma_sflash *sflash = dev_get_platdata(dev);
struct bcm47xxsflash *b47s;
+ struct resource *res;
int err;
- b47s = devm_kzalloc(&pdev->dev, sizeof(*b47s), GFP_KERNEL);
+ b47s = devm_kzalloc(dev, sizeof(*b47s), GFP_KERNEL);
if (!b47s)
return -ENOMEM;
sflash->priv = b47s;
+ res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ if (!res) {
+ dev_err(dev, "invalid resource\n");
+ return -EINVAL;
+ }
+ if (!devm_request_mem_region(dev, res->start, resource_size(res),
+ res->name)) {
+ dev_err(dev, "can't request region for resource %pR\n", res);
+ return -EBUSY;
+ }
+ b47s->window = ioremap_cache(res->start, resource_size(res));
+ if (!b47s->window) {
+ dev_err(dev, "ioremap failed for resource %pR\n", res);
+ return -ENOMEM;
+ }
+
b47s->bcma_cc = container_of(sflash, struct bcma_drv_cc, sflash);
b47s->cc_read = bcm47xxsflash_bcma_cc_read;
b47s->cc_write = bcm47xxsflash_bcma_cc_write;
@@ -297,7 +315,6 @@ static int bcm47xxsflash_bcma_probe(struct platform_device *pdev)
break;
}
- b47s->window = sflash->window;
b47s->blocksize = sflash->blocksize;
b47s->numblocks = sflash->numblocks;
b47s->size = sflash->size;
@@ -306,6 +323,7 @@ static int bcm47xxsflash_bcma_probe(struct platform_device *pdev)
err = mtd_device_parse_register(&b47s->mtd, probes, NULL, NULL, 0);
if (err) {
pr_err("Failed to register MTD device: %d\n", err);
+ iounmap(b47s->window);
return err;
}
@@ -321,6 +339,7 @@ static int bcm47xxsflash_bcma_remove(struct platform_device *pdev)
struct bcm47xxsflash *b47s = sflash->priv;
mtd_device_unregister(&b47s->mtd);
+ iounmap(b47s->window);
return 0;
}
diff --git a/drivers/mtd/devices/bcm47xxsflash.h b/drivers/mtd/devices/bcm47xxsflash.h
index fe93daf4f489..1564b62b412e 100644
--- a/drivers/mtd/devices/bcm47xxsflash.h
+++ b/drivers/mtd/devices/bcm47xxsflash.h
@@ -65,7 +65,8 @@ struct bcm47xxsflash {
enum bcm47xxsflash_type type;
- u32 window;
+ void __iomem *window;
+
u32 blocksize;
u16 numblocks;
u32 size;
diff --git a/drivers/mtd/devices/docg3.c b/drivers/mtd/devices/docg3.c
index e7b2e439696c..b833e6cc684c 100644
--- a/drivers/mtd/devices/docg3.c
+++ b/drivers/mtd/devices/docg3.c
@@ -67,16 +67,40 @@ module_param(reliable_mode, uint, 0);
MODULE_PARM_DESC(reliable_mode, "Set the docg3 mode (0=normal MLC, 1=fast, "
"2=reliable) : MLC normal operations are in normal mode");
-/**
- * struct docg3_oobinfo - DiskOnChip G3 OOB layout
- * @eccbytes: 8 bytes are used (1 for Hamming ECC, 7 for BCH ECC)
- * @eccpos: ecc positions (byte 7 is Hamming ECC, byte 8-14 are BCH ECC)
- * @oobfree: free pageinfo bytes (byte 0 until byte 6, byte 15
- */
-static struct nand_ecclayout docg3_oobinfo = {
- .eccbytes = 8,
- .eccpos = {7, 8, 9, 10, 11, 12, 13, 14},
- .oobfree = {{0, 7}, {15, 1} },
+static int docg3_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section)
+ return -ERANGE;
+
+	/* byte 7 is Hamming ECC, bytes 8-14 are BCH ECC */
+ oobregion->offset = 7;
+ oobregion->length = 8;
+
+ return 0;
+}
+
+static int docg3_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section > 1)
+ return -ERANGE;
+
+	/* free bytes: bytes 0 to 6, and byte 15 */
+ if (!section) {
+ oobregion->offset = 0;
+ oobregion->length = 7;
+ } else {
+ oobregion->offset = 15;
+ oobregion->length = 1;
+ }
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops nand_ooblayout_docg3_ops = {
+ .ecc = docg3_ooblayout_ecc,
+ .free = docg3_ooblayout_free,
};
static inline u8 doc_readb(struct docg3 *docg3, u16 reg)
@@ -1857,7 +1881,7 @@ static int __init doc_set_driver_info(int chip_id, struct mtd_info *mtd)
mtd->_read_oob = doc_read_oob;
mtd->_write_oob = doc_write_oob;
mtd->_block_isbad = doc_block_isbad;
- mtd->ecclayout = &docg3_oobinfo;
+ mtd_set_ooblayout(mtd, &nand_ooblayout_docg3_ops);
mtd->oobavail = 8;
mtd->ecc_strength = DOC_ECC_BCH_T;
diff --git a/drivers/mtd/devices/m25p80.c b/drivers/mtd/devices/m25p80.c
index c9c3b7fa3051..9d6854467651 100644
--- a/drivers/mtd/devices/m25p80.c
+++ b/drivers/mtd/devices/m25p80.c
@@ -131,6 +131,28 @@ static int m25p80_read(struct spi_nor *nor, loff_t from, size_t len,
/* convert the dummy cycles to the number of bytes */
dummy /= 8;
+ if (spi_flash_read_supported(spi)) {
+ struct spi_flash_read_message msg;
+ int ret;
+
+ memset(&msg, 0, sizeof(msg));
+
+ msg.buf = buf;
+ msg.from = from;
+ msg.len = len;
+ msg.read_opcode = nor->read_opcode;
+ msg.addr_width = nor->addr_width;
+ msg.dummy_bytes = dummy;
+ /* TODO: Support other combinations */
+ msg.opcode_nbits = SPI_NBITS_SINGLE;
+ msg.addr_nbits = SPI_NBITS_SINGLE;
+ msg.data_nbits = m25p80_rx_nbits(nor);
+
+ ret = spi_flash_read(spi, &msg);
+ *retlen = msg.retlen;
+ return ret;
+ }
+
spi_message_init(&m);
memset(t, 0, (sizeof t));
diff --git a/drivers/mtd/devices/pmc551.c b/drivers/mtd/devices/pmc551.c
index 708b7e8c8b18..220f9200fa52 100644
--- a/drivers/mtd/devices/pmc551.c
+++ b/drivers/mtd/devices/pmc551.c
@@ -353,7 +353,7 @@ static int pmc551_write(struct mtd_info *mtd, loff_t to, size_t len,
* mechanism
* returns the size of the memory region found.
*/
-static int fixup_pmc551(struct pci_dev *dev)
+static int __init fixup_pmc551(struct pci_dev *dev)
{
#ifdef CONFIG_MTD_PMC551_BUGFIX
u32 dram_data;
diff --git a/drivers/mtd/maps/Kconfig b/drivers/mtd/maps/Kconfig
index 7c95a656f9e4..392f9eff5fb7 100644
--- a/drivers/mtd/maps/Kconfig
+++ b/drivers/mtd/maps/Kconfig
@@ -74,6 +74,16 @@ config MTD_PHYSMAP_OF
physically into the CPU's memory. The mapping description here is
taken from OF device tree.
+config MTD_PHYSMAP_OF_VERSATILE
+ bool "Support ARM Versatile physmap OF"
+ depends on MTD_PHYSMAP_OF
+ depends on MFD_SYSCON
+ default y if (ARCH_INTEGRATOR || ARCH_VERSATILE || REALVIEW_DT)
+ help
+ This provides some extra DT physmap parsing for the ARM Versatile
+ platforms, basically to add a VPP (write protection) callback so
+ the flash can be taken out of write protection.
+
config MTD_PMC_MSP_EVM
tristate "CFI Flash device mapped on PMC-Sierra MSP"
depends on PMC_MSP && MTD_CFI
diff --git a/drivers/mtd/maps/Makefile b/drivers/mtd/maps/Makefile
index 141c91a5b24c..644f7d36d35d 100644
--- a/drivers/mtd/maps/Makefile
+++ b/drivers/mtd/maps/Makefile
@@ -18,6 +18,9 @@ obj-$(CONFIG_MTD_TSUNAMI) += tsunami_flash.o
obj-$(CONFIG_MTD_PXA2XX) += pxa2xx-flash.o
obj-$(CONFIG_MTD_PHYSMAP) += physmap.o
obj-$(CONFIG_MTD_PHYSMAP_OF) += physmap_of.o
+ifdef CONFIG_MTD_PHYSMAP_OF_VERSATILE
+obj-$(CONFIG_MTD_PHYSMAP_OF) += physmap_of_versatile.o
+endif
obj-$(CONFIG_MTD_PISMO) += pismo.o
obj-$(CONFIG_MTD_PMC_MSP_EVM) += pmcmsp-flash.o
obj-$(CONFIG_MTD_PCMCIA) += pcmciamtd.o
diff --git a/drivers/mtd/maps/ck804xrom.c b/drivers/mtd/maps/ck804xrom.c
index 0455166f05fa..4f206a99164c 100644
--- a/drivers/mtd/maps/ck804xrom.c
+++ b/drivers/mtd/maps/ck804xrom.c
@@ -112,8 +112,8 @@ static void ck804xrom_cleanup(struct ck804xrom_window *window)
}
-static int ck804xrom_init_one(struct pci_dev *pdev,
- const struct pci_device_id *ent)
+static int __init ck804xrom_init_one(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
{
static char *rom_probe_types[] = { "cfi_probe", "jedec_probe", NULL };
u8 byte;
diff --git a/drivers/mtd/maps/esb2rom.c b/drivers/mtd/maps/esb2rom.c
index 76ed651b515b..9646b0766ce0 100644
--- a/drivers/mtd/maps/esb2rom.c
+++ b/drivers/mtd/maps/esb2rom.c
@@ -144,8 +144,8 @@ static void esb2rom_cleanup(struct esb2rom_window *window)
pci_dev_put(window->pdev);
}
-static int esb2rom_init_one(struct pci_dev *pdev,
- const struct pci_device_id *ent)
+static int __init esb2rom_init_one(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
{
static char *rom_probe_types[] = { "cfi_probe", "jedec_probe", NULL };
struct esb2rom_window *window = &esb2rom_window;
diff --git a/drivers/mtd/maps/ichxrom.c b/drivers/mtd/maps/ichxrom.c
index 8636bba42200..e17d02ae03f0 100644
--- a/drivers/mtd/maps/ichxrom.c
+++ b/drivers/mtd/maps/ichxrom.c
@@ -84,8 +84,8 @@ static void ichxrom_cleanup(struct ichxrom_window *window)
}
-static int ichxrom_init_one(struct pci_dev *pdev,
- const struct pci_device_id *ent)
+static int __init ichxrom_init_one(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
{
static char *rom_probe_types[] = { "cfi_probe", "jedec_probe", NULL };
struct ichxrom_window *window = &ichxrom_window;
diff --git a/drivers/mtd/maps/physmap_of.c b/drivers/mtd/maps/physmap_of.c
index 70c453144f00..22f3858c0364 100644
--- a/drivers/mtd/maps/physmap_of.c
+++ b/drivers/mtd/maps/physmap_of.c
@@ -24,6 +24,7 @@
#include <linux/of_address.h>
#include <linux/of_platform.h>
#include <linux/slab.h>
+#include "physmap_of_versatile.h"
struct of_flash_list {
struct mtd_info *mtd;
@@ -240,6 +241,11 @@ static int of_flash_probe(struct platform_device *dev)
info->list[i].map.size = res_size;
info->list[i].map.bankwidth = be32_to_cpup(width);
info->list[i].map.device_node = dp;
+ err = of_flash_probe_versatile(dev, dp, &info->list[i].map);
+ if (err) {
+ dev_err(&dev->dev, "Can't probe Versatile VPP\n");
+ return err;
+ }
err = -ENOMEM;
info->list[i].map.virt = ioremap(info->list[i].map.phys,
diff --git a/drivers/mtd/maps/physmap_of_versatile.c b/drivers/mtd/maps/physmap_of_versatile.c
new file mode 100644
index 000000000000..0f39b2a015f4
--- /dev/null
+++ b/drivers/mtd/maps/physmap_of_versatile.c
@@ -0,0 +1,255 @@
+/*
+ * Versatile OF physmap driver add-on
+ *
+ * Copyright (c) 2016, Linaro Limited
+ * Author: Linus Walleij <linus.walleij@linaro.org>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation; either version 2 of
+ * the License, or (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston,
+ * MA 02111-1307 USA
+ */
+#include <linux/export.h>
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/of_device.h>
+#include <linux/mtd/map.h>
+#include <linux/mfd/syscon.h>
+#include <linux/regmap.h>
+#include <linux/bitops.h>
+#include "physmap_of_versatile.h"
+
+static struct regmap *syscon_regmap;
+
+enum versatile_flashprot {
+ INTEGRATOR_AP_FLASHPROT,
+ INTEGRATOR_CP_FLASHPROT,
+ VERSATILE_FLASHPROT,
+ REALVIEW_FLASHPROT,
+};
+
+static const struct of_device_id syscon_match[] = {
+ {
+ .compatible = "arm,integrator-ap-syscon",
+ .data = (void *)INTEGRATOR_AP_FLASHPROT,
+ },
+ {
+ .compatible = "arm,integrator-cp-syscon",
+ .data = (void *)INTEGRATOR_CP_FLASHPROT,
+ },
+ {
+ .compatible = "arm,core-module-versatile",
+ .data = (void *)VERSATILE_FLASHPROT,
+ },
+ {
+ .compatible = "arm,realview-eb-syscon",
+ .data = (void *)REALVIEW_FLASHPROT,
+ },
+ {
+ .compatible = "arm,realview-pb1176-syscon",
+ .data = (void *)REALVIEW_FLASHPROT,
+ },
+ {
+ .compatible = "arm,realview-pb11mp-syscon",
+ .data = (void *)REALVIEW_FLASHPROT,
+ },
+ {
+ .compatible = "arm,realview-pba8-syscon",
+ .data = (void *)REALVIEW_FLASHPROT,
+ },
+ {
+ .compatible = "arm,realview-pbx-syscon",
+ .data = (void *)REALVIEW_FLASHPROT,
+ },
+ {},
+};
+
+/*
+ * Flash protection handling for the Integrator/AP
+ */
+#define INTEGRATOR_SC_CTRLS_OFFSET 0x08
+#define INTEGRATOR_SC_CTRLC_OFFSET 0x0C
+#define INTEGRATOR_SC_CTRL_FLVPPEN BIT(1)
+#define INTEGRATOR_SC_CTRL_FLWP BIT(2)
+
+#define INTEGRATOR_EBI_CSR1_OFFSET 0x04
+/* The manual says bit 2, the code says bit 3, trust the code */
+#define INTEGRATOR_EBI_WRITE_ENABLE BIT(3)
+#define INTEGRATOR_EBI_LOCK_OFFSET 0x20
+#define INTEGRATOR_EBI_LOCK_VAL 0xA05F
+
+static const struct of_device_id ebi_match[] = {
+ { .compatible = "arm,external-bus-interface"},
+ { },
+};
+
+static int ap_flash_init(struct platform_device *pdev)
+{
+ struct device_node *ebi;
+ static void __iomem *ebi_base;
+ u32 val;
+ int ret;
+
+ /* Look up the EBI */
+ ebi = of_find_matching_node(NULL, ebi_match);
+ if (!ebi) {
+ return -ENODEV;
+ }
+ ebi_base = of_iomap(ebi, 0);
+ if (!ebi_base)
+ return -ENODEV;
+
+ /* Clear VPP and write protection bits */
+ ret = regmap_write(syscon_regmap,
+ INTEGRATOR_SC_CTRLC_OFFSET,
+ INTEGRATOR_SC_CTRL_FLVPPEN | INTEGRATOR_SC_CTRL_FLWP);
+ if (ret)
+ dev_err(&pdev->dev, "error clearing Integrator VPP/WP\n");
+
+ /* Unlock the EBI */
+ writel(INTEGRATOR_EBI_LOCK_VAL, ebi_base + INTEGRATOR_EBI_LOCK_OFFSET);
+
+ /* Enable write cycles on the EBI, CSR1 (flash) */
+ val = readl(ebi_base + INTEGRATOR_EBI_CSR1_OFFSET);
+ val |= INTEGRATOR_EBI_WRITE_ENABLE;
+ writel(val, ebi_base + INTEGRATOR_EBI_CSR1_OFFSET);
+
+ /* Lock the EBI again */
+ writel(0, ebi_base + INTEGRATOR_EBI_LOCK_OFFSET);
+ iounmap(ebi_base);
+
+ return 0;
+}
+
+static void ap_flash_set_vpp(struct map_info *map, int on)
+{
+ int ret;
+
+ if (on) {
+ ret = regmap_write(syscon_regmap,
+ INTEGRATOR_SC_CTRLS_OFFSET,
+ INTEGRATOR_SC_CTRL_FLVPPEN | INTEGRATOR_SC_CTRL_FLWP);
+ if (ret)
+ pr_err("error enabling AP VPP\n");
+ } else {
+ ret = regmap_write(syscon_regmap,
+ INTEGRATOR_SC_CTRLC_OFFSET,
+ INTEGRATOR_SC_CTRL_FLVPPEN | INTEGRATOR_SC_CTRL_FLWP);
+ if (ret)
+ pr_err("error disabling AP VPP\n");
+ }
+}
+
+/*
+ * Flash protection handling for the Integrator/CP
+ */
+
+#define INTCP_FLASHPROG_OFFSET 0x04
+#define CINTEGRATOR_FLVPPEN BIT(0)
+#define CINTEGRATOR_FLWREN BIT(1)
+#define CINTEGRATOR_FLMASK BIT(0)|BIT(1)
+
+static void cp_flash_set_vpp(struct map_info *map, int on)
+{
+ int ret;
+
+ if (on) {
+ ret = regmap_update_bits(syscon_regmap,
+ INTCP_FLASHPROG_OFFSET,
+ CINTEGRATOR_FLMASK,
+ CINTEGRATOR_FLVPPEN | CINTEGRATOR_FLWREN);
+ if (ret)
+ pr_err("error setting CP VPP\n");
+ } else {
+ ret = regmap_update_bits(syscon_regmap,
+ INTCP_FLASHPROG_OFFSET,
+ CINTEGRATOR_FLMASK,
+ 0);
+ if (ret)
+ pr_err("error setting CP VPP\n");
+ }
+}
+
+/*
+ * Flash protection handling for the Versatiles and RealViews
+ */
+
+#define VERSATILE_SYS_FLASH_OFFSET 0x4C
+
+static void versatile_flash_set_vpp(struct map_info *map, int on)
+{
+ int ret;
+
+ ret = regmap_update_bits(syscon_regmap, VERSATILE_SYS_FLASH_OFFSET,
+ 0x01, !!on);
+ if (ret)
+ pr_err("error setting Versatile VPP\n");
+}
+
+int of_flash_probe_versatile(struct platform_device *pdev,
+ struct device_node *np,
+ struct map_info *map)
+{
+ struct device_node *sysnp;
+ const struct of_device_id *devid;
+ struct regmap *rmap;
+ static enum versatile_flashprot versatile_flashprot;
+ int ret;
+
+ /* Not all flash chips use this protection line */
+ if (!of_device_is_compatible(np, "arm,versatile-flash"))
+ return 0;
+
+ /* For first chip probed, look up the syscon regmap */
+ if (!syscon_regmap) {
+ sysnp = of_find_matching_node_and_match(NULL,
+ syscon_match,
+ &devid);
+ if (!sysnp)
+ return -ENODEV;
+
+ versatile_flashprot = (enum versatile_flashprot)devid->data;
+ rmap = syscon_node_to_regmap(sysnp);
+ if (IS_ERR(rmap))
+ return PTR_ERR(rmap);
+
+ syscon_regmap = rmap;
+ }
+
+ switch (versatile_flashprot) {
+ case INTEGRATOR_AP_FLASHPROT:
+ ret = ap_flash_init(pdev);
+ if (ret)
+ return ret;
+ map->set_vpp = ap_flash_set_vpp;
+ dev_info(&pdev->dev, "Integrator/AP flash protection\n");
+ break;
+ case INTEGRATOR_CP_FLASHPROT:
+ map->set_vpp = cp_flash_set_vpp;
+ dev_info(&pdev->dev, "Integrator/CP flash protection\n");
+ break;
+ case VERSATILE_FLASHPROT:
+ case REALVIEW_FLASHPROT:
+ map->set_vpp = versatile_flash_set_vpp;
+ dev_info(&pdev->dev, "versatile/realview flash protection\n");
+ break;
+ default:
+ dev_info(&pdev->dev, "device marked as Versatile flash "
+ "but no system controller was found\n");
+ break;
+ }
+
+ return 0;
+}
+EXPORT_SYMBOL_GPL(of_flash_probe_versatile);
diff --git a/drivers/mtd/maps/physmap_of_versatile.h b/drivers/mtd/maps/physmap_of_versatile.h
new file mode 100644
index 000000000000..5b86f6dc6b3d
--- /dev/null
+++ b/drivers/mtd/maps/physmap_of_versatile.h
@@ -0,0 +1,16 @@
+#include <linux/of.h>
+#include <linux/mtd/map.h>
+
+#ifdef CONFIG_MTD_PHYSMAP_OF_VERSATILE
+int of_flash_probe_versatile(struct platform_device *pdev,
+ struct device_node *np,
+ struct map_info *map);
+#else
+static inline
+int of_flash_probe_versatile(struct platform_device *pdev,
+ struct device_node *np,
+ struct map_info *map)
+{
+ return 0;
+}
+#endif
diff --git a/drivers/mtd/maps/pxa2xx-flash.c b/drivers/mtd/maps/pxa2xx-flash.c
index 7497090e9900..2cde28ed95c9 100644
--- a/drivers/mtd/maps/pxa2xx-flash.c
+++ b/drivers/mtd/maps/pxa2xx-flash.c
@@ -71,8 +71,8 @@ static int pxa2xx_flash_probe(struct platform_device *pdev)
info->map.name);
return -ENOMEM;
}
- info->map.cached = memremap(info->map.phys, info->map.size,
- MEMREMAP_WB);
+ info->map.cached =
+ ioremap_cached(info->map.phys, info->map.size);
if (!info->map.cached)
printk(KERN_WARNING "Failed to ioremap cached %s\n",
info->map.name);
@@ -111,7 +111,7 @@ static int pxa2xx_flash_remove(struct platform_device *dev)
map_destroy(info->mtd);
iounmap(info->map.virt);
if (info->map.cached)
- memunmap(info->map.cached);
+ iounmap(info->map.cached);
kfree(info);
return 0;
}
diff --git a/drivers/mtd/maps/uclinux.c b/drivers/mtd/maps/uclinux.c
index c1af83db5202..00a8190797ec 100644
--- a/drivers/mtd/maps/uclinux.c
+++ b/drivers/mtd/maps/uclinux.c
@@ -4,11 +4,13 @@
* uclinux.c -- generic memory mapped MTD driver for uclinux
*
* (C) Copyright 2002, Greg Ungerer (gerg@snapgear.com)
+ *
+ * License: GPL
*/
/****************************************************************************/
-#include <linux/module.h>
+#include <linux/moduleparam.h>
#include <linux/types.h>
#include <linux/init.h>
#include <linux/kernel.h>
@@ -117,27 +119,6 @@ static int __init uclinux_mtd_init(void)
return(0);
}
-
-/****************************************************************************/
-
-static void __exit uclinux_mtd_cleanup(void)
-{
- if (uclinux_ram_mtdinfo) {
- mtd_device_unregister(uclinux_ram_mtdinfo);
- map_destroy(uclinux_ram_mtdinfo);
- uclinux_ram_mtdinfo = NULL;
- }
- if (uclinux_ram_map.virt)
- uclinux_ram_map.virt = 0;
-}
-
-/****************************************************************************/
-
-module_init(uclinux_mtd_init);
-module_exit(uclinux_mtd_cleanup);
-
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Greg Ungerer <gerg@snapgear.com>");
-MODULE_DESCRIPTION("Generic MTD for uClinux");
+device_initcall(uclinux_mtd_init);
/****************************************************************************/
diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
index f4701182b558..74ae24364a8d 100644
--- a/drivers/mtd/mtd_blkdevs.c
+++ b/drivers/mtd/mtd_blkdevs.c
@@ -409,7 +409,7 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
goto error3;
if (tr->flush)
- blk_queue_flush(new->rq, REQ_FLUSH);
+ blk_queue_write_cache(new->rq, true, false);
new->rq->queuedata = new;
blk_queue_logical_block_size(new->rq, tr->blksize);
diff --git a/drivers/mtd/mtdchar.c b/drivers/mtd/mtdchar.c
index 6d19835b80a9..2a47a3f0e730 100644
--- a/drivers/mtd/mtdchar.c
+++ b/drivers/mtd/mtdchar.c
@@ -465,35 +465,108 @@ static int mtdchar_readoob(struct file *file, struct mtd_info *mtd,
}
/*
- * Copies (and truncates, if necessary) data from the larger struct,
- * nand_ecclayout, to the smaller, deprecated layout struct,
- * nand_ecclayout_user. This is necessary only to support the deprecated
- * API ioctl ECCGETLAYOUT while allowing all new functionality to use
- * nand_ecclayout flexibly (i.e. the struct may change size in new
- * releases without requiring major rewrites).
+ * Copies (and truncates, if necessary) OOB layout information to the
+ * deprecated layout struct, nand_ecclayout_user. This is necessary only to
+ * support the deprecated API ioctl ECCGETLAYOUT while allowing all new
+ * functionality to use mtd_ooblayout_ops flexibly (i.e. mtd_ooblayout_ops
+ * can describe any kind of OOB layout with almost zero overhead from a
+ * memory usage point of view).
*/
-static int shrink_ecclayout(const struct nand_ecclayout *from,
- struct nand_ecclayout_user *to)
+static int shrink_ecclayout(struct mtd_info *mtd,
+ struct nand_ecclayout_user *to)
{
- int i;
+ struct mtd_oob_region oobregion;
+ int i, section = 0, ret;
- if (!from || !to)
+ if (!mtd || !to)
return -EINVAL;
memset(to, 0, sizeof(*to));
- to->eccbytes = min((int)from->eccbytes, MTD_MAX_ECCPOS_ENTRIES);
- for (i = 0; i < to->eccbytes; i++)
- to->eccpos[i] = from->eccpos[i];
+ to->eccbytes = 0;
+ for (i = 0; i < MTD_MAX_ECCPOS_ENTRIES;) {
+ u32 eccpos;
+
+ ret = mtd_ooblayout_ecc(mtd, section, &oobregion);
+ if (ret < 0) {
+ if (ret != -ERANGE)
+ return ret;
+
+ break;
+ }
+
+ eccpos = oobregion.offset;
+ for (; i < MTD_MAX_ECCPOS_ENTRIES &&
+ eccpos < oobregion.offset + oobregion.length; i++) {
+ to->eccpos[i] = eccpos++;
+ to->eccbytes++;
+ }
+ }
for (i = 0; i < MTD_MAX_OOBFREE_ENTRIES; i++) {
- if (from->oobfree[i].length == 0 &&
- from->oobfree[i].offset == 0)
+ ret = mtd_ooblayout_free(mtd, i, &oobregion);
+ if (ret < 0) {
+ if (ret != -ERANGE)
+ return ret;
+
+ break;
+ }
+
+ to->oobfree[i].offset = oobregion.offset;
+ to->oobfree[i].length = oobregion.length;
+ to->oobavail += to->oobfree[i].length;
+ }
+
+ return 0;
+}
+
+static int get_oobinfo(struct mtd_info *mtd, struct nand_oobinfo *to)
+{
+ struct mtd_oob_region oobregion;
+ int i, section = 0, ret;
+
+ if (!mtd || !to)
+ return -EINVAL;
+
+ memset(to, 0, sizeof(*to));
+
+ to->eccbytes = 0;
+ for (i = 0; i < ARRAY_SIZE(to->eccpos);) {
+ u32 eccpos;
+
+ ret = mtd_ooblayout_ecc(mtd, section, &oobregion);
+ if (ret < 0) {
+ if (ret != -ERANGE)
+ return ret;
+
break;
- to->oobavail += from->oobfree[i].length;
- to->oobfree[i] = from->oobfree[i];
+ }
+
+ if (oobregion.length + i > ARRAY_SIZE(to->eccpos))
+ return -EINVAL;
+
+ eccpos = oobregion.offset;
+ for (; eccpos < oobregion.offset + oobregion.length; i++) {
+ to->eccpos[i] = eccpos++;
+ to->eccbytes++;
+ }
}
+ for (i = 0; i < 8; i++) {
+ ret = mtd_ooblayout_free(mtd, i, &oobregion);
+ if (ret < 0) {
+ if (ret != -ERANGE)
+ return ret;
+
+ break;
+ }
+
+ to->oobfree[i][0] = oobregion.offset;
+ to->oobfree[i][1] = oobregion.length;
+ }
+
+ to->useecc = MTD_NANDECC_AUTOPLACE;
+
return 0;
}
@@ -815,16 +888,12 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
{
struct nand_oobinfo oi;
- if (!mtd->ecclayout)
+ if (!mtd->ooblayout)
return -EOPNOTSUPP;
- if (mtd->ecclayout->eccbytes > ARRAY_SIZE(oi.eccpos))
- return -EINVAL;
- oi.useecc = MTD_NANDECC_AUTOPLACE;
- memcpy(&oi.eccpos, mtd->ecclayout->eccpos, sizeof(oi.eccpos));
- memcpy(&oi.oobfree, mtd->ecclayout->oobfree,
- sizeof(oi.oobfree));
- oi.eccbytes = mtd->ecclayout->eccbytes;
+ ret = get_oobinfo(mtd, &oi);
+ if (ret)
+ return ret;
if (copy_to_user(argp, &oi, sizeof(struct nand_oobinfo)))
return -EFAULT;
@@ -913,14 +982,14 @@ static int mtdchar_ioctl(struct file *file, u_int cmd, u_long arg)
{
struct nand_ecclayout_user *usrlay;
- if (!mtd->ecclayout)
+ if (!mtd->ooblayout)
return -EOPNOTSUPP;
usrlay = kmalloc(sizeof(*usrlay), GFP_KERNEL);
if (!usrlay)
return -ENOMEM;
- shrink_ecclayout(mtd->ecclayout, usrlay);
+ shrink_ecclayout(mtd, usrlay);
if (copy_to_user(argp, usrlay, sizeof(*usrlay)))
ret = -EFAULT;
diff --git a/drivers/mtd/mtdconcat.c b/drivers/mtd/mtdconcat.c
index 239a8c806b67..d573606b91c2 100644
--- a/drivers/mtd/mtdconcat.c
+++ b/drivers/mtd/mtdconcat.c
@@ -777,7 +777,7 @@ struct mtd_info *mtd_concat_create(struct mtd_info *subdev[], /* subdevices to c
}
- concat->mtd.ecclayout = subdev[0]->ecclayout;
+ mtd_set_ooblayout(&concat->mtd, subdev[0]->ooblayout);
concat->num_subdev = num_devs;
concat->mtd.name = name;
diff --git a/drivers/mtd/mtdcore.c b/drivers/mtd/mtdcore.c
index 309625130b21..e3936b847c6b 100644
--- a/drivers/mtd/mtdcore.c
+++ b/drivers/mtd/mtdcore.c
@@ -40,6 +40,7 @@
#include <linux/slab.h>
#include <linux/reboot.h>
#include <linux/kconfig.h>
+#include <linux/leds.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
@@ -862,6 +863,7 @@ int mtd_erase(struct mtd_info *mtd, struct erase_info *instr)
mtd_erase_callback(instr);
return 0;
}
+ ledtrig_mtd_activity();
return mtd->_erase(mtd, instr);
}
EXPORT_SYMBOL_GPL(mtd_erase);
@@ -925,6 +927,7 @@ int mtd_read(struct mtd_info *mtd, loff_t from, size_t len, size_t *retlen,
if (!len)
return 0;
+ ledtrig_mtd_activity();
/*
* In the absence of an error, drivers return a non-negative integer
* representing the maximum number of bitflips that were corrected on
@@ -949,6 +952,7 @@ int mtd_write(struct mtd_info *mtd, loff_t to, size_t len, size_t *retlen,
return -EROFS;
if (!len)
return 0;
+ ledtrig_mtd_activity();
return mtd->_write(mtd, to, len, retlen, buf);
}
EXPORT_SYMBOL_GPL(mtd_write);
@@ -982,6 +986,8 @@ int mtd_read_oob(struct mtd_info *mtd, loff_t from, struct mtd_oob_ops *ops)
ops->retlen = ops->oobretlen = 0;
if (!mtd->_read_oob)
return -EOPNOTSUPP;
+
+ ledtrig_mtd_activity();
/*
* In cases where ops->datbuf != NULL, mtd->_read_oob() has semantics
* similar to mtd->_read(), returning a non-negative integer
@@ -997,6 +1003,379 @@ int mtd_read_oob(struct mtd_info *mtd, loff_t from, struct mtd_oob_ops *ops)
}
EXPORT_SYMBOL_GPL(mtd_read_oob);
+int mtd_write_oob(struct mtd_info *mtd, loff_t to,
+ struct mtd_oob_ops *ops)
+{
+ ops->retlen = ops->oobretlen = 0;
+ if (!mtd->_write_oob)
+ return -EOPNOTSUPP;
+ if (!(mtd->flags & MTD_WRITEABLE))
+ return -EROFS;
+ ledtrig_mtd_activity();
+ return mtd->_write_oob(mtd, to, ops);
+}
+EXPORT_SYMBOL_GPL(mtd_write_oob);
+
+/**
+ * mtd_ooblayout_ecc - Get the OOB region definition of a specific ECC section
+ * @mtd: MTD device structure
+ * @section: ECC section. Depending on the layout you may have all the ECC
+ * bytes stored in a single contiguous section, or one section
+ *	     per ECC chunk (and sometimes several sections for a single
+ *	     ECC chunk)
+ * @oobecc: OOB region struct filled with the appropriate ECC position
+ * information
+ *
+ * This function returns ECC section information in the OOB area. If you want
+ * to get all the ECC bytes information, then you should call
+ * mtd_ooblayout_ecc(mtd, section++, oobecc) until it returns -ERANGE.
+ *
+ * Returns zero on success, a negative error code otherwise.
+ */
+int mtd_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobecc)
+{
+ memset(oobecc, 0, sizeof(*oobecc));
+
+ if (!mtd || section < 0)
+ return -EINVAL;
+
+ if (!mtd->ooblayout || !mtd->ooblayout->ecc)
+ return -ENOTSUPP;
+
+ return mtd->ooblayout->ecc(mtd, section, oobecc);
+}
+EXPORT_SYMBOL_GPL(mtd_ooblayout_ecc);
+
+/**
+ * mtd_ooblayout_free - Get the OOB region definition of a specific free
+ * section
+ * @mtd: MTD device structure
+ * @section: Free section you are interested in. Depending on the layout
+ * you may have all the free bytes stored in a single contiguous
+ * section, or one section per ECC chunk plus an extra section
+ * for the remaining bytes (or other funky layout).
+ * @oobfree: OOB region struct filled with the appropriate free position
+ * information
+ *
+ * This function returns the free byte positions in the OOB area. If you want
+ * to get all the free bytes information, then you should call
+ * mtd_ooblayout_free(mtd, section++, oobfree) until it returns -ERANGE.
+ *
+ * Returns zero on success, a negative error code otherwise.
+ */
+int mtd_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobfree)
+{
+ memset(oobfree, 0, sizeof(*oobfree));
+
+ if (!mtd || section < 0)
+ return -EINVAL;
+
+ if (!mtd->ooblayout || !mtd->ooblayout->free)
+ return -ENOTSUPP;
+
+ return mtd->ooblayout->free(mtd, section, oobfree);
+}
+EXPORT_SYMBOL_GPL(mtd_ooblayout_free);
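/*
 * Illustrative usage sketch (editor's example, not part of the diff above):
 * walking every ECC and free OOB region exactly as the kernel-doc describes,
 * i.e. bumping the section index until the iterator stops returning 0.
 * The function name example_dump_ooblayout is made up; it assumes
 * <linux/mtd/mtd.h> and <linux/printk.h> are available.
 */
static void example_dump_ooblayout(struct mtd_info *mtd)
{
	struct mtd_oob_region region;
	int section;

	for (section = 0; !mtd_ooblayout_ecc(mtd, section, &region); section++)
		pr_info("ECC section %d: offset %u, length %u\n",
			section, region.offset, region.length);

	for (section = 0; !mtd_ooblayout_free(mtd, section, &region); section++)
		pr_info("free section %d: offset %u, length %u\n",
			section, region.offset, region.length);
}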
+
+/**
+ * mtd_ooblayout_find_region - Find the region attached to a specific byte
+ * @mtd: mtd info structure
+ * @byte: the byte we are searching for
+ * @sectionp: pointer where the section id will be stored
+ * @oobregion: used to retrieve the ECC position
+ * @iter: iterator function. Should be either mtd_ooblayout_free or
+ * mtd_ooblayout_ecc depending on the region type you're searching for
+ *
+ * This function returns the section id and oobregion information of a
+ * specific byte. For example, say you want to know where the 4th ECC byte is
+ * stored, you'll use:
+ *
+ * mtd_ooblayout_find_region(mtd, 3, &section, &oobregion, mtd_ooblayout_ecc);
+ *
+ * Returns zero on success, a negative error code otherwise.
+ */
+static int mtd_ooblayout_find_region(struct mtd_info *mtd, int byte,
+ int *sectionp, struct mtd_oob_region *oobregion,
+ int (*iter)(struct mtd_info *,
+ int section,
+ struct mtd_oob_region *oobregion))
+{
+ int pos = 0, ret, section = 0;
+
+ memset(oobregion, 0, sizeof(*oobregion));
+
+ while (1) {
+ ret = iter(mtd, section, oobregion);
+ if (ret)
+ return ret;
+
+ if (pos + oobregion->length > byte)
+ break;
+
+ pos += oobregion->length;
+ section++;
+ }
+
+ /*
+	 * Adjust the region info so that it starts at the requested
+	 * 'byte' position.
+ */
+ oobregion->offset += byte - pos;
+ oobregion->length -= byte - pos;
+ *sectionp = section;
+
+ return 0;
+}
+
+/**
+ * mtd_ooblayout_find_eccregion - Find the ECC region attached to a specific
+ * ECC byte
+ * @mtd: mtd info structure
+ * @eccbyte: the byte we are searching for
+ * @sectionp: pointer where the section id will be stored
+ * @oobregion: OOB region information
+ *
+ * Works like mtd_ooblayout_find_region() except it searches for a specific ECC
+ * byte.
+ *
+ * Returns zero on success, a negative error code otherwise.
+ */
+int mtd_ooblayout_find_eccregion(struct mtd_info *mtd, int eccbyte,
+ int *section,
+ struct mtd_oob_region *oobregion)
+{
+ return mtd_ooblayout_find_region(mtd, eccbyte, section, oobregion,
+ mtd_ooblayout_ecc);
+}
+EXPORT_SYMBOL_GPL(mtd_ooblayout_find_eccregion);
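/*
 * Illustrative usage sketch (editor's example, not part of the diff above):
 * the kernel-doc's "where is the 4th ECC byte stored?" question answered
 * with mtd_ooblayout_find_eccregion(). The helper name
 * example_eccbyte_offset is made up for illustration.
 */
static int example_eccbyte_offset(struct mtd_info *mtd, int eccbyte)
{
	struct mtd_oob_region region;
	int section, ret;

	ret = mtd_ooblayout_find_eccregion(mtd, eccbyte, &section, &region);
	if (ret)
		return ret;

	/* The region was adjusted to start at the requested ECC byte. */
	return region.offset;
}

/* e.g. example_eccbyte_offset(mtd, 3) gives the OOB offset of the 4th ECC byte. */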
+
+/**
+ * mtd_ooblayout_get_bytes - Extract OOB bytes from the oob buffer
+ * @mtd: mtd info structure
+ * @buf: destination buffer to store OOB bytes
+ * @oobbuf: OOB buffer
+ * @start: first byte to retrieve
+ * @nbytes: number of bytes to retrieve
+ * @iter: section iterator
+ *
+ * Extract bytes attached to a specific category (ECC or free)
+ * from the OOB buffer and copy them into buf.
+ *
+ * Returns zero on success, a negative error code otherwise.
+ */
+static int mtd_ooblayout_get_bytes(struct mtd_info *mtd, u8 *buf,
+ const u8 *oobbuf, int start, int nbytes,
+ int (*iter)(struct mtd_info *,
+ int section,
+ struct mtd_oob_region *oobregion))
+{
+ struct mtd_oob_region oobregion = { };
+ int section = 0, ret;
+
+ ret = mtd_ooblayout_find_region(mtd, start, &section,
+ &oobregion, iter);
+
+ while (!ret) {
+ int cnt;
+
+ cnt = oobregion.length > nbytes ? nbytes : oobregion.length;
+ memcpy(buf, oobbuf + oobregion.offset, cnt);
+ buf += cnt;
+ nbytes -= cnt;
+
+ if (!nbytes)
+ break;
+
+ ret = iter(mtd, ++section, &oobregion);
+ }
+
+ return ret;
+}
+
+/**
+ * mtd_ooblayout_set_bytes - put OOB bytes into the oob buffer
+ * @mtd: mtd info structure
+ * @buf: source buffer to get OOB bytes from
+ * @oobbuf: OOB buffer
+ * @start: first OOB byte to set
+ * @nbytes: number of OOB bytes to set
+ * @iter: section iterator
+ *
+ * Fill the OOB buffer with data provided in buf. The category (ECC or free)
+ * is selected by passing the appropriate iterator.
+ *
+ * Returns zero on success, a negative error code otherwise.
+ */
+static int mtd_ooblayout_set_bytes(struct mtd_info *mtd, const u8 *buf,
+ u8 *oobbuf, int start, int nbytes,
+ int (*iter)(struct mtd_info *,
+ int section,
+ struct mtd_oob_region *oobregion))
+{
+ struct mtd_oob_region oobregion = { };
+ int section = 0, ret;
+
+ ret = mtd_ooblayout_find_region(mtd, start, &section,
+ &oobregion, iter);
+
+ while (!ret) {
+ int cnt;
+
+ cnt = oobregion.length > nbytes ? nbytes : oobregion.length;
+ memcpy(oobbuf + oobregion.offset, buf, cnt);
+ buf += cnt;
+ nbytes -= cnt;
+
+ if (!nbytes)
+ break;
+
+ ret = iter(mtd, ++section, &oobregion);
+ }
+
+ return ret;
+}
+
+/**
+ * mtd_ooblayout_count_bytes - count the number of bytes in an OOB category
+ * @mtd: mtd info structure
+ * @iter: category iterator
+ *
+ * Count the number of bytes in a given category.
+ *
+ * Returns a positive value on success, a negative error code otherwise.
+ */
+static int mtd_ooblayout_count_bytes(struct mtd_info *mtd,
+ int (*iter)(struct mtd_info *,
+ int section,
+ struct mtd_oob_region *oobregion))
+{
+ struct mtd_oob_region oobregion = { };
+ int section = 0, ret, nbytes = 0;
+
+ while (1) {
+ ret = iter(mtd, section++, &oobregion);
+ if (ret) {
+ if (ret == -ERANGE)
+ ret = nbytes;
+ break;
+ }
+
+ nbytes += oobregion.length;
+ }
+
+ return ret;
+}
+
+/**
+ * mtd_ooblayout_get_eccbytes - extract ECC bytes from the oob buffer
+ * @mtd: mtd info structure
+ * @eccbuf: destination buffer to store ECC bytes
+ * @oobbuf: OOB buffer
+ * @start: first ECC byte to retrieve
+ * @nbytes: number of ECC bytes to retrieve
+ *
+ * Works like mtd_ooblayout_get_bytes(), except it acts on ECC bytes.
+ *
+ * Returns zero on success, a negative error code otherwise.
+ */
+int mtd_ooblayout_get_eccbytes(struct mtd_info *mtd, u8 *eccbuf,
+ const u8 *oobbuf, int start, int nbytes)
+{
+ return mtd_ooblayout_get_bytes(mtd, eccbuf, oobbuf, start, nbytes,
+ mtd_ooblayout_ecc);
+}
+EXPORT_SYMBOL_GPL(mtd_ooblayout_get_eccbytes);
+
+/**
+ * mtd_ooblayout_set_eccbytes - set ECC bytes into the oob buffer
+ * @mtd: mtd info structure
+ * @eccbuf: source buffer to get ECC bytes from
+ * @oobbuf: OOB buffer
+ * @start: first ECC byte to set
+ * @nbytes: number of ECC bytes to set
+ *
+ * Works like mtd_ooblayout_set_bytes(), except it acts on ECC bytes.
+ *
+ * Returns zero on success, a negative error code otherwise.
+ */
+int mtd_ooblayout_set_eccbytes(struct mtd_info *mtd, const u8 *eccbuf,
+ u8 *oobbuf, int start, int nbytes)
+{
+ return mtd_ooblayout_set_bytes(mtd, eccbuf, oobbuf, start, nbytes,
+ mtd_ooblayout_ecc);
+}
+EXPORT_SYMBOL_GPL(mtd_ooblayout_set_eccbytes);
+
+/**
+ * mtd_ooblayout_get_databytes - extract data bytes from the oob buffer
+ * @mtd: mtd info structure
+ * @databuf: destination buffer to store data (free) bytes
+ * @oobbuf: OOB buffer
+ * @start: first data byte to retrieve
+ * @nbytes: number of data bytes to retrieve
+ *
+ * Works like mtd_ooblayout_get_bytes(), except it acts on free bytes.
+ *
+ * Returns zero on success, a negative error code otherwise.
+ */
+int mtd_ooblayout_get_databytes(struct mtd_info *mtd, u8 *databuf,
+ const u8 *oobbuf, int start, int nbytes)
+{
+ return mtd_ooblayout_get_bytes(mtd, databuf, oobbuf, start, nbytes,
+ mtd_ooblayout_free);
+}
+EXPORT_SYMBOL_GPL(mtd_ooblayout_get_databytes);
+
+/**
+ * mtd_ooblayout_set_databytes - set data bytes into the oob buffer
+ * @mtd: mtd info structure
+ * @databuf: source buffer to get data bytes from
+ * @oobbuf: OOB buffer
+ * @start: first data byte to set
+ * @nbytes: number of data bytes to set
+ *
+ * Works like mtd_ooblayout_get_bytes(), except it acts on free bytes.
+ *
+ * Returns zero on success, a negative error code otherwise.
+ */
+int mtd_ooblayout_set_databytes(struct mtd_info *mtd, const u8 *databuf,
+ u8 *oobbuf, int start, int nbytes)
+{
+ return mtd_ooblayout_set_bytes(mtd, databuf, oobbuf, start, nbytes,
+ mtd_ooblayout_free);
+}
+EXPORT_SYMBOL_GPL(mtd_ooblayout_set_databytes);
+
+/**
+ * mtd_ooblayout_count_freebytes - count the number of free bytes in OOB
+ * @mtd: mtd info structure
+ *
+ * Works like mtd_ooblayout_count_bytes(), except it counts free bytes.
+ *
+ * Returns the number of free bytes on success, a negative error code
+ * otherwise.
+ */
+int mtd_ooblayout_count_freebytes(struct mtd_info *mtd)
+{
+ return mtd_ooblayout_count_bytes(mtd, mtd_ooblayout_free);
+}
+EXPORT_SYMBOL_GPL(mtd_ooblayout_count_freebytes);
+
+/**
+ * mtd_ooblayout_count_eccbytes - count the number of ECC bytes in OOB
+ * @mtd: mtd info structure
+ *
+ * Works like mtd_ooblayout_count_bytes(), except it counts ECC bytes.
+ *
+ * Returns the number of ECC bytes on success, a negative error code
+ * otherwise.
+ */
+int mtd_ooblayout_count_eccbytes(struct mtd_info *mtd)
+{
+ return mtd_ooblayout_count_bytes(mtd, mtd_ooblayout_ecc);
+}
+EXPORT_SYMBOL_GPL(mtd_ooblayout_count_eccbytes);
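/*
 * Illustrative usage sketch (editor's example, not part of the diff above):
 * extracting every ECC byte from a raw OOB buffer with the helpers added
 * here. The function name example_extract_ecc and the -ENOSPC policy are
 * editorial choices, not taken from this patch.
 */
static int example_extract_ecc(struct mtd_info *mtd, const u8 *oobbuf,
			       u8 *eccbuf, int eccbuf_len)
{
	int nbytes = mtd_ooblayout_count_eccbytes(mtd);

	if (nbytes < 0)
		return nbytes;
	if (nbytes > eccbuf_len)
		return -ENOSPC;

	/* Copy all ECC bytes, starting from ECC byte 0. */
	return mtd_ooblayout_get_eccbytes(mtd, eccbuf, oobbuf, 0, nbytes);
}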
+
/*
* Method to access the protection register area, present in some flash
* devices. The user data is one time programmable but the factory data is read
diff --git a/drivers/mtd/mtdpart.c b/drivers/mtd/mtdpart.c
index 08de4b2cf0f5..1f13e32556f8 100644
--- a/drivers/mtd/mtdpart.c
+++ b/drivers/mtd/mtdpart.c
@@ -317,6 +317,27 @@ static int part_block_markbad(struct mtd_info *mtd, loff_t ofs)
return res;
}
+static int part_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct mtd_part *part = mtd_to_part(mtd);
+
+ return mtd_ooblayout_ecc(part->master, section, oobregion);
+}
+
+static int part_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct mtd_part *part = mtd_to_part(mtd);
+
+ return mtd_ooblayout_free(part->master, section, oobregion);
+}
+
+static const struct mtd_ooblayout_ops part_ooblayout_ops = {
+ .ecc = part_ooblayout_ecc,
+ .free = part_ooblayout_free,
+};
+
static inline void free_partition(struct mtd_part *p)
{
kfree(p->mtd.name);
@@ -533,7 +554,7 @@ static struct mtd_part *allocate_partition(struct mtd_info *master,
part->name);
}
- slave->mtd.ecclayout = master->ecclayout;
+ mtd_set_ooblayout(&slave->mtd, &part_ooblayout_ops);
slave->mtd.ecc_step_size = master->ecc_step_size;
slave->mtd.ecc_strength = master->ecc_strength;
slave->mtd.bitflip_threshold = master->bitflip_threshold;
diff --git a/drivers/mtd/nand/ams-delta.c b/drivers/mtd/nand/ams-delta.c
index 68b58c85789c..78e12cc8bac2 100644
--- a/drivers/mtd/nand/ams-delta.c
+++ b/drivers/mtd/nand/ams-delta.c
@@ -224,6 +224,7 @@ static int ams_delta_init(struct platform_device *pdev)
/* 25 us command delay time */
this->chip_delay = 30;
this->ecc.mode = NAND_ECC_SOFT;
+ this->ecc.algo = NAND_ECC_HAMMING;
platform_set_drvdata(pdev, io_base);
diff --git a/drivers/mtd/nand/atmel_nand.c b/drivers/mtd/nand/atmel_nand.c
index 20cbaabb2959..68b9160108c9 100644
--- a/drivers/mtd/nand/atmel_nand.c
+++ b/drivers/mtd/nand/atmel_nand.c
@@ -36,7 +36,6 @@
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/of_gpio.h>
-#include <linux/of_mtd.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/nand.h>
#include <linux/mtd/partitions.h>
@@ -68,34 +67,44 @@ struct atmel_nand_caps {
uint8_t pmecc_max_correction;
};
-struct atmel_nand_nfc_caps {
- uint32_t rb_mask;
-};
-
-/* oob layout for large page size
+/*
+ * oob layout for large page size
* bad block info is on bytes 0 and 1
 * the bytes have to be consecutive to avoid
* several NAND_CMD_RNDOUT during read
- */
-static struct nand_ecclayout atmel_oobinfo_large = {
- .eccbytes = 4,
- .eccpos = {60, 61, 62, 63},
- .oobfree = {
- {2, 58}
- },
-};
-
-/* oob layout for small page size
+ *
+ * oob layout for small page size
* bad block info is on bytes 4 and 5
* the bytes have to be consecutives to avoid
* several NAND_CMD_RNDOUT during read
*/
-static struct nand_ecclayout atmel_oobinfo_small = {
- .eccbytes = 4,
- .eccpos = {0, 1, 2, 3},
- .oobfree = {
- {6, 10}
- },
+static int atmel_ooblayout_ecc_sp(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section)
+ return -ERANGE;
+
+ oobregion->length = 4;
+ oobregion->offset = 0;
+
+ return 0;
+}
+
+static int atmel_ooblayout_free_sp(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section)
+ return -ERANGE;
+
+ oobregion->offset = 6;
+ oobregion->length = mtd->oobsize - oobregion->offset;
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops atmel_ooblayout_sp_ops = {
+ .ecc = atmel_ooblayout_ecc_sp,
+ .free = atmel_ooblayout_free_sp,
};
struct atmel_nfc {
@@ -116,7 +125,6 @@ struct atmel_nfc {
/* Point to the sram bank which include readed data via NFC */
void *data_in_sram;
bool will_write_sram;
- const struct atmel_nand_nfc_caps *caps;
};
static struct atmel_nfc nand_nfc;
@@ -163,8 +171,6 @@ struct atmel_nand_host {
int *pmecc_delta;
};
-static struct nand_ecclayout atmel_pmecc_oobinfo;
-
/*
* Enable NAND.
*/
@@ -434,14 +440,13 @@ err_buf:
static void atmel_read_buf(struct mtd_info *mtd, u8 *buf, int len)
{
struct nand_chip *chip = mtd_to_nand(mtd);
- struct atmel_nand_host *host = nand_get_controller_data(chip);
if (use_dma && len > mtd->oobsize)
/* only use DMA for bigger than oob size: better performances */
if (atmel_nand_dma_op(mtd, buf, len, 1) == 0)
return;
- if (host->board.bus_width_16)
+ if (chip->options & NAND_BUSWIDTH_16)
atmel_read_buf16(mtd, buf, len);
else
atmel_read_buf8(mtd, buf, len);
@@ -450,14 +455,13 @@ static void atmel_read_buf(struct mtd_info *mtd, u8 *buf, int len)
static void atmel_write_buf(struct mtd_info *mtd, const u8 *buf, int len)
{
struct nand_chip *chip = mtd_to_nand(mtd);
- struct atmel_nand_host *host = nand_get_controller_data(chip);
if (use_dma && len > mtd->oobsize)
/* only use DMA for bigger than oob size: better performances */
if (atmel_nand_dma_op(mtd, (void *)buf, len, 0) == 0)
return;
- if (host->board.bus_width_16)
+ if (chip->options & NAND_BUSWIDTH_16)
atmel_write_buf16(mtd, buf, len);
else
atmel_write_buf8(mtd, buf, len);
@@ -483,22 +487,6 @@ static int pmecc_get_ecc_bytes(int cap, int sector_size)
return (m * cap + 7) / 8;
}
-static void pmecc_config_ecc_layout(struct nand_ecclayout *layout,
- int oobsize, int ecc_len)
-{
- int i;
-
- layout->eccbytes = ecc_len;
-
- /* ECC will occupy the last ecc_len bytes continuously */
- for (i = 0; i < ecc_len; i++)
- layout->eccpos[i] = oobsize - ecc_len + i;
-
- layout->oobfree[0].offset = PMECC_OOB_RESERVED_BYTES;
- layout->oobfree[0].length =
- oobsize - ecc_len - layout->oobfree[0].offset;
-}
-
static void __iomem *pmecc_get_alpha_to(struct atmel_nand_host *host)
{
int table_size;
@@ -836,13 +824,16 @@ static void pmecc_correct_data(struct mtd_info *mtd, uint8_t *buf, uint8_t *ecc,
dev_dbg(host->dev, "Bit flip in data area, byte_pos: %d, bit_pos: %d, 0x%02x -> 0x%02x\n",
pos, bit_pos, err_byte, *(buf + byte_pos));
} else {
+ struct mtd_oob_region oobregion;
+
/* Bit flip in OOB area */
tmp = sector_num * nand_chip->ecc.bytes
+ (byte_pos - sector_size);
err_byte = ecc[tmp];
ecc[tmp] ^= (1 << bit_pos);
- pos = tmp + nand_chip->ecc.layout->eccpos[0];
+ mtd_ooblayout_ecc(mtd, 0, &oobregion);
+ pos = tmp + oobregion.offset;
dev_dbg(host->dev, "Bit flip in OOB, oob_byte_pos: %d, bit_pos: %d, 0x%02x -> 0x%02x\n",
pos, bit_pos, err_byte, ecc[tmp]);
}
@@ -863,17 +854,6 @@ static int pmecc_correction(struct mtd_info *mtd, u32 pmecc_stat, uint8_t *buf,
uint8_t *buf_pos;
int max_bitflips = 0;
- /* If can correct bitfilps from erased page, do the normal check */
- if (host->caps->pmecc_correct_erase_page)
- goto normal_check;
-
- for (i = 0; i < nand_chip->ecc.total; i++)
- if (ecc[i] != 0xff)
- goto normal_check;
- /* Erased page, return OK */
- return 0;
-
-normal_check:
for (i = 0; i < nand_chip->ecc.steps; i++) {
err_nbr = 0;
if (pmecc_stat & 0x1) {
@@ -884,16 +864,30 @@ normal_check:
pmecc_get_sigma(mtd);
err_nbr = pmecc_err_location(mtd);
- if (err_nbr == -1) {
+ if (err_nbr >= 0) {
+ pmecc_correct_data(mtd, buf_pos, ecc, i,
+ nand_chip->ecc.bytes,
+ err_nbr);
+ } else if (!host->caps->pmecc_correct_erase_page) {
+ u8 *ecc_pos = ecc + (i * nand_chip->ecc.bytes);
+
+ /* Try to detect erased pages */
+ err_nbr = nand_check_erased_ecc_chunk(buf_pos,
+ host->pmecc_sector_size,
+ ecc_pos,
+ nand_chip->ecc.bytes,
+ NULL, 0,
+ nand_chip->ecc.strength);
+ }
+
+ if (err_nbr < 0) {
dev_err(host->dev, "PMECC: Too many errors\n");
mtd->ecc_stats.failed++;
return -EIO;
- } else {
- pmecc_correct_data(mtd, buf_pos, ecc, i,
- nand_chip->ecc.bytes, err_nbr);
- mtd->ecc_stats.corrected += err_nbr;
- max_bitflips = max_t(int, max_bitflips, err_nbr);
}
+
+ mtd->ecc_stats.corrected += err_nbr;
+ max_bitflips = max_t(int, max_bitflips, err_nbr);
}
pmecc_stat >>= 1;
}
@@ -931,7 +925,6 @@ static int atmel_nand_pmecc_read_page(struct mtd_info *mtd,
struct atmel_nand_host *host = nand_get_controller_data(chip);
int eccsize = chip->ecc.size * chip->ecc.steps;
uint8_t *oob = chip->oob_poi;
- uint32_t *eccpos = chip->ecc.layout->eccpos;
uint32_t stat;
unsigned long end_time;
int bitflips = 0;
@@ -953,7 +946,11 @@ static int atmel_nand_pmecc_read_page(struct mtd_info *mtd,
stat = pmecc_readl_relaxed(host->ecc, ISR);
if (stat != 0) {
- bitflips = pmecc_correction(mtd, stat, buf, &oob[eccpos[0]]);
+ struct mtd_oob_region oobregion;
+
+ mtd_ooblayout_ecc(mtd, 0, &oobregion);
+ bitflips = pmecc_correction(mtd, stat, buf,
+ &oob[oobregion.offset]);
if (bitflips < 0)
/* uncorrectable errors */
return 0;
@@ -967,8 +964,8 @@ static int atmel_nand_pmecc_write_page(struct mtd_info *mtd,
int page)
{
struct atmel_nand_host *host = nand_get_controller_data(chip);
- uint32_t *eccpos = chip->ecc.layout->eccpos;
- int i, j;
+ struct mtd_oob_region oobregion = { };
+ int i, j, section = 0;
unsigned long end_time;
if (!host->nfc || !host->nfc->write_by_sram) {
@@ -987,11 +984,14 @@ static int atmel_nand_pmecc_write_page(struct mtd_info *mtd,
for (i = 0; i < chip->ecc.steps; i++) {
for (j = 0; j < chip->ecc.bytes; j++) {
- int pos;
+ if (!oobregion.length)
+ mtd_ooblayout_ecc(mtd, section, &oobregion);
- pos = i * chip->ecc.bytes + j;
- chip->oob_poi[eccpos[pos]] =
+ chip->oob_poi[oobregion.offset] =
pmecc_readb_ecc_relaxed(host->ecc, i, j);
+ oobregion.length--;
+ oobregion.offset++;
+ section++;
}
}
chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
@@ -1003,8 +1003,9 @@ static void atmel_pmecc_core_init(struct mtd_info *mtd)
{
struct nand_chip *nand_chip = mtd_to_nand(mtd);
struct atmel_nand_host *host = nand_get_controller_data(nand_chip);
+ int eccbytes = mtd_ooblayout_count_eccbytes(mtd);
uint32_t val = 0;
- struct nand_ecclayout *ecc_layout;
+ struct mtd_oob_region oobregion;
pmecc_writel(host->ecc, CTRL, PMECC_CTRL_RST);
pmecc_writel(host->ecc, CTRL, PMECC_CTRL_DISABLE);
@@ -1054,11 +1055,11 @@ static void atmel_pmecc_core_init(struct mtd_info *mtd)
| PMECC_CFG_AUTO_DISABLE);
pmecc_writel(host->ecc, CFG, val);
- ecc_layout = nand_chip->ecc.layout;
pmecc_writel(host->ecc, SAREA, mtd->oobsize - 1);
- pmecc_writel(host->ecc, SADDR, ecc_layout->eccpos[0]);
+ mtd_ooblayout_ecc(mtd, 0, &oobregion);
+ pmecc_writel(host->ecc, SADDR, oobregion.offset);
pmecc_writel(host->ecc, EADDR,
- ecc_layout->eccpos[ecc_layout->eccbytes - 1]);
+ oobregion.offset + eccbytes - 1);
/* See datasheet about PMECC Clock Control Register */
pmecc_writel(host->ecc, CLK, 2);
pmecc_writel(host->ecc, IDR, 0xff);
@@ -1206,6 +1207,7 @@ static int atmel_pmecc_nand_init_params(struct platform_device *pdev,
dev_warn(host->dev,
"Can't get I/O resource regs for PMECC controller, rolling back on software ECC\n");
nand_chip->ecc.mode = NAND_ECC_SOFT;
+ nand_chip->ecc.algo = NAND_ECC_HAMMING;
return 0;
}
@@ -1280,11 +1282,8 @@ static int atmel_pmecc_nand_init_params(struct platform_device *pdev,
err_no = -EINVAL;
goto err;
}
- pmecc_config_ecc_layout(&atmel_pmecc_oobinfo,
- mtd->oobsize,
- nand_chip->ecc.total);
- nand_chip->ecc.layout = &atmel_pmecc_oobinfo;
+ mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops);
break;
default:
dev_warn(host->dev,
@@ -1292,6 +1291,7 @@ static int atmel_pmecc_nand_init_params(struct platform_device *pdev,
/* page size not handled by HW ECC */
/* switching back to soft ECC */
nand_chip->ecc.mode = NAND_ECC_SOFT;
+ nand_chip->ecc.algo = NAND_ECC_HAMMING;
return 0;
}
@@ -1359,12 +1359,12 @@ static int atmel_nand_read_page(struct mtd_info *mtd, struct nand_chip *chip,
{
int eccsize = chip->ecc.size;
int eccbytes = chip->ecc.bytes;
- uint32_t *eccpos = chip->ecc.layout->eccpos;
uint8_t *p = buf;
uint8_t *oob = chip->oob_poi;
uint8_t *ecc_pos;
int stat;
unsigned int max_bitflips = 0;
+ struct mtd_oob_region oobregion = {};
/*
* Errata: ALE is incorrectly wired up to the ECC controller
@@ -1382,19 +1382,20 @@ static int atmel_nand_read_page(struct mtd_info *mtd, struct nand_chip *chip,
chip->read_buf(mtd, p, eccsize);
/* move to ECC position if needed */
- if (eccpos[0] != 0) {
- /* This only works on large pages
- * because the ECC controller waits for
- * NAND_CMD_RNDOUTSTART after the
- * NAND_CMD_RNDOUT.
- * anyway, for small pages, the eccpos[0] == 0
+ mtd_ooblayout_ecc(mtd, 0, &oobregion);
+ if (oobregion.offset != 0) {
+ /*
+ * This only works on large pages because the ECC controller
+ * waits for NAND_CMD_RNDOUTSTART after the NAND_CMD_RNDOUT.
+ * Anyway, for small pages, the first ECC byte is at offset
+ * 0 in the OOB area.
*/
chip->cmdfunc(mtd, NAND_CMD_RNDOUT,
- mtd->writesize + eccpos[0], -1);
+ mtd->writesize + oobregion.offset, -1);
}
/* the ECC controller needs to read the ECC just after the data */
- ecc_pos = oob + eccpos[0];
+ ecc_pos = oob + oobregion.offset;
chip->read_buf(mtd, ecc_pos, eccbytes);
/* check if there's an error */
@@ -1504,58 +1505,17 @@ static void atmel_nand_hwctl(struct mtd_info *mtd, int mode)
ecc_writel(host->ecc, CR, ATMEL_ECC_RST);
}
-static int atmel_of_init_port(struct atmel_nand_host *host,
- struct device_node *np)
+static int atmel_of_init_ecc(struct atmel_nand_host *host,
+ struct device_node *np)
{
- u32 val;
u32 offset[2];
- int ecc_mode;
- struct atmel_nand_data *board = &host->board;
- enum of_gpio_flags flags = 0;
-
- host->caps = (struct atmel_nand_caps *)
- of_device_get_match_data(host->dev);
-
- if (of_property_read_u32(np, "atmel,nand-addr-offset", &val) == 0) {
- if (val >= 32) {
- dev_err(host->dev, "invalid addr-offset %u\n", val);
- return -EINVAL;
- }
- board->ale = val;
- }
-
- if (of_property_read_u32(np, "atmel,nand-cmd-offset", &val) == 0) {
- if (val >= 32) {
- dev_err(host->dev, "invalid cmd-offset %u\n", val);
- return -EINVAL;
- }
- board->cle = val;
- }
-
- ecc_mode = of_get_nand_ecc_mode(np);
-
- board->ecc_mode = ecc_mode < 0 ? NAND_ECC_SOFT : ecc_mode;
-
- board->on_flash_bbt = of_get_nand_on_flash_bbt(np);
-
- board->has_dma = of_property_read_bool(np, "atmel,nand-has-dma");
-
- if (of_get_nand_bus_width(np) == 16)
- board->bus_width_16 = 1;
-
- board->rdy_pin = of_get_gpio_flags(np, 0, &flags);
- board->rdy_pin_active_low = (flags == OF_GPIO_ACTIVE_LOW);
-
- board->enable_pin = of_get_gpio(np, 1);
- board->det_pin = of_get_gpio(np, 2);
+ u32 val;
host->has_pmecc = of_property_read_bool(np, "atmel,has-pmecc");
- /* load the nfc driver if there is */
- of_platform_populate(np, NULL, NULL, host->dev);
-
- if (!(board->ecc_mode == NAND_ECC_HW) || !host->has_pmecc)
- return 0; /* Not using PMECC */
+ /* Not using PMECC */
+ if (!(host->nand_chip.ecc.mode == NAND_ECC_HW) || !host->has_pmecc)
+ return 0;
/* use PMECC, get correction capability, sector size and lookup
* table offset.
@@ -1596,16 +1556,65 @@ static int atmel_of_init_port(struct atmel_nand_host *host,
/* Will build a lookup table and initialize the offset later */
return 0;
}
+
if (!offset[0] && !offset[1]) {
dev_err(host->dev, "Invalid PMECC lookup table offset\n");
return -EINVAL;
}
+
host->pmecc_lookup_table_offset_512 = offset[0];
host->pmecc_lookup_table_offset_1024 = offset[1];
return 0;
}
+static int atmel_of_init_port(struct atmel_nand_host *host,
+ struct device_node *np)
+{
+ u32 val;
+ struct atmel_nand_data *board = &host->board;
+ enum of_gpio_flags flags = 0;
+
+ host->caps = (struct atmel_nand_caps *)
+ of_device_get_match_data(host->dev);
+
+ if (of_property_read_u32(np, "atmel,nand-addr-offset", &val) == 0) {
+ if (val >= 32) {
+ dev_err(host->dev, "invalid addr-offset %u\n", val);
+ return -EINVAL;
+ }
+ board->ale = val;
+ }
+
+ if (of_property_read_u32(np, "atmel,nand-cmd-offset", &val) == 0) {
+ if (val >= 32) {
+ dev_err(host->dev, "invalid cmd-offset %u\n", val);
+ return -EINVAL;
+ }
+ board->cle = val;
+ }
+
+ board->has_dma = of_property_read_bool(np, "atmel,nand-has-dma");
+
+ board->rdy_pin = of_get_gpio_flags(np, 0, &flags);
+ board->rdy_pin_active_low = (flags == OF_GPIO_ACTIVE_LOW);
+
+ board->enable_pin = of_get_gpio(np, 1);
+ board->det_pin = of_get_gpio(np, 2);
+
+ /* load the nfc driver if there is */
+ of_platform_populate(np, NULL, NULL, host->dev);
+
+ /*
+ * Initialize ECC mode to NAND_ECC_SOFT so that we have a correct value
+ * even if the nand-ecc-mode property is not defined.
+ */
+ host->nand_chip.ecc.mode = NAND_ECC_SOFT;
+ host->nand_chip.ecc.algo = NAND_ECC_HAMMING;
+
+ return 0;
+}
+
static int atmel_hw_nand_init_params(struct platform_device *pdev,
struct atmel_nand_host *host)
{
@@ -1618,6 +1627,7 @@ static int atmel_hw_nand_init_params(struct platform_device *pdev,
dev_err(host->dev,
"Can't get I/O resource regs, use software ECC\n");
nand_chip->ecc.mode = NAND_ECC_SOFT;
+ nand_chip->ecc.algo = NAND_ECC_HAMMING;
return 0;
}
@@ -1631,25 +1641,26 @@ static int atmel_hw_nand_init_params(struct platform_device *pdev,
/* set ECC page size and oob layout */
switch (mtd->writesize) {
case 512:
- nand_chip->ecc.layout = &atmel_oobinfo_small;
+ mtd_set_ooblayout(mtd, &atmel_ooblayout_sp_ops);
ecc_writel(host->ecc, MR, ATMEL_ECC_PAGESIZE_528);
break;
case 1024:
- nand_chip->ecc.layout = &atmel_oobinfo_large;
+ mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops);
ecc_writel(host->ecc, MR, ATMEL_ECC_PAGESIZE_1056);
break;
case 2048:
- nand_chip->ecc.layout = &atmel_oobinfo_large;
+ mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops);
ecc_writel(host->ecc, MR, ATMEL_ECC_PAGESIZE_2112);
break;
case 4096:
- nand_chip->ecc.layout = &atmel_oobinfo_large;
+ mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops);
ecc_writel(host->ecc, MR, ATMEL_ECC_PAGESIZE_4224);
break;
default:
/* page size not handled by HW ECC */
/* switching back to soft ECC */
nand_chip->ecc.mode = NAND_ECC_SOFT;
+ nand_chip->ecc.algo = NAND_ECC_HAMMING;
return 0;
}
@@ -1699,9 +1710,9 @@ static irqreturn_t hsmc_interrupt(int irq, void *dev_id)
nfc_writel(host->nfc->hsmc_regs, IDR, NFC_SR_XFR_DONE);
ret = IRQ_HANDLED;
}
- if (pending & host->nfc->caps->rb_mask) {
+ if (pending & NFC_SR_RB_EDGE) {
complete(&host->nfc->comp_ready);
- nfc_writel(host->nfc->hsmc_regs, IDR, host->nfc->caps->rb_mask);
+ nfc_writel(host->nfc->hsmc_regs, IDR, NFC_SR_RB_EDGE);
ret = IRQ_HANDLED;
}
if (pending & NFC_SR_CMD_DONE) {
@@ -1719,7 +1730,7 @@ static void nfc_prepare_interrupt(struct atmel_nand_host *host, u32 flag)
if (flag & NFC_SR_XFR_DONE)
init_completion(&host->nfc->comp_xfer_done);
- if (flag & host->nfc->caps->rb_mask)
+ if (flag & NFC_SR_RB_EDGE)
init_completion(&host->nfc->comp_ready);
if (flag & NFC_SR_CMD_DONE)
@@ -1737,7 +1748,7 @@ static int nfc_wait_interrupt(struct atmel_nand_host *host, u32 flag)
if (flag & NFC_SR_XFR_DONE)
comp[index++] = &host->nfc->comp_xfer_done;
- if (flag & host->nfc->caps->rb_mask)
+ if (flag & NFC_SR_RB_EDGE)
comp[index++] = &host->nfc->comp_ready;
if (flag & NFC_SR_CMD_DONE)
@@ -1805,7 +1816,7 @@ static int nfc_device_ready(struct mtd_info *mtd)
dev_err(host->dev, "Lost the interrupt flags: 0x%08x\n",
mask & status);
- return status & host->nfc->caps->rb_mask;
+ return status & NFC_SR_RB_EDGE;
}
static void nfc_select_chip(struct mtd_info *mtd, int chip)
@@ -1978,8 +1989,8 @@ static void nfc_nand_command(struct mtd_info *mtd, unsigned int command,
}
/* fall through */
default:
- nfc_prepare_interrupt(host, host->nfc->caps->rb_mask);
- nfc_wait_interrupt(host, host->nfc->caps->rb_mask);
+ nfc_prepare_interrupt(host, NFC_SR_RB_EDGE);
+ nfc_wait_interrupt(host, NFC_SR_RB_EDGE);
}
}
@@ -2147,6 +2158,19 @@ static int atmel_nand_probe(struct platform_device *pdev)
} else {
memcpy(&host->board, dev_get_platdata(&pdev->dev),
sizeof(struct atmel_nand_data));
+ nand_chip->ecc.mode = host->board.ecc_mode;
+
+ /*
+ * When using software ECC every supported avr32 board means
+ * Hamming algorithm. If that ever changes we'll need to add
+ * ecc_algo field to the struct atmel_nand_data.
+ */
+ if (nand_chip->ecc.mode == NAND_ECC_SOFT)
+ nand_chip->ecc.algo = NAND_ECC_HAMMING;
+
+ /* 16-bit bus width */
+ if (host->board.bus_width_16)
+ nand_chip->options |= NAND_BUSWIDTH_16;
}
/* link the private data structures */
@@ -2188,11 +2212,8 @@ static int atmel_nand_probe(struct platform_device *pdev)
nand_chip->cmd_ctrl = atmel_nand_cmd_ctrl;
}
- nand_chip->ecc.mode = host->board.ecc_mode;
nand_chip->chip_delay = 40; /* 40us command delay time */
- if (host->board.bus_width_16) /* 16-bit bus width */
- nand_chip->options |= NAND_BUSWIDTH_16;
nand_chip->read_buf = atmel_read_buf;
nand_chip->write_buf = atmel_write_buf;
@@ -2225,11 +2246,6 @@ static int atmel_nand_probe(struct platform_device *pdev)
}
}
- if (host->board.on_flash_bbt || on_flash_bbt) {
- dev_info(&pdev->dev, "Use On Flash BBT\n");
- nand_chip->bbt_options |= NAND_BBT_USE_FLASH;
- }
-
if (!host->board.has_dma)
use_dma = 0;
@@ -2256,6 +2272,18 @@ static int atmel_nand_probe(struct platform_device *pdev)
goto err_scan_ident;
}
+ if (host->board.on_flash_bbt || on_flash_bbt)
+ nand_chip->bbt_options |= NAND_BBT_USE_FLASH;
+
+ if (nand_chip->bbt_options & NAND_BBT_USE_FLASH)
+ dev_info(&pdev->dev, "Use On Flash BBT\n");
+
+ if (IS_ENABLED(CONFIG_OF) && pdev->dev.of_node) {
+ res = atmel_of_init_ecc(host, pdev->dev.of_node);
+ if (res)
+ goto err_hw_ecc;
+ }
+
if (nand_chip->ecc.mode == NAND_ECC_HW) {
if (host->has_pmecc)
res = atmel_pmecc_nand_init_params(pdev, host);
@@ -2393,11 +2421,6 @@ static int atmel_nand_nfc_probe(struct platform_device *pdev)
}
}
- nfc->caps = (const struct atmel_nand_nfc_caps *)
- of_device_get_match_data(&pdev->dev);
- if (!nfc->caps)
- return -ENODEV;
-
nfc_writel(nfc->hsmc_regs, IDR, 0xffffffff);
nfc_readl(nfc->hsmc_regs, SR); /* clear the NFC_SR */
@@ -2426,17 +2449,8 @@ static int atmel_nand_nfc_remove(struct platform_device *pdev)
return 0;
}
-static const struct atmel_nand_nfc_caps sama5d3_nfc_caps = {
- .rb_mask = NFC_SR_RB_EDGE0,
-};
-
-static const struct atmel_nand_nfc_caps sama5d4_nfc_caps = {
- .rb_mask = NFC_SR_RB_EDGE3,
-};
-
static const struct of_device_id atmel_nand_nfc_match[] = {
- { .compatible = "atmel,sama5d3-nfc", .data = &sama5d3_nfc_caps },
- { .compatible = "atmel,sama5d4-nfc", .data = &sama5d4_nfc_caps },
+ { .compatible = "atmel,sama5d3-nfc" },
{ /* sentinel */ }
};
MODULE_DEVICE_TABLE(of, atmel_nand_nfc_match);
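A minimal sketch of the iteration pattern the atmel conversion above (and the fsl_ifc is_blank() change further down) relies on, using only the helpers visible in this series — mtd_ooblayout_ecc() and struct mtd_oob_region. The helper name and its body are illustrative, not part of the patch:

#include <linux/mtd/mtd.h>

/* Visit every ECC byte in an OOB buffer by walking the ECC sections
 * exposed by the mtd_ooblayout_ops registered for this mtd device. */
static bool oob_ecc_is_erased(struct mtd_info *mtd, const u8 *oob)
{
        struct mtd_oob_region oobregion;
        int section = 0, i;

        /* mtd_ooblayout_ecc() returns -ERANGE once the sections run out */
        while (!mtd_ooblayout_ecc(mtd, section++, &oobregion)) {
                for (i = 0; i < oobregion.length; i++) {
                        if (oob[oobregion.offset + i] != 0xff)
                                return false;
                }
        }

        return true;
}

This is the same walk the fsl_ifc is_blank() hunk below performs, and the byte-count variant of it is what mtd_ooblayout_count_eccbytes() (used in atmel_pmecc_core_init() above) provides.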
diff --git a/drivers/mtd/nand/atmel_nand_nfc.h b/drivers/mtd/nand/atmel_nand_nfc.h
index 0bbc1fa97dba..4d5d26221a7e 100644
--- a/drivers/mtd/nand/atmel_nand_nfc.h
+++ b/drivers/mtd/nand/atmel_nand_nfc.h
@@ -42,8 +42,7 @@
#define NFC_SR_UNDEF (1 << 21)
#define NFC_SR_AWB (1 << 22)
#define NFC_SR_ASE (1 << 23)
-#define NFC_SR_RB_EDGE0 (1 << 24)
-#define NFC_SR_RB_EDGE3 (1 << 27)
+#define NFC_SR_RB_EDGE (1 << 24)
#define ATMEL_HSMC_NFC_IER 0x0c
#define ATMEL_HSMC_NFC_IDR 0x10
diff --git a/drivers/mtd/nand/au1550nd.c b/drivers/mtd/nand/au1550nd.c
index 341ea4904164..9bf6d9915694 100644
--- a/drivers/mtd/nand/au1550nd.c
+++ b/drivers/mtd/nand/au1550nd.c
@@ -459,6 +459,7 @@ static int au1550nd_probe(struct platform_device *pdev)
/* 30 us command delay time */
this->chip_delay = 30;
this->ecc.mode = NAND_ECC_SOFT;
+ this->ecc.algo = NAND_ECC_HAMMING;
if (pd->devwidth)
this->options |= NAND_BUSWIDTH_16;
diff --git a/drivers/mtd/nand/bf5xx_nand.c b/drivers/mtd/nand/bf5xx_nand.c
index 7f6b30e615b7..37da4236ab90 100644
--- a/drivers/mtd/nand/bf5xx_nand.c
+++ b/drivers/mtd/nand/bf5xx_nand.c
@@ -109,28 +109,33 @@ static const unsigned short bfin_nfc_pin_req[] =
0};
#ifdef CONFIG_MTD_NAND_BF5XX_BOOTROM_ECC
-static struct nand_ecclayout bootrom_ecclayout = {
- .eccbytes = 24,
- .eccpos = {
- 0x8 * 0, 0x8 * 0 + 1, 0x8 * 0 + 2,
- 0x8 * 1, 0x8 * 1 + 1, 0x8 * 1 + 2,
- 0x8 * 2, 0x8 * 2 + 1, 0x8 * 2 + 2,
- 0x8 * 3, 0x8 * 3 + 1, 0x8 * 3 + 2,
- 0x8 * 4, 0x8 * 4 + 1, 0x8 * 4 + 2,
- 0x8 * 5, 0x8 * 5 + 1, 0x8 * 5 + 2,
- 0x8 * 6, 0x8 * 6 + 1, 0x8 * 6 + 2,
- 0x8 * 7, 0x8 * 7 + 1, 0x8 * 7 + 2
- },
- .oobfree = {
- { 0x8 * 0 + 3, 5 },
- { 0x8 * 1 + 3, 5 },
- { 0x8 * 2 + 3, 5 },
- { 0x8 * 3 + 3, 5 },
- { 0x8 * 4 + 3, 5 },
- { 0x8 * 5 + 3, 5 },
- { 0x8 * 6 + 3, 5 },
- { 0x8 * 7 + 3, 5 },
- }
+static int bootrom_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section > 7)
+ return -ERANGE;
+
+ oobregion->offset = section * 8;
+ oobregion->length = 3;
+
+ return 0;
+}
+
+static int bootrom_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section > 7)
+ return -ERANGE;
+
+ oobregion->offset = (section * 8) + 3;
+ oobregion->length = 5;
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops bootrom_ooblayout_ops = {
+ .ecc = bootrom_ooblayout_ecc,
+ .free = bootrom_ooblayout_free,
};
#endif
@@ -800,7 +805,7 @@ static int bf5xx_nand_probe(struct platform_device *pdev)
/* setup hardware ECC data struct */
if (hardware_ecc) {
#ifdef CONFIG_MTD_NAND_BF5XX_BOOTROM_ECC
- chip->ecc.layout = &bootrom_ecclayout;
+ mtd_set_ooblayout(mtd, &bootrom_ooblayout_ops);
#endif
chip->read_buf = bf5xx_nand_dma_read_buf;
chip->write_buf = bf5xx_nand_dma_write_buf;
@@ -812,6 +817,7 @@ static int bf5xx_nand_probe(struct platform_device *pdev)
chip->ecc.write_page_raw = bf5xx_nand_write_page_raw;
} else {
chip->ecc.mode = NAND_ECC_SOFT;
+ chip->ecc.algo = NAND_ECC_HAMMING;
}
/* scan hardware nand chip and setup mtd info data struct */
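Worked out against the table it replaces, the section-based bootrom layout above gives section n (0-7) ECC bytes at offsets 8n, 8n+1, 8n+2 and free bytes at 8n+3 through 8n+7 — i.e. exactly the old eccpos list {0, 1, 2, 8, 9, 10, ..., 56, 57, 58} (8 x 3 = 24 ECC bytes, matching .eccbytes = 24) and the old oobfree entries {3, 5}, {11, 5}, ..., {59, 5}.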
diff --git a/drivers/mtd/nand/brcmnand/brcmnand.c b/drivers/mtd/nand/brcmnand/brcmnand.c
index e0528397306a..b76ad7c0144f 100644
--- a/drivers/mtd/nand/brcmnand/brcmnand.c
+++ b/drivers/mtd/nand/brcmnand/brcmnand.c
@@ -32,7 +32,6 @@
#include <linux/mtd/nand.h>
#include <linux/mtd/partitions.h>
#include <linux/of.h>
-#include <linux/of_mtd.h>
#include <linux/of_platform.h>
#include <linux/slab.h>
#include <linux/list.h>
@@ -601,7 +600,7 @@ static void brcmnand_wr_corr_thresh(struct brcmnand_host *host, u8 val)
static inline int brcmnand_cmd_shift(struct brcmnand_controller *ctrl)
{
- if (ctrl->nand_version < 0x0700)
+ if (ctrl->nand_version < 0x0602)
return 24;
return 0;
}
@@ -781,127 +780,183 @@ static inline bool is_hamming_ecc(struct brcmnand_cfg *cfg)
}
/*
- * Returns a nand_ecclayout strucutre for the given layout/configuration.
- * Returns NULL on failure.
+ * Set mtd->ooblayout to the appropriate mtd_ooblayout_ops given
+ * the layout/configuration.
+ * Returns -ERRCODE on failure.
*/
-static struct nand_ecclayout *brcmnand_create_layout(int ecc_level,
- struct brcmnand_host *host)
+static int brcmnand_hamming_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ struct brcmnand_host *host = nand_get_controller_data(chip);
struct brcmnand_cfg *cfg = &host->hwcfg;
- int i, j;
- struct nand_ecclayout *layout;
- int req;
- int sectors;
- int sas;
- int idx1, idx2;
-
- layout = devm_kzalloc(&host->pdev->dev, sizeof(*layout), GFP_KERNEL);
- if (!layout)
- return NULL;
-
- sectors = cfg->page_size / (512 << cfg->sector_size_1k);
- sas = cfg->spare_area_size << cfg->sector_size_1k;
-
- /* Hamming */
- if (is_hamming_ecc(cfg)) {
- for (i = 0, idx1 = 0, idx2 = 0; i < sectors; i++) {
- /* First sector of each page may have BBI */
- if (i == 0) {
- layout->oobfree[idx2].offset = i * sas + 1;
- /* Small-page NAND use byte 6 for BBI */
- if (cfg->page_size == 512)
- layout->oobfree[idx2].offset--;
- layout->oobfree[idx2].length = 5;
- } else {
- layout->oobfree[idx2].offset = i * sas;
- layout->oobfree[idx2].length = 6;
- }
- idx2++;
- layout->eccpos[idx1++] = i * sas + 6;
- layout->eccpos[idx1++] = i * sas + 7;
- layout->eccpos[idx1++] = i * sas + 8;
- layout->oobfree[idx2].offset = i * sas + 9;
- layout->oobfree[idx2].length = 7;
- idx2++;
- /* Leave zero-terminated entry for OOBFREE */
- if (idx1 >= MTD_MAX_ECCPOS_ENTRIES_LARGE ||
- idx2 >= MTD_MAX_OOBFREE_ENTRIES_LARGE - 1)
- break;
- }
+ int sas = cfg->spare_area_size << cfg->sector_size_1k;
+ int sectors = cfg->page_size / (512 << cfg->sector_size_1k);
- return layout;
- }
+ if (section >= sectors)
+ return -ERANGE;
- /*
- * CONTROLLER_VERSION:
- * < v5.0: ECC_REQ = ceil(BCH_T * 13/8)
- * >= v5.0: ECC_REQ = ceil(BCH_T * 14/8)
- * But we will just be conservative.
- */
- req = DIV_ROUND_UP(ecc_level * 14, 8);
- if (req >= sas) {
- dev_err(&host->pdev->dev,
- "error: ECC too large for OOB (ECC bytes %d, spare sector %d)\n",
- req, sas);
- return NULL;
- }
+ oobregion->offset = (section * sas) + 6;
+ oobregion->length = 3;
+
+ return 0;
+}
+
+static int brcmnand_hamming_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ struct brcmnand_host *host = nand_get_controller_data(chip);
+ struct brcmnand_cfg *cfg = &host->hwcfg;
+ int sas = cfg->spare_area_size << cfg->sector_size_1k;
+ int sectors = cfg->page_size / (512 << cfg->sector_size_1k);
- layout->eccbytes = req * sectors;
- for (i = 0, idx1 = 0, idx2 = 0; i < sectors; i++) {
- for (j = sas - req; j < sas && idx1 <
- MTD_MAX_ECCPOS_ENTRIES_LARGE; j++, idx1++)
- layout->eccpos[idx1] = i * sas + j;
+ if (section >= sectors * 2)
+ return -ERANGE;
+
+ oobregion->offset = (section / 2) * sas;
+
+ if (section & 1) {
+ oobregion->offset += 9;
+ oobregion->length = 7;
+ } else {
+ oobregion->length = 6;
/* First sector of each page may have BBI */
- if (i == 0) {
- if (cfg->page_size == 512 && (sas - req >= 6)) {
- /* Small-page NAND use byte 6 for BBI */
- layout->oobfree[idx2].offset = 0;
- layout->oobfree[idx2].length = 5;
- idx2++;
- if (sas - req > 6) {
- layout->oobfree[idx2].offset = 6;
- layout->oobfree[idx2].length =
- sas - req - 6;
- idx2++;
- }
- } else if (sas > req + 1) {
- layout->oobfree[idx2].offset = i * sas + 1;
- layout->oobfree[idx2].length = sas - req - 1;
- idx2++;
- }
- } else if (sas > req) {
- layout->oobfree[idx2].offset = i * sas;
- layout->oobfree[idx2].length = sas - req;
- idx2++;
+ if (!section) {
+ /*
+ * Small-page NAND use byte 6 for BBI while large-page
+ * NAND use byte 0.
+ */
+ if (cfg->page_size > 512)
+ oobregion->offset++;
+ oobregion->length--;
}
- /* Leave zero-terminated entry for OOBFREE */
- if (idx1 >= MTD_MAX_ECCPOS_ENTRIES_LARGE ||
- idx2 >= MTD_MAX_OOBFREE_ENTRIES_LARGE - 1)
- break;
}
- return layout;
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops brcmnand_hamming_ooblayout_ops = {
+ .ecc = brcmnand_hamming_ooblayout_ecc,
+ .free = brcmnand_hamming_ooblayout_free,
+};
+
+static int brcmnand_bch_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ struct brcmnand_host *host = nand_get_controller_data(chip);
+ struct brcmnand_cfg *cfg = &host->hwcfg;
+ int sas = cfg->spare_area_size << cfg->sector_size_1k;
+ int sectors = cfg->page_size / (512 << cfg->sector_size_1k);
+
+ if (section >= sectors)
+ return -ERANGE;
+
+ oobregion->offset = (section * (sas + 1)) - chip->ecc.bytes;
+ oobregion->length = chip->ecc.bytes;
+
+ return 0;
+}
+
+static int brcmnand_bch_ooblayout_free_lp(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ struct brcmnand_host *host = nand_get_controller_data(chip);
+ struct brcmnand_cfg *cfg = &host->hwcfg;
+ int sas = cfg->spare_area_size << cfg->sector_size_1k;
+ int sectors = cfg->page_size / (512 << cfg->sector_size_1k);
+
+ if (section >= sectors)
+ return -ERANGE;
+
+ if (sas <= chip->ecc.bytes)
+ return 0;
+
+ oobregion->offset = section * sas;
+ oobregion->length = sas - chip->ecc.bytes;
+
+ if (!section) {
+ oobregion->offset++;
+ oobregion->length--;
+ }
+
+ return 0;
}
-static struct nand_ecclayout *brcmstb_choose_ecc_layout(
- struct brcmnand_host *host)
+static int brcmnand_bch_ooblayout_free_sp(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ struct brcmnand_host *host = nand_get_controller_data(chip);
+ struct brcmnand_cfg *cfg = &host->hwcfg;
+ int sas = cfg->spare_area_size << cfg->sector_size_1k;
+
+ if (section > 1 || sas - chip->ecc.bytes < 6 ||
+ (section && sas - chip->ecc.bytes == 6))
+ return -ERANGE;
+
+ if (!section) {
+ oobregion->offset = 0;
+ oobregion->length = 5;
+ } else {
+ oobregion->offset = 6;
+ oobregion->length = sas - chip->ecc.bytes - 6;
+ }
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops brcmnand_bch_lp_ooblayout_ops = {
+ .ecc = brcmnand_bch_ooblayout_ecc,
+ .free = brcmnand_bch_ooblayout_free_lp,
+};
+
+static const struct mtd_ooblayout_ops brcmnand_bch_sp_ooblayout_ops = {
+ .ecc = brcmnand_bch_ooblayout_ecc,
+ .free = brcmnand_bch_ooblayout_free_sp,
+};
+
+static int brcmstb_choose_ecc_layout(struct brcmnand_host *host)
{
- struct nand_ecclayout *layout;
struct brcmnand_cfg *p = &host->hwcfg;
+ struct mtd_info *mtd = nand_to_mtd(&host->chip);
+ struct nand_ecc_ctrl *ecc = &host->chip.ecc;
unsigned int ecc_level = p->ecc_level;
+ int sas = p->spare_area_size << p->sector_size_1k;
+ int sectors = p->page_size / (512 << p->sector_size_1k);
if (p->sector_size_1k)
ecc_level <<= 1;
- layout = brcmnand_create_layout(ecc_level, host);
- if (!layout) {
+ if (is_hamming_ecc(p)) {
+ ecc->bytes = 3 * sectors;
+ mtd_set_ooblayout(mtd, &brcmnand_hamming_ooblayout_ops);
+ return 0;
+ }
+
+ /*
+ * CONTROLLER_VERSION:
+ * < v5.0: ECC_REQ = ceil(BCH_T * 13/8)
+ * >= v5.0: ECC_REQ = ceil(BCH_T * 14/8)
+ * But we will just be conservative.
+ */
+ ecc->bytes = DIV_ROUND_UP(ecc_level * 14, 8);
+ if (p->page_size == 512)
+ mtd_set_ooblayout(mtd, &brcmnand_bch_sp_ooblayout_ops);
+ else
+ mtd_set_ooblayout(mtd, &brcmnand_bch_lp_ooblayout_ops);
+
+ if (ecc->bytes >= sas) {
dev_err(&host->pdev->dev,
- "no proper ecc_layout for this NAND cfg\n");
- return NULL;
+ "error: ECC too large for OOB (ECC bytes %d, spare sector %d)\n",
+ ecc->bytes, sas);
+ return -EINVAL;
}
- return layout;
+ return 0;
}
static void brcmnand_wp(struct mtd_info *mtd, int wp)
@@ -1870,9 +1925,31 @@ static int brcmnand_setup_dev(struct brcmnand_host *host)
cfg->col_adr_bytes = 2;
cfg->blk_adr_bytes = get_blk_adr_bytes(mtd->size, mtd->writesize);
+ if (chip->ecc.mode != NAND_ECC_HW) {
+ dev_err(ctrl->dev, "only HW ECC supported; selected: %d\n",
+ chip->ecc.mode);
+ return -EINVAL;
+ }
+
+ if (chip->ecc.algo == NAND_ECC_UNKNOWN) {
+ if (chip->ecc.strength == 1 && chip->ecc.size == 512)
+ /* Default to Hamming for 1-bit ECC, if unspecified */
+ chip->ecc.algo = NAND_ECC_HAMMING;
+ else
+ /* Otherwise, BCH */
+ chip->ecc.algo = NAND_ECC_BCH;
+ }
+
+ if (chip->ecc.algo == NAND_ECC_HAMMING && (chip->ecc.strength != 1 ||
+ chip->ecc.size != 512)) {
+ dev_err(ctrl->dev, "invalid Hamming params: %d bits per %d bytes\n",
+ chip->ecc.strength, chip->ecc.size);
+ return -EINVAL;
+ }
+
switch (chip->ecc.size) {
case 512:
- if (chip->ecc.strength == 1) /* Hamming */
+ if (chip->ecc.algo == NAND_ECC_HAMMING)
cfg->ecc_level = 15;
else
cfg->ecc_level = chip->ecc.strength;
@@ -2001,8 +2078,8 @@ static int brcmnand_init_cs(struct brcmnand_host *host, struct device_node *dn)
*/
chip->options |= NAND_USE_BOUNCE_BUFFER;
- if (of_get_nand_on_flash_bbt(dn))
- chip->bbt_options |= NAND_BBT_USE_FLASH | NAND_BBT_NO_OOB;
+ if (chip->bbt_options & NAND_BBT_USE_FLASH)
+ chip->bbt_options |= NAND_BBT_NO_OOB;
if (brcmnand_setup_dev(host))
return -ENXIO;
@@ -2011,9 +2088,9 @@ static int brcmnand_init_cs(struct brcmnand_host *host, struct device_node *dn)
/* only use our internal HW threshold */
mtd->bitflip_threshold = 1;
- chip->ecc.layout = brcmstb_choose_ecc_layout(host);
- if (!chip->ecc.layout)
- return -ENXIO;
+ ret = brcmstb_choose_ecc_layout(host);
+ if (ret)
+ return ret;
if (nand_scan_tail(mtd))
return -ENXIO;
@@ -2115,6 +2192,7 @@ static const struct of_device_id brcmnand_of_match[] = {
{ .compatible = "brcm,brcmnand-v5.0" },
{ .compatible = "brcm,brcmnand-v6.0" },
{ .compatible = "brcm,brcmnand-v6.1" },
+ { .compatible = "brcm,brcmnand-v6.2" },
{ .compatible = "brcm,brcmnand-v7.0" },
{ .compatible = "brcm,brcmnand-v7.1" },
{},
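The ECC sizing rule that brcmstb_choose_ecc_layout() now applies inline can be read as a small formula; the helper below is an illustrative restatement of that logic (not part of the driver), assuming <linux/kernel.h> for DIV_ROUND_UP():

#include <linux/kernel.h>

/* Conservative BCH spare-area requirement: ~14 ECC bits per bit of
 * correction, with the programmed level doubled for 1 KiB sectors.
 * Examples: T=4 -> 7 bytes, T=8 -> 14 bytes, T=8 @ 1 KiB -> 28 bytes. */
static unsigned int brcm_bch_ecc_bytes(unsigned int bch_t, bool sector_1k)
{
        unsigned int level = sector_1k ? bch_t << 1 : bch_t;

        return DIV_ROUND_UP(level * 14, 8);
}

The Hamming case stays fixed at 3 bytes per 512-byte sector, which is why the new code simply sets ecc->bytes = 3 * sectors before installing brcmnand_hamming_ooblayout_ops.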
diff --git a/drivers/mtd/nand/cafe_nand.c b/drivers/mtd/nand/cafe_nand.c
index e553aff68987..0b0c93702abb 100644
--- a/drivers/mtd/nand/cafe_nand.c
+++ b/drivers/mtd/nand/cafe_nand.c
@@ -459,10 +459,37 @@ static int cafe_nand_read_page(struct mtd_info *mtd, struct nand_chip *chip,
return max_bitflips;
}
-static struct nand_ecclayout cafe_oobinfo_2048 = {
- .eccbytes = 14,
- .eccpos = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13},
- .oobfree = {{14, 50}}
+static int cafe_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+
+ if (section)
+ return -ERANGE;
+
+ oobregion->offset = 0;
+ oobregion->length = chip->ecc.total;
+
+ return 0;
+}
+
+static int cafe_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+
+ if (section)
+ return -ERANGE;
+
+ oobregion->offset = chip->ecc.total;
+ oobregion->length = mtd->oobsize - chip->ecc.total;
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops cafe_ooblayout_ops = {
+ .ecc = cafe_ooblayout_ecc,
+ .free = cafe_ooblayout_free,
};
/* Ick. The BBT code really ought to be able to work this bit out
@@ -494,12 +521,6 @@ static struct nand_bbt_descr cafe_bbt_mirror_descr_2048 = {
.pattern = cafe_mirror_pattern_2048
};
-static struct nand_ecclayout cafe_oobinfo_512 = {
- .eccbytes = 14,
- .eccpos = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13},
- .oobfree = {{14, 2}}
-};
-
static struct nand_bbt_descr cafe_bbt_main_descr_512 = {
.options = NAND_BBT_LASTBLOCK | NAND_BBT_CREATE | NAND_BBT_WRITE
| NAND_BBT_2BIT | NAND_BBT_VERSION,
@@ -743,12 +764,11 @@ static int cafe_nand_probe(struct pci_dev *pdev,
cafe->ctl2 |= 1<<29; /* 2KiB page size */
/* Set up ECC according to the type of chip we found */
+ mtd_set_ooblayout(mtd, &cafe_ooblayout_ops);
if (mtd->writesize == 2048) {
- cafe->nand.ecc.layout = &cafe_oobinfo_2048;
cafe->nand.bbt_td = &cafe_bbt_main_descr_2048;
cafe->nand.bbt_md = &cafe_bbt_mirror_descr_2048;
} else if (mtd->writesize == 512) {
- cafe->nand.ecc.layout = &cafe_oobinfo_512;
cafe->nand.bbt_td = &cafe_bbt_main_descr_512;
cafe->nand.bbt_md = &cafe_bbt_mirror_descr_512;
} else {
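Because the cafe ooblayout derives its regions from chip->ecc.total at run time, one set of callbacks now covers both page-size variants: with 14 ECC bytes the free region becomes {14, mtd->oobsize - 14}, which is {14, 50} for a 2048-byte page with 64 OOB bytes and {14, 2} for a 512-byte page with 16 OOB bytes — the same values the two deleted nand_ecclayout tables carried (assuming the usual 64-/16-byte OOB sizes).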
diff --git a/drivers/mtd/nand/cmx270_nand.c b/drivers/mtd/nand/cmx270_nand.c
index 6f97ebba52c4..49133783ca53 100644
--- a/drivers/mtd/nand/cmx270_nand.c
+++ b/drivers/mtd/nand/cmx270_nand.c
@@ -187,6 +187,7 @@ static int __init cmx270_init(void)
/* 15 us command delay time */
this->chip_delay = 20;
this->ecc.mode = NAND_ECC_SOFT;
+ this->ecc.algo = NAND_ECC_HAMMING;
/* read/write functions */
this->read_byte = cmx270_read_byte;
diff --git a/drivers/mtd/nand/davinci_nand.c b/drivers/mtd/nand/davinci_nand.c
index 8cb821b6686e..cc07ba0f044d 100644
--- a/drivers/mtd/nand/davinci_nand.c
+++ b/drivers/mtd/nand/davinci_nand.c
@@ -34,7 +34,6 @@
#include <linux/slab.h>
#include <linux/of_device.h>
#include <linux/of.h>
-#include <linux/of_mtd.h>
#include <linux/platform_data/mtd-davinci.h>
#include <linux/platform_data/mtd-davinci-aemif.h>
@@ -54,7 +53,6 @@
*/
struct davinci_nand_info {
struct nand_chip chip;
- struct nand_ecclayout ecclayout;
struct device *dev;
struct clk *clk;
@@ -480,63 +478,46 @@ static int nand_davinci_dev_ready(struct mtd_info *mtd)
* ten ECC bytes plus the manufacturer's bad block marker byte, and
* and not overlapping the default BBT markers.
*/
-static struct nand_ecclayout hwecc4_small = {
- .eccbytes = 10,
- .eccpos = { 0, 1, 2, 3, 4,
- /* offset 5 holds the badblock marker */
- 6, 7,
- 13, 14, 15, },
- .oobfree = {
- {.offset = 8, .length = 5, },
- {.offset = 16, },
- },
-};
+static int hwecc4_ooblayout_small_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section > 2)
+ return -ERANGE;
+
+ if (!section) {
+ oobregion->offset = 0;
+ oobregion->length = 5;
+ } else if (section == 1) {
+ oobregion->offset = 6;
+ oobregion->length = 2;
+ } else {
+ oobregion->offset = 13;
+ oobregion->length = 3;
+ }
-/* An ECC layout for using 4-bit ECC with large-page (2048bytes) flash,
- * storing ten ECC bytes plus the manufacturer's bad block marker byte,
- * and not overlapping the default BBT markers.
- */
-static struct nand_ecclayout hwecc4_2048 = {
- .eccbytes = 40,
- .eccpos = {
- /* at the end of spare sector */
- 24, 25, 26, 27, 28, 29, 30, 31, 32, 33,
- 34, 35, 36, 37, 38, 39, 40, 41, 42, 43,
- 44, 45, 46, 47, 48, 49, 50, 51, 52, 53,
- 54, 55, 56, 57, 58, 59, 60, 61, 62, 63,
- },
- .oobfree = {
- /* 2 bytes at offset 0 hold manufacturer badblock markers */
- {.offset = 2, .length = 22, },
- /* 5 bytes at offset 8 hold BBT markers */
- /* 8 bytes at offset 16 hold JFFS2 clean markers */
- },
-};
+ return 0;
+}
-/*
- * An ECC layout for using 4-bit ECC with large-page (4096bytes) flash,
- * storing ten ECC bytes plus the manufacturer's bad block marker byte,
- * and not overlapping the default BBT markers.
- */
-static struct nand_ecclayout hwecc4_4096 = {
- .eccbytes = 80,
- .eccpos = {
- /* at the end of spare sector */
- 48, 49, 50, 51, 52, 53, 54, 55, 56, 57,
- 58, 59, 60, 61, 62, 63, 64, 65, 66, 67,
- 68, 69, 70, 71, 72, 73, 74, 75, 76, 77,
- 78, 79, 80, 81, 82, 83, 84, 85, 86, 87,
- 88, 89, 90, 91, 92, 93, 94, 95, 96, 97,
- 98, 99, 100, 101, 102, 103, 104, 105, 106, 107,
- 108, 109, 110, 111, 112, 113, 114, 115, 116, 117,
- 118, 119, 120, 121, 122, 123, 124, 125, 126, 127,
- },
- .oobfree = {
- /* 2 bytes at offset 0 hold manufacturer badblock markers */
- {.offset = 2, .length = 46, },
- /* 5 bytes at offset 8 hold BBT markers */
- /* 8 bytes at offset 16 hold JFFS2 clean markers */
- },
+static int hwecc4_ooblayout_small_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section > 1)
+ return -ERANGE;
+
+ if (!section) {
+ oobregion->offset = 8;
+ oobregion->length = 5;
+ } else {
+ oobregion->offset = 16;
+ oobregion->length = mtd->oobsize - 16;
+ }
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops hwecc4_small_ooblayout_ops = {
+ .ecc = hwecc4_ooblayout_small_ecc,
+ .free = hwecc4_ooblayout_small_free,
};
#if defined(CONFIG_OF)
@@ -577,8 +558,6 @@ static struct davinci_nand_pdata
"ti,davinci-mask-chipsel", &prop))
pdata->mask_chipsel = prop;
if (!of_property_read_string(pdev->dev.of_node,
- "nand-ecc-mode", &mode) ||
- !of_property_read_string(pdev->dev.of_node,
"ti,davinci-ecc-mode", &mode)) {
if (!strncmp("none", mode, 4))
pdata->ecc_mode = NAND_ECC_NONE;
@@ -591,14 +570,11 @@ static struct davinci_nand_pdata
"ti,davinci-ecc-bits", &prop))
pdata->ecc_bits = prop;
- prop = of_get_nand_bus_width(pdev->dev.of_node);
- if (0 < prop || !of_property_read_u32(pdev->dev.of_node,
- "ti,davinci-nand-buswidth", &prop))
- if (prop == 16)
- pdata->options |= NAND_BUSWIDTH_16;
+ if (!of_property_read_u32(pdev->dev.of_node,
+ "ti,davinci-nand-buswidth", &prop) && prop == 16)
+ pdata->options |= NAND_BUSWIDTH_16;
+
if (of_property_read_bool(pdev->dev.of_node,
- "nand-on-flash-bbt") ||
- of_property_read_bool(pdev->dev.of_node,
"ti,davinci-nand-use-bbt"))
pdata->bbt_options = NAND_BBT_USE_FLASH;
@@ -628,7 +604,6 @@ static int nand_davinci_probe(struct platform_device *pdev)
void __iomem *base;
int ret;
uint32_t val;
- nand_ecc_modes_t ecc_mode;
struct mtd_info *mtd;
pdata = nand_davinci_get_pdata(pdev);
@@ -712,13 +687,53 @@ static int nand_davinci_probe(struct platform_device *pdev)
info->chip.write_buf = nand_davinci_write_buf;
/* Use board-specific ECC config */
- ecc_mode = pdata->ecc_mode;
+ info->chip.ecc.mode = pdata->ecc_mode;
ret = -EINVAL;
- switch (ecc_mode) {
+
+ info->clk = devm_clk_get(&pdev->dev, "aemif");
+ if (IS_ERR(info->clk)) {
+ ret = PTR_ERR(info->clk);
+ dev_dbg(&pdev->dev, "unable to get AEMIF clock, err %d\n", ret);
+ return ret;
+ }
+
+ ret = clk_prepare_enable(info->clk);
+ if (ret < 0) {
+ dev_dbg(&pdev->dev, "unable to enable AEMIF clock, err %d\n",
+ ret);
+ goto err_clk_enable;
+ }
+
+ spin_lock_irq(&davinci_nand_lock);
+
+ /* put CSxNAND into NAND mode */
+ val = davinci_nand_readl(info, NANDFCR_OFFSET);
+ val |= BIT(info->core_chipsel);
+ davinci_nand_writel(info, NANDFCR_OFFSET, val);
+
+ spin_unlock_irq(&davinci_nand_lock);
+
+ /* Scan to find existence of the device(s) */
+ ret = nand_scan_ident(mtd, pdata->mask_chipsel ? 2 : 1, NULL);
+ if (ret < 0) {
+ dev_dbg(&pdev->dev, "no NAND chip(s) found\n");
+ goto err;
+ }
+
+ switch (info->chip.ecc.mode) {
case NAND_ECC_NONE:
+ pdata->ecc_bits = 0;
+ break;
case NAND_ECC_SOFT:
pdata->ecc_bits = 0;
+ /*
+ * This driver expects Hamming based ECC when ecc_mode is set
+ * to NAND_ECC_SOFT. Force ecc.algo to NAND_ECC_HAMMING to
+ * avoid adding an extra ->ecc_algo field to
+ * davinci_nand_pdata.
+ */
+ info->chip.ecc.algo = NAND_ECC_HAMMING;
break;
case NAND_ECC_HW:
if (pdata->ecc_bits == 4) {
@@ -754,37 +769,6 @@ static int nand_davinci_probe(struct platform_device *pdev)
default:
return -EINVAL;
}
- info->chip.ecc.mode = ecc_mode;
-
- info->clk = devm_clk_get(&pdev->dev, "aemif");
- if (IS_ERR(info->clk)) {
- ret = PTR_ERR(info->clk);
- dev_dbg(&pdev->dev, "unable to get AEMIF clock, err %d\n", ret);
- return ret;
- }
-
- ret = clk_prepare_enable(info->clk);
- if (ret < 0) {
- dev_dbg(&pdev->dev, "unable to enable AEMIF clock, err %d\n",
- ret);
- goto err_clk_enable;
- }
-
- spin_lock_irq(&davinci_nand_lock);
-
- /* put CSxNAND into NAND mode */
- val = davinci_nand_readl(info, NANDFCR_OFFSET);
- val |= BIT(info->core_chipsel);
- davinci_nand_writel(info, NANDFCR_OFFSET, val);
-
- spin_unlock_irq(&davinci_nand_lock);
-
- /* Scan to find existence of the device(s) */
- ret = nand_scan_ident(mtd, pdata->mask_chipsel ? 2 : 1, NULL);
- if (ret < 0) {
- dev_dbg(&pdev->dev, "no NAND chip(s) found\n");
- goto err;
- }
/* Update ECC layout if needed ... for 1-bit HW ECC, the default
* is OK, but it allocates 6 bytes when only 3 are needed (for
@@ -805,26 +789,14 @@ static int nand_davinci_probe(struct platform_device *pdev)
* table marker fits in the free bytes.
*/
if (chunks == 1) {
- info->ecclayout = hwecc4_small;
- info->ecclayout.oobfree[1].length = mtd->oobsize - 16;
- goto syndrome_done;
- }
- if (chunks == 4) {
- info->ecclayout = hwecc4_2048;
- info->chip.ecc.mode = NAND_ECC_HW_OOB_FIRST;
- goto syndrome_done;
- }
- if (chunks == 8) {
- info->ecclayout = hwecc4_4096;
+ mtd_set_ooblayout(mtd, &hwecc4_small_ooblayout_ops);
+ } else if (chunks == 4 || chunks == 8) {
+ mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops);
info->chip.ecc.mode = NAND_ECC_HW_OOB_FIRST;
- goto syndrome_done;
+ } else {
+ ret = -EIO;
+ goto err;
}
-
- ret = -EIO;
- goto err;
-
-syndrome_done:
- info->chip.ecc.layout = &info->ecclayout;
}
ret = nand_scan_tail(mtd);
@@ -850,7 +822,7 @@ err:
err_clk_enable:
spin_lock_irq(&davinci_nand_lock);
- if (ecc_mode == NAND_ECC_HW_SYNDROME)
+ if (info->chip.ecc.mode == NAND_ECC_HW_SYNDROME)
ecc4_busy = false;
spin_unlock_irq(&davinci_nand_lock);
return ret;
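For the small-page case the three ECC sections above enumerate to {0-4}, {6-7} and {13-15} — ten ECC bytes in total, matching the deleted hwecc4_small table, with byte 5 left for the manufacturer bad-block marker — while the free sections give {8, 5} and {16, mtd->oobsize - 16}, folding in the oobfree[1].length fixup the old probe code applied by hand. Large pages (4 or 8 chunks) now simply reuse the generic nand_ooblayout_lp_ops instead of the deleted hwecc4_2048/hwecc4_4096 tables.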
diff --git a/drivers/mtd/nand/denali.c b/drivers/mtd/nand/denali.c
index 30bf5f690f78..0476ae8776d9 100644
--- a/drivers/mtd/nand/denali.c
+++ b/drivers/mtd/nand/denali.c
@@ -1374,13 +1374,41 @@ static void denali_hw_init(struct denali_nand_info *denali)
* correction
*/
#define ECC_8BITS 14
-static struct nand_ecclayout nand_8bit_oob = {
- .eccbytes = 14,
-};
-
#define ECC_15BITS 26
-static struct nand_ecclayout nand_15bit_oob = {
- .eccbytes = 26,
+
+static int denali_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct denali_nand_info *denali = mtd_to_denali(mtd);
+ struct nand_chip *chip = mtd_to_nand(mtd);
+
+ if (section)
+ return -ERANGE;
+
+ oobregion->offset = denali->bbtskipbytes;
+ oobregion->length = chip->ecc.total;
+
+ return 0;
+}
+
+static int denali_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct denali_nand_info *denali = mtd_to_denali(mtd);
+ struct nand_chip *chip = mtd_to_nand(mtd);
+
+ if (section)
+ return -ERANGE;
+
+ oobregion->offset = chip->ecc.total + denali->bbtskipbytes;
+ oobregion->length = mtd->oobsize - oobregion->offset;
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops denali_ooblayout_ops = {
+ .ecc = denali_ooblayout_ecc,
+ .free = denali_ooblayout_free,
};
static uint8_t bbt_pattern[] = {'B', 'b', 't', '0' };
@@ -1561,7 +1589,6 @@ int denali_init(struct denali_nand_info *denali)
ECC_SECTOR_SIZE)))) {
/* if MLC OOB size is large enough, use 15bit ECC*/
denali->nand.ecc.strength = 15;
- denali->nand.ecc.layout = &nand_15bit_oob;
denali->nand.ecc.bytes = ECC_15BITS;
iowrite32(15, denali->flash_reg + ECC_CORRECTION);
} else if (mtd->oobsize < (denali->bbtskipbytes +
@@ -1571,20 +1598,13 @@ int denali_init(struct denali_nand_info *denali)
goto failed_req_irq;
} else {
denali->nand.ecc.strength = 8;
- denali->nand.ecc.layout = &nand_8bit_oob;
denali->nand.ecc.bytes = ECC_8BITS;
iowrite32(8, denali->flash_reg + ECC_CORRECTION);
}
+ mtd_set_ooblayout(mtd, &denali_ooblayout_ops);
denali->nand.ecc.bytes *= denali->devnum;
denali->nand.ecc.strength *= denali->devnum;
- denali->nand.ecc.layout->eccbytes *=
- mtd->writesize / ECC_SECTOR_SIZE;
- denali->nand.ecc.layout->oobfree[0].offset =
- denali->bbtskipbytes + denali->nand.ecc.layout->eccbytes;
- denali->nand.ecc.layout->oobfree[0].length =
- mtd->oobsize - denali->nand.ecc.layout->eccbytes -
- denali->bbtskipbytes;
/*
* Let driver know the total blocks number and how many blocks
diff --git a/drivers/mtd/nand/diskonchip.c b/drivers/mtd/nand/diskonchip.c
index 547c1002941d..a023ab9e9cbf 100644
--- a/drivers/mtd/nand/diskonchip.c
+++ b/drivers/mtd/nand/diskonchip.c
@@ -950,20 +950,50 @@ static int doc200x_correct_data(struct mtd_info *mtd, u_char *dat,
//u_char mydatabuf[528];
-/* The strange out-of-order .oobfree list below is a (possibly unneeded)
- * attempt to retain compatibility. It used to read:
- * .oobfree = { {8, 8} }
- * Since that leaves two bytes unusable, it was changed. But the following
- * scheme might affect existing jffs2 installs by moving the cleanmarker:
- * .oobfree = { {6, 10} }
- * jffs2 seems to handle the above gracefully, but the current scheme seems
- * safer. The only problem with it is that any code that parses oobfree must
- * be able to handle out-of-order segments.
- */
-static struct nand_ecclayout doc200x_oobinfo = {
- .eccbytes = 6,
- .eccpos = {0, 1, 2, 3, 4, 5},
- .oobfree = {{8, 8}, {6, 2}}
+static int doc200x_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section)
+ return -ERANGE;
+
+ oobregion->offset = 0;
+ oobregion->length = 6;
+
+ return 0;
+}
+
+static int doc200x_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section > 1)
+ return -ERANGE;
+
+ /*
+ * The strange out-of-order free bytes definition is a (possibly
+ * unneeded) attempt to retain compatibility. It used to read:
+ * .oobfree = { {8, 8} }
+ * Since that leaves two bytes unusable, it was changed. But the
+ * following scheme might affect existing jffs2 installs by moving the
+ * cleanmarker:
+ * .oobfree = { {6, 10} }
+ * jffs2 seems to handle the above gracefully, but the current scheme
+ * seems safer. The only problem with it is that any code retrieving
+ * free bytes position must be able to handle out-of-order segments.
+ */
+ if (!section) {
+ oobregion->offset = 8;
+ oobregion->length = 8;
+ } else {
+ oobregion->offset = 6;
+ oobregion->length = 2;
+ }
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops doc200x_ooblayout_ops = {
+ .ecc = doc200x_ooblayout_ecc,
+ .free = doc200x_ooblayout_free,
};
/* Find the (I)NFTL Media Header, and optionally also the mirror media header.
@@ -1537,6 +1567,7 @@ static int __init doc_probe(unsigned long physadr)
nand->bbt_md = nand->bbt_td + 1;
mtd->owner = THIS_MODULE;
+ mtd_set_ooblayout(mtd, &doc200x_ooblayout_ops);
nand_set_controller_data(nand, doc);
nand->select_chip = doc200x_select_chip;
@@ -1548,7 +1579,6 @@ static int __init doc_probe(unsigned long physadr)
nand->ecc.calculate = doc200x_calculate_ecc;
nand->ecc.correct = doc200x_correct_data;
- nand->ecc.layout = &doc200x_oobinfo;
nand->ecc.mode = NAND_ECC_HW_SYNDROME;
nand->ecc.size = 512;
nand->ecc.bytes = 6;
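The comment above is a reminder that free-byte sections are not guaranteed to be sorted by offset (doc200x reports {8, 8} before {6, 2}), so anything consuming the free layout should iterate by section rather than assume increasing offsets. A sketch of such an iteration, assuming the mtd_ooblayout_free() counterpart to the mtd_ooblayout_ecc() helper used elsewhere in this series:

/* Illustrative only: total up the free OOB bytes without assuming the
 * sections come back in offset order. */
static int count_free_oob_bytes(struct mtd_info *mtd)
{
        struct mtd_oob_region oobregion;
        int section = 0, total = 0;

        while (!mtd_ooblayout_free(mtd, section++, &oobregion))
                total += oobregion.length;

        return total;
}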
diff --git a/drivers/mtd/nand/docg4.c b/drivers/mtd/nand/docg4.c
index d86a60e1bbcb..47316998017f 100644
--- a/drivers/mtd/nand/docg4.c
+++ b/drivers/mtd/nand/docg4.c
@@ -222,10 +222,33 @@ struct docg4_priv {
* Bytes 8 - 14 are hw-generated ecc covering entire page + oob bytes 0 - 14.
* Byte 15 (the last) is used by the driver as a "page written" flag.
*/
-static struct nand_ecclayout docg4_oobinfo = {
- .eccbytes = 9,
- .eccpos = {7, 8, 9, 10, 11, 12, 13, 14, 15},
- .oobfree = { {.offset = 2, .length = 5} }
+static int docg4_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section)
+ return -ERANGE;
+
+ oobregion->offset = 7;
+ oobregion->length = 9;
+
+ return 0;
+}
+
+static int docg4_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section)
+ return -ERANGE;
+
+ oobregion->offset = 2;
+ oobregion->length = 5;
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops docg4_ooblayout_ops = {
+ .ecc = docg4_ooblayout_ecc,
+ .free = docg4_ooblayout_free,
};
/*
@@ -1209,6 +1232,7 @@ static void __init init_mtd_structs(struct mtd_info *mtd)
mtd->writesize = DOCG4_PAGE_SIZE;
mtd->erasesize = DOCG4_BLOCK_SIZE;
mtd->oobsize = DOCG4_OOB_SIZE;
+ mtd_set_ooblayout(mtd, &docg4_ooblayout_ops);
nand->chipsize = DOCG4_CHIP_SIZE;
nand->chip_shift = DOCG4_CHIP_SHIFT;
nand->bbt_erase_shift = nand->phys_erase_shift = DOCG4_ERASE_SHIFT;
@@ -1217,7 +1241,6 @@ static void __init init_mtd_structs(struct mtd_info *mtd)
nand->pagemask = 0x3ffff;
nand->badblockpos = NAND_LARGE_BADBLOCK_POS;
nand->badblockbits = 8;
- nand->ecc.layout = &docg4_oobinfo;
nand->ecc.mode = NAND_ECC_HW_SYNDROME;
nand->ecc.size = DOCG4_PAGE_SIZE;
nand->ecc.prepad = 8;
diff --git a/drivers/mtd/nand/fsl_elbc_nand.c b/drivers/mtd/nand/fsl_elbc_nand.c
index 059d5f7ec124..60a88f24c6b3 100644
--- a/drivers/mtd/nand/fsl_elbc_nand.c
+++ b/drivers/mtd/nand/fsl_elbc_nand.c
@@ -79,32 +79,53 @@ struct fsl_elbc_fcm_ctrl {
/* These map to the positions used by the FCM hardware ECC generator */
-/* Small Page FLASH with FMR[ECCM] = 0 */
-static struct nand_ecclayout fsl_elbc_oob_sp_eccm0 = {
- .eccbytes = 3,
- .eccpos = {6, 7, 8},
- .oobfree = { {0, 5}, {9, 7} },
-};
+static int fsl_elbc_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ struct fsl_elbc_mtd *priv = nand_get_controller_data(chip);
-/* Small Page FLASH with FMR[ECCM] = 1 */
-static struct nand_ecclayout fsl_elbc_oob_sp_eccm1 = {
- .eccbytes = 3,
- .eccpos = {8, 9, 10},
- .oobfree = { {0, 5}, {6, 2}, {11, 5} },
-};
+ if (section >= chip->ecc.steps)
+ return -ERANGE;
-/* Large Page FLASH with FMR[ECCM] = 0 */
-static struct nand_ecclayout fsl_elbc_oob_lp_eccm0 = {
- .eccbytes = 12,
- .eccpos = {6, 7, 8, 22, 23, 24, 38, 39, 40, 54, 55, 56},
- .oobfree = { {1, 5}, {9, 13}, {25, 13}, {41, 13}, {57, 7} },
-};
+ oobregion->offset = (16 * section) + 6;
+ if (priv->fmr & FMR_ECCM)
+ oobregion->offset += 2;
-/* Large Page FLASH with FMR[ECCM] = 1 */
-static struct nand_ecclayout fsl_elbc_oob_lp_eccm1 = {
- .eccbytes = 12,
- .eccpos = {8, 9, 10, 24, 25, 26, 40, 41, 42, 56, 57, 58},
- .oobfree = { {1, 7}, {11, 13}, {27, 13}, {43, 13}, {59, 5} },
+ oobregion->length = chip->ecc.bytes;
+
+ return 0;
+}
+
+static int fsl_elbc_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ struct fsl_elbc_mtd *priv = nand_get_controller_data(chip);
+
+ if (section > chip->ecc.steps)
+ return -ERANGE;
+
+ if (!section) {
+ oobregion->offset = 0;
+ if (mtd->writesize > 512)
+ oobregion->offset++;
+ oobregion->length = (priv->fmr & FMR_ECCM) ? 7 : 5;
+ } else {
+ oobregion->offset = (16 * section) -
+ ((priv->fmr & FMR_ECCM) ? 5 : 7);
+ if (section < chip->ecc.steps)
+ oobregion->length = 13;
+ else
+ oobregion->length = mtd->oobsize - oobregion->offset;
+ }
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops fsl_elbc_ooblayout_ops = {
+ .ecc = fsl_elbc_ooblayout_ecc,
+ .free = fsl_elbc_ooblayout_free,
};
/*
@@ -657,8 +678,8 @@ static int fsl_elbc_chip_init_tail(struct mtd_info *mtd)
chip->ecc.bytes);
dev_dbg(priv->dev, "fsl_elbc_init: nand->ecc.total = %d\n",
chip->ecc.total);
- dev_dbg(priv->dev, "fsl_elbc_init: nand->ecc.layout = %p\n",
- chip->ecc.layout);
+ dev_dbg(priv->dev, "fsl_elbc_init: mtd->ooblayout = %p\n",
+ mtd->ooblayout);
dev_dbg(priv->dev, "fsl_elbc_init: mtd->flags = %08x\n", mtd->flags);
dev_dbg(priv->dev, "fsl_elbc_init: mtd->size = %lld\n", mtd->size);
dev_dbg(priv->dev, "fsl_elbc_init: mtd->erasesize = %d\n",
@@ -675,14 +696,6 @@ static int fsl_elbc_chip_init_tail(struct mtd_info *mtd)
} else if (mtd->writesize == 2048) {
priv->page_size = 1;
setbits32(&lbc->bank[priv->bank].or, OR_FCM_PGS);
- /* adjust ecc setup if needed */
- if ((in_be32(&lbc->bank[priv->bank].br) & BR_DECC) ==
- BR_DECC_CHK_GEN) {
- chip->ecc.size = 512;
- chip->ecc.layout = (priv->fmr & FMR_ECCM) ?
- &fsl_elbc_oob_lp_eccm1 :
- &fsl_elbc_oob_lp_eccm0;
- }
} else {
dev_err(priv->dev,
"fsl_elbc_init: page size %d is not supported\n",
@@ -780,15 +793,14 @@ static int fsl_elbc_chip_init(struct fsl_elbc_mtd *priv)
if ((in_be32(&lbc->bank[priv->bank].br) & BR_DECC) ==
BR_DECC_CHK_GEN) {
chip->ecc.mode = NAND_ECC_HW;
- /* put in small page settings and adjust later if needed */
- chip->ecc.layout = (priv->fmr & FMR_ECCM) ?
- &fsl_elbc_oob_sp_eccm1 : &fsl_elbc_oob_sp_eccm0;
+ mtd_set_ooblayout(mtd, &fsl_elbc_ooblayout_ops);
chip->ecc.size = 512;
chip->ecc.bytes = 3;
chip->ecc.strength = 1;
} else {
/* otherwise fall back to default software ECC */
chip->ecc.mode = NAND_ECC_SOFT;
+ chip->ecc.algo = NAND_ECC_HAMMING;
}
return 0;
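Plugging numbers into the formula above reproduces the deleted large-page tables: with ECC at 16 * section + 6 (plus 2 when FMR[ECCM] is set) and three bytes per step, the four sections land at 6-8, 22-24, 38-40 and 54-56 for ECCM = 0, and at 8-10, 24-26, 40-42 and 56-58 for ECCM = 1 — exactly the eccpos lists of the removed fsl_elbc_oob_lp_eccm0/eccm1 layouts. The small-page case is just the section 0 entry of the same formula.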
diff --git a/drivers/mtd/nand/fsl_ifc_nand.c b/drivers/mtd/nand/fsl_ifc_nand.c
index 43f5a3a4873f..4e9e5fd8faf3 100644
--- a/drivers/mtd/nand/fsl_ifc_nand.c
+++ b/drivers/mtd/nand/fsl_ifc_nand.c
@@ -67,136 +67,6 @@ struct fsl_ifc_nand_ctrl {
static struct fsl_ifc_nand_ctrl *ifc_nand_ctrl;
-/* 512-byte page with 4-bit ECC, 8-bit */
-static struct nand_ecclayout oob_512_8bit_ecc4 = {
- .eccbytes = 8,
- .eccpos = {8, 9, 10, 11, 12, 13, 14, 15},
- .oobfree = { {0, 5}, {6, 2} },
-};
-
-/* 512-byte page with 4-bit ECC, 16-bit */
-static struct nand_ecclayout oob_512_16bit_ecc4 = {
- .eccbytes = 8,
- .eccpos = {8, 9, 10, 11, 12, 13, 14, 15},
- .oobfree = { {2, 6}, },
-};
-
-/* 2048-byte page size with 4-bit ECC */
-static struct nand_ecclayout oob_2048_ecc4 = {
- .eccbytes = 32,
- .eccpos = {
- 8, 9, 10, 11, 12, 13, 14, 15,
- 16, 17, 18, 19, 20, 21, 22, 23,
- 24, 25, 26, 27, 28, 29, 30, 31,
- 32, 33, 34, 35, 36, 37, 38, 39,
- },
- .oobfree = { {2, 6}, {40, 24} },
-};
-
-/* 4096-byte page size with 4-bit ECC */
-static struct nand_ecclayout oob_4096_ecc4 = {
- .eccbytes = 64,
- .eccpos = {
- 8, 9, 10, 11, 12, 13, 14, 15,
- 16, 17, 18, 19, 20, 21, 22, 23,
- 24, 25, 26, 27, 28, 29, 30, 31,
- 32, 33, 34, 35, 36, 37, 38, 39,
- 40, 41, 42, 43, 44, 45, 46, 47,
- 48, 49, 50, 51, 52, 53, 54, 55,
- 56, 57, 58, 59, 60, 61, 62, 63,
- 64, 65, 66, 67, 68, 69, 70, 71,
- },
- .oobfree = { {2, 6}, {72, 56} },
-};
-
-/* 4096-byte page size with 8-bit ECC -- requires 218-byte OOB */
-static struct nand_ecclayout oob_4096_ecc8 = {
- .eccbytes = 128,
- .eccpos = {
- 8, 9, 10, 11, 12, 13, 14, 15,
- 16, 17, 18, 19, 20, 21, 22, 23,
- 24, 25, 26, 27, 28, 29, 30, 31,
- 32, 33, 34, 35, 36, 37, 38, 39,
- 40, 41, 42, 43, 44, 45, 46, 47,
- 48, 49, 50, 51, 52, 53, 54, 55,
- 56, 57, 58, 59, 60, 61, 62, 63,
- 64, 65, 66, 67, 68, 69, 70, 71,
- 72, 73, 74, 75, 76, 77, 78, 79,
- 80, 81, 82, 83, 84, 85, 86, 87,
- 88, 89, 90, 91, 92, 93, 94, 95,
- 96, 97, 98, 99, 100, 101, 102, 103,
- 104, 105, 106, 107, 108, 109, 110, 111,
- 112, 113, 114, 115, 116, 117, 118, 119,
- 120, 121, 122, 123, 124, 125, 126, 127,
- 128, 129, 130, 131, 132, 133, 134, 135,
- },
- .oobfree = { {2, 6}, {136, 82} },
-};
-
-/* 8192-byte page size with 4-bit ECC */
-static struct nand_ecclayout oob_8192_ecc4 = {
- .eccbytes = 128,
- .eccpos = {
- 8, 9, 10, 11, 12, 13, 14, 15,
- 16, 17, 18, 19, 20, 21, 22, 23,
- 24, 25, 26, 27, 28, 29, 30, 31,
- 32, 33, 34, 35, 36, 37, 38, 39,
- 40, 41, 42, 43, 44, 45, 46, 47,
- 48, 49, 50, 51, 52, 53, 54, 55,
- 56, 57, 58, 59, 60, 61, 62, 63,
- 64, 65, 66, 67, 68, 69, 70, 71,
- 72, 73, 74, 75, 76, 77, 78, 79,
- 80, 81, 82, 83, 84, 85, 86, 87,
- 88, 89, 90, 91, 92, 93, 94, 95,
- 96, 97, 98, 99, 100, 101, 102, 103,
- 104, 105, 106, 107, 108, 109, 110, 111,
- 112, 113, 114, 115, 116, 117, 118, 119,
- 120, 121, 122, 123, 124, 125, 126, 127,
- 128, 129, 130, 131, 132, 133, 134, 135,
- },
- .oobfree = { {2, 6}, {136, 208} },
-};
-
-/* 8192-byte page size with 8-bit ECC -- requires 218-byte OOB */
-static struct nand_ecclayout oob_8192_ecc8 = {
- .eccbytes = 256,
- .eccpos = {
- 8, 9, 10, 11, 12, 13, 14, 15,
- 16, 17, 18, 19, 20, 21, 22, 23,
- 24, 25, 26, 27, 28, 29, 30, 31,
- 32, 33, 34, 35, 36, 37, 38, 39,
- 40, 41, 42, 43, 44, 45, 46, 47,
- 48, 49, 50, 51, 52, 53, 54, 55,
- 56, 57, 58, 59, 60, 61, 62, 63,
- 64, 65, 66, 67, 68, 69, 70, 71,
- 72, 73, 74, 75, 76, 77, 78, 79,
- 80, 81, 82, 83, 84, 85, 86, 87,
- 88, 89, 90, 91, 92, 93, 94, 95,
- 96, 97, 98, 99, 100, 101, 102, 103,
- 104, 105, 106, 107, 108, 109, 110, 111,
- 112, 113, 114, 115, 116, 117, 118, 119,
- 120, 121, 122, 123, 124, 125, 126, 127,
- 128, 129, 130, 131, 132, 133, 134, 135,
- 136, 137, 138, 139, 140, 141, 142, 143,
- 144, 145, 146, 147, 148, 149, 150, 151,
- 152, 153, 154, 155, 156, 157, 158, 159,
- 160, 161, 162, 163, 164, 165, 166, 167,
- 168, 169, 170, 171, 172, 173, 174, 175,
- 176, 177, 178, 179, 180, 181, 182, 183,
- 184, 185, 186, 187, 188, 189, 190, 191,
- 192, 193, 194, 195, 196, 197, 198, 199,
- 200, 201, 202, 203, 204, 205, 206, 207,
- 208, 209, 210, 211, 212, 213, 214, 215,
- 216, 217, 218, 219, 220, 221, 222, 223,
- 224, 225, 226, 227, 228, 229, 230, 231,
- 232, 233, 234, 235, 236, 237, 238, 239,
- 240, 241, 242, 243, 244, 245, 246, 247,
- 248, 249, 250, 251, 252, 253, 254, 255,
- 256, 257, 258, 259, 260, 261, 262, 263,
- },
- .oobfree = { {2, 6}, {264, 80} },
-};
-
/*
* Generic flash bbt descriptors
*/
@@ -223,6 +93,57 @@ static struct nand_bbt_descr bbt_mirror_descr = {
.pattern = mirror_pattern,
};
+static int fsl_ifc_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+
+ if (section)
+ return -ERANGE;
+
+ oobregion->offset = 8;
+ oobregion->length = chip->ecc.total;
+
+ return 0;
+}
+
+static int fsl_ifc_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+
+ if (section > 1)
+ return -ERANGE;
+
+ if (mtd->writesize == 512 &&
+ !(chip->options & NAND_BUSWIDTH_16)) {
+ if (!section) {
+ oobregion->offset = 0;
+ oobregion->length = 5;
+ } else {
+ oobregion->offset = 6;
+ oobregion->length = 2;
+ }
+
+ return 0;
+ }
+
+ if (!section) {
+ oobregion->offset = 2;
+ oobregion->length = 6;
+ } else {
+ oobregion->offset = chip->ecc.total + 8;
+ oobregion->length = mtd->oobsize - oobregion->offset;
+ }
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops fsl_ifc_ooblayout_ops = {
+ .ecc = fsl_ifc_ooblayout_ecc,
+ .free = fsl_ifc_ooblayout_free,
+};
+
/*
* Set up the IFC hardware block and page address fields, and the ifc nand
* structure addr field to point to the correct IFC buffer in memory
@@ -232,7 +153,7 @@ static void set_addr(struct mtd_info *mtd, int column, int page_addr, int oob)
struct nand_chip *chip = mtd_to_nand(mtd);
struct fsl_ifc_mtd *priv = nand_get_controller_data(chip);
struct fsl_ifc_ctrl *ctrl = priv->ctrl;
- struct fsl_ifc_regs __iomem *ifc = ctrl->regs;
+ struct fsl_ifc_runtime __iomem *ifc = ctrl->rregs;
int buf_num;
ifc_nand_ctrl->page = page_addr;
@@ -257,18 +178,22 @@ static int is_blank(struct mtd_info *mtd, unsigned int bufnum)
u8 __iomem *addr = priv->vbase + bufnum * (mtd->writesize * 2);
u32 __iomem *mainarea = (u32 __iomem *)addr;
u8 __iomem *oob = addr + mtd->writesize;
- int i;
+ struct mtd_oob_region oobregion = { };
+ int i, section = 0;
for (i = 0; i < mtd->writesize / 4; i++) {
if (__raw_readl(&mainarea[i]) != 0xffffffff)
return 0;
}
- for (i = 0; i < chip->ecc.layout->eccbytes; i++) {
- int pos = chip->ecc.layout->eccpos[i];
+ mtd_ooblayout_ecc(mtd, section++, &oobregion);
+ while (oobregion.length) {
+ for (i = 0; i < oobregion.length; i++) {
+ if (__raw_readb(&oob[oobregion.offset + i]) != 0xff)
+ return 0;
+ }
- if (__raw_readb(&oob[pos]) != 0xff)
- return 0;
+ mtd_ooblayout_ecc(mtd, section++, &oobregion);
}
return 1;
@@ -295,7 +220,7 @@ static void fsl_ifc_run_command(struct mtd_info *mtd)
struct fsl_ifc_mtd *priv = nand_get_controller_data(chip);
struct fsl_ifc_ctrl *ctrl = priv->ctrl;
struct fsl_ifc_nand_ctrl *nctrl = ifc_nand_ctrl;
- struct fsl_ifc_regs __iomem *ifc = ctrl->regs;
+ struct fsl_ifc_runtime __iomem *ifc = ctrl->rregs;
u32 eccstat[4];
int i;
@@ -371,7 +296,7 @@ static void fsl_ifc_do_read(struct nand_chip *chip,
{
struct fsl_ifc_mtd *priv = nand_get_controller_data(chip);
struct fsl_ifc_ctrl *ctrl = priv->ctrl;
- struct fsl_ifc_regs __iomem *ifc = ctrl->regs;
+ struct fsl_ifc_runtime __iomem *ifc = ctrl->rregs;
/* Program FIR/IFC_NAND_FCR0 for Small/Large page */
if (mtd->writesize > 512) {
@@ -411,7 +336,7 @@ static void fsl_ifc_cmdfunc(struct mtd_info *mtd, unsigned int command,
struct nand_chip *chip = mtd_to_nand(mtd);
struct fsl_ifc_mtd *priv = nand_get_controller_data(chip);
struct fsl_ifc_ctrl *ctrl = priv->ctrl;
- struct fsl_ifc_regs __iomem *ifc = ctrl->regs;
+ struct fsl_ifc_runtime __iomem *ifc = ctrl->rregs;
/* clear the read buffer */
ifc_nand_ctrl->read_bytes = 0;
@@ -723,7 +648,7 @@ static int fsl_ifc_wait(struct mtd_info *mtd, struct nand_chip *chip)
{
struct fsl_ifc_mtd *priv = nand_get_controller_data(chip);
struct fsl_ifc_ctrl *ctrl = priv->ctrl;
- struct fsl_ifc_regs __iomem *ifc = ctrl->regs;
+ struct fsl_ifc_runtime __iomem *ifc = ctrl->rregs;
u32 nand_fsr;
/* Use READ_STATUS command, but wait for the device to be ready */
@@ -808,8 +733,8 @@ static int fsl_ifc_chip_init_tail(struct mtd_info *mtd)
chip->ecc.bytes);
dev_dbg(priv->dev, "%s: nand->ecc.total = %d\n", __func__,
chip->ecc.total);
- dev_dbg(priv->dev, "%s: nand->ecc.layout = %p\n", __func__,
- chip->ecc.layout);
+ dev_dbg(priv->dev, "%s: mtd->ooblayout = %p\n", __func__,
+ mtd->ooblayout);
dev_dbg(priv->dev, "%s: mtd->flags = %08x\n", __func__, mtd->flags);
dev_dbg(priv->dev, "%s: mtd->size = %lld\n", __func__, mtd->size);
dev_dbg(priv->dev, "%s: mtd->erasesize = %d\n", __func__,
@@ -825,39 +750,42 @@ static int fsl_ifc_chip_init_tail(struct mtd_info *mtd)
static void fsl_ifc_sram_init(struct fsl_ifc_mtd *priv)
{
struct fsl_ifc_ctrl *ctrl = priv->ctrl;
- struct fsl_ifc_regs __iomem *ifc = ctrl->regs;
+ struct fsl_ifc_runtime __iomem *ifc_runtime = ctrl->rregs;
+ struct fsl_ifc_global __iomem *ifc_global = ctrl->gregs;
uint32_t csor = 0, csor_8k = 0, csor_ext = 0;
uint32_t cs = priv->bank;
/* Save CSOR and CSOR_ext */
- csor = ifc_in32(&ifc->csor_cs[cs].csor);
- csor_ext = ifc_in32(&ifc->csor_cs[cs].csor_ext);
+ csor = ifc_in32(&ifc_global->csor_cs[cs].csor);
+ csor_ext = ifc_in32(&ifc_global->csor_cs[cs].csor_ext);
/* chage PageSize 8K and SpareSize 1K*/
csor_8k = (csor & ~(CSOR_NAND_PGS_MASK)) | 0x0018C000;
- ifc_out32(csor_8k, &ifc->csor_cs[cs].csor);
- ifc_out32(0x0000400, &ifc->csor_cs[cs].csor_ext);
+ ifc_out32(csor_8k, &ifc_global->csor_cs[cs].csor);
+ ifc_out32(0x0000400, &ifc_global->csor_cs[cs].csor_ext);
/* READID */
ifc_out32((IFC_FIR_OP_CW0 << IFC_NAND_FIR0_OP0_SHIFT) |
- (IFC_FIR_OP_UA << IFC_NAND_FIR0_OP1_SHIFT) |
- (IFC_FIR_OP_RB << IFC_NAND_FIR0_OP2_SHIFT),
- &ifc->ifc_nand.nand_fir0);
+ (IFC_FIR_OP_UA << IFC_NAND_FIR0_OP1_SHIFT) |
+ (IFC_FIR_OP_RB << IFC_NAND_FIR0_OP2_SHIFT),
+ &ifc_runtime->ifc_nand.nand_fir0);
ifc_out32(NAND_CMD_READID << IFC_NAND_FCR0_CMD0_SHIFT,
- &ifc->ifc_nand.nand_fcr0);
- ifc_out32(0x0, &ifc->ifc_nand.row3);
+ &ifc_runtime->ifc_nand.nand_fcr0);
+ ifc_out32(0x0, &ifc_runtime->ifc_nand.row3);
- ifc_out32(0x0, &ifc->ifc_nand.nand_fbcr);
+ ifc_out32(0x0, &ifc_runtime->ifc_nand.nand_fbcr);
/* Program ROW0/COL0 */
- ifc_out32(0x0, &ifc->ifc_nand.row0);
- ifc_out32(0x0, &ifc->ifc_nand.col0);
+ ifc_out32(0x0, &ifc_runtime->ifc_nand.row0);
+ ifc_out32(0x0, &ifc_runtime->ifc_nand.col0);
/* set the chip select for NAND Transaction */
- ifc_out32(cs << IFC_NAND_CSEL_SHIFT, &ifc->ifc_nand.nand_csel);
+ ifc_out32(cs << IFC_NAND_CSEL_SHIFT,
+ &ifc_runtime->ifc_nand.nand_csel);
/* start read seq */
- ifc_out32(IFC_NAND_SEQ_STRT_FIR_STRT, &ifc->ifc_nand.nandseq_strt);
+ ifc_out32(IFC_NAND_SEQ_STRT_FIR_STRT,
+ &ifc_runtime->ifc_nand.nandseq_strt);
/* wait for command complete flag or timeout */
wait_event_timeout(ctrl->nand_wait, ctrl->nand_stat,
@@ -867,17 +795,17 @@ static void fsl_ifc_sram_init(struct fsl_ifc_mtd *priv)
printk(KERN_ERR "fsl-ifc: Failed to Initialise SRAM\n");
/* Restore CSOR and CSOR_ext */
- ifc_out32(csor, &ifc->csor_cs[cs].csor);
- ifc_out32(csor_ext, &ifc->csor_cs[cs].csor_ext);
+ ifc_out32(csor, &ifc_global->csor_cs[cs].csor);
+ ifc_out32(csor_ext, &ifc_global->csor_cs[cs].csor_ext);
}
static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
{
struct fsl_ifc_ctrl *ctrl = priv->ctrl;
- struct fsl_ifc_regs __iomem *ifc = ctrl->regs;
+ struct fsl_ifc_global __iomem *ifc_global = ctrl->gregs;
+ struct fsl_ifc_runtime __iomem *ifc_runtime = ctrl->rregs;
struct nand_chip *chip = &priv->chip;
struct mtd_info *mtd = nand_to_mtd(&priv->chip);
- struct nand_ecclayout *layout;
u32 csor;
/* Fill in fsl_ifc_mtd structure */
@@ -886,7 +814,8 @@ static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
/* fill in nand_chip structure */
/* set up function call table */
- if ((ifc_in32(&ifc->cspr_cs[priv->bank].cspr)) & CSPR_PORT_SIZE_16)
+ if ((ifc_in32(&ifc_global->cspr_cs[priv->bank].cspr))
+ & CSPR_PORT_SIZE_16)
chip->read_byte = fsl_ifc_read_byte16;
else
chip->read_byte = fsl_ifc_read_byte;
@@ -900,13 +829,14 @@ static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
chip->bbt_td = &bbt_main_descr;
chip->bbt_md = &bbt_mirror_descr;
- ifc_out32(0x0, &ifc->ifc_nand.ncfgr);
+ ifc_out32(0x0, &ifc_runtime->ifc_nand.ncfgr);
/* set up nand options */
chip->bbt_options = NAND_BBT_USE_FLASH;
chip->options = NAND_NO_SUBPAGE_WRITE;
- if (ifc_in32(&ifc->cspr_cs[priv->bank].cspr) & CSPR_PORT_SIZE_16) {
+ if (ifc_in32(&ifc_global->cspr_cs[priv->bank].cspr)
+ & CSPR_PORT_SIZE_16) {
chip->read_byte = fsl_ifc_read_byte16;
chip->options |= NAND_BUSWIDTH_16;
} else {
@@ -919,20 +849,11 @@ static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
chip->ecc.read_page = fsl_ifc_read_page;
chip->ecc.write_page = fsl_ifc_write_page;
- csor = ifc_in32(&ifc->csor_cs[priv->bank].csor);
-
- /* Hardware generates ECC per 512 Bytes */
- chip->ecc.size = 512;
- chip->ecc.bytes = 8;
- chip->ecc.strength = 4;
+ csor = ifc_in32(&ifc_global->csor_cs[priv->bank].csor);
switch (csor & CSOR_NAND_PGS_MASK) {
case CSOR_NAND_PGS_512:
- if (chip->options & NAND_BUSWIDTH_16) {
- layout = &oob_512_16bit_ecc4;
- } else {
- layout = &oob_512_8bit_ecc4;
-
+ if (!(chip->options & NAND_BUSWIDTH_16)) {
/* Avoid conflict with bad block marker */
bbt_main_descr.offs = 0;
bbt_mirror_descr.offs = 0;
@@ -942,35 +863,16 @@ static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
break;
case CSOR_NAND_PGS_2K:
- layout = &oob_2048_ecc4;
priv->bufnum_mask = 3;
break;
case CSOR_NAND_PGS_4K:
- if ((csor & CSOR_NAND_ECC_MODE_MASK) ==
- CSOR_NAND_ECC_MODE_4) {
- layout = &oob_4096_ecc4;
- } else {
- layout = &oob_4096_ecc8;
- chip->ecc.bytes = 16;
- chip->ecc.strength = 8;
- }
-
priv->bufnum_mask = 1;
break;
case CSOR_NAND_PGS_8K:
- if ((csor & CSOR_NAND_ECC_MODE_MASK) ==
- CSOR_NAND_ECC_MODE_4) {
- layout = &oob_8192_ecc4;
- } else {
- layout = &oob_8192_ecc8;
- chip->ecc.bytes = 16;
- chip->ecc.strength = 8;
- }
-
priv->bufnum_mask = 0;
- break;
+ break;
default:
dev_err(priv->dev, "bad csor %#x: bad page size\n", csor);
@@ -980,9 +882,20 @@ static int fsl_ifc_chip_init(struct fsl_ifc_mtd *priv)
/* Must also set CSOR_NAND_ECC_ENC_EN if DEC_EN set */
if (csor & CSOR_NAND_ECC_DEC_EN) {
chip->ecc.mode = NAND_ECC_HW;
- chip->ecc.layout = layout;
+ mtd_set_ooblayout(mtd, &fsl_ifc_ooblayout_ops);
+
+ /* Hardware generates ECC per 512 Bytes */
+ chip->ecc.size = 512;
+ if ((csor & CSOR_NAND_ECC_MODE_MASK) == CSOR_NAND_ECC_MODE_4) {
+ chip->ecc.bytes = 8;
+ chip->ecc.strength = 4;
+ } else {
+ chip->ecc.bytes = 16;
+ chip->ecc.strength = 8;
+ }
} else {
chip->ecc.mode = NAND_ECC_SOFT;
+ chip->ecc.algo = NAND_ECC_HAMMING;
}
if (ctrl->version == FSL_IFC_VERSION_1_1_0)
@@ -1007,10 +920,10 @@ static int fsl_ifc_chip_remove(struct fsl_ifc_mtd *priv)
return 0;
}
-static int match_bank(struct fsl_ifc_regs __iomem *ifc, int bank,
+static int match_bank(struct fsl_ifc_global __iomem *ifc_global, int bank,
phys_addr_t addr)
{
- u32 cspr = ifc_in32(&ifc->cspr_cs[bank].cspr);
+ u32 cspr = ifc_in32(&ifc_global->cspr_cs[bank].cspr);
if (!(cspr & CSPR_V))
return 0;
@@ -1024,7 +937,7 @@ static DEFINE_MUTEX(fsl_ifc_nand_mutex);
static int fsl_ifc_nand_probe(struct platform_device *dev)
{
- struct fsl_ifc_regs __iomem *ifc;
+ struct fsl_ifc_runtime __iomem *ifc;
struct fsl_ifc_mtd *priv;
struct resource res;
static const char *part_probe_types[]
@@ -1034,9 +947,9 @@ static int fsl_ifc_nand_probe(struct platform_device *dev)
struct device_node *node = dev->dev.of_node;
struct mtd_info *mtd;
- if (!fsl_ifc_ctrl_dev || !fsl_ifc_ctrl_dev->regs)
+ if (!fsl_ifc_ctrl_dev || !fsl_ifc_ctrl_dev->rregs)
return -ENODEV;
- ifc = fsl_ifc_ctrl_dev->regs;
+ ifc = fsl_ifc_ctrl_dev->rregs;
/* get, allocate and map the memory resource */
ret = of_address_to_resource(node, 0, &res);
@@ -1047,7 +960,7 @@ static int fsl_ifc_nand_probe(struct platform_device *dev)
/* find which chip select it is connected to */
for (bank = 0; bank < fsl_ifc_ctrl_dev->banks; bank++) {
- if (match_bank(ifc, bank, res.start))
+ if (match_bank(fsl_ifc_ctrl_dev->gregs, bank, res.start))
break;
}
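/*
 * Editor's sketch (not part of this patch): the general shape of the
 * mtd_ooblayout_ops conversion applied throughout this series. Instead of
 * a static nand_ecclayout table, a driver describes its OOB layout with
 * two section-indexed callbacks and registers them via mtd_set_ooblayout().
 * All "example_" names are hypothetical; the geometry below assumes ECC
 * bytes sit after a 2-byte marker in each 16-byte OOB slice and that
 * chip->ecc.bytes < 14. Assumes the usual <linux/mtd/mtd.h> and
 * <linux/mtd/nand.h> includes.
 */
static int example_ooblayout_ecc(struct mtd_info *mtd, int section,
				 struct mtd_oob_region *oobregion)
{
	struct nand_chip *chip = mtd_to_nand(mtd);

	if (section >= chip->ecc.steps)
		return -ERANGE;

	/* ECC bytes of this step live right after the 2-byte marker */
	oobregion->offset = (section * 16) + 2;
	oobregion->length = chip->ecc.bytes;

	return 0;
}

static int example_ooblayout_free(struct mtd_info *mtd, int section,
				  struct mtd_oob_region *oobregion)
{
	struct nand_chip *chip = mtd_to_nand(mtd);

	if (section >= chip->ecc.steps)
		return -ERANGE;

	/* whatever remains of the 16-byte slice is free for user data */
	oobregion->offset = (section * 16) + 2 + chip->ecc.bytes;
	oobregion->length = 16 - 2 - chip->ecc.bytes;

	return 0;
}

static const struct mtd_ooblayout_ops example_ooblayout_ops = {
	.ecc = example_ooblayout_ecc,
	.free = example_ooblayout_free,
};

/* in the driver probe path: mtd_set_ooblayout(mtd, &example_ooblayout_ops); */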
diff --git a/drivers/mtd/nand/fsl_upm.c b/drivers/mtd/nand/fsl_upm.c
index cafd12de7276..d85fa2555b68 100644
--- a/drivers/mtd/nand/fsl_upm.c
+++ b/drivers/mtd/nand/fsl_upm.c
@@ -170,6 +170,7 @@ static int fun_chip_init(struct fsl_upm_nand *fun,
fun->chip.read_buf = fun_read_buf;
fun->chip.write_buf = fun_write_buf;
fun->chip.ecc.mode = NAND_ECC_SOFT;
+ fun->chip.ecc.algo = NAND_ECC_HAMMING;
if (fun->mchip_count > 1)
fun->chip.select_chip = fun_select_chip;
diff --git a/drivers/mtd/nand/fsmc_nand.c b/drivers/mtd/nand/fsmc_nand.c
index 1bdcd4fa26d4..d4f454a4b35e 100644
--- a/drivers/mtd/nand/fsmc_nand.c
+++ b/drivers/mtd/nand/fsmc_nand.c
@@ -39,210 +39,41 @@
#include <linux/amba/bus.h>
#include <mtd/mtd-abi.h>
-static struct nand_ecclayout fsmc_ecc1_128_layout = {
- .eccbytes = 24,
- .eccpos = {2, 3, 4, 18, 19, 20, 34, 35, 36, 50, 51, 52,
- 66, 67, 68, 82, 83, 84, 98, 99, 100, 114, 115, 116},
- .oobfree = {
- {.offset = 8, .length = 8},
- {.offset = 24, .length = 8},
- {.offset = 40, .length = 8},
- {.offset = 56, .length = 8},
- {.offset = 72, .length = 8},
- {.offset = 88, .length = 8},
- {.offset = 104, .length = 8},
- {.offset = 120, .length = 8}
- }
-};
+static int fsmc_ecc1_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
-static struct nand_ecclayout fsmc_ecc1_64_layout = {
- .eccbytes = 12,
- .eccpos = {2, 3, 4, 18, 19, 20, 34, 35, 36, 50, 51, 52},
- .oobfree = {
- {.offset = 8, .length = 8},
- {.offset = 24, .length = 8},
- {.offset = 40, .length = 8},
- {.offset = 56, .length = 8},
- }
-};
+ if (section >= chip->ecc.steps)
+ return -ERANGE;
-static struct nand_ecclayout fsmc_ecc1_16_layout = {
- .eccbytes = 3,
- .eccpos = {2, 3, 4},
- .oobfree = {
- {.offset = 8, .length = 8},
- }
-};
+ oobregion->offset = (section * 16) + 2;
+ oobregion->length = 3;
-/*
- * ECC4 layout for NAND of pagesize 8192 bytes & OOBsize 256 bytes. 13*16 bytes
- * of OB size is reserved for ECC, Byte no. 0 & 1 reserved for bad block and 46
- * bytes are free for use.
- */
-static struct nand_ecclayout fsmc_ecc4_256_layout = {
- .eccbytes = 208,
- .eccpos = { 2, 3, 4, 5, 6, 7, 8,
- 9, 10, 11, 12, 13, 14,
- 18, 19, 20, 21, 22, 23, 24,
- 25, 26, 27, 28, 29, 30,
- 34, 35, 36, 37, 38, 39, 40,
- 41, 42, 43, 44, 45, 46,
- 50, 51, 52, 53, 54, 55, 56,
- 57, 58, 59, 60, 61, 62,
- 66, 67, 68, 69, 70, 71, 72,
- 73, 74, 75, 76, 77, 78,
- 82, 83, 84, 85, 86, 87, 88,
- 89, 90, 91, 92, 93, 94,
- 98, 99, 100, 101, 102, 103, 104,
- 105, 106, 107, 108, 109, 110,
- 114, 115, 116, 117, 118, 119, 120,
- 121, 122, 123, 124, 125, 126,
- 130, 131, 132, 133, 134, 135, 136,
- 137, 138, 139, 140, 141, 142,
- 146, 147, 148, 149, 150, 151, 152,
- 153, 154, 155, 156, 157, 158,
- 162, 163, 164, 165, 166, 167, 168,
- 169, 170, 171, 172, 173, 174,
- 178, 179, 180, 181, 182, 183, 184,
- 185, 186, 187, 188, 189, 190,
- 194, 195, 196, 197, 198, 199, 200,
- 201, 202, 203, 204, 205, 206,
- 210, 211, 212, 213, 214, 215, 216,
- 217, 218, 219, 220, 221, 222,
- 226, 227, 228, 229, 230, 231, 232,
- 233, 234, 235, 236, 237, 238,
- 242, 243, 244, 245, 246, 247, 248,
- 249, 250, 251, 252, 253, 254
- },
- .oobfree = {
- {.offset = 15, .length = 3},
- {.offset = 31, .length = 3},
- {.offset = 47, .length = 3},
- {.offset = 63, .length = 3},
- {.offset = 79, .length = 3},
- {.offset = 95, .length = 3},
- {.offset = 111, .length = 3},
- {.offset = 127, .length = 3},
- {.offset = 143, .length = 3},
- {.offset = 159, .length = 3},
- {.offset = 175, .length = 3},
- {.offset = 191, .length = 3},
- {.offset = 207, .length = 3},
- {.offset = 223, .length = 3},
- {.offset = 239, .length = 3},
- {.offset = 255, .length = 1}
- }
-};
+ return 0;
+}
-/*
- * ECC4 layout for NAND of pagesize 4096 bytes & OOBsize 224 bytes. 13*8 bytes
- * of OOB size is reserved for ECC, Byte no. 0 & 1 reserved for bad block & 118
- * bytes are free for use.
- */
-static struct nand_ecclayout fsmc_ecc4_224_layout = {
- .eccbytes = 104,
- .eccpos = { 2, 3, 4, 5, 6, 7, 8,
- 9, 10, 11, 12, 13, 14,
- 18, 19, 20, 21, 22, 23, 24,
- 25, 26, 27, 28, 29, 30,
- 34, 35, 36, 37, 38, 39, 40,
- 41, 42, 43, 44, 45, 46,
- 50, 51, 52, 53, 54, 55, 56,
- 57, 58, 59, 60, 61, 62,
- 66, 67, 68, 69, 70, 71, 72,
- 73, 74, 75, 76, 77, 78,
- 82, 83, 84, 85, 86, 87, 88,
- 89, 90, 91, 92, 93, 94,
- 98, 99, 100, 101, 102, 103, 104,
- 105, 106, 107, 108, 109, 110,
- 114, 115, 116, 117, 118, 119, 120,
- 121, 122, 123, 124, 125, 126
- },
- .oobfree = {
- {.offset = 15, .length = 3},
- {.offset = 31, .length = 3},
- {.offset = 47, .length = 3},
- {.offset = 63, .length = 3},
- {.offset = 79, .length = 3},
- {.offset = 95, .length = 3},
- {.offset = 111, .length = 3},
- {.offset = 127, .length = 97}
- }
-};
+static int fsmc_ecc1_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
-/*
- * ECC4 layout for NAND of pagesize 4096 bytes & OOBsize 128 bytes. 13*8 bytes
- * of OOB size is reserved for ECC, Byte no. 0 & 1 reserved for bad block & 22
- * bytes are free for use.
- */
-static struct nand_ecclayout fsmc_ecc4_128_layout = {
- .eccbytes = 104,
- .eccpos = { 2, 3, 4, 5, 6, 7, 8,
- 9, 10, 11, 12, 13, 14,
- 18, 19, 20, 21, 22, 23, 24,
- 25, 26, 27, 28, 29, 30,
- 34, 35, 36, 37, 38, 39, 40,
- 41, 42, 43, 44, 45, 46,
- 50, 51, 52, 53, 54, 55, 56,
- 57, 58, 59, 60, 61, 62,
- 66, 67, 68, 69, 70, 71, 72,
- 73, 74, 75, 76, 77, 78,
- 82, 83, 84, 85, 86, 87, 88,
- 89, 90, 91, 92, 93, 94,
- 98, 99, 100, 101, 102, 103, 104,
- 105, 106, 107, 108, 109, 110,
- 114, 115, 116, 117, 118, 119, 120,
- 121, 122, 123, 124, 125, 126
- },
- .oobfree = {
- {.offset = 15, .length = 3},
- {.offset = 31, .length = 3},
- {.offset = 47, .length = 3},
- {.offset = 63, .length = 3},
- {.offset = 79, .length = 3},
- {.offset = 95, .length = 3},
- {.offset = 111, .length = 3},
- {.offset = 127, .length = 1}
- }
-};
+ if (section >= chip->ecc.steps)
+ return -ERANGE;
-/*
- * ECC4 layout for NAND of pagesize 2048 bytes & OOBsize 64 bytes. 13*4 bytes of
- * OOB size is reserved for ECC, Byte no. 0 & 1 reserved for bad block and 10
- * bytes are free for use.
- */
-static struct nand_ecclayout fsmc_ecc4_64_layout = {
- .eccbytes = 52,
- .eccpos = { 2, 3, 4, 5, 6, 7, 8,
- 9, 10, 11, 12, 13, 14,
- 18, 19, 20, 21, 22, 23, 24,
- 25, 26, 27, 28, 29, 30,
- 34, 35, 36, 37, 38, 39, 40,
- 41, 42, 43, 44, 45, 46,
- 50, 51, 52, 53, 54, 55, 56,
- 57, 58, 59, 60, 61, 62,
- },
- .oobfree = {
- {.offset = 15, .length = 3},
- {.offset = 31, .length = 3},
- {.offset = 47, .length = 3},
- {.offset = 63, .length = 1},
- }
-};
+ oobregion->offset = (section * 16) + 8;
-/*
- * ECC4 layout for NAND of pagesize 512 bytes & OOBsize 16 bytes. 13 bytes of
- * OOB size is reserved for ECC, Byte no. 4 & 5 reserved for bad block and One
- * byte is free for use.
- */
-static struct nand_ecclayout fsmc_ecc4_16_layout = {
- .eccbytes = 13,
- .eccpos = { 0, 1, 2, 3, 6, 7, 8,
- 9, 10, 11, 12, 13, 14
- },
- .oobfree = {
- {.offset = 15, .length = 1},
- }
+ if (section < chip->ecc.steps - 1)
+ oobregion->length = 8;
+ else
+ oobregion->length = mtd->oobsize - oobregion->offset;
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops fsmc_ecc1_ooblayout_ops = {
+ .ecc = fsmc_ecc1_ooblayout_ecc,
+ .free = fsmc_ecc1_ooblayout_free,
};
/*
@@ -250,28 +81,46 @@ static struct nand_ecclayout fsmc_ecc4_16_layout = {
* There are 13 bytes of ecc for every 512 byte block and it has to be read
* consecutively and immediately after the 512 byte data block for hardware to
* generate the error bit offsets in 512 byte data.
- * Managing the ecc bytes in the following way makes it easier for software to
- * read ecc bytes consecutive to data bytes. This way is similar to
- * oobfree structure maintained already in generic nand driver
*/
-static struct fsmc_eccplace fsmc_ecc4_lp_place = {
- .eccplace = {
- {.offset = 2, .length = 13},
- {.offset = 18, .length = 13},
- {.offset = 34, .length = 13},
- {.offset = 50, .length = 13},
- {.offset = 66, .length = 13},
- {.offset = 82, .length = 13},
- {.offset = 98, .length = 13},
- {.offset = 114, .length = 13}
- }
-};
+static int fsmc_ecc4_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
-static struct fsmc_eccplace fsmc_ecc4_sp_place = {
- .eccplace = {
- {.offset = 0, .length = 4},
- {.offset = 6, .length = 9}
- }
+ if (section >= chip->ecc.steps)
+ return -ERANGE;
+
+ oobregion->length = chip->ecc.bytes;
+
+ if (!section && mtd->writesize <= 512)
+ oobregion->offset = 0;
+ else
+ oobregion->offset = (section * 16) + 2;
+
+ return 0;
+}
+
+static int fsmc_ecc4_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+
+ if (section >= chip->ecc.steps)
+ return -ERANGE;
+
+ oobregion->offset = (section * 16) + 15;
+
+ if (section < chip->ecc.steps - 1)
+ oobregion->length = 3;
+ else
+ oobregion->length = mtd->oobsize - oobregion->offset;
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops fsmc_ecc4_ooblayout_ops = {
+ .ecc = fsmc_ecc4_ooblayout_ecc,
+ .free = fsmc_ecc4_ooblayout_free,
};
/**
@@ -283,7 +132,6 @@ static struct fsmc_eccplace fsmc_ecc4_sp_place = {
* @partitions: Partition info for a NAND Flash.
* @nr_partitions: Total number of partition of a NAND flash.
*
- * @ecc_place: ECC placing locations in oobfree type format.
* @bank: Bank number for probed device.
* @clk: Clock structure for FSMC.
*
@@ -303,7 +151,6 @@ struct fsmc_nand_data {
struct mtd_partition *partitions;
unsigned int nr_partitions;
- struct fsmc_eccplace *ecc_place;
unsigned int bank;
struct device *dev;
enum access_mode mode;
@@ -710,8 +557,6 @@ static void fsmc_write_buf_dma(struct mtd_info *mtd, const uint8_t *buf,
static int fsmc_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
uint8_t *buf, int oob_required, int page)
{
- struct fsmc_nand_data *host = mtd_to_fsmc(mtd);
- struct fsmc_eccplace *ecc_place = host->ecc_place;
int i, j, s, stat, eccsize = chip->ecc.size;
int eccbytes = chip->ecc.bytes;
int eccsteps = chip->ecc.steps;
@@ -734,9 +579,15 @@ static int fsmc_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
chip->read_buf(mtd, p, eccsize);
for (j = 0; j < eccbytes;) {
- off = ecc_place->eccplace[group].offset;
- len = ecc_place->eccplace[group].length;
- group++;
+ struct mtd_oob_region oobregion;
+ int ret;
+
+ ret = mtd_ooblayout_ecc(mtd, group++, &oobregion);
+ if (ret)
+ return ret;
+
+ off = oobregion.offset;
+ len = oobregion.length;
/*
* length is intentionally kept a higher multiple of 2
@@ -1084,24 +935,10 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
if (AMBA_REV_BITS(host->pid) >= 8) {
switch (mtd->oobsize) {
case 16:
- nand->ecc.layout = &fsmc_ecc4_16_layout;
- host->ecc_place = &fsmc_ecc4_sp_place;
- break;
case 64:
- nand->ecc.layout = &fsmc_ecc4_64_layout;
- host->ecc_place = &fsmc_ecc4_lp_place;
- break;
case 128:
- nand->ecc.layout = &fsmc_ecc4_128_layout;
- host->ecc_place = &fsmc_ecc4_lp_place;
- break;
case 224:
- nand->ecc.layout = &fsmc_ecc4_224_layout;
- host->ecc_place = &fsmc_ecc4_lp_place;
- break;
case 256:
- nand->ecc.layout = &fsmc_ecc4_256_layout;
- host->ecc_place = &fsmc_ecc4_lp_place;
break;
default:
dev_warn(&pdev->dev, "No oob scheme defined for oobsize %d\n",
@@ -1109,6 +946,8 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
ret = -EINVAL;
goto err_probe;
}
+
+ mtd_set_ooblayout(mtd, &fsmc_ecc4_ooblayout_ops);
} else {
switch (nand->ecc.mode) {
case NAND_ECC_HW:
@@ -1119,9 +958,11 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
nand->ecc.strength = 1;
break;
- case NAND_ECC_SOFT_BCH:
- dev_info(&pdev->dev, "Using 4-bit SW BCH ECC scheme\n");
- break;
+ case NAND_ECC_SOFT:
+ if (nand->ecc.algo == NAND_ECC_BCH) {
+ dev_info(&pdev->dev, "Using 4-bit SW BCH ECC scheme\n");
+ break;
+ }
default:
dev_err(&pdev->dev, "Unsupported ECC mode!\n");
@@ -1132,16 +973,13 @@ static int __init fsmc_nand_probe(struct platform_device *pdev)
* Don't set layout for BCH4 SW ECC. This will be
* generated later in nand_bch_init().
*/
- if (nand->ecc.mode != NAND_ECC_SOFT_BCH) {
+ if (nand->ecc.mode == NAND_ECC_HW) {
switch (mtd->oobsize) {
case 16:
- nand->ecc.layout = &fsmc_ecc1_16_layout;
- break;
case 64:
- nand->ecc.layout = &fsmc_ecc1_64_layout;
- break;
case 128:
- nand->ecc.layout = &fsmc_ecc1_128_layout;
+ mtd_set_ooblayout(mtd,
+ &fsmc_ecc1_ooblayout_ops);
break;
default:
dev_warn(&pdev->dev,
diff --git a/drivers/mtd/nand/gpio.c b/drivers/mtd/nand/gpio.c
index ded658fc7d73..6317f6836022 100644
--- a/drivers/mtd/nand/gpio.c
+++ b/drivers/mtd/nand/gpio.c
@@ -273,6 +273,7 @@ static int gpio_nand_probe(struct platform_device *pdev)
nand_set_flash_node(chip, pdev->dev.of_node);
chip->IO_ADDR_W = chip->IO_ADDR_R;
chip->ecc.mode = NAND_ECC_SOFT;
+ chip->ecc.algo = NAND_ECC_HAMMING;
chip->options = gpiomtd->plat.options;
chip->chip_delay = gpiomtd->plat.chip_delay;
chip->cmd_ctrl = gpio_nand_cmd_ctrl;
diff --git a/drivers/mtd/nand/gpmi-nand/gpmi-nand.c b/drivers/mtd/nand/gpmi-nand/gpmi-nand.c
index 8122c699ccf2..6e461560c6a8 100644
--- a/drivers/mtd/nand/gpmi-nand/gpmi-nand.c
+++ b/drivers/mtd/nand/gpmi-nand/gpmi-nand.c
@@ -25,7 +25,6 @@
#include <linux/mtd/partitions.h>
#include <linux/of.h>
#include <linux/of_device.h>
-#include <linux/of_mtd.h>
#include "gpmi-nand.h"
#include "bch-regs.h"
@@ -47,10 +46,44 @@ static struct nand_bbt_descr gpmi_bbt_descr = {
* We may change the layout if we can get the ECC info from the datasheet,
* else we will use all the (page + OOB).
*/
-static struct nand_ecclayout gpmi_hw_ecclayout = {
- .eccbytes = 0,
- .eccpos = { 0, },
- .oobfree = { {.offset = 0, .length = 0} }
+static int gpmi_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ struct gpmi_nand_data *this = nand_get_controller_data(chip);
+ struct bch_geometry *geo = &this->bch_geometry;
+
+ if (section)
+ return -ERANGE;
+
+ oobregion->offset = 0;
+ oobregion->length = geo->page_size - mtd->writesize;
+
+ return 0;
+}
+
+static int gpmi_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ struct gpmi_nand_data *this = nand_get_controller_data(chip);
+ struct bch_geometry *geo = &this->bch_geometry;
+
+ if (section)
+ return -ERANGE;
+
+ /* The available oob size we have. */
+ if (geo->page_size < mtd->writesize + mtd->oobsize) {
+ oobregion->offset = geo->page_size - mtd->writesize;
+ oobregion->length = mtd->oobsize - oobregion->offset;
+ }
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops gpmi_ooblayout_ops = {
+ .ecc = gpmi_ooblayout_ecc,
+ .free = gpmi_ooblayout_free,
};
static const struct gpmi_devdata gpmi_devdata_imx23 = {
@@ -141,7 +174,6 @@ static int set_geometry_by_ecc_info(struct gpmi_nand_data *this)
struct bch_geometry *geo = &this->bch_geometry;
struct nand_chip *chip = &this->nand;
struct mtd_info *mtd = nand_to_mtd(chip);
- struct nand_oobfree *of = gpmi_hw_ecclayout.oobfree;
unsigned int block_mark_bit_offset;
if (!(chip->ecc_strength_ds > 0 && chip->ecc_step_ds > 0))
@@ -229,12 +261,6 @@ static int set_geometry_by_ecc_info(struct gpmi_nand_data *this)
geo->page_size = mtd->writesize + geo->metadata_size +
(geo->gf_len * geo->ecc_strength * geo->ecc_chunk_count) / 8;
- /* The available oob size we have. */
- if (geo->page_size < mtd->writesize + mtd->oobsize) {
- of->offset = geo->page_size - mtd->writesize;
- of->length = mtd->oobsize - of->offset;
- }
-
geo->payload_size = mtd->writesize;
geo->auxiliary_status_offset = ALIGN(geo->metadata_size, 4);
@@ -797,6 +823,7 @@ static void gpmi_free_dma_buffer(struct gpmi_nand_data *this)
this->cmd_buffer = NULL;
this->data_buffer_dma = NULL;
+ this->raw_buffer = NULL;
this->page_buffer_virt = NULL;
this->page_buffer_size = 0;
}
@@ -1037,14 +1064,87 @@ static int gpmi_ecc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
/* Loop over status bytes, accumulating ECC status. */
status = auxiliary_virt + nfc_geo->auxiliary_status_offset;
+ read_page_swap_end(this, buf, nfc_geo->payload_size,
+ this->payload_virt, this->payload_phys,
+ nfc_geo->payload_size,
+ payload_virt, payload_phys);
+
for (i = 0; i < nfc_geo->ecc_chunk_count; i++, status++) {
if ((*status == STATUS_GOOD) || (*status == STATUS_ERASED))
continue;
if (*status == STATUS_UNCORRECTABLE) {
+ int eccbits = nfc_geo->ecc_strength * nfc_geo->gf_len;
+ u8 *eccbuf = this->raw_buffer;
+ int offset, bitoffset;
+ int eccbytes;
+ int flips;
+
+ /* Read ECC bytes into our internal raw_buffer */
+ offset = nfc_geo->metadata_size * 8;
+ offset += ((8 * nfc_geo->ecc_chunk_size) + eccbits) * (i + 1);
+ offset -= eccbits;
+ bitoffset = offset % 8;
+ eccbytes = DIV_ROUND_UP(offset + eccbits, 8);
+ offset /= 8;
+ eccbytes -= offset;
+ chip->cmdfunc(mtd, NAND_CMD_RNDOUT, offset, -1);
+ chip->read_buf(mtd, eccbuf, eccbytes);
+
+ /*
+ * ECC data are not byte aligned and we may have
+ * in-band data in the first and last byte of
+ * eccbuf. Set non-eccbits to one so that
+ * nand_check_erased_ecc_chunk() does not count them
+ * as bitflips.
+ */
+ if (bitoffset)
+ eccbuf[0] |= GENMASK(bitoffset - 1, 0);
+
+ bitoffset = (bitoffset + eccbits) % 8;
+ if (bitoffset)
+ eccbuf[eccbytes - 1] |= GENMASK(7, bitoffset);
+
+ /*
+ * The ECC hardware has an uncorrectable ECC status
+ * code in case we have bitflips in an erased page. As
+ * nothing was written into this subpage the ECC is
+ * obviously wrong and we cannot trust it. We assume
+ * at this point that we are reading an erased page and
+ * try to correct the bitflips in the buffer up to
+ * ecc_strength bitflips. If this is a page with random
+ * data, we exceed this number of bitflips and have an
+ * ECC failure. Otherwise we use the corrected buffer.
+ */
+ if (i == 0) {
+ /* The first block includes metadata */
+ flips = nand_check_erased_ecc_chunk(
+ buf + i * nfc_geo->ecc_chunk_size,
+ nfc_geo->ecc_chunk_size,
+ eccbuf, eccbytes,
+ auxiliary_virt,
+ nfc_geo->metadata_size,
+ nfc_geo->ecc_strength);
+ } else {
+ flips = nand_check_erased_ecc_chunk(
+ buf + i * nfc_geo->ecc_chunk_size,
+ nfc_geo->ecc_chunk_size,
+ eccbuf, eccbytes,
+ NULL, 0,
+ nfc_geo->ecc_strength);
+ }
+
+ if (flips > 0) {
+ max_bitflips = max_t(unsigned int, max_bitflips,
+ flips);
+ mtd->ecc_stats.corrected += flips;
+ continue;
+ }
+
mtd->ecc_stats.failed++;
continue;
}
+
mtd->ecc_stats.corrected += *status;
max_bitflips = max_t(unsigned int, max_bitflips, *status);
}
@@ -1064,11 +1164,6 @@ static int gpmi_ecc_read_page(struct mtd_info *mtd, struct nand_chip *chip,
chip->oob_poi[0] = ((uint8_t *) auxiliary_virt)[0];
}
- read_page_swap_end(this, buf, nfc_geo->payload_size,
- this->payload_virt, this->payload_phys,
- nfc_geo->payload_size,
- payload_virt, payload_phys);
-
return max_bitflips;
}
@@ -1327,18 +1422,19 @@ static int gpmi_ecc_read_oob(struct mtd_info *mtd, struct nand_chip *chip,
static int
gpmi_ecc_write_oob(struct mtd_info *mtd, struct nand_chip *chip, int page)
{
- struct nand_oobfree *of = mtd->ecclayout->oobfree;
+ struct mtd_oob_region of = { };
int status = 0;
/* Do we have available oob area? */
- if (!of->length)
+ mtd_ooblayout_free(mtd, 0, &of);
+ if (!of.length)
return -EPERM;
if (!nand_is_slc(chip))
return -EPERM;
- chip->cmdfunc(mtd, NAND_CMD_SEQIN, mtd->writesize + of->offset, page);
- chip->write_buf(mtd, chip->oob_poi + of->offset, of->length);
+ chip->cmdfunc(mtd, NAND_CMD_SEQIN, mtd->writesize + of.offset, page);
+ chip->write_buf(mtd, chip->oob_poi + of.offset, of.length);
chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
status = chip->waitfunc(mtd, chip);
@@ -1840,6 +1936,7 @@ static void gpmi_nand_exit(struct gpmi_nand_data *this)
static int gpmi_init_last(struct gpmi_nand_data *this)
{
struct nand_chip *chip = &this->nand;
+ struct mtd_info *mtd = nand_to_mtd(chip);
struct nand_ecc_ctrl *ecc = &chip->ecc;
struct bch_geometry *bch_geo = &this->bch_geometry;
int ret;
@@ -1861,7 +1958,7 @@ static int gpmi_init_last(struct gpmi_nand_data *this)
ecc->mode = NAND_ECC_HW;
ecc->size = bch_geo->ecc_chunk_size;
ecc->strength = bch_geo->ecc_strength;
- ecc->layout = &gpmi_hw_ecclayout;
+ mtd_set_ooblayout(mtd, &gpmi_ooblayout_ops);
/*
* We only enable the subpage read when:
@@ -1914,16 +2011,6 @@ static int gpmi_nand_init(struct gpmi_nand_data *this)
/* Set up swap_block_mark, must be set before the gpmi_set_geometry() */
this->swap_block_mark = !GPMI_IS_MX23(this);
- if (of_get_nand_on_flash_bbt(this->dev->of_node)) {
- chip->bbt_options |= NAND_BBT_USE_FLASH | NAND_BBT_NO_OOB;
-
- if (of_property_read_bool(this->dev->of_node,
- "fsl,no-blockmark-swap"))
- this->swap_block_mark = false;
- }
- dev_dbg(this->dev, "Blockmark swapping %sabled\n",
- this->swap_block_mark ? "en" : "dis");
-
/*
* Allocate a temporary DMA buffer for reading ID in the
* nand_scan_ident().
@@ -1938,6 +2025,16 @@ static int gpmi_nand_init(struct gpmi_nand_data *this)
if (ret)
goto err_out;
+ if (chip->bbt_options & NAND_BBT_USE_FLASH) {
+ chip->bbt_options |= NAND_BBT_NO_OOB;
+
+ if (of_property_read_bool(this->dev->of_node,
+ "fsl,no-blockmark-swap"))
+ this->swap_block_mark = false;
+ }
+ dev_dbg(this->dev, "Blockmark swapping %sabled\n",
+ this->swap_block_mark ? "en" : "dis");
+
ret = gpmi_init_last(this);
if (ret)
goto err_out;
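/*
 * Editor's sketch (not part of this patch): the consumer-side pattern used
 * by the gpmi_ecc_write_oob() hunk above -- query the first free OOB region
 * at runtime through mtd_ooblayout_free() rather than dereferencing
 * mtd->ecclayout->oobfree. The helper name is hypothetical; assumes
 * <linux/mtd/mtd.h> for struct mtd_info and the mtd_ooblayout_* helpers.
 */
static int example_first_free_offset(struct mtd_info *mtd)
{
	struct mtd_oob_region of = { };
	int ret;

	ret = mtd_ooblayout_free(mtd, 0, &of);
	if (ret)
		return ret;

	/* no user-accessible OOB area for this layout */
	if (!of.length)
		return -EPERM;

	/* byte offset of the free area within the OOB */
	return of.offset;
}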
diff --git a/drivers/mtd/nand/hisi504_nand.c b/drivers/mtd/nand/hisi504_nand.c
index 96502b624cfb..9432546f4cd4 100644
--- a/drivers/mtd/nand/hisi504_nand.c
+++ b/drivers/mtd/nand/hisi504_nand.c
@@ -19,7 +19,6 @@
* GNU General Public License for more details.
*/
#include <linux/of.h>
-#include <linux/of_mtd.h>
#include <linux/mtd/mtd.h>
#include <linux/sizes.h>
#include <linux/clk.h>
@@ -631,8 +630,28 @@ static void hisi_nfc_host_init(struct hinfc_host *host)
hinfc_write(host, HINFC504_INTEN_DMA, HINFC504_INTEN);
}
-static struct nand_ecclayout nand_ecc_2K_16bits = {
- .oobfree = { {2, 6} },
+static int hisi_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ /* FIXME: add ECC bytes position */
+ return -ENOTSUPP;
+}
+
+static int hisi_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section)
+ return -ERANGE;
+
+ oobregion->offset = 2;
+ oobregion->length = 6;
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops hisi_ooblayout_ops = {
+ .ecc = hisi_ooblayout_ecc,
+ .free = hisi_ooblayout_free,
};
static int hisi_nfc_ecc_probe(struct hinfc_host *host)
@@ -642,10 +661,9 @@ static int hisi_nfc_ecc_probe(struct hinfc_host *host)
struct device *dev = host->dev;
struct nand_chip *chip = &host->chip;
struct mtd_info *mtd = nand_to_mtd(chip);
- struct device_node *np = host->dev->of_node;
- size = of_get_nand_ecc_step_size(np);
- strength = of_get_nand_ecc_strength(np);
+ size = chip->ecc.size;
+ strength = chip->ecc.strength;
if (size != 1024) {
dev_err(dev, "error ecc size: %d\n", size);
return -EINVAL;
@@ -668,7 +686,7 @@ static int hisi_nfc_ecc_probe(struct hinfc_host *host)
case 16:
ecc_bits = 6;
if (mtd->writesize == 2048)
- chip->ecc.layout = &nand_ecc_2K_16bits;
+ mtd_set_ooblayout(mtd, &hisi_ooblayout_ops);
/* TODO: add more page size support */
break;
@@ -695,7 +713,7 @@ static int hisi_nfc_ecc_probe(struct hinfc_host *host)
static int hisi_nfc_probe(struct platform_device *pdev)
{
- int ret = 0, irq, buswidth, flag, max_chips = HINFC504_MAX_CHIP;
+ int ret = 0, irq, flag, max_chips = HINFC504_MAX_CHIP;
struct device *dev = &pdev->dev;
struct hinfc_host *host;
struct nand_chip *chip;
@@ -747,12 +765,6 @@ static int hisi_nfc_probe(struct platform_device *pdev)
chip->read_buf = hisi_nfc_read_buf;
chip->chip_delay = HINFC504_CHIP_DELAY;
- chip->ecc.mode = of_get_nand_ecc_mode(np);
-
- buswidth = of_get_nand_bus_width(np);
- if (buswidth == 16)
- chip->options |= NAND_BUSWIDTH_16;
-
hisi_nfc_host_init(host);
ret = devm_request_irq(dev, irq, hinfc_irq_handle, 0x0, "nandc", host);
diff --git a/drivers/mtd/nand/jz4740_nand.c b/drivers/mtd/nand/jz4740_nand.c
index 673ceb2a0b44..5551c36adbdf 100644
--- a/drivers/mtd/nand/jz4740_nand.c
+++ b/drivers/mtd/nand/jz4740_nand.c
@@ -221,7 +221,6 @@ static int jz_nand_correct_ecc_rs(struct mtd_info *mtd, uint8_t *dat,
struct jz_nand *nand = mtd_to_jz_nand(mtd);
int i, error_count, index;
uint32_t reg, status, error;
- uint32_t t;
unsigned int timeout = 1000;
for (i = 0; i < 9; ++i)
@@ -476,7 +475,7 @@ static int jz_nand_probe(struct platform_device *pdev)
}
if (pdata && pdata->ident_callback) {
- pdata->ident_callback(pdev, chip, &pdata->partitions,
+ pdata->ident_callback(pdev, mtd, &pdata->partitions,
&pdata->num_partitions);
}
diff --git a/drivers/mtd/nand/jz4780_bch.c b/drivers/mtd/nand/jz4780_bch.c
index 755499c6650e..d74f4ba4a6f4 100644
--- a/drivers/mtd/nand/jz4780_bch.c
+++ b/drivers/mtd/nand/jz4780_bch.c
@@ -287,7 +287,6 @@ static struct jz4780_bch *jz4780_bch_get(struct device_node *np)
bch = platform_get_drvdata(pdev);
clk_prepare_enable(bch->clk);
- bch->dev = &pdev->dev;
return bch;
}
diff --git a/drivers/mtd/nand/jz4780_nand.c b/drivers/mtd/nand/jz4780_nand.c
index e1c016c9d32d..daf3c4217f4d 100644
--- a/drivers/mtd/nand/jz4780_nand.c
+++ b/drivers/mtd/nand/jz4780_nand.c
@@ -17,7 +17,6 @@
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/gpio/consumer.h>
-#include <linux/of_mtd.h>
#include <linux/platform_device.h>
#include <linux/slab.h>
#include <linux/mtd/mtd.h>
@@ -56,8 +55,6 @@ struct jz4780_nand_chip {
struct nand_chip chip;
struct list_head chip_list;
- struct nand_ecclayout ecclayout;
-
struct gpio_desc *busy_gpio;
struct gpio_desc *wp_gpio;
unsigned int reading: 1;
@@ -165,8 +162,7 @@ static int jz4780_nand_init_ecc(struct jz4780_nand_chip *nand, struct device *de
struct nand_chip *chip = &nand->chip;
struct mtd_info *mtd = nand_to_mtd(chip);
struct jz4780_nand_controller *nfc = to_jz4780_nand_controller(chip->controller);
- struct nand_ecclayout *layout = &nand->ecclayout;
- u32 start, i;
+ int eccbytes;
chip->ecc.bytes = fls((1 + 8) * chip->ecc.size) *
(chip->ecc.strength / 8);
@@ -183,7 +179,6 @@ static int jz4780_nand_init_ecc(struct jz4780_nand_chip *nand, struct device *de
chip->ecc.correct = jz4780_nand_ecc_correct;
/* fall through */
case NAND_ECC_SOFT:
- case NAND_ECC_SOFT_BCH:
dev_info(dev, "using %s (strength %d, size %d, bytes %d)\n",
(nfc->bch) ? "hardware BCH" : "software ECC",
chip->ecc.strength, chip->ecc.size, chip->ecc.bytes);
@@ -201,23 +196,17 @@ static int jz4780_nand_init_ecc(struct jz4780_nand_chip *nand, struct device *de
return 0;
/* Generate ECC layout. ECC codes are right aligned in the OOB area. */
- layout->eccbytes = mtd->writesize / chip->ecc.size * chip->ecc.bytes;
+ eccbytes = mtd->writesize / chip->ecc.size * chip->ecc.bytes;
- if (layout->eccbytes > mtd->oobsize - 2) {
+ if (eccbytes > mtd->oobsize - 2) {
dev_err(dev,
"invalid ECC config: required %d ECC bytes, but only %d are available",
- layout->eccbytes, mtd->oobsize - 2);
+ eccbytes, mtd->oobsize - 2);
return -EINVAL;
}
- start = mtd->oobsize - layout->eccbytes;
- for (i = 0; i < layout->eccbytes; i++)
- layout->eccpos[i] = start + i;
-
- layout->oobfree[0].offset = 2;
- layout->oobfree[0].length = mtd->oobsize - layout->eccbytes - 2;
+ mtd->ooblayout = &nand_ooblayout_lp_ops;
- chip->ecc.layout = layout;
return 0;
}
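/*
 * Editor's sketch (not part of this patch): with the generic
 * nand_ooblayout_lp_ops used in the jz4780 hunk above, a driver no longer
 * keeps its own oobfree[] bookkeeping; it can walk the layout instead.
 * Every ooblayout callback in this series returns -ERANGE once "section"
 * runs past the last region, which makes a simple iteration possible.
 * The helper name is hypothetical; assumes <linux/mtd/mtd.h>.
 */
static int example_count_free_oob(struct mtd_info *mtd)
{
	struct mtd_oob_region region;
	int section = 0, bytes = 0, ret;

	/* accumulate the length of every free region */
	while (!(ret = mtd_ooblayout_free(mtd, section++, &region)))
		bytes += region.length;

	/* -ERANGE simply marks the end of the free regions */
	return ret == -ERANGE ? bytes : ret;
}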
diff --git a/drivers/mtd/nand/lpc32xx_mlc.c b/drivers/mtd/nand/lpc32xx_mlc.c
index d8c3e7afcc0b..852388171f20 100644
--- a/drivers/mtd/nand/lpc32xx_mlc.c
+++ b/drivers/mtd/nand/lpc32xx_mlc.c
@@ -35,7 +35,6 @@
#include <linux/completion.h>
#include <linux/interrupt.h>
#include <linux/of.h>
-#include <linux/of_mtd.h>
#include <linux/of_gpio.h>
#include <linux/mtd/lpc32xx_mlc.h>
#include <linux/io.h>
@@ -139,22 +138,37 @@ struct lpc32xx_nand_cfg_mlc {
unsigned num_parts;
};
-static struct nand_ecclayout lpc32xx_nand_oob = {
- .eccbytes = 40,
- .eccpos = { 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
- 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
- 38, 39, 40, 41, 42, 43, 44, 45, 46, 47,
- 54, 55, 56, 57, 58, 59, 60, 61, 62, 63 },
- .oobfree = {
- { .offset = 0,
- .length = 6, },
- { .offset = 16,
- .length = 6, },
- { .offset = 32,
- .length = 6, },
- { .offset = 48,
- .length = 6, },
- },
+static int lpc32xx_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *nand_chip = mtd_to_nand(mtd);
+
+ if (section >= nand_chip->ecc.steps)
+ return -ERANGE;
+
+ oobregion->offset = ((section + 1) * 16) - nand_chip->ecc.bytes;
+ oobregion->length = nand_chip->ecc.bytes;
+
+ return 0;
+}
+
+static int lpc32xx_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *nand_chip = mtd_to_nand(mtd);
+
+ if (section >= nand_chip->ecc.steps)
+ return -ERANGE;
+
+ oobregion->offset = 16 * section;
+ oobregion->length = 16 - nand_chip->ecc.bytes;
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops lpc32xx_ooblayout_ops = {
+ .ecc = lpc32xx_ooblayout_ecc,
+ .free = lpc32xx_ooblayout_free,
};
static struct nand_bbt_descr lpc32xx_nand_bbt = {
@@ -713,6 +727,7 @@ static int lpc32xx_nand_probe(struct platform_device *pdev)
nand_chip->ecc.write_oob = lpc32xx_write_oob;
nand_chip->ecc.read_oob = lpc32xx_read_oob;
nand_chip->ecc.strength = 4;
+ nand_chip->ecc.bytes = 10;
nand_chip->waitfunc = lpc32xx_waitfunc;
nand_chip->options = NAND_NO_SUBPAGE_WRITE;
@@ -751,7 +766,7 @@ static int lpc32xx_nand_probe(struct platform_device *pdev)
nand_chip->ecc.mode = NAND_ECC_HW;
nand_chip->ecc.size = 512;
- nand_chip->ecc.layout = &lpc32xx_nand_oob;
+ mtd_set_ooblayout(mtd, &lpc32xx_ooblayout_ops);
host->mlcsubpages = mtd->writesize / 512;
/* initially clear interrupt status */
diff --git a/drivers/mtd/nand/lpc32xx_slc.c b/drivers/mtd/nand/lpc32xx_slc.c
index 3b8f3735f3e8..8d3edc34958e 100644
--- a/drivers/mtd/nand/lpc32xx_slc.c
+++ b/drivers/mtd/nand/lpc32xx_slc.c
@@ -35,7 +35,6 @@
#include <linux/mtd/nand_ecc.h>
#include <linux/gpio.h>
#include <linux/of.h>
-#include <linux/of_mtd.h>
#include <linux/of_gpio.h>
#include <linux/mtd/lpc32xx_slc.h>
@@ -146,13 +145,38 @@
* NAND ECC Layout for small page NAND devices
* Note: For large and huge page devices, the default layouts are used
*/
-static struct nand_ecclayout lpc32xx_nand_oob_16 = {
- .eccbytes = 6,
- .eccpos = {10, 11, 12, 13, 14, 15},
- .oobfree = {
- { .offset = 0, .length = 4 },
- { .offset = 6, .length = 4 },
- },
+static int lpc32xx_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section)
+ return -ERANGE;
+
+ oobregion->length = 6;
+ oobregion->offset = 10;
+
+ return 0;
+}
+
+static int lpc32xx_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section > 1)
+ return -ERANGE;
+
+ if (!section) {
+ oobregion->offset = 0;
+ oobregion->length = 4;
+ } else {
+ oobregion->offset = 6;
+ oobregion->length = 4;
+ }
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops lpc32xx_ooblayout_ops = {
+ .ecc = lpc32xx_ooblayout_ecc,
+ .free = lpc32xx_ooblayout_free,
};
static u8 bbt_pattern[] = {'B', 'b', 't', '0' };
@@ -194,7 +218,6 @@ struct lpc32xx_nand_cfg_slc {
uint32_t rwidth;
uint32_t rhold;
uint32_t rsetup;
- bool use_bbt;
int wp_gpio;
struct mtd_partition *parts;
unsigned num_parts;
@@ -604,7 +627,8 @@ static int lpc32xx_nand_read_page_syndrome(struct mtd_info *mtd,
int oob_required, int page)
{
struct lpc32xx_nand_host *host = nand_get_controller_data(chip);
- int stat, i, status;
+ struct mtd_oob_region oobregion = { };
+ int stat, i, status, error;
uint8_t *oobecc, tmpecc[LPC32XX_ECC_SAVE_SIZE];
/* Issue read command */
@@ -620,7 +644,11 @@ static int lpc32xx_nand_read_page_syndrome(struct mtd_info *mtd,
lpc32xx_slc_ecc_copy(tmpecc, (uint32_t *) host->ecc_buf, chip->ecc.steps);
/* Pointer to ECC data retrieved from NAND spare area */
- oobecc = chip->oob_poi + chip->ecc.layout->eccpos[0];
+ error = mtd_ooblayout_ecc(mtd, 0, &oobregion);
+ if (error)
+ return error;
+
+ oobecc = chip->oob_poi + oobregion.offset;
for (i = 0; i < chip->ecc.steps; i++) {
stat = chip->ecc.correct(mtd, buf, oobecc,
@@ -666,7 +694,8 @@ static int lpc32xx_nand_write_page_syndrome(struct mtd_info *mtd,
int oob_required, int page)
{
struct lpc32xx_nand_host *host = nand_get_controller_data(chip);
- uint8_t *pb = chip->oob_poi + chip->ecc.layout->eccpos[0];
+ struct mtd_oob_region oobregion = { };
+ uint8_t *pb;
int error;
/* Write data, calculate ECC on outbound data */
@@ -678,6 +707,11 @@ static int lpc32xx_nand_write_page_syndrome(struct mtd_info *mtd,
* The calculated ECC needs some manual work done to it before
* committing it to NAND. Process the calculated ECC and place
* the resultant values directly into the OOB buffer. */
+ error = mtd_ooblayout_ecc(mtd, 0, &oobregion);
+ if (error)
+ return error;
+
+ pb = chip->oob_poi + oobregion.offset;
lpc32xx_slc_ecc_copy(pb, (uint32_t *)host->ecc_buf, chip->ecc.steps);
/* Write ECC data to device */
@@ -747,7 +781,6 @@ static struct lpc32xx_nand_cfg_slc *lpc32xx_parse_dt(struct device *dev)
return NULL;
}
- ncfg->use_bbt = of_get_nand_on_flash_bbt(np);
ncfg->wp_gpio = of_get_named_gpio(np, "gpios", 0);
return ncfg;
@@ -875,26 +908,22 @@ static int lpc32xx_nand_probe(struct platform_device *pdev)
* custom BBT marker layout.
*/
if (mtd->writesize <= 512)
- chip->ecc.layout = &lpc32xx_nand_oob_16;
+ mtd_set_ooblayout(mtd, &lpc32xx_ooblayout_ops);
/* These sizes remain the same regardless of page size */
chip->ecc.size = 256;
chip->ecc.bytes = LPC32XX_SLC_DEV_ECC_BYTES;
chip->ecc.prepad = chip->ecc.postpad = 0;
- /* Avoid extra scan if using BBT, setup BBT support */
- if (host->ncfg->use_bbt) {
- chip->bbt_options |= NAND_BBT_USE_FLASH;
-
- /*
- * Use a custom BBT marker setup for small page FLASH that
- * won't interfere with the ECC layout. Large and huge page
- * FLASH use the standard layout.
- */
- if (mtd->writesize <= 512) {
- chip->bbt_td = &bbt_smallpage_main_descr;
- chip->bbt_md = &bbt_smallpage_mirror_descr;
- }
+ /*
+ * Use a custom BBT marker setup for small page FLASH that
+ * won't interfere with the ECC layout. Large and huge page
+ * FLASH use the standard layout.
+ */
+ if ((chip->bbt_options & NAND_BBT_USE_FLASH) &&
+ mtd->writesize <= 512) {
+ chip->bbt_td = &bbt_smallpage_main_descr;
+ chip->bbt_md = &bbt_smallpage_mirror_descr;
}
/*
diff --git a/drivers/mtd/nand/mpc5121_nfc.c b/drivers/mtd/nand/mpc5121_nfc.c
index 5d7843ffff6a..7eacb2f545f5 100644
--- a/drivers/mtd/nand/mpc5121_nfc.c
+++ b/drivers/mtd/nand/mpc5121_nfc.c
@@ -710,6 +710,7 @@ static int mpc5121_nfc_probe(struct platform_device *op)
chip->select_chip = mpc5121_nfc_select_chip;
chip->bbt_options = NAND_BBT_USE_FLASH;
chip->ecc.mode = NAND_ECC_SOFT;
+ chip->ecc.algo = NAND_ECC_HAMMING;
/* Support external chip-select logic on ADS5121 board */
if (of_machine_is_compatible("fsl,mpc5121ads")) {
diff --git a/drivers/mtd/nand/mxc_nand.c b/drivers/mtd/nand/mxc_nand.c
index 854c832597aa..5173fadc9a4e 100644
--- a/drivers/mtd/nand/mxc_nand.c
+++ b/drivers/mtd/nand/mxc_nand.c
@@ -34,7 +34,6 @@
#include <linux/completion.h>
#include <linux/of.h>
#include <linux/of_device.h>
-#include <linux/of_mtd.h>
#include <asm/mach/flash.h>
#include <linux/platform_data/mtd-mxc_nand.h>
@@ -149,7 +148,7 @@ struct mxc_nand_devtype_data {
int (*check_int)(struct mxc_nand_host *);
void (*irq_control)(struct mxc_nand_host *, int);
u32 (*get_ecc_status)(struct mxc_nand_host *);
- struct nand_ecclayout *ecclayout_512, *ecclayout_2k, *ecclayout_4k;
+ const struct mtd_ooblayout_ops *ooblayout;
void (*select_chip)(struct mtd_info *mtd, int chip);
int (*correct_data)(struct mtd_info *mtd, u_char *dat,
u_char *read_ecc, u_char *calc_ecc);
@@ -200,73 +199,6 @@ struct mxc_nand_host {
struct mxc_nand_platform_data pdata;
};
-/* OOB placement block for use with hardware ecc generation */
-static struct nand_ecclayout nandv1_hw_eccoob_smallpage = {
- .eccbytes = 5,
- .eccpos = {6, 7, 8, 9, 10},
- .oobfree = {{0, 5}, {12, 4}, }
-};
-
-static struct nand_ecclayout nandv1_hw_eccoob_largepage = {
- .eccbytes = 20,
- .eccpos = {6, 7, 8, 9, 10, 22, 23, 24, 25, 26,
- 38, 39, 40, 41, 42, 54, 55, 56, 57, 58},
- .oobfree = {{2, 4}, {11, 10}, {27, 10}, {43, 10}, {59, 5}, }
-};
-
-/* OOB description for 512 byte pages with 16 byte OOB */
-static struct nand_ecclayout nandv2_hw_eccoob_smallpage = {
- .eccbytes = 1 * 9,
- .eccpos = {
- 7, 8, 9, 10, 11, 12, 13, 14, 15
- },
- .oobfree = {
- {.offset = 0, .length = 5}
- }
-};
-
-/* OOB description for 2048 byte pages with 64 byte OOB */
-static struct nand_ecclayout nandv2_hw_eccoob_largepage = {
- .eccbytes = 4 * 9,
- .eccpos = {
- 7, 8, 9, 10, 11, 12, 13, 14, 15,
- 23, 24, 25, 26, 27, 28, 29, 30, 31,
- 39, 40, 41, 42, 43, 44, 45, 46, 47,
- 55, 56, 57, 58, 59, 60, 61, 62, 63
- },
- .oobfree = {
- {.offset = 2, .length = 4},
- {.offset = 16, .length = 7},
- {.offset = 32, .length = 7},
- {.offset = 48, .length = 7}
- }
-};
-
-/* OOB description for 4096 byte pages with 128 byte OOB */
-static struct nand_ecclayout nandv2_hw_eccoob_4k = {
- .eccbytes = 8 * 9,
- .eccpos = {
- 7, 8, 9, 10, 11, 12, 13, 14, 15,
- 23, 24, 25, 26, 27, 28, 29, 30, 31,
- 39, 40, 41, 42, 43, 44, 45, 46, 47,
- 55, 56, 57, 58, 59, 60, 61, 62, 63,
- 71, 72, 73, 74, 75, 76, 77, 78, 79,
- 87, 88, 89, 90, 91, 92, 93, 94, 95,
- 103, 104, 105, 106, 107, 108, 109, 110, 111,
- 119, 120, 121, 122, 123, 124, 125, 126, 127,
- },
- .oobfree = {
- {.offset = 2, .length = 4},
- {.offset = 16, .length = 7},
- {.offset = 32, .length = 7},
- {.offset = 48, .length = 7},
- {.offset = 64, .length = 7},
- {.offset = 80, .length = 7},
- {.offset = 96, .length = 7},
- {.offset = 112, .length = 7},
- }
-};
-
static const char * const part_probes[] = {
"cmdlinepart", "RedBoot", "ofpart", NULL };
@@ -942,6 +874,99 @@ static void mxc_do_addr_cycle(struct mtd_info *mtd, int column, int page_addr)
}
}
+static int mxc_v1_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *nand_chip = mtd_to_nand(mtd);
+
+ if (section >= nand_chip->ecc.steps)
+ return -ERANGE;
+
+ oobregion->offset = (section * 16) + 6;
+ oobregion->length = nand_chip->ecc.bytes;
+
+ return 0;
+}
+
+static int mxc_v1_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *nand_chip = mtd_to_nand(mtd);
+
+ if (section > nand_chip->ecc.steps)
+ return -ERANGE;
+
+ if (!section) {
+ if (mtd->writesize <= 512) {
+ oobregion->offset = 0;
+ oobregion->length = 5;
+ } else {
+ oobregion->offset = 2;
+ oobregion->length = 4;
+ }
+ } else {
+ oobregion->offset = ((section - 1) * 16) +
+ nand_chip->ecc.bytes + 6;
+ if (section < nand_chip->ecc.steps)
+ oobregion->length = (section * 16) + 6 -
+ oobregion->offset;
+ else
+ oobregion->length = mtd->oobsize - oobregion->offset;
+ }
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops mxc_v1_ooblayout_ops = {
+ .ecc = mxc_v1_ooblayout_ecc,
+ .free = mxc_v1_ooblayout_free,
+};
+
+static int mxc_v2_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *nand_chip = mtd_to_nand(mtd);
+ int stepsize = nand_chip->ecc.bytes == 9 ? 16 : 26;
+
+ if (section >= nand_chip->ecc.steps)
+ return -ERANGE;
+
+ oobregion->offset = (section * stepsize) + 7;
+ oobregion->length = nand_chip->ecc.bytes;
+
+ return 0;
+}
+
+static int mxc_v2_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *nand_chip = mtd_to_nand(mtd);
+ int stepsize = nand_chip->ecc.bytes == 9 ? 16 : 26;
+
+ if (section > nand_chip->ecc.steps)
+ return -ERANGE;
+
+ if (!section) {
+ if (mtd->writesize <= 512) {
+ oobregion->offset = 0;
+ oobregion->length = 5;
+ } else {
+ oobregion->offset = 2;
+ oobregion->length = 4;
+ }
+ } else {
+ oobregion->offset = section * stepsize;
+ oobregion->length = 7;
+ }
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops mxc_v2_ooblayout_ops = {
+ .ecc = mxc_v2_ooblayout_ecc,
+ .free = mxc_v2_ooblayout_free,
+};
+
/*
* v2 and v3 type controllers can do 4bit or 8bit ecc depending
* on how much oob the nand chip has. For 8bit ecc we need at least
@@ -959,23 +984,6 @@ static int get_eccsize(struct mtd_info *mtd)
return 8;
}
-static void ecc_8bit_layout_4k(struct nand_ecclayout *layout)
-{
- int i, j;
-
- layout->eccbytes = 8*18;
- for (i = 0; i < 8; i++)
- for (j = 0; j < 18; j++)
- layout->eccpos[i*18 + j] = i*26 + j + 7;
-
- layout->oobfree[0].offset = 2;
- layout->oobfree[0].length = 4;
- for (i = 1; i < 8; i++) {
- layout->oobfree[i].offset = i*26;
- layout->oobfree[i].length = 7;
- }
-}
-
static void preset_v1(struct mtd_info *mtd)
{
struct nand_chip *nand_chip = mtd_to_nand(mtd);
@@ -1269,9 +1277,7 @@ static const struct mxc_nand_devtype_data imx21_nand_devtype_data = {
.check_int = check_int_v1_v2,
.irq_control = irq_control_v1_v2,
.get_ecc_status = get_ecc_status_v1,
- .ecclayout_512 = &nandv1_hw_eccoob_smallpage,
- .ecclayout_2k = &nandv1_hw_eccoob_largepage,
- .ecclayout_4k = &nandv1_hw_eccoob_smallpage, /* XXX: needs fix */
+ .ooblayout = &mxc_v1_ooblayout_ops,
.select_chip = mxc_nand_select_chip_v1_v3,
.correct_data = mxc_nand_correct_data_v1,
.irqpending_quirk = 1,
@@ -1294,9 +1300,7 @@ static const struct mxc_nand_devtype_data imx27_nand_devtype_data = {
.check_int = check_int_v1_v2,
.irq_control = irq_control_v1_v2,
.get_ecc_status = get_ecc_status_v1,
- .ecclayout_512 = &nandv1_hw_eccoob_smallpage,
- .ecclayout_2k = &nandv1_hw_eccoob_largepage,
- .ecclayout_4k = &nandv1_hw_eccoob_smallpage, /* XXX: needs fix */
+ .ooblayout = &mxc_v1_ooblayout_ops,
.select_chip = mxc_nand_select_chip_v1_v3,
.correct_data = mxc_nand_correct_data_v1,
.irqpending_quirk = 0,
@@ -1320,9 +1324,7 @@ static const struct mxc_nand_devtype_data imx25_nand_devtype_data = {
.check_int = check_int_v1_v2,
.irq_control = irq_control_v1_v2,
.get_ecc_status = get_ecc_status_v2,
- .ecclayout_512 = &nandv2_hw_eccoob_smallpage,
- .ecclayout_2k = &nandv2_hw_eccoob_largepage,
- .ecclayout_4k = &nandv2_hw_eccoob_4k,
+ .ooblayout = &mxc_v2_ooblayout_ops,
.select_chip = mxc_nand_select_chip_v2,
.correct_data = mxc_nand_correct_data_v2_v3,
.irqpending_quirk = 0,
@@ -1346,9 +1348,7 @@ static const struct mxc_nand_devtype_data imx51_nand_devtype_data = {
.check_int = check_int_v3,
.irq_control = irq_control_v3,
.get_ecc_status = get_ecc_status_v3,
- .ecclayout_512 = &nandv2_hw_eccoob_smallpage,
- .ecclayout_2k = &nandv2_hw_eccoob_largepage,
- .ecclayout_4k = &nandv2_hw_eccoob_smallpage, /* XXX: needs fix */
+ .ooblayout = &mxc_v2_ooblayout_ops,
.select_chip = mxc_nand_select_chip_v1_v3,
.correct_data = mxc_nand_correct_data_v2_v3,
.irqpending_quirk = 0,
@@ -1373,9 +1373,7 @@ static const struct mxc_nand_devtype_data imx53_nand_devtype_data = {
.check_int = check_int_v3,
.irq_control = irq_control_v3,
.get_ecc_status = get_ecc_status_v3,
- .ecclayout_512 = &nandv2_hw_eccoob_smallpage,
- .ecclayout_2k = &nandv2_hw_eccoob_largepage,
- .ecclayout_4k = &nandv2_hw_eccoob_smallpage, /* XXX: needs fix */
+ .ooblayout = &mxc_v2_ooblayout_ops,
.select_chip = mxc_nand_select_chip_v1_v3,
.correct_data = mxc_nand_correct_data_v2_v3,
.irqpending_quirk = 0,
@@ -1461,25 +1459,12 @@ MODULE_DEVICE_TABLE(of, mxcnd_dt_ids);
static int __init mxcnd_probe_dt(struct mxc_nand_host *host)
{
struct device_node *np = host->dev->of_node;
- struct mxc_nand_platform_data *pdata = &host->pdata;
const struct of_device_id *of_id =
of_match_device(mxcnd_dt_ids, host->dev);
- int buswidth;
if (!np)
return 1;
- if (of_get_nand_ecc_mode(np) >= 0)
- pdata->hw_ecc = 1;
-
- pdata->flash_bbt = of_get_nand_on_flash_bbt(np);
-
- buswidth = of_get_nand_bus_width(np);
- if (buswidth < 0)
- return buswidth;
-
- pdata->width = buswidth / 8;
-
host->devtype_data = of_id->data;
return 0;
@@ -1576,27 +1561,22 @@ static int mxcnd_probe(struct platform_device *pdev)
this->select_chip = host->devtype_data->select_chip;
this->ecc.size = 512;
- this->ecc.layout = host->devtype_data->ecclayout_512;
+ mtd_set_ooblayout(mtd, host->devtype_data->ooblayout);
if (host->pdata.hw_ecc) {
- this->ecc.calculate = mxc_nand_calculate_ecc;
- this->ecc.hwctl = mxc_nand_enable_hwecc;
- this->ecc.correct = host->devtype_data->correct_data;
this->ecc.mode = NAND_ECC_HW;
} else {
this->ecc.mode = NAND_ECC_SOFT;
+ this->ecc.algo = NAND_ECC_HAMMING;
}
/* NAND bus width determines access functions used by upper layer */
if (host->pdata.width == 2)
this->options |= NAND_BUSWIDTH_16;
- if (host->pdata.flash_bbt) {
- this->bbt_td = &bbt_main_descr;
- this->bbt_md = &bbt_mirror_descr;
- /* update flash based bbt */
+ /* update flash based bbt */
+ if (host->pdata.flash_bbt)
this->bbt_options |= NAND_BBT_USE_FLASH;
- }
init_completion(&host->op_completion);
@@ -1637,6 +1617,26 @@ static int mxcnd_probe(struct platform_device *pdev)
goto escan;
}
+ switch (this->ecc.mode) {
+ case NAND_ECC_HW:
+ this->ecc.calculate = mxc_nand_calculate_ecc;
+ this->ecc.hwctl = mxc_nand_enable_hwecc;
+ this->ecc.correct = host->devtype_data->correct_data;
+ break;
+
+ case NAND_ECC_SOFT:
+ break;
+
+ default:
+ err = -EINVAL;
+ goto escan;
+ }
+
+ if (this->bbt_options & NAND_BBT_USE_FLASH) {
+ this->bbt_td = &bbt_main_descr;
+ this->bbt_md = &bbt_mirror_descr;
+ }
+
/* allocate the right size buffer now */
devm_kfree(&pdev->dev, (void *)host->data_buf);
host->data_buf = devm_kzalloc(&pdev->dev, mtd->writesize + mtd->oobsize,
@@ -1649,12 +1649,11 @@ static int mxcnd_probe(struct platform_device *pdev)
/* Call preset again, with correct writesize this time */
host->devtype_data->preset(mtd);
- if (mtd->writesize == 2048)
- this->ecc.layout = host->devtype_data->ecclayout_2k;
- else if (mtd->writesize == 4096) {
- this->ecc.layout = host->devtype_data->ecclayout_4k;
- if (get_eccsize(mtd) == 8)
- ecc_8bit_layout_4k(this->ecc.layout);
+ if (!this->ecc.bytes) {
+ if (host->eccsize == 8)
+ this->ecc.bytes = 18;
+ else if (host->eccsize == 4)
+ this->ecc.bytes = 9;
}
/*
diff --git a/drivers/mtd/nand/nand_base.c b/drivers/mtd/nand/nand_base.c
index 557b8462f55e..0b0dc29d2af7 100644
--- a/drivers/mtd/nand/nand_base.c
+++ b/drivers/mtd/nand/nand_base.c
@@ -43,65 +43,100 @@
#include <linux/mtd/nand_bch.h>
#include <linux/interrupt.h>
#include <linux/bitops.h>
-#include <linux/leds.h>
#include <linux/io.h>
#include <linux/mtd/partitions.h>
-#include <linux/of_mtd.h>
+#include <linux/of.h>
+
+static int nand_get_device(struct mtd_info *mtd, int new_state);
+
+static int nand_do_write_oob(struct mtd_info *mtd, loff_t to,
+ struct mtd_oob_ops *ops);
/* Define default oob placement schemes for large and small page devices */
-static struct nand_ecclayout nand_oob_8 = {
- .eccbytes = 3,
- .eccpos = {0, 1, 2},
- .oobfree = {
- {.offset = 3,
- .length = 2},
- {.offset = 6,
- .length = 2} }
-};
+static int nand_ooblayout_ecc_sp(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ struct nand_ecc_ctrl *ecc = &chip->ecc;
-static struct nand_ecclayout nand_oob_16 = {
- .eccbytes = 6,
- .eccpos = {0, 1, 2, 3, 6, 7},
- .oobfree = {
- {.offset = 8,
- . length = 8} }
-};
+ if (section > 1)
+ return -ERANGE;
-static struct nand_ecclayout nand_oob_64 = {
- .eccbytes = 24,
- .eccpos = {
- 40, 41, 42, 43, 44, 45, 46, 47,
- 48, 49, 50, 51, 52, 53, 54, 55,
- 56, 57, 58, 59, 60, 61, 62, 63},
- .oobfree = {
- {.offset = 2,
- .length = 38} }
-};
+ if (!section) {
+ oobregion->offset = 0;
+ oobregion->length = 4;
+ } else {
+ oobregion->offset = 6;
+ oobregion->length = ecc->total - 4;
+ }
+
+ return 0;
+}
+
+static int nand_ooblayout_free_sp(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section > 1)
+ return -ERANGE;
-static struct nand_ecclayout nand_oob_128 = {
- .eccbytes = 48,
- .eccpos = {
- 80, 81, 82, 83, 84, 85, 86, 87,
- 88, 89, 90, 91, 92, 93, 94, 95,
- 96, 97, 98, 99, 100, 101, 102, 103,
- 104, 105, 106, 107, 108, 109, 110, 111,
- 112, 113, 114, 115, 116, 117, 118, 119,
- 120, 121, 122, 123, 124, 125, 126, 127},
- .oobfree = {
- {.offset = 2,
- .length = 78} }
+ if (mtd->oobsize == 16) {
+ if (section)
+ return -ERANGE;
+
+ oobregion->length = 8;
+ oobregion->offset = 8;
+ } else {
+ oobregion->length = 2;
+ if (!section)
+ oobregion->offset = 3;
+ else
+ oobregion->offset = 6;
+ }
+
+ return 0;
+}
+
+const struct mtd_ooblayout_ops nand_ooblayout_sp_ops = {
+ .ecc = nand_ooblayout_ecc_sp,
+ .free = nand_ooblayout_free_sp,
};
+EXPORT_SYMBOL_GPL(nand_ooblayout_sp_ops);
-static int nand_get_device(struct mtd_info *mtd, int new_state);
+static int nand_ooblayout_ecc_lp(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ struct nand_ecc_ctrl *ecc = &chip->ecc;
-static int nand_do_write_oob(struct mtd_info *mtd, loff_t to,
- struct mtd_oob_ops *ops);
+ if (section)
+ return -ERANGE;
-/*
- * For devices which display every fart in the system on a separate LED. Is
- * compiled away when LED support is disabled.
- */
-DEFINE_LED_TRIGGER(nand_led_trigger);
+ oobregion->length = ecc->total;
+ oobregion->offset = mtd->oobsize - oobregion->length;
+
+ return 0;
+}
+
+static int nand_ooblayout_free_lp(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ struct nand_ecc_ctrl *ecc = &chip->ecc;
+
+ if (section)
+ return -ERANGE;
+
+ oobregion->length = mtd->oobsize - ecc->total - 2;
+ oobregion->offset = 2;
+
+ return 0;
+}
+
+const struct mtd_ooblayout_ops nand_ooblayout_lp_ops = {
+ .ecc = nand_ooblayout_ecc_lp,
+ .free = nand_ooblayout_free_lp,
+};
+EXPORT_SYMBOL_GPL(nand_ooblayout_lp_ops);
static int check_offs_len(struct mtd_info *mtd,
loff_t ofs, uint64_t len)
@@ -540,19 +575,16 @@ void nand_wait_ready(struct mtd_info *mtd)
if (in_interrupt() || oops_in_progress)
return panic_nand_wait_ready(mtd, timeo);
- led_trigger_event(nand_led_trigger, LED_FULL);
/* Wait until command is processed or timeout occurs */
timeo = jiffies + msecs_to_jiffies(timeo);
do {
if (chip->dev_ready(mtd))
- goto out;
+ return;
cond_resched();
} while (time_before(jiffies, timeo));
if (!chip->dev_ready(mtd))
pr_warn_ratelimited("timeout while waiting for chip to become ready\n");
-out:
- led_trigger_event(nand_led_trigger, LED_OFF);
}
EXPORT_SYMBOL_GPL(nand_wait_ready);
@@ -885,8 +917,6 @@ static int nand_wait(struct mtd_info *mtd, struct nand_chip *chip)
int status;
unsigned long timeo = 400;
- led_trigger_event(nand_led_trigger, LED_FULL);
-
/*
* Apply this short delay always to ensure that we do wait tWB in any
* case on any machine.
@@ -910,7 +940,6 @@ static int nand_wait(struct mtd_info *mtd, struct nand_chip *chip)
cond_resched();
} while (time_before(jiffies, timeo));
}
- led_trigger_event(nand_led_trigger, LED_OFF);
status = (int)chip->read_byte(mtd);
/* This can happen in case of timeout or buggy dev_ready */
@@ -1292,13 +1321,12 @@ static int nand_read_page_raw_syndrome(struct mtd_info *mtd,
static int nand_read_page_swecc(struct mtd_info *mtd, struct nand_chip *chip,
uint8_t *buf, int oob_required, int page)
{
- int i, eccsize = chip->ecc.size;
+ int i, eccsize = chip->ecc.size, ret;
int eccbytes = chip->ecc.bytes;
int eccsteps = chip->ecc.steps;
uint8_t *p = buf;
uint8_t *ecc_calc = chip->buffers->ecccalc;
uint8_t *ecc_code = chip->buffers->ecccode;
- uint32_t *eccpos = chip->ecc.layout->eccpos;
unsigned int max_bitflips = 0;
chip->ecc.read_page_raw(mtd, chip, buf, 1, page);
@@ -1306,8 +1334,10 @@ static int nand_read_page_swecc(struct mtd_info *mtd, struct nand_chip *chip,
for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize)
chip->ecc.calculate(mtd, p, &ecc_calc[i]);
- for (i = 0; i < chip->ecc.total; i++)
- ecc_code[i] = chip->oob_poi[eccpos[i]];
+ ret = mtd_ooblayout_get_eccbytes(mtd, ecc_code, chip->oob_poi, 0,
+ chip->ecc.total);
+ if (ret)
+ return ret;
eccsteps = chip->ecc.steps;
p = buf;
@@ -1339,14 +1369,14 @@ static int nand_read_subpage(struct mtd_info *mtd, struct nand_chip *chip,
uint32_t data_offs, uint32_t readlen, uint8_t *bufpoi,
int page)
{
- int start_step, end_step, num_steps;
- uint32_t *eccpos = chip->ecc.layout->eccpos;
+ int start_step, end_step, num_steps, ret;
uint8_t *p;
int data_col_addr, i, gaps = 0;
int datafrag_len, eccfrag_len, aligned_len, aligned_pos;
int busw = (chip->options & NAND_BUSWIDTH_16) ? 2 : 1;
- int index;
+ int index, section = 0;
unsigned int max_bitflips = 0;
+ struct mtd_oob_region oobregion = { };
/* Column address within the page aligned to ECC size (256bytes) */
start_step = data_offs / chip->ecc.size;
@@ -1374,12 +1404,13 @@ static int nand_read_subpage(struct mtd_info *mtd, struct nand_chip *chip,
* The performance is faster if we position offsets according to
* ecc.pos. Let's make sure that there are no gaps in ECC positions.
*/
- for (i = 0; i < eccfrag_len - 1; i++) {
- if (eccpos[i + index] + 1 != eccpos[i + index + 1]) {
- gaps = 1;
- break;
- }
- }
+ ret = mtd_ooblayout_find_eccregion(mtd, index, &section, &oobregion);
+ if (ret)
+ return ret;
+
+ if (oobregion.length < eccfrag_len)
+ gaps = 1;
+
if (gaps) {
chip->cmdfunc(mtd, NAND_CMD_RNDOUT, mtd->writesize, -1);
chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
@@ -1388,20 +1419,23 @@ static int nand_read_subpage(struct mtd_info *mtd, struct nand_chip *chip,
* Send the command to read the particular ECC bytes take care
* about buswidth alignment in read_buf.
*/
- aligned_pos = eccpos[index] & ~(busw - 1);
+ aligned_pos = oobregion.offset & ~(busw - 1);
aligned_len = eccfrag_len;
- if (eccpos[index] & (busw - 1))
+ if (oobregion.offset & (busw - 1))
aligned_len++;
- if (eccpos[index + (num_steps * chip->ecc.bytes)] & (busw - 1))
+ if ((oobregion.offset + (num_steps * chip->ecc.bytes)) &
+ (busw - 1))
aligned_len++;
chip->cmdfunc(mtd, NAND_CMD_RNDOUT,
- mtd->writesize + aligned_pos, -1);
+ mtd->writesize + aligned_pos, -1);
chip->read_buf(mtd, &chip->oob_poi[aligned_pos], aligned_len);
}
- for (i = 0; i < eccfrag_len; i++)
- chip->buffers->ecccode[i] = chip->oob_poi[eccpos[i + index]];
+ ret = mtd_ooblayout_get_eccbytes(mtd, chip->buffers->ecccode,
+ chip->oob_poi, index, eccfrag_len);
+ if (ret)
+ return ret;
p = bufpoi + data_col_addr;
for (i = 0; i < eccfrag_len ; i += chip->ecc.bytes, p += chip->ecc.size) {
@@ -1442,13 +1476,12 @@ static int nand_read_subpage(struct mtd_info *mtd, struct nand_chip *chip,
static int nand_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
uint8_t *buf, int oob_required, int page)
{
- int i, eccsize = chip->ecc.size;
+ int i, eccsize = chip->ecc.size, ret;
int eccbytes = chip->ecc.bytes;
int eccsteps = chip->ecc.steps;
uint8_t *p = buf;
uint8_t *ecc_calc = chip->buffers->ecccalc;
uint8_t *ecc_code = chip->buffers->ecccode;
- uint32_t *eccpos = chip->ecc.layout->eccpos;
unsigned int max_bitflips = 0;
for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) {
@@ -1458,8 +1491,10 @@ static int nand_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
}
chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
- for (i = 0; i < chip->ecc.total; i++)
- ecc_code[i] = chip->oob_poi[eccpos[i]];
+ ret = mtd_ooblayout_get_eccbytes(mtd, ecc_code, chip->oob_poi, 0,
+ chip->ecc.total);
+ if (ret)
+ return ret;
eccsteps = chip->ecc.steps;
p = buf;
@@ -1504,12 +1539,11 @@ static int nand_read_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
static int nand_read_page_hwecc_oob_first(struct mtd_info *mtd,
struct nand_chip *chip, uint8_t *buf, int oob_required, int page)
{
- int i, eccsize = chip->ecc.size;
+ int i, eccsize = chip->ecc.size, ret;
int eccbytes = chip->ecc.bytes;
int eccsteps = chip->ecc.steps;
uint8_t *p = buf;
uint8_t *ecc_code = chip->buffers->ecccode;
- uint32_t *eccpos = chip->ecc.layout->eccpos;
uint8_t *ecc_calc = chip->buffers->ecccalc;
unsigned int max_bitflips = 0;
@@ -1518,8 +1552,10 @@ static int nand_read_page_hwecc_oob_first(struct mtd_info *mtd,
chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page);
- for (i = 0; i < chip->ecc.total; i++)
- ecc_code[i] = chip->oob_poi[eccpos[i]];
+ ret = mtd_ooblayout_get_eccbytes(mtd, ecc_code, chip->oob_poi, 0,
+ chip->ecc.total);
+ if (ret)
+ return ret;
for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) {
int stat;
@@ -1620,14 +1656,17 @@ static int nand_read_page_syndrome(struct mtd_info *mtd, struct nand_chip *chip,
/**
* nand_transfer_oob - [INTERN] Transfer oob to client buffer
- * @chip: nand chip structure
+ * @mtd: mtd info structure
* @oob: oob destination address
* @ops: oob ops structure
* @len: size of oob to transfer
*/
-static uint8_t *nand_transfer_oob(struct nand_chip *chip, uint8_t *oob,
+static uint8_t *nand_transfer_oob(struct mtd_info *mtd, uint8_t *oob,
struct mtd_oob_ops *ops, size_t len)
{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ int ret;
+
switch (ops->mode) {
case MTD_OPS_PLACE_OOB:
@@ -1635,31 +1674,12 @@ static uint8_t *nand_transfer_oob(struct nand_chip *chip, uint8_t *oob,
memcpy(oob, chip->oob_poi + ops->ooboffs, len);
return oob + len;
- case MTD_OPS_AUTO_OOB: {
- struct nand_oobfree *free = chip->ecc.layout->oobfree;
- uint32_t boffs = 0, roffs = ops->ooboffs;
- size_t bytes = 0;
-
- for (; free->length && len; free++, len -= bytes) {
- /* Read request not from offset 0? */
- if (unlikely(roffs)) {
- if (roffs >= free->length) {
- roffs -= free->length;
- continue;
- }
- boffs = free->offset + roffs;
- bytes = min_t(size_t, len,
- (free->length - roffs));
- roffs = 0;
- } else {
- bytes = min_t(size_t, len, free->length);
- boffs = free->offset;
- }
- memcpy(oob, chip->oob_poi + boffs, bytes);
- oob += bytes;
- }
- return oob;
- }
+ case MTD_OPS_AUTO_OOB:
+ ret = mtd_ooblayout_get_databytes(mtd, oob, chip->oob_poi,
+ ops->ooboffs, len);
+ BUG_ON(ret);
+ return oob + len;
+
default:
BUG();
}
@@ -1793,7 +1813,7 @@ read_retry:
int toread = min(oobreadlen, max_oobsize);
if (toread) {
- oob = nand_transfer_oob(chip,
+ oob = nand_transfer_oob(mtd,
oob, ops, toread);
oobreadlen -= toread;
}
@@ -1906,13 +1926,13 @@ static int nand_read(struct mtd_info *mtd, loff_t from, size_t len,
* @chip: nand chip info structure
* @page: page number to read
*/
-static int nand_read_oob_std(struct mtd_info *mtd, struct nand_chip *chip,
- int page)
+int nand_read_oob_std(struct mtd_info *mtd, struct nand_chip *chip, int page)
{
chip->cmdfunc(mtd, NAND_CMD_READOOB, 0, page);
chip->read_buf(mtd, chip->oob_poi, mtd->oobsize);
return 0;
}
+EXPORT_SYMBOL(nand_read_oob_std);
/**
* nand_read_oob_syndrome - [REPLACEABLE] OOB data read function for HW ECC
@@ -1921,8 +1941,8 @@ static int nand_read_oob_std(struct mtd_info *mtd, struct nand_chip *chip,
* @chip: nand chip info structure
* @page: page number to read
*/
-static int nand_read_oob_syndrome(struct mtd_info *mtd, struct nand_chip *chip,
- int page)
+int nand_read_oob_syndrome(struct mtd_info *mtd, struct nand_chip *chip,
+ int page)
{
int length = mtd->oobsize;
int chunk = chip->ecc.bytes + chip->ecc.prepad + chip->ecc.postpad;
@@ -1950,6 +1970,7 @@ static int nand_read_oob_syndrome(struct mtd_info *mtd, struct nand_chip *chip,
return 0;
}
+EXPORT_SYMBOL(nand_read_oob_syndrome);
/**
* nand_write_oob_std - [REPLACEABLE] the most common OOB data write function
@@ -1957,8 +1978,7 @@ static int nand_read_oob_syndrome(struct mtd_info *mtd, struct nand_chip *chip,
* @chip: nand chip info structure
* @page: page number to write
*/
-static int nand_write_oob_std(struct mtd_info *mtd, struct nand_chip *chip,
- int page)
+int nand_write_oob_std(struct mtd_info *mtd, struct nand_chip *chip, int page)
{
int status = 0;
const uint8_t *buf = chip->oob_poi;
@@ -1973,6 +1993,7 @@ static int nand_write_oob_std(struct mtd_info *mtd, struct nand_chip *chip,
return status & NAND_STATUS_FAIL ? -EIO : 0;
}
+EXPORT_SYMBOL(nand_write_oob_std);
/**
* nand_write_oob_syndrome - [REPLACEABLE] OOB data write function for HW ECC
@@ -1981,8 +2002,8 @@ static int nand_write_oob_std(struct mtd_info *mtd, struct nand_chip *chip,
* @chip: nand chip info structure
* @page: page number to write
*/
-static int nand_write_oob_syndrome(struct mtd_info *mtd,
- struct nand_chip *chip, int page)
+int nand_write_oob_syndrome(struct mtd_info *mtd, struct nand_chip *chip,
+ int page)
{
int chunk = chip->ecc.bytes + chip->ecc.prepad + chip->ecc.postpad;
int eccsize = chip->ecc.size, length = mtd->oobsize;
@@ -2032,6 +2053,7 @@ static int nand_write_oob_syndrome(struct mtd_info *mtd,
return status & NAND_STATUS_FAIL ? -EIO : 0;
}
+EXPORT_SYMBOL(nand_write_oob_syndrome);
/**
* nand_do_read_oob - [INTERN] NAND read out-of-band
@@ -2091,7 +2113,7 @@ static int nand_do_read_oob(struct mtd_info *mtd, loff_t from,
break;
len = min(len, readlen);
- buf = nand_transfer_oob(chip, buf, ops, len);
+ buf = nand_transfer_oob(mtd, buf, ops, len);
if (chip->options & NAND_NEED_READRDY) {
/* Apply delay or wait for ready/busy pin */
@@ -2250,19 +2272,20 @@ static int nand_write_page_swecc(struct mtd_info *mtd, struct nand_chip *chip,
const uint8_t *buf, int oob_required,
int page)
{
- int i, eccsize = chip->ecc.size;
+ int i, eccsize = chip->ecc.size, ret;
int eccbytes = chip->ecc.bytes;
int eccsteps = chip->ecc.steps;
uint8_t *ecc_calc = chip->buffers->ecccalc;
const uint8_t *p = buf;
- uint32_t *eccpos = chip->ecc.layout->eccpos;
/* Software ECC calculation */
for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize)
chip->ecc.calculate(mtd, p, &ecc_calc[i]);
- for (i = 0; i < chip->ecc.total; i++)
- chip->oob_poi[eccpos[i]] = ecc_calc[i];
+ ret = mtd_ooblayout_set_eccbytes(mtd, ecc_calc, chip->oob_poi, 0,
+ chip->ecc.total);
+ if (ret)
+ return ret;
return chip->ecc.write_page_raw(mtd, chip, buf, 1, page);
}
@@ -2279,12 +2302,11 @@ static int nand_write_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
const uint8_t *buf, int oob_required,
int page)
{
- int i, eccsize = chip->ecc.size;
+ int i, eccsize = chip->ecc.size, ret;
int eccbytes = chip->ecc.bytes;
int eccsteps = chip->ecc.steps;
uint8_t *ecc_calc = chip->buffers->ecccalc;
const uint8_t *p = buf;
- uint32_t *eccpos = chip->ecc.layout->eccpos;
for (i = 0; eccsteps; eccsteps--, i += eccbytes, p += eccsize) {
chip->ecc.hwctl(mtd, NAND_ECC_WRITE);
@@ -2292,8 +2314,10 @@ static int nand_write_page_hwecc(struct mtd_info *mtd, struct nand_chip *chip,
chip->ecc.calculate(mtd, p, &ecc_calc[i]);
}
- for (i = 0; i < chip->ecc.total; i++)
- chip->oob_poi[eccpos[i]] = ecc_calc[i];
+ ret = mtd_ooblayout_set_eccbytes(mtd, ecc_calc, chip->oob_poi, 0,
+ chip->ecc.total);
+ if (ret)
+ return ret;
chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
@@ -2321,11 +2345,10 @@ static int nand_write_subpage_hwecc(struct mtd_info *mtd,
int ecc_size = chip->ecc.size;
int ecc_bytes = chip->ecc.bytes;
int ecc_steps = chip->ecc.steps;
- uint32_t *eccpos = chip->ecc.layout->eccpos;
uint32_t start_step = offset / ecc_size;
uint32_t end_step = (offset + data_len - 1) / ecc_size;
int oob_bytes = mtd->oobsize / ecc_steps;
- int step, i;
+ int step, ret;
for (step = 0; step < ecc_steps; step++) {
/* configure controller for WRITE access */
@@ -2353,8 +2376,10 @@ static int nand_write_subpage_hwecc(struct mtd_info *mtd,
/* copy calculated ECC for whole page to chip->buffer->oob */
/* this include masked-value(0xFF) for unwritten subpages */
ecc_calc = chip->buffers->ecccalc;
- for (i = 0; i < chip->ecc.total; i++)
- chip->oob_poi[eccpos[i]] = ecc_calc[i];
+ ret = mtd_ooblayout_set_eccbytes(mtd, ecc_calc, chip->oob_poi, 0,
+ chip->ecc.total);
+ if (ret)
+ return ret;
/* write OOB buffer to NAND device */
chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
@@ -2491,6 +2516,7 @@ static uint8_t *nand_fill_oob(struct mtd_info *mtd, uint8_t *oob, size_t len,
struct mtd_oob_ops *ops)
{
struct nand_chip *chip = mtd_to_nand(mtd);
+ int ret;
/*
* Initialise to all 0xFF, to avoid the possibility of left over OOB
@@ -2505,31 +2531,12 @@ static uint8_t *nand_fill_oob(struct mtd_info *mtd, uint8_t *oob, size_t len,
memcpy(chip->oob_poi + ops->ooboffs, oob, len);
return oob + len;
- case MTD_OPS_AUTO_OOB: {
- struct nand_oobfree *free = chip->ecc.layout->oobfree;
- uint32_t boffs = 0, woffs = ops->ooboffs;
- size_t bytes = 0;
-
- for (; free->length && len; free++, len -= bytes) {
- /* Write request not from offset 0? */
- if (unlikely(woffs)) {
- if (woffs >= free->length) {
- woffs -= free->length;
- continue;
- }
- boffs = free->offset + woffs;
- bytes = min_t(size_t, len,
- (free->length - woffs));
- woffs = 0;
- } else {
- bytes = min_t(size_t, len, free->length);
- boffs = free->offset;
- }
- memcpy(chip->oob_poi + boffs, oob, bytes);
- oob += bytes;
- }
- return oob;
- }
+ case MTD_OPS_AUTO_OOB:
+ ret = mtd_ooblayout_set_databytes(mtd, oob, chip->oob_poi,
+ ops->ooboffs, len);
+ BUG_ON(ret);
+ return oob + len;
+
default:
BUG();
}
@@ -3964,10 +3971,115 @@ ident_done:
return type;
}
+static const char * const nand_ecc_modes[] = {
+ [NAND_ECC_NONE] = "none",
+ [NAND_ECC_SOFT] = "soft",
+ [NAND_ECC_HW] = "hw",
+ [NAND_ECC_HW_SYNDROME] = "hw_syndrome",
+ [NAND_ECC_HW_OOB_FIRST] = "hw_oob_first",
+};
+
+static int of_get_nand_ecc_mode(struct device_node *np)
+{
+ const char *pm;
+ int err, i;
+
+ err = of_property_read_string(np, "nand-ecc-mode", &pm);
+ if (err < 0)
+ return err;
+
+ for (i = 0; i < ARRAY_SIZE(nand_ecc_modes); i++)
+ if (!strcasecmp(pm, nand_ecc_modes[i]))
+ return i;
+
+ /*
+ * For backward compatibility we support a few obsolete values that no
+ * longer have mappings in nand_ecc_modes_t (they were merged with
+ * other enums).
+ */
+ if (!strcasecmp(pm, "soft_bch"))
+ return NAND_ECC_SOFT;
+
+ return -ENODEV;
+}
+
+static const char * const nand_ecc_algos[] = {
+ [NAND_ECC_HAMMING] = "hamming",
+ [NAND_ECC_BCH] = "bch",
+};
+
+static int of_get_nand_ecc_algo(struct device_node *np)
+{
+ const char *pm;
+ int err, i;
+
+ err = of_property_read_string(np, "nand-ecc-algo", &pm);
+ if (!err) {
+ for (i = NAND_ECC_HAMMING; i < ARRAY_SIZE(nand_ecc_algos); i++)
+ if (!strcasecmp(pm, nand_ecc_algos[i]))
+ return i;
+ return -ENODEV;
+ }
+
+ /*
+ * For backward compatibility we also read "nand-ecc-mode", checking
+ * for obsolete values that used to specify the ECC algorithm.
+ */
+ err = of_property_read_string(np, "nand-ecc-mode", &pm);
+ if (err < 0)
+ return err;
+
+ if (!strcasecmp(pm, "soft"))
+ return NAND_ECC_HAMMING;
+ else if (!strcasecmp(pm, "soft_bch"))
+ return NAND_ECC_BCH;
+
+ return -ENODEV;
+}
+
+static int of_get_nand_ecc_step_size(struct device_node *np)
+{
+ int ret;
+ u32 val;
+
+ ret = of_property_read_u32(np, "nand-ecc-step-size", &val);
+ return ret ? ret : val;
+}
+
+static int of_get_nand_ecc_strength(struct device_node *np)
+{
+ int ret;
+ u32 val;
+
+ ret = of_property_read_u32(np, "nand-ecc-strength", &val);
+ return ret ? ret : val;
+}
+
+static int of_get_nand_bus_width(struct device_node *np)
+{
+ u32 val;
+
+ if (of_property_read_u32(np, "nand-bus-width", &val))
+ return 8;
+
+ switch (val) {
+ case 8:
+ case 16:
+ return val;
+ default:
+ return -EIO;
+ }
+}
+
+static bool of_get_nand_on_flash_bbt(struct device_node *np)
+{
+ return of_property_read_bool(np, "nand-on-flash-bbt");
+}
+
static int nand_dt_init(struct nand_chip *chip)
{
struct device_node *dn = nand_get_flash_node(chip);
- int ecc_mode, ecc_strength, ecc_step;
+ int ecc_mode, ecc_algo, ecc_strength, ecc_step;
if (!dn)
return 0;
@@ -3979,6 +4091,7 @@ static int nand_dt_init(struct nand_chip *chip)
chip->bbt_options |= NAND_BBT_USE_FLASH;
ecc_mode = of_get_nand_ecc_mode(dn);
+ ecc_algo = of_get_nand_ecc_algo(dn);
ecc_strength = of_get_nand_ecc_strength(dn);
ecc_step = of_get_nand_ecc_step_size(dn);
@@ -3991,6 +4104,9 @@ static int nand_dt_init(struct nand_chip *chip)
if (ecc_mode >= 0)
chip->ecc.mode = ecc_mode;
+ if (ecc_algo >= 0)
+ chip->ecc.algo = ecc_algo;
+
if (ecc_strength >= 0)
chip->ecc.strength = ecc_strength;
@@ -4067,6 +4183,82 @@ int nand_scan_ident(struct mtd_info *mtd, int maxchips,
}
EXPORT_SYMBOL(nand_scan_ident);
+static int nand_set_ecc_soft_ops(struct mtd_info *mtd)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ struct nand_ecc_ctrl *ecc = &chip->ecc;
+
+ if (WARN_ON(ecc->mode != NAND_ECC_SOFT))
+ return -EINVAL;
+
+ switch (ecc->algo) {
+ case NAND_ECC_HAMMING:
+ ecc->calculate = nand_calculate_ecc;
+ ecc->correct = nand_correct_data;
+ ecc->read_page = nand_read_page_swecc;
+ ecc->read_subpage = nand_read_subpage;
+ ecc->write_page = nand_write_page_swecc;
+ ecc->read_page_raw = nand_read_page_raw;
+ ecc->write_page_raw = nand_write_page_raw;
+ ecc->read_oob = nand_read_oob_std;
+ ecc->write_oob = nand_write_oob_std;
+ if (!ecc->size)
+ ecc->size = 256;
+ ecc->bytes = 3;
+ ecc->strength = 1;
+ return 0;
+ case NAND_ECC_BCH:
+ if (!mtd_nand_has_bch()) {
+ WARN(1, "CONFIG_MTD_NAND_ECC_BCH not enabled\n");
+ return -EINVAL;
+ }
+ ecc->calculate = nand_bch_calculate_ecc;
+ ecc->correct = nand_bch_correct_data;
+ ecc->read_page = nand_read_page_swecc;
+ ecc->read_subpage = nand_read_subpage;
+ ecc->write_page = nand_write_page_swecc;
+ ecc->read_page_raw = nand_read_page_raw;
+ ecc->write_page_raw = nand_write_page_raw;
+ ecc->read_oob = nand_read_oob_std;
+ ecc->write_oob = nand_write_oob_std;
+ /*
+ * Board driver should supply ecc.size and ecc.strength
+ * values to select how many bits are correctable.
+ * Otherwise, default to 4 bits for large page devices.
+ */
+ if (!ecc->size && (mtd->oobsize >= 64)) {
+ ecc->size = 512;
+ ecc->strength = 4;
+ }
+
+ /*
+ * If no ECC placement scheme was provided, pick up the default
+ * large page one.
+ */
+ if (!mtd->ooblayout) {
+ /* handle large page devices only */
+ if (mtd->oobsize < 64) {
+ WARN(1, "OOB layout is required when using software BCH on small pages\n");
+ return -EINVAL;
+ }
+
+ mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops);
+ }
+
+ /* See nand_bch_init() for details. */
+ ecc->bytes = 0;
+ ecc->priv = nand_bch_init(mtd);
+ if (!ecc->priv) {
+ WARN(1, "BCH ECC initialization failed!\n");
+ return -EINVAL;
+ }
+ return 0;
+ default:
+ WARN(1, "Unsupported ECC algorithm!\n");
+ return -EINVAL;
+ }
+}
+
/*
* Check if the chip configuration meet the datasheet requirements.
@@ -4111,14 +4303,15 @@ static bool nand_ecc_strength_good(struct mtd_info *mtd)
*/
int nand_scan_tail(struct mtd_info *mtd)
{
- int i;
struct nand_chip *chip = mtd_to_nand(mtd);
struct nand_ecc_ctrl *ecc = &chip->ecc;
struct nand_buffers *nbuf;
+ int ret;
/* New bad blocks should be marked in OOB, flash-based BBT, or both */
- BUG_ON((chip->bbt_options & NAND_BBT_NO_OOB_BBM) &&
- !(chip->bbt_options & NAND_BBT_USE_FLASH));
+ if (WARN_ON((chip->bbt_options & NAND_BBT_NO_OOB_BBM) &&
+ !(chip->bbt_options & NAND_BBT_USE_FLASH)))
+ return -EINVAL;
if (!(chip->options & NAND_OWN_BUFFERS)) {
nbuf = kzalloc(sizeof(*nbuf) + mtd->writesize
@@ -4141,24 +4334,22 @@ int nand_scan_tail(struct mtd_info *mtd)
/*
* If no default placement scheme is given, select an appropriate one.
*/
- if (!ecc->layout && (ecc->mode != NAND_ECC_SOFT_BCH)) {
+ if (!mtd->ooblayout &&
+ !(ecc->mode == NAND_ECC_SOFT && ecc->algo == NAND_ECC_BCH)) {
switch (mtd->oobsize) {
case 8:
- ecc->layout = &nand_oob_8;
- break;
case 16:
- ecc->layout = &nand_oob_16;
+ mtd_set_ooblayout(mtd, &nand_ooblayout_sp_ops);
break;
case 64:
- ecc->layout = &nand_oob_64;
- break;
case 128:
- ecc->layout = &nand_oob_128;
+ mtd_set_ooblayout(mtd, &nand_ooblayout_lp_ops);
break;
default:
- pr_warn("No oob scheme defined for oobsize %d\n",
- mtd->oobsize);
- BUG();
+ WARN(1, "No oob scheme defined for oobsize %d\n",
+ mtd->oobsize);
+ ret = -EINVAL;
+ goto err_free;
}
}
@@ -4174,8 +4365,9 @@ int nand_scan_tail(struct mtd_info *mtd)
case NAND_ECC_HW_OOB_FIRST:
/* Similar to NAND_ECC_HW, but a separate read_page handle */
if (!ecc->calculate || !ecc->correct || !ecc->hwctl) {
- pr_warn("No ECC functions supplied; hardware ECC not possible\n");
- BUG();
+ WARN(1, "No ECC functions supplied; hardware ECC not possible\n");
+ ret = -EINVAL;
+ goto err_free;
}
if (!ecc->read_page)
ecc->read_page = nand_read_page_hwecc_oob_first;
@@ -4205,8 +4397,9 @@ int nand_scan_tail(struct mtd_info *mtd)
ecc->read_page == nand_read_page_hwecc ||
!ecc->write_page ||
ecc->write_page == nand_write_page_hwecc)) {
- pr_warn("No ECC functions supplied; hardware ECC not possible\n");
- BUG();
+ WARN(1, "No ECC functions supplied; hardware ECC not possible\n");
+ ret = -EINVAL;
+ goto err_free;
}
/* Use standard syndrome read/write page function? */
if (!ecc->read_page)
@@ -4224,61 +4417,22 @@ int nand_scan_tail(struct mtd_info *mtd)
if (mtd->writesize >= ecc->size) {
if (!ecc->strength) {
- pr_warn("Driver must set ecc.strength when using hardware ECC\n");
- BUG();
+ WARN(1, "Driver must set ecc.strength when using hardware ECC\n");
+ ret = -EINVAL;
+ goto err_free;
}
break;
}
pr_warn("%d byte HW ECC not possible on %d byte page size, fallback to SW ECC\n",
ecc->size, mtd->writesize);
ecc->mode = NAND_ECC_SOFT;
+ ecc->algo = NAND_ECC_HAMMING;
case NAND_ECC_SOFT:
- ecc->calculate = nand_calculate_ecc;
- ecc->correct = nand_correct_data;
- ecc->read_page = nand_read_page_swecc;
- ecc->read_subpage = nand_read_subpage;
- ecc->write_page = nand_write_page_swecc;
- ecc->read_page_raw = nand_read_page_raw;
- ecc->write_page_raw = nand_write_page_raw;
- ecc->read_oob = nand_read_oob_std;
- ecc->write_oob = nand_write_oob_std;
- if (!ecc->size)
- ecc->size = 256;
- ecc->bytes = 3;
- ecc->strength = 1;
- break;
-
- case NAND_ECC_SOFT_BCH:
- if (!mtd_nand_has_bch()) {
- pr_warn("CONFIG_MTD_NAND_ECC_BCH not enabled\n");
- BUG();
- }
- ecc->calculate = nand_bch_calculate_ecc;
- ecc->correct = nand_bch_correct_data;
- ecc->read_page = nand_read_page_swecc;
- ecc->read_subpage = nand_read_subpage;
- ecc->write_page = nand_write_page_swecc;
- ecc->read_page_raw = nand_read_page_raw;
- ecc->write_page_raw = nand_write_page_raw;
- ecc->read_oob = nand_read_oob_std;
- ecc->write_oob = nand_write_oob_std;
- /*
- * Board driver should supply ecc.size and ecc.strength values
- * to select how many bits are correctable. Otherwise, default
- * to 4 bits for large page devices.
- */
- if (!ecc->size && (mtd->oobsize >= 64)) {
- ecc->size = 512;
- ecc->strength = 4;
- }
-
- /* See nand_bch_init() for details. */
- ecc->bytes = 0;
- ecc->priv = nand_bch_init(mtd);
- if (!ecc->priv) {
- pr_warn("BCH ECC initialization failed!\n");
- BUG();
+ ret = nand_set_ecc_soft_ops(mtd);
+ if (ret) {
+ ret = -EINVAL;
+ goto err_free;
}
break;
@@ -4296,8 +4450,9 @@ int nand_scan_tail(struct mtd_info *mtd)
break;
default:
- pr_warn("Invalid NAND_ECC_MODE %d\n", ecc->mode);
- BUG();
+ WARN(1, "Invalid NAND_ECC_MODE %d\n", ecc->mode);
+ ret = -EINVAL;
+ goto err_free;
}
/* For many systems, the standard OOB write also works for raw */
@@ -4306,20 +4461,9 @@ int nand_scan_tail(struct mtd_info *mtd)
if (!ecc->write_oob_raw)
ecc->write_oob_raw = ecc->write_oob;
- /*
- * The number of bytes available for a client to place data into
- * the out of band area.
- */
- mtd->oobavail = 0;
- if (ecc->layout) {
- for (i = 0; ecc->layout->oobfree[i].length; i++)
- mtd->oobavail += ecc->layout->oobfree[i].length;
- }
-
- /* ECC sanity check: warn if it's too weak */
- if (!nand_ecc_strength_good(mtd))
- pr_warn("WARNING: %s: the ECC used on your system is too weak compared to the one required by the NAND chip\n",
- mtd->name);
+ /* propagate ecc info to mtd_info */
+ mtd->ecc_strength = ecc->strength;
+ mtd->ecc_step_size = ecc->size;
/*
* Set the number of read / write steps for one page depending on ECC
@@ -4327,11 +4471,27 @@ int nand_scan_tail(struct mtd_info *mtd)
*/
ecc->steps = mtd->writesize / ecc->size;
if (ecc->steps * ecc->size != mtd->writesize) {
- pr_warn("Invalid ECC parameters\n");
- BUG();
+ WARN(1, "Invalid ECC parameters\n");
+ ret = -EINVAL;
+ goto err_free;
}
ecc->total = ecc->steps * ecc->bytes;
+ /*
+ * The number of bytes available for a client to place data into
+ * the out of band area.
+ */
+ ret = mtd_ooblayout_count_freebytes(mtd);
+ if (ret < 0)
+ ret = 0;
+
+ mtd->oobavail = ret;
+
+ /* ECC sanity check: warn if it's too weak */
+ if (!nand_ecc_strength_good(mtd))
+ pr_warn("WARNING: %s: the ECC used on your system is too weak compared to the one required by the NAND chip\n",
+ mtd->name);
+
/* Allow subpage writes up to ecc.steps. Not possible for MLC flash */
if (!(chip->options & NAND_NO_SUBPAGE_WRITE) && nand_is_slc(chip)) {
switch (ecc->steps) {
@@ -4356,7 +4516,6 @@ int nand_scan_tail(struct mtd_info *mtd)
/* Large page NAND with SOFT_ECC should support subpage reads */
switch (ecc->mode) {
case NAND_ECC_SOFT:
- case NAND_ECC_SOFT_BCH:
if (chip->page_shift > 9)
chip->options |= NAND_SUBPAGE_READ;
break;
@@ -4388,10 +4547,6 @@ int nand_scan_tail(struct mtd_info *mtd)
mtd->_block_markbad = nand_block_markbad;
mtd->writebufsize = mtd->writesize;
- /* propagate ecc info to mtd_info */
- mtd->ecclayout = ecc->layout;
- mtd->ecc_strength = ecc->strength;
- mtd->ecc_step_size = ecc->size;
/*
* Initialize bitflip_threshold to its default prior scan_bbt() call.
* scan_bbt() might invoke mtd_read(), thus bitflip_threshold must be
@@ -4406,6 +4561,10 @@ int nand_scan_tail(struct mtd_info *mtd)
/* Build bad block table */
return chip->scan_bbt(mtd);
+err_free:
+ if (!(chip->options & NAND_OWN_BUFFERS))
+ kfree(chip->buffers);
+ return ret;
}
EXPORT_SYMBOL(nand_scan_tail);
@@ -4449,7 +4608,8 @@ void nand_release(struct mtd_info *mtd)
{
struct nand_chip *chip = mtd_to_nand(mtd);
- if (chip->ecc.mode == NAND_ECC_SOFT_BCH)
+ if (chip->ecc.mode == NAND_ECC_SOFT &&
+ chip->ecc.algo == NAND_ECC_BCH)
nand_bch_free((struct nand_bch_control *)chip->ecc.priv);
mtd_device_unregister(mtd);
@@ -4466,20 +4626,6 @@ void nand_release(struct mtd_info *mtd)
}
EXPORT_SYMBOL_GPL(nand_release);
-static int __init nand_base_init(void)
-{
- led_trigger_register_simple("nand-disk", &nand_led_trigger);
- return 0;
-}
-
-static void __exit nand_base_exit(void)
-{
- led_trigger_unregister_simple(nand_led_trigger);
-}
-
-module_init(nand_base_init);
-module_exit(nand_base_exit);
-
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Steven J. Hill <sjhill@realitydiluted.com>");
MODULE_AUTHOR("Thomas Gleixner <tglx@linutronix.de>");
diff --git a/drivers/mtd/nand/nand_bch.c b/drivers/mtd/nand/nand_bch.c
index b585bae37929..44763f87eae4 100644
--- a/drivers/mtd/nand/nand_bch.c
+++ b/drivers/mtd/nand/nand_bch.c
@@ -32,13 +32,11 @@
/**
* struct nand_bch_control - private NAND BCH control structure
* @bch: BCH control structure
- * @ecclayout: private ecc layout for this BCH configuration
* @errloc: error location array
* @eccmask: XOR ecc mask, allows erased pages to be decoded as valid
*/
struct nand_bch_control {
struct bch_control *bch;
- struct nand_ecclayout ecclayout;
unsigned int *errloc;
unsigned char *eccmask;
};
@@ -124,7 +122,6 @@ struct nand_bch_control *nand_bch_init(struct mtd_info *mtd)
{
struct nand_chip *nand = mtd_to_nand(mtd);
unsigned int m, t, eccsteps, i;
- struct nand_ecclayout *layout = nand->ecc.layout;
struct nand_bch_control *nbc = NULL;
unsigned char *erased_page;
unsigned int eccsize = nand->ecc.size;
@@ -161,34 +158,10 @@ struct nand_bch_control *nand_bch_init(struct mtd_info *mtd)
eccsteps = mtd->writesize/eccsize;
- /* if no ecc placement scheme was provided, build one */
- if (!layout) {
-
- /* handle large page devices only */
- if (mtd->oobsize < 64) {
- printk(KERN_WARNING "must provide an oob scheme for "
- "oobsize %d\n", mtd->oobsize);
- goto fail;
- }
-
- layout = &nbc->ecclayout;
- layout->eccbytes = eccsteps*eccbytes;
-
- /* reserve 2 bytes for bad block marker */
- if (layout->eccbytes+2 > mtd->oobsize) {
- printk(KERN_WARNING "no suitable oob scheme available "
- "for oobsize %d eccbytes %u\n", mtd->oobsize,
- eccbytes);
- goto fail;
- }
- /* put ecc bytes at oob tail */
- for (i = 0; i < layout->eccbytes; i++)
- layout->eccpos[i] = mtd->oobsize-layout->eccbytes+i;
-
- layout->oobfree[0].offset = 2;
- layout->oobfree[0].length = mtd->oobsize-2-layout->eccbytes;
-
- nand->ecc.layout = layout;
+ /* Check that we have an oob layout description. */
+ if (!mtd->ooblayout) {
+ pr_warn("missing oob scheme");
+ goto fail;
}
/* sanity checks */
@@ -196,7 +169,18 @@ struct nand_bch_control *nand_bch_init(struct mtd_info *mtd)
printk(KERN_WARNING "eccsize %u is too large\n", eccsize);
goto fail;
}
- if (layout->eccbytes != (eccsteps*eccbytes)) {
+
+ /*
+ * ecc->steps and ecc->total might be used by mtd->ooblayout->ecc(),
+ * which is called by mtd_ooblayout_count_eccbytes().
+ * Make sure they are properly initialized before calling
+ * mtd_ooblayout_count_eccbytes().
+ * FIXME: we should probably rework the sequencing in nand_scan_tail()
+ * to avoid setting those fields twice.
+ */
+ nand->ecc.steps = eccsteps;
+ nand->ecc.total = eccsteps * eccbytes;
+ if (mtd_ooblayout_count_eccbytes(mtd) != (eccsteps*eccbytes)) {
printk(KERN_WARNING "invalid ecc layout\n");
goto fail;
}
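
The nand_bch.c change stops building a private nand_ecclayout: the layout must already be attached to mtd->ooblayout, and NAND_ECC_SOFT_BCH disappears in favour of mode NAND_ECC_SOFT plus algo NAND_ECC_BCH (handled by nand_set_ecc_soft_ops() above and mirrored by the nandsim conversion below). A sketch of what a board driver's setup now looks like, assuming the chip was already identified with nand_scan_ident(); the foo_* name and the size/strength values are illustrative only:

#include <linux/mtd/mtd.h>
#include <linux/mtd/nand.h>

static int foo_board_init_ecc(struct mtd_info *mtd, struct nand_chip *chip)
{
	chip->ecc.mode = NAND_ECC_SOFT;	/* replaces NAND_ECC_SOFT_BCH */
	chip->ecc.algo = NAND_ECC_BCH;
	chip->ecc.size = 512;		/* data bytes per ECC step */
	chip->ecc.strength = 4;		/* correctable bits per step */

	/*
	 * nand_scan_tail() ends up in nand_set_ecc_soft_ops(), which wires
	 * up the nand_bch_* helpers and, if no layout was provided, picks
	 * nand_ooblayout_lp_ops before calling nand_bch_init().
	 */
	return nand_scan_tail(mtd);
}
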
diff --git a/drivers/mtd/nand/nandsim.c b/drivers/mtd/nand/nandsim.c
index a58169a28741..1eb934414eb5 100644
--- a/drivers/mtd/nand/nandsim.c
+++ b/drivers/mtd/nand/nandsim.c
@@ -569,7 +569,7 @@ static void nandsim_debugfs_remove(struct nandsim *ns)
*
* RETURNS: 0 if success, -ENOMEM if memory alloc fails.
*/
-static int alloc_device(struct nandsim *ns)
+static int __init alloc_device(struct nandsim *ns)
{
struct file *cfile;
int i, err;
@@ -654,7 +654,7 @@ static void free_device(struct nandsim *ns)
}
}
-static char *get_partition_name(int i)
+static char __init *get_partition_name(int i)
{
return kasprintf(GFP_KERNEL, "NAND simulator partition %d", i);
}
@@ -664,7 +664,7 @@ static char *get_partition_name(int i)
*
* RETURNS: 0 if success, -ERRNO if failure.
*/
-static int init_nandsim(struct mtd_info *mtd)
+static int __init init_nandsim(struct mtd_info *mtd)
{
struct nand_chip *chip = mtd_to_nand(mtd);
struct nandsim *ns = nand_get_controller_data(chip);
@@ -2261,6 +2261,7 @@ static int __init ns_init_module(void)
chip->read_buf = ns_nand_read_buf;
chip->read_word = ns_nand_read_word;
chip->ecc.mode = NAND_ECC_SOFT;
+ chip->ecc.algo = NAND_ECC_HAMMING;
/* The NAND_SKIP_BBTSCAN option is necessary for 'overridesize' */
/* and 'badblocks' parameters to work */
chip->options |= NAND_SKIP_BBTSCAN;
@@ -2338,7 +2339,8 @@ static int __init ns_init_module(void)
retval = -EINVAL;
goto error;
}
- chip->ecc.mode = NAND_ECC_SOFT_BCH;
+ chip->ecc.mode = NAND_ECC_SOFT;
+ chip->ecc.algo = NAND_ECC_BCH;
chip->ecc.size = 512;
chip->ecc.strength = bch;
chip->ecc.bytes = eccbytes;
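
The nandsim hunk above and the driver hunks that follow repeat the same two conversions: soft-ECC users gain an explicit ecc.algo, and every open-coded eccpos[]/oobfree[] walk becomes a call to the mtd_ooblayout_{get,set}_eccbytes() or mtd_ooblayout_{get,set}_databytes() helpers already used in the nand_base.c hunks. A stand-alone illustration of the read-path half of that pattern; foo_gather_ecc() is hypothetical, the helper call mirrors the converted call sites above:

#include <linux/mtd/mtd.h>
#include <linux/mtd/nand.h>

static int foo_gather_ecc(struct mtd_info *mtd, u8 *ecc_code)
{
	struct nand_chip *chip = mtd_to_nand(mtd);

	/*
	 * Let the MTD core scatter/gather against mtd->ooblayout instead
	 * of indexing chip->ecc.layout->eccpos[] by hand; the symmetric
	 * mtd_ooblayout_set_eccbytes() is used on the write path.
	 */
	return mtd_ooblayout_get_eccbytes(mtd, ecc_code, chip->oob_poi,
					  0, chip->ecc.total);
}
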
diff --git a/drivers/mtd/nand/nuc900_nand.c b/drivers/mtd/nand/nuc900_nand.c
index dbc5b571c2bb..8f64011d32ef 100644
--- a/drivers/mtd/nand/nuc900_nand.c
+++ b/drivers/mtd/nand/nuc900_nand.c
@@ -261,6 +261,7 @@ static int nuc900_nand_probe(struct platform_device *pdev)
chip->chip_delay = 50;
chip->options = 0;
chip->ecc.mode = NAND_ECC_SOFT;
+ chip->ecc.algo = NAND_ECC_HAMMING;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
nuc900_nand->reg = devm_ioremap_resource(&pdev->dev, res);
diff --git a/drivers/mtd/nand/omap2.c b/drivers/mtd/nand/omap2.c
index 0749ca1a1456..08e158895635 100644
--- a/drivers/mtd/nand/omap2.c
+++ b/drivers/mtd/nand/omap2.c
@@ -12,6 +12,7 @@
#include <linux/dmaengine.h>
#include <linux/dma-mapping.h>
#include <linux/delay.h>
+#include <linux/gpio/consumer.h>
#include <linux/module.h>
#include <linux/interrupt.h>
#include <linux/jiffies.h>
@@ -28,6 +29,7 @@
#include <linux/mtd/nand_bch.h>
#include <linux/platform_data/elm.h>
+#include <linux/omap-gpmc.h>
#include <linux/platform_data/mtd-nand-omap2.h>
#define DRIVER_NAME "omap2-nand"
@@ -151,13 +153,17 @@ static struct nand_hw_control omap_gpmc_controller = {
};
struct omap_nand_info {
- struct omap_nand_platform_data *pdata;
struct nand_chip nand;
struct platform_device *pdev;
int gpmc_cs;
- unsigned long phys_base;
+ bool dev_ready;
+ enum nand_io xfer_type;
+ int devsize;
enum omap_ecc ecc_opt;
+ struct device_node *elm_of_node;
+
+ unsigned long phys_base;
struct completion comp;
struct dma_chan *dma;
int gpmc_irq_fifo;
@@ -168,12 +174,14 @@ struct omap_nand_info {
} iomode;
u_char *buf;
int buf_len;
+ /* Interface to GPMC */
struct gpmc_nand_regs reg;
- /* generated at runtime depending on ECC algorithm and layout selected */
- struct nand_ecclayout oobinfo;
+ struct gpmc_nand_ops *ops;
+ bool flash_bbt;
/* fields specific for BCHx_HW ECC scheme */
struct device *elm_dev;
- struct device_node *of_node;
+ /* NAND ready gpio */
+ struct gpio_desc *ready_gpiod;
};
static inline struct omap_nand_info *mtd_to_omap(struct mtd_info *mtd)
@@ -208,7 +216,7 @@ static int omap_prefetch_enable(int cs, int fifo_th, int dma_mode,
*/
val = ((cs << PREFETCH_CONFIG1_CS_SHIFT) |
PREFETCH_FIFOTHRESHOLD(fifo_th) | ENABLE_PREFETCH |
- (dma_mode << DMA_MPU_MODE_SHIFT) | (0x1 & is_write));
+ (dma_mode << DMA_MPU_MODE_SHIFT) | (is_write & 0x1));
writel(val, info->reg.gpmc_prefetch_config1);
/* Start the prefetch engine */
@@ -288,14 +296,13 @@ static void omap_write_buf8(struct mtd_info *mtd, const u_char *buf, int len)
{
struct omap_nand_info *info = mtd_to_omap(mtd);
u_char *p = (u_char *)buf;
- u32 status = 0;
+ bool status;
while (len--) {
iowrite8(*p++, info->nand.IO_ADDR_W);
/* wait until buffer is available for write */
do {
- status = readl(info->reg.gpmc_status) &
- STATUS_BUFF_EMPTY;
+ status = info->ops->nand_writebuffer_empty();
} while (!status);
}
}
@@ -323,7 +330,7 @@ static void omap_write_buf16(struct mtd_info *mtd, const u_char * buf, int len)
{
struct omap_nand_info *info = mtd_to_omap(mtd);
u16 *p = (u16 *) buf;
- u32 status = 0;
+ bool status;
/* FIXME try bursts of writesw() or DMA ... */
len >>= 1;
@@ -331,8 +338,7 @@ static void omap_write_buf16(struct mtd_info *mtd, const u_char * buf, int len)
iowrite16(*p++, info->nand.IO_ADDR_W);
/* wait until buffer is available for write */
do {
- status = readl(info->reg.gpmc_status) &
- STATUS_BUFF_EMPTY;
+ status = info->ops->nand_writebuffer_empty();
} while (!status);
}
}
@@ -467,17 +473,8 @@ static inline int omap_nand_dma_transfer(struct mtd_info *mtd, void *addr,
int ret;
u32 val;
- if (addr >= high_memory) {
- struct page *p1;
-
- if (((size_t)addr & PAGE_MASK) !=
- ((size_t)(addr + len - 1) & PAGE_MASK))
- goto out_copy;
- p1 = vmalloc_to_page(addr);
- if (!p1)
- goto out_copy;
- addr = page_address(p1) + ((size_t)addr & ~PAGE_MASK);
- }
+ if (!virt_addr_valid(addr))
+ goto out_copy;
sg_init_one(&sg, addr, len);
n = dma_map_sg(info->dma->device->dev, &sg, 1, dir);
@@ -497,6 +494,11 @@ static inline int omap_nand_dma_transfer(struct mtd_info *mtd, void *addr,
tx->callback_param = &info->comp;
dmaengine_submit(tx);
+ init_completion(&info->comp);
+
+ /* setup and start DMA using dma_addr */
+ dma_async_issue_pending(info->dma);
+
/* configure and start prefetch transfer */
ret = omap_prefetch_enable(info->gpmc_cs,
PREFETCH_FIFOTHRESHOLD_MAX, 0x1, len, is_write, info);
@@ -504,10 +506,6 @@ static inline int omap_nand_dma_transfer(struct mtd_info *mtd, void *addr,
/* PFPW engine is busy, use cpu copy method */
goto out_copy_unmap;
- init_completion(&info->comp);
- dma_async_issue_pending(info->dma);
-
- /* setup and start DMA using dma_addr */
wait_for_completion(&info->comp);
tim = 0;
limit = (loops_per_jiffy * msecs_to_jiffies(OMAP_NAND_TIMEOUT_MS));
@@ -1017,21 +1015,16 @@ static int omap_wait(struct mtd_info *mtd, struct nand_chip *chip)
}
/**
- * omap_dev_ready - calls the platform specific dev_ready function
+ * omap_dev_ready - checks the NAND Ready GPIO line
* @mtd: MTD device structure
+ *
+ * Returns true if ready and false if busy.
*/
static int omap_dev_ready(struct mtd_info *mtd)
{
- unsigned int val = 0;
struct omap_nand_info *info = mtd_to_omap(mtd);
- val = readl(info->reg.gpmc_status);
-
- if ((val & 0x100) == 0x100) {
- return 1;
- } else {
- return 0;
- }
+ return gpiod_get_value(info->ready_gpiod);
}
/**
@@ -1495,9 +1488,8 @@ static int omap_elm_correct_data(struct mtd_info *mtd, u_char *data,
static int omap_write_page_bch(struct mtd_info *mtd, struct nand_chip *chip,
const uint8_t *buf, int oob_required, int page)
{
- int i;
+ int ret;
uint8_t *ecc_calc = chip->buffers->ecccalc;
- uint32_t *eccpos = chip->ecc.layout->eccpos;
/* Enable GPMC ecc engine */
chip->ecc.hwctl(mtd, NAND_ECC_WRITE);
@@ -1508,8 +1500,10 @@ static int omap_write_page_bch(struct mtd_info *mtd, struct nand_chip *chip,
/* Update ecc vector from GPMC result registers */
chip->ecc.calculate(mtd, buf, &ecc_calc[0]);
- for (i = 0; i < chip->ecc.total; i++)
- chip->oob_poi[eccpos[i]] = ecc_calc[i];
+ ret = mtd_ooblayout_set_eccbytes(mtd, ecc_calc, chip->oob_poi, 0,
+ chip->ecc.total);
+ if (ret)
+ return ret;
/* Write ecc vector to OOB area */
chip->write_buf(mtd, chip->oob_poi, mtd->oobsize);
@@ -1536,10 +1530,7 @@ static int omap_read_page_bch(struct mtd_info *mtd, struct nand_chip *chip,
{
uint8_t *ecc_calc = chip->buffers->ecccalc;
uint8_t *ecc_code = chip->buffers->ecccode;
- uint32_t *eccpos = chip->ecc.layout->eccpos;
- uint8_t *oob = &chip->oob_poi[eccpos[0]];
- uint32_t oob_pos = mtd->writesize + chip->ecc.layout->eccpos[0];
- int stat;
+ int stat, ret;
unsigned int max_bitflips = 0;
/* Enable GPMC ecc engine */
@@ -1549,13 +1540,18 @@ static int omap_read_page_bch(struct mtd_info *mtd, struct nand_chip *chip,
chip->read_buf(mtd, buf, mtd->writesize);
/* Read oob bytes */
- chip->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_pos, -1);
- chip->read_buf(mtd, oob, chip->ecc.total);
+ chip->cmdfunc(mtd, NAND_CMD_RNDOUT,
+ mtd->writesize + BADBLOCK_MARKER_LENGTH, -1);
+ chip->read_buf(mtd, chip->oob_poi + BADBLOCK_MARKER_LENGTH,
+ chip->ecc.total);
/* Calculate ecc bytes */
chip->ecc.calculate(mtd, buf, ecc_calc);
- memcpy(ecc_code, &chip->oob_poi[eccpos[0]], chip->ecc.total);
+ ret = mtd_ooblayout_get_eccbytes(mtd, ecc_code, chip->oob_poi, 0,
+ chip->ecc.total);
+ if (ret)
+ return ret;
stat = chip->ecc.correct(mtd, buf, ecc_code, ecc_calc);
@@ -1630,7 +1626,7 @@ static bool omap2_nand_ecc_check(struct omap_nand_info *info,
"CONFIG_MTD_NAND_OMAP_BCH not enabled\n");
return false;
}
- if (ecc_needs_elm && !is_elm_present(info, pdata->elm_of_node)) {
+ if (ecc_needs_elm && !is_elm_present(info, info->elm_of_node)) {
dev_err(&info->pdev->dev, "ELM not available\n");
return false;
}
@@ -1638,43 +1634,227 @@ static bool omap2_nand_ecc_check(struct omap_nand_info *info,
return true;
}
+static const char * const nand_xfer_types[] = {
+ [NAND_OMAP_PREFETCH_POLLED] = "prefetch-polled",
+ [NAND_OMAP_POLLED] = "polled",
+ [NAND_OMAP_PREFETCH_DMA] = "prefetch-dma",
+ [NAND_OMAP_PREFETCH_IRQ] = "prefetch-irq",
+};
+
+static int omap_get_dt_info(struct device *dev, struct omap_nand_info *info)
+{
+ struct device_node *child = dev->of_node;
+ int i;
+ const char *s;
+ u32 cs;
+
+ if (of_property_read_u32(child, "reg", &cs) < 0) {
+ dev_err(dev, "reg not found in DT\n");
+ return -EINVAL;
+ }
+
+ info->gpmc_cs = cs;
+
+ /* detect availability of ELM module. Won't be present pre-OMAP4 */
+ info->elm_of_node = of_parse_phandle(child, "ti,elm-id", 0);
+ if (!info->elm_of_node)
+ dev_dbg(dev, "ti,elm-id not in DT\n");
+
+ /* select ecc-scheme for NAND */
+ if (of_property_read_string(child, "ti,nand-ecc-opt", &s)) {
+ dev_err(dev, "ti,nand-ecc-opt not found\n");
+ return -EINVAL;
+ }
+
+ if (!strcmp(s, "sw")) {
+ info->ecc_opt = OMAP_ECC_HAM1_CODE_SW;
+ } else if (!strcmp(s, "ham1") ||
+ !strcmp(s, "hw") || !strcmp(s, "hw-romcode")) {
+ info->ecc_opt = OMAP_ECC_HAM1_CODE_HW;
+ } else if (!strcmp(s, "bch4")) {
+ if (info->elm_of_node)
+ info->ecc_opt = OMAP_ECC_BCH4_CODE_HW;
+ else
+ info->ecc_opt = OMAP_ECC_BCH4_CODE_HW_DETECTION_SW;
+ } else if (!strcmp(s, "bch8")) {
+ if (info->elm_of_node)
+ info->ecc_opt = OMAP_ECC_BCH8_CODE_HW;
+ else
+ info->ecc_opt = OMAP_ECC_BCH8_CODE_HW_DETECTION_SW;
+ } else if (!strcmp(s, "bch16")) {
+ info->ecc_opt = OMAP_ECC_BCH16_CODE_HW;
+ } else {
+ dev_err(dev, "unrecognized value for ti,nand-ecc-opt\n");
+ return -EINVAL;
+ }
+
+ /* select data transfer mode */
+ if (!of_property_read_string(child, "ti,nand-xfer-type", &s)) {
+ for (i = 0; i < ARRAY_SIZE(nand_xfer_types); i++) {
+ if (!strcasecmp(s, nand_xfer_types[i])) {
+ info->xfer_type = i;
+ return 0;
+ }
+ }
+
+ dev_err(dev, "unrecognized value for ti,nand-xfer-type\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int omap_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct omap_nand_info *info = mtd_to_omap(mtd);
+ struct nand_chip *chip = &info->nand;
+ int off = BADBLOCK_MARKER_LENGTH;
+
+ if (info->ecc_opt == OMAP_ECC_HAM1_CODE_HW &&
+ !(chip->options & NAND_BUSWIDTH_16))
+ off = 1;
+
+ if (section)
+ return -ERANGE;
+
+ oobregion->offset = off;
+ oobregion->length = chip->ecc.total;
+
+ return 0;
+}
+
+static int omap_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct omap_nand_info *info = mtd_to_omap(mtd);
+ struct nand_chip *chip = &info->nand;
+ int off = BADBLOCK_MARKER_LENGTH;
+
+ if (info->ecc_opt == OMAP_ECC_HAM1_CODE_HW &&
+ !(chip->options & NAND_BUSWIDTH_16))
+ off = 1;
+
+ if (section)
+ return -ERANGE;
+
+ off += chip->ecc.total;
+ if (off >= mtd->oobsize)
+ return -ERANGE;
+
+ oobregion->offset = off;
+ oobregion->length = mtd->oobsize - off;
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops omap_ooblayout_ops = {
+ .ecc = omap_ooblayout_ecc,
+ .free = omap_ooblayout_free,
+};
+
+static int omap_sw_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ int off = BADBLOCK_MARKER_LENGTH;
+
+ if (section >= chip->ecc.steps)
+ return -ERANGE;
+
+ /*
+ * When SW correction is employed, one OMAP specific marker byte is
+ * reserved after each ECC step.
+ */
+ oobregion->offset = off + (section * (chip->ecc.bytes + 1));
+ oobregion->length = chip->ecc.bytes;
+
+ return 0;
+}
+
+static int omap_sw_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ int off = BADBLOCK_MARKER_LENGTH;
+
+ if (section)
+ return -ERANGE;
+
+ /*
+ * When SW correction is employed, one OMAP specific marker byte is
+ * reserved after each ECC step.
+ */
+ off += ((chip->ecc.bytes + 1) * chip->ecc.steps);
+ if (off >= mtd->oobsize)
+ return -ERANGE;
+
+ oobregion->offset = off;
+ oobregion->length = mtd->oobsize - off;
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops omap_sw_ooblayout_ops = {
+ .ecc = omap_sw_ooblayout_ecc,
+ .free = omap_sw_ooblayout_free,
+};
+
static int omap_nand_probe(struct platform_device *pdev)
{
struct omap_nand_info *info;
- struct omap_nand_platform_data *pdata;
+ struct omap_nand_platform_data *pdata = NULL;
struct mtd_info *mtd;
struct nand_chip *nand_chip;
- struct nand_ecclayout *ecclayout;
int err;
- int i;
dma_cap_mask_t mask;
unsigned sig;
- unsigned oob_index;
struct resource *res;
-
- pdata = dev_get_platdata(&pdev->dev);
- if (pdata == NULL) {
- dev_err(&pdev->dev, "platform data missing\n");
- return -ENODEV;
- }
+ struct device *dev = &pdev->dev;
+ int min_oobbytes = BADBLOCK_MARKER_LENGTH;
+ int oobbytes_per_step;
info = devm_kzalloc(&pdev->dev, sizeof(struct omap_nand_info),
GFP_KERNEL);
if (!info)
return -ENOMEM;
+ info->pdev = pdev;
+
+ if (dev->of_node) {
+ if (omap_get_dt_info(dev, info))
+ return -EINVAL;
+ } else {
+ pdata = dev_get_platdata(&pdev->dev);
+ if (!pdata) {
+ dev_err(&pdev->dev, "platform data missing\n");
+ return -EINVAL;
+ }
+
+ info->gpmc_cs = pdata->cs;
+ info->reg = pdata->reg;
+ info->ecc_opt = pdata->ecc_opt;
+ if (pdata->dev_ready)
+ dev_info(&pdev->dev, "pdata->dev_ready is deprecated\n");
+
+ info->xfer_type = pdata->xfer_type;
+ info->devsize = pdata->devsize;
+ info->elm_of_node = pdata->elm_of_node;
+ info->flash_bbt = pdata->flash_bbt;
+ }
+
platform_set_drvdata(pdev, info);
+ info->ops = gpmc_omap_get_nand_ops(&info->reg, info->gpmc_cs);
+ if (!info->ops) {
+ dev_err(&pdev->dev, "Failed to get GPMC->NAND interface\n");
+ return -ENODEV;
+ }
- info->pdev = pdev;
- info->gpmc_cs = pdata->cs;
- info->reg = pdata->reg;
- info->of_node = pdata->of_node;
- info->ecc_opt = pdata->ecc_opt;
nand_chip = &info->nand;
mtd = nand_to_mtd(nand_chip);
mtd->dev.parent = &pdev->dev;
nand_chip->ecc.priv = NULL;
- nand_set_flash_node(nand_chip, pdata->of_node);
+ nand_set_flash_node(nand_chip, dev->of_node);
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
nand_chip->IO_ADDR_R = devm_ioremap_resource(&pdev->dev, res);
@@ -1688,6 +1868,13 @@ static int omap_nand_probe(struct platform_device *pdev)
nand_chip->IO_ADDR_W = nand_chip->IO_ADDR_R;
nand_chip->cmd_ctrl = omap_hwcontrol;
+ info->ready_gpiod = devm_gpiod_get_optional(&pdev->dev, "rb",
+ GPIOD_IN);
+ if (IS_ERR(info->ready_gpiod)) {
+ dev_err(dev, "failed to get ready gpio\n");
+ return PTR_ERR(info->ready_gpiod);
+ }
+
/*
* If RDY/BSY line is connected to OMAP then use the omap ready
* function and the generic nand_wait function which reads the status
@@ -1695,7 +1882,7 @@ static int omap_nand_probe(struct platform_device *pdev)
* chip delay which is slightly more than tR (AC Timing) of the NAND
* device and read status register until you get a failure or success
*/
- if (pdata->dev_ready) {
+ if (info->ready_gpiod) {
nand_chip->dev_ready = omap_dev_ready;
nand_chip->chip_delay = 0;
} else {
@@ -1703,21 +1890,25 @@ static int omap_nand_probe(struct platform_device *pdev)
nand_chip->chip_delay = 50;
}
- if (pdata->flash_bbt)
- nand_chip->bbt_options |= NAND_BBT_USE_FLASH | NAND_BBT_NO_OOB;
- else
- nand_chip->options |= NAND_SKIP_BBTSCAN;
+ if (info->flash_bbt)
+ nand_chip->bbt_options |= NAND_BBT_USE_FLASH;
/* scan NAND device connected to chip controller */
- nand_chip->options |= pdata->devsize & NAND_BUSWIDTH_16;
+ nand_chip->options |= info->devsize & NAND_BUSWIDTH_16;
if (nand_scan_ident(mtd, 1, NULL)) {
- dev_err(&info->pdev->dev, "scan failed, may be bus-width mismatch\n");
+ dev_err(&info->pdev->dev,
+ "scan failed, may be bus-width mismatch\n");
err = -ENXIO;
goto return_error;
}
+ if (nand_chip->bbt_options & NAND_BBT_USE_FLASH)
+ nand_chip->bbt_options |= NAND_BBT_NO_OOB;
+ else
+ nand_chip->options |= NAND_SKIP_BBTSCAN;
+
/* re-populate low-level callbacks based on xfer modes */
- switch (pdata->xfer_type) {
+ switch (info->xfer_type) {
case NAND_OMAP_PREFETCH_POLLED:
nand_chip->read_buf = omap_read_buf_pref;
nand_chip->write_buf = omap_write_buf_pref;
@@ -1797,7 +1988,7 @@ static int omap_nand_probe(struct platform_device *pdev)
default:
dev_err(&pdev->dev,
- "xfer_type(%d) not supported!\n", pdata->xfer_type);
+ "xfer_type(%d) not supported!\n", info->xfer_type);
err = -EINVAL;
goto return_error;
}
@@ -1809,16 +2000,15 @@ static int omap_nand_probe(struct platform_device *pdev)
/*
* Bail out earlier to let NAND_ECC_SOFT code create its own
- * ecclayout instead of using ours.
+ * ooblayout instead of using ours.
*/
if (info->ecc_opt == OMAP_ECC_HAM1_CODE_SW) {
nand_chip->ecc.mode = NAND_ECC_SOFT;
+ nand_chip->ecc.algo = NAND_ECC_HAMMING;
goto scan_tail;
}
/* populate MTD interface based on ECC scheme */
- ecclayout = &info->oobinfo;
- nand_chip->ecc.layout = ecclayout;
switch (info->ecc_opt) {
case OMAP_ECC_HAM1_CODE_HW:
pr_info("nand: using OMAP_ECC_HAM1_CODE_HW\n");
@@ -1829,19 +2019,12 @@ static int omap_nand_probe(struct platform_device *pdev)
nand_chip->ecc.calculate = omap_calculate_ecc;
nand_chip->ecc.hwctl = omap_enable_hwecc;
nand_chip->ecc.correct = omap_correct_data;
- /* define ECC layout */
- ecclayout->eccbytes = nand_chip->ecc.bytes *
- (mtd->writesize /
- nand_chip->ecc.size);
- if (nand_chip->options & NAND_BUSWIDTH_16)
- oob_index = BADBLOCK_MARKER_LENGTH;
- else
- oob_index = 1;
- for (i = 0; i < ecclayout->eccbytes; i++, oob_index++)
- ecclayout->eccpos[i] = oob_index;
- /* no reserved-marker in ecclayout for this ecc-scheme */
- ecclayout->oobfree->offset =
- ecclayout->eccpos[ecclayout->eccbytes - 1] + 1;
+ mtd_set_ooblayout(mtd, &omap_ooblayout_ops);
+ oobbytes_per_step = nand_chip->ecc.bytes;
+
+ if (!(nand_chip->options & NAND_BUSWIDTH_16))
+ min_oobbytes = 1;
+
break;
case OMAP_ECC_BCH4_CODE_HW_DETECTION_SW:
@@ -1853,19 +2036,9 @@ static int omap_nand_probe(struct platform_device *pdev)
nand_chip->ecc.hwctl = omap_enable_hwecc_bch;
nand_chip->ecc.correct = nand_bch_correct_data;
nand_chip->ecc.calculate = omap_calculate_ecc_bch;
- /* define ECC layout */
- ecclayout->eccbytes = nand_chip->ecc.bytes *
- (mtd->writesize /
- nand_chip->ecc.size);
- oob_index = BADBLOCK_MARKER_LENGTH;
- for (i = 0; i < ecclayout->eccbytes; i++, oob_index++) {
- ecclayout->eccpos[i] = oob_index;
- if (((i + 1) % nand_chip->ecc.bytes) == 0)
- oob_index++;
- }
- /* include reserved-marker in ecclayout->oobfree calculation */
- ecclayout->oobfree->offset = 1 +
- ecclayout->eccpos[ecclayout->eccbytes - 1] + 1;
+ mtd_set_ooblayout(mtd, &omap_sw_ooblayout_ops);
+ /* Reserve one byte for the OMAP marker */
+ oobbytes_per_step = nand_chip->ecc.bytes + 1;
/* software bch library is used for locating errors */
nand_chip->ecc.priv = nand_bch_init(mtd);
if (!nand_chip->ecc.priv) {
@@ -1887,16 +2060,8 @@ static int omap_nand_probe(struct platform_device *pdev)
nand_chip->ecc.calculate = omap_calculate_ecc_bch;
nand_chip->ecc.read_page = omap_read_page_bch;
nand_chip->ecc.write_page = omap_write_page_bch;
- /* define ECC layout */
- ecclayout->eccbytes = nand_chip->ecc.bytes *
- (mtd->writesize /
- nand_chip->ecc.size);
- oob_index = BADBLOCK_MARKER_LENGTH;
- for (i = 0; i < ecclayout->eccbytes; i++, oob_index++)
- ecclayout->eccpos[i] = oob_index;
- /* reserved marker already included in ecclayout->eccbytes */
- ecclayout->oobfree->offset =
- ecclayout->eccpos[ecclayout->eccbytes - 1] + 1;
+ mtd_set_ooblayout(mtd, &omap_ooblayout_ops);
+ oobbytes_per_step = nand_chip->ecc.bytes;
err = elm_config(info->elm_dev, BCH4_ECC,
mtd->writesize / nand_chip->ecc.size,
@@ -1914,19 +2079,9 @@ static int omap_nand_probe(struct platform_device *pdev)
nand_chip->ecc.hwctl = omap_enable_hwecc_bch;
nand_chip->ecc.correct = nand_bch_correct_data;
nand_chip->ecc.calculate = omap_calculate_ecc_bch;
- /* define ECC layout */
- ecclayout->eccbytes = nand_chip->ecc.bytes *
- (mtd->writesize /
- nand_chip->ecc.size);
- oob_index = BADBLOCK_MARKER_LENGTH;
- for (i = 0; i < ecclayout->eccbytes; i++, oob_index++) {
- ecclayout->eccpos[i] = oob_index;
- if (((i + 1) % nand_chip->ecc.bytes) == 0)
- oob_index++;
- }
- /* include reserved-marker in ecclayout->oobfree calculation */
- ecclayout->oobfree->offset = 1 +
- ecclayout->eccpos[ecclayout->eccbytes - 1] + 1;
+ mtd_set_ooblayout(mtd, &omap_sw_ooblayout_ops);
+ /* Reserve one byte for the OMAP marker */
+ oobbytes_per_step = nand_chip->ecc.bytes + 1;
/* software bch library is used for locating errors */
nand_chip->ecc.priv = nand_bch_init(mtd);
if (!nand_chip->ecc.priv) {
@@ -1948,6 +2103,8 @@ static int omap_nand_probe(struct platform_device *pdev)
nand_chip->ecc.calculate = omap_calculate_ecc_bch;
nand_chip->ecc.read_page = omap_read_page_bch;
nand_chip->ecc.write_page = omap_write_page_bch;
+ mtd_set_ooblayout(mtd, &omap_ooblayout_ops);
+ oobbytes_per_step = nand_chip->ecc.bytes;
err = elm_config(info->elm_dev, BCH8_ECC,
mtd->writesize / nand_chip->ecc.size,
@@ -1955,16 +2112,6 @@ static int omap_nand_probe(struct platform_device *pdev)
if (err < 0)
goto return_error;
- /* define ECC layout */
- ecclayout->eccbytes = nand_chip->ecc.bytes *
- (mtd->writesize /
- nand_chip->ecc.size);
- oob_index = BADBLOCK_MARKER_LENGTH;
- for (i = 0; i < ecclayout->eccbytes; i++, oob_index++)
- ecclayout->eccpos[i] = oob_index;
- /* reserved marker already included in ecclayout->eccbytes */
- ecclayout->oobfree->offset =
- ecclayout->eccpos[ecclayout->eccbytes - 1] + 1;
break;
case OMAP_ECC_BCH16_CODE_HW:
@@ -1978,6 +2125,8 @@ static int omap_nand_probe(struct platform_device *pdev)
nand_chip->ecc.calculate = omap_calculate_ecc_bch;
nand_chip->ecc.read_page = omap_read_page_bch;
nand_chip->ecc.write_page = omap_write_page_bch;
+ mtd_set_ooblayout(mtd, &omap_ooblayout_ops);
+ oobbytes_per_step = nand_chip->ecc.bytes;
err = elm_config(info->elm_dev, BCH16_ECC,
mtd->writesize / nand_chip->ecc.size,
@@ -1985,16 +2134,6 @@ static int omap_nand_probe(struct platform_device *pdev)
if (err < 0)
goto return_error;
- /* define ECC layout */
- ecclayout->eccbytes = nand_chip->ecc.bytes *
- (mtd->writesize /
- nand_chip->ecc.size);
- oob_index = BADBLOCK_MARKER_LENGTH;
- for (i = 0; i < ecclayout->eccbytes; i++, oob_index++)
- ecclayout->eccpos[i] = oob_index;
- /* reserved marker already included in ecclayout->eccbytes */
- ecclayout->oobfree->offset =
- ecclayout->eccpos[ecclayout->eccbytes - 1] + 1;
break;
default:
dev_err(&info->pdev->dev, "invalid or unsupported ECC scheme\n");
@@ -2002,13 +2141,13 @@ static int omap_nand_probe(struct platform_device *pdev)
goto return_error;
}
- /* all OOB bytes from oobfree->offset till end off OOB are free */
- ecclayout->oobfree->length = mtd->oobsize - ecclayout->oobfree->offset;
/* check if NAND device's OOB is enough to store ECC signatures */
- if (mtd->oobsize < (ecclayout->eccbytes + BADBLOCK_MARKER_LENGTH)) {
+ min_oobbytes += (oobbytes_per_step *
+ (mtd->writesize / nand_chip->ecc.size));
+ if (mtd->oobsize < min_oobbytes) {
dev_err(&info->pdev->dev,
"not enough OOB bytes required = %d, available=%d\n",
- ecclayout->eccbytes, mtd->oobsize);
+ min_oobbytes, mtd->oobsize);
err = -EINVAL;
goto return_error;
}
@@ -2020,7 +2159,10 @@ scan_tail:
goto return_error;
}
- mtd_device_register(mtd, pdata->parts, pdata->nr_parts);
+ if (dev->of_node)
+ mtd_device_register(mtd, NULL, 0);
+ else
+ mtd_device_register(mtd, pdata->parts, pdata->nr_parts);
platform_set_drvdata(pdev, mtd);
@@ -2051,11 +2193,17 @@ static int omap_nand_remove(struct platform_device *pdev)
return 0;
}
+static const struct of_device_id omap_nand_ids[] = {
+ { .compatible = "ti,omap2-nand", },
+ {},
+};
+
static struct platform_driver omap_nand_driver = {
.probe = omap_nand_probe,
.remove = omap_nand_remove,
.driver = {
.name = DRIVER_NAME,
+ .of_match_table = of_match_ptr(omap_nand_ids),
},
};
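
With the omap2 conversion, the per-scheme eccpos[] construction collapses into two mtd_ooblayout_ops implementations whose callbacks are indexed by section (one section per ECC step for the software-corrected BCH schemes). A consumer can walk such a layout until the callback reports -ERANGE; a small sketch, assuming the mtd_ooblayout_ecc() accessor that accompanies the other mtd_ooblayout_*() helpers used in this series:

#include <linux/mtd/mtd.h>
#include <linux/printk.h>

static void foo_dump_ecc_regions(struct mtd_info *mtd)
{
	struct mtd_oob_region region = { };
	int section = 0;

	/* Each call fills in one ECC region until the layout runs out. */
	while (!mtd_ooblayout_ecc(mtd, section, &region)) {
		pr_info("ecc section %d: offset %u, length %u\n",
			section, region.offset, region.length);
		section++;
	}
}
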
diff --git a/drivers/mtd/nand/orion_nand.c b/drivers/mtd/nand/orion_nand.c
index d4614bfbfed6..40a7c4a2cf0d 100644
--- a/drivers/mtd/nand/orion_nand.c
+++ b/drivers/mtd/nand/orion_nand.c
@@ -130,6 +130,7 @@ static int __init orion_nand_probe(struct platform_device *pdev)
nc->cmd_ctrl = orion_nand_cmd_ctrl;
nc->read_buf = orion_nand_read_buf;
nc->ecc.mode = NAND_ECC_SOFT;
+ nc->ecc.algo = NAND_ECC_HAMMING;
if (board->chip_delay)
nc->chip_delay = board->chip_delay;
diff --git a/drivers/mtd/nand/pasemi_nand.c b/drivers/mtd/nand/pasemi_nand.c
index 3ab53ca53cca..5de7591b0510 100644
--- a/drivers/mtd/nand/pasemi_nand.c
+++ b/drivers/mtd/nand/pasemi_nand.c
@@ -92,8 +92,9 @@ int pasemi_device_ready(struct mtd_info *mtd)
static int pasemi_nand_probe(struct platform_device *ofdev)
{
+ struct device *dev = &ofdev->dev;
struct pci_dev *pdev;
- struct device_node *np = ofdev->dev.of_node;
+ struct device_node *np = dev->of_node;
struct resource res;
struct nand_chip *chip;
int err = 0;
@@ -107,13 +108,11 @@ static int pasemi_nand_probe(struct platform_device *ofdev)
if (pasemi_nand_mtd)
return -ENODEV;
- pr_debug("pasemi_nand at %pR\n", &res);
+ dev_dbg(dev, "pasemi_nand at %pR\n", &res);
/* Allocate memory for MTD device structure and private data */
chip = kzalloc(sizeof(struct nand_chip), GFP_KERNEL);
if (!chip) {
- printk(KERN_WARNING
- "Unable to allocate PASEMI NAND MTD device structure\n");
err = -ENOMEM;
goto out;
}
@@ -121,7 +120,7 @@ static int pasemi_nand_probe(struct platform_device *ofdev)
pasemi_nand_mtd = nand_to_mtd(chip);
/* Link the private data with the MTD structure */
- pasemi_nand_mtd->dev.parent = &ofdev->dev;
+ pasemi_nand_mtd->dev.parent = dev;
chip->IO_ADDR_R = of_iomap(np, 0);
chip->IO_ADDR_W = chip->IO_ADDR_R;
@@ -151,6 +150,7 @@ static int pasemi_nand_probe(struct platform_device *ofdev)
chip->write_buf = pasemi_write_buf;
chip->chip_delay = 0;
chip->ecc.mode = NAND_ECC_SOFT;
+ chip->ecc.algo = NAND_ECC_HAMMING;
/* Enable the following for a flash based bad block table */
chip->bbt_options = NAND_BBT_USE_FLASH;
@@ -162,13 +162,13 @@ static int pasemi_nand_probe(struct platform_device *ofdev)
}
if (mtd_device_register(pasemi_nand_mtd, NULL, 0)) {
- printk(KERN_ERR "pasemi_nand: Unable to register MTD device\n");
+ dev_err(dev, "Unable to register MTD device\n");
err = -ENODEV;
goto out_lpc;
}
- printk(KERN_INFO "PA Semi NAND flash at %08llx, control at I/O %x\n",
- res.start, lpcctl);
+ dev_info(dev, "PA Semi NAND flash at %pR, control at I/O %x\n", &res,
+ lpcctl);
return 0;
diff --git a/drivers/mtd/nand/plat_nand.c b/drivers/mtd/nand/plat_nand.c
index e4e50da30444..415a53a0deeb 100644
--- a/drivers/mtd/nand/plat_nand.c
+++ b/drivers/mtd/nand/plat_nand.c
@@ -74,6 +74,7 @@ static int plat_nand_probe(struct platform_device *pdev)
data->chip.ecc.hwctl = pdata->ctrl.hwcontrol;
data->chip.ecc.mode = NAND_ECC_SOFT;
+ data->chip.ecc.algo = NAND_ECC_HAMMING;
platform_set_drvdata(pdev, data);
diff --git a/drivers/mtd/nand/pxa3xx_nand.c b/drivers/mtd/nand/pxa3xx_nand.c
index d6508856da99..436dd6dc11f4 100644
--- a/drivers/mtd/nand/pxa3xx_nand.c
+++ b/drivers/mtd/nand/pxa3xx_nand.c
@@ -29,7 +29,6 @@
#include <linux/slab.h>
#include <linux/of.h>
#include <linux/of_device.h>
-#include <linux/of_mtd.h>
#include <linux/platform_data/mtd-nand-pxa3xx.h>
#define CHIP_DELAY_TIMEOUT msecs_to_jiffies(200)
@@ -324,6 +323,62 @@ static struct pxa3xx_nand_flash builtin_flash_types[] = {
{ 0xba20, 16, 16, &timing[3] },
};
+static int pxa3xx_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ struct pxa3xx_nand_host *host = nand_get_controller_data(chip);
+ struct pxa3xx_nand_info *info = host->info_data;
+ int nchunks = mtd->writesize / info->chunk_size;
+
+ if (section >= nchunks)
+ return -ERANGE;
+
+ oobregion->offset = ((info->ecc_size + info->spare_size) * section) +
+ info->spare_size;
+ oobregion->length = info->ecc_size;
+
+ return 0;
+}
+
+static int pxa3xx_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ struct pxa3xx_nand_host *host = nand_get_controller_data(chip);
+ struct pxa3xx_nand_info *info = host->info_data;
+ int nchunks = mtd->writesize / info->chunk_size;
+
+ if (section >= nchunks)
+ return -ERANGE;
+
+ if (!info->spare_size)
+ return 0;
+
+ oobregion->offset = section * (info->ecc_size + info->spare_size);
+ oobregion->length = info->spare_size;
+ if (!section) {
+ /*
+ * Bootrom looks in bytes 0 & 5 for bad blocks for the
+ * 4KB page / 4bit BCH combination.
+ */
+ if (mtd->writesize == 4096 && info->chunk_size == 2048) {
+ oobregion->offset += 6;
+ oobregion->length -= 6;
+ } else {
+ oobregion->offset += 2;
+ oobregion->length -= 2;
+ }
+ }
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops pxa3xx_ooblayout_ops = {
+ .ecc = pxa3xx_ooblayout_ecc,
+ .free = pxa3xx_ooblayout_free,
+};
+
static u8 bbt_pattern[] = {'M', 'V', 'B', 'b', 't', '0' };
static u8 bbt_mirror_pattern[] = {'1', 't', 'b', 'B', 'V', 'M' };
@@ -347,41 +402,6 @@ static struct nand_bbt_descr bbt_mirror_descr = {
.pattern = bbt_mirror_pattern
};
-static struct nand_ecclayout ecc_layout_2KB_bch4bit = {
- .eccbytes = 32,
- .eccpos = {
- 32, 33, 34, 35, 36, 37, 38, 39,
- 40, 41, 42, 43, 44, 45, 46, 47,
- 48, 49, 50, 51, 52, 53, 54, 55,
- 56, 57, 58, 59, 60, 61, 62, 63},
- .oobfree = { {2, 30} }
-};
-
-static struct nand_ecclayout ecc_layout_4KB_bch4bit = {
- .eccbytes = 64,
- .eccpos = {
- 32, 33, 34, 35, 36, 37, 38, 39,
- 40, 41, 42, 43, 44, 45, 46, 47,
- 48, 49, 50, 51, 52, 53, 54, 55,
- 56, 57, 58, 59, 60, 61, 62, 63,
- 96, 97, 98, 99, 100, 101, 102, 103,
- 104, 105, 106, 107, 108, 109, 110, 111,
- 112, 113, 114, 115, 116, 117, 118, 119,
- 120, 121, 122, 123, 124, 125, 126, 127},
- /* Bootrom looks in bytes 0 & 5 for bad blocks */
- .oobfree = { {6, 26}, { 64, 32} }
-};
-
-static struct nand_ecclayout ecc_layout_4KB_bch8bit = {
- .eccbytes = 128,
- .eccpos = {
- 32, 33, 34, 35, 36, 37, 38, 39,
- 40, 41, 42, 43, 44, 45, 46, 47,
- 48, 49, 50, 51, 52, 53, 54, 55,
- 56, 57, 58, 59, 60, 61, 62, 63},
- .oobfree = { }
-};
-
#define NDTR0_tCH(c) (min((c), 7) << 19)
#define NDTR0_tCS(c) (min((c), 7) << 16)
#define NDTR0_tWH(c) (min((c), 7) << 11)
@@ -1546,9 +1566,12 @@ static void pxa3xx_nand_free_buff(struct pxa3xx_nand_info *info)
}
static int pxa_ecc_init(struct pxa3xx_nand_info *info,
- struct nand_ecc_ctrl *ecc,
+ struct mtd_info *mtd,
int strength, int ecc_stepsize, int page_size)
{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ struct nand_ecc_ctrl *ecc = &chip->ecc;
+
if (strength == 1 && ecc_stepsize == 512 && page_size == 2048) {
info->nfullchunks = 1;
info->ntotalchunks = 1;
@@ -1582,7 +1605,7 @@ static int pxa_ecc_init(struct pxa3xx_nand_info *info,
info->ecc_size = 32;
ecc->mode = NAND_ECC_HW;
ecc->size = info->chunk_size;
- ecc->layout = &ecc_layout_2KB_bch4bit;
+ mtd_set_ooblayout(mtd, &pxa3xx_ooblayout_ops);
ecc->strength = 16;
} else if (strength == 4 && ecc_stepsize == 512 && page_size == 4096) {
@@ -1594,7 +1617,7 @@ static int pxa_ecc_init(struct pxa3xx_nand_info *info,
info->ecc_size = 32;
ecc->mode = NAND_ECC_HW;
ecc->size = info->chunk_size;
- ecc->layout = &ecc_layout_4KB_bch4bit;
+ mtd_set_ooblayout(mtd, &pxa3xx_ooblayout_ops);
ecc->strength = 16;
/*
@@ -1612,7 +1635,7 @@ static int pxa_ecc_init(struct pxa3xx_nand_info *info,
info->ecc_size = 32;
ecc->mode = NAND_ECC_HW;
ecc->size = info->chunk_size;
- ecc->layout = &ecc_layout_4KB_bch8bit;
+ mtd_set_ooblayout(mtd, &pxa3xx_ooblayout_ops);
ecc->strength = 16;
} else {
dev_err(&info->pdev->dev,
@@ -1651,6 +1674,12 @@ static int pxa3xx_nand_scan(struct mtd_info *mtd)
if (info->variant == PXA3XX_NAND_VARIANT_ARMADA370)
nand_writel(info, NDECCCTRL, 0x0);
+ if (pdata->flash_bbt)
+ chip->bbt_options |= NAND_BBT_USE_FLASH;
+
+ chip->ecc.strength = pdata->ecc_strength;
+ chip->ecc.size = pdata->ecc_step_size;
+
if (nand_scan_ident(mtd, 1, NULL))
return -ENODEV;
@@ -1663,13 +1692,12 @@ static int pxa3xx_nand_scan(struct mtd_info *mtd)
}
}
- if (pdata->flash_bbt) {
+ if (chip->bbt_options & NAND_BBT_USE_FLASH) {
/*
* We'll use a bad block table stored in-flash and don't
* allow writing the bad block marker to the flash.
*/
- chip->bbt_options |= NAND_BBT_USE_FLASH |
- NAND_BBT_NO_OOB_BBM;
+ chip->bbt_options |= NAND_BBT_NO_OOB_BBM;
chip->bbt_td = &bbt_main_descr;
chip->bbt_md = &bbt_mirror_descr;
}
@@ -1689,10 +1717,9 @@ static int pxa3xx_nand_scan(struct mtd_info *mtd)
}
}
- if (pdata->ecc_strength && pdata->ecc_step_size) {
- ecc_strength = pdata->ecc_strength;
- ecc_step = pdata->ecc_step_size;
- } else {
+ ecc_strength = chip->ecc.strength;
+ ecc_step = chip->ecc.size;
+ if (!ecc_strength || !ecc_step) {
ecc_strength = chip->ecc_strength_ds;
ecc_step = chip->ecc_step_ds;
}
@@ -1703,7 +1730,7 @@ static int pxa3xx_nand_scan(struct mtd_info *mtd)
ecc_step = 512;
}
- ret = pxa_ecc_init(info, &chip->ecc, ecc_strength,
+ ret = pxa_ecc_init(info, mtd, ecc_strength,
ecc_step, mtd->writesize);
if (ret)
return ret;
@@ -1903,15 +1930,6 @@ static int pxa3xx_nand_probe_dt(struct platform_device *pdev)
if (of_get_property(np, "marvell,nand-keep-config", NULL))
pdata->keep_config = 1;
of_property_read_u32(np, "num-cs", &pdata->num_cs);
- pdata->flash_bbt = of_get_nand_on_flash_bbt(np);
-
- pdata->ecc_strength = of_get_nand_ecc_strength(np);
- if (pdata->ecc_strength < 0)
- pdata->ecc_strength = 0;
-
- pdata->ecc_step_size = of_get_nand_ecc_step_size(np);
- if (pdata->ecc_step_size < 0)
- pdata->ecc_step_size = 0;
pdev->dev.platform_data = pdata;
diff --git a/drivers/mtd/nand/qcom_nandc.c b/drivers/mtd/nand/qcom_nandc.c
index f550a57e6eea..de7d28e62d4e 100644
--- a/drivers/mtd/nand/qcom_nandc.c
+++ b/drivers/mtd/nand/qcom_nandc.c
@@ -21,7 +21,6 @@
#include <linux/mtd/partitions.h>
#include <linux/of.h>
#include <linux/of_device.h>
-#include <linux/of_mtd.h>
#include <linux/delay.h>
/* NANDc reg offsets */
@@ -1437,7 +1436,6 @@ static int qcom_nandc_write_oob(struct mtd_info *mtd, struct nand_chip *chip,
struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
struct nand_ecc_ctrl *ecc = &chip->ecc;
u8 *oob = chip->oob_poi;
- int free_boff;
int data_size, oob_size;
int ret, status = 0;
@@ -1451,12 +1449,11 @@ static int qcom_nandc_write_oob(struct mtd_info *mtd, struct nand_chip *chip,
/* calculate the data and oob size for the last codeword/step */
data_size = ecc->size - ((ecc->steps - 1) << 2);
- oob_size = ecc->steps << 2;
-
- free_boff = ecc->layout->oobfree[0].offset;
+ oob_size = mtd->oobavail;
/* override new oob content to last codeword */
- memcpy(nandc->data_buffer + data_size, oob + free_boff, oob_size);
+ mtd_ooblayout_get_databytes(mtd, nandc->data_buffer + data_size, oob,
+ 0, mtd->oobavail);
set_address(host, host->cw_size * (ecc->steps - 1), page);
update_rw_regs(host, 1, false);
@@ -1710,61 +1707,52 @@ static void qcom_nandc_select_chip(struct mtd_info *mtd, int chipnr)
* This layout is read as is when ECC is disabled. When ECC is enabled, the
* inaccessible Bad Block byte(s) are ignored when we write to a page/oob,
* and assumed as 0xffs when we read a page/oob. The ECC, unused and
- * dummy/real bad block bytes are grouped as ecc bytes in nand_ecclayout (i.e,
- * ecc->bytes is the sum of the three).
+ * dummy/real bad block bytes are grouped as ecc bytes (i.e, ecc->bytes is
+ * the sum of the three).
*/
-
-static struct nand_ecclayout *
-qcom_nand_create_layout(struct qcom_nand_host *host)
+static int qcom_nand_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
{
- struct nand_chip *chip = &host->chip;
- struct mtd_info *mtd = nand_to_mtd(chip);
- struct qcom_nand_controller *nandc = get_qcom_nand_controller(chip);
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ struct qcom_nand_host *host = to_qcom_nand_host(chip);
struct nand_ecc_ctrl *ecc = &chip->ecc;
- struct nand_ecclayout *layout;
- int i, j, steps, pos = 0, shift = 0;
- layout = devm_kzalloc(nandc->dev, sizeof(*layout), GFP_KERNEL);
- if (!layout)
- return NULL;
-
- steps = mtd->writesize / ecc->size;
- layout->eccbytes = steps * ecc->bytes;
+ if (section > 1)
+ return -ERANGE;
- layout->oobfree[0].offset = (steps - 1) * ecc->bytes + host->bbm_size;
- layout->oobfree[0].length = steps << 2;
-
- /*
- * the oob bytes in the first n - 1 codewords are all grouped together
- * in the format:
- * DUMMY_BBM + UNUSED + ECC
- */
- for (i = 0; i < steps - 1; i++) {
- for (j = 0; j < ecc->bytes; j++)
- layout->eccpos[pos++] = i * ecc->bytes + j;
+ if (!section) {
+ oobregion->length = (ecc->bytes * (ecc->steps - 1)) +
+ host->bbm_size;
+ oobregion->offset = 0;
+ } else {
+ oobregion->length = host->ecc_bytes_hw + host->spare_bytes;
+ oobregion->offset = mtd->oobsize - oobregion->length;
}
- /*
- * the oob bytes in the last codeword are grouped in the format:
- * BBM + FREE OOB + UNUSED + ECC
- */
+ return 0;
+}
- /* fill up the bbm positions */
- for (j = 0; j < host->bbm_size; j++)
- layout->eccpos[pos++] = i * ecc->bytes + j;
+static int qcom_nand_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+ struct qcom_nand_host *host = to_qcom_nand_host(chip);
+ struct nand_ecc_ctrl *ecc = &chip->ecc;
- /*
- * fill up the ecc and reserved positions, their indices are offseted
- * by the free oob region
- */
- shift = layout->oobfree[0].length + host->bbm_size;
+ if (section)
+ return -ERANGE;
- for (j = 0; j < (host->ecc_bytes_hw + host->spare_bytes); j++)
- layout->eccpos[pos++] = i * ecc->bytes + shift + j;
+ oobregion->length = ecc->steps * 4;
+ oobregion->offset = ((ecc->steps - 1) * ecc->bytes) + host->bbm_size;
- return layout;
+ return 0;
}
+static const struct mtd_ooblayout_ops qcom_nand_ooblayout_ops = {
+ .ecc = qcom_nand_ooblayout_ecc,
+ .free = qcom_nand_ooblayout_free,
+};
+
static int qcom_nand_host_setup(struct qcom_nand_host *host)
{
struct nand_chip *chip = &host->chip;
@@ -1851,9 +1839,7 @@ static int qcom_nand_host_setup(struct qcom_nand_host *host)
ecc->mode = NAND_ECC_HW;
- ecc->layout = qcom_nand_create_layout(host);
- if (!ecc->layout)
- return -ENOMEM;
+ mtd_set_ooblayout(mtd, &qcom_nand_ooblayout_ops);
cwperpage = mtd->writesize / ecc->size;
diff --git a/drivers/mtd/nand/s3c2410.c b/drivers/mtd/nand/s3c2410.c
index 9c9397b54b2c..d9309cf0ce2e 100644
--- a/drivers/mtd/nand/s3c2410.c
+++ b/drivers/mtd/nand/s3c2410.c
@@ -84,11 +84,33 @@
/* new oob placement block for use with hardware ecc generation
*/
+static int s3c2410_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section)
+ return -ERANGE;
+
+ oobregion->offset = 0;
+ oobregion->length = 3;
+
+ return 0;
+}
+
+static int s3c2410_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section)
+ return -ERANGE;
+
+ oobregion->offset = 8;
+ oobregion->length = 8;
+
+ return 0;
+}
-static struct nand_ecclayout nand_hw_eccoob = {
- .eccbytes = 3,
- .eccpos = {0, 1, 2},
- .oobfree = {{8, 8}}
+static const struct mtd_ooblayout_ops s3c2410_ooblayout_ops = {
+ .ecc = s3c2410_ooblayout_ecc,
+ .free = s3c2410_ooblayout_free,
};
/* controller and mtd information */
@@ -542,7 +564,8 @@ static int s3c2410_nand_correct_data(struct mtd_info *mtd, u_char *dat,
diff0 |= (diff1 << 8);
diff0 |= (diff2 << 16);
- if ((diff0 & ~(1<<fls(diff0))) == 0)
+ /* equal to "(diff0 & ~(1 << __ffs(diff0)))" */
+ if ((diff0 & (diff0 - 1)) == 0)
return 1;
return -1;
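A minimal stand-alone check of the power-of-two test used above; the values are invented and only demonstrate that the expression accepts a zero or single-bit difference and rejects anything else.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

static int at_most_one_bit(uint32_t diff)
{
	/* true for 0 or any power of two, i.e. zero or one flipped bit */
	return (diff & (diff - 1)) == 0;
}

int main(void)
{
	assert(at_most_one_bit(0x000000));	/* no difference */
	assert(at_most_one_bit(0x000400));	/* one correctable bitflip */
	assert(!at_most_one_bit(0x000401));	/* two bitflips: uncorrectable */
	printf("single-bit test behaves as expected\n");
	return 0;
}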
@@ -859,6 +882,7 @@ static void s3c2410_nand_init_chip(struct s3c2410_nand_info *info,
}
#else
chip->ecc.mode = NAND_ECC_SOFT;
+ chip->ecc.algo = NAND_ECC_HAMMING;
#endif
if (set->disable_ecc)
@@ -919,7 +943,7 @@ static void s3c2410_nand_update_chip(struct s3c2410_nand_info *info,
} else {
chip->ecc.size = 512;
chip->ecc.bytes = 3;
- chip->ecc.layout = &nand_hw_eccoob;
+ mtd_set_ooblayout(nand_to_mtd(chip), &s3c2410_ooblayout_ops);
}
}
diff --git a/drivers/mtd/nand/sh_flctl.c b/drivers/mtd/nand/sh_flctl.c
index 4814402902f9..6fa3bcd59769 100644
--- a/drivers/mtd/nand/sh_flctl.c
+++ b/drivers/mtd/nand/sh_flctl.c
@@ -31,7 +31,6 @@
#include <linux/io.h>
#include <linux/of.h>
#include <linux/of_device.h>
-#include <linux/of_mtd.h>
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>
#include <linux/sh_dma.h>
@@ -43,26 +42,73 @@
#include <linux/mtd/partitions.h>
#include <linux/mtd/sh_flctl.h>
-static struct nand_ecclayout flctl_4secc_oob_16 = {
- .eccbytes = 10,
- .eccpos = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9},
- .oobfree = {
- {.offset = 12,
- . length = 4} },
+static int flctl_4secc_ooblayout_sp_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+
+ if (section)
+ return -ERANGE;
+
+ oobregion->offset = 0;
+ oobregion->length = chip->ecc.bytes;
+
+ return 0;
+}
+
+static int flctl_4secc_ooblayout_sp_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section)
+ return -ERANGE;
+
+ oobregion->offset = 12;
+ oobregion->length = 4;
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops flctl_4secc_oob_smallpage_ops = {
+ .ecc = flctl_4secc_ooblayout_sp_ecc,
+ .free = flctl_4secc_ooblayout_sp_free,
};
-static struct nand_ecclayout flctl_4secc_oob_64 = {
- .eccbytes = 4 * 10,
- .eccpos = {
- 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
- 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
- 38, 39, 40, 41, 42, 43, 44, 45, 46, 47,
- 54, 55, 56, 57, 58, 59, 60, 61, 62, 63 },
- .oobfree = {
- {.offset = 2, .length = 4},
- {.offset = 16, .length = 6},
- {.offset = 32, .length = 6},
- {.offset = 48, .length = 6} },
+static int flctl_4secc_ooblayout_lp_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+
+ if (section >= chip->ecc.steps)
+ return -ERANGE;
+
+ oobregion->offset = (section * 16) + 6;
+ oobregion->length = chip->ecc.bytes;
+
+ return 0;
+}
+
+static int flctl_4secc_ooblayout_lp_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *chip = mtd_to_nand(mtd);
+
+ if (section >= chip->ecc.steps)
+ return -ERANGE;
+
+ oobregion->offset = section * 16;
+ oobregion->length = 6;
+
+ if (!section) {
+ oobregion->offset += 2;
+ oobregion->length -= 2;
+ }
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops flctl_4secc_oob_largepage_ops = {
+ .ecc = flctl_4secc_ooblayout_lp_ecc,
+ .free = flctl_4secc_ooblayout_lp_free,
};
static uint8_t scan_ff_pattern[] = { 0xff, 0xff };
@@ -987,10 +1033,10 @@ static int flctl_chip_init_tail(struct mtd_info *mtd)
if (flctl->hwecc) {
if (mtd->writesize == 512) {
- chip->ecc.layout = &flctl_4secc_oob_16;
+ mtd_set_ooblayout(mtd, &flctl_4secc_oob_smallpage_ops);
chip->badblock_pattern = &flctl_4secc_smallpage;
} else {
- chip->ecc.layout = &flctl_4secc_oob_64;
+ mtd_set_ooblayout(mtd, &flctl_4secc_oob_largepage_ops);
chip->badblock_pattern = &flctl_4secc_largepage;
}
@@ -1005,6 +1051,7 @@ static int flctl_chip_init_tail(struct mtd_info *mtd)
flctl->flcmncr_base |= _4ECCEN;
} else {
chip->ecc.mode = NAND_ECC_SOFT;
+ chip->ecc.algo = NAND_ECC_HAMMING;
}
return 0;
@@ -1044,8 +1091,6 @@ static struct sh_flctl_platform_data *flctl_parse_dt(struct device *dev)
const struct of_device_id *match;
struct flctl_soc_config *config;
struct sh_flctl_platform_data *pdata;
- struct device_node *dn = dev->of_node;
- int ret;
match = of_match_device(of_flctl_match, dev);
if (match)
@@ -1065,15 +1110,6 @@ static struct sh_flctl_platform_data *flctl_parse_dt(struct device *dev)
pdata->has_hwecc = config->has_hwecc;
pdata->use_holden = config->use_holden;
- /* parse user defined options */
- ret = of_get_nand_bus_width(dn);
- if (ret == 16)
- pdata->flcmncr_val |= SEL_16BIT;
- else if (ret != 8) {
- dev_err(dev, "%s: invalid bus width\n", __func__);
- return NULL;
- }
-
return pdata;
}
@@ -1136,15 +1172,14 @@ static int flctl_probe(struct platform_device *pdev)
nand->chip_delay = 20;
nand->read_byte = flctl_read_byte;
+ nand->read_word = flctl_read_word;
nand->write_buf = flctl_write_buf;
nand->read_buf = flctl_read_buf;
nand->select_chip = flctl_select_chip;
nand->cmdfunc = flctl_cmdfunc;
- if (pdata->flcmncr_val & SEL_16BIT) {
+ if (pdata->flcmncr_val & SEL_16BIT)
nand->options |= NAND_BUSWIDTH_16;
- nand->read_word = flctl_read_word;
- }
pm_runtime_enable(&pdev->dev);
pm_runtime_resume(&pdev->dev);
@@ -1155,6 +1190,16 @@ static int flctl_probe(struct platform_device *pdev)
if (ret)
goto err_chip;
+ if (nand->options & NAND_BUSWIDTH_16) {
+ /*
+ * NAND_BUSWIDTH_16 may have been set by nand_scan_ident().
+ * Set the SEL_16BIT flag in pdata->flcmncr_val and reload
+ * flctl->flcmncr_base from it.
+ */
+ pdata->flcmncr_val |= SEL_16BIT;
+ flctl->flcmncr_base = pdata->flcmncr_val;
+ }
+
ret = flctl_chip_init_tail(flctl_mtd);
if (ret)
goto err_chip;
diff --git a/drivers/mtd/nand/sharpsl.c b/drivers/mtd/nand/sharpsl.c
index b7d1b55a160b..064ca1757589 100644
--- a/drivers/mtd/nand/sharpsl.c
+++ b/drivers/mtd/nand/sharpsl.c
@@ -148,6 +148,7 @@ static int sharpsl_nand_probe(struct platform_device *pdev)
/* Link the private data with the MTD structure */
mtd = nand_to_mtd(this);
mtd->dev.parent = &pdev->dev;
+ mtd_set_ooblayout(mtd, data->ecc_layout);
platform_set_drvdata(pdev, sharpsl);
@@ -170,7 +171,6 @@ static int sharpsl_nand_probe(struct platform_device *pdev)
this->ecc.bytes = 3;
this->ecc.strength = 1;
this->badblock_pattern = data->badblock_pattern;
- this->ecc.layout = data->ecc_layout;
this->ecc.hwctl = sharpsl_nand_enable_hwecc;
this->ecc.calculate = sharpsl_nand_calculate_ecc;
this->ecc.correct = nand_correct_data;
diff --git a/drivers/mtd/nand/sm_common.c b/drivers/mtd/nand/sm_common.c
index c514740f9a83..5939dff253c2 100644
--- a/drivers/mtd/nand/sm_common.c
+++ b/drivers/mtd/nand/sm_common.c
@@ -12,14 +12,47 @@
#include <linux/sizes.h>
#include "sm_common.h"
-static struct nand_ecclayout nand_oob_sm = {
- .eccbytes = 6,
- .eccpos = {8, 9, 10, 13, 14, 15},
- .oobfree = {
- {.offset = 0 , .length = 4}, /* reserved */
- {.offset = 6 , .length = 2}, /* LBA1 */
- {.offset = 11, .length = 2} /* LBA2 */
+static int oob_sm_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section > 1)
+ return -ERANGE;
+
+ oobregion->length = 3;
+ oobregion->offset = ((section + 1) * 8) - 3;
+
+ return 0;
+}
+
+static int oob_sm_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ switch (section) {
+ case 0:
+ /* reserved */
+ oobregion->offset = 0;
+ oobregion->length = 4;
+ break;
+ case 1:
+ /* LBA1 */
+ oobregion->offset = 6;
+ oobregion->length = 2;
+ break;
+ case 2:
+ /* LBA2 */
+ oobregion->offset = 11;
+ oobregion->length = 2;
+ break;
+ default:
+ return -ERANGE;
}
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops oob_sm_ops = {
+ .ecc = oob_sm_ooblayout_ecc,
+ .free = oob_sm_ooblayout_free,
};
/* NOTE: This layout is not compatible with SmartMedia, */
@@ -28,15 +61,43 @@ static struct nand_ecclayout nand_oob_sm = {
/* If you use smftl, it will bypass this and work correctly */
/* If you do not, then you break SmartMedia compliance anyway */
-static struct nand_ecclayout nand_oob_sm_small = {
- .eccbytes = 3,
- .eccpos = {0, 1, 2},
- .oobfree = {
- {.offset = 3 , .length = 2}, /* reserved */
- {.offset = 6 , .length = 2}, /* LBA1 */
+static int oob_sm_small_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section)
+ return -ERANGE;
+
+ oobregion->length = 3;
+ oobregion->offset = 0;
+
+ return 0;
+}
+
+static int oob_sm_small_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ switch (section) {
+ case 0:
+ /* reserved */
+ oobregion->offset = 3;
+ oobregion->length = 2;
+ break;
+ case 1:
+ /* LBA1 */
+ oobregion->offset = 6;
+ oobregion->length = 2;
+ break;
+ default:
+ return -ERANGE;
}
-};
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops oob_sm_small_ops = {
+ .ecc = oob_sm_small_ooblayout_ecc,
+ .free = oob_sm_small_ooblayout_free,
+};
static int sm_block_markbad(struct mtd_info *mtd, loff_t ofs)
{
@@ -121,9 +182,9 @@ int sm_register_device(struct mtd_info *mtd, int smartmedia)
/* ECC layout */
if (mtd->writesize == SM_SECTOR_SIZE)
- chip->ecc.layout = &nand_oob_sm;
+ mtd_set_ooblayout(mtd, &oob_sm_ops);
else if (mtd->writesize == SM_SMALL_PAGE)
- chip->ecc.layout = &nand_oob_sm_small;
+ mtd_set_ooblayout(mtd, &oob_sm_small_ops);
else
return -ENODEV;
diff --git a/drivers/mtd/nand/socrates_nand.c b/drivers/mtd/nand/socrates_nand.c
index e3305f9dd6fb..888fd314c62a 100644
--- a/drivers/mtd/nand/socrates_nand.c
+++ b/drivers/mtd/nand/socrates_nand.c
@@ -180,6 +180,7 @@ static int socrates_nand_probe(struct platform_device *ofdev)
nand_chip->dev_ready = socrates_nand_device_ready;
nand_chip->ecc.mode = NAND_ECC_SOFT; /* enable ECC */
+ nand_chip->ecc.algo = NAND_ECC_HAMMING;
/* TODO: I have no idea what real delay is. */
nand_chip->chip_delay = 20; /* 20us command delay time */
diff --git a/drivers/mtd/nand/sunxi_nand.c b/drivers/mtd/nand/sunxi_nand.c
index 1c03eee44f3d..a83a690688b4 100644
--- a/drivers/mtd/nand/sunxi_nand.c
+++ b/drivers/mtd/nand/sunxi_nand.c
@@ -30,7 +30,6 @@
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/of_gpio.h>
-#include <linux/of_mtd.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/nand.h>
#include <linux/mtd/partitions.h>
@@ -39,7 +38,7 @@
#include <linux/dmaengine.h>
#include <linux/gpio.h>
#include <linux/interrupt.h>
-#include <linux/io.h>
+#include <linux/iopoll.h>
#define NFC_REG_CTL 0x0000
#define NFC_REG_ST 0x0004
@@ -155,7 +154,7 @@
/* define bit use in NFC_ECC_ST */
#define NFC_ECC_ERR(x) BIT(x)
#define NFC_ECC_PAT_FOUND(x) BIT(x + 16)
-#define NFC_ECC_ERR_CNT(b, x) (((x) >> ((b) * 8)) & 0xff)
+#define NFC_ECC_ERR_CNT(b, x) (((x) >> (((b) % 4) * 8)) & 0xff)
#define NFC_DEFAULT_TIMEOUT_MS 1000
@@ -212,12 +211,9 @@ struct sunxi_nand_chip_sel {
* sunxi HW ECC infos: stores information related to HW ECC support
*
* @mode: the sunxi ECC mode field deduced from ECC requirements
- * @layout: the OOB layout depending on the ECC requirements and the
- * selected ECC mode
*/
struct sunxi_nand_hw_ecc {
int mode;
- struct nand_ecclayout layout;
};
/*
@@ -239,6 +235,10 @@ struct sunxi_nand_chip {
u32 timing_cfg;
u32 timing_ctl;
int selected;
+ int addr_cycles;
+ u32 addr[2];
+ int cmd_cycles;
+ u8 cmd[2];
int nsels;
struct sunxi_nand_chip_sel sels[0];
};
@@ -298,54 +298,71 @@ static irqreturn_t sunxi_nfc_interrupt(int irq, void *dev_id)
return IRQ_HANDLED;
}
-static int sunxi_nfc_wait_int(struct sunxi_nfc *nfc, u32 flags,
- unsigned int timeout_ms)
+static int sunxi_nfc_wait_events(struct sunxi_nfc *nfc, u32 events,
+ bool use_polling, unsigned int timeout_ms)
{
- init_completion(&nfc->complete);
+ int ret;
- writel(flags, nfc->regs + NFC_REG_INT);
+ if (events & ~NFC_INT_MASK)
+ return -EINVAL;
if (!timeout_ms)
timeout_ms = NFC_DEFAULT_TIMEOUT_MS;
- if (!wait_for_completion_timeout(&nfc->complete,
- msecs_to_jiffies(timeout_ms))) {
- dev_err(nfc->dev, "wait interrupt timedout\n");
- return -ETIMEDOUT;
+ if (!use_polling) {
+ init_completion(&nfc->complete);
+
+ writel(events, nfc->regs + NFC_REG_INT);
+
+ ret = wait_for_completion_timeout(&nfc->complete,
+ msecs_to_jiffies(timeout_ms));
+
+ writel(0, nfc->regs + NFC_REG_INT);
+ } else {
+ u32 status;
+
+ ret = readl_poll_timeout(nfc->regs + NFC_REG_ST, status,
+ (status & events) == events, 1,
+ timeout_ms * 1000);
}
- return 0;
+ writel(events & NFC_INT_MASK, nfc->regs + NFC_REG_ST);
+
+ if (ret)
+ dev_err(nfc->dev, "wait interrupt timedout\n");
+
+ return ret;
}
static int sunxi_nfc_wait_cmd_fifo_empty(struct sunxi_nfc *nfc)
{
- unsigned long timeout = jiffies +
- msecs_to_jiffies(NFC_DEFAULT_TIMEOUT_MS);
+ u32 status;
+ int ret;
- do {
- if (!(readl(nfc->regs + NFC_REG_ST) & NFC_CMD_FIFO_STATUS))
- return 0;
- } while (time_before(jiffies, timeout));
+ ret = readl_poll_timeout(nfc->regs + NFC_REG_ST, status,
+ !(status & NFC_CMD_FIFO_STATUS), 1,
+ NFC_DEFAULT_TIMEOUT_MS * 1000);
+ if (ret)
+ dev_err(nfc->dev, "wait for empty cmd FIFO timedout\n");
- dev_err(nfc->dev, "wait for empty cmd FIFO timedout\n");
- return -ETIMEDOUT;
+ return ret;
}
static int sunxi_nfc_rst(struct sunxi_nfc *nfc)
{
- unsigned long timeout = jiffies +
- msecs_to_jiffies(NFC_DEFAULT_TIMEOUT_MS);
+ u32 ctl;
+ int ret;
writel(0, nfc->regs + NFC_REG_ECC_CTL);
writel(NFC_RESET, nfc->regs + NFC_REG_CTL);
- do {
- if (!(readl(nfc->regs + NFC_REG_CTL) & NFC_RESET))
- return 0;
- } while (time_before(jiffies, timeout));
+ ret = readl_poll_timeout(nfc->regs + NFC_REG_CTL, ctl,
+ !(ctl & NFC_RESET), 1,
+ NFC_DEFAULT_TIMEOUT_MS * 1000);
+ if (ret)
+ dev_err(nfc->dev, "wait for NAND controller reset timedout\n");
- dev_err(nfc->dev, "wait for NAND controller reset timedout\n");
- return -ETIMEDOUT;
+ return ret;
}
static int sunxi_nfc_dev_ready(struct mtd_info *mtd)
@@ -354,7 +371,6 @@ static int sunxi_nfc_dev_ready(struct mtd_info *mtd)
struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand);
struct sunxi_nfc *nfc = to_sunxi_nfc(sunxi_nand->nand.controller);
struct sunxi_nand_rb *rb;
- unsigned long timeo = (sunxi_nand->nand.state == FL_ERASING ? 400 : 20);
int ret;
if (sunxi_nand->selected < 0)
@@ -366,12 +382,6 @@ static int sunxi_nfc_dev_ready(struct mtd_info *mtd)
case RB_NATIVE:
ret = !!(readl(nfc->regs + NFC_REG_ST) &
NFC_RB_STATE(rb->info.nativeid));
- if (ret)
- break;
-
- sunxi_nfc_wait_int(nfc, NFC_RB_B2R, timeo);
- ret = !!(readl(nfc->regs + NFC_REG_ST) &
- NFC_RB_STATE(rb->info.nativeid));
break;
case RB_GPIO:
ret = gpio_get_value(rb->info.gpio);
@@ -407,7 +417,7 @@ static void sunxi_nfc_select_chip(struct mtd_info *mtd, int chip)
sel = &sunxi_nand->sels[chip];
ctl |= NFC_CE_SEL(sel->cs) | NFC_EN |
- NFC_PAGE_SHIFT(nand->page_shift - 10);
+ NFC_PAGE_SHIFT(nand->page_shift);
if (sel->rb.type == RB_NONE) {
nand->dev_ready = NULL;
} else {
@@ -452,7 +462,7 @@ static void sunxi_nfc_read_buf(struct mtd_info *mtd, uint8_t *buf, int len)
tmp = NFC_DATA_TRANS | NFC_DATA_SWAP_METHOD;
writel(tmp, nfc->regs + NFC_REG_CMD);
- ret = sunxi_nfc_wait_int(nfc, NFC_CMD_INT_FLAG, 0);
+ ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
if (ret)
break;
@@ -487,7 +497,7 @@ static void sunxi_nfc_write_buf(struct mtd_info *mtd, const uint8_t *buf,
NFC_ACCESS_DIR;
writel(tmp, nfc->regs + NFC_REG_CMD);
- ret = sunxi_nfc_wait_int(nfc, NFC_CMD_INT_FLAG, 0);
+ ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
if (ret)
break;
@@ -511,32 +521,54 @@ static void sunxi_nfc_cmd_ctrl(struct mtd_info *mtd, int dat,
struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand);
struct sunxi_nfc *nfc = to_sunxi_nfc(sunxi_nand->nand.controller);
int ret;
- u32 tmp;
ret = sunxi_nfc_wait_cmd_fifo_empty(nfc);
if (ret)
return;
- if (ctrl & NAND_CTRL_CHANGE) {
- tmp = readl(nfc->regs + NFC_REG_CTL);
- if (ctrl & NAND_NCE)
- tmp |= NFC_CE_CTL;
- else
- tmp &= ~NFC_CE_CTL;
- writel(tmp, nfc->regs + NFC_REG_CTL);
- }
+ if (dat == NAND_CMD_NONE && (ctrl & NAND_NCE) &&
+ !(ctrl & (NAND_CLE | NAND_ALE))) {
+ u32 cmd = 0;
- if (dat == NAND_CMD_NONE)
- return;
+ if (!sunxi_nand->addr_cycles && !sunxi_nand->cmd_cycles)
+ return;
- if (ctrl & NAND_CLE) {
- writel(NFC_SEND_CMD1 | dat, nfc->regs + NFC_REG_CMD);
- } else {
- writel(dat, nfc->regs + NFC_REG_ADDR_LOW);
- writel(NFC_SEND_ADR, nfc->regs + NFC_REG_CMD);
+ if (sunxi_nand->cmd_cycles--)
+ cmd |= NFC_SEND_CMD1 | sunxi_nand->cmd[0];
+
+ if (sunxi_nand->cmd_cycles--) {
+ cmd |= NFC_SEND_CMD2;
+ writel(sunxi_nand->cmd[1],
+ nfc->regs + NFC_REG_RCMD_SET);
+ }
+
+ sunxi_nand->cmd_cycles = 0;
+
+ if (sunxi_nand->addr_cycles) {
+ cmd |= NFC_SEND_ADR |
+ NFC_ADR_NUM(sunxi_nand->addr_cycles);
+ writel(sunxi_nand->addr[0],
+ nfc->regs + NFC_REG_ADDR_LOW);
+ }
+
+ if (sunxi_nand->addr_cycles > 4)
+ writel(sunxi_nand->addr[1],
+ nfc->regs + NFC_REG_ADDR_HIGH);
+
+ writel(cmd, nfc->regs + NFC_REG_CMD);
+ sunxi_nand->addr[0] = 0;
+ sunxi_nand->addr[1] = 0;
+ sunxi_nand->addr_cycles = 0;
+ sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
}
- sunxi_nfc_wait_int(nfc, NFC_CMD_INT_FLAG, 0);
+ if (ctrl & NAND_CLE) {
+ sunxi_nand->cmd[sunxi_nand->cmd_cycles++] = dat;
+ } else if (ctrl & NAND_ALE) {
+ sunxi_nand->addr[sunxi_nand->addr_cycles / 4] |=
+ dat << ((sunxi_nand->addr_cycles % 4) * 8);
+ sunxi_nand->addr_cycles++;
+ }
}
/* These seed values have been extracted from Allwinner's BSP */
@@ -717,7 +749,8 @@ static void sunxi_nfc_hw_ecc_enable(struct mtd_info *mtd)
ecc_ctl = readl(nfc->regs + NFC_REG_ECC_CTL);
ecc_ctl &= ~(NFC_ECC_MODE_MSK | NFC_ECC_PIPELINE |
NFC_ECC_BLOCK_SIZE_MSK);
- ecc_ctl |= NFC_ECC_EN | NFC_ECC_MODE(data->mode) | NFC_ECC_EXCEPTION;
+ ecc_ctl |= NFC_ECC_EN | NFC_ECC_MODE(data->mode) | NFC_ECC_EXCEPTION |
+ NFC_ECC_PIPELINE;
writel(ecc_ctl, nfc->regs + NFC_REG_ECC_CTL);
}
@@ -739,18 +772,106 @@ static inline void sunxi_nfc_user_data_to_buf(u32 user_data, u8 *buf)
buf[3] = user_data >> 24;
}
+static inline u32 sunxi_nfc_buf_to_user_data(const u8 *buf)
+{
+ return buf[0] | (buf[1] << 8) | (buf[2] << 16) | (buf[3] << 24);
+}
+
+static void sunxi_nfc_hw_ecc_get_prot_oob_bytes(struct mtd_info *mtd, u8 *oob,
+ int step, bool bbm, int page)
+{
+ struct nand_chip *nand = mtd_to_nand(mtd);
+ struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+
+ sunxi_nfc_user_data_to_buf(readl(nfc->regs + NFC_REG_USER_DATA(step)),
+ oob);
+
+ /* De-randomize the Bad Block Marker. */
+ if (bbm && (nand->options & NAND_NEED_SCRAMBLING))
+ sunxi_nfc_randomize_bbm(mtd, page, oob);
+}
+
+static void sunxi_nfc_hw_ecc_set_prot_oob_bytes(struct mtd_info *mtd,
+ const u8 *oob, int step,
+ bool bbm, int page)
+{
+ struct nand_chip *nand = mtd_to_nand(mtd);
+ struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+ u8 user_data[4];
+
+ /* Randomize the Bad Block Marker. */
+ if (bbm && (nand->options & NAND_NEED_SCRAMBLING)) {
+ memcpy(user_data, oob, sizeof(user_data));
+ sunxi_nfc_randomize_bbm(mtd, page, user_data);
+ oob = user_data;
+ }
+
+ writel(sunxi_nfc_buf_to_user_data(oob),
+ nfc->regs + NFC_REG_USER_DATA(step));
+}
+
+static void sunxi_nfc_hw_ecc_update_stats(struct mtd_info *mtd,
+ unsigned int *max_bitflips, int ret)
+{
+ if (ret < 0) {
+ mtd->ecc_stats.failed++;
+ } else {
+ mtd->ecc_stats.corrected += ret;
+ *max_bitflips = max_t(unsigned int, *max_bitflips, ret);
+ }
+}
+
+static int sunxi_nfc_hw_ecc_correct(struct mtd_info *mtd, u8 *data, u8 *oob,
+ int step, bool *erased)
+{
+ struct nand_chip *nand = mtd_to_nand(mtd);
+ struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
+ struct nand_ecc_ctrl *ecc = &nand->ecc;
+ u32 status, tmp;
+
+ *erased = false;
+
+ status = readl(nfc->regs + NFC_REG_ECC_ST);
+
+ if (status & NFC_ECC_ERR(step))
+ return -EBADMSG;
+
+ if (status & NFC_ECC_PAT_FOUND(step)) {
+ u8 pattern;
+
+ if (unlikely(!(readl(nfc->regs + NFC_REG_PAT_ID) & 0x1))) {
+ pattern = 0x0;
+ } else {
+ pattern = 0xff;
+ *erased = true;
+ }
+
+ if (data)
+ memset(data, pattern, ecc->size);
+
+ if (oob)
+ memset(oob, pattern, ecc->bytes + 4);
+
+ return 0;
+ }
+
+ tmp = readl(nfc->regs + NFC_REG_ECC_ERR_CNT(step));
+
+ return NFC_ECC_ERR_CNT(step, tmp);
+}
+
static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
u8 *data, int data_off,
u8 *oob, int oob_off,
int *cur_off,
unsigned int *max_bitflips,
- bool bbm, int page)
+ bool bbm, bool oob_required, int page)
{
struct nand_chip *nand = mtd_to_nand(mtd);
struct sunxi_nfc *nfc = to_sunxi_nfc(nand->controller);
struct nand_ecc_ctrl *ecc = &nand->ecc;
int raw_mode = 0;
- u32 status;
+ bool erased;
int ret;
if (*cur_off != data_off)
@@ -769,34 +890,19 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
writel(NFC_DATA_TRANS | NFC_DATA_SWAP_METHOD | NFC_ECC_OP,
nfc->regs + NFC_REG_CMD);
- ret = sunxi_nfc_wait_int(nfc, NFC_CMD_INT_FLAG, 0);
+ ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
sunxi_nfc_randomizer_disable(mtd);
if (ret)
return ret;
*cur_off = oob_off + ecc->bytes + 4;
- status = readl(nfc->regs + NFC_REG_ECC_ST);
- if (status & NFC_ECC_PAT_FOUND(0)) {
- u8 pattern = 0xff;
-
- if (unlikely(!(readl(nfc->regs + NFC_REG_PAT_ID) & 0x1)))
- pattern = 0x0;
-
- memset(data, pattern, ecc->size);
- memset(oob, pattern, ecc->bytes + 4);
-
+ ret = sunxi_nfc_hw_ecc_correct(mtd, data, oob_required ? oob : NULL, 0,
+ &erased);
+ if (erased)
return 1;
- }
-
- ret = NFC_ECC_ERR_CNT(0, readl(nfc->regs + NFC_REG_ECC_ERR_CNT(0)));
-
- memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE, ecc->size);
-
- nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
- sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4, true, page);
- if (status & NFC_ECC_ERR(0)) {
+ if (ret < 0) {
/*
* Re-read the data with the randomizer disabled to identify
* bitflips in erased pages.
@@ -804,35 +910,34 @@ static int sunxi_nfc_hw_ecc_read_chunk(struct mtd_info *mtd,
if (nand->options & NAND_NEED_SCRAMBLING) {
nand->cmdfunc(mtd, NAND_CMD_RNDOUT, data_off, -1);
nand->read_buf(mtd, data, ecc->size);
- nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
- nand->read_buf(mtd, oob, ecc->bytes + 4);
+ } else {
+ memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE,
+ ecc->size);
}
+ nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+ nand->read_buf(mtd, oob, ecc->bytes + 4);
+
ret = nand_check_erased_ecc_chunk(data, ecc->size,
oob, ecc->bytes + 4,
NULL, 0, ecc->strength);
if (ret >= 0)
raw_mode = 1;
} else {
- /*
- * The engine protects 4 bytes of OOB data per chunk.
- * Retrieve the corrected OOB bytes.
- */
- sunxi_nfc_user_data_to_buf(readl(nfc->regs + NFC_REG_USER_DATA(0)),
- oob);
+ memcpy_fromio(data, nfc->regs + NFC_RAM0_BASE, ecc->size);
- /* De-randomize the Bad Block Marker. */
- if (bbm && nand->options & NAND_NEED_SCRAMBLING)
- sunxi_nfc_randomize_bbm(mtd, page, oob);
- }
+ if (oob_required) {
+ nand->cmdfunc(mtd, NAND_CMD_RNDOUT, oob_off, -1);
+ sunxi_nfc_randomizer_read_buf(mtd, oob, ecc->bytes + 4,
+ true, page);
- if (ret < 0) {
- mtd->ecc_stats.failed++;
- } else {
- mtd->ecc_stats.corrected += ret;
- *max_bitflips = max_t(unsigned int, *max_bitflips, ret);
+ sunxi_nfc_hw_ecc_get_prot_oob_bytes(mtd, oob, 0,
+ bbm, page);
+ }
}
+ sunxi_nfc_hw_ecc_update_stats(mtd, max_bitflips, ret);
+
return raw_mode;
}
@@ -848,7 +953,7 @@ static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd,
if (len <= 0)
return;
- if (*cur_off != offset)
+ if (!cur_off || *cur_off != offset)
nand->cmdfunc(mtd, NAND_CMD_RNDOUT,
offset + mtd->writesize, -1);
@@ -858,12 +963,8 @@ static void sunxi_nfc_hw_ecc_read_extra_oob(struct mtd_info *mtd,
sunxi_nfc_randomizer_read_buf(mtd, oob + offset, len,
false, page);
- *cur_off = mtd->oobsize + mtd->writesize;
-}
-
-static inline u32 sunxi_nfc_buf_to_user_data(const u8 *buf)
-{
- return buf[0] | (buf[1] << 8) | (buf[2] << 16) | (buf[3] << 24);
+ if (cur_off)
+ *cur_off = mtd->oobsize + mtd->writesize;
}
static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
@@ -882,19 +983,6 @@ static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
sunxi_nfc_randomizer_write_buf(mtd, data, ecc->size, false, page);
- /* Fill OOB data in */
- if ((nand->options & NAND_NEED_SCRAMBLING) && bbm) {
- u8 user_data[4];
-
- memcpy(user_data, oob, 4);
- sunxi_nfc_randomize_bbm(mtd, page, user_data);
- writel(sunxi_nfc_buf_to_user_data(user_data),
- nfc->regs + NFC_REG_USER_DATA(0));
- } else {
- writel(sunxi_nfc_buf_to_user_data(oob),
- nfc->regs + NFC_REG_USER_DATA(0));
- }
-
if (data_off + ecc->size != oob_off)
nand->cmdfunc(mtd, NAND_CMD_RNDIN, oob_off, -1);
@@ -903,11 +991,13 @@ static int sunxi_nfc_hw_ecc_write_chunk(struct mtd_info *mtd,
return ret;
sunxi_nfc_randomizer_enable(mtd);
+ sunxi_nfc_hw_ecc_set_prot_oob_bytes(mtd, oob, 0, bbm, page);
+
writel(NFC_DATA_TRANS | NFC_DATA_SWAP_METHOD |
NFC_ACCESS_DIR | NFC_ECC_OP,
nfc->regs + NFC_REG_CMD);
- ret = sunxi_nfc_wait_int(nfc, NFC_CMD_INT_FLAG, 0);
+ ret = sunxi_nfc_wait_events(nfc, NFC_CMD_INT_FLAG, true, 0);
sunxi_nfc_randomizer_disable(mtd);
if (ret)
return ret;
@@ -929,13 +1019,14 @@ static void sunxi_nfc_hw_ecc_write_extra_oob(struct mtd_info *mtd,
if (len <= 0)
return;
- if (*cur_off != offset)
+ if (!cur_off || *cur_off != offset)
nand->cmdfunc(mtd, NAND_CMD_RNDIN,
offset + mtd->writesize, -1);
sunxi_nfc_randomizer_write_buf(mtd, oob + offset, len, false, page);
- *cur_off = mtd->oobsize + mtd->writesize;
+ if (cur_off)
+ *cur_off = mtd->oobsize + mtd->writesize;
}
static int sunxi_nfc_hw_ecc_read_page(struct mtd_info *mtd,
@@ -958,7 +1049,7 @@ static int sunxi_nfc_hw_ecc_read_page(struct mtd_info *mtd,
ret = sunxi_nfc_hw_ecc_read_chunk(mtd, data, data_off, oob,
oob_off + mtd->writesize,
&cur_off, &max_bitflips,
- !i, page);
+ !i, oob_required, page);
if (ret < 0)
return ret;
else if (ret)
@@ -974,6 +1065,39 @@ static int sunxi_nfc_hw_ecc_read_page(struct mtd_info *mtd,
return max_bitflips;
}
+static int sunxi_nfc_hw_ecc_read_subpage(struct mtd_info *mtd,
+ struct nand_chip *chip,
+ u32 data_offs, u32 readlen,
+ u8 *bufpoi, int page)
+{
+ struct nand_ecc_ctrl *ecc = &chip->ecc;
+ int ret, i, cur_off = 0;
+ unsigned int max_bitflips = 0;
+
+ sunxi_nfc_hw_ecc_enable(mtd);
+
+ chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page);
+ for (i = data_offs / ecc->size;
+ i < DIV_ROUND_UP(data_offs + readlen, ecc->size); i++) {
+ int data_off = i * ecc->size;
+ int oob_off = i * (ecc->bytes + 4);
+ u8 *data = bufpoi + data_off;
+ u8 *oob = chip->oob_poi + oob_off;
+
+ ret = sunxi_nfc_hw_ecc_read_chunk(mtd, data, data_off,
+ oob,
+ oob_off + mtd->writesize,
+ &cur_off, &max_bitflips, !i,
+ false, page);
+ if (ret < 0)
+ return ret;
+ }
+
+ sunxi_nfc_hw_ecc_disable(mtd);
+
+ return max_bitflips;
+}
+
static int sunxi_nfc_hw_ecc_write_page(struct mtd_info *mtd,
struct nand_chip *chip,
const uint8_t *buf, int oob_required,
@@ -1026,7 +1150,9 @@ static int sunxi_nfc_hw_syndrome_ecc_read_page(struct mtd_info *mtd,
ret = sunxi_nfc_hw_ecc_read_chunk(mtd, data, data_off, oob,
oob_off, &cur_off,
- &max_bitflips, !i, page);
+ &max_bitflips, !i,
+ oob_required,
+ page);
if (ret < 0)
return ret;
else if (ret)
@@ -1074,6 +1200,40 @@ static int sunxi_nfc_hw_syndrome_ecc_write_page(struct mtd_info *mtd,
return 0;
}
+static int sunxi_nfc_hw_common_ecc_read_oob(struct mtd_info *mtd,
+ struct nand_chip *chip,
+ int page)
+{
+ chip->cmdfunc(mtd, NAND_CMD_READ0, 0, page);
+
+ chip->pagebuf = -1;
+
+ return chip->ecc.read_page(mtd, chip, chip->buffers->databuf, 1, page);
+}
+
+static int sunxi_nfc_hw_common_ecc_write_oob(struct mtd_info *mtd,
+ struct nand_chip *chip,
+ int page)
+{
+ int ret, status;
+
+ chip->cmdfunc(mtd, NAND_CMD_SEQIN, 0, page);
+
+ chip->pagebuf = -1;
+
+ memset(chip->buffers->databuf, 0xff, mtd->writesize);
+ ret = chip->ecc.write_page(mtd, chip, chip->buffers->databuf, 1, page);
+ if (ret)
+ return ret;
+
+ /* Send command to program the OOB data */
+ chip->cmdfunc(mtd, NAND_CMD_PAGEPROG, -1, -1);
+
+ status = chip->waitfunc(mtd, chip);
+
+ return status & NAND_STATUS_FAIL ? -EIO : 0;
+}
+
static const s32 tWB_lut[] = {6, 12, 16, 20};
static const s32 tRHW_lut[] = {4, 8, 12, 20};
@@ -1101,6 +1261,7 @@ static int sunxi_nand_chip_set_timings(struct sunxi_nand_chip *chip,
struct sunxi_nfc *nfc = to_sunxi_nfc(chip->nand.controller);
u32 min_clk_period = 0;
s32 tWB, tADL, tWHR, tRHW, tCAD;
+ long real_clk_rate;
/* T1 <=> tCLS */
if (timings->tCLS_min > min_clk_period)
@@ -1163,6 +1324,18 @@ static int sunxi_nand_chip_set_timings(struct sunxi_nand_chip *chip,
min_clk_period = DIV_ROUND_UP(timings->tWC_min, 2);
/* T16 - T19 + tCAD */
+ if (timings->tWB_max > (min_clk_period * 20))
+ min_clk_period = DIV_ROUND_UP(timings->tWB_max, 20);
+
+ if (timings->tADL_min > (min_clk_period * 32))
+ min_clk_period = DIV_ROUND_UP(timings->tADL_min, 32);
+
+ if (timings->tWHR_min > (min_clk_period * 32))
+ min_clk_period = DIV_ROUND_UP(timings->tWHR_min, 32);
+
+ if (timings->tRHW_min > (min_clk_period * 20))
+ min_clk_period = DIV_ROUND_UP(timings->tRHW_min, 20);
+
tWB = sunxi_nand_lookup_timing(tWB_lut, timings->tWB_max,
min_clk_period);
if (tWB < 0) {
@@ -1198,23 +1371,26 @@ static int sunxi_nand_chip_set_timings(struct sunxi_nand_chip *chip,
/* TODO: A83 has some more bits for CDQSS, CS, CLHZ, CCS, WC */
chip->timing_cfg = NFC_TIMING_CFG(tWB, tADL, tWHR, tRHW, tCAD);
- /*
- * ONFI specification 3.1, paragraph 4.15.2 dictates that EDO data
- * output cycle timings shall be used if the host drives tRC less than
- * 30 ns.
- */
- chip->timing_ctl = (timings->tRC_min < 30000) ? NFC_TIMING_CTL_EDO : 0;
-
/* Convert min_clk_period from picoseconds to nanoseconds */
min_clk_period = DIV_ROUND_UP(min_clk_period, 1000);
/*
- * Convert min_clk_period into a clk frequency, then get the
- * appropriate rate for the NAND controller IP given this formula
- * (specified in the datasheet):
- * nand clk_rate = 2 * min_clk_rate
+ * Unlike what is stated in the Allwinner datasheet, the clk_rate should
+ * be set to (1 / min_clk_period), and not (2 / min_clk_period).
+ * This new formula was verified with a scope and validated by
+ * Allwinner engineers.
*/
- chip->clk_rate = (2 * NSEC_PER_SEC) / min_clk_period;
+ chip->clk_rate = NSEC_PER_SEC / min_clk_period;
+ real_clk_rate = clk_round_rate(nfc->mod_clk, chip->clk_rate);
+
+ /*
+ * ONFI specification 3.1, paragraph 4.15.2 dictates that EDO data
+ * output cycle timings shall be used if the host drives tRC less than
+ * 30 ns.
+ */
+ min_clk_period = NSEC_PER_SEC / real_clk_rate;
+ chip->timing_ctl = ((min_clk_period * 2) < 30) ?
+ NFC_TIMING_CTL_EDO : 0;
return 0;
}
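A back-of-the-envelope sketch of the clock selection above, assuming a 25 ns min_clk_period and ignoring clk_round_rate(): the controller clock is asked for 1 / min_clk_period, and EDO is only enabled when the resulting read cycle (two clock periods) stays under the 30 ns ONFI limit.

#include <stdio.h>

#define NSEC_PER_SEC	1000000000L

int main(void)
{
	long min_clk_period = 25;			/* ns, assumed */
	long clk_rate = NSEC_PER_SEC / min_clk_period;	/* 40 MHz */
	long real_period = NSEC_PER_SEC / clk_rate;	/* ns, no rounding here */
	int edo = (real_period * 2) < 30;		/* is tRC under 30 ns? */

	printf("clk_rate = %ld Hz, tRC = %ld ns, EDO %s\n",
	       clk_rate, real_period * 2, edo ? "enabled" : "disabled");
	return 0;
}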
@@ -1257,6 +1433,57 @@ static int sunxi_nand_chip_init_timings(struct sunxi_nand_chip *chip,
return sunxi_nand_chip_set_timings(chip, timings);
}
+static int sunxi_nand_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *nand = mtd_to_nand(mtd);
+ struct nand_ecc_ctrl *ecc = &nand->ecc;
+
+ if (section >= ecc->steps)
+ return -ERANGE;
+
+ oobregion->offset = section * (ecc->bytes + 4) + 4;
+ oobregion->length = ecc->bytes;
+
+ return 0;
+}
+
+static int sunxi_nand_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ struct nand_chip *nand = mtd_to_nand(mtd);
+ struct nand_ecc_ctrl *ecc = &nand->ecc;
+
+ if (section > ecc->steps)
+ return -ERANGE;
+
+ /*
+ * The first 2 bytes are used for BB markers, hence we
+ * only have 2 bytes available in the first user data
+ * section.
+ */
+ if (!section && ecc->mode == NAND_ECC_HW) {
+ oobregion->offset = 2;
+ oobregion->length = 2;
+
+ return 0;
+ }
+
+ oobregion->offset = section * (ecc->bytes + 4);
+
+ if (section < ecc->steps)
+ oobregion->length = 4;
+ else
+ oobregion->offset = mtd->oobsize - oobregion->offset;
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops sunxi_nand_ooblayout_ops = {
+ .ecc = sunxi_nand_ooblayout_ecc,
+ .free = sunxi_nand_ooblayout_free,
+};
+
static int sunxi_nand_hw_common_ecc_ctrl_init(struct mtd_info *mtd,
struct nand_ecc_ctrl *ecc,
struct device_node *np)
@@ -1266,7 +1493,6 @@ static int sunxi_nand_hw_common_ecc_ctrl_init(struct mtd_info *mtd,
struct sunxi_nand_chip *sunxi_nand = to_sunxi_nand(nand);
struct sunxi_nfc *nfc = to_sunxi_nfc(sunxi_nand->nand.controller);
struct sunxi_nand_hw_ecc *data;
- struct nand_ecclayout *layout;
int nsectors;
int ret;
int i;
@@ -1295,7 +1521,6 @@ static int sunxi_nand_hw_common_ecc_ctrl_init(struct mtd_info *mtd,
/* HW ECC always work with even numbers of ECC bytes */
ecc->bytes = ALIGN(ecc->bytes, 2);
- layout = &data->layout;
nsectors = mtd->writesize / ecc->size;
if (mtd->oobsize < ((ecc->bytes + 4) * nsectors)) {
@@ -1303,9 +1528,9 @@ static int sunxi_nand_hw_common_ecc_ctrl_init(struct mtd_info *mtd,
goto err;
}
- layout->eccbytes = (ecc->bytes * nsectors);
-
- ecc->layout = layout;
+ ecc->read_oob = sunxi_nfc_hw_common_ecc_read_oob;
+ ecc->write_oob = sunxi_nfc_hw_common_ecc_write_oob;
+ mtd_set_ooblayout(mtd, &sunxi_nand_ooblayout_ops);
ecc->priv = data;
return 0;
@@ -1325,9 +1550,6 @@ static int sunxi_nand_hw_ecc_ctrl_init(struct mtd_info *mtd,
struct nand_ecc_ctrl *ecc,
struct device_node *np)
{
- struct nand_ecclayout *layout;
- int nsectors;
- int i, j;
int ret;
ret = sunxi_nand_hw_common_ecc_ctrl_init(mtd, ecc, np);
@@ -1336,40 +1558,9 @@ static int sunxi_nand_hw_ecc_ctrl_init(struct mtd_info *mtd,
ecc->read_page = sunxi_nfc_hw_ecc_read_page;
ecc->write_page = sunxi_nfc_hw_ecc_write_page;
- layout = ecc->layout;
- nsectors = mtd->writesize / ecc->size;
-
- for (i = 0; i < nsectors; i++) {
- if (i) {
- layout->oobfree[i].offset =
- layout->oobfree[i - 1].offset +
- layout->oobfree[i - 1].length +
- ecc->bytes;
- layout->oobfree[i].length = 4;
- } else {
- /*
- * The first 2 bytes are used for BB markers, hence we
- * only have 2 bytes available in the first user data
- * section.
- */
- layout->oobfree[i].length = 2;
- layout->oobfree[i].offset = 2;
- }
-
- for (j = 0; j < ecc->bytes; j++)
- layout->eccpos[(ecc->bytes * i) + j] =
- layout->oobfree[i].offset +
- layout->oobfree[i].length + j;
- }
-
- if (mtd->oobsize > (ecc->bytes + 4) * nsectors) {
- layout->oobfree[nsectors].offset =
- layout->oobfree[nsectors - 1].offset +
- layout->oobfree[nsectors - 1].length +
- ecc->bytes;
- layout->oobfree[nsectors].length = mtd->oobsize -
- ((ecc->bytes + 4) * nsectors);
- }
+ ecc->read_oob_raw = nand_read_oob_std;
+ ecc->write_oob_raw = nand_write_oob_std;
+ ecc->read_subpage = sunxi_nfc_hw_ecc_read_subpage;
return 0;
}
@@ -1378,9 +1569,6 @@ static int sunxi_nand_hw_syndrome_ecc_ctrl_init(struct mtd_info *mtd,
struct nand_ecc_ctrl *ecc,
struct device_node *np)
{
- struct nand_ecclayout *layout;
- int nsectors;
- int i;
int ret;
ret = sunxi_nand_hw_common_ecc_ctrl_init(mtd, ecc, np);
@@ -1390,15 +1578,8 @@ static int sunxi_nand_hw_syndrome_ecc_ctrl_init(struct mtd_info *mtd,
ecc->prepad = 4;
ecc->read_page = sunxi_nfc_hw_syndrome_ecc_read_page;
ecc->write_page = sunxi_nfc_hw_syndrome_ecc_write_page;
-
- layout = ecc->layout;
- nsectors = mtd->writesize / ecc->size;
-
- for (i = 0; i < (ecc->bytes * nsectors); i++)
- layout->eccpos[i] = i;
-
- layout->oobfree[0].length = mtd->oobsize - i;
- layout->oobfree[0].offset = i;
+ ecc->read_oob_raw = nand_read_oob_syndrome;
+ ecc->write_oob_raw = nand_write_oob_syndrome;
return 0;
}
@@ -1411,7 +1592,6 @@ static void sunxi_nand_ecc_cleanup(struct nand_ecc_ctrl *ecc)
sunxi_nand_hw_common_ecc_ctrl_cleanup(ecc);
break;
case NAND_ECC_NONE:
- kfree(ecc->layout);
default:
break;
}
@@ -1432,8 +1612,6 @@ static int sunxi_nand_ecc_init(struct mtd_info *mtd, struct nand_ecc_ctrl *ecc,
return -EINVAL;
switch (ecc->mode) {
- case NAND_ECC_SOFT_BCH:
- break;
case NAND_ECC_HW:
ret = sunxi_nand_hw_ecc_ctrl_init(mtd, ecc, np);
if (ret)
@@ -1445,10 +1623,6 @@ static int sunxi_nand_ecc_init(struct mtd_info *mtd, struct nand_ecc_ctrl *ecc,
return ret;
break;
case NAND_ECC_NONE:
- ecc->layout = kzalloc(sizeof(*ecc->layout), GFP_KERNEL);
- if (!ecc->layout)
- return -ENOMEM;
- ecc->layout->oobfree[0].length = mtd->oobsize;
case NAND_ECC_SOFT:
break;
default:
@@ -1536,21 +1710,6 @@ static int sunxi_nand_chip_init(struct device *dev, struct sunxi_nfc *nfc,
}
}
- timings = onfi_async_timing_mode_to_sdr_timings(0);
- if (IS_ERR(timings)) {
- ret = PTR_ERR(timings);
- dev_err(dev,
- "could not retrieve timings for ONFI mode 0: %d\n",
- ret);
- return ret;
- }
-
- ret = sunxi_nand_chip_set_timings(chip, timings);
- if (ret) {
- dev_err(dev, "could not configure chip timings: %d\n", ret);
- return ret;
- }
-
nand = &chip->nand;
/* Default tR value specified in the ONFI spec (chapter 4.15.1) */
nand->chip_delay = 200;
@@ -1570,6 +1729,21 @@ static int sunxi_nand_chip_init(struct device *dev, struct sunxi_nfc *nfc,
mtd = nand_to_mtd(nand);
mtd->dev.parent = dev;
+ timings = onfi_async_timing_mode_to_sdr_timings(0);
+ if (IS_ERR(timings)) {
+ ret = PTR_ERR(timings);
+ dev_err(dev,
+ "could not retrieve timings for ONFI mode 0: %d\n",
+ ret);
+ return ret;
+ }
+
+ ret = sunxi_nand_chip_set_timings(chip, timings);
+ if (ret) {
+ dev_err(dev, "could not configure chip timings: %d\n", ret);
+ return ret;
+ }
+
ret = nand_scan_ident(mtd, nsels, NULL);
if (ret)
return ret;
@@ -1580,6 +1754,8 @@ static int sunxi_nand_chip_init(struct device *dev, struct sunxi_nfc *nfc,
if (nand->options & NAND_NEED_SCRAMBLING)
nand->options |= NAND_NO_SUBPAGE_WRITE;
+ nand->options |= NAND_SUBPAGE_READ;
+
ret = sunxi_nand_chip_init_timings(chip, np);
if (ret) {
dev_err(dev, "could not configure chip timings: %d\n", ret);
@@ -1728,6 +1904,8 @@ static int sunxi_nfc_remove(struct platform_device *pdev)
struct sunxi_nfc *nfc = platform_get_drvdata(pdev);
sunxi_nand_chips_cleanup(nfc);
+ clk_disable_unprepare(nfc->mod_clk);
+ clk_disable_unprepare(nfc->ahb_clk);
return 0;
}
diff --git a/drivers/mtd/nand/vf610_nfc.c b/drivers/mtd/nand/vf610_nfc.c
index 293feb19b0b1..3ad514c44dcb 100644
--- a/drivers/mtd/nand/vf610_nfc.c
+++ b/drivers/mtd/nand/vf610_nfc.c
@@ -33,7 +33,6 @@
#include <linux/mtd/mtd.h>
#include <linux/mtd/nand.h>
#include <linux/mtd/partitions.h>
-#include <linux/of_mtd.h>
#include <linux/of_device.h>
#include <linux/pinctrl/consumer.h>
#include <linux/platform_device.h>
@@ -175,34 +174,6 @@ static inline struct vf610_nfc *mtd_to_nfc(struct mtd_info *mtd)
return container_of(mtd_to_nand(mtd), struct vf610_nfc, chip);
}
-static struct nand_ecclayout vf610_nfc_ecc45 = {
- .eccbytes = 45,
- .eccpos = {19, 20, 21, 22, 23,
- 24, 25, 26, 27, 28, 29, 30, 31,
- 32, 33, 34, 35, 36, 37, 38, 39,
- 40, 41, 42, 43, 44, 45, 46, 47,
- 48, 49, 50, 51, 52, 53, 54, 55,
- 56, 57, 58, 59, 60, 61, 62, 63},
- .oobfree = {
- {.offset = 2,
- .length = 17} }
-};
-
-static struct nand_ecclayout vf610_nfc_ecc60 = {
- .eccbytes = 60,
- .eccpos = { 4, 5, 6, 7, 8, 9, 10, 11,
- 12, 13, 14, 15, 16, 17, 18, 19,
- 20, 21, 22, 23, 24, 25, 26, 27,
- 28, 29, 30, 31, 32, 33, 34, 35,
- 36, 37, 38, 39, 40, 41, 42, 43,
- 44, 45, 46, 47, 48, 49, 50, 51,
- 52, 53, 54, 55, 56, 57, 58, 59,
- 60, 61, 62, 63 },
- .oobfree = {
- {.offset = 2,
- .length = 2} }
-};
-
static inline u32 vf610_nfc_read(struct vf610_nfc *nfc, uint reg)
{
return readl(nfc->regs + reg);
@@ -781,14 +752,16 @@ static int vf610_nfc_probe(struct platform_device *pdev)
if (mtd->oobsize > 64)
mtd->oobsize = 64;
+ /*
+ * mtd->ecclayout is not specified here because we're using the
+ * default large page ECC layout defined in NAND core.
+ */
if (chip->ecc.strength == 32) {
nfc->ecc_mode = ECC_60_BYTE;
chip->ecc.bytes = 60;
- chip->ecc.layout = &vf610_nfc_ecc60;
} else if (chip->ecc.strength == 24) {
nfc->ecc_mode = ECC_45_BYTE;
chip->ecc.bytes = 45;
- chip->ecc.layout = &vf610_nfc_ecc45;
} else {
dev_err(nfc->dev, "Unsupported ECC strength\n");
err = -ENXIO;
diff --git a/drivers/mtd/onenand/onenand_base.c b/drivers/mtd/onenand/onenand_base.c
index af28bb3ae7cf..a4b029a417f0 100644
--- a/drivers/mtd/onenand/onenand_base.c
+++ b/drivers/mtd/onenand/onenand_base.c
@@ -68,21 +68,33 @@ MODULE_PARM_DESC(otp, "Corresponding behaviour of OneNAND in OTP"
* flexonenand_oob_128 - oob info for Flex-Onenand with 4KB page
* For now, we expose only 64 out of 80 ecc bytes
*/
-static struct nand_ecclayout flexonenand_oob_128 = {
- .eccbytes = 64,
- .eccpos = {
- 6, 7, 8, 9, 10, 11, 12, 13, 14, 15,
- 22, 23, 24, 25, 26, 27, 28, 29, 30, 31,
- 38, 39, 40, 41, 42, 43, 44, 45, 46, 47,
- 54, 55, 56, 57, 58, 59, 60, 61, 62, 63,
- 70, 71, 72, 73, 74, 75, 76, 77, 78, 79,
- 86, 87, 88, 89, 90, 91, 92, 93, 94, 95,
- 102, 103, 104, 105
- },
- .oobfree = {
- {2, 4}, {18, 4}, {34, 4}, {50, 4},
- {66, 4}, {82, 4}, {98, 4}, {114, 4}
- }
+static int flexonenand_ooblayout_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section > 7)
+ return -ERANGE;
+
+ oobregion->offset = (section * 16) + 6;
+ oobregion->length = 10;
+
+ return 0;
+}
+
+static int flexonenand_ooblayout_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section > 7)
+ return -ERANGE;
+
+ oobregion->offset = (section * 16) + 2;
+ oobregion->length = 4;
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops flexonenand_ooblayout_ops = {
+ .ecc = flexonenand_ooblayout_ecc,
+ .free = flexonenand_ooblayout_free,
};
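The per-section formula above can be sanity-checked in plain C (illustrative only): every 16-byte spare chunk carries 4 free bytes at offset 2 and 10 ECC bytes at offset 6.

#include <stdio.h>

int main(void)
{
	int section;

	for (section = 0; section < 8; section++)
		printf("section %d: free at %d+4, ecc at %d+10\n",
		       section, (section * 16) + 2, (section * 16) + 6);

	/*
	 * Section 0 prints "free at 2+4, ecc at 6+10", matching the
	 * removed flexonenand_oob_128 entries {2, 4} and eccpos 6..15.
	 */
	return 0;
}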
/*
@@ -91,56 +103,77 @@ static struct nand_ecclayout flexonenand_oob_128 = {
* Based on specification:
* 4Gb M-die OneNAND Flash (KFM4G16Q4M, KFN8G16Q4M). Rev. 1.3, Apr. 2010
*
- * For eccpos we expose only 64 bytes out of 72 (see struct nand_ecclayout)
- *
- * oobfree uses the spare area fields marked as
- * "Managed by internal ECC logic for Logical Sector Number area"
*/
-static struct nand_ecclayout onenand_oob_128 = {
- .eccbytes = 64,
- .eccpos = {
- 7, 8, 9, 10, 11, 12, 13, 14, 15,
- 23, 24, 25, 26, 27, 28, 29, 30, 31,
- 39, 40, 41, 42, 43, 44, 45, 46, 47,
- 55, 56, 57, 58, 59, 60, 61, 62, 63,
- 71, 72, 73, 74, 75, 76, 77, 78, 79,
- 87, 88, 89, 90, 91, 92, 93, 94, 95,
- 103, 104, 105, 106, 107, 108, 109, 110, 111,
- 119
- },
- .oobfree = {
- {2, 3}, {18, 3}, {34, 3}, {50, 3},
- {66, 3}, {82, 3}, {98, 3}, {114, 3}
- }
+static int onenand_ooblayout_128_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section > 7)
+ return -ERANGE;
+
+ oobregion->offset = (section * 16) + 7;
+ oobregion->length = 9;
+
+ return 0;
+}
+
+static int onenand_ooblayout_128_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section >= 8)
+ return -ERANGE;
+
+ /*
+ * free bytes use the spare area fields marked as
+ * "Managed by internal ECC logic for Logical Sector Number area"
+ */
+ oobregion->offset = (section * 16) + 2;
+ oobregion->length = 3;
+
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops onenand_oob_128_ooblayout_ops = {
+ .ecc = onenand_ooblayout_128_ecc,
+ .free = onenand_ooblayout_128_free,
};
/**
- * onenand_oob_64 - oob info for large (2KB) page
+ * onenand_oob_32_64 - oob info for middle (1KB) and large (2KB) page
*/
-static struct nand_ecclayout onenand_oob_64 = {
- .eccbytes = 20,
- .eccpos = {
- 8, 9, 10, 11, 12,
- 24, 25, 26, 27, 28,
- 40, 41, 42, 43, 44,
- 56, 57, 58, 59, 60,
- },
- .oobfree = {
- {2, 3}, {14, 2}, {18, 3}, {30, 2},
- {34, 3}, {46, 2}, {50, 3}, {62, 2}
+static int onenand_ooblayout_32_64_ecc(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ if (section > 3)
+ return -ERANGE;
+
+ oobregion->offset = (section * 16) + 8;
+ oobregion->length = 5;
+
+ return 0;
+}
+
+static int onenand_ooblayout_32_64_free(struct mtd_info *mtd, int section,
+ struct mtd_oob_region *oobregion)
+{
+ int sections = (mtd->oobsize / 32) * 2;
+
+ if (section >= sections)
+ return -ERANGE;
+
+ if (section & 1) {
+ oobregion->offset = ((section - 1) * 16) + 14;
+ oobregion->length = 2;
+ } else {
+ oobregion->offset = (section * 16) + 2;
+ oobregion->length = 3;
}
-};
-/**
- * onenand_oob_32 - oob info for middle (1KB) page
- */
-static struct nand_ecclayout onenand_oob_32 = {
- .eccbytes = 10,
- .eccpos = {
- 8, 9, 10, 11, 12,
- 24, 25, 26, 27, 28,
- },
- .oobfree = { {2, 3}, {14, 2}, {18, 3}, {30, 2} }
+ return 0;
+}
+
+static const struct mtd_ooblayout_ops onenand_oob_32_64_ooblayout_ops = {
+ .ecc = onenand_ooblayout_32_64_ecc,
+ .free = onenand_ooblayout_32_64_free,
};
static const unsigned char ffchars[] = {
@@ -1024,34 +1057,15 @@ static int onenand_transfer_auto_oob(struct mtd_info *mtd, uint8_t *buf, int col
int thislen)
{
struct onenand_chip *this = mtd->priv;
- struct nand_oobfree *free;
- int readcol = column;
- int readend = column + thislen;
- int lastgap = 0;
- unsigned int i;
- uint8_t *oob_buf = this->oob_buf;
-
- free = this->ecclayout->oobfree;
- for (i = 0; i < MTD_MAX_OOBFREE_ENTRIES && free->length; i++, free++) {
- if (readcol >= lastgap)
- readcol += free->offset - lastgap;
- if (readend >= lastgap)
- readend += free->offset - lastgap;
- lastgap = free->offset + free->length;
- }
- this->read_bufferram(mtd, ONENAND_SPARERAM, oob_buf, 0, mtd->oobsize);
- free = this->ecclayout->oobfree;
- for (i = 0; i < MTD_MAX_OOBFREE_ENTRIES && free->length; i++, free++) {
- int free_end = free->offset + free->length;
- if (free->offset < readend && free_end > readcol) {
- int st = max_t(int,free->offset,readcol);
- int ed = min_t(int,free_end,readend);
- int n = ed - st;
- memcpy(buf, oob_buf + st, n);
- buf += n;
- } else if (column == 0)
- break;
- }
+ int ret;
+
+ this->read_bufferram(mtd, ONENAND_SPARERAM, this->oob_buf, 0,
+ mtd->oobsize);
+ ret = mtd_ooblayout_get_databytes(mtd, buf, this->oob_buf,
+ column, thislen);
+ if (ret)
+ return ret;
+
return 0;
}
@@ -1808,34 +1822,7 @@ static int onenand_panic_write(struct mtd_info *mtd, loff_t to, size_t len,
static int onenand_fill_auto_oob(struct mtd_info *mtd, u_char *oob_buf,
const u_char *buf, int column, int thislen)
{
- struct onenand_chip *this = mtd->priv;
- struct nand_oobfree *free;
- int writecol = column;
- int writeend = column + thislen;
- int lastgap = 0;
- unsigned int i;
-
- free = this->ecclayout->oobfree;
- for (i = 0; i < MTD_MAX_OOBFREE_ENTRIES && free->length; i++, free++) {
- if (writecol >= lastgap)
- writecol += free->offset - lastgap;
- if (writeend >= lastgap)
- writeend += free->offset - lastgap;
- lastgap = free->offset + free->length;
- }
- free = this->ecclayout->oobfree;
- for (i = 0; i < MTD_MAX_OOBFREE_ENTRIES && free->length; i++, free++) {
- int free_end = free->offset + free->length;
- if (free->offset < writeend && free_end > writecol) {
- int st = max_t(int,free->offset,writecol);
- int ed = min_t(int,free_end,writeend);
- int n = ed - st;
- memcpy(oob_buf + st, buf, n);
- buf += n;
- } else if (column == 0)
- break;
- }
- return 0;
+ return mtd_ooblayout_set_databytes(mtd, buf, oob_buf, column, thislen);
}
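As a quick illustration of the two generic helpers used in these hunks (sketch only, not part of the patch; the function name is invented): mtd_ooblayout_set_databytes() scatters client bytes into the free regions of a raw OOB buffer and mtd_ooblayout_get_databytes() gathers them back, so a round trip should reproduce the original data as long as the layout exposes enough free bytes.

static int example_oob_roundtrip(struct mtd_info *mtd, u8 *raw_oob)
{
	u8 in[4] = { 0xde, 0xad, 0xbe, 0xef };
	u8 out[4];
	int ret;

	/* scatter 4 bytes into the free OOB regions, starting at column 0 */
	ret = mtd_ooblayout_set_databytes(mtd, in, raw_oob, 0, sizeof(in));
	if (ret)
		return ret;

	/* gather them back from the same raw OOB buffer */
	ret = mtd_ooblayout_get_databytes(mtd, out, raw_oob, 0, sizeof(out));
	if (ret)
		return ret;

	return memcmp(in, out, sizeof(in)) ? -EINVAL : 0;
}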
/**
@@ -4003,22 +3990,22 @@ int onenand_scan(struct mtd_info *mtd, int maxchips)
switch (mtd->oobsize) {
case 128:
if (FLEXONENAND(this)) {
- this->ecclayout = &flexonenand_oob_128;
+ mtd_set_ooblayout(mtd, &flexonenand_ooblayout_ops);
mtd->subpage_sft = 0;
} else {
- this->ecclayout = &onenand_oob_128;
+ mtd_set_ooblayout(mtd, &onenand_oob_128_ooblayout_ops);
mtd->subpage_sft = 2;
}
if (ONENAND_IS_NOP_1(this))
mtd->subpage_sft = 0;
break;
case 64:
- this->ecclayout = &onenand_oob_64;
+ mtd_set_ooblayout(mtd, &onenand_oob_32_64_ooblayout_ops);
mtd->subpage_sft = 2;
break;
case 32:
- this->ecclayout = &onenand_oob_32;
+ mtd_set_ooblayout(mtd, &onenand_oob_32_64_ooblayout_ops);
mtd->subpage_sft = 1;
break;
@@ -4027,7 +4014,7 @@ int onenand_scan(struct mtd_info *mtd, int maxchips)
__func__, mtd->oobsize);
mtd->subpage_sft = 0;
/* To prevent kernel oops */
- this->ecclayout = &onenand_oob_32;
+ mtd_set_ooblayout(mtd, &onenand_oob_32_64_ooblayout_ops);
break;
}
@@ -4037,12 +4024,12 @@ int onenand_scan(struct mtd_info *mtd, int maxchips)
* The number of bytes available for a client to place data into
* the out of band area
*/
- mtd->oobavail = 0;
- for (i = 0; i < MTD_MAX_OOBFREE_ENTRIES &&
- this->ecclayout->oobfree[i].length; i++)
- mtd->oobavail += this->ecclayout->oobfree[i].length;
+ ret = mtd_ooblayout_count_freebytes(mtd);
+ if (ret < 0)
+ ret = 0;
+
+ mtd->oobavail = ret;
- mtd->ecclayout = this->ecclayout;
mtd->ecc_strength = 1;
/* Fill in remaining MTD driver data */
diff --git a/drivers/mtd/sm_ftl.c b/drivers/mtd/sm_ftl.c
index b096f8bb05ba..3692dd547879 100644
--- a/drivers/mtd/sm_ftl.c
+++ b/drivers/mtd/sm_ftl.c
@@ -386,7 +386,7 @@ restart:
if (test_bit(boffset / SM_SECTOR_SIZE, &invalid_bitmap)) {
sm_printk("sector %d of block at LBA %d of zone %d"
- " coudn't be read, marking it as invalid",
+ " couldn't be read, marking it as invalid",
boffset / SM_SECTOR_SIZE, lba, zone);
oob.data_status = 0;
diff --git a/drivers/mtd/spi-nor/spi-nor.c b/drivers/mtd/spi-nor/spi-nor.c
index 157841dc3e99..c52e45594bfd 100644
--- a/drivers/mtd/spi-nor/spi-nor.c
+++ b/drivers/mtd/spi-nor/spi-nor.c
@@ -832,6 +832,7 @@ static const struct flash_info spi_nor_ids[] = {
/* GigaDevice */
{ "gd25q32", INFO(0xc84016, 0, 64 * 1024, 64, SECT_4K) },
{ "gd25q64", INFO(0xc84017, 0, 64 * 1024, 128, SECT_4K) },
+ { "gd25lq64c", INFO(0xc86017, 0, 64 * 1024, 128, SECT_4K | SPI_NOR_DUAL_READ | SPI_NOR_QUAD_READ) },
{ "gd25q128", INFO(0xc84018, 0, 64 * 1024, 256, SECT_4K) },
/* Intel/Numonyx -- xxxs33b */
diff --git a/drivers/mtd/ubi/build.c b/drivers/mtd/ubi/build.c
index 22fd19c0c5d3..16baeb51b2bd 100644
--- a/drivers/mtd/ubi/build.c
+++ b/drivers/mtd/ubi/build.c
@@ -149,6 +149,8 @@ static struct device_attribute dev_bgt_enabled =
__ATTR(bgt_enabled, S_IRUGO, dev_attribute_show, NULL);
static struct device_attribute dev_mtd_num =
__ATTR(mtd_num, S_IRUGO, dev_attribute_show, NULL);
+static struct device_attribute dev_ro_mode =
+ __ATTR(ro_mode, S_IRUGO, dev_attribute_show, NULL);
/**
* ubi_volume_notify - send a volume change notification.
@@ -385,6 +387,8 @@ static ssize_t dev_attribute_show(struct device *dev,
ret = sprintf(buf, "%d\n", ubi->thread_enabled);
else if (attr == &dev_mtd_num)
ret = sprintf(buf, "%d\n", ubi->mtd->index);
+ else if (attr == &dev_ro_mode)
+ ret = sprintf(buf, "%d\n", ubi->ro_mode);
else
ret = -EINVAL;
@@ -404,6 +408,7 @@ static struct attribute *ubi_dev_attrs[] = {
&dev_min_io_size.attr,
&dev_bgt_enabled.attr,
&dev_mtd_num.attr,
+ &dev_ro_mode.attr,
NULL
};
ATTRIBUTE_GROUPS(ubi_dev);
@@ -1142,22 +1147,19 @@ int ubi_detach_mtd_dev(int ubi_num, int anyway)
*/
static struct mtd_info * __init open_mtd_by_chdev(const char *mtd_dev)
{
- int err, major, minor, mode;
- struct path path;
+ struct kstat stat;
+ int err, minor;
/* Probably this is an MTD character device node path */
- err = kern_path(mtd_dev, LOOKUP_FOLLOW, &path);
+ err = vfs_stat(mtd_dev, &stat);
if (err)
return ERR_PTR(err);
/* MTD device number is defined by the major / minor numbers */
- major = imajor(d_backing_inode(path.dentry));
- minor = iminor(d_backing_inode(path.dentry));
- mode = d_backing_inode(path.dentry)->i_mode;
- path_put(&path);
- if (major != MTD_CHAR_MAJOR || !S_ISCHR(mode))
+ if (MAJOR(stat.rdev) != MTD_CHAR_MAJOR || !S_ISCHR(stat.mode))
return ERR_PTR(-EINVAL);
+ minor = MINOR(stat.rdev);
if (minor & 1)
/*
* Just do not think the "/dev/mtdrX" devices support is need,
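A note on the numbers being decoded in this hunk (illustrative sketch, not part of the patch; the helper name is invented): MTD character devices conventionally use minor 2*N for /dev/mtdN and minor 2*N+1 for the read-only /dev/mtdNro node, which is why odd minors are rejected here and why the rest of the function (outside this hunk) presumably derives the MTD index as minor / 2.

static int example_mtd_index_from_rdev(dev_t rdev)
{
	/* /dev/mtdN <-> (MTD_CHAR_MAJOR, 2 * N); odd minors are mtdNro */
	if (MAJOR(rdev) != MTD_CHAR_MAJOR || (MINOR(rdev) & 1))
		return -EINVAL;

	return MINOR(rdev) / 2;
}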
diff --git a/drivers/mtd/ubi/debug.c b/drivers/mtd/ubi/debug.c
index c4cb15a3098c..f101a4985a7c 100644
--- a/drivers/mtd/ubi/debug.c
+++ b/drivers/mtd/ubi/debug.c
@@ -352,7 +352,8 @@ static ssize_t dfs_file_write(struct file *file, const char __user *user_buf,
} else if (dent == d->dfs_emulate_power_cut) {
if (kstrtoint(buf, 0, &val) != 0)
count = -EINVAL;
- d->emulate_power_cut = val;
+ else
+ d->emulate_power_cut = val;
goto out;
}
diff --git a/drivers/mtd/ubi/eba.c b/drivers/mtd/ubi/eba.c
index 5b9834cf2820..5780dd1ba79d 100644
--- a/drivers/mtd/ubi/eba.c
+++ b/drivers/mtd/ubi/eba.c
@@ -426,8 +426,25 @@ retry:
pnum, vol_id, lnum);
err = -EBADMSG;
} else {
- err = -EINVAL;
- ubi_ro_mode(ubi);
+ /*
+ * Ending up here in the non-Fastmap case
+ * is a clear bug as the VID header had to
+ * be present at scan time to have it referenced.
+ * With fastmap the story is more complicated.
+ * Fastmap has the mapping info without the need
+ * of a full scan. So the LEB could have been
+ * unmapped, Fastmap cannot know this and keeps
+ * the LEB referenced.
+ * This is valid and works as the layer above UBI
+ * has to do bookkeeping about used/referenced
+ * LEBs in any case.
+ */
+ if (ubi->fast_attach) {
+ err = -EBADMSG;
+ } else {
+ err = -EINVAL;
+ ubi_ro_mode(ubi);
+ }
}
}
goto out_free;
@@ -1202,32 +1219,6 @@ int ubi_eba_copy_leb(struct ubi_device *ubi, int from, int to,
}
cond_resched();
-
- /*
- * We've written the data and are going to read it back to make
- * sure it was written correctly.
- */
- memset(ubi->peb_buf, 0xFF, aldata_size);
- err = ubi_io_read_data(ubi, ubi->peb_buf, to, 0, aldata_size);
- if (err) {
- if (err != UBI_IO_BITFLIPS) {
- ubi_warn(ubi, "error %d while reading data back from PEB %d",
- err, to);
- if (is_error_sane(err))
- err = MOVE_TARGET_RD_ERR;
- } else
- err = MOVE_TARGET_BITFLIPS;
- goto out_unlock_buf;
- }
-
- cond_resched();
-
- if (crc != crc32(UBI_CRC32_INIT, ubi->peb_buf, data_size)) {
- ubi_warn(ubi, "read data back from PEB %d and it is different",
- to);
- err = -EINVAL;
- goto out_unlock_buf;
- }
}
ubi_assert(vol->eba_tbl[lnum] == from);
diff --git a/drivers/mtd/ubi/fastmap.c b/drivers/mtd/ubi/fastmap.c
index 263b439e21a8..990898b9dc72 100644
--- a/drivers/mtd/ubi/fastmap.c
+++ b/drivers/mtd/ubi/fastmap.c
@@ -1058,6 +1058,7 @@ int ubi_scan_fastmap(struct ubi_device *ubi, struct ubi_attach_info *ai,
ubi_msg(ubi, "fastmap WL pool size: %d",
ubi->fm_wl_pool.max_size);
ubi->fm_disabled = 0;
+ ubi->fast_attach = 1;
ubi_free_vid_hdr(ubi, vh);
kfree(ech);
diff --git a/drivers/mtd/ubi/kapi.c b/drivers/mtd/ubi/kapi.c
index e844887732fb..348dbbcbedc8 100644
--- a/drivers/mtd/ubi/kapi.c
+++ b/drivers/mtd/ubi/kapi.c
@@ -301,27 +301,24 @@ EXPORT_SYMBOL_GPL(ubi_open_volume_nm);
*/
struct ubi_volume_desc *ubi_open_volume_path(const char *pathname, int mode)
{
- int error, ubi_num, vol_id, mod;
- struct inode *inode;
- struct path path;
+ int error, ubi_num, vol_id;
+ struct kstat stat;
dbg_gen("open volume %s, mode %d", pathname, mode);
if (!pathname || !*pathname)
return ERR_PTR(-EINVAL);
- error = kern_path(pathname, LOOKUP_FOLLOW, &path);
+ error = vfs_stat(pathname, &stat);
if (error)
return ERR_PTR(error);
- inode = d_backing_inode(path.dentry);
- mod = inode->i_mode;
- ubi_num = ubi_major2num(imajor(inode));
- vol_id = iminor(inode) - 1;
- path_put(&path);
-
- if (!S_ISCHR(mod))
+ if (!S_ISCHR(stat.mode))
return ERR_PTR(-EINVAL);
+
+ ubi_num = ubi_major2num(MAJOR(stat.rdev));
+ vol_id = MINOR(stat.rdev) - 1;
+
if (vol_id >= 0 && ubi_num >= 0)
return ubi_open_volume(ubi_num, vol_id, mode);
return ERR_PTR(-ENODEV);
@@ -708,7 +705,7 @@ int ubi_leb_map(struct ubi_volume_desc *desc, int lnum)
struct ubi_volume *vol = desc->vol;
struct ubi_device *ubi = vol->ubi;
- dbg_gen("unmap LEB %d:%d", vol->vol_id, lnum);
+ dbg_gen("map LEB %d:%d", vol->vol_id, lnum);
if (desc->mode == UBI_READONLY || vol->vol_type == UBI_STATIC_VOLUME)
return -EROFS;
diff --git a/drivers/mtd/ubi/ubi.h b/drivers/mtd/ubi/ubi.h
index dadc6a9d5755..61d4e99755a4 100644
--- a/drivers/mtd/ubi/ubi.h
+++ b/drivers/mtd/ubi/ubi.h
@@ -466,6 +466,7 @@ struct ubi_debug_info {
* @fm_eba_sem: allows ubi_update_fastmap() to block EBA table changes
* @fm_work: fastmap work queue
* @fm_work_scheduled: non-zero if fastmap work was scheduled
+ * @fast_attach: non-zero if UBI was attached by fastmap
*
* @used: RB-tree of used physical eraseblocks
* @erroneous: RB-tree of erroneous used physical eraseblocks
@@ -574,6 +575,7 @@ struct ubi_device {
size_t fm_size;
struct work_struct fm_work;
int fm_work_scheduled;
+ int fast_attach;
/* Wear-leveling sub-system's stuff */
struct rb_root used;
diff --git a/drivers/mtd/ubi/vmt.c b/drivers/mtd/ubi/vmt.c
index 1ae17bb9b889..10059dfdc1b6 100644
--- a/drivers/mtd/ubi/vmt.c
+++ b/drivers/mtd/ubi/vmt.c
@@ -405,7 +405,7 @@ int ubi_remove_volume(struct ubi_volume_desc *desc, int no_vtbl)
if (!no_vtbl)
self_check_volumes(ubi);
- return err;
+ return 0;
out_err:
ubi_err(ubi, "cannot remove volume %d, error %d", vol_id, err);
diff --git a/drivers/mtd/ubi/wl.c b/drivers/mtd/ubi/wl.c
index 17ec948ac40e..959c7b12e0b1 100644
--- a/drivers/mtd/ubi/wl.c
+++ b/drivers/mtd/ubi/wl.c
@@ -1534,6 +1534,7 @@ int ubi_wl_init(struct ubi_device *ubi, struct ubi_attach_info *ai)
INIT_LIST_HEAD(&ubi->pq[i]);
ubi->pq_head = 0;
+ ubi->free_count = 0;
list_for_each_entry_safe(aeb, tmp, &ai->erase, u.list) {
cond_resched();
@@ -1552,7 +1553,6 @@ int ubi_wl_init(struct ubi_device *ubi, struct ubi_attach_info *ai)
found_pebs++;
}
- ubi->free_count = 0;
list_for_each_entry(aeb, &ai->free, u.list) {
cond_resched();
diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index befd67df08e1..0c5415b05ea9 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -192,6 +192,23 @@ config GENEVE
To compile this driver as a module, choose M here: the module
will be called geneve.
+config GTP
+ tristate "GPRS Tunneling Protocol datapath (GTP-U)"
+ depends on INET && NET_UDP_TUNNEL
+ select NET_IP_TUNNEL
+ ---help---
+ This allows one to create gtp virtual interfaces that provide
+ the GPRS Tunneling Protocol datapath (GTP-U). This tunneling protocol
+ is used to prevent subscribers from accessing mobile carrier core
+ network infrastructure. This driver requires userspace software that
+ implements the signaling protocol (GTP-C) to update its PDP context
+ base, such as OpenGGSN <http://git.osmocom.org/openggsn/>. This
+ tunneling protocol is implemented according to the GSM TS 09.60 and
+ 3GPP TS 29.060 standards.
+
+ To compile this driver as a module, choose M here: the module
+ will be called gtp.
+
config MACSEC
tristate "IEEE 802.1AE MAC-level encryption (MACsec)"
select CRYPTO
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 1aa7cb845663..7336cbd3ef5d 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -25,6 +25,7 @@ obj-$(CONFIG_VETH) += veth.o
obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
obj-$(CONFIG_VXLAN) += vxlan.o
obj-$(CONFIG_GENEVE) += geneve.o
+obj-$(CONFIG_GTP) += gtp.o
obj-$(CONFIG_NLMON) += nlmon.o
obj-$(CONFIG_NET_VRF) += vrf.o
diff --git a/drivers/net/Space.c b/drivers/net/Space.c
index 67977f15af25..11fe71278f40 100644
--- a/drivers/net/Space.c
+++ b/drivers/net/Space.c
@@ -35,8 +35,8 @@
#include <net/Space.h>
/* A unified ethernet device probe. This is the easiest way to have every
- ethernet adaptor have the name "eth[0123...]".
- */
+ * ethernet adaptor have the name "eth[0123...]".
+ */
struct devprobe2 {
struct net_device *(*probe)(int unit);
@@ -46,6 +46,7 @@ struct devprobe2 {
static int __init probe_list2(int unit, struct devprobe2 *p, int autoprobe)
{
struct net_device *dev;
+
for (; p->probe; p++) {
if (autoprobe && p->status)
continue;
@@ -58,8 +59,7 @@ static int __init probe_list2(int unit, struct devprobe2 *p, int autoprobe)
return -ENODEV;
}
-/*
- * ISA probes that touch addresses < 0x400 (including those that also
+/* ISA probes that touch addresses < 0x400 (including those that also
* look for EISA/PCI cards in addition to ISA cards).
*/
static struct devprobe2 isa_probes[] __initdata = {
@@ -86,11 +86,11 @@ static struct devprobe2 isa_probes[] __initdata = {
#endif
#ifdef CONFIG_CS89x0
#ifndef CONFIG_CS89x0_PLATFORM
- {cs89x0_probe, 0},
+ {cs89x0_probe, 0},
#endif
#endif
-#if defined(CONFIG_MVME16x_NET) || defined(CONFIG_BVME6000_NET) /* Intel I82596 */
- {i82596_probe, 0},
+#if defined(CONFIG_MVME16x_NET) || defined(CONFIG_BVME6000_NET) /* Intel */
+ {i82596_probe, 0}, /* I82596 */
#endif
#ifdef CONFIG_NI65
{ni65_probe, 0},
@@ -118,13 +118,12 @@ static struct devprobe2 m68k_probes[] __initdata = {
{mac8390_probe, 0},
#endif
#ifdef CONFIG_MAC89x0
- {mac89x0_probe, 0},
+ {mac89x0_probe, 0},
#endif
{NULL, 0},
};
-/*
- * Unified ethernet device probe, segmented per architecture and
+/* Unified ethernet device probe, segmented per architecture and
* per bus interface. This drives the legacy devices only for now.
*/
@@ -135,7 +134,7 @@ static void __init ethif_probe2(int unit)
if (base_addr == 1)
return;
- (void)( probe_list2(unit, m68k_probes, base_addr == 0) &&
+ (void)(probe_list2(unit, m68k_probes, base_addr == 0) &&
probe_list2(unit, isa_probes, base_addr == 0));
}
diff --git a/drivers/net/appletalk/cops.c b/drivers/net/appletalk/cops.c
index 7f2a032c354c..1b2e9217ec78 100644
--- a/drivers/net/appletalk/cops.c
+++ b/drivers/net/appletalk/cops.c
@@ -861,7 +861,7 @@ static void cops_timeout(struct net_device *dev)
}
printk(KERN_WARNING "%s: Transmit timed out.\n", dev->name);
cops_jumpstart(dev); /* Restart the card. */
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue(dev);
}
diff --git a/drivers/net/arcnet/com90xx.c b/drivers/net/arcnet/com90xx.c
index 0d9b45ff1bb2..81f90c4703ae 100644
--- a/drivers/net/arcnet/com90xx.c
+++ b/drivers/net/arcnet/com90xx.c
@@ -433,7 +433,7 @@ static void __init com90xx_probe(void)
kfree(iomem);
}
-static int check_mirror(unsigned long addr, size_t size)
+static int __init check_mirror(unsigned long addr, size_t size)
{
void __iomem *p;
int res = -1;
diff --git a/drivers/net/can/dev.c b/drivers/net/can/dev.c
index 141c2a42d7ed..910c12e2638e 100644
--- a/drivers/net/can/dev.c
+++ b/drivers/net/can/dev.c
@@ -696,11 +696,17 @@ int can_change_mtu(struct net_device *dev, int new_mtu)
/* allow change of MTU according to the CANFD ability of the device */
switch (new_mtu) {
case CAN_MTU:
+ /* 'CANFD-only' controllers can not switch to CAN_MTU */
+ if (priv->ctrlmode_static & CAN_CTRLMODE_FD)
+ return -EINVAL;
+
priv->ctrlmode &= ~CAN_CTRLMODE_FD;
break;
case CANFD_MTU:
- if (!(priv->ctrlmode_supported & CAN_CTRLMODE_FD))
+ /* check for potential CANFD ability */
+ if (!(priv->ctrlmode_supported & CAN_CTRLMODE_FD) &&
+ !(priv->ctrlmode_static & CAN_CTRLMODE_FD))
return -EINVAL;
priv->ctrlmode |= CAN_CTRLMODE_FD;
@@ -782,6 +788,35 @@ static const struct nla_policy can_policy[IFLA_CAN_MAX + 1] = {
= { .len = sizeof(struct can_bittiming_const) },
};
+static int can_validate(struct nlattr *tb[], struct nlattr *data[])
+{
+ bool is_can_fd = false;
+
+ /* Make sure that valid CAN FD configurations always consist of
+ * - nominal/arbitration bittiming
+ * - data bittiming
+ * - control mode with CAN_CTRLMODE_FD set
+ */
+
+ if (data[IFLA_CAN_CTRLMODE]) {
+ struct can_ctrlmode *cm = nla_data(data[IFLA_CAN_CTRLMODE]);
+
+ is_can_fd = cm->flags & cm->mask & CAN_CTRLMODE_FD;
+ }
+
+ if (is_can_fd) {
+ if (!data[IFLA_CAN_BITTIMING] || !data[IFLA_CAN_DATA_BITTIMING])
+ return -EOPNOTSUPP;
+ }
+
+ if (data[IFLA_CAN_DATA_BITTIMING]) {
+ if (!is_can_fd || !data[IFLA_CAN_BITTIMING])
+ return -EOPNOTSUPP;
+ }
+
+ return 0;
+}
+
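For context on how the new ctrlmode_static checks are meant to be used (sketch only; the function below is invented, but can_set_static_ctrlmode() is the helper the m_can conversion later in this patch relies on): a driver whose hardware pins a mode bit declares it as static instead of listing it in ctrlmode, and the can_changelink() changes just below then require userspace to pass that bit explicitly.

static void example_declare_static_fd_non_iso(struct net_device *dev,
					      struct can_priv *priv)
{
	/* the controller always runs CAN FD in non-ISO mode */
	can_set_static_ctrlmode(dev, CAN_CTRLMODE_FD_NON_ISO);

	/* bits userspace may still toggle */
	priv->ctrlmode_supported = CAN_CTRLMODE_LOOPBACK |
				   CAN_CTRLMODE_LISTENONLY |
				   CAN_CTRLMODE_FD;
}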
static int can_changelink(struct net_device *dev,
struct nlattr *tb[], struct nlattr *data[])
{
@@ -813,19 +848,31 @@ static int can_changelink(struct net_device *dev,
if (data[IFLA_CAN_CTRLMODE]) {
struct can_ctrlmode *cm;
+ u32 ctrlstatic;
+ u32 maskedflags;
/* Do not allow changing controller mode while running */
if (dev->flags & IFF_UP)
return -EBUSY;
cm = nla_data(data[IFLA_CAN_CTRLMODE]);
+ ctrlstatic = priv->ctrlmode_static;
+ maskedflags = cm->flags & cm->mask;
+
+ /* check whether provided bits are allowed to be passed */
+ if (cm->mask & ~(priv->ctrlmode_supported | ctrlstatic))
+ return -EOPNOTSUPP;
+
+ /* do not check for static fd-non-iso if 'fd' is disabled */
+ if (!(maskedflags & CAN_CTRLMODE_FD))
+ ctrlstatic &= ~CAN_CTRLMODE_FD_NON_ISO;
- /* check whether changed bits are allowed to be modified */
- if (cm->mask & ~priv->ctrlmode_supported)
+ /* make sure static options are provided by configuration */
+ if ((maskedflags & ctrlstatic) != ctrlstatic)
return -EOPNOTSUPP;
/* clear bits to be modified and copy the flag values */
priv->ctrlmode &= ~cm->mask;
- priv->ctrlmode |= (cm->flags & cm->mask);
+ priv->ctrlmode |= maskedflags;
/* CAN_CTRLMODE_FD can only be set when driver supports FD */
if (priv->ctrlmode & CAN_CTRLMODE_FD)
@@ -966,6 +1013,7 @@ static struct rtnl_link_ops can_link_ops __read_mostly = {
.maxtype = IFLA_CAN_MAX,
.policy = can_policy,
.setup = can_setup,
+ .validate = can_validate,
.newlink = can_newlink,
.changelink = can_changelink,
.get_size = can_get_size,
diff --git a/drivers/net/can/ifi_canfd/ifi_canfd.c b/drivers/net/can/ifi_canfd/ifi_canfd.c
index a1bd54ffd31e..2d1d22eec750 100644
--- a/drivers/net/can/ifi_canfd/ifi_canfd.c
+++ b/drivers/net/can/ifi_canfd/ifi_canfd.c
@@ -34,6 +34,7 @@
#define IFI_CANFD_STCMD_LOOPBACK BIT(18)
#define IFI_CANFD_STCMD_DISABLE_CANFD BIT(24)
#define IFI_CANFD_STCMD_ENABLE_ISO BIT(25)
+#define IFI_CANFD_STCMD_ENABLE_7_9_8_8_TIMING BIT(26)
#define IFI_CANFD_STCMD_NORMAL_MODE ((u32)BIT(31))
#define IFI_CANFD_RXSTCMD 0x4
@@ -51,7 +52,8 @@
#define IFI_CANFD_TXSTCMD_OVERFLOW BIT(13)
#define IFI_CANFD_INTERRUPT 0xc
-#define IFI_CANFD_INTERRUPT_ERROR_WARNING ((u32)BIT(1))
+#define IFI_CANFD_INTERRUPT_ERROR_WARNING BIT(1)
+#define IFI_CANFD_INTERRUPT_ERROR_COUNTER BIT(10)
#define IFI_CANFD_INTERRUPT_TXFIFO_EMPTY BIT(16)
#define IFI_CANFD_INTERRUPT_TXFIFO_REMOVE BIT(22)
#define IFI_CANFD_INTERRUPT_RXFIFO_NEMPTY BIT(24)
@@ -71,12 +73,12 @@
#define IFI_CANFD_TIME_TIMEB_OFF 0
#define IFI_CANFD_TIME_TIMEA_OFF 8
#define IFI_CANFD_TIME_PRESCALE_OFF 16
-#define IFI_CANFD_TIME_SJW_OFF_ISO 25
-#define IFI_CANFD_TIME_SJW_OFF_BOSCH 28
-#define IFI_CANFD_TIME_SET_SJW_BOSCH BIT(6)
-#define IFI_CANFD_TIME_SET_TIMEB_BOSCH BIT(7)
-#define IFI_CANFD_TIME_SET_PRESC_BOSCH BIT(14)
-#define IFI_CANFD_TIME_SET_TIMEA_BOSCH BIT(15)
+#define IFI_CANFD_TIME_SJW_OFF_7_9_8_8 25
+#define IFI_CANFD_TIME_SJW_OFF_4_12_6_6 28
+#define IFI_CANFD_TIME_SET_SJW_4_12_6_6 BIT(6)
+#define IFI_CANFD_TIME_SET_TIMEB_4_12_6_6 BIT(7)
+#define IFI_CANFD_TIME_SET_PRESC_4_12_6_6 BIT(14)
+#define IFI_CANFD_TIME_SET_TIMEA_4_12_6_6 BIT(15)
#define IFI_CANFD_TDELAY 0x1c
@@ -102,7 +104,26 @@
#define IFI_CANFD_RES1 0x40
-#define IFI_CANFD_RES2 0x44
+#define IFI_CANFD_ERROR_CTR 0x44
+#define IFI_CANFD_ERROR_CTR_UNLOCK_MAGIC 0x21302899
+#define IFI_CANFD_ERROR_CTR_OVERLOAD_FIRST BIT(0)
+#define IFI_CANFD_ERROR_CTR_ACK_ERROR_FIRST BIT(1)
+#define IFI_CANFD_ERROR_CTR_BIT0_ERROR_FIRST BIT(2)
+#define IFI_CANFD_ERROR_CTR_BIT1_ERROR_FIRST BIT(3)
+#define IFI_CANFD_ERROR_CTR_STUFF_ERROR_FIRST BIT(4)
+#define IFI_CANFD_ERROR_CTR_CRC_ERROR_FIRST BIT(5)
+#define IFI_CANFD_ERROR_CTR_FORM_ERROR_FIRST BIT(6)
+#define IFI_CANFD_ERROR_CTR_OVERLOAD_ALL BIT(8)
+#define IFI_CANFD_ERROR_CTR_ACK_ERROR_ALL BIT(9)
+#define IFI_CANFD_ERROR_CTR_BIT0_ERROR_ALL BIT(10)
+#define IFI_CANFD_ERROR_CTR_BIT1_ERROR_ALL BIT(11)
+#define IFI_CANFD_ERROR_CTR_STUFF_ERROR_ALL BIT(12)
+#define IFI_CANFD_ERROR_CTR_CRC_ERROR_ALL BIT(13)
+#define IFI_CANFD_ERROR_CTR_FORM_ERROR_ALL BIT(14)
+#define IFI_CANFD_ERROR_CTR_BITPOSITION_OFFSET 16
+#define IFI_CANFD_ERROR_CTR_BITPOSITION_MASK 0xff
+#define IFI_CANFD_ERROR_CTR_ER_RESET BIT(30)
+#define IFI_CANFD_ERROR_CTR_ER_ENABLE ((u32)BIT(31))
#define IFI_CANFD_PAR 0x48
@@ -196,6 +217,8 @@ static void ifi_canfd_irq_enable(struct net_device *ndev, bool enable)
if (enable) {
enirq = IFI_CANFD_IRQMASK_TXFIFO_EMPTY |
IFI_CANFD_IRQMASK_RXFIFO_NEMPTY;
+ if (priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING)
+ enirq |= IFI_CANFD_INTERRUPT_ERROR_COUNTER;
}
writel(IFI_CANFD_IRQMASK_SET_ERR |
@@ -334,6 +357,68 @@ static int ifi_canfd_handle_lost_msg(struct net_device *ndev)
return 1;
}
+static int ifi_canfd_handle_lec_err(struct net_device *ndev, const u32 errctr)
+{
+ struct ifi_canfd_priv *priv = netdev_priv(ndev);
+ struct net_device_stats *stats = &ndev->stats;
+ struct can_frame *cf;
+ struct sk_buff *skb;
+ const u32 errmask = IFI_CANFD_ERROR_CTR_OVERLOAD_FIRST |
+ IFI_CANFD_ERROR_CTR_ACK_ERROR_FIRST |
+ IFI_CANFD_ERROR_CTR_BIT0_ERROR_FIRST |
+ IFI_CANFD_ERROR_CTR_BIT1_ERROR_FIRST |
+ IFI_CANFD_ERROR_CTR_STUFF_ERROR_FIRST |
+ IFI_CANFD_ERROR_CTR_CRC_ERROR_FIRST |
+ IFI_CANFD_ERROR_CTR_FORM_ERROR_FIRST;
+
+ if (!(errctr & errmask)) /* No error happened. */
+ return 0;
+
+ priv->can.can_stats.bus_error++;
+ stats->rx_errors++;
+
+ /* Propagate the error condition to the CAN stack. */
+ skb = alloc_can_err_skb(ndev, &cf);
+ if (unlikely(!skb))
+ return 0;
+
+ /* Read the error counter register and check for new errors. */
+ cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+
+ if (errctr & IFI_CANFD_ERROR_CTR_OVERLOAD_FIRST)
+ cf->data[2] |= CAN_ERR_PROT_OVERLOAD;
+
+ if (errctr & IFI_CANFD_ERROR_CTR_ACK_ERROR_FIRST)
+ cf->data[3] = CAN_ERR_PROT_LOC_ACK;
+
+ if (errctr & IFI_CANFD_ERROR_CTR_BIT0_ERROR_FIRST)
+ cf->data[2] |= CAN_ERR_PROT_BIT0;
+
+ if (errctr & IFI_CANFD_ERROR_CTR_BIT1_ERROR_FIRST)
+ cf->data[2] |= CAN_ERR_PROT_BIT1;
+
+ if (errctr & IFI_CANFD_ERROR_CTR_STUFF_ERROR_FIRST)
+ cf->data[2] |= CAN_ERR_PROT_STUFF;
+
+ if (errctr & IFI_CANFD_ERROR_CTR_CRC_ERROR_FIRST)
+ cf->data[3] = CAN_ERR_PROT_LOC_CRC_SEQ;
+
+ if (errctr & IFI_CANFD_ERROR_CTR_FORM_ERROR_FIRST)
+ cf->data[2] |= CAN_ERR_PROT_FORM;
+
+ /* Reset the error counter, ack the IRQ and re-enable the counter. */
+ writel(IFI_CANFD_ERROR_CTR_ER_RESET, priv->base + IFI_CANFD_ERROR_CTR);
+ writel(IFI_CANFD_INTERRUPT_ERROR_COUNTER,
+ priv->base + IFI_CANFD_INTERRUPT);
+ writel(IFI_CANFD_ERROR_CTR_ER_ENABLE, priv->base + IFI_CANFD_ERROR_CTR);
+
+ stats->rx_packets++;
+ stats->rx_bytes += cf->can_dlc;
+ netif_receive_skb(skb);
+
+ return 1;
+}
+
static int ifi_canfd_get_berr_counter(const struct net_device *ndev,
struct can_berr_counter *bec)
{
@@ -469,6 +554,7 @@ static int ifi_canfd_poll(struct napi_struct *napi, int quota)
u32 stcmd = readl(priv->base + IFI_CANFD_STCMD);
u32 rxstcmd = readl(priv->base + IFI_CANFD_STCMD);
+ u32 errctr = readl(priv->base + IFI_CANFD_ERROR_CTR);
/* Handle bus state changes */
if ((stcmd & stcmd_state_mask) ||
@@ -479,6 +565,10 @@ static int ifi_canfd_poll(struct napi_struct *napi, int quota)
if (rxstcmd & IFI_CANFD_RXSTCMD_OVERFLOW)
work_done += ifi_canfd_handle_lost_msg(ndev);
+ /* Handle lec errors on the bus */
+ if (priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING)
+ work_done += ifi_canfd_handle_lec_err(ndev, errctr);
+
/* Handle normal messages on RX */
if (!(rxstcmd & IFI_CANFD_RXSTCMD_EMPTY))
work_done += ifi_canfd_do_rx_poll(ndev, quota - work_done);
@@ -497,11 +587,13 @@ static irqreturn_t ifi_canfd_isr(int irq, void *dev_id)
struct ifi_canfd_priv *priv = netdev_priv(ndev);
struct net_device_stats *stats = &ndev->stats;
const u32 rx_irq_mask = IFI_CANFD_INTERRUPT_RXFIFO_NEMPTY |
- IFI_CANFD_INTERRUPT_RXFIFO_NEMPTY_PER;
+ IFI_CANFD_INTERRUPT_RXFIFO_NEMPTY_PER |
+ IFI_CANFD_INTERRUPT_ERROR_WARNING |
+ IFI_CANFD_INTERRUPT_ERROR_COUNTER;
const u32 tx_irq_mask = IFI_CANFD_INTERRUPT_TXFIFO_EMPTY |
IFI_CANFD_INTERRUPT_TXFIFO_REMOVE;
- const u32 clr_irq_mask = ~(IFI_CANFD_INTERRUPT_SET_IRQ |
- IFI_CANFD_INTERRUPT_ERROR_WARNING);
+ const u32 clr_irq_mask = ~((u32)(IFI_CANFD_INTERRUPT_SET_IRQ |
+ IFI_CANFD_INTERRUPT_ERROR_WARNING));
u32 isr;
isr = readl(priv->base + IFI_CANFD_INTERRUPT);
@@ -513,44 +605,34 @@ static irqreturn_t ifi_canfd_isr(int irq, void *dev_id)
/* Clear all pending interrupts but ErrWarn */
writel(clr_irq_mask, priv->base + IFI_CANFD_INTERRUPT);
- /* RX IRQ, start NAPI */
+ /* RX IRQ or bus warning, start NAPI */
if (isr & rx_irq_mask) {
ifi_canfd_irq_enable(ndev, 0);
napi_schedule(&priv->napi);
}
/* TX IRQ */
- if (isr & tx_irq_mask) {
+ if (isr & IFI_CANFD_INTERRUPT_TXFIFO_REMOVE) {
stats->tx_bytes += can_get_echo_skb(ndev, 0);
stats->tx_packets++;
can_led_event(ndev, CAN_LED_EVENT_TX);
- netif_wake_queue(ndev);
}
+ if (isr & tx_irq_mask)
+ netif_wake_queue(ndev);
+
return IRQ_HANDLED;
}
static const struct can_bittiming_const ifi_canfd_bittiming_const = {
.name = KBUILD_MODNAME,
.tseg1_min = 1, /* Time segment 1 = prop_seg + phase_seg1 */
- .tseg1_max = 64,
- .tseg2_min = 2, /* Time segment 2 = phase_seg2 */
- .tseg2_max = 64,
- .sjw_max = 16,
- .brp_min = 2,
- .brp_max = 256,
- .brp_inc = 1,
-};
-
-static const struct can_bittiming_const ifi_canfd_data_bittiming_const = {
- .name = KBUILD_MODNAME,
- .tseg1_min = 1, /* Time segment 1 = prop_seg + phase_seg1 */
- .tseg1_max = 64,
+ .tseg1_max = 256,
.tseg2_min = 2, /* Time segment 2 = phase_seg2 */
- .tseg2_max = 64,
- .sjw_max = 16,
+ .tseg2_max = 256,
+ .sjw_max = 128,
.brp_min = 2,
- .brp_max = 256,
+ .brp_max = 512,
.brp_inc = 1,
};
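The "7_9_8_8" naming is never spelled out in the patch; the following note is an inference from the *_OFF definitions earlier in this file and the widened limits in this merged constant, not a statement from the driver authors.

/*
 * Inferred field packing for the 7_9_8_8 timing mode:
 *
 *   bits 31-25  SJW        (7 bits, sjw_max 128)
 *   bits 24-16  prescaler  (9 bits, brp_max 512)
 *   bits 15-8   TIMEA      (8 bits, tseg1_max 256)
 *   bits  7-0   TIMEB      (8 bits, tseg2_max 256)
 *
 * which also explains why a single bittiming_const can now serve both
 * the nominal and the data phase.
 */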
@@ -560,19 +642,6 @@ static void ifi_canfd_set_bittiming(struct net_device *ndev)
const struct can_bittiming *bt = &priv->can.bittiming;
const struct can_bittiming *dbt = &priv->can.data_bittiming;
u16 brp, sjw, tseg1, tseg2;
- u32 noniso_arg = 0;
- u32 time_off;
-
- if ((priv->can.ctrlmode & CAN_CTRLMODE_FD) &&
- !(priv->can.ctrlmode & CAN_CTRLMODE_FD_NON_ISO)) {
- time_off = IFI_CANFD_TIME_SJW_OFF_ISO;
- } else {
- noniso_arg = IFI_CANFD_TIME_SET_TIMEB_BOSCH |
- IFI_CANFD_TIME_SET_TIMEA_BOSCH |
- IFI_CANFD_TIME_SET_PRESC_BOSCH |
- IFI_CANFD_TIME_SET_SJW_BOSCH;
- time_off = IFI_CANFD_TIME_SJW_OFF_BOSCH;
- }
/* Configure bit timing */
brp = bt->brp - 2;
@@ -582,8 +651,7 @@ static void ifi_canfd_set_bittiming(struct net_device *ndev)
writel((tseg2 << IFI_CANFD_TIME_TIMEB_OFF) |
(tseg1 << IFI_CANFD_TIME_TIMEA_OFF) |
(brp << IFI_CANFD_TIME_PRESCALE_OFF) |
- (sjw << time_off) |
- noniso_arg,
+ (sjw << IFI_CANFD_TIME_SJW_OFF_7_9_8_8),
priv->base + IFI_CANFD_TIME);
/* Configure data bit timing */
@@ -594,8 +662,7 @@ static void ifi_canfd_set_bittiming(struct net_device *ndev)
writel((tseg2 << IFI_CANFD_TIME_TIMEB_OFF) |
(tseg1 << IFI_CANFD_TIME_TIMEA_OFF) |
(brp << IFI_CANFD_TIME_PRESCALE_OFF) |
- (sjw << time_off) |
- noniso_arg,
+ (sjw << IFI_CANFD_TIME_SJW_OFF_7_9_8_8),
priv->base + IFI_CANFD_FTIME);
}
@@ -640,7 +707,8 @@ static void ifi_canfd_start(struct net_device *ndev)
/* Reset the IP */
writel(IFI_CANFD_STCMD_HARDRESET, priv->base + IFI_CANFD_STCMD);
- writel(0, priv->base + IFI_CANFD_STCMD);
+ writel(IFI_CANFD_STCMD_ENABLE_7_9_8_8_TIMING,
+ priv->base + IFI_CANFD_STCMD);
ifi_canfd_set_bittiming(ndev);
ifi_canfd_set_filters(ndev);
@@ -659,7 +727,8 @@ static void ifi_canfd_start(struct net_device *ndev)
writel((u32)(~IFI_CANFD_INTERRUPT_SET_IRQ),
priv->base + IFI_CANFD_INTERRUPT);
- stcmd = IFI_CANFD_STCMD_ENABLE | IFI_CANFD_STCMD_NORMAL_MODE;
+ stcmd = IFI_CANFD_STCMD_ENABLE | IFI_CANFD_STCMD_NORMAL_MODE |
+ IFI_CANFD_STCMD_ENABLE_7_9_8_8_TIMING;
if (priv->can.ctrlmode & CAN_CTRLMODE_LISTENONLY)
stcmd |= IFI_CANFD_STCMD_BUSMONITOR;
@@ -667,16 +736,23 @@ static void ifi_canfd_start(struct net_device *ndev)
if (priv->can.ctrlmode & CAN_CTRLMODE_LOOPBACK)
stcmd |= IFI_CANFD_STCMD_LOOPBACK;
- if (priv->can.ctrlmode & CAN_CTRLMODE_FD)
+ if ((priv->can.ctrlmode & CAN_CTRLMODE_FD) &&
+ !(priv->can.ctrlmode & CAN_CTRLMODE_FD_NON_ISO))
stcmd |= IFI_CANFD_STCMD_ENABLE_ISO;
- if (!(priv->can.ctrlmode & (CAN_CTRLMODE_FD | CAN_CTRLMODE_FD_NON_ISO)))
+ if (!(priv->can.ctrlmode & CAN_CTRLMODE_FD))
stcmd |= IFI_CANFD_STCMD_DISABLE_CANFD;
priv->can.state = CAN_STATE_ERROR_ACTIVE;
ifi_canfd_irq_enable(ndev, 1);
+ /* Unlock, reset and enable the error counter. */
+ writel(IFI_CANFD_ERROR_CTR_UNLOCK_MAGIC,
+ priv->base + IFI_CANFD_ERROR_CTR);
+ writel(IFI_CANFD_ERROR_CTR_ER_RESET, priv->base + IFI_CANFD_ERROR_CTR);
+ writel(IFI_CANFD_ERROR_CTR_ER_ENABLE, priv->base + IFI_CANFD_ERROR_CTR);
+
/* Enable controller */
writel(stcmd, priv->base + IFI_CANFD_STCMD);
}
@@ -685,6 +761,10 @@ static void ifi_canfd_stop(struct net_device *ndev)
{
struct ifi_canfd_priv *priv = netdev_priv(ndev);
+ /* Reset and disable the error counter. */
+ writel(IFI_CANFD_ERROR_CTR_ER_RESET, priv->base + IFI_CANFD_ERROR_CTR);
+ writel(0, priv->base + IFI_CANFD_ERROR_CTR);
+
/* Reset the IP */
writel(IFI_CANFD_STCMD_HARDRESET, priv->base + IFI_CANFD_STCMD);
@@ -877,7 +957,7 @@ static int ifi_canfd_plat_probe(struct platform_device *pdev)
priv->can.clock.freq = readl(addr + IFI_CANFD_CANCLOCK);
priv->can.bittiming_const = &ifi_canfd_bittiming_const;
- priv->can.data_bittiming_const = &ifi_canfd_data_bittiming_const;
+ priv->can.data_bittiming_const = &ifi_canfd_bittiming_const;
priv->can.do_set_mode = ifi_canfd_set_mode;
priv->can.do_get_berr_counter = ifi_canfd_get_berr_counter;
@@ -888,7 +968,8 @@ static int ifi_canfd_plat_probe(struct platform_device *pdev)
priv->can.ctrlmode_supported = CAN_CTRLMODE_LOOPBACK |
CAN_CTRLMODE_LISTENONLY |
CAN_CTRLMODE_FD |
- CAN_CTRLMODE_FD_NON_ISO;
+ CAN_CTRLMODE_FD_NON_ISO |
+ CAN_CTRLMODE_BERR_REPORTING;
platform_set_drvdata(pdev, ndev);
SET_NETDEV_DEV(ndev, dev);
diff --git a/drivers/net/can/janz-ican3.c b/drivers/net/can/janz-ican3.c
index 5d04f5464faf..f13bb8d9bb84 100644
--- a/drivers/net/can/janz-ican3.c
+++ b/drivers/net/can/janz-ican3.c
@@ -84,6 +84,7 @@
#define MSG_COFFREQ 0x42
#define MSG_CONREQ 0x43
#define MSG_CCONFREQ 0x47
+#define MSG_NMTS 0xb0
#define MSG_LMTS 0xb4
/*
@@ -130,6 +131,22 @@
#define ICAN3_CAN_DLC_MASK 0x0f
+/* Janz ICAN3 NMTS subtypes */
+#define NMTS_CREATE_NODE_REQ 0x0
+#define NMTS_SLAVE_STATE_IND 0x8
+#define NMTS_SLAVE_EVENT_IND 0x9
+
+/* Janz ICAN3 LMTS subtypes */
+#define LMTS_BUSON_REQ 0x0
+#define LMTS_BUSOFF_REQ 0x1
+#define LMTS_CAN_CONF_REQ 0x2
+
+/* Janz ICAN3 NMTS Event indications */
+#define NE_LOCAL_OCCURRED 0x3
+#define NE_LOCAL_RESOLVED 0x2
+#define NE_REMOTE_OCCURRED 0xc
+#define NE_REMOTE_RESOLVED 0x8
+
/*
* SJA1000 Status and Error Register Definitions
*
@@ -800,21 +817,41 @@ static int ican3_set_bus_state(struct ican3_dev *mod, bool on)
return ican3_send_msg(mod, &msg);
} else if (mod->fwtype == ICAN3_FWTYPE_CAL_CANOPEN) {
+ /* bittiming + can-on/off request */
memset(&msg, 0, sizeof(msg));
msg.spec = MSG_LMTS;
if (on) {
msg.len = cpu_to_le16(4);
- msg.data[0] = 0;
+ msg.data[0] = LMTS_BUSON_REQ;
msg.data[1] = 0;
msg.data[2] = btr0;
msg.data[3] = btr1;
} else {
msg.len = cpu_to_le16(2);
- msg.data[0] = 1;
+ msg.data[0] = LMTS_BUSOFF_REQ;
msg.data[1] = 0;
}
+ res = ican3_send_msg(mod, &msg);
+ if (res)
+ return res;
- return ican3_send_msg(mod, &msg);
+ if (on) {
+ /* create NMT Slave Node for error processing
+ * class 2 (with error capability, see CiA/DS203-1)
+ * id 1
+ * name locnod1 (must be exactly 7 bytes)
+ */
+ memset(&msg, 0, sizeof(msg));
+ msg.spec = MSG_NMTS;
+ msg.len = cpu_to_le16(11);
+ msg.data[0] = NMTS_CREATE_NODE_REQ;
+ msg.data[1] = 0;
+ msg.data[2] = 2; /* node class */
+ msg.data[3] = 1; /* node id */
+ strcpy(msg.data + 4, "locnod1"); /* node name */
+ return ican3_send_msg(mod, &msg);
+ }
+ return 0;
}
return -ENOTSUPP;
}
@@ -849,12 +886,23 @@ static int ican3_set_buserror(struct ican3_dev *mod, u8 quota)
{
struct ican3_msg msg;
- memset(&msg, 0, sizeof(msg));
- msg.spec = MSG_CCONFREQ;
- msg.len = cpu_to_le16(2);
- msg.data[0] = 0x00;
- msg.data[1] = quota;
-
+ if (mod->fwtype == ICAN3_FWTYPE_ICANOS) {
+ memset(&msg, 0, sizeof(msg));
+ msg.spec = MSG_CCONFREQ;
+ msg.len = cpu_to_le16(2);
+ msg.data[0] = 0x00;
+ msg.data[1] = quota;
+ } else if (mod->fwtype == ICAN3_FWTYPE_CAL_CANOPEN) {
+ memset(&msg, 0, sizeof(msg));
+ msg.spec = MSG_LMTS;
+ msg.len = cpu_to_le16(4);
+ msg.data[0] = LMTS_CAN_CONF_REQ;
+ msg.data[1] = 0x00;
+ msg.data[2] = 0x00;
+ msg.data[3] = quota;
+ } else {
+ return -ENOTSUPP;
+ }
return ican3_send_msg(mod, &msg);
}
@@ -1150,6 +1198,41 @@ static void ican3_handle_inquiry(struct ican3_dev *mod, struct ican3_msg *msg)
}
}
+/* Handle NMTS Slave Event Indication Messages from the firmware */
+static void ican3_handle_nmtsind(struct ican3_dev *mod, struct ican3_msg *msg)
+{
+ u16 subspec;
+
+ subspec = msg->data[0] + msg->data[1] * 0x100;
+ if (subspec == NMTS_SLAVE_EVENT_IND) {
+ switch (msg->data[2]) {
+ case NE_LOCAL_OCCURRED:
+ case NE_LOCAL_RESOLVED:
+ /* now follows the same message as Raw ICANOS CEVTIND
+ * shift the data at the same place and call this method
+ */
+ le16_add_cpu(&msg->len, -3);
+ memmove(msg->data, msg->data + 3, le16_to_cpu(msg->len));
+ ican3_handle_cevtind(mod, msg);
+ break;
+ case NE_REMOTE_OCCURRED:
+ case NE_REMOTE_RESOLVED:
+ /* should not occur, ignore */
+ break;
+ default:
+ netdev_warn(mod->ndev, "unknown NMTS event indication %x\n",
+ msg->data[2]);
+ break;
+ }
+ } else if (subspec == NMTS_SLAVE_STATE_IND) {
+ /* ignore state indications */
+ } else {
+ netdev_warn(mod->ndev, "unhandled NMTS indication %x\n",
+ subspec);
+ return;
+ }
+}
+
static void ican3_handle_unknown_message(struct ican3_dev *mod,
struct ican3_msg *msg)
{
@@ -1179,6 +1262,9 @@ static void ican3_handle_message(struct ican3_dev *mod, struct ican3_msg *msg)
case MSG_INQUIRY:
ican3_handle_inquiry(mod, msg);
break;
+ case MSG_NMTS:
+ ican3_handle_nmtsind(mod, msg);
+ break;
default:
ican3_handle_unknown_message(mod, msg);
break;
diff --git a/drivers/net/can/m_can/m_can.c b/drivers/net/can/m_can/m_can.c
index 39cf911f7a1e..195f15edb32e 100644
--- a/drivers/net/can/m_can/m_can.c
+++ b/drivers/net/can/m_can/m_can.c
@@ -955,7 +955,7 @@ static struct net_device *alloc_m_can_dev(void)
priv->can.do_get_berr_counter = m_can_get_berr_counter;
/* CAN_CTRLMODE_FD_NON_ISO is fixed with M_CAN IP v3.0.1 */
- priv->can.ctrlmode = CAN_CTRLMODE_FD_NON_ISO;
+ can_set_static_ctrlmode(dev, CAN_CTRLMODE_FD_NON_ISO);
/* CAN_CTRLMODE_FD_NON_ISO can not be changed with M_CAN IP v3.0.1 */
priv->can.ctrlmode_supported = CAN_CTRLMODE_LOOPBACK |
diff --git a/drivers/net/can/mscan/mscan.c b/drivers/net/can/mscan/mscan.c
index e36b7400d5cc..acb708fc1463 100644
--- a/drivers/net/can/mscan/mscan.c
+++ b/drivers/net/can/mscan/mscan.c
@@ -276,7 +276,7 @@ static netdev_tx_t mscan_start_xmit(struct sk_buff *skb, struct net_device *dev)
out_8(&regs->cantflg, 1 << buf_id);
if (!test_bit(F_TX_PROGRESS, &priv->flags))
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
list_add_tail(&priv->tx_queue[buf_id].list, &priv->tx_head);
@@ -469,7 +469,7 @@ static irqreturn_t mscan_isr(int irq, void *dev_id)
clear_bit(F_TX_PROGRESS, &priv->flags);
priv->cur_pri = 0;
} else {
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
}
if (!test_bit(F_TX_WAIT_ALL, &priv->flags))
diff --git a/drivers/net/can/sja1000/plx_pci.c b/drivers/net/can/sja1000/plx_pci.c
index 8836a7485c81..3eb7430dffbf 100644
--- a/drivers/net/can/sja1000/plx_pci.c
+++ b/drivers/net/can/sja1000/plx_pci.c
@@ -39,6 +39,7 @@ MODULE_DESCRIPTION("Socket-CAN driver for PLX90xx PCI-bridge cards with "
MODULE_SUPPORTED_DEVICE("Adlink PCI-7841/cPCI-7841, "
"Adlink PCI-7841/cPCI-7841 SE, "
"Marathon CAN-bus-PCI, "
+ "Marathon CAN-bus-PCIe, "
"TEWS TECHNOLOGIES TPMC810, "
"esd CAN-PCI/CPCI/PCI104/200, "
"esd CAN-PCI/PMC/266, "
@@ -133,6 +134,7 @@ struct plx_pci_card {
#define IXXAT_PCI_SUB_SYS_ID 0x2540
#define MARATHON_PCI_DEVICE_ID 0x2715
+#define MARATHON_PCIE_DEVICE_ID 0x3432
#define TEWS_PCI_VENDOR_ID 0x1498
#define TEWS_PCI_DEVICE_ID_TMPC810 0x032A
@@ -141,8 +143,9 @@ struct plx_pci_card {
#define CTI_PCI_DEVICE_ID_CRG001 0x0900
static void plx_pci_reset_common(struct pci_dev *pdev);
-static void plx_pci_reset_marathon(struct pci_dev *pdev);
static void plx9056_pci_reset_common(struct pci_dev *pdev);
+static void plx_pci_reset_marathon_pci(struct pci_dev *pdev);
+static void plx_pci_reset_marathon_pcie(struct pci_dev *pdev);
struct plx_pci_channel_map {
u32 bar;
@@ -215,14 +218,22 @@ static struct plx_pci_card_info plx_pci_card_info_ixxat = {
/* based on PLX9050 */
};
-static struct plx_pci_card_info plx_pci_card_info_marathon = {
+static struct plx_pci_card_info plx_pci_card_info_marathon_pci = {
"Marathon CAN-bus-PCI", 2,
PLX_PCI_CAN_CLOCK, PLX_PCI_OCR, PLX_PCI_CDR,
{0, 0x00, 0x00}, { {2, 0x00, 0x00}, {4, 0x00, 0x00} },
- &plx_pci_reset_marathon
+ &plx_pci_reset_marathon_pci
/* based on PLX9052 */
};
+static struct plx_pci_card_info plx_pci_card_info_marathon_pcie = {
+ "Marathon CAN-bus-PCIe", 2,
+ PLX_PCI_CAN_CLOCK, PLX_PCI_OCR, PLX_PCI_CDR,
+ {0, 0x00, 0x00}, { {2, 0x00, 0x00}, {3, 0x80, 0x00} },
+ &plx_pci_reset_marathon_pcie
+ /* based on PEX8311 */
+};
+
static struct plx_pci_card_info plx_pci_card_info_tews = {
"TEWS TECHNOLOGIES TPMC810", 2,
PLX_PCI_CAN_CLOCK, PLX_PCI_OCR, PLX_PCI_CDR,
@@ -316,7 +327,14 @@ static const struct pci_device_id plx_pci_tbl[] = {
PCI_VENDOR_ID_PLX, MARATHON_PCI_DEVICE_ID,
PCI_ANY_ID, PCI_ANY_ID,
0, 0,
- (kernel_ulong_t)&plx_pci_card_info_marathon
+ (kernel_ulong_t)&plx_pci_card_info_marathon_pci
+ },
+ {
+ /* Marathon CAN-bus-PCIe card */
+ PCI_VENDOR_ID_PLX, MARATHON_PCIE_DEVICE_ID,
+ PCI_ANY_ID, PCI_ANY_ID,
+ 0, 0,
+ (kernel_ulong_t)&plx_pci_card_info_marathon_pcie
},
{
/* TEWS TECHNOLOGIES TPMC810 card */
@@ -437,8 +455,8 @@ static void plx9056_pci_reset_common(struct pci_dev *pdev)
iowrite32(cntrl, card->conf_addr + PLX9056_CNTRL);
};
-/* Special reset function for Marathon card */
-static void plx_pci_reset_marathon(struct pci_dev *pdev)
+/* Special reset function for Marathon CAN-bus-PCI card */
+static void plx_pci_reset_marathon_pci(struct pci_dev *pdev)
{
void __iomem *reset_addr;
int i;
@@ -460,6 +478,34 @@ static void plx_pci_reset_marathon(struct pci_dev *pdev)
}
}
+/* Special reset function for Marathon CAN-bus-PCIe card */
+static void plx_pci_reset_marathon_pcie(struct pci_dev *pdev)
+{
+ void __iomem *addr;
+ void __iomem *reset_addr;
+ int i;
+
+ plx9056_pci_reset_common(pdev);
+
+ for (i = 0; i < 2; i++) {
+ struct plx_pci_channel_map *chan_map =
+ &plx_pci_card_info_marathon_pcie.chan_map_tbl[i];
+ addr = pci_iomap(pdev, chan_map->bar, chan_map->size);
+ if (!addr) {
+ dev_err(&pdev->dev, "Failed to remap reset "
+ "space %d (BAR%d)\n", i, chan_map->bar);
+ } else {
+ /* reset the SJA1000 chip */
+ #define MARATHON_PCIE_RESET_OFFSET 32
+ reset_addr = addr + chan_map->offset +
+ MARATHON_PCIE_RESET_OFFSET;
+ iowrite8(0x1, reset_addr);
+ udelay(100);
+ pci_iounmap(pdev, addr);
+ }
+ }
+}
+
static void plx_pci_del_card(struct pci_dev *pdev)
{
struct plx_pci_card *card = pci_get_drvdata(pdev);
@@ -486,7 +532,8 @@ static void plx_pci_del_card(struct pci_dev *pdev)
* Disable interrupts from PCI-card and disable local
* interrupts
*/
- if (pdev->device != PCI_DEVICE_ID_PLX_9056)
+ if (pdev->device != PCI_DEVICE_ID_PLX_9056 &&
+ pdev->device != MARATHON_PCIE_DEVICE_ID)
iowrite32(0x0, card->conf_addr + PLX_INTCSR);
else
iowrite32(0x0, card->conf_addr + PLX9056_INTCSR);
@@ -619,7 +666,8 @@ static int plx_pci_add_card(struct pci_dev *pdev,
* Enable interrupts from PCI-card (PLX90xx) and enable Local_1,
* Local_2 interrupts from the SJA1000 chips
*/
- if (pdev->device != PCI_DEVICE_ID_PLX_9056) {
+ if (pdev->device != PCI_DEVICE_ID_PLX_9056 &&
+ pdev->device != MARATHON_PCIE_DEVICE_ID) {
val = ioread32(card->conf_addr + PLX_INTCSR);
if (pdev->subsystem_vendor == PCI_VENDOR_ID_ESDGMBH)
val |= PLX_LINT1_EN | PLX_PCI_INT_EN;
diff --git a/drivers/net/can/sja1000/sja1000.c b/drivers/net/can/sja1000/sja1000.c
index 8dda3b703d39..9f107798f904 100644
--- a/drivers/net/can/sja1000/sja1000.c
+++ b/drivers/net/can/sja1000/sja1000.c
@@ -438,6 +438,7 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
cf->can_id |= CAN_ERR_PROT | CAN_ERR_BUSERROR;
+ /* set error type */
switch (ecc & ECC_MASK) {
case ECC_BIT:
cf->data[2] |= CAN_ERR_PROT_BIT;
@@ -449,9 +450,12 @@ static int sja1000_err(struct net_device *dev, uint8_t isrc, uint8_t status)
cf->data[2] |= CAN_ERR_PROT_STUFF;
break;
default:
- cf->data[3] = ecc & ECC_SEG;
break;
}
+
+ /* set error location */
+ cf->data[3] = ecc & ECC_SEG;
+
/* Error occurred during transmission? */
if ((ecc & ECC_DIR) == 0)
cf->data[2] |= CAN_ERR_PROT_TX;
diff --git a/drivers/net/can/spi/mcp251x.c b/drivers/net/can/spi/mcp251x.c
index 74a7dfecee27..cf36d26ef002 100644
--- a/drivers/net/can/spi/mcp251x.c
+++ b/drivers/net/can/spi/mcp251x.c
@@ -961,7 +961,8 @@ static int mcp251x_open(struct net_device *net)
goto open_unlock;
}
- priv->wq = create_freezable_workqueue("mcp251x_wq");
+ priv->wq = alloc_workqueue("mcp251x_wq", WQ_FREEZABLE | WQ_MEM_RECLAIM,
+ 0);
INIT_WORK(&priv->tx_work, mcp251x_tx_work_handler);
INIT_WORK(&priv->restart_work, mcp251x_restart_work_handler);
diff --git a/drivers/net/can/usb/ems_usb.c b/drivers/net/can/usb/ems_usb.c
index 3400fd1cada7..71f0e791355b 100644
--- a/drivers/net/can/usb/ems_usb.c
+++ b/drivers/net/can/usb/ems_usb.c
@@ -521,7 +521,7 @@ static void ems_usb_write_bulk_callback(struct urb *urb)
if (urb->status)
netdev_info(netdev, "Tx URB aborted (%d)\n", urb->status);
- netdev->trans_start = jiffies;
+ netif_trans_update(netdev);
/* transmission complete interrupt */
netdev->stats.tx_packets++;
@@ -835,7 +835,7 @@ static netdev_tx_t ems_usb_start_xmit(struct sk_buff *skb, struct net_device *ne
stats->tx_dropped++;
}
} else {
- netdev->trans_start = jiffies;
+ netif_trans_update(netdev);
/* Slow down tx path */
if (atomic_read(&dev->active_tx_urbs) >= MAX_TX_URBS ||
diff --git a/drivers/net/can/usb/esd_usb2.c b/drivers/net/can/usb/esd_usb2.c
index 113e64fcd73b..784a9002fbb9 100644
--- a/drivers/net/can/usb/esd_usb2.c
+++ b/drivers/net/can/usb/esd_usb2.c
@@ -480,7 +480,7 @@ static void esd_usb2_write_bulk_callback(struct urb *urb)
if (urb->status)
netdev_info(netdev, "Tx URB aborted (%d)\n", urb->status);
- netdev->trans_start = jiffies;
+ netif_trans_update(netdev);
}
static ssize_t show_firmware(struct device *d,
@@ -820,7 +820,7 @@ static netdev_tx_t esd_usb2_start_xmit(struct sk_buff *skb,
goto releasebuf;
}
- netdev->trans_start = jiffies;
+ netif_trans_update(netdev);
/*
* Release our reference to this URB, the USB core will eventually free
diff --git a/drivers/net/can/usb/gs_usb.c b/drivers/net/can/usb/gs_usb.c
index cbc99d5649af..1556d4286235 100644
--- a/drivers/net/can/usb/gs_usb.c
+++ b/drivers/net/can/usb/gs_usb.c
@@ -950,7 +950,8 @@ static void gs_usb_disconnect(struct usb_interface *intf)
}
static const struct usb_device_id gs_usb_table[] = {
- {USB_DEVICE(USB_GSUSB_1_VENDOR_ID, USB_GSUSB_1_PRODUCT_ID)},
+ { USB_DEVICE_INTERFACE_NUMBER(USB_GSUSB_1_VENDOR_ID,
+ USB_GSUSB_1_PRODUCT_ID, 0) },
{} /* Terminating entry */
};
diff --git a/drivers/net/can/usb/peak_usb/pcan_usb_core.c b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
index 5a2e341a6d1e..bfb91d8fa460 100644
--- a/drivers/net/can/usb/peak_usb/pcan_usb_core.c
+++ b/drivers/net/can/usb/peak_usb/pcan_usb_core.c
@@ -274,7 +274,7 @@ static void peak_usb_write_bulk_callback(struct urb *urb)
netdev->stats.tx_bytes += context->data_len;
/* prevent tx timeout */
- netdev->trans_start = jiffies;
+ netif_trans_update(netdev);
break;
default:
@@ -373,7 +373,7 @@ static netdev_tx_t peak_usb_ndo_start_xmit(struct sk_buff *skb,
stats->tx_dropped++;
}
} else {
- netdev->trans_start = jiffies;
+ netif_trans_update(netdev);
/* slow down tx path */
if (atomic_read(&dev->active_tx_urbs) >= PCAN_USB_MAX_TX_URBS)
diff --git a/drivers/net/cris/eth_v10.c b/drivers/net/cris/eth_v10.c
index 64c016a99af8..221f5f011ff9 100644
--- a/drivers/net/cris/eth_v10.c
+++ b/drivers/net/cris/eth_v10.c
@@ -1106,7 +1106,7 @@ e100_send_packet(struct sk_buff *skb, struct net_device *dev)
myNextTxDesc->skb = skb;
- dev->trans_start = jiffies; /* NETIF_F_LLTX driver :( */
+ netif_trans_update(dev); /* NETIF_F_LLTX driver :( */
e100_hardware_send_packet(np, buf, skb->len);
diff --git a/drivers/net/dsa/Kconfig b/drivers/net/dsa/Kconfig
index 90ba003d8fdf..200663c43ce9 100644
--- a/drivers/net/dsa/Kconfig
+++ b/drivers/net/dsa/Kconfig
@@ -1,10 +1,6 @@
menu "Distributed Switch Architecture drivers"
depends on HAVE_NET_DSA
-config NET_DSA_MV88E6XXX
- tristate
- default n
-
config NET_DSA_MV88E6060
tristate "Marvell 88E6060 ethernet switch chip support"
depends on NET_DSA
@@ -13,46 +9,13 @@ config NET_DSA_MV88E6060
This enables support for the Marvell 88E6060 ethernet switch
chip.
-config NET_DSA_MV88E6XXX_NEED_PPU
- bool
- default n
-
-config NET_DSA_MV88E6131
- tristate "Marvell 88E6085/6095/6095F/6131 ethernet switch chip support"
- depends on NET_DSA
- select NET_DSA_MV88E6XXX
- select NET_DSA_MV88E6XXX_NEED_PPU
- select NET_DSA_TAG_DSA
- ---help---
- This enables support for the Marvell 88E6085/6095/6095F/6131
- ethernet switch chips.
-
-config NET_DSA_MV88E6123
- tristate "Marvell 88E6123/6161/6165 ethernet switch chip support"
- depends on NET_DSA
- select NET_DSA_MV88E6XXX
- select NET_DSA_TAG_EDSA
- ---help---
- This enables support for the Marvell 88E6123/6161/6165
- ethernet switch chips.
-
-config NET_DSA_MV88E6171
- tristate "Marvell 88E6171/6175/6350/6351 ethernet switch chip support"
- depends on NET_DSA
- select NET_DSA_MV88E6XXX
- select NET_DSA_TAG_EDSA
- ---help---
- This enables support for the Marvell 88E6171/6175/6350/6351
- ethernet switches chips.
-
-config NET_DSA_MV88E6352
- tristate "Marvell 88E6172/6176/6320/6321/6352 ethernet switch chip support"
+config NET_DSA_MV88E6XXX
+ tristate "Marvell 88E6xxx Ethernet switch chip support"
depends on NET_DSA
- select NET_DSA_MV88E6XXX
select NET_DSA_TAG_EDSA
---help---
- This enables support for the Marvell 88E6172, 88E6176, 88E6320,
- 88E6321 and 88E6352 ethernet switch chips.
+ This enables support for most of the Marvell 88E6xxx models of
+ Ethernet switch chips, except 88E6060.
config NET_DSA_BCM_SF2
tristate "Broadcom Starfighter 2 Ethernet switch support"
diff --git a/drivers/net/dsa/Makefile b/drivers/net/dsa/Makefile
index a6e09939be65..76b751dd9efd 100644
--- a/drivers/net/dsa/Makefile
+++ b/drivers/net/dsa/Makefile
@@ -1,16 +1,3 @@
obj-$(CONFIG_NET_DSA_MV88E6060) += mv88e6060.o
-obj-$(CONFIG_NET_DSA_MV88E6XXX) += mv88e6xxx_drv.o
-mv88e6xxx_drv-y += mv88e6xxx.o
-ifdef CONFIG_NET_DSA_MV88E6123
-mv88e6xxx_drv-y += mv88e6123.o
-endif
-ifdef CONFIG_NET_DSA_MV88E6131
-mv88e6xxx_drv-y += mv88e6131.o
-endif
-ifdef CONFIG_NET_DSA_MV88E6352
-mv88e6xxx_drv-y += mv88e6352.o
-endif
-ifdef CONFIG_NET_DSA_MV88E6171
-mv88e6xxx_drv-y += mv88e6171.o
-endif
+obj-$(CONFIG_NET_DSA_MV88E6XXX) += mv88e6xxx.o
obj-$(CONFIG_NET_DSA_BCM_SF2) += bcm_sf2.o
diff --git a/drivers/net/dsa/bcm_sf2.c b/drivers/net/dsa/bcm_sf2.c
index 95944d5e3e22..10ddd5a5dfb6 100644
--- a/drivers/net/dsa/bcm_sf2.c
+++ b/drivers/net/dsa/bcm_sf2.c
@@ -135,8 +135,17 @@ static int bcm_sf2_sw_get_sset_count(struct dsa_switch *ds)
return BCM_SF2_STATS_SIZE;
}
-static char *bcm_sf2_sw_probe(struct device *host_dev, int sw_addr)
+static const char *bcm_sf2_sw_drv_probe(struct device *dsa_dev,
+ struct device *host_dev, int sw_addr,
+ void **_priv)
{
+ struct bcm_sf2_priv *priv;
+
+ priv = devm_kzalloc(dsa_dev, sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+ return NULL;
+ *_priv = priv;
+
return "Broadcom Starfighter 2";
}
@@ -151,7 +160,7 @@ static void bcm_sf2_imp_vlan_setup(struct dsa_switch *ds, int cpu_port)
* the same VLAN.
*/
for (i = 0; i < priv->hw_params.num_ports; i++) {
- if (!((1 << i) & ds->phys_port_mask))
+ if (!((1 << i) & ds->enabled_port_mask))
continue;
reg = core_readl(priv, CORE_PORT_VLAN_CTL_PORT(i));
@@ -545,12 +554,11 @@ static void bcm_sf2_sw_br_leave(struct dsa_switch *ds, int port)
priv->port_sts[port].bridge_dev = NULL;
}
-static int bcm_sf2_sw_br_set_stp_state(struct dsa_switch *ds, int port,
- u8 state)
+static void bcm_sf2_sw_br_set_stp_state(struct dsa_switch *ds, int port,
+ u8 state)
{
struct bcm_sf2_priv *priv = ds_to_priv(ds);
u8 hw_state, cur_hw_state;
- int ret = 0;
u32 reg;
reg = core_readl(priv, CORE_G_PCTL_PORT(port));
@@ -574,7 +582,7 @@ static int bcm_sf2_sw_br_set_stp_state(struct dsa_switch *ds, int port,
break;
default:
pr_err("%s: invalid STP state: %d\n", __func__, state);
- return -EINVAL;
+ return;
}
/* Fast-age ARL entries if we are moving a port from Learning or
@@ -584,10 +592,9 @@ static int bcm_sf2_sw_br_set_stp_state(struct dsa_switch *ds, int port,
if (cur_hw_state != hw_state) {
if (cur_hw_state >= G_MISTP_LEARN_STATE &&
hw_state <= G_MISTP_LISTEN_STATE) {
- ret = bcm_sf2_sw_fast_age_port(ds, port);
- if (ret) {
+ if (bcm_sf2_sw_fast_age_port(ds, port)) {
pr_err("%s: fast-ageing failed\n", __func__);
- return ret;
+ return;
}
}
}
@@ -596,8 +603,6 @@ static int bcm_sf2_sw_br_set_stp_state(struct dsa_switch *ds, int port,
reg &= ~(G_MISTP_STATE_MASK << G_MISTP_STATE_SHIFT);
reg |= hw_state;
core_writel(priv, reg, CORE_G_PCTL_PORT(port));
-
- return 0;
}
/* Address Resolution Logic routines */
@@ -728,13 +733,14 @@ static int bcm_sf2_sw_fdb_prepare(struct dsa_switch *ds, int port,
return 0;
}
-static int bcm_sf2_sw_fdb_add(struct dsa_switch *ds, int port,
- const struct switchdev_obj_port_fdb *fdb,
- struct switchdev_trans *trans)
+static void bcm_sf2_sw_fdb_add(struct dsa_switch *ds, int port,
+ const struct switchdev_obj_port_fdb *fdb,
+ struct switchdev_trans *trans)
{
struct bcm_sf2_priv *priv = ds_to_priv(ds);
- return bcm_sf2_arl_op(priv, 0, port, fdb->addr, fdb->vid, true);
+ if (bcm_sf2_arl_op(priv, 0, port, fdb->addr, fdb->vid, true))
+ pr_err("%s: failed to add MAC address\n", __func__);
}
static int bcm_sf2_sw_fdb_del(struct dsa_switch *ds, int port,
@@ -943,8 +949,8 @@ static int bcm_sf2_sw_setup(struct dsa_switch *ds)
/* All the interesting properties are at the parent device_node
* level
*/
- dn = ds->pd->of_node->parent;
- bcm_sf2_identify_ports(priv, ds->pd->of_node);
+ dn = ds->cd->of_node->parent;
+ bcm_sf2_identify_ports(priv, ds->cd->of_node);
priv->irq0 = irq_of_parse_and_map(dn, 0);
priv->irq1 = irq_of_parse_and_map(dn, 1);
@@ -1003,7 +1009,7 @@ static int bcm_sf2_sw_setup(struct dsa_switch *ds)
/* Enable all valid ports and disable those unused */
for (port = 0; port < priv->hw_params.num_ports; port++) {
/* IMP port receives special treatment */
- if ((1 << port) & ds->phys_port_mask)
+ if ((1 << port) & ds->enabled_port_mask)
bcm_sf2_port_setup(ds, port, NULL);
else if (dsa_is_cpu_port(ds, port))
bcm_sf2_imp_setup(ds, port);
@@ -1016,11 +1022,12 @@ static int bcm_sf2_sw_setup(struct dsa_switch *ds)
* 7445D0, since 7445E0 disconnects the internal switch pseudo-PHY such
* that we can use the regular SWITCH_MDIO master controller instead.
*
- * By default, DSA initializes ds->phys_mii_mask to ds->phys_port_mask
- * to have a 1:1 mapping between Port address and PHY address in order
- * to utilize the slave_mii_bus instance to read from Port PHYs. This is
- * not what we want here, so we initialize phys_mii_mask 0 to always
- * utilize the "master" MDIO bus backed by the "mdio-unimac" driver.
+ * By default, DSA initializes ds->phys_mii_mask to
+ * ds->enabled_port_mask to have a 1:1 mapping between Port address
+ * and PHY address in order to utilize the slave_mii_bus instance to
+ * read from Port PHYs. This is not what we want here, so we
+ * initialize phys_mii_mask 0 to always utilize the "master" MDIO
+ * bus backed by the "mdio-unimac" driver.
*/
if (of_machine_is_compatible("brcm,bcm7445d0"))
ds->phys_mii_mask |= ((1 << BRCM_PSEUDO_PHY_ADDR) | (1 << 0));
@@ -1278,7 +1285,7 @@ static int bcm_sf2_sw_suspend(struct dsa_switch *ds)
* bcm_sf2_sw_setup
*/
for (port = 0; port < DSA_MAX_PORTS; port++) {
- if ((1 << port) & ds->phys_port_mask ||
+ if ((1 << port) & ds->enabled_port_mask ||
dsa_is_cpu_port(ds, port))
bcm_sf2_port_disable(ds, port, NULL);
}
@@ -1302,7 +1309,7 @@ static int bcm_sf2_sw_resume(struct dsa_switch *ds)
bcm_sf2_gphy_enable_set(ds, true);
for (port = 0; port < DSA_MAX_PORTS; port++) {
- if ((1 << port) & ds->phys_port_mask)
+ if ((1 << port) & ds->enabled_port_mask)
bcm_sf2_port_setup(ds, port, NULL);
else if (dsa_is_cpu_port(ds, port))
bcm_sf2_imp_setup(ds, port);
@@ -1365,8 +1372,7 @@ static int bcm_sf2_sw_set_wol(struct dsa_switch *ds, int port,
static struct dsa_switch_driver bcm_sf2_switch_driver = {
.tag_protocol = DSA_TAG_PROTO_BRCM,
- .priv_size = sizeof(struct bcm_sf2_priv),
- .probe = bcm_sf2_sw_probe,
+ .probe = bcm_sf2_sw_drv_probe,
.setup = bcm_sf2_sw_setup,
.set_addr = bcm_sf2_sw_set_addr,
.get_phy_flags = bcm_sf2_sw_get_phy_flags,
@@ -1387,7 +1393,7 @@ static struct dsa_switch_driver bcm_sf2_switch_driver = {
.set_eee = bcm_sf2_sw_set_eee,
.port_bridge_join = bcm_sf2_sw_br_join,
.port_bridge_leave = bcm_sf2_sw_br_leave,
- .port_stp_update = bcm_sf2_sw_br_set_stp_state,
+ .port_stp_state_set = bcm_sf2_sw_br_set_stp_state,
.port_fdb_prepare = bcm_sf2_sw_fdb_prepare,
.port_fdb_add = bcm_sf2_sw_fdb_add,
.port_fdb_del = bcm_sf2_sw_fdb_del,
diff --git a/drivers/net/dsa/mv88e6060.c b/drivers/net/dsa/mv88e6060.c
index 0527f485c3dc..e36b40886bd8 100644
--- a/drivers/net/dsa/mv88e6060.c
+++ b/drivers/net/dsa/mv88e6060.c
@@ -19,12 +19,9 @@
static int reg_read(struct dsa_switch *ds, int addr, int reg)
{
- struct mii_bus *bus = dsa_host_dev_to_mii_bus(ds->master_dev);
+ struct mv88e6060_priv *priv = ds_to_priv(ds);
- if (bus == NULL)
- return -EINVAL;
-
- return mdiobus_read_nested(bus, ds->pd->sw_addr + addr, reg);
+ return mdiobus_read_nested(priv->bus, priv->sw_addr + addr, reg);
}
#define REG_READ(addr, reg) \
@@ -40,12 +37,9 @@ static int reg_read(struct dsa_switch *ds, int addr, int reg)
static int reg_write(struct dsa_switch *ds, int addr, int reg, u16 val)
{
- struct mii_bus *bus = dsa_host_dev_to_mii_bus(ds->master_dev);
-
- if (bus == NULL)
- return -EINVAL;
+ struct mv88e6060_priv *priv = ds_to_priv(ds);
- return mdiobus_write_nested(bus, ds->pd->sw_addr + addr, reg, val);
+ return mdiobus_write_nested(priv->bus, priv->sw_addr + addr, reg, val);
}
#define REG_WRITE(addr, reg, val) \
@@ -57,14 +51,10 @@ static int reg_write(struct dsa_switch *ds, int addr, int reg, u16 val)
return __ret; \
})
-static char *mv88e6060_probe(struct device *host_dev, int sw_addr)
+static const char *mv88e6060_get_name(struct mii_bus *bus, int sw_addr)
{
- struct mii_bus *bus = dsa_host_dev_to_mii_bus(host_dev);
int ret;
- if (bus == NULL)
- return NULL;
-
ret = mdiobus_read(bus, sw_addr + REG_PORT(0), PORT_SWITCH_ID);
if (ret >= 0) {
if (ret == PORT_SWITCH_ID_6060)
@@ -79,6 +69,27 @@ static char *mv88e6060_probe(struct device *host_dev, int sw_addr)
return NULL;
}
+static const char *mv88e6060_drv_probe(struct device *dsa_dev,
+ struct device *host_dev, int sw_addr,
+ void **_priv)
+{
+ struct mii_bus *bus = dsa_host_dev_to_mii_bus(host_dev);
+ struct mv88e6060_priv *priv;
+ const char *name;
+
+ name = mv88e6060_get_name(bus, sw_addr);
+ if (name) {
+ priv = devm_kzalloc(dsa_dev, sizeof(*priv), GFP_KERNEL);
+ if (!priv)
+ return NULL;
+ *_priv = priv;
+ priv->bus = bus;
+ priv->sw_addr = sw_addr;
+ }
+
+ return name;
+}
+
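/*
 * Editor's note -- sketch of how the new probe signature above is meant to
 * be consumed; the caller shown here is an assumption, not the actual
 * net/dsa core. The core passes its own device (for devm allocations), the
 * MDIO host device and switch address, and gets the driver-private state
 * back through _priv next to the human-readable chip name.
 */
static const char *dsa_probe_one_sketch(struct dsa_switch_driver *drv,
                                        struct device *dsa_dev,
                                        struct device *host_dev,
                                        int sw_addr, void **priv)
{
        const char *name;

        name = drv->probe(dsa_dev, host_dev, sw_addr, priv);
        if (name)
                pr_info("found switch: %s\n", name);

        return name;    /* NULL: this driver does not handle the chip */
}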
static int mv88e6060_switch_reset(struct dsa_switch *ds)
{
int i;
@@ -159,7 +170,7 @@ static int mv88e6060_setup_port(struct dsa_switch *ds, int p)
REG_WRITE(addr, PORT_VLAN_MAP,
((p & 0xf) << PORT_VLAN_MAP_DBNUM_SHIFT) |
(dsa_is_cpu_port(ds, p) ?
- ds->phys_port_mask :
+ ds->enabled_port_mask :
BIT(ds->dst->cpu_port)));
/* Port Association Vector: when learning source addresses
@@ -174,8 +185,8 @@ static int mv88e6060_setup_port(struct dsa_switch *ds, int p)
static int mv88e6060_setup(struct dsa_switch *ds)
{
- int i;
int ret;
+ int i;
ret = mv88e6060_switch_reset(ds);
if (ret < 0)
@@ -238,7 +249,7 @@ mv88e6060_phy_write(struct dsa_switch *ds, int port, int regnum, u16 val)
static struct dsa_switch_driver mv88e6060_switch_driver = {
.tag_protocol = DSA_TAG_PROTO_TRAILER,
- .probe = mv88e6060_probe,
+ .probe = mv88e6060_drv_probe,
.setup = mv88e6060_setup,
.set_addr = mv88e6060_set_addr,
.phy_read = mv88e6060_phy_read,
diff --git a/drivers/net/dsa/mv88e6060.h b/drivers/net/dsa/mv88e6060.h
index cc9b2ed4aff4..10249bd16292 100644
--- a/drivers/net/dsa/mv88e6060.h
+++ b/drivers/net/dsa/mv88e6060.h
@@ -108,4 +108,15 @@
#define GLOBAL_ATU_MAC_23 0x0e
#define GLOBAL_ATU_MAC_45 0x0f
+struct mv88e6060_priv {
+ /* MDIO bus and address on bus to use. When in single chip
+ * mode, address is 0, and the switch uses multiple addresses
+ * on the bus. When in multi-chip mode, the switch uses a
+ * single address which contains two registers used for
+ * indirect access to more registers.
+ */
+ struct mii_bus *bus;
+ int sw_addr;
+};
+
#endif
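/*
 * Editor's note -- sketch of the multi-chip ("indirect") access scheme the
 * mv88e6060_priv comment above describes. The command/data register numbers
 * and bit layout are assumptions modelled on the companion mv88e6xxx driver,
 * not part of this patch; only the mdiobus_{read,write}_nested() usage is
 * taken from the code above. Error handling and the busy-poll are elided.
 */
static int indirect_read_sketch(struct mii_bus *bus, int sw_addr,
                                int addr, int reg)
{
        if (sw_addr == 0)                       /* single-chip mode */
                return mdiobus_read_nested(bus, addr, reg);

        /* multi-chip mode: command register assumed at sw_addr, offset 0 */
        mdiobus_write_nested(bus, sw_addr, 0x00,
                             BIT(15) | BIT(12) | (2 << 10) |    /* busy | c22 | read */
                             (addr << 5) | reg);
        /* ...poll bit 15 of the command register until it clears... */
        return mdiobus_read_nested(bus, sw_addr, 0x01);         /* data register */
}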
diff --git a/drivers/net/dsa/mv88e6123.c b/drivers/net/dsa/mv88e6123.c
deleted file mode 100644
index 69a6f79dcb10..000000000000
--- a/drivers/net/dsa/mv88e6123.c
+++ /dev/null
@@ -1,124 +0,0 @@
-/*
- * net/dsa/mv88e6123_61_65.c - Marvell 88e6123/6161/6165 switch chip support
- * Copyright (c) 2008-2009 Marvell Semiconductor
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- */
-
-#include <linux/delay.h>
-#include <linux/jiffies.h>
-#include <linux/list.h>
-#include <linux/module.h>
-#include <linux/netdevice.h>
-#include <linux/phy.h>
-#include <net/dsa.h>
-#include "mv88e6xxx.h"
-
-static const struct mv88e6xxx_switch_id mv88e6123_table[] = {
- { PORT_SWITCH_ID_6123, "Marvell 88E6123" },
- { PORT_SWITCH_ID_6123_A1, "Marvell 88E6123 (A1)" },
- { PORT_SWITCH_ID_6123_A2, "Marvell 88E6123 (A2)" },
- { PORT_SWITCH_ID_6161, "Marvell 88E6161" },
- { PORT_SWITCH_ID_6161_A1, "Marvell 88E6161 (A1)" },
- { PORT_SWITCH_ID_6161_A2, "Marvell 88E6161 (A2)" },
- { PORT_SWITCH_ID_6165, "Marvell 88E6165" },
- { PORT_SWITCH_ID_6165_A1, "Marvell 88E6165 (A1)" },
- { PORT_SWITCH_ID_6165_A2, "Marvell 88e6165 (A2)" },
-};
-
-static char *mv88e6123_probe(struct device *host_dev, int sw_addr)
-{
- return mv88e6xxx_lookup_name(host_dev, sw_addr, mv88e6123_table,
- ARRAY_SIZE(mv88e6123_table));
-}
-
-static int mv88e6123_setup_global(struct dsa_switch *ds)
-{
- u32 upstream_port = dsa_upstream_port(ds);
- int ret;
- u32 reg;
-
- ret = mv88e6xxx_setup_global(ds);
- if (ret)
- return ret;
-
- /* Disable the PHY polling unit (since there won't be any
- * external PHYs to poll), don't discard packets with
- * excessive collisions, and mask all interrupt sources.
- */
- REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL, 0x0000);
-
- /* Configure the upstream port, and configure the upstream
- * port as the port to which ingress and egress monitor frames
- * are to be sent.
- */
- reg = upstream_port << GLOBAL_MONITOR_CONTROL_INGRESS_SHIFT |
- upstream_port << GLOBAL_MONITOR_CONTROL_EGRESS_SHIFT |
- upstream_port << GLOBAL_MONITOR_CONTROL_ARP_SHIFT;
- REG_WRITE(REG_GLOBAL, GLOBAL_MONITOR_CONTROL, reg);
-
- /* Disable remote management for now, and set the switch's
- * DSA device number.
- */
- REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL_2, ds->index & 0x1f);
-
- return 0;
-}
-
-static int mv88e6123_setup(struct dsa_switch *ds)
-{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
- int ret;
-
- ret = mv88e6xxx_setup_common(ds);
- if (ret < 0)
- return ret;
-
- switch (ps->id) {
- case PORT_SWITCH_ID_6123:
- ps->num_ports = 3;
- break;
- case PORT_SWITCH_ID_6161:
- case PORT_SWITCH_ID_6165:
- ps->num_ports = 6;
- break;
- default:
- return -ENODEV;
- }
-
- ret = mv88e6xxx_switch_reset(ds, false);
- if (ret < 0)
- return ret;
-
- ret = mv88e6123_setup_global(ds);
- if (ret < 0)
- return ret;
-
- return mv88e6xxx_setup_ports(ds);
-}
-
-struct dsa_switch_driver mv88e6123_switch_driver = {
- .tag_protocol = DSA_TAG_PROTO_EDSA,
- .priv_size = sizeof(struct mv88e6xxx_priv_state),
- .probe = mv88e6123_probe,
- .setup = mv88e6123_setup,
- .set_addr = mv88e6xxx_set_addr_indirect,
- .phy_read = mv88e6xxx_phy_read,
- .phy_write = mv88e6xxx_phy_write,
- .get_strings = mv88e6xxx_get_strings,
- .get_ethtool_stats = mv88e6xxx_get_ethtool_stats,
- .get_sset_count = mv88e6xxx_get_sset_count,
- .adjust_link = mv88e6xxx_adjust_link,
-#ifdef CONFIG_NET_DSA_HWMON
- .get_temp = mv88e6xxx_get_temp,
-#endif
- .get_regs_len = mv88e6xxx_get_regs_len,
- .get_regs = mv88e6xxx_get_regs,
-};
-
-MODULE_ALIAS("platform:mv88e6123");
-MODULE_ALIAS("platform:mv88e6161");
-MODULE_ALIAS("platform:mv88e6165");
diff --git a/drivers/net/dsa/mv88e6131.c b/drivers/net/dsa/mv88e6131.c
deleted file mode 100644
index a92ca651c399..000000000000
--- a/drivers/net/dsa/mv88e6131.c
+++ /dev/null
@@ -1,177 +0,0 @@
-/*
- * net/dsa/mv88e6131.c - Marvell 88e6095/6095f/6131 switch chip support
- * Copyright (c) 2008-2009 Marvell Semiconductor
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- */
-
-#include <linux/delay.h>
-#include <linux/jiffies.h>
-#include <linux/list.h>
-#include <linux/module.h>
-#include <linux/netdevice.h>
-#include <linux/phy.h>
-#include <net/dsa.h>
-#include "mv88e6xxx.h"
-
-static const struct mv88e6xxx_switch_id mv88e6131_table[] = {
- { PORT_SWITCH_ID_6085, "Marvell 88E6085" },
- { PORT_SWITCH_ID_6095, "Marvell 88E6095/88E6095F" },
- { PORT_SWITCH_ID_6131, "Marvell 88E6131" },
- { PORT_SWITCH_ID_6131_B2, "Marvell 88E6131 (B2)" },
- { PORT_SWITCH_ID_6185, "Marvell 88E6185" },
-};
-
-static char *mv88e6131_probe(struct device *host_dev, int sw_addr)
-{
- return mv88e6xxx_lookup_name(host_dev, sw_addr, mv88e6131_table,
- ARRAY_SIZE(mv88e6131_table));
-}
-
-static int mv88e6131_setup_global(struct dsa_switch *ds)
-{
- u32 upstream_port = dsa_upstream_port(ds);
- int ret;
- u32 reg;
-
- ret = mv88e6xxx_setup_global(ds);
- if (ret)
- return ret;
-
- /* Enable the PHY polling unit, don't discard packets with
- * excessive collisions, use a weighted fair queueing scheme
- * to arbitrate between packet queues, set the maximum frame
- * size to 1632, and mask all interrupt sources.
- */
- REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL,
- GLOBAL_CONTROL_PPU_ENABLE | GLOBAL_CONTROL_MAX_FRAME_1632);
-
- /* Set the VLAN ethertype to 0x8100. */
- REG_WRITE(REG_GLOBAL, GLOBAL_CORE_TAG_TYPE, 0x8100);
-
- /* Disable ARP mirroring, and configure the upstream port as
- * the port to which ingress and egress monitor frames are to
- * be sent.
- */
- reg = upstream_port << GLOBAL_MONITOR_CONTROL_INGRESS_SHIFT |
- upstream_port << GLOBAL_MONITOR_CONTROL_EGRESS_SHIFT |
- GLOBAL_MONITOR_CONTROL_ARP_DISABLED;
- REG_WRITE(REG_GLOBAL, GLOBAL_MONITOR_CONTROL, reg);
-
- /* Disable cascade port functionality unless this device
- * is used in a cascade configuration, and set the switch's
- * DSA device number.
- */
- if (ds->dst->pd->nr_chips > 1)
- REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL_2,
- GLOBAL_CONTROL_2_MULTIPLE_CASCADE |
- (ds->index & 0x1f));
- else
- REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL_2,
- GLOBAL_CONTROL_2_NO_CASCADE |
- (ds->index & 0x1f));
-
- /* Force the priority of IGMP/MLD snoop frames and ARP frames
- * to the highest setting.
- */
- REG_WRITE(REG_GLOBAL2, GLOBAL2_PRIO_OVERRIDE,
- GLOBAL2_PRIO_OVERRIDE_FORCE_SNOOP |
- 7 << GLOBAL2_PRIO_OVERRIDE_SNOOP_SHIFT |
- GLOBAL2_PRIO_OVERRIDE_FORCE_ARP |
- 7 << GLOBAL2_PRIO_OVERRIDE_ARP_SHIFT);
-
- return 0;
-}
-
-static int mv88e6131_setup(struct dsa_switch *ds)
-{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
- int ret;
-
- ret = mv88e6xxx_setup_common(ds);
- if (ret < 0)
- return ret;
-
- mv88e6xxx_ppu_state_init(ds);
-
- switch (ps->id) {
- case PORT_SWITCH_ID_6085:
- case PORT_SWITCH_ID_6185:
- ps->num_ports = 10;
- break;
- case PORT_SWITCH_ID_6095:
- ps->num_ports = 11;
- break;
- case PORT_SWITCH_ID_6131:
- case PORT_SWITCH_ID_6131_B2:
- ps->num_ports = 8;
- break;
- default:
- return -ENODEV;
- }
-
- ret = mv88e6xxx_switch_reset(ds, false);
- if (ret < 0)
- return ret;
-
- ret = mv88e6131_setup_global(ds);
- if (ret < 0)
- return ret;
-
- return mv88e6xxx_setup_ports(ds);
-}
-
-static int mv88e6131_port_to_phy_addr(struct dsa_switch *ds, int port)
-{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
- if (port >= 0 && port < ps->num_ports)
- return port;
-
- return -EINVAL;
-}
-
-static int
-mv88e6131_phy_read(struct dsa_switch *ds, int port, int regnum)
-{
- int addr = mv88e6131_port_to_phy_addr(ds, port);
-
- if (addr < 0)
- return addr;
-
- return mv88e6xxx_phy_read_ppu(ds, addr, regnum);
-}
-
-static int
-mv88e6131_phy_write(struct dsa_switch *ds,
- int port, int regnum, u16 val)
-{
- int addr = mv88e6131_port_to_phy_addr(ds, port);
-
- if (addr < 0)
- return addr;
-
- return mv88e6xxx_phy_write_ppu(ds, addr, regnum, val);
-}
-
-struct dsa_switch_driver mv88e6131_switch_driver = {
- .tag_protocol = DSA_TAG_PROTO_DSA,
- .priv_size = sizeof(struct mv88e6xxx_priv_state),
- .probe = mv88e6131_probe,
- .setup = mv88e6131_setup,
- .set_addr = mv88e6xxx_set_addr_direct,
- .phy_read = mv88e6131_phy_read,
- .phy_write = mv88e6131_phy_write,
- .get_strings = mv88e6xxx_get_strings,
- .get_ethtool_stats = mv88e6xxx_get_ethtool_stats,
- .get_sset_count = mv88e6xxx_get_sset_count,
- .adjust_link = mv88e6xxx_adjust_link,
-};
-
-MODULE_ALIAS("platform:mv88e6085");
-MODULE_ALIAS("platform:mv88e6095");
-MODULE_ALIAS("platform:mv88e6095f");
-MODULE_ALIAS("platform:mv88e6131");
diff --git a/drivers/net/dsa/mv88e6171.c b/drivers/net/dsa/mv88e6171.c
deleted file mode 100644
index c0164b98fc08..000000000000
--- a/drivers/net/dsa/mv88e6171.c
+++ /dev/null
@@ -1,123 +0,0 @@
-/* net/dsa/mv88e6171.c - Marvell 88e6171 switch chip support
- * Copyright (c) 2008-2009 Marvell Semiconductor
- * Copyright (c) 2014 Claudio Leite <leitec@staticky.com>
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- */
-
-#include <linux/delay.h>
-#include <linux/jiffies.h>
-#include <linux/list.h>
-#include <linux/module.h>
-#include <linux/netdevice.h>
-#include <linux/phy.h>
-#include <net/dsa.h>
-#include "mv88e6xxx.h"
-
-static const struct mv88e6xxx_switch_id mv88e6171_table[] = {
- { PORT_SWITCH_ID_6171, "Marvell 88E6171" },
- { PORT_SWITCH_ID_6175, "Marvell 88E6175" },
- { PORT_SWITCH_ID_6350, "Marvell 88E6350" },
- { PORT_SWITCH_ID_6351, "Marvell 88E6351" },
-};
-
-static char *mv88e6171_probe(struct device *host_dev, int sw_addr)
-{
- return mv88e6xxx_lookup_name(host_dev, sw_addr, mv88e6171_table,
- ARRAY_SIZE(mv88e6171_table));
-}
-
-static int mv88e6171_setup_global(struct dsa_switch *ds)
-{
- u32 upstream_port = dsa_upstream_port(ds);
- int ret;
- u32 reg;
-
- ret = mv88e6xxx_setup_global(ds);
- if (ret)
- return ret;
-
- /* Discard packets with excessive collisions, mask all
- * interrupt sources, enable PPU.
- */
- REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL,
- GLOBAL_CONTROL_PPU_ENABLE | GLOBAL_CONTROL_DISCARD_EXCESS);
-
- /* Configure the upstream port, and configure the upstream
- * port as the port to which ingress and egress monitor frames
- * are to be sent.
- */
- reg = upstream_port << GLOBAL_MONITOR_CONTROL_INGRESS_SHIFT |
- upstream_port << GLOBAL_MONITOR_CONTROL_EGRESS_SHIFT |
- upstream_port << GLOBAL_MONITOR_CONTROL_ARP_SHIFT |
- upstream_port << GLOBAL_MONITOR_CONTROL_MIRROR_SHIFT;
- REG_WRITE(REG_GLOBAL, GLOBAL_MONITOR_CONTROL, reg);
-
- /* Disable remote management for now, and set the switch's
- * DSA device number.
- */
- REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL_2, ds->index & 0x1f);
-
- return 0;
-}
-
-static int mv88e6171_setup(struct dsa_switch *ds)
-{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
- int ret;
-
- ret = mv88e6xxx_setup_common(ds);
- if (ret < 0)
- return ret;
-
- ps->num_ports = 7;
-
- ret = mv88e6xxx_switch_reset(ds, true);
- if (ret < 0)
- return ret;
-
- ret = mv88e6171_setup_global(ds);
- if (ret < 0)
- return ret;
-
- return mv88e6xxx_setup_ports(ds);
-}
-
-struct dsa_switch_driver mv88e6171_switch_driver = {
- .tag_protocol = DSA_TAG_PROTO_EDSA,
- .priv_size = sizeof(struct mv88e6xxx_priv_state),
- .probe = mv88e6171_probe,
- .setup = mv88e6171_setup,
- .set_addr = mv88e6xxx_set_addr_indirect,
- .phy_read = mv88e6xxx_phy_read_indirect,
- .phy_write = mv88e6xxx_phy_write_indirect,
- .get_strings = mv88e6xxx_get_strings,
- .get_ethtool_stats = mv88e6xxx_get_ethtool_stats,
- .get_sset_count = mv88e6xxx_get_sset_count,
- .adjust_link = mv88e6xxx_adjust_link,
-#ifdef CONFIG_NET_DSA_HWMON
- .get_temp = mv88e6xxx_get_temp,
-#endif
- .get_regs_len = mv88e6xxx_get_regs_len,
- .get_regs = mv88e6xxx_get_regs,
- .port_bridge_join = mv88e6xxx_port_bridge_join,
- .port_bridge_leave = mv88e6xxx_port_bridge_leave,
- .port_stp_update = mv88e6xxx_port_stp_update,
- .port_vlan_filtering = mv88e6xxx_port_vlan_filtering,
- .port_vlan_prepare = mv88e6xxx_port_vlan_prepare,
- .port_vlan_add = mv88e6xxx_port_vlan_add,
- .port_vlan_del = mv88e6xxx_port_vlan_del,
- .port_vlan_dump = mv88e6xxx_port_vlan_dump,
- .port_fdb_prepare = mv88e6xxx_port_fdb_prepare,
- .port_fdb_add = mv88e6xxx_port_fdb_add,
- .port_fdb_del = mv88e6xxx_port_fdb_del,
- .port_fdb_dump = mv88e6xxx_port_fdb_dump,
-};
-
-MODULE_ALIAS("platform:mv88e6171");
-MODULE_ALIAS("platform:mv88e6175");
-MODULE_ALIAS("platform:mv88e6350");
-MODULE_ALIAS("platform:mv88e6351");
diff --git a/drivers/net/dsa/mv88e6352.c b/drivers/net/dsa/mv88e6352.c
deleted file mode 100644
index 5f528abc8af1..000000000000
--- a/drivers/net/dsa/mv88e6352.c
+++ /dev/null
@@ -1,345 +0,0 @@
-/*
- * net/dsa/mv88e6352.c - Marvell 88e6352 switch chip support
- *
- * Copyright (c) 2014 Guenter Roeck
- *
- * Derived from mv88e6123_61_65.c
- * Copyright (c) 2008-2009 Marvell Semiconductor
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- */
-
-#include <linux/delay.h>
-#include <linux/jiffies.h>
-#include <linux/list.h>
-#include <linux/module.h>
-#include <linux/netdevice.h>
-#include <linux/platform_device.h>
-#include <linux/phy.h>
-#include <net/dsa.h>
-#include "mv88e6xxx.h"
-
-static const struct mv88e6xxx_switch_id mv88e6352_table[] = {
- { PORT_SWITCH_ID_6172, "Marvell 88E6172" },
- { PORT_SWITCH_ID_6176, "Marvell 88E6176" },
- { PORT_SWITCH_ID_6240, "Marvell 88E6240" },
- { PORT_SWITCH_ID_6320, "Marvell 88E6320" },
- { PORT_SWITCH_ID_6320_A1, "Marvell 88E6320 (A1)" },
- { PORT_SWITCH_ID_6320_A2, "Marvell 88e6320 (A2)" },
- { PORT_SWITCH_ID_6321, "Marvell 88E6321" },
- { PORT_SWITCH_ID_6321_A1, "Marvell 88E6321 (A1)" },
- { PORT_SWITCH_ID_6321_A2, "Marvell 88e6321 (A2)" },
- { PORT_SWITCH_ID_6352, "Marvell 88E6352" },
- { PORT_SWITCH_ID_6352_A0, "Marvell 88E6352 (A0)" },
- { PORT_SWITCH_ID_6352_A1, "Marvell 88E6352 (A1)" },
-};
-
-static char *mv88e6352_probe(struct device *host_dev, int sw_addr)
-{
- return mv88e6xxx_lookup_name(host_dev, sw_addr, mv88e6352_table,
- ARRAY_SIZE(mv88e6352_table));
-}
-
-static int mv88e6352_setup_global(struct dsa_switch *ds)
-{
- u32 upstream_port = dsa_upstream_port(ds);
- int ret;
- u32 reg;
-
- ret = mv88e6xxx_setup_global(ds);
- if (ret)
- return ret;
-
- /* Discard packets with excessive collisions,
- * mask all interrupt sources, enable PPU (bit 14, undocumented).
- */
- REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL,
- GLOBAL_CONTROL_PPU_ENABLE | GLOBAL_CONTROL_DISCARD_EXCESS);
-
- /* Configure the upstream port, and configure the upstream
- * port as the port to which ingress and egress monitor frames
- * are to be sent.
- */
- reg = upstream_port << GLOBAL_MONITOR_CONTROL_INGRESS_SHIFT |
- upstream_port << GLOBAL_MONITOR_CONTROL_EGRESS_SHIFT |
- upstream_port << GLOBAL_MONITOR_CONTROL_ARP_SHIFT;
- REG_WRITE(REG_GLOBAL, GLOBAL_MONITOR_CONTROL, reg);
-
- /* Disable remote management for now, and set the switch's
- * DSA device number.
- */
- REG_WRITE(REG_GLOBAL, 0x1c, ds->index & 0x1f);
-
- return 0;
-}
-
-static int mv88e6352_setup(struct dsa_switch *ds)
-{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
- int ret;
-
- ret = mv88e6xxx_setup_common(ds);
- if (ret < 0)
- return ret;
-
- ps->num_ports = 7;
-
- mutex_init(&ps->eeprom_mutex);
-
- ret = mv88e6xxx_switch_reset(ds, true);
- if (ret < 0)
- return ret;
-
- ret = mv88e6352_setup_global(ds);
- if (ret < 0)
- return ret;
-
- return mv88e6xxx_setup_ports(ds);
-}
-
-static int mv88e6352_read_eeprom_word(struct dsa_switch *ds, int addr)
-{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
- int ret;
-
- mutex_lock(&ps->eeprom_mutex);
-
- ret = mv88e6xxx_reg_write(ds, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
- GLOBAL2_EEPROM_OP_READ |
- (addr & GLOBAL2_EEPROM_OP_ADDR_MASK));
- if (ret < 0)
- goto error;
-
- ret = mv88e6xxx_eeprom_busy_wait(ds);
- if (ret < 0)
- goto error;
-
- ret = mv88e6xxx_reg_read(ds, REG_GLOBAL2, GLOBAL2_EEPROM_DATA);
-error:
- mutex_unlock(&ps->eeprom_mutex);
- return ret;
-}
-
-static int mv88e6352_get_eeprom(struct dsa_switch *ds,
- struct ethtool_eeprom *eeprom, u8 *data)
-{
- int offset;
- int len;
- int ret;
-
- offset = eeprom->offset;
- len = eeprom->len;
- eeprom->len = 0;
-
- eeprom->magic = 0xc3ec4951;
-
- ret = mv88e6xxx_eeprom_load_wait(ds);
- if (ret < 0)
- return ret;
-
- if (offset & 1) {
- int word;
-
- word = mv88e6352_read_eeprom_word(ds, offset >> 1);
- if (word < 0)
- return word;
-
- *data++ = (word >> 8) & 0xff;
-
- offset++;
- len--;
- eeprom->len++;
- }
-
- while (len >= 2) {
- int word;
-
- word = mv88e6352_read_eeprom_word(ds, offset >> 1);
- if (word < 0)
- return word;
-
- *data++ = word & 0xff;
- *data++ = (word >> 8) & 0xff;
-
- offset += 2;
- len -= 2;
- eeprom->len += 2;
- }
-
- if (len) {
- int word;
-
- word = mv88e6352_read_eeprom_word(ds, offset >> 1);
- if (word < 0)
- return word;
-
- *data++ = word & 0xff;
-
- offset++;
- len--;
- eeprom->len++;
- }
-
- return 0;
-}
-
-static int mv88e6352_eeprom_is_readonly(struct dsa_switch *ds)
-{
- int ret;
-
- ret = mv88e6xxx_reg_read(ds, REG_GLOBAL2, GLOBAL2_EEPROM_OP);
- if (ret < 0)
- return ret;
-
- if (!(ret & GLOBAL2_EEPROM_OP_WRITE_EN))
- return -EROFS;
-
- return 0;
-}
-
-static int mv88e6352_write_eeprom_word(struct dsa_switch *ds, int addr,
- u16 data)
-{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
- int ret;
-
- mutex_lock(&ps->eeprom_mutex);
-
- ret = mv88e6xxx_reg_write(ds, REG_GLOBAL2, GLOBAL2_EEPROM_DATA, data);
- if (ret < 0)
- goto error;
-
- ret = mv88e6xxx_reg_write(ds, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
- GLOBAL2_EEPROM_OP_WRITE |
- (addr & GLOBAL2_EEPROM_OP_ADDR_MASK));
- if (ret < 0)
- goto error;
-
- ret = mv88e6xxx_eeprom_busy_wait(ds);
-error:
- mutex_unlock(&ps->eeprom_mutex);
- return ret;
-}
-
-static int mv88e6352_set_eeprom(struct dsa_switch *ds,
- struct ethtool_eeprom *eeprom, u8 *data)
-{
- int offset;
- int ret;
- int len;
-
- if (eeprom->magic != 0xc3ec4951)
- return -EINVAL;
-
- ret = mv88e6352_eeprom_is_readonly(ds);
- if (ret)
- return ret;
-
- offset = eeprom->offset;
- len = eeprom->len;
- eeprom->len = 0;
-
- ret = mv88e6xxx_eeprom_load_wait(ds);
- if (ret < 0)
- return ret;
-
- if (offset & 1) {
- int word;
-
- word = mv88e6352_read_eeprom_word(ds, offset >> 1);
- if (word < 0)
- return word;
-
- word = (*data++ << 8) | (word & 0xff);
-
- ret = mv88e6352_write_eeprom_word(ds, offset >> 1, word);
- if (ret < 0)
- return ret;
-
- offset++;
- len--;
- eeprom->len++;
- }
-
- while (len >= 2) {
- int word;
-
- word = *data++;
- word |= *data++ << 8;
-
- ret = mv88e6352_write_eeprom_word(ds, offset >> 1, word);
- if (ret < 0)
- return ret;
-
- offset += 2;
- len -= 2;
- eeprom->len += 2;
- }
-
- if (len) {
- int word;
-
- word = mv88e6352_read_eeprom_word(ds, offset >> 1);
- if (word < 0)
- return word;
-
- word = (word & 0xff00) | *data++;
-
- ret = mv88e6352_write_eeprom_word(ds, offset >> 1, word);
- if (ret < 0)
- return ret;
-
- offset++;
- len--;
- eeprom->len++;
- }
-
- return 0;
-}
-
-struct dsa_switch_driver mv88e6352_switch_driver = {
- .tag_protocol = DSA_TAG_PROTO_EDSA,
- .priv_size = sizeof(struct mv88e6xxx_priv_state),
- .probe = mv88e6352_probe,
- .setup = mv88e6352_setup,
- .set_addr = mv88e6xxx_set_addr_indirect,
- .phy_read = mv88e6xxx_phy_read_indirect,
- .phy_write = mv88e6xxx_phy_write_indirect,
- .get_strings = mv88e6xxx_get_strings,
- .get_ethtool_stats = mv88e6xxx_get_ethtool_stats,
- .get_sset_count = mv88e6xxx_get_sset_count,
- .adjust_link = mv88e6xxx_adjust_link,
- .set_eee = mv88e6xxx_set_eee,
- .get_eee = mv88e6xxx_get_eee,
-#ifdef CONFIG_NET_DSA_HWMON
- .get_temp = mv88e6xxx_get_temp,
- .get_temp_limit = mv88e6xxx_get_temp_limit,
- .set_temp_limit = mv88e6xxx_set_temp_limit,
- .get_temp_alarm = mv88e6xxx_get_temp_alarm,
-#endif
- .get_eeprom = mv88e6352_get_eeprom,
- .set_eeprom = mv88e6352_set_eeprom,
- .get_regs_len = mv88e6xxx_get_regs_len,
- .get_regs = mv88e6xxx_get_regs,
- .port_bridge_join = mv88e6xxx_port_bridge_join,
- .port_bridge_leave = mv88e6xxx_port_bridge_leave,
- .port_stp_update = mv88e6xxx_port_stp_update,
- .port_vlan_filtering = mv88e6xxx_port_vlan_filtering,
- .port_vlan_prepare = mv88e6xxx_port_vlan_prepare,
- .port_vlan_add = mv88e6xxx_port_vlan_add,
- .port_vlan_del = mv88e6xxx_port_vlan_del,
- .port_vlan_dump = mv88e6xxx_port_vlan_dump,
- .port_fdb_prepare = mv88e6xxx_port_fdb_prepare,
- .port_fdb_add = mv88e6xxx_port_fdb_add,
- .port_fdb_del = mv88e6xxx_port_fdb_del,
- .port_fdb_dump = mv88e6xxx_port_fdb_dump,
-};
-
-MODULE_ALIAS("platform:mv88e6172");
-MODULE_ALIAS("platform:mv88e6176");
-MODULE_ALIAS("platform:mv88e6320");
-MODULE_ALIAS("platform:mv88e6321");
-MODULE_ALIAS("platform:mv88e6352");
diff --git a/drivers/net/dsa/mv88e6xxx.c b/drivers/net/dsa/mv88e6xxx.c
index 5e572b3510b9..ba9dfc9421ef 100644
--- a/drivers/net/dsa/mv88e6xxx.c
+++ b/drivers/net/dsa/mv88e6xxx.c
@@ -5,6 +5,8 @@
* Copyright (c) 2015 CMC Electronics, Inc.
* Added support for VLAN Table Unit operations
*
+ * Copyright (c) 2016 Andrew Lunn <andrew@lunn.ch>
+ *
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
* the Free Software Foundation; either version 2 of the License, or
@@ -17,6 +19,7 @@
#include <linux/if_bridge.h>
#include <linux/jiffies.h>
#include <linux/list.h>
+#include <linux/mdio.h>
#include <linux/module.h>
#include <linux/netdevice.h>
#include <linux/gpio/consumer.h>
@@ -25,12 +28,10 @@
#include <net/switchdev.h>
#include "mv88e6xxx.h"
-static void assert_smi_lock(struct dsa_switch *ds)
+static void assert_smi_lock(struct mv88e6xxx_priv_state *ps)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
if (unlikely(!mutex_is_locked(&ps->smi_mutex))) {
- dev_err(ds->master_dev, "SMI lock not held!\n");
+ dev_err(ps->dev, "SMI lock not held!\n");
dump_stack();
}
}
@@ -92,33 +93,29 @@ static int __mv88e6xxx_reg_read(struct mii_bus *bus, int sw_addr, int addr,
return ret & 0xffff;
}
-static int _mv88e6xxx_reg_read(struct dsa_switch *ds, int addr, int reg)
+static int _mv88e6xxx_reg_read(struct mv88e6xxx_priv_state *ps,
+ int addr, int reg)
{
- struct mii_bus *bus = dsa_host_dev_to_mii_bus(ds->master_dev);
int ret;
- assert_smi_lock(ds);
-
- if (bus == NULL)
- return -EINVAL;
+ assert_smi_lock(ps);
- ret = __mv88e6xxx_reg_read(bus, ds->pd->sw_addr, addr, reg);
+ ret = __mv88e6xxx_reg_read(ps->bus, ps->sw_addr, addr, reg);
if (ret < 0)
return ret;
- dev_dbg(ds->master_dev, "<- addr: 0x%.2x reg: 0x%.2x val: 0x%.4x\n",
+ dev_dbg(ps->dev, "<- addr: 0x%.2x reg: 0x%.2x val: 0x%.4x\n",
addr, reg, ret);
return ret;
}
-int mv88e6xxx_reg_read(struct dsa_switch *ds, int addr, int reg)
+int mv88e6xxx_reg_read(struct mv88e6xxx_priv_state *ps, int addr, int reg)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
int ret;
mutex_lock(&ps->smi_mutex);
- ret = _mv88e6xxx_reg_read(ds, addr, reg);
+ ret = _mv88e6xxx_reg_read(ps, addr, reg);
mutex_unlock(&ps->smi_mutex);
return ret;
@@ -156,58 +153,71 @@ static int __mv88e6xxx_reg_write(struct mii_bus *bus, int sw_addr, int addr,
return 0;
}
-static int _mv88e6xxx_reg_write(struct dsa_switch *ds, int addr, int reg,
- u16 val)
+static int _mv88e6xxx_reg_write(struct mv88e6xxx_priv_state *ps, int addr,
+ int reg, u16 val)
{
- struct mii_bus *bus = dsa_host_dev_to_mii_bus(ds->master_dev);
-
- assert_smi_lock(ds);
+ assert_smi_lock(ps);
- if (bus == NULL)
- return -EINVAL;
-
- dev_dbg(ds->master_dev, "-> addr: 0x%.2x reg: 0x%.2x val: 0x%.4x\n",
+ dev_dbg(ps->dev, "-> addr: 0x%.2x reg: 0x%.2x val: 0x%.4x\n",
addr, reg, val);
- return __mv88e6xxx_reg_write(bus, ds->pd->sw_addr, addr, reg, val);
+ return __mv88e6xxx_reg_write(ps->bus, ps->sw_addr, addr, reg, val);
}
-int mv88e6xxx_reg_write(struct dsa_switch *ds, int addr, int reg, u16 val)
+int mv88e6xxx_reg_write(struct mv88e6xxx_priv_state *ps, int addr,
+ int reg, u16 val)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
int ret;
mutex_lock(&ps->smi_mutex);
- ret = _mv88e6xxx_reg_write(ds, addr, reg, val);
+ ret = _mv88e6xxx_reg_write(ps, addr, reg, val);
mutex_unlock(&ps->smi_mutex);
return ret;
}
-int mv88e6xxx_set_addr_direct(struct dsa_switch *ds, u8 *addr)
+static int mv88e6xxx_set_addr_direct(struct dsa_switch *ds, u8 *addr)
{
- REG_WRITE(REG_GLOBAL, GLOBAL_MAC_01, (addr[0] << 8) | addr[1]);
- REG_WRITE(REG_GLOBAL, GLOBAL_MAC_23, (addr[2] << 8) | addr[3]);
- REG_WRITE(REG_GLOBAL, GLOBAL_MAC_45, (addr[4] << 8) | addr[5]);
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+ int err;
- return 0;
+ err = mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_MAC_01,
+ (addr[0] << 8) | addr[1]);
+ if (err)
+ return err;
+
+ err = mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_MAC_23,
+ (addr[2] << 8) | addr[3]);
+ if (err)
+ return err;
+
+ return mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_MAC_45,
+ (addr[4] << 8) | addr[5]);
}
-int mv88e6xxx_set_addr_indirect(struct dsa_switch *ds, u8 *addr)
+static int mv88e6xxx_set_addr_indirect(struct dsa_switch *ds, u8 *addr)
{
- int i;
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
int ret;
+ int i;
for (i = 0; i < 6; i++) {
int j;
/* Write the MAC address byte. */
- REG_WRITE(REG_GLOBAL2, GLOBAL2_SWITCH_MAC,
- GLOBAL2_SWITCH_MAC_BUSY | (i << 8) | addr[i]);
+ ret = mv88e6xxx_reg_write(ps, REG_GLOBAL2, GLOBAL2_SWITCH_MAC,
+ GLOBAL2_SWITCH_MAC_BUSY |
+ (i << 8) | addr[i]);
+ if (ret)
+ return ret;
/* Wait for the write to complete. */
for (j = 0; j < 16; j++) {
- ret = REG_READ(REG_GLOBAL2, GLOBAL2_SWITCH_MAC);
+ ret = mv88e6xxx_reg_read(ps, REG_GLOBAL2,
+ GLOBAL2_SWITCH_MAC);
+ if (ret < 0)
+ return ret;
+
if ((ret & GLOBAL2_SWITCH_MAC_BUSY) == 0)
break;
}
@@ -218,34 +228,52 @@ int mv88e6xxx_set_addr_indirect(struct dsa_switch *ds, u8 *addr)
return 0;
}
-static int _mv88e6xxx_phy_read(struct dsa_switch *ds, int addr, int regnum)
+int mv88e6xxx_set_addr(struct dsa_switch *ds, u8 *addr)
+{
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+
+ if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_SWITCH_MAC))
+ return mv88e6xxx_set_addr_indirect(ds, addr);
+ else
+ return mv88e6xxx_set_addr_direct(ds, addr);
+}
+
+static int _mv88e6xxx_phy_read(struct mv88e6xxx_priv_state *ps, int addr,
+ int regnum)
{
if (addr >= 0)
- return _mv88e6xxx_reg_read(ds, addr, regnum);
+ return _mv88e6xxx_reg_read(ps, addr, regnum);
return 0xffff;
}
-static int _mv88e6xxx_phy_write(struct dsa_switch *ds, int addr, int regnum,
- u16 val)
+static int _mv88e6xxx_phy_write(struct mv88e6xxx_priv_state *ps, int addr,
+ int regnum, u16 val)
{
if (addr >= 0)
- return _mv88e6xxx_reg_write(ds, addr, regnum, val);
+ return _mv88e6xxx_reg_write(ps, addr, regnum, val);
return 0;
}
-#ifdef CONFIG_NET_DSA_MV88E6XXX_NEED_PPU
-static int mv88e6xxx_ppu_disable(struct dsa_switch *ds)
+static int mv88e6xxx_ppu_disable(struct mv88e6xxx_priv_state *ps)
{
int ret;
unsigned long timeout;
- ret = REG_READ(REG_GLOBAL, GLOBAL_CONTROL);
- REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL,
- ret & ~GLOBAL_CONTROL_PPU_ENABLE);
+ ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_CONTROL);
+ if (ret < 0)
+ return ret;
+
+ ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_CONTROL,
+ ret & ~GLOBAL_CONTROL_PPU_ENABLE);
+ if (ret)
+ return ret;
timeout = jiffies + 1 * HZ;
while (time_before(jiffies, timeout)) {
- ret = REG_READ(REG_GLOBAL, GLOBAL_STATUS);
+ ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_STATUS);
+ if (ret < 0)
+ return ret;
+
usleep_range(1000, 2000);
if ((ret & GLOBAL_STATUS_PPU_MASK) !=
GLOBAL_STATUS_PPU_POLLING)
@@ -255,17 +283,26 @@ static int mv88e6xxx_ppu_disable(struct dsa_switch *ds)
return -ETIMEDOUT;
}
-static int mv88e6xxx_ppu_enable(struct dsa_switch *ds)
+static int mv88e6xxx_ppu_enable(struct mv88e6xxx_priv_state *ps)
{
- int ret;
+ int ret, err;
unsigned long timeout;
- ret = REG_READ(REG_GLOBAL, GLOBAL_CONTROL);
- REG_WRITE(REG_GLOBAL, GLOBAL_CONTROL, ret | GLOBAL_CONTROL_PPU_ENABLE);
+ ret = mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_CONTROL);
+ if (ret < 0)
+ return ret;
+
+ err = mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_CONTROL,
+ ret | GLOBAL_CONTROL_PPU_ENABLE);
+ if (err)
+ return err;
timeout = jiffies + 1 * HZ;
while (time_before(jiffies, timeout)) {
- ret = REG_READ(REG_GLOBAL, GLOBAL_STATUS);
+ ret = mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_STATUS);
+ if (ret < 0)
+ return ret;
+
usleep_range(1000, 2000);
if ((ret & GLOBAL_STATUS_PPU_MASK) ==
GLOBAL_STATUS_PPU_POLLING)
@@ -281,9 +318,7 @@ static void mv88e6xxx_ppu_reenable_work(struct work_struct *ugly)
ps = container_of(ugly, struct mv88e6xxx_priv_state, ppu_work);
if (mutex_trylock(&ps->ppu_mutex)) {
- struct dsa_switch *ds = ((struct dsa_switch *)ps) - 1;
-
- if (mv88e6xxx_ppu_enable(ds) == 0)
+ if (mv88e6xxx_ppu_enable(ps) == 0)
ps->ppu_disabled = 0;
mutex_unlock(&ps->ppu_mutex);
}
@@ -296,9 +331,8 @@ static void mv88e6xxx_ppu_reenable_timer(unsigned long _ps)
schedule_work(&ps->ppu_work);
}
-static int mv88e6xxx_ppu_access_get(struct dsa_switch *ds)
+static int mv88e6xxx_ppu_access_get(struct mv88e6xxx_priv_state *ps)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
int ret;
mutex_lock(&ps->ppu_mutex);
@@ -309,7 +343,7 @@ static int mv88e6xxx_ppu_access_get(struct dsa_switch *ds)
* it.
*/
if (!ps->ppu_disabled) {
- ret = mv88e6xxx_ppu_disable(ds);
+ ret = mv88e6xxx_ppu_disable(ps);
if (ret < 0) {
mutex_unlock(&ps->ppu_mutex);
return ret;
@@ -323,19 +357,15 @@ static int mv88e6xxx_ppu_access_get(struct dsa_switch *ds)
return ret;
}
-static void mv88e6xxx_ppu_access_put(struct dsa_switch *ds)
+static void mv88e6xxx_ppu_access_put(struct mv88e6xxx_priv_state *ps)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
/* Schedule a timer to re-enable the PHY polling unit. */
mod_timer(&ps->ppu_timer, jiffies + msecs_to_jiffies(10));
mutex_unlock(&ps->ppu_mutex);
}
-void mv88e6xxx_ppu_state_init(struct dsa_switch *ds)
+void mv88e6xxx_ppu_state_init(struct mv88e6xxx_priv_state *ps)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
mutex_init(&ps->ppu_mutex);
INIT_WORK(&ps->ppu_work, mv88e6xxx_ppu_reenable_work);
init_timer(&ps->ppu_timer);
@@ -343,142 +373,86 @@ void mv88e6xxx_ppu_state_init(struct dsa_switch *ds)
ps->ppu_timer.function = mv88e6xxx_ppu_reenable_timer;
}
-int mv88e6xxx_phy_read_ppu(struct dsa_switch *ds, int addr, int regnum)
+static int mv88e6xxx_phy_read_ppu(struct mv88e6xxx_priv_state *ps, int addr,
+ int regnum)
{
int ret;
- ret = mv88e6xxx_ppu_access_get(ds);
+ ret = mv88e6xxx_ppu_access_get(ps);
if (ret >= 0) {
- ret = mv88e6xxx_reg_read(ds, addr, regnum);
- mv88e6xxx_ppu_access_put(ds);
+ ret = _mv88e6xxx_reg_read(ps, addr, regnum);
+ mv88e6xxx_ppu_access_put(ps);
}
return ret;
}
-int mv88e6xxx_phy_write_ppu(struct dsa_switch *ds, int addr,
- int regnum, u16 val)
+static int mv88e6xxx_phy_write_ppu(struct mv88e6xxx_priv_state *ps, int addr,
+ int regnum, u16 val)
{
int ret;
- ret = mv88e6xxx_ppu_access_get(ds);
+ ret = mv88e6xxx_ppu_access_get(ps);
if (ret >= 0) {
- ret = mv88e6xxx_reg_write(ds, addr, regnum, val);
- mv88e6xxx_ppu_access_put(ds);
+ ret = _mv88e6xxx_reg_write(ps, addr, regnum, val);
+ mv88e6xxx_ppu_access_put(ps);
}
return ret;
}
-#endif
-static bool mv88e6xxx_6065_family(struct dsa_switch *ds)
+static bool mv88e6xxx_6065_family(struct mv88e6xxx_priv_state *ps)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
- switch (ps->id) {
- case PORT_SWITCH_ID_6031:
- case PORT_SWITCH_ID_6061:
- case PORT_SWITCH_ID_6035:
- case PORT_SWITCH_ID_6065:
- return true;
- }
- return false;
+ return ps->info->family == MV88E6XXX_FAMILY_6065;
}
-static bool mv88e6xxx_6095_family(struct dsa_switch *ds)
+static bool mv88e6xxx_6095_family(struct mv88e6xxx_priv_state *ps)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
- switch (ps->id) {
- case PORT_SWITCH_ID_6092:
- case PORT_SWITCH_ID_6095:
- return true;
- }
- return false;
+ return ps->info->family == MV88E6XXX_FAMILY_6095;
}
-static bool mv88e6xxx_6097_family(struct dsa_switch *ds)
+static bool mv88e6xxx_6097_family(struct mv88e6xxx_priv_state *ps)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
- switch (ps->id) {
- case PORT_SWITCH_ID_6046:
- case PORT_SWITCH_ID_6085:
- case PORT_SWITCH_ID_6096:
- case PORT_SWITCH_ID_6097:
- return true;
- }
- return false;
+ return ps->info->family == MV88E6XXX_FAMILY_6097;
}
-static bool mv88e6xxx_6165_family(struct dsa_switch *ds)
+static bool mv88e6xxx_6165_family(struct mv88e6xxx_priv_state *ps)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
- switch (ps->id) {
- case PORT_SWITCH_ID_6123:
- case PORT_SWITCH_ID_6161:
- case PORT_SWITCH_ID_6165:
- return true;
- }
- return false;
+ return ps->info->family == MV88E6XXX_FAMILY_6165;
}
-static bool mv88e6xxx_6185_family(struct dsa_switch *ds)
+static bool mv88e6xxx_6185_family(struct mv88e6xxx_priv_state *ps)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
- switch (ps->id) {
- case PORT_SWITCH_ID_6121:
- case PORT_SWITCH_ID_6122:
- case PORT_SWITCH_ID_6152:
- case PORT_SWITCH_ID_6155:
- case PORT_SWITCH_ID_6182:
- case PORT_SWITCH_ID_6185:
- case PORT_SWITCH_ID_6108:
- case PORT_SWITCH_ID_6131:
- return true;
- }
- return false;
+ return ps->info->family == MV88E6XXX_FAMILY_6185;
}
-static bool mv88e6xxx_6320_family(struct dsa_switch *ds)
+static bool mv88e6xxx_6320_family(struct mv88e6xxx_priv_state *ps)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
- switch (ps->id) {
- case PORT_SWITCH_ID_6320:
- case PORT_SWITCH_ID_6321:
- return true;
- }
- return false;
+ return ps->info->family == MV88E6XXX_FAMILY_6320;
}
-static bool mv88e6xxx_6351_family(struct dsa_switch *ds)
+static bool mv88e6xxx_6351_family(struct mv88e6xxx_priv_state *ps)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+ return ps->info->family == MV88E6XXX_FAMILY_6351;
+}
- switch (ps->id) {
- case PORT_SWITCH_ID_6171:
- case PORT_SWITCH_ID_6175:
- case PORT_SWITCH_ID_6350:
- case PORT_SWITCH_ID_6351:
- return true;
- }
- return false;
+static bool mv88e6xxx_6352_family(struct mv88e6xxx_priv_state *ps)
+{
+ return ps->info->family == MV88E6XXX_FAMILY_6352;
}
-static bool mv88e6xxx_6352_family(struct dsa_switch *ds)
+static unsigned int mv88e6xxx_num_databases(struct mv88e6xxx_priv_state *ps)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+ return ps->info->num_databases;
+}
- switch (ps->id) {
- case PORT_SWITCH_ID_6172:
- case PORT_SWITCH_ID_6176:
- case PORT_SWITCH_ID_6240:
- case PORT_SWITCH_ID_6352:
+static bool mv88e6xxx_has_fid_reg(struct mv88e6xxx_priv_state *ps)
+{
+ /* Does the device have dedicated FID registers for ATU and VTU ops? */
+ if (mv88e6xxx_6097_family(ps) || mv88e6xxx_6165_family(ps) ||
+ mv88e6xxx_6351_family(ps) || mv88e6xxx_6352_family(ps))
return true;
- }
+
return false;
}
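/*
 * Editor's note -- sketch of the per-chip info table this refactor moves
 * to: the switch statements over ps->id become one-line lookups on a
 * static descriptor. The patch itself dereferences info->family,
 * info->num_databases and info->num_ports; the struct name and the other
 * fields shown here are assumptions for illustration.
 */
static const struct mv88e6xxx_info example_info_sketch = {
        .family         = MV88E6XXX_FAMILY_6352,
        .name           = "Marvell 88E6352",
        .num_databases  = 4096,
        .num_ports      = 7,
};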
@@ -486,8 +460,8 @@ static bool mv88e6xxx_6352_family(struct dsa_switch *ds)
* phy. However, in the case of a fixed link phy, we force the port
* settings from the fixed link settings.
*/
-void mv88e6xxx_adjust_link(struct dsa_switch *ds, int port,
- struct phy_device *phydev)
+static void mv88e6xxx_adjust_link(struct dsa_switch *ds, int port,
+ struct phy_device *phydev)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
u32 reg;
@@ -498,7 +472,7 @@ void mv88e6xxx_adjust_link(struct dsa_switch *ds, int port,
mutex_lock(&ps->smi_mutex);
- ret = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_PCS_CTRL);
+ ret = _mv88e6xxx_reg_read(ps, REG_PORT(port), PORT_PCS_CTRL);
if (ret < 0)
goto out;
@@ -512,7 +486,7 @@ void mv88e6xxx_adjust_link(struct dsa_switch *ds, int port,
if (phydev->link)
reg |= PORT_PCS_CTRL_LINK_UP;
- if (mv88e6xxx_6065_family(ds) && phydev->speed > SPEED_100)
+ if (mv88e6xxx_6065_family(ps) && phydev->speed > SPEED_100)
goto out;
switch (phydev->speed) {
@@ -534,8 +508,8 @@ void mv88e6xxx_adjust_link(struct dsa_switch *ds, int port,
if (phydev->duplex == DUPLEX_FULL)
reg |= PORT_PCS_CTRL_DUPLEX_FULL;
- if ((mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds)) &&
- (port >= ps->num_ports - 2)) {
+ if ((mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps)) &&
+ (port >= ps->info->num_ports - 2)) {
if (phydev->interface == PHY_INTERFACE_MODE_RGMII_RXID)
reg |= PORT_PCS_CTRL_RGMII_DELAY_RXCLK;
if (phydev->interface == PHY_INTERFACE_MODE_RGMII_TXID)
@@ -544,19 +518,19 @@ void mv88e6xxx_adjust_link(struct dsa_switch *ds, int port,
reg |= (PORT_PCS_CTRL_RGMII_DELAY_RXCLK |
PORT_PCS_CTRL_RGMII_DELAY_TXCLK);
}
- _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_PCS_CTRL, reg);
+ _mv88e6xxx_reg_write(ps, REG_PORT(port), PORT_PCS_CTRL, reg);
out:
mutex_unlock(&ps->smi_mutex);
}
-static int _mv88e6xxx_stats_wait(struct dsa_switch *ds)
+static int _mv88e6xxx_stats_wait(struct mv88e6xxx_priv_state *ps)
{
int ret;
int i;
for (i = 0; i < 10; i++) {
- ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL, GLOBAL_STATS_OP);
+ ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_STATS_OP);
if ((ret & GLOBAL_STATS_OP_BUSY) == 0)
return 0;
}
@@ -564,52 +538,54 @@ static int _mv88e6xxx_stats_wait(struct dsa_switch *ds)
return -ETIMEDOUT;
}
-static int _mv88e6xxx_stats_snapshot(struct dsa_switch *ds, int port)
+static int _mv88e6xxx_stats_snapshot(struct mv88e6xxx_priv_state *ps,
+ int port)
{
int ret;
- if (mv88e6xxx_6320_family(ds) || mv88e6xxx_6352_family(ds))
+ if (mv88e6xxx_6320_family(ps) || mv88e6xxx_6352_family(ps))
port = (port + 1) << 5;
/* Snapshot the hardware statistics counters for this port. */
- ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_STATS_OP,
+ ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_STATS_OP,
GLOBAL_STATS_OP_CAPTURE_PORT |
GLOBAL_STATS_OP_HIST_RX_TX | port);
if (ret < 0)
return ret;
/* Wait for the snapshotting to complete. */
- ret = _mv88e6xxx_stats_wait(ds);
+ ret = _mv88e6xxx_stats_wait(ps);
if (ret < 0)
return ret;
return 0;
}
-static void _mv88e6xxx_stats_read(struct dsa_switch *ds, int stat, u32 *val)
+static void _mv88e6xxx_stats_read(struct mv88e6xxx_priv_state *ps,
+ int stat, u32 *val)
{
u32 _val;
int ret;
*val = 0;
- ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_STATS_OP,
+ ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_STATS_OP,
GLOBAL_STATS_OP_READ_CAPTURED |
GLOBAL_STATS_OP_HIST_RX_TX | stat);
if (ret < 0)
return;
- ret = _mv88e6xxx_stats_wait(ds);
+ ret = _mv88e6xxx_stats_wait(ps);
if (ret < 0)
return;
- ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL, GLOBAL_STATS_COUNTER_32);
+ ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_STATS_COUNTER_32);
if (ret < 0)
return;
_val = ret << 16;
- ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL, GLOBAL_STATS_COUNTER_01);
+ ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_STATS_COUNTER_01);
if (ret < 0)
return;
@@ -678,26 +654,26 @@ static struct mv88e6xxx_hw_stat mv88e6xxx_hw_stats[] = {
{ "out_management", 4, 0x1f | GLOBAL_STATS_OP_BANK_1, BANK1, },
};
-static bool mv88e6xxx_has_stat(struct dsa_switch *ds,
+static bool mv88e6xxx_has_stat(struct mv88e6xxx_priv_state *ps,
struct mv88e6xxx_hw_stat *stat)
{
switch (stat->type) {
case BANK0:
return true;
case BANK1:
- return mv88e6xxx_6320_family(ds);
+ return mv88e6xxx_6320_family(ps);
case PORT:
- return mv88e6xxx_6095_family(ds) ||
- mv88e6xxx_6185_family(ds) ||
- mv88e6xxx_6097_family(ds) ||
- mv88e6xxx_6165_family(ds) ||
- mv88e6xxx_6351_family(ds) ||
- mv88e6xxx_6352_family(ds);
+ return mv88e6xxx_6095_family(ps) ||
+ mv88e6xxx_6185_family(ps) ||
+ mv88e6xxx_6097_family(ps) ||
+ mv88e6xxx_6165_family(ps) ||
+ mv88e6xxx_6351_family(ps) ||
+ mv88e6xxx_6352_family(ps);
}
return false;
}
-static uint64_t _mv88e6xxx_get_ethtool_stat(struct dsa_switch *ds,
+static uint64_t _mv88e6xxx_get_ethtool_stat(struct mv88e6xxx_priv_state *ps,
struct mv88e6xxx_hw_stat *s,
int port)
{
@@ -708,13 +684,13 @@ static uint64_t _mv88e6xxx_get_ethtool_stat(struct dsa_switch *ds,
switch (s->type) {
case PORT:
- ret = _mv88e6xxx_reg_read(ds, REG_PORT(port), s->reg);
+ ret = _mv88e6xxx_reg_read(ps, REG_PORT(port), s->reg);
if (ret < 0)
return UINT64_MAX;
low = ret;
if (s->sizeof_stat == 4) {
- ret = _mv88e6xxx_reg_read(ds, REG_PORT(port),
+ ret = _mv88e6xxx_reg_read(ps, REG_PORT(port),
s->reg + 1);
if (ret < 0)
return UINT64_MAX;
@@ -723,22 +699,24 @@ static uint64_t _mv88e6xxx_get_ethtool_stat(struct dsa_switch *ds,
break;
case BANK0:
case BANK1:
- _mv88e6xxx_stats_read(ds, s->reg, &low);
+ _mv88e6xxx_stats_read(ps, s->reg, &low);
if (s->sizeof_stat == 8)
- _mv88e6xxx_stats_read(ds, s->reg + 1, &high);
+ _mv88e6xxx_stats_read(ps, s->reg + 1, &high);
}
value = (((u64)high) << 16) | low;
return value;
}
-void mv88e6xxx_get_strings(struct dsa_switch *ds, int port, uint8_t *data)
+static void mv88e6xxx_get_strings(struct dsa_switch *ds, int port,
+ uint8_t *data)
{
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
struct mv88e6xxx_hw_stat *stat;
int i, j;
for (i = 0, j = 0; i < ARRAY_SIZE(mv88e6xxx_hw_stats); i++) {
stat = &mv88e6xxx_hw_stats[i];
- if (mv88e6xxx_has_stat(ds, stat)) {
+ if (mv88e6xxx_has_stat(ps, stat)) {
memcpy(data + j * ETH_GSTRING_LEN, stat->string,
ETH_GSTRING_LEN);
j++;
@@ -746,22 +724,22 @@ void mv88e6xxx_get_strings(struct dsa_switch *ds, int port, uint8_t *data)
}
}
-int mv88e6xxx_get_sset_count(struct dsa_switch *ds)
+static int mv88e6xxx_get_sset_count(struct dsa_switch *ds)
{
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
struct mv88e6xxx_hw_stat *stat;
int i, j;
for (i = 0, j = 0; i < ARRAY_SIZE(mv88e6xxx_hw_stats); i++) {
stat = &mv88e6xxx_hw_stats[i];
- if (mv88e6xxx_has_stat(ds, stat))
+ if (mv88e6xxx_has_stat(ps, stat))
j++;
}
return j;
}
-void
-mv88e6xxx_get_ethtool_stats(struct dsa_switch *ds,
- int port, uint64_t *data)
+static void mv88e6xxx_get_ethtool_stats(struct dsa_switch *ds, int port,
+ uint64_t *data)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
struct mv88e6xxx_hw_stat *stat;
@@ -770,15 +748,15 @@ mv88e6xxx_get_ethtool_stats(struct dsa_switch *ds,
mutex_lock(&ps->smi_mutex);
- ret = _mv88e6xxx_stats_snapshot(ds, port);
+ ret = _mv88e6xxx_stats_snapshot(ps, port);
if (ret < 0) {
mutex_unlock(&ps->smi_mutex);
return;
}
for (i = 0, j = 0; i < ARRAY_SIZE(mv88e6xxx_hw_stats); i++) {
stat = &mv88e6xxx_hw_stats[i];
- if (mv88e6xxx_has_stat(ds, stat)) {
- data[j] = _mv88e6xxx_get_ethtool_stat(ds, stat, port);
+ if (mv88e6xxx_has_stat(ps, stat)) {
+ data[j] = _mv88e6xxx_get_ethtool_stat(ps, stat, port);
j++;
}
}
@@ -786,14 +764,15 @@ mv88e6xxx_get_ethtool_stats(struct dsa_switch *ds,
mutex_unlock(&ps->smi_mutex);
}
-int mv88e6xxx_get_regs_len(struct dsa_switch *ds, int port)
+static int mv88e6xxx_get_regs_len(struct dsa_switch *ds, int port)
{
return 32 * sizeof(u16);
}
-void mv88e6xxx_get_regs(struct dsa_switch *ds, int port,
- struct ethtool_regs *regs, void *_p)
+static void mv88e6xxx_get_regs(struct dsa_switch *ds, int port,
+ struct ethtool_regs *regs, void *_p)
{
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
u16 *p = _p;
int i;
@@ -801,16 +780,20 @@ void mv88e6xxx_get_regs(struct dsa_switch *ds, int port,
memset(p, 0xff, 32 * sizeof(u16));
+ mutex_lock(&ps->smi_mutex);
+
for (i = 0; i < 32; i++) {
int ret;
- ret = mv88e6xxx_reg_read(ds, REG_PORT(port), i);
+ ret = _mv88e6xxx_reg_read(ps, REG_PORT(port), i);
if (ret >= 0)
p[i] = ret;
}
+
+ mutex_unlock(&ps->smi_mutex);
}
-static int _mv88e6xxx_wait(struct dsa_switch *ds, int reg, int offset,
+static int _mv88e6xxx_wait(struct mv88e6xxx_priv_state *ps, int reg, int offset,
u16 mask)
{
unsigned long timeout = jiffies + HZ / 10;
@@ -818,7 +801,7 @@ static int _mv88e6xxx_wait(struct dsa_switch *ds, int reg, int offset,
while (time_before(jiffies, timeout)) {
int ret;
- ret = _mv88e6xxx_reg_read(ds, reg, offset);
+ ret = _mv88e6xxx_reg_read(ps, reg, offset);
if (ret < 0)
return ret;
if (!(ret & mask))
@@ -829,91 +812,320 @@ static int _mv88e6xxx_wait(struct dsa_switch *ds, int reg, int offset,
return -ETIMEDOUT;
}
-static int mv88e6xxx_wait(struct dsa_switch *ds, int reg, int offset, u16 mask)
+static int mv88e6xxx_wait(struct mv88e6xxx_priv_state *ps, int reg,
+ int offset, u16 mask)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
int ret;
mutex_lock(&ps->smi_mutex);
- ret = _mv88e6xxx_wait(ds, reg, offset, mask);
+ ret = _mv88e6xxx_wait(ps, reg, offset, mask);
mutex_unlock(&ps->smi_mutex);
return ret;
}
-static int _mv88e6xxx_phy_wait(struct dsa_switch *ds)
+static int _mv88e6xxx_phy_wait(struct mv88e6xxx_priv_state *ps)
{
- return _mv88e6xxx_wait(ds, REG_GLOBAL2, GLOBAL2_SMI_OP,
+ return _mv88e6xxx_wait(ps, REG_GLOBAL2, GLOBAL2_SMI_OP,
GLOBAL2_SMI_OP_BUSY);
}
-int mv88e6xxx_eeprom_load_wait(struct dsa_switch *ds)
+static int mv88e6xxx_eeprom_load_wait(struct dsa_switch *ds)
{
- return mv88e6xxx_wait(ds, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+
+ return mv88e6xxx_wait(ps, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
GLOBAL2_EEPROM_OP_LOAD);
}
-int mv88e6xxx_eeprom_busy_wait(struct dsa_switch *ds)
+static int mv88e6xxx_eeprom_busy_wait(struct dsa_switch *ds)
{
- return mv88e6xxx_wait(ds, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+
+ return mv88e6xxx_wait(ps, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
GLOBAL2_EEPROM_OP_BUSY);
}
-static int _mv88e6xxx_atu_wait(struct dsa_switch *ds)
+static int mv88e6xxx_read_eeprom_word(struct dsa_switch *ds, int addr)
+{
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+ int ret;
+
+ mutex_lock(&ps->eeprom_mutex);
+
+ ret = mv88e6xxx_reg_write(ps, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
+ GLOBAL2_EEPROM_OP_READ |
+ (addr & GLOBAL2_EEPROM_OP_ADDR_MASK));
+ if (ret < 0)
+ goto error;
+
+ ret = mv88e6xxx_eeprom_busy_wait(ds);
+ if (ret < 0)
+ goto error;
+
+ ret = mv88e6xxx_reg_read(ps, REG_GLOBAL2, GLOBAL2_EEPROM_DATA);
+error:
+ mutex_unlock(&ps->eeprom_mutex);
+ return ret;
+}
+
+static int mv88e6xxx_get_eeprom_len(struct dsa_switch *ds)
+{
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+
+ if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_EEPROM))
+ return ps->eeprom_len;
+
+ return 0;
+}
+
+static int mv88e6xxx_get_eeprom(struct dsa_switch *ds,
+ struct ethtool_eeprom *eeprom, u8 *data)
+{
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+ int offset;
+ int len;
+ int ret;
+
+ if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_EEPROM))
+ return -EOPNOTSUPP;
+
+ offset = eeprom->offset;
+ len = eeprom->len;
+ eeprom->len = 0;
+
+ eeprom->magic = 0xc3ec4951;
+
+ ret = mv88e6xxx_eeprom_load_wait(ds);
+ if (ret < 0)
+ return ret;
+
+ if (offset & 1) {
+ int word;
+
+ word = mv88e6xxx_read_eeprom_word(ds, offset >> 1);
+ if (word < 0)
+ return word;
+
+ *data++ = (word >> 8) & 0xff;
+
+ offset++;
+ len--;
+ eeprom->len++;
+ }
+
+ while (len >= 2) {
+ int word;
+
+ word = mv88e6xxx_read_eeprom_word(ds, offset >> 1);
+ if (word < 0)
+ return word;
+
+ *data++ = word & 0xff;
+ *data++ = (word >> 8) & 0xff;
+
+ offset += 2;
+ len -= 2;
+ eeprom->len += 2;
+ }
+
+ if (len) {
+ int word;
+
+ word = mv88e6xxx_read_eeprom_word(ds, offset >> 1);
+ if (word < 0)
+ return word;
+
+ *data++ = word & 0xff;
+
+ offset++;
+ len--;
+ eeprom->len++;
+ }
+
+ return 0;
+}
+
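/*
 * Editor's note -- the byte packing above in one helper, for illustration
 * only: the EEPROM is addressed as 16-bit words but exposed to ethtool as
 * bytes, low byte first, so byte "offset" lives in word (offset >> 1) --
 * low half when the offset is even, high half when it is odd.
 */
static u8 eeprom_byte_sketch(u16 word, int offset)
{
        return (offset & 1) ? (word >> 8) & 0xff : word & 0xff;
}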
+static int mv88e6xxx_eeprom_is_readonly(struct dsa_switch *ds)
+{
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+ int ret;
+
+ ret = mv88e6xxx_reg_read(ps, REG_GLOBAL2, GLOBAL2_EEPROM_OP);
+ if (ret < 0)
+ return ret;
+
+ if (!(ret & GLOBAL2_EEPROM_OP_WRITE_EN))
+ return -EROFS;
+
+ return 0;
+}
+
+static int mv88e6xxx_write_eeprom_word(struct dsa_switch *ds, int addr,
+ u16 data)
+{
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+ int ret;
+
+ mutex_lock(&ps->eeprom_mutex);
+
+ ret = mv88e6xxx_reg_write(ps, REG_GLOBAL2, GLOBAL2_EEPROM_DATA, data);
+ if (ret < 0)
+ goto error;
+
+ ret = mv88e6xxx_reg_write(ps, REG_GLOBAL2, GLOBAL2_EEPROM_OP,
+ GLOBAL2_EEPROM_OP_WRITE |
+ (addr & GLOBAL2_EEPROM_OP_ADDR_MASK));
+ if (ret < 0)
+ goto error;
+
+ ret = mv88e6xxx_eeprom_busy_wait(ds);
+error:
+ mutex_unlock(&ps->eeprom_mutex);
+ return ret;
+}
+
+static int mv88e6xxx_set_eeprom(struct dsa_switch *ds,
+ struct ethtool_eeprom *eeprom, u8 *data)
+{
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+ int offset;
+ int ret;
+ int len;
+
+ if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_EEPROM))
+ return -EOPNOTSUPP;
+
+ if (eeprom->magic != 0xc3ec4951)
+ return -EINVAL;
+
+ ret = mv88e6xxx_eeprom_is_readonly(ds);
+ if (ret)
+ return ret;
+
+ offset = eeprom->offset;
+ len = eeprom->len;
+ eeprom->len = 0;
+
+ ret = mv88e6xxx_eeprom_load_wait(ds);
+ if (ret < 0)
+ return ret;
+
+ if (offset & 1) {
+ int word;
+
+ word = mv88e6xxx_read_eeprom_word(ds, offset >> 1);
+ if (word < 0)
+ return word;
+
+ word = (*data++ << 8) | (word & 0xff);
+
+ ret = mv88e6xxx_write_eeprom_word(ds, offset >> 1, word);
+ if (ret < 0)
+ return ret;
+
+ offset++;
+ len--;
+ eeprom->len++;
+ }
+
+ while (len >= 2) {
+ int word;
+
+ word = *data++;
+ word |= *data++ << 8;
+
+ ret = mv88e6xxx_write_eeprom_word(ds, offset >> 1, word);
+ if (ret < 0)
+ return ret;
+
+ offset += 2;
+ len -= 2;
+ eeprom->len += 2;
+ }
+
+ if (len) {
+ int word;
+
+ word = mv88e6xxx_read_eeprom_word(ds, offset >> 1);
+ if (word < 0)
+ return word;
+
+ word = (word & 0xff00) | *data++;
+
+ ret = mv88e6xxx_write_eeprom_word(ds, offset >> 1, word);
+ if (ret < 0)
+ return ret;
+
+ offset++;
+ len--;
+ eeprom->len++;
+ }
+
+ return 0;
+}
+
+static int _mv88e6xxx_atu_wait(struct mv88e6xxx_priv_state *ps)
{
- return _mv88e6xxx_wait(ds, REG_GLOBAL, GLOBAL_ATU_OP,
+ return _mv88e6xxx_wait(ps, REG_GLOBAL, GLOBAL_ATU_OP,
GLOBAL_ATU_OP_BUSY);
}
-static int _mv88e6xxx_phy_read_indirect(struct dsa_switch *ds, int addr,
- int regnum)
+static int _mv88e6xxx_phy_read_indirect(struct mv88e6xxx_priv_state *ps,
+ int addr, int regnum)
{
int ret;
- ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL2, GLOBAL2_SMI_OP,
+ ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL2, GLOBAL2_SMI_OP,
GLOBAL2_SMI_OP_22_READ | (addr << 5) |
regnum);
if (ret < 0)
return ret;
- ret = _mv88e6xxx_phy_wait(ds);
+ ret = _mv88e6xxx_phy_wait(ps);
if (ret < 0)
return ret;
- return _mv88e6xxx_reg_read(ds, REG_GLOBAL2, GLOBAL2_SMI_DATA);
+ ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL2, GLOBAL2_SMI_DATA);
+
+ return ret;
}
-static int _mv88e6xxx_phy_write_indirect(struct dsa_switch *ds, int addr,
- int regnum, u16 val)
+static int _mv88e6xxx_phy_write_indirect(struct mv88e6xxx_priv_state *ps,
+ int addr, int regnum, u16 val)
{
int ret;
- ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL2, GLOBAL2_SMI_DATA, val);
+ ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL2, GLOBAL2_SMI_DATA, val);
if (ret < 0)
return ret;
- ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL2, GLOBAL2_SMI_OP,
+ ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL2, GLOBAL2_SMI_OP,
GLOBAL2_SMI_OP_22_WRITE | (addr << 5) |
regnum);
- return _mv88e6xxx_phy_wait(ds);
+ return _mv88e6xxx_phy_wait(ps);
}
-int mv88e6xxx_get_eee(struct dsa_switch *ds, int port, struct ethtool_eee *e)
+static int mv88e6xxx_get_eee(struct dsa_switch *ds, int port,
+ struct ethtool_eee *e)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
int reg;
+ if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_EEE))
+ return -EOPNOTSUPP;
+
mutex_lock(&ps->smi_mutex);
- reg = _mv88e6xxx_phy_read_indirect(ds, port, 16);
+ reg = _mv88e6xxx_phy_read_indirect(ps, port, 16);
if (reg < 0)
goto out;
e->eee_enabled = !!(reg & 0x0200);
e->tx_lpi_enabled = !!(reg & 0x0100);
- reg = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_STATUS);
+ reg = _mv88e6xxx_reg_read(ps, REG_PORT(port), PORT_STATUS);
if (reg < 0)
goto out;
@@ -925,16 +1137,19 @@ out:
return reg;
}
-int mv88e6xxx_set_eee(struct dsa_switch *ds, int port,
- struct phy_device *phydev, struct ethtool_eee *e)
+static int mv88e6xxx_set_eee(struct dsa_switch *ds, int port,
+ struct phy_device *phydev, struct ethtool_eee *e)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
int reg;
int ret;
+ if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_EEE))
+ return -EOPNOTSUPP;
+
mutex_lock(&ps->smi_mutex);
- ret = _mv88e6xxx_phy_read_indirect(ds, port, 16);
+ ret = _mv88e6xxx_phy_read_indirect(ps, port, 16);
if (ret < 0)
goto out;
@@ -944,25 +1159,45 @@ int mv88e6xxx_set_eee(struct dsa_switch *ds, int port,
if (e->tx_lpi_enabled)
reg |= 0x0100;
- ret = _mv88e6xxx_phy_write_indirect(ds, port, 16, reg);
+ ret = _mv88e6xxx_phy_write_indirect(ps, port, 16, reg);
out:
mutex_unlock(&ps->smi_mutex);
return ret;
}
-static int _mv88e6xxx_atu_cmd(struct dsa_switch *ds, u16 cmd)
+static int _mv88e6xxx_atu_cmd(struct mv88e6xxx_priv_state *ps, u16 fid, u16 cmd)
{
int ret;
- ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_ATU_OP, cmd);
+ if (mv88e6xxx_has_fid_reg(ps)) {
+ ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_ATU_FID, fid);
+ if (ret < 0)
+ return ret;
+ } else if (mv88e6xxx_num_databases(ps) == 256) {
+ /* ATU DBNum[7:4] are located in ATU Control 15:12 */
+ ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_ATU_CONTROL);
+ if (ret < 0)
+ return ret;
+
+ ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_ATU_CONTROL,
+ (ret & 0xfff) |
+ ((fid << 8) & 0xf000));
+ if (ret < 0)
+ return ret;
+
+ /* ATU DBNum[3:0] are located in ATU Operation 3:0 */
+ cmd |= fid & 0xf;
+ }
+
+ ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_ATU_OP, cmd);
if (ret < 0)
return ret;
- return _mv88e6xxx_atu_wait(ds);
+ return _mv88e6xxx_atu_wait(ps);
}
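/*
 * Editor's note -- worked example of the 256-database split above, for
 * illustration only (helper name is made up): with an 8-bit DBNum the
 * upper nibble travels through ATU Control and the lower nibble rides in
 * the ATU Operation word itself.
 */
static void atu_fid_split_example(void)
{
        u16 fid = 0xab;
        u16 atu_ctrl = (fid << 8) & 0xf000;     /* 0xa000: DBNum[7:4] -> ATU Control bits 15:12 */
        u16 atu_op   = fid & 0xf;               /* 0x000b: DBNum[3:0] -> ATU Operation bits 3:0 */

        WARN_ON(atu_ctrl != 0xa000 || atu_op != 0x000b);
}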
-static int _mv88e6xxx_atu_data_write(struct dsa_switch *ds,
+static int _mv88e6xxx_atu_data_write(struct mv88e6xxx_priv_state *ps,
struct mv88e6xxx_atu_entry *entry)
{
u16 data = entry->state & GLOBAL_ATU_DATA_STATE_MASK;
@@ -982,30 +1217,25 @@ static int _mv88e6xxx_atu_data_write(struct dsa_switch *ds,
data |= (entry->portv_trunkid << shift) & mask;
}
- return _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_ATU_DATA, data);
+ return _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_ATU_DATA, data);
}
-static int _mv88e6xxx_atu_flush_move(struct dsa_switch *ds,
+static int _mv88e6xxx_atu_flush_move(struct mv88e6xxx_priv_state *ps,
struct mv88e6xxx_atu_entry *entry,
bool static_too)
{
int op;
int err;
- err = _mv88e6xxx_atu_wait(ds);
+ err = _mv88e6xxx_atu_wait(ps);
if (err)
return err;
- err = _mv88e6xxx_atu_data_write(ds, entry);
+ err = _mv88e6xxx_atu_data_write(ps, entry);
if (err)
return err;
if (entry->fid) {
- err = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_ATU_FID,
- entry->fid);
- if (err)
- return err;
-
op = static_too ? GLOBAL_ATU_OP_FLUSH_MOVE_ALL_DB :
GLOBAL_ATU_OP_FLUSH_MOVE_NON_STATIC_DB;
} else {
@@ -1013,21 +1243,22 @@ static int _mv88e6xxx_atu_flush_move(struct dsa_switch *ds,
GLOBAL_ATU_OP_FLUSH_MOVE_NON_STATIC;
}
- return _mv88e6xxx_atu_cmd(ds, op);
+ return _mv88e6xxx_atu_cmd(ps, entry->fid, op);
}
-static int _mv88e6xxx_atu_flush(struct dsa_switch *ds, u16 fid, bool static_too)
+static int _mv88e6xxx_atu_flush(struct mv88e6xxx_priv_state *ps,
+ u16 fid, bool static_too)
{
struct mv88e6xxx_atu_entry entry = {
.fid = fid,
.state = 0, /* EntryState bits must be 0 */
};
- return _mv88e6xxx_atu_flush_move(ds, &entry, static_too);
+ return _mv88e6xxx_atu_flush_move(ps, &entry, static_too);
}
-static int _mv88e6xxx_atu_move(struct dsa_switch *ds, u16 fid, int from_port,
- int to_port, bool static_too)
+static int _mv88e6xxx_atu_move(struct mv88e6xxx_priv_state *ps, u16 fid,
+ int from_port, int to_port, bool static_too)
{
struct mv88e6xxx_atu_entry entry = {
.trunk = false,
@@ -1041,14 +1272,14 @@ static int _mv88e6xxx_atu_move(struct dsa_switch *ds, u16 fid, int from_port,
entry.portv_trunkid = (to_port & 0x0f) << 4;
entry.portv_trunkid |= from_port & 0x0f;
- return _mv88e6xxx_atu_flush_move(ds, &entry, static_too);
+ return _mv88e6xxx_atu_flush_move(ps, &entry, static_too);
}
-static int _mv88e6xxx_atu_remove(struct dsa_switch *ds, u16 fid, int port,
- bool static_too)
+static int _mv88e6xxx_atu_remove(struct mv88e6xxx_priv_state *ps, u16 fid,
+ int port, bool static_too)
{
/* Destination port 0xF means remove the entries */
- return _mv88e6xxx_atu_move(ds, fid, port, 0x0f, static_too);
+ return _mv88e6xxx_atu_move(ps, fid, port, 0x0f, static_too);
}
static const char * const mv88e6xxx_port_state_names[] = {
@@ -1058,12 +1289,14 @@ static const char * const mv88e6xxx_port_state_names[] = {
[PORT_CONTROL_STATE_FORWARDING] = "Forwarding",
};
-static int _mv88e6xxx_port_state(struct dsa_switch *ds, int port, u8 state)
+static int _mv88e6xxx_port_state(struct mv88e6xxx_priv_state *ps, int port,
+ u8 state)
{
+ struct dsa_switch *ds = ps->ds;
int reg, ret = 0;
u8 oldstate;
- reg = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_CONTROL);
+ reg = _mv88e6xxx_reg_read(ps, REG_PORT(port), PORT_CONTROL);
if (reg < 0)
return reg;
@@ -1078,13 +1311,13 @@ static int _mv88e6xxx_port_state(struct dsa_switch *ds, int port, u8 state)
oldstate == PORT_CONTROL_STATE_FORWARDING)
&& (state == PORT_CONTROL_STATE_DISABLED ||
state == PORT_CONTROL_STATE_BLOCKING)) {
- ret = _mv88e6xxx_atu_remove(ds, 0, port, false);
+ ret = _mv88e6xxx_atu_remove(ps, 0, port, false);
if (ret)
return ret;
}
reg = (reg & ~PORT_CONTROL_STATE_MASK) | state;
- ret = _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_CONTROL,
+ ret = _mv88e6xxx_reg_write(ps, REG_PORT(port), PORT_CONTROL,
reg);
if (ret)
return ret;
@@ -1097,11 +1330,12 @@ static int _mv88e6xxx_port_state(struct dsa_switch *ds, int port, u8 state)
return ret;
}
-static int _mv88e6xxx_port_based_vlan_map(struct dsa_switch *ds, int port)
+static int _mv88e6xxx_port_based_vlan_map(struct mv88e6xxx_priv_state *ps,
+ int port)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
struct net_device *bridge = ps->ports[port].bridge_dev;
- const u16 mask = (1 << ps->num_ports) - 1;
+ const u16 mask = (1 << ps->info->num_ports) - 1;
+ struct dsa_switch *ds = ps->ds;
u16 output_ports = 0;
int reg;
int i;
@@ -1110,7 +1344,7 @@ static int _mv88e6xxx_port_based_vlan_map(struct dsa_switch *ds, int port)
if (dsa_is_cpu_port(ds, port) || dsa_is_dsa_port(ds, port)) {
output_ports = mask;
} else {
- for (i = 0; i < ps->num_ports; ++i) {
+ for (i = 0; i < ps->info->num_ports; ++i) {
/* allow sending frames to every group member */
if (bridge && ps->ports[i].bridge_dev == bridge)
output_ports |= BIT(i);
@@ -1124,20 +1358,25 @@ static int _mv88e6xxx_port_based_vlan_map(struct dsa_switch *ds, int port)
/* prevent frames from going back out of the port they came in on */
output_ports &= ~BIT(port);
- reg = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_BASE_VLAN);
+ reg = _mv88e6xxx_reg_read(ps, REG_PORT(port), PORT_BASE_VLAN);
if (reg < 0)
return reg;
reg &= ~mask;
reg |= output_ports & mask;
- return _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_BASE_VLAN, reg);
+ return _mv88e6xxx_reg_write(ps, REG_PORT(port), PORT_BASE_VLAN, reg);
}
-int mv88e6xxx_port_stp_update(struct dsa_switch *ds, int port, u8 state)
+static void mv88e6xxx_port_stp_state_set(struct dsa_switch *ds, int port,
+ u8 state)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
int stp_state;
+ int err;
+
+ if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_PORTSTATE))
+ return;
switch (state) {
case BR_STATE_DISABLED:
@@ -1156,23 +1395,23 @@ int mv88e6xxx_port_stp_update(struct dsa_switch *ds, int port, u8 state)
break;
}
- /* mv88e6xxx_port_stp_update may be called with softirqs disabled,
- * so we can not update the port state directly but need to schedule it.
- */
- ps->ports[port].state = stp_state;
- set_bit(port, ps->port_state_update_mask);
- schedule_work(&ps->bridge_work);
+ mutex_lock(&ps->smi_mutex);
+ err = _mv88e6xxx_port_state(ps, port, stp_state);
+ mutex_unlock(&ps->smi_mutex);
- return 0;
+ if (err)
+ netdev_err(ds->ports[port], "failed to update state to %s\n",
+ mv88e6xxx_port_state_names[stp_state]);
}
-static int _mv88e6xxx_port_pvid(struct dsa_switch *ds, int port, u16 *new,
- u16 *old)
+static int _mv88e6xxx_port_pvid(struct mv88e6xxx_priv_state *ps, int port,
+ u16 *new, u16 *old)
{
+ struct dsa_switch *ds = ps->ds;
u16 pvid;
int ret;
- ret = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_DEFAULT_VLAN);
+ ret = _mv88e6xxx_reg_read(ps, REG_PORT(port), PORT_DEFAULT_VLAN);
if (ret < 0)
return ret;
@@ -1182,7 +1421,7 @@ static int _mv88e6xxx_port_pvid(struct dsa_switch *ds, int port, u16 *new,
ret &= ~PORT_DEFAULT_VLAN_MASK;
ret |= *new & PORT_DEFAULT_VLAN_MASK;
- ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+ ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
PORT_DEFAULT_VLAN, ret);
if (ret < 0)
return ret;
@@ -1197,55 +1436,56 @@ static int _mv88e6xxx_port_pvid(struct dsa_switch *ds, int port, u16 *new,
return 0;
}
-static int _mv88e6xxx_port_pvid_get(struct dsa_switch *ds, int port, u16 *pvid)
+static int _mv88e6xxx_port_pvid_get(struct mv88e6xxx_priv_state *ps,
+ int port, u16 *pvid)
{
- return _mv88e6xxx_port_pvid(ds, port, NULL, pvid);
+ return _mv88e6xxx_port_pvid(ps, port, NULL, pvid);
}
-static int _mv88e6xxx_port_pvid_set(struct dsa_switch *ds, int port, u16 pvid)
+static int _mv88e6xxx_port_pvid_set(struct mv88e6xxx_priv_state *ps,
+ int port, u16 pvid)
{
- return _mv88e6xxx_port_pvid(ds, port, &pvid, NULL);
+ return _mv88e6xxx_port_pvid(ps, port, &pvid, NULL);
}
-static int _mv88e6xxx_vtu_wait(struct dsa_switch *ds)
+static int _mv88e6xxx_vtu_wait(struct mv88e6xxx_priv_state *ps)
{
- return _mv88e6xxx_wait(ds, REG_GLOBAL, GLOBAL_VTU_OP,
+ return _mv88e6xxx_wait(ps, REG_GLOBAL, GLOBAL_VTU_OP,
GLOBAL_VTU_OP_BUSY);
}
-static int _mv88e6xxx_vtu_cmd(struct dsa_switch *ds, u16 op)
+static int _mv88e6xxx_vtu_cmd(struct mv88e6xxx_priv_state *ps, u16 op)
{
int ret;
- ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_VTU_OP, op);
+ ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_VTU_OP, op);
if (ret < 0)
return ret;
- return _mv88e6xxx_vtu_wait(ds);
+ return _mv88e6xxx_vtu_wait(ps);
}
-static int _mv88e6xxx_vtu_stu_flush(struct dsa_switch *ds)
+static int _mv88e6xxx_vtu_stu_flush(struct mv88e6xxx_priv_state *ps)
{
int ret;
- ret = _mv88e6xxx_vtu_wait(ds);
+ ret = _mv88e6xxx_vtu_wait(ps);
if (ret < 0)
return ret;
- return _mv88e6xxx_vtu_cmd(ds, GLOBAL_VTU_OP_FLUSH_ALL);
+ return _mv88e6xxx_vtu_cmd(ps, GLOBAL_VTU_OP_FLUSH_ALL);
}
-static int _mv88e6xxx_vtu_stu_data_read(struct dsa_switch *ds,
+static int _mv88e6xxx_vtu_stu_data_read(struct mv88e6xxx_priv_state *ps,
struct mv88e6xxx_vtu_stu_entry *entry,
unsigned int nibble_offset)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
u16 regs[3];
int i;
int ret;
for (i = 0; i < 3; ++i) {
- ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL,
+ ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL,
GLOBAL_VTU_DATA_0_3 + i);
if (ret < 0)
return ret;
@@ -1253,7 +1493,7 @@ static int _mv88e6xxx_vtu_stu_data_read(struct dsa_switch *ds,
regs[i] = ret;
}
- for (i = 0; i < ps->num_ports; ++i) {
+ for (i = 0; i < ps->info->num_ports; ++i) {
unsigned int shift = (i % 4) * 4 + nibble_offset;
u16 reg = regs[i / 4];
@@ -1263,16 +1503,27 @@ static int _mv88e6xxx_vtu_stu_data_read(struct dsa_switch *ds,
return 0;
}
-static int _mv88e6xxx_vtu_stu_data_write(struct dsa_switch *ds,
+static int mv88e6xxx_vtu_data_read(struct mv88e6xxx_priv_state *ps,
+ struct mv88e6xxx_vtu_stu_entry *entry)
+{
+ return _mv88e6xxx_vtu_stu_data_read(ps, entry, 0);
+}
+
+static int mv88e6xxx_stu_data_read(struct mv88e6xxx_priv_state *ps,
+ struct mv88e6xxx_vtu_stu_entry *entry)
+{
+ return _mv88e6xxx_vtu_stu_data_read(ps, entry, 2);
+}
+
+static int _mv88e6xxx_vtu_stu_data_write(struct mv88e6xxx_priv_state *ps,
struct mv88e6xxx_vtu_stu_entry *entry,
unsigned int nibble_offset)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
u16 regs[3] = { 0 };
int i;
int ret;
- for (i = 0; i < ps->num_ports; ++i) {
+ for (i = 0; i < ps->info->num_ports; ++i) {
unsigned int shift = (i % 4) * 4 + nibble_offset;
u8 data = entry->data[i];
@@ -1280,7 +1531,7 @@ static int _mv88e6xxx_vtu_stu_data_write(struct dsa_switch *ds,
}
for (i = 0; i < 3; ++i) {
- ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL,
+ ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL,
GLOBAL_VTU_DATA_0_3 + i, regs[i]);
if (ret < 0)
return ret;
@@ -1289,27 +1540,39 @@ static int _mv88e6xxx_vtu_stu_data_write(struct dsa_switch *ds,
return 0;
}
-static int _mv88e6xxx_vtu_vid_write(struct dsa_switch *ds, u16 vid)
+static int mv88e6xxx_vtu_data_write(struct mv88e6xxx_priv_state *ps,
+ struct mv88e6xxx_vtu_stu_entry *entry)
+{
+ return _mv88e6xxx_vtu_stu_data_write(ps, entry, 0);
+}
+
+static int mv88e6xxx_stu_data_write(struct mv88e6xxx_priv_state *ps,
+ struct mv88e6xxx_vtu_stu_entry *entry)
+{
+ return _mv88e6xxx_vtu_stu_data_write(ps, entry, 2);
+}
+
+static int _mv88e6xxx_vtu_vid_write(struct mv88e6xxx_priv_state *ps, u16 vid)
{
- return _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_VTU_VID,
+ return _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_VTU_VID,
vid & GLOBAL_VTU_VID_MASK);
}
-static int _mv88e6xxx_vtu_getnext(struct dsa_switch *ds,
+static int _mv88e6xxx_vtu_getnext(struct mv88e6xxx_priv_state *ps,
struct mv88e6xxx_vtu_stu_entry *entry)
{
struct mv88e6xxx_vtu_stu_entry next = { 0 };
int ret;
- ret = _mv88e6xxx_vtu_wait(ds);
+ ret = _mv88e6xxx_vtu_wait(ps);
if (ret < 0)
return ret;
- ret = _mv88e6xxx_vtu_cmd(ds, GLOBAL_VTU_OP_VTU_GET_NEXT);
+ ret = _mv88e6xxx_vtu_cmd(ps, GLOBAL_VTU_OP_VTU_GET_NEXT);
if (ret < 0)
return ret;
- ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL, GLOBAL_VTU_VID);
+ ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_VTU_VID);
if (ret < 0)
return ret;
@@ -1317,20 +1580,32 @@ static int _mv88e6xxx_vtu_getnext(struct dsa_switch *ds,
next.valid = !!(ret & GLOBAL_VTU_VID_VALID);
if (next.valid) {
- ret = _mv88e6xxx_vtu_stu_data_read(ds, &next, 0);
+ ret = mv88e6xxx_vtu_data_read(ps, &next);
if (ret < 0)
return ret;
- if (mv88e6xxx_6097_family(ds) || mv88e6xxx_6165_family(ds) ||
- mv88e6xxx_6351_family(ds) || mv88e6xxx_6352_family(ds)) {
- ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL,
+ if (mv88e6xxx_has_fid_reg(ps)) {
+ ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL,
GLOBAL_VTU_FID);
if (ret < 0)
return ret;
next.fid = ret & GLOBAL_VTU_FID_MASK;
+ } else if (mv88e6xxx_num_databases(ps) == 256) {
+ /* VTU DBNum[7:4] are located in VTU Operation 11:8, and
+ * VTU DBNum[3:0] are located in VTU Operation 3:0
+ */
+ ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL,
+ GLOBAL_VTU_OP);
+ if (ret < 0)
+ return ret;
- ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL,
+ next.fid = (ret & 0xf00) >> 4;
+ next.fid |= ret & 0xf;
+ }
+
+ if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_STU)) {
+ ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL,
GLOBAL_VTU_SID);
if (ret < 0)
return ret;
@@ -1343,27 +1618,30 @@ static int _mv88e6xxx_vtu_getnext(struct dsa_switch *ds,
return 0;
}
-int mv88e6xxx_port_vlan_dump(struct dsa_switch *ds, int port,
- struct switchdev_obj_port_vlan *vlan,
- int (*cb)(struct switchdev_obj *obj))
+static int mv88e6xxx_port_vlan_dump(struct dsa_switch *ds, int port,
+ struct switchdev_obj_port_vlan *vlan,
+ int (*cb)(struct switchdev_obj *obj))
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
struct mv88e6xxx_vtu_stu_entry next;
u16 pvid;
int err;
+ if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_VTU))
+ return -EOPNOTSUPP;
+
mutex_lock(&ps->smi_mutex);
- err = _mv88e6xxx_port_pvid_get(ds, port, &pvid);
+ err = _mv88e6xxx_port_pvid_get(ps, port, &pvid);
if (err)
goto unlock;
- err = _mv88e6xxx_vtu_vid_write(ds, GLOBAL_VTU_VID_MASK);
+ err = _mv88e6xxx_vtu_vid_write(ps, GLOBAL_VTU_VID_MASK);
if (err)
goto unlock;
do {
- err = _mv88e6xxx_vtu_getnext(ds, &next);
+ err = _mv88e6xxx_vtu_getnext(ps, &next);
if (err)
break;
@@ -1394,13 +1672,14 @@ unlock:
return err;
}
-static int _mv88e6xxx_vtu_loadpurge(struct dsa_switch *ds,
+static int _mv88e6xxx_vtu_loadpurge(struct mv88e6xxx_priv_state *ps,
struct mv88e6xxx_vtu_stu_entry *entry)
{
+ u16 op = GLOBAL_VTU_OP_VTU_LOAD_PURGE;
u16 reg = 0;
int ret;
- ret = _mv88e6xxx_vtu_wait(ds);
+ ret = _mv88e6xxx_vtu_wait(ps);
if (ret < 0)
return ret;
@@ -1408,66 +1687,73 @@ static int _mv88e6xxx_vtu_loadpurge(struct dsa_switch *ds,
goto loadpurge;
/* Write port member tags */
- ret = _mv88e6xxx_vtu_stu_data_write(ds, entry, 0);
+ ret = mv88e6xxx_vtu_data_write(ps, entry);
if (ret < 0)
return ret;
- if (mv88e6xxx_6097_family(ds) || mv88e6xxx_6165_family(ds) ||
- mv88e6xxx_6351_family(ds) || mv88e6xxx_6352_family(ds)) {
+ if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_STU)) {
reg = entry->sid & GLOBAL_VTU_SID_MASK;
- ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_VTU_SID, reg);
+ ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_VTU_SID, reg);
if (ret < 0)
return ret;
+ }
+ if (mv88e6xxx_has_fid_reg(ps)) {
reg = entry->fid & GLOBAL_VTU_FID_MASK;
- ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_VTU_FID, reg);
+ ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_VTU_FID, reg);
if (ret < 0)
return ret;
+ } else if (mv88e6xxx_num_databases(ps) == 256) {
+ /* VTU DBNum[7:4] are located in VTU Operation 11:8, and
+ * VTU DBNum[3:0] are located in VTU Operation 3:0
+ */
+ op |= (entry->fid & 0xf0) << 8;
+ op |= entry->fid & 0xf;
}
reg = GLOBAL_VTU_VID_VALID;
loadpurge:
reg |= entry->vid & GLOBAL_VTU_VID_MASK;
- ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_VTU_VID, reg);
+ ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_VTU_VID, reg);
if (ret < 0)
return ret;
- return _mv88e6xxx_vtu_cmd(ds, GLOBAL_VTU_OP_VTU_LOAD_PURGE);
+ return _mv88e6xxx_vtu_cmd(ps, op);
}
-static int _mv88e6xxx_stu_getnext(struct dsa_switch *ds, u8 sid,
+static int _mv88e6xxx_stu_getnext(struct mv88e6xxx_priv_state *ps, u8 sid,
struct mv88e6xxx_vtu_stu_entry *entry)
{
struct mv88e6xxx_vtu_stu_entry next = { 0 };
int ret;
- ret = _mv88e6xxx_vtu_wait(ds);
+ ret = _mv88e6xxx_vtu_wait(ps);
if (ret < 0)
return ret;
- ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_VTU_SID,
+ ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_VTU_SID,
sid & GLOBAL_VTU_SID_MASK);
if (ret < 0)
return ret;
- ret = _mv88e6xxx_vtu_cmd(ds, GLOBAL_VTU_OP_STU_GET_NEXT);
+ ret = _mv88e6xxx_vtu_cmd(ps, GLOBAL_VTU_OP_STU_GET_NEXT);
if (ret < 0)
return ret;
- ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL, GLOBAL_VTU_SID);
+ ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_VTU_SID);
if (ret < 0)
return ret;
next.sid = ret & GLOBAL_VTU_SID_MASK;
- ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL, GLOBAL_VTU_VID);
+ ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_VTU_VID);
if (ret < 0)
return ret;
next.valid = !!(ret & GLOBAL_VTU_VID_VALID);
if (next.valid) {
- ret = _mv88e6xxx_vtu_stu_data_read(ds, &next, 2);
+ ret = mv88e6xxx_stu_data_read(ps, &next);
if (ret < 0)
return ret;
}
@@ -1476,13 +1762,13 @@ static int _mv88e6xxx_stu_getnext(struct dsa_switch *ds, u8 sid,
return 0;
}
-static int _mv88e6xxx_stu_loadpurge(struct dsa_switch *ds,
+static int _mv88e6xxx_stu_loadpurge(struct mv88e6xxx_priv_state *ps,
struct mv88e6xxx_vtu_stu_entry *entry)
{
u16 reg = 0;
int ret;
- ret = _mv88e6xxx_vtu_wait(ds);
+ ret = _mv88e6xxx_vtu_wait(ps);
if (ret < 0)
return ret;
@@ -1490,32 +1776,41 @@ static int _mv88e6xxx_stu_loadpurge(struct dsa_switch *ds,
goto loadpurge;
/* Write port states */
- ret = _mv88e6xxx_vtu_stu_data_write(ds, entry, 2);
+ ret = mv88e6xxx_stu_data_write(ps, entry);
if (ret < 0)
return ret;
reg = GLOBAL_VTU_VID_VALID;
loadpurge:
- ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_VTU_VID, reg);
+ ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_VTU_VID, reg);
if (ret < 0)
return ret;
reg = entry->sid & GLOBAL_VTU_SID_MASK;
- ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_VTU_SID, reg);
+ ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_VTU_SID, reg);
if (ret < 0)
return ret;
- return _mv88e6xxx_vtu_cmd(ds, GLOBAL_VTU_OP_STU_LOAD_PURGE);
+ return _mv88e6xxx_vtu_cmd(ps, GLOBAL_VTU_OP_STU_LOAD_PURGE);
}
-static int _mv88e6xxx_port_fid(struct dsa_switch *ds, int port, u16 *new,
- u16 *old)
+static int _mv88e6xxx_port_fid(struct mv88e6xxx_priv_state *ps, int port,
+ u16 *new, u16 *old)
{
+ struct dsa_switch *ds = ps->ds;
+ u16 upper_mask;
u16 fid;
int ret;
+ if (mv88e6xxx_num_databases(ps) == 4096)
+ upper_mask = 0xff;
+ else if (mv88e6xxx_num_databases(ps) == 256)
+ upper_mask = 0xf;
+ else
+ return -EOPNOTSUPP;
+
/* Port's default FID bits 3:0 are located in reg 0x06, offset 12 */
- ret = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_BASE_VLAN);
+ ret = _mv88e6xxx_reg_read(ps, REG_PORT(port), PORT_BASE_VLAN);
if (ret < 0)
return ret;
@@ -1525,24 +1820,24 @@ static int _mv88e6xxx_port_fid(struct dsa_switch *ds, int port, u16 *new,
ret &= ~PORT_BASE_VLAN_FID_3_0_MASK;
ret |= (*new << 12) & PORT_BASE_VLAN_FID_3_0_MASK;
- ret = _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_BASE_VLAN,
+ ret = _mv88e6xxx_reg_write(ps, REG_PORT(port), PORT_BASE_VLAN,
ret);
if (ret < 0)
return ret;
}
/* Port's default FID bits 11:4 are located in reg 0x05, offset 0 */
- ret = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_CONTROL_1);
+ ret = _mv88e6xxx_reg_read(ps, REG_PORT(port), PORT_CONTROL_1);
if (ret < 0)
return ret;
- fid |= (ret & PORT_CONTROL_1_FID_11_4_MASK) << 4;
+ fid |= (ret & upper_mask) << 4;
if (new) {
- ret &= ~PORT_CONTROL_1_FID_11_4_MASK;
- ret |= (*new >> 4) & PORT_CONTROL_1_FID_11_4_MASK;
+ ret &= ~upper_mask;
+ ret |= (*new >> 4) & upper_mask;
- ret = _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_CONTROL_1,
+ ret = _mv88e6xxx_reg_write(ps, REG_PORT(port), PORT_CONTROL_1,
ret);
if (ret < 0)
return ret;
@@ -1556,19 +1851,20 @@ static int _mv88e6xxx_port_fid(struct dsa_switch *ds, int port, u16 *new,
return 0;
}
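
The hunk above stores a port's default FID in two places: bits 3:0 go to PORT_BASE_VLAN bits 15:12, and the remaining bits go to the low bits of PORT_CONTROL_1, masked with 0xff on 4096-database chips or 0xf on 256-database chips. A minimal round-trip sketch of that split (illustrative only; the helper names are invented, and the 0xf000 value used for PORT_BASE_VLAN_FID_3_0_MASK is assumed from the "offset 12" comment):

#include <assert.h>
#include <stdint.h>

#define PORT_BASE_VLAN_FID_3_0_MASK	0xf000	/* assumed: 4 bits at offset 12 */

/* Illustrative only: split a FID the way the hunk above does. */
static void fid_split(uint16_t fid, uint16_t upper_mask,
		      uint16_t *base_vlan_bits, uint16_t *ctrl1_bits)
{
	*base_vlan_bits = (fid << 12) & PORT_BASE_VLAN_FID_3_0_MASK;
	*ctrl1_bits = (fid >> 4) & upper_mask;
}

static uint16_t fid_join(uint16_t base_vlan, uint16_t ctrl1,
			 uint16_t upper_mask)
{
	return ((base_vlan & PORT_BASE_VLAN_FID_3_0_MASK) >> 12) |
	       ((ctrl1 & upper_mask) << 4);
}

int main(void)
{
	uint16_t lo, hi;

	fid_split(0x0abc, 0xff, &lo, &hi);	/* 4096-database chip */
	assert(fid_join(lo, hi, 0xff) == 0x0abc);

	fid_split(0x00bc, 0x0f, &lo, &hi);	/* 256-database chip */
	assert(fid_join(lo, hi, 0x0f) == 0x00bc);
	return 0;
}
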
-static int _mv88e6xxx_port_fid_get(struct dsa_switch *ds, int port, u16 *fid)
+static int _mv88e6xxx_port_fid_get(struct mv88e6xxx_priv_state *ps,
+ int port, u16 *fid)
{
- return _mv88e6xxx_port_fid(ds, port, NULL, fid);
+ return _mv88e6xxx_port_fid(ps, port, NULL, fid);
}
-static int _mv88e6xxx_port_fid_set(struct dsa_switch *ds, int port, u16 fid)
+static int _mv88e6xxx_port_fid_set(struct mv88e6xxx_priv_state *ps,
+ int port, u16 fid)
{
- return _mv88e6xxx_port_fid(ds, port, &fid, NULL);
+ return _mv88e6xxx_port_fid(ps, port, &fid, NULL);
}
-static int _mv88e6xxx_fid_new(struct dsa_switch *ds, u16 *fid)
+static int _mv88e6xxx_fid_new(struct mv88e6xxx_priv_state *ps, u16 *fid)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
DECLARE_BITMAP(fid_bitmap, MV88E6XXX_N_FID);
struct mv88e6xxx_vtu_stu_entry vlan;
int i, err;
@@ -1576,8 +1872,8 @@ static int _mv88e6xxx_fid_new(struct dsa_switch *ds, u16 *fid)
bitmap_zero(fid_bitmap, MV88E6XXX_N_FID);
/* Set every FID bit used by the (un)bridged ports */
- for (i = 0; i < ps->num_ports; ++i) {
- err = _mv88e6xxx_port_fid_get(ds, i, fid);
+ for (i = 0; i < ps->info->num_ports; ++i) {
+ err = _mv88e6xxx_port_fid_get(ps, i, fid);
if (err)
return err;
@@ -1585,12 +1881,12 @@ static int _mv88e6xxx_fid_new(struct dsa_switch *ds, u16 *fid)
}
/* Set every FID bit used by the VLAN entries */
- err = _mv88e6xxx_vtu_vid_write(ds, GLOBAL_VTU_VID_MASK);
+ err = _mv88e6xxx_vtu_vid_write(ps, GLOBAL_VTU_VID_MASK);
if (err)
return err;
do {
- err = _mv88e6xxx_vtu_getnext(ds, &vlan);
+ err = _mv88e6xxx_vtu_getnext(ps, &vlan);
if (err)
return err;
@@ -1604,35 +1900,35 @@ static int _mv88e6xxx_fid_new(struct dsa_switch *ds, u16 *fid)
* databases are not needed. Return the next positive available.
*/
*fid = find_next_zero_bit(fid_bitmap, MV88E6XXX_N_FID, 1);
- if (unlikely(*fid == MV88E6XXX_N_FID))
+ if (unlikely(*fid >= mv88e6xxx_num_databases(ps)))
return -ENOSPC;
/* Clear the database */
- return _mv88e6xxx_atu_flush(ds, *fid, true);
+ return _mv88e6xxx_atu_flush(ps, *fid, true);
}
-static int _mv88e6xxx_vtu_new(struct dsa_switch *ds, u16 vid,
+static int _mv88e6xxx_vtu_new(struct mv88e6xxx_priv_state *ps, u16 vid,
struct mv88e6xxx_vtu_stu_entry *entry)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+ struct dsa_switch *ds = ps->ds;
struct mv88e6xxx_vtu_stu_entry vlan = {
.valid = true,
.vid = vid,
};
int i, err;
- err = _mv88e6xxx_fid_new(ds, &vlan.fid);
+ err = _mv88e6xxx_fid_new(ps, &vlan.fid);
if (err)
return err;
/* exclude all ports except the CPU and DSA ports */
- for (i = 0; i < ps->num_ports; ++i)
+ for (i = 0; i < ps->info->num_ports; ++i)
vlan.data[i] = dsa_is_cpu_port(ds, i) || dsa_is_dsa_port(ds, i)
? GLOBAL_VTU_DATA_MEMBER_TAG_UNMODIFIED
: GLOBAL_VTU_DATA_MEMBER_TAG_NON_MEMBER;
- if (mv88e6xxx_6097_family(ds) || mv88e6xxx_6165_family(ds) ||
- mv88e6xxx_6351_family(ds) || mv88e6xxx_6352_family(ds)) {
+ if (mv88e6xxx_6097_family(ps) || mv88e6xxx_6165_family(ps) ||
+ mv88e6xxx_6351_family(ps) || mv88e6xxx_6352_family(ps)) {
struct mv88e6xxx_vtu_stu_entry vstp;
/* Adding a VTU entry requires a valid STU entry. As VSTP is not
@@ -1640,7 +1936,7 @@ static int _mv88e6xxx_vtu_new(struct dsa_switch *ds, u16 vid,
* entries. Thus, validate the SID 0.
*/
vlan.sid = 0;
- err = _mv88e6xxx_stu_getnext(ds, GLOBAL_VTU_SID_MASK, &vstp);
+ err = _mv88e6xxx_stu_getnext(ps, GLOBAL_VTU_SID_MASK, &vstp);
if (err)
return err;
@@ -1649,7 +1945,7 @@ static int _mv88e6xxx_vtu_new(struct dsa_switch *ds, u16 vid,
vstp.valid = true;
vstp.sid = vlan.sid;
- err = _mv88e6xxx_stu_loadpurge(ds, &vstp);
+ err = _mv88e6xxx_stu_loadpurge(ps, &vstp);
if (err)
return err;
}
@@ -1659,7 +1955,7 @@ static int _mv88e6xxx_vtu_new(struct dsa_switch *ds, u16 vid,
return 0;
}
-static int _mv88e6xxx_vtu_get(struct dsa_switch *ds, u16 vid,
+static int _mv88e6xxx_vtu_get(struct mv88e6xxx_priv_state *ps, u16 vid,
struct mv88e6xxx_vtu_stu_entry *entry, bool creat)
{
int err;
@@ -1667,11 +1963,11 @@ static int _mv88e6xxx_vtu_get(struct dsa_switch *ds, u16 vid,
if (!vid)
return -EINVAL;
- err = _mv88e6xxx_vtu_vid_write(ds, vid - 1);
+ err = _mv88e6xxx_vtu_vid_write(ps, vid - 1);
if (err)
return err;
- err = _mv88e6xxx_vtu_getnext(ds, entry);
+ err = _mv88e6xxx_vtu_getnext(ps, entry);
if (err)
return err;
@@ -1682,7 +1978,7 @@ static int _mv88e6xxx_vtu_get(struct dsa_switch *ds, u16 vid,
* -EOPNOTSUPP to inform bridge about an eventual software VLAN.
*/
- err = _mv88e6xxx_vtu_new(ds, vid, entry);
+ err = _mv88e6xxx_vtu_new(ps, vid, entry);
}
return err;
@@ -1700,12 +1996,12 @@ static int mv88e6xxx_port_check_hw_vlan(struct dsa_switch *ds, int port,
mutex_lock(&ps->smi_mutex);
- err = _mv88e6xxx_vtu_vid_write(ds, vid_begin - 1);
+ err = _mv88e6xxx_vtu_vid_write(ps, vid_begin - 1);
if (err)
goto unlock;
do {
- err = _mv88e6xxx_vtu_getnext(ds, &vlan);
+ err = _mv88e6xxx_vtu_getnext(ps, &vlan);
if (err)
goto unlock;
@@ -1715,7 +2011,7 @@ static int mv88e6xxx_port_check_hw_vlan(struct dsa_switch *ds, int port,
if (vlan.vid > vid_end)
break;
- for (i = 0; i < ps->num_ports; ++i) {
+ for (i = 0; i < ps->info->num_ports; ++i) {
if (dsa_is_dsa_port(ds, i) || dsa_is_cpu_port(ds, i))
continue;
@@ -1749,17 +2045,20 @@ static const char * const mv88e6xxx_port_8021q_mode_names[] = {
[PORT_CONTROL_2_8021Q_SECURE] = "Secure",
};
-int mv88e6xxx_port_vlan_filtering(struct dsa_switch *ds, int port,
- bool vlan_filtering)
+static int mv88e6xxx_port_vlan_filtering(struct dsa_switch *ds, int port,
+ bool vlan_filtering)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
u16 old, new = vlan_filtering ? PORT_CONTROL_2_8021Q_SECURE :
PORT_CONTROL_2_8021Q_DISABLED;
int ret;
+ if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_VTU))
+ return -EOPNOTSUPP;
+
mutex_lock(&ps->smi_mutex);
- ret = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_CONTROL_2);
+ ret = _mv88e6xxx_reg_read(ps, REG_PORT(port), PORT_CONTROL_2);
if (ret < 0)
goto unlock;
@@ -1769,7 +2068,7 @@ int mv88e6xxx_port_vlan_filtering(struct dsa_switch *ds, int port,
ret &= ~PORT_CONTROL_2_8021Q_MASK;
ret |= new & PORT_CONTROL_2_8021Q_MASK;
- ret = _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_CONTROL_2,
+ ret = _mv88e6xxx_reg_write(ps, REG_PORT(port), PORT_CONTROL_2,
ret);
if (ret < 0)
goto unlock;
@@ -1786,12 +2085,16 @@ unlock:
return ret;
}
-int mv88e6xxx_port_vlan_prepare(struct dsa_switch *ds, int port,
- const struct switchdev_obj_port_vlan *vlan,
- struct switchdev_trans *trans)
+static int mv88e6xxx_port_vlan_prepare(struct dsa_switch *ds, int port,
+ const struct switchdev_obj_port_vlan *vlan,
+ struct switchdev_trans *trans)
{
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
int err;
+ if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_VTU))
+ return -EOPNOTSUPP;
+
/* If the requested port doesn't belong to the same bridge as the VLAN
* members, do not support it (yet) and fallback to software VLAN.
*/
@@ -1806,13 +2109,13 @@ int mv88e6xxx_port_vlan_prepare(struct dsa_switch *ds, int port,
return 0;
}
-static int _mv88e6xxx_port_vlan_add(struct dsa_switch *ds, int port, u16 vid,
- bool untagged)
+static int _mv88e6xxx_port_vlan_add(struct mv88e6xxx_priv_state *ps, int port,
+ u16 vid, bool untagged)
{
struct mv88e6xxx_vtu_stu_entry vlan;
int err;
- err = _mv88e6xxx_vtu_get(ds, vid, &vlan, true);
+ err = _mv88e6xxx_vtu_get(ps, vid, &vlan, true);
if (err)
return err;
@@ -1820,43 +2123,43 @@ static int _mv88e6xxx_port_vlan_add(struct dsa_switch *ds, int port, u16 vid,
GLOBAL_VTU_DATA_MEMBER_TAG_UNTAGGED :
GLOBAL_VTU_DATA_MEMBER_TAG_TAGGED;
- return _mv88e6xxx_vtu_loadpurge(ds, &vlan);
+ return _mv88e6xxx_vtu_loadpurge(ps, &vlan);
}
-int mv88e6xxx_port_vlan_add(struct dsa_switch *ds, int port,
- const struct switchdev_obj_port_vlan *vlan,
- struct switchdev_trans *trans)
+static void mv88e6xxx_port_vlan_add(struct dsa_switch *ds, int port,
+ const struct switchdev_obj_port_vlan *vlan,
+ struct switchdev_trans *trans)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
bool untagged = vlan->flags & BRIDGE_VLAN_INFO_UNTAGGED;
bool pvid = vlan->flags & BRIDGE_VLAN_INFO_PVID;
u16 vid;
- int err = 0;
+
+ if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_VTU))
+ return;
mutex_lock(&ps->smi_mutex);
- for (vid = vlan->vid_begin; vid <= vlan->vid_end; ++vid) {
- err = _mv88e6xxx_port_vlan_add(ds, port, vid, untagged);
- if (err)
- goto unlock;
- }
+ for (vid = vlan->vid_begin; vid <= vlan->vid_end; ++vid)
+ if (_mv88e6xxx_port_vlan_add(ps, port, vid, untagged))
+ netdev_err(ds->ports[port], "failed to add VLAN %d%c\n",
+ vid, untagged ? 'u' : 't');
- /* no PVID with ranges, otherwise it's a bug */
- if (pvid)
- err = _mv88e6xxx_port_pvid_set(ds, port, vlan->vid_end);
-unlock:
- mutex_unlock(&ps->smi_mutex);
+ if (pvid && _mv88e6xxx_port_pvid_set(ps, port, vlan->vid_end))
+ netdev_err(ds->ports[port], "failed to set PVID %d\n",
+ vlan->vid_end);
- return err;
+ mutex_unlock(&ps->smi_mutex);
}
-static int _mv88e6xxx_port_vlan_del(struct dsa_switch *ds, int port, u16 vid)
+static int _mv88e6xxx_port_vlan_del(struct mv88e6xxx_priv_state *ps,
+ int port, u16 vid)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+ struct dsa_switch *ds = ps->ds;
struct mv88e6xxx_vtu_stu_entry vlan;
int i, err;
- err = _mv88e6xxx_vtu_get(ds, vid, &vlan, false);
+ err = _mv88e6xxx_vtu_get(ps, vid, &vlan, false);
if (err)
return err;
@@ -1868,7 +2171,7 @@ static int _mv88e6xxx_port_vlan_del(struct dsa_switch *ds, int port, u16 vid)
/* keep the VLAN unless all ports are excluded */
vlan.valid = false;
- for (i = 0; i < ps->num_ports; ++i) {
+ for (i = 0; i < ps->info->num_ports; ++i) {
if (dsa_is_cpu_port(ds, i) || dsa_is_dsa_port(ds, i))
continue;
@@ -1878,33 +2181,36 @@ static int _mv88e6xxx_port_vlan_del(struct dsa_switch *ds, int port, u16 vid)
}
}
- err = _mv88e6xxx_vtu_loadpurge(ds, &vlan);
+ err = _mv88e6xxx_vtu_loadpurge(ps, &vlan);
if (err)
return err;
- return _mv88e6xxx_atu_remove(ds, vlan.fid, port, false);
+ return _mv88e6xxx_atu_remove(ps, vlan.fid, port, false);
}
-int mv88e6xxx_port_vlan_del(struct dsa_switch *ds, int port,
- const struct switchdev_obj_port_vlan *vlan)
+static int mv88e6xxx_port_vlan_del(struct dsa_switch *ds, int port,
+ const struct switchdev_obj_port_vlan *vlan)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
u16 pvid, vid;
int err = 0;
+ if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_VTU))
+ return -EOPNOTSUPP;
+
mutex_lock(&ps->smi_mutex);
- err = _mv88e6xxx_port_pvid_get(ds, port, &pvid);
+ err = _mv88e6xxx_port_pvid_get(ps, port, &pvid);
if (err)
goto unlock;
for (vid = vlan->vid_begin; vid <= vlan->vid_end; ++vid) {
- err = _mv88e6xxx_port_vlan_del(ds, port, vid);
+ err = _mv88e6xxx_port_vlan_del(ps, port, vid);
if (err)
goto unlock;
if (vid == pvid) {
- err = _mv88e6xxx_port_pvid_set(ds, port, 0);
+ err = _mv88e6xxx_port_pvid_set(ps, port, 0);
if (err)
goto unlock;
}
@@ -1916,14 +2222,14 @@ unlock:
return err;
}
-static int _mv88e6xxx_atu_mac_write(struct dsa_switch *ds,
+static int _mv88e6xxx_atu_mac_write(struct mv88e6xxx_priv_state *ps,
const unsigned char *addr)
{
int i, ret;
for (i = 0; i < 3; i++) {
ret = _mv88e6xxx_reg_write(
- ds, REG_GLOBAL, GLOBAL_ATU_MAC_01 + i,
+ ps, REG_GLOBAL, GLOBAL_ATU_MAC_01 + i,
(addr[i * 2] << 8) | addr[i * 2 + 1]);
if (ret < 0)
return ret;
@@ -1932,12 +2238,13 @@ static int _mv88e6xxx_atu_mac_write(struct dsa_switch *ds,
return 0;
}
-static int _mv88e6xxx_atu_mac_read(struct dsa_switch *ds, unsigned char *addr)
+static int _mv88e6xxx_atu_mac_read(struct mv88e6xxx_priv_state *ps,
+ unsigned char *addr)
{
int i, ret;
for (i = 0; i < 3; i++) {
- ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL,
+ ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL,
GLOBAL_ATU_MAC_01 + i);
if (ret < 0)
return ret;
@@ -1948,31 +2255,27 @@ static int _mv88e6xxx_atu_mac_read(struct dsa_switch *ds, unsigned char *addr)
return 0;
}
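
The two MAC helpers above move a six-byte address through the three ATU MAC registers, two bytes per 16-bit word with the high byte first. A small standalone sketch of that packing and its inverse (illustrative only; the function names and main() are invented):

#include <assert.h>
#include <stdint.h>

/* Illustrative only: each of the three ATU MAC registers holds two
 * address bytes, high byte first, as in the helpers above.
 */
static void mac_to_regs(const uint8_t addr[6], uint16_t regs[3])
{
	int i;

	for (i = 0; i < 3; i++)
		regs[i] = ((uint16_t)addr[i * 2] << 8) | addr[i * 2 + 1];
}

static void regs_to_mac(const uint16_t regs[3], uint8_t addr[6])
{
	int i;

	for (i = 0; i < 3; i++) {
		addr[i * 2] = regs[i] >> 8;
		addr[i * 2 + 1] = regs[i] & 0xff;
	}
}

int main(void)
{
	const uint8_t mac[6] = { 0x01, 0x80, 0xc2, 0x00, 0x00, 0x2a };
	uint16_t regs[3];
	uint8_t back[6];
	int i;

	mac_to_regs(mac, regs);
	regs_to_mac(regs, back);
	for (i = 0; i < 6; i++)
		assert(back[i] == mac[i]);
	return 0;
}
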
-static int _mv88e6xxx_atu_load(struct dsa_switch *ds,
+static int _mv88e6xxx_atu_load(struct mv88e6xxx_priv_state *ps,
struct mv88e6xxx_atu_entry *entry)
{
int ret;
- ret = _mv88e6xxx_atu_wait(ds);
+ ret = _mv88e6xxx_atu_wait(ps);
if (ret < 0)
return ret;
- ret = _mv88e6xxx_atu_mac_write(ds, entry->mac);
+ ret = _mv88e6xxx_atu_mac_write(ps, entry->mac);
if (ret < 0)
return ret;
- ret = _mv88e6xxx_atu_data_write(ds, entry);
+ ret = _mv88e6xxx_atu_data_write(ps, entry);
if (ret < 0)
return ret;
- ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_ATU_FID, entry->fid);
- if (ret < 0)
- return ret;
-
- return _mv88e6xxx_atu_cmd(ds, GLOBAL_ATU_OP_LOAD_DB);
+ return _mv88e6xxx_atu_cmd(ps, entry->fid, GLOBAL_ATU_OP_LOAD_DB);
}
-static int _mv88e6xxx_port_fdb_load(struct dsa_switch *ds, int port,
+static int _mv88e6xxx_port_fdb_load(struct mv88e6xxx_priv_state *ps, int port,
const unsigned char *addr, u16 vid,
u8 state)
{
@@ -1982,9 +2285,9 @@ static int _mv88e6xxx_port_fdb_load(struct dsa_switch *ds, int port,
/* Null VLAN ID corresponds to the port private database */
if (vid == 0)
- err = _mv88e6xxx_port_fid_get(ds, port, &vlan.fid);
+ err = _mv88e6xxx_port_fid_get(ps, port, &vlan.fid);
else
- err = _mv88e6xxx_vtu_get(ds, vid, &vlan, false);
+ err = _mv88e6xxx_vtu_get(ps, vid, &vlan, false);
if (err)
return err;
@@ -1996,51 +2299,60 @@ static int _mv88e6xxx_port_fdb_load(struct dsa_switch *ds, int port,
entry.portv_trunkid = BIT(port);
}
- return _mv88e6xxx_atu_load(ds, &entry);
+ return _mv88e6xxx_atu_load(ps, &entry);
}
-int mv88e6xxx_port_fdb_prepare(struct dsa_switch *ds, int port,
- const struct switchdev_obj_port_fdb *fdb,
- struct switchdev_trans *trans)
+static int mv88e6xxx_port_fdb_prepare(struct dsa_switch *ds, int port,
+ const struct switchdev_obj_port_fdb *fdb,
+ struct switchdev_trans *trans)
{
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+
+ if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_ATU))
+ return -EOPNOTSUPP;
+
/* We don't need any dynamic resource from the kernel (yet),
* so skip the prepare phase.
*/
return 0;
}
-int mv88e6xxx_port_fdb_add(struct dsa_switch *ds, int port,
- const struct switchdev_obj_port_fdb *fdb,
- struct switchdev_trans *trans)
+static void mv88e6xxx_port_fdb_add(struct dsa_switch *ds, int port,
+ const struct switchdev_obj_port_fdb *fdb,
+ struct switchdev_trans *trans)
{
int state = is_multicast_ether_addr(fdb->addr) ?
GLOBAL_ATU_DATA_STATE_MC_STATIC :
GLOBAL_ATU_DATA_STATE_UC_STATIC;
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
- int ret;
+
+ if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_ATU))
+ return;
mutex_lock(&ps->smi_mutex);
- ret = _mv88e6xxx_port_fdb_load(ds, port, fdb->addr, fdb->vid, state);
+ if (_mv88e6xxx_port_fdb_load(ps, port, fdb->addr, fdb->vid, state))
+ netdev_err(ds->ports[port], "failed to load MAC address\n");
mutex_unlock(&ps->smi_mutex);
-
- return ret;
}
-int mv88e6xxx_port_fdb_del(struct dsa_switch *ds, int port,
- const struct switchdev_obj_port_fdb *fdb)
+static int mv88e6xxx_port_fdb_del(struct dsa_switch *ds, int port,
+ const struct switchdev_obj_port_fdb *fdb)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
int ret;
+ if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_ATU))
+ return -EOPNOTSUPP;
+
mutex_lock(&ps->smi_mutex);
- ret = _mv88e6xxx_port_fdb_load(ds, port, fdb->addr, fdb->vid,
+ ret = _mv88e6xxx_port_fdb_load(ps, port, fdb->addr, fdb->vid,
GLOBAL_ATU_DATA_STATE_UNUSED);
mutex_unlock(&ps->smi_mutex);
return ret;
}
-static int _mv88e6xxx_atu_getnext(struct dsa_switch *ds, u16 fid,
+static int _mv88e6xxx_atu_getnext(struct mv88e6xxx_priv_state *ps, u16 fid,
struct mv88e6xxx_atu_entry *entry)
{
struct mv88e6xxx_atu_entry next = { 0 };
@@ -2048,23 +2360,19 @@ static int _mv88e6xxx_atu_getnext(struct dsa_switch *ds, u16 fid,
next.fid = fid;
- ret = _mv88e6xxx_atu_wait(ds);
+ ret = _mv88e6xxx_atu_wait(ps);
if (ret < 0)
return ret;
- ret = _mv88e6xxx_reg_write(ds, REG_GLOBAL, GLOBAL_ATU_FID, fid);
+ ret = _mv88e6xxx_atu_cmd(ps, fid, GLOBAL_ATU_OP_GET_NEXT_DB);
if (ret < 0)
return ret;
- ret = _mv88e6xxx_atu_cmd(ds, GLOBAL_ATU_OP_GET_NEXT_DB);
+ ret = _mv88e6xxx_atu_mac_read(ps, next.mac);
if (ret < 0)
return ret;
- ret = _mv88e6xxx_atu_mac_read(ds, next.mac);
- if (ret < 0)
- return ret;
-
- ret = _mv88e6xxx_reg_read(ds, REG_GLOBAL, GLOBAL_ATU_DATA);
+ ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, GLOBAL_ATU_DATA);
if (ret < 0)
return ret;
@@ -2089,8 +2397,8 @@ static int _mv88e6xxx_atu_getnext(struct dsa_switch *ds, u16 fid,
return 0;
}
-static int _mv88e6xxx_port_fdb_dump_one(struct dsa_switch *ds, u16 fid, u16 vid,
- int port,
+static int _mv88e6xxx_port_fdb_dump_one(struct mv88e6xxx_priv_state *ps,
+ u16 fid, u16 vid, int port,
struct switchdev_obj_port_fdb *fdb,
int (*cb)(struct switchdev_obj *obj))
{
@@ -2099,12 +2407,12 @@ static int _mv88e6xxx_port_fdb_dump_one(struct dsa_switch *ds, u16 fid, u16 vid,
};
int err;
- err = _mv88e6xxx_atu_mac_write(ds, addr.mac);
+ err = _mv88e6xxx_atu_mac_write(ps, addr.mac);
if (err)
return err;
do {
- err = _mv88e6xxx_atu_getnext(ds, fid, &addr);
+ err = _mv88e6xxx_atu_getnext(ps, fid, &addr);
if (err)
break;
@@ -2130,9 +2438,9 @@ static int _mv88e6xxx_port_fdb_dump_one(struct dsa_switch *ds, u16 fid, u16 vid,
return err;
}
-int mv88e6xxx_port_fdb_dump(struct dsa_switch *ds, int port,
- struct switchdev_obj_port_fdb *fdb,
- int (*cb)(struct switchdev_obj *obj))
+static int mv88e6xxx_port_fdb_dump(struct dsa_switch *ds, int port,
+ struct switchdev_obj_port_fdb *fdb,
+ int (*cb)(struct switchdev_obj *obj))
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
struct mv88e6xxx_vtu_stu_entry vlan = {
@@ -2141,31 +2449,34 @@ int mv88e6xxx_port_fdb_dump(struct dsa_switch *ds, int port,
u16 fid;
int err;
+ if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_ATU))
+ return -EOPNOTSUPP;
+
mutex_lock(&ps->smi_mutex);
/* Dump port's default Filtering Information Database (VLAN ID 0) */
- err = _mv88e6xxx_port_fid_get(ds, port, &fid);
+ err = _mv88e6xxx_port_fid_get(ps, port, &fid);
if (err)
goto unlock;
- err = _mv88e6xxx_port_fdb_dump_one(ds, fid, 0, port, fdb, cb);
+ err = _mv88e6xxx_port_fdb_dump_one(ps, fid, 0, port, fdb, cb);
if (err)
goto unlock;
/* Dump VLANs' Filtering Information Databases */
- err = _mv88e6xxx_vtu_vid_write(ds, vlan.vid);
+ err = _mv88e6xxx_vtu_vid_write(ps, vlan.vid);
if (err)
goto unlock;
do {
- err = _mv88e6xxx_vtu_getnext(ds, &vlan);
+ err = _mv88e6xxx_vtu_getnext(ps, &vlan);
if (err)
break;
if (!vlan.valid)
break;
- err = _mv88e6xxx_port_fdb_dump_one(ds, vlan.fid, vlan.vid, port,
+ err = _mv88e6xxx_port_fdb_dump_one(ps, vlan.fid, vlan.vid, port,
fdb, cb);
if (err)
break;
@@ -2177,20 +2488,23 @@ unlock:
return err;
}
-int mv88e6xxx_port_bridge_join(struct dsa_switch *ds, int port,
- struct net_device *bridge)
+static int mv88e6xxx_port_bridge_join(struct dsa_switch *ds, int port,
+ struct net_device *bridge)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
int i, err = 0;
+ if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_VLANTABLE))
+ return -EOPNOTSUPP;
+
mutex_lock(&ps->smi_mutex);
/* Assign the bridge and remap each port's VLANTable */
ps->ports[port].bridge_dev = bridge;
- for (i = 0; i < ps->num_ports; ++i) {
+ for (i = 0; i < ps->info->num_ports; ++i) {
if (ps->ports[i].bridge_dev == bridge) {
- err = _mv88e6xxx_port_based_vlan_map(ds, i);
+ err = _mv88e6xxx_port_based_vlan_map(ps, i);
if (err)
break;
}
@@ -2201,89 +2515,134 @@ int mv88e6xxx_port_bridge_join(struct dsa_switch *ds, int port,
return err;
}
-void mv88e6xxx_port_bridge_leave(struct dsa_switch *ds, int port)
+static void mv88e6xxx_port_bridge_leave(struct dsa_switch *ds, int port)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
struct net_device *bridge = ps->ports[port].bridge_dev;
int i;
+ if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_VLANTABLE))
+ return;
+
mutex_lock(&ps->smi_mutex);
/* Unassign the bridge and remap each port's VLANTable */
ps->ports[port].bridge_dev = NULL;
- for (i = 0; i < ps->num_ports; ++i)
+ for (i = 0; i < ps->info->num_ports; ++i)
if (i == port || ps->ports[i].bridge_dev == bridge)
- if (_mv88e6xxx_port_based_vlan_map(ds, i))
+ if (_mv88e6xxx_port_based_vlan_map(ps, i))
netdev_warn(ds->ports[i], "failed to remap\n");
mutex_unlock(&ps->smi_mutex);
}
-static void mv88e6xxx_bridge_work(struct work_struct *work)
+static int _mv88e6xxx_phy_page_write(struct mv88e6xxx_priv_state *ps,
+ int port, int page, int reg, int val)
{
- struct mv88e6xxx_priv_state *ps;
- struct dsa_switch *ds;
- int port;
-
- ps = container_of(work, struct mv88e6xxx_priv_state, bridge_work);
- ds = ((struct dsa_switch *)ps) - 1;
+ int ret;
- mutex_lock(&ps->smi_mutex);
+ ret = _mv88e6xxx_phy_write_indirect(ps, port, 0x16, page);
+ if (ret < 0)
+ goto restore_page_0;
- for (port = 0; port < ps->num_ports; ++port)
- if (test_and_clear_bit(port, ps->port_state_update_mask) &&
- _mv88e6xxx_port_state(ds, port, ps->ports[port].state))
- netdev_warn(ds->ports[port], "failed to update state to %s\n",
- mv88e6xxx_port_state_names[ps->ports[port].state]);
+ ret = _mv88e6xxx_phy_write_indirect(ps, port, reg, val);
+restore_page_0:
+ _mv88e6xxx_phy_write_indirect(ps, port, 0x16, 0x0);
- mutex_unlock(&ps->smi_mutex);
+ return ret;
}
-static int _mv88e6xxx_phy_page_write(struct dsa_switch *ds, int port, int page,
- int reg, int val)
+static int _mv88e6xxx_phy_page_read(struct mv88e6xxx_priv_state *ps,
+ int port, int page, int reg)
{
int ret;
- ret = _mv88e6xxx_phy_write_indirect(ds, port, 0x16, page);
+ ret = _mv88e6xxx_phy_write_indirect(ps, port, 0x16, page);
if (ret < 0)
goto restore_page_0;
- ret = _mv88e6xxx_phy_write_indirect(ds, port, reg, val);
+ ret = _mv88e6xxx_phy_read_indirect(ps, port, reg);
restore_page_0:
- _mv88e6xxx_phy_write_indirect(ds, port, 0x16, 0x0);
+ _mv88e6xxx_phy_write_indirect(ps, port, 0x16, 0x0);
return ret;
}
-static int _mv88e6xxx_phy_page_read(struct dsa_switch *ds, int port, int page,
- int reg)
+static int mv88e6xxx_switch_reset(struct mv88e6xxx_priv_state *ps)
{
+ bool ppu_active = mv88e6xxx_has(ps, MV88E6XXX_FLAG_PPU_ACTIVE);
+ u16 is_reset = (ppu_active ? 0x8800 : 0xc800);
+ struct gpio_desc *gpiod = ps->reset;
+ unsigned long timeout;
int ret;
+ int i;
- ret = _mv88e6xxx_phy_write_indirect(ds, port, 0x16, page);
- if (ret < 0)
- goto restore_page_0;
+ /* Set all ports to the disabled state. */
+ for (i = 0; i < ps->info->num_ports; i++) {
+ ret = _mv88e6xxx_reg_read(ps, REG_PORT(i), PORT_CONTROL);
+ if (ret < 0)
+ return ret;
- ret = _mv88e6xxx_phy_read_indirect(ds, port, reg);
-restore_page_0:
- _mv88e6xxx_phy_write_indirect(ds, port, 0x16, 0x0);
+ ret = _mv88e6xxx_reg_write(ps, REG_PORT(i), PORT_CONTROL,
+ ret & 0xfffc);
+ if (ret)
+ return ret;
+ }
+
+ /* Wait for transmit queues to drain. */
+ usleep_range(2000, 4000);
+
+ /* If there is a gpio connected to the reset pin, toggle it */
+ if (gpiod) {
+ gpiod_set_value_cansleep(gpiod, 1);
+ usleep_range(10000, 20000);
+ gpiod_set_value_cansleep(gpiod, 0);
+ usleep_range(10000, 20000);
+ }
+
+ /* Reset the switch. Keep the PPU active if requested. The PPU
+ * needs to be active to support indirect phy register access
+ * through global registers 0x18 and 0x19.
+ */
+ if (ppu_active)
+ ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, 0x04, 0xc000);
+ else
+ ret = _mv88e6xxx_reg_write(ps, REG_GLOBAL, 0x04, 0xc400);
+ if (ret)
+ return ret;
+
+ /* Wait up to one second for reset to complete. */
+ timeout = jiffies + 1 * HZ;
+ while (time_before(jiffies, timeout)) {
+ ret = _mv88e6xxx_reg_read(ps, REG_GLOBAL, 0x00);
+ if (ret < 0)
+ return ret;
+
+ if ((ret & is_reset) == is_reset)
+ break;
+ usleep_range(1000, 2000);
+ }
+ if (time_after(jiffies, timeout))
+ ret = -ETIMEDOUT;
+ else
+ ret = 0;
return ret;
}
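
The reset-complete test above waits until every bit of the expected mask is set in global register 0x00 — 0x8800 when the PPU is kept active, 0xc800 otherwise — giving up after roughly one second. A userspace-style sketch of the same check (illustrative only; the register read is stubbed out and a retry count stands in for the jiffies deadline):

#include <stdbool.h>
#include <stdint.h>

/* Illustrative only: the completion test used above. */
static bool reset_done(uint16_t status, bool ppu_active)
{
	uint16_t is_reset = ppu_active ? 0x8800 : 0xc800;

	return (status & is_reset) == is_reset;
}

/* The driver bounds this loop with a jiffies deadline and sleeps
 * between reads; a plain retry count stands in for that here.
 */
static int wait_for_reset(uint16_t (*read_status)(void), bool ppu_active,
			  int retries)
{
	while (retries--)
		if (reset_done(read_status(), ppu_active))
			return 0;

	return -1;	/* timed out, like -ETIMEDOUT in the driver */
}

static uint16_t fake_status(void)
{
	return 0xc811;	/* pretend the ready bits are already set */
}

int main(void)
{
	return wait_for_reset(fake_status, false, 10);
}
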
-static int mv88e6xxx_power_on_serdes(struct dsa_switch *ds)
+static int mv88e6xxx_power_on_serdes(struct mv88e6xxx_priv_state *ps)
{
int ret;
- ret = _mv88e6xxx_phy_page_read(ds, REG_FIBER_SERDES, PAGE_FIBER_SERDES,
+ ret = _mv88e6xxx_phy_page_read(ps, REG_FIBER_SERDES, PAGE_FIBER_SERDES,
MII_BMCR);
if (ret < 0)
return ret;
if (ret & BMCR_PDOWN) {
ret &= ~BMCR_PDOWN;
- ret = _mv88e6xxx_phy_page_write(ds, REG_FIBER_SERDES,
+ ret = _mv88e6xxx_phy_page_write(ps, REG_FIBER_SERDES,
PAGE_FIBER_SERDES, MII_BMCR,
ret);
}
@@ -2291,32 +2650,30 @@ static int mv88e6xxx_power_on_serdes(struct dsa_switch *ds)
return ret;
}
-static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port)
+static int mv88e6xxx_setup_port(struct mv88e6xxx_priv_state *ps, int port)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+ struct dsa_switch *ds = ps->ds;
int ret;
u16 reg;
- mutex_lock(&ps->smi_mutex);
-
- if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
- mv88e6xxx_6165_family(ds) || mv88e6xxx_6097_family(ds) ||
- mv88e6xxx_6185_family(ds) || mv88e6xxx_6095_family(ds) ||
- mv88e6xxx_6065_family(ds) || mv88e6xxx_6320_family(ds)) {
+ if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+ mv88e6xxx_6165_family(ps) || mv88e6xxx_6097_family(ps) ||
+ mv88e6xxx_6185_family(ps) || mv88e6xxx_6095_family(ps) ||
+ mv88e6xxx_6065_family(ps) || mv88e6xxx_6320_family(ps)) {
/* MAC Forcing register: don't force link, speed,
* duplex or flow control state to any particular
* values on physical ports, but force the CPU port
* and all DSA ports to their maximum bandwidth and
* full duplex.
*/
- reg = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_PCS_CTRL);
+ reg = _mv88e6xxx_reg_read(ps, REG_PORT(port), PORT_PCS_CTRL);
if (dsa_is_cpu_port(ds, port) || dsa_is_dsa_port(ds, port)) {
reg &= ~PORT_PCS_CTRL_UNFORCED;
reg |= PORT_PCS_CTRL_FORCE_LINK |
PORT_PCS_CTRL_LINK_UP |
PORT_PCS_CTRL_DUPLEX_FULL |
PORT_PCS_CTRL_FORCE_DUPLEX;
- if (mv88e6xxx_6065_family(ds))
+ if (mv88e6xxx_6065_family(ps))
reg |= PORT_PCS_CTRL_100;
else
reg |= PORT_PCS_CTRL_1000;
@@ -2324,10 +2681,10 @@ static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port)
reg |= PORT_PCS_CTRL_UNFORCED;
}
- ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+ ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
PORT_PCS_CTRL, reg);
if (ret)
- goto abort;
+ return ret;
}
/* Port Control: disable Drop-on-Unlock, disable Drop-on-Lock,
@@ -2345,19 +2702,19 @@ static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port)
* forwarding of unknown unicasts and multicasts.
*/
reg = 0;
- if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
- mv88e6xxx_6165_family(ds) || mv88e6xxx_6097_family(ds) ||
- mv88e6xxx_6095_family(ds) || mv88e6xxx_6065_family(ds) ||
- mv88e6xxx_6185_family(ds) || mv88e6xxx_6320_family(ds))
+ if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+ mv88e6xxx_6165_family(ps) || mv88e6xxx_6097_family(ps) ||
+ mv88e6xxx_6095_family(ps) || mv88e6xxx_6065_family(ps) ||
+ mv88e6xxx_6185_family(ps) || mv88e6xxx_6320_family(ps))
reg = PORT_CONTROL_IGMP_MLD_SNOOP |
PORT_CONTROL_USE_TAG | PORT_CONTROL_USE_IP |
PORT_CONTROL_STATE_FORWARDING;
if (dsa_is_cpu_port(ds, port)) {
- if (mv88e6xxx_6095_family(ds) || mv88e6xxx_6185_family(ds))
+ if (mv88e6xxx_6095_family(ps) || mv88e6xxx_6185_family(ps))
reg |= PORT_CONTROL_DSA_TAG;
- if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
- mv88e6xxx_6165_family(ds) || mv88e6xxx_6097_family(ds) ||
- mv88e6xxx_6320_family(ds)) {
+ if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+ mv88e6xxx_6165_family(ps) || mv88e6xxx_6097_family(ps) ||
+ mv88e6xxx_6320_family(ps)) {
if (ds->dst->tag_protocol == DSA_TAG_PROTO_EDSA)
reg |= PORT_CONTROL_FRAME_ETHER_TYPE_DSA;
else
@@ -2366,20 +2723,20 @@ static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port)
PORT_CONTROL_FORWARD_UNKNOWN_MC;
}
- if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
- mv88e6xxx_6165_family(ds) || mv88e6xxx_6097_family(ds) ||
- mv88e6xxx_6095_family(ds) || mv88e6xxx_6065_family(ds) ||
- mv88e6xxx_6185_family(ds) || mv88e6xxx_6320_family(ds)) {
+ if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+ mv88e6xxx_6165_family(ps) || mv88e6xxx_6097_family(ps) ||
+ mv88e6xxx_6095_family(ps) || mv88e6xxx_6065_family(ps) ||
+ mv88e6xxx_6185_family(ps) || mv88e6xxx_6320_family(ps)) {
if (ds->dst->tag_protocol == DSA_TAG_PROTO_EDSA)
reg |= PORT_CONTROL_EGRESS_ADD_TAG;
}
}
if (dsa_is_dsa_port(ds, port)) {
- if (mv88e6xxx_6095_family(ds) || mv88e6xxx_6185_family(ds))
+ if (mv88e6xxx_6095_family(ps) || mv88e6xxx_6185_family(ps))
reg |= PORT_CONTROL_DSA_TAG;
- if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
- mv88e6xxx_6165_family(ds) || mv88e6xxx_6097_family(ds) ||
- mv88e6xxx_6320_family(ds)) {
+ if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+ mv88e6xxx_6165_family(ps) || mv88e6xxx_6097_family(ps) ||
+ mv88e6xxx_6320_family(ps)) {
reg |= PORT_CONTROL_FRAME_MODE_DSA;
}
@@ -2388,26 +2745,26 @@ static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port)
PORT_CONTROL_FORWARD_UNKNOWN_MC;
}
if (reg) {
- ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+ ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
PORT_CONTROL, reg);
if (ret)
- goto abort;
+ return ret;
}
/* If this port is connected to a SerDes, make sure the SerDes is not
* powered down.
*/
- if (mv88e6xxx_6352_family(ds)) {
- ret = _mv88e6xxx_reg_read(ds, REG_PORT(port), PORT_STATUS);
+ if (mv88e6xxx_6352_family(ps)) {
+ ret = _mv88e6xxx_reg_read(ps, REG_PORT(port), PORT_STATUS);
if (ret < 0)
- goto abort;
+ return ret;
ret &= PORT_STATUS_CMODE_MASK;
if ((ret == PORT_STATUS_CMODE_100BASE_X) ||
(ret == PORT_STATUS_CMODE_1000BASE_X) ||
(ret == PORT_STATUS_CMODE_SGMII)) {
- ret = mv88e6xxx_power_on_serdes(ds);
+ ret = mv88e6xxx_power_on_serdes(ps);
if (ret < 0)
- goto abort;
+ return ret;
}
}
@@ -2418,16 +2775,17 @@ static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port)
* copy of all transmitted/received frames on this port to the CPU.
*/
reg = 0;
- if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
- mv88e6xxx_6165_family(ds) || mv88e6xxx_6097_family(ds) ||
- mv88e6xxx_6095_family(ds) || mv88e6xxx_6320_family(ds))
+ if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+ mv88e6xxx_6165_family(ps) || mv88e6xxx_6097_family(ps) ||
+ mv88e6xxx_6095_family(ps) || mv88e6xxx_6320_family(ps) ||
+ mv88e6xxx_6185_family(ps))
reg = PORT_CONTROL_2_MAP_DA;
- if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
- mv88e6xxx_6165_family(ds) || mv88e6xxx_6320_family(ds))
+ if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+ mv88e6xxx_6165_family(ps) || mv88e6xxx_6320_family(ps))
reg |= PORT_CONTROL_2_JUMBO_10240;
- if (mv88e6xxx_6095_family(ds) || mv88e6xxx_6185_family(ds)) {
+ if (mv88e6xxx_6095_family(ps) || mv88e6xxx_6185_family(ps)) {
/* Set the upstream port this port should use */
reg |= dsa_upstream_port(ds);
/* enable forwarding of unknown multicast addresses to
@@ -2440,10 +2798,10 @@ static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port)
reg |= PORT_CONTROL_2_8021Q_DISABLED;
if (reg) {
- ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+ ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
PORT_CONTROL_2, reg);
if (ret)
- goto abort;
+ return ret;
}
/* Port Association Vector: when learning source addresses
@@ -2456,300 +2814,344 @@ static int mv88e6xxx_setup_port(struct dsa_switch *ds, int port)
if (dsa_is_cpu_port(ds, port))
reg = 0;
- ret = _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_ASSOC_VECTOR, reg);
+ ret = _mv88e6xxx_reg_write(ps, REG_PORT(port), PORT_ASSOC_VECTOR, reg);
if (ret)
- goto abort;
+ return ret;
/* Egress rate control 2: disable egress rate control. */
- ret = _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_RATE_CONTROL_2,
+ ret = _mv88e6xxx_reg_write(ps, REG_PORT(port), PORT_RATE_CONTROL_2,
0x0000);
if (ret)
- goto abort;
+ return ret;
- if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
- mv88e6xxx_6165_family(ds) || mv88e6xxx_6097_family(ds) ||
- mv88e6xxx_6320_family(ds)) {
+ if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+ mv88e6xxx_6165_family(ps) || mv88e6xxx_6097_family(ps) ||
+ mv88e6xxx_6320_family(ps)) {
/* Do not limit the period of time that this port can
* be paused for by the remote end or the period of
* time that this port can pause the remote end.
*/
- ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+ ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
PORT_PAUSE_CTRL, 0x0000);
if (ret)
- goto abort;
+ return ret;
/* Port ATU control: disable limiting the number of
* address database entries that this port is allowed
* to use.
*/
- ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+ ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
PORT_ATU_CONTROL, 0x0000);
/* Priority Override: disable DA, SA and VTU priority
* override.
*/
- ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+ ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
PORT_PRI_OVERRIDE, 0x0000);
if (ret)
- goto abort;
+ return ret;
/* Port Ethertype: use the Ethertype DSA Ethertype
* value.
*/
- ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+ ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
PORT_ETH_TYPE, ETH_P_EDSA);
if (ret)
- goto abort;
+ return ret;
/* Tag Remap: use an identity 802.1p prio -> switch
* prio mapping.
*/
- ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+ ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
PORT_TAG_REGMAP_0123, 0x3210);
if (ret)
- goto abort;
+ return ret;
/* Tag Remap 2: use an identity 802.1p prio -> switch
* prio mapping.
*/
- ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+ ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
PORT_TAG_REGMAP_4567, 0x7654);
if (ret)
- goto abort;
+ return ret;
}
- if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
- mv88e6xxx_6165_family(ds) || mv88e6xxx_6097_family(ds) ||
- mv88e6xxx_6185_family(ds) || mv88e6xxx_6095_family(ds) ||
- mv88e6xxx_6320_family(ds)) {
+ if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+ mv88e6xxx_6165_family(ps) || mv88e6xxx_6097_family(ps) ||
+ mv88e6xxx_6185_family(ps) || mv88e6xxx_6095_family(ps) ||
+ mv88e6xxx_6320_family(ps)) {
/* Rate Control: disable ingress rate limiting. */
- ret = _mv88e6xxx_reg_write(ds, REG_PORT(port),
+ ret = _mv88e6xxx_reg_write(ps, REG_PORT(port),
PORT_RATE_CONTROL, 0x0001);
if (ret)
- goto abort;
+ return ret;
}
/* Port Control 1: disable trunking, disable sending
* learning messages to this port.
*/
- ret = _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_CONTROL_1, 0x0000);
+ ret = _mv88e6xxx_reg_write(ps, REG_PORT(port), PORT_CONTROL_1, 0x0000);
if (ret)
- goto abort;
+ return ret;
/* Port based VLAN map: give each port the same default address
* database, and allow bidirectional communication between the
* CPU and DSA port(s), and the other ports.
*/
- ret = _mv88e6xxx_port_fid_set(ds, port, 0);
+ ret = _mv88e6xxx_port_fid_set(ps, port, 0);
if (ret)
- goto abort;
+ return ret;
- ret = _mv88e6xxx_port_based_vlan_map(ds, port);
+ ret = _mv88e6xxx_port_based_vlan_map(ps, port);
if (ret)
- goto abort;
+ return ret;
/* Default VLAN ID and priority: don't set a default VLAN
* ID, and set the default packet priority to zero.
*/
- ret = _mv88e6xxx_reg_write(ds, REG_PORT(port), PORT_DEFAULT_VLAN,
+ ret = _mv88e6xxx_reg_write(ps, REG_PORT(port), PORT_DEFAULT_VLAN,
0x0000);
-abort:
- mutex_unlock(&ps->smi_mutex);
- return ret;
-}
-
-int mv88e6xxx_setup_ports(struct dsa_switch *ds)
-{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
- int ret;
- int i;
+ if (ret)
+ return ret;
- for (i = 0; i < ps->num_ports; i++) {
- ret = mv88e6xxx_setup_port(ds, i);
- if (ret < 0)
- return ret;
- }
return 0;
}
-int mv88e6xxx_setup_common(struct dsa_switch *ds)
+static int mv88e6xxx_setup_global(struct mv88e6xxx_priv_state *ps)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
- mutex_init(&ps->smi_mutex);
+ struct dsa_switch *ds = ps->ds;
+ u32 upstream_port = dsa_upstream_port(ds);
+ u16 reg;
+ int err;
+ int i;
- ps->id = REG_READ(REG_PORT(0), PORT_SWITCH_ID) & 0xfff0;
+ /* Enable the PHY Polling Unit if present, don't discard any packets,
+ * and mask all interrupt sources.
+ */
+ reg = 0;
+ if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_PPU) ||
+ mv88e6xxx_has(ps, MV88E6XXX_FLAG_PPU_ACTIVE))
+ reg |= GLOBAL_CONTROL_PPU_ENABLE;
- INIT_WORK(&ps->bridge_work, mv88e6xxx_bridge_work);
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_CONTROL, reg);
+ if (err)
+ return err;
- return 0;
-}
+ /* Configure the upstream port, and configure it as the port to which
+ * ingress and egress and ARP monitor frames are to be sent.
+ */
+ reg = upstream_port << GLOBAL_MONITOR_CONTROL_INGRESS_SHIFT |
+ upstream_port << GLOBAL_MONITOR_CONTROL_EGRESS_SHIFT |
+ upstream_port << GLOBAL_MONITOR_CONTROL_ARP_SHIFT;
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_MONITOR_CONTROL, reg);
+ if (err)
+ return err;
-int mv88e6xxx_setup_global(struct dsa_switch *ds)
-{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
- int ret;
- int i;
+ /* Disable remote management, and set the switch's DSA device number. */
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_CONTROL_2,
+ GLOBAL_CONTROL_2_MULTIPLE_CASCADE |
+ (ds->index & 0x1f));
+ if (err)
+ return err;
/* Set the default address aging time to 5 minutes, and
* enable address learn messages to be sent to all message
* ports.
*/
- REG_WRITE(REG_GLOBAL, GLOBAL_ATU_CONTROL,
- 0x0140 | GLOBAL_ATU_CONTROL_LEARN2ALL);
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_ATU_CONTROL,
+ 0x0140 | GLOBAL_ATU_CONTROL_LEARN2ALL);
+ if (err)
+ return err;
/* Configure the IP ToS mapping registers. */
- REG_WRITE(REG_GLOBAL, GLOBAL_IP_PRI_0, 0x0000);
- REG_WRITE(REG_GLOBAL, GLOBAL_IP_PRI_1, 0x0000);
- REG_WRITE(REG_GLOBAL, GLOBAL_IP_PRI_2, 0x5555);
- REG_WRITE(REG_GLOBAL, GLOBAL_IP_PRI_3, 0x5555);
- REG_WRITE(REG_GLOBAL, GLOBAL_IP_PRI_4, 0xaaaa);
- REG_WRITE(REG_GLOBAL, GLOBAL_IP_PRI_5, 0xaaaa);
- REG_WRITE(REG_GLOBAL, GLOBAL_IP_PRI_6, 0xffff);
- REG_WRITE(REG_GLOBAL, GLOBAL_IP_PRI_7, 0xffff);
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_IP_PRI_0, 0x0000);
+ if (err)
+ return err;
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_IP_PRI_1, 0x0000);
+ if (err)
+ return err;
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_IP_PRI_2, 0x5555);
+ if (err)
+ return err;
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_IP_PRI_3, 0x5555);
+ if (err)
+ return err;
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_IP_PRI_4, 0xaaaa);
+ if (err)
+ return err;
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_IP_PRI_5, 0xaaaa);
+ if (err)
+ return err;
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_IP_PRI_6, 0xffff);
+ if (err)
+ return err;
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_IP_PRI_7, 0xffff);
+ if (err)
+ return err;
/* Configure the IEEE 802.1p priority mapping register. */
- REG_WRITE(REG_GLOBAL, GLOBAL_IEEE_PRI, 0xfa41);
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_IEEE_PRI, 0xfa41);
+ if (err)
+ return err;
/* Send all frames with destination addresses matching
* 01:80:c2:00:00:0x to the CPU port.
*/
- REG_WRITE(REG_GLOBAL2, GLOBAL2_MGMT_EN_0X, 0xffff);
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL2, GLOBAL2_MGMT_EN_0X, 0xffff);
+ if (err)
+ return err;
/* Ignore removed tag data on doubly tagged packets, disable
* flow control messages, force flow control priority to the
* highest, and send all special multicast frames to the CPU
* port at the highest priority.
*/
- REG_WRITE(REG_GLOBAL2, GLOBAL2_SWITCH_MGMT,
- 0x7 | GLOBAL2_SWITCH_MGMT_RSVD2CPU | 0x70 |
- GLOBAL2_SWITCH_MGMT_FORCE_FLOW_CTRL_PRI);
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL2, GLOBAL2_SWITCH_MGMT,
+ 0x7 | GLOBAL2_SWITCH_MGMT_RSVD2CPU | 0x70 |
+ GLOBAL2_SWITCH_MGMT_FORCE_FLOW_CTRL_PRI);
+ if (err)
+ return err;
/* Program the DSA routing table. */
for (i = 0; i < 32; i++) {
int nexthop = 0x1f;
- if (ds->pd->rtable &&
- i != ds->index && i < ds->dst->pd->nr_chips)
- nexthop = ds->pd->rtable[i] & 0x1f;
+ if (ps->ds->cd->rtable &&
+ i != ps->ds->index && i < ps->ds->dst->pd->nr_chips)
+ nexthop = ps->ds->cd->rtable[i] & 0x1f;
- REG_WRITE(REG_GLOBAL2, GLOBAL2_DEVICE_MAPPING,
- GLOBAL2_DEVICE_MAPPING_UPDATE |
- (i << GLOBAL2_DEVICE_MAPPING_TARGET_SHIFT) |
- nexthop);
+ err = _mv88e6xxx_reg_write(
+ ps, REG_GLOBAL2,
+ GLOBAL2_DEVICE_MAPPING,
+ GLOBAL2_DEVICE_MAPPING_UPDATE |
+ (i << GLOBAL2_DEVICE_MAPPING_TARGET_SHIFT) | nexthop);
+ if (err)
+ return err;
}
/* Clear all trunk masks. */
- for (i = 0; i < 8; i++)
- REG_WRITE(REG_GLOBAL2, GLOBAL2_TRUNK_MASK,
- 0x8000 | (i << GLOBAL2_TRUNK_MASK_NUM_SHIFT) |
- ((1 << ps->num_ports) - 1));
+ for (i = 0; i < 8; i++) {
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL2, GLOBAL2_TRUNK_MASK,
+ 0x8000 |
+ (i << GLOBAL2_TRUNK_MASK_NUM_SHIFT) |
+ ((1 << ps->info->num_ports) - 1));
+ if (err)
+ return err;
+ }
/* Clear all trunk mappings. */
- for (i = 0; i < 16; i++)
- REG_WRITE(REG_GLOBAL2, GLOBAL2_TRUNK_MAPPING,
- GLOBAL2_TRUNK_MAPPING_UPDATE |
- (i << GLOBAL2_TRUNK_MAPPING_ID_SHIFT));
-
- if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
- mv88e6xxx_6165_family(ds) || mv88e6xxx_6097_family(ds) ||
- mv88e6xxx_6320_family(ds)) {
+ for (i = 0; i < 16; i++) {
+ err = _mv88e6xxx_reg_write(
+ ps, REG_GLOBAL2,
+ GLOBAL2_TRUNK_MAPPING,
+ GLOBAL2_TRUNK_MAPPING_UPDATE |
+ (i << GLOBAL2_TRUNK_MAPPING_ID_SHIFT));
+ if (err)
+ return err;
+ }
+
+ if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+ mv88e6xxx_6165_family(ps) || mv88e6xxx_6097_family(ps) ||
+ mv88e6xxx_6320_family(ps)) {
/* Send all frames with destination addresses matching
* 01:80:c2:00:00:2x to the CPU port.
*/
- REG_WRITE(REG_GLOBAL2, GLOBAL2_MGMT_EN_2X, 0xffff);
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL2,
+ GLOBAL2_MGMT_EN_2X, 0xffff);
+ if (err)
+ return err;
/* Initialise cross-chip port VLAN table to reset
* defaults.
*/
- REG_WRITE(REG_GLOBAL2, GLOBAL2_PVT_ADDR, 0x9000);
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL2,
+ GLOBAL2_PVT_ADDR, 0x9000);
+ if (err)
+ return err;
/* Clear the priority override table. */
- for (i = 0; i < 16; i++)
- REG_WRITE(REG_GLOBAL2, GLOBAL2_PRIO_OVERRIDE,
- 0x8000 | (i << 8));
+ for (i = 0; i < 16; i++) {
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL2,
+ GLOBAL2_PRIO_OVERRIDE,
+ 0x8000 | (i << 8));
+ if (err)
+ return err;
+ }
}
- if (mv88e6xxx_6352_family(ds) || mv88e6xxx_6351_family(ds) ||
- mv88e6xxx_6165_family(ds) || mv88e6xxx_6097_family(ds) ||
- mv88e6xxx_6185_family(ds) || mv88e6xxx_6095_family(ds) ||
- mv88e6xxx_6320_family(ds)) {
+ if (mv88e6xxx_6352_family(ps) || mv88e6xxx_6351_family(ps) ||
+ mv88e6xxx_6165_family(ps) || mv88e6xxx_6097_family(ps) ||
+ mv88e6xxx_6185_family(ps) || mv88e6xxx_6095_family(ps) ||
+ mv88e6xxx_6320_family(ps)) {
/* Disable ingress rate limiting by resetting all
* ingress rate limit registers to their initial
* state.
*/
- for (i = 0; i < ps->num_ports; i++)
- REG_WRITE(REG_GLOBAL2, GLOBAL2_INGRESS_OP,
- 0x9000 | (i << 8));
+ for (i = 0; i < ps->info->num_ports; i++) {
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL2,
+ GLOBAL2_INGRESS_OP,
+ 0x9000 | (i << 8));
+ if (err)
+ return err;
+ }
}
/* Clear the statistics counters for all ports */
- REG_WRITE(REG_GLOBAL, GLOBAL_STATS_OP, GLOBAL_STATS_OP_FLUSH_ALL);
+ err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_STATS_OP,
+ GLOBAL_STATS_OP_FLUSH_ALL);
+ if (err)
+ return err;
/* Wait for the flush to complete. */
- mutex_lock(&ps->smi_mutex);
- ret = _mv88e6xxx_stats_wait(ds);
- if (ret < 0)
- goto unlock;
+ err = _mv88e6xxx_stats_wait(ps);
+ if (err)
+ return err;
/* Clear all ATU entries */
- ret = _mv88e6xxx_atu_flush(ds, 0, true);
- if (ret < 0)
- goto unlock;
+ err = _mv88e6xxx_atu_flush(ps, 0, true);
+ if (err)
+ return err;
/* Clear all the VTU and STU entries */
- ret = _mv88e6xxx_vtu_stu_flush(ds);
-unlock:
- mutex_unlock(&ps->smi_mutex);
+ err = _mv88e6xxx_vtu_stu_flush(ps);
+ if (err)
+ return err;
- return ret;
+ return 0;
}
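Most of the churn in this function comes from dropping the old REG_READ()/REG_WRITE() macros (deleted from mv88e6xxx.h further down in this patch): those macros hid an early return inside a statement expression and implicitly required a struct dsa_switch named ds in scope, so each use is rewritten as an explicit call against the chip state plus an error check. A minimal before/after sketch, reusing the IEEE 802.1p write from above:

	/* old: the macro returns from the enclosing function on error */
	REG_WRITE(REG_GLOBAL, GLOBAL_IEEE_PRI, 0xfa41);

	/* new: explicit error propagation, no hidden control flow */
	err = _mv88e6xxx_reg_write(ps, REG_GLOBAL, GLOBAL_IEEE_PRI, 0xfa41);
	if (err)
		return err;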
-int mv88e6xxx_switch_reset(struct dsa_switch *ds, bool ppu_active)
+static int mv88e6xxx_setup(struct dsa_switch *ds)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
- u16 is_reset = (ppu_active ? 0x8800 : 0xc800);
- struct gpio_desc *gpiod = ds->pd->reset;
- unsigned long timeout;
- int ret;
+ int err;
int i;
- /* Set all ports to the disabled state. */
- for (i = 0; i < ps->num_ports; i++) {
- ret = REG_READ(REG_PORT(i), PORT_CONTROL);
- REG_WRITE(REG_PORT(i), PORT_CONTROL, ret & 0xfffc);
- }
+ ps->ds = ds;
- /* Wait for transmit queues to drain. */
- usleep_range(2000, 4000);
+ if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_EEPROM))
+ mutex_init(&ps->eeprom_mutex);
- /* If there is a gpio connected to the reset pin, toggle it */
- if (gpiod) {
- gpiod_set_value_cansleep(gpiod, 1);
- usleep_range(10000, 20000);
- gpiod_set_value_cansleep(gpiod, 0);
- usleep_range(10000, 20000);
- }
+ if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_PPU))
+ mv88e6xxx_ppu_state_init(ps);
- /* Reset the switch. Keep the PPU active if requested. The PPU
- * needs to be active to support indirect phy register access
- * through global registers 0x18 and 0x19.
- */
- if (ppu_active)
- REG_WRITE(REG_GLOBAL, 0x04, 0xc000);
- else
- REG_WRITE(REG_GLOBAL, 0x04, 0xc400);
+ mutex_lock(&ps->smi_mutex);
- /* Wait up to one second for reset to complete. */
- timeout = jiffies + 1 * HZ;
- while (time_before(jiffies, timeout)) {
- ret = REG_READ(REG_GLOBAL, 0x00);
- if ((ret & is_reset) == is_reset)
- break;
- usleep_range(1000, 2000);
+ err = mv88e6xxx_switch_reset(ps);
+ if (err)
+ goto unlock;
+
+ err = mv88e6xxx_setup_global(ps);
+ if (err)
+ goto unlock;
+
+ for (i = 0; i < ps->info->num_ports; i++) {
+ err = mv88e6xxx_setup_port(ps, i);
+ if (err)
+ goto unlock;
}
- if (time_after(jiffies, timeout))
- return -ETIMEDOUT;
- return 0;
+unlock:
+ mutex_unlock(&ps->smi_mutex);
+
+ return err;
}
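As elsewhere in this driver, the leading-underscore helpers (_mv88e6xxx_reg_write(), _mv88e6xxx_stats_wait(), ...) expect smi_mutex to already be held, which is why mv88e6xxx_setup() now takes the lock once around the whole reset/global/port sequence instead of locking per access. A sketch of that convention, with mv88e6xxx_foo()/_mv88e6xxx_foo() as placeholder names only:

	static int _mv88e6xxx_foo(struct mv88e6xxx_priv_state *ps)
	{
		/* caller holds ps->smi_mutex */
		return _mv88e6xxx_stats_wait(ps);
	}

	static int mv88e6xxx_foo(struct mv88e6xxx_priv_state *ps)
	{
		int err;

		mutex_lock(&ps->smi_mutex);
		err = _mv88e6xxx_foo(ps);
		mutex_unlock(&ps->smi_mutex);

		return err;
	}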
int mv88e6xxx_phy_page_read(struct dsa_switch *ds, int port, int page, int reg)
@@ -2758,7 +3160,7 @@ int mv88e6xxx_phy_page_read(struct dsa_switch *ds, int port, int page, int reg)
int ret;
mutex_lock(&ps->smi_mutex);
- ret = _mv88e6xxx_phy_page_read(ds, port, page, reg);
+ ret = _mv88e6xxx_phy_page_read(ps, port, page, reg);
mutex_unlock(&ps->smi_mutex);
return ret;
@@ -2771,82 +3173,61 @@ int mv88e6xxx_phy_page_write(struct dsa_switch *ds, int port, int page,
int ret;
mutex_lock(&ps->smi_mutex);
- ret = _mv88e6xxx_phy_page_write(ds, port, page, reg, val);
+ ret = _mv88e6xxx_phy_page_write(ps, port, page, reg, val);
mutex_unlock(&ps->smi_mutex);
return ret;
}
-static int mv88e6xxx_port_to_phy_addr(struct dsa_switch *ds, int port)
+static int mv88e6xxx_port_to_phy_addr(struct mv88e6xxx_priv_state *ps,
+ int port)
{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
-
- if (port >= 0 && port < ps->num_ports)
+ if (port >= 0 && port < ps->info->num_ports)
return port;
return -EINVAL;
}
-int
-mv88e6xxx_phy_read(struct dsa_switch *ds, int port, int regnum)
+static int mv88e6xxx_phy_read(struct dsa_switch *ds, int port, int regnum)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
- int addr = mv88e6xxx_port_to_phy_addr(ds, port);
+ int addr = mv88e6xxx_port_to_phy_addr(ps, port);
int ret;
if (addr < 0)
- return addr;
+ return 0xffff;
mutex_lock(&ps->smi_mutex);
- ret = _mv88e6xxx_phy_read(ds, addr, regnum);
- mutex_unlock(&ps->smi_mutex);
- return ret;
-}
-
-int
-mv88e6xxx_phy_write(struct dsa_switch *ds, int port, int regnum, u16 val)
-{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
- int addr = mv88e6xxx_port_to_phy_addr(ds, port);
- int ret;
- if (addr < 0)
- return addr;
+ if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_PPU))
+ ret = mv88e6xxx_phy_read_ppu(ps, addr, regnum);
+ else if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_SMI_PHY))
+ ret = _mv88e6xxx_phy_read_indirect(ps, addr, regnum);
+ else
+ ret = _mv88e6xxx_phy_read(ps, addr, regnum);
- mutex_lock(&ps->smi_mutex);
- ret = _mv88e6xxx_phy_write(ds, addr, regnum, val);
mutex_unlock(&ps->smi_mutex);
return ret;
}
-int
-mv88e6xxx_phy_read_indirect(struct dsa_switch *ds, int port, int regnum)
+static int mv88e6xxx_phy_write(struct dsa_switch *ds, int port, int regnum,
+ u16 val)
{
struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
- int addr = mv88e6xxx_port_to_phy_addr(ds, port);
+ int addr = mv88e6xxx_port_to_phy_addr(ps, port);
int ret;
if (addr < 0)
- return addr;
+ return 0xffff;
mutex_lock(&ps->smi_mutex);
- ret = _mv88e6xxx_phy_read_indirect(ds, addr, regnum);
- mutex_unlock(&ps->smi_mutex);
- return ret;
-}
-
-int
-mv88e6xxx_phy_write_indirect(struct dsa_switch *ds, int port, int regnum,
- u16 val)
-{
- struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
- int addr = mv88e6xxx_port_to_phy_addr(ds, port);
- int ret;
- if (addr < 0)
- return addr;
+ if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_PPU))
+ ret = mv88e6xxx_phy_write_ppu(ps, addr, regnum, val);
+ else if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_SMI_PHY))
+ ret = _mv88e6xxx_phy_write_indirect(ps, addr, regnum, val);
+ else
+ ret = _mv88e6xxx_phy_write(ps, addr, regnum, val);
- mutex_lock(&ps->smi_mutex);
- ret = _mv88e6xxx_phy_write_indirect(ds, addr, regnum, val);
mutex_unlock(&ps->smi_mutex);
return ret;
}
@@ -2863,44 +3244,45 @@ static int mv88e61xx_get_temp(struct dsa_switch *ds, int *temp)
mutex_lock(&ps->smi_mutex);
- ret = _mv88e6xxx_phy_write(ds, 0x0, 0x16, 0x6);
+ ret = _mv88e6xxx_phy_write(ps, 0x0, 0x16, 0x6);
if (ret < 0)
goto error;
/* Enable temperature sensor */
- ret = _mv88e6xxx_phy_read(ds, 0x0, 0x1a);
+ ret = _mv88e6xxx_phy_read(ps, 0x0, 0x1a);
if (ret < 0)
goto error;
- ret = _mv88e6xxx_phy_write(ds, 0x0, 0x1a, ret | (1 << 5));
+ ret = _mv88e6xxx_phy_write(ps, 0x0, 0x1a, ret | (1 << 5));
if (ret < 0)
goto error;
/* Wait for temperature to stabilize */
usleep_range(10000, 12000);
- val = _mv88e6xxx_phy_read(ds, 0x0, 0x1a);
+ val = _mv88e6xxx_phy_read(ps, 0x0, 0x1a);
if (val < 0) {
ret = val;
goto error;
}
/* Disable temperature sensor */
- ret = _mv88e6xxx_phy_write(ds, 0x0, 0x1a, ret & ~(1 << 5));
+ ret = _mv88e6xxx_phy_write(ps, 0x0, 0x1a, ret & ~(1 << 5));
if (ret < 0)
goto error;
*temp = ((val & 0x1f) - 5) * 5;
error:
- _mv88e6xxx_phy_write(ds, 0x0, 0x16, 0x0);
+ _mv88e6xxx_phy_write(ps, 0x0, 0x16, 0x0);
mutex_unlock(&ps->smi_mutex);
return ret;
}
static int mv88e63xx_get_temp(struct dsa_switch *ds, int *temp)
{
- int phy = mv88e6xxx_6320_family(ds) ? 3 : 0;
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+ int phy = mv88e6xxx_6320_family(ps) ? 3 : 0;
int ret;
*temp = 0;
@@ -2914,20 +3296,26 @@ static int mv88e63xx_get_temp(struct dsa_switch *ds, int *temp)
return 0;
}
-int mv88e6xxx_get_temp(struct dsa_switch *ds, int *temp)
+static int mv88e6xxx_get_temp(struct dsa_switch *ds, int *temp)
{
- if (mv88e6xxx_6320_family(ds) || mv88e6xxx_6352_family(ds))
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+
+ if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_TEMP))
+ return -EOPNOTSUPP;
+
+ if (mv88e6xxx_6320_family(ps) || mv88e6xxx_6352_family(ps))
return mv88e63xx_get_temp(ds, temp);
return mv88e61xx_get_temp(ds, temp);
}
-int mv88e6xxx_get_temp_limit(struct dsa_switch *ds, int *temp)
+static int mv88e6xxx_get_temp_limit(struct dsa_switch *ds, int *temp)
{
- int phy = mv88e6xxx_6320_family(ds) ? 3 : 0;
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+ int phy = mv88e6xxx_6320_family(ps) ? 3 : 0;
int ret;
- if (!mv88e6xxx_6320_family(ds) && !mv88e6xxx_6352_family(ds))
+ if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_TEMP_LIMIT))
return -EOPNOTSUPP;
*temp = 0;
@@ -2941,12 +3329,13 @@ int mv88e6xxx_get_temp_limit(struct dsa_switch *ds, int *temp)
return 0;
}
-int mv88e6xxx_set_temp_limit(struct dsa_switch *ds, int temp)
+static int mv88e6xxx_set_temp_limit(struct dsa_switch *ds, int temp)
{
- int phy = mv88e6xxx_6320_family(ds) ? 3 : 0;
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+ int phy = mv88e6xxx_6320_family(ps) ? 3 : 0;
int ret;
- if (!mv88e6xxx_6320_family(ds) && !mv88e6xxx_6352_family(ds))
+ if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_TEMP_LIMIT))
return -EOPNOTSUPP;
ret = mv88e6xxx_phy_page_read(ds, phy, 6, 26);
@@ -2957,12 +3346,13 @@ int mv88e6xxx_set_temp_limit(struct dsa_switch *ds, int temp)
(ret & 0xe0ff) | (temp << 8));
}
-int mv88e6xxx_get_temp_alarm(struct dsa_switch *ds, bool *alarm)
+static int mv88e6xxx_get_temp_alarm(struct dsa_switch *ds, bool *alarm)
{
- int phy = mv88e6xxx_6320_family(ds) ? 3 : 0;
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+ int phy = mv88e6xxx_6320_family(ps) ? 3 : 0;
int ret;
- if (!mv88e6xxx_6320_family(ds) && !mv88e6xxx_6352_family(ds))
+ if (!mv88e6xxx_has(ps, MV88E6XXX_FLAG_TEMP_LIMIT))
return -EOPNOTSUPP;
*alarm = false;
@@ -2977,70 +3367,354 @@ int mv88e6xxx_get_temp_alarm(struct dsa_switch *ds, bool *alarm)
}
#endif /* CONFIG_NET_DSA_HWMON */
-char *mv88e6xxx_lookup_name(struct device *host_dev, int sw_addr,
- const struct mv88e6xxx_switch_id *table,
- unsigned int num)
+static const struct mv88e6xxx_info mv88e6xxx_table[] = {
+ [MV88E6085] = {
+ .prod_num = PORT_SWITCH_ID_PROD_NUM_6085,
+ .family = MV88E6XXX_FAMILY_6097,
+ .name = "Marvell 88E6085",
+ .num_databases = 4096,
+ .num_ports = 10,
+ .flags = MV88E6XXX_FLAGS_FAMILY_6097,
+ },
+
+ [MV88E6095] = {
+ .prod_num = PORT_SWITCH_ID_PROD_NUM_6095,
+ .family = MV88E6XXX_FAMILY_6095,
+ .name = "Marvell 88E6095/88E6095F",
+ .num_databases = 256,
+ .num_ports = 11,
+ .flags = MV88E6XXX_FLAGS_FAMILY_6095,
+ },
+
+ [MV88E6123] = {
+ .prod_num = PORT_SWITCH_ID_PROD_NUM_6123,
+ .family = MV88E6XXX_FAMILY_6165,
+ .name = "Marvell 88E6123",
+ .num_databases = 4096,
+ .num_ports = 3,
+ .flags = MV88E6XXX_FLAGS_FAMILY_6165,
+ },
+
+ [MV88E6131] = {
+ .prod_num = PORT_SWITCH_ID_PROD_NUM_6131,
+ .family = MV88E6XXX_FAMILY_6185,
+ .name = "Marvell 88E6131",
+ .num_databases = 256,
+ .num_ports = 8,
+ .flags = MV88E6XXX_FLAGS_FAMILY_6185,
+ },
+
+ [MV88E6161] = {
+ .prod_num = PORT_SWITCH_ID_PROD_NUM_6161,
+ .family = MV88E6XXX_FAMILY_6165,
+ .name = "Marvell 88E6161",
+ .num_databases = 4096,
+ .num_ports = 6,
+ .flags = MV88E6XXX_FLAGS_FAMILY_6165,
+ },
+
+ [MV88E6165] = {
+ .prod_num = PORT_SWITCH_ID_PROD_NUM_6165,
+ .family = MV88E6XXX_FAMILY_6165,
+ .name = "Marvell 88E6165",
+ .num_databases = 4096,
+ .num_ports = 6,
+ .flags = MV88E6XXX_FLAGS_FAMILY_6165,
+ },
+
+ [MV88E6171] = {
+ .prod_num = PORT_SWITCH_ID_PROD_NUM_6171,
+ .family = MV88E6XXX_FAMILY_6351,
+ .name = "Marvell 88E6171",
+ .num_databases = 4096,
+ .num_ports = 7,
+ .flags = MV88E6XXX_FLAGS_FAMILY_6351,
+ },
+
+ [MV88E6172] = {
+ .prod_num = PORT_SWITCH_ID_PROD_NUM_6172,
+ .family = MV88E6XXX_FAMILY_6352,
+ .name = "Marvell 88E6172",
+ .num_databases = 4096,
+ .num_ports = 7,
+ .flags = MV88E6XXX_FLAGS_FAMILY_6352,
+ },
+
+ [MV88E6175] = {
+ .prod_num = PORT_SWITCH_ID_PROD_NUM_6175,
+ .family = MV88E6XXX_FAMILY_6351,
+ .name = "Marvell 88E6175",
+ .num_databases = 4096,
+ .num_ports = 7,
+ .flags = MV88E6XXX_FLAGS_FAMILY_6351,
+ },
+
+ [MV88E6176] = {
+ .prod_num = PORT_SWITCH_ID_PROD_NUM_6176,
+ .family = MV88E6XXX_FAMILY_6352,
+ .name = "Marvell 88E6176",
+ .num_databases = 4096,
+ .num_ports = 7,
+ .flags = MV88E6XXX_FLAGS_FAMILY_6352,
+ },
+
+ [MV88E6185] = {
+ .prod_num = PORT_SWITCH_ID_PROD_NUM_6185,
+ .family = MV88E6XXX_FAMILY_6185,
+ .name = "Marvell 88E6185",
+ .num_databases = 256,
+ .num_ports = 10,
+ .flags = MV88E6XXX_FLAGS_FAMILY_6185,
+ },
+
+ [MV88E6240] = {
+ .prod_num = PORT_SWITCH_ID_PROD_NUM_6240,
+ .family = MV88E6XXX_FAMILY_6352,
+ .name = "Marvell 88E6240",
+ .num_databases = 4096,
+ .num_ports = 7,
+ .flags = MV88E6XXX_FLAGS_FAMILY_6352,
+ },
+
+ [MV88E6320] = {
+ .prod_num = PORT_SWITCH_ID_PROD_NUM_6320,
+ .family = MV88E6XXX_FAMILY_6320,
+ .name = "Marvell 88E6320",
+ .num_databases = 4096,
+ .num_ports = 7,
+ .flags = MV88E6XXX_FLAGS_FAMILY_6320,
+ },
+
+ [MV88E6321] = {
+ .prod_num = PORT_SWITCH_ID_PROD_NUM_6321,
+ .family = MV88E6XXX_FAMILY_6320,
+ .name = "Marvell 88E6321",
+ .num_databases = 4096,
+ .num_ports = 7,
+ .flags = MV88E6XXX_FLAGS_FAMILY_6320,
+ },
+
+ [MV88E6350] = {
+ .prod_num = PORT_SWITCH_ID_PROD_NUM_6350,
+ .family = MV88E6XXX_FAMILY_6351,
+ .name = "Marvell 88E6350",
+ .num_databases = 4096,
+ .num_ports = 7,
+ .flags = MV88E6XXX_FLAGS_FAMILY_6351,
+ },
+
+ [MV88E6351] = {
+ .prod_num = PORT_SWITCH_ID_PROD_NUM_6351,
+ .family = MV88E6XXX_FAMILY_6351,
+ .name = "Marvell 88E6351",
+ .num_databases = 4096,
+ .num_ports = 7,
+ .flags = MV88E6XXX_FLAGS_FAMILY_6351,
+ },
+
+ [MV88E6352] = {
+ .prod_num = PORT_SWITCH_ID_PROD_NUM_6352,
+ .family = MV88E6XXX_FAMILY_6352,
+ .name = "Marvell 88E6352",
+ .num_databases = 4096,
+ .num_ports = 7,
+ .flags = MV88E6XXX_FLAGS_FAMILY_6352,
+ },
+};
+
+static const struct mv88e6xxx_info *
+mv88e6xxx_lookup_info(unsigned int prod_num, const struct mv88e6xxx_info *table,
+ unsigned int num)
{
- struct mii_bus *bus = dsa_host_dev_to_mii_bus(host_dev);
- int i, ret;
+ int i;
+
+ for (i = 0; i < num; ++i)
+ if (table[i].prod_num == prod_num)
+ return &table[i];
+
+ return NULL;
+}
+static const char *mv88e6xxx_drv_probe(struct device *dsa_dev,
+ struct device *host_dev, int sw_addr,
+ void **priv)
+{
+ const struct mv88e6xxx_info *info;
+ struct mv88e6xxx_priv_state *ps;
+ struct mii_bus *bus;
+ const char *name;
+ int id, prod_num, rev;
+
+ bus = dsa_host_dev_to_mii_bus(host_dev);
if (!bus)
return NULL;
- ret = __mv88e6xxx_reg_read(bus, sw_addr, REG_PORT(0), PORT_SWITCH_ID);
- if (ret < 0)
+ id = __mv88e6xxx_reg_read(bus, sw_addr, REG_PORT(0), PORT_SWITCH_ID);
+ if (id < 0)
return NULL;
- /* Look up the exact switch ID */
- for (i = 0; i < num; ++i)
- if (table[i].id == ret)
- return table[i].name;
-
- /* Look up only the product number */
- for (i = 0; i < num; ++i) {
- if (table[i].id == (ret & PORT_SWITCH_ID_PROD_NUM_MASK)) {
- dev_warn(host_dev, "unknown revision %d, using base switch 0x%x\n",
- ret & PORT_SWITCH_ID_REV_MASK,
- ret & PORT_SWITCH_ID_PROD_NUM_MASK);
- return table[i].name;
+ prod_num = (id & 0xfff0) >> 4;
+ rev = id & 0x000f;
+
+ info = mv88e6xxx_lookup_info(prod_num, mv88e6xxx_table,
+ ARRAY_SIZE(mv88e6xxx_table));
+ if (!info)
+ return NULL;
+
+ name = info->name;
+
+ ps = devm_kzalloc(dsa_dev, sizeof(*ps), GFP_KERNEL);
+ if (!ps)
+ return NULL;
+
+ ps->bus = bus;
+ ps->sw_addr = sw_addr;
+ ps->info = info;
+ mutex_init(&ps->smi_mutex);
+
+ *priv = ps;
+
+ dev_info(&ps->bus->dev, "switch 0x%x probed: %s, revision %u\n",
+ prod_num, name, rev);
+
+ return name;
+}
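A worked example of the decode above, using the old defines removed from mv88e6xxx.h later in this patch: an 88E6352 rev A1 reports PORT_SWITCH_ID 0x3522 (the old PORT_SWITCH_ID_6352_A1), so

	prod_num = (0x3522 & 0xfff0) >> 4;	/* 0x352 == PORT_SWITCH_ID_PROD_NUM_6352 */
	rev = 0x3522 & 0x000f;			/* revision 2, i.e. A1 */

which selects the MV88E6352 entry in mv88e6xxx_table[] regardless of silicon revision, replacing the old exact-ID match followed by a product-number fallback.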
+
+struct dsa_switch_driver mv88e6xxx_switch_driver = {
+ .tag_protocol = DSA_TAG_PROTO_EDSA,
+ .probe = mv88e6xxx_drv_probe,
+ .setup = mv88e6xxx_setup,
+ .set_addr = mv88e6xxx_set_addr,
+ .phy_read = mv88e6xxx_phy_read,
+ .phy_write = mv88e6xxx_phy_write,
+ .adjust_link = mv88e6xxx_adjust_link,
+ .get_strings = mv88e6xxx_get_strings,
+ .get_ethtool_stats = mv88e6xxx_get_ethtool_stats,
+ .get_sset_count = mv88e6xxx_get_sset_count,
+ .set_eee = mv88e6xxx_set_eee,
+ .get_eee = mv88e6xxx_get_eee,
+#ifdef CONFIG_NET_DSA_HWMON
+ .get_temp = mv88e6xxx_get_temp,
+ .get_temp_limit = mv88e6xxx_get_temp_limit,
+ .set_temp_limit = mv88e6xxx_set_temp_limit,
+ .get_temp_alarm = mv88e6xxx_get_temp_alarm,
+#endif
+ .get_eeprom_len = mv88e6xxx_get_eeprom_len,
+ .get_eeprom = mv88e6xxx_get_eeprom,
+ .set_eeprom = mv88e6xxx_set_eeprom,
+ .get_regs_len = mv88e6xxx_get_regs_len,
+ .get_regs = mv88e6xxx_get_regs,
+ .port_bridge_join = mv88e6xxx_port_bridge_join,
+ .port_bridge_leave = mv88e6xxx_port_bridge_leave,
+ .port_stp_state_set = mv88e6xxx_port_stp_state_set,
+ .port_vlan_filtering = mv88e6xxx_port_vlan_filtering,
+ .port_vlan_prepare = mv88e6xxx_port_vlan_prepare,
+ .port_vlan_add = mv88e6xxx_port_vlan_add,
+ .port_vlan_del = mv88e6xxx_port_vlan_del,
+ .port_vlan_dump = mv88e6xxx_port_vlan_dump,
+ .port_fdb_prepare = mv88e6xxx_port_fdb_prepare,
+ .port_fdb_add = mv88e6xxx_port_fdb_add,
+ .port_fdb_del = mv88e6xxx_port_fdb_del,
+ .port_fdb_dump = mv88e6xxx_port_fdb_dump,
+};
+
+int mv88e6xxx_probe(struct mdio_device *mdiodev)
+{
+ struct device *dev = &mdiodev->dev;
+ struct device_node *np = dev->of_node;
+ struct mv88e6xxx_priv_state *ps;
+ int id, prod_num, rev;
+ struct dsa_switch *ds;
+ u32 eeprom_len;
+ int err;
+
+ ds = devm_kzalloc(dev, sizeof(*ds) + sizeof(*ps), GFP_KERNEL);
+ if (!ds)
+ return -ENOMEM;
+
+ ps = (struct mv88e6xxx_priv_state *)(ds + 1);
+ ds->priv = ps;
+ ds->dev = dev;
+ ps->dev = dev;
+ ps->ds = ds;
+ ps->bus = mdiodev->bus;
+ ps->sw_addr = mdiodev->addr;
+ mutex_init(&ps->smi_mutex);
+
+ get_device(&ps->bus->dev);
+
+ ds->drv = &mv88e6xxx_switch_driver;
+
+ id = mv88e6xxx_reg_read(ps, REG_PORT(0), PORT_SWITCH_ID);
+ if (id < 0)
+ return id;
+
+ prod_num = (id & 0xfff0) >> 4;
+ rev = id & 0x000f;
+
+ ps->info = mv88e6xxx_lookup_info(prod_num, mv88e6xxx_table,
+ ARRAY_SIZE(mv88e6xxx_table));
+ if (!ps->info)
+ return -ENODEV;
+
+ ps->reset = devm_gpiod_get(&mdiodev->dev, "reset", GPIOD_ASIS);
+ if (IS_ERR(ps->reset)) {
+ err = PTR_ERR(ps->reset);
+ if (err == -ENOENT) {
+ /* Optional, so not an error */
+ ps->reset = NULL;
+ } else {
+ return err;
}
}
- return NULL;
+ if (mv88e6xxx_has(ps, MV88E6XXX_FLAG_EEPROM) &&
+ !of_property_read_u32(np, "eeprom-length", &eeprom_len))
+ ps->eeprom_len = eeprom_len;
+
+ dev_set_drvdata(dev, ds);
+
+ dev_info(dev, "switch 0x%x probed: %s, revision %u\n",
+ prod_num, ps->info->name, rev);
+
+ return 0;
}
+static void mv88e6xxx_remove(struct mdio_device *mdiodev)
+{
+ struct dsa_switch *ds = dev_get_drvdata(&mdiodev->dev);
+ struct mv88e6xxx_priv_state *ps = ds_to_priv(ds);
+
+ put_device(&ps->bus->dev);
+}
+
+static const struct of_device_id mv88e6xxx_of_match[] = {
+ { .compatible = "marvell,mv88e6085" },
+ { /* sentinel */ },
+};
+
+MODULE_DEVICE_TABLE(of, mv88e6xxx_of_match);
+
+static struct mdio_driver mv88e6xxx_driver = {
+ .probe = mv88e6xxx_probe,
+ .remove = mv88e6xxx_remove,
+ .mdiodrv.driver = {
+ .name = "mv88e6085",
+ .of_match_table = mv88e6xxx_of_match,
+ },
+};
+
static int __init mv88e6xxx_init(void)
{
-#if IS_ENABLED(CONFIG_NET_DSA_MV88E6131)
- register_switch_driver(&mv88e6131_switch_driver);
-#endif
-#if IS_ENABLED(CONFIG_NET_DSA_MV88E6123)
- register_switch_driver(&mv88e6123_switch_driver);
-#endif
-#if IS_ENABLED(CONFIG_NET_DSA_MV88E6352)
- register_switch_driver(&mv88e6352_switch_driver);
-#endif
-#if IS_ENABLED(CONFIG_NET_DSA_MV88E6171)
- register_switch_driver(&mv88e6171_switch_driver);
-#endif
- return 0;
+ register_switch_driver(&mv88e6xxx_switch_driver);
+ return mdio_driver_register(&mv88e6xxx_driver);
}
module_init(mv88e6xxx_init);
static void __exit mv88e6xxx_cleanup(void)
{
-#if IS_ENABLED(CONFIG_NET_DSA_MV88E6171)
- unregister_switch_driver(&mv88e6171_switch_driver);
-#endif
-#if IS_ENABLED(CONFIG_NET_DSA_MV88E6352)
- unregister_switch_driver(&mv88e6352_switch_driver);
-#endif
-#if IS_ENABLED(CONFIG_NET_DSA_MV88E6123)
- unregister_switch_driver(&mv88e6123_switch_driver);
-#endif
-#if IS_ENABLED(CONFIG_NET_DSA_MV88E6131)
- unregister_switch_driver(&mv88e6131_switch_driver);
-#endif
+ mdio_driver_unregister(&mv88e6xxx_driver);
+ unregister_switch_driver(&mv88e6xxx_switch_driver);
}
module_exit(mv88e6xxx_cleanup);
diff --git a/drivers/net/dsa/mv88e6xxx.h b/drivers/net/dsa/mv88e6xxx.h
index 26a424acd10f..36d0e1504de1 100644
--- a/drivers/net/dsa/mv88e6xxx.h
+++ b/drivers/net/dsa/mv88e6xxx.h
@@ -12,6 +12,7 @@
#define __MV88E6XXX_H
#include <linux/if_vlan.h>
+#include <linux/gpio/consumer.h>
#ifndef UINT64_MAX
#define UINT64_MAX (u64)(~((u64)0))
@@ -68,52 +69,23 @@
#define PORT_PCS_CTRL_UNFORCED 0x03
#define PORT_PAUSE_CTRL 0x02
#define PORT_SWITCH_ID 0x03
-#define PORT_SWITCH_ID_PROD_NUM_MASK 0xfff0
-#define PORT_SWITCH_ID_REV_MASK 0x000f
-#define PORT_SWITCH_ID_6031 0x0310
-#define PORT_SWITCH_ID_6035 0x0350
-#define PORT_SWITCH_ID_6046 0x0480
-#define PORT_SWITCH_ID_6061 0x0610
-#define PORT_SWITCH_ID_6065 0x0650
-#define PORT_SWITCH_ID_6085 0x04a0
-#define PORT_SWITCH_ID_6092 0x0970
-#define PORT_SWITCH_ID_6095 0x0950
-#define PORT_SWITCH_ID_6096 0x0980
-#define PORT_SWITCH_ID_6097 0x0990
-#define PORT_SWITCH_ID_6108 0x1070
-#define PORT_SWITCH_ID_6121 0x1040
-#define PORT_SWITCH_ID_6122 0x1050
-#define PORT_SWITCH_ID_6123 0x1210
-#define PORT_SWITCH_ID_6123_A1 0x1212
-#define PORT_SWITCH_ID_6123_A2 0x1213
-#define PORT_SWITCH_ID_6131 0x1060
-#define PORT_SWITCH_ID_6131_B2 0x1066
-#define PORT_SWITCH_ID_6152 0x1a40
-#define PORT_SWITCH_ID_6155 0x1a50
-#define PORT_SWITCH_ID_6161 0x1610
-#define PORT_SWITCH_ID_6161_A1 0x1612
-#define PORT_SWITCH_ID_6161_A2 0x1613
-#define PORT_SWITCH_ID_6165 0x1650
-#define PORT_SWITCH_ID_6165_A1 0x1652
-#define PORT_SWITCH_ID_6165_A2 0x1653
-#define PORT_SWITCH_ID_6171 0x1710
-#define PORT_SWITCH_ID_6172 0x1720
-#define PORT_SWITCH_ID_6175 0x1750
-#define PORT_SWITCH_ID_6176 0x1760
-#define PORT_SWITCH_ID_6182 0x1a60
-#define PORT_SWITCH_ID_6185 0x1a70
-#define PORT_SWITCH_ID_6240 0x2400
-#define PORT_SWITCH_ID_6320 0x1150
-#define PORT_SWITCH_ID_6320_A1 0x1151
-#define PORT_SWITCH_ID_6320_A2 0x1152
-#define PORT_SWITCH_ID_6321 0x3100
-#define PORT_SWITCH_ID_6321_A1 0x3101
-#define PORT_SWITCH_ID_6321_A2 0x3102
-#define PORT_SWITCH_ID_6350 0x3710
-#define PORT_SWITCH_ID_6351 0x3750
-#define PORT_SWITCH_ID_6352 0x3520
-#define PORT_SWITCH_ID_6352_A0 0x3521
-#define PORT_SWITCH_ID_6352_A1 0x3522
+#define PORT_SWITCH_ID_PROD_NUM_6085 0x04a
+#define PORT_SWITCH_ID_PROD_NUM_6095 0x095
+#define PORT_SWITCH_ID_PROD_NUM_6131 0x106
+#define PORT_SWITCH_ID_PROD_NUM_6320 0x115
+#define PORT_SWITCH_ID_PROD_NUM_6123 0x121
+#define PORT_SWITCH_ID_PROD_NUM_6161 0x161
+#define PORT_SWITCH_ID_PROD_NUM_6165 0x165
+#define PORT_SWITCH_ID_PROD_NUM_6171 0x171
+#define PORT_SWITCH_ID_PROD_NUM_6172 0x172
+#define PORT_SWITCH_ID_PROD_NUM_6175 0x175
+#define PORT_SWITCH_ID_PROD_NUM_6176 0x176
+#define PORT_SWITCH_ID_PROD_NUM_6185 0x1a7
+#define PORT_SWITCH_ID_PROD_NUM_6240 0x240
+#define PORT_SWITCH_ID_PROD_NUM_6321 0x310
+#define PORT_SWITCH_ID_PROD_NUM_6352 0x352
+#define PORT_SWITCH_ID_PROD_NUM_6350 0x371
+#define PORT_SWITCH_ID_PROD_NUM_6351 0x375
#define PORT_CONTROL 0x04
#define PORT_CONTROL_USE_CORE_TAG BIT(15)
#define PORT_CONTROL_DROP_ON_LOCK BIT(14)
@@ -367,9 +339,187 @@
#define MV88E6XXX_N_FID 4096
-struct mv88e6xxx_switch_id {
- u16 id;
- char *name;
+/* List of supported models */
+enum mv88e6xxx_model {
+ MV88E6085,
+ MV88E6095,
+ MV88E6123,
+ MV88E6131,
+ MV88E6161,
+ MV88E6165,
+ MV88E6171,
+ MV88E6172,
+ MV88E6175,
+ MV88E6176,
+ MV88E6185,
+ MV88E6240,
+ MV88E6320,
+ MV88E6321,
+ MV88E6350,
+ MV88E6351,
+ MV88E6352,
+};
+
+enum mv88e6xxx_family {
+ MV88E6XXX_FAMILY_NONE,
+ MV88E6XXX_FAMILY_6065, /* 6031 6035 6061 6065 */
+ MV88E6XXX_FAMILY_6095, /* 6092 6095 */
+ MV88E6XXX_FAMILY_6097, /* 6046 6085 6096 6097 */
+ MV88E6XXX_FAMILY_6165, /* 6123 6161 6165 */
+ MV88E6XXX_FAMILY_6185, /* 6108 6121 6122 6131 6152 6155 6182 6185 */
+ MV88E6XXX_FAMILY_6320, /* 6320 6321 */
+ MV88E6XXX_FAMILY_6351, /* 6171 6175 6350 6351 */
+ MV88E6XXX_FAMILY_6352, /* 6172 6176 6240 6352 */
+};
+
+enum mv88e6xxx_cap {
+ /* Address Translation Unit.
+ * The ATU is used to look up and learn MAC addresses. See GLOBAL_ATU_OP.
+ * The ATU is used to look up and learn MAC addresses. See GLOBAL_ATU_OP.
+ */
+ MV88E6XXX_CAP_ATU,
+
+ /* Energy Efficient Ethernet.
+ */
+ MV88E6XXX_CAP_EEE,
+
+ /* EEPROM Command and Data registers.
+ * See GLOBAL2_EEPROM_OP and GLOBAL2_EEPROM_DATA.
+ */
+ MV88E6XXX_CAP_EEPROM,
+
+ /* Port State Filtering for 802.1D Spanning Tree.
+ * See PORT_CONTROL_STATE_* values in the PORT_CONTROL register.
+ */
+ MV88E6XXX_CAP_PORTSTATE,
+
+ /* PHY Polling Unit.
+ * See GLOBAL_CONTROL_PPU_ENABLE and GLOBAL_STATUS_PPU_POLLING.
+ */
+ MV88E6XXX_CAP_PPU,
+ MV88E6XXX_CAP_PPU_ACTIVE,
+
+ /* SMI PHY Command and Data registers.
+ * This requires an indirect access to PHY registers through
+ * GLOBAL2_SMI_OP, otherwise direct access to PHY registers is done.
+ */
+ MV88E6XXX_CAP_SMI_PHY,
+
+ /* Per VLAN Spanning Tree Unit (STU).
+ * The Port State database, if present, is accessed through VTU
+ * operations and dedicated SID registers. See GLOBAL_VTU_SID.
+ */
+ MV88E6XXX_CAP_STU,
+
+ /* Switch MAC/WoL/WoF register.
+ * This requires an indirect access to set the switch MAC address
+ * through GLOBAL2_SWITCH_MAC, otherwise GLOBAL_MAC_01, GLOBAL_MAC_23,
+ * and GLOBAL_MAC_45 are used with a direct access.
+ */
+ MV88E6XXX_CAP_SWITCH_MAC_WOL_WOF,
+
+ /* Internal temperature sensor.
+ * Available from any enabled port's PHY register 26, page 6.
+ */
+ MV88E6XXX_CAP_TEMP,
+ MV88E6XXX_CAP_TEMP_LIMIT,
+
+ /* In-chip Port Based VLANs.
+ * Each port VLANTable register (see PORT_BASE_VLAN) is used to restrict
+ * the output (or egress) ports to which it is allowed to send frames.
+ */
+ MV88E6XXX_CAP_VLANTABLE,
+
+ /* VLAN Table Unit.
+ * The VTU is used to program 802.1Q VLANs. See GLOBAL_VTU_OP.
+ */
+ MV88E6XXX_CAP_VTU,
+};
+
+/* Bitmask of capabilities */
+#define MV88E6XXX_FLAG_ATU BIT(MV88E6XXX_CAP_ATU)
+#define MV88E6XXX_FLAG_EEE BIT(MV88E6XXX_CAP_EEE)
+#define MV88E6XXX_FLAG_EEPROM BIT(MV88E6XXX_CAP_EEPROM)
+#define MV88E6XXX_FLAG_PORTSTATE BIT(MV88E6XXX_CAP_PORTSTATE)
+#define MV88E6XXX_FLAG_PPU BIT(MV88E6XXX_CAP_PPU)
+#define MV88E6XXX_FLAG_PPU_ACTIVE BIT(MV88E6XXX_CAP_PPU_ACTIVE)
+#define MV88E6XXX_FLAG_SMI_PHY BIT(MV88E6XXX_CAP_SMI_PHY)
+#define MV88E6XXX_FLAG_STU BIT(MV88E6XXX_CAP_STU)
+#define MV88E6XXX_FLAG_SWITCH_MAC BIT(MV88E6XXX_CAP_SWITCH_MAC_WOL_WOF)
+#define MV88E6XXX_FLAG_TEMP BIT(MV88E6XXX_CAP_TEMP)
+#define MV88E6XXX_FLAG_TEMP_LIMIT BIT(MV88E6XXX_CAP_TEMP_LIMIT)
+#define MV88E6XXX_FLAG_VLANTABLE BIT(MV88E6XXX_CAP_VLANTABLE)
+#define MV88E6XXX_FLAG_VTU BIT(MV88E6XXX_CAP_VTU)
+
+#define MV88E6XXX_FLAGS_FAMILY_6095 \
+ (MV88E6XXX_FLAG_ATU | \
+ MV88E6XXX_FLAG_PPU | \
+ MV88E6XXX_FLAG_VLANTABLE | \
+ MV88E6XXX_FLAG_VTU)
+
+#define MV88E6XXX_FLAGS_FAMILY_6097 \
+ (MV88E6XXX_FLAG_ATU | \
+ MV88E6XXX_FLAG_PPU | \
+ MV88E6XXX_FLAG_STU | \
+ MV88E6XXX_FLAG_VLANTABLE | \
+ MV88E6XXX_FLAG_VTU)
+
+#define MV88E6XXX_FLAGS_FAMILY_6165 \
+ (MV88E6XXX_FLAG_STU | \
+ MV88E6XXX_FLAG_SWITCH_MAC | \
+ MV88E6XXX_FLAG_TEMP | \
+ MV88E6XXX_FLAG_VTU)
+
+#define MV88E6XXX_FLAGS_FAMILY_6185 \
+ (MV88E6XXX_FLAG_ATU | \
+ MV88E6XXX_FLAG_PPU | \
+ MV88E6XXX_FLAG_VLANTABLE | \
+ MV88E6XXX_FLAG_VTU)
+
+#define MV88E6XXX_FLAGS_FAMILY_6320 \
+ (MV88E6XXX_FLAG_ATU | \
+ MV88E6XXX_FLAG_EEE | \
+ MV88E6XXX_FLAG_EEPROM | \
+ MV88E6XXX_FLAG_PORTSTATE | \
+ MV88E6XXX_FLAG_PPU_ACTIVE | \
+ MV88E6XXX_FLAG_SMI_PHY | \
+ MV88E6XXX_FLAG_SWITCH_MAC | \
+ MV88E6XXX_FLAG_TEMP | \
+ MV88E6XXX_FLAG_TEMP_LIMIT | \
+ MV88E6XXX_FLAG_VLANTABLE | \
+ MV88E6XXX_FLAG_VTU)
+
+#define MV88E6XXX_FLAGS_FAMILY_6351 \
+ (MV88E6XXX_FLAG_ATU | \
+ MV88E6XXX_FLAG_PORTSTATE | \
+ MV88E6XXX_FLAG_PPU_ACTIVE | \
+ MV88E6XXX_FLAG_SMI_PHY | \
+ MV88E6XXX_FLAG_STU | \
+ MV88E6XXX_FLAG_SWITCH_MAC | \
+ MV88E6XXX_FLAG_TEMP | \
+ MV88E6XXX_FLAG_VLANTABLE | \
+ MV88E6XXX_FLAG_VTU)
+
+#define MV88E6XXX_FLAGS_FAMILY_6352 \
+ (MV88E6XXX_FLAG_ATU | \
+ MV88E6XXX_FLAG_EEE | \
+ MV88E6XXX_FLAG_EEPROM | \
+ MV88E6XXX_FLAG_PORTSTATE | \
+ MV88E6XXX_FLAG_PPU_ACTIVE | \
+ MV88E6XXX_FLAG_SMI_PHY | \
+ MV88E6XXX_FLAG_STU | \
+ MV88E6XXX_FLAG_SWITCH_MAC | \
+ MV88E6XXX_FLAG_TEMP | \
+ MV88E6XXX_FLAG_TEMP_LIMIT | \
+ MV88E6XXX_FLAG_VLANTABLE | \
+ MV88E6XXX_FLAG_VTU)
+
+struct mv88e6xxx_info {
+ enum mv88e6xxx_family family;
+ u16 prod_num;
+ const char *name;
+ unsigned int num_databases;
+ unsigned int num_ports;
+ unsigned long flags;
};
struct mv88e6xxx_atu_entry {
@@ -393,17 +543,29 @@ struct mv88e6xxx_vtu_stu_entry {
struct mv88e6xxx_priv_port {
struct net_device *bridge_dev;
- u8 state;
};
struct mv88e6xxx_priv_state {
+ const struct mv88e6xxx_info *info;
+
+ /* The dsa_switch this private structure is related to */
+ struct dsa_switch *ds;
+
+ /* The device this structure is associated with */
+ struct device *dev;
+
/* When using multi-chip addressing, this mutex protects
* access to the indirect access registers. (In single-chip
* mode, this mutex is effectively useless.)
*/
struct mutex smi_mutex;
-#ifdef CONFIG_NET_DSA_MV88E6XXX_NEED_PPU
+ /* The MII bus and the address on the bus that is used to
+ * communicate with the switch
+ */
+ struct mii_bus *bus;
+ int sw_addr;
+
/* Handles automatic disabling and re-enabling of the PHY
* polling unit.
*/
@@ -411,7 +573,6 @@ struct mv88e6xxx_priv_state {
int ppu_disabled;
struct work_struct ppu_work;
struct timer_list ppu_timer;
-#endif
/* This mutex serialises access to the statistics unit.
* Hold this mutex over snapshot + dump sequences.
@@ -429,14 +590,16 @@ struct mv88e6xxx_priv_state {
*/
struct mutex eeprom_mutex;
- int id; /* switch product id */
- int num_ports; /* number of switch ports */
-
struct mv88e6xxx_priv_port ports[DSA_MAX_PORTS];
- DECLARE_BITMAP(port_state_update_mask, DSA_MAX_PORTS);
+ /* A switch may have a GPIO line tied to its reset pin. Parse
+ * this from the device tree, and use it before performing
+ * switch soft reset.
+ */
+ struct gpio_desc *reset;
- struct work_struct bridge_work;
+ /* set to size of eeprom if supported by the switch */
+ int eeprom_len;
};
enum stat_type {
@@ -452,104 +615,10 @@ struct mv88e6xxx_hw_stat {
enum stat_type type;
};
-int mv88e6xxx_switch_reset(struct dsa_switch *ds, bool ppu_active);
-char *mv88e6xxx_lookup_name(struct device *host_dev, int sw_addr,
- const struct mv88e6xxx_switch_id *table,
- unsigned int num);
-int mv88e6xxx_setup_ports(struct dsa_switch *ds);
-int mv88e6xxx_setup_common(struct dsa_switch *ds);
-int mv88e6xxx_setup_global(struct dsa_switch *ds);
-int mv88e6xxx_reg_read(struct dsa_switch *ds, int addr, int reg);
-int mv88e6xxx_reg_write(struct dsa_switch *ds, int addr, int reg, u16 val);
-int mv88e6xxx_set_addr_direct(struct dsa_switch *ds, u8 *addr);
-int mv88e6xxx_set_addr_indirect(struct dsa_switch *ds, u8 *addr);
-int mv88e6xxx_phy_read(struct dsa_switch *ds, int port, int regnum);
-int mv88e6xxx_phy_write(struct dsa_switch *ds, int port, int regnum, u16 val);
-int mv88e6xxx_phy_read_indirect(struct dsa_switch *ds, int port, int regnum);
-int mv88e6xxx_phy_write_indirect(struct dsa_switch *ds, int port, int regnum,
- u16 val);
-void mv88e6xxx_ppu_state_init(struct dsa_switch *ds);
-int mv88e6xxx_phy_read_ppu(struct dsa_switch *ds, int addr, int regnum);
-int mv88e6xxx_phy_write_ppu(struct dsa_switch *ds, int addr,
- int regnum, u16 val);
-void mv88e6xxx_get_strings(struct dsa_switch *ds, int port, uint8_t *data);
-void mv88e6xxx_get_ethtool_stats(struct dsa_switch *ds, int port,
- uint64_t *data);
-int mv88e6xxx_get_sset_count(struct dsa_switch *ds);
-int mv88e6xxx_get_sset_count_basic(struct dsa_switch *ds);
-void mv88e6xxx_adjust_link(struct dsa_switch *ds, int port,
- struct phy_device *phydev);
-int mv88e6xxx_get_regs_len(struct dsa_switch *ds, int port);
-void mv88e6xxx_get_regs(struct dsa_switch *ds, int port,
- struct ethtool_regs *regs, void *_p);
-int mv88e6xxx_get_temp(struct dsa_switch *ds, int *temp);
-int mv88e6xxx_get_temp_limit(struct dsa_switch *ds, int *temp);
-int mv88e6xxx_set_temp_limit(struct dsa_switch *ds, int temp);
-int mv88e6xxx_get_temp_alarm(struct dsa_switch *ds, bool *alarm);
-int mv88e6xxx_eeprom_load_wait(struct dsa_switch *ds);
-int mv88e6xxx_eeprom_busy_wait(struct dsa_switch *ds);
-int mv88e6xxx_phy_read_indirect(struct dsa_switch *ds, int addr, int regnum);
-int mv88e6xxx_phy_write_indirect(struct dsa_switch *ds, int addr, int regnum,
- u16 val);
-int mv88e6xxx_get_eee(struct dsa_switch *ds, int port, struct ethtool_eee *e);
-int mv88e6xxx_set_eee(struct dsa_switch *ds, int port,
- struct phy_device *phydev, struct ethtool_eee *e);
-int mv88e6xxx_port_bridge_join(struct dsa_switch *ds, int port,
- struct net_device *bridge);
-void mv88e6xxx_port_bridge_leave(struct dsa_switch *ds, int port);
-int mv88e6xxx_port_stp_update(struct dsa_switch *ds, int port, u8 state);
-int mv88e6xxx_port_vlan_filtering(struct dsa_switch *ds, int port,
- bool vlan_filtering);
-int mv88e6xxx_port_vlan_prepare(struct dsa_switch *ds, int port,
- const struct switchdev_obj_port_vlan *vlan,
- struct switchdev_trans *trans);
-int mv88e6xxx_port_vlan_add(struct dsa_switch *ds, int port,
- const struct switchdev_obj_port_vlan *vlan,
- struct switchdev_trans *trans);
-int mv88e6xxx_port_vlan_del(struct dsa_switch *ds, int port,
- const struct switchdev_obj_port_vlan *vlan);
-int mv88e6xxx_port_vlan_dump(struct dsa_switch *ds, int port,
- struct switchdev_obj_port_vlan *vlan,
- int (*cb)(struct switchdev_obj *obj));
-int mv88e6xxx_port_fdb_prepare(struct dsa_switch *ds, int port,
- const struct switchdev_obj_port_fdb *fdb,
- struct switchdev_trans *trans);
-int mv88e6xxx_port_fdb_add(struct dsa_switch *ds, int port,
- const struct switchdev_obj_port_fdb *fdb,
- struct switchdev_trans *trans);
-int mv88e6xxx_port_fdb_del(struct dsa_switch *ds, int port,
- const struct switchdev_obj_port_fdb *fdb);
-int mv88e6xxx_port_fdb_dump(struct dsa_switch *ds, int port,
- struct switchdev_obj_port_fdb *fdb,
- int (*cb)(struct switchdev_obj *obj));
-int mv88e6xxx_phy_page_read(struct dsa_switch *ds, int port, int page, int reg);
-int mv88e6xxx_phy_page_write(struct dsa_switch *ds, int port, int page,
- int reg, int val);
-
-extern struct dsa_switch_driver mv88e6131_switch_driver;
-extern struct dsa_switch_driver mv88e6123_switch_driver;
-extern struct dsa_switch_driver mv88e6352_switch_driver;
-extern struct dsa_switch_driver mv88e6171_switch_driver;
-
-#define REG_READ(addr, reg) \
- ({ \
- int __ret; \
- \
- __ret = mv88e6xxx_reg_read(ds, addr, reg); \
- if (__ret < 0) \
- return __ret; \
- __ret; \
- })
-
-#define REG_WRITE(addr, reg, val) \
- ({ \
- int __ret; \
- \
- __ret = mv88e6xxx_reg_write(ds, addr, reg, val); \
- if (__ret < 0) \
- return __ret; \
- })
-
-
+static inline bool mv88e6xxx_has(struct mv88e6xxx_priv_state *ps,
+ unsigned long flags)
+{
+ return (ps->info->flags & flags) == flags;
+}
#endif
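Note that mv88e6xxx_has() masks against every bit passed in and compares for equality, so a single call can require several capabilities at once. A small illustration (the helper name here is purely illustrative):

	/* true only if the chip has both EEPROM access and indirect SMI PHY ops */
	static bool mv88e6xxx_has_eeprom_over_smi(struct mv88e6xxx_priv_state *ps)
	{
		return mv88e6xxx_has(ps, MV88E6XXX_FLAG_EEPROM | MV88E6XXX_FLAG_SMI_PHY);
	}

For the single-flag tests used throughout the .c changes above, this behaves like a plain bit test.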
diff --git a/drivers/net/ethernet/3com/3c509.c b/drivers/net/ethernet/3com/3c509.c
index 7677c745fb30..91ada52f776b 100644
--- a/drivers/net/ethernet/3com/3c509.c
+++ b/drivers/net/ethernet/3com/3c509.c
@@ -699,7 +699,7 @@ el3_tx_timeout (struct net_device *dev)
dev->name, inb(ioaddr + TX_STATUS), inw(ioaddr + EL3_STATUS),
inw(ioaddr + TX_FREE));
dev->stats.tx_errors++;
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
/* Issue TX_RESET and TX_START commands. */
outw(TxReset, ioaddr + EL3_CMD);
outw(TxEnable, ioaddr + EL3_CMD);
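This and the similar conversions in the drivers below replace the open-coded dev->trans_start = jiffies with the netif_trans_update() helper. As of this series the helper is understood to stamp the start time on TX queue 0 rather than the legacy net_device field, roughly:

	static inline void netif_trans_update(struct net_device *dev)
	{
		struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

		if (txq->trans_start != jiffies)
			txq->trans_start = jiffies;
	}

so these single-queue drivers keep their "prevent tx timeout" behaviour while the watchdog bookkeeping moves to per-queue state.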
diff --git a/drivers/net/ethernet/3com/3c515.c b/drivers/net/ethernet/3com/3c515.c
index 942fb0d5aace..b26e038b4a0e 100644
--- a/drivers/net/ethernet/3com/3c515.c
+++ b/drivers/net/ethernet/3com/3c515.c
@@ -992,7 +992,7 @@ static void corkscrew_timeout(struct net_device *dev)
if (!(inw(ioaddr + EL3_STATUS) & CmdInProgress))
break;
outw(TxEnable, ioaddr + EL3_CMD);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
dev->stats.tx_errors++;
dev->stats.tx_dropped++;
netif_wake_queue(dev);
diff --git a/drivers/net/ethernet/3com/3c574_cs.c b/drivers/net/ethernet/3com/3c574_cs.c
index b9948f00c5e9..b88afd759307 100644
--- a/drivers/net/ethernet/3com/3c574_cs.c
+++ b/drivers/net/ethernet/3com/3c574_cs.c
@@ -700,7 +700,7 @@ static void el3_tx_timeout(struct net_device *dev)
netdev_notice(dev, "Transmit timed out!\n");
dump_status(dev);
dev->stats.tx_errors++;
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
/* Issue TX_RESET and TX_START commands. */
tc574_wait_for_completion(dev, TxReset);
outw(TxEnable, ioaddr + EL3_CMD);
diff --git a/drivers/net/ethernet/3com/3c589_cs.c b/drivers/net/ethernet/3com/3c589_cs.c
index c5a320507556..71396e4b87e3 100644
--- a/drivers/net/ethernet/3com/3c589_cs.c
+++ b/drivers/net/ethernet/3com/3c589_cs.c
@@ -534,7 +534,7 @@ static void el3_tx_timeout(struct net_device *dev)
netdev_warn(dev, "Transmit timed out!\n");
dump_status(dev);
dev->stats.tx_errors++;
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
/* Issue TX_RESET and TX_START commands. */
tc589_wait_for_completion(dev, TxReset);
outw(TxEnable, ioaddr + EL3_CMD);
diff --git a/drivers/net/ethernet/3com/3c59x.c b/drivers/net/ethernet/3com/3c59x.c
index d81fceddbe0e..25c55ab05c7d 100644
--- a/drivers/net/ethernet/3com/3c59x.c
+++ b/drivers/net/ethernet/3com/3c59x.c
@@ -1944,7 +1944,7 @@ static void vortex_tx_timeout(struct net_device *dev)
}
/* Issue Tx Enable */
iowrite16(TxEnable, ioaddr + EL3_CMD);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
}
/*
diff --git a/drivers/net/ethernet/8390/axnet_cs.c b/drivers/net/ethernet/8390/axnet_cs.c
index ec6eac1f8c95..4ea717d68c95 100644
--- a/drivers/net/ethernet/8390/axnet_cs.c
+++ b/drivers/net/ethernet/8390/axnet_cs.c
@@ -1041,7 +1041,7 @@ static netdev_tx_t axnet_start_xmit(struct sk_buff *skb,
{
ei_local->txing = 1;
NS8390_trigger_send(dev, send_length, output_page);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
if (output_page == ei_local->tx_start_page)
{
ei_local->tx1 = -1;
@@ -1270,7 +1270,7 @@ static void ei_tx_intr(struct net_device *dev)
{
ei_local->txing = 1;
NS8390_trigger_send(dev, ei_local->tx2, ei_local->tx_start_page + 6);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
ei_local->tx2 = -1,
ei_local->lasttx = 2;
}
@@ -1287,7 +1287,7 @@ static void ei_tx_intr(struct net_device *dev)
{
ei_local->txing = 1;
NS8390_trigger_send(dev, ei_local->tx1, ei_local->tx_start_page);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
ei_local->tx1 = -1;
ei_local->lasttx = 1;
}
diff --git a/drivers/net/ethernet/8390/lib8390.c b/drivers/net/ethernet/8390/lib8390.c
index b96e8852b2d1..60f8e2c8e726 100644
--- a/drivers/net/ethernet/8390/lib8390.c
+++ b/drivers/net/ethernet/8390/lib8390.c
@@ -596,7 +596,7 @@ static void ei_tx_intr(struct net_device *dev)
if (ei_local->tx2 > 0) {
ei_local->txing = 1;
NS8390_trigger_send(dev, ei_local->tx2, ei_local->tx_start_page + 6);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
ei_local->tx2 = -1,
ei_local->lasttx = 2;
} else
@@ -609,7 +609,7 @@ static void ei_tx_intr(struct net_device *dev)
if (ei_local->tx1 > 0) {
ei_local->txing = 1;
NS8390_trigger_send(dev, ei_local->tx1, ei_local->tx_start_page);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
ei_local->tx1 = -1;
ei_local->lasttx = 1;
} else
diff --git a/drivers/net/ethernet/adaptec/starfire.c b/drivers/net/ethernet/adaptec/starfire.c
index ac7288240d55..1d1069641d81 100644
--- a/drivers/net/ethernet/adaptec/starfire.c
+++ b/drivers/net/ethernet/adaptec/starfire.c
@@ -1129,7 +1129,7 @@ static void tx_timeout(struct net_device *dev)
/* Trigger an immediate transmit demand. */
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
dev->stats.tx_errors++;
netif_wake_queue(dev);
}
diff --git a/drivers/net/ethernet/adi/bfin_mac.c b/drivers/net/ethernet/adi/bfin_mac.c
index 74139cb7f849..3d2245fdc283 100644
--- a/drivers/net/ethernet/adi/bfin_mac.c
+++ b/drivers/net/ethernet/adi/bfin_mac.c
@@ -1430,7 +1430,7 @@ static void bfin_mac_timeout(struct net_device *dev)
bfin_mac_enable(lp->phydev);
/* We can accept TX packets again */
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
}
static void bfin_mac_multicast_hash(struct net_device *dev)
diff --git a/drivers/net/ethernet/aeroflex/greth.c b/drivers/net/ethernet/aeroflex/greth.c
index b873531c5575..bca07c5c94bd 100644
--- a/drivers/net/ethernet/aeroflex/greth.c
+++ b/drivers/net/ethernet/aeroflex/greth.c
@@ -1323,7 +1323,7 @@ static inline int phy_aneg_done(struct phy_device *phydev)
static int greth_mdio_init(struct greth_private *greth)
{
- int ret, phy;
+ int ret;
unsigned long timeout;
greth->mdio = mdiobus_alloc();
diff --git a/drivers/net/ethernet/agere/et131x.c b/drivers/net/ethernet/agere/et131x.c
index 0907ab6ff309..30defe6c81f2 100644
--- a/drivers/net/ethernet/agere/et131x.c
+++ b/drivers/net/ethernet/agere/et131x.c
@@ -3349,7 +3349,7 @@ static void et131x_down(struct net_device *netdev)
struct et131x_adapter *adapter = netdev_priv(netdev);
/* Save the timestamp for the TX watchdog, prevent a timeout */
- netdev->trans_start = jiffies;
+ netif_trans_update(netdev);
phy_stop(adapter->phydev);
et131x_disable_txrx(netdev);
@@ -3816,7 +3816,7 @@ static netdev_tx_t et131x_tx(struct sk_buff *skb, struct net_device *netdev)
netif_stop_queue(netdev);
/* Save the timestamp for the TX timeout watchdog */
- netdev->trans_start = jiffies;
+ netif_trans_update(netdev);
/* TCB is not available */
if (tx_ring->used >= NUM_TCB)
diff --git a/drivers/net/ethernet/allwinner/sun4i-emac.c b/drivers/net/ethernet/allwinner/sun4i-emac.c
index 8d50314ac3eb..de2c4bf5fac4 100644
--- a/drivers/net/ethernet/allwinner/sun4i-emac.c
+++ b/drivers/net/ethernet/allwinner/sun4i-emac.c
@@ -428,7 +428,7 @@ static void emac_timeout(struct net_device *dev)
emac_reset(db);
emac_init_device(dev);
/* We can accept TX packets again */
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
netif_wake_queue(dev);
/* Restore previous register address */
@@ -468,7 +468,7 @@ static int emac_start_xmit(struct sk_buff *skb, struct net_device *dev)
db->membase + EMAC_TX_CTL0_REG);
/* save the time stamp */
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
} else if (channel == 1) {
/* set TX len */
writel(skb->len, db->membase + EMAC_TX_PL1_REG);
@@ -477,7 +477,7 @@ static int emac_start_xmit(struct sk_buff *skb, struct net_device *dev)
db->membase + EMAC_TX_CTL1_REG);
/* save the time stamp */
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
}
if ((db->tx_fifo_stat & 3) == 3) {
diff --git a/drivers/net/ethernet/amd/7990.c b/drivers/net/ethernet/amd/7990.c
index 66d0b73c39c0..dcf2a1f3643d 100644
--- a/drivers/net/ethernet/amd/7990.c
+++ b/drivers/net/ethernet/amd/7990.c
@@ -260,7 +260,7 @@ static int lance_reset(struct net_device *dev)
load_csrs(lp);
lance_init_ring(dev);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
status = init_restart_lance(lp);
#ifdef DEBUG_DRIVER
printk("Lance restart=%d\n", status);
@@ -530,7 +530,7 @@ void lance_tx_timeout(struct net_device *dev)
{
printk("lance_tx_timeout\n");
lance_reset(dev);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue(dev);
}
EXPORT_SYMBOL_GPL(lance_tx_timeout);
@@ -543,11 +543,13 @@ int lance_start_xmit(struct sk_buff *skb, struct net_device *dev)
static int outs;
unsigned long flags;
- if (!TX_BUFFS_AVAIL)
- return NETDEV_TX_LOCKED;
-
netif_stop_queue(dev);
+ if (!TX_BUFFS_AVAIL) {
+ dev_consume_skb_any(skb);
+ return NETDEV_TX_OK;
+ }
+
skblen = skb->len;
#ifdef DEBUG_DRIVER
diff --git a/drivers/net/ethernet/amd/a2065.c b/drivers/net/ethernet/amd/a2065.c
index 56139184b801..a83cd1c4ce1d 100644
--- a/drivers/net/ethernet/amd/a2065.c
+++ b/drivers/net/ethernet/amd/a2065.c
@@ -512,7 +512,7 @@ static inline int lance_reset(struct net_device *dev)
load_csrs(lp);
lance_init_ring(dev);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_start_queue(dev);
status = init_restart_lance(lp);
@@ -547,10 +547,8 @@ static netdev_tx_t lance_start_xmit(struct sk_buff *skb,
local_irq_save(flags);
- if (!lance_tx_buffs_avail(lp)) {
- local_irq_restore(flags);
- return NETDEV_TX_LOCKED;
- }
+ if (!lance_tx_buffs_avail(lp))
+ goto out_free;
#ifdef DEBUG
/* dump the packet */
@@ -573,6 +571,7 @@ static netdev_tx_t lance_start_xmit(struct sk_buff *skb,
/* Kick the lance: transmit now */
ll->rdp = LE_C0_INEA | LE_C0_TDMD;
+ out_free:
dev_kfree_skb(skb);
local_irq_restore(flags);
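The lance changes above also stop returning NETDEV_TX_LOCKED when no TX buffers are available: with that return code on its way out, a driver is expected to consume the skb and report NETDEV_TX_OK (as 7990.c now does) or branch to a label that frees it (as a2065.c does), rather than asking the core to retry. The pattern from the 7990.c hunk:

	netif_stop_queue(dev);

	if (!TX_BUFFS_AVAIL) {
		dev_consume_skb_any(skb);
		return NETDEV_TX_OK;
	}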
diff --git a/drivers/net/ethernet/amd/atarilance.c b/drivers/net/ethernet/amd/atarilance.c
index b10964e8cb54..d2bc8e5dcd23 100644
--- a/drivers/net/ethernet/amd/atarilance.c
+++ b/drivers/net/ethernet/amd/atarilance.c
@@ -764,7 +764,7 @@ static void lance_tx_timeout (struct net_device *dev)
/* lance_restart, essentially */
lance_init_ring(dev);
REGA( CSR0 ) = CSR0_INEA | CSR0_INIT | CSR0_STRT;
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue(dev);
}
diff --git a/drivers/net/ethernet/amd/au1000_eth.c b/drivers/net/ethernet/amd/au1000_eth.c
index d3977d032b48..e0fb0f1122db 100644
--- a/drivers/net/ethernet/amd/au1000_eth.c
+++ b/drivers/net/ethernet/amd/au1000_eth.c
@@ -1074,7 +1074,7 @@ static void au1000_tx_timeout(struct net_device *dev)
netdev_err(dev, "au1000_tx_timeout: dev=%p\n", dev);
au1000_reset_mac(dev);
au1000_init(dev);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue(dev);
}
@@ -1269,7 +1269,7 @@ static int au1000_probe(struct platform_device *pdev)
aup->phy_irq = pd->phy_irq;
}
- if (aup->phy_busid && aup->phy_busid > 0) {
+ if (aup->phy_busid > 0) {
dev_err(&pdev->dev, "MAC0-associated PHY attached 2nd MACs MII bus not supported yet\n");
err = -ENODEV;
goto err_mdiobus_alloc;
diff --git a/drivers/net/ethernet/amd/declance.c b/drivers/net/ethernet/amd/declance.c
index b584b78237df..b799c7ac899b 100644
--- a/drivers/net/ethernet/amd/declance.c
+++ b/drivers/net/ethernet/amd/declance.c
@@ -877,7 +877,7 @@ static inline int lance_reset(struct net_device *dev)
lance_init_ring(dev);
load_csrs(lp);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
status = init_restart_lance(lp);
return status;
}
diff --git a/drivers/net/ethernet/amd/lance.c b/drivers/net/ethernet/amd/lance.c
index 3a7ebfdda57d..abb1ba228b26 100644
--- a/drivers/net/ethernet/amd/lance.c
+++ b/drivers/net/ethernet/amd/lance.c
@@ -943,7 +943,7 @@ static void lance_tx_timeout (struct net_device *dev)
#endif
lance_restart (dev, 0x0043, 1);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue (dev);
}
diff --git a/drivers/net/ethernet/amd/ni65.c b/drivers/net/ethernet/amd/ni65.c
index 1cf33addd15e..cda53db75f17 100644
--- a/drivers/net/ethernet/amd/ni65.c
+++ b/drivers/net/ethernet/amd/ni65.c
@@ -782,7 +782,7 @@ static void ni65_stop_start(struct net_device *dev,struct priv *p)
if(!p->lock)
if (p->tmdnum || !p->xmit_queued)
netif_wake_queue(dev);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
}
else
writedatareg(CSR0_STRT | csr0);
@@ -1148,7 +1148,7 @@ static void ni65_timeout(struct net_device *dev)
printk("%02x ",p->tmdhead[i].u.s.status);
printk("\n");
ni65_lance_reinit(dev);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue(dev);
}
diff --git a/drivers/net/ethernet/amd/nmclan_cs.c b/drivers/net/ethernet/amd/nmclan_cs.c
index 27245efe9f50..2807e181647b 100644
--- a/drivers/net/ethernet/amd/nmclan_cs.c
+++ b/drivers/net/ethernet/amd/nmclan_cs.c
@@ -851,7 +851,7 @@ static void mace_tx_timeout(struct net_device *dev)
#else /* #if RESET_ON_TIMEOUT */
pr_cont("NOT resetting card\n");
#endif /* #if RESET_ON_TIMEOUT */
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue(dev);
}
diff --git a/drivers/net/ethernet/amd/pcnet32.c b/drivers/net/ethernet/amd/pcnet32.c
index 7ccebae9cb48..c22bf52d3320 100644
--- a/drivers/net/ethernet/amd/pcnet32.c
+++ b/drivers/net/ethernet/amd/pcnet32.c
@@ -448,7 +448,7 @@ static void pcnet32_netif_stop(struct net_device *dev)
{
struct pcnet32_private *lp = netdev_priv(dev);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
napi_disable(&lp->napi);
netif_tx_disable(dev);
}
@@ -2426,7 +2426,7 @@ static void pcnet32_tx_timeout(struct net_device *dev)
}
pcnet32_restart(dev, CSR0_NORMAL);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue(dev);
spin_unlock_irqrestore(&lp->lock, flags);
diff --git a/drivers/net/ethernet/amd/sunlance.c b/drivers/net/ethernet/amd/sunlance.c
index 7847638bdd22..9b56b40259dc 100644
--- a/drivers/net/ethernet/amd/sunlance.c
+++ b/drivers/net/ethernet/amd/sunlance.c
@@ -997,7 +997,7 @@ static int lance_reset(struct net_device *dev)
}
lp->init_ring(dev);
load_csrs(lp);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
status = init_restart_lance(lp);
return status;
}
diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_cle.c b/drivers/net/ethernet/apm/xgene/xgene_enet_cle.c
index b212488606da..472c0fb3f4c4 100644
--- a/drivers/net/ethernet/apm/xgene/xgene_enet_cle.c
+++ b/drivers/net/ethernet/apm/xgene/xgene_enet_cle.c
@@ -43,6 +43,7 @@ static void xgene_cle_idt_to_hw(u32 dstqid, u32 fpsel,
static void xgene_cle_dbptr_to_hw(struct xgene_enet_pdata *pdata,
struct xgene_cle_dbptr *dbptr, u32 *buf)
{
+ buf[0] = SET_VAL(CLE_DROP, dbptr->drop);
buf[4] = SET_VAL(CLE_FPSEL, dbptr->fpsel) |
SET_VAL(CLE_DSTQIDL, dbptr->dstqid);
@@ -412,7 +413,7 @@ static int xgene_enet_cle_init(struct xgene_enet_pdata *pdata)
.branch = {
{
/* IPV4 */
- .valid = 0,
+ .valid = 1,
.next_packet_pointer = 22,
.jump_bw = JMP_FW,
.jump_rel = JMP_ABS,
@@ -420,7 +421,7 @@ static int xgene_enet_cle_init(struct xgene_enet_pdata *pdata)
.next_node = PKT_PROT_NODE,
.next_branch = 0,
.data = 0x8,
- .mask = 0xffff
+ .mask = 0x0
},
{
.valid = 0,
@@ -456,7 +457,7 @@ static int xgene_enet_cle_init(struct xgene_enet_pdata *pdata)
.next_node = RSS_IPV4_TCP_NODE,
.next_branch = 0,
.data = 0x0600,
- .mask = 0xffff
+ .mask = 0x00ff
},
{
/* UDP */
@@ -468,7 +469,7 @@ static int xgene_enet_cle_init(struct xgene_enet_pdata *pdata)
.next_node = RSS_IPV4_UDP_NODE,
.next_branch = 0,
.data = 0x1100,
- .mask = 0xffff
+ .mask = 0x00ff
},
{
.valid = 0,
@@ -642,7 +643,7 @@ static int xgene_enet_cle_init(struct xgene_enet_pdata *pdata)
{
/* TCP DST Port */
.valid = 0,
- .next_packet_pointer = 256,
+ .next_packet_pointer = 258,
.jump_bw = JMP_FW,
.jump_rel = JMP_ABS,
.operation = EQT,
@@ -729,6 +730,6 @@ static int xgene_enet_cle_init(struct xgene_enet_pdata *pdata)
return xgene_cle_setup_ptree(pdata, enet_cle);
}
-struct xgene_cle_ops xgene_cle3in_ops = {
+const struct xgene_cle_ops xgene_cle3in_ops = {
.cle_init = xgene_enet_cle_init,
};
diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_cle.h b/drivers/net/ethernet/apm/xgene/xgene_enet_cle.h
index 29a17abdd828..33c5f6b25824 100644
--- a/drivers/net/ethernet/apm/xgene/xgene_enet_cle.h
+++ b/drivers/net/ethernet/apm/xgene/xgene_enet_cle.h
@@ -83,6 +83,8 @@
#define CLE_TYPE_POS 0
#define CLE_TYPE_LEN 2
+#define CLE_DROP_POS 28
+#define CLE_DROP_LEN 1
#define CLE_DSTQIDL_POS 25
#define CLE_DSTQIDL_LEN 7
#define CLE_DSTQIDH_POS 0
@@ -290,6 +292,6 @@ struct xgene_enet_cle {
u32 jump_bytes;
};
-extern struct xgene_cle_ops xgene_cle3in_ops;
+extern const struct xgene_cle_ops xgene_cle3in_ops;
#endif /* __XGENE_ENET_CLE_H__ */
diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_hw.c b/drivers/net/ethernet/apm/xgene/xgene_enet_hw.c
index 39e081a70f5b..2f5638f7f864 100644
--- a/drivers/net/ethernet/apm/xgene/xgene_enet_hw.c
+++ b/drivers/net/ethernet/apm/xgene/xgene_enet_hw.c
@@ -219,27 +219,30 @@ void xgene_enet_parse_error(struct xgene_enet_desc_ring *ring,
struct xgene_enet_pdata *pdata,
enum xgene_enet_err_code status)
{
- struct rtnl_link_stats64 *stats = &pdata->stats;
-
switch (status) {
case INGRESS_CRC:
- stats->rx_crc_errors++;
+ ring->rx_crc_errors++;
+ ring->rx_dropped++;
break;
case INGRESS_CHECKSUM:
case INGRESS_CHECKSUM_COMPUTE:
- stats->rx_errors++;
+ ring->rx_errors++;
+ ring->rx_dropped++;
break;
case INGRESS_TRUNC_FRAME:
- stats->rx_frame_errors++;
+ ring->rx_frame_errors++;
+ ring->rx_dropped++;
break;
case INGRESS_PKT_LEN:
- stats->rx_length_errors++;
+ ring->rx_length_errors++;
+ ring->rx_dropped++;
break;
case INGRESS_PKT_UNDER:
- stats->rx_frame_errors++;
+ ring->rx_frame_errors++;
+ ring->rx_dropped++;
break;
case INGRESS_FIFO_OVERRUN:
- stats->rx_fifo_errors++;
+ ring->rx_fifo_errors++;
break;
default:
break;
@@ -824,7 +827,7 @@ static int xgene_mdiobus_register(struct xgene_enet_pdata *pdata,
return -EINVAL;
phy = get_phy_device(mdio, phy_id, false);
- if (!phy || IS_ERR(phy))
+ if (IS_ERR(phy))
return -EIO;
ret = phy_device_register(phy);
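
The hunk above drops the NULL test because get_phy_device() signals failure with an ERR_PTR-encoded errno rather than NULL, so IS_ERR() alone is the right check and PTR_ERR() recovers the error code. A small sketch of that idiom (example_attach_phy is a hypothetical helper):

#include <linux/err.h>
#include <linux/phy.h>

static int example_attach_phy(struct mii_bus *mdio, int addr)
{
	struct phy_device *phy;

	phy = get_phy_device(mdio, addr, false);
	if (IS_ERR(phy))
		return PTR_ERR(phy);	/* propagate the real errno */

	return phy_device_register(phy);
}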
diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_hw.h b/drivers/net/ethernet/apm/xgene/xgene_enet_hw.h
index ba7da98af2ef..45220be3122f 100644
--- a/drivers/net/ethernet/apm/xgene/xgene_enet_hw.h
+++ b/drivers/net/ethernet/apm/xgene/xgene_enet_hw.h
@@ -86,7 +86,7 @@ enum xgene_enet_rm {
#define RINGADDRL_POS 5
#define RINGADDRL_LEN 27
#define RINGADDRH_POS 0
-#define RINGADDRH_LEN 6
+#define RINGADDRH_LEN 7
#define RINGSIZE_POS 23
#define RINGSIZE_LEN 3
#define RINGTYPE_POS 19
@@ -94,9 +94,9 @@ enum xgene_enet_rm {
#define RINGMODE_POS 20
#define RINGMODE_LEN 3
#define RECOMTIMEOUTL_POS 28
-#define RECOMTIMEOUTL_LEN 3
+#define RECOMTIMEOUTL_LEN 4
#define RECOMTIMEOUTH_POS 0
-#define RECOMTIMEOUTH_LEN 2
+#define RECOMTIMEOUTH_LEN 3
#define NUMMSGSINQ_POS 1
#define NUMMSGSINQ_LEN 16
#define ACCEPTLERR BIT(19)
@@ -201,6 +201,8 @@ enum xgene_enet_rm {
#define USERINFO_LEN 32
#define FPQNUM_POS 32
#define FPQNUM_LEN 12
+#define ELERR_POS 46
+#define ELERR_LEN 2
#define NV_POS 50
#define NV_LEN 1
#define LL_POS 51
diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
index 8d4c1ad2fc60..d208b172f4d7 100644
--- a/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
+++ b/drivers/net/ethernet/apm/xgene/xgene_enet_main.c
@@ -443,8 +443,8 @@ static netdev_tx_t xgene_enet_start_xmit(struct sk_buff *skb,
skb_tx_timestamp(skb);
- pdata->stats.tx_packets++;
- pdata->stats.tx_bytes += skb->len;
+ tx_ring->tx_packets++;
+ tx_ring->tx_bytes += skb->len;
pdata->ring_ops->wr_cmd(tx_ring, count);
return NETDEV_TX_OK;
@@ -483,12 +483,12 @@ static int xgene_enet_rx_frame(struct xgene_enet_desc_ring *rx_ring,
skb = buf_pool->rx_skb[skb_index];
/* checking for error */
- status = GET_VAL(LERR, le64_to_cpu(raw_desc->m0));
+ status = (GET_VAL(ELERR, le64_to_cpu(raw_desc->m0)) << LERR_LEN) |
+ GET_VAL(LERR, le64_to_cpu(raw_desc->m0));
if (unlikely(status > 2)) {
dev_kfree_skb_any(skb);
xgene_enet_parse_error(rx_ring, netdev_priv(rx_ring->ndev),
status);
- pdata->stats.rx_dropped++;
ret = -EIO;
goto out;
}
@@ -506,8 +506,8 @@ static int xgene_enet_rx_frame(struct xgene_enet_desc_ring *rx_ring,
xgene_enet_skip_csum(skb);
}
- pdata->stats.rx_packets++;
- pdata->stats.rx_bytes += datalen;
+ rx_ring->rx_packets++;
+ rx_ring->rx_bytes += datalen;
napi_gro_receive(&rx_ring->napi, skb);
out:
if (--rx_ring->nbufpool == 0) {
@@ -630,7 +630,7 @@ static int xgene_enet_register_irq(struct net_device *ndev)
ring = pdata->rx_ring[i];
irq_set_status_flags(ring->irq, IRQ_DISABLE_UNLAZY);
ret = devm_request_irq(dev, ring->irq, xgene_enet_rx_irq,
- IRQF_SHARED, ring->irq_name, ring);
+ 0, ring->irq_name, ring);
if (ret) {
netdev_err(ndev, "Failed to request irq %s\n",
ring->irq_name);
@@ -641,7 +641,7 @@ static int xgene_enet_register_irq(struct net_device *ndev)
ring = pdata->tx_ring[i]->cp_ring;
irq_set_status_flags(ring->irq, IRQ_DISABLE_UNLAZY);
ret = devm_request_irq(dev, ring->irq, xgene_enet_rx_irq,
- IRQF_SHARED, ring->irq_name, ring);
+ 0, ring->irq_name, ring);
if (ret) {
netdev_err(ndev, "Failed to request irq %s\n",
ring->irq_name);
@@ -973,6 +973,17 @@ static enum xgene_ring_owner xgene_derive_ring_owner(struct xgene_enet_pdata *p)
return owner;
}
+static u8 xgene_start_cpu_bufnum(struct xgene_enet_pdata *pdata)
+{
+ struct device *dev = &pdata->pdev->dev;
+ u32 cpu_bufnum;
+ int ret;
+
+ ret = device_property_read_u32(dev, "channel", &cpu_bufnum);
+
+ return (!ret) ? cpu_bufnum : pdata->cpu_bufnum;
+}
+
static int xgene_enet_create_desc_rings(struct net_device *ndev)
{
struct xgene_enet_pdata *pdata = netdev_priv(ndev);
@@ -981,13 +992,15 @@ static int xgene_enet_create_desc_rings(struct net_device *ndev)
struct xgene_enet_desc_ring *buf_pool = NULL;
enum xgene_ring_owner owner;
dma_addr_t dma_exp_bufs;
- u8 cpu_bufnum = pdata->cpu_bufnum;
+ u8 cpu_bufnum;
u8 eth_bufnum = pdata->eth_bufnum;
u8 bp_bufnum = pdata->bp_bufnum;
u16 ring_num = pdata->ring_num;
u16 ring_id;
int i, ret, size;
+ cpu_bufnum = xgene_start_cpu_bufnum(pdata);
+
for (i = 0; i < pdata->rxq_cnt; i++) {
/* allocate rx descriptor ring */
owner = xgene_derive_ring_owner(pdata);
@@ -1114,12 +1127,31 @@ static struct rtnl_link_stats64 *xgene_enet_get_stats64(
{
struct xgene_enet_pdata *pdata = netdev_priv(ndev);
struct rtnl_link_stats64 *stats = &pdata->stats;
+ struct xgene_enet_desc_ring *ring;
+ int i;
- stats->rx_errors += stats->rx_length_errors +
- stats->rx_crc_errors +
- stats->rx_frame_errors +
- stats->rx_fifo_errors;
- memcpy(storage, &pdata->stats, sizeof(struct rtnl_link_stats64));
+ memset(stats, 0, sizeof(struct rtnl_link_stats64));
+ for (i = 0; i < pdata->txq_cnt; i++) {
+ ring = pdata->tx_ring[i];
+ if (ring) {
+ stats->tx_packets += ring->tx_packets;
+ stats->tx_bytes += ring->tx_bytes;
+ }
+ }
+
+ for (i = 0; i < pdata->rxq_cnt; i++) {
+ ring = pdata->rx_ring[i];
+ if (ring) {
+ stats->rx_packets += ring->rx_packets;
+ stats->rx_bytes += ring->rx_bytes;
+ stats->rx_errors += ring->rx_length_errors +
+ ring->rx_crc_errors +
+ ring->rx_frame_errors +
+ ring->rx_fifo_errors;
+ stats->rx_dropped += ring->rx_dropped;
+ }
+ }
+ memcpy(storage, stats, sizeof(struct rtnl_link_stats64));
return storage;
}
@@ -1234,6 +1266,13 @@ static int xgene_enet_get_irqs(struct xgene_enet_pdata *pdata)
for (i = 0; i < max_irqs; i++) {
ret = platform_get_irq(pdev, i);
if (ret <= 0) {
+ if (pdata->phy_mode == PHY_INTERFACE_MODE_XGMII) {
+ max_irqs = i;
+ pdata->rxq_cnt = max_irqs / 2;
+ pdata->txq_cnt = max_irqs / 2;
+ pdata->cq_cnt = max_irqs / 2;
+ break;
+ }
dev_err(dev, "Unable to get ENET IRQ\n");
ret = ret ? : -ENXIO;
return ret;
@@ -1437,19 +1476,28 @@ static void xgene_enet_setup_ops(struct xgene_enet_pdata *pdata)
pdata->port_ops = &xgene_xgport_ops;
pdata->cle_ops = &xgene_cle3in_ops;
pdata->rm = RM0;
- pdata->rxq_cnt = XGENE_NUM_RX_RING;
- pdata->txq_cnt = XGENE_NUM_TX_RING;
- pdata->cq_cnt = XGENE_NUM_TXC_RING;
+ if (!pdata->rxq_cnt) {
+ pdata->rxq_cnt = XGENE_NUM_RX_RING;
+ pdata->txq_cnt = XGENE_NUM_TX_RING;
+ pdata->cq_cnt = XGENE_NUM_TXC_RING;
+ }
break;
}
if (pdata->enet_id == XGENE_ENET1) {
switch (pdata->port_id) {
case 0:
- pdata->cpu_bufnum = START_CPU_BUFNUM_0;
- pdata->eth_bufnum = START_ETH_BUFNUM_0;
- pdata->bp_bufnum = START_BP_BUFNUM_0;
- pdata->ring_num = START_RING_NUM_0;
+ if (pdata->phy_mode == PHY_INTERFACE_MODE_XGMII) {
+ pdata->cpu_bufnum = X2_START_CPU_BUFNUM_0;
+ pdata->eth_bufnum = X2_START_ETH_BUFNUM_0;
+ pdata->bp_bufnum = X2_START_BP_BUFNUM_0;
+ pdata->ring_num = START_RING_NUM_0;
+ } else {
+ pdata->cpu_bufnum = START_CPU_BUFNUM_0;
+ pdata->eth_bufnum = START_ETH_BUFNUM_0;
+ pdata->bp_bufnum = START_BP_BUFNUM_0;
+ pdata->ring_num = START_RING_NUM_0;
+ }
break;
case 1:
if (pdata->phy_mode == PHY_INTERFACE_MODE_XGMII) {
@@ -1595,21 +1643,22 @@ static int xgene_enet_probe(struct platform_device *pdev)
ret = xgene_enet_init_hw(pdata);
if (ret)
- goto err;
+ goto err_netdev;
mac_ops = pdata->mac_ops;
if (pdata->phy_mode == PHY_INTERFACE_MODE_RGMII) {
ret = xgene_enet_mdio_config(pdata);
if (ret)
- goto err;
+ goto err_netdev;
} else {
INIT_DELAYED_WORK(&pdata->link_work, mac_ops->link_state);
}
xgene_enet_napi_add(pdata);
return 0;
-err:
+err_netdev:
unregister_netdev(ndev);
+err:
free_netdev(ndev);
return ret;
}
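
The xgene_enet_main.c changes above move the packet, byte and error counters from the shared pdata->stats into each descriptor ring and sum them in the ndo_get_stats64() handler. A reduced sketch of that aggregation, with example_ring and example_priv standing in for the driver's structures and the per-ring NULL checks omitted for brevity:

#include <linux/netdevice.h>
#include <linux/string.h>
#include <linux/types.h>

struct example_ring {
	u64 tx_packets;
	u64 tx_bytes;
	u64 rx_packets;
	u64 rx_bytes;
	u64 rx_dropped;
};

struct example_priv {
	struct example_ring *tx_ring[8];
	struct example_ring *rx_ring[8];
	int txq_cnt;
	int rxq_cnt;
};

/* Zero the rtnl counters, then fold every ring's private counters in. */
static void example_fill_stats64(struct example_priv *p,
				 struct rtnl_link_stats64 *stats)
{
	int i;

	memset(stats, 0, sizeof(*stats));
	for (i = 0; i < p->txq_cnt; i++) {
		stats->tx_packets += p->tx_ring[i]->tx_packets;
		stats->tx_bytes += p->tx_ring[i]->tx_bytes;
	}
	for (i = 0; i < p->rxq_cnt; i++) {
		stats->rx_packets += p->rx_ring[i]->rx_packets;
		stats->rx_bytes += p->rx_ring[i]->rx_bytes;
		stats->rx_dropped += p->rx_ring[i]->rx_dropped;
	}
}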
diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_main.h b/drivers/net/ethernet/apm/xgene/xgene_enet_main.h
index 175d18890c7a..092fbeccaa20 100644
--- a/drivers/net/ethernet/apm/xgene/xgene_enet_main.h
+++ b/drivers/net/ethernet/apm/xgene/xgene_enet_main.h
@@ -49,10 +49,10 @@
#define XGENE_ENET_MSS 1448
#define XGENE_MIN_ENET_FRAME_SIZE 60
-#define XGENE_MAX_ENET_IRQ 8
-#define XGENE_NUM_RX_RING 4
-#define XGENE_NUM_TX_RING 4
-#define XGENE_NUM_TXC_RING 4
+#define XGENE_MAX_ENET_IRQ 16
+#define XGENE_NUM_RX_RING 8
+#define XGENE_NUM_TX_RING 8
+#define XGENE_NUM_TXC_RING 8
#define START_CPU_BUFNUM_0 0
#define START_ETH_BUFNUM_0 2
@@ -121,6 +121,16 @@ struct xgene_enet_desc_ring {
struct xgene_enet_raw_desc16 *raw_desc16;
};
__le64 *exp_bufs;
+ u64 tx_packets;
+ u64 tx_bytes;
+ u64 rx_packets;
+ u64 rx_bytes;
+ u64 rx_dropped;
+ u64 rx_errors;
+ u64 rx_length_errors;
+ u64 rx_crc_errors;
+ u64 rx_frame_errors;
+ u64 rx_fifo_errors;
};
struct xgene_mac_ops {
@@ -191,7 +201,7 @@ struct xgene_enet_pdata {
const struct xgene_mac_ops *mac_ops;
const struct xgene_port_ops *port_ops;
struct xgene_ring_ops *ring_ops;
- struct xgene_cle_ops *cle_ops;
+ const struct xgene_cle_ops *cle_ops;
struct delayed_work link_work;
u32 port_id;
u8 cpu_bufnum;
diff --git a/drivers/net/ethernet/apm/xgene/xgene_enet_sgmac.h b/drivers/net/ethernet/apm/xgene/xgene_enet_sgmac.h
index 29a71b4dcc44..002df5a6756e 100644
--- a/drivers/net/ethernet/apm/xgene/xgene_enet_sgmac.h
+++ b/drivers/net/ethernet/apm/xgene/xgene_enet_sgmac.h
@@ -33,7 +33,7 @@
#define LINK_STATUS BIT(2)
#define LINK_UP BIT(15)
#define MPA_IDLE_WITH_QMI_EMPTY BIT(12)
-#define SG_RX_DV_GATE_REG_0_ADDR 0x0dfc
+#define SG_RX_DV_GATE_REG_0_ADDR 0x05fc
extern const struct xgene_mac_ops xgene_sgmac_ops;
extern const struct xgene_port_ops xgene_sgport_ops;
diff --git a/drivers/net/ethernet/atheros/alx/main.c b/drivers/net/ethernet/atheros/alx/main.c
index 55b118e876fd..9fe8b5e310d1 100644
--- a/drivers/net/ethernet/atheros/alx/main.c
+++ b/drivers/net/ethernet/atheros/alx/main.c
@@ -745,7 +745,7 @@ static netdev_features_t alx_fix_features(struct net_device *netdev,
static void alx_netif_stop(struct alx_priv *alx)
{
- alx->dev->trans_start = jiffies;
+ netif_trans_update(alx->dev);
if (netif_carrier_ok(alx->dev)) {
netif_carrier_off(alx->dev);
netif_tx_disable(alx->dev);
diff --git a/drivers/net/ethernet/atheros/atl1c/atl1c.h b/drivers/net/ethernet/atheros/atl1c/atl1c.h
index b9203d928938..c46b489ce9b4 100644
--- a/drivers/net/ethernet/atheros/atl1c/atl1c.h
+++ b/drivers/net/ethernet/atheros/atl1c/atl1c.h
@@ -488,7 +488,7 @@ struct atl1c_tpd_ring {
dma_addr_t dma; /* descriptor ring physical address */
u16 size; /* descriptor ring length in bytes */
u16 count; /* number of descriptors in the ring */
- u16 next_to_use; /* this is protectd by adapter->tx_lock */
+ u16 next_to_use;
atomic_t next_to_clean;
struct atl1c_buffer *buffer_info;
};
@@ -542,7 +542,6 @@ struct atl1c_adapter {
u16 link_duplex;
spinlock_t mdio_lock;
- spinlock_t tx_lock;
atomic_t irq_sem;
struct work_struct common_task;
diff --git a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
index d0084d4d1a9b..a3200ea6d765 100644
--- a/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
+++ b/drivers/net/ethernet/atheros/atl1c/atl1c_main.c
@@ -821,7 +821,6 @@ static int atl1c_sw_init(struct atl1c_adapter *adapter)
atl1c_set_rxbufsize(adapter, adapter->netdev);
atomic_set(&adapter->irq_sem, 1);
spin_lock_init(&adapter->mdio_lock);
- spin_lock_init(&adapter->tx_lock);
set_bit(__AT_DOWN, &adapter->flags);
return 0;
@@ -2206,7 +2205,6 @@ static netdev_tx_t atl1c_xmit_frame(struct sk_buff *skb,
struct net_device *netdev)
{
struct atl1c_adapter *adapter = netdev_priv(netdev);
- unsigned long flags;
u16 tpd_req = 1;
struct atl1c_tpd_desc *tpd;
enum atl1c_trans_queue type = atl1c_trans_normal;
@@ -2217,16 +2215,10 @@ static netdev_tx_t atl1c_xmit_frame(struct sk_buff *skb,
}
tpd_req = atl1c_cal_tpd_req(skb);
- if (!spin_trylock_irqsave(&adapter->tx_lock, flags)) {
- if (netif_msg_pktdata(adapter))
- dev_info(&adapter->pdev->dev, "tx locked\n");
- return NETDEV_TX_LOCKED;
- }
if (atl1c_tpd_avail(adapter, type) < tpd_req) {
/* not enough descriptors, just stop queue */
netif_stop_queue(netdev);
- spin_unlock_irqrestore(&adapter->tx_lock, flags);
return NETDEV_TX_BUSY;
}
@@ -2234,7 +2226,6 @@ static netdev_tx_t atl1c_xmit_frame(struct sk_buff *skb,
/* do TSO and check sum */
if (atl1c_tso_csum(adapter, skb, &tpd, type) != 0) {
- spin_unlock_irqrestore(&adapter->tx_lock, flags);
dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}
@@ -2257,12 +2248,10 @@ static netdev_tx_t atl1c_xmit_frame(struct sk_buff *skb,
"tx-skb droppted due to dma error\n");
/* roll back tpd/buffer */
atl1c_tx_rollback(adapter, tpd, type);
- spin_unlock_irqrestore(&adapter->tx_lock, flags);
dev_kfree_skb_any(skb);
} else {
netdev_sent_queue(adapter->netdev, skb->len);
atl1c_tx_queue(adapter, skb, tpd, type);
- spin_unlock_irqrestore(&adapter->tx_lock, flags);
}
return NETDEV_TX_OK;
diff --git a/drivers/net/ethernet/atheros/atl1e/atl1e.h b/drivers/net/ethernet/atheros/atl1e/atl1e.h
index 0212dac7e23a..632bb843aed6 100644
--- a/drivers/net/ethernet/atheros/atl1e/atl1e.h
+++ b/drivers/net/ethernet/atheros/atl1e/atl1e.h
@@ -442,7 +442,6 @@ struct atl1e_adapter {
u16 link_duplex;
spinlock_t mdio_lock;
- spinlock_t tx_lock;
atomic_t irq_sem;
struct work_struct reset_task;
diff --git a/drivers/net/ethernet/atheros/atl1e/atl1e_main.c b/drivers/net/ethernet/atheros/atl1e/atl1e_main.c
index 59a03a193e83..974713b19ab6 100644
--- a/drivers/net/ethernet/atheros/atl1e/atl1e_main.c
+++ b/drivers/net/ethernet/atheros/atl1e/atl1e_main.c
@@ -648,7 +648,6 @@ static int atl1e_sw_init(struct atl1e_adapter *adapter)
atomic_set(&adapter->irq_sem, 1);
spin_lock_init(&adapter->mdio_lock);
- spin_lock_init(&adapter->tx_lock);
set_bit(__AT_DOWN, &adapter->flags);
@@ -1866,7 +1865,6 @@ static netdev_tx_t atl1e_xmit_frame(struct sk_buff *skb,
struct net_device *netdev)
{
struct atl1e_adapter *adapter = netdev_priv(netdev);
- unsigned long flags;
u16 tpd_req = 1;
struct atl1e_tpd_desc *tpd;
@@ -1880,13 +1878,10 @@ static netdev_tx_t atl1e_xmit_frame(struct sk_buff *skb,
return NETDEV_TX_OK;
}
tpd_req = atl1e_cal_tdp_req(skb);
- if (!spin_trylock_irqsave(&adapter->tx_lock, flags))
- return NETDEV_TX_LOCKED;
if (atl1e_tpd_avail(adapter) < tpd_req) {
/* not enough descriptors, just stop queue */
netif_stop_queue(netdev);
- spin_unlock_irqrestore(&adapter->tx_lock, flags);
return NETDEV_TX_BUSY;
}
@@ -1910,7 +1905,6 @@ static netdev_tx_t atl1e_xmit_frame(struct sk_buff *skb,
/* do TSO and check sum */
if (atl1e_tso_csum(adapter, skb, tpd) != 0) {
- spin_unlock_irqrestore(&adapter->tx_lock, flags);
dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}
@@ -1921,10 +1915,7 @@ static netdev_tx_t atl1e_xmit_frame(struct sk_buff *skb,
}
atl1e_tx_queue(adapter, tpd_req, tpd);
-
- netdev->trans_start = jiffies; /* NETIF_F_LLTX driver :( */
out:
- spin_unlock_irqrestore(&adapter->tx_lock, flags);
return NETDEV_TX_OK;
}
@@ -2285,8 +2276,7 @@ static int atl1e_init_netdev(struct net_device *netdev, struct pci_dev *pdev)
netdev->hw_features = NETIF_F_SG | NETIF_F_HW_CSUM | NETIF_F_TSO |
NETIF_F_HW_VLAN_CTAG_RX;
- netdev->features = netdev->hw_features | NETIF_F_LLTX |
- NETIF_F_HW_VLAN_CTAG_TX;
+ netdev->features = netdev->hw_features | NETIF_F_HW_VLAN_CTAG_TX;
/* not enabled by default */
netdev->hw_features |= NETIF_F_RXALL | NETIF_F_RXFCS;
return 0;
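
The atl1c and atl1e hunks above drop the drivers' private tx_lock together with the NETDEV_TX_LOCKED return and the NETIF_F_LLTX flag: without LLTX the core already holds the tx queue lock around ndo_start_xmit, so the xmit path only needs to stop the queue and return NETDEV_TX_BUSY when descriptors run out. A skeleton of that lock-free path (example_adapter and the example_* helpers are hypothetical stand-ins for driver state and code):

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static netdev_tx_t example_xmit_frame(struct sk_buff *skb,
				      struct net_device *netdev)
{
	struct example_adapter *adapter = netdev_priv(netdev);

	/* No private lock and no NETDEV_TX_LOCKED: the core serializes
	 * ndo_start_xmit per tx queue once NETIF_F_LLTX is not advertised.
	 */
	if (example_desc_avail(adapter) < example_desc_needed(skb)) {
		netif_stop_queue(netdev);	/* ring is full, back off */
		return NETDEV_TX_BUSY;		/* the core requeues the skb */
	}

	example_queue_skb(adapter, skb);	/* map buffers, ring doorbell */
	return NETDEV_TX_OK;
}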
diff --git a/drivers/net/ethernet/broadcom/bcmsysport.c b/drivers/net/ethernet/broadcom/bcmsysport.c
index 993c780bdfab..543bf38105c9 100644
--- a/drivers/net/ethernet/broadcom/bcmsysport.c
+++ b/drivers/net/ethernet/broadcom/bcmsysport.c
@@ -831,7 +831,7 @@ static int bcm_sysport_poll(struct napi_struct *napi, int budget)
rdma_writel(priv, priv->rx_c_index, RDMA_CONS_INDEX);
if (work_done < budget) {
- napi_complete(napi);
+ napi_complete_done(napi, work_done);
/* re-enable RX interrupts */
intrl2_0_mask_clear(priv, INTRL2_0_RDMA_MBDONE);
}
@@ -873,7 +873,7 @@ static irqreturn_t bcm_sysport_rx_isr(int irq, void *dev_id)
if (likely(napi_schedule_prep(&priv->napi))) {
/* disable RX interrupts */
intrl2_0_mask_set(priv, INTRL2_0_RDMA_MBDONE);
- __napi_schedule(&priv->napi);
+ __napi_schedule_irqoff(&priv->napi);
}
}
@@ -916,7 +916,7 @@ static irqreturn_t bcm_sysport_tx_isr(int irq, void *dev_id)
if (likely(napi_schedule_prep(&txr->napi))) {
intrl2_1_mask_set(priv, BIT(ring));
- __napi_schedule(&txr->napi);
+ __napi_schedule_irqoff(&txr->napi);
}
}
@@ -1117,7 +1117,7 @@ static void bcm_sysport_tx_timeout(struct net_device *dev)
{
netdev_warn(dev, "transmit timeout!\n");
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
dev->stats.tx_errors++;
netif_tx_wake_all_queues(dev);
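
The bcmsysport hunks switch the poll routine to napi_complete_done(), which tells the core how much work the poll actually did, and the hard-IRQ handlers to __napi_schedule_irqoff(), which skips a redundant local IRQ save/restore because interrupts are already disabled there. A compact sketch of both halves (example_priv, example_poll and example_rx_isr are illustrative names; the device-specific interrupt masking is only indicated by comments):

#include <linux/interrupt.h>
#include <linux/netdevice.h>

struct example_priv {
	struct napi_struct napi;
};

static int example_poll(struct napi_struct *napi, int budget)
{
	int work_done = 0;

	/* ... process up to @budget rx completions, counting work_done ... */

	if (work_done < budget) {
		napi_complete_done(napi, work_done);
		/* re-enable the device's rx interrupt here */
	}
	return work_done;
}

static irqreturn_t example_rx_isr(int irq, void *dev_id)
{
	struct example_priv *priv = dev_id;

	if (likely(napi_schedule_prep(&priv->napi))) {
		/* mask the device's rx interrupt here */
		__napi_schedule_irqoff(&priv->napi);	/* IRQs already off */
	}
	return IRQ_HANDLED;
}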
diff --git a/drivers/net/ethernet/broadcom/bgmac.c b/drivers/net/ethernet/broadcom/bgmac.c
index 38db2e4d7d54..ee5f431ab32a 100644
--- a/drivers/net/ethernet/broadcom/bgmac.c
+++ b/drivers/net/ethernet/broadcom/bgmac.c
@@ -1515,7 +1515,7 @@ static int bgmac_mii_register(struct bgmac *bgmac)
phy_dev = phy_connect(bgmac->net_dev, bus_id, &bgmac_adjust_link,
PHY_INTERFACE_MODE_MII);
if (IS_ERR(phy_dev)) {
- bgmac_err(bgmac, "PHY connecton failed\n");
+ bgmac_err(bgmac, "PHY connection failed\n");
err = PTR_ERR(phy_dev);
goto err_unregister_bus;
}
diff --git a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
index d465bd721146..0a5b770cefaa 100644
--- a/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
+++ b/drivers/net/ethernet/broadcom/bnx2x/bnx2x_main.c
@@ -13259,12 +13259,11 @@ static int bnx2x_init_dev(struct bnx2x *bp, struct pci_dev *pdev,
NETIF_F_RXHASH | NETIF_F_HW_VLAN_CTAG_TX;
if (!chip_is_e1x) {
dev->hw_features |= NETIF_F_GSO_GRE | NETIF_F_GSO_UDP_TUNNEL |
- NETIF_F_GSO_IPIP | NETIF_F_GSO_SIT;
+ NETIF_F_GSO_IPXIP4;
dev->hw_enc_features =
NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | NETIF_F_SG |
NETIF_F_TSO | NETIF_F_TSO_ECN | NETIF_F_TSO6 |
- NETIF_F_GSO_IPIP |
- NETIF_F_GSO_SIT |
+ NETIF_F_GSO_IPXIP4 |
NETIF_F_GSO_GRE | NETIF_F_GSO_UDP_TUNNEL;
}
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.c b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
index 72eb29ed0359..72a2efff8e49 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.c
@@ -1,6 +1,6 @@
/* Broadcom NetXtreme-C/E network driver.
*
- * Copyright (c) 2014-2015 Broadcom Corporation
+ * Copyright (c) 2014-2016 Broadcom Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -78,6 +78,7 @@ enum board_idx {
BCM57402,
BCM57404,
BCM57406,
+ BCM57314,
BCM57304_VF,
BCM57404_VF,
};
@@ -92,6 +93,7 @@ static const struct {
{ "Broadcom BCM57402 NetXtreme-E Dual-port 10Gb Ethernet" },
{ "Broadcom BCM57404 NetXtreme-E Dual-port 10Gb/25Gb Ethernet" },
{ "Broadcom BCM57406 NetXtreme-E Dual-port 10GBase-T Ethernet" },
+ { "Broadcom BCM57314 NetXtreme-C Dual-port 10Gb/25Gb/40Gb/50Gb Ethernet" },
{ "Broadcom BCM57304 NetXtreme-C Ethernet Virtual Function" },
{ "Broadcom BCM57404 NetXtreme-E Ethernet Virtual Function" },
};
@@ -103,6 +105,7 @@ static const struct pci_device_id bnxt_pci_tbl[] = {
{ PCI_VDEVICE(BROADCOM, 0x16d0), .driver_data = BCM57402 },
{ PCI_VDEVICE(BROADCOM, 0x16d1), .driver_data = BCM57404 },
{ PCI_VDEVICE(BROADCOM, 0x16d2), .driver_data = BCM57406 },
+ { PCI_VDEVICE(BROADCOM, 0x16df), .driver_data = BCM57314 },
#ifdef CONFIG_BNXT_SRIOV
{ PCI_VDEVICE(BROADCOM, 0x16cb), .driver_data = BCM57304_VF },
{ PCI_VDEVICE(BROADCOM, 0x16d3), .driver_data = BCM57404_VF },
@@ -118,6 +121,13 @@ static const u16 bnxt_vf_req_snif[] = {
HWRM_CFA_L2_FILTER_ALLOC,
};
+static const u16 bnxt_async_events_arr[] = {
+ HWRM_ASYNC_EVENT_CMPL_EVENT_ID_LINK_STATUS_CHANGE,
+ HWRM_ASYNC_EVENT_CMPL_EVENT_ID_PF_DRVR_UNLOAD,
+ HWRM_ASYNC_EVENT_CMPL_EVENT_ID_PORT_CONN_NOT_ALLOWED,
+ HWRM_ASYNC_EVENT_CMPL_EVENT_ID_LINK_SPEED_CFG_CHANGE,
+};
+
static bool bnxt_vf_pciid(enum board_idx idx)
{
return (idx == BCM57304_VF || idx == BCM57404_VF);
@@ -813,6 +823,46 @@ static inline struct sk_buff *bnxt_copy_skb(struct bnxt_napi *bnapi, u8 *data,
return skb;
}
+static int bnxt_discard_rx(struct bnxt *bp, struct bnxt_napi *bnapi,
+ u32 *raw_cons, void *cmp)
+{
+ struct bnxt_cp_ring_info *cpr = &bnapi->cp_ring;
+ struct rx_cmp *rxcmp = cmp;
+ u32 tmp_raw_cons = *raw_cons;
+ u8 cmp_type, agg_bufs = 0;
+
+ cmp_type = RX_CMP_TYPE(rxcmp);
+
+ if (cmp_type == CMP_TYPE_RX_L2_CMP) {
+ agg_bufs = (le32_to_cpu(rxcmp->rx_cmp_misc_v1) &
+ RX_CMP_AGG_BUFS) >>
+ RX_CMP_AGG_BUFS_SHIFT;
+ } else if (cmp_type == CMP_TYPE_RX_L2_TPA_END_CMP) {
+ struct rx_tpa_end_cmp *tpa_end = cmp;
+
+ agg_bufs = (le32_to_cpu(tpa_end->rx_tpa_end_cmp_misc_v1) &
+ RX_TPA_END_CMP_AGG_BUFS) >>
+ RX_TPA_END_CMP_AGG_BUFS_SHIFT;
+ }
+
+ if (agg_bufs) {
+ if (!bnxt_agg_bufs_valid(bp, cpr, agg_bufs, &tmp_raw_cons))
+ return -EBUSY;
+ }
+ *raw_cons = tmp_raw_cons;
+ return 0;
+}
+
+static void bnxt_sched_reset(struct bnxt *bp, struct bnxt_rx_ring_info *rxr)
+{
+ if (!rxr->bnapi->in_reset) {
+ rxr->bnapi->in_reset = true;
+ set_bit(BNXT_RESET_TASK_SP_EVENT, &bp->sp_event);
+ schedule_work(&bp->sp_task);
+ }
+ rxr->rx_next_cons = 0xffff;
+}
+
static void bnxt_tpa_start(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
struct rx_tpa_start_cmp *tpa_start,
struct rx_tpa_start_cmp_ext *tpa_start1)
@@ -830,6 +880,11 @@ static void bnxt_tpa_start(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
prod_rx_buf = &rxr->rx_buf_ring[prod];
tpa_info = &rxr->rx_tpa[agg_id];
+ if (unlikely(cons != rxr->rx_next_cons)) {
+ bnxt_sched_reset(bp, rxr);
+ return;
+ }
+
prod_rx_buf->data = tpa_info->data;
mapping = tpa_info->mapping;
@@ -867,6 +922,7 @@ static void bnxt_tpa_start(struct bnxt *bp, struct bnxt_rx_ring_info *rxr,
rxr->rx_prod = NEXT_RX(prod);
cons = NEXT_RX(cons);
+ rxr->rx_next_cons = NEXT_RX(cons);
cons_rx_buf = &rxr->rx_buf_ring[cons];
bnxt_reuse_rx_data(rxr, cons, cons_rx_buf->data);
@@ -980,6 +1036,14 @@ static inline struct sk_buff *bnxt_tpa_end(struct bnxt *bp,
dma_addr_t mapping;
struct sk_buff *skb;
+ if (unlikely(bnapi->in_reset)) {
+ int rc = bnxt_discard_rx(bp, bnapi, raw_cons, tpa_end);
+
+ if (rc < 0)
+ return ERR_PTR(-EBUSY);
+ return NULL;
+ }
+
tpa_info = &rxr->rx_tpa[agg_id];
data = tpa_info->data;
prefetch(data);
@@ -1146,6 +1210,12 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_napi *bnapi, u32 *raw_cons,
cons = rxcmp->rx_cmp_opaque;
rx_buf = &rxr->rx_buf_ring[cons];
data = rx_buf->data;
+ if (unlikely(cons != rxr->rx_next_cons)) {
+ int rc1 = bnxt_discard_rx(bp, bnapi, raw_cons, rxcmp);
+
+ bnxt_sched_reset(bp, rxr);
+ return rc1;
+ }
prefetch(data);
agg_bufs = (le32_to_cpu(rxcmp->rx_cmp_misc_v1) & RX_CMP_AGG_BUFS) >>
@@ -1245,6 +1315,7 @@ static int bnxt_rx_pkt(struct bnxt *bp, struct bnxt_napi *bnapi, u32 *raw_cons,
next_rx:
rxr->rx_prod = NEXT_RX(prod);
+ rxr->rx_next_cons = NEXT_RX(cons);
next_rx_no_prod:
*raw_cons = tmp_raw_cons;
@@ -1252,6 +1323,10 @@ next_rx_no_prod:
return rc;
}
+#define BNXT_GET_EVENT_PORT(data) \
+ ((data) & \
+ HWRM_ASYNC_EVENT_CMPL_PORT_CONN_NOT_ALLOWED_EVENT_DATA1_PORT_ID_MASK)
+
static int bnxt_async_event_process(struct bnxt *bp,
struct hwrm_async_event_cmpl *cmpl)
{
@@ -1259,12 +1334,40 @@ static int bnxt_async_event_process(struct bnxt *bp,
/* TODO CHIMP_FW: Define event id's for link change, error etc */
switch (event_id) {
+ case HWRM_ASYNC_EVENT_CMPL_EVENT_ID_LINK_SPEED_CFG_CHANGE: {
+ u32 data1 = le32_to_cpu(cmpl->event_data1);
+ struct bnxt_link_info *link_info = &bp->link_info;
+
+ if (BNXT_VF(bp))
+ goto async_event_process_exit;
+ if (data1 & 0x20000) {
+ u16 fw_speed = link_info->force_link_speed;
+ u32 speed = bnxt_fw_to_ethtool_speed(fw_speed);
+
+ netdev_warn(bp->dev, "Link speed %d no longer supported\n",
+ speed);
+ }
+ /* fall thru */
+ }
case HWRM_ASYNC_EVENT_CMPL_EVENT_ID_LINK_STATUS_CHANGE:
set_bit(BNXT_LINK_CHNG_SP_EVENT, &bp->sp_event);
break;
case HWRM_ASYNC_EVENT_CMPL_EVENT_ID_PF_DRVR_UNLOAD:
set_bit(BNXT_HWRM_PF_UNLOAD_SP_EVENT, &bp->sp_event);
break;
+ case HWRM_ASYNC_EVENT_CMPL_EVENT_ID_PORT_CONN_NOT_ALLOWED: {
+ u32 data1 = le32_to_cpu(cmpl->event_data1);
+ u16 port_id = BNXT_GET_EVENT_PORT(data1);
+
+ if (BNXT_VF(bp))
+ break;
+
+ if (bp->pf.port_id != port_id)
+ break;
+
+ set_bit(BNXT_HWRM_PORT_MODULE_SP_EVENT, &bp->sp_event);
+ break;
+ }
default:
netdev_err(bp->dev, "unhandled ASYNC event (id 0x%x)\n",
event_id);
@@ -1388,6 +1491,10 @@ static int bnxt_poll_work(struct bnxt *bp, struct bnxt_napi *bnapi, int budget)
if (!TX_CMP_VALID(txcmp, raw_cons))
break;
+ /* The valid test of the entry must be done first before
+ * reading any further.
+ */
+ dma_rmb();
if (TX_CMP_TYPE(txcmp) == CMP_TYPE_TX_L2_CMP) {
tx_pkts++;
/* return full budget so NAPI will complete. */
@@ -2482,6 +2589,7 @@ static void bnxt_clear_ring_indices(struct bnxt *bp)
rxr->rx_prod = 0;
rxr->rx_agg_prod = 0;
rxr->rx_sw_agg_prod = 0;
+ rxr->rx_next_cons = 0;
}
}
}
@@ -2663,7 +2771,7 @@ void bnxt_hwrm_cmd_hdr_init(struct bnxt *bp, void *request, u16 req_type,
static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len,
int timeout, bool silent)
{
- int i, intr_process, rc;
+ int i, intr_process, rc, tmo_count;
struct input *req = msg;
u32 *data = msg;
__le32 *resp_len, *valid;
@@ -2692,11 +2800,12 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len,
timeout = DFLT_HWRM_CMD_TIMEOUT;
i = 0;
+ tmo_count = timeout * 40;
if (intr_process) {
/* Wait until hwrm response cmpl interrupt is processed */
while (bp->hwrm_intr_seq_id != HWRM_SEQ_ID_INVALID &&
- i++ < timeout) {
- usleep_range(600, 800);
+ i++ < tmo_count) {
+ usleep_range(25, 40);
}
if (bp->hwrm_intr_seq_id != HWRM_SEQ_ID_INVALID) {
@@ -2707,30 +2816,30 @@ static int bnxt_hwrm_do_send_msg(struct bnxt *bp, void *msg, u32 msg_len,
} else {
/* Check if response len is updated */
resp_len = bp->hwrm_cmd_resp_addr + HWRM_RESP_LEN_OFFSET;
- for (i = 0; i < timeout; i++) {
+ for (i = 0; i < tmo_count; i++) {
len = (le32_to_cpu(*resp_len) & HWRM_RESP_LEN_MASK) >>
HWRM_RESP_LEN_SFT;
if (len)
break;
- usleep_range(600, 800);
+ usleep_range(25, 40);
}
- if (i >= timeout) {
+ if (i >= tmo_count) {
netdev_err(bp->dev, "Error (timeout: %d) msg {0x%x 0x%x} len:%d\n",
timeout, le16_to_cpu(req->req_type),
- le16_to_cpu(req->seq_id), *resp_len);
+ le16_to_cpu(req->seq_id), len);
return -1;
}
/* Last word of resp contains valid bit */
valid = bp->hwrm_cmd_resp_addr + len - 4;
- for (i = 0; i < timeout; i++) {
+ for (i = 0; i < 5; i++) {
if (le32_to_cpu(*valid) & HWRM_RESP_VALID_MASK)
break;
- usleep_range(600, 800);
+ udelay(1);
}
- if (i >= timeout) {
+ if (i >= 5) {
netdev_err(bp->dev, "Error (timeout: %d) msg {0x%x 0x%x} len:%d v:%d\n",
timeout, le16_to_cpu(req->req_type),
le16_to_cpu(req->seq_id), len, *valid);
@@ -2776,6 +2885,8 @@ static int bnxt_hwrm_func_drv_rgtr(struct bnxt *bp)
{
struct hwrm_func_drv_rgtr_input req = {0};
int i;
+ DECLARE_BITMAP(async_events_bmap, 256);
+ u32 *events = (u32 *)async_events_bmap;
bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_DRV_RGTR, -1, -1);
@@ -2784,11 +2895,14 @@ static int bnxt_hwrm_func_drv_rgtr(struct bnxt *bp)
FUNC_DRV_RGTR_REQ_ENABLES_VER |
FUNC_DRV_RGTR_REQ_ENABLES_ASYNC_EVENT_FWD);
- /* TODO: current async event fwd bits are not defined and the firmware
- * only checks if it is non-zero to enable async event forwarding
- */
- req.async_event_fwd[0] |= cpu_to_le32(1);
- req.os_type = cpu_to_le16(1);
+ memset(async_events_bmap, 0, sizeof(async_events_bmap));
+ for (i = 0; i < ARRAY_SIZE(bnxt_async_events_arr); i++)
+ __set_bit(bnxt_async_events_arr[i], async_events_bmap);
+
+ for (i = 0; i < 8; i++)
+ req.async_event_fwd[i] |= cpu_to_le32(events[i]);
+
+ req.os_type = cpu_to_le16(FUNC_DRV_RGTR_REQ_OS_TYPE_LINUX);
req.ver_maj = DRV_VER_MAJ;
req.ver_min = DRV_VER_MIN;
req.ver_upd = DRV_VER_UPD;
@@ -3751,7 +3865,7 @@ int bnxt_hwrm_func_qcaps(struct bnxt *bp)
pf->fw_fid = le16_to_cpu(resp->fid);
pf->port_id = le16_to_cpu(resp->port_id);
- memcpy(pf->mac_addr, resp->perm_mac_address, ETH_ALEN);
+ memcpy(pf->mac_addr, resp->mac_address, ETH_ALEN);
memcpy(bp->dev->dev_addr, pf->mac_addr, ETH_ALEN);
pf->max_rsscos_ctxs = le16_to_cpu(resp->max_rsscos_ctx);
pf->max_cp_rings = le16_to_cpu(resp->max_cmpl_rings);
@@ -3776,7 +3890,7 @@ int bnxt_hwrm_func_qcaps(struct bnxt *bp)
struct bnxt_vf_info *vf = &bp->vf;
vf->fw_fid = le16_to_cpu(resp->fid);
- memcpy(vf->mac_addr, resp->perm_mac_address, ETH_ALEN);
+ memcpy(vf->mac_addr, resp->mac_address, ETH_ALEN);
if (is_valid_ether_addr(vf->mac_addr))
/* overwrite netdev dev_adr with admin VF MAC */
memcpy(bp->dev->dev_addr, vf->mac_addr, ETH_ALEN);
@@ -3867,6 +3981,8 @@ static int bnxt_hwrm_ver_get(struct bnxt *bp)
memcpy(&bp->ver_resp, resp, sizeof(struct hwrm_ver_get_output));
+ bp->hwrm_spec_code = resp->hwrm_intf_maj << 16 |
+ resp->hwrm_intf_min << 8 | resp->hwrm_intf_upd;
if (resp->hwrm_intf_maj < 1) {
netdev_warn(bp->dev, "HWRM interface %d.%d.%d is older than 1.0.0.\n",
resp->hwrm_intf_maj, resp->hwrm_intf_min,
@@ -4038,9 +4154,11 @@ static int bnxt_alloc_rfs_vnics(struct bnxt *bp)
}
static int bnxt_cfg_rx_mode(struct bnxt *);
+static bool bnxt_mc_list_updated(struct bnxt *, u32 *);
static int bnxt_init_chip(struct bnxt *bp, bool irq_re_init)
{
+ struct bnxt_vnic_info *vnic = &bp->vnic_info[0];
int rc = 0;
if (irq_re_init) {
@@ -4096,13 +4214,22 @@ static int bnxt_init_chip(struct bnxt *bp, bool irq_re_init)
netdev_err(bp->dev, "HWRM vnic filter failure rc: %x\n", rc);
goto err_out;
}
- bp->vnic_info[0].uc_filter_count = 1;
+ vnic->uc_filter_count = 1;
- bp->vnic_info[0].rx_mask = CFA_L2_SET_RX_MASK_REQ_MASK_BCAST;
+ vnic->rx_mask = CFA_L2_SET_RX_MASK_REQ_MASK_BCAST;
if ((bp->dev->flags & IFF_PROMISC) && BNXT_PF(bp))
- bp->vnic_info[0].rx_mask |=
- CFA_L2_SET_RX_MASK_REQ_MASK_PROMISCUOUS;
+ vnic->rx_mask |= CFA_L2_SET_RX_MASK_REQ_MASK_PROMISCUOUS;
+
+ if (bp->dev->flags & IFF_ALLMULTI) {
+ vnic->rx_mask |= CFA_L2_SET_RX_MASK_REQ_MASK_ALL_MCAST;
+ vnic->mc_list_count = 0;
+ } else {
+ u32 mask = 0;
+
+ bnxt_mc_list_updated(bp, &mask);
+ vnic->rx_mask |= mask;
+ }
rc = bnxt_cfg_rx_mode(bp);
if (rc)
@@ -4447,6 +4574,7 @@ static void bnxt_enable_napi(struct bnxt *bp)
int i;
for (i = 0; i < bp->cp_nr_rings; i++) {
+ bp->bnapi[i]->in_reset = false;
bnxt_enable_poll(bp->bnapi[i]);
napi_enable(&bp->bnapi[i]->napi);
}
@@ -4511,12 +4639,49 @@ static void bnxt_report_link(struct bnxt *bp)
speed = bnxt_fw_to_ethtool_speed(bp->link_info.link_speed);
netdev_info(bp->dev, "NIC Link is Up, %d Mbps %s duplex, Flow control: %s\n",
speed, duplex, flow_ctrl);
+ if (bp->flags & BNXT_FLAG_EEE_CAP)
+ netdev_info(bp->dev, "EEE is %s\n",
+ bp->eee.eee_active ? "active" :
+ "not active");
} else {
netif_carrier_off(bp->dev);
netdev_err(bp->dev, "NIC Link is Down\n");
}
}
+static int bnxt_hwrm_phy_qcaps(struct bnxt *bp)
+{
+ int rc = 0;
+ struct hwrm_port_phy_qcaps_input req = {0};
+ struct hwrm_port_phy_qcaps_output *resp = bp->hwrm_cmd_resp_addr;
+
+ if (bp->hwrm_spec_code < 0x10201)
+ return 0;
+
+ bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_PORT_PHY_QCAPS, -1, -1);
+
+ mutex_lock(&bp->hwrm_cmd_lock);
+ rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+ if (rc)
+ goto hwrm_phy_qcaps_exit;
+
+ if (resp->eee_supported & PORT_PHY_QCAPS_RESP_EEE_SUPPORTED) {
+ struct ethtool_eee *eee = &bp->eee;
+ u16 fw_speeds = le16_to_cpu(resp->supported_speeds_eee_mode);
+
+ bp->flags |= BNXT_FLAG_EEE_CAP;
+ eee->supported = _bnxt_fw_to_ethtool_adv_spds(fw_speeds, 0);
+ bp->lpi_tmr_lo = le32_to_cpu(resp->tx_lpi_timer_low) &
+ PORT_PHY_QCAPS_RESP_TX_LPI_TIMER_LOW_MASK;
+ bp->lpi_tmr_hi = le32_to_cpu(resp->valid_tx_lpi_timer_high) &
+ PORT_PHY_QCAPS_RESP_TX_LPI_TIMER_HIGH_MASK;
+ }
+
+hwrm_phy_qcaps_exit:
+ mutex_unlock(&bp->hwrm_cmd_lock);
+ return rc;
+}
+
static int bnxt_update_link(struct bnxt *bp, bool chng_link_state)
{
int rc = 0;
@@ -4548,7 +4713,6 @@ static int bnxt_update_link(struct bnxt *bp, bool chng_link_state)
else
link_info->link_speed = 0;
link_info->force_link_speed = le16_to_cpu(resp->force_link_speed);
- link_info->auto_link_speed = le16_to_cpu(resp->auto_link_speed);
link_info->support_speeds = le16_to_cpu(resp->support_speeds);
link_info->auto_link_speeds = le16_to_cpu(resp->auto_link_speed_mask);
link_info->lp_auto_link_speeds =
@@ -4558,9 +4722,47 @@ static int bnxt_update_link(struct bnxt *bp, bool chng_link_state)
link_info->phy_ver[1] = resp->phy_min;
link_info->phy_ver[2] = resp->phy_bld;
link_info->media_type = resp->media_type;
- link_info->transceiver = resp->transceiver_type;
- link_info->phy_addr = resp->phy_addr;
+ link_info->phy_type = resp->phy_type;
+ link_info->transceiver = resp->xcvr_pkg_type;
+ link_info->phy_addr = resp->eee_config_phy_addr &
+ PORT_PHY_QCFG_RESP_PHY_ADDR_MASK;
+ link_info->module_status = resp->module_status;
+
+ if (bp->flags & BNXT_FLAG_EEE_CAP) {
+ struct ethtool_eee *eee = &bp->eee;
+ u16 fw_speeds;
+
+ eee->eee_active = 0;
+ if (resp->eee_config_phy_addr &
+ PORT_PHY_QCFG_RESP_EEE_CONFIG_EEE_ACTIVE) {
+ eee->eee_active = 1;
+ fw_speeds = le16_to_cpu(
+ resp->link_partner_adv_eee_link_speed_mask);
+ eee->lp_advertised =
+ _bnxt_fw_to_ethtool_adv_spds(fw_speeds, 0);
+ }
+ /* Pull initial EEE config */
+ if (!chng_link_state) {
+ if (resp->eee_config_phy_addr &
+ PORT_PHY_QCFG_RESP_EEE_CONFIG_EEE_ENABLED)
+ eee->eee_enabled = 1;
+
+ fw_speeds = le16_to_cpu(resp->adv_eee_link_speed_mask);
+ eee->advertised =
+ _bnxt_fw_to_ethtool_adv_spds(fw_speeds, 0);
+
+ if (resp->eee_config_phy_addr &
+ PORT_PHY_QCFG_RESP_EEE_CONFIG_EEE_TX_LPI) {
+ __le32 tmr;
+
+ eee->tx_lpi_enabled = 1;
+ tmr = resp->xcvr_identifier_type_tx_lpi_timer;
+ eee->tx_lpi_timer = le32_to_cpu(tmr) &
+ PORT_PHY_QCFG_RESP_TX_LPI_TIMER_MASK;
+ }
+ }
+ }
/* TODO: need to add more logic to report VF link */
if (chng_link_state) {
if (link_info->phy_link_status == BNXT_LINK_LINK)
@@ -4577,10 +4779,40 @@ static int bnxt_update_link(struct bnxt *bp, bool chng_link_state)
return 0;
}
+static void bnxt_get_port_module_status(struct bnxt *bp)
+{
+ struct bnxt_link_info *link_info = &bp->link_info;
+ struct hwrm_port_phy_qcfg_output *resp = &link_info->phy_qcfg_resp;
+ u8 module_status;
+
+ if (bnxt_update_link(bp, true))
+ return;
+
+ module_status = link_info->module_status;
+ switch (module_status) {
+ case PORT_PHY_QCFG_RESP_MODULE_STATUS_DISABLETX:
+ case PORT_PHY_QCFG_RESP_MODULE_STATUS_PWRDOWN:
+ case PORT_PHY_QCFG_RESP_MODULE_STATUS_WARNINGMSG:
+ netdev_warn(bp->dev, "Unqualified SFP+ module detected on port %d\n",
+ bp->pf.port_id);
+ if (bp->hwrm_spec_code >= 0x10201) {
+ netdev_warn(bp->dev, "Module part number %s\n",
+ resp->phy_vendor_partnumber);
+ }
+ if (module_status == PORT_PHY_QCFG_RESP_MODULE_STATUS_DISABLETX)
+ netdev_warn(bp->dev, "TX is disabled\n");
+ if (module_status == PORT_PHY_QCFG_RESP_MODULE_STATUS_PWRDOWN)
+ netdev_warn(bp->dev, "SFP+ module is shutdown\n");
+ }
+}
+
static void
bnxt_hwrm_set_pause_common(struct bnxt *bp, struct hwrm_port_phy_cfg_input *req)
{
if (bp->link_info.autoneg & BNXT_AUTONEG_FLOW_CTRL) {
+ if (bp->hwrm_spec_code >= 0x10201)
+ req->auto_pause =
+ PORT_PHY_CFG_REQ_AUTO_PAUSE_AUTONEG_PAUSE;
if (bp->link_info.req_flow_ctrl & BNXT_LINK_PAUSE_RX)
req->auto_pause |= PORT_PHY_CFG_REQ_AUTO_PAUSE_RX;
if (bp->link_info.req_flow_ctrl & BNXT_LINK_PAUSE_TX)
@@ -4594,6 +4826,11 @@ bnxt_hwrm_set_pause_common(struct bnxt *bp, struct hwrm_port_phy_cfg_input *req)
req->force_pause |= PORT_PHY_CFG_REQ_FORCE_PAUSE_TX;
req->enables |=
cpu_to_le32(PORT_PHY_CFG_REQ_ENABLES_FORCE_PAUSE);
+ if (bp->hwrm_spec_code >= 0x10201) {
+ req->auto_pause = req->force_pause;
+ req->enables |= cpu_to_le32(
+ PORT_PHY_CFG_REQ_ENABLES_AUTO_PAUSE);
+ }
}
}
@@ -4606,7 +4843,7 @@ static void bnxt_hwrm_set_link_common(struct bnxt *bp,
if (autoneg & BNXT_AUTONEG_SPEED) {
req->auto_mode |=
- PORT_PHY_CFG_REQ_AUTO_MODE_MASK;
+ PORT_PHY_CFG_REQ_AUTO_MODE_SPEED_MASK;
req->enables |= cpu_to_le32(
PORT_PHY_CFG_REQ_ENABLES_AUTO_LINK_SPEED_MASK);
@@ -4620,9 +4857,6 @@ static void bnxt_hwrm_set_link_common(struct bnxt *bp,
req->flags |= cpu_to_le32(PORT_PHY_CFG_REQ_FLAGS_FORCE);
}
- /* currently don't support half duplex */
- req->auto_duplex = PORT_PHY_CFG_REQ_AUTO_DUPLEX_FULL;
- req->enables |= cpu_to_le32(PORT_PHY_CFG_REQ_ENABLES_AUTO_DUPLEX);
/* tell chimp that the setting takes effect immediately */
req->flags |= cpu_to_le32(PORT_PHY_CFG_REQ_FLAGS_RESET_PHY);
}
@@ -4657,7 +4891,30 @@ int bnxt_hwrm_set_pause(struct bnxt *bp)
return rc;
}
-int bnxt_hwrm_set_link_setting(struct bnxt *bp, bool set_pause)
+static void bnxt_hwrm_set_eee(struct bnxt *bp,
+ struct hwrm_port_phy_cfg_input *req)
+{
+ struct ethtool_eee *eee = &bp->eee;
+
+ if (eee->eee_enabled) {
+ u16 eee_speeds;
+ u32 flags = PORT_PHY_CFG_REQ_FLAGS_EEE_ENABLE;
+
+ if (eee->tx_lpi_enabled)
+ flags |= PORT_PHY_CFG_REQ_FLAGS_EEE_TX_LPI_ENABLE;
+ else
+ flags |= PORT_PHY_CFG_REQ_FLAGS_EEE_TX_LPI_DISABLE;
+
+ req->flags |= cpu_to_le32(flags);
+ eee_speeds = bnxt_get_fw_auto_link_speeds(eee->advertised);
+ req->eee_link_speed_mask = cpu_to_le16(eee_speeds);
+ req->tx_lpi_timer = cpu_to_le32(eee->tx_lpi_timer);
+ } else {
+ req->flags |= cpu_to_le32(PORT_PHY_CFG_REQ_FLAGS_EEE_DISABLE);
+ }
+}
+
+int bnxt_hwrm_set_link_setting(struct bnxt *bp, bool set_pause, bool set_eee)
{
struct hwrm_port_phy_cfg_input req = {0};
@@ -4666,14 +4923,57 @@ int bnxt_hwrm_set_link_setting(struct bnxt *bp, bool set_pause)
bnxt_hwrm_set_pause_common(bp, &req);
bnxt_hwrm_set_link_common(bp, &req);
+
+ if (set_eee)
+ bnxt_hwrm_set_eee(bp, &req);
+ return hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+}
+
+static int bnxt_hwrm_shutdown_link(struct bnxt *bp)
+{
+ struct hwrm_port_phy_cfg_input req = {0};
+
+ if (BNXT_VF(bp))
+ return 0;
+
+ if (pci_num_vf(bp->pdev))
+ return 0;
+
+ bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_PORT_PHY_CFG, -1, -1);
+ req.flags = cpu_to_le32(PORT_PHY_CFG_REQ_FLAGS_FORCE_LINK_DOWN);
return hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
}
+static bool bnxt_eee_config_ok(struct bnxt *bp)
+{
+ struct ethtool_eee *eee = &bp->eee;
+ struct bnxt_link_info *link_info = &bp->link_info;
+
+ if (!(bp->flags & BNXT_FLAG_EEE_CAP))
+ return true;
+
+ if (eee->eee_enabled) {
+ u32 advertising =
+ _bnxt_fw_to_ethtool_adv_spds(link_info->advertising, 0);
+
+ if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) {
+ eee->eee_enabled = 0;
+ return false;
+ }
+ if (eee->advertised & ~advertising) {
+ eee->advertised = advertising & eee->supported;
+ return false;
+ }
+ }
+ return true;
+}
+
static int bnxt_update_phy_setting(struct bnxt *bp)
{
int rc;
bool update_link = false;
bool update_pause = false;
+ bool update_eee = false;
struct bnxt_link_info *link_info = &bp->link_info;
rc = bnxt_update_link(bp, true);
@@ -4683,7 +4983,8 @@ static int bnxt_update_phy_setting(struct bnxt *bp)
return rc;
}
if ((link_info->autoneg & BNXT_AUTONEG_FLOW_CTRL) &&
- link_info->auto_pause_setting != link_info->req_flow_ctrl)
+ (link_info->auto_pause_setting & BNXT_LINK_PAUSE_BOTH) !=
+ link_info->req_flow_ctrl)
update_pause = true;
if (!(link_info->autoneg & BNXT_AUTONEG_FLOW_CTRL) &&
link_info->force_pause_setting != link_info->req_flow_ctrl)
@@ -4702,8 +5003,11 @@ static int bnxt_update_phy_setting(struct bnxt *bp)
update_link = true;
}
+ if (!bnxt_eee_config_ok(bp))
+ update_eee = true;
+
if (update_link)
- rc = bnxt_hwrm_set_link_setting(bp, update_pause);
+ rc = bnxt_hwrm_set_link_setting(bp, update_pause, update_eee);
else if (update_pause)
rc = bnxt_hwrm_set_pause(bp);
if (rc) {
@@ -4794,7 +5098,8 @@ static int __bnxt_open_nic(struct bnxt *bp, bool irq_re_init, bool link_re_init)
/* Enable TX queues */
bnxt_tx_enable(bp);
mod_timer(&bp->timer, jiffies + bp->current_interval);
- bnxt_update_link(bp, true);
+ /* Poll link status and check for SFP+ module status */
+ bnxt_get_port_module_status(bp);
return 0;
@@ -4894,6 +5199,7 @@ static int bnxt_close(struct net_device *dev)
struct bnxt *bp = netdev_priv(dev);
bnxt_close_nic(bp, true, true);
+ bnxt_hwrm_shutdown_link(bp);
return 0;
}
@@ -5375,6 +5681,9 @@ static void bnxt_sp_task(struct work_struct *work)
rtnl_unlock();
}
+ if (test_and_clear_bit(BNXT_HWRM_PORT_MODULE_SP_EVENT, &bp->sp_event))
+ bnxt_get_port_module_status(bp);
+
if (test_and_clear_bit(BNXT_PERIODIC_STATS_SP_EVENT, &bp->sp_event))
bnxt_hwrm_port_qstats(bp);
@@ -5505,10 +5814,9 @@ static int bnxt_change_mac_addr(struct net_device *dev, void *p)
if (!is_valid_ether_addr(addr->sa_data))
return -EADDRNOTAVAIL;
-#ifdef CONFIG_BNXT_SRIOV
- if (BNXT_VF(bp) && is_valid_ether_addr(bp->vf.mac_addr))
- return -EADDRNOTAVAIL;
-#endif
+ rc = bnxt_approve_mac(bp, addr->sa_data);
+ if (rc)
+ return rc;
if (ether_addr_equal(addr->sa_data, dev->dev_addr))
return 0;
@@ -5843,6 +6151,13 @@ static int bnxt_probe_phy(struct bnxt *bp)
int rc = 0;
struct bnxt_link_info *link_info = &bp->link_info;
+ rc = bnxt_hwrm_phy_qcaps(bp);
+ if (rc) {
+ netdev_err(bp->dev, "Probe phy can't get phy capabilities (rc: %x)\n",
+ rc);
+ return rc;
+ }
+
rc = bnxt_update_link(bp, false);
if (rc) {
netdev_err(bp->dev, "Probe phy can't update link (rc: %x)\n",
@@ -5852,15 +6167,24 @@ static int bnxt_probe_phy(struct bnxt *bp)
/* initialize the ethtool settings copy with NVM settings */
if (BNXT_AUTO_MODE(link_info->auto_mode)) {
- link_info->autoneg = BNXT_AUTONEG_SPEED |
- BNXT_AUTONEG_FLOW_CTRL;
+ link_info->autoneg = BNXT_AUTONEG_SPEED;
+ if (bp->hwrm_spec_code >= 0x10201) {
+ if (link_info->auto_pause_setting &
+ PORT_PHY_CFG_REQ_AUTO_PAUSE_AUTONEG_PAUSE)
+ link_info->autoneg |= BNXT_AUTONEG_FLOW_CTRL;
+ } else {
+ link_info->autoneg |= BNXT_AUTONEG_FLOW_CTRL;
+ }
link_info->advertising = link_info->auto_link_speeds;
- link_info->req_flow_ctrl = link_info->auto_pause_setting;
} else {
link_info->req_link_speed = link_info->force_link_speed;
link_info->req_duplex = link_info->duplex_setting;
- link_info->req_flow_ctrl = link_info->force_pause_setting;
}
+ if (link_info->autoneg & BNXT_AUTONEG_FLOW_CTRL)
+ link_info->req_flow_ctrl =
+ link_info->auto_pause_setting & BNXT_LINK_PAUSE_BOTH;
+ else
+ link_info->req_flow_ctrl = link_info->force_pause_setting;
return rc;
}
@@ -5935,6 +6259,22 @@ static int bnxt_set_dflt_rings(struct bnxt *bp)
return rc;
}
+static void bnxt_parse_log_pcie_link(struct bnxt *bp)
+{
+ enum pcie_link_width width = PCIE_LNK_WIDTH_UNKNOWN;
+ enum pci_bus_speed speed = PCI_SPEED_UNKNOWN;
+
+ if (pcie_get_minimum_link(bp->pdev, &speed, &width) ||
+ speed == PCI_SPEED_UNKNOWN || width == PCIE_LNK_WIDTH_UNKNOWN)
+ netdev_info(bp->dev, "Failed to determine PCIe Link Info\n");
+ else
+ netdev_info(bp->dev, "PCIe: Speed %s Width x%d\n",
+ speed == PCIE_SPEED_2_5GT ? "2.5GT/s" :
+ speed == PCIE_SPEED_5_0GT ? "5.0GT/s" :
+ speed == PCIE_SPEED_8_0GT ? "8.0GT/s" :
+ "Unknown", width);
+}
+
static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
{
static int version_printed;
@@ -5971,15 +6311,19 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
dev->hw_features = NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | NETIF_F_SG |
NETIF_F_TSO | NETIF_F_TSO6 |
NETIF_F_GSO_UDP_TUNNEL | NETIF_F_GSO_GRE |
- NETIF_F_GSO_IPIP | NETIF_F_GSO_SIT |
- NETIF_F_RXHASH |
+ NETIF_F_GSO_IPXIP4 |
+ NETIF_F_GSO_UDP_TUNNEL_CSUM | NETIF_F_GSO_GRE_CSUM |
+ NETIF_F_GSO_PARTIAL | NETIF_F_RXHASH |
NETIF_F_RXCSUM | NETIF_F_LRO | NETIF_F_GRO;
dev->hw_enc_features =
NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM | NETIF_F_SG |
NETIF_F_TSO | NETIF_F_TSO6 |
NETIF_F_GSO_UDP_TUNNEL | NETIF_F_GSO_GRE |
- NETIF_F_GSO_IPIP | NETIF_F_GSO_SIT;
+ NETIF_F_GSO_UDP_TUNNEL_CSUM | NETIF_F_GSO_GRE_CSUM |
+ NETIF_F_GSO_IPXIP4 | NETIF_F_GSO_PARTIAL;
+ dev->gso_partial_features = NETIF_F_GSO_UDP_TUNNEL_CSUM |
+ NETIF_F_GSO_GRE_CSUM;
dev->vlan_features = dev->hw_features | NETIF_F_HIGHDMA;
dev->hw_features |= NETIF_F_HW_VLAN_CTAG_RX | NETIF_F_HW_VLAN_CTAG_TX |
NETIF_F_HW_VLAN_STAG_RX | NETIF_F_HW_VLAN_STAG_TX;
@@ -6050,6 +6394,8 @@ static int bnxt_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
board_info[ent->driver_data].name,
(long)pci_resource_start(pdev, 0), dev->dev_addr);
+ bnxt_parse_log_pcie_link(bp);
+
return 0;
init_err:
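
Among the bnxt.c changes above, bnxt_hwrm_do_send_msg() now polls the firmware response in 25-40 microsecond steps, so the loop bound is scaled to tmo_count = timeout * 40 to keep the overall timeout, expressed in milliseconds, roughly unchanged. A small generic sketch of that conversion (example_poll_for_completion and its callback are hypothetical):

#include <linux/delay.h>
#include <linux/errno.h>
#include <linux/types.h>

static int example_poll_for_completion(bool (*done)(void *ctx), void *ctx,
				       int timeout_ms)
{
	int i, tmo_count = timeout_ms * 40;	/* ~25 us per iteration */

	for (i = 0; i < tmo_count; i++) {
		if (done(ctx))
			return 0;
		usleep_range(25, 40);		/* short, schedule-friendly wait */
	}
	return -ETIMEDOUT;			/* caller decides how to report */
}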
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt.h b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
index 8b823ff558ff..2824d65b2e35 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt.h
@@ -1,6 +1,6 @@
/* Broadcom NetXtreme-C/E network driver.
*
- * Copyright (c) 2014-2015 Broadcom Corporation
+ * Copyright (c) 2014-2016 Broadcom Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -11,7 +11,7 @@
#define BNXT_H
#define DRV_MODULE_NAME "bnxt_en"
-#define DRV_MODULE_VERSION "1.0.0"
+#define DRV_MODULE_VERSION "1.2.0"
#define DRV_VER_MAJ 1
#define DRV_VER_MIN 0
@@ -425,10 +425,17 @@ struct rx_tpa_end_cmp_ext {
#define MAX_TPA 64
+#if (BNXT_PAGE_SHIFT == 16)
+#define MAX_RX_PAGES 1
+#define MAX_RX_AGG_PAGES 4
+#define MAX_TX_PAGES 1
+#define MAX_CP_PAGES 8
+#else
#define MAX_RX_PAGES 8
#define MAX_RX_AGG_PAGES 32
#define MAX_TX_PAGES 8
#define MAX_CP_PAGES 64
+#endif
#define RX_DESC_CNT (BNXT_PAGE_SIZE / sizeof(struct rx_bd))
#define TX_DESC_CNT (BNXT_PAGE_SIZE / sizeof(struct tx_bd))
@@ -584,6 +591,7 @@ struct bnxt_rx_ring_info {
u16 rx_prod;
u16 rx_agg_prod;
u16 rx_sw_agg_prod;
+ u16 rx_next_cons;
void __iomem *rx_doorbell;
void __iomem *rx_agg_doorbell;
@@ -636,6 +644,7 @@ struct bnxt_napi {
#ifdef CONFIG_NET_RX_BUSY_POLL
atomic_t poll_state;
#endif
+ bool in_reset;
};
#ifdef CONFIG_NET_RX_BUSY_POLL
@@ -772,6 +781,7 @@ struct bnxt_ntuple_filter {
};
struct bnxt_link_info {
+ u8 phy_type;
u8 media_type;
u8 transceiver;
u8 phy_addr;
@@ -801,7 +811,7 @@ struct bnxt_link_info {
#define BNXT_LINK_AUTO_ALLSPDS PORT_PHY_QCFG_RESP_AUTO_MODE_ALL_SPEEDS
#define BNXT_LINK_AUTO_ONESPD PORT_PHY_QCFG_RESP_AUTO_MODE_ONE_SPEED
#define BNXT_LINK_AUTO_ONEORBELOW PORT_PHY_QCFG_RESP_AUTO_MODE_ONE_OR_BELOW
-#define BNXT_LINK_AUTO_MSK PORT_PHY_QCFG_RESP_AUTO_MODE_MASK
+#define BNXT_LINK_AUTO_MSK PORT_PHY_QCFG_RESP_AUTO_MODE_SPEED_MASK
#define PHY_VER_LEN 3
u8 phy_ver[PHY_VER_LEN];
u16 link_speed;
@@ -826,9 +836,9 @@ struct bnxt_link_info {
#define BNXT_LINK_SPEED_MSK_40GB PORT_PHY_QCFG_RESP_SUPPORT_SPEEDS_40GB
#define BNXT_LINK_SPEED_MSK_50GB PORT_PHY_QCFG_RESP_SUPPORT_SPEEDS_50GB
u16 lp_auto_link_speeds;
- u16 auto_link_speed;
u16 force_link_speed;
u32 preemphasis;
+ u8 module_status;
/* copy of requested setting from ethtool cmd */
u8 autoneg;
@@ -839,6 +849,7 @@ struct bnxt_link_info {
u16 req_link_speed;
u32 advertising;
bool force_link_chng;
+
/* a copy of phy_qcfg output used to report link
* info to VF
*/
@@ -888,6 +899,7 @@ struct bnxt {
#define BNXT_FLAG_RFS 0x100
#define BNXT_FLAG_SHARED_RINGS 0x200
#define BNXT_FLAG_PORT_STATS 0x400
+ #define BNXT_FLAG_EEE_CAP 0x1000
#define BNXT_FLAG_ALL_CONFIG_FEATS (BNXT_FLAG_TPA | \
BNXT_FLAG_RFS | \
@@ -953,6 +965,7 @@ struct bnxt {
u32 msg_enable;
+ u32 hwrm_spec_code;
u16 hwrm_cmd_seq;
u32 hwrm_intr_seq_id;
void *hwrm_cmd_resp_addr;
@@ -1004,6 +1017,7 @@ struct bnxt {
#define BNXT_RST_RING_SP_EVENT 7
#define BNXT_HWRM_PF_UNLOAD_SP_EVENT 8
#define BNXT_PERIODIC_STATS_SP_EVENT 9
+#define BNXT_HWRM_PORT_MODULE_SP_EVENT 10
struct bnxt_pf_info pf;
#ifdef CONFIG_BNXT_SRIOV
@@ -1024,6 +1038,9 @@ struct bnxt {
int ntp_fltr_count;
struct bnxt_link_info link_info;
+ struct ethtool_eee eee;
+ u32 lpi_tmr_lo;
+ u32 lpi_tmr_hi;
};
#ifdef CONFIG_NET_RX_BUSY_POLL
@@ -1113,6 +1130,16 @@ static inline void bnxt_disable_poll(struct bnxt_napi *bnapi)
#endif
+#define I2C_DEV_ADDR_A0 0xa0
+#define I2C_DEV_ADDR_A2 0xa2
+#define SFP_EEPROM_SFF_8472_COMP_ADDR 0x5e
+#define SFP_EEPROM_SFF_8472_COMP_SIZE 1
+#define SFF_MODULE_ID_SFP 0x3
+#define SFF_MODULE_ID_QSFP 0xc
+#define SFF_MODULE_ID_QSFP_PLUS 0xd
+#define SFF_MODULE_ID_QSFP28 0x11
+#define BNXT_MAX_PHY_I2C_RESP_SIZE 64
+
void bnxt_set_ring_params(struct bnxt *);
void bnxt_hwrm_cmd_hdr_init(struct bnxt *, void *, u16, u16, u16);
int _hwrm_send_message(struct bnxt *, void *, u32, int);
@@ -1121,7 +1148,7 @@ int hwrm_send_message_silent(struct bnxt *, void *, u32, int);
int bnxt_hwrm_set_coal(struct bnxt *);
int bnxt_hwrm_func_qcaps(struct bnxt *);
int bnxt_hwrm_set_pause(struct bnxt *);
-int bnxt_hwrm_set_link_setting(struct bnxt *, bool);
+int bnxt_hwrm_set_link_setting(struct bnxt *, bool, bool);
int bnxt_open_nic(struct bnxt *, bool, bool);
int bnxt_close_nic(struct bnxt *, bool, bool);
int bnxt_get_max_rings(struct bnxt *, int *, int *, bool);
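
The driver registration change above replaces the single hard-coded async_event_fwd word with a bitmap of the event ids the driver actually wants forwarded, copied into the request's little-endian 32-bit words. A sketch of that pattern (example_req and example_fill_event_fwd are stand-ins for the HWRM request and driver code):

#include <linux/bitmap.h>
#include <asm/byteorder.h>

struct example_req {
	__le32 async_event_fwd[8];
};

static void example_fill_event_fwd(struct example_req *req,
				   const u16 *events, int n)
{
	DECLARE_BITMAP(bmap, 256);
	u32 *words = (u32 *)bmap;
	int i;

	bitmap_zero(bmap, 256);
	for (i = 0; i < n; i++)
		__set_bit(events[i], bmap);	/* one bit per async event id */

	for (i = 0; i < 8; i++)
		req->async_event_fwd[i] = cpu_to_le32(words[i]);
}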
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
index 2e472f6dbf2d..a38cb047b540 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.c
@@ -1,6 +1,6 @@
/* Broadcom NetXtreme-C/E network driver.
*
- * Copyright (c) 2014-2015 Broadcom Corporation
+ * Copyright (c) 2014-2016 Broadcom Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -327,7 +327,11 @@ static void bnxt_get_channels(struct net_device *dev,
bnxt_get_max_rings(bp, &max_rx_rings, &max_tx_rings, true);
channel->max_combined = max_rx_rings;
- bnxt_get_max_rings(bp, &max_rx_rings, &max_tx_rings, false);
+ if (bnxt_get_max_rings(bp, &max_rx_rings, &max_tx_rings, false)) {
+ max_rx_rings = 0;
+ max_tx_rings = 0;
+ }
+
tcs = netdev_get_num_tc(dev);
if (tcs > 1)
max_tx_rings /= tcs;
@@ -597,7 +601,7 @@ static void bnxt_get_drvinfo(struct net_device *dev,
kfree(pkglog);
}
-static u32 _bnxt_fw_to_ethtool_adv_spds(u16 fw_speeds, u8 fw_pause)
+u32 _bnxt_fw_to_ethtool_adv_spds(u16 fw_speeds, u8 fw_pause)
{
u32 speed_mask = 0;
@@ -698,10 +702,23 @@ static int bnxt_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
if (link_info->phy_link_status == BNXT_LINK_LINK)
cmd->lp_advertising =
bnxt_fw_to_ethtool_lp_adv(link_info);
+ ethtool_speed = bnxt_fw_to_ethtool_speed(link_info->link_speed);
+ if (!netif_carrier_ok(dev))
+ cmd->duplex = DUPLEX_UNKNOWN;
+ else if (link_info->duplex & BNXT_LINK_DUPLEX_FULL)
+ cmd->duplex = DUPLEX_FULL;
+ else
+ cmd->duplex = DUPLEX_HALF;
} else {
cmd->autoneg = AUTONEG_DISABLE;
cmd->advertising = 0;
+ ethtool_speed =
+ bnxt_fw_to_ethtool_speed(link_info->req_link_speed);
+ cmd->duplex = DUPLEX_HALF;
+ if (link_info->req_duplex == BNXT_LINK_DUPLEX_FULL)
+ cmd->duplex = DUPLEX_FULL;
}
+ ethtool_cmd_speed_set(cmd, ethtool_speed);
cmd->port = PORT_NONE;
if (link_info->media_type == PORT_PHY_QCFG_RESP_MEDIA_TYPE_TP) {
@@ -719,16 +736,8 @@ static int bnxt_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
cmd->port = PORT_FIBRE;
}
- if (link_info->phy_link_status == BNXT_LINK_LINK) {
- if (link_info->duplex & BNXT_LINK_DUPLEX_FULL)
- cmd->duplex = DUPLEX_FULL;
- } else {
- cmd->duplex = DUPLEX_UNKNOWN;
- }
- ethtool_speed = bnxt_fw_to_ethtool_speed(link_info->link_speed);
- ethtool_cmd_speed_set(cmd, ethtool_speed);
if (link_info->transceiver ==
- PORT_PHY_QCFG_RESP_TRANSCEIVER_TYPE_XCVR_INTERNAL)
+ PORT_PHY_QCFG_RESP_XCVR_PKG_TYPE_XCVR_INTERNAL)
cmd->transceiver = XCVR_INTERNAL;
else
cmd->transceiver = XCVR_EXTERNAL;
@@ -739,31 +748,52 @@ static int bnxt_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
static u32 bnxt_get_fw_speed(struct net_device *dev, u16 ethtool_speed)
{
+ struct bnxt *bp = netdev_priv(dev);
+ struct bnxt_link_info *link_info = &bp->link_info;
+ u16 support_spds = link_info->support_speeds;
+ u32 fw_speed = 0;
+
switch (ethtool_speed) {
case SPEED_100:
- return PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_100MB;
+ if (support_spds & BNXT_LINK_SPEED_MSK_100MB)
+ fw_speed = PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_100MB;
+ break;
case SPEED_1000:
- return PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_1GB;
+ if (support_spds & BNXT_LINK_SPEED_MSK_1GB)
+ fw_speed = PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_1GB;
+ break;
case SPEED_2500:
- return PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_2_5GB;
+ if (support_spds & BNXT_LINK_SPEED_MSK_2_5GB)
+ fw_speed = PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_2_5GB;
+ break;
case SPEED_10000:
- return PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_10GB;
+ if (support_spds & BNXT_LINK_SPEED_MSK_10GB)
+ fw_speed = PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_10GB;
+ break;
case SPEED_20000:
- return PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_20GB;
+ if (support_spds & BNXT_LINK_SPEED_MSK_20GB)
+ fw_speed = PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_20GB;
+ break;
case SPEED_25000:
- return PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_25GB;
+ if (support_spds & BNXT_LINK_SPEED_MSK_25GB)
+ fw_speed = PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_25GB;
+ break;
case SPEED_40000:
- return PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_40GB;
+ if (support_spds & BNXT_LINK_SPEED_MSK_40GB)
+ fw_speed = PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_40GB;
+ break;
case SPEED_50000:
- return PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_50GB;
+ if (support_spds & BNXT_LINK_SPEED_MSK_50GB)
+ fw_speed = PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_50GB;
+ break;
default:
netdev_err(dev, "unsupported speed!\n");
break;
}
- return 0;
+ return fw_speed;
}
-static u16 bnxt_get_fw_auto_link_speeds(u32 advertising)
+u16 bnxt_get_fw_auto_link_speeds(u32 advertising)
{
u16 fw_speed_mask = 0;
@@ -823,6 +853,16 @@ static int bnxt_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
*/
set_pause = true;
} else {
+ u16 fw_speed;
+ u8 phy_type = link_info->phy_type;
+
+ if (phy_type == PORT_PHY_QCFG_RESP_PHY_TYPE_BASET ||
+ phy_type == PORT_PHY_QCFG_RESP_PHY_TYPE_BASETE ||
+ link_info->media_type == PORT_PHY_QCFG_RESP_MEDIA_TYPE_TP) {
+ netdev_err(dev, "10GBase-T devices must autoneg\n");
+ rc = -EINVAL;
+ goto set_setting_exit;
+ }
/* TODO: currently don't support half duplex */
if (cmd->duplex == DUPLEX_HALF) {
netdev_err(dev, "HALF DUPLEX is not supported!\n");
@@ -833,14 +873,19 @@ static int bnxt_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
if (cmd->duplex == DUPLEX_UNKNOWN)
cmd->duplex = DUPLEX_FULL;
speed = ethtool_cmd_speed(cmd);
- link_info->req_link_speed = bnxt_get_fw_speed(dev, speed);
+ fw_speed = bnxt_get_fw_speed(dev, speed);
+ if (!fw_speed) {
+ rc = -EINVAL;
+ goto set_setting_exit;
+ }
+ link_info->req_link_speed = fw_speed;
link_info->req_duplex = BNXT_LINK_DUPLEX_FULL;
link_info->autoneg = 0;
link_info->advertising = 0;
}
if (netif_running(dev))
- rc = bnxt_hwrm_set_link_setting(bp, set_pause);
+ rc = bnxt_hwrm_set_link_setting(bp, set_pause, false);
set_setting_exit:
return rc;
@@ -874,7 +919,9 @@ static int bnxt_set_pauseparam(struct net_device *dev,
return -EINVAL;
link_info->autoneg |= BNXT_AUTONEG_FLOW_CTRL;
- link_info->req_flow_ctrl |= BNXT_LINK_PAUSE_BOTH;
+ if (bp->hwrm_spec_code >= 0x10201)
+ link_info->req_flow_ctrl =
+ PORT_PHY_CFG_REQ_AUTO_PAUSE_AUTONEG_PAUSE;
} else {
/* when transition from auto pause to force pause,
* force a link change
@@ -882,17 +929,13 @@ static int bnxt_set_pauseparam(struct net_device *dev,
if (link_info->autoneg & BNXT_AUTONEG_FLOW_CTRL)
link_info->force_link_chng = true;
link_info->autoneg &= ~BNXT_AUTONEG_FLOW_CTRL;
- link_info->req_flow_ctrl &= ~BNXT_LINK_PAUSE_BOTH;
+ link_info->req_flow_ctrl = 0;
}
if (epause->rx_pause)
link_info->req_flow_ctrl |= BNXT_LINK_PAUSE_RX;
- else
- link_info->req_flow_ctrl &= ~BNXT_LINK_PAUSE_RX;
if (epause->tx_pause)
link_info->req_flow_ctrl |= BNXT_LINK_PAUSE_TX;
- else
- link_info->req_flow_ctrl &= ~BNXT_LINK_PAUSE_TX;
if (netif_running(dev))
rc = bnxt_hwrm_set_pause(bp);
@@ -1381,6 +1424,199 @@ static int bnxt_set_eeprom(struct net_device *dev,
eeprom->len);
}
+static int bnxt_set_eee(struct net_device *dev, struct ethtool_eee *edata)
+{
+ struct bnxt *bp = netdev_priv(dev);
+ struct ethtool_eee *eee = &bp->eee;
+ struct bnxt_link_info *link_info = &bp->link_info;
+ u32 advertising =
+ _bnxt_fw_to_ethtool_adv_spds(link_info->advertising, 0);
+ int rc = 0;
+
+ if (BNXT_VF(bp))
+ return 0;
+
+ if (!(bp->flags & BNXT_FLAG_EEE_CAP))
+ return -EOPNOTSUPP;
+
+ if (!edata->eee_enabled)
+ goto eee_ok;
+
+ if (!(link_info->autoneg & BNXT_AUTONEG_SPEED)) {
+ netdev_warn(dev, "EEE requires autoneg\n");
+ return -EINVAL;
+ }
+ if (edata->tx_lpi_enabled) {
+ if (bp->lpi_tmr_hi && (edata->tx_lpi_timer > bp->lpi_tmr_hi ||
+ edata->tx_lpi_timer < bp->lpi_tmr_lo)) {
+ netdev_warn(dev, "Valid LPI timer range is %d and %d microsecs\n",
+ bp->lpi_tmr_lo, bp->lpi_tmr_hi);
+ return -EINVAL;
+ } else if (!bp->lpi_tmr_hi) {
+ edata->tx_lpi_timer = eee->tx_lpi_timer;
+ }
+ }
+ if (!edata->advertised) {
+ edata->advertised = advertising & eee->supported;
+ } else if (edata->advertised & ~advertising) {
+ netdev_warn(dev, "EEE advertised %x must be a subset of autoneg advertised speeds %x\n",
+ edata->advertised, advertising);
+ return -EINVAL;
+ }
+
+ eee->advertised = edata->advertised;
+ eee->tx_lpi_enabled = edata->tx_lpi_enabled;
+ eee->tx_lpi_timer = edata->tx_lpi_timer;
+eee_ok:
+ eee->eee_enabled = edata->eee_enabled;
+
+ if (netif_running(dev))
+ rc = bnxt_hwrm_set_link_setting(bp, false, true);
+
+ return rc;
+}
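(The new bnxt_set_eee() above enforces three rules before touching the hardware: EEE can only be turned on while autoneg is enabled, any requested LPI timer must fall inside the window reported by firmware when one is reported, and the advertised EEE speeds must be a subset of the autoneg advertisement. A self-contained sketch of just that validation, using plain C types and a hypothetical helper name rather than the driver's structures:)

#include <errno.h>
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical standalone check mirroring the rules enforced above. */
static int check_eee_request(bool autoneg, uint32_t autoneg_adv,
			     uint32_t eee_supported, uint32_t *eee_adv,
			     bool lpi_on, uint32_t *lpi_timer,
			     uint32_t tmr_lo, uint32_t tmr_hi,
			     uint32_t last_timer)
{
	if (!autoneg)
		return -EINVAL;		/* EEE requires autoneg */

	if (lpi_on) {
		if (tmr_hi && (*lpi_timer > tmr_hi || *lpi_timer < tmr_lo))
			return -EINVAL;	/* outside the reported window */
		if (!tmr_hi)
			*lpi_timer = last_timer; /* keep the previous value */
	}

	if (!*eee_adv)
		*eee_adv = autoneg_adv & eee_supported;	/* default */
	else if (*eee_adv & ~autoneg_adv)
		return -EINVAL;		/* must be a subset of autoneg */

	return 0;
}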
+
+static int bnxt_get_eee(struct net_device *dev, struct ethtool_eee *edata)
+{
+ struct bnxt *bp = netdev_priv(dev);
+
+ if (!(bp->flags & BNXT_FLAG_EEE_CAP))
+ return -EOPNOTSUPP;
+
+ *edata = bp->eee;
+ if (!bp->eee.eee_enabled) {
+ /* Preserve tx_lpi_timer so that the last value will be used
+ * by default when it is re-enabled.
+ */
+ edata->advertised = 0;
+ edata->tx_lpi_enabled = 0;
+ }
+
+ if (!bp->eee.eee_active)
+ edata->lp_advertised = 0;
+
+ return 0;
+}
+
+static int bnxt_read_sfp_module_eeprom_info(struct bnxt *bp, u16 i2c_addr,
+ u16 page_number, u16 start_addr,
+ u16 data_length, u8 *buf)
+{
+ struct hwrm_port_phy_i2c_read_input req = {0};
+ struct hwrm_port_phy_i2c_read_output *output = bp->hwrm_cmd_resp_addr;
+ int rc, byte_offset = 0;
+
+ bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_PORT_PHY_I2C_READ, -1, -1);
+ req.i2c_slave_addr = i2c_addr;
+ req.page_number = cpu_to_le16(page_number);
+ req.port_id = cpu_to_le16(bp->pf.port_id);
+ do {
+ u16 xfer_size;
+
+ xfer_size = min_t(u16, data_length, BNXT_MAX_PHY_I2C_RESP_SIZE);
+ data_length -= xfer_size;
+ req.page_offset = cpu_to_le16(start_addr + byte_offset);
+ req.data_length = xfer_size;
+ req.enables = cpu_to_le32(start_addr + byte_offset ?
+ PORT_PHY_I2C_READ_REQ_ENABLES_PAGE_OFFSET : 0);
+ mutex_lock(&bp->hwrm_cmd_lock);
+ rc = _hwrm_send_message(bp, &req, sizeof(req),
+ HWRM_CMD_TIMEOUT);
+ if (!rc)
+ memcpy(buf + byte_offset, output->data, xfer_size);
+ mutex_unlock(&bp->hwrm_cmd_lock);
+ byte_offset += xfer_size;
+ } while (!rc && data_length > 0);
+
+ return rc;
+}
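(bnxt_read_sfp_module_eeprom_info() above issues one HWRM_PORT_PHY_I2C_READ per chunk, bounded by BNXT_MAX_PHY_I2C_RESP_SIZE, and walks the page offset forward until the request is satisfied or an error occurs. The standalone sketch below shows that chunking pattern in isolation; the 64-byte chunk size is an assumption read off the 16-word data[] array in the response structure, and read_chunk() is a hypothetical callback, not a driver function.)

#include <stdint.h>
#include <stddef.h>

#define MAX_CHUNK 64	/* assumed: 16 * 4-byte words of response payload */

/* Hypothetical per-chunk reader: returns 0 on success. */
typedef int (*read_chunk_fn)(uint16_t offset, uint16_t len, uint8_t *dst);

static int read_in_chunks(uint16_t start, uint16_t total, uint8_t *buf,
			  read_chunk_fn read_chunk)
{
	uint16_t done = 0;
	int rc = 0;

	while (!rc && total > 0) {
		uint16_t xfer = total < MAX_CHUNK ? total : MAX_CHUNK;

		rc = read_chunk(start + done, xfer, buf + done);
		done += xfer;
		total -= xfer;
	}
	return rc;
}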
+
+static int bnxt_get_module_info(struct net_device *dev,
+ struct ethtool_modinfo *modinfo)
+{
+ struct bnxt *bp = netdev_priv(dev);
+ struct hwrm_port_phy_i2c_read_input req = {0};
+ struct hwrm_port_phy_i2c_read_output *output = bp->hwrm_cmd_resp_addr;
+ int rc;
+
+ /* No point in going further if the phy status indicates that the
+ * module is not inserted, is powered down, or is of type
+ * 10GBase-T
+ */
+ if (bp->link_info.module_status >
+ PORT_PHY_QCFG_RESP_MODULE_STATUS_WARNINGMSG)
+ return -EOPNOTSUPP;
+
+ /* This feature is not supported in older firmware versions */
+ if (bp->hwrm_spec_code < 0x10202)
+ return -EOPNOTSUPP;
+
+ bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_PORT_PHY_I2C_READ, -1, -1);
+ req.i2c_slave_addr = I2C_DEV_ADDR_A0;
+ req.page_number = 0;
+ req.page_offset = cpu_to_le16(SFP_EEPROM_SFF_8472_COMP_ADDR);
+ req.data_length = SFP_EEPROM_SFF_8472_COMP_SIZE;
+ req.port_id = cpu_to_le16(bp->pf.port_id);
+ mutex_lock(&bp->hwrm_cmd_lock);
+ rc = _hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+ if (!rc) {
+ u32 module_id = le32_to_cpu(output->data[0]);
+
+ switch (module_id) {
+ case SFF_MODULE_ID_SFP:
+ modinfo->type = ETH_MODULE_SFF_8472;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
+ break;
+ case SFF_MODULE_ID_QSFP:
+ case SFF_MODULE_ID_QSFP_PLUS:
+ modinfo->type = ETH_MODULE_SFF_8436;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8436_LEN;
+ break;
+ case SFF_MODULE_ID_QSFP28:
+ modinfo->type = ETH_MODULE_SFF_8636;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8636_LEN;
+ break;
+ default:
+ rc = -EOPNOTSUPP;
+ break;
+ }
+ }
+ mutex_unlock(&bp->hwrm_cmd_lock);
+ return rc;
+}
+
+static int bnxt_get_module_eeprom(struct net_device *dev,
+ struct ethtool_eeprom *eeprom,
+ u8 *data)
+{
+ struct bnxt *bp = netdev_priv(dev);
+ u16 start = eeprom->offset, length = eeprom->len;
+ int rc;
+
+ memset(data, 0, eeprom->len);
+
+ /* Read A0 portion of the EEPROM */
+ if (start < ETH_MODULE_SFF_8436_LEN) {
+ if (start + eeprom->len > ETH_MODULE_SFF_8436_LEN)
+ length = ETH_MODULE_SFF_8436_LEN - start;
+ rc = bnxt_read_sfp_module_eeprom_info(bp, I2C_DEV_ADDR_A0, 0,
+ start, length, data);
+ if (rc)
+ return rc;
+ start += length;
+ data += length;
+ length = eeprom->len - length;
+ }
+
+ /* Read A2 portion of the EEPROM */
+ if (length) {
+ start -= ETH_MODULE_SFF_8436_LEN;
+ rc = bnxt_read_sfp_module_eeprom_info(bp, I2C_DEV_ADDR_A2, 1,
+ start, length, data);
+ }
+ return rc;
+}
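(bnxt_get_module_eeprom() above maps a single flat ethtool offset/length pair onto the two SFP EEPROM devices: bytes below ETH_MODULE_SFF_8436_LEN (256) come from the A0 identification page, anything beyond it from the A2 diagnostics page. A small standalone helper that computes the same split — illustrative names and plain C types, not driver code:)

#include <stdint.h>

#define A0_LEN 256	/* ETH_MODULE_SFF_8436_LEN in the kernel headers */

struct eeprom_split {
	uint16_t a0_start, a0_len;	/* portion read from I2C addr 0xa0 */
	uint16_t a2_start, a2_len;	/* portion read from I2C addr 0xa2 */
};

/* Split a flat (offset, len) request across the A0/A2 boundary. */
static struct eeprom_split split_request(uint16_t offset, uint16_t len)
{
	struct eeprom_split s = {0};

	if (offset < A0_LEN) {
		s.a0_start = offset;
		s.a0_len = (offset + len > A0_LEN) ? A0_LEN - offset : len;
	}
	if (offset + len > A0_LEN) {
		uint16_t a2_off = (offset > A0_LEN) ? offset : A0_LEN;

		s.a2_start = a2_off - A0_LEN;
		s.a2_len = offset + len - a2_off;
	}
	return s;
}

For example, a request at offset 250 of length 20 yields 6 bytes from A0 starting at 250 and 14 bytes from A2 starting at 0, matching how the function above advances start, data and length between the two reads.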
+
const struct ethtool_ops bnxt_ethtool_ops = {
.get_settings = bnxt_get_settings,
.set_settings = bnxt_set_settings,
@@ -1409,4 +1645,8 @@ const struct ethtool_ops bnxt_ethtool_ops = {
.get_eeprom = bnxt_get_eeprom,
.set_eeprom = bnxt_set_eeprom,
.get_link = bnxt_get_link,
+ .get_eee = bnxt_get_eee,
+ .set_eee = bnxt_set_eee,
+ .get_module_info = bnxt_get_module_info,
+ .get_module_eeprom = bnxt_get_module_eeprom,
};
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.h
index 98fa81e08b58..3abc03b60dbc 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_ethtool.h
@@ -1,6 +1,6 @@
/* Broadcom NetXtreme-C/E network driver.
*
- * Copyright (c) 2014-2015 Broadcom Corporation
+ * Copyright (c) 2014-2016 Broadcom Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -12,6 +12,8 @@
extern const struct ethtool_ops bnxt_ethtool_ops;
+u32 _bnxt_fw_to_ethtool_adv_spds(u16, u8);
u32 bnxt_fw_to_ethtool_speed(u16);
+u16 bnxt_get_fw_auto_link_speeds(u32);
#endif
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_fw_hdr.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_fw_hdr.h
index e0aac65c6d82..461675caaacd 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_fw_hdr.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_fw_hdr.h
@@ -1,6 +1,6 @@
/* Broadcom NetXtreme-C/E network driver.
*
- * Copyright (c) 2014-2015 Broadcom Corporation
+ * Copyright (c) 2014-2016 Broadcom Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
index 4badbedcb421..05e3c49a7677 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_hsi.h
@@ -1,6 +1,6 @@
/* Broadcom NetXtreme-C/E network driver.
*
- * Copyright (c) 2014-2015 Broadcom Corporation
+ * Copyright (c) 2014-2016 Broadcom Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -104,6 +104,7 @@ struct hwrm_async_event_cmpl {
#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_DCB_CONFIG_CHANGE (0x3UL << 0)
#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_PORT_CONN_NOT_ALLOWED (0x4UL << 0)
#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_LINK_SPEED_CFG_NOT_ALLOWED (0x5UL << 0)
+ #define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_LINK_SPEED_CFG_CHANGE (0x6UL << 0)
#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_FUNC_DRVR_UNLOAD (0x10UL << 0)
#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_FUNC_DRVR_LOAD (0x11UL << 0)
#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_PF_DRVR_UNLOAD (0x20UL << 0)
@@ -111,6 +112,7 @@ struct hwrm_async_event_cmpl {
#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_VF_FLR (0x30UL << 0)
#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_VF_MAC_ADDR_CHANGE (0x31UL << 0)
#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_PF_VF_COMM_STATUS_CHANGE (0x32UL << 0)
+ #define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_VF_CFG_CHANGE (0x33UL << 0)
#define HWRM_ASYNC_EVENT_CMPL_EVENT_ID_HWRM_ERROR (0xffUL << 0)
__le32 event_data2;
u8 opaque_v;
@@ -141,6 +143,7 @@ struct hwrm_async_event_cmpl_link_status_change {
#define HWRM_ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_LINK_CHANGE 0x1UL
#define HWRM_ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_LINK_CHANGE_DOWN (0x0UL << 0)
#define HWRM_ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_LINK_CHANGE_UP (0x1UL << 0)
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_LINK_CHANGE_LAST HWRM_ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_LINK_CHANGE_UP
#define HWRM_ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_PORT_MASK 0xeUL
#define HWRM_ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_PORT_SFT 1
#define HWRM_ASYNC_EVENT_CMPL_LINK_STATUS_CHANGE_EVENT_DATA1_PORT_ID_MASK 0xffff0UL
@@ -195,6 +198,9 @@ struct hwrm_async_event_cmpl_link_speed_change {
#define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CHANGE_EVENT_DATA1_NEW_LINK_SPEED_100MBPS_25GB (0xfaUL << 1)
#define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CHANGE_EVENT_DATA1_NEW_LINK_SPEED_100MBPS_40GB (0x190UL << 1)
#define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CHANGE_EVENT_DATA1_NEW_LINK_SPEED_100MBPS_50GB (0x1f4UL << 1)
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CHANGE_EVENT_DATA1_NEW_LINK_SPEED_100MBPS_100GB (0x3e8UL << 1)
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CHANGE_EVENT_DATA1_NEW_LINK_SPEED_100MBPS_10MB (0xffffUL << 1)
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CHANGE_EVENT_DATA1_NEW_LINK_SPEED_100MBPS_LAST HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CHANGE_EVENT_DATA1_NEW_LINK_SPEED_100MBPS_10MB
#define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CHANGE_EVENT_DATA1_PORT_ID_MASK 0xffff0000UL
#define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CHANGE_EVENT_DATA1_PORT_ID_SFT 16
};
@@ -237,6 +243,55 @@ struct hwrm_async_event_cmpl_port_conn_not_allowed {
__le32 event_data1;
#define HWRM_ASYNC_EVENT_CMPL_PORT_CONN_NOT_ALLOWED_EVENT_DATA1_PORT_ID_MASK 0xffffUL
#define HWRM_ASYNC_EVENT_CMPL_PORT_CONN_NOT_ALLOWED_EVENT_DATA1_PORT_ID_SFT 0
+ #define HWRM_ASYNC_EVENT_CMPL_PORT_CONN_NOT_ALLOWED_EVENT_DATA1_ENFORCEMENT_POLICY_MASK 0xff0000UL
+ #define HWRM_ASYNC_EVENT_CMPL_PORT_CONN_NOT_ALLOWED_EVENT_DATA1_ENFORCEMENT_POLICY_SFT 16
+ #define HWRM_ASYNC_EVENT_CMPL_PORT_CONN_NOT_ALLOWED_EVENT_DATA1_ENFORCEMENT_POLICY_NONE (0x0UL << 16)
+ #define HWRM_ASYNC_EVENT_CMPL_PORT_CONN_NOT_ALLOWED_EVENT_DATA1_ENFORCEMENT_POLICY_DISABLETX (0x1UL << 16)
+ #define HWRM_ASYNC_EVENT_CMPL_PORT_CONN_NOT_ALLOWED_EVENT_DATA1_ENFORCEMENT_POLICY_WARNINGMSG (0x2UL << 16)
+ #define HWRM_ASYNC_EVENT_CMPL_PORT_CONN_NOT_ALLOWED_EVENT_DATA1_ENFORCEMENT_POLICY_PWRDOWN (0x3UL << 16)
+ #define HWRM_ASYNC_EVENT_CMPL_PORT_CONN_NOT_ALLOWED_EVENT_DATA1_ENFORCEMENT_POLICY_LAST HWRM_ASYNC_EVENT_CMPL_PORT_CONN_NOT_ALLOWED_EVENT_DATA1_ENFORCEMENT_POLICY_PWRDOWN
+};
+
+/* HWRM Asynchronous Event Completion Record for link speed config not allowed (16 bytes) */
+struct hwrm_async_event_cmpl_link_speed_cfg_not_allowed {
+ __le16 type;
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CFG_NOT_ALLOWED_TYPE_MASK 0x3fUL
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CFG_NOT_ALLOWED_TYPE_SFT 0
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CFG_NOT_ALLOWED_TYPE_HWRM_ASYNC_EVENT (0x2eUL << 0)
+ __le16 event_id;
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CFG_NOT_ALLOWED_EVENT_ID_LINK_SPEED_CFG_NOT_ALLOWED (0x5UL << 0)
+ __le32 event_data2;
+ u8 opaque_v;
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CFG_NOT_ALLOWED_V 0x1UL
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CFG_NOT_ALLOWED_OPAQUE_MASK 0xfeUL
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CFG_NOT_ALLOWED_OPAQUE_SFT 1
+ u8 timestamp_lo;
+ __le16 timestamp_hi;
+ __le32 event_data1;
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CFG_NOT_ALLOWED_EVENT_DATA1_PORT_ID_MASK 0xffffUL
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CFG_NOT_ALLOWED_EVENT_DATA1_PORT_ID_SFT 0
+};
+
+/* HWRM Asynchronous Event Completion Record for link speed configuration change (16 bytes) */
+struct hwrm_async_event_cmpl_link_speed_cfg_change {
+ __le16 type;
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CFG_CHANGE_TYPE_MASK 0x3fUL
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CFG_CHANGE_TYPE_SFT 0
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CFG_CHANGE_TYPE_HWRM_ASYNC_EVENT (0x2eUL << 0)
+ __le16 event_id;
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CFG_CHANGE_EVENT_ID_LINK_SPEED_CFG_CHANGE (0x6UL << 0)
+ __le32 event_data2;
+ u8 opaque_v;
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CFG_CHANGE_V 0x1UL
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CFG_CHANGE_OPAQUE_MASK 0xfeUL
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CFG_CHANGE_OPAQUE_SFT 1
+ u8 timestamp_lo;
+ __le16 timestamp_hi;
+ __le32 event_data1;
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CFG_CHANGE_EVENT_DATA1_PORT_ID_MASK 0xffffUL
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CFG_CHANGE_EVENT_DATA1_PORT_ID_SFT 0
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CFG_CHANGE_EVENT_DATA1_SUPPORTED_LINK_SPEEDS_CHANGE 0x10000UL
+ #define HWRM_ASYNC_EVENT_CMPL_LINK_SPEED_CFG_CHANGE_EVENT_DATA1_ILLEGAL_LINK_SPEED_CFG 0x20000UL
};
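(For reference, event_data1 of the new link speed configuration change record packs a 16-bit port ID plus two status flags, per the masks defined above. A hedged decode in plain C, assuming the value has already been converted from little-endian and using shortened names for the long HWRM constants:)

#include <stdint.h>
#include <stdbool.h>

#define PORT_ID_MASK		0xffffUL   /* ..._EVENT_DATA1_PORT_ID_MASK */
#define SUPPORTED_SPEEDS_CHANGE	0x10000UL  /* ..._SUPPORTED_LINK_SPEEDS_CHANGE */
#define ILLEGAL_LINK_SPEED_CFG	0x20000UL  /* ..._ILLEGAL_LINK_SPEED_CFG */

struct speed_cfg_change {
	uint16_t port_id;
	bool supported_speeds_changed;
	bool illegal_speed_cfg;
};

/* Decode event_data1 after it has been converted to CPU endianness. */
static struct speed_cfg_change decode_speed_cfg_change(uint32_t data1)
{
	struct speed_cfg_change ev = {
		.port_id = (uint16_t)(data1 & PORT_ID_MASK),
		.supported_speeds_changed = !!(data1 & SUPPORTED_SPEEDS_CHANGE),
		.illegal_speed_cfg = !!(data1 & ILLEGAL_LINK_SPEED_CFG),
	};
	return ev;
}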
/* HWRM Asynchronous Event Completion Record for Function Driver Unload (16 bytes) */
@@ -363,6 +418,47 @@ struct hwrm_async_event_cmpl_vf_mac_addr_change {
#define HWRM_ASYNC_EVENT_CMPL_VF_MAC_ADDR_CHANGE_EVENT_DATA1_VF_ID_SFT 0
};
+/* HWRM Asynchronous Event Completion Record for PF-VF communication status change (16 bytes) */
+struct hwrm_async_event_cmpl_pf_vf_comm_status_change {
+ __le16 type;
+ #define HWRM_ASYNC_EVENT_CMPL_PF_VF_COMM_STATUS_CHANGE_TYPE_MASK 0x3fUL
+ #define HWRM_ASYNC_EVENT_CMPL_PF_VF_COMM_STATUS_CHANGE_TYPE_SFT 0
+ #define HWRM_ASYNC_EVENT_CMPL_PF_VF_COMM_STATUS_CHANGE_TYPE_HWRM_ASYNC_EVENT (0x2eUL << 0)
+ __le16 event_id;
+ #define HWRM_ASYNC_EVENT_CMPL_PF_VF_COMM_STATUS_CHANGE_EVENT_ID_PF_VF_COMM_STATUS_CHANGE (0x32UL << 0)
+ __le32 event_data2;
+ u8 opaque_v;
+ #define HWRM_ASYNC_EVENT_CMPL_PF_VF_COMM_STATUS_CHANGE_V 0x1UL
+ #define HWRM_ASYNC_EVENT_CMPL_PF_VF_COMM_STATUS_CHANGE_OPAQUE_MASK 0xfeUL
+ #define HWRM_ASYNC_EVENT_CMPL_PF_VF_COMM_STATUS_CHANGE_OPAQUE_SFT 1
+ u8 timestamp_lo;
+ __le16 timestamp_hi;
+ __le32 event_data1;
+ #define HWRM_ASYNC_EVENT_CMPL_PF_VF_COMM_STATUS_CHANGE_EVENT_DATA1_COMM_ESTABLISHED 0x1UL
+};
+
+/* HWRM Asynchronous Event Completion Record for VF configuration change (16 bytes) */
+struct hwrm_async_event_cmpl_vf_cfg_change {
+ __le16 type;
+ #define HWRM_ASYNC_EVENT_CMPL_VF_CFG_CHANGE_TYPE_MASK 0x3fUL
+ #define HWRM_ASYNC_EVENT_CMPL_VF_CFG_CHANGE_TYPE_SFT 0
+ #define HWRM_ASYNC_EVENT_CMPL_VF_CFG_CHANGE_TYPE_HWRM_ASYNC_EVENT (0x2eUL << 0)
+ __le16 event_id;
+ #define HWRM_ASYNC_EVENT_CMPL_VF_CFG_CHANGE_EVENT_ID_VF_CFG_CHANGE (0x33UL << 0)
+ __le32 event_data2;
+ u8 opaque_v;
+ #define HWRM_ASYNC_EVENT_CMPL_VF_CFG_CHANGE_V 0x1UL
+ #define HWRM_ASYNC_EVENT_CMPL_VF_CFG_CHANGE_OPAQUE_MASK 0xfeUL
+ #define HWRM_ASYNC_EVENT_CMPL_VF_CFG_CHANGE_OPAQUE_SFT 1
+ u8 timestamp_lo;
+ __le16 timestamp_hi;
+ __le32 event_data1;
+ #define HWRM_ASYNC_EVENT_CMPL_VF_CFG_CHANGE_EVENT_DATA1_MTU_CHANGE 0x1UL
+ #define HWRM_ASYNC_EVENT_CMPL_VF_CFG_CHANGE_EVENT_DATA1_MRU_CHANGE 0x2UL
+ #define HWRM_ASYNC_EVENT_CMPL_VF_CFG_CHANGE_EVENT_DATA1_DFLT_MAC_ADDR_CHANGE 0x4UL
+ #define HWRM_ASYNC_EVENT_CMPL_VF_CFG_CHANGE_EVENT_DATA1_DFLT_VLAN_CHANGE 0x8UL
+};
+
/* HWRM Asynchronous Event Completion Record for HWRM Error (16 bytes) */
struct hwrm_async_event_cmpl_hwrm_error {
__le16 type;
@@ -377,6 +473,7 @@ struct hwrm_async_event_cmpl_hwrm_error {
#define HWRM_ASYNC_EVENT_CMPL_HWRM_ERROR_EVENT_DATA2_SEVERITY_WARNING (0x0UL << 0)
#define HWRM_ASYNC_EVENT_CMPL_HWRM_ERROR_EVENT_DATA2_SEVERITY_NONFATAL (0x1UL << 0)
#define HWRM_ASYNC_EVENT_CMPL_HWRM_ERROR_EVENT_DATA2_SEVERITY_FATAL (0x2UL << 0)
+ #define HWRM_ASYNC_EVENT_CMPL_HWRM_ERROR_EVENT_DATA2_SEVERITY_LAST HWRM_ASYNC_EVENT_CMPL_HWRM_ERROR_EVENT_DATA2_SEVERITY_FATAL
u8 opaque_v;
#define HWRM_ASYNC_EVENT_CMPL_HWRM_ERROR_V 0x1UL
#define HWRM_ASYNC_EVENT_CMPL_HWRM_ERROR_OPAQUE_MASK 0xfeUL
@@ -387,12 +484,12 @@ struct hwrm_async_event_cmpl_hwrm_error {
#define HWRM_ASYNC_EVENT_CMPL_HWRM_ERROR_EVENT_DATA1_TIMESTAMP 0x1UL
};
-/* HW Resource Manager Specification 1.0.0 */
+/* HW Resource Manager Specification 1.2.2 */
#define HWRM_VERSION_MAJOR 1
-#define HWRM_VERSION_MINOR 0
-#define HWRM_VERSION_UPDATE 0
+#define HWRM_VERSION_MINOR 2
+#define HWRM_VERSION_UPDATE 2
-#define HWRM_VERSION_STR "1.0.0"
+#define HWRM_VERSION_STR "1.2.2"
/*
* Following is the signature for HWRM message field that indicates not
* applicable (All F's). Need to cast it the size of the field if needed.
@@ -444,7 +541,7 @@ struct cmd_nums {
#define HWRM_FUNC_BUF_RGTR (0x1fUL)
#define HWRM_PORT_PHY_CFG (0x20UL)
#define HWRM_PORT_MAC_CFG (0x21UL)
- #define RESERVED2 (0x22UL)
+ #define HWRM_PORT_TS_QUERY (0x22UL)
#define HWRM_PORT_QSTATS (0x23UL)
#define HWRM_PORT_LPBK_QSTATS (0x24UL)
#define HWRM_PORT_CLR_STATS (0x25UL)
@@ -452,6 +549,9 @@ struct cmd_nums {
#define HWRM_PORT_PHY_QCFG (0x27UL)
#define HWRM_PORT_MAC_QCFG (0x28UL)
#define HWRM_PORT_BLINK_LED (0x29UL)
+ #define HWRM_PORT_PHY_QCAPS (0x2aUL)
+ #define HWRM_PORT_PHY_I2C_WRITE (0x2bUL)
+ #define HWRM_PORT_PHY_I2C_READ (0x2cUL)
#define HWRM_QUEUE_QPORTCFG (0x30UL)
#define HWRM_QUEUE_QCFG (0x31UL)
#define HWRM_QUEUE_CFG (0x32UL)
@@ -531,6 +631,7 @@ struct cmd_nums {
__le16 unused_0[3];
};
+/* Return Codes (8 bytes) */
struct ret_codes {
__le16 error_code;
#define HWRM_ERR_CODE_SUCCESS (0x0UL)
@@ -875,10 +976,11 @@ struct hwrm_func_vf_cfg_input {
#define FUNC_VF_CFG_REQ_ENABLES_MTU 0x1UL
#define FUNC_VF_CFG_REQ_ENABLES_GUEST_VLAN 0x2UL
#define FUNC_VF_CFG_REQ_ENABLES_ASYNC_EVENT_CR 0x4UL
+ #define FUNC_VF_CFG_REQ_ENABLES_DFLT_MAC_ADDR 0x8UL
__le16 mtu;
__le16 guest_vlan;
__le16 async_event_cr;
- __le16 unused_0[3];
+ u8 dflt_mac_addr[6];
};
/* Output (16 bytes) */
@@ -917,7 +1019,8 @@ struct hwrm_func_qcaps_output {
__le32 flags;
#define FUNC_QCAPS_RESP_FLAGS_PUSH_MODE_SUPPORTED 0x1UL
#define FUNC_QCAPS_RESP_FLAGS_GLOBAL_MSIX_AUTOMASKING 0x2UL
- u8 perm_mac_address[6];
+ #define FUNC_QCAPS_RESP_FLAGS_PTP_SUPPORTED 0x4UL
+ u8 mac_address[6];
__le16 max_rsscos_ctx;
__le16 max_cmpl_rings;
__le16 max_tx_rings;
@@ -942,6 +1045,67 @@ struct hwrm_func_qcaps_output {
u8 valid;
};
+/* hwrm_func_qcfg */
+/* Input (24 bytes) */
+struct hwrm_func_qcfg_input {
+ __le16 req_type;
+ __le16 cmpl_ring;
+ __le16 seq_id;
+ __le16 target_id;
+ __le64 resp_addr;
+ __le16 fid;
+ __le16 unused_0[3];
+};
+
+/* Output (72 bytes) */
+struct hwrm_func_qcfg_output {
+ __le16 error_code;
+ __le16 req_type;
+ __le16 seq_id;
+ __le16 resp_len;
+ __le16 fid;
+ __le16 port_id;
+ __le16 vlan;
+ u8 unused_0;
+ u8 unused_1;
+ u8 mac_address[6];
+ __le16 pci_id;
+ __le16 alloc_rsscos_ctx;
+ __le16 alloc_cmpl_rings;
+ __le16 alloc_tx_rings;
+ __le16 alloc_rx_rings;
+ __le16 alloc_l2_ctx;
+ __le16 alloc_vnics;
+ __le16 mtu;
+ __le16 mru;
+ __le16 stat_ctx_id;
+ u8 port_partition_type;
+ #define FUNC_QCFG_RESP_PORT_PARTITION_TYPE_SPF (0x0UL << 0)
+ #define FUNC_QCFG_RESP_PORT_PARTITION_TYPE_MPFS (0x1UL << 0)
+ #define FUNC_QCFG_RESP_PORT_PARTITION_TYPE_NPAR1_0 (0x2UL << 0)
+ #define FUNC_QCFG_RESP_PORT_PARTITION_TYPE_NPAR1_5 (0x3UL << 0)
+ #define FUNC_QCFG_RESP_PORT_PARTITION_TYPE_NPAR2_0 (0x4UL << 0)
+ #define FUNC_QCFG_RESP_PORT_PARTITION_TYPE_UNKNOWN (0xffUL << 0)
+ u8 unused_2;
+ __le16 dflt_vnic_id;
+ u8 unused_3;
+ u8 unused_4;
+ __le32 min_bw;
+ __le32 max_bw;
+ u8 evb_mode;
+ #define FUNC_QCFG_RESP_EVB_MODE_NO_EVB (0x0UL << 0)
+ #define FUNC_QCFG_RESP_EVB_MODE_VEB (0x1UL << 0)
+ #define FUNC_QCFG_RESP_EVB_MODE_VEPA (0x2UL << 0)
+ u8 unused_5;
+ __le16 unused_6;
+ __le32 alloc_mcast_filters;
+ __le32 alloc_hw_ring_grps;
+ u8 unused_7;
+ u8 unused_8;
+ u8 unused_9;
+ u8 valid;
+};
+
/* hwrm_func_cfg */
/* Input (88 bytes) */
struct hwrm_func_cfg_input {
@@ -1171,6 +1335,7 @@ struct hwrm_func_drv_rgtr_input {
#define FUNC_DRV_RGTR_REQ_OS_TYPE_UNKNOWN (0x0UL << 0)
#define FUNC_DRV_RGTR_REQ_OS_TYPE_OTHER (0x1UL << 0)
#define FUNC_DRV_RGTR_REQ_OS_TYPE_MSDOS (0xeUL << 0)
+ #define FUNC_DRV_RGTR_REQ_OS_TYPE_WINDOWS (0x12UL << 0)
#define FUNC_DRV_RGTR_REQ_OS_TYPE_SOLARIS (0x1dUL << 0)
#define FUNC_DRV_RGTR_REQ_OS_TYPE_LINUX (0x24UL << 0)
#define FUNC_DRV_RGTR_REQ_OS_TYPE_FREEBSD (0x2aUL << 0)
@@ -1302,6 +1467,7 @@ struct hwrm_func_drv_qver_output {
#define FUNC_DRV_QVER_RESP_OS_TYPE_UNKNOWN (0x0UL << 0)
#define FUNC_DRV_QVER_RESP_OS_TYPE_OTHER (0x1UL << 0)
#define FUNC_DRV_QVER_RESP_OS_TYPE_MSDOS (0xeUL << 0)
+ #define FUNC_DRV_QVER_RESP_OS_TYPE_WINDOWS (0x12UL << 0)
#define FUNC_DRV_QVER_RESP_OS_TYPE_SOLARIS (0x1dUL << 0)
#define FUNC_DRV_QVER_RESP_OS_TYPE_LINUX (0x24UL << 0)
#define FUNC_DRV_QVER_RESP_OS_TYPE_FREEBSD (0x2aUL << 0)
@@ -1317,7 +1483,7 @@ struct hwrm_func_drv_qver_output {
};
/* hwrm_port_phy_cfg */
-/* Input (48 bytes) */
+/* Input (56 bytes) */
struct hwrm_port_phy_cfg_input {
__le16 req_type;
__le16 cmpl_ring;
@@ -1329,6 +1495,10 @@ struct hwrm_port_phy_cfg_input {
#define PORT_PHY_CFG_REQ_FLAGS_FORCE_LINK_DOWN 0x2UL
#define PORT_PHY_CFG_REQ_FLAGS_FORCE 0x4UL
#define PORT_PHY_CFG_REQ_FLAGS_RESTART_AUTONEG 0x8UL
+ #define PORT_PHY_CFG_REQ_FLAGS_EEE_ENABLE 0x10UL
+ #define PORT_PHY_CFG_REQ_FLAGS_EEE_DISABLE 0x20UL
+ #define PORT_PHY_CFG_REQ_FLAGS_EEE_TX_LPI_ENABLE 0x40UL
+ #define PORT_PHY_CFG_REQ_FLAGS_EEE_TX_LPI_DISABLE 0x80UL
__le32 enables;
#define PORT_PHY_CFG_REQ_ENABLES_AUTO_MODE 0x1UL
#define PORT_PHY_CFG_REQ_ENABLES_AUTO_DUPLEX 0x2UL
@@ -1339,6 +1509,8 @@ struct hwrm_port_phy_cfg_input {
#define PORT_PHY_CFG_REQ_ENABLES_LPBK 0x40UL
#define PORT_PHY_CFG_REQ_ENABLES_PREEMPHASIS 0x80UL
#define PORT_PHY_CFG_REQ_ENABLES_FORCE_PAUSE 0x100UL
+ #define PORT_PHY_CFG_REQ_ENABLES_EEE_LINK_SPEED_MASK 0x200UL
+ #define PORT_PHY_CFG_REQ_ENABLES_TX_LPI_TIMER 0x400UL
__le16 port_id;
__le16 force_link_speed;
#define PORT_PHY_CFG_REQ_FORCE_LINK_SPEED_100MB (0x1UL << 0)
@@ -1350,12 +1522,14 @@ struct hwrm_port_phy_cfg_input {
#define PORT_PHY_CFG_REQ_FORCE_LINK_SPEED_25GB (0xfaUL << 0)
#define PORT_PHY_CFG_REQ_FORCE_LINK_SPEED_40GB (0x190UL << 0)
#define PORT_PHY_CFG_REQ_FORCE_LINK_SPEED_50GB (0x1f4UL << 0)
+ #define PORT_PHY_CFG_REQ_FORCE_LINK_SPEED_100GB (0x3e8UL << 0)
+ #define PORT_PHY_CFG_REQ_FORCE_LINK_SPEED_10MB (0xffffUL << 0)
u8 auto_mode;
#define PORT_PHY_CFG_REQ_AUTO_MODE_NONE (0x0UL << 0)
#define PORT_PHY_CFG_REQ_AUTO_MODE_ALL_SPEEDS (0x1UL << 0)
#define PORT_PHY_CFG_REQ_AUTO_MODE_ONE_SPEED (0x2UL << 0)
#define PORT_PHY_CFG_REQ_AUTO_MODE_ONE_OR_BELOW (0x3UL << 0)
- #define PORT_PHY_CFG_REQ_AUTO_MODE_MASK (0x4UL << 0)
+ #define PORT_PHY_CFG_REQ_AUTO_MODE_SPEED_MASK (0x4UL << 0)
u8 auto_duplex;
#define PORT_PHY_CFG_REQ_AUTO_DUPLEX_HALF (0x0UL << 0)
#define PORT_PHY_CFG_REQ_AUTO_DUPLEX_FULL (0x1UL << 0)
@@ -1363,6 +1537,7 @@ struct hwrm_port_phy_cfg_input {
u8 auto_pause;
#define PORT_PHY_CFG_REQ_AUTO_PAUSE_TX 0x1UL
#define PORT_PHY_CFG_REQ_AUTO_PAUSE_RX 0x2UL
+ #define PORT_PHY_CFG_REQ_AUTO_PAUSE_AUTONEG_PAUSE 0x4UL
u8 unused_0;
__le16 auto_link_speed;
#define PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_100MB (0x1UL << 0)
@@ -1374,6 +1549,8 @@ struct hwrm_port_phy_cfg_input {
#define PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_25GB (0xfaUL << 0)
#define PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_40GB (0x190UL << 0)
#define PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_50GB (0x1f4UL << 0)
+ #define PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_100GB (0x3e8UL << 0)
+ #define PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_10MB (0xffffUL << 0)
__le16 auto_link_speed_mask;
#define PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_MASK_100MBHD 0x1UL
#define PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_MASK_100MB 0x2UL
@@ -1386,6 +1563,9 @@ struct hwrm_port_phy_cfg_input {
#define PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_MASK_25GB 0x100UL
#define PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_MASK_40GB 0x200UL
#define PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_MASK_50GB 0x400UL
+ #define PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_MASK_100GB 0x800UL
+ #define PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_MASK_10MBHD 0x1000UL
+ #define PORT_PHY_CFG_REQ_AUTO_LINK_SPEED_MASK_10MB 0x2000UL
u8 wirespeed;
#define PORT_PHY_CFG_REQ_WIRESPEED_OFF (0x0UL << 0)
#define PORT_PHY_CFG_REQ_WIRESPEED_ON (0x1UL << 0)
@@ -1398,7 +1578,20 @@ struct hwrm_port_phy_cfg_input {
#define PORT_PHY_CFG_REQ_FORCE_PAUSE_RX 0x2UL
u8 unused_1;
__le32 preemphasis;
- __le32 unused_2;
+ __le16 eee_link_speed_mask;
+ #define PORT_PHY_CFG_REQ_EEE_LINK_SPEED_MASK_RSVD1 0x1UL
+ #define PORT_PHY_CFG_REQ_EEE_LINK_SPEED_MASK_100MB 0x2UL
+ #define PORT_PHY_CFG_REQ_EEE_LINK_SPEED_MASK_RSVD2 0x4UL
+ #define PORT_PHY_CFG_REQ_EEE_LINK_SPEED_MASK_1GB 0x8UL
+ #define PORT_PHY_CFG_REQ_EEE_LINK_SPEED_MASK_RSVD3 0x10UL
+ #define PORT_PHY_CFG_REQ_EEE_LINK_SPEED_MASK_RSVD4 0x20UL
+ #define PORT_PHY_CFG_REQ_EEE_LINK_SPEED_MASK_10GB 0x40UL
+ u8 unused_2;
+ u8 unused_3;
+ __le32 tx_lpi_timer;
+ __le32 unused_4;
+ #define PORT_PHY_CFG_REQ_TX_LPI_TIMER_MASK 0xffffffUL
+ #define PORT_PHY_CFG_REQ_TX_LPI_TIMER_SFT 0
};
/* Output (16 bytes) */
@@ -1426,7 +1619,7 @@ struct hwrm_port_phy_qcfg_input {
__le16 unused_0[3];
};
-/* Output (48 bytes) */
+/* Output (96 bytes) */
struct hwrm_port_phy_qcfg_output {
__le16 error_code;
__le16 req_type;
@@ -1447,6 +1640,8 @@ struct hwrm_port_phy_qcfg_output {
#define PORT_PHY_QCFG_RESP_LINK_SPEED_25GB (0xfaUL << 0)
#define PORT_PHY_QCFG_RESP_LINK_SPEED_40GB (0x190UL << 0)
#define PORT_PHY_QCFG_RESP_LINK_SPEED_50GB (0x1f4UL << 0)
+ #define PORT_PHY_QCFG_RESP_LINK_SPEED_100GB (0x3e8UL << 0)
+ #define PORT_PHY_QCFG_RESP_LINK_SPEED_10MB (0xffffUL << 0)
u8 duplex;
#define PORT_PHY_QCFG_RESP_DUPLEX_HALF (0x0UL << 0)
#define PORT_PHY_QCFG_RESP_DUPLEX_FULL (0x1UL << 0)
@@ -1465,6 +1660,9 @@ struct hwrm_port_phy_qcfg_output {
#define PORT_PHY_QCFG_RESP_SUPPORT_SPEEDS_25GB 0x100UL
#define PORT_PHY_QCFG_RESP_SUPPORT_SPEEDS_40GB 0x200UL
#define PORT_PHY_QCFG_RESP_SUPPORT_SPEEDS_50GB 0x400UL
+ #define PORT_PHY_QCFG_RESP_SUPPORT_SPEEDS_100GB 0x800UL
+ #define PORT_PHY_QCFG_RESP_SUPPORT_SPEEDS_10MBHD 0x1000UL
+ #define PORT_PHY_QCFG_RESP_SUPPORT_SPEEDS_10MB 0x2000UL
__le16 force_link_speed;
#define PORT_PHY_QCFG_RESP_FORCE_LINK_SPEED_100MB (0x1UL << 0)
#define PORT_PHY_QCFG_RESP_FORCE_LINK_SPEED_1GB (0xaUL << 0)
@@ -1475,15 +1673,18 @@ struct hwrm_port_phy_qcfg_output {
#define PORT_PHY_QCFG_RESP_FORCE_LINK_SPEED_25GB (0xfaUL << 0)
#define PORT_PHY_QCFG_RESP_FORCE_LINK_SPEED_40GB (0x190UL << 0)
#define PORT_PHY_QCFG_RESP_FORCE_LINK_SPEED_50GB (0x1f4UL << 0)
+ #define PORT_PHY_QCFG_RESP_FORCE_LINK_SPEED_100GB (0x3e8UL << 0)
+ #define PORT_PHY_QCFG_RESP_FORCE_LINK_SPEED_10MB (0xffffUL << 0)
u8 auto_mode;
#define PORT_PHY_QCFG_RESP_AUTO_MODE_NONE (0x0UL << 0)
#define PORT_PHY_QCFG_RESP_AUTO_MODE_ALL_SPEEDS (0x1UL << 0)
#define PORT_PHY_QCFG_RESP_AUTO_MODE_ONE_SPEED (0x2UL << 0)
#define PORT_PHY_QCFG_RESP_AUTO_MODE_ONE_OR_BELOW (0x3UL << 0)
- #define PORT_PHY_QCFG_RESP_AUTO_MODE_MASK (0x4UL << 0)
+ #define PORT_PHY_QCFG_RESP_AUTO_MODE_SPEED_MASK (0x4UL << 0)
u8 auto_pause;
#define PORT_PHY_QCFG_RESP_AUTO_PAUSE_TX 0x1UL
#define PORT_PHY_QCFG_RESP_AUTO_PAUSE_RX 0x2UL
+ #define PORT_PHY_QCFG_RESP_AUTO_PAUSE_AUTONEG_PAUSE 0x4UL
__le16 auto_link_speed;
#define PORT_PHY_QCFG_RESP_AUTO_LINK_SPEED_100MB (0x1UL << 0)
#define PORT_PHY_QCFG_RESP_AUTO_LINK_SPEED_1GB (0xaUL << 0)
@@ -1494,6 +1695,8 @@ struct hwrm_port_phy_qcfg_output {
#define PORT_PHY_QCFG_RESP_AUTO_LINK_SPEED_25GB (0xfaUL << 0)
#define PORT_PHY_QCFG_RESP_AUTO_LINK_SPEED_40GB (0x190UL << 0)
#define PORT_PHY_QCFG_RESP_AUTO_LINK_SPEED_50GB (0x1f4UL << 0)
+ #define PORT_PHY_QCFG_RESP_AUTO_LINK_SPEED_100GB (0x3e8UL << 0)
+ #define PORT_PHY_QCFG_RESP_AUTO_LINK_SPEED_10MB (0xffffUL << 0)
__le16 auto_link_speed_mask;
#define PORT_PHY_QCFG_RESP_AUTO_LINK_SPEED_MASK_100MBHD 0x1UL
#define PORT_PHY_QCFG_RESP_AUTO_LINK_SPEED_MASK_100MB 0x2UL
@@ -1506,6 +1709,9 @@ struct hwrm_port_phy_qcfg_output {
#define PORT_PHY_QCFG_RESP_AUTO_LINK_SPEED_MASK_25GB 0x100UL
#define PORT_PHY_QCFG_RESP_AUTO_LINK_SPEED_MASK_40GB 0x200UL
#define PORT_PHY_QCFG_RESP_AUTO_LINK_SPEED_MASK_50GB 0x400UL
+ #define PORT_PHY_QCFG_RESP_AUTO_LINK_SPEED_MASK_100GB 0x800UL
+ #define PORT_PHY_QCFG_RESP_AUTO_LINK_SPEED_MASK_10MBHD 0x1000UL
+ #define PORT_PHY_QCFG_RESP_AUTO_LINK_SPEED_MASK_10MB 0x2000UL
u8 wirespeed;
#define PORT_PHY_QCFG_RESP_WIRESPEED_OFF (0x0UL << 0)
#define PORT_PHY_QCFG_RESP_WIRESPEED_ON (0x1UL << 0)
@@ -1516,31 +1722,49 @@ struct hwrm_port_phy_qcfg_output {
u8 force_pause;
#define PORT_PHY_QCFG_RESP_FORCE_PAUSE_TX 0x1UL
#define PORT_PHY_QCFG_RESP_FORCE_PAUSE_RX 0x2UL
- u8 reserved1;
+ u8 module_status;
+ #define PORT_PHY_QCFG_RESP_MODULE_STATUS_NONE (0x0UL << 0)
+ #define PORT_PHY_QCFG_RESP_MODULE_STATUS_DISABLETX (0x1UL << 0)
+ #define PORT_PHY_QCFG_RESP_MODULE_STATUS_WARNINGMSG (0x2UL << 0)
+ #define PORT_PHY_QCFG_RESP_MODULE_STATUS_PWRDOWN (0x3UL << 0)
+ #define PORT_PHY_QCFG_RESP_MODULE_STATUS_NOTINSERTED (0x4UL << 0)
+ #define PORT_PHY_QCFG_RESP_MODULE_STATUS_NOTAPPLICABLE (0xffUL << 0)
__le32 preemphasis;
u8 phy_maj;
u8 phy_min;
u8 phy_bld;
u8 phy_type;
- #define PORT_PHY_QCFG_RESP_PHY_TYPE_BASECR4 (0x1UL << 0)
+ #define PORT_PHY_QCFG_RESP_PHY_TYPE_UNKNOWN (0x0UL << 0)
+ #define PORT_PHY_QCFG_RESP_PHY_TYPE_BASECR (0x1UL << 0)
#define PORT_PHY_QCFG_RESP_PHY_TYPE_BASEKR4 (0x2UL << 0)
- #define PORT_PHY_QCFG_RESP_PHY_TYPE_BASELR4 (0x3UL << 0)
- #define PORT_PHY_QCFG_RESP_PHY_TYPE_BASESR4 (0x4UL << 0)
+ #define PORT_PHY_QCFG_RESP_PHY_TYPE_BASELR (0x3UL << 0)
+ #define PORT_PHY_QCFG_RESP_PHY_TYPE_BASESR (0x4UL << 0)
#define PORT_PHY_QCFG_RESP_PHY_TYPE_BASEKR2 (0x5UL << 0)
- #define PORT_PHY_QCFG_RESP_PHY_TYPE_BASEKX4 (0x6UL << 0)
+ #define PORT_PHY_QCFG_RESP_PHY_TYPE_BASEKX (0x6UL << 0)
#define PORT_PHY_QCFG_RESP_PHY_TYPE_BASEKR (0x7UL << 0)
#define PORT_PHY_QCFG_RESP_PHY_TYPE_BASET (0x8UL << 0)
+ #define PORT_PHY_QCFG_RESP_PHY_TYPE_BASETE (0x9UL << 0)
+ #define PORT_PHY_QCFG_RESP_PHY_TYPE_SGMIIEXTPHY (0xaUL << 0)
u8 media_type;
+ #define PORT_PHY_QCFG_RESP_MEDIA_TYPE_UNKNOWN (0x0UL << 0)
#define PORT_PHY_QCFG_RESP_MEDIA_TYPE_TP (0x1UL << 0)
#define PORT_PHY_QCFG_RESP_MEDIA_TYPE_DAC (0x2UL << 0)
#define PORT_PHY_QCFG_RESP_MEDIA_TYPE_FIBRE (0x3UL << 0)
- u8 transceiver_type;
- #define PORT_PHY_QCFG_RESP_TRANSCEIVER_TYPE_XCVR_INTERNAL (0x1UL << 0)
- #define PORT_PHY_QCFG_RESP_TRANSCEIVER_TYPE_XCVR_EXTERNAL (0x2UL << 0)
- u8 phy_addr;
+ u8 xcvr_pkg_type;
+ #define PORT_PHY_QCFG_RESP_XCVR_PKG_TYPE_XCVR_INTERNAL (0x1UL << 0)
+ #define PORT_PHY_QCFG_RESP_XCVR_PKG_TYPE_XCVR_EXTERNAL (0x2UL << 0)
+ u8 eee_config_phy_addr;
#define PORT_PHY_QCFG_RESP_PHY_ADDR_MASK 0x1fUL
#define PORT_PHY_QCFG_RESP_PHY_ADDR_SFT 0
- u8 unused_2;
+ #define PORT_PHY_QCFG_RESP_EEE_CONFIG_EEE_ENABLED 0x20UL
+ #define PORT_PHY_QCFG_RESP_EEE_CONFIG_EEE_ACTIVE 0x40UL
+ #define PORT_PHY_QCFG_RESP_EEE_CONFIG_EEE_TX_LPI 0x80UL
+ #define PORT_PHY_QCFG_RESP_EEE_CONFIG_MASK 0xe0UL
+ #define PORT_PHY_QCFG_RESP_EEE_CONFIG_SFT 5
+ u8 parallel_detect;
+ #define PORT_PHY_QCFG_RESP_PARALLEL_DETECT 0x1UL
+ #define PORT_PHY_QCFG_RESP_RESERVED_MASK 0xfeUL
+ #define PORT_PHY_QCFG_RESP_RESERVED_SFT 1
__le16 link_partner_adv_speeds;
#define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_SPEEDS_100MBHD 0x1UL
#define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_SPEEDS_100MB 0x2UL
@@ -1553,15 +1777,48 @@ struct hwrm_port_phy_qcfg_output {
#define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_SPEEDS_25GB 0x100UL
#define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_SPEEDS_40GB 0x200UL
#define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_SPEEDS_50GB 0x400UL
+ #define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_SPEEDS_100GB 0x800UL
+ #define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_SPEEDS_10MBHD 0x1000UL
+ #define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_SPEEDS_10MB 0x2000UL
u8 link_partner_adv_auto_mode;
#define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_AUTO_MODE_NONE (0x0UL << 0)
#define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_AUTO_MODE_ALL_SPEEDS (0x1UL << 0)
#define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_AUTO_MODE_ONE_SPEED (0x2UL << 0)
#define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_AUTO_MODE_ONE_OR_BELOW (0x3UL << 0)
- #define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_AUTO_MODE_MASK (0x4UL << 0)
+ #define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_AUTO_MODE_SPEED_MASK (0x4UL << 0)
u8 link_partner_adv_pause;
#define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_PAUSE_TX 0x1UL
#define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_PAUSE_RX 0x2UL
+ __le16 adv_eee_link_speed_mask;
+ #define PORT_PHY_QCFG_RESP_ADV_EEE_LINK_SPEED_MASK_RSVD1 0x1UL
+ #define PORT_PHY_QCFG_RESP_ADV_EEE_LINK_SPEED_MASK_100MB 0x2UL
+ #define PORT_PHY_QCFG_RESP_ADV_EEE_LINK_SPEED_MASK_RSVD2 0x4UL
+ #define PORT_PHY_QCFG_RESP_ADV_EEE_LINK_SPEED_MASK_1GB 0x8UL
+ #define PORT_PHY_QCFG_RESP_ADV_EEE_LINK_SPEED_MASK_RSVD3 0x10UL
+ #define PORT_PHY_QCFG_RESP_ADV_EEE_LINK_SPEED_MASK_RSVD4 0x20UL
+ #define PORT_PHY_QCFG_RESP_ADV_EEE_LINK_SPEED_MASK_10GB 0x40UL
+ __le16 link_partner_adv_eee_link_speed_mask;
+ #define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_EEE_LINK_SPEED_MASK_RSVD1 0x1UL
+ #define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_EEE_LINK_SPEED_MASK_100MB 0x2UL
+ #define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_EEE_LINK_SPEED_MASK_RSVD2 0x4UL
+ #define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_EEE_LINK_SPEED_MASK_1GB 0x8UL
+ #define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_EEE_LINK_SPEED_MASK_RSVD3 0x10UL
+ #define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_EEE_LINK_SPEED_MASK_RSVD4 0x20UL
+ #define PORT_PHY_QCFG_RESP_LINK_PARTNER_ADV_EEE_LINK_SPEED_MASK_10GB 0x40UL
+ __le32 xcvr_identifier_type_tx_lpi_timer;
+ #define PORT_PHY_QCFG_RESP_TX_LPI_TIMER_MASK 0xffffffUL
+ #define PORT_PHY_QCFG_RESP_TX_LPI_TIMER_SFT 0
+ #define PORT_PHY_QCFG_RESP_XCVR_IDENTIFIER_TYPE_MASK 0xff000000UL
+ #define PORT_PHY_QCFG_RESP_XCVR_IDENTIFIER_TYPE_SFT 24
+ #define PORT_PHY_QCFG_RESP_XCVR_IDENTIFIER_TYPE_UNKNOWN (0x0UL << 24)
+ #define PORT_PHY_QCFG_RESP_XCVR_IDENTIFIER_TYPE_SFP (0x3UL << 24)
+ #define PORT_PHY_QCFG_RESP_XCVR_IDENTIFIER_TYPE_QSFP (0xcUL << 24)
+ #define PORT_PHY_QCFG_RESP_XCVR_IDENTIFIER_TYPE_QSFPPLUS (0xdUL << 24)
+ #define PORT_PHY_QCFG_RESP_XCVR_IDENTIFIER_TYPE_QSFP28 (0x11UL << 24)
+ __le32 unused_1;
+ char phy_vendor_name[16];
+ char phy_vendor_partnumber[16];
+ __le32 unused_2;
u8 unused_3;
u8 unused_4;
u8 unused_5;
@@ -1569,7 +1826,7 @@ struct hwrm_port_phy_qcfg_output {
};
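(The phy_addr byte of the PHY query response is repurposed above as eee_config_phy_addr: the 5-bit PHY address stays in the low bits and the EEE enabled/active/TX-LPI flags occupy bits 5-7. An illustrative decode using those masks — hypothetical helper name, plain C types:)

#include <stdint.h>
#include <stdbool.h>

#define PHY_ADDR_MASK	0x1f	/* PORT_PHY_QCFG_RESP_PHY_ADDR_MASK */
#define EEE_ENABLED	0x20	/* ..._EEE_CONFIG_EEE_ENABLED */
#define EEE_ACTIVE	0x40	/* ..._EEE_CONFIG_EEE_ACTIVE */
#define EEE_TX_LPI	0x80	/* ..._EEE_CONFIG_EEE_TX_LPI */

/* Hypothetical helper splitting the combined PHY address / EEE config byte. */
static void decode_eee_config(uint8_t val, uint8_t *phy_addr,
			      bool *enabled, bool *active, bool *tx_lpi)
{
	*phy_addr = val & PHY_ADDR_MASK;
	*enabled = !!(val & EEE_ENABLED);
	*active = !!(val & EEE_ACTIVE);
	*tx_lpi = !!(val & EEE_TX_LPI);
}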
/* hwrm_port_mac_cfg */
-/* Input (32 bytes) */
+/* Input (40 bytes) */
struct hwrm_port_mac_cfg_input {
__le16 req_type;
__le16 cmpl_ring;
@@ -1581,6 +1838,10 @@ struct hwrm_port_mac_cfg_input {
#define PORT_MAC_CFG_REQ_FLAGS_COS_ASSIGNMENT_ENABLE 0x2UL
#define PORT_MAC_CFG_REQ_FLAGS_TUNNEL_PRI2COS_ENABLE 0x4UL
#define PORT_MAC_CFG_REQ_FLAGS_IP_DSCP2COS_ENABLE 0x8UL
+ #define PORT_MAC_CFG_REQ_FLAGS_PTP_RX_TS_CAPTURE_ENABLE 0x10UL
+ #define PORT_MAC_CFG_REQ_FLAGS_PTP_RX_TS_CAPTURE_DISABLE 0x20UL
+ #define PORT_MAC_CFG_REQ_FLAGS_PTP_TX_TS_CAPTURE_ENABLE 0x40UL
+ #define PORT_MAC_CFG_REQ_FLAGS_PTP_TX_TS_CAPTURE_DISABLE 0x80UL
__le32 enables;
#define PORT_MAC_CFG_REQ_ENABLES_IPG 0x1UL
#define PORT_MAC_CFG_REQ_ENABLES_LPBK 0x2UL
@@ -1588,6 +1849,8 @@ struct hwrm_port_mac_cfg_input {
#define PORT_MAC_CFG_REQ_ENABLES_LCOS_MAP_PRI 0x8UL
#define PORT_MAC_CFG_REQ_ENABLES_TUNNEL_PRI2COS_MAP_PRI 0x10UL
#define PORT_MAC_CFG_REQ_ENABLES_DSCP2COS_MAP_PRI 0x20UL
+ #define PORT_MAC_CFG_REQ_ENABLES_RX_TS_CAPTURE_PTP_MSG_TYPE 0x40UL
+ #define PORT_MAC_CFG_REQ_ENABLES_TX_TS_CAPTURE_PTP_MSG_TYPE 0x80UL
__le16 port_id;
u8 ipg;
u8 lpbk;
@@ -1598,6 +1861,9 @@ struct hwrm_port_mac_cfg_input {
u8 lcos_map_pri;
u8 tunnel_pri2cos_map_pri;
u8 dscp2pri_map_pri;
+ __le16 rx_ts_capture_ptp_msg_type;
+ __le16 tx_ts_capture_ptp_msg_type;
+ __le32 unused_0;
};
/* Output (16 bytes) */
@@ -1754,7 +2020,113 @@ struct hwrm_port_blink_led_output {
u8 valid;
};
-/* hwrm_queue_qportcfg */
+/* hwrm_port_phy_qcaps */
+/* Input (24 bytes) */
+struct hwrm_port_phy_qcaps_input {
+ __le16 req_type;
+ __le16 cmpl_ring;
+ __le16 seq_id;
+ __le16 target_id;
+ __le64 resp_addr;
+ __le16 port_id;
+ __le16 unused_0[3];
+};
+
+/* Output (24 bytes) */
+struct hwrm_port_phy_qcaps_output {
+ __le16 error_code;
+ __le16 req_type;
+ __le16 seq_id;
+ __le16 resp_len;
+ u8 eee_supported;
+ #define PORT_PHY_QCAPS_RESP_EEE_SUPPORTED 0x1UL
+ #define PORT_PHY_QCAPS_RESP_RSVD1_MASK 0xfeUL
+ #define PORT_PHY_QCAPS_RESP_RSVD1_SFT 1
+ u8 unused_0;
+ __le16 supported_speeds_force_mode;
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_FORCE_MODE_100MBHD 0x1UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_FORCE_MODE_100MB 0x2UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_FORCE_MODE_1GBHD 0x4UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_FORCE_MODE_1GB 0x8UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_FORCE_MODE_2GB 0x10UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_FORCE_MODE_2_5GB 0x20UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_FORCE_MODE_10GB 0x40UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_FORCE_MODE_20GB 0x80UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_FORCE_MODE_25GB 0x100UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_FORCE_MODE_40GB 0x200UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_FORCE_MODE_50GB 0x400UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_FORCE_MODE_100GB 0x800UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_FORCE_MODE_10MBHD 0x1000UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_FORCE_MODE_10MB 0x2000UL
+ __le16 supported_speeds_auto_mode;
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_AUTO_MODE_100MBHD 0x1UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_AUTO_MODE_100MB 0x2UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_AUTO_MODE_1GBHD 0x4UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_AUTO_MODE_1GB 0x8UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_AUTO_MODE_2GB 0x10UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_AUTO_MODE_2_5GB 0x20UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_AUTO_MODE_10GB 0x40UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_AUTO_MODE_20GB 0x80UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_AUTO_MODE_25GB 0x100UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_AUTO_MODE_40GB 0x200UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_AUTO_MODE_50GB 0x400UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_AUTO_MODE_100GB 0x800UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_AUTO_MODE_10MBHD 0x1000UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_AUTO_MODE_10MB 0x2000UL
+ __le16 supported_speeds_eee_mode;
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_EEE_MODE_RSVD1 0x1UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_EEE_MODE_100MB 0x2UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_EEE_MODE_RSVD2 0x4UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_EEE_MODE_1GB 0x8UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_EEE_MODE_RSVD3 0x10UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_EEE_MODE_RSVD4 0x20UL
+ #define PORT_PHY_QCAPS_RESP_SUPPORTED_SPEEDS_EEE_MODE_10GB 0x40UL
+ __le32 tx_lpi_timer_low;
+ #define PORT_PHY_QCAPS_RESP_TX_LPI_TIMER_LOW_MASK 0xffffffUL
+ #define PORT_PHY_QCAPS_RESP_TX_LPI_TIMER_LOW_SFT 0
+ #define PORT_PHY_QCAPS_RESP_RSVD2_MASK 0xff000000UL
+ #define PORT_PHY_QCAPS_RESP_RSVD2_SFT 24
+ __le32 valid_tx_lpi_timer_high;
+ #define PORT_PHY_QCAPS_RESP_TX_LPI_TIMER_HIGH_MASK 0xffffffUL
+ #define PORT_PHY_QCAPS_RESP_TX_LPI_TIMER_HIGH_SFT 0
+ #define PORT_PHY_QCAPS_RESP_VALID_MASK 0xff000000UL
+ #define PORT_PHY_QCAPS_RESP_VALID_SFT 24
+};
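(The new PHY capabilities response carries the valid LPI timer window as two 24-bit fields, the high one sharing its top byte with the valid indicator. Presumably these are the source of the lpi_tmr_lo/lpi_tmr_hi bounds checked in bnxt_set_eee() above; a minimal unpacking sketch under that assumption, with plain C types and values already in CPU endianness:)

#include <stdint.h>

#define TX_LPI_TIMER_MASK 0xffffffUL	/* low 24 bits of both fields */

/* Extract the firmware-reported LPI timer window (microseconds). */
static void get_lpi_timer_window(uint32_t tx_lpi_timer_low,
				 uint32_t valid_tx_lpi_timer_high,
				 uint32_t *tmr_lo, uint32_t *tmr_hi)
{
	*tmr_lo = tx_lpi_timer_low & TX_LPI_TIMER_MASK;
	*tmr_hi = valid_tx_lpi_timer_high & TX_LPI_TIMER_MASK;
}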
+
+/* hwrm_port_phy_i2c_read */
+/* Input (40 bytes) */
+struct hwrm_port_phy_i2c_read_input {
+ __le16 req_type;
+ __le16 cmpl_ring;
+ __le16 seq_id;
+ __le16 target_id;
+ __le64 resp_addr;
+ __le32 flags;
+ __le32 enables;
+ #define PORT_PHY_I2C_READ_REQ_ENABLES_PAGE_OFFSET 0x1UL
+ __le16 port_id;
+ u8 i2c_slave_addr;
+ u8 unused_0;
+ __le16 page_number;
+ __le16 page_offset;
+ u8 data_length;
+ u8 unused_1[7];
+};
+
+/* Output (80 bytes) */
+struct hwrm_port_phy_i2c_read_output {
+ __le16 error_code;
+ __le16 req_type;
+ __le16 seq_id;
+ __le16 resp_len;
+ __le32 data[16];
+ __le32 unused_0;
+ u8 unused_1;
+ u8 unused_2;
+ u8 unused_3;
+ u8 valid;
+};
+
/* Input (24 bytes) */
struct hwrm_queue_qportcfg_input {
__le16 req_type;
@@ -1766,6 +2138,7 @@ struct hwrm_queue_qportcfg_input {
#define QUEUE_QPORTCFG_REQ_FLAGS_PATH 0x1UL
#define QUEUE_QPORTCFG_REQ_FLAGS_PATH_TX (0x0UL << 0)
#define QUEUE_QPORTCFG_REQ_FLAGS_PATH_RX (0x1UL << 0)
+ #define QUEUE_QPORTCFG_REQ_FLAGS_PATH_LAST QUEUE_QPORTCFG_REQ_FLAGS_PATH_RX
__le16 port_id;
__le16 unused_0;
};
@@ -1838,6 +2211,7 @@ struct hwrm_queue_cfg_input {
#define QUEUE_CFG_REQ_FLAGS_PATH 0x1UL
#define QUEUE_CFG_REQ_FLAGS_PATH_TX (0x0UL << 0)
#define QUEUE_CFG_REQ_FLAGS_PATH_RX (0x1UL << 0)
+ #define QUEUE_CFG_REQ_FLAGS_PATH_LAST QUEUE_CFG_REQ_FLAGS_PATH_RX
__le32 enables;
#define QUEUE_CFG_REQ_ENABLES_DFLT_LEN 0x1UL
#define QUEUE_CFG_REQ_ENABLES_SERVICE_PROFILE 0x2UL
@@ -1875,6 +2249,7 @@ struct hwrm_queue_buffers_cfg_input {
#define QUEUE_BUFFERS_CFG_REQ_FLAGS_PATH 0x1UL
#define QUEUE_BUFFERS_CFG_REQ_FLAGS_PATH_TX (0x0UL << 0)
#define QUEUE_BUFFERS_CFG_REQ_FLAGS_PATH_RX (0x1UL << 0)
+ #define QUEUE_BUFFERS_CFG_REQ_FLAGS_PATH_LAST QUEUE_BUFFERS_CFG_REQ_FLAGS_PATH_RX
__le32 enables;
#define QUEUE_BUFFERS_CFG_REQ_ENABLES_RESERVED 0x1UL
#define QUEUE_BUFFERS_CFG_REQ_ENABLES_SHARED 0x2UL
@@ -1952,6 +2327,7 @@ struct hwrm_queue_pri2cos_cfg_input {
#define QUEUE_PRI2COS_CFG_REQ_FLAGS_PATH 0x1UL
#define QUEUE_PRI2COS_CFG_REQ_FLAGS_PATH_TX (0x0UL << 0)
#define QUEUE_PRI2COS_CFG_REQ_FLAGS_PATH_RX (0x1UL << 0)
+ #define QUEUE_PRI2COS_CFG_REQ_FLAGS_PATH_LAST QUEUE_PRI2COS_CFG_REQ_FLAGS_PATH_RX
#define QUEUE_PRI2COS_CFG_REQ_FLAGS_IVLAN 0x2UL
__le32 enables;
u8 port_id;
@@ -2158,6 +2534,8 @@ struct hwrm_vnic_cfg_input {
#define VNIC_CFG_REQ_FLAGS_DEFAULT 0x1UL
#define VNIC_CFG_REQ_FLAGS_VLAN_STRIP_MODE 0x2UL
#define VNIC_CFG_REQ_FLAGS_BD_STALL_MODE 0x4UL
+ #define VNIC_CFG_REQ_FLAGS_ROCE_DUAL_VNIC_MODE 0x8UL
+ #define VNIC_CFG_REQ_FLAGS_ROCE_ONLY_VNIC_MODE 0x10UL
__le32 enables;
#define VNIC_CFG_REQ_ENABLES_DFLT_RING_GRP 0x1UL
#define VNIC_CFG_REQ_ENABLES_RSS_RULE 0x2UL
@@ -2622,6 +3000,7 @@ struct hwrm_cfa_l2_filter_alloc_input {
#define CFA_L2_FILTER_ALLOC_REQ_FLAGS_PATH 0x1UL
#define CFA_L2_FILTER_ALLOC_REQ_FLAGS_PATH_TX (0x0UL << 0)
#define CFA_L2_FILTER_ALLOC_REQ_FLAGS_PATH_RX (0x1UL << 0)
+ #define CFA_L2_FILTER_ALLOC_REQ_FLAGS_PATH_LAST CFA_L2_FILTER_ALLOC_REQ_FLAGS_PATH_RX
#define CFA_L2_FILTER_ALLOC_REQ_FLAGS_LOOPBACK 0x2UL
#define CFA_L2_FILTER_ALLOC_REQ_FLAGS_DROP 0x4UL
#define CFA_L2_FILTER_ALLOC_REQ_FLAGS_OUTERMOST 0x8UL
@@ -2747,6 +3126,7 @@ struct hwrm_cfa_l2_filter_cfg_input {
#define CFA_L2_FILTER_CFG_REQ_FLAGS_PATH 0x1UL
#define CFA_L2_FILTER_CFG_REQ_FLAGS_PATH_TX (0x0UL << 0)
#define CFA_L2_FILTER_CFG_REQ_FLAGS_PATH_RX (0x1UL << 0)
+ #define CFA_L2_FILTER_CFG_REQ_FLAGS_PATH_LAST CFA_L2_FILTER_CFG_REQ_FLAGS_PATH_RX
#define CFA_L2_FILTER_CFG_REQ_FLAGS_DROP 0x2UL
__le32 enables;
#define CFA_L2_FILTER_CFG_REQ_ENABLES_DST_ID 0x1UL
@@ -3337,6 +3717,41 @@ struct hwrm_fw_reset_output {
u8 valid;
};
+/* hwrm_fw_qstatus */
+/* Input (24 bytes) */
+struct hwrm_fw_qstatus_input {
+ __le16 req_type;
+ __le16 cmpl_ring;
+ __le16 seq_id;
+ __le16 target_id;
+ __le64 resp_addr;
+ u8 embedded_proc_type;
+ #define FW_QSTATUS_REQ_EMBEDDED_PROC_TYPE_BOOT (0x0UL << 0)
+ #define FW_QSTATUS_REQ_EMBEDDED_PROC_TYPE_MGMT (0x1UL << 0)
+ #define FW_QSTATUS_REQ_EMBEDDED_PROC_TYPE_NETCTRL (0x2UL << 0)
+ #define FW_QSTATUS_REQ_EMBEDDED_PROC_TYPE_ROCE (0x3UL << 0)
+ #define FW_QSTATUS_REQ_EMBEDDED_PROC_TYPE_RSVD (0x4UL << 0)
+ u8 unused_0[7];
+};
+
+/* Output (16 bytes) */
+struct hwrm_fw_qstatus_output {
+ __le16 error_code;
+ __le16 req_type;
+ __le16 seq_id;
+ __le16 resp_len;
+ u8 selfrst_status;
+ #define FW_QSTATUS_RESP_SELFRST_STATUS_SELFRSTNONE (0x0UL << 0)
+ #define FW_QSTATUS_RESP_SELFRST_STATUS_SELFRSTASAP (0x1UL << 0)
+ #define FW_QSTATUS_RESP_SELFRST_STATUS_SELFRSTPCIERST (0x2UL << 0)
+ u8 unused_0;
+ __le16 unused_1;
+ u8 unused_2;
+ u8 unused_3;
+ u8 unused_4;
+ u8 valid;
+};
+
/* hwrm_exec_fwd_resp */
/* Input (128 bytes) */
struct hwrm_exec_fwd_resp_input {
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_nvm_defs.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_nvm_defs.h
index 43ef392c8588..40a7b0e09612 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_nvm_defs.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_nvm_defs.h
@@ -1,6 +1,6 @@
/* Broadcom NetXtreme-C/E network driver.
*
- * Copyright (c) 2014-2015 Broadcom Corporation
+ * Copyright (c) 2014-2016 Broadcom Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
index 0c5f510492f1..363884dd9e8a 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.c
@@ -1,6 +1,6 @@
/* Broadcom NetXtreme-C/E network driver.
*
- * Copyright (c) 2014-2015 Broadcom Corporation
+ * Copyright (c) 2014-2016 Broadcom Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -771,12 +771,8 @@ static int bnxt_vf_set_link(struct bnxt *bp, struct bnxt_vf_info *vf)
PORT_PHY_QCFG_RESP_LINK_NO_LINK) {
phy_qcfg_resp.link =
PORT_PHY_QCFG_RESP_LINK_LINK;
- if (phy_qcfg_resp.auto_link_speed)
- phy_qcfg_resp.link_speed =
- phy_qcfg_resp.auto_link_speed;
- else
- phy_qcfg_resp.link_speed =
- phy_qcfg_resp.force_link_speed;
+ phy_qcfg_resp.link_speed = cpu_to_le16(
+ PORT_PHY_QCFG_RESP_LINK_SPEED_10GB);
phy_qcfg_resp.duplex =
PORT_PHY_QCFG_RESP_DUPLEX_FULL;
phy_qcfg_resp.pause =
@@ -859,8 +855,8 @@ void bnxt_update_vf_mac(struct bnxt *bp)
* default but the stored zero MAC will allow the VF user to change
* the random MAC address using ndo_set_mac_address() if he wants.
*/
- if (!ether_addr_equal(resp->perm_mac_address, bp->vf.mac_addr))
- memcpy(bp->vf.mac_addr, resp->perm_mac_address, ETH_ALEN);
+ if (!ether_addr_equal(resp->mac_address, bp->vf.mac_addr))
+ memcpy(bp->vf.mac_addr, resp->mac_address, ETH_ALEN);
/* overwrite netdev dev_addr with admin VF MAC */
if (is_valid_ether_addr(bp->vf.mac_addr))
@@ -869,6 +865,31 @@ update_vf_mac_exit:
mutex_unlock(&bp->hwrm_cmd_lock);
}
+int bnxt_approve_mac(struct bnxt *bp, u8 *mac)
+{
+ struct hwrm_func_vf_cfg_input req = {0};
+ int rc = 0;
+
+ if (!BNXT_VF(bp))
+ return 0;
+
+ if (bp->hwrm_spec_code < 0x10202) {
+ if (is_valid_ether_addr(bp->vf.mac_addr))
+ rc = -EADDRNOTAVAIL;
+ goto mac_done;
+ }
+ bnxt_hwrm_cmd_hdr_init(bp, &req, HWRM_FUNC_VF_CFG, -1, -1);
+ req.enables = cpu_to_le32(FUNC_VF_CFG_REQ_ENABLES_DFLT_MAC_ADDR);
+ memcpy(req.dflt_mac_addr, mac, ETH_ALEN);
+ rc = hwrm_send_message(bp, &req, sizeof(req), HWRM_CMD_TIMEOUT);
+mac_done:
+ if (rc) {
+ rc = -EADDRNOTAVAIL;
+ netdev_warn(bp->dev, "VF MAC address %pM not approved by the PF\n",
+ mac);
+ }
+ return rc;
+}
#else
void bnxt_sriov_disable(struct bnxt *bp)
@@ -883,4 +904,9 @@ void bnxt_hwrm_exec_fwd_req(struct bnxt *bp)
void bnxt_update_vf_mac(struct bnxt *bp)
{
}
+
+int bnxt_approve_mac(struct bnxt *bp, u8 *mac)
+{
+ return 0;
+}
#endif
diff --git a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h
index c151280e3980..0392670ab49c 100644
--- a/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h
+++ b/drivers/net/ethernet/broadcom/bnxt/bnxt_sriov.h
@@ -1,6 +1,6 @@
/* Broadcom NetXtreme-C/E network driver.
*
- * Copyright (c) 2014-2015 Broadcom Corporation
+ * Copyright (c) 2014-2016 Broadcom Corporation
*
* This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by
@@ -20,4 +20,5 @@ int bnxt_sriov_configure(struct pci_dev *pdev, int num_vfs);
void bnxt_sriov_disable(struct bnxt *);
void bnxt_hwrm_exec_fwd_req(struct bnxt *);
void bnxt_update_vf_mac(struct bnxt *);
+int bnxt_approve_mac(struct bnxt *, u8 *);
#endif
diff --git a/drivers/net/ethernet/broadcom/cnic.c b/drivers/net/ethernet/broadcom/cnic.c
index b69dc58faeab..b1d2ac818710 100644
--- a/drivers/net/ethernet/broadcom/cnic.c
+++ b/drivers/net/ethernet/broadcom/cnic.c
@@ -5350,7 +5350,10 @@ static int cnic_start_hw(struct cnic_dev *dev)
return 0;
err1:
- cp->free_resc(dev);
+ if (ethdev->drv_state & CNIC_DRV_STATE_HANDLES_IRQ)
+ cp->stop_hw(dev);
+ else
+ cp->free_resc(dev);
pci_dev_put(dev->pcidev);
return err;
}
diff --git a/drivers/net/ethernet/broadcom/genet/bcmgenet.c b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
index 44ad1490b472..541456398dfb 100644
--- a/drivers/net/ethernet/broadcom/genet/bcmgenet.c
+++ b/drivers/net/ethernet/broadcom/genet/bcmgenet.c
@@ -104,8 +104,8 @@ static inline void dmadesc_set_addr(struct bcmgenet_priv *priv,
static inline void dmadesc_set(struct bcmgenet_priv *priv,
void __iomem *d, dma_addr_t addr, u32 val)
{
- dmadesc_set_length_status(priv, d, val);
dmadesc_set_addr(priv, d, addr);
+ dmadesc_set_length_status(priv, d, val);
}
static inline dma_addr_t dmadesc_get_addr(struct bcmgenet_priv *priv,
@@ -1225,8 +1225,10 @@ static unsigned int __bcmgenet_tx_reclaim(struct net_device *dev,
dev->stats.tx_packets += pkts_compl;
dev->stats.tx_bytes += bytes_compl;
+ txq = netdev_get_tx_queue(dev, ring->queue);
+ netdev_tx_completed_queue(txq, pkts_compl, bytes_compl);
+
if (ring->free_bds > (MAX_SKB_FRAGS + 1)) {
- txq = netdev_get_tx_queue(dev, ring->queue);
if (netif_tx_queue_stopped(txq))
netif_tx_wake_queue(txq);
}
@@ -1335,6 +1337,7 @@ static int bcmgenet_xmit_frag(struct net_device *dev,
struct bcmgenet_priv *priv = netdev_priv(dev);
struct device *kdev = &priv->pdev->dev;
struct enet_cb *tx_cb_ptr;
+ unsigned int frag_size;
dma_addr_t mapping;
int ret;
@@ -1342,10 +1345,12 @@ static int bcmgenet_xmit_frag(struct net_device *dev,
if (unlikely(!tx_cb_ptr))
BUG();
+
tx_cb_ptr->skb = NULL;
- mapping = skb_frag_dma_map(kdev, frag, 0,
- skb_frag_size(frag), DMA_TO_DEVICE);
+ frag_size = skb_frag_size(frag);
+
+ mapping = skb_frag_dma_map(kdev, frag, 0, frag_size, DMA_TO_DEVICE);
ret = dma_mapping_error(kdev, mapping);
if (ret) {
priv->mib.tx_dma_failed++;
@@ -1355,10 +1360,10 @@ static int bcmgenet_xmit_frag(struct net_device *dev,
}
dma_unmap_addr_set(tx_cb_ptr, dma_addr, mapping);
- dma_unmap_len_set(tx_cb_ptr, dma_len, frag->size);
+ dma_unmap_len_set(tx_cb_ptr, dma_len, frag_size);
dmadesc_set(priv, tx_cb_ptr->bd_addr, mapping,
- (frag->size << DMA_BUFLENGTH_SHIFT) | dma_desc_flags |
+ (frag_size << DMA_BUFLENGTH_SHIFT) | dma_desc_flags |
(priv->hw_params->qtag_mask << DMA_TX_QTAG_SHIFT));
return 0;
@@ -1451,15 +1456,19 @@ static netdev_tx_t bcmgenet_xmit(struct sk_buff *skb, struct net_device *dev)
else
index -= 1;
- nr_frags = skb_shinfo(skb)->nr_frags;
ring = &priv->tx_rings[index];
txq = netdev_get_tx_queue(dev, ring->queue);
+ nr_frags = skb_shinfo(skb)->nr_frags;
+
spin_lock_irqsave(&ring->lock, flags);
- if (ring->free_bds <= nr_frags + 1) {
- netif_tx_stop_queue(txq);
- netdev_err(dev, "%s: tx ring %d full when queue %d awake\n",
- __func__, index, ring->queue);
+ if (ring->free_bds <= (nr_frags + 1)) {
+ if (!netif_tx_queue_stopped(txq)) {
+ netif_tx_stop_queue(txq);
+ netdev_err(dev,
+ "%s: tx ring %d full when queue %d awake\n",
+ __func__, index, ring->queue);
+ }
ret = NETDEV_TX_BUSY;
goto out;
}
@@ -1513,6 +1522,8 @@ static netdev_tx_t bcmgenet_xmit(struct sk_buff *skb, struct net_device *dev)
ring->prod_index += nr_frags + 1;
ring->prod_index &= DMA_P_INDEX_MASK;
+ netdev_tx_sent_queue(txq, GENET_CB(skb)->bytes_sent);
+
if (ring->free_bds <= (MAX_SKB_FRAGS + 1))
netif_tx_stop_queue(txq);
@@ -1732,7 +1743,7 @@ static int bcmgenet_rx_poll(struct napi_struct *napi, int budget)
work_done = bcmgenet_desc_rx(ring, budget);
if (work_done < budget) {
- napi_complete(napi);
+ napi_complete_done(napi, work_done);
ring->int_enable(ring);
}
@@ -2361,6 +2372,7 @@ static int bcmgenet_dma_teardown(struct bcmgenet_priv *priv)
static void bcmgenet_fini_dma(struct bcmgenet_priv *priv)
{
int i;
+ struct netdev_queue *txq;
bcmgenet_fini_rx_napi(priv);
bcmgenet_fini_tx_napi(priv);
@@ -2375,6 +2387,14 @@ static void bcmgenet_fini_dma(struct bcmgenet_priv *priv)
}
}
+ for (i = 0; i < priv->hw_params->tx_queues; i++) {
+ txq = netdev_get_tx_queue(priv->dev, priv->tx_rings[i].queue);
+ netdev_tx_reset_queue(txq);
+ }
+
+ txq = netdev_get_tx_queue(priv->dev, priv->tx_rings[DESC_INDEX].queue);
+ netdev_tx_reset_queue(txq);
+
bcmgenet_free_rx_buffers(priv);
kfree(priv->rx_cbs);
kfree(priv->tx_cbs);
@@ -2490,7 +2510,7 @@ static irqreturn_t bcmgenet_isr1(int irq, void *dev_id)
if (likely(napi_schedule_prep(&rx_ring->napi))) {
rx_ring->int_disable(rx_ring);
- __napi_schedule(&rx_ring->napi);
+ __napi_schedule_irqoff(&rx_ring->napi);
}
}
@@ -2503,7 +2523,7 @@ static irqreturn_t bcmgenet_isr1(int irq, void *dev_id)
if (likely(napi_schedule_prep(&tx_ring->napi))) {
tx_ring->int_disable(tx_ring);
- __napi_schedule(&tx_ring->napi);
+ __napi_schedule_irqoff(&tx_ring->napi);
}
}
@@ -2533,7 +2553,7 @@ static irqreturn_t bcmgenet_isr0(int irq, void *dev_id)
if (likely(napi_schedule_prep(&rx_ring->napi))) {
rx_ring->int_disable(rx_ring);
- __napi_schedule(&rx_ring->napi);
+ __napi_schedule_irqoff(&rx_ring->napi);
}
}
@@ -2542,7 +2562,7 @@ static irqreturn_t bcmgenet_isr0(int irq, void *dev_id)
if (likely(napi_schedule_prep(&tx_ring->napi))) {
tx_ring->int_disable(tx_ring);
- __napi_schedule(&tx_ring->napi);
+ __napi_schedule_irqoff(&tx_ring->napi);
}
}
@@ -3039,7 +3059,7 @@ static void bcmgenet_timeout(struct net_device *dev)
bcmgenet_intrl2_0_writel(priv, int0_enable, INTRL2_CPU_MASK_CLEAR);
bcmgenet_intrl2_1_writel(priv, int1_enable, INTRL2_CPU_MASK_CLEAR);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
dev->stats.tx_errors++;
diff --git a/drivers/net/ethernet/broadcom/sb1250-mac.c b/drivers/net/ethernet/broadcom/sb1250-mac.c
index eacc559679bf..f1b81187a201 100644
--- a/drivers/net/ethernet/broadcom/sb1250-mac.c
+++ b/drivers/net/ethernet/broadcom/sb1250-mac.c
@@ -2462,7 +2462,7 @@ static void sbmac_tx_timeout (struct net_device *dev)
spin_lock_irqsave(&sc->sbm_lock, flags);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
dev->stats.tx_errors++;
spin_unlock_irqrestore(&sc->sbm_lock, flags);
diff --git a/drivers/net/ethernet/broadcom/tg3.c b/drivers/net/ethernet/broadcom/tg3.c
index 3010080cfeee..ff300f7cf529 100644
--- a/drivers/net/ethernet/broadcom/tg3.c
+++ b/drivers/net/ethernet/broadcom/tg3.c
@@ -7383,7 +7383,7 @@ static void tg3_napi_fini(struct tg3 *tp)
static inline void tg3_netif_stop(struct tg3 *tp)
{
- tp->dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(tp->dev); /* prevent tx timeout */
tg3_napi_disable(tp);
netif_carrier_off(tp->dev);
netif_tx_disable(tp->dev);
diff --git a/drivers/net/ethernet/cadence/macb.c b/drivers/net/ethernet/cadence/macb.c
index a63551d0a18a..cb07d95e3dd9 100644
--- a/drivers/net/ethernet/cadence/macb.c
+++ b/drivers/net/ethernet/cadence/macb.c
@@ -61,8 +61,7 @@
#define MACB_WOL_HAS_MAGIC_PACKET (0x1 << 0)
#define MACB_WOL_ENABLED (0x1 << 1)
-/*
- * Graceful stop timeouts in us. We should allow up to
+/* Graceful stop timeouts in us. We should allow up to
* 1 frame time (10 Mbits/s, full-duplex, ignoring collisions)
*/
#define MACB_HALT_TIMEOUT 1230
@@ -130,9 +129,8 @@ static void hw_writel(struct macb *bp, int offset, u32 value)
writel_relaxed(value, bp->regs + offset);
}
-/*
- * Find the CPU endianness by using the loopback bit of NCR register. When the
- * CPU is in big endian we need to program swaped mode for management
+/* Find the CPU endianness by using the loopback bit of NCR register. When the
+ * CPU is in big endian we need to program swapped mode for management
* descriptor access.
*/
static bool hw_is_native_io(void __iomem *addr)
@@ -189,7 +187,7 @@ static void macb_get_hwaddr(struct macb *bp)
pdata = dev_get_platdata(&bp->pdev->dev);
- /* Check all 4 address register for vaild address */
+ /* Check all 4 address register for valid address */
for (i = 0; i < 4; i++) {
bottom = macb_or_gem_readl(bp, SA1B + i * 8);
top = macb_or_gem_readl(bp, SA1T + i * 8);
@@ -297,7 +295,7 @@ static void macb_set_tx_clk(struct clk *clk, int speed, struct net_device *dev)
ferr = DIV_ROUND_UP(ferr, rate / 100000);
if (ferr > 5)
netdev_warn(dev, "unable to generate target frequency: %ld Hz\n",
- rate);
+ rate);
if (clk_set_rate(clk, rate_rounded))
netdev_err(dev, "adjusting tx_clk failed.\n");
@@ -386,7 +384,8 @@ static int macb_mii_probe(struct net_device *dev)
pdata = dev_get_platdata(&bp->pdev->dev);
if (pdata && gpio_is_valid(pdata->phy_irq_pin)) {
- ret = devm_gpio_request(&bp->pdev->dev, pdata->phy_irq_pin, "phy int");
+ ret = devm_gpio_request(&bp->pdev->dev, pdata->phy_irq_pin,
+ "phy int");
if (!ret) {
phy_irq = gpio_to_irq(pdata->phy_irq_pin);
phydev->irq = (phy_irq < 0) ? PHY_POLL : phy_irq;
@@ -430,7 +429,7 @@ static int macb_mii_init(struct macb *bp)
macb_writel(bp, NCR, MACB_BIT(MPE));
bp->mii_bus = mdiobus_alloc();
- if (bp->mii_bus == NULL) {
+ if (!bp->mii_bus) {
err = -ENOMEM;
goto err_out;
}
@@ -439,7 +438,7 @@ static int macb_mii_init(struct macb *bp)
bp->mii_bus->read = &macb_mdio_read;
bp->mii_bus->write = &macb_mdio_write;
snprintf(bp->mii_bus->id, MII_BUS_ID_SIZE, "%s-%x",
- bp->pdev->name, bp->pdev->id);
+ bp->pdev->name, bp->pdev->id);
bp->mii_bus->priv = bp;
bp->mii_bus->parent = &bp->pdev->dev;
pdata = dev_get_platdata(&bp->pdev->dev);
@@ -452,7 +451,8 @@ static int macb_mii_init(struct macb *bp)
err = of_mdiobus_register(bp->mii_bus, np);
/* fallback to standard phy registration if no phy were
- found during dt phy registration */
+ * found during dt phy registration
+ */
if (!err && !phy_find_first(bp->mii_bus)) {
for (i = 0; i < PHY_MAX_ADDR; i++) {
struct phy_device *phydev;
@@ -500,7 +500,7 @@ static void macb_update_stats(struct macb *bp)
WARN_ON((unsigned long)(end - p - 1) != (MACB_TPF - MACB_PFR) / 4);
- for(; p < end; p++, offset += 4)
+ for (; p < end; p++, offset += 4)
*p += bp->macb_reg_readl(bp, offset);
}
@@ -568,8 +568,7 @@ static void macb_tx_error_task(struct work_struct *work)
/* Make sure nobody is trying to queue up new packets */
netif_tx_stop_all_queues(bp->dev);
- /*
- * Stop transmission now
+ /* Stop transmission now
* (in case we have just queued new packets)
* macb/gem must be halted to write TBQP register
*/
@@ -577,8 +576,7 @@ static void macb_tx_error_task(struct work_struct *work)
/* Just complain for now, reinitializing TX path can be good */
netdev_err(bp->dev, "BUG: halt tx timed out\n");
- /*
- * Treat frames in TX queue including the ones that caused the error.
+ /* Treat frames in TX queue including the ones that caused the error.
* Free transmit buffers in upper layer.
*/
for (tail = queue->tx_tail; tail != queue->tx_head; tail++) {
@@ -608,10 +606,9 @@ static void macb_tx_error_task(struct work_struct *work)
bp->stats.tx_bytes += skb->len;
}
} else {
- /*
- * "Buffers exhausted mid-frame" errors may only happen
- * if the driver is buggy, so complain loudly about those.
- * Statistics are updated by hardware.
+ /* "Buffers exhausted mid-frame" errors may only happen
+ * if the driver is buggy, so complain loudly about
+ * those. Statistics are updated by hardware.
*/
if (ctrl & MACB_BIT(TX_BUF_EXHAUSTED))
netdev_err(bp->dev,
@@ -663,7 +660,7 @@ static void macb_tx_interrupt(struct macb_queue *queue)
queue_writel(queue, ISR, MACB_BIT(TCOMP));
netdev_vdbg(bp->dev, "macb_tx_interrupt status = 0x%03lx\n",
- (unsigned long)status);
+ (unsigned long)status);
head = queue->tx_head;
for (tail = queue->tx_tail; tail != head; tail++) {
@@ -723,7 +720,8 @@ static void gem_rx_refill(struct macb *bp)
struct sk_buff *skb;
dma_addr_t paddr;
- while (CIRC_SPACE(bp->rx_prepared_head, bp->rx_tail, RX_RING_SIZE) > 0) {
+ while (CIRC_SPACE(bp->rx_prepared_head, bp->rx_tail,
+ RX_RING_SIZE) > 0) {
entry = macb_rx_ring_wrap(bp->rx_prepared_head);
/* Make hw descriptor updates visible to CPU */
@@ -731,10 +729,10 @@ static void gem_rx_refill(struct macb *bp)
bp->rx_prepared_head++;
- if (bp->rx_skbuff[entry] == NULL) {
+ if (!bp->rx_skbuff[entry]) {
/* allocate sk_buff for this free entry in ring */
skb = netdev_alloc_skb(bp->dev, bp->rx_buffer_size);
- if (unlikely(skb == NULL)) {
+ if (unlikely(!skb)) {
netdev_err(bp->dev,
"Unable to allocate sk_buff\n");
break;
@@ -742,7 +740,8 @@ static void gem_rx_refill(struct macb *bp)
/* now fill corresponding descriptor entry */
paddr = dma_map_single(&bp->pdev->dev, skb->data,
- bp->rx_buffer_size, DMA_FROM_DEVICE);
+ bp->rx_buffer_size,
+ DMA_FROM_DEVICE);
if (dma_mapping_error(&bp->pdev->dev, paddr)) {
dev_kfree_skb(skb);
break;
@@ -767,7 +766,7 @@ static void gem_rx_refill(struct macb *bp)
wmb();
netdev_vdbg(bp->dev, "rx ring: prepared head %d, tail %d\n",
- bp->rx_prepared_head, bp->rx_tail);
+ bp->rx_prepared_head, bp->rx_tail);
}
/* Mark DMA descriptors from begin up to and not including end as unused */
@@ -778,14 +777,14 @@ static void discard_partial_frame(struct macb *bp, unsigned int begin,
for (frag = begin; frag != end; frag++) {
struct macb_dma_desc *desc = macb_rx_desc(bp, frag);
+
desc->addr &= ~MACB_BIT(RX_USED);
}
/* Make descriptor updates visible to hardware */
wmb();
- /*
- * When this happens, the hardware stats registers for
+ /* When this happens, the hardware stats registers for
* whatever caused this is updated, so we don't have to record
* anything.
*/
@@ -881,11 +880,10 @@ static int macb_rx_frame(struct macb *bp, unsigned int first_frag,
len = desc->ctrl & bp->rx_frm_len_mask;
netdev_vdbg(bp->dev, "macb_rx_frame frags %u - %u (len %u)\n",
- macb_rx_ring_wrap(first_frag),
- macb_rx_ring_wrap(last_frag), len);
+ macb_rx_ring_wrap(first_frag),
+ macb_rx_ring_wrap(last_frag), len);
- /*
- * The ethernet header starts NET_IP_ALIGN bytes into the
+ /* The ethernet header starts NET_IP_ALIGN bytes into the
* first buffer. Since the header is 14 bytes, this makes the
* payload word-aligned.
*
@@ -925,7 +923,8 @@ static int macb_rx_frame(struct macb *bp, unsigned int first_frag,
frag_len = len - offset;
}
skb_copy_to_linear_data_offset(skb, offset,
- macb_rx_buffer(bp, frag), frag_len);
+ macb_rx_buffer(bp, frag),
+ frag_len);
offset += bp->rx_buffer_size;
desc = macb_rx_desc(bp, frag);
desc->addr &= ~MACB_BIT(RX_USED);
@@ -943,7 +942,7 @@ static int macb_rx_frame(struct macb *bp, unsigned int first_frag,
bp->stats.rx_packets++;
bp->stats.rx_bytes += skb->len;
netdev_vdbg(bp->dev, "received skb of length %u, csum: %08x\n",
- skb->len, skb->csum);
+ skb->len, skb->csum);
netif_receive_skb(skb);
return 0;
@@ -1050,7 +1049,7 @@ static int macb_poll(struct napi_struct *napi, int budget)
work_done = 0;
netdev_vdbg(bp->dev, "poll: status = %08lx, budget = %d\n",
- (unsigned long)status, budget);
+ (unsigned long)status, budget);
work_done = bp->macbgem_ops.mog_rx(bp, budget);
if (work_done < budget) {
@@ -1100,8 +1099,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
(unsigned long)status);
if (status & MACB_RX_INT_FLAGS) {
- /*
- * There's no point taking any more interrupts
+ /* There's no point taking any more interrupts
* until we have processed the buffers. The
* scheduling call may fail if the poll routine
* is already scheduled, so disable interrupts
@@ -1130,8 +1128,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
if (status & MACB_BIT(TCOMP))
macb_tx_interrupt(queue);
- /*
- * Link change detection isn't possible with RMII, so we'll
+ /* Link change detection isn't possible with RMII, so we'll
* add that if/when we get our hands on a full-blown MII PHY.
*/
@@ -1162,8 +1159,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
}
if (status & MACB_BIT(HRESP)) {
- /*
- * TODO: Reset the hardware, and maybe move the
+ /* TODO: Reset the hardware, and maybe move the
* netdev_err to a lower-priority context as well
* (work queue?)
*/
@@ -1182,8 +1178,7 @@ static irqreturn_t macb_interrupt(int irq, void *dev_id)
}
#ifdef CONFIG_NET_POLL_CONTROLLER
-/*
- * Polling receive - used by netconsole and other diagnostic tools
+/* Polling receive - used by netconsole and other diagnostic tools
* to allow network i/o with interrupts disabled.
*/
static void macb_poll_controller(struct net_device *dev)
@@ -1269,7 +1264,7 @@ static unsigned int macb_tx_map(struct macb *bp,
}
/* Should never happen */
- if (unlikely(tx_skb == NULL)) {
+ if (unlikely(!tx_skb)) {
netdev_err(bp->dev, "BUG! empty skb!\n");
return 0;
}
@@ -1339,16 +1334,16 @@ static int macb_start_xmit(struct sk_buff *skb, struct net_device *dev)
#if defined(DEBUG) && defined(VERBOSE_DEBUG)
netdev_vdbg(bp->dev,
- "start_xmit: queue %hu len %u head %p data %p tail %p end %p\n",
- queue_index, skb->len, skb->head, skb->data,
- skb_tail_pointer(skb), skb_end_pointer(skb));
+ "start_xmit: queue %hu len %u head %p data %p tail %p end %p\n",
+ queue_index, skb->len, skb->head, skb->data,
+ skb_tail_pointer(skb), skb_end_pointer(skb));
print_hex_dump(KERN_DEBUG, "data: ", DUMP_PREFIX_OFFSET, 16, 1,
skb->data, 16, true);
#endif
/* Count how many TX buffer descriptors are needed to send this
* socket buffer: skb fragments of jumbo frames may need to be
- * splitted into many buffer descriptors.
+ * split into many buffer descriptors.
*/
count = DIV_ROUND_UP(skb_headlen(skb), bp->max_tx_length);
nr_frags = skb_shinfo(skb)->nr_frags;
@@ -1399,8 +1394,8 @@ static void macb_init_rx_buffer_size(struct macb *bp, size_t size)
if (bp->rx_buffer_size % RX_BUFFER_MULTIPLE) {
netdev_dbg(bp->dev,
- "RX buffer must be multiple of %d bytes, expanding\n",
- RX_BUFFER_MULTIPLE);
+ "RX buffer must be multiple of %d bytes, expanding\n",
+ RX_BUFFER_MULTIPLE);
bp->rx_buffer_size =
roundup(bp->rx_buffer_size, RX_BUFFER_MULTIPLE);
}
@@ -1423,7 +1418,7 @@ static void gem_free_rx_buffers(struct macb *bp)
for (i = 0; i < RX_RING_SIZE; i++) {
skb = bp->rx_skbuff[i];
- if (skb == NULL)
+ if (!skb)
continue;
desc = &bp->rx_ring[i];
@@ -1479,10 +1474,10 @@ static int gem_alloc_rx_buffers(struct macb *bp)
bp->rx_skbuff = kzalloc(size, GFP_KERNEL);
if (!bp->rx_skbuff)
return -ENOMEM;
- else
- netdev_dbg(bp->dev,
- "Allocated %d RX struct sk_buff entries at %p\n",
- RX_RING_SIZE, bp->rx_skbuff);
+
+ netdev_dbg(bp->dev,
+ "Allocated %d RX struct sk_buff entries at %p\n",
+ RX_RING_SIZE, bp->rx_skbuff);
return 0;
}
@@ -1495,10 +1490,10 @@ static int macb_alloc_rx_buffers(struct macb *bp)
&bp->rx_buffers_dma, GFP_KERNEL);
if (!bp->rx_buffers)
return -ENOMEM;
- else
- netdev_dbg(bp->dev,
- "Allocated RX buffers of %d bytes at %08lx (mapped %p)\n",
- size, (unsigned long)bp->rx_buffers_dma, bp->rx_buffers);
+
+ netdev_dbg(bp->dev,
+ "Allocated RX buffers of %d bytes at %08lx (mapped %p)\n",
+ size, (unsigned long)bp->rx_buffers_dma, bp->rx_buffers);
return 0;
}
@@ -1589,8 +1584,7 @@ static void macb_reset_hw(struct macb *bp)
struct macb_queue *queue;
unsigned int q;
- /*
- * Disable RX and TX (XXX: Should we halt the transmission
+ /* Disable RX and TX (XXX: Should we halt the transmission
* more gracefully?)
*/
macb_writel(bp, NCR, 0);
@@ -1653,8 +1647,7 @@ static u32 macb_mdc_clk_div(struct macb *bp)
return config;
}
-/*
- * Get the DMA bus width field of the network configuration register that we
+/* Get the DMA bus width field of the network configuration register that we
* should program. We find the width from decoding the design configuration
* register to find the maximum supported data bus width.
*/
@@ -1674,8 +1667,7 @@ static u32 macb_dbw(struct macb *bp)
}
}
-/*
- * Configure the receive DMA engine
+/* Configure the receive DMA engine
* - use the correct receive buffer size
* - set best burst length for DMA operations
* (if not supported by FIFO, it will fallback to default)
@@ -1763,8 +1755,7 @@ static void macb_init_hw(struct macb *bp)
macb_writel(bp, NCR, MACB_BIT(RE) | MACB_BIT(TE) | MACB_BIT(MPE));
}
-/*
- * The hash address register is 64 bits long and takes up two
+/* The hash address register is 64 bits long and takes up two
* locations in the memory map. The least significant bits are stored
* in EMAC_HSL and the most significant bits in EMAC_HSH.
*
@@ -1804,9 +1795,7 @@ static inline int hash_bit_value(int bitnr, __u8 *addr)
return 0;
}
-/*
- * Return the hash index value for the specified address.
- */
+/* Return the hash index value for the specified address. */
static int hash_get_index(__u8 *addr)
{
int i, j, bitval;
@@ -1822,9 +1811,7 @@ static int hash_get_index(__u8 *addr)
return hash_index;
}
-/*
- * Add multicast addresses to the internal multicast-hash table.
- */
+/* Add multicast addresses to the internal multicast-hash table. */
static void macb_sethashtable(struct net_device *dev)
{
struct netdev_hw_addr *ha;
@@ -1832,7 +1819,8 @@ static void macb_sethashtable(struct net_device *dev)
unsigned int bitnr;
struct macb *bp = netdev_priv(dev);
- mc_filter[0] = mc_filter[1] = 0;
+ mc_filter[0] = 0;
+ mc_filter[1] = 0;
netdev_for_each_mc_addr(ha, dev) {
bitnr = hash_get_index(ha->addr);
@@ -1843,9 +1831,7 @@ static void macb_sethashtable(struct net_device *dev)
macb_or_gem_writel(bp, HRT, mc_filter[1]);
}
-/*
- * Enable/Disable promiscuous and multicast modes.
- */
+/* Enable/Disable promiscuous and multicast modes. */
static void macb_set_rx_mode(struct net_device *dev)
{
unsigned long cfg;
@@ -2162,9 +2148,8 @@ static void macb_get_regs(struct net_device *dev, struct ethtool_regs *regs,
if (!(bp->caps & MACB_CAPS_USRIO_DISABLED))
regs_buff[12] = macb_or_gem_readl(bp, USRIO);
- if (macb_is_gem(bp)) {
+ if (macb_is_gem(bp))
regs_buff[13] = gem_readl(bp, DMACFG);
- }
}
static void macb_get_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
@@ -2287,11 +2272,11 @@ static const struct net_device_ops macb_netdev_ops = {
.ndo_set_features = macb_set_features,
};
-/*
- * Configure peripheral capabilities according to device tree
+/* Configure peripheral capabilities according to device tree
* and integration options used
*/
-static void macb_configure_caps(struct macb *bp, const struct macb_config *dt_conf)
+static void macb_configure_caps(struct macb *bp,
+ const struct macb_config *dt_conf)
{
u32 dcfg;
@@ -2989,7 +2974,7 @@ static int macb_probe(struct platform_device *pdev)
mac = of_get_mac_address(np);
if (mac)
- memcpy(bp->dev->dev_addr, mac, ETH_ALEN);
+ ether_addr_copy(bp->dev->dev_addr, mac);
else
macb_get_hwaddr(bp);
@@ -2997,6 +2982,7 @@ static int macb_probe(struct platform_device *pdev)
phy_node = of_get_next_available_child(np, NULL);
if (phy_node) {
int gpio = of_get_named_gpio(phy_node, "reset-gpios", 0);
+
if (gpio_is_valid(gpio)) {
bp->reset_gpio = gpio_to_desc(gpio);
gpiod_direction_output(bp->reset_gpio, 1);
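
Editor's note: the macb hunks above only reflow the multicast-hash comments; the body of hash_get_index() sits outside the context lines. As an illustration only (an assumption, since the hunk does not show it), the index is commonly derived for this hardware by XOR-ing every sixth bit of the 48-bit destination address with the hash_bit_value() helper declared above, giving a 0..63 index that selects one bit across the two 32-bit halves (mc_filter[0]/mc_filter[1]) written to the hash registers:

    /* Illustrative sketch of a 6-bit multicast hash index; not copied
     * from the driver, but consistent with the helpers shown above.
     */
    static int example_hash_index(__u8 *addr)
    {
    	int i, j, bitval;
    	int hash_index = 0;

    	for (j = 0; j < 6; j++) {
    		/* index bit j = XOR of address bits j, j+6, j+12, ... j+42 */
    		for (i = 0, bitval = 0; i < 8; i++)
    			bitval ^= hash_bit_value(i * 6 + j, addr);

    		hash_index |= (bitval << j);
    	}

    	return hash_index;
    }
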
diff --git a/drivers/net/ethernet/cavium/liquidio/lio_main.c b/drivers/net/ethernet/cavium/liquidio/lio_main.c
index 34d269cd5579..8de79ae63231 100644
--- a/drivers/net/ethernet/cavium/liquidio/lio_main.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_main.c
@@ -2899,7 +2899,7 @@ static int liquidio_xmit(struct sk_buff *skb, struct net_device *netdev)
if (status == IQ_SEND_STOP)
stop_q(lio->netdev, q_idx);
- netdev->trans_start = jiffies;
+ netif_trans_update(netdev);
stats->tx_done++;
stats->tx_tot_bytes += skb->len;
@@ -2928,7 +2928,7 @@ static void liquidio_tx_timeout(struct net_device *netdev)
netif_info(lio, tx_err, lio->netdev,
"Transmit timeout tx_dropped:%ld, waking up queues now!!\n",
netdev->stats.tx_dropped);
- netdev->trans_start = jiffies;
+ netif_trans_update(netdev);
txqs_wake(netdev);
}
diff --git a/drivers/net/ethernet/cavium/liquidio/octeon_device.c b/drivers/net/ethernet/cavium/liquidio/octeon_device.c
index f67641a2ff9e..8e23e3fad662 100644
--- a/drivers/net/ethernet/cavium/liquidio/octeon_device.c
+++ b/drivers/net/ethernet/cavium/liquidio/octeon_device.c
@@ -602,12 +602,10 @@ int octeon_download_firmware(struct octeon_device *oct, const u8 *data,
snprintf(oct->fw_info.liquidio_firmware_version, 32, "LIQUIDIO: %s",
h->version);
- buffer = kmalloc(size, GFP_KERNEL);
+ buffer = kmemdup(data, size, GFP_KERNEL);
if (!buffer)
return -ENOMEM;
- memcpy(buffer, data, size);
-
p = buffer + sizeof(struct octeon_firmware_file_header);
/* load all images */
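
Editor's note: the kmemdup() conversion above is purely a simplification; assuming standard slab API semantics, it is equivalent to the pair it replaces:

    buffer = kmalloc(size, GFP_KERNEL);   /* what kmemdup() does internally ... */
    if (buffer)
    	memcpy(buffer, data, size);   /* ... plus the copy, folded into one call */
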
diff --git a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
index c177c7cec13b..388cd799d9ed 100644
--- a/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
+++ b/drivers/net/ethernet/cavium/octeon/octeon_mgmt.c
@@ -1320,7 +1320,7 @@ static int octeon_mgmt_xmit(struct sk_buff *skb, struct net_device *netdev)
/* Ring the bell. */
cvmx_write_csr(p->mix + MIX_ORING2, 1);
- netdev->trans_start = jiffies;
+ netif_trans_update(netdev);
rv = NETDEV_TX_OK;
out:
octeon_mgmt_update_tx_stats(netdev);
diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_main.c b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
index bfee298fc02a..a19e73f11d73 100644
--- a/drivers/net/ethernet/cavium/thunder/nicvf_main.c
+++ b/drivers/net/ethernet/cavium/thunder/nicvf_main.c
@@ -1442,7 +1442,7 @@ static void nicvf_reset_task(struct work_struct *work)
nicvf_stop(nic->netdev);
nicvf_open(nic->netdev);
- nic->netdev->trans_start = jiffies;
+ netif_trans_update(nic->netdev);
}
static int nicvf_config_loopback(struct nicvf *nic,
diff --git a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
index fa05e347262f..0ff8e60deccb 100644
--- a/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
+++ b/drivers/net/ethernet/cavium/thunder/nicvf_queues.c
@@ -23,7 +23,7 @@ static void nicvf_get_page(struct nicvf *nic)
if (!nic->rb_pageref || !nic->rb_page)
return;
- atomic_add(nic->rb_pageref, &nic->rb_page->_count);
+ page_ref_add(nic->rb_page, nic->rb_pageref);
nic->rb_pageref = 0;
}
@@ -533,6 +533,7 @@ static void nicvf_rcv_queue_config(struct nicvf *nic, struct queue_set *qs,
nicvf_config_vlan_stripping(nic, nic->netdev->features);
/* Enable Receive queue */
+ memset(&rq_cfg, 0, sizeof(struct rq_cfg));
rq_cfg.ena = 1;
rq_cfg.tcp_ena = 0;
nicvf_queue_reg_write(nic, NIC_QSET_RQ_0_7_CFG, qidx, *(u64 *)&rq_cfg);
@@ -565,6 +566,7 @@ void nicvf_cmp_queue_config(struct nicvf *nic, struct queue_set *qs,
qidx, (u64)(cq->dmem.phys_base));
/* Enable Completion queue */
+ memset(&cq_cfg, 0, sizeof(struct cq_cfg));
cq_cfg.ena = 1;
cq_cfg.reset = 0;
cq_cfg.caching = 0;
@@ -613,6 +615,7 @@ static void nicvf_snd_queue_config(struct nicvf *nic, struct queue_set *qs,
qidx, (u64)(sq->dmem.phys_base));
/* Enable send queue & set queue size */
+ memset(&sq_cfg, 0, sizeof(struct sq_cfg));
sq_cfg.ena = 1;
sq_cfg.reset = 0;
sq_cfg.ldwb = 0;
@@ -649,6 +652,7 @@ static void nicvf_rbdr_config(struct nicvf *nic, struct queue_set *qs,
/* Enable RBDR & set queue size */
/* Buffer size should be in multiples of 128 bytes */
+ memset(&rbdr_cfg, 0, sizeof(struct rbdr_cfg));
rbdr_cfg.ena = 1;
rbdr_cfg.reset = 0;
rbdr_cfg.ldwb = 0;
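
Editor's note: all four memset() additions in this file share one motive. A minimal sketch of the pattern, using the field names from the receive-queue hunk above: the *_cfg structs are bit-field images of a 64-bit register word, so any field the code does not assign would otherwise carry stack garbage straight into the CSR write.

    struct rq_cfg rq_cfg;

    memset(&rq_cfg, 0, sizeof(struct rq_cfg));  /* start from a known-zero register image */
    rq_cfg.ena = 1;                             /* then set only the intended bits */
    rq_cfg.tcp_ena = 0;
    nicvf_queue_reg_write(nic, NIC_QSET_RQ_0_7_CFG, qidx, *(u64 *)&rq_cfg);
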
diff --git a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
index d20539a6d162..3ed21988626b 100644
--- a/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
+++ b/drivers/net/ethernet/cavium/thunder/thunder_bgx.c
@@ -274,12 +274,14 @@ static void bgx_sgmii_change_link_state(struct lmac *lmac)
static void bgx_lmac_handler(struct net_device *netdev)
{
struct lmac *lmac = container_of(netdev, struct lmac, netdev);
- struct phy_device *phydev = lmac->phydev;
+ struct phy_device *phydev;
int link_changed = 0;
if (!lmac)
return;
+ phydev = lmac->phydev;
+
if (!phydev->link && lmac->last_link)
link_changed = -1;
diff --git a/drivers/net/ethernet/chelsio/cxgb/sge.c b/drivers/net/ethernet/chelsio/cxgb/sge.c
index 526ea74e82d9..86f467a2c485 100644
--- a/drivers/net/ethernet/chelsio/cxgb/sge.c
+++ b/drivers/net/ethernet/chelsio/cxgb/sge.c
@@ -1664,8 +1664,7 @@ static int t1_sge_tx(struct sk_buff *skb, struct adapter *adapter,
struct cmdQ *q = &sge->cmdQ[qid];
unsigned int credits, pidx, genbit, count, use_sched_skb = 0;
- if (!spin_trylock(&q->lock))
- return NETDEV_TX_LOCKED;
+ spin_lock(&q->lock);
reclaim_completed_tx(sge, q);
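
Editor's note: the t1_sge_tx() change above drops the trylock-and-bail pattern. Returning NETDEV_TX_LOCKED asked the core to requeue and retry the skb, a mechanism that was being retired from the stack around this time (stated here as background, not taken from this hunk), so the driver now simply waits for the queue lock:

    spin_lock(&q->lock);          /* contend for the lock instead of punting */
    reclaim_completed_tx(sge, q);
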
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
index 326d4009525e..b4fceb92479f 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4.h
@@ -324,7 +324,9 @@ struct adapter_params {
unsigned int sf_fw_start; /* start of FW image in flash */
unsigned int fw_vers;
+ unsigned int bs_vers; /* bootstrap version */
unsigned int tp_vers;
+ unsigned int er_vers; /* expansion ROM version */
u8 api_vers[7];
unsigned short mtus[NMTUS];
@@ -357,6 +359,34 @@ struct sge_idma_monitor_state {
unsigned int idma_warn[2]; /* time to warning in HZ */
};
+/* Firmware Mailbox Command/Reply log. All values are in Host-Endian format.
+ * The access and execute times are signed in order to accommodate negative
+ * error returns.
+ */
+struct mbox_cmd {
+ u64 cmd[MBOX_LEN / 8]; /* a Firmware Mailbox Command/Reply */
+ u64 timestamp; /* OS-dependent timestamp */
+ u32 seqno; /* sequence number */
+ s16 access; /* time (ms) to access mailbox */
+ s16 execute; /* time (ms) to execute */
+};
+
+struct mbox_cmd_log {
+ unsigned int size; /* number of entries in the log */
+ unsigned int cursor; /* next position in the log to write */
+ u32 seqno; /* next sequence number */
+ /* variable length mailbox command log starts here */
+};
+
+/* Given a pointer to a Firmware Mailbox Command Log and a log entry index,
+ * return a pointer to the specified entry.
+ */
+static inline struct mbox_cmd *mbox_cmd_log_entry(struct mbox_cmd_log *log,
+ unsigned int entry_idx)
+{
+ return &((struct mbox_cmd *)&(log)[1])[entry_idx];
+}
+
#include "t4fw_api.h"
#define FW_VERSION(chip) ( \
@@ -394,6 +424,7 @@ struct link_config {
unsigned char fc; /* actual link flow control */
unsigned char autoneg; /* autonegotiating? */
unsigned char link_ok; /* link up? */
+ unsigned char link_down_rc; /* link down reason */
};
#define FW_LEN16(fw_struct) FW_CMD_LEN16_V(sizeof(fw_struct) / 16)
@@ -731,6 +762,7 @@ struct adapter {
u32 t4_bar0;
struct pci_dev *pdev;
struct device *pdev_dev;
+ const char *name;
unsigned int mbox;
unsigned int pf;
unsigned int flags;
@@ -776,6 +808,10 @@ struct adapter {
struct work_struct db_drop_task;
bool tid_release_task_busy;
+ /* support for mailbox command/reply logging */
+#define T4_OS_LOG_MBOX_CMDS 256
+ struct mbox_cmd_log *mbox_log;
+
struct dentry *debugfs_root;
bool use_bd; /* Use SGE Back Door intfc for reading SGE Contexts */
bool trace_rss; /* 1 implies that different RSS flit per filter is
@@ -1306,6 +1342,7 @@ int t4_fl_pkt_align(struct adapter *adap);
unsigned int t4_flash_cfg_addr(struct adapter *adapter);
int t4_check_fw_version(struct adapter *adap);
int t4_get_fw_version(struct adapter *adapter, u32 *vers);
+int t4_get_bs_version(struct adapter *adapter, u32 *vers);
int t4_get_tp_version(struct adapter *adapter, u32 *vers);
int t4_get_exprom_version(struct adapter *adapter, u32 *vers);
int t4_prep_fw(struct adapter *adap, struct fw_info *fw_info,
@@ -1329,6 +1366,8 @@ int t4_init_sge_params(struct adapter *adapter);
int t4_init_tp_params(struct adapter *adap);
int t4_filter_field_shift(const struct adapter *adap, int filter_sel);
int t4_init_rss_mode(struct adapter *adap, int mbox);
+int t4_init_portinfo(struct port_info *pi, int mbox,
+ int port, int pf, int vf, u8 mac[]);
int t4_port_init(struct adapter *adap, int mbox, int pf, int vf);
void t4_fatal_err(struct adapter *adapter);
int t4_config_rss_range(struct adapter *adapter, int mbox, unsigned int viid,
@@ -1464,6 +1503,7 @@ int t4_ctrl_eq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,
int t4_ofld_eq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,
unsigned int vf, unsigned int eqid);
int t4_sge_ctxt_flush(struct adapter *adap, unsigned int mbox);
+void t4_handle_get_port_info(struct port_info *pi, const __be64 *rpl);
int t4_handle_fw_rpl(struct adapter *adap, const __be64 *rpl);
void t4_db_full(struct adapter *adapter);
void t4_db_dropped(struct adapter *adapter);
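
Editor's note: the new mbox_cmd_log header is deliberately declared without an entry array; the entries follow it in the same allocation, which is what the pointer arithmetic in mbox_cmd_log_entry() above relies on. A short usage sketch, mirroring the allocation that init_one() performs later in this patch:

    struct mbox_cmd_log *log;
    struct mbox_cmd *entry;

    /* one header followed immediately by 'size' fixed-size entries */
    log = kzalloc(sizeof(*log) +
    	      T4_OS_LOG_MBOX_CMDS * sizeof(struct mbox_cmd), GFP_KERNEL);
    if (!log)
    	return -ENOMEM;
    log->size = T4_OS_LOG_MBOX_CMDS;

    /* entry i lives right past the header: (struct mbox_cmd *)(log + 1) + i */
    entry = mbox_cmd_log_entry(log, 0);
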
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_dcb.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_dcb.c
index 052c660aca80..6ee2ed30626b 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_dcb.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_dcb.c
@@ -253,7 +253,7 @@ void cxgb4_dcb_handle_fw_update(struct adapter *adap,
{
const union fw_port_dcb *fwdcb = &pcmd->u.dcb;
int port = FW_PORT_CMD_PORTID_G(be32_to_cpu(pcmd->op_to_portid));
- struct net_device *dev = adap->port[port];
+ struct net_device *dev = adap->port[adap->chan_map[port]];
struct port_info *pi = netdev_priv(dev);
struct port_dcb_info *dcb = &pi->dcb;
int dcb_type = pcmd->u.dcb.pgid.type;
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
index 0bb41e9b9b1c..91fb50850fff 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_debugfs.c
@@ -1152,6 +1152,104 @@ static const struct file_operations devlog_fops = {
.release = seq_release_private
};
+/* Show Firmware Mailbox Command/Reply Log
+ *
+ * Note that we don't do any locking when dumping the Firmware Mailbox Log so
+ * it's possible that we can catch things during a log update and therefore
+ * see partially corrupted log entries. But it's probably Good Enough(tm).
+ * If we ever decide that we want to make sure that we're dumping a coherent
+ * log, we'd need to perform locking in the mailbox logging and in
+ * mboxlog_open() where we'd need to grab the entire mailbox log in one go
+ * like we do for the Firmware Device Log.
+ */
+static int mboxlog_show(struct seq_file *seq, void *v)
+{
+ struct adapter *adapter = seq->private;
+ struct mbox_cmd_log *log = adapter->mbox_log;
+ struct mbox_cmd *entry;
+ int entry_idx, i;
+
+ if (v == SEQ_START_TOKEN) {
+ seq_printf(seq,
+ "%10s %15s %5s %5s %s\n",
+ "Seq#", "Tstamp", "Atime", "Etime",
+ "Command/Reply");
+ return 0;
+ }
+
+ entry_idx = log->cursor + ((uintptr_t)v - 2);
+ if (entry_idx >= log->size)
+ entry_idx -= log->size;
+ entry = mbox_cmd_log_entry(log, entry_idx);
+
+ /* skip over unused entries */
+ if (entry->timestamp == 0)
+ return 0;
+
+ seq_printf(seq, "%10u %15llu %5d %5d",
+ entry->seqno, entry->timestamp,
+ entry->access, entry->execute);
+ for (i = 0; i < MBOX_LEN / 8; i++) {
+ u64 flit = entry->cmd[i];
+ u32 hi = (u32)(flit >> 32);
+ u32 lo = (u32)flit;
+
+ seq_printf(seq, " %08x %08x", hi, lo);
+ }
+ seq_puts(seq, "\n");
+ return 0;
+}
+
+static inline void *mboxlog_get_idx(struct seq_file *seq, loff_t pos)
+{
+ struct adapter *adapter = seq->private;
+ struct mbox_cmd_log *log = adapter->mbox_log;
+
+ return ((pos <= log->size) ? (void *)(uintptr_t)(pos + 1) : NULL);
+}
+
+static void *mboxlog_start(struct seq_file *seq, loff_t *pos)
+{
+ return *pos ? mboxlog_get_idx(seq, *pos) : SEQ_START_TOKEN;
+}
+
+static void *mboxlog_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+ ++*pos;
+ return mboxlog_get_idx(seq, *pos);
+}
+
+static void mboxlog_stop(struct seq_file *seq, void *v)
+{
+}
+
+static const struct seq_operations mboxlog_seq_ops = {
+ .start = mboxlog_start,
+ .next = mboxlog_next,
+ .stop = mboxlog_stop,
+ .show = mboxlog_show
+};
+
+static int mboxlog_open(struct inode *inode, struct file *file)
+{
+ int res = seq_open(file, &mboxlog_seq_ops);
+
+ if (!res) {
+ struct seq_file *seq = file->private_data;
+
+ seq->private = inode->i_private;
+ }
+ return res;
+}
+
+static const struct file_operations mboxlog_fops = {
+ .owner = THIS_MODULE,
+ .open = mboxlog_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
static int mbox_show(struct seq_file *seq, void *v)
{
static const char * const owner[] = { "none", "FW", "driver",
@@ -1572,6 +1670,7 @@ static const struct file_operations flash_debugfs_fops = {
.owner = THIS_MODULE,
.open = mem_open,
.read = flash_read,
+ .llseek = default_llseek,
};
static inline void tcamxy2valmask(u64 x, u64 y, u8 *addr, u64 *mask)
@@ -3128,6 +3227,7 @@ int t4_setup_debugfs(struct adapter *adap)
{ "cim_qcfg", &cim_qcfg_fops, S_IRUSR, 0 },
{ "clk", &clk_debugfs_fops, S_IRUSR, 0 },
{ "devlog", &devlog_fops, S_IRUSR, 0 },
+ { "mboxlog", &mboxlog_fops, S_IRUSR, 0 },
{ "mbox0", &mbox_debugfs_fops, S_IRUSR | S_IWUSR, 0 },
{ "mbox1", &mbox_debugfs_fops, S_IRUSR | S_IWUSR, 1 },
{ "mbox2", &mbox_debugfs_fops, S_IRUSR | S_IWUSR, 2 },
diff --git a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
index d1e3f0997d6b..477db477b133 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/cxgb4_main.c
@@ -168,7 +168,8 @@ MODULE_PARM_DESC(force_init, "Forcibly become Master PF and initialize adapter,"
static int dflt_msg_enable = DFLT_MSG_ENABLE;
module_param(dflt_msg_enable, int, 0644);
-MODULE_PARM_DESC(dflt_msg_enable, "Chelsio T4 default message enable bitmap");
+MODULE_PARM_DESC(dflt_msg_enable, "Chelsio T4 default message enable bitmap, "
+ "deprecated parameter");
/*
* The driver uses the best interrupt scheme available on a platform in the
@@ -303,6 +304,22 @@ static void dcb_tx_queue_prio_enable(struct net_device *dev, int enable)
}
#endif /* CONFIG_CHELSIO_T4_DCB */
+int cxgb4_dcb_enabled(const struct net_device *dev)
+{
+#ifdef CONFIG_CHELSIO_T4_DCB
+ struct port_info *pi = netdev_priv(dev);
+
+ if (!pi->dcb.enabled)
+ return 0;
+
+ return ((pi->dcb.state == CXGB4_DCB_STATE_FW_ALLSYNCED) ||
+ (pi->dcb.state == CXGB4_DCB_STATE_HOST));
+#else
+ return 0;
+#endif
+}
+EXPORT_SYMBOL(cxgb4_dcb_enabled);
+
void t4_os_link_changed(struct adapter *adapter, int port_id, int link_stat)
{
struct net_device *dev = adapter->port[port_id];
@@ -313,8 +330,10 @@ void t4_os_link_changed(struct adapter *adapter, int port_id, int link_stat)
netif_carrier_on(dev);
else {
#ifdef CONFIG_CHELSIO_T4_DCB
- cxgb4_dcb_state_init(dev);
- dcb_tx_queue_prio_enable(dev, false);
+ if (cxgb4_dcb_enabled(dev)) {
+ cxgb4_dcb_state_init(dev);
+ dcb_tx_queue_prio_enable(dev, false);
+ }
#endif /* CONFIG_CHELSIO_T4_DCB */
netif_carrier_off(dev);
}
@@ -336,6 +355,17 @@ void t4_os_portmod_changed(const struct adapter *adap, int port_id)
netdev_info(dev, "port module unplugged\n");
else if (pi->mod_type < ARRAY_SIZE(mod_str))
netdev_info(dev, "%s module inserted\n", mod_str[pi->mod_type]);
+ else if (pi->mod_type == FW_PORT_MOD_TYPE_NOTSUPPORTED)
+ netdev_info(dev, "%s: unsupported port module inserted\n",
+ dev->name);
+ else if (pi->mod_type == FW_PORT_MOD_TYPE_UNKNOWN)
+ netdev_info(dev, "%s: unknown port module inserted\n",
+ dev->name);
+ else if (pi->mod_type == FW_PORT_MOD_TYPE_ERROR)
+ netdev_info(dev, "%s: transceiver module error\n", dev->name);
+ else
+ netdev_info(dev, "%s: unknown module type %d inserted\n",
+ dev->name, pi->mod_type);
}
int dbfifo_int_thresh = 10; /* 10 == 640 entry threshold */
@@ -482,28 +512,12 @@ static int link_start(struct net_device *dev)
return ret;
}
-int cxgb4_dcb_enabled(const struct net_device *dev)
-{
-#ifdef CONFIG_CHELSIO_T4_DCB
- struct port_info *pi = netdev_priv(dev);
-
- if (!pi->dcb.enabled)
- return 0;
-
- return ((pi->dcb.state == CXGB4_DCB_STATE_FW_ALLSYNCED) ||
- (pi->dcb.state == CXGB4_DCB_STATE_HOST));
-#else
- return 0;
-#endif
-}
-EXPORT_SYMBOL(cxgb4_dcb_enabled);
-
#ifdef CONFIG_CHELSIO_T4_DCB
/* Handle a Data Center Bridging update message from the firmware. */
static void dcb_rpl(struct adapter *adap, const struct fw_port_cmd *pcmd)
{
int port = FW_PORT_CMD_PORTID_G(ntohl(pcmd->op_to_portid));
- struct net_device *dev = adap->port[port];
+ struct net_device *dev = adap->port[adap->chan_map[port]];
int old_dcb_enabled = cxgb4_dcb_enabled(dev);
int new_dcb_enabled;
@@ -633,7 +647,8 @@ static int fwevtq_handler(struct sge_rspq *q, const __be64 *rsp,
action == FW_PORT_ACTION_GET_PORT_INFO) {
int port = FW_PORT_CMD_PORTID_G(
be32_to_cpu(pcmd->op_to_portid));
- struct net_device *dev = q->adap->port[port];
+ struct net_device *dev =
+ q->adap->port[q->adap->chan_map[port]];
int state_input = ((pcmd->u.info.dcbxdis_pkd &
FW_PORT_CMD_DCBXDIS_F)
? CXGB4_DCB_INPUT_FW_DISABLED
@@ -3737,7 +3752,10 @@ static int adap_init0(struct adapter *adap)
* is excessively mismatched relative to the driver.)
*/
t4_get_fw_version(adap, &adap->params.fw_vers);
+ t4_get_bs_version(adap, &adap->params.bs_vers);
t4_get_tp_version(adap, &adap->params.tp_vers);
+ t4_get_exprom_version(adap, &adap->params.er_vers);
+
ret = t4_check_fw_version(adap);
/* If firmware is too old (not supported by driver) force an update. */
if (ret)
@@ -4651,6 +4669,68 @@ static void cxgb4_check_pcie_caps(struct adapter *adap)
"suggested for optimal performance.\n");
}
+/* Dump basic information about the adapter */
+static void print_adapter_info(struct adapter *adapter)
+{
+ /* Device information */
+ dev_info(adapter->pdev_dev, "Chelsio %s rev %d\n",
+ adapter->params.vpd.id,
+ CHELSIO_CHIP_RELEASE(adapter->params.chip));
+ dev_info(adapter->pdev_dev, "S/N: %s, P/N: %s\n",
+ adapter->params.vpd.sn, adapter->params.vpd.pn);
+
+ /* Firmware Version */
+ if (!adapter->params.fw_vers)
+ dev_warn(adapter->pdev_dev, "No firmware loaded\n");
+ else
+ dev_info(adapter->pdev_dev, "Firmware version: %u.%u.%u.%u\n",
+ FW_HDR_FW_VER_MAJOR_G(adapter->params.fw_vers),
+ FW_HDR_FW_VER_MINOR_G(adapter->params.fw_vers),
+ FW_HDR_FW_VER_MICRO_G(adapter->params.fw_vers),
+ FW_HDR_FW_VER_BUILD_G(adapter->params.fw_vers));
+
+ /* Bootstrap Firmware Version. (Some adapters don't have Bootstrap
+ * Firmware, so dev_info() is more appropriate here.)
+ */
+ if (!adapter->params.bs_vers)
+ dev_info(adapter->pdev_dev, "No bootstrap loaded\n");
+ else
+ dev_info(adapter->pdev_dev, "Bootstrap version: %u.%u.%u.%u\n",
+ FW_HDR_FW_VER_MAJOR_G(adapter->params.bs_vers),
+ FW_HDR_FW_VER_MINOR_G(adapter->params.bs_vers),
+ FW_HDR_FW_VER_MICRO_G(adapter->params.bs_vers),
+ FW_HDR_FW_VER_BUILD_G(adapter->params.bs_vers));
+
+ /* TP Microcode Version */
+ if (!adapter->params.tp_vers)
+ dev_warn(adapter->pdev_dev, "No TP Microcode loaded\n");
+ else
+ dev_info(adapter->pdev_dev,
+ "TP Microcode version: %u.%u.%u.%u\n",
+ FW_HDR_FW_VER_MAJOR_G(adapter->params.tp_vers),
+ FW_HDR_FW_VER_MINOR_G(adapter->params.tp_vers),
+ FW_HDR_FW_VER_MICRO_G(adapter->params.tp_vers),
+ FW_HDR_FW_VER_BUILD_G(adapter->params.tp_vers));
+
+ /* Expansion ROM version */
+ if (!adapter->params.er_vers)
+ dev_info(adapter->pdev_dev, "No Expansion ROM loaded\n");
+ else
+ dev_info(adapter->pdev_dev,
+ "Expansion ROM version: %u.%u.%u.%u\n",
+ FW_HDR_FW_VER_MAJOR_G(adapter->params.er_vers),
+ FW_HDR_FW_VER_MINOR_G(adapter->params.er_vers),
+ FW_HDR_FW_VER_MICRO_G(adapter->params.er_vers),
+ FW_HDR_FW_VER_BUILD_G(adapter->params.er_vers));
+
+ /* Software/Hardware configuration */
+ dev_info(adapter->pdev_dev, "Configuration: %sNIC %s, %s capable\n",
+ is_offload(adapter) ? "R" : "",
+ ((adapter->flags & USING_MSIX) ? "MSI-X" :
+ (adapter->flags & USING_MSI) ? "MSI" : ""),
+ is_offload(adapter) ? "Offload" : "non-Offload");
+}
+
static void print_port_info(const struct net_device *dev)
{
char buf[80];
@@ -4678,14 +4758,8 @@ static void print_port_info(const struct net_device *dev)
--bufp;
sprintf(bufp, "BASE-%s", t4_get_port_type_description(pi->port_type));
- netdev_info(dev, "Chelsio %s rev %d %s %sNIC %s\n",
- adap->params.vpd.id,
- CHELSIO_CHIP_RELEASE(adap->params.chip), buf,
- is_offload(adap) ? "R" : "",
- (adap->flags & USING_MSIX) ? " MSI-X" :
- (adap->flags & USING_MSI) ? " MSI" : "");
- netdev_info(dev, "S/N: %s, P/N: %s\n",
- adap->params.vpd.sn, adap->params.vpd.pn);
+ netdev_info(dev, "%s: Chelsio %s (%s) %s\n",
+ dev->name, adap->params.vpd.id, adap->name, buf);
}
static void enable_pcie_relaxed_ordering(struct pci_dev *dev)
@@ -4837,12 +4911,23 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
goto out_free_adapter;
}
+ adapter->mbox_log = kzalloc(sizeof(*adapter->mbox_log) +
+ (sizeof(struct mbox_cmd) *
+ T4_OS_LOG_MBOX_CMDS),
+ GFP_KERNEL);
+ if (!adapter->mbox_log) {
+ err = -ENOMEM;
+ goto out_free_adapter;
+ }
+ adapter->mbox_log->size = T4_OS_LOG_MBOX_CMDS;
+
/* PCI device has been enabled */
adapter->flags |= DEV_ENABLED;
adapter->regs = regs;
adapter->pdev = pdev;
adapter->pdev_dev = &pdev->dev;
+ adapter->name = pci_name(pdev);
adapter->mbox = func;
adapter->pf = func;
adapter->msg_enable = dflt_msg_enable;
@@ -5073,6 +5158,8 @@ static int init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
if (is_offload(adapter))
attach_ulds(adapter);
+ print_adapter_info(adapter);
+
sriov:
#ifdef CONFIG_PCI_IOV
if (func < ARRAY_SIZE(num_vf) && num_vf[func] > 0)
@@ -5092,6 +5179,7 @@ sriov:
if (adapter->workq)
destroy_workqueue(adapter->workq);
+ kfree(adapter->mbox_log);
kfree(adapter);
out_unmap_bar0:
iounmap(regs);
@@ -5158,6 +5246,7 @@ static void remove_one(struct pci_dev *pdev)
adapter->flags &= ~DEV_ENABLED;
}
pci_release_regions(pdev);
+ kfree(adapter->mbox_log);
synchronize_rcu();
kfree(adapter);
} else
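
Editor's note: on the two adap->chan_map[] changes above (and the matching one in cxgb4_dcb.c): the firmware identifies a port by its channel id, while adap->port[] is indexed by logical port number. chan_map[] performs that translation, exactly as the pre-existing code in t4_handle_fw_rpl() already did, so the lookup becomes:

    int port = FW_PORT_CMD_PORTID_G(be32_to_cpu(pcmd->op_to_portid));
    struct net_device *dev = adap->port[adap->chan_map[port]];  /* channel -> port index */
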
diff --git a/drivers/net/ethernet/chelsio/cxgb4/sge.c b/drivers/net/ethernet/chelsio/cxgb4/sge.c
index 6278e5a74b74..bad253beb8c8 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/sge.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/sge.c
@@ -3006,7 +3006,9 @@ void t4_free_sge_resources(struct adapter *adap)
if (etq->q.desc) {
t4_eth_eq_free(adap, adap->mbox, adap->pf, 0,
etq->q.cntxt_id);
+ __netif_tx_lock_bh(etq->txq);
free_tx_desc(adap, &etq->q, etq->q.in_use, true);
+ __netif_tx_unlock_bh(etq->txq);
kfree(etq->q.sdesc);
free_txq(adap, &etq->q);
}
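
Editor's note: a brief word on the locking added above. t4_free_sge_resources() can run while the transmit path still references in-flight descriptors, so freeing them is now serialized against ndo_start_xmit through the per-queue xmit lock (the BH-disabling variant, matching how the lock is taken on the transmit side). The resulting pattern:

    __netif_tx_lock_bh(etq->txq);
    free_tx_desc(adap, &etq->q, etq->q.in_use, true);
    __netif_tx_unlock_bh(etq->txq);
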
diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
index 71586a3e0f61..a63addb4e72c 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.c
@@ -224,18 +224,34 @@ static void fw_asrt(struct adapter *adap, u32 mbox_addr)
be32_to_cpu(asrt.u.assert.x), be32_to_cpu(asrt.u.assert.y));
}
-static void dump_mbox(struct adapter *adap, int mbox, u32 data_reg)
+/**
+ * t4_record_mbox - record a Firmware Mailbox Command/Reply in the log
+ * @adapter: the adapter
+ * @cmd: the Firmware Mailbox Command or Reply
+ * @size: command length in bytes
+ * @access: the time (ms) needed to access the Firmware Mailbox
+ * @execute: the time (ms) the command spent being executed
+ */
+static void t4_record_mbox(struct adapter *adapter,
+ const __be64 *cmd, unsigned int size,
+ int access, int execute)
{
- dev_err(adap->pdev_dev,
- "mbox %d: %llx %llx %llx %llx %llx %llx %llx %llx\n", mbox,
- (unsigned long long)t4_read_reg64(adap, data_reg),
- (unsigned long long)t4_read_reg64(adap, data_reg + 8),
- (unsigned long long)t4_read_reg64(adap, data_reg + 16),
- (unsigned long long)t4_read_reg64(adap, data_reg + 24),
- (unsigned long long)t4_read_reg64(adap, data_reg + 32),
- (unsigned long long)t4_read_reg64(adap, data_reg + 40),
- (unsigned long long)t4_read_reg64(adap, data_reg + 48),
- (unsigned long long)t4_read_reg64(adap, data_reg + 56));
+ struct mbox_cmd_log *log = adapter->mbox_log;
+ struct mbox_cmd *entry;
+ int i;
+
+ entry = mbox_cmd_log_entry(log, log->cursor++);
+ if (log->cursor == log->size)
+ log->cursor = 0;
+
+ for (i = 0; i < size / 8; i++)
+ entry->cmd[i] = be64_to_cpu(cmd[i]);
+ while (i < MBOX_LEN / 8)
+ entry->cmd[i++] = 0;
+ entry->timestamp = jiffies;
+ entry->seqno = log->seqno++;
+ entry->access = access;
+ entry->execute = execute;
}
/**
@@ -268,12 +284,16 @@ int t4_wr_mbox_meat_timeout(struct adapter *adap, int mbox, const void *cmd,
1, 1, 3, 5, 10, 10, 20, 50, 100, 200
};
+ u16 access = 0;
+ u16 execute = 0;
u32 v;
u64 res;
- int i, ms, delay_idx;
+ int i, ms, delay_idx, ret;
const __be64 *p = cmd;
u32 data_reg = PF_REG(mbox, CIM_PF_MAILBOX_DATA_A);
u32 ctl_reg = PF_REG(mbox, CIM_PF_MAILBOX_CTRL_A);
+ __be64 cmd_rpl[MBOX_LEN / 8];
+ u32 pcie_fw;
if ((size & 15) || size > MBOX_LEN)
return -EINVAL;
@@ -285,13 +305,24 @@ int t4_wr_mbox_meat_timeout(struct adapter *adap, int mbox, const void *cmd,
if (adap->pdev->error_state != pci_channel_io_normal)
return -EIO;
+ /* If we have a negative timeout, that implies that we can't sleep. */
+ if (timeout < 0) {
+ sleep_ok = false;
+ timeout = -timeout;
+ }
+
v = MBOWNER_G(t4_read_reg(adap, ctl_reg));
for (i = 0; v == MBOX_OWNER_NONE && i < 3; i++)
v = MBOWNER_G(t4_read_reg(adap, ctl_reg));
- if (v != MBOX_OWNER_DRV)
- return v ? -EBUSY : -ETIMEDOUT;
+ if (v != MBOX_OWNER_DRV) {
+ ret = (v == MBOX_OWNER_FW) ? -EBUSY : -ETIMEDOUT;
+ t4_record_mbox(adap, cmd, MBOX_LEN, access, ret);
+ return ret;
+ }
+ /* Copy in the new mailbox command and send it on its way ... */
+ t4_record_mbox(adap, cmd, MBOX_LEN, access, 0);
for (i = 0; i < size; i += 8)
t4_write_reg64(adap, data_reg + i, be64_to_cpu(*p++));
@@ -301,7 +332,10 @@ int t4_wr_mbox_meat_timeout(struct adapter *adap, int mbox, const void *cmd,
delay_idx = 0;
ms = delay[0];
- for (i = 0; i < timeout; i += ms) {
+ for (i = 0;
+ !((pcie_fw = t4_read_reg(adap, PCIE_FW_A)) & PCIE_FW_ERR_F) &&
+ i < timeout;
+ i += ms) {
if (sleep_ok) {
ms = delay[delay_idx]; /* last element may repeat */
if (delay_idx < ARRAY_SIZE(delay) - 1)
@@ -317,26 +351,31 @@ int t4_wr_mbox_meat_timeout(struct adapter *adap, int mbox, const void *cmd,
continue;
}
- res = t4_read_reg64(adap, data_reg);
+ get_mbox_rpl(adap, cmd_rpl, MBOX_LEN / 8, data_reg);
+ res = be64_to_cpu(cmd_rpl[0]);
+
if (FW_CMD_OP_G(res >> 32) == FW_DEBUG_CMD) {
fw_asrt(adap, data_reg);
res = FW_CMD_RETVAL_V(EIO);
} else if (rpl) {
- get_mbox_rpl(adap, rpl, size / 8, data_reg);
+ memcpy(rpl, cmd_rpl, size);
}
- if (FW_CMD_RETVAL_G((int)res))
- dump_mbox(adap, mbox, data_reg);
t4_write_reg(adap, ctl_reg, 0);
+
+ execute = i + ms;
+ t4_record_mbox(adap, cmd_rpl,
+ MBOX_LEN, access, execute);
return -FW_CMD_RETVAL_G((int)res);
}
}
- dump_mbox(adap, mbox, data_reg);
+ ret = (pcie_fw & PCIE_FW_ERR_F) ? -ENXIO : -ETIMEDOUT;
+ t4_record_mbox(adap, cmd, MBOX_LEN, access, ret);
dev_err(adap->pdev_dev, "command %#x in mailbox %d timed out\n",
*(const u8 *)cmd, mbox);
t4_report_fw_error(adap);
- return -ETIMEDOUT;
+ return ret;
}
int t4_wr_mbox_meat(struct adapter *adap, int mbox, const void *cmd, int size,
@@ -2937,6 +2976,20 @@ int t4_get_fw_version(struct adapter *adapter, u32 *vers)
}
/**
+ * t4_get_bs_version - read the firmware bootstrap version
+ * @adapter: the adapter
+ * @vers: where to place the version
+ *
+ * Reads the FW Bootstrap version from flash.
+ */
+int t4_get_bs_version(struct adapter *adapter, u32 *vers)
+{
+ return t4_read_flash(adapter, FLASH_FWBOOTSTRAP_START +
+ offsetof(struct fw_hdr, fw_ver), 1,
+ vers, 0);
+}
+
+/**
* t4_get_tp_version - read the TP microcode version
* @adapter: the adapter
* @vers: where to place the version
@@ -7089,52 +7142,122 @@ int t4_ofld_eq_free(struct adapter *adap, unsigned int mbox, unsigned int pf,
}
/**
- * t4_handle_fw_rpl - process a FW reply message
+ * t4_link_down_rc_str - return a string for a Link Down Reason Code
* @adap: the adapter
+ * @link_down_rc: Link Down Reason Code
+ *
+ * Returns a string representation of the Link Down Reason Code.
+ */
+static const char *t4_link_down_rc_str(unsigned char link_down_rc)
+{
+ static const char * const reason[] = {
+ "Link Down",
+ "Remote Fault",
+ "Auto-negotiation Failure",
+ "Reserved",
+ "Insufficient Airflow",
+ "Unable To Determine Reason",
+ "No RX Signal Detected",
+ "Reserved",
+ };
+
+ if (link_down_rc >= ARRAY_SIZE(reason))
+ return "Bad Reason Code";
+
+ return reason[link_down_rc];
+}
+
+/**
+ * t4_handle_get_port_info - process a FW reply message
+ * @pi: the port info
* @rpl: start of the FW message
*
- * Processes a FW message, such as link state change messages.
+ * Processes a GET_PORT_INFO FW reply message.
+ */
+void t4_handle_get_port_info(struct port_info *pi, const __be64 *rpl)
+{
+ const struct fw_port_cmd *p = (const void *)rpl;
+ struct adapter *adap = pi->adapter;
+
+ /* link/module state change message */
+ int speed = 0, fc = 0;
+ struct link_config *lc;
+ u32 stat = be32_to_cpu(p->u.info.lstatus_to_modtype);
+ int link_ok = (stat & FW_PORT_CMD_LSTATUS_F) != 0;
+ u32 mod = FW_PORT_CMD_MODTYPE_G(stat);
+
+ if (stat & FW_PORT_CMD_RXPAUSE_F)
+ fc |= PAUSE_RX;
+ if (stat & FW_PORT_CMD_TXPAUSE_F)
+ fc |= PAUSE_TX;
+ if (stat & FW_PORT_CMD_LSPEED_V(FW_PORT_CAP_SPEED_100M))
+ speed = 100;
+ else if (stat & FW_PORT_CMD_LSPEED_V(FW_PORT_CAP_SPEED_1G))
+ speed = 1000;
+ else if (stat & FW_PORT_CMD_LSPEED_V(FW_PORT_CAP_SPEED_10G))
+ speed = 10000;
+ else if (stat & FW_PORT_CMD_LSPEED_V(FW_PORT_CAP_SPEED_40G))
+ speed = 40000;
+
+ lc = &pi->link_cfg;
+
+ if (mod != pi->mod_type) {
+ pi->mod_type = mod;
+ t4_os_portmod_changed(adap, pi->port_id);
+ }
+ if (link_ok != lc->link_ok || speed != lc->speed ||
+ fc != lc->fc) { /* something changed */
+ if (!link_ok && lc->link_ok) {
+ unsigned char rc = FW_PORT_CMD_LINKDNRC_G(stat);
+
+ lc->link_down_rc = rc;
+ dev_warn(adap->pdev_dev,
+ "Port %d link down, reason: %s\n",
+ pi->port_id, t4_link_down_rc_str(rc));
+ }
+ lc->link_ok = link_ok;
+ lc->speed = speed;
+ lc->fc = fc;
+ lc->supported = be16_to_cpu(p->u.info.pcap);
+ t4_os_link_changed(adap, pi->port_id, link_ok);
+ }
+}
+
+/**
+ * t4_handle_fw_rpl - process a FW reply message
+ * @adap: the adapter
+ * @rpl: start of the FW message
+ *
+ * Processes a FW message, such as link state change messages.
*/
int t4_handle_fw_rpl(struct adapter *adap, const __be64 *rpl)
{
u8 opcode = *(const u8 *)rpl;
- if (opcode == FW_PORT_CMD) { /* link/module state change message */
- int speed = 0, fc = 0;
- const struct fw_port_cmd *p = (void *)rpl;
+ /* This might be a port command ... this simplifies the following
+ * conditionals ... We can get away with pre-dereferencing
+ * action_to_len16 because it's in the first 16 bytes and all messages
+ * will be at least that long.
+ */
+ const struct fw_port_cmd *p = (const void *)rpl;
+ unsigned int action =
+ FW_PORT_CMD_ACTION_G(be32_to_cpu(p->action_to_len16));
+
+ if (opcode == FW_PORT_CMD && action == FW_PORT_ACTION_GET_PORT_INFO) {
+ int i;
int chan = FW_PORT_CMD_PORTID_G(be32_to_cpu(p->op_to_portid));
- int port = adap->chan_map[chan];
- struct port_info *pi = adap2pinfo(adap, port);
- struct link_config *lc = &pi->link_cfg;
- u32 stat = be32_to_cpu(p->u.info.lstatus_to_modtype);
- int link_ok = (stat & FW_PORT_CMD_LSTATUS_F) != 0;
- u32 mod = FW_PORT_CMD_MODTYPE_G(stat);
-
- if (stat & FW_PORT_CMD_RXPAUSE_F)
- fc |= PAUSE_RX;
- if (stat & FW_PORT_CMD_TXPAUSE_F)
- fc |= PAUSE_TX;
- if (stat & FW_PORT_CMD_LSPEED_V(FW_PORT_CAP_SPEED_100M))
- speed = 100;
- else if (stat & FW_PORT_CMD_LSPEED_V(FW_PORT_CAP_SPEED_1G))
- speed = 1000;
- else if (stat & FW_PORT_CMD_LSPEED_V(FW_PORT_CAP_SPEED_10G))
- speed = 10000;
- else if (stat & FW_PORT_CMD_LSPEED_V(FW_PORT_CAP_SPEED_40G))
- speed = 40000;
-
- if (link_ok != lc->link_ok || speed != lc->speed ||
- fc != lc->fc) { /* something changed */
- lc->link_ok = link_ok;
- lc->speed = speed;
- lc->fc = fc;
- lc->supported = be16_to_cpu(p->u.info.pcap);
- t4_os_link_changed(adap, port, link_ok);
- }
- if (mod != pi->mod_type) {
- pi->mod_type = mod;
- t4_os_portmod_changed(adap, port);
+ struct port_info *pi = NULL;
+
+ for_each_port(adap, i) {
+ pi = adap2pinfo(adap, i);
+ if (pi->tx_chan == chan)
+ break;
}
+
+ t4_handle_get_port_info(pi, rpl);
+ } else {
+ dev_warn(adap->pdev_dev, "Unknown firmware reply %d\n", opcode);
+ return -EINVAL;
}
return 0;
}
@@ -7654,61 +7777,74 @@ int t4_init_rss_mode(struct adapter *adap, int mbox)
return 0;
}
-int t4_port_init(struct adapter *adap, int mbox, int pf, int vf)
+/**
+ * t4_init_portinfo - allocate a virtual interface and initialize port_info
+ * @pi: the port_info
+ * @mbox: mailbox to use for the FW command
+ * @port: physical port associated with the VI
+ * @pf: the PF owning the VI
+ * @vf: the VF owning the VI
+ * @mac: the MAC address of the VI
+ *
+ * Allocates a virtual interface for the given physical port. If @mac is
+ * not %NULL it contains the MAC address of the VI as assigned by FW.
+ * @mac should be large enough to hold an Ethernet address.
+ * Returns < 0 on error.
+ */
+int t4_init_portinfo(struct port_info *pi, int mbox,
+ int port, int pf, int vf, u8 mac[])
{
- u8 addr[6];
- int ret, i, j = 0;
+ int ret;
struct fw_port_cmd c;
- struct fw_rss_vi_config_cmd rvc;
+ unsigned int rss_size;
memset(&c, 0, sizeof(c));
- memset(&rvc, 0, sizeof(rvc));
+ c.op_to_portid = cpu_to_be32(FW_CMD_OP_V(FW_PORT_CMD) |
+ FW_CMD_REQUEST_F | FW_CMD_READ_F |
+ FW_PORT_CMD_PORTID_V(port));
+ c.action_to_len16 = cpu_to_be32(
+ FW_PORT_CMD_ACTION_V(FW_PORT_ACTION_GET_PORT_INFO) |
+ FW_LEN16(c));
+ ret = t4_wr_mbox(pi->adapter, mbox, &c, sizeof(c), &c);
+ if (ret)
+ return ret;
+
+ ret = t4_alloc_vi(pi->adapter, mbox, port, pf, vf, 1, mac, &rss_size);
+ if (ret < 0)
+ return ret;
+
+ pi->viid = ret;
+ pi->tx_chan = port;
+ pi->lport = port;
+ pi->rss_size = rss_size;
+
+ ret = be32_to_cpu(c.u.info.lstatus_to_modtype);
+ pi->mdio_addr = (ret & FW_PORT_CMD_MDIOCAP_F) ?
+ FW_PORT_CMD_MDIOADDR_G(ret) : -1;
+ pi->port_type = FW_PORT_CMD_PTYPE_G(ret);
+ pi->mod_type = FW_PORT_MOD_TYPE_NA;
+
+ init_link_config(&pi->link_cfg, be16_to_cpu(c.u.info.pcap));
+ return 0;
+}
+
+int t4_port_init(struct adapter *adap, int mbox, int pf, int vf)
+{
+ u8 addr[6];
+ int ret, i, j = 0;
for_each_port(adap, i) {
- unsigned int rss_size;
- struct port_info *p = adap2pinfo(adap, i);
+ struct port_info *pi = adap2pinfo(adap, i);
while ((adap->params.portvec & (1 << j)) == 0)
j++;
- c.op_to_portid = cpu_to_be32(FW_CMD_OP_V(FW_PORT_CMD) |
- FW_CMD_REQUEST_F | FW_CMD_READ_F |
- FW_PORT_CMD_PORTID_V(j));
- c.action_to_len16 = cpu_to_be32(
- FW_PORT_CMD_ACTION_V(FW_PORT_ACTION_GET_PORT_INFO) |
- FW_LEN16(c));
- ret = t4_wr_mbox(adap, mbox, &c, sizeof(c), &c);
+ ret = t4_init_portinfo(pi, mbox, j, pf, vf, addr);
if (ret)
return ret;
- ret = t4_alloc_vi(adap, mbox, j, pf, vf, 1, addr, &rss_size);
- if (ret < 0)
- return ret;
-
- p->viid = ret;
- p->tx_chan = j;
- p->lport = j;
- p->rss_size = rss_size;
memcpy(adap->port[i]->dev_addr, addr, ETH_ALEN);
adap->port[i]->dev_port = j;
-
- ret = be32_to_cpu(c.u.info.lstatus_to_modtype);
- p->mdio_addr = (ret & FW_PORT_CMD_MDIOCAP_F) ?
- FW_PORT_CMD_MDIOADDR_G(ret) : -1;
- p->port_type = FW_PORT_CMD_PTYPE_G(ret);
- p->mod_type = FW_PORT_MOD_TYPE_NA;
-
- rvc.op_to_viid =
- cpu_to_be32(FW_CMD_OP_V(FW_RSS_VI_CONFIG_CMD) |
- FW_CMD_REQUEST_F | FW_CMD_READ_F |
- FW_RSS_VI_CONFIG_CMD_VIID(p->viid));
- rvc.retval_len16 = cpu_to_be32(FW_LEN16(rvc));
- ret = t4_wr_mbox(adap, mbox, &rvc, sizeof(rvc), &rvc);
- if (ret)
- return ret;
- p->rss_mode = be32_to_cpu(rvc.u.basicvirtual.defaultq_to_udpen);
-
- init_link_config(&p->link_cfg, be16_to_cpu(c.u.info.pcap));
j++;
}
return 0;
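
Editor's note: one behavioural change in t4_wr_mbox_meat_timeout() above is easy to miss: a negative timeout now means "same millisecond budget, but never sleep". A caller-side sketch, where adap, mbox, c and rpl are assumed local variables (a firmware command struct and reply buffer) and the argument order follows the function above:

    /* process context: may sleep between polls, 10000 ms budget */
    ret = t4_wr_mbox_meat_timeout(adap, mbox, &c, sizeof(c), &rpl, true, 10000);

    /* atomic context: same budget, but the negative sign forces
     * sleep_ok = false inside the function, so the poll busy-waits
     */
    ret = t4_wr_mbox_meat_timeout(adap, mbox, &c, sizeof(c), &rpl, true, -10000);
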
diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.h b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.h
index 2fc60e83a7a1..7f59ca458431 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4_hw.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4_hw.h
@@ -220,6 +220,13 @@ enum {
FLASH_FW_START = FLASH_START(FLASH_FW_START_SEC),
FLASH_FW_MAX_SIZE = FLASH_MAX_SIZE(FLASH_FW_NSECS),
+ /* Location of bootstrap firmware image in FLASH.
+ */
+ FLASH_FWBOOTSTRAP_START_SEC = 27,
+ FLASH_FWBOOTSTRAP_NSECS = 1,
+ FLASH_FWBOOTSTRAP_START = FLASH_START(FLASH_FWBOOTSTRAP_START_SEC),
+ FLASH_FWBOOTSTRAP_MAX_SIZE = FLASH_MAX_SIZE(FLASH_FWBOOTSTRAP_NSECS),
+
/*
* iSCSI persistent/crash information.
*/
diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
index 80417fc564d4..4705e2dea423 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4_msg.h
@@ -1392,6 +1392,10 @@ struct ulp_mem_io {
#define T5_ULP_MEMIO_ORDER_V(x) ((x) << T5_ULP_MEMIO_ORDER_S)
#define T5_ULP_MEMIO_ORDER_F T5_ULP_MEMIO_ORDER_V(1U)
+#define T5_ULP_MEMIO_FID_S 4
+#define T5_ULP_MEMIO_FID_M 0x7ff
+#define T5_ULP_MEMIO_FID_V(x) ((x) << T5_ULP_MEMIO_FID_S)
+
/* ulp_mem_io.lock_addr fields */
#define ULP_MEMIO_ADDR_S 0
#define ULP_MEMIO_ADDR_V(x) ((x) << ULP_MEMIO_ADDR_S)
diff --git a/drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h b/drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h
index 7ad6d4e75b2a..392d6644fdd8 100644
--- a/drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h
+++ b/drivers/net/ethernet/chelsio/cxgb4/t4fw_api.h
@@ -2510,6 +2510,11 @@ struct fw_port_cmd {
#define FW_PORT_CMD_PTYPE_G(x) \
(((x) >> FW_PORT_CMD_PTYPE_S) & FW_PORT_CMD_PTYPE_M)
+#define FW_PORT_CMD_LINKDNRC_S 5
+#define FW_PORT_CMD_LINKDNRC_M 0x7
+#define FW_PORT_CMD_LINKDNRC_G(x) \
+ (((x) >> FW_PORT_CMD_LINKDNRC_S) & FW_PORT_CMD_LINKDNRC_M)
+
#define FW_PORT_CMD_MODTYPE_S 0
#define FW_PORT_CMD_MODTYPE_M 0x1f
#define FW_PORT_CMD_MODTYPE_V(x) ((x) << FW_PORT_CMD_MODTYPE_S)
diff --git a/drivers/net/ethernet/chelsio/cxgb4vf/adapter.h b/drivers/net/ethernet/chelsio/cxgb4vf/adapter.h
index 4a707c32d76f..734dd776c22f 100644
--- a/drivers/net/ethernet/chelsio/cxgb4vf/adapter.h
+++ b/drivers/net/ethernet/chelsio/cxgb4vf/adapter.h
@@ -387,6 +387,10 @@ struct adapter {
/* various locks */
spinlock_t stats_lock;
+ /* support for mailbox command/reply logging */
+#define T4VF_OS_LOG_MBOX_CMDS 256
+ struct mbox_cmd_log *mbox_log;
+
/* list of MAC addresses in MPS Hash */
struct list_head mac_hlist;
};
diff --git a/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c b/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
index 1cc8a7a69457..04fc6f6d1e25 100644
--- a/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
+++ b/drivers/net/ethernet/chelsio/cxgb4vf/cxgb4vf_main.c
@@ -74,7 +74,8 @@ static int dflt_msg_enable = DFLT_MSG_ENABLE;
module_param(dflt_msg_enable, int, 0644);
MODULE_PARM_DESC(dflt_msg_enable,
- "default adapter ethtool message level bitmap");
+ "default adapter ethtool message level bitmap, "
+ "deprecated parameter");
/*
* The driver uses the best interrupt scheme available on a platform in the
@@ -1703,6 +1704,105 @@ static const struct ethtool_ops cxgb4vf_ethtool_ops = {
*/
/*
+ * Show Firmware Mailbox Command/Reply Log
+ *
+ * Note that we don't do any locking when dumping the Firmware Mailbox Log so
+ * it's possible that we can catch things during a log update and therefore
+ * see partially corrupted log entries. But it's probably Good Enough(tm).
+ * If we ever decide that we want to make sure that we're dumping a coherent
+ * log, we'd need to perform locking in the mailbox logging and in
+ * mboxlog_open() where we'd need to grab the entire mailbox log in one go
+ * like we do for the Firmware Device Log. But as stated above, meh ...
+ */
+static int mboxlog_show(struct seq_file *seq, void *v)
+{
+ struct adapter *adapter = seq->private;
+ struct mbox_cmd_log *log = adapter->mbox_log;
+ struct mbox_cmd *entry;
+ int entry_idx, i;
+
+ if (v == SEQ_START_TOKEN) {
+ seq_printf(seq,
+ "%10s %15s %5s %5s %s\n",
+ "Seq#", "Tstamp", "Atime", "Etime",
+ "Command/Reply");
+ return 0;
+ }
+
+ entry_idx = log->cursor + ((uintptr_t)v - 2);
+ if (entry_idx >= log->size)
+ entry_idx -= log->size;
+ entry = mbox_cmd_log_entry(log, entry_idx);
+
+ /* skip over unused entries */
+ if (entry->timestamp == 0)
+ return 0;
+
+ seq_printf(seq, "%10u %15llu %5d %5d",
+ entry->seqno, entry->timestamp,
+ entry->access, entry->execute);
+ for (i = 0; i < MBOX_LEN / 8; i++) {
+ u64 flit = entry->cmd[i];
+ u32 hi = (u32)(flit >> 32);
+ u32 lo = (u32)flit;
+
+ seq_printf(seq, " %08x %08x", hi, lo);
+ }
+ seq_puts(seq, "\n");
+ return 0;
+}
+
+static inline void *mboxlog_get_idx(struct seq_file *seq, loff_t pos)
+{
+ struct adapter *adapter = seq->private;
+ struct mbox_cmd_log *log = adapter->mbox_log;
+
+ return ((pos <= log->size) ? (void *)(uintptr_t)(pos + 1) : NULL);
+}
+
+static void *mboxlog_start(struct seq_file *seq, loff_t *pos)
+{
+ return *pos ? mboxlog_get_idx(seq, *pos) : SEQ_START_TOKEN;
+}
+
+static void *mboxlog_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+ ++*pos;
+ return mboxlog_get_idx(seq, *pos);
+}
+
+static void mboxlog_stop(struct seq_file *seq, void *v)
+{
+}
+
+static const struct seq_operations mboxlog_seq_ops = {
+ .start = mboxlog_start,
+ .next = mboxlog_next,
+ .stop = mboxlog_stop,
+ .show = mboxlog_show
+};
+
+static int mboxlog_open(struct inode *inode, struct file *file)
+{
+ int res = seq_open(file, &mboxlog_seq_ops);
+
+ if (!res) {
+ struct seq_file *seq = file->private_data;
+
+ seq->private = inode->i_private;
+ }
+ return res;
+}
+
+static const struct file_operations mboxlog_fops = {
+ .owner = THIS_MODULE,
+ .open = mboxlog_open,
+ .read = seq_read,
+ .llseek = seq_lseek,
+ .release = seq_release,
+};
+
+/*
* Show SGE Queue Set information. We display QPL Queues Sets per line.
*/
#define QPL 4
@@ -2121,6 +2221,7 @@ struct cxgb4vf_debugfs_entry {
};
static struct cxgb4vf_debugfs_entry debugfs_files[] = {
+ { "mboxlog", S_IRUGO, &mboxlog_fops },
{ "sge_qinfo", S_IRUGO, &sge_qinfo_debugfs_fops },
{ "sge_qstats", S_IRUGO, &sge_qstats_proc_fops },
{ "resources", S_IRUGO, &resources_proc_fops },
@@ -2663,6 +2764,16 @@ static int cxgb4vf_pci_probe(struct pci_dev *pdev,
adapter->pdev = pdev;
adapter->pdev_dev = &pdev->dev;
+ adapter->mbox_log = kzalloc(sizeof(*adapter->mbox_log) +
+ (sizeof(struct mbox_cmd) *
+ T4VF_OS_LOG_MBOX_CMDS),
+ GFP_KERNEL);
+ if (!adapter->mbox_log) {
+ err = -ENOMEM;
+ goto err_free_adapter;
+ }
+ adapter->mbox_log->size = T4VF_OS_LOG_MBOX_CMDS;
+
/*
* Initialize SMP data synchronization resources.
*/
@@ -2912,6 +3023,7 @@ err_unmap_bar0:
iounmap(adapter->regs);
err_free_adapter:
+ kfree(adapter->mbox_log);
kfree(adapter);
err_release_regions:
@@ -2981,6 +3093,7 @@ static void cxgb4vf_pci_remove(struct pci_dev *pdev)
iounmap(adapter->regs);
if (!is_t4(adapter->params.chip))
iounmap(adapter->bar2);
+ kfree(adapter->mbox_log);
kfree(adapter);
}
diff --git a/drivers/net/ethernet/chelsio/cxgb4vf/sge.c b/drivers/net/ethernet/chelsio/cxgb4vf/sge.c
index 1ccd282949a5..1bb57d3fbbe8 100644
--- a/drivers/net/ethernet/chelsio/cxgb4vf/sge.c
+++ b/drivers/net/ethernet/chelsio/cxgb4vf/sge.c
@@ -1448,7 +1448,7 @@ int t4vf_eth_xmit(struct sk_buff *skb, struct net_device *dev)
* the new TX descriptors and return success.
*/
txq_advance(&txq->q, ndesc);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
ring_tx_db(adapter, &txq->q, ndesc);
return NETDEV_TX_OK;
diff --git a/drivers/net/ethernet/chelsio/cxgb4vf/t4vf_common.h b/drivers/net/ethernet/chelsio/cxgb4vf/t4vf_common.h
index 9b40a85cc1e4..438374a05791 100644
--- a/drivers/net/ethernet/chelsio/cxgb4vf/t4vf_common.h
+++ b/drivers/net/ethernet/chelsio/cxgb4vf/t4vf_common.h
@@ -36,6 +36,7 @@
#ifndef __T4VF_COMMON_H__
#define __T4VF_COMMON_H__
+#include "../cxgb4/t4_hw.h"
#include "../cxgb4/t4fw_api.h"
#define CHELSIO_CHIP_CODE(version, revision) (((version) << 4) | (revision))
@@ -227,6 +228,34 @@ struct adapter_params {
u8 nports; /* # of Ethernet "ports" */
};
+/* Firmware Mailbox Command/Reply log. All values are in Host-Endian format.
+ * The access and execute times are signed in order to accommodate negative
+ * error returns.
+ */
+struct mbox_cmd {
+ u64 cmd[MBOX_LEN / 8]; /* a Firmware Mailbox Command/Reply */
+ u64 timestamp; /* OS-dependent timestamp */
+ u32 seqno; /* sequence number */
+ s16 access; /* time (ms) to access mailbox */
+ s16 execute; /* time (ms) to execute */
+};
+
+struct mbox_cmd_log {
+ unsigned int size; /* number of entries in the log */
+ unsigned int cursor; /* next position in the log to write */
+ u32 seqno; /* next sequence number */
+ /* variable length mailbox command log starts here */
+};
+
+/* Given a pointer to a Firmware Mailbox Command Log and a log entry index,
+ * return a pointer to the specified entry.
+ */
+static inline struct mbox_cmd *mbox_cmd_log_entry(struct mbox_cmd_log *log,
+ unsigned int entry_idx)
+{
+ return &((struct mbox_cmd *)&(log)[1])[entry_idx];
+}
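
A minimal sketch of walking such a log, assuming it was allocated as
sizeof(struct mbox_cmd_log) plus size * sizeof(struct mbox_cmd) (as the
cxgb4vf probe path above does); the helper above indexes the fixed-size
entries that follow the header. Not part of the patch:

	static void mbox_log_dump(struct mbox_cmd_log *log)
	{
		unsigned int k;

		for (k = 0; k < log->size; k++) {
			struct mbox_cmd *e = mbox_cmd_log_entry(log, k);

			if (!e->timestamp)	/* unused slot */
				continue;
			pr_info("mbox seq %u ts %llu acc %d exe %d\n",
				e->seqno,
				(unsigned long long)e->timestamp,
				e->access, e->execute);
		}
	}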
+
#include "adapter.h"
#ifndef PCI_VENDOR_ID_CHELSIO
diff --git a/drivers/net/ethernet/chelsio/cxgb4vf/t4vf_hw.c b/drivers/net/ethernet/chelsio/cxgb4vf/t4vf_hw.c
index fed83d88fc4e..955ff7c61f1b 100644
--- a/drivers/net/ethernet/chelsio/cxgb4vf/t4vf_hw.c
+++ b/drivers/net/ethernet/chelsio/cxgb4vf/t4vf_hw.c
@@ -76,21 +76,33 @@ static void get_mbox_rpl(struct adapter *adapter, __be64 *rpl, int size,
*rpl++ = cpu_to_be64(t4_read_reg64(adapter, mbox_data));
}
-/*
- * Dump contents of mailbox with a leading tag.
+/**
+ * t4vf_record_mbox - record a Firmware Mailbox Command/Reply in the log
+ * @adapter: the adapter
+ * @cmd: the Firmware Mailbox Command or Reply
+ * @size: command length in bytes
+ * @access: the time (ms) needed to access the Firmware Mailbox
+ * @execute: the time (ms) the command spent being executed
*/
-static void dump_mbox(struct adapter *adapter, const char *tag, u32 mbox_data)
+static void t4vf_record_mbox(struct adapter *adapter, const __be64 *cmd,
+ int size, int access, int execute)
{
- dev_err(adapter->pdev_dev,
- "mbox %s: %llx %llx %llx %llx %llx %llx %llx %llx\n", tag,
- (unsigned long long)t4_read_reg64(adapter, mbox_data + 0),
- (unsigned long long)t4_read_reg64(adapter, mbox_data + 8),
- (unsigned long long)t4_read_reg64(adapter, mbox_data + 16),
- (unsigned long long)t4_read_reg64(adapter, mbox_data + 24),
- (unsigned long long)t4_read_reg64(adapter, mbox_data + 32),
- (unsigned long long)t4_read_reg64(adapter, mbox_data + 40),
- (unsigned long long)t4_read_reg64(adapter, mbox_data + 48),
- (unsigned long long)t4_read_reg64(adapter, mbox_data + 56));
+ struct mbox_cmd_log *log = adapter->mbox_log;
+ struct mbox_cmd *entry;
+ int i;
+
+ entry = mbox_cmd_log_entry(log, log->cursor++);
+ if (log->cursor == log->size)
+ log->cursor = 0;
+
+ for (i = 0; i < size / 8; i++)
+ entry->cmd[i] = be64_to_cpu(cmd[i]);
+ while (i < MBOX_LEN / 8)
+ entry->cmd[i++] = 0;
+ entry->timestamp = jiffies;
+ entry->seqno = log->seqno++;
+ entry->access = access;
+ entry->execute = execute;
}
/**
@@ -120,10 +132,13 @@ int t4vf_wr_mbox_core(struct adapter *adapter, const void *cmd, int size,
1, 1, 3, 5, 10, 10, 20, 50, 100
};
+ u16 access = 0, execute = 0;
u32 v, mbox_data;
- int i, ms, delay_idx;
+ int i, ms, delay_idx, ret;
const __be64 *p;
u32 mbox_ctl = T4VF_CIM_BASE_ADDR + CIM_VF_EXT_MAILBOX_CTRL;
+ u32 cmd_op = FW_CMD_OP_G(be32_to_cpu(((struct fw_cmd_hdr *)cmd)->hi));
+ __be64 cmd_rpl[MBOX_LEN / 8];
/* In T6, mailbox size is changed to 128 bytes to avoid
* invalidating the entire prefetch buffer.
@@ -148,8 +163,11 @@ int t4vf_wr_mbox_core(struct adapter *adapter, const void *cmd, int size,
v = MBOWNER_G(t4_read_reg(adapter, mbox_ctl));
for (i = 0; v == MBOX_OWNER_NONE && i < 3; i++)
v = MBOWNER_G(t4_read_reg(adapter, mbox_ctl));
- if (v != MBOX_OWNER_DRV)
- return v == MBOX_OWNER_FW ? -EBUSY : -ETIMEDOUT;
+ if (v != MBOX_OWNER_DRV) {
+ ret = (v == MBOX_OWNER_FW) ? -EBUSY : -ETIMEDOUT;
+ t4vf_record_mbox(adapter, cmd, size, access, ret);
+ return ret;
+ }
/*
* Write the command array into the Mailbox Data register array and
@@ -164,6 +182,8 @@ int t4vf_wr_mbox_core(struct adapter *adapter, const void *cmd, int size,
* Data registers before doing the write to the VF Mailbox Control
* register.
*/
+ if (cmd_op != FW_VI_STATS_CMD)
+ t4vf_record_mbox(adapter, cmd, size, access, 0);
for (i = 0, p = cmd; i < size; i += 8)
t4_write_reg64(adapter, mbox_data + i, be64_to_cpu(*p++));
t4_read_reg(adapter, mbox_data); /* flush write */
@@ -209,31 +229,33 @@ int t4vf_wr_mbox_core(struct adapter *adapter, const void *cmd, int size,
* We return the (negated) firmware command return
* code (this depends on FW_SUCCESS == 0).
*/
+ get_mbox_rpl(adapter, cmd_rpl, size, mbox_data);
/* return value in low-order little-endian word */
- v = t4_read_reg(adapter, mbox_data);
- if (FW_CMD_RETVAL_G(v))
- dump_mbox(adapter, "FW Error", mbox_data);
+ v = be64_to_cpu(cmd_rpl[0]);
if (rpl) {
/* request bit in high-order BE word */
WARN_ON((be32_to_cpu(*(const __be32 *)cmd)
& FW_CMD_REQUEST_F) == 0);
- get_mbox_rpl(adapter, rpl, size, mbox_data);
+ memcpy(rpl, cmd_rpl, size);
WARN_ON((be32_to_cpu(*(__be32 *)rpl)
& FW_CMD_REQUEST_F) != 0);
}
t4_write_reg(adapter, mbox_ctl,
MBOWNER_V(MBOX_OWNER_NONE));
+ execute = i + ms;
+ if (cmd_op != FW_VI_STATS_CMD)
+ t4vf_record_mbox(adapter, cmd_rpl, size, access,
+ execute);
return -FW_CMD_RETVAL_G(v);
}
}
- /*
- * We timed out. Return the error ...
- */
- dump_mbox(adapter, "FW Timeout", mbox_data);
- return -ETIMEDOUT;
+ /* We timed out. Return the error ... */
+ ret = -ETIMEDOUT;
+ t4vf_record_mbox(adapter, cmd, size, access, ret);
+ return ret;
}
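
For context, drivers reach this routine through a thin t4vf_wr_mbox() wrapper
(declared in t4vf_common.h, not shown in this hunk) and treat any non-zero
return as the negated firmware or transport error. A hedged sketch of the
usual calling pattern, modelled on the existing firmware-reset helper:

	struct fw_reset_cmd cmd = { };
	int ret;

	cmd.op_to_write = cpu_to_be32(FW_CMD_OP_V(FW_RESET_CMD) |
				      FW_CMD_REQUEST_F | FW_CMD_WRITE_F);
	cmd.retval_len16 = cpu_to_be32(FW_LEN16(cmd));
	ret = t4vf_wr_mbox(adapter, &cmd, sizeof(cmd), NULL);
	if (ret)
		dev_err(adapter->pdev_dev,
			"firmware reset failed, error %d\n", ret);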
#define ADVERT_MASK (FW_PORT_CAP_SPEED_100M | FW_PORT_CAP_SPEED_1G |\
diff --git a/drivers/net/ethernet/cisco/enic/enic_main.c b/drivers/net/ethernet/cisco/enic/enic_main.c
index b2182d3ba3cc..f15560a06718 100644
--- a/drivers/net/ethernet/cisco/enic/enic_main.c
+++ b/drivers/net/ethernet/cisco/enic/enic_main.c
@@ -2740,6 +2740,7 @@ static int enic_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
netdev->hw_features |= NETIF_F_RXCSUM;
netdev->features |= netdev->hw_features;
+ netdev->vlan_features |= netdev->features;
#ifdef CONFIG_RFS_ACCEL
netdev->hw_features |= NETIF_F_NTUPLE;
diff --git a/drivers/net/ethernet/davicom/dm9000.c b/drivers/net/ethernet/davicom/dm9000.c
index 48d91941408d..1471e16ba719 100644
--- a/drivers/net/ethernet/davicom/dm9000.c
+++ b/drivers/net/ethernet/davicom/dm9000.c
@@ -966,7 +966,7 @@ dm9000_init_dm9000(struct net_device *dev)
/* Init Driver variable */
db->tx_pkt_cnt = 0;
db->queue_pkt_len = 0;
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
}
/* Our watchdog timed out. Called by the networking layer */
@@ -985,7 +985,7 @@ static void dm9000_timeout(struct net_device *dev)
dm9000_init_dm9000(dev);
dm9000_unmask_interrupts(db);
/* We can accept TX packets again */
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue(dev);
/* Restore previous register address */
@@ -1432,6 +1432,7 @@ dm9000_probe(struct platform_device *pdev)
int reset_gpios;
enum of_gpio_flags flags;
struct regulator *power;
+ bool inv_mac_addr = false;
power = devm_regulator_get(dev, "vcc");
if (IS_ERR(power)) {
@@ -1686,9 +1687,7 @@ dm9000_probe(struct platform_device *pdev)
}
if (!is_valid_ether_addr(ndev->dev_addr)) {
- dev_warn(db->dev, "%s: Invalid ethernet MAC address. Please "
- "set using ifconfig\n", ndev->name);
-
+ inv_mac_addr = true;
eth_hw_addr_random(ndev);
mac_src = "random";
}
@@ -1697,11 +1696,15 @@ dm9000_probe(struct platform_device *pdev)
platform_set_drvdata(pdev, ndev);
ret = register_netdev(ndev);
- if (ret == 0)
+ if (ret == 0) {
+ if (inv_mac_addr)
+ dev_warn(db->dev, "%s: Invalid ethernet MAC address. Please set using ip\n",
+ ndev->name);
printk(KERN_INFO "%s: dm9000%c at %p,%p IRQ %d MAC: %pM (%s)\n",
ndev->name, dm9000_type_to_char(db->type),
db->io_addr, db->io_data, ndev->irq,
ndev->dev_addr, mac_src);
+ }
return 0;
out:
diff --git a/drivers/net/ethernet/dec/tulip/de4x5.c b/drivers/net/ethernet/dec/tulip/de4x5.c
index 3acde3b9b767..cbe84972ff7a 100644
--- a/drivers/net/ethernet/dec/tulip/de4x5.c
+++ b/drivers/net/ethernet/dec/tulip/de4x5.c
@@ -1336,7 +1336,7 @@ de4x5_open(struct net_device *dev)
}
lp->interrupt = UNMASK_INTERRUPTS;
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
START_DE4X5;
@@ -1465,7 +1465,7 @@ de4x5_queue_pkt(struct sk_buff *skb, struct net_device *dev)
netif_stop_queue(dev);
if (!lp->tx_enable) /* Cannot send for now */
- return NETDEV_TX_LOCKED;
+ goto tx_err;
/*
** Clean out the TX ring asynchronously to interrupts - sometimes the
@@ -1478,7 +1478,7 @@ de4x5_queue_pkt(struct sk_buff *skb, struct net_device *dev)
/* Test if cache is already locked - requeue skb if so */
if (test_and_set_bit(0, (void *)&lp->cache.lock) && !lp->interrupt)
- return NETDEV_TX_LOCKED;
+ goto tx_err;
/* Transmit descriptor ring full or stale skb */
if (netif_queue_stopped(dev) || (u_long) lp->tx_skb[lp->tx_new] > 1) {
@@ -1519,6 +1519,9 @@ de4x5_queue_pkt(struct sk_buff *skb, struct net_device *dev)
lp->cache.lock = 0;
return NETDEV_TX_OK;
+tx_err:
+ dev_kfree_skb_any(skb);
+ return NETDEV_TX_OK;
}
/*
@@ -1932,7 +1935,7 @@ set_multicast_list(struct net_device *dev)
lp->tx_new = (lp->tx_new + 1) % lp->txRingSize;
outl(POLL_DEMAND, DE4X5_TPD); /* Start the TX */
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
}
}
}
diff --git a/drivers/net/ethernet/dec/tulip/dmfe.c b/drivers/net/ethernet/dec/tulip/dmfe.c
index afd8e78e024e..8ed0fd8b1dda 100644
--- a/drivers/net/ethernet/dec/tulip/dmfe.c
+++ b/drivers/net/ethernet/dec/tulip/dmfe.c
@@ -192,9 +192,6 @@
(__CHK_IO_SIZE(((pci_dev)->device << 16) | (pci_dev)->vendor, \
(pci_dev)->revision))
-/* Sten Check */
-#define DEVICE net_device
-
/* Structure/enum declaration ------------------------------- */
struct tx_desc {
__le32 tdes0, tdes1, tdes2, tdes3; /* Data for the card */
@@ -313,10 +310,10 @@ static u8 SF_mode; /* Special Function: 1:VLAN, 2:RX Flow Control
/* function declaration ------------------------------------- */
-static int dmfe_open(struct DEVICE *);
-static netdev_tx_t dmfe_start_xmit(struct sk_buff *, struct DEVICE *);
-static int dmfe_stop(struct DEVICE *);
-static void dmfe_set_filter_mode(struct DEVICE *);
+static int dmfe_open(struct net_device *);
+static netdev_tx_t dmfe_start_xmit(struct sk_buff *, struct net_device *);
+static int dmfe_stop(struct net_device *);
+static void dmfe_set_filter_mode(struct net_device *);
static const struct ethtool_ops netdev_ethtool_ops;
static u16 read_srom_word(void __iomem *, int);
static irqreturn_t dmfe_interrupt(int , void *);
@@ -326,8 +323,8 @@ static void poll_dmfe (struct net_device *dev);
static void dmfe_descriptor_init(struct net_device *);
static void allocate_rx_buffer(struct net_device *);
static void update_cr6(u32, void __iomem *);
-static void send_filter_frame(struct DEVICE *);
-static void dm9132_id_table(struct DEVICE *);
+static void send_filter_frame(struct net_device *);
+static void dm9132_id_table(struct net_device *);
static u16 dmfe_phy_read(void __iomem *, u8, u8, u32);
static void dmfe_phy_write(void __iomem *, u8, u8, u16, u32);
static void dmfe_phy_write_1bit(void __iomem *, u32);
@@ -336,12 +333,12 @@ static u8 dmfe_sense_speed(struct dmfe_board_info *);
static void dmfe_process_mode(struct dmfe_board_info *);
static void dmfe_timer(unsigned long);
static inline u32 cal_CRC(unsigned char *, unsigned int, u8);
-static void dmfe_rx_packet(struct DEVICE *, struct dmfe_board_info *);
-static void dmfe_free_tx_pkt(struct DEVICE *, struct dmfe_board_info *);
+static void dmfe_rx_packet(struct net_device *, struct dmfe_board_info *);
+static void dmfe_free_tx_pkt(struct net_device *, struct dmfe_board_info *);
static void dmfe_reuse_skb(struct dmfe_board_info *, struct sk_buff *);
-static void dmfe_dynamic_reset(struct DEVICE *);
+static void dmfe_dynamic_reset(struct net_device *);
static void dmfe_free_rxbuffer(struct dmfe_board_info *);
-static void dmfe_init_dm910x(struct DEVICE *);
+static void dmfe_init_dm910x(struct net_device *);
static void dmfe_parse_srom(struct dmfe_board_info *);
static void dmfe_program_DM9801(struct dmfe_board_info *, int);
static void dmfe_program_DM9802(struct dmfe_board_info *);
@@ -558,7 +555,7 @@ static void dmfe_remove_one(struct pci_dev *pdev)
 * The interface is opened whenever "ifconfig" activates it.
*/
-static int dmfe_open(struct DEVICE *dev)
+static int dmfe_open(struct net_device *dev)
{
struct dmfe_board_info *db = netdev_priv(dev);
const int irq = db->pdev->irq;
@@ -617,7 +614,7 @@ static int dmfe_open(struct DEVICE *dev)
* Enable Tx/Rx machine
*/
-static void dmfe_init_dm910x(struct DEVICE *dev)
+static void dmfe_init_dm910x(struct net_device *dev)
{
struct dmfe_board_info *db = netdev_priv(dev);
void __iomem *ioaddr = db->ioaddr;
@@ -684,7 +681,7 @@ static void dmfe_init_dm910x(struct DEVICE *dev)
*/
static netdev_tx_t dmfe_start_xmit(struct sk_buff *skb,
- struct DEVICE *dev)
+ struct net_device *dev)
{
struct dmfe_board_info *db = netdev_priv(dev);
void __iomem *ioaddr = db->ioaddr;
@@ -728,7 +725,7 @@ static netdev_tx_t dmfe_start_xmit(struct sk_buff *skb,
txptr->tdes0 = cpu_to_le32(0x80000000); /* Set owner bit */
db->tx_packet_cnt++; /* Ready to send */
dw32(DCR1, 0x1); /* Issue Tx polling */
- dev->trans_start = jiffies; /* saved time stamp */
+ netif_trans_update(dev); /* saved time stamp */
} else {
db->tx_queue_cnt++; /* queue TX packet */
dw32(DCR1, 0x1); /* Issue Tx polling */
@@ -754,7 +751,7 @@ static netdev_tx_t dmfe_start_xmit(struct sk_buff *skb,
 * The interface is stopped when it is brought down.
*/
-static int dmfe_stop(struct DEVICE *dev)
+static int dmfe_stop(struct net_device *dev)
{
struct dmfe_board_info *db = netdev_priv(dev);
void __iomem *ioaddr = db->ioaddr;
@@ -798,7 +795,7 @@ static int dmfe_stop(struct DEVICE *dev)
static irqreturn_t dmfe_interrupt(int irq, void *dev_id)
{
- struct DEVICE *dev = dev_id;
+ struct net_device *dev = dev_id;
struct dmfe_board_info *db = netdev_priv(dev);
void __iomem *ioaddr = db->ioaddr;
unsigned long flags;
@@ -879,7 +876,7 @@ static void poll_dmfe (struct net_device *dev)
* Free TX resource after TX complete
*/
-static void dmfe_free_tx_pkt(struct DEVICE *dev, struct dmfe_board_info * db)
+static void dmfe_free_tx_pkt(struct net_device *dev, struct dmfe_board_info *db)
{
struct tx_desc *txptr;
void __iomem *ioaddr = db->ioaddr;
@@ -934,7 +931,7 @@ static void dmfe_free_tx_pkt(struct DEVICE *dev, struct dmfe_board_info * db)
db->tx_packet_cnt++; /* Ready to send */
db->tx_queue_cnt--;
dw32(DCR1, 0x1); /* Issue Tx polling */
- dev->trans_start = jiffies; /* saved time stamp */
+ netif_trans_update(dev); /* saved time stamp */
}
/* Resource available check */
@@ -961,7 +958,7 @@ static inline u32 cal_CRC(unsigned char * Data, unsigned int Len, u8 flag)
 * Receive the incoming packet and pass it to the upper layer
*/
-static void dmfe_rx_packet(struct DEVICE *dev, struct dmfe_board_info * db)
+static void dmfe_rx_packet(struct net_device *dev, struct dmfe_board_info *db)
{
struct rx_desc *rxptr;
struct sk_buff *skb, *newskb;
@@ -1052,7 +1049,7 @@ static void dmfe_rx_packet(struct DEVICE *dev, struct dmfe_board_info * db)
* Set DM910X multicast address
*/
-static void dmfe_set_filter_mode(struct DEVICE * dev)
+static void dmfe_set_filter_mode(struct net_device *dev)
{
struct dmfe_board_info *db = netdev_priv(dev);
unsigned long flags;
@@ -1545,7 +1542,7 @@ static void send_filter_frame(struct net_device *dev)
update_cr6(db->cr6_data | 0x2000, ioaddr);
dw32(DCR1, 0x1); /* Issue Tx polling */
update_cr6(db->cr6_data, ioaddr);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
} else
db->tx_queue_cnt++; /* Put in TX queue */
}
diff --git a/drivers/net/ethernet/dec/tulip/pnic.c b/drivers/net/ethernet/dec/tulip/pnic.c
index 5364563c4378..7bcccf5cac7a 100644
--- a/drivers/net/ethernet/dec/tulip/pnic.c
+++ b/drivers/net/ethernet/dec/tulip/pnic.c
@@ -44,7 +44,7 @@ void pnic_do_nway(struct net_device *dev)
tp->csr6 = new_csr6;
/* Restart Tx */
tulip_restart_rxtx(tp);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
}
}
}
@@ -70,7 +70,7 @@ void pnic_lnk_change(struct net_device *dev, int csr5)
iowrite32(tp->csr6, ioaddr + CSR6);
iowrite32(0x30, ioaddr + CSR12);
iowrite32(0x0201F078, ioaddr + 0xB8); /* Turn on autonegotiation. */
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
}
} else if (ioread32(ioaddr + CSR5) & TPLnkPass) {
if (tulip_media_cap[dev->if_port] & MediaIsMII) {
@@ -147,7 +147,7 @@ void pnic_timer(unsigned long data)
tp->csr6 = new_csr6;
/* Restart Tx */
tulip_restart_rxtx(tp);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
if (tulip_debug > 1)
dev_info(&dev->dev,
"Changing PNIC configuration to %s %s-duplex, CSR6 %08x\n",
diff --git a/drivers/net/ethernet/dec/tulip/tulip_core.c b/drivers/net/ethernet/dec/tulip/tulip_core.c
index 94d0eebef129..bbde90bc74fe 100644
--- a/drivers/net/ethernet/dec/tulip/tulip_core.c
+++ b/drivers/net/ethernet/dec/tulip/tulip_core.c
@@ -605,7 +605,7 @@ static void tulip_tx_timeout(struct net_device *dev)
out_unlock:
spin_unlock_irqrestore (&tp->lock, flags);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue (dev);
}
diff --git a/drivers/net/ethernet/dec/tulip/uli526x.c b/drivers/net/ethernet/dec/tulip/uli526x.c
index 447d09272ab7..e750b5ddc0fb 100644
--- a/drivers/net/ethernet/dec/tulip/uli526x.c
+++ b/drivers/net/ethernet/dec/tulip/uli526x.c
@@ -636,7 +636,7 @@ static netdev_tx_t uli526x_start_xmit(struct sk_buff *skb,
txptr->tdes0 = cpu_to_le32(0x80000000); /* Set owner bit */
db->tx_packet_cnt++; /* Ready to send */
uw32(DCR1, 0x1); /* Issue Tx polling */
- dev->trans_start = jiffies; /* saved time stamp */
+ netif_trans_update(dev); /* saved time stamp */
}
/* Tx resource check */
@@ -1431,7 +1431,7 @@ static void send_filter_frame(struct net_device *dev, int mc_cnt)
update_cr6(db->cr6_data | 0x2000, ioaddr);
uw32(DCR1, 0x1); /* Issue Tx polling */
update_cr6(db->cr6_data, ioaddr);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
} else
netdev_err(dev, "No Tx resource - Send_filter_frame!\n");
}
diff --git a/drivers/net/ethernet/dec/tulip/winbond-840.c b/drivers/net/ethernet/dec/tulip/winbond-840.c
index 3c0e4d5c5fef..1f62b9423851 100644
--- a/drivers/net/ethernet/dec/tulip/winbond-840.c
+++ b/drivers/net/ethernet/dec/tulip/winbond-840.c
@@ -966,7 +966,7 @@ static void tx_timeout(struct net_device *dev)
enable_irq(irq);
netif_wake_queue(dev);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
np->stats.tx_errors++;
}
diff --git a/drivers/net/ethernet/dlink/dl2k.c b/drivers/net/ethernet/dlink/dl2k.c
index f92b6d948398..78f144696d6b 100644
--- a/drivers/net/ethernet/dlink/dl2k.c
+++ b/drivers/net/ethernet/dlink/dl2k.c
@@ -706,7 +706,7 @@ rio_tx_timeout (struct net_device *dev)
dev->name, dr32(TxStatus));
rio_free_tx(dev, 0);
dev->if_port = 0;
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
}
static netdev_tx_t
diff --git a/drivers/net/ethernet/dlink/sundance.c b/drivers/net/ethernet/dlink/sundance.c
index a28a2e583f0f..58c6338a839e 100644
--- a/drivers/net/ethernet/dlink/sundance.c
+++ b/drivers/net/ethernet/dlink/sundance.c
@@ -1011,7 +1011,7 @@ static void tx_timeout(struct net_device *dev)
dev->if_port = 0;
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
dev->stats.tx_errors++;
if (np->cur_tx - np->dirty_tx < TX_QUEUE_LEN - 4) {
netif_wake_queue(dev);
diff --git a/drivers/net/ethernet/emulex/benet/be_main.c b/drivers/net/ethernet/emulex/benet/be_main.c
index 536686476369..ed98ef1ecac3 100644
--- a/drivers/net/ethernet/emulex/benet/be_main.c
+++ b/drivers/net/ethernet/emulex/benet/be_main.c
@@ -4890,11 +4890,13 @@ static int be_resume(struct be_adapter *adapter)
if (status)
return status;
- if (netif_running(netdev)) {
+ rtnl_lock();
+ if (netif_running(netdev))
status = be_open(netdev);
- if (status)
- return status;
- }
+ rtnl_unlock();
+
+ if (status)
+ return status;
netif_device_attach(netdev);
diff --git a/drivers/net/ethernet/ezchip/nps_enet.c b/drivers/net/ethernet/ezchip/nps_enet.c
index 1f23845a0694..085f9125cf42 100644
--- a/drivers/net/ethernet/ezchip/nps_enet.c
+++ b/drivers/net/ethernet/ezchip/nps_enet.c
@@ -145,7 +145,7 @@ static void nps_enet_tx_handler(struct net_device *ndev)
u32 tx_ctrl_nt = (tx_ctrl_value & TX_CTL_NT_MASK) >> TX_CTL_NT_SHIFT;
/* Check if we got TX */
- if (!priv->tx_packet_sent || tx_ctrl_ct)
+ if (!priv->tx_skb || tx_ctrl_ct)
return;
/* Ack Tx ctrl register */
@@ -160,7 +160,7 @@ static void nps_enet_tx_handler(struct net_device *ndev)
}
dev_kfree_skb(priv->tx_skb);
- priv->tx_packet_sent = false;
+ priv->tx_skb = NULL;
if (netif_queue_stopped(ndev))
netif_wake_queue(ndev);
@@ -183,6 +183,9 @@ static int nps_enet_poll(struct napi_struct *napi, int budget)
work_done = nps_enet_rx_handler(ndev);
if (work_done < budget) {
u32 buf_int_enable_value = 0;
+ u32 tx_ctrl_value = nps_enet_reg_get(priv, NPS_ENET_REG_TX_CTL);
+ u32 tx_ctrl_ct =
+ (tx_ctrl_value & TX_CTL_CT_MASK) >> TX_CTL_CT_SHIFT;
napi_complete(napi);
@@ -192,6 +195,18 @@ static int nps_enet_poll(struct napi_struct *napi, int budget)
nps_enet_reg_set(priv, NPS_ENET_REG_BUF_INT_ENABLE,
buf_int_enable_value);
+
+ /* If a TX interrupt arrives while interrupts are masked, it is
+ * lost because the TX interrupt is edge-triggered. Specifically,
+ * while executing the code section above, between
+ * nps_enet_tx_handler() and the interrupt enable, all TX
+ * requests would be stuck until the next RX interrupt. The
+ * check below resolves this by re-adding ourselves to the
+ * poll list.
+ */
+
+ if (priv->tx_skb && !tx_ctrl_ct)
+ napi_reschedule(napi);
}
return work_done;
@@ -217,7 +232,7 @@ static irqreturn_t nps_enet_irq_handler(s32 irq, void *dev_instance)
u32 tx_ctrl_ct = (tx_ctrl_value & TX_CTL_CT_MASK) >> TX_CTL_CT_SHIFT;
u32 rx_ctrl_cr = (rx_ctrl_value & RX_CTL_CR_MASK) >> RX_CTL_CR_SHIFT;
- if ((!tx_ctrl_ct && priv->tx_packet_sent) || rx_ctrl_cr)
+ if ((!tx_ctrl_ct && priv->tx_skb) || rx_ctrl_cr)
if (likely(napi_schedule_prep(&priv->napi))) {
nps_enet_reg_set(priv, NPS_ENET_REG_BUF_INT_ENABLE, 0);
__napi_schedule(&priv->napi);
@@ -387,8 +402,6 @@ static void nps_enet_send_frame(struct net_device *ndev,
/* Write the length of the Frame */
tx_ctrl_value |= length << TX_CTL_NT_SHIFT;
- /* Indicate SW is done */
- priv->tx_packet_sent = true;
tx_ctrl_value |= NPS_ENET_ENABLE << TX_CTL_CT_SHIFT;
/* Send Frame */
nps_enet_reg_set(priv, NPS_ENET_REG_TX_CTL, tx_ctrl_value);
@@ -465,7 +478,7 @@ static s32 nps_enet_open(struct net_device *ndev)
s32 err;
/* Reset private variables */
- priv->tx_packet_sent = false;
+ priv->tx_skb = NULL;
priv->ge_mac_cfg_2_value = 0;
priv->ge_mac_cfg_3_value = 0;
@@ -534,6 +547,11 @@ static netdev_tx_t nps_enet_start_xmit(struct sk_buff *skb,
priv->tx_skb = skb;
+ /* Make sure tx_skb is actually written to memory
+ * before the HW is informed and the IRQ is fired.
+ */
+ wmb();
+
nps_enet_send_frame(ndev, skb);
return NETDEV_TX_OK;
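
Taken together, the xmit and poll changes implement a simple publish/consume
pairing around tx_skb; a condensed sketch of the ordering contract (both
halves appear verbatim in the hunks above, collected here only for
readability):

	/* producer (ndo_start_xmit): publish the skb before kicking the HW */
	priv->tx_skb = skb;
	wmb();			/* order the store before the MMIO doorbell */
	nps_enet_send_frame(ndev, skb);

	/* consumer (poll, with interrupts masked): re-poll if a TX completion
	 * may have been missed, since the TX interrupt is edge-triggered
	 */
	if (priv->tx_skb && !tx_ctrl_ct)
		napi_reschedule(napi);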
diff --git a/drivers/net/ethernet/ezchip/nps_enet.h b/drivers/net/ethernet/ezchip/nps_enet.h
index d0cab600bce8..3939ca20cc9f 100644
--- a/drivers/net/ethernet/ezchip/nps_enet.h
+++ b/drivers/net/ethernet/ezchip/nps_enet.h
@@ -165,14 +165,12 @@
* struct nps_enet_priv - Storage of ENET's private information.
* @regs_base: Base address of ENET memory-mapped control registers.
* @irq: For RX/TX IRQ number.
- * @tx_packet_sent: SW indication if frame is being sent.
* @tx_skb: socket buffer of sent frame.
* @napi: Structure for NAPI.
*/
struct nps_enet_priv {
void __iomem *regs_base;
s32 irq;
- bool tx_packet_sent;
struct sk_buff *tx_skb;
struct napi_struct napi;
u32 ge_mac_cfg_2_value;
diff --git a/drivers/net/ethernet/faraday/ftgmac100.c b/drivers/net/ethernet/faraday/ftgmac100.c
index 84384e1585a5..e7cf313e359b 100644
--- a/drivers/net/ethernet/faraday/ftgmac100.c
+++ b/drivers/net/ethernet/faraday/ftgmac100.c
@@ -71,7 +71,6 @@ struct ftgmac100 {
struct napi_struct napi;
struct mii_bus *mii_bus;
- struct phy_device *phydev;
int old_speed;
};
@@ -807,7 +806,7 @@ err:
static void ftgmac100_adjust_link(struct net_device *netdev)
{
struct ftgmac100 *priv = netdev_priv(netdev);
- struct phy_device *phydev = priv->phydev;
+ struct phy_device *phydev = netdev->phydev;
int ier;
if (phydev->speed == priv->old_speed)
@@ -850,7 +849,6 @@ static int ftgmac100_mii_probe(struct ftgmac100 *priv)
return PTR_ERR(phydev);
}
- priv->phydev = phydev;
return 0;
}
@@ -939,27 +937,11 @@ static void ftgmac100_get_drvinfo(struct net_device *netdev,
strlcpy(info->bus_info, dev_name(&netdev->dev), sizeof(info->bus_info));
}
-static int ftgmac100_get_settings(struct net_device *netdev,
- struct ethtool_cmd *cmd)
-{
- struct ftgmac100 *priv = netdev_priv(netdev);
-
- return phy_ethtool_gset(priv->phydev, cmd);
-}
-
-static int ftgmac100_set_settings(struct net_device *netdev,
- struct ethtool_cmd *cmd)
-{
- struct ftgmac100 *priv = netdev_priv(netdev);
-
- return phy_ethtool_sset(priv->phydev, cmd);
-}
-
static const struct ethtool_ops ftgmac100_ethtool_ops = {
- .set_settings = ftgmac100_set_settings,
- .get_settings = ftgmac100_get_settings,
.get_drvinfo = ftgmac100_get_drvinfo,
.get_link = ethtool_op_get_link,
+ .get_link_ksettings = phy_ethtool_get_link_ksettings,
+ .set_link_ksettings = phy_ethtool_set_link_ksettings,
};
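
The same conversion recurs in the fec, fec_mpc52xx, fs_enet, gianfar and
ucc_geth hunks below: drop the driver-private phy_device pointer, reach the
PHY via netdev->phydev, and use the phylib ksettings helpers (directly as
ethtool ops, or via thin wrappers as in ucc_geth). Stripped to its shape,
with example_get_drvinfo standing in for whatever driver-specific callbacks
remain:

	static const struct ethtool_ops example_ethtool_ops = {
		.get_drvinfo		= example_get_drvinfo,	/* driver-specific */
		.get_link		= ethtool_op_get_link,
		.get_link_ksettings	= phy_ethtool_get_link_ksettings,
		.set_link_ksettings	= phy_ethtool_set_link_ksettings,
	};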
/******************************************************************************
@@ -1085,7 +1067,7 @@ static int ftgmac100_open(struct net_device *netdev)
ftgmac100_init_hw(priv);
ftgmac100_start_hw(priv, 10);
- phy_start(priv->phydev);
+ phy_start(netdev->phydev);
napi_enable(&priv->napi);
netif_start_queue(netdev);
@@ -1111,7 +1093,7 @@ static int ftgmac100_stop(struct net_device *netdev)
netif_stop_queue(netdev);
napi_disable(&priv->napi);
- phy_stop(priv->phydev);
+ phy_stop(netdev->phydev);
ftgmac100_stop_hw(priv);
free_irq(priv->irq, netdev);
@@ -1152,9 +1134,7 @@ static int ftgmac100_hard_start_xmit(struct sk_buff *skb,
/* optional */
static int ftgmac100_do_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
{
- struct ftgmac100 *priv = netdev_priv(netdev);
-
- return phy_mii_ioctl(priv->phydev, ifr, cmd);
+ return phy_mii_ioctl(netdev->phydev, ifr, cmd);
}
static const struct net_device_ops ftgmac100_netdev_ops = {
@@ -1275,7 +1255,7 @@ static int ftgmac100_probe(struct platform_device *pdev)
return 0;
err_register_netdev:
- phy_disconnect(priv->phydev);
+ phy_disconnect(netdev->phydev);
err_mii_probe:
mdiobus_unregister(priv->mii_bus);
err_register_mdiobus:
@@ -1301,7 +1281,7 @@ static int __exit ftgmac100_remove(struct platform_device *pdev)
unregister_netdev(netdev);
- phy_disconnect(priv->phydev);
+ phy_disconnect(netdev->phydev);
mdiobus_unregister(priv->mii_bus);
mdiobus_free(priv->mii_bus);
diff --git a/drivers/net/ethernet/fealnx.c b/drivers/net/ethernet/fealnx.c
index b1b9ebafb354..c08bd763172a 100644
--- a/drivers/net/ethernet/fealnx.c
+++ b/drivers/net/ethernet/fealnx.c
@@ -1227,7 +1227,7 @@ static void fealnx_tx_timeout(struct net_device *dev)
spin_unlock_irqrestore(&np->lock, flags);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
dev->stats.tx_errors++;
netif_wake_queue(dev); /* or .._start_.. ?? */
}
diff --git a/drivers/net/ethernet/freescale/fec.h b/drivers/net/ethernet/freescale/fec.h
index 195122e11f10..f58f9ea51639 100644
--- a/drivers/net/ethernet/freescale/fec.h
+++ b/drivers/net/ethernet/freescale/fec.h
@@ -517,7 +517,6 @@ struct fec_enet_private {
/* Phylib and MDIO interface */
struct mii_bus *mii_bus;
- struct phy_device *phy_dev;
int mii_timeout;
uint phy_speed;
phy_interface_t phy_interface;
diff --git a/drivers/net/ethernet/freescale/fec_main.c b/drivers/net/ethernet/freescale/fec_main.c
index 08243c2ff4b4..ca2cccc594fd 100644
--- a/drivers/net/ethernet/freescale/fec_main.c
+++ b/drivers/net/ethernet/freescale/fec_main.c
@@ -967,10 +967,10 @@ fec_restart(struct net_device *ndev)
rcntl &= ~(1 << 8);
/* 1G, 100M or 10M */
- if (fep->phy_dev) {
- if (fep->phy_dev->speed == SPEED_1000)
+ if (ndev->phydev) {
+ if (ndev->phydev->speed == SPEED_1000)
ecntl |= (1 << 5);
- else if (fep->phy_dev->speed == SPEED_100)
+ else if (ndev->phydev->speed == SPEED_100)
rcntl &= ~(1 << 9);
else
rcntl |= (1 << 9);
@@ -991,7 +991,7 @@ fec_restart(struct net_device *ndev)
*/
cfgr = (fep->phy_interface == PHY_INTERFACE_MODE_RMII)
? BM_MIIGSK_CFGR_RMII : BM_MIIGSK_CFGR_MII;
- if (fep->phy_dev && fep->phy_dev->speed == SPEED_10)
+ if (ndev->phydev && ndev->phydev->speed == SPEED_10)
cfgr |= BM_MIIGSK_CFGR_FRCONT_10M;
writel(cfgr, fep->hwp + FEC_MIIGSK_CFGR);
@@ -1005,7 +1005,7 @@ fec_restart(struct net_device *ndev)
/* enable pause frame*/
if ((fep->pause_flag & FEC_PAUSE_FLAG_ENABLE) ||
((fep->pause_flag & FEC_PAUSE_FLAG_AUTONEG) &&
- fep->phy_dev && fep->phy_dev->pause)) {
+ ndev->phydev && ndev->phydev->pause)) {
rcntl |= FEC_ENET_FCE;
/* set FIFO threshold parameter to reduce overrun */
@@ -1521,9 +1521,15 @@ fec_enet_rx(struct net_device *ndev, int budget)
struct fec_enet_private *fep = netdev_priv(ndev);
for_each_set_bit(queue_id, &fep->work_rx, FEC_ENET_MAX_RX_QS) {
- clear_bit(queue_id, &fep->work_rx);
- pkt_received += fec_enet_rx_queue(ndev,
+ int ret;
+
+ ret = fec_enet_rx_queue(ndev,
budget - pkt_received, queue_id);
+
+ if (ret < budget - pkt_received)
+ clear_bit(queue_id, &fep->work_rx);
+
+ pkt_received += ret;
}
return pkt_received;
}
@@ -1679,7 +1685,7 @@ static void fec_get_mac(struct net_device *ndev)
static void fec_enet_adjust_link(struct net_device *ndev)
{
struct fec_enet_private *fep = netdev_priv(ndev);
- struct phy_device *phy_dev = fep->phy_dev;
+ struct phy_device *phy_dev = ndev->phydev;
int status_change = 0;
/* Prevent a state halted on mii error */
@@ -1879,8 +1885,6 @@ static int fec_enet_mii_probe(struct net_device *ndev)
int phy_id;
int dev_id = fep->dev_id;
- fep->phy_dev = NULL;
-
if (fep->phy_node) {
phy_dev = of_phy_connect(ndev, fep->phy_node,
&fec_enet_adjust_link, 0,
@@ -1928,7 +1932,6 @@ static int fec_enet_mii_probe(struct net_device *ndev)
phy_dev->advertising = phy_dev->supported;
- fep->phy_dev = phy_dev;
fep->link = 0;
fep->full_duplex = 0;
@@ -2058,30 +2061,6 @@ static void fec_enet_mii_remove(struct fec_enet_private *fep)
}
}
-static int fec_enet_get_settings(struct net_device *ndev,
- struct ethtool_cmd *cmd)
-{
- struct fec_enet_private *fep = netdev_priv(ndev);
- struct phy_device *phydev = fep->phy_dev;
-
- if (!phydev)
- return -ENODEV;
-
- return phy_ethtool_gset(phydev, cmd);
-}
-
-static int fec_enet_set_settings(struct net_device *ndev,
- struct ethtool_cmd *cmd)
-{
- struct fec_enet_private *fep = netdev_priv(ndev);
- struct phy_device *phydev = fep->phy_dev;
-
- if (!phydev)
- return -ENODEV;
-
- return phy_ethtool_sset(phydev, cmd);
-}
-
static void fec_enet_get_drvinfo(struct net_device *ndev,
struct ethtool_drvinfo *info)
{
@@ -2214,7 +2193,7 @@ static int fec_enet_set_pauseparam(struct net_device *ndev,
{
struct fec_enet_private *fep = netdev_priv(ndev);
- if (!fep->phy_dev)
+ if (!ndev->phydev)
return -ENODEV;
if (pause->tx_pause != pause->rx_pause) {
@@ -2230,17 +2209,17 @@ static int fec_enet_set_pauseparam(struct net_device *ndev,
fep->pause_flag |= pause->autoneg ? FEC_PAUSE_FLAG_AUTONEG : 0;
if (pause->rx_pause || pause->autoneg) {
- fep->phy_dev->supported |= ADVERTISED_Pause;
- fep->phy_dev->advertising |= ADVERTISED_Pause;
+ ndev->phydev->supported |= ADVERTISED_Pause;
+ ndev->phydev->advertising |= ADVERTISED_Pause;
} else {
- fep->phy_dev->supported &= ~ADVERTISED_Pause;
- fep->phy_dev->advertising &= ~ADVERTISED_Pause;
+ ndev->phydev->supported &= ~ADVERTISED_Pause;
+ ndev->phydev->advertising &= ~ADVERTISED_Pause;
}
if (pause->autoneg) {
if (netif_running(ndev))
fec_stop(ndev);
- phy_start_aneg(fep->phy_dev);
+ phy_start_aneg(ndev->phydev);
}
if (netif_running(ndev)) {
napi_disable(&fep->napi);
@@ -2356,8 +2335,7 @@ static int fec_enet_get_sset_count(struct net_device *dev, int sset)
static int fec_enet_nway_reset(struct net_device *dev)
{
- struct fec_enet_private *fep = netdev_priv(dev);
- struct phy_device *phydev = fep->phy_dev;
+ struct phy_device *phydev = dev->phydev;
if (!phydev)
return -ENODEV;
@@ -2562,8 +2540,6 @@ fec_enet_set_wol(struct net_device *ndev, struct ethtool_wolinfo *wol)
}
static const struct ethtool_ops fec_enet_ethtool_ops = {
- .get_settings = fec_enet_get_settings,
- .set_settings = fec_enet_set_settings,
.get_drvinfo = fec_enet_get_drvinfo,
.get_regs_len = fec_enet_get_regs_len,
.get_regs = fec_enet_get_regs,
@@ -2583,12 +2559,14 @@ static const struct ethtool_ops fec_enet_ethtool_ops = {
.set_tunable = fec_enet_set_tunable,
.get_wol = fec_enet_get_wol,
.set_wol = fec_enet_set_wol,
+ .get_link_ksettings = phy_ethtool_get_link_ksettings,
+ .set_link_ksettings = phy_ethtool_set_link_ksettings,
};
static int fec_enet_ioctl(struct net_device *ndev, struct ifreq *rq, int cmd)
{
struct fec_enet_private *fep = netdev_priv(ndev);
- struct phy_device *phydev = fep->phy_dev;
+ struct phy_device *phydev = ndev->phydev;
if (!netif_running(ndev))
return -EINVAL;
@@ -2843,7 +2821,7 @@ fec_enet_open(struct net_device *ndev)
goto err_enet_mii_probe;
napi_enable(&fep->napi);
- phy_start(fep->phy_dev);
+ phy_start(ndev->phydev);
netif_tx_start_all_queues(ndev);
device_set_wakeup_enable(&ndev->dev, fep->wol_flag &
@@ -2867,7 +2845,7 @@ fec_enet_close(struct net_device *ndev)
{
struct fec_enet_private *fep = netdev_priv(ndev);
- phy_stop(fep->phy_dev);
+ phy_stop(ndev->phydev);
if (netif_device_present(ndev)) {
napi_disable(&fep->napi);
@@ -2875,8 +2853,7 @@ fec_enet_close(struct net_device *ndev)
fec_stop(ndev);
}
- phy_disconnect(fep->phy_dev);
- fep->phy_dev = NULL;
+ phy_disconnect(ndev->phydev);
fec_enet_clk_enable(ndev, false);
pinctrl_pm_select_sleep_state(&fep->pdev->dev);
@@ -3504,7 +3481,7 @@ static int __maybe_unused fec_suspend(struct device *dev)
if (netif_running(ndev)) {
if (fep->wol_flag & FEC_WOL_FLAG_ENABLE)
fep->wol_flag |= FEC_WOL_FLAG_SLEEP_ON;
- phy_stop(fep->phy_dev);
+ phy_stop(ndev->phydev);
napi_disable(&fep->napi);
netif_tx_lock_bh(ndev);
netif_device_detach(ndev);
@@ -3564,7 +3541,7 @@ static int __maybe_unused fec_resume(struct device *dev)
netif_device_attach(ndev);
netif_tx_unlock_bh(ndev);
napi_enable(&fep->napi);
- phy_start(fep->phy_dev);
+ phy_start(ndev->phydev);
}
rtnl_unlock();
diff --git a/drivers/net/ethernet/freescale/fec_mpc52xx.c b/drivers/net/ethernet/freescale/fec_mpc52xx.c
index 25553ee857b4..446ae9d60c71 100644
--- a/drivers/net/ethernet/freescale/fec_mpc52xx.c
+++ b/drivers/net/ethernet/freescale/fec_mpc52xx.c
@@ -66,7 +66,6 @@ struct mpc52xx_fec_priv {
/* MDIO link details */
unsigned int mdio_speed;
struct device_node *phy_node;
- struct phy_device *phydev;
enum phy_state link;
int seven_wire_mode;
};
@@ -165,7 +164,7 @@ static int mpc52xx_fec_alloc_rx_buffers(struct net_device *dev, struct bcom_task
static void mpc52xx_fec_adjust_link(struct net_device *dev)
{
struct mpc52xx_fec_priv *priv = netdev_priv(dev);
- struct phy_device *phydev = priv->phydev;
+ struct phy_device *phydev = dev->phydev;
int new_state = 0;
if (phydev->link != PHY_DOWN) {
@@ -215,16 +214,17 @@ static void mpc52xx_fec_adjust_link(struct net_device *dev)
static int mpc52xx_fec_open(struct net_device *dev)
{
struct mpc52xx_fec_priv *priv = netdev_priv(dev);
+ struct phy_device *phydev = NULL;
int err = -EBUSY;
if (priv->phy_node) {
- priv->phydev = of_phy_connect(priv->ndev, priv->phy_node,
- mpc52xx_fec_adjust_link, 0, 0);
- if (!priv->phydev) {
+ phydev = of_phy_connect(priv->ndev, priv->phy_node,
+ mpc52xx_fec_adjust_link, 0, 0);
+ if (!phydev) {
dev_err(&dev->dev, "of_phy_connect failed\n");
return -ENODEV;
}
- phy_start(priv->phydev);
+ phy_start(phydev);
}
if (request_irq(dev->irq, mpc52xx_fec_interrupt, IRQF_SHARED,
@@ -268,10 +268,9 @@ static int mpc52xx_fec_open(struct net_device *dev)
free_ctrl_irq:
free_irq(dev->irq, dev);
free_phy:
- if (priv->phydev) {
- phy_stop(priv->phydev);
- phy_disconnect(priv->phydev);
- priv->phydev = NULL;
+ if (phydev) {
+ phy_stop(phydev);
+ phy_disconnect(phydev);
}
return err;
@@ -280,6 +279,7 @@ static int mpc52xx_fec_open(struct net_device *dev)
static int mpc52xx_fec_close(struct net_device *dev)
{
struct mpc52xx_fec_priv *priv = netdev_priv(dev);
+ struct phy_device *phydev = dev->phydev;
netif_stop_queue(dev);
@@ -291,11 +291,10 @@ static int mpc52xx_fec_close(struct net_device *dev)
free_irq(priv->r_irq, dev);
free_irq(priv->t_irq, dev);
- if (priv->phydev) {
+ if (phydev) {
/* power down phy */
- phy_stop(priv->phydev);
- phy_disconnect(priv->phydev);
- priv->phydev = NULL;
+ phy_stop(phydev);
+ phy_disconnect(phydev);
}
return 0;
@@ -763,26 +762,6 @@ static void mpc52xx_fec_reset(struct net_device *dev)
/* ethtool interface */
-static int mpc52xx_fec_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
- struct mpc52xx_fec_priv *priv = netdev_priv(dev);
-
- if (!priv->phydev)
- return -ENODEV;
-
- return phy_ethtool_gset(priv->phydev, cmd);
-}
-
-static int mpc52xx_fec_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
- struct mpc52xx_fec_priv *priv = netdev_priv(dev);
-
- if (!priv->phydev)
- return -ENODEV;
-
- return phy_ethtool_sset(priv->phydev, cmd);
-}
-
static u32 mpc52xx_fec_get_msglevel(struct net_device *dev)
{
struct mpc52xx_fec_priv *priv = netdev_priv(dev);
@@ -796,23 +775,23 @@ static void mpc52xx_fec_set_msglevel(struct net_device *dev, u32 level)
}
static const struct ethtool_ops mpc52xx_fec_ethtool_ops = {
- .get_settings = mpc52xx_fec_get_settings,
- .set_settings = mpc52xx_fec_set_settings,
.get_link = ethtool_op_get_link,
.get_msglevel = mpc52xx_fec_get_msglevel,
.set_msglevel = mpc52xx_fec_set_msglevel,
.get_ts_info = ethtool_op_get_ts_info,
+ .get_link_ksettings = phy_ethtool_get_link_ksettings,
+ .set_link_ksettings = phy_ethtool_set_link_ksettings,
};
static int mpc52xx_fec_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
- struct mpc52xx_fec_priv *priv = netdev_priv(dev);
+ struct phy_device *phydev = dev->phydev;
- if (!priv->phydev)
+ if (!phydev)
return -ENOTSUPP;
- return phy_mii_ioctl(priv->phydev, rq, cmd);
+ return phy_mii_ioctl(phydev, rq, cmd);
}
static const struct net_device_ops mpc52xx_fec_netdev_ops = {
diff --git a/drivers/net/ethernet/freescale/fman/fman.c b/drivers/net/ethernet/freescale/fman/fman.c
index ea83712a6d62..1de2e1e51c2b 100644
--- a/drivers/net/ethernet/freescale/fman/fman.c
+++ b/drivers/net/ethernet/freescale/fman/fman.c
@@ -615,7 +615,7 @@ struct fman {
struct fman_cfg *cfg;
struct muram_info *muram;
/* cam section in muram */
- int cam_offset;
+ unsigned long cam_offset;
size_t cam_size;
/* Fifo in MURAM */
int fifo_offset;
@@ -2772,7 +2772,7 @@ static struct fman *read_dts_node(struct platform_device *of_dev)
/* Get the FM address */
res = platform_get_resource(of_dev, IORESOURCE_MEM, 0);
if (!res) {
- dev_err(&of_dev->dev, "%s: Can't get FMan memory resouce\n",
+ dev_err(&of_dev->dev, "%s: Can't get FMan memory resource\n",
__func__);
goto fman_node_put;
}
diff --git a/drivers/net/ethernet/freescale/fman/fman_muram.c b/drivers/net/ethernet/freescale/fman/fman_muram.c
index 4eb0e9ac7182..47394c45b6e8 100644
--- a/drivers/net/ethernet/freescale/fman/fman_muram.c
+++ b/drivers/net/ethernet/freescale/fman/fman_muram.c
@@ -129,7 +129,7 @@ unsigned long fman_muram_offset_to_vbase(struct muram_info *muram,
*
* Return: address of the allocated memory; NULL otherwise.
*/
-int fman_muram_alloc(struct muram_info *muram, size_t size)
+unsigned long fman_muram_alloc(struct muram_info *muram, size_t size)
{
unsigned long vaddr;
@@ -150,7 +150,7 @@ int fman_muram_alloc(struct muram_info *muram, size_t size)
*
* Free an allocated memory from FM-MURAM partition.
*/
-void fman_muram_free_mem(struct muram_info *muram, u32 offset, size_t size)
+void fman_muram_free_mem(struct muram_info *muram, unsigned long offset, size_t size)
{
unsigned long addr = fman_muram_offset_to_vbase(muram, offset);
diff --git a/drivers/net/ethernet/freescale/fman/fman_muram.h b/drivers/net/ethernet/freescale/fman/fman_muram.h
index dbf0af9e5bb5..889649ad8931 100644
--- a/drivers/net/ethernet/freescale/fman/fman_muram.h
+++ b/drivers/net/ethernet/freescale/fman/fman_muram.h
@@ -44,8 +44,8 @@ struct muram_info *fman_muram_init(phys_addr_t base, size_t size);
unsigned long fman_muram_offset_to_vbase(struct muram_info *muram,
unsigned long offset);
-int fman_muram_alloc(struct muram_info *muram, size_t size);
+unsigned long fman_muram_alloc(struct muram_info *muram, size_t size);
-void fman_muram_free_mem(struct muram_info *muram, u32 offset, size_t size);
+void fman_muram_free_mem(struct muram_info *muram, unsigned long offset, size_t size);
#endif /* __FM_MURAM_EXT */
diff --git a/drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c b/drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c
index 48a9c176e0d1..61fd486c50bb 100644
--- a/drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c
+++ b/drivers/net/ethernet/freescale/fs_enet/fs_enet-main.c
@@ -652,13 +652,13 @@ static void fs_timeout(struct net_device *dev)
spin_lock_irqsave(&fep->lock, flags);
if (dev->flags & IFF_UP) {
- phy_stop(fep->phydev);
+ phy_stop(dev->phydev);
(*fep->ops->stop)(dev);
(*fep->ops->restart)(dev);
- phy_start(fep->phydev);
+ phy_start(dev->phydev);
}
- phy_start(fep->phydev);
+ phy_start(dev->phydev);
wake = fep->tx_free && !(CBDR_SC(fep->cur_tx) & BD_ENET_TX_READY);
spin_unlock_irqrestore(&fep->lock, flags);
@@ -672,7 +672,7 @@ static void fs_timeout(struct net_device *dev)
static void generic_adjust_link(struct net_device *dev)
{
struct fs_enet_private *fep = netdev_priv(dev);
- struct phy_device *phydev = fep->phydev;
+ struct phy_device *phydev = dev->phydev;
int new_state = 0;
if (phydev->link) {
@@ -741,8 +741,6 @@ static int fs_init_phy(struct net_device *dev)
return -ENODEV;
}
- fep->phydev = phydev;
-
return 0;
}
@@ -776,7 +774,7 @@ static int fs_enet_open(struct net_device *dev)
napi_disable(&fep->napi_tx);
return err;
}
- phy_start(fep->phydev);
+ phy_start(dev->phydev);
netif_start_queue(dev);
@@ -792,7 +790,7 @@ static int fs_enet_close(struct net_device *dev)
netif_carrier_off(dev);
napi_disable(&fep->napi);
napi_disable(&fep->napi_tx);
- phy_stop(fep->phydev);
+ phy_stop(dev->phydev);
spin_lock_irqsave(&fep->lock, flags);
spin_lock(&fep->tx_lock);
@@ -801,8 +799,7 @@ static int fs_enet_close(struct net_device *dev)
spin_unlock_irqrestore(&fep->lock, flags);
/* release any irqs */
- phy_disconnect(fep->phydev);
- fep->phydev = NULL;
+ phy_disconnect(dev->phydev);
free_irq(fep->interrupt, dev);
return 0;
@@ -847,26 +844,6 @@ static void fs_get_regs(struct net_device *dev, struct ethtool_regs *regs,
regs->version = 0;
}
-static int fs_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
- struct fs_enet_private *fep = netdev_priv(dev);
-
- if (!fep->phydev)
- return -ENODEV;
-
- return phy_ethtool_gset(fep->phydev, cmd);
-}
-
-static int fs_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
- struct fs_enet_private *fep = netdev_priv(dev);
-
- if (!fep->phydev)
- return -ENODEV;
-
- return phy_ethtool_sset(fep->phydev, cmd);
-}
-
static int fs_nway_reset(struct net_device *dev)
{
return 0;
@@ -887,24 +864,22 @@ static void fs_set_msglevel(struct net_device *dev, u32 value)
static const struct ethtool_ops fs_ethtool_ops = {
.get_drvinfo = fs_get_drvinfo,
.get_regs_len = fs_get_regs_len,
- .get_settings = fs_get_settings,
- .set_settings = fs_set_settings,
.nway_reset = fs_nway_reset,
.get_link = ethtool_op_get_link,
.get_msglevel = fs_get_msglevel,
.set_msglevel = fs_set_msglevel,
.get_regs = fs_get_regs,
.get_ts_info = ethtool_op_get_ts_info,
+ .get_link_ksettings = phy_ethtool_get_link_ksettings,
+ .set_link_ksettings = phy_ethtool_set_link_ksettings,
};
static int fs_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
- struct fs_enet_private *fep = netdev_priv(dev);
-
if (!netif_running(dev))
return -EINVAL;
- return phy_mii_ioctl(fep->phydev, rq, cmd);
+ return phy_mii_ioctl(dev->phydev, rq, cmd);
}
extern int fs_mii_connect(struct net_device *dev);
diff --git a/drivers/net/ethernet/freescale/fs_enet/fs_enet.h b/drivers/net/ethernet/freescale/fs_enet/fs_enet.h
index f184d8f952e2..e29f54a35210 100644
--- a/drivers/net/ethernet/freescale/fs_enet/fs_enet.h
+++ b/drivers/net/ethernet/freescale/fs_enet/fs_enet.h
@@ -149,7 +149,6 @@ struct fs_enet_private {
unsigned int last_mii_status;
int interrupt;
- struct phy_device *phydev;
int oldduplex, oldspeed, oldlink; /* current settings */
/* event masks */
diff --git a/drivers/net/ethernet/freescale/fs_enet/mac-fcc.c b/drivers/net/ethernet/freescale/fs_enet/mac-fcc.c
index 1ba359f17ec6..d71761a34022 100644
--- a/drivers/net/ethernet/freescale/fs_enet/mac-fcc.c
+++ b/drivers/net/ethernet/freescale/fs_enet/mac-fcc.c
@@ -370,7 +370,7 @@ static void restart(struct net_device *dev)
/* adjust to speed (for RMII mode) */
if (fpi->use_rmii) {
- if (fep->phydev->speed == 100)
+ if (dev->phydev->speed == 100)
C8(fcccp, fcc_gfemr, 0x20);
else
S8(fcccp, fcc_gfemr, 0x20);
@@ -396,7 +396,7 @@ static void restart(struct net_device *dev)
S32(fccp, fcc_fpsmr, FCC_PSMR_RMII);
/* adjust to duplex mode */
- if (fep->phydev->duplex)
+ if (dev->phydev->duplex)
S32(fccp, fcc_fpsmr, FCC_PSMR_FDE | FCC_PSMR_LPB);
else
C32(fccp, fcc_fpsmr, FCC_PSMR_FDE | FCC_PSMR_LPB);
diff --git a/drivers/net/ethernet/freescale/fs_enet/mac-fec.c b/drivers/net/ethernet/freescale/fs_enet/mac-fec.c
index bade2f8f9b5c..35a318ed3a62 100644
--- a/drivers/net/ethernet/freescale/fs_enet/mac-fec.c
+++ b/drivers/net/ethernet/freescale/fs_enet/mac-fec.c
@@ -254,7 +254,7 @@ static void restart(struct net_device *dev)
int r;
u32 addrhi, addrlo;
- struct mii_bus *mii = fep->phydev->mdio.bus;
+ struct mii_bus *mii = dev->phydev->mdio.bus;
struct fec_info* fec_inf = mii->priv;
r = whack_reset(fep->fec.fecp);
@@ -333,7 +333,7 @@ static void restart(struct net_device *dev)
/*
* adjust to duplex mode
*/
- if (fep->phydev->duplex) {
+ if (dev->phydev->duplex) {
FC(fecp, r_cntrl, FEC_RCNTRL_DRT);
FS(fecp, x_cntrl, FEC_TCNTRL_FDEN); /* FD enable */
} else {
@@ -363,7 +363,7 @@ static void stop(struct net_device *dev)
const struct fs_platform_info *fpi = fep->fpi;
struct fec __iomem *fecp = fep->fec.fecp;
- struct fec_info *feci = fep->phydev->mdio.bus->priv;
+ struct fec_info *feci = dev->phydev->mdio.bus->priv;
int i;
diff --git a/drivers/net/ethernet/freescale/fs_enet/mac-scc.c b/drivers/net/ethernet/freescale/fs_enet/mac-scc.c
index 7a184e8816a4..e8b9c33d35b4 100644
--- a/drivers/net/ethernet/freescale/fs_enet/mac-scc.c
+++ b/drivers/net/ethernet/freescale/fs_enet/mac-scc.c
@@ -352,7 +352,7 @@ static void restart(struct net_device *dev)
W16(sccp, scc_psmr, SCC_PSMR_ENCRC | SCC_PSMR_NIB22);
/* Set full duplex mode if needed */
- if (fep->phydev->duplex)
+ if (dev->phydev->duplex)
S16(sccp, scc_psmr, SCC_PSMR_LPB | SCC_PSMR_FDE);
/* Restore multicast and promiscuous settings */
diff --git a/drivers/net/ethernet/freescale/gianfar.c b/drivers/net/ethernet/freescale/gianfar.c
index d2f917af539f..7615e0668acb 100644
--- a/drivers/net/ethernet/freescale/gianfar.c
+++ b/drivers/net/ethernet/freescale/gianfar.c
@@ -999,7 +999,7 @@ static int gfar_hwtstamp_get(struct net_device *netdev, struct ifreq *ifr)
static int gfar_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
{
- struct gfar_private *priv = netdev_priv(dev);
+ struct phy_device *phydev = dev->phydev;
if (!netif_running(dev))
return -EINVAL;
@@ -1009,10 +1009,10 @@ static int gfar_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
if (cmd == SIOCGHWTSTAMP)
return gfar_hwtstamp_get(dev, rq);
- if (!priv->phydev)
+ if (!phydev)
return -ENODEV;
- return phy_mii_ioctl(priv->phydev, rq, cmd);
+ return phy_mii_ioctl(phydev, rq, cmd);
}
static u32 cluster_entry_per_class(struct gfar_private *priv, u32 rqfar,
@@ -1635,7 +1635,7 @@ static int gfar_suspend(struct device *dev)
gfar_start_wol_filer(priv);
} else {
- phy_stop(priv->phydev);
+ phy_stop(ndev->phydev);
}
return 0;
@@ -1664,7 +1664,7 @@ static int gfar_resume(struct device *dev)
gfar_filer_restore_table(priv);
} else {
- phy_start(priv->phydev);
+ phy_start(ndev->phydev);
}
gfar_start(priv);
@@ -1698,8 +1698,8 @@ static int gfar_restore(struct device *dev)
priv->oldspeed = 0;
priv->oldduplex = -1;
- if (priv->phydev)
- phy_start(priv->phydev);
+ if (ndev->phydev)
+ phy_start(ndev->phydev);
netif_device_attach(ndev);
enable_napi(priv);
@@ -1778,6 +1778,7 @@ static int init_phy(struct net_device *dev)
priv->device_flags & FSL_GIANFAR_DEV_HAS_GIGABIT ?
GFAR_SUPPORTED_GBIT : 0;
phy_interface_t interface;
+ struct phy_device *phydev;
priv->oldlink = 0;
priv->oldspeed = 0;
@@ -1785,9 +1786,9 @@ static int init_phy(struct net_device *dev)
interface = gfar_get_interface(dev);
- priv->phydev = of_phy_connect(dev, priv->phy_node, &adjust_link, 0,
- interface);
- if (!priv->phydev) {
+ phydev = of_phy_connect(dev, priv->phy_node, &adjust_link, 0,
+ interface);
+ if (!phydev) {
dev_err(&dev->dev, "could not attach to PHY\n");
return -ENODEV;
}
@@ -1796,11 +1797,11 @@ static int init_phy(struct net_device *dev)
gfar_configure_serdes(dev);
/* Remove any features not supported by the controller */
- priv->phydev->supported &= (GFAR_SUPPORTED | gigabit_support);
- priv->phydev->advertising = priv->phydev->supported;
+ phydev->supported &= (GFAR_SUPPORTED | gigabit_support);
+ phydev->advertising = phydev->supported;
/* Add support for flow control, but don't advertise it by default */
- priv->phydev->supported |= (SUPPORTED_Pause | SUPPORTED_Asym_Pause);
+ phydev->supported |= (SUPPORTED_Pause | SUPPORTED_Asym_Pause);
return 0;
}
@@ -1944,7 +1945,7 @@ void stop_gfar(struct net_device *dev)
/* disable ints and gracefully shut down Rx/Tx DMA */
gfar_halt(priv);
- phy_stop(priv->phydev);
+ phy_stop(dev->phydev);
free_skb_resources(priv);
}
@@ -2076,7 +2077,7 @@ void gfar_start(struct gfar_private *priv)
gfar_ints_enable(priv);
- priv->ndev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(priv->ndev); /* prevent tx timeout */
}
static void free_grp_irqs(struct gfar_priv_grp *grp)
@@ -2204,7 +2205,7 @@ int startup_gfar(struct net_device *ndev)
priv->oldspeed = 0;
priv->oldduplex = -1;
- phy_start(priv->phydev);
+ phy_start(ndev->phydev);
enable_napi(priv);
@@ -2572,8 +2573,7 @@ static int gfar_close(struct net_device *dev)
stop_gfar(dev);
/* Disconnect from the PHY */
- phy_disconnect(priv->phydev);
- priv->phydev = NULL;
+ phy_disconnect(dev->phydev);
gfar_free_irq(priv);
@@ -3379,7 +3379,7 @@ static irqreturn_t gfar_interrupt(int irq, void *grp_id)
static void adjust_link(struct net_device *dev)
{
struct gfar_private *priv = netdev_priv(dev);
- struct phy_device *phydev = priv->phydev;
+ struct phy_device *phydev = dev->phydev;
if (unlikely(phydev->link != priv->oldlink ||
(phydev->link && (phydev->duplex != priv->oldduplex ||
@@ -3620,7 +3620,8 @@ static irqreturn_t gfar_error(int irq, void *grp_id)
static u32 gfar_get_flowctrl_cfg(struct gfar_private *priv)
{
- struct phy_device *phydev = priv->phydev;
+ struct net_device *ndev = priv->ndev;
+ struct phy_device *phydev = ndev->phydev;
u32 val = 0;
if (!phydev->duplex)
@@ -3660,7 +3661,8 @@ static u32 gfar_get_flowctrl_cfg(struct gfar_private *priv)
static noinline void gfar_update_link_state(struct gfar_private *priv)
{
struct gfar __iomem *regs = priv->gfargrp[0].regs;
- struct phy_device *phydev = priv->phydev;
+ struct net_device *ndev = priv->ndev;
+ struct phy_device *phydev = ndev->phydev;
struct gfar_priv_rx_q *rx_queue = NULL;
int i;
diff --git a/drivers/net/ethernet/freescale/gianfar.h b/drivers/net/ethernet/freescale/gianfar.h
index cb77667971a7..373fd094f2f3 100644
--- a/drivers/net/ethernet/freescale/gianfar.h
+++ b/drivers/net/ethernet/freescale/gianfar.h
@@ -1153,7 +1153,6 @@ struct gfar_private {
phy_interface_t interface;
struct device_node *phy_node;
struct device_node *tbi_node;
- struct phy_device *phydev;
struct mii_bus *mii_bus;
int oldspeed;
int oldduplex;
diff --git a/drivers/net/ethernet/freescale/gianfar_ethtool.c b/drivers/net/ethernet/freescale/gianfar_ethtool.c
index 4b0ee855edd7..56588f2e1d91 100644
--- a/drivers/net/ethernet/freescale/gianfar_ethtool.c
+++ b/drivers/net/ethernet/freescale/gianfar_ethtool.c
@@ -184,40 +184,6 @@ static void gfar_gdrvinfo(struct net_device *dev,
strlcpy(drvinfo->bus_info, "N/A", sizeof(drvinfo->bus_info));
}
-
-static int gfar_ssettings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
- struct gfar_private *priv = netdev_priv(dev);
- struct phy_device *phydev = priv->phydev;
-
- if (NULL == phydev)
- return -ENODEV;
-
- return phy_ethtool_sset(phydev, cmd);
-}
-
-
-/* Return the current settings in the ethtool_cmd structure */
-static int gfar_gsettings(struct net_device *dev, struct ethtool_cmd *cmd)
-{
- struct gfar_private *priv = netdev_priv(dev);
- struct phy_device *phydev = priv->phydev;
- struct gfar_priv_rx_q *rx_queue = NULL;
- struct gfar_priv_tx_q *tx_queue = NULL;
-
- if (NULL == phydev)
- return -ENODEV;
- tx_queue = priv->tx_queue[0];
- rx_queue = priv->rx_queue[0];
-
- /* etsec-1.7 and older versions have only one txic
- * and rxic regs although they support multiple queues */
- cmd->maxtxpkt = get_icft_value(tx_queue->txic);
- cmd->maxrxpkt = get_icft_value(rx_queue->rxic);
-
- return phy_ethtool_gset(phydev, cmd);
-}
-
/* Return the length of the register structure */
static int gfar_reglen(struct net_device *dev)
{
@@ -242,10 +208,12 @@ static void gfar_get_regs(struct net_device *dev, struct ethtool_regs *regs,
static unsigned int gfar_usecs2ticks(struct gfar_private *priv,
unsigned int usecs)
{
+ struct net_device *ndev = priv->ndev;
+ struct phy_device *phydev = ndev->phydev;
unsigned int count;
/* The timer is different, depending on the interface speed */
- switch (priv->phydev->speed) {
+ switch (phydev->speed) {
case SPEED_1000:
count = GFAR_GBIT_TIME;
break;
@@ -267,10 +235,12 @@ static unsigned int gfar_usecs2ticks(struct gfar_private *priv,
static unsigned int gfar_ticks2usecs(struct gfar_private *priv,
unsigned int ticks)
{
+ struct net_device *ndev = priv->ndev;
+ struct phy_device *phydev = ndev->phydev;
unsigned int count;
/* The timer is different, depending on the interface speed */
- switch (priv->phydev->speed) {
+ switch (phydev->speed) {
case SPEED_1000:
count = GFAR_GBIT_TIME;
break;
@@ -304,7 +274,7 @@ static int gfar_gcoalesce(struct net_device *dev,
if (!(priv->device_flags & FSL_GIANFAR_DEV_HAS_COALESCE))
return -EOPNOTSUPP;
- if (NULL == priv->phydev)
+ if (!dev->phydev)
return -ENODEV;
rx_queue = priv->rx_queue[0];
@@ -365,7 +335,7 @@ static int gfar_scoalesce(struct net_device *dev,
if (!(priv->device_flags & FSL_GIANFAR_DEV_HAS_COALESCE))
return -EOPNOTSUPP;
- if (NULL == priv->phydev)
+ if (!dev->phydev)
return -ENODEV;
/* Check the bounds of the values */
@@ -529,7 +499,7 @@ static int gfar_spauseparam(struct net_device *dev,
struct ethtool_pauseparam *epause)
{
struct gfar_private *priv = netdev_priv(dev);
- struct phy_device *phydev = priv->phydev;
+ struct phy_device *phydev = dev->phydev;
struct gfar __iomem *regs = priv->gfargrp[0].regs;
u32 oldadv, newadv;
@@ -1565,8 +1535,6 @@ static int gfar_get_ts_info(struct net_device *dev,
}
const struct ethtool_ops gfar_ethtool_ops = {
- .get_settings = gfar_gsettings,
- .set_settings = gfar_ssettings,
.get_drvinfo = gfar_gdrvinfo,
.get_regs_len = gfar_reglen,
.get_regs = gfar_get_regs,
@@ -1589,4 +1557,6 @@ const struct ethtool_ops gfar_ethtool_ops = {
.set_rxnfc = gfar_set_nfc,
.get_rxnfc = gfar_get_nfc,
.get_ts_info = gfar_get_ts_info,
+ .get_link_ksettings = phy_ethtool_get_link_ksettings,
+ .set_link_ksettings = phy_ethtool_set_link_ksettings,
};
diff --git a/drivers/net/ethernet/freescale/ucc_geth_ethtool.c b/drivers/net/ethernet/freescale/ucc_geth_ethtool.c
index 89714f5e0dfc..812a968a78e9 100644
--- a/drivers/net/ethernet/freescale/ucc_geth_ethtool.c
+++ b/drivers/net/ethernet/freescale/ucc_geth_ethtool.c
@@ -105,23 +105,20 @@ static const char rx_fw_stat_gstrings[][ETH_GSTRING_LEN] = {
#define UEC_RX_FW_STATS_LEN ARRAY_SIZE(rx_fw_stat_gstrings)
static int
-uec_get_settings(struct net_device *netdev, struct ethtool_cmd *ecmd)
+uec_get_ksettings(struct net_device *netdev, struct ethtool_link_ksettings *cmd)
{
struct ucc_geth_private *ugeth = netdev_priv(netdev);
struct phy_device *phydev = ugeth->phydev;
- struct ucc_geth_info *ug_info = ugeth->ug_info;
if (!phydev)
return -ENODEV;
- ecmd->maxtxpkt = 1;
- ecmd->maxrxpkt = ug_info->interruptcoalescingmaxvalue[0];
-
- return phy_ethtool_gset(phydev, ecmd);
+ return phy_ethtool_ksettings_get(phydev, cmd);
}
static int
-uec_set_settings(struct net_device *netdev, struct ethtool_cmd *ecmd)
+uec_set_ksettings(struct net_device *netdev,
+ const struct ethtool_link_ksettings *cmd)
{
struct ucc_geth_private *ugeth = netdev_priv(netdev);
struct phy_device *phydev = ugeth->phydev;
@@ -129,7 +126,7 @@ uec_set_settings(struct net_device *netdev, struct ethtool_cmd *ecmd)
if (!phydev)
return -ENODEV;
- return phy_ethtool_sset(phydev, ecmd);
+ return phy_ethtool_ksettings_set(phydev, cmd);
}
static void
@@ -392,8 +389,6 @@ static int uec_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
#endif /* CONFIG_PM */
static const struct ethtool_ops uec_ethtool_ops = {
- .get_settings = uec_get_settings,
- .set_settings = uec_set_settings,
.get_drvinfo = uec_get_drvinfo,
.get_regs_len = uec_get_regs_len,
.get_regs = uec_get_regs,
@@ -411,6 +406,8 @@ static const struct ethtool_ops uec_ethtool_ops = {
.get_wol = uec_get_wol,
.set_wol = uec_set_wol,
.get_ts_info = ethtool_op_get_ts_info,
+ .get_link_ksettings = uec_get_ksettings,
+ .set_link_ksettings = uec_set_ksettings,
};
void uec_set_ethtool_ops(struct net_device *netdev)
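Both conversions above follow the same pattern: the legacy get_settings/set_settings hooks built on struct ethtool_cmd are dropped in favour of get_link_ksettings/set_link_ksettings and struct ethtool_link_ksettings, and the maxtxpkt/maxrxpkt assignments disappear because the new structure no longer carries those coalescing hints. A minimal sketch for a hypothetical driver "foo" that, like ucc_geth, keeps its PHY pointer in private data; drivers that attach the PHY to ndev->phydev, as gianfar does after this patch, can instead point the ops straight at phy_ethtool_get_link_ksettings()/phy_ethtool_set_link_ksettings():

#include <linux/ethtool.h>
#include <linux/netdevice.h>
#include <linux/phy.h>

struct foo_private {			/* hypothetical private struct */
	struct phy_device *phydev;
};

static int foo_get_ksettings(struct net_device *ndev,
			     struct ethtool_link_ksettings *cmd)
{
	struct foo_private *priv = netdev_priv(ndev);

	if (!priv->phydev)
		return -ENODEV;

	return phy_ethtool_ksettings_get(priv->phydev, cmd);
}

static int foo_set_ksettings(struct net_device *ndev,
			     const struct ethtool_link_ksettings *cmd)
{
	struct foo_private *priv = netdev_priv(ndev);

	if (!priv->phydev)
		return -ENODEV;

	return phy_ethtool_ksettings_set(priv->phydev, cmd);
}

static const struct ethtool_ops foo_ethtool_ops = {
	.get_link_ksettings	= foo_get_ksettings,
	.set_link_ksettings	= foo_set_ksettings,
};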
diff --git a/drivers/net/ethernet/fujitsu/fmvj18x_cs.c b/drivers/net/ethernet/fujitsu/fmvj18x_cs.c
index 678f5018d0be..399cfd217288 100644
--- a/drivers/net/ethernet/fujitsu/fmvj18x_cs.c
+++ b/drivers/net/ethernet/fujitsu/fmvj18x_cs.c
@@ -746,7 +746,7 @@ static irqreturn_t fjn_interrupt(int dummy, void *dev_id)
lp->sent = lp->tx_queue ;
lp->tx_queue = 0;
lp->tx_queue_len = 0;
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
} else {
lp->tx_started = 0;
}
diff --git a/drivers/net/ethernet/hisilicon/hix5hd2_gmac.c b/drivers/net/ethernet/hisilicon/hix5hd2_gmac.c
index e51892d518ff..b9f2ea59308a 100644
--- a/drivers/net/ethernet/hisilicon/hix5hd2_gmac.c
+++ b/drivers/net/ethernet/hisilicon/hix5hd2_gmac.c
@@ -636,7 +636,7 @@ static int hix5hd2_net_xmit(struct sk_buff *skb, struct net_device *dev)
pos = dma_ring_incr(pos, TX_DESC_NUM);
writel_relaxed(dma_byte(pos), priv->base + TX_BQ_WR_ADDR);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
dev->stats.tx_packets++;
dev->stats.tx_bytes += skb->len;
netdev_sent_queue(dev, skb->len);
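The two hunks above belong to the same tree-wide cleanup: with the watchdog timestamp moved from struct net_device into the per-queue struct netdev_queue, open-coded "dev->trans_start = jiffies" assignments are replaced by netif_trans_update(dev). As a rough sketch (not necessarily the exact upstream definition), the helper refreshes queue 0's timestamp:

static inline void netif_trans_update(struct net_device *dev)
{
	struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

	if (txq->trans_start != jiffies)
		txq->trans_start = jiffies;
}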
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c b/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c
index a1cb461ac45f..7a757e88c89a 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c
+++ b/drivers/net/ethernet/hisilicon/hns/hns_ae_adapt.c
@@ -29,25 +29,6 @@ static struct hns_mac_cb *hns_get_mac_cb(struct hnae_handle *handle)
return vf_cb->mac_cb;
}
-/**
- * hns_ae_map_eport_to_dport - translate enet port id to dsaf port id
- * @port_id: enet port id
- *: debug port 0-1, service port 2 -7 (dsaf mode only 2)
- * return: dsaf port id
- *: service ports 0 - 5, debug port 6-7
- **/
-static int hns_ae_map_eport_to_dport(u32 port_id)
-{
- int port_index;
-
- if (port_id < DSAF_DEBUG_NW_NUM)
- port_index = port_id + DSAF_SERVICE_PORT_NUM_PER_DSAF;
- else
- port_index = port_id - DSAF_DEBUG_NW_NUM;
-
- return port_index;
-}
-
static struct dsaf_device *hns_ae_get_dsaf_dev(struct hnae_ae_dev *dev)
{
return container_of(dev, struct dsaf_device, ae_dev);
@@ -56,50 +37,35 @@ static struct dsaf_device *hns_ae_get_dsaf_dev(struct hnae_ae_dev *dev)
static struct hns_ppe_cb *hns_get_ppe_cb(struct hnae_handle *handle)
{
int ppe_index;
- int ppe_common_index;
struct ppe_common_cb *ppe_comm;
struct hnae_vf_cb *vf_cb = hns_ae_get_vf_cb(handle);
- if (vf_cb->port_index < DSAF_SERVICE_PORT_NUM_PER_DSAF) {
- ppe_index = vf_cb->port_index;
- ppe_common_index = 0;
- } else {
- ppe_index = 0;
- ppe_common_index =
- vf_cb->port_index - DSAF_SERVICE_PORT_NUM_PER_DSAF + 1;
- }
- ppe_comm = vf_cb->dsaf_dev->ppe_common[ppe_common_index];
+ ppe_comm = vf_cb->dsaf_dev->ppe_common[0];
+ ppe_index = vf_cb->port_index;
+
return &ppe_comm->ppe_cb[ppe_index];
}
static int hns_ae_get_q_num_per_vf(
struct dsaf_device *dsaf_dev, int port)
{
- int common_idx = hns_dsaf_get_comm_idx_by_port(port);
-
- return dsaf_dev->rcb_common[common_idx]->max_q_per_vf;
+ return dsaf_dev->rcb_common[0]->max_q_per_vf;
}
static int hns_ae_get_vf_num_per_port(
struct dsaf_device *dsaf_dev, int port)
{
- int common_idx = hns_dsaf_get_comm_idx_by_port(port);
-
- return dsaf_dev->rcb_common[common_idx]->max_vfn;
+ return dsaf_dev->rcb_common[0]->max_vfn;
}
static struct ring_pair_cb *hns_ae_get_base_ring_pair(
struct dsaf_device *dsaf_dev, int port)
{
- int common_idx = hns_dsaf_get_comm_idx_by_port(port);
- struct rcb_common_cb *rcb_comm = dsaf_dev->rcb_common[common_idx];
+ struct rcb_common_cb *rcb_comm = dsaf_dev->rcb_common[0];
int q_num = rcb_comm->max_q_per_vf;
int vf_num = rcb_comm->max_vfn;
- if (common_idx == HNS_DSAF_COMM_SERVICE_NW_IDX)
- return &rcb_comm->ring_pair_cb[port * q_num * vf_num];
- else
- return &rcb_comm->ring_pair_cb[0];
+ return &rcb_comm->ring_pair_cb[port * q_num * vf_num];
}
static struct ring_pair_cb *hns_ae_get_ring_pair(struct hnae_queue *q)
@@ -110,7 +76,6 @@ static struct ring_pair_cb *hns_ae_get_ring_pair(struct hnae_queue *q)
struct hnae_handle *hns_ae_get_handle(struct hnae_ae_dev *dev,
u32 port_id)
{
- int port_idx;
int vfnum_per_port;
int qnum_per_vf;
int i;
@@ -120,11 +85,10 @@ struct hnae_handle *hns_ae_get_handle(struct hnae_ae_dev *dev,
struct hnae_vf_cb *vf_cb;
dsaf_dev = hns_ae_get_dsaf_dev(dev);
- port_idx = hns_ae_map_eport_to_dport(port_id);
- ring_pair_cb = hns_ae_get_base_ring_pair(dsaf_dev, port_idx);
- vfnum_per_port = hns_ae_get_vf_num_per_port(dsaf_dev, port_idx);
- qnum_per_vf = hns_ae_get_q_num_per_vf(dsaf_dev, port_idx);
+ ring_pair_cb = hns_ae_get_base_ring_pair(dsaf_dev, port_id);
+ vfnum_per_port = hns_ae_get_vf_num_per_port(dsaf_dev, port_id);
+ qnum_per_vf = hns_ae_get_q_num_per_vf(dsaf_dev, port_id);
vf_cb = kzalloc(sizeof(*vf_cb) +
qnum_per_vf * sizeof(struct hnae_queue *), GFP_KERNEL);
@@ -163,14 +127,14 @@ struct hnae_handle *hns_ae_get_handle(struct hnae_ae_dev *dev,
}
vf_cb->dsaf_dev = dsaf_dev;
- vf_cb->port_index = port_idx;
- vf_cb->mac_cb = &dsaf_dev->mac_cb[port_idx];
+ vf_cb->port_index = port_id;
+ vf_cb->mac_cb = dsaf_dev->mac_cb[port_id];
ae_handle->phy_if = vf_cb->mac_cb->phy_if;
ae_handle->phy_node = vf_cb->mac_cb->phy_node;
ae_handle->if_support = vf_cb->mac_cb->if_support;
ae_handle->port_type = vf_cb->mac_cb->mac_type;
- ae_handle->dport_id = port_idx;
+ ae_handle->dport_id = port_id;
return ae_handle;
vf_id_err:
@@ -320,11 +284,8 @@ static void hns_ae_reset(struct hnae_handle *handle)
struct hnae_vf_cb *vf_cb = hns_ae_get_vf_cb(handle);
if (vf_cb->mac_cb->mac_type == HNAE_PORT_DEBUG) {
- u8 ppe_common_index =
- vf_cb->port_index - DSAF_SERVICE_PORT_NUM_PER_DSAF + 1;
-
hns_mac_reset(vf_cb->mac_cb);
- hns_ppe_reset_common(vf_cb->dsaf_dev, ppe_common_index);
+ hns_ppe_reset_common(vf_cb->dsaf_dev, 0);
}
}
@@ -399,11 +360,16 @@ static void hns_ae_get_ring_bdnum_limit(struct hnae_queue *queue,
static void hns_ae_get_pauseparam(struct hnae_handle *handle,
u32 *auto_neg, u32 *rx_en, u32 *tx_en)
{
- assert(handle);
+ struct hns_mac_cb *mac_cb = hns_get_mac_cb(handle);
+ struct dsaf_device *dsaf_dev = mac_cb->dsaf_dev;
+
+ hns_mac_get_autoneg(mac_cb, auto_neg);

- hns_mac_get_autoneg(hns_get_mac_cb(handle), auto_neg);
+ hns_mac_get_pauseparam(mac_cb, rx_en, tx_en);

- hns_mac_get_pauseparam(hns_get_mac_cb(handle), rx_en, tx_en);
+ /* Service port's pause feature is provided by DSAF, not mac */
+ if (handle->port_type == HNAE_PORT_SERVICE)
+ hns_dsaf_get_rx_mac_pause_en(dsaf_dev, mac_cb->mac_id, rx_en);
}
static int hns_ae_set_autoneg(struct hnae_handle *handle, u8 enable)
@@ -436,12 +402,21 @@ static int hns_ae_set_pauseparam(struct hnae_handle *handle,
u32 autoneg, u32 rx_en, u32 tx_en)
{
struct hns_mac_cb *mac_cb = hns_get_mac_cb(handle);
+ struct dsaf_device *dsaf_dev = mac_cb->dsaf_dev;
int ret;
ret = hns_mac_set_autoneg(mac_cb, autoneg);
if (ret)
return ret;
+ /* Service port's pause feature is provided by DSAF, not mac */
+ if (handle->port_type == HNAE_PORT_SERVICE) {
+ ret = hns_dsaf_set_rx_mac_pause_en(dsaf_dev,
+ mac_cb->mac_id, rx_en);
+ if (ret)
+ return ret;
+ rx_en = 0;
+ }
return hns_mac_set_pauseparam(mac_cb, rx_en, tx_en);
}
@@ -689,7 +664,7 @@ void hns_ae_update_led_status(struct hnae_handle *handle)
assert(handle);
mac_cb = hns_get_mac_cb(handle);
- if (!mac_cb->cpld_vaddr)
+ if (!mac_cb->cpld_ctrl)
return;
hns_set_led_opt(mac_cb);
}
@@ -709,7 +684,6 @@ int hns_ae_cpld_set_led_id(struct hnae_handle *handle,
void hns_ae_get_regs(struct hnae_handle *handle, void *data)
{
u32 *p = data;
- u32 rcb_com_idx;
int i;
struct hnae_vf_cb *vf_cb = hns_ae_get_vf_cb(handle);
struct hns_ppe_cb *ppe_cb = hns_get_ppe_cb(handle);
@@ -717,8 +691,7 @@ void hns_ae_get_regs(struct hnae_handle *handle, void *data)
hns_ppe_get_regs(ppe_cb, p);
p += hns_ppe_get_regs_count();
- rcb_com_idx = hns_dsaf_get_comm_idx_by_port(vf_cb->port_index);
- hns_rcb_get_common_regs(vf_cb->dsaf_dev->rcb_common[rcb_com_idx], p);
+ hns_rcb_get_common_regs(vf_cb->dsaf_dev->rcb_common[0], p);
p += hns_rcb_get_common_regs_count();
for (i = 0; i < handle->q_num; i++) {
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
index a38084a22bf2..611581fccf2a 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
+++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.c
@@ -7,18 +7,19 @@
* (at your option) any later version.
*/
-#include <linux/module.h>
-#include <linux/kernel.h>
#include <linux/init.h>
-#include <linux/netdevice.h>
-#include <linux/phy_fixed.h>
#include <linux/interrupt.h>
-#include <linux/platform_device.h>
+#include <linux/kernel.h>
+#include <linux/mfd/syscon.h>
+#include <linux/module.h>
+#include <linux/netdevice.h>
#include <linux/of.h>
#include <linux/of_address.h>
+#include <linux/phy_fixed.h>
+#include <linux/platform_device.h>
-#include "hns_dsaf_misc.h"
#include "hns_dsaf_main.h"
+#include "hns_dsaf_misc.h"
#include "hns_dsaf_rcb.h"
#define MAC_EN_FLAG_V 0xada0328
@@ -81,17 +82,6 @@ static enum mac_mode hns_get_enet_interface(const struct hns_mac_cb *mac_cb)
}
}
-int hns_mac_get_sfp_prsnt(struct hns_mac_cb *mac_cb, int *sfp_prsnt)
-{
- if (!mac_cb->cpld_vaddr)
- return -ENODEV;
-
- *sfp_prsnt = !dsaf_read_b((u8 *)mac_cb->cpld_vaddr
- + MAC_SFP_PORT_OFFSET);
-
- return 0;
-}
-
void hns_mac_get_link_status(struct hns_mac_cb *mac_cb, u32 *link_status)
{
struct mac_driver *mac_ctrl_drv;
@@ -168,10 +158,9 @@ static int hns_mac_get_inner_port_num(struct hns_mac_cb *mac_cb,
u8 vmid, u8 *port_num)
{
u8 tmp_port;
- u32 comm_idx;
if (mac_cb->dsaf_dev->dsaf_mode <= DSAF_MODE_ENABLE) {
- if (mac_cb->mac_id != DSAF_MAX_PORT_NUM_PER_CHIP) {
+ if (mac_cb->mac_id != DSAF_MAX_PORT_NUM) {
dev_err(mac_cb->dev,
"input invalid,%s mac%d vmid%d !\n",
mac_cb->dsaf_dev->ae_dev.name,
@@ -179,7 +168,7 @@ static int hns_mac_get_inner_port_num(struct hns_mac_cb *mac_cb,
return -EINVAL;
}
} else if (mac_cb->dsaf_dev->dsaf_mode < DSAF_MODE_MAX) {
- if (mac_cb->mac_id >= DSAF_MAX_PORT_NUM_PER_CHIP) {
+ if (mac_cb->mac_id >= DSAF_MAX_PORT_NUM) {
dev_err(mac_cb->dev,
"input invalid,%s mac%d vmid%d!\n",
mac_cb->dsaf_dev->ae_dev.name,
@@ -192,9 +181,7 @@ static int hns_mac_get_inner_port_num(struct hns_mac_cb *mac_cb,
return -EINVAL;
}
- comm_idx = hns_dsaf_get_comm_idx_by_port(mac_cb->mac_id);
-
- if (vmid >= mac_cb->dsaf_dev->rcb_common[comm_idx]->max_vfn) {
+ if (vmid >= mac_cb->dsaf_dev->rcb_common[0]->max_vfn) {
dev_err(mac_cb->dev, "input invalid,%s mac%d vmid%d !\n",
mac_cb->dsaf_dev->ae_dev.name, mac_cb->mac_id, vmid);
return -EINVAL;
@@ -234,7 +221,7 @@ static int hns_mac_get_inner_port_num(struct hns_mac_cb *mac_cb,
}
/**
- *hns_mac_get_inner_port_num - change vf mac address
+ *hns_mac_change_vf_addr - change vf mac address
*@mac_cb: mac device
*@vmid: vmid
*@addr:mac address
@@ -249,7 +236,7 @@ int hns_mac_change_vf_addr(struct hns_mac_cb *mac_cb,
struct mac_entry_idx *old_entry;
old_entry = &mac_cb->addr_entry_idx[vmid];
- if (dsaf_dev) {
+ if (!HNS_DSAF_IS_DEBUG(dsaf_dev)) {
memcpy(mac_entry.addr, addr, sizeof(mac_entry.addr));
mac_entry.in_vlan_id = old_entry->vlan_id;
mac_entry.in_port_num = mac_cb->mac_id;
@@ -289,7 +276,7 @@ int hns_mac_set_multi(struct hns_mac_cb *mac_cb,
struct dsaf_device *dsaf_dev = mac_cb->dsaf_dev;
struct dsaf_drv_mac_single_dest_entry mac_entry;
- if (dsaf_dev && addr) {
+ if (!HNS_DSAF_IS_DEBUG(dsaf_dev) && addr) {
memcpy(mac_entry.addr, addr, sizeof(mac_entry.addr));
mac_entry.in_vlan_id = 0;/*vlan_id;*/
mac_entry.in_port_num = mac_cb->mac_id;
@@ -380,7 +367,7 @@ static int hns_mac_port_config_bc_en(struct hns_mac_cb *mac_cb,
if (mac_cb->mac_type == HNAE_PORT_DEBUG)
return 0;
- if (dsaf_dev) {
+ if (!HNS_DSAF_IS_DEBUG(dsaf_dev)) {
memcpy(mac_entry.addr, addr, sizeof(mac_entry.addr));
mac_entry.in_vlan_id = vlan_id;
mac_entry.in_port_num = mac_cb->mac_id;
@@ -418,7 +405,7 @@ int hns_mac_vm_config_bc_en(struct hns_mac_cb *mac_cb, u32 vmid, bool enable)
uc_mac_entry = &mac_cb->addr_entry_idx[vmid];
- if (dsaf_dev) {
+ if (!HNS_DSAF_IS_DEBUG(dsaf_dev)) {
memcpy(mac_entry.addr, addr, sizeof(mac_entry.addr));
mac_entry.in_vlan_id = uc_mac_entry->vlan_id;
mac_entry.in_port_num = mac_cb->mac_id;
@@ -439,9 +426,8 @@ int hns_mac_vm_config_bc_en(struct hns_mac_cb *mac_cb, u32 vmid, bool enable)
void hns_mac_reset(struct hns_mac_cb *mac_cb)
{
- struct mac_driver *drv;
-
- drv = hns_mac_get_drv(mac_cb);
+ struct mac_driver *drv = hns_mac_get_drv(mac_cb);
+ bool is_ver1 = AE_IS_VER1(mac_cb->dsaf_dev->dsaf_ver);
drv->mac_init(drv);
@@ -456,7 +442,7 @@ void hns_mac_reset(struct hns_mac_cb *mac_cb)
if (drv->mac_pausefrm_cfg) {
if (mac_cb->mac_type == HNAE_PORT_DEBUG)
- drv->mac_pausefrm_cfg(drv, 0, 0);
+ drv->mac_pausefrm_cfg(drv, !is_ver1, !is_ver1);
else /* mac rx must disable, dsaf pfc close instead of it*/
drv->mac_pausefrm_cfg(drv, 0, 1);
}
@@ -561,14 +547,6 @@ void hns_mac_get_pauseparam(struct hns_mac_cb *mac_cb, u32 *rx_en, u32 *tx_en)
*rx_en = 0;
*tx_en = 0;
}
-
- /* Due to the chip defect, the service mac's rx pause CAN'T be enabled.
- * We set the rx pause frm always be true (1), because DSAF deals with
- * the rx pause frm instead of service mac. After all, we still support
- * rx pause frm.
- */
- if (mac_cb->mac_type == HNAE_PORT_SERVICE)
- *rx_en = 1;
}
/**
@@ -602,20 +580,13 @@ int hns_mac_set_autoneg(struct hns_mac_cb *mac_cb, u8 enable)
int hns_mac_set_pauseparam(struct hns_mac_cb *mac_cb, u32 rx_en, u32 tx_en)
{
struct mac_driver *mac_ctrl_drv = hns_mac_get_drv(mac_cb);
+ bool is_ver1 = AE_IS_VER1(mac_cb->dsaf_dev->dsaf_ver);
- if (mac_cb->mac_type == HNAE_PORT_SERVICE) {
- if (!rx_en) {
- dev_err(mac_cb->dev, "disable rx_pause is not allowed!");
+ if (mac_cb->mac_type == HNAE_PORT_DEBUG) {
+ if (is_ver1 && (tx_en || rx_en)) {
+ dev_err(mac_cb->dev, "macv1 cann't enable tx/rx_pause!");
return -EINVAL;
}
- } else if (mac_cb->mac_type == HNAE_PORT_DEBUG) {
- if (tx_en || rx_en) {
- dev_err(mac_cb->dev, "enable tx_pause or enable rx_pause are not allowed!");
- return -EINVAL;
- }
- } else {
- dev_err(mac_cb->dev, "Unsupport this operation!");
- return -EINVAL;
}
if (mac_ctrl_drv->mac_pausefrm_cfg)
@@ -667,14 +638,18 @@ free_mac_drv:
}
/**
- *mac_free_dev - get mac information from device node
+ *hns_mac_get_info - get mac information from device node
*@mac_cb: mac device
*@np:device node
- *@mac_mode_idx:mac mode index
+ * return: 0 --success, negative --fail
*/
-static void hns_mac_get_info(struct hns_mac_cb *mac_cb,
- struct device_node *np, u32 mac_mode_idx)
+static int hns_mac_get_info(struct hns_mac_cb *mac_cb)
{
+ struct device_node *np = mac_cb->dev->of_node;
+ struct regmap *syscon;
+ struct of_phandle_args cpld_args;
+ u32 ret;
+
mac_cb->link = false;
mac_cb->half_duplex = false;
mac_cb->speed = mac_phy_to_speed[mac_cb->phy_if];
@@ -690,12 +665,73 @@ static void hns_mac_get_info(struct hns_mac_cb *mac_cb,
mac_cb->max_frm = MAC_DEFAULT_MTU;
mac_cb->tx_pause_frm_time = MAC_DEFAULT_PAUSE_TIME;
+ mac_cb->port_rst_off = mac_cb->mac_id;
+ mac_cb->port_mode_off = 0;
- /* Get the rest of the PHY information */
- mac_cb->phy_node = of_parse_phandle(np, "phy-handle", mac_cb->mac_id);
+ /* if the dsaf node doesn't contain a port subnode, get phy-handle
+ * from dsaf node
+ */
+ if (!mac_cb->fw_port) {
+ mac_cb->phy_node = of_parse_phandle(np, "phy-handle",
+ mac_cb->mac_id);
+ if (mac_cb->phy_node)
+ dev_dbg(mac_cb->dev, "mac%d phy_node: %s\n",
+ mac_cb->mac_id, mac_cb->phy_node->name);
+ return 0;
+ }
+ if (!is_of_node(mac_cb->fw_port))
+ return -EINVAL;
+ /* parse property from port subnode in dsaf */
+ mac_cb->phy_node = of_parse_phandle(to_of_node(mac_cb->fw_port),
+ "phy-handle", 0);
if (mac_cb->phy_node)
dev_dbg(mac_cb->dev, "mac%d phy_node: %s\n",
mac_cb->mac_id, mac_cb->phy_node->name);
+ syscon = syscon_node_to_regmap(
+ of_parse_phandle(to_of_node(mac_cb->fw_port),
+ "serdes-syscon", 0));
+ if (IS_ERR_OR_NULL(syscon)) {
+ dev_err(mac_cb->dev, "serdes-syscon is needed!\n");
+ return -EINVAL;
+ }
+ mac_cb->serdes_ctrl = syscon;
+
+ ret = fwnode_property_read_u32(mac_cb->fw_port,
+ "port-rst-offset",
+ &mac_cb->port_rst_off);
+ if (ret) {
+ dev_dbg(mac_cb->dev,
+ "mac%d port-rst-offset not found, use default value.\n",
+ mac_cb->mac_id);
+ }
+
+ ret = fwnode_property_read_u32(mac_cb->fw_port,
+ "port-mode-offset",
+ &mac_cb->port_mode_off);
+ if (ret) {
+ dev_dbg(mac_cb->dev,
+ "mac%d port-mode-offset not found, use default value.\n",
+ mac_cb->mac_id);
+ }
+
+ ret = of_parse_phandle_with_fixed_args(to_of_node(mac_cb->fw_port),
+ "cpld-syscon", 1, 0, &cpld_args);
+ if (ret) {
+ dev_dbg(mac_cb->dev, "mac%d no cpld-syscon found.\n",
+ mac_cb->mac_id);
+ mac_cb->cpld_ctrl = NULL;
+ } else {
+ syscon = syscon_node_to_regmap(cpld_args.np);
+ if (IS_ERR_OR_NULL(syscon)) {
+ dev_dbg(mac_cb->dev, "no cpld-syscon found!\n");
+ mac_cb->cpld_ctrl = NULL;
+ } else {
+ mac_cb->cpld_ctrl = syscon;
+ mac_cb->cpld_ctrl_reg = cpld_args.args[0];
+ }
+ }
+
+ return 0;
}
/**
@@ -725,40 +761,31 @@ u8 __iomem *hns_mac_get_vaddr(struct dsaf_device *dsaf_dev,
return base + 0x40000 + mac_id * 0x4000 -
mac_mode_idx * 0x20000;
else
- return mac_cb->serdes_vaddr + 0x1000
- + (mac_id - DSAF_SERVICE_PORT_NUM_PER_DSAF) * 0x100000;
+ return dsaf_dev->ppe_base + 0x1000;
}
/**
* hns_mac_get_cfg - get mac cfg from dtb or acpi table
* @dsaf_dev: dsa fabric device struct pointer
- * @mac_idx: mac index
- * retuen 0 - success , negative --fail
+ * @mac_cb: mac control block
+ * return 0 - success , negative --fail
*/
-int hns_mac_get_cfg(struct dsaf_device *dsaf_dev, int mac_idx)
+int hns_mac_get_cfg(struct dsaf_device *dsaf_dev, struct hns_mac_cb *mac_cb)
{
int ret;
u32 mac_mode_idx;
- struct hns_mac_cb *mac_cb = &dsaf_dev->mac_cb[mac_idx];
mac_cb->dsaf_dev = dsaf_dev;
mac_cb->dev = dsaf_dev->dev;
- mac_cb->mac_id = mac_idx;
mac_cb->sys_ctl_vaddr = dsaf_dev->sc_base;
mac_cb->serdes_vaddr = dsaf_dev->sds_base;
- if (dsaf_dev->cpld_base &&
- mac_idx < DSAF_SERVICE_PORT_NUM_PER_DSAF) {
- mac_cb->cpld_vaddr = dsaf_dev->cpld_base +
- mac_cb->mac_id * CPLD_ADDR_PORT_OFFSET;
- cpld_led_reset(mac_cb);
- }
mac_cb->sfp_prsnt = 0;
mac_cb->txpkt_for_led = 0;
mac_cb->rxpkt_for_led = 0;
- if (mac_idx < DSAF_SERVICE_PORT_NUM_PER_DSAF)
+ if (!HNS_DSAF_IS_DEBUG(dsaf_dev))
mac_cb->mac_type = HNAE_PORT_SERVICE;
else
mac_cb->mac_type = HNAE_PORT_DEBUG;
@@ -774,53 +801,100 @@ int hns_mac_get_cfg(struct dsaf_device *dsaf_dev, int mac_idx)
}
mac_mode_idx = (u32)ret;
- hns_mac_get_info(mac_cb, mac_cb->dev->of_node, mac_mode_idx);
+ ret = hns_mac_get_info(mac_cb);
+ if (ret)
+ return ret;
+ cpld_led_reset(mac_cb);
mac_cb->vaddr = hns_mac_get_vaddr(dsaf_dev, mac_cb, mac_mode_idx);
return 0;
}
+static int hns_mac_get_max_port_num(struct dsaf_device *dsaf_dev)
+{
+ if (HNS_DSAF_IS_DEBUG(dsaf_dev))
+ return 1;
+ else
+ return DSAF_MAX_PORT_NUM;
+}
+
/**
* hns_mac_init - init mac
* @dsaf_dev: dsa fabric device struct pointer
- * retuen 0 - success , negative --fail
+ * return 0 - success , negative --fail
*/
int hns_mac_init(struct dsaf_device *dsaf_dev)
{
- int i;
+ bool found = false;
int ret;
- size_t size;
+ u32 port_id;
+ int max_port_num = hns_mac_get_max_port_num(dsaf_dev);
struct hns_mac_cb *mac_cb;
+ struct fwnode_handle *child;
- size = sizeof(struct hns_mac_cb) * DSAF_MAX_PORT_NUM_PER_CHIP;
- dsaf_dev->mac_cb = devm_kzalloc(dsaf_dev->dev, size, GFP_KERNEL);
- if (!dsaf_dev->mac_cb)
- return -ENOMEM;
+ device_for_each_child_node(dsaf_dev->dev, child) {
+ ret = fwnode_property_read_u32(child, "reg", &port_id);
+ if (ret) {
+ dev_err(dsaf_dev->dev,
+ "get reg fail, ret=%d!\n", ret);
+ return ret;
+ }
+ if (port_id >= max_port_num) {
+ dev_err(dsaf_dev->dev,
+ "reg(%u) out of range!\n", port_id);
+ return -EINVAL;
+ }
+ mac_cb = devm_kzalloc(dsaf_dev->dev, sizeof(*mac_cb),
+ GFP_KERNEL);
+ if (!mac_cb)
+ return -ENOMEM;
+ mac_cb->fw_port = child;
+ mac_cb->mac_id = (u8)port_id;
+ dsaf_dev->mac_cb[port_id] = mac_cb;
+ found = true;
+ }
- for (i = 0; i < DSAF_MAX_PORT_NUM_PER_CHIP; i++) {
- ret = hns_mac_get_cfg(dsaf_dev, i);
- if (ret)
- goto free_mac_cb;
+ /* if don't get any port subnode from dsaf node
+ * will init all port then, this is compatible with the old dts
+ */
+ if (!found) {
+ for (port_id = 0; port_id < max_port_num; port_id++) {
+ mac_cb = devm_kzalloc(dsaf_dev->dev, sizeof(*mac_cb),
+ GFP_KERNEL);
+ if (!mac_cb)
+ return -ENOMEM;
+
+ mac_cb->mac_id = port_id;
+ dsaf_dev->mac_cb[port_id] = mac_cb;
+ }
+ }
+ /* init mac_cb for all port */
+ for (port_id = 0; port_id < max_port_num; port_id++) {
+ mac_cb = dsaf_dev->mac_cb[port_id];
+ if (!mac_cb)
+ continue;
- mac_cb = &dsaf_dev->mac_cb[i];
+ ret = hns_mac_get_cfg(dsaf_dev, mac_cb);
+ if (ret)
+ return ret;
ret = hns_mac_init_ex(mac_cb);
if (ret)
- goto free_mac_cb;
+ return ret;
}
return 0;
-
-free_mac_cb:
- dsaf_dev->mac_cb = NULL;
-
- return ret;
}
void hns_mac_uninit(struct dsaf_device *dsaf_dev)
{
- cpld_led_reset(dsaf_dev->mac_cb);
- dsaf_dev->mac_cb = NULL;
+ int i;
+ int max_port_num = hns_mac_get_max_port_num(dsaf_dev);
+
+ for (i = 0; i < max_port_num; i++) {
+ cpld_led_reset(dsaf_dev->mac_cb[i]);
+ dsaf_dev->mac_cb[i] = NULL;
+ }
}
int hns_mac_config_mac_loopback(struct hns_mac_cb *mac_cb,
@@ -908,7 +982,7 @@ void hns_set_led_opt(struct hns_mac_cb *mac_cb)
int hns_cpld_led_set_id(struct hns_mac_cb *mac_cb,
enum hnae_led_state status)
{
- if (!mac_cb || !mac_cb->cpld_vaddr)
+ if (!mac_cb || !mac_cb->cpld_ctrl)
return 0;
return cpld_set_led_id(mac_cb, status);
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h
index 823b6e78c8aa..97ce9a750aaf 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h
+++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_mac.h
@@ -10,9 +10,10 @@
#ifndef _HNS_DSAF_MAC_H
#define _HNS_DSAF_MAC_H
-#include <linux/phy.h>
-#include <linux/kernel.h>
#include <linux/if_vlan.h>
+#include <linux/kernel.h>
+#include <linux/phy.h>
+#include <linux/regmap.h>
#include "hns_dsaf_main.h"
struct dsaf_device;
@@ -310,10 +311,15 @@ struct hns_mac_cb {
struct device *dev;
struct dsaf_device *dsaf_dev;
struct mac_priv priv;
+ struct fwnode_handle *fw_port;
u8 __iomem *vaddr;
- u8 __iomem *cpld_vaddr;
u8 __iomem *sys_ctl_vaddr;
u8 __iomem *serdes_vaddr;
+ struct regmap *serdes_ctrl;
+ struct regmap *cpld_ctrl;
+ u32 cpld_ctrl_reg;
+ u32 port_rst_off;
+ u32 port_mode_off;
struct mac_entry_idx addr_entry_idx[DSAF_MAX_VM_NUM];
u8 sfp_prsnt;
u8 cpld_led_value;
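The regmap handles added to struct hns_mac_cb here (cpld_ctrl, serdes_ctrl) and to struct dsaf_device below (sub_ctrl) are obtained via syscon_node_to_regmap() and accessed through the generic regmap API. A minimal sketch of what the dsaf_read_syscon()/dsaf_write_syscon() wrappers used later in this series are assumed to reduce to (error handling omitted, names hypothetical):

#include <linux/regmap.h>

static u32 example_read_syscon(struct regmap *base, u32 reg)
{
	unsigned int val;

	regmap_read(base, reg, &val);	/* this sketch ignores the return code */
	return val;
}

static void example_write_syscon(struct regmap *base, u32 reg, u32 val)
{
	regmap_write(base, reg, val);
}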
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
index 5978a5c8ef35..1c2ddb25e776 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
+++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.c
@@ -7,27 +7,29 @@
* (at your option) any later version.
*/
-#include <linux/module.h>
-#include <linux/kernel.h>
+#include <linux/device.h>
#include <linux/init.h>
#include <linux/interrupt.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
#include <linux/netdevice.h>
-#include <linux/platform_device.h>
+#include <linux/mfd/syscon.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/of_irq.h>
-#include <linux/device.h>
+#include <linux/platform_device.h>
#include <linux/vmalloc.h>
+#include "hns_dsaf_mac.h"
#include "hns_dsaf_main.h"
-#include "hns_dsaf_rcb.h"
#include "hns_dsaf_ppe.h"
-#include "hns_dsaf_mac.h"
+#include "hns_dsaf_rcb.h"
const char *g_dsaf_mode_match[DSAF_MODE_MAX] = {
[DSAF_MODE_DISABLE_2PORT_64VM] = "2port-64vf",
[DSAF_MODE_DISABLE_6PORT_0VM] = "6port-16rss",
[DSAF_MODE_DISABLE_6PORT_16VM] = "6port-16vf",
+ [DSAF_MODE_DISABLE_SP] = "single-port",
};
int hns_dsaf_get_cfg(struct dsaf_device *dsaf_dev)
@@ -35,8 +37,13 @@ int hns_dsaf_get_cfg(struct dsaf_device *dsaf_dev)
int ret, i;
u32 desc_num;
u32 buf_size;
+ u32 reset_offset = 0;
+ u32 res_idx = 0;
const char *mode_str;
+ struct regmap *syscon;
+ struct resource *res;
struct device_node *np = dsaf_dev->dev->of_node;
+ struct platform_device *pdev = to_platform_device(dsaf_dev->dev);
if (of_device_is_compatible(np, "hisilicon,hns-dsaf-v1"))
dsaf_dev->dsaf_ver = AE_VERSION_1;
@@ -73,42 +80,68 @@ int hns_dsaf_get_cfg(struct dsaf_device *dsaf_dev)
else
dsaf_dev->dsaf_tc_mode = HRD_DSAF_4TC_MODE;
- dsaf_dev->sc_base = of_iomap(np, 0);
- if (!dsaf_dev->sc_base) {
- dev_err(dsaf_dev->dev,
- "%s of_iomap 0 fail!\n", dsaf_dev->ae_dev.name);
- ret = -ENOMEM;
- goto unmap_base_addr;
- }
+ syscon = syscon_node_to_regmap(
+ of_parse_phandle(np, "subctrl-syscon", 0));
+ if (IS_ERR_OR_NULL(syscon)) {
+ res = platform_get_resource(pdev, IORESOURCE_MEM, res_idx++);
+ if (!res) {
+ dev_err(dsaf_dev->dev, "subctrl info is needed!\n");
+ return -ENOMEM;
+ }
+ dsaf_dev->sc_base = devm_ioremap_resource(&pdev->dev, res);
+ if (!dsaf_dev->sc_base) {
+ dev_err(dsaf_dev->dev, "subctrl can not map!\n");
+ return -ENOMEM;
+ }
- dsaf_dev->sds_base = of_iomap(np, 1);
- if (!dsaf_dev->sds_base) {
- dev_err(dsaf_dev->dev,
- "%s of_iomap 1 fail!\n", dsaf_dev->ae_dev.name);
- ret = -ENOMEM;
- goto unmap_base_addr;
+ res = platform_get_resource(pdev, IORESOURCE_MEM, res_idx++);
+ if (!res) {
+ dev_err(dsaf_dev->dev, "serdes-ctrl info is needed!\n");
+ return -ENOMEM;
+ }
+ dsaf_dev->sds_base = devm_ioremap_resource(&pdev->dev, res);
+ if (!dsaf_dev->sds_base) {
+ dev_err(dsaf_dev->dev, "serdes-ctrl can not map!\n");
+ return -ENOMEM;
+ }
+ } else {
+ dsaf_dev->sub_ctrl = syscon;
}
- dsaf_dev->ppe_base = of_iomap(np, 2);
- if (!dsaf_dev->ppe_base) {
- dev_err(dsaf_dev->dev,
- "%s of_iomap 2 fail!\n", dsaf_dev->ae_dev.name);
- ret = -ENOMEM;
- goto unmap_base_addr;
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ppe-base");
+ if (!res) {
+ res = platform_get_resource(pdev, IORESOURCE_MEM, res_idx++);
+ if (!res) {
+ dev_err(dsaf_dev->dev, "ppe-base info is needed!\n");
+ return -ENOMEM;
+ }
}
-
- dsaf_dev->io_base = of_iomap(np, 3);
- if (!dsaf_dev->io_base) {
- dev_err(dsaf_dev->dev,
- "%s of_iomap 3 fail!\n", dsaf_dev->ae_dev.name);
- ret = -ENOMEM;
- goto unmap_base_addr;
+ dsaf_dev->ppe_base = devm_ioremap_resource(&pdev->dev, res);
+ if (!dsaf_dev->ppe_base) {
+ dev_err(dsaf_dev->dev, "ppe-base resource can not map!\n");
+ return -ENOMEM;
+ }
+ dsaf_dev->ppe_paddr = res->start;
+
+ if (!HNS_DSAF_IS_DEBUG(dsaf_dev)) {
+ res = platform_get_resource_byname(pdev, IORESOURCE_MEM,
+ "dsaf-base");
+ if (!res) {
+ res = platform_get_resource(pdev, IORESOURCE_MEM,
+ res_idx);
+ if (!res) {
+ dev_err(dsaf_dev->dev,
+ "dsaf-base info is needed!\n");
+ return -ENOMEM;
+ }
+ }
+ dsaf_dev->io_base = devm_ioremap_resource(&pdev->dev, res);
+ if (!dsaf_dev->io_base) {
+ dev_err(dsaf_dev->dev, "dsaf-base resource can not map!\n");
+ return -ENOMEM;
+ }
}
- dsaf_dev->cpld_base = of_iomap(np, 4);
- if (!dsaf_dev->cpld_base)
- dev_dbg(dsaf_dev->dev, "NO CPLD ADDR");
-
ret = of_property_read_u32(np, "desc-num", &desc_num);
if (ret < 0 || desc_num < HNS_DSAF_MIN_DESC_CNT ||
desc_num > HNS_DSAF_MAX_DESC_CNT) {
@@ -118,6 +151,13 @@ int hns_dsaf_get_cfg(struct dsaf_device *dsaf_dev)
}
dsaf_dev->desc_num = desc_num;
+ ret = of_property_read_u32(np, "reset-field-offset", &reset_offset);
+ if (ret < 0) {
+ dev_dbg(dsaf_dev->dev,
+ "get reset-field-offset fail, ret=%d!\r\n", ret);
+ }
+ dsaf_dev->reset_offset = reset_offset;
+
ret = of_property_read_u32(np, "buf-size", &buf_size);
if (ret < 0) {
dev_err(dsaf_dev->dev,
@@ -149,8 +189,6 @@ unmap_base_addr:
iounmap(dsaf_dev->sds_base);
if (dsaf_dev->sc_base)
iounmap(dsaf_dev->sc_base);
- if (dsaf_dev->cpld_base)
- iounmap(dsaf_dev->cpld_base);
return ret;
}
@@ -167,9 +205,6 @@ static void hns_dsaf_free_cfg(struct dsaf_device *dsaf_dev)
if (dsaf_dev->sc_base)
iounmap(dsaf_dev->sc_base);
-
- if (dsaf_dev->cpld_base)
- iounmap(dsaf_dev->cpld_base);
}
/**
@@ -217,9 +252,7 @@ static void hns_dsaf_mix_def_qid_cfg(struct dsaf_device *dsaf_dev)
u32 q_id, q_num_per_port;
u32 i;
- hns_rcb_get_queue_mode(dsaf_dev->dsaf_mode,
- HNS_DSAF_COMM_SERVICE_NW_IDX,
- &max_vfn, &max_q_per_vf);
+ hns_rcb_get_queue_mode(dsaf_dev->dsaf_mode, &max_vfn, &max_q_per_vf);
q_num_per_port = max_vfn * max_q_per_vf;
for (i = 0, q_id = 0; i < DSAF_SERVICE_NW_NUM; i++) {
@@ -239,9 +272,7 @@ static void hns_dsaf_inner_qid_cfg(struct dsaf_device *dsaf_dev)
if (AE_IS_VER1(dsaf_dev->dsaf_ver))
return;
- hns_rcb_get_queue_mode(dsaf_dev->dsaf_mode,
- HNS_DSAF_COMM_SERVICE_NW_IDX,
- &max_vfn, &max_q_per_vf);
+ hns_rcb_get_queue_mode(dsaf_dev->dsaf_mode, &max_vfn, &max_q_per_vf);
q_num_per_port = max_vfn * max_q_per_vf;
for (mac_id = 0, q_id = 0; mac_id < DSAF_SERVICE_NW_NUM; mac_id++) {
@@ -712,13 +743,15 @@ static void hns_dsaf_tbl_tcam_data_ucast_pul(
void hns_dsaf_set_promisc_mode(struct dsaf_device *dsaf_dev, u32 en)
{
- dsaf_set_dev_bit(dsaf_dev, DSAF_CFG_0_REG, DSAF_CFG_MIX_MODE_S, !!en);
+ if (!HNS_DSAF_IS_DEBUG(dsaf_dev))
+ dsaf_set_dev_bit(dsaf_dev, DSAF_CFG_0_REG,
+ DSAF_CFG_MIX_MODE_S, !!en);
}
void hns_dsaf_set_inner_lb(struct dsaf_device *dsaf_dev, u32 mac_id, u32 en)
{
if (AE_IS_VER1(dsaf_dev->dsaf_ver) ||
- dsaf_dev->mac_cb[mac_id].mac_type == HNAE_PORT_DEBUG)
+ dsaf_dev->mac_cb[mac_id]->mac_type == HNAE_PORT_DEBUG)
return;
dsaf_set_dev_bit(dsaf_dev, DSAFV2_SERDES_LBK_0_REG + 4 * mac_id,
@@ -1022,12 +1055,52 @@ static void hns_dsaf_tbl_tcam_init(struct dsaf_device *dsaf_dev)
* @mac_cb: mac contrl block
*/
static void hns_dsaf_pfc_en_cfg(struct dsaf_device *dsaf_dev,
- int mac_id, int en)
+ int mac_id, int tc_en)
+{
+ dsaf_write_dev(dsaf_dev, DSAF_PFC_EN_0_REG + mac_id * 4, tc_en);
+}
+
+static void hns_dsaf_set_pfc_pause(struct dsaf_device *dsaf_dev,
+ int mac_id, int tx_en, int rx_en)
+{
+ if (AE_IS_VER1(dsaf_dev->dsaf_ver)) {
+ if (!tx_en || !rx_en)
+ dev_err(dsaf_dev->dev, "dsaf v1 can not close pfc!\n");
+
+ return;
+ }
+
+ dsaf_set_dev_bit(dsaf_dev, DSAF_PAUSE_CFG_REG + mac_id * 4,
+ DSAF_PFC_PAUSE_RX_EN_B, !!rx_en);
+ dsaf_set_dev_bit(dsaf_dev, DSAF_PAUSE_CFG_REG + mac_id * 4,
+ DSAF_PFC_PAUSE_TX_EN_B, !!tx_en);
+}
+
+int hns_dsaf_set_rx_mac_pause_en(struct dsaf_device *dsaf_dev, int mac_id,
+ u32 en)
{
- if (!en)
- dsaf_write_dev(dsaf_dev, DSAF_PFC_EN_0_REG + mac_id * 4, 0);
+ if (AE_IS_VER1(dsaf_dev->dsaf_ver)) {
+ if (!en)
+ dev_err(dsaf_dev->dev, "dsafv1 can't close rx_pause!\n");
+
+ return -EINVAL;
+ }
+
+ dsaf_set_dev_bit(dsaf_dev, DSAF_PAUSE_CFG_REG + mac_id * 4,
+ DSAF_MAC_PAUSE_RX_EN_B, !!en);
+
+ return 0;
+}
+
+void hns_dsaf_get_rx_mac_pause_en(struct dsaf_device *dsaf_dev, int mac_id,
+ u32 *en)
+{
+ if (AE_IS_VER1(dsaf_dev->dsaf_ver))
+ *en = 1;
else
- dsaf_write_dev(dsaf_dev, DSAF_PFC_EN_0_REG + mac_id * 4, 0xff);
+ *en = dsaf_get_dev_bit(dsaf_dev,
+ DSAF_PAUSE_CFG_REG + mac_id * 4,
+ DSAF_MAC_PAUSE_RX_EN_B);
}
/**
@@ -1039,6 +1112,7 @@ static void hns_dsaf_comm_init(struct dsaf_device *dsaf_dev)
{
u32 i;
u32 o_dsaf_cfg;
+ bool is_ver1 = AE_IS_VER1(dsaf_dev->dsaf_ver);
o_dsaf_cfg = dsaf_read_dev(dsaf_dev, DSAF_CFG_0_REG);
dsaf_set_bit(o_dsaf_cfg, DSAF_CFG_EN_S, dsaf_dev->dsaf_en);
@@ -1064,8 +1138,10 @@ static void hns_dsaf_comm_init(struct dsaf_device *dsaf_dev)
hns_dsaf_sw_port_type_cfg(dsaf_dev, DSAF_SW_PORT_TYPE_NON_VLAN);
/*set dsaf pfc to 0 for parseing rx pause*/
- for (i = 0; i < DSAF_COMM_CHN; i++)
+ for (i = 0; i < DSAF_COMM_CHN; i++) {
hns_dsaf_pfc_en_cfg(dsaf_dev, i, 0);
+ hns_dsaf_set_pfc_pause(dsaf_dev, i, is_ver1, is_ver1);
+ }
/*msk and clr exception irqs */
for (i = 0; i < DSAF_COMM_CHN; i++) {
@@ -1264,6 +1340,9 @@ static int hns_dsaf_init(struct dsaf_device *dsaf_dev)
u32 i;
int ret;
+ if (HNS_DSAF_IS_DEBUG(dsaf_dev))
+ return 0;
+
ret = hns_dsaf_init_hw(dsaf_dev);
if (ret)
return ret;
@@ -2013,6 +2092,8 @@ void hns_dsaf_update_stats(struct dsaf_device *dsaf_dev, u32 node_num)
{
struct dsaf_hw_stats *hw_stats
= &dsaf_dev->hw_stats[node_num];
+ bool is_ver1 = AE_IS_VER1(dsaf_dev->dsaf_ver);
+ u32 reg_tmp;
hw_stats->pad_drop += dsaf_read_dev(dsaf_dev,
DSAF_INODE_PAD_DISCARD_NUM_0_REG + 0x80 * (u64)node_num);
@@ -2022,8 +2103,12 @@ void hns_dsaf_update_stats(struct dsaf_device *dsaf_dev, u32 node_num)
DSAF_INODE_FINAL_IN_PKT_NUM_0_REG + 0x80 * (u64)node_num);
hw_stats->rx_pkt_id += dsaf_read_dev(dsaf_dev,
DSAF_INODE_SBM_PID_NUM_0_REG + 0x80 * (u64)node_num);
- hw_stats->rx_pause_frame += dsaf_read_dev(dsaf_dev,
- DSAF_INODE_FINAL_IN_PAUSE_NUM_0_REG + 0x80 * (u64)node_num);
+
+ reg_tmp = is_ver1 ? DSAF_INODE_FINAL_IN_PAUSE_NUM_0_REG :
+ DSAFV2_INODE_FINAL_IN_PAUSE_NUM_0_REG;
+ hw_stats->rx_pause_frame +=
+ dsaf_read_dev(dsaf_dev, reg_tmp + 0x80 * (u64)node_num);
+
hw_stats->release_buf_num += dsaf_read_dev(dsaf_dev,
DSAF_INODE_SBM_RELS_NUM_0_REG + 0x80 * (u64)node_num);
hw_stats->sbm_drop += dsaf_read_dev(dsaf_dev,
@@ -2056,6 +2141,8 @@ void hns_dsaf_get_regs(struct dsaf_device *ddev, u32 port, void *data)
u32 i = 0;
u32 j;
u32 *p = data;
+ u32 reg_tmp;
+ bool is_ver1 = AE_IS_VER1(ddev->dsaf_ver);
/* dsaf common registers */
p[0] = dsaf_read_dev(ddev, DSAF_SRAM_INIT_OVER_0_REG);
@@ -2120,8 +2207,9 @@ void hns_dsaf_get_regs(struct dsaf_device *ddev, u32 port, void *data)
DSAF_INODE_FINAL_IN_PKT_NUM_0_REG + j * 0x80);
p[190 + i] = dsaf_read_dev(ddev,
DSAF_INODE_SBM_PID_NUM_0_REG + j * 0x80);
- p[193 + i] = dsaf_read_dev(ddev,
- DSAF_INODE_FINAL_IN_PAUSE_NUM_0_REG + j * 0x80);
+ reg_tmp = is_ver1 ? DSAF_INODE_FINAL_IN_PAUSE_NUM_0_REG :
+ DSAFV2_INODE_FINAL_IN_PAUSE_NUM_0_REG;
+ p[193 + i] = dsaf_read_dev(ddev, reg_tmp + j * 0x80);
p[196 + i] = dsaf_read_dev(ddev,
DSAF_INODE_SBM_RELS_NUM_0_REG + j * 0x80);
p[199 + i] = dsaf_read_dev(ddev,
@@ -2368,8 +2456,11 @@ void hns_dsaf_get_regs(struct dsaf_device *ddev, u32 port, void *data)
p[496] = dsaf_read_dev(ddev, DSAF_NETPORT_CTRL_SIG_0_REG + port * 0x4);
p[497] = dsaf_read_dev(ddev, DSAF_XGE_CTRL_SIG_CFG_0_REG + port * 0x4);
+ if (!is_ver1)
+ p[498] = dsaf_read_dev(ddev, DSAF_PAUSE_CFG_REG + port * 0x4);
+
/* mark end of dsaf regs */
- for (i = 498; i < 504; i++)
+ for (i = 499; i < 504; i++)
p[i] = 0xdddddddd;
}
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h
index 5fea226efaf3..f0502ba0a677 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h
+++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_main.h
@@ -41,6 +41,7 @@ struct hns_mac_cb;
#define DSAF_STATIC_NUM 28
#define DSAF_STATS_READ(p, offset) (*((u64 *)((u8 *)(p) + (offset))))
+#define HNS_DSAF_IS_DEBUG(dev) (dev->dsaf_mode == DSAF_MODE_DISABLE_SP)
enum hal_dsaf_mode {
HRD_DSAF_NO_DSAF_MODE = 0x0,
@@ -117,6 +118,7 @@ enum dsaf_mode {
DSAF_MODE_ENABLE_32VM, /**< en DSAF-mode, support 32 VM */
DSAF_MODE_ENABLE_128VM, /**< en DSAF-mode, support 128 VM */
DSAF_MODE_ENABLE, /**< before is enable DSAF mode*/
+ DSAF_MODE_DISABLE_SP, /**< non-dsaf, single port mode */
DSAF_MODE_DISABLE_FIX, /**< non-dasf, fixed to queue*/
DSAF_MODE_DISABLE_2PORT_8VM, /**< non-dasf, 2port 8VM */
DSAF_MODE_DISABLE_2PORT_16VM, /**< non-dasf, 2port 16VM */
@@ -275,10 +277,12 @@ struct dsaf_device {
u8 __iomem *sds_base;
u8 __iomem *ppe_base;
u8 __iomem *io_base;
- u8 __iomem *cpld_base;
+ struct regmap *sub_ctrl;
+ phys_addr_t ppe_paddr;
u32 desc_num; /* desc num per queue*/
u32 buf_size; /* ring buffer size */
+ u32 reset_offset; /* reset field offset in sub sysctrl */
int buf_size_type; /* ring buffer size-type */
enum dsaf_mode dsaf_mode; /* dsaf mode */
enum hal_dsaf_mode dsaf_en;
@@ -287,7 +291,7 @@ struct dsaf_device {
struct ppe_common_cb *ppe_common[DSAF_COMM_DEV_NUM];
struct rcb_common_cb *rcb_common[DSAF_COMM_DEV_NUM];
- struct hns_mac_cb *mac_cb;
+ struct hns_mac_cb *mac_cb[DSAF_MAX_PORT_NUM];
struct dsaf_hw_stats hw_stats[DSAF_NODE_NUM];
struct dsaf_int_stat int_stat;
@@ -359,14 +363,6 @@ static inline void hns_dsaf_tbl_line_addr_cfg(struct dsaf_device *dsaf_dev,
tab_line_addr);
}
-static inline int hns_dsaf_get_comm_idx_by_port(int port)
-{
- if ((port < DSAF_COMM_CHN) || (port == DSAF_MAX_PORT_NUM_PER_CHIP))
- return 0;
- else
- return (port - DSAF_COMM_CHN + 1);
-}
-
static inline struct hnae_vf_cb *hns_ae_get_vf_cb(
struct hnae_handle *handle)
{
@@ -417,6 +413,11 @@ void hns_dsaf_get_strings(int stringset, u8 *data, int port);
void hns_dsaf_get_regs(struct dsaf_device *ddev, u32 port, void *data);
int hns_dsaf_get_regs_count(void);
void hns_dsaf_set_promisc_mode(struct dsaf_device *dsaf_dev, u32 en);
+
+void hns_dsaf_get_rx_mac_pause_en(struct dsaf_device *dsaf_dev, int mac_id,
+ u32 *en);
+int hns_dsaf_set_rx_mac_pause_en(struct dsaf_device *dsaf_dev, int mac_id,
+ u32 en);
void hns_dsaf_set_inner_lb(struct dsaf_device *dsaf_dev, u32 mac_id, u32 en);
#endif /* __HNS_DSAF_MAIN_H__ */
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c
index e69b02287c44..a837bb9e3839 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c
+++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_misc.c
@@ -7,10 +7,30 @@
* (at your option) any later version.
*/
-#include "hns_dsaf_misc.h"
#include "hns_dsaf_mac.h"
-#include "hns_dsaf_reg.h"
+#include "hns_dsaf_misc.h"
#include "hns_dsaf_ppe.h"
+#include "hns_dsaf_reg.h"
+
+static void dsaf_write_sub(struct dsaf_device *dsaf_dev, u32 reg, u32 val)
+{
+ if (dsaf_dev->sub_ctrl)
+ dsaf_write_syscon(dsaf_dev->sub_ctrl, reg, val);
+ else
+ dsaf_write_reg(dsaf_dev->sc_base, reg, val);
+}
+
+static u32 dsaf_read_sub(struct dsaf_device *dsaf_dev, u32 reg)
+{
+ u32 ret;
+
+ if (dsaf_dev->sub_ctrl)
+ ret = dsaf_read_syscon(dsaf_dev->sub_ctrl, reg);
+ else
+ ret = dsaf_read_reg(dsaf_dev->sc_base, reg);
+
+ return ret;
+}
void hns_cpld_set_led(struct hns_mac_cb *mac_cb, int link_status,
u16 speed, int data)
@@ -22,8 +42,8 @@ void hns_cpld_set_led(struct hns_mac_cb *mac_cb, int link_status,
pr_err("sfp_led_opt mac_dev is null!\n");
return;
}
- if (!mac_cb->cpld_vaddr) {
- dev_err(mac_cb->dev, "mac_id=%d, cpld_vaddr is null !\n",
+ if (!mac_cb->cpld_ctrl) {
+ dev_err(mac_cb->dev, "mac_id=%d, cpld syscon is null !\n",
mac_cb->mac_id);
return;
}
@@ -40,21 +60,24 @@ void hns_cpld_set_led(struct hns_mac_cb *mac_cb, int link_status,
dsaf_set_bit(value, DSAF_LED_DATA_B, data);
if (value != mac_cb->cpld_led_value) {
- dsaf_write_b(mac_cb->cpld_vaddr, value);
+ dsaf_write_syscon(mac_cb->cpld_ctrl,
+ mac_cb->cpld_ctrl_reg, value);
mac_cb->cpld_led_value = value;
}
} else {
- dsaf_write_b(mac_cb->cpld_vaddr, CPLD_LED_DEFAULT_VALUE);
+ dsaf_write_syscon(mac_cb->cpld_ctrl, mac_cb->cpld_ctrl_reg,
+ CPLD_LED_DEFAULT_VALUE);
mac_cb->cpld_led_value = CPLD_LED_DEFAULT_VALUE;
}
}
void cpld_led_reset(struct hns_mac_cb *mac_cb)
{
- if (!mac_cb || !mac_cb->cpld_vaddr)
+ if (!mac_cb || !mac_cb->cpld_ctrl)
return;
- dsaf_write_b(mac_cb->cpld_vaddr, CPLD_LED_DEFAULT_VALUE);
+ dsaf_write_syscon(mac_cb->cpld_ctrl, mac_cb->cpld_ctrl_reg,
+ CPLD_LED_DEFAULT_VALUE);
mac_cb->cpld_led_value = CPLD_LED_DEFAULT_VALUE;
}
@@ -63,15 +86,19 @@ int cpld_set_led_id(struct hns_mac_cb *mac_cb,
{
switch (status) {
case HNAE_LED_ACTIVE:
- mac_cb->cpld_led_value = dsaf_read_b(mac_cb->cpld_vaddr);
+ mac_cb->cpld_led_value =
+ dsaf_read_syscon(mac_cb->cpld_ctrl,
+ mac_cb->cpld_ctrl_reg);
dsaf_set_bit(mac_cb->cpld_led_value, DSAF_LED_ANCHOR_B,
CPLD_LED_ON_VALUE);
- dsaf_write_b(mac_cb->cpld_vaddr, mac_cb->cpld_led_value);
+ dsaf_write_syscon(mac_cb->cpld_ctrl, mac_cb->cpld_ctrl_reg,
+ mac_cb->cpld_led_value);
return 2;
case HNAE_LED_INACTIVE:
dsaf_set_bit(mac_cb->cpld_led_value, DSAF_LED_ANCHOR_B,
CPLD_LED_DEFAULT_VALUE);
- dsaf_write_b(mac_cb->cpld_vaddr, mac_cb->cpld_led_value);
+ dsaf_write_syscon(mac_cb->cpld_ctrl, mac_cb->cpld_ctrl_reg,
+ mac_cb->cpld_led_value);
break;
default:
break;
@@ -95,10 +122,8 @@ void hns_dsaf_rst(struct dsaf_device *dsaf_dev, u32 val)
nt_reg_addr = DSAF_SUB_SC_NT_RESET_DREQ_REG;
}
- dsaf_write_reg(dsaf_dev->sc_base, xbar_reg_addr,
- RESET_REQ_OR_DREQ);
- dsaf_write_reg(dsaf_dev->sc_base, nt_reg_addr,
- RESET_REQ_OR_DREQ);
+ dsaf_write_sub(dsaf_dev, xbar_reg_addr, RESET_REQ_OR_DREQ);
+ dsaf_write_sub(dsaf_dev, nt_reg_addr, RESET_REQ_OR_DREQ);
}
void hns_dsaf_xge_srst_by_port(struct dsaf_device *dsaf_dev, u32 port, u32 val)
@@ -110,14 +135,14 @@ void hns_dsaf_xge_srst_by_port(struct dsaf_device *dsaf_dev, u32 port, u32 val)
return;
reg_val |= RESET_REQ_OR_DREQ;
- reg_val |= 0x2082082 << port;
+ reg_val |= 0x2082082 << dsaf_dev->mac_cb[port]->port_rst_off;
if (val == 0)
reg_addr = DSAF_SUB_SC_XGE_RESET_REQ_REG;
else
reg_addr = DSAF_SUB_SC_XGE_RESET_DREQ_REG;
- dsaf_write_reg(dsaf_dev->sc_base, reg_addr, reg_val);
+ dsaf_write_sub(dsaf_dev, reg_addr, reg_val);
}
void hns_dsaf_xge_core_srst_by_port(struct dsaf_device *dsaf_dev,
@@ -129,68 +154,63 @@ void hns_dsaf_xge_core_srst_by_port(struct dsaf_device *dsaf_dev,
if (port >= DSAF_XGE_NUM)
return;
- reg_val |= XGMAC_TRX_CORE_SRST_M << port;
+ reg_val |= XGMAC_TRX_CORE_SRST_M
+ << dsaf_dev->mac_cb[port]->port_rst_off;
if (val == 0)
reg_addr = DSAF_SUB_SC_XGE_RESET_REQ_REG;
else
reg_addr = DSAF_SUB_SC_XGE_RESET_DREQ_REG;
- dsaf_write_reg(dsaf_dev->sc_base, reg_addr, reg_val);
+ dsaf_write_sub(dsaf_dev, reg_addr, reg_val);
}
void hns_dsaf_ge_srst_by_port(struct dsaf_device *dsaf_dev, u32 port, u32 val)
{
u32 reg_val_1;
u32 reg_val_2;
+ u32 port_rst_off;
if (port >= DSAF_GE_NUM)
return;
- if (port < DSAF_SERVICE_NW_NUM) {
+ if (!HNS_DSAF_IS_DEBUG(dsaf_dev)) {
reg_val_1 = 0x1 << port;
+ port_rst_off = dsaf_dev->mac_cb[port]->port_rst_off;
/* there is difference between V1 and V2 in register.*/
if (AE_IS_VER1(dsaf_dev->dsaf_ver))
- reg_val_2 = 0x1041041 << port;
+ reg_val_2 = 0x1041041 << port_rst_off;
else
- reg_val_2 = 0x2082082 << port;
+ reg_val_2 = 0x2082082 << port_rst_off;
if (val == 0) {
- dsaf_write_reg(dsaf_dev->sc_base,
- DSAF_SUB_SC_GE_RESET_REQ1_REG,
+ dsaf_write_sub(dsaf_dev, DSAF_SUB_SC_GE_RESET_REQ1_REG,
reg_val_1);
- dsaf_write_reg(dsaf_dev->sc_base,
- DSAF_SUB_SC_GE_RESET_REQ0_REG,
+ dsaf_write_sub(dsaf_dev, DSAF_SUB_SC_GE_RESET_REQ0_REG,
reg_val_2);
} else {
- dsaf_write_reg(dsaf_dev->sc_base,
- DSAF_SUB_SC_GE_RESET_DREQ0_REG,
+ dsaf_write_sub(dsaf_dev, DSAF_SUB_SC_GE_RESET_DREQ0_REG,
reg_val_2);
- dsaf_write_reg(dsaf_dev->sc_base,
- DSAF_SUB_SC_GE_RESET_DREQ1_REG,
+ dsaf_write_sub(dsaf_dev, DSAF_SUB_SC_GE_RESET_DREQ1_REG,
reg_val_1);
}
} else {
- reg_val_1 = 0x15540 << (port - 6);
- reg_val_2 = 0x100 << (port - 6);
+ reg_val_1 = 0x15540 << dsaf_dev->reset_offset;
+ reg_val_2 = 0x100 << dsaf_dev->reset_offset;
if (val == 0) {
- dsaf_write_reg(dsaf_dev->sc_base,
- DSAF_SUB_SC_GE_RESET_REQ1_REG,
+ dsaf_write_sub(dsaf_dev, DSAF_SUB_SC_GE_RESET_REQ1_REG,
reg_val_1);
- dsaf_write_reg(dsaf_dev->sc_base,
- DSAF_SUB_SC_PPE_RESET_REQ_REG,
+ dsaf_write_sub(dsaf_dev, DSAF_SUB_SC_PPE_RESET_REQ_REG,
reg_val_2);
} else {
- dsaf_write_reg(dsaf_dev->sc_base,
- DSAF_SUB_SC_GE_RESET_DREQ1_REG,
+ dsaf_write_sub(dsaf_dev, DSAF_SUB_SC_GE_RESET_DREQ1_REG,
reg_val_1);
- dsaf_write_reg(dsaf_dev->sc_base,
- DSAF_SUB_SC_PPE_RESET_DREQ_REG,
+ dsaf_write_sub(dsaf_dev, DSAF_SUB_SC_PPE_RESET_DREQ_REG,
reg_val_2);
}
}
@@ -201,24 +221,23 @@ void hns_ppe_srst_by_port(struct dsaf_device *dsaf_dev, u32 port, u32 val)
u32 reg_val = 0;
u32 reg_addr;
- reg_val |= RESET_REQ_OR_DREQ << port;
+ reg_val |= RESET_REQ_OR_DREQ << dsaf_dev->mac_cb[port]->port_rst_off;
if (val == 0)
reg_addr = DSAF_SUB_SC_PPE_RESET_REQ_REG;
else
reg_addr = DSAF_SUB_SC_PPE_RESET_DREQ_REG;
- dsaf_write_reg(dsaf_dev->sc_base, reg_addr, reg_val);
+ dsaf_write_sub(dsaf_dev, reg_addr, reg_val);
}
void hns_ppe_com_srst(struct ppe_common_cb *ppe_common, u32 val)
{
- int comm_index = ppe_common->comm_index;
struct dsaf_device *dsaf_dev = ppe_common->dsaf_dev;
u32 reg_val;
u32 reg_addr;
- if (comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX) {
+ if (!HNS_DSAF_IS_DEBUG(dsaf_dev)) {
reg_val = RESET_REQ_OR_DREQ;
if (val == 0)
reg_addr = DSAF_SUB_SC_RCB_PPE_COM_RESET_REQ_REG;
@@ -226,7 +245,7 @@ void hns_ppe_com_srst(struct ppe_common_cb *ppe_common, u32 val)
reg_addr = DSAF_SUB_SC_RCB_PPE_COM_RESET_DREQ_REG;
} else {
- reg_val = 0x100 << (comm_index - 1);
+ reg_val = 0x100 << dsaf_dev->reset_offset;
if (val == 0)
reg_addr = DSAF_SUB_SC_PPE_RESET_REQ_REG;
@@ -234,7 +253,7 @@ void hns_ppe_com_srst(struct ppe_common_cb *ppe_common, u32 val)
reg_addr = DSAF_SUB_SC_PPE_RESET_DREQ_REG;
}
- dsaf_write_reg(dsaf_dev->sc_base, reg_addr, reg_val);
+ dsaf_write_sub(dsaf_dev, reg_addr, reg_val);
}
/**
@@ -246,36 +265,45 @@ phy_interface_t hns_mac_get_phy_if(struct hns_mac_cb *mac_cb)
{
u32 mode;
u32 reg;
- u32 shift;
bool is_ver1 = AE_IS_VER1(mac_cb->dsaf_dev->dsaf_ver);
- void __iomem *sys_ctl_vaddr = mac_cb->sys_ctl_vaddr;
int mac_id = mac_cb->mac_id;
- phy_interface_t phy_if = PHY_INTERFACE_MODE_NA;
+ phy_interface_t phy_if;
- if (is_ver1 && (mac_id >= 6 && mac_id <= 7)) {
- phy_if = PHY_INTERFACE_MODE_SGMII;
- } else if (mac_id >= 0 && mac_id <= 3) {
- reg = is_ver1 ? HNS_MAC_HILINK4_REG : HNS_MAC_HILINK4V2_REG;
- mode = dsaf_read_reg(sys_ctl_vaddr, reg);
- /* mac_id 0, 1, 2, 3 ---> hilink4 lane 0, 1, 2, 3 */
- shift = is_ver1 ? 0 : mac_id;
- if (dsaf_get_bit(mode, shift))
- phy_if = PHY_INTERFACE_MODE_XGMII;
+ if (is_ver1) {
+ if (HNS_DSAF_IS_DEBUG(mac_cb->dsaf_dev))
+ return PHY_INTERFACE_MODE_SGMII;
+
+ if (mac_id >= 0 && mac_id <= 3)
+ reg = HNS_MAC_HILINK4_REG;
else
- phy_if = PHY_INTERFACE_MODE_SGMII;
- } else if (mac_id >= 4 && mac_id <= 7) {
- reg = is_ver1 ? HNS_MAC_HILINK3_REG : HNS_MAC_HILINK3V2_REG;
- mode = dsaf_read_reg(sys_ctl_vaddr, reg);
- /* mac_id 4, 5, 6, 7 ---> hilink3 lane 2, 3, 0, 1 */
- shift = is_ver1 ? 0 : mac_id <= 5 ? mac_id - 2 : mac_id - 6;
- if (dsaf_get_bit(mode, shift))
- phy_if = PHY_INTERFACE_MODE_XGMII;
+ reg = HNS_MAC_HILINK3_REG;
+ } else {
+ if (!HNS_DSAF_IS_DEBUG(mac_cb->dsaf_dev) && mac_id <= 3)
+ reg = HNS_MAC_HILINK4V2_REG;
else
- phy_if = PHY_INTERFACE_MODE_SGMII;
+ reg = HNS_MAC_HILINK3V2_REG;
}
+
+ mode = dsaf_read_sub(mac_cb->dsaf_dev, reg);
+ if (dsaf_get_bit(mode, mac_cb->port_mode_off))
+ phy_if = PHY_INTERFACE_MODE_XGMII;
+ else
+ phy_if = PHY_INTERFACE_MODE_SGMII;
+
return phy_if;
}
+int hns_mac_get_sfp_prsnt(struct hns_mac_cb *mac_cb, int *sfp_prsnt)
+{
+ if (!mac_cb->cpld_ctrl)
+ return -ENODEV;
+
+ *sfp_prsnt = !dsaf_read_syscon(mac_cb->cpld_ctrl, mac_cb->cpld_ctrl_reg
+ + MAC_SFP_PORT_OFFSET);
+
+ return 0;
+}
+
/**
* hns_mac_config_sds_loopback - set loop back for serdes
* @mac_cb: mac control block
@@ -312,7 +340,14 @@ int hns_mac_config_sds_loopback(struct hns_mac_cb *mac_cb, u8 en)
pr_info("no sfp in this eth\n");
}
- dsaf_set_reg_field(base_addr, reg_offset, 1ull << 10, 10, !!en);
+ if (mac_cb->serdes_ctrl) {
+ u32 origin = dsaf_read_syscon(mac_cb->serdes_ctrl, reg_offset);
+
+ dsaf_set_field(origin, 1ull << 10, 10, !!en);
+ dsaf_write_syscon(mac_cb->serdes_ctrl, reg_offset, origin);
+ } else {
+ dsaf_set_reg_field(base_addr, reg_offset, 1ull << 10, 10, !!en);
+ }
return 0;
}
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c
index 5b7ae5ff43e8..8cd151a5245e 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c
+++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.c
@@ -61,22 +61,10 @@ void hns_ppe_set_indir_table(struct hns_ppe_cb *ppe_cb,
}
}
-static void __iomem *hns_ppe_common_get_ioaddr(
- struct ppe_common_cb *ppe_common)
+static void __iomem *
+hns_ppe_common_get_ioaddr(struct ppe_common_cb *ppe_common)
{
- void __iomem *base_addr;
-
- int idx = ppe_common->comm_index;
-
- if (idx == HNS_DSAF_COMM_SERVICE_NW_IDX)
- base_addr = ppe_common->dsaf_dev->ppe_base
- + PPE_COMMON_REG_OFFSET;
- else
- base_addr = ppe_common->dsaf_dev->sds_base
- + (idx - 1) * HNS_DSAF_DEBUG_NW_REG_OFFSET
- + PPE_COMMON_REG_OFFSET;
-
- return base_addr;
+ return ppe_common->dsaf_dev->ppe_base + PPE_COMMON_REG_OFFSET;
}
/**
@@ -90,7 +78,7 @@ int hns_ppe_common_get_cfg(struct dsaf_device *dsaf_dev, int comm_index)
struct ppe_common_cb *ppe_common;
int ppe_num;
- if (comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX)
+ if (!HNS_DSAF_IS_DEBUG(dsaf_dev))
ppe_num = HNS_PPE_SERVICE_NW_ENGINE_NUM;
else
ppe_num = HNS_PPE_DEBUG_NW_ENGINE_NUM;
@@ -103,7 +91,7 @@ int hns_ppe_common_get_cfg(struct dsaf_device *dsaf_dev, int comm_index)
ppe_common->ppe_num = ppe_num;
ppe_common->dsaf_dev = dsaf_dev;
ppe_common->comm_index = comm_index;
- if (comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX)
+ if (!HNS_DSAF_IS_DEBUG(dsaf_dev))
ppe_common->ppe_mode = PPE_COMMON_MODE_SERVICE;
else
ppe_common->ppe_mode = PPE_COMMON_MODE_DEBUG;
@@ -124,32 +112,8 @@ void hns_ppe_common_free_cfg(struct dsaf_device *dsaf_dev, u32 comm_index)
static void __iomem *hns_ppe_get_iobase(struct ppe_common_cb *ppe_common,
int ppe_idx)
{
- void __iomem *base_addr;
- int common_idx = ppe_common->comm_index;
-
- if (ppe_common->ppe_mode == PPE_COMMON_MODE_SERVICE) {
- base_addr = ppe_common->dsaf_dev->ppe_base +
- ppe_idx * PPE_REG_OFFSET;
-
- } else {
- base_addr = ppe_common->dsaf_dev->sds_base +
- (common_idx - 1) * HNS_DSAF_DEBUG_NW_REG_OFFSET;
- }
-
- return base_addr;
-}
-
-static int hns_ppe_get_port(struct ppe_common_cb *ppe_common, int idx)
-{
- int port;
-
- if (ppe_common->ppe_mode == PPE_COMMON_MODE_SERVICE)
- port = idx;
- else
- port = HNS_PPE_SERVICE_NW_ENGINE_NUM
- + ppe_common->comm_index - 1;
- return port;
+ return ppe_common->dsaf_dev->ppe_base + ppe_idx * PPE_REG_OFFSET;
}
static void hns_ppe_get_cfg(struct ppe_common_cb *ppe_common)
@@ -164,7 +128,6 @@ static void hns_ppe_get_cfg(struct ppe_common_cb *ppe_common)
ppe_cb->next = NULL;
ppe_cb->ppe_common_cb = ppe_common;
ppe_cb->index = i;
- ppe_cb->port = hns_ppe_get_port(ppe_common, i);
ppe_cb->io_base = hns_ppe_get_iobase(ppe_common, i);
ppe_cb->virq = 0;
}
@@ -318,7 +281,7 @@ static void hns_ppe_exc_irq_en(struct hns_ppe_cb *ppe_cb, int en)
static void hns_ppe_init_hw(struct hns_ppe_cb *ppe_cb)
{
struct ppe_common_cb *ppe_common_cb = ppe_cb->ppe_common_cb;
- u32 port = ppe_cb->port;
+ u32 port = ppe_cb->index;
struct dsaf_device *dsaf_dev = ppe_common_cb->dsaf_dev;
int i;
@@ -332,10 +295,12 @@ static void hns_ppe_init_hw(struct hns_ppe_cb *ppe_cb)
/* clr and msk except irq*/
hns_ppe_exc_irq_en(ppe_cb, 0);
- if (ppe_common_cb->ppe_mode == PPE_COMMON_MODE_DEBUG)
+ if (ppe_common_cb->ppe_mode == PPE_COMMON_MODE_DEBUG) {
hns_ppe_set_port_mode(ppe_cb, PPE_MODE_GE);
- else
+ dsaf_write_dev(ppe_cb, PPE_CFG_PAUSE_IDLE_CNT_REG, 0);
+ } else {
hns_ppe_set_port_mode(ppe_cb, PPE_MODE_XGE);
+ }
hns_ppe_checksum_hw(ppe_cb, 0xffffffff);
hns_ppe_cnt_clr_ce(ppe_cb);
@@ -375,7 +340,8 @@ void hns_ppe_uninit_ex(struct ppe_common_cb *ppe_common)
u32 i;
for (i = 0; i < ppe_common->ppe_num; i++) {
- hns_ppe_uninit_hw(&ppe_common->ppe_cb[i]);
+ if (ppe_common->dsaf_dev->mac_cb[i])
+ hns_ppe_uninit_hw(&ppe_common->ppe_cb[i]);
memset(&ppe_common->ppe_cb[i], 0, sizeof(struct hns_ppe_cb));
}
}
@@ -408,8 +374,11 @@ void hns_ppe_reset_common(struct dsaf_device *dsaf_dev, u8 ppe_common_index)
if (ret)
return;
- for (i = 0; i < ppe_common->ppe_num; i++)
- hns_ppe_init_hw(&ppe_common->ppe_cb[i]);
+ for (i = 0; i < ppe_common->ppe_num; i++) {
+ /* We only need to initialize the ppe when the port exists */
+ if (dsaf_dev->mac_cb[i])
+ hns_ppe_init_hw(&ppe_common->ppe_cb[i]);
+ }
ret = hns_rcb_common_init_hw(dsaf_dev->rcb_common[ppe_common_index]);
if (ret)
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h
index e9c0ec2fa0dd..9d8e643e8aa6 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h
+++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_ppe.h
@@ -80,7 +80,6 @@ struct hns_ppe_cb {
struct hns_ppe_hw_stats hw_stats;
u8 index; /* index in a ppe common device */
- u8 port; /* port id in dsaf */
void __iomem *io_base;
int virq;
u32 rss_indir_table[HNS_PPEV2_RSS_IND_TBL_SIZE]; /*shadow indir tab */
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c
index 28ee26e5c478..4ef6d23d998e 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c
+++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.c
@@ -270,7 +270,7 @@ static void hns_rcb_set_port_timeout(
static int hns_rcb_common_get_port_num(struct rcb_common_cb *rcb_common)
{
- if (rcb_common->comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX)
+ if (!HNS_DSAF_IS_DEBUG(rcb_common->dsaf_dev))
return HNS_RCB_SERVICE_NW_ENGINE_NUM;
else
return HNS_RCB_DEBUG_NW_ENGINE_NUM;
@@ -430,36 +430,20 @@ static void hns_rcb_ring_pair_get_cfg(struct ring_pair_cb *ring_pair_cb)
static int hns_rcb_get_port_in_comm(
struct rcb_common_cb *rcb_common, int ring_idx)
{
- int comm_index = rcb_common->comm_index;
- int port;
- int q_num;
- if (comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX) {
- q_num = (int)rcb_common->max_q_per_vf * rcb_common->max_vfn;
- port = ring_idx / q_num;
- } else {
- port = 0; /* config debug-ports port_id_in_comm to 0*/
- }
-
- return port;
+ return ring_idx / (rcb_common->max_q_per_vf * rcb_common->max_vfn);
}
#define SERVICE_RING_IRQ_IDX(v1) \
((v1) ? HNS_SERVICE_RING_IRQ_IDX : HNSV2_SERVICE_RING_IRQ_IDX)
-#define DEBUG_RING_IRQ_IDX(v1) \
- ((v1) ? HNS_DEBUG_RING_IRQ_IDX : HNSV2_DEBUG_RING_IRQ_IDX)
-#define DEBUG_RING_IRQ_OFFSET(v1) \
- ((v1) ? HNS_DEBUG_RING_IRQ_OFFSET : HNSV2_DEBUG_RING_IRQ_OFFSET)
static int hns_rcb_get_base_irq_idx(struct rcb_common_cb *rcb_common)
{
- int comm_index = rcb_common->comm_index;
bool is_ver1 = AE_IS_VER1(rcb_common->dsaf_dev->dsaf_ver);
- if (comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX)
+ if (!HNS_DSAF_IS_DEBUG(rcb_common->dsaf_dev))
return SERVICE_RING_IRQ_IDX(is_ver1);
else
- return DEBUG_RING_IRQ_IDX(is_ver1) +
- (comm_index - 1) * DEBUG_RING_IRQ_OFFSET(is_ver1);
+ return HNS_DEBUG_RING_IRQ_IDX;
}
#define RCB_COMM_BASE_TO_RING_BASE(base, ringid)\
@@ -549,7 +533,7 @@ int hns_rcb_set_coalesce_usecs(
return 0;
if (AE_IS_VER1(rcb_common->dsaf_dev->dsaf_ver)) {
- if (rcb_common->comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX) {
+ if (!HNS_DSAF_IS_DEBUG(rcb_common->dsaf_dev)) {
dev_err(rcb_common->dsaf_dev->dev,
"error: not support coalesce_usecs setting!\n");
return -EINVAL;
@@ -601,113 +585,82 @@ int hns_rcb_set_coalesced_frames(
*@max_vfn : max vfn number
*@max_q_per_vf:max ring number per vm
*/
-void hns_rcb_get_queue_mode(enum dsaf_mode dsaf_mode, int comm_index,
- u16 *max_vfn, u16 *max_q_per_vf)
+void hns_rcb_get_queue_mode(enum dsaf_mode dsaf_mode, u16 *max_vfn,
+ u16 *max_q_per_vf)
{
- if (comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX) {
- switch (dsaf_mode) {
- case DSAF_MODE_DISABLE_6PORT_0VM:
- *max_vfn = 1;
- *max_q_per_vf = 16;
- break;
- case DSAF_MODE_DISABLE_FIX:
- *max_vfn = 1;
- *max_q_per_vf = 1;
- break;
- case DSAF_MODE_DISABLE_2PORT_64VM:
- *max_vfn = 64;
- *max_q_per_vf = 1;
- break;
- case DSAF_MODE_DISABLE_6PORT_16VM:
- *max_vfn = 16;
- *max_q_per_vf = 1;
- break;
- default:
- *max_vfn = 1;
- *max_q_per_vf = 16;
- break;
- }
- } else {
+ switch (dsaf_mode) {
+ case DSAF_MODE_DISABLE_6PORT_0VM:
+ *max_vfn = 1;
+ *max_q_per_vf = 16;
+ break;
+ case DSAF_MODE_DISABLE_FIX:
+ case DSAF_MODE_DISABLE_SP:
*max_vfn = 1;
*max_q_per_vf = 1;
+ break;
+ case DSAF_MODE_DISABLE_2PORT_64VM:
+ *max_vfn = 64;
+ *max_q_per_vf = 1;
+ break;
+ case DSAF_MODE_DISABLE_6PORT_16VM:
+ *max_vfn = 16;
+ *max_q_per_vf = 1;
+ break;
+ default:
+ *max_vfn = 1;
+ *max_q_per_vf = 16;
+ break;
}
}
-int hns_rcb_get_ring_num(struct dsaf_device *dsaf_dev, int comm_index)
+int hns_rcb_get_ring_num(struct dsaf_device *dsaf_dev)
{
- if (comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX) {
- switch (dsaf_dev->dsaf_mode) {
- case DSAF_MODE_ENABLE_FIX:
- return 1;
-
- case DSAF_MODE_DISABLE_FIX:
- return 6;
-
- case DSAF_MODE_ENABLE_0VM:
- return 32;
-
- case DSAF_MODE_DISABLE_6PORT_0VM:
- case DSAF_MODE_ENABLE_16VM:
- case DSAF_MODE_DISABLE_6PORT_2VM:
- case DSAF_MODE_DISABLE_6PORT_16VM:
- case DSAF_MODE_DISABLE_6PORT_4VM:
- case DSAF_MODE_ENABLE_8VM:
- return 96;
-
- case DSAF_MODE_DISABLE_2PORT_16VM:
- case DSAF_MODE_DISABLE_2PORT_8VM:
- case DSAF_MODE_ENABLE_32VM:
- case DSAF_MODE_DISABLE_2PORT_64VM:
- case DSAF_MODE_ENABLE_128VM:
- return 128;
-
- default:
- dev_warn(dsaf_dev->dev,
- "get ring num fail,use default!dsaf_mode=%d\n",
- dsaf_dev->dsaf_mode);
- return 128;
- }
- } else {
+ switch (dsaf_dev->dsaf_mode) {
+ case DSAF_MODE_ENABLE_FIX:
+ case DSAF_MODE_DISABLE_SP:
return 1;
+
+ case DSAF_MODE_DISABLE_FIX:
+ return 6;
+
+ case DSAF_MODE_ENABLE_0VM:
+ return 32;
+
+ case DSAF_MODE_DISABLE_6PORT_0VM:
+ case DSAF_MODE_ENABLE_16VM:
+ case DSAF_MODE_DISABLE_6PORT_2VM:
+ case DSAF_MODE_DISABLE_6PORT_16VM:
+ case DSAF_MODE_DISABLE_6PORT_4VM:
+ case DSAF_MODE_ENABLE_8VM:
+ return 96;
+
+ case DSAF_MODE_DISABLE_2PORT_16VM:
+ case DSAF_MODE_DISABLE_2PORT_8VM:
+ case DSAF_MODE_ENABLE_32VM:
+ case DSAF_MODE_DISABLE_2PORT_64VM:
+ case DSAF_MODE_ENABLE_128VM:
+ return 128;
+
+ default:
+ dev_warn(dsaf_dev->dev,
+ "get ring num fail,use default!dsaf_mode=%d\n",
+ dsaf_dev->dsaf_mode);
+ return 128;
}
}
-void __iomem *hns_rcb_common_get_vaddr(struct dsaf_device *dsaf_dev,
- int comm_index)
+void __iomem *hns_rcb_common_get_vaddr(struct rcb_common_cb *rcb_common)
{
- void __iomem *base_addr;
-
- if (comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX)
- base_addr = dsaf_dev->ppe_base + RCB_COMMON_REG_OFFSET;
- else
- base_addr = dsaf_dev->sds_base
- + (comm_index - 1) * HNS_DSAF_DEBUG_NW_REG_OFFSET
- + RCB_COMMON_REG_OFFSET;
+ struct dsaf_device *dsaf_dev = rcb_common->dsaf_dev;
- return base_addr;
+ return dsaf_dev->ppe_base + RCB_COMMON_REG_OFFSET;
}
-static phys_addr_t hns_rcb_common_get_paddr(struct dsaf_device *dsaf_dev,
- int comm_index)
+static phys_addr_t hns_rcb_common_get_paddr(struct rcb_common_cb *rcb_common)
{
- struct device_node *np = dsaf_dev->dev->of_node;
- phys_addr_t phy_addr;
- const __be32 *tmp_addr;
- u64 addr_offset = 0;
- u64 size = 0;
- int index = 0;
-
- if (comm_index == HNS_DSAF_COMM_SERVICE_NW_IDX) {
- index = 2;
- addr_offset = RCB_COMMON_REG_OFFSET;
- } else {
- index = 1;
- addr_offset = (comm_index - 1) * HNS_DSAF_DEBUG_NW_REG_OFFSET +
- RCB_COMMON_REG_OFFSET;
- }
- tmp_addr = of_get_address(np, index, &size, NULL);
- phy_addr = of_translate_address(np, tmp_addr);
- return phy_addr + addr_offset;
+ struct dsaf_device *dsaf_dev = rcb_common->dsaf_dev;
+
+ return dsaf_dev->ppe_paddr + RCB_COMMON_REG_OFFSET;
}
int hns_rcb_common_get_cfg(struct dsaf_device *dsaf_dev,
@@ -717,7 +670,7 @@ int hns_rcb_common_get_cfg(struct dsaf_device *dsaf_dev,
enum dsaf_mode dsaf_mode = dsaf_dev->dsaf_mode;
u16 max_vfn;
u16 max_q_per_vf;
- int ring_num = hns_rcb_get_ring_num(dsaf_dev, comm_index);
+ int ring_num = hns_rcb_get_ring_num(dsaf_dev);
rcb_common =
devm_kzalloc(dsaf_dev->dev, sizeof(*rcb_common) +
@@ -732,12 +685,12 @@ int hns_rcb_common_get_cfg(struct dsaf_device *dsaf_dev,
rcb_common->desc_num = dsaf_dev->desc_num;
- hns_rcb_get_queue_mode(dsaf_mode, comm_index, &max_vfn, &max_q_per_vf);
+ hns_rcb_get_queue_mode(dsaf_mode, &max_vfn, &max_q_per_vf);
rcb_common->max_vfn = max_vfn;
rcb_common->max_q_per_vf = max_q_per_vf;
- rcb_common->io_base = hns_rcb_common_get_vaddr(dsaf_dev, comm_index);
- rcb_common->phy_base = hns_rcb_common_get_paddr(dsaf_dev, comm_index);
+ rcb_common->io_base = hns_rcb_common_get_vaddr(rcb_common);
+ rcb_common->phy_base = hns_rcb_common_get_paddr(rcb_common);
dsaf_dev->rcb_common[comm_index] = rcb_common;
return 0;
@@ -932,7 +885,7 @@ void hns_rcb_get_common_regs(struct rcb_common_cb *rcb_com, void *data)
{
u32 *regs = data;
bool is_ver1 = AE_IS_VER1(rcb_com->dsaf_dev->dsaf_ver);
- bool is_dbg = (rcb_com->comm_index != HNS_DSAF_COMM_SERVICE_NW_IDX);
+ bool is_dbg = HNS_DSAF_IS_DEBUG(rcb_com->dsaf_dev);
u32 reg_tmp;
u32 reg_num_tmp;
u32 i = 0;
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h
index eb61014ad615..bd54dac82ee0 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h
+++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_rcb.h
@@ -111,7 +111,7 @@ void hns_rcb_common_free_cfg(struct dsaf_device *dsaf_dev, u32 comm_index);
int hns_rcb_common_init_hw(struct rcb_common_cb *rcb_common);
void hns_rcb_start(struct hnae_queue *q, u32 val);
void hns_rcb_get_cfg(struct rcb_common_cb *rcb_common);
-void hns_rcb_get_queue_mode(enum dsaf_mode dsaf_mode, int comm_index,
+void hns_rcb_get_queue_mode(enum dsaf_mode dsaf_mode,
u16 *max_vfn, u16 *max_q_per_vf);
void hns_rcb_common_init_commit_hw(struct rcb_common_cb *rcb_common);
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h
index 7d7204f45e78..7c3b5103d151 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h
+++ b/drivers/net/ethernet/hisilicon/hns/hns_dsaf_reg.h
@@ -10,25 +10,20 @@
#ifndef _DSAF_REG_H_
#define _DSAF_REG_H_
-#define HNS_DEBUG_RING_IRQ_IDX 55
-#define HNS_SERVICE_RING_IRQ_IDX 59
-#define HNS_DEBUG_RING_IRQ_OFFSET 2
-#define HNSV2_DEBUG_RING_IRQ_IDX 409
-#define HNSV2_SERVICE_RING_IRQ_IDX 25
-#define HNSV2_DEBUG_RING_IRQ_OFFSET 9
-
-#define DSAF_MAX_PORT_NUM_PER_CHIP 8
-#define DSAF_SERVICE_PORT_NUM_PER_DSAF 6
-#define DSAF_MAX_VM_NUM 128
-
-#define DSAF_COMM_DEV_NUM 3
-#define DSAF_PPE_INODE_BASE 6
-#define HNS_DSAF_COMM_SERVICE_NW_IDX 0
+#include <linux/regmap.h>
+#define HNS_DEBUG_RING_IRQ_IDX 0
+#define HNS_SERVICE_RING_IRQ_IDX 59
+#define HNSV2_SERVICE_RING_IRQ_IDX 25
+
+#define DSAF_MAX_PORT_NUM 6
+#define DSAF_MAX_VM_NUM 128
+
+#define DSAF_COMM_DEV_NUM 1
+#define DSAF_PPE_INODE_BASE 6
#define DSAF_DEBUG_NW_NUM 2
#define DSAF_SERVICE_NW_NUM 6
#define DSAF_COMM_CHN DSAF_SERVICE_NW_NUM
#define DSAF_GE_NUM ((DSAF_SERVICE_NW_NUM) + (DSAF_DEBUG_NW_NUM))
-#define DSAF_PORT_NUM ((DSAF_SERVICE_NW_NUM) + (DSAF_DEBUG_NW_NUM))
#define DSAF_XGE_NUM DSAF_SERVICE_NW_NUM
#define DSAF_PORT_TYPE_NUM 3
#define DSAF_NODE_NUM 18
@@ -137,6 +132,7 @@
#define DSAF_PPE_INT_STS_0_REG 0x1E0
#define DSAF_ROCEE_INT_STS_0_REG 0x200
#define DSAFV2_SERDES_LBK_0_REG 0x220
+#define DSAF_PAUSE_CFG_REG 0x240
#define DSAF_PPE_QID_CFG_0_REG 0x300
#define DSAF_SW_PORT_TYPE_0_REG 0x320
#define DSAF_STP_PORT_TYPE_0_REG 0x340
@@ -155,6 +151,7 @@
#define DSAF_INODE_FINAL_IN_PKT_NUM_0_REG 0x1030
#define DSAF_INODE_SBM_PID_NUM_0_REG 0x1038
#define DSAF_INODE_FINAL_IN_PAUSE_NUM_0_REG 0x103C
+#define DSAFV2_INODE_FINAL_IN_PAUSE_NUM_0_REG 0x1024
#define DSAF_INODE_SBM_RELS_NUM_0_REG 0x104C
#define DSAF_INODE_SBM_DROP_NUM_0_REG 0x1050
#define DSAF_INODE_CRC_FALSE_NUM_0_REG 0x1054
@@ -711,6 +708,10 @@
#define DSAF_PFC_UNINT_CNT_M ((1ULL << 9) - 1)
#define DSAF_PFC_UNINT_CNT_S 0
+#define DSAF_MAC_PAUSE_RX_EN_B 2
+#define DSAF_PFC_PAUSE_RX_EN_B 1
+#define DSAF_PFC_PAUSE_TX_EN_B 0
+
#define DSAF_PPE_QID_CFG_M 0xFF
#define DSAF_PPE_QID_CFG_S 0
@@ -988,6 +989,19 @@ static inline u32 dsaf_read_reg(u8 __iomem *base, u32 reg)
return readl(reg_addr + reg);
}
+static inline void dsaf_write_syscon(struct regmap *base, u32 reg, u32 value)
+{
+ regmap_write(base, reg, value);
+}
+
+static inline u32 dsaf_read_syscon(struct regmap *base, u32 reg)
+{
+ unsigned int val;
+
+ regmap_read(base, reg, &val);
+ return val;
+}
+
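The two syscon helpers above simply forward to regmap_read()/regmap_write() so DSAF code can reach registers that sit behind a syscon node rather than an ioremapped window. A minimal usage sketch, assuming a struct regmap handle obtained elsewhere; the handle name and register offset below are placeholders, not names introduced by this patch:

/* Hypothetical usage of dsaf_read_syscon()/dsaf_write_syscon(); the
 * "serdes_ctrl" regmap and "reg" offset are illustrative placeholders.
 */
static void example_set_bit_via_syscon(struct regmap *serdes_ctrl, u32 reg)
{
	u32 val = dsaf_read_syscon(serdes_ctrl, reg);

	dsaf_write_syscon(serdes_ctrl, reg, val | BIT(0));
}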
#define dsaf_read_dev(a, reg) \
dsaf_read_reg((a)->io_base, (reg))
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.c b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
index 687204b780b0..e621636e69b9 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.c
+++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.c
@@ -1275,7 +1275,7 @@ void hns_nic_net_reinit(struct net_device *netdev)
{
struct hns_nic_priv *priv = netdev_priv(netdev);
- priv->netdev->trans_start = jiffies;
+ netif_trans_update(priv->netdev);
while (test_and_set_bit(NIC_STATE_REINITING, &priv->state))
usleep_range(1000, 2000);
@@ -1376,7 +1376,7 @@ static netdev_tx_t hns_nic_net_xmit(struct sk_buff *skb,
ret = hns_nic_net_xmit_hw(ndev, skb,
&tx_ring_data(priv, skb->queue_mapping));
if (ret == NETDEV_TX_OK) {
- ndev->trans_start = jiffies;
+ netif_trans_update(ndev);
ndev->stats.tx_bytes += skb->len;
ndev->stats.tx_packets++;
}
@@ -1648,7 +1648,7 @@ static void hns_nic_reset_subtask(struct hns_nic_priv *priv)
rtnl_lock();
/* put off any impending NetWatchDogTimeout */
- priv->netdev->trans_start = jiffies;
+ netif_trans_update(priv->netdev);
if (type == HNAE_PORT_DEBUG) {
hns_nic_net_reinit(priv->netdev);
@@ -1873,6 +1873,7 @@ static int hns_nic_dev_probe(struct platform_device *pdev)
struct net_device *ndev;
struct hns_nic_priv *priv;
struct device_node *node = dev->of_node;
+ u32 port_id;
int ret;
ndev = alloc_etherdev_mq(sizeof(struct hns_nic_priv), NIC_MAX_Q_PER_VF);
@@ -1896,10 +1897,18 @@ static int hns_nic_dev_probe(struct platform_device *pdev)
dev_err(dev, "not find ae-handle\n");
goto out_read_prop_fail;
}
-
- ret = of_property_read_u32(node, "port-id", &priv->port_id);
- if (ret)
- goto out_read_prop_fail;
+ /* try to find port-idx-in-ae first */
+ ret = of_property_read_u32(node, "port-idx-in-ae", &port_id);
+ if (ret) {
+ /* kept only for compatibility with old device trees */
+ ret = of_property_read_u32(node, "port-id", &port_id);
+ if (ret)
+ goto out_read_prop_fail;
+ /* for old dts, we need to calculate the port offset */
+ port_id = port_id < HNS_SRV_OFFSET ? port_id + HNS_DEBUG_OFFSET
+ : port_id - HNS_SRV_OFFSET;
+ }
+ priv->port_id = port_id;
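The fallback arithmetic above renumbers ports from the old binding to the new one: with HNS_SRV_OFFSET = 2 and HNS_DEBUG_OFFSET = 6 (added to hns_enet.h below), old "port-id" values 0 and 1 (the debug ports) become 6 and 7, while old values 2..7 (the service ports) become 0..5. A sketch of that mapping, for illustration only:

/* Illustrative only: the old-DT "port-id" -> new "port-idx-in-ae"
 * mapping performed by the fallback above.
 */
static u32 example_old_to_new_port(u32 old_id)
{
	/* debug ports (0, 1) move behind the six service ports */
	return old_id < HNS_SRV_OFFSET ? old_id + HNS_DEBUG_OFFSET
				       : old_id - HNS_SRV_OFFSET;
}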
hns_init_mac_addr(ndev);
diff --git a/drivers/net/ethernet/hisilicon/hns/hns_enet.h b/drivers/net/ethernet/hisilicon/hns/hns_enet.h
index c68ab3d34fc2..337efa582bac 100644
--- a/drivers/net/ethernet/hisilicon/hns/hns_enet.h
+++ b/drivers/net/ethernet/hisilicon/hns/hns_enet.h
@@ -18,6 +18,9 @@
#include "hnae.h"
+#define HNS_DEBUG_OFFSET 6
+#define HNS_SRV_OFFSET 2
+
enum hns_nic_state {
NIC_STATE_TESTING = 0,
NIC_STATE_RESETTING,
diff --git a/drivers/net/ethernet/hp/hp100.c b/drivers/net/ethernet/hp/hp100.c
index 3daf2d4a7ca0..631dbc7b4dbb 100644
--- a/drivers/net/ethernet/hp/hp100.c
+++ b/drivers/net/ethernet/hp/hp100.c
@@ -1102,7 +1102,7 @@ static int hp100_open(struct net_device *dev)
return -EAGAIN;
}
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_start_queue(dev);
lp->lan_type = hp100_sense_lan(dev);
diff --git a/drivers/net/ethernet/i825xx/82596.c b/drivers/net/ethernet/i825xx/82596.c
index 7ce6379fd1a3..befb4ac3e2b0 100644
--- a/drivers/net/ethernet/i825xx/82596.c
+++ b/drivers/net/ethernet/i825xx/82596.c
@@ -1042,7 +1042,7 @@ static void i596_tx_timeout (struct net_device *dev)
lp->last_restart = dev->stats.tx_packets;
}
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue (dev);
}
diff --git a/drivers/net/ethernet/i825xx/lib82596.c b/drivers/net/ethernet/i825xx/lib82596.c
index c984998b34a0..3dbc53c21baa 100644
--- a/drivers/net/ethernet/i825xx/lib82596.c
+++ b/drivers/net/ethernet/i825xx/lib82596.c
@@ -960,7 +960,7 @@ static void i596_tx_timeout (struct net_device *dev)
lp->last_restart = dev->stats.tx_packets;
}
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue (dev);
}
diff --git a/drivers/net/ethernet/i825xx/sun3_82586.c b/drivers/net/ethernet/i825xx/sun3_82586.c
index 353f57f675d0..21c84cc9c871 100644
--- a/drivers/net/ethernet/i825xx/sun3_82586.c
+++ b/drivers/net/ethernet/i825xx/sun3_82586.c
@@ -983,7 +983,7 @@ static void sun3_82586_timeout(struct net_device *dev)
p->scb->cmd_cuc = CUC_START;
sun3_attn586();
WAIT_4_SCB_CMD();
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
return 0;
}
#endif
@@ -996,7 +996,7 @@ static void sun3_82586_timeout(struct net_device *dev)
sun3_82586_close(dev);
sun3_82586_open(dev);
}
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
}
/******************************************************
diff --git a/drivers/net/ethernet/ibm/ehea/ehea_main.c b/drivers/net/ethernet/ibm/ehea/ehea_main.c
index 2a0dc127df3f..54efa9a5167b 100644
--- a/drivers/net/ethernet/ibm/ehea/ehea_main.c
+++ b/drivers/net/ethernet/ibm/ehea/ehea_main.c
@@ -1169,16 +1169,15 @@ static void ehea_parse_eqe(struct ehea_adapter *adapter, u64 eqe)
ec = EHEA_BMASK_GET(NEQE_EVENT_CODE, eqe);
portnum = EHEA_BMASK_GET(NEQE_PORTNUM, eqe);
port = ehea_get_port(adapter, portnum);
+ if (!port) {
+ netdev_err(NULL, "unknown portnum %x\n", portnum);
+ return;
+ }
dev = port->netdev;
switch (ec) {
case EHEA_EC_PORTSTATE_CHG: /* port state change */
- if (!port) {
- netdev_err(dev, "unknown portnum %x\n", portnum);
- break;
- }
-
if (EHEA_BMASK_GET(NEQE_PORT_UP, eqe)) {
if (!netif_carrier_ok(dev)) {
ret = ehea_sense_port_attr(port);
diff --git a/drivers/net/ethernet/ibm/emac/core.c b/drivers/net/ethernet/ibm/emac/core.c
index 5d7db6c01c46..4c9771d57d6e 100644
--- a/drivers/net/ethernet/ibm/emac/core.c
+++ b/drivers/net/ethernet/ibm/emac/core.c
@@ -301,7 +301,7 @@ static inline void emac_netif_stop(struct emac_instance *dev)
dev->no_mcast = 1;
netif_addr_unlock(dev->ndev);
netif_tx_unlock_bh(dev->ndev);
- dev->ndev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev->ndev); /* prevent tx timeout */
mal_poll_disable(dev->mal, &dev->commac);
netif_tx_disable(dev->ndev);
}
@@ -1377,7 +1377,7 @@ static inline int emac_xmit_finish(struct emac_instance *dev, int len)
DBG2(dev, "stopped TX queue" NL);
}
- ndev->trans_start = jiffies;
+ netif_trans_update(ndev);
++dev->stats.tx_packets;
dev->stats.tx_bytes += len;
diff --git a/drivers/net/ethernet/ibm/emac/phy.c b/drivers/net/ethernet/ibm/emac/phy.c
index d3b9d103353e..5b88cc690c22 100644
--- a/drivers/net/ethernet/ibm/emac/phy.c
+++ b/drivers/net/ethernet/ibm/emac/phy.c
@@ -470,12 +470,38 @@ static struct mii_phy_def m88e1112_phy_def = {
.ops = &m88e1112_phy_ops,
};
+static int ar8035_init(struct mii_phy *phy)
+{
+ phy_write(phy, 0x1d, 0x5); /* Address debug register 5 */
+ phy_write(phy, 0x1e, 0x2d47); /* Value copied from u-boot */
+ phy_write(phy, 0x1d, 0xb); /* Address hib ctrl */
+ phy_write(phy, 0x1e, 0xbc20); /* Value copied from u-boot */
+
+ return 0;
+}
+
+static struct mii_phy_ops ar8035_phy_ops = {
+ .init = ar8035_init,
+ .setup_aneg = genmii_setup_aneg,
+ .setup_forced = genmii_setup_forced,
+ .poll_link = genmii_poll_link,
+ .read_link = genmii_read_link,
+};
+
+static struct mii_phy_def ar8035_phy_def = {
+ .phy_id = 0x004dd070,
+ .phy_id_mask = 0xfffffff0,
+ .name = "Atheros 8035 Gigabit Ethernet",
+ .ops = &ar8035_phy_ops,
+};
+
static struct mii_phy_def *mii_phy_table[] = {
&et1011c_phy_def,
&cis8201_phy_def,
&bcm5248_phy_def,
&m88e1111_phy_def,
&m88e1112_phy_def,
+ &ar8035_phy_def,
&genmii_phy_def,
NULL
};
diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
index 6e9e16eee5d0..864cb21351a4 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.c
+++ b/drivers/net/ethernet/ibm/ibmvnic.c
@@ -61,6 +61,7 @@
#include <linux/proc_fs.h>
#include <linux/in.h>
#include <linux/ip.h>
+#include <linux/ipv6.h>
#include <linux/irq.h>
#include <linux/kthread.h>
#include <linux/seq_file.h>
@@ -94,6 +95,7 @@ static int ibmvnic_reenable_crq_queue(struct ibmvnic_adapter *);
static int ibmvnic_send_crq(struct ibmvnic_adapter *, union ibmvnic_crq *);
static int send_subcrq(struct ibmvnic_adapter *adapter, u64 remote_handle,
union sub_crq *sub_crq);
+static int send_subcrq_indirect(struct ibmvnic_adapter *, u64, u64, u64);
static irqreturn_t ibmvnic_interrupt_rx(int irq, void *instance);
static int enable_scrq_irq(struct ibmvnic_adapter *,
struct ibmvnic_sub_crq_queue *);
@@ -561,10 +563,141 @@ static int ibmvnic_close(struct net_device *netdev)
return 0;
}
+/**
+ * build_hdr_data - creates L2/L3/L4 header data buffer
+ * @hdr_field - bitfield determining needed headers
+ * @skb - socket buffer
+ * @hdr_len - array of header lengths
+ * @tot_len - total length of data
+ *
+ * Reads hdr_field to determine which headers are needed by firmware.
+ * Builds a buffer containing these headers. Saves individual header
+ * lengths and total buffer length to be used to build descriptors.
+ */
+static int build_hdr_data(u8 hdr_field, struct sk_buff *skb,
+ int *hdr_len, u8 *hdr_data)
+{
+ int len = 0;
+ u8 *hdr;
+
+ hdr_len[0] = sizeof(struct ethhdr);
+
+ if (skb->protocol == htons(ETH_P_IP)) {
+ hdr_len[1] = ip_hdr(skb)->ihl * 4;
+ if (ip_hdr(skb)->protocol == IPPROTO_TCP)
+ hdr_len[2] = tcp_hdrlen(skb);
+ else if (ip_hdr(skb)->protocol == IPPROTO_UDP)
+ hdr_len[2] = sizeof(struct udphdr);
+ } else if (skb->protocol == htons(ETH_P_IPV6)) {
+ hdr_len[1] = sizeof(struct ipv6hdr);
+ if (ipv6_hdr(skb)->nexthdr == IPPROTO_TCP)
+ hdr_len[2] = tcp_hdrlen(skb);
+ else if (ipv6_hdr(skb)->nexthdr == IPPROTO_UDP)
+ hdr_len[2] = sizeof(struct udphdr);
+ }
+
+ memset(hdr_data, 0, 120);
+ if ((hdr_field >> 6) & 1) {
+ hdr = skb_mac_header(skb);
+ memcpy(hdr_data, hdr, hdr_len[0]);
+ len += hdr_len[0];
+ }
+
+ if ((hdr_field >> 5) & 1) {
+ hdr = skb_network_header(skb);
+ memcpy(hdr_data + len, hdr, hdr_len[1]);
+ len += hdr_len[1];
+ }
+
+ if ((hdr_field >> 4) & 1) {
+ hdr = skb_transport_header(skb);
+ memcpy(hdr_data + len, hdr, hdr_len[2]);
+ len += hdr_len[2];
+ }
+ return len;
+}
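build_hdr_data() treats hdr_field as a bitmap: bit 6 requests the L2 header, bit 5 the L3 header and bit 4 the L4 header, with the individual lengths recorded in hdr_len[]. A worked example of the numbers it would produce, assuming a TCP/IPv4 frame with no IP or TCP options and all three header bits set (hdr_field = 0x70):

/* Illustrative lengths only (TCP/IPv4, no options); assumes
 * <linux/if_ether.h>, <linux/ip.h> and <linux/tcp.h>.
 */
static int example_tcp_ipv4_hdr_lens(int *hdr_len)
{
	hdr_len[0] = ETH_HLEN;			/* 14: struct ethhdr         */
	hdr_len[1] = sizeof(struct iphdr);	/* 20: ihl = 5, so 5 * 4     */
	hdr_len[2] = sizeof(struct tcphdr);	/* 20: tcp_hdrlen(), no opts */

	return hdr_len[0] + hdr_len[1] + hdr_len[2];	/* 54 bytes copied */
}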
+
+/**
+ * create_hdr_descs - create header and header extension descriptors
+ * @hdr_field - bitfield determining needed headers
+ * @data - buffer containing header data
+ * @len - length of data buffer
+ * @hdr_len - array of individual header lengths
+ * @scrq_arr - descriptor array
+ *
+ * Creates header and, if needed, header extension descriptors and
+ * places them in a descriptor array, scrq_arr
+ */
+
+static void create_hdr_descs(u8 hdr_field, u8 *hdr_data, int len, int *hdr_len,
+ union sub_crq *scrq_arr)
+{
+ union sub_crq hdr_desc;
+ int tmp_len = len;
+ u8 *data, *cur;
+ int tmp;
+
+ while (tmp_len > 0) {
+ cur = hdr_data + len - tmp_len;
+
+ memset(&hdr_desc, 0, sizeof(hdr_desc));
+ if (cur != hdr_data) {
+ data = hdr_desc.hdr_ext.data;
+ tmp = tmp_len > 29 ? 29 : tmp_len;
+ hdr_desc.hdr_ext.first = IBMVNIC_CRQ_CMD;
+ hdr_desc.hdr_ext.type = IBMVNIC_HDR_EXT_DESC;
+ hdr_desc.hdr_ext.len = tmp;
+ } else {
+ data = hdr_desc.hdr.data;
+ tmp = tmp_len > 24 ? 24 : tmp_len;
+ hdr_desc.hdr.first = IBMVNIC_CRQ_CMD;
+ hdr_desc.hdr.type = IBMVNIC_HDR_DESC;
+ hdr_desc.hdr.len = tmp;
+ hdr_desc.hdr.l2_len = (u8)hdr_len[0];
+ hdr_desc.hdr.l3_len = cpu_to_be16((u16)hdr_len[1]);
+ hdr_desc.hdr.l4_len = (u8)hdr_len[2];
+ hdr_desc.hdr.flag = hdr_field << 1;
+ }
+ memcpy(data, cur, tmp);
+ tmp_len -= tmp;
+ *scrq_arr = hdr_desc;
+ scrq_arr++;
+ }
+}
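create_hdr_descs() packs the first 24 header bytes into the header descriptor and each further chunk of up to 29 bytes into a header-extension descriptor. A small sketch of the resulting descriptor count, valid for len > 0 and using the usual kernel DIV_ROUND_UP() helper:

/* Illustrative only: descriptors emitted by create_hdr_descs() for
 * "len" bytes of header data (len > 0).
 */
static int example_hdr_desc_count(int len)
{
	int ext_bytes = len > 24 ? len - 24 : 0;

	/* e.g. len = 54 -> 1 header desc + 2 extension descs = 3 */
	return 1 + DIV_ROUND_UP(ext_bytes, 29);
}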
+
+/**
+ * build_hdr_descs_arr - build a header descriptor array
+ * @skb - socket buffer
+ * @num_entries - number of descriptors to be sent
+ * @subcrq - first TX descriptor
+ * @hdr_field - bit field determining which headers will be sent
+ *
+ * This function will build a TX descriptor array with applicable
+ * L2/L3/L4 packet header descriptors to be sent by send_subcrq_indirect.
+ */
+
+static void build_hdr_descs_arr(struct ibmvnic_tx_buff *txbuff,
+ int *num_entries, u8 hdr_field)
+{
+ int hdr_len[3] = {0, 0, 0};
+ int tot_len, len;
+ u8 *hdr_data = txbuff->hdr_data;
+
+ tot_len = build_hdr_data(hdr_field, txbuff->skb, hdr_len,
+ txbuff->hdr_data);
+ len = tot_len;
+ len -= 24;
+ if (len > 0)
+ num_entries += len % 29 ? len / 29 + 1 : len / 29;
+ create_hdr_descs(hdr_field, hdr_data, tot_len, hdr_len,
+ txbuff->indir_arr + 1);
+}
+
static int ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
{
struct ibmvnic_adapter *adapter = netdev_priv(netdev);
int queue_num = skb_get_queue_mapping(skb);
+ u8 *hdrs = (u8 *)&adapter->tx_rx_desc_req;
struct device *dev = &adapter->vdev->dev;
struct ibmvnic_tx_buff *tx_buff = NULL;
struct ibmvnic_tx_pool *tx_pool;
@@ -579,6 +712,7 @@ static int ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
unsigned long lpar_rc;
union sub_crq tx_crq;
unsigned int offset;
+ int num_entries = 1;
unsigned char *dst;
u64 *handle_array;
int index = 0;
@@ -644,11 +778,35 @@ static int ibmvnic_xmit(struct sk_buff *skb, struct net_device *netdev)
tx_crq.v1.flags1 |= IBMVNIC_TX_PROT_UDP;
}
- if (skb->ip_summed == CHECKSUM_PARTIAL)
+ if (skb->ip_summed == CHECKSUM_PARTIAL) {
tx_crq.v1.flags1 |= IBMVNIC_TX_CHKSUM_OFFLOAD;
-
- lpar_rc = send_subcrq(adapter, handle_array[0], &tx_crq);
-
+ hdrs += 2;
+ }
+ /* determine if l2/3/4 headers are sent to firmware */
+ if ((*hdrs >> 7) & 1 &&
+ (skb->protocol == htons(ETH_P_IP) ||
+ skb->protocol == htons(ETH_P_IPV6))) {
+ build_hdr_descs_arr(tx_buff, &num_entries, *hdrs);
+ tx_crq.v1.n_crq_elem = num_entries;
+ tx_buff->indir_arr[0] = tx_crq;
+ tx_buff->indir_dma = dma_map_single(dev, tx_buff->indir_arr,
+ sizeof(tx_buff->indir_arr),
+ DMA_TO_DEVICE);
+ if (dma_mapping_error(dev, tx_buff->indir_dma)) {
+ if (!firmware_has_feature(FW_FEATURE_CMO))
+ dev_err(dev, "tx: unable to map descriptor array\n");
+ tx_map_failed++;
+ tx_dropped++;
+ ret = NETDEV_TX_BUSY;
+ goto out;
+ }
+ lpar_rc = send_subcrq_indirect(adapter, handle_array[queue_num],
+ (u64)tx_buff->indir_dma,
+ (u64)num_entries);
+ } else {
+ lpar_rc = send_subcrq(adapter, handle_array[queue_num],
+ &tx_crq);
+ }
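	/* Illustrative layout of txbuff->indir_arr as built above: entry 0 is
	 * the main v1 TX descriptor (tx_crq), entry 1 the header descriptor,
	 * entries 2..n any header-extension descriptors.  The array is mapped
	 * once with dma_map_single() and handed to H_SEND_SUB_CRQ_INDIRECT
	 * as num_entries elements.
	 */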
if (lpar_rc != H_SUCCESS) {
dev_err(dev, "tx failed with code %ld\n", lpar_rc);
@@ -832,7 +990,7 @@ restart_poll:
netdev->stats.rx_bytes += length;
frames_processed++;
}
- replenish_pools(adapter);
+ replenish_rx_pool(adapter, &adapter->rx_pool[scrq_num]);
if (frames_processed < budget) {
enable_scrq_irq(adapter, adapter->rx_scrq[scrq_num]);
@@ -1159,6 +1317,7 @@ static int ibmvnic_complete_tx(struct ibmvnic_adapter *adapter,
union sub_crq *next;
int index;
int i, j;
+ u8 first;
restart_loop:
while (pending_scrq(adapter, scrq)) {
@@ -1181,6 +1340,13 @@ restart_loop:
txbuff->data_dma[j] = 0;
txbuff->used_bounce = false;
}
+ /* if sub_crq was sent indirectly */
+ first = txbuff->indir_arr[0].generic.first;
+ if (first == IBMVNIC_CRQ_CMD) {
+ dma_unmap_single(dev, txbuff->indir_dma,
+ sizeof(txbuff->indir_arr),
+ DMA_TO_DEVICE);
+ }
if (txbuff->last_frag)
dev_kfree_skb_any(txbuff->skb);
@@ -1261,9 +1427,9 @@ static void init_sub_crqs(struct ibmvnic_adapter *adapter, int retry)
entries_page : adapter->max_rx_add_entries_per_subcrq;
/* Choosing the maximum number of queues supported by firmware*/
- adapter->req_tx_queues = adapter->min_tx_queues;
- adapter->req_rx_queues = adapter->min_rx_queues;
- adapter->req_rx_add_queues = adapter->min_rx_add_queues;
+ adapter->req_tx_queues = adapter->max_tx_queues;
+ adapter->req_rx_queues = adapter->max_rx_queues;
+ adapter->req_rx_add_queues = adapter->max_rx_add_queues;
adapter->req_mtu = adapter->max_mtu;
}
@@ -1494,6 +1660,28 @@ static int send_subcrq(struct ibmvnic_adapter *adapter, u64 remote_handle,
return rc;
}
+static int send_subcrq_indirect(struct ibmvnic_adapter *adapter,
+ u64 remote_handle, u64 ioba, u64 num_entries)
+{
+ unsigned int ua = adapter->vdev->unit_address;
+ struct device *dev = &adapter->vdev->dev;
+ int rc;
+
+ /* Make sure the hypervisor sees the complete request */
+ mb();
+ rc = plpar_hcall_norets(H_SEND_SUB_CRQ_INDIRECT, ua,
+ cpu_to_be64(remote_handle),
+ ioba, num_entries);
+
+ if (rc) {
+ if (rc == H_CLOSED)
+ dev_warn(dev, "CRQ Queue closed\n");
+ dev_err(dev, "Send (indirect) error (rc=%d)\n", rc);
+ }
+
+ return rc;
+}
+
static int ibmvnic_send_crq(struct ibmvnic_adapter *adapter,
union ibmvnic_crq *crq)
{
@@ -1589,13 +1777,11 @@ static void send_login(struct ibmvnic_adapter *adapter)
goto buf_map_failed;
}
- rsp_buffer_size =
- sizeof(struct ibmvnic_login_rsp_buffer) +
- sizeof(u64) * (adapter->req_tx_queues +
- adapter->req_rx_queues *
- adapter->req_rx_add_queues + adapter->
- req_rx_add_queues) +
- sizeof(u8) * (IBMVNIC_TX_DESC_VERSIONS);
+ rsp_buffer_size = sizeof(struct ibmvnic_login_rsp_buffer) +
+ sizeof(u64) * adapter->req_tx_queues +
+ sizeof(u64) * adapter->req_rx_queues +
+ sizeof(u64) * adapter->req_rx_queues +
+ sizeof(u8) * IBMVNIC_TX_DESC_VERSIONS;
login_rsp_buffer = kmalloc(rsp_buffer_size, GFP_ATOMIC);
if (!login_rsp_buffer)
@@ -1918,6 +2104,10 @@ static void handle_query_ip_offload_rsp(struct ibmvnic_adapter *adapter)
if (buf->tcp_ipv6_chksum || buf->udp_ipv6_chksum)
adapter->netdev->features |= NETIF_F_IPV6_CSUM;
+ if ((adapter->netdev->features &
+ (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)))
+ adapter->netdev->features |= NETIF_F_RXCSUM;
+
memset(&crq, 0, sizeof(crq));
crq.control_ip_offload.first = IBMVNIC_CRQ_CMD;
crq.control_ip_offload.cmd = CONTROL_IP_OFFLOAD;
@@ -2210,6 +2400,16 @@ static int handle_login_rsp(union ibmvnic_crq *login_rsp_crq,
dma_unmap_single(dev, adapter->login_rsp_buf_token,
adapter->login_rsp_buf_sz, DMA_BIDIRECTIONAL);
+ /* If the number of queues requested can't be allocated by the
+ * server, the login response will return with code 1. We will need
+ * to resend the login buffer with fewer queues requested.
+ */
+ if (login_rsp_crq->generic.rc.code) {
+ adapter->renegotiate = true;
+ complete(&adapter->init_done);
+ return 0;
+ }
+
netdev_dbg(adapter->netdev, "Login Response Buffer:\n");
for (i = 0; i < (adapter->login_rsp_buf_sz - 1) / 8 + 1; i++) {
netdev_dbg(adapter->netdev, "%016lx\n",
@@ -3437,14 +3637,21 @@ static int ibmvnic_probe(struct vio_dev *dev, const struct vio_device_id *id)
init_completion(&adapter->init_done);
wait_for_completion(&adapter->init_done);
- /* needed to pull init_sub_crqs outside of an interrupt context
- * because it creates IRQ mappings for the subCRQ queues, causing
- * a kernel warning
- */
- init_sub_crqs(adapter, 0);
+ do {
+ adapter->renegotiate = false;
- reinit_completion(&adapter->init_done);
- wait_for_completion(&adapter->init_done);
+ init_sub_crqs(adapter, 0);
+ reinit_completion(&adapter->init_done);
+ wait_for_completion(&adapter->init_done);
+
+ if (adapter->renegotiate) {
+ release_sub_crqs(adapter);
+ send_cap_queries(adapter);
+
+ reinit_completion(&adapter->init_done);
+ wait_for_completion(&adapter->init_done);
+ }
+ } while (adapter->renegotiate);
/* if init_sub_crqs is partially successful, retry */
while (!adapter->tx_scrq || !adapter->rx_scrq) {
diff --git a/drivers/net/ethernet/ibm/ibmvnic.h b/drivers/net/ethernet/ibm/ibmvnic.h
index 1a9993cc79b5..0b66a506a4e4 100644
--- a/drivers/net/ethernet/ibm/ibmvnic.h
+++ b/drivers/net/ethernet/ibm/ibmvnic.h
@@ -879,6 +879,9 @@ struct ibmvnic_tx_buff {
int pool_index;
bool last_frag;
bool used_bounce;
+ union sub_crq indir_arr[6];
+ u8 hdr_data[140];
+ dma_addr_t indir_dma;
};
struct ibmvnic_tx_pool {
@@ -977,6 +980,7 @@ struct ibmvnic_adapter {
struct ibmvnic_sub_crq_queue **tx_scrq;
struct ibmvnic_sub_crq_queue **rx_scrq;
int requested_caps;
+ bool renegotiate;
/* rx structs */
struct napi_struct *napi;
diff --git a/drivers/net/ethernet/intel/Kconfig b/drivers/net/ethernet/intel/Kconfig
index 3772f3ac956e..714bd1014ddb 100644
--- a/drivers/net/ethernet/intel/Kconfig
+++ b/drivers/net/ethernet/intel/Kconfig
@@ -25,16 +25,13 @@ config E100
on the adapter. Look for a label that has a barcode and a number
in the format 123456-001 (six digits hyphen three digits).
- Use the above information and the Adapter & Driver ID Guide at:
+ Use the above information and the Adapter & Driver ID Guide that
+ can be located at:
- <http://support.intel.com/support/network/adapter/pro100/21397.htm>
+ <http://support.intel.com>
to identify the adapter.
- For the latest Intel PRO/100 network driver for Linux, see:
-
- <http://www.intel.com/p/en_US/support/highlights/network/pro100plus>
-
More specific information on configuring the driver is in
<file:Documentation/networking/e100.txt>.
@@ -47,12 +44,7 @@ config E1000
---help---
This driver supports Intel(R) PRO/1000 gigabit ethernet family of
adapters. For more information on how to identify your adapter, go
- to the Adapter & Driver ID Guide at:
-
- <http://support.intel.com/support/network/adapter/pro100/21397.htm>
-
- For general information and support, go to the Intel support
- website at:
+ to the Adapter & Driver ID Guide that can be located at:
<http://support.intel.com>
@@ -71,12 +63,8 @@ config E1000E
This driver supports the PCI-Express Intel(R) PRO/1000 gigabit
ethernet family of adapters. For PCI or PCI-X e1000 adapters,
use the regular e1000 driver For more information on how to
- identify your adapter, go to the Adapter & Driver ID Guide at:
-
- <http://support.intel.com/support/network/adapter/pro100/21397.htm>
-
- For general information and support, go to the Intel support
- website at:
+ identify your adapter, go to the Adapter & Driver ID Guide that
+ can be located at:
<http://support.intel.com>
@@ -101,12 +89,7 @@ config IGB
---help---
This driver supports Intel(R) 82575/82576 gigabit ethernet family of
adapters. For more information on how to identify your adapter, go
- to the Adapter & Driver ID Guide at:
-
- <http://support.intel.com/support/network/adapter/pro100/21397.htm>
-
- For general information and support, go to the Intel support
- website at:
+ to the Adapter & Driver ID Guide that can be located at:
<http://support.intel.com>
@@ -142,12 +125,7 @@ config IGBVF
---help---
This driver supports Intel(R) 82576 virtual functions. For more
information on how to identify your adapter, go to the Adapter &
- Driver ID Guide at:
-
- <http://support.intel.com/support/network/adapter/pro100/21397.htm>
-
- For general information and support, go to the Intel support
- website at:
+ Driver ID Guide that can be located at:
<http://support.intel.com>
@@ -164,12 +142,7 @@ config IXGB
This driver supports Intel(R) PRO/10GbE family of adapters for
PCI-X type cards. For PCI-E type cards, use the "ixgbe" driver
instead. For more information on how to identify your adapter, go
- to the Adapter & Driver ID Guide at:
-
- <http://support.intel.com/support/network/adapter/pro100/21397.htm>
-
- For general information and support, go to the Intel support
- website at:
+ to the Adapter & Driver ID Guide that can be located at:
<http://support.intel.com>
@@ -187,12 +160,7 @@ config IXGBE
---help---
This driver supports Intel(R) 10GbE PCI Express family of
adapters. For more information on how to identify your adapter, go
- to the Adapter & Driver ID Guide at:
-
- <http://support.intel.com/support/network/adapter/pro100/21397.htm>
-
- For general information and support, go to the Intel support
- website at:
+ to the Adapter & Driver ID Guide that can be located at:
<http://support.intel.com>
@@ -243,12 +211,7 @@ config IXGBEVF
---help---
This driver supports Intel(R) PCI Express virtual functions for the
Intel(R) ixgbe driver. For more information on how to identify your
- adapter, go to the Adapter & Driver ID Guide at:
-
- <http://support.intel.com/support/network/sb/CS-008441.htm>
-
- For general information and support, go to the Intel support
- website at:
+ adapter, go to the Adapter & Driver ID Guide that can be located at:
<http://support.intel.com>
@@ -266,12 +229,7 @@ config I40E
---help---
This driver supports Intel(R) Ethernet Controller XL710 Family of
devices. For more information on how to identify your adapter, go
- to the Adapter & Driver ID Guide at:
-
- <http://support.intel.com/support/network/adapter/pro100/21397.htm>
-
- For general information and support, go to the Intel support
- website at:
+ to the Adapter & Driver ID Guide that can be located at:
<http://support.intel.com>
@@ -326,12 +284,7 @@ config I40EVF
---help---
This driver supports Intel(R) XL710 and X710 virtual functions.
For more information on how to identify your adapter, go to the
- Adapter & Driver ID Guide at:
-
- <http://support.intel.com/support/network/sb/CS-008441.htm>
-
- For general information and support, go to the Intel support
- website at:
+ Adapter & Driver ID Guide that can be located at:
<http://support.intel.com>
@@ -347,12 +300,7 @@ config FM10K
---help---
This driver supports Intel(R) FM10000 Ethernet Switch Host
Interface. For more information on how to identify your adapter,
- go to the Adapter & Driver ID Guide at:
-
- <http://support.intel.com/support/network/sb/CS-008441.htm>
-
- For general information and support, go to the Intel support
- website at:
+ go to the Adapter & Driver ID Guide that can be located at:
<http://support.intel.com>
diff --git a/drivers/net/ethernet/intel/e1000/e1000.h b/drivers/net/ethernet/intel/e1000/e1000.h
index 98fe5a2cd6e3..d7bdea79e9fa 100644
--- a/drivers/net/ethernet/intel/e1000/e1000.h
+++ b/drivers/net/ethernet/intel/e1000/e1000.h
@@ -358,6 +358,8 @@ struct net_device *e1000_get_hw_dev(struct e1000_hw *hw);
extern char e1000_driver_name[];
extern const char e1000_driver_version[];
+int e1000_open(struct net_device *netdev);
+int e1000_close(struct net_device *netdev);
int e1000_up(struct e1000_adapter *adapter);
void e1000_down(struct e1000_adapter *adapter);
void e1000_reinit_locked(struct e1000_adapter *adapter);
diff --git a/drivers/net/ethernet/intel/e1000/e1000_ethtool.c b/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
index 83e557c7f279..975eeb885ca2 100644
--- a/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
+++ b/drivers/net/ethernet/intel/e1000/e1000_ethtool.c
@@ -1553,7 +1553,7 @@ static void e1000_diag_test(struct net_device *netdev,
if (if_running)
/* indicate we're in test mode */
- dev_close(netdev);
+ e1000_close(netdev);
else
e1000_reset(adapter);
@@ -1582,7 +1582,7 @@ static void e1000_diag_test(struct net_device *netdev,
e1000_reset(adapter);
clear_bit(__E1000_TESTING, &adapter->flags);
if (if_running)
- dev_open(netdev);
+ e1000_open(netdev);
} else {
e_info(hw, "online testing starting\n");
/* Online tests */
diff --git a/drivers/net/ethernet/intel/e1000/e1000_main.c b/drivers/net/ethernet/intel/e1000/e1000_main.c
index ae90d4f12b70..f42129d09e2c 100644
--- a/drivers/net/ethernet/intel/e1000/e1000_main.c
+++ b/drivers/net/ethernet/intel/e1000/e1000_main.c
@@ -114,8 +114,8 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent);
static void e1000_remove(struct pci_dev *pdev);
static int e1000_alloc_queues(struct e1000_adapter *adapter);
static int e1000_sw_init(struct e1000_adapter *adapter);
-static int e1000_open(struct net_device *netdev);
-static int e1000_close(struct net_device *netdev);
+int e1000_open(struct net_device *netdev);
+int e1000_close(struct net_device *netdev);
static void e1000_configure_tx(struct e1000_adapter *adapter);
static void e1000_configure_rx(struct e1000_adapter *adapter);
static void e1000_setup_rctl(struct e1000_adapter *adapter);
@@ -1360,7 +1360,7 @@ static int e1000_alloc_queues(struct e1000_adapter *adapter)
* handler is registered with the OS, the watchdog task is started,
* and the stack is notified that the interface is ready.
**/
-static int e1000_open(struct net_device *netdev)
+int e1000_open(struct net_device *netdev)
{
struct e1000_adapter *adapter = netdev_priv(netdev);
struct e1000_hw *hw = &adapter->hw;
@@ -1437,7 +1437,7 @@ err_setup_tx:
* needs to be disabled. A global MAC reset is issued to stop the
* hardware, and all transmit and receive resources are freed.
**/
-static int e1000_close(struct net_device *netdev)
+int e1000_close(struct net_device *netdev)
{
struct e1000_adapter *adapter = netdev_priv(netdev);
struct e1000_hw *hw = &adapter->hw;
diff --git a/drivers/net/ethernet/intel/e1000e/80003es2lan.c b/drivers/net/ethernet/intel/e1000e/80003es2lan.c
index 2af603f3e418..cd391376036c 100644
--- a/drivers/net/ethernet/intel/e1000e/80003es2lan.c
+++ b/drivers/net/ethernet/intel/e1000e/80003es2lan.c
@@ -121,7 +121,7 @@ static s32 e1000_init_nvm_params_80003es2lan(struct e1000_hw *hw)
/* EEPROM access above 16k is unsupported */
if (size > 14)
size = 14;
- nvm->word_size = 1 << size;
+ nvm->word_size = BIT(size);
return 0;
}
@@ -845,27 +845,27 @@ static void e1000_initialize_hw_bits_80003es2lan(struct e1000_hw *hw)
/* Transmit Descriptor Control 0 */
reg = er32(TXDCTL(0));
- reg |= (1 << 22);
+ reg |= BIT(22);
ew32(TXDCTL(0), reg);
/* Transmit Descriptor Control 1 */
reg = er32(TXDCTL(1));
- reg |= (1 << 22);
+ reg |= BIT(22);
ew32(TXDCTL(1), reg);
/* Transmit Arbitration Control 0 */
reg = er32(TARC(0));
reg &= ~(0xF << 27); /* 30:27 */
if (hw->phy.media_type != e1000_media_type_copper)
- reg &= ~(1 << 20);
+ reg &= ~BIT(20);
ew32(TARC(0), reg);
/* Transmit Arbitration Control 1 */
reg = er32(TARC(1));
if (er32(TCTL) & E1000_TCTL_MULR)
- reg &= ~(1 << 28);
+ reg &= ~BIT(28);
else
- reg |= (1 << 28);
+ reg |= BIT(28);
ew32(TARC(1), reg);
/* Disable IPv6 extension header parsing because some malformed
diff --git a/drivers/net/ethernet/intel/e1000e/82571.c b/drivers/net/ethernet/intel/e1000e/82571.c
index 5f7016442ec4..7fd4d54599e4 100644
--- a/drivers/net/ethernet/intel/e1000e/82571.c
+++ b/drivers/net/ethernet/intel/e1000e/82571.c
@@ -185,7 +185,7 @@ static s32 e1000_init_nvm_params_82571(struct e1000_hw *hw)
/* EEPROM access above 16k is unsupported */
if (size > 14)
size = 14;
- nvm->word_size = 1 << size;
+ nvm->word_size = BIT(size);
break;
}
@@ -1163,12 +1163,12 @@ static void e1000_initialize_hw_bits_82571(struct e1000_hw *hw)
/* Transmit Descriptor Control 0 */
reg = er32(TXDCTL(0));
- reg |= (1 << 22);
+ reg |= BIT(22);
ew32(TXDCTL(0), reg);
/* Transmit Descriptor Control 1 */
reg = er32(TXDCTL(1));
- reg |= (1 << 22);
+ reg |= BIT(22);
ew32(TXDCTL(1), reg);
/* Transmit Arbitration Control 0 */
@@ -1177,11 +1177,11 @@ static void e1000_initialize_hw_bits_82571(struct e1000_hw *hw)
switch (hw->mac.type) {
case e1000_82571:
case e1000_82572:
- reg |= (1 << 23) | (1 << 24) | (1 << 25) | (1 << 26);
+ reg |= BIT(23) | BIT(24) | BIT(25) | BIT(26);
break;
case e1000_82574:
case e1000_82583:
- reg |= (1 << 26);
+ reg |= BIT(26);
break;
default:
break;
@@ -1193,12 +1193,12 @@ static void e1000_initialize_hw_bits_82571(struct e1000_hw *hw)
switch (hw->mac.type) {
case e1000_82571:
case e1000_82572:
- reg &= ~((1 << 29) | (1 << 30));
- reg |= (1 << 22) | (1 << 24) | (1 << 25) | (1 << 26);
+ reg &= ~(BIT(29) | BIT(30));
+ reg |= BIT(22) | BIT(24) | BIT(25) | BIT(26);
if (er32(TCTL) & E1000_TCTL_MULR)
- reg &= ~(1 << 28);
+ reg &= ~BIT(28);
else
- reg |= (1 << 28);
+ reg |= BIT(28);
ew32(TARC(1), reg);
break;
default:
@@ -1211,7 +1211,7 @@ static void e1000_initialize_hw_bits_82571(struct e1000_hw *hw)
case e1000_82574:
case e1000_82583:
reg = er32(CTRL);
- reg &= ~(1 << 29);
+ reg &= ~BIT(29);
ew32(CTRL, reg);
break;
default:
@@ -1224,8 +1224,8 @@ static void e1000_initialize_hw_bits_82571(struct e1000_hw *hw)
case e1000_82574:
case e1000_82583:
reg = er32(CTRL_EXT);
- reg &= ~(1 << 23);
- reg |= (1 << 22);
+ reg &= ~BIT(23);
+ reg |= BIT(22);
ew32(CTRL_EXT, reg);
break;
default:
@@ -1261,7 +1261,7 @@ static void e1000_initialize_hw_bits_82571(struct e1000_hw *hw)
case e1000_82574:
case e1000_82583:
reg = er32(GCR);
- reg |= (1 << 22);
+ reg |= BIT(22);
ew32(GCR, reg);
/* Workaround for hardware errata.
@@ -1308,8 +1308,8 @@ static void e1000_clear_vfta_82571(struct e1000_hw *hw)
E1000_VFTA_ENTRY_SHIFT) &
E1000_VFTA_ENTRY_MASK;
vfta_bit_in_reg =
- 1 << (hw->mng_cookie.vlan_id &
- E1000_VFTA_ENTRY_BIT_SHIFT_MASK);
+ BIT(hw->mng_cookie.vlan_id &
+ E1000_VFTA_ENTRY_BIT_SHIFT_MASK);
}
break;
default:
diff --git a/drivers/net/ethernet/intel/e1000e/e1000.h b/drivers/net/ethernet/intel/e1000e/e1000.h
index 1dc293bad87b..ef96cd11d6d2 100644
--- a/drivers/net/ethernet/intel/e1000e/e1000.h
+++ b/drivers/net/ethernet/intel/e1000e/e1000.h
@@ -109,18 +109,18 @@ struct e1000_info;
#define E1000_TXDCTL_DMA_BURST_ENABLE \
(E1000_TXDCTL_GRAN | /* set descriptor granularity */ \
E1000_TXDCTL_COUNT_DESC | \
- (1 << 16) | /* wthresh must be +1 more than desired */\
- (1 << 8) | /* hthresh */ \
- 0x1f) /* pthresh */
+ (1u << 16) | /* wthresh must be +1 more than desired */\
+ (1u << 8) | /* hthresh */ \
+ 0x1f) /* pthresh */
#define E1000_RXDCTL_DMA_BURST_ENABLE \
(0x01000000 | /* set descriptor granularity */ \
- (4 << 16) | /* set writeback threshold */ \
- (4 << 8) | /* set prefetch threshold */ \
+ (4u << 16) | /* set writeback threshold */ \
+ (4u << 8) | /* set prefetch threshold */ \
0x20) /* set hthresh */
-#define E1000_TIDV_FPD (1 << 31)
-#define E1000_RDTR_FPD (1 << 31)
+#define E1000_TIDV_FPD BIT(31)
+#define E1000_RDTR_FPD BIT(31)
enum e1000_boards {
board_82571,
@@ -347,6 +347,7 @@ struct e1000_adapter {
struct ptp_clock *ptp_clock;
struct ptp_clock_info ptp_clock_info;
struct pm_qos_request pm_qos_req;
+ s32 ptp_delta;
u16 eee_advert;
};
@@ -404,53 +405,53 @@ s32 e1000e_get_base_timinca(struct e1000_adapter *adapter, u32 *timinca);
#define E1000_82574_SYSTIM_EPSILON (1ULL << 35ULL)
/* hardware capability, feature, and workaround flags */
-#define FLAG_HAS_AMT (1 << 0)
-#define FLAG_HAS_FLASH (1 << 1)
-#define FLAG_HAS_HW_VLAN_FILTER (1 << 2)
-#define FLAG_HAS_WOL (1 << 3)
-/* reserved bit4 */
-#define FLAG_HAS_CTRLEXT_ON_LOAD (1 << 5)
-#define FLAG_HAS_SWSM_ON_LOAD (1 << 6)
-#define FLAG_HAS_JUMBO_FRAMES (1 << 7)
-#define FLAG_READ_ONLY_NVM (1 << 8)
-#define FLAG_IS_ICH (1 << 9)
-#define FLAG_HAS_MSIX (1 << 10)
-#define FLAG_HAS_SMART_POWER_DOWN (1 << 11)
-#define FLAG_IS_QUAD_PORT_A (1 << 12)
-#define FLAG_IS_QUAD_PORT (1 << 13)
-#define FLAG_HAS_HW_TIMESTAMP (1 << 14)
-#define FLAG_APME_IN_WUC (1 << 15)
-#define FLAG_APME_IN_CTRL3 (1 << 16)
-#define FLAG_APME_CHECK_PORT_B (1 << 17)
-#define FLAG_DISABLE_FC_PAUSE_TIME (1 << 18)
-#define FLAG_NO_WAKE_UCAST (1 << 19)
-#define FLAG_MNG_PT_ENABLED (1 << 20)
-#define FLAG_RESET_OVERWRITES_LAA (1 << 21)
-#define FLAG_TARC_SPEED_MODE_BIT (1 << 22)
-#define FLAG_TARC_SET_BIT_ZERO (1 << 23)
-#define FLAG_RX_NEEDS_RESTART (1 << 24)
-#define FLAG_LSC_GIG_SPEED_DROP (1 << 25)
-#define FLAG_SMART_POWER_DOWN (1 << 26)
-#define FLAG_MSI_ENABLED (1 << 27)
-/* reserved (1 << 28) */
-#define FLAG_TSO_FORCE (1 << 29)
-#define FLAG_RESTART_NOW (1 << 30)
-#define FLAG_MSI_TEST_FAILED (1 << 31)
-
-#define FLAG2_CRC_STRIPPING (1 << 0)
-#define FLAG2_HAS_PHY_WAKEUP (1 << 1)
-#define FLAG2_IS_DISCARDING (1 << 2)
-#define FLAG2_DISABLE_ASPM_L1 (1 << 3)
-#define FLAG2_HAS_PHY_STATS (1 << 4)
-#define FLAG2_HAS_EEE (1 << 5)
-#define FLAG2_DMA_BURST (1 << 6)
-#define FLAG2_DISABLE_ASPM_L0S (1 << 7)
-#define FLAG2_DISABLE_AIM (1 << 8)
-#define FLAG2_CHECK_PHY_HANG (1 << 9)
-#define FLAG2_NO_DISABLE_RX (1 << 10)
-#define FLAG2_PCIM2PCI_ARBITER_WA (1 << 11)
-#define FLAG2_DFLT_CRC_STRIPPING (1 << 12)
-#define FLAG2_CHECK_RX_HWTSTAMP (1 << 13)
+#define FLAG_HAS_AMT BIT(0)
+#define FLAG_HAS_FLASH BIT(1)
+#define FLAG_HAS_HW_VLAN_FILTER BIT(2)
+#define FLAG_HAS_WOL BIT(3)
+/* reserved BIT(4) */
+#define FLAG_HAS_CTRLEXT_ON_LOAD BIT(5)
+#define FLAG_HAS_SWSM_ON_LOAD BIT(6)
+#define FLAG_HAS_JUMBO_FRAMES BIT(7)
+#define FLAG_READ_ONLY_NVM BIT(8)
+#define FLAG_IS_ICH BIT(9)
+#define FLAG_HAS_MSIX BIT(10)
+#define FLAG_HAS_SMART_POWER_DOWN BIT(11)
+#define FLAG_IS_QUAD_PORT_A BIT(12)
+#define FLAG_IS_QUAD_PORT BIT(13)
+#define FLAG_HAS_HW_TIMESTAMP BIT(14)
+#define FLAG_APME_IN_WUC BIT(15)
+#define FLAG_APME_IN_CTRL3 BIT(16)
+#define FLAG_APME_CHECK_PORT_B BIT(17)
+#define FLAG_DISABLE_FC_PAUSE_TIME BIT(18)
+#define FLAG_NO_WAKE_UCAST BIT(19)
+#define FLAG_MNG_PT_ENABLED BIT(20)
+#define FLAG_RESET_OVERWRITES_LAA BIT(21)
+#define FLAG_TARC_SPEED_MODE_BIT BIT(22)
+#define FLAG_TARC_SET_BIT_ZERO BIT(23)
+#define FLAG_RX_NEEDS_RESTART BIT(24)
+#define FLAG_LSC_GIG_SPEED_DROP BIT(25)
+#define FLAG_SMART_POWER_DOWN BIT(26)
+#define FLAG_MSI_ENABLED BIT(27)
+/* reserved BIT(28) */
+#define FLAG_TSO_FORCE BIT(29)
+#define FLAG_RESTART_NOW BIT(30)
+#define FLAG_MSI_TEST_FAILED BIT(31)
+
+#define FLAG2_CRC_STRIPPING BIT(0)
+#define FLAG2_HAS_PHY_WAKEUP BIT(1)
+#define FLAG2_IS_DISCARDING BIT(2)
+#define FLAG2_DISABLE_ASPM_L1 BIT(3)
+#define FLAG2_HAS_PHY_STATS BIT(4)
+#define FLAG2_HAS_EEE BIT(5)
+#define FLAG2_DMA_BURST BIT(6)
+#define FLAG2_DISABLE_ASPM_L0S BIT(7)
+#define FLAG2_DISABLE_AIM BIT(8)
+#define FLAG2_CHECK_PHY_HANG BIT(9)
+#define FLAG2_NO_DISABLE_RX BIT(10)
+#define FLAG2_PCIM2PCI_ARBITER_WA BIT(11)
+#define FLAG2_DFLT_CRC_STRIPPING BIT(12)
+#define FLAG2_CHECK_RX_HWTSTAMP BIT(13)
#define E1000_RX_DESC_PS(R, i) \
(&(((union e1000_rx_desc_packet_split *)((R).desc))[i]))
@@ -480,6 +481,8 @@ extern const char e1000e_driver_version[];
void e1000e_check_options(struct e1000_adapter *adapter);
void e1000e_set_ethtool_ops(struct net_device *netdev);
+int e1000e_open(struct net_device *netdev);
+int e1000e_close(struct net_device *netdev);
void e1000e_up(struct e1000_adapter *adapter);
void e1000e_down(struct e1000_adapter *adapter, bool reset);
void e1000e_reinit_locked(struct e1000_adapter *adapter);
diff --git a/drivers/net/ethernet/intel/e1000e/ethtool.c b/drivers/net/ethernet/intel/e1000e/ethtool.c
index 6cab1f30d41e..7aff68a4a4df 100644
--- a/drivers/net/ethernet/intel/e1000e/ethtool.c
+++ b/drivers/net/ethernet/intel/e1000e/ethtool.c
@@ -201,6 +201,9 @@ static int e1000_get_settings(struct net_device *netdev,
else
ecmd->eth_tp_mdix_ctrl = hw->phy.mdix;
+ if (hw->phy.media_type != e1000_media_type_copper)
+ ecmd->eth_tp_mdix_ctrl = ETH_TP_MDI_INVALID;
+
return 0;
}
@@ -236,8 +239,13 @@ static int e1000_set_spd_dplx(struct e1000_adapter *adapter, u32 spd, u8 dplx)
mac->forced_speed_duplex = ADVERTISE_100_FULL;
break;
case SPEED_1000 + DUPLEX_FULL:
- mac->autoneg = 1;
- adapter->hw.phy.autoneg_advertised = ADVERTISE_1000_FULL;
+ if (adapter->hw.phy.media_type == e1000_media_type_copper) {
+ mac->autoneg = 1;
+ adapter->hw.phy.autoneg_advertised =
+ ADVERTISE_1000_FULL;
+ } else {
+ mac->forced_speed_duplex = ADVERTISE_1000_FULL;
+ }
break;
case SPEED_1000 + DUPLEX_HALF: /* not supported */
default:
@@ -439,8 +447,9 @@ static void e1000_get_regs(struct net_device *netdev,
memset(p, 0, E1000_REGS_LEN * sizeof(u32));
- regs->version = (1 << 24) | (adapter->pdev->revision << 16) |
- adapter->pdev->device;
+ regs->version = (1u << 24) |
+ (adapter->pdev->revision << 16) |
+ adapter->pdev->device;
regs_buff[0] = er32(CTRL);
regs_buff[1] = er32(STATUS);
@@ -895,7 +904,7 @@ static int e1000_reg_test(struct e1000_adapter *adapter, u64 *data)
case e1000_pch2lan:
case e1000_pch_lpt:
case e1000_pch_spt:
- mask |= (1 << 18);
+ mask |= BIT(18);
break;
default:
break;
@@ -914,9 +923,9 @@ static int e1000_reg_test(struct e1000_adapter *adapter, u64 *data)
/* SHRAH[9] different than the others */
if (i == 10)
- mask |= (1 << 30);
+ mask |= BIT(30);
else
- mask &= ~(1 << 30);
+ mask &= ~BIT(30);
}
if (mac->type == e1000_pch2lan) {
/* SHRAH[0,1,2] different than previous */
@@ -924,7 +933,7 @@ static int e1000_reg_test(struct e1000_adapter *adapter, u64 *data)
mask &= 0xFFF4FFFF;
/* SHRAH[3] different than SHRAH[0,1,2] */
if (i == 4)
- mask |= (1 << 30);
+ mask |= BIT(30);
/* RAR[1-6] owned by management engine - skipping */
if (i > 0)
i += 6;
@@ -1019,7 +1028,7 @@ static int e1000_intr_test(struct e1000_adapter *adapter, u64 *data)
/* Test each interrupt */
for (i = 0; i < 10; i++) {
/* Interrupt to test */
- mask = 1 << i;
+ mask = BIT(i);
if (adapter->flags & FLAG_IS_ICH) {
switch (mask) {
@@ -1387,7 +1396,7 @@ static int e1000_integrated_phy_loopback(struct e1000_adapter *adapter)
case e1000_phy_82579:
/* Disable PHY energy detect power down */
e1e_rphy(hw, PHY_REG(0, 21), &phy_reg);
- e1e_wphy(hw, PHY_REG(0, 21), phy_reg & ~(1 << 3));
+ e1e_wphy(hw, PHY_REG(0, 21), phy_reg & ~BIT(3));
/* Disable full chip energy detect */
e1e_rphy(hw, PHY_REG(776, 18), &phy_reg);
e1e_wphy(hw, PHY_REG(776, 18), phy_reg | 1);
@@ -1453,7 +1462,7 @@ static int e1000_set_82571_fiber_loopback(struct e1000_adapter *adapter)
/* disable autoneg */
ctrl = er32(TXCW);
- ctrl &= ~(1 << 31);
+ ctrl &= ~BIT(31);
ew32(TXCW, ctrl);
link = (er32(STATUS) & E1000_STATUS_LU);
@@ -1816,7 +1825,7 @@ static void e1000_diag_test(struct net_device *netdev,
if (if_running)
/* indicate we're in test mode */
- dev_close(netdev);
+ e1000e_close(netdev);
if (e1000_reg_test(adapter, &data[0]))
eth_test->flags |= ETH_TEST_FL_FAILED;
@@ -1849,7 +1858,7 @@ static void e1000_diag_test(struct net_device *netdev,
clear_bit(__E1000_TESTING, &adapter->state);
if (if_running)
- dev_open(netdev);
+ e1000e_open(netdev);
} else {
/* Online tests */
@@ -2283,19 +2292,19 @@ static int e1000e_get_ts_info(struct net_device *netdev,
SOF_TIMESTAMPING_RX_HARDWARE |
SOF_TIMESTAMPING_RAW_HARDWARE);
- info->tx_types = (1 << HWTSTAMP_TX_OFF) | (1 << HWTSTAMP_TX_ON);
-
- info->rx_filters = ((1 << HWTSTAMP_FILTER_NONE) |
- (1 << HWTSTAMP_FILTER_PTP_V1_L4_SYNC) |
- (1 << HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) |
- (1 << HWTSTAMP_FILTER_PTP_V2_L4_SYNC) |
- (1 << HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ) |
- (1 << HWTSTAMP_FILTER_PTP_V2_L2_SYNC) |
- (1 << HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ) |
- (1 << HWTSTAMP_FILTER_PTP_V2_EVENT) |
- (1 << HWTSTAMP_FILTER_PTP_V2_SYNC) |
- (1 << HWTSTAMP_FILTER_PTP_V2_DELAY_REQ) |
- (1 << HWTSTAMP_FILTER_ALL));
+ info->tx_types = BIT(HWTSTAMP_TX_OFF) | BIT(HWTSTAMP_TX_ON);
+
+ info->rx_filters = (BIT(HWTSTAMP_FILTER_NONE) |
+ BIT(HWTSTAMP_FILTER_PTP_V1_L4_SYNC) |
+ BIT(HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) |
+ BIT(HWTSTAMP_FILTER_PTP_V2_L4_SYNC) |
+ BIT(HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ) |
+ BIT(HWTSTAMP_FILTER_PTP_V2_L2_SYNC) |
+ BIT(HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ) |
+ BIT(HWTSTAMP_FILTER_PTP_V2_EVENT) |
+ BIT(HWTSTAMP_FILTER_PTP_V2_SYNC) |
+ BIT(HWTSTAMP_FILTER_PTP_V2_DELAY_REQ) |
+ BIT(HWTSTAMP_FILTER_ALL));
if (adapter->ptp_clock)
info->phc_index = ptp_clock_index(adapter->ptp_clock);
diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.c b/drivers/net/ethernet/intel/e1000e/ich8lan.c
index c0f4887ea44d..3e11322d8d58 100644
--- a/drivers/net/ethernet/intel/e1000e/ich8lan.c
+++ b/drivers/net/ethernet/intel/e1000e/ich8lan.c
@@ -1048,7 +1048,7 @@ static s32 e1000_platform_pm_pch_lpt(struct e1000_hw *hw, bool link)
while (value > PCI_LTR_VALUE_MASK) {
scale++;
- value = DIV_ROUND_UP(value, (1 << 5));
+ value = DIV_ROUND_UP(value, BIT(5));
}
if (scale > E1000_LTRV_SCALE_MAX) {
e_dbg("Invalid LTR latency scale %d\n", scale);
@@ -1573,7 +1573,7 @@ static s32 e1000_check_for_copper_link_ich8lan(struct e1000_hw *hw)
phy_reg &= ~HV_KMRN_FIFO_CTRLSTA_PREAMBLE_MASK;
if ((er32(STATUS) & E1000_STATUS_FD) != E1000_STATUS_FD)
- phy_reg |= (1 << HV_KMRN_FIFO_CTRLSTA_PREAMBLE_SHIFT);
+ phy_reg |= BIT(HV_KMRN_FIFO_CTRLSTA_PREAMBLE_SHIFT);
e1e_wphy(hw, HV_KMRN_FIFO_CTRLSTA, phy_reg);
break;
@@ -2044,9 +2044,9 @@ static s32 e1000_write_smbus_addr(struct e1000_hw *hw)
/* Restore SMBus frequency */
if (freq--) {
phy_data &= ~HV_SMB_ADDR_FREQ_MASK;
- phy_data |= (freq & (1 << 0)) <<
+ phy_data |= (freq & BIT(0)) <<
HV_SMB_ADDR_FREQ_LOW_SHIFT;
- phy_data |= (freq & (1 << 1)) <<
+ phy_data |= (freq & BIT(1)) <<
(HV_SMB_ADDR_FREQ_HIGH_SHIFT - 1);
} else {
e_dbg("Unsupported SMB frequency in PHY\n");
@@ -2530,7 +2530,7 @@ s32 e1000_lv_jumbo_workaround_ich8lan(struct e1000_hw *hw, bool enable)
/* disable Rx path while enabling/disabling workaround */
e1e_rphy(hw, PHY_REG(769, 20), &phy_reg);
- ret_val = e1e_wphy(hw, PHY_REG(769, 20), phy_reg | (1 << 14));
+ ret_val = e1e_wphy(hw, PHY_REG(769, 20), phy_reg | BIT(14));
if (ret_val)
return ret_val;
@@ -2561,7 +2561,7 @@ s32 e1000_lv_jumbo_workaround_ich8lan(struct e1000_hw *hw, bool enable)
/* Enable jumbo frame workaround in the MAC */
mac_reg = er32(FFLT_DBG);
- mac_reg &= ~(1 << 14);
+ mac_reg &= ~BIT(14);
mac_reg |= (7 << 15);
ew32(FFLT_DBG, mac_reg);
@@ -2576,7 +2576,7 @@ s32 e1000_lv_jumbo_workaround_ich8lan(struct e1000_hw *hw, bool enable)
return ret_val;
ret_val = e1000e_write_kmrn_reg(hw,
E1000_KMRNCTRLSTA_CTRL_OFFSET,
- data | (1 << 0));
+ data | BIT(0));
if (ret_val)
return ret_val;
ret_val = e1000e_read_kmrn_reg(hw,
@@ -2600,7 +2600,7 @@ s32 e1000_lv_jumbo_workaround_ich8lan(struct e1000_hw *hw, bool enable)
if (ret_val)
return ret_val;
e1e_rphy(hw, PHY_REG(769, 16), &data);
- data &= ~(1 << 13);
+ data &= ~BIT(13);
ret_val = e1e_wphy(hw, PHY_REG(769, 16), data);
if (ret_val)
return ret_val;
@@ -2614,7 +2614,7 @@ s32 e1000_lv_jumbo_workaround_ich8lan(struct e1000_hw *hw, bool enable)
if (ret_val)
return ret_val;
e1e_rphy(hw, HV_PM_CTRL, &data);
- ret_val = e1e_wphy(hw, HV_PM_CTRL, data | (1 << 10));
+ ret_val = e1e_wphy(hw, HV_PM_CTRL, data | BIT(10));
if (ret_val)
return ret_val;
} else {
@@ -2634,7 +2634,7 @@ s32 e1000_lv_jumbo_workaround_ich8lan(struct e1000_hw *hw, bool enable)
return ret_val;
ret_val = e1000e_write_kmrn_reg(hw,
E1000_KMRNCTRLSTA_CTRL_OFFSET,
- data & ~(1 << 0));
+ data & ~BIT(0));
if (ret_val)
return ret_val;
ret_val = e1000e_read_kmrn_reg(hw,
@@ -2657,7 +2657,7 @@ s32 e1000_lv_jumbo_workaround_ich8lan(struct e1000_hw *hw, bool enable)
if (ret_val)
return ret_val;
e1e_rphy(hw, PHY_REG(769, 16), &data);
- data |= (1 << 13);
+ data |= BIT(13);
ret_val = e1e_wphy(hw, PHY_REG(769, 16), data);
if (ret_val)
return ret_val;
@@ -2671,13 +2671,13 @@ s32 e1000_lv_jumbo_workaround_ich8lan(struct e1000_hw *hw, bool enable)
if (ret_val)
return ret_val;
e1e_rphy(hw, HV_PM_CTRL, &data);
- ret_val = e1e_wphy(hw, HV_PM_CTRL, data & ~(1 << 10));
+ ret_val = e1e_wphy(hw, HV_PM_CTRL, data & ~BIT(10));
if (ret_val)
return ret_val;
}
/* re-enable Rx path after enabling/disabling workaround */
- return e1e_wphy(hw, PHY_REG(769, 20), phy_reg & ~(1 << 14));
+ return e1e_wphy(hw, PHY_REG(769, 20), phy_reg & ~BIT(14));
}
/**
@@ -4841,7 +4841,7 @@ static void e1000_initialize_hw_bits_ich8lan(struct e1000_hw *hw)
/* Extended Device Control */
reg = er32(CTRL_EXT);
- reg |= (1 << 22);
+ reg |= BIT(22);
/* Enable PHY low-power state when MAC is at D3 w/o WoL */
if (hw->mac.type >= e1000_pchlan)
reg |= E1000_CTRL_EXT_PHYPDEN;
@@ -4849,34 +4849,34 @@ static void e1000_initialize_hw_bits_ich8lan(struct e1000_hw *hw)
/* Transmit Descriptor Control 0 */
reg = er32(TXDCTL(0));
- reg |= (1 << 22);
+ reg |= BIT(22);
ew32(TXDCTL(0), reg);
/* Transmit Descriptor Control 1 */
reg = er32(TXDCTL(1));
- reg |= (1 << 22);
+ reg |= BIT(22);
ew32(TXDCTL(1), reg);
/* Transmit Arbitration Control 0 */
reg = er32(TARC(0));
if (hw->mac.type == e1000_ich8lan)
- reg |= (1 << 28) | (1 << 29);
- reg |= (1 << 23) | (1 << 24) | (1 << 26) | (1 << 27);
+ reg |= BIT(28) | BIT(29);
+ reg |= BIT(23) | BIT(24) | BIT(26) | BIT(27);
ew32(TARC(0), reg);
/* Transmit Arbitration Control 1 */
reg = er32(TARC(1));
if (er32(TCTL) & E1000_TCTL_MULR)
- reg &= ~(1 << 28);
+ reg &= ~BIT(28);
else
- reg |= (1 << 28);
- reg |= (1 << 24) | (1 << 26) | (1 << 30);
+ reg |= BIT(28);
+ reg |= BIT(24) | BIT(26) | BIT(30);
ew32(TARC(1), reg);
/* Device Status */
if (hw->mac.type == e1000_ich8lan) {
reg = er32(STATUS);
- reg &= ~(1 << 31);
+ reg &= ~BIT(31);
ew32(STATUS, reg);
}
diff --git a/drivers/net/ethernet/intel/e1000e/ich8lan.h b/drivers/net/ethernet/intel/e1000e/ich8lan.h
index 2311f6003f58..67163ca898ba 100644
--- a/drivers/net/ethernet/intel/e1000e/ich8lan.h
+++ b/drivers/net/ethernet/intel/e1000e/ich8lan.h
@@ -73,10 +73,10 @@
(ID_LED_OFF1_ON2 << 4) | \
(ID_LED_DEF1_DEF2))
-#define E1000_ICH_NVM_SIG_WORD 0x13
-#define E1000_ICH_NVM_SIG_MASK 0xC000
-#define E1000_ICH_NVM_VALID_SIG_MASK 0xC0
-#define E1000_ICH_NVM_SIG_VALUE 0x80
+#define E1000_ICH_NVM_SIG_WORD 0x13u
+#define E1000_ICH_NVM_SIG_MASK 0xC000u
+#define E1000_ICH_NVM_VALID_SIG_MASK 0xC0u
+#define E1000_ICH_NVM_SIG_VALUE 0x80u
#define E1000_ICH8_LAN_INIT_TIMEOUT 1500
diff --git a/drivers/net/ethernet/intel/e1000e/mac.c b/drivers/net/ethernet/intel/e1000e/mac.c
index e59d7c283cd4..b322011ec282 100644
--- a/drivers/net/ethernet/intel/e1000e/mac.c
+++ b/drivers/net/ethernet/intel/e1000e/mac.c
@@ -346,7 +346,7 @@ void e1000e_update_mc_addr_list_generic(struct e1000_hw *hw,
hash_reg = (hash_value >> 5) & (hw->mac.mta_reg_count - 1);
hash_bit = hash_value & 0x1F;
- hw->mac.mta_shadow[hash_reg] |= (1 << hash_bit);
+ hw->mac.mta_shadow[hash_reg] |= BIT(hash_bit);
mc_addr_list += (ETH_ALEN);
}
diff --git a/drivers/net/ethernet/intel/e1000e/netdev.c b/drivers/net/ethernet/intel/e1000e/netdev.c
index 9b4ec13d9161..75e60897b7e7 100644
--- a/drivers/net/ethernet/intel/e1000e/netdev.c
+++ b/drivers/net/ethernet/intel/e1000e/netdev.c
@@ -242,7 +242,7 @@ static void e1000e_dump(struct e1000_adapter *adapter)
dev_info(&adapter->pdev->dev, "Net device Info\n");
pr_info("Device Name state trans_start last_rx\n");
pr_info("%-15s %016lX %016lX %016lX\n", netdev->name,
- netdev->state, netdev->trans_start, netdev->last_rx);
+ netdev->state, dev_trans_start(netdev), netdev->last_rx);
}
/* Print Registers */
@@ -317,8 +317,8 @@ static void e1000e_dump(struct e1000_adapter *adapter)
else
next_desc = "";
pr_info("T%c[0x%03X] %016llX %016llX %016llX %04X %3X %016llX %p%s\n",
- (!(le64_to_cpu(u0->b) & (1 << 29)) ? 'l' :
- ((le64_to_cpu(u0->b) & (1 << 20)) ? 'd' : 'c')),
+ (!(le64_to_cpu(u0->b) & BIT(29)) ? 'l' :
+ ((le64_to_cpu(u0->b) & BIT(20)) ? 'd' : 'c')),
i,
(unsigned long long)le64_to_cpu(u0->a),
(unsigned long long)le64_to_cpu(u0->b),
@@ -2018,7 +2018,7 @@ static void e1000_configure_msix(struct e1000_adapter *adapter)
adapter->eiac_mask |= E1000_IMS_OTHER;
/* Cause Tx interrupts on every write back */
- ivar |= (1 << 31);
+ ivar |= BIT(31);
ew32(IVAR, ivar);
@@ -2709,7 +2709,7 @@ static int e1000_vlan_rx_add_vid(struct net_device *netdev,
if (adapter->flags & FLAG_HAS_HW_VLAN_FILTER) {
index = (vid >> 5) & 0x7F;
vfta = E1000_READ_REG_ARRAY(hw, E1000_VFTA, index);
- vfta |= (1 << (vid & 0x1F));
+ vfta |= BIT((vid & 0x1F));
hw->mac.ops.write_vfta(hw, index, vfta);
}
@@ -2737,7 +2737,7 @@ static int e1000_vlan_rx_kill_vid(struct net_device *netdev,
if (adapter->flags & FLAG_HAS_HW_VLAN_FILTER) {
index = (vid >> 5) & 0x7F;
vfta = E1000_READ_REG_ARRAY(hw, E1000_VFTA, index);
- vfta &= ~(1 << (vid & 0x1F));
+ vfta &= ~BIT((vid & 0x1F));
hw->mac.ops.write_vfta(hw, index, vfta);
}
@@ -2878,7 +2878,7 @@ static void e1000_init_manageability_pt(struct e1000_adapter *adapter)
/* Enable this decision filter in MANC2H */
if (mdef)
- manc2h |= (1 << i);
+ manc2h |= BIT(i);
j |= mdef;
}
@@ -2891,7 +2891,7 @@ static void e1000_init_manageability_pt(struct e1000_adapter *adapter)
if (er32(MDEF(i)) == 0) {
ew32(MDEF(i), (E1000_MDEF_PORT_623 |
E1000_MDEF_PORT_664));
- manc2h |= (1 << 1);
+ manc2h |= BIT(1);
j++;
break;
}
@@ -2971,7 +2971,7 @@ static void e1000_configure_tx(struct e1000_adapter *adapter)
/* set the speed mode bit, we'll clear it if we're not at
* gigabit link later
*/
-#define SPEED_MODE_BIT (1 << 21)
+#define SPEED_MODE_BIT BIT(21)
tarc |= SPEED_MODE_BIT;
ew32(TARC(0), tarc);
}
@@ -3071,12 +3071,12 @@ static void e1000_setup_rctl(struct e1000_adapter *adapter)
e1e_rphy(hw, PHY_REG(770, 26), &phy_data);
phy_data &= 0xfff8;
- phy_data |= (1 << 2);
+ phy_data |= BIT(2);
e1e_wphy(hw, PHY_REG(770, 26), phy_data);
e1e_rphy(hw, 22, &phy_data);
phy_data &= 0x0fff;
- phy_data |= (1 << 14);
+ phy_data |= BIT(14);
e1e_wphy(hw, 0x10, 0x2823);
e1e_wphy(hw, 0x11, 0x0003);
e1e_wphy(hw, 22, phy_data);
@@ -3368,12 +3368,12 @@ static int e1000e_write_uc_addr_list(struct net_device *netdev)
* combining
*/
netdev_for_each_uc_addr(ha, netdev) {
- int rval;
+ int ret_val;
if (!rar_entries)
break;
- rval = hw->mac.ops.rar_set(hw, ha->addr, rar_entries--);
- if (rval < 0)
+ ret_val = hw->mac.ops.rar_set(hw, ha->addr, rar_entries--);
+ if (ret_val < 0)
return -ENOMEM;
count++;
}
@@ -3503,8 +3503,8 @@ s32 e1000e_get_base_timinca(struct e1000_adapter *adapter, u32 *timinca)
!(er32(TSYNCRXCTL) & E1000_TSYNCRXCTL_ENABLED)) {
u32 fextnvm7 = er32(FEXTNVM7);
- if (!(fextnvm7 & (1 << 0))) {
- ew32(FEXTNVM7, fextnvm7 | (1 << 0));
+ if (!(fextnvm7 & BIT(0))) {
+ ew32(FEXTNVM7, fextnvm7 | BIT(0));
e1e_flush();
}
}
@@ -3580,7 +3580,6 @@ static int e1000e_config_hwtstamp(struct e1000_adapter *adapter,
bool is_l4 = false;
bool is_l2 = false;
u32 regval;
- s32 ret_val;
if (!(adapter->flags & FLAG_HAS_HW_TIMESTAMP))
return -EINVAL;
@@ -3719,16 +3718,6 @@ static int e1000e_config_hwtstamp(struct e1000_adapter *adapter,
er32(RXSTMPH);
er32(TXSTMPH);
- /* Get and set the System Time Register SYSTIM base frequency */
- ret_val = e1000e_get_base_timinca(adapter, &regval);
- if (ret_val)
- return ret_val;
- ew32(TIMINCA, regval);
-
- /* reset the ns time counter */
- timecounter_init(&adapter->tc, &adapter->cc,
- ktime_to_ns(ktime_get_real()));
-
return 0;
}
@@ -3839,7 +3828,7 @@ static void e1000_flush_rx_ring(struct e1000_adapter *adapter)
/* update thresholds: prefetch threshold to 31, host threshold to 1
* and make sure the granularity is "descriptors" and not "cache lines"
*/
- rxdctl |= (0x1F | (1 << 8) | E1000_RXDCTL_THRESH_UNIT_DESC);
+ rxdctl |= (0x1F | BIT(8) | E1000_RXDCTL_THRESH_UNIT_DESC);
ew32(RXDCTL(0), rxdctl);
/* momentarily enable the RX ring for the changes to take effect */
@@ -3885,6 +3874,53 @@ static void e1000_flush_desc_rings(struct e1000_adapter *adapter)
}
/**
+ * e1000e_systim_reset - reset the timesync registers after a hardware reset
+ * @adapter: board private structure
+ *
+ * When the MAC is reset, all hardware bits for timesync will be reset to the
+ * default values. This function will restore the settings that were last in place.
+ * Since the clock SYSTIME registers are reset, we will simply restore the
+ * cyclecounter to the kernel real clock time.
+ **/
+static void e1000e_systim_reset(struct e1000_adapter *adapter)
+{
+ struct ptp_clock_info *info = &adapter->ptp_clock_info;
+ struct e1000_hw *hw = &adapter->hw;
+ unsigned long flags;
+ u32 timinca;
+ s32 ret_val;
+
+ if (!(adapter->flags & FLAG_HAS_HW_TIMESTAMP))
+ return;
+
+ if (info->adjfreq) {
+ /* restore the previous ptp frequency delta */
+ ret_val = info->adjfreq(info, adapter->ptp_delta);
+ } else {
+ /* set the default base frequency if no adjustment possible */
+ ret_val = e1000e_get_base_timinca(adapter, &timinca);
+ if (!ret_val)
+ ew32(TIMINCA, timinca);
+ }
+
+ if (ret_val) {
+ dev_warn(&adapter->pdev->dev,
+ "Failed to restore TIMINCA clock rate delta: %d\n",
+ ret_val);
+ return;
+ }
+
+ /* reset the systim ns time counter */
+ spin_lock_irqsave(&adapter->systim_lock, flags);
+ timecounter_init(&adapter->tc, &adapter->cc,
+ ktime_to_ns(ktime_get_real()));
+ spin_unlock_irqrestore(&adapter->systim_lock, flags);
+
+ /* restore the previous hwtstamp configuration settings */
+ e1000e_config_hwtstamp(adapter, &adapter->hwtstamp_config);
+}
+
+/**
* e1000e_reset - bring the hardware into a known good state
*
* This function boots the hardware and enables some settings that
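One way to read the new restore path, using only what this patch shows: e1000e_phc_adjfreq() now records its delta in adapter->ptp_delta (see the ptp.c hunk further down), and e1000e_systim_reset() replays that delta after a MAC reset, falling back to the base TIMINCA value when no adjfreq callback is present. A rough sketch of the round trip:

  e1000e_phc_adjfreq(ptp, delta);   /* ...; adapter->ptp_delta = delta;            */
  /* ... MAC reset via e1000e_reset() ...                                          */
  e1000e_systim_reset(adapter);     /* info->adjfreq(info, adapter->ptp_delta),    */
                                    /* or ew32(TIMINCA, base value), then          */
                                    /* timecounter_init() + e1000e_config_hwtstamp() */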
@@ -4063,8 +4099,8 @@ void e1000e_reset(struct e1000_adapter *adapter)
e1000e_reset_adaptive(hw);
- /* initialize systim and reset the ns time counter */
- e1000e_config_hwtstamp(adapter, &adapter->hwtstamp_config);
+ /* restore systim and hwtstamp settings */
+ e1000e_systim_reset(adapter);
/* Set EEE advertisement as appropriate */
if (adapter->flags2 & FLAG2_HAS_EEE) {
@@ -4275,7 +4311,7 @@ static cycle_t e1000e_cyclecounter_read(const struct cyclecounter *cc)
struct e1000_adapter *adapter = container_of(cc, struct e1000_adapter,
cc);
struct e1000_hw *hw = &adapter->hw;
- u32 systimel_1, systimel_2, systimeh;
+ u32 systimel, systimeh;
cycle_t systim, systim_next;
/* SYSTIMH latching upon SYSTIML read does not work well.
* This means that if SYSTIML overflows after we read it but before
@@ -4283,24 +4319,25 @@ static cycle_t e1000e_cyclecounter_read(const struct cyclecounter *cc)
* will experience a huge non linear increment in the systime value
* to fix that we test for overflow and if true, we re-read systime.
*/
- systimel_1 = er32(SYSTIML);
+ systimel = er32(SYSTIML);
systimeh = er32(SYSTIMH);
- systimel_2 = er32(SYSTIML);
- /* Check for overflow. If there was no overflow, use the values */
- if (systimel_1 < systimel_2) {
- systim = (cycle_t)systimel_1;
- systim |= (cycle_t)systimeh << 32;
- } else {
- /* There was an overflow, read again SYSTIMH, and use
- * systimel_2
- */
- systimeh = er32(SYSTIMH);
- systim = (cycle_t)systimel_2;
- systim |= (cycle_t)systimeh << 32;
+ /* Is systimel so large that an overflow is possible? */
+ if (systimel >= (u32)0xffffffff - E1000_TIMINCA_INCVALUE_MASK) {
+ u32 systimel_2 = er32(SYSTIML);
+ if (systimel > systimel_2) {
+ /* There was an overflow, read again SYSTIMH, and use
+ * systimel_2
+ */
+ systimeh = er32(SYSTIMH);
+ systimel = systimel_2;
+ }
}
+ systim = (cycle_t)systimel;
+ systim |= (cycle_t)systimeh << 32;
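A worked example for the early-out above; the mask value is assumed for illustration (E1000_TIMINCA_INCVALUE_MASK is defined elsewhere in the driver):

  /* assume E1000_TIMINCA_INCVALUE_MASK == 0x00FFFFFF, so the threshold is
   * 0xFFFFFFFF - 0x00FFFFFF == 0xFF000000.
   * systimel == 0xFF000000 meets it, so SYSTIML is read a second time; if
   * that read returns e.g. 0x00000100, systimel > systimel_2 proves a wrap
   * happened between the two reads, SYSTIMH is refreshed and the post-wrap
   * low word is used.  Any first read below the threshold cannot wrap
   * before SYSTIMH is sampled, so the extra register reads are skipped.
   */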
if ((hw->mac.type == e1000_82574) || (hw->mac.type == e1000_82583)) {
- u64 incvalue, time_delta, rem, temp;
+ u64 time_delta, rem, temp;
+ u32 incvalue;
int i;
/* errata for 82574/82583 possible bad bits read from SYSTIMH/L
@@ -4495,7 +4532,7 @@ static int e1000_test_msi(struct e1000_adapter *adapter)
}
/**
- * e1000_open - Called when a network interface is made active
+ * e1000e_open - Called when a network interface is made active
* @netdev: network interface device structure
*
* Returns 0 on success, negative value on failure
@@ -4506,7 +4543,7 @@ static int e1000_test_msi(struct e1000_adapter *adapter)
* handler is registered with the OS, the watchdog timer is started,
* and the stack is notified that the interface is ready.
**/
-static int e1000_open(struct net_device *netdev)
+int e1000e_open(struct net_device *netdev)
{
struct e1000_adapter *adapter = netdev_priv(netdev);
struct e1000_hw *hw = &adapter->hw;
@@ -4604,7 +4641,7 @@ err_setup_tx:
}
/**
- * e1000_close - Disables a network interface
+ * e1000e_close - Disables a network interface
* @netdev: network interface device structure
*
* Returns 0, this is not allowed to fail
@@ -4614,7 +4651,7 @@ err_setup_tx:
* needs to be disabled. A global MAC reset is issued to stop the
* hardware, and all transmit and receive resources are freed.
**/
-static int e1000_close(struct net_device *netdev)
+int e1000e_close(struct net_device *netdev)
{
struct e1000_adapter *adapter = netdev_priv(netdev);
struct pci_dev *pdev = adapter->pdev;
@@ -6861,7 +6898,7 @@ static void e1000_eeprom_checks(struct e1000_adapter *adapter)
ret_val = e1000_read_nvm(hw, NVM_INIT_CONTROL2_REG, 1, &buf);
le16_to_cpus(&buf);
- if (!ret_val && (!(buf & (1 << 0)))) {
+ if (!ret_val && (!(buf & BIT(0)))) {
/* Deep Smart Power Down (DSPD) */
dev_warn(&adapter->pdev->dev,
"Warning: detected DSPD enabled in EEPROM\n");
@@ -6920,8 +6957,8 @@ static int e1000_set_features(struct net_device *netdev,
}
static const struct net_device_ops e1000e_netdev_ops = {
- .ndo_open = e1000_open,
- .ndo_stop = e1000_close,
+ .ndo_open = e1000e_open,
+ .ndo_stop = e1000e_close,
.ndo_start_xmit = e1000_xmit_frame,
.ndo_get_stats64 = e1000e_get_stats64,
.ndo_set_rx_mode = e1000e_set_rx_mode,
@@ -6965,7 +7002,7 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
int bars, i, err, pci_using_dac;
u16 eeprom_data = 0;
u16 eeprom_apme_mask = E1000_EEPROM_APME;
- s32 rval = 0;
+ s32 ret_val = 0;
if (ei->flags2 & FLAG2_DISABLE_ASPM_L0S)
aspm_disable_flag = PCIE_LINK_STATE_L0S;
@@ -7200,18 +7237,18 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
} else if (adapter->flags & FLAG_APME_IN_CTRL3) {
if (adapter->flags & FLAG_APME_CHECK_PORT_B &&
(adapter->hw.bus.func == 1))
- rval = e1000_read_nvm(&adapter->hw,
+ ret_val = e1000_read_nvm(&adapter->hw,
NVM_INIT_CONTROL3_PORT_B,
1, &eeprom_data);
else
- rval = e1000_read_nvm(&adapter->hw,
+ ret_val = e1000_read_nvm(&adapter->hw,
NVM_INIT_CONTROL3_PORT_A,
1, &eeprom_data);
}
/* fetch WoL from EEPROM */
- if (rval)
- e_dbg("NVM read error getting WoL initial values: %d\n", rval);
+ if (ret_val)
+ e_dbg("NVM read error getting WoL initial values: %d\n", ret_val);
else if (eeprom_data & eeprom_apme_mask)
adapter->eeprom_wol |= E1000_WUFC_MAG;
@@ -7231,13 +7268,16 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
device_wakeup_enable(&pdev->dev);
/* save off EEPROM version number */
- rval = e1000_read_nvm(&adapter->hw, 5, 1, &adapter->eeprom_vers);
+ ret_val = e1000_read_nvm(&adapter->hw, 5, 1, &adapter->eeprom_vers);
- if (rval) {
- e_dbg("NVM read error getting EEPROM version: %d\n", rval);
+ if (ret_val) {
+ e_dbg("NVM read error getting EEPROM version: %d\n", ret_val);
adapter->eeprom_vers = 0;
}
+ /* init PTP hardware clock */
+ e1000e_ptp_init(adapter);
+
/* reset the hardware with the new settings */
e1000e_reset(adapter);
@@ -7256,9 +7296,6 @@ static int e1000_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
/* carrier off reporting is important to ethtool even BEFORE open */
netif_carrier_off(netdev);
- /* init PTP hardware clock */
- e1000e_ptp_init(adapter);
-
e1000_print_device_info(adapter);
if (pci_dev_run_wake(pdev))
diff --git a/drivers/net/ethernet/intel/e1000e/nvm.c b/drivers/net/ethernet/intel/e1000e/nvm.c
index 49f205c023bf..2efd80dfd88e 100644
--- a/drivers/net/ethernet/intel/e1000e/nvm.c
+++ b/drivers/net/ethernet/intel/e1000e/nvm.c
@@ -67,7 +67,7 @@ static void e1000_shift_out_eec_bits(struct e1000_hw *hw, u16 data, u16 count)
u32 eecd = er32(EECD);
u32 mask;
- mask = 0x01 << (count - 1);
+ mask = BIT(count - 1);
if (nvm->type == e1000_nvm_eeprom_spi)
eecd |= E1000_EECD_DO;
diff --git a/drivers/net/ethernet/intel/e1000e/phy.c b/drivers/net/ethernet/intel/e1000e/phy.c
index de13aeacae97..d78d47b41a71 100644
--- a/drivers/net/ethernet/intel/e1000e/phy.c
+++ b/drivers/net/ethernet/intel/e1000e/phy.c
@@ -2894,11 +2894,11 @@ static s32 __e1000_write_phy_reg_hv(struct e1000_hw *hw, u32 offset, u16 data,
if ((hw->phy.type == e1000_phy_82578) &&
(hw->phy.revision >= 1) &&
(hw->phy.addr == 2) &&
- !(MAX_PHY_REG_ADDRESS & reg) && (data & (1 << 11))) {
+ !(MAX_PHY_REG_ADDRESS & reg) && (data & BIT(11))) {
u16 data2 = 0x7EFF;
ret_val = e1000_access_phy_debug_regs_hv(hw,
- (1 << 6) | 0x3,
+ BIT(6) | 0x3,
&data2, false);
if (ret_val)
goto out;
diff --git a/drivers/net/ethernet/intel/e1000e/phy.h b/drivers/net/ethernet/intel/e1000e/phy.h
index 55bfe473514d..3027f63ee793 100644
--- a/drivers/net/ethernet/intel/e1000e/phy.h
+++ b/drivers/net/ethernet/intel/e1000e/phy.h
@@ -104,9 +104,9 @@ s32 e1000_get_cable_length_82577(struct e1000_hw *hw);
#define BM_WUC_DATA_OPCODE 0x12
#define BM_WUC_ENABLE_PAGE BM_PORT_CTRL_PAGE
#define BM_WUC_ENABLE_REG 17
-#define BM_WUC_ENABLE_BIT (1 << 2)
-#define BM_WUC_HOST_WU_BIT (1 << 4)
-#define BM_WUC_ME_WU_BIT (1 << 5)
+#define BM_WUC_ENABLE_BIT BIT(2)
+#define BM_WUC_HOST_WU_BIT BIT(4)
+#define BM_WUC_ME_WU_BIT BIT(5)
#define PHY_UPPER_SHIFT 21
#define BM_PHY_REG(page, reg) \
@@ -124,8 +124,8 @@ s32 e1000_get_cable_length_82577(struct e1000_hw *hw);
#define I82578_ADDR_REG 29
#define I82577_ADDR_REG 16
#define I82577_CFG_REG 22
-#define I82577_CFG_ASSERT_CRS_ON_TX (1 << 15)
-#define I82577_CFG_ENABLE_DOWNSHIFT (3 << 10) /* auto downshift */
+#define I82577_CFG_ASSERT_CRS_ON_TX BIT(15)
+#define I82577_CFG_ENABLE_DOWNSHIFT (3u << 10) /* auto downshift */
#define I82577_CTRL_REG 23
/* 82577 specific PHY registers */
diff --git a/drivers/net/ethernet/intel/e1000e/ptp.c b/drivers/net/ethernet/intel/e1000e/ptp.c
index e2ff3ef75d5d..2e1b17ad52a3 100644
--- a/drivers/net/ethernet/intel/e1000e/ptp.c
+++ b/drivers/net/ethernet/intel/e1000e/ptp.c
@@ -79,6 +79,8 @@ static int e1000e_phc_adjfreq(struct ptp_clock_info *ptp, s32 delta)
ew32(TIMINCA, timinca);
+ adapter->ptp_delta = delta;
+
spin_unlock_irqrestore(&adapter->systim_lock, flags);
return 0;
diff --git a/drivers/net/ethernet/intel/fm10k/Makefile b/drivers/net/ethernet/intel/fm10k/Makefile
index b006ff66d028..cac645329cea 100644
--- a/drivers/net/ethernet/intel/fm10k/Makefile
+++ b/drivers/net/ethernet/intel/fm10k/Makefile
@@ -1,7 +1,7 @@
################################################################################
#
-# Intel Ethernet Switch Host Interface Driver
-# Copyright(c) 2013 - 2015 Intel Corporation.
+# Intel(R) Ethernet Switch Host Interface Driver
+# Copyright(c) 2013 - 2016 Intel Corporation.
#
# This program is free software; you can redistribute it and/or modify it
# under the terms and conditions of the GNU General Public License,
@@ -22,7 +22,7 @@
################################################################################
#
-# Makefile for the Intel(R) FM10000 Ethernet Switch Host Interface driver
+# Makefile for the Intel(R) Ethernet Switch Host Interface Driver
#
obj-$(CONFIG_FM10K) += fm10k.o
@@ -30,7 +30,6 @@ obj-$(CONFIG_FM10K) += fm10k.o
fm10k-y := fm10k_main.o \
fm10k_common.o \
fm10k_pci.o \
- fm10k_ptp.o \
fm10k_netdev.o \
fm10k_ethtool.o \
fm10k_pf.o \
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k.h b/drivers/net/ethernet/intel/fm10k/fm10k.h
index b34bb008b104..fcf106e545c5 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k.h
+++ b/drivers/net/ethernet/intel/fm10k/fm10k.h
@@ -1,5 +1,5 @@
-/* Intel Ethernet Switch Host Interface Driver
- * Copyright(c) 2013 - 2015 Intel Corporation.
+/* Intel(R) Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
@@ -27,9 +27,6 @@
#include <linux/rtnetlink.h>
#include <linux/if_vlan.h>
#include <linux/pci.h>
-#include <linux/net_tstamp.h>
-#include <linux/clocksource.h>
-#include <linux/ptp_clock_kernel.h>
#include "fm10k_pf.h"
#include "fm10k_vf.h"
@@ -262,12 +259,12 @@ struct fm10k_intfc {
unsigned long state;
u32 flags;
-#define FM10K_FLAG_RESET_REQUESTED (u32)(1 << 0)
-#define FM10K_FLAG_RSS_FIELD_IPV4_UDP (u32)(1 << 1)
-#define FM10K_FLAG_RSS_FIELD_IPV6_UDP (u32)(1 << 2)
-#define FM10K_FLAG_RX_TS_ENABLED (u32)(1 << 3)
-#define FM10K_FLAG_SWPRI_CONFIG (u32)(1 << 4)
-#define FM10K_FLAG_DEBUG_STATS (u32)(1 << 5)
+#define FM10K_FLAG_RESET_REQUESTED (u32)(BIT(0))
+#define FM10K_FLAG_RSS_FIELD_IPV4_UDP (u32)(BIT(1))
+#define FM10K_FLAG_RSS_FIELD_IPV6_UDP (u32)(BIT(2))
+#define FM10K_FLAG_RX_TS_ENABLED (u32)(BIT(3))
+#define FM10K_FLAG_SWPRI_CONFIG (u32)(BIT(4))
+#define FM10K_FLAG_DEBUG_STATS (u32)(BIT(5))
int xcast_mode;
/* Tx fast path data */
@@ -333,6 +330,7 @@ struct fm10k_intfc {
unsigned long last_reset;
unsigned long link_down_event;
bool host_ready;
+ bool lport_map_failed;
u32 reta[FM10K_RETA_SIZE];
u32 rssrk[FM10K_RSSRK_SIZE];
@@ -342,22 +340,8 @@ struct fm10k_intfc {
#ifdef CONFIG_DEBUG_FS
struct dentry *dbg_intfc;
-
#endif /* CONFIG_DEBUG_FS */
- struct ptp_clock_info ptp_caps;
- struct ptp_clock *ptp_clock;
-
- struct sk_buff_head ts_tx_skb_queue;
- u32 tx_hwtstamp_timeouts;
- struct hwtstamp_config ts_config;
- /* We are unable to actually adjust the clock beyond the frequency
- * value. Once the clock is started there is no resetting it. As
- * such we maintain a separate offset from the actual hardware clock
- * to allow for offset adjustment.
- */
- s64 ptp_adjust;
- rwlock_t systime_lock;
#ifdef CONFIG_DCB
u8 pfc_en;
#endif
@@ -510,6 +494,8 @@ int fm10k_close(struct net_device *netdev);
/* Ethtool */
void fm10k_set_ethtool_ops(struct net_device *dev);
+u32 fm10k_get_reta_size(struct net_device *netdev);
+void fm10k_write_reta(struct fm10k_intfc *interface, const u32 *indir);
/* IOV */
s32 fm10k_iov_event(struct fm10k_intfc *interface);
@@ -544,21 +530,6 @@ static inline void fm10k_dbg_init(void) {}
static inline void fm10k_dbg_exit(void) {}
#endif /* CONFIG_DEBUG_FS */
-/* Time Stamping */
-void fm10k_systime_to_hwtstamp(struct fm10k_intfc *interface,
- struct skb_shared_hwtstamps *hwtstamp,
- u64 systime);
-void fm10k_ts_tx_enqueue(struct fm10k_intfc *interface, struct sk_buff *skb);
-void fm10k_ts_tx_hwtstamp(struct fm10k_intfc *interface, __le16 dglort,
- u64 systime);
-void fm10k_ts_reset(struct fm10k_intfc *interface);
-void fm10k_ts_init(struct fm10k_intfc *interface);
-void fm10k_ts_tx_subtask(struct fm10k_intfc *interface);
-void fm10k_ptp_register(struct fm10k_intfc *interface);
-void fm10k_ptp_unregister(struct fm10k_intfc *interface);
-int fm10k_get_ts_config(struct net_device *netdev, struct ifreq *ifr);
-int fm10k_set_ts_config(struct net_device *netdev, struct ifreq *ifr);
-
/* DCB */
#ifdef CONFIG_DCB
void fm10k_dcbnl_set_ops(struct net_device *dev);
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_common.c b/drivers/net/ethernet/intel/fm10k/fm10k_common.c
index 6cfae6ac04ea..5bbf19cfe29b 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_common.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_common.c
@@ -1,5 +1,5 @@
-/* Intel Ethernet Switch Host Interface Driver
- * Copyright(c) 2013 - 2014 Intel Corporation.
+/* Intel(R) Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_common.h b/drivers/net/ethernet/intel/fm10k/fm10k_common.h
index 45e4e5b1f20a..50f71e997448 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_common.h
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_common.h
@@ -1,5 +1,5 @@
-/* Intel Ethernet Switch Host Interface Driver
- * Copyright(c) 2013 - 2014 Intel Corporation.
+/* Intel(R) Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_dcbnl.c b/drivers/net/ethernet/intel/fm10k/fm10k_dcbnl.c
index 2be4361839db..db4bd8bf9722 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_dcbnl.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_dcbnl.c
@@ -1,5 +1,5 @@
-/* Intel Ethernet Switch Host Interface Driver
- * Copyright(c) 2013 - 2015 Intel Corporation.
+/* Intel(R) Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_debugfs.c b/drivers/net/ethernet/intel/fm10k/fm10k_debugfs.c
index 5d6137faf7d1..5116fd043630 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_debugfs.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_debugfs.c
@@ -1,5 +1,5 @@
-/* Intel Ethernet Switch Host Interface Driver
- * Copyright(c) 2013 - 2015 Intel Corporation.
+/* Intel(R) Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_ethtool.c b/drivers/net/ethernet/intel/fm10k/fm10k_ethtool.c
index 2f6a05b57228..9c0d87503977 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_ethtool.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_ethtool.c
@@ -1,5 +1,5 @@
-/* Intel Ethernet Switch Host Interface Driver
- * Copyright(c) 2013 - 2015 Intel Corporation.
+/* Intel(R) Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
@@ -77,19 +77,6 @@ static const struct fm10k_stats fm10k_gstrings_global_stats[] = {
FM10K_STAT("mac_rules_avail", hw.swapi.mac.avail),
FM10K_STAT("tx_hang_count", tx_timeout_count),
-
- FM10K_STAT("tx_hwtstamp_timeouts", tx_hwtstamp_timeouts),
-};
-
-static const struct fm10k_stats fm10k_gstrings_debug_stats[] = {
- FM10K_STAT("hw_sm_mbx_full", hw_sm_mbx_full),
- FM10K_STAT("hw_csum_tx_good", hw_csum_tx_good),
- FM10K_STAT("hw_csum_rx_good", hw_csum_rx_good),
- FM10K_STAT("rx_switch_errors", rx_switch_errors),
- FM10K_STAT("rx_drops", rx_drops),
- FM10K_STAT("rx_pp_errors", rx_pp_errors),
- FM10K_STAT("rx_link_errors", rx_link_errors),
- FM10K_STAT("rx_length_errors", rx_length_errors),
};
static const struct fm10k_stats fm10k_gstrings_pf_stats[] = {
@@ -121,13 +108,21 @@ static const struct fm10k_stats fm10k_gstrings_mbx_stats[] = {
FM10K_MBX_STAT("mbx_rx_mbmem_pushed", rx_mbmem_pushed),
};
+#define FM10K_QUEUE_STAT(_name, _stat) { \
+ .stat_string = _name, \
+ .sizeof_stat = FIELD_SIZEOF(struct fm10k_ring, _stat), \
+ .stat_offset = offsetof(struct fm10k_ring, _stat) \
+}
+
+static const struct fm10k_stats fm10k_gstrings_queue_stats[] = {
+ FM10K_QUEUE_STAT("packets", stats.packets),
+ FM10K_QUEUE_STAT("bytes", stats.bytes),
+};
+
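For reference, FIELD_SIZEOF(t, m) is the kernel helper that expands to sizeof(((t *)0)->m), so each table entry records the name, width and offset of one fm10k_ring member, e.g.:

  FM10K_QUEUE_STAT("packets", stats.packets)
  /* roughly:
   * { .stat_string = "packets",
   *   .sizeof_stat = sizeof(((struct fm10k_ring *)0)->stats.packets),
   *   .stat_offset = offsetof(struct fm10k_ring, stats.packets) }
   */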
#define FM10K_GLOBAL_STATS_LEN ARRAY_SIZE(fm10k_gstrings_global_stats)
-#define FM10K_DEBUG_STATS_LEN ARRAY_SIZE(fm10k_gstrings_debug_stats)
#define FM10K_PF_STATS_LEN ARRAY_SIZE(fm10k_gstrings_pf_stats)
#define FM10K_MBX_STATS_LEN ARRAY_SIZE(fm10k_gstrings_mbx_stats)
-
-#define FM10K_QUEUE_STATS_LEN(_n) \
- ((_n) * 2 * (sizeof(struct fm10k_queue_stats) / sizeof(u64)))
+#define FM10K_QUEUE_STATS_LEN ARRAY_SIZE(fm10k_gstrings_queue_stats)
#define FM10K_STATIC_STATS_LEN (FM10K_GLOBAL_STATS_LEN + \
FM10K_NETDEV_STATS_LEN + \
@@ -145,77 +140,56 @@ enum fm10k_self_test_types {
};
enum {
- FM10K_PRV_FLAG_DEBUG_STATS,
FM10K_PRV_FLAG_LEN,
};
static const char fm10k_prv_flags[FM10K_PRV_FLAG_LEN][ETH_GSTRING_LEN] = {
- "debug-statistics",
};
+static void fm10k_add_stat_strings(char **p, const char *prefix,
+ const struct fm10k_stats stats[],
+ const unsigned int size)
+{
+ unsigned int i;
+
+ for (i = 0; i < size; i++) {
+ snprintf(*p, ETH_GSTRING_LEN, "%s%s",
+ prefix, stats[i].stat_string);
+ *p += ETH_GSTRING_LEN;
+ }
+}
+
static void fm10k_get_stat_strings(struct net_device *dev, u8 *data)
{
struct fm10k_intfc *interface = netdev_priv(dev);
- struct fm10k_iov_data *iov_data = interface->iov_data;
char *p = (char *)data;
unsigned int i;
- unsigned int j;
- for (i = 0; i < FM10K_NETDEV_STATS_LEN; i++) {
- memcpy(p, fm10k_gstrings_net_stats[i].stat_string,
- ETH_GSTRING_LEN);
- p += ETH_GSTRING_LEN;
- }
+ fm10k_add_stat_strings(&p, "", fm10k_gstrings_net_stats,
+ FM10K_NETDEV_STATS_LEN);
- for (i = 0; i < FM10K_GLOBAL_STATS_LEN; i++) {
- memcpy(p, fm10k_gstrings_global_stats[i].stat_string,
- ETH_GSTRING_LEN);
- p += ETH_GSTRING_LEN;
- }
+ fm10k_add_stat_strings(&p, "", fm10k_gstrings_global_stats,
+ FM10K_GLOBAL_STATS_LEN);
- if (interface->flags & FM10K_FLAG_DEBUG_STATS) {
- for (i = 0; i < FM10K_DEBUG_STATS_LEN; i++) {
- memcpy(p, fm10k_gstrings_debug_stats[i].stat_string,
- ETH_GSTRING_LEN);
- p += ETH_GSTRING_LEN;
- }
- }
+ fm10k_add_stat_strings(&p, "", fm10k_gstrings_mbx_stats,
+ FM10K_MBX_STATS_LEN);
- for (i = 0; i < FM10K_MBX_STATS_LEN; i++) {
- memcpy(p, fm10k_gstrings_mbx_stats[i].stat_string,
- ETH_GSTRING_LEN);
- p += ETH_GSTRING_LEN;
- }
+ if (interface->hw.mac.type != fm10k_mac_vf)
+ fm10k_add_stat_strings(&p, "", fm10k_gstrings_pf_stats,
+ FM10K_PF_STATS_LEN);
- if (interface->hw.mac.type != fm10k_mac_vf) {
- for (i = 0; i < FM10K_PF_STATS_LEN; i++) {
- memcpy(p, fm10k_gstrings_pf_stats[i].stat_string,
- ETH_GSTRING_LEN);
- p += ETH_GSTRING_LEN;
- }
- }
+ for (i = 0; i < interface->hw.mac.max_queues; i++) {
+ char prefix[ETH_GSTRING_LEN];
- if ((interface->flags & FM10K_FLAG_DEBUG_STATS) && iov_data) {
- for (i = 0; i < iov_data->num_vfs; i++) {
- for (j = 0; j < FM10K_MBX_STATS_LEN; j++) {
- snprintf(p,
- ETH_GSTRING_LEN,
- "vf_%u_%s", i,
- fm10k_gstrings_mbx_stats[j].stat_string);
- p += ETH_GSTRING_LEN;
- }
- }
- }
+ snprintf(prefix, ETH_GSTRING_LEN, "tx_queue_%u_", i);
+ fm10k_add_stat_strings(&p, prefix,
+ fm10k_gstrings_queue_stats,
+ FM10K_QUEUE_STATS_LEN);
- for (i = 0; i < interface->hw.mac.max_queues; i++) {
- snprintf(p, ETH_GSTRING_LEN, "tx_queue_%u_packets", i);
- p += ETH_GSTRING_LEN;
- snprintf(p, ETH_GSTRING_LEN, "tx_queue_%u_bytes", i);
- p += ETH_GSTRING_LEN;
- snprintf(p, ETH_GSTRING_LEN, "rx_queue_%u_packets", i);
- p += ETH_GSTRING_LEN;
- snprintf(p, ETH_GSTRING_LEN, "rx_queue_%u_bytes", i);
- p += ETH_GSTRING_LEN;
+ snprintf(prefix, ETH_GSTRING_LEN, "rx_queue_%u_", i);
+ fm10k_add_stat_strings(&p, prefix,
+ fm10k_gstrings_queue_stats,
+ FM10K_QUEUE_STATS_LEN);
}
}
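The helper-based version keeps the string layout identical to the open-coded loops it replaces: per queue index, the queue stats table above still yields

  "tx_queue_0_packets", "tx_queue_0_bytes",
  "rx_queue_0_packets", "rx_queue_0_bytes", ...

with one ETH_GSTRING_LEN slot per string, now written by fm10k_add_stat_strings().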
@@ -242,7 +216,6 @@ static void fm10k_get_strings(struct net_device *dev,
static int fm10k_get_sset_count(struct net_device *dev, int sset)
{
struct fm10k_intfc *interface = netdev_priv(dev);
- struct fm10k_iov_data *iov_data = interface->iov_data;
struct fm10k_hw *hw = &interface->hw;
int stats_len = FM10K_STATIC_STATS_LEN;
@@ -250,19 +223,11 @@ static int fm10k_get_sset_count(struct net_device *dev, int sset)
case ETH_SS_TEST:
return FM10K_TEST_LEN;
case ETH_SS_STATS:
- stats_len += FM10K_QUEUE_STATS_LEN(hw->mac.max_queues);
+ stats_len += hw->mac.max_queues * 2 * FM10K_QUEUE_STATS_LEN;
if (hw->mac.type != fm10k_mac_vf)
stats_len += FM10K_PF_STATS_LEN;
- if (interface->flags & FM10K_FLAG_DEBUG_STATS) {
- stats_len += FM10K_DEBUG_STATS_LEN;
-
- if (iov_data)
- stats_len += FM10K_MBX_STATS_LEN *
- iov_data->num_vfs;
- }
-
return stats_len;
case ETH_SS_PRIV_FLAGS:
return FM10K_PRV_FLAG_LEN;
@@ -271,93 +236,80 @@ static int fm10k_get_sset_count(struct net_device *dev, int sset)
}
}
-static void fm10k_get_ethtool_stats(struct net_device *netdev,
- struct ethtool_stats __always_unused *stats,
- u64 *data)
+static void fm10k_add_ethtool_stats(u64 **data, void *pointer,
+ const struct fm10k_stats stats[],
+ const unsigned int size)
{
- const int stat_count = sizeof(struct fm10k_queue_stats) / sizeof(u64);
- struct fm10k_intfc *interface = netdev_priv(netdev);
- struct fm10k_iov_data *iov_data = interface->iov_data;
- struct net_device_stats *net_stats = &netdev->stats;
+ unsigned int i;
char *p;
- int i, j;
- fm10k_update_stats(interface);
-
- for (i = 0; i < FM10K_NETDEV_STATS_LEN; i++) {
- p = (char *)net_stats + fm10k_gstrings_net_stats[i].stat_offset;
- *(data++) = (fm10k_gstrings_net_stats[i].sizeof_stat ==
- sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ if (!pointer) {
+ /* memory is not zero-allocated, so we have to clear it */
+ for (i = 0; i < size; i++)
+ *((*data)++) = 0;
+ return;
}
- for (i = 0; i < FM10K_GLOBAL_STATS_LEN; i++) {
- p = (char *)interface +
- fm10k_gstrings_global_stats[i].stat_offset;
- *(data++) = (fm10k_gstrings_global_stats[i].sizeof_stat ==
- sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
- }
+ for (i = 0; i < size; i++) {
+ p = (char *)pointer + stats[i].stat_offset;
- if (interface->flags & FM10K_FLAG_DEBUG_STATS) {
- for (i = 0; i < FM10K_DEBUG_STATS_LEN; i++) {
- p = (char *)interface +
- fm10k_gstrings_debug_stats[i].stat_offset;
- *(data++) = (fm10k_gstrings_debug_stats[i].sizeof_stat ==
- sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
+ switch (stats[i].sizeof_stat) {
+ case sizeof(u64):
+ *((*data)++) = *(u64 *)p;
+ break;
+ case sizeof(u32):
+ *((*data)++) = *(u32 *)p;
+ break;
+ case sizeof(u16):
+ *((*data)++) = *(u16 *)p;
+ break;
+ case sizeof(u8):
+ *((*data)++) = *(u8 *)p;
+ break;
+ default:
+ *((*data)++) = 0;
}
}
+}
- for (i = 0; i < FM10K_MBX_STATS_LEN; i++) {
- p = (char *)&interface->hw.mbx +
- fm10k_gstrings_mbx_stats[i].stat_offset;
- *(data++) = (fm10k_gstrings_mbx_stats[i].sizeof_stat ==
- sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
- }
+static void fm10k_get_ethtool_stats(struct net_device *netdev,
+ struct ethtool_stats __always_unused *stats,
+ u64 *data)
+{
+ struct fm10k_intfc *interface = netdev_priv(netdev);
+ struct net_device_stats *net_stats = &netdev->stats;
+ int i;
- if (interface->hw.mac.type != fm10k_mac_vf) {
- for (i = 0; i < FM10K_PF_STATS_LEN; i++) {
- p = (char *)interface +
- fm10k_gstrings_pf_stats[i].stat_offset;
- *(data++) = (fm10k_gstrings_pf_stats[i].sizeof_stat ==
- sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
- }
- }
+ fm10k_update_stats(interface);
- if ((interface->flags & FM10K_FLAG_DEBUG_STATS) && iov_data) {
- for (i = 0; i < iov_data->num_vfs; i++) {
- struct fm10k_vf_info *vf_info;
+ fm10k_add_ethtool_stats(&data, net_stats, fm10k_gstrings_net_stats,
+ FM10K_NETDEV_STATS_LEN);
- vf_info = &iov_data->vf_info[i];
+ fm10k_add_ethtool_stats(&data, interface, fm10k_gstrings_global_stats,
+ FM10K_GLOBAL_STATS_LEN);
- /* skip stats if we don't have a vf info */
- if (!vf_info) {
- data += FM10K_MBX_STATS_LEN;
- continue;
- }
+ fm10k_add_ethtool_stats(&data, &interface->hw.mbx,
+ fm10k_gstrings_mbx_stats,
+ FM10K_MBX_STATS_LEN);
- for (j = 0; j < FM10K_MBX_STATS_LEN; j++) {
- p = (char *)&vf_info->mbx +
- fm10k_gstrings_mbx_stats[j].stat_offset;
- *(data++) = (fm10k_gstrings_mbx_stats[j].sizeof_stat ==
- sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
- }
- }
+ if (interface->hw.mac.type != fm10k_mac_vf) {
+ fm10k_add_ethtool_stats(&data, interface,
+ fm10k_gstrings_pf_stats,
+ FM10K_PF_STATS_LEN);
}
for (i = 0; i < interface->hw.mac.max_queues; i++) {
struct fm10k_ring *ring;
- u64 *queue_stat;
ring = interface->tx_ring[i];
- if (ring)
- queue_stat = (u64 *)&ring->stats;
- for (j = 0; j < stat_count; j++)
- *(data++) = ring ? queue_stat[j] : 0;
+ fm10k_add_ethtool_stats(&data, ring,
+ fm10k_gstrings_queue_stats,
+ FM10K_QUEUE_STATS_LEN);
ring = interface->rx_ring[i];
- if (ring)
- queue_stat = (u64 *)&ring->stats;
- for (j = 0; j < stat_count; j++)
- *(data++) = ring ? queue_stat[j] : 0;
+ fm10k_add_ethtool_stats(&data, ring,
+ fm10k_gstrings_queue_stats,
+ FM10K_QUEUE_STATS_LEN);
}
}
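Note the interplay with the NULL check in fm10k_add_ethtool_stats() above: a missing ring still emits FM10K_QUEUE_STATS_LEN zeros, so the data array stays aligned with the strings from fm10k_get_stat_strings(). A minimal sketch of that invariant for one queue index:

  u64 buf[2 * FM10K_QUEUE_STATS_LEN];
  u64 *p = buf;
  fm10k_add_ethtool_stats(&p, tx_ring, fm10k_gstrings_queue_stats,
                          FM10K_QUEUE_STATS_LEN);   /* values, or zeros if NULL */
  fm10k_add_ethtool_stats(&p, rx_ring, fm10k_gstrings_queue_stats,
                          FM10K_QUEUE_STATS_LEN);   /* p always advances the same amount */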
@@ -425,7 +377,7 @@ static void fm10k_get_regs(struct net_device *netdev,
u32 *buff = p;
u16 i;
- regs->version = (1 << 24) | (hw->revision_id << 16) | hw->device_id;
+ regs->version = BIT(24) | (hw->revision_id << 16) | hw->device_id;
switch (hw->mac.type) {
case fm10k_mac_pf:
@@ -935,15 +887,15 @@ static int fm10k_mbx_test(struct fm10k_intfc *interface, u64 *data)
struct fm10k_mbx_info *mbx = &hw->mbx;
u32 attr_flag, test_msg[6];
unsigned long timeout;
- int err;
+ int err = -EINVAL;
/* For now this is a VF only feature */
if (hw->mac.type != fm10k_mac_vf)
return 0;
/* loop through both nested and unnested attribute types */
- for (attr_flag = (1 << FM10K_TEST_MSG_UNSET);
- attr_flag < (1 << (2 * FM10K_TEST_MSG_NESTED));
+ for (attr_flag = BIT(FM10K_TEST_MSG_UNSET);
+ attr_flag < BIT(2 * FM10K_TEST_MSG_NESTED);
attr_flag += attr_flag) {
/* generate message to be tested */
fm10k_tlv_msg_test_create(test_msg, attr_flag);
@@ -1001,35 +953,56 @@ static void fm10k_self_test(struct net_device *dev,
static u32 fm10k_get_priv_flags(struct net_device *netdev)
{
- struct fm10k_intfc *interface = netdev_priv(netdev);
- u32 priv_flags = 0;
-
- if (interface->flags & FM10K_FLAG_DEBUG_STATS)
- priv_flags |= 1 << FM10K_PRV_FLAG_DEBUG_STATS;
-
- return priv_flags;
+ return 0;
}
static int fm10k_set_priv_flags(struct net_device *netdev, u32 priv_flags)
{
- struct fm10k_intfc *interface = netdev_priv(netdev);
-
- if (priv_flags >= (1 << FM10K_PRV_FLAG_LEN))
+ if (priv_flags >= BIT(FM10K_PRV_FLAG_LEN))
return -EINVAL;
- if (priv_flags & (1 << FM10K_PRV_FLAG_DEBUG_STATS))
- interface->flags |= FM10K_FLAG_DEBUG_STATS;
- else
- interface->flags &= ~FM10K_FLAG_DEBUG_STATS;
-
return 0;
}
-static u32 fm10k_get_reta_size(struct net_device __always_unused *netdev)
+u32 fm10k_get_reta_size(struct net_device __always_unused *netdev)
{
return FM10K_RETA_SIZE * FM10K_RETA_ENTRIES_PER_REG;
}
+void fm10k_write_reta(struct fm10k_intfc *interface, const u32 *indir)
+{
+ u16 rss_i = interface->ring_feature[RING_F_RSS].indices;
+ struct fm10k_hw *hw = &interface->hw;
+ u32 table[4];
+ int i, j;
+
+ /* record entries to reta table */
+ for (i = 0; i < FM10K_RETA_SIZE; i++) {
+ u32 reta, n;
+
+ /* generate a new table if we weren't given one */
+ for (j = 0; j < 4; j++) {
+ if (indir)
+ n = indir[i + j];
+ else
+ n = ethtool_rxfh_indir_default(i + j, rss_i);
+
+ table[j] = n;
+ }
+
+ reta = table[0] |
+ (table[1] << 8) |
+ (table[2] << 16) |
+ (table[3] << 24);
+
+ if (interface->reta[i] == reta)
+ continue;
+
+ interface->reta[i] = reta;
+ fm10k_write_reg(hw, FM10K_RETA(0, i), reta);
+ }
+}
+
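For reference, four 8-bit queue indices are packed into each 32-bit FM10K_RETA register; with indir == NULL, ethtool_rxfh_indir_default(index, n) from <linux/ethtool.h> returns index % n. A packing example:

  /* table[] = {0, 1, 2, 3} */
  reta = 0x00 | (0x01 << 8) | (0x02 << 16) | (0x03 << 24);   /* == 0x03020100 */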
static int fm10k_get_reta(struct net_device *netdev, u32 *indir)
{
struct fm10k_intfc *interface = netdev_priv(netdev);
@@ -1053,7 +1026,6 @@ static int fm10k_get_reta(struct net_device *netdev, u32 *indir)
static int fm10k_set_reta(struct net_device *netdev, const u32 *indir)
{
struct fm10k_intfc *interface = netdev_priv(netdev);
- struct fm10k_hw *hw = &interface->hw;
int i;
u16 rss_i;
@@ -1068,19 +1040,7 @@ static int fm10k_set_reta(struct net_device *netdev, const u32 *indir)
return -EINVAL;
}
- /* record entries to reta table */
- for (i = 0; i < FM10K_RETA_SIZE; i++, indir += 4) {
- u32 reta = indir[0] |
- (indir[1] << 8) |
- (indir[2] << 16) |
- (indir[3] << 24);
-
- if (interface->reta[i] == reta)
- continue;
-
- interface->reta[i] = reta;
- fm10k_write_reg(hw, FM10K_RETA(0, i), reta);
- }
+ fm10k_write_reta(interface, indir);
return 0;
}
@@ -1145,7 +1105,7 @@ static unsigned int fm10k_max_channels(struct net_device *dev)
/* For QoS report channels per traffic class */
if (tcs > 1)
- max_combined = 1 << (fls(max_combined / tcs) - 1);
+ max_combined = BIT((fls(max_combined / tcs) - 1));
return max_combined;
}
@@ -1192,33 +1152,6 @@ static int fm10k_set_channels(struct net_device *dev,
return fm10k_setup_tc(dev, netdev_get_num_tc(dev));
}
-static int fm10k_get_ts_info(struct net_device *dev,
- struct ethtool_ts_info *info)
-{
- struct fm10k_intfc *interface = netdev_priv(dev);
-
- info->so_timestamping =
- SOF_TIMESTAMPING_TX_SOFTWARE |
- SOF_TIMESTAMPING_RX_SOFTWARE |
- SOF_TIMESTAMPING_SOFTWARE |
- SOF_TIMESTAMPING_TX_HARDWARE |
- SOF_TIMESTAMPING_RX_HARDWARE |
- SOF_TIMESTAMPING_RAW_HARDWARE;
-
- if (interface->ptp_clock)
- info->phc_index = ptp_clock_index(interface->ptp_clock);
- else
- info->phc_index = -1;
-
- info->tx_types = (1 << HWTSTAMP_TX_OFF) |
- (1 << HWTSTAMP_TX_ON);
-
- info->rx_filters = (1 << HWTSTAMP_FILTER_NONE) |
- (1 << HWTSTAMP_FILTER_ALL);
-
- return 0;
-}
-
static const struct ethtool_ops fm10k_ethtool_ops = {
.get_strings = fm10k_get_strings,
.get_sset_count = fm10k_get_sset_count,
@@ -1246,7 +1179,6 @@ static const struct ethtool_ops fm10k_ethtool_ops = {
.set_rxfh = fm10k_set_rssh,
.get_channels = fm10k_get_channels,
.set_channels = fm10k_set_channels,
- .get_ts_info = fm10k_get_ts_info,
};
void fm10k_set_ethtool_ops(struct net_device *dev)
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_iov.c b/drivers/net/ethernet/intel/fm10k/fm10k_iov.c
index acfb8b1f88a7..47f0743ec03b 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_iov.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_iov.c
@@ -1,5 +1,5 @@
-/* Intel Ethernet Switch Host Interface Driver
- * Copyright(c) 2013 - 2015 Intel Corporation.
+/* Intel(R) Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
@@ -50,7 +50,7 @@ s32 fm10k_iov_event(struct fm10k_intfc *interface)
s64 vflre;
int i;
- /* if there is no iov_data then there is no mailboxes to process */
+ /* if there is no iov_data then there is no mailbox to process */
if (!ACCESS_ONCE(interface->iov_data))
return 0;
@@ -98,7 +98,7 @@ s32 fm10k_iov_mbx(struct fm10k_intfc *interface)
struct fm10k_iov_data *iov_data;
int i;
- /* if there is no iov_data then there is no mailboxes to process */
+ /* if there is no iov_data then there is no mailbox to process */
if (!ACCESS_ONCE(interface->iov_data))
return 0;
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_main.c b/drivers/net/ethernet/intel/fm10k/fm10k_main.c
index 4de17db3808c..0e166e9c90c8 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_main.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_main.c
@@ -1,5 +1,5 @@
-/* Intel Ethernet Switch Host Interface Driver
- * Copyright(c) 2013 - 2014 Intel Corporation.
+/* Intel(R) Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
@@ -29,15 +29,15 @@
#include "fm10k.h"
#define DRV_VERSION "0.19.3-k"
+#define DRV_SUMMARY "Intel(R) Ethernet Switch Host Interface Driver"
const char fm10k_driver_version[] = DRV_VERSION;
char fm10k_driver_name[] = "fm10k";
-static const char fm10k_driver_string[] =
- "Intel(R) Ethernet Switch Host Interface Driver";
+static const char fm10k_driver_string[] = DRV_SUMMARY;
static const char fm10k_copyright[] =
- "Copyright (c) 2013 Intel Corporation.";
+ "Copyright (c) 2013 - 2016 Intel Corporation.";
MODULE_AUTHOR("Intel Corporation, <linux.nics@intel.com>");
-MODULE_DESCRIPTION("Intel(R) Ethernet Switch Host Interface Driver");
+MODULE_DESCRIPTION(DRV_SUMMARY);
MODULE_LICENSE("GPL");
MODULE_VERSION(DRV_VERSION);
@@ -401,10 +401,10 @@ static inline void fm10k_rx_checksum(struct fm10k_ring *ring,
}
#define FM10K_RSS_L4_TYPES_MASK \
- ((1ul << FM10K_RSSTYPE_IPV4_TCP) | \
- (1ul << FM10K_RSSTYPE_IPV4_UDP) | \
- (1ul << FM10K_RSSTYPE_IPV6_TCP) | \
- (1ul << FM10K_RSSTYPE_IPV6_UDP))
+ (BIT(FM10K_RSSTYPE_IPV4_TCP) | \
+ BIT(FM10K_RSSTYPE_IPV4_UDP) | \
+ BIT(FM10K_RSSTYPE_IPV6_TCP) | \
+ BIT(FM10K_RSSTYPE_IPV6_UDP))
static inline void fm10k_rx_hash(struct fm10k_ring *ring,
union fm10k_rx_desc *rx_desc,
@@ -420,23 +420,10 @@ static inline void fm10k_rx_hash(struct fm10k_ring *ring,
return;
skb_set_hash(skb, le32_to_cpu(rx_desc->d.rss),
- (FM10K_RSS_L4_TYPES_MASK & (1ul << rss_type)) ?
+ (BIT(rss_type) & FM10K_RSS_L4_TYPES_MASK) ?
PKT_HASH_TYPE_L4 : PKT_HASH_TYPE_L3);
}
-static void fm10k_rx_hwtstamp(struct fm10k_ring *rx_ring,
- union fm10k_rx_desc *rx_desc,
- struct sk_buff *skb)
-{
- struct fm10k_intfc *interface = rx_ring->q_vector->interface;
-
- FM10K_CB(skb)->tstamp = rx_desc->q.timestamp;
-
- if (unlikely(interface->flags & FM10K_FLAG_RX_TS_ENABLED))
- fm10k_systime_to_hwtstamp(interface, skb_hwtstamps(skb),
- le64_to_cpu(rx_desc->q.timestamp));
-}
-
static void fm10k_type_trans(struct fm10k_ring *rx_ring,
union fm10k_rx_desc __maybe_unused *rx_desc,
struct sk_buff *skb)
@@ -486,8 +473,6 @@ static unsigned int fm10k_process_skb_fields(struct fm10k_ring *rx_ring,
fm10k_rx_checksum(rx_ring, rx_desc, skb);
- fm10k_rx_hwtstamp(rx_ring, rx_desc, skb);
-
FM10K_CB(skb)->fi.w.vlan = rx_desc->w.vlan;
skb_record_rx_queue(skb, rx_ring->queue_index);
@@ -835,6 +820,8 @@ static void fm10k_tx_csum(struct fm10k_ring *tx_ring,
struct ipv6hdr *ipv6;
u8 *raw;
} network_hdr;
+ u8 *transport_hdr;
+ __be16 frag_off;
__be16 protocol;
u8 l4_hdr = 0;
@@ -852,9 +839,11 @@ static void fm10k_tx_csum(struct fm10k_ring *tx_ring,
goto no_csum;
}
network_hdr.raw = skb_inner_network_header(skb);
+ transport_hdr = skb_inner_transport_header(skb);
} else {
protocol = vlan_get_protocol(skb);
network_hdr.raw = skb_network_header(skb);
+ transport_hdr = skb_transport_header(skb);
}
switch (protocol) {
@@ -863,15 +852,17 @@ static void fm10k_tx_csum(struct fm10k_ring *tx_ring,
break;
case htons(ETH_P_IPV6):
l4_hdr = network_hdr.ipv6->nexthdr;
+ if (likely((transport_hdr - network_hdr.raw) ==
+ sizeof(struct ipv6hdr)))
+ break;
+ ipv6_skip_exthdr(skb, network_hdr.raw - skb->data +
+ sizeof(struct ipv6hdr),
+ &l4_hdr, &frag_off);
+ if (unlikely(frag_off))
+ l4_hdr = NEXTHDR_FRAGMENT;
break;
default:
- if (unlikely(net_ratelimit())) {
- dev_warn(tx_ring->dev,
- "partial checksum but ip version=%x!\n",
- protocol);
- }
- tx_ring->tx_stats.csum_err++;
- goto no_csum;
+ break;
}
switch (l4_hdr) {
@@ -884,9 +875,10 @@ static void fm10k_tx_csum(struct fm10k_ring *tx_ring,
default:
if (unlikely(net_ratelimit())) {
dev_warn(tx_ring->dev,
- "partial checksum but l4 proto=%x!\n",
- l4_hdr);
+ "partial checksum, version=%d l4 proto=%x\n",
+ protocol, l4_hdr);
}
+ skb_checksum_help(skb);
tx_ring->tx_stats.csum_err++;
goto no_csum;
}
@@ -912,11 +904,6 @@ static u8 fm10k_tx_desc_flags(struct sk_buff *skb, u32 tx_flags)
/* set type for advanced descriptor with frame checksum insertion */
u32 desc_flags = 0;
- /* set timestamping bits */
- if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
- likely(skb_shinfo(skb)->tx_flags & SKBTX_IN_PROGRESS))
- desc_flags |= FM10K_TXD_FLAG_TIME;
-
/* set checksum offload bits */
desc_flags |= FM10K_SET_FLAG(tx_flags, FM10K_TX_FLAGS_CSUM,
FM10K_TXD_FLAG_CSUM);
@@ -1198,9 +1185,10 @@ void fm10k_tx_timeout_reset(struct fm10k_intfc *interface)
* fm10k_clean_tx_irq - Reclaim resources after transmit completes
* @q_vector: structure containing interrupt and ring information
* @tx_ring: tx ring to clean
+ * @napi_budget: Used to determine if we are in netpoll
**/
static bool fm10k_clean_tx_irq(struct fm10k_q_vector *q_vector,
- struct fm10k_ring *tx_ring)
+ struct fm10k_ring *tx_ring, int napi_budget)
{
struct fm10k_intfc *interface = q_vector->interface;
struct fm10k_tx_buffer *tx_buffer;
@@ -1238,7 +1226,7 @@ static bool fm10k_clean_tx_irq(struct fm10k_q_vector *q_vector,
total_packets += tx_buffer->gso_segs;
/* free the skb */
- dev_consume_skb_any(tx_buffer->skb);
+ napi_consume_skb(tx_buffer->skb, napi_budget);
/* unmap skb header data */
dma_unmap_single(tx_ring->dev,
@@ -1409,7 +1397,7 @@ static void fm10k_update_itr(struct fm10k_ring_container *ring_container)
* accounts for changes in the ITR due to PCIe link speed.
*/
itr_round = ACCESS_ONCE(ring_container->itr_scale) + 8;
- avg_wire_size += (1 << itr_round) - 1;
+ avg_wire_size += BIT(itr_round) - 1;
avg_wire_size >>= itr_round;
/* write back value and retain adaptive flag */
@@ -1449,8 +1437,10 @@ static int fm10k_poll(struct napi_struct *napi, int budget)
int per_ring_budget, work_done = 0;
bool clean_complete = true;
- fm10k_for_each_ring(ring, q_vector->tx)
- clean_complete &= fm10k_clean_tx_irq(q_vector, ring);
+ fm10k_for_each_ring(ring, q_vector->tx) {
+ if (!fm10k_clean_tx_irq(q_vector, ring, budget))
+ clean_complete = false;
+ }
/* Handle case where we are called by netpoll with a budget of 0 */
if (budget <= 0)
@@ -1468,7 +1458,8 @@ static int fm10k_poll(struct napi_struct *napi, int budget)
int work = fm10k_clean_rx_irq(q_vector, ring, per_ring_budget);
work_done += work;
- clean_complete &= !!(work < per_ring_budget);
+ if (work >= per_ring_budget)
+ clean_complete = false;
}
/* If all work not completed, return budget and keep polling */
@@ -1511,17 +1502,17 @@ static bool fm10k_set_qos_queues(struct fm10k_intfc *interface)
/* set QoS mask and indices */
f = &interface->ring_feature[RING_F_QOS];
f->indices = pcs;
- f->mask = (1 << fls(pcs - 1)) - 1;
+ f->mask = BIT(fls(pcs - 1)) - 1;
/* determine the upper limit for our current DCB mode */
rss_i = interface->hw.mac.max_queues / pcs;
- rss_i = 1 << (fls(rss_i) - 1);
+ rss_i = BIT(fls(rss_i) - 1);
/* set RSS mask and indices */
f = &interface->ring_feature[RING_F_RSS];
rss_i = min_t(u16, rss_i, f->limit);
f->indices = rss_i;
- f->mask = (1 << fls(rss_i - 1)) - 1;
+ f->mask = BIT(fls(rss_i - 1)) - 1;
/* configure pause class to queue mapping */
for (i = 0; i < pcs; i++)
@@ -1551,7 +1542,7 @@ static bool fm10k_set_rss_queues(struct fm10k_intfc *interface)
/* record indices and power of 2 mask for RSS */
f->indices = rss_i;
- f->mask = (1 << fls(rss_i - 1)) - 1;
+ f->mask = BIT(fls(rss_i - 1)) - 1;
interface->num_rx_queues = rss_i;
interface->num_tx_queues = rss_i;
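A quick worked example for the power-of-two mask math above:

  /* rss_i = 6:  fls(6 - 1) = fls(5) = 3,  BIT(3) - 1 = 0x7              */
  /* rss_i = 8:  fls(8 - 1) = fls(7) = 3,  BIT(3) - 1 = 0x7 as well,     */
  /* i.e. the mask always spans the next power of two worth of indices.  */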
@@ -1572,17 +1563,29 @@ static bool fm10k_set_rss_queues(struct fm10k_intfc *interface)
**/
static void fm10k_set_num_queues(struct fm10k_intfc *interface)
{
- /* Start with base case */
- interface->num_rx_queues = 1;
- interface->num_tx_queues = 1;
-
+ /* Attempt to set up QoS and RSS first */
if (fm10k_set_qos_queues(interface))
return;
+ /* If we don't have QoS, just fall back to RSS only. */
fm10k_set_rss_queues(interface);
}
/**
+ * fm10k_reset_num_queues - Reset the number of queues to zero
+ * @interface: board private structure
+ *
+ * This function should be called whenever we need to reset the number of
+ * queues after an error condition.
+ */
+static void fm10k_reset_num_queues(struct fm10k_intfc *interface)
+{
+ interface->num_tx_queues = 0;
+ interface->num_rx_queues = 0;
+ interface->num_q_vectors = 0;
+}
+
+/**
* fm10k_alloc_q_vector - Allocate memory for a single interrupt vector
* @interface: board private structure to initialize
* @v_count: q_vectors allocated on interface, used for ring interleaving
@@ -1765,9 +1768,7 @@ static int fm10k_alloc_q_vectors(struct fm10k_intfc *interface)
return 0;
err_out:
- interface->num_tx_queues = 0;
- interface->num_rx_queues = 0;
- interface->num_q_vectors = 0;
+ fm10k_reset_num_queues(interface);
while (v_idx--)
fm10k_free_q_vector(interface, v_idx);
@@ -1787,9 +1788,7 @@ static void fm10k_free_q_vectors(struct fm10k_intfc *interface)
{
int v_idx = interface->num_q_vectors;
- interface->num_tx_queues = 0;
- interface->num_rx_queues = 0;
- interface->num_q_vectors = 0;
+ fm10k_reset_num_queues(interface);
while (v_idx--)
fm10k_free_q_vector(interface, v_idx);
@@ -1935,7 +1934,7 @@ static void fm10k_assign_rings(struct fm10k_intfc *interface)
static void fm10k_init_reta(struct fm10k_intfc *interface)
{
u16 i, rss_i = interface->ring_feature[RING_F_RSS].indices;
- u32 reta, base;
+ u32 reta;
/* If the Rx flow indirection table has been configured manually, we
* need to maintain it when possible.
@@ -1960,21 +1959,7 @@ static void fm10k_init_reta(struct fm10k_intfc *interface)
}
repopulate_reta:
- /* Populate the redirection table 4 entries at a time. To do this
- * we are generating the results for n and n+2 and then interleaving
- * those with the results with n+1 and n+3.
- */
- for (i = FM10K_RETA_SIZE; i--;) {
- /* first pass generates n and n+2 */
- base = ((i * 0x00040004) + 0x00020000) * rss_i;
- reta = (base & 0x3F803F80) >> 7;
-
- /* second pass generates n+1 and n+3 */
- base += 0x00010001 * rss_i;
- reta |= (base & 0x3F803F80) << 1;
-
- interface->reta[i] = reta;
- }
+ fm10k_write_reta(interface, NULL);
}
/**
@@ -1997,14 +1982,15 @@ int fm10k_init_queueing_scheme(struct fm10k_intfc *interface)
if (err) {
dev_err(&interface->pdev->dev,
"Unable to initialize MSI-X capability\n");
- return err;
+ goto err_init_msix;
}
/* Allocate memory for queues */
err = fm10k_alloc_q_vectors(interface);
if (err) {
- fm10k_reset_msix_capability(interface);
- return err;
+ dev_err(&interface->pdev->dev,
+ "Unable to allocate queue vectors\n");
+ goto err_alloc_q_vectors;
}
/* Map rings to devices, and map devices to physical queues */
@@ -2014,6 +2000,12 @@ int fm10k_init_queueing_scheme(struct fm10k_intfc *interface)
fm10k_init_reta(interface);
return 0;
+
+err_alloc_q_vectors:
+ fm10k_reset_msix_capability(interface);
+err_init_msix:
+ fm10k_reset_num_queues(interface);
+ return err;
}
/**
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_mbx.c b/drivers/net/ethernet/intel/fm10k/fm10k_mbx.c
index 98202c3d591c..c9dfa6564fcf 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_mbx.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_mbx.c
@@ -1,5 +1,5 @@
-/* Intel Ethernet Switch Host Interface Driver
- * Copyright(c) 2013 - 2015 Intel Corporation.
+/* Intel(R) Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_mbx.h b/drivers/net/ethernet/intel/fm10k/fm10k_mbx.h
index 245a0a3dc32e..b7dbc8a84c05 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_mbx.h
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_mbx.h
@@ -1,5 +1,5 @@
-/* Intel Ethernet Switch Host Interface Driver
- * Copyright(c) 2013 - 2015 Intel Corporation.
+/* Intel(R) Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c b/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c
index d09a8dd71fc2..2a08d3f5b6df 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_netdev.c
@@ -1,5 +1,5 @@
-/* Intel Ethernet Switch Host Interface Driver
- * Copyright(c) 2013 - 2015 Intel Corporation.
+/* Intel(R) Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
@@ -243,9 +243,6 @@ void fm10k_clean_all_tx_rings(struct fm10k_intfc *interface)
for (i = 0; i < interface->num_tx_queues; i++)
fm10k_clean_tx_ring(interface->tx_ring[i]);
-
- /* remove any stale timestamp buffers and free them */
- skb_queue_purge(&interface->ts_tx_skb_queue);
}
/**
@@ -440,7 +437,7 @@ static void fm10k_restore_vxlan_port(struct fm10k_intfc *interface)
* @sa_family: Address family of new port
* @port: port number used for VXLAN
*
- * This funciton is called when a new VXLAN interface has added a new port
+ * This function is called when a new VXLAN interface has added a new port
* number to the range that is currently in use for VXLAN. The new port
* number is always added to the tail so that the port number list should
* match the order in which the ports were allocated. The head of the list
@@ -484,7 +481,7 @@ insert_tail:
* @sa_family: Address family of freed port
* @port: port number used for VXLAN
*
- * This funciton is called when a new VXLAN interface has freed a port
+ * This function is called when a new VXLAN interface has freed a port
* number from the range that is currently in use for VXLAN. The freed
* port is removed from the list and the new head is used to determine
* the port number for offloads.
@@ -660,10 +657,6 @@ static netdev_tx_t fm10k_xmit_frame(struct sk_buff *skb, struct net_device *dev)
__skb_put(skb, pad_len);
}
- /* prepare packet for hardware time stamping */
- if (unlikely(skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP))
- fm10k_ts_tx_enqueue(interface, skb);
-
if (r_idx >= interface->num_tx_queues)
r_idx %= interface->num_tx_queues;
@@ -884,7 +877,7 @@ static int __fm10k_uc_sync(struct net_device *dev,
return -EADDRNOTAVAIL;
/* update table with current entries */
- for (vid = hw->mac.default_vid ? fm10k_find_next_vlan(interface, 0) : 0;
+ for (vid = hw->mac.default_vid ? fm10k_find_next_vlan(interface, 0) : 1;
vid < VLAN_N_VID;
vid = fm10k_find_next_vlan(interface, vid)) {
err = hw->mac.ops.update_uc_addr(hw, glort, addr,
@@ -947,7 +940,7 @@ static int __fm10k_mc_sync(struct net_device *dev,
u16 vid, glort = interface->glort;
/* update table with current entries */
- for (vid = hw->mac.default_vid ? fm10k_find_next_vlan(interface, 0) : 0;
+ for (vid = hw->mac.default_vid ? fm10k_find_next_vlan(interface, 0) : 1;
vid < VLAN_N_VID;
vid = fm10k_find_next_vlan(interface, vid)) {
hw->mac.ops.update_mc_addr(hw, glort, addr, vid, sync);
@@ -1002,11 +995,8 @@ static void fm10k_set_rx_mode(struct net_device *dev)
}
/* synchronize all of the addresses */
- if (xcast_mode != FM10K_XCAST_MODE_PROMISC) {
- __dev_uc_sync(dev, fm10k_uc_sync, fm10k_uc_unsync);
- if (xcast_mode != FM10K_XCAST_MODE_ALLMULTI)
- __dev_mc_sync(dev, fm10k_mc_sync, fm10k_mc_unsync);
- }
+ __dev_uc_sync(dev, fm10k_uc_sync, fm10k_uc_unsync);
+ __dev_mc_sync(dev, fm10k_mc_sync, fm10k_mc_unsync);
fm10k_mbx_unlock(interface);
}
@@ -1044,7 +1034,7 @@ void fm10k_restore_rx_state(struct fm10k_intfc *interface)
hw->mac.ops.update_vlan(hw, 0, 0, true);
/* update table with current entries */
- for (vid = hw->mac.default_vid ? fm10k_find_next_vlan(interface, 0) : 0;
+ for (vid = hw->mac.default_vid ? fm10k_find_next_vlan(interface, 0) : 1;
vid < VLAN_N_VID;
vid = fm10k_find_next_vlan(interface, vid)) {
hw->mac.ops.update_vlan(hw, vid, 0, true);
@@ -1056,11 +1046,8 @@ void fm10k_restore_rx_state(struct fm10k_intfc *interface)
hw->mac.ops.update_xcast_mode(hw, glort, xcast_mode);
/* synchronize all of the addresses */
- if (xcast_mode != FM10K_XCAST_MODE_PROMISC) {
- __dev_uc_sync(netdev, fm10k_uc_sync, fm10k_uc_unsync);
- if (xcast_mode != FM10K_XCAST_MODE_ALLMULTI)
- __dev_mc_sync(netdev, fm10k_mc_sync, fm10k_mc_unsync);
- }
+ __dev_uc_sync(netdev, fm10k_uc_sync, fm10k_uc_unsync);
+ __dev_mc_sync(netdev, fm10k_mc_sync, fm10k_mc_unsync);
fm10k_mbx_unlock(interface);
@@ -1213,18 +1200,6 @@ static int __fm10k_setup_tc(struct net_device *dev, u32 handle, __be16 proto,
return fm10k_setup_tc(dev, tc->tc);
}
-static int fm10k_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
-{
- switch (cmd) {
- case SIOCGHWTSTAMP:
- return fm10k_get_ts_config(netdev, ifr);
- case SIOCSHWTSTAMP:
- return fm10k_set_ts_config(netdev, ifr);
- default:
- return -EOPNOTSUPP;
- }
-}
-
static void fm10k_assign_l2_accel(struct fm10k_intfc *interface,
struct fm10k_l2_accel *l2_accel)
{
@@ -1402,7 +1377,6 @@ static const struct net_device_ops fm10k_netdev_ops = {
.ndo_get_vf_config = fm10k_ndo_get_vf_config,
.ndo_add_vxlan_port = fm10k_add_vxlan_port,
.ndo_del_vxlan_port = fm10k_del_vxlan_port,
- .ndo_do_ioctl = fm10k_ioctl,
.ndo_dfwd_add_station = fm10k_dfwd_add_station,
.ndo_dfwd_del_station = fm10k_dfwd_del_station,
#ifdef CONFIG_NET_POLL_CONTROLLER
@@ -1429,7 +1403,7 @@ struct net_device *fm10k_alloc_netdev(const struct fm10k_info *info)
/* configure default debug level */
interface = netdev_priv(dev);
- interface->msg_enable = (1 << DEFAULT_DEBUG_LEVEL_SHIFT) - 1;
+ interface->msg_enable = BIT(DEFAULT_DEBUG_LEVEL_SHIFT) - 1;
/* configure default features */
dev->features |= NETIF_F_IP_CSUM |
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_pci.c b/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
index 4eb7a6fa6b0d..e05aca9bef0e 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_pci.c
@@ -1,5 +1,5 @@
-/* Intel Ethernet Switch Host Interface Driver
- * Copyright(c) 2013 - 2015 Intel Corporation.
+/* Intel(R) Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
@@ -99,7 +99,7 @@ void fm10k_service_event_schedule(struct fm10k_intfc *interface)
static void fm10k_service_event_complete(struct fm10k_intfc *interface)
{
- BUG_ON(!test_bit(__FM10K_SERVICE_SCHED, &interface->state));
+ WARN_ON(!test_bit(__FM10K_SERVICE_SCHED, &interface->state));
/* flush memory to make sure state is correct before next watchdog */
smp_mb__before_atomic();
@@ -145,7 +145,7 @@ static void fm10k_reinit(struct fm10k_intfc *interface)
WARN_ON(in_interrupt());
/* put off any impending NetWatchDogTimeout */
- netdev->trans_start = jiffies;
+ netif_trans_update(netdev);
while (test_and_set_bit(__FM10K_RESETTING, &interface->state))
usleep_range(1000, 2000);
@@ -209,9 +209,6 @@ static void fm10k_reinit(struct fm10k_intfc *interface)
netdev->features |= NETIF_F_HW_VLAN_CTAG_RX;
}
- /* reset clock */
- fm10k_ts_reset(interface);
-
err = netif_running(netdev) ? fm10k_open(netdev) : 0;
if (err)
goto err_open;
@@ -559,7 +556,6 @@ static void fm10k_service_task(struct work_struct *work)
/* tasks only run when interface is up */
fm10k_watchdog_subtask(interface);
fm10k_check_hang_subtask(interface);
- fm10k_ts_tx_subtask(interface);
/* release lock on service events to allow scheduling next event */
fm10k_service_event_complete(interface);
@@ -579,7 +575,7 @@ static void fm10k_configure_tx_ring(struct fm10k_intfc *interface,
u64 tdba = ring->dma;
u32 size = ring->count * sizeof(struct fm10k_tx_desc);
u32 txint = FM10K_INT_MAP_DISABLE;
- u32 txdctl = FM10K_TXDCTL_ENABLE | (1 << FM10K_TXDCTL_MAX_TIME_SHIFT);
+ u32 txdctl = BIT(FM10K_TXDCTL_MAX_TIME_SHIFT) | FM10K_TXDCTL_ENABLE;
u8 reg_idx = ring->reg_idx;
/* disable queue to avoid issues while updating state */
@@ -730,7 +726,7 @@ static void fm10k_configure_rx_ring(struct fm10k_intfc *interface,
if (interface->pfc_en)
rx_pause = interface->pfc_en;
#endif
- if (!(rx_pause & (1 << ring->qos_pc)))
+ if (!(rx_pause & BIT(ring->qos_pc)))
rxdctl |= FM10K_RXDCTL_DROP_ON_EMPTY;
fm10k_write_reg(hw, FM10K_RXDCTL(reg_idx), rxdctl);
@@ -779,7 +775,7 @@ void fm10k_update_rx_drop_en(struct fm10k_intfc *interface)
u32 rxdctl = FM10K_RXDCTL_WRITE_BACK_MIN_DELAY;
u8 reg_idx = ring->reg_idx;
- if (!(rx_pause & (1 << ring->qos_pc)))
+ if (!(rx_pause & BIT(ring->qos_pc)))
rxdctl |= FM10K_RXDCTL_DROP_ON_EMPTY;
fm10k_write_reg(hw, FM10K_RXDCTL(reg_idx), rxdctl);
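
These hunks repeatedly replace open-coded shifts with the kernel's BIT() macro. A minimal userspace sketch of the equivalence, assuming the usual definition BIT(n) == (1UL << (n)); the shift value is purely illustrative:

#include <stdio.h>

#define BIT(n) (1UL << (n))	/* assumed to match the kernel definition */

int main(void)
{
	unsigned int shift = 3;			/* illustrative shift value */
	unsigned long flag = BIT(shift);	/* single-bit flag: 0x8 */
	unsigned long mask = BIT(shift) - 1;	/* low-bit mask:    0x7 */

	printf("flag=0x%lx mask=0x%lx\n", flag, mask);
	return 0;
}
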
@@ -903,8 +899,8 @@ static irqreturn_t fm10k_msix_mbx_vf(int __always_unused irq, void *data)
/* re-enable mailbox interrupt and indicate 20us delay */
fm10k_write_reg(hw, FM10K_VFITR(FM10K_MBX_VECTOR),
- FM10K_ITR_ENABLE | (FM10K_MBX_INT_DELAY >>
- hw->mac.itr_scale));
+ (FM10K_MBX_INT_DELAY >> hw->mac.itr_scale) |
+ FM10K_ITR_ENABLE);
/* service upstream mailbox */
if (fm10k_mbx_trylock(interface)) {
@@ -1065,7 +1061,7 @@ static void fm10k_reset_drop_on_empty(struct fm10k_intfc *interface, u32 eicr)
if (maxholdq)
fm10k_write_reg(hw, FM10K_MAXHOLDQ(7), maxholdq);
for (q = 255;;) {
- if (maxholdq & (1 << 31)) {
+ if (maxholdq & BIT(31)) {
if (q < FM10K_MAX_QUEUES_PF) {
interface->rx_overrun_pf++;
fm10k_write_reg(hw, FM10K_RXDCTL(q), rxdctl);
@@ -1135,22 +1131,24 @@ static irqreturn_t fm10k_msix_mbx_pf(int __always_unused irq, void *data)
/* re-enable mailbox interrupt and indicate 20us delay */
fm10k_write_reg(hw, FM10K_ITR(FM10K_MBX_VECTOR),
- FM10K_ITR_ENABLE | (FM10K_MBX_INT_DELAY >>
- hw->mac.itr_scale));
+ (FM10K_MBX_INT_DELAY >> hw->mac.itr_scale) |
+ FM10K_ITR_ENABLE);
return IRQ_HANDLED;
}
void fm10k_mbx_free_irq(struct fm10k_intfc *interface)
{
- struct msix_entry *entry = &interface->msix_entries[FM10K_MBX_VECTOR];
struct fm10k_hw *hw = &interface->hw;
+ struct msix_entry *entry;
int itr_reg;
/* no mailbox IRQ to free if MSI-X is not enabled */
if (!interface->msix_entries)
return;
+ entry = &interface->msix_entries[FM10K_MBX_VECTOR];
+
/* disconnect the mailbox */
hw->mbx.ops.disconnect(hw, &hw->mbx);
@@ -1202,25 +1200,6 @@ static s32 fm10k_mbx_mac_addr(struct fm10k_hw *hw, u32 **results,
return 0;
}
-static s32 fm10k_1588_msg_vf(struct fm10k_hw *hw, u32 **results,
- struct fm10k_mbx_info __always_unused *mbx)
-{
- struct fm10k_intfc *interface;
- u64 timestamp;
- s32 err;
-
- err = fm10k_tlv_attr_get_u64(results[FM10K_1588_MSG_TIMESTAMP],
- &timestamp);
- if (err)
- return err;
-
- interface = container_of(hw, struct fm10k_intfc, hw);
-
- fm10k_ts_tx_hwtstamp(interface, 0, timestamp);
-
- return 0;
-}
-
/* generic error handler for mailbox issues */
static s32 fm10k_mbx_error(struct fm10k_hw *hw, u32 **results,
struct fm10k_mbx_info __always_unused *mbx)
@@ -1241,7 +1220,6 @@ static const struct fm10k_msg_data vf_mbx_data[] = {
FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test),
FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_mbx_mac_addr),
FM10K_VF_MSG_LPORT_STATE_HANDLER(fm10k_msg_lport_state_vf),
- FM10K_VF_MSG_1588_HANDLER(fm10k_1588_msg_vf),
FM10K_TLV_MSG_ERROR_HANDLER(fm10k_mbx_error),
};
@@ -1253,7 +1231,7 @@ static int fm10k_mbx_request_irq_vf(struct fm10k_intfc *interface)
int err;
/* Use timer0 for interrupt moderation on the mailbox */
- u32 itr = FM10K_INT_MAP_TIMER0 | entry->entry;
+ u32 itr = entry->entry | FM10K_INT_MAP_TIMER0;
/* register mailbox handlers */
err = hw->mbx.ops.register_handlers(&hw->mbx, vf_mbx_data);
@@ -1285,11 +1263,40 @@ static s32 fm10k_lport_map(struct fm10k_hw *hw, u32 **results,
u32 dglort_map = hw->mac.dglort_map;
s32 err;
+ interface = container_of(hw, struct fm10k_intfc, hw);
+
+ err = fm10k_msg_err_pf(hw, results, mbx);
+ if (!err && hw->swapi.status) {
+ /* force link down for a reasonable delay */
+ interface->link_down_event = jiffies + (2 * HZ);
+ set_bit(__FM10K_LINK_DOWN, &interface->state);
+
+ /* reset dglort_map back to no config */
+ hw->mac.dglort_map = FM10K_DGLORTMAP_NONE;
+
+ fm10k_service_event_schedule(interface);
+
+ /* prevent overloading kernel message buffer */
+ if (interface->lport_map_failed)
+ return 0;
+
+ interface->lport_map_failed = true;
+
+ if (hw->swapi.status == FM10K_MSG_ERR_PEP_NOT_SCHEDULED)
+ dev_warn(&interface->pdev->dev,
+ "cannot obtain link because the host interface is configured for a PCIe host interface bandwidth of zero\n");
+ dev_warn(&interface->pdev->dev,
+ "request logical port map failed: %d\n",
+ hw->swapi.status);
+
+ return 0;
+ }
+
err = fm10k_msg_lport_map_pf(hw, results, mbx);
if (err)
return err;
- interface = container_of(hw, struct fm10k_intfc, hw);
+ interface->lport_map_failed = false;
/* we need to reset if port count was just updated */
if (dglort_map != hw->mac.dglort_map)
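
The error path added above reports a failed LPORT_MAP request at most once, until a later successful response clears the flag. A small sketch of that warn-once pattern, with illustrative names rather than the driver's:

#include <stdbool.h>
#include <stdio.h>

struct port_state {
	bool map_failed;	/* plays the role of lport_map_failed */
};

static void map_request_failed(struct port_state *st, int status)
{
	if (st->map_failed)
		return;		/* already reported, avoid flooding the log */
	st->map_failed = true;
	fprintf(stderr, "request logical port map failed: %d\n", status);
}

static void map_request_succeeded(struct port_state *st)
{
	st->map_failed = false;	/* re-arm the warning for future failures */
}

int main(void)
{
	struct port_state st = { 0 };

	map_request_failed(&st, -1);	/* warns */
	map_request_failed(&st, -1);	/* silent */
	map_request_succeeded(&st);
	map_request_failed(&st, -1);	/* warns again */
	return 0;
}
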
@@ -1339,68 +1346,6 @@ static s32 fm10k_update_pvid(struct fm10k_hw *hw, u32 **results,
return 0;
}
-static s32 fm10k_1588_msg_pf(struct fm10k_hw *hw, u32 **results,
- struct fm10k_mbx_info __always_unused *mbx)
-{
- struct fm10k_swapi_1588_timestamp timestamp;
- struct fm10k_iov_data *iov_data;
- struct fm10k_intfc *interface;
- u16 sglort, vf_idx;
- s32 err;
-
- err = fm10k_tlv_attr_get_le_struct(
- results[FM10K_PF_ATTR_ID_1588_TIMESTAMP],
- &timestamp, sizeof(timestamp));
- if (err)
- return err;
-
- interface = container_of(hw, struct fm10k_intfc, hw);
-
- if (timestamp.dglort) {
- fm10k_ts_tx_hwtstamp(interface, timestamp.dglort,
- le64_to_cpu(timestamp.egress));
- return 0;
- }
-
- /* either dglort or sglort must be set */
- if (!timestamp.sglort)
- return FM10K_ERR_PARAM;
-
- /* verify GLORT is at least one of the ones we own */
- sglort = le16_to_cpu(timestamp.sglort);
- if (!fm10k_glort_valid_pf(hw, sglort))
- return FM10K_ERR_PARAM;
-
- if (sglort == interface->glort) {
- fm10k_ts_tx_hwtstamp(interface, 0,
- le64_to_cpu(timestamp.ingress));
- return 0;
- }
-
- /* if there is no iov_data then there is no mailboxes to process */
- if (!ACCESS_ONCE(interface->iov_data))
- return FM10K_ERR_PARAM;
-
- rcu_read_lock();
-
- /* notify VF if this timestamp belongs to it */
- iov_data = interface->iov_data;
- vf_idx = (hw->mac.dglort_map & FM10K_DGLORTMAP_NONE) - sglort;
-
- if (!iov_data || vf_idx >= iov_data->num_vfs) {
- err = FM10K_ERR_PARAM;
- goto err_unlock;
- }
-
- err = hw->iov.ops.report_timestamp(hw, &iov_data->vf_info[vf_idx],
- le64_to_cpu(timestamp.ingress));
-
-err_unlock:
- rcu_read_unlock();
-
- return err;
-}
-
static const struct fm10k_msg_data pf_mbx_data[] = {
FM10K_PF_MSG_ERR_HANDLER(XCAST_MODES, fm10k_msg_err_pf),
FM10K_PF_MSG_ERR_HANDLER(UPDATE_MAC_FWD_RULE, fm10k_msg_err_pf),
@@ -1408,7 +1353,6 @@ static const struct fm10k_msg_data pf_mbx_data[] = {
FM10K_PF_MSG_ERR_HANDLER(LPORT_CREATE, fm10k_msg_err_pf),
FM10K_PF_MSG_ERR_HANDLER(LPORT_DELETE, fm10k_msg_err_pf),
FM10K_PF_MSG_UPDATE_PVID_HANDLER(fm10k_update_pvid),
- FM10K_PF_MSG_1588_TIMESTAMP_HANDLER(fm10k_1588_msg_pf),
FM10K_TLV_MSG_ERROR_HANDLER(fm10k_mbx_error),
};
@@ -1420,8 +1364,8 @@ static int fm10k_mbx_request_irq_pf(struct fm10k_intfc *interface)
int err;
/* Use timer0 for interrupt moderation on the mailbox */
- u32 mbx_itr = FM10K_INT_MAP_TIMER0 | entry->entry;
- u32 other_itr = FM10K_INT_MAP_IMMEDIATE | entry->entry;
+ u32 mbx_itr = entry->entry | FM10K_INT_MAP_TIMER0;
+ u32 other_itr = entry->entry | FM10K_INT_MAP_IMMEDIATE;
/* register mailbox handlers */
err = hw->mbx.ops.register_handlers(&hw->mbx, pf_mbx_data);
@@ -1654,6 +1598,7 @@ void fm10k_down(struct fm10k_intfc *interface)
{
struct net_device *netdev = interface->netdev;
struct fm10k_hw *hw = &interface->hw;
+ int err;
/* signal that we are down to the interrupt handler and service task */
set_bit(__FM10K_DOWN, &interface->state);
@@ -1678,7 +1623,9 @@ void fm10k_down(struct fm10k_intfc *interface)
fm10k_update_stats(interface);
/* Disable DMA engine for Tx/Rx */
- hw->mac.ops.stop_hw(hw);
+ err = hw->mac.ops.stop_hw(hw);
+ if (err)
+ dev_err(&interface->pdev->dev, "stop_hw failed: %d\n", err);
/* free any buffers still on the rings */
fm10k_clean_all_tx_rings(interface);
@@ -1776,35 +1723,17 @@ static int fm10k_sw_init(struct fm10k_intfc *interface,
netdev->addr_assign_type |= NET_ADDR_RANDOM;
}
- memcpy(netdev->dev_addr, hw->mac.addr, netdev->addr_len);
- memcpy(netdev->perm_addr, hw->mac.addr, netdev->addr_len);
+ ether_addr_copy(netdev->dev_addr, hw->mac.addr);
+ ether_addr_copy(netdev->perm_addr, hw->mac.addr);
if (!is_valid_ether_addr(netdev->perm_addr)) {
dev_err(&pdev->dev, "Invalid MAC Address\n");
return -EIO;
}
- /* assign BAR 4 resources for use with PTP */
- if (fm10k_read_reg(hw, FM10K_CTRL) & FM10K_CTRL_BAR4_ALLOWED)
- interface->sw_addr = ioremap(pci_resource_start(pdev, 4),
- pci_resource_len(pdev, 4));
- hw->sw_addr = interface->sw_addr;
-
/* initialize DCBNL interface */
fm10k_dcbnl_set_ops(netdev);
- /* Initialize service timer and service task */
- set_bit(__FM10K_SERVICE_DISABLE, &interface->state);
- setup_timer(&interface->service_timer, &fm10k_service_timer,
- (unsigned long)interface);
- INIT_WORK(&interface->service_task, fm10k_service_task);
-
- /* kick off service timer now, even when interface is down */
- mod_timer(&interface->service_timer, (HZ * 2) + jiffies);
-
- /* Initialize timestamp data */
- fm10k_ts_init(interface);
-
/* set default ring sizes */
interface->tx_ring_count = FM10K_DEFAULT_TXD;
interface->rx_ring_count = FM10K_DEFAULT_RXD;
@@ -1987,6 +1916,12 @@ static int fm10k_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (err)
goto err_sw_init;
+ /* the mbx interrupt might attempt to schedule the service task, so we
+ * must ensure it is disabled since we haven't yet requested the timer
+ * or work item.
+ */
+ set_bit(__FM10K_SERVICE_DISABLE, &interface->state);
+
err = fm10k_mbx_request_irq(interface);
if (err)
goto err_mbx_interrupt;
@@ -2006,8 +1941,15 @@ static int fm10k_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
/* stop all the transmit queues from transmitting until link is up */
netif_tx_stop_all_queues(netdev);
- /* Register PTP interface */
- fm10k_ptp_register(interface);
+ /* Initialize service timer and service task late in order to avoid
+ * cleanup issues.
+ */
+ setup_timer(&interface->service_timer, &fm10k_service_timer,
+ (unsigned long)interface);
+ INIT_WORK(&interface->service_task, fm10k_service_task);
+
+ /* kick off service timer now, even when interface is down */
+ mod_timer(&interface->service_timer, (HZ * 2) + jiffies);
/* print warning for non-optimal configurations */
fm10k_slot_warn(interface);
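
The two probe-path hunks above defer the service timer and work item until after the mailbox IRQ is requested, and set the disable bit first so an early interrupt cannot schedule work that does not exist yet. A sketch of that ordering guard, using illustrative names:

#include <stdbool.h>
#include <stdio.h>

struct intf {
	bool service_disabled;	/* set before any interrupt can fire */
	bool work_initialized;	/* set once the timer/work item exist */
};

static void service_event_schedule(struct intf *i)
{
	if (i->service_disabled || !i->work_initialized) {
		puts("service task not ready, skipping");
		return;
	}
	puts("service task scheduled");
}

int main(void)
{
	struct intf i = { .service_disabled = true };

	service_event_schedule(&i);	/* early mailbox interrupt: skipped */

	i.work_initialized = true;	/* late init, as in the hunk above */
	i.service_disabled = false;
	service_event_schedule(&i);	/* now allowed */
	return 0;
}
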
@@ -2065,9 +2007,6 @@ static void fm10k_remove(struct pci_dev *pdev)
if (netdev->reg_state == NETREG_REGISTERED)
unregister_netdev(netdev);
- /* cleanup timestamp handling */
- fm10k_ptp_unregister(interface);
-
/* release VFs */
fm10k_iov_disable(pdev);
@@ -2140,9 +2079,6 @@ static int fm10k_resume(struct pci_dev *pdev)
/* reset statistics starting values */
hw->mac.ops.rebind_hw_stats(hw, &interface->stats);
- /* reset clock */
- fm10k_ts_reset(interface);
-
rtnl_lock();
err = fm10k_init_queueing_scheme(interface);
@@ -2259,15 +2195,17 @@ static pci_ers_result_t fm10k_io_error_detected(struct pci_dev *pdev,
if (state == pci_channel_io_perm_failure)
return PCI_ERS_RESULT_DISCONNECT;
+ rtnl_lock();
+
if (netif_running(netdev))
fm10k_close(netdev);
+ fm10k_mbx_free_irq(interface);
+
/* free interrupts */
fm10k_clear_queueing_scheme(interface);
- fm10k_mbx_free_irq(interface);
-
- pci_disable_device(pdev);
+ rtnl_unlock();
/* Request a slot reset. */
return PCI_ERS_RESULT_NEED_RESET;
@@ -2337,27 +2275,31 @@ static void fm10k_io_resume(struct pci_dev *pdev)
/* reset statistics starting values */
hw->mac.ops.rebind_hw_stats(hw, &interface->stats);
+ rtnl_lock();
+
err = fm10k_init_queueing_scheme(interface);
if (err) {
dev_err(&interface->pdev->dev,
"init_queueing_scheme failed: %d\n", err);
- return;
+ goto unlock;
}
/* reassociate interrupts */
fm10k_mbx_request_irq(interface);
- /* reset clock */
- fm10k_ts_reset(interface);
-
+ rtnl_lock();
if (netif_running(netdev))
err = fm10k_open(netdev);
+ rtnl_unlock();
/* final check of hardware state before registering the interface */
err = err ? : fm10k_hw_ready(interface);
if (!err)
netif_device_attach(netdev);
+
+unlock:
+ rtnl_unlock();
}
static const struct pci_error_handlers fm10k_err_handler = {
@@ -2382,7 +2324,7 @@ static struct pci_driver fm10k_driver = {
/**
* fm10k_register_pci_driver - register driver interface
*
- * This funciton is called on module load in order to register the driver.
+ * This function is called on module load in order to register the driver.
**/
int fm10k_register_pci_driver(void)
{
@@ -2392,7 +2334,7 @@ int fm10k_register_pci_driver(void)
/**
* fm10k_unregister_pci_driver - unregister driver interface
*
- * This funciton is called on module unload in order to remove the driver.
+ * This function is called on module unload in order to remove the driver.
**/
void fm10k_unregister_pci_driver(void)
{
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_pf.c b/drivers/net/ethernet/intel/fm10k/fm10k_pf.c
index 8cf943db5662..dc75507c9926 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_pf.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_pf.c
@@ -1,5 +1,5 @@
-/* Intel Ethernet Switch Host Interface Driver
- * Copyright(c) 2013 - 2015 Intel Corporation.
+/* Intel(R) Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
@@ -219,8 +219,8 @@ static s32 fm10k_update_vlan_pf(struct fm10k_hw *hw, u32 vid, u8 vsi, bool set)
/* VLAN multi-bit write:
* The multi-bit write has several parts to it.
- * 3 2 1 0
- * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0
+ * 24 16 8 0
+ * 7 6 5 4 3 2 1 0 7 6 5 4 3 2 1 0 7 6 5 4 3 2 1 0 7 6 5 4 3 2 1 0
* +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
* | RSVD0 | Length |C|RSVD0| VLAN ID |
* +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
@@ -488,6 +488,10 @@ static s32 fm10k_update_lport_state_pf(struct fm10k_hw *hw, u16 glort,
if (!fm10k_glort_valid_pf(hw, glort))
return FM10K_ERR_PARAM;
+ /* reset multicast mode if deleting lport */
+ if (!enable)
+ fm10k_update_xcast_mode_pf(hw, glort, FM10K_XCAST_MODE_NONE);
+
/* construct the lport message from the 2 pieces of data we have */
lport_msg = ((u32)count << 16) | glort;
@@ -527,8 +531,8 @@ static s32 fm10k_configure_dglort_map_pf(struct fm10k_hw *hw,
return FM10K_ERR_PARAM;
/* determine count of VSIs and queues */
- queue_count = 1 << (dglort->rss_l + dglort->pc_l);
- vsi_count = 1 << (dglort->vsi_l + dglort->queue_l);
+ queue_count = BIT(dglort->rss_l + dglort->pc_l);
+ vsi_count = BIT(dglort->vsi_l + dglort->queue_l);
glort = dglort->glort;
q_idx = dglort->queue_b;
@@ -544,8 +548,8 @@ static s32 fm10k_configure_dglort_map_pf(struct fm10k_hw *hw,
}
/* determine count of PCs and queues */
- queue_count = 1 << (dglort->queue_l + dglort->rss_l + dglort->vsi_l);
- pc_count = 1 << dglort->pc_l;
+ queue_count = BIT(dglort->queue_l + dglort->rss_l + dglort->vsi_l);
+ pc_count = BIT(dglort->pc_l);
/* configure PC for Tx queues */
for (pc = 0; pc < pc_count; pc++) {
@@ -711,8 +715,8 @@ static s32 fm10k_iov_assign_resources_pf(struct fm10k_hw *hw, u16 num_vfs,
FM10K_RXDCTL_WRITE_BACK_MIN_DELAY |
FM10K_RXDCTL_DROP_ON_EMPTY);
fm10k_write_reg(hw, FM10K_RXQCTL(vf_q_idx),
- FM10K_RXQCTL_VF |
- (i << FM10K_RXQCTL_VF_SHIFT));
+ (i << FM10K_RXQCTL_VF_SHIFT) |
+ FM10K_RXQCTL_VF);
/* map queue pair to VF */
fm10k_write_reg(hw, FM10K_TQMAP(qmap_idx), vf_q_idx);
@@ -864,9 +868,13 @@ static s32 fm10k_iov_assign_default_mac_vlan_pf(struct fm10k_hw *hw,
fm10k_write_reg(hw, FM10K_TQMAP(qmap_idx), 0);
fm10k_write_reg(hw, FM10K_TXDCTL(vf_q_idx), 0);
- /* determine correct default VLAN ID */
+ /* Determine correct default VLAN ID. The FM10K_VLAN_OVERRIDE bit is
+ * used here to indicate to the VF that it will not have privilege to
+ * write VLAN_TABLE. All policy is enforced on the PF but this allows
+ * the VF to correctly report errors to userspace requests.
+ */
if (vf_info->pf_vid)
- vf_vid = vf_info->pf_vid | FM10K_VLAN_CLEAR;
+ vf_vid = vf_info->pf_vid | FM10K_VLAN_OVERRIDE;
else
vf_vid = vf_info->sw_vid;
@@ -952,7 +960,7 @@ static s32 fm10k_iov_reset_resources_pf(struct fm10k_hw *hw,
return FM10K_ERR_PARAM;
/* clear event notification of VF FLR */
- fm10k_write_reg(hw, FM10K_PFVFLREC(vf_idx / 32), 1 << (vf_idx % 32));
+ fm10k_write_reg(hw, FM10K_PFVFLREC(vf_idx / 32), BIT(vf_idx % 32));
/* force timeout and then disconnect the mailbox */
vf_info->mbx.timeout = 0;
@@ -987,7 +995,7 @@ static s32 fm10k_iov_reset_resources_pf(struct fm10k_hw *hw,
txqctl = ((u32)vf_vid << FM10K_TXQCTL_VID_SHIFT) |
(vf_idx << FM10K_TXQCTL_TC_SHIFT) |
FM10K_TXQCTL_VF | vf_idx;
- rxqctl = FM10K_RXQCTL_VF | (vf_idx << FM10K_RXQCTL_VF_SHIFT);
+ rxqctl = (vf_idx << FM10K_RXQCTL_VF_SHIFT) | FM10K_RXQCTL_VF;
/* stop further DMA and reset queue ownership back to VF */
for (i = vf_q_idx; i < (queues_per_pool + vf_q_idx); i++) {
@@ -1140,19 +1148,6 @@ static void fm10k_iov_update_stats_pf(struct fm10k_hw *hw,
fm10k_update_hw_stats_q(hw, q, idx, qpp);
}
-static s32 fm10k_iov_report_timestamp_pf(struct fm10k_hw *hw,
- struct fm10k_vf_info *vf_info,
- u64 timestamp)
-{
- u32 msg[4];
-
- /* generate port state response to notify VF it is not ready */
- fm10k_tlv_msg_init(msg, FM10K_VF_MSG_ID_1588);
- fm10k_tlv_attr_put_u64(msg, FM10K_1588_MSG_TIMESTAMP, timestamp);
-
- return vf_info->mbx.ops.enqueue_tx(hw, &vf_info->mbx, msg);
-}
-
/**
* fm10k_iov_msg_msix_pf - Message handler for MSI-X request from VF
* @hw: Pointer to hardware structure
@@ -1384,7 +1379,7 @@ s32 fm10k_iov_msg_lport_state_pf(struct fm10k_hw *hw, u32 **results,
mode = fm10k_iov_supported_xcast_mode_pf(vf_info, mode);
/* if mode is not currently enabled, enable it */
- if (!(FM10K_VF_FLAG_ENABLED(vf_info) & (1 << mode)))
+ if (!(FM10K_VF_FLAG_ENABLED(vf_info) & BIT(mode)))
fm10k_update_xcast_mode_pf(hw, vf_info->glort, mode);
/* swap mode back to a bit flag */
@@ -1618,7 +1613,7 @@ static s32 fm10k_request_lport_map_pf(struct fm10k_hw *hw)
* @hw: pointer to hardware structure
* @switch_ready: pointer to boolean value that will record switch state
*
- * This funciton will check the DMA_CTRL2 register and mailbox in order
+ * This function will check the DMA_CTRL2 register and mailbox in order
* to determine if the switch is ready for the PF to begin requesting
* addresses and mapping traffic to the local interface.
**/
@@ -1647,6 +1642,8 @@ out:
/* This structure defines the attributes to be parsed below */
const struct fm10k_tlv_attr fm10k_lport_map_msg_attr[] = {
+ FM10K_TLV_ATTR_LE_STRUCT(FM10K_PF_ATTR_ID_ERR,
+ sizeof(struct fm10k_swapi_error)),
FM10K_TLV_ATTR_U32(FM10K_PF_ATTR_ID_LPORT_MAP),
FM10K_TLV_ATTR_LAST
};
@@ -1787,89 +1784,6 @@ s32 fm10k_msg_err_pf(struct fm10k_hw *hw, u32 **results,
return 0;
}
-const struct fm10k_tlv_attr fm10k_1588_timestamp_msg_attr[] = {
- FM10K_TLV_ATTR_LE_STRUCT(FM10K_PF_ATTR_ID_1588_TIMESTAMP,
- sizeof(struct fm10k_swapi_1588_timestamp)),
- FM10K_TLV_ATTR_LAST
-};
-
-/* currently there is no shared 1588 timestamp handler */
-
-/**
- * fm10k_adjust_systime_pf - Adjust systime frequency
- * @hw: pointer to hardware structure
- * @ppb: adjustment rate in parts per billion
- *
- * This function will adjust the SYSTIME_CFG register contained in BAR 4
- * if this function is supported for BAR 4 access. The adjustment amount
- * is based on the parts per billion value provided and adjusted to a
- * value based on parts per 2^48 clock cycles.
- *
- * If adjustment is not supported or the requested value is too large
- * we will return an error.
- **/
-static s32 fm10k_adjust_systime_pf(struct fm10k_hw *hw, s32 ppb)
-{
- u64 systime_adjust;
-
- /* if sw_addr is not set we don't have switch register access */
- if (!hw->sw_addr)
- return ppb ? FM10K_ERR_PARAM : 0;
-
- /* we must convert the value from parts per billion to parts per
- * 2^48 cycles. In addition I have opted to only use the 30 most
- * significant bits of the adjustment value as the 8 least
- * significant bits are located in another register and represent
- * a value significantly less than a part per billion, the result
- * of dropping the 8 least significant bits is that the adjustment
- * value is effectively multiplied by 2^8 when we write it.
- *
- * As a result of all this the math for this breaks down as follows:
- * ppb / 10^9 == adjust * 2^8 / 2^48
- * If we solve this for adjust, and simplify it comes out as:
- * ppb * 2^31 / 5^9 == adjust
- */
- systime_adjust = (ppb < 0) ? -ppb : ppb;
- systime_adjust <<= 31;
- do_div(systime_adjust, 1953125);
-
- /* verify the requested adjustment value is in range */
- if (systime_adjust > FM10K_SW_SYSTIME_ADJUST_MASK)
- return FM10K_ERR_PARAM;
-
- if (ppb > 0)
- systime_adjust |= FM10K_SW_SYSTIME_ADJUST_DIR_POSITIVE;
-
- fm10k_write_sw_reg(hw, FM10K_SW_SYSTIME_ADJUST, (u32)systime_adjust);
-
- return 0;
-}
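
The deleted comment above derives adjust = ppb * 2^31 / 5^9 for converting a parts-per-billion request into the hardware's parts-per-2^48 units once the low 8 bits are dropped (10^9 == 2^9 * 5^9 and 2^48 / 2^8 == 2^40). A standalone sketch that reproduces the arithmetic and its ceiling, ((2^30 - 1) * 5^9) / 2^31 ≈ 976562, the same figure used for ptp_caps->max_adj in the deleted fm10k_ptp.c below; the ppb value is illustrative:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	int64_t ppb = 1000;	/* illustrative positive adjustment request */

	/* adjust = ppb * 2^31 / 5^9, with 5^9 == 1953125 */
	uint64_t adjust = ((uint64_t)ppb << 31) / 1953125;

	/* largest ppb that still fits the 30-bit adjustment field */
	uint64_t max_ppb = (((UINT64_C(1) << 30) - 1) * 1953125) >> 31;

	printf("adjust=%llu max_ppb=%llu\n",
	       (unsigned long long)adjust, (unsigned long long)max_ppb);
	return 0;
}
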
-
-/**
- * fm10k_read_systime_pf - Reads value of systime registers
- * @hw: pointer to the hardware structure
- *
- * Function reads the content of 2 registers, combined to represent a 64 bit
- * value measured in nanoseconds. In order to guarantee the value is accurate
- * we check the 32 most significant bits both before and after reading the
- * 32 least significant bits to verify they didn't change as we were reading
- * the registers.
- **/
-static u64 fm10k_read_systime_pf(struct fm10k_hw *hw)
-{
- u32 systime_l, systime_h, systime_tmp;
-
- systime_h = fm10k_read_reg(hw, FM10K_SYSTIME + 1);
-
- do {
- systime_tmp = systime_h;
- systime_l = fm10k_read_reg(hw, FM10K_SYSTIME);
- systime_h = fm10k_read_reg(hw, FM10K_SYSTIME + 1);
- } while (systime_tmp != systime_h);
-
- return ((u64)systime_h << 32) | systime_l;
-}
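
The deleted read_systime helper (and its VF twin further down) guards against a torn 64-bit read by re-reading the high word until it is stable across the low-word read. A generic sketch of that pattern; read_reg() stands in for the real register accessor and is an assumption, not part of the driver:

#include <stdint.h>

extern uint32_t read_reg(unsigned int offset);	/* assumed MMIO accessor */

static uint64_t read_split_counter(unsigned int lo_off, unsigned int hi_off)
{
	uint32_t lo, hi, hi_prev;

	hi = read_reg(hi_off);
	do {
		hi_prev = hi;
		lo = read_reg(lo_off);
		hi = read_reg(hi_off);
	} while (hi != hi_prev);	/* low word wrapped mid-read: retry */

	return ((uint64_t)hi << 32) | lo;
}
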
-
static const struct fm10k_msg_data fm10k_msg_data_pf[] = {
FM10K_PF_MSG_ERR_HANDLER(XCAST_MODES, fm10k_msg_err_pf),
FM10K_PF_MSG_ERR_HANDLER(UPDATE_MAC_FWD_RULE, fm10k_msg_err_pf),
@@ -1899,8 +1813,6 @@ static const struct fm10k_mac_ops mac_ops_pf = {
.set_dma_mask = fm10k_set_dma_mask_pf,
.get_fault = fm10k_get_fault_pf,
.get_host_state = fm10k_get_host_state_pf,
- .adjust_systime = fm10k_adjust_systime_pf,
- .read_systime = fm10k_read_systime_pf,
};
static const struct fm10k_iov_ops iov_ops_pf = {
@@ -1912,7 +1824,6 @@ static const struct fm10k_iov_ops iov_ops_pf = {
.set_lport = fm10k_iov_set_lport_pf,
.reset_lport = fm10k_iov_reset_lport_pf,
.update_stats = fm10k_iov_update_stats_pf,
- .report_timestamp = fm10k_iov_report_timestamp_pf,
};
static s32 fm10k_get_invariants_pf(struct fm10k_hw *hw)
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_pf.h b/drivers/net/ethernet/intel/fm10k/fm10k_pf.h
index b2d96b45ca3c..3336d3c10760 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_pf.h
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_pf.h
@@ -1,5 +1,5 @@
-/* Intel Ethernet Switch Host Interface Driver
- * Copyright(c) 2013 - 2015 Intel Corporation.
+/* Intel(R) Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
@@ -42,8 +42,6 @@ enum fm10k_pf_tlv_msg_id_v1 {
FM10K_PF_MSG_ID_UPDATE_FLOW = 0x503,
FM10K_PF_MSG_ID_DELETE_FLOW = 0x504,
FM10K_PF_MSG_ID_SET_FLOW_STATE = 0x505,
- FM10K_PF_MSG_ID_GET_1588_INFO = 0x506,
- FM10K_PF_MSG_ID_1588_TIMESTAMP = 0x701,
};
enum fm10k_pf_tlv_attr_id_v1 {
@@ -61,7 +59,6 @@ enum fm10k_pf_tlv_attr_id_v1 {
FM10K_PF_ATTR_ID_DELETE_FLOW = 0x0B,
FM10K_PF_ATTR_ID_PORT = 0x0C,
FM10K_PF_ATTR_ID_UPDATE_PVID = 0x0D,
- FM10K_PF_ATTR_ID_1588_TIMESTAMP = 0x10,
};
#define FM10K_MSG_LPORT_MAP_GLORT_SHIFT 0
@@ -74,6 +71,8 @@ enum fm10k_pf_tlv_attr_id_v1 {
#define FM10K_MSG_UPDATE_PVID_PVID_SHIFT 16
#define FM10K_MSG_UPDATE_PVID_PVID_SIZE 16
+#define FM10K_MSG_ERR_PEP_NOT_SCHEDULED 280
+
/* The following data structures are overlaid directly onto TLV mailbox
* messages, and must not break 4 byte alignment. Ensure the structures line
* up correctly as per their TLV definition.
@@ -100,13 +99,6 @@ struct fm10k_swapi_error {
struct fm10k_global_table_data ffu;
} __aligned(4) __packed;
-struct fm10k_swapi_1588_timestamp {
- __le64 egress;
- __le64 ingress;
- __le16 dglort;
- __le16 sglort;
-} __aligned(4) __packed;
-
s32 fm10k_msg_lport_map_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *);
extern const struct fm10k_tlv_attr fm10k_lport_map_msg_attr[];
#define FM10K_PF_MSG_LPORT_MAP_HANDLER(func) \
@@ -122,11 +114,6 @@ extern const struct fm10k_tlv_attr fm10k_err_msg_attr[];
#define FM10K_PF_MSG_ERR_HANDLER(msg, func) \
FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_##msg, fm10k_err_msg_attr, func)
-extern const struct fm10k_tlv_attr fm10k_1588_timestamp_msg_attr[];
-#define FM10K_PF_MSG_1588_TIMESTAMP_HANDLER(func) \
- FM10K_MSG_HANDLER(FM10K_PF_MSG_ID_1588_TIMESTAMP, \
- fm10k_1588_timestamp_msg_attr, func)
-
s32 fm10k_iov_msg_msix_pf(struct fm10k_hw *, u32 **, struct fm10k_mbx_info *);
s32 fm10k_iov_msg_mac_vlan_pf(struct fm10k_hw *, u32 **,
struct fm10k_mbx_info *);
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_ptp.c b/drivers/net/ethernet/intel/fm10k/fm10k_ptp.c
deleted file mode 100644
index b4945e8abe03..000000000000
--- a/drivers/net/ethernet/intel/fm10k/fm10k_ptp.c
+++ /dev/null
@@ -1,462 +0,0 @@
-/* Intel Ethernet Switch Host Interface Driver
- * Copyright(c) 2013 - 2015 Intel Corporation.
- *
- * This program is free software; you can redistribute it and/or modify it
- * under the terms and conditions of the GNU General Public License,
- * version 2, as published by the Free Software Foundation.
- *
- * This program is distributed in the hope it will be useful, but WITHOUT
- * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
- * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
- * more details.
- *
- * The full GNU General Public License is included in this distribution in
- * the file called "COPYING".
- *
- * Contact Information:
- * e1000-devel Mailing List <e1000-devel@lists.sourceforge.net>
- * Intel Corporation, 5200 N.E. Elam Young Parkway, Hillsboro, OR 97124-6497
- */
-
-#include <linux/ptp_classify.h>
-#include <linux/ptp_clock_kernel.h>
-
-#include "fm10k.h"
-
-#define FM10K_TS_TX_TIMEOUT (HZ * 15)
-
-void fm10k_systime_to_hwtstamp(struct fm10k_intfc *interface,
- struct skb_shared_hwtstamps *hwtstamp,
- u64 systime)
-{
- unsigned long flags;
-
- read_lock_irqsave(&interface->systime_lock, flags);
- systime += interface->ptp_adjust;
- read_unlock_irqrestore(&interface->systime_lock, flags);
-
- hwtstamp->hwtstamp = ns_to_ktime(systime);
-}
-
-static struct sk_buff *fm10k_ts_tx_skb(struct fm10k_intfc *interface,
- __le16 dglort)
-{
- struct sk_buff_head *list = &interface->ts_tx_skb_queue;
- struct sk_buff *skb;
-
- skb_queue_walk(list, skb) {
- if (FM10K_CB(skb)->fi.w.dglort == dglort)
- return skb;
- }
-
- return NULL;
-}
-
-void fm10k_ts_tx_enqueue(struct fm10k_intfc *interface, struct sk_buff *skb)
-{
- struct sk_buff_head *list = &interface->ts_tx_skb_queue;
- struct sk_buff *clone;
- unsigned long flags;
-
- /* create clone for us to return on the Tx path */
- clone = skb_clone_sk(skb);
- if (!clone)
- return;
-
- FM10K_CB(clone)->ts_tx_timeout = jiffies + FM10K_TS_TX_TIMEOUT;
- spin_lock_irqsave(&list->lock, flags);
-
- /* attempt to locate any buffers with the same dglort,
- * if none are present then insert skb in tail of list
- */
- skb = fm10k_ts_tx_skb(interface, FM10K_CB(clone)->fi.w.dglort);
- if (!skb) {
- skb_shinfo(clone)->tx_flags |= SKBTX_IN_PROGRESS;
- __skb_queue_tail(list, clone);
- }
-
- spin_unlock_irqrestore(&list->lock, flags);
-
- /* if the list already has one then we just free the clone */
- if (skb)
- dev_kfree_skb(clone);
-}
-
-void fm10k_ts_tx_hwtstamp(struct fm10k_intfc *interface, __le16 dglort,
- u64 systime)
-{
- struct skb_shared_hwtstamps shhwtstamps;
- struct sk_buff_head *list = &interface->ts_tx_skb_queue;
- struct sk_buff *skb;
- unsigned long flags;
-
- spin_lock_irqsave(&list->lock, flags);
-
- /* attempt to locate and pull the sk_buff out of the list */
- skb = fm10k_ts_tx_skb(interface, dglort);
- if (skb)
- __skb_unlink(skb, list);
-
- spin_unlock_irqrestore(&list->lock, flags);
-
- /* if not found do nothing */
- if (!skb)
- return;
-
- /* timestamp the sk_buff and free our copy */
- fm10k_systime_to_hwtstamp(interface, &shhwtstamps, systime);
- skb_tstamp_tx(skb, &shhwtstamps);
- dev_kfree_skb_any(skb);
-}
-
-void fm10k_ts_tx_subtask(struct fm10k_intfc *interface)
-{
- struct sk_buff_head *list = &interface->ts_tx_skb_queue;
- struct sk_buff *skb, *tmp;
- unsigned long flags;
-
- /* If we're down or resetting, just bail */
- if (test_bit(__FM10K_DOWN, &interface->state) ||
- test_bit(__FM10K_RESETTING, &interface->state))
- return;
-
- spin_lock_irqsave(&list->lock, flags);
-
- /* walk though the list and flush any expired timestamp packets */
- skb_queue_walk_safe(list, skb, tmp) {
- if (!time_is_after_jiffies(FM10K_CB(skb)->ts_tx_timeout))
- continue;
- __skb_unlink(skb, list);
- kfree_skb(skb);
- interface->tx_hwtstamp_timeouts++;
- }
-
- spin_unlock_irqrestore(&list->lock, flags);
-}
-
-static u64 fm10k_systime_read(struct fm10k_intfc *interface)
-{
- struct fm10k_hw *hw = &interface->hw;
-
- return hw->mac.ops.read_systime(hw);
-}
-
-void fm10k_ts_reset(struct fm10k_intfc *interface)
-{
- s64 ns = ktime_to_ns(ktime_get_real());
- unsigned long flags;
-
- /* reinitialize the clock */
- write_lock_irqsave(&interface->systime_lock, flags);
- interface->ptp_adjust = fm10k_systime_read(interface) - ns;
- write_unlock_irqrestore(&interface->systime_lock, flags);
-}
-
-void fm10k_ts_init(struct fm10k_intfc *interface)
-{
- /* Initialize lock protecting systime access */
- rwlock_init(&interface->systime_lock);
-
- /* Initialize skb queue for pending timestamp requests */
- skb_queue_head_init(&interface->ts_tx_skb_queue);
-
- /* reset the clock to current kernel time */
- fm10k_ts_reset(interface);
-}
-
-/**
- * fm10k_get_ts_config - get current hardware timestamping configuration
- * @netdev: network interface device structure
- * @ifreq: ioctl data
- *
- * This function returns the current timestamping settings. Rather than
- * attempt to deconstruct registers to fill in the values, simply keep a copy
- * of the old settings around, and return a copy when requested.
- */
-int fm10k_get_ts_config(struct net_device *netdev, struct ifreq *ifr)
-{
- struct fm10k_intfc *interface = netdev_priv(netdev);
- struct hwtstamp_config *config = &interface->ts_config;
-
- return copy_to_user(ifr->ifr_data, config, sizeof(*config)) ?
- -EFAULT : 0;
-}
-
-/**
- * fm10k_set_ts_config - control hardware time stamping
- * @netdev: network interface device structure
- * @ifreq: ioctl data
- *
- * Outgoing time stamping can be enabled and disabled. Play nice and
- * disable it when requested, although it shouldn't cause any overhead
- * when no packet needs it. At most one packet in the queue may be
- * marked for time stamping, otherwise it would be impossible to tell
- * for sure to which packet the hardware time stamp belongs.
- *
- * Incoming time stamping has to be configured via the hardware
- * filters. Not all combinations are supported, in particular event
- * type has to be specified. Matching the kind of event packet is
- * not supported, with the exception of "all V2 events regardless of
- * level 2 or 4".
- *
- * Since hardware always timestamps Path delay packets when timestamping V2
- * packets, regardless of the type specified in the register, only use V2
- * Event mode. This more accurately tells the user what the hardware is going
- * to do anyways.
- */
-int fm10k_set_ts_config(struct net_device *netdev, struct ifreq *ifr)
-{
- struct fm10k_intfc *interface = netdev_priv(netdev);
- struct hwtstamp_config ts_config;
-
- if (copy_from_user(&ts_config, ifr->ifr_data, sizeof(ts_config)))
- return -EFAULT;
-
- /* reserved for future extensions */
- if (ts_config.flags)
- return -EINVAL;
-
- switch (ts_config.tx_type) {
- case HWTSTAMP_TX_OFF:
- break;
- case HWTSTAMP_TX_ON:
- /* we likely need some check here to see if this is supported */
- break;
- default:
- return -ERANGE;
- }
-
- switch (ts_config.rx_filter) {
- case HWTSTAMP_FILTER_NONE:
- interface->flags &= ~FM10K_FLAG_RX_TS_ENABLED;
- break;
- case HWTSTAMP_FILTER_PTP_V1_L4_EVENT:
- case HWTSTAMP_FILTER_PTP_V1_L4_SYNC:
- case HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ:
- case HWTSTAMP_FILTER_PTP_V2_L4_EVENT:
- case HWTSTAMP_FILTER_PTP_V2_L4_SYNC:
- case HWTSTAMP_FILTER_PTP_V2_L4_DELAY_REQ:
- case HWTSTAMP_FILTER_PTP_V2_L2_EVENT:
- case HWTSTAMP_FILTER_PTP_V2_L2_SYNC:
- case HWTSTAMP_FILTER_PTP_V2_L2_DELAY_REQ:
- case HWTSTAMP_FILTER_PTP_V2_EVENT:
- case HWTSTAMP_FILTER_PTP_V2_SYNC:
- case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
- case HWTSTAMP_FILTER_ALL:
- interface->flags |= FM10K_FLAG_RX_TS_ENABLED;
- ts_config.rx_filter = HWTSTAMP_FILTER_ALL;
- break;
- default:
- return -ERANGE;
- }
-
- /* save these settings for future reference */
- interface->ts_config = ts_config;
-
- return copy_to_user(ifr->ifr_data, &ts_config, sizeof(ts_config)) ?
- -EFAULT : 0;
-}
-
-static int fm10k_ptp_adjfreq(struct ptp_clock_info *ptp, s32 ppb)
-{
- struct fm10k_intfc *interface;
- struct fm10k_hw *hw;
- int err;
-
- interface = container_of(ptp, struct fm10k_intfc, ptp_caps);
- hw = &interface->hw;
-
- err = hw->mac.ops.adjust_systime(hw, ppb);
-
- /* the only error we should see is if the value is out of range */
- return (err == FM10K_ERR_PARAM) ? -ERANGE : err;
-}
-
-static int fm10k_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
-{
- struct fm10k_intfc *interface;
- unsigned long flags;
-
- interface = container_of(ptp, struct fm10k_intfc, ptp_caps);
-
- write_lock_irqsave(&interface->systime_lock, flags);
- interface->ptp_adjust += delta;
- write_unlock_irqrestore(&interface->systime_lock, flags);
-
- return 0;
-}
-
-static int fm10k_ptp_gettime(struct ptp_clock_info *ptp, struct timespec64 *ts)
-{
- struct fm10k_intfc *interface;
- unsigned long flags;
- u64 now;
-
- interface = container_of(ptp, struct fm10k_intfc, ptp_caps);
-
- read_lock_irqsave(&interface->systime_lock, flags);
- now = fm10k_systime_read(interface) + interface->ptp_adjust;
- read_unlock_irqrestore(&interface->systime_lock, flags);
-
- *ts = ns_to_timespec64(now);
-
- return 0;
-}
-
-static int fm10k_ptp_settime(struct ptp_clock_info *ptp,
- const struct timespec64 *ts)
-{
- struct fm10k_intfc *interface;
- unsigned long flags;
- u64 ns = timespec64_to_ns(ts);
-
- interface = container_of(ptp, struct fm10k_intfc, ptp_caps);
-
- write_lock_irqsave(&interface->systime_lock, flags);
- interface->ptp_adjust = fm10k_systime_read(interface) - ns;
- write_unlock_irqrestore(&interface->systime_lock, flags);
-
- return 0;
-}
-
-static int fm10k_ptp_enable(struct ptp_clock_info *ptp,
- struct ptp_clock_request *rq,
- int __always_unused on)
-{
- struct ptp_clock_time *t = &rq->perout.period;
- struct fm10k_intfc *interface;
- struct fm10k_hw *hw;
- u64 period;
- u32 step;
-
- /* we can only support periodic output */
- if (rq->type != PTP_CLK_REQ_PEROUT)
- return -EINVAL;
-
- /* verify the requested channel is there */
- if (rq->perout.index >= ptp->n_per_out)
- return -EINVAL;
-
- /* we cannot enforce start time as there is no
- * mechanism for that in the hardware, we can only control
- * the period.
- */
-
- /* we cannot support periods greater than 4 seconds due to reg limit */
- if (t->sec > 4 || t->sec < 0)
- return -ERANGE;
-
- interface = container_of(ptp, struct fm10k_intfc, ptp_caps);
- hw = &interface->hw;
-
- /* we simply cannot support the operation if we don't have BAR4 */
- if (!hw->sw_addr)
- return -ENOTSUPP;
-
- /* convert to unsigned 64b ns, verify we can put it in a 32b register */
- period = t->sec * 1000000000LL + t->nsec;
-
- /* determine the minimum size for period */
- step = 2 * (fm10k_read_reg(hw, FM10K_SYSTIME_CFG) &
- FM10K_SYSTIME_CFG_STEP_MASK);
-
- /* verify the value is in range supported by hardware */
- if ((period && (period < step)) || (period > U32_MAX))
- return -ERANGE;
-
- /* notify hardware of request to begin sending pulses */
- fm10k_write_sw_reg(hw, FM10K_SW_SYSTIME_PULSE(rq->perout.index),
- (u32)period);
-
- return 0;
-}
-
-static struct ptp_pin_desc fm10k_ptp_pd[2] = {
- {
- .name = "IEEE1588_PULSE0",
- .index = 0,
- .func = PTP_PF_PEROUT,
- .chan = 0
- },
- {
- .name = "IEEE1588_PULSE1",
- .index = 1,
- .func = PTP_PF_PEROUT,
- .chan = 1
- }
-};
-
-static int fm10k_ptp_verify(struct ptp_clock_info *ptp, unsigned int pin,
- enum ptp_pin_function func, unsigned int chan)
-{
- /* verify the requested pin is there */
- if (pin >= ptp->n_pins || !ptp->pin_config)
- return -EINVAL;
-
- /* enforce locked channels, no changing them */
- if (chan != ptp->pin_config[pin].chan)
- return -EINVAL;
-
- /* we want to keep the functions locked as well */
- if (func != ptp->pin_config[pin].func)
- return -EINVAL;
-
- return 0;
-}
-
-void fm10k_ptp_register(struct fm10k_intfc *interface)
-{
- struct ptp_clock_info *ptp_caps = &interface->ptp_caps;
- struct device *dev = &interface->pdev->dev;
- struct ptp_clock *ptp_clock;
-
- snprintf(ptp_caps->name, sizeof(ptp_caps->name),
- "%s", interface->netdev->name);
- ptp_caps->owner = THIS_MODULE;
- /* This math is simply the inverse of the math in
- * fm10k_adjust_systime_pf applied to an adjustment value
- * of 2^30 - 1 which is the maximum value of the register:
- * max_ppb == ((2^30 - 1) * 5^9) / 2^31
- */
- ptp_caps->max_adj = 976562;
- ptp_caps->adjfreq = fm10k_ptp_adjfreq;
- ptp_caps->adjtime = fm10k_ptp_adjtime;
- ptp_caps->gettime64 = fm10k_ptp_gettime;
- ptp_caps->settime64 = fm10k_ptp_settime;
-
- /* provide pins if BAR4 is accessible */
- if (interface->sw_addr) {
- /* enable periodic outputs */
- ptp_caps->n_per_out = 2;
- ptp_caps->enable = fm10k_ptp_enable;
-
- /* enable clock pins */
- ptp_caps->verify = fm10k_ptp_verify;
- ptp_caps->n_pins = 2;
- ptp_caps->pin_config = fm10k_ptp_pd;
- }
-
- ptp_clock = ptp_clock_register(ptp_caps, dev);
- if (IS_ERR(ptp_clock)) {
- ptp_clock = NULL;
- dev_err(dev, "ptp_clock_register failed\n");
- } else {
- dev_info(dev, "registered PHC device %s\n", ptp_caps->name);
- }
-
- interface->ptp_clock = ptp_clock;
-}
-
-void fm10k_ptp_unregister(struct fm10k_intfc *interface)
-{
- struct ptp_clock *ptp_clock = interface->ptp_clock;
- struct device *dev = &interface->pdev->dev;
-
- if (!ptp_clock)
- return;
-
- interface->ptp_clock = NULL;
-
- ptp_clock_unregister(ptp_clock);
- dev_info(dev, "removed PHC %s\n", interface->ptp_caps.name);
-}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_tlv.c b/drivers/net/ethernet/intel/fm10k/fm10k_tlv.c
index ab01bb30752f..f8e87bf086b9 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_tlv.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_tlv.c
@@ -1,5 +1,5 @@
-/* Intel Ethernet Switch Host Interface Driver
- * Copyright(c) 2013 - 2015 Intel Corporation.
+/* Intel(R) Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
@@ -222,7 +222,7 @@ s32 fm10k_tlv_attr_put_value(u32 *msg, u16 attr_id, s64 value, u32 len)
attr = &msg[FM10K_TLV_DWORD_LEN(*msg)];
if (len < 4) {
- attr[1] = (u32)value & ((0x1ul << (8 * len)) - 1);
+ attr[1] = (u32)value & (BIT(8 * len) - 1);
} else {
attr[1] = (u32)value;
if (len > 4)
@@ -481,7 +481,8 @@ static s32 fm10k_tlv_attr_validate(u32 *attr,
* up into an array of pointers stored in results. The function will
* return FM10K_ERR_PARAM on any input or message error,
* FM10K_NOT_IMPLEMENTED for any attribute that is outside of the array
- * and 0 on success.
+ * and 0 on success. Any attributes not found in tlv_attr will be silently
+ * ignored.
**/
static s32 fm10k_tlv_attr_parse(u32 *attr, u32 **results,
const struct fm10k_tlv_attr *tlv_attr)
@@ -518,14 +519,15 @@ static s32 fm10k_tlv_attr_parse(u32 *attr, u32 **results,
while (offset < len) {
attr_id = *attr & FM10K_TLV_ID_MASK;
- if (attr_id < FM10K_TLV_RESULTS_MAX)
- err = fm10k_tlv_attr_validate(attr, tlv_attr);
- else
- err = FM10K_NOT_IMPLEMENTED;
+ if (attr_id >= FM10K_TLV_RESULTS_MAX)
+ return FM10K_NOT_IMPLEMENTED;
- if (err < 0)
+ err = fm10k_tlv_attr_validate(attr, tlv_attr);
+ if (err == FM10K_NOT_IMPLEMENTED)
+ ; /* silently ignore non-implemented attributes */
+ else if (err)
return err;
- if (!err)
+ else
results[attr_id] = attr;
/* update offset */
@@ -652,29 +654,29 @@ const struct fm10k_tlv_attr fm10k_tlv_msg_test_attr[] = {
**/
static void fm10k_tlv_msg_test_generate_data(u32 *msg, u32 attr_flags)
{
- if (attr_flags & (1 << FM10K_TEST_MSG_STRING))
+ if (attr_flags & BIT(FM10K_TEST_MSG_STRING))
fm10k_tlv_attr_put_null_string(msg, FM10K_TEST_MSG_STRING,
test_str);
- if (attr_flags & (1 << FM10K_TEST_MSG_MAC_ADDR))
+ if (attr_flags & BIT(FM10K_TEST_MSG_MAC_ADDR))
fm10k_tlv_attr_put_mac_vlan(msg, FM10K_TEST_MSG_MAC_ADDR,
test_mac, test_vlan);
- if (attr_flags & (1 << FM10K_TEST_MSG_U8))
+ if (attr_flags & BIT(FM10K_TEST_MSG_U8))
fm10k_tlv_attr_put_u8(msg, FM10K_TEST_MSG_U8, test_u8);
- if (attr_flags & (1 << FM10K_TEST_MSG_U16))
+ if (attr_flags & BIT(FM10K_TEST_MSG_U16))
fm10k_tlv_attr_put_u16(msg, FM10K_TEST_MSG_U16, test_u16);
- if (attr_flags & (1 << FM10K_TEST_MSG_U32))
+ if (attr_flags & BIT(FM10K_TEST_MSG_U32))
fm10k_tlv_attr_put_u32(msg, FM10K_TEST_MSG_U32, test_u32);
- if (attr_flags & (1 << FM10K_TEST_MSG_U64))
+ if (attr_flags & BIT(FM10K_TEST_MSG_U64))
fm10k_tlv_attr_put_u64(msg, FM10K_TEST_MSG_U64, test_u64);
- if (attr_flags & (1 << FM10K_TEST_MSG_S8))
+ if (attr_flags & BIT(FM10K_TEST_MSG_S8))
fm10k_tlv_attr_put_s8(msg, FM10K_TEST_MSG_S8, test_s8);
- if (attr_flags & (1 << FM10K_TEST_MSG_S16))
+ if (attr_flags & BIT(FM10K_TEST_MSG_S16))
fm10k_tlv_attr_put_s16(msg, FM10K_TEST_MSG_S16, test_s16);
- if (attr_flags & (1 << FM10K_TEST_MSG_S32))
+ if (attr_flags & BIT(FM10K_TEST_MSG_S32))
fm10k_tlv_attr_put_s32(msg, FM10K_TEST_MSG_S32, test_s32);
- if (attr_flags & (1 << FM10K_TEST_MSG_S64))
+ if (attr_flags & BIT(FM10K_TEST_MSG_S64))
fm10k_tlv_attr_put_s64(msg, FM10K_TEST_MSG_S64, test_s64);
- if (attr_flags & (1 << FM10K_TEST_MSG_LE_STRUCT))
+ if (attr_flags & BIT(FM10K_TEST_MSG_LE_STRUCT))
fm10k_tlv_attr_put_le_struct(msg, FM10K_TEST_MSG_LE_STRUCT,
test_le, 8);
}
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_tlv.h b/drivers/net/ethernet/intel/fm10k/fm10k_tlv.h
index e1845e0a17d8..a1f1027fe184 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_tlv.h
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_tlv.h
@@ -1,5 +1,5 @@
-/* Intel Ethernet Switch Host Interface Driver
- * Copyright(c) 2013 - 2015 Intel Corporation.
+/* Intel(R) Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_type.h b/drivers/net/ethernet/intel/fm10k/fm10k_type.h
index 854ebb1906bf..b8bc06183720 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_type.h
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_type.h
@@ -1,5 +1,5 @@
-/* Intel Ethernet Switch Host Interface Driver
- * Copyright(c) 2013 - 2015 Intel Corporation.
+/* Intel(R) Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
@@ -225,11 +225,6 @@ struct fm10k_hw;
#define FM10K_STATS_LOOPBACK_DROP 0x3806
#define FM10K_STATS_NODESC_DROP 0x3807
-/* Timesync registers */
-#define FM10K_SYSTIME 0x3814
-#define FM10K_SYSTIME_CFG 0x3818
-#define FM10K_SYSTIME_CFG_STEP_MASK 0x0000000F
-
/* PCIe state registers */
#define FM10K_PHYADDR 0x381C
@@ -355,6 +350,7 @@ struct fm10k_hw;
#define FM10K_VLAN_TABLE_VSI_MAX 64
#define FM10K_VLAN_LENGTH_SHIFT 16
#define FM10K_VLAN_CLEAR BIT(15)
+#define FM10K_VLAN_OVERRIDE FM10K_VLAN_CLEAR
#define FM10K_VLAN_ALL \
((FM10K_VLAN_TABLE_VID_MAX - 1) << FM10K_VLAN_LENGTH_SHIFT)
@@ -381,12 +377,6 @@ struct fm10k_hw;
#define FM10K_VFSYSTIME 0x00040
#define FM10K_VFITR(_n) ((_n) + 0x00060)
-/* Registers contained in BAR 4 for Switch management */
-#define FM10K_SW_SYSTIME_ADJUST 0x0224D
-#define FM10K_SW_SYSTIME_ADJUST_MASK 0x3FFFFFFF
-#define FM10K_SW_SYSTIME_ADJUST_DIR_POSITIVE 0x80000000
-#define FM10K_SW_SYSTIME_PULSE(_n) ((_n) + 0x02252)
-
enum fm10k_int_source {
fm10k_int_mailbox = 0,
fm10k_int_pcie_fault = 1,
@@ -550,8 +540,6 @@ struct fm10k_mac_ops {
struct fm10k_dglort_cfg *);
void (*set_dma_mask)(struct fm10k_hw *, u64);
s32 (*get_fault)(struct fm10k_hw *, int, struct fm10k_fault *);
- s32 (*adjust_systime)(struct fm10k_hw *, s32 ppb);
- u64 (*read_systime)(struct fm10k_hw *);
};
enum fm10k_mac_type {
@@ -617,10 +605,10 @@ struct fm10k_vf_info {
*/
};
-#define FM10K_VF_FLAG_ALLMULTI_CAPABLE ((u8)1 << FM10K_XCAST_MODE_ALLMULTI)
-#define FM10K_VF_FLAG_MULTI_CAPABLE ((u8)1 << FM10K_XCAST_MODE_MULTI)
-#define FM10K_VF_FLAG_PROMISC_CAPABLE ((u8)1 << FM10K_XCAST_MODE_PROMISC)
-#define FM10K_VF_FLAG_NONE_CAPABLE ((u8)1 << FM10K_XCAST_MODE_NONE)
+#define FM10K_VF_FLAG_ALLMULTI_CAPABLE (u8)(BIT(FM10K_XCAST_MODE_ALLMULTI))
+#define FM10K_VF_FLAG_MULTI_CAPABLE (u8)(BIT(FM10K_XCAST_MODE_MULTI))
+#define FM10K_VF_FLAG_PROMISC_CAPABLE (u8)(BIT(FM10K_XCAST_MODE_PROMISC))
+#define FM10K_VF_FLAG_NONE_CAPABLE (u8)(BIT(FM10K_XCAST_MODE_NONE))
#define FM10K_VF_FLAG_CAPABLE(vf_info) ((vf_info)->vf_flags & (u8)0xF)
#define FM10K_VF_FLAG_ENABLED(vf_info) ((vf_info)->vf_flags >> 4)
#define FM10K_VF_FLAG_SET_MODE(mode) ((u8)0x10 << (mode))
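
The macros above pack per-VF xcast state into one byte: the low nibble records which modes the VF is capable of, the high nibble which mode is currently enabled. A small sketch of that layout, with an illustrative mode index rather than a real FM10K_XCAST_MODE_* value:

#include <stdint.h>
#include <stdio.h>

#define VF_FLAG_CAPABLE(flags)	((flags) & 0x0F)	/* low nibble */
#define VF_FLAG_ENABLED(flags)	((flags) >> 4)		/* high nibble */
#define VF_FLAG_SET_MODE(mode)	((uint8_t)(0x10 << (mode)))

int main(void)
{
	unsigned int mode = 2;			/* illustrative mode index, 0..3 */
	uint8_t flags = (uint8_t)(1u << mode);	/* VF is capable of this mode */

	flags |= VF_FLAG_SET_MODE(mode);	/* mode currently enabled */

	printf("capable=0x%x enabled=0x%x\n",
	       VF_FLAG_CAPABLE(flags), VF_FLAG_ENABLED(flags));
	return 0;
}
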
@@ -643,7 +631,6 @@ struct fm10k_iov_ops {
s32 (*set_lport)(struct fm10k_hw *, struct fm10k_vf_info *, u16, u8);
void (*reset_lport)(struct fm10k_hw *, struct fm10k_vf_info *);
void (*update_stats)(struct fm10k_hw *, struct fm10k_hw_stats_q *, u16);
- s32 (*report_timestamp)(struct fm10k_hw *, struct fm10k_vf_info *, u64);
};
struct fm10k_iov_info {
@@ -667,7 +654,6 @@ struct fm10k_info {
struct fm10k_hw {
u32 __iomem *hw_addr;
- u32 __iomem *sw_addr;
void *back;
struct fm10k_mac_info mac;
struct fm10k_bus_info bus;
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_vf.c b/drivers/net/ethernet/intel/fm10k/fm10k_vf.c
index 91f8d7311f3b..3b06685ea63b 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_vf.c
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_vf.c
@@ -1,5 +1,5 @@
-/* Intel Ethernet Switch Host Interface Driver
- * Copyright(c) 2013 - 2015 Intel Corporation.
+/* Intel(R) Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
@@ -188,7 +188,7 @@ static s32 fm10k_update_vlan_vf(struct fm10k_hw *hw, u32 vid, u8 vsi, bool set)
if (vsi)
return FM10K_ERR_PARAM;
- /* verify upper 4 bits of vid and length are 0 */
+ /* clever trick to verify reserved bits in both vid and length */
if ((vid << 16 | vid) >> 28)
return FM10K_ERR_PARAM;
@@ -228,7 +228,7 @@ s32 fm10k_msg_mac_vlan_vf(struct fm10k_hw *hw, u32 **results,
ether_addr_copy(hw->mac.perm_addr, perm_addr);
hw->mac.default_vid = vid & (FM10K_VLAN_TABLE_VID_MAX - 1);
- hw->mac.vlan_override = !!(vid & FM10K_VLAN_CLEAR);
+ hw->mac.vlan_override = !!(vid & FM10K_VLAN_OVERRIDE);
return 0;
}
@@ -451,13 +451,6 @@ static s32 fm10k_update_xcast_mode_vf(struct fm10k_hw *hw, u16 glort, u8 mode)
return mbx->ops.enqueue_tx(hw, mbx, msg);
}
-const struct fm10k_tlv_attr fm10k_1588_msg_attr[] = {
- FM10K_TLV_ATTR_U64(FM10K_1588_MSG_TIMESTAMP),
- FM10K_TLV_ATTR_LAST
-};
-
-/* currently there is no shared 1588 timestamp handler */
-
/**
* fm10k_update_hw_stats_vf - Updates hardware related statistics of VF
* @hw: pointer to hardware structure
@@ -509,52 +502,6 @@ static s32 fm10k_configure_dglort_map_vf(struct fm10k_hw *hw,
return 0;
}
-/**
- * fm10k_adjust_systime_vf - Adjust systime frequency
- * @hw: pointer to hardware structure
- * @ppb: adjustment rate in parts per billion
- *
- * This function takes an adjustment rate in parts per billion and will
- * verify that this value is 0 as the VF cannot support adjusting the
- * systime clock.
- *
- * If the ppb value is non-zero the return is ERR_PARAM else success
- **/
-static s32 fm10k_adjust_systime_vf(struct fm10k_hw *hw, s32 ppb)
-{
- /* The VF cannot adjust the clock frequency, however it should
- * already have a syntonic clock with whichever host interface is
- * running as the master for the host interface clock domain so
- * there should be no frequency adjustment necessary.
- */
- return ppb ? FM10K_ERR_PARAM : 0;
-}
-
-/**
- * fm10k_read_systime_vf - Reads value of systime registers
- * @hw: pointer to the hardware structure
- *
- * Function reads the content of 2 registers, combined to represent a 64 bit
- * value measured in nanoseconds. In order to guarantee the value is accurate
- * we check the 32 most significant bits both before and after reading the
- * 32 least significant bits to verify they didn't change as we were reading
- * the registers.
- **/
-static u64 fm10k_read_systime_vf(struct fm10k_hw *hw)
-{
- u32 systime_l, systime_h, systime_tmp;
-
- systime_h = fm10k_read_reg(hw, FM10K_VFSYSTIME + 1);
-
- do {
- systime_tmp = systime_h;
- systime_l = fm10k_read_reg(hw, FM10K_VFSYSTIME);
- systime_h = fm10k_read_reg(hw, FM10K_VFSYSTIME + 1);
- } while (systime_tmp != systime_h);
-
- return ((u64)systime_h << 32) | systime_l;
-}
-
static const struct fm10k_msg_data fm10k_msg_data_vf[] = {
FM10K_TLV_MSG_TEST_HANDLER(fm10k_tlv_msg_test),
FM10K_VF_MSG_MAC_VLAN_HANDLER(fm10k_msg_mac_vlan_vf),
@@ -579,8 +526,6 @@ static const struct fm10k_mac_ops mac_ops_vf = {
.rebind_hw_stats = fm10k_rebind_hw_stats_vf,
.configure_dglort_map = fm10k_configure_dglort_map_vf,
.get_host_state = fm10k_get_host_state_generic,
- .adjust_systime = fm10k_adjust_systime_vf,
- .read_systime = fm10k_read_systime_vf,
};
static s32 fm10k_get_invariants_vf(struct fm10k_hw *hw)
diff --git a/drivers/net/ethernet/intel/fm10k/fm10k_vf.h b/drivers/net/ethernet/intel/fm10k/fm10k_vf.h
index c4439f1313a0..2662f33c0c71 100644
--- a/drivers/net/ethernet/intel/fm10k/fm10k_vf.h
+++ b/drivers/net/ethernet/intel/fm10k/fm10k_vf.h
@@ -1,5 +1,5 @@
-/* Intel Ethernet Switch Host Interface Driver
- * Copyright(c) 2013 - 2014 Intel Corporation.
+/* Intel(R) Ethernet Switch Host Interface Driver
+ * Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
@@ -29,7 +29,6 @@ enum fm10k_vf_tlv_msg_id {
FM10K_VF_MSG_ID_MSIX,
FM10K_VF_MSG_ID_MAC_VLAN,
FM10K_VF_MSG_ID_LPORT_STATE,
- FM10K_VF_MSG_ID_1588,
FM10K_VF_MSG_ID_MAX,
};
@@ -49,11 +48,6 @@ enum fm10k_tlv_lport_state_attr_id {
FM10K_LPORT_STATE_MSG_MAX
};
-enum fm10k_tlv_1588_attr_id {
- FM10K_1588_MSG_TIMESTAMP,
- FM10K_1588_MSG_MAX
-};
-
#define FM10K_VF_MSG_MSIX_HANDLER(func) \
FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_MSIX, NULL, func)
@@ -70,9 +64,5 @@ extern const struct fm10k_tlv_attr fm10k_lport_state_msg_attr[];
FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_LPORT_STATE, \
fm10k_lport_state_msg_attr, func)
-extern const struct fm10k_tlv_attr fm10k_1588_msg_attr[];
-#define FM10K_VF_MSG_1588_HANDLER(func) \
- FM10K_MSG_HANDLER(FM10K_VF_MSG_ID_1588, fm10k_1588_msg_attr, func)
-
extern const struct fm10k_info fm10k_vf_info;
#endif /* _FM10K_VF_H */
diff --git a/drivers/net/ethernet/intel/i40e/i40e.h b/drivers/net/ethernet/intel/i40e/i40e.h
index 1ce6e9c0427d..9c44739da5e2 100644
--- a/drivers/net/ethernet/intel/i40e/i40e.h
+++ b/drivers/net/ethernet/intel/i40e/i40e.h
@@ -97,12 +97,12 @@
#define I40E_INT_NAME_STR_LEN (IFNAMSIZ + 16)
/* Ethtool Private Flags */
-#define I40E_PRIV_FLAGS_NPAR_FLAG BIT(0)
-#define I40E_PRIV_FLAGS_LINKPOLL_FLAG BIT(1)
-#define I40E_PRIV_FLAGS_FD_ATR BIT(2)
-#define I40E_PRIV_FLAGS_VEB_STATS BIT(3)
-#define I40E_PRIV_FLAGS_PS BIT(4)
-#define I40E_PRIV_FLAGS_HW_ATR_EVICT BIT(5)
+#define I40E_PRIV_FLAGS_MFP_FLAG BIT(0)
+#define I40E_PRIV_FLAGS_LINKPOLL_FLAG BIT(1)
+#define I40E_PRIV_FLAGS_FD_ATR BIT(2)
+#define I40E_PRIV_FLAGS_VEB_STATS BIT(3)
+#define I40E_PRIV_FLAGS_HW_ATR_EVICT BIT(4)
+#define I40E_PRIV_FLAGS_TRUE_PROMISC_SUPPORT BIT(5)
#define I40E_NVM_VERSION_LO_SHIFT 0
#define I40E_NVM_VERSION_LO_MASK (0xff << I40E_NVM_VERSION_LO_SHIFT)
@@ -112,7 +112,9 @@
#define I40E_OEM_VER_PATCH_MASK 0xff
#define I40E_OEM_VER_BUILD_SHIFT 8
#define I40E_OEM_VER_SHIFT 24
-#define I40E_PHY_DEBUG_PORT BIT(4)
+#define I40E_PHY_DEBUG_ALL \
+ (I40E_AQ_PHY_DEBUG_DISABLE_LINK_FW | \
+ I40E_AQ_PHY_DEBUG_DISABLE_ALL_LINK_FW)
/* The values in here are decimal coded as hex as is the case in the NVM map*/
#define I40E_CURRENT_NVM_VERSION_HI 0x2
@@ -123,10 +125,7 @@
#define XSTRINGIFY(bar) STRINGIFY(bar)
#define I40E_RX_DESC(R, i) \
- ((ring_is_16byte_desc_enabled(R)) \
- ? (union i40e_32byte_rx_desc *) \
- (&(((union i40e_16byte_rx_desc *)((R)->desc))[i])) \
- : (&(((union i40e_32byte_rx_desc *)((R)->desc))[i])))
+ (&(((union i40e_32byte_rx_desc *)((R)->desc))[i]))
#define I40E_TX_DESC(R, i) \
(&(((struct i40e_tx_desc *)((R)->desc))[i]))
#define I40E_TX_CTXTDESC(R, i) \
@@ -202,6 +201,7 @@ struct i40e_lump_tracking {
#define I40E_HKEY_ARRAY_SIZE ((I40E_PFQF_HKEY_MAX_INDEX + 1) * 4)
#define I40E_HLUT_ARRAY_SIZE ((I40E_PFQF_HLUT_MAX_INDEX + 1) * 4)
+#define I40E_VF_HLUT_ARRAY_SIZE ((I40E_VFQF_HLUT1_MAX_INDEX + 1) * 4)
enum i40e_fd_stat_idx {
I40E_FD_STAT_ATR,
@@ -244,7 +244,6 @@ struct i40e_fdir_filter {
#define I40E_DCB_PRIO_TYPE_STRICT 0
#define I40E_DCB_PRIO_TYPE_ETS 1
#define I40E_DCB_STRICT_PRIO_CREDITS 127
-#define I40E_MAX_USER_PRIORITY 8
/* DCB per TC information data structure */
struct i40e_tc_info {
u16 qoffset; /* Queue offset from base queue */
@@ -320,8 +319,6 @@ struct i40e_pf {
#define I40E_FLAG_RX_CSUM_ENABLED BIT_ULL(1)
#define I40E_FLAG_MSI_ENABLED BIT_ULL(2)
#define I40E_FLAG_MSIX_ENABLED BIT_ULL(3)
-#define I40E_FLAG_RX_1BUF_ENABLED BIT_ULL(4)
-#define I40E_FLAG_RX_PS_ENABLED BIT_ULL(5)
#define I40E_FLAG_RSS_ENABLED BIT_ULL(6)
#define I40E_FLAG_VMDQ_ENABLED BIT_ULL(7)
#define I40E_FLAG_FDIR_REQUIRES_REINIT BIT_ULL(8)
@@ -330,7 +327,6 @@ struct i40e_pf {
#ifdef I40E_FCOE
#define I40E_FLAG_FCOE_ENABLED BIT_ULL(11)
#endif /* I40E_FCOE */
-#define I40E_FLAG_16BYTE_RX_DESC_ENABLED BIT_ULL(13)
#define I40E_FLAG_CLEAN_ADMINQ BIT_ULL(14)
#define I40E_FLAG_FILTER_SYNC BIT_ULL(15)
#define I40E_FLAG_SERVICE_CLIENT_REQUESTED BIT_ULL(16)
@@ -363,6 +359,7 @@ struct i40e_pf {
#define I40E_FLAG_STOP_FW_LLDP BIT_ULL(47)
#define I40E_FLAG_HAVE_10GBASET_PHY BIT_ULL(48)
#define I40E_FLAG_PF_MAC BIT_ULL(50)
+#define I40E_FLAG_TRUE_PROMISC_SUPPORT BIT_ULL(51)
/* tracks features that get auto disabled by errors */
u64 auto_disable_flags;
@@ -534,9 +531,7 @@ struct i40e_vsi {
u8 *rss_lut_user; /* User configured lookup table entries */
u16 max_frame;
- u16 rx_hdr_len;
u16 rx_buf_len;
- u8 dtype;
/* List of q_vectors allocated to this VSI */
struct i40e_q_vector **q_vectors;
@@ -554,7 +549,7 @@ struct i40e_vsi {
u16 num_queue_pairs; /* Used tx and rx pairs */
u16 num_desc;
enum i40e_vsi_type type; /* VSI type, e.g., LAN, FCoE, etc */
- u16 vf_id; /* Virtual function ID for SRIOV VSIs */
+ s16 vf_id; /* Virtual function ID for SRIOV VSIs */
struct i40e_tc_configuration tc_config;
struct i40e_aqc_vsi_properties_data info;
@@ -811,6 +806,7 @@ int i40e_vlan_rx_kill_vid(struct net_device *netdev,
__always_unused __be16 proto, u16 vid);
#endif
int i40e_open(struct net_device *netdev);
+int i40e_close(struct net_device *netdev);
int i40e_vsi_open(struct i40e_vsi *vsi);
void i40e_vlan_stripping_disable(struct i40e_vsi *vsi);
int i40e_vsi_add_vlan(struct i40e_vsi *vsi, s16 vid);
@@ -823,7 +819,6 @@ bool i40e_is_vsi_in_vlan(struct i40e_vsi *vsi);
struct i40e_mac_filter *i40e_find_mac(struct i40e_vsi *vsi, u8 *macaddr,
bool is_vf, bool is_netdev);
#ifdef I40E_FCOE
-int i40e_close(struct net_device *netdev);
int __i40e_setup_tc(struct net_device *netdev, u32 handle, __be16 proto,
struct tc_to_netdev *tc);
void i40e_netpoll(struct net_device *netdev);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq.c b/drivers/net/ethernet/intel/i40e/i40e_adminq.c
index df8e2fd6a649..738b42a44f20 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_adminq.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_adminq.c
@@ -33,16 +33,6 @@
static void i40e_resume_aq(struct i40e_hw *hw);
/**
- * i40e_is_nvm_update_op - return true if this is an NVM update operation
- * @desc: API request descriptor
- **/
-static inline bool i40e_is_nvm_update_op(struct i40e_aq_desc *desc)
-{
- return (desc->opcode == cpu_to_le16(i40e_aqc_opc_nvm_erase)) ||
- (desc->opcode == cpu_to_le16(i40e_aqc_opc_nvm_update));
-}
-
-/**
* i40e_adminq_init_regs - Initialize AdminQ registers
* @hw: pointer to the hardware structure
*
@@ -624,13 +614,9 @@ i40e_status i40e_init_adminq(struct i40e_hw *hw)
/* pre-emptive resource lock release */
i40e_aq_release_resource(hw, I40E_NVM_RESOURCE_ID, 0, NULL);
- hw->aq.nvm_release_on_done = false;
+ hw->nvm_release_on_done = false;
hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
- ret_code = i40e_aq_set_hmc_resource_profile(hw,
- I40E_HMC_PROFILE_DEFAULT,
- 0,
- NULL);
ret_code = 0;
/* success! */
@@ -1023,26 +1009,7 @@ i40e_status i40e_clean_arq_element(struct i40e_hw *hw,
hw->aq.arq.next_to_clean = ntc;
hw->aq.arq.next_to_use = ntu;
- if (i40e_is_nvm_update_op(&e->desc)) {
- if (hw->aq.nvm_release_on_done) {
- i40e_release_nvm(hw);
- hw->aq.nvm_release_on_done = false;
- }
-
- switch (hw->nvmupd_state) {
- case I40E_NVMUPD_STATE_INIT_WAIT:
- hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
- break;
-
- case I40E_NVMUPD_STATE_WRITE_WAIT:
- hw->nvmupd_state = I40E_NVMUPD_STATE_WRITING;
- break;
-
- default:
- break;
- }
- }
-
+ i40e_nvmupd_check_wait_event(hw, le16_to_cpu(e->desc.opcode));
clean_arq_element_out:
/* Set pending if needed, unlock and return */
if (pending)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq.h b/drivers/net/ethernet/intel/i40e/i40e_adminq.h
index 12fbbddea299..d92aad38afdc 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_adminq.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_adminq.h
@@ -97,7 +97,6 @@ struct i40e_adminq_info {
u32 fw_build; /* firmware build number */
u16 api_maj_ver; /* api major version */
u16 api_min_ver; /* api minor version */
- bool nvm_release_on_done;
struct mutex asq_mutex; /* Send queue lock */
struct mutex arq_mutex; /* Receive queue lock */
diff --git a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
index 8d5c65ab6267..11cf1a5ebccf 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_adminq_cmd.h
@@ -78,17 +78,17 @@ struct i40e_aq_desc {
#define I40E_AQ_FLAG_EI_SHIFT 14
#define I40E_AQ_FLAG_FE_SHIFT 15
-#define I40E_AQ_FLAG_DD (1 << I40E_AQ_FLAG_DD_SHIFT) /* 0x1 */
-#define I40E_AQ_FLAG_CMP (1 << I40E_AQ_FLAG_CMP_SHIFT) /* 0x2 */
-#define I40E_AQ_FLAG_ERR (1 << I40E_AQ_FLAG_ERR_SHIFT) /* 0x4 */
-#define I40E_AQ_FLAG_VFE (1 << I40E_AQ_FLAG_VFE_SHIFT) /* 0x8 */
-#define I40E_AQ_FLAG_LB (1 << I40E_AQ_FLAG_LB_SHIFT) /* 0x200 */
-#define I40E_AQ_FLAG_RD (1 << I40E_AQ_FLAG_RD_SHIFT) /* 0x400 */
-#define I40E_AQ_FLAG_VFC (1 << I40E_AQ_FLAG_VFC_SHIFT) /* 0x800 */
-#define I40E_AQ_FLAG_BUF (1 << I40E_AQ_FLAG_BUF_SHIFT) /* 0x1000 */
-#define I40E_AQ_FLAG_SI (1 << I40E_AQ_FLAG_SI_SHIFT) /* 0x2000 */
-#define I40E_AQ_FLAG_EI (1 << I40E_AQ_FLAG_EI_SHIFT) /* 0x4000 */
-#define I40E_AQ_FLAG_FE (1 << I40E_AQ_FLAG_FE_SHIFT) /* 0x8000 */
+#define I40E_AQ_FLAG_DD BIT(I40E_AQ_FLAG_DD_SHIFT) /* 0x1 */
+#define I40E_AQ_FLAG_CMP BIT(I40E_AQ_FLAG_CMP_SHIFT) /* 0x2 */
+#define I40E_AQ_FLAG_ERR BIT(I40E_AQ_FLAG_ERR_SHIFT) /* 0x4 */
+#define I40E_AQ_FLAG_VFE BIT(I40E_AQ_FLAG_VFE_SHIFT) /* 0x8 */
+#define I40E_AQ_FLAG_LB BIT(I40E_AQ_FLAG_LB_SHIFT) /* 0x200 */
+#define I40E_AQ_FLAG_RD BIT(I40E_AQ_FLAG_RD_SHIFT) /* 0x400 */
+#define I40E_AQ_FLAG_VFC BIT(I40E_AQ_FLAG_VFC_SHIFT) /* 0x800 */
+#define I40E_AQ_FLAG_BUF BIT(I40E_AQ_FLAG_BUF_SHIFT) /* 0x1000 */
+#define I40E_AQ_FLAG_SI BIT(I40E_AQ_FLAG_SI_SHIFT) /* 0x2000 */
+#define I40E_AQ_FLAG_EI BIT(I40E_AQ_FLAG_EI_SHIFT) /* 0x4000 */
+#define I40E_AQ_FLAG_FE BIT(I40E_AQ_FLAG_FE_SHIFT) /* 0x8000 */
/* error codes */
enum i40e_admin_queue_err {
@@ -205,10 +205,6 @@ enum i40e_admin_queue_opc {
i40e_aqc_opc_resume_port_tx = 0x041C,
i40e_aqc_opc_configure_partition_bw = 0x041D,
- /* hmc */
- i40e_aqc_opc_query_hmc_resource_profile = 0x0500,
- i40e_aqc_opc_set_hmc_resource_profile = 0x0501,
-
/* phy commands*/
i40e_aqc_opc_get_phy_abilities = 0x0600,
i40e_aqc_opc_set_phy_config = 0x0601,
@@ -429,6 +425,7 @@ struct i40e_aqc_list_capabilities_element_resp {
#define I40E_AQ_CAP_ID_SDP 0x0062
#define I40E_AQ_CAP_ID_MDIO 0x0063
#define I40E_AQ_CAP_ID_WSR_PROT 0x0064
+#define I40E_AQ_CAP_ID_NVM_MGMT 0x0080
#define I40E_AQ_CAP_ID_FLEX10 0x00F1
#define I40E_AQ_CAP_ID_CEM 0x00F2
@@ -1585,27 +1582,6 @@ struct i40e_aqc_configure_partition_bw_data {
I40E_CHECK_STRUCT_LEN(0x22, i40e_aqc_configure_partition_bw_data);
-/* Get and set the active HMC resource profile and status.
- * (direct 0x0500) and (direct 0x0501)
- */
-struct i40e_aq_get_set_hmc_resource_profile {
- u8 pm_profile;
- u8 pe_vf_enabled;
- u8 reserved[14];
-};
-
-I40E_CHECK_CMD_LENGTH(i40e_aq_get_set_hmc_resource_profile);
-
-enum i40e_aq_hmc_profile {
- /* I40E_HMC_PROFILE_NO_CHANGE = 0, reserved */
- I40E_HMC_PROFILE_DEFAULT = 1,
- I40E_HMC_PROFILE_FAVOR_VF = 2,
- I40E_HMC_PROFILE_EQUAL = 3,
-};
-
-#define I40E_AQ_GET_HMC_RESOURCE_PROFILE_PM_MASK 0xF
-#define I40E_AQ_GET_HMC_RESOURCE_PROFILE_COUNT_MASK 0x3F
-
/* Get PHY Abilities (indirect 0x0600) uses the generic indirect struct */
/* set in param0 for get phy abilities to report qualified modules */
@@ -1652,11 +1628,11 @@ enum i40e_aq_phy_type {
enum i40e_aq_link_speed {
I40E_LINK_SPEED_UNKNOWN = 0,
- I40E_LINK_SPEED_100MB = (1 << I40E_LINK_SPEED_100MB_SHIFT),
- I40E_LINK_SPEED_1GB = (1 << I40E_LINK_SPEED_1000MB_SHIFT),
- I40E_LINK_SPEED_10GB = (1 << I40E_LINK_SPEED_10GB_SHIFT),
- I40E_LINK_SPEED_40GB = (1 << I40E_LINK_SPEED_40GB_SHIFT),
- I40E_LINK_SPEED_20GB = (1 << I40E_LINK_SPEED_20GB_SHIFT)
+ I40E_LINK_SPEED_100MB = BIT(I40E_LINK_SPEED_100MB_SHIFT),
+ I40E_LINK_SPEED_1GB = BIT(I40E_LINK_SPEED_1000MB_SHIFT),
+ I40E_LINK_SPEED_10GB = BIT(I40E_LINK_SPEED_10GB_SHIFT),
+ I40E_LINK_SPEED_40GB = BIT(I40E_LINK_SPEED_40GB_SHIFT),
+ I40E_LINK_SPEED_20GB = BIT(I40E_LINK_SPEED_20GB_SHIFT)
};
struct i40e_aqc_module_desc {
@@ -1857,7 +1833,10 @@ struct i40e_aqc_set_phy_debug {
#define I40E_AQ_PHY_DEBUG_RESET_EXTERNAL_NONE 0x00
#define I40E_AQ_PHY_DEBUG_RESET_EXTERNAL_HARD 0x01
#define I40E_AQ_PHY_DEBUG_RESET_EXTERNAL_SOFT 0x02
+/* Disable link manageability on a single port */
#define I40E_AQ_PHY_DEBUG_DISABLE_LINK_FW 0x10
+/* Disable link manageability on all ports */
+#define I40E_AQ_PHY_DEBUG_DISABLE_ALL_LINK_FW 0x20
u8 reserved[15];
};
@@ -1927,9 +1906,9 @@ I40E_CHECK_CMD_LENGTH(i40e_aqc_nvm_config_write);
/* Used for 0x0704 as well as for 0x0705 commands */
#define I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT 1
#define I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_MASK \
- (1 << I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+ BIT(I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
#define I40E_AQ_ANVM_FEATURE 0
-#define I40E_AQ_ANVM_IMMEDIATE_FIELD (1 << FEATURE_OR_IMMEDIATE_SHIFT)
+#define I40E_AQ_ANVM_IMMEDIATE_FIELD BIT(FEATURE_OR_IMMEDIATE_SHIFT)
struct i40e_aqc_nvm_config_data_feature {
__le16 feature_id;
#define I40E_AQ_ANVM_FEATURE_OPTION_OEM_ONLY 0x01
@@ -2226,13 +2205,11 @@ I40E_CHECK_STRUCT_LEN(0x20, i40e_aqc_get_cee_dcb_cfg_resp);
*/
struct i40e_aqc_lldp_set_local_mib {
#define SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT 0
-#define SET_LOCAL_MIB_AC_TYPE_DCBX_MASK (1 << SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT)
-#define SET_LOCAL_MIB_AC_TYPE_DCBX_MASK (1 << \
- SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT)
+#define SET_LOCAL_MIB_AC_TYPE_DCBX_MASK BIT(SET_LOCAL_MIB_AC_TYPE_DCBX_SHIFT)
#define SET_LOCAL_MIB_AC_TYPE_LOCAL_MIB 0x0
#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT (1)
-#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_MASK (1 << \
- SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT)
+#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_MASK \
+ BIT(SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS_SHIFT)
#define SET_LOCAL_MIB_AC_TYPE_NON_WILLING_APPS 0x1
u8 type;
u8 reserved0;
@@ -2250,7 +2227,7 @@ I40E_CHECK_CMD_LENGTH(i40e_aqc_lldp_set_local_mib);
struct i40e_aqc_lldp_stop_start_specific_agent {
#define I40E_AQC_START_SPECIFIC_AGENT_SHIFT 0
#define I40E_AQC_START_SPECIFIC_AGENT_MASK \
- (1 << I40E_AQC_START_SPECIFIC_AGENT_SHIFT)
+ BIT(I40E_AQC_START_SPECIFIC_AGENT_SHIFT)
u8 command;
u8 reserved[15];
};
@@ -2303,7 +2280,7 @@ struct i40e_aqc_del_udp_tunnel_completion {
I40E_CHECK_CMD_LENGTH(i40e_aqc_del_udp_tunnel_completion);
struct i40e_aqc_get_set_rss_key {
-#define I40E_AQC_SET_RSS_KEY_VSI_VALID (0x1 << 15)
+#define I40E_AQC_SET_RSS_KEY_VSI_VALID BIT(15)
#define I40E_AQC_SET_RSS_KEY_VSI_ID_SHIFT 0
#define I40E_AQC_SET_RSS_KEY_VSI_ID_MASK (0x3FF << \
I40E_AQC_SET_RSS_KEY_VSI_ID_SHIFT)
@@ -2323,14 +2300,13 @@ struct i40e_aqc_get_set_rss_key_data {
I40E_CHECK_STRUCT_LEN(0x34, i40e_aqc_get_set_rss_key_data);
struct i40e_aqc_get_set_rss_lut {
-#define I40E_AQC_SET_RSS_LUT_VSI_VALID (0x1 << 15)
+#define I40E_AQC_SET_RSS_LUT_VSI_VALID BIT(15)
#define I40E_AQC_SET_RSS_LUT_VSI_ID_SHIFT 0
#define I40E_AQC_SET_RSS_LUT_VSI_ID_MASK (0x3FF << \
I40E_AQC_SET_RSS_LUT_VSI_ID_SHIFT)
__le16 vsi_id;
#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT 0
-#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_MASK (0x1 << \
- I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT)
+#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_MASK BIT(I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT)
#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_VSI 0
#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_PF 1
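The RSS key/LUT commands above keep the VSI number and a validity flag in one 16-bit field: a 10-bit id at bit 0 plus BIT(15) as the VSI-valid marker. Below is a self-contained sketch of packing and unpacking that field with the same defines; it deliberately skips the cpu_to_le16() byte-swapping the driver applies before handing the value to firmware.

/* Standalone sketch of the vsi_id field packing used by the RSS key/LUT
 * admin-queue commands above: a 10-bit VSI id at bit 0 plus a "VSI valid"
 * flag at bit 15. Defines are copied from the hunk; BIT() is local.
 */
#include <stdint.h>
#include <stdio.h>

#define BIT(n)                               (1U << (n))
#define I40E_AQC_SET_RSS_LUT_VSI_VALID       BIT(15)
#define I40E_AQC_SET_RSS_LUT_VSI_ID_SHIFT    0
#define I40E_AQC_SET_RSS_LUT_VSI_ID_MASK     (0x3FF << I40E_AQC_SET_RSS_LUT_VSI_ID_SHIFT)

static uint16_t pack_vsi_id(uint16_t vsi_id)
{
    return (uint16_t)(((vsi_id << I40E_AQC_SET_RSS_LUT_VSI_ID_SHIFT) &
                       I40E_AQC_SET_RSS_LUT_VSI_ID_MASK) |
                      I40E_AQC_SET_RSS_LUT_VSI_VALID);
}

static uint16_t unpack_vsi_id(uint16_t field)
{
    return (field & I40E_AQC_SET_RSS_LUT_VSI_ID_MASK) >>
           I40E_AQC_SET_RSS_LUT_VSI_ID_SHIFT;
}

int main(void)
{
    uint16_t field = pack_vsi_id(0x123); /* example id only */

    printf("packed = 0x%04x, unpacked id = 0x%03x\n",
           (unsigned)field, (unsigned)unpack_vsi_id(field));
    return 0;
}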
diff --git a/drivers/net/ethernet/intel/i40e/i40e_client.h b/drivers/net/ethernet/intel/i40e/i40e_client.h
index bf6b453d93a1..a4601d97fb24 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_client.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_client.h
@@ -217,7 +217,7 @@ struct i40e_client {
#define I40E_CLIENT_FLAGS_LAUNCH_ON_PROBE BIT(0)
#define I40E_TX_FLAGS_NOTIFY_OTHER_EVENTS BIT(2)
enum i40e_client_type type;
- struct i40e_client_ops *ops; /* client ops provided by the client */
+ const struct i40e_client_ops *ops; /* client ops provided by the client */
};
static inline bool i40e_client_is_registered(struct i40e_client *client)
diff --git a/drivers/net/ethernet/intel/i40e/i40e_common.c b/drivers/net/ethernet/intel/i40e/i40e_common.c
index 4596294c2ab1..422b41d61c9a 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_common.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_common.c
@@ -60,6 +60,8 @@ static i40e_status i40e_set_mac_type(struct i40e_hw *hw)
case I40E_DEV_ID_SFP_X722:
case I40E_DEV_ID_1G_BASE_T_X722:
case I40E_DEV_ID_10G_BASE_T_X722:
+ case I40E_DEV_ID_SFP_I_X722:
+ case I40E_DEV_ID_QSFP_I_X722:
hw->mac.type = I40E_MAC_X722;
break;
default:
@@ -694,7 +696,7 @@ struct i40e_rx_ptype_decoded i40e_ptype_lookup[] = {
/* Non Tunneled IPv6 */
I40E_PTT(88, IP, IPV6, FRG, NONE, NONE, NOF, NONE, PAY3),
I40E_PTT(89, IP, IPV6, NOF, NONE, NONE, NOF, NONE, PAY3),
- I40E_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY3),
+ I40E_PTT(90, IP, IPV6, NOF, NONE, NONE, NOF, UDP, PAY4),
I40E_PTT_UNUSED_ENTRY(91),
I40E_PTT(92, IP, IPV6, NOF, NONE, NONE, NOF, TCP, PAY4),
I40E_PTT(93, IP, IPV6, NOF, NONE, NONE, NOF, SCTP, PAY4),
@@ -1901,13 +1903,13 @@ i40e_status i40e_aq_set_phy_int_mask(struct i40e_hw *hw,
*
* Reset the external PHY.
**/
-enum i40e_status_code i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags,
- struct i40e_asq_cmd_details *cmd_details)
+i40e_status i40e_aq_set_phy_debug(struct i40e_hw *hw, u8 cmd_flags,
+ struct i40e_asq_cmd_details *cmd_details)
{
struct i40e_aq_desc desc;
struct i40e_aqc_set_phy_debug *cmd =
(struct i40e_aqc_set_phy_debug *)&desc.params.raw;
- enum i40e_status_code status;
+ i40e_status status;
i40e_fill_default_direct_cmd_desc(&desc,
i40e_aqc_opc_set_phy_debug);
@@ -1970,10 +1972,12 @@ aq_add_vsi_exit:
* @seid: vsi number
* @set: set unicast promiscuous enable/disable
* @cmd_details: pointer to command details structure or NULL
+ * @rx_only_promisc: flag to decide if egress traffic gets mirrored in promisc
**/
i40e_status i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw,
u16 seid, bool set,
- struct i40e_asq_cmd_details *cmd_details)
+ struct i40e_asq_cmd_details *cmd_details,
+ bool rx_only_promisc)
{
struct i40e_aq_desc desc;
struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
@@ -1986,8 +1990,9 @@ i40e_status i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw,
if (set) {
flags |= I40E_AQC_SET_VSI_PROMISC_UNICAST;
- if (((hw->aq.api_maj_ver == 1) && (hw->aq.api_min_ver >= 5)) ||
- (hw->aq.api_maj_ver > 1))
+ if (rx_only_promisc &&
+ (((hw->aq.api_maj_ver == 1) && (hw->aq.api_min_ver >= 5)) ||
+ (hw->aq.api_maj_ver > 1)))
flags |= I40E_AQC_SET_VSI_PROMISC_TX;
}
@@ -2037,6 +2042,76 @@ i40e_status i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw,
}
/**
+ * i40e_aq_set_vsi_mc_promisc_on_vlan
+ * @hw: pointer to the hw struct
+ * @seid: vsi number
+ * @enable: set MAC L2 layer multicast promiscuous enable/disable for a given VLAN
+ * @vid: The VLAN tag filter - capture any multicast packet with this VLAN tag
+ * @cmd_details: pointer to command details structure or NULL
+ **/
+enum i40e_status_code i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw,
+ u16 seid, bool enable,
+ u16 vid,
+ struct i40e_asq_cmd_details *cmd_details)
+{
+ struct i40e_aq_desc desc;
+ struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
+ (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
+ enum i40e_status_code status;
+ u16 flags = 0;
+
+ i40e_fill_default_direct_cmd_desc(&desc,
+ i40e_aqc_opc_set_vsi_promiscuous_modes);
+
+ if (enable)
+ flags |= I40E_AQC_SET_VSI_PROMISC_MULTICAST;
+
+ cmd->promiscuous_flags = cpu_to_le16(flags);
+ cmd->valid_flags = cpu_to_le16(I40E_AQC_SET_VSI_PROMISC_MULTICAST);
+ cmd->seid = cpu_to_le16(seid);
+ cmd->vlan_tag = cpu_to_le16(vid | I40E_AQC_SET_VSI_VLAN_VALID);
+
+ status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+ return status;
+}
+
+/**
+ * i40e_aq_set_vsi_uc_promisc_on_vlan
+ * @hw: pointer to the hw struct
+ * @seid: vsi number
+ * @enable: set MAC L2 layer unicast promiscuous enable/disable for a given VLAN
+ * @vid: The VLAN tag filter - capture any unicast packet with this VLAN tag
+ * @cmd_details: pointer to command details structure or NULL
+ **/
+enum i40e_status_code i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw,
+ u16 seid, bool enable,
+ u16 vid,
+ struct i40e_asq_cmd_details *cmd_details)
+{
+ struct i40e_aq_desc desc;
+ struct i40e_aqc_set_vsi_promiscuous_modes *cmd =
+ (struct i40e_aqc_set_vsi_promiscuous_modes *)&desc.params.raw;
+ enum i40e_status_code status;
+ u16 flags = 0;
+
+ i40e_fill_default_direct_cmd_desc(&desc,
+ i40e_aqc_opc_set_vsi_promiscuous_modes);
+
+ if (enable)
+ flags |= I40E_AQC_SET_VSI_PROMISC_UNICAST;
+
+ cmd->promiscuous_flags = cpu_to_le16(flags);
+ cmd->valid_flags = cpu_to_le16(I40E_AQC_SET_VSI_PROMISC_UNICAST);
+ cmd->seid = cpu_to_le16(seid);
+ cmd->vlan_tag = cpu_to_le16(vid | I40E_AQC_SET_VSI_VLAN_VALID);
+
+ status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+ return status;
+}
+
+/**
* i40e_aq_set_vsi_broadcast
* @hw: pointer to the hw struct
* @seid: vsi number
@@ -2157,6 +2232,9 @@ i40e_status i40e_aq_update_vsi_params(struct i40e_hw *hw,
struct i40e_aq_desc desc;
struct i40e_aqc_add_get_update_vsi *cmd =
(struct i40e_aqc_add_get_update_vsi *)&desc.params.raw;
+ struct i40e_aqc_add_get_update_vsi_completion *resp =
+ (struct i40e_aqc_add_get_update_vsi_completion *)
+ &desc.params.raw;
i40e_status status;
i40e_fill_default_direct_cmd_desc(&desc,
@@ -2168,6 +2246,9 @@ i40e_status i40e_aq_update_vsi_params(struct i40e_hw *hw,
status = i40e_asq_send_command(hw, &desc, &vsi_ctx->info,
sizeof(vsi_ctx->info), cmd_details);
+ vsi_ctx->vsis_allocated = le16_to_cpu(resp->vsi_used);
+ vsi_ctx->vsis_unallocated = le16_to_cpu(resp->vsi_free);
+
return status;
}
@@ -2205,6 +2286,35 @@ i40e_status i40e_aq_get_switch_config(struct i40e_hw *hw,
}
/**
+ * i40e_aq_set_switch_config
+ * @hw: pointer to the hardware structure
+ * @flags: bit flag values to set
+ * @valid_flags: which bit flags to set
+ * @cmd_details: pointer to command details structure or NULL
+ *
+ * Set switch configuration bits
+ **/
+enum i40e_status_code i40e_aq_set_switch_config(struct i40e_hw *hw,
+ u16 flags,
+ u16 valid_flags,
+ struct i40e_asq_cmd_details *cmd_details)
+{
+ struct i40e_aq_desc desc;
+ struct i40e_aqc_set_switch_config *scfg =
+ (struct i40e_aqc_set_switch_config *)&desc.params.raw;
+ enum i40e_status_code status;
+
+ i40e_fill_default_direct_cmd_desc(&desc,
+ i40e_aqc_opc_set_switch_config);
+ scfg->flags = cpu_to_le16(flags);
+ scfg->valid_flags = cpu_to_le16(valid_flags);
+
+ status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
+
+ return status;
+}
+
+/**
* i40e_aq_get_firmware_version
* @hw: pointer to the hw struct
* @fw_major_version: firmware major version
@@ -2660,10 +2770,7 @@ i40e_status i40e_aq_delete_mirrorrule(struct i40e_hw *hw, u16 sw_seid,
u16 *rules_used, u16 *rules_free)
{
/* Rule ID has to be valid except rule_type: INGRESS VLAN mirroring */
- if (rule_type != I40E_AQC_MIRROR_RULE_TYPE_VLAN) {
- if (!rule_id)
- return I40E_ERR_PARAM;
- } else {
+ if (rule_type == I40E_AQC_MIRROR_RULE_TYPE_VLAN) {
/* count and mr_list shall be valid for rule_type INGRESS VLAN
* mirroring. For other rule_type, count and rule_type should
* not matter.
@@ -2780,36 +2887,6 @@ i40e_status i40e_aq_debug_write_register(struct i40e_hw *hw,
}
/**
- * i40e_aq_set_hmc_resource_profile
- * @hw: pointer to the hw struct
- * @profile: type of profile the HMC is to be set as
- * @pe_vf_enabled_count: the number of PE enabled VFs the system has
- * @cmd_details: pointer to command details structure or NULL
- *
- * set the HMC profile of the device.
- **/
-i40e_status i40e_aq_set_hmc_resource_profile(struct i40e_hw *hw,
- enum i40e_aq_hmc_profile profile,
- u8 pe_vf_enabled_count,
- struct i40e_asq_cmd_details *cmd_details)
-{
- struct i40e_aq_desc desc;
- struct i40e_aq_get_set_hmc_resource_profile *cmd =
- (struct i40e_aq_get_set_hmc_resource_profile *)&desc.params.raw;
- i40e_status status;
-
- i40e_fill_default_direct_cmd_desc(&desc,
- i40e_aqc_opc_set_hmc_resource_profile);
-
- cmd->pm_profile = (u8)profile;
- cmd->pe_vf_enabled = pe_vf_enabled_count;
-
- status = i40e_asq_send_command(hw, &desc, NULL, 0, cmd_details);
-
- return status;
-}
-
-/**
* i40e_aq_request_resource
* @hw: pointer to the hw struct
* @resource: resource id
@@ -3073,6 +3150,9 @@ static void i40e_parse_discover_capabilities(struct i40e_hw *hw, void *buff,
break;
case I40E_AQ_CAP_ID_MSIX:
p->num_msix_vectors = number;
+ i40e_debug(hw, I40E_DEBUG_INIT,
+ "HW Capability: MSIX vector count = %d\n",
+ p->num_msix_vectors);
break;
case I40E_AQ_CAP_ID_VF_MSIX:
p->num_msix_vectors_vf = number;
@@ -3128,6 +3208,12 @@ static void i40e_parse_discover_capabilities(struct i40e_hw *hw, void *buff,
p->wr_csr_prot = (u64)number;
p->wr_csr_prot |= (u64)logical_id << 32;
break;
+ case I40E_AQ_CAP_ID_NVM_MGMT:
+ if (number & I40E_NVM_MGMT_SEC_REV_DISABLED)
+ p->sec_rev_disabled = true;
+ if (number & I40E_NVM_MGMT_UPDATE_DISABLED)
+ p->update_disabled = true;
+ break;
default:
break;
}
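The wrappers added above (the per-VLAN promiscuous helpers and i40e_aq_set_switch_config) all follow the same direct admin-queue shape: fill a default descriptor for the opcode, cast the 16-byte params area to the command-specific struct, populate it, and send. The sketch below models only that flow with mock types; the struct layouts, the opcode value and the send function are stand-ins for illustration, not the driver's real definitions.

/* Simplified model of the direct admin-queue command pattern used by the
 * new wrappers above (fill a default descriptor for the opcode, cast the
 * 16-byte params area to the command-specific struct, then send). All
 * types and the send function below are mock stand-ins, not the driver's
 * real definitions.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct mock_aq_desc {
    uint16_t flags;
    uint16_t opcode;
    uint8_t  params_raw[16]; /* command-specific payload */
};

struct mock_set_switch_config {
    uint16_t flags;
    uint16_t valid_flags;
    uint8_t  reserved[12];
};

static void mock_fill_default_direct_cmd_desc(struct mock_aq_desc *desc,
                                              uint16_t opcode)
{
    memset(desc, 0, sizeof(*desc));
    desc->opcode = opcode;
}

static int mock_asq_send_command(const struct mock_aq_desc *desc)
{
    printf("sending opcode 0x%04x\n", (unsigned)desc->opcode);
    return 0; /* pretend the firmware accepted it */
}

int main(void)
{
    struct mock_aq_desc desc;
    struct mock_set_switch_config *scfg =
        (struct mock_set_switch_config *)desc.params_raw;

    mock_fill_default_direct_cmd_desc(&desc, 0x0205); /* opcode value is illustrative */
    scfg->flags = 0x0001;
    scfg->valid_flags = 0x0001;

    return mock_asq_send_command(&desc);
}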
diff --git a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
index 0c97733d253c..e6af8c8d7019 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_debugfs.c
@@ -147,9 +147,8 @@ static void i40e_dbg_dump_vsi_seid(struct i40e_pf *pf, int seid)
dev_info(&pf->pdev->dev, " vlan_features = 0x%08lx\n",
(unsigned long int)nd->vlan_features);
}
- if (vsi->active_vlans)
- dev_info(&pf->pdev->dev,
- " vlgrp: & = %p\n", vsi->active_vlans);
+ dev_info(&pf->pdev->dev,
+ " vlgrp: & = %p\n", vsi->active_vlans);
dev_info(&pf->pdev->dev,
" state = %li flags = 0x%08lx, netdev_registered = %i, current_netdev_flags = 0x%04x\n",
vsi->state, vsi->flags,
@@ -269,13 +268,11 @@ static void i40e_dbg_dump_vsi_seid(struct i40e_pf *pf, int seid)
rx_ring->queue_index,
rx_ring->reg_idx);
dev_info(&pf->pdev->dev,
- " rx_rings[%i]: rx_hdr_len = %d, rx_buf_len = %d, dtype = %d\n",
- i, rx_ring->rx_hdr_len,
- rx_ring->rx_buf_len,
- rx_ring->dtype);
+ " rx_rings[%i]: rx_buf_len = %d\n",
+ i, rx_ring->rx_buf_len);
dev_info(&pf->pdev->dev,
- " rx_rings[%i]: hsplit = %d, next_to_use = %d, next_to_clean = %d, ring_active = %i\n",
- i, ring_is_ps_enabled(rx_ring),
+ " rx_rings[%i]: next_to_use = %d, next_to_clean = %d, ring_active = %i\n",
+ i,
rx_ring->next_to_use,
rx_ring->next_to_clean,
rx_ring->ring_active);
@@ -327,9 +324,6 @@ static void i40e_dbg_dump_vsi_seid(struct i40e_pf *pf, int seid)
tx_ring->queue_index,
tx_ring->reg_idx);
dev_info(&pf->pdev->dev,
- " tx_rings[%i]: dtype = %d\n",
- i, tx_ring->dtype);
- dev_info(&pf->pdev->dev,
" tx_rings[%i]: next_to_use = %d, next_to_clean = %d, ring_active = %i\n",
i,
tx_ring->next_to_use,
@@ -366,8 +360,8 @@ static void i40e_dbg_dump_vsi_seid(struct i40e_pf *pf, int seid)
" work_limit = %d\n",
vsi->work_limit);
dev_info(&pf->pdev->dev,
- " max_frame = %d, rx_hdr_len = %d, rx_buf_len = %d dtype = %d\n",
- vsi->max_frame, vsi->rx_hdr_len, vsi->rx_buf_len, vsi->dtype);
+ " max_frame = %d, rx_buf_len = %d dtype = %d\n",
+ vsi->max_frame, vsi->rx_buf_len, 0);
dev_info(&pf->pdev->dev,
" num_q_vectors = %i, base_vector = %i\n",
vsi->num_q_vectors, vsi->base_vector);
@@ -592,13 +586,6 @@ static void i40e_dbg_dump_desc(int cnt, int vsi_seid, int ring_id, int desc_n,
" d[%03x] = 0x%016llx 0x%016llx\n",
i, txd->buffer_addr,
txd->cmd_type_offset_bsz);
- } else if (sizeof(union i40e_rx_desc) ==
- sizeof(union i40e_16byte_rx_desc)) {
- rxd = I40E_RX_DESC(ring, i);
- dev_info(&pf->pdev->dev,
- " d[%03x] = 0x%016llx 0x%016llx\n",
- i, rxd->read.pkt_addr,
- rxd->read.hdr_addr);
} else {
rxd = I40E_RX_DESC(ring, i);
dev_info(&pf->pdev->dev,
@@ -620,13 +607,6 @@ static void i40e_dbg_dump_desc(int cnt, int vsi_seid, int ring_id, int desc_n,
"vsi = %02i tx ring = %02i d[%03x] = 0x%016llx 0x%016llx\n",
vsi_seid, ring_id, desc_n,
txd->buffer_addr, txd->cmd_type_offset_bsz);
- } else if (sizeof(union i40e_rx_desc) ==
- sizeof(union i40e_16byte_rx_desc)) {
- rxd = I40E_RX_DESC(ring, desc_n);
- dev_info(&pf->pdev->dev,
- "vsi = %02i rx ring = %02i d[%03x] = 0x%016llx 0x%016llx\n",
- vsi_seid, ring_id, desc_n,
- rxd->read.pkt_addr, rxd->read.hdr_addr);
} else {
rxd = I40E_RX_DESC(ring, desc_n);
dev_info(&pf->pdev->dev,
diff --git a/drivers/net/ethernet/intel/i40e/i40e_devids.h b/drivers/net/ethernet/intel/i40e/i40e_devids.h
index 99257fcd1ef4..d701861c6e1e 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_devids.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_devids.h
@@ -44,6 +44,8 @@
#define I40E_DEV_ID_SFP_X722 0x37D0
#define I40E_DEV_ID_1G_BASE_T_X722 0x37D1
#define I40E_DEV_ID_10G_BASE_T_X722 0x37D2
+#define I40E_DEV_ID_SFP_I_X722 0x37D3
+#define I40E_DEV_ID_QSFP_I_X722 0x37D4
#define i40e_is_40G_device(d) ((d) == I40E_DEV_ID_QSFP_A || \
(d) == I40E_DEV_ID_QSFP_B || \
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
index 784b1659457a..5e8d84ff7d5f 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ethtool.c
@@ -230,12 +230,22 @@ static const char i40e_gstrings_test[][ETH_GSTRING_LEN] = {
#define I40E_TEST_LEN (sizeof(i40e_gstrings_test) / ETH_GSTRING_LEN)
+static const char i40e_priv_flags_strings_gl[][ETH_GSTRING_LEN] = {
+ "MFP",
+ "LinkPolling",
+ "flow-director-atr",
+ "veb-stats",
+ "hw-atr-eviction",
+ "vf-true-promisc-support",
+};
+
+#define I40E_PRIV_FLAGS_GL_STR_LEN ARRAY_SIZE(i40e_priv_flags_strings_gl)
+
static const char i40e_priv_flags_strings[][ETH_GSTRING_LEN] = {
"NPAR",
"LinkPolling",
"flow-director-atr",
"veb-stats",
- "packet-split",
"hw-atr-eviction",
};
@@ -252,6 +262,110 @@ static void i40e_partition_setting_complaint(struct i40e_pf *pf)
}
/**
+ * i40e_phy_type_to_ethtool - convert the phy_types to ethtool link modes
+ * @pf: board private structure; PHY types are read from pf->hw.phy.phy_types
+ * @supported: pointer to the ethtool supported variable to fill in
+ * @advertising: pointer to the ethtool advertising variable to fill in
+ *
+ **/
+static void i40e_phy_type_to_ethtool(struct i40e_pf *pf, u32 *supported,
+ u32 *advertising)
+{
+ enum i40e_aq_capabilities_phy_type phy_types = pf->hw.phy.phy_types;
+
+ *supported = 0x0;
+ *advertising = 0x0;
+
+ if (phy_types & I40E_CAP_PHY_TYPE_SGMII) {
+ *supported |= SUPPORTED_Autoneg |
+ SUPPORTED_1000baseT_Full;
+ *advertising |= ADVERTISED_Autoneg |
+ ADVERTISED_1000baseT_Full;
+ if (pf->flags & I40E_FLAG_100M_SGMII_CAPABLE) {
+ *supported |= SUPPORTED_100baseT_Full;
+ *advertising |= ADVERTISED_100baseT_Full;
+ }
+ }
+ if (phy_types & I40E_CAP_PHY_TYPE_XAUI ||
+ phy_types & I40E_CAP_PHY_TYPE_XFI ||
+ phy_types & I40E_CAP_PHY_TYPE_SFI ||
+ phy_types & I40E_CAP_PHY_TYPE_10GBASE_SFPP_CU ||
+ phy_types & I40E_CAP_PHY_TYPE_10GBASE_AOC)
+ *supported |= SUPPORTED_10000baseT_Full;
+ if (phy_types & I40E_CAP_PHY_TYPE_10GBASE_CR1_CU ||
+ phy_types & I40E_CAP_PHY_TYPE_10GBASE_CR1 ||
+ phy_types & I40E_CAP_PHY_TYPE_10GBASE_T ||
+ phy_types & I40E_CAP_PHY_TYPE_10GBASE_SR ||
+ phy_types & I40E_CAP_PHY_TYPE_10GBASE_LR) {
+ *supported |= SUPPORTED_Autoneg |
+ SUPPORTED_10000baseT_Full;
+ *advertising |= ADVERTISED_Autoneg |
+ ADVERTISED_10000baseT_Full;
+ }
+ if (phy_types & I40E_CAP_PHY_TYPE_XLAUI ||
+ phy_types & I40E_CAP_PHY_TYPE_XLPPI ||
+ phy_types & I40E_CAP_PHY_TYPE_40GBASE_AOC)
+ *supported |= SUPPORTED_40000baseCR4_Full;
+ if (phy_types & I40E_CAP_PHY_TYPE_40GBASE_CR4_CU ||
+ phy_types & I40E_CAP_PHY_TYPE_40GBASE_CR4) {
+ *supported |= SUPPORTED_Autoneg |
+ SUPPORTED_40000baseCR4_Full;
+ *advertising |= ADVERTISED_Autoneg |
+ ADVERTISED_40000baseCR4_Full;
+ }
+ if ((phy_types & I40E_CAP_PHY_TYPE_100BASE_TX) &&
+ !(phy_types & I40E_CAP_PHY_TYPE_1000BASE_T)) {
+ *supported |= SUPPORTED_Autoneg |
+ SUPPORTED_100baseT_Full;
+ *advertising |= ADVERTISED_Autoneg |
+ ADVERTISED_100baseT_Full;
+ }
+ if (phy_types & I40E_CAP_PHY_TYPE_1000BASE_T ||
+ phy_types & I40E_CAP_PHY_TYPE_1000BASE_SX ||
+ phy_types & I40E_CAP_PHY_TYPE_1000BASE_LX ||
+ phy_types & I40E_CAP_PHY_TYPE_1000BASE_T_OPTICAL) {
+ *supported |= SUPPORTED_Autoneg |
+ SUPPORTED_1000baseT_Full;
+ *advertising |= ADVERTISED_Autoneg |
+ ADVERTISED_1000baseT_Full;
+ }
+ if (phy_types & I40E_CAP_PHY_TYPE_40GBASE_SR4)
+ *supported |= SUPPORTED_40000baseSR4_Full;
+ if (phy_types & I40E_CAP_PHY_TYPE_40GBASE_LR4)
+ *supported |= SUPPORTED_40000baseLR4_Full;
+ if (phy_types & I40E_CAP_PHY_TYPE_40GBASE_KR4) {
+ *supported |= SUPPORTED_40000baseKR4_Full |
+ SUPPORTED_Autoneg;
+ *advertising |= ADVERTISED_40000baseKR4_Full |
+ ADVERTISED_Autoneg;
+ }
+ if (phy_types & I40E_CAP_PHY_TYPE_20GBASE_KR2) {
+ *supported |= SUPPORTED_20000baseKR2_Full |
+ SUPPORTED_Autoneg;
+ *advertising |= ADVERTISED_20000baseKR2_Full |
+ ADVERTISED_Autoneg;
+ }
+ if (phy_types & I40E_CAP_PHY_TYPE_10GBASE_KR) {
+ *supported |= SUPPORTED_10000baseKR_Full |
+ SUPPORTED_Autoneg;
+ *advertising |= ADVERTISED_10000baseKR_Full |
+ ADVERTISED_Autoneg;
+ }
+ if (phy_types & I40E_CAP_PHY_TYPE_10GBASE_KX4) {
+ *supported |= SUPPORTED_10000baseKX4_Full |
+ SUPPORTED_Autoneg;
+ *advertising |= ADVERTISED_10000baseKX4_Full |
+ ADVERTISED_Autoneg;
+ }
+ if (phy_types & I40E_CAP_PHY_TYPE_1000BASE_KX) {
+ *supported |= SUPPORTED_1000baseKX_Full |
+ SUPPORTED_Autoneg;
+ *advertising |= ADVERTISED_1000baseKX_Full |
+ ADVERTISED_Autoneg;
+ }
+}
+
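i40e_phy_type_to_ethtool() is used two ways in the code that follows: the link-down path fills supported/advertising from it directly, while the link-up path first builds the masks from the active PHY type and then ANDs them with the NVM-derived values. A toy sketch of that intersection step, using mock bits rather than the real ethtool SUPPORTED_* constants:

/* Minimal model of how the link-up path combines the masks: the
 * per-PHY-type supported/advertising values are intersected (bitwise AND)
 * with what the NVM-derived helper reports, so only modes allowed by both
 * survive. The SUPPORTED_* values here are mock bits, not ethtool.h ones.
 */
#include <stdio.h>

#define MOCK_SUPPORTED_1000baseT_Full  (1U << 0)
#define MOCK_SUPPORTED_10000baseT_Full (1U << 1)
#define MOCK_SUPPORTED_Autoneg         (1U << 2)

int main(void)
{
    unsigned int phy_supported = MOCK_SUPPORTED_Autoneg |
                                 MOCK_SUPPORTED_1000baseT_Full |
                                 MOCK_SUPPORTED_10000baseT_Full;
    unsigned int nvm_supported = MOCK_SUPPORTED_Autoneg |
                                 MOCK_SUPPORTED_10000baseT_Full;

    /* only modes present in both masks remain supported/advertised */
    unsigned int supported = phy_supported & nvm_supported;

    printf("supported mask = 0x%x\n", supported);
    return 0;
}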
+/**
* i40e_get_settings_link_up - Get the Link settings for when link is up
* @hw: hw structure
* @ecmd: ethtool command to fill in
@@ -265,6 +379,8 @@ static void i40e_get_settings_link_up(struct i40e_hw *hw,
{
struct i40e_link_status *hw_link_info = &hw->phy.link_info;
u32 link_speed = hw_link_info->link_speed;
+ u32 e_advertising = 0x0;
+ u32 e_supported = 0x0;
/* Initialize supported and advertised settings based on phy settings */
switch (hw_link_info->phy_type) {
@@ -305,14 +421,18 @@ static void i40e_get_settings_link_up(struct i40e_hw *hw,
break;
case I40E_PHY_TYPE_10GBASE_T:
case I40E_PHY_TYPE_1000BASE_T:
+ case I40E_PHY_TYPE_100BASE_TX:
ecmd->supported = SUPPORTED_Autoneg |
SUPPORTED_10000baseT_Full |
- SUPPORTED_1000baseT_Full;
+ SUPPORTED_1000baseT_Full |
+ SUPPORTED_100baseT_Full;
ecmd->advertising = ADVERTISED_Autoneg;
if (hw_link_info->requested_speeds & I40E_LINK_SPEED_10GB)
ecmd->advertising |= ADVERTISED_10000baseT_Full;
if (hw_link_info->requested_speeds & I40E_LINK_SPEED_1GB)
ecmd->advertising |= ADVERTISED_1000baseT_Full;
+ if (hw_link_info->requested_speeds & I40E_LINK_SPEED_100MB)
+ ecmd->advertising |= ADVERTISED_100baseT_Full;
break;
case I40E_PHY_TYPE_1000BASE_T_OPTICAL:
ecmd->supported = SUPPORTED_Autoneg |
@@ -320,12 +440,6 @@ static void i40e_get_settings_link_up(struct i40e_hw *hw,
ecmd->advertising = ADVERTISED_Autoneg |
ADVERTISED_1000baseT_Full;
break;
- case I40E_PHY_TYPE_100BASE_TX:
- ecmd->supported = SUPPORTED_Autoneg |
- SUPPORTED_100baseT_Full;
- if (hw_link_info->requested_speeds & I40E_LINK_SPEED_100MB)
- ecmd->advertising |= ADVERTISED_100baseT_Full;
- break;
case I40E_PHY_TYPE_10GBASE_CR1_CU:
case I40E_PHY_TYPE_10GBASE_CR1:
ecmd->supported = SUPPORTED_Autoneg |
@@ -352,14 +466,23 @@ static void i40e_get_settings_link_up(struct i40e_hw *hw,
ecmd->advertising |= ADVERTISED_100baseT_Full;
}
break;
- /* Backplane is set based on supported phy types in get_settings
- * so don't set anything here but don't warn either
- */
case I40E_PHY_TYPE_40GBASE_KR4:
case I40E_PHY_TYPE_20GBASE_KR2:
case I40E_PHY_TYPE_10GBASE_KR:
case I40E_PHY_TYPE_10GBASE_KX4:
case I40E_PHY_TYPE_1000BASE_KX:
+ ecmd->supported |= SUPPORTED_40000baseKR4_Full |
+ SUPPORTED_20000baseKR2_Full |
+ SUPPORTED_10000baseKR_Full |
+ SUPPORTED_10000baseKX4_Full |
+ SUPPORTED_1000baseKX_Full |
+ SUPPORTED_Autoneg;
+ ecmd->advertising |= ADVERTISED_40000baseKR4_Full |
+ ADVERTISED_20000baseKR2_Full |
+ ADVERTISED_10000baseKR_Full |
+ ADVERTISED_10000baseKX4_Full |
+ ADVERTISED_1000baseKX_Full |
+ ADVERTISED_Autoneg;
break;
default:
/* if we got here and link is up something bad is afoot */
@@ -367,6 +490,16 @@ static void i40e_get_settings_link_up(struct i40e_hw *hw,
hw_link_info->phy_type);
}
+ /* Now that we've worked out everything that could be supported by the
+ * current PHY type, get what is supported by the NVM and intersect
+ * the two masks to get what is truly supported
+ */
+ i40e_phy_type_to_ethtool(pf, &e_supported,
+ &e_advertising);
+
+ ecmd->supported = ecmd->supported & e_supported;
+ ecmd->advertising = ecmd->advertising & e_advertising;
+
/* Set speed and duplex */
switch (link_speed) {
case I40E_LINK_SPEED_40GB:
@@ -401,74 +534,11 @@ static void i40e_get_settings_link_down(struct i40e_hw *hw,
struct ethtool_cmd *ecmd,
struct i40e_pf *pf)
{
- enum i40e_aq_capabilities_phy_type phy_types = hw->phy.phy_types;
-
/* link is down and the driver needs to fall back on
* supported phy types to figure out what info to display
*/
- ecmd->supported = 0x0;
- ecmd->advertising = 0x0;
- if (phy_types & I40E_CAP_PHY_TYPE_SGMII) {
- ecmd->supported |= SUPPORTED_Autoneg |
- SUPPORTED_1000baseT_Full;
- ecmd->advertising |= ADVERTISED_Autoneg |
- ADVERTISED_1000baseT_Full;
- if (pf->hw.mac.type == I40E_MAC_X722) {
- ecmd->supported |= SUPPORTED_100baseT_Full;
- ecmd->advertising |= ADVERTISED_100baseT_Full;
- if (pf->flags & I40E_FLAG_100M_SGMII_CAPABLE) {
- ecmd->supported |= SUPPORTED_100baseT_Full;
- ecmd->advertising |= ADVERTISED_100baseT_Full;
- }
- }
- }
- if (phy_types & I40E_CAP_PHY_TYPE_XAUI ||
- phy_types & I40E_CAP_PHY_TYPE_XFI ||
- phy_types & I40E_CAP_PHY_TYPE_SFI ||
- phy_types & I40E_CAP_PHY_TYPE_10GBASE_SFPP_CU ||
- phy_types & I40E_CAP_PHY_TYPE_10GBASE_AOC)
- ecmd->supported |= SUPPORTED_10000baseT_Full;
- if (phy_types & I40E_CAP_PHY_TYPE_10GBASE_CR1_CU ||
- phy_types & I40E_CAP_PHY_TYPE_10GBASE_CR1 ||
- phy_types & I40E_CAP_PHY_TYPE_10GBASE_T ||
- phy_types & I40E_CAP_PHY_TYPE_10GBASE_SR ||
- phy_types & I40E_CAP_PHY_TYPE_10GBASE_LR) {
- ecmd->supported |= SUPPORTED_Autoneg |
- SUPPORTED_10000baseT_Full;
- ecmd->advertising |= ADVERTISED_Autoneg |
- ADVERTISED_10000baseT_Full;
- }
- if (phy_types & I40E_CAP_PHY_TYPE_XLAUI ||
- phy_types & I40E_CAP_PHY_TYPE_XLPPI ||
- phy_types & I40E_CAP_PHY_TYPE_40GBASE_AOC)
- ecmd->supported |= SUPPORTED_40000baseCR4_Full;
- if (phy_types & I40E_CAP_PHY_TYPE_40GBASE_CR4_CU ||
- phy_types & I40E_CAP_PHY_TYPE_40GBASE_CR4) {
- ecmd->supported |= SUPPORTED_Autoneg |
- SUPPORTED_40000baseCR4_Full;
- ecmd->advertising |= ADVERTISED_Autoneg |
- ADVERTISED_40000baseCR4_Full;
- }
- if ((phy_types & I40E_CAP_PHY_TYPE_100BASE_TX) &&
- !(phy_types & I40E_CAP_PHY_TYPE_1000BASE_T)) {
- ecmd->supported |= SUPPORTED_Autoneg |
- SUPPORTED_100baseT_Full;
- ecmd->advertising |= ADVERTISED_Autoneg |
- ADVERTISED_100baseT_Full;
- }
- if (phy_types & I40E_CAP_PHY_TYPE_1000BASE_T ||
- phy_types & I40E_CAP_PHY_TYPE_1000BASE_SX ||
- phy_types & I40E_CAP_PHY_TYPE_1000BASE_LX ||
- phy_types & I40E_CAP_PHY_TYPE_1000BASE_T_OPTICAL) {
- ecmd->supported |= SUPPORTED_Autoneg |
- SUPPORTED_1000baseT_Full;
- ecmd->advertising |= ADVERTISED_Autoneg |
- ADVERTISED_1000baseT_Full;
- }
- if (phy_types & I40E_CAP_PHY_TYPE_40GBASE_SR4)
- ecmd->supported |= SUPPORTED_40000baseSR4_Full;
- if (phy_types & I40E_CAP_PHY_TYPE_40GBASE_LR4)
- ecmd->supported |= SUPPORTED_40000baseLR4_Full;
+ i40e_phy_type_to_ethtool(pf, &ecmd->supported,
+ &ecmd->advertising);
/* With no link speed and duplex are unknown */
ethtool_cmd_speed_set(ecmd, SPEED_UNKNOWN);
@@ -497,38 +567,6 @@ static int i40e_get_settings(struct net_device *netdev,
i40e_get_settings_link_down(hw, ecmd, pf);
/* Now set the settings that don't rely on link being up/down */
-
- /* For backplane, supported and advertised are only reliant on the
- * phy types the NVM specifies are supported.
- */
- if (hw->device_id == I40E_DEV_ID_KX_B ||
- hw->device_id == I40E_DEV_ID_KX_C ||
- hw->device_id == I40E_DEV_ID_20G_KR2 ||
- hw->device_id == I40E_DEV_ID_20G_KR2_A) {
- ecmd->supported = SUPPORTED_Autoneg;
- ecmd->advertising = ADVERTISED_Autoneg;
- if (hw->phy.phy_types & I40E_CAP_PHY_TYPE_40GBASE_KR4) {
- ecmd->supported |= SUPPORTED_40000baseKR4_Full;
- ecmd->advertising |= ADVERTISED_40000baseKR4_Full;
- }
- if (hw->phy.phy_types & I40E_CAP_PHY_TYPE_20GBASE_KR2) {
- ecmd->supported |= SUPPORTED_20000baseKR2_Full;
- ecmd->advertising |= ADVERTISED_20000baseKR2_Full;
- }
- if (hw->phy.phy_types & I40E_CAP_PHY_TYPE_10GBASE_KR) {
- ecmd->supported |= SUPPORTED_10000baseKR_Full;
- ecmd->advertising |= ADVERTISED_10000baseKR_Full;
- }
- if (hw->phy.phy_types & I40E_CAP_PHY_TYPE_10GBASE_KX4) {
- ecmd->supported |= SUPPORTED_10000baseKX4_Full;
- ecmd->advertising |= ADVERTISED_10000baseKX4_Full;
- }
- if (hw->phy.phy_types & I40E_CAP_PHY_TYPE_1000BASE_KX) {
- ecmd->supported |= SUPPORTED_1000baseKX_Full;
- ecmd->advertising |= ADVERTISED_1000baseKX_Full;
- }
- }
-
/* Set autoneg settings */
ecmd->autoneg = ((hw_link_info->an_info & I40E_AQ_AN_COMPLETED) ?
AUTONEG_ENABLE : AUTONEG_DISABLE);
@@ -1143,6 +1181,10 @@ static void i40e_get_drvinfo(struct net_device *netdev,
sizeof(drvinfo->fw_version));
strlcpy(drvinfo->bus_info, pci_name(pf->pdev),
sizeof(drvinfo->bus_info));
+ if (pf->hw.pf_id == 0)
+ drvinfo->n_priv_flags = I40E_PRIV_FLAGS_GL_STR_LEN;
+ else
+ drvinfo->n_priv_flags = I40E_PRIV_FLAGS_STR_LEN;
}
static void i40e_get_ringparam(struct net_device *netdev,
@@ -1259,6 +1301,13 @@ static int i40e_set_ringparam(struct net_device *netdev,
}
for (i = 0; i < vsi->num_queue_pairs; i++) {
+ /* this is to allow wr32 to have something to write to
+ * during early allocation of Rx buffers
+ */
+ u32 __iomem faketail = 0;
+ struct i40e_ring *ring;
+ u16 unused;
+
/* clone ring and setup updated count */
rx_rings[i] = *vsi->rx_rings[i];
rx_rings[i].count = new_rx_count;
@@ -1267,12 +1316,22 @@ static int i40e_set_ringparam(struct net_device *netdev,
*/
rx_rings[i].desc = NULL;
rx_rings[i].rx_bi = NULL;
+ rx_rings[i].tail = (u8 __iomem *)&faketail;
err = i40e_setup_rx_descriptors(&rx_rings[i]);
+ if (err)
+ goto rx_unwind;
+
+ /* now allocate the Rx buffers to make sure the OS
+ * has enough memory, any failure here means abort
+ */
+ ring = &rx_rings[i];
+ unused = I40E_DESC_UNUSED(ring);
+ err = i40e_alloc_rx_buffers(ring, unused);
+rx_unwind:
if (err) {
- while (i) {
- i--;
+ do {
i40e_free_rx_resources(&rx_rings[i]);
- }
+ } while (i--);
kfree(rx_rings);
rx_rings = NULL;
@@ -1298,6 +1357,17 @@ static int i40e_set_ringparam(struct net_device *netdev,
if (rx_rings) {
for (i = 0; i < vsi->num_queue_pairs; i++) {
i40e_free_rx_resources(vsi->rx_rings[i]);
+ /* get the real tail offset */
+ rx_rings[i].tail = vsi->rx_rings[i]->tail;
+ /* this is to fake out the allocation routine
+ * into thinking it has to realloc everything
+ * but the recycling logic will let us re-use
+ * the buffers allocated above
+ */
+ rx_rings[i].next_to_use = 0;
+ rx_rings[i].next_to_clean = 0;
+ rx_rings[i].next_to_alloc = 0;
+ /* do a struct copy */
*vsi->rx_rings[i] = rx_rings[i];
}
kfree(rx_rings);
@@ -1342,7 +1412,10 @@ static int i40e_get_sset_count(struct net_device *netdev, int sset)
return I40E_VSI_STATS_LEN(netdev);
}
case ETH_SS_PRIV_FLAGS:
- return I40E_PRIV_FLAGS_STR_LEN;
+ if (pf->hw.pf_id == 0)
+ return I40E_PRIV_FLAGS_GL_STR_LEN;
+ else
+ return I40E_PRIV_FLAGS_STR_LEN;
default:
return -EOPNOTSUPP;
}
@@ -1540,10 +1613,18 @@ static void i40e_get_strings(struct net_device *netdev, u32 stringset,
/* BUG_ON(p - data != I40E_STATS_LEN * ETH_GSTRING_LEN); */
break;
case ETH_SS_PRIV_FLAGS:
- for (i = 0; i < I40E_PRIV_FLAGS_STR_LEN; i++) {
- memcpy(data, i40e_priv_flags_strings[i],
- ETH_GSTRING_LEN);
- data += ETH_GSTRING_LEN;
+ if (pf->hw.pf_id == 0) {
+ for (i = 0; i < I40E_PRIV_FLAGS_GL_STR_LEN; i++) {
+ memcpy(data, i40e_priv_flags_strings_gl[i],
+ ETH_GSTRING_LEN);
+ data += ETH_GSTRING_LEN;
+ }
+ } else {
+ for (i = 0; i < I40E_PRIV_FLAGS_STR_LEN; i++) {
+ memcpy(data, i40e_priv_flags_strings[i],
+ ETH_GSTRING_LEN);
+ data += ETH_GSTRING_LEN;
+ }
}
break;
default:
@@ -1714,7 +1795,7 @@ static void i40e_diag_test(struct net_device *netdev,
/* If the device is online then take it offline */
if (if_running)
/* indicate we're in test mode */
- dev_close(netdev);
+ i40e_close(netdev);
else
/* This reset does not affect link - if it is
* changed to a type of reset that does affect
@@ -1743,7 +1824,7 @@ static void i40e_diag_test(struct net_device *netdev,
i40e_do_reset(pf, BIT(__I40E_PF_RESET_REQUESTED));
if (if_running)
- dev_open(netdev);
+ i40e_open(netdev);
} else {
/* Online tests */
netif_info(pf, drv, netdev, "online testing starting\n");
@@ -1837,7 +1918,7 @@ static int i40e_set_phys_id(struct net_device *netdev,
if (!(pf->flags & I40E_FLAG_HAVE_10GBASET_PHY)) {
pf->led_status = i40e_led_get(hw);
} else {
- i40e_aq_set_phy_debug(hw, I40E_PHY_DEBUG_PORT, NULL);
+ i40e_aq_set_phy_debug(hw, I40E_PHY_DEBUG_ALL, NULL);
ret = i40e_led_get_phy(hw, &temp_status,
&pf->phy_led_val);
pf->led_status = temp_status;
@@ -2490,7 +2571,6 @@ static int i40e_add_fdir_ethtool(struct i40e_vsi *vsi,
if (!vsi)
return -EINVAL;
-
pf = vsi->back;
if (!(pf->flags & I40E_FLAG_FD_SB_ENABLED))
@@ -2548,15 +2628,18 @@ static int i40e_add_fdir_ethtool(struct i40e_vsi *vsi,
input->src_ip[0] = fsp->h_u.tcp_ip4_spec.ip4dst;
if (ntohl(fsp->m_ext.data[1])) {
- if (ntohl(fsp->h_ext.data[1]) >= pf->num_alloc_vfs) {
- netif_info(pf, drv, vsi->netdev, "Invalid VF id\n");
+ vf_id = ntohl(fsp->h_ext.data[1]);
+ if (vf_id >= pf->num_alloc_vfs) {
+ netif_info(pf, drv, vsi->netdev,
+ "Invalid VF id %d\n", vf_id);
goto free_input;
}
- vf_id = ntohl(fsp->h_ext.data[1]);
/* Find vsi id from vf id and override dest vsi */
input->dest_vsi = pf->vf[vf_id].lan_vsi_id;
if (input->q_index >= pf->vf[vf_id].num_queue_pairs) {
- netif_info(pf, drv, vsi->netdev, "Invalid queue id\n");
+ netif_info(pf, drv, vsi->netdev,
+ "Invalid queue id %d for VF %d\n",
+ input->q_index, vf_id);
goto free_input;
}
}
@@ -2803,18 +2886,18 @@ static u32 i40e_get_priv_flags(struct net_device *dev)
struct i40e_pf *pf = vsi->back;
u32 ret_flags = 0;
- ret_flags |= pf->hw.func_caps.npar_enable ?
- I40E_PRIV_FLAGS_NPAR_FLAG : 0;
ret_flags |= pf->flags & I40E_FLAG_LINK_POLLING_ENABLED ?
I40E_PRIV_FLAGS_LINKPOLL_FLAG : 0;
ret_flags |= pf->flags & I40E_FLAG_FD_ATR_ENABLED ?
I40E_PRIV_FLAGS_FD_ATR : 0;
ret_flags |= pf->flags & I40E_FLAG_VEB_STATS_ENABLED ?
I40E_PRIV_FLAGS_VEB_STATS : 0;
- ret_flags |= pf->flags & I40E_FLAG_RX_PS_ENABLED ?
- I40E_PRIV_FLAGS_PS : 0;
ret_flags |= pf->auto_disable_flags & I40E_FLAG_HW_ATR_EVICT_CAPABLE ?
0 : I40E_PRIV_FLAGS_HW_ATR_EVICT;
+ if (pf->hw.pf_id == 0) {
+ ret_flags |= pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT ?
+ I40E_PRIV_FLAGS_TRUE_PROMISC_SUPPORT : 0;
+ }
return ret_flags;
}
@@ -2829,27 +2912,13 @@ static int i40e_set_priv_flags(struct net_device *dev, u32 flags)
struct i40e_netdev_priv *np = netdev_priv(dev);
struct i40e_vsi *vsi = np->vsi;
struct i40e_pf *pf = vsi->back;
+ u16 sw_flags = 0, valid_flags = 0;
bool reset_required = false;
+ bool promisc_change = false;
+ int ret;
/* NOTE: MFP is not settable */
- /* allow the user to control the method of receive
- * buffer DMA, whether the packet is split at header
- * boundaries into two separate buffers. In some cases
- * one routine or the other will perform better.
- */
- if ((flags & I40E_PRIV_FLAGS_PS) &&
- !(pf->flags & I40E_FLAG_RX_PS_ENABLED)) {
- pf->flags |= I40E_FLAG_RX_PS_ENABLED;
- pf->flags &= ~I40E_FLAG_RX_1BUF_ENABLED;
- reset_required = true;
- } else if (!(flags & I40E_PRIV_FLAGS_PS) &&
- (pf->flags & I40E_FLAG_RX_PS_ENABLED)) {
- pf->flags &= ~I40E_FLAG_RX_PS_ENABLED;
- pf->flags |= I40E_FLAG_RX_1BUF_ENABLED;
- reset_required = true;
- }
-
if (flags & I40E_PRIV_FLAGS_LINKPOLL_FLAG)
pf->flags |= I40E_FLAG_LINK_POLLING_ENABLED;
else
@@ -2876,6 +2945,33 @@ static int i40e_set_priv_flags(struct net_device *dev, u32 flags)
reset_required = true;
}
+ if (pf->hw.pf_id == 0) {
+ if ((flags & I40E_PRIV_FLAGS_TRUE_PROMISC_SUPPORT) &&
+ !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT)) {
+ pf->flags |= I40E_FLAG_TRUE_PROMISC_SUPPORT;
+ promisc_change = true;
+ } else if (!(flags & I40E_PRIV_FLAGS_TRUE_PROMISC_SUPPORT) &&
+ (pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT)) {
+ pf->flags &= ~I40E_FLAG_TRUE_PROMISC_SUPPORT;
+ promisc_change = true;
+ }
+ }
+ if (promisc_change) {
+ if (!(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT))
+ sw_flags = I40E_AQ_SET_SWITCH_CFG_PROMISC;
+ valid_flags = I40E_AQ_SET_SWITCH_CFG_PROMISC;
+ ret = i40e_aq_set_switch_config(&pf->hw, sw_flags, valid_flags,
+ NULL);
+ if (ret && pf->hw.aq.asq_last_status != I40E_AQ_RC_ESRCH) {
+ dev_info(&pf->pdev->dev,
+ "couldn't set switch config bits, err %s aq_err %s\n",
+ i40e_stat_str(&pf->hw, ret),
+ i40e_aq_str(&pf->hw,
+ pf->hw.aq.asq_last_status));
+ /* not a fatal problem, just keep going */
+ }
+ }
+
if ((flags & I40E_PRIV_FLAGS_HW_ATR_EVICT) &&
(pf->flags & I40E_FLAG_HW_ATR_EVICT_CAPABLE))
pf->auto_disable_flags &= ~I40E_FLAG_HW_ATR_EVICT_CAPABLE;
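The ethtool changes above introduce a second, global private-flag string table that only PF 0 reports (adding vf-true-promisc-support), so get_sset_count(), get_strings() and drvinfo->n_priv_flags all have to size themselves from the same arrays. A small standalone sketch of that pattern, with local copies of the string tables for illustration only:

/* Standalone sketch of the "two string tables" pattern used above: PF 0
 * exposes the global flag names (including vf-true-promisc-support),
 * other PFs expose the shorter per-PF set, and every reporting path must
 * size itself from the same ARRAY_SIZE so counts and strings stay in sync.
 * The tables below are local copies for illustration only.
 */
#include <stdio.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

static const char *priv_flags_gl[] = {
    "MFP", "LinkPolling", "flow-director-atr", "veb-stats",
    "hw-atr-eviction", "vf-true-promisc-support",
};

static const char *priv_flags[] = {
    "NPAR", "LinkPolling", "flow-director-atr", "veb-stats",
    "hw-atr-eviction",
};

static void dump_priv_flag_names(unsigned int pf_id)
{
    const char **names = pf_id == 0 ? priv_flags_gl : priv_flags;
    size_t count = pf_id == 0 ? ARRAY_SIZE(priv_flags_gl)
                              : ARRAY_SIZE(priv_flags);
    size_t i;

    for (i = 0; i < count; i++)
        printf("flag %zu: %s\n", i, names[i]);
}

int main(void)
{
    dump_priv_flag_names(0); /* global (PF 0) view */
    dump_priv_flag_names(1); /* per-PF view */
    return 0;
}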
diff --git a/drivers/net/ethernet/intel/i40e/i40e_fcoe.c b/drivers/net/ethernet/intel/i40e/i40e_fcoe.c
index 8ad162c16f61..58e6c1570335 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_fcoe.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_fcoe.c
@@ -1,7 +1,7 @@
/*******************************************************************************
*
* Intel Ethernet Controller XL710 Family Linux Driver
- * Copyright(c) 2013 - 2015 Intel Corporation.
+ * Copyright(c) 2013 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
@@ -38,16 +38,6 @@
#include "i40e_fcoe.h"
/**
- * i40e_rx_is_fcoe - returns true if the rx packet type is FCoE
- * @ptype: the packet type field from rx descriptor write-back
- **/
-static inline bool i40e_rx_is_fcoe(u16 ptype)
-{
- return (ptype >= I40E_RX_PTYPE_L2_FCOE_PAY3) &&
- (ptype <= I40E_RX_PTYPE_L2_FCOE_VFT_FCOTHER);
-}
-
-/**
* i40e_fcoe_sof_is_class2 - returns true if this is a FC Class 2 SOF
* @sof: the FCoE start of frame delimiter
**/
@@ -1371,7 +1361,7 @@ static netdev_tx_t i40e_fcoe_xmit_frame(struct sk_buff *skb,
if (i40e_chk_linearize(skb, count)) {
if (__skb_linearize(skb))
goto out_drop;
- count = TXD_USE_COUNT(skb->len);
+ count = i40e_txd_use_count(skb->len);
tx_ring->tx_stats.tx_linearize++;
}
diff --git a/drivers/net/ethernet/intel/i40e/i40e_hmc.c b/drivers/net/ethernet/intel/i40e/i40e_hmc.c
index 5ebe12d56ebf..a7c7b1d9b7c8 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_hmc.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_hmc.c
@@ -49,7 +49,7 @@ i40e_status i40e_add_sd_table_entry(struct i40e_hw *hw,
struct i40e_hmc_sd_entry *sd_entry;
bool dma_mem_alloc_done = false;
struct i40e_dma_mem mem;
- i40e_status ret_code;
+ i40e_status ret_code = I40E_SUCCESS;
u64 alloc_len;
if (NULL == hmc_info->sd_table.sd_entry) {
diff --git a/drivers/net/ethernet/intel/i40e/i40e_main.c b/drivers/net/ethernet/intel/i40e/i40e_main.c
index 344912957cab..5ea22008d721 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_main.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_main.c
@@ -45,8 +45,8 @@ static const char i40e_driver_string[] =
#define DRV_KERN "-k"
#define DRV_VERSION_MAJOR 1
-#define DRV_VERSION_MINOR 4
-#define DRV_VERSION_BUILD 25
+#define DRV_VERSION_MINOR 5
+#define DRV_VERSION_BUILD 16
#define DRV_VERSION __stringify(DRV_VERSION_MAJOR) "." \
__stringify(DRV_VERSION_MINOR) "." \
__stringify(DRV_VERSION_BUILD) DRV_KERN
@@ -90,6 +90,8 @@ static const struct pci_device_id i40e_pci_tbl[] = {
{PCI_VDEVICE(INTEL, I40E_DEV_ID_SFP_X722), 0},
{PCI_VDEVICE(INTEL, I40E_DEV_ID_1G_BASE_T_X722), 0},
{PCI_VDEVICE(INTEL, I40E_DEV_ID_10G_BASE_T_X722), 0},
+ {PCI_VDEVICE(INTEL, I40E_DEV_ID_SFP_I_X722), 0},
+ {PCI_VDEVICE(INTEL, I40E_DEV_ID_QSFP_I_X722), 0},
{PCI_VDEVICE(INTEL, I40E_DEV_ID_20G_KR2), 0},
{PCI_VDEVICE(INTEL, I40E_DEV_ID_20G_KR2_A), 0},
/* required last entry */
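The two new X722 device IDs show the usual three-step pattern: a define in i40e_devids.h, a case in i40e_set_mac_type(), and a {PCI_VDEVICE(...), 0} row ahead of the required all-zero terminator in the probe table. A self-contained model of such a terminated ID table and its lookup, using mock types rather than the PCI core's:

/* Standalone model of the ID-table pattern above: each new device gets a
 * row in a table that ends with an all-zero "required last entry", and a
 * probe-time lookup walks the table until it hits that terminator. The
 * struct and lookup here are mocks, not the PCI core's real types.
 */
#include <stdint.h>
#include <stdio.h>

struct mock_device_id {
    uint16_t vendor;
    uint16_t device;
};

#define MOCK_VENDOR_INTEL       0x8086
#define MOCK_DEV_ID_SFP_I_X722  0x37D3 /* values mirror the hunk above */
#define MOCK_DEV_ID_QSFP_I_X722 0x37D4

static const struct mock_device_id mock_id_table[] = {
    { MOCK_VENDOR_INTEL, MOCK_DEV_ID_SFP_I_X722 },
    { MOCK_VENDOR_INTEL, MOCK_DEV_ID_QSFP_I_X722 },
    { 0, 0 } /* required last entry */
};

static int mock_id_match(uint16_t vendor, uint16_t device)
{
    const struct mock_device_id *id;

    for (id = mock_id_table; id->vendor || id->device; id++)
        if (id->vendor == vendor && id->device == device)
            return 1;
    return 0;
}

int main(void)
{
    printf("0x37D3 matched: %d\n", mock_id_match(0x8086, 0x37D3));
    printf("0x1234 matched: %d\n", mock_id_match(0x8086, 0x1234));
    return 0;
}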
@@ -326,7 +328,7 @@ static void i40e_tx_timeout(struct net_device *netdev)
unsigned long trans_start;
q = netdev_get_tx_queue(netdev, i);
- trans_start = q->trans_start ? : netdev->trans_start;
+ trans_start = q->trans_start;
if (netif_xmit_stopped(q) &&
time_after(jiffies,
(trans_start + netdev->watchdog_timeo))) {
@@ -396,24 +398,6 @@ static void i40e_tx_timeout(struct net_device *netdev)
}
/**
- * i40e_release_rx_desc - Store the new tail and head values
- * @rx_ring: ring to bump
- * @val: new head index
- **/
-static inline void i40e_release_rx_desc(struct i40e_ring *rx_ring, u32 val)
-{
- rx_ring->next_to_use = val;
-
- /* Force memory writes to complete before letting h/w
- * know there are new descriptors to fetch. (Only
- * applicable for weak-ordered memory model archs,
- * such as IA-64).
- */
- wmb();
- writel(val, rx_ring->tail);
-}
-
-/**
* i40e_get_vsi_stats_struct - Get System Network Statistics
* @vsi: the VSI we care about
*
@@ -2097,6 +2081,12 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
}
}
+ /* if the VF is not trusted do not do promisc */
+ if ((vsi->type == I40E_VSI_SRIOV) && !pf->vf[vsi->vf_id].trusted) {
+ clear_bit(__I40E_FILTER_OVERFLOW_PROMISC, &vsi->state);
+ goto out;
+ }
+
/* check for changes in promiscuous modes */
if (changed_flags & IFF_ALLMULTI) {
bool cur_multipromisc;
@@ -2138,7 +2128,8 @@ int i40e_sync_vsi_filters(struct i40e_vsi *vsi)
aq_ret = i40e_aq_set_vsi_unicast_promiscuous(
&vsi->back->hw,
vsi->seid,
- cur_promisc, NULL);
+ cur_promisc, NULL,
+ true);
if (aq_ret) {
retval =
i40e_aq_rc_to_posix(aq_ret,
@@ -2865,34 +2856,21 @@ static int i40e_configure_rx_ring(struct i40e_ring *ring)
memset(&rx_ctx, 0, sizeof(rx_ctx));
ring->rx_buf_len = vsi->rx_buf_len;
- ring->rx_hdr_len = vsi->rx_hdr_len;
rx_ctx.dbuff = ring->rx_buf_len >> I40E_RXQ_CTX_DBUFF_SHIFT;
- rx_ctx.hbuff = ring->rx_hdr_len >> I40E_RXQ_CTX_HBUFF_SHIFT;
rx_ctx.base = (ring->dma / 128);
rx_ctx.qlen = ring->count;
- if (vsi->back->flags & I40E_FLAG_16BYTE_RX_DESC_ENABLED) {
- set_ring_16byte_desc_enabled(ring);
- rx_ctx.dsize = 0;
- } else {
- rx_ctx.dsize = 1;
- }
+ /* use 32 byte descriptors */
+ rx_ctx.dsize = 1;
- rx_ctx.dtype = vsi->dtype;
- if (vsi->dtype) {
- set_ring_ps_enabled(ring);
- rx_ctx.hsplit_0 = I40E_RX_SPLIT_L2 |
- I40E_RX_SPLIT_IP |
- I40E_RX_SPLIT_TCP_UDP |
- I40E_RX_SPLIT_SCTP;
- } else {
- rx_ctx.hsplit_0 = 0;
- }
+ /* descriptor type is always zero
+ * rx_ctx.dtype = 0;
+ */
+ rx_ctx.hsplit_0 = 0;
- rx_ctx.rxmax = min_t(u16, vsi->max_frame,
- (chain_len * ring->rx_buf_len));
+ rx_ctx.rxmax = min_t(u16, vsi->max_frame, chain_len * ring->rx_buf_len);
if (hw->revision_id == 0)
rx_ctx.lrxqthresh = 0;
else
@@ -2929,12 +2907,7 @@ static int i40e_configure_rx_ring(struct i40e_ring *ring)
ring->tail = hw->hw_addr + I40E_QRX_TAIL(pf_q);
writel(0, ring->tail);
- if (ring_is_ps_enabled(ring)) {
- i40e_alloc_rx_headers(ring);
- i40e_alloc_rx_buffers_ps(ring, I40E_DESC_UNUSED(ring));
- } else {
- i40e_alloc_rx_buffers_1buf(ring, I40E_DESC_UNUSED(ring));
- }
+ i40e_alloc_rx_buffers(ring, I40E_DESC_UNUSED(ring));
return 0;
}
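With packet split removed, the Rx queue context above always uses 32-byte descriptors (dsize = 1), no header split (hsplit_0 = 0), and a buffer length that is first aligned to the chip's granularity and then programmed as a shifted dbuff value. The sketch below reproduces only that rounding/shift arithmetic; the shift of 7 (128-byte units) is an assumed example value, not something stated in the hunk.

/* Sketch of the Rx buffer-length encoding seen above: rx_buf_len is
 * rounded up to a multiple of BIT(shift) and then programmed into the
 * queue context as a shifted value. The shift of 7 (128-byte units) is an
 * assumption for this example; only the rounding/shift pattern itself is
 * what the hunk shows.
 */
#include <stdio.h>

#define BIT(n)              (1UL << (n))
#define ALIGN(x, a)         (((x) + ((a) - 1)) & ~((a) - 1))
#define EXAMPLE_DBUFF_SHIFT 7    /* assumed unit size of 128 bytes */
#define EXAMPLE_RXBUFFER    2048

int main(void)
{
    unsigned long rx_buf_len = EXAMPLE_RXBUFFER;

    /* round up for the chip's needs, as the hunk does with ALIGN() */
    rx_buf_len = ALIGN(rx_buf_len, BIT(EXAMPLE_DBUFF_SHIFT));

    printf("rx_buf_len = %lu, dbuff field = %lu\n",
           rx_buf_len, rx_buf_len >> EXAMPLE_DBUFF_SHIFT);
    return 0;
}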
@@ -2973,40 +2946,18 @@ static int i40e_vsi_configure_rx(struct i40e_vsi *vsi)
else
vsi->max_frame = I40E_RXBUFFER_2048;
- /* figure out correct receive buffer length */
- switch (vsi->back->flags & (I40E_FLAG_RX_1BUF_ENABLED |
- I40E_FLAG_RX_PS_ENABLED)) {
- case I40E_FLAG_RX_1BUF_ENABLED:
- vsi->rx_hdr_len = 0;
- vsi->rx_buf_len = vsi->max_frame;
- vsi->dtype = I40E_RX_DTYPE_NO_SPLIT;
- break;
- case I40E_FLAG_RX_PS_ENABLED:
- vsi->rx_hdr_len = I40E_RX_HDR_SIZE;
- vsi->rx_buf_len = I40E_RXBUFFER_2048;
- vsi->dtype = I40E_RX_DTYPE_HEADER_SPLIT;
- break;
- default:
- vsi->rx_hdr_len = I40E_RX_HDR_SIZE;
- vsi->rx_buf_len = I40E_RXBUFFER_2048;
- vsi->dtype = I40E_RX_DTYPE_SPLIT_ALWAYS;
- break;
- }
+ vsi->rx_buf_len = I40E_RXBUFFER_2048;
#ifdef I40E_FCOE
/* setup rx buffer for FCoE */
if ((vsi->type == I40E_VSI_FCOE) &&
(vsi->back->flags & I40E_FLAG_FCOE_ENABLED)) {
- vsi->rx_hdr_len = 0;
vsi->rx_buf_len = I40E_RXBUFFER_3072;
vsi->max_frame = I40E_RXBUFFER_3072;
- vsi->dtype = I40E_RX_DTYPE_NO_SPLIT;
}
#endif /* I40E_FCOE */
/* round up for the chip's needs */
- vsi->rx_hdr_len = ALIGN(vsi->rx_hdr_len,
- BIT_ULL(I40E_RXQ_CTX_HBUFF_SHIFT));
vsi->rx_buf_len = ALIGN(vsi->rx_buf_len,
BIT_ULL(I40E_RXQ_CTX_DBUFF_SHIFT));
@@ -4164,7 +4115,7 @@ static void i40e_clear_interrupt_scheme(struct i40e_pf *pf)
int i;
i40e_stop_misc_vector(pf);
- if (pf->flags & I40E_FLAG_MSIX_ENABLED) {
+ if (pf->flags & I40E_FLAG_MSIX_ENABLED && pf->msix_entries) {
synchronize_irq(pf->msix_entries[0].vector);
free_irq(pf->msix_entries[0].vector, pf);
}
@@ -5509,11 +5460,7 @@ static void i40e_fdir_filter_exit(struct i40e_pf *pf)
*
* Returns 0, this is not allowed to fail
**/
-#ifdef I40E_FCOE
int i40e_close(struct net_device *netdev)
-#else
-static int i40e_close(struct net_device *netdev)
-#endif
{
struct i40e_netdev_priv *np = netdev_priv(netdev);
struct i40e_vsi *vsi = np->vsi;
@@ -5538,8 +5485,6 @@ void i40e_do_reset(struct i40e_pf *pf, u32 reset_flags)
WARN_ON(in_interrupt());
- if (i40e_check_asq_alive(&pf->hw))
- i40e_vc_notify_reset(pf);
/* do the biggest reset indicated */
if (reset_flags & BIT_ULL(__I40E_GLOBAL_RESET_REQUESTED)) {
@@ -6377,7 +6322,7 @@ static void i40e_clean_adminq_subtask(struct i40e_pf *pf)
break;
default:
dev_info(&pf->pdev->dev,
- "ARQ Error: Unknown event 0x%04x received\n",
+ "ARQ: Unknown event 0x%04x ignored\n",
opcode);
break;
}
@@ -6742,6 +6687,8 @@ static void i40e_prep_for_reset(struct i40e_pf *pf)
clear_bit(__I40E_RESET_INTR_RECEIVED, &pf->state);
if (test_and_set_bit(__I40E_RESET_RECOVERY_PENDING, &pf->state))
return;
+ if (i40e_check_asq_alive(&pf->hw))
+ i40e_vc_notify_reset(pf);
dev_dbg(&pf->pdev->dev, "Tearing down internal switch for reset\n");
@@ -6862,6 +6809,7 @@ static void i40e_reset_and_rebuild(struct i40e_pf *pf, bool reinit)
*/
ret = i40e_aq_set_phy_int_mask(&pf->hw,
~(I40E_AQ_EVENT_LINK_UPDOWN |
+ I40E_AQ_EVENT_MEDIA_NA |
I40E_AQ_EVENT_MODULE_QUAL_FAIL), NULL);
if (ret)
dev_info(&pf->pdev->dev, "set phy mask fail, err %s aq_err %s\n",
@@ -7525,10 +7473,6 @@ static int i40e_alloc_rings(struct i40e_vsi *vsi)
rx_ring->count = vsi->num_desc;
rx_ring->size = 0;
rx_ring->dcb_tc = 0;
- if (pf->flags & I40E_FLAG_16BYTE_RX_DESC_ENABLED)
- set_ring_16byte_desc_enabled(rx_ring);
- else
- clear_ring_16byte_desc_enabled(rx_ring);
rx_ring->rx_itr_setting = pf->rx_itr_default;
vsi->rx_rings[i] = rx_ring;
}
@@ -8084,24 +8028,45 @@ static int i40e_config_rss_reg(struct i40e_vsi *vsi, const u8 *seed,
{
struct i40e_pf *pf = vsi->back;
struct i40e_hw *hw = &pf->hw;
+ u16 vf_id = vsi->vf_id;
u8 i;
/* Fill out hash function seed */
if (seed) {
u32 *seed_dw = (u32 *)seed;
- for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
- i40e_write_rx_ctl(hw, I40E_PFQF_HKEY(i), seed_dw[i]);
+ if (vsi->type == I40E_VSI_MAIN) {
+ for (i = 0; i <= I40E_PFQF_HKEY_MAX_INDEX; i++)
+ i40e_write_rx_ctl(hw, I40E_PFQF_HKEY(i),
+ seed_dw[i]);
+ } else if (vsi->type == I40E_VSI_SRIOV) {
+ for (i = 0; i <= I40E_VFQF_HKEY1_MAX_INDEX; i++)
+ i40e_write_rx_ctl(hw,
+ I40E_VFQF_HKEY1(i, vf_id),
+ seed_dw[i]);
+ } else {
+ dev_err(&pf->pdev->dev, "Cannot set RSS seed - invalid VSI type\n");
+ }
}
if (lut) {
u32 *lut_dw = (u32 *)lut;
- if (lut_size != I40E_HLUT_ARRAY_SIZE)
- return -EINVAL;
-
- for (i = 0; i <= I40E_PFQF_HLUT_MAX_INDEX; i++)
- wr32(hw, I40E_PFQF_HLUT(i), lut_dw[i]);
+ if (vsi->type == I40E_VSI_MAIN) {
+ if (lut_size != I40E_HLUT_ARRAY_SIZE)
+ return -EINVAL;
+ for (i = 0; i <= I40E_PFQF_HLUT_MAX_INDEX; i++)
+ wr32(hw, I40E_PFQF_HLUT(i), lut_dw[i]);
+ } else if (vsi->type == I40E_VSI_SRIOV) {
+ if (lut_size != I40E_VF_HLUT_ARRAY_SIZE)
+ return -EINVAL;
+ for (i = 0; i <= I40E_VFQF_HLUT_MAX_INDEX; i++)
+ i40e_write_rx_ctl(hw,
+ I40E_VFQF_HLUT1(i, vf_id),
+ lut_dw[i]);
+ } else {
+ dev_err(&pf->pdev->dev, "Cannot set RSS LUT - invalid VSI type\n");
+ }
}
i40e_flush(hw);
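The reworked i40e_config_rss_reg() above picks its register block by VSI type: the main VSI programs the PF key/LUT registers, while an SR-IOV VSI writes the per-VF VFQF_HKEY1/VFQF_HLUT1 registers indexed by vf_id, each with its own expected LUT size. A simplified, self-contained model of that branching, with the register writes and sizes mocked for illustration:

/* Simplified model of the register selection logic above: the main VSI
 * programs the PF RSS LUT registers while an SR-IOV VSI programs the
 * per-VF register block indexed by vf_id, and each path checks its own
 * LUT size. The register "writes" and sizes here are mocked; only the
 * branching mirrors the hunk.
 */
#include <stdio.h>

enum mock_vsi_type { MOCK_VSI_MAIN, MOCK_VSI_SRIOV };

#define MOCK_PF_HLUT_DWORDS 128 /* illustrative sizes, not hardware values */
#define MOCK_VF_HLUT_DWORDS 16

static int mock_config_rss_lut(enum mock_vsi_type type, unsigned int vf_id,
                               const unsigned int *lut, unsigned int lut_dwords)
{
    unsigned int i;

    if (type == MOCK_VSI_MAIN) {
        if (lut_dwords != MOCK_PF_HLUT_DWORDS)
            return -1;
        for (i = 0; i < lut_dwords; i++)
            printf("write PFQF_HLUT[%u] = 0x%08x\n", i, lut[i]);
    } else if (type == MOCK_VSI_SRIOV) {
        if (lut_dwords != MOCK_VF_HLUT_DWORDS)
            return -1;
        for (i = 0; i < lut_dwords; i++)
            printf("write VFQF_HLUT1[%u, vf %u] = 0x%08x\n",
                   i, vf_id, lut[i]);
    } else {
        return -1; /* invalid VSI type for RSS */
    }
    return 0;
}

int main(void)
{
    unsigned int lut[MOCK_VF_HLUT_DWORDS] = { 0 };

    return mock_config_rss_lut(MOCK_VSI_SRIOV, 3, lut, MOCK_VF_HLUT_DWORDS);
}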
@@ -8440,7 +8405,6 @@ static int i40e_sw_init(struct i40e_pf *pf)
pf->msg_enable = netif_msg_init(I40E_DEFAULT_MSG_ENABLE,
(NETIF_MSG_DRV|NETIF_MSG_PROBE|NETIF_MSG_LINK));
- pf->hw.debug_mask = pf->msg_enable | I40E_DEBUG_DIAG;
if (debug != -1 && debug != I40E_DEFAULT_MSG_ENABLE) {
if (I40E_DEBUG_USER & debug)
pf->hw.debug_mask = debug;
@@ -8451,14 +8415,8 @@ static int i40e_sw_init(struct i40e_pf *pf)
/* Set default capability flags */
pf->flags = I40E_FLAG_RX_CSUM_ENABLED |
I40E_FLAG_MSI_ENABLED |
- I40E_FLAG_LINK_POLLING_ENABLED |
I40E_FLAG_MSIX_ENABLED;
- if (iommu_present(&pci_bus_type))
- pf->flags |= I40E_FLAG_RX_PS_ENABLED;
- else
- pf->flags |= I40E_FLAG_RX_1BUF_ENABLED;
-
/* Set default ITR */
pf->rx_itr_default = I40E_ITR_DYNAMIC | I40E_ITR_RX_DEF;
pf->tx_itr_default = I40E_ITR_DYNAMIC | I40E_ITR_TX_DEF;
@@ -9074,6 +9032,7 @@ static const struct net_device_ops i40e_netdev_ops = {
.ndo_get_vf_config = i40e_ndo_get_vf_config,
.ndo_set_vf_link_state = i40e_ndo_set_vf_link_state,
.ndo_set_vf_spoofchk = i40e_ndo_set_vf_spoofchk,
+ .ndo_set_vf_trust = i40e_ndo_set_vf_trust,
#if IS_ENABLED(CONFIG_VXLAN)
.ndo_add_vxlan_port = i40e_add_vxlan_port,
.ndo_del_vxlan_port = i40e_del_vxlan_port,
@@ -9114,40 +9073,44 @@ static int i40e_config_netdev(struct i40e_vsi *vsi)
np = netdev_priv(netdev);
np->vsi = vsi;
- netdev->hw_enc_features |= NETIF_F_IP_CSUM |
- NETIF_F_IPV6_CSUM |
- NETIF_F_TSO |
- NETIF_F_TSO6 |
- NETIF_F_TSO_ECN |
- NETIF_F_GSO_GRE |
- NETIF_F_GSO_UDP_TUNNEL |
- NETIF_F_GSO_UDP_TUNNEL_CSUM |
+ netdev->hw_enc_features |= NETIF_F_SG |
+ NETIF_F_IP_CSUM |
+ NETIF_F_IPV6_CSUM |
+ NETIF_F_HIGHDMA |
+ NETIF_F_SOFT_FEATURES |
+ NETIF_F_TSO |
+ NETIF_F_TSO_ECN |
+ NETIF_F_TSO6 |
+ NETIF_F_GSO_GRE |
+ NETIF_F_GSO_GRE_CSUM |
+ NETIF_F_GSO_IPXIP4 |
+ NETIF_F_GSO_IPXIP6 |
+ NETIF_F_GSO_UDP_TUNNEL |
+ NETIF_F_GSO_UDP_TUNNEL_CSUM |
+ NETIF_F_GSO_PARTIAL |
+ NETIF_F_SCTP_CRC |
+ NETIF_F_RXHASH |
+ NETIF_F_RXCSUM |
0;
- netdev->features = NETIF_F_SG |
- NETIF_F_IP_CSUM |
- NETIF_F_SCTP_CRC |
- NETIF_F_HIGHDMA |
- NETIF_F_GSO_UDP_TUNNEL |
- NETIF_F_GSO_GRE |
- NETIF_F_HW_VLAN_CTAG_TX |
- NETIF_F_HW_VLAN_CTAG_RX |
- NETIF_F_HW_VLAN_CTAG_FILTER |
- NETIF_F_IPV6_CSUM |
- NETIF_F_TSO |
- NETIF_F_TSO_ECN |
- NETIF_F_TSO6 |
- NETIF_F_RXCSUM |
- NETIF_F_RXHASH |
- 0;
+ if (!(pf->flags & I40E_FLAG_OUTER_UDP_CSUM_CAPABLE))
+ netdev->gso_partial_features |= NETIF_F_GSO_UDP_TUNNEL_CSUM;
+
+ netdev->gso_partial_features |= NETIF_F_GSO_GRE_CSUM;
+
+ /* record features VLANs can make use of */
+ netdev->vlan_features |= netdev->hw_enc_features |
+ NETIF_F_TSO_MANGLEID;
if (!(pf->flags & I40E_FLAG_MFP_ENABLED))
- netdev->features |= NETIF_F_NTUPLE;
- if (pf->flags & I40E_FLAG_OUTER_UDP_CSUM_CAPABLE)
- netdev->features |= NETIF_F_GSO_UDP_TUNNEL_CSUM;
+ netdev->hw_features |= NETIF_F_NTUPLE;
- /* copy netdev features into list of user selectable features */
- netdev->hw_features |= netdev->features;
+ netdev->hw_features |= netdev->hw_enc_features |
+ NETIF_F_HW_VLAN_CTAG_TX |
+ NETIF_F_HW_VLAN_CTAG_RX;
+
+ netdev->features |= netdev->hw_features | NETIF_F_HW_VLAN_CTAG_FILTER;
+ netdev->hw_enc_features |= NETIF_F_TSO_MANGLEID;
if (vsi->type == I40E_VSI_MAIN) {
SET_NETDEV_DEV(netdev, &pf->pdev->dev);
@@ -9163,6 +9126,12 @@ static int i40e_config_netdev(struct i40e_vsi *vsi)
I40E_VLAN_ANY, false, true);
spin_unlock_bh(&vsi->mac_filter_list_lock);
}
+ } else if ((pf->hw.aq.api_maj_ver > 1) ||
+ ((pf->hw.aq.api_maj_ver == 1) &&
+ (pf->hw.aq.api_min_ver > 4))) {
+ /* Supported in FW API version higher than 1.4 */
+ pf->flags |= I40E_FLAG_GENEVE_OFFLOAD_CAPABLE;
+ pf->auto_disable_flags = I40E_FLAG_HW_ATR_EVICT_CAPABLE;
} else {
/* relate the VSI_VMDQ name to the VSI_MAIN name */
snprintf(netdev->name, IFNAMSIZ, "%sv%%d",
@@ -9180,12 +9149,7 @@ static int i40e_config_netdev(struct i40e_vsi *vsi)
ether_addr_copy(netdev->dev_addr, mac_addr);
ether_addr_copy(netdev->perm_addr, mac_addr);
- /* vlan gets same features (except vlan offload)
- * after any tweaks for specific VSI types
- */
- netdev->vlan_features = netdev->features & ~(NETIF_F_HW_VLAN_CTAG_TX |
- NETIF_F_HW_VLAN_CTAG_RX |
- NETIF_F_HW_VLAN_CTAG_FILTER);
+
netdev->priv_flags |= IFF_UNICAST_FLT;
netdev->priv_flags |= IFF_SUPP_NOFCS;
/* Setup netdev TC information */
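The netdev feature setup above replaces four hand-maintained flag lists with one layered derivation: hw_enc_features is filled first, vlan_features and hw_features are derived from it, and the final features mask is the user-toggleable set plus NETIF_F_HW_VLAN_CTAG_FILTER, which is deliberately kept out of hw_features so VLAN filtering cannot be switched off from ethtool. A minimal sketch of that layering, assuming <linux/netdevice.h> and using only a token flag set:

static void feature_layering_sketch(struct net_device *netdev)
{
	/* base capability set, shared with encapsulated traffic */
	netdev->hw_enc_features |= NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO;

	/* VLANs inherit the same set */
	netdev->vlan_features |= netdev->hw_enc_features | NETIF_F_TSO_MANGLEID;

	/* user-toggleable set adds the VLAN offloads */
	netdev->hw_features |= netdev->hw_enc_features |
			       NETIF_F_HW_VLAN_CTAG_TX |
			       NETIF_F_HW_VLAN_CTAG_RX;

	/* active set = toggleable set + always-on VLAN filtering */
	netdev->features |= netdev->hw_features | NETIF_F_HW_VLAN_CTAG_FILTER;
}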
@@ -9398,7 +9362,8 @@ static int i40e_add_vsi(struct i40e_vsi *vsi)
ctxt.info.valid_sections |=
cpu_to_le16(I40E_AQ_VSI_PROP_QUEUE_OPT_VALID);
ctxt.info.queueing_opt_flags |=
- I40E_AQ_VSI_QUE_OPT_TCP_ENA;
+ (I40E_AQ_VSI_QUE_OPT_TCP_ENA |
+ I40E_AQ_VSI_QUE_OPT_RSS_LUT_VSI);
}
ctxt.info.valid_sections |= cpu_to_le16(I40E_AQ_VSI_PROP_VLAN_VALID);
@@ -10444,6 +10409,7 @@ int i40e_fetch_switch_configuration(struct i40e_pf *pf, bool printconfig)
**/
static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
{
+ u16 flags = 0;
int ret;
/* find out what's out there already */
@@ -10457,6 +10423,32 @@ static int i40e_setup_pf_switch(struct i40e_pf *pf, bool reinit)
}
i40e_pf_reset_stats(pf);
+ /* set the switch config bit for the whole device to
+ * support limited promisc or true promisc
+ * when user requests promisc. The default is limited
+ * promisc.
+ */
+
+ if ((pf->hw.pf_id == 0) &&
+ !(pf->flags & I40E_FLAG_TRUE_PROMISC_SUPPORT))
+ flags = I40E_AQ_SET_SWITCH_CFG_PROMISC;
+
+ if (pf->hw.pf_id == 0) {
+ u16 valid_flags;
+
+ valid_flags = I40E_AQ_SET_SWITCH_CFG_PROMISC;
+ ret = i40e_aq_set_switch_config(&pf->hw, flags, valid_flags,
+ NULL);
+ if (ret && pf->hw.aq.asq_last_status != I40E_AQ_RC_ESRCH) {
+ dev_info(&pf->pdev->dev,
+ "couldn't set switch config bits, err %s aq_err %s\n",
+ i40e_stat_str(&pf->hw, ret),
+ i40e_aq_str(&pf->hw,
+ pf->hw.aq.asq_last_status));
+ /* not a fatal problem, just keep going */
+ }
+ }
+
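The new block above only runs on PF 0, which owns the device-global switch configuration; it programs limited promiscuous as the default and leaves the bit clear only when the user has opted into true promiscuous via I40E_FLAG_TRUE_PROMISC_SUPPORT, treating ESRCH from older firmware as non-fatal. A condensed sketch of the AQ flags/valid_flags convention used here, where valid_flags masks which bits firmware should honor and flags carries their values (helper name is illustrative only):

static i40e_status set_switch_promisc_sketch(struct i40e_pf *pf,
					     bool true_promisc)
{
	u16 valid_flags = I40E_AQ_SET_SWITCH_CFG_PROMISC;
	u16 flags = true_promisc ? 0 : I40E_AQ_SET_SWITCH_CFG_PROMISC;

	/* only PF 0 owns the global switch config */
	if (pf->hw.pf_id != 0)
		return 0;

	return i40e_aq_set_switch_config(&pf->hw, flags, valid_flags, NULL);
}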
/* first time setup */
if (pf->lan_vsi == I40E_NO_VSI || reinit) {
struct i40e_vsi *vsi = NULL;
@@ -10684,11 +10676,9 @@ static void i40e_print_features(struct i40e_pf *pf)
#ifdef CONFIG_PCI_IOV
i += snprintf(&buf[i], REMAIN(i), " VFs: %d", pf->num_req_vfs);
#endif
- i += snprintf(&buf[i], REMAIN(i), " VSIs: %d QP: %d RX: %s",
+ i += snprintf(&buf[i], REMAIN(i), " VSIs: %d QP: %d",
pf->hw.func_caps.num_vsis,
- pf->vsi[pf->lan_vsi]->num_queue_pairs,
- pf->flags & I40E_FLAG_RX_PS_ENABLED ? "PS" : "1BUF");
-
+ pf->vsi[pf->lan_vsi]->num_queue_pairs);
if (pf->flags & I40E_FLAG_RSS_ENABLED)
i += snprintf(&buf[i], REMAIN(i), " RSS");
if (pf->flags & I40E_FLAG_FD_ATR_ENABLED)
@@ -10827,6 +10817,12 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
hw->bus.func = PCI_FUNC(pdev->devfn);
pf->instance = pfs_found;
+ /* set up the locks for the AQ, do this only once in probe
+ * and destroy them only once in remove
+ */
+ mutex_init(&hw->aq.asq_mutex);
+ mutex_init(&hw->aq.arq_mutex);
+
if (debug != -1) {
pf->msg_enable = pf->hw.debug_mask;
pf->msg_enable = debug;
@@ -10872,12 +10868,6 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
/* set up a default setting for link flow control */
pf->hw.fc.requested_mode = I40E_FC_NONE;
- /* set up the locks for the AQ, do this only once in probe
- * and destroy them only once in remove
- */
- mutex_init(&hw->aq.asq_mutex);
- mutex_init(&hw->aq.arq_mutex);
-
err = i40e_init_adminq(hw);
if (err) {
if (err == I40E_ERR_FIRMWARE_API_VERSION)
@@ -11069,6 +11059,7 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
*/
err = i40e_aq_set_phy_int_mask(&pf->hw,
~(I40E_AQ_EVENT_LINK_UPDOWN |
+ I40E_AQ_EVENT_MEDIA_NA |
I40E_AQ_EVENT_MODULE_QUAL_FAIL), NULL);
if (err)
dev_info(&pf->pdev->dev, "set phy mask fail, err %s aq_err %s\n",
@@ -11270,7 +11261,6 @@ err_init_lan_hmc:
kfree(pf->qp_pile);
err_sw_init:
err_adminq_setup:
- (void)i40e_shutdown_adminq(hw);
err_pf_reset:
iounmap(hw->hw_addr);
err_ioremap:
@@ -11312,8 +11302,10 @@ static void i40e_remove(struct pci_dev *pdev)
/* no more scheduling of any task */
set_bit(__I40E_SUSPENDED, &pf->state);
set_bit(__I40E_DOWN, &pf->state);
- del_timer_sync(&pf->service_timer);
- cancel_work_sync(&pf->service_task);
+ if (pf->service_timer.data)
+ del_timer_sync(&pf->service_timer);
+ if (pf->service_task.func)
+ cancel_work_sync(&pf->service_task);
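The guards above exist because i40e_remove() also runs for devices whose probe failed before the service timer and work item were initialized; checking the fields that setup_timer() and INIT_WORK() populate avoids tearing down objects that were never set up. For reference, the probe-side counterparts look roughly like this (shown only to make the guard conditions concrete):

/* setup_timer() fills service_timer.data, INIT_WORK() fills
 * service_task.func; a probe that bails out earlier leaves both zero.
 */
setup_timer(&pf->service_timer, i40e_service_timer, (unsigned long)pf);
INIT_WORK(&pf->service_task, i40e_service_task);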
if (pf->flags & I40E_FLAG_SRIOV_ENABLED) {
i40e_free_vfs(pf);
diff --git a/drivers/net/ethernet/intel/i40e/i40e_nvm.c b/drivers/net/ethernet/intel/i40e/i40e_nvm.c
index 5730f8091e1b..954efe3118db 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_nvm.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_nvm.c
@@ -693,10 +693,10 @@ i40e_status i40e_nvmupd_command(struct i40e_hw *hw,
/* early check for status command and debug msgs */
upd_cmd = i40e_nvmupd_validate_command(hw, cmd, perrno);
- i40e_debug(hw, I40E_DEBUG_NVM, "%s state %d nvm_release_on_hold %d cmd 0x%08x config 0x%08x offset 0x%08x data_size 0x%08x\n",
+ i40e_debug(hw, I40E_DEBUG_NVM, "%s state %d nvm_release_on_hold %d opc 0x%04x cmd 0x%08x config 0x%08x offset 0x%08x data_size 0x%08x\n",
i40e_nvm_update_state_str[upd_cmd],
hw->nvmupd_state,
- hw->aq.nvm_release_on_done,
+ hw->nvm_release_on_done, hw->nvm_wait_opcode,
cmd->command, cmd->config, cmd->offset, cmd->data_size);
if (upd_cmd == I40E_NVMUPD_INVALID) {
@@ -710,7 +710,18 @@ i40e_status i40e_nvmupd_command(struct i40e_hw *hw,
* going into the state machine
*/
if (upd_cmd == I40E_NVMUPD_STATUS) {
+ if (!cmd->data_size) {
+ *perrno = -EFAULT;
+ return I40E_ERR_BUF_TOO_SHORT;
+ }
+
bytes[0] = hw->nvmupd_state;
+
+ if (cmd->data_size >= 4) {
+ bytes[1] = 0;
+ *((u16 *)&bytes[2]) = hw->nvm_wait_opcode;
+ }
+
return 0;
}
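The status fast path above now rejects a zero-length buffer and, when at least four bytes are available, reports the opcode the driver is waiting on next to the state, so an NVM update tool can tell what is blocking progress. A hypothetical tool-side decode, assuming a little-endian host since the driver stores the opcode in CPU byte order (function name and output format are illustrative only):

#include <stdio.h>

static void print_nvmupd_status(const unsigned char *bytes, int len)
{
	/* bytes[0] = nvmupd state, bytes[1] = 0, bytes[2..3] = wait opcode */
	if (len >= 4)
		printf("state %u, waiting on AQ opcode 0x%04x\n",
		       bytes[0], bytes[2] | (bytes[3] << 8));
	else if (len >= 1)
		printf("state %u\n", bytes[0]);
}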
@@ -729,6 +740,14 @@ i40e_status i40e_nvmupd_command(struct i40e_hw *hw,
case I40E_NVMUPD_STATE_INIT_WAIT:
case I40E_NVMUPD_STATE_WRITE_WAIT:
+ /* if we need to stop waiting for an event, clear
+ * the wait info and return before doing anything else
+ */
+ if (cmd->offset == 0xffff) {
+ i40e_nvmupd_check_wait_event(hw, hw->nvm_wait_opcode);
+ return 0;
+ }
+
status = I40E_ERR_NOT_READY;
*perrno = -EBUSY;
break;
@@ -799,7 +818,8 @@ static i40e_status i40e_nvmupd_state_init(struct i40e_hw *hw,
if (status) {
i40e_release_nvm(hw);
} else {
- hw->aq.nvm_release_on_done = true;
+ hw->nvm_release_on_done = true;
+ hw->nvm_wait_opcode = i40e_aqc_opc_nvm_erase;
hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
}
}
@@ -815,7 +835,8 @@ static i40e_status i40e_nvmupd_state_init(struct i40e_hw *hw,
if (status) {
i40e_release_nvm(hw);
} else {
- hw->aq.nvm_release_on_done = true;
+ hw->nvm_release_on_done = true;
+ hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
}
}
@@ -828,10 +849,12 @@ static i40e_status i40e_nvmupd_state_init(struct i40e_hw *hw,
hw->aq.asq_last_status);
} else {
status = i40e_nvmupd_nvm_write(hw, cmd, bytes, perrno);
- if (status)
+ if (status) {
i40e_release_nvm(hw);
- else
+ } else {
+ hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
hw->nvmupd_state = I40E_NVMUPD_STATE_WRITE_WAIT;
+ }
}
break;
@@ -849,7 +872,8 @@ static i40e_status i40e_nvmupd_state_init(struct i40e_hw *hw,
-EIO;
i40e_release_nvm(hw);
} else {
- hw->aq.nvm_release_on_done = true;
+ hw->nvm_release_on_done = true;
+ hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
}
}
@@ -940,8 +964,10 @@ retry:
switch (upd_cmd) {
case I40E_NVMUPD_WRITE_CON:
status = i40e_nvmupd_nvm_write(hw, cmd, bytes, perrno);
- if (!status)
+ if (!status) {
+ hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
hw->nvmupd_state = I40E_NVMUPD_STATE_WRITE_WAIT;
+ }
break;
case I40E_NVMUPD_WRITE_LCB:
@@ -953,7 +979,8 @@ retry:
-EIO;
hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
} else {
- hw->aq.nvm_release_on_done = true;
+ hw->nvm_release_on_done = true;
+ hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
}
break;
@@ -967,6 +994,7 @@ retry:
-EIO;
hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
} else {
+ hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
hw->nvmupd_state = I40E_NVMUPD_STATE_WRITE_WAIT;
}
break;
@@ -980,7 +1008,8 @@ retry:
-EIO;
hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
} else {
- hw->aq.nvm_release_on_done = true;
+ hw->nvm_release_on_done = true;
+ hw->nvm_wait_opcode = i40e_aqc_opc_nvm_update;
hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
}
break;
@@ -1030,6 +1059,37 @@ retry:
}
/**
+ * i40e_nvmupd_check_wait_event - handle NVM update operation events
+ * @hw: pointer to the hardware structure
+ * @opcode: the event that just happened
+ **/
+void i40e_nvmupd_check_wait_event(struct i40e_hw *hw, u16 opcode)
+{
+ if (opcode == hw->nvm_wait_opcode) {
+ i40e_debug(hw, I40E_DEBUG_NVM,
+ "NVMUPD: clearing wait on opcode 0x%04x\n", opcode);
+ if (hw->nvm_release_on_done) {
+ i40e_release_nvm(hw);
+ hw->nvm_release_on_done = false;
+ }
+ hw->nvm_wait_opcode = 0;
+
+ switch (hw->nvmupd_state) {
+ case I40E_NVMUPD_STATE_INIT_WAIT:
+ hw->nvmupd_state = I40E_NVMUPD_STATE_INIT;
+ break;
+
+ case I40E_NVMUPD_STATE_WRITE_WAIT:
+ hw->nvmupd_state = I40E_NVMUPD_STATE_WRITING;
+ break;
+
+ default:
+ break;
+ }
+ }
+}
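The helper above is meant to be driven from the admin-queue receive path: every completed opcode is handed to it, it ignores anything it is not waiting for, and on a match it releases the NVM semaphore (if armed) and steps the state machine out of its *_WAIT state. The actual call site sits outside this hunk; a sketch of its shape in the ARQ clean loop:

/* inside the loop that drains admin queue events */
opcode = le16_to_cpu(event.desc.opcode);
i40e_nvmupd_check_wait_event(&pf->hw, opcode);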
+
+/**
* i40e_nvmupd_validate_command - Validate given command
* @hw: pointer to hardware structure
* @cmd: pointer to nvm update command buffer
@@ -1189,6 +1249,12 @@ static i40e_status i40e_nvmupd_exec_aq(struct i40e_hw *hw,
*perrno = i40e_aq_rc_to_posix(status, hw->aq.asq_last_status);
}
+ /* should we wait for a followup event? */
+ if (cmd->offset) {
+ hw->nvm_wait_opcode = cmd->offset;
+ hw->nvmupd_state = I40E_NVMUPD_STATE_INIT_WAIT;
+ }
+
return status;
}
diff --git a/drivers/net/ethernet/intel/i40e/i40e_prototype.h b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
index d51eee5bf79a..80403c6ee7f0 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_prototype.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_prototype.h
@@ -130,9 +130,18 @@ i40e_status i40e_aq_set_vsi_broadcast(struct i40e_hw *hw,
u16 vsi_id, bool set_filter,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_set_vsi_unicast_promiscuous(struct i40e_hw *hw,
- u16 vsi_id, bool set, struct i40e_asq_cmd_details *cmd_details);
+ u16 vsi_id, bool set, struct i40e_asq_cmd_details *cmd_details,
+ bool rx_only_promisc);
i40e_status i40e_aq_set_vsi_multicast_promiscuous(struct i40e_hw *hw,
u16 vsi_id, bool set, struct i40e_asq_cmd_details *cmd_details);
+enum i40e_status_code i40e_aq_set_vsi_mc_promisc_on_vlan(struct i40e_hw *hw,
+ u16 seid, bool enable,
+ u16 vid,
+ struct i40e_asq_cmd_details *cmd_details);
+enum i40e_status_code i40e_aq_set_vsi_uc_promisc_on_vlan(struct i40e_hw *hw,
+ u16 seid, bool enable,
+ u16 vid,
+ struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_set_vsi_vlan_promisc(struct i40e_hw *hw,
u16 seid, bool enable,
struct i40e_asq_cmd_details *cmd_details);
@@ -174,6 +183,10 @@ i40e_status i40e_aq_get_switch_config(struct i40e_hw *hw,
struct i40e_aqc_get_switch_config_resp *buf,
u16 buf_size, u16 *start_seid,
struct i40e_asq_cmd_details *cmd_details);
+enum i40e_status_code i40e_aq_set_switch_config(struct i40e_hw *hw,
+ u16 flags,
+ u16 valid_flags,
+ struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_request_resource(struct i40e_hw *hw,
enum i40e_aq_resources_ids resource,
enum i40e_aq_resource_access_type access,
@@ -228,10 +241,6 @@ i40e_status i40e_aq_config_vsi_bw_limit(struct i40e_hw *hw,
struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_dcb_updated(struct i40e_hw *hw,
struct i40e_asq_cmd_details *cmd_details);
-i40e_status i40e_aq_set_hmc_resource_profile(struct i40e_hw *hw,
- enum i40e_aq_hmc_profile profile,
- u8 pe_vf_enabled_count,
- struct i40e_asq_cmd_details *cmd_details);
i40e_status i40e_aq_config_switch_comp_bw_limit(struct i40e_hw *hw,
u16 seid, u16 credit, u8 max_bw,
struct i40e_asq_cmd_details *cmd_details);
@@ -308,6 +317,7 @@ i40e_status i40e_validate_nvm_checksum(struct i40e_hw *hw,
i40e_status i40e_nvmupd_command(struct i40e_hw *hw,
struct i40e_nvm_access *cmd,
u8 *bytes, int *);
+void i40e_nvmupd_check_wait_event(struct i40e_hw *hw, u16 opcode);
void i40e_set_pci_config_data(struct i40e_hw *hw, u16 link_status);
extern struct i40e_rx_ptype_decoded i40e_ptype_lookup[];
diff --git a/drivers/net/ethernet/intel/i40e/i40e_ptp.c b/drivers/net/ethernet/intel/i40e/i40e_ptp.c
index 565ca7c835bc..ed39cbad24bd 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_ptp.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_ptp.c
@@ -158,9 +158,10 @@ static int i40e_ptp_adjfreq(struct ptp_clock_info *ptp, s32 ppb)
static int i40e_ptp_adjtime(struct ptp_clock_info *ptp, s64 delta)
{
struct i40e_pf *pf = container_of(ptp, struct i40e_pf, ptp_caps);
- struct timespec64 now, then = ns_to_timespec64(delta);
+ struct timespec64 now, then;
unsigned long flags;
+ then = ns_to_timespec64(delta);
spin_lock_irqsave(&pf->tmreg_lock, flags);
i40e_ptp_read(pf, &now);
@@ -288,9 +289,7 @@ void i40e_ptp_rx_hang(struct i40e_vsi *vsi)
rd32(hw, I40E_PRTTSYN_RXTIME_H(3));
pf->last_rx_ptp_check = jiffies;
pf->rx_hwtstamp_cleared++;
- dev_warn(&vsi->back->pdev->dev,
- "%s: clearing Rx timestamp hang\n",
- __func__);
+ WARN_ONCE(1, "Detected Rx timestamp register hang\n");
}
}
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
index 6a49b7ae511c..55f151fca1dc 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
@@ -636,19 +636,21 @@ u32 i40e_get_tx_pending(struct i40e_ring *ring, bool in_sw)
/**
* i40e_clean_tx_irq - Reclaim resources after transmit completes
- * @tx_ring: tx ring to clean
- * @budget: how many cleans we're allowed
+ * @vsi: the VSI we care about
+ * @tx_ring: Tx ring to clean
+ * @napi_budget: Used to determine if we are in netpoll
*
* Returns true if there's any budget left (e.g. the clean is finished)
**/
-static bool i40e_clean_tx_irq(struct i40e_ring *tx_ring, int budget)
+static bool i40e_clean_tx_irq(struct i40e_vsi *vsi,
+ struct i40e_ring *tx_ring, int napi_budget)
{
u16 i = tx_ring->next_to_clean;
struct i40e_tx_buffer *tx_buf;
struct i40e_tx_desc *tx_head;
struct i40e_tx_desc *tx_desc;
- unsigned int total_packets = 0;
- unsigned int total_bytes = 0;
+ unsigned int total_bytes = 0, total_packets = 0;
+ unsigned int budget = vsi->work_limit;
tx_buf = &tx_ring->tx_bi[i];
tx_desc = I40E_TX_DESC(tx_ring, i);
@@ -678,7 +680,7 @@ static bool i40e_clean_tx_irq(struct i40e_ring *tx_ring, int budget)
total_packets += tx_buf->gso_segs;
/* free the skb */
- dev_consume_skb_any(tx_buf->skb);
+ napi_consume_skb(tx_buf->skb, napi_budget);
/* unmap skb header data */
dma_unmap_single(tx_ring->dev,
@@ -749,7 +751,7 @@ static bool i40e_clean_tx_irq(struct i40e_ring *tx_ring, int budget)
if (budget &&
((j / (WB_STRIDE + 1)) == 0) && (j != 0) &&
- !test_bit(__I40E_DOWN, &tx_ring->vsi->state) &&
+ !test_bit(__I40E_DOWN, &vsi->state) &&
(I40E_DESC_UNUSED(tx_ring) != tx_ring->count))
tx_ring->arm_wb = true;
}
@@ -767,7 +769,7 @@ static bool i40e_clean_tx_irq(struct i40e_ring *tx_ring, int budget)
smp_mb();
if (__netif_subqueue_stopped(tx_ring->netdev,
tx_ring->queue_index) &&
- !test_bit(__I40E_DOWN, &tx_ring->vsi->state)) {
+ !test_bit(__I40E_DOWN, &vsi->state)) {
netif_wake_subqueue(tx_ring->netdev,
tx_ring->queue_index);
++tx_ring->tx_stats.restart_queue;
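Passing the NAPI budget through to i40e_clean_tx_irq() is what allows the switch from dev_consume_skb_any() to napi_consume_skb() above: a non-zero budget tells the core it is safe to defer frees into the per-CPU NAPI bulk-free list, while a zero budget (netpoll) forces an immediate free. The contract in one line:

/* batched free when called from NAPI (budget > 0), immediate otherwise */
napi_consume_skb(skb, napi_budget);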
@@ -1022,7 +1024,6 @@ err:
void i40e_clean_rx_ring(struct i40e_ring *rx_ring)
{
struct device *dev = rx_ring->dev;
- struct i40e_rx_buffer *rx_bi;
unsigned long bi_size;
u16 i;
@@ -1030,48 +1031,22 @@ void i40e_clean_rx_ring(struct i40e_ring *rx_ring)
if (!rx_ring->rx_bi)
return;
- if (ring_is_ps_enabled(rx_ring)) {
- int bufsz = ALIGN(rx_ring->rx_hdr_len, 256) * rx_ring->count;
-
- rx_bi = &rx_ring->rx_bi[0];
- if (rx_bi->hdr_buf) {
- dma_free_coherent(dev,
- bufsz,
- rx_bi->hdr_buf,
- rx_bi->dma);
- for (i = 0; i < rx_ring->count; i++) {
- rx_bi = &rx_ring->rx_bi[i];
- rx_bi->dma = 0;
- rx_bi->hdr_buf = NULL;
- }
- }
- }
/* Free all the Rx ring sk_buffs */
for (i = 0; i < rx_ring->count; i++) {
- rx_bi = &rx_ring->rx_bi[i];
- if (rx_bi->dma) {
- dma_unmap_single(dev,
- rx_bi->dma,
- rx_ring->rx_buf_len,
- DMA_FROM_DEVICE);
- rx_bi->dma = 0;
- }
+ struct i40e_rx_buffer *rx_bi = &rx_ring->rx_bi[i];
+
if (rx_bi->skb) {
dev_kfree_skb(rx_bi->skb);
rx_bi->skb = NULL;
}
- if (rx_bi->page) {
- if (rx_bi->page_dma) {
- dma_unmap_page(dev,
- rx_bi->page_dma,
- PAGE_SIZE,
- DMA_FROM_DEVICE);
- rx_bi->page_dma = 0;
- }
- __free_page(rx_bi->page);
- rx_bi->page = NULL;
- rx_bi->page_offset = 0;
- }
+ if (!rx_bi->page)
+ continue;
+
+ dma_unmap_page(dev, rx_bi->dma, PAGE_SIZE, DMA_FROM_DEVICE);
+ __free_pages(rx_bi->page, 0);
+
+ rx_bi->page = NULL;
+ rx_bi->page_offset = 0;
}
bi_size = sizeof(struct i40e_rx_buffer) * rx_ring->count;
@@ -1080,6 +1055,7 @@ void i40e_clean_rx_ring(struct i40e_ring *rx_ring)
/* Zero out the descriptor ring */
memset(rx_ring->desc, 0, rx_ring->size);
+ rx_ring->next_to_alloc = 0;
rx_ring->next_to_clean = 0;
rx_ring->next_to_use = 0;
}
@@ -1104,37 +1080,6 @@ void i40e_free_rx_resources(struct i40e_ring *rx_ring)
}
/**
- * i40e_alloc_rx_headers - allocate rx header buffers
- * @rx_ring: ring to alloc buffers
- *
- * Allocate rx header buffers for the entire ring. As these are static,
- * this is only called when setting up a new ring.
- **/
-void i40e_alloc_rx_headers(struct i40e_ring *rx_ring)
-{
- struct device *dev = rx_ring->dev;
- struct i40e_rx_buffer *rx_bi;
- dma_addr_t dma;
- void *buffer;
- int buf_size;
- int i;
-
- if (rx_ring->rx_bi[0].hdr_buf)
- return;
- /* Make sure the buffers don't cross cache line boundaries. */
- buf_size = ALIGN(rx_ring->rx_hdr_len, 256);
- buffer = dma_alloc_coherent(dev, buf_size * rx_ring->count,
- &dma, GFP_KERNEL);
- if (!buffer)
- return;
- for (i = 0; i < rx_ring->count; i++) {
- rx_bi = &rx_ring->rx_bi[i];
- rx_bi->dma = dma + (i * buf_size);
- rx_bi->hdr_buf = buffer + (i * buf_size);
- }
-}
-
-/**
* i40e_setup_rx_descriptors - Allocate Rx descriptors
* @rx_ring: Rx descriptor ring (for a specific queue) to setup
*
@@ -1155,9 +1100,7 @@ int i40e_setup_rx_descriptors(struct i40e_ring *rx_ring)
u64_stats_init(&rx_ring->syncp);
/* Round up to nearest 4K */
- rx_ring->size = ring_is_16byte_desc_enabled(rx_ring)
- ? rx_ring->count * sizeof(union i40e_16byte_rx_desc)
- : rx_ring->count * sizeof(union i40e_32byte_rx_desc);
+ rx_ring->size = rx_ring->count * sizeof(union i40e_32byte_rx_desc);
rx_ring->size = ALIGN(rx_ring->size, 4096);
rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size,
&rx_ring->dma, GFP_KERNEL);
@@ -1168,6 +1111,7 @@ int i40e_setup_rx_descriptors(struct i40e_ring *rx_ring)
goto err;
}
+ rx_ring->next_to_alloc = 0;
rx_ring->next_to_clean = 0;
rx_ring->next_to_use = 0;
@@ -1186,6 +1130,10 @@ err:
static inline void i40e_release_rx_desc(struct i40e_ring *rx_ring, u32 val)
{
rx_ring->next_to_use = val;
+
+ /* update next to alloc since we have filled the ring */
+ rx_ring->next_to_alloc = val;
+
/* Force memory writes to complete before letting h/w
* know there are new descriptors to fetch. (Only
* applicable for weak-ordered memory model archs,
@@ -1196,160 +1144,122 @@ static inline void i40e_release_rx_desc(struct i40e_ring *rx_ring, u32 val)
}
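The remainder of i40e_release_rx_desc() falls outside this hunk, but the comment above refers to the usual tail-bump pattern: the descriptor writes must be globally visible before the doorbell write that tells hardware new buffers exist.

wmb();				/* order descriptor writes before the doorbell */
writel(val, rx_ring->tail);	/* notify hardware of the new next_to_use */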
/**
- * i40e_alloc_rx_buffers_ps - Replace used receive buffers; packet split
- * @rx_ring: ring to place buffers on
- * @cleaned_count: number of buffers to replace
+ * i40e_alloc_mapped_page - recycle or make a new page
+ * @rx_ring: ring to use
+ * @bi: rx_buffer struct to modify
*
- * Returns true if any errors on allocation
+ * Returns true if the page was successfully allocated or
+ * reused.
**/
-bool i40e_alloc_rx_buffers_ps(struct i40e_ring *rx_ring, u16 cleaned_count)
+static bool i40e_alloc_mapped_page(struct i40e_ring *rx_ring,
+ struct i40e_rx_buffer *bi)
{
- u16 i = rx_ring->next_to_use;
- union i40e_rx_desc *rx_desc;
- struct i40e_rx_buffer *bi;
- const int current_node = numa_node_id();
+ struct page *page = bi->page;
+ dma_addr_t dma;
- /* do nothing if no valid netdev defined */
- if (!rx_ring->netdev || !cleaned_count)
- return false;
+ /* since we are recycling buffers we should seldom need to alloc */
+ if (likely(page)) {
+ rx_ring->rx_stats.page_reuse_count++;
+ return true;
+ }
- while (cleaned_count--) {
- rx_desc = I40E_RX_DESC(rx_ring, i);
- bi = &rx_ring->rx_bi[i];
+ /* alloc new page for storage */
+ page = dev_alloc_page();
+ if (unlikely(!page)) {
+ rx_ring->rx_stats.alloc_page_failed++;
+ return false;
+ }
- if (bi->skb) /* desc is in use */
- goto no_buffers;
+ /* map page for use */
+ dma = dma_map_page(rx_ring->dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
- /* If we've been moved to a different NUMA node, release the
- * page so we can get a new one on the current node.
+ /* if mapping failed free memory back to system since
+ * there isn't much point in holding memory we can't use
*/
- if (bi->page && page_to_nid(bi->page) != current_node) {
- dma_unmap_page(rx_ring->dev,
- bi->page_dma,
- PAGE_SIZE,
- DMA_FROM_DEVICE);
- __free_page(bi->page);
- bi->page = NULL;
- bi->page_dma = 0;
- rx_ring->rx_stats.realloc_count++;
- } else if (bi->page) {
- rx_ring->rx_stats.page_reuse_count++;
- }
-
- if (!bi->page) {
- bi->page = alloc_page(GFP_ATOMIC);
- if (!bi->page) {
- rx_ring->rx_stats.alloc_page_failed++;
- goto no_buffers;
- }
- bi->page_dma = dma_map_page(rx_ring->dev,
- bi->page,
- 0,
- PAGE_SIZE,
- DMA_FROM_DEVICE);
- if (dma_mapping_error(rx_ring->dev, bi->page_dma)) {
- rx_ring->rx_stats.alloc_page_failed++;
- __free_page(bi->page);
- bi->page = NULL;
- bi->page_dma = 0;
- bi->page_offset = 0;
- goto no_buffers;
- }
- bi->page_offset = 0;
- }
-
- /* Refresh the desc even if buffer_addrs didn't change
- * because each write-back erases this info.
- */
- rx_desc->read.pkt_addr =
- cpu_to_le64(bi->page_dma + bi->page_offset);
- rx_desc->read.hdr_addr = cpu_to_le64(bi->dma);
- i++;
- if (i == rx_ring->count)
- i = 0;
+ if (dma_mapping_error(rx_ring->dev, dma)) {
+ __free_pages(page, 0);
+ rx_ring->rx_stats.alloc_page_failed++;
+ return false;
}
- if (rx_ring->next_to_use != i)
- i40e_release_rx_desc(rx_ring, i);
+ bi->dma = dma;
+ bi->page = page;
+ bi->page_offset = 0;
- return false;
+ return true;
+}
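i40e_alloc_mapped_page() treats rx_bi->page as a recycle slot: a page left behind by the clean path is reused without touching the DMA API, and only a miss allocates and maps a fresh page (freeing it again if the mapping fails). A condensed view of one refill step, assuming the helpers from this hunk, to show how the result feeds the descriptor; hdr_addr stays zero because packet split is gone:

static bool refill_one_sketch(struct i40e_ring *rx_ring,
			      union i40e_rx_desc *rx_desc,
			      struct i40e_rx_buffer *bi)
{
	if (!i40e_alloc_mapped_page(rx_ring, bi))
		return false;

	/* recycled or freshly mapped half-page becomes the packet buffer */
	rx_desc->read.pkt_addr = cpu_to_le64(bi->dma + bi->page_offset);
	rx_desc->read.hdr_addr = 0;
	return true;
}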
-no_buffers:
- if (rx_ring->next_to_use != i)
- i40e_release_rx_desc(rx_ring, i);
+/**
+ * i40e_receive_skb - Send a completed packet up the stack
+ * @rx_ring: rx ring in play
+ * @skb: packet to send up
+ * @vlan_tag: vlan tag for packet
+ **/
+static void i40e_receive_skb(struct i40e_ring *rx_ring,
+ struct sk_buff *skb, u16 vlan_tag)
+{
+ struct i40e_q_vector *q_vector = rx_ring->q_vector;
- /* make sure to come back via polling to try again after
- * allocation failure
- */
- return true;
+ if ((rx_ring->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) &&
+ (vlan_tag & VLAN_VID_MASK))
+ __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tag);
+
+ napi_gro_receive(&q_vector->napi, skb);
}
/**
- * i40e_alloc_rx_buffers_1buf - Replace used receive buffers; single buffer
+ * i40e_alloc_rx_buffers - Replace used receive buffers
* @rx_ring: ring to place buffers on
* @cleaned_count: number of buffers to replace
*
- * Returns true if any errors on allocation
+ * Returns false if all allocations were successful, true if any fail
**/
-bool i40e_alloc_rx_buffers_1buf(struct i40e_ring *rx_ring, u16 cleaned_count)
+bool i40e_alloc_rx_buffers(struct i40e_ring *rx_ring, u16 cleaned_count)
{
- u16 i = rx_ring->next_to_use;
+ u16 ntu = rx_ring->next_to_use;
union i40e_rx_desc *rx_desc;
struct i40e_rx_buffer *bi;
- struct sk_buff *skb;
/* do nothing if no valid netdev defined */
if (!rx_ring->netdev || !cleaned_count)
return false;
- while (cleaned_count--) {
- rx_desc = I40E_RX_DESC(rx_ring, i);
- bi = &rx_ring->rx_bi[i];
- skb = bi->skb;
-
- if (!skb) {
- skb = __netdev_alloc_skb_ip_align(rx_ring->netdev,
- rx_ring->rx_buf_len,
- GFP_ATOMIC |
- __GFP_NOWARN);
- if (!skb) {
- rx_ring->rx_stats.alloc_buff_failed++;
- goto no_buffers;
- }
- /* initialize queue mapping */
- skb_record_rx_queue(skb, rx_ring->queue_index);
- bi->skb = skb;
- }
+ rx_desc = I40E_RX_DESC(rx_ring, ntu);
+ bi = &rx_ring->rx_bi[ntu];
- if (!bi->dma) {
- bi->dma = dma_map_single(rx_ring->dev,
- skb->data,
- rx_ring->rx_buf_len,
- DMA_FROM_DEVICE);
- if (dma_mapping_error(rx_ring->dev, bi->dma)) {
- rx_ring->rx_stats.alloc_buff_failed++;
- bi->dma = 0;
- dev_kfree_skb(bi->skb);
- bi->skb = NULL;
- goto no_buffers;
- }
- }
+ do {
+ if (!i40e_alloc_mapped_page(rx_ring, bi))
+ goto no_buffers;
- rx_desc->read.pkt_addr = cpu_to_le64(bi->dma);
+ /* Refresh the desc even if buffer_addrs didn't change
+ * because each write-back erases this info.
+ */
+ rx_desc->read.pkt_addr = cpu_to_le64(bi->dma + bi->page_offset);
rx_desc->read.hdr_addr = 0;
- i++;
- if (i == rx_ring->count)
- i = 0;
- }
- if (rx_ring->next_to_use != i)
- i40e_release_rx_desc(rx_ring, i);
+ rx_desc++;
+ bi++;
+ ntu++;
+ if (unlikely(ntu == rx_ring->count)) {
+ rx_desc = I40E_RX_DESC(rx_ring, 0);
+ bi = rx_ring->rx_bi;
+ ntu = 0;
+ }
+
+ /* clear the status bits for the next_to_use descriptor */
+ rx_desc->wb.qword1.status_error_len = 0;
+
+ cleaned_count--;
+ } while (cleaned_count);
+
+ if (rx_ring->next_to_use != ntu)
+ i40e_release_rx_desc(rx_ring, ntu);
return false;
no_buffers:
- if (rx_ring->next_to_use != i)
- i40e_release_rx_desc(rx_ring, i);
+ if (rx_ring->next_to_use != ntu)
+ i40e_release_rx_desc(rx_ring, ntu);
/* make sure to come back via polling to try again after
* allocation failure
@@ -1358,41 +1268,35 @@ no_buffers:
}
/**
- * i40e_receive_skb - Send a completed packet up the stack
- * @rx_ring: rx ring in play
- * @skb: packet to send up
- * @vlan_tag: vlan tag for packet
- **/
-static void i40e_receive_skb(struct i40e_ring *rx_ring,
- struct sk_buff *skb, u16 vlan_tag)
-{
- struct i40e_q_vector *q_vector = rx_ring->q_vector;
-
- if (vlan_tag & VLAN_VID_MASK)
- __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tag);
-
- napi_gro_receive(&q_vector->napi, skb);
-}
-
-/**
* i40e_rx_checksum - Indicate in skb if hw indicated a good cksum
* @vsi: the VSI we care about
* @skb: skb currently being received and modified
- * @rx_status: status value of last descriptor in packet
- * @rx_error: error value of last descriptor in packet
- * @rx_ptype: ptype value of last descriptor in packet
+ * @rx_desc: the receive descriptor
+ *
+ * skb->protocol must be set before this function is called
**/
static inline void i40e_rx_checksum(struct i40e_vsi *vsi,
struct sk_buff *skb,
- u32 rx_status,
- u32 rx_error,
- u16 rx_ptype)
+ union i40e_rx_desc *rx_desc)
{
- struct i40e_rx_ptype_decoded decoded = decode_rx_desc_ptype(rx_ptype);
- bool ipv4, ipv6, ipv4_tunnel, ipv6_tunnel;
+ struct i40e_rx_ptype_decoded decoded;
+ bool ipv4, ipv6, tunnel = false;
+ u32 rx_error, rx_status;
+ u8 ptype;
+ u64 qword;
+
+ qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
+ ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT;
+ rx_error = (qword & I40E_RXD_QW1_ERROR_MASK) >>
+ I40E_RXD_QW1_ERROR_SHIFT;
+ rx_status = (qword & I40E_RXD_QW1_STATUS_MASK) >>
+ I40E_RXD_QW1_STATUS_SHIFT;
+ decoded = decode_rx_desc_ptype(ptype);
skb->ip_summed = CHECKSUM_NONE;
+ skb_checksum_none_assert(skb);
+
/* Rx csum enabled and ip headers found? */
if (!(vsi->netdev->features & NETIF_F_RXCSUM))
return;
@@ -1438,14 +1342,13 @@ static inline void i40e_rx_checksum(struct i40e_vsi *vsi,
* doesn't make it a hard requirement so if we have validated the
* inner checksum report CHECKSUM_UNNECESSARY.
*/
-
- ipv4_tunnel = (rx_ptype >= I40E_RX_PTYPE_GRENAT4_MAC_PAY3) &&
- (rx_ptype <= I40E_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4);
- ipv6_tunnel = (rx_ptype >= I40E_RX_PTYPE_GRENAT6_MAC_PAY3) &&
- (rx_ptype <= I40E_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4);
+ if (decoded.inner_prot & (I40E_RX_PTYPE_INNER_PROT_TCP |
+ I40E_RX_PTYPE_INNER_PROT_UDP |
+ I40E_RX_PTYPE_INNER_PROT_SCTP))
+ tunnel = true;
skb->ip_summed = CHECKSUM_UNNECESSARY;
- skb->csum_level = ipv4_tunnel || ipv6_tunnel;
+ skb->csum_level = tunnel ? 1 : 0;
return;
@@ -1459,7 +1362,7 @@ checksum_fail:
*
* Returns a hash type to be used by skb_set_hash
**/
-static inline enum pkt_hash_types i40e_ptype_to_htype(u8 ptype)
+static inline int i40e_ptype_to_htype(u8 ptype)
{
struct i40e_rx_ptype_decoded decoded = decode_rx_desc_ptype(ptype);
@@ -1487,11 +1390,11 @@ static inline void i40e_rx_hash(struct i40e_ring *ring,
u8 rx_ptype)
{
u32 hash;
- const __le64 rss_mask =
+ const __le64 rss_mask =
cpu_to_le64((u64)I40E_RX_DESC_FLTSTAT_RSS_HASH <<
I40E_RX_DESC_STATUS_FLTSTAT_SHIFT);
- if (ring->netdev->features & NETIF_F_RXHASH)
+ if (!(ring->netdev->features & NETIF_F_RXHASH))
return;
if ((rx_desc->wb.qword1.status_error_len & rss_mask) == rss_mask) {
@@ -1501,346 +1404,436 @@ static inline void i40e_rx_hash(struct i40e_ring *ring,
}
/**
- * i40e_clean_rx_irq_ps - Reclaim resources after receive; packet split
- * @rx_ring: rx ring to clean
- * @budget: how many cleans we're allowed
+ * i40e_process_skb_fields - Populate skb header fields from Rx descriptor
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @rx_desc: pointer to the EOP Rx descriptor
+ * @skb: pointer to current skb being populated
+ * @rx_ptype: the packet type decoded by hardware
*
- * Returns true if there's any budget left (e.g. the clean is finished)
+ * This function checks the ring, descriptor, and packet information in
+ * order to populate the hash, checksum, VLAN, protocol, and
+ * other fields within the skb.
**/
-static int i40e_clean_rx_irq_ps(struct i40e_ring *rx_ring, const int budget)
+static inline
+void i40e_process_skb_fields(struct i40e_ring *rx_ring,
+ union i40e_rx_desc *rx_desc, struct sk_buff *skb,
+ u8 rx_ptype)
{
- unsigned int total_rx_bytes = 0, total_rx_packets = 0;
- u16 rx_packet_len, rx_header_len, rx_sph, rx_hbo;
- u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
- struct i40e_vsi *vsi = rx_ring->vsi;
- u16 i = rx_ring->next_to_clean;
- union i40e_rx_desc *rx_desc;
- u32 rx_error, rx_status;
- bool failure = false;
- u8 rx_ptype;
- u64 qword;
- u32 copysize;
+ u64 qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
+ u32 rx_status = (qword & I40E_RXD_QW1_STATUS_MASK) >>
+ I40E_RXD_QW1_STATUS_SHIFT;
+ u32 rsyn = (rx_status & I40E_RXD_QW1_STATUS_TSYNINDX_MASK) >>
+ I40E_RXD_QW1_STATUS_TSYNINDX_SHIFT;
- if (budget <= 0)
- return 0;
+ if (unlikely(rsyn)) {
+ i40e_ptp_rx_hwtstamp(rx_ring->vsi->back, skb, rsyn);
+ rx_ring->last_rx_timestamp = jiffies;
+ }
- do {
- struct i40e_rx_buffer *rx_bi;
- struct sk_buff *skb;
- u16 vlan_tag;
- /* return some buffers to hardware, one at a time is too slow */
- if (cleaned_count >= I40E_RX_BUFFER_WRITE) {
- failure = failure ||
- i40e_alloc_rx_buffers_ps(rx_ring,
- cleaned_count);
- cleaned_count = 0;
- }
+ i40e_rx_hash(rx_ring, rx_desc, skb, rx_ptype);
- i = rx_ring->next_to_clean;
- rx_desc = I40E_RX_DESC(rx_ring, i);
- qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
- rx_status = (qword & I40E_RXD_QW1_STATUS_MASK) >>
- I40E_RXD_QW1_STATUS_SHIFT;
+ /* modifies the skb - consumes the enet header */
+ skb->protocol = eth_type_trans(skb, rx_ring->netdev);
- if (!(rx_status & BIT(I40E_RX_DESC_STATUS_DD_SHIFT)))
- break;
+ i40e_rx_checksum(rx_ring->vsi, skb, rx_desc);
- /* This memory barrier is needed to keep us from reading
- * any other fields out of the rx_desc until we know the
- * DD bit is set.
- */
- dma_rmb();
- /* sync header buffer for reading */
- dma_sync_single_range_for_cpu(rx_ring->dev,
- rx_ring->rx_bi[0].dma,
- i * rx_ring->rx_hdr_len,
- rx_ring->rx_hdr_len,
- DMA_FROM_DEVICE);
- if (i40e_rx_is_programming_status(qword)) {
- i40e_clean_programming_status(rx_ring, rx_desc);
- I40E_RX_INCREMENT(rx_ring, i);
- continue;
- }
- rx_bi = &rx_ring->rx_bi[i];
- skb = rx_bi->skb;
- if (likely(!skb)) {
- skb = __netdev_alloc_skb_ip_align(rx_ring->netdev,
- rx_ring->rx_hdr_len,
- GFP_ATOMIC |
- __GFP_NOWARN);
- if (!skb) {
- rx_ring->rx_stats.alloc_buff_failed++;
- failure = true;
- break;
- }
+ skb_record_rx_queue(skb, rx_ring->queue_index);
+}
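Ordering inside this helper is deliberate: eth_type_trans() both consumes the Ethernet header and sets skb->protocol, which the reworked i40e_rx_checksum() now requires, so the hash runs first (descriptor only), then protocol, then checksum. The minimal required sequence:

i40e_rx_hash(rx_ring, rx_desc, skb, rx_ptype);		/* descriptor only */
skb->protocol = eth_type_trans(skb, rx_ring->netdev);	/* sets skb->protocol */
i40e_rx_checksum(rx_ring->vsi, skb, rx_desc);		/* needs skb->protocol */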
- /* initialize queue mapping */
- skb_record_rx_queue(skb, rx_ring->queue_index);
- /* we are reusing so sync this buffer for CPU use */
- dma_sync_single_range_for_cpu(rx_ring->dev,
- rx_ring->rx_bi[0].dma,
- i * rx_ring->rx_hdr_len,
- rx_ring->rx_hdr_len,
- DMA_FROM_DEVICE);
- }
- rx_packet_len = (qword & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
- I40E_RXD_QW1_LENGTH_PBUF_SHIFT;
- rx_header_len = (qword & I40E_RXD_QW1_LENGTH_HBUF_MASK) >>
- I40E_RXD_QW1_LENGTH_HBUF_SHIFT;
- rx_sph = (qword & I40E_RXD_QW1_LENGTH_SPH_MASK) >>
- I40E_RXD_QW1_LENGTH_SPH_SHIFT;
-
- rx_error = (qword & I40E_RXD_QW1_ERROR_MASK) >>
- I40E_RXD_QW1_ERROR_SHIFT;
- rx_hbo = rx_error & BIT(I40E_RX_DESC_ERROR_HBO_SHIFT);
- rx_error &= ~BIT(I40E_RX_DESC_ERROR_HBO_SHIFT);
+/**
+ * i40e_pull_tail - i40e specific version of skb_pull_tail
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @skb: pointer to current skb being adjusted
+ *
+ * This function is an i40e specific version of __pskb_pull_tail. The
+ * main difference between this version and the original function is that
+ * this function can make several assumptions about the state of things
+ * that allow for significant optimizations versus the standard function.
+ * As a result we can do things like drop a frag and maintain an accurate
+ * truesize for the skb.
+ */
+static void i40e_pull_tail(struct i40e_ring *rx_ring, struct sk_buff *skb)
+{
+ struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[0];
+ unsigned char *va;
+ unsigned int pull_len;
- rx_ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT;
- /* sync half-page for reading */
- dma_sync_single_range_for_cpu(rx_ring->dev,
- rx_bi->page_dma,
- rx_bi->page_offset,
- PAGE_SIZE / 2,
- DMA_FROM_DEVICE);
- prefetch(page_address(rx_bi->page) + rx_bi->page_offset);
- rx_bi->skb = NULL;
- cleaned_count++;
- copysize = 0;
- if (rx_hbo || rx_sph) {
- int len;
+ /* it is valid to use page_address instead of kmap since we are
+ * working with pages allocated out of the lowmem pool per

+ * alloc_page(GFP_ATOMIC)
+ */
+ va = skb_frag_address(frag);
- if (rx_hbo)
- len = I40E_RX_HDR_SIZE;
- else
- len = rx_header_len;
- memcpy(__skb_put(skb, len), rx_bi->hdr_buf, len);
- } else if (skb->len == 0) {
- int len;
- unsigned char *va = page_address(rx_bi->page) +
- rx_bi->page_offset;
-
- len = min(rx_packet_len, rx_ring->rx_hdr_len);
- memcpy(__skb_put(skb, len), va, len);
- copysize = len;
- rx_packet_len -= len;
- }
- /* Get the rest of the data if this was a header split */
- if (rx_packet_len) {
- skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
- rx_bi->page,
- rx_bi->page_offset + copysize,
- rx_packet_len, I40E_RXBUFFER_2048);
-
- /* If the page count is more than 2, then both halves
- * of the page are used and we need to free it. Do it
- * here instead of in the alloc code. Otherwise one
- * of the half-pages might be released between now and
- * then, and we wouldn't know which one to use.
- * Don't call get_page and free_page since those are
- * both expensive atomic operations that just change
- * the refcount in opposite directions. Just give the
- * page to the stack; he can have our refcount.
- */
- if (page_count(rx_bi->page) > 2) {
- dma_unmap_page(rx_ring->dev,
- rx_bi->page_dma,
- PAGE_SIZE,
- DMA_FROM_DEVICE);
- rx_bi->page = NULL;
- rx_bi->page_dma = 0;
- rx_ring->rx_stats.realloc_count++;
- } else {
- get_page(rx_bi->page);
- /* switch to the other half-page here; the
- * allocation code programs the right addr
- * into HW. If we haven't used this half-page,
- * the address won't be changed, and HW can
- * just use it next time through.
- */
- rx_bi->page_offset ^= PAGE_SIZE / 2;
- }
+ /* we need the header to contain the greater of either ETH_HLEN or
+ * 60 bytes if the skb->len is less than 60 for skb_pad.
+ */
+ pull_len = eth_get_headlen(va, I40E_RX_HDR_SIZE);
- }
- I40E_RX_INCREMENT(rx_ring, i);
+ /* align pull length to size of long to optimize memcpy performance */
+ skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long)));
- if (unlikely(
- !(rx_status & BIT(I40E_RX_DESC_STATUS_EOF_SHIFT)))) {
- struct i40e_rx_buffer *next_buffer;
+ /* update all of the pointers */
+ skb_frag_size_sub(frag, pull_len);
+ frag->page_offset += pull_len;
+ skb->data_len -= pull_len;
+ skb->tail += pull_len;
+}
- next_buffer = &rx_ring->rx_bi[i];
- next_buffer->skb = skb;
- rx_ring->rx_stats.non_eop_descs++;
- continue;
- }
+/**
+ * i40e_cleanup_headers - Correct empty headers
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @skb: pointer to current skb being fixed
+ *
+ * Also address the case where we are pulling data in on pages only
+ * and as such no data is present in the skb header.
+ *
+ * In addition if skb is not at least 60 bytes we need to pad it so that
+ * it is large enough to qualify as a valid Ethernet frame.
+ *
+ * Returns true if an error was encountered and skb was freed.
+ **/
+static bool i40e_cleanup_headers(struct i40e_ring *rx_ring, struct sk_buff *skb)
+{
+ /* place header in linear portion of buffer */
+ if (skb_is_nonlinear(skb))
+ i40e_pull_tail(rx_ring, skb);
- /* ERR_MASK will only have valid bits if EOP set */
- if (unlikely(rx_error & BIT(I40E_RX_DESC_ERROR_RXE_SHIFT))) {
- dev_kfree_skb_any(skb);
- continue;
- }
+ /* if eth_skb_pad returns an error the skb was freed */
+ if (eth_skb_pad(skb))
+ return true;
- i40e_rx_hash(rx_ring, rx_desc, skb, rx_ptype);
+ return false;
+}
- if (unlikely(rx_status & I40E_RXD_QW1_STATUS_TSYNVALID_MASK)) {
- i40e_ptp_rx_hwtstamp(vsi->back, skb, (rx_status &
- I40E_RXD_QW1_STATUS_TSYNINDX_MASK) >>
- I40E_RXD_QW1_STATUS_TSYNINDX_SHIFT);
- rx_ring->last_rx_timestamp = jiffies;
- }
+/**
+ * i40e_reuse_rx_page - page flip buffer and store it back on the ring
+ * @rx_ring: rx descriptor ring to store buffers on
+ * @old_buff: donor buffer to have page reused
+ *
+ * Synchronizes page for reuse by the adapter
+ **/
+static void i40e_reuse_rx_page(struct i40e_ring *rx_ring,
+ struct i40e_rx_buffer *old_buff)
+{
+ struct i40e_rx_buffer *new_buff;
+ u16 nta = rx_ring->next_to_alloc;
- /* probably a little skewed due to removing CRC */
- total_rx_bytes += skb->len;
- total_rx_packets++;
+ new_buff = &rx_ring->rx_bi[nta];
- skb->protocol = eth_type_trans(skb, rx_ring->netdev);
+ /* update, and store next to alloc */
+ nta++;
+ rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0;
- i40e_rx_checksum(vsi, skb, rx_status, rx_error, rx_ptype);
+ /* transfer page from old buffer to new buffer */
+ *new_buff = *old_buff;
+}
- vlan_tag = rx_status & BIT(I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)
- ? le16_to_cpu(rx_desc->wb.qword0.lo_dword.l2tag1)
- : 0;
-#ifdef I40E_FCOE
- if (!i40e_fcoe_handle_offload(rx_ring, rx_desc, skb)) {
- dev_kfree_skb_any(skb);
- continue;
- }
+/**
+ * i40e_page_is_reserved - check if reuse is possible
+ * @page: page struct to check
+ */
+static inline bool i40e_page_is_reserved(struct page *page)
+{
+ return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page);
+}
+
+/**
+ * i40e_add_rx_frag - Add contents of Rx buffer to sk_buff
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @rx_buffer: buffer containing page to add
+ * @rx_desc: descriptor containing length of buffer written by hardware
+ * @skb: sk_buff to place the data into
+ *
+ * This function will add the data contained in rx_buffer->page to the skb.
+ * This is done either through a direct copy if the data in the buffer is
+ * less than the skb header size, otherwise it will just attach the page as
+ * a frag to the skb.
+ *
+ * The function will then update the page offset if necessary and return
+ * true if the buffer can be reused by the adapter.
+ **/
+static bool i40e_add_rx_frag(struct i40e_ring *rx_ring,
+ struct i40e_rx_buffer *rx_buffer,
+ union i40e_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ struct page *page = rx_buffer->page;
+ u64 qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
+ unsigned int size = (qword & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
+ I40E_RXD_QW1_LENGTH_PBUF_SHIFT;
+#if (PAGE_SIZE < 8192)
+ unsigned int truesize = I40E_RXBUFFER_2048;
+#else
+ unsigned int truesize = ALIGN(size, L1_CACHE_BYTES);
+ unsigned int last_offset = PAGE_SIZE - I40E_RXBUFFER_2048;
#endif
- i40e_receive_skb(rx_ring, skb, vlan_tag);
- rx_desc->wb.qword1.status_error_len = 0;
+ /* will the data fit in the skb we allocated? if so, just
+ * copy it as it is pretty small anyway
+ */
+ if ((size <= I40E_RX_HDR_SIZE) && !skb_is_nonlinear(skb)) {
+ unsigned char *va = page_address(page) + rx_buffer->page_offset;
- } while (likely(total_rx_packets < budget));
+ memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));
- u64_stats_update_begin(&rx_ring->syncp);
- rx_ring->stats.packets += total_rx_packets;
- rx_ring->stats.bytes += total_rx_bytes;
- u64_stats_update_end(&rx_ring->syncp);
- rx_ring->q_vector->rx.total_packets += total_rx_packets;
- rx_ring->q_vector->rx.total_bytes += total_rx_bytes;
+ /* page is not reserved, we can reuse buffer as-is */
+ if (likely(!i40e_page_is_reserved(page)))
+ return true;
- return failure ? budget : total_rx_packets;
+ /* this page cannot be reused so discard it */
+ __free_pages(page, 0);
+ return false;
+ }
+
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
+ rx_buffer->page_offset, size, truesize);
+
+ /* avoid re-using remote pages */
+ if (unlikely(i40e_page_is_reserved(page)))
+ return false;
+
+#if (PAGE_SIZE < 8192)
+ /* if we are only owner of page we can reuse it */
+ if (unlikely(page_count(page) != 1))
+ return false;
+
+ /* flip page offset to other buffer */
+ rx_buffer->page_offset ^= truesize;
+#else
+ /* move offset up to the next cache line */
+ rx_buffer->page_offset += truesize;
+
+ if (rx_buffer->page_offset > last_offset)
+ return false;
+#endif
+
+ /* Even if we own the page, we are not allowed to use atomic_set()
+ * This would break get_page_unless_zero() users.
+ */
+ get_page(rx_buffer->page);
+
+ return true;
}
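The reuse decision above boils down to: copy small frames and keep the page unless it is "reserved" (remote NUMA node or pfmemalloc), otherwise attach the half-page as a frag and recycle only while the driver still holds the sole page reference, flipping page_offset to the other half and taking a new reference for it. A condensed statement of the rule for 4 KB pages (larger pages walk the offset forward instead):

static bool can_recycle_sketch(struct i40e_rx_buffer *rx_buffer)
{
	struct page *page = rx_buffer->page;

	if (i40e_page_is_reserved(page))
		return false;		/* wrong node or pfmemalloc page */
	if (page_count(page) != 1)
		return false;		/* stack still owns the other half */

	rx_buffer->page_offset ^= I40E_RXBUFFER_2048;	/* flip halves */
	get_page(page);					/* ref for the reused half */
	return true;
}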
/**
- * i40e_clean_rx_irq_1buf - Reclaim resources after receive; single buffer
- * @rx_ring: rx ring to clean
- * @budget: how many cleans we're allowed
+ * i40e_fetch_rx_buffer - Allocate skb and populate it
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @rx_desc: descriptor containing info written by hardware
*
- * Returns number of packets cleaned
+ * This function allocates an skb on the fly, and populates it with the page
+ * data from the current receive descriptor, taking care to set up the skb
+ * correctly, as well as handling calling the page recycle function if
+ * necessary.
+ */
+static inline
+struct sk_buff *i40e_fetch_rx_buffer(struct i40e_ring *rx_ring,
+ union i40e_rx_desc *rx_desc)
+{
+ struct i40e_rx_buffer *rx_buffer;
+ struct sk_buff *skb;
+ struct page *page;
+
+ rx_buffer = &rx_ring->rx_bi[rx_ring->next_to_clean];
+ page = rx_buffer->page;
+ prefetchw(page);
+
+ skb = rx_buffer->skb;
+
+ if (likely(!skb)) {
+ void *page_addr = page_address(page) + rx_buffer->page_offset;
+
+ /* prefetch first cache line of first page */
+ prefetch(page_addr);
+#if L1_CACHE_BYTES < 128
+ prefetch(page_addr + L1_CACHE_BYTES);
+#endif
+
+ /* allocate a skb to store the frags */
+ skb = __napi_alloc_skb(&rx_ring->q_vector->napi,
+ I40E_RX_HDR_SIZE,
+ GFP_ATOMIC | __GFP_NOWARN);
+ if (unlikely(!skb)) {
+ rx_ring->rx_stats.alloc_buff_failed++;
+ return NULL;
+ }
+
+ /* we will be copying header into skb->data in
+ * pskb_may_pull so it is in our interest to prefetch
+ * it now to avoid a possible cache miss
+ */
+ prefetchw(skb->data);
+ } else {
+ rx_buffer->skb = NULL;
+ }
+
+ /* we are reusing so sync this buffer for CPU use */
+ dma_sync_single_range_for_cpu(rx_ring->dev,
+ rx_buffer->dma,
+ rx_buffer->page_offset,
+ I40E_RXBUFFER_2048,
+ DMA_FROM_DEVICE);
+
+ /* pull page into skb */
+ if (i40e_add_rx_frag(rx_ring, rx_buffer, rx_desc, skb)) {
+ /* hand second half of page back to the ring */
+ i40e_reuse_rx_page(rx_ring, rx_buffer);
+ rx_ring->rx_stats.page_reuse_count++;
+ } else {
+ /* we are not reusing the buffer so unmap it */
+ dma_unmap_page(rx_ring->dev, rx_buffer->dma, PAGE_SIZE,
+ DMA_FROM_DEVICE);
+ }
+
+ /* clear contents of buffer_info */
+ rx_buffer->page = NULL;
+
+ return skb;
+}
+
+/**
+ * i40e_is_non_eop - process handling of non-EOP buffers
+ * @rx_ring: Rx ring being processed
+ * @rx_desc: Rx descriptor for current buffer
+ * @skb: Current socket buffer containing buffer in progress
+ *
+ * This function updates next to clean. If the buffer is an EOP buffer
+ * this function exits returning false, otherwise it will place the
+ * sk_buff in the next buffer to be chained and return true indicating
+ * that this is in fact a non-EOP buffer.
+ **/
+static bool i40e_is_non_eop(struct i40e_ring *rx_ring,
+ union i40e_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ u32 ntc = rx_ring->next_to_clean + 1;
+
+ /* fetch, update, and store next to clean */
+ ntc = (ntc < rx_ring->count) ? ntc : 0;
+ rx_ring->next_to_clean = ntc;
+
+ prefetch(I40E_RX_DESC(rx_ring, ntc));
+
+#define staterrlen rx_desc->wb.qword1.status_error_len
+ if (unlikely(i40e_rx_is_programming_status(le64_to_cpu(staterrlen)))) {
+ i40e_clean_programming_status(rx_ring, rx_desc);
+ rx_ring->rx_bi[ntc].skb = skb;
+ return true;
+ }
+ /* if we are the last buffer then there is nothing else to do */
+#define I40E_RXD_EOF BIT(I40E_RX_DESC_STATUS_EOF_SHIFT)
+ if (likely(i40e_test_staterr(rx_desc, I40E_RXD_EOF)))
+ return false;
+
+ /* place skb in next buffer to be received */
+ rx_ring->rx_bi[ntc].skb = skb;
+ rx_ring->rx_stats.non_eop_descs++;
+
+ return true;
+}
+
+/**
+ * i40e_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @budget: Total limit on number of packets to process
+ *
+ * This function provides a "bounce buffer" approach to Rx interrupt
+ * processing. The advantage to this is that on systems that have
+ * expensive overhead for IOMMU access this provides a means of avoiding
+ * it by maintaining the mapping of the page to the system.
+ *
+ * Returns amount of work completed
**/
-static int i40e_clean_rx_irq_1buf(struct i40e_ring *rx_ring, int budget)
+static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
{
unsigned int total_rx_bytes = 0, total_rx_packets = 0;
u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
- struct i40e_vsi *vsi = rx_ring->vsi;
- union i40e_rx_desc *rx_desc;
- u32 rx_error, rx_status;
- u16 rx_packet_len;
bool failure = false;
- u8 rx_ptype;
- u64 qword;
- u16 i;
- do {
- struct i40e_rx_buffer *rx_bi;
+ while (likely(total_rx_packets < budget)) {
+ union i40e_rx_desc *rx_desc;
struct sk_buff *skb;
+ u32 rx_status;
u16 vlan_tag;
+ u8 rx_ptype;
+ u64 qword;
+
/* return some buffers to hardware, one at a time is too slow */
if (cleaned_count >= I40E_RX_BUFFER_WRITE) {
failure = failure ||
- i40e_alloc_rx_buffers_1buf(rx_ring,
- cleaned_count);
+ i40e_alloc_rx_buffers(rx_ring, cleaned_count);
cleaned_count = 0;
}
- i = rx_ring->next_to_clean;
- rx_desc = I40E_RX_DESC(rx_ring, i);
+ rx_desc = I40E_RX_DESC(rx_ring, rx_ring->next_to_clean);
+
qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
+ rx_ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >>
+ I40E_RXD_QW1_PTYPE_SHIFT;
rx_status = (qword & I40E_RXD_QW1_STATUS_MASK) >>
- I40E_RXD_QW1_STATUS_SHIFT;
+ I40E_RXD_QW1_STATUS_SHIFT;
if (!(rx_status & BIT(I40E_RX_DESC_STATUS_DD_SHIFT)))
break;
+ /* status_error_len will always be zero for unused descriptors
+ * because it's cleared in cleanup, and overlaps with hdr_addr
+ * which is always zero because packet split isn't used, if the
+ * hardware wrote DD then it will be non-zero
+ */
+ if (!rx_desc->wb.qword1.status_error_len)
+ break;
+
/* This memory barrier is needed to keep us from reading
* any other fields out of the rx_desc until we know the
* DD bit is set.
*/
dma_rmb();
- if (i40e_rx_is_programming_status(qword)) {
- i40e_clean_programming_status(rx_ring, rx_desc);
- I40E_RX_INCREMENT(rx_ring, i);
- continue;
- }
- rx_bi = &rx_ring->rx_bi[i];
- skb = rx_bi->skb;
- prefetch(skb->data);
-
- rx_packet_len = (qword & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
- I40E_RXD_QW1_LENGTH_PBUF_SHIFT;
-
- rx_error = (qword & I40E_RXD_QW1_ERROR_MASK) >>
- I40E_RXD_QW1_ERROR_SHIFT;
- rx_error &= ~BIT(I40E_RX_DESC_ERROR_HBO_SHIFT);
+ skb = i40e_fetch_rx_buffer(rx_ring, rx_desc);
+ if (!skb)
+ break;
- rx_ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT;
- rx_bi->skb = NULL;
cleaned_count++;
- /* Get the header and possibly the whole packet
- * If this is an skb from previous receive dma will be 0
- */
- skb_put(skb, rx_packet_len);
- dma_unmap_single(rx_ring->dev, rx_bi->dma, rx_ring->rx_buf_len,
- DMA_FROM_DEVICE);
- rx_bi->dma = 0;
-
- I40E_RX_INCREMENT(rx_ring, i);
-
- if (unlikely(
- !(rx_status & BIT(I40E_RX_DESC_STATUS_EOF_SHIFT)))) {
- rx_ring->rx_stats.non_eop_descs++;
+ if (i40e_is_non_eop(rx_ring, rx_desc, skb))
continue;
- }
- /* ERR_MASK will only have valid bits if EOP set */
- if (unlikely(rx_error & BIT(I40E_RX_DESC_ERROR_RXE_SHIFT))) {
+ /* ERR_MASK will only have valid bits if EOP set, and
+ * what we are doing here is actually checking
+ * I40E_RX_DESC_ERROR_RXE_SHIFT, since it is the zeroth bit in
+ * the error field
+ */
+ if (unlikely(i40e_test_staterr(rx_desc, BIT(I40E_RXD_QW1_ERROR_SHIFT)))) {
dev_kfree_skb_any(skb);
continue;
}
- i40e_rx_hash(rx_ring, rx_desc, skb, rx_ptype);
- if (unlikely(rx_status & I40E_RXD_QW1_STATUS_TSYNVALID_MASK)) {
- i40e_ptp_rx_hwtstamp(vsi->back, skb, (rx_status &
- I40E_RXD_QW1_STATUS_TSYNINDX_MASK) >>
- I40E_RXD_QW1_STATUS_TSYNINDX_SHIFT);
- rx_ring->last_rx_timestamp = jiffies;
- }
+ if (i40e_cleanup_headers(rx_ring, skb))
+ continue;
/* probably a little skewed due to removing CRC */
total_rx_bytes += skb->len;
- total_rx_packets++;
-
- skb->protocol = eth_type_trans(skb, rx_ring->netdev);
- i40e_rx_checksum(vsi, skb, rx_status, rx_error, rx_ptype);
+ /* populate checksum, VLAN, and protocol */
+ i40e_process_skb_fields(rx_ring, rx_desc, skb, rx_ptype);
- vlan_tag = rx_status & BIT(I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)
- ? le16_to_cpu(rx_desc->wb.qword0.lo_dword.l2tag1)
- : 0;
#ifdef I40E_FCOE
- if (!i40e_fcoe_handle_offload(rx_ring, rx_desc, skb)) {
+ if (unlikely(
+ i40e_rx_is_fcoe(rx_ptype) &&
+ !i40e_fcoe_handle_offload(rx_ring, rx_desc, skb))) {
dev_kfree_skb_any(skb);
continue;
}
#endif
+
+ vlan_tag = (qword & BIT(I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) ?
+ le16_to_cpu(rx_desc->wb.qword0.lo_dword.l2tag1) : 0;
+
i40e_receive_skb(rx_ring, skb, vlan_tag);
- rx_desc->wb.qword1.status_error_len = 0;
- } while (likely(total_rx_packets < budget));
+ /* update budget accounting */
+ total_rx_packets++;
+ }
u64_stats_update_begin(&rx_ring->syncp);
rx_ring->stats.packets += total_rx_packets;
@@ -1849,6 +1842,7 @@ static int i40e_clean_rx_irq_1buf(struct i40e_ring *rx_ring, int budget)
rx_ring->q_vector->rx.total_packets += total_rx_packets;
rx_ring->q_vector->rx.total_bytes += total_rx_bytes;
+ /* guarantee a trip back through this routine if there was a failure */
return failure ? budget : total_rx_packets;
}
@@ -1975,9 +1969,11 @@ int i40e_napi_poll(struct napi_struct *napi, int budget)
* budget and be more aggressive about cleaning up the Tx descriptors.
*/
i40e_for_each_ring(ring, q_vector->tx) {
- clean_complete = clean_complete &&
- i40e_clean_tx_irq(ring, vsi->work_limit);
- arm_wb = arm_wb || ring->arm_wb;
+ if (!i40e_clean_tx_irq(vsi, ring, budget)) {
+ clean_complete = false;
+ continue;
+ }
+ arm_wb |= ring->arm_wb;
ring->arm_wb = false;
}
@@ -1991,16 +1987,12 @@ int i40e_napi_poll(struct napi_struct *napi, int budget)
budget_per_ring = max(budget/q_vector->num_ringpairs, 1);
i40e_for_each_ring(ring, q_vector->rx) {
- int cleaned;
-
- if (ring_is_ps_enabled(ring))
- cleaned = i40e_clean_rx_irq_ps(ring, budget_per_ring);
- else
- cleaned = i40e_clean_rx_irq_1buf(ring, budget_per_ring);
+ int cleaned = i40e_clean_rx_irq(ring, budget_per_ring);
work_done += cleaned;
- /* if we didn't clean as many as budgeted, we must be done */
- clean_complete = clean_complete && (budget_per_ring > cleaned);
+ /* if we clean as many as budgeted, we must not be done */
+ if (cleaned >= budget_per_ring)
+ clean_complete = false;
}
/* If work not completed, return budget and polling will return */
@@ -2247,15 +2239,13 @@ out:
/**
* i40e_tso - set up the tso context descriptor
- * @tx_ring: ptr to the ring to send
* @skb: ptr to the skb we're sending
* @hdr_len: ptr to the size of the packet header
* @cd_type_cmd_tso_mss: Quad Word 1
*
* Returns 0 if no TSO can happen, 1 if tso is going, or error
**/
-static int i40e_tso(struct i40e_ring *tx_ring, struct sk_buff *skb,
- u8 *hdr_len, u64 *cd_type_cmd_tso_mss)
+static int i40e_tso(struct sk_buff *skb, u8 *hdr_len, u64 *cd_type_cmd_tso_mss)
{
u64 cd_cmd, cd_tso_len, cd_mss;
union {
@@ -2292,16 +2282,22 @@ static int i40e_tso(struct i40e_ring *tx_ring, struct sk_buff *skb,
ip.v6->payload_len = 0;
}
- if (skb_shinfo(skb)->gso_type & (SKB_GSO_UDP_TUNNEL | SKB_GSO_GRE |
+ if (skb_shinfo(skb)->gso_type & (SKB_GSO_GRE |
+ SKB_GSO_GRE_CSUM |
+ SKB_GSO_IPXIP4 |
+ SKB_GSO_IPXIP6 |
+ SKB_GSO_UDP_TUNNEL |
SKB_GSO_UDP_TUNNEL_CSUM)) {
- if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL_CSUM) {
+ if (!(skb_shinfo(skb)->gso_type & SKB_GSO_PARTIAL) &&
+ (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL_CSUM)) {
+ l4.udp->len = 0;
+
/* determine offset of outer transport header */
l4_offset = l4.hdr - skb->data;
/* remove payload length from outer checksum */
- paylen = (__force u16)l4.udp->check;
- paylen += ntohs(1) * (u16)~(skb->len - l4_offset);
- l4.udp->check = ~csum_fold((__force __wsum)paylen);
+ paylen = skb->len - l4_offset;
+ csum_replace_by_diff(&l4.udp->check, htonl(paylen));
}
/* reset pointers to inner headers */
@@ -2321,9 +2317,8 @@ static int i40e_tso(struct i40e_ring *tx_ring, struct sk_buff *skb,
l4_offset = l4.hdr - skb->data;
/* remove payload length from inner checksum */
- paylen = (__force u16)l4.tcp->check;
- paylen += ntohs(1) * (u16)~(skb->len - l4_offset);
- l4.tcp->check = ~csum_fold((__force __wsum)paylen);
+ paylen = skb->len - l4_offset;
+ csum_replace_by_diff(&l4.tcp->check, htonl(paylen));
/* compute length of segmentation header */
*hdr_len = (l4.tcp->doff * 4) + l4_offset;
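Both the outer UDP and the inner TCP fix-ups now share the same shape: the stack's partial checksum covers a pseudo-header whose length field counts skb->len - l4_offset bytes, but for TSO the hardware re-inserts per-segment lengths, so that contribution has to be removed. csum_replace_by_diff() performs the ones-complement adjustment the old open-coded fold did. The generic pattern, as used above:

l4_offset = l4.hdr - skb->data;		/* start of the L4 header */
paylen = skb->len - l4_offset;		/* bytes counted by the length field */
csum_replace_by_diff(&l4.tcp->check, (__force __wsum)htonl(paylen));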
@@ -2405,7 +2400,7 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags,
unsigned char *hdr;
} l4;
unsigned char *exthdr;
- u32 offset, cmd = 0, tunnel = 0;
+ u32 offset, cmd = 0;
__be16 frag_off;
u8 l4_proto = 0;
@@ -2419,6 +2414,7 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags,
offset = ((ip.hdr - skb->data) / 2) << I40E_TX_DESC_LENGTH_MACLEN_SHIFT;
if (skb->encapsulation) {
+ u32 tunnel = 0;
/* define outer network header type */
if (*tx_flags & I40E_TX_FLAGS_IPV4) {
tunnel |= (*tx_flags & I40E_TX_FLAGS_TSO) ?
@@ -2436,13 +2432,6 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags,
&l4_proto, &frag_off);
}
- /* compute outer L3 header size */
- tunnel |= ((l4.hdr - ip.hdr) / 4) <<
- I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT;
-
- /* switch IP header pointer from outer to inner header */
- ip.hdr = skb_inner_network_header(skb);
-
/* define outer transport */
switch (l4_proto) {
case IPPROTO_UDP:
@@ -2453,6 +2442,11 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags,
tunnel |= I40E_TXD_CTX_GRE_TUNNELING;
*tx_flags |= I40E_TX_FLAGS_UDP_TUNNEL;
break;
+ case IPPROTO_IPIP:
+ case IPPROTO_IPV6:
+ *tx_flags |= I40E_TX_FLAGS_UDP_TUNNEL;
+ l4.hdr = skb_inner_network_header(skb);
+ break;
default:
if (*tx_flags & I40E_TX_FLAGS_TSO)
return -1;
@@ -2461,12 +2455,20 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags,
return 0;
}
+ /* compute outer L3 header size */
+ tunnel |= ((l4.hdr - ip.hdr) / 4) <<
+ I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT;
+
+ /* switch IP header pointer from outer to inner header */
+ ip.hdr = skb_inner_network_header(skb);
+
/* compute tunnel header size */
tunnel |= ((ip.hdr - l4.hdr) / 2) <<
I40E_TXD_CTX_QW0_NATLEN_SHIFT;
/* indicate if we need to offload outer UDP header */
if ((*tx_flags & I40E_TX_FLAGS_TSO) &&
+ !(skb_shinfo(skb)->gso_type & SKB_GSO_PARTIAL) &&
(skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL_CSUM))
tunnel |= I40E_TXD_CTX_QW0_L4T_CS_MASK;
@@ -2716,6 +2718,8 @@ static inline void i40e_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,
tx_bi = first;
for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
+ unsigned int max_data = I40E_MAX_DATA_PER_TXD_ALIGNED;
+
if (dma_mapping_error(tx_ring->dev, dma))
goto dma_error;
@@ -2723,12 +2727,14 @@ static inline void i40e_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,
dma_unmap_len_set(tx_bi, len, size);
dma_unmap_addr_set(tx_bi, dma, dma);
+ /* align size to end of page */
+ max_data += -dma & (I40E_MAX_READ_REQ_SIZE - 1);
tx_desc->buffer_addr = cpu_to_le64(dma);
while (unlikely(size > I40E_MAX_DATA_PER_TXD)) {
tx_desc->cmd_type_offset_bsz =
build_ctob(td_cmd, td_offset,
- I40E_MAX_DATA_PER_TXD, td_tag);
+ max_data, td_tag);
tx_desc++;
i++;
@@ -2739,9 +2745,10 @@ static inline void i40e_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,
i = 0;
}
- dma += I40E_MAX_DATA_PER_TXD;
- size -= I40E_MAX_DATA_PER_TXD;
+ dma += max_data;
+ size -= max_data;
+ max_data = I40E_MAX_DATA_PER_TXD_ALIGNED;
tx_desc->buffer_addr = cpu_to_le64(dma);
}
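
The `-dma & (I40E_MAX_READ_REQ_SIZE - 1)` adjustment above is the usual two's-complement trick for the distance from an address to the next 4K boundary, so the first split of an oversized buffer ends on a read-request-aligned address. A small standalone check (editor sketch, not part of the patch):

	#include <stdint.h>
	#include <stdio.h>

	#define ALIGN_SZ 4096u	/* stands in for I40E_MAX_READ_REQ_SIZE */

	int main(void)
	{
		uint64_t addrs[] = { 0x0, 0x1, 0xfff, 0x1000, 0x1a2b };
		unsigned int i;

		for (i = 0; i < sizeof(addrs) / sizeof(addrs[0]); i++) {
			/* distance to the next 4K boundary, 0 if already aligned */
			uint64_t to_boundary = -addrs[i] & (ALIGN_SZ - 1);

			printf("%#6llx -> %llu bytes to next boundary\n",
			       (unsigned long long)addrs[i],
			       (unsigned long long)to_boundary);
		}
		return 0;
	}
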
@@ -2891,7 +2898,7 @@ static netdev_tx_t i40e_xmit_frame_ring(struct sk_buff *skb,
if (i40e_chk_linearize(skb, count)) {
if (__skb_linearize(skb))
goto out_drop;
- count = TXD_USE_COUNT(skb->len);
+ count = i40e_txd_use_count(skb->len);
tx_ring->tx_stats.tx_linearize++;
}
@@ -2922,7 +2929,7 @@ static netdev_tx_t i40e_xmit_frame_ring(struct sk_buff *skb,
else if (protocol == htons(ETH_P_IPV6))
tx_flags |= I40E_TX_FLAGS_IPV6;
- tso = i40e_tso(tx_ring, skb, &hdr_len, &cd_type_cmd_tso_mss);
+ tso = i40e_tso(skb, &hdr_len, &cd_type_cmd_tso_mss);
if (tso < 0)
goto out_drop;
diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.h b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
index a9bd70537d65..b78c810d1835 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.h
@@ -102,8 +102,8 @@ enum i40e_dyn_idx_t {
(((pf)->flags & I40E_FLAG_MULTIPLE_TCP_UDP_RSS_PCTYPE) ? \
I40E_DEFAULT_RSS_HENA_EXPANDED : I40E_DEFAULT_RSS_HENA)
-/* Supported Rx Buffer Sizes */
-#define I40E_RXBUFFER_512 512 /* Used for packet split */
+/* Supported Rx Buffer Sizes (a multiple of 128) */
+#define I40E_RXBUFFER_256 256
#define I40E_RXBUFFER_2048 2048
#define I40E_RXBUFFER_3072 3072 /* For FCoE MTU of 2158 */
#define I40E_RXBUFFER_4096 4096
@@ -114,9 +114,28 @@ enum i40e_dyn_idx_t {
* reserve 2 more, and skb_shared_info adds an additional 384 bytes more,
* this adds up to 512 bytes of extra data meaning the smallest allocation
* we could have is 1K.
- * i.e. RXBUFFER_512 --> size-1024 slab
+ * i.e. RXBUFFER_256 --> 960 byte skb (size-1024 slab)
+ * i.e. RXBUFFER_512 --> 1216 byte skb (size-2048 slab)
*/
-#define I40E_RX_HDR_SIZE I40E_RXBUFFER_512
+#define I40E_RX_HDR_SIZE I40E_RXBUFFER_256
+#define i40e_rx_desc i40e_32byte_rx_desc
+
+/**
+ * i40e_test_staterr - tests bits in Rx descriptor status and error fields
+ * @rx_desc: pointer to receive descriptor (in le64 format)
+ * @stat_err_bits: value to mask
+ *
+ * This function does some fast chicanery in order to return the
+ * value of the mask which is really only used for boolean tests.
+ * The status_error_len doesn't need to be shifted because it begins
+ * at offset zero.
+ */
+static inline bool i40e_test_staterr(union i40e_rx_desc *rx_desc,
+ const u64 stat_err_bits)
+{
+ return !!(rx_desc->wb.qword1.status_error_len &
+ cpu_to_le64(stat_err_bits));
+}
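
Because the helper tests the whole 64-bit status_error_len word, callers can pass any combination of status/error bits without shifting. A self-contained userspace sketch of the same idea, using stand-in types and assuming a little-endian host (this is not the driver's real descriptor layout beyond this one field):

	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	/* minimal stand-ins for the driver types, illustration only */
	#define cpu_to_le64(x)	(x)	/* little-endian host assumed */

	union rx_desc {
		struct {
			struct {
				uint64_t status_error_len;
			} qword1;
		} wb;
	};

	static bool test_staterr(const union rx_desc *rx_desc, uint64_t stat_err_bits)
	{
		return !!(rx_desc->wb.qword1.status_error_len &
			  cpu_to_le64(stat_err_bits));
	}

	int main(void)
	{
		union rx_desc desc = { .wb.qword1.status_error_len = cpu_to_le64(0x1) };

		/* bit 0 plays the role of the descriptor-done (DD) status bit here */
		printf("DD set: %d\n", test_staterr(&desc, 1ULL << 0));
		return 0;
	}
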
/* How many Rx Buffers do we bundle into one write to the hardware ? */
#define I40E_RX_BUFFER_WRITE 16 /* Must be power of 2 */
@@ -142,14 +161,41 @@ enum i40e_dyn_idx_t {
prefetch((n)); \
} while (0)
-#define i40e_rx_desc i40e_32byte_rx_desc
-
#define I40E_MAX_BUFFER_TXD 8
#define I40E_MIN_TX_LEN 17
-#define I40E_MAX_DATA_PER_TXD 8192
+
+/* The size limit for a transmit buffer in a descriptor is (16K - 1).
+ * In order to align with the read requests we will align the value to
+ * the nearest 4K which represents our maximum read request size.
+ */
+#define I40E_MAX_READ_REQ_SIZE 4096
+#define I40E_MAX_DATA_PER_TXD (16 * 1024 - 1)
+#define I40E_MAX_DATA_PER_TXD_ALIGNED \
+ (I40E_MAX_DATA_PER_TXD & ~(I40E_MAX_READ_REQ_SIZE - 1))
+
+/* This ugly bit of math is equivalent to DIV_ROUND_UP(size, X) where X is
+ * the value I40E_MAX_DATA_PER_TXD_ALIGNED. It is needed due to the fact
+ * that 12K is not a power of 2 and division is expensive. It is used to
+ * approximate the number of descriptors used per linear buffer. Note
+ * that this will overestimate in some cases as it doesn't account for the
+ * fact that we will add up to 4K - 1 in aligning the 12K buffer, however
+ * the error should not impact things much as large buffers usually mean
+ * we will use fewer descriptors then there are frags in an skb.
+ */
+static inline unsigned int i40e_txd_use_count(unsigned int size)
+{
+ const unsigned int max = I40E_MAX_DATA_PER_TXD_ALIGNED;
+ const unsigned int reciprocal = ((1ull << 32) - 1 + (max / 2)) / max;
+ unsigned int adjust = ~(u32)0;
+
+ /* if we rounded up on the reciprocal pull down the adjustment */
+ if ((max * reciprocal) > adjust)
+ adjust = ~(u32)(reciprocal - 1);
+
+ return (u32)((((u64)size * reciprocal) + adjust) >> 32);
+}
/* Tx Descriptors needed, worst case */
-#define TXD_USE_COUNT(S) DIV_ROUND_UP((S), I40E_MAX_DATA_PER_TXD)
#define DESC_NEEDED (MAX_SKB_FRAGS + 4)
#define I40E_MIN_DESC_PENDING 4
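
The reciprocal trick in i40e_txd_use_count() above replaces division by the non-power-of-2 value 12K with a multiply and a shift. A standalone sketch (editor illustration, not part of the patch) that compares it against a plain DIV_ROUND_UP for a few sizes, assuming the same 12K aligned limit:

	#include <stdint.h>
	#include <stdio.h>

	#define MAX_ALIGNED	((16 * 1024 - 1) & ~(4096 - 1))	/* 12K, as above */
	#define DIV_ROUND_UP(n, d)	(((n) + (d) - 1) / (d))

	static unsigned int txd_use_count(unsigned int size)
	{
		const unsigned int max = MAX_ALIGNED;
		const unsigned int reciprocal = ((1ull << 32) - 1 + (max / 2)) / max;
		unsigned int adjust = ~(uint32_t)0;

		/* if we rounded up on the reciprocal pull down the adjustment */
		if ((max * reciprocal) > adjust)
			adjust = ~(uint32_t)(reciprocal - 1);

		return (uint32_t)((((uint64_t)size * reciprocal) + adjust) >> 32);
	}

	int main(void)
	{
		unsigned int sizes[] = { 1, 4096, 12287, 12288, 12289, 65536 };
		unsigned int i;

		for (i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
			printf("size %6u: reciprocal %u, DIV_ROUND_UP %u\n",
			       sizes[i], txd_use_count(sizes[i]),
			       DIV_ROUND_UP(sizes[i], MAX_ALIGNED));
		return 0;
	}
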
@@ -184,10 +230,8 @@ struct i40e_tx_buffer {
struct i40e_rx_buffer {
struct sk_buff *skb;
- void *hdr_buf;
dma_addr_t dma;
struct page *page;
- dma_addr_t page_dma;
unsigned int page_offset;
};
@@ -216,22 +260,18 @@ struct i40e_rx_queue_stats {
enum i40e_ring_state_t {
__I40E_TX_FDIR_INIT_DONE,
__I40E_TX_XPS_INIT_DONE,
- __I40E_RX_PS_ENABLED,
- __I40E_RX_16BYTE_DESC_ENABLED,
};
-#define ring_is_ps_enabled(ring) \
- test_bit(__I40E_RX_PS_ENABLED, &(ring)->state)
-#define set_ring_ps_enabled(ring) \
- set_bit(__I40E_RX_PS_ENABLED, &(ring)->state)
-#define clear_ring_ps_enabled(ring) \
- clear_bit(__I40E_RX_PS_ENABLED, &(ring)->state)
-#define ring_is_16byte_desc_enabled(ring) \
- test_bit(__I40E_RX_16BYTE_DESC_ENABLED, &(ring)->state)
-#define set_ring_16byte_desc_enabled(ring) \
- set_bit(__I40E_RX_16BYTE_DESC_ENABLED, &(ring)->state)
-#define clear_ring_16byte_desc_enabled(ring) \
- clear_bit(__I40E_RX_16BYTE_DESC_ENABLED, &(ring)->state)
+/* some useful defines for virtchannel interface, which
+ * is the only remaining user of header split
+ */
+#define I40E_RX_DTYPE_NO_SPLIT 0
+#define I40E_RX_DTYPE_HEADER_SPLIT 1
+#define I40E_RX_DTYPE_SPLIT_ALWAYS 2
+#define I40E_RX_SPLIT_L2 0x1
+#define I40E_RX_SPLIT_IP 0x2
+#define I40E_RX_SPLIT_TCP_UDP 0x4
+#define I40E_RX_SPLIT_SCTP 0x8
/* struct that defines a descriptor ring, associated with a VSI */
struct i40e_ring {
@@ -258,16 +298,7 @@ struct i40e_ring {
u16 count; /* Number of descriptors */
u16 reg_idx; /* HW register index of the ring */
- u16 rx_hdr_len;
u16 rx_buf_len;
- u8 dtype;
-#define I40E_RX_DTYPE_NO_SPLIT 0
-#define I40E_RX_DTYPE_HEADER_SPLIT 1
-#define I40E_RX_DTYPE_SPLIT_ALWAYS 2
-#define I40E_RX_SPLIT_L2 0x1
-#define I40E_RX_SPLIT_IP 0x2
-#define I40E_RX_SPLIT_TCP_UDP 0x4
-#define I40E_RX_SPLIT_SCTP 0x8
/* used in interrupt processing */
u16 next_to_use;
@@ -301,6 +332,7 @@ struct i40e_ring {
struct i40e_q_vector *q_vector; /* Backreference to associated vector */
struct rcu_head rcu; /* to avoid race on free */
+ u16 next_to_alloc;
} ____cacheline_internodealigned_in_smp;
enum i40e_latency_range {
@@ -324,9 +356,7 @@ struct i40e_ring_container {
#define i40e_for_each_ring(pos, head) \
for (pos = (head).ring; pos != NULL; pos = pos->next)
-bool i40e_alloc_rx_buffers_ps(struct i40e_ring *rxr, u16 cleaned_count);
-bool i40e_alloc_rx_buffers_1buf(struct i40e_ring *rxr, u16 cleaned_count);
-void i40e_alloc_rx_headers(struct i40e_ring *rxr);
+bool i40e_alloc_rx_buffers(struct i40e_ring *rxr, u16 cleaned_count);
netdev_tx_t i40e_lan_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
void i40e_clean_tx_ring(struct i40e_ring *tx_ring);
void i40e_clean_rx_ring(struct i40e_ring *rx_ring);
@@ -377,7 +407,7 @@ static inline int i40e_xmit_descriptor_count(struct sk_buff *skb)
int count = 0, size = skb_headlen(skb);
for (;;) {
- count += TXD_USE_COUNT(size);
+ count += i40e_txd_use_count(size);
if (!nr_frags--)
break;
@@ -423,4 +453,14 @@ static inline bool i40e_chk_linearize(struct sk_buff *skb, int count)
/* we can support up to 8 data buffers for a single send */
return count != I40E_MAX_BUFFER_TXD;
}
+
+/**
+ * i40e_rx_is_fcoe - returns true if the Rx packet type is FCoE
+ * @ptype: the packet type field from Rx descriptor write-back
+ **/
+static inline bool i40e_rx_is_fcoe(u16 ptype)
+{
+ return (ptype >= I40E_RX_PTYPE_L2_FCOE_PAY3) &&
+ (ptype <= I40E_RX_PTYPE_L2_FCOE_VFT_FCOTHER);
+}
#endif /* _I40E_TXRX_H_ */
diff --git a/drivers/net/ethernet/intel/i40e/i40e_type.h b/drivers/net/ethernet/intel/i40e/i40e_type.h
index 3335f9d13374..bd5f13bef83c 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_type.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_type.h
@@ -36,7 +36,7 @@
#include "i40e_devids.h"
/* I40E_MASK is a macro used on 32 bit registers */
-#define I40E_MASK(mask, shift) (mask << shift)
+#define I40E_MASK(mask, shift) ((u32)(mask) << (shift))
#define I40E_MAX_VSI_QP 16
#define I40E_MAX_VF_VSI 3
@@ -275,6 +275,11 @@ struct i40e_hw_capabilities {
#define I40E_FLEX10_STATUS_DCC_ERROR 0x1
#define I40E_FLEX10_STATUS_VC_MODE 0x2
+ bool sec_rev_disabled;
+ bool update_disabled;
+#define I40E_NVM_MGMT_SEC_REV_DISABLED 0x1
+#define I40E_NVM_MGMT_UPDATE_DISABLED 0x2
+
bool mgmt_cem;
bool ieee_1588;
bool iwarp;
@@ -549,6 +554,8 @@ struct i40e_hw {
enum i40e_nvmupd_state nvmupd_state;
struct i40e_aq_desc nvm_wb_desc;
struct i40e_virt_mem nvm_buff;
+ bool nvm_release_on_done;
+ u16 nvm_wait_opcode;
/* HMC info */
struct i40e_hmc_info hmc; /* HMC info struct */
@@ -1533,4 +1540,37 @@ struct i40e_lldp_variables {
/* RSS Hash Table Size */
#define I40E_PFQF_CTL_0_HASHLUTSIZE_512 0x00010000
+
+/* INPUT SET MASK for RSS, flow director, and flexible payload */
+#define I40E_L3_SRC_SHIFT 47
+#define I40E_L3_SRC_MASK (0x3ULL << I40E_L3_SRC_SHIFT)
+#define I40E_L3_V6_SRC_SHIFT 43
+#define I40E_L3_V6_SRC_MASK (0xFFULL << I40E_L3_V6_SRC_SHIFT)
+#define I40E_L3_DST_SHIFT 35
+#define I40E_L3_DST_MASK (0x3ULL << I40E_L3_DST_SHIFT)
+#define I40E_L3_V6_DST_SHIFT 35
+#define I40E_L3_V6_DST_MASK (0xFFULL << I40E_L3_V6_DST_SHIFT)
+#define I40E_L4_SRC_SHIFT 34
+#define I40E_L4_SRC_MASK (0x1ULL << I40E_L4_SRC_SHIFT)
+#define I40E_L4_DST_SHIFT 33
+#define I40E_L4_DST_MASK (0x1ULL << I40E_L4_DST_SHIFT)
+#define I40E_VERIFY_TAG_SHIFT 31
+#define I40E_VERIFY_TAG_MASK (0x3ULL << I40E_VERIFY_TAG_SHIFT)
+
+#define I40E_FLEX_50_SHIFT 13
+#define I40E_FLEX_50_MASK (0x1ULL << I40E_FLEX_50_SHIFT)
+#define I40E_FLEX_51_SHIFT 12
+#define I40E_FLEX_51_MASK (0x1ULL << I40E_FLEX_51_SHIFT)
+#define I40E_FLEX_52_SHIFT 11
+#define I40E_FLEX_52_MASK (0x1ULL << I40E_FLEX_52_SHIFT)
+#define I40E_FLEX_53_SHIFT 10
+#define I40E_FLEX_53_MASK (0x1ULL << I40E_FLEX_53_SHIFT)
+#define I40E_FLEX_54_SHIFT 9
+#define I40E_FLEX_54_MASK (0x1ULL << I40E_FLEX_54_SHIFT)
+#define I40E_FLEX_55_SHIFT 8
+#define I40E_FLEX_55_MASK (0x1ULL << I40E_FLEX_55_SHIFT)
+#define I40E_FLEX_56_SHIFT 7
+#define I40E_FLEX_56_MASK (0x1ULL << I40E_FLEX_56_SHIFT)
+#define I40E_FLEX_57_SHIFT 6
+#define I40E_FLEX_57_MASK (0x1ULL << I40E_FLEX_57_SHIFT)
#endif /* _I40E_TYPE_H_ */
diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl.h
index ab866cf3dc18..c92a3bdee229 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl.h
@@ -80,10 +80,15 @@ enum i40e_virtchnl_ops {
I40E_VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE = 14,
I40E_VIRTCHNL_OP_GET_STATS = 15,
I40E_VIRTCHNL_OP_FCOE = 16,
- I40E_VIRTCHNL_OP_EVENT = 17,
+ I40E_VIRTCHNL_OP_EVENT = 17, /* must ALWAYS be 17 */
I40E_VIRTCHNL_OP_IWARP = 20,
I40E_VIRTCHNL_OP_CONFIG_IWARP_IRQ_MAP = 21,
I40E_VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP = 22,
+ I40E_VIRTCHNL_OP_CONFIG_RSS_KEY = 23,
+ I40E_VIRTCHNL_OP_CONFIG_RSS_LUT = 24,
+ I40E_VIRTCHNL_OP_GET_RSS_HENA_CAPS = 25,
+ I40E_VIRTCHNL_OP_SET_RSS_HENA = 26,
+
};
/* Virtual channel message descriptor. This overlays the admin queue
@@ -157,6 +162,7 @@ struct i40e_virtchnl_vsi_resource {
#define I40E_VIRTCHNL_VF_OFFLOAD_VLAN 0x00010000
#define I40E_VIRTCHNL_VF_OFFLOAD_RX_POLLING 0x00020000
#define I40E_VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2 0x00040000
+#define I40E_VIRTCHNL_VF_OFFLOAD_RSS_PF 0x00080000
struct i40e_virtchnl_vf_resource {
u16 num_vsis;
@@ -165,8 +171,8 @@ struct i40e_virtchnl_vf_resource {
u16 max_mtu;
u32 vf_offload_flags;
- u32 max_fcoe_contexts;
- u32 max_fcoe_filters;
+ u32 rss_key_size;
+ u32 rss_lut_size;
struct i40e_virtchnl_vsi_resource vsi_res[1];
};
@@ -325,6 +331,39 @@ struct i40e_virtchnl_promisc_info {
* PF replies with struct i40e_eth_stats in an external buffer.
*/
+/* I40E_VIRTCHNL_OP_CONFIG_RSS_KEY
+ * I40E_VIRTCHNL_OP_CONFIG_RSS_LUT
+ * VF sends these messages to configure RSS. Only supported if both PF
+ * and VF drivers set the I40E_VIRTCHNL_VF_OFFLOAD_RSS_PF bit during
+ * configuration negotiation. If this is the case, then the RSS fields in
+ * the VF resource struct are valid.
+ * Both the key and LUT are initialized to 0 by the PF, meaning that
+ * RSS is effectively disabled until set up by the VF.
+ */
+struct i40e_virtchnl_rss_key {
+ u16 vsi_id;
+ u16 key_len;
+ u8 key[1]; /* RSS hash key, packed bytes */
+};
+
+struct i40e_virtchnl_rss_lut {
+ u16 vsi_id;
+ u16 lut_entries;
+ u8 lut[1]; /* RSS lookup table */
+};
+
+/* I40E_VIRTCHNL_OP_GET_RSS_HENA_CAPS
+ * I40E_VIRTCHNL_OP_SET_RSS_HENA
+ * VF sends these messages to get and set the hash filter enable bits for RSS.
+ * By default, the PF sets these to all possible traffic types that the
+ * hardware supports. The VF can query this value if it wants to change the
+ * traffic types that are hashed by the hardware.
+ * Traffic types are defined in the i40e_filter_pctype enum in i40e_type.h
+ */
+struct i40e_virtchnl_rss_hena {
+ u64 hena;
+};
+
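
Both structures end in a one-byte placeholder array, so the message length a VF sends is sizeof(struct) plus key_len - 1 (or lut_entries - 1), which is what the validation added later in this patch expects. A hedged sketch of how a VF side might build the key message; i40e_vf_send_msg() and struct vf_ctx are hypothetical stand-ins for the VF driver's real send path:

	/* illustration only: i40e_vf_send_msg() and struct vf_ctx are hypothetical */
	static int i40e_vf_config_rss_key(struct vf_ctx *ctx, const u8 *key, u16 key_len)
	{
		struct i40e_virtchnl_rss_key *vrk;
		u16 len = sizeof(*vrk) + key_len - 1;	/* key[1] already counts one byte */
		int ret;

		vrk = kzalloc(len, GFP_KERNEL);
		if (!vrk)
			return -ENOMEM;

		vrk->vsi_id = ctx->vsi_id;
		vrk->key_len = key_len;		/* must equal I40E_HKEY_ARRAY_SIZE */
		memcpy(vrk->key, key, key_len);

		ret = i40e_vf_send_msg(ctx, I40E_VIRTCHNL_OP_CONFIG_RSS_KEY,
				       (u8 *)vrk, len);
		kfree(vrk);
		return ret;
	}
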
/* I40E_VIRTCHNL_OP_EVENT
* PF sends this message to inform the VF driver of events that may affect it.
* No direct response is expected from the VF, though it may generate other
diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
index 816c6bbf7093..1fcafcfa8f14 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
+++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.c
@@ -48,7 +48,7 @@ static void i40e_vc_vf_broadcast(struct i40e_pf *pf,
int i;
for (i = 0; i < pf->num_alloc_vfs; i++, vf++) {
- int abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
+ int abs_vf_id = vf->vf_id + (int)hw->func_caps.vf_base_id;
/* Not all vfs are enabled so skip the ones that are not */
if (!test_bit(I40E_VF_STAT_INIT, &vf->vf_states) &&
!test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states))
@@ -63,7 +63,7 @@ static void i40e_vc_vf_broadcast(struct i40e_pf *pf,
}
/**
- * i40e_vc_notify_link_state
+ * i40e_vc_notify_vf_link_state
* @vf: pointer to the VF structure
*
* send a link status message to a single VF
@@ -74,7 +74,7 @@ static void i40e_vc_notify_vf_link_state(struct i40e_vf *vf)
struct i40e_pf *pf = vf->pf;
struct i40e_hw *hw = &pf->hw;
struct i40e_link_status *ls = &pf->hw.phy.link_info;
- int abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
+ int abs_vf_id = vf->vf_id + (int)hw->func_caps.vf_base_id;
pfe.event = I40E_VIRTCHNL_EVENT_LINK_CHANGE;
pfe.severity = I40E_PF_EVENT_SEVERITY_INFO;
@@ -141,7 +141,7 @@ void i40e_vc_notify_vf_reset(struct i40e_vf *vf)
!test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states))
return;
- abs_vf_id = vf->vf_id + vf->pf->hw.func_caps.vf_base_id;
+ abs_vf_id = vf->vf_id + (int)vf->pf->hw.func_caps.vf_base_id;
pfe.event = I40E_VIRTCHNL_EVENT_RESET_IMPENDING;
pfe.severity = I40E_PF_EVENT_SEVERITY_CERTAIN_DOOM;
@@ -590,7 +590,7 @@ static int i40e_config_vsi_rx_queue(struct i40e_vf *vf, u16 vsi_id,
}
rx_ctx.hbuff = info->hdr_size >> I40E_RXQ_CTX_HBUFF_SHIFT;
- /* set splitalways mode 10b */
+ /* set split mode 10b */
rx_ctx.dtype = I40E_RX_DTYPE_HEADER_SPLIT;
}
@@ -665,8 +665,6 @@ static int i40e_alloc_vsi_res(struct i40e_vf *vf, enum i40e_vsi_type type)
goto error_alloc_vsi_res;
}
if (type == I40E_VSI_SRIOV) {
- u8 brdcast[ETH_ALEN] = {0xff, 0xff, 0xff, 0xff, 0xff, 0xff};
-
vf->lan_vsi_idx = vsi->idx;
vf->lan_vsi_id = vsi->id;
/* If the port VLAN has been configured and then the
@@ -688,12 +686,6 @@ static int i40e_alloc_vsi_res(struct i40e_vf *vf, enum i40e_vsi_type type)
"Could not add MAC filter %pM for VF %d\n",
vf->default_lan_addr.addr, vf->vf_id);
}
- f = i40e_add_filter(vsi, brdcast,
- vf->port_vlan_id ? vf->port_vlan_id : -1,
- true, false);
- if (!f)
- dev_info(&pf->pdev->dev,
- "Could not allocate VF broadcast filter\n");
spin_unlock_bh(&vsi->mac_filter_list_lock);
}
@@ -860,7 +852,11 @@ static int i40e_alloc_vf_res(struct i40e_vf *vf)
if (ret)
goto error_alloc;
total_queue_pairs += pf->vsi[vf->lan_vsi_idx]->alloc_queue_pairs;
- set_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps);
+
+ if (vf->trusted)
+ set_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps);
+ else
+ clear_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps);
/* store the total qps number for the runtime
* VF req validation
@@ -917,9 +913,9 @@ void i40e_reset_vf(struct i40e_vf *vf, bool flr)
{
struct i40e_pf *pf = vf->pf;
struct i40e_hw *hw = &pf->hw;
+ u32 reg, reg_idx, bit_idx;
bool rsd = false;
int i;
- u32 reg;
if (test_and_set_bit(__I40E_VF_DISABLE, &pf->state))
return;
@@ -937,6 +933,11 @@ void i40e_reset_vf(struct i40e_vf *vf, bool flr)
wr32(hw, I40E_VPGEN_VFRTRIG(vf->vf_id), reg);
i40e_flush(hw);
}
+ /* clear the VFLR bit in GLGEN_VFLRSTAT */
+ reg_idx = (hw->func_caps.vf_base_id + vf->vf_id) / 32;
+ bit_idx = (hw->func_caps.vf_base_id + vf->vf_id) % 32;
+ wr32(hw, I40E_GLGEN_VFLRSTAT(reg_idx), BIT(bit_idx));
+ i40e_flush(hw);
if (i40e_quiesce_vf_pci(vf))
dev_err(&pf->pdev->dev, "VF %d PCI transactions stuck\n",
@@ -988,6 +989,7 @@ complete_reset:
}
/* tell the VF the reset is done */
wr32(hw, I40E_VFGEN_RSTAT1(vf->vf_id), I40E_VFR_VFACTIVE);
+
i40e_flush(hw);
clear_bit(__I40E_VF_DISABLE, &pf->state);
}
@@ -1227,8 +1229,8 @@ static int i40e_vc_send_msg_to_vf(struct i40e_vf *vf, u32 v_opcode,
/* single place to detect unsuccessful return values */
if (v_retval) {
vf->num_invalid_msgs++;
- dev_err(&pf->pdev->dev, "VF %d failed opcode %d, error: %d\n",
- vf->vf_id, v_opcode, v_retval);
+ dev_info(&pf->pdev->dev, "VF %d failed opcode %d, retval: %d\n",
+ vf->vf_id, v_opcode, v_retval);
if (vf->num_invalid_msgs >
I40E_DEFAULT_NUM_INVALID_MSGS_ALLOWED) {
dev_err(&pf->pdev->dev,
@@ -1246,9 +1248,9 @@ static int i40e_vc_send_msg_to_vf(struct i40e_vf *vf, u32 v_opcode,
aq_ret = i40e_aq_send_msg_to_vf(hw, abs_vf_id, v_opcode, v_retval,
msg, msglen, NULL);
if (aq_ret) {
- dev_err(&pf->pdev->dev,
- "Unable to send the message to VF %d aq_err %d\n",
- vf->vf_id, pf->hw.aq.asq_last_status);
+ dev_info(&pf->pdev->dev,
+ "Unable to send the message to VF %d aq_err %d\n",
+ vf->vf_id, pf->hw.aq.asq_last_status);
return -EIO;
}
@@ -1306,8 +1308,8 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
struct i40e_pf *pf = vf->pf;
i40e_status aq_ret = 0;
struct i40e_vsi *vsi;
- int i = 0, len = 0;
int num_vsis = 1;
+ int len = 0;
int ret;
if (!test_bit(I40E_VF_STAT_INIT, &vf->vf_states)) {
@@ -1342,12 +1344,16 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
set_bit(I40E_VF_STAT_IWARPENA, &vf->vf_states);
}
- if (pf->flags & I40E_FLAG_RSS_AQ_CAPABLE) {
- if (vf->driver_caps & I40E_VIRTCHNL_VF_OFFLOAD_RSS_AQ)
- vfres->vf_offload_flags |=
- I40E_VIRTCHNL_VF_OFFLOAD_RSS_AQ;
+ if (vf->driver_caps & I40E_VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+ vfres->vf_offload_flags |= I40E_VIRTCHNL_VF_OFFLOAD_RSS_PF;
} else {
- vfres->vf_offload_flags |= I40E_VIRTCHNL_VF_OFFLOAD_RSS_REG;
+ if ((pf->flags & I40E_FLAG_RSS_AQ_CAPABLE) &&
+ (vf->driver_caps & I40E_VIRTCHNL_VF_OFFLOAD_RSS_AQ))
+ vfres->vf_offload_flags |=
+ I40E_VIRTCHNL_VF_OFFLOAD_RSS_AQ;
+ else
+ vfres->vf_offload_flags |=
+ I40E_VIRTCHNL_VF_OFFLOAD_RSS_REG;
}
if (pf->flags & I40E_FLAG_MULTIPLE_TCP_UDP_RSS_PCTYPE) {
@@ -1356,8 +1362,16 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
I40E_VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2;
}
- if (vf->driver_caps & I40E_VIRTCHNL_VF_OFFLOAD_RX_POLLING)
+ if (vf->driver_caps & I40E_VIRTCHNL_VF_OFFLOAD_RX_POLLING) {
+ if (pf->flags & I40E_FLAG_MFP_ENABLED) {
+ dev_err(&pf->pdev->dev,
+ "VF %d requested polling mode: this feature is supported only when the device is running in single function per port (SFP) mode\n",
+ vf->vf_id);
+ ret = I40E_ERR_PARAM;
+ goto err;
+ }
vfres->vf_offload_flags |= I40E_VIRTCHNL_VF_OFFLOAD_RX_POLLING;
+ }
if (pf->flags & I40E_FLAG_WB_ON_ITR_CAPABLE) {
if (vf->driver_caps & I40E_VIRTCHNL_VF_OFFLOAD_WB_ON_ITR)
@@ -1368,16 +1382,18 @@ static int i40e_vc_get_vf_resources_msg(struct i40e_vf *vf, u8 *msg)
vfres->num_vsis = num_vsis;
vfres->num_queue_pairs = vf->num_queue_pairs;
vfres->max_vectors = pf->hw.func_caps.num_msix_vectors_vf;
+ vfres->rss_key_size = I40E_HKEY_ARRAY_SIZE;
+ vfres->rss_lut_size = I40E_VF_HLUT_ARRAY_SIZE;
+
if (vf->lan_vsi_idx) {
- vfres->vsi_res[i].vsi_id = vf->lan_vsi_id;
- vfres->vsi_res[i].vsi_type = I40E_VSI_SRIOV;
- vfres->vsi_res[i].num_queue_pairs = vsi->alloc_queue_pairs;
+ vfres->vsi_res[0].vsi_id = vf->lan_vsi_id;
+ vfres->vsi_res[0].vsi_type = I40E_VSI_SRIOV;
+ vfres->vsi_res[0].num_queue_pairs = vsi->alloc_queue_pairs;
/* VFs only use TC 0 */
- vfres->vsi_res[i].qset_handle
+ vfres->vsi_res[0].qset_handle
= le16_to_cpu(vsi->info.qs_handle[0]);
- ether_addr_copy(vfres->vsi_res[i].default_mac_addr,
+ ether_addr_copy(vfres->vsi_res[0].default_mac_addr,
vf->default_lan_addr.addr);
- i++;
}
set_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states);
@@ -1407,6 +1423,25 @@ static void i40e_vc_reset_vf_msg(struct i40e_vf *vf)
}
/**
+ * i40e_getnum_vf_vsi_vlan_filters
+ * @vsi: pointer to the vsi
+ *
+ * called to get the number of VLANs offloaded on this VF
+ **/
+static inline int i40e_getnum_vf_vsi_vlan_filters(struct i40e_vsi *vsi)
+{
+ struct i40e_mac_filter *f;
+ int num_vlans = 0;
+
+ list_for_each_entry(f, &vsi->mac_filter_list, list) {
+ if (f->vlan >= 0 && f->vlan <= I40E_MAX_VLANID)
+ num_vlans++;
+ }
+
+ return num_vlans;
+}
+
+/**
* i40e_vc_config_promiscuous_mode_msg
* @vf: pointer to the VF info
* @msg: pointer to the msg buffer
@@ -1422,22 +1457,128 @@ static int i40e_vc_config_promiscuous_mode_msg(struct i40e_vf *vf,
(struct i40e_virtchnl_promisc_info *)msg;
struct i40e_pf *pf = vf->pf;
struct i40e_hw *hw = &pf->hw;
- struct i40e_vsi *vsi;
+ struct i40e_mac_filter *f;
+ i40e_status aq_ret = 0;
bool allmulti = false;
- i40e_status aq_ret;
+ struct i40e_vsi *vsi;
+ bool alluni = false;
+ int aq_err = 0;
vsi = i40e_find_vsi_from_id(pf, info->vsi_id);
if (!test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states) ||
- !test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps) ||
- !i40e_vc_isvalid_vsi_id(vf, info->vsi_id) ||
- (vsi->type != I40E_VSI_FCOE)) {
+ !i40e_vc_isvalid_vsi_id(vf, info->vsi_id)) {
aq_ret = I40E_ERR_PARAM;
goto error_param;
}
+ if (!test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps)) {
+ dev_err(&pf->pdev->dev,
+ "Unprivileged VF %d is attempting to configure promiscuous mode\n",
+ vf->vf_id);
+ /* Lie to the VF on purpose. */
+ aq_ret = 0;
+ goto error_param;
+ }
+ /* Multicast promiscuous handling */
if (info->flags & I40E_FLAG_VF_MULTICAST_PROMISC)
allmulti = true;
- aq_ret = i40e_aq_set_vsi_multicast_promiscuous(hw, vsi->seid,
- allmulti, NULL);
+
+ if (vf->port_vlan_id) {
+ aq_ret = i40e_aq_set_vsi_mc_promisc_on_vlan(hw, vsi->seid,
+ allmulti,
+ vf->port_vlan_id,
+ NULL);
+ } else if (i40e_getnum_vf_vsi_vlan_filters(vsi)) {
+ list_for_each_entry(f, &vsi->mac_filter_list, list) {
+ if (f->vlan < 0 || f->vlan > I40E_MAX_VLANID)
+ continue;
+ aq_ret = i40e_aq_set_vsi_mc_promisc_on_vlan(hw,
+ vsi->seid,
+ allmulti,
+ f->vlan,
+ NULL);
+ aq_err = pf->hw.aq.asq_last_status;
+ if (aq_ret) {
+ dev_err(&pf->pdev->dev,
+ "Could not add VLAN %d to multicast promiscuous domain err %s aq_err %s\n",
+ f->vlan,
+ i40e_stat_str(&pf->hw, aq_ret),
+ i40e_aq_str(&pf->hw, aq_err));
+ break;
+ }
+ }
+ } else {
+ aq_ret = i40e_aq_set_vsi_multicast_promiscuous(hw, vsi->seid,
+ allmulti, NULL);
+ aq_err = pf->hw.aq.asq_last_status;
+ if (aq_ret) {
+ dev_err(&pf->pdev->dev,
+ "VF %d failed to set multicast promiscuous mode err %s aq_err %s\n",
+ vf->vf_id,
+ i40e_stat_str(&pf->hw, aq_ret),
+ i40e_aq_str(&pf->hw, aq_err));
+ goto error_param_int;
+ }
+ }
+
+ if (!aq_ret) {
+ dev_info(&pf->pdev->dev,
+ "VF %d successfully set multicast promiscuous mode\n",
+ vf->vf_id);
+ if (allmulti)
+ set_bit(I40E_VF_STAT_MC_PROMISC, &vf->vf_states);
+ else
+ clear_bit(I40E_VF_STAT_MC_PROMISC, &vf->vf_states);
+ }
+
+ if (info->flags & I40E_FLAG_VF_UNICAST_PROMISC)
+ alluni = true;
+ if (vf->port_vlan_id) {
+ aq_ret = i40e_aq_set_vsi_uc_promisc_on_vlan(hw, vsi->seid,
+ alluni,
+ vf->port_vlan_id,
+ NULL);
+ } else if (i40e_getnum_vf_vsi_vlan_filters(vsi)) {
+ list_for_each_entry(f, &vsi->mac_filter_list, list) {
+ aq_ret = 0;
+ if (f->vlan >= 0 && f->vlan <= I40E_MAX_VLANID) {
+ aq_ret =
+ i40e_aq_set_vsi_uc_promisc_on_vlan(hw,
+ vsi->seid,
+ alluni,
+ f->vlan,
+ NULL);
+ aq_err = pf->hw.aq.asq_last_status;
+ }
+ if (aq_ret)
+ dev_err(&pf->pdev->dev,
+ "Could not add VLAN %d to Unicast promiscuous domain err %s aq_err %s\n",
+ f->vlan,
+ i40e_stat_str(&pf->hw, aq_ret),
+ i40e_aq_str(&pf->hw, aq_err));
+ }
+ } else {
+ aq_ret = i40e_aq_set_vsi_unicast_promiscuous(hw, vsi->seid,
+ alluni, NULL,
+ true);
+ aq_err = pf->hw.aq.asq_last_status;
+ if (aq_ret)
+ dev_err(&pf->pdev->dev,
+ "VF %d failed to set unicast promiscuous mode %8.8x err %s aq_err %s\n",
+ vf->vf_id, info->flags,
+ i40e_stat_str(&pf->hw, aq_ret),
+ i40e_aq_str(&pf->hw, aq_err));
+ }
+
+error_param_int:
+ if (!aq_ret) {
+ dev_info(&pf->pdev->dev,
+ "VF %d successfully set unicast promiscuous mode\n",
+ vf->vf_id);
+ if (alluni)
+ set_bit(I40E_VF_STAT_UC_PROMISC, &vf->vf_states);
+ else
+ clear_bit(I40E_VF_STAT_UC_PROMISC, &vf->vf_states);
+ }
error_param:
/* send the response to the VF */
@@ -1688,6 +1829,10 @@ error_param:
(u8 *)&stats, sizeof(stats));
}
+/* If the VF is not trusted, restrict the number of MAC/VLAN filters it can program */
+#define I40E_VC_MAX_MAC_ADDR_PER_VF 8
+#define I40E_VC_MAX_VLAN_PER_VF 8
+
/**
* i40e_check_vf_permission
* @vf: pointer to the VF info
@@ -1708,15 +1853,22 @@ static inline int i40e_check_vf_permission(struct i40e_vf *vf, u8 *macaddr)
dev_err(&pf->pdev->dev, "invalid VF MAC addr %pM\n", macaddr);
ret = I40E_ERR_INVALID_MAC_ADDR;
} else if (vf->pf_set_mac && !is_multicast_ether_addr(macaddr) &&
+ !test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps) &&
!ether_addr_equal(macaddr, vf->default_lan_addr.addr)) {
/* If the host VMM administrator has set the VF MAC address
* administratively via the ndo_set_vf_mac command then deny
* permission to the VF to add or delete unicast MAC addresses.
+ * Unless the VF is privileged, in which case it may do so.
* The VF may request to set the MAC address filter already
* assigned to it so do not return an error in that case.
*/
dev_err(&pf->pdev->dev,
- "VF attempting to override administratively set MAC address\nPlease reload the VF driver to resume normal operation\n");
+ "VF attempting to override administratively set MAC address, reload the VF driver to resume normal operation\n");
+ ret = -EPERM;
+ } else if ((vf->num_mac >= I40E_VC_MAX_MAC_ADDR_PER_VF) &&
+ !test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps)) {
+ dev_err(&pf->pdev->dev,
+ "VF is not trusted, switch the VF to trusted to add more functionality\n");
ret = -EPERM;
}
return ret;
@@ -1741,7 +1893,6 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
int i;
if (!test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states) ||
- !test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps) ||
!i40e_vc_isvalid_vsi_id(vf, vsi_id)) {
ret = I40E_ERR_PARAM;
goto error_param;
@@ -1780,6 +1931,8 @@ static int i40e_vc_add_mac_addr_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
ret = I40E_ERR_PARAM;
spin_unlock_bh(&vsi->mac_filter_list_lock);
goto error_param;
+ } else {
+ vf->num_mac++;
}
}
spin_unlock_bh(&vsi->mac_filter_list_lock);
@@ -1815,7 +1968,6 @@ static int i40e_vc_del_mac_addr_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
int i;
if (!test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states) ||
- !test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps) ||
!i40e_vc_isvalid_vsi_id(vf, vsi_id)) {
ret = I40E_ERR_PARAM;
goto error_param;
@@ -1839,6 +1991,8 @@ static int i40e_vc_del_mac_addr_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
ret = I40E_ERR_INVALID_MAC_ADDR;
spin_unlock_bh(&vsi->mac_filter_list_lock);
goto error_param;
+ } else {
+ vf->num_mac--;
}
spin_unlock_bh(&vsi->mac_filter_list_lock);
@@ -1873,8 +2027,13 @@ static int i40e_vc_add_vlan_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
i40e_status aq_ret = 0;
int i;
+ if ((vf->num_vlan >= I40E_VC_MAX_VLAN_PER_VF) &&
+ !test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps)) {
+ dev_err(&pf->pdev->dev,
+ "VF is not trusted, switch the VF to trusted to add more VLAN addresses\n");
+ goto error_param;
+ }
if (!test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states) ||
- !test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps) ||
!i40e_vc_isvalid_vsi_id(vf, vsi_id)) {
aq_ret = I40E_ERR_PARAM;
goto error_param;
@@ -1898,6 +2057,19 @@ static int i40e_vc_add_vlan_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
for (i = 0; i < vfl->num_elements; i++) {
/* add new VLAN filter */
int ret = i40e_vsi_add_vlan(vsi, vfl->vlan_id[i]);
+ if (!ret)
+ vf->num_vlan++;
+
+ if (test_bit(I40E_VF_STAT_UC_PROMISC, &vf->vf_states))
+ i40e_aq_set_vsi_uc_promisc_on_vlan(&pf->hw, vsi->seid,
+ true,
+ vfl->vlan_id[i],
+ NULL);
+ if (test_bit(I40E_VF_STAT_MC_PROMISC, &vf->vf_states))
+ i40e_aq_set_vsi_mc_promisc_on_vlan(&pf->hw, vsi->seid,
+ true,
+ vfl->vlan_id[i],
+ NULL);
if (ret)
dev_err(&pf->pdev->dev,
@@ -1929,7 +2101,6 @@ static int i40e_vc_remove_vlan_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
int i;
if (!test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states) ||
- !test_bit(I40E_VIRTCHNL_VF_CAP_PRIVILEGE, &vf->vf_caps) ||
!i40e_vc_isvalid_vsi_id(vf, vsi_id)) {
aq_ret = I40E_ERR_PARAM;
goto error_param;
@@ -1950,6 +2121,19 @@ static int i40e_vc_remove_vlan_msg(struct i40e_vf *vf, u8 *msg, u16 msglen)
for (i = 0; i < vfl->num_elements; i++) {
int ret = i40e_vsi_kill_vlan(vsi, vfl->vlan_id[i]);
+ if (!ret)
+ vf->num_vlan--;
+
+ if (test_bit(I40E_VF_STAT_UC_PROMISC, &vf->vf_states))
+ i40e_aq_set_vsi_uc_promisc_on_vlan(&pf->hw, vsi->seid,
+ false,
+ vfl->vlan_id[i],
+ NULL);
+ if (test_bit(I40E_VF_STAT_MC_PROMISC, &vf->vf_states))
+ i40e_aq_set_vsi_mc_promisc_on_vlan(&pf->hw, vsi->seid,
+ false,
+ vfl->vlan_id[i],
+ NULL);
if (ret)
dev_err(&pf->pdev->dev,
@@ -2029,6 +2213,135 @@ error_param:
}
/**
+ * i40e_vc_config_rss_key
+ * @vf: pointer to the VF info
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * Configure the VF's RSS key
+ **/
+static int i40e_vc_config_rss_key(struct i40e_vf *vf, u8 *msg, u16 msglen)
+{
+ struct i40e_virtchnl_rss_key *vrk =
+ (struct i40e_virtchnl_rss_key *)msg;
+ struct i40e_pf *pf = vf->pf;
+ struct i40e_vsi *vsi = NULL;
+ u16 vsi_id = vrk->vsi_id;
+ i40e_status aq_ret = 0;
+
+ if (!test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states) ||
+ !i40e_vc_isvalid_vsi_id(vf, vsi_id) ||
+ (vrk->key_len != I40E_HKEY_ARRAY_SIZE)) {
+ aq_ret = I40E_ERR_PARAM;
+ goto err;
+ }
+
+ vsi = pf->vsi[vf->lan_vsi_idx];
+ aq_ret = i40e_config_rss(vsi, vrk->key, NULL, 0);
+err:
+ /* send the response to the VF */
+ return i40e_vc_send_resp_to_vf(vf, I40E_VIRTCHNL_OP_CONFIG_RSS_KEY,
+ aq_ret);
+}
+
+/**
+ * i40e_vc_config_rss_lut
+ * @vf: pointer to the VF info
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * Configure the VF's RSS LUT
+ **/
+static int i40e_vc_config_rss_lut(struct i40e_vf *vf, u8 *msg, u16 msglen)
+{
+ struct i40e_virtchnl_rss_lut *vrl =
+ (struct i40e_virtchnl_rss_lut *)msg;
+ struct i40e_pf *pf = vf->pf;
+ struct i40e_vsi *vsi = NULL;
+ u16 vsi_id = vrl->vsi_id;
+ i40e_status aq_ret = 0;
+
+ if (!test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states) ||
+ !i40e_vc_isvalid_vsi_id(vf, vsi_id) ||
+ (vrl->lut_entries != I40E_VF_HLUT_ARRAY_SIZE)) {
+ aq_ret = I40E_ERR_PARAM;
+ goto err;
+ }
+
+ vsi = pf->vsi[vf->lan_vsi_idx];
+ aq_ret = i40e_config_rss(vsi, NULL, vrl->lut, I40E_VF_HLUT_ARRAY_SIZE);
+ /* send the response to the VF */
+err:
+ return i40e_vc_send_resp_to_vf(vf, I40E_VIRTCHNL_OP_CONFIG_RSS_LUT,
+ aq_ret);
+}
+
+/**
+ * i40e_vc_get_rss_hena
+ * @vf: pointer to the VF info
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * Return the RSS HENA bits allowed by the hardware
+ **/
+static int i40e_vc_get_rss_hena(struct i40e_vf *vf, u8 *msg, u16 msglen)
+{
+ struct i40e_virtchnl_rss_hena *vrh = NULL;
+ struct i40e_pf *pf = vf->pf;
+ i40e_status aq_ret = 0;
+ int len = 0;
+
+ if (!test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states)) {
+ aq_ret = I40E_ERR_PARAM;
+ goto err;
+ }
+ len = sizeof(struct i40e_virtchnl_rss_hena);
+
+ vrh = kzalloc(len, GFP_KERNEL);
+ if (!vrh) {
+ aq_ret = I40E_ERR_NO_MEMORY;
+ len = 0;
+ goto err;
+ }
+ vrh->hena = i40e_pf_get_default_rss_hena(pf);
+err:
+ /* send the response back to the VF */
+ aq_ret = i40e_vc_send_msg_to_vf(vf, I40E_VIRTCHNL_OP_GET_RSS_HENA_CAPS,
+ aq_ret, (u8 *)vrh, len);
+ kfree(vrh);
+ return aq_ret;
+}
+
+/**
+ * i40e_vc_set_rss_hena
+ * @vf: pointer to the VF info
+ * @msg: pointer to the msg buffer
+ * @msglen: msg length
+ *
+ * Set the RSS HENA bits for the VF
+ **/
+static int i40e_vc_set_rss_hena(struct i40e_vf *vf, u8 *msg, u16 msglen)
+{
+ struct i40e_virtchnl_rss_hena *vrh =
+ (struct i40e_virtchnl_rss_hena *)msg;
+ struct i40e_pf *pf = vf->pf;
+ struct i40e_hw *hw = &pf->hw;
+ i40e_status aq_ret = 0;
+
+ if (!test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states)) {
+ aq_ret = I40E_ERR_PARAM;
+ goto err;
+ }
+ i40e_write_rx_ctl(hw, I40E_VFQF_HENA1(0, vf->vf_id), (u32)vrh->hena);
+ i40e_write_rx_ctl(hw, I40E_VFQF_HENA1(1, vf->vf_id),
+ (u32)(vrh->hena >> 32));
+
+ /* send the response to the VF */
+err:
+ return i40e_vc_send_resp_to_vf(vf, I40E_VIRTCHNL_OP_SET_RSS_HENA,
+ aq_ret);
+}
+
+/**
* i40e_vc_validate_vf_msg
* @vf: pointer to the VF info
* @msg: pointer to the msg buffer
@@ -2041,7 +2354,7 @@ static int i40e_vc_validate_vf_msg(struct i40e_vf *vf, u32 v_opcode,
u32 v_retval, u8 *msg, u16 msglen)
{
bool err_msg_format = false;
- int valid_len;
+ int valid_len = 0;
/* Check if VF is disabled. */
if (test_bit(I40E_VF_STAT_DISABLED, &vf->vf_states))
@@ -2053,13 +2366,10 @@ static int i40e_vc_validate_vf_msg(struct i40e_vf *vf, u32 v_opcode,
valid_len = sizeof(struct i40e_virtchnl_version_info);
break;
case I40E_VIRTCHNL_OP_RESET_VF:
- valid_len = 0;
break;
case I40E_VIRTCHNL_OP_GET_VF_RESOURCES:
if (VF_IS_V11(vf))
valid_len = sizeof(u32);
- else
- valid_len = 0;
break;
case I40E_VIRTCHNL_OP_CONFIG_TX_QUEUE:
valid_len = sizeof(struct i40e_virtchnl_txq_info);
@@ -2149,6 +2459,35 @@ static int i40e_vc_validate_vf_msg(struct i40e_vf *vf, u32 v_opcode,
sizeof(struct i40e_virtchnl_iwarp_qv_info));
}
break;
+ case I40E_VIRTCHNL_OP_CONFIG_RSS_KEY:
+ valid_len = sizeof(struct i40e_virtchnl_rss_key);
+ if (msglen >= valid_len) {
+ struct i40e_virtchnl_rss_key *vrk =
+ (struct i40e_virtchnl_rss_key *)msg;
+ if (vrk->key_len != I40E_HKEY_ARRAY_SIZE) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += vrk->key_len - 1;
+ }
+ break;
+ case I40E_VIRTCHNL_OP_CONFIG_RSS_LUT:
+ valid_len = sizeof(struct i40e_virtchnl_rss_lut);
+ if (msglen >= valid_len) {
+ struct i40e_virtchnl_rss_lut *vrl =
+ (struct i40e_virtchnl_rss_lut *)msg;
+ if (vrl->lut_entries != I40E_VF_HLUT_ARRAY_SIZE) {
+ err_msg_format = true;
+ break;
+ }
+ valid_len += vrl->lut_entries - 1;
+ }
+ break;
+ case I40E_VIRTCHNL_OP_GET_RSS_HENA_CAPS:
+ break;
+ case I40E_VIRTCHNL_OP_SET_RSS_HENA:
+ valid_len = sizeof(struct i40e_virtchnl_rss_hena);
+ break;
/* These are always errors coming from the VF. */
case I40E_VIRTCHNL_OP_EVENT:
case I40E_VIRTCHNL_OP_UNKNOWN:
@@ -2175,11 +2514,11 @@ static int i40e_vc_validate_vf_msg(struct i40e_vf *vf, u32 v_opcode,
* called from the common aeq/arq handler to
* process request from VF
**/
-int i40e_vc_process_vf_msg(struct i40e_pf *pf, u16 vf_id, u32 v_opcode,
+int i40e_vc_process_vf_msg(struct i40e_pf *pf, s16 vf_id, u32 v_opcode,
u32 v_retval, u8 *msg, u16 msglen)
{
struct i40e_hw *hw = &pf->hw;
- unsigned int local_vf_id = vf_id - hw->func_caps.vf_base_id;
+ int local_vf_id = vf_id - (s16)hw->func_caps.vf_base_id;
struct i40e_vf *vf;
int ret;
@@ -2247,6 +2586,19 @@ int i40e_vc_process_vf_msg(struct i40e_pf *pf, u16 vf_id, u32 v_opcode,
case I40E_VIRTCHNL_OP_RELEASE_IWARP_IRQ_MAP:
ret = i40e_vc_iwarp_qvmap_msg(vf, msg, msglen, false);
break;
+ case I40E_VIRTCHNL_OP_CONFIG_RSS_KEY:
+ ret = i40e_vc_config_rss_key(vf, msg, msglen);
+ break;
+ case I40E_VIRTCHNL_OP_CONFIG_RSS_LUT:
+ ret = i40e_vc_config_rss_lut(vf, msg, msglen);
+ break;
+ case I40E_VIRTCHNL_OP_GET_RSS_HENA_CAPS:
+ ret = i40e_vc_get_rss_hena(vf, msg, msglen);
+ break;
+ case I40E_VIRTCHNL_OP_SET_RSS_HENA:
+ ret = i40e_vc_set_rss_hena(vf, msg, msglen);
+ break;
+
case I40E_VIRTCHNL_OP_UNKNOWN:
default:
dev_err(&pf->pdev->dev, "Unsupported opcode %d from VF %d\n",
@@ -2268,9 +2620,10 @@ int i40e_vc_process_vf_msg(struct i40e_pf *pf, u16 vf_id, u32 v_opcode,
**/
int i40e_vc_process_vflr_event(struct i40e_pf *pf)
{
- u32 reg, reg_idx, bit_idx, vf_id;
struct i40e_hw *hw = &pf->hw;
+ u32 reg, reg_idx, bit_idx;
struct i40e_vf *vf;
+ int vf_id;
if (!test_bit(__I40E_VFLR_EVENT_PENDING, &pf->state))
return 0;
@@ -2292,13 +2645,9 @@ int i40e_vc_process_vflr_event(struct i40e_pf *pf)
/* read GLGEN_VFLRSTAT register to find out the flr VFs */
vf = &pf->vf[vf_id];
reg = rd32(hw, I40E_GLGEN_VFLRSTAT(reg_idx));
- if (reg & BIT(bit_idx)) {
- /* clear the bit in GLGEN_VFLRSTAT */
- wr32(hw, I40E_GLGEN_VFLRSTAT(reg_idx), BIT(bit_idx));
-
- if (!test_bit(__I40E_DOWN, &pf->state))
- i40e_reset_vf(vf, true);
- }
+ if (reg & BIT(bit_idx))
+ /* i40e_reset_vf will clear the bit in GLGEN_VFLRSTAT */
+ i40e_reset_vf(vf, true);
}
return 0;
@@ -2762,3 +3111,45 @@ int i40e_ndo_set_vf_spoofchk(struct net_device *netdev, int vf_id, bool enable)
out:
return ret;
}
+
+/**
+ * i40e_ndo_set_vf_trust
+ * @netdev: network interface device structure of the pf
+ * @vf_id: VF identifier
+ * @setting: trust setting
+ *
+ * Enable or disable VF trust setting
+ **/
+int i40e_ndo_set_vf_trust(struct net_device *netdev, int vf_id, bool setting)
+{
+ struct i40e_netdev_priv *np = netdev_priv(netdev);
+ struct i40e_pf *pf = np->vsi->back;
+ struct i40e_vf *vf;
+ int ret = 0;
+
+ /* validate the request */
+ if (vf_id >= pf->num_alloc_vfs) {
+ dev_err(&pf->pdev->dev, "Invalid VF Identifier %d\n", vf_id);
+ return -EINVAL;
+ }
+
+ if (pf->flags & I40E_FLAG_MFP_ENABLED) {
+ dev_err(&pf->pdev->dev, "Trusted VF not supported in MFP mode.\n");
+ return -EINVAL;
+ }
+
+ vf = &pf->vf[vf_id];
+
+ if (!vf)
+ return -EINVAL;
+ if (setting == vf->trusted)
+ goto out;
+
+ vf->trusted = setting;
+ i40e_vc_notify_vf_reset(vf);
+ i40e_reset_vf(vf, false);
+ dev_info(&pf->pdev->dev, "VF %u is now %strusted\n",
+ vf_id, setting ? "" : "un");
+out:
+ return ret;
+}
diff --git a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
index e7b2fba0309e..875174141451 100644
--- a/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
+++ b/drivers/net/ethernet/intel/i40e/i40e_virtchnl_pf.h
@@ -61,6 +61,8 @@ enum i40e_vf_states {
I40E_VF_STAT_IWARPENA,
I40E_VF_STAT_FCOEENA,
I40E_VF_STAT_DISABLED,
+ I40E_VF_STAT_MC_PROMISC,
+ I40E_VF_STAT_UC_PROMISC,
};
/* VF capabilities */
@@ -75,7 +77,7 @@ struct i40e_vf {
struct i40e_pf *pf;
/* VF id in the PF space */
- u16 vf_id;
+ s16 vf_id;
/* all VF vsis connect to the same parent */
enum i40e_switch_element_types parent_type;
struct i40e_virtchnl_version_info vf_ver;
@@ -88,6 +90,7 @@ struct i40e_vf {
struct i40e_virtchnl_ether_addr default_fcoe_addr;
u16 port_vlan_id;
bool pf_set_mac; /* The VMM admin set the VF MAC address */
+ bool trusted;
/* VSI indices - actual VSI pointers are maintained in the PF structure
* When assigned, these will be non-zero, because VSI 0 is always
@@ -108,6 +111,9 @@ struct i40e_vf {
bool link_forced;
bool link_up; /* only valid if VF link is forced */
bool spoofchk;
+ u16 num_mac;
+ u16 num_vlan;
+
/* RDMA Client */
struct i40e_virtchnl_iwarp_qvlist_info *qvlist_info;
};
@@ -115,7 +121,7 @@ struct i40e_vf {
void i40e_free_vfs(struct i40e_pf *pf);
int i40e_pci_sriov_configure(struct pci_dev *dev, int num_vfs);
int i40e_alloc_vfs(struct i40e_pf *pf, u16 num_alloc_vfs);
-int i40e_vc_process_vf_msg(struct i40e_pf *pf, u16 vf_id, u32 v_opcode,
+int i40e_vc_process_vf_msg(struct i40e_pf *pf, s16 vf_id, u32 v_opcode,
u32 v_retval, u8 *msg, u16 msglen);
int i40e_vc_process_vflr_event(struct i40e_pf *pf);
void i40e_reset_vf(struct i40e_vf *vf, bool flr);
@@ -127,6 +133,7 @@ int i40e_ndo_set_vf_port_vlan(struct net_device *netdev,
int vf_id, u16 vlan_id, u8 qos);
int i40e_ndo_set_vf_bw(struct net_device *netdev, int vf_id, int min_tx_rate,
int max_tx_rate);
+int i40e_ndo_set_vf_trust(struct net_device *netdev, int vf_id, bool setting);
int i40e_ndo_get_vf_config(struct net_device *netdev,
int vf_id, struct ifla_vf_info *ivi);
int i40e_ndo_set_vf_link_state(struct net_device *netdev, int vf_id, int link);
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_adminq.h b/drivers/net/ethernet/intel/i40evf/i40e_adminq.h
index a3eae5d9a2bd..1f9b3b5d946d 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_adminq.h
+++ b/drivers/net/ethernet/intel/i40evf/i40e_adminq.h
@@ -97,7 +97,6 @@ struct i40e_adminq_info {
u32 fw_build; /* firmware build number */
u16 api_maj_ver; /* api major version */
u16 api_min_ver; /* api minor version */
- bool nvm_release_on_done;
struct mutex asq_mutex; /* Send queue lock */
struct mutex arq_mutex; /* Receive queue lock */
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
index aad8d6277110..3114dcfa1724 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
+++ b/drivers/net/ethernet/intel/i40evf/i40e_adminq_cmd.h
@@ -78,17 +78,17 @@ struct i40e_aq_desc {
#define I40E_AQ_FLAG_EI_SHIFT 14
#define I40E_AQ_FLAG_FE_SHIFT 15
-#define I40E_AQ_FLAG_DD (1 << I40E_AQ_FLAG_DD_SHIFT) /* 0x1 */
-#define I40E_AQ_FLAG_CMP (1 << I40E_AQ_FLAG_CMP_SHIFT) /* 0x2 */
-#define I40E_AQ_FLAG_ERR (1 << I40E_AQ_FLAG_ERR_SHIFT) /* 0x4 */
-#define I40E_AQ_FLAG_VFE (1 << I40E_AQ_FLAG_VFE_SHIFT) /* 0x8 */
-#define I40E_AQ_FLAG_LB (1 << I40E_AQ_FLAG_LB_SHIFT) /* 0x200 */
-#define I40E_AQ_FLAG_RD (1 << I40E_AQ_FLAG_RD_SHIFT) /* 0x400 */
-#define I40E_AQ_FLAG_VFC (1 << I40E_AQ_FLAG_VFC_SHIFT) /* 0x800 */
-#define I40E_AQ_FLAG_BUF (1 << I40E_AQ_FLAG_BUF_SHIFT) /* 0x1000 */
-#define I40E_AQ_FLAG_SI (1 << I40E_AQ_FLAG_SI_SHIFT) /* 0x2000 */
-#define I40E_AQ_FLAG_EI (1 << I40E_AQ_FLAG_EI_SHIFT) /* 0x4000 */
-#define I40E_AQ_FLAG_FE (1 << I40E_AQ_FLAG_FE_SHIFT) /* 0x8000 */
+#define I40E_AQ_FLAG_DD BIT(I40E_AQ_FLAG_DD_SHIFT) /* 0x1 */
+#define I40E_AQ_FLAG_CMP BIT(I40E_AQ_FLAG_CMP_SHIFT) /* 0x2 */
+#define I40E_AQ_FLAG_ERR BIT(I40E_AQ_FLAG_ERR_SHIFT) /* 0x4 */
+#define I40E_AQ_FLAG_VFE BIT(I40E_AQ_FLAG_VFE_SHIFT) /* 0x8 */
+#define I40E_AQ_FLAG_LB BIT(I40E_AQ_FLAG_LB_SHIFT) /* 0x200 */
+#define I40E_AQ_FLAG_RD BIT(I40E_AQ_FLAG_RD_SHIFT) /* 0x400 */
+#define I40E_AQ_FLAG_VFC BIT(I40E_AQ_FLAG_VFC_SHIFT) /* 0x800 */
+#define I40E_AQ_FLAG_BUF BIT(I40E_AQ_FLAG_BUF_SHIFT) /* 0x1000 */
+#define I40E_AQ_FLAG_SI BIT(I40E_AQ_FLAG_SI_SHIFT) /* 0x2000 */
+#define I40E_AQ_FLAG_EI BIT(I40E_AQ_FLAG_EI_SHIFT) /* 0x4000 */
+#define I40E_AQ_FLAG_FE BIT(I40E_AQ_FLAG_FE_SHIFT) /* 0x8000 */
/* error codes */
enum i40e_admin_queue_err {
@@ -205,10 +205,6 @@ enum i40e_admin_queue_opc {
i40e_aqc_opc_resume_port_tx = 0x041C,
i40e_aqc_opc_configure_partition_bw = 0x041D,
- /* hmc */
- i40e_aqc_opc_query_hmc_resource_profile = 0x0500,
- i40e_aqc_opc_set_hmc_resource_profile = 0x0501,
-
/* phy commands*/
i40e_aqc_opc_get_phy_abilities = 0x0600,
i40e_aqc_opc_set_phy_config = 0x0601,
@@ -426,6 +422,7 @@ struct i40e_aqc_list_capabilities_element_resp {
#define I40E_AQ_CAP_ID_SDP 0x0062
#define I40E_AQ_CAP_ID_MDIO 0x0063
#define I40E_AQ_CAP_ID_WSR_PROT 0x0064
+#define I40E_AQ_CAP_ID_NVM_MGMT 0x0080
#define I40E_AQ_CAP_ID_FLEX10 0x00F1
#define I40E_AQ_CAP_ID_CEM 0x00F2
@@ -1582,27 +1579,6 @@ struct i40e_aqc_configure_partition_bw_data {
I40E_CHECK_STRUCT_LEN(0x22, i40e_aqc_configure_partition_bw_data);
-/* Get and set the active HMC resource profile and status.
- * (direct 0x0500) and (direct 0x0501)
- */
-struct i40e_aq_get_set_hmc_resource_profile {
- u8 pm_profile;
- u8 pe_vf_enabled;
- u8 reserved[14];
-};
-
-I40E_CHECK_CMD_LENGTH(i40e_aq_get_set_hmc_resource_profile);
-
-enum i40e_aq_hmc_profile {
- /* I40E_HMC_PROFILE_NO_CHANGE = 0, reserved */
- I40E_HMC_PROFILE_DEFAULT = 1,
- I40E_HMC_PROFILE_FAVOR_VF = 2,
- I40E_HMC_PROFILE_EQUAL = 3,
-};
-
-#define I40E_AQ_GET_HMC_RESOURCE_PROFILE_PM_MASK 0xF
-#define I40E_AQ_GET_HMC_RESOURCE_PROFILE_COUNT_MASK 0x3F
-
/* Get PHY Abilities (indirect 0x0600) uses the generic indirect struct */
/* set in param0 for get phy abilities to report qualified modules */
@@ -1649,11 +1625,11 @@ enum i40e_aq_phy_type {
enum i40e_aq_link_speed {
I40E_LINK_SPEED_UNKNOWN = 0,
- I40E_LINK_SPEED_100MB = (1 << I40E_LINK_SPEED_100MB_SHIFT),
- I40E_LINK_SPEED_1GB = (1 << I40E_LINK_SPEED_1000MB_SHIFT),
- I40E_LINK_SPEED_10GB = (1 << I40E_LINK_SPEED_10GB_SHIFT),
- I40E_LINK_SPEED_40GB = (1 << I40E_LINK_SPEED_40GB_SHIFT),
- I40E_LINK_SPEED_20GB = (1 << I40E_LINK_SPEED_20GB_SHIFT)
+ I40E_LINK_SPEED_100MB = BIT(I40E_LINK_SPEED_100MB_SHIFT),
+ I40E_LINK_SPEED_1GB = BIT(I40E_LINK_SPEED_1000MB_SHIFT),
+ I40E_LINK_SPEED_10GB = BIT(I40E_LINK_SPEED_10GB_SHIFT),
+ I40E_LINK_SPEED_40GB = BIT(I40E_LINK_SPEED_40GB_SHIFT),
+ I40E_LINK_SPEED_20GB = BIT(I40E_LINK_SPEED_20GB_SHIFT)
};
struct i40e_aqc_module_desc {
@@ -1924,9 +1900,9 @@ I40E_CHECK_CMD_LENGTH(i40e_aqc_nvm_config_write);
/* Used for 0x0704 as well as for 0x0705 commands */
#define I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT 1
#define I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_MASK \
- (1 << I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
+ BIT(I40E_AQ_ANVM_FEATURE_OR_IMMEDIATE_SHIFT)
#define I40E_AQ_ANVM_FEATURE 0
-#define I40E_AQ_ANVM_IMMEDIATE_FIELD (1 << FEATURE_OR_IMMEDIATE_SHIFT)
+#define I40E_AQ_ANVM_IMMEDIATE_FIELD BIT(FEATURE_OR_IMMEDIATE_SHIFT)
struct i40e_aqc_nvm_config_data_feature {
__le16 feature_id;
#define I40E_AQ_ANVM_FEATURE_OPTION_OEM_ONLY 0x01
@@ -2195,7 +2171,7 @@ struct i40e_aqc_del_udp_tunnel_completion {
I40E_CHECK_CMD_LENGTH(i40e_aqc_del_udp_tunnel_completion);
struct i40e_aqc_get_set_rss_key {
-#define I40E_AQC_SET_RSS_KEY_VSI_VALID (0x1 << 15)
+#define I40E_AQC_SET_RSS_KEY_VSI_VALID BIT(15)
#define I40E_AQC_SET_RSS_KEY_VSI_ID_SHIFT 0
#define I40E_AQC_SET_RSS_KEY_VSI_ID_MASK (0x3FF << \
I40E_AQC_SET_RSS_KEY_VSI_ID_SHIFT)
@@ -2215,14 +2191,14 @@ struct i40e_aqc_get_set_rss_key_data {
I40E_CHECK_STRUCT_LEN(0x34, i40e_aqc_get_set_rss_key_data);
struct i40e_aqc_get_set_rss_lut {
-#define I40E_AQC_SET_RSS_LUT_VSI_VALID (0x1 << 15)
+#define I40E_AQC_SET_RSS_LUT_VSI_VALID BIT(15)
#define I40E_AQC_SET_RSS_LUT_VSI_ID_SHIFT 0
#define I40E_AQC_SET_RSS_LUT_VSI_ID_MASK (0x3FF << \
I40E_AQC_SET_RSS_LUT_VSI_ID_SHIFT)
__le16 vsi_id;
#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT 0
-#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_MASK (0x1 << \
- I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT)
+#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_MASK \
+ BIT(I40E_AQC_SET_RSS_LUT_TABLE_TYPE_SHIFT)
#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_VSI 0
#define I40E_AQC_SET_RSS_LUT_TABLE_TYPE_PF 1
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_common.c b/drivers/net/ethernet/intel/i40evf/i40e_common.c
index 771ac6ad8cda..8f64204000fb 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_common.c
+++ b/drivers/net/ethernet/intel/i40evf/i40e_common.c
@@ -58,6 +58,8 @@ i40e_status i40e_set_mac_type(struct i40e_hw *hw)
case I40E_DEV_ID_SFP_X722:
case I40E_DEV_ID_1G_BASE_T_X722:
case I40E_DEV_ID_10G_BASE_T_X722:
+ case I40E_DEV_ID_SFP_I_X722:
+ case I40E_DEV_ID_QSFP_I_X722:
hw->mac.type = I40E_MAC_X722;
break;
case I40E_DEV_ID_X722_VF:
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_devids.h b/drivers/net/ethernet/intel/i40evf/i40e_devids.h
index ca8b58c3d1f5..d34972bab09c 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_devids.h
+++ b/drivers/net/ethernet/intel/i40evf/i40e_devids.h
@@ -44,6 +44,8 @@
#define I40E_DEV_ID_SFP_X722 0x37D0
#define I40E_DEV_ID_1G_BASE_T_X722 0x37D1
#define I40E_DEV_ID_10G_BASE_T_X722 0x37D2
+#define I40E_DEV_ID_SFP_I_X722 0x37D3
+#define I40E_DEV_ID_QSFP_I_X722 0x37D4
#define I40E_DEV_ID_X722_VF 0x37CD
#define I40E_DEV_ID_X722_VF_HV 0x37D9
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_txrx.c b/drivers/net/ethernet/intel/i40evf/i40e_txrx.c
index cea97daa844c..be99189da925 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_txrx.c
+++ b/drivers/net/ethernet/intel/i40evf/i40e_txrx.c
@@ -155,19 +155,21 @@ u32 i40evf_get_tx_pending(struct i40e_ring *ring, bool in_sw)
/**
* i40e_clean_tx_irq - Reclaim resources after transmit completes
- * @tx_ring: tx ring to clean
- * @budget: how many cleans we're allowed
+ * @vsi: the VSI we care about
+ * @tx_ring: Tx ring to clean
+ * @napi_budget: Used to determine if we are in netpoll
*
* Returns true if there's any budget left (e.g. the clean is finished)
**/
-static bool i40e_clean_tx_irq(struct i40e_ring *tx_ring, int budget)
+static bool i40e_clean_tx_irq(struct i40e_vsi *vsi,
+ struct i40e_ring *tx_ring, int napi_budget)
{
u16 i = tx_ring->next_to_clean;
struct i40e_tx_buffer *tx_buf;
struct i40e_tx_desc *tx_head;
struct i40e_tx_desc *tx_desc;
- unsigned int total_packets = 0;
- unsigned int total_bytes = 0;
+ unsigned int total_bytes = 0, total_packets = 0;
+ unsigned int budget = vsi->work_limit;
tx_buf = &tx_ring->tx_bi[i];
tx_desc = I40E_TX_DESC(tx_ring, i);
@@ -197,7 +199,7 @@ static bool i40e_clean_tx_irq(struct i40e_ring *tx_ring, int budget)
total_packets += tx_buf->gso_segs;
/* free the skb */
- dev_kfree_skb_any(tx_buf->skb);
+ napi_consume_skb(tx_buf->skb, napi_budget);
/* unmap skb header data */
dma_unmap_single(tx_ring->dev,
@@ -267,7 +269,7 @@ static bool i40e_clean_tx_irq(struct i40e_ring *tx_ring, int budget)
if (budget &&
((j / (WB_STRIDE + 1)) == 0) && (j > 0) &&
- !test_bit(__I40E_DOWN, &tx_ring->vsi->state) &&
+ !test_bit(__I40E_DOWN, &vsi->state) &&
(I40E_DESC_UNUSED(tx_ring) != tx_ring->count))
tx_ring->arm_wb = true;
}
@@ -285,7 +287,7 @@ static bool i40e_clean_tx_irq(struct i40e_ring *tx_ring, int budget)
smp_mb();
if (__netif_subqueue_stopped(tx_ring->netdev,
tx_ring->queue_index) &&
- !test_bit(__I40E_DOWN, &tx_ring->vsi->state)) {
+ !test_bit(__I40E_DOWN, &vsi->state)) {
netif_wake_subqueue(tx_ring->netdev,
tx_ring->queue_index);
++tx_ring->tx_stats.restart_queue;
@@ -494,7 +496,6 @@ err:
void i40evf_clean_rx_ring(struct i40e_ring *rx_ring)
{
struct device *dev = rx_ring->dev;
- struct i40e_rx_buffer *rx_bi;
unsigned long bi_size;
u16 i;
@@ -502,48 +503,22 @@ void i40evf_clean_rx_ring(struct i40e_ring *rx_ring)
if (!rx_ring->rx_bi)
return;
- if (ring_is_ps_enabled(rx_ring)) {
- int bufsz = ALIGN(rx_ring->rx_hdr_len, 256) * rx_ring->count;
-
- rx_bi = &rx_ring->rx_bi[0];
- if (rx_bi->hdr_buf) {
- dma_free_coherent(dev,
- bufsz,
- rx_bi->hdr_buf,
- rx_bi->dma);
- for (i = 0; i < rx_ring->count; i++) {
- rx_bi = &rx_ring->rx_bi[i];
- rx_bi->dma = 0;
- rx_bi->hdr_buf = NULL;
- }
- }
- }
/* Free all the Rx ring sk_buffs */
for (i = 0; i < rx_ring->count; i++) {
- rx_bi = &rx_ring->rx_bi[i];
- if (rx_bi->dma) {
- dma_unmap_single(dev,
- rx_bi->dma,
- rx_ring->rx_buf_len,
- DMA_FROM_DEVICE);
- rx_bi->dma = 0;
- }
+ struct i40e_rx_buffer *rx_bi = &rx_ring->rx_bi[i];
+
if (rx_bi->skb) {
dev_kfree_skb(rx_bi->skb);
rx_bi->skb = NULL;
}
- if (rx_bi->page) {
- if (rx_bi->page_dma) {
- dma_unmap_page(dev,
- rx_bi->page_dma,
- PAGE_SIZE,
- DMA_FROM_DEVICE);
- rx_bi->page_dma = 0;
- }
- __free_page(rx_bi->page);
- rx_bi->page = NULL;
- rx_bi->page_offset = 0;
- }
+ if (!rx_bi->page)
+ continue;
+
+ dma_unmap_page(dev, rx_bi->dma, PAGE_SIZE, DMA_FROM_DEVICE);
+ __free_pages(rx_bi->page, 0);
+
+ rx_bi->page = NULL;
+ rx_bi->page_offset = 0;
}
bi_size = sizeof(struct i40e_rx_buffer) * rx_ring->count;
@@ -552,6 +527,7 @@ void i40evf_clean_rx_ring(struct i40e_ring *rx_ring)
/* Zero out the descriptor ring */
memset(rx_ring->desc, 0, rx_ring->size);
+ rx_ring->next_to_alloc = 0;
rx_ring->next_to_clean = 0;
rx_ring->next_to_use = 0;
}
@@ -576,37 +552,6 @@ void i40evf_free_rx_resources(struct i40e_ring *rx_ring)
}
/**
- * i40evf_alloc_rx_headers - allocate rx header buffers
- * @rx_ring: ring to alloc buffers
- *
- * Allocate rx header buffers for the entire ring. As these are static,
- * this is only called when setting up a new ring.
- **/
-void i40evf_alloc_rx_headers(struct i40e_ring *rx_ring)
-{
- struct device *dev = rx_ring->dev;
- struct i40e_rx_buffer *rx_bi;
- dma_addr_t dma;
- void *buffer;
- int buf_size;
- int i;
-
- if (rx_ring->rx_bi[0].hdr_buf)
- return;
- /* Make sure the buffers don't cross cache line boundaries. */
- buf_size = ALIGN(rx_ring->rx_hdr_len, 256);
- buffer = dma_alloc_coherent(dev, buf_size * rx_ring->count,
- &dma, GFP_KERNEL);
- if (!buffer)
- return;
- for (i = 0; i < rx_ring->count; i++) {
- rx_bi = &rx_ring->rx_bi[i];
- rx_bi->dma = dma + (i * buf_size);
- rx_bi->hdr_buf = buffer + (i * buf_size);
- }
-}
-
-/**
* i40evf_setup_rx_descriptors - Allocate Rx descriptors
* @rx_ring: Rx descriptor ring (for a specific queue) to setup
*
@@ -627,9 +572,7 @@ int i40evf_setup_rx_descriptors(struct i40e_ring *rx_ring)
u64_stats_init(&rx_ring->syncp);
/* Round up to nearest 4K */
- rx_ring->size = ring_is_16byte_desc_enabled(rx_ring)
- ? rx_ring->count * sizeof(union i40e_16byte_rx_desc)
- : rx_ring->count * sizeof(union i40e_32byte_rx_desc);
+ rx_ring->size = rx_ring->count * sizeof(union i40e_32byte_rx_desc);
rx_ring->size = ALIGN(rx_ring->size, 4096);
rx_ring->desc = dma_alloc_coherent(dev, rx_ring->size,
&rx_ring->dma, GFP_KERNEL);
@@ -640,6 +583,7 @@ int i40evf_setup_rx_descriptors(struct i40e_ring *rx_ring)
goto err;
}
+ rx_ring->next_to_alloc = 0;
rx_ring->next_to_clean = 0;
rx_ring->next_to_use = 0;
@@ -658,6 +602,10 @@ err:
static inline void i40e_release_rx_desc(struct i40e_ring *rx_ring, u32 val)
{
rx_ring->next_to_use = val;
+
+ /* update next to alloc since we have filled the ring */
+ rx_ring->next_to_alloc = val;
+
/* Force memory writes to complete before letting h/w
* know there are new descriptors to fetch. (Only
* applicable for weak-ordered memory model archs,
@@ -668,160 +616,122 @@ static inline void i40e_release_rx_desc(struct i40e_ring *rx_ring, u32 val)
}
/**
- * i40evf_alloc_rx_buffers_ps - Replace used receive buffers; packet split
- * @rx_ring: ring to place buffers on
- * @cleaned_count: number of buffers to replace
+ * i40e_alloc_mapped_page - recycle or make a new page
+ * @rx_ring: ring to use
+ * @bi: rx_buffer struct to modify
*
- * Returns true if any errors on allocation
+ * Returns true if the page was successfully allocated or
+ * reused.
**/
-bool i40evf_alloc_rx_buffers_ps(struct i40e_ring *rx_ring, u16 cleaned_count)
+static bool i40e_alloc_mapped_page(struct i40e_ring *rx_ring,
+ struct i40e_rx_buffer *bi)
{
- u16 i = rx_ring->next_to_use;
- union i40e_rx_desc *rx_desc;
- struct i40e_rx_buffer *bi;
- const int current_node = numa_node_id();
+ struct page *page = bi->page;
+ dma_addr_t dma;
- /* do nothing if no valid netdev defined */
- if (!rx_ring->netdev || !cleaned_count)
+ /* since we are recycling buffers we should seldom need to alloc */
+ if (likely(page)) {
+ rx_ring->rx_stats.page_reuse_count++;
+ return true;
+ }
+
+ /* alloc new page for storage */
+ page = dev_alloc_page();
+ if (unlikely(!page)) {
+ rx_ring->rx_stats.alloc_page_failed++;
return false;
+ }
- while (cleaned_count--) {
- rx_desc = I40E_RX_DESC(rx_ring, i);
- bi = &rx_ring->rx_bi[i];
+ /* map page for use */
+ dma = dma_map_page(rx_ring->dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
- if (bi->skb) /* desc is in use */
- goto no_buffers;
-
- /* If we've been moved to a different NUMA node, release the
- * page so we can get a new one on the current node.
+ /* if mapping failed free memory back to system since
+ * there isn't much point in holding memory we can't use
*/
- if (bi->page && page_to_nid(bi->page) != current_node) {
- dma_unmap_page(rx_ring->dev,
- bi->page_dma,
- PAGE_SIZE,
- DMA_FROM_DEVICE);
- __free_page(bi->page);
- bi->page = NULL;
- bi->page_dma = 0;
- rx_ring->rx_stats.realloc_count++;
- } else if (bi->page) {
- rx_ring->rx_stats.page_reuse_count++;
- }
-
- if (!bi->page) {
- bi->page = alloc_page(GFP_ATOMIC);
- if (!bi->page) {
- rx_ring->rx_stats.alloc_page_failed++;
- goto no_buffers;
- }
- bi->page_dma = dma_map_page(rx_ring->dev,
- bi->page,
- 0,
- PAGE_SIZE,
- DMA_FROM_DEVICE);
- if (dma_mapping_error(rx_ring->dev, bi->page_dma)) {
- rx_ring->rx_stats.alloc_page_failed++;
- __free_page(bi->page);
- bi->page = NULL;
- bi->page_dma = 0;
- bi->page_offset = 0;
- goto no_buffers;
- }
- bi->page_offset = 0;
- }
-
- /* Refresh the desc even if buffer_addrs didn't change
- * because each write-back erases this info.
- */
- rx_desc->read.pkt_addr =
- cpu_to_le64(bi->page_dma + bi->page_offset);
- rx_desc->read.hdr_addr = cpu_to_le64(bi->dma);
- i++;
- if (i == rx_ring->count)
- i = 0;
+ if (dma_mapping_error(rx_ring->dev, dma)) {
+ __free_pages(page, 0);
+ rx_ring->rx_stats.alloc_page_failed++;
+ return false;
}
- if (rx_ring->next_to_use != i)
- i40e_release_rx_desc(rx_ring, i);
+ bi->dma = dma;
+ bi->page = page;
+ bi->page_offset = 0;
- return false;
+ return true;
+}
-no_buffers:
- if (rx_ring->next_to_use != i)
- i40e_release_rx_desc(rx_ring, i);
+/**
+ * i40e_receive_skb - Send a completed packet up the stack
+ * @rx_ring: rx ring in play
+ * @skb: packet to send up
+ * @vlan_tag: vlan tag for packet
+ **/
+static void i40e_receive_skb(struct i40e_ring *rx_ring,
+ struct sk_buff *skb, u16 vlan_tag)
+{
+ struct i40e_q_vector *q_vector = rx_ring->q_vector;
- /* make sure to come back via polling to try again after
- * allocation failure
- */
- return true;
+ if ((rx_ring->netdev->features & NETIF_F_HW_VLAN_CTAG_RX) &&
+ (vlan_tag & VLAN_VID_MASK))
+ __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tag);
+
+ napi_gro_receive(&q_vector->napi, skb);
}
/**
- * i40evf_alloc_rx_buffers_1buf - Replace used receive buffers; single buffer
+ * i40evf_alloc_rx_buffers - Replace used receive buffers
* @rx_ring: ring to place buffers on
* @cleaned_count: number of buffers to replace
*
- * Returns true if any errors on allocation
+ * Returns false if all allocations were successful, true if any fail
**/
-bool i40evf_alloc_rx_buffers_1buf(struct i40e_ring *rx_ring, u16 cleaned_count)
+bool i40evf_alloc_rx_buffers(struct i40e_ring *rx_ring, u16 cleaned_count)
{
- u16 i = rx_ring->next_to_use;
+ u16 ntu = rx_ring->next_to_use;
union i40e_rx_desc *rx_desc;
struct i40e_rx_buffer *bi;
- struct sk_buff *skb;
/* do nothing if no valid netdev defined */
if (!rx_ring->netdev || !cleaned_count)
return false;
- while (cleaned_count--) {
- rx_desc = I40E_RX_DESC(rx_ring, i);
- bi = &rx_ring->rx_bi[i];
- skb = bi->skb;
-
- if (!skb) {
- skb = __netdev_alloc_skb_ip_align(rx_ring->netdev,
- rx_ring->rx_buf_len,
- GFP_ATOMIC |
- __GFP_NOWARN);
- if (!skb) {
- rx_ring->rx_stats.alloc_buff_failed++;
- goto no_buffers;
- }
- /* initialize queue mapping */
- skb_record_rx_queue(skb, rx_ring->queue_index);
- bi->skb = skb;
- }
+ rx_desc = I40E_RX_DESC(rx_ring, ntu);
+ bi = &rx_ring->rx_bi[ntu];
- if (!bi->dma) {
- bi->dma = dma_map_single(rx_ring->dev,
- skb->data,
- rx_ring->rx_buf_len,
- DMA_FROM_DEVICE);
- if (dma_mapping_error(rx_ring->dev, bi->dma)) {
- rx_ring->rx_stats.alloc_buff_failed++;
- bi->dma = 0;
- dev_kfree_skb(bi->skb);
- bi->skb = NULL;
- goto no_buffers;
- }
- }
+ do {
+ if (!i40e_alloc_mapped_page(rx_ring, bi))
+ goto no_buffers;
- rx_desc->read.pkt_addr = cpu_to_le64(bi->dma);
+ /* Refresh the desc even if buffer_addrs didn't change
+ * because each write-back erases this info.
+ */
+ rx_desc->read.pkt_addr = cpu_to_le64(bi->dma + bi->page_offset);
rx_desc->read.hdr_addr = 0;
- i++;
- if (i == rx_ring->count)
- i = 0;
- }
- if (rx_ring->next_to_use != i)
- i40e_release_rx_desc(rx_ring, i);
+ rx_desc++;
+ bi++;
+ ntu++;
+ if (unlikely(ntu == rx_ring->count)) {
+ rx_desc = I40E_RX_DESC(rx_ring, 0);
+ bi = rx_ring->rx_bi;
+ ntu = 0;
+ }
+
+ /* clear the status bits for the next_to_use descriptor */
+ rx_desc->wb.qword1.status_error_len = 0;
+
+ cleaned_count--;
+ } while (cleaned_count);
+
+ if (rx_ring->next_to_use != ntu)
+ i40e_release_rx_desc(rx_ring, ntu);
return false;
no_buffers:
- if (rx_ring->next_to_use != i)
- i40e_release_rx_desc(rx_ring, i);
+ if (rx_ring->next_to_use != ntu)
+ i40e_release_rx_desc(rx_ring, ntu);
/* make sure to come back via polling to try again after
* allocation failure
@@ -830,41 +740,35 @@ no_buffers:
}
/**
- * i40e_receive_skb - Send a completed packet up the stack
- * @rx_ring: rx ring in play
- * @skb: packet to send up
- * @vlan_tag: vlan tag for packet
- **/
-static void i40e_receive_skb(struct i40e_ring *rx_ring,
- struct sk_buff *skb, u16 vlan_tag)
-{
- struct i40e_q_vector *q_vector = rx_ring->q_vector;
-
- if (vlan_tag & VLAN_VID_MASK)
- __vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q), vlan_tag);
-
- napi_gro_receive(&q_vector->napi, skb);
-}
-
-/**
* i40e_rx_checksum - Indicate in skb if hw indicated a good cksum
* @vsi: the VSI we care about
* @skb: skb currently being received and modified
- * @rx_status: status value of last descriptor in packet
- * @rx_error: error value of last descriptor in packet
- * @rx_ptype: ptype value of last descriptor in packet
+ * @rx_desc: the receive descriptor
+ *
+ * skb->protocol must be set before this function is called
**/
static inline void i40e_rx_checksum(struct i40e_vsi *vsi,
struct sk_buff *skb,
- u32 rx_status,
- u32 rx_error,
- u16 rx_ptype)
+ union i40e_rx_desc *rx_desc)
{
- struct i40e_rx_ptype_decoded decoded = decode_rx_desc_ptype(rx_ptype);
- bool ipv4, ipv6, ipv4_tunnel, ipv6_tunnel;
+ struct i40e_rx_ptype_decoded decoded;
+ bool ipv4, ipv6, tunnel = false;
+ u32 rx_error, rx_status;
+ u8 ptype;
+ u64 qword;
+
+ qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
+ ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >> I40E_RXD_QW1_PTYPE_SHIFT;
+ rx_error = (qword & I40E_RXD_QW1_ERROR_MASK) >>
+ I40E_RXD_QW1_ERROR_SHIFT;
+ rx_status = (qword & I40E_RXD_QW1_STATUS_MASK) >>
+ I40E_RXD_QW1_STATUS_SHIFT;
+ decoded = decode_rx_desc_ptype(ptype);
skb->ip_summed = CHECKSUM_NONE;
+ skb_checksum_none_assert(skb);
+
/* Rx csum enabled and ip headers found? */
if (!(vsi->netdev->features & NETIF_F_RXCSUM))
return;
@@ -910,14 +814,13 @@ static inline void i40e_rx_checksum(struct i40e_vsi *vsi,
* doesn't make it a hard requirement so if we have validated the
* inner checksum report CHECKSUM_UNNECESSARY.
*/
-
- ipv4_tunnel = (rx_ptype >= I40E_RX_PTYPE_GRENAT4_MAC_PAY3) &&
- (rx_ptype <= I40E_RX_PTYPE_GRENAT4_MACVLAN_IPV6_ICMP_PAY4);
- ipv6_tunnel = (rx_ptype >= I40E_RX_PTYPE_GRENAT6_MAC_PAY3) &&
- (rx_ptype <= I40E_RX_PTYPE_GRENAT6_MACVLAN_IPV6_ICMP_PAY4);
+ if (decoded.inner_prot & (I40E_RX_PTYPE_INNER_PROT_TCP |
+ I40E_RX_PTYPE_INNER_PROT_UDP |
+ I40E_RX_PTYPE_INNER_PROT_SCTP))
+ tunnel = true;
skb->ip_summed = CHECKSUM_UNNECESSARY;
- skb->csum_level = ipv4_tunnel || ipv6_tunnel;
+ skb->csum_level = tunnel ? 1 : 0;
return;
@@ -931,7 +834,7 @@ checksum_fail:
*
* Returns a hash type to be used by skb_set_hash
**/
-static inline enum pkt_hash_types i40e_ptype_to_htype(u8 ptype)
+static inline int i40e_ptype_to_htype(u8 ptype)
{
struct i40e_rx_ptype_decoded decoded = decode_rx_desc_ptype(ptype);
@@ -959,7 +862,7 @@ static inline void i40e_rx_hash(struct i40e_ring *ring,
u8 rx_ptype)
{
u32 hash;
- const __le64 rss_mask =
+ const __le64 rss_mask =
cpu_to_le64((u64)I40E_RX_DESC_FLTSTAT_RSS_HASH <<
I40E_RX_DESC_STATUS_FLTSTAT_SHIFT);
@@ -973,313 +876,411 @@ static inline void i40e_rx_hash(struct i40e_ring *ring,
}
/**
- * i40e_clean_rx_irq_ps - Reclaim resources after receive; packet split
- * @rx_ring: rx ring to clean
- * @budget: how many cleans we're allowed
+ * i40evf_process_skb_fields - Populate skb header fields from Rx descriptor
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @rx_desc: pointer to the EOP Rx descriptor
+ * @skb: pointer to current skb being populated
+ * @rx_ptype: the packet type decoded by hardware
*
- * Returns true if there's any budget left (e.g. the clean is finished)
+ * This function checks the ring, descriptor, and packet information in
+ * order to populate the hash, checksum, VLAN, protocol, and
+ * other fields within the skb.
**/
-static int i40e_clean_rx_irq_ps(struct i40e_ring *rx_ring, const int budget)
+static inline
+void i40evf_process_skb_fields(struct i40e_ring *rx_ring,
+ union i40e_rx_desc *rx_desc, struct sk_buff *skb,
+ u8 rx_ptype)
{
- unsigned int total_rx_bytes = 0, total_rx_packets = 0;
- u16 rx_packet_len, rx_header_len, rx_sph, rx_hbo;
- u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
- struct i40e_vsi *vsi = rx_ring->vsi;
- u16 i = rx_ring->next_to_clean;
- union i40e_rx_desc *rx_desc;
- u32 rx_error, rx_status;
- bool failure = false;
- u8 rx_ptype;
- u64 qword;
- u32 copysize;
+ i40e_rx_hash(rx_ring, rx_desc, skb, rx_ptype);
- do {
- struct i40e_rx_buffer *rx_bi;
- struct sk_buff *skb;
- u16 vlan_tag;
- /* return some buffers to hardware, one at a time is too slow */
- if (cleaned_count >= I40E_RX_BUFFER_WRITE) {
- failure = failure ||
- i40evf_alloc_rx_buffers_ps(rx_ring,
- cleaned_count);
- cleaned_count = 0;
- }
+ /* modifies the skb - consumes the enet header */
+ skb->protocol = eth_type_trans(skb, rx_ring->netdev);
- i = rx_ring->next_to_clean;
- rx_desc = I40E_RX_DESC(rx_ring, i);
- qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
- rx_status = (qword & I40E_RXD_QW1_STATUS_MASK) >>
- I40E_RXD_QW1_STATUS_SHIFT;
+ i40e_rx_checksum(rx_ring->vsi, skb, rx_desc);
- if (!(rx_status & BIT(I40E_RX_DESC_STATUS_DD_SHIFT)))
- break;
+ skb_record_rx_queue(skb, rx_ring->queue_index);
+}
- /* This memory barrier is needed to keep us from reading
- * any other fields out of the rx_desc until we know the
- * DD bit is set.
- */
- dma_rmb();
- /* sync header buffer for reading */
- dma_sync_single_range_for_cpu(rx_ring->dev,
- rx_ring->rx_bi[0].dma,
- i * rx_ring->rx_hdr_len,
- rx_ring->rx_hdr_len,
- DMA_FROM_DEVICE);
- rx_bi = &rx_ring->rx_bi[i];
- skb = rx_bi->skb;
- if (likely(!skb)) {
- skb = __netdev_alloc_skb_ip_align(rx_ring->netdev,
- rx_ring->rx_hdr_len,
- GFP_ATOMIC |
- __GFP_NOWARN);
- if (!skb) {
- rx_ring->rx_stats.alloc_buff_failed++;
- failure = true;
- break;
- }
+/**
+ * i40e_pull_tail - i40e specific version of skb_pull_tail
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @skb: pointer to current skb being adjusted
+ *
+ * This function is an i40e specific version of __pskb_pull_tail. The
+ * main difference between this version and the original function is that
+ * this function can make several assumptions about the state of things
+ * that allow for significant optimizations versus the standard function.
+ * As a result we can do things like drop a frag and maintain an accurate
+ * truesize for the skb.
+ */
+static void i40e_pull_tail(struct i40e_ring *rx_ring, struct sk_buff *skb)
+{
+ struct skb_frag_struct *frag = &skb_shinfo(skb)->frags[0];
+ unsigned char *va;
+ unsigned int pull_len;
- /* initialize queue mapping */
- skb_record_rx_queue(skb, rx_ring->queue_index);
- /* we are reusing so sync this buffer for CPU use */
- dma_sync_single_range_for_cpu(rx_ring->dev,
- rx_ring->rx_bi[0].dma,
- i * rx_ring->rx_hdr_len,
- rx_ring->rx_hdr_len,
- DMA_FROM_DEVICE);
- }
- rx_packet_len = (qword & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
- I40E_RXD_QW1_LENGTH_PBUF_SHIFT;
- rx_header_len = (qword & I40E_RXD_QW1_LENGTH_HBUF_MASK) >>
- I40E_RXD_QW1_LENGTH_HBUF_SHIFT;
- rx_sph = (qword & I40E_RXD_QW1_LENGTH_SPH_MASK) >>
- I40E_RXD_QW1_LENGTH_SPH_SHIFT;
-
- rx_error = (qword & I40E_RXD_QW1_ERROR_MASK) >>
- I40E_RXD_QW1_ERROR_SHIFT;
- rx_hbo = rx_error & BIT(I40E_RX_DESC_ERROR_HBO_SHIFT);
- rx_error &= ~BIT(I40E_RX_DESC_ERROR_HBO_SHIFT);
+ /* it is valid to use page_address instead of kmap since we are
+ * working with pages allocated out of the lowmem pool per
+ * alloc_page(GFP_ATOMIC)
+ */
+ va = skb_frag_address(frag);
- rx_ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT;
- /* sync half-page for reading */
- dma_sync_single_range_for_cpu(rx_ring->dev,
- rx_bi->page_dma,
- rx_bi->page_offset,
- PAGE_SIZE / 2,
- DMA_FROM_DEVICE);
- prefetch(page_address(rx_bi->page) + rx_bi->page_offset);
- rx_bi->skb = NULL;
- cleaned_count++;
- copysize = 0;
- if (rx_hbo || rx_sph) {
- int len;
-
- if (rx_hbo)
- len = I40E_RX_HDR_SIZE;
- else
- len = rx_header_len;
- memcpy(__skb_put(skb, len), rx_bi->hdr_buf, len);
- } else if (skb->len == 0) {
- int len;
- unsigned char *va = page_address(rx_bi->page) +
- rx_bi->page_offset;
-
- len = min(rx_packet_len, rx_ring->rx_hdr_len);
- memcpy(__skb_put(skb, len), va, len);
- copysize = len;
- rx_packet_len -= len;
- }
- /* Get the rest of the data if this was a header split */
- if (rx_packet_len) {
- skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
- rx_bi->page,
- rx_bi->page_offset + copysize,
- rx_packet_len, I40E_RXBUFFER_2048);
-
- /* If the page count is more than 2, then both halves
- * of the page are used and we need to free it. Do it
- * here instead of in the alloc code. Otherwise one
- * of the half-pages might be released between now and
- * then, and we wouldn't know which one to use.
- * Don't call get_page and free_page since those are
- * both expensive atomic operations that just change
- * the refcount in opposite directions. Just give the
- * page to the stack; he can have our refcount.
- */
- if (page_count(rx_bi->page) > 2) {
- dma_unmap_page(rx_ring->dev,
- rx_bi->page_dma,
- PAGE_SIZE,
- DMA_FROM_DEVICE);
- rx_bi->page = NULL;
- rx_bi->page_dma = 0;
- rx_ring->rx_stats.realloc_count++;
- } else {
- get_page(rx_bi->page);
- /* switch to the other half-page here; the
- * allocation code programs the right addr
- * into HW. If we haven't used this half-page,
- * the address won't be changed, and HW can
- * just use it next time through.
- */
- rx_bi->page_offset ^= PAGE_SIZE / 2;
- }
+ /* we need the header to contain the greater of either ETH_HLEN or
+ * 60 bytes if the skb->len is less than 60 for skb_pad.
+ */
+ pull_len = eth_get_headlen(va, I40E_RX_HDR_SIZE);
- }
- I40E_RX_INCREMENT(rx_ring, i);
+ /* align pull length to size of long to optimize memcpy performance */
+ skb_copy_to_linear_data(skb, va, ALIGN(pull_len, sizeof(long)));
- if (unlikely(
- !(rx_status & BIT(I40E_RX_DESC_STATUS_EOF_SHIFT)))) {
- struct i40e_rx_buffer *next_buffer;
+ /* update all of the pointers */
+ skb_frag_size_sub(frag, pull_len);
+ frag->page_offset += pull_len;
+ skb->data_len -= pull_len;
+ skb->tail += pull_len;
+}
- next_buffer = &rx_ring->rx_bi[i];
- next_buffer->skb = skb;
- rx_ring->rx_stats.non_eop_descs++;
- continue;
- }
+/**
+ * i40e_cleanup_headers - Correct empty headers
+ * @rx_ring: rx descriptor ring packet is being transacted on
+ * @skb: pointer to current skb being fixed
+ *
+ * Also address the case where we are pulling data in on pages only
+ * and as such no data is present in the skb header.
+ *
+ * In addition if skb is not at least 60 bytes we need to pad it so that
+ * it is large enough to qualify as a valid Ethernet frame.
+ *
+ * Returns true if an error was encountered and skb was freed.
+ **/
+static bool i40e_cleanup_headers(struct i40e_ring *rx_ring, struct sk_buff *skb)
+{
+ /* place header in linear portion of buffer */
+ if (skb_is_nonlinear(skb))
+ i40e_pull_tail(rx_ring, skb);
- /* ERR_MASK will only have valid bits if EOP set */
- if (unlikely(rx_error & BIT(I40E_RX_DESC_ERROR_RXE_SHIFT))) {
- dev_kfree_skb_any(skb);
- continue;
- }
+ /* if eth_skb_pad returns an error the skb was freed */
+ if (eth_skb_pad(skb))
+ return true;
- i40e_rx_hash(rx_ring, rx_desc, skb, rx_ptype);
+ return false;
+}
- /* probably a little skewed due to removing CRC */
- total_rx_bytes += skb->len;
- total_rx_packets++;
+/**
+ * i40e_reuse_rx_page - page flip buffer and store it back on the ring
+ * @rx_ring: rx descriptor ring to store buffers on
+ * @old_buff: donor buffer to have page reused
+ *
+ * Synchronizes page for reuse by the adapter
+ **/
+static void i40e_reuse_rx_page(struct i40e_ring *rx_ring,
+ struct i40e_rx_buffer *old_buff)
+{
+ struct i40e_rx_buffer *new_buff;
+ u16 nta = rx_ring->next_to_alloc;
- skb->protocol = eth_type_trans(skb, rx_ring->netdev);
+ new_buff = &rx_ring->rx_bi[nta];
- i40e_rx_checksum(vsi, skb, rx_status, rx_error, rx_ptype);
+ /* update, and store next to alloc */
+ nta++;
+ rx_ring->next_to_alloc = (nta < rx_ring->count) ? nta : 0;
- vlan_tag = rx_status & BIT(I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)
- ? le16_to_cpu(rx_desc->wb.qword0.lo_dword.l2tag1)
- : 0;
-#ifdef I40E_FCOE
- if (!i40e_fcoe_handle_offload(rx_ring, rx_desc, skb)) {
- dev_kfree_skb_any(skb);
- continue;
- }
+ /* transfer page from old buffer to new buffer */
+ *new_buff = *old_buff;
+}
+
+/**
+ * i40e_page_is_reserved - check if reuse is possible
+ * @page: page struct to check
+ */
+static inline bool i40e_page_is_reserved(struct page *page)
+{
+ return (page_to_nid(page) != numa_mem_id()) || page_is_pfmemalloc(page);
+}
+
+/**
+ * i40e_add_rx_frag - Add contents of Rx buffer to sk_buff
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @rx_buffer: buffer containing page to add
+ * @rx_desc: descriptor containing length of buffer written by hardware
+ * @skb: sk_buff to place the data into
+ *
+ * This function will add the data contained in rx_buffer->page to the skb.
+ * This is done either through a direct copy if the data in the buffer is
+ * less than the skb header size, otherwise it will just attach the page as
+ * a frag to the skb.
+ *
+ * The function will then update the page offset if necessary and return
+ * true if the buffer can be reused by the adapter.
+ **/
+static bool i40e_add_rx_frag(struct i40e_ring *rx_ring,
+ struct i40e_rx_buffer *rx_buffer,
+ union i40e_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ struct page *page = rx_buffer->page;
+ u64 qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
+ unsigned int size = (qword & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
+ I40E_RXD_QW1_LENGTH_PBUF_SHIFT;
+#if (PAGE_SIZE < 8192)
+ unsigned int truesize = I40E_RXBUFFER_2048;
+#else
+ unsigned int truesize = ALIGN(size, L1_CACHE_BYTES);
+ unsigned int last_offset = PAGE_SIZE - I40E_RXBUFFER_2048;
#endif
- i40e_receive_skb(rx_ring, skb, vlan_tag);
- rx_desc->wb.qword1.status_error_len = 0;
+ /* will the data fit in the skb we allocated? if so, just
+ * copy it as it is pretty small anyway
+ */
+ if ((size <= I40E_RX_HDR_SIZE) && !skb_is_nonlinear(skb)) {
+ unsigned char *va = page_address(page) + rx_buffer->page_offset;
- } while (likely(total_rx_packets < budget));
+ memcpy(__skb_put(skb, size), va, ALIGN(size, sizeof(long)));
- u64_stats_update_begin(&rx_ring->syncp);
- rx_ring->stats.packets += total_rx_packets;
- rx_ring->stats.bytes += total_rx_bytes;
- u64_stats_update_end(&rx_ring->syncp);
- rx_ring->q_vector->rx.total_packets += total_rx_packets;
- rx_ring->q_vector->rx.total_bytes += total_rx_bytes;
+ /* page is not reserved, we can reuse buffer as-is */
+ if (likely(!i40e_page_is_reserved(page)))
+ return true;
- return failure ? budget : total_rx_packets;
+ /* this page cannot be reused so discard it */
+ __free_pages(page, 0);
+ return false;
+ }
+
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags, page,
+ rx_buffer->page_offset, size, truesize);
+
+ /* avoid re-using remote pages */
+ if (unlikely(i40e_page_is_reserved(page)))
+ return false;
+
+#if (PAGE_SIZE < 8192)
+ /* if we are only owner of page we can reuse it */
+ if (unlikely(page_count(page) != 1))
+ return false;
+
+ /* flip page offset to other buffer */
+ rx_buffer->page_offset ^= truesize;
+#else
+ /* move offset up to the next cache line */
+ rx_buffer->page_offset += truesize;
+
+ if (rx_buffer->page_offset > last_offset)
+ return false;
+#endif
+
+ /* Even if we own the page, we are not allowed to use atomic_set();
+ * that would break get_page_unless_zero() users.
+ */
+ get_page(rx_buffer->page);
+
+ return true;
}
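The buffer flip above is easiest to see with concrete numbers. A minimal standalone sketch (plain C, not driver code; it assumes the PAGE_SIZE < 8192 path, where truesize is the fixed 2K I40E_RXBUFFER_2048 and the names are stand-ins):

    #include <stdio.h>

    #define RXBUF_2048 2048   /* stands in for I40E_RXBUFFER_2048 */

    int main(void)
    {
        unsigned int page_offset = 0;
        int i;

        /* each completed receive flips the buffer to the other half
         * of the same 4K page, as the XOR in i40e_add_rx_frag() does
         */
        for (i = 0; i < 4; i++) {
            printf("rx %d uses bytes %u..%u of the page\n",
                   i, page_offset, page_offset + RXBUF_2048 - 1);
            page_offset ^= RXBUF_2048;
        }
        return 0;
    }

With 4K pages the offset alternates between 0 and 2048, so the half the stack still holds and the half handed back to hardware never overlap.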
/**
- * i40e_clean_rx_irq_1buf - Reclaim resources after receive; single buffer
- * @rx_ring: rx ring to clean
- * @budget: how many cleans we're allowed
+ * i40evf_fetch_rx_buffer - Allocate skb and populate it
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @rx_desc: descriptor containing info written by hardware
*
- * Returns number of packets cleaned
+ * This function allocates an skb on the fly, and populates it with the page
+ * data from the current receive descriptor, taking care to set up the skb
+ * correctly, as well as handling calling the page recycle function if
+ * necessary.
+ */
+static inline
+struct sk_buff *i40evf_fetch_rx_buffer(struct i40e_ring *rx_ring,
+ union i40e_rx_desc *rx_desc)
+{
+ struct i40e_rx_buffer *rx_buffer;
+ struct sk_buff *skb;
+ struct page *page;
+
+ rx_buffer = &rx_ring->rx_bi[rx_ring->next_to_clean];
+ page = rx_buffer->page;
+ prefetchw(page);
+
+ skb = rx_buffer->skb;
+
+ if (likely(!skb)) {
+ void *page_addr = page_address(page) + rx_buffer->page_offset;
+
+ /* prefetch first cache line of first page */
+ prefetch(page_addr);
+#if L1_CACHE_BYTES < 128
+ prefetch(page_addr + L1_CACHE_BYTES);
+#endif
+
+ /* allocate a skb to store the frags */
+ skb = __napi_alloc_skb(&rx_ring->q_vector->napi,
+ I40E_RX_HDR_SIZE,
+ GFP_ATOMIC | __GFP_NOWARN);
+ if (unlikely(!skb)) {
+ rx_ring->rx_stats.alloc_buff_failed++;
+ return NULL;
+ }
+
+ /* we will be copying header into skb->data in
+ * pskb_may_pull so it is in our interest to prefetch
+ * it now to avoid a possible cache miss
+ */
+ prefetchw(skb->data);
+ } else {
+ rx_buffer->skb = NULL;
+ }
+
+ /* we are reusing so sync this buffer for CPU use */
+ dma_sync_single_range_for_cpu(rx_ring->dev,
+ rx_buffer->dma,
+ rx_buffer->page_offset,
+ I40E_RXBUFFER_2048,
+ DMA_FROM_DEVICE);
+
+ /* pull page into skb */
+ if (i40e_add_rx_frag(rx_ring, rx_buffer, rx_desc, skb)) {
+ /* hand second half of page back to the ring */
+ i40e_reuse_rx_page(rx_ring, rx_buffer);
+ rx_ring->rx_stats.page_reuse_count++;
+ } else {
+ /* we are not reusing the buffer so unmap it */
+ dma_unmap_page(rx_ring->dev, rx_buffer->dma, PAGE_SIZE,
+ DMA_FROM_DEVICE);
+ }
+
+ /* clear contents of buffer_info */
+ rx_buffer->page = NULL;
+
+ return skb;
+}
+
+/**
+ * i40e_is_non_eop - process handling of non-EOP buffers
+ * @rx_ring: Rx ring being processed
+ * @rx_desc: Rx descriptor for current buffer
+ * @skb: Current socket buffer containing buffer in progress
+ *
+ * This function updates next to clean. If the buffer is an EOP buffer
+ * this function exits returning false, otherwise it will place the
+ * sk_buff in the next buffer to be chained and return true indicating
+ * that this is in fact a non-EOP buffer.
**/
-static int i40e_clean_rx_irq_1buf(struct i40e_ring *rx_ring, int budget)
+static bool i40e_is_non_eop(struct i40e_ring *rx_ring,
+ union i40e_rx_desc *rx_desc,
+ struct sk_buff *skb)
+{
+ u32 ntc = rx_ring->next_to_clean + 1;
+
+ /* fetch, update, and store next to clean */
+ ntc = (ntc < rx_ring->count) ? ntc : 0;
+ rx_ring->next_to_clean = ntc;
+
+ prefetch(I40E_RX_DESC(rx_ring, ntc));
+
+ /* if we are the last buffer then there is nothing else to do */
+#define I40E_RXD_EOF BIT(I40E_RX_DESC_STATUS_EOF_SHIFT)
+ if (likely(i40e_test_staterr(rx_desc, I40E_RXD_EOF)))
+ return false;
+
+ /* place skb in next buffer to be received */
+ rx_ring->rx_bi[ntc].skb = skb;
+ rx_ring->rx_stats.non_eop_descs++;
+
+ return true;
+}
+
+/**
+ * i40e_clean_rx_irq - Clean completed descriptors from Rx ring - bounce buf
+ * @rx_ring: rx descriptor ring to transact packets on
+ * @budget: Total limit on number of packets to process
+ *
+ * This function provides a "bounce buffer" approach to Rx interrupt
+ * processing. The advantage to this is that on systems that have
+ * expensive overhead for IOMMU access this provides a means of avoiding
+ * it by maintaining the mapping of the page to the system.
+ *
+ * Returns amount of work completed
+ **/
+static int i40e_clean_rx_irq(struct i40e_ring *rx_ring, int budget)
{
unsigned int total_rx_bytes = 0, total_rx_packets = 0;
u16 cleaned_count = I40E_DESC_UNUSED(rx_ring);
- struct i40e_vsi *vsi = rx_ring->vsi;
- union i40e_rx_desc *rx_desc;
- u32 rx_error, rx_status;
- u16 rx_packet_len;
bool failure = false;
- u8 rx_ptype;
- u64 qword;
- u16 i;
- do {
- struct i40e_rx_buffer *rx_bi;
+ while (likely(total_rx_packets < budget)) {
+ union i40e_rx_desc *rx_desc;
struct sk_buff *skb;
+ u32 rx_status;
u16 vlan_tag;
+ u8 rx_ptype;
+ u64 qword;
+
/* return some buffers to hardware, one at a time is too slow */
if (cleaned_count >= I40E_RX_BUFFER_WRITE) {
failure = failure ||
- i40evf_alloc_rx_buffers_1buf(rx_ring,
- cleaned_count);
+ i40evf_alloc_rx_buffers(rx_ring, cleaned_count);
cleaned_count = 0;
}
- i = rx_ring->next_to_clean;
- rx_desc = I40E_RX_DESC(rx_ring, i);
+ rx_desc = I40E_RX_DESC(rx_ring, rx_ring->next_to_clean);
+
qword = le64_to_cpu(rx_desc->wb.qword1.status_error_len);
+ rx_ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >>
+ I40E_RXD_QW1_PTYPE_SHIFT;
rx_status = (qword & I40E_RXD_QW1_STATUS_MASK) >>
- I40E_RXD_QW1_STATUS_SHIFT;
+ I40E_RXD_QW1_STATUS_SHIFT;
if (!(rx_status & BIT(I40E_RX_DESC_STATUS_DD_SHIFT)))
break;
+ /* status_error_len will always be zero for unused descriptors
+ * because it's cleared in cleanup, and overlaps with hdr_addr
+ * which is always zero because packet split isn't used. If the
+ * hardware wrote DD then it will be non-zero.
+ */
+ if (!rx_desc->wb.qword1.status_error_len)
+ break;
+
/* This memory barrier is needed to keep us from reading
* any other fields out of the rx_desc until we know the
* DD bit is set.
*/
dma_rmb();
- rx_bi = &rx_ring->rx_bi[i];
- skb = rx_bi->skb;
- prefetch(skb->data);
-
- rx_packet_len = (qword & I40E_RXD_QW1_LENGTH_PBUF_MASK) >>
- I40E_RXD_QW1_LENGTH_PBUF_SHIFT;
-
- rx_error = (qword & I40E_RXD_QW1_ERROR_MASK) >>
- I40E_RXD_QW1_ERROR_SHIFT;
- rx_error &= ~BIT(I40E_RX_DESC_ERROR_HBO_SHIFT);
+ skb = i40evf_fetch_rx_buffer(rx_ring, rx_desc);
+ if (!skb)
+ break;
- rx_ptype = (qword & I40E_RXD_QW1_PTYPE_MASK) >>
- I40E_RXD_QW1_PTYPE_SHIFT;
- rx_bi->skb = NULL;
cleaned_count++;
- /* Get the header and possibly the whole packet
- * If this is an skb from previous receive dma will be 0
- */
- skb_put(skb, rx_packet_len);
- dma_unmap_single(rx_ring->dev, rx_bi->dma, rx_ring->rx_buf_len,
- DMA_FROM_DEVICE);
- rx_bi->dma = 0;
-
- I40E_RX_INCREMENT(rx_ring, i);
-
- if (unlikely(
- !(rx_status & BIT(I40E_RX_DESC_STATUS_EOF_SHIFT)))) {
- rx_ring->rx_stats.non_eop_descs++;
+ if (i40e_is_non_eop(rx_ring, rx_desc, skb))
continue;
- }
- /* ERR_MASK will only have valid bits if EOP set */
- if (unlikely(rx_error & BIT(I40E_RX_DESC_ERROR_RXE_SHIFT))) {
+ /* ERR_MASK will only have valid bits if EOP set, and
+ * what we are doing here is actually checking
+ * I40E_RX_DESC_ERROR_RXE_SHIFT, since it is the zeroth bit in
+ * the error field
+ */
+ if (unlikely(i40e_test_staterr(rx_desc, BIT(I40E_RXD_QW1_ERROR_SHIFT)))) {
dev_kfree_skb_any(skb);
continue;
}
- i40e_rx_hash(rx_ring, rx_desc, skb, rx_ptype);
+ if (i40e_cleanup_headers(rx_ring, skb))
+ continue;
+
/* probably a little skewed due to removing CRC */
total_rx_bytes += skb->len;
- total_rx_packets++;
- skb->protocol = eth_type_trans(skb, rx_ring->netdev);
+ /* populate checksum, VLAN, and protocol */
+ i40evf_process_skb_fields(rx_ring, rx_desc, skb, rx_ptype);
+
- i40e_rx_checksum(vsi, skb, rx_status, rx_error, rx_ptype);
+ vlan_tag = (qword & BIT(I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)) ?
+ le16_to_cpu(rx_desc->wb.qword0.lo_dword.l2tag1) : 0;
- vlan_tag = rx_status & BIT(I40E_RX_DESC_STATUS_L2TAG1P_SHIFT)
- ? le16_to_cpu(rx_desc->wb.qword0.lo_dword.l2tag1)
- : 0;
i40e_receive_skb(rx_ring, skb, vlan_tag);
- rx_desc->wb.qword1.status_error_len = 0;
- } while (likely(total_rx_packets < budget));
+ /* update budget accounting */
+ total_rx_packets++;
+ }
u64_stats_update_begin(&rx_ring->syncp);
rx_ring->stats.packets += total_rx_packets;
@@ -1288,6 +1289,7 @@ static int i40e_clean_rx_irq_1buf(struct i40e_ring *rx_ring, int budget)
rx_ring->q_vector->rx.total_packets += total_rx_packets;
rx_ring->q_vector->rx.total_bytes += total_rx_bytes;
+ /* guarantee a trip back through this routine if there was a failure */
return failure ? budget : total_rx_packets;
}
@@ -1411,9 +1413,11 @@ int i40evf_napi_poll(struct napi_struct *napi, int budget)
* budget and be more aggressive about cleaning up the Tx descriptors.
*/
i40e_for_each_ring(ring, q_vector->tx) {
- clean_complete = clean_complete &&
- i40e_clean_tx_irq(ring, vsi->work_limit);
- arm_wb = arm_wb || ring->arm_wb;
+ if (!i40e_clean_tx_irq(vsi, ring, budget)) {
+ clean_complete = false;
+ continue;
+ }
+ arm_wb |= ring->arm_wb;
ring->arm_wb = false;
}
@@ -1427,16 +1431,12 @@ int i40evf_napi_poll(struct napi_struct *napi, int budget)
budget_per_ring = max(budget/q_vector->num_ringpairs, 1);
i40e_for_each_ring(ring, q_vector->rx) {
- int cleaned;
-
- if (ring_is_ps_enabled(ring))
- cleaned = i40e_clean_rx_irq_ps(ring, budget_per_ring);
- else
- cleaned = i40e_clean_rx_irq_1buf(ring, budget_per_ring);
+ int cleaned = i40e_clean_rx_irq(ring, budget_per_ring);
work_done += cleaned;
- /* if we didn't clean as many as budgeted, we must be done */
- clean_complete = clean_complete && (budget_per_ring > cleaned);
+ /* if we clean as many as budgeted, we must not be done */
+ if (cleaned >= budget_per_ring)
+ clean_complete = false;
}
/* If work not completed, return budget and polling will return */
@@ -1514,15 +1514,13 @@ out:
/**
* i40e_tso - set up the tso context descriptor
- * @tx_ring: ptr to the ring to send
* @skb: ptr to the skb we're sending
* @hdr_len: ptr to the size of the packet header
* @cd_type_cmd_tso_mss: Quad Word 1
*
* Returns 0 if no TSO can happen, 1 if tso is going, or error
**/
-static int i40e_tso(struct i40e_ring *tx_ring, struct sk_buff *skb,
- u8 *hdr_len, u64 *cd_type_cmd_tso_mss)
+static int i40e_tso(struct sk_buff *skb, u8 *hdr_len, u64 *cd_type_cmd_tso_mss)
{
u64 cd_cmd, cd_tso_len, cd_mss;
union {
@@ -1559,16 +1557,22 @@ static int i40e_tso(struct i40e_ring *tx_ring, struct sk_buff *skb,
ip.v6->payload_len = 0;
}
- if (skb_shinfo(skb)->gso_type & (SKB_GSO_UDP_TUNNEL | SKB_GSO_GRE |
+ if (skb_shinfo(skb)->gso_type & (SKB_GSO_GRE |
+ SKB_GSO_GRE_CSUM |
+ SKB_GSO_IPXIP4 |
+ SKB_GSO_IPXIP6 |
+ SKB_GSO_UDP_TUNNEL |
SKB_GSO_UDP_TUNNEL_CSUM)) {
- if (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL_CSUM) {
+ if (!(skb_shinfo(skb)->gso_type & SKB_GSO_PARTIAL) &&
+ (skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL_CSUM)) {
+ l4.udp->len = 0;
+
/* determine offset of outer transport header */
l4_offset = l4.hdr - skb->data;
/* remove payload length from outer checksum */
- paylen = (__force u16)l4.udp->check;
- paylen += ntohs(1) * (u16)~(skb->len - l4_offset);
- l4.udp->check = ~csum_fold((__force __wsum)paylen);
+ paylen = skb->len - l4_offset;
+ csum_replace_by_diff(&l4.udp->check, htonl(paylen));
}
/* reset pointers to inner headers */
@@ -1588,9 +1592,8 @@ static int i40e_tso(struct i40e_ring *tx_ring, struct sk_buff *skb,
l4_offset = l4.hdr - skb->data;
/* remove payload length from inner checksum */
- paylen = (__force u16)l4.tcp->check;
- paylen += ntohs(1) * (u16)~(skb->len - l4_offset);
- l4.tcp->check = ~csum_fold((__force __wsum)paylen);
+ paylen = skb->len - l4_offset;
+ csum_replace_by_diff(&l4.tcp->check, htonl(paylen));
/* compute length of segmentation header */
*hdr_len = (l4.tcp->doff * 4) + l4_offset;
@@ -1630,7 +1633,7 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags,
unsigned char *hdr;
} l4;
unsigned char *exthdr;
- u32 offset, cmd = 0, tunnel = 0;
+ u32 offset, cmd = 0;
__be16 frag_off;
u8 l4_proto = 0;
@@ -1644,6 +1647,7 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags,
offset = ((ip.hdr - skb->data) / 2) << I40E_TX_DESC_LENGTH_MACLEN_SHIFT;
if (skb->encapsulation) {
+ u32 tunnel = 0;
/* define outer network header type */
if (*tx_flags & I40E_TX_FLAGS_IPV4) {
tunnel |= (*tx_flags & I40E_TX_FLAGS_TSO) ?
@@ -1661,13 +1665,6 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags,
&l4_proto, &frag_off);
}
- /* compute outer L3 header size */
- tunnel |= ((l4.hdr - ip.hdr) / 4) <<
- I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT;
-
- /* switch IP header pointer from outer to inner header */
- ip.hdr = skb_inner_network_header(skb);
-
/* define outer transport */
switch (l4_proto) {
case IPPROTO_UDP:
@@ -1678,6 +1675,11 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags,
tunnel |= I40E_TXD_CTX_GRE_TUNNELING;
*tx_flags |= I40E_TX_FLAGS_VXLAN_TUNNEL;
break;
+ case IPPROTO_IPIP:
+ case IPPROTO_IPV6:
+ *tx_flags |= I40E_TX_FLAGS_VXLAN_TUNNEL;
+ l4.hdr = skb_inner_network_header(skb);
+ break;
default:
if (*tx_flags & I40E_TX_FLAGS_TSO)
return -1;
@@ -1686,12 +1688,20 @@ static int i40e_tx_enable_csum(struct sk_buff *skb, u32 *tx_flags,
return 0;
}
+ /* compute outer L3 header size */
+ tunnel |= ((l4.hdr - ip.hdr) / 4) <<
+ I40E_TXD_CTX_QW0_EXT_IPLEN_SHIFT;
+
+ /* switch IP header pointer from outer to inner header */
+ ip.hdr = skb_inner_network_header(skb);
+
/* compute tunnel header size */
tunnel |= ((ip.hdr - l4.hdr) / 2) <<
I40E_TXD_CTX_QW0_NATLEN_SHIFT;
/* indicate if we need to offload outer UDP header */
if ((*tx_flags & I40E_TX_FLAGS_TSO) &&
+ !(skb_shinfo(skb)->gso_type & SKB_GSO_PARTIAL) &&
(skb_shinfo(skb)->gso_type & SKB_GSO_UDP_TUNNEL_CSUM))
tunnel |= I40E_TXD_CTX_QW0_L4T_CS_MASK;
@@ -1935,6 +1945,8 @@ static inline void i40evf_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,
tx_bi = first;
for (frag = &skb_shinfo(skb)->frags[0];; frag++) {
+ unsigned int max_data = I40E_MAX_DATA_PER_TXD_ALIGNED;
+
if (dma_mapping_error(tx_ring->dev, dma))
goto dma_error;
@@ -1942,12 +1954,14 @@ static inline void i40evf_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,
dma_unmap_len_set(tx_bi, len, size);
dma_unmap_addr_set(tx_bi, dma, dma);
+ /* align size to end of page */
+ max_data += -dma & (I40E_MAX_READ_REQ_SIZE - 1);
tx_desc->buffer_addr = cpu_to_le64(dma);
while (unlikely(size > I40E_MAX_DATA_PER_TXD)) {
tx_desc->cmd_type_offset_bsz =
build_ctob(td_cmd, td_offset,
- I40E_MAX_DATA_PER_TXD, td_tag);
+ max_data, td_tag);
tx_desc++;
i++;
@@ -1958,9 +1972,10 @@ static inline void i40evf_tx_map(struct i40e_ring *tx_ring, struct sk_buff *skb,
i = 0;
}
- dma += I40E_MAX_DATA_PER_TXD;
- size -= I40E_MAX_DATA_PER_TXD;
+ dma += max_data;
+ size -= max_data;
+ max_data = I40E_MAX_DATA_PER_TXD_ALIGNED;
tx_desc->buffer_addr = cpu_to_le64(dma);
}
@@ -2109,7 +2124,7 @@ static netdev_tx_t i40e_xmit_frame_ring(struct sk_buff *skb,
if (i40e_chk_linearize(skb, count)) {
if (__skb_linearize(skb))
goto out_drop;
- count = TXD_USE_COUNT(skb->len);
+ count = i40e_txd_use_count(skb->len);
tx_ring->tx_stats.tx_linearize++;
}
@@ -2140,7 +2155,7 @@ static netdev_tx_t i40e_xmit_frame_ring(struct sk_buff *skb,
else if (protocol == htons(ETH_P_IPV6))
tx_flags |= I40E_TX_FLAGS_IPV6;
- tso = i40e_tso(tx_ring, skb, &hdr_len, &cd_type_cmd_tso_mss);
+ tso = i40e_tso(skb, &hdr_len, &cd_type_cmd_tso_mss);
if (tso < 0)
goto out_drop;
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_txrx.h b/drivers/net/ethernet/intel/i40evf/i40e_txrx.h
index 0429553fe887..0112277e5882 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_txrx.h
+++ b/drivers/net/ethernet/intel/i40evf/i40e_txrx.h
@@ -102,8 +102,8 @@ enum i40e_dyn_idx_t {
(((pf)->flags & I40E_FLAG_MULTIPLE_TCP_UDP_RSS_PCTYPE) ? \
I40E_DEFAULT_RSS_HENA_EXPANDED : I40E_DEFAULT_RSS_HENA)
-/* Supported Rx Buffer Sizes */
-#define I40E_RXBUFFER_512 512 /* Used for packet split */
+/* Supported Rx Buffer Sizes (a multiple of 128) */
+#define I40E_RXBUFFER_256 256
#define I40E_RXBUFFER_2048 2048
#define I40E_RXBUFFER_3072 3072 /* For FCoE MTU of 2158 */
#define I40E_RXBUFFER_4096 4096
@@ -114,9 +114,28 @@ enum i40e_dyn_idx_t {
* reserve 2 more, and skb_shared_info adds an additional 384 bytes more,
* this adds up to 512 bytes of extra data meaning the smallest allocation
* we could have is 1K.
- * i.e. RXBUFFER_512 --> size-1024 slab
+ * i.e. RXBUFFER_256 --> 960 byte skb (size-1024 slab)
+ * i.e. RXBUFFER_512 --> 1216 byte skb (size-2048 slab)
*/
-#define I40E_RX_HDR_SIZE I40E_RXBUFFER_512
+#define I40E_RX_HDR_SIZE I40E_RXBUFFER_256
+#define i40e_rx_desc i40e_32byte_rx_desc
+
+/**
+ * i40e_test_staterr - tests bits in Rx descriptor status and error fields
+ * @rx_desc: pointer to receive descriptor (in le64 format)
+ * @stat_err_bits: value to mask
+ *
+ * This function does some fast chicanery in order to return the
+ * value of the mask which is really only used for boolean tests.
+ * The status_error_len doesn't need to be shifted because it begins
+ * at offset zero.
+ */
+static inline bool i40e_test_staterr(union i40e_rx_desc *rx_desc,
+ const u64 stat_err_bits)
+{
+ return !!(rx_desc->wb.qword1.status_error_len &
+ cpu_to_le64(stat_err_bits));
+}
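Because status_error_len begins at bit 0 of the qword, the helper only has to convert the constant mask with cpu_to_le64() and AND it in; nothing in the descriptor itself needs shifting or swapping. A small standalone model of that bit test (plain C; the DD/EOF shift values are stand-ins for illustration and endianness is ignored):

    #include <stdint.h>
    #include <stdio.h>

    #define BIT_ULL(n)  (1ULL << (n))

    /* stand-ins for I40E_RX_DESC_STATUS_{DD,EOF}_SHIFT */
    #define DD_SHIFT   0
    #define EOF_SHIFT  1

    /* mirrors the shape of i40e_test_staterr(): mask, then booleanize */
    static int test_staterr(uint64_t status_error_len, uint64_t bits)
    {
        return !!(status_error_len & bits);
    }

    int main(void)
    {
        uint64_t qword = BIT_ULL(DD_SHIFT) | BIT_ULL(EOF_SHIFT);

        printf("descriptor done: %d\n", test_staterr(qword, BIT_ULL(DD_SHIFT)));
        printf("end of packet:   %d\n", test_staterr(qword, BIT_ULL(EOF_SHIFT)));
        return 0;
    }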
/* How many Rx Buffers do we bundle into one write to the hardware ? */
#define I40E_RX_BUFFER_WRITE 16 /* Must be power of 2 */
@@ -142,14 +161,41 @@ enum i40e_dyn_idx_t {
prefetch((n)); \
} while (0)
-#define i40e_rx_desc i40e_32byte_rx_desc
-
#define I40E_MAX_BUFFER_TXD 8
#define I40E_MIN_TX_LEN 17
-#define I40E_MAX_DATA_PER_TXD 8192
+
+/* The size limit for a transmit buffer in a descriptor is (16K - 1).
+ * In order to align with the read requests we will align the value to
+ * the nearest 4K which represents our maximum read request size.
+ */
+#define I40E_MAX_READ_REQ_SIZE 4096
+#define I40E_MAX_DATA_PER_TXD (16 * 1024 - 1)
+#define I40E_MAX_DATA_PER_TXD_ALIGNED \
+ (I40E_MAX_DATA_PER_TXD & ~(I40E_MAX_READ_REQ_SIZE - 1))
+
+/* This ugly bit of math is equivalent to DIV_ROUND_UP(size, X) where X is
+ * the value I40E_MAX_DATA_PER_TXD_ALIGNED. It is needed due to the fact
+ * that 12K is not a power of 2 and division is expensive. It is used to
+ * approximate the number of descriptors used per linear buffer. Note
+ * that this will overestimate in some cases as it doesn't account for the
+ * fact that we will add up to 4K - 1 in aligning the 12K buffer; however,
+ * the error should not impact things much as large buffers usually mean
+ * we will use fewer descriptors than there are frags in an skb.
+ */
+static inline unsigned int i40e_txd_use_count(unsigned int size)
+{
+ const unsigned int max = I40E_MAX_DATA_PER_TXD_ALIGNED;
+ const unsigned int reciprocal = ((1ull << 32) - 1 + (max / 2)) / max;
+ unsigned int adjust = ~(u32)0;
+
+ /* if we rounded up on the reciprocal pull down the adjustment */
+ if ((max * reciprocal) > adjust)
+ adjust = ~(u32)(reciprocal - 1);
+
+ return (u32)((((u64)size * reciprocal) + adjust) >> 32);
+}
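A quick way to sanity-check the reciprocal math is to run the same arithmetic in userspace and compare it against DIV_ROUND_UP(size, 12288). A standalone sketch mirroring the definitions above (stand-in names, not driver code):

    #include <stdio.h>

    #define MAX_READ_REQ  4096
    #define MAX_PER_TXD   (16 * 1024 - 1)
    #define MAX_ALIGNED   (MAX_PER_TXD & ~(MAX_READ_REQ - 1))  /* 12288 */

    /* same arithmetic as i40e_txd_use_count() */
    static unsigned int txd_use_count(unsigned int size)
    {
        const unsigned int max = MAX_ALIGNED;
        const unsigned int reciprocal = ((1ull << 32) - 1 + (max / 2)) / max;
        unsigned int adjust = ~(unsigned int)0;

        /* if we rounded up on the reciprocal pull down the adjustment */
        if ((unsigned long long)max * reciprocal > adjust)
            adjust = ~(unsigned int)(reciprocal - 1);

        return (unsigned int)((((unsigned long long)size * reciprocal) + adjust) >> 32);
    }

    int main(void)
    {
        /* expect 1, 1, 2: sizes up to 12288 bytes take one descriptor */
        printf("%u %u %u\n", txd_use_count(1), txd_use_count(12288),
               txd_use_count(12289));
        return 0;
    }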
/* Tx Descriptors needed, worst case */
-#define TXD_USE_COUNT(S) DIV_ROUND_UP((S), I40E_MAX_DATA_PER_TXD)
#define DESC_NEEDED (MAX_SKB_FRAGS + 4)
#define I40E_MIN_DESC_PENDING 4
@@ -183,10 +229,8 @@ struct i40e_tx_buffer {
struct i40e_rx_buffer {
struct sk_buff *skb;
- void *hdr_buf;
dma_addr_t dma;
struct page *page;
- dma_addr_t page_dma;
unsigned int page_offset;
};
@@ -215,22 +259,18 @@ struct i40e_rx_queue_stats {
enum i40e_ring_state_t {
__I40E_TX_FDIR_INIT_DONE,
__I40E_TX_XPS_INIT_DONE,
- __I40E_RX_PS_ENABLED,
- __I40E_RX_16BYTE_DESC_ENABLED,
};
-#define ring_is_ps_enabled(ring) \
- test_bit(__I40E_RX_PS_ENABLED, &(ring)->state)
-#define set_ring_ps_enabled(ring) \
- set_bit(__I40E_RX_PS_ENABLED, &(ring)->state)
-#define clear_ring_ps_enabled(ring) \
- clear_bit(__I40E_RX_PS_ENABLED, &(ring)->state)
-#define ring_is_16byte_desc_enabled(ring) \
- test_bit(__I40E_RX_16BYTE_DESC_ENABLED, &(ring)->state)
-#define set_ring_16byte_desc_enabled(ring) \
- set_bit(__I40E_RX_16BYTE_DESC_ENABLED, &(ring)->state)
-#define clear_ring_16byte_desc_enabled(ring) \
- clear_bit(__I40E_RX_16BYTE_DESC_ENABLED, &(ring)->state)
+/* some useful defines for virtchannel interface, which
+ * is the only remaining user of header split
+ */
+#define I40E_RX_DTYPE_NO_SPLIT 0
+#define I40E_RX_DTYPE_HEADER_SPLIT 1
+#define I40E_RX_DTYPE_SPLIT_ALWAYS 2
+#define I40E_RX_SPLIT_L2 0x1
+#define I40E_RX_SPLIT_IP 0x2
+#define I40E_RX_SPLIT_TCP_UDP 0x4
+#define I40E_RX_SPLIT_SCTP 0x8
/* struct that defines a descriptor ring, associated with a VSI */
struct i40e_ring {
@@ -249,16 +289,7 @@ struct i40e_ring {
u16 count; /* Number of descriptors */
u16 reg_idx; /* HW register index of the ring */
- u16 rx_hdr_len;
u16 rx_buf_len;
- u8 dtype;
-#define I40E_RX_DTYPE_NO_SPLIT 0
-#define I40E_RX_DTYPE_HEADER_SPLIT 1
-#define I40E_RX_DTYPE_SPLIT_ALWAYS 2
-#define I40E_RX_SPLIT_L2 0x1
-#define I40E_RX_SPLIT_IP 0x2
-#define I40E_RX_SPLIT_TCP_UDP 0x4
-#define I40E_RX_SPLIT_SCTP 0x8
/* used in interrupt processing */
u16 next_to_use;
@@ -290,6 +321,7 @@ struct i40e_ring {
struct i40e_q_vector *q_vector; /* Backreference to associated vector */
struct rcu_head rcu; /* to avoid race on free */
+ u16 next_to_alloc;
} ____cacheline_internodealigned_in_smp;
enum i40e_latency_range {
@@ -313,9 +345,7 @@ struct i40e_ring_container {
#define i40e_for_each_ring(pos, head) \
for (pos = (head).ring; pos != NULL; pos = pos->next)
-bool i40evf_alloc_rx_buffers_ps(struct i40e_ring *rxr, u16 cleaned_count);
-bool i40evf_alloc_rx_buffers_1buf(struct i40e_ring *rxr, u16 cleaned_count);
-void i40evf_alloc_rx_headers(struct i40e_ring *rxr);
+bool i40evf_alloc_rx_buffers(struct i40e_ring *rxr, u16 cleaned_count);
netdev_tx_t i40evf_xmit_frame(struct sk_buff *skb, struct net_device *netdev);
void i40evf_clean_tx_ring(struct i40e_ring *tx_ring);
void i40evf_clean_rx_ring(struct i40e_ring *rx_ring);
@@ -359,7 +389,7 @@ static inline int i40e_xmit_descriptor_count(struct sk_buff *skb)
int count = 0, size = skb_headlen(skb);
for (;;) {
- count += TXD_USE_COUNT(size);
+ count += i40e_txd_use_count(size);
if (!nr_frags--)
break;
@@ -405,4 +435,14 @@ static inline bool i40e_chk_linearize(struct sk_buff *skb, int count)
/* we can support up to 8 data buffers for a single send */
return count != I40E_MAX_BUFFER_TXD;
}
+
+/**
+ * i40e_rx_is_fcoe - returns true if the Rx packet type is FCoE
+ * @ptype: the packet type field from Rx descriptor write-back
+ **/
+static inline bool i40e_rx_is_fcoe(u16 ptype)
+{
+ return (ptype >= I40E_RX_PTYPE_L2_FCOE_PAY3) &&
+ (ptype <= I40E_RX_PTYPE_L2_FCOE_VFT_FCOTHER);
+}
#endif /* _I40E_TXRX_H_ */
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_type.h b/drivers/net/ethernet/intel/i40evf/i40e_type.h
index 301fe2b6dd03..97f96e0d9c4c 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_type.h
+++ b/drivers/net/ethernet/intel/i40evf/i40e_type.h
@@ -36,7 +36,7 @@
#include "i40e_devids.h"
/* I40E_MASK is a macro used on 32 bit registers */
-#define I40E_MASK(mask, shift) (mask << shift)
+#define I40E_MASK(mask, shift) ((u32)(mask) << (shift))
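The cast and parentheses added here guard against two classic macro pitfalls: an expression argument binding to the shift operator, and a mask near bit 31 being shifted as a signed int. A tiny standalone illustration of the first pitfall (unsigned int standing in for u32, not driver code):

    #include <stdio.h>

    #define OLD_MASK(mask, shift) (mask << shift)
    #define NEW_MASK(mask, shift) ((unsigned int)(mask) << (shift))

    int main(void)
    {
        /* with an expression argument the old form shifts only 0x2 */
        printf("old: 0x%x\n", OLD_MASK(0x1 | 0x2, 4));  /* prints 0x21 */
        printf("new: 0x%x\n", NEW_MASK(0x1 | 0x2, 4));  /* prints 0x30 */
        return 0;
    }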
#define I40E_MAX_VSI_QP 16
#define I40E_MAX_VF_VSI 3
@@ -258,6 +258,11 @@ struct i40e_hw_capabilities {
#define I40E_FLEX10_STATUS_DCC_ERROR 0x1
#define I40E_FLEX10_STATUS_VC_MODE 0x2
+ bool sec_rev_disabled;
+ bool update_disabled;
+#define I40E_NVM_MGMT_SEC_REV_DISABLED 0x1
+#define I40E_NVM_MGMT_UPDATE_DISABLED 0x2
+
bool mgmt_cem;
bool ieee_1588;
bool iwarp;
@@ -522,6 +527,8 @@ struct i40e_hw {
enum i40e_nvmupd_state nvmupd_state;
struct i40e_aq_desc nvm_wb_desc;
struct i40e_virt_mem nvm_buff;
+ bool nvm_release_on_done;
+ u16 nvm_wait_opcode;
/* HMC info */
struct i40e_hmc_info hmc; /* HMC info struct */
@@ -1329,4 +1336,46 @@ enum i40e_reset_type {
/* RSS Hash Table Size */
#define I40E_PFQF_CTL_0_HASHLUTSIZE_512 0x00010000
+
+/* INPUT SET MASK for RSS, flow director and flexible payload */
+#define I40E_FD_INSET_L3_SRC_SHIFT 47
+#define I40E_FD_INSET_L3_SRC_WORD_MASK (0x3ULL << \
+ I40E_FD_INSET_L3_SRC_SHIFT)
+#define I40E_FD_INSET_L3_DST_SHIFT 35
+#define I40E_FD_INSET_L3_DST_WORD_MASK (0x3ULL << \
+ I40E_FD_INSET_L3_DST_SHIFT)
+#define I40E_FD_INSET_L4_SRC_SHIFT 34
+#define I40E_FD_INSET_L4_SRC_WORD_MASK (0x1ULL << \
+ I40E_FD_INSET_L4_SRC_SHIFT)
+#define I40E_FD_INSET_L4_DST_SHIFT 33
+#define I40E_FD_INSET_L4_DST_WORD_MASK (0x1ULL << \
+ I40E_FD_INSET_L4_DST_SHIFT)
+#define I40E_FD_INSET_VERIFY_TAG_SHIFT 31
+#define I40E_FD_INSET_VERIFY_TAG_WORD_MASK (0x3ULL << \
+ I40E_FD_INSET_VERIFY_TAG_SHIFT)
+
+#define I40E_FD_INSET_FLEX_WORD50_SHIFT 17
+#define I40E_FD_INSET_FLEX_WORD50_MASK (0x1ULL << \
+ I40E_FD_INSET_FLEX_WORD50_SHIFT)
+#define I40E_FD_INSET_FLEX_WORD51_SHIFT 16
+#define I40E_FD_INSET_FLEX_WORD51_MASK (0x1ULL << \
+ I40E_FD_INSET_FLEX_WORD51_SHIFT)
+#define I40E_FD_INSET_FLEX_WORD52_SHIFT 15
+#define I40E_FD_INSET_FLEX_WORD52_MASK (0x1ULL << \
+ I40E_FD_INSET_FLEX_WORD52_SHIFT)
+#define I40E_FD_INSET_FLEX_WORD53_SHIFT 14
+#define I40E_FD_INSET_FLEX_WORD53_MASK (0x1ULL << \
+ I40E_FD_INSET_FLEX_WORD53_SHIFT)
+#define I40E_FD_INSET_FLEX_WORD54_SHIFT 13
+#define I40E_FD_INSET_FLEX_WORD54_MASK (0x1ULL << \
+ I40E_FD_INSET_FLEX_WORD54_SHIFT)
+#define I40E_FD_INSET_FLEX_WORD55_SHIFT 12
+#define I40E_FD_INSET_FLEX_WORD55_MASK (0x1ULL << \
+ I40E_FD_INSET_FLEX_WORD55_SHIFT)
+#define I40E_FD_INSET_FLEX_WORD56_SHIFT 11
+#define I40E_FD_INSET_FLEX_WORD56_MASK (0x1ULL << \
+ I40E_FD_INSET_FLEX_WORD56_SHIFT)
+#define I40E_FD_INSET_FLEX_WORD57_SHIFT 10
+#define I40E_FD_INSET_FLEX_WORD57_MASK (0x1ULL << \
+ I40E_FD_INSET_FLEX_WORD57_SHIFT)
#endif /* _I40E_TYPE_H_ */
diff --git a/drivers/net/ethernet/intel/i40evf/i40e_virtchnl.h b/drivers/net/ethernet/intel/i40evf/i40e_virtchnl.h
index 3b9d2037456c..f04ce6cb70dc 100644
--- a/drivers/net/ethernet/intel/i40evf/i40e_virtchnl.h
+++ b/drivers/net/ethernet/intel/i40evf/i40e_virtchnl.h
@@ -80,7 +80,12 @@ enum i40e_virtchnl_ops {
I40E_VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE = 14,
I40E_VIRTCHNL_OP_GET_STATS = 15,
I40E_VIRTCHNL_OP_FCOE = 16,
- I40E_VIRTCHNL_OP_EVENT = 17,
+ I40E_VIRTCHNL_OP_EVENT = 17, /* must ALWAYS be 17 */
+ I40E_VIRTCHNL_OP_CONFIG_RSS_KEY = 23,
+ I40E_VIRTCHNL_OP_CONFIG_RSS_LUT = 24,
+ I40E_VIRTCHNL_OP_GET_RSS_HENA_CAPS = 25,
+ I40E_VIRTCHNL_OP_SET_RSS_HENA = 26,
+
};
/* Virtual channel message descriptor. This overlays the admin queue
@@ -154,6 +159,7 @@ struct i40e_virtchnl_vsi_resource {
#define I40E_VIRTCHNL_VF_OFFLOAD_VLAN 0x00010000
#define I40E_VIRTCHNL_VF_OFFLOAD_RX_POLLING 0x00020000
#define I40E_VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2 0x00040000
+#define I40E_VIRTCHNL_VF_OFFLOAD_RSS_PF 0X00080000
struct i40e_virtchnl_vf_resource {
u16 num_vsis;
@@ -162,8 +168,8 @@ struct i40e_virtchnl_vf_resource {
u16 max_mtu;
u32 vf_offload_flags;
- u32 max_fcoe_contexts;
- u32 max_fcoe_filters;
+ u32 rss_key_size;
+ u32 rss_lut_size;
struct i40e_virtchnl_vsi_resource vsi_res[1];
};
@@ -322,6 +328,39 @@ struct i40e_virtchnl_promisc_info {
* PF replies with struct i40e_eth_stats in an external buffer.
*/
+/* I40E_VIRTCHNL_OP_CONFIG_RSS_KEY
+ * I40E_VIRTCHNL_OP_CONFIG_RSS_LUT
+ * VF sends these messages to configure RSS. Only supported if both PF
+ * and VF drivers set the I40E_VIRTCHNL_VF_OFFLOAD_RSS_PF bit during
+ * configuration negotiation. If this is the case, then the RSS fields in
+ * the VF resource struct are valid.
+ * Both the key and LUT are initialized to 0 by the PF, meaning that
+ * RSS is effectively disabled until set up by the VF.
+ */
+struct i40e_virtchnl_rss_key {
+ u16 vsi_id;
+ u16 key_len;
+ u8 key[1]; /* RSS hash key, packed bytes */
+};
+
+struct i40e_virtchnl_rss_lut {
+ u16 vsi_id;
+ u16 lut_entries;
+ u8 lut[1]; /* RSS lookup table */
+};
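Since key[] and lut[] are one-element arrays, the message actually sent carries (len - 1) extra trailing bytes beyond sizeof(struct). A rough userspace-style sketch of how a sender might size such a message (the struct copies the layout above; the helper name is purely illustrative, not a driver or virtchnl API):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    struct rss_key_msg {          /* mirrors i40e_virtchnl_rss_key */
        uint16_t vsi_id;
        uint16_t key_len;
        uint8_t  key[1];          /* RSS hash key, packed bytes */
    };

    /* hypothetical helper: build a key message carrying key_len bytes */
    struct rss_key_msg *build_rss_key_msg(uint16_t vsi_id, const uint8_t *key,
                                          uint16_t key_len, size_t *msg_len)
    {
        struct rss_key_msg *msg;

        /* one key byte is already counted in sizeof(*msg), hence "- 1" */
        *msg_len = sizeof(*msg) + key_len - 1;
        msg = calloc(1, *msg_len);
        if (!msg)
            return NULL;

        msg->vsi_id = vsi_id;
        msg->key_len = key_len;
        memcpy(msg->key, key, key_len);
        return msg;
    }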
+
+/* I40E_VIRTCHNL_OP_GET_RSS_HENA_CAPS
+ * I40E_VIRTCHNL_OP_SET_RSS_HENA
+ * VF sends these messages to get and set the hash filter enable bits for RSS.
+ * By default, the PF sets these to all possible traffic types that the
+ * hardware supports. The VF can query this value if it wants to change the
+ * traffic types that are hashed by the hardware.
+ * Traffic types are defined in the i40e_filter_pctype enum in i40e_type.h
+ */
+struct i40e_virtchnl_rss_hena {
+ u64 hena;
+};
+
/* I40E_VIRTCHNL_OP_EVENT
* PF sends this message to inform the VF driver of events that may affect it.
* No direct response is expected from the VF, though it may generate other
diff --git a/drivers/net/ethernet/intel/i40evf/i40evf.h b/drivers/net/ethernet/intel/i40evf/i40evf.h
index e657eccd232c..76ed97db28e2 100644
--- a/drivers/net/ethernet/intel/i40evf/i40evf.h
+++ b/drivers/net/ethernet/intel/i40evf/i40evf.h
@@ -67,8 +67,6 @@ struct i40e_vsi {
u16 rx_itr_setting;
u16 tx_itr_setting;
u16 qs_handle;
- u8 *rss_hkey_user; /* User configured hash keys */
- u8 *rss_lut_user; /* User configured lookup table entries */
};
/* How many Rx Buffers do we bundle into one write to the hardware ? */
@@ -82,9 +80,6 @@ struct i40e_vsi {
#define I40EVF_REQ_DESCRIPTOR_MULTIPLE 32
/* Supported Rx Buffer Sizes */
-#define I40EVF_RXBUFFER_64 64 /* Used for packet split */
-#define I40EVF_RXBUFFER_128 128 /* Used for packet split */
-#define I40EVF_RXBUFFER_256 256 /* Used for packet split */
#define I40EVF_RXBUFFER_2048 2048
#define I40EVF_MAX_RXBUFFER 16384 /* largest size for single descriptor */
#define I40EVF_MAX_AQ_BUF_SIZE 4096
@@ -210,9 +205,6 @@ struct i40evf_adapter {
u32 flags;
#define I40EVF_FLAG_RX_CSUM_ENABLED BIT(0)
-#define I40EVF_FLAG_RX_1BUF_CAPABLE BIT(1)
-#define I40EVF_FLAG_RX_PS_CAPABLE BIT(2)
-#define I40EVF_FLAG_RX_PS_ENABLED BIT(3)
#define I40EVF_FLAG_IMIR_ENABLED BIT(5)
#define I40EVF_FLAG_MQ_CAPABLE BIT(6)
#define I40EVF_FLAG_NEED_LINK_UPDATE BIT(7)
@@ -222,6 +214,8 @@ struct i40evf_adapter {
#define I40EVF_FLAG_WB_ON_ITR_CAPABLE BIT(11)
#define I40EVF_FLAG_OUTER_UDP_CSUM_CAPABLE BIT(12)
#define I40EVF_FLAG_ADDR_SET_BY_PF BIT(13)
+#define I40EVF_FLAG_PROMISC_ON BIT(15)
+#define I40EVF_FLAG_ALLMULTI_ON BIT(16)
/* duplicates for common code */
#define I40E_FLAG_FDIR_ATR_ENABLED 0
#define I40E_FLAG_DCB_ENABLED 0
@@ -239,8 +233,17 @@ struct i40evf_adapter {
#define I40EVF_FLAG_AQ_CONFIGURE_QUEUES BIT(6)
#define I40EVF_FLAG_AQ_MAP_VECTORS BIT(7)
#define I40EVF_FLAG_AQ_HANDLE_RESET BIT(8)
-#define I40EVF_FLAG_AQ_CONFIGURE_RSS BIT(9)
+#define I40EVF_FLAG_AQ_CONFIGURE_RSS BIT(9) /* direct AQ config */
#define I40EVF_FLAG_AQ_GET_CONFIG BIT(10)
+/* Newer style, RSS done by the PF so we can ignore hardware vagaries. */
+#define I40EVF_FLAG_AQ_GET_HENA BIT(11)
+#define I40EVF_FLAG_AQ_SET_HENA BIT(12)
+#define I40EVF_FLAG_AQ_SET_RSS_KEY BIT(13)
+#define I40EVF_FLAG_AQ_SET_RSS_LUT BIT(14)
+#define I40EVF_FLAG_AQ_REQUEST_PROMISC BIT(15)
+#define I40EVF_FLAG_AQ_RELEASE_PROMISC BIT(16)
+#define I40EVF_FLAG_AQ_REQUEST_ALLMULTI BIT(17)
+#define I40EVF_FLAG_AQ_RELEASE_ALLMULTI BIT(18)
/* OS defined structs */
struct net_device *netdev;
@@ -256,10 +259,18 @@ struct i40evf_adapter {
bool netdev_registered;
bool link_up;
enum i40e_virtchnl_ops current_op;
-#define CLIENT_ENABLED(_a) ((_a)->vf_res->vf_offload_flags & \
- I40E_VIRTCHNL_VF_OFFLOAD_IWARP)
+#define CLIENT_ENABLED(_a) ((_a)->vf_res ? \
+ (_a)->vf_res->vf_offload_flags & \
+ I40E_VIRTCHNL_VF_OFFLOAD_IWARP : \
+ 0)
+/* RSS by the PF should be preferred over RSS via other methods. */
+#define RSS_PF(_a) ((_a)->vf_res->vf_offload_flags & \
+ I40E_VIRTCHNL_VF_OFFLOAD_RSS_PF)
#define RSS_AQ(_a) ((_a)->vf_res->vf_offload_flags & \
I40E_VIRTCHNL_VF_OFFLOAD_RSS_AQ)
+#define RSS_REG(_a) (!((_a)->vf_res->vf_offload_flags & \
+ (I40E_VIRTCHNL_VF_OFFLOAD_RSS_AQ | \
+ I40E_VIRTCHNL_VF_OFFLOAD_RSS_PF)))
#define VLAN_ALLOWED(_a) ((_a)->vf_res->vf_offload_flags & \
I40E_VIRTCHNL_VF_OFFLOAD_VLAN)
struct i40e_virtchnl_vf_resource *vf_res; /* incl. all VSIs */
@@ -271,11 +282,16 @@ struct i40evf_adapter {
struct i40e_eth_stats current_stats;
struct i40e_vsi vsi;
u32 aq_wait_count;
+ /* RSS stuff */
+ u64 hena;
+ u16 rss_key_size;
+ u16 rss_lut_size;
+ u8 *rss_key;
+ u8 *rss_lut;
};
/* Ethtool Private Flags */
-#define I40EVF_PRIV_FLAGS_PS BIT(0)
/* needed by i40evf_ethtool.c */
extern char i40evf_driver_name[];
@@ -314,11 +330,12 @@ void i40evf_del_vlans(struct i40evf_adapter *adapter);
void i40evf_set_promiscuous(struct i40evf_adapter *adapter, int flags);
void i40evf_request_stats(struct i40evf_adapter *adapter);
void i40evf_request_reset(struct i40evf_adapter *adapter);
+void i40evf_get_hena(struct i40evf_adapter *adapter);
+void i40evf_set_hena(struct i40evf_adapter *adapter);
+void i40evf_set_rss_key(struct i40evf_adapter *adapter);
+void i40evf_set_rss_lut(struct i40evf_adapter *adapter);
void i40evf_virtchnl_completion(struct i40evf_adapter *adapter,
enum i40e_virtchnl_ops v_opcode,
i40e_status v_retval, u8 *msg, u16 msglen);
-int i40evf_config_rss(struct i40e_vsi *vsi, const u8 *seed, u8 *lut,
- u16 lut_size);
-int i40evf_get_rss(struct i40e_vsi *vsi, const u8 *seed, u8 *lut,
- u16 lut_size);
+int i40evf_config_rss(struct i40evf_adapter *adapter);
#endif /* _I40EVF_H_ */
diff --git a/drivers/net/ethernet/intel/i40evf/i40evf_ethtool.c b/drivers/net/ethernet/intel/i40evf/i40evf_ethtool.c
index dd4430aae7fa..c9c202f6c521 100644
--- a/drivers/net/ethernet/intel/i40evf/i40evf_ethtool.c
+++ b/drivers/net/ethernet/intel/i40evf/i40evf_ethtool.c
@@ -63,12 +63,6 @@ static const struct i40evf_stats i40evf_gstrings_stats[] = {
#define I40EVF_STATS_LEN(_dev) \
(I40EVF_GLOBAL_STATS_LEN + I40EVF_QUEUE_STATS_LEN(_dev))
-static const char i40evf_priv_flags_strings[][ETH_GSTRING_LEN] = {
- "packet-split",
-};
-
-#define I40EVF_PRIV_FLAGS_STR_LEN ARRAY_SIZE(i40evf_priv_flags_strings)
-
/**
* i40evf_get_settings - Get Link Speed and Duplex settings
* @netdev: network interface device structure
@@ -103,8 +97,6 @@ static int i40evf_get_sset_count(struct net_device *netdev, int sset)
{
if (sset == ETH_SS_STATS)
return I40EVF_STATS_LEN(netdev);
- else if (sset == ETH_SS_PRIV_FLAGS)
- return I40EVF_PRIV_FLAGS_STR_LEN;
else
return -EINVAL;
}
@@ -170,12 +162,6 @@ static void i40evf_get_strings(struct net_device *netdev, u32 sset, u8 *data)
snprintf(p, ETH_GSTRING_LEN, "rx-%u.bytes", i);
p += ETH_GSTRING_LEN;
}
- } else if (sset == ETH_SS_PRIV_FLAGS) {
- for (i = 0; i < I40EVF_PRIV_FLAGS_STR_LEN; i++) {
- memcpy(data, i40evf_priv_flags_strings[i],
- ETH_GSTRING_LEN);
- data += ETH_GSTRING_LEN;
- }
}
}
@@ -225,7 +211,6 @@ static void i40evf_get_drvinfo(struct net_device *netdev,
strlcpy(drvinfo->version, i40evf_driver_version, 32);
strlcpy(drvinfo->fw_version, "N/A", 4);
strlcpy(drvinfo->bus_info, pci_name(adapter->pdev), 32);
- drvinfo->n_priv_flags = I40EVF_PRIV_FLAGS_STR_LEN;
}
/**
@@ -378,63 +363,6 @@ static int i40evf_set_coalesce(struct net_device *netdev,
}
/**
- * i40e_get_rss_hash_opts - Get RSS hash Input Set for each flow type
- * @adapter: board private structure
- * @cmd: ethtool rxnfc command
- *
- * Returns Success if the flow is supported, else Invalid Input.
- **/
-static int i40evf_get_rss_hash_opts(struct i40evf_adapter *adapter,
- struct ethtool_rxnfc *cmd)
-{
- struct i40e_hw *hw = &adapter->hw;
- u64 hena = (u64)rd32(hw, I40E_VFQF_HENA(0)) |
- ((u64)rd32(hw, I40E_VFQF_HENA(1)) << 32);
-
- /* We always hash on IP src and dest addresses */
- cmd->data = RXH_IP_SRC | RXH_IP_DST;
-
- switch (cmd->flow_type) {
- case TCP_V4_FLOW:
- if (hena & BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_TCP))
- cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
- break;
- case UDP_V4_FLOW:
- if (hena & BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_UDP))
- cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
- break;
-
- case SCTP_V4_FLOW:
- case AH_ESP_V4_FLOW:
- case AH_V4_FLOW:
- case ESP_V4_FLOW:
- case IPV4_FLOW:
- break;
-
- case TCP_V6_FLOW:
- if (hena & BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_TCP))
- cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
- break;
- case UDP_V6_FLOW:
- if (hena & BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_UDP))
- cmd->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
- break;
-
- case SCTP_V6_FLOW:
- case AH_ESP_V6_FLOW:
- case AH_V6_FLOW:
- case ESP_V6_FLOW:
- case IPV6_FLOW:
- break;
- default:
- cmd->data = 0;
- return -EINVAL;
- }
-
- return 0;
-}
-
-/**
* i40evf_get_rxnfc - command to get RX flow classification rules
* @netdev: network interface device structure
* @cmd: ethtool rxnfc command
@@ -454,145 +382,8 @@ static int i40evf_get_rxnfc(struct net_device *netdev,
ret = 0;
break;
case ETHTOOL_GRXFH:
- ret = i40evf_get_rss_hash_opts(adapter, cmd);
- break;
- default:
- break;
- }
-
- return ret;
-}
-
-/**
- * i40evf_set_rss_hash_opt - Enable/Disable flow types for RSS hash
- * @adapter: board private structure
- * @cmd: ethtool rxnfc command
- *
- * Returns Success if the flow input set is supported.
- **/
-static int i40evf_set_rss_hash_opt(struct i40evf_adapter *adapter,
- struct ethtool_rxnfc *nfc)
-{
- struct i40e_hw *hw = &adapter->hw;
- u32 flags = adapter->vf_res->vf_offload_flags;
-
- u64 hena = (u64)rd32(hw, I40E_VFQF_HENA(0)) |
- ((u64)rd32(hw, I40E_VFQF_HENA(1)) << 32);
-
- /* RSS does not support anything other than hashing
- * to queues on src and dst IPs and ports
- */
- if (nfc->data & ~(RXH_IP_SRC | RXH_IP_DST |
- RXH_L4_B_0_1 | RXH_L4_B_2_3))
- return -EINVAL;
-
- /* We need at least the IP SRC and DEST fields for hashing */
- if (!(nfc->data & RXH_IP_SRC) ||
- !(nfc->data & RXH_IP_DST))
- return -EINVAL;
-
- switch (nfc->flow_type) {
- case TCP_V4_FLOW:
- if (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
- if (flags & I40E_VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2)
- hena |=
- BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_TCP_SYN_NO_ACK);
-
- hena |= BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_TCP);
- } else {
- return -EINVAL;
- }
- break;
- case TCP_V6_FLOW:
- if (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
- if (flags & I40E_VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2)
- hena |=
- BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_TCP_SYN_NO_ACK);
-
- hena |= BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_TCP);
- } else {
- return -EINVAL;
- }
- break;
- case UDP_V4_FLOW:
- if (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
- if (flags & I40E_VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2)
- hena |=
- BIT_ULL(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV4_UDP) |
- BIT_ULL(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV4_UDP);
-
- hena |= (BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_UDP) |
- BIT_ULL(I40E_FILTER_PCTYPE_FRAG_IPV4));
- } else {
- return -EINVAL;
- }
- break;
- case UDP_V6_FLOW:
- if (nfc->data & (RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
- if (flags & I40E_VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2)
- hena |=
- BIT_ULL(I40E_FILTER_PCTYPE_NONF_UNICAST_IPV6_UDP) |
- BIT_ULL(I40E_FILTER_PCTYPE_NONF_MULTICAST_IPV6_UDP);
-
- hena |= (BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_UDP) |
- BIT_ULL(I40E_FILTER_PCTYPE_FRAG_IPV6));
- } else {
- return -EINVAL;
- }
- break;
- case AH_ESP_V4_FLOW:
- case AH_V4_FLOW:
- case ESP_V4_FLOW:
- case SCTP_V4_FLOW:
- if ((nfc->data & RXH_L4_B_0_1) ||
- (nfc->data & RXH_L4_B_2_3))
- return -EINVAL;
- hena |= BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_OTHER);
- break;
- case AH_ESP_V6_FLOW:
- case AH_V6_FLOW:
- case ESP_V6_FLOW:
- case SCTP_V6_FLOW:
- if ((nfc->data & RXH_L4_B_0_1) ||
- (nfc->data & RXH_L4_B_2_3))
- return -EINVAL;
- hena |= BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_OTHER);
- break;
- case IPV4_FLOW:
- hena |= (BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV4_OTHER) |
- BIT_ULL(I40E_FILTER_PCTYPE_FRAG_IPV4));
- break;
- case IPV6_FLOW:
- hena |= (BIT_ULL(I40E_FILTER_PCTYPE_NONF_IPV6_OTHER) |
- BIT_ULL(I40E_FILTER_PCTYPE_FRAG_IPV6));
- break;
- default:
- return -EINVAL;
- }
-
- wr32(hw, I40E_VFQF_HENA(0), (u32)hena);
- wr32(hw, I40E_VFQF_HENA(1), (u32)(hena >> 32));
- i40e_flush(hw);
-
- return 0;
-}
-
-/**
- * i40evf_set_rxnfc - command to set RX flow classification rules
- * @netdev: network interface device structure
- * @cmd: ethtool rxnfc command
- *
- * Returns Success if the command is supported.
- **/
-static int i40evf_set_rxnfc(struct net_device *netdev,
- struct ethtool_rxnfc *cmd)
-{
- struct i40evf_adapter *adapter = netdev_priv(netdev);
- int ret = -EOPNOTSUPP;
-
- switch (cmd->cmd) {
- case ETHTOOL_SRXFH:
- ret = i40evf_set_rss_hash_opt(adapter, cmd);
+ netdev_info(netdev,
+ "RSS hash info is not available to vf, use pf.\n");
break;
default:
break;
@@ -600,7 +391,6 @@ static int i40evf_set_rxnfc(struct net_device *netdev,
return ret;
}
-
/**
* i40evf_get_channels: get the number of channels supported by the device
* @netdev: network interface device structure
@@ -624,6 +414,19 @@ static void i40evf_get_channels(struct net_device *netdev,
}
/**
+ * i40evf_get_rxfh_key_size - get the RSS hash key size
+ * @netdev: network interface device structure
+ *
+ * Returns the RSS hash key size.
+ **/
+static u32 i40evf_get_rxfh_key_size(struct net_device *netdev)
+{
+ struct i40evf_adapter *adapter = netdev_priv(netdev);
+
+ return adapter->rss_key_size;
+}
+
+/**
* i40evf_get_rxfh_indir_size - get the rx flow hash indirection table size
* @netdev: network interface device structure
*
@@ -631,7 +434,9 @@ static void i40evf_get_channels(struct net_device *netdev,
**/
static u32 i40evf_get_rxfh_indir_size(struct net_device *netdev)
{
- return (I40E_VFQF_HLUT_MAX_INDEX + 1) * 4;
+ struct i40evf_adapter *adapter = netdev_priv(netdev);
+
+ return adapter->rss_lut_size;
}
/**
@@ -646,9 +451,6 @@ static int i40evf_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key,
u8 *hfunc)
{
struct i40evf_adapter *adapter = netdev_priv(netdev);
- struct i40e_vsi *vsi = &adapter->vsi;
- u8 *seed = NULL, *lut;
- int ret;
u16 i;
if (hfunc)
@@ -656,24 +458,13 @@ static int i40evf_get_rxfh(struct net_device *netdev, u32 *indir, u8 *key,
if (!indir)
return 0;
- seed = key;
-
- lut = kzalloc(I40EVF_HLUT_ARRAY_SIZE, GFP_KERNEL);
- if (!lut)
- return -ENOMEM;
-
- ret = i40evf_get_rss(vsi, seed, lut, I40EVF_HLUT_ARRAY_SIZE);
- if (ret)
- goto out;
+ memcpy(key, adapter->rss_key, adapter->rss_key_size);
/* Each 32 bits pointed by 'indir' is stored with a lut entry */
- for (i = 0; i < I40EVF_HLUT_ARRAY_SIZE; i++)
- indir[i] = (u32)lut[i];
+ for (i = 0; i < adapter->rss_lut_size; i++)
+ indir[i] = (u32)adapter->rss_lut[i];
-out:
- kfree(lut);
-
- return ret;
+ return 0;
}
/**
@@ -689,8 +480,6 @@ static int i40evf_set_rxfh(struct net_device *netdev, const u32 *indir,
const u8 *key, const u8 hfunc)
{
struct i40evf_adapter *adapter = netdev_priv(netdev);
- struct i40e_vsi *vsi = &adapter->vsi;
- u8 *seed = NULL;
u16 i;
/* We do not allow change in unsupported parameters */
@@ -701,76 +490,14 @@ static int i40evf_set_rxfh(struct net_device *netdev, const u32 *indir,
return 0;
if (key) {
- if (!vsi->rss_hkey_user) {
- vsi->rss_hkey_user = kzalloc(I40EVF_HKEY_ARRAY_SIZE,
- GFP_KERNEL);
- if (!vsi->rss_hkey_user)
- return -ENOMEM;
- }
- memcpy(vsi->rss_hkey_user, key, I40EVF_HKEY_ARRAY_SIZE);
- seed = vsi->rss_hkey_user;
- }
- if (!vsi->rss_lut_user) {
- vsi->rss_lut_user = kzalloc(I40EVF_HLUT_ARRAY_SIZE,
- GFP_KERNEL);
- if (!vsi->rss_lut_user)
- return -ENOMEM;
+ memcpy(adapter->rss_key, key, adapter->rss_key_size);
}
/* Each 32 bits pointed by 'indir' is stored with a lut entry */
- for (i = 0; i < I40EVF_HLUT_ARRAY_SIZE; i++)
- vsi->rss_lut_user[i] = (u8)(indir[i]);
-
- return i40evf_config_rss(vsi, seed, vsi->rss_lut_user,
- I40EVF_HLUT_ARRAY_SIZE);
-}
-
-/**
- * i40evf_get_priv_flags - report device private flags
- * @dev: network interface device structure
- *
- * The get string set count and the string set should be matched for each
- * flag returned. Add new strings for each flag to the i40e_priv_flags_strings
- * array.
- *
- * Returns a u32 bitmap of flags.
- **/
-static u32 i40evf_get_priv_flags(struct net_device *dev)
-{
- struct i40evf_adapter *adapter = netdev_priv(dev);
- u32 ret_flags = 0;
-
- ret_flags |= adapter->flags & I40EVF_FLAG_RX_PS_ENABLED ?
- I40EVF_PRIV_FLAGS_PS : 0;
-
- return ret_flags;
-}
+ for (i = 0; i < adapter->rss_lut_size; i++)
+ adapter->rss_lut[i] = (u8)(indir[i]);
-/**
- * i40evf_set_priv_flags - set private flags
- * @dev: network interface device structure
- * @flags: bit flags to be set
- **/
-static int i40evf_set_priv_flags(struct net_device *dev, u32 flags)
-{
- struct i40evf_adapter *adapter = netdev_priv(dev);
- bool reset_required = false;
-
- if ((flags & I40EVF_PRIV_FLAGS_PS) &&
- !(adapter->flags & I40EVF_FLAG_RX_PS_ENABLED)) {
- adapter->flags |= I40EVF_FLAG_RX_PS_ENABLED;
- reset_required = true;
- } else if (!(flags & I40EVF_PRIV_FLAGS_PS) &&
- (adapter->flags & I40EVF_FLAG_RX_PS_ENABLED)) {
- adapter->flags &= ~I40EVF_FLAG_RX_PS_ENABLED;
- reset_required = true;
- }
-
- /* if needed, issue reset to cause things to take effect */
- if (reset_required)
- i40evf_schedule_reset(adapter);
-
- return 0;
+ return i40evf_config_rss(adapter);
}
static const struct ethtool_ops i40evf_ethtool_ops = {
@@ -782,18 +509,16 @@ static const struct ethtool_ops i40evf_ethtool_ops = {
.get_strings = i40evf_get_strings,
.get_ethtool_stats = i40evf_get_ethtool_stats,
.get_sset_count = i40evf_get_sset_count,
- .get_priv_flags = i40evf_get_priv_flags,
- .set_priv_flags = i40evf_set_priv_flags,
.get_msglevel = i40evf_get_msglevel,
.set_msglevel = i40evf_set_msglevel,
.get_coalesce = i40evf_get_coalesce,
.set_coalesce = i40evf_set_coalesce,
.get_rxnfc = i40evf_get_rxnfc,
- .set_rxnfc = i40evf_set_rxnfc,
.get_rxfh_indir_size = i40evf_get_rxfh_indir_size,
.get_rxfh = i40evf_get_rxfh,
.set_rxfh = i40evf_set_rxfh,
.get_channels = i40evf_get_channels,
+ .get_rxfh_key_size = i40evf_get_rxfh_key_size,
};
/**
diff --git a/drivers/net/ethernet/intel/i40evf/i40evf_main.c b/drivers/net/ethernet/intel/i40evf/i40evf_main.c
index 4b70aae2fa84..16c552952860 100644
--- a/drivers/net/ethernet/intel/i40evf/i40evf_main.c
+++ b/drivers/net/ethernet/intel/i40evf/i40evf_main.c
@@ -37,8 +37,8 @@ static const char i40evf_driver_string[] =
#define DRV_KERN "-k"
#define DRV_VERSION_MAJOR 1
-#define DRV_VERSION_MINOR 4
-#define DRV_VERSION_BUILD 15
+#define DRV_VERSION_MINOR 5
+#define DRV_VERSION_BUILD 10
#define DRV_VERSION __stringify(DRV_VERSION_MAJOR) "." \
__stringify(DRV_VERSION_MINOR) "." \
__stringify(DRV_VERSION_BUILD) \
@@ -641,28 +641,11 @@ static void i40evf_configure_tx(struct i40evf_adapter *adapter)
static void i40evf_configure_rx(struct i40evf_adapter *adapter)
{
struct i40e_hw *hw = &adapter->hw;
- struct net_device *netdev = adapter->netdev;
- int max_frame = netdev->mtu + ETH_HLEN + ETH_FCS_LEN;
int i;
- int rx_buf_len;
-
-
- /* Set the RX buffer length according to the mode */
- if (adapter->flags & I40EVF_FLAG_RX_PS_ENABLED ||
- netdev->mtu <= ETH_DATA_LEN)
- rx_buf_len = I40EVF_RXBUFFER_2048;
- else
- rx_buf_len = ALIGN(max_frame, 1024);
for (i = 0; i < adapter->num_active_queues; i++) {
adapter->rx_rings[i].tail = hw->hw_addr + I40E_QRX_TAIL1(i);
- adapter->rx_rings[i].rx_buf_len = rx_buf_len;
- if (adapter->flags & I40EVF_FLAG_RX_PS_ENABLED) {
- set_ring_ps_enabled(&adapter->rx_rings[i]);
- adapter->rx_rings[i].rx_hdr_len = I40E_RX_HDR_SIZE;
- } else {
- clear_ring_ps_enabled(&adapter->rx_rings[i]);
- }
+ adapter->rx_rings[i].rx_buf_len = I40EVF_RXBUFFER_2048;
}
}
@@ -943,6 +926,21 @@ static void i40evf_set_rx_mode(struct net_device *netdev)
bottom_of_search_loop:
continue;
}
+
+ if (netdev->flags & IFF_PROMISC &&
+ !(adapter->flags & I40EVF_FLAG_PROMISC_ON))
+ adapter->aq_required |= I40EVF_FLAG_AQ_REQUEST_PROMISC;
+ else if (!(netdev->flags & IFF_PROMISC) &&
+ adapter->flags & I40EVF_FLAG_PROMISC_ON)
+ adapter->aq_required |= I40EVF_FLAG_AQ_RELEASE_PROMISC;
+
+ if (netdev->flags & IFF_ALLMULTI &&
+ !(adapter->flags & I40EVF_FLAG_ALLMULTI_ON))
+ adapter->aq_required |= I40EVF_FLAG_AQ_REQUEST_ALLMULTI;
+ else if (!(netdev->flags & IFF_ALLMULTI) &&
+ adapter->flags & I40EVF_FLAG_ALLMULTI_ON)
+ adapter->aq_required |= I40EVF_FLAG_AQ_RELEASE_ALLMULTI;
+
clear_bit(__I40EVF_IN_CRITICAL_TASK, &adapter->crit_section);
}
@@ -999,14 +997,7 @@ static void i40evf_configure(struct i40evf_adapter *adapter)
for (i = 0; i < adapter->num_active_queues; i++) {
struct i40e_ring *ring = &adapter->rx_rings[i];
- if (adapter->flags & I40EVF_FLAG_RX_PS_ENABLED) {
- i40evf_alloc_rx_headers(ring);
- i40evf_alloc_rx_buffers_ps(ring, ring->count);
- } else {
- i40evf_alloc_rx_buffers_1buf(ring, ring->count);
- }
- ring->next_to_use = ring->count - 1;
- writel(ring->next_to_use, ring->tail);
+ i40evf_alloc_rx_buffers(ring, I40E_DESC_UNUSED(ring));
}
}
@@ -1224,24 +1215,18 @@ out:
}
/**
- * i40e_config_rss_aq - Prepare for RSS using AQ commands
- * @vsi: vsi structure
- * @seed: RSS hash seed
- * @lut: Lookup table
- * @lut_size: Lookup table size
+ * i40e_config_rss_aq - Configure RSS keys and lut by using AQ commands
+ * @adapter: board private structure
*
* Return 0 on success, negative on failure
**/
-static int i40evf_config_rss_aq(struct i40e_vsi *vsi, const u8 *seed,
- u8 *lut, u16 lut_size)
+static int i40evf_config_rss_aq(struct i40evf_adapter *adapter)
{
- struct i40evf_adapter *adapter = vsi->back;
+ struct i40e_aqc_get_set_rss_key_data *rss_key =
+ (struct i40e_aqc_get_set_rss_key_data *)adapter->rss_key;
struct i40e_hw *hw = &adapter->hw;
int ret = 0;
- if (!vsi->id)
- return -EINVAL;
-
if (adapter->current_op != I40E_VIRTCHNL_OP_UNKNOWN) {
/* bail because we already have a command pending */
dev_err(&adapter->pdev->dev, "Cannot configure RSS, command %d pending\n",
@@ -1249,198 +1234,82 @@ static int i40evf_config_rss_aq(struct i40e_vsi *vsi, const u8 *seed,
return -EBUSY;
}
- if (seed) {
- struct i40e_aqc_get_set_rss_key_data *rss_key =
- (struct i40e_aqc_get_set_rss_key_data *)seed;
- ret = i40evf_aq_set_rss_key(hw, vsi->id, rss_key);
- if (ret) {
- dev_err(&adapter->pdev->dev, "Cannot set RSS key, err %s aq_err %s\n",
- i40evf_stat_str(hw, ret),
- i40evf_aq_str(hw, hw->aq.asq_last_status));
- return ret;
- }
+ ret = i40evf_aq_set_rss_key(hw, adapter->vsi.id, rss_key);
+ if (ret) {
+ dev_err(&adapter->pdev->dev, "Cannot set RSS key, err %s aq_err %s\n",
+ i40evf_stat_str(hw, ret),
+ i40evf_aq_str(hw, hw->aq.asq_last_status));
+ return ret;
+
}
- if (lut) {
- ret = i40evf_aq_set_rss_lut(hw, vsi->id, false, lut, lut_size);
- if (ret) {
- dev_err(&adapter->pdev->dev,
- "Cannot set RSS lut, err %s aq_err %s\n",
- i40evf_stat_str(hw, ret),
- i40evf_aq_str(hw, hw->aq.asq_last_status));
- return ret;
- }
+ ret = i40evf_aq_set_rss_lut(hw, adapter->vsi.id, false,
+ adapter->rss_lut, adapter->rss_lut_size);
+ if (ret) {
+ dev_err(&adapter->pdev->dev, "Cannot set RSS lut, err %s aq_err %s\n",
+ i40evf_stat_str(hw, ret),
+ i40evf_aq_str(hw, hw->aq.asq_last_status));
}
return ret;
+
}
/**
* i40evf_config_rss_reg - Configure RSS keys and lut by writing registers
- * @vsi: Pointer to vsi structure
- * @seed: RSS hash seed
- * @lut: Lookup table
- * @lut_size: Lookup table size
+ * @adapter: board private structure
*
* Returns 0 on success, negative on failure
**/
-static int i40evf_config_rss_reg(struct i40e_vsi *vsi, const u8 *seed,
- const u8 *lut, u16 lut_size)
+static int i40evf_config_rss_reg(struct i40evf_adapter *adapter)
{
- struct i40evf_adapter *adapter = vsi->back;
struct i40e_hw *hw = &adapter->hw;
+ u32 *dw;
u16 i;
- if (seed) {
- u32 *seed_dw = (u32 *)seed;
-
- for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++)
- wr32(hw, I40E_VFQF_HKEY(i), seed_dw[i]);
- }
-
- if (lut) {
- u32 *lut_dw = (u32 *)lut;
+ dw = (u32 *)adapter->rss_key;
+	for (i = 0; i < adapter->rss_key_size / 4; i++)
+ wr32(hw, I40E_VFQF_HKEY(i), dw[i]);
- if (lut_size != I40EVF_HLUT_ARRAY_SIZE)
- return -EINVAL;
+ dw = (u32 *)adapter->rss_lut;
+	for (i = 0; i < adapter->rss_lut_size / 4; i++)
+ wr32(hw, I40E_VFQF_HLUT(i), dw[i]);
- for (i = 0; i <= I40E_VFQF_HLUT_MAX_INDEX; i++)
- wr32(hw, I40E_VFQF_HLUT(i), lut_dw[i]);
- }
i40e_flush(hw);
return 0;
}
/**
- * * i40evf_get_rss_aq - Get RSS keys and lut by using AQ commands
- * @vsi: Pointer to vsi structure
- * @seed: RSS hash seed
- * @lut: Lookup table
- * @lut_size: Lookup table size
- *
- * Return 0 on success, negative on failure
- **/
-static int i40evf_get_rss_aq(struct i40e_vsi *vsi, const u8 *seed,
- u8 *lut, u16 lut_size)
-{
- struct i40evf_adapter *adapter = vsi->back;
- struct i40e_hw *hw = &adapter->hw;
- int ret = 0;
-
- if (seed) {
- ret = i40evf_aq_get_rss_key(hw, vsi->id,
- (struct i40e_aqc_get_set_rss_key_data *)seed);
- if (ret) {
- dev_err(&adapter->pdev->dev,
- "Cannot get RSS key, err %s aq_err %s\n",
- i40evf_stat_str(hw, ret),
- i40evf_aq_str(hw, hw->aq.asq_last_status));
- return ret;
- }
- }
-
- if (lut) {
- ret = i40evf_aq_get_rss_lut(hw, vsi->id, seed, lut, lut_size);
- if (ret) {
- dev_err(&adapter->pdev->dev,
- "Cannot get RSS lut, err %s aq_err %s\n",
- i40evf_stat_str(hw, ret),
- i40evf_aq_str(hw, hw->aq.asq_last_status));
- return ret;
- }
- }
-
- return ret;
-}
-
-/**
- * * i40evf_get_rss_reg - Get RSS keys and lut by reading registers
- * @vsi: Pointer to vsi structure
- * @seed: RSS hash seed
- * @lut: Lookup table
- * @lut_size: Lookup table size
- *
- * Returns 0 on success, negative on failure
- **/
-static int i40evf_get_rss_reg(struct i40e_vsi *vsi, const u8 *seed,
- const u8 *lut, u16 lut_size)
-{
- struct i40evf_adapter *adapter = vsi->back;
- struct i40e_hw *hw = &adapter->hw;
- u16 i;
-
- if (seed) {
- u32 *seed_dw = (u32 *)seed;
-
- for (i = 0; i <= I40E_VFQF_HKEY_MAX_INDEX; i++)
- seed_dw[i] = rd32(hw, I40E_VFQF_HKEY(i));
- }
-
- if (lut) {
- u32 *lut_dw = (u32 *)lut;
-
- if (lut_size != I40EVF_HLUT_ARRAY_SIZE)
- return -EINVAL;
-
- for (i = 0; i <= I40E_VFQF_HLUT_MAX_INDEX; i++)
- lut_dw[i] = rd32(hw, I40E_VFQF_HLUT(i));
- }
-
- return 0;
-}
-
-/**
* i40evf_config_rss - Configure RSS keys and lut
- * @vsi: Pointer to vsi structure
- * @seed: RSS hash seed
- * @lut: Lookup table
- * @lut_size: Lookup table size
- *
- * Returns 0 on success, negative on failure
- **/
-int i40evf_config_rss(struct i40e_vsi *vsi, const u8 *seed,
- u8 *lut, u16 lut_size)
-{
- struct i40evf_adapter *adapter = vsi->back;
-
- if (RSS_AQ(adapter))
- return i40evf_config_rss_aq(vsi, seed, lut, lut_size);
- else
- return i40evf_config_rss_reg(vsi, seed, lut, lut_size);
-}
-
-/**
- * i40evf_get_rss - Get RSS keys and lut
- * @vsi: Pointer to vsi structure
- * @seed: RSS hash seed
- * @lut: Lookup table
- * @lut_size: Lookup table size
+ * @adapter: board private structure
*
* Returns 0 on success, negative on failure
**/
-int i40evf_get_rss(struct i40e_vsi *vsi, const u8 *seed, u8 *lut, u16 lut_size)
+int i40evf_config_rss(struct i40evf_adapter *adapter)
{
- struct i40evf_adapter *adapter = vsi->back;
- if (RSS_AQ(adapter))
- return i40evf_get_rss_aq(vsi, seed, lut, lut_size);
- else
- return i40evf_get_rss_reg(vsi, seed, lut, lut_size);
+ if (RSS_PF(adapter)) {
+ adapter->aq_required |= I40EVF_FLAG_AQ_SET_RSS_LUT |
+ I40EVF_FLAG_AQ_SET_RSS_KEY;
+ return 0;
+ } else if (RSS_AQ(adapter)) {
+ return i40evf_config_rss_aq(adapter);
+ } else {
+ return i40evf_config_rss_reg(adapter);
+ }
}
/**
* i40evf_fill_rss_lut - Fill the lut with default values
- * @lut: Lookup table to be filled with
- * @rss_table_size: Lookup table size
- * @rss_size: Range of queue number for hashing
+ * @adapter: board private structure
**/
-static void i40evf_fill_rss_lut(u8 *lut, u16 rss_table_size, u16 rss_size)
+static void i40evf_fill_rss_lut(struct i40evf_adapter *adapter)
{
u16 i;
- for (i = 0; i < rss_table_size; i++)
- lut[i] = i % rss_size;
+ for (i = 0; i < adapter->rss_lut_size; i++)
+ adapter->rss_lut[i] = i % adapter->num_active_queues;
}
/**
@@ -1451,42 +1320,25 @@ static void i40evf_fill_rss_lut(u8 *lut, u16 rss_table_size, u16 rss_size)
**/
static int i40evf_init_rss(struct i40evf_adapter *adapter)
{
- struct i40e_vsi *vsi = &adapter->vsi;
struct i40e_hw *hw = &adapter->hw;
- u8 seed[I40EVF_HKEY_ARRAY_SIZE];
- u64 hena;
- u8 *lut;
int ret;
- /* Enable PCTYPES for RSS, TCP/UDP with IPv4/IPv6 */
- if (adapter->vf_res->vf_offload_flags &
- I40E_VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2)
- hena = I40E_DEFAULT_RSS_HENA_EXPANDED;
- else
- hena = I40E_DEFAULT_RSS_HENA;
- wr32(hw, I40E_VFQF_HENA(0), (u32)hena);
- wr32(hw, I40E_VFQF_HENA(1), (u32)(hena >> 32));
+ if (!RSS_PF(adapter)) {
+ /* Enable PCTYPES for RSS, TCP/UDP with IPv4/IPv6 */
+ if (adapter->vf_res->vf_offload_flags &
+ I40E_VIRTCHNL_VF_OFFLOAD_RSS_PCTYPE_V2)
+ adapter->hena = I40E_DEFAULT_RSS_HENA_EXPANDED;
+ else
+ adapter->hena = I40E_DEFAULT_RSS_HENA;
- lut = kzalloc(I40EVF_HLUT_ARRAY_SIZE, GFP_KERNEL);
- if (!lut)
- return -ENOMEM;
+ wr32(hw, I40E_VFQF_HENA(0), (u32)adapter->hena);
+ wr32(hw, I40E_VFQF_HENA(1), (u32)(adapter->hena >> 32));
+ }
- /* Use user configured lut if there is one, otherwise use default */
- if (vsi->rss_lut_user)
- memcpy(lut, vsi->rss_lut_user, I40EVF_HLUT_ARRAY_SIZE);
- else
- i40evf_fill_rss_lut(lut, I40EVF_HLUT_ARRAY_SIZE,
- adapter->num_active_queues);
+ i40evf_fill_rss_lut(adapter);
- /* Use user configured hash key if there is one, otherwise
- * user default.
- */
- if (vsi->rss_hkey_user)
- memcpy(seed, vsi->rss_hkey_user, I40EVF_HKEY_ARRAY_SIZE);
- else
- netdev_rss_key_fill((void *)seed, I40EVF_HKEY_ARRAY_SIZE);
- ret = i40evf_config_rss(vsi, seed, lut, I40EVF_HLUT_ARRAY_SIZE);
- kfree(lut);
+ netdev_rss_key_fill((void *)adapter->rss_key, adapter->rss_key_size);
+ ret = i40evf_config_rss(adapter);
return ret;
}
@@ -1507,7 +1359,7 @@ static int i40evf_alloc_q_vectors(struct i40evf_adapter *adapter)
adapter->q_vectors = kcalloc(num_q_vectors, sizeof(*q_vector),
GFP_KERNEL);
if (!adapter->q_vectors)
- goto err_out;
+ return -ENOMEM;
for (q_idx = 0; q_idx < num_q_vectors; q_idx++) {
q_vector = &adapter->q_vectors[q_idx];
@@ -1519,15 +1371,6 @@ static int i40evf_alloc_q_vectors(struct i40evf_adapter *adapter)
}
return 0;
-
-err_out:
- while (q_idx) {
- q_idx--;
- q_vector = &adapter->q_vectors[q_idx];
- netif_napi_del(&q_vector->napi);
- }
- kfree(adapter->q_vectors);
- return -ENOMEM;
}
/**
@@ -1610,19 +1453,16 @@ err_set_interrupt:
}
/**
- * i40evf_clear_rss_config_user - Clear user configurations of RSS
- * @vsi: Pointer to VSI structure
+ * i40evf_free_rss - Free memory used by RSS structs
+ * @adapter: board private structure
**/
-static void i40evf_clear_rss_config_user(struct i40e_vsi *vsi)
+static void i40evf_free_rss(struct i40evf_adapter *adapter)
{
- if (!vsi)
- return;
-
- kfree(vsi->rss_hkey_user);
- vsi->rss_hkey_user = NULL;
+ kfree(adapter->rss_key);
+ adapter->rss_key = NULL;
- kfree(vsi->rss_lut_user);
- vsi->rss_lut_user = NULL;
+ kfree(adapter->rss_lut);
+ adapter->rss_lut = NULL;
}
/**
@@ -1756,6 +1596,39 @@ static void i40evf_watchdog_task(struct work_struct *work)
adapter->aq_required &= ~I40EVF_FLAG_AQ_CONFIGURE_RSS;
goto watchdog_done;
}
+ if (adapter->aq_required & I40EVF_FLAG_AQ_GET_HENA) {
+ i40evf_get_hena(adapter);
+ goto watchdog_done;
+ }
+ if (adapter->aq_required & I40EVF_FLAG_AQ_SET_HENA) {
+ i40evf_set_hena(adapter);
+ goto watchdog_done;
+ }
+ if (adapter->aq_required & I40EVF_FLAG_AQ_SET_RSS_KEY) {
+ i40evf_set_rss_key(adapter);
+ goto watchdog_done;
+ }
+ if (adapter->aq_required & I40EVF_FLAG_AQ_SET_RSS_LUT) {
+ i40evf_set_rss_lut(adapter);
+ goto watchdog_done;
+ }
+
+ if (adapter->aq_required & I40EVF_FLAG_AQ_REQUEST_PROMISC) {
+ i40evf_set_promiscuous(adapter, I40E_FLAG_VF_UNICAST_PROMISC |
+ I40E_FLAG_VF_MULTICAST_PROMISC);
+ goto watchdog_done;
+ }
+
+ if (adapter->aq_required & I40EVF_FLAG_AQ_REQUEST_ALLMULTI) {
+ i40evf_set_promiscuous(adapter, I40E_FLAG_VF_MULTICAST_PROMISC);
+ goto watchdog_done;
+ }
+
+ if ((adapter->aq_required & I40EVF_FLAG_AQ_RELEASE_PROMISC) &&
+ (adapter->aq_required & I40EVF_FLAG_AQ_RELEASE_ALLMULTI)) {
+ i40evf_set_promiscuous(adapter, 0);
+ goto watchdog_done;
+ }
if (adapter->state == __I40EVF_RUNNING)
i40evf_request_stats(adapter);
@@ -2003,6 +1876,8 @@ static void i40evf_adminq_task(struct work_struct *work)
/* check for error indications */
val = rd32(hw, hw->aq.arq.len);
+ if (val == 0xdeadbeef) /* indicates device in reset */
+ goto freedom;
oldval = val;
if (val & I40E_VF_ARQLEN1_ARQVFE_MASK) {
dev_info(&adapter->pdev->dev, "ARQ VF Error detected\n");
@@ -2259,6 +2134,28 @@ static int i40evf_change_mtu(struct net_device *netdev, int new_mtu)
return 0;
}
+#define I40EVF_VLAN_FEATURES (NETIF_F_HW_VLAN_CTAG_TX |\
+ NETIF_F_HW_VLAN_CTAG_RX |\
+ NETIF_F_HW_VLAN_CTAG_FILTER)
+
+/**
+ * i40evf_fix_features - fix up the netdev feature bits
+ * @netdev: our net device
+ * @features: desired feature bits
+ *
+ * Returns fixed-up features bits
+ **/
+static netdev_features_t i40evf_fix_features(struct net_device *netdev,
+ netdev_features_t features)
+{
+ struct i40evf_adapter *adapter = netdev_priv(netdev);
+
+ features &= ~I40EVF_VLAN_FEATURES;
+ if (adapter->vf_res->vf_offload_flags & I40E_VIRTCHNL_VF_OFFLOAD_VLAN)
+ features |= I40EVF_VLAN_FEATURES;
+ return features;
+}
+
static const struct net_device_ops i40evf_netdev_ops = {
.ndo_open = i40evf_open,
.ndo_stop = i40evf_close,
@@ -2271,6 +2168,7 @@ static const struct net_device_ops i40evf_netdev_ops = {
.ndo_tx_timeout = i40evf_tx_timeout,
.ndo_vlan_rx_add_vid = i40evf_vlan_rx_add_vid,
.ndo_vlan_rx_kill_vid = i40evf_vlan_rx_kill_vid,
+ .ndo_fix_features = i40evf_fix_features,
#ifdef CONFIG_NET_POLL_CONTROLLER
.ndo_poll_controller = i40evf_netpoll,
#endif
@@ -2307,57 +2205,61 @@ static int i40evf_check_reset_complete(struct i40e_hw *hw)
**/
int i40evf_process_config(struct i40evf_adapter *adapter)
{
+ struct i40e_virtchnl_vf_resource *vfres = adapter->vf_res;
struct net_device *netdev = adapter->netdev;
+ struct i40e_vsi *vsi = &adapter->vsi;
int i;
/* got VF config message back from PF, now we can parse it */
- for (i = 0; i < adapter->vf_res->num_vsis; i++) {
- if (adapter->vf_res->vsi_res[i].vsi_type == I40E_VSI_SRIOV)
- adapter->vsi_res = &adapter->vf_res->vsi_res[i];
+ for (i = 0; i < vfres->num_vsis; i++) {
+ if (vfres->vsi_res[i].vsi_type == I40E_VSI_SRIOV)
+ adapter->vsi_res = &vfres->vsi_res[i];
}
if (!adapter->vsi_res) {
dev_err(&adapter->pdev->dev, "No LAN VSI found\n");
return -ENODEV;
}
- if (adapter->vf_res->vf_offload_flags
- & I40E_VIRTCHNL_VF_OFFLOAD_VLAN) {
- netdev->vlan_features = netdev->features &
- ~(NETIF_F_HW_VLAN_CTAG_TX |
- NETIF_F_HW_VLAN_CTAG_RX |
- NETIF_F_HW_VLAN_CTAG_FILTER);
- netdev->features |= NETIF_F_HW_VLAN_CTAG_TX |
- NETIF_F_HW_VLAN_CTAG_RX |
- NETIF_F_HW_VLAN_CTAG_FILTER;
- }
- netdev->features |= NETIF_F_HIGHDMA |
- NETIF_F_SG |
- NETIF_F_IP_CSUM |
- NETIF_F_SCTP_CRC |
- NETIF_F_IPV6_CSUM |
- NETIF_F_TSO |
- NETIF_F_TSO6 |
- NETIF_F_TSO_ECN |
- NETIF_F_GSO_GRE |
- NETIF_F_GSO_UDP_TUNNEL |
- NETIF_F_RXCSUM |
- NETIF_F_GRO;
-
- netdev->hw_enc_features |= NETIF_F_IP_CSUM |
- NETIF_F_IPV6_CSUM |
- NETIF_F_TSO |
- NETIF_F_TSO6 |
- NETIF_F_TSO_ECN |
- NETIF_F_GSO_GRE |
- NETIF_F_GSO_UDP_TUNNEL |
- NETIF_F_GSO_UDP_TUNNEL_CSUM;
-
- if (adapter->flags & I40EVF_FLAG_OUTER_UDP_CSUM_CAPABLE)
- netdev->features |= NETIF_F_GSO_UDP_TUNNEL_CSUM;
-
- /* copy netdev features into list of user selectable features */
- netdev->hw_features |= netdev->features;
- netdev->hw_features &= ~NETIF_F_RXCSUM;
+ netdev->hw_enc_features |= NETIF_F_SG |
+ NETIF_F_IP_CSUM |
+ NETIF_F_IPV6_CSUM |
+ NETIF_F_HIGHDMA |
+ NETIF_F_SOFT_FEATURES |
+ NETIF_F_TSO |
+ NETIF_F_TSO_ECN |
+ NETIF_F_TSO6 |
+ NETIF_F_GSO_GRE |
+ NETIF_F_GSO_GRE_CSUM |
+ NETIF_F_GSO_IPXIP4 |
+ NETIF_F_GSO_IPXIP6 |
+ NETIF_F_GSO_UDP_TUNNEL |
+ NETIF_F_GSO_UDP_TUNNEL_CSUM |
+ NETIF_F_GSO_PARTIAL |
+ NETIF_F_SCTP_CRC |
+ NETIF_F_RXHASH |
+ NETIF_F_RXCSUM |
+ 0;
+
+ if (!(adapter->flags & I40EVF_FLAG_OUTER_UDP_CSUM_CAPABLE))
+ netdev->gso_partial_features |= NETIF_F_GSO_UDP_TUNNEL_CSUM;
+
+ netdev->gso_partial_features |= NETIF_F_GSO_GRE_CSUM;
+
+ /* record features VLANs can make use of */
+ netdev->vlan_features |= netdev->hw_enc_features |
+ NETIF_F_TSO_MANGLEID;
+
+ /* Write features and hw_features separately to avoid polluting
+	 * with, or dropping, features that are set when we registered.
+ */
+ netdev->hw_features |= netdev->hw_enc_features;
+
+ netdev->features |= netdev->hw_enc_features | I40EVF_VLAN_FEATURES;
+ netdev->hw_enc_features |= NETIF_F_TSO_MANGLEID;
+
+ /* disable VLAN features if not supported */
+ if (!(vfres->vf_offload_flags & I40E_VIRTCHNL_VF_OFFLOAD_VLAN))
+ netdev->features ^= I40EVF_VLAN_FEATURES;
adapter->vsi.id = adapter->vsi_res->vsi_id;
@@ -2368,8 +2270,16 @@ int i40evf_process_config(struct i40evf_adapter *adapter)
ITR_REG_TO_USEC(I40E_ITR_RX_DEF));
adapter->vsi.tx_itr_setting = (I40E_ITR_DYNAMIC |
ITR_REG_TO_USEC(I40E_ITR_TX_DEF));
- adapter->vsi.netdev = adapter->netdev;
- adapter->vsi.qs_handle = adapter->vsi_res->qset_handle;
+ vsi->netdev = adapter->netdev;
+ vsi->qs_handle = adapter->vsi_res->qset_handle;
+ if (vfres->vf_offload_flags & I40E_VIRTCHNL_VF_OFFLOAD_RSS_PF) {
+ adapter->rss_key_size = vfres->rss_key_size;
+ adapter->rss_lut_size = vfres->rss_lut_size;
+ } else {
+ adapter->rss_key_size = I40EVF_HKEY_ARRAY_SIZE;
+ adapter->rss_lut_size = I40EVF_HLUT_ARRAY_SIZE;
+ }
+
return 0;
}
@@ -2502,11 +2412,6 @@ static void i40evf_init_task(struct work_struct *work)
adapter->current_op = I40E_VIRTCHNL_OP_UNKNOWN;
adapter->flags |= I40EVF_FLAG_RX_CSUM_ENABLED;
- adapter->flags |= I40EVF_FLAG_RX_1BUF_CAPABLE;
- adapter->flags |= I40EVF_FLAG_RX_PS_CAPABLE;
-
- /* Default to single buffer rx, can be changed through ethtool. */
- adapter->flags &= ~I40EVF_FLAG_RX_PS_ENABLED;
netdev->netdev_ops = &i40evf_netdev_ops;
i40evf_set_ethtool_ops(netdev);
@@ -2565,6 +2470,11 @@ static void i40evf_init_task(struct work_struct *work)
set_bit(__I40E_DOWN, &adapter->vsi.state);
i40evf_misc_irq_enable(adapter);
+ adapter->rss_key = kzalloc(adapter->rss_key_size, GFP_KERNEL);
+ adapter->rss_lut = kzalloc(adapter->rss_lut_size, GFP_KERNEL);
+ if (!adapter->rss_key || !adapter->rss_lut)
+ goto err_mem;
+
if (RSS_AQ(adapter)) {
adapter->aq_required |= I40EVF_FLAG_AQ_CONFIGURE_RSS;
mod_timer_pending(&adapter->watchdog_timer, jiffies + 1);
@@ -2575,7 +2485,8 @@ static void i40evf_init_task(struct work_struct *work)
restart:
schedule_delayed_work(&adapter->init_task, msecs_to_jiffies(30));
return;
-
+err_mem:
+ i40evf_free_rss(adapter);
err_register:
i40evf_free_misc_irq(adapter);
err_sw_init:
@@ -2838,11 +2749,11 @@ static void i40evf_remove(struct pci_dev *pdev)
adapter->state = __I40EVF_REMOVE;
adapter->aq_required = 0;
i40evf_request_reset(adapter);
- msleep(20);
+ msleep(50);
/* If the FW isn't responding, kick it once, but only once. */
if (!i40evf_asq_done(hw)) {
i40evf_request_reset(adapter);
- msleep(20);
+ msleep(50);
}
if (adapter->msix_entries) {
@@ -2857,8 +2768,7 @@ static void i40evf_remove(struct pci_dev *pdev)
flush_scheduled_work();
- /* Clear user configurations for RSS */
- i40evf_clear_rss_config_user(&adapter->vsi);
+ i40evf_free_rss(adapter);
if (hw->aq.asq.count)
i40evf_shutdown_adminq(hw);
@@ -2869,7 +2779,6 @@ static void i40evf_remove(struct pci_dev *pdev)
iounmap(hw->hw_addr);
pci_release_regions(pdev);
-
i40evf_free_all_tx_resources(adapter);
i40evf_free_all_rx_resources(adapter);
i40evf_free_queues(adapter);
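
The watchdog changes above defer each new RSS and promiscuous request behind an I40EVF_FLAG_AQ_* bit so that only one virtchnl command is outstanding at a time. The following is a rough, stand-alone sketch of that one-request-per-pass dispatch pattern, under the assumption that this is the intent of the flag handling; the flag and function names here are illustrative, not the driver's:

#include <stdio.h>

/* Illustrative flag bits, not the driver's real I40EVF_FLAG_AQ_* values. */
#define AQ_GET_HENA	(1u << 0)
#define AQ_SET_RSS_KEY	(1u << 1)
#define AQ_SET_RSS_LUT	(1u << 2)

struct adapter {
	unsigned int aq_required;
};

/* One pending request is issued per watchdog pass, then the pass ends. */
static void watchdog_pass(struct adapter *a)
{
	if (a->aq_required & AQ_GET_HENA) {
		a->aq_required &= ~AQ_GET_HENA;
		printf("send GET_RSS_HENA_CAPS\n");
		return;
	}
	if (a->aq_required & AQ_SET_RSS_KEY) {
		a->aq_required &= ~AQ_SET_RSS_KEY;
		printf("send CONFIG_RSS_KEY\n");
		return;
	}
	if (a->aq_required & AQ_SET_RSS_LUT) {
		a->aq_required &= ~AQ_SET_RSS_LUT;
		printf("send CONFIG_RSS_LUT\n");
		return;
	}
}

int main(void)
{
	struct adapter a = { .aq_required = AQ_SET_RSS_KEY | AQ_SET_RSS_LUT };

	watchdog_pass(&a);	/* issues the key request */
	watchdog_pass(&a);	/* issues the lut request */
	return 0;
}
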
diff --git a/drivers/net/ethernet/intel/i40evf/i40evf_virtchnl.c b/drivers/net/ethernet/intel/i40evf/i40evf_virtchnl.c
index 488e738f76c6..f13445691507 100644
--- a/drivers/net/ethernet/intel/i40evf/i40evf_virtchnl.c
+++ b/drivers/net/ethernet/intel/i40evf/i40evf_virtchnl.c
@@ -270,10 +270,6 @@ void i40evf_configure_queues(struct i40evf_adapter *adapter)
vqpi->rxq.max_pkt_size = adapter->netdev->mtu
+ ETH_HLEN + VLAN_HLEN + ETH_FCS_LEN;
vqpi->rxq.databuffer_size = adapter->rx_rings[i].rx_buf_len;
- if (adapter->flags & I40EVF_FLAG_RX_PS_ENABLED) {
- vqpi->rxq.splithdr_enabled = true;
- vqpi->rxq.hdr_size = I40E_RX_HDR_SIZE;
- }
vqpi++;
}
@@ -645,6 +641,7 @@ void i40evf_del_vlans(struct i40evf_adapter *adapter)
void i40evf_set_promiscuous(struct i40evf_adapter *adapter, int flags)
{
struct i40e_virtchnl_promisc_info vpi;
+ int promisc_all;
if (adapter->current_op != I40E_VIRTCHNL_OP_UNKNOWN) {
/* bail because we already have a command pending */
@@ -652,6 +649,27 @@ void i40evf_set_promiscuous(struct i40evf_adapter *adapter, int flags)
adapter->current_op);
return;
}
+
+ promisc_all = I40E_FLAG_VF_UNICAST_PROMISC |
+ I40E_FLAG_VF_MULTICAST_PROMISC;
+ if ((flags & promisc_all) == promisc_all) {
+ adapter->flags |= I40EVF_FLAG_PROMISC_ON;
+ adapter->aq_required &= ~I40EVF_FLAG_AQ_REQUEST_PROMISC;
+ dev_info(&adapter->pdev->dev, "Entering promiscuous mode\n");
+ }
+
+ if (flags & I40E_FLAG_VF_MULTICAST_PROMISC) {
+ adapter->flags |= I40EVF_FLAG_ALLMULTI_ON;
+ adapter->aq_required &= ~I40EVF_FLAG_AQ_REQUEST_ALLMULTI;
+ dev_info(&adapter->pdev->dev, "Entering multicast promiscuous mode\n");
+ }
+
+ if (!flags) {
+ adapter->flags &= ~I40EVF_FLAG_PROMISC_ON;
+ adapter->aq_required &= ~I40EVF_FLAG_AQ_RELEASE_PROMISC;
+ dev_info(&adapter->pdev->dev, "Leaving promiscuous mode\n");
+ }
+
adapter->current_op = I40E_VIRTCHNL_OP_CONFIG_PROMISCUOUS_MODE;
vpi.vsi_id = adapter->vsi_res->vsi_id;
vpi.flags = flags;
@@ -681,6 +699,115 @@ void i40evf_request_stats(struct i40evf_adapter *adapter)
/* if the request failed, don't lock out others */
adapter->current_op = I40E_VIRTCHNL_OP_UNKNOWN;
}
+
+/**
+ * i40evf_get_hena
+ * @adapter: adapter structure
+ *
+ * Request hash enable capabilities from PF
+ **/
+void i40evf_get_hena(struct i40evf_adapter *adapter)
+{
+ if (adapter->current_op != I40E_VIRTCHNL_OP_UNKNOWN) {
+ /* bail because we already have a command pending */
+ dev_err(&adapter->pdev->dev, "Cannot get RSS hash capabilities, command %d pending\n",
+ adapter->current_op);
+ return;
+ }
+ adapter->current_op = I40E_VIRTCHNL_OP_GET_RSS_HENA_CAPS;
+ adapter->aq_required &= ~I40EVF_FLAG_AQ_GET_HENA;
+ i40evf_send_pf_msg(adapter, I40E_VIRTCHNL_OP_GET_RSS_HENA_CAPS,
+ NULL, 0);
+}
+
+/**
+ * i40evf_set_hena
+ * @adapter: adapter structure
+ *
+ * Request the PF to set our RSS hash capabilities
+ **/
+void i40evf_set_hena(struct i40evf_adapter *adapter)
+{
+ struct i40e_virtchnl_rss_hena vrh;
+
+ if (adapter->current_op != I40E_VIRTCHNL_OP_UNKNOWN) {
+ /* bail because we already have a command pending */
+ dev_err(&adapter->pdev->dev, "Cannot set RSS hash enable, command %d pending\n",
+ adapter->current_op);
+ return;
+ }
+ vrh.hena = adapter->hena;
+ adapter->current_op = I40E_VIRTCHNL_OP_SET_RSS_HENA;
+ adapter->aq_required &= ~I40EVF_FLAG_AQ_SET_HENA;
+ i40evf_send_pf_msg(adapter, I40E_VIRTCHNL_OP_SET_RSS_HENA,
+ (u8 *)&vrh, sizeof(vrh));
+}
+
+/**
+ * i40evf_set_rss_key
+ * @adapter: adapter structure
+ *
+ * Request the PF to set our RSS hash key
+ **/
+void i40evf_set_rss_key(struct i40evf_adapter *adapter)
+{
+ struct i40e_virtchnl_rss_key *vrk;
+ int len;
+
+ if (adapter->current_op != I40E_VIRTCHNL_OP_UNKNOWN) {
+ /* bail because we already have a command pending */
+ dev_err(&adapter->pdev->dev, "Cannot set RSS key, command %d pending\n",
+ adapter->current_op);
+ return;
+ }
+ len = sizeof(struct i40e_virtchnl_rss_key) +
+ (adapter->rss_key_size * sizeof(u8)) - 1;
+ vrk = kzalloc(len, GFP_KERNEL);
+ if (!vrk)
+ return;
+ vrk->vsi_id = adapter->vsi.id;
+ vrk->key_len = adapter->rss_key_size;
+ memcpy(vrk->key, adapter->rss_key, adapter->rss_key_size);
+
+ adapter->current_op = I40E_VIRTCHNL_OP_CONFIG_RSS_KEY;
+ adapter->aq_required &= ~I40EVF_FLAG_AQ_SET_RSS_KEY;
+ i40evf_send_pf_msg(adapter, I40E_VIRTCHNL_OP_CONFIG_RSS_KEY,
+ (u8 *)vrk, len);
+ kfree(vrk);
+}
+
+/**
+ * i40evf_set_rss_lut
+ * @adapter: adapter structure
+ *
+ * Request the PF to set our RSS lookup table
+ **/
+void i40evf_set_rss_lut(struct i40evf_adapter *adapter)
+{
+ struct i40e_virtchnl_rss_lut *vrl;
+ int len;
+
+ if (adapter->current_op != I40E_VIRTCHNL_OP_UNKNOWN) {
+ /* bail because we already have a command pending */
+ dev_err(&adapter->pdev->dev, "Cannot set RSS LUT, command %d pending\n",
+ adapter->current_op);
+ return;
+ }
+ len = sizeof(struct i40e_virtchnl_rss_lut) +
+ (adapter->rss_lut_size * sizeof(u8)) - 1;
+ vrl = kzalloc(len, GFP_KERNEL);
+ if (!vrl)
+ return;
+ vrl->vsi_id = adapter->vsi.id;
+ vrl->lut_entries = adapter->rss_lut_size;
+ memcpy(vrl->lut, adapter->rss_lut, adapter->rss_lut_size);
+ adapter->current_op = I40E_VIRTCHNL_OP_CONFIG_RSS_LUT;
+ adapter->aq_required &= ~I40EVF_FLAG_AQ_SET_RSS_LUT;
+ i40evf_send_pf_msg(adapter, I40E_VIRTCHNL_OP_CONFIG_RSS_LUT,
+ (u8 *)vrl, len);
+ kfree(vrl);
+}
+
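
Both i40evf_set_rss_key() and i40evf_set_rss_lut() above size their messages as sizeof(struct) + n - 1 because the virtchnl structs already contain one byte of the key/lut array. A minimal sketch of that sizing pattern, with a made-up struct and function name rather than the real i40e_virtchnl definitions:

#include <stdlib.h>
#include <string.h>

/* Illustrative message layout; not the real i40e_virtchnl_rss_lut struct. */
struct lut_msg {
	unsigned short vsi_id;
	unsigned short lut_entries;
	unsigned char lut[1];	/* first byte of a variable-length table */
};

static struct lut_msg *build_lut_msg(const unsigned char *lut, unsigned short n)
{
	/* sizeof() already covers lut[0], so only n - 1 extra bytes are needed */
	size_t len = sizeof(struct lut_msg) + n - 1;
	struct lut_msg *m = calloc(1, len);

	if (!m)
		return NULL;
	m->lut_entries = n;
	memcpy(m->lut, lut, n);
	return m;
}

int main(void)
{
	unsigned char table[64] = { 0 };
	struct lut_msg *m = build_lut_msg(table, sizeof(table));

	free(m);
	return 0;
}
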
/**
* i40evf_request_reset
* @adapter: adapter structure
@@ -820,6 +947,16 @@ void i40evf_virtchnl_completion(struct i40evf_adapter *adapter,
if (v_opcode != adapter->current_op)
return;
break;
+ case I40E_VIRTCHNL_OP_GET_RSS_HENA_CAPS: {
+ struct i40e_virtchnl_rss_hena *vrh =
+ (struct i40e_virtchnl_rss_hena *)msg;
+ if (msglen == sizeof(*vrh))
+ adapter->hena = vrh->hena;
+ else
+ dev_warn(&adapter->pdev->dev,
+ "Invalid message %d from PF\n", v_opcode);
+ }
+ break;
default:
if (v_opcode != adapter->current_op)
dev_warn(&adapter->pdev->dev, "Expected response %d from PF, received %d\n",
diff --git a/drivers/net/ethernet/intel/igb/e1000_82575.c b/drivers/net/ethernet/intel/igb/e1000_82575.c
index a23aa6704394..a61447fd778e 100644
--- a/drivers/net/ethernet/intel/igb/e1000_82575.c
+++ b/drivers/net/ethernet/intel/igb/e1000_82575.c
@@ -361,7 +361,7 @@ static s32 igb_init_nvm_params_82575(struct e1000_hw *hw)
if (size > 15)
size = 15;
- nvm->word_size = 1 << size;
+ nvm->word_size = BIT(size);
nvm->opcode_bits = 8;
nvm->delay_usec = 1;
@@ -380,7 +380,7 @@ static s32 igb_init_nvm_params_82575(struct e1000_hw *hw)
16 : 8;
break;
}
- if (nvm->word_size == (1 << 15))
+ if (nvm->word_size == BIT(15))
nvm->page_size = 128;
nvm->type = e1000_nvm_eeprom_spi;
@@ -391,7 +391,7 @@ static s32 igb_init_nvm_params_82575(struct e1000_hw *hw)
nvm->ops.write = igb_write_nvm_spi;
nvm->ops.validate = igb_validate_nvm_checksum;
nvm->ops.update = igb_update_nvm_checksum;
- if (nvm->word_size < (1 << 15))
+ if (nvm->word_size < BIT(15))
nvm->ops.read = igb_read_nvm_eerd;
else
nvm->ops.read = igb_read_nvm_spi;
@@ -2107,7 +2107,7 @@ void igb_vmdq_set_anti_spoofing_pf(struct e1000_hw *hw, bool enable, int pf)
/* The PF can spoof - it has to in order to
* support emulation mode NICs
*/
- reg_val ^= (1 << pf | 1 << (pf + MAX_NUM_VFS));
+ reg_val ^= (BIT(pf) | BIT(pf + MAX_NUM_VFS));
} else {
reg_val &= ~(E1000_DTXSWC_MAC_SPOOF_MASK |
E1000_DTXSWC_VLAN_SPOOF_MASK);
diff --git a/drivers/net/ethernet/intel/igb/e1000_82575.h b/drivers/net/ethernet/intel/igb/e1000_82575.h
index de8805a2a2fe..199ff98209cf 100644
--- a/drivers/net/ethernet/intel/igb/e1000_82575.h
+++ b/drivers/net/ethernet/intel/igb/e1000_82575.h
@@ -168,16 +168,16 @@ struct e1000_adv_tx_context_desc {
#define E1000_DCA_CTRL_DCA_MODE_CB2 0x02 /* DCA Mode CB2 */
#define E1000_DCA_RXCTRL_CPUID_MASK 0x0000001F /* Rx CPUID Mask */
-#define E1000_DCA_RXCTRL_DESC_DCA_EN (1 << 5) /* DCA Rx Desc enable */
-#define E1000_DCA_RXCTRL_HEAD_DCA_EN (1 << 6) /* DCA Rx Desc header enable */
-#define E1000_DCA_RXCTRL_DATA_DCA_EN (1 << 7) /* DCA Rx Desc payload enable */
-#define E1000_DCA_RXCTRL_DESC_RRO_EN (1 << 9) /* DCA Rx rd Desc Relax Order */
+#define E1000_DCA_RXCTRL_DESC_DCA_EN BIT(5) /* DCA Rx Desc enable */
+#define E1000_DCA_RXCTRL_HEAD_DCA_EN BIT(6) /* DCA Rx Desc header enable */
+#define E1000_DCA_RXCTRL_DATA_DCA_EN BIT(7) /* DCA Rx Desc payload enable */
+#define E1000_DCA_RXCTRL_DESC_RRO_EN BIT(9) /* DCA Rx rd Desc Relax Order */
#define E1000_DCA_TXCTRL_CPUID_MASK 0x0000001F /* Tx CPUID Mask */
-#define E1000_DCA_TXCTRL_DESC_DCA_EN (1 << 5) /* DCA Tx Desc enable */
-#define E1000_DCA_TXCTRL_DESC_RRO_EN (1 << 9) /* Tx rd Desc Relax Order */
-#define E1000_DCA_TXCTRL_TX_WB_RO_EN (1 << 11) /* Tx Desc writeback RO bit */
-#define E1000_DCA_TXCTRL_DATA_RRO_EN (1 << 13) /* Tx rd data Relax Order */
+#define E1000_DCA_TXCTRL_DESC_DCA_EN BIT(5) /* DCA Tx Desc enable */
+#define E1000_DCA_TXCTRL_DESC_RRO_EN BIT(9) /* Tx rd Desc Relax Order */
+#define E1000_DCA_TXCTRL_TX_WB_RO_EN BIT(11) /* Tx Desc writeback RO bit */
+#define E1000_DCA_TXCTRL_DATA_RRO_EN BIT(13) /* Tx rd data Relax Order */
/* Additional DCA related definitions, note change in position of CPUID */
#define E1000_DCA_TXCTRL_CPUID_MASK_82576 0xFF000000 /* Tx CPUID Mask */
@@ -186,8 +186,8 @@ struct e1000_adv_tx_context_desc {
#define E1000_DCA_RXCTRL_CPUID_SHIFT 24 /* Rx CPUID now in the last byte */
/* ETQF register bit definitions */
-#define E1000_ETQF_FILTER_ENABLE (1 << 26)
-#define E1000_ETQF_1588 (1 << 30)
+#define E1000_ETQF_FILTER_ENABLE BIT(26)
+#define E1000_ETQF_1588 BIT(30)
/* FTQF register bit definitions */
#define E1000_FTQF_VF_BP 0x00008000
@@ -203,16 +203,16 @@ struct e1000_adv_tx_context_desc {
#define E1000_DTXSWC_VLAN_SPOOF_MASK 0x0000FF00 /* Per VF VLAN spoof control */
#define E1000_DTXSWC_LLE_MASK 0x00FF0000 /* Per VF Local LB enables */
#define E1000_DTXSWC_VLAN_SPOOF_SHIFT 8
-#define E1000_DTXSWC_VMDQ_LOOPBACK_EN (1 << 31) /* global VF LB enable */
+#define E1000_DTXSWC_VMDQ_LOOPBACK_EN BIT(31) /* global VF LB enable */
/* Easy defines for setting default pool, would normally be left a zero */
#define E1000_VT_CTL_DEFAULT_POOL_SHIFT 7
#define E1000_VT_CTL_DEFAULT_POOL_MASK (0x7 << E1000_VT_CTL_DEFAULT_POOL_SHIFT)
/* Other useful VMD_CTL register defines */
-#define E1000_VT_CTL_IGNORE_MAC (1 << 28)
-#define E1000_VT_CTL_DISABLE_DEF_POOL (1 << 29)
-#define E1000_VT_CTL_VM_REPL_EN (1 << 30)
+#define E1000_VT_CTL_IGNORE_MAC BIT(28)
+#define E1000_VT_CTL_DISABLE_DEF_POOL BIT(29)
+#define E1000_VT_CTL_VM_REPL_EN BIT(30)
/* Per VM Offload register setup */
#define E1000_VMOLR_RLPML_MASK 0x00003FFF /* Long Packet Maximum Length mask */
@@ -252,7 +252,7 @@ struct e1000_adv_tx_context_desc {
#define E1000_DTXCTL_MDP_EN 0x0020
#define E1000_DTXCTL_SPOOF_INT 0x0040
-#define E1000_EEPROM_PCS_AUTONEG_DISABLE_BIT (1 << 14)
+#define E1000_EEPROM_PCS_AUTONEG_DISABLE_BIT BIT(14)
#define ALL_QUEUES 0xFFFF
diff --git a/drivers/net/ethernet/intel/igb/e1000_defines.h b/drivers/net/ethernet/intel/igb/e1000_defines.h
index e9f23ee8f15e..2997c443c5dc 100644
--- a/drivers/net/ethernet/intel/igb/e1000_defines.h
+++ b/drivers/net/ethernet/intel/igb/e1000_defines.h
@@ -530,65 +530,65 @@
/* Time Sync Interrupt Cause/Mask Register Bits */
-#define TSINTR_SYS_WRAP (1 << 0) /* SYSTIM Wrap around. */
-#define TSINTR_TXTS (1 << 1) /* Transmit Timestamp. */
-#define TSINTR_RXTS (1 << 2) /* Receive Timestamp. */
-#define TSINTR_TT0 (1 << 3) /* Target Time 0 Trigger. */
-#define TSINTR_TT1 (1 << 4) /* Target Time 1 Trigger. */
-#define TSINTR_AUTT0 (1 << 5) /* Auxiliary Timestamp 0 Taken. */
-#define TSINTR_AUTT1 (1 << 6) /* Auxiliary Timestamp 1 Taken. */
-#define TSINTR_TADJ (1 << 7) /* Time Adjust Done. */
+#define TSINTR_SYS_WRAP BIT(0) /* SYSTIM Wrap around. */
+#define TSINTR_TXTS BIT(1) /* Transmit Timestamp. */
+#define TSINTR_RXTS BIT(2) /* Receive Timestamp. */
+#define TSINTR_TT0 BIT(3) /* Target Time 0 Trigger. */
+#define TSINTR_TT1 BIT(4) /* Target Time 1 Trigger. */
+#define TSINTR_AUTT0 BIT(5) /* Auxiliary Timestamp 0 Taken. */
+#define TSINTR_AUTT1 BIT(6) /* Auxiliary Timestamp 1 Taken. */
+#define TSINTR_TADJ BIT(7) /* Time Adjust Done. */
#define TSYNC_INTERRUPTS TSINTR_TXTS
#define E1000_TSICR_TXTS TSINTR_TXTS
/* TSAUXC Configuration Bits */
-#define TSAUXC_EN_TT0 (1 << 0) /* Enable target time 0. */
-#define TSAUXC_EN_TT1 (1 << 1) /* Enable target time 1. */
-#define TSAUXC_EN_CLK0 (1 << 2) /* Enable Configurable Frequency Clock 0. */
-#define TSAUXC_SAMP_AUT0 (1 << 3) /* Latch SYSTIML/H into AUXSTMPL/0. */
-#define TSAUXC_ST0 (1 << 4) /* Start Clock 0 Toggle on Target Time 0. */
-#define TSAUXC_EN_CLK1 (1 << 5) /* Enable Configurable Frequency Clock 1. */
-#define TSAUXC_SAMP_AUT1 (1 << 6) /* Latch SYSTIML/H into AUXSTMPL/1. */
-#define TSAUXC_ST1 (1 << 7) /* Start Clock 1 Toggle on Target Time 1. */
-#define TSAUXC_EN_TS0 (1 << 8) /* Enable hardware timestamp 0. */
-#define TSAUXC_AUTT0 (1 << 9) /* Auxiliary Timestamp Taken. */
-#define TSAUXC_EN_TS1 (1 << 10) /* Enable hardware timestamp 0. */
-#define TSAUXC_AUTT1 (1 << 11) /* Auxiliary Timestamp Taken. */
-#define TSAUXC_PLSG (1 << 17) /* Generate a pulse. */
-#define TSAUXC_DISABLE (1 << 31) /* Disable SYSTIM Count Operation. */
+#define TSAUXC_EN_TT0 BIT(0) /* Enable target time 0. */
+#define TSAUXC_EN_TT1 BIT(1) /* Enable target time 1. */
+#define TSAUXC_EN_CLK0 BIT(2) /* Enable Configurable Frequency Clock 0. */
+#define TSAUXC_SAMP_AUT0 BIT(3) /* Latch SYSTIML/H into AUXSTMPL/0. */
+#define TSAUXC_ST0 BIT(4) /* Start Clock 0 Toggle on Target Time 0. */
+#define TSAUXC_EN_CLK1 BIT(5) /* Enable Configurable Frequency Clock 1. */
+#define TSAUXC_SAMP_AUT1 BIT(6) /* Latch SYSTIML/H into AUXSTMPL/1. */
+#define TSAUXC_ST1 BIT(7) /* Start Clock 1 Toggle on Target Time 1. */
+#define TSAUXC_EN_TS0 BIT(8) /* Enable hardware timestamp 0. */
+#define TSAUXC_AUTT0 BIT(9) /* Auxiliary Timestamp Taken. */
+#define TSAUXC_EN_TS1 BIT(10) /* Enable hardware timestamp 0. */
+#define TSAUXC_AUTT1 BIT(11) /* Auxiliary Timestamp Taken. */
+#define TSAUXC_PLSG BIT(17) /* Generate a pulse. */
+#define TSAUXC_DISABLE BIT(31) /* Disable SYSTIM Count Operation. */
/* SDP Configuration Bits */
-#define AUX0_SEL_SDP0 (0 << 0) /* Assign SDP0 to auxiliary time stamp 0. */
-#define AUX0_SEL_SDP1 (1 << 0) /* Assign SDP1 to auxiliary time stamp 0. */
-#define AUX0_SEL_SDP2 (2 << 0) /* Assign SDP2 to auxiliary time stamp 0. */
-#define AUX0_SEL_SDP3 (3 << 0) /* Assign SDP3 to auxiliary time stamp 0. */
-#define AUX0_TS_SDP_EN (1 << 2) /* Enable auxiliary time stamp trigger 0. */
-#define AUX1_SEL_SDP0 (0 << 3) /* Assign SDP0 to auxiliary time stamp 1. */
-#define AUX1_SEL_SDP1 (1 << 3) /* Assign SDP1 to auxiliary time stamp 1. */
-#define AUX1_SEL_SDP2 (2 << 3) /* Assign SDP2 to auxiliary time stamp 1. */
-#define AUX1_SEL_SDP3 (3 << 3) /* Assign SDP3 to auxiliary time stamp 1. */
-#define AUX1_TS_SDP_EN (1 << 5) /* Enable auxiliary time stamp trigger 1. */
-#define TS_SDP0_SEL_TT0 (0 << 6) /* Target time 0 is output on SDP0. */
-#define TS_SDP0_SEL_TT1 (1 << 6) /* Target time 1 is output on SDP0. */
-#define TS_SDP0_SEL_FC0 (2 << 6) /* Freq clock 0 is output on SDP0. */
-#define TS_SDP0_SEL_FC1 (3 << 6) /* Freq clock 1 is output on SDP0. */
-#define TS_SDP0_EN (1 << 8) /* SDP0 is assigned to Tsync. */
-#define TS_SDP1_SEL_TT0 (0 << 9) /* Target time 0 is output on SDP1. */
-#define TS_SDP1_SEL_TT1 (1 << 9) /* Target time 1 is output on SDP1. */
-#define TS_SDP1_SEL_FC0 (2 << 9) /* Freq clock 0 is output on SDP1. */
-#define TS_SDP1_SEL_FC1 (3 << 9) /* Freq clock 1 is output on SDP1. */
-#define TS_SDP1_EN (1 << 11) /* SDP1 is assigned to Tsync. */
-#define TS_SDP2_SEL_TT0 (0 << 12) /* Target time 0 is output on SDP2. */
-#define TS_SDP2_SEL_TT1 (1 << 12) /* Target time 1 is output on SDP2. */
-#define TS_SDP2_SEL_FC0 (2 << 12) /* Freq clock 0 is output on SDP2. */
-#define TS_SDP2_SEL_FC1 (3 << 12) /* Freq clock 1 is output on SDP2. */
-#define TS_SDP2_EN (1 << 14) /* SDP2 is assigned to Tsync. */
-#define TS_SDP3_SEL_TT0 (0 << 15) /* Target time 0 is output on SDP3. */
-#define TS_SDP3_SEL_TT1 (1 << 15) /* Target time 1 is output on SDP3. */
-#define TS_SDP3_SEL_FC0 (2 << 15) /* Freq clock 0 is output on SDP3. */
-#define TS_SDP3_SEL_FC1 (3 << 15) /* Freq clock 1 is output on SDP3. */
-#define TS_SDP3_EN (1 << 17) /* SDP3 is assigned to Tsync. */
+#define AUX0_SEL_SDP0 (0u << 0) /* Assign SDP0 to auxiliary time stamp 0. */
+#define AUX0_SEL_SDP1 (1u << 0) /* Assign SDP1 to auxiliary time stamp 0. */
+#define AUX0_SEL_SDP2 (2u << 0) /* Assign SDP2 to auxiliary time stamp 0. */
+#define AUX0_SEL_SDP3 (3u << 0) /* Assign SDP3 to auxiliary time stamp 0. */
+#define AUX0_TS_SDP_EN (1u << 2) /* Enable auxiliary time stamp trigger 0. */
+#define AUX1_SEL_SDP0 (0u << 3) /* Assign SDP0 to auxiliary time stamp 1. */
+#define AUX1_SEL_SDP1 (1u << 3) /* Assign SDP1 to auxiliary time stamp 1. */
+#define AUX1_SEL_SDP2 (2u << 3) /* Assign SDP2 to auxiliary time stamp 1. */
+#define AUX1_SEL_SDP3 (3u << 3) /* Assign SDP3 to auxiliary time stamp 1. */
+#define AUX1_TS_SDP_EN (1u << 5) /* Enable auxiliary time stamp trigger 1. */
+#define TS_SDP0_SEL_TT0 (0u << 6) /* Target time 0 is output on SDP0. */
+#define TS_SDP0_SEL_TT1 (1u << 6) /* Target time 1 is output on SDP0. */
+#define TS_SDP0_SEL_FC0 (2u << 6) /* Freq clock 0 is output on SDP0. */
+#define TS_SDP0_SEL_FC1 (3u << 6) /* Freq clock 1 is output on SDP0. */
+#define TS_SDP0_EN (1u << 8) /* SDP0 is assigned to Tsync. */
+#define TS_SDP1_SEL_TT0 (0u << 9) /* Target time 0 is output on SDP1. */
+#define TS_SDP1_SEL_TT1 (1u << 9) /* Target time 1 is output on SDP1. */
+#define TS_SDP1_SEL_FC0 (2u << 9) /* Freq clock 0 is output on SDP1. */
+#define TS_SDP1_SEL_FC1 (3u << 9) /* Freq clock 1 is output on SDP1. */
+#define TS_SDP1_EN (1u << 11) /* SDP1 is assigned to Tsync. */
+#define TS_SDP2_SEL_TT0 (0u << 12) /* Target time 0 is output on SDP2. */
+#define TS_SDP2_SEL_TT1 (1u << 12) /* Target time 1 is output on SDP2. */
+#define TS_SDP2_SEL_FC0 (2u << 12) /* Freq clock 0 is output on SDP2. */
+#define TS_SDP2_SEL_FC1 (3u << 12) /* Freq clock 1 is output on SDP2. */
+#define TS_SDP2_EN (1u << 14) /* SDP2 is assigned to Tsync. */
+#define TS_SDP3_SEL_TT0 (0u << 15) /* Target time 0 is output on SDP3. */
+#define TS_SDP3_SEL_TT1 (1u << 15) /* Target time 1 is output on SDP3. */
+#define TS_SDP3_SEL_FC0 (2u << 15) /* Freq clock 0 is output on SDP3. */
+#define TS_SDP3_SEL_FC1 (3u << 15) /* Freq clock 1 is output on SDP3. */
+#define TS_SDP3_EN (1u << 17) /* SDP3 is assigned to Tsync. */
#define E1000_MDICNFG_EXT_MDIO 0x80000000 /* MDI ext/int destination */
#define E1000_MDICNFG_COM_MDIO 0x40000000 /* MDI shared w/ lan 0 */
@@ -997,8 +997,8 @@
#define E1000_M88E1543_FIBER_CTRL 0x0
#define E1000_EEE_ADV_DEV_I354 7
#define E1000_EEE_ADV_ADDR_I354 60
-#define E1000_EEE_ADV_100_SUPPORTED (1 << 1) /* 100BaseTx EEE Supported */
-#define E1000_EEE_ADV_1000_SUPPORTED (1 << 2) /* 1000BaseT EEE Supported */
+#define E1000_EEE_ADV_100_SUPPORTED BIT(1) /* 100BaseTx EEE Supported */
+#define E1000_EEE_ADV_1000_SUPPORTED BIT(2) /* 1000BaseT EEE Supported */
#define E1000_PCS_STATUS_DEV_I354 3
#define E1000_PCS_STATUS_ADDR_I354 1
#define E1000_PCS_STATUS_TX_LPI_IND 0x0200 /* Tx in LPI state */
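
The conversions in this file replace signed shifted literals with BIT() or explicitly unsigned constants; shifting 1 into bit 31 of a signed int is undefined behavior in C, which the unsigned forms avoid. A small user-space illustration, with BIT() re-declared here only for the example (the kernel provides its own definition):

#include <stdio.h>

/* Same shape as the kernel macro; re-declared here only for the example. */
#define BIT(nr) (1UL << (nr))

int main(void)
{
	unsigned long a = BIT(31);	/* well defined: unsigned long shift */
	unsigned int b = 1u << 31;	/* well defined: unsigned literal */

	/* 1 << 31 would shift into the sign bit of a signed int, which is
	 * undefined behavior; that is what these conversions avoid.
	 */
	printf("%lx %x\n", a, b);
	return 0;
}
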
diff --git a/drivers/net/ethernet/intel/igb/e1000_mac.c b/drivers/net/ethernet/intel/igb/e1000_mac.c
index 07cf4fe58338..5010e2232c50 100644
--- a/drivers/net/ethernet/intel/igb/e1000_mac.c
+++ b/drivers/net/ethernet/intel/igb/e1000_mac.c
@@ -212,7 +212,7 @@ s32 igb_vfta_set(struct e1000_hw *hw, u32 vlan, u32 vind,
* bits[4-0]: which bit in the register
*/
regidx = vlan / 32;
- vfta_delta = 1 << (vlan % 32);
+ vfta_delta = BIT(vlan % 32);
vfta = adapter->shadow_vfta[regidx];
/* vfta_delta represents the difference between the current value
@@ -243,12 +243,12 @@ s32 igb_vfta_set(struct e1000_hw *hw, u32 vlan, u32 vind,
bits = rd32(E1000_VLVF(vlvf_index));
/* set the pool bit */
- bits |= 1 << (E1000_VLVF_POOLSEL_SHIFT + vind);
+ bits |= BIT(E1000_VLVF_POOLSEL_SHIFT + vind);
if (vlan_on)
goto vlvf_update;
/* clear the pool bit */
- bits ^= 1 << (E1000_VLVF_POOLSEL_SHIFT + vind);
+ bits ^= BIT(E1000_VLVF_POOLSEL_SHIFT + vind);
if (!(bits & E1000_VLVF_POOLSEL_MASK)) {
/* Clear VFTA first, then disable VLVF. Otherwise
@@ -427,7 +427,7 @@ void igb_mta_set(struct e1000_hw *hw, u32 hash_value)
mta = array_rd32(E1000_MTA, hash_reg);
- mta |= (1 << hash_bit);
+ mta |= BIT(hash_bit);
array_wr32(E1000_MTA, hash_reg, mta);
wrfl();
@@ -527,7 +527,7 @@ void igb_update_mc_addr_list(struct e1000_hw *hw,
hash_reg = (hash_value >> 5) & (hw->mac.mta_reg_count - 1);
hash_bit = hash_value & 0x1F;
- hw->mac.mta_shadow[hash_reg] |= (1 << hash_bit);
+ hw->mac.mta_shadow[hash_reg] |= BIT(hash_bit);
mc_addr_list += (ETH_ALEN);
}
diff --git a/drivers/net/ethernet/intel/igb/e1000_mbx.c b/drivers/net/ethernet/intel/igb/e1000_mbx.c
index 10f5c9e016a9..00e263f0c030 100644
--- a/drivers/net/ethernet/intel/igb/e1000_mbx.c
+++ b/drivers/net/ethernet/intel/igb/e1000_mbx.c
@@ -302,9 +302,9 @@ static s32 igb_check_for_rst_pf(struct e1000_hw *hw, u16 vf_number)
u32 vflre = rd32(E1000_VFLRE);
s32 ret_val = -E1000_ERR_MBX;
- if (vflre & (1 << vf_number)) {
+ if (vflre & BIT(vf_number)) {
ret_val = 0;
- wr32(E1000_VFLRE, (1 << vf_number));
+ wr32(E1000_VFLRE, BIT(vf_number));
hw->mbx.stats.rsts++;
}
diff --git a/drivers/net/ethernet/intel/igb/e1000_nvm.c b/drivers/net/ethernet/intel/igb/e1000_nvm.c
index e8280d0d7f02..3582c5cf8843 100644
--- a/drivers/net/ethernet/intel/igb/e1000_nvm.c
+++ b/drivers/net/ethernet/intel/igb/e1000_nvm.c
@@ -72,7 +72,7 @@ static void igb_shift_out_eec_bits(struct e1000_hw *hw, u16 data, u16 count)
u32 eecd = rd32(E1000_EECD);
u32 mask;
- mask = 0x01 << (count - 1);
+ mask = 1u << (count - 1);
if (nvm->type == e1000_nvm_eeprom_spi)
eecd |= E1000_EECD_DO;
diff --git a/drivers/net/ethernet/intel/igb/e1000_phy.h b/drivers/net/ethernet/intel/igb/e1000_phy.h
index 969a6ddafa3b..9b622b33bb5a 100644
--- a/drivers/net/ethernet/intel/igb/e1000_phy.h
+++ b/drivers/net/ethernet/intel/igb/e1000_phy.h
@@ -91,10 +91,10 @@ s32 igb_check_polarity_m88(struct e1000_hw *hw);
#define I82580_ADDR_REG 16
#define I82580_CFG_REG 22
-#define I82580_CFG_ASSERT_CRS_ON_TX (1 << 15)
-#define I82580_CFG_ENABLE_DOWNSHIFT (3 << 10) /* auto downshift 100/10 */
+#define I82580_CFG_ASSERT_CRS_ON_TX BIT(15)
+#define I82580_CFG_ENABLE_DOWNSHIFT (3u << 10) /* auto downshift 100/10 */
#define I82580_CTRL_REG 23
-#define I82580_CTRL_DOWNSHIFT_MASK (7 << 10)
+#define I82580_CTRL_DOWNSHIFT_MASK (7u << 10)
/* 82580 specific PHY registers */
#define I82580_PHY_CTRL_2 18
diff --git a/drivers/net/ethernet/intel/igb/igb.h b/drivers/net/ethernet/intel/igb/igb.h
index 9413fa61392f..b9609afa5ca3 100644
--- a/drivers/net/ethernet/intel/igb/igb.h
+++ b/drivers/net/ethernet/intel/igb/igb.h
@@ -91,6 +91,14 @@ struct igb_adapter;
#define NVM_COMB_VER_OFF 0x0083
#define NVM_COMB_VER_PTR 0x003d
+/* Transmit and receive latency (for PTP timestamps) */
+#define IGB_I210_TX_LATENCY_10 9542
+#define IGB_I210_TX_LATENCY_100 1024
+#define IGB_I210_TX_LATENCY_1000 178
+#define IGB_I210_RX_LATENCY_10 20662
+#define IGB_I210_RX_LATENCY_100 2213
+#define IGB_I210_RX_LATENCY_1000 448
+
struct vf_data_storage {
unsigned char vf_mac_addresses[ETH_ALEN];
u16 vf_mc_hashes[IGB_MAX_VF_MC_ENTRIES];
@@ -169,7 +177,7 @@ enum igb_tx_flags {
* maintain a power of two alignment we have to limit ourselves to 32K.
*/
#define IGB_MAX_TXD_PWR 15
-#define IGB_MAX_DATA_PER_TXD (1 << IGB_MAX_TXD_PWR)
+#define IGB_MAX_DATA_PER_TXD (1u << IGB_MAX_TXD_PWR)
/* Tx Descriptors needed, worst case */
#define TXD_USE_COUNT(S) DIV_ROUND_UP((S), IGB_MAX_DATA_PER_TXD)
@@ -466,21 +474,21 @@ struct igb_adapter {
u16 eee_advert;
};
-#define IGB_FLAG_HAS_MSI (1 << 0)
-#define IGB_FLAG_DCA_ENABLED (1 << 1)
-#define IGB_FLAG_QUAD_PORT_A (1 << 2)
-#define IGB_FLAG_QUEUE_PAIRS (1 << 3)
-#define IGB_FLAG_DMAC (1 << 4)
-#define IGB_FLAG_PTP (1 << 5)
-#define IGB_FLAG_RSS_FIELD_IPV4_UDP (1 << 6)
-#define IGB_FLAG_RSS_FIELD_IPV6_UDP (1 << 7)
-#define IGB_FLAG_WOL_SUPPORTED (1 << 8)
-#define IGB_FLAG_NEED_LINK_UPDATE (1 << 9)
-#define IGB_FLAG_MEDIA_RESET (1 << 10)
-#define IGB_FLAG_MAS_CAPABLE (1 << 11)
-#define IGB_FLAG_MAS_ENABLE (1 << 12)
-#define IGB_FLAG_HAS_MSIX (1 << 13)
-#define IGB_FLAG_EEE (1 << 14)
+#define IGB_FLAG_HAS_MSI BIT(0)
+#define IGB_FLAG_DCA_ENABLED BIT(1)
+#define IGB_FLAG_QUAD_PORT_A BIT(2)
+#define IGB_FLAG_QUEUE_PAIRS BIT(3)
+#define IGB_FLAG_DMAC BIT(4)
+#define IGB_FLAG_PTP BIT(5)
+#define IGB_FLAG_RSS_FIELD_IPV4_UDP BIT(6)
+#define IGB_FLAG_RSS_FIELD_IPV6_UDP BIT(7)
+#define IGB_FLAG_WOL_SUPPORTED BIT(8)
+#define IGB_FLAG_NEED_LINK_UPDATE BIT(9)
+#define IGB_FLAG_MEDIA_RESET BIT(10)
+#define IGB_FLAG_MAS_CAPABLE BIT(11)
+#define IGB_FLAG_MAS_ENABLE BIT(12)
+#define IGB_FLAG_HAS_MSIX BIT(13)
+#define IGB_FLAG_EEE BIT(14)
#define IGB_FLAG_VLAN_PROMISC BIT(15)
/* Media Auto Sense */
diff --git a/drivers/net/ethernet/intel/igb/igb_ethtool.c b/drivers/net/ethernet/intel/igb/igb_ethtool.c
index 7982243d1f9b..64e91c575a39 100644
--- a/drivers/net/ethernet/intel/igb/igb_ethtool.c
+++ b/drivers/net/ethernet/intel/igb/igb_ethtool.c
@@ -466,7 +466,7 @@ static void igb_get_regs(struct net_device *netdev,
memset(p, 0, IGB_REGS_LEN * sizeof(u32));
- regs->version = (1 << 24) | (hw->revision_id << 16) | hw->device_id;
+ regs->version = (1u << 24) | (hw->revision_id << 16) | hw->device_id;
/* General Registers */
regs_buff[0] = rd32(E1000_CTRL);
@@ -1448,7 +1448,7 @@ static int igb_intr_test(struct igb_adapter *adapter, u64 *data)
/* Test each interrupt */
for (; i < 31; i++) {
/* Interrupt to test */
- mask = 1 << i;
+ mask = BIT(i);
if (!(mask & ics_mask))
continue;
@@ -2411,19 +2411,19 @@ static int igb_get_ts_info(struct net_device *dev,
SOF_TIMESTAMPING_RAW_HARDWARE;
info->tx_types =
- (1 << HWTSTAMP_TX_OFF) |
- (1 << HWTSTAMP_TX_ON);
+ BIT(HWTSTAMP_TX_OFF) |
+ BIT(HWTSTAMP_TX_ON);
- info->rx_filters = 1 << HWTSTAMP_FILTER_NONE;
+ info->rx_filters = BIT(HWTSTAMP_FILTER_NONE);
/* 82576 does not support timestamping all packets. */
if (adapter->hw.mac.type >= e1000_82580)
- info->rx_filters |= 1 << HWTSTAMP_FILTER_ALL;
+ info->rx_filters |= BIT(HWTSTAMP_FILTER_ALL);
else
info->rx_filters |=
- (1 << HWTSTAMP_FILTER_PTP_V1_L4_SYNC) |
- (1 << HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) |
- (1 << HWTSTAMP_FILTER_PTP_V2_EVENT);
+ BIT(HWTSTAMP_FILTER_PTP_V1_L4_SYNC) |
+ BIT(HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) |
+ BIT(HWTSTAMP_FILTER_PTP_V2_EVENT);
return 0;
default:
@@ -2831,7 +2831,8 @@ static int igb_get_module_eeprom(struct net_device *netdev,
/* Read EEPROM block, SFF-8079/SFF-8472, word at a time */
for (i = 0; i < last_word - first_word + 1; i++) {
- status = igb_read_phy_reg_i2c(hw, first_word + i, &dataword[i]);
+ status = igb_read_phy_reg_i2c(hw, (first_word + i) * 2,
+ &dataword[i]);
if (status) {
/* Error occurred while reading module */
kfree(dataword);
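
In the SFF-8079/SFF-8472 loop above, the fix passes (first_word + i) * 2 to the I2C read helper rather than the bare word index. The most plausible reading is that the helper takes a byte offset while the loop walks 16-bit words, so each word index is doubled; the sketch below models only that word-to-byte mapping, and read_word_at_byte_offset() plus the EEPROM contents are invented for the illustration.

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical byte-addressed EEPROM image (values are arbitrary). */
    static const uint8_t eeprom[16] = {
        0x03, 0x04, 0x07, 0x10, 0x00, 0x00, 0x01, 0x20,
        0x40, 0x0c, 0x05, 0x1e, 0x00, 0x00, 0x00, 0x00,
    };

    /* Stand-in for a byte-addressed read helper. */
    static uint16_t read_word_at_byte_offset(unsigned int byte_off)
    {
        return (uint16_t)((eeprom[byte_off] << 8) | eeprom[byte_off + 1]);
    }

    int main(void)
    {
        unsigned int first_word = 2, last_word = 5;
        uint16_t dataword[8];

        /* The loop iterates over words, so the byte offset is word * 2. */
        for (unsigned int i = 0; i < last_word - first_word + 1; i++)
            dataword[i] = read_word_at_byte_offset((first_word + i) * 2);

        for (unsigned int i = 0; i < last_word - first_word + 1; i++)
            printf("word %u = 0x%04x\n", first_word + i, dataword[i]);
        return 0;
    }
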
diff --git a/drivers/net/ethernet/intel/igb/igb_main.c b/drivers/net/ethernet/intel/igb/igb_main.c
index 55a1405cb2a1..ef3d642f5ff2 100644
--- a/drivers/net/ethernet/intel/igb/igb_main.c
+++ b/drivers/net/ethernet/intel/igb/igb_main.c
@@ -50,6 +50,7 @@
#include <linux/aer.h>
#include <linux/prefetch.h>
#include <linux/pm_runtime.h>
+#include <linux/etherdevice.h>
#ifdef CONFIG_IGB_DCA
#include <linux/dca.h>
#endif
@@ -150,7 +151,7 @@ static void igb_update_dca(struct igb_q_vector *);
static void igb_setup_dca(struct igb_adapter *);
#endif /* CONFIG_IGB_DCA */
static int igb_poll(struct napi_struct *, int);
-static bool igb_clean_tx_irq(struct igb_q_vector *);
+static bool igb_clean_tx_irq(struct igb_q_vector *, int);
static int igb_clean_rx_irq(struct igb_q_vector *, int);
static int igb_ioctl(struct net_device *, struct ifreq *, int cmd);
static void igb_tx_timeout(struct net_device *);
@@ -382,7 +383,7 @@ static void igb_dump(struct igb_adapter *adapter)
dev_info(&adapter->pdev->dev, "Net device Info\n");
pr_info("Device Name state trans_start last_rx\n");
pr_info("%-15s %016lX %016lX %016lX\n", netdev->name,
- netdev->state, netdev->trans_start, netdev->last_rx);
+ netdev->state, dev_trans_start(netdev), netdev->last_rx);
}
/* Print Registers */
@@ -835,7 +836,7 @@ static void igb_assign_vector(struct igb_q_vector *q_vector, int msix_vector)
igb_write_ivar(hw, msix_vector,
tx_queue & 0x7,
((tx_queue & 0x8) << 1) + 8);
- q_vector->eims_value = 1 << msix_vector;
+ q_vector->eims_value = BIT(msix_vector);
break;
case e1000_82580:
case e1000_i350:
@@ -856,7 +857,7 @@ static void igb_assign_vector(struct igb_q_vector *q_vector, int msix_vector)
igb_write_ivar(hw, msix_vector,
tx_queue >> 1,
((tx_queue & 0x1) << 4) + 8);
- q_vector->eims_value = 1 << msix_vector;
+ q_vector->eims_value = BIT(msix_vector);
break;
default:
BUG();
@@ -918,7 +919,7 @@ static void igb_configure_msix(struct igb_adapter *adapter)
E1000_GPIE_NSICR);
/* enable msix_other interrupt */
- adapter->eims_other = 1 << vector;
+ adapter->eims_other = BIT(vector);
tmp = (vector++ | E1000_IVAR_VALID) << 8;
wr32(E1000_IVAR_MISC, tmp);
@@ -2086,6 +2087,40 @@ static int igb_ndo_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
return ndo_dflt_fdb_add(ndm, tb, dev, addr, vid, flags);
}
+#define IGB_MAX_MAC_HDR_LEN 127
+#define IGB_MAX_NETWORK_HDR_LEN 511
+
+static netdev_features_t
+igb_features_check(struct sk_buff *skb, struct net_device *dev,
+ netdev_features_t features)
+{
+ unsigned int network_hdr_len, mac_hdr_len;
+
+ /* Make certain the headers can be described by a context descriptor */
+ mac_hdr_len = skb_network_header(skb) - skb->data;
+ if (unlikely(mac_hdr_len > IGB_MAX_MAC_HDR_LEN))
+ return features & ~(NETIF_F_HW_CSUM |
+ NETIF_F_SCTP_CRC |
+ NETIF_F_HW_VLAN_CTAG_TX |
+ NETIF_F_TSO |
+ NETIF_F_TSO6);
+
+ network_hdr_len = skb_checksum_start(skb) - skb_network_header(skb);
+ if (unlikely(network_hdr_len > IGB_MAX_NETWORK_HDR_LEN))
+ return features & ~(NETIF_F_HW_CSUM |
+ NETIF_F_SCTP_CRC |
+ NETIF_F_TSO |
+ NETIF_F_TSO6);
+
+ /* We can only support IPV4 TSO in tunnels if we can mangle the
+ * inner IP ID field, so strip TSO if MANGLEID is not supported.
+ */
+ if (skb->encapsulation && !(features & NETIF_F_TSO_MANGLEID))
+ features &= ~NETIF_F_TSO;
+
+ return features;
+}
+
static const struct net_device_ops igb_netdev_ops = {
.ndo_open = igb_open,
.ndo_stop = igb_close,
@@ -2110,7 +2145,7 @@ static const struct net_device_ops igb_netdev_ops = {
.ndo_fix_features = igb_fix_features,
.ndo_set_features = igb_set_features,
.ndo_fdb_add = igb_ndo_fdb_add,
- .ndo_features_check = passthru_features_check,
+ .ndo_features_check = igb_features_check,
};
/**
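
igb_features_check() above (and its igbvf twin later in the series) drops the checksum/TSO offloads when the MAC or network header is too long to describe in a context descriptor, and drops TSO on encapsulated frames unless the inner IP ID may be mangled. A simplified user-space model of that gating, with the feature bits reduced to arbitrary integer flags and the header lengths passed in directly (the two length limits are the ones from the hunk):

    #include <stdio.h>

    /* Simplified stand-ins for netdev feature bits (values are arbitrary). */
    #define F_HW_CSUM       (1u << 0)
    #define F_SCTP_CRC      (1u << 1)
    #define F_VLAN_TX       (1u << 2)
    #define F_TSO           (1u << 3)
    #define F_TSO6          (1u << 4)
    #define F_TSO_MANGLEID  (1u << 5)

    #define MAX_MAC_HDR_LEN     127
    #define MAX_NETWORK_HDR_LEN 511

    static unsigned int features_check(unsigned int features,
                                       unsigned int mac_hdr_len,
                                       unsigned int network_hdr_len,
                                       int encapsulated)
    {
        /* Headers too long for a context descriptor: fall back to software. */
        if (mac_hdr_len > MAX_MAC_HDR_LEN)
            return features & ~(F_HW_CSUM | F_SCTP_CRC | F_VLAN_TX |
                                F_TSO | F_TSO6);
        if (network_hdr_len > MAX_NETWORK_HDR_LEN)
            return features & ~(F_HW_CSUM | F_SCTP_CRC | F_TSO | F_TSO6);

        /* Tunnelled TSO needs the inner IP ID to be mangled. */
        if (encapsulated && !(features & F_TSO_MANGLEID))
            features &= ~F_TSO;

        return features;
    }

    int main(void)
    {
        unsigned int all = F_HW_CSUM | F_SCTP_CRC | F_VLAN_TX | F_TSO | F_TSO6;

        printf("normal frame:        0x%02x\n", features_check(all, 14, 20, 0));
        printf("oversize MAC hdr:    0x%02x\n", features_check(all, 200, 20, 0));
        printf("tunnel, no MANGLEID: 0x%02x\n", features_check(all, 14, 20, 1));
        return 0;
    }
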
@@ -2376,38 +2411,43 @@ static int igb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
NETIF_F_TSO6 |
NETIF_F_RXHASH |
NETIF_F_RXCSUM |
- NETIF_F_HW_CSUM |
- NETIF_F_HW_VLAN_CTAG_RX |
- NETIF_F_HW_VLAN_CTAG_TX;
+ NETIF_F_HW_CSUM;
if (hw->mac.type >= e1000_82576)
netdev->features |= NETIF_F_SCTP_CRC;
+#define IGB_GSO_PARTIAL_FEATURES (NETIF_F_GSO_GRE | \
+ NETIF_F_GSO_GRE_CSUM | \
+ NETIF_F_GSO_IPXIP4 | \
+ NETIF_F_GSO_IPXIP6 | \
+ NETIF_F_GSO_UDP_TUNNEL | \
+ NETIF_F_GSO_UDP_TUNNEL_CSUM)
+
+ netdev->gso_partial_features = IGB_GSO_PARTIAL_FEATURES;
+ netdev->features |= NETIF_F_GSO_PARTIAL | IGB_GSO_PARTIAL_FEATURES;
+
/* copy netdev features into list of user selectable features */
- netdev->hw_features |= netdev->features;
- netdev->hw_features |= NETIF_F_RXALL;
+ netdev->hw_features |= netdev->features |
+ NETIF_F_HW_VLAN_CTAG_RX |
+ NETIF_F_HW_VLAN_CTAG_TX |
+ NETIF_F_RXALL;
if (hw->mac.type >= e1000_i350)
netdev->hw_features |= NETIF_F_NTUPLE;
- /* set this bit last since it cannot be part of hw_features */
- netdev->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
-
- netdev->vlan_features |= NETIF_F_SG |
- NETIF_F_TSO |
- NETIF_F_TSO6 |
- NETIF_F_HW_CSUM |
- NETIF_F_SCTP_CRC;
+ if (pci_using_dac)
+ netdev->features |= NETIF_F_HIGHDMA;
+ netdev->vlan_features |= netdev->features | NETIF_F_TSO_MANGLEID;
netdev->mpls_features |= NETIF_F_HW_CSUM;
- netdev->hw_enc_features |= NETIF_F_HW_CSUM;
+ netdev->hw_enc_features |= netdev->vlan_features;
- netdev->priv_flags |= IFF_SUPP_NOFCS;
+ /* set this bit last since it cannot be part of vlan_features */
+ netdev->features |= NETIF_F_HW_VLAN_CTAG_FILTER |
+ NETIF_F_HW_VLAN_CTAG_RX |
+ NETIF_F_HW_VLAN_CTAG_TX;
- if (pci_using_dac) {
- netdev->features |= NETIF_F_HIGHDMA;
- netdev->vlan_features |= NETIF_F_HIGHDMA;
- }
+ netdev->priv_flags |= IFF_SUPP_NOFCS;
netdev->priv_flags |= IFF_UNICAST_FLT;
@@ -2442,9 +2482,11 @@ static int igb_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
break;
}
- /* copy the MAC address out of the NVM */
- if (hw->mac.ops.read_mac_addr(hw))
- dev_err(&pdev->dev, "NVM Read Error\n");
+ if (eth_platform_get_mac_address(&pdev->dev, hw->mac.addr)) {
+ /* copy the MAC address out of the NVM */
+ if (hw->mac.ops.read_mac_addr(hw))
+ dev_err(&pdev->dev, "NVM Read Error\n");
+ }
memcpy(netdev->dev_addr, hw->mac.addr, netdev->addr_len);
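
The probe change above prefers a platform-supplied MAC address (eth_platform_get_mac_address()) and only falls back to the NVM read when none is available. A minimal sketch of that fallback shape; both helpers below and their return conventions are invented for the illustration:

    #include <stdio.h>
    #include <string.h>

    /* Stubbed address sources; names and return conventions are invented
     * for this sketch (0 = success, non-zero = not available).
     */
    static int platform_get_mac(unsigned char *addr)
    {
        (void)addr;
        return -1;                      /* pretend no platform-provided address */
    }

    static int nvm_read_mac(unsigned char *addr)
    {
        const unsigned char example[6] = { 0x02, 0x00, 0x00, 0xab, 0xcd, 0xef };

        memcpy(addr, example, sizeof(example));
        return 0;
    }

    int main(void)
    {
        unsigned char mac[6] = { 0 };

        /* Prefer the platform address; fall back to the NVM copy. */
        if (platform_get_mac(mac)) {
            if (nvm_read_mac(mac))
                fprintf(stderr, "no MAC address available\n");
        }

        printf("MAC %02x:%02x:%02x:%02x:%02x:%02x\n",
               mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
        return 0;
    }
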
@@ -4061,7 +4103,7 @@ static int igb_vlan_promisc_enable(struct igb_adapter *adapter)
for (i = E1000_VLVF_ARRAY_SIZE; --i;) {
u32 vlvf = rd32(E1000_VLVF(i));
- vlvf |= 1 << pf_id;
+ vlvf |= BIT(pf_id);
wr32(E1000_VLVF(i), vlvf);
}
@@ -4088,7 +4130,7 @@ static void igb_scrub_vfta(struct igb_adapter *adapter, u32 vfta_offset)
/* guarantee that we don't scrub out management VLAN */
vid = adapter->mng_vlan_id;
if (vid >= vid_start && vid < vid_end)
- vfta[(vid - vid_start) / 32] |= 1 << (vid % 32);
+ vfta[(vid - vid_start) / 32] |= BIT(vid % 32);
if (!adapter->vfs_allocated_count)
goto set_vfta;
@@ -4107,7 +4149,7 @@ static void igb_scrub_vfta(struct igb_adapter *adapter, u32 vfta_offset)
if (vlvf & E1000_VLVF_VLANID_ENABLE) {
/* record VLAN ID in VFTA */
- vfta[(vid - vid_start) / 32] |= 1 << (vid % 32);
+ vfta[(vid - vid_start) / 32] |= BIT(vid % 32);
/* if PF is part of this then continue */
if (test_bit(vid, adapter->active_vlans))
@@ -4115,7 +4157,7 @@ static void igb_scrub_vfta(struct igb_adapter *adapter, u32 vfta_offset)
}
/* remove PF from the pool */
- bits = ~(1 << pf_id);
+ bits = ~BIT(pf_id);
bits &= rd32(E1000_VLVF(i));
wr32(E1000_VLVF(i), bits);
}
@@ -4273,13 +4315,13 @@ static void igb_spoof_check(struct igb_adapter *adapter)
return;
for (j = 0; j < adapter->vfs_allocated_count; j++) {
- if (adapter->wvbr & (1 << j) ||
- adapter->wvbr & (1 << (j + IGB_STAGGERED_QUEUE_OFFSET))) {
+ if (adapter->wvbr & BIT(j) ||
+ adapter->wvbr & BIT(j + IGB_STAGGERED_QUEUE_OFFSET)) {
dev_warn(&adapter->pdev->dev,
"Spoof event(s) detected on VF %d\n", j);
adapter->wvbr &=
- ~((1 << j) |
- (1 << (j + IGB_STAGGERED_QUEUE_OFFSET)));
+ ~(BIT(j) |
+ BIT(j + IGB_STAGGERED_QUEUE_OFFSET));
}
}
}
@@ -4839,9 +4881,18 @@ static int igb_tso(struct igb_ring *tx_ring,
struct igb_tx_buffer *first,
u8 *hdr_len)
{
+ u32 vlan_macip_lens, type_tucmd, mss_l4len_idx;
struct sk_buff *skb = first->skb;
- u32 vlan_macip_lens, type_tucmd;
- u32 mss_l4len_idx, l4len;
+ union {
+ struct iphdr *v4;
+ struct ipv6hdr *v6;
+ unsigned char *hdr;
+ } ip;
+ union {
+ struct tcphdr *tcp;
+ unsigned char *hdr;
+ } l4;
+ u32 paylen, l4_offset;
int err;
if (skb->ip_summed != CHECKSUM_PARTIAL)
@@ -4854,45 +4905,52 @@ static int igb_tso(struct igb_ring *tx_ring,
if (err < 0)
return err;
+ ip.hdr = skb_network_header(skb);
+ l4.hdr = skb_checksum_start(skb);
+
/* ADV DTYP TUCMD MKRLOC/ISCSIHEDLEN */
type_tucmd = E1000_ADVTXD_TUCMD_L4T_TCP;
- if (first->protocol == htons(ETH_P_IP)) {
- struct iphdr *iph = ip_hdr(skb);
- iph->tot_len = 0;
- iph->check = 0;
- tcp_hdr(skb)->check = ~csum_tcpudp_magic(iph->saddr,
- iph->daddr, 0,
- IPPROTO_TCP,
- 0);
+ /* initialize outer IP header fields */
+ if (ip.v4->version == 4) {
+ /* IP header will have to cancel out any data that
+ * is not a part of the outer IP header
+ */
+ ip.v4->check = csum_fold(csum_add(lco_csum(skb),
+ csum_unfold(l4.tcp->check)));
type_tucmd |= E1000_ADVTXD_TUCMD_IPV4;
+
+ ip.v4->tot_len = 0;
first->tx_flags |= IGB_TX_FLAGS_TSO |
IGB_TX_FLAGS_CSUM |
IGB_TX_FLAGS_IPV4;
- } else if (skb_is_gso_v6(skb)) {
- ipv6_hdr(skb)->payload_len = 0;
- tcp_hdr(skb)->check = ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
- &ipv6_hdr(skb)->daddr,
- 0, IPPROTO_TCP, 0);
+ } else {
+ ip.v6->payload_len = 0;
first->tx_flags |= IGB_TX_FLAGS_TSO |
IGB_TX_FLAGS_CSUM;
}
- /* compute header lengths */
- l4len = tcp_hdrlen(skb);
- *hdr_len = skb_transport_offset(skb) + l4len;
+ /* determine offset of inner transport header */
+ l4_offset = l4.hdr - skb->data;
+
+ /* compute length of segmentation header */
+ *hdr_len = (l4.tcp->doff * 4) + l4_offset;
+
+ /* remove payload length from inner checksum */
+ paylen = skb->len - l4_offset;
+ csum_replace_by_diff(&l4.tcp->check, htonl(paylen));
/* update gso size and bytecount with header size */
first->gso_segs = skb_shinfo(skb)->gso_segs;
first->bytecount += (first->gso_segs - 1) * *hdr_len;
/* MSS L4LEN IDX */
- mss_l4len_idx = l4len << E1000_ADVTXD_L4LEN_SHIFT;
+ mss_l4len_idx = (*hdr_len - l4_offset) << E1000_ADVTXD_L4LEN_SHIFT;
mss_l4len_idx |= skb_shinfo(skb)->gso_size << E1000_ADVTXD_MSS_SHIFT;
/* VLAN MACLEN IPLEN */
- vlan_macip_lens = skb_network_header_len(skb);
- vlan_macip_lens |= skb_network_offset(skb) << E1000_ADVTXD_MACLEN_SHIFT;
+ vlan_macip_lens = l4.hdr - ip.hdr;
+ vlan_macip_lens |= (ip.hdr - skb->data) << E1000_ADVTXD_MACLEN_SHIFT;
vlan_macip_lens |= first->tx_flags & IGB_TX_FLAGS_VLAN_MASK;
igb_tx_ctxtdesc(tx_ring, vlan_macip_lens, type_tucmd, mss_l4len_idx);
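
The reworked igb_tso() no longer rebuilds the pseudo-header checksum from scratch; it strips the payload length out of the checksum the stack already stored in l4.tcp->check (the csum_replace_by_diff() call above). The arithmetic behind that is plain one's-complement subtraction: removing a field is adding its 16-bit complement and folding the carry. Below is a small user-space check of that identity on made-up pseudo-header words, not a re-implementation of the kernel helper:

    #include <stdint.h>
    #include <stdio.h>

    /* Fold a 32-bit accumulator into a 16-bit one's-complement sum. */
    static uint16_t fold(uint32_t sum)
    {
        while (sum >> 16)
            sum = (sum & 0xffff) + (sum >> 16);
        return (uint16_t)sum;
    }

    int main(void)
    {
        /* Hypothetical pseudo-header words, plus a payload length field. */
        const uint16_t addr_and_proto[] = { 0xc0a8, 0x0001, 0xc0a8, 0x0002, 0x0006 };
        const uint16_t paylen = 0x05dc;                 /* 1500 bytes */
        uint32_t with_len = 0, without_len = 0;

        for (unsigned int i = 0; i < 5; i++) {
            with_len += addr_and_proto[i];
            without_len += addr_and_proto[i];
        }
        with_len += paylen;

        uint16_t a = fold(with_len);
        uint16_t b = fold(without_len);

        /* Remove the length afterwards: add its one's complement and refold. */
        uint16_t removed = fold((uint32_t)a + (uint16_t)~paylen);

        printf("with length:    0x%04x\n", a);
        printf("without length: 0x%04x\n", b);
        printf("removed later:  0x%04x (matches: %s)\n",
               removed, removed == b ? "yes" : "no");
        return 0;
    }
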
@@ -5960,11 +6018,11 @@ static void igb_clear_vf_vfta(struct igb_adapter *adapter, u32 vf)
/* create mask for VF and other pools */
pool_mask = E1000_VLVF_POOLSEL_MASK;
- vlvf_mask = 1 << (E1000_VLVF_POOLSEL_SHIFT + vf);
+ vlvf_mask = BIT(E1000_VLVF_POOLSEL_SHIFT + vf);
/* drop PF from pool bits */
- pool_mask &= ~(1 << (E1000_VLVF_POOLSEL_SHIFT +
- adapter->vfs_allocated_count));
+ pool_mask &= ~BIT(E1000_VLVF_POOLSEL_SHIFT +
+ adapter->vfs_allocated_count);
/* Find the vlan filter for this id */
for (i = E1000_VLVF_ARRAY_SIZE; i--;) {
@@ -5987,7 +6045,7 @@ static void igb_clear_vf_vfta(struct igb_adapter *adapter, u32 vf)
goto update_vlvf;
vid = vlvf & E1000_VLVF_VLANID_MASK;
- vfta_mask = 1 << (vid % 32);
+ vfta_mask = BIT(vid % 32);
/* clear bit from VFTA */
vfta = adapter->shadow_vfta[vid / 32];
@@ -6024,7 +6082,7 @@ static int igb_find_vlvf_entry(struct e1000_hw *hw, u32 vlan)
return idx;
}
-void igb_update_pf_vlvf(struct igb_adapter *adapter, u32 vid)
+static void igb_update_pf_vlvf(struct igb_adapter *adapter, u32 vid)
{
struct e1000_hw *hw = &adapter->hw;
u32 bits, pf_id;
@@ -6038,13 +6096,13 @@ void igb_update_pf_vlvf(struct igb_adapter *adapter, u32 vid)
* entry other than the PF.
*/
pf_id = adapter->vfs_allocated_count + E1000_VLVF_POOLSEL_SHIFT;
- bits = ~(1 << pf_id) & E1000_VLVF_POOLSEL_MASK;
+ bits = ~BIT(pf_id) & E1000_VLVF_POOLSEL_MASK;
bits &= rd32(E1000_VLVF(idx));
/* Disable the filter so this falls into the default pool. */
if (!bits) {
if (adapter->flags & IGB_FLAG_VLAN_PROMISC)
- wr32(E1000_VLVF(idx), 1 << pf_id);
+ wr32(E1000_VLVF(idx), BIT(pf_id));
else
wr32(E1000_VLVF(idx), 0);
}
@@ -6228,9 +6286,9 @@ static void igb_vf_reset_msg(struct igb_adapter *adapter, u32 vf)
/* enable transmit and receive for vf */
reg = rd32(E1000_VFTE);
- wr32(E1000_VFTE, reg | (1 << vf));
+ wr32(E1000_VFTE, reg | BIT(vf));
reg = rd32(E1000_VFRE);
- wr32(E1000_VFRE, reg | (1 << vf));
+ wr32(E1000_VFRE, reg | BIT(vf));
adapter->vf_data[vf].flags |= IGB_VF_FLAG_CTS;
@@ -6522,13 +6580,14 @@ static int igb_poll(struct napi_struct *napi, int budget)
igb_update_dca(q_vector);
#endif
if (q_vector->tx.ring)
- clean_complete = igb_clean_tx_irq(q_vector);
+ clean_complete = igb_clean_tx_irq(q_vector, budget);
if (q_vector->rx.ring) {
int cleaned = igb_clean_rx_irq(q_vector, budget);
work_done += cleaned;
- clean_complete &= (cleaned < budget);
+ if (cleaned >= budget)
+ clean_complete = false;
}
/* If all work not completed, return budget and keep polling */
@@ -6545,10 +6604,11 @@ static int igb_poll(struct napi_struct *napi, int budget)
/**
* igb_clean_tx_irq - Reclaim resources after transmit completes
* @q_vector: pointer to q_vector containing needed info
+ * @napi_budget: Used to determine if we are in netpoll
*
* returns true if ring is completely cleaned
**/
-static bool igb_clean_tx_irq(struct igb_q_vector *q_vector)
+static bool igb_clean_tx_irq(struct igb_q_vector *q_vector, int napi_budget)
{
struct igb_adapter *adapter = q_vector->adapter;
struct igb_ring *tx_ring = q_vector->tx.ring;
@@ -6587,7 +6647,7 @@ static bool igb_clean_tx_irq(struct igb_q_vector *q_vector)
total_packets += tx_buffer->gso_segs;
/* free the skb */
- dev_consume_skb_any(tx_buffer->skb);
+ napi_consume_skb(tx_buffer->skb, napi_budget);
/* unmap skb header data */
dma_unmap_single(tx_ring->dev,
@@ -7574,7 +7634,6 @@ static int igb_resume(struct device *dev)
if (igb_init_interrupt_scheme(adapter, true)) {
dev_err(&pdev->dev, "Unable to allocate memory for queues\n");
- rtnl_unlock();
return -ENOMEM;
}
@@ -7845,11 +7904,13 @@ static void igb_rar_set_qsel(struct igb_adapter *adapter, u8 *addr, u32 index,
struct e1000_hw *hw = &adapter->hw;
u32 rar_low, rar_high;
- /* HW expects these in little endian so we reverse the byte order
- * from network order (big endian) to CPU endian
+ /* HW expects these to be in network order when they are plugged
+ * into the registers which are little endian. In order to guarantee
+ * that ordering we need to do an leXX_to_cpup here in order to be
+ * ready for the byteswap that occurs with writel
*/
- rar_low = le32_to_cpup((__be32 *)(addr));
- rar_high = le16_to_cpup((__be16 *)(addr + 4));
+ rar_low = le32_to_cpup((__le32 *)(addr));
+ rar_high = le16_to_cpup((__le16 *)(addr + 4));
/* Indicate to hardware the Address is Valid. */
rar_high |= E1000_RAH_AV;
@@ -7921,7 +7982,7 @@ static void igb_set_vf_rate_limit(struct e1000_hw *hw, int vf, int tx_rate,
/* Calculate the rate factor values to set */
rf_int = link_speed / tx_rate;
rf_dec = (link_speed - (rf_int * tx_rate));
- rf_dec = (rf_dec * (1 << E1000_RTTBCNRC_RF_INT_SHIFT)) /
+ rf_dec = (rf_dec * BIT(E1000_RTTBCNRC_RF_INT_SHIFT)) /
tx_rate;
bcnrc_val = E1000_RTTBCNRC_RS_ENA;
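
The rate-limit hunk above splits link_speed / tx_rate into an integer part and a fraction scaled by 2^E1000_RTTBCNRC_RF_INT_SHIFT before the value is programmed. A short sketch of that fixed-point split; 14 fractional bits are assumed here purely so the sketch compiles on its own, the driver takes the real shift from the register definition:

    #include <stdio.h>

    /* Fractional width assumed for this sketch; the driver takes it from
     * the E1000_RTTBCNRC_RF_INT_SHIFT register definition.
     */
    #define RF_INT_SHIFT 14

    int main(void)
    {
        unsigned int link_speed = 1000;     /* Mbps */
        unsigned int tx_rate = 300;         /* requested VF rate, Mbps */

        /* Integer part of link_speed / tx_rate ... */
        unsigned int rf_int = link_speed / tx_rate;
        /* ...and the remainder converted to a 2^RF_INT_SHIFT-scaled fraction. */
        unsigned int rf_dec = link_speed - rf_int * tx_rate;

        rf_dec = (rf_dec * (1u << RF_INT_SHIFT)) / tx_rate;

        printf("rate factor = %u + %u/%u (~%.4f)\n",
               rf_int, rf_dec, 1u << RF_INT_SHIFT,
               rf_int + (double)rf_dec / (1u << RF_INT_SHIFT));
        return 0;
    }
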
@@ -8011,11 +8072,11 @@ static int igb_ndo_set_vf_spoofchk(struct net_device *netdev, int vf,
reg_offset = (hw->mac.type == e1000_82576) ? E1000_DTXSWC : E1000_TXSWC;
reg_val = rd32(reg_offset);
if (setting)
- reg_val |= ((1 << vf) |
- (1 << (vf + E1000_DTXSWC_VLAN_SPOOF_SHIFT)));
+ reg_val |= (BIT(vf) |
+ BIT(vf + E1000_DTXSWC_VLAN_SPOOF_SHIFT));
else
- reg_val &= ~((1 << vf) |
- (1 << (vf + E1000_DTXSWC_VLAN_SPOOF_SHIFT)));
+ reg_val &= ~(BIT(vf) |
+ BIT(vf + E1000_DTXSWC_VLAN_SPOOF_SHIFT));
wr32(reg_offset, reg_val);
adapter->vf_data[vf].spoofchk_enabled = setting;
diff --git a/drivers/net/ethernet/intel/igb/igb_ptp.c b/drivers/net/ethernet/intel/igb/igb_ptp.c
index 22a8a29895b4..f097c5a8ab93 100644
--- a/drivers/net/ethernet/intel/igb/igb_ptp.c
+++ b/drivers/net/ethernet/intel/igb/igb_ptp.c
@@ -69,9 +69,9 @@
#define IGB_SYSTIM_OVERFLOW_PERIOD (HZ * 60 * 9)
#define IGB_PTP_TX_TIMEOUT (HZ * 15)
-#define INCPERIOD_82576 (1 << E1000_TIMINCA_16NS_SHIFT)
-#define INCVALUE_82576_MASK ((1 << E1000_TIMINCA_16NS_SHIFT) - 1)
-#define INCVALUE_82576 (16 << IGB_82576_TSYNC_SHIFT)
+#define INCPERIOD_82576 BIT(E1000_TIMINCA_16NS_SHIFT)
+#define INCVALUE_82576_MASK GENMASK(E1000_TIMINCA_16NS_SHIFT - 1, 0)
+#define INCVALUE_82576 (16u << IGB_82576_TSYNC_SHIFT)
#define IGB_NBITS_82580 40
static void igb_ptp_tx_hwtstamp(struct igb_adapter *adapter);
@@ -722,11 +722,29 @@ static void igb_ptp_tx_hwtstamp(struct igb_adapter *adapter)
struct e1000_hw *hw = &adapter->hw;
struct skb_shared_hwtstamps shhwtstamps;
u64 regval;
+ int adjust = 0;
regval = rd32(E1000_TXSTMPL);
regval |= (u64)rd32(E1000_TXSTMPH) << 32;
igb_ptp_systim_to_hwtstamp(adapter, &shhwtstamps, regval);
+ /* adjust timestamp for the TX latency based on link speed */
+ if (adapter->hw.mac.type == e1000_i210) {
+ switch (adapter->link_speed) {
+ case SPEED_10:
+ adjust = IGB_I210_TX_LATENCY_10;
+ break;
+ case SPEED_100:
+ adjust = IGB_I210_TX_LATENCY_100;
+ break;
+ case SPEED_1000:
+ adjust = IGB_I210_TX_LATENCY_1000;
+ break;
+ }
+ }
+
+ shhwtstamps.hwtstamp = ktime_sub_ns(shhwtstamps.hwtstamp, adjust);
+
skb_tstamp_tx(adapter->ptp_tx_skb, &shhwtstamps);
dev_kfree_skb_any(adapter->ptp_tx_skb);
adapter->ptp_tx_skb = NULL;
@@ -771,6 +789,7 @@ void igb_ptp_rx_rgtstamp(struct igb_q_vector *q_vector,
struct igb_adapter *adapter = q_vector->adapter;
struct e1000_hw *hw = &adapter->hw;
u64 regval;
+ int adjust = 0;
/* If this bit is set, then the RX registers contain the time stamp. No
* other packet will be time stamped until we read these registers, so
@@ -790,6 +809,23 @@ void igb_ptp_rx_rgtstamp(struct igb_q_vector *q_vector,
igb_ptp_systim_to_hwtstamp(adapter, skb_hwtstamps(skb), regval);
+ /* adjust timestamp for the RX latency based on link speed */
+ if (adapter->hw.mac.type == e1000_i210) {
+ switch (adapter->link_speed) {
+ case SPEED_10:
+ adjust = IGB_I210_RX_LATENCY_10;
+ break;
+ case SPEED_100:
+ adjust = IGB_I210_RX_LATENCY_100;
+ break;
+ case SPEED_1000:
+ adjust = IGB_I210_RX_LATENCY_1000;
+ break;
+ }
+ }
+ skb_hwtstamps(skb)->hwtstamp =
+ ktime_add_ns(skb_hwtstamps(skb)->hwtstamp, adjust);
+
/* Update the last_rx_timestamp timer in order to enable watchdog check
* for error case of latched timestamp on a dropped packet.
*/
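
The igb_ptp.c hunks above compensate i210 timestamps for a fixed per-speed PHY latency: the value is subtracted from transmit timestamps and added to receive timestamps before they reach the stack. A small sketch of the speed-keyed transmit adjustment on a raw nanosecond value, with the constants copied from the igb.h hunk only so the sketch is self-contained:

    #include <stdint.h>
    #include <stdio.h>

    /* Latency constants from the igb.h hunk (nanoseconds). */
    #define TX_LATENCY_10    9542
    #define TX_LATENCY_100   1024
    #define TX_LATENCY_1000   178

    static uint64_t adjust_tx_timestamp(uint64_t hw_ns, int link_speed_mbps)
    {
        unsigned int adjust = 0;

        switch (link_speed_mbps) {
        case 10:   adjust = TX_LATENCY_10;   break;
        case 100:  adjust = TX_LATENCY_100;  break;
        case 1000: adjust = TX_LATENCY_1000; break;
        }

        /* Transmit timestamps are moved earlier by the latency value. */
        return hw_ns - adjust;
    }

    int main(void)
    {
        uint64_t raw = 1000000000ULL;       /* an arbitrary raw timestamp */

        printf("raw %llu -> adjusted %llu (1000 Mb/s)\n",
               (unsigned long long)raw,
               (unsigned long long)adjust_tx_timestamp(raw, 1000));
        return 0;
    }
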
diff --git a/drivers/net/ethernet/intel/igbvf/defines.h b/drivers/net/ethernet/intel/igbvf/defines.h
index ae3f28332fa0..ee1ef08d7fc4 100644
--- a/drivers/net/ethernet/intel/igbvf/defines.h
+++ b/drivers/net/ethernet/intel/igbvf/defines.h
@@ -113,7 +113,7 @@
#define E1000_RXDCTL_QUEUE_ENABLE 0x02000000 /* Enable specific Rx Que */
/* Direct Cache Access (DCA) definitions */
-#define E1000_DCA_TXCTRL_TX_WB_RO_EN (1 << 11) /* Tx Desc writeback RO bit */
+#define E1000_DCA_TXCTRL_TX_WB_RO_EN BIT(11) /* Tx Desc writeback RO bit */
#define E1000_VF_INIT_TIMEOUT 200 /* Number of retries to clear RSTI */
diff --git a/drivers/net/ethernet/intel/igbvf/ethtool.c b/drivers/net/ethernet/intel/igbvf/ethtool.c
index b74ce53d7b52..8dea1b1367ef 100644
--- a/drivers/net/ethernet/intel/igbvf/ethtool.c
+++ b/drivers/net/ethernet/intel/igbvf/ethtool.c
@@ -154,7 +154,8 @@ static void igbvf_get_regs(struct net_device *netdev,
memset(p, 0, IGBVF_REGS_LEN * sizeof(u32));
- regs->version = (1 << 24) | (adapter->pdev->revision << 16) |
+ regs->version = (1u << 24) |
+ (adapter->pdev->revision << 16) |
adapter->pdev->device;
regs_buff[0] = er32(CTRL);
diff --git a/drivers/net/ethernet/intel/igbvf/igbvf.h b/drivers/net/ethernet/intel/igbvf/igbvf.h
index f166baab8d7e..6f4290d6dc9f 100644
--- a/drivers/net/ethernet/intel/igbvf/igbvf.h
+++ b/drivers/net/ethernet/intel/igbvf/igbvf.h
@@ -287,8 +287,8 @@ struct igbvf_info {
};
/* hardware capability, feature, and workaround flags */
-#define IGBVF_FLAG_RX_CSUM_DISABLED (1 << 0)
-#define IGBVF_FLAG_RX_LB_VLAN_BSWAP (1 << 1)
+#define IGBVF_FLAG_RX_CSUM_DISABLED BIT(0)
+#define IGBVF_FLAG_RX_LB_VLAN_BSWAP BIT(1)
#define IGBVF_RX_DESC_ADV(R, i) \
(&((((R).desc))[i].rx_desc))
#define IGBVF_TX_DESC_ADV(R, i) \
diff --git a/drivers/net/ethernet/intel/igbvf/netdev.c b/drivers/net/ethernet/intel/igbvf/netdev.c
index c12442252adb..b0778ba65083 100644
--- a/drivers/net/ethernet/intel/igbvf/netdev.c
+++ b/drivers/net/ethernet/intel/igbvf/netdev.c
@@ -964,7 +964,7 @@ static void igbvf_assign_vector(struct igbvf_adapter *adapter, int rx_queue,
ivar = ivar & 0xFFFFFF00;
ivar |= msix_vector | E1000_IVAR_VALID;
}
- adapter->rx_ring[rx_queue].eims_value = 1 << msix_vector;
+ adapter->rx_ring[rx_queue].eims_value = BIT(msix_vector);
array_ew32(IVAR0, index, ivar);
}
if (tx_queue > IGBVF_NO_QUEUE) {
@@ -979,7 +979,7 @@ static void igbvf_assign_vector(struct igbvf_adapter *adapter, int rx_queue,
ivar = ivar & 0xFFFF00FF;
ivar |= (msix_vector | E1000_IVAR_VALID) << 8;
}
- adapter->tx_ring[tx_queue].eims_value = 1 << msix_vector;
+ adapter->tx_ring[tx_queue].eims_value = BIT(msix_vector);
array_ew32(IVAR0, index, ivar);
}
}
@@ -1014,8 +1014,8 @@ static void igbvf_configure_msix(struct igbvf_adapter *adapter)
ew32(IVAR_MISC, tmp);
- adapter->eims_enable_mask = (1 << (vector)) - 1;
- adapter->eims_other = 1 << (vector - 1);
+ adapter->eims_enable_mask = GENMASK(vector - 1, 0);
+ adapter->eims_other = BIT(vector - 1);
e1e_flush();
}
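
The igbvf hunk above rewrites (1 << vector) - 1 as GENMASK(vector - 1, 0), i.e. a contiguous mask of the low vector bits. A stand-alone check that the two forms agree, using a local 32-bit macro that mirrors the kernel's GENMASK(h, l):

    #include <stdio.h>

    /* Local stand-in mirroring the kernel's GENMASK(h, l): bits h..l inclusive. */
    #define GENMASK(h, l) \
        ((~0u >> (31 - (h))) & (~0u << (l)))

    int main(void)
    {
        for (unsigned int vector = 1; vector <= 8; vector++) {
            unsigned int old_form = (1u << vector) - 1;
            unsigned int new_form = GENMASK(vector - 1, 0);

            printf("vector %u: old 0x%02x new 0x%02x %s\n",
                   vector, old_form, new_form,
                   old_form == new_form ? "ok" : "MISMATCH");
        }
        return 0;
    }
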
@@ -1367,7 +1367,7 @@ static void igbvf_configure_rx(struct igbvf_adapter *adapter)
struct e1000_hw *hw = &adapter->hw;
struct igbvf_ring *rx_ring = adapter->rx_ring;
u64 rdba;
- u32 rdlen, rxdctl;
+ u32 rxdctl;
/* disable receives */
rxdctl = er32(RXDCTL(0));
@@ -1375,8 +1375,6 @@ static void igbvf_configure_rx(struct igbvf_adapter *adapter)
e1e_flush();
msleep(10);
- rdlen = rx_ring->count * sizeof(union e1000_adv_rx_desc);
-
/* Setup the HW Rx Head and Tail Descriptor Pointers and
* the Base and Length of the Rx Descriptor Ring
*/
@@ -1933,83 +1931,74 @@ static void igbvf_tx_ctxtdesc(struct igbvf_ring *tx_ring, u32 vlan_macip_lens,
buffer_info->dma = 0;
}
-static int igbvf_tso(struct igbvf_adapter *adapter,
- struct igbvf_ring *tx_ring,
- struct sk_buff *skb, u32 tx_flags, u8 *hdr_len,
- __be16 protocol)
-{
- struct e1000_adv_tx_context_desc *context_desc;
- struct igbvf_buffer *buffer_info;
- u32 info = 0, tu_cmd = 0;
- u32 mss_l4len_idx, l4len;
- unsigned int i;
+static int igbvf_tso(struct igbvf_ring *tx_ring,
+ struct sk_buff *skb, u32 tx_flags, u8 *hdr_len)
+{
+ u32 vlan_macip_lens, type_tucmd, mss_l4len_idx;
+ union {
+ struct iphdr *v4;
+ struct ipv6hdr *v6;
+ unsigned char *hdr;
+ } ip;
+ union {
+ struct tcphdr *tcp;
+ unsigned char *hdr;
+ } l4;
+ u32 paylen, l4_offset;
int err;
- *hdr_len = 0;
+ if (skb->ip_summed != CHECKSUM_PARTIAL)
+ return 0;
+
+ if (!skb_is_gso(skb))
+ return 0;
err = skb_cow_head(skb, 0);
- if (err < 0) {
- dev_err(&adapter->pdev->dev, "igbvf_tso returning an error\n");
+ if (err < 0)
return err;
- }
- l4len = tcp_hdrlen(skb);
- *hdr_len += l4len;
-
- if (protocol == htons(ETH_P_IP)) {
- struct iphdr *iph = ip_hdr(skb);
-
- iph->tot_len = 0;
- iph->check = 0;
- tcp_hdr(skb)->check = ~csum_tcpudp_magic(iph->saddr,
- iph->daddr, 0,
- IPPROTO_TCP,
- 0);
- } else if (skb_is_gso_v6(skb)) {
- ipv6_hdr(skb)->payload_len = 0;
- tcp_hdr(skb)->check = ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
- &ipv6_hdr(skb)->daddr,
- 0, IPPROTO_TCP, 0);
- }
+ ip.hdr = skb_network_header(skb);
+ l4.hdr = skb_checksum_start(skb);
- i = tx_ring->next_to_use;
+ /* ADV DTYP TUCMD MKRLOC/ISCSIHEDLEN */
+ type_tucmd = E1000_ADVTXD_TUCMD_L4T_TCP;
- buffer_info = &tx_ring->buffer_info[i];
- context_desc = IGBVF_TX_CTXTDESC_ADV(*tx_ring, i);
- /* VLAN MACLEN IPLEN */
- if (tx_flags & IGBVF_TX_FLAGS_VLAN)
- info |= (tx_flags & IGBVF_TX_FLAGS_VLAN_MASK);
- info |= (skb_network_offset(skb) << E1000_ADVTXD_MACLEN_SHIFT);
- *hdr_len += skb_network_offset(skb);
- info |= (skb_transport_header(skb) - skb_network_header(skb));
- *hdr_len += (skb_transport_header(skb) - skb_network_header(skb));
- context_desc->vlan_macip_lens = cpu_to_le32(info);
+ /* initialize outer IP header fields */
+ if (ip.v4->version == 4) {
+ /* IP header will have to cancel out any data that
+ * is not a part of the outer IP header
+ */
+ ip.v4->check = csum_fold(csum_add(lco_csum(skb),
+ csum_unfold(l4.tcp->check)));
+ type_tucmd |= E1000_ADVTXD_TUCMD_IPV4;
- /* ADV DTYP TUCMD MKRLOC/ISCSIHEDLEN */
- tu_cmd |= (E1000_TXD_CMD_DEXT | E1000_ADVTXD_DTYP_CTXT);
+ ip.v4->tot_len = 0;
+ } else {
+ ip.v6->payload_len = 0;
+ }
- if (protocol == htons(ETH_P_IP))
- tu_cmd |= E1000_ADVTXD_TUCMD_IPV4;
- tu_cmd |= E1000_ADVTXD_TUCMD_L4T_TCP;
+ /* determine offset of inner transport header */
+ l4_offset = l4.hdr - skb->data;
- context_desc->type_tucmd_mlhl = cpu_to_le32(tu_cmd);
+ /* compute length of segmentation header */
+ *hdr_len = (l4.tcp->doff * 4) + l4_offset;
- /* MSS L4LEN IDX */
- mss_l4len_idx = (skb_shinfo(skb)->gso_size << E1000_ADVTXD_MSS_SHIFT);
- mss_l4len_idx |= (l4len << E1000_ADVTXD_L4LEN_SHIFT);
+ /* remove payload length from inner checksum */
+ paylen = skb->len - l4_offset;
+ csum_replace_by_diff(&l4.tcp->check, htonl(paylen));
- context_desc->mss_l4len_idx = cpu_to_le32(mss_l4len_idx);
- context_desc->seqnum_seed = 0;
+ /* MSS L4LEN IDX */
+ mss_l4len_idx = (*hdr_len - l4_offset) << E1000_ADVTXD_L4LEN_SHIFT;
+ mss_l4len_idx |= skb_shinfo(skb)->gso_size << E1000_ADVTXD_MSS_SHIFT;
- buffer_info->time_stamp = jiffies;
- buffer_info->dma = 0;
- i++;
- if (i == tx_ring->count)
- i = 0;
+ /* VLAN MACLEN IPLEN */
+ vlan_macip_lens = l4.hdr - ip.hdr;
+ vlan_macip_lens |= (ip.hdr - skb->data) << E1000_ADVTXD_MACLEN_SHIFT;
+ vlan_macip_lens |= tx_flags & IGBVF_TX_FLAGS_VLAN_MASK;
- tx_ring->next_to_use = i;
+ igbvf_tx_ctxtdesc(tx_ring, vlan_macip_lens, type_tucmd, mss_l4len_idx);
- return true;
+ return 1;
}
static inline bool igbvf_ipv6_csum_is_sctp(struct sk_buff *skb)
@@ -2091,7 +2080,7 @@ static int igbvf_maybe_stop_tx(struct net_device *netdev, int size)
}
#define IGBVF_MAX_TXD_PWR 16
-#define IGBVF_MAX_DATA_PER_TXD (1 << IGBVF_MAX_TXD_PWR)
+#define IGBVF_MAX_DATA_PER_TXD (1u << IGBVF_MAX_TXD_PWR)
static inline int igbvf_tx_map_adv(struct igbvf_adapter *adapter,
struct igbvf_ring *tx_ring,
@@ -2271,8 +2260,7 @@ static netdev_tx_t igbvf_xmit_frame_ring_adv(struct sk_buff *skb,
first = tx_ring->next_to_use;
- tso = skb_is_gso(skb) ?
- igbvf_tso(adapter, tx_ring, skb, tx_flags, &hdr_len, protocol) : 0;
+ tso = igbvf_tso(tx_ring, skb, tx_flags, &hdr_len);
if (unlikely(tso < 0)) {
dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
@@ -2615,6 +2603,40 @@ static int igbvf_set_features(struct net_device *netdev,
return 0;
}
+#define IGBVF_MAX_MAC_HDR_LEN 127
+#define IGBVF_MAX_NETWORK_HDR_LEN 511
+
+static netdev_features_t
+igbvf_features_check(struct sk_buff *skb, struct net_device *dev,
+ netdev_features_t features)
+{
+ unsigned int network_hdr_len, mac_hdr_len;
+
+ /* Make certain the headers can be described by a context descriptor */
+ mac_hdr_len = skb_network_header(skb) - skb->data;
+ if (unlikely(mac_hdr_len > IGBVF_MAX_MAC_HDR_LEN))
+ return features & ~(NETIF_F_HW_CSUM |
+ NETIF_F_SCTP_CRC |
+ NETIF_F_HW_VLAN_CTAG_TX |
+ NETIF_F_TSO |
+ NETIF_F_TSO6);
+
+ network_hdr_len = skb_checksum_start(skb) - skb_network_header(skb);
+ if (unlikely(network_hdr_len > IGBVF_MAX_NETWORK_HDR_LEN))
+ return features & ~(NETIF_F_HW_CSUM |
+ NETIF_F_SCTP_CRC |
+ NETIF_F_TSO |
+ NETIF_F_TSO6);
+
+ /* We can only support IPV4 TSO in tunnels if we can mangle the
+ * inner IP ID field, so strip TSO if MANGLEID is not supported.
+ */
+ if (skb->encapsulation && !(features & NETIF_F_TSO_MANGLEID))
+ features &= ~NETIF_F_TSO;
+
+ return features;
+}
+
static const struct net_device_ops igbvf_netdev_ops = {
.ndo_open = igbvf_open,
.ndo_stop = igbvf_close,
@@ -2631,7 +2653,7 @@ static const struct net_device_ops igbvf_netdev_ops = {
.ndo_poll_controller = igbvf_netpoll,
#endif
.ndo_set_features = igbvf_set_features,
- .ndo_features_check = passthru_features_check,
+ .ndo_features_check = igbvf_features_check,
};
/**
@@ -2739,22 +2761,30 @@ static int igbvf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
NETIF_F_HW_CSUM |
NETIF_F_SCTP_CRC;
- netdev->features = netdev->hw_features |
- NETIF_F_HW_VLAN_CTAG_TX |
- NETIF_F_HW_VLAN_CTAG_RX |
- NETIF_F_HW_VLAN_CTAG_FILTER;
+#define IGBVF_GSO_PARTIAL_FEATURES (NETIF_F_GSO_GRE | \
+ NETIF_F_GSO_GRE_CSUM | \
+ NETIF_F_GSO_IPXIP4 | \
+ NETIF_F_GSO_IPXIP6 | \
+ NETIF_F_GSO_UDP_TUNNEL | \
+ NETIF_F_GSO_UDP_TUNNEL_CSUM)
+
+ netdev->gso_partial_features = IGBVF_GSO_PARTIAL_FEATURES;
+ netdev->hw_features |= NETIF_F_GSO_PARTIAL |
+ IGBVF_GSO_PARTIAL_FEATURES;
+
+ netdev->features = netdev->hw_features;
if (pci_using_dac)
netdev->features |= NETIF_F_HIGHDMA;
- netdev->vlan_features |= NETIF_F_SG |
- NETIF_F_TSO |
- NETIF_F_TSO6 |
- NETIF_F_HW_CSUM |
- NETIF_F_SCTP_CRC;
-
+ netdev->vlan_features |= netdev->features | NETIF_F_TSO_MANGLEID;
netdev->mpls_features |= NETIF_F_HW_CSUM;
- netdev->hw_enc_features |= NETIF_F_HW_CSUM;
+ netdev->hw_enc_features |= netdev->vlan_features;
+
+ /* set this bit last since it cannot be part of vlan_features */
+ netdev->features |= NETIF_F_HW_VLAN_CTAG_FILTER |
+ NETIF_F_HW_VLAN_CTAG_RX |
+ NETIF_F_HW_VLAN_CTAG_TX;
/*reset the controller to put the device in a known good state */
err = hw->mac.ops.reset_hw(hw);
diff --git a/drivers/net/ethernet/intel/igbvf/vf.c b/drivers/net/ethernet/intel/igbvf/vf.c
index a13baa90ae20..335ba6642145 100644
--- a/drivers/net/ethernet/intel/igbvf/vf.c
+++ b/drivers/net/ethernet/intel/igbvf/vf.c
@@ -266,7 +266,7 @@ static s32 e1000_set_vfta_vf(struct e1000_hw *hw, u16 vid, bool set)
msgbuf[1] = vid;
/* Setting the 8 bit field MSG INFO to true indicates "add" */
if (set)
- msgbuf[0] |= 1 << E1000_VT_MSGINFO_SHIFT;
+ msgbuf[0] |= BIT(E1000_VT_MSGINFO_SHIFT);
mbx->ops.write_posted(hw, msgbuf, 2);
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe.h b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
index e4949af7dd6b..9f2db1855412 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe.h
@@ -1,7 +1,7 @@
/*******************************************************************************
Intel 10 Gigabit PCI Express Linux driver
- Copyright(c) 1999 - 2013 Intel Corporation.
+ Copyright(c) 1999 - 2016 Intel Corporation.
This program is free software; you can redistribute it and/or modify it
under the terms and conditions of the GNU General Public License,
@@ -143,14 +143,11 @@ struct vf_data_storage {
unsigned char vf_mac_addresses[ETH_ALEN];
u16 vf_mc_hashes[IXGBE_MAX_VF_MC_ENTRIES];
u16 num_vf_mc_hashes;
- u16 default_vf_vlan_id;
- u16 vlans_enabled;
bool clear_to_send;
bool pf_set_mac;
u16 pf_vlan; /* When set, guest VLAN config not allowed. */
u16 pf_qos;
u16 tx_rate;
- u16 vlan_count;
u8 spoofchk_enabled;
bool rss_query_enabled;
u8 trusted;
@@ -173,7 +170,7 @@ struct vf_macvlans {
};
#define IXGBE_MAX_TXD_PWR 14
-#define IXGBE_MAX_DATA_PER_TXD (1 << IXGBE_MAX_TXD_PWR)
+#define IXGBE_MAX_DATA_PER_TXD (1u << IXGBE_MAX_TXD_PWR)
/* Tx Descriptors needed, worst case */
#define TXD_USE_COUNT(S) DIV_ROUND_UP((S), IXGBE_MAX_DATA_PER_TXD)
@@ -456,7 +453,7 @@ static inline bool ixgbe_qv_lock_poll(struct ixgbe_q_vector *q_vector)
IXGBE_QV_STATE_POLL);
#ifdef BP_EXTENDED_STATS
if (rc != IXGBE_QV_STATE_IDLE)
- q_vector->tx.ring->stats.yields++;
+ q_vector->rx.ring->stats.yields++;
#endif
return rc == IXGBE_QV_STATE_IDLE;
}
@@ -623,44 +620,45 @@ struct ixgbe_adapter {
* thus the additional *_CAPABLE flags.
*/
u32 flags;
-#define IXGBE_FLAG_MSI_ENABLED (u32)(1 << 1)
-#define IXGBE_FLAG_MSIX_ENABLED (u32)(1 << 3)
-#define IXGBE_FLAG_RX_1BUF_CAPABLE (u32)(1 << 4)
-#define IXGBE_FLAG_RX_PS_CAPABLE (u32)(1 << 5)
-#define IXGBE_FLAG_RX_PS_ENABLED (u32)(1 << 6)
-#define IXGBE_FLAG_DCA_ENABLED (u32)(1 << 8)
-#define IXGBE_FLAG_DCA_CAPABLE (u32)(1 << 9)
-#define IXGBE_FLAG_IMIR_ENABLED (u32)(1 << 10)
-#define IXGBE_FLAG_MQ_CAPABLE (u32)(1 << 11)
-#define IXGBE_FLAG_DCB_ENABLED (u32)(1 << 12)
-#define IXGBE_FLAG_VMDQ_CAPABLE (u32)(1 << 13)
-#define IXGBE_FLAG_VMDQ_ENABLED (u32)(1 << 14)
-#define IXGBE_FLAG_FAN_FAIL_CAPABLE (u32)(1 << 15)
-#define IXGBE_FLAG_NEED_LINK_UPDATE (u32)(1 << 16)
-#define IXGBE_FLAG_NEED_LINK_CONFIG (u32)(1 << 17)
-#define IXGBE_FLAG_FDIR_HASH_CAPABLE (u32)(1 << 18)
-#define IXGBE_FLAG_FDIR_PERFECT_CAPABLE (u32)(1 << 19)
-#define IXGBE_FLAG_FCOE_CAPABLE (u32)(1 << 20)
-#define IXGBE_FLAG_FCOE_ENABLED (u32)(1 << 21)
-#define IXGBE_FLAG_SRIOV_CAPABLE (u32)(1 << 22)
-#define IXGBE_FLAG_SRIOV_ENABLED (u32)(1 << 23)
+#define IXGBE_FLAG_MSI_ENABLED BIT(1)
+#define IXGBE_FLAG_MSIX_ENABLED BIT(3)
+#define IXGBE_FLAG_RX_1BUF_CAPABLE BIT(4)
+#define IXGBE_FLAG_RX_PS_CAPABLE BIT(5)
+#define IXGBE_FLAG_RX_PS_ENABLED BIT(6)
+#define IXGBE_FLAG_DCA_ENABLED BIT(8)
+#define IXGBE_FLAG_DCA_CAPABLE BIT(9)
+#define IXGBE_FLAG_IMIR_ENABLED BIT(10)
+#define IXGBE_FLAG_MQ_CAPABLE BIT(11)
+#define IXGBE_FLAG_DCB_ENABLED BIT(12)
+#define IXGBE_FLAG_VMDQ_CAPABLE BIT(13)
+#define IXGBE_FLAG_VMDQ_ENABLED BIT(14)
+#define IXGBE_FLAG_FAN_FAIL_CAPABLE BIT(15)
+#define IXGBE_FLAG_NEED_LINK_UPDATE BIT(16)
+#define IXGBE_FLAG_NEED_LINK_CONFIG BIT(17)
+#define IXGBE_FLAG_FDIR_HASH_CAPABLE BIT(18)
+#define IXGBE_FLAG_FDIR_PERFECT_CAPABLE BIT(19)
+#define IXGBE_FLAG_FCOE_CAPABLE BIT(20)
+#define IXGBE_FLAG_FCOE_ENABLED BIT(21)
+#define IXGBE_FLAG_SRIOV_CAPABLE BIT(22)
+#define IXGBE_FLAG_SRIOV_ENABLED BIT(23)
#define IXGBE_FLAG_VXLAN_OFFLOAD_CAPABLE BIT(24)
#define IXGBE_FLAG_RX_HWTSTAMP_ENABLED BIT(25)
#define IXGBE_FLAG_RX_HWTSTAMP_IN_REGISTER BIT(26)
+#define IXGBE_FLAG_DCB_CAPABLE BIT(27)
u32 flags2;
-#define IXGBE_FLAG2_RSC_CAPABLE (u32)(1 << 0)
-#define IXGBE_FLAG2_RSC_ENABLED (u32)(1 << 1)
-#define IXGBE_FLAG2_TEMP_SENSOR_CAPABLE (u32)(1 << 2)
-#define IXGBE_FLAG2_TEMP_SENSOR_EVENT (u32)(1 << 3)
-#define IXGBE_FLAG2_SEARCH_FOR_SFP (u32)(1 << 4)
-#define IXGBE_FLAG2_SFP_NEEDS_RESET (u32)(1 << 5)
-#define IXGBE_FLAG2_RESET_REQUESTED (u32)(1 << 6)
-#define IXGBE_FLAG2_FDIR_REQUIRES_REINIT (u32)(1 << 7)
-#define IXGBE_FLAG2_RSS_FIELD_IPV4_UDP (u32)(1 << 8)
-#define IXGBE_FLAG2_RSS_FIELD_IPV6_UDP (u32)(1 << 9)
-#define IXGBE_FLAG2_PTP_PPS_ENABLED (u32)(1 << 10)
-#define IXGBE_FLAG2_PHY_INTERRUPT (u32)(1 << 11)
+#define IXGBE_FLAG2_RSC_CAPABLE BIT(0)
+#define IXGBE_FLAG2_RSC_ENABLED BIT(1)
+#define IXGBE_FLAG2_TEMP_SENSOR_CAPABLE BIT(2)
+#define IXGBE_FLAG2_TEMP_SENSOR_EVENT BIT(3)
+#define IXGBE_FLAG2_SEARCH_FOR_SFP BIT(4)
+#define IXGBE_FLAG2_SFP_NEEDS_RESET BIT(5)
+#define IXGBE_FLAG2_RESET_REQUESTED BIT(6)
+#define IXGBE_FLAG2_FDIR_REQUIRES_REINIT BIT(7)
+#define IXGBE_FLAG2_RSS_FIELD_IPV4_UDP BIT(8)
+#define IXGBE_FLAG2_RSS_FIELD_IPV6_UDP BIT(9)
+#define IXGBE_FLAG2_PTP_PPS_ENABLED BIT(10)
+#define IXGBE_FLAG2_PHY_INTERRUPT BIT(11)
#define IXGBE_FLAG2_VXLAN_REREG_NEEDED BIT(12)
#define IXGBE_FLAG2_VLAN_PROMISC BIT(13)
@@ -795,7 +793,7 @@ struct ixgbe_adapter {
unsigned long fwd_bitmask; /* Bitmask indicating in use pools */
#define IXGBE_MAX_LINK_HANDLE 10
- struct ixgbe_mat_field *jump_tables[IXGBE_MAX_LINK_HANDLE];
+ struct ixgbe_jump_table *jump_tables[IXGBE_MAX_LINK_HANDLE];
unsigned long tables;
/* maximum number of RETA entries among all devices supported by ixgbe
@@ -806,6 +804,8 @@ struct ixgbe_adapter {
#define IXGBE_RSS_KEY_SIZE 40 /* size of RSS Hash Key in bytes */
u32 rss_key[IXGBE_RSS_KEY_SIZE / sizeof(u32)];
+
+ bool need_crosstalk_fix;
};
static inline u8 ixgbe_max_rss_indices(struct ixgbe_adapter *adapter)
@@ -817,6 +817,7 @@ static inline u8 ixgbe_max_rss_indices(struct ixgbe_adapter *adapter)
return IXGBE_MAX_RSS_INDICES;
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
return IXGBE_MAX_RSS_INDICES_X550;
default:
return 0;
@@ -827,7 +828,7 @@ struct ixgbe_fdir_filter {
struct hlist_node fdir_node;
union ixgbe_atr_input filter;
u16 sw_idx;
- u16 action;
+ u64 action;
};
enum ixgbe_state_t {
@@ -860,13 +861,15 @@ enum ixgbe_boards {
board_X540,
board_X550,
board_X550EM_x,
+ board_x550em_a,
};
-extern struct ixgbe_info ixgbe_82598_info;
-extern struct ixgbe_info ixgbe_82599_info;
-extern struct ixgbe_info ixgbe_X540_info;
-extern struct ixgbe_info ixgbe_X550_info;
-extern struct ixgbe_info ixgbe_X550EM_x_info;
+extern const struct ixgbe_info ixgbe_82598_info;
+extern const struct ixgbe_info ixgbe_82599_info;
+extern const struct ixgbe_info ixgbe_X540_info;
+extern const struct ixgbe_info ixgbe_X550_info;
+extern const struct ixgbe_info ixgbe_X550EM_x_info;
+extern const struct ixgbe_info ixgbe_x550em_a_info;
#ifdef CONFIG_IXGBE_DCB
extern const struct dcbnl_rtnl_ops dcbnl_ops;
#endif
@@ -893,8 +896,8 @@ void ixgbe_configure_tx_ring(struct ixgbe_adapter *, struct ixgbe_ring *);
void ixgbe_disable_rx_queue(struct ixgbe_adapter *adapter, struct ixgbe_ring *);
void ixgbe_update_stats(struct ixgbe_adapter *adapter);
int ixgbe_init_interrupt_scheme(struct ixgbe_adapter *adapter);
-int ixgbe_wol_supported(struct ixgbe_adapter *adapter, u16 device_id,
- u16 subdevice_id);
+bool ixgbe_wol_supported(struct ixgbe_adapter *adapter, u16 device_id,
+ u16 subdevice_id);
#ifdef CONFIG_PCI_IOV
void ixgbe_full_sync_mac_table(struct ixgbe_adapter *adapter);
#endif
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c
index d8a9fb8a59e2..fb51be74dd4c 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_82598.c
@@ -1,7 +1,7 @@
/*******************************************************************************
Intel 10 Gigabit PCI Express Linux driver
- Copyright(c) 1999 - 2015 Intel Corporation.
+ Copyright(c) 1999 - 2016 Intel Corporation.
This program is free software; you can redistribute it and/or modify it
under the terms and conditions of the GNU General Public License,
@@ -792,7 +792,7 @@ mac_reset_top:
}
gheccr = IXGBE_READ_REG(hw, IXGBE_GHECCR);
- gheccr &= ~((1 << 21) | (1 << 18) | (1 << 9) | (1 << 6));
+ gheccr &= ~(BIT(21) | BIT(18) | BIT(9) | BIT(6));
IXGBE_WRITE_REG(hw, IXGBE_GHECCR, gheccr);
/*
@@ -914,10 +914,10 @@ static s32 ixgbe_set_vfta_82598(struct ixgbe_hw *hw, u32 vlan, u32 vind,
bits = IXGBE_READ_REG(hw, IXGBE_VFTA(regindex));
if (vlan_on)
/* Turn on this VLAN id */
- bits |= (1 << bitindex);
+ bits |= BIT(bitindex);
else
/* Turn off this VLAN id */
- bits &= ~(1 << bitindex);
+ bits &= ~BIT(bitindex);
IXGBE_WRITE_REG(hw, IXGBE_VFTA(regindex), bits);
return 0;
@@ -1160,7 +1160,7 @@ static void ixgbe_set_rxpba_82598(struct ixgbe_hw *hw, int num_pb,
IXGBE_WRITE_REG(hw, IXGBE_TXPBSIZE(i), IXGBE_TXPBSIZE_40KB);
}
-static struct ixgbe_mac_operations mac_ops_82598 = {
+static const struct ixgbe_mac_operations mac_ops_82598 = {
.init_hw = &ixgbe_init_hw_generic,
.reset_hw = &ixgbe_reset_hw_82598,
.start_hw = &ixgbe_start_hw_82598,
@@ -1192,9 +1192,11 @@ static struct ixgbe_mac_operations mac_ops_82598 = {
.clear_vfta = &ixgbe_clear_vfta_82598,
.set_vfta = &ixgbe_set_vfta_82598,
.fc_enable = &ixgbe_fc_enable_82598,
+ .setup_fc = ixgbe_setup_fc_generic,
.set_fw_drv_ver = NULL,
.acquire_swfw_sync = &ixgbe_acquire_swfw_sync,
.release_swfw_sync = &ixgbe_release_swfw_sync,
+ .init_swfw_sync = NULL,
.get_thermal_sensor_data = NULL,
.init_thermal_sensor_thresh = NULL,
.prot_autoc_read = &prot_autoc_read_generic,
@@ -1203,7 +1205,7 @@ static struct ixgbe_mac_operations mac_ops_82598 = {
.disable_rx = &ixgbe_disable_rx_generic,
};
-static struct ixgbe_eeprom_operations eeprom_ops_82598 = {
+static const struct ixgbe_eeprom_operations eeprom_ops_82598 = {
.init_params = &ixgbe_init_eeprom_params_generic,
.read = &ixgbe_read_eerd_generic,
.write = &ixgbe_write_eeprom_generic,
@@ -1214,7 +1216,7 @@ static struct ixgbe_eeprom_operations eeprom_ops_82598 = {
.update_checksum = &ixgbe_update_eeprom_checksum_generic,
};
-static struct ixgbe_phy_operations phy_ops_82598 = {
+static const struct ixgbe_phy_operations phy_ops_82598 = {
.identify = &ixgbe_identify_phy_generic,
.identify_sfp = &ixgbe_identify_module_generic,
.init = &ixgbe_init_phy_ops_82598,
@@ -1230,7 +1232,7 @@ static struct ixgbe_phy_operations phy_ops_82598 = {
.check_overtemp = &ixgbe_tn_check_overtemp,
};
-struct ixgbe_info ixgbe_82598_info = {
+const struct ixgbe_info ixgbe_82598_info = {
.mac = ixgbe_mac_82598EB,
.get_invariants = &ixgbe_get_invariants_82598,
.mac_ops = &mac_ops_82598,
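
The ixgbe hunks in this area constify the mac_ops/eeprom_ops/phy_ops tables and the exported ixgbe_*_info structures, so the function-pointer tables end up in read-only data. A minimal sketch of that pattern with invented names:

    #include <stdio.h>

    /* An ops table of function pointers, kept const so it lands in .rodata
     * and cannot be overwritten at run time. All names here are invented.
     */
    struct demo_ops {
        int (*init)(void);
        int (*reset)(void);
    };

    static int demo_init(void)  { puts("init");  return 0; }
    static int demo_reset(void) { puts("reset"); return 0; }

    static const struct demo_ops demo_ops_a = {
        .init  = demo_init,
        .reset = demo_reset,
    };

    /* Consumers hold a pointer-to-const and can only call through it. */
    struct demo_info {
        const struct demo_ops *ops;
    };

    int main(void)
    {
        struct demo_info info = { .ops = &demo_ops_a };

        info.ops->init();
        info.ops->reset();
        return 0;
    }
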
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c
index fa8d4f40ac2a..47afed74a54d 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_82599.c
@@ -1,7 +1,7 @@
/*******************************************************************************
Intel 10 Gigabit PCI Express Linux driver
- Copyright(c) 1999 - 2015 Intel Corporation.
+ Copyright(c) 1999 - 2016 Intel Corporation.
This program is free software; you can redistribute it and/or modify it
under the terms and conditions of the GNU General Public License,
@@ -1296,17 +1296,17 @@ s32 ixgbe_init_fdir_perfect_82599(struct ixgbe_hw *hw, u32 fdirctrl)
#define IXGBE_COMPUTE_SIG_HASH_ITERATION(_n) \
do { \
u32 n = (_n); \
- if (IXGBE_ATR_COMMON_HASH_KEY & (0x01 << n)) \
+ if (IXGBE_ATR_COMMON_HASH_KEY & BIT(n)) \
common_hash ^= lo_hash_dword >> n; \
- else if (IXGBE_ATR_BUCKET_HASH_KEY & (0x01 << n)) \
+ else if (IXGBE_ATR_BUCKET_HASH_KEY & BIT(n)) \
bucket_hash ^= lo_hash_dword >> n; \
- else if (IXGBE_ATR_SIGNATURE_HASH_KEY & (0x01 << n)) \
+ else if (IXGBE_ATR_SIGNATURE_HASH_KEY & BIT(n)) \
sig_hash ^= lo_hash_dword << (16 - n); \
- if (IXGBE_ATR_COMMON_HASH_KEY & (0x01 << (n + 16))) \
+ if (IXGBE_ATR_COMMON_HASH_KEY & BIT(n + 16)) \
common_hash ^= hi_hash_dword >> n; \
- else if (IXGBE_ATR_BUCKET_HASH_KEY & (0x01 << (n + 16))) \
+ else if (IXGBE_ATR_BUCKET_HASH_KEY & BIT(n + 16)) \
bucket_hash ^= hi_hash_dword >> n; \
- else if (IXGBE_ATR_SIGNATURE_HASH_KEY & (0x01 << (n + 16))) \
+ else if (IXGBE_ATR_SIGNATURE_HASH_KEY & BIT(n + 16)) \
sig_hash ^= hi_hash_dword << (16 - n); \
} while (0)
@@ -1440,9 +1440,9 @@ s32 ixgbe_fdir_add_signature_filter_82599(struct ixgbe_hw *hw,
#define IXGBE_COMPUTE_BKT_HASH_ITERATION(_n) \
do { \
u32 n = (_n); \
- if (IXGBE_ATR_BUCKET_HASH_KEY & (0x01 << n)) \
+ if (IXGBE_ATR_BUCKET_HASH_KEY & BIT(n)) \
bucket_hash ^= lo_hash_dword >> n; \
- if (IXGBE_ATR_BUCKET_HASH_KEY & (0x01 << (n + 16))) \
+ if (IXGBE_ATR_BUCKET_HASH_KEY & BIT(n + 16)) \
bucket_hash ^= hi_hash_dword >> n; \
} while (0)
@@ -1633,6 +1633,7 @@ s32 ixgbe_fdir_set_input_mask_82599(struct ixgbe_hw *hw,
switch (hw->mac.type) {
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
IXGBE_WRITE_REG(hw, IXGBE_FDIRSCTPM, ~fdirtcpm);
break;
default:
@@ -2181,7 +2182,7 @@ release_i2c_access:
return status;
}
-static struct ixgbe_mac_operations mac_ops_82599 = {
+static const struct ixgbe_mac_operations mac_ops_82599 = {
.init_hw = &ixgbe_init_hw_generic,
.reset_hw = &ixgbe_reset_hw_82599,
.start_hw = &ixgbe_start_hw_82599,
@@ -2220,6 +2221,7 @@ static struct ixgbe_mac_operations mac_ops_82599 = {
.clear_vfta = &ixgbe_clear_vfta_generic,
.set_vfta = &ixgbe_set_vfta_generic,
.fc_enable = &ixgbe_fc_enable_generic,
+ .setup_fc = ixgbe_setup_fc_generic,
.set_fw_drv_ver = &ixgbe_set_fw_drv_ver_generic,
.init_uta_tables = &ixgbe_init_uta_tables_generic,
.setup_sfp = &ixgbe_setup_sfp_modules_82599,
@@ -2227,6 +2229,7 @@ static struct ixgbe_mac_operations mac_ops_82599 = {
.set_vlan_anti_spoofing = &ixgbe_set_vlan_anti_spoofing,
.acquire_swfw_sync = &ixgbe_acquire_swfw_sync,
.release_swfw_sync = &ixgbe_release_swfw_sync,
+ .init_swfw_sync = NULL,
.get_thermal_sensor_data = &ixgbe_get_thermal_sensor_data_generic,
.init_thermal_sensor_thresh = &ixgbe_init_thermal_sensor_thresh_generic,
.prot_autoc_read = &prot_autoc_read_82599,
@@ -2235,7 +2238,7 @@ static struct ixgbe_mac_operations mac_ops_82599 = {
.disable_rx = &ixgbe_disable_rx_generic,
};
-static struct ixgbe_eeprom_operations eeprom_ops_82599 = {
+static const struct ixgbe_eeprom_operations eeprom_ops_82599 = {
.init_params = &ixgbe_init_eeprom_params_generic,
.read = &ixgbe_read_eeprom_82599,
.read_buffer = &ixgbe_read_eeprom_buffer_82599,
@@ -2246,7 +2249,7 @@ static struct ixgbe_eeprom_operations eeprom_ops_82599 = {
.update_checksum = &ixgbe_update_eeprom_checksum_generic,
};
-static struct ixgbe_phy_operations phy_ops_82599 = {
+static const struct ixgbe_phy_operations phy_ops_82599 = {
.identify = &ixgbe_identify_phy_82599,
.identify_sfp = &ixgbe_identify_module_generic,
.init = &ixgbe_init_phy_ops_82599,
@@ -2263,7 +2266,7 @@ static struct ixgbe_phy_operations phy_ops_82599 = {
.check_overtemp = &ixgbe_tn_check_overtemp,
};
-struct ixgbe_info ixgbe_82599_info = {
+const struct ixgbe_info ixgbe_82599_info = {
.mac = ixgbe_mac_82599EB,
.get_invariants = &ixgbe_get_invariants_82599,
.mac_ops = &mac_ops_82599,
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
index 64045053e874..902d2061ce73 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.c
@@ -1,7 +1,7 @@
/*******************************************************************************
Intel 10 Gigabit PCI Express Linux driver
- Copyright(c) 1999 - 2015 Intel Corporation.
+ Copyright(c) 1999 - 2016 Intel Corporation.
This program is free software; you can redistribute it and/or modify it
under the terms and conditions of the GNU General Public License,
@@ -97,6 +97,7 @@ bool ixgbe_device_supports_autoneg_fc(struct ixgbe_hw *hw)
case IXGBE_DEV_ID_X540T:
case IXGBE_DEV_ID_X540T1:
case IXGBE_DEV_ID_X550T:
+ case IXGBE_DEV_ID_X550T1:
case IXGBE_DEV_ID_X550EM_X_10G_T:
supported = true;
break;
@@ -111,12 +112,12 @@ bool ixgbe_device_supports_autoneg_fc(struct ixgbe_hw *hw)
}
/**
- * ixgbe_setup_fc - Set up flow control
+ * ixgbe_setup_fc_generic - Set up flow control
* @hw: pointer to hardware structure
*
* Called at init time to set up flow control.
**/
-static s32 ixgbe_setup_fc(struct ixgbe_hw *hw)
+s32 ixgbe_setup_fc_generic(struct ixgbe_hw *hw)
{
s32 ret_val = 0;
u32 reg = 0, reg_bp = 0;
@@ -296,7 +297,7 @@ s32 ixgbe_start_hw_generic(struct ixgbe_hw *hw)
IXGBE_WRITE_FLUSH(hw);
/* Setup flow control */
- ret_val = ixgbe_setup_fc(hw);
+ ret_val = hw->mac.ops.setup_fc(hw);
if (ret_val)
return ret_val;
@@ -681,6 +682,7 @@ s32 ixgbe_get_bus_info_generic(struct ixgbe_hw *hw)
void ixgbe_set_lan_id_multi_port_pcie(struct ixgbe_hw *hw)
{
struct ixgbe_bus_info *bus = &hw->bus;
+ u16 ee_ctrl_4;
u32 reg;
reg = IXGBE_READ_REG(hw, IXGBE_STATUS);
@@ -691,6 +693,13 @@ void ixgbe_set_lan_id_multi_port_pcie(struct ixgbe_hw *hw)
reg = IXGBE_READ_REG(hw, IXGBE_FACTPS(hw));
if (reg & IXGBE_FACTPS_LFS)
bus->func ^= 0x1;
+
+ /* Get MAC instance from EEPROM for configuring CS4227 */
+ if (hw->device_id == IXGBE_DEV_ID_X550EM_A_SFP) {
+ hw->eeprom.ops.read(hw, IXGBE_EEPROM_CTRL_4, &ee_ctrl_4);
+ bus->instance_id = (ee_ctrl_4 & IXGBE_EE_CTRL_4_INST_ID) >>
+ IXGBE_EE_CTRL_4_INST_ID_SHIFT;
+ }
}
/**
@@ -816,8 +825,8 @@ s32 ixgbe_init_eeprom_params_generic(struct ixgbe_hw *hw)
*/
eeprom_size = (u16)((eec & IXGBE_EEC_SIZE) >>
IXGBE_EEC_SIZE_SHIFT);
- eeprom->word_size = 1 << (eeprom_size +
- IXGBE_EEPROM_WORD_SIZE_SHIFT);
+ eeprom->word_size = BIT(eeprom_size +
+ IXGBE_EEPROM_WORD_SIZE_SHIFT);
}
if (eec & IXGBE_EEC_ADDR_SIZE)
@@ -1493,7 +1502,7 @@ static void ixgbe_shift_out_eeprom_bits(struct ixgbe_hw *hw, u16 data,
* Mask is used to shift "count" bits of "data" out to the EEPROM
* one bit at a time. Determine the starting bit based on count
*/
- mask = 0x01 << (count - 1);
+ mask = BIT(count - 1);
for (i = 0; i < count; i++) {
/*
@@ -1982,7 +1991,7 @@ static void ixgbe_set_mta(struct ixgbe_hw *hw, u8 *mc_addr)
*/
vector_reg = (vector >> 5) & 0x7F;
vector_bit = vector & 0x1F;
- hw->mac.mta_shadow[vector_reg] |= (1 << vector_bit);
+ hw->mac.mta_shadow[vector_reg] |= BIT(vector_bit);
}
/**
@@ -2854,6 +2863,7 @@ u16 ixgbe_get_pcie_msix_count_generic(struct ixgbe_hw *hw)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
pcie_offset = IXGBE_PCIE_MSIX_82599_CAPS;
max_msix_count = IXGBE_MAX_MSIX_VECTORS_82599;
break;
@@ -2911,10 +2921,10 @@ s32 ixgbe_clear_vmdq_generic(struct ixgbe_hw *hw, u32 rar, u32 vmdq)
mpsar_hi = 0;
}
} else if (vmdq < 32) {
- mpsar_lo &= ~(1 << vmdq);
+ mpsar_lo &= ~BIT(vmdq);
IXGBE_WRITE_REG(hw, IXGBE_MPSAR_LO(rar), mpsar_lo);
} else {
- mpsar_hi &= ~(1 << (vmdq - 32));
+ mpsar_hi &= ~BIT(vmdq - 32);
IXGBE_WRITE_REG(hw, IXGBE_MPSAR_HI(rar), mpsar_hi);
}
@@ -2943,11 +2953,11 @@ s32 ixgbe_set_vmdq_generic(struct ixgbe_hw *hw, u32 rar, u32 vmdq)
if (vmdq < 32) {
mpsar = IXGBE_READ_REG(hw, IXGBE_MPSAR_LO(rar));
- mpsar |= 1 << vmdq;
+ mpsar |= BIT(vmdq);
IXGBE_WRITE_REG(hw, IXGBE_MPSAR_LO(rar), mpsar);
} else {
mpsar = IXGBE_READ_REG(hw, IXGBE_MPSAR_HI(rar));
- mpsar |= 1 << (vmdq - 32);
+ mpsar |= BIT(vmdq - 32);
IXGBE_WRITE_REG(hw, IXGBE_MPSAR_HI(rar), mpsar);
}
return 0;
@@ -2968,11 +2978,11 @@ s32 ixgbe_set_vmdq_san_mac_generic(struct ixgbe_hw *hw, u32 vmdq)
u32 rar = hw->mac.san_mac_rar_index;
if (vmdq < 32) {
- IXGBE_WRITE_REG(hw, IXGBE_MPSAR_LO(rar), 1 << vmdq);
+ IXGBE_WRITE_REG(hw, IXGBE_MPSAR_LO(rar), BIT(vmdq));
IXGBE_WRITE_REG(hw, IXGBE_MPSAR_HI(rar), 0);
} else {
IXGBE_WRITE_REG(hw, IXGBE_MPSAR_LO(rar), 0);
- IXGBE_WRITE_REG(hw, IXGBE_MPSAR_HI(rar), 1 << (vmdq - 32));
+ IXGBE_WRITE_REG(hw, IXGBE_MPSAR_HI(rar), BIT(vmdq - 32));
}
return 0;
@@ -3072,7 +3082,7 @@ s32 ixgbe_set_vfta_generic(struct ixgbe_hw *hw, u32 vlan, u32 vind,
* bits[4-0]: which bit in the register
*/
regidx = vlan / 32;
- vfta_delta = 1 << (vlan % 32);
+ vfta_delta = BIT(vlan % 32);
vfta = IXGBE_READ_REG(hw, IXGBE_VFTA(regidx));
/* vfta_delta represents the difference between the current value
@@ -3103,12 +3113,12 @@ s32 ixgbe_set_vfta_generic(struct ixgbe_hw *hw, u32 vlan, u32 vind,
bits = IXGBE_READ_REG(hw, IXGBE_VLVFB(vlvf_index * 2 + vind / 32));
/* set the pool bit */
- bits |= 1 << (vind % 32);
+ bits |= BIT(vind % 32);
if (vlan_on)
goto vlvf_update;
/* clear the pool bit */
- bits ^= 1 << (vind % 32);
+ bits ^= BIT(vind % 32);
if (!bits &&
!IXGBE_READ_REG(hw, IXGBE_VLVFB(vlvf_index * 2 + 1 - vind / 32))) {
@@ -3300,43 +3310,25 @@ wwn_prefix_err:
/**
* ixgbe_set_mac_anti_spoofing - Enable/Disable MAC anti-spoofing
* @hw: pointer to hardware structure
- * @enable: enable or disable switch for anti-spoofing
- * @pf: Physical Function pool - do not enable anti-spoofing for the PF
+ * @enable: enable or disable switch for MAC anti-spoofing
+ * @vf: Virtual Function pool - VF Pool to set for MAC anti-spoofing
*
**/
-void ixgbe_set_mac_anti_spoofing(struct ixgbe_hw *hw, bool enable, int pf)
+void ixgbe_set_mac_anti_spoofing(struct ixgbe_hw *hw, bool enable, int vf)
{
- int j;
- int pf_target_reg = pf >> 3;
- int pf_target_shift = pf % 8;
- u32 pfvfspoof = 0;
+ int vf_target_reg = vf >> 3;
+ int vf_target_shift = vf % 8;
+ u32 pfvfspoof;
if (hw->mac.type == ixgbe_mac_82598EB)
return;
+ pfvfspoof = IXGBE_READ_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg));
if (enable)
- pfvfspoof = IXGBE_SPOOF_MACAS_MASK;
-
- /*
- * PFVFSPOOF register array is size 8 with 8 bits assigned to
- * MAC anti-spoof enables in each register array element.
- */
- for (j = 0; j < pf_target_reg; j++)
- IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(j), pfvfspoof);
-
- /*
- * The PF should be allowed to spoof so that it can support
- * emulation mode NICs. Do not set the bits assigned to the PF
- */
- pfvfspoof &= (1 << pf_target_shift) - 1;
- IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(j), pfvfspoof);
-
- /*
- * Remaining pools belong to the PF so they do not need to have
- * anti-spoofing enabled.
- */
- for (j++; j < IXGBE_PFVFSPOOF_REG_COUNT; j++)
- IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(j), 0);
+ pfvfspoof |= BIT(vf_target_shift);
+ else
+ pfvfspoof &= ~BIT(vf_target_shift);
+ IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg), pfvfspoof);
}
/**
@@ -3357,9 +3349,9 @@ void ixgbe_set_vlan_anti_spoofing(struct ixgbe_hw *hw, bool enable, int vf)
pfvfspoof = IXGBE_READ_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg));
if (enable)
- pfvfspoof |= (1 << vf_target_shift);
+ pfvfspoof |= BIT(vf_target_shift);
else
- pfvfspoof &= ~(1 << vf_target_shift);
+ pfvfspoof &= ~BIT(vf_target_shift);
IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg), pfvfspoof);
}
@@ -3483,18 +3475,27 @@ static u8 ixgbe_calculate_checksum(u8 *buffer, u32 length)
* Communicates with the manageability block. On success return 0
* else return IXGBE_ERR_HOST_INTERFACE_COMMAND.
**/
-s32 ixgbe_host_interface_command(struct ixgbe_hw *hw, u32 *buffer,
+s32 ixgbe_host_interface_command(struct ixgbe_hw *hw, void *buffer,
u32 length, u32 timeout,
bool return_data)
{
- u32 hicr, i, bi, fwsts;
u32 hdr_size = sizeof(struct ixgbe_hic_hdr);
+ u32 hicr, i, bi, fwsts;
u16 buf_len, dword_len;
+ union {
+ struct ixgbe_hic_hdr hdr;
+ u32 u32arr[1];
+ } *bp = buffer;
+ s32 status;
- if (length == 0 || length > IXGBE_HI_MAX_BLOCK_BYTE_LENGTH) {
+ if (!length || length > IXGBE_HI_MAX_BLOCK_BYTE_LENGTH) {
hw_dbg(hw, "Buffer length failure buffersize-%d.\n", length);
return IXGBE_ERR_HOST_INTERFACE_COMMAND;
}
+ /* Take management host interface semaphore */
+ status = hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_SW_MNG_SM);
+ if (status)
+ return status;
/* Set bit 9 of FWSTS clearing FW reset indication */
fwsts = IXGBE_READ_REG(hw, IXGBE_FWSTS);
@@ -3502,26 +3503,27 @@ s32 ixgbe_host_interface_command(struct ixgbe_hw *hw, u32 *buffer,
/* Check that the host interface is enabled. */
hicr = IXGBE_READ_REG(hw, IXGBE_HICR);
- if ((hicr & IXGBE_HICR_EN) == 0) {
+ if (!(hicr & IXGBE_HICR_EN)) {
hw_dbg(hw, "IXGBE_HOST_EN bit disabled.\n");
- return IXGBE_ERR_HOST_INTERFACE_COMMAND;
+ status = IXGBE_ERR_HOST_INTERFACE_COMMAND;
+ goto rel_out;
}
/* Calculate length in DWORDs. We must be DWORD aligned */
- if ((length % (sizeof(u32))) != 0) {
+ if (length % sizeof(u32)) {
hw_dbg(hw, "Buffer length failure, not aligned to dword");
- return IXGBE_ERR_INVALID_ARGUMENT;
+ status = IXGBE_ERR_INVALID_ARGUMENT;
+ goto rel_out;
}
dword_len = length >> 2;
- /*
- * The device driver writes the relevant command block
+ /* The device driver writes the relevant command block
* into the ram area.
*/
for (i = 0; i < dword_len; i++)
IXGBE_WRITE_REG_ARRAY(hw, IXGBE_FLEX_MNG,
- i, cpu_to_le32(buffer[i]));
+ i, cpu_to_le32(bp->u32arr[i]));
/* Setting this bit tells the ARC that a new command is pending. */
IXGBE_WRITE_REG(hw, IXGBE_HICR, hicr | IXGBE_HICR_C);
@@ -3534,44 +3536,49 @@ s32 ixgbe_host_interface_command(struct ixgbe_hw *hw, u32 *buffer,
}
/* Check command successful completion. */
- if ((timeout != 0 && i == timeout) ||
- (!(IXGBE_READ_REG(hw, IXGBE_HICR) & IXGBE_HICR_SV))) {
+ if ((timeout && i == timeout) ||
+ !(IXGBE_READ_REG(hw, IXGBE_HICR) & IXGBE_HICR_SV)) {
hw_dbg(hw, "Command has failed with no status valid.\n");
- return IXGBE_ERR_HOST_INTERFACE_COMMAND;
+ status = IXGBE_ERR_HOST_INTERFACE_COMMAND;
+ goto rel_out;
}
if (!return_data)
- return 0;
+ goto rel_out;
/* Calculate length in DWORDs */
dword_len = hdr_size >> 2;
/* first pull in the header so we know the buffer length */
for (bi = 0; bi < dword_len; bi++) {
- buffer[bi] = IXGBE_READ_REG_ARRAY(hw, IXGBE_FLEX_MNG, bi);
- le32_to_cpus(&buffer[bi]);
+ bp->u32arr[bi] = IXGBE_READ_REG_ARRAY(hw, IXGBE_FLEX_MNG, bi);
+ le32_to_cpus(&bp->u32arr[bi]);
}
/* If there is any thing in data position pull it in */
- buf_len = ((struct ixgbe_hic_hdr *)buffer)->buf_len;
- if (buf_len == 0)
- return 0;
+ buf_len = bp->hdr.buf_len;
+ if (!buf_len)
+ goto rel_out;
- if (length < (buf_len + hdr_size)) {
+ if (length < round_up(buf_len, 4) + hdr_size) {
hw_dbg(hw, "Buffer not large enough for reply message.\n");
- return IXGBE_ERR_HOST_INTERFACE_COMMAND;
+ status = IXGBE_ERR_HOST_INTERFACE_COMMAND;
+ goto rel_out;
}
/* Calculate length in DWORDs, add 3 for odd lengths */
dword_len = (buf_len + 3) >> 2;
- /* Pull in the rest of the buffer (bi is where we left off)*/
+ /* Pull in the rest of the buffer (bi is where we left off) */
for (; bi <= dword_len; bi++) {
- buffer[bi] = IXGBE_READ_REG_ARRAY(hw, IXGBE_FLEX_MNG, bi);
- le32_to_cpus(&buffer[bi]);
+ bp->u32arr[bi] = IXGBE_READ_REG_ARRAY(hw, IXGBE_FLEX_MNG, bi);
+ le32_to_cpus(&bp->u32arr[bi]);
}
- return 0;
+rel_out:
+ hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_SW_MNG_SM);
+
+ return status;
}
/**
@@ -3594,13 +3601,10 @@ s32 ixgbe_set_fw_drv_ver_generic(struct ixgbe_hw *hw, u8 maj, u8 min,
int i;
s32 ret_val;
- if (hw->mac.ops.acquire_swfw_sync(hw, IXGBE_GSSR_SW_MNG_SM))
- return IXGBE_ERR_SWFW_SYNC;
-
fw_cmd.hdr.cmd = FW_CEM_CMD_DRIVER_INFO;
fw_cmd.hdr.buf_len = FW_CEM_CMD_DRIVER_INFO_LEN;
fw_cmd.hdr.cmd_or_resp.cmd_resv = FW_CEM_CMD_RESERVED;
- fw_cmd.port_num = (u8)hw->bus.func;
+ fw_cmd.port_num = hw->bus.func;
fw_cmd.ver_maj = maj;
fw_cmd.ver_min = min;
fw_cmd.ver_build = build;
@@ -3612,7 +3616,7 @@ s32 ixgbe_set_fw_drv_ver_generic(struct ixgbe_hw *hw, u8 maj, u8 min,
fw_cmd.pad2 = 0;
for (i = 0; i <= FW_CEM_MAX_RETRIES; i++) {
- ret_val = ixgbe_host_interface_command(hw, (u32 *)&fw_cmd,
+ ret_val = ixgbe_host_interface_command(hw, &fw_cmd,
sizeof(fw_cmd),
IXGBE_HI_COMMAND_TIMEOUT,
true);
@@ -3628,7 +3632,6 @@ s32 ixgbe_set_fw_drv_ver_generic(struct ixgbe_hw *hw, u8 maj, u8 min,
break;
}
- hw->mac.ops.release_swfw_sync(hw, IXGBE_GSSR_SW_MNG_SM);
return ret_val;
}
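The reworked ixgbe_host_interface_command() above now takes the SW/FW management semaphore itself (which is why the ixgbe_set_fw_drv_ver_generic() hunk drops its own acquire/release), views the caller's buffer through a header/DWORD union, and rejects any reply that would not fit once the firmware-reported payload is rounded up to a DWORD boundary (length < round_up(buf_len, 4) + hdr_size). Below is a minimal stand-alone C sketch of just that length check and union view; hic_hdr, reply_fits() and the local round_up() macro are illustrative stand-ins, not the driver's real types or helpers.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* local stand-in for the kernel's round_up(); illustrative only */
#define round_up(x, a)  ((((x) + (a) - 1) / (a)) * (a))

/* illustrative header layout, not the real struct ixgbe_hic_hdr */
struct hic_hdr {
    uint8_t cmd;
    uint8_t buf_len;        /* payload length reported by firmware */
    uint8_t cmd_or_resp;
    uint8_t checksum;
};

/* the check added above: header plus DWORD-padded payload must fit */
static int reply_fits(uint32_t length, uint8_t buf_len)
{
    uint32_t hdr_size = sizeof(struct hic_hdr);

    return length >= round_up((uint32_t)buf_len, 4) + hdr_size;
}

int main(void)
{
    union {
        struct hic_hdr hdr;
        uint32_t u32arr[1];     /* DWORD view, as the driver uses for register I/O */
    } reply;

    memset(&reply, 0, sizeof(reply));
    reply.hdr.buf_len = 6;      /* firmware reports a 6-byte payload */

    /* 6 bytes pad to 8, plus the 4-byte header: 12 bytes are needed */
    printf("need %u bytes; 12-byte buffer fits: %d, 8-byte buffer fits: %d\n",
           (unsigned int)(round_up((uint32_t)reply.hdr.buf_len, 4) +
                          sizeof(struct hic_hdr)),
           reply_fits(12, reply.hdr.buf_len),
           reply_fits(8, reply.hdr.buf_len));
    return 0;
}

The rounding matters because the reply is pulled back in whole DWORDs, so a buffer sized for buf_len alone could still be overrun.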
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.h
index 2b9563137fd8..6d4c260d0cbd 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_common.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_common.h
@@ -1,7 +1,7 @@
/*******************************************************************************
Intel 10 Gigabit PCI Express Linux driver
- Copyright(c) 1999 - 2014 Intel Corporation.
+ Copyright(c) 1999 - 2016 Intel Corporation.
This program is free software; you can redistribute it and/or modify it
under the terms and conditions of the GNU General Public License,
@@ -81,6 +81,7 @@ s32 ixgbe_disable_rx_buff_generic(struct ixgbe_hw *hw);
s32 ixgbe_enable_rx_buff_generic(struct ixgbe_hw *hw);
s32 ixgbe_enable_rx_dma_generic(struct ixgbe_hw *hw, u32 regval);
s32 ixgbe_fc_enable_generic(struct ixgbe_hw *hw);
+s32 ixgbe_setup_fc_generic(struct ixgbe_hw *);
bool ixgbe_device_supports_autoneg_fc(struct ixgbe_hw *hw);
void ixgbe_fc_autoneg(struct ixgbe_hw *hw);
@@ -105,13 +106,13 @@ s32 prot_autoc_write_generic(struct ixgbe_hw *hw, u32 reg_val, bool locked);
s32 ixgbe_blink_led_start_generic(struct ixgbe_hw *hw, u32 index);
s32 ixgbe_blink_led_stop_generic(struct ixgbe_hw *hw, u32 index);
-void ixgbe_set_mac_anti_spoofing(struct ixgbe_hw *hw, bool enable, int pf);
+void ixgbe_set_mac_anti_spoofing(struct ixgbe_hw *hw, bool enable, int vf);
void ixgbe_set_vlan_anti_spoofing(struct ixgbe_hw *hw, bool enable, int vf);
s32 ixgbe_get_device_caps_generic(struct ixgbe_hw *hw, u16 *device_caps);
s32 ixgbe_set_fw_drv_ver_generic(struct ixgbe_hw *hw, u8 maj, u8 min,
u8 build, u8 ver);
-s32 ixgbe_host_interface_command(struct ixgbe_hw *hw, u32 *buffer,
- u32 length, u32 timeout, bool return_data);
+s32 ixgbe_host_interface_command(struct ixgbe_hw *hw, void *, u32 length,
+ u32 timeout, bool return_data);
void ixgbe_clear_tx_pending(struct ixgbe_hw *hw);
bool ixgbe_mng_present(struct ixgbe_hw *hw);
bool ixgbe_mng_enabled(struct ixgbe_hw *hw);
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.c
index 02c7333a9c83..072ef3b5fc61 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb.c
@@ -1,7 +1,7 @@
/*******************************************************************************
Intel 10 Gigabit PCI Express Linux driver
- Copyright(c) 1999 - 2014 Intel Corporation.
+ Copyright(c) 1999 - 2016 Intel Corporation.
This program is free software; you can redistribute it and/or modify it
under the terms and conditions of the GNU General Public License,
@@ -186,7 +186,7 @@ void ixgbe_dcb_unpack_pfc(struct ixgbe_dcb_config *cfg, u8 *pfc_en)
for (*pfc_en = 0, tc = 0; tc < MAX_TRAFFIC_CLASS; tc++) {
if (tc_config[tc].dcb_pfc != pfc_disabled)
- *pfc_en |= 1 << tc;
+ *pfc_en |= BIT(tc);
}
}
@@ -232,7 +232,7 @@ void ixgbe_dcb_unpack_prio(struct ixgbe_dcb_config *cfg, int direction,
u8 ixgbe_dcb_get_tc_from_up(struct ixgbe_dcb_config *cfg, int direction, u8 up)
{
struct tc_configuration *tc_config = &cfg->tc_config[0];
- u8 prio_mask = 1 << up;
+ u8 prio_mask = BIT(up);
u8 tc = cfg->num_tcs.pg_tcs;
/* If tc is 0 then DCB is likely not enabled or supported */
@@ -293,6 +293,7 @@ s32 ixgbe_dcb_hw_config(struct ixgbe_hw *hw,
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
return ixgbe_dcb_hw_config_82599(hw, pfc_en, refill, max,
bwgid, ptype, prio_tc);
default:
@@ -311,6 +312,7 @@ s32 ixgbe_dcb_hw_pfc_config(struct ixgbe_hw *hw, u8 pfc_en, u8 *prio_tc)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
return ixgbe_dcb_config_pfc_82599(hw, pfc_en, prio_tc);
default:
break;
@@ -368,6 +370,7 @@ s32 ixgbe_dcb_hw_ets_config(struct ixgbe_hw *hw,
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
ixgbe_dcb_config_rx_arbiter_82599(hw, refill, max,
bwg_id, prio_type, prio_tc);
ixgbe_dcb_config_tx_desc_arbiter_82599(hw, refill, max,
@@ -398,6 +401,7 @@ void ixgbe_dcb_read_rtrup2tc(struct ixgbe_hw *hw, u8 *map)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
ixgbe_dcb_read_rtrup2tc_82599(hw, map);
break;
default:
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_82598.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_82598.c
index d3ba63f9ad37..b79e93a5b699 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_82598.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_82598.c
@@ -210,7 +210,7 @@ s32 ixgbe_dcb_config_pfc_82598(struct ixgbe_hw *hw, u8 pfc_en)
/* Configure PFC Tx thresholds per TC */
for (i = 0; i < MAX_TRAFFIC_CLASS; i++) {
- if (!(pfc_en & (1 << i))) {
+ if (!(pfc_en & BIT(i))) {
IXGBE_WRITE_REG(hw, IXGBE_FCRTL(i), 0);
IXGBE_WRITE_REG(hw, IXGBE_FCRTH(i), 0);
continue;
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_82599.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_82599.c
index b5cc989a3d23..1011d644978f 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_82599.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_82599.c
@@ -248,7 +248,7 @@ s32 ixgbe_dcb_config_pfc_82599(struct ixgbe_hw *hw, u8 pfc_en, u8 *prio_tc)
int enabled = 0;
for (j = 0; j < MAX_USER_PRIORITY; j++) {
- if ((prio_tc[j] == i) && (pfc_en & (1 << j))) {
+ if ((prio_tc[j] == i) && (pfc_en & BIT(j))) {
enabled = 1;
break;
}
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c
index 2707bda37418..b8fc3cfec831 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_dcb_nl.c
@@ -62,7 +62,7 @@ static int ixgbe_copy_dcb_cfg(struct ixgbe_adapter *adapter, int tc_max)
};
u8 up = dcb_getapp(adapter->netdev, &app);
- if (up && !(up & (1 << adapter->fcoe.up)))
+ if (up && !(up & BIT(adapter->fcoe.up)))
changes |= BIT_APP_UPCHG;
#endif
@@ -657,7 +657,7 @@ static int ixgbe_dcbnl_ieee_setapp(struct net_device *dev,
app->protocol == ETH_P_FCOE) {
u8 app_mask = dcb_ieee_getapp_mask(dev, app);
- if (app_mask & (1 << adapter->fcoe.up))
+ if (app_mask & BIT(adapter->fcoe.up))
return 0;
adapter->fcoe.up = app->priority;
@@ -700,7 +700,7 @@ static int ixgbe_dcbnl_ieee_delapp(struct net_device *dev,
app->protocol == ETH_P_FCOE) {
u8 app_mask = dcb_ieee_getapp_mask(dev, app);
- if (app_mask & (1 << adapter->fcoe.up))
+ if (app_mask & BIT(adapter->fcoe.up))
return 0;
adapter->fcoe.up = app_mask ?
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
index b3530e1e3ce1..59b771b9b354 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ethtool.c
@@ -1,7 +1,7 @@
/*******************************************************************************
Intel 10 Gigabit PCI Express Linux driver
- Copyright(c) 1999 - 2014 Intel Corporation.
+ Copyright(c) 1999 - 2016 Intel Corporation.
This program is free software; you can redistribute it and/or modify it
under the terms and conditions of the GNU General Public License,
@@ -533,10 +533,8 @@ static void ixgbe_get_regs(struct net_device *netdev,
/* Flow Control */
regs_buff[30] = IXGBE_READ_REG(hw, IXGBE_PFCTOP);
- regs_buff[31] = IXGBE_READ_REG(hw, IXGBE_FCTTV(0));
- regs_buff[32] = IXGBE_READ_REG(hw, IXGBE_FCTTV(1));
- regs_buff[33] = IXGBE_READ_REG(hw, IXGBE_FCTTV(2));
- regs_buff[34] = IXGBE_READ_REG(hw, IXGBE_FCTTV(3));
+ for (i = 0; i < 4; i++)
+ regs_buff[31 + i] = IXGBE_READ_REG(hw, IXGBE_FCTTV(i));
for (i = 0; i < 8; i++) {
switch (hw->mac.type) {
case ixgbe_mac_82598EB:
@@ -547,6 +545,7 @@ static void ixgbe_get_regs(struct net_device *netdev,
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
regs_buff[35 + i] = IXGBE_READ_REG(hw, IXGBE_FCRTL_82599(i));
regs_buff[43 + i] = IXGBE_READ_REG(hw, IXGBE_FCRTH_82599(i));
break;
@@ -660,6 +659,7 @@ static void ixgbe_get_regs(struct net_device *netdev,
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
regs_buff[830] = IXGBE_READ_REG(hw, IXGBE_RTTDCS);
regs_buff[832] = IXGBE_READ_REG(hw, IXGBE_RTRPCS);
for (i = 0; i < 8; i++)
@@ -718,8 +718,10 @@ static void ixgbe_get_regs(struct net_device *netdev,
regs_buff[939] = IXGBE_GET_STAT(adapter, bprc);
regs_buff[940] = IXGBE_GET_STAT(adapter, mprc);
regs_buff[941] = IXGBE_GET_STAT(adapter, gptc);
- regs_buff[942] = IXGBE_GET_STAT(adapter, gorc);
- regs_buff[944] = IXGBE_GET_STAT(adapter, gotc);
+ regs_buff[942] = (u32)IXGBE_GET_STAT(adapter, gorc);
+ regs_buff[943] = (u32)(IXGBE_GET_STAT(adapter, gorc) >> 32);
+ regs_buff[944] = (u32)IXGBE_GET_STAT(adapter, gotc);
+ regs_buff[945] = (u32)(IXGBE_GET_STAT(adapter, gotc) >> 32);
for (i = 0; i < 8; i++)
regs_buff[946 + i] = IXGBE_GET_STAT(adapter, rnbc[i]);
regs_buff[954] = IXGBE_GET_STAT(adapter, ruc);
@@ -729,7 +731,8 @@ static void ixgbe_get_regs(struct net_device *netdev,
regs_buff[958] = IXGBE_GET_STAT(adapter, mngprc);
regs_buff[959] = IXGBE_GET_STAT(adapter, mngpdc);
regs_buff[960] = IXGBE_GET_STAT(adapter, mngptc);
- regs_buff[961] = IXGBE_GET_STAT(adapter, tor);
+ regs_buff[961] = (u32)IXGBE_GET_STAT(adapter, tor);
+ regs_buff[962] = (u32)(IXGBE_GET_STAT(adapter, tor) >> 32);
regs_buff[963] = IXGBE_GET_STAT(adapter, tpr);
regs_buff[964] = IXGBE_GET_STAT(adapter, tpt);
regs_buff[965] = IXGBE_GET_STAT(adapter, ptc64);
@@ -801,15 +804,11 @@ static void ixgbe_get_regs(struct net_device *netdev,
regs_buff[1096 + i] = IXGBE_READ_REG(hw, IXGBE_TIC_DW(i));
regs_buff[1100] = IXGBE_READ_REG(hw, IXGBE_TDPROBE);
regs_buff[1101] = IXGBE_READ_REG(hw, IXGBE_TXBUFCTRL);
- regs_buff[1102] = IXGBE_READ_REG(hw, IXGBE_TXBUFDATA0);
- regs_buff[1103] = IXGBE_READ_REG(hw, IXGBE_TXBUFDATA1);
- regs_buff[1104] = IXGBE_READ_REG(hw, IXGBE_TXBUFDATA2);
- regs_buff[1105] = IXGBE_READ_REG(hw, IXGBE_TXBUFDATA3);
+ for (i = 0; i < 4; i++)
+ regs_buff[1102 + i] = IXGBE_READ_REG(hw, IXGBE_TXBUFDATA(i));
regs_buff[1106] = IXGBE_READ_REG(hw, IXGBE_RXBUFCTRL);
- regs_buff[1107] = IXGBE_READ_REG(hw, IXGBE_RXBUFDATA0);
- regs_buff[1108] = IXGBE_READ_REG(hw, IXGBE_RXBUFDATA1);
- regs_buff[1109] = IXGBE_READ_REG(hw, IXGBE_RXBUFDATA2);
- regs_buff[1110] = IXGBE_READ_REG(hw, IXGBE_RXBUFDATA3);
+ for (i = 0; i < 4; i++)
+ regs_buff[1107 + i] = IXGBE_READ_REG(hw, IXGBE_RXBUFDATA(i));
for (i = 0; i < 8; i++)
regs_buff[1111 + i] = IXGBE_READ_REG(hw, IXGBE_PCIE_DIAG(i));
regs_buff[1119] = IXGBE_READ_REG(hw, IXGBE_RFVAL);
@@ -1443,6 +1442,7 @@ static int ixgbe_reg_test(struct ixgbe_adapter *adapter, u64 *data)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
toggle = 0x7FFFF30F;
test = reg_test_82599;
break;
@@ -1583,7 +1583,7 @@ static int ixgbe_intr_test(struct ixgbe_adapter *adapter, u64 *data)
/* Test each interrupt */
for (; i < 10; i++) {
/* Interrupt to test */
- mask = 1 << i;
+ mask = BIT(i);
if (!shared_int) {
/*
@@ -1681,6 +1681,7 @@ static void ixgbe_free_desc_rings(struct ixgbe_adapter *adapter)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
reg_ctl = IXGBE_READ_REG(hw, IXGBE_DMATXCTL);
reg_ctl &= ~IXGBE_DMATXCTL_TE;
IXGBE_WRITE_REG(hw, IXGBE_DMATXCTL, reg_ctl);
@@ -1720,6 +1721,7 @@ static int ixgbe_setup_desc_rings(struct ixgbe_adapter *adapter)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
reg_data = IXGBE_READ_REG(&adapter->hw, IXGBE_DMATXCTL);
reg_data |= IXGBE_DMATXCTL_TE;
IXGBE_WRITE_REG(&adapter->hw, IXGBE_DMATXCTL, reg_data);
@@ -1780,6 +1782,7 @@ static int ixgbe_setup_loopback_test(struct ixgbe_adapter *adapter)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
reg_data = IXGBE_READ_REG(hw, IXGBE_MACC);
reg_data |= IXGBE_MACC_FLU;
IXGBE_WRITE_REG(hw, IXGBE_MACC, reg_data);
@@ -2991,6 +2994,7 @@ static int ixgbe_get_ts_info(struct net_device *dev,
switch (adapter->hw.mac.type) {
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
case ixgbe_mac_X540:
case ixgbe_mac_82599EB:
info->so_timestamping =
@@ -3007,14 +3011,14 @@ static int ixgbe_get_ts_info(struct net_device *dev,
info->phc_index = -1;
info->tx_types =
- (1 << HWTSTAMP_TX_OFF) |
- (1 << HWTSTAMP_TX_ON);
+ BIT(HWTSTAMP_TX_OFF) |
+ BIT(HWTSTAMP_TX_ON);
info->rx_filters =
- (1 << HWTSTAMP_FILTER_NONE) |
- (1 << HWTSTAMP_FILTER_PTP_V1_L4_SYNC) |
- (1 << HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) |
- (1 << HWTSTAMP_FILTER_PTP_V2_EVENT);
+ BIT(HWTSTAMP_FILTER_NONE) |
+ BIT(HWTSTAMP_FILTER_PTP_V1_L4_SYNC) |
+ BIT(HWTSTAMP_FILTER_PTP_V1_L4_DELAY_REQ) |
+ BIT(HWTSTAMP_FILTER_PTP_V2_EVENT);
break;
default:
return ethtool_op_get_ts_info(dev, info);
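Several hunks above, and more in the ixgbe_main.c diff below, swap open-coded shifts for the kernel bit macros: 1 << i becomes BIT(i), (u64)1 << i becomes BIT_ULL(i), and (~0) << vf_shift becomes GENMASK(31, vf_shift), keeping the shifts unsigned and, for BIT_ULL, wide enough for indexes past bit 31. A small self-contained check of those equivalences, with the macros redefined locally as stand-ins (the kernel's GENMASK() is defined in terms of the machine word size, so a 32-bit GENMASK32 is used here):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* local stand-ins for the kernel macros used throughout the diff */
#define BIT(n)          (1UL << (n))
#define BIT_ULL(n)      (1ULL << (n))
/* 32-bit flavour of GENMASK, enough for the register fields touched here */
#define GENMASK32(h, l) ((~0u >> (31 - (h))) << (l))

int main(void)
{
    unsigned int i;

    /* register bit masks: same value as 1 << i, but the shift stays unsigned */
    for (i = 0; i < 32; i++)
        assert((uint32_t)BIT(i) == 1u << i);

    /* the old eics |= ((u64)1 << i) pattern, now BIT_ULL(i) */
    for (i = 0; i < 64; i++)
        assert(BIT_ULL(i) == (uint64_t)1 << i);

    /* VFRE/VFTE writes: GENMASK(31, vf_shift) matches the old (~0) << vf_shift */
    for (i = 0; i < 32; i++)
        assert(GENMASK32(31, i) == (uint32_t)(~0u << i));

    printf("BIT()/BIT_ULL()/GENMASK() match the open-coded shifts they replace\n");
    return 0;
}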
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
index e771e764daa3..bcdc88444ceb 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_lib.c
@@ -1,7 +1,7 @@
/*******************************************************************************
Intel 10 Gigabit PCI Express Linux driver
- Copyright(c) 1999 - 2013 Intel Corporation.
+ Copyright(c) 1999 - 2016 Intel Corporation.
This program is free software; you can redistribute it and/or modify it
under the terms and conditions of the GNU General Public License,
@@ -128,6 +128,7 @@ static void ixgbe_get_first_reg_idx(struct ixgbe_adapter *adapter, u8 tc,
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
if (num_tcs > 4) {
/*
* TCs : TC0/1 TC2/3 TC4-7
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
index 7df3fe29b210..088c47cf27d9 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c
@@ -1,7 +1,7 @@
/*******************************************************************************
Intel 10 Gigabit PCI Express Linux driver
- Copyright(c) 1999 - 2015 Intel Corporation.
+ Copyright(c) 1999 - 2016 Intel Corporation.
This program is free software; you can redistribute it and/or modify it
under the terms and conditions of the GNU General Public License,
@@ -53,15 +53,7 @@
#include <net/vxlan.h>
#include <net/pkt_cls.h>
#include <net/tc_act/tc_gact.h>
-
-#ifdef CONFIG_OF
-#include <linux/of_net.h>
-#endif
-
-#ifdef CONFIG_SPARC
-#include <asm/idprom.h>
-#include <asm/prom.h>
-#endif
+#include <net/tc_act/tc_mirred.h>
#include "ixgbe.h"
#include "ixgbe_common.h"
@@ -79,10 +71,10 @@ char ixgbe_default_device_descr[] =
static char ixgbe_default_device_descr[] =
"Intel(R) 10 Gigabit Network Connection";
#endif
-#define DRV_VERSION "4.2.1-k"
+#define DRV_VERSION "4.4.0-k"
const char ixgbe_driver_version[] = DRV_VERSION;
static const char ixgbe_copyright[] =
- "Copyright (c) 1999-2015 Intel Corporation.";
+ "Copyright (c) 1999-2016 Intel Corporation.";
static const char ixgbe_overheat_msg[] = "Network adapter has been stopped because it has over heated. Restart the computer. If the problem persists, power off the system and replace the adapter";
@@ -92,6 +84,7 @@ static const struct ixgbe_info *ixgbe_info_tbl[] = {
[board_X540] = &ixgbe_X540_info,
[board_X550] = &ixgbe_X550_info,
[board_X550EM_x] = &ixgbe_X550EM_x_info,
+ [board_x550em_a] = &ixgbe_x550em_a_info,
};
/* ixgbe_pci_tbl - PCI Device ID Table
@@ -134,10 +127,17 @@ static const struct pci_device_id ixgbe_pci_tbl[] = {
{PCI_VDEVICE(INTEL, IXGBE_DEV_ID_82599_SFP_SF_QP), board_82599 },
{PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X540T1), board_X540 },
{PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X550T), board_X550},
+ {PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X550T1), board_X550},
{PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X550EM_X_KX4), board_X550EM_x},
{PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X550EM_X_KR), board_X550EM_x},
{PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X550EM_X_10G_T), board_X550EM_x},
{PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X550EM_X_SFP), board_X550EM_x},
+ {PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X550EM_A_KR), board_x550em_a },
+ {PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X550EM_A_KR_L), board_x550em_a },
+ {PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X550EM_A_SFP_N), board_x550em_a },
+ {PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X550EM_A_SGMII), board_x550em_a },
+ {PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X550EM_A_SGMII_L), board_x550em_a },
+ {PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X550EM_A_SFP), board_x550em_a },
/* required last entry */
{0, }
};
@@ -372,6 +372,27 @@ u32 ixgbe_read_reg(struct ixgbe_hw *hw, u32 reg)
if (ixgbe_removed(reg_addr))
return IXGBE_FAILED_READ_REG;
+ if (unlikely(hw->phy.nw_mng_if_sel &
+ IXGBE_NW_MNG_IF_SEL_ENABLE_10_100M)) {
+ struct ixgbe_adapter *adapter;
+ int i;
+
+ for (i = 0; i < 200; ++i) {
+ value = readl(reg_addr + IXGBE_MAC_SGMII_BUSY);
+ if (likely(!value))
+ goto writes_completed;
+ if (value == IXGBE_FAILED_READ_REG) {
+ ixgbe_remove_adapter(hw);
+ return IXGBE_FAILED_READ_REG;
+ }
+ udelay(5);
+ }
+
+ adapter = hw->back;
+ e_warn(hw, "register writes incomplete %08x\n", value);
+ }
+
+writes_completed:
value = readl(reg_addr + reg);
if (unlikely(value == IXGBE_FAILED_READ_REG))
ixgbe_check_remove(hw, reg);
@@ -588,7 +609,7 @@ static void ixgbe_dump(struct ixgbe_adapter *adapter)
pr_info("%-15s %016lX %016lX %016lX\n",
netdev->name,
netdev->state,
- netdev->trans_start,
+ dev_trans_start(netdev),
netdev->last_rx);
}
@@ -869,6 +890,7 @@ static void ixgbe_set_ivar(struct ixgbe_adapter *adapter, s8 direction,
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
if (direction == -1) {
/* other causes */
msix_vector |= IXGBE_IVAR_ALLOC_VAL;
@@ -907,6 +929,7 @@ static inline void ixgbe_irq_rearm_queues(struct ixgbe_adapter *adapter,
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
mask = (qmask & 0xFFFFFFFF);
IXGBE_WRITE_REG(&adapter->hw, IXGBE_EICS_EX(0), mask);
mask = (qmask >> 32);
@@ -1087,9 +1110,40 @@ static void ixgbe_tx_timeout_reset(struct ixgbe_adapter *adapter)
}
/**
+ * ixgbe_tx_maxrate - callback to set the maximum per-queue bitrate
+ **/
+static int ixgbe_tx_maxrate(struct net_device *netdev,
+ int queue_index, u32 maxrate)
+{
+ struct ixgbe_adapter *adapter = netdev_priv(netdev);
+ struct ixgbe_hw *hw = &adapter->hw;
+ u32 bcnrc_val = ixgbe_link_mbps(adapter);
+
+ if (!maxrate)
+ return 0;
+
+ /* Calculate the rate factor values to set */
+ bcnrc_val <<= IXGBE_RTTBCNRC_RF_INT_SHIFT;
+ bcnrc_val /= maxrate;
+
+ /* clear everything but the rate factor */
+ bcnrc_val &= IXGBE_RTTBCNRC_RF_INT_MASK |
+ IXGBE_RTTBCNRC_RF_DEC_MASK;
+
+ /* enable the rate scheduler */
+ bcnrc_val |= IXGBE_RTTBCNRC_RS_ENA;
+
+ IXGBE_WRITE_REG(hw, IXGBE_RTTDQSEL, queue_index);
+ IXGBE_WRITE_REG(hw, IXGBE_RTTBCNRC, bcnrc_val);
+
+ return 0;
+}
+
+/**
* ixgbe_clean_tx_irq - Reclaim resources after transmit completes
* @q_vector: structure containing interrupt and ring information
* @tx_ring: tx ring to clean
+ * @napi_budget: Used to determine if we are in netpoll
**/
static bool ixgbe_clean_tx_irq(struct ixgbe_q_vector *q_vector,
struct ixgbe_ring *tx_ring, int napi_budget)
@@ -2192,7 +2246,7 @@ static void ixgbe_configure_msix(struct ixgbe_adapter *adapter)
/* Populate MSIX to EITR Select */
if (adapter->num_vfs > 32) {
- u32 eitrsel = (1 << (adapter->num_vfs - 32)) - 1;
+ u32 eitrsel = BIT(adapter->num_vfs - 32) - 1;
IXGBE_WRITE_REG(&adapter->hw, IXGBE_EITRSEL, eitrsel);
}
@@ -2222,6 +2276,7 @@ static void ixgbe_configure_msix(struct ixgbe_adapter *adapter)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
ixgbe_set_ivar(adapter, -1, 1, v_idx);
break;
default:
@@ -2333,6 +2388,7 @@ void ixgbe_write_eitr(struct ixgbe_q_vector *q_vector)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
/*
* set the WDIS bit to not clear the timer bits and cause an
* immediate assertion of the interrupt
@@ -2494,6 +2550,7 @@ static inline bool ixgbe_is_sfp(struct ixgbe_hw *hw)
return false;
case ixgbe_mac_82599EB:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
switch (hw->mac.ops.get_media_type(hw)) {
case ixgbe_media_type_fiber:
case ixgbe_media_type_fiber_qsfp:
@@ -2568,6 +2625,7 @@ static inline void ixgbe_irq_enable_queues(struct ixgbe_adapter *adapter,
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
mask = (qmask & 0xFFFFFFFF);
if (mask)
IXGBE_WRITE_REG(hw, IXGBE_EIMS_EX(0), mask);
@@ -2596,6 +2654,7 @@ static inline void ixgbe_irq_disable_queues(struct ixgbe_adapter *adapter,
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
mask = (qmask & 0xFFFFFFFF);
if (mask)
IXGBE_WRITE_REG(hw, IXGBE_EIMC_EX(0), mask);
@@ -2631,6 +2690,7 @@ static inline void ixgbe_irq_enable(struct ixgbe_adapter *adapter, bool queues,
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
mask |= IXGBE_EIMS_TS;
break;
default:
@@ -2646,7 +2706,10 @@ static inline void ixgbe_irq_enable(struct ixgbe_adapter *adapter, bool queues,
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
- if (adapter->hw.device_id == IXGBE_DEV_ID_X550EM_X_SFP)
+ case ixgbe_mac_x550em_a:
+ if (adapter->hw.device_id == IXGBE_DEV_ID_X550EM_X_SFP ||
+ adapter->hw.device_id == IXGBE_DEV_ID_X550EM_A_SFP ||
+ adapter->hw.device_id == IXGBE_DEV_ID_X550EM_A_SFP_N)
mask |= IXGBE_EIMS_GPI_SDP0(&adapter->hw);
if (adapter->hw.phy.type == ixgbe_phy_x550em_ext_t)
mask |= IXGBE_EICR_GPI_SDP0_X540;
@@ -2704,6 +2767,7 @@ static irqreturn_t ixgbe_msix_other(int irq, void *data)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
if (hw->phy.type == ixgbe_phy_x550em_ext_t &&
(eicr & IXGBE_EICR_GPI_SDP0_X540)) {
adapter->flags2 |= IXGBE_FLAG2_PHY_INTERRUPT;
@@ -2786,8 +2850,10 @@ int ixgbe_poll(struct napi_struct *napi, int budget)
ixgbe_update_dca(q_vector);
#endif
- ixgbe_for_each_ring(ring, q_vector->tx)
- clean_complete &= !!ixgbe_clean_tx_irq(q_vector, ring, budget);
+ ixgbe_for_each_ring(ring, q_vector->tx) {
+ if (!ixgbe_clean_tx_irq(q_vector, ring, budget))
+ clean_complete = false;
+ }
/* Exit if we are called by netpoll or busy polling is active */
if ((budget <= 0) || !ixgbe_qv_lock_napi(q_vector))
@@ -2805,7 +2871,8 @@ int ixgbe_poll(struct napi_struct *napi, int budget)
per_ring_budget);
work_done += cleaned;
- clean_complete &= (cleaned < per_ring_budget);
+ if (cleaned >= per_ring_budget)
+ clean_complete = false;
}
ixgbe_qv_unlock_napi(q_vector);
@@ -2818,7 +2885,7 @@ int ixgbe_poll(struct napi_struct *napi, int budget)
if (adapter->rx_itr_setting & 1)
ixgbe_set_itr(q_vector);
if (!test_bit(__IXGBE_DOWN, &adapter->state))
- ixgbe_irq_enable_queues(adapter, ((u64)1 << q_vector->v_idx));
+ ixgbe_irq_enable_queues(adapter, BIT_ULL(q_vector->v_idx));
return 0;
}
@@ -2937,6 +3004,7 @@ static irqreturn_t ixgbe_intr(int irq, void *data)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
if (eicr & IXGBE_EICR_ECC) {
e_info(link, "Received ECC Err, initiating reset\n");
adapter->flags2 |= IXGBE_FLAG2_RESET_REQUESTED;
@@ -3033,6 +3101,7 @@ static inline void ixgbe_irq_disable(struct ixgbe_adapter *adapter)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
IXGBE_WRITE_REG(&adapter->hw, IXGBE_EIMC, 0xFFFF0000);
IXGBE_WRITE_REG(&adapter->hw, IXGBE_EIMC_EX(0), ~0);
IXGBE_WRITE_REG(&adapter->hw, IXGBE_EIMC_EX(1), ~0);
@@ -3109,15 +3178,15 @@ void ixgbe_configure_tx_ring(struct ixgbe_adapter *adapter,
* currently 40.
*/
if (!ring->q_vector || (ring->q_vector->itr < IXGBE_100K_ITR))
- txdctl |= (1 << 16); /* WTHRESH = 1 */
+ txdctl |= 1u << 16; /* WTHRESH = 1 */
else
- txdctl |= (8 << 16); /* WTHRESH = 8 */
+ txdctl |= 8u << 16; /* WTHRESH = 8 */
/*
* Setting PTHRESH to 32 both improves performance
* and avoids a TX hang with DFP enabled
*/
- txdctl |= (1 << 8) | /* HTHRESH = 1 */
+ txdctl |= (1u << 8) | /* HTHRESH = 1 */
32; /* PTHRESH = 32 */
/* reinitialize flowdirector state */
@@ -3669,9 +3738,9 @@ static void ixgbe_setup_psrtype(struct ixgbe_adapter *adapter)
return;
if (rss_i > 3)
- psrtype |= 2 << 29;
+ psrtype |= 2u << 29;
else if (rss_i > 1)
- psrtype |= 1 << 29;
+ psrtype |= 1u << 29;
for_each_set_bit(pool, &adapter->fwd_bitmask, 32)
IXGBE_WRITE_REG(hw, IXGBE_PSRTYPE(VMDQ_P(pool)), psrtype);
@@ -3698,9 +3767,9 @@ static void ixgbe_configure_virtualization(struct ixgbe_adapter *adapter)
reg_offset = (VMDQ_P(0) >= 32) ? 1 : 0;
/* Enable only the PF's pool for Tx/Rx */
- IXGBE_WRITE_REG(hw, IXGBE_VFRE(reg_offset), (~0) << vf_shift);
+ IXGBE_WRITE_REG(hw, IXGBE_VFRE(reg_offset), GENMASK(31, vf_shift));
IXGBE_WRITE_REG(hw, IXGBE_VFRE(reg_offset ^ 1), reg_offset - 1);
- IXGBE_WRITE_REG(hw, IXGBE_VFTE(reg_offset), (~0) << vf_shift);
+ IXGBE_WRITE_REG(hw, IXGBE_VFTE(reg_offset), GENMASK(31, vf_shift));
IXGBE_WRITE_REG(hw, IXGBE_VFTE(reg_offset ^ 1), reg_offset - 1);
if (adapter->bridge_mode == BRIDGE_MODE_VEB)
IXGBE_WRITE_REG(hw, IXGBE_PFDTXGSWC, IXGBE_PFDTXGSWC_VT_LBEN);
@@ -3729,34 +3798,10 @@ static void ixgbe_configure_virtualization(struct ixgbe_adapter *adapter)
IXGBE_WRITE_REG(hw, IXGBE_GCR_EXT, gcr_ext);
-
- /* Enable MAC Anti-Spoofing */
- hw->mac.ops.set_mac_anti_spoofing(hw, (adapter->num_vfs != 0),
- adapter->num_vfs);
-
- /* Ensure LLDP and FC is set for Ethertype Antispoofing if we will be
- * calling set_ethertype_anti_spoofing for each VF in loop below
- */
- if (hw->mac.ops.set_ethertype_anti_spoofing) {
- IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_LLDP),
- (IXGBE_ETQF_FILTER_EN |
- IXGBE_ETQF_TX_ANTISPOOF |
- IXGBE_ETH_P_LLDP));
-
- IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_FC),
- (IXGBE_ETQF_FILTER_EN |
- IXGBE_ETQF_TX_ANTISPOOF |
- ETH_P_PAUSE));
- }
-
- /* For VFs that have spoof checking turned off */
for (i = 0; i < adapter->num_vfs; i++) {
- if (!adapter->vfinfo[i].spoofchk_enabled)
- ixgbe_ndo_set_vf_spoofchk(adapter->netdev, i, false);
-
- /* enable ethertype anti spoofing if hw supports it */
- if (hw->mac.ops.set_ethertype_anti_spoofing)
- hw->mac.ops.set_ethertype_anti_spoofing(hw, true, i);
+ /* configure spoof checking */
+ ixgbe_ndo_set_vf_spoofchk(adapter->netdev, i,
+ adapter->vfinfo[i].spoofchk_enabled);
/* Enable/Disable RSS query feature */
ixgbe_ndo_set_vf_rss_query_en(adapter->netdev, i,
@@ -3832,6 +3877,7 @@ static void ixgbe_setup_rdrxctl(struct ixgbe_adapter *adapter)
break;
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
if (adapter->num_vfs)
rdrxctl |= IXGBE_RDRXCTL_PSP;
/* fall through for older HW */
@@ -3908,7 +3954,9 @@ static int ixgbe_vlan_rx_add_vid(struct net_device *netdev,
struct ixgbe_hw *hw = &adapter->hw;
/* add VID to filter table */
- hw->mac.ops.set_vfta(&adapter->hw, vid, VMDQ_P(0), true, true);
+ if (!vid || !(adapter->flags2 & IXGBE_FLAG2_VLAN_PROMISC))
+ hw->mac.ops.set_vfta(&adapter->hw, vid, VMDQ_P(0), true, !!vid);
+
set_bit(vid, adapter->active_vlans);
return 0;
@@ -3947,7 +3995,7 @@ void ixgbe_update_pf_promisc_vlvf(struct ixgbe_adapter *adapter, u32 vid)
* entry other than the PF.
*/
word = idx * 2 + (VMDQ_P(0) / 32);
- bits = ~(1 << (VMDQ_P(0)) % 32);
+ bits = ~BIT(VMDQ_P(0) % 32);
bits &= IXGBE_READ_REG(hw, IXGBE_VLVFB(word));
/* Disable the filter so this falls into the default pool. */
@@ -3965,9 +4013,7 @@ static int ixgbe_vlan_rx_kill_vid(struct net_device *netdev,
struct ixgbe_hw *hw = &adapter->hw;
/* remove VID from filter table */
- if (adapter->flags2 & IXGBE_FLAG2_VLAN_PROMISC)
- ixgbe_update_pf_promisc_vlvf(adapter, vid);
- else
+ if (vid && !(adapter->flags2 & IXGBE_FLAG2_VLAN_PROMISC))
hw->mac.ops.set_vfta(hw, vid, VMDQ_P(0), false, true);
clear_bit(vid, adapter->active_vlans);
@@ -3995,6 +4041,7 @@ static void ixgbe_vlan_strip_disable(struct ixgbe_adapter *adapter)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
for (i = 0; i < adapter->num_rx_queues; i++) {
struct ixgbe_ring *ring = adapter->rx_ring[i];
@@ -4031,6 +4078,7 @@ static void ixgbe_vlan_strip_enable(struct ixgbe_adapter *adapter)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
for (i = 0; i < adapter->num_rx_queues; i++) {
struct ixgbe_ring *ring = adapter->rx_ring[i];
@@ -4057,6 +4105,7 @@ static void ixgbe_vlan_promisc_enable(struct ixgbe_adapter *adapter)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
default:
if (adapter->flags & IXGBE_FLAG_VMDQ_ENABLED)
break;
@@ -4081,7 +4130,7 @@ static void ixgbe_vlan_promisc_enable(struct ixgbe_adapter *adapter)
u32 reg_offset = IXGBE_VLVFB(i * 2 + VMDQ_P(0) / 32);
u32 vlvfb = IXGBE_READ_REG(hw, reg_offset);
- vlvfb |= 1 << (VMDQ_P(0) % 32);
+ vlvfb |= BIT(VMDQ_P(0) % 32);
IXGBE_WRITE_REG(hw, reg_offset, vlvfb);
}
@@ -4111,7 +4160,7 @@ static void ixgbe_scrub_vfta(struct ixgbe_adapter *adapter, u32 vfta_offset)
if (vlvf) {
/* record VLAN ID in VFTA */
- vfta[(vid - vid_start) / 32] |= 1 << (vid % 32);
+ vfta[(vid - vid_start) / 32] |= BIT(vid % 32);
/* if PF is part of this then continue */
if (test_bit(vid, adapter->active_vlans))
@@ -4120,7 +4169,7 @@ static void ixgbe_scrub_vfta(struct ixgbe_adapter *adapter, u32 vfta_offset)
/* remove PF from the pool */
word = i * 2 + VMDQ_P(0) / 32;
- bits = ~(1 << (VMDQ_P(0) % 32));
+ bits = ~BIT(VMDQ_P(0) % 32);
bits &= IXGBE_READ_REG(hw, IXGBE_VLVFB(word));
IXGBE_WRITE_REG(hw, IXGBE_VLVFB(word), bits);
}
@@ -4147,6 +4196,7 @@ static void ixgbe_vlan_promisc_disable(struct ixgbe_adapter *adapter)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
default:
if (adapter->flags & IXGBE_FLAG_VMDQ_ENABLED)
break;
@@ -4172,11 +4222,11 @@ static void ixgbe_vlan_promisc_disable(struct ixgbe_adapter *adapter)
static void ixgbe_restore_vlan(struct ixgbe_adapter *adapter)
{
- u16 vid;
+ u16 vid = 1;
ixgbe_vlan_rx_add_vid(adapter->netdev, htons(ETH_P_8021Q), 0);
- for_each_set_bit(vid, adapter->active_vlans, VLAN_N_VID)
+ for_each_set_bit_from(vid, adapter->active_vlans, VLAN_N_VID)
ixgbe_vlan_rx_add_vid(adapter->netdev, htons(ETH_P_8021Q), vid);
}
@@ -4426,6 +4476,7 @@ void ixgbe_set_rx_mode(struct net_device *netdev)
struct ixgbe_adapter *adapter = netdev_priv(netdev);
struct ixgbe_hw *hw = &adapter->hw;
u32 fctrl, vmolr = IXGBE_VMOLR_BAM | IXGBE_VMOLR_AUPE;
+ netdev_features_t features = netdev->features;
int count;
/* Check for Promiscuous and All Multicast modes */
@@ -4443,14 +4494,13 @@ void ixgbe_set_rx_mode(struct net_device *netdev)
hw->addr_ctrl.user_set_promisc = true;
fctrl |= (IXGBE_FCTRL_UPE | IXGBE_FCTRL_MPE);
vmolr |= IXGBE_VMOLR_MPE;
- ixgbe_vlan_promisc_enable(adapter);
+ features &= ~NETIF_F_HW_VLAN_CTAG_FILTER;
} else {
if (netdev->flags & IFF_ALLMULTI) {
fctrl |= IXGBE_FCTRL_MPE;
vmolr |= IXGBE_VMOLR_MPE;
}
hw->addr_ctrl.user_set_promisc = false;
- ixgbe_vlan_promisc_disable(adapter);
}
/*
@@ -4483,7 +4533,7 @@ void ixgbe_set_rx_mode(struct net_device *netdev)
}
/* This is useful for sniffing bad packets. */
- if (adapter->netdev->features & NETIF_F_RXALL) {
+ if (features & NETIF_F_RXALL) {
/* UPE and MPE will be handled by normal PROMISC logic
* in e1000e_set_rx_mode */
fctrl |= (IXGBE_FCTRL_SBP | /* Receive bad packets */
@@ -4496,10 +4546,15 @@ void ixgbe_set_rx_mode(struct net_device *netdev)
IXGBE_WRITE_REG(hw, IXGBE_FCTRL, fctrl);
- if (netdev->features & NETIF_F_HW_VLAN_CTAG_RX)
+ if (features & NETIF_F_HW_VLAN_CTAG_RX)
ixgbe_vlan_strip_enable(adapter);
else
ixgbe_vlan_strip_disable(adapter);
+
+ if (features & NETIF_F_HW_VLAN_CTAG_FILTER)
+ ixgbe_vlan_promisc_disable(adapter);
+ else
+ ixgbe_vlan_promisc_enable(adapter);
}
static void ixgbe_napi_enable_all(struct ixgbe_adapter *adapter)
@@ -4530,6 +4585,7 @@ static void ixgbe_clear_vxlan_port(struct ixgbe_adapter *adapter)
switch (adapter->hw.mac.type) {
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
IXGBE_WRITE_REG(&adapter->hw, IXGBE_VXLANCTRL, 0);
adapter->vxlan_port = 0;
break;
@@ -4630,6 +4686,7 @@ static int ixgbe_hpbthresh(struct ixgbe_adapter *adapter, int pb)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
dv_id = IXGBE_DV_X540(link, tc);
break;
default:
@@ -4690,6 +4747,7 @@ static int ixgbe_lpbthresh(struct ixgbe_adapter *adapter, int pb)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
dv_id = IXGBE_LOW_DV_X540(tc);
break;
default:
@@ -4805,9 +4863,9 @@ static void ixgbe_fwd_psrtype(struct ixgbe_fwd_adapter *vadapter)
return;
if (rss_i > 3)
- psrtype |= 2 << 29;
+ psrtype |= 2u << 29;
else if (rss_i > 1)
- psrtype |= 1 << 29;
+ psrtype |= 1u << 29;
IXGBE_WRITE_REG(hw, IXGBE_PSRTYPE(VMDQ_P(pool)), psrtype);
}
@@ -4871,7 +4929,7 @@ static void ixgbe_disable_fwd_ring(struct ixgbe_fwd_adapter *vadapter,
/* shutdown specific queue receive and wait for dma to settle */
ixgbe_disable_rx_queue(adapter, rx_ring);
usleep_range(10000, 20000);
- ixgbe_irq_disable_queues(adapter, ((u64)1 << index));
+ ixgbe_irq_disable_queues(adapter, BIT_ULL(index));
ixgbe_clean_rx_ring(rx_ring);
rx_ring->l2_accel_priv = NULL;
}
@@ -5106,6 +5164,7 @@ static void ixgbe_setup_gpie(struct ixgbe_adapter *adapter)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
default:
IXGBE_WRITE_REG(hw, IXGBE_EIAM_EX(0), 0xFFFFFFFF);
IXGBE_WRITE_REG(hw, IXGBE_EIAM_EX(1), 0xFFFFFFFF);
@@ -5156,6 +5215,7 @@ static void ixgbe_setup_gpie(struct ixgbe_adapter *adapter)
gpie |= IXGBE_SDP1_GPIEN_8259X | IXGBE_SDP2_GPIEN_8259X;
break;
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
gpie |= IXGBE_SDP0_GPIEN_X540;
break;
default:
@@ -5228,7 +5288,7 @@ void ixgbe_reinit_locked(struct ixgbe_adapter *adapter)
{
WARN_ON(in_interrupt());
/* put off any impending NetWatchDogTimeout */
- adapter->netdev->trans_start = jiffies;
+ netif_trans_update(adapter->netdev);
while (test_and_set_bit(__IXGBE_RESETTING, &adapter->state))
usleep_range(1000, 2000);
@@ -5467,6 +5527,7 @@ void ixgbe_down(struct ixgbe_adapter *adapter)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
IXGBE_WRITE_REG(hw, IXGBE_DMATXCTL,
(IXGBE_READ_REG(hw, IXGBE_DMATXCTL) &
~IXGBE_DMATXCTL_TE));
@@ -5498,6 +5559,58 @@ static void ixgbe_tx_timeout(struct net_device *netdev)
ixgbe_tx_timeout_reset(adapter);
}
+#ifdef CONFIG_IXGBE_DCB
+static void ixgbe_init_dcb(struct ixgbe_adapter *adapter)
+{
+ struct ixgbe_hw *hw = &adapter->hw;
+ struct tc_configuration *tc;
+ int j;
+
+ switch (hw->mac.type) {
+ case ixgbe_mac_82598EB:
+ case ixgbe_mac_82599EB:
+ adapter->dcb_cfg.num_tcs.pg_tcs = MAX_TRAFFIC_CLASS;
+ adapter->dcb_cfg.num_tcs.pfc_tcs = MAX_TRAFFIC_CLASS;
+ break;
+ case ixgbe_mac_X540:
+ case ixgbe_mac_X550:
+ adapter->dcb_cfg.num_tcs.pg_tcs = X540_TRAFFIC_CLASS;
+ adapter->dcb_cfg.num_tcs.pfc_tcs = X540_TRAFFIC_CLASS;
+ break;
+ case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
+ default:
+ adapter->dcb_cfg.num_tcs.pg_tcs = DEF_TRAFFIC_CLASS;
+ adapter->dcb_cfg.num_tcs.pfc_tcs = DEF_TRAFFIC_CLASS;
+ break;
+ }
+
+ /* Configure DCB traffic classes */
+ for (j = 0; j < MAX_TRAFFIC_CLASS; j++) {
+ tc = &adapter->dcb_cfg.tc_config[j];
+ tc->path[DCB_TX_CONFIG].bwg_id = 0;
+ tc->path[DCB_TX_CONFIG].bwg_percent = 12 + (j & 1);
+ tc->path[DCB_RX_CONFIG].bwg_id = 0;
+ tc->path[DCB_RX_CONFIG].bwg_percent = 12 + (j & 1);
+ tc->dcb_pfc = pfc_disabled;
+ }
+
+ /* Initialize default user to priority mapping, UPx->TC0 */
+ tc = &adapter->dcb_cfg.tc_config[0];
+ tc->path[DCB_TX_CONFIG].up_to_tc_bitmap = 0xFF;
+ tc->path[DCB_RX_CONFIG].up_to_tc_bitmap = 0xFF;
+
+ adapter->dcb_cfg.bw_percentage[DCB_TX_CONFIG][0] = 100;
+ adapter->dcb_cfg.bw_percentage[DCB_RX_CONFIG][0] = 100;
+ adapter->dcb_cfg.pfc_mode_enable = false;
+ adapter->dcb_set_bitmap = 0x00;
+ if (adapter->flags & IXGBE_FLAG_DCB_CAPABLE)
+ adapter->dcbx_cap = DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_CEE;
+ memcpy(&adapter->temp_dcb_cfg, &adapter->dcb_cfg,
+ sizeof(adapter->temp_dcb_cfg));
+}
+#endif
+
/**
* ixgbe_sw_init - Initialize general software structures (struct ixgbe_adapter)
* @adapter: board private structure to initialize
@@ -5512,10 +5625,8 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter)
struct pci_dev *pdev = adapter->pdev;
unsigned int rss, fdir;
u32 fwsm;
-#ifdef CONFIG_IXGBE_DCB
- int j;
- struct tc_configuration *tc;
-#endif
+ u16 device_caps;
+ int i;
/* PCI config space info */
@@ -5537,6 +5648,10 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter)
#ifdef CONFIG_IXGBE_DCA
adapter->flags |= IXGBE_FLAG_DCA_CAPABLE;
#endif
+#ifdef CONFIG_IXGBE_DCB
+ adapter->flags |= IXGBE_FLAG_DCB_CAPABLE;
+ adapter->flags &= ~IXGBE_FLAG_DCB_ENABLED;
+#endif
#ifdef IXGBE_FCOE
adapter->flags |= IXGBE_FLAG_FCOE_CAPABLE;
adapter->flags &= ~IXGBE_FLAG_FCOE_ENABLED;
@@ -5547,7 +5662,14 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter)
#endif /* IXGBE_FCOE */
/* initialize static ixgbe jump table entries */
- adapter->jump_tables[0] = ixgbe_ipv4_fields;
+ adapter->jump_tables[0] = kzalloc(sizeof(*adapter->jump_tables[0]),
+ GFP_KERNEL);
+ if (!adapter->jump_tables[0])
+ return -ENOMEM;
+ adapter->jump_tables[0]->mat = ixgbe_ipv4_fields;
+
+ for (i = 1; i < IXGBE_MAX_LINK_HANDLE; i++)
+ adapter->jump_tables[i] = NULL;
adapter->mac_table = kzalloc(sizeof(struct ixgbe_mac_addr) *
hw->mac.num_rar_entries,
@@ -5585,6 +5707,17 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter)
adapter->flags2 |= IXGBE_FLAG2_TEMP_SENSOR_CAPABLE;
break;
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
+#ifdef CONFIG_IXGBE_DCB
+ adapter->flags &= ~IXGBE_FLAG_DCB_CAPABLE;
+#endif
+#ifdef IXGBE_FCOE
+ adapter->flags &= ~IXGBE_FLAG_FCOE_CAPABLE;
+#ifdef CONFIG_IXGBE_DCB
+ adapter->fcoe.up = 0;
+#endif /* IXGBE_DCB */
+#endif /* IXGBE_FCOE */
+ /* Fall Through */
case ixgbe_mac_X550:
#ifdef CONFIG_IXGBE_DCA
adapter->flags &= ~IXGBE_FLAG_DCA_CAPABLE;
@@ -5606,42 +5739,7 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter)
spin_lock_init(&adapter->fdir_perfect_lock);
#ifdef CONFIG_IXGBE_DCB
- switch (hw->mac.type) {
- case ixgbe_mac_X540:
- case ixgbe_mac_X550:
- case ixgbe_mac_X550EM_x:
- adapter->dcb_cfg.num_tcs.pg_tcs = X540_TRAFFIC_CLASS;
- adapter->dcb_cfg.num_tcs.pfc_tcs = X540_TRAFFIC_CLASS;
- break;
- default:
- adapter->dcb_cfg.num_tcs.pg_tcs = MAX_TRAFFIC_CLASS;
- adapter->dcb_cfg.num_tcs.pfc_tcs = MAX_TRAFFIC_CLASS;
- break;
- }
-
- /* Configure DCB traffic classes */
- for (j = 0; j < MAX_TRAFFIC_CLASS; j++) {
- tc = &adapter->dcb_cfg.tc_config[j];
- tc->path[DCB_TX_CONFIG].bwg_id = 0;
- tc->path[DCB_TX_CONFIG].bwg_percent = 12 + (j & 1);
- tc->path[DCB_RX_CONFIG].bwg_id = 0;
- tc->path[DCB_RX_CONFIG].bwg_percent = 12 + (j & 1);
- tc->dcb_pfc = pfc_disabled;
- }
-
- /* Initialize default user to priority mapping, UPx->TC0 */
- tc = &adapter->dcb_cfg.tc_config[0];
- tc->path[DCB_TX_CONFIG].up_to_tc_bitmap = 0xFF;
- tc->path[DCB_RX_CONFIG].up_to_tc_bitmap = 0xFF;
-
- adapter->dcb_cfg.bw_percentage[DCB_TX_CONFIG][0] = 100;
- adapter->dcb_cfg.bw_percentage[DCB_RX_CONFIG][0] = 100;
- adapter->dcb_cfg.pfc_mode_enable = false;
- adapter->dcb_set_bitmap = 0x00;
- adapter->dcbx_cap = DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_CEE;
- memcpy(&adapter->temp_dcb_cfg, &adapter->dcb_cfg,
- sizeof(adapter->temp_dcb_cfg));
-
+ ixgbe_init_dcb(adapter);
#endif
/* default flow control settings */
@@ -5675,6 +5773,22 @@ static int ixgbe_sw_init(struct ixgbe_adapter *adapter)
adapter->tx_ring_count = IXGBE_DEFAULT_TXD;
adapter->rx_ring_count = IXGBE_DEFAULT_RXD;
+ /* Cache bit indicating need for crosstalk fix */
+ switch (hw->mac.type) {
+ case ixgbe_mac_82599EB:
+ case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
+ hw->mac.ops.get_device_caps(hw, &device_caps);
+ if (device_caps & IXGBE_DEVICE_CAPS_NO_CROSSTALK_WR)
+ adapter->need_crosstalk_fix = false;
+ else
+ adapter->need_crosstalk_fix = true;
+ break;
+ default:
+ adapter->need_crosstalk_fix = false;
+ break;
+ }
+
/* set default work limits */
adapter->tx_work_limit = IXGBE_DEFAULT_TX_WORK;
@@ -6217,6 +6331,7 @@ static int __ixgbe_shutdown(struct pci_dev *pdev, bool *enable_wake)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
pci_wake_from_d3(pdev, !!wufc);
break;
default:
@@ -6352,6 +6467,7 @@ void ixgbe_update_stats(struct ixgbe_adapter *adapter)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
hwstats->pxonrxc[i] +=
IXGBE_READ_REG(hw, IXGBE_PXONRXCNT(i));
break;
@@ -6367,7 +6483,8 @@ void ixgbe_update_stats(struct ixgbe_adapter *adapter)
if ((hw->mac.type == ixgbe_mac_82599EB) ||
(hw->mac.type == ixgbe_mac_X540) ||
(hw->mac.type == ixgbe_mac_X550) ||
- (hw->mac.type == ixgbe_mac_X550EM_x)) {
+ (hw->mac.type == ixgbe_mac_X550EM_x) ||
+ (hw->mac.type == ixgbe_mac_x550em_a)) {
hwstats->qbtc[i] += IXGBE_READ_REG(hw, IXGBE_QBTC_L(i));
IXGBE_READ_REG(hw, IXGBE_QBTC_H(i)); /* to clear */
hwstats->qbrc[i] += IXGBE_READ_REG(hw, IXGBE_QBRC_L(i));
@@ -6392,6 +6509,7 @@ void ixgbe_update_stats(struct ixgbe_adapter *adapter)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
/* OS2BMC stats are X540 and later */
hwstats->o2bgptc += IXGBE_READ_REG(hw, IXGBE_O2BGPTC);
hwstats->o2bspc += IXGBE_READ_REG(hw, IXGBE_O2BSPC);
@@ -6562,7 +6680,7 @@ static void ixgbe_check_hang_subtask(struct ixgbe_adapter *adapter)
for (i = 0; i < adapter->num_q_vectors; i++) {
struct ixgbe_q_vector *qv = adapter->q_vector[i];
if (qv->rx.ring || qv->tx.ring)
- eics |= ((u64)1 << i);
+ eics |= BIT_ULL(i);
}
}
@@ -6593,6 +6711,18 @@ static void ixgbe_watchdog_update_link(struct ixgbe_adapter *adapter)
link_up = true;
}
+ /* If Crosstalk fix enabled do the sanity check of making sure
+ * the SFP+ cage is empty.
+ */
+ if (adapter->need_crosstalk_fix) {
+ u32 sfp_cage_full;
+
+ sfp_cage_full = IXGBE_READ_REG(hw, IXGBE_ESDP) &
+ IXGBE_ESDP_SDP2;
+ if (ixgbe_is_sfp(hw) && link_up && !sfp_cage_full)
+ link_up = false;
+ }
+
if (adapter->ixgbe_ieee_pfc)
pfc_en |= !!(adapter->ixgbe_ieee_pfc->pfc_en);
@@ -6662,6 +6792,7 @@ static void ixgbe_watchdog_link_is_up(struct ixgbe_adapter *adapter)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
case ixgbe_mac_82599EB: {
u32 mflcn = IXGBE_READ_REG(hw, IXGBE_MFLCN);
u32 fccfg = IXGBE_READ_REG(hw, IXGBE_FCCFG);
@@ -6938,6 +7069,16 @@ static void ixgbe_sfp_detection_subtask(struct ixgbe_adapter *adapter)
struct ixgbe_hw *hw = &adapter->hw;
s32 err;
+ /* If crosstalk fix enabled verify the SFP+ cage is full */
+ if (adapter->need_crosstalk_fix) {
+ u32 sfp_cage_full;
+
+ sfp_cage_full = IXGBE_READ_REG(hw, IXGBE_ESDP) &
+ IXGBE_ESDP_SDP2;
+ if (!sfp_cage_full)
+ return;
+ }
+
/* not searching for SFP so there is nothing to do here */
if (!(adapter->flags2 & IXGBE_FLAG2_SEARCH_FOR_SFP) &&
!(adapter->flags2 & IXGBE_FLAG2_SFP_NEEDS_RESET))
@@ -7122,10 +7263,12 @@ static void ixgbe_service_task(struct work_struct *work)
return;
}
#ifdef CONFIG_IXGBE_VXLAN
+ rtnl_lock();
if (adapter->flags2 & IXGBE_FLAG2_VXLAN_REREG_NEEDED) {
adapter->flags2 &= ~IXGBE_FLAG2_VXLAN_REREG_NEEDED;
vxlan_get_rx_port(adapter->netdev);
}
+ rtnl_unlock();
#endif /* CONFIG_IXGBE_VXLAN */
ixgbe_reset_subtask(adapter);
ixgbe_phy_interrupt_subtask(adapter);
@@ -7148,9 +7291,18 @@ static int ixgbe_tso(struct ixgbe_ring *tx_ring,
struct ixgbe_tx_buffer *first,
u8 *hdr_len)
{
+ u32 vlan_macip_lens, type_tucmd, mss_l4len_idx;
struct sk_buff *skb = first->skb;
- u32 vlan_macip_lens, type_tucmd;
- u32 mss_l4len_idx, l4len;
+ union {
+ struct iphdr *v4;
+ struct ipv6hdr *v6;
+ unsigned char *hdr;
+ } ip;
+ union {
+ struct tcphdr *tcp;
+ unsigned char *hdr;
+ } l4;
+ u32 paylen, l4_offset;
int err;
if (skb->ip_summed != CHECKSUM_PARTIAL)
@@ -7163,46 +7315,52 @@ static int ixgbe_tso(struct ixgbe_ring *tx_ring,
if (err < 0)
return err;
+ ip.hdr = skb_network_header(skb);
+ l4.hdr = skb_checksum_start(skb);
+
/* ADV DTYP TUCMD MKRLOC/ISCSIHEDLEN */
type_tucmd = IXGBE_ADVTXD_TUCMD_L4T_TCP;
- if (first->protocol == htons(ETH_P_IP)) {
- struct iphdr *iph = ip_hdr(skb);
- iph->tot_len = 0;
- iph->check = 0;
- tcp_hdr(skb)->check = ~csum_tcpudp_magic(iph->saddr,
- iph->daddr, 0,
- IPPROTO_TCP,
- 0);
+ /* initialize outer IP header fields */
+ if (ip.v4->version == 4) {
+ /* IP header will have to cancel out any data that
+ * is not a part of the outer IP header
+ */
+ ip.v4->check = csum_fold(csum_add(lco_csum(skb),
+ csum_unfold(l4.tcp->check)));
type_tucmd |= IXGBE_ADVTXD_TUCMD_IPV4;
+
+ ip.v4->tot_len = 0;
first->tx_flags |= IXGBE_TX_FLAGS_TSO |
IXGBE_TX_FLAGS_CSUM |
IXGBE_TX_FLAGS_IPV4;
- } else if (skb_is_gso_v6(skb)) {
- ipv6_hdr(skb)->payload_len = 0;
- tcp_hdr(skb)->check =
- ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
- &ipv6_hdr(skb)->daddr,
- 0, IPPROTO_TCP, 0);
+ } else {
+ ip.v6->payload_len = 0;
first->tx_flags |= IXGBE_TX_FLAGS_TSO |
IXGBE_TX_FLAGS_CSUM;
}
- /* compute header lengths */
- l4len = tcp_hdrlen(skb);
- *hdr_len = skb_transport_offset(skb) + l4len;
+ /* determine offset of inner transport header */
+ l4_offset = l4.hdr - skb->data;
+
+ /* compute length of segmentation header */
+ *hdr_len = (l4.tcp->doff * 4) + l4_offset;
+
+ /* remove payload length from inner checksum */
+ paylen = skb->len - l4_offset;
+ csum_replace_by_diff(&l4.tcp->check, htonl(paylen));
/* update gso size and bytecount with header size */
first->gso_segs = skb_shinfo(skb)->gso_segs;
first->bytecount += (first->gso_segs - 1) * *hdr_len;
/* mss_l4len_id: use 0 as index for TSO */
- mss_l4len_idx = l4len << IXGBE_ADVTXD_L4LEN_SHIFT;
+ mss_l4len_idx = (*hdr_len - l4_offset) << IXGBE_ADVTXD_L4LEN_SHIFT;
mss_l4len_idx |= skb_shinfo(skb)->gso_size << IXGBE_ADVTXD_MSS_SHIFT;
/* vlan_macip_lens: HEADLEN, MACLEN, VLAN tag */
- vlan_macip_lens = skb_network_header_len(skb);
- vlan_macip_lens |= skb_network_offset(skb) << IXGBE_ADVTXD_MACLEN_SHIFT;
+ vlan_macip_lens = l4.hdr - ip.hdr;
+ vlan_macip_lens |= (ip.hdr - skb->data) << IXGBE_ADVTXD_MACLEN_SHIFT;
vlan_macip_lens |= first->tx_flags & IXGBE_TX_FLAGS_VLAN_MASK;
ixgbe_tx_ctxtdesc(tx_ring, vlan_macip_lens, 0, type_tucmd,
@@ -7211,103 +7369,61 @@ static int ixgbe_tso(struct ixgbe_ring *tx_ring,
return 1;
}
+static inline bool ixgbe_ipv6_csum_is_sctp(struct sk_buff *skb)
+{
+ unsigned int offset = 0;
+
+ ipv6_find_hdr(skb, &offset, IPPROTO_SCTP, NULL, NULL);
+
+ return offset == skb_checksum_start_offset(skb);
+}
+
static void ixgbe_tx_csum(struct ixgbe_ring *tx_ring,
struct ixgbe_tx_buffer *first)
{
struct sk_buff *skb = first->skb;
u32 vlan_macip_lens = 0;
- u32 mss_l4len_idx = 0;
u32 type_tucmd = 0;
if (skb->ip_summed != CHECKSUM_PARTIAL) {
- if (!(first->tx_flags & IXGBE_TX_FLAGS_HW_VLAN) &&
- !(first->tx_flags & IXGBE_TX_FLAGS_CC))
+csum_failed:
+ if (!(first->tx_flags & (IXGBE_TX_FLAGS_HW_VLAN |
+ IXGBE_TX_FLAGS_CC)))
return;
- vlan_macip_lens = skb_network_offset(skb) <<
- IXGBE_ADVTXD_MACLEN_SHIFT;
- } else {
- u8 l4_hdr = 0;
- union {
- struct iphdr *ipv4;
- struct ipv6hdr *ipv6;
- u8 *raw;
- } network_hdr;
- union {
- struct tcphdr *tcphdr;
- u8 *raw;
- } transport_hdr;
- __be16 frag_off;
-
- if (skb->encapsulation) {
- network_hdr.raw = skb_inner_network_header(skb);
- transport_hdr.raw = skb_inner_transport_header(skb);
- vlan_macip_lens = skb_inner_network_offset(skb) <<
- IXGBE_ADVTXD_MACLEN_SHIFT;
- } else {
- network_hdr.raw = skb_network_header(skb);
- transport_hdr.raw = skb_transport_header(skb);
- vlan_macip_lens = skb_network_offset(skb) <<
- IXGBE_ADVTXD_MACLEN_SHIFT;
- }
-
- /* use first 4 bits to determine IP version */
- switch (network_hdr.ipv4->version) {
- case IPVERSION:
- vlan_macip_lens |= transport_hdr.raw - network_hdr.raw;
- type_tucmd |= IXGBE_ADVTXD_TUCMD_IPV4;
- l4_hdr = network_hdr.ipv4->protocol;
- break;
- case 6:
- vlan_macip_lens |= transport_hdr.raw - network_hdr.raw;
- l4_hdr = network_hdr.ipv6->nexthdr;
- if (likely((transport_hdr.raw - network_hdr.raw) ==
- sizeof(struct ipv6hdr)))
- break;
- ipv6_skip_exthdr(skb, network_hdr.raw - skb->data +
- sizeof(struct ipv6hdr),
- &l4_hdr, &frag_off);
- if (unlikely(frag_off))
- l4_hdr = NEXTHDR_FRAGMENT;
- break;
- default:
- break;
- }
+ goto no_csum;
+ }
- switch (l4_hdr) {
- case IPPROTO_TCP:
- type_tucmd |= IXGBE_ADVTXD_TUCMD_L4T_TCP;
- mss_l4len_idx = (transport_hdr.tcphdr->doff * 4) <<
- IXGBE_ADVTXD_L4LEN_SHIFT;
- break;
- case IPPROTO_SCTP:
- type_tucmd |= IXGBE_ADVTXD_TUCMD_L4T_SCTP;
- mss_l4len_idx = sizeof(struct sctphdr) <<
- IXGBE_ADVTXD_L4LEN_SHIFT;
- break;
- case IPPROTO_UDP:
- mss_l4len_idx = sizeof(struct udphdr) <<
- IXGBE_ADVTXD_L4LEN_SHIFT;
+ switch (skb->csum_offset) {
+ case offsetof(struct tcphdr, check):
+ type_tucmd = IXGBE_ADVTXD_TUCMD_L4T_TCP;
+ /* fall through */
+ case offsetof(struct udphdr, check):
+ break;
+ case offsetof(struct sctphdr, checksum):
+ /* validate that this is actually an SCTP request */
+ if (((first->protocol == htons(ETH_P_IP)) &&
+ (ip_hdr(skb)->protocol == IPPROTO_SCTP)) ||
+ ((first->protocol == htons(ETH_P_IPV6)) &&
+ ixgbe_ipv6_csum_is_sctp(skb))) {
+ type_tucmd = IXGBE_ADVTXD_TUCMD_L4T_SCTP;
break;
- default:
- if (unlikely(net_ratelimit())) {
- dev_warn(tx_ring->dev,
- "partial checksum, version=%d, l4 proto=%x\n",
- network_hdr.ipv4->version, l4_hdr);
- }
- skb_checksum_help(skb);
- goto no_csum;
}
-
- /* update TX checksum flag */
- first->tx_flags |= IXGBE_TX_FLAGS_CSUM;
+ /* fall through */
+ default:
+ skb_checksum_help(skb);
+ goto csum_failed;
}
+ /* update TX checksum flag */
+ first->tx_flags |= IXGBE_TX_FLAGS_CSUM;
+ vlan_macip_lens = skb_checksum_start_offset(skb) -
+ skb_network_offset(skb);
no_csum:
/* vlan_macip_lens: MACLEN, VLAN tag */
+ vlan_macip_lens |= skb_network_offset(skb) << IXGBE_ADVTXD_MACLEN_SHIFT;
vlan_macip_lens |= first->tx_flags & IXGBE_TX_FLAGS_VLAN_MASK;
- ixgbe_tx_ctxtdesc(tx_ring, vlan_macip_lens, 0,
- type_tucmd, mss_l4len_idx);
+ ixgbe_tx_ctxtdesc(tx_ring, vlan_macip_lens, 0, type_tucmd, 0);
}
#define IXGBE_SET_FLAG(_input, _flag, _result) \
@@ -8238,6 +8354,134 @@ static int ixgbe_configure_clsu32_del_hnode(struct ixgbe_adapter *adapter,
return 0;
}
+#ifdef CONFIG_NET_CLS_ACT
+static int handle_redirect_action(struct ixgbe_adapter *adapter, int ifindex,
+ u8 *queue, u64 *action)
+{
+ unsigned int num_vfs = adapter->num_vfs, vf;
+ struct net_device *upper;
+ struct list_head *iter;
+
+ /* redirect to a SRIOV VF */
+ for (vf = 0; vf < num_vfs; ++vf) {
+ upper = pci_get_drvdata(adapter->vfinfo[vf].vfdev);
+ if (upper->ifindex == ifindex) {
+ if (adapter->num_rx_pools > 1)
+ *queue = vf * 2;
+ else
+ *queue = vf * adapter->num_rx_queues_per_pool;
+
+ *action = vf + 1;
+ *action <<= ETHTOOL_RX_FLOW_SPEC_RING_VF_OFF;
+ return 0;
+ }
+ }
+
+ /* redirect to an offloaded macvlan netdev */
+ netdev_for_each_all_upper_dev_rcu(adapter->netdev, upper, iter) {
+ if (netif_is_macvlan(upper)) {
+ struct macvlan_dev *dfwd = netdev_priv(upper);
+ struct ixgbe_fwd_adapter *vadapter = dfwd->fwd_priv;
+
+ if (vadapter && vadapter->netdev->ifindex == ifindex) {
+ *queue = adapter->rx_ring[vadapter->rx_base_queue]->reg_idx;
+ *action = *queue;
+ return 0;
+ }
+ }
+ }
+
+ return -EINVAL;
+}
+
+static int parse_tc_actions(struct ixgbe_adapter *adapter,
+ struct tcf_exts *exts, u64 *action, u8 *queue)
+{
+ const struct tc_action *a;
+ int err;
+
+ if (tc_no_actions(exts))
+ return -EINVAL;
+
+ tc_for_each_action(a, exts) {
+
+ /* Drop action */
+ if (is_tcf_gact_shot(a)) {
+ *action = IXGBE_FDIR_DROP_QUEUE;
+ *queue = IXGBE_FDIR_DROP_QUEUE;
+ return 0;
+ }
+
+ /* Redirect to a VF or an offloaded macvlan */
+ if (is_tcf_mirred_redirect(a)) {
+ int ifindex = tcf_mirred_ifindex(a);
+
+ err = handle_redirect_action(adapter, ifindex, queue,
+ action);
+ if (err == 0)
+ return err;
+ }
+ }
+
+ return -EINVAL;
+}
+#else
+static int parse_tc_actions(struct ixgbe_adapter *adapter,
+ struct tcf_exts *exts, u64 *action, u8 *queue)
+{
+ return -EINVAL;
+}
+#endif /* CONFIG_NET_CLS_ACT */
+
+static int ixgbe_clsu32_build_input(struct ixgbe_fdir_filter *input,
+ union ixgbe_atr_input *mask,
+ struct tc_cls_u32_offload *cls,
+ struct ixgbe_mat_field *field_ptr,
+ struct ixgbe_nexthdr *nexthdr)
+{
+ int i, j, off;
+ __be32 val, m;
+ bool found_entry = false, found_jump_field = false;
+
+ for (i = 0; i < cls->knode.sel->nkeys; i++) {
+ off = cls->knode.sel->keys[i].off;
+ val = cls->knode.sel->keys[i].val;
+ m = cls->knode.sel->keys[i].mask;
+
+ for (j = 0; field_ptr[j].val; j++) {
+ if (field_ptr[j].off == off) {
+ field_ptr[j].val(input, mask, val, m);
+ input->filter.formatted.flow_type |=
+ field_ptr[j].type;
+ found_entry = true;
+ break;
+ }
+ }
+ if (nexthdr) {
+ if (nexthdr->off == cls->knode.sel->keys[i].off &&
+ nexthdr->val == cls->knode.sel->keys[i].val &&
+ nexthdr->mask == cls->knode.sel->keys[i].mask)
+ found_jump_field = true;
+ else
+ continue;
+ }
+ }
+
+ if (nexthdr && !found_jump_field)
+ return -EINVAL;
+
+ if (!found_entry)
+ return 0;
+
+ mask->formatted.flow_type = IXGBE_ATR_L4TYPE_IPV6_MASK |
+ IXGBE_ATR_L4TYPE_MASK;
+
+ if (input->filter.formatted.flow_type == IXGBE_ATR_FLOW_TYPE_IPV4)
+ mask->formatted.flow_type &= IXGBE_ATR_L4TYPE_IPV6_MASK;
+
+ return 0;
+}
+
static int ixgbe_configure_clsu32(struct ixgbe_adapter *adapter,
__be16 protocol,
struct tc_cls_u32_offload *cls)
@@ -8245,16 +8489,13 @@ static int ixgbe_configure_clsu32(struct ixgbe_adapter *adapter,
u32 loc = cls->knode.handle & 0xfffff;
struct ixgbe_hw *hw = &adapter->hw;
struct ixgbe_mat_field *field_ptr;
- struct ixgbe_fdir_filter *input;
- union ixgbe_atr_input mask;
-#ifdef CONFIG_NET_CLS_ACT
- const struct tc_action *a;
-#endif
- int i, err = 0;
+ struct ixgbe_fdir_filter *input = NULL;
+ union ixgbe_atr_input *mask = NULL;
+ struct ixgbe_jump_table *jump = NULL;
+ int i, err = -EINVAL;
u8 queue;
u32 uhtid, link_uhtid;
- memset(&mask, 0, sizeof(union ixgbe_atr_input));
uhtid = TC_U32_USERHTID(cls->knode.handle);
link_uhtid = TC_U32_USERHTID(cls->knode.link_handle);
@@ -8266,38 +8507,11 @@ static int ixgbe_configure_clsu32(struct ixgbe_adapter *adapter,
* headers when needed.
*/
if (protocol != htons(ETH_P_IP))
- return -EINVAL;
-
- if (link_uhtid) {
- struct ixgbe_nexthdr *nexthdr = ixgbe_ipv4_jumps;
-
- if (link_uhtid >= IXGBE_MAX_LINK_HANDLE)
- return -EINVAL;
-
- if (!test_bit(link_uhtid - 1, &adapter->tables))
- return -EINVAL;
-
- for (i = 0; nexthdr[i].jump; i++) {
- if (nexthdr->o != cls->knode.sel->offoff ||
- nexthdr->s != cls->knode.sel->offshift ||
- nexthdr->m != cls->knode.sel->offmask ||
- /* do not support multiple key jumps its just mad */
- cls->knode.sel->nkeys > 1)
- return -EINVAL;
-
- if (nexthdr->off != cls->knode.sel->keys[0].off ||
- nexthdr->val != cls->knode.sel->keys[0].val ||
- nexthdr->mask != cls->knode.sel->keys[0].mask)
- return -EINVAL;
-
- adapter->jump_tables[link_uhtid] = nexthdr->jump;
- }
- return 0;
- }
+ return err;
if (loc >= ((1024 << adapter->fdir_pballoc) - 2)) {
e_err(drv, "Location out of range\n");
- return -EINVAL;
+ return err;
}
/* cls u32 is a graph starting at root node 0x800. The driver tracks
@@ -8308,87 +8522,123 @@ static int ixgbe_configure_clsu32(struct ixgbe_adapter *adapter,
 * this function _should_ be generic, try not to hardcode values here.
*/
if (uhtid == 0x800) {
- field_ptr = adapter->jump_tables[0];
+ field_ptr = (adapter->jump_tables[0])->mat;
} else {
if (uhtid >= IXGBE_MAX_LINK_HANDLE)
- return -EINVAL;
-
- field_ptr = adapter->jump_tables[uhtid];
+ return err;
+ if (!adapter->jump_tables[uhtid])
+ return err;
+ field_ptr = (adapter->jump_tables[uhtid])->mat;
}
if (!field_ptr)
- return -EINVAL;
+ return err;
- input = kzalloc(sizeof(*input), GFP_KERNEL);
- if (!input)
- return -ENOMEM;
+ /* At this point we know the field_ptr is valid and need to either
+ * build a cls_u32 link or attach a filter. Adding a link to a handle
+ * that does not exist is invalid, and so is adding rules to a handle
+ * that does not exist.
+ */
- for (i = 0; i < cls->knode.sel->nkeys; i++) {
- int off = cls->knode.sel->keys[i].off;
- __be32 val = cls->knode.sel->keys[i].val;
- __be32 m = cls->knode.sel->keys[i].mask;
- bool found_entry = false;
- int j;
+ if (link_uhtid) {
+ struct ixgbe_nexthdr *nexthdr = ixgbe_ipv4_jumps;
- for (j = 0; field_ptr[j].val; j++) {
- if (field_ptr[j].off == off) {
- field_ptr[j].val(input, &mask, val, m);
- input->filter.formatted.flow_type |=
- field_ptr[j].type;
- found_entry = true;
+ if (link_uhtid >= IXGBE_MAX_LINK_HANDLE)
+ return err;
+
+ if (!test_bit(link_uhtid - 1, &adapter->tables))
+ return err;
+
+ for (i = 0; nexthdr[i].jump; i++) {
+ if (nexthdr[i].o != cls->knode.sel->offoff ||
+ nexthdr[i].s != cls->knode.sel->offshift ||
+ nexthdr[i].m != cls->knode.sel->offmask)
+ return err;
+
+ jump = kzalloc(sizeof(*jump), GFP_KERNEL);
+ if (!jump)
+ return -ENOMEM;
+ input = kzalloc(sizeof(*input), GFP_KERNEL);
+ if (!input) {
+ err = -ENOMEM;
+ goto free_jump;
+ }
+ mask = kzalloc(sizeof(*mask), GFP_KERNEL);
+ if (!mask) {
+ err = -ENOMEM;
+ goto free_input;
+ }
+ jump->input = input;
+ jump->mask = mask;
+ err = ixgbe_clsu32_build_input(input, mask, cls,
+ field_ptr, &nexthdr[i]);
+ if (!err) {
+ jump->mat = nexthdr[i].jump;
+ adapter->jump_tables[link_uhtid] = jump;
break;
}
}
-
- if (!found_entry)
- goto err_out;
+ return 0;
}
- mask.formatted.flow_type = IXGBE_ATR_L4TYPE_IPV6_MASK |
- IXGBE_ATR_L4TYPE_MASK;
-
- if (input->filter.formatted.flow_type == IXGBE_ATR_FLOW_TYPE_IPV4)
- mask.formatted.flow_type &= IXGBE_ATR_L4TYPE_IPV6_MASK;
+ input = kzalloc(sizeof(*input), GFP_KERNEL);
+ if (!input)
+ return -ENOMEM;
+ mask = kzalloc(sizeof(*mask), GFP_KERNEL);
+ if (!mask) {
+ err = -ENOMEM;
+ goto free_input;
+ }
-#ifdef CONFIG_NET_CLS_ACT
- if (list_empty(&cls->knode.exts->actions))
+ if ((uhtid != 0x800) && (adapter->jump_tables[uhtid])) {
+ if ((adapter->jump_tables[uhtid])->input)
+ memcpy(input, (adapter->jump_tables[uhtid])->input,
+ sizeof(*input));
+ if ((adapter->jump_tables[uhtid])->mask)
+ memcpy(mask, (adapter->jump_tables[uhtid])->mask,
+ sizeof(*mask));
+ }
+ err = ixgbe_clsu32_build_input(input, mask, cls, field_ptr, NULL);
+ if (err)
goto err_out;
- list_for_each_entry(a, &cls->knode.exts->actions, list) {
- if (!is_tcf_gact_shot(a))
- goto err_out;
- }
-#endif
+ err = parse_tc_actions(adapter, cls->knode.exts, &input->action,
+ &queue);
+ if (err < 0)
+ goto err_out;
- input->action = IXGBE_FDIR_DROP_QUEUE;
- queue = IXGBE_FDIR_DROP_QUEUE;
input->sw_idx = loc;
spin_lock(&adapter->fdir_perfect_lock);
if (hlist_empty(&adapter->fdir_filter_list)) {
- memcpy(&adapter->fdir_mask, &mask, sizeof(mask));
- err = ixgbe_fdir_set_input_mask_82599(hw, &mask);
+ memcpy(&adapter->fdir_mask, mask, sizeof(*mask));
+ err = ixgbe_fdir_set_input_mask_82599(hw, mask);
if (err)
goto err_out_w_lock;
- } else if (memcmp(&adapter->fdir_mask, &mask, sizeof(mask))) {
+ } else if (memcmp(&adapter->fdir_mask, mask, sizeof(*mask))) {
err = -EINVAL;
goto err_out_w_lock;
}
- ixgbe_atr_compute_perfect_hash_82599(&input->filter, &mask);
+ ixgbe_atr_compute_perfect_hash_82599(&input->filter, mask);
err = ixgbe_fdir_write_perfect_filter_82599(hw, &input->filter,
input->sw_idx, queue);
if (!err)
ixgbe_update_ethtool_fdir_entry(adapter, input, input->sw_idx);
spin_unlock(&adapter->fdir_perfect_lock);
+ kfree(mask);
return err;
err_out_w_lock:
spin_unlock(&adapter->fdir_perfect_lock);
err_out:
+ kfree(mask);
+free_input:
kfree(input);
- return -EINVAL;
+free_jump:
+ kfree(jump);
+ return err;
}
static int __ixgbe_setup_tc(struct net_device *dev, u32 handle, __be16 proto,
@@ -8515,11 +8765,6 @@ static int ixgbe_set_features(struct net_device *netdev,
adapter->flags |= IXGBE_FLAG_FDIR_HASH_CAPABLE;
}
- if (features & NETIF_F_HW_VLAN_CTAG_RX)
- ixgbe_vlan_strip_enable(adapter);
- else
- ixgbe_vlan_strip_disable(adapter);
-
if (changed & NETIF_F_RXALL)
need_reset = true;
@@ -8536,6 +8781,9 @@ static int ixgbe_set_features(struct net_device *netdev,
if (need_reset)
ixgbe_do_reset(netdev);
+ else if (changed & (NETIF_F_HW_VLAN_CTAG_RX |
+ NETIF_F_HW_VLAN_CTAG_FILTER))
+ ixgbe_set_rx_mode(netdev);
return 0;
}
@@ -8833,17 +9081,36 @@ static void ixgbe_fwd_del(struct net_device *pdev, void *priv)
kfree(fwd_adapter);
}
-#define IXGBE_MAX_TUNNEL_HDR_LEN 80
+#define IXGBE_MAX_MAC_HDR_LEN 127
+#define IXGBE_MAX_NETWORK_HDR_LEN 511
+
static netdev_features_t
ixgbe_features_check(struct sk_buff *skb, struct net_device *dev,
netdev_features_t features)
{
- if (!skb->encapsulation)
- return features;
-
- if (unlikely(skb_inner_mac_header(skb) - skb_transport_header(skb) >
- IXGBE_MAX_TUNNEL_HDR_LEN))
- return features & ~NETIF_F_CSUM_MASK;
+ unsigned int network_hdr_len, mac_hdr_len;
+
+ /* Make certain the headers can be described by a context descriptor */
+ mac_hdr_len = skb_network_header(skb) - skb->data;
+ if (unlikely(mac_hdr_len > IXGBE_MAX_MAC_HDR_LEN))
+ return features & ~(NETIF_F_HW_CSUM |
+ NETIF_F_SCTP_CRC |
+ NETIF_F_HW_VLAN_CTAG_TX |
+ NETIF_F_TSO |
+ NETIF_F_TSO6);
+
+ network_hdr_len = skb_checksum_start(skb) - skb_network_header(skb);
+ if (unlikely(network_hdr_len > IXGBE_MAX_NETWORK_HDR_LEN))
+ return features & ~(NETIF_F_HW_CSUM |
+ NETIF_F_SCTP_CRC |
+ NETIF_F_TSO |
+ NETIF_F_TSO6);
+
+ /* We can only support IPV4 TSO in tunnels if we can mangle the
+ * inner IP ID field, so strip TSO if MANGLEID is not supported.
+ */
+ if (skb->encapsulation && !(features & NETIF_F_TSO_MANGLEID))
+ features &= ~NETIF_F_TSO;
return features;
}
@@ -8858,6 +9125,7 @@ static const struct net_device_ops ixgbe_netdev_ops = {
.ndo_set_mac_address = ixgbe_set_mac,
.ndo_change_mtu = ixgbe_change_mtu,
.ndo_tx_timeout = ixgbe_tx_timeout,
+ .ndo_set_tx_maxrate = ixgbe_tx_maxrate,
.ndo_vlan_rx_add_vid = ixgbe_vlan_rx_add_vid,
.ndo_vlan_rx_kill_vid = ixgbe_vlan_rx_kill_vid,
.ndo_do_ioctl = ixgbe_ioctl,
@@ -8943,7 +9211,7 @@ static inline int ixgbe_enumerate_functions(struct ixgbe_adapter *adapter)
/**
* ixgbe_wol_supported - Check whether device supports WoL
- * @hw: hw specific details
+ * @adapter: the adapter private structure
* @device_id: the device ID
* @subdev_id: the subsystem device ID
*
@@ -8951,19 +9219,33 @@ static inline int ixgbe_enumerate_functions(struct ixgbe_adapter *adapter)
* which devices have WoL support
*
**/
-int ixgbe_wol_supported(struct ixgbe_adapter *adapter, u16 device_id,
- u16 subdevice_id)
+bool ixgbe_wol_supported(struct ixgbe_adapter *adapter, u16 device_id,
+ u16 subdevice_id)
{
struct ixgbe_hw *hw = &adapter->hw;
u16 wol_cap = adapter->eeprom_cap & IXGBE_DEVICE_CAPS_WOL_MASK;
- int is_wol_supported = 0;
+ /* WOL not supported on 82598 */
+ if (hw->mac.type == ixgbe_mac_82598EB)
+ return false;
+
+ /* check eeprom to see if WOL is enabled for X540 and newer */
+ if (hw->mac.type >= ixgbe_mac_X540) {
+ if ((wol_cap == IXGBE_DEVICE_CAPS_WOL_PORT0_1) ||
+ ((wol_cap == IXGBE_DEVICE_CAPS_WOL_PORT0) &&
+ (hw->bus.func == 0)))
+ return true;
+ }
+
+ /* WOL is determined based on device IDs for 82599 MACs */
switch (device_id) {
case IXGBE_DEV_ID_82599_SFP:
 /* Only these subdevices could support WOL */
switch (subdevice_id) {
- case IXGBE_SUBDEV_ID_82599_SFP_WOL0:
case IXGBE_SUBDEV_ID_82599_560FLR:
+ case IXGBE_SUBDEV_ID_82599_LOM_SNAP6:
+ case IXGBE_SUBDEV_ID_82599_SFP_WOL0:
+ case IXGBE_SUBDEV_ID_82599_SFP_2OCP:
/* only support first port */
if (hw->bus.func != 0)
break;
@@ -8971,66 +9253,31 @@ int ixgbe_wol_supported(struct ixgbe_adapter *adapter, u16 device_id,
case IXGBE_SUBDEV_ID_82599_SFP:
case IXGBE_SUBDEV_ID_82599_RNDC:
case IXGBE_SUBDEV_ID_82599_ECNA_DP:
- case IXGBE_SUBDEV_ID_82599_LOM_SFP:
- is_wol_supported = 1;
- break;
+ case IXGBE_SUBDEV_ID_82599_SFP_1OCP:
+ case IXGBE_SUBDEV_ID_82599_SFP_LOM_OEM1:
+ case IXGBE_SUBDEV_ID_82599_SFP_LOM_OEM2:
+ return true;
}
break;
case IXGBE_DEV_ID_82599EN_SFP:
- /* Only this subdevice supports WOL */
+ /* Only these subdevices support WOL */
switch (subdevice_id) {
case IXGBE_SUBDEV_ID_82599EN_SFP_OCP1:
- is_wol_supported = 1;
- break;
+ return true;
}
break;
case IXGBE_DEV_ID_82599_COMBO_BACKPLANE:
/* All except this subdevice support WOL */
if (subdevice_id != IXGBE_SUBDEV_ID_82599_KX4_KR_MEZZ)
- is_wol_supported = 1;
+ return true;
break;
case IXGBE_DEV_ID_82599_KX4:
- is_wol_supported = 1;
- break;
- case IXGBE_DEV_ID_X540T:
- case IXGBE_DEV_ID_X540T1:
- case IXGBE_DEV_ID_X550T:
- case IXGBE_DEV_ID_X550EM_X_KX4:
- case IXGBE_DEV_ID_X550EM_X_KR:
- case IXGBE_DEV_ID_X550EM_X_10G_T:
- /* check eeprom to see if enabled wol */
- if ((wol_cap == IXGBE_DEVICE_CAPS_WOL_PORT0_1) ||
- ((wol_cap == IXGBE_DEVICE_CAPS_WOL_PORT0) &&
- (hw->bus.func == 0))) {
- is_wol_supported = 1;
- }
+ return true;
+ default:
break;
}
- return is_wol_supported;
-}
-
-/**
- * ixgbe_get_platform_mac_addr - Look up MAC address in Open Firmware / IDPROM
- * @adapter: Pointer to adapter struct
- */
-static void ixgbe_get_platform_mac_addr(struct ixgbe_adapter *adapter)
-{
-#ifdef CONFIG_OF
- struct device_node *dp = pci_device_to_OF_node(adapter->pdev);
- struct ixgbe_hw *hw = &adapter->hw;
- const unsigned char *addr;
-
- addr = of_get_mac_address(dp);
- if (addr) {
- ether_addr_copy(hw->mac.perm_addr, addr);
- return;
- }
-#endif /* CONFIG_OF */
-
-#ifdef CONFIG_SPARC
- ether_addr_copy(hw->mac.perm_addr, idprom->id_ethaddr);
-#endif /* CONFIG_SPARC */
+ return false;
}
/**
@@ -9136,23 +9383,23 @@ static int ixgbe_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
strlcpy(netdev->name, pci_name(pdev), sizeof(netdev->name));
/* Setup hw api */
- memcpy(&hw->mac.ops, ii->mac_ops, sizeof(hw->mac.ops));
+ hw->mac.ops = *ii->mac_ops;
hw->mac.type = ii->mac;
hw->mvals = ii->mvals;
/* EEPROM */
- memcpy(&hw->eeprom.ops, ii->eeprom_ops, sizeof(hw->eeprom.ops));
+ hw->eeprom.ops = *ii->eeprom_ops;
eec = IXGBE_READ_REG(hw, IXGBE_EEC(hw));
if (ixgbe_removed(hw->hw_addr)) {
err = -EIO;
goto err_ioremap;
}
/* If EEPROM is valid (bit 8 = 1), use default otherwise use bit bang */
- if (!(eec & (1 << 8)))
+ if (!(eec & BIT(8)))
hw->eeprom.ops.read = &ixgbe_read_eeprom_bit_bang_generic;
/* PHY */
- memcpy(&hw->phy.ops, ii->phy_ops, sizeof(hw->phy.ops));
+ hw->phy.ops = *ii->phy_ops;
hw->phy.sfp_type = ixgbe_sfp_type_unknown;
/* ixgbe_identify_phy_generic will set prtad and mmds properly */
hw->phy.mdio.prtad = MDIO_PRTAD_NONE;
@@ -9169,12 +9416,17 @@ static int ixgbe_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
if (err)
goto err_sw_init;
+ /* Make sure the SWFW semaphore is in a valid state */
+ if (hw->mac.ops.init_swfw_sync)
+ hw->mac.ops.init_swfw_sync(hw);
+
 /* Make it possible for the adapter to be woken up via WOL */
switch (adapter->hw.mac.type) {
case ixgbe_mac_82599EB:
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
IXGBE_WRITE_REG(&adapter->hw, IXGBE_WUS, ~0);
break;
default:
@@ -9215,54 +9467,62 @@ static int ixgbe_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
goto skip_sriov;
/* Mailbox */
ixgbe_init_mbx_params_pf(hw);
- memcpy(&hw->mbx.ops, ii->mbx_ops, sizeof(hw->mbx.ops));
+ hw->mbx.ops = ii->mbx_ops;
pci_sriov_set_totalvfs(pdev, IXGBE_MAX_VFS_DRV_LIMIT);
ixgbe_enable_sriov(adapter);
skip_sriov:
#endif
netdev->features = NETIF_F_SG |
- NETIF_F_IP_CSUM |
- NETIF_F_IPV6_CSUM |
- NETIF_F_HW_VLAN_CTAG_TX |
- NETIF_F_HW_VLAN_CTAG_RX |
NETIF_F_TSO |
NETIF_F_TSO6 |
NETIF_F_RXHASH |
- NETIF_F_RXCSUM;
+ NETIF_F_RXCSUM |
+ NETIF_F_HW_CSUM;
- netdev->hw_features = netdev->features | NETIF_F_HW_L2FW_DOFFLOAD;
+#define IXGBE_GSO_PARTIAL_FEATURES (NETIF_F_GSO_GRE | \
+ NETIF_F_GSO_GRE_CSUM | \
+ NETIF_F_GSO_IPXIP4 | \
+ NETIF_F_GSO_IPXIP6 | \
+ NETIF_F_GSO_UDP_TUNNEL | \
+ NETIF_F_GSO_UDP_TUNNEL_CSUM)
- switch (adapter->hw.mac.type) {
- case ixgbe_mac_82599EB:
- case ixgbe_mac_X540:
- case ixgbe_mac_X550:
- case ixgbe_mac_X550EM_x:
+ netdev->gso_partial_features = IXGBE_GSO_PARTIAL_FEATURES;
+ netdev->features |= NETIF_F_GSO_PARTIAL |
+ IXGBE_GSO_PARTIAL_FEATURES;
+
+ if (hw->mac.type >= ixgbe_mac_82599EB)
netdev->features |= NETIF_F_SCTP_CRC;
- netdev->hw_features |= NETIF_F_SCTP_CRC |
- NETIF_F_NTUPLE |
+
+ /* copy netdev features into list of user selectable features */
+ netdev->hw_features |= netdev->features |
+ NETIF_F_HW_VLAN_CTAG_RX |
+ NETIF_F_HW_VLAN_CTAG_TX |
+ NETIF_F_RXALL |
+ NETIF_F_HW_L2FW_DOFFLOAD;
+
+ if (hw->mac.type >= ixgbe_mac_82599EB)
+ netdev->hw_features |= NETIF_F_NTUPLE |
NETIF_F_HW_TC;
- break;
- default:
- break;
- }
- netdev->hw_features |= NETIF_F_RXALL;
- netdev->features |= NETIF_F_HW_VLAN_CTAG_FILTER;
+ if (pci_using_dac)
+ netdev->features |= NETIF_F_HIGHDMA;
- netdev->vlan_features |= NETIF_F_TSO;
- netdev->vlan_features |= NETIF_F_TSO6;
- netdev->vlan_features |= NETIF_F_IP_CSUM;
- netdev->vlan_features |= NETIF_F_IPV6_CSUM;
- netdev->vlan_features |= NETIF_F_SG;
+ netdev->vlan_features |= netdev->features | NETIF_F_TSO_MANGLEID;
+ netdev->hw_enc_features |= netdev->vlan_features;
+ netdev->mpls_features |= NETIF_F_HW_CSUM;
- netdev->hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM;
+ /* set this bit last since it cannot be part of vlan_features */
+ netdev->features |= NETIF_F_HW_VLAN_CTAG_FILTER |
+ NETIF_F_HW_VLAN_CTAG_RX |
+ NETIF_F_HW_VLAN_CTAG_TX;
netdev->priv_flags |= IFF_UNICAST_FLT;
netdev->priv_flags |= IFF_SUPP_NOFCS;
#ifdef CONFIG_IXGBE_DCB
- netdev->dcbnl_ops = &dcbnl_ops;
+ if (adapter->flags & IXGBE_FLAG_DCB_CAPABLE)
+ netdev->dcbnl_ops = &dcbnl_ops;
#endif
#ifdef IXGBE_FCOE
@@ -9287,10 +9547,6 @@ skip_sriov:
NETIF_F_FCOE_MTU;
}
#endif /* IXGBE_FCOE */
- if (pci_using_dac) {
- netdev->features |= NETIF_F_HIGHDMA;
- netdev->vlan_features |= NETIF_F_HIGHDMA;
- }
if (adapter->flags2 & IXGBE_FLAG2_RSC_CAPABLE)
netdev->hw_features |= NETIF_F_LRO;
@@ -9304,7 +9560,8 @@ skip_sriov:
goto err_sw_init;
}
- ixgbe_get_platform_mac_addr(adapter);
+ eth_platform_get_mac_address(&adapter->pdev->dev,
+ adapter->hw.mac.perm_addr);
memcpy(netdev->dev_addr, hw->mac.perm_addr, netdev->addr_len);
@@ -9455,6 +9712,7 @@ err_sw_init:
ixgbe_disable_sriov(adapter);
adapter->flags2 &= ~IXGBE_FLAG2_SEARCH_FOR_SFP;
iounmap(adapter->io_addr);
+ kfree(adapter->jump_tables[0]);
kfree(adapter->mac_table);
err_ioremap:
disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state);
@@ -9483,6 +9741,7 @@ static void ixgbe_remove(struct pci_dev *pdev)
struct ixgbe_adapter *adapter = pci_get_drvdata(pdev);
struct net_device *netdev;
bool disable_dev;
+ int i;
/* if !adapter then we already cleaned up in probe */
if (!adapter)
@@ -9532,6 +9791,14 @@ static void ixgbe_remove(struct pci_dev *pdev)
e_dev_info("complete\n");
+ for (i = 0; i < IXGBE_MAX_LINK_HANDLE; i++) {
+ if (adapter->jump_tables[i]) {
+ kfree(adapter->jump_tables[i]->input);
+ kfree(adapter->jump_tables[i]->mask);
+ }
+ kfree(adapter->jump_tables[i]);
+ }
+
kfree(adapter->mac_table);
disable_dev = !test_and_set_bit(__IXGBE_DISABLED, &adapter->state);
free_netdev(netdev);
@@ -9612,6 +9879,9 @@ static pci_ers_result_t ixgbe_io_error_detected(struct pci_dev *pdev,
case ixgbe_mac_X550EM_x:
device_id = IXGBE_DEV_ID_X550EM_X_VF;
break;
+ case ixgbe_mac_x550em_a:
+ device_id = IXGBE_DEV_ID_X550EM_A_VF;
+ break;
default:
device_id = 0;
break;
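
[Editor's note] The reworked ixgbe_features_check() above replaces the single tunnel-header-length test with two limits that mirror what one Tx context descriptor can describe: a MAC header of at most 127 bytes and a network header of at most 511 bytes. Frames that exceed either limit lose checksum and TSO offload, and encapsulated TSO additionally requires NETIF_F_TSO_MANGLEID. The stand-alone sketch below mirrors only that clamping logic; the feature flag values and the sample header lengths are illustrative stand-ins, not the kernel's.

    #include <stdio.h>
    #include <stdint.h>

    /* Illustrative stand-ins for netdev feature bits (not the kernel values). */
    #define F_CSUM (1u << 0)
    #define F_TSO  (1u << 1)

    #define MAX_MAC_HDR_LEN     127   /* limits taken from the patch above */
    #define MAX_NETWORK_HDR_LEN 511

    /* Mirror of the clamp in ixgbe_features_check(): strip offloads the
     * hardware context descriptor could not describe for this packet. */
    static uint32_t clamp_features(uint32_t features,
                                   unsigned int mac_hdr_len,
                                   unsigned int network_hdr_len)
    {
            if (mac_hdr_len > MAX_MAC_HDR_LEN)
                    return features & ~(F_CSUM | F_TSO);
            if (network_hdr_len > MAX_NETWORK_HDR_LEN)
                    return features & ~(F_CSUM | F_TSO);
            return features;
    }

    int main(void)
    {
            /* e.g. Ethernet + VLAN = 18-byte MAC header, 20-byte IPv4 header */
            printf("normal frame: 0x%x\n", clamp_features(F_CSUM | F_TSO, 18, 20));
            /* a hypothetical 600-byte network header loses both offloads */
            printf("huge header:  0x%x\n", clamp_features(F_CSUM | F_TSO, 18, 600));
            return 0;
    }
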
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c
index 9993a471d668..a0cb84381cd0 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.c
@@ -1,7 +1,7 @@
/*******************************************************************************
Intel 10 Gigabit PCI Express Linux driver
- Copyright(c) 1999 - 2014 Intel Corporation.
+ Copyright(c) 1999 - 2016 Intel Corporation.
This program is free software; you can redistribute it and/or modify it
under the terms and conditions of the GNU General Public License,
@@ -48,10 +48,10 @@ s32 ixgbe_read_mbx(struct ixgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id)
if (size > mbx->size)
size = mbx->size;
- if (!mbx->ops.read)
+ if (!mbx->ops)
return IXGBE_ERR_MBX;
- return mbx->ops.read(hw, msg, size, mbx_id);
+ return mbx->ops->read(hw, msg, size, mbx_id);
}
/**
@@ -70,10 +70,10 @@ s32 ixgbe_write_mbx(struct ixgbe_hw *hw, u32 *msg, u16 size, u16 mbx_id)
if (size > mbx->size)
return IXGBE_ERR_MBX;
- if (!mbx->ops.write)
+ if (!mbx->ops)
return IXGBE_ERR_MBX;
- return mbx->ops.write(hw, msg, size, mbx_id);
+ return mbx->ops->write(hw, msg, size, mbx_id);
}
/**
@@ -87,10 +87,10 @@ s32 ixgbe_check_for_msg(struct ixgbe_hw *hw, u16 mbx_id)
{
struct ixgbe_mbx_info *mbx = &hw->mbx;
- if (!mbx->ops.check_for_msg)
+ if (!mbx->ops)
return IXGBE_ERR_MBX;
- return mbx->ops.check_for_msg(hw, mbx_id);
+ return mbx->ops->check_for_msg(hw, mbx_id);
}
/**
@@ -104,10 +104,10 @@ s32 ixgbe_check_for_ack(struct ixgbe_hw *hw, u16 mbx_id)
{
struct ixgbe_mbx_info *mbx = &hw->mbx;
- if (!mbx->ops.check_for_ack)
+ if (!mbx->ops)
return IXGBE_ERR_MBX;
- return mbx->ops.check_for_ack(hw, mbx_id);
+ return mbx->ops->check_for_ack(hw, mbx_id);
}
/**
@@ -121,10 +121,10 @@ s32 ixgbe_check_for_rst(struct ixgbe_hw *hw, u16 mbx_id)
{
struct ixgbe_mbx_info *mbx = &hw->mbx;
- if (!mbx->ops.check_for_rst)
+ if (!mbx->ops)
return IXGBE_ERR_MBX;
- return mbx->ops.check_for_rst(hw, mbx_id);
+ return mbx->ops->check_for_rst(hw, mbx_id);
}
/**
@@ -139,10 +139,10 @@ static s32 ixgbe_poll_for_msg(struct ixgbe_hw *hw, u16 mbx_id)
struct ixgbe_mbx_info *mbx = &hw->mbx;
int countdown = mbx->timeout;
- if (!countdown || !mbx->ops.check_for_msg)
+ if (!countdown || !mbx->ops)
return IXGBE_ERR_MBX;
- while (mbx->ops.check_for_msg(hw, mbx_id)) {
+ while (mbx->ops->check_for_msg(hw, mbx_id)) {
countdown--;
if (!countdown)
return IXGBE_ERR_MBX;
@@ -164,10 +164,10 @@ static s32 ixgbe_poll_for_ack(struct ixgbe_hw *hw, u16 mbx_id)
struct ixgbe_mbx_info *mbx = &hw->mbx;
int countdown = mbx->timeout;
- if (!countdown || !mbx->ops.check_for_ack)
+ if (!countdown || !mbx->ops)
return IXGBE_ERR_MBX;
- while (mbx->ops.check_for_ack(hw, mbx_id)) {
+ while (mbx->ops->check_for_ack(hw, mbx_id)) {
countdown--;
if (!countdown)
return IXGBE_ERR_MBX;
@@ -193,7 +193,7 @@ static s32 ixgbe_read_posted_mbx(struct ixgbe_hw *hw, u32 *msg, u16 size,
struct ixgbe_mbx_info *mbx = &hw->mbx;
s32 ret_val;
- if (!mbx->ops.read)
+ if (!mbx->ops)
return IXGBE_ERR_MBX;
ret_val = ixgbe_poll_for_msg(hw, mbx_id);
@@ -201,7 +201,7 @@ static s32 ixgbe_read_posted_mbx(struct ixgbe_hw *hw, u32 *msg, u16 size,
return ret_val;
/* if ack received read message */
- return mbx->ops.read(hw, msg, size, mbx_id);
+ return mbx->ops->read(hw, msg, size, mbx_id);
}
/**
@@ -221,11 +221,11 @@ static s32 ixgbe_write_posted_mbx(struct ixgbe_hw *hw, u32 *msg, u16 size,
s32 ret_val;
/* exit if either we can't write or there isn't a defined timeout */
- if (!mbx->ops.write || !mbx->timeout)
+ if (!mbx->ops || !mbx->timeout)
return IXGBE_ERR_MBX;
/* send msg */
- ret_val = mbx->ops.write(hw, msg, size, mbx_id);
+ ret_val = mbx->ops->write(hw, msg, size, mbx_id);
if (ret_val)
return ret_val;
@@ -307,14 +307,15 @@ static s32 ixgbe_check_for_rst_pf(struct ixgbe_hw *hw, u16 vf_number)
case ixgbe_mac_X540:
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
vflre = IXGBE_READ_REG(hw, IXGBE_VFLREC(reg_offset));
break;
default:
break;
}
- if (vflre & (1 << vf_shift)) {
- IXGBE_WRITE_REG(hw, IXGBE_VFLREC(reg_offset), (1 << vf_shift));
+ if (vflre & BIT(vf_shift)) {
+ IXGBE_WRITE_REG(hw, IXGBE_VFLREC(reg_offset), BIT(vf_shift));
hw->mbx.stats.rsts++;
return 0;
}
@@ -430,6 +431,7 @@ void ixgbe_init_mbx_params_pf(struct ixgbe_hw *hw)
if (hw->mac.type != ixgbe_mac_82599EB &&
hw->mac.type != ixgbe_mac_X550 &&
hw->mac.type != ixgbe_mac_X550EM_x &&
+ hw->mac.type != ixgbe_mac_x550em_a &&
hw->mac.type != ixgbe_mac_X540)
return;
@@ -446,7 +448,7 @@ void ixgbe_init_mbx_params_pf(struct ixgbe_hw *hw)
}
#endif /* CONFIG_PCI_IOV */
-struct ixgbe_mbx_operations mbx_ops_generic = {
+const struct ixgbe_mbx_operations mbx_ops_generic = {
.read = ixgbe_read_mbx_pf,
.write = ixgbe_write_mbx_pf,
.read_posted = ixgbe_read_posted_mbx,
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h
index 8daa95f74548..01c2667c0f92 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_mbx.h
@@ -1,7 +1,7 @@
/*******************************************************************************
Intel 10 Gigabit PCI Express Linux driver
- Copyright(c) 1999 - 2013 Intel Corporation.
+ Copyright(c) 1999 - 2016 Intel Corporation.
This program is free software; you can redistribute it and/or modify it
under the terms and conditions of the GNU General Public License,
@@ -123,6 +123,6 @@ s32 ixgbe_check_for_rst(struct ixgbe_hw *, u16);
void ixgbe_init_mbx_params_pf(struct ixgbe_hw *);
#endif /* CONFIG_PCI_IOV */
-extern struct ixgbe_mbx_operations mbx_ops_generic;
+extern const struct ixgbe_mbx_operations mbx_ops_generic;
#endif /* _IXGBE_MBX_H_ */
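
[Editor's note] The mailbox changes above turn the embedded ops struct in struct ixgbe_mbx_info into a pointer to a const operations table, so each caller now guards once on a missing table (!mbx->ops) instead of checking every function pointer individually. Below is a minimal user-space sketch of that const-ops-pointer dispatch pattern; the names and the demo backend are hypothetical, not the driver's API.

    #include <stdio.h>
    #include <stddef.h>

    /* hypothetical ops table mirroring the const-pointer pattern in ixgbe_mbx */
    struct mbx_ops {
            int (*read)(int id);
            int (*write)(int id, int val);
    };

    struct mbx_info {
            const struct mbx_ops *ops;      /* NULL when no backend is wired up */
    };

    static int demo_read(int id)           { return id * 10; }
    static int demo_write(int id, int val) { printf("write %d to %d\n", val, id); return 0; }

    static const struct mbx_ops demo_ops = { .read = demo_read, .write = demo_write };

    static int mbx_read(struct mbx_info *mbx, int id)
    {
            if (!mbx->ops)                  /* single guard, as in ixgbe_read_mbx() */
                    return -1;
            return mbx->ops->read(id);
    }

    int main(void)
    {
            struct mbx_info mbx = { .ops = &demo_ops };
            printf("read: %d\n", mbx_read(&mbx, 3));
            mbx.ops = NULL;
            printf("read (no ops): %d\n", mbx_read(&mbx, 3));
            return 0;
    }

Making the table const and shared also lets several adapters reference one read-only ops structure instead of copying it per device.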
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_model.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_model.h
index 74c53ad9d268..a8bed3d887f7 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_model.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_model.h
@@ -38,6 +38,12 @@ struct ixgbe_mat_field {
unsigned int type;
};
+struct ixgbe_jump_table {
+ struct ixgbe_mat_field *mat;
+ struct ixgbe_fdir_filter *input;
+ union ixgbe_atr_input *mask;
+};
+
static inline int ixgbe_mat_prgm_sip(struct ixgbe_fdir_filter *input,
union ixgbe_atr_input *mask,
u32 val, u32 m)
@@ -82,6 +88,12 @@ static struct ixgbe_mat_field ixgbe_tcp_fields[] = {
{ .val = NULL } /* terminal node */
};
+static struct ixgbe_mat_field ixgbe_udp_fields[] = {
+ {.off = 0, .val = ixgbe_mat_prgm_ports,
+ .type = IXGBE_ATR_FLOW_TYPE_UDPV4},
+ { .val = NULL } /* terminal node */
+};
+
struct ixgbe_nexthdr {
/* offset, shift, and mask of position to next header */
unsigned int o;
@@ -98,6 +110,8 @@ struct ixgbe_nexthdr {
static struct ixgbe_nexthdr ixgbe_ipv4_jumps[] = {
{ .o = 0, .s = 6, .m = 0xf,
.off = 8, .val = 0x600, .mask = 0xff00, .jump = ixgbe_tcp_fields},
+ { .o = 0, .s = 6, .m = 0xf,
+ .off = 8, .val = 0x1100, .mask = 0xff00, .jump = ixgbe_udp_fields},
{ .jump = NULL } /* terminal node */
};
#endif /* _IXGBE_MODEL_H_ */
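
[Editor's note] ixgbe_model.h now carries both a TCP and a UDP entry in ixgbe_ipv4_jumps: IP protocol 17 (UDP, encoded as 0x1100 under mask 0xff00 at offset 8) links to the new ixgbe_udp_fields table exactly parallel to the existing 0x0600 entry for TCP, and struct ixgbe_jump_table lets the driver remember the matched field table plus the partial input/mask built at the u32 link node. The sketch below walks a simplified jump table for one key, the way ixgbe_configure_clsu32()/ixgbe_clsu32_build_input() compare off/val/mask; the structures are trimmed-down stand-ins, not the driver's.

    #include <stdio.h>
    #include <stdint.h>

    /* simplified stand-ins for struct ixgbe_nexthdr / the per-protocol tables */
    struct jump_entry {
            uint32_t off, val, mask;        /* u32 key that selects this protocol */
            const char *fields;             /* which field table to use next */
    };

    static const struct jump_entry ipv4_jumps[] = {
            { .off = 8, .val = 0x0600, .mask = 0xff00, .fields = "tcp_fields" },
            { .off = 8, .val = 0x1100, .mask = 0xff00, .fields = "udp_fields" },
            { .fields = NULL }              /* terminal node */
    };

    /* pick the field table whose key matches the classifier's jump key */
    static const char *match_jump(uint32_t off, uint32_t val, uint32_t mask)
    {
            int i;

            for (i = 0; ipv4_jumps[i].fields; i++)
                    if (ipv4_jumps[i].off == off &&
                        ipv4_jumps[i].val == val &&
                        ipv4_jumps[i].mask == mask)
                            return ipv4_jumps[i].fields;
            return "no match";
    }

    int main(void)
    {
            printf("%s\n", match_jump(8, 0x1100, 0xff00));  /* -> udp_fields */
            printf("%s\n", match_jump(8, 0x0600, 0xff00));  /* -> tcp_fields */
            return 0;
    }
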
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
index 5abd66c84d00..cc735ec3e045 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_phy.h
@@ -1,7 +1,7 @@
/*******************************************************************************
Intel 10 Gigabit PCI Express Linux driver
- Copyright(c) 1999 - 2014 Intel Corporation.
+ Copyright(c) 1999 - 2016 Intel Corporation.
This program is free software; you can redistribute it and/or modify it
under the terms and conditions of the GNU General Public License,
@@ -81,7 +81,11 @@
#define IXGBE_I2C_EEPROM_STATUS_FAIL 0x2
#define IXGBE_I2C_EEPROM_STATUS_IN_PROGRESS 0x3
#define IXGBE_CS4227 0xBE /* CS4227 address */
+#define IXGBE_CS4227_GLOBAL_ID_LSB 0
+#define IXGBE_CS4227_GLOBAL_ID_MSB 1
#define IXGBE_CS4227_SCRATCH 2
+#define IXGBE_CS4223_PHY_ID 0x7003 /* Quad port */
+#define IXGBE_CS4227_PHY_ID 0x3003 /* Dual port */
#define IXGBE_CS4227_RESET_PENDING 0x1357
#define IXGBE_CS4227_RESET_COMPLETE 0x5AA5
#define IXGBE_CS4227_RETRIES 15
@@ -103,7 +107,7 @@
#define IXGBE_PE 0xE0 /* Port expander addr */
#define IXGBE_PE_OUTPUT 1 /* Output reg offset */
#define IXGBE_PE_CONFIG 3 /* Config reg offset */
-#define IXGBE_PE_BIT1 (1 << 1)
+#define IXGBE_PE_BIT1 BIT(1)
/* Flow control defines */
#define IXGBE_TAF_SYM_PAUSE 0x400
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
index ef1504d41890..e5431bfe3339 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_ptp.c
@@ -1,7 +1,7 @@
/*******************************************************************************
Intel 10 Gigabit PCI Express Linux driver
- Copyright(c) 1999 - 2015 Intel Corporation.
+ Copyright(c) 1999 - 2016 Intel Corporation.
This program is free software; you can redistribute it and/or modify it
under the terms and conditions of the GNU General Public License,
@@ -333,6 +333,7 @@ static void ixgbe_ptp_convert_to_hwtstamp(struct ixgbe_adapter *adapter,
*/
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
/* Upper 32 bits represent billions of cycles, lower 32 bits
* represent cycles. However, we use timespec64_to_ns for the
* correct math even though the units haven't been corrected
@@ -395,7 +396,7 @@ static int ixgbe_ptp_adjfreq_82599(struct ptp_clock_info *ptp, s32 ppb)
if (incval > 0x00FFFFFFULL)
e_dev_warn("PTP ppb adjusted SYSTIME rate overflowed!\n");
IXGBE_WRITE_REG(hw, IXGBE_TIMINCA,
- (1 << IXGBE_INCPER_SHIFT_82599) |
+ BIT(IXGBE_INCPER_SHIFT_82599) |
((u32)incval & 0x00FFFFFFUL));
break;
default:
@@ -921,6 +922,7 @@ static int ixgbe_ptp_set_timestamp_mode(struct ixgbe_adapter *adapter,
switch (hw->mac.type) {
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
/* enable timestamping all packets only if at least some
* packets were requested. Otherwise, play nice and disable
* timestamping
@@ -1083,6 +1085,7 @@ void ixgbe_ptp_start_cyclecounter(struct ixgbe_adapter *adapter)
cc.shift = 2;
}
/* fallthrough */
+ case ixgbe_mac_x550em_a:
case ixgbe_mac_X550:
cc.read = ixgbe_ptp_read_X550;
@@ -1111,7 +1114,7 @@ void ixgbe_ptp_start_cyclecounter(struct ixgbe_adapter *adapter)
incval >>= IXGBE_INCVAL_SHIFT_82599;
cc.shift -= IXGBE_INCVAL_SHIFT_82599;
IXGBE_WRITE_REG(hw, IXGBE_TIMINCA,
- (1 << IXGBE_INCPER_SHIFT_82599) | incval);
+ BIT(IXGBE_INCPER_SHIFT_82599) | incval);
break;
default:
/* other devices aren't supported */
@@ -1223,6 +1226,7 @@ static long ixgbe_ptp_create_clock(struct ixgbe_adapter *adapter)
break;
case ixgbe_mac_X550:
case ixgbe_mac_X550EM_x:
+ case ixgbe_mac_x550em_a:
snprintf(adapter->ptp_caps.name, 16, "%s", netdev->name);
adapter->ptp_caps.owner = THIS_MODULE;
adapter->ptp_caps.max_adj = 30000000;
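
[Editor's note] Many hunks above and below (mailbox reset detection, the PTP TIMINCA writes, the multicast table updates in ixgbe_sriov.c, and the register #defines in ixgbe_phy.h/ixgbe_type.h) mechanically replace open-coded (1 << n) shifts with the kernel's BIT(n) helper, which shifts an unsigned long constant and so also sidesteps the undefined behaviour of shifting 1 into the sign bit. A tiny self-contained sketch of the same set/test/clear pattern, with BIT defined locally for illustration:

    #include <stdio.h>
    #include <stdint.h>

    /* local stand-in for the kernel helper: BIT(nr) expands to (1UL << (nr)) */
    #define BIT(nr) (1UL << (nr))

    int main(void)
    {
            uint32_t reg = 0;
            unsigned int vf_shift = 5;

            reg |= BIT(vf_shift);           /* enable, as in ixgbe_vf_reset_msg() */
            printf("set:   0x%08x\n", reg);

            if (reg & BIT(vf_shift))        /* test, as in ixgbe_check_for_rst_pf() */
                    printf("bit %u is set\n", vf_shift);

            reg &= ~BIT(vf_shift);          /* clear again */
            printf("clear: 0x%08x\n", reg);
            return 0;
    }
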
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
index 8025a3f93598..c5caacdd193d 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.c
@@ -406,7 +406,7 @@ static int ixgbe_set_vf_multicasts(struct ixgbe_adapter *adapter,
vector_reg = (vfinfo->vf_mc_hashes[i] >> 5) & 0x7F;
vector_bit = vfinfo->vf_mc_hashes[i] & 0x1F;
mta_reg = IXGBE_READ_REG(hw, IXGBE_MTA(vector_reg));
- mta_reg |= (1 << vector_bit);
+ mta_reg |= BIT(vector_bit);
IXGBE_WRITE_REG(hw, IXGBE_MTA(vector_reg), mta_reg);
}
vmolr |= IXGBE_VMOLR_ROMPE;
@@ -433,7 +433,7 @@ void ixgbe_restore_vf_multicasts(struct ixgbe_adapter *adapter)
vector_reg = (vfinfo->vf_mc_hashes[j] >> 5) & 0x7F;
vector_bit = vfinfo->vf_mc_hashes[j] & 0x1F;
mta_reg = IXGBE_READ_REG(hw, IXGBE_MTA(vector_reg));
- mta_reg |= (1 << vector_bit);
+ mta_reg |= BIT(vector_bit);
IXGBE_WRITE_REG(hw, IXGBE_MTA(vector_reg), mta_reg);
}
@@ -536,9 +536,9 @@ static s32 ixgbe_set_vf_lpe(struct ixgbe_adapter *adapter, u32 *msgbuf, u32 vf)
/* enable or disable receive depending on error */
vfre = IXGBE_READ_REG(hw, IXGBE_VFRE(reg_offset));
if (err)
- vfre &= ~(1 << vf_shift);
+ vfre &= ~BIT(vf_shift);
else
- vfre |= 1 << vf_shift;
+ vfre |= BIT(vf_shift);
IXGBE_WRITE_REG(hw, IXGBE_VFRE(reg_offset), vfre);
if (err) {
@@ -589,47 +589,47 @@ static void ixgbe_clear_vmvir(struct ixgbe_adapter *adapter, u32 vf)
static void ixgbe_clear_vf_vlans(struct ixgbe_adapter *adapter, u32 vf)
{
struct ixgbe_hw *hw = &adapter->hw;
- u32 i;
+ u32 vlvfb_mask, pool_mask, i;
+
+ /* create mask for VF and other pools */
+ pool_mask = ~BIT(VMDQ_P(0) % 32);
+ vlvfb_mask = BIT(vf % 32);
/* post increment loop, covers VLVF_ENTRIES - 1 to 0 */
for (i = IXGBE_VLVF_ENTRIES; i--;) {
u32 bits[2], vlvfb, vid, vfta, vlvf;
u32 word = i * 2 + vf / 32;
- u32 mask = 1 << (vf % 32);
+ u32 mask;
vlvfb = IXGBE_READ_REG(hw, IXGBE_VLVFB(word));
/* if our bit isn't set we can skip it */
- if (!(vlvfb & mask))
+ if (!(vlvfb & vlvfb_mask))
continue;
/* clear our bit from vlvfb */
- vlvfb ^= mask;
+ vlvfb ^= vlvfb_mask;
 /* create 64b mask to check to see if we should clear VLVF */
bits[word % 2] = vlvfb;
bits[~word % 2] = IXGBE_READ_REG(hw, IXGBE_VLVFB(word ^ 1));
- /* if promisc is enabled, PF will be present, leave VFTA */
- if (adapter->flags2 & IXGBE_FLAG2_VLAN_PROMISC) {
- bits[VMDQ_P(0) / 32] &= ~(1 << (VMDQ_P(0) % 32));
-
- if (bits[0] || bits[1])
- goto update_vlvfb;
- goto update_vlvf;
- }
-
/* if other pools are present, just remove ourselves */
- if (bits[0] || bits[1])
+ if (bits[(VMDQ_P(0) / 32) ^ 1] ||
+ (bits[VMDQ_P(0) / 32] & pool_mask))
goto update_vlvfb;
+ /* if PF is present, leave VFTA */
+ if (bits[0] || bits[1])
+ goto update_vlvf;
+
/* if we cannot determine VLAN just remove ourselves */
vlvf = IXGBE_READ_REG(hw, IXGBE_VLVF(i));
if (!vlvf)
goto update_vlvfb;
vid = vlvf & VLAN_VID_MASK;
- mask = 1 << (vid % 32);
+ mask = BIT(vid % 32);
/* clear bit from VFTA */
vfta = IXGBE_READ_REG(hw, IXGBE_VFTA(vid / 32));
@@ -638,6 +638,9 @@ static void ixgbe_clear_vf_vlans(struct ixgbe_adapter *adapter, u32 vf)
update_vlvf:
/* clear POOL selection enable */
IXGBE_WRITE_REG(hw, IXGBE_VLVF(i), 0);
+
+ if (!(adapter->flags2 & IXGBE_FLAG2_VLAN_PROMISC))
+ vlvfb = 0;
update_vlvfb:
/* clear pool bits */
IXGBE_WRITE_REG(hw, IXGBE_VLVFB(word), vlvfb);
@@ -810,7 +813,7 @@ static int ixgbe_vf_reset_msg(struct ixgbe_adapter *adapter, u32 vf)
/* enable transmit for vf */
reg = IXGBE_READ_REG(hw, IXGBE_VFTE(reg_offset));
- reg |= 1 << vf_shift;
+ reg |= BIT(vf_shift);
IXGBE_WRITE_REG(hw, IXGBE_VFTE(reg_offset), reg);
/* force drop enable for all VF Rx queues */
@@ -818,7 +821,7 @@ static int ixgbe_vf_reset_msg(struct ixgbe_adapter *adapter, u32 vf)
/* enable receive for vf */
reg = IXGBE_READ_REG(hw, IXGBE_VFRE(reg_offset));
- reg |= 1 << vf_shift;
+ reg |= BIT(vf_shift);
/*
* The 82599 cannot support a mix of jumbo and non-jumbo PF/VFs.
* For more info take a look at ixgbe_set_vf_lpe
@@ -834,7 +837,7 @@ static int ixgbe_vf_reset_msg(struct ixgbe_adapter *adapter, u32 vf)
#endif /* CONFIG_FCOE */
if (pf_max_frame > ETH_FRAME_LEN)
- reg &= ~(1 << vf_shift);
+ reg &= ~BIT(vf_shift);
}
IXGBE_WRITE_REG(hw, IXGBE_VFRE(reg_offset), reg);
@@ -843,7 +846,7 @@ static int ixgbe_vf_reset_msg(struct ixgbe_adapter *adapter, u32 vf)
/* Enable counting of spoofed packets in the SSVPC register */
reg = IXGBE_READ_REG(hw, IXGBE_VMECM(reg_offset));
- reg |= (1 << vf_shift);
+ reg |= BIT(vf_shift);
IXGBE_WRITE_REG(hw, IXGBE_VMECM(reg_offset), reg);
/*
@@ -887,7 +890,7 @@ static int ixgbe_set_vf_mac_addr(struct ixgbe_adapter *adapter,
return -1;
}
- if (adapter->vfinfo[vf].pf_set_mac &&
+ if (adapter->vfinfo[vf].pf_set_mac && !adapter->vfinfo[vf].trusted &&
!ether_addr_equal(adapter->vfinfo[vf].vf_mac_addresses, new_mac)) {
e_warn(drv,
"VF %d attempted to override administratively set MAC address\n"
@@ -905,8 +908,6 @@ static int ixgbe_set_vf_vlan_msg(struct ixgbe_adapter *adapter,
u32 add = (msgbuf[0] & IXGBE_VT_MSGINFO_MASK) >> IXGBE_VT_MSGINFO_SHIFT;
u32 vid = (msgbuf[1] & IXGBE_VLVF_VLANID_MASK);
u8 tcs = netdev_get_num_tc(adapter->netdev);
- struct ixgbe_hw *hw = &adapter->hw;
- int err;
if (adapter->vfinfo[vf].pf_vlan || tcs) {
e_warn(drv,
@@ -920,19 +921,7 @@ static int ixgbe_set_vf_vlan_msg(struct ixgbe_adapter *adapter,
if (!vid && !add)
return 0;
- err = ixgbe_set_vf_vlan(adapter, add, vid, vf);
- if (err)
- return err;
-
- if (adapter->vfinfo[vf].spoofchk_enabled)
- hw->mac.ops.set_vlan_anti_spoofing(hw, true, vf);
-
- if (add)
- adapter->vfinfo[vf].vlan_count++;
- else if (adapter->vfinfo[vf].vlan_count)
- adapter->vfinfo[vf].vlan_count--;
-
- return 0;
+ return ixgbe_set_vf_vlan(adapter, add, vid, vf);
}
static int ixgbe_set_vf_macvlan_msg(struct ixgbe_adapter *adapter,
@@ -961,8 +950,11 @@ static int ixgbe_set_vf_macvlan_msg(struct ixgbe_adapter *adapter,
* If the VF is allowed to set MAC filters then turn off
* anti-spoofing to avoid false positives.
*/
- if (adapter->vfinfo[vf].spoofchk_enabled)
- ixgbe_ndo_set_vf_spoofchk(adapter->netdev, vf, false);
+ if (adapter->vfinfo[vf].spoofchk_enabled) {
+ struct ixgbe_hw *hw = &adapter->hw;
+
+ hw->mac.ops.set_mac_anti_spoofing(hw, false, vf);
+ }
}
err = ixgbe_set_vf_macvlan(adapter, vf, index, new_mac);
@@ -1318,9 +1310,6 @@ static int ixgbe_enable_port_vlan(struct ixgbe_adapter *adapter, int vf,
ixgbe_set_vmvir(adapter, vlan, qos, vf);
ixgbe_set_vmolr(hw, vf, false);
- if (adapter->vfinfo[vf].spoofchk_enabled)
- hw->mac.ops.set_vlan_anti_spoofing(hw, true, vf);
- adapter->vfinfo[vf].vlan_count++;
/* enable hide vlan on X550 */
if (hw->mac.type >= ixgbe_mac_X550)
@@ -1353,9 +1342,6 @@ static int ixgbe_disable_port_vlan(struct ixgbe_adapter *adapter, int vf)
ixgbe_set_vf_vlan(adapter, true, 0, vf);
ixgbe_clear_vmvir(adapter, vf);
ixgbe_set_vmolr(hw, vf, true);
- hw->mac.ops.set_vlan_anti_spoofing(hw, false, vf);
- if (adapter->vfinfo[vf].vlan_count)
- adapter->vfinfo[vf].vlan_count--;
/* disable hide VLAN on X550 */
if (hw->mac.type >= ixgbe_mac_X550)
@@ -1395,7 +1381,7 @@ out:
return err;
}
-static int ixgbe_link_mbps(struct ixgbe_adapter *adapter)
+int ixgbe_link_mbps(struct ixgbe_adapter *adapter)
{
switch (adapter->link_speed) {
case IXGBE_LINK_SPEED_100_FULL:
@@ -1522,27 +1508,34 @@ int ixgbe_ndo_set_vf_bw(struct net_device *netdev, int vf, int min_tx_rate,
int ixgbe_ndo_set_vf_spoofchk(struct net_device *netdev, int vf, bool setting)
{
struct ixgbe_adapter *adapter = netdev_priv(netdev);
- int vf_target_reg = vf >> 3;
- int vf_target_shift = vf % 8;
struct ixgbe_hw *hw = &adapter->hw;
- u32 regval;
if (vf >= adapter->num_vfs)
return -EINVAL;
adapter->vfinfo[vf].spoofchk_enabled = setting;
- regval = IXGBE_READ_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg));
- regval &= ~(1 << vf_target_shift);
- regval |= (setting << vf_target_shift);
- IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg), regval);
-
- if (adapter->vfinfo[vf].vlan_count) {
- vf_target_shift += IXGBE_SPOOF_VLANAS_SHIFT;
- regval = IXGBE_READ_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg));
- regval &= ~(1 << vf_target_shift);
- regval |= (setting << vf_target_shift);
- IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg), regval);
+ /* configure MAC spoofing */
+ hw->mac.ops.set_mac_anti_spoofing(hw, setting, vf);
+
+ /* configure VLAN spoofing */
+ hw->mac.ops.set_vlan_anti_spoofing(hw, setting, vf);
+
+ /* Ensure LLDP and FC are set for Ethertype Antispoofing if we will be
+ * calling set_ethertype_anti_spoofing for each VF in the loop below
+ */
+ if (hw->mac.ops.set_ethertype_anti_spoofing) {
+ IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_LLDP),
+ (IXGBE_ETQF_FILTER_EN |
+ IXGBE_ETQF_TX_ANTISPOOF |
+ IXGBE_ETH_P_LLDP));
+
+ IXGBE_WRITE_REG(hw, IXGBE_ETQF(IXGBE_ETQF_FILTER_FC),
+ (IXGBE_ETQF_FILTER_EN |
+ IXGBE_ETQF_TX_ANTISPOOF |
+ ETH_P_PAUSE));
+
+ hw->mac.ops.set_ethertype_anti_spoofing(hw, setting, vf);
}
return 0;
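
[Editor's note] ixgbe_clear_vf_vlans() above indexes the VLVFB pool-enable registers as pairs per VLVF entry: for entry i and pool/VF n the register word is i * 2 + n / 32 and the bit within it is n % 32, which is why the patch precomputes vlvfb_mask = BIT(vf % 32) and a PF pool_mask once before the loop. The arithmetic below only demonstrates that indexing for a couple of hypothetical VF numbers.

    #include <stdio.h>

    /* show which VLVFB word/bit a given VLVF entry + pool number lands in,
     * mirroring the "word = i * 2 + vf / 32" indexing used above */
    static void show_slot(unsigned int entry, unsigned int vf)
    {
            unsigned int word = entry * 2 + vf / 32;
            unsigned int bit  = vf % 32;

            printf("VLVF entry %u, VF %u -> VLVFB word %u, bit %u\n",
                   entry, vf, word, bit);
    }

    int main(void)
    {
            show_slot(0, 5);        /* first entry, VF 5  -> word 0, bit 5 */
            show_slot(0, 40);       /* first entry, VF 40 -> word 1, bit 8 */
            show_slot(3, 40);       /* entry 3,    VF 40 -> word 7, bit 8 */
            return 0;
    }
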
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.h
index dad925706f4c..47e65e2f886a 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_sriov.h
@@ -44,6 +44,7 @@ void ixgbe_ping_all_vfs(struct ixgbe_adapter *adapter);
int ixgbe_ndo_set_vf_mac(struct net_device *netdev, int queue, u8 *mac);
int ixgbe_ndo_set_vf_vlan(struct net_device *netdev, int queue, u16 vlan,
u8 qos);
+int ixgbe_link_mbps(struct ixgbe_adapter *adapter);
int ixgbe_ndo_set_vf_bw(struct net_device *netdev, int vf, int min_tx_rate,
int max_tx_rate);
int ixgbe_ndo_set_vf_spoofchk(struct net_device *netdev, int vf, bool setting);
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
index bf7367a08716..da3d8358fee0 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_type.h
@@ -1,7 +1,7 @@
/*******************************************************************************
Intel 10 Gigabit PCI Express Linux driver
- Copyright(c) 1999 - 2015 Intel Corporation.
+ Copyright(c) 1999 - 2016 Intel Corporation.
This program is free software; you can redistribute it and/or modify it
under the terms and conditions of the GNU General Public License,
@@ -59,8 +59,12 @@
#define IXGBE_SUBDEV_ID_82599_RNDC 0x1F72
#define IXGBE_SUBDEV_ID_82599_560FLR 0x17D0
#define IXGBE_SUBDEV_ID_82599_SP_560FLR 0x211B
+#define IXGBE_SUBDEV_ID_82599_LOM_SNAP6 0x2159
+#define IXGBE_SUBDEV_ID_82599_SFP_1OCP 0x000D
+#define IXGBE_SUBDEV_ID_82599_SFP_2OCP 0x0008
+#define IXGBE_SUBDEV_ID_82599_SFP_LOM_OEM1 0x8976
+#define IXGBE_SUBDEV_ID_82599_SFP_LOM_OEM2 0x06EE
#define IXGBE_SUBDEV_ID_82599_ECNA_DP 0x0470
-#define IXGBE_SUBDEV_ID_82599_LOM_SFP 0x8976
#define IXGBE_DEV_ID_82599_SFP_EM 0x1507
#define IXGBE_DEV_ID_82599_SFP_SF2 0x154D
#define IXGBE_DEV_ID_82599EN_SFP 0x1557
@@ -75,21 +79,25 @@
#define IXGBE_DEV_ID_X540T1 0x1560
#define IXGBE_DEV_ID_X550T 0x1563
+#define IXGBE_DEV_ID_X550T1 0x15D1
#define IXGBE_DEV_ID_X550EM_X_KX4 0x15AA
#define IXGBE_DEV_ID_X550EM_X_KR 0x15AB
#define IXGBE_DEV_ID_X550EM_X_SFP 0x15AC
#define IXGBE_DEV_ID_X550EM_X_10G_T 0x15AD
#define IXGBE_DEV_ID_X550EM_X_1G_T 0x15AE
-#define IXGBE_DEV_ID_X550_VF_HV 0x1564
-#define IXGBE_DEV_ID_X550_VF 0x1565
-#define IXGBE_DEV_ID_X550EM_X_VF 0x15A8
-#define IXGBE_DEV_ID_X550EM_X_VF_HV 0x15A9
+#define IXGBE_DEV_ID_X550EM_A_KR 0x15C2
+#define IXGBE_DEV_ID_X550EM_A_KR_L 0x15C3
+#define IXGBE_DEV_ID_X550EM_A_SFP_N 0x15C4
+#define IXGBE_DEV_ID_X550EM_A_SGMII 0x15C6
+#define IXGBE_DEV_ID_X550EM_A_SGMII_L 0x15C7
+#define IXGBE_DEV_ID_X550EM_A_SFP 0x15CE
/* VF Device IDs */
-#define IXGBE_DEV_ID_82599_VF 0x10ED
-#define IXGBE_DEV_ID_X540_VF 0x1515
+#define IXGBE_DEV_ID_82599_VF 0x10ED
+#define IXGBE_DEV_ID_X540_VF 0x1515
#define IXGBE_DEV_ID_X550_VF 0x1565
#define IXGBE_DEV_ID_X550EM_X_VF 0x15A8
+#define IXGBE_DEV_ID_X550EM_A_VF 0x15C5
#define IXGBE_CAT(r, m) IXGBE_##r##_##m
@@ -128,7 +136,7 @@
#define IXGBE_FLA_X540 IXGBE_FLA_8259X
#define IXGBE_FLA_X550 IXGBE_FLA_8259X
#define IXGBE_FLA_X550EM_x IXGBE_FLA_8259X
-#define IXGBE_FLA_X550EM_a 0x15F6C
+#define IXGBE_FLA_X550EM_a 0x15F68
#define IXGBE_FLA(_hw) IXGBE_BY_MAC((_hw), FLA)
#define IXGBE_EEMNGCTL 0x10110
#define IXGBE_EEMNGDATA 0x10114
@@ -143,13 +151,6 @@
#define IXGBE_GRC_X550EM_a 0x15F64
#define IXGBE_GRC(_hw) IXGBE_BY_MAC((_hw), GRC)
-#define IXGBE_SRAMREL_8259X 0x10210
-#define IXGBE_SRAMREL_X540 IXGBE_SRAMREL_8259X
-#define IXGBE_SRAMREL_X550 IXGBE_SRAMREL_8259X
-#define IXGBE_SRAMREL_X550EM_x IXGBE_SRAMREL_8259X
-#define IXGBE_SRAMREL_X550EM_a 0x15F6C
-#define IXGBE_SRAMREL(_hw) IXGBE_BY_MAC((_hw), SRAMREL)
-
/* General Receive Control */
#define IXGBE_GRC_MNG 0x00000001 /* Manageability Enable */
#define IXGBE_GRC_APME 0x00000002 /* APM enabled in EEPROM */
@@ -375,6 +376,8 @@ struct ixgbe_thermal_sensor_data {
#define IXGBE_MRCTL(_i) (0x0F600 + ((_i) * 4))
#define IXGBE_VMRVLAN(_i) (0x0F610 + ((_i) * 4))
#define IXGBE_VMRVM(_i) (0x0F630 + ((_i) * 4))
+#define IXGBE_WQBR_RX(_i) (0x2FB0 + ((_i) * 4)) /* 4 total */
+#define IXGBE_WQBR_TX(_i) (0x8130 + ((_i) * 4)) /* 4 total */
#define IXGBE_L34T_IMIR(_i) (0x0E800 + ((_i) * 4)) /*128 of these (0-127)*/
#define IXGBE_RXFECCERR0 0x051B8
#define IXGBE_LLITHRESH 0x0EC90
@@ -446,6 +449,8 @@ struct ixgbe_thermal_sensor_data {
#define IXGBE_DMATXCTL_TE 0x1 /* Transmit Enable */
#define IXGBE_DMATXCTL_NS 0x2 /* No Snoop LSO hdr buffer */
#define IXGBE_DMATXCTL_GDV 0x8 /* Global Double VLAN */
+#define IXGBE_DMATXCTL_MDP_EN 0x20 /* Bit 5 */
+#define IXGBE_DMATXCTL_MBINTEN 0x40 /* Bit 6 */
#define IXGBE_DMATXCTL_VT_SHIFT 16 /* VLAN EtherType */
#define IXGBE_PFDTXGSWC_VT_LBEN 0x1 /* Local L2 VT switch enable */
@@ -543,6 +548,7 @@ struct ixgbe_thermal_sensor_data {
/* DCB registers */
#define MAX_TRAFFIC_CLASS 8
#define X540_TRAFFIC_CLASS 4
+#define DEF_TRAFFIC_CLASS 1
#define IXGBE_RMCS 0x03D00
#define IXGBE_DPMCS 0x07F40
#define IXGBE_PDPMCS 0x0CD00
@@ -554,7 +560,6 @@ struct ixgbe_thermal_sensor_data {
#define IXGBE_TDPT2TCCR(_i) (0x0CD20 + ((_i) * 4)) /* 8 of these (0-7) */
#define IXGBE_TDPT2TCSR(_i) (0x0CD40 + ((_i) * 4)) /* 8 of these (0-7) */
-
/* Security Control Registers */
#define IXGBE_SECTXCTRL 0x08800
#define IXGBE_SECTXSTAT 0x08804
@@ -693,16 +698,16 @@ struct ixgbe_thermal_sensor_data {
#define IXGBE_FCDMARW 0x02420 /* FC Receive DMA RW */
#define IXGBE_FCINVST0 0x03FC0 /* FC Invalid DMA Context Status Reg 0 */
#define IXGBE_FCINVST(_i) (IXGBE_FCINVST0 + ((_i) * 4))
-#define IXGBE_FCBUFF_VALID (1 << 0) /* DMA Context Valid */
-#define IXGBE_FCBUFF_BUFFSIZE (3 << 3) /* User Buffer Size */
-#define IXGBE_FCBUFF_WRCONTX (1 << 7) /* 0: Initiator, 1: Target */
+#define IXGBE_FCBUFF_VALID BIT(0) /* DMA Context Valid */
+#define IXGBE_FCBUFF_BUFFSIZE (3u << 3) /* User Buffer Size */
+#define IXGBE_FCBUFF_WRCONTX BIT(7) /* 0: Initiator, 1: Target */
#define IXGBE_FCBUFF_BUFFCNT 0x0000ff00 /* Number of User Buffers */
#define IXGBE_FCBUFF_OFFSET 0xffff0000 /* User Buffer Offset */
#define IXGBE_FCBUFF_BUFFSIZE_SHIFT 3
#define IXGBE_FCBUFF_BUFFCNT_SHIFT 8
#define IXGBE_FCBUFF_OFFSET_SHIFT 16
-#define IXGBE_FCDMARW_WE (1 << 14) /* Write enable */
-#define IXGBE_FCDMARW_RE (1 << 15) /* Read enable */
+#define IXGBE_FCDMARW_WE BIT(14) /* Write enable */
+#define IXGBE_FCDMARW_RE BIT(15) /* Read enable */
#define IXGBE_FCDMARW_FCOESEL 0x000001ff /* FC X_ID: 11 bits */
#define IXGBE_FCDMARW_LASTSIZE 0xffff0000 /* Last User Buffer Size */
#define IXGBE_FCDMARW_LASTSIZE_SHIFT 16
@@ -719,23 +724,23 @@ struct ixgbe_thermal_sensor_data {
#define IXGBE_FCFLT 0x05108 /* FC FLT Context */
#define IXGBE_FCFLTRW 0x05110 /* FC Filter RW Control */
#define IXGBE_FCPARAM 0x051d8 /* FC Offset Parameter */
-#define IXGBE_FCFLT_VALID (1 << 0) /* Filter Context Valid */
-#define IXGBE_FCFLT_FIRST (1 << 1) /* Filter First */
+#define IXGBE_FCFLT_VALID BIT(0) /* Filter Context Valid */
+#define IXGBE_FCFLT_FIRST BIT(1) /* Filter First */
#define IXGBE_FCFLT_SEQID 0x00ff0000 /* Sequence ID */
#define IXGBE_FCFLT_SEQCNT 0xff000000 /* Sequence Count */
-#define IXGBE_FCFLTRW_RVALDT (1 << 13) /* Fast Re-Validation */
-#define IXGBE_FCFLTRW_WE (1 << 14) /* Write Enable */
-#define IXGBE_FCFLTRW_RE (1 << 15) /* Read Enable */
+#define IXGBE_FCFLTRW_RVALDT BIT(13) /* Fast Re-Validation */
+#define IXGBE_FCFLTRW_WE BIT(14) /* Write Enable */
+#define IXGBE_FCFLTRW_RE BIT(15) /* Read Enable */
/* FCoE Receive Control */
#define IXGBE_FCRXCTRL 0x05100 /* FC Receive Control */
-#define IXGBE_FCRXCTRL_FCOELLI (1 << 0) /* Low latency interrupt */
-#define IXGBE_FCRXCTRL_SAVBAD (1 << 1) /* Save Bad Frames */
-#define IXGBE_FCRXCTRL_FRSTRDH (1 << 2) /* EN 1st Read Header */
-#define IXGBE_FCRXCTRL_LASTSEQH (1 << 3) /* EN Last Header in Seq */
-#define IXGBE_FCRXCTRL_ALLH (1 << 4) /* EN All Headers */
-#define IXGBE_FCRXCTRL_FRSTSEQH (1 << 5) /* EN 1st Seq. Header */
-#define IXGBE_FCRXCTRL_ICRC (1 << 6) /* Ignore Bad FC CRC */
-#define IXGBE_FCRXCTRL_FCCRCBO (1 << 7) /* FC CRC Byte Ordering */
+#define IXGBE_FCRXCTRL_FCOELLI BIT(0) /* Low latency interrupt */
+#define IXGBE_FCRXCTRL_SAVBAD BIT(1) /* Save Bad Frames */
+#define IXGBE_FCRXCTRL_FRSTRDH BIT(2) /* EN 1st Read Header */
+#define IXGBE_FCRXCTRL_LASTSEQH BIT(3) /* EN Last Header in Seq */
+#define IXGBE_FCRXCTRL_ALLH BIT(4) /* EN All Headers */
+#define IXGBE_FCRXCTRL_FRSTSEQH BIT(5) /* EN 1st Seq. Header */
+#define IXGBE_FCRXCTRL_ICRC BIT(6) /* Ignore Bad FC CRC */
+#define IXGBE_FCRXCTRL_FCCRCBO BIT(7) /* FC CRC Byte Ordering */
#define IXGBE_FCRXCTRL_FCOEVER 0x00000f00 /* FCoE Version: 4 bits */
#define IXGBE_FCRXCTRL_FCOEVER_SHIFT 8
/* FCoE Redirection */
@@ -1056,15 +1061,9 @@ struct ixgbe_thermal_sensor_data {
#define IXGBE_TIC_DW2(_i) (0x082B0 + ((_i) * 4))
#define IXGBE_TDPROBE 0x07F20
#define IXGBE_TXBUFCTRL 0x0C600
-#define IXGBE_TXBUFDATA0 0x0C610
-#define IXGBE_TXBUFDATA1 0x0C614
-#define IXGBE_TXBUFDATA2 0x0C618
-#define IXGBE_TXBUFDATA3 0x0C61C
+#define IXGBE_TXBUFDATA(_i) (0x0C610 + ((_i) * 4)) /* 4 of these (0-3) */
#define IXGBE_RXBUFCTRL 0x03600
-#define IXGBE_RXBUFDATA0 0x03610
-#define IXGBE_RXBUFDATA1 0x03614
-#define IXGBE_RXBUFDATA2 0x03618
-#define IXGBE_RXBUFDATA3 0x0361C
+#define IXGBE_RXBUFDATA(_i) (0x03610 + ((_i) * 4)) /* 4 of these (0-3) */
#define IXGBE_PCIE_DIAG(_i) (0x11090 + ((_i) * 4)) /* 8 of these */
#define IXGBE_RFVAL 0x050A4
#define IXGBE_MDFTC1 0x042B8
@@ -1127,6 +1126,7 @@ struct ixgbe_thermal_sensor_data {
#define IXGBE_XPCSS 0x04290
#define IXGBE_MFLCN 0x04294
#define IXGBE_SERDESC 0x04298
+#define IXGBE_MAC_SGMII_BUSY 0x04298
#define IXGBE_MACS 0x0429C
#define IXGBE_AUTOC 0x042A0
#define IXGBE_LINKS 0x042A4
@@ -1203,6 +1203,8 @@ struct ixgbe_thermal_sensor_data {
#define IXGBE_RDRXCTL_RSCLLIDIS 0x00800000 /* Disable RSC compl on LLI */
#define IXGBE_RDRXCTL_RSCACKC 0x02000000 /* must set 1 when RSC enabled */
#define IXGBE_RDRXCTL_FCOE_WRFIX 0x04000000 /* must set 1 when RSC enabled */
+#define IXGBE_RDRXCTL_MBINTEN 0x10000000
+#define IXGBE_RDRXCTL_MDP_EN 0x20000000
/* RQTC Bit Masks and Shifts */
#define IXGBE_RQTC_SHIFT_TC(_i) ((_i) * 4)
@@ -1249,20 +1251,20 @@ struct ixgbe_thermal_sensor_data {
#define IXGBE_DCA_RXCTRL_CPUID_MASK 0x0000001F /* Rx CPUID Mask */
#define IXGBE_DCA_RXCTRL_CPUID_MASK_82599 0xFF000000 /* Rx CPUID Mask */
#define IXGBE_DCA_RXCTRL_CPUID_SHIFT_82599 24 /* Rx CPUID Shift */
-#define IXGBE_DCA_RXCTRL_DESC_DCA_EN (1 << 5) /* DCA Rx Desc enable */
-#define IXGBE_DCA_RXCTRL_HEAD_DCA_EN (1 << 6) /* DCA Rx Desc header enable */
-#define IXGBE_DCA_RXCTRL_DATA_DCA_EN (1 << 7) /* DCA Rx Desc payload enable */
-#define IXGBE_DCA_RXCTRL_DESC_RRO_EN (1 << 9) /* DCA Rx rd Desc Relax Order */
-#define IXGBE_DCA_RXCTRL_DATA_WRO_EN (1 << 13) /* Rx wr data Relax Order */
-#define IXGBE_DCA_RXCTRL_HEAD_WRO_EN (1 << 15) /* Rx wr header RO */
+#define IXGBE_DCA_RXCTRL_DESC_DCA_EN BIT(5) /* DCA Rx Desc enable */
+#define IXGBE_DCA_RXCTRL_HEAD_DCA_EN BIT(6) /* DCA Rx Desc header enable */
+#define IXGBE_DCA_RXCTRL_DATA_DCA_EN BIT(7) /* DCA Rx Desc payload enable */
+#define IXGBE_DCA_RXCTRL_DESC_RRO_EN BIT(9) /* DCA Rx rd Desc Relax Order */
+#define IXGBE_DCA_RXCTRL_DATA_WRO_EN BIT(13) /* Rx wr data Relax Order */
+#define IXGBE_DCA_RXCTRL_HEAD_WRO_EN BIT(15) /* Rx wr header RO */
#define IXGBE_DCA_TXCTRL_CPUID_MASK 0x0000001F /* Tx CPUID Mask */
#define IXGBE_DCA_TXCTRL_CPUID_MASK_82599 0xFF000000 /* Tx CPUID Mask */
#define IXGBE_DCA_TXCTRL_CPUID_SHIFT_82599 24 /* Tx CPUID Shift */
-#define IXGBE_DCA_TXCTRL_DESC_DCA_EN (1 << 5) /* DCA Tx Desc enable */
-#define IXGBE_DCA_TXCTRL_DESC_RRO_EN (1 << 9) /* Tx rd Desc Relax Order */
-#define IXGBE_DCA_TXCTRL_DESC_WRO_EN (1 << 11) /* Tx Desc writeback RO bit */
-#define IXGBE_DCA_TXCTRL_DATA_RRO_EN (1 << 13) /* Tx rd data Relax Order */
+#define IXGBE_DCA_TXCTRL_DESC_DCA_EN BIT(5) /* DCA Tx Desc enable */
+#define IXGBE_DCA_TXCTRL_DESC_RRO_EN BIT(9) /* Tx rd Desc Relax Order */
+#define IXGBE_DCA_TXCTRL_DESC_WRO_EN BIT(11) /* Tx Desc writeback RO bit */
+#define IXGBE_DCA_TXCTRL_DATA_RRO_EN BIT(13) /* Tx rd data Relax Order */
#define IXGBE_DCA_MAX_QUEUES_82598 16 /* DCA regs only on 16 queues */
/* MSCA Bit Masks */
@@ -1309,6 +1311,7 @@ struct ixgbe_thermal_sensor_data {
/* MDIO definitions */
+#define IXGBE_MDIO_ZERO_DEV_TYPE 0x0
#define IXGBE_MDIO_PMA_PMD_DEV_TYPE 0x1
#define IXGBE_MDIO_PCS_DEV_TYPE 0x3
#define IXGBE_MDIO_PHY_XS_DEV_TYPE 0x4
@@ -1740,7 +1743,7 @@ enum {
#define IXGBE_ETQF_TX_ANTISPOOF 0x20000000 /* bit 29 */
#define IXGBE_ETQF_1588 0x40000000 /* bit 30 */
#define IXGBE_ETQF_FILTER_EN 0x80000000 /* bit 31 */
-#define IXGBE_ETQF_POOL_ENABLE (1 << 26) /* bit 26 */
+#define IXGBE_ETQF_POOL_ENABLE BIT(26) /* bit 26 */
#define IXGBE_ETQF_POOL_SHIFT 20
#define IXGBE_ETQS_RX_QUEUE 0x007F0000 /* bits 22:16 */
@@ -1866,20 +1869,20 @@ enum {
#define IXGBE_AUTOC_1G_PMA_PMD_SHIFT 9
#define IXGBE_AUTOC_10G_PMA_PMD_MASK 0x00000180
#define IXGBE_AUTOC_10G_PMA_PMD_SHIFT 7
-#define IXGBE_AUTOC_10G_XAUI (0x0 << IXGBE_AUTOC_10G_PMA_PMD_SHIFT)
-#define IXGBE_AUTOC_10G_KX4 (0x1 << IXGBE_AUTOC_10G_PMA_PMD_SHIFT)
-#define IXGBE_AUTOC_10G_CX4 (0x2 << IXGBE_AUTOC_10G_PMA_PMD_SHIFT)
-#define IXGBE_AUTOC_1G_BX (0x0 << IXGBE_AUTOC_1G_PMA_PMD_SHIFT)
-#define IXGBE_AUTOC_1G_KX (0x1 << IXGBE_AUTOC_1G_PMA_PMD_SHIFT)
-#define IXGBE_AUTOC_1G_SFI (0x0 << IXGBE_AUTOC_1G_PMA_PMD_SHIFT)
-#define IXGBE_AUTOC_1G_KX_BX (0x1 << IXGBE_AUTOC_1G_PMA_PMD_SHIFT)
+#define IXGBE_AUTOC_10G_XAUI (0u << IXGBE_AUTOC_10G_PMA_PMD_SHIFT)
+#define IXGBE_AUTOC_10G_KX4 (1u << IXGBE_AUTOC_10G_PMA_PMD_SHIFT)
+#define IXGBE_AUTOC_10G_CX4 (2u << IXGBE_AUTOC_10G_PMA_PMD_SHIFT)
+#define IXGBE_AUTOC_1G_BX (0u << IXGBE_AUTOC_1G_PMA_PMD_SHIFT)
+#define IXGBE_AUTOC_1G_KX (1u << IXGBE_AUTOC_1G_PMA_PMD_SHIFT)
+#define IXGBE_AUTOC_1G_SFI (0u << IXGBE_AUTOC_1G_PMA_PMD_SHIFT)
+#define IXGBE_AUTOC_1G_KX_BX (1u << IXGBE_AUTOC_1G_PMA_PMD_SHIFT)
#define IXGBE_AUTOC2_UPPER_MASK 0xFFFF0000
#define IXGBE_AUTOC2_10G_SERIAL_PMA_PMD_MASK 0x00030000
#define IXGBE_AUTOC2_10G_SERIAL_PMA_PMD_SHIFT 16
-#define IXGBE_AUTOC2_10G_KR (0x0 << IXGBE_AUTOC2_10G_SERIAL_PMA_PMD_SHIFT)
-#define IXGBE_AUTOC2_10G_XFI (0x1 << IXGBE_AUTOC2_10G_SERIAL_PMA_PMD_SHIFT)
-#define IXGBE_AUTOC2_10G_SFI (0x2 << IXGBE_AUTOC2_10G_SERIAL_PMA_PMD_SHIFT)
+#define IXGBE_AUTOC2_10G_KR (0u << IXGBE_AUTOC2_10G_SERIAL_PMA_PMD_SHIFT)
+#define IXGBE_AUTOC2_10G_XFI (1u << IXGBE_AUTOC2_10G_SERIAL_PMA_PMD_SHIFT)
+#define IXGBE_AUTOC2_10G_SFI (2u << IXGBE_AUTOC2_10G_SERIAL_PMA_PMD_SHIFT)
#define IXGBE_AUTOC2_LINK_DISABLE_ON_D3_MASK 0x50000000
#define IXGBE_AUTOC2_LINK_DISABLE_MASK 0x70000000
@@ -1957,7 +1960,9 @@ enum {
#define IXGBE_GSSR_PHY1_SM 0x0004
#define IXGBE_GSSR_MAC_CSR_SM 0x0008
#define IXGBE_GSSR_FLASH_SM 0x0010
+#define IXGBE_GSSR_NVM_UPDATE_SM 0x0200
#define IXGBE_GSSR_SW_MNG_SM 0x0400
+#define IXGBE_GSSR_TOKEN_SM 0x40000000 /* SW bit for shared access */
#define IXGBE_GSSR_SHARED_I2C_SM 0x1806 /* Wait for both phys & I2Cs */
#define IXGBE_GSSR_I2C_MASK 0x1800
#define IXGBE_GSSR_NVM_PHY_MASK 0xF
@@ -1997,6 +2002,9 @@ enum {
#define IXGBE_PBANUM_PTR_GUARD 0xFAFA
#define IXGBE_EEPROM_CHECKSUM 0x3F
#define IXGBE_EEPROM_SUM 0xBABA
+#define IXGBE_EEPROM_CTRL_4 0x45
+#define IXGBE_EE_CTRL_4_INST_ID 0x10
+#define IXGBE_EE_CTRL_4_INST_ID_SHIFT 4
#define IXGBE_PCIE_ANALOG_PTR 0x03
#define IXGBE_ATLAS0_CONFIG_PTR 0x04
#define IXGBE_PHY_PTR 0x04
@@ -2111,6 +2119,7 @@ enum {
#define IXGBE_SAN_MAC_ADDR_PORT1_OFFSET 0x3
#define IXGBE_DEVICE_CAPS_ALLOW_ANY_SFP 0x1
#define IXGBE_DEVICE_CAPS_FCOE_OFFLOADS 0x2
+#define IXGBE_DEVICE_CAPS_NO_CROSSTALK_WR BIT(7)
#define IXGBE_FW_LESM_PARAMETERS_PTR 0x2
#define IXGBE_FW_LESM_STATE_1 0x1
#define IXGBE_FW_LESM_STATE_ENABLED 0x8000 /* LESM Enable bit */
@@ -2530,6 +2539,10 @@ enum ixgbe_fdir_pballoc_type {
#define IXGBE_FDIRCTRL_REPORT_STATUS_ALWAYS 0x00000080
#define IXGBE_FDIRCTRL_DROP_Q_SHIFT 8
#define IXGBE_FDIRCTRL_FLEX_SHIFT 16
+#define IXGBE_FDIRCTRL_DROP_NO_MATCH 0x00008000
+#define IXGBE_FDIRCTRL_FILTERMODE_SHIFT 21
+#define IXGBE_FDIRCTRL_FILTERMODE_MACVLAN 0x0001 /* bit 23:21, 001b */
+#define IXGBE_FDIRCTRL_FILTERMODE_CLOUD 0x0002 /* bit 23:21, 010b */
#define IXGBE_FDIRCTRL_SEARCHLIM 0x00800000
#define IXGBE_FDIRCTRL_MAX_LENGTH_SHIFT 24
#define IXGBE_FDIRCTRL_FULL_THRESH_MASK 0xF0000000
@@ -2620,6 +2633,20 @@ enum ixgbe_fdir_pballoc_type {
#define FW_MAX_READ_BUFFER_SIZE 1024
#define FW_DISABLE_RXEN_CMD 0xDE
#define FW_DISABLE_RXEN_LEN 0x1
+#define FW_PHY_MGMT_REQ_CMD 0x20
+#define FW_PHY_TOKEN_REQ_CMD 0x0A
+#define FW_PHY_TOKEN_REQ_LEN 2
+#define FW_PHY_TOKEN_REQ 0
+#define FW_PHY_TOKEN_REL 1
+#define FW_PHY_TOKEN_OK 1
+#define FW_PHY_TOKEN_RETRY 0x80
+#define FW_PHY_TOKEN_DELAY 5 /* milliseconds */
+#define FW_PHY_TOKEN_WAIT 5 /* seconds */
+#define FW_PHY_TOKEN_RETRIES ((FW_PHY_TOKEN_WAIT * 1000) / FW_PHY_TOKEN_DELAY)
+#define FW_INT_PHY_REQ_CMD 0xB
+#define FW_INT_PHY_REQ_LEN 10
+#define FW_INT_PHY_REQ_READ 0
+#define FW_INT_PHY_REQ_WRITE 1
/* Host Interface Command Structures */
struct ixgbe_hic_hdr {
@@ -2688,6 +2715,28 @@ struct ixgbe_hic_disable_rxen {
u16 pad3;
};
+struct ixgbe_hic_phy_token_req {
+ struct ixgbe_hic_hdr hdr;
+ u8 port_number;
+ u8 command_type;
+ u16 pad;
+};
+
+struct ixgbe_hic_internal_phy_req {
+ struct ixgbe_hic_hdr hdr;
+ u8 port_number;
+ u8 command_type;
+ __be16 address;
+ u16 rsv1;
+ __be32 write_data;
+ u16 pad;
+} __packed;
+
+struct ixgbe_hic_internal_phy_resp {
+ struct ixgbe_hic_hdr hdr;
+ __be32 read_data;
+};
+
/* Transmit Descriptor - Advanced */
union ixgbe_adv_tx_desc {
struct {
@@ -2786,15 +2835,15 @@ struct ixgbe_adv_tx_context_desc {
#define IXGBE_ADVTXD_TUCMD_IPSEC_TYPE_ESP 0x00002000 /* IPSec Type ESP */
#define IXGBE_ADVTXD_TUCMD_IPSEC_ENCRYPT_EN 0x00004000/* ESP Encrypt Enable */
#define IXGBE_ADVTXT_TUCMD_FCOE 0x00008000 /* FCoE Frame Type */
-#define IXGBE_ADVTXD_FCOEF_EOF_MASK (0x3 << 10) /* FC EOF index */
-#define IXGBE_ADVTXD_FCOEF_SOF ((1 << 2) << 10) /* FC SOF index */
-#define IXGBE_ADVTXD_FCOEF_PARINC ((1 << 3) << 10) /* Rel_Off in F_CTL */
-#define IXGBE_ADVTXD_FCOEF_ORIE ((1 << 4) << 10) /* Orientation: End */
-#define IXGBE_ADVTXD_FCOEF_ORIS ((1 << 5) << 10) /* Orientation: Start */
-#define IXGBE_ADVTXD_FCOEF_EOF_N (0x0 << 10) /* 00: EOFn */
-#define IXGBE_ADVTXD_FCOEF_EOF_T (0x1 << 10) /* 01: EOFt */
-#define IXGBE_ADVTXD_FCOEF_EOF_NI (0x2 << 10) /* 10: EOFni */
-#define IXGBE_ADVTXD_FCOEF_EOF_A (0x3 << 10) /* 11: EOFa */
+#define IXGBE_ADVTXD_FCOEF_SOF (BIT(2) << 10) /* FC SOF index */
+#define IXGBE_ADVTXD_FCOEF_PARINC (BIT(3) << 10) /* Rel_Off in F_CTL */
+#define IXGBE_ADVTXD_FCOEF_ORIE (BIT(4) << 10) /* Orientation: End */
+#define IXGBE_ADVTXD_FCOEF_ORIS (BIT(5) << 10) /* Orientation: Start */
+#define IXGBE_ADVTXD_FCOEF_EOF_N (0u << 10) /* 00: EOFn */
+#define IXGBE_ADVTXD_FCOEF_EOF_T (1u << 10) /* 01: EOFt */
+#define IXGBE_ADVTXD_FCOEF_EOF_NI (2u << 10) /* 10: EOFni */
+#define IXGBE_ADVTXD_FCOEF_EOF_A (3u << 10) /* 11: EOFa */
+#define IXGBE_ADVTXD_FCOEF_EOF_MASK (3u << 10) /* FC EOF index */
#define IXGBE_ADVTXD_L4LEN_SHIFT 8 /* Adv ctxt L4LEN shift */
#define IXGBE_ADVTXD_MSS_SHIFT 16 /* Adv ctxt MSS shift */
@@ -2948,7 +2997,6 @@ union ixgbe_atr_hash_dword {
IXGBE_CAT(EEC, m), \
IXGBE_CAT(FLA, m), \
IXGBE_CAT(GRC, m), \
- IXGBE_CAT(SRAMREL, m), \
IXGBE_CAT(FACTPS, m), \
IXGBE_CAT(SWSM, m), \
IXGBE_CAT(SWFW_SYNC, m), \
@@ -2989,6 +3037,7 @@ enum ixgbe_mac_type {
ixgbe_mac_X540,
ixgbe_mac_X550,
ixgbe_mac_X550EM_x,
+ ixgbe_mac_x550em_a,
ixgbe_num_macs
};
@@ -3017,6 +3066,7 @@ enum ixgbe_phy_type {
ixgbe_phy_qsfp_intel,
ixgbe_phy_qsfp_unknown,
ixgbe_phy_sfp_unsupported,
+ ixgbe_phy_sgmii,
ixgbe_phy_generic
};
@@ -3130,8 +3180,9 @@ struct ixgbe_bus_info {
enum ixgbe_bus_width width;
enum ixgbe_bus_type type;
- u16 func;
- u16 lan_id;
+ u8 func;
+ u8 lan_id;
+ u8 instance_id;
};
/* Flow control parameters */
@@ -3266,6 +3317,7 @@ struct ixgbe_mac_operations {
s32 (*enable_rx_dma)(struct ixgbe_hw *, u32);
s32 (*acquire_swfw_sync)(struct ixgbe_hw *, u32);
void (*release_swfw_sync)(struct ixgbe_hw *, u32);
+ void (*init_swfw_sync)(struct ixgbe_hw *);
s32 (*prot_autoc_read)(struct ixgbe_hw *, bool *, u32 *);
s32 (*prot_autoc_write)(struct ixgbe_hw *, u32, bool);
@@ -3308,6 +3360,7 @@ struct ixgbe_mac_operations {
/* Flow Control */
s32 (*fc_enable)(struct ixgbe_hw *);
+ s32 (*setup_fc)(struct ixgbe_hw *);
/* Manageability interface */
s32 (*set_fw_drv_ver)(struct ixgbe_hw *, u8, u8, u8, u8);
@@ -3323,6 +3376,8 @@ struct ixgbe_mac_operations {
s32 (*dmac_config)(struct ixgbe_hw *hw);
s32 (*dmac_update_tcs)(struct ixgbe_hw *hw);
s32 (*dmac_config_tcs)(struct ixgbe_hw *hw);
+ s32 (*read_iosf_sb_reg)(struct ixgbe_hw *, u32, u32, u32 *);
+ s32 (*write_iosf_sb_reg)(struct ixgbe_hw *, u32, u32, u32);
};
struct ixgbe_phy_operations {
@@ -3442,7 +3497,7 @@ struct ixgbe_mbx_stats {
};
struct ixgbe_mbx_info {
- struct ixgbe_mbx_operations ops;
+ const struct ixgbe_mbx_operations *ops;
struct ixgbe_mbx_stats stats;
u32 timeout;
u32 usec_delay;
@@ -3475,10 +3530,10 @@ struct ixgbe_hw {
struct ixgbe_info {
enum ixgbe_mac_type mac;
s32 (*get_invariants)(struct ixgbe_hw *);
- struct ixgbe_mac_operations *mac_ops;
- struct ixgbe_eeprom_operations *eeprom_ops;
- struct ixgbe_phy_operations *phy_ops;
- struct ixgbe_mbx_operations *mbx_ops;
+ const struct ixgbe_mac_operations *mac_ops;
+ const struct ixgbe_eeprom_operations *eeprom_ops;
+ const struct ixgbe_phy_operations *phy_ops;
+ const struct ixgbe_mbx_operations *mbx_ops;
const u32 *mvals;
};
@@ -3517,14 +3572,19 @@ struct ixgbe_info {
#define IXGBE_ERR_INVALID_ARGUMENT -32
#define IXGBE_ERR_HOST_INTERFACE_COMMAND -33
#define IXGBE_ERR_FDIR_CMD_INCOMPLETE -38
+#define IXGBE_ERR_FW_RESP_INVALID -39
+#define IXGBE_ERR_TOKEN_RETRY -40
#define IXGBE_NOT_IMPLEMENTED 0x7FFFFFFF
#define IXGBE_FUSES0_GROUP(_i) (0x11158 + ((_i) * 4))
#define IXGBE_FUSES0_300MHZ BIT(5)
-#define IXGBE_FUSES0_REV_MASK (3 << 6)
+#define IXGBE_FUSES0_REV_MASK (3u << 6)
#define IXGBE_KRM_PORT_CAR_GEN_CTRL(P) ((P) ? 0x8010 : 0x4010)
#define IXGBE_KRM_LINK_CTRL_1(P) ((P) ? 0x820C : 0x420C)
+#define IXGBE_KRM_AN_CNTL_1(P) ((P) ? 0x822C : 0x422C)
+#define IXGBE_KRM_AN_CNTL_8(P) ((P) ? 0x8248 : 0x4248)
+#define IXGBE_KRM_SGMII_CTRL(P) ((P) ? 0x82A0 : 0x42A0)
#define IXGBE_KRM_DSP_TXFFE_STATE_4(P) ((P) ? 0x8634 : 0x4634)
#define IXGBE_KRM_DSP_TXFFE_STATE_5(P) ((P) ? 0x8638 : 0x4638)
#define IXGBE_KRM_RX_TRN_LINKUP_CTRL(P) ((P) ? 0x8B00 : 0x4B00)
@@ -3532,43 +3592,54 @@ struct ixgbe_info {
#define IXGBE_KRM_TX_COEFF_CTRL_1(P) ((P) ? 0x9520 : 0x5520)
#define IXGBE_KRM_RX_ANA_CTL(P) ((P) ? 0x9A00 : 0x5A00)
-#define IXGBE_KRM_PORT_CAR_GEN_CTRL_NELB_32B (1 << 9)
-#define IXGBE_KRM_PORT_CAR_GEN_CTRL_NELB_KRPCS (1 << 11)
+#define IXGBE_KRM_PORT_CAR_GEN_CTRL_NELB_32B BIT(9)
+#define IXGBE_KRM_PORT_CAR_GEN_CTRL_NELB_KRPCS BIT(11)
+
+#define IXGBE_KRM_LINK_CTRL_1_TETH_FORCE_SPEED_MASK (7u << 8)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_FORCE_SPEED_1G (2u << 8)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_FORCE_SPEED_10G (4u << 8)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_SGMII_EN BIT(12)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_CLAUSE_37_EN BIT(13)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_FEC_REQ BIT(14)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_CAP_FEC BIT(15)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_CAP_KX BIT(16)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_CAP_KR BIT(18)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_EEE_CAP_KX BIT(24)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_EEE_CAP_KR BIT(26)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_ENABLE BIT(29)
+#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_RESTART BIT(31)
+
+#define IXGBE_KRM_AN_CNTL_1_SYM_PAUSE BIT(28)
+#define IXGBE_KRM_AN_CNTL_1_ASM_PAUSE BIT(29)
+
+#define IXGBE_KRM_AN_CNTL_8_LINEAR BIT(0)
+#define IXGBE_KRM_AN_CNTL_8_LIMITING BIT(1)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_FORCE_SPEED_MASK (0x7 << 8)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_FORCE_SPEED_1G (2 << 8)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_FORCE_SPEED_10G (4 << 8)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_FEC_REQ (1 << 14)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_CAP_FEC (1 << 15)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_CAP_KX (1 << 16)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_CAP_KR (1 << 18)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_EEE_CAP_KX (1 << 24)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_EEE_CAP_KR (1 << 26)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_ENABLE (1 << 29)
-#define IXGBE_KRM_LINK_CTRL_1_TETH_AN_RESTART (1 << 31)
+#define IXGBE_KRM_SGMII_CTRL_MAC_TAR_FORCE_100_D BIT(12)
+#define IXGBE_KRM_SGMII_CTRL_MAC_TAR_FORCE_10_D BIT(19)
-#define IXGBE_KRM_DSP_TXFFE_STATE_C0_EN (1 << 6)
-#define IXGBE_KRM_DSP_TXFFE_STATE_CP1_CN1_EN (1 << 15)
-#define IXGBE_KRM_DSP_TXFFE_STATE_CO_ADAPT_EN (1 << 16)
+#define IXGBE_KRM_DSP_TXFFE_STATE_C0_EN BIT(6)
+#define IXGBE_KRM_DSP_TXFFE_STATE_CP1_CN1_EN BIT(15)
+#define IXGBE_KRM_DSP_TXFFE_STATE_CO_ADAPT_EN BIT(16)
-#define IXGBE_KRM_RX_TRN_LINKUP_CTRL_CONV_WO_PROTOCOL (1 << 4)
-#define IXGBE_KRM_RX_TRN_LINKUP_CTRL_PROTOCOL_BYPASS (1 << 2)
+#define IXGBE_KRM_RX_TRN_LINKUP_CTRL_CONV_WO_PROTOCOL BIT(4)
+#define IXGBE_KRM_RX_TRN_LINKUP_CTRL_PROTOCOL_BYPASS BIT(2)
-#define IXGBE_KRM_PMD_DFX_BURNIN_TX_RX_KR_LB_MASK (0x3 << 16)
+#define IXGBE_KRM_PMD_DFX_BURNIN_TX_RX_KR_LB_MASK (3u << 16)
-#define IXGBE_KRM_TX_COEFF_CTRL_1_CMINUS1_OVRRD_EN (1 << 1)
-#define IXGBE_KRM_TX_COEFF_CTRL_1_CPLUS1_OVRRD_EN (1 << 2)
-#define IXGBE_KRM_TX_COEFF_CTRL_1_CZERO_EN (1 << 3)
-#define IXGBE_KRM_TX_COEFF_CTRL_1_OVRRD_EN (1 << 31)
+#define IXGBE_KRM_TX_COEFF_CTRL_1_CMINUS1_OVRRD_EN BIT(1)
+#define IXGBE_KRM_TX_COEFF_CTRL_1_CPLUS1_OVRRD_EN BIT(2)
+#define IXGBE_KRM_TX_COEFF_CTRL_1_CZERO_EN BIT(3)
+#define IXGBE_KRM_TX_COEFF_CTRL_1_OVRRD_EN BIT(31)
#define IXGBE_KX4_LINK_CNTL_1 0x4C
-#define IXGBE_KX4_LINK_CNTL_1_TETH_AN_CAP_KX (1 << 16)
-#define IXGBE_KX4_LINK_CNTL_1_TETH_AN_CAP_KX4 (1 << 17)
-#define IXGBE_KX4_LINK_CNTL_1_TETH_EEE_CAP_KX (1 << 24)
-#define IXGBE_KX4_LINK_CNTL_1_TETH_EEE_CAP_KX4 (1 << 25)
-#define IXGBE_KX4_LINK_CNTL_1_TETH_AN_ENABLE (1 << 29)
-#define IXGBE_KX4_LINK_CNTL_1_TETH_FORCE_LINK_UP (1 << 30)
-#define IXGBE_KX4_LINK_CNTL_1_TETH_AN_RESTART (1 << 31)
+#define IXGBE_KX4_LINK_CNTL_1_TETH_AN_CAP_KX BIT(16)
+#define IXGBE_KX4_LINK_CNTL_1_TETH_AN_CAP_KX4 BIT(17)
+#define IXGBE_KX4_LINK_CNTL_1_TETH_EEE_CAP_KX BIT(24)
+#define IXGBE_KX4_LINK_CNTL_1_TETH_EEE_CAP_KX4 BIT(25)
+#define IXGBE_KX4_LINK_CNTL_1_TETH_AN_ENABLE BIT(29)
+#define IXGBE_KX4_LINK_CNTL_1_TETH_FORCE_LINK_UP BIT(30)
+#define IXGBE_KX4_LINK_CNTL_1_TETH_AN_RESTART BIT(31)
#define IXGBE_SB_IOSF_INDIRECT_CTRL 0x00011144
#define IXGBE_SB_IOSF_INDIRECT_DATA 0x00011148
@@ -3584,12 +3655,17 @@ struct ixgbe_info {
#define IXGBE_SB_IOSF_CTRL_TARGET_SELECT_SHIFT 28
#define IXGBE_SB_IOSF_CTRL_TARGET_SELECT_MASK 0x7
#define IXGBE_SB_IOSF_CTRL_BUSY_SHIFT 31
-#define IXGBE_SB_IOSF_CTRL_BUSY (1 << IXGBE_SB_IOSF_CTRL_BUSY_SHIFT)
+#define IXGBE_SB_IOSF_CTRL_BUSY BIT(IXGBE_SB_IOSF_CTRL_BUSY_SHIFT)
#define IXGBE_SB_IOSF_TARGET_KR_PHY 0
#define IXGBE_SB_IOSF_TARGET_KX4_UNIPHY 1
#define IXGBE_SB_IOSF_TARGET_KX4_PCS0 2
#define IXGBE_SB_IOSF_TARGET_KX4_PCS1 3
#define IXGBE_NW_MNG_IF_SEL 0x00011178
+#define IXGBE_NW_MNG_IF_SEL_MDIO_ACT BIT(1)
+#define IXGBE_NW_MNG_IF_SEL_ENABLE_10_100M BIT(23)
#define IXGBE_NW_MNG_IF_SEL_INT_PHY_MODE BIT(24)
+#define IXGBE_NW_MNG_IF_SEL_MDIO_PHY_ADD_SHIFT 3
+#define IXGBE_NW_MNG_IF_SEL_MDIO_PHY_ADD \
+ (0x1F << IXGBE_NW_MNG_IF_SEL_MDIO_PHY_ADD_SHIFT)
#endif /* _IXGBE_TYPE_H_ */
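Most of the ixgbe_type.h churn above converts open-coded shift expressions to the kernel's BIT() macro and to explicitly unsigned constants. A minimal side-by-side sketch (illustration only, not part of the patch; BIT(n) expands to (1UL << (n))) of why the unsigned forms matter for masks that touch bit 31:

/* Sketch only: the old form shifts a signed int into its sign bit, which
 * is undefined behaviour in standard C even though most compilers tolerate it.
 */
#define OLD_TETH_AN_RESTART	(1 << 31)	/* signed shift into bit 31 */
#define NEW_TETH_AN_RESTART	BIT(31)		/* 0x80000000UL, well defined */
#define OLD_FCOEF_EOF_MASK	(0x3 << 10)	/* signed constant */
#define NEW_FCOEF_EOF_MASK	(3u << 10)	/* unsigned, as in the hunk above */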
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c
index 2358c1b7d586..f2b1d48a16c3 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.c
@@ -1,7 +1,7 @@
/*******************************************************************************
Intel 10 Gigabit PCI Express Linux driver
- Copyright(c) 1999 - 2014 Intel Corporation.
+ Copyright(c) 1999 - 2016 Intel Corporation.
This program is free software; you can redistribute it and/or modify it
under the terms and conditions of the GNU General Public License,
@@ -214,8 +214,8 @@ s32 ixgbe_init_eeprom_params_X540(struct ixgbe_hw *hw)
eec = IXGBE_READ_REG(hw, IXGBE_EEC(hw));
eeprom_size = (u16)((eec & IXGBE_EEC_SIZE) >>
IXGBE_EEC_SIZE_SHIFT);
- eeprom->word_size = 1 << (eeprom_size +
- IXGBE_EEPROM_WORD_SIZE_SHIFT);
+ eeprom->word_size = BIT(eeprom_size +
+ IXGBE_EEPROM_WORD_SIZE_SHIFT);
hw_dbg(hw, "Eeprom params: type = %d, size = %d\n",
eeprom->type, eeprom->word_size);
@@ -747,6 +747,25 @@ static void ixgbe_release_swfw_sync_semaphore(struct ixgbe_hw *hw)
}
/**
+ * ixgbe_init_swfw_sync_X540 - Reset hardware semaphore
+ * @hw: pointer to hardware structure
+ *
+ * This function resets the hardware semaphore bits for a semaphore that may
+ * have been left locked due to a catastrophic failure.
+ **/
+void ixgbe_init_swfw_sync_X540(struct ixgbe_hw *hw)
+{
+ /* First try to grab the semaphore but we don't need to bother
+ * looking to see whether we got the lock or not since we do
+ * the same thing regardless of whether we got the lock or not.
+ * We got the lock - we release it.
+ * We timeout trying to get the lock - we force its release.
+ */
+ ixgbe_get_swfw_sync_semaphore(hw);
+ ixgbe_release_swfw_sync_semaphore(hw);
+}
+
+/**
* ixgbe_blink_led_start_X540 - Blink LED based on index.
* @hw: pointer to hardware structure
* @index: led number to blink
@@ -810,7 +829,7 @@ s32 ixgbe_blink_led_stop_X540(struct ixgbe_hw *hw, u32 index)
return 0;
}
-static struct ixgbe_mac_operations mac_ops_X540 = {
+static const struct ixgbe_mac_operations mac_ops_X540 = {
.init_hw = &ixgbe_init_hw_generic,
.reset_hw = &ixgbe_reset_hw_X540,
.start_hw = &ixgbe_start_hw_X540,
@@ -846,6 +865,7 @@ static struct ixgbe_mac_operations mac_ops_X540 = {
.clear_vfta = &ixgbe_clear_vfta_generic,
.set_vfta = &ixgbe_set_vfta_generic,
.fc_enable = &ixgbe_fc_enable_generic,
+ .setup_fc = ixgbe_setup_fc_generic,
.set_fw_drv_ver = &ixgbe_set_fw_drv_ver_generic,
.init_uta_tables = &ixgbe_init_uta_tables_generic,
.setup_sfp = NULL,
@@ -853,6 +873,7 @@ static struct ixgbe_mac_operations mac_ops_X540 = {
.set_vlan_anti_spoofing = &ixgbe_set_vlan_anti_spoofing,
.acquire_swfw_sync = &ixgbe_acquire_swfw_sync_X540,
.release_swfw_sync = &ixgbe_release_swfw_sync_X540,
+ .init_swfw_sync = &ixgbe_init_swfw_sync_X540,
.disable_rx_buff = &ixgbe_disable_rx_buff_generic,
.enable_rx_buff = &ixgbe_enable_rx_buff_generic,
.get_thermal_sensor_data = NULL,
@@ -863,7 +884,7 @@ static struct ixgbe_mac_operations mac_ops_X540 = {
.disable_rx = &ixgbe_disable_rx_generic,
};
-static struct ixgbe_eeprom_operations eeprom_ops_X540 = {
+static const struct ixgbe_eeprom_operations eeprom_ops_X540 = {
.init_params = &ixgbe_init_eeprom_params_X540,
.read = &ixgbe_read_eerd_X540,
.read_buffer = &ixgbe_read_eerd_buffer_X540,
@@ -874,7 +895,7 @@ static struct ixgbe_eeprom_operations eeprom_ops_X540 = {
.update_checksum = &ixgbe_update_eeprom_checksum_X540,
};
-static struct ixgbe_phy_operations phy_ops_X540 = {
+static const struct ixgbe_phy_operations phy_ops_X540 = {
.identify = &ixgbe_identify_phy_generic,
.identify_sfp = &ixgbe_identify_sfp_module_generic,
.init = NULL,
@@ -897,7 +918,7 @@ static const u32 ixgbe_mvals_X540[IXGBE_MVALS_IDX_LIMIT] = {
IXGBE_MVALS_INIT(X540)
};
-struct ixgbe_info ixgbe_X540_info = {
+const struct ixgbe_info ixgbe_X540_info = {
.mac = ixgbe_mac_X540,
.get_invariants = &ixgbe_get_invariants_X540,
.mac_ops = &mac_ops_X540,
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.h b/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.h
index a1468b1f4d8a..e21cd48491d3 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.h
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x540.h
@@ -36,4 +36,5 @@ s32 ixgbe_blink_led_start_X540(struct ixgbe_hw *hw, u32 index);
s32 ixgbe_blink_led_stop_X540(struct ixgbe_hw *hw, u32 index);
s32 ixgbe_acquire_swfw_sync_X540(struct ixgbe_hw *hw, u32 mask);
void ixgbe_release_swfw_sync_X540(struct ixgbe_hw *hw, u32 mask);
+void ixgbe_init_swfw_sync_X540(struct ixgbe_hw *hw);
s32 ixgbe_init_eeprom_params_X540(struct ixgbe_hw *hw);
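The init_swfw_sync hook declared here is backed by ixgbe_init_swfw_sync_X540() above: it grabs and immediately drops the SWFW semaphore so that a lock left held across a crash cannot wedge the next load of the driver. A minimal caller sketch; the helper name is hypothetical and the real call site (presumably the probe/reset path in ixgbe_main.c) is not part of this excerpt:

/* Sketch only: clear a possibly-stale semaphore before first use. */
static void example_clear_stale_semaphore(struct ixgbe_hw *hw)
{
	/* The op is optional; only MACs that set it in mac_ops provide it. */
	if (hw->mac.ops.init_swfw_sync)
		hw->mac.ops.init_swfw_sync(hw);
}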
diff --git a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
index 68a9c646498e..19b75cd98682 100644
--- a/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
+++ b/drivers/net/ethernet/intel/ixgbe/ixgbe_x550.c
@@ -1,7 +1,7 @@
/*******************************************************************************
*
* Intel 10 Gigabit PCI Express Linux driver
- * Copyright(c) 1999 - 2015 Intel Corporation.
+ * Copyright(c) 1999 - 2016 Intel Corporation.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms and conditions of the GNU General Public License,
@@ -27,6 +27,7 @@
#include "ixgbe_phy.h"
static s32 ixgbe_setup_kr_speed_x550em(struct ixgbe_hw *, ixgbe_link_speed);
+static s32 ixgbe_setup_fc_x550em(struct ixgbe_hw *);
static s32 ixgbe_get_invariants_X550_x(struct ixgbe_hw *hw)
{
@@ -272,16 +273,26 @@ out:
static s32 ixgbe_identify_phy_x550em(struct ixgbe_hw *hw)
{
switch (hw->device_id) {
+ case IXGBE_DEV_ID_X550EM_A_SFP:
+ if (hw->bus.lan_id)
+ hw->phy.phy_semaphore_mask = IXGBE_GSSR_PHY1_SM;
+ else
+ hw->phy.phy_semaphore_mask = IXGBE_GSSR_PHY0_SM;
+ return ixgbe_identify_module_generic(hw);
case IXGBE_DEV_ID_X550EM_X_SFP:
/* set up for CS4227 usage */
hw->phy.phy_semaphore_mask = IXGBE_GSSR_SHARED_I2C_SM;
ixgbe_setup_mux_ctl(hw);
ixgbe_check_cs4227(hw);
+ /* Fallthrough */
+ case IXGBE_DEV_ID_X550EM_A_SFP_N:
return ixgbe_identify_module_generic(hw);
case IXGBE_DEV_ID_X550EM_X_KX4:
hw->phy.type = ixgbe_phy_x550em_kx4;
break;
case IXGBE_DEV_ID_X550EM_X_KR:
+ case IXGBE_DEV_ID_X550EM_A_KR:
+ case IXGBE_DEV_ID_X550EM_A_KR_L:
hw->phy.type = ixgbe_phy_x550em_kr;
break;
case IXGBE_DEV_ID_X550EM_X_1G_T:
@@ -324,8 +335,8 @@ static s32 ixgbe_init_eeprom_params_X550(struct ixgbe_hw *hw)
eec = IXGBE_READ_REG(hw, IXGBE_EEC(hw));
eeprom_size = (u16)((eec & IXGBE_EEC_SIZE) >>
IXGBE_EEC_SIZE_SHIFT);
- eeprom->word_size = 1 << (eeprom_size +
- IXGBE_EEPROM_WORD_SIZE_SHIFT);
+ eeprom->word_size = BIT(eeprom_size +
+ IXGBE_EEPROM_WORD_SIZE_SHIFT);
hw_dbg(hw, "Eeprom params: type = %d, size = %d\n",
eeprom->type, eeprom->word_size);
@@ -412,6 +423,121 @@ out:
return ret;
}
+/**
+ * ixgbe_get_phy_token - Get the token for shared PHY access
+ * @hw: Pointer to hardware structure
+ */
+static s32 ixgbe_get_phy_token(struct ixgbe_hw *hw)
+{
+ struct ixgbe_hic_phy_token_req token_cmd;
+ s32 status;
+
+ token_cmd.hdr.cmd = FW_PHY_TOKEN_REQ_CMD;
+ token_cmd.hdr.buf_len = FW_PHY_TOKEN_REQ_LEN;
+ token_cmd.hdr.cmd_or_resp.cmd_resv = 0;
+ token_cmd.hdr.checksum = FW_DEFAULT_CHECKSUM;
+ token_cmd.port_number = hw->bus.lan_id;
+ token_cmd.command_type = FW_PHY_TOKEN_REQ;
+ token_cmd.pad = 0;
+ status = ixgbe_host_interface_command(hw, &token_cmd, sizeof(token_cmd),
+ IXGBE_HI_COMMAND_TIMEOUT,
+ true);
+ if (status)
+ return status;
+ if (token_cmd.hdr.cmd_or_resp.ret_status == FW_PHY_TOKEN_OK)
+ return 0;
+ if (token_cmd.hdr.cmd_or_resp.ret_status != FW_PHY_TOKEN_RETRY)
+ return IXGBE_ERR_FW_RESP_INVALID;
+
+ return IXGBE_ERR_TOKEN_RETRY;
+}
+
+/**
+ * ixgbe_put_phy_token - Put the token for shared PHY access
+ * @hw: Pointer to hardware structure
+ */
+static s32 ixgbe_put_phy_token(struct ixgbe_hw *hw)
+{
+ struct ixgbe_hic_phy_token_req token_cmd;
+ s32 status;
+
+ token_cmd.hdr.cmd = FW_PHY_TOKEN_REQ_CMD;
+ token_cmd.hdr.buf_len = FW_PHY_TOKEN_REQ_LEN;
+ token_cmd.hdr.cmd_or_resp.cmd_resv = 0;
+ token_cmd.hdr.checksum = FW_DEFAULT_CHECKSUM;
+ token_cmd.port_number = hw->bus.lan_id;
+ token_cmd.command_type = FW_PHY_TOKEN_REL;
+ token_cmd.pad = 0;
+ status = ixgbe_host_interface_command(hw, &token_cmd, sizeof(token_cmd),
+ IXGBE_HI_COMMAND_TIMEOUT,
+ true);
+ if (status)
+ return status;
+ if (token_cmd.hdr.cmd_or_resp.ret_status == FW_PHY_TOKEN_OK)
+ return 0;
+ return IXGBE_ERR_FW_RESP_INVALID;
+}
+
+/**
+ * ixgbe_write_iosf_sb_reg_x550a - Write to IOSF PHY register
+ * @hw: pointer to hardware structure
+ * @reg_addr: 32 bit PHY register to write
+ * @device_type: 3 bit device type
+ * @data: Data to write to the register
+ **/
+static s32 ixgbe_write_iosf_sb_reg_x550a(struct ixgbe_hw *hw, u32 reg_addr,
+ __always_unused u32 device_type,
+ u32 data)
+{
+ struct ixgbe_hic_internal_phy_req write_cmd;
+
+ memset(&write_cmd, 0, sizeof(write_cmd));
+ write_cmd.hdr.cmd = FW_INT_PHY_REQ_CMD;
+ write_cmd.hdr.buf_len = FW_INT_PHY_REQ_LEN;
+ write_cmd.hdr.checksum = FW_DEFAULT_CHECKSUM;
+ write_cmd.port_number = hw->bus.lan_id;
+ write_cmd.command_type = FW_INT_PHY_REQ_WRITE;
+ write_cmd.address = cpu_to_be16(reg_addr);
+ write_cmd.write_data = cpu_to_be32(data);
+
+ return ixgbe_host_interface_command(hw, &write_cmd, sizeof(write_cmd),
+ IXGBE_HI_COMMAND_TIMEOUT, false);
+}
+
+/**
+ * ixgbe_read_iosf_sb_reg_x550a - Read from IOSF PHY register
+ * @hw: pointer to hardware structure
+ * @reg_addr: 32 bit PHY register to read
+ * @device_type: 3 bit device type
+ * @data: Pointer to read data from the register
+ **/
+static s32 ixgbe_read_iosf_sb_reg_x550a(struct ixgbe_hw *hw, u32 reg_addr,
+ __always_unused u32 device_type,
+ u32 *data)
+{
+ union {
+ struct ixgbe_hic_internal_phy_req cmd;
+ struct ixgbe_hic_internal_phy_resp rsp;
+ } hic;
+ s32 status;
+
+ memset(&hic, 0, sizeof(hic));
+ hic.cmd.hdr.cmd = FW_INT_PHY_REQ_CMD;
+ hic.cmd.hdr.buf_len = FW_INT_PHY_REQ_LEN;
+ hic.cmd.hdr.checksum = FW_DEFAULT_CHECKSUM;
+ hic.cmd.port_number = hw->bus.lan_id;
+ hic.cmd.command_type = FW_INT_PHY_REQ_READ;
+ hic.cmd.address = cpu_to_be16(reg_addr);
+
+ status = ixgbe_host_interface_command(hw, &hic.cmd, sizeof(hic.cmd),
+ IXGBE_HI_COMMAND_TIMEOUT, true);
+
+ /* Extract the register value from the response. */
+ *data = be32_to_cpu(hic.rsp.read_data);
+
+ return status;
+}
+
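On x550em_a the IOSF sideband is reached through the firmware host interface commands above rather than the memory-mapped indirection used by X550/X550EM_x, which is why later hunks switch callers to the hw->mac.ops.read/write_iosf_sb_reg pointers. A minimal caller sketch (hypothetical helper name; the pattern mirrors ixgbe_setup_kr_speed_x550em further down):

/* Sketch only: read-modify-write of a KRM register via the per-MAC ops. */
static s32 example_restart_an(struct ixgbe_hw *hw)
{
	u32 reg;
	s32 rc;

	rc = hw->mac.ops.read_iosf_sb_reg(hw,
			IXGBE_KRM_LINK_CTRL_1(hw->bus.lan_id),
			IXGBE_SB_IOSF_TARGET_KR_PHY, &reg);
	if (rc)
		return rc;
	reg |= IXGBE_KRM_LINK_CTRL_1_TETH_AN_RESTART;
	return hw->mac.ops.write_iosf_sb_reg(hw,
			IXGBE_KRM_LINK_CTRL_1(hw->bus.lan_id),
			IXGBE_SB_IOSF_TARGET_KR_PHY, reg);
}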
/** ixgbe_read_ee_hostif_data_X550 - Read EEPROM word using a host interface
* command assuming that the semaphore is already obtained.
* @hw: pointer to hardware structure
@@ -436,8 +562,7 @@ static s32 ixgbe_read_ee_hostif_data_X550(struct ixgbe_hw *hw, u16 offset,
/* one word */
buffer.length = cpu_to_be16(sizeof(u16));
- status = ixgbe_host_interface_command(hw, (u32 *)&buffer,
- sizeof(buffer),
+ status = ixgbe_host_interface_command(hw, &buffer, sizeof(buffer),
IXGBE_HI_COMMAND_TIMEOUT, false);
if (status)
return status;
@@ -487,7 +612,7 @@ static s32 ixgbe_read_ee_hostif_buffer_X550(struct ixgbe_hw *hw,
buffer.address = cpu_to_be32((offset + current_word) * 2);
buffer.length = cpu_to_be16(words_to_read * 2);
- status = ixgbe_host_interface_command(hw, (u32 *)&buffer,
+ status = ixgbe_host_interface_command(hw, &buffer,
sizeof(buffer),
IXGBE_HI_COMMAND_TIMEOUT,
false);
@@ -770,8 +895,7 @@ static s32 ixgbe_write_ee_hostif_data_X550(struct ixgbe_hw *hw, u16 offset,
buffer.data = data;
buffer.address = cpu_to_be32(offset * 2);
- status = ixgbe_host_interface_command(hw, (u32 *)&buffer,
- sizeof(buffer),
+ status = ixgbe_host_interface_command(hw, &buffer, sizeof(buffer),
IXGBE_HI_COMMAND_TIMEOUT, false);
return status;
}
@@ -813,8 +937,7 @@ static s32 ixgbe_update_flash_X550(struct ixgbe_hw *hw)
buffer.req.buf_lenl = FW_SHADOW_RAM_DUMP_LEN;
buffer.req.checksum = FW_DEFAULT_CHECKSUM;
- status = ixgbe_host_interface_command(hw, (u32 *)&buffer,
- sizeof(buffer),
+ status = ixgbe_host_interface_command(hw, &buffer, sizeof(buffer),
IXGBE_HI_COMMAND_TIMEOUT, false);
return status;
}
@@ -861,9 +984,9 @@ static void ixgbe_disable_rx_x550(struct ixgbe_hw *hw)
fw_cmd.hdr.cmd = FW_DISABLE_RXEN_CMD;
fw_cmd.hdr.buf_len = FW_DISABLE_RXEN_LEN;
fw_cmd.hdr.checksum = FW_DEFAULT_CHECKSUM;
- fw_cmd.port_number = (u8)hw->bus.lan_id;
+ fw_cmd.port_number = hw->bus.lan_id;
- status = ixgbe_host_interface_command(hw, (u32 *)&fw_cmd,
+ status = ixgbe_host_interface_command(hw, &fw_cmd,
sizeof(struct ixgbe_hic_disable_rxen),
IXGBE_HI_COMMAND_TIMEOUT, true);
@@ -1248,6 +1371,117 @@ i2c_err:
}
/**
+ * ixgbe_setup_mac_link_sfp_n - Setup internal PHY for native SFP
+ * @hw: pointer to hardware structure
+ *
+ * Configure the integrated PHY for native SFP support.
+ */
+static s32
+ixgbe_setup_mac_link_sfp_n(struct ixgbe_hw *hw, ixgbe_link_speed speed,
+ __always_unused bool autoneg_wait_to_complete)
+{
+ bool setup_linear = false;
+ u32 reg_phy_int;
+ s32 rc;
+
+ /* Check if SFP module is supported and linear */
+ rc = ixgbe_supported_sfp_modules_X550em(hw, &setup_linear);
+
+	/* If no SFP module is present, return success: an SFP-not-present
+	 * error is not treated as fatal in the setup MAC link flow.
+	 */
+ if (rc == IXGBE_ERR_SFP_NOT_PRESENT)
+ return 0;
+
+	if (rc)
+		return rc;
+
+ /* Configure internal PHY for native SFI */
+ rc = hw->mac.ops.read_iosf_sb_reg(hw,
+ IXGBE_KRM_AN_CNTL_8(hw->bus.lan_id),
+ IXGBE_SB_IOSF_TARGET_KR_PHY,
+ &reg_phy_int);
+ if (rc)
+ return rc;
+
+ if (setup_linear) {
+ reg_phy_int &= ~IXGBE_KRM_AN_CNTL_8_LIMITING;
+ reg_phy_int |= IXGBE_KRM_AN_CNTL_8_LINEAR;
+ } else {
+ reg_phy_int |= IXGBE_KRM_AN_CNTL_8_LIMITING;
+ reg_phy_int &= ~IXGBE_KRM_AN_CNTL_8_LINEAR;
+ }
+
+ rc = hw->mac.ops.write_iosf_sb_reg(hw,
+ IXGBE_KRM_AN_CNTL_8(hw->bus.lan_id),
+ IXGBE_SB_IOSF_TARGET_KR_PHY,
+ reg_phy_int);
+ if (rc)
+ return rc;
+
+ /* Setup XFI/SFI internal link */
+ return ixgbe_setup_ixfi_x550em(hw, &speed);
+}
+
+/**
+ * ixgbe_setup_mac_link_sfp_x550a - Setup internal PHY for SFP
+ * @hw: pointer to hardware structure
+ *
+ * Configure the integrated PHY for SFP support.
+ */
+static s32
+ixgbe_setup_mac_link_sfp_x550a(struct ixgbe_hw *hw, ixgbe_link_speed speed,
+ __always_unused bool autoneg_wait_to_complete)
+{
+ u32 reg_slice, slice_offset;
+ bool setup_linear = false;
+ u16 reg_phy_ext;
+ s32 rc;
+
+ /* Check if SFP module is supported and linear */
+ rc = ixgbe_supported_sfp_modules_X550em(hw, &setup_linear);
+
+	/* If no SFP module is present, return success: an SFP-not-present
+	 * error is not treated as fatal in the setup MAC link flow.
+	 */
+ if (rc == IXGBE_ERR_SFP_NOT_PRESENT)
+ return 0;
+
+	if (rc)
+		return rc;
+
+ /* Configure internal PHY for KR/KX. */
+ ixgbe_setup_kr_speed_x550em(hw, speed);
+
+ if (!hw->phy.mdio.prtad || hw->phy.mdio.prtad == 0xFFFF)
+ return IXGBE_ERR_PHY_ADDR_INVALID;
+
+ /* Get external PHY device id */
+ rc = hw->phy.ops.read_reg(hw, IXGBE_CS4227_GLOBAL_ID_MSB,
+ IXGBE_MDIO_ZERO_DEV_TYPE, &reg_phy_ext);
+ if (rc)
+ return rc;
+
+ /* When configuring quad port CS4223, the MAC instance is part
+ * of the slice offset.
+ */
+ if (reg_phy_ext == IXGBE_CS4223_PHY_ID)
+ slice_offset = (hw->bus.lan_id +
+ (hw->bus.instance_id << 1)) << 12;
+ else
+ slice_offset = hw->bus.lan_id << 12;
+
+ /* Configure CS4227/CS4223 LINE side to proper mode. */
+ reg_slice = IXGBE_CS4227_LINE_SPARE24_LSB + slice_offset;
+ if (setup_linear)
+ reg_phy_ext = (IXGBE_CS4227_EDC_MODE_CX1 << 1) | 1;
+ else
+ reg_phy_ext = (IXGBE_CS4227_EDC_MODE_SR << 1) | 1;
+ return hw->phy.ops.write_reg(hw, reg_slice, IXGBE_MDIO_ZERO_DEV_TYPE,
+ reg_phy_ext);
+}
+
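The quad-port CS4223 case above folds the MAC instance into the register slice. A worked example of the offset arithmetic, with values chosen purely for illustration:

/* Sketch only: slice selection for the external PHY write above.
 * CS4223, lan_id = 1, instance_id = 1:
 *	slice_offset = (1 + (1 << 1)) << 12 = 3 << 12 = 0x3000
 * Otherwise (e.g. CS4227), only lan_id contributes:
 *	slice_offset = 1 << 12 = 0x1000
 * The register written is IXGBE_CS4227_LINE_SPARE24_LSB + slice_offset.
 */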
+/**
* ixgbe_setup_mac_link_t_X550em - Sets the auto advertised link speed
* @hw: pointer to hardware structure
* @speed: new link speed
@@ -1326,6 +1560,57 @@ static s32 ixgbe_check_link_t_X550em(struct ixgbe_hw *hw,
return 0;
}
+/**
+ * ixgbe_setup_sgmii - Set up link for sgmii
+ * @hw: pointer to hardware structure
+ */
+static s32
+ixgbe_setup_sgmii(struct ixgbe_hw *hw, __always_unused ixgbe_link_speed speed,
+ __always_unused bool autoneg_wait_to_complete)
+{
+ struct ixgbe_mac_info *mac = &hw->mac;
+ u32 lval, sval;
+ s32 rc;
+
+ rc = mac->ops.read_iosf_sb_reg(hw,
+ IXGBE_KRM_LINK_CTRL_1(hw->bus.lan_id),
+ IXGBE_SB_IOSF_TARGET_KR_PHY, &lval);
+ if (rc)
+ return rc;
+
+ lval &= ~IXGBE_KRM_LINK_CTRL_1_TETH_AN_ENABLE;
+ lval &= ~IXGBE_KRM_LINK_CTRL_1_TETH_FORCE_SPEED_MASK;
+ lval |= IXGBE_KRM_LINK_CTRL_1_TETH_AN_SGMII_EN;
+ lval |= IXGBE_KRM_LINK_CTRL_1_TETH_AN_CLAUSE_37_EN;
+ lval |= IXGBE_KRM_LINK_CTRL_1_TETH_FORCE_SPEED_1G;
+ rc = mac->ops.write_iosf_sb_reg(hw,
+ IXGBE_KRM_LINK_CTRL_1(hw->bus.lan_id),
+ IXGBE_SB_IOSF_TARGET_KR_PHY, lval);
+ if (rc)
+ return rc;
+
+ rc = mac->ops.read_iosf_sb_reg(hw,
+ IXGBE_KRM_SGMII_CTRL(hw->bus.lan_id),
+ IXGBE_SB_IOSF_TARGET_KR_PHY, &sval);
+ if (rc)
+ return rc;
+
+ sval |= IXGBE_KRM_SGMII_CTRL_MAC_TAR_FORCE_10_D;
+ sval |= IXGBE_KRM_SGMII_CTRL_MAC_TAR_FORCE_100_D;
+ rc = mac->ops.write_iosf_sb_reg(hw,
+ IXGBE_KRM_SGMII_CTRL(hw->bus.lan_id),
+ IXGBE_SB_IOSF_TARGET_KR_PHY, sval);
+ if (rc)
+ return rc;
+
+ lval |= IXGBE_KRM_LINK_CTRL_1_TETH_AN_RESTART;
+ rc = mac->ops.write_iosf_sb_reg(hw,
+ IXGBE_KRM_LINK_CTRL_1(hw->bus.lan_id),
+ IXGBE_SB_IOSF_TARGET_KR_PHY, lval);
+
+ return rc;
+}
+
/** ixgbe_init_mac_link_ops_X550em - init mac link function pointers
* @hw: pointer to hardware structure
**/
@@ -1342,15 +1627,35 @@ static void ixgbe_init_mac_link_ops_X550em(struct ixgbe_hw *hw)
mac->ops.enable_tx_laser = NULL;
mac->ops.flap_tx_laser = NULL;
mac->ops.setup_link = ixgbe_setup_mac_link_multispeed_fiber;
- mac->ops.setup_mac_link = ixgbe_setup_mac_link_sfp_x550em;
+ mac->ops.setup_fc = ixgbe_setup_fc_x550em;
+ switch (hw->device_id) {
+ case IXGBE_DEV_ID_X550EM_A_SFP_N:
+ mac->ops.setup_mac_link = ixgbe_setup_mac_link_sfp_n;
+ break;
+ case IXGBE_DEV_ID_X550EM_A_SFP:
+ mac->ops.setup_mac_link =
+ ixgbe_setup_mac_link_sfp_x550a;
+ break;
+ default:
+ mac->ops.setup_mac_link =
+ ixgbe_setup_mac_link_sfp_x550em;
+ break;
+ }
mac->ops.set_rate_select_speed =
ixgbe_set_soft_rate_select_speed;
break;
case ixgbe_media_type_copper:
mac->ops.setup_link = ixgbe_setup_mac_link_t_X550em;
+ mac->ops.setup_fc = ixgbe_setup_fc_generic;
mac->ops.check_link = ixgbe_check_link_t_X550em;
+ return;
+ case ixgbe_media_type_backplane:
+ if (hw->device_id == IXGBE_DEV_ID_X550EM_A_SGMII ||
+ hw->device_id == IXGBE_DEV_ID_X550EM_A_SGMII_L)
+ mac->ops.setup_link = ixgbe_setup_sgmii;
break;
default:
+ mac->ops.setup_fc = ixgbe_setup_fc_x550em;
break;
}
}
@@ -1614,7 +1919,7 @@ static s32 ixgbe_setup_kr_speed_x550em(struct ixgbe_hw *hw,
s32 status;
u32 reg_val;
- status = ixgbe_read_iosf_sb_reg_x550(hw,
+ status = hw->mac.ops.read_iosf_sb_reg(hw,
IXGBE_KRM_LINK_CTRL_1(hw->bus.lan_id),
IXGBE_SB_IOSF_TARGET_KR_PHY, &reg_val);
if (status)
@@ -1636,7 +1941,7 @@ static s32 ixgbe_setup_kr_speed_x550em(struct ixgbe_hw *hw,
/* Restart auto-negotiation. */
reg_val |= IXGBE_KRM_LINK_CTRL_1_TETH_AN_RESTART;
- status = ixgbe_write_iosf_sb_reg_x550(hw,
+ status = hw->mac.ops.write_iosf_sb_reg(hw,
IXGBE_KRM_LINK_CTRL_1(hw->bus.lan_id),
IXGBE_SB_IOSF_TARGET_KR_PHY, reg_val);
@@ -1653,9 +1958,9 @@ static s32 ixgbe_setup_kx4_x550em(struct ixgbe_hw *hw)
s32 status;
u32 reg_val;
- status = ixgbe_read_iosf_sb_reg_x550(hw, IXGBE_KX4_LINK_CNTL_1,
- IXGBE_SB_IOSF_TARGET_KX4_PCS0 +
- hw->bus.lan_id, &reg_val);
+ status = hw->mac.ops.read_iosf_sb_reg(hw, IXGBE_KX4_LINK_CNTL_1,
+ IXGBE_SB_IOSF_TARGET_KX4_PCS0 +
+ hw->bus.lan_id, &reg_val);
if (status)
return status;
@@ -1674,20 +1979,24 @@ static s32 ixgbe_setup_kx4_x550em(struct ixgbe_hw *hw)
/* Restart auto-negotiation. */
reg_val |= IXGBE_KX4_LINK_CNTL_1_TETH_AN_RESTART;
- status = ixgbe_write_iosf_sb_reg_x550(hw, IXGBE_KX4_LINK_CNTL_1,
- IXGBE_SB_IOSF_TARGET_KX4_PCS0 +
- hw->bus.lan_id, reg_val);
+ status = hw->mac.ops.write_iosf_sb_reg(hw, IXGBE_KX4_LINK_CNTL_1,
+ IXGBE_SB_IOSF_TARGET_KX4_PCS0 +
+ hw->bus.lan_id, reg_val);
return status;
}
-/** ixgbe_setup_kr_x550em - Configure the KR PHY.
- * @hw: pointer to hardware structure
+/**
+ * ixgbe_setup_kr_x550em - Configure the KR PHY
+ * @hw: pointer to hardware structure
*
- * Configures the integrated KR PHY.
+ * Configures the integrated KR PHY for X550EM_x.
**/
static s32 ixgbe_setup_kr_x550em(struct ixgbe_hw *hw)
{
+ if (hw->mac.type != ixgbe_mac_X550EM_x)
+ return 0;
+
return ixgbe_setup_kr_speed_x550em(hw, hw->phy.autoneg_advertised);
}
@@ -1842,6 +2151,86 @@ static s32 ixgbe_get_lcd_t_x550em(struct ixgbe_hw *hw,
return status;
}
+/**
+ * ixgbe_setup_fc_x550em - Set up flow control
+ * @hw: pointer to hardware structure
+ */
+static s32 ixgbe_setup_fc_x550em(struct ixgbe_hw *hw)
+{
+ bool pause, asm_dir;
+ u32 reg_val;
+ s32 rc;
+
+ /* Validate the requested mode */
+ if (hw->fc.strict_ieee && hw->fc.requested_mode == ixgbe_fc_rx_pause) {
+ hw_err(hw, "ixgbe_fc_rx_pause not valid in strict IEEE mode\n");
+ return IXGBE_ERR_INVALID_LINK_SETTINGS;
+ }
+
+ /* 10gig parts do not have a word in the EEPROM to determine the
+ * default flow control setting, so we explicitly set it to full.
+ */
+ if (hw->fc.requested_mode == ixgbe_fc_default)
+ hw->fc.requested_mode = ixgbe_fc_full;
+
+ /* Determine PAUSE and ASM_DIR bits. */
+ switch (hw->fc.requested_mode) {
+ case ixgbe_fc_none:
+ pause = false;
+ asm_dir = false;
+ break;
+ case ixgbe_fc_tx_pause:
+ pause = false;
+ asm_dir = true;
+ break;
+ case ixgbe_fc_rx_pause:
+ /* Rx Flow control is enabled and Tx Flow control is
+ * disabled by software override. Since there really
+ * isn't a way to advertise that we are capable of RX
+ * Pause ONLY, we will advertise that we support both
+ * symmetric and asymmetric Rx PAUSE, as such we fall
+ * through to the fc_full statement. Later, we will
+ * disable the adapter's ability to send PAUSE frames.
+ */
+ /* Fallthrough */
+ case ixgbe_fc_full:
+ pause = true;
+ asm_dir = true;
+ break;
+ default:
+ hw_err(hw, "Flow control param set incorrectly\n");
+ return IXGBE_ERR_CONFIG;
+ }
+
+ if (hw->device_id != IXGBE_DEV_ID_X550EM_X_KR &&
+ hw->device_id != IXGBE_DEV_ID_X550EM_A_KR &&
+ hw->device_id != IXGBE_DEV_ID_X550EM_A_KR_L)
+ return 0;
+
+ rc = hw->mac.ops.read_iosf_sb_reg(hw,
+ IXGBE_KRM_AN_CNTL_1(hw->bus.lan_id),
+ IXGBE_SB_IOSF_TARGET_KR_PHY,
+ &reg_val);
+ if (rc)
+ return rc;
+
+ reg_val &= ~(IXGBE_KRM_AN_CNTL_1_SYM_PAUSE |
+ IXGBE_KRM_AN_CNTL_1_ASM_PAUSE);
+ if (pause)
+ reg_val |= IXGBE_KRM_AN_CNTL_1_SYM_PAUSE;
+ if (asm_dir)
+ reg_val |= IXGBE_KRM_AN_CNTL_1_ASM_PAUSE;
+ rc = hw->mac.ops.write_iosf_sb_reg(hw,
+ IXGBE_KRM_AN_CNTL_1(hw->bus.lan_id),
+ IXGBE_SB_IOSF_TARGET_KR_PHY,
+ reg_val);
+
+ /* This device does not fully support AN. */
+ hw->fc.disable_fc_autoneg = true;
+
+ return rc;
+}
+
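The switch above reduces the four requested flow-control modes to the two advertisement bits programmed into IXGBE_KRM_AN_CNTL_1. Summarised for reference (matches the code above; illustration only):

/* requested_mode     -> SYM_PAUSE, ASM_PAUSE
 * ixgbe_fc_none      ->     0,         0
 * ixgbe_fc_tx_pause  ->     0,         1
 * ixgbe_fc_rx_pause  ->     1,         1   (falls through to fc_full)
 * ixgbe_fc_full      ->     1,         1
 */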
/** ixgbe_enter_lplu_x550em - Transition to low power states
* @hw: pointer to hardware structure
*
@@ -1939,6 +2328,36 @@ static s32 ixgbe_enter_lplu_t_x550em(struct ixgbe_hw *hw)
return status;
}
+/**
+ * ixgbe_read_mng_if_sel_x550em - Read NW_MNG_IF_SEL register
+ * @hw: pointer to hardware structure
+ *
+ * Read NW_MNG_IF_SEL register and save field values.
+ */
+static void ixgbe_read_mng_if_sel_x550em(struct ixgbe_hw *hw)
+{
+ /* Save NW management interface connected on board. This is used
+ * to determine internal PHY mode.
+ */
+ hw->phy.nw_mng_if_sel = IXGBE_READ_REG(hw, IXGBE_NW_MNG_IF_SEL);
+
+	/* If X552 (X550EM_a) and the MDIO is connected to an external PHY,
+	 * then set the PHY address. This register field has only been used
+	 * for X552.
+ */
+ if (!hw->phy.nw_mng_if_sel) {
+ if (hw->mac.type == ixgbe_mac_x550em_a) {
+ struct ixgbe_adapter *adapter = hw->back;
+
+ e_warn(drv, "nw_mng_if_sel not set\n");
+ }
+ return;
+ }
+
+ hw->phy.mdio.prtad = (hw->phy.nw_mng_if_sel &
+ IXGBE_NW_MNG_IF_SEL_MDIO_PHY_ADD) >>
+ IXGBE_NW_MNG_IF_SEL_MDIO_PHY_ADD_SHIFT;
+}
+
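A worked example of the field extraction above, using the IXGBE_NW_MNG_IF_SEL_MDIO_PHY_ADD mask and shift added earlier in ixgbe_type.h (the register value is made up for illustration):

/* Sketch only: with nw_mng_if_sel = 0xC8,
 *	(0xC8 & (0x1F << 3)) >> 3 = 0xC8 >> 3 = 0x19
 * so hw->phy.mdio.prtad becomes 25.
 */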
/** ixgbe_init_phy_ops_X550em - PHY/SFP specific init
* @hw: pointer to hardware structure
*
@@ -1953,14 +2372,11 @@ static s32 ixgbe_init_phy_ops_X550em(struct ixgbe_hw *hw)
hw->mac.ops.set_lan_id(hw);
+ ixgbe_read_mng_if_sel_x550em(hw);
+
if (hw->mac.ops.get_media_type(hw) == ixgbe_media_type_fiber) {
phy->phy_semaphore_mask = IXGBE_GSSR_SHARED_I2C_SM;
ixgbe_setup_mux_ctl(hw);
-
- /* Save NW management interface connected on board. This is used
- * to determine internal PHY mode.
- */
- phy->nw_mng_if_sel = IXGBE_READ_REG(hw, IXGBE_NW_MNG_IF_SEL);
}
/* Identify the PHY or SFP module */
@@ -2023,16 +2439,24 @@ static enum ixgbe_media_type ixgbe_get_media_type_X550em(struct ixgbe_hw *hw)
/* Detect if there is a copper PHY attached. */
switch (hw->device_id) {
+ case IXGBE_DEV_ID_X550EM_A_SGMII:
+ case IXGBE_DEV_ID_X550EM_A_SGMII_L:
+ hw->phy.type = ixgbe_phy_sgmii;
+ /* Fallthrough */
case IXGBE_DEV_ID_X550EM_X_KR:
case IXGBE_DEV_ID_X550EM_X_KX4:
+ case IXGBE_DEV_ID_X550EM_A_KR:
+ case IXGBE_DEV_ID_X550EM_A_KR_L:
media_type = ixgbe_media_type_backplane;
break;
case IXGBE_DEV_ID_X550EM_X_SFP:
+ case IXGBE_DEV_ID_X550EM_A_SFP:
+ case IXGBE_DEV_ID_X550EM_A_SFP_N:
media_type = ixgbe_media_type_fiber;
break;
case IXGBE_DEV_ID_X550EM_X_1G_T:
case IXGBE_DEV_ID_X550EM_X_10G_T:
- media_type = ixgbe_media_type_copper;
+ media_type = ixgbe_media_type_copper;
break;
default:
media_type = ixgbe_media_type_unknown;
@@ -2080,6 +2504,27 @@ static s32 ixgbe_init_ext_t_x550em(struct ixgbe_hw *hw)
return status;
}
+/**
+ * ixgbe_set_mdio_speed - Set MDIO clock speed
+ * @hw: pointer to hardware structure
+ */
+static void ixgbe_set_mdio_speed(struct ixgbe_hw *hw)
+{
+ u32 hlreg0;
+
+ switch (hw->device_id) {
+ case IXGBE_DEV_ID_X550EM_X_10G_T:
+ case IXGBE_DEV_ID_X550EM_A_SFP:
+ /* Config MDIO clock speed before the first MDIO PHY access */
+ hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
+ hlreg0 &= ~IXGBE_HLREG0_MDCSPD;
+ IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
+ break;
+ default:
+ break;
+ }
+}
+
/** ixgbe_reset_hw_X550em - Perform hardware reset
** @hw: pointer to hardware structure
**
@@ -2093,7 +2538,6 @@ static s32 ixgbe_reset_hw_X550em(struct ixgbe_hw *hw)
s32 status;
u32 ctrl = 0;
u32 i;
- u32 hlreg0;
bool link_up = false;
/* Call adapter stop to disable Tx/Rx and clear interrupts */
@@ -2179,11 +2623,7 @@ mac_reset_top:
hw->mac.num_rar_entries = 128;
hw->mac.ops.init_rx_addrs(hw);
- if (hw->device_id == IXGBE_DEV_ID_X550EM_X_10G_T) {
- hlreg0 = IXGBE_READ_REG(hw, IXGBE_HLREG0);
- hlreg0 &= ~IXGBE_HLREG0_MDCSPD;
- IXGBE_WRITE_REG(hw, IXGBE_HLREG0, hlreg0);
- }
+ ixgbe_set_mdio_speed(hw);
if (hw->device_id == IXGBE_DEV_ID_X550EM_X_SFP)
ixgbe_setup_mux_ctl(hw);
@@ -2206,9 +2646,9 @@ static void ixgbe_set_ethertype_anti_spoofing_X550(struct ixgbe_hw *hw,
pfvfspoof = IXGBE_READ_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg));
if (enable)
- pfvfspoof |= (1 << vf_target_shift);
+ pfvfspoof |= BIT(vf_target_shift);
else
- pfvfspoof &= ~(1 << vf_target_shift);
+ pfvfspoof &= ~BIT(vf_target_shift);
IXGBE_WRITE_REG(hw, IXGBE_PFVFSPOOF(vf_target_reg), pfvfspoof);
}
@@ -2296,6 +2736,110 @@ static void ixgbe_release_swfw_sync_X550em(struct ixgbe_hw *hw, u32 mask)
ixgbe_release_swfw_sync_X540(hw, mask);
}
+/**
+ * ixgbe_acquire_swfw_sync_x550em_a - Acquire SWFW semaphore
+ * @hw: pointer to hardware structure
+ * @mask: Mask to specify which semaphore to acquire
+ *
+ * Acquires the SWFW semaphore and gets the shared PHY token as needed
+ */
+static s32 ixgbe_acquire_swfw_sync_x550em_a(struct ixgbe_hw *hw, u32 mask)
+{
+ u32 hmask = mask & ~IXGBE_GSSR_TOKEN_SM;
+ int retries = FW_PHY_TOKEN_RETRIES;
+ s32 status;
+
+ while (--retries) {
+ status = 0;
+ if (hmask)
+ status = ixgbe_acquire_swfw_sync_X540(hw, hmask);
+ if (status)
+ return status;
+ if (!(mask & IXGBE_GSSR_TOKEN_SM))
+ return 0;
+
+ status = ixgbe_get_phy_token(hw);
+ if (!status)
+ return 0;
+ if (hmask)
+ ixgbe_release_swfw_sync_X540(hw, hmask);
+ if (status != IXGBE_ERR_TOKEN_RETRY)
+ return status;
+ msleep(FW_PHY_TOKEN_DELAY);
+ }
+
+ return status;
+}
+
+/**
+ * ixgbe_release_swfw_sync_x550em_a - Release SWFW semaphore
+ * @hw: pointer to hardware structure
+ * @mask: Mask to specify which semaphore to release
+ *
+ * Releases the SWFW semaphore and puts back the shared PHY token as needed
+ */
+static void ixgbe_release_swfw_sync_x550em_a(struct ixgbe_hw *hw, u32 mask)
+{
+ u32 hmask = mask & ~IXGBE_GSSR_TOKEN_SM;
+
+ if (mask & IXGBE_GSSR_TOKEN_SM)
+ ixgbe_put_phy_token(hw);
+
+ if (hmask)
+ ixgbe_release_swfw_sync_X540(hw, hmask);
+}
+
+/**
+ * ixgbe_read_phy_reg_x550a - Reads specified PHY register
+ * @hw: pointer to hardware structure
+ * @reg_addr: 32 bit address of PHY register to read
+ * @phy_data: Pointer to read data from PHY register
+ *
+ * Reads a value from a specified PHY register using the SWFW lock and PHY
+ * Token. The PHY Token is needed since the MDIO is shared between two MAC
+ * instances.
+ */
+static s32 ixgbe_read_phy_reg_x550a(struct ixgbe_hw *hw, u32 reg_addr,
+ u32 device_type, u16 *phy_data)
+{
+ u32 mask = hw->phy.phy_semaphore_mask | IXGBE_GSSR_TOKEN_SM;
+ s32 status;
+
+ if (hw->mac.ops.acquire_swfw_sync(hw, mask))
+ return IXGBE_ERR_SWFW_SYNC;
+
+ status = hw->phy.ops.read_reg_mdi(hw, reg_addr, device_type, phy_data);
+
+ hw->mac.ops.release_swfw_sync(hw, mask);
+
+ return status;
+}
+
+/**
+ * ixgbe_write_phy_reg_x550a - Writes specified PHY register
+ * @hw: pointer to hardware structure
+ * @reg_addr: 32 bit PHY register to write
+ * @device_type: 5 bit device type
+ * @phy_data: Data to write to the PHY register
+ *
+ * Writes a value to a specified PHY register using the SWFW lock and PHY
+ * Token. The PHY Token is needed since the MDIO is shared between two MAC
+ * instances.
+ */
+static s32 ixgbe_write_phy_reg_x550a(struct ixgbe_hw *hw, u32 reg_addr,
+ u32 device_type, u16 phy_data)
+{
+ u32 mask = hw->phy.phy_semaphore_mask | IXGBE_GSSR_TOKEN_SM;
+ s32 status;
+
+ if (hw->mac.ops.acquire_swfw_sync(hw, mask))
+ return IXGBE_ERR_SWFW_SYNC;
+
+ status = ixgbe_write_phy_reg_mdi(hw, reg_addr, device_type, phy_data);
+ hw->mac.ops.release_swfw_sync(hw, mask);
+
+ return status;
+}
+
#define X550_COMMON_MAC \
.init_hw = &ixgbe_init_hw_generic, \
.start_hw = &ixgbe_start_hw_X540, \
@@ -2337,12 +2881,10 @@ static void ixgbe_release_swfw_sync_X550em(struct ixgbe_hw *hw, u32 mask)
.enable_rx_buff = &ixgbe_enable_rx_buff_generic, \
.get_thermal_sensor_data = NULL, \
.init_thermal_sensor_thresh = NULL, \
- .prot_autoc_read = &prot_autoc_read_generic, \
- .prot_autoc_write = &prot_autoc_write_generic, \
.enable_rx = &ixgbe_enable_rx_generic, \
.disable_rx = &ixgbe_disable_rx_x550, \
-static struct ixgbe_mac_operations mac_ops_X550 = {
+static const struct ixgbe_mac_operations mac_ops_X550 = {
X550_COMMON_MAC
.reset_hw = &ixgbe_reset_hw_X540,
.get_media_type = &ixgbe_get_media_type_X540,
@@ -2354,20 +2896,45 @@ static struct ixgbe_mac_operations mac_ops_X550 = {
.setup_sfp = NULL,
.acquire_swfw_sync = &ixgbe_acquire_swfw_sync_X540,
.release_swfw_sync = &ixgbe_release_swfw_sync_X540,
+ .init_swfw_sync = &ixgbe_init_swfw_sync_X540,
+ .prot_autoc_read = prot_autoc_read_generic,
+ .prot_autoc_write = prot_autoc_write_generic,
+ .setup_fc = ixgbe_setup_fc_generic,
};
-static struct ixgbe_mac_operations mac_ops_X550EM_x = {
+static const struct ixgbe_mac_operations mac_ops_X550EM_x = {
X550_COMMON_MAC
.reset_hw = &ixgbe_reset_hw_X550em,
.get_media_type = &ixgbe_get_media_type_X550em,
.get_san_mac_addr = NULL,
.get_wwn_prefix = NULL,
- .setup_link = NULL, /* defined later */
+ .setup_link = &ixgbe_setup_mac_link_X540,
.get_link_capabilities = &ixgbe_get_link_capabilities_X550em,
.get_bus_info = &ixgbe_get_bus_info_X550em,
.setup_sfp = ixgbe_setup_sfp_modules_X550em,
.acquire_swfw_sync = &ixgbe_acquire_swfw_sync_X550em,
.release_swfw_sync = &ixgbe_release_swfw_sync_X550em,
+ .init_swfw_sync = &ixgbe_init_swfw_sync_X540,
+ .setup_fc = NULL, /* defined later */
+ .read_iosf_sb_reg = ixgbe_read_iosf_sb_reg_x550,
+ .write_iosf_sb_reg = ixgbe_write_iosf_sb_reg_x550,
+};
+
+static struct ixgbe_mac_operations mac_ops_x550em_a = {
+ X550_COMMON_MAC
+ .reset_hw = ixgbe_reset_hw_X550em,
+ .get_media_type = ixgbe_get_media_type_X550em,
+ .get_san_mac_addr = NULL,
+ .get_wwn_prefix = NULL,
+ .setup_link = NULL, /* defined later */
+ .get_link_capabilities = ixgbe_get_link_capabilities_X550em,
+ .get_bus_info = ixgbe_get_bus_info_X550em,
+ .setup_sfp = ixgbe_setup_sfp_modules_X550em,
+ .acquire_swfw_sync = ixgbe_acquire_swfw_sync_x550em_a,
+ .release_swfw_sync = ixgbe_release_swfw_sync_x550em_a,
+ .setup_fc = ixgbe_setup_fc_x550em,
+ .read_iosf_sb_reg = ixgbe_read_iosf_sb_reg_x550a,
+ .write_iosf_sb_reg = ixgbe_write_iosf_sb_reg_x550a,
};
#define X550_COMMON_EEP \
@@ -2379,12 +2946,12 @@ static struct ixgbe_mac_operations mac_ops_X550EM_x = {
.update_checksum = &ixgbe_update_eeprom_checksum_X550, \
.calc_checksum = &ixgbe_calc_eeprom_checksum_X550, \
-static struct ixgbe_eeprom_operations eeprom_ops_X550 = {
+static const struct ixgbe_eeprom_operations eeprom_ops_X550 = {
X550_COMMON_EEP
.init_params = &ixgbe_init_eeprom_params_X550,
};
-static struct ixgbe_eeprom_operations eeprom_ops_X550EM_x = {
+static const struct ixgbe_eeprom_operations eeprom_ops_X550EM_x = {
X550_COMMON_EEP
.init_params = &ixgbe_init_eeprom_params_X540,
};
@@ -2398,23 +2965,25 @@ static struct ixgbe_eeprom_operations eeprom_ops_X550EM_x = {
.read_i2c_sff8472 = &ixgbe_read_i2c_sff8472_generic, \
.read_i2c_eeprom = &ixgbe_read_i2c_eeprom_generic, \
.write_i2c_eeprom = &ixgbe_write_i2c_eeprom_generic, \
- .read_reg = &ixgbe_read_phy_reg_generic, \
- .write_reg = &ixgbe_write_phy_reg_generic, \
.setup_link = &ixgbe_setup_phy_link_generic, \
.set_phy_power = NULL, \
.check_overtemp = &ixgbe_tn_check_overtemp, \
.get_firmware_version = &ixgbe_get_phy_firmware_version_generic,
-static struct ixgbe_phy_operations phy_ops_X550 = {
+static const struct ixgbe_phy_operations phy_ops_X550 = {
X550_COMMON_PHY
.init = NULL,
.identify = &ixgbe_identify_phy_generic,
+ .read_reg = &ixgbe_read_phy_reg_generic,
+ .write_reg = &ixgbe_write_phy_reg_generic,
};
-static struct ixgbe_phy_operations phy_ops_X550EM_x = {
+static const struct ixgbe_phy_operations phy_ops_X550EM_x = {
X550_COMMON_PHY
.init = &ixgbe_init_phy_ops_X550em,
.identify = &ixgbe_identify_phy_x550em,
+ .read_reg = &ixgbe_read_phy_reg_generic,
+ .write_reg = &ixgbe_write_phy_reg_generic,
.read_i2c_combined = &ixgbe_read_i2c_combined_generic,
.write_i2c_combined = &ixgbe_write_i2c_combined_generic,
.read_i2c_combined_unlocked = &ixgbe_read_i2c_combined_generic_unlocked,
@@ -2422,6 +2991,14 @@ static struct ixgbe_phy_operations phy_ops_X550EM_x = {
&ixgbe_write_i2c_combined_generic_unlocked,
};
+static const struct ixgbe_phy_operations phy_ops_x550em_a = {
+ X550_COMMON_PHY
+ .init = &ixgbe_init_phy_ops_X550em,
+ .identify = &ixgbe_identify_phy_x550em,
+ .read_reg = &ixgbe_read_phy_reg_x550a,
+ .write_reg = &ixgbe_write_phy_reg_x550a,
+};
+
static const u32 ixgbe_mvals_X550[IXGBE_MVALS_IDX_LIMIT] = {
IXGBE_MVALS_INIT(X550)
};
@@ -2430,7 +3007,11 @@ static const u32 ixgbe_mvals_X550EM_x[IXGBE_MVALS_IDX_LIMIT] = {
IXGBE_MVALS_INIT(X550EM_x)
};
-struct ixgbe_info ixgbe_X550_info = {
+static const u32 ixgbe_mvals_x550em_a[IXGBE_MVALS_IDX_LIMIT] = {
+ IXGBE_MVALS_INIT(X550EM_a)
+};
+
+const struct ixgbe_info ixgbe_X550_info = {
.mac = ixgbe_mac_X550,
.get_invariants = &ixgbe_get_invariants_X540,
.mac_ops = &mac_ops_X550,
@@ -2440,7 +3021,7 @@ struct ixgbe_info ixgbe_X550_info = {
.mvals = ixgbe_mvals_X550,
};
-struct ixgbe_info ixgbe_X550EM_x_info = {
+const struct ixgbe_info ixgbe_X550EM_x_info = {
.mac = ixgbe_mac_X550EM_x,
.get_invariants = &ixgbe_get_invariants_X550_x,
.mac_ops = &mac_ops_X550EM_x,
@@ -2449,3 +3030,13 @@ struct ixgbe_info ixgbe_X550EM_x_info = {
.mbx_ops = &mbx_ops_generic,
.mvals = ixgbe_mvals_X550EM_x,
};
+
+const struct ixgbe_info ixgbe_x550em_a_info = {
+ .mac = ixgbe_mac_x550em_a,
+ .get_invariants = &ixgbe_get_invariants_X550_x,
+ .mac_ops = &mac_ops_x550em_a,
+ .eeprom_ops = &eeprom_ops_X550EM_x,
+ .phy_ops = &phy_ops_x550em_a,
+ .mbx_ops = &mbx_ops_generic,
+ .mvals = ixgbe_mvals_x550em_a,
+};
diff --git a/drivers/net/ethernet/intel/ixgbevf/defines.h b/drivers/net/ethernet/intel/ixgbevf/defines.h
index 58434584b16d..ae09d60e7b67 100644
--- a/drivers/net/ethernet/intel/ixgbevf/defines.h
+++ b/drivers/net/ethernet/intel/ixgbevf/defines.h
@@ -33,6 +33,11 @@
#define IXGBE_DEV_ID_X550_VF 0x1565
#define IXGBE_DEV_ID_X550EM_X_VF 0x15A8
+#define IXGBE_DEV_ID_82599_VF_HV 0x152E
+#define IXGBE_DEV_ID_X540_VF_HV 0x1530
+#define IXGBE_DEV_ID_X550_VF_HV 0x1564
+#define IXGBE_DEV_ID_X550EM_X_VF_HV 0x15A9
+
#define IXGBE_VF_IRQ_CLEAR_MASK 7
#define IXGBE_VF_MAX_TX_QUEUES 8
#define IXGBE_VF_MAX_RX_QUEUES 8
@@ -74,7 +79,7 @@ typedef u32 ixgbe_link_speed;
#define IXGBE_RXDCTL_RLPML_EN 0x00008000
/* DCA Control */
-#define IXGBE_DCA_TXCTRL_TX_WB_RO_EN (1 << 11) /* Tx Desc writeback RO bit */
+#define IXGBE_DCA_TXCTRL_TX_WB_RO_EN BIT(11) /* Tx Desc writeback RO bit */
/* PSRTYPE bit definitions */
#define IXGBE_PSRTYPE_TCPHDR 0x00000010
@@ -296,16 +301,16 @@ struct ixgbe_adv_tx_context_desc {
#define IXGBE_TXDCTL_SWFLSH 0x04000000 /* Tx Desc. wr-bk flushing */
#define IXGBE_TXDCTL_WTHRESH_SHIFT 16 /* shift to WTHRESH bits */
-#define IXGBE_DCA_RXCTRL_DESC_DCA_EN (1 << 5) /* Rx Desc enable */
-#define IXGBE_DCA_RXCTRL_HEAD_DCA_EN (1 << 6) /* Rx Desc header ena */
-#define IXGBE_DCA_RXCTRL_DATA_DCA_EN (1 << 7) /* Rx Desc payload ena */
-#define IXGBE_DCA_RXCTRL_DESC_RRO_EN (1 << 9) /* Rx rd Desc Relax Order */
-#define IXGBE_DCA_RXCTRL_DATA_WRO_EN (1 << 13) /* Rx wr data Relax Order */
-#define IXGBE_DCA_RXCTRL_HEAD_WRO_EN (1 << 15) /* Rx wr header RO */
-
-#define IXGBE_DCA_TXCTRL_DESC_DCA_EN (1 << 5) /* DCA Tx Desc enable */
-#define IXGBE_DCA_TXCTRL_DESC_RRO_EN (1 << 9) /* Tx rd Desc Relax Order */
-#define IXGBE_DCA_TXCTRL_DESC_WRO_EN (1 << 11) /* Tx Desc writeback RO bit */
-#define IXGBE_DCA_TXCTRL_DATA_RRO_EN (1 << 13) /* Tx rd data Relax Order */
+#define IXGBE_DCA_RXCTRL_DESC_DCA_EN BIT(5) /* Rx Desc enable */
+#define IXGBE_DCA_RXCTRL_HEAD_DCA_EN BIT(6) /* Rx Desc header ena */
+#define IXGBE_DCA_RXCTRL_DATA_DCA_EN BIT(7) /* Rx Desc payload ena */
+#define IXGBE_DCA_RXCTRL_DESC_RRO_EN BIT(9) /* Rx rd Desc Relax Order */
+#define IXGBE_DCA_RXCTRL_DATA_WRO_EN BIT(13) /* Rx wr data Relax Order */
+#define IXGBE_DCA_RXCTRL_HEAD_WRO_EN BIT(15) /* Rx wr header RO */
+
+#define IXGBE_DCA_TXCTRL_DESC_DCA_EN BIT(5) /* DCA Tx Desc enable */
+#define IXGBE_DCA_TXCTRL_DESC_RRO_EN BIT(9) /* Tx rd Desc Relax Order */
+#define IXGBE_DCA_TXCTRL_DESC_WRO_EN BIT(11) /* Tx Desc writeback RO bit */
+#define IXGBE_DCA_TXCTRL_DATA_RRO_EN BIT(13) /* Tx rd data Relax Order */
#endif /* _IXGBEVF_DEFINES_H_ */
diff --git a/drivers/net/ethernet/intel/ixgbevf/ethtool.c b/drivers/net/ethernet/intel/ixgbevf/ethtool.c
index d7aa4b203f40..508e72c5f1c2 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ethtool.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ethtool.c
@@ -42,65 +42,54 @@
#define IXGBE_ALL_RAR_ENTRIES 16
+enum {NETDEV_STATS, IXGBEVF_STATS};
+
struct ixgbe_stats {
char stat_string[ETH_GSTRING_LEN];
- struct {
- int sizeof_stat;
- int stat_offset;
- int base_stat_offset;
- int saved_reset_offset;
- };
+ int type;
+ int sizeof_stat;
+ int stat_offset;
};
-#define IXGBEVF_STAT(m, b, r) { \
- .sizeof_stat = FIELD_SIZEOF(struct ixgbevf_adapter, m), \
- .stat_offset = offsetof(struct ixgbevf_adapter, m), \
- .base_stat_offset = offsetof(struct ixgbevf_adapter, b), \
- .saved_reset_offset = offsetof(struct ixgbevf_adapter, r) \
+#define IXGBEVF_STAT(_name, _stat) { \
+ .stat_string = _name, \
+ .type = IXGBEVF_STATS, \
+ .sizeof_stat = FIELD_SIZEOF(struct ixgbevf_adapter, _stat), \
+ .stat_offset = offsetof(struct ixgbevf_adapter, _stat) \
}
-#define IXGBEVF_ZSTAT(m) { \
- .sizeof_stat = FIELD_SIZEOF(struct ixgbevf_adapter, m), \
- .stat_offset = offsetof(struct ixgbevf_adapter, m), \
- .base_stat_offset = -1, \
- .saved_reset_offset = -1 \
+#define IXGBEVF_NETDEV_STAT(_net_stat) { \
+ .stat_string = #_net_stat, \
+ .type = NETDEV_STATS, \
+ .sizeof_stat = FIELD_SIZEOF(struct net_device_stats, _net_stat), \
+ .stat_offset = offsetof(struct net_device_stats, _net_stat) \
}
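For reference, a minimal expansion of the two new macros (illustration only) shows how each table entry tells ethtool where to fetch a counter from:

/* IXGBEVF_NETDEV_STAT(rx_packets) expands to: */
{ .stat_string = "rx_packets",
  .type        = NETDEV_STATS,
  .sizeof_stat = FIELD_SIZEOF(struct net_device_stats, rx_packets),
  .stat_offset = offsetof(struct net_device_stats, rx_packets) }

/* IXGBEVF_STAT("tx_busy", tx_busy) expands to: */
{ .stat_string = "tx_busy",
  .type        = IXGBEVF_STATS,
  .sizeof_stat = FIELD_SIZEOF(struct ixgbevf_adapter, tx_busy),
  .stat_offset = offsetof(struct ixgbevf_adapter, tx_busy) }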
-static const struct ixgbe_stats ixgbe_gstrings_stats[] = {
- {"rx_packets", IXGBEVF_STAT(stats.vfgprc, stats.base_vfgprc,
- stats.saved_reset_vfgprc)},
- {"tx_packets", IXGBEVF_STAT(stats.vfgptc, stats.base_vfgptc,
- stats.saved_reset_vfgptc)},
- {"rx_bytes", IXGBEVF_STAT(stats.vfgorc, stats.base_vfgorc,
- stats.saved_reset_vfgorc)},
- {"tx_bytes", IXGBEVF_STAT(stats.vfgotc, stats.base_vfgotc,
- stats.saved_reset_vfgotc)},
- {"tx_busy", IXGBEVF_ZSTAT(tx_busy)},
- {"tx_restart_queue", IXGBEVF_ZSTAT(restart_queue)},
- {"tx_timeout_count", IXGBEVF_ZSTAT(tx_timeout_count)},
- {"multicast", IXGBEVF_STAT(stats.vfmprc, stats.base_vfmprc,
- stats.saved_reset_vfmprc)},
- {"rx_csum_offload_errors", IXGBEVF_ZSTAT(hw_csum_rx_error)},
-#ifdef BP_EXTENDED_STATS
- {"rx_bp_poll_yield", IXGBEVF_ZSTAT(bp_rx_yields)},
- {"rx_bp_cleaned", IXGBEVF_ZSTAT(bp_rx_cleaned)},
- {"rx_bp_misses", IXGBEVF_ZSTAT(bp_rx_missed)},
- {"tx_bp_napi_yield", IXGBEVF_ZSTAT(bp_tx_yields)},
- {"tx_bp_cleaned", IXGBEVF_ZSTAT(bp_tx_cleaned)},
- {"tx_bp_misses", IXGBEVF_ZSTAT(bp_tx_missed)},
-#endif
+static struct ixgbe_stats ixgbevf_gstrings_stats[] = {
+ IXGBEVF_NETDEV_STAT(rx_packets),
+ IXGBEVF_NETDEV_STAT(tx_packets),
+ IXGBEVF_NETDEV_STAT(rx_bytes),
+ IXGBEVF_NETDEV_STAT(tx_bytes),
+ IXGBEVF_STAT("tx_busy", tx_busy),
+ IXGBEVF_STAT("tx_restart_queue", restart_queue),
+ IXGBEVF_STAT("tx_timeout_count", tx_timeout_count),
+ IXGBEVF_NETDEV_STAT(multicast),
+ IXGBEVF_STAT("rx_csum_offload_errors", hw_csum_rx_error),
};
-#define IXGBE_QUEUE_STATS_LEN 0
-#define IXGBE_GLOBAL_STATS_LEN ARRAY_SIZE(ixgbe_gstrings_stats)
+#define IXGBEVF_QUEUE_STATS_LEN ( \
+ (((struct ixgbevf_adapter *)netdev_priv(netdev))->num_tx_queues + \
+ ((struct ixgbevf_adapter *)netdev_priv(netdev))->num_rx_queues) * \
+ (sizeof(struct ixgbe_stats) / sizeof(u64)))
+#define IXGBEVF_GLOBAL_STATS_LEN ARRAY_SIZE(ixgbevf_gstrings_stats)
-#define IXGBEVF_STATS_LEN (IXGBE_GLOBAL_STATS_LEN + IXGBE_QUEUE_STATS_LEN)
+#define IXGBEVF_STATS_LEN (IXGBEVF_GLOBAL_STATS_LEN + IXGBEVF_QUEUE_STATS_LEN)
static const char ixgbe_gstrings_test[][ETH_GSTRING_LEN] = {
"Register test (offline)",
"Link test (on/offline)"
};
-#define IXGBE_TEST_LEN (sizeof(ixgbe_gstrings_test) / ETH_GSTRING_LEN)
+#define IXGBEVF_TEST_LEN (sizeof(ixgbe_gstrings_test) / ETH_GSTRING_LEN)
static int ixgbevf_get_settings(struct net_device *netdev,
struct ethtool_cmd *ecmd)
@@ -177,7 +166,8 @@ static void ixgbevf_get_regs(struct net_device *netdev,
memset(p, 0, regs_len);
- regs->version = (1 << 24) | hw->revision_id << 16 | hw->device_id;
+ /* generate a number suitable for ethtool's register version */
+ regs->version = (1u << 24) | (hw->revision_id << 16) | hw->device_id;
/* General Registers */
regs_buff[0] = IXGBE_READ_REG(hw, IXGBE_VFCTRL);
@@ -392,13 +382,13 @@ clear_reset:
return err;
}
-static int ixgbevf_get_sset_count(struct net_device *dev, int stringset)
+static int ixgbevf_get_sset_count(struct net_device *netdev, int stringset)
{
switch (stringset) {
case ETH_SS_TEST:
- return IXGBE_TEST_LEN;
+ return IXGBEVF_TEST_LEN;
case ETH_SS_STATS:
- return IXGBE_GLOBAL_STATS_LEN;
+ return IXGBEVF_STATS_LEN;
default:
return -EINVAL;
}
@@ -408,70 +398,138 @@ static void ixgbevf_get_ethtool_stats(struct net_device *netdev,
struct ethtool_stats *stats, u64 *data)
{
struct ixgbevf_adapter *adapter = netdev_priv(netdev);
- char *base = (char *)adapter;
- int i;
-#ifdef BP_EXTENDED_STATS
- u64 rx_yields = 0, rx_cleaned = 0, rx_missed = 0,
- tx_yields = 0, tx_cleaned = 0, tx_missed = 0;
+ struct rtnl_link_stats64 temp;
+ const struct rtnl_link_stats64 *net_stats;
+ unsigned int start;
+ struct ixgbevf_ring *ring;
+ int i, j;
+ char *p;
- for (i = 0; i < adapter->num_rx_queues; i++) {
- rx_yields += adapter->rx_ring[i]->stats.yields;
- rx_cleaned += adapter->rx_ring[i]->stats.cleaned;
- rx_yields += adapter->rx_ring[i]->stats.yields;
- }
+ ixgbevf_update_stats(adapter);
+ net_stats = dev_get_stats(netdev, &temp);
+ for (i = 0; i < IXGBEVF_GLOBAL_STATS_LEN; i++) {
+ switch (ixgbevf_gstrings_stats[i].type) {
+ case NETDEV_STATS:
+ p = (char *)net_stats +
+ ixgbevf_gstrings_stats[i].stat_offset;
+ break;
+ case IXGBEVF_STATS:
+ p = (char *)adapter +
+ ixgbevf_gstrings_stats[i].stat_offset;
+ break;
+ default:
+ data[i] = 0;
+ continue;
+ }
- for (i = 0; i < adapter->num_tx_queues; i++) {
- tx_yields += adapter->tx_ring[i]->stats.yields;
- tx_cleaned += adapter->tx_ring[i]->stats.cleaned;
- tx_yields += adapter->tx_ring[i]->stats.yields;
+ data[i] = (ixgbevf_gstrings_stats[i].sizeof_stat ==
+ sizeof(u64)) ? *(u64 *)p : *(u32 *)p;
}
- adapter->bp_rx_yields = rx_yields;
- adapter->bp_rx_cleaned = rx_cleaned;
- adapter->bp_rx_missed = rx_missed;
+ /* populate Tx queue data */
+ for (j = 0; j < adapter->num_tx_queues; j++) {
+ ring = adapter->tx_ring[j];
+ if (!ring) {
+ data[i++] = 0;
+ data[i++] = 0;
+#ifdef BP_EXTENDED_STATS
+ data[i++] = 0;
+ data[i++] = 0;
+ data[i++] = 0;
+#endif
+ continue;
+ }
- adapter->bp_tx_yields = tx_yields;
- adapter->bp_tx_cleaned = tx_cleaned;
- adapter->bp_tx_missed = tx_missed;
+ do {
+ start = u64_stats_fetch_begin_irq(&ring->syncp);
+ data[i] = ring->stats.packets;
+ data[i + 1] = ring->stats.bytes;
+ } while (u64_stats_fetch_retry_irq(&ring->syncp, start));
+ i += 2;
+#ifdef BP_EXTENDED_STATS
+ data[i] = ring->stats.yields;
+ data[i + 1] = ring->stats.misses;
+ data[i + 2] = ring->stats.cleaned;
+ i += 3;
#endif
+ }
- ixgbevf_update_stats(adapter);
- for (i = 0; i < IXGBE_GLOBAL_STATS_LEN; i++) {
- char *p = base + ixgbe_gstrings_stats[i].stat_offset;
- char *b = base + ixgbe_gstrings_stats[i].base_stat_offset;
- char *r = base + ixgbe_gstrings_stats[i].saved_reset_offset;
-
- if (ixgbe_gstrings_stats[i].sizeof_stat == sizeof(u64)) {
- if (ixgbe_gstrings_stats[i].base_stat_offset >= 0)
- data[i] = *(u64 *)p - *(u64 *)b + *(u64 *)r;
- else
- data[i] = *(u64 *)p;
- } else {
- if (ixgbe_gstrings_stats[i].base_stat_offset >= 0)
- data[i] = *(u32 *)p - *(u32 *)b + *(u32 *)r;
- else
- data[i] = *(u32 *)p;
+ /* populate Rx queue data */
+ for (j = 0; j < adapter->num_rx_queues; j++) {
+ ring = adapter->rx_ring[j];
+ if (!ring) {
+ data[i++] = 0;
+ data[i++] = 0;
+#ifdef BP_EXTENDED_STATS
+ data[i++] = 0;
+ data[i++] = 0;
+ data[i++] = 0;
+#endif
+ continue;
}
+
+ do {
+ start = u64_stats_fetch_begin_irq(&ring->syncp);
+ data[i] = ring->stats.packets;
+ data[i + 1] = ring->stats.bytes;
+ } while (u64_stats_fetch_retry_irq(&ring->syncp, start));
+ i += 2;
+#ifdef BP_EXTENDED_STATS
+ data[i] = ring->stats.yields;
+ data[i + 1] = ring->stats.misses;
+ data[i + 2] = ring->stats.cleaned;
+ i += 3;
+#endif
}
}
static void ixgbevf_get_strings(struct net_device *netdev, u32 stringset,
u8 *data)
{
+ struct ixgbevf_adapter *adapter = netdev_priv(netdev);
char *p = (char *)data;
int i;
switch (stringset) {
case ETH_SS_TEST:
memcpy(data, *ixgbe_gstrings_test,
- IXGBE_TEST_LEN * ETH_GSTRING_LEN);
+ IXGBEVF_TEST_LEN * ETH_GSTRING_LEN);
break;
case ETH_SS_STATS:
- for (i = 0; i < IXGBE_GLOBAL_STATS_LEN; i++) {
- memcpy(p, ixgbe_gstrings_stats[i].stat_string,
+ for (i = 0; i < IXGBEVF_GLOBAL_STATS_LEN; i++) {
+ memcpy(p, ixgbevf_gstrings_stats[i].stat_string,
ETH_GSTRING_LEN);
p += ETH_GSTRING_LEN;
}
+
+ for (i = 0; i < adapter->num_tx_queues; i++) {
+ sprintf(p, "tx_queue_%u_packets", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "tx_queue_%u_bytes", i);
+ p += ETH_GSTRING_LEN;
+#ifdef BP_EXTENDED_STATS
+ sprintf(p, "tx_queue_%u_bp_napi_yield", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "tx_queue_%u_bp_misses", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "tx_queue_%u_bp_cleaned", i);
+ p += ETH_GSTRING_LEN;
+#endif /* BP_EXTENDED_STATS */
+ }
+ for (i = 0; i < adapter->num_rx_queues; i++) {
+ sprintf(p, "rx_queue_%u_packets", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "rx_queue_%u_bytes", i);
+ p += ETH_GSTRING_LEN;
+#ifdef BP_EXTENDED_STATS
+ sprintf(p, "rx_queue_%u_bp_poll_yield", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "rx_queue_%u_bp_misses", i);
+ p += ETH_GSTRING_LEN;
+ sprintf(p, "rx_queue_%u_bp_cleaned", i);
+ p += ETH_GSTRING_LEN;
+#endif /* BP_EXTENDED_STATS */
+ }
break;
}
}
diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
index 991eeae81473..d5944c391cbb 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf.h
@@ -166,10 +166,10 @@ struct ixgbevf_ring {
#define MAXIMUM_ETHERNET_VLAN_SIZE (VLAN_ETH_FRAME_LEN + ETH_FCS_LEN)
-#define IXGBE_TX_FLAGS_CSUM (u32)(1)
-#define IXGBE_TX_FLAGS_VLAN (u32)(1 << 1)
-#define IXGBE_TX_FLAGS_TSO (u32)(1 << 2)
-#define IXGBE_TX_FLAGS_IPV4 (u32)(1 << 3)
+#define IXGBE_TX_FLAGS_CSUM BIT(0)
+#define IXGBE_TX_FLAGS_VLAN BIT(1)
+#define IXGBE_TX_FLAGS_TSO BIT(2)
+#define IXGBE_TX_FLAGS_IPV4 BIT(3)
#define IXGBE_TX_FLAGS_VLAN_MASK 0xffff0000
#define IXGBE_TX_FLAGS_VLAN_PRIO_MASK 0x0000e000
#define IXGBE_TX_FLAGS_VLAN_SHIFT 16
@@ -403,13 +403,6 @@ struct ixgbevf_adapter {
u32 alloc_rx_page_failed;
u32 alloc_rx_buff_failed;
- /* Some features need tri-state capability,
- * thus the additional *_CAPABLE flags.
- */
- u32 flags;
-#define IXGBEVF_FLAG_RESET_REQUESTED (u32)(1)
-#define IXGBEVF_FLAG_QUEUE_RESET_REQUESTED (u32)(1 << 2)
-
struct msix_entry *msix_entries;
/* OS defined structs */
@@ -429,16 +422,6 @@ struct ixgbevf_adapter {
unsigned int tx_ring_count;
unsigned int rx_ring_count;
-#ifdef BP_EXTENDED_STATS
- u64 bp_rx_yields;
- u64 bp_rx_cleaned;
- u64 bp_rx_missed;
-
- u64 bp_tx_yields;
- u64 bp_tx_cleaned;
- u64 bp_tx_missed;
-#endif
-
u8 __iomem *io_addr; /* Mainly for iounmap use */
u32 link_speed;
bool link_up;
@@ -461,13 +444,19 @@ enum ixbgevf_state_t {
__IXGBEVF_REMOVING,
__IXGBEVF_SERVICE_SCHED,
__IXGBEVF_SERVICE_INITED,
+ __IXGBEVF_RESET_REQUESTED,
+ __IXGBEVF_QUEUE_RESET_REQUESTED,
};
enum ixgbevf_boards {
board_82599_vf,
+ board_82599_vf_hv,
board_X540_vf,
+ board_X540_vf_hv,
board_X550_vf,
+ board_X550_vf_hv,
board_X550EM_x_vf,
+ board_X550EM_x_vf_hv,
};
enum ixgbevf_xcast_modes {
@@ -482,6 +471,12 @@ extern const struct ixgbevf_info ixgbevf_X550_vf_info;
extern const struct ixgbevf_info ixgbevf_X550EM_x_vf_info;
extern const struct ixgbe_mbx_operations ixgbevf_mbx_ops;
+extern const struct ixgbevf_info ixgbevf_82599_vf_hv_info;
+extern const struct ixgbevf_info ixgbevf_X540_vf_hv_info;
+extern const struct ixgbevf_info ixgbevf_X550_vf_hv_info;
+extern const struct ixgbevf_info ixgbevf_X550EM_x_vf_hv_info;
+extern const struct ixgbe_mbx_operations ixgbevf_hv_mbx_ops;
+
/* needed by ethtool.c */
extern const char ixgbevf_driver_name[];
extern const char ixgbevf_driver_version[];
diff --git a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
index b0edae94d73d..acc24010cfe0 100644
--- a/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
+++ b/drivers/net/ethernet/intel/ixgbevf/ixgbevf_main.c
@@ -62,10 +62,14 @@ static char ixgbevf_copyright[] =
"Copyright (c) 2009 - 2015 Intel Corporation.";
static const struct ixgbevf_info *ixgbevf_info_tbl[] = {
- [board_82599_vf] = &ixgbevf_82599_vf_info,
- [board_X540_vf] = &ixgbevf_X540_vf_info,
- [board_X550_vf] = &ixgbevf_X550_vf_info,
- [board_X550EM_x_vf] = &ixgbevf_X550EM_x_vf_info,
+ [board_82599_vf] = &ixgbevf_82599_vf_info,
+ [board_82599_vf_hv] = &ixgbevf_82599_vf_hv_info,
+ [board_X540_vf] = &ixgbevf_X540_vf_info,
+ [board_X540_vf_hv] = &ixgbevf_X540_vf_hv_info,
+ [board_X550_vf] = &ixgbevf_X550_vf_info,
+ [board_X550_vf_hv] = &ixgbevf_X550_vf_hv_info,
+ [board_X550EM_x_vf] = &ixgbevf_X550EM_x_vf_info,
+ [board_X550EM_x_vf_hv] = &ixgbevf_X550EM_x_vf_hv_info,
};
/* ixgbevf_pci_tbl - PCI Device ID Table
@@ -78,9 +82,13 @@ static const struct ixgbevf_info *ixgbevf_info_tbl[] = {
*/
static const struct pci_device_id ixgbevf_pci_tbl[] = {
{PCI_VDEVICE(INTEL, IXGBE_DEV_ID_82599_VF), board_82599_vf },
+ {PCI_VDEVICE(INTEL, IXGBE_DEV_ID_82599_VF_HV), board_82599_vf_hv },
{PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X540_VF), board_X540_vf },
+ {PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X540_VF_HV), board_X540_vf_hv },
{PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X550_VF), board_X550_vf },
+ {PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X550_VF_HV), board_X550_vf_hv },
{PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X550EM_X_VF), board_X550EM_x_vf },
+ {PCI_VDEVICE(INTEL, IXGBE_DEV_ID_X550EM_X_VF_HV), board_X550EM_x_vf_hv},
/* required last entry */
{0, }
};
@@ -268,7 +276,7 @@ static void ixgbevf_tx_timeout_reset(struct ixgbevf_adapter *adapter)
{
/* Do the reset outside of interrupt context */
if (!test_bit(__IXGBEVF_DOWN, &adapter->state)) {
- adapter->flags |= IXGBEVF_FLAG_RESET_REQUESTED;
+ set_bit(__IXGBEVF_RESET_REQUESTED, &adapter->state);
ixgbevf_service_event_schedule(adapter);
}
}
@@ -288,9 +296,10 @@ static void ixgbevf_tx_timeout(struct net_device *netdev)
* ixgbevf_clean_tx_irq - Reclaim resources after transmit completes
* @q_vector: board private structure
* @tx_ring: tx ring to clean
+ * @napi_budget: Used to determine if we are in netpoll
**/
static bool ixgbevf_clean_tx_irq(struct ixgbevf_q_vector *q_vector,
- struct ixgbevf_ring *tx_ring)
+ struct ixgbevf_ring *tx_ring, int napi_budget)
{
struct ixgbevf_adapter *adapter = q_vector->adapter;
struct ixgbevf_tx_buffer *tx_buffer;
@@ -328,7 +337,7 @@ static bool ixgbevf_clean_tx_irq(struct ixgbevf_q_vector *q_vector,
total_packets += tx_buffer->gso_segs;
/* free the skb */
- dev_kfree_skb_any(tx_buffer->skb);
+ napi_consume_skb(tx_buffer->skb, napi_budget);
/* unmap skb header data */
dma_unmap_single(tx_ring->dev,
@@ -1013,8 +1022,10 @@ static int ixgbevf_poll(struct napi_struct *napi, int budget)
int per_ring_budget, work_done = 0;
bool clean_complete = true;
- ixgbevf_for_each_ring(ring, q_vector->tx)
- clean_complete &= ixgbevf_clean_tx_irq(q_vector, ring);
+ ixgbevf_for_each_ring(ring, q_vector->tx) {
+ if (!ixgbevf_clean_tx_irq(q_vector, ring, budget))
+ clean_complete = false;
+ }
if (budget <= 0)
return budget;
@@ -1035,7 +1046,8 @@ static int ixgbevf_poll(struct napi_struct *napi, int budget)
int cleaned = ixgbevf_clean_rx_irq(q_vector, ring,
per_ring_budget);
work_done += cleaned;
- clean_complete &= (cleaned < per_ring_budget);
+ if (cleaned >= per_ring_budget)
+ clean_complete = false;
}
#ifdef CONFIG_NET_RX_BUSY_POLL
@@ -1052,7 +1064,7 @@ static int ixgbevf_poll(struct napi_struct *napi, int budget)
if (!test_bit(__IXGBEVF_DOWN, &adapter->state) &&
!test_bit(__IXGBEVF_REMOVING, &adapter->state))
ixgbevf_irq_enable_queues(adapter,
- 1 << q_vector->v_idx);
+ BIT(q_vector->v_idx));
return 0;
}
@@ -1154,14 +1166,14 @@ static void ixgbevf_configure_msix(struct ixgbevf_adapter *adapter)
}
/* add q_vector eims value to global eims_enable_mask */
- adapter->eims_enable_mask |= 1 << v_idx;
+ adapter->eims_enable_mask |= BIT(v_idx);
ixgbevf_write_eitr(q_vector);
}
ixgbevf_set_ivar(adapter, -1, 1, v_idx);
/* setup eims_other and add value to global eims_enable_mask */
- adapter->eims_other = 1 << v_idx;
+ adapter->eims_other = BIT(v_idx);
adapter->eims_enable_mask |= adapter->eims_other;
}
@@ -1585,8 +1597,8 @@ static void ixgbevf_configure_tx_ring(struct ixgbevf_adapter *adapter,
txdctl |= (8 << 16); /* WTHRESH = 8 */
/* Setting PTHRESH to 32 both improves performance */
- txdctl |= (1 << 8) | /* HTHRESH = 1 */
- 32; /* PTHRESH = 32 */
+ txdctl |= (1u << 8) | /* HTHRESH = 1 */
+ 32; /* PTHRESH = 32 */
clear_bit(__IXGBEVF_HANG_CHECK_ARMED, &ring->state);
@@ -1642,7 +1654,7 @@ static void ixgbevf_setup_psrtype(struct ixgbevf_adapter *adapter)
IXGBE_PSRTYPE_L2HDR;
if (adapter->num_rx_queues > 1)
- psrtype |= 1 << 29;
+ psrtype |= BIT(29);
IXGBE_WRITE_REG(hw, IXGBE_VFPSRTYPE, psrtype);
}
@@ -1748,9 +1760,15 @@ static void ixgbevf_configure_rx_ring(struct ixgbevf_adapter *adapter,
IXGBE_WRITE_REG(hw, IXGBE_VFRDLEN(reg_idx),
ring->count * sizeof(union ixgbe_adv_rx_desc));
+#ifndef CONFIG_SPARC
/* enable relaxed ordering */
IXGBE_WRITE_REG(hw, IXGBE_VFDCA_RXCTRL(reg_idx),
IXGBE_DCA_RXCTRL_DESC_RRO_EN);
+#else
+ IXGBE_WRITE_REG(hw, IXGBE_VFDCA_RXCTRL(reg_idx),
+ IXGBE_DCA_RXCTRL_DESC_RRO_EN |
+ IXGBE_DCA_RXCTRL_DATA_WRO_EN);
+#endif
/* reset head and tail pointers */
IXGBE_WRITE_REG(hw, IXGBE_VFRDH(reg_idx), 0);
@@ -1791,7 +1809,7 @@ static void ixgbevf_configure_rx(struct ixgbevf_adapter *adapter)
ixgbevf_setup_vfmrqc(adapter);
/* notify the PF of our intent to use this size of frame */
- ixgbevf_rlpml_set_vf(hw, netdev->mtu + ETH_HLEN + ETH_FCS_LEN);
+ hw->mac.ops.set_rlpml(hw, netdev->mtu + ETH_HLEN + ETH_FCS_LEN);
/* Setup the HW Rx Head and Tail Descriptor Pointers and
* the Base and Length of the Rx Descriptor Ring
@@ -1904,7 +1922,7 @@ static void ixgbevf_set_rx_mode(struct net_device *netdev)
spin_lock_bh(&adapter->mbx_lock);
- hw->mac.ops.update_xcast_mode(hw, netdev, xcast_mode);
+ hw->mac.ops.update_xcast_mode(hw, xcast_mode);
/* reprogram multicast list */
hw->mac.ops.update_mc_addr_list(hw, netdev);
@@ -1984,7 +2002,7 @@ static int ixgbevf_configure_dcb(struct ixgbevf_adapter *adapter)
hw->mbx.timeout = 0;
/* wait for watchdog to come around and bail us out */
- adapter->flags |= IXGBEVF_FLAG_QUEUE_RESET_REQUESTED;
+ set_bit(__IXGBEVF_QUEUE_RESET_REQUESTED, &adapter->state);
}
return 0;
@@ -2052,7 +2070,7 @@ static void ixgbevf_negotiate_api(struct ixgbevf_adapter *adapter)
spin_lock_bh(&adapter->mbx_lock);
while (api[idx] != ixgbe_mbox_api_unknown) {
- err = ixgbevf_negotiate_api_version(hw, api[idx]);
+ err = hw->mac.ops.negotiate_api_version(hw, api[idx]);
if (!err)
break;
idx++;
@@ -2749,11 +2767,9 @@ static void ixgbevf_service_timer(unsigned long data)
static void ixgbevf_reset_subtask(struct ixgbevf_adapter *adapter)
{
- if (!(adapter->flags & IXGBEVF_FLAG_RESET_REQUESTED))
+ if (!test_and_clear_bit(__IXGBEVF_RESET_REQUESTED, &adapter->state))
return;
- adapter->flags &= ~IXGBEVF_FLAG_RESET_REQUESTED;
-
/* If we're already down or resetting, just bail */
if (test_bit(__IXGBEVF_DOWN, &adapter->state) ||
test_bit(__IXGBEVF_RESETTING, &adapter->state))
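
The hunks above replace an adapter->flags bitmask with set_bit()/test_and_clear_bit() on adapter->state, so a reset request can be posted from any context and is consumed atomically, exactly once. A minimal sketch of the pattern with hypothetical names, not driver code:

/* Hypothetical illustration: a producer that may run in interrupt context
 * posts a request with set_bit(); the service task consumes it with
 * test_and_clear_bit(), so no request is lost to a non-atomic
 * read-modify-write of a shared flags word.
 */
#include <linux/bitops.h>

enum { DEMO_RESET_REQUESTED, DEMO_QUEUE_RESET_REQUESTED };

static unsigned long demo_state;

static void demo_request_reset(void)		/* any context */
{
	set_bit(DEMO_RESET_REQUESTED, &demo_state);
}

static void demo_reset_subtask(void)		/* service task */
{
	if (!test_and_clear_bit(DEMO_RESET_REQUESTED, &demo_state))
		return;

	/* ... perform the reset; at most once per posted request ... */
}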
@@ -2795,7 +2811,7 @@ static void ixgbevf_check_hang_subtask(struct ixgbevf_adapter *adapter)
struct ixgbevf_q_vector *qv = adapter->q_vector[i];
if (qv->rx.ring || qv->tx.ring)
- eics |= 1 << i;
+ eics |= BIT(i);
}
/* Cause software interrupt to ensure rings are cleaned */
@@ -2821,7 +2837,7 @@ static void ixgbevf_watchdog_update_link(struct ixgbevf_adapter *adapter)
/* if check for link returns error we will need to reset */
if (err && time_after(jiffies, adapter->last_reset + (10 * HZ))) {
- adapter->flags |= IXGBEVF_FLAG_RESET_REQUESTED;
+ set_bit(__IXGBEVF_RESET_REQUESTED, &adapter->state);
link_up = false;
}
@@ -3222,11 +3238,10 @@ static void ixgbevf_queue_reset_subtask(struct ixgbevf_adapter *adapter)
{
struct net_device *dev = adapter->netdev;
- if (!(adapter->flags & IXGBEVF_FLAG_QUEUE_RESET_REQUESTED))
+ if (!test_and_clear_bit(__IXGBEVF_QUEUE_RESET_REQUESTED,
+ &adapter->state))
return;
- adapter->flags &= ~IXGBEVF_FLAG_QUEUE_RESET_REQUESTED;
-
/* if interface is down do nothing */
if (test_bit(__IXGBEVF_DOWN, &adapter->state) ||
test_bit(__IXGBEVF_RESETTING, &adapter->state))
@@ -3271,9 +3286,18 @@ static int ixgbevf_tso(struct ixgbevf_ring *tx_ring,
struct ixgbevf_tx_buffer *first,
u8 *hdr_len)
{
+ u32 vlan_macip_lens, type_tucmd, mss_l4len_idx;
struct sk_buff *skb = first->skb;
- u32 vlan_macip_lens, type_tucmd;
- u32 mss_l4len_idx, l4len;
+ union {
+ struct iphdr *v4;
+ struct ipv6hdr *v6;
+ unsigned char *hdr;
+ } ip;
+ union {
+ struct tcphdr *tcp;
+ unsigned char *hdr;
+ } l4;
+ u32 paylen, l4_offset;
int err;
if (skb->ip_summed != CHECKSUM_PARTIAL)
@@ -3286,49 +3310,53 @@ static int ixgbevf_tso(struct ixgbevf_ring *tx_ring,
if (err < 0)
return err;
+ ip.hdr = skb_network_header(skb);
+ l4.hdr = skb_checksum_start(skb);
+
/* ADV DTYP TUCMD MKRLOC/ISCSIHEDLEN */
type_tucmd = IXGBE_ADVTXD_TUCMD_L4T_TCP;
- if (first->protocol == htons(ETH_P_IP)) {
- struct iphdr *iph = ip_hdr(skb);
-
- iph->tot_len = 0;
- iph->check = 0;
- tcp_hdr(skb)->check = ~csum_tcpudp_magic(iph->saddr,
- iph->daddr, 0,
- IPPROTO_TCP,
- 0);
+ /* initialize outer IP header fields */
+ if (ip.v4->version == 4) {
+ /* IP header will have to cancel out any data that
+ * is not a part of the outer IP header
+ */
+ ip.v4->check = csum_fold(csum_add(lco_csum(skb),
+ csum_unfold(l4.tcp->check)));
type_tucmd |= IXGBE_ADVTXD_TUCMD_IPV4;
+
+ ip.v4->tot_len = 0;
first->tx_flags |= IXGBE_TX_FLAGS_TSO |
IXGBE_TX_FLAGS_CSUM |
IXGBE_TX_FLAGS_IPV4;
- } else if (skb_is_gso_v6(skb)) {
- ipv6_hdr(skb)->payload_len = 0;
- tcp_hdr(skb)->check =
- ~csum_ipv6_magic(&ipv6_hdr(skb)->saddr,
- &ipv6_hdr(skb)->daddr,
- 0, IPPROTO_TCP, 0);
+ } else {
+ ip.v6->payload_len = 0;
first->tx_flags |= IXGBE_TX_FLAGS_TSO |
IXGBE_TX_FLAGS_CSUM;
}
- /* compute header lengths */
- l4len = tcp_hdrlen(skb);
- *hdr_len += l4len;
- *hdr_len = skb_transport_offset(skb) + l4len;
+ /* determine offset of inner transport header */
+ l4_offset = l4.hdr - skb->data;
+
+ /* compute length of segmentation header */
+ *hdr_len = (l4.tcp->doff * 4) + l4_offset;
- /* update GSO size and bytecount with header size */
+ /* remove payload length from inner checksum */
+ paylen = skb->len - l4_offset;
+ csum_replace_by_diff(&l4.tcp->check, htonl(paylen));
+
+ /* update gso size and bytecount with header size */
first->gso_segs = skb_shinfo(skb)->gso_segs;
first->bytecount += (first->gso_segs - 1) * *hdr_len;
/* mss_l4len_id: use 1 as index for TSO */
- mss_l4len_idx = l4len << IXGBE_ADVTXD_L4LEN_SHIFT;
+ mss_l4len_idx = (*hdr_len - l4_offset) << IXGBE_ADVTXD_L4LEN_SHIFT;
mss_l4len_idx |= skb_shinfo(skb)->gso_size << IXGBE_ADVTXD_MSS_SHIFT;
- mss_l4len_idx |= 1 << IXGBE_ADVTXD_IDX_SHIFT;
+ mss_l4len_idx |= (1u << IXGBE_ADVTXD_IDX_SHIFT);
/* vlan_macip_lens: HEADLEN, MACLEN, VLAN tag */
- vlan_macip_lens = skb_network_header_len(skb);
- vlan_macip_lens |= skb_network_offset(skb) << IXGBE_ADVTXD_MACLEN_SHIFT;
+ vlan_macip_lens = l4.hdr - ip.hdr;
+ vlan_macip_lens |= (ip.hdr - skb->data) << IXGBE_ADVTXD_MACLEN_SHIFT;
vlan_macip_lens |= first->tx_flags & IXGBE_TX_FLAGS_VLAN_MASK;
ixgbevf_tx_ctxtdesc(tx_ring, vlan_macip_lens,
@@ -3337,76 +3365,55 @@ static int ixgbevf_tso(struct ixgbevf_ring *tx_ring,
return 1;
}
+static inline bool ixgbevf_ipv6_csum_is_sctp(struct sk_buff *skb)
+{
+ unsigned int offset = 0;
+
+ ipv6_find_hdr(skb, &offset, IPPROTO_SCTP, NULL, NULL);
+
+ return offset == skb_checksum_start_offset(skb);
+}
+
static void ixgbevf_tx_csum(struct ixgbevf_ring *tx_ring,
struct ixgbevf_tx_buffer *first)
{
struct sk_buff *skb = first->skb;
u32 vlan_macip_lens = 0;
- u32 mss_l4len_idx = 0;
u32 type_tucmd = 0;
- if (skb->ip_summed == CHECKSUM_PARTIAL) {
- u8 l4_hdr = 0;
- __be16 frag_off;
-
- switch (first->protocol) {
- case htons(ETH_P_IP):
- vlan_macip_lens |= skb_network_header_len(skb);
- type_tucmd |= IXGBE_ADVTXD_TUCMD_IPV4;
- l4_hdr = ip_hdr(skb)->protocol;
- break;
- case htons(ETH_P_IPV6):
- vlan_macip_lens |= skb_network_header_len(skb);
- l4_hdr = ipv6_hdr(skb)->nexthdr;
- if (likely(skb_network_header_len(skb) ==
- sizeof(struct ipv6hdr)))
- break;
- ipv6_skip_exthdr(skb, skb_network_offset(skb) +
- sizeof(struct ipv6hdr),
- &l4_hdr, &frag_off);
- if (unlikely(frag_off))
- l4_hdr = NEXTHDR_FRAGMENT;
- break;
- default:
- break;
- }
+ if (skb->ip_summed != CHECKSUM_PARTIAL)
+ goto no_csum;
- switch (l4_hdr) {
- case IPPROTO_TCP:
- type_tucmd |= IXGBE_ADVTXD_TUCMD_L4T_TCP;
- mss_l4len_idx = tcp_hdrlen(skb) <<
- IXGBE_ADVTXD_L4LEN_SHIFT;
- break;
- case IPPROTO_SCTP:
- type_tucmd |= IXGBE_ADVTXD_TUCMD_L4T_SCTP;
- mss_l4len_idx = sizeof(struct sctphdr) <<
- IXGBE_ADVTXD_L4LEN_SHIFT;
- break;
- case IPPROTO_UDP:
- mss_l4len_idx = sizeof(struct udphdr) <<
- IXGBE_ADVTXD_L4LEN_SHIFT;
+ switch (skb->csum_offset) {
+ case offsetof(struct tcphdr, check):
+ type_tucmd = IXGBE_ADVTXD_TUCMD_L4T_TCP;
+ /* fall through */
+ case offsetof(struct udphdr, check):
+ break;
+ case offsetof(struct sctphdr, checksum):
+ /* validate that this is actually an SCTP request */
+ if (((first->protocol == htons(ETH_P_IP)) &&
+ (ip_hdr(skb)->protocol == IPPROTO_SCTP)) ||
+ ((first->protocol == htons(ETH_P_IPV6)) &&
+ ixgbevf_ipv6_csum_is_sctp(skb))) {
+ type_tucmd = IXGBE_ADVTXD_TUCMD_L4T_SCTP;
break;
- default:
- if (unlikely(net_ratelimit())) {
- dev_warn(tx_ring->dev,
- "partial checksum, l3 proto=%x, l4 proto=%x\n",
- first->protocol, l4_hdr);
- }
- skb_checksum_help(skb);
- goto no_csum;
}
-
- /* update TX checksum flag */
- first->tx_flags |= IXGBE_TX_FLAGS_CSUM;
+ /* fall through */
+ default:
+ skb_checksum_help(skb);
+ goto no_csum;
}
-
+ /* update TX checksum flag */
+ first->tx_flags |= IXGBE_TX_FLAGS_CSUM;
+ vlan_macip_lens = skb_checksum_start_offset(skb) -
+ skb_network_offset(skb);
no_csum:
/* vlan_macip_lens: MACLEN, VLAN tag */
vlan_macip_lens |= skb_network_offset(skb) << IXGBE_ADVTXD_MACLEN_SHIFT;
vlan_macip_lens |= first->tx_flags & IXGBE_TX_FLAGS_VLAN_MASK;
- ixgbevf_tx_ctxtdesc(tx_ring, vlan_macip_lens,
- type_tucmd, mss_l4len_idx);
+ ixgbevf_tx_ctxtdesc(tx_ring, vlan_macip_lens, type_tucmd, 0);
}
static __le32 ixgbevf_tx_cmd_type(u32 tx_flags)
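
The reworked ixgbevf_tso() above no longer rebuilds the TCP pseudo-header checksum from the IP addresses; as its in-code comment notes, it takes the checksum the stack already seeded (which still covers the segment length) and cancels that length out with csum_replace_by_diff(). A stand-alone sketch of the underlying ones'-complement adjustment, simplified to 16-bit host arithmetic with made-up constants — not the kernel helpers themselves, which additionally handle the complemented storage of tcp->check and __wsum byte order:

/* Stand-alone sketch, not kernel code: removing a value from a 16-bit
 * ones'-complement sum by adding its complement.
 */
#include <stdint.h>
#include <stdio.h>

static uint16_t fold16(uint32_t sum)
{
	while (sum >> 16)
		sum = (sum & 0xffff) + (sum >> 16);
	return (uint16_t)sum;
}

int main(void)
{
	uint16_t pseudo = 0x1234;	/* pseudo-header sum without length */
	uint16_t paylen = 1448;		/* hypothetical TCP payload length */

	/* what the stack hands the driver: the sum still includes the length */
	uint16_t with_len = fold16((uint32_t)pseudo + paylen);

	/* subtract in ones'-complement arithmetic: add the complement */
	uint16_t without_len = fold16((uint32_t)with_len + (uint16_t)~paylen);

	printf("before: %#06x  after: %#06x\n", pseudo, without_len);
	return 0;	/* both values printed are identical */
}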
@@ -3442,7 +3449,7 @@ static void ixgbevf_tx_olinfo_status(union ixgbe_adv_tx_desc *tx_desc,
/* use index 1 context for TSO/FSO/FCOE */
if (tx_flags & IXGBE_TX_FLAGS_TSO)
- olinfo_status |= cpu_to_le32(1 << IXGBE_ADVTXD_IDX_SHIFT);
+ olinfo_status |= cpu_to_le32(1u << IXGBE_ADVTXD_IDX_SHIFT);
/* Check Context must be set if Tx switch is enabled, which it
* always is for case where virtual functions are running
@@ -3747,7 +3754,7 @@ static int ixgbevf_change_mtu(struct net_device *netdev, int new_mtu)
netdev->mtu = new_mtu;
/* notify the PF of our intent to use this size of frame */
- ixgbevf_rlpml_set_vf(hw, max_frame);
+ hw->mac.ops.set_rlpml(hw, max_frame);
return 0;
}
@@ -3890,6 +3897,40 @@ static struct rtnl_link_stats64 *ixgbevf_get_stats(struct net_device *netdev,
return stats;
}
+#define IXGBEVF_MAX_MAC_HDR_LEN 127
+#define IXGBEVF_MAX_NETWORK_HDR_LEN 511
+
+static netdev_features_t
+ixgbevf_features_check(struct sk_buff *skb, struct net_device *dev,
+ netdev_features_t features)
+{
+ unsigned int network_hdr_len, mac_hdr_len;
+
+ /* Make certain the headers can be described by a context descriptor */
+ mac_hdr_len = skb_network_header(skb) - skb->data;
+ if (unlikely(mac_hdr_len > IXGBEVF_MAX_MAC_HDR_LEN))
+ return features & ~(NETIF_F_HW_CSUM |
+ NETIF_F_SCTP_CRC |
+ NETIF_F_HW_VLAN_CTAG_TX |
+ NETIF_F_TSO |
+ NETIF_F_TSO6);
+
+ network_hdr_len = skb_checksum_start(skb) - skb_network_header(skb);
+ if (unlikely(network_hdr_len > IXGBEVF_MAX_NETWORK_HDR_LEN))
+ return features & ~(NETIF_F_HW_CSUM |
+ NETIF_F_SCTP_CRC |
+ NETIF_F_TSO |
+ NETIF_F_TSO6);
+
+ /* We can only support IPV4 TSO in tunnels if we can mangle the
+ * inner IP ID field, so strip TSO if MANGLEID is not supported.
+ */
+ if (skb->encapsulation && !(features & NETIF_F_TSO_MANGLEID))
+ features &= ~NETIF_F_TSO;
+
+ return features;
+}
+
static const struct net_device_ops ixgbevf_netdev_ops = {
.ndo_open = ixgbevf_open,
.ndo_stop = ixgbevf_close,
@@ -3908,7 +3949,7 @@ static const struct net_device_ops ixgbevf_netdev_ops = {
#ifdef CONFIG_NET_POLL_CONTROLLER
.ndo_poll_controller = ixgbevf_netpoll,
#endif
- .ndo_features_check = passthru_features_check,
+ .ndo_features_check = ixgbevf_features_check,
};
static void ixgbevf_assign_netdev_ops(struct net_device *dev)
@@ -4013,26 +4054,37 @@ static int ixgbevf_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
}
netdev->hw_features = NETIF_F_SG |
- NETIF_F_IP_CSUM |
- NETIF_F_IPV6_CSUM |
NETIF_F_TSO |
NETIF_F_TSO6 |
- NETIF_F_RXCSUM;
+ NETIF_F_RXCSUM |
+ NETIF_F_HW_CSUM |
+ NETIF_F_SCTP_CRC;
- netdev->features = netdev->hw_features |
- NETIF_F_HW_VLAN_CTAG_TX |
- NETIF_F_HW_VLAN_CTAG_RX |
- NETIF_F_HW_VLAN_CTAG_FILTER;
+#define IXGBEVF_GSO_PARTIAL_FEATURES (NETIF_F_GSO_GRE | \
+ NETIF_F_GSO_GRE_CSUM | \
+ NETIF_F_GSO_IPXIP4 | \
+ NETIF_F_GSO_IPXIP6 | \
+ NETIF_F_GSO_UDP_TUNNEL | \
+ NETIF_F_GSO_UDP_TUNNEL_CSUM)
- netdev->vlan_features |= NETIF_F_TSO |
- NETIF_F_TSO6 |
- NETIF_F_IP_CSUM |
- NETIF_F_IPV6_CSUM |
- NETIF_F_SG;
+ netdev->gso_partial_features = IXGBEVF_GSO_PARTIAL_FEATURES;
+ netdev->hw_features |= NETIF_F_GSO_PARTIAL |
+ IXGBEVF_GSO_PARTIAL_FEATURES;
+
+ netdev->features = netdev->hw_features;
if (pci_using_dac)
netdev->features |= NETIF_F_HIGHDMA;
+ netdev->vlan_features |= netdev->features | NETIF_F_TSO_MANGLEID;
+ netdev->mpls_features |= NETIF_F_HW_CSUM;
+ netdev->hw_enc_features |= netdev->vlan_features;
+
+ /* set this bit last since it cannot be part of vlan_features */
+ netdev->features |= NETIF_F_HW_VLAN_CTAG_FILTER |
+ NETIF_F_HW_VLAN_CTAG_RX |
+ NETIF_F_HW_VLAN_CTAG_TX;
+
netdev->priv_flags |= IFF_UNICAST_FLT;
if (IXGBE_REMOVED(hw->hw_addr)) {
diff --git a/drivers/net/ethernet/intel/ixgbevf/mbx.c b/drivers/net/ethernet/intel/ixgbevf/mbx.c
index dc68fea4894b..61a80da8b6f0 100644
--- a/drivers/net/ethernet/intel/ixgbevf/mbx.c
+++ b/drivers/net/ethernet/intel/ixgbevf/mbx.c
@@ -346,3 +346,14 @@ const struct ixgbe_mbx_operations ixgbevf_mbx_ops = {
.check_for_rst = ixgbevf_check_for_rst_vf,
};
+/* Mailbox operations when running on Hyper-V.
+ * On Hyper-V, PF/VF communication is not through the
+ * hardware mailbox; it goes through a software-mediated
+ * path instead.
+ * Most mailbox operations are therefore no-ops while
+ * running on Hyper-V.
+ */
+const struct ixgbe_mbx_operations ixgbevf_hv_mbx_ops = {
+ .init_params = ixgbevf_init_mbx_params_vf,
+ .check_for_rst = ixgbevf_check_for_rst_vf,
+};
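
The Hyper-V ops table above intentionally wires up only the operations that make sense without a hardware mailbox; the other callbacks in this series are either left unset or stubbed to return -EOPNOTSUPP (see the vf.c hunks below), so callers can degrade gracefully. A generic sketch of that pattern with hypothetical names, not driver code:

/* Illustrative pattern only: when a transport does not exist on a given
 * platform, its ops table either leaves a callback NULL (and callers check
 * before invoking) or points it at a stub that reports -EOPNOTSUPP.
 */
#include <errno.h>

struct demo_mbx_ops {
	int (*write)(void *ctx, const unsigned int *msg, unsigned int len);
	int (*check_for_rst)(void *ctx);
};

static int demo_write_unsupported(void *ctx, const unsigned int *msg,
				  unsigned int len)
{
	return -EOPNOTSUPP;	/* no hardware mailbox on this platform */
}

static int demo_check_for_rst(void *ctx)
{
	return 0;		/* reset detection handled by other means */
}

static const struct demo_mbx_ops demo_hv_mbx_ops = {
	.write		= demo_write_unsupported,
	.check_for_rst	= demo_check_for_rst,
};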
diff --git a/drivers/net/ethernet/intel/ixgbevf/vf.c b/drivers/net/ethernet/intel/ixgbevf/vf.c
index 4d613a4f2a7f..e670d3b19c3c 100644
--- a/drivers/net/ethernet/intel/ixgbevf/vf.c
+++ b/drivers/net/ethernet/intel/ixgbevf/vf.c
@@ -27,6 +27,12 @@
#include "vf.h"
#include "ixgbevf.h"
+/* On Hyper-V, a reset reads the permanent MAC address from this
+ * offset in the PCI config space; this is the mechanism Hyper-V
+ * uses for PF/VF communication.
+ */
+#define IXGBE_HV_RESET_OFFSET 0x201
+
/**
* ixgbevf_start_hw_vf - Prepare hardware for Tx/Rx
* @hw: pointer to hardware structure
@@ -126,6 +132,27 @@ static s32 ixgbevf_reset_hw_vf(struct ixgbe_hw *hw)
}
/**
+ * Hyper-V variant; the VF/PF communication is through the PCI
+ * config space.
+ */
+static s32 ixgbevf_hv_reset_hw_vf(struct ixgbe_hw *hw)
+{
+#if IS_ENABLED(CONFIG_PCI_MMCONFIG)
+ struct ixgbevf_adapter *adapter = hw->back;
+ int i;
+
+ for (i = 0; i < 6; i++)
+ pci_read_config_byte(adapter->pdev,
+ (i + IXGBE_HV_RESET_OFFSET),
+ &hw->mac.perm_addr[i]);
+ return 0;
+#else
+ pr_err("PCI_MMCONFIG needs to be enabled for Hyper-V\n");
+ return -EOPNOTSUPP;
+#endif
+}
+
+/**
* ixgbevf_stop_hw_vf - Generic stop Tx/Rx units
* @hw: pointer to hardware structure
*
@@ -258,6 +285,11 @@ static s32 ixgbevf_set_uc_addr_vf(struct ixgbe_hw *hw, u32 index, u8 *addr)
return ret_val;
}
+static s32 ixgbevf_hv_set_uc_addr_vf(struct ixgbe_hw *hw, u32 index, u8 *addr)
+{
+ return -EOPNOTSUPP;
+}
+
/**
* ixgbevf_get_reta_locked - get the RSS redirection table (RETA) contents.
* @adapter: pointer to the port handle
@@ -416,6 +448,26 @@ static s32 ixgbevf_set_rar_vf(struct ixgbe_hw *hw, u32 index, u8 *addr,
return ret_val;
}
+/**
+ * ixgbevf_hv_set_rar_vf - set device MAC address Hyper-V variant
+ * @hw: pointer to hardware structure
+ * @index: Receive address register to write
+ * @addr: Address to put into receive address register
+ * @vmdq: Unused in this implementation
+ *
+ * We don't really allow setting the device MAC address. However,
+ * if the address being set is the permanent MAC address we will
+ * permit that.
+ **/
+static s32 ixgbevf_hv_set_rar_vf(struct ixgbe_hw *hw, u32 index, u8 *addr,
+ u32 vmdq)
+{
+ if (ether_addr_equal(addr, hw->mac.perm_addr))
+ return 0;
+
+ return -EOPNOTSUPP;
+}
+
static void ixgbevf_write_msg_read_ack(struct ixgbe_hw *hw,
u32 *msg, u16 size)
{
@@ -473,15 +525,22 @@ static s32 ixgbevf_update_mc_addr_list_vf(struct ixgbe_hw *hw,
}
/**
+ * Hyper-V variant - just a stub.
+ */
+static s32 ixgbevf_hv_update_mc_addr_list_vf(struct ixgbe_hw *hw,
+ struct net_device *netdev)
+{
+ return -EOPNOTSUPP;
+}
+
+/**
* ixgbevf_update_xcast_mode - Update Multicast mode
* @hw: pointer to the HW structure
- * @netdev: pointer to net device structure
* @xcast_mode: new multicast mode
*
* Updates the Multicast Mode of VF.
**/
-static s32 ixgbevf_update_xcast_mode(struct ixgbe_hw *hw,
- struct net_device *netdev, int xcast_mode)
+static s32 ixgbevf_update_xcast_mode(struct ixgbe_hw *hw, int xcast_mode)
{
struct ixgbe_mbx_info *mbx = &hw->mbx;
u32 msgbuf[2];
@@ -513,6 +572,14 @@ static s32 ixgbevf_update_xcast_mode(struct ixgbe_hw *hw,
}
/**
+ * Hyper-V variant - just a stub.
+ */
+static s32 ixgbevf_hv_update_xcast_mode(struct ixgbe_hw *hw, int xcast_mode)
+{
+ return -EOPNOTSUPP;
+}
+
+/**
* ixgbevf_set_vfta_vf - Set/Unset VLAN filter table address
* @hw: pointer to the HW structure
* @vlan: 12 bit VLAN ID
@@ -551,6 +618,15 @@ mbx_err:
}
/**
+ * Hyper-V variant - just a stub.
+ */
+static s32 ixgbevf_hv_set_vfta_vf(struct ixgbe_hw *hw, u32 vlan, u32 vind,
+ bool vlan_on)
+{
+ return -EOPNOTSUPP;
+}
+
+/**
* ixgbevf_setup_mac_link_vf - Setup MAC link settings
* @hw: pointer to hardware structure
* @speed: Unused in this implementation
@@ -656,11 +732,72 @@ out:
}
/**
- * ixgbevf_rlpml_set_vf - Set the maximum receive packet length
+ * Hyper-V variant; there is no mailbox communication.
+ */
+static s32 ixgbevf_hv_check_mac_link_vf(struct ixgbe_hw *hw,
+ ixgbe_link_speed *speed,
+ bool *link_up,
+ bool autoneg_wait_to_complete)
+{
+ struct ixgbe_mbx_info *mbx = &hw->mbx;
+ struct ixgbe_mac_info *mac = &hw->mac;
+ u32 links_reg;
+
+ /* If we were hit with a reset drop the link */
+ if (!mbx->ops.check_for_rst(hw) || !mbx->timeout)
+ mac->get_link_status = true;
+
+ if (!mac->get_link_status)
+ goto out;
+
+ /* if link status is down no point in checking to see if pf is up */
+ links_reg = IXGBE_READ_REG(hw, IXGBE_VFLINKS);
+ if (!(links_reg & IXGBE_LINKS_UP))
+ goto out;
+
+ /* for SFP+ modules and DA cables on 82599 it can take up to 500usecs
+ * before the link status is correct
+ */
+ if (mac->type == ixgbe_mac_82599_vf) {
+ int i;
+
+ for (i = 0; i < 5; i++) {
+ udelay(100);
+ links_reg = IXGBE_READ_REG(hw, IXGBE_VFLINKS);
+
+ if (!(links_reg & IXGBE_LINKS_UP))
+ goto out;
+ }
+ }
+
+ switch (links_reg & IXGBE_LINKS_SPEED_82599) {
+ case IXGBE_LINKS_SPEED_10G_82599:
+ *speed = IXGBE_LINK_SPEED_10GB_FULL;
+ break;
+ case IXGBE_LINKS_SPEED_1G_82599:
+ *speed = IXGBE_LINK_SPEED_1GB_FULL;
+ break;
+ case IXGBE_LINKS_SPEED_100_82599:
+ *speed = IXGBE_LINK_SPEED_100_FULL;
+ break;
+ }
+
+ /* if we passed all the tests above then the link is up and we no
+ * longer need to check for link
+ */
+ mac->get_link_status = false;
+
+out:
+ *link_up = !mac->get_link_status;
+ return 0;
+}
+
+/**
+ * ixgbevf_set_rlpml_vf - Set the maximum receive packet length
* @hw: pointer to the HW structure
* @max_size: value to assign to max frame size
**/
-void ixgbevf_rlpml_set_vf(struct ixgbe_hw *hw, u16 max_size)
+static void ixgbevf_set_rlpml_vf(struct ixgbe_hw *hw, u16 max_size)
{
u32 msgbuf[2];
@@ -670,11 +807,30 @@ void ixgbevf_rlpml_set_vf(struct ixgbe_hw *hw, u16 max_size)
}
/**
- * ixgbevf_negotiate_api_version - Negotiate supported API version
+ * ixgbevf_hv_set_rlpml_vf - Set the maximum receive packet length
+ * @hw: pointer to the HW structure
+ * @max_size: value to assign to max frame size
+ * Hyper-V variant.
+ **/
+static void ixgbevf_hv_set_rlpml_vf(struct ixgbe_hw *hw, u16 max_size)
+{
+ u32 reg;
+
+ /* If we are on Hyper-V, we implement this functionality
+ * differently.
+ */
+ reg = IXGBE_READ_REG(hw, IXGBE_VFRXDCTL(0));
+ /* CRC == 4 */
+ reg |= ((max_size + 4) | IXGBE_RXDCTL_RLPML_EN);
+ IXGBE_WRITE_REG(hw, IXGBE_VFRXDCTL(0), reg);
+}
+
+/**
+ * ixgbevf_negotiate_api_version_vf - Negotiate supported API version
* @hw: pointer to the HW structure
* @api: integer containing requested API version
**/
-int ixgbevf_negotiate_api_version(struct ixgbe_hw *hw, int api)
+static int ixgbevf_negotiate_api_version_vf(struct ixgbe_hw *hw, int api)
{
int err;
u32 msg[3];
@@ -703,6 +859,21 @@ int ixgbevf_negotiate_api_version(struct ixgbe_hw *hw, int api)
return err;
}
+/**
+ * ixgbevf_hv_negotiate_api_version_vf - Negotiate supported API version
+ * @hw: pointer to the HW structure
+ * @api: integer containing requested API version
+ * Hyper-V version - only ixgbe_mbox_api_10 supported.
+ **/
+static int ixgbevf_hv_negotiate_api_version_vf(struct ixgbe_hw *hw, int api)
+{
+ /* Hyper-V only supports api version ixgbe_mbox_api_10 */
+ if (api != ixgbe_mbox_api_10)
+ return IXGBE_ERR_INVALID_ARGUMENT;
+
+ return 0;
+}
+
int ixgbevf_get_queues(struct ixgbe_hw *hw, unsigned int *num_tcs,
unsigned int *default_tc)
{
@@ -769,11 +940,30 @@ static const struct ixgbe_mac_operations ixgbevf_mac_ops = {
.stop_adapter = ixgbevf_stop_hw_vf,
.setup_link = ixgbevf_setup_mac_link_vf,
.check_link = ixgbevf_check_mac_link_vf,
+ .negotiate_api_version = ixgbevf_negotiate_api_version_vf,
.set_rar = ixgbevf_set_rar_vf,
.update_mc_addr_list = ixgbevf_update_mc_addr_list_vf,
.update_xcast_mode = ixgbevf_update_xcast_mode,
.set_uc_addr = ixgbevf_set_uc_addr_vf,
.set_vfta = ixgbevf_set_vfta_vf,
+ .set_rlpml = ixgbevf_set_rlpml_vf,
+};
+
+static const struct ixgbe_mac_operations ixgbevf_hv_mac_ops = {
+ .init_hw = ixgbevf_init_hw_vf,
+ .reset_hw = ixgbevf_hv_reset_hw_vf,
+ .start_hw = ixgbevf_start_hw_vf,
+ .get_mac_addr = ixgbevf_get_mac_addr_vf,
+ .stop_adapter = ixgbevf_stop_hw_vf,
+ .setup_link = ixgbevf_setup_mac_link_vf,
+ .check_link = ixgbevf_hv_check_mac_link_vf,
+ .negotiate_api_version = ixgbevf_hv_negotiate_api_version_vf,
+ .set_rar = ixgbevf_hv_set_rar_vf,
+ .update_mc_addr_list = ixgbevf_hv_update_mc_addr_list_vf,
+ .update_xcast_mode = ixgbevf_hv_update_xcast_mode,
+ .set_uc_addr = ixgbevf_hv_set_uc_addr_vf,
+ .set_vfta = ixgbevf_hv_set_vfta_vf,
+ .set_rlpml = ixgbevf_hv_set_rlpml_vf,
};
const struct ixgbevf_info ixgbevf_82599_vf_info = {
@@ -781,17 +971,37 @@ const struct ixgbevf_info ixgbevf_82599_vf_info = {
.mac_ops = &ixgbevf_mac_ops,
};
+const struct ixgbevf_info ixgbevf_82599_vf_hv_info = {
+ .mac = ixgbe_mac_82599_vf,
+ .mac_ops = &ixgbevf_hv_mac_ops,
+};
+
const struct ixgbevf_info ixgbevf_X540_vf_info = {
.mac = ixgbe_mac_X540_vf,
.mac_ops = &ixgbevf_mac_ops,
};
+const struct ixgbevf_info ixgbevf_X540_vf_hv_info = {
+ .mac = ixgbe_mac_X540_vf,
+ .mac_ops = &ixgbevf_hv_mac_ops,
+};
+
const struct ixgbevf_info ixgbevf_X550_vf_info = {
.mac = ixgbe_mac_X550_vf,
.mac_ops = &ixgbevf_mac_ops,
};
+const struct ixgbevf_info ixgbevf_X550_vf_hv_info = {
+ .mac = ixgbe_mac_X550_vf,
+ .mac_ops = &ixgbevf_hv_mac_ops,
+};
+
const struct ixgbevf_info ixgbevf_X550EM_x_vf_info = {
.mac = ixgbe_mac_X550EM_x_vf,
.mac_ops = &ixgbevf_mac_ops,
};
+
+const struct ixgbevf_info ixgbevf_X550EM_x_vf_hv_info = {
+ .mac = ixgbe_mac_X550EM_x_vf,
+ .mac_ops = &ixgbevf_hv_mac_ops,
+};
diff --git a/drivers/net/ethernet/intel/ixgbevf/vf.h b/drivers/net/ethernet/intel/ixgbevf/vf.h
index ef9f7736b4dc..2cac610f32ba 100644
--- a/drivers/net/ethernet/intel/ixgbevf/vf.h
+++ b/drivers/net/ethernet/intel/ixgbevf/vf.h
@@ -51,6 +51,7 @@ struct ixgbe_mac_operations {
s32 (*get_mac_addr)(struct ixgbe_hw *, u8 *);
s32 (*stop_adapter)(struct ixgbe_hw *);
s32 (*get_bus_info)(struct ixgbe_hw *);
+ s32 (*negotiate_api_version)(struct ixgbe_hw *hw, int api);
/* Link */
s32 (*setup_link)(struct ixgbe_hw *, ixgbe_link_speed, bool, bool);
@@ -63,11 +64,12 @@ struct ixgbe_mac_operations {
s32 (*set_uc_addr)(struct ixgbe_hw *, u32, u8 *);
s32 (*init_rx_addrs)(struct ixgbe_hw *);
s32 (*update_mc_addr_list)(struct ixgbe_hw *, struct net_device *);
- s32 (*update_xcast_mode)(struct ixgbe_hw *, struct net_device *, int);
+ s32 (*update_xcast_mode)(struct ixgbe_hw *, int);
s32 (*enable_mc)(struct ixgbe_hw *);
s32 (*disable_mc)(struct ixgbe_hw *);
s32 (*clear_vfta)(struct ixgbe_hw *);
s32 (*set_vfta)(struct ixgbe_hw *, u32, u32, bool);
+ void (*set_rlpml)(struct ixgbe_hw *, u16);
};
enum ixgbe_mac_type {
@@ -207,8 +209,6 @@ static inline u32 ixgbe_read_reg_array(struct ixgbe_hw *hw, u32 reg,
#define IXGBE_READ_REG_ARRAY(h, r, o) ixgbe_read_reg_array(h, r, o)
-void ixgbevf_rlpml_set_vf(struct ixgbe_hw *hw, u16 max_size);
-int ixgbevf_negotiate_api_version(struct ixgbe_hw *hw, int api);
int ixgbevf_get_queues(struct ixgbe_hw *hw, unsigned int *num_tcs,
unsigned int *default_tc);
int ixgbevf_get_reta_locked(struct ixgbe_hw *hw, u32 *reta, int num_rx_queues);
diff --git a/drivers/net/ethernet/jme.c b/drivers/net/ethernet/jme.c
index 3ddf657bc10b..836ebd8ee768 100644
--- a/drivers/net/ethernet/jme.c
+++ b/drivers/net/ethernet/jme.c
@@ -222,7 +222,7 @@ jme_clear_ghc_reset(struct jme_adapter *jme)
jwrite32f(jme, JME_GHC, jme->reg_ghc);
}
-static inline void
+static void
jme_reset_mac_processor(struct jme_adapter *jme)
{
static const u32 mask[WAKEUP_FRAME_MASK_DWNR] = {0, 0, 0, 0};
diff --git a/drivers/net/ethernet/korina.c b/drivers/net/ethernet/korina.c
index d74f5f4e5782..1799fe1415df 100644
--- a/drivers/net/ethernet/korina.c
+++ b/drivers/net/ethernet/korina.c
@@ -152,7 +152,7 @@ static inline void korina_abort_dma(struct net_device *dev,
writel(0x10, &ch->dmac);
while (!(readl(&ch->dmas) & DMA_STAT_HALT))
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
writel(0, &ch->dmas);
}
@@ -283,7 +283,7 @@ static int korina_send_packet(struct sk_buff *skb, struct net_device *dev)
}
dma_cache_wback((u32) td, sizeof(*td));
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
spin_unlock_irqrestore(&lp->lock, flags);
return NETDEV_TX_OK;
@@ -622,7 +622,7 @@ korina_tx_dma_interrupt(int irq, void *dev_id)
&(lp->tx_dma_regs->dmandptr));
lp->tx_chain_status = desc_empty;
lp->tx_chain_head = lp->tx_chain_tail;
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
}
if (dmas & DMA_STAT_ERR)
printk(KERN_ERR "%s: DMA error\n", dev->name);
@@ -811,7 +811,7 @@ static int korina_init(struct net_device *dev)
/* reset ethernet logic */
writel(0, &lp->eth_regs->ethintfc);
while ((readl(&lp->eth_regs->ethintfc) & ETH_INT_FC_RIP))
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
/* Enable Ethernet Interface */
writel(ETH_INT_FC_EN, &lp->eth_regs->ethintfc);
diff --git a/drivers/net/ethernet/lantiq_etop.c b/drivers/net/ethernet/lantiq_etop.c
index b630ef1e9646..dc82b1b19574 100644
--- a/drivers/net/ethernet/lantiq_etop.c
+++ b/drivers/net/ethernet/lantiq_etop.c
@@ -519,7 +519,7 @@ ltq_etop_tx(struct sk_buff *skb, struct net_device *dev)
byte_offset = CPHYSADDR(skb->data) % 16;
ch->skb[ch->dma.desc] = skb;
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
spin_lock_irqsave(&priv->lock, flags);
desc->addr = ((unsigned int) dma_map_single(NULL, skb->data, len,
@@ -657,7 +657,7 @@ ltq_etop_tx_timeout(struct net_device *dev)
err = ltq_etop_hw_init(dev);
if (err)
goto err_hw;
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
netif_wake_queue(dev);
return;
diff --git a/drivers/net/ethernet/marvell/Kconfig b/drivers/net/ethernet/marvell/Kconfig
index b5c6d42daa12..2664827ddecd 100644
--- a/drivers/net/ethernet/marvell/Kconfig
+++ b/drivers/net/ethernet/marvell/Kconfig
@@ -68,7 +68,7 @@ config MVNETA
config MVNETA_BM
tristate
- default y if MVNETA=y && MVNETA_BM_ENABLE
+ default y if MVNETA=y && MVNETA_BM_ENABLE!=n
default MVNETA_BM_ENABLE
select HWBM
help
diff --git a/drivers/net/ethernet/marvell/pxa168_eth.c b/drivers/net/ethernet/marvell/pxa168_eth.c
index c442f6ad15ff..54d5154ac0f8 100644
--- a/drivers/net/ethernet/marvell/pxa168_eth.c
+++ b/drivers/net/ethernet/marvell/pxa168_eth.c
@@ -286,12 +286,12 @@ static int pxa168_eth_stop(struct net_device *dev);
static inline u32 rdl(struct pxa168_eth_private *pep, int offset)
{
- return readl(pep->base + offset);
+ return readl_relaxed(pep->base + offset);
}
static inline void wrl(struct pxa168_eth_private *pep, int offset, u32 data)
{
- writel(data, pep->base + offset);
+ writel_relaxed(data, pep->base + offset);
}
static void abort_dma(struct pxa168_eth_private *pep)
@@ -342,9 +342,9 @@ static void rxq_refill(struct net_device *dev)
pep->rx_skb[used_rx_desc] = skb;
/* Return the descriptor to DMA ownership */
- wmb();
+ dma_wmb();
p_used_rx_desc->cmd_sts = BUF_OWNED_BY_DMA | RX_EN_INT;
- wmb();
+ dma_wmb();
/* Move the used descriptor pointer to the next descriptor */
pep->rx_used_desc_q = (used_rx_desc + 1) % pep->rx_ring_size;
@@ -794,7 +794,7 @@ static int rxq_process(struct net_device *dev, int budget)
rx_used_desc = pep->rx_used_desc_q;
rx_desc = &pep->p_rx_desc_area[rx_curr_desc];
cmd_sts = rx_desc->cmd_sts;
- rmb();
+ dma_rmb();
if (cmd_sts & (BUF_OWNED_BY_DMA))
break;
skb = pep->rx_skb[rx_curr_desc];
@@ -981,8 +981,6 @@ static int pxa168_init_phy(struct net_device *dev)
pep->phy = mdiobus_scan(pep->smi_bus, pep->phy_addr);
if (IS_ERR(pep->phy))
return PTR_ERR(pep->phy);
- if (!pep->phy)
- return -ENODEV;
err = phy_connect_direct(dev, pep->phy, pxa168_eth_adjust_link,
pep->phy_intf);
@@ -1289,7 +1287,7 @@ static int pxa168_eth_start_xmit(struct sk_buff *skb, struct net_device *dev)
skb_tx_timestamp(skb);
- wmb();
+ dma_wmb();
desc->cmd_sts = BUF_OWNED_BY_DMA | TX_GEN_CRC | TX_FIRST_DESC |
TX_ZERO_PADDING | TX_LAST_DESC | TX_EN_INT;
wmb();
@@ -1297,7 +1295,7 @@ static int pxa168_eth_start_xmit(struct sk_buff *skb, struct net_device *dev)
stats->tx_bytes += length;
stats->tx_packets++;
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
if (pep->tx_ring_size - pep->tx_desc_count <= 1) {
/* We handled the current skb, but now we are out of space.*/
netif_stop_queue(dev);
diff --git a/drivers/net/ethernet/marvell/sky2.c b/drivers/net/ethernet/marvell/sky2.c
index ec0a22119e09..467138b423d3 100644
--- a/drivers/net/ethernet/marvell/sky2.c
+++ b/drivers/net/ethernet/marvell/sky2.c
@@ -2418,7 +2418,7 @@ static int sky2_change_mtu(struct net_device *dev, int new_mtu)
sky2_write32(hw, B0_IMSK, 0);
sky2_read32(hw, B0_IMSK);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
napi_disable(&hw->napi);
netif_tx_disable(dev);
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.c b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
index e0b68afea56e..c984462fad2a 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -536,7 +536,6 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev,
struct mtk_eth *eth = mac->hw;
struct mtk_tx_dma *itxd, *txd;
struct mtk_tx_buf *tx_buf;
- unsigned long flags;
dma_addr_t mapped_addr;
unsigned int nr_frags;
int i, n_desc = 1;
@@ -568,11 +567,6 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev,
if (unlikely(dma_mapping_error(&dev->dev, mapped_addr)))
return -ENOMEM;
- /* normally we can rely on the stack not calling this more than once,
- * however we have 2 queues running ont he same ring so we need to lock
- * the ring access
- */
- spin_lock_irqsave(&eth->page_lock, flags);
WRITE_ONCE(itxd->txd1, mapped_addr);
tx_buf->flags |= MTK_TX_FLAGS_SINGLE0;
dma_unmap_addr_set(tx_buf, dma_addr0, mapped_addr);
@@ -609,8 +603,7 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev,
WRITE_ONCE(txd->txd1, mapped_addr);
WRITE_ONCE(txd->txd3, (TX_DMA_SWC |
TX_DMA_PLEN0(frag_map_size) |
- last_frag * TX_DMA_LS0) |
- mac->id);
+ last_frag * TX_DMA_LS0));
WRITE_ONCE(txd->txd4, 0);
tx_buf->skb = (struct sk_buff *)MTK_DMA_DUMMY_DESC;
@@ -632,8 +625,6 @@ static int mtk_tx_map(struct sk_buff *skb, struct net_device *dev,
WRITE_ONCE(itxd->txd3, (TX_DMA_SWC | TX_DMA_PLEN0(skb_headlen(skb)) |
(!nr_frags * TX_DMA_LS0)));
- spin_unlock_irqrestore(&eth->page_lock, flags);
-
netdev_sent_queue(dev, skb->len);
skb_tx_timestamp(skb);
@@ -661,8 +652,6 @@ err_dma:
itxd = mtk_qdma_phys_to_virt(ring, itxd->txd2);
} while (itxd != txd);
- spin_unlock_irqrestore(&eth->page_lock, flags);
-
return -ENOMEM;
}
@@ -681,7 +670,29 @@ static inline int mtk_cal_txd_req(struct sk_buff *skb)
nfrags += skb_shinfo(skb)->nr_frags;
}
- return DIV_ROUND_UP(nfrags, 2);
+ return nfrags;
+}
+
+static void mtk_wake_queue(struct mtk_eth *eth)
+{
+ int i;
+
+ for (i = 0; i < MTK_MAC_COUNT; i++) {
+ if (!eth->netdev[i])
+ continue;
+ netif_wake_queue(eth->netdev[i]);
+ }
+}
+
+static void mtk_stop_queue(struct mtk_eth *eth)
+{
+ int i;
+
+ for (i = 0; i < MTK_MAC_COUNT; i++) {
+ if (!eth->netdev[i])
+ continue;
+ netif_stop_queue(eth->netdev[i]);
+ }
}
static int mtk_start_xmit(struct sk_buff *skb, struct net_device *dev)
@@ -690,14 +701,22 @@ static int mtk_start_xmit(struct sk_buff *skb, struct net_device *dev)
struct mtk_eth *eth = mac->hw;
struct mtk_tx_ring *ring = &eth->tx_ring;
struct net_device_stats *stats = &dev->stats;
+ unsigned long flags;
bool gso = false;
int tx_num;
+ /* normally we can rely on the stack not calling this more than once;
+ * however, we have two queues running on the same ring, so we need
+ * to lock the ring access
+ */
+ spin_lock_irqsave(&eth->page_lock, flags);
+
tx_num = mtk_cal_txd_req(skb);
if (unlikely(atomic_read(&ring->free_count) <= tx_num)) {
- netif_stop_queue(dev);
+ mtk_stop_queue(eth);
netif_err(eth, tx_queued, dev,
"Tx Ring full when queue awake!\n");
+ spin_unlock_irqrestore(&eth->page_lock, flags);
return NETDEV_TX_BUSY;
}
@@ -720,15 +739,17 @@ static int mtk_start_xmit(struct sk_buff *skb, struct net_device *dev)
goto drop;
if (unlikely(atomic_read(&ring->free_count) <= ring->thresh)) {
- netif_stop_queue(dev);
+ mtk_stop_queue(eth);
if (unlikely(atomic_read(&ring->free_count) >
ring->thresh))
- netif_wake_queue(dev);
+ mtk_wake_queue(eth);
}
+ spin_unlock_irqrestore(&eth->page_lock, flags);
return NETDEV_TX_OK;
drop:
+ spin_unlock_irqrestore(&eth->page_lock, flags);
stats->tx_dropped++;
dev_kfree_skb(skb);
return NETDEV_TX_OK;
@@ -897,13 +918,8 @@ static int mtk_poll_tx(struct mtk_eth *eth, int budget, bool *tx_again)
if (!total)
return 0;
- for (i = 0; i < MTK_MAC_COUNT; i++) {
- if (!eth->netdev[i] ||
- unlikely(!netif_queue_stopped(eth->netdev[i])))
- continue;
- if (atomic_read(&ring->free_count) > ring->thresh)
- netif_wake_queue(eth->netdev[i]);
- }
+ if (atomic_read(&ring->free_count) > ring->thresh)
+ mtk_wake_queue(eth);
return total;
}
@@ -1176,7 +1192,7 @@ static void mtk_tx_timeout(struct net_device *dev)
eth->netdev[mac->id]->stats.tx_errors++;
netif_err(eth, tx_err, dev,
"transmit timed out\n");
- schedule_work(&mac->pending_work);
+ schedule_work(&eth->pending_work);
}
static irqreturn_t mtk_handle_irq(int irq, void *_eth)
@@ -1413,19 +1429,30 @@ static int mtk_do_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
static void mtk_pending_work(struct work_struct *work)
{
- struct mtk_mac *mac = container_of(work, struct mtk_mac, pending_work);
- struct mtk_eth *eth = mac->hw;
- struct net_device *dev = eth->netdev[mac->id];
- int err;
+ struct mtk_eth *eth = container_of(work, struct mtk_eth, pending_work);
+ int err, i;
+ unsigned long restart = 0;
rtnl_lock();
- mtk_stop(dev);
- err = mtk_open(dev);
- if (err) {
- netif_alert(eth, ifup, dev,
- "Driver up/down cycle failed, closing device.\n");
- dev_close(dev);
+ /* stop all devices to make sure that dma is properly shut down */
+ for (i = 0; i < MTK_MAC_COUNT; i++) {
+ if (!eth->netdev[i])
+ continue;
+ mtk_stop(eth->netdev[i]);
+ __set_bit(i, &restart);
+ }
+
+ /* restart DMA and enable IRQs */
+ for (i = 0; i < MTK_MAC_COUNT; i++) {
+ if (!test_bit(i, &restart))
+ continue;
+ err = mtk_open(eth->netdev[i]);
+ if (err) {
+ netif_alert(eth, ifup, eth->netdev[i],
+ "Driver up/down cycle failed, closing device.\n");
+ dev_close(eth->netdev[i]);
+ }
}
rtnl_unlock();
}
@@ -1435,15 +1462,13 @@ static int mtk_cleanup(struct mtk_eth *eth)
int i;
for (i = 0; i < MTK_MAC_COUNT; i++) {
- struct mtk_mac *mac = netdev_priv(eth->netdev[i]);
-
if (!eth->netdev[i])
continue;
unregister_netdev(eth->netdev[i]);
free_netdev(eth->netdev[i]);
- cancel_work_sync(&mac->pending_work);
}
+ cancel_work_sync(&eth->pending_work);
return 0;
}
@@ -1631,7 +1656,6 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
mac->id = id;
mac->hw = eth;
mac->of_node = np;
- INIT_WORK(&mac->pending_work, mtk_pending_work);
mac->hw_stats = devm_kzalloc(eth->dev,
sizeof(*mac->hw_stats),
@@ -1645,6 +1669,7 @@ static int mtk_add_mac(struct mtk_eth *eth, struct device_node *np)
mac->hw_stats->reg_offset = id * MTK_STAT_OFFSET;
SET_NETDEV_DEV(eth->netdev[id], eth->dev);
+ eth->netdev[id]->watchdog_timeo = HZ;
eth->netdev[id]->netdev_ops = &mtk_netdev_ops;
eth->netdev[id]->base_addr = (unsigned long)eth->base;
eth->netdev[id]->vlan_features = MTK_HW_FEATURES &
@@ -1678,10 +1703,6 @@ static int mtk_probe(struct platform_device *pdev)
struct mtk_eth *eth;
int err;
- err = device_reset(&pdev->dev);
- if (err)
- return err;
-
match = of_match_device(of_mtk_match, &pdev->dev);
soc = (struct mtk_soc_data *)match->data;
@@ -1736,6 +1757,7 @@ static int mtk_probe(struct platform_device *pdev)
eth->dev = &pdev->dev;
eth->msg_enable = netif_msg_init(mtk_msg_level, MTK_DEFAULT_MSG_ENABLE);
+ INIT_WORK(&eth->pending_work, mtk_pending_work);
err = mtk_hw_init(eth);
if (err)
diff --git a/drivers/net/ethernet/mediatek/mtk_eth_soc.h b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
index 48a5292c8ed8..eed626d56ea4 100644
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.h
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.h
@@ -363,6 +363,7 @@ struct mtk_rx_ring {
* @clk_gp1: The gmac1 clock
* @clk_gp2: The gmac2 clock
* @mii_bus: If there is a bus we need to create an instance for it
+ * @pending_work: The workqueue used to reset the dma ring
*/
struct mtk_eth {
@@ -389,6 +390,7 @@ struct mtk_eth {
struct clk *clk_gp1;
struct clk *clk_gp2;
struct mii_bus *mii_bus;
+ struct work_struct pending_work;
};
/* struct mtk_mac - the structure that holds the info about the MACs of the
@@ -398,7 +400,6 @@ struct mtk_eth {
* @hw: Backpointer to our main datastruture
* @hw_stats: Packet statistics counter
* @phy_dev: The attached PHY if available
- * @pending_work: The workqueue used to reset the dma ring
*/
struct mtk_mac {
int id;
@@ -406,7 +407,6 @@ struct mtk_mac {
struct mtk_eth *hw;
struct mtk_hw_stats *hw_stats;
struct phy_device *phy_dev;
- struct work_struct pending_work;
};
/* the struct describing the SoC. these are declared in the soc_xyz.c files */
diff --git a/drivers/net/ethernet/mellanox/mlx4/alloc.c b/drivers/net/ethernet/mellanox/mlx4/alloc.c
index 0c51c69f802f..249a4584401a 100644
--- a/drivers/net/ethernet/mellanox/mlx4/alloc.c
+++ b/drivers/net/ethernet/mellanox/mlx4/alloc.c
@@ -576,41 +576,48 @@ out:
return res;
}
-/*
- * Handling for queue buffers -- we allocate a bunch of memory and
- * register it in a memory region at HCA virtual address 0. If the
- * requested size is > max_direct, we split the allocation into
- * multiple pages, so we don't require too much contiguous memory.
- */
-int mlx4_buf_alloc(struct mlx4_dev *dev, int size, int max_direct,
- struct mlx4_buf *buf, gfp_t gfp)
+static int mlx4_buf_direct_alloc(struct mlx4_dev *dev, int size,
+ struct mlx4_buf *buf, gfp_t gfp)
{
dma_addr_t t;
- if (size <= max_direct) {
- buf->nbufs = 1;
- buf->npages = 1;
- buf->page_shift = get_order(size) + PAGE_SHIFT;
- buf->direct.buf = dma_alloc_coherent(&dev->persist->pdev->dev,
- size, &t, gfp);
- if (!buf->direct.buf)
- return -ENOMEM;
+ buf->nbufs = 1;
+ buf->npages = 1;
+ buf->page_shift = get_order(size) + PAGE_SHIFT;
+ buf->direct.buf =
+ dma_zalloc_coherent(&dev->persist->pdev->dev,
+ size, &t, gfp);
+ if (!buf->direct.buf)
+ return -ENOMEM;
- buf->direct.map = t;
+ buf->direct.map = t;
- while (t & ((1 << buf->page_shift) - 1)) {
- --buf->page_shift;
- buf->npages *= 2;
- }
+ while (t & ((1 << buf->page_shift) - 1)) {
+ --buf->page_shift;
+ buf->npages *= 2;
+ }
- memset(buf->direct.buf, 0, size);
+ return 0;
+}
+
+/* Handling for queue buffers -- we allocate a bunch of memory and
+ * register it in a memory region at HCA virtual address 0. If the
+ * requested size is > max_direct, we split the allocation into
+ * multiple pages, so we don't require too much contiguous memory.
+ */
+int mlx4_buf_alloc(struct mlx4_dev *dev, int size, int max_direct,
+ struct mlx4_buf *buf, gfp_t gfp)
+{
+ if (size <= max_direct) {
+ return mlx4_buf_direct_alloc(dev, size, buf, gfp);
} else {
+ dma_addr_t t;
int i;
- buf->direct.buf = NULL;
- buf->nbufs = (size + PAGE_SIZE - 1) / PAGE_SIZE;
- buf->npages = buf->nbufs;
+ buf->direct.buf = NULL;
+ buf->nbufs = (size + PAGE_SIZE - 1) / PAGE_SIZE;
+ buf->npages = buf->nbufs;
buf->page_shift = PAGE_SHIFT;
buf->page_list = kcalloc(buf->nbufs, sizeof(*buf->page_list),
gfp);
@@ -619,28 +626,12 @@ int mlx4_buf_alloc(struct mlx4_dev *dev, int size, int max_direct,
for (i = 0; i < buf->nbufs; ++i) {
buf->page_list[i].buf =
- dma_alloc_coherent(&dev->persist->pdev->dev,
- PAGE_SIZE,
- &t, gfp);
+ dma_zalloc_coherent(&dev->persist->pdev->dev,
+ PAGE_SIZE, &t, gfp);
if (!buf->page_list[i].buf)
goto err_free;
buf->page_list[i].map = t;
-
- memset(buf->page_list[i].buf, 0, PAGE_SIZE);
- }
-
- if (BITS_PER_LONG == 64) {
- struct page **pages;
- pages = kmalloc(sizeof *pages * buf->nbufs, gfp);
- if (!pages)
- goto err_free;
- for (i = 0; i < buf->nbufs; ++i)
- pages[i] = virt_to_page(buf->page_list[i].buf);
- buf->direct.buf = vmap(pages, buf->nbufs, VM_MAP, PAGE_KERNEL);
- kfree(pages);
- if (!buf->direct.buf)
- goto err_free;
}
}
@@ -655,15 +646,11 @@ EXPORT_SYMBOL_GPL(mlx4_buf_alloc);
void mlx4_buf_free(struct mlx4_dev *dev, int size, struct mlx4_buf *buf)
{
- int i;
-
- if (buf->nbufs == 1)
+ if (buf->nbufs == 1) {
dma_free_coherent(&dev->persist->pdev->dev, size,
- buf->direct.buf,
- buf->direct.map);
- else {
- if (BITS_PER_LONG == 64)
- vunmap(buf->direct.buf);
+ buf->direct.buf, buf->direct.map);
+ } else {
+ int i;
for (i = 0; i < buf->nbufs; ++i)
if (buf->page_list[i].buf)
@@ -789,7 +776,7 @@ void mlx4_db_free(struct mlx4_dev *dev, struct mlx4_db *db)
EXPORT_SYMBOL_GPL(mlx4_db_free);
int mlx4_alloc_hwq_res(struct mlx4_dev *dev, struct mlx4_hwq_resources *wqres,
- int size, int max_direct)
+ int size)
{
int err;
@@ -799,7 +786,7 @@ int mlx4_alloc_hwq_res(struct mlx4_dev *dev, struct mlx4_hwq_resources *wqres,
*wqres->db.db = 0;
- err = mlx4_buf_alloc(dev, size, max_direct, &wqres->buf, GFP_KERNEL);
+ err = mlx4_buf_direct_alloc(dev, size, &wqres->buf, GFP_KERNEL);
if (err)
goto err_db;
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_cq.c b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
index af975a2b74c6..132cea655920 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_cq.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_cq.c
@@ -73,22 +73,16 @@ int mlx4_en_create_cq(struct mlx4_en_priv *priv,
*/
set_dev_node(&mdev->dev->persist->pdev->dev, node);
err = mlx4_alloc_hwq_res(mdev->dev, &cq->wqres,
- cq->buf_size, 2 * PAGE_SIZE);
+ cq->buf_size);
set_dev_node(&mdev->dev->persist->pdev->dev, mdev->dev->numa_node);
if (err)
goto err_cq;
- err = mlx4_en_map_buffer(&cq->wqres.buf);
- if (err)
- goto err_res;
-
cq->buf = (struct mlx4_cqe *)cq->wqres.buf.direct.buf;
*pcq = cq;
return 0;
-err_res:
- mlx4_free_hwq_res(mdev->dev, &cq->wqres, cq->buf_size);
err_cq:
kfree(cq);
*pcq = NULL;
@@ -177,7 +171,6 @@ void mlx4_en_destroy_cq(struct mlx4_en_priv *priv, struct mlx4_en_cq **pcq)
struct mlx4_en_dev *mdev = priv->mdev;
struct mlx4_en_cq *cq = *pcq;
- mlx4_en_unmap_buffer(&cq->wqres.buf);
mlx4_free_hwq_res(mdev->dev, &cq->wqres, cq->buf_size);
if (mlx4_is_eq_vector_valid(mdev->dev, priv->port, cq->vector) &&
cq->is_tx == RX)
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
index b4b258c8ca47..92e0624f4cf0 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_netdev.c
@@ -1856,6 +1856,7 @@ static void mlx4_en_restart(struct work_struct *work)
en_dbg(DRV, priv, "Watchdog task called for port %d\n", priv->port);
+ rtnl_lock();
mutex_lock(&mdev->state_lock);
if (priv->port_up) {
mlx4_en_stop_port(dev, 1);
@@ -1863,6 +1864,7 @@ static void mlx4_en_restart(struct work_struct *work)
en_err(priv, "Failed restarting port %d\n", priv->port);
}
mutex_unlock(&mdev->state_lock);
+ rtnl_unlock();
}
static void mlx4_en_clear_stats(struct net_device *dev)
@@ -2355,8 +2357,12 @@ out:
}
/* set offloads */
- priv->dev->hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
- NETIF_F_TSO | NETIF_F_GSO_UDP_TUNNEL;
+ priv->dev->hw_enc_features |= NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
+ NETIF_F_RXCSUM |
+ NETIF_F_TSO | NETIF_F_TSO6 |
+ NETIF_F_GSO_UDP_TUNNEL |
+ NETIF_F_GSO_UDP_TUNNEL_CSUM |
+ NETIF_F_GSO_PARTIAL;
}
static void mlx4_en_del_vxlan_offloads(struct work_struct *work)
@@ -2365,8 +2371,12 @@ static void mlx4_en_del_vxlan_offloads(struct work_struct *work)
struct mlx4_en_priv *priv = container_of(work, struct mlx4_en_priv,
vxlan_del_task);
/* unset offloads */
- priv->dev->hw_enc_features &= ~(NETIF_F_IP_CSUM | NETIF_F_RXCSUM |
- NETIF_F_TSO | NETIF_F_GSO_UDP_TUNNEL);
+ priv->dev->hw_enc_features &= ~(NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
+ NETIF_F_RXCSUM |
+ NETIF_F_TSO | NETIF_F_TSO6 |
+ NETIF_F_GSO_UDP_TUNNEL |
+ NETIF_F_GSO_UDP_TUNNEL_CSUM |
+ NETIF_F_GSO_PARTIAL);
ret = mlx4_SET_PORT_VXLAN(priv->mdev->dev, priv->port,
VXLAN_STEER_BY_OUTER_MAC, 0);
@@ -2425,7 +2435,18 @@ static netdev_features_t mlx4_en_features_check(struct sk_buff *skb,
netdev_features_t features)
{
features = vlan_features_check(skb, features);
- return vxlan_features_check(skb, features);
+ features = vxlan_features_check(skb, features);
+
+ /* The ConnectX-3 doesn't support outer IPv6 checksums but it does
+ * support inner IPv6 checksums and segmentation so we need to
+ * strip that feature if this is an IPv6 encapsulated frame.
+ */
+ if (skb->encapsulation &&
+ (skb->ip_summed == CHECKSUM_PARTIAL) &&
+ (ip_hdr(skb)->version != 4))
+ features &= ~(NETIF_F_CSUM_MASK | NETIF_F_GSO_MASK);
+
+ return features;
}
#endif
@@ -2907,7 +2928,7 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
/* Allocate page for receive rings */
err = mlx4_alloc_hwq_res(mdev->dev, &priv->res,
- MLX4_EN_PAGE_SIZE, MLX4_EN_PAGE_SIZE);
+ MLX4_EN_PAGE_SIZE);
if (err) {
en_err(priv, "Failed to allocate page for rx qps\n");
goto out;
@@ -2990,8 +3011,13 @@ int mlx4_en_init_netdev(struct mlx4_en_dev *mdev, int port,
}
if (mdev->dev->caps.tunnel_offload_mode == MLX4_TUNNEL_OFFLOAD_MODE_VXLAN) {
- dev->hw_features |= NETIF_F_GSO_UDP_TUNNEL;
- dev->features |= NETIF_F_GSO_UDP_TUNNEL;
+ dev->hw_features |= NETIF_F_GSO_UDP_TUNNEL |
+ NETIF_F_GSO_UDP_TUNNEL_CSUM |
+ NETIF_F_GSO_PARTIAL;
+ dev->features |= NETIF_F_GSO_UDP_TUNNEL |
+ NETIF_F_GSO_UDP_TUNNEL_CSUM |
+ NETIF_F_GSO_PARTIAL;
+ dev->gso_partial_features = NETIF_F_GSO_UDP_TUNNEL_CSUM;
}
mdev->pndev[port] = dev;
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_resources.c b/drivers/net/ethernet/mellanox/mlx4/en_resources.c
index 02e925d6f734..a6b0db0e0383 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_resources.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_resources.c
@@ -107,37 +107,6 @@ int mlx4_en_change_mcast_lb(struct mlx4_en_priv *priv, struct mlx4_qp *qp,
return ret;
}
-int mlx4_en_map_buffer(struct mlx4_buf *buf)
-{
- struct page **pages;
- int i;
-
- if (BITS_PER_LONG == 64 || buf->nbufs == 1)
- return 0;
-
- pages = kmalloc(sizeof *pages * buf->nbufs, GFP_KERNEL);
- if (!pages)
- return -ENOMEM;
-
- for (i = 0; i < buf->nbufs; ++i)
- pages[i] = virt_to_page(buf->page_list[i].buf);
-
- buf->direct.buf = vmap(pages, buf->nbufs, VM_MAP, PAGE_KERNEL);
- kfree(pages);
- if (!buf->direct.buf)
- return -ENOMEM;
-
- return 0;
-}
-
-void mlx4_en_unmap_buffer(struct mlx4_buf *buf)
-{
- if (BITS_PER_LONG == 64 || buf->nbufs == 1)
- return;
-
- vunmap(buf->direct.buf);
-}
-
void mlx4_en_sqp_event(struct mlx4_qp *qp, enum mlx4_event event)
{
return;
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_rx.c b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
index b723e3bcab39..c1b3a9c8cf3b 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_rx.c
@@ -394,17 +394,11 @@ int mlx4_en_create_rx_ring(struct mlx4_en_priv *priv,
/* Allocate HW buffers on provided NUMA node */
set_dev_node(&mdev->dev->persist->pdev->dev, node);
- err = mlx4_alloc_hwq_res(mdev->dev, &ring->wqres,
- ring->buf_size, 2 * PAGE_SIZE);
+ err = mlx4_alloc_hwq_res(mdev->dev, &ring->wqres, ring->buf_size);
set_dev_node(&mdev->dev->persist->pdev->dev, mdev->dev->numa_node);
if (err)
goto err_info;
- err = mlx4_en_map_buffer(&ring->wqres.buf);
- if (err) {
- en_err(priv, "Failed to map RX buffer\n");
- goto err_hwq;
- }
ring->buf = ring->wqres.buf.direct.buf;
ring->hwtstamp_rx_filter = priv->hwtstamp_config.rx_filter;
@@ -412,8 +406,6 @@ int mlx4_en_create_rx_ring(struct mlx4_en_priv *priv,
*pring = ring;
return 0;
-err_hwq:
- mlx4_free_hwq_res(mdev->dev, &ring->wqres, ring->buf_size);
err_info:
vfree(ring->rx_info);
ring->rx_info = NULL;
@@ -517,7 +509,6 @@ void mlx4_en_destroy_rx_ring(struct mlx4_en_priv *priv,
struct mlx4_en_dev *mdev = priv->mdev;
struct mlx4_en_rx_ring *ring = *pring;
- mlx4_en_unmap_buffer(&ring->wqres.buf);
mlx4_free_hwq_res(mdev->dev, &ring->wqres, size * stride + TXBB_SIZE);
vfree(ring->rx_info);
ring->rx_info = NULL;
@@ -707,7 +698,7 @@ static int get_fixed_ipv6_csum(__wsum hw_checksum, struct sk_buff *skb,
if (ipv6h->nexthdr == IPPROTO_FRAGMENT || ipv6h->nexthdr == IPPROTO_HOPOPTS)
return -1;
- hw_checksum = csum_add(hw_checksum, (__force __wsum)(ipv6h->nexthdr << 8));
+ hw_checksum = csum_add(hw_checksum, (__force __wsum)htons(ipv6h->nexthdr));
csum_pseudo_hdr = csum_partial(&ipv6h->saddr,
sizeof(ipv6h->saddr) + sizeof(ipv6h->daddr), 0);
diff --git a/drivers/net/ethernet/mellanox/mlx4/en_tx.c b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
index a386f047c1af..f6e61570cb2c 100644
--- a/drivers/net/ethernet/mellanox/mlx4/en_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx4/en_tx.c
@@ -41,6 +41,7 @@
#include <linux/vmalloc.h>
#include <linux/tcp.h>
#include <linux/ip.h>
+#include <linux/ipv6.h>
#include <linux/moduleparam.h>
#include "mlx4_en.h"
@@ -93,20 +94,13 @@ int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv,
/* Allocate HW buffers on provided NUMA node */
set_dev_node(&mdev->dev->persist->pdev->dev, node);
- err = mlx4_alloc_hwq_res(mdev->dev, &ring->wqres, ring->buf_size,
- 2 * PAGE_SIZE);
+ err = mlx4_alloc_hwq_res(mdev->dev, &ring->wqres, ring->buf_size);
set_dev_node(&mdev->dev->persist->pdev->dev, mdev->dev->numa_node);
if (err) {
en_err(priv, "Failed allocating hwq resources\n");
goto err_bounce;
}
- err = mlx4_en_map_buffer(&ring->wqres.buf);
- if (err) {
- en_err(priv, "Failed to map TX buffer\n");
- goto err_hwq_res;
- }
-
ring->buf = ring->wqres.buf.direct.buf;
en_dbg(DRV, priv, "Allocated TX ring (addr:%p) - buf:%p size:%d buf_size:%d dma:%llx\n",
@@ -117,7 +111,7 @@ int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv,
MLX4_RESERVE_ETH_BF_QP);
if (err) {
en_err(priv, "failed reserving qp for TX ring\n");
- goto err_map;
+ goto err_hwq_res;
}
err = mlx4_qp_alloc(mdev->dev, ring->qpn, &ring->qp, GFP_KERNEL);
@@ -154,8 +148,6 @@ int mlx4_en_create_tx_ring(struct mlx4_en_priv *priv,
err_reserve:
mlx4_qp_release_range(mdev->dev, ring->qpn, 1);
-err_map:
- mlx4_en_unmap_buffer(&ring->wqres.buf);
err_hwq_res:
mlx4_free_hwq_res(mdev->dev, &ring->wqres, ring->buf_size);
err_bounce:
@@ -182,7 +174,6 @@ void mlx4_en_destroy_tx_ring(struct mlx4_en_priv *priv,
mlx4_qp_remove(mdev->dev, &ring->qp);
mlx4_qp_free(mdev->dev, &ring->qp);
mlx4_qp_release_range(priv->mdev->dev, ring->qpn, 1);
- mlx4_en_unmap_buffer(&ring->wqres.buf);
mlx4_free_hwq_res(mdev->dev, &ring->wqres, ring->buf_size);
kfree(ring->bounce_buf);
ring->bounce_buf = NULL;
@@ -920,8 +911,18 @@ netdev_tx_t mlx4_en_xmit(struct sk_buff *skb, struct net_device *dev)
tx_ind, fragptr);
if (skb->encapsulation) {
- struct iphdr *ipv4 = (struct iphdr *)skb_inner_network_header(skb);
- if (ipv4->protocol == IPPROTO_TCP || ipv4->protocol == IPPROTO_UDP)
+ union {
+ struct iphdr *v4;
+ struct ipv6hdr *v6;
+ unsigned char *hdr;
+ } ip;
+ u8 proto;
+
+ ip.hdr = skb_inner_network_header(skb);
+ proto = (ip.v4->version == 4) ? ip.v4->protocol :
+ ip.v6->nexthdr;
+
+ if (proto == IPPROTO_TCP || proto == IPPROTO_UDP)
op_own |= cpu_to_be32(MLX4_WQE_CTRL_IIP | MLX4_WQE_CTRL_ILP);
else
op_own |= cpu_to_be32(MLX4_WQE_CTRL_IIP);
diff --git a/drivers/net/ethernet/mellanox/mlx4/mcg.c b/drivers/net/ethernet/mellanox/mlx4/mcg.c
index 6aa73972d478..f2d0920018a5 100644
--- a/drivers/net/ethernet/mellanox/mlx4/mcg.c
+++ b/drivers/net/ethernet/mellanox/mlx4/mcg.c
@@ -1102,7 +1102,7 @@ int mlx4_qp_attach_common(struct mlx4_dev *dev, struct mlx4_qp *qp, u8 gid[16],
struct mlx4_cmd_mailbox *mailbox;
struct mlx4_mgm *mgm;
u32 members_count;
- int index, prev;
+ int index = -1, prev;
int link = 0;
int i;
int err;
@@ -1181,7 +1181,7 @@ int mlx4_qp_attach_common(struct mlx4_dev *dev, struct mlx4_qp *qp, u8 gid[16],
goto out;
out:
- if (prot == MLX4_PROT_ETH) {
+ if (prot == MLX4_PROT_ETH && index != -1) {
/* manage the steering entry for promisc mode */
if (new_entry)
err = new_steering_entry(dev, port, steer,
diff --git a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
index 63b1aeae2c03..cc84e09f324a 100644
--- a/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
+++ b/drivers/net/ethernet/mellanox/mlx4/mlx4_en.h
@@ -672,8 +672,6 @@ void mlx4_en_fill_qp_context(struct mlx4_en_priv *priv, int size, int stride,
int is_tx, int rss, int qpn, int cqn, int user_prio,
struct mlx4_qp_context *context);
void mlx4_en_sqp_event(struct mlx4_qp *qp, enum mlx4_event event);
-int mlx4_en_map_buffer(struct mlx4_buf *buf);
-void mlx4_en_unmap_buffer(struct mlx4_buf *buf);
int mlx4_en_change_mcast_lb(struct mlx4_en_priv *priv, struct mlx4_qp *qp,
int loopback);
void mlx4_en_calc_rx_buf(struct net_device *dev);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
index 559d11a443bc..1cf722eba607 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Kconfig
@@ -14,7 +14,6 @@ config MLX5_CORE_EN
bool "Mellanox Technologies ConnectX-4 Ethernet support"
depends on NETDEVICES && ETHERNET && PCI && MLX5_CORE
select PTP_1588_CLOCK
- select VXLAN if MLX5_CORE=y
default n
---help---
Ethernet support in Mellanox Technologies ConnectX-4 NIC.
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/Makefile b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
index 4fc45ee0c5d1..9ea7b583096a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/Makefile
+++ b/drivers/net/ethernet/mellanox/mlx5/core/Makefile
@@ -2,10 +2,10 @@ obj-$(CONFIG_MLX5_CORE) += mlx5_core.o
mlx5_core-y := main.o cmd.o debugfs.o fw.o eq.o uar.o pagealloc.o \
health.o mcg.o cq.o srq.o alloc.o qp.o port.o mr.o pd.o \
- mad.o transobj.o vport.o sriov.o fs_cmd.o fs_core.o
+ mad.o transobj.o vport.o sriov.o fs_cmd.o fs_core.o fs_counters.o
mlx5_core-$(CONFIG_MLX5_CORE_EN) += wq.o eswitch.o \
en_main.o en_fs.o en_ethtool.o en_tx.o en_rx.o \
- en_txrx.o en_clock.o vxlan.o en_tc.o
+ en_txrx.o en_clock.o vxlan.o en_tc.o en_arfs.o
mlx5_core-$(CONFIG_MLX5_CORE_EN_DCB) += en_dcbnl.o
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
index eb926e1ee71c..dcd2df6518de 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cmd.c
@@ -294,6 +294,7 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op,
case MLX5_CMD_OP_DESTROY_FLOW_TABLE:
case MLX5_CMD_OP_DESTROY_FLOW_GROUP:
case MLX5_CMD_OP_DELETE_FLOW_TABLE_ENTRY:
+ case MLX5_CMD_OP_DEALLOC_FLOW_COUNTER:
return MLX5_CMD_STAT_OK;
case MLX5_CMD_OP_QUERY_HCA_CAP:
@@ -395,6 +396,8 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op,
case MLX5_CMD_OP_QUERY_FLOW_GROUP:
case MLX5_CMD_OP_SET_FLOW_TABLE_ENTRY:
case MLX5_CMD_OP_QUERY_FLOW_TABLE_ENTRY:
+ case MLX5_CMD_OP_ALLOC_FLOW_COUNTER:
+ case MLX5_CMD_OP_QUERY_FLOW_COUNTER:
*status = MLX5_DRIVER_STATUS_ABORTED;
*synd = MLX5_DRIVER_SYND;
return -EIO;
@@ -406,178 +409,142 @@ static int mlx5_internal_err_ret_value(struct mlx5_core_dev *dev, u16 op,
const char *mlx5_command_str(int command)
{
- switch (command) {
- case MLX5_CMD_OP_QUERY_HCA_VPORT_CONTEXT:
- return "QUERY_HCA_VPORT_CONTEXT";
-
- case MLX5_CMD_OP_MODIFY_HCA_VPORT_CONTEXT:
- return "MODIFY_HCA_VPORT_CONTEXT";
-
- case MLX5_CMD_OP_QUERY_HCA_CAP:
- return "QUERY_HCA_CAP";
-
- case MLX5_CMD_OP_SET_HCA_CAP:
- return "SET_HCA_CAP";
-
- case MLX5_CMD_OP_QUERY_ADAPTER:
- return "QUERY_ADAPTER";
-
- case MLX5_CMD_OP_INIT_HCA:
- return "INIT_HCA";
-
- case MLX5_CMD_OP_TEARDOWN_HCA:
- return "TEARDOWN_HCA";
-
- case MLX5_CMD_OP_ENABLE_HCA:
- return "MLX5_CMD_OP_ENABLE_HCA";
-
- case MLX5_CMD_OP_DISABLE_HCA:
- return "MLX5_CMD_OP_DISABLE_HCA";
-
- case MLX5_CMD_OP_QUERY_PAGES:
- return "QUERY_PAGES";
-
- case MLX5_CMD_OP_MANAGE_PAGES:
- return "MANAGE_PAGES";
-
- case MLX5_CMD_OP_CREATE_MKEY:
- return "CREATE_MKEY";
-
- case MLX5_CMD_OP_QUERY_MKEY:
- return "QUERY_MKEY";
-
- case MLX5_CMD_OP_DESTROY_MKEY:
- return "DESTROY_MKEY";
-
- case MLX5_CMD_OP_QUERY_SPECIAL_CONTEXTS:
- return "QUERY_SPECIAL_CONTEXTS";
-
- case MLX5_CMD_OP_CREATE_EQ:
- return "CREATE_EQ";
-
- case MLX5_CMD_OP_DESTROY_EQ:
- return "DESTROY_EQ";
-
- case MLX5_CMD_OP_QUERY_EQ:
- return "QUERY_EQ";
-
- case MLX5_CMD_OP_CREATE_CQ:
- return "CREATE_CQ";
-
- case MLX5_CMD_OP_DESTROY_CQ:
- return "DESTROY_CQ";
-
- case MLX5_CMD_OP_QUERY_CQ:
- return "QUERY_CQ";
-
- case MLX5_CMD_OP_MODIFY_CQ:
- return "MODIFY_CQ";
-
- case MLX5_CMD_OP_CREATE_QP:
- return "CREATE_QP";
-
- case MLX5_CMD_OP_DESTROY_QP:
- return "DESTROY_QP";
-
- case MLX5_CMD_OP_RST2INIT_QP:
- return "RST2INIT_QP";
-
- case MLX5_CMD_OP_INIT2RTR_QP:
- return "INIT2RTR_QP";
-
- case MLX5_CMD_OP_RTR2RTS_QP:
- return "RTR2RTS_QP";
-
- case MLX5_CMD_OP_RTS2RTS_QP:
- return "RTS2RTS_QP";
-
- case MLX5_CMD_OP_SQERR2RTS_QP:
- return "SQERR2RTS_QP";
-
- case MLX5_CMD_OP_2ERR_QP:
- return "2ERR_QP";
-
- case MLX5_CMD_OP_2RST_QP:
- return "2RST_QP";
-
- case MLX5_CMD_OP_QUERY_QP:
- return "QUERY_QP";
-
- case MLX5_CMD_OP_MAD_IFC:
- return "MAD_IFC";
-
- case MLX5_CMD_OP_INIT2INIT_QP:
- return "INIT2INIT_QP";
-
- case MLX5_CMD_OP_CREATE_PSV:
- return "CREATE_PSV";
-
- case MLX5_CMD_OP_DESTROY_PSV:
- return "DESTROY_PSV";
-
- case MLX5_CMD_OP_CREATE_SRQ:
- return "CREATE_SRQ";
-
- case MLX5_CMD_OP_DESTROY_SRQ:
- return "DESTROY_SRQ";
-
- case MLX5_CMD_OP_QUERY_SRQ:
- return "QUERY_SRQ";
-
- case MLX5_CMD_OP_ARM_RQ:
- return "ARM_RQ";
-
- case MLX5_CMD_OP_CREATE_XRC_SRQ:
- return "CREATE_XRC_SRQ";
-
- case MLX5_CMD_OP_DESTROY_XRC_SRQ:
- return "DESTROY_XRC_SRQ";
-
- case MLX5_CMD_OP_QUERY_XRC_SRQ:
- return "QUERY_XRC_SRQ";
-
- case MLX5_CMD_OP_ARM_XRC_SRQ:
- return "ARM_XRC_SRQ";
-
- case MLX5_CMD_OP_ALLOC_PD:
- return "ALLOC_PD";
-
- case MLX5_CMD_OP_DEALLOC_PD:
- return "DEALLOC_PD";
-
- case MLX5_CMD_OP_ALLOC_UAR:
- return "ALLOC_UAR";
-
- case MLX5_CMD_OP_DEALLOC_UAR:
- return "DEALLOC_UAR";
-
- case MLX5_CMD_OP_ATTACH_TO_MCG:
- return "ATTACH_TO_MCG";
-
- case MLX5_CMD_OP_DETTACH_FROM_MCG:
- return "DETTACH_FROM_MCG";
-
- case MLX5_CMD_OP_ALLOC_XRCD:
- return "ALLOC_XRCD";
-
- case MLX5_CMD_OP_DEALLOC_XRCD:
- return "DEALLOC_XRCD";
-
- case MLX5_CMD_OP_ACCESS_REG:
- return "MLX5_CMD_OP_ACCESS_REG";
-
- case MLX5_CMD_OP_SET_WOL_ROL:
- return "SET_WOL_ROL";
-
- case MLX5_CMD_OP_QUERY_WOL_ROL:
- return "QUERY_WOL_ROL";
-
- case MLX5_CMD_OP_ADD_VXLAN_UDP_DPORT:
- return "ADD_VXLAN_UDP_DPORT";
-
- case MLX5_CMD_OP_DELETE_VXLAN_UDP_DPORT:
- return "DELETE_VXLAN_UDP_DPORT";
+#define MLX5_COMMAND_STR_CASE(__cmd) case MLX5_CMD_OP_ ## __cmd: return #__cmd
+ switch (command) {
+ MLX5_COMMAND_STR_CASE(QUERY_HCA_CAP);
+ MLX5_COMMAND_STR_CASE(QUERY_ADAPTER);
+ MLX5_COMMAND_STR_CASE(INIT_HCA);
+ MLX5_COMMAND_STR_CASE(TEARDOWN_HCA);
+ MLX5_COMMAND_STR_CASE(ENABLE_HCA);
+ MLX5_COMMAND_STR_CASE(DISABLE_HCA);
+ MLX5_COMMAND_STR_CASE(QUERY_PAGES);
+ MLX5_COMMAND_STR_CASE(MANAGE_PAGES);
+ MLX5_COMMAND_STR_CASE(SET_HCA_CAP);
+ MLX5_COMMAND_STR_CASE(QUERY_ISSI);
+ MLX5_COMMAND_STR_CASE(SET_ISSI);
+ MLX5_COMMAND_STR_CASE(CREATE_MKEY);
+ MLX5_COMMAND_STR_CASE(QUERY_MKEY);
+ MLX5_COMMAND_STR_CASE(DESTROY_MKEY);
+ MLX5_COMMAND_STR_CASE(QUERY_SPECIAL_CONTEXTS);
+ MLX5_COMMAND_STR_CASE(PAGE_FAULT_RESUME);
+ MLX5_COMMAND_STR_CASE(CREATE_EQ);
+ MLX5_COMMAND_STR_CASE(DESTROY_EQ);
+ MLX5_COMMAND_STR_CASE(QUERY_EQ);
+ MLX5_COMMAND_STR_CASE(GEN_EQE);
+ MLX5_COMMAND_STR_CASE(CREATE_CQ);
+ MLX5_COMMAND_STR_CASE(DESTROY_CQ);
+ MLX5_COMMAND_STR_CASE(QUERY_CQ);
+ MLX5_COMMAND_STR_CASE(MODIFY_CQ);
+ MLX5_COMMAND_STR_CASE(CREATE_QP);
+ MLX5_COMMAND_STR_CASE(DESTROY_QP);
+ MLX5_COMMAND_STR_CASE(RST2INIT_QP);
+ MLX5_COMMAND_STR_CASE(INIT2RTR_QP);
+ MLX5_COMMAND_STR_CASE(RTR2RTS_QP);
+ MLX5_COMMAND_STR_CASE(RTS2RTS_QP);
+ MLX5_COMMAND_STR_CASE(SQERR2RTS_QP);
+ MLX5_COMMAND_STR_CASE(2ERR_QP);
+ MLX5_COMMAND_STR_CASE(2RST_QP);
+ MLX5_COMMAND_STR_CASE(QUERY_QP);
+ MLX5_COMMAND_STR_CASE(SQD_RTS_QP);
+ MLX5_COMMAND_STR_CASE(INIT2INIT_QP);
+ MLX5_COMMAND_STR_CASE(CREATE_PSV);
+ MLX5_COMMAND_STR_CASE(DESTROY_PSV);
+ MLX5_COMMAND_STR_CASE(CREATE_SRQ);
+ MLX5_COMMAND_STR_CASE(DESTROY_SRQ);
+ MLX5_COMMAND_STR_CASE(QUERY_SRQ);
+ MLX5_COMMAND_STR_CASE(ARM_RQ);
+ MLX5_COMMAND_STR_CASE(CREATE_XRC_SRQ);
+ MLX5_COMMAND_STR_CASE(DESTROY_XRC_SRQ);
+ MLX5_COMMAND_STR_CASE(QUERY_XRC_SRQ);
+ MLX5_COMMAND_STR_CASE(ARM_XRC_SRQ);
+ MLX5_COMMAND_STR_CASE(CREATE_DCT);
+ MLX5_COMMAND_STR_CASE(DESTROY_DCT);
+ MLX5_COMMAND_STR_CASE(DRAIN_DCT);
+ MLX5_COMMAND_STR_CASE(QUERY_DCT);
+ MLX5_COMMAND_STR_CASE(ARM_DCT_FOR_KEY_VIOLATION);
+ MLX5_COMMAND_STR_CASE(QUERY_VPORT_STATE);
+ MLX5_COMMAND_STR_CASE(MODIFY_VPORT_STATE);
+ MLX5_COMMAND_STR_CASE(QUERY_ESW_VPORT_CONTEXT);
+ MLX5_COMMAND_STR_CASE(MODIFY_ESW_VPORT_CONTEXT);
+ MLX5_COMMAND_STR_CASE(QUERY_NIC_VPORT_CONTEXT);
+ MLX5_COMMAND_STR_CASE(MODIFY_NIC_VPORT_CONTEXT);
+ MLX5_COMMAND_STR_CASE(QUERY_ROCE_ADDRESS);
+ MLX5_COMMAND_STR_CASE(SET_ROCE_ADDRESS);
+ MLX5_COMMAND_STR_CASE(QUERY_HCA_VPORT_CONTEXT);
+ MLX5_COMMAND_STR_CASE(MODIFY_HCA_VPORT_CONTEXT);
+ MLX5_COMMAND_STR_CASE(QUERY_HCA_VPORT_GID);
+ MLX5_COMMAND_STR_CASE(QUERY_HCA_VPORT_PKEY);
+ MLX5_COMMAND_STR_CASE(QUERY_VPORT_COUNTER);
+ MLX5_COMMAND_STR_CASE(ALLOC_Q_COUNTER);
+ MLX5_COMMAND_STR_CASE(DEALLOC_Q_COUNTER);
+ MLX5_COMMAND_STR_CASE(QUERY_Q_COUNTER);
+ MLX5_COMMAND_STR_CASE(ALLOC_PD);
+ MLX5_COMMAND_STR_CASE(DEALLOC_PD);
+ MLX5_COMMAND_STR_CASE(ALLOC_UAR);
+ MLX5_COMMAND_STR_CASE(DEALLOC_UAR);
+ MLX5_COMMAND_STR_CASE(CONFIG_INT_MODERATION);
+ MLX5_COMMAND_STR_CASE(ACCESS_REG);
+ MLX5_COMMAND_STR_CASE(ATTACH_TO_MCG);
+ MLX5_COMMAND_STR_CASE(DETTACH_FROM_MCG);
+ MLX5_COMMAND_STR_CASE(GET_DROPPED_PACKET_LOG);
+ MLX5_COMMAND_STR_CASE(MAD_IFC);
+ MLX5_COMMAND_STR_CASE(QUERY_MAD_DEMUX);
+ MLX5_COMMAND_STR_CASE(SET_MAD_DEMUX);
+ MLX5_COMMAND_STR_CASE(NOP);
+ MLX5_COMMAND_STR_CASE(ALLOC_XRCD);
+ MLX5_COMMAND_STR_CASE(DEALLOC_XRCD);
+ MLX5_COMMAND_STR_CASE(ALLOC_TRANSPORT_DOMAIN);
+ MLX5_COMMAND_STR_CASE(DEALLOC_TRANSPORT_DOMAIN);
+ MLX5_COMMAND_STR_CASE(QUERY_CONG_STATUS);
+ MLX5_COMMAND_STR_CASE(MODIFY_CONG_STATUS);
+ MLX5_COMMAND_STR_CASE(QUERY_CONG_PARAMS);
+ MLX5_COMMAND_STR_CASE(MODIFY_CONG_PARAMS);
+ MLX5_COMMAND_STR_CASE(QUERY_CONG_STATISTICS);
+ MLX5_COMMAND_STR_CASE(ADD_VXLAN_UDP_DPORT);
+ MLX5_COMMAND_STR_CASE(DELETE_VXLAN_UDP_DPORT);
+ MLX5_COMMAND_STR_CASE(SET_L2_TABLE_ENTRY);
+ MLX5_COMMAND_STR_CASE(QUERY_L2_TABLE_ENTRY);
+ MLX5_COMMAND_STR_CASE(DELETE_L2_TABLE_ENTRY);
+ MLX5_COMMAND_STR_CASE(SET_WOL_ROL);
+ MLX5_COMMAND_STR_CASE(QUERY_WOL_ROL);
+ MLX5_COMMAND_STR_CASE(CREATE_TIR);
+ MLX5_COMMAND_STR_CASE(MODIFY_TIR);
+ MLX5_COMMAND_STR_CASE(DESTROY_TIR);
+ MLX5_COMMAND_STR_CASE(QUERY_TIR);
+ MLX5_COMMAND_STR_CASE(CREATE_SQ);
+ MLX5_COMMAND_STR_CASE(MODIFY_SQ);
+ MLX5_COMMAND_STR_CASE(DESTROY_SQ);
+ MLX5_COMMAND_STR_CASE(QUERY_SQ);
+ MLX5_COMMAND_STR_CASE(CREATE_RQ);
+ MLX5_COMMAND_STR_CASE(MODIFY_RQ);
+ MLX5_COMMAND_STR_CASE(DESTROY_RQ);
+ MLX5_COMMAND_STR_CASE(QUERY_RQ);
+ MLX5_COMMAND_STR_CASE(CREATE_RMP);
+ MLX5_COMMAND_STR_CASE(MODIFY_RMP);
+ MLX5_COMMAND_STR_CASE(DESTROY_RMP);
+ MLX5_COMMAND_STR_CASE(QUERY_RMP);
+ MLX5_COMMAND_STR_CASE(CREATE_TIS);
+ MLX5_COMMAND_STR_CASE(MODIFY_TIS);
+ MLX5_COMMAND_STR_CASE(DESTROY_TIS);
+ MLX5_COMMAND_STR_CASE(QUERY_TIS);
+ MLX5_COMMAND_STR_CASE(CREATE_RQT);
+ MLX5_COMMAND_STR_CASE(MODIFY_RQT);
+ MLX5_COMMAND_STR_CASE(DESTROY_RQT);
+ MLX5_COMMAND_STR_CASE(QUERY_RQT);
+ MLX5_COMMAND_STR_CASE(SET_FLOW_TABLE_ROOT);
+ MLX5_COMMAND_STR_CASE(CREATE_FLOW_TABLE);
+ MLX5_COMMAND_STR_CASE(DESTROY_FLOW_TABLE);
+ MLX5_COMMAND_STR_CASE(QUERY_FLOW_TABLE);
+ MLX5_COMMAND_STR_CASE(CREATE_FLOW_GROUP);
+ MLX5_COMMAND_STR_CASE(DESTROY_FLOW_GROUP);
+ MLX5_COMMAND_STR_CASE(QUERY_FLOW_GROUP);
+ MLX5_COMMAND_STR_CASE(SET_FLOW_TABLE_ENTRY);
+ MLX5_COMMAND_STR_CASE(QUERY_FLOW_TABLE_ENTRY);
+ MLX5_COMMAND_STR_CASE(DELETE_FLOW_TABLE_ENTRY);
+ MLX5_COMMAND_STR_CASE(ALLOC_FLOW_COUNTER);
+ MLX5_COMMAND_STR_CASE(DEALLOC_FLOW_COUNTER);
+ MLX5_COMMAND_STR_CASE(QUERY_FLOW_COUNTER);
default: return "unknown command opcode";
}
}
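
The rewrite above collapses dozens of hand-written case/return pairs into a single token-pasting, stringizing macro. A minimal sketch of the same pattern, using a made-up demo enum rather than the real mlx5 opcode list:

#include <stdio.h>

enum demo_cmd { DEMO_CMD_OP_CREATE_CQ, DEMO_CMD_OP_DESTROY_CQ };

/* Same trick as MLX5_COMMAND_STR_CASE: paste the prefix onto the case
 * label and stringize the bare name for the return value. */
#define DEMO_COMMAND_STR_CASE(__cmd) case DEMO_CMD_OP_ ## __cmd: return #__cmd

static const char *demo_command_str(int command)
{
	switch (command) {
	DEMO_COMMAND_STR_CASE(CREATE_CQ);
	DEMO_COMMAND_STR_CASE(DESTROY_CQ);
	default: return "unknown command opcode";
	}
}

int main(void)
{
	printf("%s\n", demo_command_str(DEMO_CMD_OP_CREATE_CQ)); /* CREATE_CQ */
	return 0;
}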
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/cq.c b/drivers/net/ethernet/mellanox/mlx5/core/cq.c
index b51e42d6fbec..873a631ad155 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/cq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/cq.c
@@ -39,6 +39,53 @@
#include <linux/mlx5/cq.h>
#include "mlx5_core.h"
+#define TASKLET_MAX_TIME 2
+#define TASKLET_MAX_TIME_JIFFIES msecs_to_jiffies(TASKLET_MAX_TIME)
+
+void mlx5_cq_tasklet_cb(unsigned long data)
+{
+ unsigned long flags;
+ unsigned long end = jiffies + TASKLET_MAX_TIME_JIFFIES;
+ struct mlx5_eq_tasklet *ctx = (struct mlx5_eq_tasklet *)data;
+ struct mlx5_core_cq *mcq;
+ struct mlx5_core_cq *temp;
+
+ spin_lock_irqsave(&ctx->lock, flags);
+ list_splice_tail_init(&ctx->list, &ctx->process_list);
+ spin_unlock_irqrestore(&ctx->lock, flags);
+
+ list_for_each_entry_safe(mcq, temp, &ctx->process_list,
+ tasklet_ctx.list) {
+ list_del_init(&mcq->tasklet_ctx.list);
+ mcq->tasklet_ctx.comp(mcq);
+ if (atomic_dec_and_test(&mcq->refcount))
+ complete(&mcq->free);
+ if (time_after(jiffies, end))
+ break;
+ }
+
+ if (!list_empty(&ctx->process_list))
+ tasklet_schedule(&ctx->task);
+}
+
+static void mlx5_add_cq_to_tasklet(struct mlx5_core_cq *cq)
+{
+ unsigned long flags;
+ struct mlx5_eq_tasklet *tasklet_ctx = cq->tasklet_ctx.priv;
+
+ spin_lock_irqsave(&tasklet_ctx->lock, flags);
+	/* When CQ migration between EQs is implemented, this point will need
+	 * to be synchronized: while a CQ is being migrated, completions can
+	 * still arrive on the old EQ.
+	 */
+ if (list_empty_careful(&cq->tasklet_ctx.list)) {
+ atomic_inc(&cq->refcount);
+ list_add_tail(&cq->tasklet_ctx.list, &tasklet_ctx->list);
+ }
+ spin_unlock_irqrestore(&tasklet_ctx->lock, flags);
+}
+
void mlx5_cq_completion(struct mlx5_core_dev *dev, u32 cqn)
{
struct mlx5_core_cq *cq;
@@ -96,6 +143,13 @@ int mlx5_core_create_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq,
struct mlx5_create_cq_mbox_out out;
struct mlx5_destroy_cq_mbox_in din;
struct mlx5_destroy_cq_mbox_out dout;
+ int eqn = MLX5_GET(cqc, MLX5_ADDR_OF(create_cq_in, in, cq_context),
+ c_eqn);
+ struct mlx5_eq *eq;
+
+ eq = mlx5_eqn2eq(dev, eqn);
+ if (IS_ERR(eq))
+ return PTR_ERR(eq);
in->hdr.opcode = cpu_to_be16(MLX5_CMD_OP_CREATE_CQ);
memset(&out, 0, sizeof(out));
@@ -111,6 +165,11 @@ int mlx5_core_create_cq(struct mlx5_core_dev *dev, struct mlx5_core_cq *cq,
cq->arm_sn = 0;
atomic_set(&cq->refcount, 1);
init_completion(&cq->free);
+ if (!cq->comp)
+ cq->comp = mlx5_add_cq_to_tasklet;
+ /* assuming CQ will be deleted before the EQ */
+ cq->tasklet_ctx.priv = &eq->tasklet_ctx;
+ INIT_LIST_HEAD(&cq->tasklet_ctx.list);
spin_lock_irq(&table->lock);
err = radix_tree_insert(&table->tree, cq->cqn, cq);
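
The new tasklet callback above splices the pending CQ list and drains it under a roughly 2 ms budget, rescheduling itself if work remains, so one burst of completions cannot monopolize the CPU. A rough user-space sketch of that bounded-drain idea, with an array and clock() standing in for the kernel list and jiffies:

#include <stdio.h>
#include <time.h>

#define BUDGET_MS 2	/* mirrors TASKLET_MAX_TIME */

/* Process queued items until the queue is empty or the time budget is
 * spent; report how much work would have to be rescheduled. */
static int bounded_drain(const int *queue, int head, int tail)
{
	clock_t end = clock() + (BUDGET_MS * CLOCKS_PER_SEC) / 1000;

	while (head != tail) {
		printf("completing item %d\n", queue[head++]);
		if (clock() > end)
			break;			/* budget exhausted */
	}
	if (head != tail)
		printf("reschedule: %d items left\n", tail - head);
	return head;
}

int main(void)
{
	int queue[] = { 1, 2, 3 };

	bounded_drain(queue, 0, 3);
	return 0;
}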
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en.h b/drivers/net/ethernet/mellanox/mlx5/core/en.h
index 3881dce0cc30..e8a6c3325b39 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en.h
@@ -46,6 +46,9 @@
#include <linux/rhashtable.h>
#include "wq.h"
#include "mlx5_core.h"
+#include "en_stats.h"
+
+#define MLX5_SET_CFG(p, f, v) MLX5_SET(create_flow_group_in, p, f, v)
#define MLX5E_MAX_NUM_TC 8
@@ -57,12 +60,30 @@
#define MLX5E_PARAMS_DEFAULT_LOG_RQ_SIZE 0xa
#define MLX5E_PARAMS_MAXIMUM_LOG_RQ_SIZE 0xd
+#define MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE_MPW 0x1
+#define MLX5E_PARAMS_DEFAULT_LOG_RQ_SIZE_MPW 0x4
+#define MLX5E_PARAMS_MAXIMUM_LOG_RQ_SIZE_MPW 0x6
+
+#define MLX5_MPWRQ_LOG_STRIDE_SIZE 6 /* >= 6, HW restriction */
+#define MLX5_MPWRQ_LOG_STRIDE_SIZE_CQE_COMPRESS 8 /* >= 6, HW restriction */
+#define MLX5_MPWRQ_LOG_WQE_SZ 17
+#define MLX5_MPWRQ_WQE_PAGE_ORDER (MLX5_MPWRQ_LOG_WQE_SZ - PAGE_SHIFT > 0 ? \
+ MLX5_MPWRQ_LOG_WQE_SZ - PAGE_SHIFT : 0)
+#define MLX5_MPWRQ_PAGES_PER_WQE BIT(MLX5_MPWRQ_WQE_PAGE_ORDER)
+#define MLX5_MPWRQ_STRIDES_PER_PAGE (MLX5_MPWRQ_NUM_STRIDES >> \
+ MLX5_MPWRQ_WQE_PAGE_ORDER)
+#define MLX5_CHANNEL_MAX_NUM_MTTS (ALIGN(MLX5_MPWRQ_PAGES_PER_WQE, 8) * \
+ BIT(MLX5E_PARAMS_MAXIMUM_LOG_RQ_SIZE_MPW))
+#define MLX5_UMR_ALIGN (2048)
+#define MLX5_MPWRQ_SMALL_PACKET_THRESHOLD (128)
+
#define MLX5E_PARAMS_DEFAULT_LRO_WQE_SZ (64 * 1024)
#define MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_USEC 0x10
#define MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_PKTS 0x20
#define MLX5E_PARAMS_DEFAULT_TX_CQ_MODERATION_USEC 0x10
#define MLX5E_PARAMS_DEFAULT_TX_CQ_MODERATION_PKTS 0x20
#define MLX5E_PARAMS_DEFAULT_MIN_RX_WQES 0x80
+#define MLX5E_PARAMS_DEFAULT_MIN_RX_WQES_MPW 0x2
#define MLX5E_LOG_INDIR_RQT_SIZE 0x7
#define MLX5E_INDIR_RQT_SIZE BIT(MLX5E_LOG_INDIR_RQT_SIZE)
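
The MPWRQ macros above derive the multi-packet WQE geometry from two log sizes. A short sketch of how the numbers work out if PAGE_SHIFT is 12 (4 KiB pages, an assumption not visible in this hunk):

#include <stdio.h>

#define PAGE_SHIFT	12	/* assumed 4 KiB pages */
#define LOG_WQE_SZ	17	/* MLX5_MPWRQ_LOG_WQE_SZ */
#define LOG_STRIDE_SZ	6	/* MLX5_MPWRQ_LOG_STRIDE_SIZE */

int main(void)
{
	int page_order = LOG_WQE_SZ - PAGE_SHIFT > 0 ?
			 LOG_WQE_SZ - PAGE_SHIFT : 0;

	printf("WQE size:       %d bytes\n", 1 << LOG_WQE_SZ);	/* 131072 */
	printf("pages per WQE:  %d\n", 1 << page_order);	/* 32 */
	printf("min stride:     %d bytes\n", 1 << LOG_STRIDE_SZ);	/* 64 */
	return 0;
}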
@@ -73,233 +94,70 @@
#define MLX5E_NUM_MAIN_GROUPS 9
+static inline u16 mlx5_min_rx_wqes(int wq_type, u32 wq_size)
+{
+ switch (wq_type) {
+ case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
+ return min_t(u16, MLX5E_PARAMS_DEFAULT_MIN_RX_WQES_MPW,
+ wq_size / 2);
+ default:
+ return min_t(u16, MLX5E_PARAMS_DEFAULT_MIN_RX_WQES,
+ wq_size / 2);
+ }
+}
+
+static inline int mlx5_min_log_rq_size(int wq_type)
+{
+ switch (wq_type) {
+ case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
+ return MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE_MPW;
+ default:
+ return MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE;
+ }
+}
+
+static inline int mlx5_max_log_rq_size(int wq_type)
+{
+ switch (wq_type) {
+ case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
+ return MLX5E_PARAMS_MAXIMUM_LOG_RQ_SIZE_MPW;
+ default:
+ return MLX5E_PARAMS_MAXIMUM_LOG_RQ_SIZE;
+ }
+}
+
+struct mlx5e_tx_wqe {
+ struct mlx5_wqe_ctrl_seg ctrl;
+ struct mlx5_wqe_eth_seg eth;
+};
+
+struct mlx5e_rx_wqe {
+ struct mlx5_wqe_srq_next_seg next;
+ struct mlx5_wqe_data_seg data;
+};
+
+struct mlx5e_umr_wqe {
+ struct mlx5_wqe_ctrl_seg ctrl;
+ struct mlx5_wqe_umr_ctrl_seg uctrl;
+ struct mlx5_mkey_seg mkc;
+ struct mlx5_wqe_data_seg data;
+};
+
#ifdef CONFIG_MLX5_CORE_EN_DCB
#define MLX5E_MAX_BW_ALLOC 100 /* Max percentage of BW allocation */
#define MLX5E_MIN_BW_ALLOC 1 /* Min percentage of BW allocation */
#endif
-static const char vport_strings[][ETH_GSTRING_LEN] = {
- /* vport statistics */
- "rx_packets",
- "rx_bytes",
- "tx_packets",
- "tx_bytes",
- "rx_error_packets",
- "rx_error_bytes",
- "tx_error_packets",
- "tx_error_bytes",
- "rx_unicast_packets",
- "rx_unicast_bytes",
- "tx_unicast_packets",
- "tx_unicast_bytes",
- "rx_multicast_packets",
- "rx_multicast_bytes",
- "tx_multicast_packets",
- "tx_multicast_bytes",
- "rx_broadcast_packets",
- "rx_broadcast_bytes",
- "tx_broadcast_packets",
- "tx_broadcast_bytes",
-
- /* SW counters */
- "tso_packets",
- "tso_bytes",
- "tso_inner_packets",
- "tso_inner_bytes",
- "lro_packets",
- "lro_bytes",
- "rx_csum_good",
- "rx_csum_none",
- "rx_csum_sw",
- "tx_csum_offload",
- "tx_csum_inner",
- "tx_queue_stopped",
- "tx_queue_wake",
- "tx_queue_dropped",
- "rx_wqe_err",
-};
-
-struct mlx5e_vport_stats {
- /* HW counters */
- u64 rx_packets;
- u64 rx_bytes;
- u64 tx_packets;
- u64 tx_bytes;
- u64 rx_error_packets;
- u64 rx_error_bytes;
- u64 tx_error_packets;
- u64 tx_error_bytes;
- u64 rx_unicast_packets;
- u64 rx_unicast_bytes;
- u64 tx_unicast_packets;
- u64 tx_unicast_bytes;
- u64 rx_multicast_packets;
- u64 rx_multicast_bytes;
- u64 tx_multicast_packets;
- u64 tx_multicast_bytes;
- u64 rx_broadcast_packets;
- u64 rx_broadcast_bytes;
- u64 tx_broadcast_packets;
- u64 tx_broadcast_bytes;
-
- /* SW counters */
- u64 tso_packets;
- u64 tso_bytes;
- u64 tso_inner_packets;
- u64 tso_inner_bytes;
- u64 lro_packets;
- u64 lro_bytes;
- u64 rx_csum_good;
- u64 rx_csum_none;
- u64 rx_csum_sw;
- u64 tx_csum_offload;
- u64 tx_csum_inner;
- u64 tx_queue_stopped;
- u64 tx_queue_wake;
- u64 tx_queue_dropped;
- u64 rx_wqe_err;
-
-#define NUM_VPORT_COUNTERS 35
-};
-
-static const char pport_strings[][ETH_GSTRING_LEN] = {
- /* IEEE802.3 counters */
- "frames_tx",
- "frames_rx",
- "check_seq_err",
- "alignment_err",
- "octets_tx",
- "octets_received",
- "multicast_xmitted",
- "broadcast_xmitted",
- "multicast_rx",
- "broadcast_rx",
- "in_range_len_errors",
- "out_of_range_len",
- "too_long_errors",
- "symbol_err",
- "mac_control_tx",
- "mac_control_rx",
- "unsupported_op_rx",
- "pause_ctrl_rx",
- "pause_ctrl_tx",
-
- /* RFC2863 counters */
- "in_octets",
- "in_ucast_pkts",
- "in_discards",
- "in_errors",
- "in_unknown_protos",
- "out_octets",
- "out_ucast_pkts",
- "out_discards",
- "out_errors",
- "in_multicast_pkts",
- "in_broadcast_pkts",
- "out_multicast_pkts",
- "out_broadcast_pkts",
-
- /* RFC2819 counters */
- "drop_events",
- "octets",
- "pkts",
- "broadcast_pkts",
- "multicast_pkts",
- "crc_align_errors",
- "undersize_pkts",
- "oversize_pkts",
- "fragments",
- "jabbers",
- "collisions",
- "p64octets",
- "p65to127octets",
- "p128to255octets",
- "p256to511octets",
- "p512to1023octets",
- "p1024to1518octets",
- "p1519to2047octets",
- "p2048to4095octets",
- "p4096to8191octets",
- "p8192to10239octets",
-};
-
-#define NUM_IEEE_802_3_COUNTERS 19
-#define NUM_RFC_2863_COUNTERS 13
-#define NUM_RFC_2819_COUNTERS 21
-#define NUM_PPORT_COUNTERS (NUM_IEEE_802_3_COUNTERS + \
- NUM_RFC_2863_COUNTERS + \
- NUM_RFC_2819_COUNTERS)
-
-struct mlx5e_pport_stats {
- __be64 IEEE_802_3_counters[NUM_IEEE_802_3_COUNTERS];
- __be64 RFC_2863_counters[NUM_RFC_2863_COUNTERS];
- __be64 RFC_2819_counters[NUM_RFC_2819_COUNTERS];
-};
-
-static const char rq_stats_strings[][ETH_GSTRING_LEN] = {
- "packets",
- "bytes",
- "csum_none",
- "csum_sw",
- "lro_packets",
- "lro_bytes",
- "wqe_err"
-};
-
-struct mlx5e_rq_stats {
- u64 packets;
- u64 bytes;
- u64 csum_none;
- u64 csum_sw;
- u64 lro_packets;
- u64 lro_bytes;
- u64 wqe_err;
-#define NUM_RQ_STATS 7
-};
-
-static const char sq_stats_strings[][ETH_GSTRING_LEN] = {
- "packets",
- "bytes",
- "tso_packets",
- "tso_bytes",
- "tso_inner_packets",
- "tso_inner_bytes",
- "csum_offload_inner",
- "nop",
- "csum_offload_none",
- "stopped",
- "wake",
- "dropped",
-};
-
-struct mlx5e_sq_stats {
- /* commonly accessed in data path */
- u64 packets;
- u64 bytes;
- u64 tso_packets;
- u64 tso_bytes;
- u64 tso_inner_packets;
- u64 tso_inner_bytes;
- u64 csum_offload_inner;
- u64 nop;
- /* less likely accessed in data path */
- u64 csum_offload_none;
- u64 stopped;
- u64 wake;
- u64 dropped;
-#define NUM_SQ_STATS 12
-};
-
-struct mlx5e_stats {
- struct mlx5e_vport_stats vport;
- struct mlx5e_pport_stats pport;
-};
-
struct mlx5e_params {
u8 log_sq_size;
+ u8 rq_wq_type;
+ u8 mpwqe_log_stride_sz;
+ u8 mpwqe_log_num_strides;
u8 log_rq_size;
u16 num_channels;
u8 num_tc;
+ bool rx_cqe_compress_admin;
+ bool rx_cqe_compress;
u16 rx_cq_moderation_usec;
u16 rx_cq_moderation_pkts;
u16 tx_cq_moderation_usec;
@@ -311,6 +169,7 @@ struct mlx5e_params {
u8 rss_hfunc;
u8 toeplitz_hash_key[40];
u32 indirection_rqt[MLX5E_INDIR_RQT_SIZE];
+ bool vlan_strip_disable;
#ifdef CONFIG_MLX5_CORE_EN_DCB
struct ieee_ets ets;
#endif
@@ -331,6 +190,7 @@ struct mlx5e_tstamp {
enum {
MLX5E_RQ_STATE_POST_WQES_ENABLE,
+ MLX5E_RQ_STATE_UMR_WQE_IN_PROGRESS,
};
struct mlx5e_cq {
@@ -343,32 +203,88 @@ struct mlx5e_cq {
struct mlx5e_channel *channel;
struct mlx5e_priv *priv;
+ /* cqe decompression */
+ struct mlx5_cqe64 title;
+ struct mlx5_mini_cqe8 mini_arr[MLX5_MINI_CQE_ARRAY_SIZE];
+ u8 mini_arr_idx;
+ u16 decmprs_left;
+ u16 decmprs_wqe_counter;
+
/* control */
struct mlx5_wq_ctrl wq_ctrl;
} ____cacheline_aligned_in_smp;
+struct mlx5e_rq;
+typedef void (*mlx5e_fp_handle_rx_cqe)(struct mlx5e_rq *rq,
+ struct mlx5_cqe64 *cqe);
+typedef int (*mlx5e_fp_alloc_wqe)(struct mlx5e_rq *rq, struct mlx5e_rx_wqe *wqe,
+ u16 ix);
+
+struct mlx5e_dma_info {
+ struct page *page;
+ dma_addr_t addr;
+};
+
struct mlx5e_rq {
/* data path */
struct mlx5_wq_ll wq;
u32 wqe_sz;
struct sk_buff **skb;
+ struct mlx5e_mpw_info *wqe_info;
+ __be32 mkey_be;
+ __be32 umr_mkey_be;
struct device *pdev;
struct net_device *netdev;
struct mlx5e_tstamp *tstamp;
struct mlx5e_rq_stats stats;
struct mlx5e_cq cq;
+ mlx5e_fp_handle_rx_cqe handle_rx_cqe;
+ mlx5e_fp_alloc_wqe alloc_wqe;
unsigned long state;
int ix;
/* control */
struct mlx5_wq_ctrl wq_ctrl;
+ u8 wq_type;
+ u32 mpwqe_stride_sz;
+ u32 mpwqe_num_strides;
u32 rqn;
struct mlx5e_channel *channel;
struct mlx5e_priv *priv;
} ____cacheline_aligned_in_smp;
+struct mlx5e_umr_dma_info {
+ __be64 *mtt;
+ __be64 *mtt_no_align;
+ dma_addr_t mtt_addr;
+ struct mlx5e_dma_info *dma_info;
+};
+
+struct mlx5e_mpw_info {
+ union {
+ struct mlx5e_dma_info dma_info;
+ struct mlx5e_umr_dma_info umr;
+ };
+ u16 consumed_strides;
+ u16 skbs_frags[MLX5_MPWRQ_PAGES_PER_WQE];
+
+ void (*dma_pre_sync)(struct device *pdev,
+ struct mlx5e_mpw_info *wi,
+ u32 wqe_offset, u32 len);
+ void (*add_skb_frag)(struct mlx5e_rq *rq,
+ struct sk_buff *skb,
+ struct mlx5e_mpw_info *wi,
+ u32 page_idx, u32 frag_offset, u32 len);
+ void (*copy_skb_header)(struct device *pdev,
+ struct sk_buff *skb,
+ struct mlx5e_mpw_info *wi,
+ u32 page_idx, u32 offset,
+ u32 headlen);
+ void (*free_wqe)(struct mlx5e_rq *rq, struct mlx5e_mpw_info *wi);
+};
+
struct mlx5e_tx_wqe_info {
u32 num_bytes;
u8 num_wqebbs;
@@ -391,6 +307,11 @@ enum {
MLX5E_SQ_STATE_BF_ENABLE,
};
+struct mlx5e_ico_wqe_info {
+ u8 opcode;
+ u8 num_wqebbs;
+};
+
struct mlx5e_sq {
/* data path */
@@ -432,6 +353,7 @@ struct mlx5e_sq {
struct mlx5_uar uar;
struct mlx5e_channel *channel;
int tc;
+ struct mlx5e_ico_wqe_info *ico_wqe_info;
} ____cacheline_aligned_in_smp;
static inline bool mlx5e_sq_has_room_for(struct mlx5e_sq *sq, u16 n)
@@ -448,6 +370,7 @@ struct mlx5e_channel {
/* data path */
struct mlx5e_rq rq;
struct mlx5e_sq sq[MLX5E_MAX_NUM_TC];
+ struct mlx5e_sq icosq; /* internal control operations */
struct napi_struct napi;
struct device *pdev;
struct net_device *netdev;
@@ -474,42 +397,42 @@ enum mlx5e_traffic_types {
MLX5E_TT_IPV6,
MLX5E_TT_ANY,
MLX5E_NUM_TT,
+ MLX5E_NUM_INDIR_TIRS = MLX5E_TT_ANY,
};
-#define IS_HASHING_TT(tt) (tt != MLX5E_TT_ANY)
+enum {
+ MLX5E_STATE_ASYNC_EVENTS_ENABLE,
+ MLX5E_STATE_OPENED,
+ MLX5E_STATE_DESTROYING,
+};
-enum mlx5e_rqt_ix {
- MLX5E_INDIRECTION_RQT,
- MLX5E_SINGLE_RQ_RQT,
- MLX5E_NUM_RQT,
+struct mlx5e_vxlan_db {
+ spinlock_t lock; /* protect vxlan table */
+ struct radix_tree_root tree;
};
-struct mlx5e_eth_addr_info {
+struct mlx5e_l2_rule {
u8 addr[ETH_ALEN + 2];
- u32 tt_vec;
- struct mlx5_flow_rule *ft_rule[MLX5E_NUM_TT];
+ struct mlx5_flow_rule *rule;
};
-#define MLX5E_ETH_ADDR_HASH_SIZE (1 << BITS_PER_BYTE)
-
-struct mlx5e_eth_addr_db {
- struct hlist_head netdev_uc[MLX5E_ETH_ADDR_HASH_SIZE];
- struct hlist_head netdev_mc[MLX5E_ETH_ADDR_HASH_SIZE];
- struct mlx5e_eth_addr_info broadcast;
- struct mlx5e_eth_addr_info allmulti;
- struct mlx5e_eth_addr_info promisc;
- bool broadcast_enabled;
- bool allmulti_enabled;
- bool promisc_enabled;
+struct mlx5e_flow_table {
+ int num_groups;
+ struct mlx5_flow_table *t;
+ struct mlx5_flow_group **g;
};
-enum {
- MLX5E_STATE_ASYNC_EVENTS_ENABLE,
- MLX5E_STATE_OPENED,
- MLX5E_STATE_DESTROYING,
+#define MLX5E_L2_ADDR_HASH_SIZE BIT(BITS_PER_BYTE)
+
+struct mlx5e_tc_table {
+ struct mlx5_flow_table *t;
+
+ struct rhashtable_params ht_params;
+ struct rhashtable ht;
};
-struct mlx5e_vlan_db {
+struct mlx5e_vlan_table {
+ struct mlx5e_flow_table ft;
unsigned long active_vlans[BITS_TO_LONGS(VLAN_N_VID)];
struct mlx5_flow_rule *active_vlans_rule[VLAN_N_VID];
struct mlx5_flow_rule *untagged_rule;
@@ -517,29 +440,74 @@ struct mlx5e_vlan_db {
bool filter_disabled;
};
-struct mlx5e_vxlan_db {
- spinlock_t lock; /* protect vxlan table */
- struct radix_tree_root tree;
+struct mlx5e_l2_table {
+ struct mlx5e_flow_table ft;
+ struct hlist_head netdev_uc[MLX5E_L2_ADDR_HASH_SIZE];
+ struct hlist_head netdev_mc[MLX5E_L2_ADDR_HASH_SIZE];
+ struct mlx5e_l2_rule broadcast;
+ struct mlx5e_l2_rule allmulti;
+ struct mlx5e_l2_rule promisc;
+ bool broadcast_enabled;
+ bool allmulti_enabled;
+ bool promisc_enabled;
};
-struct mlx5e_flow_table {
- int num_groups;
- struct mlx5_flow_table *t;
- struct mlx5_flow_group **g;
+/* L3/L4 traffic type classifier */
+struct mlx5e_ttc_table {
+ struct mlx5e_flow_table ft;
+ struct mlx5_flow_rule *rules[MLX5E_NUM_TT];
};
-struct mlx5e_tc_flow_table {
- struct mlx5_flow_table *t;
+#define ARFS_HASH_SHIFT BITS_PER_BYTE
+#define ARFS_HASH_SIZE BIT(BITS_PER_BYTE)
+struct arfs_table {
+ struct mlx5e_flow_table ft;
+ struct mlx5_flow_rule *default_rule;
+ struct hlist_head rules_hash[ARFS_HASH_SIZE];
+};
- struct rhashtable_params ht_params;
- struct rhashtable ht;
+enum arfs_type {
+ ARFS_IPV4_TCP,
+ ARFS_IPV6_TCP,
+ ARFS_IPV4_UDP,
+ ARFS_IPV6_UDP,
+ ARFS_NUM_TYPES,
+};
+
+struct mlx5e_arfs_tables {
+ struct arfs_table arfs_tables[ARFS_NUM_TYPES];
+ /* Protect aRFS rules list */
+ spinlock_t arfs_lock;
+ struct list_head rules;
+ int last_filter_id;
+ struct workqueue_struct *wq;
+};
+
+/* NIC prio FTS */
+enum {
+ MLX5E_VLAN_FT_LEVEL = 0,
+ MLX5E_L2_FT_LEVEL,
+ MLX5E_TTC_FT_LEVEL,
+ MLX5E_ARFS_FT_LEVEL
};
-struct mlx5e_flow_tables {
- struct mlx5_flow_namespace *ns;
- struct mlx5e_tc_flow_table tc;
- struct mlx5e_flow_table vlan;
- struct mlx5e_flow_table main;
+struct mlx5e_flow_steering {
+ struct mlx5_flow_namespace *ns;
+ struct mlx5e_tc_table tc;
+ struct mlx5e_vlan_table vlan;
+ struct mlx5e_l2_table l2;
+ struct mlx5e_ttc_table ttc;
+ struct mlx5e_arfs_tables arfs;
+};
+
+struct mlx5e_direct_tir {
+ u32 tirn;
+ u32 rqtn;
+};
+
+enum {
+ MLX5E_TC_PRIO = 0,
+ MLX5E_NIC_PRIO
};
struct mlx5e_priv {
@@ -554,16 +522,16 @@ struct mlx5e_priv {
u32 pdn;
u32 tdn;
struct mlx5_core_mkey mkey;
+ struct mlx5_core_mkey umr_mkey;
struct mlx5e_rq drop_rq;
struct mlx5e_channel **channel;
u32 tisn[MLX5E_MAX_NUM_TC];
- u32 rqtn[MLX5E_NUM_RQT];
- u32 tirn[MLX5E_NUM_TT];
+ u32 indir_rqtn;
+ u32 indir_tirn[MLX5E_NUM_INDIR_TIRS];
+ struct mlx5e_direct_tir direct_tir[MLX5E_MAX_NUM_CHANNELS];
- struct mlx5e_flow_tables fts;
- struct mlx5e_eth_addr_db eth_addr;
- struct mlx5e_vlan_db vlan;
+ struct mlx5e_flow_steering fs;
struct mlx5e_vxlan_db vxlan;
struct mlx5e_params params;
@@ -576,18 +544,7 @@ struct mlx5e_priv {
struct net_device *netdev;
struct mlx5e_stats stats;
struct mlx5e_tstamp tstamp;
-};
-
-#define MLX5E_NET_IP_ALIGN 2
-
-struct mlx5e_tx_wqe {
- struct mlx5_wqe_ctrl_seg ctrl;
- struct mlx5_wqe_eth_seg eth;
-};
-
-struct mlx5e_rx_wqe {
- struct mlx5_wqe_srq_next_seg next;
- struct mlx5_wqe_data_seg data;
+ u16 q_counter;
};
enum mlx5e_link_mode {
@@ -632,14 +589,35 @@ void mlx5e_cq_error_event(struct mlx5_core_cq *mcq, enum mlx5_event event);
int mlx5e_napi_poll(struct napi_struct *napi, int budget);
bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget);
int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget);
+
+void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
+void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe);
bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq);
+int mlx5e_alloc_rx_wqe(struct mlx5e_rq *rq, struct mlx5e_rx_wqe *wqe, u16 ix);
+int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_rx_wqe *wqe, u16 ix);
+void mlx5e_post_rx_fragmented_mpwqe(struct mlx5e_rq *rq);
+void mlx5e_complete_rx_linear_mpwqe(struct mlx5e_rq *rq,
+ struct mlx5_cqe64 *cqe,
+ u16 byte_cnt,
+ struct mlx5e_mpw_info *wi,
+ struct sk_buff *skb);
+void mlx5e_complete_rx_fragmented_mpwqe(struct mlx5e_rq *rq,
+ struct mlx5_cqe64 *cqe,
+ u16 byte_cnt,
+ struct mlx5e_mpw_info *wi,
+ struct sk_buff *skb);
+void mlx5e_free_rx_linear_mpwqe(struct mlx5e_rq *rq,
+ struct mlx5e_mpw_info *wi);
+void mlx5e_free_rx_fragmented_mpwqe(struct mlx5e_rq *rq,
+ struct mlx5e_mpw_info *wi);
struct mlx5_cqe64 *mlx5e_get_cqe(struct mlx5e_cq *cq);
void mlx5e_update_stats(struct mlx5e_priv *priv);
-int mlx5e_create_flow_tables(struct mlx5e_priv *priv);
-void mlx5e_destroy_flow_tables(struct mlx5e_priv *priv);
-void mlx5e_init_eth_addr(struct mlx5e_priv *priv);
+int mlx5e_create_flow_steering(struct mlx5e_priv *priv);
+void mlx5e_destroy_flow_steering(struct mlx5e_priv *priv);
+void mlx5e_init_l2_addr(struct mlx5e_priv *priv);
+void mlx5e_destroy_flow_table(struct mlx5e_flow_table *ft);
void mlx5e_set_rx_mode_work(struct work_struct *work);
void mlx5e_fill_hwstamp(struct mlx5e_tstamp *clock, u64 timestamp,
@@ -648,6 +626,7 @@ void mlx5e_timestamp_init(struct mlx5e_priv *priv);
void mlx5e_timestamp_cleanup(struct mlx5e_priv *priv);
int mlx5e_hwstamp_set(struct net_device *dev, struct ifreq *ifr);
int mlx5e_hwstamp_get(struct net_device *dev, struct ifreq *ifr);
+void mlx5e_modify_rx_cqe_compression(struct mlx5e_priv *priv, bool val);
int mlx5e_vlan_rx_add_vid(struct net_device *dev, __always_unused __be16 proto,
u16 vid);
@@ -656,16 +635,20 @@ int mlx5e_vlan_rx_kill_vid(struct net_device *dev, __always_unused __be16 proto,
void mlx5e_enable_vlan_filter(struct mlx5e_priv *priv);
void mlx5e_disable_vlan_filter(struct mlx5e_priv *priv);
-int mlx5e_redirect_rqt(struct mlx5e_priv *priv, enum mlx5e_rqt_ix rqt_ix);
+int mlx5e_modify_rqs_vsd(struct mlx5e_priv *priv, bool vsd);
+
+int mlx5e_redirect_rqt(struct mlx5e_priv *priv, u32 rqtn, int sz, int ix);
void mlx5e_build_tir_ctx_hash(void *tirc, struct mlx5e_priv *priv);
int mlx5e_open_locked(struct net_device *netdev);
int mlx5e_close_locked(struct net_device *netdev);
-void mlx5e_build_default_indir_rqt(u32 *indirection_rqt, int len,
+void mlx5e_build_default_indir_rqt(struct mlx5_core_dev *mdev,
+ u32 *indirection_rqt, int len,
int num_channels);
+int mlx5e_get_max_linkspeed(struct mlx5_core_dev *mdev, u32 *speed);
static inline void mlx5e_tx_notify_hw(struct mlx5e_sq *sq,
- struct mlx5e_tx_wqe *wqe, int bf_sz)
+ struct mlx5_wqe_ctrl_seg *ctrl, int bf_sz)
{
u16 ofst = MLX5_BF_OFFSET + sq->bf_offset;
@@ -679,9 +662,9 @@ static inline void mlx5e_tx_notify_hw(struct mlx5e_sq *sq,
*/
wmb();
if (bf_sz)
- __iowrite64_copy(sq->uar_map + ofst, &wqe->ctrl, bf_sz);
+ __iowrite64_copy(sq->uar_map + ofst, ctrl, bf_sz);
else
- mlx5_write64((__be32 *)&wqe->ctrl, sq->uar_map + ofst, NULL);
+ mlx5_write64((__be32 *)ctrl, sq->uar_map + ofst, NULL);
/* flush the write-combining mapped buffer */
wmb();
@@ -702,12 +685,43 @@ static inline int mlx5e_get_max_num_channels(struct mlx5_core_dev *mdev)
MLX5E_MAX_NUM_CHANNELS);
}
+static inline int mlx5e_get_mtt_octw(int npages)
+{
+ return ALIGN(npages, 8) / 2;
+}
+
extern const struct ethtool_ops mlx5e_ethtool_ops;
#ifdef CONFIG_MLX5_CORE_EN_DCB
extern const struct dcbnl_rtnl_ops mlx5e_dcbnl_ops;
int mlx5e_dcbnl_ieee_setets_core(struct mlx5e_priv *priv, struct ieee_ets *ets);
#endif
+#ifndef CONFIG_RFS_ACCEL
+static inline int mlx5e_arfs_create_tables(struct mlx5e_priv *priv)
+{
+ return 0;
+}
+
+static inline void mlx5e_arfs_destroy_tables(struct mlx5e_priv *priv) {}
+
+static inline int mlx5e_arfs_enable(struct mlx5e_priv *priv)
+{
+ return -ENOTSUPP;
+}
+
+static inline int mlx5e_arfs_disable(struct mlx5e_priv *priv)
+{
+ return -ENOTSUPP;
+}
+#else
+int mlx5e_arfs_create_tables(struct mlx5e_priv *priv);
+void mlx5e_arfs_destroy_tables(struct mlx5e_priv *priv);
+int mlx5e_arfs_enable(struct mlx5e_priv *priv);
+int mlx5e_arfs_disable(struct mlx5e_priv *priv);
+int mlx5e_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
+ u16 rxq_index, u32 flow_id);
+#endif
+
u16 mlx5e_get_max_inline_cap(struct mlx5_core_dev *mdev);
#endif /* __MLX5_EN_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c
new file mode 100644
index 000000000000..3515e78ba68f
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_arfs.c
@@ -0,0 +1,752 @@
+/*
+ * Copyright (c) 2016, Mellanox Technologies. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#ifdef CONFIG_RFS_ACCEL
+
+#include <linux/hash.h>
+#include <linux/mlx5/fs.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include "en.h"
+
+struct arfs_tuple {
+ __be16 etype;
+ u8 ip_proto;
+ union {
+ __be32 src_ipv4;
+ struct in6_addr src_ipv6;
+ };
+ union {
+ __be32 dst_ipv4;
+ struct in6_addr dst_ipv6;
+ };
+ __be16 src_port;
+ __be16 dst_port;
+};
+
+struct arfs_rule {
+ struct mlx5e_priv *priv;
+ struct work_struct arfs_work;
+ struct mlx5_flow_rule *rule;
+ struct hlist_node hlist;
+ int rxq;
+ /* Flow ID passed to ndo_rx_flow_steer */
+ int flow_id;
+ /* Filter ID returned by ndo_rx_flow_steer */
+ int filter_id;
+ struct arfs_tuple tuple;
+};
+
+#define mlx5e_for_each_arfs_rule(hn, tmp, arfs_tables, i, j) \
+ for (i = 0; i < ARFS_NUM_TYPES; i++) \
+ mlx5e_for_each_hash_arfs_rule(hn, tmp, arfs_tables[i].rules_hash, j)
+
+#define mlx5e_for_each_hash_arfs_rule(hn, tmp, hash, j) \
+ for (j = 0; j < ARFS_HASH_SIZE; j++) \
+ hlist_for_each_entry_safe(hn, tmp, &hash[j], hlist)
+
+static enum mlx5e_traffic_types arfs_get_tt(enum arfs_type type)
+{
+ switch (type) {
+ case ARFS_IPV4_TCP:
+ return MLX5E_TT_IPV4_TCP;
+ case ARFS_IPV4_UDP:
+ return MLX5E_TT_IPV4_UDP;
+ case ARFS_IPV6_TCP:
+ return MLX5E_TT_IPV6_TCP;
+ case ARFS_IPV6_UDP:
+ return MLX5E_TT_IPV6_UDP;
+ default:
+ return -EINVAL;
+ }
+}
+
+static int arfs_disable(struct mlx5e_priv *priv)
+{
+ struct mlx5_flow_destination dest;
+ u32 *tirn = priv->indir_tirn;
+ int err = 0;
+ int tt;
+ int i;
+
+ dest.type = MLX5_FLOW_DESTINATION_TYPE_TIR;
+ for (i = 0; i < ARFS_NUM_TYPES; i++) {
+ dest.tir_num = tirn[i];
+ tt = arfs_get_tt(i);
+		/* Modify the ttc rule destination to bypass the aRFS tables */
+ err = mlx5_modify_rule_destination(priv->fs.ttc.rules[tt],
+ &dest);
+ if (err) {
+ netdev_err(priv->netdev,
+ "%s: modify ttc destination failed\n",
+ __func__);
+ return err;
+ }
+ }
+ return 0;
+}
+
+static void arfs_del_rules(struct mlx5e_priv *priv);
+
+int mlx5e_arfs_disable(struct mlx5e_priv *priv)
+{
+ arfs_del_rules(priv);
+
+ return arfs_disable(priv);
+}
+
+int mlx5e_arfs_enable(struct mlx5e_priv *priv)
+{
+ struct mlx5_flow_destination dest;
+ int err = 0;
+ int tt;
+ int i;
+
+ dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+ for (i = 0; i < ARFS_NUM_TYPES; i++) {
+ dest.ft = priv->fs.arfs.arfs_tables[i].ft.t;
+ tt = arfs_get_tt(i);
+		/* Modify the ttc rule destination to point at the aRFS flow tables */
+ err = mlx5_modify_rule_destination(priv->fs.ttc.rules[tt],
+ &dest);
+ if (err) {
+ netdev_err(priv->netdev,
+ "%s: modify ttc destination failed err=%d\n",
+ __func__, err);
+ arfs_disable(priv);
+ return err;
+ }
+ }
+ return 0;
+}
+
+static void arfs_destroy_table(struct arfs_table *arfs_t)
+{
+ mlx5_del_flow_rule(arfs_t->default_rule);
+ mlx5e_destroy_flow_table(&arfs_t->ft);
+}
+
+void mlx5e_arfs_destroy_tables(struct mlx5e_priv *priv)
+{
+ int i;
+
+ if (!(priv->netdev->hw_features & NETIF_F_NTUPLE))
+ return;
+
+ arfs_del_rules(priv);
+ destroy_workqueue(priv->fs.arfs.wq);
+ for (i = 0; i < ARFS_NUM_TYPES; i++) {
+ if (!IS_ERR_OR_NULL(priv->fs.arfs.arfs_tables[i].ft.t))
+ arfs_destroy_table(&priv->fs.arfs.arfs_tables[i]);
+ }
+}
+
+static int arfs_add_default_rule(struct mlx5e_priv *priv,
+ enum arfs_type type)
+{
+ struct arfs_table *arfs_t = &priv->fs.arfs.arfs_tables[type];
+ struct mlx5_flow_destination dest;
+ u8 match_criteria_enable = 0;
+ u32 *tirn = priv->indir_tirn;
+ u32 *match_criteria;
+ u32 *match_value;
+ int err = 0;
+
+ match_value = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
+ match_criteria = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
+ if (!match_value || !match_criteria) {
+ netdev_err(priv->netdev, "%s: alloc failed\n", __func__);
+ err = -ENOMEM;
+ goto out;
+ }
+
+ dest.type = MLX5_FLOW_DESTINATION_TYPE_TIR;
+ switch (type) {
+ case ARFS_IPV4_TCP:
+ dest.tir_num = tirn[MLX5E_TT_IPV4_TCP];
+ break;
+ case ARFS_IPV4_UDP:
+ dest.tir_num = tirn[MLX5E_TT_IPV4_UDP];
+ break;
+ case ARFS_IPV6_TCP:
+ dest.tir_num = tirn[MLX5E_TT_IPV6_TCP];
+ break;
+ case ARFS_IPV6_UDP:
+ dest.tir_num = tirn[MLX5E_TT_IPV6_UDP];
+ break;
+ default:
+ err = -EINVAL;
+ goto out;
+ }
+
+ arfs_t->default_rule = mlx5_add_flow_rule(arfs_t->ft.t, match_criteria_enable,
+ match_criteria, match_value,
+ MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
+ MLX5_FS_DEFAULT_FLOW_TAG,
+ &dest);
+ if (IS_ERR(arfs_t->default_rule)) {
+ err = PTR_ERR(arfs_t->default_rule);
+ arfs_t->default_rule = NULL;
+ netdev_err(priv->netdev, "%s: add rule failed, arfs type=%d\n",
+ __func__, type);
+ }
+out:
+ kvfree(match_criteria);
+ kvfree(match_value);
+ return err;
+}
+
+#define MLX5E_ARFS_NUM_GROUPS 2
+#define MLX5E_ARFS_GROUP1_SIZE BIT(12)
+#define MLX5E_ARFS_GROUP2_SIZE BIT(0)
+#define MLX5E_ARFS_TABLE_SIZE (MLX5E_ARFS_GROUP1_SIZE +\
+ MLX5E_ARFS_GROUP2_SIZE)
+static int arfs_create_groups(struct mlx5e_flow_table *ft,
+ enum arfs_type type)
+{
+ int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
+ void *outer_headers_c;
+ int ix = 0;
+ u32 *in;
+ int err;
+ u8 *mc;
+
+ ft->g = kcalloc(MLX5E_ARFS_NUM_GROUPS,
+ sizeof(*ft->g), GFP_KERNEL);
+ in = mlx5_vzalloc(inlen);
+ if (!in || !ft->g) {
+ kvfree(ft->g);
+ kvfree(in);
+ return -ENOMEM;
+ }
+
+ mc = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria);
+ outer_headers_c = MLX5_ADDR_OF(fte_match_param, mc,
+ outer_headers);
+ MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, outer_headers_c, ethertype);
+ switch (type) {
+ case ARFS_IPV4_TCP:
+ case ARFS_IPV6_TCP:
+ MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, outer_headers_c, tcp_dport);
+ MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, outer_headers_c, tcp_sport);
+ break;
+ case ARFS_IPV4_UDP:
+ case ARFS_IPV6_UDP:
+ MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, outer_headers_c, udp_dport);
+ MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, outer_headers_c, udp_sport);
+ break;
+ default:
+ err = -EINVAL;
+ goto out;
+ }
+
+ switch (type) {
+ case ARFS_IPV4_TCP:
+ case ARFS_IPV4_UDP:
+ MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, outer_headers_c,
+ src_ipv4_src_ipv6.ipv4_layout.ipv4);
+ MLX5_SET_TO_ONES(fte_match_set_lyr_2_4, outer_headers_c,
+ dst_ipv4_dst_ipv6.ipv4_layout.ipv4);
+ break;
+ case ARFS_IPV6_TCP:
+ case ARFS_IPV6_UDP:
+ memset(MLX5_ADDR_OF(fte_match_set_lyr_2_4, outer_headers_c,
+ src_ipv4_src_ipv6.ipv6_layout.ipv6),
+ 0xff, 16);
+ memset(MLX5_ADDR_OF(fte_match_set_lyr_2_4, outer_headers_c,
+ dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
+ 0xff, 16);
+ break;
+ default:
+ err = -EINVAL;
+ goto out;
+ }
+
+ MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
+ MLX5_SET_CFG(in, start_flow_index, ix);
+ ix += MLX5E_ARFS_GROUP1_SIZE;
+ MLX5_SET_CFG(in, end_flow_index, ix - 1);
+ ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
+ if (IS_ERR(ft->g[ft->num_groups]))
+ goto err;
+ ft->num_groups++;
+
+ memset(in, 0, inlen);
+ MLX5_SET_CFG(in, start_flow_index, ix);
+ ix += MLX5E_ARFS_GROUP2_SIZE;
+ MLX5_SET_CFG(in, end_flow_index, ix - 1);
+ ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
+ if (IS_ERR(ft->g[ft->num_groups]))
+ goto err;
+ ft->num_groups++;
+
+ kvfree(in);
+ return 0;
+
+err:
+ err = PTR_ERR(ft->g[ft->num_groups]);
+ ft->g[ft->num_groups] = NULL;
+out:
+ kvfree(in);
+
+ return err;
+}
+
+static int arfs_create_table(struct mlx5e_priv *priv,
+ enum arfs_type type)
+{
+ struct mlx5e_arfs_tables *arfs = &priv->fs.arfs;
+ struct mlx5e_flow_table *ft = &arfs->arfs_tables[type].ft;
+ int err;
+
+ ft->t = mlx5_create_flow_table(priv->fs.ns, MLX5E_NIC_PRIO,
+ MLX5E_ARFS_TABLE_SIZE, MLX5E_ARFS_FT_LEVEL);
+ if (IS_ERR(ft->t)) {
+ err = PTR_ERR(ft->t);
+ ft->t = NULL;
+ return err;
+ }
+
+ err = arfs_create_groups(ft, type);
+ if (err)
+ goto err;
+
+ err = arfs_add_default_rule(priv, type);
+ if (err)
+ goto err;
+
+ return 0;
+err:
+ mlx5e_destroy_flow_table(ft);
+ return err;
+}
+
+int mlx5e_arfs_create_tables(struct mlx5e_priv *priv)
+{
+ int err = 0;
+ int i;
+
+ if (!(priv->netdev->hw_features & NETIF_F_NTUPLE))
+ return 0;
+
+ spin_lock_init(&priv->fs.arfs.arfs_lock);
+ INIT_LIST_HEAD(&priv->fs.arfs.rules);
+ priv->fs.arfs.wq = create_singlethread_workqueue("mlx5e_arfs");
+ if (!priv->fs.arfs.wq)
+ return -ENOMEM;
+
+ for (i = 0; i < ARFS_NUM_TYPES; i++) {
+ err = arfs_create_table(priv, i);
+ if (err)
+ goto err;
+ }
+ return 0;
+err:
+ mlx5e_arfs_destroy_tables(priv);
+ return err;
+}
+
+#define MLX5E_ARFS_EXPIRY_QUOTA 60
+
+static void arfs_may_expire_flow(struct mlx5e_priv *priv)
+{
+ struct arfs_rule *arfs_rule;
+ struct hlist_node *htmp;
+ int quota = 0;
+ int i;
+ int j;
+
+ HLIST_HEAD(del_list);
+ spin_lock_bh(&priv->fs.arfs.arfs_lock);
+ mlx5e_for_each_arfs_rule(arfs_rule, htmp, priv->fs.arfs.arfs_tables, i, j) {
+ if (quota++ > MLX5E_ARFS_EXPIRY_QUOTA)
+ break;
+ if (!work_pending(&arfs_rule->arfs_work) &&
+ rps_may_expire_flow(priv->netdev,
+ arfs_rule->rxq, arfs_rule->flow_id,
+ arfs_rule->filter_id)) {
+ hlist_del_init(&arfs_rule->hlist);
+ hlist_add_head(&arfs_rule->hlist, &del_list);
+ }
+ }
+ spin_unlock_bh(&priv->fs.arfs.arfs_lock);
+ hlist_for_each_entry_safe(arfs_rule, htmp, &del_list, hlist) {
+ if (arfs_rule->rule)
+ mlx5_del_flow_rule(arfs_rule->rule);
+ hlist_del(&arfs_rule->hlist);
+ kfree(arfs_rule);
+ }
+}
+
+static void arfs_del_rules(struct mlx5e_priv *priv)
+{
+ struct hlist_node *htmp;
+ struct arfs_rule *rule;
+ int i;
+ int j;
+
+ HLIST_HEAD(del_list);
+ spin_lock_bh(&priv->fs.arfs.arfs_lock);
+ mlx5e_for_each_arfs_rule(rule, htmp, priv->fs.arfs.arfs_tables, i, j) {
+ hlist_del_init(&rule->hlist);
+ hlist_add_head(&rule->hlist, &del_list);
+ }
+ spin_unlock_bh(&priv->fs.arfs.arfs_lock);
+
+ hlist_for_each_entry_safe(rule, htmp, &del_list, hlist) {
+ cancel_work_sync(&rule->arfs_work);
+ if (rule->rule)
+ mlx5_del_flow_rule(rule->rule);
+ hlist_del(&rule->hlist);
+ kfree(rule);
+ }
+}
+
+static struct hlist_head *
+arfs_hash_bucket(struct arfs_table *arfs_t, __be16 src_port,
+ __be16 dst_port)
+{
+ unsigned long l;
+ int bucket_idx;
+
+ l = (__force unsigned long)src_port |
+ ((__force unsigned long)dst_port << 2);
+
+ bucket_idx = hash_long(l, ARFS_HASH_SHIFT);
+
+ return &arfs_t->rules_hash[bucket_idx];
+}
+
+static u8 arfs_get_ip_proto(const struct sk_buff *skb)
+{
+ return (skb->protocol == htons(ETH_P_IP)) ?
+ ip_hdr(skb)->protocol : ipv6_hdr(skb)->nexthdr;
+}
+
+static struct arfs_table *arfs_get_table(struct mlx5e_arfs_tables *arfs,
+ u8 ip_proto, __be16 etype)
+{
+ if (etype == htons(ETH_P_IP) && ip_proto == IPPROTO_TCP)
+ return &arfs->arfs_tables[ARFS_IPV4_TCP];
+ if (etype == htons(ETH_P_IP) && ip_proto == IPPROTO_UDP)
+ return &arfs->arfs_tables[ARFS_IPV4_UDP];
+ if (etype == htons(ETH_P_IPV6) && ip_proto == IPPROTO_TCP)
+ return &arfs->arfs_tables[ARFS_IPV6_TCP];
+ if (etype == htons(ETH_P_IPV6) && ip_proto == IPPROTO_UDP)
+ return &arfs->arfs_tables[ARFS_IPV6_UDP];
+
+ return NULL;
+}
+
+static struct mlx5_flow_rule *arfs_add_rule(struct mlx5e_priv *priv,
+ struct arfs_rule *arfs_rule)
+{
+ struct mlx5e_arfs_tables *arfs = &priv->fs.arfs;
+ struct arfs_tuple *tuple = &arfs_rule->tuple;
+ struct mlx5_flow_rule *rule = NULL;
+ struct mlx5_flow_destination dest;
+ struct arfs_table *arfs_table;
+ u8 match_criteria_enable = 0;
+ struct mlx5_flow_table *ft;
+ u32 *match_criteria;
+ u32 *match_value;
+ int err = 0;
+
+ match_value = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
+ match_criteria = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
+ if (!match_value || !match_criteria) {
+ netdev_err(priv->netdev, "%s: alloc failed\n", __func__);
+ err = -ENOMEM;
+ goto out;
+ }
+ match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria,
+ outer_headers.ethertype);
+ MLX5_SET(fte_match_param, match_value, outer_headers.ethertype,
+ ntohs(tuple->etype));
+ arfs_table = arfs_get_table(arfs, tuple->ip_proto, tuple->etype);
+ if (!arfs_table) {
+ err = -EINVAL;
+ goto out;
+ }
+
+ ft = arfs_table->ft.t;
+ if (tuple->ip_proto == IPPROTO_TCP) {
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria,
+ outer_headers.tcp_dport);
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria,
+ outer_headers.tcp_sport);
+ MLX5_SET(fte_match_param, match_value, outer_headers.tcp_dport,
+ ntohs(tuple->dst_port));
+ MLX5_SET(fte_match_param, match_value, outer_headers.tcp_sport,
+ ntohs(tuple->src_port));
+ } else {
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria,
+ outer_headers.udp_dport);
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria,
+ outer_headers.udp_sport);
+ MLX5_SET(fte_match_param, match_value, outer_headers.udp_dport,
+ ntohs(tuple->dst_port));
+ MLX5_SET(fte_match_param, match_value, outer_headers.udp_sport,
+ ntohs(tuple->src_port));
+ }
+ if (tuple->etype == htons(ETH_P_IP)) {
+ memcpy(MLX5_ADDR_OF(fte_match_param, match_value,
+ outer_headers.src_ipv4_src_ipv6.ipv4_layout.ipv4),
+ &tuple->src_ipv4,
+ 4);
+ memcpy(MLX5_ADDR_OF(fte_match_param, match_value,
+ outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4),
+ &tuple->dst_ipv4,
+ 4);
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria,
+ outer_headers.src_ipv4_src_ipv6.ipv4_layout.ipv4);
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria,
+ outer_headers.dst_ipv4_dst_ipv6.ipv4_layout.ipv4);
+ } else {
+ memcpy(MLX5_ADDR_OF(fte_match_param, match_value,
+ outer_headers.src_ipv4_src_ipv6.ipv6_layout.ipv6),
+ &tuple->src_ipv6,
+ 16);
+ memcpy(MLX5_ADDR_OF(fte_match_param, match_value,
+ outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
+ &tuple->dst_ipv6,
+ 16);
+ memset(MLX5_ADDR_OF(fte_match_param, match_criteria,
+ outer_headers.src_ipv4_src_ipv6.ipv6_layout.ipv6),
+ 0xff,
+ 16);
+ memset(MLX5_ADDR_OF(fte_match_param, match_criteria,
+ outer_headers.dst_ipv4_dst_ipv6.ipv6_layout.ipv6),
+ 0xff,
+ 16);
+ }
+ dest.type = MLX5_FLOW_DESTINATION_TYPE_TIR;
+ dest.tir_num = priv->direct_tir[arfs_rule->rxq].tirn;
+ rule = mlx5_add_flow_rule(ft, match_criteria_enable, match_criteria,
+ match_value, MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
+ MLX5_FS_DEFAULT_FLOW_TAG,
+ &dest);
+ if (IS_ERR(rule)) {
+ err = PTR_ERR(rule);
+ netdev_err(priv->netdev, "%s: add rule(filter id=%d, rq idx=%d) failed, err=%d\n",
+ __func__, arfs_rule->filter_id, arfs_rule->rxq, err);
+ }
+
+out:
+ kvfree(match_criteria);
+ kvfree(match_value);
+ return err ? ERR_PTR(err) : rule;
+}
+
+static void arfs_modify_rule_rq(struct mlx5e_priv *priv,
+ struct mlx5_flow_rule *rule, u16 rxq)
+{
+ struct mlx5_flow_destination dst;
+ int err = 0;
+
+ dst.type = MLX5_FLOW_DESTINATION_TYPE_TIR;
+ dst.tir_num = priv->direct_tir[rxq].tirn;
+ err = mlx5_modify_rule_destination(rule, &dst);
+ if (err)
+ netdev_warn(priv->netdev,
+			    "Failed to modify aRFS rule destination to rq=%d\n", rxq);
+}
+
+static void arfs_handle_work(struct work_struct *work)
+{
+ struct arfs_rule *arfs_rule = container_of(work,
+ struct arfs_rule,
+ arfs_work);
+ struct mlx5e_priv *priv = arfs_rule->priv;
+ struct mlx5_flow_rule *rule;
+
+ mutex_lock(&priv->state_lock);
+ if (!test_bit(MLX5E_STATE_OPENED, &priv->state)) {
+ spin_lock_bh(&priv->fs.arfs.arfs_lock);
+ hlist_del(&arfs_rule->hlist);
+ spin_unlock_bh(&priv->fs.arfs.arfs_lock);
+
+ mutex_unlock(&priv->state_lock);
+ kfree(arfs_rule);
+ goto out;
+ }
+ mutex_unlock(&priv->state_lock);
+
+ if (!arfs_rule->rule) {
+ rule = arfs_add_rule(priv, arfs_rule);
+ if (IS_ERR(rule))
+ goto out;
+ arfs_rule->rule = rule;
+ } else {
+ arfs_modify_rule_rq(priv, arfs_rule->rule,
+ arfs_rule->rxq);
+ }
+out:
+ arfs_may_expire_flow(priv);
+}
+
+/* Return the L4 destination port of an IPv4/IPv6 packet */
+static __be16 arfs_get_dst_port(const struct sk_buff *skb)
+{
+ char *transport_header;
+
+ transport_header = skb_transport_header(skb);
+ if (arfs_get_ip_proto(skb) == IPPROTO_TCP)
+ return ((struct tcphdr *)transport_header)->dest;
+ return ((struct udphdr *)transport_header)->dest;
+}
+
+/* Return the L4 source port of an IPv4/IPv6 packet */
+static __be16 arfs_get_src_port(const struct sk_buff *skb)
+{
+ char *transport_header;
+
+ transport_header = skb_transport_header(skb);
+ if (arfs_get_ip_proto(skb) == IPPROTO_TCP)
+ return ((struct tcphdr *)transport_header)->source;
+ return ((struct udphdr *)transport_header)->source;
+}
+
+static struct arfs_rule *arfs_alloc_rule(struct mlx5e_priv *priv,
+ struct arfs_table *arfs_t,
+ const struct sk_buff *skb,
+ u16 rxq, u32 flow_id)
+{
+ struct arfs_rule *rule;
+ struct arfs_tuple *tuple;
+
+ rule = kzalloc(sizeof(*rule), GFP_ATOMIC);
+ if (!rule)
+ return NULL;
+
+ rule->priv = priv;
+ rule->rxq = rxq;
+ INIT_WORK(&rule->arfs_work, arfs_handle_work);
+
+ tuple = &rule->tuple;
+ tuple->etype = skb->protocol;
+ if (tuple->etype == htons(ETH_P_IP)) {
+ tuple->src_ipv4 = ip_hdr(skb)->saddr;
+ tuple->dst_ipv4 = ip_hdr(skb)->daddr;
+ } else {
+ memcpy(&tuple->src_ipv6, &ipv6_hdr(skb)->saddr,
+ sizeof(struct in6_addr));
+ memcpy(&tuple->dst_ipv6, &ipv6_hdr(skb)->daddr,
+ sizeof(struct in6_addr));
+ }
+ tuple->ip_proto = arfs_get_ip_proto(skb);
+ tuple->src_port = arfs_get_src_port(skb);
+ tuple->dst_port = arfs_get_dst_port(skb);
+
+ rule->flow_id = flow_id;
+ rule->filter_id = priv->fs.arfs.last_filter_id++ % RPS_NO_FILTER;
+
+ hlist_add_head(&rule->hlist,
+ arfs_hash_bucket(arfs_t, tuple->src_port,
+ tuple->dst_port));
+ return rule;
+}
+
+static bool arfs_cmp_ips(struct arfs_tuple *tuple,
+ const struct sk_buff *skb)
+{
+ if (tuple->etype == htons(ETH_P_IP) &&
+ tuple->src_ipv4 == ip_hdr(skb)->saddr &&
+ tuple->dst_ipv4 == ip_hdr(skb)->daddr)
+ return true;
+ if (tuple->etype == htons(ETH_P_IPV6) &&
+ (!memcmp(&tuple->src_ipv6, &ipv6_hdr(skb)->saddr,
+ sizeof(struct in6_addr))) &&
+ (!memcmp(&tuple->dst_ipv6, &ipv6_hdr(skb)->daddr,
+ sizeof(struct in6_addr))))
+ return true;
+ return false;
+}
+
+static struct arfs_rule *arfs_find_rule(struct arfs_table *arfs_t,
+ const struct sk_buff *skb)
+{
+ struct arfs_rule *arfs_rule;
+ struct hlist_head *head;
+ __be16 src_port = arfs_get_src_port(skb);
+ __be16 dst_port = arfs_get_dst_port(skb);
+
+ head = arfs_hash_bucket(arfs_t, src_port, dst_port);
+ hlist_for_each_entry(arfs_rule, head, hlist) {
+ if (arfs_rule->tuple.src_port == src_port &&
+ arfs_rule->tuple.dst_port == dst_port &&
+ arfs_cmp_ips(&arfs_rule->tuple, skb)) {
+ return arfs_rule;
+ }
+ }
+
+ return NULL;
+}
+
+int mlx5e_rx_flow_steer(struct net_device *dev, const struct sk_buff *skb,
+ u16 rxq_index, u32 flow_id)
+{
+ struct mlx5e_priv *priv = netdev_priv(dev);
+ struct mlx5e_arfs_tables *arfs = &priv->fs.arfs;
+ struct arfs_table *arfs_t;
+ struct arfs_rule *arfs_rule;
+
+ if (skb->protocol != htons(ETH_P_IP) &&
+ skb->protocol != htons(ETH_P_IPV6))
+ return -EPROTONOSUPPORT;
+
+ arfs_t = arfs_get_table(arfs, arfs_get_ip_proto(skb), skb->protocol);
+ if (!arfs_t)
+ return -EPROTONOSUPPORT;
+
+ spin_lock_bh(&arfs->arfs_lock);
+ arfs_rule = arfs_find_rule(arfs_t, skb);
+ if (arfs_rule) {
+ if (arfs_rule->rxq == rxq_index) {
+ spin_unlock_bh(&arfs->arfs_lock);
+ return arfs_rule->filter_id;
+ }
+ arfs_rule->rxq = rxq_index;
+ } else {
+ arfs_rule = arfs_alloc_rule(priv, arfs_t, skb,
+ rxq_index, flow_id);
+ if (!arfs_rule) {
+ spin_unlock_bh(&arfs->arfs_lock);
+ return -ENOMEM;
+ }
+ }
+ queue_work(priv->fs.arfs.wq, &arfs_rule->arfs_work);
+ spin_unlock_bh(&arfs->arfs_lock);
+ return arfs_rule->filter_id;
+}
+#endif
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_clock.c b/drivers/net/ethernet/mellanox/mlx5/core/en_clock.c
index 2018eebe1531..847a8f3ac2b2 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_clock.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_clock.c
@@ -93,6 +93,8 @@ int mlx5e_hwstamp_set(struct net_device *dev, struct ifreq *ifr)
/* RX HW timestamp */
switch (config.rx_filter) {
case HWTSTAMP_FILTER_NONE:
+ /* Reset CQE compression to Admin default */
+ mlx5e_modify_rx_cqe_compression(priv, priv->params.rx_cqe_compress_admin);
break;
case HWTSTAMP_FILTER_ALL:
case HWTSTAMP_FILTER_SOME:
@@ -108,6 +110,8 @@ int mlx5e_hwstamp_set(struct net_device *dev, struct ifreq *ifr)
case HWTSTAMP_FILTER_PTP_V2_EVENT:
case HWTSTAMP_FILTER_PTP_V2_SYNC:
case HWTSTAMP_FILTER_PTP_V2_DELAY_REQ:
+ /* Disable CQE compression */
+ mlx5e_modify_rx_cqe_compression(priv, false);
config.rx_filter = HWTSTAMP_FILTER_ALL;
break;
default:
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
index 3036f279a8fd..b2db180ae2a5 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_dcbnl.c
@@ -174,8 +174,14 @@ static int mlx5e_dcbnl_ieee_getpfc(struct net_device *dev,
{
struct mlx5e_priv *priv = netdev_priv(dev);
struct mlx5_core_dev *mdev = priv->mdev;
+ struct mlx5e_pport_stats *pstats = &priv->stats.pport;
+ int i;
pfc->pfc_cap = mlx5_max_tc(mdev) + 1;
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ pfc->requests[i] = PPORT_PER_PRIO_GET(pstats, i, tx_pause);
+ pfc->indications[i] = PPORT_PER_PRIO_GET(pstats, i, rx_pause);
+ }
return mlx5_query_port_pfc(mdev, &pfc->pfc_en, NULL);
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
index 3476ab844634..fc7dcc03b1de 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_ethtool.c
@@ -165,26 +165,112 @@ static const struct {
},
};
+static unsigned long mlx5e_query_pfc_combined(struct mlx5e_priv *priv)
+{
+ struct mlx5_core_dev *mdev = priv->mdev;
+ u8 pfc_en_tx;
+ u8 pfc_en_rx;
+ int err;
+
+ err = mlx5_query_port_pfc(mdev, &pfc_en_tx, &pfc_en_rx);
+
+ return err ? 0 : pfc_en_tx | pfc_en_rx;
+}
+
+#define MLX5E_NUM_Q_CNTRS(priv) (NUM_Q_COUNTERS * (!!priv->q_counter))
+#define MLX5E_NUM_RQ_STATS(priv) \
+ (NUM_RQ_STATS * priv->params.num_channels * \
+ test_bit(MLX5E_STATE_OPENED, &priv->state))
+#define MLX5E_NUM_SQ_STATS(priv) \
+ (NUM_SQ_STATS * priv->params.num_channels * priv->params.num_tc * \
+ test_bit(MLX5E_STATE_OPENED, &priv->state))
+#define MLX5E_NUM_PFC_COUNTERS(priv) hweight8(mlx5e_query_pfc_combined(priv))
+
static int mlx5e_get_sset_count(struct net_device *dev, int sset)
{
struct mlx5e_priv *priv = netdev_priv(dev);
switch (sset) {
case ETH_SS_STATS:
- return NUM_VPORT_COUNTERS + NUM_PPORT_COUNTERS +
- priv->params.num_channels * NUM_RQ_STATS +
- priv->params.num_channels * priv->params.num_tc *
- NUM_SQ_STATS;
+ return NUM_SW_COUNTERS +
+ MLX5E_NUM_Q_CNTRS(priv) +
+ NUM_VPORT_COUNTERS + NUM_PPORT_COUNTERS +
+ MLX5E_NUM_RQ_STATS(priv) +
+ MLX5E_NUM_SQ_STATS(priv) +
+ MLX5E_NUM_PFC_COUNTERS(priv);
/* fallthrough */
default:
return -EOPNOTSUPP;
}
}
+static void mlx5e_fill_stats_strings(struct mlx5e_priv *priv, uint8_t *data)
+{
+ int i, j, tc, prio, idx = 0;
+ unsigned long pfc_combined;
+
+ /* SW counters */
+ for (i = 0; i < NUM_SW_COUNTERS; i++)
+ strcpy(data + (idx++) * ETH_GSTRING_LEN, sw_stats_desc[i].name);
+
+ /* Q counters */
+ for (i = 0; i < MLX5E_NUM_Q_CNTRS(priv); i++)
+ strcpy(data + (idx++) * ETH_GSTRING_LEN, q_stats_desc[i].name);
+
+ /* VPORT counters */
+ for (i = 0; i < NUM_VPORT_COUNTERS; i++)
+ strcpy(data + (idx++) * ETH_GSTRING_LEN,
+ vport_stats_desc[i].name);
+
+ /* PPORT counters */
+ for (i = 0; i < NUM_PPORT_802_3_COUNTERS; i++)
+ strcpy(data + (idx++) * ETH_GSTRING_LEN,
+ pport_802_3_stats_desc[i].name);
+
+ for (i = 0; i < NUM_PPORT_2863_COUNTERS; i++)
+ strcpy(data + (idx++) * ETH_GSTRING_LEN,
+ pport_2863_stats_desc[i].name);
+
+ for (i = 0; i < NUM_PPORT_2819_COUNTERS; i++)
+ strcpy(data + (idx++) * ETH_GSTRING_LEN,
+ pport_2819_stats_desc[i].name);
+
+ for (prio = 0; prio < NUM_PPORT_PRIO; prio++) {
+ for (i = 0; i < NUM_PPORT_PER_PRIO_TRAFFIC_COUNTERS; i++)
+ sprintf(data + (idx++) * ETH_GSTRING_LEN, "prio%d_%s",
+ prio,
+ pport_per_prio_traffic_stats_desc[i].name);
+ }
+
+ pfc_combined = mlx5e_query_pfc_combined(priv);
+ for_each_set_bit(prio, &pfc_combined, NUM_PPORT_PRIO) {
+ for (i = 0; i < NUM_PPORT_PER_PRIO_PFC_COUNTERS; i++) {
+ sprintf(data + (idx++) * ETH_GSTRING_LEN, "prio%d_%s",
+ prio, pport_per_prio_pfc_stats_desc[i].name);
+ }
+ }
+
+ if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
+ return;
+
+ /* per channel counters */
+ for (i = 0; i < priv->params.num_channels; i++)
+ for (j = 0; j < NUM_RQ_STATS; j++)
+ sprintf(data + (idx++) * ETH_GSTRING_LEN, "rx%d_%s", i,
+ rq_stats_desc[j].name);
+
+ for (tc = 0; tc < priv->params.num_tc; tc++)
+ for (i = 0; i < priv->params.num_channels; i++)
+ for (j = 0; j < NUM_SQ_STATS; j++)
+ sprintf(data + (idx++) * ETH_GSTRING_LEN,
+ "tx%d_%s",
+ priv->channeltc_to_txq_map[i][tc],
+ sq_stats_desc[j].name);
+}
+
static void mlx5e_get_strings(struct net_device *dev,
uint32_t stringset, uint8_t *data)
{
- int i, j, tc, idx = 0;
struct mlx5e_priv *priv = netdev_priv(dev);
switch (stringset) {
@@ -195,30 +281,7 @@ static void mlx5e_get_strings(struct net_device *dev,
break;
case ETH_SS_STATS:
- /* VPORT counters */
- for (i = 0; i < NUM_VPORT_COUNTERS; i++)
- strcpy(data + (idx++) * ETH_GSTRING_LEN,
- vport_strings[i]);
-
- /* PPORT counters */
- for (i = 0; i < NUM_PPORT_COUNTERS; i++)
- strcpy(data + (idx++) * ETH_GSTRING_LEN,
- pport_strings[i]);
-
- /* per channel counters */
- for (i = 0; i < priv->params.num_channels; i++)
- for (j = 0; j < NUM_RQ_STATS; j++)
- sprintf(data + (idx++) * ETH_GSTRING_LEN,
- "rx%d_%s", i, rq_stats_strings[j]);
-
- for (tc = 0; tc < priv->params.num_tc; tc++)
- for (i = 0; i < priv->params.num_channels; i++)
- for (j = 0; j < NUM_SQ_STATS; j++)
- sprintf(data +
- (idx++) * ETH_GSTRING_LEN,
- "tx%d_%s",
- priv->channeltc_to_txq_map[i][tc],
- sq_stats_strings[j]);
+ mlx5e_fill_stats_strings(priv, data);
break;
}
}
@@ -227,7 +290,8 @@ static void mlx5e_get_ethtool_stats(struct net_device *dev,
struct ethtool_stats *stats, u64 *data)
{
struct mlx5e_priv *priv = netdev_priv(dev);
- int i, j, tc, idx = 0;
+ int i, j, tc, prio, idx = 0;
+ unsigned long pfc_combined;
if (!data)
return;
@@ -237,33 +301,68 @@ static void mlx5e_get_ethtool_stats(struct net_device *dev,
mlx5e_update_stats(priv);
mutex_unlock(&priv->state_lock);
+ for (i = 0; i < NUM_SW_COUNTERS; i++)
+ data[idx++] = MLX5E_READ_CTR64_CPU(&priv->stats.sw,
+ sw_stats_desc, i);
+
+ for (i = 0; i < MLX5E_NUM_Q_CNTRS(priv); i++)
+ data[idx++] = MLX5E_READ_CTR32_CPU(&priv->stats.qcnt,
+ q_stats_desc, i);
+
for (i = 0; i < NUM_VPORT_COUNTERS; i++)
- data[idx++] = ((u64 *)&priv->stats.vport)[i];
+ data[idx++] = MLX5E_READ_CTR64_BE(priv->stats.vport.query_vport_out,
+ vport_stats_desc, i);
+
+ for (i = 0; i < NUM_PPORT_802_3_COUNTERS; i++)
+ data[idx++] = MLX5E_READ_CTR64_BE(&priv->stats.pport.IEEE_802_3_counters,
+ pport_802_3_stats_desc, i);
- for (i = 0; i < NUM_PPORT_COUNTERS; i++)
- data[idx++] = be64_to_cpu(((__be64 *)&priv->stats.pport)[i]);
+ for (i = 0; i < NUM_PPORT_2863_COUNTERS; i++)
+ data[idx++] = MLX5E_READ_CTR64_BE(&priv->stats.pport.RFC_2863_counters,
+ pport_2863_stats_desc, i);
+
+ for (i = 0; i < NUM_PPORT_2819_COUNTERS; i++)
+ data[idx++] = MLX5E_READ_CTR64_BE(&priv->stats.pport.RFC_2819_counters,
+ pport_2819_stats_desc, i);
+
+ for (prio = 0; prio < NUM_PPORT_PRIO; prio++) {
+ for (i = 0; i < NUM_PPORT_PER_PRIO_TRAFFIC_COUNTERS; i++)
+ data[idx++] = MLX5E_READ_CTR64_BE(&priv->stats.pport.per_prio_counters[prio],
+ pport_per_prio_traffic_stats_desc, i);
+ }
+
+ pfc_combined = mlx5e_query_pfc_combined(priv);
+ for_each_set_bit(prio, &pfc_combined, NUM_PPORT_PRIO) {
+ for (i = 0; i < NUM_PPORT_PER_PRIO_PFC_COUNTERS; i++) {
+ data[idx++] = MLX5E_READ_CTR64_BE(&priv->stats.pport.per_prio_counters[prio],
+ pport_per_prio_pfc_stats_desc, i);
+ }
+ }
+
+ if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
+ return;
/* per channel counters */
for (i = 0; i < priv->params.num_channels; i++)
for (j = 0; j < NUM_RQ_STATS; j++)
- data[idx++] = !test_bit(MLX5E_STATE_OPENED,
- &priv->state) ? 0 :
- ((u64 *)&priv->channel[i]->rq.stats)[j];
+ data[idx++] =
+ MLX5E_READ_CTR64_CPU(&priv->channel[i]->rq.stats,
+ rq_stats_desc, j);
for (tc = 0; tc < priv->params.num_tc; tc++)
for (i = 0; i < priv->params.num_channels; i++)
for (j = 0; j < NUM_SQ_STATS; j++)
- data[idx++] = !test_bit(MLX5E_STATE_OPENED,
- &priv->state) ? 0 :
- ((u64 *)&priv->channel[i]->sq[tc].stats)[j];
+ data[idx++] = MLX5E_READ_CTR64_CPU(&priv->channel[i]->sq[tc].stats,
+ sq_stats_desc, j);
}
static void mlx5e_get_ringparam(struct net_device *dev,
struct ethtool_ringparam *param)
{
struct mlx5e_priv *priv = netdev_priv(dev);
+ int rq_wq_type = priv->params.rq_wq_type;
- param->rx_max_pending = 1 << MLX5E_PARAMS_MAXIMUM_LOG_RQ_SIZE;
+ param->rx_max_pending = 1 << mlx5_max_log_rq_size(rq_wq_type);
param->tx_max_pending = 1 << MLX5E_PARAMS_MAXIMUM_LOG_SQ_SIZE;
param->rx_pending = 1 << priv->params.log_rq_size;
param->tx_pending = 1 << priv->params.log_sq_size;
@@ -274,6 +373,7 @@ static int mlx5e_set_ringparam(struct net_device *dev,
{
struct mlx5e_priv *priv = netdev_priv(dev);
bool was_opened;
+ int rq_wq_type = priv->params.rq_wq_type;
u16 min_rx_wqes;
u8 log_rq_size;
u8 log_sq_size;
@@ -289,16 +389,16 @@ static int mlx5e_set_ringparam(struct net_device *dev,
__func__);
return -EINVAL;
}
- if (param->rx_pending < (1 << MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE)) {
+ if (param->rx_pending < (1 << mlx5_min_log_rq_size(rq_wq_type))) {
netdev_info(dev, "%s: rx_pending (%d) < min (%d)\n",
__func__, param->rx_pending,
- 1 << MLX5E_PARAMS_MINIMUM_LOG_RQ_SIZE);
+ 1 << mlx5_min_log_rq_size(rq_wq_type));
return -EINVAL;
}
- if (param->rx_pending > (1 << MLX5E_PARAMS_MAXIMUM_LOG_RQ_SIZE)) {
+ if (param->rx_pending > (1 << mlx5_max_log_rq_size(rq_wq_type))) {
netdev_info(dev, "%s: rx_pending (%d) > max (%d)\n",
__func__, param->rx_pending,
- 1 << MLX5E_PARAMS_MAXIMUM_LOG_RQ_SIZE);
+ 1 << mlx5_max_log_rq_size(rq_wq_type));
return -EINVAL;
}
if (param->tx_pending < (1 << MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE)) {
@@ -316,8 +416,7 @@ static int mlx5e_set_ringparam(struct net_device *dev,
log_rq_size = order_base_2(param->rx_pending);
log_sq_size = order_base_2(param->tx_pending);
- min_rx_wqes = min_t(u16, param->rx_pending - 1,
- MLX5E_PARAMS_DEFAULT_MIN_RX_WQES);
+ min_rx_wqes = mlx5_min_rx_wqes(rq_wq_type, param->rx_pending);
if (log_rq_size == priv->params.log_rq_size &&
log_sq_size == priv->params.log_sq_size &&
@@ -357,6 +456,7 @@ static int mlx5e_set_channels(struct net_device *dev,
struct mlx5e_priv *priv = netdev_priv(dev);
int ncv = mlx5e_get_max_num_channels(priv->mdev);
unsigned int count = ch->combined_count;
+ bool arfs_enabled;
bool was_opened;
int err = 0;
@@ -385,13 +485,27 @@ static int mlx5e_set_channels(struct net_device *dev,
if (was_opened)
mlx5e_close_locked(dev);
+ arfs_enabled = dev->features & NETIF_F_NTUPLE;
+ if (arfs_enabled)
+ mlx5e_arfs_disable(priv);
+
priv->params.num_channels = count;
- mlx5e_build_default_indir_rqt(priv->params.indirection_rqt,
+ mlx5e_build_default_indir_rqt(priv->mdev, priv->params.indirection_rqt,
MLX5E_INDIR_RQT_SIZE, count);
if (was_opened)
err = mlx5e_open_locked(dev);
+ if (err)
+ goto out;
+
+ if (arfs_enabled) {
+ err = mlx5e_arfs_enable(priv);
+ if (err)
+ netdev_err(dev, "%s: mlx5e_arfs_enable failed: %d\n",
+ __func__, err);
+ }
+out:
mutex_unlock(&priv->state_lock);
return err;
@@ -499,6 +613,25 @@ static u32 ptys2ethtool_supported_port(u32 eth_proto_cap)
return 0;
}
+int mlx5e_get_max_linkspeed(struct mlx5_core_dev *mdev, u32 *speed)
+{
+ u32 max_speed = 0;
+ u32 proto_cap;
+ int err;
+ int i;
+
+ err = mlx5_query_port_proto_cap(mdev, &proto_cap, MLX5_PTYS_EN);
+ if (err)
+ return err;
+
+ for (i = 0; i < MLX5E_LINK_MODES_NUMBER; ++i)
+ if (proto_cap & MLX5E_PROT_MASK(i))
+ max_speed = max(max_speed, ptys2ethtool_table[i].speed);
+
+ *speed = max_speed;
+ return 0;
+}
+
static void get_speed_duplex(struct net_device *netdev,
u32 eth_proto_oper,
struct ethtool_cmd *cmd)
@@ -727,9 +860,8 @@ static void mlx5e_modify_tirs_hash(struct mlx5e_priv *priv, void *in, int inlen)
MLX5_SET(modify_tir_in, in, bitmask.hash, 1);
mlx5e_build_tir_ctx_hash(tirc, priv);
- for (i = 0; i < MLX5E_NUM_TT; i++)
- if (IS_HASHING_TT(i))
- mlx5_core_modify_tir(mdev, priv->tirn[i], in, inlen);
+ for (i = 0; i < MLX5E_NUM_INDIR_TIRS; i++)
+ mlx5_core_modify_tir(mdev, priv->indir_tirn[i], in, inlen);
}
static int mlx5e_set_rxfh(struct net_device *dev, const u32 *indir,
@@ -751,9 +883,11 @@ static int mlx5e_set_rxfh(struct net_device *dev, const u32 *indir,
mutex_lock(&priv->state_lock);
if (indir) {
+ u32 rqtn = priv->indir_rqtn;
+
memcpy(priv->params.indirection_rqt, indir,
sizeof(priv->params.indirection_rqt));
- mlx5e_redirect_rqt(priv, MLX5E_INDIRECTION_RQT);
+ mlx5e_redirect_rqt(priv, rqtn, MLX5E_INDIR_RQT_SIZE, 0);
}
if (key)
@@ -1036,6 +1170,108 @@ static int mlx5e_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
return mlx5_set_port_wol(mdev, mlx5_wol_mode);
}
+static int mlx5e_set_phys_id(struct net_device *dev,
+ enum ethtool_phys_id_state state)
+{
+ struct mlx5e_priv *priv = netdev_priv(dev);
+ struct mlx5_core_dev *mdev = priv->mdev;
+ u16 beacon_duration;
+
+ if (!MLX5_CAP_GEN(mdev, beacon_led))
+ return -EOPNOTSUPP;
+
+ switch (state) {
+ case ETHTOOL_ID_ACTIVE:
+ beacon_duration = MLX5_BEACON_DURATION_INF;
+ break;
+ case ETHTOOL_ID_INACTIVE:
+ beacon_duration = MLX5_BEACON_DURATION_OFF;
+ break;
+ default:
+ return -EOPNOTSUPP;
+ }
+
+ return mlx5_set_port_beacon(mdev, beacon_duration);
+}
+
+static int mlx5e_get_module_info(struct net_device *netdev,
+ struct ethtool_modinfo *modinfo)
+{
+ struct mlx5e_priv *priv = netdev_priv(netdev);
+ struct mlx5_core_dev *dev = priv->mdev;
+ int size_read = 0;
+ u8 data[4];
+
+ size_read = mlx5_query_module_eeprom(dev, 0, 2, data);
+ if (size_read < 2)
+ return -EIO;
+
+ /* data[0] = identifier byte */
+ switch (data[0]) {
+ case MLX5_MODULE_ID_QSFP:
+ modinfo->type = ETH_MODULE_SFF_8436;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8436_LEN;
+ break;
+ case MLX5_MODULE_ID_QSFP_PLUS:
+ case MLX5_MODULE_ID_QSFP28:
+ /* data[1] = revision id */
+ if (data[0] == MLX5_MODULE_ID_QSFP28 || data[1] >= 0x3) {
+ modinfo->type = ETH_MODULE_SFF_8636;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8636_LEN;
+ } else {
+ modinfo->type = ETH_MODULE_SFF_8436;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8436_LEN;
+ }
+ break;
+ case MLX5_MODULE_ID_SFP:
+ modinfo->type = ETH_MODULE_SFF_8472;
+ modinfo->eeprom_len = ETH_MODULE_SFF_8472_LEN;
+ break;
+ default:
+ netdev_err(priv->netdev, "%s: cable type not recognized:0x%x\n",
+ __func__, data[0]);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int mlx5e_get_module_eeprom(struct net_device *netdev,
+ struct ethtool_eeprom *ee,
+ u8 *data)
+{
+ struct mlx5e_priv *priv = netdev_priv(netdev);
+ struct mlx5_core_dev *mdev = priv->mdev;
+ int offset = ee->offset;
+ int size_read;
+ int i = 0;
+
+ if (!ee->len)
+ return -EINVAL;
+
+ memset(data, 0, ee->len);
+
+ while (i < ee->len) {
+ size_read = mlx5_query_module_eeprom(mdev, offset, ee->len - i,
+ data + i);
+
+ if (!size_read)
+ /* Done reading */
+ return 0;
+
+ if (size_read < 0) {
+ netdev_err(priv->netdev, "%s: mlx5_query_eeprom failed:0x%x\n",
+ __func__, size_read);
+ return 0;
+ }
+
+ i += size_read;
+ offset += size_read;
+ }
+
+ return 0;
+}
+
const struct ethtool_ops mlx5e_ethtool_ops = {
.get_drvinfo = mlx5e_get_drvinfo,
.get_link = ethtool_op_get_link,
@@ -1060,6 +1296,9 @@ const struct ethtool_ops mlx5e_ethtool_ops = {
.get_pauseparam = mlx5e_get_pauseparam,
.set_pauseparam = mlx5e_set_pauseparam,
.get_ts_info = mlx5e_get_ts_info,
+ .set_phys_id = mlx5e_set_phys_id,
.get_wol = mlx5e_get_wol,
.set_wol = mlx5e_set_wol,
+ .get_module_info = mlx5e_get_module_info,
+ .get_module_eeprom = mlx5e_get_module_eeprom,
};
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
index d00a24203410..b32740092854 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_fs.c
@@ -37,7 +37,10 @@
#include <linux/mlx5/fs.h>
#include "en.h"
-#define MLX5_SET_CFG(p, f, v) MLX5_SET(create_flow_group_in, p, f, v)
+static int mlx5e_add_l2_flow_rule(struct mlx5e_priv *priv,
+ struct mlx5e_l2_rule *ai, int type);
+static void mlx5e_del_l2_flow_rule(struct mlx5e_priv *priv,
+ struct mlx5e_l2_rule *ai);
enum {
MLX5E_FULLMATCH = 0,
@@ -58,21 +61,21 @@ enum {
MLX5E_ACTION_DEL = 2,
};
-struct mlx5e_eth_addr_hash_node {
+struct mlx5e_l2_hash_node {
struct hlist_node hlist;
u8 action;
- struct mlx5e_eth_addr_info ai;
+ struct mlx5e_l2_rule ai;
};
-static inline int mlx5e_hash_eth_addr(u8 *addr)
+static inline int mlx5e_hash_l2(u8 *addr)
{
return addr[5];
}
-static void mlx5e_add_eth_addr_to_hash(struct hlist_head *hash, u8 *addr)
+static void mlx5e_add_l2_to_hash(struct hlist_head *hash, u8 *addr)
{
- struct mlx5e_eth_addr_hash_node *hn;
- int ix = mlx5e_hash_eth_addr(addr);
+ struct mlx5e_l2_hash_node *hn;
+ int ix = mlx5e_hash_l2(addr);
int found = 0;
hlist_for_each_entry(hn, &hash[ix], hlist)
@@ -96,371 +99,12 @@ static void mlx5e_add_eth_addr_to_hash(struct hlist_head *hash, u8 *addr)
hlist_add_head(&hn->hlist, &hash[ix]);
}
-static void mlx5e_del_eth_addr_from_hash(struct mlx5e_eth_addr_hash_node *hn)
+static void mlx5e_del_l2_from_hash(struct mlx5e_l2_hash_node *hn)
{
hlist_del(&hn->hlist);
kfree(hn);
}
-static void mlx5e_del_eth_addr_from_flow_table(struct mlx5e_priv *priv,
- struct mlx5e_eth_addr_info *ai)
-{
- if (ai->tt_vec & BIT(MLX5E_TT_IPV6_IPSEC_ESP))
- mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_IPV6_IPSEC_ESP]);
-
- if (ai->tt_vec & BIT(MLX5E_TT_IPV4_IPSEC_ESP))
- mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_IPV4_IPSEC_ESP]);
-
- if (ai->tt_vec & BIT(MLX5E_TT_IPV6_IPSEC_AH))
- mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_IPV6_IPSEC_AH]);
-
- if (ai->tt_vec & BIT(MLX5E_TT_IPV4_IPSEC_AH))
- mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_IPV4_IPSEC_AH]);
-
- if (ai->tt_vec & BIT(MLX5E_TT_IPV6_TCP))
- mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_IPV6_TCP]);
-
- if (ai->tt_vec & BIT(MLX5E_TT_IPV4_TCP))
- mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_IPV4_TCP]);
-
- if (ai->tt_vec & BIT(MLX5E_TT_IPV6_UDP))
- mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_IPV6_UDP]);
-
- if (ai->tt_vec & BIT(MLX5E_TT_IPV4_UDP))
- mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_IPV4_UDP]);
-
- if (ai->tt_vec & BIT(MLX5E_TT_IPV6))
- mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_IPV6]);
-
- if (ai->tt_vec & BIT(MLX5E_TT_IPV4))
- mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_IPV4]);
-
- if (ai->tt_vec & BIT(MLX5E_TT_ANY))
- mlx5_del_flow_rule(ai->ft_rule[MLX5E_TT_ANY]);
-}
-
-static int mlx5e_get_eth_addr_type(u8 *addr)
-{
- if (is_unicast_ether_addr(addr))
- return MLX5E_UC;
-
- if ((addr[0] == 0x01) &&
- (addr[1] == 0x00) &&
- (addr[2] == 0x5e) &&
- !(addr[3] & 0x80))
- return MLX5E_MC_IPV4;
-
- if ((addr[0] == 0x33) &&
- (addr[1] == 0x33))
- return MLX5E_MC_IPV6;
-
- return MLX5E_MC_OTHER;
-}
-
-static u32 mlx5e_get_tt_vec(struct mlx5e_eth_addr_info *ai, int type)
-{
- int eth_addr_type;
- u32 ret;
-
- switch (type) {
- case MLX5E_FULLMATCH:
- eth_addr_type = mlx5e_get_eth_addr_type(ai->addr);
- switch (eth_addr_type) {
- case MLX5E_UC:
- ret =
- BIT(MLX5E_TT_IPV4_TCP) |
- BIT(MLX5E_TT_IPV6_TCP) |
- BIT(MLX5E_TT_IPV4_UDP) |
- BIT(MLX5E_TT_IPV6_UDP) |
- BIT(MLX5E_TT_IPV4_IPSEC_AH) |
- BIT(MLX5E_TT_IPV6_IPSEC_AH) |
- BIT(MLX5E_TT_IPV4_IPSEC_ESP) |
- BIT(MLX5E_TT_IPV6_IPSEC_ESP) |
- BIT(MLX5E_TT_IPV4) |
- BIT(MLX5E_TT_IPV6) |
- BIT(MLX5E_TT_ANY) |
- 0;
- break;
-
- case MLX5E_MC_IPV4:
- ret =
- BIT(MLX5E_TT_IPV4_UDP) |
- BIT(MLX5E_TT_IPV4) |
- 0;
- break;
-
- case MLX5E_MC_IPV6:
- ret =
- BIT(MLX5E_TT_IPV6_UDP) |
- BIT(MLX5E_TT_IPV6) |
- 0;
- break;
-
- case MLX5E_MC_OTHER:
- ret =
- BIT(MLX5E_TT_ANY) |
- 0;
- break;
- }
-
- break;
-
- case MLX5E_ALLMULTI:
- ret =
- BIT(MLX5E_TT_IPV4_UDP) |
- BIT(MLX5E_TT_IPV6_UDP) |
- BIT(MLX5E_TT_IPV4) |
- BIT(MLX5E_TT_IPV6) |
- BIT(MLX5E_TT_ANY) |
- 0;
- break;
-
- default: /* MLX5E_PROMISC */
- ret =
- BIT(MLX5E_TT_IPV4_TCP) |
- BIT(MLX5E_TT_IPV6_TCP) |
- BIT(MLX5E_TT_IPV4_UDP) |
- BIT(MLX5E_TT_IPV6_UDP) |
- BIT(MLX5E_TT_IPV4_IPSEC_AH) |
- BIT(MLX5E_TT_IPV6_IPSEC_AH) |
- BIT(MLX5E_TT_IPV4_IPSEC_ESP) |
- BIT(MLX5E_TT_IPV6_IPSEC_ESP) |
- BIT(MLX5E_TT_IPV4) |
- BIT(MLX5E_TT_IPV6) |
- BIT(MLX5E_TT_ANY) |
- 0;
- break;
- }
-
- return ret;
-}
-
-static int __mlx5e_add_eth_addr_rule(struct mlx5e_priv *priv,
- struct mlx5e_eth_addr_info *ai,
- int type, u32 *mc, u32 *mv)
-{
- struct mlx5_flow_destination dest;
- u8 match_criteria_enable = 0;
- struct mlx5_flow_rule **rule_p;
- struct mlx5_flow_table *ft = priv->fts.main.t;
- u8 *mc_dmac = MLX5_ADDR_OF(fte_match_param, mc,
- outer_headers.dmac_47_16);
- u8 *mv_dmac = MLX5_ADDR_OF(fte_match_param, mv,
- outer_headers.dmac_47_16);
- u32 *tirn = priv->tirn;
- u32 tt_vec;
- int err = 0;
-
- dest.type = MLX5_FLOW_DESTINATION_TYPE_TIR;
-
- switch (type) {
- case MLX5E_FULLMATCH:
- match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
- eth_broadcast_addr(mc_dmac);
- ether_addr_copy(mv_dmac, ai->addr);
- break;
-
- case MLX5E_ALLMULTI:
- match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
- mc_dmac[0] = 0x01;
- mv_dmac[0] = 0x01;
- break;
-
- case MLX5E_PROMISC:
- break;
- }
-
- tt_vec = mlx5e_get_tt_vec(ai, type);
-
- if (tt_vec & BIT(MLX5E_TT_ANY)) {
- rule_p = &ai->ft_rule[MLX5E_TT_ANY];
- dest.tir_num = tirn[MLX5E_TT_ANY];
- *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
- MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
- MLX5_FS_DEFAULT_FLOW_TAG, &dest);
- if (IS_ERR_OR_NULL(*rule_p))
- goto err_del_ai;
- ai->tt_vec |= BIT(MLX5E_TT_ANY);
- }
-
- match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
- MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ethertype);
-
- if (tt_vec & BIT(MLX5E_TT_IPV4)) {
- rule_p = &ai->ft_rule[MLX5E_TT_IPV4];
- dest.tir_num = tirn[MLX5E_TT_IPV4];
- MLX5_SET(fte_match_param, mv, outer_headers.ethertype,
- ETH_P_IP);
- *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
- MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
- MLX5_FS_DEFAULT_FLOW_TAG, &dest);
- if (IS_ERR_OR_NULL(*rule_p))
- goto err_del_ai;
- ai->tt_vec |= BIT(MLX5E_TT_IPV4);
- }
-
- if (tt_vec & BIT(MLX5E_TT_IPV6)) {
- rule_p = &ai->ft_rule[MLX5E_TT_IPV6];
- dest.tir_num = tirn[MLX5E_TT_IPV6];
- MLX5_SET(fte_match_param, mv, outer_headers.ethertype,
- ETH_P_IPV6);
- *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
- MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
- MLX5_FS_DEFAULT_FLOW_TAG, &dest);
- if (IS_ERR_OR_NULL(*rule_p))
- goto err_del_ai;
- ai->tt_vec |= BIT(MLX5E_TT_IPV6);
- }
-
- MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ip_protocol);
- MLX5_SET(fte_match_param, mv, outer_headers.ip_protocol, IPPROTO_UDP);
-
- if (tt_vec & BIT(MLX5E_TT_IPV4_UDP)) {
- rule_p = &ai->ft_rule[MLX5E_TT_IPV4_UDP];
- dest.tir_num = tirn[MLX5E_TT_IPV4_UDP];
- MLX5_SET(fte_match_param, mv, outer_headers.ethertype,
- ETH_P_IP);
- *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
- MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
- MLX5_FS_DEFAULT_FLOW_TAG, &dest);
- if (IS_ERR_OR_NULL(*rule_p))
- goto err_del_ai;
- ai->tt_vec |= BIT(MLX5E_TT_IPV4_UDP);
- }
-
- if (tt_vec & BIT(MLX5E_TT_IPV6_UDP)) {
- rule_p = &ai->ft_rule[MLX5E_TT_IPV6_UDP];
- dest.tir_num = tirn[MLX5E_TT_IPV6_UDP];
- MLX5_SET(fte_match_param, mv, outer_headers.ethertype,
- ETH_P_IPV6);
- *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
- MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
- MLX5_FS_DEFAULT_FLOW_TAG, &dest);
- if (IS_ERR_OR_NULL(*rule_p))
- goto err_del_ai;
- ai->tt_vec |= BIT(MLX5E_TT_IPV6_UDP);
- }
-
- MLX5_SET(fte_match_param, mv, outer_headers.ip_protocol, IPPROTO_TCP);
-
- if (tt_vec & BIT(MLX5E_TT_IPV4_TCP)) {
- rule_p = &ai->ft_rule[MLX5E_TT_IPV4_TCP];
- dest.tir_num = tirn[MLX5E_TT_IPV4_TCP];
- MLX5_SET(fte_match_param, mv, outer_headers.ethertype,
- ETH_P_IP);
- *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
- MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
- MLX5_FS_DEFAULT_FLOW_TAG, &dest);
- if (IS_ERR_OR_NULL(*rule_p))
- goto err_del_ai;
- ai->tt_vec |= BIT(MLX5E_TT_IPV4_TCP);
- }
-
- if (tt_vec & BIT(MLX5E_TT_IPV6_TCP)) {
- rule_p = &ai->ft_rule[MLX5E_TT_IPV6_TCP];
- dest.tir_num = tirn[MLX5E_TT_IPV6_TCP];
- MLX5_SET(fte_match_param, mv, outer_headers.ethertype,
- ETH_P_IPV6);
- *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
- MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
- MLX5_FS_DEFAULT_FLOW_TAG, &dest);
- if (IS_ERR_OR_NULL(*rule_p))
- goto err_del_ai;
-
- ai->tt_vec |= BIT(MLX5E_TT_IPV6_TCP);
- }
-
- MLX5_SET(fte_match_param, mv, outer_headers.ip_protocol, IPPROTO_AH);
-
- if (tt_vec & BIT(MLX5E_TT_IPV4_IPSEC_AH)) {
- rule_p = &ai->ft_rule[MLX5E_TT_IPV4_IPSEC_AH];
- dest.tir_num = tirn[MLX5E_TT_IPV4_IPSEC_AH];
- MLX5_SET(fte_match_param, mv, outer_headers.ethertype,
- ETH_P_IP);
- *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
- MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
- MLX5_FS_DEFAULT_FLOW_TAG, &dest);
- if (IS_ERR_OR_NULL(*rule_p))
- goto err_del_ai;
- ai->tt_vec |= BIT(MLX5E_TT_IPV4_IPSEC_AH);
- }
-
- if (tt_vec & BIT(MLX5E_TT_IPV6_IPSEC_AH)) {
- rule_p = &ai->ft_rule[MLX5E_TT_IPV6_IPSEC_AH];
- dest.tir_num = tirn[MLX5E_TT_IPV6_IPSEC_AH];
- MLX5_SET(fte_match_param, mv, outer_headers.ethertype,
- ETH_P_IPV6);
- *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
- MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
- MLX5_FS_DEFAULT_FLOW_TAG, &dest);
- if (IS_ERR_OR_NULL(*rule_p))
- goto err_del_ai;
- ai->tt_vec |= BIT(MLX5E_TT_IPV6_IPSEC_AH);
- }
-
- MLX5_SET(fte_match_param, mv, outer_headers.ip_protocol, IPPROTO_ESP);
-
- if (tt_vec & BIT(MLX5E_TT_IPV4_IPSEC_ESP)) {
- rule_p = &ai->ft_rule[MLX5E_TT_IPV4_IPSEC_ESP];
- dest.tir_num = tirn[MLX5E_TT_IPV4_IPSEC_ESP];
- MLX5_SET(fte_match_param, mv, outer_headers.ethertype,
- ETH_P_IP);
- *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
- MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
- MLX5_FS_DEFAULT_FLOW_TAG, &dest);
- if (IS_ERR_OR_NULL(*rule_p))
- goto err_del_ai;
- ai->tt_vec |= BIT(MLX5E_TT_IPV4_IPSEC_ESP);
- }
-
- if (tt_vec & BIT(MLX5E_TT_IPV6_IPSEC_ESP)) {
- rule_p = &ai->ft_rule[MLX5E_TT_IPV6_IPSEC_ESP];
- dest.tir_num = tirn[MLX5E_TT_IPV6_IPSEC_ESP];
- MLX5_SET(fte_match_param, mv, outer_headers.ethertype,
- ETH_P_IPV6);
- *rule_p = mlx5_add_flow_rule(ft, match_criteria_enable, mc, mv,
- MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
- MLX5_FS_DEFAULT_FLOW_TAG, &dest);
- if (IS_ERR_OR_NULL(*rule_p))
- goto err_del_ai;
- ai->tt_vec |= BIT(MLX5E_TT_IPV6_IPSEC_ESP);
- }
-
- return 0;
-
-err_del_ai:
- err = PTR_ERR(*rule_p);
- *rule_p = NULL;
- mlx5e_del_eth_addr_from_flow_table(priv, ai);
-
- return err;
-}
-
-static int mlx5e_add_eth_addr_rule(struct mlx5e_priv *priv,
- struct mlx5e_eth_addr_info *ai, int type)
-{
- u32 *match_criteria;
- u32 *match_value;
- int err = 0;
-
- match_value = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
- match_criteria = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
- if (!match_value || !match_criteria) {
- netdev_err(priv->netdev, "%s: alloc failed\n", __func__);
- err = -ENOMEM;
- goto add_eth_addr_rule_out;
- }
-
- err = __mlx5e_add_eth_addr_rule(priv, ai, type, match_criteria,
- match_value);
-
-add_eth_addr_rule_out:
- kvfree(match_criteria);
- kvfree(match_value);
-
- return err;
-}
-
static int mlx5e_vport_context_update_vlans(struct mlx5e_priv *priv)
{
struct net_device *ndev = priv->netdev;
@@ -472,7 +116,7 @@ static int mlx5e_vport_context_update_vlans(struct mlx5e_priv *priv)
int i;
list_size = 0;
- for_each_set_bit(vlan, priv->vlan.active_vlans, VLAN_N_VID)
+ for_each_set_bit(vlan, priv->fs.vlan.active_vlans, VLAN_N_VID)
list_size++;
max_list_size = 1 << MLX5_CAP_GEN(priv->mdev, log_max_vlan_list);
@@ -489,7 +133,7 @@ static int mlx5e_vport_context_update_vlans(struct mlx5e_priv *priv)
return -ENOMEM;
i = 0;
- for_each_set_bit(vlan, priv->vlan.active_vlans, VLAN_N_VID) {
+ for_each_set_bit(vlan, priv->fs.vlan.active_vlans, VLAN_N_VID) {
if (i >= list_size)
break;
vlans[i++] = vlan;
@@ -514,28 +158,28 @@ static int __mlx5e_add_vlan_rule(struct mlx5e_priv *priv,
enum mlx5e_vlan_rule_type rule_type,
u16 vid, u32 *mc, u32 *mv)
{
- struct mlx5_flow_table *ft = priv->fts.vlan.t;
+ struct mlx5_flow_table *ft = priv->fs.vlan.ft.t;
struct mlx5_flow_destination dest;
u8 match_criteria_enable = 0;
struct mlx5_flow_rule **rule_p;
int err = 0;
dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
- dest.ft = priv->fts.main.t;
+ dest.ft = priv->fs.l2.ft.t;
match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.vlan_tag);
switch (rule_type) {
case MLX5E_VLAN_RULE_TYPE_UNTAGGED:
- rule_p = &priv->vlan.untagged_rule;
+ rule_p = &priv->fs.vlan.untagged_rule;
break;
case MLX5E_VLAN_RULE_TYPE_ANY_VID:
- rule_p = &priv->vlan.any_vlan_rule;
+ rule_p = &priv->fs.vlan.any_vlan_rule;
MLX5_SET(fte_match_param, mv, outer_headers.vlan_tag, 1);
break;
default: /* MLX5E_VLAN_RULE_TYPE_MATCH_VID */
- rule_p = &priv->vlan.active_vlans_rule[vid];
+ rule_p = &priv->fs.vlan.active_vlans_rule[vid];
MLX5_SET(fte_match_param, mv, outer_headers.vlan_tag, 1);
MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.first_vid);
MLX5_SET(fte_match_param, mv, outer_headers.first_vid, vid);
@@ -589,22 +233,22 @@ static void mlx5e_del_vlan_rule(struct mlx5e_priv *priv,
{
switch (rule_type) {
case MLX5E_VLAN_RULE_TYPE_UNTAGGED:
- if (priv->vlan.untagged_rule) {
- mlx5_del_flow_rule(priv->vlan.untagged_rule);
- priv->vlan.untagged_rule = NULL;
+ if (priv->fs.vlan.untagged_rule) {
+ mlx5_del_flow_rule(priv->fs.vlan.untagged_rule);
+ priv->fs.vlan.untagged_rule = NULL;
}
break;
case MLX5E_VLAN_RULE_TYPE_ANY_VID:
- if (priv->vlan.any_vlan_rule) {
- mlx5_del_flow_rule(priv->vlan.any_vlan_rule);
- priv->vlan.any_vlan_rule = NULL;
+ if (priv->fs.vlan.any_vlan_rule) {
+ mlx5_del_flow_rule(priv->fs.vlan.any_vlan_rule);
+ priv->fs.vlan.any_vlan_rule = NULL;
}
break;
case MLX5E_VLAN_RULE_TYPE_MATCH_VID:
mlx5e_vport_context_update_vlans(priv);
- if (priv->vlan.active_vlans_rule[vid]) {
- mlx5_del_flow_rule(priv->vlan.active_vlans_rule[vid]);
- priv->vlan.active_vlans_rule[vid] = NULL;
+ if (priv->fs.vlan.active_vlans_rule[vid]) {
+ mlx5_del_flow_rule(priv->fs.vlan.active_vlans_rule[vid]);
+ priv->fs.vlan.active_vlans_rule[vid] = NULL;
}
mlx5e_vport_context_update_vlans(priv);
break;
@@ -613,10 +257,10 @@ static void mlx5e_del_vlan_rule(struct mlx5e_priv *priv,
void mlx5e_enable_vlan_filter(struct mlx5e_priv *priv)
{
- if (!priv->vlan.filter_disabled)
+ if (!priv->fs.vlan.filter_disabled)
return;
- priv->vlan.filter_disabled = false;
+ priv->fs.vlan.filter_disabled = false;
if (priv->netdev->flags & IFF_PROMISC)
return;
mlx5e_del_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_ANY_VID, 0);
@@ -624,10 +268,10 @@ void mlx5e_enable_vlan_filter(struct mlx5e_priv *priv)
void mlx5e_disable_vlan_filter(struct mlx5e_priv *priv)
{
- if (priv->vlan.filter_disabled)
+ if (priv->fs.vlan.filter_disabled)
return;
- priv->vlan.filter_disabled = true;
+ priv->fs.vlan.filter_disabled = true;
if (priv->netdev->flags & IFF_PROMISC)
return;
mlx5e_add_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_ANY_VID, 0);
@@ -638,7 +282,7 @@ int mlx5e_vlan_rx_add_vid(struct net_device *dev, __always_unused __be16 proto,
{
struct mlx5e_priv *priv = netdev_priv(dev);
- set_bit(vid, priv->vlan.active_vlans);
+ set_bit(vid, priv->fs.vlan.active_vlans);
return mlx5e_add_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_MATCH_VID, vid);
}
@@ -648,7 +292,7 @@ int mlx5e_vlan_rx_kill_vid(struct net_device *dev, __always_unused __be16 proto,
{
struct mlx5e_priv *priv = netdev_priv(dev);
- clear_bit(vid, priv->vlan.active_vlans);
+ clear_bit(vid, priv->fs.vlan.active_vlans);
mlx5e_del_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_MATCH_VID, vid);
@@ -656,21 +300,21 @@ int mlx5e_vlan_rx_kill_vid(struct net_device *dev, __always_unused __be16 proto,
}
#define mlx5e_for_each_hash_node(hn, tmp, hash, i) \
- for (i = 0; i < MLX5E_ETH_ADDR_HASH_SIZE; i++) \
+ for (i = 0; i < MLX5E_L2_ADDR_HASH_SIZE; i++) \
hlist_for_each_entry_safe(hn, tmp, &hash[i], hlist)
-static void mlx5e_execute_action(struct mlx5e_priv *priv,
- struct mlx5e_eth_addr_hash_node *hn)
+static void mlx5e_execute_l2_action(struct mlx5e_priv *priv,
+ struct mlx5e_l2_hash_node *hn)
{
switch (hn->action) {
case MLX5E_ACTION_ADD:
- mlx5e_add_eth_addr_rule(priv, &hn->ai, MLX5E_FULLMATCH);
+ mlx5e_add_l2_flow_rule(priv, &hn->ai, MLX5E_FULLMATCH);
hn->action = MLX5E_ACTION_NONE;
break;
case MLX5E_ACTION_DEL:
- mlx5e_del_eth_addr_from_flow_table(priv, &hn->ai);
- mlx5e_del_eth_addr_from_hash(hn);
+ mlx5e_del_l2_flow_rule(priv, &hn->ai);
+ mlx5e_del_l2_from_hash(hn);
break;
}
}
@@ -682,14 +326,14 @@ static void mlx5e_sync_netdev_addr(struct mlx5e_priv *priv)
netif_addr_lock_bh(netdev);
- mlx5e_add_eth_addr_to_hash(priv->eth_addr.netdev_uc,
- priv->netdev->dev_addr);
+ mlx5e_add_l2_to_hash(priv->fs.l2.netdev_uc,
+ priv->netdev->dev_addr);
netdev_for_each_uc_addr(ha, netdev)
- mlx5e_add_eth_addr_to_hash(priv->eth_addr.netdev_uc, ha->addr);
+ mlx5e_add_l2_to_hash(priv->fs.l2.netdev_uc, ha->addr);
netdev_for_each_mc_addr(ha, netdev)
- mlx5e_add_eth_addr_to_hash(priv->eth_addr.netdev_mc, ha->addr);
+ mlx5e_add_l2_to_hash(priv->fs.l2.netdev_mc, ha->addr);
netif_addr_unlock_bh(netdev);
}
@@ -699,17 +343,17 @@ static void mlx5e_fill_addr_array(struct mlx5e_priv *priv, int list_type,
{
bool is_uc = (list_type == MLX5_NVPRT_LIST_TYPE_UC);
struct net_device *ndev = priv->netdev;
- struct mlx5e_eth_addr_hash_node *hn;
+ struct mlx5e_l2_hash_node *hn;
struct hlist_head *addr_list;
struct hlist_node *tmp;
int i = 0;
int hi;
- addr_list = is_uc ? priv->eth_addr.netdev_uc : priv->eth_addr.netdev_mc;
+ addr_list = is_uc ? priv->fs.l2.netdev_uc : priv->fs.l2.netdev_mc;
if (is_uc) /* Make sure our own address is pushed first */
ether_addr_copy(addr_array[i++], ndev->dev_addr);
- else if (priv->eth_addr.broadcast_enabled)
+ else if (priv->fs.l2.broadcast_enabled)
ether_addr_copy(addr_array[i++], ndev->broadcast);
mlx5e_for_each_hash_node(hn, tmp, addr_list, hi) {
@@ -725,7 +369,7 @@ static void mlx5e_vport_context_update_addr_list(struct mlx5e_priv *priv,
int list_type)
{
bool is_uc = (list_type == MLX5_NVPRT_LIST_TYPE_UC);
- struct mlx5e_eth_addr_hash_node *hn;
+ struct mlx5e_l2_hash_node *hn;
u8 (*addr_array)[ETH_ALEN] = NULL;
struct hlist_head *addr_list;
struct hlist_node *tmp;
@@ -734,12 +378,12 @@ static void mlx5e_vport_context_update_addr_list(struct mlx5e_priv *priv,
int err;
int hi;
- size = is_uc ? 0 : (priv->eth_addr.broadcast_enabled ? 1 : 0);
+ size = is_uc ? 0 : (priv->fs.l2.broadcast_enabled ? 1 : 0);
max_size = is_uc ?
1 << MLX5_CAP_GEN(priv->mdev, log_max_current_uc_list) :
1 << MLX5_CAP_GEN(priv->mdev, log_max_current_mc_list);
- addr_list = is_uc ? priv->eth_addr.netdev_uc : priv->eth_addr.netdev_mc;
+ addr_list = is_uc ? priv->fs.l2.netdev_uc : priv->fs.l2.netdev_mc;
mlx5e_for_each_hash_node(hn, tmp, addr_list, hi)
size++;
@@ -770,7 +414,7 @@ out:
static void mlx5e_vport_context_update(struct mlx5e_priv *priv)
{
- struct mlx5e_eth_addr_db *ea = &priv->eth_addr;
+ struct mlx5e_l2_table *ea = &priv->fs.l2;
mlx5e_vport_context_update_addr_list(priv, MLX5_NVPRT_LIST_TYPE_UC);
mlx5e_vport_context_update_addr_list(priv, MLX5_NVPRT_LIST_TYPE_MC);
@@ -781,26 +425,26 @@ static void mlx5e_vport_context_update(struct mlx5e_priv *priv)
static void mlx5e_apply_netdev_addr(struct mlx5e_priv *priv)
{
- struct mlx5e_eth_addr_hash_node *hn;
+ struct mlx5e_l2_hash_node *hn;
struct hlist_node *tmp;
int i;
- mlx5e_for_each_hash_node(hn, tmp, priv->eth_addr.netdev_uc, i)
- mlx5e_execute_action(priv, hn);
+ mlx5e_for_each_hash_node(hn, tmp, priv->fs.l2.netdev_uc, i)
+ mlx5e_execute_l2_action(priv, hn);
- mlx5e_for_each_hash_node(hn, tmp, priv->eth_addr.netdev_mc, i)
- mlx5e_execute_action(priv, hn);
+ mlx5e_for_each_hash_node(hn, tmp, priv->fs.l2.netdev_mc, i)
+ mlx5e_execute_l2_action(priv, hn);
}
static void mlx5e_handle_netdev_addr(struct mlx5e_priv *priv)
{
- struct mlx5e_eth_addr_hash_node *hn;
+ struct mlx5e_l2_hash_node *hn;
struct hlist_node *tmp;
int i;
- mlx5e_for_each_hash_node(hn, tmp, priv->eth_addr.netdev_uc, i)
+ mlx5e_for_each_hash_node(hn, tmp, priv->fs.l2.netdev_uc, i)
hn->action = MLX5E_ACTION_DEL;
- mlx5e_for_each_hash_node(hn, tmp, priv->eth_addr.netdev_mc, i)
+ mlx5e_for_each_hash_node(hn, tmp, priv->fs.l2.netdev_mc, i)
hn->action = MLX5E_ACTION_DEL;
if (!test_bit(MLX5E_STATE_DESTROYING, &priv->state))
@@ -814,7 +458,7 @@ void mlx5e_set_rx_mode_work(struct work_struct *work)
struct mlx5e_priv *priv = container_of(work, struct mlx5e_priv,
set_rx_mode_work);
- struct mlx5e_eth_addr_db *ea = &priv->eth_addr;
+ struct mlx5e_l2_table *ea = &priv->fs.l2;
struct net_device *ndev = priv->netdev;
bool rx_mode_enable = !test_bit(MLX5E_STATE_DESTROYING, &priv->state);
@@ -830,27 +474,27 @@ void mlx5e_set_rx_mode_work(struct work_struct *work)
bool disable_broadcast = ea->broadcast_enabled && !broadcast_enabled;
if (enable_promisc) {
- mlx5e_add_eth_addr_rule(priv, &ea->promisc, MLX5E_PROMISC);
- if (!priv->vlan.filter_disabled)
+ mlx5e_add_l2_flow_rule(priv, &ea->promisc, MLX5E_PROMISC);
+ if (!priv->fs.vlan.filter_disabled)
mlx5e_add_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_ANY_VID,
0);
}
if (enable_allmulti)
- mlx5e_add_eth_addr_rule(priv, &ea->allmulti, MLX5E_ALLMULTI);
+ mlx5e_add_l2_flow_rule(priv, &ea->allmulti, MLX5E_ALLMULTI);
if (enable_broadcast)
- mlx5e_add_eth_addr_rule(priv, &ea->broadcast, MLX5E_FULLMATCH);
+ mlx5e_add_l2_flow_rule(priv, &ea->broadcast, MLX5E_FULLMATCH);
mlx5e_handle_netdev_addr(priv);
if (disable_broadcast)
- mlx5e_del_eth_addr_from_flow_table(priv, &ea->broadcast);
+ mlx5e_del_l2_flow_rule(priv, &ea->broadcast);
if (disable_allmulti)
- mlx5e_del_eth_addr_from_flow_table(priv, &ea->allmulti);
+ mlx5e_del_l2_flow_rule(priv, &ea->allmulti);
if (disable_promisc) {
- if (!priv->vlan.filter_disabled)
+ if (!priv->fs.vlan.filter_disabled)
mlx5e_del_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_ANY_VID,
0);
- mlx5e_del_eth_addr_from_flow_table(priv, &ea->promisc);
+ mlx5e_del_l2_flow_rule(priv, &ea->promisc);
}
ea->promisc_enabled = promisc_enabled;
@@ -872,224 +516,454 @@ static void mlx5e_destroy_groups(struct mlx5e_flow_table *ft)
ft->num_groups = 0;
}
-void mlx5e_init_eth_addr(struct mlx5e_priv *priv)
+void mlx5e_init_l2_addr(struct mlx5e_priv *priv)
{
- ether_addr_copy(priv->eth_addr.broadcast.addr, priv->netdev->broadcast);
+ ether_addr_copy(priv->fs.l2.broadcast.addr, priv->netdev->broadcast);
}
-#define MLX5E_MAIN_GROUP0_SIZE BIT(3)
-#define MLX5E_MAIN_GROUP1_SIZE BIT(1)
-#define MLX5E_MAIN_GROUP2_SIZE BIT(0)
-#define MLX5E_MAIN_GROUP3_SIZE BIT(14)
-#define MLX5E_MAIN_GROUP4_SIZE BIT(13)
-#define MLX5E_MAIN_GROUP5_SIZE BIT(11)
-#define MLX5E_MAIN_GROUP6_SIZE BIT(2)
-#define MLX5E_MAIN_GROUP7_SIZE BIT(1)
-#define MLX5E_MAIN_GROUP8_SIZE BIT(0)
-#define MLX5E_MAIN_TABLE_SIZE (MLX5E_MAIN_GROUP0_SIZE +\
- MLX5E_MAIN_GROUP1_SIZE +\
- MLX5E_MAIN_GROUP2_SIZE +\
- MLX5E_MAIN_GROUP3_SIZE +\
- MLX5E_MAIN_GROUP4_SIZE +\
- MLX5E_MAIN_GROUP5_SIZE +\
- MLX5E_MAIN_GROUP6_SIZE +\
- MLX5E_MAIN_GROUP7_SIZE +\
- MLX5E_MAIN_GROUP8_SIZE)
-
-static int __mlx5e_create_main_groups(struct mlx5e_flow_table *ft, u32 *in,
- int inlen)
+void mlx5e_destroy_flow_table(struct mlx5e_flow_table *ft)
{
- u8 *mc = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria);
- u8 *dmac = MLX5_ADDR_OF(create_flow_group_in, in,
- match_criteria.outer_headers.dmac_47_16);
+ mlx5e_destroy_groups(ft);
+ kfree(ft->g);
+ mlx5_destroy_flow_table(ft->t);
+ ft->t = NULL;
+}
+
+static void mlx5e_cleanup_ttc_rules(struct mlx5e_ttc_table *ttc)
+{
+ int i;
+
+ for (i = 0; i < MLX5E_NUM_TT; i++) {
+ if (!IS_ERR_OR_NULL(ttc->rules[i])) {
+ mlx5_del_flow_rule(ttc->rules[i]);
+ ttc->rules[i] = NULL;
+ }
+ }
+}
+
+static struct {
+ u16 etype;
+ u8 proto;
+} ttc_rules[] = {
+ [MLX5E_TT_IPV4_TCP] = {
+ .etype = ETH_P_IP,
+ .proto = IPPROTO_TCP,
+ },
+ [MLX5E_TT_IPV6_TCP] = {
+ .etype = ETH_P_IPV6,
+ .proto = IPPROTO_TCP,
+ },
+ [MLX5E_TT_IPV4_UDP] = {
+ .etype = ETH_P_IP,
+ .proto = IPPROTO_UDP,
+ },
+ [MLX5E_TT_IPV6_UDP] = {
+ .etype = ETH_P_IPV6,
+ .proto = IPPROTO_UDP,
+ },
+ [MLX5E_TT_IPV4_IPSEC_AH] = {
+ .etype = ETH_P_IP,
+ .proto = IPPROTO_AH,
+ },
+ [MLX5E_TT_IPV6_IPSEC_AH] = {
+ .etype = ETH_P_IPV6,
+ .proto = IPPROTO_AH,
+ },
+ [MLX5E_TT_IPV4_IPSEC_ESP] = {
+ .etype = ETH_P_IP,
+ .proto = IPPROTO_ESP,
+ },
+ [MLX5E_TT_IPV6_IPSEC_ESP] = {
+ .etype = ETH_P_IPV6,
+ .proto = IPPROTO_ESP,
+ },
+ [MLX5E_TT_IPV4] = {
+ .etype = ETH_P_IP,
+ .proto = 0,
+ },
+ [MLX5E_TT_IPV6] = {
+ .etype = ETH_P_IPV6,
+ .proto = 0,
+ },
+ [MLX5E_TT_ANY] = {
+ .etype = 0,
+ .proto = 0,
+ },
+};
+
+static struct mlx5_flow_rule *mlx5e_generate_ttc_rule(struct mlx5e_priv *priv,
+ struct mlx5_flow_table *ft,
+ struct mlx5_flow_destination *dest,
+ u16 etype,
+ u8 proto)
+{
+ struct mlx5_flow_rule *rule;
+ u8 match_criteria_enable = 0;
+ u32 *match_criteria;
+ u32 *match_value;
+ int err = 0;
+
+ match_value = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
+ match_criteria = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
+ if (!match_value || !match_criteria) {
+ netdev_err(priv->netdev, "%s: alloc failed\n", __func__);
+ err = -ENOMEM;
+ goto out;
+ }
+
+ if (proto) {
+ match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.ip_protocol);
+ MLX5_SET(fte_match_param, match_value, outer_headers.ip_protocol, proto);
+ }
+ if (etype) {
+ match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.ethertype);
+ MLX5_SET(fte_match_param, match_value, outer_headers.ethertype, etype);
+ }
+
+ rule = mlx5_add_flow_rule(ft, match_criteria_enable,
+ match_criteria, match_value,
+ MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
+ MLX5_FS_DEFAULT_FLOW_TAG,
+ dest);
+ if (IS_ERR(rule)) {
+ err = PTR_ERR(rule);
+ netdev_err(priv->netdev, "%s: add rule failed\n", __func__);
+ }
+out:
+ kvfree(match_criteria);
+ kvfree(match_value);
+ return err ? ERR_PTR(err) : rule;
+}
+
+static int mlx5e_generate_ttc_table_rules(struct mlx5e_priv *priv)
+{
+ struct mlx5_flow_destination dest;
+ struct mlx5e_ttc_table *ttc;
+ struct mlx5_flow_rule **rules;
+ struct mlx5_flow_table *ft;
+ int tt;
int err;
+
+ ttc = &priv->fs.ttc;
+ ft = ttc->ft.t;
+ rules = ttc->rules;
+
+ dest.type = MLX5_FLOW_DESTINATION_TYPE_TIR;
+ for (tt = 0; tt < MLX5E_NUM_TT; tt++) {
+ if (tt == MLX5E_TT_ANY)
+ dest.tir_num = priv->direct_tir[0].tirn;
+ else
+ dest.tir_num = priv->indir_tirn[tt];
+ rules[tt] = mlx5e_generate_ttc_rule(priv, ft, &dest,
+ ttc_rules[tt].etype,
+ ttc_rules[tt].proto);
+ if (IS_ERR(rules[tt]))
+ goto del_rules;
+ }
+
+ return 0;
+
+del_rules:
+ err = PTR_ERR(rules[tt]);
+ rules[tt] = NULL;
+ mlx5e_cleanup_ttc_rules(ttc);
+ return err;
+}
+
+#define MLX5E_TTC_NUM_GROUPS 3
+#define MLX5E_TTC_GROUP1_SIZE BIT(3)
+#define MLX5E_TTC_GROUP2_SIZE BIT(1)
+#define MLX5E_TTC_GROUP3_SIZE BIT(0)
+#define MLX5E_TTC_TABLE_SIZE (MLX5E_TTC_GROUP1_SIZE +\
+ MLX5E_TTC_GROUP2_SIZE +\
+ MLX5E_TTC_GROUP3_SIZE)
+static int mlx5e_create_ttc_table_groups(struct mlx5e_ttc_table *ttc)
+{
+ int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
+ struct mlx5e_flow_table *ft = &ttc->ft;
int ix = 0;
+ u32 *in;
+ int err;
+ u8 *mc;
- memset(in, 0, inlen);
- MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
- MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ethertype);
- MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ip_protocol);
- MLX5_SET_CFG(in, start_flow_index, ix);
- ix += MLX5E_MAIN_GROUP0_SIZE;
- MLX5_SET_CFG(in, end_flow_index, ix - 1);
- ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
- if (IS_ERR(ft->g[ft->num_groups]))
- goto err_destroy_groups;
- ft->num_groups++;
+ ft->g = kcalloc(MLX5E_TTC_NUM_GROUPS,
+ sizeof(*ft->g), GFP_KERNEL);
+ if (!ft->g)
+ return -ENOMEM;
+ in = mlx5_vzalloc(inlen);
+ if (!in) {
+ kfree(ft->g);
+ return -ENOMEM;
+ }
- memset(in, 0, inlen);
- MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
+ /* L4 Group */
+ mc = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria);
+ MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ip_protocol);
MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ethertype);
+ MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
MLX5_SET_CFG(in, start_flow_index, ix);
- ix += MLX5E_MAIN_GROUP1_SIZE;
+ ix += MLX5E_TTC_GROUP1_SIZE;
MLX5_SET_CFG(in, end_flow_index, ix - 1);
ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
if (IS_ERR(ft->g[ft->num_groups]))
- goto err_destroy_groups;
+ goto err;
ft->num_groups++;
- memset(in, 0, inlen);
+ /* L3 Group */
+ MLX5_SET(fte_match_param, mc, outer_headers.ip_protocol, 0);
MLX5_SET_CFG(in, start_flow_index, ix);
- ix += MLX5E_MAIN_GROUP2_SIZE;
+ ix += MLX5E_TTC_GROUP2_SIZE;
MLX5_SET_CFG(in, end_flow_index, ix - 1);
ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
if (IS_ERR(ft->g[ft->num_groups]))
- goto err_destroy_groups;
+ goto err;
ft->num_groups++;
+ /* Any Group */
memset(in, 0, inlen);
- MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
- MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ethertype);
- MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ip_protocol);
- eth_broadcast_addr(dmac);
MLX5_SET_CFG(in, start_flow_index, ix);
- ix += MLX5E_MAIN_GROUP3_SIZE;
+ ix += MLX5E_TTC_GROUP3_SIZE;
MLX5_SET_CFG(in, end_flow_index, ix - 1);
ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
if (IS_ERR(ft->g[ft->num_groups]))
- goto err_destroy_groups;
+ goto err;
ft->num_groups++;
- memset(in, 0, inlen);
- MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
- MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ethertype);
- eth_broadcast_addr(dmac);
- MLX5_SET_CFG(in, start_flow_index, ix);
- ix += MLX5E_MAIN_GROUP4_SIZE;
- MLX5_SET_CFG(in, end_flow_index, ix - 1);
- ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
- if (IS_ERR(ft->g[ft->num_groups]))
- goto err_destroy_groups;
- ft->num_groups++;
+ kvfree(in);
+ return 0;
- memset(in, 0, inlen);
- MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
- eth_broadcast_addr(dmac);
- MLX5_SET_CFG(in, start_flow_index, ix);
- ix += MLX5E_MAIN_GROUP5_SIZE;
- MLX5_SET_CFG(in, end_flow_index, ix - 1);
- ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
- if (IS_ERR(ft->g[ft->num_groups]))
- goto err_destroy_groups;
- ft->num_groups++;
+err:
+ err = PTR_ERR(ft->g[ft->num_groups]);
+ ft->g[ft->num_groups] = NULL;
+ kvfree(in);
- memset(in, 0, inlen);
- MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
- MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ethertype);
- MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ip_protocol);
- dmac[0] = 0x01;
+ return err;
+}
+
+static void mlx5e_destroy_ttc_table(struct mlx5e_priv *priv)
+{
+ struct mlx5e_ttc_table *ttc = &priv->fs.ttc;
+
+ mlx5e_cleanup_ttc_rules(ttc);
+ mlx5e_destroy_flow_table(&ttc->ft);
+}
+
+static int mlx5e_create_ttc_table(struct mlx5e_priv *priv)
+{
+ struct mlx5e_ttc_table *ttc = &priv->fs.ttc;
+ struct mlx5e_flow_table *ft = &ttc->ft;
+ int err;
+
+ ft->t = mlx5_create_flow_table(priv->fs.ns, MLX5E_NIC_PRIO,
+ MLX5E_TTC_TABLE_SIZE, MLX5E_TTC_FT_LEVEL);
+ if (IS_ERR(ft->t)) {
+ err = PTR_ERR(ft->t);
+ ft->t = NULL;
+ return err;
+ }
+
+ err = mlx5e_create_ttc_table_groups(ttc);
+ if (err)
+ goto err;
+
+ err = mlx5e_generate_ttc_table_rules(priv);
+ if (err)
+ goto err;
+
+ return 0;
+err:
+ mlx5e_destroy_flow_table(ft);
+ return err;
+}
+
+static void mlx5e_del_l2_flow_rule(struct mlx5e_priv *priv,
+ struct mlx5e_l2_rule *ai)
+{
+ if (!IS_ERR_OR_NULL(ai->rule)) {
+ mlx5_del_flow_rule(ai->rule);
+ ai->rule = NULL;
+ }
+}
+
+static int mlx5e_add_l2_flow_rule(struct mlx5e_priv *priv,
+ struct mlx5e_l2_rule *ai, int type)
+{
+ struct mlx5_flow_table *ft = priv->fs.l2.ft.t;
+ struct mlx5_flow_destination dest;
+ u8 match_criteria_enable = 0;
+ u32 *match_criteria;
+ u32 *match_value;
+ int err = 0;
+ u8 *mc_dmac;
+ u8 *mv_dmac;
+
+ match_value = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
+ match_criteria = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
+ if (!match_value || !match_criteria) {
+ netdev_err(priv->netdev, "%s: alloc failed\n", __func__);
+ err = -ENOMEM;
+ goto add_l2_rule_out;
+ }
+
+ mc_dmac = MLX5_ADDR_OF(fte_match_param, match_criteria,
+ outer_headers.dmac_47_16);
+ mv_dmac = MLX5_ADDR_OF(fte_match_param, match_value,
+ outer_headers.dmac_47_16);
+
+ dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+ dest.ft = priv->fs.ttc.ft.t;
+
+ switch (type) {
+ case MLX5E_FULLMATCH:
+ match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
+ eth_broadcast_addr(mc_dmac);
+ ether_addr_copy(mv_dmac, ai->addr);
+ break;
+
+ case MLX5E_ALLMULTI:
+ match_criteria_enable = MLX5_MATCH_OUTER_HEADERS;
+ mc_dmac[0] = 0x01;
+ mv_dmac[0] = 0x01;
+ break;
+
+ case MLX5E_PROMISC:
+ break;
+ }
+
+ ai->rule = mlx5_add_flow_rule(ft, match_criteria_enable, match_criteria,
+ match_value,
+ MLX5_FLOW_CONTEXT_ACTION_FWD_DEST,
+ MLX5_FS_DEFAULT_FLOW_TAG, &dest);
+ if (IS_ERR(ai->rule)) {
+ netdev_err(priv->netdev, "%s: add l2 rule(mac:%pM) failed\n",
+ __func__, mv_dmac);
+ err = PTR_ERR(ai->rule);
+ ai->rule = NULL;
+ }
+
+add_l2_rule_out:
+ kvfree(match_criteria);
+ kvfree(match_value);
+
+ return err;
+}
+
+#define MLX5E_NUM_L2_GROUPS 3
+#define MLX5E_L2_GROUP1_SIZE BIT(0)
+#define MLX5E_L2_GROUP2_SIZE BIT(15)
+#define MLX5E_L2_GROUP3_SIZE BIT(0)
+#define MLX5E_L2_TABLE_SIZE (MLX5E_L2_GROUP1_SIZE +\
+ MLX5E_L2_GROUP2_SIZE +\
+ MLX5E_L2_GROUP3_SIZE)
+static int mlx5e_create_l2_table_groups(struct mlx5e_l2_table *l2_table)
+{
+ int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
+ struct mlx5e_flow_table *ft = &l2_table->ft;
+ int ix = 0;
+ u8 *mc_dmac;
+ u32 *in;
+ int err;
+ u8 *mc;
+
+ ft->g = kcalloc(MLX5E_NUM_L2_GROUPS, sizeof(*ft->g), GFP_KERNEL);
+ if (!ft->g)
+ return -ENOMEM;
+ in = mlx5_vzalloc(inlen);
+ if (!in) {
+ kfree(ft->g);
+ return -ENOMEM;
+ }
+
+ mc = MLX5_ADDR_OF(create_flow_group_in, in, match_criteria);
+ mc_dmac = MLX5_ADDR_OF(fte_match_param, mc,
+ outer_headers.dmac_47_16);
+ /* Flow Group for promiscuous */
MLX5_SET_CFG(in, start_flow_index, ix);
- ix += MLX5E_MAIN_GROUP6_SIZE;
+ ix += MLX5E_L2_GROUP1_SIZE;
MLX5_SET_CFG(in, end_flow_index, ix - 1);
ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
if (IS_ERR(ft->g[ft->num_groups]))
goto err_destroy_groups;
ft->num_groups++;
- memset(in, 0, inlen);
+ /* Flow Group for full match */
+ eth_broadcast_addr(mc_dmac);
MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
- MLX5_SET_TO_ONES(fte_match_param, mc, outer_headers.ethertype);
- dmac[0] = 0x01;
MLX5_SET_CFG(in, start_flow_index, ix);
- ix += MLX5E_MAIN_GROUP7_SIZE;
+ ix += MLX5E_L2_GROUP2_SIZE;
MLX5_SET_CFG(in, end_flow_index, ix - 1);
ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
if (IS_ERR(ft->g[ft->num_groups]))
goto err_destroy_groups;
ft->num_groups++;
- memset(in, 0, inlen);
- MLX5_SET_CFG(in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
- dmac[0] = 0x01;
+ /* Flow Group for allmulti */
+ eth_zero_addr(mc_dmac);
+ mc_dmac[0] = 0x01;
MLX5_SET_CFG(in, start_flow_index, ix);
- ix += MLX5E_MAIN_GROUP8_SIZE;
+ ix += MLX5E_L2_GROUP3_SIZE;
MLX5_SET_CFG(in, end_flow_index, ix - 1);
ft->g[ft->num_groups] = mlx5_create_flow_group(ft->t, in);
if (IS_ERR(ft->g[ft->num_groups]))
goto err_destroy_groups;
ft->num_groups++;
+ kvfree(in);
return 0;
err_destroy_groups:
err = PTR_ERR(ft->g[ft->num_groups]);
ft->g[ft->num_groups] = NULL;
mlx5e_destroy_groups(ft);
+ kvfree(in);
return err;
}
-static int mlx5e_create_main_groups(struct mlx5e_flow_table *ft)
+static void mlx5e_destroy_l2_table(struct mlx5e_priv *priv)
{
- u32 *in;
- int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
- int err;
-
- in = mlx5_vzalloc(inlen);
- if (!in)
- return -ENOMEM;
-
- err = __mlx5e_create_main_groups(ft, in, inlen);
-
- kvfree(in);
- return err;
+ mlx5e_destroy_flow_table(&priv->fs.l2.ft);
}
-static int mlx5e_create_main_flow_table(struct mlx5e_priv *priv)
+static int mlx5e_create_l2_table(struct mlx5e_priv *priv)
{
- struct mlx5e_flow_table *ft = &priv->fts.main;
+ struct mlx5e_l2_table *l2_table = &priv->fs.l2;
+ struct mlx5e_flow_table *ft = &l2_table->ft;
int err;
ft->num_groups = 0;
- ft->t = mlx5_create_flow_table(priv->fts.ns, 1, MLX5E_MAIN_TABLE_SIZE);
+ ft->t = mlx5_create_flow_table(priv->fs.ns, MLX5E_NIC_PRIO,
+ MLX5E_L2_TABLE_SIZE, MLX5E_L2_FT_LEVEL);
if (IS_ERR(ft->t)) {
err = PTR_ERR(ft->t);
ft->t = NULL;
return err;
}
- ft->g = kcalloc(MLX5E_NUM_MAIN_GROUPS, sizeof(*ft->g), GFP_KERNEL);
- if (!ft->g) {
- err = -ENOMEM;
- goto err_destroy_main_flow_table;
- }
- err = mlx5e_create_main_groups(ft);
+ err = mlx5e_create_l2_table_groups(l2_table);
if (err)
- goto err_free_g;
- return 0;
+ goto err_destroy_flow_table;
-err_free_g:
- kfree(ft->g);
+ return 0;
-err_destroy_main_flow_table:
+err_destroy_flow_table:
mlx5_destroy_flow_table(ft->t);
ft->t = NULL;
return err;
}
-static void mlx5e_destroy_flow_table(struct mlx5e_flow_table *ft)
-{
- mlx5e_destroy_groups(ft);
- kfree(ft->g);
- mlx5_destroy_flow_table(ft->t);
- ft->t = NULL;
-}
-
-static void mlx5e_destroy_main_flow_table(struct mlx5e_priv *priv)
-{
- mlx5e_destroy_flow_table(&priv->fts.main);
-}
-
#define MLX5E_NUM_VLAN_GROUPS 2
#define MLX5E_VLAN_GROUP0_SIZE BIT(12)
#define MLX5E_VLAN_GROUP1_SIZE BIT(1)
#define MLX5E_VLAN_TABLE_SIZE (MLX5E_VLAN_GROUP0_SIZE +\
MLX5E_VLAN_GROUP1_SIZE)
-static int __mlx5e_create_vlan_groups(struct mlx5e_flow_table *ft, u32 *in,
- int inlen)
+static int __mlx5e_create_vlan_table_groups(struct mlx5e_flow_table *ft, u32 *in,
+ int inlen)
{
int err;
int ix = 0;
@@ -1128,7 +1002,7 @@ err_destroy_groups:
return err;
}
-static int mlx5e_create_vlan_groups(struct mlx5e_flow_table *ft)
+static int mlx5e_create_vlan_table_groups(struct mlx5e_flow_table *ft)
{
u32 *in;
int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
@@ -1138,19 +1012,20 @@ static int mlx5e_create_vlan_groups(struct mlx5e_flow_table *ft)
if (!in)
return -ENOMEM;
- err = __mlx5e_create_vlan_groups(ft, in, inlen);
+ err = __mlx5e_create_vlan_table_groups(ft, in, inlen);
kvfree(in);
return err;
}
-static int mlx5e_create_vlan_flow_table(struct mlx5e_priv *priv)
+static int mlx5e_create_vlan_table(struct mlx5e_priv *priv)
{
- struct mlx5e_flow_table *ft = &priv->fts.vlan;
+ struct mlx5e_flow_table *ft = &priv->fs.vlan.ft;
int err;
ft->num_groups = 0;
- ft->t = mlx5_create_flow_table(priv->fts.ns, 1, MLX5E_VLAN_TABLE_SIZE);
+ ft->t = mlx5_create_flow_table(priv->fs.ns, MLX5E_NIC_PRIO,
+ MLX5E_VLAN_TABLE_SIZE, MLX5E_VLAN_FT_LEVEL);
if (IS_ERR(ft->t)) {
err = PTR_ERR(ft->t);
@@ -1160,65 +1035,90 @@ static int mlx5e_create_vlan_flow_table(struct mlx5e_priv *priv)
ft->g = kcalloc(MLX5E_NUM_VLAN_GROUPS, sizeof(*ft->g), GFP_KERNEL);
if (!ft->g) {
err = -ENOMEM;
- goto err_destroy_vlan_flow_table;
+ goto err_destroy_vlan_table;
}
- err = mlx5e_create_vlan_groups(ft);
+ err = mlx5e_create_vlan_table_groups(ft);
if (err)
goto err_free_g;
+ err = mlx5e_add_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_UNTAGGED, 0);
+ if (err)
+ goto err_destroy_vlan_flow_groups;
+
return 0;
+err_destroy_vlan_flow_groups:
+ mlx5e_destroy_groups(ft);
err_free_g:
kfree(ft->g);
-
-err_destroy_vlan_flow_table:
+err_destroy_vlan_table:
mlx5_destroy_flow_table(ft->t);
ft->t = NULL;
return err;
}
-static void mlx5e_destroy_vlan_flow_table(struct mlx5e_priv *priv)
+static void mlx5e_destroy_vlan_table(struct mlx5e_priv *priv)
{
- mlx5e_destroy_flow_table(&priv->fts.vlan);
+ mlx5e_destroy_flow_table(&priv->fs.vlan.ft);
}
-int mlx5e_create_flow_tables(struct mlx5e_priv *priv)
+int mlx5e_create_flow_steering(struct mlx5e_priv *priv)
{
int err;
- priv->fts.ns = mlx5_get_flow_namespace(priv->mdev,
+ priv->fs.ns = mlx5_get_flow_namespace(priv->mdev,
MLX5_FLOW_NAMESPACE_KERNEL);
- if (!priv->fts.ns)
+ if (!priv->fs.ns)
return -EINVAL;
- err = mlx5e_create_vlan_flow_table(priv);
- if (err)
- return err;
+ err = mlx5e_arfs_create_tables(priv);
+ if (err) {
+ netdev_err(priv->netdev, "Failed to create arfs tables, err=%d\n",
+ err);
+ priv->netdev->hw_features &= ~NETIF_F_NTUPLE;
+ }
- err = mlx5e_create_main_flow_table(priv);
- if (err)
- goto err_destroy_vlan_flow_table;
+ err = mlx5e_create_ttc_table(priv);
+ if (err) {
+ netdev_err(priv->netdev, "Failed to create ttc table, err=%d\n",
+ err);
+ goto err_destroy_arfs_tables;
+ }
- err = mlx5e_add_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_UNTAGGED, 0);
- if (err)
- goto err_destroy_main_flow_table;
+ err = mlx5e_create_l2_table(priv);
+ if (err) {
+ netdev_err(priv->netdev, "Failed to create l2 table, err=%d\n",
+ err);
+ goto err_destroy_ttc_table;
+ }
+
+ err = mlx5e_create_vlan_table(priv);
+ if (err) {
+ netdev_err(priv->netdev, "Failed to create vlan table, err=%d\n",
+ err);
+ goto err_destroy_l2_table;
+ }
return 0;
-err_destroy_main_flow_table:
- mlx5e_destroy_main_flow_table(priv);
-err_destroy_vlan_flow_table:
- mlx5e_destroy_vlan_flow_table(priv);
+err_destroy_l2_table:
+ mlx5e_destroy_l2_table(priv);
+err_destroy_ttc_table:
+ mlx5e_destroy_ttc_table(priv);
+err_destroy_arfs_tables:
+ mlx5e_arfs_destroy_tables(priv);
return err;
}
-void mlx5e_destroy_flow_tables(struct mlx5e_priv *priv)
+void mlx5e_destroy_flow_steering(struct mlx5e_priv *priv)
{
mlx5e_del_vlan_rule(priv, MLX5E_VLAN_RULE_TYPE_UNTAGGED, 0);
- mlx5e_destroy_main_flow_table(priv);
- mlx5e_destroy_vlan_flow_table(priv);
+ mlx5e_destroy_vlan_table(priv);
+ mlx5e_destroy_l2_table(priv);
+ mlx5e_destroy_ttc_table(priv);
+ mlx5e_arfs_destroy_tables(priv);
}
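The hunks above fold the old per-table create/destroy helpers into a single mlx5e_create_flow_steering() entry point that builds the aRFS, TTC, L2 and VLAN tables in order and unwinds them in reverse when a later step fails. A minimal, self-contained C sketch of that goto-unwind ordering follows; the stage names, helpers and return codes are illustrative only, not the driver's real API.

#include <stdio.h>

/* Pretend constructor: fails on request so the unwind path is visible. */
static int create_stage(const char *name, int fail)
{
	if (fail) {
		fprintf(stderr, "failed to create %s\n", name);
		return -1;
	}
	printf("created %s\n", name);
	return 0;
}

static void destroy_stage(const char *name)
{
	printf("destroyed %s\n", name);
}

/* Mirrors the create-forward / destroy-backward ordering of the patch. */
static int create_flow_steering(int fail_at)
{
	int err;

	err = create_stage("ttc", fail_at == 1);
	if (err)
		return err;

	err = create_stage("l2", fail_at == 2);
	if (err)
		goto err_destroy_ttc;

	err = create_stage("vlan", fail_at == 3);
	if (err)
		goto err_destroy_l2;

	return 0;

err_destroy_l2:
	destroy_stage("l2");
err_destroy_ttc:
	destroy_stage("ttc");
	return err;
}

int main(void)
{
	/* Failing at the last stage unwinds l2 then ttc, newest first. */
	return create_flow_steering(3) ? 1 : 0;
}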
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
index d4dfc5ce516a..fd4392999eee 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_main.c
@@ -48,6 +48,7 @@ struct mlx5e_sq_param {
u32 sqc[MLX5_ST_SZ_DW(sqc)];
struct mlx5_wq_param wq;
u16 max_inline;
+ bool icosq;
};
struct mlx5e_cq_param {
@@ -59,8 +60,10 @@ struct mlx5e_cq_param {
struct mlx5e_channel_param {
struct mlx5e_rq_param rq;
struct mlx5e_sq_param sq;
+ struct mlx5e_sq_param icosq;
struct mlx5e_cq_param rx_cq;
struct mlx5e_cq_param tx_cq;
+ struct mlx5e_cq_param icosq_cq;
};
static void mlx5e_update_carrier(struct mlx5e_priv *priv)
@@ -88,82 +91,15 @@ static void mlx5e_update_carrier_work(struct work_struct *work)
mutex_unlock(&priv->state_lock);
}
-static void mlx5e_update_pport_counters(struct mlx5e_priv *priv)
+static void mlx5e_update_sw_counters(struct mlx5e_priv *priv)
{
- struct mlx5_core_dev *mdev = priv->mdev;
- struct mlx5e_pport_stats *s = &priv->stats.pport;
- u32 *in;
- u32 *out;
- int sz = MLX5_ST_SZ_BYTES(ppcnt_reg);
-
- in = mlx5_vzalloc(sz);
- out = mlx5_vzalloc(sz);
- if (!in || !out)
- goto free_out;
-
- MLX5_SET(ppcnt_reg, in, local_port, 1);
-
- MLX5_SET(ppcnt_reg, in, grp, MLX5_IEEE_802_3_COUNTERS_GROUP);
- mlx5_core_access_reg(mdev, in, sz, out,
- sz, MLX5_REG_PPCNT, 0, 0);
- memcpy(s->IEEE_802_3_counters,
- MLX5_ADDR_OF(ppcnt_reg, out, counter_set),
- sizeof(s->IEEE_802_3_counters));
-
- MLX5_SET(ppcnt_reg, in, grp, MLX5_RFC_2863_COUNTERS_GROUP);
- mlx5_core_access_reg(mdev, in, sz, out,
- sz, MLX5_REG_PPCNT, 0, 0);
- memcpy(s->RFC_2863_counters,
- MLX5_ADDR_OF(ppcnt_reg, out, counter_set),
- sizeof(s->RFC_2863_counters));
-
- MLX5_SET(ppcnt_reg, in, grp, MLX5_RFC_2819_COUNTERS_GROUP);
- mlx5_core_access_reg(mdev, in, sz, out,
- sz, MLX5_REG_PPCNT, 0, 0);
- memcpy(s->RFC_2819_counters,
- MLX5_ADDR_OF(ppcnt_reg, out, counter_set),
- sizeof(s->RFC_2819_counters));
-
-free_out:
- kvfree(in);
- kvfree(out);
-}
-
-void mlx5e_update_stats(struct mlx5e_priv *priv)
-{
- struct mlx5_core_dev *mdev = priv->mdev;
- struct mlx5e_vport_stats *s = &priv->stats.vport;
+ struct mlx5e_sw_stats *s = &priv->stats.sw;
struct mlx5e_rq_stats *rq_stats;
struct mlx5e_sq_stats *sq_stats;
- u32 in[MLX5_ST_SZ_DW(query_vport_counter_in)];
- u32 *out;
- int outlen = MLX5_ST_SZ_BYTES(query_vport_counter_out);
- u64 tx_offload_none;
+ u64 tx_offload_none = 0;
int i, j;
- out = mlx5_vzalloc(outlen);
- if (!out)
- return;
-
- /* Collect firts the SW counters and then HW for consistency */
- s->rx_packets = 0;
- s->rx_bytes = 0;
- s->tx_packets = 0;
- s->tx_bytes = 0;
- s->tso_packets = 0;
- s->tso_bytes = 0;
- s->tso_inner_packets = 0;
- s->tso_inner_bytes = 0;
- s->tx_queue_stopped = 0;
- s->tx_queue_wake = 0;
- s->tx_queue_dropped = 0;
- s->tx_csum_inner = 0;
- tx_offload_none = 0;
- s->lro_packets = 0;
- s->lro_bytes = 0;
- s->rx_csum_none = 0;
- s->rx_csum_sw = 0;
- s->rx_wqe_err = 0;
+ memset(s, 0, sizeof(*s));
for (i = 0; i < priv->params.num_channels; i++) {
rq_stats = &priv->channel[i]->rq.stats;
@@ -173,7 +109,13 @@ void mlx5e_update_stats(struct mlx5e_priv *priv)
s->lro_bytes += rq_stats->lro_bytes;
s->rx_csum_none += rq_stats->csum_none;
s->rx_csum_sw += rq_stats->csum_sw;
+ s->rx_csum_inner += rq_stats->csum_inner;
s->rx_wqe_err += rq_stats->wqe_err;
+ s->rx_mpwqe_filler += rq_stats->mpwqe_filler;
+ s->rx_mpwqe_frag += rq_stats->mpwqe_frag;
+ s->rx_buff_alloc_err += rq_stats->buff_alloc_err;
+ s->rx_cqe_compress_blks += rq_stats->cqe_compress_blks;
+ s->rx_cqe_compress_pkts += rq_stats->cqe_compress_pkts;
for (j = 0; j < priv->params.num_tc; j++) {
sq_stats = &priv->channel[i]->sq[j].stats;
@@ -192,7 +134,23 @@ void mlx5e_update_stats(struct mlx5e_priv *priv)
}
}
- /* HW counters */
+ /* Update calculated offload counters */
+ s->tx_csum_offload = s->tx_packets - tx_offload_none - s->tx_csum_inner;
+ s->rx_csum_good = s->rx_packets - s->rx_csum_none -
+ s->rx_csum_sw;
+
+ s->link_down_events = MLX5_GET(ppcnt_reg,
+ priv->stats.pport.phy_counters,
+ counter_set.phys_layer_cntrs.link_down_events);
+}
+
+static void mlx5e_update_vport_counters(struct mlx5e_priv *priv)
+{
+ int outlen = MLX5_ST_SZ_BYTES(query_vport_counter_out);
+ u32 *out = (u32 *)priv->stats.vport.query_vport_out;
+ u32 in[MLX5_ST_SZ_DW(query_vport_counter_in)];
+ struct mlx5_core_dev *mdev = priv->mdev;
+
memset(in, 0, sizeof(in));
MLX5_SET(query_vport_counter_in, in, opcode,
@@ -202,56 +160,69 @@ void mlx5e_update_stats(struct mlx5e_priv *priv)
memset(out, 0, outlen);
- if (mlx5_cmd_exec(mdev, in, sizeof(in), out, outlen))
+ mlx5_cmd_exec(mdev, in, sizeof(in), out, outlen);
+}
+
+static void mlx5e_update_pport_counters(struct mlx5e_priv *priv)
+{
+ struct mlx5e_pport_stats *pstats = &priv->stats.pport;
+ struct mlx5_core_dev *mdev = priv->mdev;
+ int sz = MLX5_ST_SZ_BYTES(ppcnt_reg);
+ int prio;
+ void *out;
+ u32 *in;
+
+ in = mlx5_vzalloc(sz);
+ if (!in)
goto free_out;
-#define MLX5_GET_CTR(p, x) \
- MLX5_GET64(query_vport_counter_out, p, x)
-
- s->rx_error_packets =
- MLX5_GET_CTR(out, received_errors.packets);
- s->rx_error_bytes =
- MLX5_GET_CTR(out, received_errors.octets);
- s->tx_error_packets =
- MLX5_GET_CTR(out, transmit_errors.packets);
- s->tx_error_bytes =
- MLX5_GET_CTR(out, transmit_errors.octets);
-
- s->rx_unicast_packets =
- MLX5_GET_CTR(out, received_eth_unicast.packets);
- s->rx_unicast_bytes =
- MLX5_GET_CTR(out, received_eth_unicast.octets);
- s->tx_unicast_packets =
- MLX5_GET_CTR(out, transmitted_eth_unicast.packets);
- s->tx_unicast_bytes =
- MLX5_GET_CTR(out, transmitted_eth_unicast.octets);
-
- s->rx_multicast_packets =
- MLX5_GET_CTR(out, received_eth_multicast.packets);
- s->rx_multicast_bytes =
- MLX5_GET_CTR(out, received_eth_multicast.octets);
- s->tx_multicast_packets =
- MLX5_GET_CTR(out, transmitted_eth_multicast.packets);
- s->tx_multicast_bytes =
- MLX5_GET_CTR(out, transmitted_eth_multicast.octets);
-
- s->rx_broadcast_packets =
- MLX5_GET_CTR(out, received_eth_broadcast.packets);
- s->rx_broadcast_bytes =
- MLX5_GET_CTR(out, received_eth_broadcast.octets);
- s->tx_broadcast_packets =
- MLX5_GET_CTR(out, transmitted_eth_broadcast.packets);
- s->tx_broadcast_bytes =
- MLX5_GET_CTR(out, transmitted_eth_broadcast.octets);
+ MLX5_SET(ppcnt_reg, in, local_port, 1);
- /* Update calculated offload counters */
- s->tx_csum_offload = s->tx_packets - tx_offload_none - s->tx_csum_inner;
- s->rx_csum_good = s->rx_packets - s->rx_csum_none -
- s->rx_csum_sw;
+ out = pstats->IEEE_802_3_counters;
+ MLX5_SET(ppcnt_reg, in, grp, MLX5_IEEE_802_3_COUNTERS_GROUP);
+ mlx5_core_access_reg(mdev, in, sz, out, sz, MLX5_REG_PPCNT, 0, 0);
+
+ out = pstats->RFC_2863_counters;
+ MLX5_SET(ppcnt_reg, in, grp, MLX5_RFC_2863_COUNTERS_GROUP);
+ mlx5_core_access_reg(mdev, in, sz, out, sz, MLX5_REG_PPCNT, 0, 0);
+
+ out = pstats->RFC_2819_counters;
+ MLX5_SET(ppcnt_reg, in, grp, MLX5_RFC_2819_COUNTERS_GROUP);
+ mlx5_core_access_reg(mdev, in, sz, out, sz, MLX5_REG_PPCNT, 0, 0);
+
+ out = pstats->phy_counters;
+ MLX5_SET(ppcnt_reg, in, grp, MLX5_PHYSICAL_LAYER_COUNTERS_GROUP);
+ mlx5_core_access_reg(mdev, in, sz, out, sz, MLX5_REG_PPCNT, 0, 0);
+
+ MLX5_SET(ppcnt_reg, in, grp, MLX5_PER_PRIORITY_COUNTERS_GROUP);
+ for (prio = 0; prio < NUM_PPORT_PRIO; prio++) {
+ out = pstats->per_prio_counters[prio];
+ MLX5_SET(ppcnt_reg, in, prio_tc, prio);
+ mlx5_core_access_reg(mdev, in, sz, out, sz,
+ MLX5_REG_PPCNT, 0, 0);
+ }
- mlx5e_update_pport_counters(priv);
free_out:
- kvfree(out);
+ kvfree(in);
+}
+
+static void mlx5e_update_q_counter(struct mlx5e_priv *priv)
+{
+ struct mlx5e_qcounter_stats *qcnt = &priv->stats.qcnt;
+
+ if (!priv->q_counter)
+ return;
+
+ mlx5_core_query_out_of_buffer(priv->mdev, priv->q_counter,
+ &qcnt->rx_out_of_buffer);
+}
+
+void mlx5e_update_stats(struct mlx5e_priv *priv)
+{
+ mlx5e_update_q_counter(priv);
+ mlx5e_update_vport_counters(priv);
+ mlx5e_update_pport_counters(priv);
+ mlx5e_update_sw_counters(priv);
}
static void mlx5e_update_stats_work(struct work_struct *work)
@@ -309,6 +280,7 @@ static int mlx5e_create_rq(struct mlx5e_channel *c,
struct mlx5_core_dev *mdev = priv->mdev;
void *rqc = param->rqc;
void *rqc_wq = MLX5_ADDR_OF(rqc, rqc, wq);
+ u32 byte_count;
int wq_sz;
int err;
int i;
@@ -323,32 +295,56 @@ static int mlx5e_create_rq(struct mlx5e_channel *c,
rq->wq.db = &rq->wq.db[MLX5_RCV_DBR];
wq_sz = mlx5_wq_ll_get_size(&rq->wq);
- rq->skb = kzalloc_node(wq_sz * sizeof(*rq->skb), GFP_KERNEL,
- cpu_to_node(c->cpu));
- if (!rq->skb) {
- err = -ENOMEM;
- goto err_rq_wq_destroy;
- }
- rq->wqe_sz = (priv->params.lro_en) ? priv->params.lro_wqe_sz :
- MLX5E_SW2HW_MTU(priv->netdev->mtu);
- rq->wqe_sz = SKB_DATA_ALIGN(rq->wqe_sz + MLX5E_NET_IP_ALIGN);
+ switch (priv->params.rq_wq_type) {
+ case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
+ rq->wqe_info = kzalloc_node(wq_sz * sizeof(*rq->wqe_info),
+ GFP_KERNEL, cpu_to_node(c->cpu));
+ if (!rq->wqe_info) {
+ err = -ENOMEM;
+ goto err_rq_wq_destroy;
+ }
+ rq->handle_rx_cqe = mlx5e_handle_rx_cqe_mpwrq;
+ rq->alloc_wqe = mlx5e_alloc_rx_mpwqe;
+
+ rq->mpwqe_stride_sz = BIT(priv->params.mpwqe_log_stride_sz);
+ rq->mpwqe_num_strides = BIT(priv->params.mpwqe_log_num_strides);
+ rq->wqe_sz = rq->mpwqe_stride_sz * rq->mpwqe_num_strides;
+ byte_count = rq->wqe_sz;
+ break;
+ default: /* MLX5_WQ_TYPE_LINKED_LIST */
+ rq->skb = kzalloc_node(wq_sz * sizeof(*rq->skb), GFP_KERNEL,
+ cpu_to_node(c->cpu));
+ if (!rq->skb) {
+ err = -ENOMEM;
+ goto err_rq_wq_destroy;
+ }
+ rq->handle_rx_cqe = mlx5e_handle_rx_cqe;
+ rq->alloc_wqe = mlx5e_alloc_rx_wqe;
+
+ rq->wqe_sz = (priv->params.lro_en) ?
+ priv->params.lro_wqe_sz :
+ MLX5E_SW2HW_MTU(priv->netdev->mtu);
+ rq->wqe_sz = SKB_DATA_ALIGN(rq->wqe_sz);
+ byte_count = rq->wqe_sz;
+ byte_count |= MLX5_HW_START_PADDING;
+ }
for (i = 0; i < wq_sz; i++) {
struct mlx5e_rx_wqe *wqe = mlx5_wq_ll_get_wqe(&rq->wq, i);
- u32 byte_count = rq->wqe_sz - MLX5E_NET_IP_ALIGN;
- wqe->data.lkey = c->mkey_be;
- wqe->data.byte_count =
- cpu_to_be32(byte_count | MLX5_HW_START_PADDING);
+ wqe->data.byte_count = cpu_to_be32(byte_count);
}
+ rq->wq_type = priv->params.rq_wq_type;
rq->pdev = c->pdev;
rq->netdev = c->netdev;
rq->tstamp = &priv->tstamp;
rq->channel = c;
rq->ix = c->ix;
rq->priv = c->priv;
+ rq->mkey_be = c->mkey_be;
+ rq->umr_mkey_be = cpu_to_be32(c->priv->umr_mkey.key);
return 0;
@@ -360,7 +356,14 @@ err_rq_wq_destroy:
static void mlx5e_destroy_rq(struct mlx5e_rq *rq)
{
- kfree(rq->skb);
+ switch (rq->wq_type) {
+ case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
+ kfree(rq->wqe_info);
+ break;
+ default: /* MLX5_WQ_TYPE_LINKED_LIST */
+ kfree(rq->skb);
+ }
+
mlx5_wq_destroy(&rq->wq_ctrl);
}
@@ -389,6 +392,7 @@ static int mlx5e_enable_rq(struct mlx5e_rq *rq, struct mlx5e_rq_param *param)
MLX5_SET(rqc, rqc, cqn, rq->cq.mcq.cqn);
MLX5_SET(rqc, rqc, state, MLX5_RQC_STATE_RST);
MLX5_SET(rqc, rqc, flush_in_error_en, 1);
+ MLX5_SET(rqc, rqc, vsd, priv->params.vlan_strip_disable);
MLX5_SET(wq, wq, log_wq_pg_sz, rq->wq_ctrl.buf.page_shift -
MLX5_ADAPTER_PAGE_SHIFT);
MLX5_SET64(wq, wq, dbr_addr, rq->wq_ctrl.db.dma);
@@ -403,7 +407,8 @@ static int mlx5e_enable_rq(struct mlx5e_rq *rq, struct mlx5e_rq_param *param)
return err;
}
-static int mlx5e_modify_rq(struct mlx5e_rq *rq, int curr_state, int next_state)
+static int mlx5e_modify_rq_state(struct mlx5e_rq *rq, int curr_state,
+ int next_state)
{
struct mlx5e_channel *c = rq->channel;
struct mlx5e_priv *priv = c->priv;
@@ -431,6 +436,36 @@ static int mlx5e_modify_rq(struct mlx5e_rq *rq, int curr_state, int next_state)
return err;
}
+static int mlx5e_modify_rq_vsd(struct mlx5e_rq *rq, bool vsd)
+{
+ struct mlx5e_channel *c = rq->channel;
+ struct mlx5e_priv *priv = c->priv;
+ struct mlx5_core_dev *mdev = priv->mdev;
+
+ void *in;
+ void *rqc;
+ int inlen;
+ int err;
+
+ inlen = MLX5_ST_SZ_BYTES(modify_rq_in);
+ in = mlx5_vzalloc(inlen);
+ if (!in)
+ return -ENOMEM;
+
+ rqc = MLX5_ADDR_OF(modify_rq_in, in, ctx);
+
+ MLX5_SET(modify_rq_in, in, rq_state, MLX5_RQC_STATE_RDY);
+ MLX5_SET64(modify_rq_in, in, modify_bitmask, MLX5_RQ_BITMASK_VSD);
+ MLX5_SET(rqc, rqc, vsd, vsd);
+ MLX5_SET(rqc, rqc, state, MLX5_RQC_STATE_RDY);
+
+ err = mlx5_core_modify_rq(mdev, rq->rqn, in, inlen);
+
+ kvfree(in);
+
+ return err;
+}
+
static void mlx5e_disable_rq(struct mlx5e_rq *rq)
{
mlx5_core_destroy_rq(rq->priv->mdev, rq->rqn);
@@ -457,6 +492,8 @@ static int mlx5e_open_rq(struct mlx5e_channel *c,
struct mlx5e_rq_param *param,
struct mlx5e_rq *rq)
{
+ struct mlx5e_sq *sq = &c->icosq;
+ u16 pi = sq->pc & sq->wq.sz_m1;
int err;
err = mlx5e_create_rq(c, param, rq);
@@ -467,12 +504,15 @@ static int mlx5e_open_rq(struct mlx5e_channel *c,
if (err)
goto err_destroy_rq;
- err = mlx5e_modify_rq(rq, MLX5_RQC_STATE_RST, MLX5_RQC_STATE_RDY);
+ err = mlx5e_modify_rq_state(rq, MLX5_RQC_STATE_RST, MLX5_RQC_STATE_RDY);
if (err)
goto err_disable_rq;
set_bit(MLX5E_RQ_STATE_POST_WQES_ENABLE, &rq->state);
- mlx5e_send_nop(&c->sq[0], true); /* trigger mlx5e_post_rx_wqes() */
+
+ sq->ico_wqe_info[pi].opcode = MLX5_OPCODE_NOP;
+ sq->ico_wqe_info[pi].num_wqebbs = 1;
+ mlx5e_send_nop(sq, true); /* trigger mlx5e_post_rx_wqes() */
return 0;
@@ -489,7 +529,7 @@ static void mlx5e_close_rq(struct mlx5e_rq *rq)
clear_bit(MLX5E_RQ_STATE_POST_WQES_ENABLE, &rq->state);
napi_synchronize(&rq->channel->napi); /* prevent mlx5e_post_rx_wqes */
- mlx5e_modify_rq(rq, MLX5_RQC_STATE_RDY, MLX5_RQC_STATE_ERR);
+ mlx5e_modify_rq_state(rq, MLX5_RQC_STATE_RDY, MLX5_RQC_STATE_ERR);
while (!mlx5_wq_ll_is_empty(&rq->wq))
msleep(20);
@@ -538,7 +578,6 @@ static int mlx5e_create_sq(struct mlx5e_channel *c,
void *sqc = param->sqc;
void *sqc_wq = MLX5_ADDR_OF(sqc, sqc, wq);
- int txq_ix;
int err;
err = mlx5_alloc_map_uar(mdev, &sq->uar, true);
@@ -566,8 +605,24 @@ static int mlx5e_create_sq(struct mlx5e_channel *c,
if (err)
goto err_sq_wq_destroy;
- txq_ix = c->ix + tc * priv->params.num_channels;
- sq->txq = netdev_get_tx_queue(priv->netdev, txq_ix);
+ if (param->icosq) {
+ u8 wq_sz = mlx5_wq_cyc_get_size(&sq->wq);
+
+ sq->ico_wqe_info = kzalloc_node(sizeof(*sq->ico_wqe_info) *
+ wq_sz,
+ GFP_KERNEL,
+ cpu_to_node(c->cpu));
+ if (!sq->ico_wqe_info) {
+ err = -ENOMEM;
+ goto err_free_sq_db;
+ }
+ } else {
+ int txq_ix;
+
+ txq_ix = c->ix + tc * priv->params.num_channels;
+ sq->txq = netdev_get_tx_queue(priv->netdev, txq_ix);
+ priv->txq_to_sq_map[txq_ix] = sq;
+ }
sq->pdev = c->pdev;
sq->tstamp = &priv->tstamp;
@@ -576,10 +631,12 @@ static int mlx5e_create_sq(struct mlx5e_channel *c,
sq->tc = tc;
sq->edge = (sq->wq.sz_m1 + 1) - MLX5_SEND_WQE_MAX_WQEBBS;
sq->bf_budget = MLX5E_SQ_BF_BUDGET;
- priv->txq_to_sq_map[txq_ix] = sq;
return 0;
+err_free_sq_db:
+ mlx5e_free_sq_db(sq);
+
err_sq_wq_destroy:
mlx5_wq_destroy(&sq->wq_ctrl);
@@ -594,6 +651,7 @@ static void mlx5e_destroy_sq(struct mlx5e_sq *sq)
struct mlx5e_channel *c = sq->channel;
struct mlx5e_priv *priv = c->priv;
+ kfree(sq->ico_wqe_info);
mlx5e_free_sq_db(sq);
mlx5_wq_destroy(&sq->wq_ctrl);
mlx5_unmap_free_uar(priv->mdev, &sq->uar);
@@ -622,10 +680,10 @@ static int mlx5e_enable_sq(struct mlx5e_sq *sq, struct mlx5e_sq_param *param)
memcpy(sqc, param->sqc, sizeof(param->sqc));
- MLX5_SET(sqc, sqc, tis_num_0, priv->tisn[sq->tc]);
- MLX5_SET(sqc, sqc, cqn, c->sq[sq->tc].cq.mcq.cqn);
+ MLX5_SET(sqc, sqc, tis_num_0, param->icosq ? 0 : priv->tisn[sq->tc]);
+ MLX5_SET(sqc, sqc, cqn, sq->cq.mcq.cqn);
MLX5_SET(sqc, sqc, state, MLX5_SQC_STATE_RST);
- MLX5_SET(sqc, sqc, tis_lst_sz, 1);
+ MLX5_SET(sqc, sqc, tis_lst_sz, param->icosq ? 0 : 1);
MLX5_SET(sqc, sqc, flush_in_error_en, 1);
MLX5_SET(wq, wq, wq_type, MLX5_WQ_TYPE_CYCLIC);
@@ -700,9 +758,11 @@ static int mlx5e_open_sq(struct mlx5e_channel *c,
if (err)
goto err_disable_sq;
- set_bit(MLX5E_SQ_STATE_WAKE_TXQ_ENABLE, &sq->state);
- netdev_tx_reset_queue(sq->txq);
- netif_tx_start_queue(sq->txq);
+ if (sq->txq) {
+ set_bit(MLX5E_SQ_STATE_WAKE_TXQ_ENABLE, &sq->state);
+ netdev_tx_reset_queue(sq->txq);
+ netif_tx_start_queue(sq->txq);
+ }
return 0;
@@ -723,15 +783,19 @@ static inline void netif_tx_disable_queue(struct netdev_queue *txq)
static void mlx5e_close_sq(struct mlx5e_sq *sq)
{
- clear_bit(MLX5E_SQ_STATE_WAKE_TXQ_ENABLE, &sq->state);
- napi_synchronize(&sq->channel->napi); /* prevent netif_tx_wake_queue */
- netif_tx_disable_queue(sq->txq);
+ if (sq->txq) {
+ clear_bit(MLX5E_SQ_STATE_WAKE_TXQ_ENABLE, &sq->state);
+ /* prevent netif_tx_wake_queue */
+ napi_synchronize(&sq->channel->napi);
+ netif_tx_disable_queue(sq->txq);
- /* ensure hw is notified of all pending wqes */
- if (mlx5e_sq_has_room_for(sq, 1))
- mlx5e_send_nop(sq, true);
+ /* ensure hw is notified of all pending wqes */
+ if (mlx5e_sq_has_room_for(sq, 1))
+ mlx5e_send_nop(sq, true);
+
+ mlx5e_modify_sq(sq, MLX5_SQC_STATE_RDY, MLX5_SQC_STATE_ERR);
+ }
- mlx5e_modify_sq(sq, MLX5_SQC_STATE_RDY, MLX5_SQC_STATE_ERR);
while (sq->cc != sq->pc) /* wait till sq is empty */
msleep(20);
@@ -985,10 +1049,14 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
netif_napi_add(netdev, &c->napi, mlx5e_napi_poll, 64);
- err = mlx5e_open_tx_cqs(c, cparam);
+ err = mlx5e_open_cq(c, &cparam->icosq_cq, &c->icosq.cq, 0, 0);
if (err)
goto err_napi_del;
+ err = mlx5e_open_tx_cqs(c, cparam);
+ if (err)
+ goto err_close_icosq_cq;
+
err = mlx5e_open_cq(c, &cparam->rx_cq, &c->rq.cq,
priv->params.rx_cq_moderation_usec,
priv->params.rx_cq_moderation_pkts);
@@ -997,10 +1065,14 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
napi_enable(&c->napi);
- err = mlx5e_open_sqs(c, cparam);
+ err = mlx5e_open_sq(c, 0, &cparam->icosq, &c->icosq);
if (err)
goto err_disable_napi;
+ err = mlx5e_open_sqs(c, cparam);
+ if (err)
+ goto err_close_icosq;
+
err = mlx5e_open_rq(c, &cparam->rq, &c->rq);
if (err)
goto err_close_sqs;
@@ -1013,6 +1085,9 @@ static int mlx5e_open_channel(struct mlx5e_priv *priv, int ix,
err_close_sqs:
mlx5e_close_sqs(c);
+err_close_icosq:
+ mlx5e_close_sq(&c->icosq);
+
err_disable_napi:
napi_disable(&c->napi);
mlx5e_close_cq(&c->rq.cq);
@@ -1020,6 +1095,9 @@ err_disable_napi:
err_close_tx_cqs:
mlx5e_close_tx_cqs(c);
+err_close_icosq_cq:
+ mlx5e_close_cq(&c->icosq.cq);
+
err_napi_del:
netif_napi_del(&c->napi);
napi_hash_del(&c->napi);
@@ -1032,9 +1110,11 @@ static void mlx5e_close_channel(struct mlx5e_channel *c)
{
mlx5e_close_rq(&c->rq);
mlx5e_close_sqs(c);
+ mlx5e_close_sq(&c->icosq);
napi_disable(&c->napi);
mlx5e_close_cq(&c->rq.cq);
mlx5e_close_tx_cqs(c);
+ mlx5e_close_cq(&c->icosq.cq);
netif_napi_del(&c->napi);
napi_hash_del(&c->napi);
@@ -1049,11 +1129,23 @@ static void mlx5e_build_rq_param(struct mlx5e_priv *priv,
void *rqc = param->rqc;
void *wq = MLX5_ADDR_OF(rqc, rqc, wq);
- MLX5_SET(wq, wq, wq_type, MLX5_WQ_TYPE_LINKED_LIST);
+ switch (priv->params.rq_wq_type) {
+ case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
+ MLX5_SET(wq, wq, log_wqe_num_of_strides,
+ priv->params.mpwqe_log_num_strides - 9);
+ MLX5_SET(wq, wq, log_wqe_stride_size,
+ priv->params.mpwqe_log_stride_sz - 6);
+ MLX5_SET(wq, wq, wq_type, MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ);
+ break;
+ default: /* MLX5_WQ_TYPE_LINKED_LIST */
+ MLX5_SET(wq, wq, wq_type, MLX5_WQ_TYPE_LINKED_LIST);
+ }
+
MLX5_SET(wq, wq, end_padding_mode, MLX5_WQ_END_PAD_MODE_ALIGN);
MLX5_SET(wq, wq, log_wq_stride, ilog2(sizeof(struct mlx5e_rx_wqe)));
MLX5_SET(wq, wq, log_wq_sz, priv->params.log_rq_size);
MLX5_SET(wq, wq, pd, priv->pdn);
+ MLX5_SET(rqc, rqc, counter_set_id, priv->q_counter);
param->wq.buf_numa_node = dev_to_node(&priv->mdev->pdev->dev);
param->wq.linear = 1;
@@ -1068,17 +1160,27 @@ static void mlx5e_build_drop_rq_param(struct mlx5e_rq_param *param)
MLX5_SET(wq, wq, log_wq_stride, ilog2(sizeof(struct mlx5e_rx_wqe)));
}
-static void mlx5e_build_sq_param(struct mlx5e_priv *priv,
- struct mlx5e_sq_param *param)
+static void mlx5e_build_sq_param_common(struct mlx5e_priv *priv,
+ struct mlx5e_sq_param *param)
{
void *sqc = param->sqc;
void *wq = MLX5_ADDR_OF(sqc, sqc, wq);
- MLX5_SET(wq, wq, log_wq_sz, priv->params.log_sq_size);
MLX5_SET(wq, wq, log_wq_stride, ilog2(MLX5_SEND_WQE_BB));
MLX5_SET(wq, wq, pd, priv->pdn);
param->wq.buf_numa_node = dev_to_node(&priv->mdev->pdev->dev);
+}
+
+static void mlx5e_build_sq_param(struct mlx5e_priv *priv,
+ struct mlx5e_sq_param *param)
+{
+ void *sqc = param->sqc;
+ void *wq = MLX5_ADDR_OF(sqc, sqc, wq);
+
+ mlx5e_build_sq_param_common(priv, param);
+ MLX5_SET(wq, wq, log_wq_sz, priv->params.log_sq_size);
+
param->max_inline = priv->params.tx_max_inline;
}
@@ -1094,8 +1196,22 @@ static void mlx5e_build_rx_cq_param(struct mlx5e_priv *priv,
struct mlx5e_cq_param *param)
{
void *cqc = param->cqc;
+ u8 log_cq_size;
- MLX5_SET(cqc, cqc, log_cq_size, priv->params.log_rq_size);
+ switch (priv->params.rq_wq_type) {
+ case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
+ log_cq_size = priv->params.log_rq_size +
+ priv->params.mpwqe_log_num_strides;
+ break;
+ default: /* MLX5_WQ_TYPE_LINKED_LIST */
+ log_cq_size = priv->params.log_rq_size;
+ }
+
+ MLX5_SET(cqc, cqc, log_cq_size, log_cq_size);
+ if (priv->params.rx_cqe_compress) {
+ MLX5_SET(cqc, cqc, mini_cqe_res_format, MLX5_CQE_FORMAT_CSUM);
+ MLX5_SET(cqc, cqc, cqe_comp_en, 1);
+ }
mlx5e_build_common_cq_param(priv, param);
}
@@ -1105,25 +1221,52 @@ static void mlx5e_build_tx_cq_param(struct mlx5e_priv *priv,
{
void *cqc = param->cqc;
- MLX5_SET(cqc, cqc, log_cq_size, priv->params.log_sq_size);
+ MLX5_SET(cqc, cqc, log_cq_size, priv->params.log_sq_size);
mlx5e_build_common_cq_param(priv, param);
}
-static void mlx5e_build_channel_param(struct mlx5e_priv *priv,
- struct mlx5e_channel_param *cparam)
+static void mlx5e_build_ico_cq_param(struct mlx5e_priv *priv,
+ struct mlx5e_cq_param *param,
+ u8 log_wq_size)
{
- memset(cparam, 0, sizeof(*cparam));
+ void *cqc = param->cqc;
+
+ MLX5_SET(cqc, cqc, log_cq_size, log_wq_size);
+
+ mlx5e_build_common_cq_param(priv, param);
+}
+
+static void mlx5e_build_icosq_param(struct mlx5e_priv *priv,
+ struct mlx5e_sq_param *param,
+ u8 log_wq_size)
+{
+ void *sqc = param->sqc;
+ void *wq = MLX5_ADDR_OF(sqc, sqc, wq);
+
+ mlx5e_build_sq_param_common(priv, param);
+
+ MLX5_SET(wq, wq, log_wq_sz, log_wq_size);
+ MLX5_SET(sqc, sqc, reg_umr, MLX5_CAP_ETH(priv->mdev, reg_umr_sq));
+
+ param->icosq = true;
+}
+
+static void mlx5e_build_channel_param(struct mlx5e_priv *priv, struct mlx5e_channel_param *cparam)
+{
+ u8 icosq_log_wq_sz = MLX5E_PARAMS_MINIMUM_LOG_SQ_SIZE;
mlx5e_build_rq_param(priv, &cparam->rq);
mlx5e_build_sq_param(priv, &cparam->sq);
+ mlx5e_build_icosq_param(priv, &cparam->icosq, icosq_log_wq_sz);
mlx5e_build_rx_cq_param(priv, &cparam->rx_cq);
mlx5e_build_tx_cq_param(priv, &cparam->tx_cq);
+ mlx5e_build_ico_cq_param(priv, &cparam->icosq_cq, icosq_log_wq_sz);
}
static int mlx5e_open_channels(struct mlx5e_priv *priv)
{
- struct mlx5e_channel_param cparam;
+ struct mlx5e_channel_param *cparam;
int nch = priv->params.num_channels;
int err = -ENOMEM;
int i;
@@ -1135,12 +1278,15 @@ static int mlx5e_open_channels(struct mlx5e_priv *priv)
priv->txq_to_sq_map = kcalloc(nch * priv->params.num_tc,
sizeof(struct mlx5e_sq *), GFP_KERNEL);
- if (!priv->channel || !priv->txq_to_sq_map)
+ cparam = kzalloc(sizeof(struct mlx5e_channel_param), GFP_KERNEL);
+
+ if (!priv->channel || !priv->txq_to_sq_map || !cparam)
goto err_free_txq_to_sq_map;
- mlx5e_build_channel_param(priv, &cparam);
+ mlx5e_build_channel_param(priv, cparam);
+
for (i = 0; i < nch; i++) {
- err = mlx5e_open_channel(priv, i, &cparam, &priv->channel[i]);
+ err = mlx5e_open_channel(priv, i, cparam, &priv->channel[i]);
if (err)
goto err_close_channels;
}
@@ -1151,6 +1297,7 @@ static int mlx5e_open_channels(struct mlx5e_priv *priv)
goto err_close_channels;
}
+ kfree(cparam);
return 0;
err_close_channels:
@@ -1160,6 +1307,7 @@ err_close_channels:
err_free_txq_to_sq_map:
kfree(priv->txq_to_sq_map);
kfree(priv->channel);
+ kfree(cparam);
return err;
}
@@ -1199,48 +1347,36 @@ static void mlx5e_fill_indir_rqt_rqns(struct mlx5e_priv *priv, void *rqtc)
for (i = 0; i < MLX5E_INDIR_RQT_SIZE; i++) {
int ix = i;
+ u32 rqn;
if (priv->params.rss_hfunc == ETH_RSS_HASH_XOR)
ix = mlx5e_bits_invert(i, MLX5E_LOG_INDIR_RQT_SIZE);
ix = priv->params.indirection_rqt[ix];
- MLX5_SET(rqtc, rqtc, rq_num[i],
- test_bit(MLX5E_STATE_OPENED, &priv->state) ?
- priv->channel[ix]->rq.rqn :
- priv->drop_rq.rqn);
+ rqn = test_bit(MLX5E_STATE_OPENED, &priv->state) ?
+ priv->channel[ix]->rq.rqn :
+ priv->drop_rq.rqn;
+ MLX5_SET(rqtc, rqtc, rq_num[i], rqn);
}
}
-static void mlx5e_fill_rqt_rqns(struct mlx5e_priv *priv, void *rqtc,
- enum mlx5e_rqt_ix rqt_ix)
+static void mlx5e_fill_direct_rqt_rqn(struct mlx5e_priv *priv, void *rqtc,
+ int ix)
{
+ u32 rqn = test_bit(MLX5E_STATE_OPENED, &priv->state) ?
+ priv->channel[ix]->rq.rqn :
+ priv->drop_rq.rqn;
- switch (rqt_ix) {
- case MLX5E_INDIRECTION_RQT:
- mlx5e_fill_indir_rqt_rqns(priv, rqtc);
-
- break;
-
- default: /* MLX5E_SINGLE_RQ_RQT */
- MLX5_SET(rqtc, rqtc, rq_num[0],
- test_bit(MLX5E_STATE_OPENED, &priv->state) ?
- priv->channel[0]->rq.rqn :
- priv->drop_rq.rqn);
-
- break;
- }
+ MLX5_SET(rqtc, rqtc, rq_num[0], rqn);
}
-static int mlx5e_create_rqt(struct mlx5e_priv *priv, enum mlx5e_rqt_ix rqt_ix)
+static int mlx5e_create_rqt(struct mlx5e_priv *priv, int sz, int ix, u32 *rqtn)
{
struct mlx5_core_dev *mdev = priv->mdev;
- u32 *in;
void *rqtc;
int inlen;
- int sz;
int err;
-
- sz = (rqt_ix == MLX5E_SINGLE_RQ_RQT) ? 1 : MLX5E_INDIR_RQT_SIZE;
+ u32 *in;
inlen = MLX5_ST_SZ_BYTES(create_rqt_in) + sizeof(u32) * sz;
in = mlx5_vzalloc(inlen);
@@ -1252,26 +1388,73 @@ static int mlx5e_create_rqt(struct mlx5e_priv *priv, enum mlx5e_rqt_ix rqt_ix)
MLX5_SET(rqtc, rqtc, rqt_actual_size, sz);
MLX5_SET(rqtc, rqtc, rqt_max_size, sz);
- mlx5e_fill_rqt_rqns(priv, rqtc, rqt_ix);
+ if (sz > 1) /* RSS */
+ mlx5e_fill_indir_rqt_rqns(priv, rqtc);
+ else
+ mlx5e_fill_direct_rqt_rqn(priv, rqtc, ix);
- err = mlx5_core_create_rqt(mdev, in, inlen, &priv->rqtn[rqt_ix]);
+ err = mlx5_core_create_rqt(mdev, in, inlen, rqtn);
kvfree(in);
+ return err;
+}
+
+static void mlx5e_destroy_rqt(struct mlx5e_priv *priv, u32 rqtn)
+{
+ mlx5_core_destroy_rqt(priv->mdev, rqtn);
+}
+
+static int mlx5e_create_rqts(struct mlx5e_priv *priv)
+{
+ int nch = mlx5e_get_max_num_channels(priv->mdev);
+ u32 *rqtn;
+ int err;
+ int ix;
+
+ /* Indirect RQT */
+ rqtn = &priv->indir_rqtn;
+ err = mlx5e_create_rqt(priv, MLX5E_INDIR_RQT_SIZE, 0, rqtn);
+ if (err)
+ return err;
+
+ /* Direct RQTs */
+ for (ix = 0; ix < nch; ix++) {
+ rqtn = &priv->direct_tir[ix].rqtn;
+ err = mlx5e_create_rqt(priv, 1 /*size */, ix, rqtn);
+ if (err)
+ goto err_destroy_rqts;
+ }
+
+ return 0;
+
+err_destroy_rqts:
+ for (ix--; ix >= 0; ix--)
+ mlx5e_destroy_rqt(priv, priv->direct_tir[ix].rqtn);
+
+ mlx5e_destroy_rqt(priv, priv->indir_rqtn);
return err;
}
-int mlx5e_redirect_rqt(struct mlx5e_priv *priv, enum mlx5e_rqt_ix rqt_ix)
+static void mlx5e_destroy_rqts(struct mlx5e_priv *priv)
+{
+ int nch = mlx5e_get_max_num_channels(priv->mdev);
+ int i;
+
+ for (i = 0; i < nch; i++)
+ mlx5e_destroy_rqt(priv, priv->direct_tir[i].rqtn);
+
+ mlx5e_destroy_rqt(priv, priv->indir_rqtn);
+}
+
+int mlx5e_redirect_rqt(struct mlx5e_priv *priv, u32 rqtn, int sz, int ix)
{
struct mlx5_core_dev *mdev = priv->mdev;
- u32 *in;
void *rqtc;
int inlen;
- int sz;
+ u32 *in;
int err;
- sz = (rqt_ix == MLX5E_SINGLE_RQ_RQT) ? 1 : MLX5E_INDIR_RQT_SIZE;
-
inlen = MLX5_ST_SZ_BYTES(modify_rqt_in) + sizeof(u32) * sz;
in = mlx5_vzalloc(inlen);
if (!in)
@@ -1280,27 +1463,31 @@ int mlx5e_redirect_rqt(struct mlx5e_priv *priv, enum mlx5e_rqt_ix rqt_ix)
rqtc = MLX5_ADDR_OF(modify_rqt_in, in, ctx);
MLX5_SET(rqtc, rqtc, rqt_actual_size, sz);
-
- mlx5e_fill_rqt_rqns(priv, rqtc, rqt_ix);
+ if (sz > 1) /* RSS */
+ mlx5e_fill_indir_rqt_rqns(priv, rqtc);
+ else
+ mlx5e_fill_direct_rqt_rqn(priv, rqtc, ix);
MLX5_SET(modify_rqt_in, in, bitmask.rqn_list, 1);
- err = mlx5_core_modify_rqt(mdev, priv->rqtn[rqt_ix], in, inlen);
+ err = mlx5_core_modify_rqt(mdev, rqtn, in, inlen);
kvfree(in);
return err;
}
-static void mlx5e_destroy_rqt(struct mlx5e_priv *priv, enum mlx5e_rqt_ix rqt_ix)
-{
- mlx5_core_destroy_rqt(priv->mdev, priv->rqtn[rqt_ix]);
-}
-
static void mlx5e_redirect_rqts(struct mlx5e_priv *priv)
{
- mlx5e_redirect_rqt(priv, MLX5E_INDIRECTION_RQT);
- mlx5e_redirect_rqt(priv, MLX5E_SINGLE_RQ_RQT);
+ u32 rqtn;
+ int ix;
+
+ rqtn = priv->indir_rqtn;
+ mlx5e_redirect_rqt(priv, rqtn, MLX5E_INDIR_RQT_SIZE, 0);
+ for (ix = 0; ix < priv->params.num_channels; ix++) {
+ rqtn = priv->direct_tir[ix].rqtn;
+ mlx5e_redirect_rqt(priv, rqtn, 1, ix);
+ }
}
static void mlx5e_build_tir_ctx_lro(void *tirc, struct mlx5e_priv *priv)
@@ -1345,6 +1532,7 @@ static int mlx5e_modify_tirs_lro(struct mlx5e_priv *priv)
int inlen;
int err;
int tt;
+ int ix;
inlen = MLX5_ST_SZ_BYTES(modify_tir_in);
in = mlx5_vzalloc(inlen);
@@ -1356,23 +1544,32 @@ static int mlx5e_modify_tirs_lro(struct mlx5e_priv *priv)
mlx5e_build_tir_ctx_lro(tirc, priv);
- for (tt = 0; tt < MLX5E_NUM_TT; tt++) {
- err = mlx5_core_modify_tir(mdev, priv->tirn[tt], in, inlen);
+ for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++) {
+ err = mlx5_core_modify_tir(mdev, priv->indir_tirn[tt], in,
+ inlen);
if (err)
- break;
+ goto free_in;
}
+ for (ix = 0; ix < mlx5e_get_max_num_channels(mdev); ix++) {
+ err = mlx5_core_modify_tir(mdev, priv->direct_tir[ix].tirn,
+ in, inlen);
+ if (err)
+ goto free_in;
+ }
+
+free_in:
kvfree(in);
return err;
}
-static int mlx5e_refresh_tir_self_loopback_enable(struct mlx5_core_dev *mdev,
- u32 tirn)
+static int mlx5e_refresh_tirs_self_loopback_enable(struct mlx5e_priv *priv)
{
void *in;
int inlen;
int err;
+ int i;
inlen = MLX5_ST_SZ_BYTES(modify_tir_in);
in = mlx5_vzalloc(inlen);
@@ -1381,25 +1578,23 @@ static int mlx5e_refresh_tir_self_loopback_enable(struct mlx5_core_dev *mdev,
MLX5_SET(modify_tir_in, in, bitmask.self_lb_en, 1);
- err = mlx5_core_modify_tir(mdev, tirn, in, inlen);
-
- kvfree(in);
-
- return err;
-}
-
-static int mlx5e_refresh_tirs_self_loopback_enable(struct mlx5e_priv *priv)
-{
- int err;
- int i;
+ for (i = 0; i < MLX5E_NUM_INDIR_TIRS; i++) {
+ err = mlx5_core_modify_tir(priv->mdev, priv->indir_tirn[i], in,
+ inlen);
+ if (err)
+ return err;
+ }
- for (i = 0; i < MLX5E_NUM_TT; i++) {
- err = mlx5e_refresh_tir_self_loopback_enable(priv->mdev,
- priv->tirn[i]);
+ for (i = 0; i < priv->params.num_channels; i++) {
+ err = mlx5_core_modify_tir(priv->mdev,
+ priv->direct_tir[i].tirn, in,
+ inlen);
if (err)
return err;
}
+ kvfree(in);
+
return 0;
}
@@ -1503,6 +1698,9 @@ int mlx5e_open_locked(struct net_device *netdev)
mlx5e_redirect_rqts(priv);
mlx5e_update_carrier(priv);
mlx5e_timestamp_init(priv);
+#ifdef CONFIG_RFS_ACCEL
+ priv->netdev->rx_cpu_rmap = priv->mdev->rmap;
+#endif
queue_delayed_work(priv->wq, &priv->update_stats_work, 0);
@@ -1710,7 +1908,8 @@ static void mlx5e_destroy_tises(struct mlx5e_priv *priv)
mlx5e_destroy_tis(priv, tc);
}
-static void mlx5e_build_tir_ctx(struct mlx5e_priv *priv, u32 *tirc, int tt)
+static void mlx5e_build_indir_tir_ctx(struct mlx5e_priv *priv, u32 *tirc,
+ enum mlx5e_traffic_types tt)
{
void *hfso = MLX5_ADDR_OF(tirc, tirc, rx_hash_field_selector_outer);
@@ -1731,19 +1930,8 @@ static void mlx5e_build_tir_ctx(struct mlx5e_priv *priv, u32 *tirc, int tt)
mlx5e_build_tir_ctx_lro(tirc, priv);
MLX5_SET(tirc, tirc, disp_type, MLX5_TIRC_DISP_TYPE_INDIRECT);
-
- switch (tt) {
- case MLX5E_TT_ANY:
- MLX5_SET(tirc, tirc, indirect_table,
- priv->rqtn[MLX5E_SINGLE_RQ_RQT]);
- MLX5_SET(tirc, tirc, rx_hash_fn, MLX5_RX_HASH_FN_INVERTED_XOR8);
- break;
- default:
- MLX5_SET(tirc, tirc, indirect_table,
- priv->rqtn[MLX5E_INDIRECTION_RQT]);
- mlx5e_build_tir_ctx_hash(tirc, priv);
- break;
- }
+ MLX5_SET(tirc, tirc, indirect_table, priv->indir_rqtn);
+ mlx5e_build_tir_ctx_hash(tirc, priv);
switch (tt) {
case MLX5E_TT_IPV4_TCP:
@@ -1823,64 +2011,107 @@ static void mlx5e_build_tir_ctx(struct mlx5e_priv *priv, u32 *tirc, int tt)
MLX5_SET(rx_hash_field_select, hfso, selected_fields,
MLX5_HASH_IP);
break;
+ default:
+ WARN_ONCE(true,
+ "mlx5e_build_indir_tir_ctx: bad traffic type!\n");
}
}
-static int mlx5e_create_tir(struct mlx5e_priv *priv, int tt)
+static void mlx5e_build_direct_tir_ctx(struct mlx5e_priv *priv, u32 *tirc,
+ u32 rqtn)
{
- struct mlx5_core_dev *mdev = priv->mdev;
- u32 *in;
+ MLX5_SET(tirc, tirc, transport_domain, priv->tdn);
+
+ mlx5e_build_tir_ctx_lro(tirc, priv);
+
+ MLX5_SET(tirc, tirc, disp_type, MLX5_TIRC_DISP_TYPE_INDIRECT);
+ MLX5_SET(tirc, tirc, indirect_table, rqtn);
+ MLX5_SET(tirc, tirc, rx_hash_fn, MLX5_RX_HASH_FN_INVERTED_XOR8);
+}
+
+static int mlx5e_create_tirs(struct mlx5e_priv *priv)
+{
+ int nch = mlx5e_get_max_num_channels(priv->mdev);
void *tirc;
int inlen;
+ u32 *tirn;
int err;
+ u32 *in;
+ int ix;
+ int tt;
inlen = MLX5_ST_SZ_BYTES(create_tir_in);
in = mlx5_vzalloc(inlen);
if (!in)
return -ENOMEM;
- tirc = MLX5_ADDR_OF(create_tir_in, in, ctx);
+ /* indirect tirs */
+ for (tt = 0; tt < MLX5E_NUM_INDIR_TIRS; tt++) {
+ memset(in, 0, inlen);
+ tirn = &priv->indir_tirn[tt];
+ tirc = MLX5_ADDR_OF(create_tir_in, in, ctx);
+ mlx5e_build_indir_tir_ctx(priv, tirc, tt);
+ err = mlx5_core_create_tir(priv->mdev, in, inlen, tirn);
+ if (err)
+ goto err_destroy_tirs;
+ }
+
+ /* direct tirs */
+ for (ix = 0; ix < nch; ix++) {
+ memset(in, 0, inlen);
+ tirn = &priv->direct_tir[ix].tirn;
+ tirc = MLX5_ADDR_OF(create_tir_in, in, ctx);
+ mlx5e_build_direct_tir_ctx(priv, tirc,
+ priv->direct_tir[ix].rqtn);
+ err = mlx5_core_create_tir(priv->mdev, in, inlen, tirn);
+ if (err)
+ goto err_destroy_ch_tirs;
+ }
- mlx5e_build_tir_ctx(priv, tirc, tt);
+ kvfree(in);
- err = mlx5_core_create_tir(mdev, in, inlen, &priv->tirn[tt]);
+ return 0;
+
+err_destroy_ch_tirs:
+ for (ix--; ix >= 0; ix--)
+ mlx5_core_destroy_tir(priv->mdev, priv->direct_tir[ix].tirn);
+
+err_destroy_tirs:
+ for (tt--; tt >= 0; tt--)
+ mlx5_core_destroy_tir(priv->mdev, priv->indir_tirn[tt]);
kvfree(in);
return err;
}
-static void mlx5e_destroy_tir(struct mlx5e_priv *priv, int tt)
+static void mlx5e_destroy_tirs(struct mlx5e_priv *priv)
{
- mlx5_core_destroy_tir(priv->mdev, priv->tirn[tt]);
+ int nch = mlx5e_get_max_num_channels(priv->mdev);
+ int i;
+
+ for (i = 0; i < nch; i++)
+ mlx5_core_destroy_tir(priv->mdev, priv->direct_tir[i].tirn);
+
+ for (i = 0; i < MLX5E_NUM_INDIR_TIRS; i++)
+ mlx5_core_destroy_tir(priv->mdev, priv->indir_tirn[i]);
}
-static int mlx5e_create_tirs(struct mlx5e_priv *priv)
+int mlx5e_modify_rqs_vsd(struct mlx5e_priv *priv, bool vsd)
{
- int err;
+ int err = 0;
int i;
- for (i = 0; i < MLX5E_NUM_TT; i++) {
- err = mlx5e_create_tir(priv, i);
+ if (!test_bit(MLX5E_STATE_OPENED, &priv->state))
+ return 0;
+
+ for (i = 0; i < priv->params.num_channels; i++) {
+ err = mlx5e_modify_rq_vsd(&priv->channel[i]->rq, vsd);
if (err)
- goto err_destroy_tirs;
+ return err;
}
return 0;
-
-err_destroy_tirs:
- for (i--; i >= 0; i--)
- mlx5e_destroy_tir(priv, i);
-
- return err;
-}
-
-static void mlx5e_destroy_tirs(struct mlx5e_priv *priv)
-{
- int i;
-
- for (i = 0; i < MLX5E_NUM_TT; i++)
- mlx5e_destroy_tir(priv, i);
}
static int mlx5e_setup_tc(struct net_device *netdev, u8 tc)
@@ -1923,6 +2154,8 @@ static int mlx5e_ndo_setup_tc(struct net_device *dev, u32 handle,
return mlx5e_configure_flower(priv, proto, tc->cls_flower);
case TC_CLSFLOWER_DESTROY:
return mlx5e_delete_flower(priv, tc->cls_flower);
+ case TC_CLSFLOWER_STATS:
+ return mlx5e_stats_flower(priv, tc->cls_flower);
}
default:
return -EOPNOTSUPP;
@@ -1939,19 +2172,37 @@ static struct rtnl_link_stats64 *
mlx5e_get_stats(struct net_device *dev, struct rtnl_link_stats64 *stats)
{
struct mlx5e_priv *priv = netdev_priv(dev);
+ struct mlx5e_sw_stats *sstats = &priv->stats.sw;
struct mlx5e_vport_stats *vstats = &priv->stats.vport;
-
- stats->rx_packets = vstats->rx_packets;
- stats->rx_bytes = vstats->rx_bytes;
- stats->tx_packets = vstats->tx_packets;
- stats->tx_bytes = vstats->tx_bytes;
- stats->multicast = vstats->rx_multicast_packets +
- vstats->tx_multicast_packets;
- stats->tx_errors = vstats->tx_error_packets;
- stats->rx_errors = vstats->rx_error_packets;
- stats->tx_dropped = vstats->tx_queue_dropped;
- stats->rx_crc_errors = 0;
- stats->rx_length_errors = 0;
+ struct mlx5e_pport_stats *pstats = &priv->stats.pport;
+
+ stats->rx_packets = sstats->rx_packets;
+ stats->rx_bytes = sstats->rx_bytes;
+ stats->tx_packets = sstats->tx_packets;
+ stats->tx_bytes = sstats->tx_bytes;
+
+ stats->rx_dropped = priv->stats.qcnt.rx_out_of_buffer;
+ stats->tx_dropped = sstats->tx_queue_dropped;
+
+ stats->rx_length_errors =
+ PPORT_802_3_GET(pstats, a_in_range_length_errors) +
+ PPORT_802_3_GET(pstats, a_out_of_range_length_field) +
+ PPORT_802_3_GET(pstats, a_frame_too_long_errors);
+ stats->rx_crc_errors =
+ PPORT_802_3_GET(pstats, a_frame_check_sequence_errors);
+ stats->rx_frame_errors = PPORT_802_3_GET(pstats, a_alignment_errors);
+ stats->tx_aborted_errors = PPORT_2863_GET(pstats, if_out_discards);
+ stats->tx_carrier_errors =
+ PPORT_802_3_GET(pstats, a_symbol_error_during_carrier);
+ stats->rx_errors = stats->rx_length_errors + stats->rx_crc_errors +
+ stats->rx_frame_errors;
+ stats->tx_errors = stats->tx_aborted_errors + stats->tx_carrier_errors;
+
+ /* vport multicast also counts packets that are dropped due to steering
+ * or rx out of buffer
+ */
+ stats->multicast =
+ VPORT_COUNTER_GET(vstats, received_eth_multicast.packets);
return stats;
}
@@ -1980,49 +2231,153 @@ static int mlx5e_set_mac(struct net_device *netdev, void *addr)
return 0;
}
-static int mlx5e_set_features(struct net_device *netdev,
- netdev_features_t features)
+#define MLX5E_SET_FEATURE(netdev, feature, enable) \
+ do { \
+ if (enable) \
+ netdev->features |= feature; \
+ else \
+ netdev->features &= ~feature; \
+ } while (0)
+
+typedef int (*mlx5e_feature_handler)(struct net_device *netdev, bool enable);
+
+static int set_feature_lro(struct net_device *netdev, bool enable)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
- int err = 0;
- netdev_features_t changes = features ^ netdev->features;
+ bool was_opened = test_bit(MLX5E_STATE_OPENED, &priv->state);
+ int err;
mutex_lock(&priv->state_lock);
- if (changes & NETIF_F_LRO) {
- bool was_opened = test_bit(MLX5E_STATE_OPENED, &priv->state);
-
- if (was_opened)
- mlx5e_close_locked(priv->netdev);
-
- priv->params.lro_en = !!(features & NETIF_F_LRO);
- err = mlx5e_modify_tirs_lro(priv);
- if (err)
- mlx5_core_warn(priv->mdev, "lro modify failed, %d\n",
- err);
+ if (was_opened && (priv->params.rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST))
+ mlx5e_close_locked(priv->netdev);
- if (was_opened)
- err = mlx5e_open_locked(priv->netdev);
+ priv->params.lro_en = enable;
+ err = mlx5e_modify_tirs_lro(priv);
+ if (err) {
+ netdev_err(netdev, "lro modify failed, %d\n", err);
+ priv->params.lro_en = !enable;
}
+ if (was_opened && (priv->params.rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST))
+ mlx5e_open_locked(priv->netdev);
+
mutex_unlock(&priv->state_lock);
- if (changes & NETIF_F_HW_VLAN_CTAG_FILTER) {
- if (features & NETIF_F_HW_VLAN_CTAG_FILTER)
- mlx5e_enable_vlan_filter(priv);
- else
- mlx5e_disable_vlan_filter(priv);
- }
+ return err;
+}
+
+static int set_feature_vlan_filter(struct net_device *netdev, bool enable)
+{
+ struct mlx5e_priv *priv = netdev_priv(netdev);
+
+ if (enable)
+ mlx5e_enable_vlan_filter(priv);
+ else
+ mlx5e_disable_vlan_filter(priv);
+
+ return 0;
+}
- if ((changes & NETIF_F_HW_TC) && !(features & NETIF_F_HW_TC) &&
- mlx5e_tc_num_filters(priv)) {
+static int set_feature_tc_num_filters(struct net_device *netdev, bool enable)
+{
+ struct mlx5e_priv *priv = netdev_priv(netdev);
+
+ if (!enable && mlx5e_tc_num_filters(priv)) {
netdev_err(netdev,
"Active offloaded tc filters, can't turn hw_tc_offload off\n");
return -EINVAL;
}
+ return 0;
+}
+
+static int set_feature_rx_all(struct net_device *netdev, bool enable)
+{
+ struct mlx5e_priv *priv = netdev_priv(netdev);
+ struct mlx5_core_dev *mdev = priv->mdev;
+
+ return mlx5_set_port_fcs(mdev, !enable);
+}
+
+static int set_feature_rx_vlan(struct net_device *netdev, bool enable)
+{
+ struct mlx5e_priv *priv = netdev_priv(netdev);
+ int err;
+
+ mutex_lock(&priv->state_lock);
+
+ priv->params.vlan_strip_disable = !enable;
+ err = mlx5e_modify_rqs_vsd(priv, !enable);
+ if (err)
+ priv->params.vlan_strip_disable = enable;
+
+ mutex_unlock(&priv->state_lock);
+
+ return err;
+}
+
+#ifdef CONFIG_RFS_ACCEL
+static int set_feature_arfs(struct net_device *netdev, bool enable)
+{
+ struct mlx5e_priv *priv = netdev_priv(netdev);
+ int err;
+
+ if (enable)
+ err = mlx5e_arfs_enable(priv);
+ else
+ err = mlx5e_arfs_disable(priv);
+
return err;
}
+#endif
+
+static int mlx5e_handle_feature(struct net_device *netdev,
+ netdev_features_t wanted_features,
+ netdev_features_t feature,
+ mlx5e_feature_handler feature_handler)
+{
+ netdev_features_t changes = wanted_features ^ netdev->features;
+ bool enable = !!(wanted_features & feature);
+ int err;
+
+ if (!(changes & feature))
+ return 0;
+
+ err = feature_handler(netdev, enable);
+ if (err) {
+ netdev_err(netdev, "%s feature 0x%llx failed err %d\n",
+ enable ? "Enable" : "Disable", feature, err);
+ return err;
+ }
+
+ MLX5E_SET_FEATURE(netdev, feature, enable);
+ return 0;
+}
+
+static int mlx5e_set_features(struct net_device *netdev,
+ netdev_features_t features)
+{
+ int err;
+
+ err = mlx5e_handle_feature(netdev, features, NETIF_F_LRO,
+ set_feature_lro);
+ err |= mlx5e_handle_feature(netdev, features,
+ NETIF_F_HW_VLAN_CTAG_FILTER,
+ set_feature_vlan_filter);
+ err |= mlx5e_handle_feature(netdev, features, NETIF_F_HW_TC,
+ set_feature_tc_num_filters);
+ err |= mlx5e_handle_feature(netdev, features, NETIF_F_RXALL,
+ set_feature_rx_all);
+ err |= mlx5e_handle_feature(netdev, features, NETIF_F_HW_VLAN_CTAG_RX,
+ set_feature_rx_vlan);
+#ifdef CONFIG_RFS_ACCEL
+ err |= mlx5e_handle_feature(netdev, features, NETIF_F_NTUPLE,
+ set_feature_arfs);
+#endif
+
+ return err ? -EINVAL : 0;
+}
#define MXL5_HW_MIN_MTU 64
#define MXL5E_MIN_MTU (MXL5_HW_MIN_MTU + ETH_FCS_LEN)
@@ -2093,6 +2448,21 @@ static int mlx5e_set_vf_vlan(struct net_device *dev, int vf, u16 vlan, u8 qos)
vlan, qos);
}
+static int mlx5e_set_vf_spoofchk(struct net_device *dev, int vf, bool setting)
+{
+ struct mlx5e_priv *priv = netdev_priv(dev);
+ struct mlx5_core_dev *mdev = priv->mdev;
+
+ return mlx5_eswitch_set_vport_spoofchk(mdev->priv.eswitch, vf + 1, setting);
+}
+
+static int mlx5e_set_vf_trust(struct net_device *dev, int vf, bool setting)
+{
+ struct mlx5e_priv *priv = netdev_priv(dev);
+ struct mlx5_core_dev *mdev = priv->mdev;
+
+ return mlx5_eswitch_set_vport_trust(mdev->priv.eswitch, vf + 1, setting);
+}
static int mlx5_vport_link2ifla(u8 esw_link)
{
switch (esw_link) {
@@ -2235,6 +2605,9 @@ static const struct net_device_ops mlx5e_netdev_ops_basic = {
.ndo_set_features = mlx5e_set_features,
.ndo_change_mtu = mlx5e_change_mtu,
.ndo_do_ioctl = mlx5e_ioctl,
+#ifdef CONFIG_RFS_ACCEL
+ .ndo_rx_flow_steer = mlx5e_rx_flow_steer,
+#endif
};
static const struct net_device_ops mlx5e_netdev_ops_sriov = {
@@ -2254,8 +2627,13 @@ static const struct net_device_ops mlx5e_netdev_ops_sriov = {
.ndo_add_vxlan_port = mlx5e_add_vxlan_port,
.ndo_del_vxlan_port = mlx5e_del_vxlan_port,
.ndo_features_check = mlx5e_features_check,
+#ifdef CONFIG_RFS_ACCEL
+ .ndo_rx_flow_steer = mlx5e_rx_flow_steer,
+#endif
.ndo_set_vf_mac = mlx5e_set_vf_mac,
.ndo_set_vf_vlan = mlx5e_set_vf_vlan,
+ .ndo_set_vf_spoofchk = mlx5e_set_vf_spoofchk,
+ .ndo_set_vf_trust = mlx5e_set_vf_trust,
.ndo_get_vf_config = mlx5e_get_vf_config,
.ndo_set_vf_link_state = mlx5e_set_vf_link_state,
.ndo_get_vf_stats = mlx5e_get_vf_stats,
@@ -2313,25 +2691,121 @@ static void mlx5e_ets_init(struct mlx5e_priv *priv)
}
#endif
-void mlx5e_build_default_indir_rqt(u32 *indirection_rqt, int len,
+void mlx5e_build_default_indir_rqt(struct mlx5_core_dev *mdev,
+ u32 *indirection_rqt, int len,
int num_channels)
{
+ int node = mdev->priv.numa_node;
+ int node_num_of_cores;
int i;
+ if (node == -1)
+ node = first_online_node;
+
+ node_num_of_cores = cpumask_weight(cpumask_of_node(node));
+
+ if (node_num_of_cores)
+ num_channels = min_t(int, num_channels, node_num_of_cores);
+
for (i = 0; i < len; i++)
indirection_rqt[i] = i % num_channels;
}
+static bool mlx5e_check_fragmented_striding_rq_cap(struct mlx5_core_dev *mdev)
+{
+ return MLX5_CAP_GEN(mdev, striding_rq) &&
+ MLX5_CAP_GEN(mdev, umr_ptr_rlky) &&
+ MLX5_CAP_ETH(mdev, reg_umr_sq);
+}
+
+static int mlx5e_get_pci_bw(struct mlx5_core_dev *mdev, u32 *pci_bw)
+{
+ enum pcie_link_width width;
+ enum pci_bus_speed speed;
+ int err = 0;
+
+ err = pcie_get_minimum_link(mdev->pdev, &speed, &width);
+ if (err)
+ return err;
+
+ if (speed == PCI_SPEED_UNKNOWN || width == PCIE_LNK_WIDTH_UNKNOWN)
+ return -EINVAL;
+
+ switch (speed) {
+ case PCIE_SPEED_2_5GT:
+ *pci_bw = 2500 * width;
+ break;
+ case PCIE_SPEED_5_0GT:
+ *pci_bw = 5000 * width;
+ break;
+ case PCIE_SPEED_8_0GT:
+ *pci_bw = 8000 * width;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static bool cqe_compress_heuristic(u32 link_speed, u32 pci_bw)
+{
+ return (link_speed && pci_bw &&
+ (pci_bw < 40000) && (pci_bw < link_speed));
+}
+
static void mlx5e_build_netdev_priv(struct mlx5_core_dev *mdev,
struct net_device *netdev,
int num_channels)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
+ u32 link_speed = 0;
+ u32 pci_bw = 0;
priv->params.log_sq_size =
MLX5E_PARAMS_DEFAULT_LOG_SQ_SIZE;
- priv->params.log_rq_size =
- MLX5E_PARAMS_DEFAULT_LOG_RQ_SIZE;
+ priv->params.rq_wq_type = mlx5e_check_fragmented_striding_rq_cap(mdev) ?
+ MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ :
+ MLX5_WQ_TYPE_LINKED_LIST;
+
+ /* set CQE compression */
+ priv->params.rx_cqe_compress_admin = false;
+ if (MLX5_CAP_GEN(mdev, cqe_compression) &&
+ MLX5_CAP_GEN(mdev, vport_group_manager)) {
+ mlx5e_get_max_linkspeed(mdev, &link_speed);
+ mlx5e_get_pci_bw(mdev, &pci_bw);
+ mlx5_core_dbg(mdev, "Max link speed = %d, PCI BW = %d\n",
+ link_speed, pci_bw);
+ priv->params.rx_cqe_compress_admin =
+ cqe_compress_heuristic(link_speed, pci_bw);
+ }
+
+ priv->params.rx_cqe_compress = priv->params.rx_cqe_compress_admin;
+
+ switch (priv->params.rq_wq_type) {
+ case MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ:
+ priv->params.log_rq_size = MLX5E_PARAMS_DEFAULT_LOG_RQ_SIZE_MPW;
+ priv->params.mpwqe_log_stride_sz =
+ priv->params.rx_cqe_compress ?
+ MLX5_MPWRQ_LOG_STRIDE_SIZE_CQE_COMPRESS :
+ MLX5_MPWRQ_LOG_STRIDE_SIZE;
+ priv->params.mpwqe_log_num_strides = MLX5_MPWRQ_LOG_WQE_SZ -
+ priv->params.mpwqe_log_stride_sz;
+ priv->params.lro_en = true;
+ break;
+ default: /* MLX5_WQ_TYPE_LINKED_LIST */
+ priv->params.log_rq_size = MLX5E_PARAMS_DEFAULT_LOG_RQ_SIZE;
+ }
+
+ mlx5_core_info(mdev,
+ "MLX5E: StrdRq(%d) RqSz(%ld) StrdSz(%ld) RxCqeCmprss(%d)\n",
+ priv->params.rq_wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ,
+ BIT(priv->params.log_rq_size),
+ BIT(priv->params.mpwqe_log_stride_sz),
+ priv->params.rx_cqe_compress_admin);
+
+ priv->params.min_rx_wqes = mlx5_min_rx_wqes(priv->params.rq_wq_type,
+ BIT(priv->params.log_rq_size));
priv->params.rx_cq_moderation_usec =
MLX5E_PARAMS_DEFAULT_RX_CQ_MODERATION_USEC;
priv->params.rx_cq_moderation_pkts =
@@ -2341,15 +2815,13 @@ static void mlx5e_build_netdev_priv(struct mlx5_core_dev *mdev,
priv->params.tx_cq_moderation_pkts =
MLX5E_PARAMS_DEFAULT_TX_CQ_MODERATION_PKTS;
priv->params.tx_max_inline = mlx5e_get_max_inline_cap(mdev);
- priv->params.min_rx_wqes =
- MLX5E_PARAMS_DEFAULT_MIN_RX_WQES;
priv->params.num_tc = 1;
priv->params.rss_hfunc = ETH_RSS_HASH_XOR;
netdev_rss_key_fill(priv->params.toeplitz_hash_key,
sizeof(priv->params.toeplitz_hash_key));
- mlx5e_build_default_indir_rqt(priv->params.indirection_rqt,
+ mlx5e_build_default_indir_rqt(mdev, priv->params.indirection_rqt,
MLX5E_INDIR_RQT_SIZE, num_channels);
priv->params.lro_wqe_sz =
@@ -2386,6 +2858,8 @@ static void mlx5e_build_netdev(struct net_device *netdev)
{
struct mlx5e_priv *priv = netdev_priv(netdev);
struct mlx5_core_dev *mdev = priv->mdev;
+ bool fcs_supported;
+ bool fcs_enabled;
SET_NETDEV_DEV(netdev, &mdev->pdev->dev);
@@ -2420,25 +2894,41 @@ static void mlx5e_build_netdev(struct net_device *netdev)
netdev->hw_features |= NETIF_F_HW_VLAN_CTAG_FILTER;
if (mlx5e_vxlan_allowed(mdev)) {
- netdev->hw_features |= NETIF_F_GSO_UDP_TUNNEL;
+ netdev->hw_features |= NETIF_F_GSO_UDP_TUNNEL |
+ NETIF_F_GSO_UDP_TUNNEL_CSUM |
+ NETIF_F_GSO_PARTIAL;
netdev->hw_enc_features |= NETIF_F_IP_CSUM;
- netdev->hw_enc_features |= NETIF_F_RXCSUM;
+ netdev->hw_enc_features |= NETIF_F_IPV6_CSUM;
netdev->hw_enc_features |= NETIF_F_TSO;
netdev->hw_enc_features |= NETIF_F_TSO6;
- netdev->hw_enc_features |= NETIF_F_RXHASH;
netdev->hw_enc_features |= NETIF_F_GSO_UDP_TUNNEL;
+ netdev->hw_enc_features |= NETIF_F_GSO_UDP_TUNNEL_CSUM |
+ NETIF_F_GSO_PARTIAL;
+ netdev->gso_partial_features = NETIF_F_GSO_UDP_TUNNEL_CSUM;
}
+ mlx5_query_port_fcs(mdev, &fcs_supported, &fcs_enabled);
+
+ if (fcs_supported)
+ netdev->hw_features |= NETIF_F_RXALL;
+
netdev->features = netdev->hw_features;
if (!priv->params.lro_en)
netdev->features &= ~NETIF_F_LRO;
+ if (fcs_enabled)
+ netdev->features &= ~NETIF_F_RXALL;
+
#define FT_CAP(f) MLX5_CAP_FLOWTABLE(mdev, flow_table_properties_nic_receive.f)
if (FT_CAP(flow_modify_en) &&
FT_CAP(modify_root) &&
FT_CAP(identified_miss_table_mode) &&
- FT_CAP(flow_table_modify))
- priv->netdev->hw_features |= NETIF_F_HW_TC;
+ FT_CAP(flow_table_modify)) {
+ netdev->hw_features |= NETIF_F_HW_TC;
+#ifdef CONFIG_RFS_ACCEL
+ netdev->hw_features |= NETIF_F_NTUPLE;
+#endif
+ }
netdev->features |= NETIF_F_HIGHDMA;
@@ -2472,6 +2962,61 @@ static int mlx5e_create_mkey(struct mlx5e_priv *priv, u32 pdn,
return err;
}
+static void mlx5e_create_q_counter(struct mlx5e_priv *priv)
+{
+ struct mlx5_core_dev *mdev = priv->mdev;
+ int err;
+
+ err = mlx5_core_alloc_q_counter(mdev, &priv->q_counter);
+ if (err) {
+ mlx5_core_warn(mdev, "alloc queue counter failed, %d\n", err);
+ priv->q_counter = 0;
+ }
+}
+
+static void mlx5e_destroy_q_counter(struct mlx5e_priv *priv)
+{
+ if (!priv->q_counter)
+ return;
+
+ mlx5_core_dealloc_q_counter(priv->mdev, priv->q_counter);
+}
+
+static int mlx5e_create_umr_mkey(struct mlx5e_priv *priv)
+{
+ struct mlx5_core_dev *mdev = priv->mdev;
+ struct mlx5_create_mkey_mbox_in *in;
+ struct mlx5_mkey_seg *mkc;
+ int inlen = sizeof(*in);
+ u64 npages =
+ mlx5e_get_max_num_channels(mdev) * MLX5_CHANNEL_MAX_NUM_MTTS;
+ int err;
+
+ in = mlx5_vzalloc(inlen);
+ if (!in)
+ return -ENOMEM;
+
+ mkc = &in->seg;
+ mkc->status = MLX5_MKEY_STATUS_FREE;
+ mkc->flags = MLX5_PERM_UMR_EN |
+ MLX5_PERM_LOCAL_READ |
+ MLX5_PERM_LOCAL_WRITE |
+ MLX5_ACCESS_MODE_MTT;
+
+ mkc->qpn_mkey7_0 = cpu_to_be32(0xffffff << 8);
+ mkc->flags_pd = cpu_to_be32(priv->pdn);
+ mkc->len = cpu_to_be64(npages << PAGE_SHIFT);
+ mkc->xlt_oct_size = cpu_to_be32(mlx5e_get_mtt_octw(npages));
+ mkc->log2_page_size = PAGE_SHIFT;
+
+ err = mlx5_core_create_mkey(mdev, &priv->umr_mkey, in, inlen, NULL,
+ NULL, NULL);
+
+ kvfree(in);
+
+ return err;
+}
+
static void *mlx5e_create_netdev(struct mlx5_core_dev *mdev)
{
struct net_device *netdev;
@@ -2525,10 +3070,16 @@ static void *mlx5e_create_netdev(struct mlx5_core_dev *mdev)
goto err_dealloc_transport_domain;
}
+ err = mlx5e_create_umr_mkey(priv);
+ if (err) {
+ mlx5_core_err(mdev, "create umr mkey failed, %d\n", err);
+ goto err_destroy_mkey;
+ }
+
err = mlx5e_create_tises(priv);
if (err) {
mlx5_core_warn(mdev, "create tises failed, %d\n", err);
- goto err_destroy_mkey;
+ goto err_destroy_umr_mkey;
}
err = mlx5e_open_drop_rq(priv);
@@ -2537,37 +3088,33 @@ static void *mlx5e_create_netdev(struct mlx5_core_dev *mdev)
goto err_destroy_tises;
}
- err = mlx5e_create_rqt(priv, MLX5E_INDIRECTION_RQT);
+ err = mlx5e_create_rqts(priv);
if (err) {
- mlx5_core_warn(mdev, "create rqt(INDIR) failed, %d\n", err);
+ mlx5_core_warn(mdev, "create rqts failed, %d\n", err);
goto err_close_drop_rq;
}
- err = mlx5e_create_rqt(priv, MLX5E_SINGLE_RQ_RQT);
- if (err) {
- mlx5_core_warn(mdev, "create rqt(SINGLE) failed, %d\n", err);
- goto err_destroy_rqt_indir;
- }
-
err = mlx5e_create_tirs(priv);
if (err) {
mlx5_core_warn(mdev, "create tirs failed, %d\n", err);
- goto err_destroy_rqt_single;
+ goto err_destroy_rqts;
}
- err = mlx5e_create_flow_tables(priv);
+ err = mlx5e_create_flow_steering(priv);
if (err) {
- mlx5_core_warn(mdev, "create flow tables failed, %d\n", err);
+ mlx5_core_warn(mdev, "create flow steering failed, %d\n", err);
goto err_destroy_tirs;
}
- mlx5e_init_eth_addr(priv);
+ mlx5e_create_q_counter(priv);
+
+ mlx5e_init_l2_addr(priv);
mlx5e_vxlan_init(priv);
err = mlx5e_tc_init(priv);
if (err)
- goto err_destroy_flow_tables;
+ goto err_dealloc_q_counters;
#ifdef CONFIG_MLX5_CORE_EN_DCB
mlx5e_dcbnl_ieee_setets_core(priv, &priv->params.ets);
@@ -2579,8 +3126,11 @@ static void *mlx5e_create_netdev(struct mlx5_core_dev *mdev)
goto err_tc_cleanup;
}
- if (mlx5e_vxlan_allowed(mdev))
+ if (mlx5e_vxlan_allowed(mdev)) {
+ rtnl_lock();
vxlan_get_rx_port(netdev);
+ rtnl_unlock();
+ }
mlx5e_enable_async_events(priv);
queue_work(priv->wq, &priv->set_rx_mode_work);
@@ -2590,17 +3140,15 @@ static void *mlx5e_create_netdev(struct mlx5_core_dev *mdev)
err_tc_cleanup:
mlx5e_tc_cleanup(priv);
-err_destroy_flow_tables:
- mlx5e_destroy_flow_tables(priv);
+err_dealloc_q_counters:
+ mlx5e_destroy_q_counter(priv);
+ mlx5e_destroy_flow_steering(priv);
err_destroy_tirs:
mlx5e_destroy_tirs(priv);
-err_destroy_rqt_single:
- mlx5e_destroy_rqt(priv, MLX5E_SINGLE_RQ_RQT);
-
-err_destroy_rqt_indir:
- mlx5e_destroy_rqt(priv, MLX5E_INDIRECTION_RQT);
+err_destroy_rqts:
+ mlx5e_destroy_rqts(priv);
err_close_drop_rq:
mlx5e_close_drop_rq(priv);
@@ -2608,6 +3156,9 @@ err_close_drop_rq:
err_destroy_tises:
mlx5e_destroy_tises(priv);
+err_destroy_umr_mkey:
+ mlx5_core_destroy_mkey(mdev, &priv->umr_mkey);
+
err_destroy_mkey:
mlx5_core_destroy_mkey(mdev, &priv->mkey);
@@ -2651,12 +3202,13 @@ static void mlx5e_destroy_netdev(struct mlx5_core_dev *mdev, void *vpriv)
mlx5e_tc_cleanup(priv);
mlx5e_vxlan_cleanup(priv);
- mlx5e_destroy_flow_tables(priv);
+ mlx5e_destroy_q_counter(priv);
+ mlx5e_destroy_flow_steering(priv);
mlx5e_destroy_tirs(priv);
- mlx5e_destroy_rqt(priv, MLX5E_SINGLE_RQ_RQT);
- mlx5e_destroy_rqt(priv, MLX5E_INDIRECTION_RQT);
+ mlx5e_destroy_rqts(priv);
mlx5e_close_drop_rq(priv);
mlx5e_destroy_tises(priv);
+ mlx5_core_destroy_mkey(priv->mdev, &priv->umr_mkey);
mlx5_core_destroy_mkey(priv->mdev, &priv->mkey);
mlx5_core_dealloc_transport_domain(priv->mdev, priv->tdn);
mlx5_core_dealloc_pd(priv->mdev, priv->pdn);
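Among the en_main.c changes above, mlx5e_build_netdev_priv() now defaults CQE compression on only when the device supports it and the PCI link looks narrower than the port: it multiplies the per-lane PCIe rate by the lane width (values read through pcie_get_minimum_link() and mlx5e_get_max_linkspeed() in the patch) and enables compression when that bandwidth is below 40 Gb/s and below the maximum link speed. A small stand-alone sketch of that arithmetic, with illustrative numbers only:

#include <stdbool.h>
#include <stdio.h>

/* Per-lane PCIe rates in Mb/s, matching the switch statement in the patch. */
static unsigned int pci_bw_mbps(unsigned int gen, unsigned int width)
{
	switch (gen) {
	case 1: return 2500 * width;	/* 2.5 GT/s per lane */
	case 2: return 5000 * width;	/* 5.0 GT/s per lane */
	case 3: return 8000 * width;	/* 8.0 GT/s per lane */
	default: return 0;		/* unknown speed     */
	}
}

static bool cqe_compress_heuristic(unsigned int link_speed, unsigned int pci_bw)
{
	return link_speed && pci_bw && pci_bw < 40000 && pci_bw < link_speed;
}

int main(void)
{
	/* A 100 Gb/s port behind a gen3 x4 slot: 8000 * 4 = 32000 Mb/s,
	 * which is below both thresholds, so compression defaults on.
	 */
	unsigned int pci_bw = pci_bw_mbps(3, 4);
	unsigned int link_speed = 100000;

	printf("pci_bw=%u Mb/s compress=%d\n",
	       pci_bw, (int)cqe_compress_heuristic(link_speed, pci_bw));
	return 0;
}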
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
index 58d4e2f962c3..bd947704b59c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_rx.c
@@ -42,13 +42,149 @@ static inline bool mlx5e_rx_hw_stamp(struct mlx5e_tstamp *tstamp)
return tstamp->hwtstamp_config.rx_filter == HWTSTAMP_FILTER_ALL;
}
-static inline int mlx5e_alloc_rx_wqe(struct mlx5e_rq *rq,
- struct mlx5e_rx_wqe *wqe, u16 ix)
+static inline void mlx5e_read_cqe_slot(struct mlx5e_cq *cq, u32 cqcc,
+ void *data)
+{
+ u32 ci = cqcc & cq->wq.sz_m1;
+
+ memcpy(data, mlx5_cqwq_get_wqe(&cq->wq, ci), sizeof(struct mlx5_cqe64));
+}
+
+static inline void mlx5e_read_title_slot(struct mlx5e_rq *rq,
+ struct mlx5e_cq *cq, u32 cqcc)
+{
+ mlx5e_read_cqe_slot(cq, cqcc, &cq->title);
+ cq->decmprs_left = be32_to_cpu(cq->title.byte_cnt);
+ cq->decmprs_wqe_counter = be16_to_cpu(cq->title.wqe_counter);
+ rq->stats.cqe_compress_blks++;
+}
+
+static inline void mlx5e_read_mini_arr_slot(struct mlx5e_cq *cq, u32 cqcc)
+{
+ mlx5e_read_cqe_slot(cq, cqcc, cq->mini_arr);
+ cq->mini_arr_idx = 0;
+}
+
+static inline void mlx5e_cqes_update_owner(struct mlx5e_cq *cq, u32 cqcc, int n)
+{
+ u8 op_own = (cqcc >> cq->wq.log_sz) & 1;
+ u32 wq_sz = 1 << cq->wq.log_sz;
+ u32 ci = cqcc & cq->wq.sz_m1;
+ u32 ci_top = min_t(u32, wq_sz, ci + n);
+
+ for (; ci < ci_top; ci++, n--) {
+ struct mlx5_cqe64 *cqe = mlx5_cqwq_get_wqe(&cq->wq, ci);
+
+ cqe->op_own = op_own;
+ }
+
+ if (unlikely(ci == wq_sz)) {
+ op_own = !op_own;
+ for (ci = 0; ci < n; ci++) {
+ struct mlx5_cqe64 *cqe = mlx5_cqwq_get_wqe(&cq->wq, ci);
+
+ cqe->op_own = op_own;
+ }
+ }
+}
+
+static inline void mlx5e_decompress_cqe(struct mlx5e_rq *rq,
+ struct mlx5e_cq *cq, u32 cqcc)
+{
+ u16 wqe_cnt_step;
+
+ cq->title.byte_cnt = cq->mini_arr[cq->mini_arr_idx].byte_cnt;
+ cq->title.check_sum = cq->mini_arr[cq->mini_arr_idx].checksum;
+ cq->title.op_own &= 0xf0;
+ cq->title.op_own |= 0x01 & (cqcc >> cq->wq.log_sz);
+ cq->title.wqe_counter = cpu_to_be16(cq->decmprs_wqe_counter);
+
+ wqe_cnt_step =
+ rq->wq_type == MLX5_WQ_TYPE_LINKED_LIST_STRIDING_RQ ?
+ mpwrq_get_cqe_consumed_strides(&cq->title) : 1;
+ cq->decmprs_wqe_counter =
+ (cq->decmprs_wqe_counter + wqe_cnt_step) & rq->wq.sz_m1;
+}
+
+static inline void mlx5e_decompress_cqe_no_hash(struct mlx5e_rq *rq,
+ struct mlx5e_cq *cq, u32 cqcc)
+{
+ mlx5e_decompress_cqe(rq, cq, cqcc);
+ cq->title.rss_hash_type = 0;
+ cq->title.rss_hash_result = 0;
+}
+
+static inline u32 mlx5e_decompress_cqes_cont(struct mlx5e_rq *rq,
+ struct mlx5e_cq *cq,
+ int update_owner_only,
+ int budget_rem)
+{
+ u32 cqcc = cq->wq.cc + update_owner_only;
+ u32 cqe_count;
+ u32 i;
+
+ cqe_count = min_t(u32, cq->decmprs_left, budget_rem);
+
+ for (i = update_owner_only; i < cqe_count;
+ i++, cq->mini_arr_idx++, cqcc++) {
+ if (cq->mini_arr_idx == MLX5_MINI_CQE_ARRAY_SIZE)
+ mlx5e_read_mini_arr_slot(cq, cqcc);
+
+ mlx5e_decompress_cqe_no_hash(rq, cq, cqcc);
+ rq->handle_rx_cqe(rq, &cq->title);
+ }
+ mlx5e_cqes_update_owner(cq, cq->wq.cc, cqcc - cq->wq.cc);
+ cq->wq.cc = cqcc;
+ cq->decmprs_left -= cqe_count;
+ rq->stats.cqe_compress_pkts += cqe_count;
+
+ return cqe_count;
+}
+
+static inline u32 mlx5e_decompress_cqes_start(struct mlx5e_rq *rq,
+ struct mlx5e_cq *cq,
+ int budget_rem)
+{
+ mlx5e_read_title_slot(rq, cq, cq->wq.cc);
+ mlx5e_read_mini_arr_slot(cq, cq->wq.cc + 1);
+ mlx5e_decompress_cqe(rq, cq, cq->wq.cc);
+ rq->handle_rx_cqe(rq, &cq->title);
+ cq->mini_arr_idx++;
+
+ return mlx5e_decompress_cqes_cont(rq, cq, 1, budget_rem) - 1;
+}
+
+void mlx5e_modify_rx_cqe_compression(struct mlx5e_priv *priv, bool val)
+{
+ bool was_opened;
+
+ if (!MLX5_CAP_GEN(priv->mdev, cqe_compression))
+ return;
+
+ mutex_lock(&priv->state_lock);
+
+ if (priv->params.rx_cqe_compress == val)
+ goto unlock;
+
+ was_opened = test_bit(MLX5E_STATE_OPENED, &priv->state);
+ if (was_opened)
+ mlx5e_close_locked(priv->netdev);
+
+ priv->params.rx_cqe_compress = val;
+
+ if (was_opened)
+ mlx5e_open_locked(priv->netdev);
+
+unlock:
+ mutex_unlock(&priv->state_lock);
+}
+
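
The helpers above expand a compressed completion block: the title CQE carries the fields shared by every packet in the block (and, per mlx5e_read_title_slot(), the count of entries left to expand), while each mini CQE supplies only the per-packet byte count and checksum. A minimal user-space sketch of that expansion loop follows; the types and names (title_cqe, mini_cqe, expand_block) are invented for illustration and are not the driver's layouts.

/* Host-side model of CQE-compression expansion: a title CQE plus an
 * array of mini CQEs is expanded into per-packet descriptors.  The
 * structures below are illustrative only, not mlx5 hardware layouts.
 */
#include <stdio.h>
#include <stdint.h>

struct mini_cqe {               /* compressed entry: only the varying fields */
	uint32_t byte_cnt;
	uint32_t checksum;
};

struct title_cqe {              /* shared fields copied into every expansion */
	uint32_t byte_cnt;
	uint16_t wqe_counter;
	uint32_t checksum;
	uint16_t rss_hash_type;
};

static void handle_packet(const struct title_cqe *cqe)
{
	printf("wqe=%u bytes=%u csum=0x%08x\n",
	       (unsigned int)cqe->wqe_counter, cqe->byte_cnt, cqe->checksum);
}

/* Expand one compressed block: clone the title, patch in the per-packet
 * fields, and advance the WQE counter by wqe_cnt_step per entry.
 */
static void expand_block(struct title_cqe title,
			 const struct mini_cqe *mini, unsigned int n,
			 unsigned int wqe_cnt_step, unsigned int wq_mask)
{
	for (unsigned int i = 0; i < n; i++) {
		title.byte_cnt = mini[i].byte_cnt;
		title.checksum = mini[i].checksum;
		title.rss_hash_type = 0;        /* hash is not carried per packet */
		handle_packet(&title);
		title.wqe_counter = (title.wqe_counter + wqe_cnt_step) & wq_mask;
	}
}

int main(void)
{
	struct title_cqe title = { .wqe_counter = 10 };
	struct mini_cqe mini[3] = {
		{ .byte_cnt = 1500, .checksum = 0x1111 },
		{ .byte_cnt =   64, .checksum = 0x2222 },
		{ .byte_cnt = 9000, .checksum = 0x3333 },
	};

	expand_block(title, mini, 3, 1 /* one WQE slot per packet */, 0xff);
	return 0;
}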
+int mlx5e_alloc_rx_wqe(struct mlx5e_rq *rq, struct mlx5e_rx_wqe *wqe, u16 ix)
{
struct sk_buff *skb;
dma_addr_t dma_addr;
- skb = netdev_alloc_skb(rq->netdev, rq->wqe_sz);
+ skb = napi_alloc_skb(rq->cq.napi, rq->wqe_sz);
if (unlikely(!skb))
return -ENOMEM;
@@ -62,10 +198,9 @@ static inline int mlx5e_alloc_rx_wqe(struct mlx5e_rq *rq,
if (unlikely(dma_mapping_error(rq->pdev, dma_addr)))
goto err_free_skb;
- skb_reserve(skb, MLX5E_NET_IP_ALIGN);
-
*((dma_addr_t *)skb->cb) = dma_addr;
- wqe->data.addr = cpu_to_be64(dma_addr + MLX5E_NET_IP_ALIGN);
+ wqe->data.addr = cpu_to_be64(dma_addr);
+ wqe->data.lkey = rq->mkey_be;
rq->skb[ix] = skb;
@@ -77,18 +212,389 @@ err_free_skb:
return -ENOMEM;
}
+static inline int mlx5e_mpwqe_strides_per_page(struct mlx5e_rq *rq)
+{
+ return rq->mpwqe_num_strides >> MLX5_MPWRQ_WQE_PAGE_ORDER;
+}
+
+static inline void
+mlx5e_dma_pre_sync_linear_mpwqe(struct device *pdev,
+ struct mlx5e_mpw_info *wi,
+ u32 wqe_offset, u32 len)
+{
+ dma_sync_single_for_cpu(pdev, wi->dma_info.addr + wqe_offset,
+ len, DMA_FROM_DEVICE);
+}
+
+static inline void
+mlx5e_dma_pre_sync_fragmented_mpwqe(struct device *pdev,
+ struct mlx5e_mpw_info *wi,
+ u32 wqe_offset, u32 len)
+{
+ /* No dma pre sync for fragmented MPWQE */
+}
+
+static inline void
+mlx5e_add_skb_frag_linear_mpwqe(struct mlx5e_rq *rq,
+ struct sk_buff *skb,
+ struct mlx5e_mpw_info *wi,
+ u32 page_idx, u32 frag_offset,
+ u32 len)
+{
+ unsigned int truesize = ALIGN(len, rq->mpwqe_stride_sz);
+
+ wi->skbs_frags[page_idx]++;
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+ &wi->dma_info.page[page_idx], frag_offset,
+ len, truesize);
+}
+
+static inline void
+mlx5e_add_skb_frag_fragmented_mpwqe(struct mlx5e_rq *rq,
+ struct sk_buff *skb,
+ struct mlx5e_mpw_info *wi,
+ u32 page_idx, u32 frag_offset,
+ u32 len)
+{
+ unsigned int truesize = ALIGN(len, rq->mpwqe_stride_sz);
+
+ dma_sync_single_for_cpu(rq->pdev,
+ wi->umr.dma_info[page_idx].addr + frag_offset,
+ len, DMA_FROM_DEVICE);
+ wi->skbs_frags[page_idx]++;
+ skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+ wi->umr.dma_info[page_idx].page, frag_offset,
+ len, truesize);
+}
+
+static inline void
+mlx5e_copy_skb_header_linear_mpwqe(struct device *pdev,
+ struct sk_buff *skb,
+ struct mlx5e_mpw_info *wi,
+ u32 page_idx, u32 offset,
+ u32 headlen)
+{
+ struct page *page = &wi->dma_info.page[page_idx];
+
+ skb_copy_to_linear_data(skb, page_address(page) + offset,
+ ALIGN(headlen, sizeof(long)));
+}
+
+static inline void
+mlx5e_copy_skb_header_fragmented_mpwqe(struct device *pdev,
+ struct sk_buff *skb,
+ struct mlx5e_mpw_info *wi,
+ u32 page_idx, u32 offset,
+ u32 headlen)
+{
+ u16 headlen_pg = min_t(u32, headlen, PAGE_SIZE - offset);
+ struct mlx5e_dma_info *dma_info = &wi->umr.dma_info[page_idx];
+ unsigned int len;
+
+ /* Aligning len to sizeof(long) optimizes memcpy performance */
+ len = ALIGN(headlen_pg, sizeof(long));
+ dma_sync_single_for_cpu(pdev, dma_info->addr + offset, len,
+ DMA_FROM_DEVICE);
+ skb_copy_to_linear_data_offset(skb, 0,
+ page_address(dma_info->page) + offset,
+ len);
+ if (unlikely(offset + headlen > PAGE_SIZE)) {
+ dma_info++;
+ headlen_pg = len;
+ len = ALIGN(headlen - headlen_pg, sizeof(long));
+ dma_sync_single_for_cpu(pdev, dma_info->addr, len,
+ DMA_FROM_DEVICE);
+ skb_copy_to_linear_data_offset(skb, headlen_pg,
+ page_address(dma_info->page),
+ len);
+ }
+}
+
+static u16 mlx5e_get_wqe_mtt_offset(u16 rq_ix, u16 wqe_ix)
+{
+ return rq_ix * MLX5_CHANNEL_MAX_NUM_MTTS +
+ wqe_ix * ALIGN(MLX5_MPWRQ_PAGES_PER_WQE, 8);
+}
+
+static void mlx5e_build_umr_wqe(struct mlx5e_rq *rq,
+ struct mlx5e_sq *sq,
+ struct mlx5e_umr_wqe *wqe,
+ u16 ix)
+{
+ struct mlx5_wqe_ctrl_seg *cseg = &wqe->ctrl;
+ struct mlx5_wqe_umr_ctrl_seg *ucseg = &wqe->uctrl;
+ struct mlx5_wqe_data_seg *dseg = &wqe->data;
+ struct mlx5e_mpw_info *wi = &rq->wqe_info[ix];
+ u8 ds_cnt = DIV_ROUND_UP(sizeof(*wqe), MLX5_SEND_WQE_DS);
+ u16 umr_wqe_mtt_offset = mlx5e_get_wqe_mtt_offset(rq->ix, ix);
+
+ memset(wqe, 0, sizeof(*wqe));
+ cseg->opmod_idx_opcode =
+ cpu_to_be32((sq->pc << MLX5_WQE_CTRL_WQE_INDEX_SHIFT) |
+ MLX5_OPCODE_UMR);
+ cseg->qpn_ds = cpu_to_be32((sq->sqn << MLX5_WQE_CTRL_QPN_SHIFT) |
+ ds_cnt);
+ cseg->fm_ce_se = MLX5_WQE_CTRL_CQ_UPDATE;
+ cseg->imm = rq->umr_mkey_be;
+
+ ucseg->flags = MLX5_UMR_TRANSLATION_OFFSET_EN;
+ ucseg->klm_octowords =
+ cpu_to_be16(mlx5e_get_mtt_octw(MLX5_MPWRQ_PAGES_PER_WQE));
+ ucseg->bsf_octowords =
+ cpu_to_be16(mlx5e_get_mtt_octw(umr_wqe_mtt_offset));
+ ucseg->mkey_mask = cpu_to_be64(MLX5_MKEY_MASK_FREE);
+
+ dseg->lkey = sq->mkey_be;
+ dseg->addr = cpu_to_be64(wi->umr.mtt_addr);
+}
+
+static void mlx5e_post_umr_wqe(struct mlx5e_rq *rq, u16 ix)
+{
+ struct mlx5e_sq *sq = &rq->channel->icosq;
+ struct mlx5_wq_cyc *wq = &sq->wq;
+ struct mlx5e_umr_wqe *wqe;
+ u8 num_wqebbs = DIV_ROUND_UP(sizeof(*wqe), MLX5_SEND_WQE_BB);
+ u16 pi;
+
+ /* fill sq edge with nops to avoid wqe wrap around */
+ while ((pi = (sq->pc & wq->sz_m1)) > sq->edge) {
+ sq->ico_wqe_info[pi].opcode = MLX5_OPCODE_NOP;
+ sq->ico_wqe_info[pi].num_wqebbs = 1;
+ mlx5e_send_nop(sq, true);
+ }
+
+ wqe = mlx5_wq_cyc_get_wqe(wq, pi);
+ mlx5e_build_umr_wqe(rq, sq, wqe, ix);
+ sq->ico_wqe_info[pi].opcode = MLX5_OPCODE_UMR;
+ sq->ico_wqe_info[pi].num_wqebbs = num_wqebbs;
+ sq->pc += num_wqebbs;
+ mlx5e_tx_notify_hw(sq, &wqe->ctrl, 0);
+}
+
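
mlx5e_post_umr_wqe() pads the tail of the cyclic send queue with NOPs whenever a multi-WQEBB entry would otherwise wrap past the end, so the hardware always sees the UMR WQE as one contiguous run of slots. The standalone sketch below reproduces only that producer-index arithmetic; struct sq_model and its fields are illustrative stand-ins, not the driver's structures.

/* Model of "pad the ring edge with NOPs so a multi-slot WQE never
 * wraps".  The names are hypothetical; only the index arithmetic with
 * pc, sz_m1 and edge mirrors what the driver does.
 */
#include <stdio.h>
#include <stdint.h>

struct sq_model {
	uint16_t pc;       /* free-running producer counter */
	uint16_t sz_m1;    /* queue size - 1 (size is a power of two) */
	uint16_t edge;     /* last slot where a maximum-size WQE still fits */
};

static void post_nop(struct sq_model *sq)
{
	printf("NOP at slot %u\n", (unsigned int)(sq->pc & sq->sz_m1));
	sq->pc++;
}

static void post_multi_slot(struct sq_model *sq, unsigned int slots)
{
	uint16_t pi;

	/* If the WQE would cross the end of the ring, pad with NOPs. */
	while ((pi = (sq->pc & sq->sz_m1)) > sq->edge)
		post_nop(sq);

	printf("WQE at slot %u (%u slots)\n", (unsigned int)pi, slots);
	sq->pc += slots;
}

int main(void)
{
	/* 16-slot queue; a 3-slot WQE must start at slot 13 or earlier. */
	struct sq_model sq = { .pc = 14, .sz_m1 = 15, .edge = 13 };

	post_multi_slot(&sq, 3);   /* pads slots 14 and 15, posts at slot 0 */
	return 0;
}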
+static inline int mlx5e_get_wqe_mtt_sz(void)
+{
+ /* UMR copies MTTs in units of MLX5_UMR_MTT_ALIGNMENT bytes.
+ * To avoid copying garbage after the mtt array, we allocate
+ * a little more.
+ */
+ return ALIGN(MLX5_MPWRQ_PAGES_PER_WQE * sizeof(__be64),
+ MLX5_UMR_MTT_ALIGNMENT);
+}
+
+static int mlx5e_alloc_and_map_page(struct mlx5e_rq *rq,
+ struct mlx5e_mpw_info *wi,
+ int i)
+{
+ struct page *page;
+
+ page = dev_alloc_page();
+ if (unlikely(!page))
+ return -ENOMEM;
+
+ wi->umr.dma_info[i].page = page;
+ wi->umr.dma_info[i].addr = dma_map_page(rq->pdev, page, 0, PAGE_SIZE,
+ PCI_DMA_FROMDEVICE);
+ if (unlikely(dma_mapping_error(rq->pdev, wi->umr.dma_info[i].addr))) {
+ put_page(page);
+ return -ENOMEM;
+ }
+ wi->umr.mtt[i] = cpu_to_be64(wi->umr.dma_info[i].addr | MLX5_EN_WR);
+
+ return 0;
+}
+
+static int mlx5e_alloc_rx_fragmented_mpwqe(struct mlx5e_rq *rq,
+ struct mlx5e_rx_wqe *wqe,
+ u16 ix)
+{
+ struct mlx5e_mpw_info *wi = &rq->wqe_info[ix];
+ int mtt_sz = mlx5e_get_wqe_mtt_sz();
+ u32 dma_offset = mlx5e_get_wqe_mtt_offset(rq->ix, ix) << PAGE_SHIFT;
+ int i;
+
+ wi->umr.dma_info = kmalloc(sizeof(*wi->umr.dma_info) *
+ MLX5_MPWRQ_PAGES_PER_WQE,
+ GFP_ATOMIC);
+ if (unlikely(!wi->umr.dma_info))
+ goto err_out;
+
+ /* We allocate more than mtt_sz as we will align the pointer */
+ wi->umr.mtt_no_align = kzalloc(mtt_sz + MLX5_UMR_ALIGN - 1,
+ GFP_ATOMIC);
+ if (unlikely(!wi->umr.mtt_no_align))
+ goto err_free_umr;
+
+ wi->umr.mtt = PTR_ALIGN(wi->umr.mtt_no_align, MLX5_UMR_ALIGN);
+ wi->umr.mtt_addr = dma_map_single(rq->pdev, wi->umr.mtt, mtt_sz,
+ PCI_DMA_TODEVICE);
+ if (unlikely(dma_mapping_error(rq->pdev, wi->umr.mtt_addr)))
+ goto err_free_mtt;
+
+ for (i = 0; i < MLX5_MPWRQ_PAGES_PER_WQE; i++) {
+ if (unlikely(mlx5e_alloc_and_map_page(rq, wi, i)))
+ goto err_unmap;
+ page_ref_add(wi->umr.dma_info[i].page,
+ mlx5e_mpwqe_strides_per_page(rq));
+ wi->skbs_frags[i] = 0;
+ }
+
+ wi->consumed_strides = 0;
+ wi->dma_pre_sync = mlx5e_dma_pre_sync_fragmented_mpwqe;
+ wi->add_skb_frag = mlx5e_add_skb_frag_fragmented_mpwqe;
+ wi->copy_skb_header = mlx5e_copy_skb_header_fragmented_mpwqe;
+ wi->free_wqe = mlx5e_free_rx_fragmented_mpwqe;
+ wqe->data.lkey = rq->umr_mkey_be;
+ wqe->data.addr = cpu_to_be64(dma_offset);
+
+ return 0;
+
+err_unmap:
+ while (--i >= 0) {
+ dma_unmap_page(rq->pdev, wi->umr.dma_info[i].addr, PAGE_SIZE,
+ PCI_DMA_FROMDEVICE);
+ page_ref_sub(wi->umr.dma_info[i].page,
+ mlx5e_mpwqe_strides_per_page(rq));
+ put_page(wi->umr.dma_info[i].page);
+ }
+ dma_unmap_single(rq->pdev, wi->umr.mtt_addr, mtt_sz, PCI_DMA_TODEVICE);
+
+err_free_mtt:
+ kfree(wi->umr.mtt_no_align);
+
+err_free_umr:
+ kfree(wi->umr.dma_info);
+
+err_out:
+ return -ENOMEM;
+}
+
+void mlx5e_free_rx_fragmented_mpwqe(struct mlx5e_rq *rq,
+ struct mlx5e_mpw_info *wi)
+{
+ int mtt_sz = mlx5e_get_wqe_mtt_sz();
+ int i;
+
+ for (i = 0; i < MLX5_MPWRQ_PAGES_PER_WQE; i++) {
+ dma_unmap_page(rq->pdev, wi->umr.dma_info[i].addr, PAGE_SIZE,
+ PCI_DMA_FROMDEVICE);
+ page_ref_sub(wi->umr.dma_info[i].page,
+ mlx5e_mpwqe_strides_per_page(rq) - wi->skbs_frags[i]);
+ put_page(wi->umr.dma_info[i].page);
+ }
+ dma_unmap_single(rq->pdev, wi->umr.mtt_addr, mtt_sz, PCI_DMA_TODEVICE);
+ kfree(wi->umr.mtt_no_align);
+ kfree(wi->umr.dma_info);
+}
+
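
Both MPWQE variants pre-charge every page's reference count with the number of strides it can serve and, on release, subtract whatever was never handed to an SKB frag, so fragments still held by the stack keep the page alive. The self-contained model below walks that charge/consume/settle cycle; the names and the STRIDES_PER_PAGE value are made up for the example.

/* Model of the page-reference accounting used for multi-packet WQEs:
 * charge one reference per possible stride up front, let each SKB frag
 * keep the reference it was handed, and drop the unused remainder when
 * the WQE retires.  The names and STRIDES_PER_PAGE are examples only.
 */
#include <stdio.h>
#include <assert.h>

#define STRIDES_PER_PAGE 16

struct page_model {
	int refcount;
	int skb_frags;          /* strides actually handed to the stack */
};

static void wqe_charge(struct page_model *p)
{
	p->refcount += STRIDES_PER_PAGE;    /* one ref per potential frag */
	p->skb_frags = 0;
}

static void frag_consumed(struct page_model *p)
{
	p->skb_frags++;                     /* this ref is now owned by the SKB */
}

static void wqe_release(struct page_model *p)
{
	p->refcount -= STRIDES_PER_PAGE - p->skb_frags;  /* unused refs */
	p->refcount -= 1;                   /* the allocation's own reference */
}

int main(void)
{
	struct page_model p = { .refcount = 1 };   /* freshly allocated page */

	wqe_charge(&p);
	frag_consumed(&p);
	frag_consumed(&p);                  /* two packets used this page */
	wqe_release(&p);

	/* Only the references held by in-flight SKBs remain. */
	assert(p.refcount == p.skb_frags);
	printf("references still held by SKBs: %d\n", p.refcount);
	return 0;
}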
+void mlx5e_post_rx_fragmented_mpwqe(struct mlx5e_rq *rq)
+{
+ struct mlx5_wq_ll *wq = &rq->wq;
+ struct mlx5e_rx_wqe *wqe = mlx5_wq_ll_get_wqe(wq, wq->head);
+
+ clear_bit(MLX5E_RQ_STATE_UMR_WQE_IN_PROGRESS, &rq->state);
+ mlx5_wq_ll_push(wq, be16_to_cpu(wqe->next.next_wqe_index));
+ rq->stats.mpwqe_frag++;
+
+ /* ensure wqes are visible to device before updating doorbell record */
+ dma_wmb();
+
+ mlx5_wq_ll_update_db_record(wq);
+}
+
+static int mlx5e_alloc_rx_linear_mpwqe(struct mlx5e_rq *rq,
+ struct mlx5e_rx_wqe *wqe,
+ u16 ix)
+{
+ struct mlx5e_mpw_info *wi = &rq->wqe_info[ix];
+ gfp_t gfp_mask;
+ int i;
+
+ gfp_mask = GFP_ATOMIC | __GFP_COLD | __GFP_MEMALLOC;
+ wi->dma_info.page = alloc_pages_node(NUMA_NO_NODE, gfp_mask,
+ MLX5_MPWRQ_WQE_PAGE_ORDER);
+ if (unlikely(!wi->dma_info.page))
+ return -ENOMEM;
+
+ wi->dma_info.addr = dma_map_page(rq->pdev, wi->dma_info.page, 0,
+ rq->wqe_sz, PCI_DMA_FROMDEVICE);
+ if (unlikely(dma_mapping_error(rq->pdev, wi->dma_info.addr))) {
+ put_page(wi->dma_info.page);
+ return -ENOMEM;
+ }
+
+ /* We split the high-order page into order-0 ones and manage their
+ * reference counter to minimize the memory held by small skb fragments
+ */
+ split_page(wi->dma_info.page, MLX5_MPWRQ_WQE_PAGE_ORDER);
+ for (i = 0; i < MLX5_MPWRQ_PAGES_PER_WQE; i++) {
+ page_ref_add(&wi->dma_info.page[i],
+ mlx5e_mpwqe_strides_per_page(rq));
+ wi->skbs_frags[i] = 0;
+ }
+
+ wi->consumed_strides = 0;
+ wi->dma_pre_sync = mlx5e_dma_pre_sync_linear_mpwqe;
+ wi->add_skb_frag = mlx5e_add_skb_frag_linear_mpwqe;
+ wi->copy_skb_header = mlx5e_copy_skb_header_linear_mpwqe;
+ wi->free_wqe = mlx5e_free_rx_linear_mpwqe;
+ wqe->data.lkey = rq->mkey_be;
+ wqe->data.addr = cpu_to_be64(wi->dma_info.addr);
+
+ return 0;
+}
+
+void mlx5e_free_rx_linear_mpwqe(struct mlx5e_rq *rq,
+ struct mlx5e_mpw_info *wi)
+{
+ int i;
+
+ dma_unmap_page(rq->pdev, wi->dma_info.addr, rq->wqe_sz,
+ PCI_DMA_FROMDEVICE);
+ for (i = 0; i < MLX5_MPWRQ_PAGES_PER_WQE; i++) {
+ page_ref_sub(&wi->dma_info.page[i],
+ mlx5e_mpwqe_strides_per_page(rq) - wi->skbs_frags[i]);
+ put_page(&wi->dma_info.page[i]);
+ }
+}
+
+int mlx5e_alloc_rx_mpwqe(struct mlx5e_rq *rq, struct mlx5e_rx_wqe *wqe, u16 ix)
+{
+ int err;
+
+ err = mlx5e_alloc_rx_linear_mpwqe(rq, wqe, ix);
+ if (unlikely(err)) {
+ err = mlx5e_alloc_rx_fragmented_mpwqe(rq, wqe, ix);
+ if (unlikely(err))
+ return err;
+ set_bit(MLX5E_RQ_STATE_UMR_WQE_IN_PROGRESS, &rq->state);
+ mlx5e_post_umr_wqe(rq, ix);
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
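
mlx5e_alloc_rx_mpwqe() tries the cheap linear path first (one high-order, physically contiguous buffer) and only falls back to the page-by-page UMR path when that allocation fails, returning -EBUSY so the caller stops posting until the UMR completion re-arms the queue. A small sketch of that try-fast/fall-back shape, under invented names, is below.

/* Model of "try the contiguous fast path, fall back to the fragmented
 * path and report that the buffer is not ready yet".  alloc_linear()
 * and alloc_fragmented() are hypothetical stand-ins for the two paths.
 */
#include <stdio.h>
#include <errno.h>
#include <stdbool.h>

static bool highorder_pages_available;      /* pretend allocator state */

static int alloc_linear(void)
{
	return highorder_pages_available ? 0 : -ENOMEM;
}

static int alloc_fragmented(void)
{
	return 0;                            /* order-0 pages rarely fail */
}

static int alloc_rx_buffer(bool *mapping_pending)
{
	int err = alloc_linear();

	if (err) {
		err = alloc_fragmented();
		if (err)
			return err;
		*mapping_pending = true;     /* finishes asynchronously */
		return -EBUSY;               /* caller stops posting for now */
	}
	return 0;
}

int main(void)
{
	bool pending;
	int err;

	highorder_pages_available = false;
	pending = false;
	err = alloc_rx_buffer(&pending);
	printf("alloc -> %d, pending=%d\n", err, pending);

	highorder_pages_available = true;
	pending = false;
	err = alloc_rx_buffer(&pending);
	printf("alloc -> %d, pending=%d\n", err, pending);
	return 0;
}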
+#define RQ_CANNOT_POST(rq) \
+ (!test_bit(MLX5E_RQ_STATE_POST_WQES_ENABLE, &rq->state) || \
+ test_bit(MLX5E_RQ_STATE_UMR_WQE_IN_PROGRESS, &rq->state))
+
bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq)
{
struct mlx5_wq_ll *wq = &rq->wq;
- if (unlikely(!test_bit(MLX5E_RQ_STATE_POST_WQES_ENABLE, &rq->state)))
+ if (unlikely(RQ_CANNOT_POST(rq)))
return false;
while (!mlx5_wq_ll_is_full(wq)) {
struct mlx5e_rx_wqe *wqe = mlx5_wq_ll_get_wqe(wq, wq->head);
+ int err;
- if (unlikely(mlx5e_alloc_rx_wqe(rq, wqe, wq->head)))
+ err = rq->alloc_wqe(rq, wqe, wq->head);
+ if (unlikely(err)) {
+ if (err != -EBUSY)
+ rq->stats.buff_alloc_err++;
break;
+ }
mlx5_wq_ll_push(wq, be16_to_cpu(wqe->next.next_wqe_index));
}
@@ -101,7 +607,8 @@ bool mlx5e_post_rx_wqes(struct mlx5e_rq *rq)
return !mlx5_wq_ll_is_full(wq);
}
-static void mlx5e_lro_update_hdr(struct sk_buff *skb, struct mlx5_cqe64 *cqe)
+static void mlx5e_lro_update_hdr(struct sk_buff *skb, struct mlx5_cqe64 *cqe,
+ u32 cqe_bcnt)
{
struct ethhdr *eth = (struct ethhdr *)(skb->data);
struct iphdr *ipv4 = (struct iphdr *)(skb->data + ETH_HLEN);
@@ -112,7 +619,7 @@ static void mlx5e_lro_update_hdr(struct sk_buff *skb, struct mlx5_cqe64 *cqe)
int tcp_ack = ((CQE_L4_HDR_TYPE_TCP_ACK_NO_DATA == l4_hdr_type) ||
(CQE_L4_HDR_TYPE_TCP_ACK_AND_DATA == l4_hdr_type));
- u16 tot_len = be32_to_cpu(cqe->byte_cnt) - ETH_HLEN;
+ u16 tot_len = cqe_bcnt - ETH_HLEN;
if (eth->h_proto == htons(ETH_P_IP)) {
tcp = (struct tcphdr *)(skb->data + ETH_HLEN +
@@ -176,35 +683,43 @@ static inline void mlx5e_handle_csum(struct net_device *netdev,
if (lro) {
skb->ip_summed = CHECKSUM_UNNECESSARY;
- } else if (likely(is_first_ethertype_ip(skb))) {
+ return;
+ }
+
+ if (is_first_ethertype_ip(skb)) {
skb->ip_summed = CHECKSUM_COMPLETE;
skb->csum = csum_unfold((__force __sum16)cqe->check_sum);
rq->stats.csum_sw++;
- } else {
- goto csum_none;
+ return;
}
- return;
-
+ if (likely((cqe->hds_ip_ext & CQE_L3_OK) &&
+ (cqe->hds_ip_ext & CQE_L4_OK))) {
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ if (cqe_is_tunneled(cqe)) {
+ skb->csum_level = 1;
+ skb->encapsulation = 1;
+ rq->stats.csum_inner++;
+ }
+ return;
+ }
csum_none:
skb->ip_summed = CHECKSUM_NONE;
rq->stats.csum_none++;
}
static inline void mlx5e_build_rx_skb(struct mlx5_cqe64 *cqe,
+ u32 cqe_bcnt,
struct mlx5e_rq *rq,
struct sk_buff *skb)
{
struct net_device *netdev = rq->netdev;
- u32 cqe_bcnt = be32_to_cpu(cqe->byte_cnt);
struct mlx5e_tstamp *tstamp = rq->tstamp;
int lro_num_seg;
- skb_put(skb, cqe_bcnt);
-
lro_num_seg = be32_to_cpu(cqe->srqn) >> 24;
if (lro_num_seg > 1) {
- mlx5e_lro_update_hdr(skb, cqe);
+ mlx5e_lro_update_hdr(skb, cqe, cqe_bcnt);
skb_shinfo(skb)->gso_size = DIV_ROUND_UP(cqe_bcnt, lro_num_seg);
rq->stats.lro_packets++;
rq->stats.lro_bytes += cqe_bcnt;
@@ -213,10 +728,6 @@ static inline void mlx5e_build_rx_skb(struct mlx5_cqe64 *cqe,
if (unlikely(mlx5e_rx_hw_stamp(tstamp)))
mlx5e_fill_hwstamp(tstamp, get_cqe_ts(cqe), skb_hwtstamps(skb));
- mlx5e_handle_csum(netdev, cqe, rq, skb, !!lro_num_seg);
-
- skb->protocol = eth_type_trans(skb, netdev);
-
skb_record_rx_queue(skb, rq->ix);
if (likely(netdev->features & NETIF_F_RXHASH))
@@ -227,52 +738,165 @@ static inline void mlx5e_build_rx_skb(struct mlx5_cqe64 *cqe,
be16_to_cpu(cqe->vlan_info));
skb->mark = be32_to_cpu(cqe->sop_drop_qpn) & MLX5E_TC_FLOW_ID_MASK;
+
+ mlx5e_handle_csum(netdev, cqe, rq, skb, !!lro_num_seg);
+ skb->protocol = eth_type_trans(skb, netdev);
+}
+
+static inline void mlx5e_complete_rx_cqe(struct mlx5e_rq *rq,
+ struct mlx5_cqe64 *cqe,
+ u32 cqe_bcnt,
+ struct sk_buff *skb)
+{
+ rq->stats.packets++;
+ rq->stats.bytes += cqe_bcnt;
+ mlx5e_build_rx_skb(cqe, cqe_bcnt, rq, skb);
+ napi_gro_receive(rq->cq.napi, skb);
+}
+
+void mlx5e_handle_rx_cqe(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
+{
+ struct mlx5e_rx_wqe *wqe;
+ struct sk_buff *skb;
+ __be16 wqe_counter_be;
+ u16 wqe_counter;
+ u32 cqe_bcnt;
+
+ wqe_counter_be = cqe->wqe_counter;
+ wqe_counter = be16_to_cpu(wqe_counter_be);
+ wqe = mlx5_wq_ll_get_wqe(&rq->wq, wqe_counter);
+ skb = rq->skb[wqe_counter];
+ prefetch(skb->data);
+ rq->skb[wqe_counter] = NULL;
+
+ dma_unmap_single(rq->pdev,
+ *((dma_addr_t *)skb->cb),
+ rq->wqe_sz,
+ DMA_FROM_DEVICE);
+
+ if (unlikely((cqe->op_own >> 4) != MLX5_CQE_RESP_SEND)) {
+ rq->stats.wqe_err++;
+ dev_kfree_skb(skb);
+ goto wq_ll_pop;
+ }
+
+ cqe_bcnt = be32_to_cpu(cqe->byte_cnt);
+ skb_put(skb, cqe_bcnt);
+
+ mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+
+wq_ll_pop:
+ mlx5_wq_ll_pop(&rq->wq, wqe_counter_be,
+ &wqe->next.next_wqe_index);
+}
+
+static inline void mlx5e_mpwqe_fill_rx_skb(struct mlx5e_rq *rq,
+ struct mlx5_cqe64 *cqe,
+ struct mlx5e_mpw_info *wi,
+ u32 cqe_bcnt,
+ struct sk_buff *skb)
+{
+ u32 consumed_bytes = ALIGN(cqe_bcnt, rq->mpwqe_stride_sz);
+ u16 stride_ix = mpwrq_get_cqe_stride_index(cqe);
+ u32 wqe_offset = stride_ix * rq->mpwqe_stride_sz;
+ u32 head_offset = wqe_offset & (PAGE_SIZE - 1);
+ u32 page_idx = wqe_offset >> PAGE_SHIFT;
+ u32 head_page_idx = page_idx;
+ u16 headlen = min_t(u16, MLX5_MPWRQ_SMALL_PACKET_THRESHOLD, cqe_bcnt);
+ u32 frag_offset = head_offset + headlen;
+ u16 byte_cnt = cqe_bcnt - headlen;
+
+ if (unlikely(frag_offset >= PAGE_SIZE)) {
+ page_idx++;
+ frag_offset -= PAGE_SIZE;
+ }
+ wi->dma_pre_sync(rq->pdev, wi, wqe_offset, consumed_bytes);
+
+ while (byte_cnt) {
+ u32 pg_consumed_bytes =
+ min_t(u32, PAGE_SIZE - frag_offset, byte_cnt);
+
+ wi->add_skb_frag(rq, skb, wi, page_idx, frag_offset,
+ pg_consumed_bytes);
+ byte_cnt -= pg_consumed_bytes;
+ frag_offset = 0;
+ page_idx++;
+ }
+ /* copy header */
+ wi->copy_skb_header(rq->pdev, skb, wi, head_page_idx, head_offset,
+ headlen);
+ /* skb linear part was allocated with headlen and aligned to long */
+ skb->tail += headlen;
+ skb->len += headlen;
+}
+
+void mlx5e_handle_rx_cqe_mpwrq(struct mlx5e_rq *rq, struct mlx5_cqe64 *cqe)
+{
+ u16 cstrides = mpwrq_get_cqe_consumed_strides(cqe);
+ u16 wqe_id = be16_to_cpu(cqe->wqe_id);
+ struct mlx5e_mpw_info *wi = &rq->wqe_info[wqe_id];
+ struct mlx5e_rx_wqe *wqe = mlx5_wq_ll_get_wqe(&rq->wq, wqe_id);
+ struct sk_buff *skb;
+ u16 cqe_bcnt;
+
+ wi->consumed_strides += cstrides;
+
+ if (unlikely((cqe->op_own >> 4) != MLX5_CQE_RESP_SEND)) {
+ rq->stats.wqe_err++;
+ goto mpwrq_cqe_out;
+ }
+
+ if (unlikely(mpwrq_is_filler_cqe(cqe))) {
+ rq->stats.mpwqe_filler++;
+ goto mpwrq_cqe_out;
+ }
+
+ skb = napi_alloc_skb(rq->cq.napi,
+ ALIGN(MLX5_MPWRQ_SMALL_PACKET_THRESHOLD,
+ sizeof(long)));
+ if (unlikely(!skb)) {
+ rq->stats.buff_alloc_err++;
+ goto mpwrq_cqe_out;
+ }
+
+ prefetch(skb->data);
+ cqe_bcnt = mpwrq_get_cqe_byte_cnt(cqe);
+
+ mlx5e_mpwqe_fill_rx_skb(rq, cqe, wi, cqe_bcnt, skb);
+ mlx5e_complete_rx_cqe(rq, cqe, cqe_bcnt, skb);
+
+mpwrq_cqe_out:
+ if (likely(wi->consumed_strides < rq->mpwqe_num_strides))
+ return;
+
+ wi->free_wqe(rq, wi);
+ mlx5_wq_ll_pop(&rq->wq, cqe->wqe_id, &wqe->next.next_wqe_index);
}
int mlx5e_poll_rx_cq(struct mlx5e_cq *cq, int budget)
{
struct mlx5e_rq *rq = container_of(cq, struct mlx5e_rq, cq);
- int work_done;
+ int work_done = 0;
+
+ if (cq->decmprs_left)
+ work_done += mlx5e_decompress_cqes_cont(rq, cq, 0, budget);
- for (work_done = 0; work_done < budget; work_done++) {
- struct mlx5e_rx_wqe *wqe;
- struct mlx5_cqe64 *cqe;
- struct sk_buff *skb;
- __be16 wqe_counter_be;
- u16 wqe_counter;
+ for (; work_done < budget; work_done++) {
+ struct mlx5_cqe64 *cqe = mlx5e_get_cqe(cq);
- cqe = mlx5e_get_cqe(cq);
if (!cqe)
break;
- mlx5_cqwq_pop(&cq->wq);
-
- wqe_counter_be = cqe->wqe_counter;
- wqe_counter = be16_to_cpu(wqe_counter_be);
- wqe = mlx5_wq_ll_get_wqe(&rq->wq, wqe_counter);
- skb = rq->skb[wqe_counter];
- prefetch(skb->data);
- rq->skb[wqe_counter] = NULL;
-
- dma_unmap_single(rq->pdev,
- *((dma_addr_t *)skb->cb),
- rq->wqe_sz,
- DMA_FROM_DEVICE);
-
- if (unlikely((cqe->op_own >> 4) != MLX5_CQE_RESP_SEND)) {
- rq->stats.wqe_err++;
- dev_kfree_skb(skb);
- goto wq_ll_pop;
+ if (mlx5_get_cqe_format(cqe) == MLX5_COMPRESSED) {
+ work_done +=
+ mlx5e_decompress_cqes_start(rq, cq,
+ budget - work_done);
+ continue;
}
- mlx5e_build_rx_skb(cqe, rq, skb);
- rq->stats.packets++;
- rq->stats.bytes += be32_to_cpu(cqe->byte_cnt);
- napi_gro_receive(cq->napi, skb);
+ mlx5_cqwq_pop(&cq->wq);
-wq_ll_pop:
- mlx5_wq_ll_pop(&rq->wq, wqe_counter_be,
- &wqe->next.next_wqe_index);
+ rq->handle_rx_cqe(rq, cqe);
}
mlx5_cqwq_update_db_record(&cq->wq);
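
The rewritten poll loop above budgets the work, lets compressed blocks consume part of the remaining budget, and dispatches every ordinary CQE through rq->handle_rx_cqe so the legacy and striding RQ types share one loop. The toy poller below models only that budget-plus-handler-pointer structure; the types are invented.

/* Model of a budget-limited completion poll loop that dispatches each
 * entry through a per-queue handler pointer, the way the rewritten
 * poll loop uses rq->handle_rx_cqe.  All types here are invented.
 */
#include <stdio.h>

struct cqe { int data; };

struct rxq {
	void (*handle)(struct rxq *rq, struct cqe *cqe);
	struct cqe pending[8];
	int head, tail;
};

static struct cqe *get_cqe(struct rxq *rq)
{
	return rq->head == rq->tail ? NULL : &rq->pending[rq->head++];
}

static void handle_list_rq(struct rxq *rq, struct cqe *cqe)
{
	(void)rq;
	printf("linked-list handler: packet %d\n", cqe->data);
}

static int poll_rx(struct rxq *rq, int budget)
{
	int work_done;

	for (work_done = 0; work_done < budget; work_done++) {
		struct cqe *cqe = get_cqe(rq);

		if (!cqe)
			break;
		rq->handle(rq, cqe);    /* strategy selected at queue setup */
	}
	return work_done;               /* == budget means "poll again" */
}

int main(void)
{
	struct rxq rq = {
		.handle = handle_list_rq,
		.pending = { {1}, {2}, {3} },
		.tail = 3,
	};

	printf("polled %d of budget 2\n", poll_rx(&rq, 2));
	printf("polled %d of budget 8\n", poll_rx(&rq, 8));
	return 0;
}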
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
new file mode 100644
index 000000000000..83bc32b25849
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_stats.h
@@ -0,0 +1,367 @@
+/*
+ * Copyright (c) 2015-2016, Mellanox Technologies. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+#ifndef __MLX5_EN_STATS_H__
+#define __MLX5_EN_STATS_H__
+
+#define MLX5E_READ_CTR64_CPU(ptr, dsc, i) \
+ (*(u64 *)((char *)ptr + dsc[i].offset))
+#define MLX5E_READ_CTR64_BE(ptr, dsc, i) \
+ be64_to_cpu(*(__be64 *)((char *)ptr + dsc[i].offset))
+#define MLX5E_READ_CTR32_CPU(ptr, dsc, i) \
+ (*(u32 *)((char *)ptr + dsc[i].offset))
+#define MLX5E_READ_CTR32_BE(ptr, dsc, i) \
+ be32_to_cpu(*(__be32 *)((char *)ptr + dsc[i].offset))
+
+#define MLX5E_DECLARE_STAT(type, fld) #fld, offsetof(type, fld)
+
+struct counter_desc {
+ char name[ETH_GSTRING_LEN];
+ int offset; /* Byte offset */
+};
+
+struct mlx5e_sw_stats {
+ u64 rx_packets;
+ u64 rx_bytes;
+ u64 tx_packets;
+ u64 tx_bytes;
+ u64 tso_packets;
+ u64 tso_bytes;
+ u64 tso_inner_packets;
+ u64 tso_inner_bytes;
+ u64 lro_packets;
+ u64 lro_bytes;
+ u64 rx_csum_good;
+ u64 rx_csum_none;
+ u64 rx_csum_sw;
+ u64 rx_csum_inner;
+ u64 tx_csum_offload;
+ u64 tx_csum_inner;
+ u64 tx_queue_stopped;
+ u64 tx_queue_wake;
+ u64 tx_queue_dropped;
+ u64 rx_wqe_err;
+ u64 rx_mpwqe_filler;
+ u64 rx_mpwqe_frag;
+ u64 rx_buff_alloc_err;
+ u64 rx_cqe_compress_blks;
+ u64 rx_cqe_compress_pkts;
+
+ /* Special handling counters */
+ u64 link_down_events;
+};
+
+static const struct counter_desc sw_stats_desc[] = {
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_packets) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_bytes) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_packets) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_bytes) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tso_packets) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tso_bytes) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tso_inner_packets) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tso_inner_bytes) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, lro_packets) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, lro_bytes) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_csum_good) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_csum_none) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_csum_sw) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_csum_inner) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_csum_offload) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_csum_inner) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_queue_stopped) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_queue_wake) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, tx_queue_dropped) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_wqe_err) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_mpwqe_filler) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_mpwqe_frag) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_buff_alloc_err) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_cqe_compress_blks) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, rx_cqe_compress_pkts) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sw_stats, link_down_events) },
+};
+
+struct mlx5e_qcounter_stats {
+ u32 rx_out_of_buffer;
+};
+
+static const struct counter_desc q_stats_desc[] = {
+ { MLX5E_DECLARE_STAT(struct mlx5e_qcounter_stats, rx_out_of_buffer) },
+};
+
+#define VPORT_COUNTER_OFF(c) MLX5_BYTE_OFF(query_vport_counter_out, c)
+#define VPORT_COUNTER_GET(vstats, c) MLX5_GET64(query_vport_counter_out, \
+ vstats->query_vport_out, c)
+
+struct mlx5e_vport_stats {
+ __be64 query_vport_out[MLX5_ST_SZ_QW(query_vport_counter_out)];
+};
+
+static const struct counter_desc vport_stats_desc[] = {
+ { "rx_vport_error_packets",
+ VPORT_COUNTER_OFF(received_errors.packets) },
+ { "rx_vport_error_bytes", VPORT_COUNTER_OFF(received_errors.octets) },
+ { "tx_vport_error_packets",
+ VPORT_COUNTER_OFF(transmit_errors.packets) },
+ { "tx_vport_error_bytes", VPORT_COUNTER_OFF(transmit_errors.octets) },
+ { "rx_vport_unicast_packets",
+ VPORT_COUNTER_OFF(received_eth_unicast.packets) },
+ { "rx_vport_unicast_bytes",
+ VPORT_COUNTER_OFF(received_eth_unicast.octets) },
+ { "tx_vport_unicast_packets",
+ VPORT_COUNTER_OFF(transmitted_eth_unicast.packets) },
+ { "tx_vport_unicast_bytes",
+ VPORT_COUNTER_OFF(transmitted_eth_unicast.octets) },
+ { "rx_vport_multicast_packets",
+ VPORT_COUNTER_OFF(received_eth_multicast.packets) },
+ { "rx_vport_multicast_bytes",
+ VPORT_COUNTER_OFF(received_eth_multicast.octets) },
+ { "tx_vport_multicast_packets",
+ VPORT_COUNTER_OFF(transmitted_eth_multicast.packets) },
+ { "tx_vport_multicast_bytes",
+ VPORT_COUNTER_OFF(transmitted_eth_multicast.octets) },
+ { "rx_vport_broadcast_packets",
+ VPORT_COUNTER_OFF(received_eth_broadcast.packets) },
+ { "rx_vport_broadcast_bytes",
+ VPORT_COUNTER_OFF(received_eth_broadcast.octets) },
+ { "tx_vport_broadcast_packets",
+ VPORT_COUNTER_OFF(transmitted_eth_broadcast.packets) },
+ { "tx_vport_broadcast_bytes",
+ VPORT_COUNTER_OFF(transmitted_eth_broadcast.octets) },
+};
+
+#define PPORT_802_3_OFF(c) \
+ MLX5_BYTE_OFF(ppcnt_reg, \
+ counter_set.eth_802_3_cntrs_grp_data_layout.c##_high)
+#define PPORT_802_3_GET(pstats, c) \
+ MLX5_GET64(ppcnt_reg, pstats->IEEE_802_3_counters, \
+ counter_set.eth_802_3_cntrs_grp_data_layout.c##_high)
+#define PPORT_2863_OFF(c) \
+ MLX5_BYTE_OFF(ppcnt_reg, \
+ counter_set.eth_2863_cntrs_grp_data_layout.c##_high)
+#define PPORT_2863_GET(pstats, c) \
+ MLX5_GET64(ppcnt_reg, pstats->RFC_2863_counters, \
+ counter_set.eth_2863_cntrs_grp_data_layout.c##_high)
+#define PPORT_2819_OFF(c) \
+ MLX5_BYTE_OFF(ppcnt_reg, \
+ counter_set.eth_2819_cntrs_grp_data_layout.c##_high)
+#define PPORT_2819_GET(pstats, c) \
+ MLX5_GET64(ppcnt_reg, pstats->RFC_2819_counters, \
+ counter_set.eth_2819_cntrs_grp_data_layout.c##_high)
+#define PPORT_PER_PRIO_OFF(c) \
+ MLX5_BYTE_OFF(ppcnt_reg, \
+ counter_set.eth_per_prio_grp_data_layout.c##_high)
+#define PPORT_PER_PRIO_GET(pstats, prio, c) \
+ MLX5_GET64(ppcnt_reg, pstats->per_prio_counters[prio], \
+ counter_set.eth_per_prio_grp_data_layout.c##_high)
+#define NUM_PPORT_PRIO 8
+
+struct mlx5e_pport_stats {
+ __be64 IEEE_802_3_counters[MLX5_ST_SZ_QW(ppcnt_reg)];
+ __be64 RFC_2863_counters[MLX5_ST_SZ_QW(ppcnt_reg)];
+ __be64 RFC_2819_counters[MLX5_ST_SZ_QW(ppcnt_reg)];
+ __be64 per_prio_counters[NUM_PPORT_PRIO][MLX5_ST_SZ_QW(ppcnt_reg)];
+ __be64 phy_counters[MLX5_ST_SZ_QW(ppcnt_reg)];
+};
+
+static const struct counter_desc pport_802_3_stats_desc[] = {
+ { "frames_tx", PPORT_802_3_OFF(a_frames_transmitted_ok) },
+ { "frames_rx", PPORT_802_3_OFF(a_frames_received_ok) },
+ { "check_seq_err", PPORT_802_3_OFF(a_frame_check_sequence_errors) },
+ { "alignment_err", PPORT_802_3_OFF(a_alignment_errors) },
+ { "octets_tx", PPORT_802_3_OFF(a_octets_transmitted_ok) },
+ { "octets_received", PPORT_802_3_OFF(a_octets_received_ok) },
+ { "multicast_xmitted", PPORT_802_3_OFF(a_multicast_frames_xmitted_ok) },
+ { "broadcast_xmitted", PPORT_802_3_OFF(a_broadcast_frames_xmitted_ok) },
+ { "multicast_rx", PPORT_802_3_OFF(a_multicast_frames_received_ok) },
+ { "broadcast_rx", PPORT_802_3_OFF(a_broadcast_frames_received_ok) },
+ { "in_range_len_errors", PPORT_802_3_OFF(a_in_range_length_errors) },
+ { "out_of_range_len", PPORT_802_3_OFF(a_out_of_range_length_field) },
+ { "too_long_errors", PPORT_802_3_OFF(a_frame_too_long_errors) },
+ { "symbol_err", PPORT_802_3_OFF(a_symbol_error_during_carrier) },
+ { "mac_control_tx", PPORT_802_3_OFF(a_mac_control_frames_transmitted) },
+ { "mac_control_rx", PPORT_802_3_OFF(a_mac_control_frames_received) },
+ { "unsupported_op_rx",
+ PPORT_802_3_OFF(a_unsupported_opcodes_received) },
+ { "pause_ctrl_rx", PPORT_802_3_OFF(a_pause_mac_ctrl_frames_received) },
+ { "pause_ctrl_tx",
+ PPORT_802_3_OFF(a_pause_mac_ctrl_frames_transmitted) },
+};
+
+static const struct counter_desc pport_2863_stats_desc[] = {
+ { "in_octets", PPORT_2863_OFF(if_in_octets) },
+ { "in_ucast_pkts", PPORT_2863_OFF(if_in_ucast_pkts) },
+ { "in_discards", PPORT_2863_OFF(if_in_discards) },
+ { "in_errors", PPORT_2863_OFF(if_in_errors) },
+ { "in_unknown_protos", PPORT_2863_OFF(if_in_unknown_protos) },
+ { "out_octets", PPORT_2863_OFF(if_out_octets) },
+ { "out_ucast_pkts", PPORT_2863_OFF(if_out_ucast_pkts) },
+ { "out_discards", PPORT_2863_OFF(if_out_discards) },
+ { "out_errors", PPORT_2863_OFF(if_out_errors) },
+ { "in_multicast_pkts", PPORT_2863_OFF(if_in_multicast_pkts) },
+ { "in_broadcast_pkts", PPORT_2863_OFF(if_in_broadcast_pkts) },
+ { "out_multicast_pkts", PPORT_2863_OFF(if_out_multicast_pkts) },
+ { "out_broadcast_pkts", PPORT_2863_OFF(if_out_broadcast_pkts) },
+};
+
+static const struct counter_desc pport_2819_stats_desc[] = {
+ { "drop_events", PPORT_2819_OFF(ether_stats_drop_events) },
+ { "octets", PPORT_2819_OFF(ether_stats_octets) },
+ { "pkts", PPORT_2819_OFF(ether_stats_pkts) },
+ { "broadcast_pkts", PPORT_2819_OFF(ether_stats_broadcast_pkts) },
+ { "multicast_pkts", PPORT_2819_OFF(ether_stats_multicast_pkts) },
+ { "crc_align_errors", PPORT_2819_OFF(ether_stats_crc_align_errors) },
+ { "undersize_pkts", PPORT_2819_OFF(ether_stats_undersize_pkts) },
+ { "oversize_pkts", PPORT_2819_OFF(ether_stats_oversize_pkts) },
+ { "fragments", PPORT_2819_OFF(ether_stats_fragments) },
+ { "jabbers", PPORT_2819_OFF(ether_stats_jabbers) },
+ { "collisions", PPORT_2819_OFF(ether_stats_collisions) },
+ { "p64octets", PPORT_2819_OFF(ether_stats_pkts64octets) },
+ { "p65to127octets", PPORT_2819_OFF(ether_stats_pkts65to127octets) },
+ { "p128to255octets", PPORT_2819_OFF(ether_stats_pkts128to255octets) },
+ { "p256to511octets", PPORT_2819_OFF(ether_stats_pkts256to511octets) },
+ { "p512to1023octets", PPORT_2819_OFF(ether_stats_pkts512to1023octets) },
+ { "p1024to1518octets",
+ PPORT_2819_OFF(ether_stats_pkts1024to1518octets) },
+ { "p1519to2047octets",
+ PPORT_2819_OFF(ether_stats_pkts1519to2047octets) },
+ { "p2048to4095octets",
+ PPORT_2819_OFF(ether_stats_pkts2048to4095octets) },
+ { "p4096to8191octets",
+ PPORT_2819_OFF(ether_stats_pkts4096to8191octets) },
+ { "p8192to10239octets",
+ PPORT_2819_OFF(ether_stats_pkts8192to10239octets) },
+};
+
+static const struct counter_desc pport_per_prio_traffic_stats_desc[] = {
+ { "rx_octets", PPORT_PER_PRIO_OFF(rx_octets) },
+ { "rx_frames", PPORT_PER_PRIO_OFF(rx_frames) },
+ { "tx_octets", PPORT_PER_PRIO_OFF(tx_octets) },
+ { "tx_frames", PPORT_PER_PRIO_OFF(tx_frames) },
+};
+
+static const struct counter_desc pport_per_prio_pfc_stats_desc[] = {
+ { "rx_pause", PPORT_PER_PRIO_OFF(rx_pause) },
+ { "rx_pause_duration", PPORT_PER_PRIO_OFF(rx_pause_duration) },
+ { "tx_pause", PPORT_PER_PRIO_OFF(tx_pause) },
+ { "tx_pause_duration", PPORT_PER_PRIO_OFF(tx_pause_duration) },
+ { "rx_pause_transition", PPORT_PER_PRIO_OFF(rx_pause_transition) },
+};
+
+struct mlx5e_rq_stats {
+ u64 packets;
+ u64 bytes;
+ u64 csum_sw;
+ u64 csum_inner;
+ u64 csum_none;
+ u64 lro_packets;
+ u64 lro_bytes;
+ u64 wqe_err;
+ u64 mpwqe_filler;
+ u64 mpwqe_frag;
+ u64 buff_alloc_err;
+ u64 cqe_compress_blks;
+ u64 cqe_compress_pkts;
+};
+
+static const struct counter_desc rq_stats_desc[] = {
+ { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, packets) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, bytes) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, csum_sw) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, csum_inner) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, csum_none) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, lro_packets) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, lro_bytes) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, wqe_err) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, mpwqe_filler) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, mpwqe_frag) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, buff_alloc_err) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, cqe_compress_blks) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_rq_stats, cqe_compress_pkts) },
+};
+
+struct mlx5e_sq_stats {
+ /* commonly accessed in data path */
+ u64 packets;
+ u64 bytes;
+ u64 tso_packets;
+ u64 tso_bytes;
+ u64 tso_inner_packets;
+ u64 tso_inner_bytes;
+ u64 csum_offload_inner;
+ u64 nop;
+ /* less likely accessed in data path */
+ u64 csum_offload_none;
+ u64 stopped;
+ u64 wake;
+ u64 dropped;
+};
+
+static const struct counter_desc sq_stats_desc[] = {
+ { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, packets) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, bytes) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, tso_packets) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, tso_bytes) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, tso_inner_packets) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, tso_inner_bytes) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, csum_offload_inner) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, nop) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, csum_offload_none) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, stopped) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, wake) },
+ { MLX5E_DECLARE_STAT(struct mlx5e_sq_stats, dropped) },
+};
+
+#define NUM_SW_COUNTERS ARRAY_SIZE(sw_stats_desc)
+#define NUM_Q_COUNTERS ARRAY_SIZE(q_stats_desc)
+#define NUM_VPORT_COUNTERS ARRAY_SIZE(vport_stats_desc)
+#define NUM_PPORT_802_3_COUNTERS ARRAY_SIZE(pport_802_3_stats_desc)
+#define NUM_PPORT_2863_COUNTERS ARRAY_SIZE(pport_2863_stats_desc)
+#define NUM_PPORT_2819_COUNTERS ARRAY_SIZE(pport_2819_stats_desc)
+#define NUM_PPORT_PER_PRIO_TRAFFIC_COUNTERS \
+ ARRAY_SIZE(pport_per_prio_traffic_stats_desc)
+#define NUM_PPORT_PER_PRIO_PFC_COUNTERS \
+ ARRAY_SIZE(pport_per_prio_pfc_stats_desc)
+#define NUM_PPORT_COUNTERS (NUM_PPORT_802_3_COUNTERS + \
+ NUM_PPORT_2863_COUNTERS + \
+ NUM_PPORT_2819_COUNTERS + \
+ NUM_PPORT_PER_PRIO_TRAFFIC_COUNTERS * \
+ NUM_PPORT_PRIO)
+#define NUM_RQ_STATS ARRAY_SIZE(rq_stats_desc)
+#define NUM_SQ_STATS ARRAY_SIZE(sq_stats_desc)
+
+struct mlx5e_stats {
+ struct mlx5e_sw_stats sw;
+ struct mlx5e_qcounter_stats qcnt;
+ struct mlx5e_vport_stats vport;
+ struct mlx5e_pport_stats pport;
+};
+
+#endif /* __MLX5_EN_STATS_H__ */
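
Every table in this new header pairs a counter name with its byte offset, so the ethtool strings and the values can both be produced by iterating one descriptor array. The self-contained example below shows the same name-plus-offsetof() pattern; the struct, macro, and field names are examples, not the driver's.

/* Illustration of the name + offsetof() descriptor table used by
 * en_stats.h: one table drives both the strings and the values.
 * The names below (stat_desc, DECLARE_STAT, demo_stats) are examples.
 */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

struct stat_desc {
	const char *name;
	size_t offset;                       /* byte offset into the stats struct */
};

#define DECLARE_STAT(type, fld) { #fld, offsetof(type, fld) }
#define READ_CTR64(ptr, desc, i) \
	(*(uint64_t *)((char *)(ptr) + (desc)[i].offset))

struct demo_stats {
	uint64_t rx_packets;
	uint64_t rx_bytes;
	uint64_t rx_csum_none;
};

static const struct stat_desc demo_desc[] = {
	DECLARE_STAT(struct demo_stats, rx_packets),
	DECLARE_STAT(struct demo_stats, rx_bytes),
	DECLARE_STAT(struct demo_stats, rx_csum_none),
};

#define NUM_DEMO_STATS (sizeof(demo_desc) / sizeof(demo_desc[0]))

int main(void)
{
	struct demo_stats stats = { .rx_packets = 42, .rx_bytes = 64000 };

	for (size_t i = 0; i < NUM_DEMO_STATS; i++)
		printf("%-16s %llu\n", demo_desc[i].name,
		       (unsigned long long)READ_CTR64(&stats, demo_desc, i));
	return 0;
}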
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
index b3de09f13425..704c3d30493e 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.c
@@ -46,43 +46,65 @@ struct mlx5e_tc_flow {
struct mlx5_flow_rule *rule;
};
-#define MLX5E_TC_FLOW_TABLE_NUM_ENTRIES 1024
-#define MLX5E_TC_FLOW_TABLE_NUM_GROUPS 4
+#define MLX5E_TC_TABLE_NUM_ENTRIES 1024
+#define MLX5E_TC_TABLE_NUM_GROUPS 4
static struct mlx5_flow_rule *mlx5e_tc_add_flow(struct mlx5e_priv *priv,
u32 *match_c, u32 *match_v,
u32 action, u32 flow_tag)
{
- struct mlx5_flow_destination dest = {
- .type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE,
- {.ft = priv->fts.vlan.t},
- };
+ struct mlx5_core_dev *dev = priv->mdev;
+ struct mlx5_flow_destination dest = { 0 };
+ struct mlx5_fc *counter = NULL;
struct mlx5_flow_rule *rule;
bool table_created = false;
- if (IS_ERR_OR_NULL(priv->fts.tc.t)) {
- priv->fts.tc.t =
- mlx5_create_auto_grouped_flow_table(priv->fts.ns, 0,
- MLX5E_TC_FLOW_TABLE_NUM_ENTRIES,
- MLX5E_TC_FLOW_TABLE_NUM_GROUPS);
- if (IS_ERR(priv->fts.tc.t)) {
+ if (action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) {
+ dest.type = MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE;
+ dest.ft = priv->fs.vlan.ft.t;
+ } else {
+ counter = mlx5_fc_create(dev, true);
+ if (IS_ERR(counter))
+ return ERR_CAST(counter);
+
+ dest.type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
+ dest.counter = counter;
+ }
+
+ if (IS_ERR_OR_NULL(priv->fs.tc.t)) {
+ priv->fs.tc.t =
+ mlx5_create_auto_grouped_flow_table(priv->fs.ns,
+ MLX5E_TC_PRIO,
+ MLX5E_TC_TABLE_NUM_ENTRIES,
+ MLX5E_TC_TABLE_NUM_GROUPS,
+ 0);
+ if (IS_ERR(priv->fs.tc.t)) {
netdev_err(priv->netdev,
"Failed to create tc offload table\n");
- return ERR_CAST(priv->fts.tc.t);
+ rule = ERR_CAST(priv->fs.tc.t);
+ goto err_create_ft;
}
table_created = true;
}
- rule = mlx5_add_flow_rule(priv->fts.tc.t, MLX5_MATCH_OUTER_HEADERS,
+ rule = mlx5_add_flow_rule(priv->fs.tc.t, MLX5_MATCH_OUTER_HEADERS,
match_c, match_v,
action, flow_tag,
- action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST ? &dest : NULL);
+ &dest);
+
+ if (IS_ERR(rule))
+ goto err_add_rule;
+
+ return rule;
- if (IS_ERR(rule) && table_created) {
- mlx5_destroy_flow_table(priv->fts.tc.t);
- priv->fts.tc.t = NULL;
+err_add_rule:
+ if (table_created) {
+ mlx5_destroy_flow_table(priv->fs.tc.t);
+ priv->fs.tc.t = NULL;
}
+err_create_ft:
+ mlx5_fc_destroy(dev, counter);
return rule;
}
@@ -90,11 +112,17 @@ static struct mlx5_flow_rule *mlx5e_tc_add_flow(struct mlx5e_priv *priv,
static void mlx5e_tc_del_flow(struct mlx5e_priv *priv,
struct mlx5_flow_rule *rule)
{
+ struct mlx5_fc *counter = NULL;
+
+ counter = mlx5_flow_rule_counter(rule);
+
mlx5_del_flow_rule(rule);
+ mlx5_fc_destroy(priv->mdev, counter);
+
if (!mlx5e_tc_num_filters(priv)) {
- mlx5_destroy_flow_table(priv->fts.tc.t);
- priv->fts.tc.t = NULL;
+ mlx5_destroy_flow_table(priv->fs.tc.t);
+ priv->fs.tc.t = NULL;
}
}
@@ -284,6 +312,9 @@ static int parse_tc_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
if (is_tcf_gact_shot(a)) {
*action |= MLX5_FLOW_CONTEXT_ACTION_DROP;
+ if (MLX5_CAP_FLOWTABLE(priv->mdev,
+ flow_table_properties_nic_receive.flow_counter))
+ *action |= MLX5_FLOW_CONTEXT_ACTION_COUNT;
continue;
}
@@ -310,7 +341,7 @@ static int parse_tc_actions(struct mlx5e_priv *priv, struct tcf_exts *exts,
int mlx5e_configure_flower(struct mlx5e_priv *priv, __be16 protocol,
struct tc_cls_flower_offload *f)
{
- struct mlx5e_tc_flow_table *tc = &priv->fts.tc;
+ struct mlx5e_tc_table *tc = &priv->fs.tc;
u32 *match_c;
u32 *match_v;
int err = 0;
@@ -376,7 +407,7 @@ int mlx5e_delete_flower(struct mlx5e_priv *priv,
struct tc_cls_flower_offload *f)
{
struct mlx5e_tc_flow *flow;
- struct mlx5e_tc_flow_table *tc = &priv->fts.tc;
+ struct mlx5e_tc_table *tc = &priv->fs.tc;
flow = rhashtable_lookup_fast(&tc->ht, &f->cookie,
tc->ht_params);
@@ -392,6 +423,34 @@ int mlx5e_delete_flower(struct mlx5e_priv *priv,
return 0;
}
+int mlx5e_stats_flower(struct mlx5e_priv *priv,
+ struct tc_cls_flower_offload *f)
+{
+ struct mlx5e_tc_table *tc = &priv->fs.tc;
+ struct mlx5e_tc_flow *flow;
+ struct tc_action *a;
+ struct mlx5_fc *counter;
+ u64 bytes;
+ u64 packets;
+ u64 lastuse;
+
+ flow = rhashtable_lookup_fast(&tc->ht, &f->cookie,
+ tc->ht_params);
+ if (!flow)
+ return -EINVAL;
+
+ counter = mlx5_flow_rule_counter(flow->rule);
+ if (!counter)
+ return 0;
+
+ mlx5_fc_query_cached(counter, &bytes, &packets, &lastuse);
+
+ tc_for_each_action(a, f->exts)
+ tcf_action_stats_update(a, bytes, packets, lastuse);
+
+ return 0;
+}
+
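
mlx5e_stats_flower() reads cached totals from the hardware flow counter attached to the offloaded rule and propagates bytes, packets, and last-use time to each tc action. The sketch below models such a cached counter with the usual delta-since-last-query reporting; flow_counter, fc_poll, and fc_query_cached are invented names, not the mlx5 counter API.

/* Model of a cached flow counter: a background poller stores the latest
 * hardware totals, and each query reports the delta since the previous
 * query (as tc stats expect).  All names here are illustrative.
 */
#include <stdio.h>
#include <stdint.h>

struct flow_counter {
	uint64_t hw_bytes, hw_packets;     /* latest totals from the poller */
	uint64_t rep_bytes, rep_packets;   /* totals already reported */
	uint64_t lastuse;                  /* last time traffic was seen */
};

static void fc_poll(struct flow_counter *fc, uint64_t bytes,
		    uint64_t packets, uint64_t now)
{
	if (packets != fc->hw_packets)
		fc->lastuse = now;         /* counter moved: update last use */
	fc->hw_bytes = bytes;
	fc->hw_packets = packets;
}

static void fc_query_cached(struct flow_counter *fc, uint64_t *bytes,
			    uint64_t *packets, uint64_t *lastuse)
{
	*bytes = fc->hw_bytes - fc->rep_bytes;
	*packets = fc->hw_packets - fc->rep_packets;
	*lastuse = fc->lastuse;
	fc->rep_bytes = fc->hw_bytes;      /* next query reports a fresh delta */
	fc->rep_packets = fc->hw_packets;
}

int main(void)
{
	struct flow_counter fc = { 0 };
	uint64_t b, p, lu;

	fc_poll(&fc, 1500, 1, 100);
	fc_query_cached(&fc, &b, &p, &lu);
	printf("delta: %llu bytes, %llu packets, lastuse %llu\n",
	       (unsigned long long)b, (unsigned long long)p,
	       (unsigned long long)lu);

	fc_poll(&fc, 4500, 3, 200);
	fc_query_cached(&fc, &b, &p, &lu);
	printf("delta: %llu bytes, %llu packets, lastuse %llu\n",
	       (unsigned long long)b, (unsigned long long)p,
	       (unsigned long long)lu);
	return 0;
}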
static const struct rhashtable_params mlx5e_tc_flow_ht_params = {
.head_offset = offsetof(struct mlx5e_tc_flow, node),
.key_offset = offsetof(struct mlx5e_tc_flow, cookie),
@@ -401,7 +460,7 @@ static const struct rhashtable_params mlx5e_tc_flow_ht_params = {
int mlx5e_tc_init(struct mlx5e_priv *priv)
{
- struct mlx5e_tc_flow_table *tc = &priv->fts.tc;
+ struct mlx5e_tc_table *tc = &priv->fs.tc;
tc->ht_params = mlx5e_tc_flow_ht_params;
return rhashtable_init(&tc->ht, &tc->ht_params);
@@ -418,12 +477,12 @@ static void _mlx5e_tc_del_flow(void *ptr, void *arg)
void mlx5e_tc_cleanup(struct mlx5e_priv *priv)
{
- struct mlx5e_tc_flow_table *tc = &priv->fts.tc;
+ struct mlx5e_tc_table *tc = &priv->fs.tc;
rhashtable_free_and_destroy(&tc->ht, _mlx5e_tc_del_flow, priv);
- if (!IS_ERR_OR_NULL(priv->fts.tc.t)) {
- mlx5_destroy_flow_table(priv->fts.tc.t);
- priv->fts.tc.t = NULL;
+ if (!IS_ERR_OR_NULL(tc->t)) {
+ mlx5_destroy_flow_table(tc->t);
+ tc->t = NULL;
}
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
index d677428dc10f..34bf903fc886 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tc.h
@@ -43,9 +43,12 @@ int mlx5e_configure_flower(struct mlx5e_priv *priv, __be16 protocol,
int mlx5e_delete_flower(struct mlx5e_priv *priv,
struct tc_cls_flower_offload *f);
+int mlx5e_stats_flower(struct mlx5e_priv *priv,
+ struct tc_cls_flower_offload *f);
+
static inline int mlx5e_tc_num_filters(struct mlx5e_priv *priv)
{
- return atomic_read(&priv->fts.tc.ht.nelems);
+ return atomic_read(&priv->fs.tc.ht.nelems);
}
#endif /* __MLX5_EN_TC_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
index 1ffc7cb6f78c..229ab16fb8d3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_tx.c
@@ -54,10 +54,11 @@ void mlx5e_send_nop(struct mlx5e_sq *sq, bool notify_hw)
sq->skb[pi] = NULL;
sq->pc++;
+ sq->stats.nop++;
if (notify_hw) {
cseg->fm_ce_se = MLX5_WQE_CTRL_CQ_UPDATE;
- mlx5e_tx_notify_hw(sq, wqe, 0);
+ mlx5e_tx_notify_hw(sq, &wqe->ctrl, 0);
}
}
@@ -309,7 +310,7 @@ static netdev_tx_t mlx5e_sq_xmit(struct mlx5e_sq *sq, struct sk_buff *skb)
bf_sz = wi->num_wqebbs << 3;
cseg->fm_ce_se = MLX5_WQE_CTRL_CQ_UPDATE;
- mlx5e_tx_notify_hw(sq, wqe, bf_sz);
+ mlx5e_tx_notify_hw(sq, &wqe->ctrl, bf_sz);
}
/* fill sq edge with nops to avoid wqe wrap around */
@@ -387,7 +388,6 @@ bool mlx5e_poll_tx_cq(struct mlx5e_cq *cq, int napi_budget)
wi = &sq->wqe_info[ci];
if (unlikely(!skb)) { /* nop */
- sq->stats.nop++;
sqcc++;
continue;
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
index 9bb4395aceeb..c38781fa567d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/en_txrx.c
@@ -49,6 +49,60 @@ struct mlx5_cqe64 *mlx5e_get_cqe(struct mlx5e_cq *cq)
return cqe;
}
+static void mlx5e_poll_ico_cq(struct mlx5e_cq *cq)
+{
+ struct mlx5_wq_cyc *wq;
+ struct mlx5_cqe64 *cqe;
+ struct mlx5e_sq *sq;
+ u16 sqcc;
+
+ cqe = mlx5e_get_cqe(cq);
+ if (likely(!cqe))
+ return;
+
+ sq = container_of(cq, struct mlx5e_sq, cq);
+ wq = &sq->wq;
+
+ /* sq->cc must be updated only after mlx5_cqwq_update_db_record(),
+ * otherwise a cq overrun may occur
+ */
+ sqcc = sq->cc;
+
+ do {
+ u16 ci = be16_to_cpu(cqe->wqe_counter) & wq->sz_m1;
+ struct mlx5e_ico_wqe_info *icowi = &sq->ico_wqe_info[ci];
+
+ mlx5_cqwq_pop(&cq->wq);
+ sqcc += icowi->num_wqebbs;
+
+ if (unlikely((cqe->op_own >> 4) != MLX5_CQE_REQ)) {
+ WARN_ONCE(true, "mlx5e: Bad OP in ICOSQ CQE: 0x%x\n",
+ cqe->op_own);
+ break;
+ }
+
+ switch (icowi->opcode) {
+ case MLX5_OPCODE_NOP:
+ break;
+ case MLX5_OPCODE_UMR:
+ mlx5e_post_rx_fragmented_mpwqe(&sq->channel->rq);
+ break;
+ default:
+ WARN_ONCE(true,
+ "mlx5e: Bad OPCODE in ICOSQ WQE info: 0x%x\n",
+ icowi->opcode);
+ }
+
+ } while ((cqe = mlx5e_get_cqe(cq)));
+
+ mlx5_cqwq_update_db_record(&cq->wq);
+
+ /* ensure cq space is freed before enabling more cqes */
+ wmb();
+
+ sq->cc = sqcc;
+}
+
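
Each ICOSQ completion can retire more than one queue slot (a UMR WQE spans several WQEBBs), so the consumer counter is advanced by the slot count recorded at post time and the opcode stored alongside it selects the follow-up work. The loop below reproduces that bookkeeping with invented types.

/* Model of completion processing where each completed entry consumed a
 * variable number of queue slots: the consumer counter advances by the
 * slot count saved at post time, and the saved opcode picks the
 * follow-up work.  The structures are illustrative only.
 */
#include <stdio.h>
#include <stdint.h>

enum icosq_op { OP_NOP, OP_UMR };

struct wqe_info {
	enum icosq_op opcode;
	uint8_t num_slots;      /* queue slots this entry occupied */
};

int main(void)
{
	/* Bookkeeping written at post time: a NOP pad, then a 2-slot UMR. */
	struct wqe_info posted[4] = {
		[0] = { OP_NOP, 1 },
		[1] = { OP_UMR, 2 },
	};
	uint16_t completed[] = { 0, 1 };    /* slots reported by the CQ */
	uint16_t cc = 0;                    /* consumer counter */

	for (unsigned int i = 0; i < 2; i++) {
		struct wqe_info *wi = &posted[completed[i]];

		cc += wi->num_slots;
		switch (wi->opcode) {
		case OP_NOP:
			break;              /* padding: nothing to do */
		case OP_UMR:
			printf("UMR finished, repost the RX WQE\n");
			break;
		}
	}
	printf("consumer counter advanced to %u\n", (unsigned int)cc);
	return 0;
}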
int mlx5e_napi_poll(struct napi_struct *napi, int budget)
{
struct mlx5e_channel *c = container_of(napi, struct mlx5e_channel,
@@ -64,6 +118,9 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget)
work_done = mlx5e_poll_rx_cq(&c->rq.cq, budget);
busy |= work_done == budget;
+
+ mlx5e_poll_ico_cq(&c->icosq.cq);
+
busy |= mlx5e_post_rx_wqes(&c->rq);
if (busy)
@@ -80,6 +137,7 @@ int mlx5e_napi_poll(struct napi_struct *napi, int budget)
for (i = 0; i < c->num_tc; i++)
mlx5e_cq_arm(&c->sq[i].cq);
mlx5e_cq_arm(&c->rq.cq);
+ mlx5e_cq_arm(&c->icosq.cq);
return work_done;
}
@@ -89,7 +147,6 @@ void mlx5e_completion_event(struct mlx5_core_cq *mcq)
struct mlx5e_cq *cq = container_of(mcq, struct mlx5e_cq, mcq);
set_bit(MLX5E_CHANNEL_NAPI_SCHED, &cq->channel->flags);
- barrier();
napi_schedule(cq->napi);
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eq.c b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
index 18fccec72c5d..0e30602ef76d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eq.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eq.c
@@ -202,7 +202,7 @@ static int mlx5_eq_int(struct mlx5_core_dev *dev, struct mlx5_eq *eq)
struct mlx5_eqe *eqe;
int eqes_found = 0;
int set_ci = 0;
- u32 cqn;
+ u32 cqn = -1;
u32 rsn;
u8 port;
@@ -320,6 +320,9 @@ static int mlx5_eq_int(struct mlx5_core_dev *dev, struct mlx5_eq *eq)
eq_update_ci(eq, 1);
+ if (cqn != -1)
+ tasklet_schedule(&eq->tasklet_ctx.task);
+
return eqes_found;
}
@@ -403,6 +406,12 @@ int mlx5_create_map_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq, u8 vecidx,
if (err)
goto err_irq;
+ INIT_LIST_HEAD(&eq->tasklet_ctx.list);
+ INIT_LIST_HEAD(&eq->tasklet_ctx.process_list);
+ spin_lock_init(&eq->tasklet_ctx.lock);
+ tasklet_init(&eq->tasklet_ctx.task, mlx5_cq_tasklet_cb,
+ (unsigned long)&eq->tasklet_ctx);
+
/* EQs are created in ARMED state
*/
eq_update_ci(eq, 1);
@@ -436,6 +445,7 @@ int mlx5_destroy_unmap_eq(struct mlx5_core_dev *dev, struct mlx5_eq *eq)
mlx5_core_warn(dev, "failed to destroy a previously created eq: eqn %d\n",
eq->eqn);
synchronize_irq(eq->irqn);
+ tasklet_disable(&eq->tasklet_ctx.task);
mlx5_buf_free(dev, &eq->buf);
return err;
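
The EQ changes above defer CQ completion handling to a per-EQ tasklet: the interrupt handler only notes whether any completion event arrived and schedules the tasklet once, rather than running handlers directly from hard-IRQ context. The host-side model below sketches that batch-and-defer pattern with plain arrays in place of kernel lists; all names are invented.

/* Model of "record work during the interrupt, run it later in one
 * batch": the IRQ handler only notes which queues completed, and a
 * deferred function processes the whole batch.  Names are invented.
 */
#include <stdio.h>
#include <stdbool.h>

#define MAX_CQS 8

struct eq_model {
	int pending[MAX_CQS];    /* CQ numbers collected during the IRQ */
	int npending;
	bool scheduled;
};

static void irq_handler(struct eq_model *eq, const int *cqns, int n)
{
	for (int i = 0; i < n; i++)
		if (eq->npending < MAX_CQS)
			eq->pending[eq->npending++] = cqns[i];

	if (eq->npending)
		eq->scheduled = true;    /* one tasklet_schedule() per IRQ */
}

static void tasklet_fn(struct eq_model *eq)
{
	if (!eq->scheduled)
		return;
	for (int i = 0; i < eq->npending; i++)
		printf("completion work for CQ %d\n", eq->pending[i]);
	eq->npending = 0;
	eq->scheduled = false;
}

int main(void)
{
	struct eq_model eq = { .npending = 0, .scheduled = false };
	int cqns[] = { 3, 7, 3 };

	irq_handler(&eq, cqns, 3);   /* hard-IRQ context: just record */
	tasklet_fn(&eq);             /* deferred context: do the work */
	return 0;
}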
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
index bc3d9f8a75c1..b84a6918a700 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.c
@@ -77,16 +77,20 @@ struct vport_addr {
u8 action;
u32 vport;
struct mlx5_flow_rule *flow_rule; /* SRIOV only */
+ /* A flag indicating that mac was added due to mc promiscuous vport */
+ bool mc_promisc;
};
enum {
UC_ADDR_CHANGE = BIT(0),
MC_ADDR_CHANGE = BIT(1),
+ PROMISC_CHANGE = BIT(3),
};
/* Vport context events */
#define SRIOV_VPORT_EVENTS (UC_ADDR_CHANGE | \
- MC_ADDR_CHANGE)
+ MC_ADDR_CHANGE | \
+ PROMISC_CHANGE)
static int arm_vport_context_events_cmd(struct mlx5_core_dev *dev, u16 vport,
u32 events_mask)
@@ -116,6 +120,9 @@ static int arm_vport_context_events_cmd(struct mlx5_core_dev *dev, u16 vport,
if (events_mask & MC_ADDR_CHANGE)
MLX5_SET(nic_vport_context, nic_vport_ctx,
event_on_mc_address_change, 1);
+ if (events_mask & PROMISC_CHANGE)
+ MLX5_SET(nic_vport_context, nic_vport_ctx,
+ event_on_promisc_change, 1);
err = mlx5_cmd_exec(dev, in, sizeof(in), out, sizeof(out));
if (err)
@@ -323,30 +330,45 @@ static void del_l2_table_entry(struct mlx5_core_dev *dev, u32 index)
/* E-Switch FDB */
static struct mlx5_flow_rule *
-esw_fdb_set_vport_rule(struct mlx5_eswitch *esw, u8 mac[ETH_ALEN], u32 vport)
+__esw_fdb_set_vport_rule(struct mlx5_eswitch *esw, u32 vport, bool rx_rule,
+ u8 mac_c[ETH_ALEN], u8 mac_v[ETH_ALEN])
{
- int match_header = MLX5_MATCH_OUTER_HEADERS;
- struct mlx5_flow_destination dest;
+ int match_header = (is_zero_ether_addr(mac_c) ? 0 :
+ MLX5_MATCH_OUTER_HEADERS);
struct mlx5_flow_rule *flow_rule = NULL;
+ struct mlx5_flow_destination dest;
+ void *mv_misc = NULL;
+ void *mc_misc = NULL;
+ u8 *dmac_v = NULL;
+ u8 *dmac_c = NULL;
u32 *match_v;
u32 *match_c;
- u8 *dmac_v;
- u8 *dmac_c;
+ if (rx_rule)
+ match_header |= MLX5_MATCH_MISC_PARAMETERS;
match_v = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
match_c = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
if (!match_v || !match_c) {
pr_warn("FDB: Failed to alloc match parameters\n");
goto out;
}
+
dmac_v = MLX5_ADDR_OF(fte_match_param, match_v,
outer_headers.dmac_47_16);
dmac_c = MLX5_ADDR_OF(fte_match_param, match_c,
outer_headers.dmac_47_16);
- ether_addr_copy(dmac_v, mac);
- /* Match criteria mask */
- memset(dmac_c, 0xff, 6);
+ if (match_header & MLX5_MATCH_OUTER_HEADERS) {
+ ether_addr_copy(dmac_v, mac_v);
+ ether_addr_copy(dmac_c, mac_c);
+ }
+
+ if (match_header & MLX5_MATCH_MISC_PARAMETERS) {
+ mv_misc = MLX5_ADDR_OF(fte_match_param, match_v, misc_parameters);
+ mc_misc = MLX5_ADDR_OF(fte_match_param, match_c, misc_parameters);
+ MLX5_SET(fte_match_set_misc, mv_misc, source_port, UPLINK_VPORT);
+ MLX5_SET_TO_ONES(fte_match_set_misc, mc_misc, source_port);
+ }
dest.type = MLX5_FLOW_DESTINATION_TYPE_VPORT;
dest.vport_num = vport;
@@ -373,6 +395,39 @@ out:
return flow_rule;
}
+static struct mlx5_flow_rule *
+esw_fdb_set_vport_rule(struct mlx5_eswitch *esw, u8 mac[ETH_ALEN], u32 vport)
+{
+ u8 mac_c[ETH_ALEN];
+
+ eth_broadcast_addr(mac_c);
+ return __esw_fdb_set_vport_rule(esw, vport, false, mac_c, mac);
+}
+
+static struct mlx5_flow_rule *
+esw_fdb_set_vport_allmulti_rule(struct mlx5_eswitch *esw, u32 vport)
+{
+ u8 mac_c[ETH_ALEN];
+ u8 mac_v[ETH_ALEN];
+
+ eth_zero_addr(mac_c);
+ eth_zero_addr(mac_v);
+ mac_c[0] = 0x01;
+ mac_v[0] = 0x01;
+ return __esw_fdb_set_vport_rule(esw, vport, false, mac_c, mac_v);
+}
+
+static struct mlx5_flow_rule *
+esw_fdb_set_vport_promisc_rule(struct mlx5_eswitch *esw, u32 vport)
+{
+ u8 mac_c[ETH_ALEN];
+ u8 mac_v[ETH_ALEN];
+
+ eth_zero_addr(mac_c);
+ eth_zero_addr(mac_v);
+ return __esw_fdb_set_vport_rule(esw, vport, true, mac_c, mac_v);
+}
+
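
The three rule builders above form a priority ladder in the FDB: exact DMAC matches first, then a single rule that tests only the multicast/group bit (01:00:00:00:00:00/01:00:00:00:00:00) for allmulti vports, and finally a source-port catch-all for promiscuous vports. The toy classifier below walks the same ladder; it is an illustration, not the flow-steering API.

/* Toy model of the FDB lookup ladder: exact DMAC match, then the
 * all-multicast rule (only the group bit is significant), then the
 * promiscuous catch-all.  This is an illustration, not the mlx5
 * flow-steering API; the last rule's empty mask stands in for the
 * real source-port match.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

struct fdb_rule {
	uint8_t mac_v[6];       /* value to match */
	uint8_t mac_c[6];       /* mask: which bits are significant */
	const char *dest;
};

static bool rule_matches(const struct fdb_rule *r, const uint8_t dmac[6])
{
	for (int i = 0; i < 6; i++)
		if ((dmac[i] & r->mac_c[i]) != (r->mac_v[i] & r->mac_c[i]))
			return false;
	return true;
}

int main(void)
{
	const struct fdb_rule rules[] = {
		/* addresses group: full mask, exact match */
		{ { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 },
		  { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }, "vport 2" },
		/* allmulti group: match only the multicast/group bit */
		{ { 0x01, 0, 0, 0, 0, 0 },
		  { 0x01, 0, 0, 0, 0, 0 }, "allmulti vports" },
		/* promisc group: matches everything left over */
		{ { 0 }, { 0 }, "promisc vports" },
	};
	const uint8_t frames[3][6] = {
		{ 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 },   /* known unicast */
		{ 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 },   /* multicast */
		{ 0x00, 0xaa, 0xbb, 0xcc, 0xdd, 0xee },   /* unknown unicast */
	};

	for (unsigned int f = 0; f < 3; f++)
		for (unsigned int r = 0; r < 3; r++)
			if (rule_matches(&rules[r], frames[f])) {
				printf("frame %u -> %s\n", f, rules[r].dest);
				break;
			}
	return 0;
}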
static int esw_create_fdb_table(struct mlx5_eswitch *esw, int nvports)
{
int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
@@ -401,34 +456,80 @@ static int esw_create_fdb_table(struct mlx5_eswitch *esw, int nvports)
memset(flow_group_in, 0, inlen);
table_size = BIT(MLX5_CAP_ESW_FLOWTABLE_FDB(dev, log_max_ft_size));
- fdb = mlx5_create_flow_table(root_ns, 0, table_size);
+ fdb = mlx5_create_flow_table(root_ns, 0, table_size, 0);
if (IS_ERR_OR_NULL(fdb)) {
err = PTR_ERR(fdb);
esw_warn(dev, "Failed to create FDB Table err %d\n", err);
goto out;
}
+ esw->fdb_table.fdb = fdb;
+ /* Addresses group : Full match unicast/multicast addresses */
MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
MLX5_MATCH_OUTER_HEADERS);
match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria);
dmac = MLX5_ADDR_OF(fte_match_param, match_criteria, outer_headers.dmac_47_16);
MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 0);
- MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, table_size - 1);
+ /* Preserve 2 entries for allmulti and promisc rules */
+ MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, table_size - 3);
eth_broadcast_addr(dmac);
-
g = mlx5_create_flow_group(fdb, flow_group_in);
if (IS_ERR_OR_NULL(g)) {
err = PTR_ERR(g);
esw_warn(dev, "Failed to create flow group err(%d)\n", err);
goto out;
}
-
esw->fdb_table.addr_grp = g;
- esw->fdb_table.fdb = fdb;
+
+ /* Allmulti group : One rule that forwards any mcast traffic */
+ MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
+ MLX5_MATCH_OUTER_HEADERS);
+ MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, table_size - 2);
+ MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, table_size - 2);
+ eth_zero_addr(dmac);
+ dmac[0] = 0x01;
+ g = mlx5_create_flow_group(fdb, flow_group_in);
+ if (IS_ERR_OR_NULL(g)) {
+ err = PTR_ERR(g);
+ esw_warn(dev, "Failed to create allmulti flow group err(%d)\n", err);
+ goto out;
+ }
+ esw->fdb_table.allmulti_grp = g;
+
+ /* Promiscuous group :
+ * One rule that forwards all unmatched traffic from previous groups
+ */
+ eth_zero_addr(dmac);
+ MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable,
+ MLX5_MATCH_MISC_PARAMETERS);
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria, misc_parameters.source_port);
+ MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, table_size - 1);
+ MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, table_size - 1);
+ g = mlx5_create_flow_group(fdb, flow_group_in);
+ if (IS_ERR_OR_NULL(g)) {
+ err = PTR_ERR(g);
+ esw_warn(dev, "Failed to create promisc flow group err(%d)\n", err);
+ goto out;
+ }
+ esw->fdb_table.promisc_grp = g;
+
out:
+ if (err) {
+ if (!IS_ERR_OR_NULL(esw->fdb_table.allmulti_grp)) {
+ mlx5_destroy_flow_group(esw->fdb_table.allmulti_grp);
+ esw->fdb_table.allmulti_grp = NULL;
+ }
+ if (!IS_ERR_OR_NULL(esw->fdb_table.addr_grp)) {
+ mlx5_destroy_flow_group(esw->fdb_table.addr_grp);
+ esw->fdb_table.addr_grp = NULL;
+ }
+ if (!IS_ERR_OR_NULL(esw->fdb_table.fdb)) {
+ mlx5_destroy_flow_table(esw->fdb_table.fdb);
+ esw->fdb_table.fdb = NULL;
+ }
+ }
+
kfree(flow_group_in);
- if (err && !IS_ERR_OR_NULL(fdb))
- mlx5_destroy_flow_table(fdb);
return err;
}
@@ -438,10 +539,14 @@ static void esw_destroy_fdb_table(struct mlx5_eswitch *esw)
return;
esw_debug(esw->dev, "Destroy FDB Table\n");
+ mlx5_destroy_flow_group(esw->fdb_table.promisc_grp);
+ mlx5_destroy_flow_group(esw->fdb_table.allmulti_grp);
mlx5_destroy_flow_group(esw->fdb_table.addr_grp);
mlx5_destroy_flow_table(esw->fdb_table.fdb);
esw->fdb_table.fdb = NULL;
esw->fdb_table.addr_grp = NULL;
+ esw->fdb_table.allmulti_grp = NULL;
+ esw->fdb_table.promisc_grp = NULL;
}
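
The group indices above imply a fixed FDB layout: entries 0..table_size-3 hold the full-DMAC address group, entry table_size-2 the single allmulti rule and entry table_size-1 the source-port promisc catch-all. A small sketch of that split, under the assumption that the helper and parameter names below are illustrative and only mirror the MLX5_SET(..., start/end_flow_index, ...) calls in the patch:

/* Hypothetical illustration of the FDB index split used by
 * esw_create_fdb_table(); not driver code.
 */
static void fdb_group_bounds(int table_size, int *addr_grp_end,
			     int *allmulti_index, int *promisc_index)
{
	*addr_grp_end   = table_size - 3;	/* full-DMAC address group   */
	*allmulti_index = table_size - 2;	/* single mcast-bit rule     */
	*promisc_index  = table_size - 1;	/* source-port catch-all     */
}
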
/* E-Switch vport UC/MC lists management */
@@ -511,6 +616,52 @@ static int esw_del_uc_addr(struct mlx5_eswitch *esw, struct vport_addr *vaddr)
return 0;
}
+static void update_allmulti_vports(struct mlx5_eswitch *esw,
+ struct vport_addr *vaddr,
+ struct esw_mc_addr *esw_mc)
+{
+ u8 *mac = vaddr->node.addr;
+ u32 vport_idx = 0;
+
+ for (vport_idx = 0; vport_idx < esw->total_vports; vport_idx++) {
+ struct mlx5_vport *vport = &esw->vports[vport_idx];
+ struct hlist_head *vport_hash = vport->mc_list;
+ struct vport_addr *iter_vaddr =
+ l2addr_hash_find(vport_hash,
+ mac,
+ struct vport_addr);
+ if (IS_ERR_OR_NULL(vport->allmulti_rule) ||
+ vaddr->vport == vport_idx)
+ continue;
+ switch (vaddr->action) {
+ case MLX5_ACTION_ADD:
+ if (iter_vaddr)
+ continue;
+ iter_vaddr = l2addr_hash_add(vport_hash, mac,
+ struct vport_addr,
+ GFP_KERNEL);
+ if (!iter_vaddr) {
+ esw_warn(esw->dev,
+ "ALL-MULTI: Failed to add MAC(%pM) to vport[%d] DB\n",
+ mac, vport_idx);
+ continue;
+ }
+ iter_vaddr->vport = vport_idx;
+ iter_vaddr->flow_rule =
+ esw_fdb_set_vport_rule(esw,
+ mac,
+ vport_idx);
+ break;
+ case MLX5_ACTION_DEL:
+ if (!iter_vaddr)
+ continue;
+ mlx5_del_flow_rule(iter_vaddr->flow_rule);
+ l2addr_hash_del(iter_vaddr);
+ break;
+ }
+ }
+}
+
static int esw_add_mc_addr(struct mlx5_eswitch *esw, struct vport_addr *vaddr)
{
struct hlist_head *hash = esw->mc_table;
@@ -531,8 +682,17 @@ static int esw_add_mc_addr(struct mlx5_eswitch *esw, struct vport_addr *vaddr)
esw_mc->uplink_rule = /* Forward MC MAC to Uplink */
esw_fdb_set_vport_rule(esw, mac, UPLINK_VPORT);
+
+ /* Add this multicast mac to all the mc promiscuous vports */
+ update_allmulti_vports(esw, vaddr, esw_mc);
+
add:
- esw_mc->refcnt++;
+ /* If the multicast mac is added as a result of an mc promiscuous vport,
+ * don't increment the multicast ref count
+ */
+ if (!vaddr->mc_promisc)
+ esw_mc->refcnt++;
+
/* Forward MC MAC to vport */
vaddr->flow_rule = esw_fdb_set_vport_rule(esw, mac, vport);
esw_debug(esw->dev,
@@ -568,9 +728,15 @@ static int esw_del_mc_addr(struct mlx5_eswitch *esw, struct vport_addr *vaddr)
mlx5_del_flow_rule(vaddr->flow_rule);
vaddr->flow_rule = NULL;
- if (--esw_mc->refcnt)
+ /* If the multicast mac is added as a result of an mc promiscuous vport,
+ * don't decrement the multicast ref count.
+ */
+ if (vaddr->mc_promisc || (--esw_mc->refcnt > 0))
return 0;
+ /* Remove this multicast mac from all the mc promiscuous vports */
+ update_allmulti_vports(esw, vaddr, esw_mc);
+
if (esw_mc->uplink_rule)
mlx5_del_flow_rule(esw_mc->uplink_rule);
@@ -643,10 +809,13 @@ static void esw_update_vport_addr_list(struct mlx5_eswitch *esw,
addr->action = MLX5_ACTION_DEL;
}
+ if (!vport->enabled)
+ goto out;
+
err = mlx5_query_nic_vport_mac_list(esw->dev, vport_num, list_type,
mac_list, &size);
if (err)
- return;
+ goto out;
esw_debug(esw->dev, "vport[%d] context update %s list size (%d)\n",
vport_num, is_uc ? "UC" : "MC", size);
@@ -660,6 +829,24 @@ static void esw_update_vport_addr_list(struct mlx5_eswitch *esw,
addr = l2addr_hash_find(hash, mac_list[i], struct vport_addr);
if (addr) {
addr->action = MLX5_ACTION_NONE;
+ /* If this mac was previously added because of allmulti
+ * promiscuous rx mode, it's now converted to be an original
+ * vport mac.
+ */
+ if (addr->mc_promisc) {
+ struct esw_mc_addr *esw_mc =
+ l2addr_hash_find(esw->mc_table,
+ mac_list[i],
+ struct esw_mc_addr);
+ if (!esw_mc) {
+ esw_warn(esw->dev,
+ "Failed to MAC(%pM) in mcast DB\n",
+ mac_list[i]);
+ continue;
+ }
+ esw_mc->refcnt++;
+ addr->mc_promisc = false;
+ }
continue;
}
@@ -674,13 +861,121 @@ static void esw_update_vport_addr_list(struct mlx5_eswitch *esw,
addr->vport = vport_num;
addr->action = MLX5_ACTION_ADD;
}
+out:
kfree(mac_list);
}
-static void esw_vport_change_handler(struct work_struct *work)
+/* Sync vport multicast list with the E-Switch multicast table
+ * Must be called after esw_update_vport_addr_list
+ */
+static void esw_update_vport_mc_promisc(struct mlx5_eswitch *esw, u32 vport_num)
+{
+ struct mlx5_vport *vport = &esw->vports[vport_num];
+ struct l2addr_node *node;
+ struct vport_addr *addr;
+ struct hlist_head *hash;
+ struct hlist_node *tmp;
+ int hi;
+
+ hash = vport->mc_list;
+
+ for_each_l2hash_node(node, tmp, esw->mc_table, hi) {
+ u8 *mac = node->addr;
+
+ addr = l2addr_hash_find(hash, mac, struct vport_addr);
+ if (addr) {
+ if (addr->action == MLX5_ACTION_DEL)
+ addr->action = MLX5_ACTION_NONE;
+ continue;
+ }
+ addr = l2addr_hash_add(hash, mac, struct vport_addr,
+ GFP_KERNEL);
+ if (!addr) {
+ esw_warn(esw->dev,
+ "Failed to add allmulti MAC(%pM) to vport[%d] DB\n",
+ mac, vport_num);
+ continue;
+ }
+ addr->vport = vport_num;
+ addr->action = MLX5_ACTION_ADD;
+ addr->mc_promisc = true;
+ }
+}
+
+/* Apply vport rx mode to HW FDB table */
+static void esw_apply_vport_rx_mode(struct mlx5_eswitch *esw, u32 vport_num,
+ bool promisc, bool mc_promisc)
+{
+ struct esw_mc_addr *allmulti_addr = esw->mc_promisc;
+ struct mlx5_vport *vport = &esw->vports[vport_num];
+
+ if (IS_ERR_OR_NULL(vport->allmulti_rule) != mc_promisc)
+ goto promisc;
+
+ if (mc_promisc) {
+ vport->allmulti_rule =
+ esw_fdb_set_vport_allmulti_rule(esw, vport_num);
+ if (!allmulti_addr->uplink_rule)
+ allmulti_addr->uplink_rule =
+ esw_fdb_set_vport_allmulti_rule(esw,
+ UPLINK_VPORT);
+ allmulti_addr->refcnt++;
+ } else if (vport->allmulti_rule) {
+ mlx5_del_flow_rule(vport->allmulti_rule);
+ vport->allmulti_rule = NULL;
+
+ if (--allmulti_addr->refcnt > 0)
+ goto promisc;
+
+ if (allmulti_addr->uplink_rule)
+ mlx5_del_flow_rule(allmulti_addr->uplink_rule);
+ allmulti_addr->uplink_rule = NULL;
+ }
+
+promisc:
+ if (IS_ERR_OR_NULL(vport->promisc_rule) != promisc)
+ return;
+
+ if (promisc) {
+ vport->promisc_rule = esw_fdb_set_vport_promisc_rule(esw,
+ vport_num);
+ } else if (vport->promisc_rule) {
+ mlx5_del_flow_rule(vport->promisc_rule);
+ vport->promisc_rule = NULL;
+ }
+}
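
esw_apply_vport_rx_mode() acts only when the requested state differs from what is already programmed: IS_ERR_OR_NULL(rule) stands for "rule not installed", so comparing it against the requested flag makes the function idempotent, and the allmulti rule shared with the uplink is reference counted across vports. A simplified, self-contained sketch of that pattern follows; the structure and field names are hypothetical, not the driver's types.

#include <stdbool.h>

/* Toggle-with-refcount pattern modelled after esw_apply_vport_rx_mode(). */
struct toggle_state {
	bool installed;		/* per-vport allmulti rule present */
	bool uplink_installed;	/* shared uplink allmulti rule     */
	int  shared_refcnt;	/* vports using the shared rule    */
};

static void apply_allmulti(struct toggle_state *s, bool want)
{
	if (s->installed == want)	/* already in the requested state */
		return;

	if (want) {
		s->installed = true;		/* install the per-vport rule   */
		s->uplink_installed = true;	/* shared rule, installed once  */
		s->shared_refcnt++;
	} else {
		s->installed = false;		/* remove the per-vport rule    */
		if (--s->shared_refcnt == 0)
			s->uplink_installed = false;	/* last user gone */
	}
}
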
+
+/* Sync vport rx mode from vport context */
+static void esw_update_vport_rx_mode(struct mlx5_eswitch *esw, u32 vport_num)
+{
+ struct mlx5_vport *vport = &esw->vports[vport_num];
+ int promisc_all = 0;
+ int promisc_uc = 0;
+ int promisc_mc = 0;
+ int err;
+
+ err = mlx5_query_nic_vport_promisc(esw->dev,
+ vport_num,
+ &promisc_uc,
+ &promisc_mc,
+ &promisc_all);
+ if (err)
+ return;
+ esw_debug(esw->dev, "vport[%d] context update rx mode promisc_all=%d, all_multi=%d\n",
+ vport_num, promisc_all, promisc_mc);
+
+ if (!vport->trusted || !vport->enabled) {
+ promisc_uc = 0;
+ promisc_mc = 0;
+ promisc_all = 0;
+ }
+
+ esw_apply_vport_rx_mode(esw, vport_num, promisc_all,
+ (promisc_all || promisc_mc));
+}
+
+static void esw_vport_change_handle_locked(struct mlx5_vport *vport)
{
- struct mlx5_vport *vport =
- container_of(work, struct mlx5_vport, vport_change_handler);
struct mlx5_core_dev *dev = vport->dev;
struct mlx5_eswitch *esw = dev->priv.eswitch;
u8 mac[ETH_ALEN];
@@ -699,6 +994,15 @@ static void esw_vport_change_handler(struct work_struct *work)
if (vport->enabled_events & MC_ADDR_CHANGE) {
esw_update_vport_addr_list(esw, vport->vport,
MLX5_NVPRT_LIST_TYPE_MC);
+ }
+
+ if (vport->enabled_events & PROMISC_CHANGE) {
+ esw_update_vport_rx_mode(esw, vport->vport);
+ if (!IS_ERR_OR_NULL(vport->allmulti_rule))
+ esw_update_vport_mc_promisc(esw, vport->vport);
+ }
+
+ if (vport->enabled_events & (PROMISC_CHANGE | MC_ADDR_CHANGE)) {
esw_apply_vport_addr_list(esw, vport->vport,
MLX5_NVPRT_LIST_TYPE_MC);
}
@@ -709,15 +1013,477 @@ static void esw_vport_change_handler(struct work_struct *work)
vport->enabled_events);
}
+static void esw_vport_change_handler(struct work_struct *work)
+{
+ struct mlx5_vport *vport =
+ container_of(work, struct mlx5_vport, vport_change_handler);
+ struct mlx5_eswitch *esw = vport->dev->priv.eswitch;
+
+ mutex_lock(&esw->state_lock);
+ esw_vport_change_handle_locked(vport);
+ mutex_unlock(&esw->state_lock);
+}
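
The handler is now split into a *_locked body plus a thin workqueue wrapper that takes esw->state_lock, so the same sync/cleanup path can also be called from esw_enable_vport()/esw_disable_vport(), which already hold the lock. A generic sketch of that convention, with a pthread mutex standing in for the kernel mutex and illustrative names:

#include <pthread.h>

static pthread_mutex_t state_lock = PTHREAD_MUTEX_INITIALIZER;

static void change_handle_locked(void)
{
	/* caller must hold state_lock; do the actual sync work here */
}

static void change_handler(void)	/* async entry point (workqueue) */
{
	pthread_mutex_lock(&state_lock);
	change_handle_locked();
	pthread_mutex_unlock(&state_lock);
}
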
+
+static void esw_vport_enable_egress_acl(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
+{
+ int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
+ struct mlx5_flow_group *vlan_grp = NULL;
+ struct mlx5_flow_group *drop_grp = NULL;
+ struct mlx5_core_dev *dev = esw->dev;
+ struct mlx5_flow_namespace *root_ns;
+ struct mlx5_flow_table *acl;
+ void *match_criteria;
+ u32 *flow_group_in;
+ /* The egress acl table contains 2 rules:
+ * 1)Allow traffic with vlan_tag=vst_vlan_id
+ * 2)Drop all other traffic.
+ */
+ int table_size = 2;
+ int err = 0;
+
+ if (!MLX5_CAP_ESW_EGRESS_ACL(dev, ft_support) ||
+ !IS_ERR_OR_NULL(vport->egress.acl))
+ return;
+
+ esw_debug(dev, "Create vport[%d] egress ACL log_max_size(%d)\n",
+ vport->vport, MLX5_CAP_ESW_EGRESS_ACL(dev, log_max_ft_size));
+
+ root_ns = mlx5_get_flow_namespace(dev, MLX5_FLOW_NAMESPACE_ESW_EGRESS);
+ if (!root_ns) {
+ esw_warn(dev, "Failed to get E-Switch egress flow namespace\n");
+ return;
+ }
+
+ flow_group_in = mlx5_vzalloc(inlen);
+ if (!flow_group_in)
+ return;
+
+ acl = mlx5_create_vport_flow_table(root_ns, 0, table_size, 0, vport->vport);
+ if (IS_ERR_OR_NULL(acl)) {
+ err = PTR_ERR(acl);
+ esw_warn(dev, "Failed to create E-Switch vport[%d] egress flow Table, err(%d)\n",
+ vport->vport, err);
+ goto out;
+ }
+
+ MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
+ match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria);
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.vlan_tag);
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.first_vid);
+ MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 0);
+ MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 0);
+
+ vlan_grp = mlx5_create_flow_group(acl, flow_group_in);
+ if (IS_ERR_OR_NULL(vlan_grp)) {
+ err = PTR_ERR(vlan_grp);
+ esw_warn(dev, "Failed to create E-Switch vport[%d] egress allowed vlans flow group, err(%d)\n",
+ vport->vport, err);
+ goto out;
+ }
+
+ memset(flow_group_in, 0, inlen);
+ MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 1);
+ MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 1);
+ drop_grp = mlx5_create_flow_group(acl, flow_group_in);
+ if (IS_ERR_OR_NULL(drop_grp)) {
+ err = PTR_ERR(drop_grp);
+ esw_warn(dev, "Failed to create E-Switch vport[%d] egress drop flow group, err(%d)\n",
+ vport->vport, err);
+ goto out;
+ }
+
+ vport->egress.acl = acl;
+ vport->egress.drop_grp = drop_grp;
+ vport->egress.allowed_vlans_grp = vlan_grp;
+out:
+ kfree(flow_group_in);
+ if (err && !IS_ERR_OR_NULL(vlan_grp))
+ mlx5_destroy_flow_group(vlan_grp);
+ if (err && !IS_ERR_OR_NULL(acl))
+ mlx5_destroy_flow_table(acl);
+}
+
+static void esw_vport_cleanup_egress_rules(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
+{
+ if (!IS_ERR_OR_NULL(vport->egress.allowed_vlan))
+ mlx5_del_flow_rule(vport->egress.allowed_vlan);
+
+ if (!IS_ERR_OR_NULL(vport->egress.drop_rule))
+ mlx5_del_flow_rule(vport->egress.drop_rule);
+
+ vport->egress.allowed_vlan = NULL;
+ vport->egress.drop_rule = NULL;
+}
+
+static void esw_vport_disable_egress_acl(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
+{
+ if (IS_ERR_OR_NULL(vport->egress.acl))
+ return;
+
+ esw_debug(esw->dev, "Destroy vport[%d] E-Switch egress ACL\n", vport->vport);
+
+ esw_vport_cleanup_egress_rules(esw, vport);
+ mlx5_destroy_flow_group(vport->egress.allowed_vlans_grp);
+ mlx5_destroy_flow_group(vport->egress.drop_grp);
+ mlx5_destroy_flow_table(vport->egress.acl);
+ vport->egress.allowed_vlans_grp = NULL;
+ vport->egress.drop_grp = NULL;
+ vport->egress.acl = NULL;
+}
+
+static void esw_vport_enable_ingress_acl(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
+{
+ int inlen = MLX5_ST_SZ_BYTES(create_flow_group_in);
+ struct mlx5_core_dev *dev = esw->dev;
+ struct mlx5_flow_namespace *root_ns;
+ struct mlx5_flow_table *acl;
+ struct mlx5_flow_group *g;
+ void *match_criteria;
+ u32 *flow_group_in;
+ /* The ingress acl table contains 4 groups
+ * (2 active rules at the same time:
+ * 1 allow rule from one of the first 3 groups and
+ * 1 drop rule from the last group):
+ * 1)Allow untagged traffic with smac=original mac.
+ * 2)Allow untagged traffic.
+ * 3)Allow traffic with smac=original mac.
+ * 4)Drop all other traffic.
+ */
+ int table_size = 4;
+ int err = 0;
+
+ if (!MLX5_CAP_ESW_INGRESS_ACL(dev, ft_support) ||
+ !IS_ERR_OR_NULL(vport->ingress.acl))
+ return;
+
+ esw_debug(dev, "Create vport[%d] ingress ACL log_max_size(%d)\n",
+ vport->vport, MLX5_CAP_ESW_INGRESS_ACL(dev, log_max_ft_size));
+
+ root_ns = mlx5_get_flow_namespace(dev, MLX5_FLOW_NAMESPACE_ESW_INGRESS);
+ if (!root_ns) {
+ esw_warn(dev, "Failed to get E-Switch ingress flow namespace\n");
+ return;
+ }
+
+ flow_group_in = mlx5_vzalloc(inlen);
+ if (!flow_group_in)
+ return;
+
+ acl = mlx5_create_vport_flow_table(root_ns, 0, table_size, 0, vport->vport);
+ if (IS_ERR_OR_NULL(acl)) {
+ err = PTR_ERR(acl);
+ esw_warn(dev, "Failed to create E-Switch vport[%d] ingress flow Table, err(%d)\n",
+ vport->vport, err);
+ goto out;
+ }
+ vport->ingress.acl = acl;
+
+ match_criteria = MLX5_ADDR_OF(create_flow_group_in, flow_group_in, match_criteria);
+
+ MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.vlan_tag);
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_47_16);
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_15_0);
+ MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 0);
+ MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 0);
+
+ g = mlx5_create_flow_group(acl, flow_group_in);
+ if (IS_ERR_OR_NULL(g)) {
+ err = PTR_ERR(g);
+ esw_warn(dev, "Failed to create E-Switch vport[%d] ingress untagged spoofchk flow group, err(%d)\n",
+ vport->vport, err);
+ goto out;
+ }
+ vport->ingress.allow_untagged_spoofchk_grp = g;
+
+ memset(flow_group_in, 0, inlen);
+ MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.vlan_tag);
+ MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 1);
+ MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 1);
+
+ g = mlx5_create_flow_group(acl, flow_group_in);
+ if (IS_ERR_OR_NULL(g)) {
+ err = PTR_ERR(g);
+ esw_warn(dev, "Failed to create E-Switch vport[%d] ingress untagged flow group, err(%d)\n",
+ vport->vport, err);
+ goto out;
+ }
+ vport->ingress.allow_untagged_only_grp = g;
+
+ memset(flow_group_in, 0, inlen);
+ MLX5_SET(create_flow_group_in, flow_group_in, match_criteria_enable, MLX5_MATCH_OUTER_HEADERS);
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_47_16);
+ MLX5_SET_TO_ONES(fte_match_param, match_criteria, outer_headers.smac_15_0);
+ MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 2);
+ MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 2);
+
+ g = mlx5_create_flow_group(acl, flow_group_in);
+ if (IS_ERR_OR_NULL(g)) {
+ err = PTR_ERR(g);
+ esw_warn(dev, "Failed to create E-Switch vport[%d] ingress spoofchk flow group, err(%d)\n",
+ vport->vport, err);
+ goto out;
+ }
+ vport->ingress.allow_spoofchk_only_grp = g;
+
+ memset(flow_group_in, 0, inlen);
+ MLX5_SET(create_flow_group_in, flow_group_in, start_flow_index, 3);
+ MLX5_SET(create_flow_group_in, flow_group_in, end_flow_index, 3);
+
+ g = mlx5_create_flow_group(acl, flow_group_in);
+ if (IS_ERR_OR_NULL(g)) {
+ err = PTR_ERR(g);
+ esw_warn(dev, "Failed to create E-Switch vport[%d] ingress drop flow group, err(%d)\n",
+ vport->vport, err);
+ goto out;
+ }
+ vport->ingress.drop_grp = g;
+
+out:
+ if (err) {
+ if (!IS_ERR_OR_NULL(vport->ingress.allow_spoofchk_only_grp))
+ mlx5_destroy_flow_group(
+ vport->ingress.allow_spoofchk_only_grp);
+ if (!IS_ERR_OR_NULL(vport->ingress.allow_untagged_only_grp))
+ mlx5_destroy_flow_group(
+ vport->ingress.allow_untagged_only_grp);
+ if (!IS_ERR_OR_NULL(vport->ingress.allow_untagged_spoofchk_grp))
+ mlx5_destroy_flow_group(
+ vport->ingress.allow_untagged_spoofchk_grp);
+ if (!IS_ERR_OR_NULL(vport->ingress.acl))
+ mlx5_destroy_flow_table(vport->ingress.acl);
+ }
+
+ kfree(flow_group_in);
+}
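
The four ingress groups correspond to the possible (VST/QoS, spoofchk) combinations: esw_vport_ingress_config() later installs exactly one allow rule, which lands in whichever of the first three groups has the matching criteria, plus the catch-all drop rule in the last group. A hedged sketch of that selection; the enum and function names are illustrative, not driver symbols, and when neither setting is enabled the driver disables the ACL instead.

/* Hypothetical mapping from vport settings to the ingress ACL group that
 * will hold the single allow rule (the drop rule always lands in group 3).
 */
enum ingress_allow_group {
	ALLOW_UNTAGGED_AND_SMAC = 0,	/* VST/QoS set and spoofchk on */
	ALLOW_UNTAGGED_ONLY     = 1,	/* VST/QoS set, spoofchk off   */
	ALLOW_SMAC_ONLY         = 2,	/* spoofchk on, no VST/QoS     */
};

static enum ingress_allow_group pick_allow_group(int vst_or_qos, int spoofchk)
{
	if (vst_or_qos && spoofchk)
		return ALLOW_UNTAGGED_AND_SMAC;
	if (vst_or_qos)
		return ALLOW_UNTAGGED_ONLY;
	return ALLOW_SMAC_ONLY;
}
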
+
+static void esw_vport_cleanup_ingress_rules(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
+{
+ if (!IS_ERR_OR_NULL(vport->ingress.drop_rule))
+ mlx5_del_flow_rule(vport->ingress.drop_rule);
+
+ if (!IS_ERR_OR_NULL(vport->ingress.allow_rule))
+ mlx5_del_flow_rule(vport->ingress.allow_rule);
+
+ vport->ingress.drop_rule = NULL;
+ vport->ingress.allow_rule = NULL;
+}
+
+static void esw_vport_disable_ingress_acl(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
+{
+ if (IS_ERR_OR_NULL(vport->ingress.acl))
+ return;
+
+ esw_debug(esw->dev, "Destroy vport[%d] E-Switch ingress ACL\n", vport->vport);
+
+ esw_vport_cleanup_ingress_rules(esw, vport);
+ mlx5_destroy_flow_group(vport->ingress.allow_spoofchk_only_grp);
+ mlx5_destroy_flow_group(vport->ingress.allow_untagged_only_grp);
+ mlx5_destroy_flow_group(vport->ingress.allow_untagged_spoofchk_grp);
+ mlx5_destroy_flow_group(vport->ingress.drop_grp);
+ mlx5_destroy_flow_table(vport->ingress.acl);
+ vport->ingress.acl = NULL;
+ vport->ingress.drop_grp = NULL;
+ vport->ingress.allow_spoofchk_only_grp = NULL;
+ vport->ingress.allow_untagged_only_grp = NULL;
+ vport->ingress.allow_untagged_spoofchk_grp = NULL;
+}
+
+static int esw_vport_ingress_config(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
+{
+ u8 smac[ETH_ALEN];
+ u32 *match_v;
+ u32 *match_c;
+ int err = 0;
+ u8 *smac_v;
+
+ if (vport->spoofchk) {
+ err = mlx5_query_nic_vport_mac_address(esw->dev, vport->vport, smac);
+ if (err) {
+ esw_warn(esw->dev,
+ "vport[%d] configure ingress rules failed, query smac failed, err(%d)\n",
+ vport->vport, err);
+ return err;
+ }
+
+ if (!is_valid_ether_addr(smac)) {
+ mlx5_core_warn(esw->dev,
+ "vport[%d] configure ingress rules failed, illegal mac with spoofchk\n",
+ vport->vport);
+ return -EPERM;
+ }
+ }
+
+ esw_vport_cleanup_ingress_rules(esw, vport);
+
+ if (!vport->vlan && !vport->qos && !vport->spoofchk) {
+ esw_vport_disable_ingress_acl(esw, vport);
+ return 0;
+ }
+
+ esw_vport_enable_ingress_acl(esw, vport);
+
+ esw_debug(esw->dev,
+ "vport[%d] configure ingress rules, vlan(%d) qos(%d)\n",
+ vport->vport, vport->vlan, vport->qos);
+
+ match_v = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
+ match_c = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
+ if (!match_v || !match_c) {
+ err = -ENOMEM;
+ esw_warn(esw->dev, "vport[%d] configure ingress rules failed, err(%d)\n",
+ vport->vport, err);
+ goto out;
+ }
+
+ if (vport->vlan || vport->qos)
+ MLX5_SET_TO_ONES(fte_match_param, match_c, outer_headers.vlan_tag);
+
+ if (vport->spoofchk) {
+ MLX5_SET_TO_ONES(fte_match_param, match_c, outer_headers.smac_47_16);
+ MLX5_SET_TO_ONES(fte_match_param, match_c, outer_headers.smac_15_0);
+ smac_v = MLX5_ADDR_OF(fte_match_param,
+ match_v,
+ outer_headers.smac_47_16);
+ ether_addr_copy(smac_v, smac);
+ }
+
+ vport->ingress.allow_rule =
+ mlx5_add_flow_rule(vport->ingress.acl,
+ MLX5_MATCH_OUTER_HEADERS,
+ match_c,
+ match_v,
+ MLX5_FLOW_CONTEXT_ACTION_ALLOW,
+ 0, NULL);
+ if (IS_ERR_OR_NULL(vport->ingress.allow_rule)) {
+ err = PTR_ERR(vport->ingress.allow_rule);
+ pr_warn("vport[%d] configure ingress allow rule, err(%d)\n",
+ vport->vport, err);
+ vport->ingress.allow_rule = NULL;
+ goto out;
+ }
+
+ memset(match_c, 0, MLX5_ST_SZ_BYTES(fte_match_param));
+ memset(match_v, 0, MLX5_ST_SZ_BYTES(fte_match_param));
+ vport->ingress.drop_rule =
+ mlx5_add_flow_rule(vport->ingress.acl,
+ 0,
+ match_c,
+ match_v,
+ MLX5_FLOW_CONTEXT_ACTION_DROP,
+ 0, NULL);
+ if (IS_ERR_OR_NULL(vport->ingress.drop_rule)) {
+ err = PTR_ERR(vport->ingress.drop_rule);
+ pr_warn("vport[%d] configure ingress drop rule, err(%d)\n",
+ vport->vport, err);
+ vport->ingress.drop_rule = NULL;
+ goto out;
+ }
+
+out:
+ if (err)
+ esw_vport_cleanup_ingress_rules(esw, vport);
+
+ kfree(match_v);
+ kfree(match_c);
+ return err;
+}
+
+static int esw_vport_egress_config(struct mlx5_eswitch *esw,
+ struct mlx5_vport *vport)
+{
+ u32 *match_v;
+ u32 *match_c;
+ int err = 0;
+
+ esw_vport_cleanup_egress_rules(esw, vport);
+
+ if (!vport->vlan && !vport->qos) {
+ esw_vport_disable_egress_acl(esw, vport);
+ return 0;
+ }
+
+ esw_vport_enable_egress_acl(esw, vport);
+
+ esw_debug(esw->dev,
+ "vport[%d] configure egress rules, vlan(%d) qos(%d)\n",
+ vport->vport, vport->vlan, vport->qos);
+
+ match_v = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
+ match_c = kzalloc(MLX5_ST_SZ_BYTES(fte_match_param), GFP_KERNEL);
+ if (!match_v || !match_c) {
+ err = -ENOMEM;
+ esw_warn(esw->dev, "vport[%d] configure egress rules failed, err(%d)\n",
+ vport->vport, err);
+ goto out;
+ }
+
+ /* Allowed vlan rule */
+ MLX5_SET_TO_ONES(fte_match_param, match_c, outer_headers.vlan_tag);
+ MLX5_SET_TO_ONES(fte_match_param, match_v, outer_headers.vlan_tag);
+ MLX5_SET_TO_ONES(fte_match_param, match_c, outer_headers.first_vid);
+ MLX5_SET(fte_match_param, match_v, outer_headers.first_vid, vport->vlan);
+
+ vport->egress.allowed_vlan =
+ mlx5_add_flow_rule(vport->egress.acl,
+ MLX5_MATCH_OUTER_HEADERS,
+ match_c,
+ match_v,
+ MLX5_FLOW_CONTEXT_ACTION_ALLOW,
+ 0, NULL);
+ if (IS_ERR_OR_NULL(vport->egress.allowed_vlan)) {
+ err = PTR_ERR(vport->egress.allowed_vlan);
+ pr_warn("vport[%d] configure egress allowed vlan rule failed, err(%d)\n",
+ vport->vport, err);
+ vport->egress.allowed_vlan = NULL;
+ goto out;
+ }
+
+ /* Drop others rule (star rule) */
+ memset(match_c, 0, MLX5_ST_SZ_BYTES(fte_match_param));
+ memset(match_v, 0, MLX5_ST_SZ_BYTES(fte_match_param));
+ vport->egress.drop_rule =
+ mlx5_add_flow_rule(vport->egress.acl,
+ 0,
+ match_c,
+ match_v,
+ MLX5_FLOW_CONTEXT_ACTION_DROP,
+ 0, NULL);
+ if (IS_ERR_OR_NULL(vport->egress.drop_rule)) {
+ err = PTR_ERR(vport->egress.drop_rule);
+ pr_warn("vport[%d] configure egress drop rule failed, err(%d)\n",
+ vport->vport, err);
+ vport->egress.drop_rule = NULL;
+ }
+out:
+ kfree(match_v);
+ kfree(match_c);
+ return err;
+}
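
With VST configured, the two egress ACL rules built above reduce to one predicate per frame leaving the VF: allow it if it carries a VLAN tag whose VID equals the vport's VST VLAN, otherwise hit the wildcard drop rule. A minimal sketch of that decision; the packet structure and field names are illustrative only.

#include <stdbool.h>
#include <stdint.h>

struct egress_pkt {
	bool	 has_vlan_tag;
	uint16_t vid;
};

static bool egress_allowed(const struct egress_pkt *pkt, uint16_t vst_vlan)
{
	/* Rule 1: allow traffic tagged with the VST VLAN id */
	if (pkt->has_vlan_tag && pkt->vid == vst_vlan)
		return true;
	/* Rule 2: drop everything else (the "star" rule) */
	return false;
}
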
+
static void esw_enable_vport(struct mlx5_eswitch *esw, int vport_num,
int enable_events)
{
struct mlx5_vport *vport = &esw->vports[vport_num];
- unsigned long flags;
+ mutex_lock(&esw->state_lock);
WARN_ON(vport->enabled);
esw_debug(esw->dev, "Enabling VPORT(%d)\n", vport_num);
+
+ if (vport_num) { /* Only VFs need ACLs for VST and spoofchk filtering */
+ esw_vport_ingress_config(esw, vport);
+ esw_vport_egress_config(esw, vport);
+ }
+
mlx5_modify_vport_admin_state(esw->dev,
MLX5_QUERY_VPORT_STATE_IN_OP_MOD_ESW_VPORT,
vport_num,
@@ -725,53 +1491,32 @@ static void esw_enable_vport(struct mlx5_eswitch *esw, int vport_num,
/* Sync with current vport context */
vport->enabled_events = enable_events;
- esw_vport_change_handler(&vport->vport_change_handler);
+ esw_vport_change_handle_locked(vport);
- spin_lock_irqsave(&vport->lock, flags);
vport->enabled = true;
- spin_unlock_irqrestore(&vport->lock, flags);
+
+ /* only PF is trusted by default */
+ vport->trusted = (vport_num) ? false : true;
arm_vport_context_events_cmd(esw->dev, vport_num, enable_events);
esw->enabled_vports++;
esw_debug(esw->dev, "Enabled VPORT(%d)\n", vport_num);
-}
-
-static void esw_cleanup_vport(struct mlx5_eswitch *esw, u16 vport_num)
-{
- struct mlx5_vport *vport = &esw->vports[vport_num];
- struct l2addr_node *node;
- struct vport_addr *addr;
- struct hlist_node *tmp;
- int hi;
-
- for_each_l2hash_node(node, tmp, vport->uc_list, hi) {
- addr = container_of(node, struct vport_addr, node);
- addr->action = MLX5_ACTION_DEL;
- }
- esw_apply_vport_addr_list(esw, vport_num, MLX5_NVPRT_LIST_TYPE_UC);
-
- for_each_l2hash_node(node, tmp, vport->mc_list, hi) {
- addr = container_of(node, struct vport_addr, node);
- addr->action = MLX5_ACTION_DEL;
- }
- esw_apply_vport_addr_list(esw, vport_num, MLX5_NVPRT_LIST_TYPE_MC);
+ mutex_unlock(&esw->state_lock);
}
static void esw_disable_vport(struct mlx5_eswitch *esw, int vport_num)
{
struct mlx5_vport *vport = &esw->vports[vport_num];
- unsigned long flags;
if (!vport->enabled)
return;
esw_debug(esw->dev, "Disabling vport(%d)\n", vport_num);
/* Mark this vport as disabled to discard new events */
- spin_lock_irqsave(&vport->lock, flags);
vport->enabled = false;
- vport->enabled_events = 0;
- spin_unlock_irqrestore(&vport->lock, flags);
+
+ synchronize_irq(mlx5_get_msix_vec(esw->dev, MLX5_EQ_VEC_ASYNC));
mlx5_modify_vport_admin_state(esw->dev,
MLX5_QUERY_VPORT_STATE_IN_OP_MOD_ESW_VPORT,
@@ -781,9 +1526,19 @@ static void esw_disable_vport(struct mlx5_eswitch *esw, int vport_num)
flush_workqueue(esw->work_queue);
/* Disable events from this vport */
arm_vport_context_events_cmd(esw->dev, vport->vport, 0);
- /* We don't assume VFs will cleanup after themselves */
- esw_cleanup_vport(esw, vport_num);
+ mutex_lock(&esw->state_lock);
+ /* We don't assume VFs will clean up after themselves.
+ * Calling the vport change handler while the vport is disabled will
+ * clean up the vport resources.
+ */
+ esw_vport_change_handle_locked(vport);
+ vport->enabled_events = 0;
+ if (vport_num) {
+ esw_vport_disable_egress_acl(esw, vport);
+ esw_vport_disable_ingress_acl(esw, vport);
+ }
esw->enabled_vports--;
+ mutex_unlock(&esw->state_lock);
}
/* Public E-Switch API */
@@ -802,6 +1557,12 @@ int mlx5_eswitch_enable_sriov(struct mlx5_eswitch *esw, int nvfs)
return -ENOTSUPP;
}
+ if (!MLX5_CAP_ESW_INGRESS_ACL(esw->dev, ft_support))
+ esw_warn(esw->dev, "E-Switch ingress ACL is not supported by FW\n");
+
+ if (!MLX5_CAP_ESW_EGRESS_ACL(esw->dev, ft_support))
+ esw_warn(esw->dev, "E-Switch engress ACL is not supported by FW\n");
+
esw_info(esw->dev, "E-Switch enable SRIOV: nvfs(%d)\n", nvfs);
esw_disable_vport(esw, 0);
@@ -824,6 +1585,7 @@ abort:
void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw)
{
+ struct esw_mc_addr *mc_promisc;
int i;
if (!esw || !MLX5_CAP_GEN(esw->dev, vport_group_manager) ||
@@ -833,9 +1595,14 @@ void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw)
esw_info(esw->dev, "disable SRIOV: active vports(%d)\n",
esw->enabled_vports);
+ mc_promisc = esw->mc_promisc;
+
for (i = 0; i < esw->total_vports; i++)
esw_disable_vport(esw, i);
+ if (mc_promisc && mc_promisc->uplink_rule)
+ mlx5_del_flow_rule(mc_promisc->uplink_rule);
+
esw_destroy_fdb_table(esw);
/* VPORT 0 (PF) must be enabled back with non-sriov configuration */
@@ -845,7 +1612,8 @@ void mlx5_eswitch_disable_sriov(struct mlx5_eswitch *esw)
int mlx5_eswitch_init(struct mlx5_core_dev *dev)
{
int l2_table_size = 1 << MLX5_CAP_GEN(dev, log_max_l2_table);
- int total_vports = 1 + pci_sriov_get_totalvfs(dev->pdev);
+ int total_vports = MLX5_TOTAL_VPORTS(dev);
+ struct esw_mc_addr *mc_promisc;
struct mlx5_eswitch *esw;
int vport_num;
int err;
@@ -874,6 +1642,13 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
}
esw->l2_table.size = l2_table_size;
+ mc_promisc = kzalloc(sizeof(*mc_promisc), GFP_KERNEL);
+ if (!mc_promisc) {
+ err = -ENOMEM;
+ goto abort;
+ }
+ esw->mc_promisc = mc_promisc;
+
esw->work_queue = create_singlethread_workqueue("mlx5_esw_wq");
if (!esw->work_queue) {
err = -ENOMEM;
@@ -887,6 +1662,8 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
goto abort;
}
+ mutex_init(&esw->state_lock);
+
for (vport_num = 0; vport_num < total_vports; vport_num++) {
struct mlx5_vport *vport = &esw->vports[vport_num];
@@ -894,7 +1671,6 @@ int mlx5_eswitch_init(struct mlx5_core_dev *dev)
vport->dev = dev;
INIT_WORK(&vport->vport_change_handler,
esw_vport_change_handler);
- spin_lock_init(&vport->lock);
}
esw->total_vports = total_vports;
@@ -925,6 +1701,7 @@ void mlx5_eswitch_cleanup(struct mlx5_eswitch *esw)
esw->dev->priv.eswitch = NULL;
destroy_workqueue(esw->work_queue);
kfree(esw->l2_table.bitmap);
+ kfree(esw->mc_promisc);
kfree(esw->vports);
kfree(esw);
}
@@ -942,10 +1719,8 @@ void mlx5_eswitch_vport_event(struct mlx5_eswitch *esw, struct mlx5_eqe *eqe)
}
vport = &esw->vports[vport_num];
- spin_lock(&vport->lock);
if (vport->enabled)
queue_work(esw->work_queue, &vport->vport_change_handler);
- spin_unlock(&vport->lock);
}
/* Vport Administration */
@@ -957,12 +1732,22 @@ int mlx5_eswitch_set_vport_mac(struct mlx5_eswitch *esw,
int vport, u8 mac[ETH_ALEN])
{
int err = 0;
+ struct mlx5_vport *evport;
if (!ESW_ALLOWED(esw))
return -EPERM;
if (!LEGAL_VPORT(esw, vport))
return -EINVAL;
+ evport = &esw->vports[vport];
+
+ if (evport->spoofchk && !is_valid_ether_addr(mac)) {
+ mlx5_core_warn(esw->dev,
+ "MAC invalidation is not allowed when spoofchk is on, vport(%d)\n",
+ vport);
+ return -EPERM;
+ }
+
err = mlx5_modify_nic_vport_mac_address(esw->dev, vport, mac);
if (err) {
mlx5_core_warn(esw->dev,
@@ -971,6 +1756,11 @@ int mlx5_eswitch_set_vport_mac(struct mlx5_eswitch *esw,
return err;
}
+ mutex_lock(&esw->state_lock);
+ if (evport->enabled)
+ err = esw_vport_ingress_config(esw, evport);
+ mutex_unlock(&esw->state_lock);
+
return err;
}
@@ -990,6 +1780,7 @@ int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw,
int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
int vport, struct ifla_vf_info *ivi)
{
+ struct mlx5_vport *evport;
u16 vlan;
u8 qos;
@@ -998,6 +1789,8 @@ int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
if (!LEGAL_VPORT(esw, vport))
return -EINVAL;
+ evport = &esw->vports[vport];
+
memset(ivi, 0, sizeof(*ivi));
ivi->vf = vport - 1;
@@ -1008,7 +1801,7 @@ int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
query_esw_vport_cvlan(esw->dev, vport, &vlan, &qos);
ivi->vlan = vlan;
ivi->qos = qos;
- ivi->spoofchk = 0;
+ ivi->spoofchk = evport->spoofchk;
return 0;
}
@@ -1016,6 +1809,8 @@ int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
int mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
int vport, u16 vlan, u8 qos)
{
+ struct mlx5_vport *evport;
+ int err = 0;
int set = 0;
if (!ESW_ALLOWED(esw))
@@ -1026,7 +1821,72 @@ int mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
if (vlan || qos)
set = 1;
- return modify_esw_vport_cvlan(esw->dev, vport, vlan, qos, set);
+ evport = &esw->vports[vport];
+
+ err = modify_esw_vport_cvlan(esw->dev, vport, vlan, qos, set);
+ if (err)
+ return err;
+
+ mutex_lock(&esw->state_lock);
+ evport->vlan = vlan;
+ evport->qos = qos;
+ if (evport->enabled) {
+ err = esw_vport_ingress_config(esw, evport);
+ if (err)
+ goto out;
+ err = esw_vport_egress_config(esw, evport);
+ }
+
+out:
+ mutex_unlock(&esw->state_lock);
+ return err;
+}
+
+int mlx5_eswitch_set_vport_spoofchk(struct mlx5_eswitch *esw,
+ int vport, bool spoofchk)
+{
+ struct mlx5_vport *evport;
+ bool pschk;
+ int err = 0;
+
+ if (!ESW_ALLOWED(esw))
+ return -EPERM;
+ if (!LEGAL_VPORT(esw, vport))
+ return -EINVAL;
+
+ evport = &esw->vports[vport];
+
+ mutex_lock(&esw->state_lock);
+ pschk = evport->spoofchk;
+ evport->spoofchk = spoofchk;
+ if (evport->enabled)
+ err = esw_vport_ingress_config(esw, evport);
+ if (err)
+ evport->spoofchk = pschk;
+ mutex_unlock(&esw->state_lock);
+
+ return err;
+}
+
+int mlx5_eswitch_set_vport_trust(struct mlx5_eswitch *esw,
+ int vport, bool setting)
+{
+ struct mlx5_vport *evport;
+
+ if (!ESW_ALLOWED(esw))
+ return -EPERM;
+ if (!LEGAL_VPORT(esw, vport))
+ return -EINVAL;
+
+ evport = &esw->vports[vport];
+
+ mutex_lock(&esw->state_lock);
+ evport->trusted = setting;
+ if (evport->enabled)
+ esw_vport_change_handle_locked(evport);
+ mutex_unlock(&esw->state_lock);
+
+ return 0;
}
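
The new spoofchk/trust setters are intended to back the standard per-VF netdev callbacks. A hedged usage sketch follows; the wrapper functions are hypothetical glue (assuming "eswitch.h" is included), they only show how an ndo_set_vf_spoofchk / ndo_set_vf_trust implementation could forward to this API, with vf + 1 mapping the VF index to its vport as in mlx5_eswitch_get_vport_config().

/* Hypothetical glue, not part of the patch. */
static int example_set_vf_spoofchk(struct mlx5_eswitch *esw, int vf, bool on)
{
	return mlx5_eswitch_set_vport_spoofchk(esw, vf + 1, on);
}

static int example_set_vf_trust(struct mlx5_eswitch *esw, int vf, bool trusted)
{
	return mlx5_eswitch_set_vport_trust(esw, vf + 1, trusted);
}
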
int mlx5_eswitch_get_vport_stats(struct mlx5_eswitch *esw,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
index 3416a428f70f..fd6800256d4a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/eswitch.h
@@ -88,18 +88,40 @@ struct l2addr_node {
kfree(ptr); \
})
+struct vport_ingress {
+ struct mlx5_flow_table *acl;
+ struct mlx5_flow_group *allow_untagged_spoofchk_grp;
+ struct mlx5_flow_group *allow_spoofchk_only_grp;
+ struct mlx5_flow_group *allow_untagged_only_grp;
+ struct mlx5_flow_group *drop_grp;
+ struct mlx5_flow_rule *allow_rule;
+ struct mlx5_flow_rule *drop_rule;
+};
+
+struct vport_egress {
+ struct mlx5_flow_table *acl;
+ struct mlx5_flow_group *allowed_vlans_grp;
+ struct mlx5_flow_group *drop_grp;
+ struct mlx5_flow_rule *allowed_vlan;
+ struct mlx5_flow_rule *drop_rule;
+};
+
struct mlx5_vport {
struct mlx5_core_dev *dev;
int vport;
struct hlist_head uc_list[MLX5_L2_ADDR_HASH_SIZE];
struct hlist_head mc_list[MLX5_L2_ADDR_HASH_SIZE];
+ struct mlx5_flow_rule *promisc_rule;
+ struct mlx5_flow_rule *allmulti_rule;
struct work_struct vport_change_handler;
- /* This spinlock protects access to vport data, between
- * "esw_vport_disable" and ongoing interrupt "mlx5_eswitch_vport_event"
- * once vport marked as disabled new interrupts are discarded.
- */
- spinlock_t lock; /* vport events sync */
+ struct vport_ingress ingress;
+ struct vport_egress egress;
+
+ u16 vlan;
+ u8 qos;
+ bool spoofchk;
+ bool trusted;
bool enabled;
u16 enabled_events;
};
@@ -113,6 +135,8 @@ struct mlx5_l2_table {
struct mlx5_eswitch_fdb {
void *fdb;
struct mlx5_flow_group *addr_grp;
+ struct mlx5_flow_group *allmulti_grp;
+ struct mlx5_flow_group *promisc_grp;
};
struct mlx5_eswitch {
@@ -124,6 +148,11 @@ struct mlx5_eswitch {
struct mlx5_vport *vports;
int total_vports;
int enabled_vports;
+ /* Synchronize between vport change events
+ * and async SRIOV admin state changes
+ */
+ struct mutex state_lock;
+ struct esw_mc_addr *mc_promisc;
};
/* E-Switch API */
@@ -138,6 +167,10 @@ int mlx5_eswitch_set_vport_state(struct mlx5_eswitch *esw,
int vport, int link_state);
int mlx5_eswitch_set_vport_vlan(struct mlx5_eswitch *esw,
int vport, u16 vlan, u8 qos);
+int mlx5_eswitch_set_vport_spoofchk(struct mlx5_eswitch *esw,
+ int vport, bool spoofchk);
+int mlx5_eswitch_set_vport_trust(struct mlx5_eswitch *esw,
+ int vport_num, bool setting);
int mlx5_eswitch_get_vport_config(struct mlx5_eswitch *esw,
int vport, struct ifla_vf_info *ivi);
int mlx5_eswitch_get_vport_stats(struct mlx5_eswitch *esw,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
index f46f1db0fc00..a5bb6b695242 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.c
@@ -50,6 +50,10 @@ int mlx5_cmd_update_root_ft(struct mlx5_core_dev *dev,
MLX5_CMD_OP_SET_FLOW_TABLE_ROOT);
MLX5_SET(set_flow_table_root_in, in, table_type, ft->type);
MLX5_SET(set_flow_table_root_in, in, table_id, ft->id);
+ if (ft->vport) {
+ MLX5_SET(set_flow_table_root_in, in, vport_number, ft->vport);
+ MLX5_SET(set_flow_table_root_in, in, other_vport, 1);
+ }
memset(out, 0, sizeof(out));
return mlx5_cmd_exec_check_status(dev, in, sizeof(in), out,
@@ -57,6 +61,7 @@ int mlx5_cmd_update_root_ft(struct mlx5_core_dev *dev,
}
int mlx5_cmd_create_flow_table(struct mlx5_core_dev *dev,
+ u16 vport,
enum fs_flow_table_type type, unsigned int level,
unsigned int log_size, struct mlx5_flow_table
*next_ft, unsigned int *table_id)
@@ -77,6 +82,10 @@ int mlx5_cmd_create_flow_table(struct mlx5_core_dev *dev,
MLX5_SET(create_flow_table_in, in, table_type, type);
MLX5_SET(create_flow_table_in, in, level, level);
MLX5_SET(create_flow_table_in, in, log_size, log_size);
+ if (vport) {
+ MLX5_SET(create_flow_table_in, in, vport_number, vport);
+ MLX5_SET(create_flow_table_in, in, other_vport, 1);
+ }
memset(out, 0, sizeof(out));
err = mlx5_cmd_exec_check_status(dev, in, sizeof(in), out,
@@ -101,6 +110,10 @@ int mlx5_cmd_destroy_flow_table(struct mlx5_core_dev *dev,
MLX5_CMD_OP_DESTROY_FLOW_TABLE);
MLX5_SET(destroy_flow_table_in, in, table_type, ft->type);
MLX5_SET(destroy_flow_table_in, in, table_id, ft->id);
+ if (ft->vport) {
+ MLX5_SET(destroy_flow_table_in, in, vport_number, ft->vport);
+ MLX5_SET(destroy_flow_table_in, in, other_vport, 1);
+ }
return mlx5_cmd_exec_check_status(dev, in, sizeof(in), out,
sizeof(out));
@@ -120,6 +133,10 @@ int mlx5_cmd_modify_flow_table(struct mlx5_core_dev *dev,
MLX5_CMD_OP_MODIFY_FLOW_TABLE);
MLX5_SET(modify_flow_table_in, in, table_type, ft->type);
MLX5_SET(modify_flow_table_in, in, table_id, ft->id);
+ if (ft->vport) {
+ MLX5_SET(modify_flow_table_in, in, vport_number, ft->vport);
+ MLX5_SET(modify_flow_table_in, in, other_vport, 1);
+ }
MLX5_SET(modify_flow_table_in, in, modify_field_select,
MLX5_MODIFY_FLOW_TABLE_MISS_TABLE_ID);
if (next_ft) {
@@ -148,6 +165,10 @@ int mlx5_cmd_create_flow_group(struct mlx5_core_dev *dev,
MLX5_CMD_OP_CREATE_FLOW_GROUP);
MLX5_SET(create_flow_group_in, in, table_type, ft->type);
MLX5_SET(create_flow_group_in, in, table_id, ft->id);
+ if (ft->vport) {
+ MLX5_SET(create_flow_group_in, in, vport_number, ft->vport);
+ MLX5_SET(create_flow_group_in, in, other_vport, 1);
+ }
err = mlx5_cmd_exec_check_status(dev, in,
inlen, out,
@@ -174,6 +195,10 @@ int mlx5_cmd_destroy_flow_group(struct mlx5_core_dev *dev,
MLX5_SET(destroy_flow_group_in, in, table_type, ft->type);
MLX5_SET(destroy_flow_group_in, in, table_id, ft->id);
MLX5_SET(destroy_flow_group_in, in, group_id, group_id);
+ if (ft->vport) {
+ MLX5_SET(destroy_flow_group_in, in, vport_number, ft->vport);
+ MLX5_SET(destroy_flow_group_in, in, other_vport, 1);
+ }
return mlx5_cmd_exec_check_status(dev, in, sizeof(in), out,
sizeof(out));
@@ -207,22 +232,29 @@ static int mlx5_cmd_set_fte(struct mlx5_core_dev *dev,
MLX5_SET(set_fte_in, in, table_type, ft->type);
MLX5_SET(set_fte_in, in, table_id, ft->id);
MLX5_SET(set_fte_in, in, flow_index, fte->index);
+ if (ft->vport) {
+ MLX5_SET(set_fte_in, in, vport_number, ft->vport);
+ MLX5_SET(set_fte_in, in, other_vport, 1);
+ }
in_flow_context = MLX5_ADDR_OF(set_fte_in, in, flow_context);
MLX5_SET(flow_context, in_flow_context, group_id, group_id);
MLX5_SET(flow_context, in_flow_context, flow_tag, fte->flow_tag);
MLX5_SET(flow_context, in_flow_context, action, fte->action);
- MLX5_SET(flow_context, in_flow_context, destination_list_size,
- fte->dests_size);
in_match_value = MLX5_ADDR_OF(flow_context, in_flow_context,
match_value);
memcpy(in_match_value, &fte->val, MLX5_ST_SZ_BYTES(fte_match_param));
+ in_dests = MLX5_ADDR_OF(flow_context, in_flow_context, destination);
if (fte->action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) {
- in_dests = MLX5_ADDR_OF(flow_context, in_flow_context, destination);
+ int list_size = 0;
+
list_for_each_entry(dst, &fte->node.children, node.list) {
unsigned int id;
+ if (dst->dest_attr.type == MLX5_FLOW_DESTINATION_TYPE_COUNTER)
+ continue;
+
MLX5_SET(dest_format_struct, in_dests, destination_type,
dst->dest_attr.type);
if (dst->dest_attr.type ==
@@ -233,8 +265,31 @@ static int mlx5_cmd_set_fte(struct mlx5_core_dev *dev,
}
MLX5_SET(dest_format_struct, in_dests, destination_id, id);
in_dests += MLX5_ST_SZ_BYTES(dest_format_struct);
+ list_size++;
}
+
+ MLX5_SET(flow_context, in_flow_context, destination_list_size,
+ list_size);
}
+
+ if (fte->action & MLX5_FLOW_CONTEXT_ACTION_COUNT) {
+ int list_size = 0;
+
+ list_for_each_entry(dst, &fte->node.children, node.list) {
+ if (dst->dest_attr.type !=
+ MLX5_FLOW_DESTINATION_TYPE_COUNTER)
+ continue;
+
+ MLX5_SET(flow_counter_list, in_dests, flow_counter_id,
+ dst->dest_attr.counter->id);
+ in_dests += MLX5_ST_SZ_BYTES(dest_format_struct);
+ list_size++;
+ }
+
+ MLX5_SET(flow_context, in_flow_context, flow_counter_list_size,
+ list_size);
+ }
+
memset(out, 0, sizeof(out));
err = mlx5_cmd_exec_check_status(dev, in, inlen, out,
sizeof(out));
@@ -254,18 +309,16 @@ int mlx5_cmd_create_fte(struct mlx5_core_dev *dev,
int mlx5_cmd_update_fte(struct mlx5_core_dev *dev,
struct mlx5_flow_table *ft,
unsigned group_id,
+ int modify_mask,
struct fs_fte *fte)
{
int opmod;
- int modify_mask;
int atomic_mod_cap = MLX5_CAP_FLOWTABLE(dev,
flow_table_properties_nic_receive.
flow_modify_en);
if (!atomic_mod_cap)
return -ENOTSUPP;
opmod = 1;
- modify_mask = 1 <<
- MLX5_SET_FTE_MODIFY_ENABLE_MASK_DESTINATION_LIST;
return mlx5_cmd_set_fte(dev, opmod, modify_mask, ft, group_id, fte);
}
@@ -285,8 +338,78 @@ int mlx5_cmd_delete_fte(struct mlx5_core_dev *dev,
MLX5_SET(delete_fte_in, in, table_type, ft->type);
MLX5_SET(delete_fte_in, in, table_id, ft->id);
MLX5_SET(delete_fte_in, in, flow_index, index);
+ if (ft->vport) {
+ MLX5_SET(delete_fte_in, in, vport_number, ft->vport);
+ MLX5_SET(delete_fte_in, in, other_vport, 1);
+ }
err = mlx5_cmd_exec_check_status(dev, in, sizeof(in), out, sizeof(out));
return err;
}
+
+int mlx5_cmd_fc_alloc(struct mlx5_core_dev *dev, u16 *id)
+{
+ u32 in[MLX5_ST_SZ_DW(alloc_flow_counter_in)];
+ u32 out[MLX5_ST_SZ_DW(alloc_flow_counter_out)];
+ int err;
+
+ memset(in, 0, sizeof(in));
+ memset(out, 0, sizeof(out));
+
+ MLX5_SET(alloc_flow_counter_in, in, opcode,
+ MLX5_CMD_OP_ALLOC_FLOW_COUNTER);
+
+ err = mlx5_cmd_exec_check_status(dev, in, sizeof(in), out,
+ sizeof(out));
+ if (err)
+ return err;
+
+ *id = MLX5_GET(alloc_flow_counter_out, out, flow_counter_id);
+
+ return 0;
+}
+
+int mlx5_cmd_fc_free(struct mlx5_core_dev *dev, u16 id)
+{
+ u32 in[MLX5_ST_SZ_DW(dealloc_flow_counter_in)];
+ u32 out[MLX5_ST_SZ_DW(dealloc_flow_counter_out)];
+
+ memset(in, 0, sizeof(in));
+ memset(out, 0, sizeof(out));
+
+ MLX5_SET(dealloc_flow_counter_in, in, opcode,
+ MLX5_CMD_OP_DEALLOC_FLOW_COUNTER);
+ MLX5_SET(dealloc_flow_counter_in, in, flow_counter_id, id);
+
+ return mlx5_cmd_exec_check_status(dev, in, sizeof(in), out,
+ sizeof(out));
+}
+
+int mlx5_cmd_fc_query(struct mlx5_core_dev *dev, u16 id,
+ u64 *packets, u64 *bytes)
+{
+ u32 out[MLX5_ST_SZ_BYTES(query_flow_counter_out) +
+ MLX5_ST_SZ_BYTES(traffic_counter)];
+ u32 in[MLX5_ST_SZ_DW(query_flow_counter_in)];
+ void *stats;
+ int err = 0;
+
+ memset(in, 0, sizeof(in));
+ memset(out, 0, sizeof(out));
+
+ MLX5_SET(query_flow_counter_in, in, opcode,
+ MLX5_CMD_OP_QUERY_FLOW_COUNTER);
+ MLX5_SET(query_flow_counter_in, in, op_mod, 0);
+ MLX5_SET(query_flow_counter_in, in, flow_counter_id, id);
+
+ err = mlx5_cmd_exec_check_status(dev, in, sizeof(in), out, sizeof(out));
+ if (err)
+ return err;
+
+ stats = MLX5_ADDR_OF(query_flow_counter_out, out, flow_statistics);
+ *packets = MLX5_GET64(traffic_counter, stats, packets);
+ *bytes = MLX5_GET64(traffic_counter, stats, octets);
+
+ return 0;
+}
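
The three new counter commands are used as a life cycle: allocate an id, reference it from a flow entry whose action includes MLX5_FLOW_CONTEXT_ACTION_COUNT (per the counter_is_valid() check later in this patch, only together with a drop action), query it periodically and free it. A hedged sketch using only the helpers added here; rule attachment goes through mlx5_add_flow_rule() with a COUNTER destination and is omitted, and the example function itself is hypothetical.

/* Assumes fs_cmd.h is included and "dev" is a valid mlx5_core_dev. */
static int example_counter_cycle(struct mlx5_core_dev *dev)
{
	u64 packets, bytes;
	u16 counter_id;
	int err;

	err = mlx5_cmd_fc_alloc(dev, &counter_id);
	if (err)
		return err;

	/* ... attach counter_id to a DROP | COUNT rule, let traffic flow ... */

	err = mlx5_cmd_fc_query(dev, counter_id, &packets, &bytes);
	if (!err)
		pr_debug("flow counter %u: %llu packets, %llu bytes\n",
			 counter_id, (unsigned long long)packets,
			 (unsigned long long)bytes);

	return mlx5_cmd_fc_free(dev, counter_id);
}
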
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.h
index 9814d4784803..fc4f7b83fe0a 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_cmd.h
@@ -34,6 +34,7 @@
#define _MLX5_FS_CMD_
int mlx5_cmd_create_flow_table(struct mlx5_core_dev *dev,
+ u16 vport,
enum fs_flow_table_type type, unsigned int level,
unsigned int log_size, struct mlx5_flow_table
*next_ft, unsigned int *table_id);
@@ -61,6 +62,7 @@ int mlx5_cmd_create_fte(struct mlx5_core_dev *dev,
int mlx5_cmd_update_fte(struct mlx5_core_dev *dev,
struct mlx5_flow_table *ft,
unsigned group_id,
+ int modify_mask,
struct fs_fte *fte);
int mlx5_cmd_delete_fte(struct mlx5_core_dev *dev,
@@ -69,4 +71,9 @@ int mlx5_cmd_delete_fte(struct mlx5_core_dev *dev,
int mlx5_cmd_update_root_ft(struct mlx5_core_dev *dev,
struct mlx5_flow_table *ft);
+
+int mlx5_cmd_fc_alloc(struct mlx5_core_dev *dev, u16 *id);
+int mlx5_cmd_fc_free(struct mlx5_core_dev *dev, u16 id);
+int mlx5_cmd_fc_query(struct mlx5_core_dev *dev, u16 id,
+ u64 *packets, u64 *bytes);
#endif
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
index 89cce97d46c6..8b5f0b2c0d5c 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.c
@@ -40,18 +40,18 @@
#define INIT_TREE_NODE_ARRAY_SIZE(...) (sizeof((struct init_tree_node[]){__VA_ARGS__}) /\
sizeof(struct init_tree_node))
-#define ADD_PRIO(num_prios_val, min_level_val, max_ft_val, caps_val,\
+#define ADD_PRIO(num_prios_val, min_level_val, num_levels_val, caps_val,\
...) {.type = FS_TYPE_PRIO,\
.min_ft_level = min_level_val,\
- .max_ft = max_ft_val,\
+ .num_levels = num_levels_val,\
.num_leaf_prios = num_prios_val,\
.caps = caps_val,\
.children = (struct init_tree_node[]) {__VA_ARGS__},\
.ar_size = INIT_TREE_NODE_ARRAY_SIZE(__VA_ARGS__) \
}
-#define ADD_MULTIPLE_PRIO(num_prios_val, max_ft_val, ...)\
- ADD_PRIO(num_prios_val, 0, max_ft_val, {},\
+#define ADD_MULTIPLE_PRIO(num_prios_val, num_levels_val, ...)\
+ ADD_PRIO(num_prios_val, 0, num_levels_val, {},\
__VA_ARGS__)\
#define ADD_NS(...) {.type = FS_TYPE_NAMESPACE,\
@@ -67,17 +67,20 @@
#define FS_REQUIRED_CAPS(...) {.arr_sz = INIT_CAPS_ARRAY_SIZE(__VA_ARGS__), \
.caps = (long[]) {__VA_ARGS__} }
-#define LEFTOVERS_MAX_FT 1
+#define LEFTOVERS_NUM_LEVELS 1
#define LEFTOVERS_NUM_PRIOS 1
-#define BY_PASS_PRIO_MAX_FT 1
-#define BY_PASS_MIN_LEVEL (KENREL_MIN_LEVEL + MLX5_BY_PASS_NUM_PRIOS +\
- LEFTOVERS_MAX_FT)
-#define KERNEL_MAX_FT 3
-#define KERNEL_NUM_PRIOS 2
-#define KENREL_MIN_LEVEL 2
+#define BY_PASS_PRIO_NUM_LEVELS 1
+#define BY_PASS_MIN_LEVEL (KERNEL_MIN_LEVEL + MLX5_BY_PASS_NUM_PRIOS +\
+ LEFTOVERS_NUM_PRIOS)
-#define ANCHOR_MAX_FT 1
+/* Vlan, mac, ttc, aRFS */
+#define KERNEL_NIC_PRIO_NUM_LEVELS 4
+#define KERNEL_NIC_NUM_PRIOS 1
+/* One more level for tc */
+#define KERNEL_MIN_LEVEL (KERNEL_NIC_PRIO_NUM_LEVELS + 1)
+
+#define ANCHOR_NUM_LEVELS 1
#define ANCHOR_NUM_PRIOS 1
#define ANCHOR_MIN_LEVEL (BY_PASS_MIN_LEVEL + 1)
struct node_caps {
@@ -92,7 +95,7 @@ static struct init_tree_node {
int min_ft_level;
int num_leaf_prios;
int prio;
- int max_ft;
+ int num_levels;
} root_fs = {
.type = FS_TYPE_NAMESPACE,
.ar_size = 4,
@@ -102,17 +105,20 @@ static struct init_tree_node {
FS_CAP(flow_table_properties_nic_receive.modify_root),
FS_CAP(flow_table_properties_nic_receive.identified_miss_table_mode),
FS_CAP(flow_table_properties_nic_receive.flow_table_modify)),
- ADD_NS(ADD_MULTIPLE_PRIO(MLX5_BY_PASS_NUM_PRIOS, BY_PASS_PRIO_MAX_FT))),
- ADD_PRIO(0, KENREL_MIN_LEVEL, 0, {},
- ADD_NS(ADD_MULTIPLE_PRIO(KERNEL_NUM_PRIOS, KERNEL_MAX_FT))),
+ ADD_NS(ADD_MULTIPLE_PRIO(MLX5_BY_PASS_NUM_PRIOS,
+ BY_PASS_PRIO_NUM_LEVELS))),
+ ADD_PRIO(0, KERNEL_MIN_LEVEL, 0, {},
+ ADD_NS(ADD_MULTIPLE_PRIO(1, 1),
+ ADD_MULTIPLE_PRIO(KERNEL_NIC_NUM_PRIOS,
+ KERNEL_NIC_PRIO_NUM_LEVELS))),
ADD_PRIO(0, BY_PASS_MIN_LEVEL, 0,
FS_REQUIRED_CAPS(FS_CAP(flow_table_properties_nic_receive.flow_modify_en),
FS_CAP(flow_table_properties_nic_receive.modify_root),
FS_CAP(flow_table_properties_nic_receive.identified_miss_table_mode),
FS_CAP(flow_table_properties_nic_receive.flow_table_modify)),
- ADD_NS(ADD_MULTIPLE_PRIO(LEFTOVERS_NUM_PRIOS, LEFTOVERS_MAX_FT))),
+ ADD_NS(ADD_MULTIPLE_PRIO(LEFTOVERS_NUM_PRIOS, LEFTOVERS_NUM_LEVELS))),
ADD_PRIO(0, ANCHOR_MIN_LEVEL, 0, {},
- ADD_NS(ADD_MULTIPLE_PRIO(ANCHOR_NUM_PRIOS, ANCHOR_MAX_FT))),
+ ADD_NS(ADD_MULTIPLE_PRIO(ANCHOR_NUM_PRIOS, ANCHOR_NUM_LEVELS))),
}
};
@@ -222,19 +228,6 @@ static struct fs_prio *find_prio(struct mlx5_flow_namespace *ns,
return NULL;
}
-static unsigned int find_next_free_level(struct fs_prio *prio)
-{
- if (!list_empty(&prio->node.children)) {
- struct mlx5_flow_table *ft;
-
- ft = list_last_entry(&prio->node.children,
- struct mlx5_flow_table,
- node.list);
- return ft->level + 1;
- }
- return prio->start_level;
-}
-
static bool masked_memcmp(void *mask, void *val1, void *val2, size_t size)
{
unsigned int i;
@@ -351,6 +344,7 @@ static void del_rule(struct fs_node *node)
struct mlx5_flow_group *fg;
struct fs_fte *fte;
u32 *match_value;
+ int modify_mask;
struct mlx5_core_dev *dev = get_dev(node);
int match_len = MLX5_ST_SZ_BYTES(fte_match_param);
int err;
@@ -374,8 +368,11 @@ static void del_rule(struct fs_node *node)
}
if ((fte->action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) &&
--fte->dests_size) {
+ modify_mask = BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_DESTINATION_LIST);
err = mlx5_cmd_update_fte(dev, ft,
- fg->id, fte);
+ fg->id,
+ modify_mask,
+ fte);
if (err)
pr_warn("%s can't del rule fg id=%d fte_index=%d\n",
__func__, fg->id, fte->index);
@@ -464,7 +461,7 @@ static struct mlx5_flow_group *alloc_flow_group(u32 *create_fg_in)
return fg;
}
-static struct mlx5_flow_table *alloc_flow_table(int level, int max_fte,
+static struct mlx5_flow_table *alloc_flow_table(int level, u16 vport, int max_fte,
enum fs_flow_table_type table_type)
{
struct mlx5_flow_table *ft;
@@ -476,6 +473,7 @@ static struct mlx5_flow_table *alloc_flow_table(int level, int max_fte,
ft->level = level;
ft->node.type = FS_TYPE_FLOW_TABLE;
ft->type = table_type;
+ ft->vport = vport;
ft->max_fte = max_fte;
INIT_LIST_HEAD(&ft->fwd_rules);
mutex_init(&ft->lock);
@@ -615,12 +613,13 @@ static int update_root_ft_create(struct mlx5_flow_table *ft, struct fs_prio
return err;
}
-static int mlx5_modify_rule_destination(struct mlx5_flow_rule *rule,
- struct mlx5_flow_destination *dest)
+int mlx5_modify_rule_destination(struct mlx5_flow_rule *rule,
+ struct mlx5_flow_destination *dest)
{
struct mlx5_flow_table *ft;
struct mlx5_flow_group *fg;
struct fs_fte *fte;
+ int modify_mask = BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_DESTINATION_LIST);
int err = 0;
fs_get_obj(fte, rule->node.parent);
@@ -632,7 +631,9 @@ static int mlx5_modify_rule_destination(struct mlx5_flow_rule *rule,
memcpy(&rule->dest_attr, dest, sizeof(*dest));
err = mlx5_cmd_update_fte(get_dev(&ft->node),
- ft, fg->id, fte);
+ ft, fg->id,
+ modify_mask,
+ fte);
unlock_ref_node(&fte->node);
return err;
@@ -693,9 +694,23 @@ static int connect_flow_table(struct mlx5_core_dev *dev, struct mlx5_flow_table
return err;
}
-struct mlx5_flow_table *mlx5_create_flow_table(struct mlx5_flow_namespace *ns,
- int prio,
- int max_fte)
+static void list_add_flow_table(struct mlx5_flow_table *ft,
+ struct fs_prio *prio)
+{
+ struct list_head *prev = &prio->node.children;
+ struct mlx5_flow_table *iter;
+
+ fs_for_each_ft(iter, prio) {
+ if (iter->level > ft->level)
+ break;
+ prev = &iter->node.list;
+ }
+ list_add(&ft->node.list, prev);
+}
+
+static struct mlx5_flow_table *__mlx5_create_flow_table(struct mlx5_flow_namespace *ns,
+ u16 vport, int prio,
+ int max_fte, u32 level)
{
struct mlx5_flow_table *next_ft = NULL;
struct mlx5_flow_table *ft;
@@ -716,12 +731,16 @@ struct mlx5_flow_table *mlx5_create_flow_table(struct mlx5_flow_namespace *ns,
err = -EINVAL;
goto unlock_root;
}
- if (fs_prio->num_ft == fs_prio->max_ft) {
+ if (level >= fs_prio->num_levels) {
err = -ENOSPC;
goto unlock_root;
}
-
- ft = alloc_flow_table(find_next_free_level(fs_prio),
+ /* The requested level is relative to the
+ * priority's level range.
+ */
+ level += fs_prio->start_level;
+ ft = alloc_flow_table(level,
+ vport,
roundup_pow_of_two(max_fte),
root->table_type);
if (!ft) {
@@ -732,7 +751,7 @@ struct mlx5_flow_table *mlx5_create_flow_table(struct mlx5_flow_namespace *ns,
tree_init_node(&ft->node, 1, del_flow_table);
log_table_sz = ilog2(ft->max_fte);
next_ft = find_next_chained_ft(fs_prio);
- err = mlx5_cmd_create_flow_table(root->dev, ft->type, ft->level,
+ err = mlx5_cmd_create_flow_table(root->dev, ft->vport, ft->type, ft->level,
log_table_sz, next_ft, &ft->id);
if (err)
goto free_ft;
@@ -742,7 +761,7 @@ struct mlx5_flow_table *mlx5_create_flow_table(struct mlx5_flow_namespace *ns,
goto destroy_ft;
lock_ref_node(&fs_prio->node);
tree_add_node(&ft->node, &fs_prio->node);
- list_add_tail(&ft->node.list, &fs_prio->node.children);
+ list_add_flow_table(ft, fs_prio);
fs_prio->num_ft++;
unlock_ref_node(&fs_prio->node);
mutex_unlock(&root->chain_lock);
@@ -756,17 +775,32 @@ unlock_root:
return ERR_PTR(err);
}
+struct mlx5_flow_table *mlx5_create_flow_table(struct mlx5_flow_namespace *ns,
+ int prio, int max_fte,
+ u32 level)
+{
+ return __mlx5_create_flow_table(ns, 0, prio, max_fte, level);
+}
+
+struct mlx5_flow_table *mlx5_create_vport_flow_table(struct mlx5_flow_namespace *ns,
+ int prio, int max_fte,
+ u32 level, u16 vport)
+{
+ return __mlx5_create_flow_table(ns, vport, prio, max_fte, level);
+}
+
struct mlx5_flow_table *mlx5_create_auto_grouped_flow_table(struct mlx5_flow_namespace *ns,
int prio,
int num_flow_table_entries,
- int max_num_groups)
+ int max_num_groups,
+ u32 level)
{
struct mlx5_flow_table *ft;
if (max_num_groups > num_flow_table_entries)
return ERR_PTR(-EINVAL);
- ft = mlx5_create_flow_table(ns, prio, num_flow_table_entries);
+ ft = mlx5_create_flow_table(ns, prio, num_flow_table_entries, level);
if (IS_ERR(ft))
return ft;
@@ -850,6 +884,7 @@ static struct mlx5_flow_rule *add_rule_fte(struct fs_fte *fte,
{
struct mlx5_flow_table *ft;
struct mlx5_flow_rule *rule;
+ int modify_mask = 0;
int err;
rule = alloc_rule(dest);
@@ -865,14 +900,20 @@ static struct mlx5_flow_rule *add_rule_fte(struct fs_fte *fte,
list_add(&rule->node.list, &fte->node.children);
else
list_add_tail(&rule->node.list, &fte->node.children);
- if (dest)
+ if (dest) {
fte->dests_size++;
+
+ modify_mask |= dest->type == MLX5_FLOW_DESTINATION_TYPE_COUNTER ?
+ BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_FLOW_COUNTERS) :
+ BIT(MLX5_SET_FTE_MODIFY_ENABLE_MASK_DESTINATION_LIST);
+ }
+
if (fte->dests_size == 1 || !dest)
err = mlx5_cmd_create_fte(get_dev(&ft->node),
ft, fg->id, fte);
else
err = mlx5_cmd_update_fte(get_dev(&ft->node),
- ft, fg->id, fte);
+ ft, fg->id, modify_mask, fte);
if (err)
goto free_rule;
@@ -1065,6 +1106,50 @@ unlock_fg:
return rule;
}
+struct mlx5_fc *mlx5_flow_rule_counter(struct mlx5_flow_rule *rule)
+{
+ struct mlx5_flow_rule *dst;
+ struct fs_fte *fte;
+
+ fs_get_obj(fte, rule->node.parent);
+
+ fs_for_each_dst(dst, fte) {
+ if (dst->dest_attr.type == MLX5_FLOW_DESTINATION_TYPE_COUNTER)
+ return dst->dest_attr.counter;
+ }
+
+ return NULL;
+}
+
+static bool counter_is_valid(struct mlx5_fc *counter, u32 action)
+{
+ if (!(action & MLX5_FLOW_CONTEXT_ACTION_COUNT))
+ return !counter;
+
+ if (!counter)
+ return false;
+
+ /* Hardware supports counters for a drop action only */
+ return action == (MLX5_FLOW_CONTEXT_ACTION_DROP | MLX5_FLOW_CONTEXT_ACTION_COUNT);
+}
+
+static bool dest_is_valid(struct mlx5_flow_destination *dest,
+ u32 action,
+ struct mlx5_flow_table *ft)
+{
+ if (dest && (dest->type == MLX5_FLOW_DESTINATION_TYPE_COUNTER))
+ return counter_is_valid(dest->counter, action);
+
+ if (!(action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST))
+ return true;
+
+ if (!dest || ((dest->type ==
+ MLX5_FLOW_DESTINATION_TYPE_FLOW_TABLE) &&
+ (dest->ft->level <= ft->level)))
+ return false;
+ return true;
+}
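
counter_is_valid() pins down which action bits may accompany a COUNTER destination: counting is drop-only in this patch, so DROP | COUNT with a real counter is the only accepted combination, and a COUNT action without a counter (or a counter without COUNT) is rejected. An illustrative truth table, not driver code:

/* counter_is_valid(counter, action) under the check above:
 *
 *   counter  action           -> result
 *   NULL     DROP                true   (no COUNT bit, no counter)
 *   valid    DROP                false  (counter without COUNT bit)
 *   NULL     DROP | COUNT        false  (COUNT bit without a counter)
 *   valid    DROP | COUNT        true   (the only supported combination)
 *   valid    FWD_DEST | COUNT    false  (counting is drop-only here)
 */
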
+
static struct mlx5_flow_rule *
_mlx5_add_flow_rule(struct mlx5_flow_table *ft,
u8 match_criteria_enable,
@@ -1077,7 +1162,7 @@ _mlx5_add_flow_rule(struct mlx5_flow_table *ft,
struct mlx5_flow_group *g;
struct mlx5_flow_rule *rule;
- if ((action & MLX5_FLOW_CONTEXT_ACTION_FWD_DEST) && !dest)
+ if (!dest_is_valid(dest, action, ft))
return ERR_PTR(-EINVAL);
nested_lock_ref_node(&ft->node, FS_MUTEX_GRANDPARENT);
@@ -1294,6 +1379,16 @@ struct mlx5_flow_namespace *mlx5_get_flow_namespace(struct mlx5_core_dev *dev,
return &dev->priv.fdb_root_ns->ns;
else
return NULL;
+ case MLX5_FLOW_NAMESPACE_ESW_EGRESS:
+ if (dev->priv.esw_egress_root_ns)
+ return &dev->priv.esw_egress_root_ns->ns;
+ else
+ return NULL;
+ case MLX5_FLOW_NAMESPACE_ESW_INGRESS:
+ if (dev->priv.esw_ingress_root_ns)
+ return &dev->priv.esw_ingress_root_ns->ns;
+ else
+ return NULL;
default:
return NULL;
}
@@ -1311,7 +1406,7 @@ struct mlx5_flow_namespace *mlx5_get_flow_namespace(struct mlx5_core_dev *dev,
EXPORT_SYMBOL(mlx5_get_flow_namespace);
static struct fs_prio *fs_create_prio(struct mlx5_flow_namespace *ns,
- unsigned prio, int max_ft)
+ unsigned int prio, int num_levels)
{
struct fs_prio *fs_prio;
@@ -1322,7 +1417,7 @@ static struct fs_prio *fs_create_prio(struct mlx5_flow_namespace *ns,
fs_prio->node.type = FS_TYPE_PRIO;
tree_init_node(&fs_prio->node, 1, NULL);
tree_add_node(&fs_prio->node, &ns->node);
- fs_prio->max_ft = max_ft;
+ fs_prio->num_levels = num_levels;
fs_prio->prio = prio;
list_add_tail(&fs_prio->node.list, &ns->node.children);
@@ -1353,14 +1448,14 @@ static struct mlx5_flow_namespace *fs_create_namespace(struct fs_prio *prio)
return ns;
}
-static int create_leaf_prios(struct mlx5_flow_namespace *ns, struct init_tree_node
- *prio_metadata)
+static int create_leaf_prios(struct mlx5_flow_namespace *ns, int prio,
+ struct init_tree_node *prio_metadata)
{
struct fs_prio *fs_prio;
int i;
for (i = 0; i < prio_metadata->num_leaf_prios; i++) {
- fs_prio = fs_create_prio(ns, i, prio_metadata->max_ft);
+ fs_prio = fs_create_prio(ns, prio++, prio_metadata->num_levels);
if (IS_ERR(fs_prio))
return PTR_ERR(fs_prio);
}
@@ -1387,7 +1482,7 @@ static int init_root_tree_recursive(struct mlx5_core_dev *dev,
struct init_tree_node *init_node,
struct fs_node *fs_parent_node,
struct init_tree_node *init_parent_node,
- int index)
+ int prio)
{
int max_ft_level = MLX5_CAP_FLOWTABLE(dev,
flow_table_properties_nic_receive.
@@ -1405,8 +1500,8 @@ static int init_root_tree_recursive(struct mlx5_core_dev *dev,
fs_get_obj(fs_ns, fs_parent_node);
if (init_node->num_leaf_prios)
- return create_leaf_prios(fs_ns, init_node);
- fs_prio = fs_create_prio(fs_ns, index, init_node->max_ft);
+ return create_leaf_prios(fs_ns, prio, init_node);
+ fs_prio = fs_create_prio(fs_ns, prio, init_node->num_levels);
if (IS_ERR(fs_prio))
return PTR_ERR(fs_prio);
base = &fs_prio->node;
@@ -1419,11 +1514,16 @@ static int init_root_tree_recursive(struct mlx5_core_dev *dev,
} else {
return -EINVAL;
}
+ prio = 0;
for (i = 0; i < init_node->ar_size; i++) {
err = init_root_tree_recursive(dev, &init_node->children[i],
- base, init_node, i);
+ base, init_node, prio);
if (err)
return err;
+ if (init_node->children[i].type == FS_TYPE_PRIO &&
+ init_node->children[i].num_leaf_prios) {
+ prio += init_node->children[i].num_leaf_prios;
+ }
}
return 0;
@@ -1479,9 +1579,9 @@ static int set_prio_attrs_in_ns(struct mlx5_flow_namespace *ns, int acc_level)
struct fs_prio *prio;
fs_for_each_prio(prio, ns) {
- /* This updates prio start_level and max_ft */
+ /* This updates prio start_level and num_levels */
set_prio_attrs_in_prio(prio, acc_level);
- acc_level += prio->max_ft;
+ acc_level += prio->num_levels;
}
return acc_level;
}
@@ -1493,11 +1593,11 @@ static void set_prio_attrs_in_prio(struct fs_prio *prio, int acc_level)
prio->start_level = acc_level;
fs_for_each_ns(ns, prio)
- /* This updates start_level and max_ft of ns's priority descendants */
+ /* This updates start_level and num_levels of ns's priority descendants */
acc_level_ns = set_prio_attrs_in_ns(ns, acc_level);
- if (!prio->max_ft)
- prio->max_ft = acc_level_ns - prio->start_level;
- WARN_ON(prio->max_ft < acc_level_ns - prio->start_level);
+ if (!prio->num_levels)
+ prio->num_levels = acc_level_ns - prio->start_level;
+ WARN_ON(prio->num_levels < acc_level_ns - prio->start_level);
}
static void set_prio_attrs(struct mlx5_flow_root_namespace *root_ns)
@@ -1508,12 +1608,13 @@ static void set_prio_attrs(struct mlx5_flow_root_namespace *root_ns)
fs_for_each_prio(prio, ns) {
set_prio_attrs_in_prio(prio, start_level);
- start_level += prio->max_ft;
+ start_level += prio->num_levels;
}
}
#define ANCHOR_PRIO 0
#define ANCHOR_SIZE 1
+#define ANCHOR_LEVEL 0
static int create_anchor_flow_table(struct mlx5_core_dev
*dev)
{
@@ -1523,7 +1624,7 @@ static int create_anchor_flow_table(struct mlx5_core_dev
ns = mlx5_get_flow_namespace(dev, MLX5_FLOW_NAMESPACE_ANCHOR);
if (!ns)
return -EINVAL;
- ft = mlx5_create_flow_table(ns, ANCHOR_PRIO, ANCHOR_SIZE);
+ ft = mlx5_create_flow_table(ns, ANCHOR_PRIO, ANCHOR_SIZE, ANCHOR_LEVEL);
if (IS_ERR(ft)) {
mlx5_core_err(dev, "Failed to create last anchor flow table");
return PTR_ERR(ft);
@@ -1668,6 +1769,9 @@ void mlx5_cleanup_fs(struct mlx5_core_dev *dev)
{
cleanup_root_ns(dev);
cleanup_single_prio_root_ns(dev, dev->priv.fdb_root_ns);
+ cleanup_single_prio_root_ns(dev, dev->priv.esw_egress_root_ns);
+ cleanup_single_prio_root_ns(dev, dev->priv.esw_ingress_root_ns);
+ mlx5_cleanup_fc_stats(dev);
}
static int init_fdb_root_ns(struct mlx5_core_dev *dev)
@@ -1688,20 +1792,69 @@ static int init_fdb_root_ns(struct mlx5_core_dev *dev)
}
}
+static int init_egress_acl_root_ns(struct mlx5_core_dev *dev)
+{
+ struct fs_prio *prio;
+
+ dev->priv.esw_egress_root_ns = create_root_ns(dev, FS_FT_ESW_EGRESS_ACL);
+ if (!dev->priv.esw_egress_root_ns)
+ return -ENOMEM;
+
+	/* create 1 prio */
+ prio = fs_create_prio(&dev->priv.esw_egress_root_ns->ns, 0, MLX5_TOTAL_VPORTS(dev));
+ if (IS_ERR(prio))
+ return PTR_ERR(prio);
+ else
+ return 0;
+}
+
+static int init_ingress_acl_root_ns(struct mlx5_core_dev *dev)
+{
+ struct fs_prio *prio;
+
+ dev->priv.esw_ingress_root_ns = create_root_ns(dev, FS_FT_ESW_INGRESS_ACL);
+ if (!dev->priv.esw_ingress_root_ns)
+ return -ENOMEM;
+
+	/* create 1 prio */
+ prio = fs_create_prio(&dev->priv.esw_ingress_root_ns->ns, 0, MLX5_TOTAL_VPORTS(dev));
+ if (IS_ERR(prio))
+ return PTR_ERR(prio);
+ else
+ return 0;
+}
+
int mlx5_init_fs(struct mlx5_core_dev *dev)
{
int err = 0;
+ err = mlx5_init_fc_stats(dev);
+ if (err)
+ return err;
+
if (MLX5_CAP_GEN(dev, nic_flow_table)) {
err = init_root_ns(dev);
if (err)
- return err;
+ goto err;
}
if (MLX5_CAP_GEN(dev, eswitch_flow_table)) {
err = init_fdb_root_ns(dev);
if (err)
- cleanup_root_ns(dev);
+ goto err;
+ }
+ if (MLX5_CAP_ESW_EGRESS_ACL(dev, ft_support)) {
+ err = init_egress_acl_root_ns(dev);
+ if (err)
+ goto err;
+ }
+ if (MLX5_CAP_ESW_INGRESS_ACL(dev, ft_support)) {
+ err = init_ingress_acl_root_ns(dev);
+ if (err)
+ goto err;
}
+ return 0;
+err:
+ mlx5_cleanup_fs(dev);
return err;
}
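
The checks added above mean a counter destination is only accepted together with an action of exactly DROP | COUNT. A minimal caller sketch, assuming the mlx5_add_flow_rule() signature of this kernel generation; the helper below and its match buffers are illustrative only and not part of this patch:

/* Hypothetical: install a match-all drop rule that also counts hits. */
static struct mlx5_flow_rule *
example_add_drop_count_rule(struct mlx5_core_dev *dev,
			    struct mlx5_flow_table *ft,
			    struct mlx5_fc **counterp)
{
	u32 action = MLX5_FLOW_CONTEXT_ACTION_DROP |
		     MLX5_FLOW_CONTEXT_ACTION_COUNT;
	struct mlx5_flow_destination dest;
	struct mlx5_flow_rule *rule;
	struct mlx5_fc *counter;
	u32 *match_c, *match_v;

	match_c = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
	match_v = mlx5_vzalloc(MLX5_ST_SZ_BYTES(fte_match_param));
	if (!match_c || !match_v) {
		rule = ERR_PTR(-ENOMEM);
		goto out;
	}

	counter = mlx5_fc_create(dev, true);	/* aging: polled by fs_counters */
	if (IS_ERR(counter)) {
		rule = ERR_CAST(counter);
		goto out;
	}

	dest.type = MLX5_FLOW_DESTINATION_TYPE_COUNTER;
	dest.counter = counter;

	/* zeroed criteria/value => match all */
	rule = mlx5_add_flow_rule(ft, 0, match_c, match_v, action,
				  MLX5_FS_DEFAULT_FLOW_TAG, &dest);
	if (IS_ERR(rule))
		mlx5_fc_destroy(dev, counter);
	else
		*counterp = counter;
out:
	kvfree(match_c);
	kvfree(match_v);
	return rule;
}
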
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
index f37a6248a27b..aa41a7314691 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_core.h
@@ -45,8 +45,10 @@ enum fs_node_type {
};
enum fs_flow_table_type {
- FS_FT_NIC_RX = 0x0,
- FS_FT_FDB = 0X4,
+ FS_FT_NIC_RX = 0x0,
+ FS_FT_ESW_EGRESS_ACL = 0x2,
+ FS_FT_ESW_INGRESS_ACL = 0x3,
+ FS_FT_FDB = 0X4,
};
enum fs_fte_status {
@@ -79,6 +81,7 @@ struct mlx5_flow_rule {
struct mlx5_flow_table {
struct fs_node node;
u32 id;
+ u16 vport;
unsigned int max_fte;
unsigned int level;
enum fs_flow_table_type type;
@@ -93,6 +96,28 @@ struct mlx5_flow_table {
struct list_head fwd_rules;
};
+struct mlx5_fc_cache {
+ u64 packets;
+ u64 bytes;
+ u64 lastuse;
+};
+
+struct mlx5_fc {
+ struct list_head list;
+
+ /* last{packets,bytes} members are used when calculating the delta since
+ * last reading
+ */
+ u64 lastpackets;
+ u64 lastbytes;
+
+ u16 id;
+ bool deleted;
+ bool aging;
+
+ struct mlx5_fc_cache cache ____cacheline_aligned_in_smp;
+};
+
/* Type of children is mlx5_flow_rule */
struct fs_fte {
struct fs_node node;
@@ -102,12 +127,13 @@ struct fs_fte {
u32 index;
u32 action;
enum fs_fte_status status;
+ struct mlx5_fc *counter;
};
/* Type of children is mlx5_flow_table/namespace */
struct fs_prio {
struct fs_node node;
- unsigned int max_ft;
+ unsigned int num_levels;
unsigned int start_level;
unsigned int prio;
unsigned int num_ft;
@@ -143,6 +169,9 @@ struct mlx5_flow_root_namespace {
struct mutex chain_lock;
};
+int mlx5_init_fc_stats(struct mlx5_core_dev *dev);
+void mlx5_cleanup_fc_stats(struct mlx5_core_dev *dev);
+
int mlx5_init_fs(struct mlx5_core_dev *dev);
void mlx5_cleanup_fs(struct mlx5_core_dev *dev);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
new file mode 100644
index 000000000000..164dc37fda72
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlx5/core/fs_counters.c
@@ -0,0 +1,226 @@
+/*
+ * Copyright (c) 2016, Mellanox Technologies. All rights reserved.
+ *
+ * This software is available to you under a choice of one of two
+ * licenses. You may choose to be licensed under the terms of the GNU
+ * General Public License (GPL) Version 2, available from the file
+ * COPYING in the main directory of this source tree, or the
+ * OpenIB.org BSD license below:
+ *
+ * Redistribution and use in source and binary forms, with or
+ * without modification, are permitted provided that the following
+ * conditions are met:
+ *
+ * - Redistributions of source code must retain the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer.
+ *
+ * - Redistributions in binary form must reproduce the above
+ * copyright notice, this list of conditions and the following
+ * disclaimer in the documentation and/or other materials
+ * provided with the distribution.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
+ * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
+ * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
+ * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
+ * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ * SOFTWARE.
+ */
+
+#include <linux/mlx5/driver.h>
+#include <linux/mlx5/fs.h>
+#include "mlx5_core.h"
+#include "fs_core.h"
+#include "fs_cmd.h"
+
+#define MLX5_FC_STATS_PERIOD msecs_to_jiffies(1000)
+
+/* locking scheme:
+ *
+ * It is the responsibility of the user to prevent concurrent calls or bad
+ * ordering to mlx5_fc_create(), mlx5_fc_destroy() and accessing a reference
+ * to struct mlx5_fc.
+ * e.g. en_tc.c is protected by the RTNL lock of its caller, and will never call a
+ * dump (access to struct mlx5_fc) after a counter is destroyed.
+ *
+ * access to counter list:
+ * - create (user context)
+ * - mlx5_fc_create() only adds to an addlist to be used by
+ *     mlx5_fc_stats_work(). addlist is protected by a spinlock.
+ *   - spawn thread to do the actual add
+ *
+ * - destroy (user context)
+ * - mark a counter as deleted
+ * - spawn thread to do the actual del
+ *
+ * - dump (user context)
+ * user should not call dump after destroy
+ *
+ * - query (single thread workqueue context)
+ * destroy/dump - no conflict (see destroy)
+ * query/dump - packets and bytes might be inconsistent (since update is not
+ * atomic)
+ * query/create - no conflict (see create)
+ * since every create/destroy spawns the work, only after the necessary time has
+ * elapsed, the thread will actually query the hardware.
+ */
+
+static void mlx5_fc_stats_work(struct work_struct *work)
+{
+ struct mlx5_core_dev *dev = container_of(work, struct mlx5_core_dev,
+ priv.fc_stats.work.work);
+ struct mlx5_fc_stats *fc_stats = &dev->priv.fc_stats;
+ unsigned long now = jiffies;
+ struct mlx5_fc *counter;
+ struct mlx5_fc *tmp;
+ int err = 0;
+
+ spin_lock(&fc_stats->addlist_lock);
+
+ list_splice_tail_init(&fc_stats->addlist, &fc_stats->list);
+
+ if (!list_empty(&fc_stats->list))
+ queue_delayed_work(fc_stats->wq, &fc_stats->work, MLX5_FC_STATS_PERIOD);
+
+ spin_unlock(&fc_stats->addlist_lock);
+
+ list_for_each_entry_safe(counter, tmp, &fc_stats->list, list) {
+ struct mlx5_fc_cache *c = &counter->cache;
+ u64 packets;
+ u64 bytes;
+
+ if (counter->deleted) {
+ list_del(&counter->list);
+
+ mlx5_cmd_fc_free(dev, counter->id);
+
+ kfree(counter);
+ continue;
+ }
+
+ if (time_before(now, fc_stats->next_query))
+ continue;
+
+ err = mlx5_cmd_fc_query(dev, counter->id, &packets, &bytes);
+ if (err) {
+ pr_err("Error querying stats for counter id %d\n",
+ counter->id);
+ continue;
+ }
+
+ if (packets == c->packets)
+ continue;
+
+ c->lastuse = jiffies;
+ c->packets = packets;
+ c->bytes = bytes;
+ }
+
+ if (time_after_eq(now, fc_stats->next_query))
+ fc_stats->next_query = now + MLX5_FC_STATS_PERIOD;
+}
+
+struct mlx5_fc *mlx5_fc_create(struct mlx5_core_dev *dev, bool aging)
+{
+ struct mlx5_fc_stats *fc_stats = &dev->priv.fc_stats;
+ struct mlx5_fc *counter;
+ int err;
+
+ counter = kzalloc(sizeof(*counter), GFP_KERNEL);
+ if (!counter)
+ return ERR_PTR(-ENOMEM);
+
+ err = mlx5_cmd_fc_alloc(dev, &counter->id);
+ if (err)
+ goto err_out;
+
+ if (aging) {
+ counter->aging = true;
+
+ spin_lock(&fc_stats->addlist_lock);
+ list_add(&counter->list, &fc_stats->addlist);
+ spin_unlock(&fc_stats->addlist_lock);
+
+ mod_delayed_work(fc_stats->wq, &fc_stats->work, 0);
+ }
+
+ return counter;
+
+err_out:
+ kfree(counter);
+
+ return ERR_PTR(err);
+}
+
+void mlx5_fc_destroy(struct mlx5_core_dev *dev, struct mlx5_fc *counter)
+{
+ struct mlx5_fc_stats *fc_stats = &dev->priv.fc_stats;
+
+ if (!counter)
+ return;
+
+ if (counter->aging) {
+ counter->deleted = true;
+ mod_delayed_work(fc_stats->wq, &fc_stats->work, 0);
+ return;
+ }
+
+ mlx5_cmd_fc_free(dev, counter->id);
+ kfree(counter);
+}
+
+int mlx5_init_fc_stats(struct mlx5_core_dev *dev)
+{
+ struct mlx5_fc_stats *fc_stats = &dev->priv.fc_stats;
+
+ INIT_LIST_HEAD(&fc_stats->list);
+ INIT_LIST_HEAD(&fc_stats->addlist);
+ spin_lock_init(&fc_stats->addlist_lock);
+
+ fc_stats->wq = create_singlethread_workqueue("mlx5_fc");
+ if (!fc_stats->wq)
+ return -ENOMEM;
+
+ INIT_DELAYED_WORK(&fc_stats->work, mlx5_fc_stats_work);
+
+ return 0;
+}
+
+void mlx5_cleanup_fc_stats(struct mlx5_core_dev *dev)
+{
+ struct mlx5_fc_stats *fc_stats = &dev->priv.fc_stats;
+ struct mlx5_fc *counter;
+ struct mlx5_fc *tmp;
+
+ cancel_delayed_work_sync(&dev->priv.fc_stats.work);
+ destroy_workqueue(dev->priv.fc_stats.wq);
+ dev->priv.fc_stats.wq = NULL;
+
+ list_splice_tail_init(&fc_stats->addlist, &fc_stats->list);
+
+ list_for_each_entry_safe(counter, tmp, &fc_stats->list, list) {
+ list_del(&counter->list);
+
+ mlx5_cmd_fc_free(dev, counter->id);
+
+ kfree(counter);
+ }
+}
+
+void mlx5_fc_query_cached(struct mlx5_fc *counter,
+ u64 *bytes, u64 *packets, u64 *lastuse)
+{
+ struct mlx5_fc_cache c;
+
+ c = counter->cache;
+
+ *bytes = c.bytes - counter->lastbytes;
+ *packets = c.packets - counter->lastpackets;
+ *lastuse = c.lastuse;
+
+ counter->lastbytes = c.bytes;
+ counter->lastpackets = c.packets;
+}
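
A usage sketch of the cached counter API above (hypothetical consumer; the caller must provide the serialization described in the locking comment, e.g. RTNL). mlx5_fc_query_cached() hands back the packet/byte deltas accumulated since the previous call plus the jiffies stamp of the last observed change:

/* Hypothetical dump path for a rule that carries a counter destination. */
static void example_dump_rule_stats(struct mlx5_flow_rule *rule)
{
	u64 bytes, packets, lastuse;
	struct mlx5_fc *counter;

	counter = mlx5_flow_rule_counter(rule);
	if (!counter)
		return;		/* rule was created without a counter */

	mlx5_fc_query_cached(counter, &bytes, &packets, &lastuse);

	/* bytes/packets are deltas since the previous dump; lastuse is a
	 * jiffies value refreshed by mlx5_fc_stats_work() on activity.
	 */
	pr_debug("rule hit %llu packets / %llu bytes, idle %u ms\n",
		 packets, bytes,
		 jiffies_to_msecs(jiffies - (unsigned long) lastuse));
}
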
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/health.c b/drivers/net/ethernet/mellanox/mlx5/core/health.c
index f5deb642d0d6..42d16b9458e4 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/health.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/health.c
@@ -176,11 +176,11 @@ static const char *hsynd_str(u8 synd)
case MLX5_HEALTH_SYNDR_EQ_ERR:
return "EQ error";
case MLX5_HEALTH_SYNDR_EQ_INV:
- return "Invalid EQ refrenced";
+ return "Invalid EQ referenced";
case MLX5_HEALTH_SYNDR_FFSER_ERR:
return "FFSER error";
case MLX5_HEALTH_SYNDR_HIGH_TEMP:
- return "High temprature";
+ return "High temperature";
default:
return "unrecognized error";
}
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/main.c b/drivers/net/ethernet/mellanox/mlx5/core/main.c
index 6892746fd10d..a19b59348dd6 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/main.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/main.c
@@ -48,6 +48,9 @@
#include <linux/kmod.h>
#include <linux/delay.h>
#include <linux/mlx5/mlx5_ifc.h>
+#ifdef CONFIG_RFS_ACCEL
+#include <linux/cpu_rmap.h>
+#endif
#include "mlx5_core.h"
#include "fs_core.h"
#ifdef CONFIG_MLX5_CORE_EN
@@ -660,11 +663,34 @@ int mlx5_vector2eqn(struct mlx5_core_dev *dev, int vector, int *eqn,
}
EXPORT_SYMBOL(mlx5_vector2eqn);
+struct mlx5_eq *mlx5_eqn2eq(struct mlx5_core_dev *dev, int eqn)
+{
+ struct mlx5_eq_table *table = &dev->priv.eq_table;
+ struct mlx5_eq *eq;
+
+ spin_lock(&table->lock);
+ list_for_each_entry(eq, &table->comp_eqs_list, list)
+ if (eq->eqn == eqn) {
+ spin_unlock(&table->lock);
+ return eq;
+ }
+
+ spin_unlock(&table->lock);
+
+ return ERR_PTR(-ENOENT);
+}
+
static void free_comp_eqs(struct mlx5_core_dev *dev)
{
struct mlx5_eq_table *table = &dev->priv.eq_table;
struct mlx5_eq *eq, *n;
+#ifdef CONFIG_RFS_ACCEL
+ if (dev->rmap) {
+ free_irq_cpu_rmap(dev->rmap);
+ dev->rmap = NULL;
+ }
+#endif
spin_lock(&table->lock);
list_for_each_entry_safe(eq, n, &table->comp_eqs_list, list) {
list_del(&eq->list);
@@ -691,6 +717,11 @@ static int alloc_comp_eqs(struct mlx5_core_dev *dev)
INIT_LIST_HEAD(&table->comp_eqs_list);
ncomp_vec = table->num_comp_vectors;
nent = MLX5_COMP_EQ_SIZE;
+#ifdef CONFIG_RFS_ACCEL
+ dev->rmap = alloc_irq_cpu_rmap(ncomp_vec);
+ if (!dev->rmap)
+ return -ENOMEM;
+#endif
for (i = 0; i < ncomp_vec; i++) {
eq = kzalloc(sizeof(*eq), GFP_KERNEL);
if (!eq) {
@@ -698,6 +729,10 @@ static int alloc_comp_eqs(struct mlx5_core_dev *dev)
goto clean;
}
+#ifdef CONFIG_RFS_ACCEL
+ irq_cpu_rmap_add(dev->rmap,
+ dev->priv.msix_arr[i + MLX5_EQ_VEC_COMP_BASE].vector);
+#endif
snprintf(name, MLX5_MAX_IRQ_NAME, "mlx5_comp%d", i);
err = mlx5_create_map_eq(dev, eq,
i + MLX5_EQ_VEC_COMP_BASE, nent, 0,
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
index 0b0b226c789e..2f86ec6fcf25 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/mlx5_core.h
@@ -42,6 +42,8 @@
#define DRIVER_VERSION "3.0-1"
#define DRIVER_RELDATE "January 2015"
+#define MLX5_TOTAL_VPORTS(mdev) (1 + pci_sriov_get_totalvfs(mdev->pdev))
+
extern int mlx5_core_debug_mask;
#define mlx5_core_dbg(__dev, format, ...) \
@@ -100,6 +102,8 @@ int mlx5_core_disable_hca(struct mlx5_core_dev *dev, u16 func_id);
int mlx5_wait_for_vf_pages(struct mlx5_core_dev *dev);
cycle_t mlx5_read_internal_timer(struct mlx5_core_dev *dev);
u32 mlx5_get_msix_vec(struct mlx5_core_dev *dev, int vecidx);
+struct mlx5_eq *mlx5_eqn2eq(struct mlx5_core_dev *dev, int eqn);
+void mlx5_cq_tasklet_cb(unsigned long data);
void mlx5e_init(void);
void mlx5e_cleanup(void);
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/port.c b/drivers/net/ethernet/mellanox/mlx5/core/port.c
index 53cc1e2c693b..3e35611b19c3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/port.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/port.c
@@ -115,6 +115,19 @@ int mlx5_query_port_ptys(struct mlx5_core_dev *dev, u32 *ptys,
}
EXPORT_SYMBOL_GPL(mlx5_query_port_ptys);
+int mlx5_set_port_beacon(struct mlx5_core_dev *dev, u16 beacon_duration)
+{
+ u32 out[MLX5_ST_SZ_DW(mlcr_reg)];
+ u32 in[MLX5_ST_SZ_DW(mlcr_reg)];
+
+ memset(in, 0, sizeof(in));
+ MLX5_SET(mlcr_reg, in, local_port, 1);
+ MLX5_SET(mlcr_reg, in, beacon_duration, beacon_duration);
+
+ return mlx5_core_access_reg(dev, in, sizeof(in), out,
+ sizeof(out), MLX5_REG_MLCR, 0, 1);
+}
+
int mlx5_query_port_proto_cap(struct mlx5_core_dev *dev,
u32 *proto_cap, int proto_mask)
{
@@ -297,6 +310,82 @@ void mlx5_query_port_oper_mtu(struct mlx5_core_dev *dev, u16 *oper_mtu,
}
EXPORT_SYMBOL_GPL(mlx5_query_port_oper_mtu);
+static int mlx5_query_module_num(struct mlx5_core_dev *dev, int *module_num)
+{
+ u32 out[MLX5_ST_SZ_DW(pmlp_reg)];
+ u32 in[MLX5_ST_SZ_DW(pmlp_reg)];
+ int module_mapping;
+ int err;
+
+ memset(in, 0, sizeof(in));
+
+ MLX5_SET(pmlp_reg, in, local_port, 1);
+
+ err = mlx5_core_access_reg(dev, in, sizeof(in), out, sizeof(out),
+ MLX5_REG_PMLP, 0, 0);
+ if (err)
+ return err;
+
+ module_mapping = MLX5_GET(pmlp_reg, out, lane0_module_mapping);
+ *module_num = module_mapping & MLX5_EEPROM_IDENTIFIER_BYTE_MASK;
+
+ return 0;
+}
+
+int mlx5_query_module_eeprom(struct mlx5_core_dev *dev,
+ u16 offset, u16 size, u8 *data)
+{
+ u32 out[MLX5_ST_SZ_DW(mcia_reg)];
+ u32 in[MLX5_ST_SZ_DW(mcia_reg)];
+ int module_num;
+ u16 i2c_addr;
+ int status;
+ int err;
+ void *ptr = MLX5_ADDR_OF(mcia_reg, out, dword_0);
+
+ err = mlx5_query_module_num(dev, &module_num);
+ if (err)
+ return err;
+
+ memset(in, 0, sizeof(in));
+ size = min_t(int, size, MLX5_EEPROM_MAX_BYTES);
+
+ if (offset < MLX5_EEPROM_PAGE_LENGTH &&
+ offset + size > MLX5_EEPROM_PAGE_LENGTH)
+		/* Cross-page read: read only up to offset 256, the end of the low page */
+ size -= offset + size - MLX5_EEPROM_PAGE_LENGTH;
+
+ i2c_addr = MLX5_I2C_ADDR_LOW;
+ if (offset >= MLX5_EEPROM_PAGE_LENGTH) {
+ i2c_addr = MLX5_I2C_ADDR_HIGH;
+ offset -= MLX5_EEPROM_PAGE_LENGTH;
+ }
+
+ MLX5_SET(mcia_reg, in, l, 0);
+ MLX5_SET(mcia_reg, in, module, module_num);
+ MLX5_SET(mcia_reg, in, i2c_device_address, i2c_addr);
+ MLX5_SET(mcia_reg, in, page_number, 0);
+ MLX5_SET(mcia_reg, in, device_address, offset);
+ MLX5_SET(mcia_reg, in, size, size);
+
+ err = mlx5_core_access_reg(dev, in, sizeof(in), out,
+ sizeof(out), MLX5_REG_MCIA, 0, 0);
+ if (err)
+ return err;
+
+ status = MLX5_GET(mcia_reg, out, status);
+ if (status) {
+ mlx5_core_err(dev, "query_mcia_reg failed: status: 0x%x\n",
+ status);
+ return -EIO;
+ }
+
+ memcpy(data, ptr, size);
+
+ return size;
+}
+EXPORT_SYMBOL_GPL(mlx5_query_module_eeprom);
+
static int mlx5_query_port_pvlc(struct mlx5_core_dev *dev, u32 *pvlc,
int pvlc_size, u8 local_port)
{
@@ -607,3 +696,52 @@ int mlx5_query_port_wol(struct mlx5_core_dev *mdev, u8 *wol_mode)
return err;
}
EXPORT_SYMBOL_GPL(mlx5_query_port_wol);
+
+static int mlx5_query_ports_check(struct mlx5_core_dev *mdev, u32 *out,
+ int outlen)
+{
+ u32 in[MLX5_ST_SZ_DW(pcmr_reg)];
+
+ memset(in, 0, sizeof(in));
+ MLX5_SET(pcmr_reg, in, local_port, 1);
+
+ return mlx5_core_access_reg(mdev, in, sizeof(in), out,
+ outlen, MLX5_REG_PCMR, 0, 0);
+}
+
+static int mlx5_set_ports_check(struct mlx5_core_dev *mdev, u32 *in, int inlen)
+{
+ u32 out[MLX5_ST_SZ_DW(pcmr_reg)];
+
+ return mlx5_core_access_reg(mdev, in, inlen, out,
+ sizeof(out), MLX5_REG_PCMR, 0, 1);
+}
+
+int mlx5_set_port_fcs(struct mlx5_core_dev *mdev, u8 enable)
+{
+ u32 in[MLX5_ST_SZ_DW(pcmr_reg)];
+
+ memset(in, 0, sizeof(in));
+ MLX5_SET(pcmr_reg, in, local_port, 1);
+ MLX5_SET(pcmr_reg, in, fcs_chk, enable);
+
+ return mlx5_set_ports_check(mdev, in, sizeof(in));
+}
+
+void mlx5_query_port_fcs(struct mlx5_core_dev *mdev, bool *supported,
+ bool *enabled)
+{
+ u32 out[MLX5_ST_SZ_DW(pcmr_reg)];
+	/* Default values for FW which does not support MLX5_REG_PCMR */
+ *supported = false;
+ *enabled = true;
+
+ if (!MLX5_CAP_GEN(mdev, ports_check))
+ return;
+
+ if (mlx5_query_ports_check(mdev, out, sizeof(out)))
+ return;
+
+ *supported = !!(MLX5_GET(pcmr_reg, out, fcs_cap));
+ *enabled = !!(MLX5_GET(pcmr_reg, out, fcs_chk));
+}
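
Since mlx5_query_module_eeprom() clamps a request at the low/high page boundary and returns the number of bytes it actually read, callers are expected to loop until the requested length is satisfied. A sketch of such a consumer (hypothetical helper, not part of this patch):

/* Hypothetical: read 'len' EEPROM bytes starting at 'offset', retrying
 * because a read crossing the 256-byte page boundary is shortened.
 */
static int example_read_module_eeprom(struct mlx5_core_dev *dev,
				      u16 offset, u16 len, u8 *data)
{
	int done = 0;
	int read;

	while (done < len) {
		read = mlx5_query_module_eeprom(dev, offset + done,
						len - done, data + done);
		if (read < 0)
			return read;	/* access_reg failure or MCIA status */
		if (read == 0)
			break;		/* nothing more to read */
		done += read;
	}
	return done;
}
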
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/qp.c b/drivers/net/ethernet/mellanox/mlx5/core/qp.c
index def289375ecb..b720a274220d 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/qp.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/qp.c
@@ -538,3 +538,71 @@ void mlx5_core_destroy_sq_tracked(struct mlx5_core_dev *dev,
mlx5_core_destroy_sq(dev, sq->qpn);
}
EXPORT_SYMBOL(mlx5_core_destroy_sq_tracked);
+
+int mlx5_core_alloc_q_counter(struct mlx5_core_dev *dev, u16 *counter_id)
+{
+ u32 in[MLX5_ST_SZ_DW(alloc_q_counter_in)];
+ u32 out[MLX5_ST_SZ_DW(alloc_q_counter_out)];
+ int err;
+
+ memset(in, 0, sizeof(in));
+ memset(out, 0, sizeof(out));
+
+ MLX5_SET(alloc_q_counter_in, in, opcode, MLX5_CMD_OP_ALLOC_Q_COUNTER);
+ err = mlx5_cmd_exec_check_status(dev, in, sizeof(in), out, sizeof(out));
+ if (!err)
+ *counter_id = MLX5_GET(alloc_q_counter_out, out,
+ counter_set_id);
+ return err;
+}
+EXPORT_SYMBOL_GPL(mlx5_core_alloc_q_counter);
+
+int mlx5_core_dealloc_q_counter(struct mlx5_core_dev *dev, u16 counter_id)
+{
+ u32 in[MLX5_ST_SZ_DW(dealloc_q_counter_in)];
+ u32 out[MLX5_ST_SZ_DW(dealloc_q_counter_out)];
+
+ memset(in, 0, sizeof(in));
+ memset(out, 0, sizeof(out));
+
+ MLX5_SET(dealloc_q_counter_in, in, opcode,
+ MLX5_CMD_OP_DEALLOC_Q_COUNTER);
+ MLX5_SET(dealloc_q_counter_in, in, counter_set_id, counter_id);
+ return mlx5_cmd_exec_check_status(dev, in, sizeof(in), out,
+ sizeof(out));
+}
+EXPORT_SYMBOL_GPL(mlx5_core_dealloc_q_counter);
+
+int mlx5_core_query_q_counter(struct mlx5_core_dev *dev, u16 counter_id,
+ int reset, void *out, int out_size)
+{
+ u32 in[MLX5_ST_SZ_DW(query_q_counter_in)];
+
+ memset(in, 0, sizeof(in));
+
+ MLX5_SET(query_q_counter_in, in, opcode, MLX5_CMD_OP_QUERY_Q_COUNTER);
+ MLX5_SET(query_q_counter_in, in, clear, reset);
+ MLX5_SET(query_q_counter_in, in, counter_set_id, counter_id);
+ return mlx5_cmd_exec_check_status(dev, in, sizeof(in), out, out_size);
+}
+EXPORT_SYMBOL_GPL(mlx5_core_query_q_counter);
+
+int mlx5_core_query_out_of_buffer(struct mlx5_core_dev *dev, u16 counter_id,
+ u32 *out_of_buffer)
+{
+ int outlen = MLX5_ST_SZ_BYTES(query_q_counter_out);
+ void *out;
+ int err;
+
+ out = mlx5_vzalloc(outlen);
+ if (!out)
+ return -ENOMEM;
+
+ err = mlx5_core_query_q_counter(dev, counter_id, 0, out, outlen);
+ if (!err)
+ *out_of_buffer = MLX5_GET(query_q_counter_out, out,
+ out_of_buffer);
+
+ kfree(out);
+ return err;
+}
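
A brief lifecycle sketch for the queue counter helpers above (hypothetical caller; attaching the counter to queues happens through the respective create commands and is only indicated by a comment):

/* Hypothetical: allocate a Q counter, read its out-of-buffer drops, free it. */
static int example_q_counter_usage(struct mlx5_core_dev *dev)
{
	u32 out_of_buffer = 0;
	u16 counter_id;
	int err;

	err = mlx5_core_alloc_q_counter(dev, &counter_id);
	if (err)
		return err;

	/* ... set counter_set_id = counter_id when creating RQs/QPs ... */

	err = mlx5_core_query_out_of_buffer(dev, counter_id, &out_of_buffer);
	if (!err)
		mlx5_core_dbg(dev, "q counter %d: %u out-of-buffer drops\n",
			      counter_id, out_of_buffer);

	mlx5_core_dealloc_q_counter(dev, counter_id);
	return err;
}
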
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
index 7b24386794f9..d6a3f412ba9f 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
+++ b/drivers/net/ethernet/mellanox/mlx5/core/sriov.c
@@ -140,7 +140,7 @@ int mlx5_core_sriov_configure(struct pci_dev *pdev, int num_vfs)
struct mlx5_core_sriov *sriov = &dev->priv.sriov;
int err;
- mlx5_core_dbg(dev, "requsted num_vfs %d\n", num_vfs);
+ mlx5_core_dbg(dev, "requested num_vfs %d\n", num_vfs);
if (!mlx5_core_is_pf(dev))
return -EPERM;
diff --git a/drivers/net/ethernet/mellanox/mlx5/core/vxlan.h b/drivers/net/ethernet/mellanox/mlx5/core/vxlan.h
index 129f3527aa14..5def12c048e3 100644
--- a/drivers/net/ethernet/mellanox/mlx5/core/vxlan.h
+++ b/drivers/net/ethernet/mellanox/mlx5/core/vxlan.h
@@ -53,9 +53,10 @@ static inline bool mlx5e_vxlan_allowed(struct mlx5_core_dev *mdev)
}
void mlx5e_vxlan_init(struct mlx5e_priv *priv);
+void mlx5e_vxlan_cleanup(struct mlx5e_priv *priv);
+
void mlx5e_vxlan_queue_work(struct mlx5e_priv *priv, sa_family_t sa_family,
u16 port, int add);
struct mlx5e_vxlan *mlx5e_vxlan_lookup_port(struct mlx5e_priv *priv, u16 port);
-void mlx5e_vxlan_cleanup(struct mlx5e_priv *priv);
#endif /* __MLX5_VXLAN_H__ */
diff --git a/drivers/net/ethernet/mellanox/mlxsw/Kconfig b/drivers/net/ethernet/mellanox/mlxsw/Kconfig
index 2ad7f67854d5..5989f7cb5462 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/Kconfig
+++ b/drivers/net/ethernet/mellanox/mlxsw/Kconfig
@@ -50,3 +50,11 @@ config MLXSW_SPECTRUM
To compile this driver as a module, choose M here: the
module will be called mlxsw_spectrum.
+
+config MLXSW_SPECTRUM_DCB
+ bool "Data Center Bridging (DCB) support"
+ depends on MLXSW_SPECTRUM && DCB
+ default y
+ ---help---
+ Say Y here if you want to use Data Center Bridging (DCB) in the
+ driver.
diff --git a/drivers/net/ethernet/mellanox/mlxsw/Makefile b/drivers/net/ethernet/mellanox/mlxsw/Makefile
index 584cac444852..9b5ebf84c051 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/Makefile
+++ b/drivers/net/ethernet/mellanox/mlxsw/Makefile
@@ -8,3 +8,4 @@ mlxsw_switchx2-objs := switchx2.o
obj-$(CONFIG_MLXSW_SPECTRUM) += mlxsw_spectrum.o
mlxsw_spectrum-objs := spectrum.o spectrum_buffers.o \
spectrum_switchdev.o
+mlxsw_spectrum-$(CONFIG_MLXSW_SPECTRUM_DCB) += spectrum_dcb.o
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.c b/drivers/net/ethernet/mellanox/mlxsw/core.c
index f69f6280519f..b0a0b01bb4ef 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/core.c
@@ -44,7 +44,7 @@
#include <linux/seq_file.h>
#include <linux/u64_stats_sync.h>
#include <linux/netdevice.h>
-#include <linux/wait.h>
+#include <linux/completion.h>
#include <linux/skbuff.h>
#include <linux/etherdevice.h>
#include <linux/types.h>
@@ -55,6 +55,7 @@
#include <linux/mutex.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
+#include <linux/workqueue.h>
#include <asm/byteorder.h>
#include <net/devlink.h>
@@ -73,6 +74,8 @@ static const char mlxsw_core_driver_name[] = "mlxsw_core";
static struct dentry *mlxsw_core_dbg_root;
+static struct workqueue_struct *mlxsw_wq;
+
struct mlxsw_core_pcpu_stats {
u64 trap_rx_packets[MLXSW_TRAP_ID_MAX];
u64 trap_rx_bytes[MLXSW_TRAP_ID_MAX];
@@ -93,11 +96,9 @@ struct mlxsw_core {
struct list_head rx_listener_list;
struct list_head event_listener_list;
struct {
- struct sk_buff *resp_skb;
- u64 tid;
- wait_queue_head_t wait;
- bool trans_active;
- struct mutex lock; /* One EMAD transaction at a time. */
+ atomic64_t tid;
+ struct list_head trans_list;
+ spinlock_t trans_list_lock; /* protects trans_list writes */
bool use_emad;
} emad;
struct mlxsw_core_pcpu_stats __percpu *pcpu_stats;
@@ -114,6 +115,12 @@ struct mlxsw_core {
/* driver_priv has to be always the last item */
};
+void *mlxsw_core_driver_priv(struct mlxsw_core *mlxsw_core)
+{
+ return mlxsw_core->driver_priv;
+}
+EXPORT_SYMBOL(mlxsw_core_driver_priv);
+
struct mlxsw_rx_listener_item {
struct list_head list;
struct mlxsw_rx_listener rxl;
@@ -284,7 +291,7 @@ static void mlxsw_emad_pack_reg_tlv(char *reg_tlv,
static void mlxsw_emad_pack_op_tlv(char *op_tlv,
const struct mlxsw_reg_info *reg,
enum mlxsw_core_reg_access_type type,
- struct mlxsw_core *mlxsw_core)
+ u64 tid)
{
mlxsw_emad_op_tlv_type_set(op_tlv, MLXSW_EMAD_TLV_TYPE_OP);
mlxsw_emad_op_tlv_len_set(op_tlv, MLXSW_EMAD_OP_TLV_LEN);
@@ -300,7 +307,7 @@ static void mlxsw_emad_pack_op_tlv(char *op_tlv,
MLXSW_EMAD_OP_TLV_METHOD_WRITE);
mlxsw_emad_op_tlv_class_set(op_tlv,
MLXSW_EMAD_OP_TLV_CLASS_REG_ACCESS);
- mlxsw_emad_op_tlv_tid_set(op_tlv, mlxsw_core->emad.tid);
+ mlxsw_emad_op_tlv_tid_set(op_tlv, tid);
}
static int mlxsw_emad_construct_eth_hdr(struct sk_buff *skb)
@@ -322,7 +329,7 @@ static void mlxsw_emad_construct(struct sk_buff *skb,
const struct mlxsw_reg_info *reg,
char *payload,
enum mlxsw_core_reg_access_type type,
- struct mlxsw_core *mlxsw_core)
+ u64 tid)
{
char *buf;
@@ -333,7 +340,7 @@ static void mlxsw_emad_construct(struct sk_buff *skb,
mlxsw_emad_pack_reg_tlv(buf, reg, payload);
buf = skb_push(skb, MLXSW_EMAD_OP_TLV_LEN * sizeof(u32));
- mlxsw_emad_pack_op_tlv(buf, reg, type, mlxsw_core);
+ mlxsw_emad_pack_op_tlv(buf, reg, type, tid);
mlxsw_emad_construct_eth_hdr(skb);
}
@@ -370,58 +377,16 @@ static bool mlxsw_emad_is_resp(const struct sk_buff *skb)
return (mlxsw_emad_op_tlv_r_get(op_tlv) == MLXSW_EMAD_OP_TLV_RESPONSE);
}
-#define MLXSW_EMAD_TIMEOUT_MS 200
-
-static int __mlxsw_emad_transmit(struct mlxsw_core *mlxsw_core,
- struct sk_buff *skb,
- const struct mlxsw_tx_info *tx_info)
-{
- int err;
- int ret;
-
- mlxsw_core->emad.trans_active = true;
-
- err = mlxsw_core_skb_transmit(mlxsw_core->driver_priv, skb, tx_info);
- if (err) {
- dev_err(mlxsw_core->bus_info->dev, "Failed to transmit EMAD (tid=%llx)\n",
- mlxsw_core->emad.tid);
- dev_kfree_skb(skb);
- goto trans_inactive_out;
- }
-
- ret = wait_event_timeout(mlxsw_core->emad.wait,
- !(mlxsw_core->emad.trans_active),
- msecs_to_jiffies(MLXSW_EMAD_TIMEOUT_MS));
- if (!ret) {
- dev_warn(mlxsw_core->bus_info->dev, "EMAD timed-out (tid=%llx)\n",
- mlxsw_core->emad.tid);
- err = -EIO;
- goto trans_inactive_out;
- }
-
- return 0;
-
-trans_inactive_out:
- mlxsw_core->emad.trans_active = false;
- return err;
-}
-
-static int mlxsw_emad_process_status(struct mlxsw_core *mlxsw_core,
- char *op_tlv)
+static int mlxsw_emad_process_status(char *op_tlv,
+ enum mlxsw_emad_op_tlv_status *p_status)
{
- enum mlxsw_emad_op_tlv_status status;
- u64 tid;
+ *p_status = mlxsw_emad_op_tlv_status_get(op_tlv);
- status = mlxsw_emad_op_tlv_status_get(op_tlv);
- tid = mlxsw_emad_op_tlv_tid_get(op_tlv);
-
- switch (status) {
+ switch (*p_status) {
case MLXSW_EMAD_OP_TLV_STATUS_SUCCESS:
return 0;
case MLXSW_EMAD_OP_TLV_STATUS_BUSY:
case MLXSW_EMAD_OP_TLV_STATUS_MESSAGE_RECEIPT_ACK:
- dev_warn(mlxsw_core->bus_info->dev, "Reg access status again (tid=%llx,status=%x(%s))\n",
- tid, status, mlxsw_emad_op_tlv_status_str(status));
return -EAGAIN;
case MLXSW_EMAD_OP_TLV_STATUS_VERSION_NOT_SUPPORTED:
case MLXSW_EMAD_OP_TLV_STATUS_UNKNOWN_TLV:
@@ -432,70 +397,150 @@ static int mlxsw_emad_process_status(struct mlxsw_core *mlxsw_core,
case MLXSW_EMAD_OP_TLV_STATUS_RESOURCE_NOT_AVAILABLE:
case MLXSW_EMAD_OP_TLV_STATUS_INTERNAL_ERROR:
default:
- dev_err(mlxsw_core->bus_info->dev, "Reg access status failed (tid=%llx,status=%x(%s))\n",
- tid, status, mlxsw_emad_op_tlv_status_str(status));
return -EIO;
}
}
-static int mlxsw_emad_process_status_skb(struct mlxsw_core *mlxsw_core,
- struct sk_buff *skb)
+static int
+mlxsw_emad_process_status_skb(struct sk_buff *skb,
+ enum mlxsw_emad_op_tlv_status *p_status)
+{
+ return mlxsw_emad_process_status(mlxsw_emad_op_tlv(skb), p_status);
+}
+
+struct mlxsw_reg_trans {
+ struct list_head list;
+ struct list_head bulk_list;
+ struct mlxsw_core *core;
+ struct sk_buff *tx_skb;
+ struct mlxsw_tx_info tx_info;
+ struct delayed_work timeout_dw;
+ unsigned int retries;
+ u64 tid;
+ struct completion completion;
+ atomic_t active;
+ mlxsw_reg_trans_cb_t *cb;
+ unsigned long cb_priv;
+ const struct mlxsw_reg_info *reg;
+ enum mlxsw_core_reg_access_type type;
+ int err;
+ enum mlxsw_emad_op_tlv_status emad_status;
+ struct rcu_head rcu;
+};
+
+#define MLXSW_EMAD_TIMEOUT_MS 200
+
+static void mlxsw_emad_trans_timeout_schedule(struct mlxsw_reg_trans *trans)
{
- return mlxsw_emad_process_status(mlxsw_core, mlxsw_emad_op_tlv(skb));
+ unsigned long timeout = msecs_to_jiffies(MLXSW_EMAD_TIMEOUT_MS);
+
+ mlxsw_core_schedule_dw(&trans->timeout_dw, timeout);
}
static int mlxsw_emad_transmit(struct mlxsw_core *mlxsw_core,
- struct sk_buff *skb,
- const struct mlxsw_tx_info *tx_info)
+ struct mlxsw_reg_trans *trans)
{
- struct sk_buff *trans_skb;
- int n_retry;
+ struct sk_buff *skb;
int err;
- n_retry = 0;
-retry:
- /* We copy the EMAD to a new skb, since we might need
- * to retransmit it in case of failure.
- */
- trans_skb = skb_copy(skb, GFP_KERNEL);
- if (!trans_skb) {
- err = -ENOMEM;
- goto out;
+ skb = skb_copy(trans->tx_skb, GFP_KERNEL);
+ if (!skb)
+ return -ENOMEM;
+
+ atomic_set(&trans->active, 1);
+ err = mlxsw_core_skb_transmit(mlxsw_core, skb, &trans->tx_info);
+ if (err) {
+ dev_kfree_skb(skb);
+ return err;
}
+ mlxsw_emad_trans_timeout_schedule(trans);
+ return 0;
+}
- err = __mlxsw_emad_transmit(mlxsw_core, trans_skb, tx_info);
- if (!err) {
- struct sk_buff *resp_skb = mlxsw_core->emad.resp_skb;
+static void mlxsw_emad_trans_finish(struct mlxsw_reg_trans *trans, int err)
+{
+ struct mlxsw_core *mlxsw_core = trans->core;
+
+ dev_kfree_skb(trans->tx_skb);
+ spin_lock_bh(&mlxsw_core->emad.trans_list_lock);
+ list_del_rcu(&trans->list);
+ spin_unlock_bh(&mlxsw_core->emad.trans_list_lock);
+ trans->err = err;
+ complete(&trans->completion);
+}
- err = mlxsw_emad_process_status_skb(mlxsw_core, resp_skb);
- if (err)
- dev_kfree_skb(resp_skb);
- if (!err || err != -EAGAIN)
- goto out;
+static void mlxsw_emad_transmit_retry(struct mlxsw_core *mlxsw_core,
+ struct mlxsw_reg_trans *trans)
+{
+ int err;
+
+ if (trans->retries < MLXSW_EMAD_MAX_RETRY) {
+ trans->retries++;
+ err = mlxsw_emad_transmit(trans->core, trans);
+ if (err == 0)
+ return;
+ } else {
+ err = -EIO;
}
- if (n_retry++ < MLXSW_EMAD_MAX_RETRY)
- goto retry;
+ mlxsw_emad_trans_finish(trans, err);
+}
-out:
- dev_kfree_skb(skb);
- mlxsw_core->emad.tid++;
- return err;
+static void mlxsw_emad_trans_timeout_work(struct work_struct *work)
+{
+ struct mlxsw_reg_trans *trans = container_of(work,
+ struct mlxsw_reg_trans,
+ timeout_dw.work);
+
+ if (!atomic_dec_and_test(&trans->active))
+ return;
+
+ mlxsw_emad_transmit_retry(trans->core, trans);
+}
+
+static void mlxsw_emad_process_response(struct mlxsw_core *mlxsw_core,
+ struct mlxsw_reg_trans *trans,
+ struct sk_buff *skb)
+{
+ int err;
+
+ if (!atomic_dec_and_test(&trans->active))
+ return;
+
+ err = mlxsw_emad_process_status_skb(skb, &trans->emad_status);
+ if (err == -EAGAIN) {
+ mlxsw_emad_transmit_retry(mlxsw_core, trans);
+ } else {
+ if (err == 0) {
+ char *op_tlv = mlxsw_emad_op_tlv(skb);
+
+ if (trans->cb)
+ trans->cb(mlxsw_core,
+ mlxsw_emad_reg_payload(op_tlv),
+ trans->reg->len, trans->cb_priv);
+ }
+ mlxsw_emad_trans_finish(trans, err);
+ }
}
+/* called with rcu read lock held */
static void mlxsw_emad_rx_listener_func(struct sk_buff *skb, u8 local_port,
void *priv)
{
struct mlxsw_core *mlxsw_core = priv;
+ struct mlxsw_reg_trans *trans;
- if (mlxsw_emad_is_resp(skb) &&
- mlxsw_core->emad.trans_active &&
- mlxsw_emad_get_tid(skb) == mlxsw_core->emad.tid) {
- mlxsw_core->emad.resp_skb = skb;
- mlxsw_core->emad.trans_active = false;
- wake_up(&mlxsw_core->emad.wait);
- } else {
- dev_kfree_skb(skb);
+ if (!mlxsw_emad_is_resp(skb))
+ goto free_skb;
+
+ list_for_each_entry_rcu(trans, &mlxsw_core->emad.trans_list, list) {
+ if (mlxsw_emad_get_tid(skb) == trans->tid) {
+ mlxsw_emad_process_response(mlxsw_core, trans, skb);
+ break;
+ }
}
+
+free_skb:
+ dev_kfree_skb(skb);
}
static const struct mlxsw_rx_listener mlxsw_emad_rx_listener = {
@@ -522,18 +567,19 @@ static int mlxsw_emad_traps_set(struct mlxsw_core *mlxsw_core)
static int mlxsw_emad_init(struct mlxsw_core *mlxsw_core)
{
+ u64 tid;
int err;
/* Set the upper 32 bits of the transaction ID field to a random
* number. This allows us to discard EMADs addressed to other
* devices.
*/
- get_random_bytes(&mlxsw_core->emad.tid, 4);
- mlxsw_core->emad.tid = mlxsw_core->emad.tid << 32;
+ get_random_bytes(&tid, 4);
+ tid <<= 32;
+ atomic64_set(&mlxsw_core->emad.tid, tid);
- init_waitqueue_head(&mlxsw_core->emad.wait);
- mlxsw_core->emad.trans_active = false;
- mutex_init(&mlxsw_core->emad.lock);
+ INIT_LIST_HEAD(&mlxsw_core->emad.trans_list);
+ spin_lock_init(&mlxsw_core->emad.trans_list_lock);
err = mlxsw_core_rx_listener_register(mlxsw_core,
&mlxsw_emad_rx_listener,
@@ -591,6 +637,59 @@ static struct sk_buff *mlxsw_emad_alloc(const struct mlxsw_core *mlxsw_core,
return skb;
}
+static int mlxsw_emad_reg_access(struct mlxsw_core *mlxsw_core,
+ const struct mlxsw_reg_info *reg,
+ char *payload,
+ enum mlxsw_core_reg_access_type type,
+ struct mlxsw_reg_trans *trans,
+ struct list_head *bulk_list,
+ mlxsw_reg_trans_cb_t *cb,
+ unsigned long cb_priv, u64 tid)
+{
+ struct sk_buff *skb;
+ int err;
+
+ dev_dbg(mlxsw_core->bus_info->dev, "EMAD reg access (tid=%llx,reg_id=%x(%s),type=%s)\n",
+ trans->tid, reg->id, mlxsw_reg_id_str(reg->id),
+ mlxsw_core_reg_access_type_str(type));
+
+ skb = mlxsw_emad_alloc(mlxsw_core, reg->len);
+ if (!skb)
+ return -ENOMEM;
+
+ list_add_tail(&trans->bulk_list, bulk_list);
+ trans->core = mlxsw_core;
+ trans->tx_skb = skb;
+ trans->tx_info.local_port = MLXSW_PORT_CPU_PORT;
+ trans->tx_info.is_emad = true;
+ INIT_DELAYED_WORK(&trans->timeout_dw, mlxsw_emad_trans_timeout_work);
+ trans->tid = tid;
+ init_completion(&trans->completion);
+ trans->cb = cb;
+ trans->cb_priv = cb_priv;
+ trans->reg = reg;
+ trans->type = type;
+
+ mlxsw_emad_construct(skb, reg, payload, type, trans->tid);
+ mlxsw_core->driver->txhdr_construct(skb, &trans->tx_info);
+
+ spin_lock_bh(&mlxsw_core->emad.trans_list_lock);
+ list_add_tail_rcu(&trans->list, &mlxsw_core->emad.trans_list);
+ spin_unlock_bh(&mlxsw_core->emad.trans_list_lock);
+ err = mlxsw_emad_transmit(mlxsw_core, trans);
+ if (err)
+ goto err_out;
+ return 0;
+
+err_out:
+ spin_lock_bh(&mlxsw_core->emad.trans_list_lock);
+ list_del_rcu(&trans->list);
+ spin_unlock_bh(&mlxsw_core->emad.trans_list_lock);
+ list_del(&trans->bulk_list);
+ dev_kfree_skb(trans->tx_skb);
+ return err;
+}
+
/*****************
* Core functions
*****************/
@@ -680,24 +779,6 @@ static const struct file_operations mlxsw_core_rx_stats_dbg_ops = {
.llseek = seq_lseek
};
-static void mlxsw_core_buf_dump_dbg(struct mlxsw_core *mlxsw_core,
- const char *buf, size_t size)
-{
- __be32 *m = (__be32 *) buf;
- int i;
- int count = size / sizeof(__be32);
-
- for (i = count - 1; i >= 0; i--)
- if (m[i])
- break;
- i++;
- count = i ? i : 1;
- for (i = 0; i < count; i += 4)
- dev_dbg(mlxsw_core->bus_info->dev, "%04x - %08x %08x %08x %08x\n",
- i * 4, be32_to_cpu(m[i]), be32_to_cpu(m[i + 1]),
- be32_to_cpu(m[i + 2]), be32_to_cpu(m[i + 3]));
-}
-
int mlxsw_core_driver_register(struct mlxsw_driver *mlxsw_driver)
{
spin_lock(&mlxsw_core_driver_list_lock);
@@ -795,8 +876,7 @@ static int mlxsw_devlink_port_split(struct devlink *devlink,
return -EINVAL;
if (!mlxsw_core->driver->port_split)
return -EOPNOTSUPP;
- return mlxsw_core->driver->port_split(mlxsw_core->driver_priv,
- port_index, count);
+ return mlxsw_core->driver->port_split(mlxsw_core, port_index, count);
}
static int mlxsw_devlink_port_unsplit(struct devlink *devlink,
@@ -808,13 +888,171 @@ static int mlxsw_devlink_port_unsplit(struct devlink *devlink,
return -EINVAL;
if (!mlxsw_core->driver->port_unsplit)
return -EOPNOTSUPP;
- return mlxsw_core->driver->port_unsplit(mlxsw_core->driver_priv,
- port_index);
+ return mlxsw_core->driver->port_unsplit(mlxsw_core, port_index);
+}
+
+static int
+mlxsw_devlink_sb_pool_get(struct devlink *devlink,
+ unsigned int sb_index, u16 pool_index,
+ struct devlink_sb_pool_info *pool_info)
+{
+ struct mlxsw_core *mlxsw_core = devlink_priv(devlink);
+ struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver;
+
+ if (!mlxsw_driver->sb_pool_get)
+ return -EOPNOTSUPP;
+ return mlxsw_driver->sb_pool_get(mlxsw_core, sb_index,
+ pool_index, pool_info);
+}
+
+static int
+mlxsw_devlink_sb_pool_set(struct devlink *devlink,
+ unsigned int sb_index, u16 pool_index, u32 size,
+ enum devlink_sb_threshold_type threshold_type)
+{
+ struct mlxsw_core *mlxsw_core = devlink_priv(devlink);
+ struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver;
+
+ if (!mlxsw_driver->sb_pool_set)
+ return -EOPNOTSUPP;
+ return mlxsw_driver->sb_pool_set(mlxsw_core, sb_index,
+ pool_index, size, threshold_type);
+}
+
+static void *__dl_port(struct devlink_port *devlink_port)
+{
+ return container_of(devlink_port, struct mlxsw_core_port, devlink_port);
+}
+
+static int mlxsw_devlink_sb_port_pool_get(struct devlink_port *devlink_port,
+ unsigned int sb_index, u16 pool_index,
+ u32 *p_threshold)
+{
+ struct mlxsw_core *mlxsw_core = devlink_priv(devlink_port->devlink);
+ struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver;
+ struct mlxsw_core_port *mlxsw_core_port = __dl_port(devlink_port);
+
+ if (!mlxsw_driver->sb_port_pool_get)
+ return -EOPNOTSUPP;
+ return mlxsw_driver->sb_port_pool_get(mlxsw_core_port, sb_index,
+ pool_index, p_threshold);
+}
+
+static int mlxsw_devlink_sb_port_pool_set(struct devlink_port *devlink_port,
+ unsigned int sb_index, u16 pool_index,
+ u32 threshold)
+{
+ struct mlxsw_core *mlxsw_core = devlink_priv(devlink_port->devlink);
+ struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver;
+ struct mlxsw_core_port *mlxsw_core_port = __dl_port(devlink_port);
+
+ if (!mlxsw_driver->sb_port_pool_set)
+ return -EOPNOTSUPP;
+ return mlxsw_driver->sb_port_pool_set(mlxsw_core_port, sb_index,
+ pool_index, threshold);
+}
+
+static int
+mlxsw_devlink_sb_tc_pool_bind_get(struct devlink_port *devlink_port,
+ unsigned int sb_index, u16 tc_index,
+ enum devlink_sb_pool_type pool_type,
+ u16 *p_pool_index, u32 *p_threshold)
+{
+ struct mlxsw_core *mlxsw_core = devlink_priv(devlink_port->devlink);
+ struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver;
+ struct mlxsw_core_port *mlxsw_core_port = __dl_port(devlink_port);
+
+ if (!mlxsw_driver->sb_tc_pool_bind_get)
+ return -EOPNOTSUPP;
+ return mlxsw_driver->sb_tc_pool_bind_get(mlxsw_core_port, sb_index,
+ tc_index, pool_type,
+ p_pool_index, p_threshold);
+}
+
+static int
+mlxsw_devlink_sb_tc_pool_bind_set(struct devlink_port *devlink_port,
+ unsigned int sb_index, u16 tc_index,
+ enum devlink_sb_pool_type pool_type,
+ u16 pool_index, u32 threshold)
+{
+ struct mlxsw_core *mlxsw_core = devlink_priv(devlink_port->devlink);
+ struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver;
+ struct mlxsw_core_port *mlxsw_core_port = __dl_port(devlink_port);
+
+ if (!mlxsw_driver->sb_tc_pool_bind_set)
+ return -EOPNOTSUPP;
+ return mlxsw_driver->sb_tc_pool_bind_set(mlxsw_core_port, sb_index,
+ tc_index, pool_type,
+ pool_index, threshold);
+}
+
+static int mlxsw_devlink_sb_occ_snapshot(struct devlink *devlink,
+ unsigned int sb_index)
+{
+ struct mlxsw_core *mlxsw_core = devlink_priv(devlink);
+ struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver;
+
+ if (!mlxsw_driver->sb_occ_snapshot)
+ return -EOPNOTSUPP;
+ return mlxsw_driver->sb_occ_snapshot(mlxsw_core, sb_index);
+}
+
+static int mlxsw_devlink_sb_occ_max_clear(struct devlink *devlink,
+ unsigned int sb_index)
+{
+ struct mlxsw_core *mlxsw_core = devlink_priv(devlink);
+ struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver;
+
+ if (!mlxsw_driver->sb_occ_max_clear)
+ return -EOPNOTSUPP;
+ return mlxsw_driver->sb_occ_max_clear(mlxsw_core, sb_index);
+}
+
+static int
+mlxsw_devlink_sb_occ_port_pool_get(struct devlink_port *devlink_port,
+ unsigned int sb_index, u16 pool_index,
+ u32 *p_cur, u32 *p_max)
+{
+ struct mlxsw_core *mlxsw_core = devlink_priv(devlink_port->devlink);
+ struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver;
+ struct mlxsw_core_port *mlxsw_core_port = __dl_port(devlink_port);
+
+ if (!mlxsw_driver->sb_occ_port_pool_get)
+ return -EOPNOTSUPP;
+ return mlxsw_driver->sb_occ_port_pool_get(mlxsw_core_port, sb_index,
+ pool_index, p_cur, p_max);
+}
+
+static int
+mlxsw_devlink_sb_occ_tc_port_bind_get(struct devlink_port *devlink_port,
+ unsigned int sb_index, u16 tc_index,
+ enum devlink_sb_pool_type pool_type,
+ u32 *p_cur, u32 *p_max)
+{
+ struct mlxsw_core *mlxsw_core = devlink_priv(devlink_port->devlink);
+ struct mlxsw_driver *mlxsw_driver = mlxsw_core->driver;
+ struct mlxsw_core_port *mlxsw_core_port = __dl_port(devlink_port);
+
+ if (!mlxsw_driver->sb_occ_tc_port_bind_get)
+ return -EOPNOTSUPP;
+ return mlxsw_driver->sb_occ_tc_port_bind_get(mlxsw_core_port,
+ sb_index, tc_index,
+ pool_type, p_cur, p_max);
}
static const struct devlink_ops mlxsw_devlink_ops = {
- .port_split = mlxsw_devlink_port_split,
- .port_unsplit = mlxsw_devlink_port_unsplit,
+ .port_split = mlxsw_devlink_port_split,
+ .port_unsplit = mlxsw_devlink_port_unsplit,
+ .sb_pool_get = mlxsw_devlink_sb_pool_get,
+ .sb_pool_set = mlxsw_devlink_sb_pool_set,
+ .sb_port_pool_get = mlxsw_devlink_sb_port_pool_get,
+ .sb_port_pool_set = mlxsw_devlink_sb_port_pool_set,
+ .sb_tc_pool_bind_get = mlxsw_devlink_sb_tc_pool_bind_get,
+ .sb_tc_pool_bind_set = mlxsw_devlink_sb_tc_pool_bind_set,
+ .sb_occ_snapshot = mlxsw_devlink_sb_occ_snapshot,
+ .sb_occ_max_clear = mlxsw_devlink_sb_occ_max_clear,
+ .sb_occ_port_pool_get = mlxsw_devlink_sb_occ_port_pool_get,
+ .sb_occ_tc_port_bind_get = mlxsw_devlink_sb_occ_tc_port_bind_get,
};
int mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info,
@@ -880,8 +1118,7 @@ int mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info,
if (err)
goto err_devlink_register;
- err = mlxsw_driver->init(mlxsw_core->driver_priv, mlxsw_core,
- mlxsw_bus_info);
+ err = mlxsw_driver->init(mlxsw_core, mlxsw_bus_info);
if (err)
goto err_driver_init;
@@ -892,7 +1129,7 @@ int mlxsw_core_bus_device_register(const struct mlxsw_bus_info *mlxsw_bus_info,
return 0;
err_debugfs_init:
- mlxsw_core->driver->fini(mlxsw_core->driver_priv);
+ mlxsw_core->driver->fini(mlxsw_core);
err_driver_init:
devlink_unregister(devlink);
err_devlink_register:
@@ -918,7 +1155,7 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core)
struct devlink *devlink = priv_to_devlink(mlxsw_core);
mlxsw_core_debugfs_fini(mlxsw_core);
- mlxsw_core->driver->fini(mlxsw_core->driver_priv);
+ mlxsw_core->driver->fini(mlxsw_core);
devlink_unregister(devlink);
mlxsw_emad_fini(mlxsw_core);
mlxsw_core->bus->fini(mlxsw_core->bus_priv);
@@ -929,26 +1166,17 @@ void mlxsw_core_bus_device_unregister(struct mlxsw_core *mlxsw_core)
}
EXPORT_SYMBOL(mlxsw_core_bus_device_unregister);
-static struct mlxsw_core *__mlxsw_core_get(void *driver_priv)
-{
- return container_of(driver_priv, struct mlxsw_core, driver_priv);
-}
-
-bool mlxsw_core_skb_transmit_busy(void *driver_priv,
+bool mlxsw_core_skb_transmit_busy(struct mlxsw_core *mlxsw_core,
const struct mlxsw_tx_info *tx_info)
{
- struct mlxsw_core *mlxsw_core = __mlxsw_core_get(driver_priv);
-
return mlxsw_core->bus->skb_transmit_busy(mlxsw_core->bus_priv,
tx_info);
}
EXPORT_SYMBOL(mlxsw_core_skb_transmit_busy);
-int mlxsw_core_skb_transmit(void *driver_priv, struct sk_buff *skb,
+int mlxsw_core_skb_transmit(struct mlxsw_core *mlxsw_core, struct sk_buff *skb,
const struct mlxsw_tx_info *tx_info)
{
- struct mlxsw_core *mlxsw_core = __mlxsw_core_get(driver_priv);
-
return mlxsw_core->bus->skb_transmit(mlxsw_core->bus_priv, skb,
tx_info);
}
@@ -1108,56 +1336,112 @@ void mlxsw_core_event_listener_unregister(struct mlxsw_core *mlxsw_core,
}
EXPORT_SYMBOL(mlxsw_core_event_listener_unregister);
+static u64 mlxsw_core_tid_get(struct mlxsw_core *mlxsw_core)
+{
+ return atomic64_inc_return(&mlxsw_core->emad.tid);
+}
+
static int mlxsw_core_reg_access_emad(struct mlxsw_core *mlxsw_core,
const struct mlxsw_reg_info *reg,
char *payload,
- enum mlxsw_core_reg_access_type type)
+ enum mlxsw_core_reg_access_type type,
+ struct list_head *bulk_list,
+ mlxsw_reg_trans_cb_t *cb,
+ unsigned long cb_priv)
{
+ u64 tid = mlxsw_core_tid_get(mlxsw_core);
+ struct mlxsw_reg_trans *trans;
int err;
- char *op_tlv;
- struct sk_buff *skb;
- struct mlxsw_tx_info tx_info = {
- .local_port = MLXSW_PORT_CPU_PORT,
- .is_emad = true,
- };
- skb = mlxsw_emad_alloc(mlxsw_core, reg->len);
- if (!skb)
+ trans = kzalloc(sizeof(*trans), GFP_KERNEL);
+ if (!trans)
return -ENOMEM;
- mlxsw_emad_construct(skb, reg, payload, type, mlxsw_core);
- mlxsw_core->driver->txhdr_construct(skb, &tx_info);
+ err = mlxsw_emad_reg_access(mlxsw_core, reg, payload, type, trans,
+ bulk_list, cb, cb_priv, tid);
+ if (err) {
+ kfree(trans);
+ return err;
+ }
+ return 0;
+}
- dev_dbg(mlxsw_core->bus_info->dev, "EMAD send (tid=%llx)\n",
- mlxsw_core->emad.tid);
- mlxsw_core_buf_dump_dbg(mlxsw_core, skb->data, skb->len);
+int mlxsw_reg_trans_query(struct mlxsw_core *mlxsw_core,
+ const struct mlxsw_reg_info *reg, char *payload,
+ struct list_head *bulk_list,
+ mlxsw_reg_trans_cb_t *cb, unsigned long cb_priv)
+{
+ return mlxsw_core_reg_access_emad(mlxsw_core, reg, payload,
+ MLXSW_CORE_REG_ACCESS_TYPE_QUERY,
+ bulk_list, cb, cb_priv);
+}
+EXPORT_SYMBOL(mlxsw_reg_trans_query);
- err = mlxsw_emad_transmit(mlxsw_core, skb, &tx_info);
- if (!err) {
- op_tlv = mlxsw_emad_op_tlv(mlxsw_core->emad.resp_skb);
- memcpy(payload, mlxsw_emad_reg_payload(op_tlv),
- reg->len);
+int mlxsw_reg_trans_write(struct mlxsw_core *mlxsw_core,
+ const struct mlxsw_reg_info *reg, char *payload,
+ struct list_head *bulk_list,
+ mlxsw_reg_trans_cb_t *cb, unsigned long cb_priv)
+{
+ return mlxsw_core_reg_access_emad(mlxsw_core, reg, payload,
+ MLXSW_CORE_REG_ACCESS_TYPE_WRITE,
+ bulk_list, cb, cb_priv);
+}
+EXPORT_SYMBOL(mlxsw_reg_trans_write);
- dev_dbg(mlxsw_core->bus_info->dev, "EMAD recv (tid=%llx)\n",
- mlxsw_core->emad.tid - 1);
- mlxsw_core_buf_dump_dbg(mlxsw_core,
- mlxsw_core->emad.resp_skb->data,
- mlxsw_core->emad.resp_skb->len);
+static int mlxsw_reg_trans_wait(struct mlxsw_reg_trans *trans)
+{
+ struct mlxsw_core *mlxsw_core = trans->core;
+ int err;
- dev_kfree_skb(mlxsw_core->emad.resp_skb);
- }
+ wait_for_completion(&trans->completion);
+ cancel_delayed_work_sync(&trans->timeout_dw);
+ err = trans->err;
+ if (trans->retries)
+ dev_warn(mlxsw_core->bus_info->dev, "EMAD retries (%d/%d) (tid=%llx)\n",
+ trans->retries, MLXSW_EMAD_MAX_RETRY, trans->tid);
+ if (err)
+ dev_err(mlxsw_core->bus_info->dev, "EMAD reg access failed (tid=%llx,reg_id=%x(%s),type=%s,status=%x(%s))\n",
+ trans->tid, trans->reg->id,
+ mlxsw_reg_id_str(trans->reg->id),
+ mlxsw_core_reg_access_type_str(trans->type),
+ trans->emad_status,
+ mlxsw_emad_op_tlv_status_str(trans->emad_status));
+
+ list_del(&trans->bulk_list);
+ kfree_rcu(trans, rcu);
return err;
}
+int mlxsw_reg_trans_bulk_wait(struct list_head *bulk_list)
+{
+ struct mlxsw_reg_trans *trans;
+ struct mlxsw_reg_trans *tmp;
+ int sum_err = 0;
+ int err;
+
+ list_for_each_entry_safe(trans, tmp, bulk_list, bulk_list) {
+ err = mlxsw_reg_trans_wait(trans);
+ if (err && sum_err == 0)
+ sum_err = err; /* first error to be returned */
+ }
+ return sum_err;
+}
+EXPORT_SYMBOL(mlxsw_reg_trans_bulk_wait);
+
static int mlxsw_core_reg_access_cmd(struct mlxsw_core *mlxsw_core,
const struct mlxsw_reg_info *reg,
char *payload,
enum mlxsw_core_reg_access_type type)
{
+ enum mlxsw_emad_op_tlv_status status;
int err, n_retry;
char *in_mbox, *out_mbox, *tmp;
+ dev_dbg(mlxsw_core->bus_info->dev, "Reg cmd access (reg_id=%x(%s),type=%s)\n",
+ reg->id, mlxsw_reg_id_str(reg->id),
+ mlxsw_core_reg_access_type_str(type));
+
in_mbox = mlxsw_cmd_mbox_alloc();
if (!in_mbox)
return -ENOMEM;
@@ -1168,7 +1452,8 @@ static int mlxsw_core_reg_access_cmd(struct mlxsw_core *mlxsw_core,
goto free_in_mbox;
}
- mlxsw_emad_pack_op_tlv(in_mbox, reg, type, mlxsw_core);
+ mlxsw_emad_pack_op_tlv(in_mbox, reg, type,
+ mlxsw_core_tid_get(mlxsw_core));
tmp = in_mbox + MLXSW_EMAD_OP_TLV_LEN * sizeof(u32);
mlxsw_emad_pack_reg_tlv(tmp, reg, payload);
@@ -1176,60 +1461,61 @@ static int mlxsw_core_reg_access_cmd(struct mlxsw_core *mlxsw_core,
retry:
err = mlxsw_cmd_access_reg(mlxsw_core, in_mbox, out_mbox);
if (!err) {
- err = mlxsw_emad_process_status(mlxsw_core, out_mbox);
- if (err == -EAGAIN && n_retry++ < MLXSW_EMAD_MAX_RETRY)
- goto retry;
+ err = mlxsw_emad_process_status(out_mbox, &status);
+ if (err) {
+ if (err == -EAGAIN && n_retry++ < MLXSW_EMAD_MAX_RETRY)
+ goto retry;
+ dev_err(mlxsw_core->bus_info->dev, "Reg cmd access status failed (status=%x(%s))\n",
+ status, mlxsw_emad_op_tlv_status_str(status));
+ }
}
if (!err)
memcpy(payload, mlxsw_emad_reg_payload(out_mbox),
reg->len);
- mlxsw_core->emad.tid++;
mlxsw_cmd_mbox_free(out_mbox);
free_in_mbox:
mlxsw_cmd_mbox_free(in_mbox);
+ if (err)
+ dev_err(mlxsw_core->bus_info->dev, "Reg cmd access failed (reg_id=%x(%s),type=%s)\n",
+ reg->id, mlxsw_reg_id_str(reg->id),
+ mlxsw_core_reg_access_type_str(type));
return err;
}
+static void mlxsw_core_reg_access_cb(struct mlxsw_core *mlxsw_core,
+ char *payload, size_t payload_len,
+ unsigned long cb_priv)
+{
+ char *orig_payload = (char *) cb_priv;
+
+ memcpy(orig_payload, payload, payload_len);
+}
+
static int mlxsw_core_reg_access(struct mlxsw_core *mlxsw_core,
const struct mlxsw_reg_info *reg,
char *payload,
enum mlxsw_core_reg_access_type type)
{
- u64 cur_tid;
+ LIST_HEAD(bulk_list);
int err;
- if (mutex_lock_interruptible(&mlxsw_core->emad.lock)) {
- dev_err(mlxsw_core->bus_info->dev, "Reg access interrupted (reg_id=%x(%s),type=%s)\n",
- reg->id, mlxsw_reg_id_str(reg->id),
- mlxsw_core_reg_access_type_str(type));
- return -EINTR;
- }
-
- cur_tid = mlxsw_core->emad.tid;
- dev_dbg(mlxsw_core->bus_info->dev, "Reg access (tid=%llx,reg_id=%x(%s),type=%s)\n",
- cur_tid, reg->id, mlxsw_reg_id_str(reg->id),
- mlxsw_core_reg_access_type_str(type));
-
/* During initialization EMAD interface is not available to us,
* so we default to command interface. We switch to EMAD interface
* after setting the appropriate traps.
*/
if (!mlxsw_core->emad.use_emad)
- err = mlxsw_core_reg_access_cmd(mlxsw_core, reg,
- payload, type);
- else
- err = mlxsw_core_reg_access_emad(mlxsw_core, reg,
+ return mlxsw_core_reg_access_cmd(mlxsw_core, reg,
payload, type);
+ err = mlxsw_core_reg_access_emad(mlxsw_core, reg,
+ payload, type, &bulk_list,
+ mlxsw_core_reg_access_cb,
+ (unsigned long) payload);
if (err)
- dev_err(mlxsw_core->bus_info->dev, "Reg access failed (tid=%llx,reg_id=%x(%s),type=%s)\n",
- cur_tid, reg->id, mlxsw_reg_id_str(reg->id),
- mlxsw_core_reg_access_type_str(type));
-
- mutex_unlock(&mlxsw_core->emad.lock);
- return err;
+ return err;
+ return mlxsw_reg_trans_bulk_wait(&bulk_list);
}
int mlxsw_reg_query(struct mlxsw_core *mlxsw_core,
@@ -1358,6 +1644,46 @@ void mlxsw_core_lag_mapping_clear(struct mlxsw_core *mlxsw_core,
}
EXPORT_SYMBOL(mlxsw_core_lag_mapping_clear);
+int mlxsw_core_port_init(struct mlxsw_core *mlxsw_core,
+ struct mlxsw_core_port *mlxsw_core_port, u8 local_port,
+ struct net_device *dev, bool split, u32 split_group)
+{
+ struct devlink *devlink = priv_to_devlink(mlxsw_core);
+ struct devlink_port *devlink_port = &mlxsw_core_port->devlink_port;
+
+ if (split)
+ devlink_port_split_set(devlink_port, split_group);
+ devlink_port_type_eth_set(devlink_port, dev);
+ return devlink_port_register(devlink, devlink_port, local_port);
+}
+EXPORT_SYMBOL(mlxsw_core_port_init);
+
+void mlxsw_core_port_fini(struct mlxsw_core_port *mlxsw_core_port)
+{
+ struct devlink_port *devlink_port = &mlxsw_core_port->devlink_port;
+
+ devlink_port_unregister(devlink_port);
+}
+EXPORT_SYMBOL(mlxsw_core_port_fini);
+
+static void mlxsw_core_buf_dump_dbg(struct mlxsw_core *mlxsw_core,
+ const char *buf, size_t size)
+{
+ __be32 *m = (__be32 *) buf;
+ int i;
+ int count = size / sizeof(__be32);
+
+ for (i = count - 1; i >= 0; i--)
+ if (m[i])
+ break;
+ i++;
+ count = i ? i : 1;
+ for (i = 0; i < count; i += 4)
+ dev_dbg(mlxsw_core->bus_info->dev, "%04x - %08x %08x %08x %08x\n",
+ i * 4, be32_to_cpu(m[i]), be32_to_cpu(m[i + 1]),
+ be32_to_cpu(m[i + 2]), be32_to_cpu(m[i + 3]));
+}
+
int mlxsw_cmd_exec(struct mlxsw_core *mlxsw_core, u16 opcode, u8 opcode_mod,
u32 in_mod, bool out_mbox_direct,
char *in_mbox, size_t in_mbox_size,
@@ -1400,17 +1726,35 @@ int mlxsw_cmd_exec(struct mlxsw_core *mlxsw_core, u16 opcode, u8 opcode_mod,
}
EXPORT_SYMBOL(mlxsw_cmd_exec);
+int mlxsw_core_schedule_dw(struct delayed_work *dwork, unsigned long delay)
+{
+ return queue_delayed_work(mlxsw_wq, dwork, delay);
+}
+EXPORT_SYMBOL(mlxsw_core_schedule_dw);
+
static int __init mlxsw_core_module_init(void)
{
- mlxsw_core_dbg_root = debugfs_create_dir(mlxsw_core_driver_name, NULL);
- if (!mlxsw_core_dbg_root)
+ int err;
+
+ mlxsw_wq = create_workqueue(mlxsw_core_driver_name);
+ if (!mlxsw_wq)
return -ENOMEM;
+ mlxsw_core_dbg_root = debugfs_create_dir(mlxsw_core_driver_name, NULL);
+ if (!mlxsw_core_dbg_root) {
+ err = -ENOMEM;
+ goto err_debugfs_create_dir;
+ }
return 0;
+
+err_debugfs_create_dir:
+ destroy_workqueue(mlxsw_wq);
+ return err;
}
static void __exit mlxsw_core_module_exit(void)
{
debugfs_remove_recursive(mlxsw_core_dbg_root);
+ destroy_workqueue(mlxsw_wq);
}
module_init(mlxsw_core_module_init);
diff --git a/drivers/net/ethernet/mellanox/mlxsw/core.h b/drivers/net/ethernet/mellanox/mlxsw/core.h
index c73d1c0792a6..436bc49df6ab 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/core.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/core.h
@@ -43,6 +43,8 @@
#include <linux/gfp.h>
#include <linux/types.h>
#include <linux/skbuff.h>
+#include <linux/workqueue.h>
+#include <net/devlink.h>
#include "trap.h"
#include "reg.h"
@@ -61,6 +63,8 @@ struct mlxsw_driver;
struct mlxsw_bus;
struct mlxsw_bus_info;
+void *mlxsw_core_driver_priv(struct mlxsw_core *mlxsw_core);
+
int mlxsw_core_driver_register(struct mlxsw_driver *mlxsw_driver);
void mlxsw_core_driver_unregister(struct mlxsw_driver *mlxsw_driver);
@@ -74,10 +78,9 @@ struct mlxsw_tx_info {
bool is_emad;
};
-bool mlxsw_core_skb_transmit_busy(void *driver_priv,
+bool mlxsw_core_skb_transmit_busy(struct mlxsw_core *mlxsw_core,
const struct mlxsw_tx_info *tx_info);
-
-int mlxsw_core_skb_transmit(void *driver_priv, struct sk_buff *skb,
+int mlxsw_core_skb_transmit(struct mlxsw_core *mlxsw_core, struct sk_buff *skb,
const struct mlxsw_tx_info *tx_info);
struct mlxsw_rx_listener {
@@ -106,6 +109,19 @@ void mlxsw_core_event_listener_unregister(struct mlxsw_core *mlxsw_core,
const struct mlxsw_event_listener *el,
void *priv);
+typedef void mlxsw_reg_trans_cb_t(struct mlxsw_core *mlxsw_core, char *payload,
+ size_t payload_len, unsigned long cb_priv);
+
+int mlxsw_reg_trans_query(struct mlxsw_core *mlxsw_core,
+ const struct mlxsw_reg_info *reg, char *payload,
+ struct list_head *bulk_list,
+ mlxsw_reg_trans_cb_t *cb, unsigned long cb_priv);
+int mlxsw_reg_trans_write(struct mlxsw_core *mlxsw_core,
+ const struct mlxsw_reg_info *reg, char *payload,
+ struct list_head *bulk_list,
+ mlxsw_reg_trans_cb_t *cb, unsigned long cb_priv);
+int mlxsw_reg_trans_bulk_wait(struct list_head *bulk_list);
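A minimal sketch of how this asynchronous transaction API is meant to be consumed (hypothetical example_* wrapper; it mirrors mlxsw_sp_sb_pm_occ_clear() in spectrum_buffers.c below): transactions are queued on one bulk list and completed with a single wait.

static int example_sb_occ_clear(struct mlxsw_core *mlxsw_core, u8 local_port)
{
	LIST_HEAD(bulk_list);
	char sbpm_pl[MLXSW_REG_SBPM_LEN];
	int err;

	/* clr=true asks the device to reset the max occupancy watermark */
	mlxsw_reg_sbpm_pack(sbpm_pl, local_port, 0, MLXSW_REG_SBXX_DIR_INGRESS,
			    true, 0, 0);
	err = mlxsw_reg_trans_query(mlxsw_core, MLXSW_REG(sbpm), sbpm_pl,
				    &bulk_list, NULL, 0);
	if (err)
		return err;
	/* further queries/writes may be queued on the same list here */
	return mlxsw_reg_trans_bulk_wait(&bulk_list);
}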
+
int mlxsw_reg_query(struct mlxsw_core *mlxsw_core,
const struct mlxsw_reg_info *reg, char *payload);
int mlxsw_reg_write(struct mlxsw_core *mlxsw_core,
@@ -131,6 +147,26 @@ u8 mlxsw_core_lag_mapping_get(struct mlxsw_core *mlxsw_core,
void mlxsw_core_lag_mapping_clear(struct mlxsw_core *mlxsw_core,
u16 lag_id, u8 local_port);
+struct mlxsw_core_port {
+ struct devlink_port devlink_port;
+};
+
+static inline void *
+mlxsw_core_port_driver_priv(struct mlxsw_core_port *mlxsw_core_port)
+{
+ /* mlxsw_core_port is ensured to always be the first field in the
+ * driver's port structure.
+ */
+ return mlxsw_core_port;
+}
+
+int mlxsw_core_port_init(struct mlxsw_core *mlxsw_core,
+ struct mlxsw_core_port *mlxsw_core_port, u8 local_port,
+ struct net_device *dev, bool split, u32 split_group);
+void mlxsw_core_port_fini(struct mlxsw_core_port *mlxsw_core_port);
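The helpers above rely on the driver embedding struct mlxsw_core_port as the very first member of its own port structure (see the comment in mlxsw_core_port_driver_priv()); spectrum.h below does this with its core_port field. A sketch with a hypothetical example_port:

struct example_port {
	struct mlxsw_core_port core_port;	/* must be first */
	struct net_device *dev;
	u8 local_port;
};

static int example_port_register(struct mlxsw_core *mlxsw_core,
				 struct example_port *port)
{
	return mlxsw_core_port_init(mlxsw_core, &port->core_port,
				    port->local_port, port->dev,
				    false, 0);
}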
+
+int mlxsw_core_schedule_dw(struct delayed_work *dwork, unsigned long delay);
+
#define MLXSW_CONFIG_PROFILE_SWID_COUNT 8
struct mlxsw_swid_config {
@@ -183,11 +219,43 @@ struct mlxsw_driver {
const char *kind;
struct module *owner;
size_t priv_size;
- int (*init)(void *driver_priv, struct mlxsw_core *mlxsw_core,
+ int (*init)(struct mlxsw_core *mlxsw_core,
const struct mlxsw_bus_info *mlxsw_bus_info);
- void (*fini)(void *driver_priv);
- int (*port_split)(void *driver_priv, u8 local_port, unsigned int count);
- int (*port_unsplit)(void *driver_priv, u8 local_port);
+ void (*fini)(struct mlxsw_core *mlxsw_core);
+ int (*port_split)(struct mlxsw_core *mlxsw_core, u8 local_port,
+ unsigned int count);
+ int (*port_unsplit)(struct mlxsw_core *mlxsw_core, u8 local_port);
+ int (*sb_pool_get)(struct mlxsw_core *mlxsw_core,
+ unsigned int sb_index, u16 pool_index,
+ struct devlink_sb_pool_info *pool_info);
+ int (*sb_pool_set)(struct mlxsw_core *mlxsw_core,
+ unsigned int sb_index, u16 pool_index, u32 size,
+ enum devlink_sb_threshold_type threshold_type);
+ int (*sb_port_pool_get)(struct mlxsw_core_port *mlxsw_core_port,
+ unsigned int sb_index, u16 pool_index,
+ u32 *p_threshold);
+ int (*sb_port_pool_set)(struct mlxsw_core_port *mlxsw_core_port,
+ unsigned int sb_index, u16 pool_index,
+ u32 threshold);
+ int (*sb_tc_pool_bind_get)(struct mlxsw_core_port *mlxsw_core_port,
+ unsigned int sb_index, u16 tc_index,
+ enum devlink_sb_pool_type pool_type,
+ u16 *p_pool_index, u32 *p_threshold);
+ int (*sb_tc_pool_bind_set)(struct mlxsw_core_port *mlxsw_core_port,
+ unsigned int sb_index, u16 tc_index,
+ enum devlink_sb_pool_type pool_type,
+ u16 pool_index, u32 threshold);
+ int (*sb_occ_snapshot)(struct mlxsw_core *mlxsw_core,
+ unsigned int sb_index);
+ int (*sb_occ_max_clear)(struct mlxsw_core *mlxsw_core,
+ unsigned int sb_index);
+ int (*sb_occ_port_pool_get)(struct mlxsw_core_port *mlxsw_core_port,
+ unsigned int sb_index, u16 pool_index,
+ u32 *p_cur, u32 *p_max);
+ int (*sb_occ_tc_port_bind_get)(struct mlxsw_core_port *mlxsw_core_port,
+ unsigned int sb_index, u16 tc_index,
+ enum devlink_sb_pool_type pool_type,
+ u32 *p_cur, u32 *p_max);
void (*txhdr_construct)(struct sk_buff *skb,
const struct mlxsw_tx_info *tx_info);
u8 txhdr_len;
diff --git a/drivers/net/ethernet/mellanox/mlxsw/reg.h b/drivers/net/ethernet/mellanox/mlxsw/reg.h
index ffe4c0305733..1977e7a5c530 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/reg.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/reg.h
@@ -1805,6 +1805,184 @@ static inline void mlxsw_reg_spvmlr_pack(char *payload, u8 local_port,
}
}
+/* QTCT - QoS Switch Traffic Class Table
+ * -------------------------------------
+ * Configures the mapping between the packet switch priority and the
+ * traffic class on the transmit port.
+ */
+#define MLXSW_REG_QTCT_ID 0x400A
+#define MLXSW_REG_QTCT_LEN 0x08
+
+static const struct mlxsw_reg_info mlxsw_reg_qtct = {
+ .id = MLXSW_REG_QTCT_ID,
+ .len = MLXSW_REG_QTCT_LEN,
+};
+
+/* reg_qtct_local_port
+ * Local port number.
+ * Access: Index
+ *
+ * Note: CPU port is not supported.
+ */
+MLXSW_ITEM32(reg, qtct, local_port, 0x00, 16, 8);
+
+/* reg_qtct_sub_port
+ * Virtual port within the physical port.
+ * Should be set to 0 when virtual ports are not enabled on the port.
+ * Access: Index
+ */
+MLXSW_ITEM32(reg, qtct, sub_port, 0x00, 8, 8);
+
+/* reg_qtct_switch_prio
+ * Switch priority.
+ * Access: Index
+ */
+MLXSW_ITEM32(reg, qtct, switch_prio, 0x00, 0, 4);
+
+/* reg_qtct_tclass
+ * Traffic class.
+ * Default values:
+ * switch_prio 0 : tclass 1
+ * switch_prio 1 : tclass 0
+ * switch_prio i : tclass i, for i > 1
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, qtct, tclass, 0x04, 0, 4);
+
+static inline void mlxsw_reg_qtct_pack(char *payload, u8 local_port,
+ u8 switch_prio, u8 tclass)
+{
+ MLXSW_REG_ZERO(qtct, payload);
+ mlxsw_reg_qtct_local_port_set(payload, local_port);
+ mlxsw_reg_qtct_switch_prio_set(payload, switch_prio);
+ mlxsw_reg_qtct_tclass_set(payload, tclass);
+}
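Like the other register helpers, the pack routine only fills the payload; a caller pairs it with a write. A minimal sketch (hypothetical wrapper; the Spectrum driver adds exactly this as mlxsw_sp_port_prio_tc_set() further down):

static int example_prio_tc_set(struct mlxsw_core *mlxsw_core, u8 local_port,
			       u8 switch_prio, u8 tclass)
{
	char qtct_pl[MLXSW_REG_QTCT_LEN];

	mlxsw_reg_qtct_pack(qtct_pl, local_port, switch_prio, tclass);
	return mlxsw_reg_write(mlxsw_core, MLXSW_REG(qtct), qtct_pl);
}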
+
+/* QEEC - QoS ETS Element Configuration Register
+ * ---------------------------------------------
+ * Configures the ETS elements.
+ */
+#define MLXSW_REG_QEEC_ID 0x400D
+#define MLXSW_REG_QEEC_LEN 0x1C
+
+static const struct mlxsw_reg_info mlxsw_reg_qeec = {
+ .id = MLXSW_REG_QEEC_ID,
+ .len = MLXSW_REG_QEEC_LEN,
+};
+
+/* reg_qeec_local_port
+ * Local port number.
+ * Access: Index
+ *
+ * Note: CPU port is supported.
+ */
+MLXSW_ITEM32(reg, qeec, local_port, 0x00, 16, 8);
+
+enum mlxsw_reg_qeec_hr {
+ MLXSW_REG_QEEC_HIERARCY_PORT,
+ MLXSW_REG_QEEC_HIERARCY_GROUP,
+ MLXSW_REG_QEEC_HIERARCY_SUBGROUP,
+ MLXSW_REG_QEEC_HIERARCY_TC,
+};
+
+/* reg_qeec_element_hierarchy
+ * 0 - Port
+ * 1 - Group
+ * 2 - Subgroup
+ * 3 - Traffic Class
+ * Access: Index
+ */
+MLXSW_ITEM32(reg, qeec, element_hierarchy, 0x04, 16, 4);
+
+/* reg_qeec_element_index
+ * The index of the element in the hierarchy.
+ * Access: Index
+ */
+MLXSW_ITEM32(reg, qeec, element_index, 0x04, 0, 8);
+
+/* reg_qeec_next_element_index
+ * The index of the next (lower) element in the hierarchy.
+ * Access: RW
+ *
+ * Note: Reserved for element_hierarchy 0.
+ */
+MLXSW_ITEM32(reg, qeec, next_element_index, 0x08, 0, 8);
+
+enum {
+ MLXSW_REG_QEEC_BYTES_MODE,
+ MLXSW_REG_QEEC_PACKETS_MODE,
+};
+
+/* reg_qeec_pb
+ * Packets or bytes mode.
+ * 0 - Bytes mode
+ * 1 - Packets mode
+ * Access: RW
+ *
+ * Note: Used for max shaper configuration. For Spectrum, packets mode
+ * is supported only for traffic classes of CPU port.
+ */
+MLXSW_ITEM32(reg, qeec, pb, 0x0C, 28, 1);
+
+/* reg_qeec_mase
+ * Max shaper configuration enable. Enables configuration of the max
+ * shaper on this ETS element.
+ * 0 - Disable
+ * 1 - Enable
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, qeec, mase, 0x10, 31, 1);
+
+/* A large max rate will disable the max shaper. */
+#define MLXSW_REG_QEEC_MAS_DIS 200000000 /* Kbps */
+
+/* reg_qeec_max_shaper_rate
+ * Max shaper information rate.
+ * For CPU port, can only be configured for port hierarchy.
+ * When in bytes mode, value is specified in units of 1000bps.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, qeec, max_shaper_rate, 0x10, 0, 28);
+
+/* reg_qeec_de
+ * DWRR configuration enable. Enables configuration of the dwrr and
+ * dwrr_weight.
+ * 0 - Disable
+ * 1 - Enable
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, qeec, de, 0x18, 31, 1);
+
+/* reg_qeec_dwrr
+ * Transmission selection algorithm to use on the link going down from
+ * the ETS element.
+ * 0 - Strict priority
+ * 1 - DWRR
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, qeec, dwrr, 0x18, 15, 1);
+
+/* reg_qeec_dwrr_weight
+ * DWRR weight on the link going down from the ETS element. The
+ * percentage of bandwidth guaranteed to an ETS element within
+ * its hierarchy. The sum of all weights across all ETS elements
+ * within one hierarchy should be equal to 100. Reserved when
+ * transmission selection algorithm is strict priority.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, qeec, dwrr_weight, 0x18, 0, 8);
+
+static inline void mlxsw_reg_qeec_pack(char *payload, u8 local_port,
+ enum mlxsw_reg_qeec_hr hr, u8 index,
+ u8 next_index)
+{
+ MLXSW_REG_ZERO(qeec, payload);
+ mlxsw_reg_qeec_local_port_set(payload, local_port);
+ mlxsw_reg_qeec_element_hierarchy_set(payload, hr);
+ mlxsw_reg_qeec_element_index_set(payload, index);
+ mlxsw_reg_qeec_next_element_index_set(payload, next_index);
+}
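A QEEC access addresses one ETS element, and only the attributes whose enable bit is set are modified. A minimal sketch (hypothetical wrapper) that disables the max shaper on the port-level element, mirroring mlxsw_sp_port_ets_maxrate_set() later in this patch:

static int example_port_maxrate_disable(struct mlxsw_core *mlxsw_core,
					u8 local_port)
{
	char qeec_pl[MLXSW_REG_QEEC_LEN];

	mlxsw_reg_qeec_pack(qeec_pl, local_port,
			    MLXSW_REG_QEEC_HIERARCY_PORT, 0, 0);
	mlxsw_reg_qeec_mase_set(qeec_pl, true);
	mlxsw_reg_qeec_max_shaper_rate_set(qeec_pl, MLXSW_REG_QEEC_MAS_DIS);
	return mlxsw_reg_write(mlxsw_core, MLXSW_REG(qeec), qeec_pl);
}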
+
/* PMLP - Ports Module to Local Port Register
* ------------------------------------------
* Configures the assignment of modules to local ports.
@@ -2141,6 +2319,145 @@ static inline void mlxsw_reg_paos_pack(char *payload, u8 local_port,
mlxsw_reg_paos_e_set(payload, 1);
}
+/* PFCC - Ports Flow Control Configuration Register
+ * ------------------------------------------------
+ * Configures and retrieves the per port flow control configuration.
+ */
+#define MLXSW_REG_PFCC_ID 0x5007
+#define MLXSW_REG_PFCC_LEN 0x20
+
+static const struct mlxsw_reg_info mlxsw_reg_pfcc = {
+ .id = MLXSW_REG_PFCC_ID,
+ .len = MLXSW_REG_PFCC_LEN,
+};
+
+/* reg_pfcc_local_port
+ * Local port number.
+ * Access: Index
+ */
+MLXSW_ITEM32(reg, pfcc, local_port, 0x00, 16, 8);
+
+/* reg_pfcc_pnat
+ * Port number access type. Determines the way local_port is interpreted:
+ * 0 - Local port number.
+ * 1 - IB / label port number.
+ * Access: Index
+ */
+MLXSW_ITEM32(reg, pfcc, pnat, 0x00, 14, 2);
+
+/* reg_pfcc_shl_cap
+ * Send to higher layers capabilities:
+ * 0 - No capability of sending Pause and PFC frames to higher layers.
+ * 1 - Device has capability of sending Pause and PFC frames to higher
+ * layers.
+ * Access: RO
+ */
+MLXSW_ITEM32(reg, pfcc, shl_cap, 0x00, 1, 1);
+
+/* reg_pfcc_shl_opr
+ * Send to higher layers operation:
+ * 0 - Pause and PFC frames are handled by the port (default).
+ * 1 - Pause and PFC frames are handled by the port and also sent to
+ * higher layers. Only valid if shl_cap = 1.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, pfcc, shl_opr, 0x00, 0, 1);
+
+/* reg_pfcc_ppan
+ * Pause policy auto negotiation.
+ * 0 - Disabled. Generate / ignore Pause frames based on pptx / pprtx.
+ * 1 - Enabled. When auto-negotiation is performed, set the Pause policy
+ * based on the auto-negotiation resolution.
+ * Access: RW
+ *
+ * Note: The auto-negotiation advertisement is set according to pptx and
+ * pprtx. When PFC is set on Tx / Rx, ppan must be set to 0.
+ */
+MLXSW_ITEM32(reg, pfcc, ppan, 0x04, 28, 4);
+
+/* reg_pfcc_prio_mask_tx
+ * Bit per priority indicating if Tx flow control policy should be
+ * updated based on bit pfctx.
+ * Access: WO
+ */
+MLXSW_ITEM32(reg, pfcc, prio_mask_tx, 0x04, 16, 8);
+
+/* reg_pfcc_prio_mask_rx
+ * Bit per priority indicating if Rx flow control policy should be
+ * updated based on bit pfcrx.
+ * Access: WO
+ */
+MLXSW_ITEM32(reg, pfcc, prio_mask_rx, 0x04, 0, 8);
+
+/* reg_pfcc_pptx
+ * Admin Pause policy on Tx.
+ * 0 - Never generate Pause frames (default).
+ * 1 - Generate Pause frames according to Rx buffer threshold.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, pfcc, pptx, 0x08, 31, 1);
+
+/* reg_pfcc_aptx
+ * Active (operational) Pause policy on Tx.
+ * 0 - Never generate Pause frames.
+ * 1 - Generate Pause frames according to Rx buffer threshold.
+ * Access: RO
+ */
+MLXSW_ITEM32(reg, pfcc, aptx, 0x08, 30, 1);
+
+/* reg_pfcc_pfctx
+ * Priority based flow control policy on Tx[7:0]. Per-priority bit mask:
+ * 0 - Never generate priority Pause frames on the specified priority
+ * (default).
+ * 1 - Generate priority Pause frames according to Rx buffer threshold on
+ * the specified priority.
+ * Access: RW
+ *
+ * Note: pfctx and pptx must be mutually exclusive.
+ */
+MLXSW_ITEM32(reg, pfcc, pfctx, 0x08, 16, 8);
+
+/* reg_pfcc_pprx
+ * Admin Pause policy on Rx.
+ * 0 - Ignore received Pause frames (default).
+ * 1 - Respect received Pause frames.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, pfcc, pprx, 0x0C, 31, 1);
+
+/* reg_pfcc_aprx
+ * Active (operational) Pause policy on Rx.
+ * 0 - Ignore received Pause frames.
+ * 1 - Respect received Pause frames.
+ * Access: RO
+ */
+MLXSW_ITEM32(reg, pfcc, aprx, 0x0C, 30, 1);
+
+/* reg_pfcc_pfcrx
+ * Priority based flow control policy on Rx[7:0]. Per-priority bit mask:
+ * 0 - Ignore incoming priority Pause frames on the specified priority
+ * (default).
+ * 1 - Respect incoming priority Pause frames on the specified priority.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, pfcc, pfcrx, 0x0C, 16, 8);
+
+#define MLXSW_REG_PFCC_ALL_PRIO 0xFF
+
+static inline void mlxsw_reg_pfcc_prio_pack(char *payload, u8 pfc_en)
+{
+ mlxsw_reg_pfcc_prio_mask_tx_set(payload, MLXSW_REG_PFCC_ALL_PRIO);
+ mlxsw_reg_pfcc_prio_mask_rx_set(payload, MLXSW_REG_PFCC_ALL_PRIO);
+ mlxsw_reg_pfcc_pfctx_set(payload, pfc_en);
+ mlxsw_reg_pfcc_pfcrx_set(payload, pfc_en);
+}
+
+static inline void mlxsw_reg_pfcc_pack(char *payload, u8 local_port)
+{
+ MLXSW_REG_ZERO(pfcc, payload);
+ mlxsw_reg_pfcc_local_port_set(payload, local_port);
+}
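A minimal sketch (hypothetical wrapper) of a PFC enable: pack the register, apply the 8-bit priority mask through the prio helper and write it out; pptx/pprx would be set instead for plain link pause, as mlxsw_sp_port_pause_set() does below.

static int example_pfc_set(struct mlxsw_core *mlxsw_core, u8 local_port,
			   u8 pfc_en)
{
	char pfcc_pl[MLXSW_REG_PFCC_LEN];

	mlxsw_reg_pfcc_pack(pfcc_pl, local_port);
	mlxsw_reg_pfcc_prio_pack(pfcc_pl, pfc_en);
	return mlxsw_reg_write(mlxsw_core, MLXSW_REG(pfcc), pfcc_pl);
}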
+
/* PPCNT - Ports Performance Counters Register
* -------------------------------------------
* The PPCNT register retrieves per port performance counters.
@@ -2180,6 +2497,11 @@ MLXSW_ITEM32(reg, ppcnt, local_port, 0x00, 16, 8);
*/
MLXSW_ITEM32(reg, ppcnt, pnat, 0x00, 14, 2);
+enum mlxsw_reg_ppcnt_grp {
+ MLXSW_REG_PPCNT_IEEE_8023_CNT = 0x0,
+ MLXSW_REG_PPCNT_PRIO_CNT = 0x10,
+};
+
/* reg_ppcnt_grp
* Performance counter group.
* Group 63 indicates all groups. Only valid on Set() operation with
@@ -2215,6 +2537,8 @@ MLXSW_ITEM32(reg, ppcnt, clr, 0x04, 31, 1);
*/
MLXSW_ITEM32(reg, ppcnt, prio_tc, 0x04, 0, 5);
+/* Ethernet IEEE 802.3 Counter Group */
+
/* reg_ppcnt_a_frames_transmitted_ok
* Access: RO
*/
@@ -2329,15 +2653,145 @@ MLXSW_ITEM64(reg, ppcnt, a_pause_mac_ctrl_frames_received,
MLXSW_ITEM64(reg, ppcnt, a_pause_mac_ctrl_frames_transmitted,
0x08 + 0x90, 0, 64);
-static inline void mlxsw_reg_ppcnt_pack(char *payload, u8 local_port)
+/* Ethernet Per Priority Group Counters */
+
+/* reg_ppcnt_rx_octets
+ * Access: RO
+ */
+MLXSW_ITEM64(reg, ppcnt, rx_octets, 0x08 + 0x00, 0, 64);
+
+/* reg_ppcnt_rx_frames
+ * Access: RO
+ */
+MLXSW_ITEM64(reg, ppcnt, rx_frames, 0x08 + 0x20, 0, 64);
+
+/* reg_ppcnt_tx_octets
+ * Access: RO
+ */
+MLXSW_ITEM64(reg, ppcnt, tx_octets, 0x08 + 0x28, 0, 64);
+
+/* reg_ppcnt_tx_frames
+ * Access: RO
+ */
+MLXSW_ITEM64(reg, ppcnt, tx_frames, 0x08 + 0x48, 0, 64);
+
+/* reg_ppcnt_rx_pause
+ * Access: RO
+ */
+MLXSW_ITEM64(reg, ppcnt, rx_pause, 0x08 + 0x50, 0, 64);
+
+/* reg_ppcnt_rx_pause_duration
+ * Access: RO
+ */
+MLXSW_ITEM64(reg, ppcnt, rx_pause_duration, 0x08 + 0x58, 0, 64);
+
+/* reg_ppcnt_tx_pause
+ * Access: RO
+ */
+MLXSW_ITEM64(reg, ppcnt, tx_pause, 0x08 + 0x60, 0, 64);
+
+/* reg_ppcnt_tx_pause_duration
+ * Access: RO
+ */
+MLXSW_ITEM64(reg, ppcnt, tx_pause_duration, 0x08 + 0x68, 0, 64);
+
+/* reg_ppcnt_tx_pause_transition
+ * Access: RO
+ */
+MLXSW_ITEM64(reg, ppcnt, tx_pause_transition, 0x08 + 0x70, 0, 64);
+
+static inline void mlxsw_reg_ppcnt_pack(char *payload, u8 local_port,
+ enum mlxsw_reg_ppcnt_grp grp,
+ u8 prio_tc)
{
MLXSW_REG_ZERO(ppcnt, payload);
mlxsw_reg_ppcnt_swid_set(payload, 0);
mlxsw_reg_ppcnt_local_port_set(payload, local_port);
mlxsw_reg_ppcnt_pnat_set(payload, 0);
- mlxsw_reg_ppcnt_grp_set(payload, 0);
+ mlxsw_reg_ppcnt_grp_set(payload, grp);
mlxsw_reg_ppcnt_clr_set(payload, 0);
- mlxsw_reg_ppcnt_prio_tc_set(payload, 0);
+ mlxsw_reg_ppcnt_prio_tc_set(payload, prio_tc);
+}
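With the group parameter the same register now serves both counter groups; IEEE 802.3 callers keep the old behaviour by passing MLXSW_REG_PPCNT_IEEE_8023_CNT and prio_tc 0, as the ethtool path below now does. A sketch of a per-priority read (hypothetical wrapper):

static int example_prio_rx_pause_get(struct mlxsw_core *mlxsw_core,
				     u8 local_port, u8 prio, u64 *p_rx_pause)
{
	char ppcnt_pl[MLXSW_REG_PPCNT_LEN];
	int err;

	mlxsw_reg_ppcnt_pack(ppcnt_pl, local_port,
			     MLXSW_REG_PPCNT_PRIO_CNT, prio);
	err = mlxsw_reg_query(mlxsw_core, MLXSW_REG(ppcnt), ppcnt_pl);
	if (err)
		return err;
	*p_rx_pause = mlxsw_reg_ppcnt_rx_pause_get(ppcnt_pl);
	return 0;
}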
+
+/* PPTB - Port Prio To Buffer Register
+ * -----------------------------------
+ * Configures the switch priority to buffer table.
+ */
+#define MLXSW_REG_PPTB_ID 0x500B
+#define MLXSW_REG_PPTB_LEN 0x0C
+
+static const struct mlxsw_reg_info mlxsw_reg_pptb = {
+ .id = MLXSW_REG_PPTB_ID,
+ .len = MLXSW_REG_PPTB_LEN,
+};
+
+enum {
+ MLXSW_REG_PPTB_MM_UM,
+ MLXSW_REG_PPTB_MM_UNICAST,
+ MLXSW_REG_PPTB_MM_MULTICAST,
+};
+
+/* reg_pptb_mm
+ * Mapping mode.
+ * 0 - Map both unicast and multicast packets to the same buffer.
+ * 1 - Map only unicast packets.
+ * 2 - Map only multicast packets.
+ * Access: Index
+ *
+ * Note: SwitchX-2 only supports the first option.
+ */
+MLXSW_ITEM32(reg, pptb, mm, 0x00, 28, 2);
+
+/* reg_pptb_local_port
+ * Local port number.
+ * Access: Index
+ */
+MLXSW_ITEM32(reg, pptb, local_port, 0x00, 16, 8);
+
+/* reg_pptb_um
+ * Enables the update of the untagged_buf field.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, pptb, um, 0x00, 8, 1);
+
+/* reg_pptb_pm
+ * Enables the update of the prio_to_buff field.
+ * Bit <i> is a flag for updating the mapping for switch priority <i>.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, pptb, pm, 0x00, 0, 8);
+
+/* reg_pptb_prio_to_buff
+ * Mapping of switch priority <i> to one of the allocated receive port
+ * buffers.
+ * Access: RW
+ */
+MLXSW_ITEM_BIT_ARRAY(reg, pptb, prio_to_buff, 0x04, 0x04, 4);
+
+/* reg_pptb_pm_msb
+ * Enables the update of the prio_to_buff field.
+ * Bit <i> is a flag for updating the mapping for switch priority <i+8>.
+ * Access: RW
+ */
+MLXSW_ITEM32(reg, pptb, pm_msb, 0x08, 24, 8);
+
+/* reg_pptb_untagged_buff
+ * Mapping of untagged frames to one of the allocated receive port buffers.
+ * Access: RW
+ *
+ * Note: In SwitchX-2 this field must be mapped to buffer 8. Reserved for
+ * Spectrum, as it maps untagged packets based on the default switch priority.
+ */
+MLXSW_ITEM32(reg, pptb, untagged_buff, 0x08, 0, 4);
+
+#define MLXSW_REG_PPTB_ALL_PRIO 0xFF
+
+static inline void mlxsw_reg_pptb_pack(char *payload, u8 local_port)
+{
+ MLXSW_REG_ZERO(pptb, payload);
+ mlxsw_reg_pptb_mm_set(payload, MLXSW_REG_PPTB_MM_UM);
+ mlxsw_reg_pptb_local_port_set(payload, local_port);
+ mlxsw_reg_pptb_pm_set(payload, MLXSW_REG_PPTB_ALL_PRIO);
}
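The pack helper already requests a mapping update for all eight priorities, so a caller only chooses the destination buffer per priority before writing. A minimal sketch (hypothetical wrapper):

static int example_prio_to_buff_set(struct mlxsw_core *mlxsw_core,
				    u8 local_port, u8 buff_index)
{
	char pptb_pl[MLXSW_REG_PPTB_LEN];
	int i;

	mlxsw_reg_pptb_pack(pptb_pl, local_port);
	for (i = 0; i < 8; i++)
		mlxsw_reg_pptb_prio_to_buff_set(pptb_pl, i, buff_index);
	return mlxsw_reg_write(mlxsw_core, MLXSW_REG(pptb), pptb_pl);
}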
/* PBMC - Port Buffer Management Control Register
@@ -2346,7 +2800,7 @@ static inline void mlxsw_reg_ppcnt_pack(char *payload, u8 local_port)
* allocation for different Prios, and the Pause threshold management.
*/
#define MLXSW_REG_PBMC_ID 0x500C
-#define MLXSW_REG_PBMC_LEN 0x68
+#define MLXSW_REG_PBMC_LEN 0x6C
static const struct mlxsw_reg_info mlxsw_reg_pbmc = {
.id = MLXSW_REG_PBMC_ID,
@@ -2374,6 +2828,8 @@ MLXSW_ITEM32(reg, pbmc, xoff_timer_value, 0x04, 16, 16);
*/
MLXSW_ITEM32(reg, pbmc, xoff_refresh, 0x04, 0, 16);
+#define MLXSW_REG_PBMC_PORT_SHARED_BUF_IDX 11
+
/* reg_pbmc_buf_lossy
* The field indicates if the buffer is lossy.
* 0 - Lossless
@@ -2398,6 +2854,30 @@ MLXSW_ITEM32_INDEXED(reg, pbmc, buf_epsb, 0x0C, 24, 1, 0x08, 0x00, false);
*/
MLXSW_ITEM32_INDEXED(reg, pbmc, buf_size, 0x0C, 0, 16, 0x08, 0x00, false);
+/* reg_pbmc_buf_xoff_threshold
+ * Once the amount of data in the buffer goes above this value, device
+ * starts sending PFC frames for all priorities associated with the
+ * buffer. Units are represented in cells. Reserved in case of lossy
+ * buffer.
+ * Access: RW
+ *
+ * Note: In Spectrum, reserved for buffer[9].
+ */
+MLXSW_ITEM32_INDEXED(reg, pbmc, buf_xoff_threshold, 0x0C, 16, 16,
+ 0x08, 0x04, false);
+
+/* reg_pbmc_buf_xon_threshold
+ * When the amount of data in the buffer goes below this value, device
+ * stops sending PFC frames for the priorities associated with the
+ * buffer. Units are represented in cells. Reserved in case of lossy
+ * buffer.
+ * Access: RW
+ *
+ * Note: In Spectrum, reserved for buffer[9].
+ */
+MLXSW_ITEM32_INDEXED(reg, pbmc, buf_xon_threshold, 0x0C, 0, 16,
+ 0x08, 0x04, false);
+
static inline void mlxsw_reg_pbmc_pack(char *payload, u8 local_port,
u16 xoff_timer_value, u16 xoff_refresh)
{
@@ -2416,6 +2896,17 @@ static inline void mlxsw_reg_pbmc_lossy_buffer_pack(char *payload,
mlxsw_reg_pbmc_buf_size_set(payload, buf_index, size);
}
+static inline void mlxsw_reg_pbmc_lossless_buffer_pack(char *payload,
+ int buf_index, u16 size,
+ u16 threshold)
+{
+ mlxsw_reg_pbmc_buf_lossy_set(payload, buf_index, 0);
+ mlxsw_reg_pbmc_buf_epsb_set(payload, buf_index, 0);
+ mlxsw_reg_pbmc_buf_size_set(payload, buf_index, size);
+ mlxsw_reg_pbmc_buf_xoff_threshold_set(payload, buf_index, threshold);
+ mlxsw_reg_pbmc_buf_xon_threshold_set(payload, buf_index, threshold);
+}
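PBMC is used read-modify-write so that untouched buffers keep their configuration; one headroom buffer is then flipped to lossless with its xoff/xon thresholds, which is what the new headroom code in spectrum.c below does per PG. A minimal sketch (hypothetical wrapper, sizes in cells):

static int example_lossless_buf_set(struct mlxsw_core *mlxsw_core,
				    u8 local_port, int buf_index,
				    u16 size_cells, u16 thres_cells)
{
	char pbmc_pl[MLXSW_REG_PBMC_LEN];
	int err;

	mlxsw_reg_pbmc_pack(pbmc_pl, local_port, 0, 0);
	err = mlxsw_reg_query(mlxsw_core, MLXSW_REG(pbmc), pbmc_pl);
	if (err)
		return err;
	mlxsw_reg_pbmc_lossless_buffer_pack(pbmc_pl, buf_index, size_cells,
					    thres_cells);
	return mlxsw_reg_write(mlxsw_core, MLXSW_REG(pbmc), pbmc_pl);
}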
+
/* PSPA - Port Switch Partition Allocation
* ---------------------------------------
* Controls the association of a port with a switch partition and enables
@@ -2985,9 +3476,10 @@ static const struct mlxsw_reg_info mlxsw_reg_sbpr = {
.len = MLXSW_REG_SBPR_LEN,
};
-enum mlxsw_reg_sbpr_dir {
- MLXSW_REG_SBPR_DIR_INGRESS,
- MLXSW_REG_SBPR_DIR_EGRESS,
+/* shared direction enum for SBPR, SBCM, SBPM */
+enum mlxsw_reg_sbxx_dir {
+ MLXSW_REG_SBXX_DIR_INGRESS,
+ MLXSW_REG_SBXX_DIR_EGRESS,
};
/* reg_sbpr_dir
@@ -3020,7 +3512,7 @@ enum mlxsw_reg_sbpr_mode {
MLXSW_ITEM32(reg, sbpr, mode, 0x08, 0, 4);
static inline void mlxsw_reg_sbpr_pack(char *payload, u8 pool,
- enum mlxsw_reg_sbpr_dir dir,
+ enum mlxsw_reg_sbxx_dir dir,
enum mlxsw_reg_sbpr_mode mode, u32 size)
{
MLXSW_REG_ZERO(sbpr, payload);
@@ -3062,11 +3554,6 @@ MLXSW_ITEM32(reg, sbcm, local_port, 0x00, 16, 8);
*/
MLXSW_ITEM32(reg, sbcm, pg_buff, 0x00, 8, 6);
-enum mlxsw_reg_sbcm_dir {
- MLXSW_REG_SBCM_DIR_INGRESS,
- MLXSW_REG_SBCM_DIR_EGRESS,
-};
-
/* reg_sbcm_dir
* Direction.
* Access: Index
@@ -3079,6 +3566,10 @@ MLXSW_ITEM32(reg, sbcm, dir, 0x00, 0, 2);
*/
MLXSW_ITEM32(reg, sbcm, min_buff, 0x18, 0, 24);
+/* shared max_buff limits for dynamic threshold for SBCM, SBPM */
+#define MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN 1
+#define MLXSW_REG_SBXX_DYN_MAX_BUFF_MAX 14
+
/* reg_sbcm_max_buff
* When the pool associated to the port-pg/tclass is configured to
* static, Maximum buffer size for the limiter configured in cells.
@@ -3099,7 +3590,7 @@ MLXSW_ITEM32(reg, sbcm, max_buff, 0x1C, 0, 24);
MLXSW_ITEM32(reg, sbcm, pool, 0x24, 0, 4);
static inline void mlxsw_reg_sbcm_pack(char *payload, u8 local_port, u8 pg_buff,
- enum mlxsw_reg_sbcm_dir dir,
+ enum mlxsw_reg_sbxx_dir dir,
u32 min_buff, u32 max_buff, u8 pool)
{
MLXSW_REG_ZERO(sbcm, payload);
@@ -3111,8 +3602,8 @@ static inline void mlxsw_reg_sbcm_pack(char *payload, u8 local_port, u8 pg_buff,
mlxsw_reg_sbcm_pool_set(payload, pool);
}
-/* SBPM - Shared Buffer Class Management Register
- * ----------------------------------------------
+/* SBPM - Shared Buffer Port Management Register
+ * ---------------------------------------------
* The SBPM register configures and retrieves the shared buffer allocation
* and configuration according to Port-Pool, including the definition
* of the associated quota.
@@ -3139,17 +3630,33 @@ MLXSW_ITEM32(reg, sbpm, local_port, 0x00, 16, 8);
*/
MLXSW_ITEM32(reg, sbpm, pool, 0x00, 8, 4);
-enum mlxsw_reg_sbpm_dir {
- MLXSW_REG_SBPM_DIR_INGRESS,
- MLXSW_REG_SBPM_DIR_EGRESS,
-};
-
/* reg_sbpm_dir
* Direction.
* Access: Index
*/
MLXSW_ITEM32(reg, sbpm, dir, 0x00, 0, 2);
+/* reg_sbpm_buff_occupancy
+ * Current buffer occupancy in cells.
+ * Access: RO
+ */
+MLXSW_ITEM32(reg, sbpm, buff_occupancy, 0x10, 0, 24);
+
+/* reg_sbpm_clr
+ * Clear Max Buffer Occupancy
+ * When this bit is set, max_buff_occupancy field is cleared (and a
+ * new max value is tracked from the time the clear was performed).
+ * Access: OP
+ */
+MLXSW_ITEM32(reg, sbpm, clr, 0x14, 31, 1);
+
+/* reg_sbpm_max_buff_occupancy
+ * Maximum value of buffer occupancy in cells monitored. Cleared by
+ * writing to the clr field.
+ * Access: RO
+ */
+MLXSW_ITEM32(reg, sbpm, max_buff_occupancy, 0x14, 0, 24);
+
/* reg_sbpm_min_buff
* Minimum buffer size for the limiter, in cells.
* Access: RW
@@ -3170,17 +3677,25 @@ MLXSW_ITEM32(reg, sbpm, min_buff, 0x18, 0, 24);
MLXSW_ITEM32(reg, sbpm, max_buff, 0x1C, 0, 24);
static inline void mlxsw_reg_sbpm_pack(char *payload, u8 local_port, u8 pool,
- enum mlxsw_reg_sbpm_dir dir,
+ enum mlxsw_reg_sbxx_dir dir, bool clr,
u32 min_buff, u32 max_buff)
{
MLXSW_REG_ZERO(sbpm, payload);
mlxsw_reg_sbpm_local_port_set(payload, local_port);
mlxsw_reg_sbpm_pool_set(payload, pool);
mlxsw_reg_sbpm_dir_set(payload, dir);
+ mlxsw_reg_sbpm_clr_set(payload, clr);
mlxsw_reg_sbpm_min_buff_set(payload, min_buff);
mlxsw_reg_sbpm_max_buff_set(payload, max_buff);
}
+static inline void mlxsw_reg_sbpm_unpack(char *payload, u32 *p_buff_occupancy,
+ u32 *p_max_buff_occupancy)
+{
+ *p_buff_occupancy = mlxsw_reg_sbpm_buff_occupancy_get(payload);
+ *p_max_buff_occupancy = mlxsw_reg_sbpm_max_buff_occupancy_get(payload);
+}
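A minimal synchronous sketch of an occupancy read (hypothetical wrapper): pack with zero limits, query and unpack the two read-only fields. The driver below issues the same query asynchronously through mlxsw_reg_trans_query().

static int example_sbpm_occ_get(struct mlxsw_core *mlxsw_core, u8 local_port,
				u8 pool, enum mlxsw_reg_sbxx_dir dir,
				u32 *p_cur, u32 *p_max)
{
	char sbpm_pl[MLXSW_REG_SBPM_LEN];
	int err;

	mlxsw_reg_sbpm_pack(sbpm_pl, local_port, pool, dir, false, 0, 0);
	err = mlxsw_reg_query(mlxsw_core, MLXSW_REG(sbpm), sbpm_pl);
	if (err)
		return err;
	mlxsw_reg_sbpm_unpack(sbpm_pl, p_cur, p_max);
	return 0;
}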
+
/* SBMM - Shared Buffer Multicast Management Register
* --------------------------------------------------
* The SBMM register configures and retrieves the shared buffer allocation
@@ -3236,6 +3751,104 @@ static inline void mlxsw_reg_sbmm_pack(char *payload, u8 prio, u32 min_buff,
mlxsw_reg_sbmm_pool_set(payload, pool);
}
+/* SBSR - Shared Buffer Status Register
+ * ------------------------------------
+ * The SBSR register retrieves the shared buffer occupancy according to
+ * Port-Pool. Note that this register enables reading a large amount of data.
+ * It is the user's responsibility to limit the amount of data to ensure the
+ * response can match the maximum transfer unit. In case the response exceeds
+ * the maximum transfer unit, it will be truncated with no special notice.
+ */
+#define MLXSW_REG_SBSR_ID 0xB005
+#define MLXSW_REG_SBSR_BASE_LEN 0x5C /* base length, without records */
+#define MLXSW_REG_SBSR_REC_LEN 0x8 /* record length */
+#define MLXSW_REG_SBSR_REC_MAX_COUNT 120
+#define MLXSW_REG_SBSR_LEN (MLXSW_REG_SBSR_BASE_LEN + \
+ MLXSW_REG_SBSR_REC_LEN * \
+ MLXSW_REG_SBSR_REC_MAX_COUNT)
+
+static const struct mlxsw_reg_info mlxsw_reg_sbsr = {
+ .id = MLXSW_REG_SBSR_ID,
+ .len = MLXSW_REG_SBSR_LEN,
+};
+
+/* reg_sbsr_clr
+ * Clear Max Buffer Occupancy. When this bit is set, the max_buff_occupancy
+ * field is cleared (and a new max value is tracked from the time the clear
+ * was performed).
+ * Access: OP
+ */
+MLXSW_ITEM32(reg, sbsr, clr, 0x00, 31, 1);
+
+/* reg_sbsr_ingress_port_mask
+ * Bit vector for all ingress network ports.
+ * Indicates which of the ports (for which the relevant bit is set)
+ * are affected by the set operation. Configuration of any other port
+ * does not change.
+ * Access: Index
+ */
+MLXSW_ITEM_BIT_ARRAY(reg, sbsr, ingress_port_mask, 0x10, 0x20, 1);
+
+/* reg_sbsr_pg_buff_mask
+ * Bit vector for all switch priority groups.
+ * Indicates which of the priorities (for which the relevant bit is set)
+ * are affected by the set operation. Configuration of any other priority
+ * does not change.
+ * Range is 0..cap_max_pg_buffers - 1
+ * Access: Index
+ */
+MLXSW_ITEM_BIT_ARRAY(reg, sbsr, pg_buff_mask, 0x30, 0x4, 1);
+
+/* reg_sbsr_egress_port_mask
+ * Bit vector for all egress network ports.
+ * Indicates which of the ports (for which the relevant bit is set)
+ * are affected by the set operation. Configuration of any other port
+ * does not change.
+ * Access: Index
+ */
+MLXSW_ITEM_BIT_ARRAY(reg, sbsr, egress_port_mask, 0x34, 0x20, 1);
+
+/* reg_sbsr_tclass_mask
+ * Bit vector for all traffic classes.
+ * Indicates which of the traffic classes (for which the relevant bit is
+ * set) are affected by the set operation. Configuration of any other
+ * traffic class does not change.
+ * Range is 0..cap_max_tclass - 1
+ * Access: Index
+ */
+MLXSW_ITEM_BIT_ARRAY(reg, sbsr, tclass_mask, 0x54, 0x8, 1);
+
+static inline void mlxsw_reg_sbsr_pack(char *payload, bool clr)
+{
+ MLXSW_REG_ZERO(sbsr, payload);
+ mlxsw_reg_sbsr_clr_set(payload, clr);
+}
+
+/* reg_sbsr_rec_buff_occupancy
+ * Current buffer occupancy in cells.
+ * Access: RO
+ */
+MLXSW_ITEM32_INDEXED(reg, sbsr, rec_buff_occupancy, MLXSW_REG_SBSR_BASE_LEN,
+ 0, 24, MLXSW_REG_SBSR_REC_LEN, 0x00, false);
+
+/* reg_sbsr_rec_max_buff_occupancy
+ * Maximum value of buffer occupancy in cells monitored. Cleared by
+ * writing to the clr field.
+ * Access: RO
+ */
+MLXSW_ITEM32_INDEXED(reg, sbsr, rec_max_buff_occupancy, MLXSW_REG_SBSR_BASE_LEN,
+ 0, 24, MLXSW_REG_SBSR_REC_LEN, 0x04, false);
+
+static inline void mlxsw_reg_sbsr_rec_unpack(char *payload, int rec_index,
+ u32 *p_buff_occupancy,
+ u32 *p_max_buff_occupancy)
+{
+ *p_buff_occupancy =
+ mlxsw_reg_sbsr_rec_buff_occupancy_get(payload, rec_index);
+ *p_max_buff_occupancy =
+ mlxsw_reg_sbsr_rec_max_buff_occupancy_get(payload, rec_index);
+}
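A minimal sketch of an SBSR query (hypothetical wrapper): the caller selects ports and PGs/TCs via the masks and unpacks one record per selected pair. The record ordering is assumed here to follow the mask iteration, and the selection has to stay small enough to respect the truncation note above.

static int example_sbsr_ingress_occ_get(struct mlxsw_core *mlxsw_core,
					u8 local_port, u32 *occ, u32 *occ_max)
{
	char *sbsr_pl;
	int i, err;

	sbsr_pl = kmalloc(MLXSW_REG_SBSR_LEN, GFP_KERNEL);
	if (!sbsr_pl)
		return -ENOMEM;
	mlxsw_reg_sbsr_pack(sbsr_pl, false);
	mlxsw_reg_sbsr_ingress_port_mask_set(sbsr_pl, local_port, 1);
	for (i = 0; i < 8; i++)	/* one record per selected ingress PG */
		mlxsw_reg_sbsr_pg_buff_mask_set(sbsr_pl, i, 1);
	err = mlxsw_reg_query(mlxsw_core, MLXSW_REG(sbsr), sbsr_pl);
	if (!err)
		for (i = 0; i < 8; i++)
			mlxsw_reg_sbsr_rec_unpack(sbsr_pl, i,
						  &occ[i], &occ_max[i]);
	kfree(sbsr_pl);
	return err;
}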
+
static inline const char *mlxsw_reg_id_str(u16 reg_id)
{
switch (reg_id) {
@@ -3283,6 +3896,10 @@ static inline const char *mlxsw_reg_id_str(u16 reg_id)
return "SFMR";
case MLXSW_REG_SPVMLR_ID:
return "SPVMLR";
+ case MLXSW_REG_QTCT_ID:
+ return "QTCT";
+ case MLXSW_REG_QEEC_ID:
+ return "QEEC";
case MLXSW_REG_PMLP_ID:
return "PMLP";
case MLXSW_REG_PMTU_ID:
@@ -3293,8 +3910,12 @@ static inline const char *mlxsw_reg_id_str(u16 reg_id)
return "PPAD";
case MLXSW_REG_PAOS_ID:
return "PAOS";
+ case MLXSW_REG_PFCC_ID:
+ return "PFCC";
case MLXSW_REG_PPCNT_ID:
return "PPCNT";
+ case MLXSW_REG_PPTB_ID:
+ return "PPTB";
case MLXSW_REG_PBMC_ID:
return "PBMC";
case MLXSW_REG_PSPA_ID:
@@ -3323,6 +3944,8 @@ static inline const char *mlxsw_reg_id_str(u16 reg_id)
return "SBPM";
case MLXSW_REG_SBMM_ID:
return "SBMM";
+ case MLXSW_REG_SBSR_ID:
+ return "SBSR";
default:
return "*UNKNOWN*";
}
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
index 4afbc3e9e381..4a7273771028 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.c
@@ -49,7 +49,7 @@
#include <linux/jiffies.h>
#include <linux/bitops.h>
#include <linux/list.h>
-#include <net/devlink.h>
+#include <linux/dcbnl.h>
#include <net/switchdev.h>
#include <generated/utsrelease.h>
@@ -305,9 +305,9 @@ mlxsw_sp_port_system_port_mapping_set(struct mlxsw_sp_port *mlxsw_sp_port)
return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sspr), sspr_pl);
}
-static int mlxsw_sp_port_module_info_get(struct mlxsw_sp *mlxsw_sp,
- u8 local_port, u8 *p_module,
- u8 *p_width)
+static int __mlxsw_sp_port_module_info_get(struct mlxsw_sp *mlxsw_sp,
+ u8 local_port, u8 *p_module,
+ u8 *p_width, u8 *p_lane)
{
char pmlp_pl[MLXSW_REG_PMLP_LEN];
int err;
@@ -318,9 +318,20 @@ static int mlxsw_sp_port_module_info_get(struct mlxsw_sp *mlxsw_sp,
return err;
*p_module = mlxsw_reg_pmlp_module_get(pmlp_pl, 0);
*p_width = mlxsw_reg_pmlp_width_get(pmlp_pl);
+ *p_lane = mlxsw_reg_pmlp_tx_lane_get(pmlp_pl, 0);
return 0;
}
+static int mlxsw_sp_port_module_info_get(struct mlxsw_sp *mlxsw_sp,
+ u8 local_port, u8 *p_module,
+ u8 *p_width)
+{
+ u8 lane;
+
+ return __mlxsw_sp_port_module_info_get(mlxsw_sp, local_port, p_module,
+ p_width, &lane);
+}
+
static int mlxsw_sp_port_module_map(struct mlxsw_sp *mlxsw_sp, u8 local_port,
u8 module, u8 width, u8 lane)
{
@@ -379,7 +390,7 @@ static netdev_tx_t mlxsw_sp_port_xmit(struct sk_buff *skb,
u64 len;
int err;
- if (mlxsw_core_skb_transmit_busy(mlxsw_sp, &tx_info))
+ if (mlxsw_core_skb_transmit_busy(mlxsw_sp->core, &tx_info))
return NETDEV_TX_BUSY;
if (unlikely(skb_headroom(skb) < MLXSW_TXHDR_LEN)) {
@@ -403,7 +414,7 @@ static netdev_tx_t mlxsw_sp_port_xmit(struct sk_buff *skb,
/* Due to a race we might fail here because of a full queue. In that
* unlikely case we simply drop the packet.
*/
- err = mlxsw_core_skb_transmit(mlxsw_sp, skb, &tx_info);
+ err = mlxsw_core_skb_transmit(mlxsw_sp->core, skb, &tx_info);
if (!err) {
pcpu_stats = this_cpu_ptr(mlxsw_sp_port->pcpu_stats);
@@ -438,16 +449,89 @@ static int mlxsw_sp_port_set_mac_address(struct net_device *dev, void *p)
return 0;
}
+static void mlxsw_sp_pg_buf_pack(char *pbmc_pl, int pg_index, int mtu,
+ bool pause_en, bool pfc_en, u16 delay)
+{
+ u16 pg_size = 2 * MLXSW_SP_BYTES_TO_CELLS(mtu);
+
+ delay = pfc_en ? mlxsw_sp_pfc_delay_get(mtu, delay) :
+ MLXSW_SP_PAUSE_DELAY;
+
+ if (pause_en || pfc_en)
+ mlxsw_reg_pbmc_lossless_buffer_pack(pbmc_pl, pg_index,
+ pg_size + delay, pg_size);
+ else
+ mlxsw_reg_pbmc_lossy_buffer_pack(pbmc_pl, pg_index, pg_size);
+}
+
+int __mlxsw_sp_port_headroom_set(struct mlxsw_sp_port *mlxsw_sp_port, int mtu,
+ u8 *prio_tc, bool pause_en,
+ struct ieee_pfc *my_pfc)
+{
+ struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+ u8 pfc_en = !!my_pfc ? my_pfc->pfc_en : 0;
+ u16 delay = !!my_pfc ? my_pfc->delay : 0;
+ char pbmc_pl[MLXSW_REG_PBMC_LEN];
+ int i, j, err;
+
+ mlxsw_reg_pbmc_pack(pbmc_pl, mlxsw_sp_port->local_port, 0, 0);
+ err = mlxsw_reg_query(mlxsw_sp->core, MLXSW_REG(pbmc), pbmc_pl);
+ if (err)
+ return err;
+
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ bool configure = false;
+ bool pfc = false;
+
+ for (j = 0; j < IEEE_8021QAZ_MAX_TCS; j++) {
+ if (prio_tc[j] == i) {
+ pfc = pfc_en & BIT(j);
+ configure = true;
+ break;
+ }
+ }
+
+ if (!configure)
+ continue;
+ mlxsw_sp_pg_buf_pack(pbmc_pl, i, mtu, pause_en, pfc, delay);
+ }
+
+ return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(pbmc), pbmc_pl);
+}
+
+static int mlxsw_sp_port_headroom_set(struct mlxsw_sp_port *mlxsw_sp_port,
+ int mtu, bool pause_en)
+{
+ u8 def_prio_tc[IEEE_8021QAZ_MAX_TCS] = {0};
+ bool dcb_en = !!mlxsw_sp_port->dcb.ets;
+ struct ieee_pfc *my_pfc;
+ u8 *prio_tc;
+
+ prio_tc = dcb_en ? mlxsw_sp_port->dcb.ets->prio_tc : def_prio_tc;
+ my_pfc = dcb_en ? mlxsw_sp_port->dcb.pfc : NULL;
+
+ return __mlxsw_sp_port_headroom_set(mlxsw_sp_port, mtu, prio_tc,
+ pause_en, my_pfc);
+}
+
static int mlxsw_sp_port_change_mtu(struct net_device *dev, int mtu)
{
struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
+ bool pause_en = mlxsw_sp_port_is_pause_en(mlxsw_sp_port);
int err;
- err = mlxsw_sp_port_mtu_set(mlxsw_sp_port, mtu);
+ err = mlxsw_sp_port_headroom_set(mlxsw_sp_port, mtu, pause_en);
if (err)
return err;
+ err = mlxsw_sp_port_mtu_set(mlxsw_sp_port, mtu);
+ if (err)
+ goto err_port_mtu_set;
dev->mtu = mtu;
return 0;
+
+err_port_mtu_set:
+ mlxsw_sp_port_headroom_set(mlxsw_sp_port, dev->mtu, pause_en);
+ return err;
}
static struct rtnl_link_stats64 *
@@ -861,6 +945,33 @@ int mlxsw_sp_port_kill_vid(struct net_device *dev,
return 0;
}
+static int mlxsw_sp_port_get_phys_port_name(struct net_device *dev, char *name,
+ size_t len)
+{
+ struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
+ u8 module, width, lane;
+ int err;
+
+ err = __mlxsw_sp_port_module_info_get(mlxsw_sp_port->mlxsw_sp,
+ mlxsw_sp_port->local_port,
+ &module, &width, &lane);
+ if (err) {
+ netdev_err(dev, "Failed to retrieve module information\n");
+ return err;
+ }
+
+ if (!mlxsw_sp_port->split)
+ err = snprintf(name, len, "p%d", module + 1);
+ else
+ err = snprintf(name, len, "p%ds%d", module + 1,
+ lane / width);
+
+ if (err >= len)
+ return -EINVAL;
+
+ return 0;
+}
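Derived examples of the resulting names (illustrative, not stated in the patch):

/* module = 4, non-split port       -> "p5"
 * module = 4, width = 2, lane = 2  -> "p5s1" (second port of the split)
 */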
+
static const struct net_device_ops mlxsw_sp_port_netdev_ops = {
.ndo_open = mlxsw_sp_port_open,
.ndo_stop = mlxsw_sp_port_stop,
@@ -877,6 +988,7 @@ static const struct net_device_ops mlxsw_sp_port_netdev_ops = {
.ndo_bridge_setlink = switchdev_port_bridge_setlink,
.ndo_bridge_getlink = switchdev_port_bridge_getlink,
.ndo_bridge_dellink = switchdev_port_bridge_dellink,
+ .ndo_get_phys_port_name = mlxsw_sp_port_get_phys_port_name,
};
static void mlxsw_sp_port_get_drvinfo(struct net_device *dev,
@@ -897,6 +1009,68 @@ static void mlxsw_sp_port_get_drvinfo(struct net_device *dev,
sizeof(drvinfo->bus_info));
}
+static void mlxsw_sp_port_get_pauseparam(struct net_device *dev,
+ struct ethtool_pauseparam *pause)
+{
+ struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
+
+ pause->rx_pause = mlxsw_sp_port->link.rx_pause;
+ pause->tx_pause = mlxsw_sp_port->link.tx_pause;
+}
+
+static int mlxsw_sp_port_pause_set(struct mlxsw_sp_port *mlxsw_sp_port,
+ struct ethtool_pauseparam *pause)
+{
+ char pfcc_pl[MLXSW_REG_PFCC_LEN];
+
+ mlxsw_reg_pfcc_pack(pfcc_pl, mlxsw_sp_port->local_port);
+ mlxsw_reg_pfcc_pprx_set(pfcc_pl, pause->rx_pause);
+ mlxsw_reg_pfcc_pptx_set(pfcc_pl, pause->tx_pause);
+
+ return mlxsw_reg_write(mlxsw_sp_port->mlxsw_sp->core, MLXSW_REG(pfcc),
+ pfcc_pl);
+}
+
+static int mlxsw_sp_port_set_pauseparam(struct net_device *dev,
+ struct ethtool_pauseparam *pause)
+{
+ struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
+ bool pause_en = pause->tx_pause || pause->rx_pause;
+ int err;
+
+ if (mlxsw_sp_port->dcb.pfc && mlxsw_sp_port->dcb.pfc->pfc_en) {
+ netdev_err(dev, "PFC already enabled on port\n");
+ return -EINVAL;
+ }
+
+ if (pause->autoneg) {
+ netdev_err(dev, "PAUSE frames autonegotiation isn't supported\n");
+ return -EINVAL;
+ }
+
+ err = mlxsw_sp_port_headroom_set(mlxsw_sp_port, dev->mtu, pause_en);
+ if (err) {
+ netdev_err(dev, "Failed to configure port's headroom\n");
+ return err;
+ }
+
+ err = mlxsw_sp_port_pause_set(mlxsw_sp_port, pause);
+ if (err) {
+ netdev_err(dev, "Failed to set PAUSE parameters\n");
+ goto err_port_pause_configure;
+ }
+
+ mlxsw_sp_port->link.rx_pause = pause->rx_pause;
+ mlxsw_sp_port->link.tx_pause = pause->tx_pause;
+
+ return 0;
+
+err_port_pause_configure:
+ pause_en = mlxsw_sp_port_is_pause_en(mlxsw_sp_port);
+ mlxsw_sp_port_headroom_set(mlxsw_sp_port, dev->mtu, pause_en);
+ return err;
+}
+
struct mlxsw_sp_port_hw_stats {
char str[ETH_GSTRING_LEN];
u64 (*getter)(char *payload);
@@ -1032,7 +1206,8 @@ static void mlxsw_sp_port_get_stats(struct net_device *dev,
int i;
int err;
- mlxsw_reg_ppcnt_pack(ppcnt_pl, mlxsw_sp_port->local_port);
+ mlxsw_reg_ppcnt_pack(ppcnt_pl, mlxsw_sp_port->local_port,
+ MLXSW_REG_PPCNT_IEEE_8023_CNT, 0);
err = mlxsw_reg_query(mlxsw_sp->core, MLXSW_REG(ppcnt), ppcnt_pl);
for (i = 0; i < MLXSW_SP_PORT_HW_STATS_LEN; i++)
data[i] = !err ? mlxsw_sp_port_hw_stats[i].getter(ppcnt_pl) : 0;
@@ -1380,6 +1555,8 @@ static int mlxsw_sp_port_set_settings(struct net_device *dev,
static const struct ethtool_ops mlxsw_sp_port_ethtool_ops = {
.get_drvinfo = mlxsw_sp_port_get_drvinfo,
.get_link = ethtool_op_get_link,
+ .get_pauseparam = mlxsw_sp_port_get_pauseparam,
+ .set_pauseparam = mlxsw_sp_port_set_pauseparam,
.get_strings = mlxsw_sp_port_get_strings,
.set_phys_id = mlxsw_sp_port_set_phys_id,
.get_ethtool_stats = mlxsw_sp_port_get_stats,
@@ -1402,12 +1579,112 @@ mlxsw_sp_port_speed_by_width_set(struct mlxsw_sp_port *mlxsw_sp_port, u8 width)
return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(ptys), ptys_pl);
}
+int mlxsw_sp_port_ets_set(struct mlxsw_sp_port *mlxsw_sp_port,
+ enum mlxsw_reg_qeec_hr hr, u8 index, u8 next_index,
+ bool dwrr, u8 dwrr_weight)
+{
+ struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+ char qeec_pl[MLXSW_REG_QEEC_LEN];
+
+ mlxsw_reg_qeec_pack(qeec_pl, mlxsw_sp_port->local_port, hr, index,
+ next_index);
+ mlxsw_reg_qeec_de_set(qeec_pl, true);
+ mlxsw_reg_qeec_dwrr_set(qeec_pl, dwrr);
+ mlxsw_reg_qeec_dwrr_weight_set(qeec_pl, dwrr_weight);
+ return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(qeec), qeec_pl);
+}
+
+int mlxsw_sp_port_ets_maxrate_set(struct mlxsw_sp_port *mlxsw_sp_port,
+ enum mlxsw_reg_qeec_hr hr, u8 index,
+ u8 next_index, u32 maxrate)
+{
+ struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+ char qeec_pl[MLXSW_REG_QEEC_LEN];
+
+ mlxsw_reg_qeec_pack(qeec_pl, mlxsw_sp_port->local_port, hr, index,
+ next_index);
+ mlxsw_reg_qeec_mase_set(qeec_pl, true);
+ mlxsw_reg_qeec_max_shaper_rate_set(qeec_pl, maxrate);
+ return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(qeec), qeec_pl);
+}
+
+int mlxsw_sp_port_prio_tc_set(struct mlxsw_sp_port *mlxsw_sp_port,
+ u8 switch_prio, u8 tclass)
+{
+ struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+ char qtct_pl[MLXSW_REG_QTCT_LEN];
+
+ mlxsw_reg_qtct_pack(qtct_pl, mlxsw_sp_port->local_port, switch_prio,
+ tclass);
+ return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(qtct), qtct_pl);
+}
+
+static int mlxsw_sp_port_ets_init(struct mlxsw_sp_port *mlxsw_sp_port)
+{
+ int err, i;
+
+ /* Set up the element hierarchy, so that each TC is linked to
+ * one subgroup and all subgroups are members of the same group.
+ */
+ err = mlxsw_sp_port_ets_set(mlxsw_sp_port,
+ MLXSW_REG_QEEC_HIERARCY_GROUP, 0, 0, false,
+ 0);
+ if (err)
+ return err;
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ err = mlxsw_sp_port_ets_set(mlxsw_sp_port,
+ MLXSW_REG_QEEC_HIERARCY_SUBGROUP, i,
+ 0, false, 0);
+ if (err)
+ return err;
+ }
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ err = mlxsw_sp_port_ets_set(mlxsw_sp_port,
+ MLXSW_REG_QEEC_HIERARCY_TC, i, i,
+ false, 0);
+ if (err)
+ return err;
+ }
+
+ /* Make sure the max shaper is disabled in all hierarchies that
+ * support it.
+ */
+ err = mlxsw_sp_port_ets_maxrate_set(mlxsw_sp_port,
+ MLXSW_REG_QEEC_HIERARCY_PORT, 0, 0,
+ MLXSW_REG_QEEC_MAS_DIS);
+ if (err)
+ return err;
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ err = mlxsw_sp_port_ets_maxrate_set(mlxsw_sp_port,
+ MLXSW_REG_QEEC_HIERARCY_SUBGROUP,
+ i, 0,
+ MLXSW_REG_QEEC_MAS_DIS);
+ if (err)
+ return err;
+ }
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ err = mlxsw_sp_port_ets_maxrate_set(mlxsw_sp_port,
+ MLXSW_REG_QEEC_HIERARCY_TC,
+ i, i,
+ MLXSW_REG_QEEC_MAS_DIS);
+ if (err)
+ return err;
+ }
+
+ /* Map all priorities to traffic class 0. */
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ err = mlxsw_sp_port_prio_tc_set(mlxsw_sp_port, i, 0);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
static int __mlxsw_sp_port_create(struct mlxsw_sp *mlxsw_sp, u8 local_port,
bool split, u8 module, u8 width)
{
- struct devlink *devlink = priv_to_devlink(mlxsw_sp->core);
struct mlxsw_sp_port *mlxsw_sp_port;
- struct devlink_port *devlink_port;
struct net_device *dev;
size_t bytes;
int err;
@@ -1460,16 +1737,6 @@ static int __mlxsw_sp_port_create(struct mlxsw_sp *mlxsw_sp, u8 local_port,
*/
dev->hard_header_len += MLXSW_TXHDR_LEN;
- devlink_port = &mlxsw_sp_port->devlink_port;
- if (mlxsw_sp_port->split)
- devlink_port_split_set(devlink_port, module);
- err = devlink_port_register(devlink, devlink_port, local_port);
- if (err) {
- dev_err(mlxsw_sp->bus_info->dev, "Port %d: Failed to register devlink port\n",
- mlxsw_sp_port->local_port);
- goto err_devlink_port_register;
- }
-
err = mlxsw_sp_port_system_port_mapping_set(mlxsw_sp_port);
if (err) {
dev_err(mlxsw_sp->bus_info->dev, "Port %d: Failed to set system port mapping\n",
@@ -1509,6 +1776,21 @@ static int __mlxsw_sp_port_create(struct mlxsw_sp *mlxsw_sp, u8 local_port,
goto err_port_buffers_init;
}
+ err = mlxsw_sp_port_ets_init(mlxsw_sp_port);
+ if (err) {
+ dev_err(mlxsw_sp->bus_info->dev, "Port %d: Failed to initialize ETS\n",
+ mlxsw_sp_port->local_port);
+ goto err_port_ets_init;
+ }
+
+ /* ETS and buffers must be initialized before DCB. */
+ err = mlxsw_sp_port_dcb_init(mlxsw_sp_port);
+ if (err) {
+ dev_err(mlxsw_sp->bus_info->dev, "Port %d: Failed to initialize DCB\n",
+ mlxsw_sp_port->local_port);
+ goto err_port_dcb_init;
+ }
+
mlxsw_sp_port_switchdev_init(mlxsw_sp_port);
err = register_netdev(dev);
if (err) {
@@ -1517,7 +1799,14 @@ static int __mlxsw_sp_port_create(struct mlxsw_sp *mlxsw_sp, u8 local_port,
goto err_register_netdev;
}
- devlink_port_type_eth_set(devlink_port, dev);
+ err = mlxsw_core_port_init(mlxsw_sp->core, &mlxsw_sp_port->core_port,
+ mlxsw_sp_port->local_port, dev,
+ mlxsw_sp_port->split, module);
+ if (err) {
+ dev_err(mlxsw_sp->bus_info->dev, "Port %d: Failed to init core port\n",
+ mlxsw_sp_port->local_port);
+ goto err_core_port_init;
+ }
err = mlxsw_sp_port_vlan_init(mlxsw_sp_port);
if (err)
@@ -1527,16 +1816,18 @@ static int __mlxsw_sp_port_create(struct mlxsw_sp *mlxsw_sp, u8 local_port,
return 0;
err_port_vlan_init:
+ mlxsw_core_port_fini(&mlxsw_sp_port->core_port);
+err_core_port_init:
unregister_netdev(dev);
err_register_netdev:
+err_port_dcb_init:
+err_port_ets_init:
err_port_buffers_init:
err_port_admin_status_set:
err_port_mtu_set:
err_port_speed_by_width_set:
err_port_swid_set:
err_port_system_port_mapping_set:
- devlink_port_unregister(&mlxsw_sp_port->devlink_port);
-err_devlink_port_register:
err_dev_addr_init:
free_percpu(mlxsw_sp_port->pcpu_stats);
err_alloc_stats:
@@ -1590,15 +1881,13 @@ static void mlxsw_sp_port_vports_fini(struct mlxsw_sp_port *mlxsw_sp_port)
static void mlxsw_sp_port_remove(struct mlxsw_sp *mlxsw_sp, u8 local_port)
{
struct mlxsw_sp_port *mlxsw_sp_port = mlxsw_sp->ports[local_port];
- struct devlink_port *devlink_port;
if (!mlxsw_sp_port)
return;
mlxsw_sp->ports[local_port] = NULL;
- devlink_port = &mlxsw_sp_port->devlink_port;
- devlink_port_type_clear(devlink_port);
+ mlxsw_core_port_fini(&mlxsw_sp_port->core_port);
unregister_netdev(mlxsw_sp_port->dev); /* This calls ndo_stop */
- devlink_port_unregister(devlink_port);
+ mlxsw_sp_port_dcb_fini(mlxsw_sp_port);
mlxsw_sp_port_vports_fini(mlxsw_sp_port);
mlxsw_sp_port_switchdev_fini(mlxsw_sp_port);
mlxsw_sp_port_swid_set(mlxsw_sp_port, MLXSW_PORT_SWID_DISABLED_PORT);
@@ -1659,9 +1948,10 @@ static u8 mlxsw_sp_cluster_base_port_get(u8 local_port)
return local_port - offset;
}
-static int mlxsw_sp_port_split(void *priv, u8 local_port, unsigned int count)
+static int mlxsw_sp_port_split(struct mlxsw_core *mlxsw_core, u8 local_port,
+ unsigned int count)
{
- struct mlxsw_sp *mlxsw_sp = priv;
+ struct mlxsw_sp *mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core);
struct mlxsw_sp_port *mlxsw_sp_port;
u8 width = MLXSW_PORT_MODULE_MAX_WIDTH / count;
u8 module, cur_width, base_port;
@@ -1733,9 +2023,9 @@ err_port_create:
return err;
}
-static int mlxsw_sp_port_unsplit(void *priv, u8 local_port)
+static int mlxsw_sp_port_unsplit(struct mlxsw_core *mlxsw_core, u8 local_port)
{
- struct mlxsw_sp *mlxsw_sp = priv;
+ struct mlxsw_sp *mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core);
struct mlxsw_sp_port *mlxsw_sp_port;
u8 module, cur_width, base_port;
unsigned int count;
@@ -2080,10 +2370,10 @@ static int mlxsw_sp_lag_init(struct mlxsw_sp *mlxsw_sp)
return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(slcr), slcr_pl);
}
-static int mlxsw_sp_init(void *priv, struct mlxsw_core *mlxsw_core,
+static int mlxsw_sp_init(struct mlxsw_core *mlxsw_core,
const struct mlxsw_bus_info *mlxsw_bus_info)
{
- struct mlxsw_sp *mlxsw_sp = priv;
+ struct mlxsw_sp *mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core);
int err;
mlxsw_sp->core = mlxsw_core;
@@ -2144,6 +2434,7 @@ static int mlxsw_sp_init(void *priv, struct mlxsw_core *mlxsw_core,
err_switchdev_init:
err_lag_init:
+ mlxsw_sp_buffers_fini(mlxsw_sp);
err_buffers_init:
err_flood_init:
mlxsw_sp_traps_fini(mlxsw_sp);
@@ -2154,11 +2445,12 @@ err_event_register:
return err;
}
-static void mlxsw_sp_fini(void *priv)
+static void mlxsw_sp_fini(struct mlxsw_core *mlxsw_core)
{
- struct mlxsw_sp *mlxsw_sp = priv;
+ struct mlxsw_sp *mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core);
mlxsw_sp_switchdev_fini(mlxsw_sp);
+ mlxsw_sp_buffers_fini(mlxsw_sp);
mlxsw_sp_traps_fini(mlxsw_sp);
mlxsw_sp_event_unregister(mlxsw_sp, MLXSW_TRAP_ID_PUDE);
mlxsw_sp_ports_remove(mlxsw_sp);
@@ -2201,16 +2493,26 @@ static struct mlxsw_config_profile mlxsw_sp_config_profile = {
};
static struct mlxsw_driver mlxsw_sp_driver = {
- .kind = MLXSW_DEVICE_KIND_SPECTRUM,
- .owner = THIS_MODULE,
- .priv_size = sizeof(struct mlxsw_sp),
- .init = mlxsw_sp_init,
- .fini = mlxsw_sp_fini,
- .port_split = mlxsw_sp_port_split,
- .port_unsplit = mlxsw_sp_port_unsplit,
- .txhdr_construct = mlxsw_sp_txhdr_construct,
- .txhdr_len = MLXSW_TXHDR_LEN,
- .profile = &mlxsw_sp_config_profile,
+ .kind = MLXSW_DEVICE_KIND_SPECTRUM,
+ .owner = THIS_MODULE,
+ .priv_size = sizeof(struct mlxsw_sp),
+ .init = mlxsw_sp_init,
+ .fini = mlxsw_sp_fini,
+ .port_split = mlxsw_sp_port_split,
+ .port_unsplit = mlxsw_sp_port_unsplit,
+ .sb_pool_get = mlxsw_sp_sb_pool_get,
+ .sb_pool_set = mlxsw_sp_sb_pool_set,
+ .sb_port_pool_get = mlxsw_sp_sb_port_pool_get,
+ .sb_port_pool_set = mlxsw_sp_sb_port_pool_set,
+ .sb_tc_pool_bind_get = mlxsw_sp_sb_tc_pool_bind_get,
+ .sb_tc_pool_bind_set = mlxsw_sp_sb_tc_pool_bind_set,
+ .sb_occ_snapshot = mlxsw_sp_sb_occ_snapshot,
+ .sb_occ_max_clear = mlxsw_sp_sb_occ_max_clear,
+ .sb_occ_port_pool_get = mlxsw_sp_sb_occ_port_pool_get,
+ .sb_occ_tc_port_bind_get = mlxsw_sp_sb_occ_tc_port_bind_get,
+ .txhdr_construct = mlxsw_sp_txhdr_construct,
+ .txhdr_len = MLXSW_TXHDR_LEN,
+ .profile = &mlxsw_sp_config_profile,
};
static int
@@ -2541,11 +2843,11 @@ static int mlxsw_sp_port_lag_join(struct mlxsw_sp_port *mlxsw_sp_port,
lag->ref_count++;
return 0;
+err_col_port_enable:
+ mlxsw_sp_lag_col_port_remove(mlxsw_sp_port, lag_id);
err_col_port_add:
if (!lag->ref_count)
mlxsw_sp_lag_destroy(mlxsw_sp, lag_id);
-err_col_port_enable:
- mlxsw_sp_lag_col_port_remove(mlxsw_sp_port, lag_id);
return err;
}
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
index 4b8abaf06321..e2c022d3e2f3 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum.h
@@ -42,15 +42,15 @@
#include <linux/bitops.h>
#include <linux/if_vlan.h>
#include <linux/list.h>
+#include <linux/dcbnl.h>
#include <net/switchdev.h>
-#include <net/devlink.h>
#include "port.h"
#include "core.h"
#define MLXSW_SP_VFID_BASE VLAN_N_VID
#define MLXSW_SP_VFID_PORT_MAX 512 /* Non-bridged VLAN interfaces */
-#define MLXSW_SP_VFID_BR_MAX 8192 /* Bridged VLAN interfaces */
+#define MLXSW_SP_VFID_BR_MAX 6144 /* Bridged VLAN interfaces */
#define MLXSW_SP_VFID_MAX (MLXSW_SP_VFID_PORT_MAX + MLXSW_SP_VFID_BR_MAX)
#define MLXSW_SP_LAG_MAX 64
@@ -62,6 +62,24 @@
#define MLXSW_SP_PORT_BASE_SPEED 25000 /* Mb/s */
+#define MLXSW_SP_BYTES_PER_CELL 96
+
+#define MLXSW_SP_BYTES_TO_CELLS(b) DIV_ROUND_UP(b, MLXSW_SP_BYTES_PER_CELL)
+#define MLXSW_SP_CELLS_TO_BYTES(c) ((c) * MLXSW_SP_BYTES_PER_CELL)
+
+/* Maximum delay buffer needed in case of PAUSE frames, in cells.
+ * Assumes 100m cable and maximum MTU.
+ */
+#define MLXSW_SP_PAUSE_DELAY 612
+
+#define MLXSW_SP_CELL_FACTOR 2 /* 2 * cell_size / (IPG + cell_size + 1) */
+
+static inline u16 mlxsw_sp_pfc_delay_get(int mtu, u16 delay)
+{
+ delay = MLXSW_SP_BYTES_TO_CELLS(DIV_ROUND_UP(delay, BITS_PER_BYTE));
+ return MLXSW_SP_CELL_FACTOR * delay + MLXSW_SP_BYTES_TO_CELLS(mtu);
+}
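A worked example of the conversion above (illustrative numbers): with mtu = 1518 and a PFC delay allowance of 32768 bits,

/*   DIV_ROUND_UP(32768, 8)           = 4096 bytes
 *   DIV_ROUND_UP(4096, 96)           = 43 cells
 *   2 * 43 + DIV_ROUND_UP(1518, 96)  = 86 + 16 = 102 cells
 */

so mlxsw_sp_pfc_delay_get(1518, 32768) evaluates to 102 cells of extra headroom.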
+
struct mlxsw_sp_port;
struct mlxsw_sp_upper {
@@ -100,6 +118,40 @@ static inline bool mlxsw_sp_fid_is_vfid(u16 fid)
return fid >= MLXSW_SP_VFID_BASE;
}
+struct mlxsw_sp_sb_pr {
+ enum mlxsw_reg_sbpr_mode mode;
+ u32 size;
+};
+
+struct mlxsw_cp_sb_occ {
+ u32 cur;
+ u32 max;
+};
+
+struct mlxsw_sp_sb_cm {
+ u32 min_buff;
+ u32 max_buff;
+ u8 pool;
+ struct mlxsw_cp_sb_occ occ;
+};
+
+struct mlxsw_sp_sb_pm {
+ u32 min_buff;
+ u32 max_buff;
+ struct mlxsw_cp_sb_occ occ;
+};
+
+#define MLXSW_SP_SB_POOL_COUNT 4
+#define MLXSW_SP_SB_TC_COUNT 8
+
+struct mlxsw_sp_sb {
+ struct mlxsw_sp_sb_pr prs[2][MLXSW_SP_SB_POOL_COUNT];
+ struct {
+ struct mlxsw_sp_sb_cm cms[2][MLXSW_SP_SB_TC_COUNT];
+ struct mlxsw_sp_sb_pm pms[2][MLXSW_SP_SB_POOL_COUNT];
+ } ports[MLXSW_PORT_MAX_PORTS];
+};
+
struct mlxsw_sp {
struct {
struct list_head list;
@@ -130,6 +182,7 @@ struct mlxsw_sp {
struct mlxsw_sp_upper master_bridge;
struct mlxsw_sp_upper lags[MLXSW_SP_LAG_MAX];
u8 port_to_module[MLXSW_PORT_MAX_PORTS];
+ struct mlxsw_sp_sb sb;
};
static inline struct mlxsw_sp_upper *
@@ -148,6 +201,7 @@ struct mlxsw_sp_port_pcpu_stats {
};
struct mlxsw_sp_port {
+ struct mlxsw_core_port core_port; /* must be first */
struct net_device *dev;
struct mlxsw_sp_port_pcpu_stats __percpu *pcpu_stats;
struct mlxsw_sp *mlxsw_sp;
@@ -166,14 +220,28 @@ struct mlxsw_sp_port {
struct mlxsw_sp_vfid *vfid;
u16 vid;
} vport;
+ struct {
+ u8 tx_pause:1,
+ rx_pause:1;
+ } link;
+ struct {
+ struct ieee_ets *ets;
+ struct ieee_maxrate *maxrate;
+ struct ieee_pfc *pfc;
+ } dcb;
/* 802.1Q bridge VLANs */
unsigned long *active_vlans;
unsigned long *untagged_vlans;
/* VLAN interfaces */
struct list_head vports_list;
- struct devlink_port devlink_port;
};
+static inline bool
+mlxsw_sp_port_is_pause_en(const struct mlxsw_sp_port *mlxsw_sp_port)
+{
+ return mlxsw_sp_port->link.tx_pause || mlxsw_sp_port->link.rx_pause;
+}
+
static inline struct mlxsw_sp_port *
mlxsw_sp_port_lagged_get(struct mlxsw_sp *mlxsw_sp, u16 lag_id, u8 port_index)
{
@@ -245,7 +313,39 @@ enum mlxsw_sp_flood_table {
};
int mlxsw_sp_buffers_init(struct mlxsw_sp *mlxsw_sp);
+void mlxsw_sp_buffers_fini(struct mlxsw_sp *mlxsw_sp);
int mlxsw_sp_port_buffers_init(struct mlxsw_sp_port *mlxsw_sp_port);
+int mlxsw_sp_sb_pool_get(struct mlxsw_core *mlxsw_core,
+ unsigned int sb_index, u16 pool_index,
+ struct devlink_sb_pool_info *pool_info);
+int mlxsw_sp_sb_pool_set(struct mlxsw_core *mlxsw_core,
+ unsigned int sb_index, u16 pool_index, u32 size,
+ enum devlink_sb_threshold_type threshold_type);
+int mlxsw_sp_sb_port_pool_get(struct mlxsw_core_port *mlxsw_core_port,
+ unsigned int sb_index, u16 pool_index,
+ u32 *p_threshold);
+int mlxsw_sp_sb_port_pool_set(struct mlxsw_core_port *mlxsw_core_port,
+ unsigned int sb_index, u16 pool_index,
+ u32 threshold);
+int mlxsw_sp_sb_tc_pool_bind_get(struct mlxsw_core_port *mlxsw_core_port,
+ unsigned int sb_index, u16 tc_index,
+ enum devlink_sb_pool_type pool_type,
+ u16 *p_pool_index, u32 *p_threshold);
+int mlxsw_sp_sb_tc_pool_bind_set(struct mlxsw_core_port *mlxsw_core_port,
+ unsigned int sb_index, u16 tc_index,
+ enum devlink_sb_pool_type pool_type,
+ u16 pool_index, u32 threshold);
+int mlxsw_sp_sb_occ_snapshot(struct mlxsw_core *mlxsw_core,
+ unsigned int sb_index);
+int mlxsw_sp_sb_occ_max_clear(struct mlxsw_core *mlxsw_core,
+ unsigned int sb_index);
+int mlxsw_sp_sb_occ_port_pool_get(struct mlxsw_core_port *mlxsw_core_port,
+ unsigned int sb_index, u16 pool_index,
+ u32 *p_cur, u32 *p_max);
+int mlxsw_sp_sb_occ_tc_port_bind_get(struct mlxsw_core_port *mlxsw_core_port,
+ unsigned int sb_index, u16 tc_index,
+ enum devlink_sb_pool_type pool_type,
+ u32 *p_cur, u32 *p_max);
int mlxsw_sp_switchdev_init(struct mlxsw_sp *mlxsw_sp);
void mlxsw_sp_switchdev_fini(struct mlxsw_sp *mlxsw_sp);
@@ -265,5 +365,33 @@ int mlxsw_sp_vport_flood_set(struct mlxsw_sp_port *mlxsw_sp_vport, u16 vfid,
bool set, bool only_uc);
void mlxsw_sp_port_active_vlans_del(struct mlxsw_sp_port *mlxsw_sp_port);
int mlxsw_sp_port_pvid_set(struct mlxsw_sp_port *mlxsw_sp_port, u16 vid);
+int mlxsw_sp_port_ets_set(struct mlxsw_sp_port *mlxsw_sp_port,
+ enum mlxsw_reg_qeec_hr hr, u8 index, u8 next_index,
+ bool dwrr, u8 dwrr_weight);
+int mlxsw_sp_port_prio_tc_set(struct mlxsw_sp_port *mlxsw_sp_port,
+ u8 switch_prio, u8 tclass);
+int __mlxsw_sp_port_headroom_set(struct mlxsw_sp_port *mlxsw_sp_port, int mtu,
+ u8 *prio_tc, bool pause_en,
+ struct ieee_pfc *my_pfc);
+int mlxsw_sp_port_ets_maxrate_set(struct mlxsw_sp_port *mlxsw_sp_port,
+ enum mlxsw_reg_qeec_hr hr, u8 index,
+ u8 next_index, u32 maxrate);
+
+#ifdef CONFIG_MLXSW_SPECTRUM_DCB
+
+int mlxsw_sp_port_dcb_init(struct mlxsw_sp_port *mlxsw_sp_port);
+void mlxsw_sp_port_dcb_fini(struct mlxsw_sp_port *mlxsw_sp_port);
+
+#else
+
+static inline int mlxsw_sp_port_dcb_init(struct mlxsw_sp_port *mlxsw_sp_port)
+{
+ return 0;
+}
+
+static inline void mlxsw_sp_port_dcb_fini(struct mlxsw_sp_port *mlxsw_sp_port)
+{}
+
+#endif
#endif
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
index d59195e3f7fb..a3720a0fad7d 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_buffers.c
@@ -34,36 +34,140 @@
#include <linux/kernel.h>
#include <linux/types.h>
+#include <linux/dcbnl.h>
+#include <linux/if_ether.h>
+#include <linux/list.h>
#include "spectrum.h"
#include "core.h"
#include "port.h"
#include "reg.h"
-struct mlxsw_sp_pb {
- u8 index;
- u16 size;
-};
+static struct mlxsw_sp_sb_pr *mlxsw_sp_sb_pr_get(struct mlxsw_sp *mlxsw_sp,
+ u8 pool,
+ enum mlxsw_reg_sbxx_dir dir)
+{
+ return &mlxsw_sp->sb.prs[dir][pool];
+}
-#define MLXSW_SP_PB(_index, _size) \
- { \
- .index = _index, \
- .size = _size, \
+static struct mlxsw_sp_sb_cm *mlxsw_sp_sb_cm_get(struct mlxsw_sp *mlxsw_sp,
+ u8 local_port, u8 pg_buff,
+ enum mlxsw_reg_sbxx_dir dir)
+{
+ return &mlxsw_sp->sb.ports[local_port].cms[dir][pg_buff];
+}
+
+static struct mlxsw_sp_sb_pm *mlxsw_sp_sb_pm_get(struct mlxsw_sp *mlxsw_sp,
+ u8 local_port, u8 pool,
+ enum mlxsw_reg_sbxx_dir dir)
+{
+ return &mlxsw_sp->sb.ports[local_port].pms[dir][pool];
+}
+
+static int mlxsw_sp_sb_pr_write(struct mlxsw_sp *mlxsw_sp, u8 pool,
+ enum mlxsw_reg_sbxx_dir dir,
+ enum mlxsw_reg_sbpr_mode mode, u32 size)
+{
+ char sbpr_pl[MLXSW_REG_SBPR_LEN];
+ struct mlxsw_sp_sb_pr *pr;
+ int err;
+
+ mlxsw_reg_sbpr_pack(sbpr_pl, pool, dir, mode, size);
+ err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sbpr), sbpr_pl);
+ if (err)
+ return err;
+
+ pr = mlxsw_sp_sb_pr_get(mlxsw_sp, pool, dir);
+ pr->mode = mode;
+ pr->size = size;
+ return 0;
+}
+
+static int mlxsw_sp_sb_cm_write(struct mlxsw_sp *mlxsw_sp, u8 local_port,
+ u8 pg_buff, enum mlxsw_reg_sbxx_dir dir,
+ u32 min_buff, u32 max_buff, u8 pool)
+{
+ char sbcm_pl[MLXSW_REG_SBCM_LEN];
+ int err;
+
+ mlxsw_reg_sbcm_pack(sbcm_pl, local_port, pg_buff, dir,
+ min_buff, max_buff, pool);
+ err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sbcm), sbcm_pl);
+ if (err)
+ return err;
+ if (pg_buff < MLXSW_SP_SB_TC_COUNT) {
+ struct mlxsw_sp_sb_cm *cm;
+
+ cm = mlxsw_sp_sb_cm_get(mlxsw_sp, local_port, pg_buff, dir);
+ cm->min_buff = min_buff;
+ cm->max_buff = max_buff;
+ cm->pool = pool;
}
+ return 0;
+}
+
+static int mlxsw_sp_sb_pm_write(struct mlxsw_sp *mlxsw_sp, u8 local_port,
+ u8 pool, enum mlxsw_reg_sbxx_dir dir,
+ u32 min_buff, u32 max_buff)
+{
+ char sbpm_pl[MLXSW_REG_SBPM_LEN];
+ struct mlxsw_sp_sb_pm *pm;
+ int err;
+
+ mlxsw_reg_sbpm_pack(sbpm_pl, local_port, pool, dir, false,
+ min_buff, max_buff);
+ err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sbpm), sbpm_pl);
+ if (err)
+ return err;
+
+ pm = mlxsw_sp_sb_pm_get(mlxsw_sp, local_port, pool, dir);
+ pm->min_buff = min_buff;
+ pm->max_buff = max_buff;
+ return 0;
+}
+
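+/* Issuing an SBPM query with the clear bit set resets the per-port-pool
+ * maximum occupancy watermark as part of the bulk transaction.
+ */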
+static int mlxsw_sp_sb_pm_occ_clear(struct mlxsw_sp *mlxsw_sp, u8 local_port,
+ u8 pool, enum mlxsw_reg_sbxx_dir dir,
+ struct list_head *bulk_list)
+{
+ char sbpm_pl[MLXSW_REG_SBPM_LEN];
+
+ mlxsw_reg_sbpm_pack(sbpm_pl, local_port, pool, dir, true, 0, 0);
+ return mlxsw_reg_trans_query(mlxsw_sp->core, MLXSW_REG(sbpm), sbpm_pl,
+ bulk_list, NULL, 0);
+}
+
+static void mlxsw_sp_sb_pm_occ_query_cb(struct mlxsw_core *mlxsw_core,
+ char *sbpm_pl, size_t sbpm_pl_len,
+ unsigned long cb_priv)
+{
+ struct mlxsw_sp_sb_pm *pm = (struct mlxsw_sp_sb_pm *) cb_priv;
+
+ mlxsw_reg_sbpm_unpack(sbpm_pl, &pm->occ.cur, &pm->occ.max);
+}
+
+static int mlxsw_sp_sb_pm_occ_query(struct mlxsw_sp *mlxsw_sp, u8 local_port,
+ u8 pool, enum mlxsw_reg_sbxx_dir dir,
+ struct list_head *bulk_list)
+{
+ char sbpm_pl[MLXSW_REG_SBPM_LEN];
+ struct mlxsw_sp_sb_pm *pm;
+
+ pm = mlxsw_sp_sb_pm_get(mlxsw_sp, local_port, pool, dir);
+ mlxsw_reg_sbpm_pack(sbpm_pl, local_port, pool, dir, false, 0, 0);
+ return mlxsw_reg_trans_query(mlxsw_sp->core, MLXSW_REG(sbpm), sbpm_pl,
+ bulk_list,
+ mlxsw_sp_sb_pm_occ_query_cb,
+ (unsigned long) pm);
+}
-static const struct mlxsw_sp_pb mlxsw_sp_pbs[] = {
- MLXSW_SP_PB(0, 208),
- MLXSW_SP_PB(1, 208),
- MLXSW_SP_PB(2, 208),
- MLXSW_SP_PB(3, 208),
- MLXSW_SP_PB(4, 208),
- MLXSW_SP_PB(5, 208),
- MLXSW_SP_PB(6, 208),
- MLXSW_SP_PB(7, 208),
- MLXSW_SP_PB(9, 208),
+static const u16 mlxsw_sp_pbs[] = {
+ [0] = 2 * MLXSW_SP_BYTES_TO_CELLS(ETH_FRAME_LEN),
+ [9] = 2 * MLXSW_SP_BYTES_TO_CELLS(MLXSW_PORT_MAX_MTU),
};
#define MLXSW_SP_PBS_LEN ARRAY_SIZE(mlxsw_sp_pbs)
+#define MLXSW_SP_PB_UNUSED 8
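+/* Buffer 8 is intentionally absent from the table above and skipped
+ * below, matching the old table which configured buffers 0-7 and 9 only.
+ */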
static int mlxsw_sp_port_pb_init(struct mlxsw_sp_port *mlxsw_sp_port)
{
@@ -73,194 +177,206 @@ static int mlxsw_sp_port_pb_init(struct mlxsw_sp_port *mlxsw_sp_port)
mlxsw_reg_pbmc_pack(pbmc_pl, mlxsw_sp_port->local_port,
0xffff, 0xffff / 2);
for (i = 0; i < MLXSW_SP_PBS_LEN; i++) {
- const struct mlxsw_sp_pb *pb;
-
- pb = &mlxsw_sp_pbs[i];
- mlxsw_reg_pbmc_lossy_buffer_pack(pbmc_pl, pb->index, pb->size);
+ if (i == MLXSW_SP_PB_UNUSED)
+ continue;
+ mlxsw_reg_pbmc_lossy_buffer_pack(pbmc_pl, i, mlxsw_sp_pbs[i]);
}
+ mlxsw_reg_pbmc_lossy_buffer_pack(pbmc_pl,
+ MLXSW_REG_PBMC_PORT_SHARED_BUF_IDX, 0);
return mlxsw_reg_write(mlxsw_sp_port->mlxsw_sp->core,
MLXSW_REG(pbmc), pbmc_pl);
}
-#define MLXSW_SP_SB_BYTES_PER_CELL 96
+static int mlxsw_sp_port_pb_prio_init(struct mlxsw_sp_port *mlxsw_sp_port)
+{
+ char pptb_pl[MLXSW_REG_PPTB_LEN];
+ int i;
-struct mlxsw_sp_sb_pool {
- u8 pool;
- enum mlxsw_reg_sbpr_dir dir;
- enum mlxsw_reg_sbpr_mode mode;
- u32 size;
-};
+ mlxsw_reg_pptb_pack(pptb_pl, mlxsw_sp_port->local_port);
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++)
+ mlxsw_reg_pptb_prio_to_buff_set(pptb_pl, i, 0);
+ return mlxsw_reg_write(mlxsw_sp_port->mlxsw_sp->core, MLXSW_REG(pptb),
+ pptb_pl);
+}
+
+static int mlxsw_sp_port_headroom_init(struct mlxsw_sp_port *mlxsw_sp_port)
+{
+ int err;
+
+ err = mlxsw_sp_port_pb_init(mlxsw_sp_port);
+ if (err)
+ return err;
+ return mlxsw_sp_port_pb_prio_init(mlxsw_sp_port);
+}
-#define MLXSW_SP_SB_POOL_INGRESS_SIZE \
- ((15000000 - (2 * 20000 * MLXSW_PORT_MAX_PORTS)) / \
- MLXSW_SP_SB_BYTES_PER_CELL)
-#define MLXSW_SP_SB_POOL_EGRESS_SIZE \
- ((14000000 - (8 * 1500 * MLXSW_PORT_MAX_PORTS)) / \
- MLXSW_SP_SB_BYTES_PER_CELL)
-
-#define MLXSW_SP_SB_POOL(_pool, _dir, _mode, _size) \
- { \
- .pool = _pool, \
- .dir = _dir, \
- .mode = _mode, \
- .size = _size, \
+#define MLXSW_SP_SB_PR_INGRESS_SIZE \
+ (15000000 - (2 * 20000 * MLXSW_PORT_MAX_PORTS))
+#define MLXSW_SP_SB_PR_INGRESS_MNG_SIZE (200 * 1000)
+#define MLXSW_SP_SB_PR_EGRESS_SIZE \
+ (14000000 - (8 * 1500 * MLXSW_PORT_MAX_PORTS))
+
+#define MLXSW_SP_SB_PR(_mode, _size) \
+ { \
+ .mode = _mode, \
+ .size = _size, \
}
-#define MLXSW_SP_SB_POOL_INGRESS(_pool, _size) \
- MLXSW_SP_SB_POOL(_pool, MLXSW_REG_SBPR_DIR_INGRESS, \
- MLXSW_REG_SBPR_MODE_DYNAMIC, _size)
-
-#define MLXSW_SP_SB_POOL_EGRESS(_pool, _size) \
- MLXSW_SP_SB_POOL(_pool, MLXSW_REG_SBPR_DIR_EGRESS, \
- MLXSW_REG_SBPR_MODE_DYNAMIC, _size)
-
-static const struct mlxsw_sp_sb_pool mlxsw_sp_sb_pools[] = {
- MLXSW_SP_SB_POOL_INGRESS(0, MLXSW_SP_SB_POOL_INGRESS_SIZE),
- MLXSW_SP_SB_POOL_INGRESS(1, 0),
- MLXSW_SP_SB_POOL_INGRESS(2, 0),
- MLXSW_SP_SB_POOL_INGRESS(3, 0),
- MLXSW_SP_SB_POOL_EGRESS(0, MLXSW_SP_SB_POOL_EGRESS_SIZE),
- MLXSW_SP_SB_POOL_EGRESS(1, 0),
- MLXSW_SP_SB_POOL_EGRESS(2, 0),
- MLXSW_SP_SB_POOL_EGRESS(2, MLXSW_SP_SB_POOL_EGRESS_SIZE),
+static const struct mlxsw_sp_sb_pr mlxsw_sp_sb_prs_ingress[] = {
+ MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC,
+ MLXSW_SP_BYTES_TO_CELLS(MLXSW_SP_SB_PR_INGRESS_SIZE)),
+ MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC, 0),
+ MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC, 0),
+ MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC,
+ MLXSW_SP_BYTES_TO_CELLS(MLXSW_SP_SB_PR_INGRESS_MNG_SIZE)),
};
-#define MLXSW_SP_SB_POOLS_LEN ARRAY_SIZE(mlxsw_sp_sb_pools)
+#define MLXSW_SP_SB_PRS_INGRESS_LEN ARRAY_SIZE(mlxsw_sp_sb_prs_ingress)
+
+static const struct mlxsw_sp_sb_pr mlxsw_sp_sb_prs_egress[] = {
+ MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC,
+ MLXSW_SP_BYTES_TO_CELLS(MLXSW_SP_SB_PR_EGRESS_SIZE)),
+ MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC, 0),
+ MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC, 0),
+ MLXSW_SP_SB_PR(MLXSW_REG_SBPR_MODE_DYNAMIC, 0),
+};
-static int mlxsw_sp_sb_pools_init(struct mlxsw_sp *mlxsw_sp)
+#define MLXSW_SP_SB_PRS_EGRESS_LEN ARRAY_SIZE(mlxsw_sp_sb_prs_egress)
+
+static int __mlxsw_sp_sb_prs_init(struct mlxsw_sp *mlxsw_sp,
+ enum mlxsw_reg_sbxx_dir dir,
+ const struct mlxsw_sp_sb_pr *prs,
+ size_t prs_len)
{
- char sbpr_pl[MLXSW_REG_SBPR_LEN];
int i;
int err;
- for (i = 0; i < MLXSW_SP_SB_POOLS_LEN; i++) {
- const struct mlxsw_sp_sb_pool *pool;
+ for (i = 0; i < prs_len; i++) {
+ const struct mlxsw_sp_sb_pr *pr;
- pool = &mlxsw_sp_sb_pools[i];
- mlxsw_reg_sbpr_pack(sbpr_pl, pool->pool, pool->dir,
- pool->mode, pool->size);
- err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sbpr), sbpr_pl);
+ pr = &prs[i];
+ err = mlxsw_sp_sb_pr_write(mlxsw_sp, i, dir,
+ pr->mode, pr->size);
if (err)
return err;
}
return 0;
}
-struct mlxsw_sp_sb_cm {
- union {
- u8 pg;
- u8 tc;
- } u;
- enum mlxsw_reg_sbcm_dir dir;
- u32 min_buff;
- u32 max_buff;
- u8 pool;
-};
+static int mlxsw_sp_sb_prs_init(struct mlxsw_sp *mlxsw_sp)
+{
+ int err;
-#define MLXSW_SP_SB_CM(_pg_tc, _dir, _min_buff, _max_buff, _pool) \
- { \
- .u.pg = _pg_tc, \
- .dir = _dir, \
- .min_buff = _min_buff, \
- .max_buff = _max_buff, \
- .pool = _pool, \
+ err = __mlxsw_sp_sb_prs_init(mlxsw_sp, MLXSW_REG_SBXX_DIR_INGRESS,
+ mlxsw_sp_sb_prs_ingress,
+ MLXSW_SP_SB_PRS_INGRESS_LEN);
+ if (err)
+ return err;
+ return __mlxsw_sp_sb_prs_init(mlxsw_sp, MLXSW_REG_SBXX_DIR_EGRESS,
+ mlxsw_sp_sb_prs_egress,
+ MLXSW_SP_SB_PRS_EGRESS_LEN);
+}
+
+#define MLXSW_SP_SB_CM(_min_buff, _max_buff, _pool) \
+ { \
+ .min_buff = _min_buff, \
+ .max_buff = _max_buff, \
+ .pool = _pool, \
}
-#define MLXSW_SP_SB_CM_INGRESS(_pg, _min_buff, _max_buff) \
- MLXSW_SP_SB_CM(_pg, MLXSW_REG_SBCM_DIR_INGRESS, \
- _min_buff, _max_buff, 0)
-
-#define MLXSW_SP_SB_CM_EGRESS(_tc, _min_buff, _max_buff) \
- MLXSW_SP_SB_CM(_tc, MLXSW_REG_SBCM_DIR_EGRESS, \
- _min_buff, _max_buff, 0)
-
-#define MLXSW_SP_CPU_PORT_SB_CM_EGRESS(_tc) \
- MLXSW_SP_SB_CM(_tc, MLXSW_REG_SBCM_DIR_EGRESS, 104, 2, 3)
-
-static const struct mlxsw_sp_sb_cm mlxsw_sp_sb_cms[] = {
- MLXSW_SP_SB_CM_INGRESS(0, 10000 / MLXSW_SP_SB_BYTES_PER_CELL, 8),
- MLXSW_SP_SB_CM_INGRESS(1, 0, 0),
- MLXSW_SP_SB_CM_INGRESS(2, 0, 0),
- MLXSW_SP_SB_CM_INGRESS(3, 0, 0),
- MLXSW_SP_SB_CM_INGRESS(4, 0, 0),
- MLXSW_SP_SB_CM_INGRESS(5, 0, 0),
- MLXSW_SP_SB_CM_INGRESS(6, 0, 0),
- MLXSW_SP_SB_CM_INGRESS(7, 0, 0),
- MLXSW_SP_SB_CM_INGRESS(9, 20000 / MLXSW_SP_SB_BYTES_PER_CELL, 0xff),
- MLXSW_SP_SB_CM_EGRESS(0, 1500 / MLXSW_SP_SB_BYTES_PER_CELL, 9),
- MLXSW_SP_SB_CM_EGRESS(1, 1500 / MLXSW_SP_SB_BYTES_PER_CELL, 9),
- MLXSW_SP_SB_CM_EGRESS(2, 1500 / MLXSW_SP_SB_BYTES_PER_CELL, 9),
- MLXSW_SP_SB_CM_EGRESS(3, 1500 / MLXSW_SP_SB_BYTES_PER_CELL, 9),
- MLXSW_SP_SB_CM_EGRESS(4, 1500 / MLXSW_SP_SB_BYTES_PER_CELL, 9),
- MLXSW_SP_SB_CM_EGRESS(5, 1500 / MLXSW_SP_SB_BYTES_PER_CELL, 9),
- MLXSW_SP_SB_CM_EGRESS(6, 1500 / MLXSW_SP_SB_BYTES_PER_CELL, 9),
- MLXSW_SP_SB_CM_EGRESS(7, 1500 / MLXSW_SP_SB_BYTES_PER_CELL, 9),
- MLXSW_SP_SB_CM_EGRESS(8, 0, 0),
- MLXSW_SP_SB_CM_EGRESS(9, 0, 0),
- MLXSW_SP_SB_CM_EGRESS(10, 0, 0),
- MLXSW_SP_SB_CM_EGRESS(11, 0, 0),
- MLXSW_SP_SB_CM_EGRESS(12, 0, 0),
- MLXSW_SP_SB_CM_EGRESS(13, 0, 0),
- MLXSW_SP_SB_CM_EGRESS(14, 0, 0),
- MLXSW_SP_SB_CM_EGRESS(15, 0, 0),
- MLXSW_SP_SB_CM_EGRESS(16, 1, 0xff),
+static const struct mlxsw_sp_sb_cm mlxsw_sp_sb_cms_ingress[] = {
+ MLXSW_SP_SB_CM(MLXSW_SP_BYTES_TO_CELLS(10000), 8, 0),
+ MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
+ MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
+ MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
+ MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
+ MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
+ MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
+ MLXSW_SP_SB_CM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN, 0),
+ MLXSW_SP_SB_CM(0, 0, 0), /* dummy, this PG does not exist */
+ MLXSW_SP_SB_CM(MLXSW_SP_BYTES_TO_CELLS(20000), 1, 3),
};
-#define MLXSW_SP_SB_CMS_LEN ARRAY_SIZE(mlxsw_sp_sb_cms)
+#define MLXSW_SP_SB_CMS_INGRESS_LEN ARRAY_SIZE(mlxsw_sp_sb_cms_ingress)
+
+static const struct mlxsw_sp_sb_cm mlxsw_sp_sb_cms_egress[] = {
+ MLXSW_SP_SB_CM(MLXSW_SP_BYTES_TO_CELLS(1500), 9, 0),
+ MLXSW_SP_SB_CM(MLXSW_SP_BYTES_TO_CELLS(1500), 9, 0),
+ MLXSW_SP_SB_CM(MLXSW_SP_BYTES_TO_CELLS(1500), 9, 0),
+ MLXSW_SP_SB_CM(MLXSW_SP_BYTES_TO_CELLS(1500), 9, 0),
+ MLXSW_SP_SB_CM(MLXSW_SP_BYTES_TO_CELLS(1500), 9, 0),
+ MLXSW_SP_SB_CM(MLXSW_SP_BYTES_TO_CELLS(1500), 9, 0),
+ MLXSW_SP_SB_CM(MLXSW_SP_BYTES_TO_CELLS(1500), 9, 0),
+ MLXSW_SP_SB_CM(MLXSW_SP_BYTES_TO_CELLS(1500), 9, 0),
+ MLXSW_SP_SB_CM(0, 0, 0),
+ MLXSW_SP_SB_CM(0, 0, 0),
+ MLXSW_SP_SB_CM(0, 0, 0),
+ MLXSW_SP_SB_CM(0, 0, 0),
+ MLXSW_SP_SB_CM(0, 0, 0),
+ MLXSW_SP_SB_CM(0, 0, 0),
+ MLXSW_SP_SB_CM(0, 0, 0),
+ MLXSW_SP_SB_CM(0, 0, 0),
+ MLXSW_SP_SB_CM(1, 0xff, 0),
+};
+
+#define MLXSW_SP_SB_CMS_EGRESS_LEN ARRAY_SIZE(mlxsw_sp_sb_cms_egress)
+
+#define MLXSW_SP_CPU_PORT_SB_CM MLXSW_SP_SB_CM(0, 0, 0)
static const struct mlxsw_sp_sb_cm mlxsw_sp_cpu_port_sb_cms[] = {
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(0),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(1),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(2),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(3),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(4),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(5),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(6),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(7),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(8),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(9),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(10),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(11),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(12),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(13),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(14),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(15),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(16),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(17),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(18),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(19),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(20),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(21),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(22),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(23),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(24),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(25),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(26),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(27),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(28),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(29),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(30),
- MLXSW_SP_CPU_PORT_SB_CM_EGRESS(31),
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
+ MLXSW_SP_CPU_PORT_SB_CM,
};
#define MLXSW_SP_CPU_PORT_SB_MCS_LEN \
ARRAY_SIZE(mlxsw_sp_cpu_port_sb_cms)
-static int mlxsw_sp_sb_cms_init(struct mlxsw_sp *mlxsw_sp, u8 local_port,
- const struct mlxsw_sp_sb_cm *cms,
- size_t cms_len)
+static int __mlxsw_sp_sb_cms_init(struct mlxsw_sp *mlxsw_sp, u8 local_port,
+ enum mlxsw_reg_sbxx_dir dir,
+ const struct mlxsw_sp_sb_cm *cms,
+ size_t cms_len)
{
- char sbcm_pl[MLXSW_REG_SBCM_LEN];
int i;
int err;
for (i = 0; i < cms_len; i++) {
const struct mlxsw_sp_sb_cm *cm;
+ if (i == 8 && dir == MLXSW_REG_SBXX_DIR_INGRESS)
+ continue; /* PG number 8 does not exist, skip it */
cm = &cms[i];
- mlxsw_reg_sbcm_pack(sbcm_pl, local_port, cm->u.pg, cm->dir,
- cm->min_buff, cm->max_buff, cm->pool);
- err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sbcm), sbcm_pl);
+ err = mlxsw_sp_sb_cm_write(mlxsw_sp, local_port, i, dir,
+ cm->min_buff, cm->max_buff,
+ cm->pool);
if (err)
return err;
}
@@ -269,105 +385,120 @@ static int mlxsw_sp_sb_cms_init(struct mlxsw_sp *mlxsw_sp, u8 local_port,
static int mlxsw_sp_port_sb_cms_init(struct mlxsw_sp_port *mlxsw_sp_port)
{
- return mlxsw_sp_sb_cms_init(mlxsw_sp_port->mlxsw_sp,
- mlxsw_sp_port->local_port, mlxsw_sp_sb_cms,
- MLXSW_SP_SB_CMS_LEN);
+ int err;
+
+ err = __mlxsw_sp_sb_cms_init(mlxsw_sp_port->mlxsw_sp,
+ mlxsw_sp_port->local_port,
+ MLXSW_REG_SBXX_DIR_INGRESS,
+ mlxsw_sp_sb_cms_ingress,
+ MLXSW_SP_SB_CMS_INGRESS_LEN);
+ if (err)
+ return err;
+ return __mlxsw_sp_sb_cms_init(mlxsw_sp_port->mlxsw_sp,
+ mlxsw_sp_port->local_port,
+ MLXSW_REG_SBXX_DIR_EGRESS,
+ mlxsw_sp_sb_cms_egress,
+ MLXSW_SP_SB_CMS_EGRESS_LEN);
}
static int mlxsw_sp_cpu_port_sb_cms_init(struct mlxsw_sp *mlxsw_sp)
{
- return mlxsw_sp_sb_cms_init(mlxsw_sp, 0, mlxsw_sp_cpu_port_sb_cms,
- MLXSW_SP_CPU_PORT_SB_MCS_LEN);
+ return __mlxsw_sp_sb_cms_init(mlxsw_sp, 0, MLXSW_REG_SBXX_DIR_EGRESS,
+ mlxsw_sp_cpu_port_sb_cms,
+ MLXSW_SP_CPU_PORT_SB_MCS_LEN);
}
-struct mlxsw_sp_sb_pm {
- u8 pool;
- enum mlxsw_reg_sbpm_dir dir;
- u32 min_buff;
- u32 max_buff;
+#define MLXSW_SP_SB_PM(_min_buff, _max_buff) \
+ { \
+ .min_buff = _min_buff, \
+ .max_buff = _max_buff, \
+ }
+
+static const struct mlxsw_sp_sb_pm mlxsw_sp_sb_pms_ingress[] = {
+ MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MAX),
+ MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
+ MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
+ MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MAX),
};
-#define MLXSW_SP_SB_PM(_pool, _dir, _min_buff, _max_buff) \
- { \
- .pool = _pool, \
- .dir = _dir, \
- .min_buff = _min_buff, \
- .max_buff = _max_buff, \
- }
+#define MLXSW_SP_SB_PMS_INGRESS_LEN ARRAY_SIZE(mlxsw_sp_sb_pms_ingress)
-#define MLXSW_SP_SB_PM_INGRESS(_pool, _min_buff, _max_buff) \
- MLXSW_SP_SB_PM(_pool, MLXSW_REG_SBPM_DIR_INGRESS, \
- _min_buff, _max_buff)
-
-#define MLXSW_SP_SB_PM_EGRESS(_pool, _min_buff, _max_buff) \
- MLXSW_SP_SB_PM(_pool, MLXSW_REG_SBPM_DIR_EGRESS, \
- _min_buff, _max_buff)
-
-static const struct mlxsw_sp_sb_pm mlxsw_sp_sb_pms[] = {
- MLXSW_SP_SB_PM_INGRESS(0, 0, 0xff),
- MLXSW_SP_SB_PM_INGRESS(1, 0, 0),
- MLXSW_SP_SB_PM_INGRESS(2, 0, 0),
- MLXSW_SP_SB_PM_INGRESS(3, 0, 0),
- MLXSW_SP_SB_PM_EGRESS(0, 0, 7),
- MLXSW_SP_SB_PM_EGRESS(1, 0, 0),
- MLXSW_SP_SB_PM_EGRESS(2, 0, 0),
- MLXSW_SP_SB_PM_EGRESS(3, 0, 0),
+static const struct mlxsw_sp_sb_pm mlxsw_sp_sb_pms_egress[] = {
+ MLXSW_SP_SB_PM(0, 7),
+ MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
+ MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
+ MLXSW_SP_SB_PM(0, MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN),
};
-#define MLXSW_SP_SB_PMS_LEN ARRAY_SIZE(mlxsw_sp_sb_pms)
+#define MLXSW_SP_SB_PMS_EGRESS_LEN ARRAY_SIZE(mlxsw_sp_sb_pms_egress)
-static int mlxsw_sp_port_sb_pms_init(struct mlxsw_sp_port *mlxsw_sp_port)
+static int __mlxsw_sp_port_sb_pms_init(struct mlxsw_sp *mlxsw_sp, u8 local_port,
+ enum mlxsw_reg_sbxx_dir dir,
+ const struct mlxsw_sp_sb_pm *pms,
+ size_t pms_len)
{
- char sbpm_pl[MLXSW_REG_SBPM_LEN];
int i;
int err;
- for (i = 0; i < MLXSW_SP_SB_PMS_LEN; i++) {
+ for (i = 0; i < pms_len; i++) {
const struct mlxsw_sp_sb_pm *pm;
- pm = &mlxsw_sp_sb_pms[i];
- mlxsw_reg_sbpm_pack(sbpm_pl, mlxsw_sp_port->local_port,
- pm->pool, pm->dir,
- pm->min_buff, pm->max_buff);
- err = mlxsw_reg_write(mlxsw_sp_port->mlxsw_sp->core,
- MLXSW_REG(sbpm), sbpm_pl);
+ pm = &pms[i];
+ err = mlxsw_sp_sb_pm_write(mlxsw_sp, local_port, i, dir,
+ pm->min_buff, pm->max_buff);
if (err)
return err;
}
return 0;
}
+static int mlxsw_sp_port_sb_pms_init(struct mlxsw_sp_port *mlxsw_sp_port)
+{
+ int err;
+
+ err = __mlxsw_sp_port_sb_pms_init(mlxsw_sp_port->mlxsw_sp,
+ mlxsw_sp_port->local_port,
+ MLXSW_REG_SBXX_DIR_INGRESS,
+ mlxsw_sp_sb_pms_ingress,
+ MLXSW_SP_SB_PMS_INGRESS_LEN);
+ if (err)
+ return err;
+ return __mlxsw_sp_port_sb_pms_init(mlxsw_sp_port->mlxsw_sp,
+ mlxsw_sp_port->local_port,
+ MLXSW_REG_SBXX_DIR_EGRESS,
+ mlxsw_sp_sb_pms_egress,
+ MLXSW_SP_SB_PMS_EGRESS_LEN);
+}
+
struct mlxsw_sp_sb_mm {
- u8 prio;
u32 min_buff;
u32 max_buff;
u8 pool;
};
-#define MLXSW_SP_SB_MM(_prio, _min_buff, _max_buff, _pool) \
- { \
- .prio = _prio, \
- .min_buff = _min_buff, \
- .max_buff = _max_buff, \
- .pool = _pool, \
+#define MLXSW_SP_SB_MM(_min_buff, _max_buff, _pool) \
+ { \
+ .min_buff = _min_buff, \
+ .max_buff = _max_buff, \
+ .pool = _pool, \
}
static const struct mlxsw_sp_sb_mm mlxsw_sp_sb_mms[] = {
- MLXSW_SP_SB_MM(0, 20000 / MLXSW_SP_SB_BYTES_PER_CELL, 0xff, 0),
- MLXSW_SP_SB_MM(1, 20000 / MLXSW_SP_SB_BYTES_PER_CELL, 0xff, 0),
- MLXSW_SP_SB_MM(2, 20000 / MLXSW_SP_SB_BYTES_PER_CELL, 0xff, 0),
- MLXSW_SP_SB_MM(3, 20000 / MLXSW_SP_SB_BYTES_PER_CELL, 0xff, 0),
- MLXSW_SP_SB_MM(4, 20000 / MLXSW_SP_SB_BYTES_PER_CELL, 0xff, 0),
- MLXSW_SP_SB_MM(5, 20000 / MLXSW_SP_SB_BYTES_PER_CELL, 0xff, 0),
- MLXSW_SP_SB_MM(6, 20000 / MLXSW_SP_SB_BYTES_PER_CELL, 0xff, 0),
- MLXSW_SP_SB_MM(7, 20000 / MLXSW_SP_SB_BYTES_PER_CELL, 0xff, 0),
- MLXSW_SP_SB_MM(8, 20000 / MLXSW_SP_SB_BYTES_PER_CELL, 0xff, 0),
- MLXSW_SP_SB_MM(9, 20000 / MLXSW_SP_SB_BYTES_PER_CELL, 0xff, 0),
- MLXSW_SP_SB_MM(10, 20000 / MLXSW_SP_SB_BYTES_PER_CELL, 0xff, 0),
- MLXSW_SP_SB_MM(11, 20000 / MLXSW_SP_SB_BYTES_PER_CELL, 0xff, 0),
- MLXSW_SP_SB_MM(12, 20000 / MLXSW_SP_SB_BYTES_PER_CELL, 0xff, 0),
- MLXSW_SP_SB_MM(13, 20000 / MLXSW_SP_SB_BYTES_PER_CELL, 0xff, 0),
- MLXSW_SP_SB_MM(14, 20000 / MLXSW_SP_SB_BYTES_PER_CELL, 0xff, 0),
+ MLXSW_SP_SB_MM(MLXSW_SP_BYTES_TO_CELLS(20000), 0xff, 0),
+ MLXSW_SP_SB_MM(MLXSW_SP_BYTES_TO_CELLS(20000), 0xff, 0),
+ MLXSW_SP_SB_MM(MLXSW_SP_BYTES_TO_CELLS(20000), 0xff, 0),
+ MLXSW_SP_SB_MM(MLXSW_SP_BYTES_TO_CELLS(20000), 0xff, 0),
+ MLXSW_SP_SB_MM(MLXSW_SP_BYTES_TO_CELLS(20000), 0xff, 0),
+ MLXSW_SP_SB_MM(MLXSW_SP_BYTES_TO_CELLS(20000), 0xff, 0),
+ MLXSW_SP_SB_MM(MLXSW_SP_BYTES_TO_CELLS(20000), 0xff, 0),
+ MLXSW_SP_SB_MM(MLXSW_SP_BYTES_TO_CELLS(20000), 0xff, 0),
+ MLXSW_SP_SB_MM(MLXSW_SP_BYTES_TO_CELLS(20000), 0xff, 0),
+ MLXSW_SP_SB_MM(MLXSW_SP_BYTES_TO_CELLS(20000), 0xff, 0),
+ MLXSW_SP_SB_MM(MLXSW_SP_BYTES_TO_CELLS(20000), 0xff, 0),
+ MLXSW_SP_SB_MM(MLXSW_SP_BYTES_TO_CELLS(20000), 0xff, 0),
+ MLXSW_SP_SB_MM(MLXSW_SP_BYTES_TO_CELLS(20000), 0xff, 0),
+ MLXSW_SP_SB_MM(MLXSW_SP_BYTES_TO_CELLS(20000), 0xff, 0),
+ MLXSW_SP_SB_MM(MLXSW_SP_BYTES_TO_CELLS(20000), 0xff, 0),
};
#define MLXSW_SP_SB_MMS_LEN ARRAY_SIZE(mlxsw_sp_sb_mms)
@@ -382,7 +513,7 @@ static int mlxsw_sp_sb_mms_init(struct mlxsw_sp *mlxsw_sp)
const struct mlxsw_sp_sb_mm *mc;
mc = &mlxsw_sp_sb_mms[i];
- mlxsw_reg_sbmm_pack(sbmm_pl, mc->prio, mc->min_buff,
+ mlxsw_reg_sbmm_pack(sbmm_pl, i, mc->min_buff,
mc->max_buff, mc->pool);
err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sbmm), sbmm_pl);
if (err)
@@ -391,26 +522,39 @@ static int mlxsw_sp_sb_mms_init(struct mlxsw_sp *mlxsw_sp)
return 0;
}
+#define MLXSW_SP_SB_SIZE (16 * 1024 * 1024)
+
int mlxsw_sp_buffers_init(struct mlxsw_sp *mlxsw_sp)
{
int err;
- err = mlxsw_sp_sb_pools_init(mlxsw_sp);
+ err = mlxsw_sp_sb_prs_init(mlxsw_sp);
if (err)
return err;
err = mlxsw_sp_cpu_port_sb_cms_init(mlxsw_sp);
if (err)
return err;
err = mlxsw_sp_sb_mms_init(mlxsw_sp);
+ if (err)
+ return err;
+ return devlink_sb_register(priv_to_devlink(mlxsw_sp->core), 0,
+ MLXSW_SP_SB_SIZE,
+ MLXSW_SP_SB_POOL_COUNT,
+ MLXSW_SP_SB_POOL_COUNT,
+ MLXSW_SP_SB_TC_COUNT,
+ MLXSW_SP_SB_TC_COUNT);
+}
- return err;
+void mlxsw_sp_buffers_fini(struct mlxsw_sp *mlxsw_sp)
+{
+ devlink_sb_unregister(priv_to_devlink(mlxsw_sp->core), 0);
}
int mlxsw_sp_port_buffers_init(struct mlxsw_sp_port *mlxsw_sp_port)
{
int err;
- err = mlxsw_sp_port_pb_init(mlxsw_sp_port);
+ err = mlxsw_sp_port_headroom_init(mlxsw_sp_port);
if (err)
return err;
err = mlxsw_sp_port_sb_cms_init(mlxsw_sp_port);
@@ -420,3 +564,394 @@ int mlxsw_sp_port_buffers_init(struct mlxsw_sp_port *mlxsw_sp_port)
return err;
}
+
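+/* Devlink exposes one linear pool index space: ingress pools occupy
+ * indexes 0..MLXSW_SP_SB_POOL_COUNT - 1 and egress pools the next
+ * MLXSW_SP_SB_POOL_COUNT indexes. The helpers below translate between
+ * that and the per-direction pool numbers used by the SBxx registers.
+ */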
+static u8 pool_get(u16 pool_index)
+{
+ return pool_index % MLXSW_SP_SB_POOL_COUNT;
+}
+
+static u16 pool_index_get(u8 pool, enum mlxsw_reg_sbxx_dir dir)
+{
+ u16 pool_index;
+
+ pool_index = pool;
+ if (dir == MLXSW_REG_SBXX_DIR_EGRESS)
+ pool_index += MLXSW_SP_SB_POOL_COUNT;
+ return pool_index;
+}
+
+static enum mlxsw_reg_sbxx_dir dir_get(u16 pool_index)
+{
+ return pool_index < MLXSW_SP_SB_POOL_COUNT ?
+ MLXSW_REG_SBXX_DIR_INGRESS : MLXSW_REG_SBXX_DIR_EGRESS;
+}
+
+int mlxsw_sp_sb_pool_get(struct mlxsw_core *mlxsw_core,
+ unsigned int sb_index, u16 pool_index,
+ struct devlink_sb_pool_info *pool_info)
+{
+ struct mlxsw_sp *mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core);
+ u8 pool = pool_get(pool_index);
+ enum mlxsw_reg_sbxx_dir dir = dir_get(pool_index);
+ struct mlxsw_sp_sb_pr *pr = mlxsw_sp_sb_pr_get(mlxsw_sp, pool, dir);
+
+ pool_info->pool_type = dir;
+ pool_info->size = MLXSW_SP_CELLS_TO_BYTES(pr->size);
+ pool_info->threshold_type = pr->mode;
+ return 0;
+}
+
+int mlxsw_sp_sb_pool_set(struct mlxsw_core *mlxsw_core,
+ unsigned int sb_index, u16 pool_index, u32 size,
+ enum devlink_sb_threshold_type threshold_type)
+{
+ struct mlxsw_sp *mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core);
+ u8 pool = pool_get(pool_index);
+ enum mlxsw_reg_sbxx_dir dir = dir_get(pool_index);
+ enum mlxsw_reg_sbpr_mode mode = threshold_type;
+ u32 pool_size = MLXSW_SP_BYTES_TO_CELLS(size);
+
+ return mlxsw_sp_sb_pr_write(mlxsw_sp, pool, dir, mode, pool_size);
+}
+
+#define MLXSW_SP_SB_THRESHOLD_TO_ALPHA_OFFSET (-2) /* 3->1, 16->14 */
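+/* In dynamic (alpha) mode max_buff carries an alpha index rather than a
+ * cell count; the devlink threshold is kept two above the register
+ * encoding, while static thresholds convert between bytes and cells.
+ */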
+
+static u32 mlxsw_sp_sb_threshold_out(struct mlxsw_sp *mlxsw_sp, u8 pool,
+ enum mlxsw_reg_sbxx_dir dir, u32 max_buff)
+{
+ struct mlxsw_sp_sb_pr *pr = mlxsw_sp_sb_pr_get(mlxsw_sp, pool, dir);
+
+ if (pr->mode == MLXSW_REG_SBPR_MODE_DYNAMIC)
+ return max_buff - MLXSW_SP_SB_THRESHOLD_TO_ALPHA_OFFSET;
+ return MLXSW_SP_CELLS_TO_BYTES(max_buff);
+}
+
+static int mlxsw_sp_sb_threshold_in(struct mlxsw_sp *mlxsw_sp, u8 pool,
+ enum mlxsw_reg_sbxx_dir dir, u32 threshold,
+ u32 *p_max_buff)
+{
+ struct mlxsw_sp_sb_pr *pr = mlxsw_sp_sb_pr_get(mlxsw_sp, pool, dir);
+
+ if (pr->mode == MLXSW_REG_SBPR_MODE_DYNAMIC) {
+ int val;
+
+ val = threshold + MLXSW_SP_SB_THRESHOLD_TO_ALPHA_OFFSET;
+ if (val < MLXSW_REG_SBXX_DYN_MAX_BUFF_MIN ||
+ val > MLXSW_REG_SBXX_DYN_MAX_BUFF_MAX)
+ return -EINVAL;
+ *p_max_buff = val;
+ } else {
+ *p_max_buff = MLXSW_SP_BYTES_TO_CELLS(threshold);
+ }
+ return 0;
+}
+
+int mlxsw_sp_sb_port_pool_get(struct mlxsw_core_port *mlxsw_core_port,
+ unsigned int sb_index, u16 pool_index,
+ u32 *p_threshold)
+{
+ struct mlxsw_sp_port *mlxsw_sp_port =
+ mlxsw_core_port_driver_priv(mlxsw_core_port);
+ struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+ u8 local_port = mlxsw_sp_port->local_port;
+ u8 pool = pool_get(pool_index);
+ enum mlxsw_reg_sbxx_dir dir = dir_get(pool_index);
+ struct mlxsw_sp_sb_pm *pm = mlxsw_sp_sb_pm_get(mlxsw_sp, local_port,
+ pool, dir);
+
+ *p_threshold = mlxsw_sp_sb_threshold_out(mlxsw_sp, pool, dir,
+ pm->max_buff);
+ return 0;
+}
+
+int mlxsw_sp_sb_port_pool_set(struct mlxsw_core_port *mlxsw_core_port,
+ unsigned int sb_index, u16 pool_index,
+ u32 threshold)
+{
+ struct mlxsw_sp_port *mlxsw_sp_port =
+ mlxsw_core_port_driver_priv(mlxsw_core_port);
+ struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+ u8 local_port = mlxsw_sp_port->local_port;
+ u8 pool = pool_get(pool_index);
+ enum mlxsw_reg_sbxx_dir dir = dir_get(pool_index);
+ u32 max_buff;
+ int err;
+
+ err = mlxsw_sp_sb_threshold_in(mlxsw_sp, pool, dir,
+ threshold, &max_buff);
+ if (err)
+ return err;
+
+ return mlxsw_sp_sb_pm_write(mlxsw_sp, local_port, pool, dir,
+ 0, max_buff);
+}
+
+int mlxsw_sp_sb_tc_pool_bind_get(struct mlxsw_core_port *mlxsw_core_port,
+ unsigned int sb_index, u16 tc_index,
+ enum devlink_sb_pool_type pool_type,
+ u16 *p_pool_index, u32 *p_threshold)
+{
+ struct mlxsw_sp_port *mlxsw_sp_port =
+ mlxsw_core_port_driver_priv(mlxsw_core_port);
+ struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+ u8 local_port = mlxsw_sp_port->local_port;
+ u8 pg_buff = tc_index;
+ enum mlxsw_reg_sbxx_dir dir = pool_type;
+ struct mlxsw_sp_sb_cm *cm = mlxsw_sp_sb_cm_get(mlxsw_sp, local_port,
+ pg_buff, dir);
+
+ *p_threshold = mlxsw_sp_sb_threshold_out(mlxsw_sp, cm->pool, dir,
+ cm->max_buff);
+ *p_pool_index = pool_index_get(cm->pool, pool_type);
+ return 0;
+}
+
+int mlxsw_sp_sb_tc_pool_bind_set(struct mlxsw_core_port *mlxsw_core_port,
+ unsigned int sb_index, u16 tc_index,
+ enum devlink_sb_pool_type pool_type,
+ u16 pool_index, u32 threshold)
+{
+ struct mlxsw_sp_port *mlxsw_sp_port =
+ mlxsw_core_port_driver_priv(mlxsw_core_port);
+ struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+ u8 local_port = mlxsw_sp_port->local_port;
+ u8 pg_buff = tc_index;
+ enum mlxsw_reg_sbxx_dir dir = pool_type;
+ u8 pool = pool_index;
+ u32 max_buff;
+ int err;
+
+ err = mlxsw_sp_sb_threshold_in(mlxsw_sp, pool, dir,
+ threshold, &max_buff);
+ if (err)
+ return err;
+
+ if (pool_type == DEVLINK_SB_POOL_TYPE_EGRESS) {
+ if (pool < MLXSW_SP_SB_POOL_COUNT)
+ return -EINVAL;
+ pool -= MLXSW_SP_SB_POOL_COUNT;
+ } else if (pool >= MLXSW_SP_SB_POOL_COUNT) {
+ return -EINVAL;
+ }
+ return mlxsw_sp_sb_cm_write(mlxsw_sp, local_port, pg_buff, dir,
+ 0, max_buff, pool);
+}
+
+#define MASKED_COUNT_MAX \
+ (MLXSW_REG_SBSR_REC_MAX_COUNT / (MLXSW_SP_SB_TC_COUNT * 2))
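+/* Each port contributes MLXSW_SP_SB_TC_COUNT ingress and the same number
+ * of egress records to a single SBSR response, which bounds how many
+ * ports can be queried per transaction.
+ */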
+
+struct mlxsw_sp_sb_sr_occ_query_cb_ctx {
+ u8 masked_count;
+ u8 local_port_1;
+};
+
+static void mlxsw_sp_sb_sr_occ_query_cb(struct mlxsw_core *mlxsw_core,
+ char *sbsr_pl, size_t sbsr_pl_len,
+ unsigned long cb_priv)
+{
+ struct mlxsw_sp *mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core);
+ struct mlxsw_sp_sb_sr_occ_query_cb_ctx cb_ctx;
+ u8 masked_count;
+ u8 local_port;
+ int rec_index = 0;
+ struct mlxsw_sp_sb_cm *cm;
+ int i;
+
+ memcpy(&cb_ctx, &cb_priv, sizeof(cb_ctx));
+
+ masked_count = 0;
+ for (local_port = cb_ctx.local_port_1;
+ local_port < MLXSW_PORT_MAX_PORTS; local_port++) {
+ if (!mlxsw_sp->ports[local_port])
+ continue;
+ for (i = 0; i < MLXSW_SP_SB_TC_COUNT; i++) {
+ cm = mlxsw_sp_sb_cm_get(mlxsw_sp, local_port, i,
+ MLXSW_REG_SBXX_DIR_INGRESS);
+ mlxsw_reg_sbsr_rec_unpack(sbsr_pl, rec_index++,
+ &cm->occ.cur, &cm->occ.max);
+ }
+ if (++masked_count == cb_ctx.masked_count)
+ break;
+ }
+ masked_count = 0;
+ for (local_port = cb_ctx.local_port_1;
+ local_port < MLXSW_PORT_MAX_PORTS; local_port++) {
+ if (!mlxsw_sp->ports[local_port])
+ continue;
+ for (i = 0; i < MLXSW_SP_SB_TC_COUNT; i++) {
+ cm = mlxsw_sp_sb_cm_get(mlxsw_sp, local_port, i,
+ MLXSW_REG_SBXX_DIR_EGRESS);
+ mlxsw_reg_sbsr_rec_unpack(sbsr_pl, rec_index++,
+ &cm->occ.cur, &cm->occ.max);
+ }
+ if (++masked_count == cb_ctx.masked_count)
+ break;
+ }
+}
+
+int mlxsw_sp_sb_occ_snapshot(struct mlxsw_core *mlxsw_core,
+ unsigned int sb_index)
+{
+ struct mlxsw_sp *mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core);
+ struct mlxsw_sp_sb_sr_occ_query_cb_ctx cb_ctx;
+ unsigned long cb_priv;
+ LIST_HEAD(bulk_list);
+ char *sbsr_pl;
+ u8 masked_count;
+ u8 local_port_1;
+ u8 local_port = 0;
+ int i;
+ int err;
+ int err2;
+
+ sbsr_pl = kmalloc(MLXSW_REG_SBSR_LEN, GFP_KERNEL);
+ if (!sbsr_pl)
+ return -ENOMEM;
+
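+ /* Walk ports in batches: one SBSR transaction covers at most
+ * MASKED_COUNT_MAX ports, so restart here until all ports are done.
+ */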
+next_batch:
+ local_port++;
+ local_port_1 = local_port;
+ masked_count = 0;
+ mlxsw_reg_sbsr_pack(sbsr_pl, false);
+ for (i = 0; i < MLXSW_SP_SB_TC_COUNT; i++) {
+ mlxsw_reg_sbsr_pg_buff_mask_set(sbsr_pl, i, 1);
+ mlxsw_reg_sbsr_tclass_mask_set(sbsr_pl, i, 1);
+ }
+ for (; local_port < MLXSW_PORT_MAX_PORTS; local_port++) {
+ if (!mlxsw_sp->ports[local_port])
+ continue;
+ mlxsw_reg_sbsr_ingress_port_mask_set(sbsr_pl, local_port, 1);
+ mlxsw_reg_sbsr_egress_port_mask_set(sbsr_pl, local_port, 1);
+ for (i = 0; i < MLXSW_SP_SB_POOL_COUNT; i++) {
+ err = mlxsw_sp_sb_pm_occ_query(mlxsw_sp, local_port, i,
+ MLXSW_REG_SBXX_DIR_INGRESS,
+ &bulk_list);
+ if (err)
+ goto out;
+ err = mlxsw_sp_sb_pm_occ_query(mlxsw_sp, local_port, i,
+ MLXSW_REG_SBXX_DIR_EGRESS,
+ &bulk_list);
+ if (err)
+ goto out;
+ }
+ if (++masked_count == MASKED_COUNT_MAX)
+ goto do_query;
+ }
+
+do_query:
+ cb_ctx.masked_count = masked_count;
+ cb_ctx.local_port_1 = local_port_1;
+ memcpy(&cb_priv, &cb_ctx, sizeof(cb_ctx));
+ err = mlxsw_reg_trans_query(mlxsw_core, MLXSW_REG(sbsr), sbsr_pl,
+ &bulk_list, mlxsw_sp_sb_sr_occ_query_cb,
+ cb_priv);
+ if (err)
+ goto out;
+ if (local_port < MLXSW_PORT_MAX_PORTS)
+ goto next_batch;
+
+out:
+ err2 = mlxsw_reg_trans_bulk_wait(&bulk_list);
+ if (!err)
+ err = err2;
+ kfree(sbsr_pl);
+ return err;
+}
+
+int mlxsw_sp_sb_occ_max_clear(struct mlxsw_core *mlxsw_core,
+ unsigned int sb_index)
+{
+ struct mlxsw_sp *mlxsw_sp = mlxsw_core_driver_priv(mlxsw_core);
+ LIST_HEAD(bulk_list);
+ char *sbsr_pl;
+ unsigned int masked_count;
+ u8 local_port = 0;
+ int i;
+ int err;
+ int err2;
+
+ sbsr_pl = kmalloc(MLXSW_REG_SBSR_LEN, GFP_KERNEL);
+ if (!sbsr_pl)
+ return -ENOMEM;
+
+next_batch:
+ local_port++;
+ masked_count = 0;
+ mlxsw_reg_sbsr_pack(sbsr_pl, true);
+ for (i = 0; i < MLXSW_SP_SB_TC_COUNT; i++) {
+ mlxsw_reg_sbsr_pg_buff_mask_set(sbsr_pl, i, 1);
+ mlxsw_reg_sbsr_tclass_mask_set(sbsr_pl, i, 1);
+ }
+ for (; local_port < MLXSW_PORT_MAX_PORTS; local_port++) {
+ if (!mlxsw_sp->ports[local_port])
+ continue;
+ mlxsw_reg_sbsr_ingress_port_mask_set(sbsr_pl, local_port, 1);
+ mlxsw_reg_sbsr_egress_port_mask_set(sbsr_pl, local_port, 1);
+ for (i = 0; i < MLXSW_SP_SB_POOL_COUNT; i++) {
+ err = mlxsw_sp_sb_pm_occ_clear(mlxsw_sp, local_port, i,
+ MLXSW_REG_SBXX_DIR_INGRESS,
+ &bulk_list);
+ if (err)
+ goto out;
+ err = mlxsw_sp_sb_pm_occ_clear(mlxsw_sp, local_port, i,
+ MLXSW_REG_SBXX_DIR_EGRESS,
+ &bulk_list);
+ if (err)
+ goto out;
+ }
+ if (++masked_count == MASKED_COUNT_MAX)
+ goto do_query;
+ }
+
+do_query:
+ err = mlxsw_reg_trans_query(mlxsw_core, MLXSW_REG(sbsr), sbsr_pl,
+ &bulk_list, NULL, 0);
+ if (err)
+ goto out;
+ if (local_port < MLXSW_PORT_MAX_PORTS)
+ goto next_batch;
+
+out:
+ err2 = mlxsw_reg_trans_bulk_wait(&bulk_list);
+ if (!err)
+ err = err2;
+ kfree(sbsr_pl);
+ return err;
+}
+
+int mlxsw_sp_sb_occ_port_pool_get(struct mlxsw_core_port *mlxsw_core_port,
+ unsigned int sb_index, u16 pool_index,
+ u32 *p_cur, u32 *p_max)
+{
+ struct mlxsw_sp_port *mlxsw_sp_port =
+ mlxsw_core_port_driver_priv(mlxsw_core_port);
+ struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+ u8 local_port = mlxsw_sp_port->local_port;
+ u8 pool = pool_get(pool_index);
+ enum mlxsw_reg_sbxx_dir dir = dir_get(pool_index);
+ struct mlxsw_sp_sb_pm *pm = mlxsw_sp_sb_pm_get(mlxsw_sp, local_port,
+ pool, dir);
+
+ *p_cur = MLXSW_SP_CELLS_TO_BYTES(pm->occ.cur);
+ *p_max = MLXSW_SP_CELLS_TO_BYTES(pm->occ.max);
+ return 0;
+}
+
+int mlxsw_sp_sb_occ_tc_port_bind_get(struct mlxsw_core_port *mlxsw_core_port,
+ unsigned int sb_index, u16 tc_index,
+ enum devlink_sb_pool_type pool_type,
+ u32 *p_cur, u32 *p_max)
+{
+ struct mlxsw_sp_port *mlxsw_sp_port =
+ mlxsw_core_port_driver_priv(mlxsw_core_port);
+ struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+ u8 local_port = mlxsw_sp_port->local_port;
+ u8 pg_buff = tc_index;
+ enum mlxsw_reg_sbxx_dir dir = pool_type;
+ struct mlxsw_sp_sb_cm *cm = mlxsw_sp_sb_cm_get(mlxsw_sp, local_port,
+ pg_buff, dir);
+
+ *p_cur = MLXSW_SP_CELLS_TO_BYTES(cm->occ.cur);
+ *p_max = MLXSW_SP_CELLS_TO_BYTES(cm->occ.max);
+ return 0;
+}
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
new file mode 100644
index 000000000000..0b323661c0b6
--- /dev/null
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
@@ -0,0 +1,480 @@
+/*
+ * drivers/net/ethernet/mellanox/mlxsw/spectrum_dcb.c
+ * Copyright (c) 2016 Mellanox Technologies. All rights reserved.
+ * Copyright (c) 2016 Ido Schimmel <idosch@mellanox.com>
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions are met:
+ *
+ * 1. Redistributions of source code must retain the above copyright
+ * notice, this list of conditions and the following disclaimer.
+ * 2. Redistributions in binary form must reproduce the above copyright
+ * notice, this list of conditions and the following disclaimer in the
+ * documentation and/or other materials provided with the distribution.
+ * 3. Neither the names of the copyright holders nor the names of its
+ * contributors may be used to endorse or promote products derived from
+ * this software without specific prior written permission.
+ *
+ * Alternatively, this software may be distributed under the terms of the
+ * GNU General Public License ("GPL") version 2 as published by the Free
+ * Software Foundation.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
+ * POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <linux/netdevice.h>
+#include <linux/string.h>
+#include <linux/bitops.h>
+#include <net/dcbnl.h>
+
+#include "spectrum.h"
+#include "reg.h"
+
+static u8 mlxsw_sp_dcbnl_getdcbx(struct net_device __always_unused *dev)
+{
+ return DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_IEEE;
+}
+
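+/* Only host-managed IEEE DCBX is supported; a non-zero return from
+ * setdcbx rejects any other requested mode.
+ */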
+static u8 mlxsw_sp_dcbnl_setdcbx(struct net_device __always_unused *dev,
+ u8 mode)
+{
+ return (mode != (DCB_CAP_DCBX_HOST | DCB_CAP_DCBX_VER_IEEE)) ? 1 : 0;
+}
+
+static int mlxsw_sp_dcbnl_ieee_getets(struct net_device *dev,
+ struct ieee_ets *ets)
+{
+ struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
+
+ memcpy(ets, mlxsw_sp_port->dcb.ets, sizeof(*ets));
+
+ return 0;
+}
+
+static int mlxsw_sp_port_ets_validate(struct mlxsw_sp_port *mlxsw_sp_port,
+ struct ieee_ets *ets)
+{
+ struct net_device *dev = mlxsw_sp_port->dev;
+ bool has_ets_tc = false;
+ int i, tx_bw_sum = 0;
+
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ switch (ets->tc_tsa[i]) {
+ case IEEE_8021QAZ_TSA_STRICT:
+ break;
+ case IEEE_8021QAZ_TSA_ETS:
+ has_ets_tc = true;
+ tx_bw_sum += ets->tc_tx_bw[i];
+ break;
+ default:
+ netdev_err(dev, "Only strict priority and ETS are supported\n");
+ return -EINVAL;
+ }
+
+ if (ets->prio_tc[i] >= IEEE_8021QAZ_MAX_TCS) {
+ netdev_err(dev, "Invalid TC\n");
+ return -EINVAL;
+ }
+ }
+
+ if (has_ets_tc && tx_bw_sum != 100) {
+ netdev_err(dev, "Total ETS bandwidth should equal 100\n");
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int mlxsw_sp_port_pg_prio_map(struct mlxsw_sp_port *mlxsw_sp_port,
+ u8 *prio_tc)
+{
+ char pptb_pl[MLXSW_REG_PPTB_LEN];
+ int i;
+
+ mlxsw_reg_pptb_pack(pptb_pl, mlxsw_sp_port->local_port);
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++)
+ mlxsw_reg_pptb_prio_to_buff_set(pptb_pl, i, prio_tc[i]);
+ return mlxsw_reg_write(mlxsw_sp_port->mlxsw_sp->core, MLXSW_REG(pptb),
+ pptb_pl);
+}
+
+static bool mlxsw_sp_ets_has_pg(u8 *prio_tc, u8 pg)
+{
+ int i;
+
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++)
+ if (prio_tc[i] == pg)
+ return true;
+ return false;
+}
+
+static int mlxsw_sp_port_pg_destroy(struct mlxsw_sp_port *mlxsw_sp_port,
+ u8 *old_prio_tc, u8 *new_prio_tc)
+{
+ struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+ char pbmc_pl[MLXSW_REG_PBMC_LEN];
+ int err, i;
+
+ mlxsw_reg_pbmc_pack(pbmc_pl, mlxsw_sp_port->local_port, 0, 0);
+ err = mlxsw_reg_query(mlxsw_sp->core, MLXSW_REG(pbmc), pbmc_pl);
+ if (err)
+ return err;
+
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ u8 pg = old_prio_tc[i];
+
+ if (!mlxsw_sp_ets_has_pg(new_prio_tc, pg))
+ mlxsw_reg_pbmc_lossy_buffer_pack(pbmc_pl, pg, 0);
+ }
+
+ return mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(pbmc), pbmc_pl);
+}
+
+static int mlxsw_sp_port_headroom_set(struct mlxsw_sp_port *mlxsw_sp_port,
+ struct ieee_ets *ets)
+{
+ bool pause_en = mlxsw_sp_port_is_pause_en(mlxsw_sp_port);
+ struct ieee_ets *my_ets = mlxsw_sp_port->dcb.ets;
+ struct net_device *dev = mlxsw_sp_port->dev;
+ int err;
+
+ /* Create the required PGs, but don't destroy existing ones, as
+ * traffic is still directed to them.
+ */
+ err = __mlxsw_sp_port_headroom_set(mlxsw_sp_port, dev->mtu,
+ ets->prio_tc, pause_en,
+ mlxsw_sp_port->dcb.pfc);
+ if (err) {
+ netdev_err(dev, "Failed to configure port's headroom\n");
+ return err;
+ }
+
+ err = mlxsw_sp_port_pg_prio_map(mlxsw_sp_port, ets->prio_tc);
+ if (err) {
+ netdev_err(dev, "Failed to set PG-priority mapping\n");
+ goto err_port_prio_pg_map;
+ }
+
+ err = mlxsw_sp_port_pg_destroy(mlxsw_sp_port, my_ets->prio_tc,
+ ets->prio_tc);
+ if (err)
+ netdev_warn(dev, "Failed to remove ununsed PGs\n");
+
+ return 0;
+
+err_port_prio_pg_map:
+ mlxsw_sp_port_pg_destroy(mlxsw_sp_port, ets->prio_tc, my_ets->prio_tc);
+ return err;
+}
+
+static int __mlxsw_sp_dcbnl_ieee_setets(struct mlxsw_sp_port *mlxsw_sp_port,
+ struct ieee_ets *ets)
+{
+ struct ieee_ets *my_ets = mlxsw_sp_port->dcb.ets;
+ struct net_device *dev = mlxsw_sp_port->dev;
+ int i, err;
+
+ /* Egress configuration. */
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ bool dwrr = ets->tc_tsa[i] == IEEE_8021QAZ_TSA_ETS;
+ u8 weight = ets->tc_tx_bw[i];
+
+ err = mlxsw_sp_port_ets_set(mlxsw_sp_port,
+ MLXSW_REG_QEEC_HIERARCY_SUBGROUP, i,
+ 0, dwrr, weight);
+ if (err) {
+ netdev_err(dev, "Failed to link subgroup ETS element %d to group\n",
+ i);
+ goto err_port_ets_set;
+ }
+ }
+
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ err = mlxsw_sp_port_prio_tc_set(mlxsw_sp_port, i,
+ ets->prio_tc[i]);
+ if (err) {
+ netdev_err(dev, "Failed to map prio %d to TC %d\n", i,
+ ets->prio_tc[i]);
+ goto err_port_prio_tc_set;
+ }
+ }
+
+ /* Ingress configuration. */
+ err = mlxsw_sp_port_headroom_set(mlxsw_sp_port, ets);
+ if (err)
+ goto err_port_headroom_set;
+
+ return 0;
+
+err_port_headroom_set:
+ i = IEEE_8021QAZ_MAX_TCS;
+err_port_prio_tc_set:
+ for (i--; i >= 0; i--)
+ mlxsw_sp_port_prio_tc_set(mlxsw_sp_port, i, my_ets->prio_tc[i]);
+ i = IEEE_8021QAZ_MAX_TCS;
+err_port_ets_set:
+ for (i--; i >= 0; i--) {
+ bool dwrr = my_ets->tc_tsa[i] == IEEE_8021QAZ_TSA_ETS;
+ u8 weight = my_ets->tc_tx_bw[i];
+
+ err = mlxsw_sp_port_ets_set(mlxsw_sp_port,
+ MLXSW_REG_QEEC_HIERARCY_SUBGROUP, i,
+ 0, dwrr, weight);
+ }
+ return err;
+}
+
+static int mlxsw_sp_dcbnl_ieee_setets(struct net_device *dev,
+ struct ieee_ets *ets)
+{
+ struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
+ int err;
+
+ err = mlxsw_sp_port_ets_validate(mlxsw_sp_port, ets);
+ if (err)
+ return err;
+
+ err = __mlxsw_sp_dcbnl_ieee_setets(mlxsw_sp_port, ets);
+ if (err)
+ return err;
+
+ memcpy(mlxsw_sp_port->dcb.ets, ets, sizeof(*ets));
+
+ return 0;
+}
+
+static int mlxsw_sp_dcbnl_ieee_getmaxrate(struct net_device *dev,
+ struct ieee_maxrate *maxrate)
+{
+ struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
+
+ memcpy(maxrate, mlxsw_sp_port->dcb.maxrate, sizeof(*maxrate));
+
+ return 0;
+}
+
+static int mlxsw_sp_dcbnl_ieee_setmaxrate(struct net_device *dev,
+ struct ieee_maxrate *maxrate)
+{
+ struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
+ struct ieee_maxrate *my_maxrate = mlxsw_sp_port->dcb.maxrate;
+ int err, i;
+
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ err = mlxsw_sp_port_ets_maxrate_set(mlxsw_sp_port,
+ MLXSW_REG_QEEC_HIERARCY_SUBGROUP,
+ i, 0,
+ maxrate->tc_maxrate[i]);
+ if (err) {
+ netdev_err(dev, "Failed to set maxrate for TC %d\n", i);
+ goto err_port_ets_maxrate_set;
+ }
+ }
+
+ memcpy(mlxsw_sp_port->dcb.maxrate, maxrate, sizeof(*maxrate));
+
+ return 0;
+
+err_port_ets_maxrate_set:
+ for (i--; i >= 0; i--)
+ mlxsw_sp_port_ets_maxrate_set(mlxsw_sp_port,
+ MLXSW_REG_QEEC_HIERARCY_SUBGROUP,
+ i, 0, my_maxrate->tc_maxrate[i]);
+ return err;
+}
+
+static int mlxsw_sp_port_pfc_cnt_get(struct mlxsw_sp_port *mlxsw_sp_port,
+ u8 prio)
+{
+ struct mlxsw_sp *mlxsw_sp = mlxsw_sp_port->mlxsw_sp;
+ struct ieee_pfc *my_pfc = mlxsw_sp_port->dcb.pfc;
+ char ppcnt_pl[MLXSW_REG_PPCNT_LEN];
+ int err;
+
+ mlxsw_reg_ppcnt_pack(ppcnt_pl, mlxsw_sp_port->local_port,
+ MLXSW_REG_PPCNT_PRIO_CNT, prio);
+ err = mlxsw_reg_query(mlxsw_sp->core, MLXSW_REG(ppcnt), ppcnt_pl);
+ if (err)
+ return err;
+
+ my_pfc->requests[prio] = mlxsw_reg_ppcnt_tx_pause_get(ppcnt_pl);
+ my_pfc->indications[prio] = mlxsw_reg_ppcnt_rx_pause_get(ppcnt_pl);
+
+ return 0;
+}
+
+static int mlxsw_sp_dcbnl_ieee_getpfc(struct net_device *dev,
+ struct ieee_pfc *pfc)
+{
+ struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
+ int err, i;
+
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++) {
+ err = mlxsw_sp_port_pfc_cnt_get(mlxsw_sp_port, i);
+ if (err) {
+ netdev_err(dev, "Failed to get PFC count for priority %d\n",
+ i);
+ return err;
+ }
+ }
+
+ memcpy(pfc, mlxsw_sp_port->dcb.pfc, sizeof(*pfc));
+
+ return 0;
+}
+
+static int mlxsw_sp_port_pfc_set(struct mlxsw_sp_port *mlxsw_sp_port,
+ struct ieee_pfc *pfc)
+{
+ char pfcc_pl[MLXSW_REG_PFCC_LEN];
+
+ mlxsw_reg_pfcc_pack(pfcc_pl, mlxsw_sp_port->local_port);
+ mlxsw_reg_pfcc_prio_pack(pfcc_pl, pfc->pfc_en);
+
+ return mlxsw_reg_write(mlxsw_sp_port->mlxsw_sp->core, MLXSW_REG(pfcc),
+ pfcc_pl);
+}
+
+static int mlxsw_sp_dcbnl_ieee_setpfc(struct net_device *dev,
+ struct ieee_pfc *pfc)
+{
+ struct mlxsw_sp_port *mlxsw_sp_port = netdev_priv(dev);
+ int err;
+
+ if (mlxsw_sp_port->link.tx_pause || mlxsw_sp_port->link.rx_pause) {
+ netdev_err(dev, "PAUSE frames already enabled on port\n");
+ return -EINVAL;
+ }
+
+ err = __mlxsw_sp_port_headroom_set(mlxsw_sp_port, dev->mtu,
+ mlxsw_sp_port->dcb.ets->prio_tc,
+ false, pfc);
+ if (err) {
+ netdev_err(dev, "Failed to configure port's headroom for PFC\n");
+ return err;
+ }
+
+ err = mlxsw_sp_port_pfc_set(mlxsw_sp_port, pfc);
+ if (err) {
+ netdev_err(dev, "Failed to configure PFC\n");
+ goto err_port_pfc_set;
+ }
+
+ memcpy(mlxsw_sp_port->dcb.pfc, pfc, sizeof(*pfc));
+
+ return 0;
+
+err_port_pfc_set:
+ __mlxsw_sp_port_headroom_set(mlxsw_sp_port, dev->mtu,
+ mlxsw_sp_port->dcb.ets->prio_tc, false,
+ mlxsw_sp_port->dcb.pfc);
+ return err;
+}
+
+static const struct dcbnl_rtnl_ops mlxsw_sp_dcbnl_ops = {
+ .ieee_getets = mlxsw_sp_dcbnl_ieee_getets,
+ .ieee_setets = mlxsw_sp_dcbnl_ieee_setets,
+ .ieee_getmaxrate = mlxsw_sp_dcbnl_ieee_getmaxrate,
+ .ieee_setmaxrate = mlxsw_sp_dcbnl_ieee_setmaxrate,
+ .ieee_getpfc = mlxsw_sp_dcbnl_ieee_getpfc,
+ .ieee_setpfc = mlxsw_sp_dcbnl_ieee_setpfc,
+
+ .getdcbx = mlxsw_sp_dcbnl_getdcbx,
+ .setdcbx = mlxsw_sp_dcbnl_setdcbx,
+};
+
+static int mlxsw_sp_port_ets_init(struct mlxsw_sp_port *mlxsw_sp_port)
+{
+ mlxsw_sp_port->dcb.ets = kzalloc(sizeof(*mlxsw_sp_port->dcb.ets),
+ GFP_KERNEL);
+ if (!mlxsw_sp_port->dcb.ets)
+ return -ENOMEM;
+
+ mlxsw_sp_port->dcb.ets->ets_cap = IEEE_8021QAZ_MAX_TCS;
+
+ return 0;
+}
+
+static void mlxsw_sp_port_ets_fini(struct mlxsw_sp_port *mlxsw_sp_port)
+{
+ kfree(mlxsw_sp_port->dcb.ets);
+}
+
+static int mlxsw_sp_port_maxrate_init(struct mlxsw_sp_port *mlxsw_sp_port)
+{
+ int i;
+
+ mlxsw_sp_port->dcb.maxrate = kmalloc(sizeof(*mlxsw_sp_port->dcb.maxrate),
+ GFP_KERNEL);
+ if (!mlxsw_sp_port->dcb.maxrate)
+ return -ENOMEM;
+
+ for (i = 0; i < IEEE_8021QAZ_MAX_TCS; i++)
+ mlxsw_sp_port->dcb.maxrate->tc_maxrate[i] = MLXSW_REG_QEEC_MAS_DIS;
+
+ return 0;
+}
+
+static void mlxsw_sp_port_maxrate_fini(struct mlxsw_sp_port *mlxsw_sp_port)
+{
+ kfree(mlxsw_sp_port->dcb.maxrate);
+}
+
+static int mlxsw_sp_port_pfc_init(struct mlxsw_sp_port *mlxsw_sp_port)
+{
+ mlxsw_sp_port->dcb.pfc = kzalloc(sizeof(*mlxsw_sp_port->dcb.pfc),
+ GFP_KERNEL);
+ if (!mlxsw_sp_port->dcb.pfc)
+ return -ENOMEM;
+
+ mlxsw_sp_port->dcb.pfc->pfc_cap = IEEE_8021QAZ_MAX_TCS;
+
+ return 0;
+}
+
+static void mlxsw_sp_port_pfc_fini(struct mlxsw_sp_port *mlxsw_sp_port)
+{
+ kfree(mlxsw_sp_port->dcb.pfc);
+}
+
+int mlxsw_sp_port_dcb_init(struct mlxsw_sp_port *mlxsw_sp_port)
+{
+ int err;
+
+ err = mlxsw_sp_port_ets_init(mlxsw_sp_port);
+ if (err)
+ return err;
+ err = mlxsw_sp_port_maxrate_init(mlxsw_sp_port);
+ if (err)
+ goto err_port_maxrate_init;
+ err = mlxsw_sp_port_pfc_init(mlxsw_sp_port);
+ if (err)
+ goto err_port_pfc_init;
+
+ mlxsw_sp_port->dev->dcbnl_ops = &mlxsw_sp_dcbnl_ops;
+
+ return 0;
+
+err_port_pfc_init:
+ mlxsw_sp_port_maxrate_fini(mlxsw_sp_port);
+err_port_maxrate_init:
+ mlxsw_sp_port_ets_fini(mlxsw_sp_port);
+ return err;
+}
+
+void mlxsw_sp_port_dcb_fini(struct mlxsw_sp_port *mlxsw_sp_port)
+{
+ mlxsw_sp_port_pfc_fini(mlxsw_sp_port);
+ mlxsw_sp_port_maxrate_fini(mlxsw_sp_port);
+ mlxsw_sp_port_ets_fini(mlxsw_sp_port);
+}
diff --git a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
index e1c74efff51a..3710f19ed6bb 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/spectrum_switchdev.c
@@ -214,7 +214,15 @@ static int __mlxsw_sp_port_flood_set(struct mlxsw_sp_port *mlxsw_sp_port,
mlxsw_reg_sftr_pack(sftr_pl, MLXSW_SP_FLOOD_TABLE_BM, idx_begin,
table_type, range, local_port, set);
err = mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sftr), sftr_pl);
+ if (err)
+ goto err_flood_bm_set;
+ else
+ goto buffer_out;
+err_flood_bm_set:
+ mlxsw_reg_sftr_pack(sftr_pl, MLXSW_SP_FLOOD_TABLE_UC, idx_begin,
+ table_type, range, local_port, !set);
+ mlxsw_reg_write(mlxsw_sp->core, MLXSW_REG(sftr), sftr_pl);
buffer_out:
kfree(sftr_pl);
return err;
@@ -1430,8 +1438,8 @@ static void mlxsw_sp_fdb_notify_rec_process(struct mlxsw_sp *mlxsw_sp,
static void mlxsw_sp_fdb_notify_work_schedule(struct mlxsw_sp *mlxsw_sp)
{
- schedule_delayed_work(&mlxsw_sp->fdb_notify.dw,
- msecs_to_jiffies(mlxsw_sp->fdb_notify.interval));
+ mlxsw_core_schedule_dw(&mlxsw_sp->fdb_notify.dw,
+ msecs_to_jiffies(mlxsw_sp->fdb_notify.interval));
}
static void mlxsw_sp_fdb_notify_work(struct work_struct *work)
diff --git a/drivers/net/ethernet/mellanox/mlxsw/switchx2.c b/drivers/net/ethernet/mellanox/mlxsw/switchx2.c
index 7a60a26759b6..3842eab9449a 100644
--- a/drivers/net/ethernet/mellanox/mlxsw/switchx2.c
+++ b/drivers/net/ethernet/mellanox/mlxsw/switchx2.c
@@ -43,7 +43,6 @@
#include <linux/device.h>
#include <linux/skbuff.h>
#include <linux/if_vlan.h>
-#include <net/devlink.h>
#include <net/switchdev.h>
#include <generated/utsrelease.h>
@@ -75,11 +74,11 @@ struct mlxsw_sx_port_pcpu_stats {
};
struct mlxsw_sx_port {
+ struct mlxsw_core_port core_port; /* must be first */
struct net_device *dev;
struct mlxsw_sx_port_pcpu_stats __percpu *pcpu_stats;
struct mlxsw_sx *mlxsw_sx;
u8 local_port;
- struct devlink_port devlink_port;
};
/* tx_hdr_version
@@ -303,7 +302,7 @@ static netdev_tx_t mlxsw_sx_port_xmit(struct sk_buff *skb,
u64 len;
int err;
- if (mlxsw_core_skb_transmit_busy(mlxsw_sx, &tx_info))
+ if (mlxsw_core_skb_transmit_busy(mlxsw_sx->core, &tx_info))
return NETDEV_TX_BUSY;
if (unlikely(skb_headroom(skb) < MLXSW_TXHDR_LEN)) {
@@ -321,7 +320,7 @@ static netdev_tx_t mlxsw_sx_port_xmit(struct sk_buff *skb,
/* Due to a race we might fail here because of a full queue. In that
* unlikely case we simply drop the packet.
*/
- err = mlxsw_core_skb_transmit(mlxsw_sx, skb, &tx_info);
+ err = mlxsw_core_skb_transmit(mlxsw_sx->core, skb, &tx_info);
if (!err) {
pcpu_stats = this_cpu_ptr(mlxsw_sx_port->pcpu_stats);
@@ -518,7 +517,8 @@ static void mlxsw_sx_port_get_stats(struct net_device *dev,
int i;
int err;
- mlxsw_reg_ppcnt_pack(ppcnt_pl, mlxsw_sx_port->local_port);
+ mlxsw_reg_ppcnt_pack(ppcnt_pl, mlxsw_sx_port->local_port,
+ MLXSW_REG_PPCNT_IEEE_8023_CNT, 0);
err = mlxsw_reg_query(mlxsw_sx->core, MLXSW_REG(ppcnt), ppcnt_pl);
for (i = 0; i < MLXSW_SX_PORT_HW_STATS_LEN; i++)
data[i] = !err ? mlxsw_sx_port_hw_stats[i].getter(ppcnt_pl) : 0;
@@ -955,9 +955,7 @@ mlxsw_sx_port_mac_learning_mode_set(struct mlxsw_sx_port *mlxsw_sx_port,
static int mlxsw_sx_port_create(struct mlxsw_sx *mlxsw_sx, u8 local_port)
{
- struct devlink *devlink = priv_to_devlink(mlxsw_sx->core);
struct mlxsw_sx_port *mlxsw_sx_port;
- struct devlink_port *devlink_port;
struct net_device *dev;
bool usable;
int err;
@@ -1011,14 +1009,6 @@ static int mlxsw_sx_port_create(struct mlxsw_sx *mlxsw_sx, u8 local_port)
goto port_not_usable;
}
- devlink_port = &mlxsw_sx_port->devlink_port;
- err = devlink_port_register(devlink, devlink_port, local_port);
- if (err) {
- dev_err(mlxsw_sx->bus_info->dev, "Port %d: Failed to register devlink port\n",
- mlxsw_sx_port->local_port);
- goto err_devlink_port_register;
- }
-
err = mlxsw_sx_port_system_port_mapping_set(mlxsw_sx_port);
if (err) {
dev_err(mlxsw_sx->bus_info->dev, "Port %d: Failed to set system port mapping\n",
@@ -1076,11 +1066,19 @@ static int mlxsw_sx_port_create(struct mlxsw_sx *mlxsw_sx, u8 local_port)
goto err_register_netdev;
}
- devlink_port_type_eth_set(devlink_port, dev);
+ err = mlxsw_core_port_init(mlxsw_sx->core, &mlxsw_sx_port->core_port,
+ mlxsw_sx_port->local_port, dev, false, 0);
+ if (err) {
+ dev_err(mlxsw_sx->bus_info->dev, "Port %d: Failed to init core port\n",
+ mlxsw_sx_port->local_port);
+ goto err_core_port_init;
+ }
mlxsw_sx->ports[local_port] = mlxsw_sx_port;
return 0;
+err_core_port_init:
+ unregister_netdev(dev);
err_register_netdev:
err_port_mac_learning_mode_set:
err_port_stp_state_set:
@@ -1089,8 +1087,6 @@ err_port_mtu_set:
err_port_speed_set:
err_port_swid_set:
err_port_system_port_mapping_set:
- devlink_port_unregister(&mlxsw_sx_port->devlink_port);
-err_devlink_port_register:
port_not_usable:
err_port_module_check:
err_dev_addr_get:
@@ -1103,15 +1099,12 @@ err_alloc_stats:
static void mlxsw_sx_port_remove(struct mlxsw_sx *mlxsw_sx, u8 local_port)
{
struct mlxsw_sx_port *mlxsw_sx_port = mlxsw_sx->ports[local_port];
- struct devlink_port *devlink_port;
if (!mlxsw_sx_port)
return;
- devlink_port = &mlxsw_sx_port->devlink_port;
- devlink_port_type_clear(devlink_port);
+ mlxsw_core_port_fini(&mlxsw_sx_port->core_port);
unregister_netdev(mlxsw_sx_port->dev); /* This calls ndo_stop */
mlxsw_sx_port_swid_set(mlxsw_sx_port, MLXSW_PORT_SWID_DISABLED_PORT);
- devlink_port_unregister(devlink_port);
free_percpu(mlxsw_sx_port->pcpu_stats);
free_netdev(mlxsw_sx_port->dev);
}
@@ -1454,10 +1447,10 @@ static int mlxsw_sx_flood_init(struct mlxsw_sx *mlxsw_sx)
return mlxsw_reg_write(mlxsw_sx->core, MLXSW_REG(sgcr), sgcr_pl);
}
-static int mlxsw_sx_init(void *priv, struct mlxsw_core *mlxsw_core,
+static int mlxsw_sx_init(struct mlxsw_core *mlxsw_core,
const struct mlxsw_bus_info *mlxsw_bus_info)
{
- struct mlxsw_sx *mlxsw_sx = priv;
+ struct mlxsw_sx *mlxsw_sx = mlxsw_core_driver_priv(mlxsw_core);
int err;
mlxsw_sx->core = mlxsw_core;
@@ -1504,9 +1497,9 @@ err_event_register:
return err;
}
-static void mlxsw_sx_fini(void *priv)
+static void mlxsw_sx_fini(struct mlxsw_core *mlxsw_core)
{
- struct mlxsw_sx *mlxsw_sx = priv;
+ struct mlxsw_sx *mlxsw_sx = mlxsw_core_driver_priv(mlxsw_core);
mlxsw_sx_traps_fini(mlxsw_sx);
mlxsw_sx_event_unregister(mlxsw_sx, MLXSW_TRAP_ID_PUDE);
diff --git a/drivers/net/ethernet/micrel/ks8695net.c b/drivers/net/ethernet/micrel/ks8695net.c
index a8522d8af95d..20cb85bc0c5f 100644
--- a/drivers/net/ethernet/micrel/ks8695net.c
+++ b/drivers/net/ethernet/micrel/ks8695net.c
@@ -1354,6 +1354,7 @@ ks8695_probe(struct platform_device *pdev)
struct resource *rxirq_res, *txirq_res, *linkirq_res;
int ret = 0;
int buff_n;
+ bool inv_mac_addr = false;
u32 machigh, maclow;
/* Initialise a net_device */
@@ -1456,8 +1457,7 @@ ks8695_probe(struct platform_device *pdev)
ndev->dev_addr[5] = maclow & 0xFF;
if (!is_valid_ether_addr(ndev->dev_addr))
- dev_warn(ksp->dev, "%s: Invalid ethernet MAC address. Please "
- "set using ifconfig\n", ndev->name);
+ inv_mac_addr = true;
/* In order to be efficient memory-wise, we allocate both
* rings in one go.
@@ -1520,6 +1520,9 @@ ks8695_probe(struct platform_device *pdev)
ret = register_netdev(ndev);
if (ret == 0) {
+ if (inv_mac_addr)
+ dev_warn(ksp->dev, "%s: Invalid ethernet MAC address. Please set using ip\n",
+ ndev->name);
dev_info(ksp->dev, "ks8695 ethernet (%s) MAC: %pM\n",
ks8695_port_type(ksp), ndev->dev_addr);
} else {
diff --git a/drivers/net/ethernet/micrel/ksz884x.c b/drivers/net/ethernet/micrel/ksz884x.c
index 75dc46c5fca2..280e761d3a97 100644
--- a/drivers/net/ethernet/micrel/ksz884x.c
+++ b/drivers/net/ethernet/micrel/ksz884x.c
@@ -4790,7 +4790,7 @@ static void transmit_cleanup(struct dev_info *hw_priv, int normal)
/* Notify the network subsystem that the packet has been sent. */
if (dev)
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
}
/**
@@ -4965,7 +4965,7 @@ static void netdev_tx_timeout(struct net_device *dev)
hw_ena_intr(hw);
}
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
netif_wake_queue(dev);
}
diff --git a/drivers/net/ethernet/microchip/enc28j60.c b/drivers/net/ethernet/microchip/enc28j60.c
index 86ea17e7ba7b..7066954c39d6 100644
--- a/drivers/net/ethernet/microchip/enc28j60.c
+++ b/drivers/net/ethernet/microchip/enc28j60.c
@@ -28,11 +28,12 @@
#include <linux/skbuff.h>
#include <linux/delay.h>
#include <linux/spi/spi.h>
+#include <linux/of_net.h>
#include "enc28j60_hw.h"
#define DRV_NAME "enc28j60"
-#define DRV_VERSION "1.01"
+#define DRV_VERSION "1.02"
#define SPI_OPLEN 1
@@ -89,22 +90,26 @@ spi_read_buf(struct enc28j60_net *priv, int len, u8 *data)
{
u8 *rx_buf = priv->spi_transfer_buf + 4;
u8 *tx_buf = priv->spi_transfer_buf;
- struct spi_transfer t = {
+ struct spi_transfer tx = {
.tx_buf = tx_buf,
+ .len = SPI_OPLEN,
+ };
+ struct spi_transfer rx = {
.rx_buf = rx_buf,
- .len = SPI_OPLEN + len,
+ .len = len,
};
struct spi_message msg;
int ret;
tx_buf[0] = ENC28J60_READ_BUF_MEM;
- tx_buf[1] = tx_buf[2] = tx_buf[3] = 0; /* don't care */
spi_message_init(&msg);
- spi_message_add_tail(&t, &msg);
+ spi_message_add_tail(&tx, &msg);
+ spi_message_add_tail(&rx, &msg);
+
ret = spi_sync(priv->spi, &msg);
if (ret == 0) {
- memcpy(data, &rx_buf[SPI_OPLEN], len);
+ memcpy(data, rx_buf, len);
ret = msg.status;
}
if (ret && netif_msg_drv(priv))
@@ -1544,6 +1549,7 @@ static int enc28j60_probe(struct spi_device *spi)
{
struct net_device *dev;
struct enc28j60_net *priv;
+ const void *macaddr;
int ret = 0;
if (netif_msg_drv(&debug))
@@ -1575,7 +1581,12 @@ static int enc28j60_probe(struct spi_device *spi)
ret = -EIO;
goto error_irq;
}
- eth_hw_addr_random(dev);
+
+ macaddr = of_get_mac_address(spi->dev.of_node);
+ if (macaddr)
+ ether_addr_copy(dev->dev_addr, macaddr);
+ else
+ eth_hw_addr_random(dev);
enc28j60_set_hw_macaddr(dev);
/* Board setup must set the relevant edge trigger type;
@@ -1630,9 +1641,16 @@ static int enc28j60_remove(struct spi_device *spi)
return 0;
}
+static const struct of_device_id enc28j60_dt_ids[] = {
+ { .compatible = "microchip,enc28j60" },
+ { /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(of, enc28j60_dt_ids);
+
static struct spi_driver enc28j60_driver = {
.driver = {
- .name = DRV_NAME,
+ .name = DRV_NAME,
+ .of_match_table = enc28j60_dt_ids,
},
.probe = enc28j60_probe,
.remove = enc28j60_remove,
diff --git a/drivers/net/ethernet/microchip/encx24j600.c b/drivers/net/ethernet/microchip/encx24j600.c
index 7df318346b05..42e34076d2de 100644
--- a/drivers/net/ethernet/microchip/encx24j600.c
+++ b/drivers/net/ethernet/microchip/encx24j600.c
@@ -874,7 +874,7 @@ static netdev_tx_t encx24j600_tx(struct sk_buff *skb, struct net_device *dev)
netif_stop_queue(dev);
/* save the timestamp */
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
/* Remember the skb for deferred processing */
priv->tx_skb = skb;
@@ -890,7 +890,7 @@ static void encx24j600_tx_timeout(struct net_device *dev)
struct encx24j600_priv *priv = netdev_priv(dev);
netif_err(priv, tx_err, dev, "TX timeout at %ld, latency %ld\n",
- jiffies, jiffies - dev->trans_start);
+ jiffies, jiffies - dev_trans_start(dev));
dev->stats.tx_errors++;
netif_wake_queue(dev);
diff --git a/drivers/net/ethernet/moxa/moxart_ether.c b/drivers/net/ethernet/moxa/moxart_ether.c
index 3e67f451f2ab..4367dd6879a2 100644
--- a/drivers/net/ethernet/moxa/moxart_ether.c
+++ b/drivers/net/ethernet/moxa/moxart_ether.c
@@ -376,7 +376,7 @@ static int moxart_mac_start_xmit(struct sk_buff *skb, struct net_device *ndev)
priv->tx_head = TX_NEXT(tx_head);
- ndev->trans_start = jiffies;
+ netif_trans_update(ndev);
ret = NETDEV_TX_OK;
out_unlock:
spin_unlock_irq(&priv->txlock);
diff --git a/drivers/net/ethernet/natsemi/natsemi.c b/drivers/net/ethernet/natsemi/natsemi.c
index 122c2ee3dfe2..ed89029ff75b 100644
--- a/drivers/net/ethernet/natsemi/natsemi.c
+++ b/drivers/net/ethernet/natsemi/natsemi.c
@@ -1904,7 +1904,7 @@ static void ns_tx_timeout(struct net_device *dev)
spin_unlock_irq(&np->lock);
enable_irq(irq);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
dev->stats.tx_errors++;
netif_wake_queue(dev);
}
diff --git a/drivers/net/ethernet/natsemi/sonic.c b/drivers/net/ethernet/natsemi/sonic.c
index 1bd419dbda6d..612c7a44b26c 100644
--- a/drivers/net/ethernet/natsemi/sonic.c
+++ b/drivers/net/ethernet/natsemi/sonic.c
@@ -174,7 +174,7 @@ static void sonic_tx_timeout(struct net_device *dev)
/* Try to restart the adaptor. */
sonic_init(dev);
lp->stats.tx_errors++;
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue(dev);
}
diff --git a/drivers/net/ethernet/neterion/s2io.c b/drivers/net/ethernet/neterion/s2io.c
index 9ba975853ec6..2874dffe77de 100644
--- a/drivers/net/ethernet/neterion/s2io.c
+++ b/drivers/net/ethernet/neterion/s2io.c
@@ -4021,7 +4021,6 @@ static netdev_tx_t s2io_xmit(struct sk_buff *skb, struct net_device *dev)
unsigned long flags = 0;
u16 vlan_tag = 0;
struct fifo_info *fifo = NULL;
- int do_spin_lock = 1;
int offload_type;
int enable_per_list_interrupt = 0;
struct config_param *config = &sp->config;
@@ -4074,7 +4073,6 @@ static netdev_tx_t s2io_xmit(struct sk_buff *skb, struct net_device *dev)
queue += sp->udp_fifo_idx;
if (skb->len > 1024)
enable_per_list_interrupt = 1;
- do_spin_lock = 0;
}
}
}
@@ -4084,12 +4082,7 @@ static netdev_tx_t s2io_xmit(struct sk_buff *skb, struct net_device *dev)
[skb->priority & (MAX_TX_FIFOS - 1)];
fifo = &mac_control->fifos[queue];
- if (do_spin_lock)
- spin_lock_irqsave(&fifo->tx_lock, flags);
- else {
- if (unlikely(!spin_trylock_irqsave(&fifo->tx_lock, flags)))
- return NETDEV_TX_LOCKED;
- }
+ spin_lock_irqsave(&fifo->tx_lock, flags);
if (sp->config.multiq) {
if (__netif_subqueue_stopped(dev, fifo->fifo_no)) {
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net.h b/drivers/net/ethernet/netronome/nfp/nfp_net.h
index 75683fb26734..e744acc18ef4 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net.h
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net.h
@@ -59,8 +59,8 @@
netdev_warn((nn)->netdev, fmt, ## args); \
} while (0)
-/* Max time to wait for NFP to respond on updates (in ms) */
-#define NFP_NET_POLL_TIMEOUT 5000
+/* Max time to wait for NFP to respond on updates (in seconds) */
+#define NFP_NET_POLL_TIMEOUT 5
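/* With the unit change above, callers build jiffies-based deadlines, e.g.
 * "jiffies + NFP_NET_POLL_TIMEOUT * HZ" in nfp_net_reconfig_start_async() and
 * nfp_net_reconfig() below, instead of counting milliseconds in a poll loop.
 */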
/* Bar allocation */
#define NFP_NET_CRTL_BAR 0
@@ -298,6 +298,8 @@ struct nfp_net_rx_buf {
* @rxds: Virtual address of FL/RX ring in host memory
* @dma: DMA address of the FL/RX ring
* @size: Size, in bytes, of the FL/RX ring (needed to free)
+ * @bufsz: Buffer allocation size for convenience of management routines
+ * (NOTE: this is in second cache line, do not use on fast path!)
*/
struct nfp_net_rx_ring {
struct nfp_net_r_vector *r_vec;
@@ -319,6 +321,7 @@ struct nfp_net_rx_ring {
dma_addr_t dma;
unsigned int size;
+ unsigned int bufsz;
} ____cacheline_aligned;
/**
@@ -444,6 +447,10 @@ static inline bool nfp_net_fw_ver_eq(struct nfp_net_fw_version *fw_ver,
* @shared_name: Name for shared interrupt
* @me_freq_mhz: ME clock_freq (MHz)
* @reconfig_lock: Protects HW reconfiguration request regs/machinery
+ * @reconfig_posted: Pending reconfig bits coming from async sources
+ * @reconfig_timer_active: Timer for reading reconfiguration results is pending
+ * @reconfig_sync_present: Some thread is performing synchronous reconfig
+ * @reconfig_timer: Timer for async reading of reconfig results
* @link_up: Is the link up?
* @link_status_lock: Protects @link_up and ensures atomicity with BAR reading
* @rx_coalesce_usecs: RX interrupt moderation usecs delay parameter
@@ -472,6 +479,9 @@ struct nfp_net {
u32 rx_offset;
+ struct nfp_net_tx_ring *tx_rings;
+ struct nfp_net_rx_ring *rx_rings;
+
#ifdef CONFIG_PCI_IOV
unsigned int num_vfs;
struct vf_data_storage *vfinfo;
@@ -504,9 +514,6 @@ struct nfp_net {
int txd_cnt;
int rxd_cnt;
- struct nfp_net_tx_ring tx_rings[NFP_NET_MAX_TX_RINGS];
- struct nfp_net_rx_ring rx_rings[NFP_NET_MAX_RX_RINGS];
-
u8 num_irqs;
u8 num_r_vecs;
struct nfp_net_r_vector r_vecs[NFP_NET_MAX_TX_RINGS];
@@ -528,6 +535,10 @@ struct nfp_net {
spinlock_t link_status_lock;
spinlock_t reconfig_lock;
+ u32 reconfig_posted;
+ bool reconfig_timer_active;
+ bool reconfig_sync_present;
+ struct timer_list reconfig_timer;
u32 rx_coalesce_usecs;
u32 rx_coalesce_max_frames;
@@ -721,6 +732,7 @@ void nfp_net_rss_write_key(struct nfp_net *nn);
void nfp_net_coalesce_write_cfg(struct nfp_net *nn);
int nfp_net_irqs_alloc(struct nfp_net *nn);
void nfp_net_irqs_disable(struct nfp_net *nn);
+int nfp_net_set_ring_size(struct nfp_net *nn, u32 rxd_cnt, u32 txd_cnt);
#ifdef CONFIG_NFP_NET_DEBUG
void nfp_net_debugfs_create(void);
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
index 43c618bafdb6..fa47c14c743a 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_common.c
@@ -80,6 +80,116 @@ void nfp_net_get_fw_version(struct nfp_net_fw_version *fw_ver,
put_unaligned_le32(reg, fw_ver);
}
+/* Firmware reconfig
+ *
+ * Firmware reconfig may take a while so we have two versions of it -
+ * synchronous and asynchronous (posted). All synchronous callers are holding
+ * RTNL so we don't have to worry about serializing them.
+ */
+static void nfp_net_reconfig_start(struct nfp_net *nn, u32 update)
+{
+ nn_writel(nn, NFP_NET_CFG_UPDATE, update);
+ /* ensure update is written before pinging HW */
+ nn_pci_flush(nn);
+ nfp_qcp_wr_ptr_add(nn->qcp_cfg, 1);
+}
+
+/* Pass 0 as update to run posted reconfigs. */
+static void nfp_net_reconfig_start_async(struct nfp_net *nn, u32 update)
+{
+ update |= nn->reconfig_posted;
+ nn->reconfig_posted = 0;
+
+ nfp_net_reconfig_start(nn, update);
+
+ nn->reconfig_timer_active = true;
+ mod_timer(&nn->reconfig_timer, jiffies + NFP_NET_POLL_TIMEOUT * HZ);
+}
+
+static bool nfp_net_reconfig_check_done(struct nfp_net *nn, bool last_check)
+{
+ u32 reg;
+
+ reg = nn_readl(nn, NFP_NET_CFG_UPDATE);
+ if (reg == 0)
+ return true;
+ if (reg & NFP_NET_CFG_UPDATE_ERR) {
+ nn_err(nn, "Reconfig error: 0x%08x\n", reg);
+ return true;
+ } else if (last_check) {
+ nn_err(nn, "Reconfig timeout: 0x%08x\n", reg);
+ return true;
+ }
+
+ return false;
+}
+
+static int nfp_net_reconfig_wait(struct nfp_net *nn, unsigned long deadline)
+{
+ bool timed_out = false;
+
+ /* Poll update field, waiting for NFP to ack the config */
+ while (!nfp_net_reconfig_check_done(nn, timed_out)) {
+ msleep(1);
+ timed_out = time_is_before_eq_jiffies(deadline);
+ }
+
+ if (nn_readl(nn, NFP_NET_CFG_UPDATE) & NFP_NET_CFG_UPDATE_ERR)
+ return -EIO;
+
+ return timed_out ? -EIO : 0;
+}
+
+static void nfp_net_reconfig_timer(unsigned long data)
+{
+ struct nfp_net *nn = (void *)data;
+
+ spin_lock_bh(&nn->reconfig_lock);
+
+ nn->reconfig_timer_active = false;
+
+ /* If sync caller is present it will take over from us */
+ if (nn->reconfig_sync_present)
+ goto done;
+
+ /* Read reconfig status and report errors */
+ nfp_net_reconfig_check_done(nn, true);
+
+ if (nn->reconfig_posted)
+ nfp_net_reconfig_start_async(nn, 0);
+done:
+ spin_unlock_bh(&nn->reconfig_lock);
+}
+
+/**
+ * nfp_net_reconfig_post() - Post async reconfig request
+ * @nn: NFP Net device to reconfigure
+ * @update: The value for the update field in the BAR config
+ *
+ * Record FW reconfiguration request. Reconfiguration will be kicked off
+ * whenever reconfiguration machinery is idle. Multiple requests can be
+ * merged together!
+ */
+static void nfp_net_reconfig_post(struct nfp_net *nn, u32 update)
+{
+ spin_lock_bh(&nn->reconfig_lock);
+
+ /* Sync caller will kick off async reconf when it's done, just post */
+ if (nn->reconfig_sync_present) {
+ nn->reconfig_posted |= update;
+ goto done;
+ }
+
+ /* Opportunistically check if the previous command is done */
+ if (!nn->reconfig_timer_active ||
+ nfp_net_reconfig_check_done(nn, false))
+ nfp_net_reconfig_start_async(nn, update);
+ else
+ nn->reconfig_posted |= update;
+done:
+ spin_unlock_bh(&nn->reconfig_lock);
+}
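/* How the reconfig helpers fit together (summarized from this hunk):
 * - nfp_net_reconfig_post() merges update bits into nn->reconfig_posted; if no
 *   async command is in flight (or the previous one already completed) and no
 *   sync caller is active, it kicks nfp_net_reconfig_start_async().
 * - nfp_net_reconfig_start_async() writes the merged bits to the BAR and arms
 *   nn->reconfig_timer for NFP_NET_POLL_TIMEOUT seconds.
 * - nfp_net_reconfig_timer() reports the result and restarts any newly posted
 *   bits, unless a synchronous caller has taken over.
 * - nfp_net_reconfig() (synchronous, under RTNL) cancels the timer, drains the
 *   previously posted bits first, then issues its own update and waits for it
 *   with nfp_net_reconfig_wait().
 */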
+
/**
* nfp_net_reconfig() - Reconfigure the firmware
* @nn: NFP Net device to reconfigure
@@ -93,35 +203,45 @@ void nfp_net_get_fw_version(struct nfp_net_fw_version *fw_ver,
*/
int nfp_net_reconfig(struct nfp_net *nn, u32 update)
{
- int cnt, ret = 0;
- u32 new;
+ bool cancelled_timer = false;
+ u32 pre_posted_requests;
+ int ret;
spin_lock_bh(&nn->reconfig_lock);
- nn_writel(nn, NFP_NET_CFG_UPDATE, update);
- /* ensure update is written before pinging HW */
- nn_pci_flush(nn);
- nfp_qcp_wr_ptr_add(nn->qcp_cfg, 1);
+ nn->reconfig_sync_present = true;
- /* Poll update field, waiting for NFP to ack the config */
- for (cnt = 0; ; cnt++) {
- new = nn_readl(nn, NFP_NET_CFG_UPDATE);
- if (new == 0)
- break;
- if (new & NFP_NET_CFG_UPDATE_ERR) {
- nn_err(nn, "Reconfig error: 0x%08x\n", new);
- ret = -EIO;
- break;
- } else if (cnt >= NFP_NET_POLL_TIMEOUT) {
- nn_err(nn, "Reconfig timeout for 0x%08x after %dms\n",
- update, cnt);
- ret = -EIO;
- break;
- }
- mdelay(1);
+ if (nn->reconfig_timer_active) {
+ del_timer(&nn->reconfig_timer);
+ nn->reconfig_timer_active = false;
+ cancelled_timer = true;
+ }
+ pre_posted_requests = nn->reconfig_posted;
+ nn->reconfig_posted = 0;
+
+ spin_unlock_bh(&nn->reconfig_lock);
+
+ if (cancelled_timer)
+ nfp_net_reconfig_wait(nn, nn->reconfig_timer.expires);
+
+ /* Run the posted reconfigs which were issued before we started */
+ if (pre_posted_requests) {
+ nfp_net_reconfig_start(nn, pre_posted_requests);
+ nfp_net_reconfig_wait(nn, jiffies + HZ * NFP_NET_POLL_TIMEOUT);
}
+ nfp_net_reconfig_start(nn, update);
+ ret = nfp_net_reconfig_wait(nn, jiffies + HZ * NFP_NET_POLL_TIMEOUT);
+
+ spin_lock_bh(&nn->reconfig_lock);
+
+ if (nn->reconfig_posted)
+ nfp_net_reconfig_start_async(nn, 0);
+
+ nn->reconfig_sync_present = false;
+
spin_unlock_bh(&nn->reconfig_lock);
+
return ret;
}
@@ -347,12 +467,18 @@ static irqreturn_t nfp_net_irq_exn(int irq, void *data)
/**
* nfp_net_tx_ring_init() - Fill in the boilerplate for a TX ring
* @tx_ring: TX ring structure
+ * @r_vec: IRQ vector servicing this ring
+ * @idx: Ring index
*/
-static void nfp_net_tx_ring_init(struct nfp_net_tx_ring *tx_ring)
+static void
+nfp_net_tx_ring_init(struct nfp_net_tx_ring *tx_ring,
+ struct nfp_net_r_vector *r_vec, unsigned int idx)
{
- struct nfp_net_r_vector *r_vec = tx_ring->r_vec;
struct nfp_net *nn = r_vec->nfp_net;
+ tx_ring->idx = idx;
+ tx_ring->r_vec = r_vec;
+
tx_ring->qcidx = tx_ring->idx * nn->stride_tx;
tx_ring->qcp_q = nn->tx_bar + NFP_QCP_QUEUE_OFF(tx_ring->qcidx);
}
@@ -360,12 +486,18 @@ static void nfp_net_tx_ring_init(struct nfp_net_tx_ring *tx_ring)
/**
* nfp_net_rx_ring_init() - Fill in the boilerplate for a RX ring
* @rx_ring: RX ring structure
+ * @r_vec: IRQ vector servicing this ring
+ * @idx: Ring index
*/
-static void nfp_net_rx_ring_init(struct nfp_net_rx_ring *rx_ring)
+static void
+nfp_net_rx_ring_init(struct nfp_net_rx_ring *rx_ring,
+ struct nfp_net_r_vector *r_vec, unsigned int idx)
{
- struct nfp_net_r_vector *r_vec = rx_ring->r_vec;
struct nfp_net *nn = r_vec->nfp_net;
+ rx_ring->idx = idx;
+ rx_ring->r_vec = r_vec;
+
rx_ring->fl_qcidx = rx_ring->idx * nn->stride_rx;
rx_ring->rx_qcidx = rx_ring->fl_qcidx + (nn->stride_rx - 1);
@@ -401,16 +533,6 @@ static void nfp_net_irqs_assign(struct net_device *netdev)
r_vec->irq_idx = NFP_NET_NON_Q_VECTORS + r;
cpumask_set_cpu(r, &r_vec->affinity_mask);
-
- r_vec->tx_ring = &nn->tx_rings[r];
- nn->tx_rings[r].idx = r;
- nn->tx_rings[r].r_vec = r_vec;
- nfp_net_tx_ring_init(r_vec->tx_ring);
-
- r_vec->rx_ring = &nn->rx_rings[r];
- nn->rx_rings[r].idx = r;
- nn->rx_rings[r].r_vec = r_vec;
- nfp_net_rx_ring_init(r_vec->rx_ring);
}
}
@@ -865,61 +987,59 @@ static void nfp_net_tx_complete(struct nfp_net_tx_ring *tx_ring)
}
/**
- * nfp_net_tx_flush() - Free any untransmitted buffers currently on the TX ring
- * @tx_ring: TX ring structure
+ * nfp_net_tx_ring_reset() - Free any untransmitted buffers and reset pointers
+ * @nn: NFP Net device
+ * @tx_ring: TX ring structure
*
* Assumes that the device is stopped
*/
-static void nfp_net_tx_flush(struct nfp_net_tx_ring *tx_ring)
+static void
+nfp_net_tx_ring_reset(struct nfp_net *nn, struct nfp_net_tx_ring *tx_ring)
{
- struct nfp_net_r_vector *r_vec = tx_ring->r_vec;
- struct nfp_net *nn = r_vec->nfp_net;
- struct pci_dev *pdev = nn->pdev;
const struct skb_frag_struct *frag;
struct netdev_queue *nd_q;
- struct sk_buff *skb;
- int nr_frags;
- int fidx;
- int idx;
+ struct pci_dev *pdev = nn->pdev;
while (tx_ring->rd_p != tx_ring->wr_p) {
- idx = tx_ring->rd_p % tx_ring->cnt;
+ int nr_frags, fidx, idx;
+ struct sk_buff *skb;
+ idx = tx_ring->rd_p % tx_ring->cnt;
skb = tx_ring->txbufs[idx].skb;
- if (skb) {
- nr_frags = skb_shinfo(skb)->nr_frags;
- fidx = tx_ring->txbufs[idx].fidx;
-
- if (fidx == -1) {
- /* unmap head */
- dma_unmap_single(&pdev->dev,
- tx_ring->txbufs[idx].dma_addr,
- skb_headlen(skb),
- DMA_TO_DEVICE);
- } else {
- /* unmap fragment */
- frag = &skb_shinfo(skb)->frags[fidx];
- dma_unmap_page(&pdev->dev,
- tx_ring->txbufs[idx].dma_addr,
- skb_frag_size(frag),
- DMA_TO_DEVICE);
- }
-
- /* check for last gather fragment */
- if (fidx == nr_frags - 1)
- dev_kfree_skb_any(skb);
-
- tx_ring->txbufs[idx].dma_addr = 0;
- tx_ring->txbufs[idx].skb = NULL;
- tx_ring->txbufs[idx].fidx = -2;
+ nr_frags = skb_shinfo(skb)->nr_frags;
+ fidx = tx_ring->txbufs[idx].fidx;
+
+ if (fidx == -1) {
+ /* unmap head */
+ dma_unmap_single(&pdev->dev,
+ tx_ring->txbufs[idx].dma_addr,
+ skb_headlen(skb), DMA_TO_DEVICE);
+ } else {
+ /* unmap fragment */
+ frag = &skb_shinfo(skb)->frags[fidx];
+ dma_unmap_page(&pdev->dev,
+ tx_ring->txbufs[idx].dma_addr,
+ skb_frag_size(frag), DMA_TO_DEVICE);
}
- memset(&tx_ring->txds[idx], 0, sizeof(tx_ring->txds[idx]));
+ /* check for last gather fragment */
+ if (fidx == nr_frags - 1)
+ dev_kfree_skb_any(skb);
+
+ tx_ring->txbufs[idx].dma_addr = 0;
+ tx_ring->txbufs[idx].skb = NULL;
+ tx_ring->txbufs[idx].fidx = -2;
tx_ring->qcp_rd_p++;
tx_ring->rd_p++;
}
+ memset(tx_ring->txds, 0, sizeof(*tx_ring->txds) * tx_ring->cnt);
+ tx_ring->wr_p = 0;
+ tx_ring->rd_p = 0;
+ tx_ring->qcp_rd_p = 0;
+ tx_ring->wr_ptr_add = 0;
+
nd_q = netdev_get_tx_queue(nn->netdev, tx_ring->idx);
netdev_tx_reset_queue(nd_q);
}
@@ -957,25 +1077,27 @@ static inline int nfp_net_rx_space(struct nfp_net_rx_ring *rx_ring)
* nfp_net_rx_alloc_one() - Allocate and map skb for RX
* @rx_ring: RX ring structure of the skb
* @dma_addr: Pointer to storage for DMA address (output param)
+ * @fl_bufsz: size of freelist buffers
*
 * This function will allocate a new skb and map it for DMA.
*
* Return: allocated skb or NULL on failure.
*/
static struct sk_buff *
-nfp_net_rx_alloc_one(struct nfp_net_rx_ring *rx_ring, dma_addr_t *dma_addr)
+nfp_net_rx_alloc_one(struct nfp_net_rx_ring *rx_ring, dma_addr_t *dma_addr,
+ unsigned int fl_bufsz)
{
struct nfp_net *nn = rx_ring->r_vec->nfp_net;
struct sk_buff *skb;
- skb = netdev_alloc_skb(nn->netdev, nn->fl_bufsz);
+ skb = netdev_alloc_skb(nn->netdev, fl_bufsz);
if (!skb) {
nn_warn_ratelimit(nn, "Failed to alloc receive SKB\n");
return NULL;
}
*dma_addr = dma_map_single(&nn->pdev->dev, skb->data,
- nn->fl_bufsz, DMA_FROM_DEVICE);
+ fl_bufsz, DMA_FROM_DEVICE);
if (dma_mapping_error(&nn->pdev->dev, *dma_addr)) {
dev_kfree_skb_any(skb);
nn_warn_ratelimit(nn, "Failed to map DMA RX buffer\n");
@@ -1020,62 +1142,101 @@ static void nfp_net_rx_give_one(struct nfp_net_rx_ring *rx_ring,
}
/**
- * nfp_net_rx_flush() - Free any buffers currently on the RX ring
- * @rx_ring: RX ring to remove buffers from
+ * nfp_net_rx_ring_reset() - Reflect in SW state of freelist after disable
+ * @rx_ring: RX ring structure
*
- * Assumes that the device is stopped
+ * Warning: Do *not* call if ring buffers were never put on the FW freelist
+ * (i.e. device was not enabled)!
*/
-static void nfp_net_rx_flush(struct nfp_net_rx_ring *rx_ring)
+static void nfp_net_rx_ring_reset(struct nfp_net_rx_ring *rx_ring)
{
- struct nfp_net *nn = rx_ring->r_vec->nfp_net;
- struct pci_dev *pdev = nn->pdev;
- int idx;
+ unsigned int wr_idx, last_idx;
- while (rx_ring->rd_p != rx_ring->wr_p) {
- idx = rx_ring->rd_p % rx_ring->cnt;
+ /* Move the empty entry to the end of the list */
+ wr_idx = rx_ring->wr_p % rx_ring->cnt;
+ last_idx = rx_ring->cnt - 1;
+ rx_ring->rxbufs[wr_idx].dma_addr = rx_ring->rxbufs[last_idx].dma_addr;
+ rx_ring->rxbufs[wr_idx].skb = rx_ring->rxbufs[last_idx].skb;
+ rx_ring->rxbufs[last_idx].dma_addr = 0;
+ rx_ring->rxbufs[last_idx].skb = NULL;
- if (rx_ring->rxbufs[idx].skb) {
- dma_unmap_single(&pdev->dev,
- rx_ring->rxbufs[idx].dma_addr,
- nn->fl_bufsz, DMA_FROM_DEVICE);
- dev_kfree_skb_any(rx_ring->rxbufs[idx].skb);
- rx_ring->rxbufs[idx].dma_addr = 0;
- rx_ring->rxbufs[idx].skb = NULL;
- }
+ memset(rx_ring->rxds, 0, sizeof(*rx_ring->rxds) * rx_ring->cnt);
+ rx_ring->wr_p = 0;
+ rx_ring->rd_p = 0;
+ rx_ring->wr_ptr_add = 0;
+}
+
+/**
+ * nfp_net_rx_ring_bufs_free() - Free any buffers currently on the RX ring
+ * @nn: NFP Net device
+ * @rx_ring: RX ring to remove buffers from
+ *
+ * Assumes that the device is stopped and buffers are in [0, ring->cnt - 1)
+ * entries. After device is disabled nfp_net_rx_ring_reset() must be called
+ * to restore required ring geometry.
+ */
+static void
+nfp_net_rx_ring_bufs_free(struct nfp_net *nn, struct nfp_net_rx_ring *rx_ring)
+{
+ struct pci_dev *pdev = nn->pdev;
+ unsigned int i;
- memset(&rx_ring->rxds[idx], 0, sizeof(rx_ring->rxds[idx]));
+ for (i = 0; i < rx_ring->cnt - 1; i++) {
+ /* NULL skb can only happen when initial filling of the ring
+ * fails to allocate enough buffers and calls here to free
+ * already allocated ones.
+ */
+ if (!rx_ring->rxbufs[i].skb)
+ continue;
- rx_ring->rd_p++;
+ dma_unmap_single(&pdev->dev, rx_ring->rxbufs[i].dma_addr,
+ rx_ring->bufsz, DMA_FROM_DEVICE);
+ dev_kfree_skb_any(rx_ring->rxbufs[i].skb);
+ rx_ring->rxbufs[i].dma_addr = 0;
+ rx_ring->rxbufs[i].skb = NULL;
}
}
/**
- * nfp_net_rx_fill_freelist() - Attempt filling freelist with RX buffers
- * @rx_ring: RX ring to fill
- *
- * Try to fill as many buffers as possible into freelist. Return
- * number of buffers added.
- *
- * Return: Number of freelist buffers added.
+ * nfp_net_rx_ring_bufs_alloc() - Fill RX ring with buffers (don't give to FW)
+ * @nn: NFP Net device
+ * @rx_ring: RX ring to fill with buffers
*/
-static int nfp_net_rx_fill_freelist(struct nfp_net_rx_ring *rx_ring)
+static int
+nfp_net_rx_ring_bufs_alloc(struct nfp_net *nn, struct nfp_net_rx_ring *rx_ring)
{
- struct sk_buff *skb;
- dma_addr_t dma_addr;
+ struct nfp_net_rx_buf *rxbufs;
+ unsigned int i;
+
+ rxbufs = rx_ring->rxbufs;
- while (nfp_net_rx_space(rx_ring)) {
- skb = nfp_net_rx_alloc_one(rx_ring, &dma_addr);
- if (!skb) {
- nfp_net_rx_flush(rx_ring);
+ for (i = 0; i < rx_ring->cnt - 1; i++) {
+ rxbufs[i].skb =
+ nfp_net_rx_alloc_one(rx_ring, &rxbufs[i].dma_addr,
+ rx_ring->bufsz);
+ if (!rxbufs[i].skb) {
+ nfp_net_rx_ring_bufs_free(nn, rx_ring);
return -ENOMEM;
}
- nfp_net_rx_give_one(rx_ring, skb, dma_addr);
}
return 0;
}
/**
+ * nfp_net_rx_ring_fill_freelist() - Give buffers from the ring to FW
+ * @rx_ring: RX ring to fill
+ */
+static void nfp_net_rx_ring_fill_freelist(struct nfp_net_rx_ring *rx_ring)
+{
+ unsigned int i;
+
+ for (i = 0; i < rx_ring->cnt - 1; i++)
+ nfp_net_rx_give_one(rx_ring, rx_ring->rxbufs[i].skb,
+ rx_ring->rxbufs[i].dma_addr);
+}
+
+/**
* nfp_net_rx_csum_has_errors() - group check if rxd has any csum errors
* @flags: RX descriptor flags field in CPU byte order
*/
@@ -1240,7 +1401,8 @@ static int nfp_net_rx(struct nfp_net_rx_ring *rx_ring, int budget)
skb = rx_ring->rxbufs[idx].skb;
- new_skb = nfp_net_rx_alloc_one(rx_ring, &new_dma_addr);
+ new_skb = nfp_net_rx_alloc_one(rx_ring, &new_dma_addr,
+ nn->fl_bufsz);
if (!new_skb) {
nfp_net_rx_give_one(rx_ring, rx_ring->rxbufs[idx].skb,
rx_ring->rxbufs[idx].dma_addr);
@@ -1256,23 +1418,25 @@ static int nfp_net_rx(struct nfp_net_rx_ring *rx_ring, int budget)
nfp_net_rx_give_one(rx_ring, new_skb, new_dma_addr);
+ /* < meta_len >
+ * <-- [rx_offset] -->
+ * ---------------------------------------------------------
+ * | [XX] | metadata | packet | XXXX |
+ * ---------------------------------------------------------
+ * <---------------- data_len --------------->
+ *
+ * The rx_offset is fixed for all packets, the meta_len can vary
+ * on a packet by packet basis. If rx_offset is set to zero
+ * (_RX_OFFSET_DYNAMIC) metadata starts at the beginning of the
+ * buffer and is immediately followed by the packet (no [XX]).
+ */
meta_len = rxd->rxd.meta_len_dd & PCIE_DESC_RX_META_LEN_MASK;
data_len = le16_to_cpu(rxd->rxd.data_len);
- if (WARN_ON_ONCE(data_len > nn->fl_bufsz)) {
- dev_kfree_skb_any(skb);
- continue;
- }
-
- if (nn->rx_offset == NFP_NET_CFG_RX_OFFSET_DYNAMIC) {
- /* The packet data starts after the metadata */
+ if (nn->rx_offset == NFP_NET_CFG_RX_OFFSET_DYNAMIC)
skb_reserve(skb, meta_len);
- } else {
- /* The packet data starts at a fixed offset */
+ else
skb_reserve(skb, nn->rx_offset);
- }
-
- /* Adjust the SKB for the dynamic meta data pre-pended */
skb_put(skb, data_len - meta_len);
nfp_net_set_hash(nn->netdev, skb, rxd);
@@ -1349,10 +1513,6 @@ static void nfp_net_tx_ring_free(struct nfp_net_tx_ring *tx_ring)
struct nfp_net *nn = r_vec->nfp_net;
struct pci_dev *pdev = nn->pdev;
- nn_writeq(nn, NFP_NET_CFG_TXR_ADDR(tx_ring->idx), 0);
- nn_writeb(nn, NFP_NET_CFG_TXR_SZ(tx_ring->idx), 0);
- nn_writeb(nn, NFP_NET_CFG_TXR_VEC(tx_ring->idx), 0);
-
kfree(tx_ring->txbufs);
if (tx_ring->txds)
@@ -1360,11 +1520,6 @@ static void nfp_net_tx_ring_free(struct nfp_net_tx_ring *tx_ring)
tx_ring->txds, tx_ring->dma);
tx_ring->cnt = 0;
- tx_ring->wr_p = 0;
- tx_ring->rd_p = 0;
- tx_ring->qcp_rd_p = 0;
- tx_ring->wr_ptr_add = 0;
-
tx_ring->txbufs = NULL;
tx_ring->txds = NULL;
tx_ring->dma = 0;
@@ -1374,17 +1529,18 @@ static void nfp_net_tx_ring_free(struct nfp_net_tx_ring *tx_ring)
/**
* nfp_net_tx_ring_alloc() - Allocate resource for a TX ring
* @tx_ring: TX Ring structure to allocate
+ * @cnt: Ring buffer count
*
* Return: 0 on success, negative errno otherwise.
*/
-static int nfp_net_tx_ring_alloc(struct nfp_net_tx_ring *tx_ring)
+static int nfp_net_tx_ring_alloc(struct nfp_net_tx_ring *tx_ring, u32 cnt)
{
struct nfp_net_r_vector *r_vec = tx_ring->r_vec;
struct nfp_net *nn = r_vec->nfp_net;
struct pci_dev *pdev = nn->pdev;
int sz;
- tx_ring->cnt = nn->txd_cnt;
+ tx_ring->cnt = cnt;
tx_ring->size = sizeof(*tx_ring->txds) * tx_ring->cnt;
tx_ring->txds = dma_zalloc_coherent(&pdev->dev, tx_ring->size,
@@ -1397,11 +1553,6 @@ static int nfp_net_tx_ring_alloc(struct nfp_net_tx_ring *tx_ring)
if (!tx_ring->txbufs)
goto err_alloc;
- /* Write the DMA address, size and MSI-X info to the device */
- nn_writeq(nn, NFP_NET_CFG_TXR_ADDR(tx_ring->idx), tx_ring->dma);
- nn_writeb(nn, NFP_NET_CFG_TXR_SZ(tx_ring->idx), ilog2(tx_ring->cnt));
- nn_writeb(nn, NFP_NET_CFG_TXR_VEC(tx_ring->idx), r_vec->irq_idx);
-
netif_set_xps_queue(nn->netdev, &r_vec->affinity_mask, tx_ring->idx);
nn_dbg(nn, "TxQ%02d: QCidx=%02d cnt=%d dma=%#llx host=%p\n",
@@ -1415,6 +1566,59 @@ err_alloc:
return -ENOMEM;
}
+static struct nfp_net_tx_ring *
+nfp_net_shadow_tx_rings_prepare(struct nfp_net *nn, u32 buf_cnt)
+{
+ struct nfp_net_tx_ring *rings;
+ unsigned int r;
+
+ rings = kcalloc(nn->num_tx_rings, sizeof(*rings), GFP_KERNEL);
+ if (!rings)
+ return NULL;
+
+ for (r = 0; r < nn->num_tx_rings; r++) {
+ nfp_net_tx_ring_init(&rings[r], nn->tx_rings[r].r_vec, r);
+
+ if (nfp_net_tx_ring_alloc(&rings[r], buf_cnt))
+ goto err_free_prev;
+ }
+
+ return rings;
+
+err_free_prev:
+ while (r--)
+ nfp_net_tx_ring_free(&rings[r]);
+ kfree(rings);
+ return NULL;
+}
+
+static struct nfp_net_tx_ring *
+nfp_net_shadow_tx_rings_swap(struct nfp_net *nn, struct nfp_net_tx_ring *rings)
+{
+ struct nfp_net_tx_ring *old = nn->tx_rings;
+ unsigned int r;
+
+ for (r = 0; r < nn->num_tx_rings; r++)
+ old[r].r_vec->tx_ring = &rings[r];
+
+ nn->tx_rings = rings;
+ return old;
+}
+
+static void
+nfp_net_shadow_tx_rings_free(struct nfp_net *nn, struct nfp_net_tx_ring *rings)
+{
+ unsigned int r;
+
+ if (!rings)
+ return;
+
+ for (r = 0; r < nn->num_tx_rings; r++)
+ nfp_net_tx_ring_free(&rings[r]);
+
+ kfree(rings);
+}
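/* The shadow-ring helpers follow a prepare/swap/free pattern: prepare builds a
 * complete replacement ring set while the old one is still live, swap repoints
 * each r_vec at the new set and returns the old one, and free releases whichever
 * set the caller no longer needs. nfp_net_change_mtu() and
 * nfp_net_set_ring_size() below rely on this to resize rings without a full
 * close/open cycle.
 */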
+
/**
* nfp_net_rx_ring_free() - Free resources allocated to a RX ring
* @rx_ring: RX ring to free
@@ -1425,10 +1629,6 @@ static void nfp_net_rx_ring_free(struct nfp_net_rx_ring *rx_ring)
struct nfp_net *nn = r_vec->nfp_net;
struct pci_dev *pdev = nn->pdev;
- nn_writeq(nn, NFP_NET_CFG_RXR_ADDR(rx_ring->idx), 0);
- nn_writeb(nn, NFP_NET_CFG_RXR_SZ(rx_ring->idx), 0);
- nn_writeb(nn, NFP_NET_CFG_RXR_VEC(rx_ring->idx), 0);
-
kfree(rx_ring->rxbufs);
if (rx_ring->rxds)
@@ -1436,10 +1636,6 @@ static void nfp_net_rx_ring_free(struct nfp_net_rx_ring *rx_ring)
rx_ring->rxds, rx_ring->dma);
rx_ring->cnt = 0;
- rx_ring->wr_p = 0;
- rx_ring->rd_p = 0;
- rx_ring->wr_ptr_add = 0;
-
rx_ring->rxbufs = NULL;
rx_ring->rxds = NULL;
rx_ring->dma = 0;
@@ -1449,17 +1645,22 @@ static void nfp_net_rx_ring_free(struct nfp_net_rx_ring *rx_ring)
/**
* nfp_net_rx_ring_alloc() - Allocate resource for a RX ring
* @rx_ring: RX ring to allocate
+ * @fl_bufsz: Size of buffers to allocate
+ * @cnt: Ring buffer count
*
* Return: 0 on success, negative errno otherwise.
*/
-static int nfp_net_rx_ring_alloc(struct nfp_net_rx_ring *rx_ring)
+static int
+nfp_net_rx_ring_alloc(struct nfp_net_rx_ring *rx_ring, unsigned int fl_bufsz,
+ u32 cnt)
{
struct nfp_net_r_vector *r_vec = rx_ring->r_vec;
struct nfp_net *nn = r_vec->nfp_net;
struct pci_dev *pdev = nn->pdev;
int sz;
- rx_ring->cnt = nn->rxd_cnt;
+ rx_ring->cnt = cnt;
+ rx_ring->bufsz = fl_bufsz;
rx_ring->size = sizeof(*rx_ring->rxds) * rx_ring->cnt;
rx_ring->rxds = dma_zalloc_coherent(&pdev->dev, rx_ring->size,
@@ -1472,11 +1673,6 @@ static int nfp_net_rx_ring_alloc(struct nfp_net_rx_ring *rx_ring)
if (!rx_ring->rxbufs)
goto err_alloc;
- /* Write the DMA address, size and MSI-X info to the device */
- nn_writeq(nn, NFP_NET_CFG_RXR_ADDR(rx_ring->idx), rx_ring->dma);
- nn_writeb(nn, NFP_NET_CFG_RXR_SZ(rx_ring->idx), ilog2(rx_ring->cnt));
- nn_writeb(nn, NFP_NET_CFG_RXR_VEC(rx_ring->idx), r_vec->irq_idx);
-
nn_dbg(nn, "RxQ%02d: FlQCidx=%02d RxQCidx=%02d cnt=%d dma=%#llx host=%p\n",
rx_ring->idx, rx_ring->fl_qcidx, rx_ring->rx_qcidx,
rx_ring->cnt, (unsigned long long)rx_ring->dma, rx_ring->rxds);
@@ -1488,91 +1684,109 @@ err_alloc:
return -ENOMEM;
}
-static void __nfp_net_free_rings(struct nfp_net *nn, unsigned int n_free)
+static struct nfp_net_rx_ring *
+nfp_net_shadow_rx_rings_prepare(struct nfp_net *nn, unsigned int fl_bufsz,
+ u32 buf_cnt)
{
- struct nfp_net_r_vector *r_vec;
- struct msix_entry *entry;
+ struct nfp_net_rx_ring *rings;
+ unsigned int r;
- while (n_free--) {
- r_vec = &nn->r_vecs[n_free];
- entry = &nn->irq_entries[r_vec->irq_idx];
+ rings = kcalloc(nn->num_rx_rings, sizeof(*rings), GFP_KERNEL);
+ if (!rings)
+ return NULL;
- nfp_net_rx_ring_free(r_vec->rx_ring);
- nfp_net_tx_ring_free(r_vec->tx_ring);
+ for (r = 0; r < nn->num_rx_rings; r++) {
+ nfp_net_rx_ring_init(&rings[r], nn->rx_rings[r].r_vec, r);
- irq_set_affinity_hint(entry->vector, NULL);
- free_irq(entry->vector, r_vec);
+ if (nfp_net_rx_ring_alloc(&rings[r], fl_bufsz, buf_cnt))
+ goto err_free_prev;
- netif_napi_del(&r_vec->napi);
+ if (nfp_net_rx_ring_bufs_alloc(nn, &rings[r]))
+ goto err_free_ring;
}
+
+ return rings;
+
+err_free_prev:
+ while (r--) {
+ nfp_net_rx_ring_bufs_free(nn, &rings[r]);
+err_free_ring:
+ nfp_net_rx_ring_free(&rings[r]);
+ }
+ kfree(rings);
+ return NULL;
}
-/**
- * nfp_net_free_rings() - Free all ring resources
- * @nn: NFP Net device to reconfigure
- */
-static void nfp_net_free_rings(struct nfp_net *nn)
+static struct nfp_net_rx_ring *
+nfp_net_shadow_rx_rings_swap(struct nfp_net *nn, struct nfp_net_rx_ring *rings)
{
- __nfp_net_free_rings(nn, nn->num_r_vecs);
+ struct nfp_net_rx_ring *old = nn->rx_rings;
+ unsigned int r;
+
+ for (r = 0; r < nn->num_rx_rings; r++)
+ old[r].r_vec->rx_ring = &rings[r];
+
+ nn->rx_rings = rings;
+ return old;
}
-/**
- * nfp_net_alloc_rings() - Allocate resources for RX and TX rings
- * @nn: NFP Net device to reconfigure
- *
- * Return: 0 on success or negative errno on error.
- */
-static int nfp_net_alloc_rings(struct nfp_net *nn)
+static void
+nfp_net_shadow_rx_rings_free(struct nfp_net *nn, struct nfp_net_rx_ring *rings)
{
- struct nfp_net_r_vector *r_vec;
- struct msix_entry *entry;
- int err;
- int r;
+ unsigned int r;
+
+ if (!rings)
+ return;
for (r = 0; r < nn->num_r_vecs; r++) {
- r_vec = &nn->r_vecs[r];
- entry = &nn->irq_entries[r_vec->irq_idx];
-
- /* Setup NAPI */
- netif_napi_add(nn->netdev, &r_vec->napi,
- nfp_net_poll, NAPI_POLL_WEIGHT);
-
- snprintf(r_vec->name, sizeof(r_vec->name),
- "%s-rxtx-%d", nn->netdev->name, r);
- err = request_irq(entry->vector, r_vec->handler, 0,
- r_vec->name, r_vec);
- if (err) {
- nn_dbg(nn, "Error requesting IRQ %d\n", entry->vector);
- goto err_napi_del;
- }
+ nfp_net_rx_ring_bufs_free(nn, &rings[r]);
+ nfp_net_rx_ring_free(&rings[r]);
+ }
- irq_set_affinity_hint(entry->vector, &r_vec->affinity_mask);
+ kfree(rings);
+}
- nn_dbg(nn, "RV%02d: irq=%03d/%03d\n",
- r, entry->vector, entry->entry);
+static int
+nfp_net_prepare_vector(struct nfp_net *nn, struct nfp_net_r_vector *r_vec,
+ int idx)
+{
+ struct msix_entry *entry = &nn->irq_entries[r_vec->irq_idx];
+ int err;
- /* Allocate TX ring resources */
- err = nfp_net_tx_ring_alloc(r_vec->tx_ring);
- if (err)
- goto err_free_irq;
+ r_vec->tx_ring = &nn->tx_rings[idx];
+ nfp_net_tx_ring_init(r_vec->tx_ring, r_vec, idx);
- /* Allocate RX ring resources */
- err = nfp_net_rx_ring_alloc(r_vec->rx_ring);
- if (err)
- goto err_free_tx;
+ r_vec->rx_ring = &nn->rx_rings[idx];
+ nfp_net_rx_ring_init(r_vec->rx_ring, r_vec, idx);
+
+ snprintf(r_vec->name, sizeof(r_vec->name),
+ "%s-rxtx-%d", nn->netdev->name, idx);
+ err = request_irq(entry->vector, r_vec->handler, 0, r_vec->name, r_vec);
+ if (err) {
+ nn_err(nn, "Error requesting IRQ %d\n", entry->vector);
+ return err;
}
+ disable_irq(entry->vector);
+
+ /* Setup NAPI */
+ netif_napi_add(nn->netdev, &r_vec->napi,
+ nfp_net_poll, NAPI_POLL_WEIGHT);
+
+ irq_set_affinity_hint(entry->vector, &r_vec->affinity_mask);
+
+ nn_dbg(nn, "RV%02d: irq=%03d/%03d\n", idx, entry->vector, entry->entry);
return 0;
+}
+
+static void
+nfp_net_cleanup_vector(struct nfp_net *nn, struct nfp_net_r_vector *r_vec)
+{
+ struct msix_entry *entry = &nn->irq_entries[r_vec->irq_idx];
-err_free_tx:
- nfp_net_tx_ring_free(r_vec->tx_ring);
-err_free_irq:
irq_set_affinity_hint(entry->vector, NULL);
- free_irq(entry->vector, r_vec);
-err_napi_del:
netif_napi_del(&r_vec->napi);
- __nfp_net_free_rings(nn, r);
- return err;
+ free_irq(entry->vector, r_vec);
}
/**
@@ -1646,6 +1860,17 @@ static void nfp_net_write_mac_addr(struct nfp_net *nn, const u8 *mac)
get_unaligned_be16(nn->netdev->dev_addr + 4) << 16);
}
+static void nfp_net_vec_clear_ring_data(struct nfp_net *nn, unsigned int idx)
+{
+ nn_writeq(nn, NFP_NET_CFG_RXR_ADDR(idx), 0);
+ nn_writeb(nn, NFP_NET_CFG_RXR_SZ(idx), 0);
+ nn_writeb(nn, NFP_NET_CFG_RXR_VEC(idx), 0);
+
+ nn_writeq(nn, NFP_NET_CFG_TXR_ADDR(idx), 0);
+ nn_writeb(nn, NFP_NET_CFG_TXR_SZ(idx), 0);
+ nn_writeb(nn, NFP_NET_CFG_TXR_VEC(idx), 0);
+}
+
/**
* nfp_net_clear_config_and_disable() - Clear control BAR and disable NFP
* @nn: NFP Net device to reconfigure
@@ -1653,6 +1878,7 @@ static void nfp_net_write_mac_addr(struct nfp_net *nn, const u8 *mac)
static void nfp_net_clear_config_and_disable(struct nfp_net *nn)
{
u32 new_ctrl, update;
+ unsigned int r;
int err;
new_ctrl = nn->ctrl;
@@ -1669,79 +1895,40 @@ static void nfp_net_clear_config_and_disable(struct nfp_net *nn)
nn_writel(nn, NFP_NET_CFG_CTRL, new_ctrl);
err = nfp_net_reconfig(nn, update);
- if (err) {
+ if (err)
nn_err(nn, "Could not disable device: %d\n", err);
- return;
+
+ for (r = 0; r < nn->num_r_vecs; r++) {
+ nfp_net_rx_ring_reset(nn->r_vecs[r].rx_ring);
+ nfp_net_tx_ring_reset(nn, nn->r_vecs[r].tx_ring);
+ nfp_net_vec_clear_ring_data(nn, r);
}
nn->ctrl = new_ctrl;
}
-/**
- * nfp_net_start_vec() - Start ring vector
- * @nn: NFP Net device structure
- * @r_vec: Ring vector to be started
- */
-static int nfp_net_start_vec(struct nfp_net *nn, struct nfp_net_r_vector *r_vec)
+static void
+nfp_net_vec_write_ring_data(struct nfp_net *nn, struct nfp_net_r_vector *r_vec,
+ unsigned int idx)
{
- unsigned int irq_vec;
- int err = 0;
-
- irq_vec = nn->irq_entries[r_vec->irq_idx].vector;
-
- disable_irq(irq_vec);
-
- err = nfp_net_rx_fill_freelist(r_vec->rx_ring);
- if (err) {
- nn_err(nn, "RV%02d: couldn't allocate enough buffers\n",
- r_vec->irq_idx);
- goto out;
- }
-
- napi_enable(&r_vec->napi);
-out:
- enable_irq(irq_vec);
+ /* Write the DMA address, size and MSI-X info to the device */
+ nn_writeq(nn, NFP_NET_CFG_RXR_ADDR(idx), r_vec->rx_ring->dma);
+ nn_writeb(nn, NFP_NET_CFG_RXR_SZ(idx), ilog2(r_vec->rx_ring->cnt));
+ nn_writeb(nn, NFP_NET_CFG_RXR_VEC(idx), r_vec->irq_idx);
- return err;
+ nn_writeq(nn, NFP_NET_CFG_TXR_ADDR(idx), r_vec->tx_ring->dma);
+ nn_writeb(nn, NFP_NET_CFG_TXR_SZ(idx), ilog2(r_vec->tx_ring->cnt));
+ nn_writeb(nn, NFP_NET_CFG_TXR_VEC(idx), r_vec->irq_idx);
}
-static int nfp_net_netdev_open(struct net_device *netdev)
+static int __nfp_net_set_config_and_enable(struct nfp_net *nn)
{
- struct nfp_net *nn = netdev_priv(netdev);
- int err, r;
- u32 update = 0;
- u32 new_ctrl;
-
- if (nn->ctrl & NFP_NET_CFG_CTRL_ENABLE) {
- nn_err(nn, "Dev is already enabled: 0x%08x\n", nn->ctrl);
- return -EBUSY;
- }
+ u32 new_ctrl, update = 0;
+ unsigned int r;
+ int err;
new_ctrl = nn->ctrl;
- /* Step 1: Allocate resources for rings and the like
- * - Request interrupts
- * - Allocate RX and TX ring resources
- * - Setup initial RSS table
- */
- err = nfp_net_aux_irq_request(nn, NFP_NET_CFG_EXN, "%s-exn",
- nn->exn_name, sizeof(nn->exn_name),
- NFP_NET_IRQ_EXN_IDX, nn->exn_handler);
- if (err)
- return err;
-
- err = nfp_net_alloc_rings(nn);
- if (err)
- goto err_free_exn;
-
- err = netif_set_real_num_tx_queues(netdev, nn->num_tx_rings);
- if (err)
- goto err_free_rings;
-
- err = netif_set_real_num_rx_queues(netdev, nn->num_rx_rings);
- if (err)
- goto err_free_rings;
-
if (nn->cap & NFP_NET_CFG_CTRL_RSS) {
nfp_net_rss_write_key(nn);
nfp_net_rss_write_itbl(nn);
@@ -1756,22 +1943,18 @@ static int nfp_net_netdev_open(struct net_device *netdev)
update |= NFP_NET_CFG_UPDATE_IRQMOD;
}
- /* Step 2: Configure the NFP
- * - Enable rings from 0 to tx_rings/rx_rings - 1.
- * - Write MAC address (in case it changed)
- * - Set the MTU
- * - Set the Freelist buffer size
- * - Enable the FW
- */
+ for (r = 0; r < nn->num_r_vecs; r++)
+ nfp_net_vec_write_ring_data(nn, &nn->r_vecs[r], r);
+
nn_writeq(nn, NFP_NET_CFG_TXRS_ENABLE, nn->num_tx_rings == 64 ?
0xffffffffffffffffULL : ((u64)1 << nn->num_tx_rings) - 1);
nn_writeq(nn, NFP_NET_CFG_RXRS_ENABLE, nn->num_rx_rings == 64 ?
0xffffffffffffffffULL : ((u64)1 << nn->num_rx_rings) - 1);
- nfp_net_write_mac_addr(nn, netdev->dev_addr);
+ nfp_net_write_mac_addr(nn, nn->netdev->dev_addr);
- nn_writel(nn, NFP_NET_CFG_MTU, netdev->mtu);
+ nn_writel(nn, NFP_NET_CFG_MTU, nn->netdev->mtu);
nn_writel(nn, NFP_NET_CFG_FLBUFSZ, nn->fl_bufsz);
/* Enable device */
@@ -1784,69 +1967,213 @@ static int nfp_net_netdev_open(struct net_device *netdev)
nn_writel(nn, NFP_NET_CFG_CTRL, new_ctrl);
err = nfp_net_reconfig(nn, update);
- if (err)
- goto err_clear_config;
nn->ctrl = new_ctrl;
+ for (r = 0; r < nn->num_r_vecs; r++)
+ nfp_net_rx_ring_fill_freelist(nn->r_vecs[r].rx_ring);
+
/* Since reconfiguration requests while NFP is down are ignored we
* have to wipe the entire VXLAN configuration and reinitialize it.
*/
if (nn->ctrl & NFP_NET_CFG_CTRL_VXLAN) {
memset(&nn->vxlan_ports, 0, sizeof(nn->vxlan_ports));
memset(&nn->vxlan_usecnt, 0, sizeof(nn->vxlan_usecnt));
- vxlan_get_rx_port(netdev);
+ vxlan_get_rx_port(nn->netdev);
}
- /* Step 3: Enable for kernel
- * - put some freelist descriptors on each RX ring
- * - enable NAPI on each ring
- * - enable all TX queues
- * - set link state
- */
+ return err;
+}
+
+/**
+ * nfp_net_set_config_and_enable() - Write control BAR and enable NFP
+ * @nn: NFP Net device to reconfigure
+ */
+static int nfp_net_set_config_and_enable(struct nfp_net *nn)
+{
+ int err;
+
+ err = __nfp_net_set_config_and_enable(nn);
+ if (err)
+ nfp_net_clear_config_and_disable(nn);
+
+ return err;
+}
+
+/**
+ * nfp_net_open_stack() - Start the device from stack's perspective
+ * @nn: NFP Net device to reconfigure
+ */
+static void nfp_net_open_stack(struct nfp_net *nn)
+{
+ unsigned int r;
+
for (r = 0; r < nn->num_r_vecs; r++) {
- err = nfp_net_start_vec(nn, &nn->r_vecs[r]);
- if (err)
- goto err_disable_napi;
+ napi_enable(&nn->r_vecs[r].napi);
+ enable_irq(nn->irq_entries[nn->r_vecs[r].irq_idx].vector);
}
- netif_tx_wake_all_queues(netdev);
+ netif_tx_wake_all_queues(nn->netdev);
+ enable_irq(nn->irq_entries[NFP_NET_CFG_LSC].vector);
+ nfp_net_read_link_status(nn);
+}
+
+static int nfp_net_netdev_open(struct net_device *netdev)
+{
+ struct nfp_net *nn = netdev_priv(netdev);
+ int err, r;
+
+ if (nn->ctrl & NFP_NET_CFG_CTRL_ENABLE) {
+ nn_err(nn, "Dev is already enabled: 0x%08x\n", nn->ctrl);
+ return -EBUSY;
+ }
+
+ /* Step 1: Allocate resources for rings and the like
+ * - Request interrupts
+ * - Allocate RX and TX ring resources
+ * - Setup initial RSS table
+ */
+ err = nfp_net_aux_irq_request(nn, NFP_NET_CFG_EXN, "%s-exn",
+ nn->exn_name, sizeof(nn->exn_name),
+ NFP_NET_IRQ_EXN_IDX, nn->exn_handler);
+ if (err)
+ return err;
err = nfp_net_aux_irq_request(nn, NFP_NET_CFG_LSC, "%s-lsc",
nn->lsc_name, sizeof(nn->lsc_name),
NFP_NET_IRQ_LSC_IDX, nn->lsc_handler);
if (err)
- goto err_stop_tx;
- nfp_net_read_link_status(nn);
+ goto err_free_exn;
+ disable_irq(nn->irq_entries[NFP_NET_CFG_LSC].vector);
- return 0;
+ nn->rx_rings = kcalloc(nn->num_rx_rings, sizeof(*nn->rx_rings),
+ GFP_KERNEL);
+ if (!nn->rx_rings)
+ goto err_free_lsc;
+ nn->tx_rings = kcalloc(nn->num_tx_rings, sizeof(*nn->tx_rings),
+ GFP_KERNEL);
+ if (!nn->tx_rings)
+ goto err_free_rx_rings;
-err_stop_tx:
- netif_tx_disable(netdev);
- for (r = 0; r < nn->num_r_vecs; r++)
- nfp_net_tx_flush(nn->r_vecs[r].tx_ring);
-err_disable_napi:
- while (r--) {
- napi_disable(&nn->r_vecs[r].napi);
- nfp_net_rx_flush(nn->r_vecs[r].rx_ring);
+ for (r = 0; r < nn->num_r_vecs; r++) {
+ err = nfp_net_prepare_vector(nn, &nn->r_vecs[r], r);
+ if (err)
+ goto err_free_prev_vecs;
+
+ err = nfp_net_tx_ring_alloc(nn->r_vecs[r].tx_ring, nn->txd_cnt);
+ if (err)
+ goto err_cleanup_vec_p;
+
+ err = nfp_net_rx_ring_alloc(nn->r_vecs[r].rx_ring,
+ nn->fl_bufsz, nn->rxd_cnt);
+ if (err)
+ goto err_free_tx_ring_p;
+
+ err = nfp_net_rx_ring_bufs_alloc(nn, nn->r_vecs[r].rx_ring);
+ if (err)
+ goto err_flush_rx_ring_p;
}
-err_clear_config:
- nfp_net_clear_config_and_disable(nn);
+
+ err = netif_set_real_num_tx_queues(netdev, nn->num_tx_rings);
+ if (err)
+ goto err_free_rings;
+
+ err = netif_set_real_num_rx_queues(netdev, nn->num_rx_rings);
+ if (err)
+ goto err_free_rings;
+
+ /* Step 2: Configure the NFP
+ * - Enable rings from 0 to tx_rings/rx_rings - 1.
+ * - Write MAC address (in case it changed)
+ * - Set the MTU
+ * - Set the Freelist buffer size
+ * - Enable the FW
+ */
+ err = nfp_net_set_config_and_enable(nn);
+ if (err)
+ goto err_free_rings;
+
+ /* Step 3: Enable for kernel
+ * - put some freelist descriptors on each RX ring
+ * - enable NAPI on each ring
+ * - enable all TX queues
+ * - set link state
+ */
+ nfp_net_open_stack(nn);
+
+ return 0;
+
err_free_rings:
- nfp_net_free_rings(nn);
+ r = nn->num_r_vecs;
+err_free_prev_vecs:
+ while (r--) {
+ nfp_net_rx_ring_bufs_free(nn, nn->r_vecs[r].rx_ring);
+err_flush_rx_ring_p:
+ nfp_net_rx_ring_free(nn->r_vecs[r].rx_ring);
+err_free_tx_ring_p:
+ nfp_net_tx_ring_free(nn->r_vecs[r].tx_ring);
+err_cleanup_vec_p:
+ nfp_net_cleanup_vector(nn, &nn->r_vecs[r]);
+ }
+ kfree(nn->tx_rings);
+err_free_rx_rings:
+ kfree(nn->rx_rings);
+err_free_lsc:
+ nfp_net_aux_irq_free(nn, NFP_NET_CFG_LSC, NFP_NET_IRQ_LSC_IDX);
err_free_exn:
nfp_net_aux_irq_free(nn, NFP_NET_CFG_EXN, NFP_NET_IRQ_EXN_IDX);
return err;
}
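/* Note on the unwind above: the error labels are placed inside the while (r--)
 * loop on purpose, so a failure while setting up vector r first frees only the
 * resources that vector had already acquired, then the loop tears down the
 * fully initialized vectors before it.
 */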
/**
+ * nfp_net_close_stack() - Quiesce the stack (part of close)
+ * @nn: NFP Net device to reconfigure
+ */
+static void nfp_net_close_stack(struct nfp_net *nn)
+{
+ unsigned int r;
+
+ disable_irq(nn->irq_entries[NFP_NET_CFG_LSC].vector);
+ netif_carrier_off(nn->netdev);
+ nn->link_up = false;
+
+ for (r = 0; r < nn->num_r_vecs; r++) {
+ disable_irq(nn->irq_entries[nn->r_vecs[r].irq_idx].vector);
+ napi_disable(&nn->r_vecs[r].napi);
+ }
+
+ netif_tx_disable(nn->netdev);
+}
+
+/**
+ * nfp_net_close_free_all() - Free all runtime resources
+ * @nn: NFP Net device to reconfigure
+ */
+static void nfp_net_close_free_all(struct nfp_net *nn)
+{
+ unsigned int r;
+
+ for (r = 0; r < nn->num_r_vecs; r++) {
+ nfp_net_rx_ring_bufs_free(nn, nn->r_vecs[r].rx_ring);
+ nfp_net_rx_ring_free(nn->r_vecs[r].rx_ring);
+ nfp_net_tx_ring_free(nn->r_vecs[r].tx_ring);
+ nfp_net_cleanup_vector(nn, &nn->r_vecs[r]);
+ }
+
+ kfree(nn->rx_rings);
+ kfree(nn->tx_rings);
+
+ nfp_net_aux_irq_free(nn, NFP_NET_CFG_LSC, NFP_NET_IRQ_LSC_IDX);
+ nfp_net_aux_irq_free(nn, NFP_NET_CFG_EXN, NFP_NET_IRQ_EXN_IDX);
+}
+
+/**
* nfp_net_netdev_close() - Called when the device is downed
* @netdev: netdev structure
*/
static int nfp_net_netdev_close(struct net_device *netdev)
{
struct nfp_net *nn = netdev_priv(netdev);
- int r;
if (!(nn->ctrl & NFP_NET_CFG_CTRL_ENABLE)) {
nn_err(nn, "Dev is not up: 0x%08x\n", nn->ctrl);
@@ -1855,14 +2182,7 @@ static int nfp_net_netdev_close(struct net_device *netdev)
/* Step 1: Disable RX and TX rings from the Linux kernel perspective
*/
- nfp_net_aux_irq_free(nn, NFP_NET_CFG_LSC, NFP_NET_IRQ_LSC_IDX);
- netif_carrier_off(netdev);
- nn->link_up = false;
-
- for (r = 0; r < nn->num_r_vecs; r++)
- napi_disable(&nn->r_vecs[r].napi);
-
- netif_tx_disable(netdev);
+ nfp_net_close_stack(nn);
/* Step 2: Tell NFP
*/
@@ -1870,13 +2190,7 @@ static int nfp_net_netdev_close(struct net_device *netdev)
/* Step 3: Free resources
*/
- for (r = 0; r < nn->num_r_vecs; r++) {
- nfp_net_rx_flush(nn->r_vecs[r].rx_ring);
- nfp_net_tx_flush(nn->r_vecs[r].tx_ring);
- }
-
- nfp_net_free_rings(nn);
- nfp_net_aux_irq_free(nn, NFP_NET_CFG_EXN, NFP_NET_IRQ_EXN_IDX);
+ nfp_net_close_free_all(nn);
nn_dbg(nn, "%s down", netdev->name);
return 0;
@@ -1902,37 +2216,139 @@ static void nfp_net_set_rx_mode(struct net_device *netdev)
return;
nn_writel(nn, NFP_NET_CFG_CTRL, new_ctrl);
- if (nfp_net_reconfig(nn, NFP_NET_CFG_UPDATE_GEN))
- return;
+ nfp_net_reconfig_post(nn, NFP_NET_CFG_UPDATE_GEN);
nn->ctrl = new_ctrl;
}
static int nfp_net_change_mtu(struct net_device *netdev, int new_mtu)
{
+ unsigned int old_mtu, old_fl_bufsz, new_fl_bufsz;
struct nfp_net *nn = netdev_priv(netdev);
- u32 tmp;
-
- nn_dbg(nn, "New MTU = %d\n", new_mtu);
+ struct nfp_net_rx_ring *tmp_rings;
+ int err;
if (new_mtu < 68 || new_mtu > nn->max_mtu) {
nn_err(nn, "New MTU (%d) is not valid\n", new_mtu);
return -EINVAL;
}
+ old_mtu = netdev->mtu;
+ old_fl_bufsz = nn->fl_bufsz;
+ new_fl_bufsz = NFP_NET_MAX_PREPEND + ETH_HLEN + VLAN_HLEN * 2 + new_mtu;
+
+ if (!netif_running(netdev)) {
+ netdev->mtu = new_mtu;
+ nn->fl_bufsz = new_fl_bufsz;
+ return 0;
+ }
+
+ /* Prepare new rings */
+ tmp_rings = nfp_net_shadow_rx_rings_prepare(nn, new_fl_bufsz,
+ nn->rxd_cnt);
+ if (!tmp_rings)
+ return -ENOMEM;
+
+ /* Stop device, swap in new rings, try to start the firmware */
+ nfp_net_close_stack(nn);
+ nfp_net_clear_config_and_disable(nn);
+
+ tmp_rings = nfp_net_shadow_rx_rings_swap(nn, tmp_rings);
+
netdev->mtu = new_mtu;
+ nn->fl_bufsz = new_fl_bufsz;
- /* Freelist buffer size rounded up to the nearest 1K */
- tmp = new_mtu + ETH_HLEN + VLAN_HLEN + NFP_NET_MAX_PREPEND;
- nn->fl_bufsz = roundup(tmp, 1024);
+ err = nfp_net_set_config_and_enable(nn);
+ if (err) {
+ const int err_new = err;
- /* restart if running */
- if (netif_running(netdev)) {
- nfp_net_netdev_close(netdev);
- nfp_net_netdev_open(netdev);
+ /* Try with old configuration and old rings */
+ tmp_rings = nfp_net_shadow_rx_rings_swap(nn, tmp_rings);
+
+ netdev->mtu = old_mtu;
+ nn->fl_bufsz = old_fl_bufsz;
+
+ err = __nfp_net_set_config_and_enable(nn);
+ if (err)
+ nn_err(nn, "Can't restore MTU - FW communication failed (%d,%d)\n",
+ err_new, err);
}
- return 0;
+ nfp_net_shadow_rx_rings_free(nn, tmp_rings);
+
+ nfp_net_open_stack(nn);
+
+ return err;
+}
+
+int nfp_net_set_ring_size(struct nfp_net *nn, u32 rxd_cnt, u32 txd_cnt)
+{
+ struct nfp_net_tx_ring *tx_rings = NULL;
+ struct nfp_net_rx_ring *rx_rings = NULL;
+ u32 old_rxd_cnt, old_txd_cnt;
+ int err;
+
+ if (!netif_running(nn->netdev)) {
+ nn->rxd_cnt = rxd_cnt;
+ nn->txd_cnt = txd_cnt;
+ return 0;
+ }
+
+ old_rxd_cnt = nn->rxd_cnt;
+ old_txd_cnt = nn->txd_cnt;
+
+ /* Prepare new rings */
+ if (nn->rxd_cnt != rxd_cnt) {
+ rx_rings = nfp_net_shadow_rx_rings_prepare(nn, nn->fl_bufsz,
+ rxd_cnt);
+ if (!rx_rings)
+ return -ENOMEM;
+ }
+ if (nn->txd_cnt != txd_cnt) {
+ tx_rings = nfp_net_shadow_tx_rings_prepare(nn, txd_cnt);
+ if (!tx_rings) {
+ nfp_net_shadow_rx_rings_free(nn, rx_rings);
+ return -ENOMEM;
+ }
+ }
+
+ /* Stop device, swap in new rings, try to start the firmware */
+ nfp_net_close_stack(nn);
+ nfp_net_clear_config_and_disable(nn);
+
+ if (rx_rings)
+ rx_rings = nfp_net_shadow_rx_rings_swap(nn, rx_rings);
+ if (tx_rings)
+ tx_rings = nfp_net_shadow_tx_rings_swap(nn, tx_rings);
+
+ nn->rxd_cnt = rxd_cnt;
+ nn->txd_cnt = txd_cnt;
+
+ err = nfp_net_set_config_and_enable(nn);
+ if (err) {
+ const int err_new = err;
+
+ /* Try with old configuration and old rings */
+ if (rx_rings)
+ rx_rings = nfp_net_shadow_rx_rings_swap(nn, rx_rings);
+ if (tx_rings)
+ tx_rings = nfp_net_shadow_tx_rings_swap(nn, tx_rings);
+
+ nn->rxd_cnt = old_rxd_cnt;
+ nn->txd_cnt = old_txd_cnt;
+
+ err = __nfp_net_set_config_and_enable(nn);
+ if (err)
+ nn_err(nn, "Can't restore ring config - FW communication failed (%d,%d)\n",
+ err_new, err);
+ }
+
+ nfp_net_shadow_rx_rings_free(nn, rx_rings);
+ nfp_net_shadow_tx_rings_free(nn, tx_rings);
+
+ nfp_net_open_stack(nn);
+
+ return err;
}
static struct rtnl_link_stats64 *nfp_net_stat64(struct net_device *netdev,
@@ -2108,7 +2524,7 @@ static void nfp_net_set_vxlan_port(struct nfp_net *nn, int idx, __be16 port)
be16_to_cpu(nn->vxlan_ports[i + 1]) << 16 |
be16_to_cpu(nn->vxlan_ports[i]));
- nfp_net_reconfig(nn, NFP_NET_CFG_UPDATE_VXLAN);
+ nfp_net_reconfig_post(nn, NFP_NET_CFG_UPDATE_VXLAN);
}
/**
@@ -2254,6 +2670,9 @@ struct nfp_net *nfp_net_netdev_alloc(struct pci_dev *pdev,
spin_lock_init(&nn->reconfig_lock);
spin_lock_init(&nn->link_status_lock);
+ setup_timer(&nn->reconfig_timer,
+ nfp_net_reconfig_timer, (unsigned long)nn);
+
return nn;
}
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
index 8692003aeed8..ad6c4e31cedd 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ctrl.h
@@ -81,14 +81,10 @@
/**
* @NFP_NET_TXR_MAX: Maximum number of TX rings
- * @NFP_NET_TXR_MASK: Mask for TX rings
* @NFP_NET_RXR_MAX: Maximum number of RX rings
- * @NFP_NET_RXR_MASK: Mask for RX rings
*/
#define NFP_NET_TXR_MAX 64
-#define NFP_NET_TXR_MASK (NFP_NET_TXR_MAX - 1)
#define NFP_NET_RXR_MAX 64
-#define NFP_NET_RXR_MASK (NFP_NET_RXR_MAX - 1)
/**
* Read/Write config words (0x0000 - 0x002c)
@@ -152,9 +148,9 @@
* @NFP_NET_CFG_VERSION: Firmware version number
* @NFP_NET_CFG_STS: Status
* @NFP_NET_CFG_CAP: Capabilities (same bits as @NFP_NET_CFG_CTRL)
- * @NFP_NET_MAX_TXRINGS: Maximum number of TX rings
- * @NFP_NET_MAX_RXRINGS: Maximum number of RX rings
- * @NFP_NET_MAX_MTU: Maximum support MTU
+ * @NFP_NET_CFG_MAX_TXRINGS: Maximum number of TX rings
+ * @NFP_NET_CFG_MAX_RXRINGS: Maximum number of RX rings
+ * @NFP_NET_CFG_MAX_MTU: Maximum supported MTU
* @NFP_NET_CFG_START_TXQ: Start Queue Control Queue to use for TX (PF only)
* @NFP_NET_CFG_START_RXQ: Start Queue Control Queue to use for RX (PF only)
*
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c b/drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c
index 4c97c713121c..f7c9a5bc4aa3 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_debugfs.c
@@ -40,8 +40,9 @@ static struct dentry *nfp_dir;
static int nfp_net_debugfs_rx_q_read(struct seq_file *file, void *data)
{
- struct nfp_net_rx_ring *rx_ring = file->private;
int fl_rd_p, fl_wr_p, rx_rd_p, rx_wr_p, rxd_cnt;
+ struct nfp_net_r_vector *r_vec = file->private;
+ struct nfp_net_rx_ring *rx_ring;
struct nfp_net_rx_desc *rxd;
struct sk_buff *skb;
struct nfp_net *nn;
@@ -49,9 +50,10 @@ static int nfp_net_debugfs_rx_q_read(struct seq_file *file, void *data)
rtnl_lock();
- if (!rx_ring->r_vec || !rx_ring->r_vec->nfp_net)
+ if (!r_vec->nfp_net || !r_vec->rx_ring)
goto out;
- nn = rx_ring->r_vec->nfp_net;
+ nn = r_vec->nfp_net;
+ rx_ring = r_vec->rx_ring;
if (!netif_running(nn->netdev))
goto out;
@@ -115,7 +117,8 @@ static const struct file_operations nfp_rx_q_fops = {
static int nfp_net_debugfs_tx_q_read(struct seq_file *file, void *data)
{
- struct nfp_net_tx_ring *tx_ring = file->private;
+ struct nfp_net_r_vector *r_vec = file->private;
+ struct nfp_net_tx_ring *tx_ring;
struct nfp_net_tx_desc *txd;
int d_rd_p, d_wr_p, txd_cnt;
struct sk_buff *skb;
@@ -124,9 +127,10 @@ static int nfp_net_debugfs_tx_q_read(struct seq_file *file, void *data)
rtnl_lock();
- if (!tx_ring->r_vec || !tx_ring->r_vec->nfp_net)
+ if (!r_vec->nfp_net || !r_vec->tx_ring)
goto out;
- nn = tx_ring->r_vec->nfp_net;
+ nn = r_vec->nfp_net;
+ tx_ring = r_vec->tx_ring;
if (!netif_running(nn->netdev))
goto out;
@@ -183,7 +187,7 @@ static const struct file_operations nfp_tx_q_fops = {
void nfp_net_debugfs_adapter_add(struct nfp_net *nn)
{
- static struct dentry *queues, *tx, *rx;
+ struct dentry *queues, *tx, *rx;
char int_name[16];
int i;
@@ -196,7 +200,7 @@ void nfp_net_debugfs_adapter_add(struct nfp_net *nn)
/* Create queue debugging sub-tree */
queues = debugfs_create_dir("queue", nn->debugfs_dir);
- if (IS_ERR_OR_NULL(nn->debugfs_dir))
+ if (IS_ERR_OR_NULL(queues))
return;
rx = debugfs_create_dir("rx", queues);
@@ -207,13 +211,13 @@ void nfp_net_debugfs_adapter_add(struct nfp_net *nn)
for (i = 0; i < nn->num_rx_rings; i++) {
sprintf(int_name, "%d", i);
debugfs_create_file(int_name, S_IRUSR, rx,
- &nn->rx_rings[i], &nfp_rx_q_fops);
+ &nn->r_vecs[i], &nfp_rx_q_fops);
}
for (i = 0; i < nn->num_tx_rings; i++) {
sprintf(int_name, "%d", i);
debugfs_create_file(int_name, S_IRUSR, tx,
- &nn->tx_rings[i], &nfp_tx_q_fops);
+ &nn->r_vecs[i], &nfp_tx_q_fops);
}
}
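/* Keying the debugfs files off the r_vec rather than the ring matters now that
 * rings can be reallocated at runtime by the shadow-ring swap: the r_vec array
 * is stable for the adapter's lifetime and the current ring is looked up under
 * rtnl_lock() on every read.
 */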
diff --git a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
index 9a4084a68db5..ccfef1f17627 100644
--- a/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
+++ b/drivers/net/ethernet/netronome/nfp/nfp_net_ethtool.c
@@ -153,37 +153,25 @@ static int nfp_net_set_ringparam(struct net_device *netdev,
struct nfp_net *nn = netdev_priv(netdev);
u32 rxd_cnt, txd_cnt;
- if (netif_running(netdev)) {
- /* Some NIC drivers allow reconfiguration on the fly,
- * some down the interface, change and then up it
- * again. For now we don't allow changes when the
- * device is up.
- */
- nn_warn(nn, "Can't change rings while device is up\n");
- return -EBUSY;
- }
-
/* We don't have separate queues/rings for small/large frames. */
if (ring->rx_mini_pending || ring->rx_jumbo_pending)
return -EINVAL;
/* Round up to supported values */
rxd_cnt = roundup_pow_of_two(ring->rx_pending);
- rxd_cnt = max_t(u32, rxd_cnt, NFP_NET_MIN_RX_DESCS);
- rxd_cnt = min_t(u32, rxd_cnt, NFP_NET_MAX_RX_DESCS);
-
txd_cnt = roundup_pow_of_two(ring->tx_pending);
- txd_cnt = max_t(u32, txd_cnt, NFP_NET_MIN_TX_DESCS);
- txd_cnt = min_t(u32, txd_cnt, NFP_NET_MAX_TX_DESCS);
- if (nn->rxd_cnt != rxd_cnt || nn->txd_cnt != txd_cnt)
- nn_dbg(nn, "Change ring size: RxQ %u->%u, TxQ %u->%u\n",
- nn->rxd_cnt, rxd_cnt, nn->txd_cnt, txd_cnt);
+ if (rxd_cnt < NFP_NET_MIN_RX_DESCS || rxd_cnt > NFP_NET_MAX_RX_DESCS ||
+ txd_cnt < NFP_NET_MIN_TX_DESCS || txd_cnt > NFP_NET_MAX_TX_DESCS)
+ return -EINVAL;
- nn->rxd_cnt = rxd_cnt;
- nn->txd_cnt = txd_cnt;
+ if (nn->rxd_cnt == rxd_cnt && nn->txd_cnt == txd_cnt)
+ return 0;
- return 0;
+ nn_dbg(nn, "Change ring size: RxQ %u->%u, TxQ %u->%u\n",
+ nn->rxd_cnt, rxd_cnt, nn->txd_cnt, txd_cnt);
+
+ return nfp_net_set_ring_size(nn, rxd_cnt, txd_cnt);
}
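/* This is the ethtool ring-parameter hook (reached via "ethtool -G <dev> rx N
 * tx N"); with nfp_net_set_ring_size() the driver now resizes the rings of a
 * running device instead of rejecting the request with -EBUSY.
 */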
static void nfp_net_get_strings(struct net_device *netdev,
diff --git a/drivers/net/ethernet/netx-eth.c b/drivers/net/ethernet/netx-eth.c
index 9fbc30264237..adbc47f2d132 100644
--- a/drivers/net/ethernet/netx-eth.c
+++ b/drivers/net/ethernet/netx-eth.c
@@ -313,7 +313,8 @@ static int netx_eth_enable(struct net_device *ndev)
{
struct netx_eth_priv *priv = netdev_priv(ndev);
unsigned int mac4321, mac65;
- int running, i;
+ int running, i, ret;
+ bool inv_mac_addr = false;
ndev->netdev_ops = &netx_eth_netdev_ops;
ndev->watchdog_timeo = msecs_to_jiffies(5000);
@@ -358,15 +359,18 @@ static int netx_eth_enable(struct net_device *ndev)
xc_start(priv->xc);
if (!is_valid_ether_addr(ndev->dev_addr))
- printk("%s: Invalid ethernet MAC address. Please "
- "set using ifconfig\n", ndev->name);
+ inv_mac_addr = true;
for (i=2; i<=18; i++)
pfifo_push(EMPTY_PTR_FIFO(priv->id),
FIFO_PTR_FRAMENO(i) | FIFO_PTR_SEGMENT(priv->id));
- return register_netdev(ndev);
+ ret = register_netdev(ndev);
+ if (inv_mac_addr)
+ printk("%s: Invalid ethernet MAC address. Please set using ip\n",
+ ndev->name);
+ return ret;
}
static int netx_eth_drv_probe(struct platform_device *pdev)
diff --git a/drivers/net/ethernet/nuvoton/w90p910_ether.c b/drivers/net/ethernet/nuvoton/w90p910_ether.c
index 52d9a94aebb9..87b7b814778b 100644
--- a/drivers/net/ethernet/nuvoton/w90p910_ether.c
+++ b/drivers/net/ethernet/nuvoton/w90p910_ether.c
@@ -476,7 +476,7 @@ static void w90p910_reset_mac(struct net_device *dev)
w90p910_init_desc(dev);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
ether->cur_tx = 0x0;
ether->finish_tx = 0x0;
ether->cur_rx = 0x0;
@@ -490,7 +490,7 @@ static void w90p910_reset_mac(struct net_device *dev)
w90p910_trigger_tx(dev);
w90p910_trigger_rx(dev);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
if (netif_queue_stopped(dev))
netif_wake_queue(dev);
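
Editor's aside — netif_trans_update() is the replacement for the open-coded dev->trans_start = jiffies seen throughout this series; it refreshes the TX queue timestamp so the netdev watchdog does not fire right after a reset. A hedged sketch of a typical call site (example_restart_tx() is illustrative, not a w90p910 function):

#include <linux/netdevice.h>

/* Sketch only: after re-arming the hardware, refresh the TX timestamp so the
 * stack's watchdog does not immediately report a timeout, then wake the queue.
 */
static void example_restart_tx(struct net_device *dev)
{
	/* ... descriptor and MAC re-initialization would go here ... */

	netif_trans_update(dev);	/* prevent tx timeout */
	if (netif_queue_stopped(dev))
		netif_wake_queue(dev);
}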
diff --git a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe.h b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe.h
index 2a55d6d53ee6..8d710a3b4db0 100644
--- a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe.h
+++ b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe.h
@@ -481,7 +481,6 @@ struct pch_gbe_buffer {
/**
* struct pch_gbe_tx_ring - tx ring information
- * @tx_lock: spinlock structs
* @desc: pointer to the descriptor ring memory
* @dma: physical address of the descriptor ring
* @size: length of descriptor ring in bytes
@@ -491,7 +490,6 @@ struct pch_gbe_buffer {
* @buffer_info: array of buffer information structs
*/
struct pch_gbe_tx_ring {
- spinlock_t tx_lock;
struct pch_gbe_tx_desc *desc;
dma_addr_t dma;
unsigned int size;
diff --git a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
index 3b98b263bad0..3cd87a41ac92 100644
--- a/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
+++ b/drivers/net/ethernet/oki-semi/pch_gbe/pch_gbe_main.c
@@ -1640,7 +1640,7 @@ pch_gbe_clean_tx(struct pch_gbe_adapter *adapter,
cleaned_count);
if (cleaned_count > 0) { /*skip this if nothing cleaned*/
/* Recover from running out of Tx resources in xmit_frame */
- spin_lock(&tx_ring->tx_lock);
+ netif_tx_lock(adapter->netdev);
if (unlikely(cleaned && (netif_queue_stopped(adapter->netdev))))
{
netif_wake_queue(adapter->netdev);
@@ -1652,7 +1652,7 @@ pch_gbe_clean_tx(struct pch_gbe_adapter *adapter,
netdev_dbg(adapter->netdev, "next_to_clean : %d\n",
tx_ring->next_to_clean);
- spin_unlock(&tx_ring->tx_lock);
+ netif_tx_unlock(adapter->netdev);
}
return cleaned;
}
@@ -1805,7 +1805,6 @@ int pch_gbe_setup_tx_resources(struct pch_gbe_adapter *adapter,
tx_ring->next_to_use = 0;
tx_ring->next_to_clean = 0;
- spin_lock_init(&tx_ring->tx_lock);
for (desNo = 0; desNo < tx_ring->count; desNo++) {
tx_desc = PCH_GBE_TX_DESC(*tx_ring, desNo);
@@ -2135,15 +2134,9 @@ static int pch_gbe_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
{
struct pch_gbe_adapter *adapter = netdev_priv(netdev);
struct pch_gbe_tx_ring *tx_ring = adapter->tx_ring;
- unsigned long flags;
- if (!spin_trylock_irqsave(&tx_ring->tx_lock, flags)) {
- /* Collision - tell upper layer to requeue */
- return NETDEV_TX_LOCKED;
- }
if (unlikely(!PCH_GBE_DESC_UNUSED(tx_ring))) {
netif_stop_queue(netdev);
- spin_unlock_irqrestore(&tx_ring->tx_lock, flags);
netdev_dbg(netdev,
"Return : BUSY next_to use : 0x%08x next_to clean : 0x%08x\n",
tx_ring->next_to_use, tx_ring->next_to_clean);
@@ -2152,7 +2145,6 @@ static int pch_gbe_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
/* CRC,ITAG no support */
pch_gbe_tx_queue(adapter, tx_ring, skb);
- spin_unlock_irqrestore(&tx_ring->tx_lock, flags);
return NETDEV_TX_OK;
}
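
Editor's aside — the pch_gbe change above drops the driver-private tx_lock (and the old NETDEV_TX_LOCKED return) in favour of the core's per-device TX lock. A minimal sketch of that pattern on the completion side (example_clean_tx_done() is illustrative, not a pch_gbe routine):

#include <linux/netdevice.h>

/* Sketch: take the stack's TX lock, not a private spinlock, before waking a
 * queue that ndo_start_xmit() may have stopped when it ran out of descriptors.
 */
static void example_clean_tx_done(struct net_device *netdev, bool cleaned)
{
	netif_tx_lock(netdev);
	if (unlikely(cleaned && netif_queue_stopped(netdev)))
		netif_wake_queue(netdev);
	netif_tx_unlock(netdev);
}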
diff --git a/drivers/net/ethernet/packetengines/hamachi.c b/drivers/net/ethernet/packetengines/hamachi.c
index 13d88a6025c8..91be2f02ef1c 100644
--- a/drivers/net/ethernet/packetengines/hamachi.c
+++ b/drivers/net/ethernet/packetengines/hamachi.c
@@ -1144,7 +1144,7 @@ static void hamachi_tx_timeout(struct net_device *dev)
hmp->rx_ring[RX_RING_SIZE-1].status_n_length |= cpu_to_le32(DescEndRing);
/* Trigger an immediate transmit demand. */
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
dev->stats.tx_errors++;
/* Restart the chip's Tx/Rx processes . */
diff --git a/drivers/net/ethernet/packetengines/yellowfin.c b/drivers/net/ethernet/packetengines/yellowfin.c
index fa2db41e02f8..fb1d1031b091 100644
--- a/drivers/net/ethernet/packetengines/yellowfin.c
+++ b/drivers/net/ethernet/packetengines/yellowfin.c
@@ -714,7 +714,7 @@ static void yellowfin_tx_timeout(struct net_device *dev)
if (yp->cur_tx - yp->dirty_tx < TX_QUEUE_SIZE)
netif_wake_queue (dev); /* Typical path */
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
dev->stats.tx_errors++;
}
diff --git a/drivers/net/ethernet/qlogic/Kconfig b/drivers/net/ethernet/qlogic/Kconfig
index ddcfcab034c2..680d8c736d2b 100644
--- a/drivers/net/ethernet/qlogic/Kconfig
+++ b/drivers/net/ethernet/qlogic/Kconfig
@@ -98,9 +98,40 @@ config QED
---help---
This enables the support for ...
+config QED_SRIOV
+ bool "QLogic QED 25/40/100Gb SR-IOV support"
+ depends on QED && PCI_IOV
+ default y
+ ---help---
+	  This configuration parameter enables Single Root I/O
+	  Virtualization (SR-IOV) support for QED devices.
+ This allows for virtual function acceleration in virtualized
+ environments.
+
config QEDE
tristate "QLogic QED 25/40/100Gb Ethernet NIC"
depends on QED
---help---
This enables the support for ...
+
+config QEDE_VXLAN
+ bool "Virtual eXtensible Local Area Network support"
+ default n
+ depends on QEDE && VXLAN && !(QEDE=y && VXLAN=m)
+ ---help---
+	  This enables hardware offload support for the VXLAN protocol in
+	  the qede driver. Say Y here if you want to enable hardware offload
+ support for Virtual eXtensible Local Area Network (VXLAN)
+ in the driver.
+
+config QEDE_GENEVE
+ bool "Generic Network Virtualization Encapsulation (GENEVE) support"
+ depends on QEDE && GENEVE && !(QEDE=y && GENEVE=m)
+ ---help---
+ This allows one to create GENEVE virtual interfaces that provide
+ Layer 2 Networks over Layer 3 Networks. GENEVE is often used
+ to tunnel virtual network infrastructure in virtualized environments.
+ Say Y here if you want to enable hardware offload support for
+ Generic Network Virtualization Encapsulation (GENEVE) in the driver.
+
endif # NET_VENDOR_QLOGIC
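
Editor's aside — the bool symbols added above are consumed from C either through #ifdef CONFIG_QED_SRIOV blocks (as later hunks in this series do) or through IS_ENABLED(); a minimal, hypothetical illustration:

#include <linux/kconfig.h>
#include <linux/types.h>

/* Hypothetical helper: IS_ENABLED() expands to 1 when CONFIG_QEDE_VXLAN=y
 * (the symbol is a bool, so =m cannot occur here) and to 0 otherwise.
 */
static bool example_vxlan_offload_available(void)
{
	return IS_ENABLED(CONFIG_QEDE_VXLAN);
}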
diff --git a/drivers/net/ethernet/qlogic/netxen/netxen_nic_hw.c b/drivers/net/ethernet/qlogic/netxen/netxen_nic_hw.c
index db80eb1c6d4f..2b10f1bcd151 100644
--- a/drivers/net/ethernet/qlogic/netxen/netxen_nic_hw.c
+++ b/drivers/net/ethernet/qlogic/netxen/netxen_nic_hw.c
@@ -1015,20 +1015,24 @@ static int netxen_get_flash_block(struct netxen_adapter *adapter, int base,
{
int i, v, addr;
__le32 *ptr32;
+ int ret;
addr = base;
ptr32 = buf;
for (i = 0; i < size / sizeof(u32); i++) {
- if (netxen_rom_fast_read(adapter, addr, &v) == -1)
- return -1;
+ ret = netxen_rom_fast_read(adapter, addr, &v);
+ if (ret)
+ return ret;
+
*ptr32 = cpu_to_le32(v);
ptr32++;
addr += sizeof(u32);
}
if ((char *)buf + size > (char *)ptr32) {
__le32 local;
- if (netxen_rom_fast_read(adapter, addr, &v) == -1)
- return -1;
+ ret = netxen_rom_fast_read(adapter, addr, &v);
+ if (ret)
+ return ret;
local = cpu_to_le32(v);
memcpy(ptr32, &local, (char *)buf + size - (char *)ptr32);
}
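
Editor's aside — the hunk above stops collapsing every failure to -1 and instead hands the caller the callee's own error code; the pattern in isolation (example_read() is a stand-in, not a netxen routine):

#include <linux/types.h>

int example_read(void *ctx, int addr, u32 *val);	/* hypothetical helper */

/* Propagate the helper's errno-style return code instead of returning -1. */
static int example_read_block(void *ctx, int addr, u32 *val)
{
	int ret = example_read(ctx, addr, val);

	if (ret)
		return ret;

	/* ... use *val ... */
	return 0;
}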
@@ -1940,7 +1944,7 @@ void netxen_nic_set_link_parameters(struct netxen_adapter *adapter)
if (adapter->phy_read &&
adapter->phy_read(adapter,
NETXEN_NIU_GB_MII_MGMT_ADDR_AUTONEG,
- &autoneg) != 0)
+ &autoneg) == 0)
adapter->link_autoneg = autoneg;
} else
goto link_down;
diff --git a/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c b/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
index fd362b6923f4..7a0281a36c28 100644
--- a/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
+++ b/drivers/net/ethernet/qlogic/netxen/netxen_nic_main.c
@@ -852,7 +852,8 @@ netxen_check_options(struct netxen_adapter *adapter)
ptr32 = (__le32 *)&serial_num;
offset = NX_FW_SERIAL_NUM_OFFSET;
for (i = 0; i < 8; i++) {
- if (netxen_rom_fast_read(adapter, offset, &val) == -1) {
+ err = netxen_rom_fast_read(adapter, offset, &val);
+ if (err) {
dev_err(&pdev->dev, "error reading board info\n");
adapter->driver_mismatch = 1;
return;
@@ -2285,7 +2286,7 @@ static void netxen_tx_timeout_task(struct work_struct *work)
goto request_reset;
}
}
- adapter->netdev->trans_start = jiffies;
+ netif_trans_update(adapter->netdev);
rtnl_unlock();
return;
diff --git a/drivers/net/ethernet/qlogic/qed/Makefile b/drivers/net/ethernet/qlogic/qed/Makefile
index 5c2fd57236fe..d1f157e439cf 100644
--- a/drivers/net/ethernet/qlogic/qed/Makefile
+++ b/drivers/net/ethernet/qlogic/qed/Makefile
@@ -1,4 +1,6 @@
obj-$(CONFIG_QED) := qed.o
qed-y := qed_cxt.o qed_dev.o qed_hw.o qed_init_fw_funcs.o qed_init_ops.o \
- qed_int.o qed_main.o qed_mcp.o qed_sp_commands.o qed_spq.o qed_l2.o
+ qed_int.o qed_main.o qed_mcp.o qed_sp_commands.o qed_spq.o qed_l2.o \
+ qed_selftest.o qed_dcbx.o
+qed-$(CONFIG_QED_SRIOV) += qed_sriov.o qed_vf.o
diff --git a/drivers/net/ethernet/qlogic/qed/qed.h b/drivers/net/ethernet/qlogic/qed/qed.h
index fcb8e9ba51d9..1042f2af854a 100644
--- a/drivers/net/ethernet/qlogic/qed/qed.h
+++ b/drivers/net/ethernet/qlogic/qed/qed.h
@@ -26,12 +26,14 @@
#include "qed_hsi.h"
extern const struct qed_common_ops qed_common_ops_pass;
-#define DRV_MODULE_VERSION "8.7.0.0"
+#define DRV_MODULE_VERSION "8.7.1.20"
#define MAX_HWFNS_PER_DEVICE (4)
#define NAME_SIZE 16
#define VER_SIZE 16
+#define QED_WFQ_UNIT 100
+
/* cau states */
enum qed_coalescing_mode {
QED_COAL_MODE_DISABLE,
@@ -74,6 +76,51 @@ struct qed_rt_data {
bool *b_valid;
};
+enum qed_tunn_mode {
+ QED_MODE_L2GENEVE_TUNN,
+ QED_MODE_IPGENEVE_TUNN,
+ QED_MODE_L2GRE_TUNN,
+ QED_MODE_IPGRE_TUNN,
+ QED_MODE_VXLAN_TUNN,
+};
+
+enum qed_tunn_clss {
+ QED_TUNN_CLSS_MAC_VLAN,
+ QED_TUNN_CLSS_MAC_VNI,
+ QED_TUNN_CLSS_INNER_MAC_VLAN,
+ QED_TUNN_CLSS_INNER_MAC_VNI,
+ MAX_QED_TUNN_CLSS,
+};
+
+struct qed_tunn_start_params {
+ unsigned long tunn_mode;
+ u16 vxlan_udp_port;
+ u16 geneve_udp_port;
+ u8 update_vxlan_udp_port;
+ u8 update_geneve_udp_port;
+ u8 tunn_clss_vxlan;
+ u8 tunn_clss_l2geneve;
+ u8 tunn_clss_ipgeneve;
+ u8 tunn_clss_l2gre;
+ u8 tunn_clss_ipgre;
+};
+
+struct qed_tunn_update_params {
+ unsigned long tunn_mode_update_mask;
+ unsigned long tunn_mode;
+ u16 vxlan_udp_port;
+ u16 geneve_udp_port;
+ u8 update_rx_pf_clss;
+ u8 update_tx_pf_clss;
+ u8 update_vxlan_udp_port;
+ u8 update_geneve_udp_port;
+ u8 tunn_clss_vxlan;
+ u8 tunn_clss_l2geneve;
+ u8 tunn_clss_ipgeneve;
+ u8 tunn_clss_l2gre;
+ u8 tunn_clss_ipgre;
+};
+
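
Editor's aside — a hedged example of how a caller might fill qed_tunn_start_params to request VXLAN offload. The helper name, the assumption that tunn_mode is a bitmask keyed by the qed_tunn_mode values, and the classification choice are all illustrative rather than taken from the driver.

#include <linux/bitops.h>
#include <linux/string.h>

/* Illustrative only: request VXLAN tunnelling on the standard UDP port with
 * outer MAC + VLAN classification.
 */
static void example_fill_vxlan_start(struct qed_tunn_start_params *p)
{
	memset(p, 0, sizeof(*p));
	p->tunn_mode = BIT(QED_MODE_VXLAN_TUNN);	/* assumed bitmask usage */
	p->update_vxlan_udp_port = 1;
	p->vxlan_udp_port = 4789;			/* IANA VXLAN port */
	p->tunn_clss_vxlan = QED_TUNN_CLSS_MAC_VLAN;
}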
/* The PCI personality is not quite synonymous to protocol ID:
* 1. All personalities need CORE connections
* 2. The Ethernet personality may support also the RoCE protocol
@@ -105,6 +152,7 @@ enum QED_RESOURCES {
enum QED_FEATURE {
QED_PF_L2_QUE,
+ QED_VF,
QED_MAX_FEATURES,
};
@@ -192,6 +240,12 @@ struct qed_dmae_info {
struct dmae_cmd *p_dmae_cmd;
};
+struct qed_wfq_data {
+ /* when feature is configured for at least 1 vport */
+ u32 min_speed;
+ bool configured;
+};
+
struct qed_qm_info {
struct init_qm_pq_params *qm_pq_params;
struct init_qm_vport_params *qm_vport_params;
@@ -212,6 +266,7 @@ struct qed_qm_info {
bool vport_wfq_en;
u8 pf_wfq;
u32 pf_rl;
+ struct qed_wfq_data *wfq_data;
};
struct storm_stats {
@@ -256,6 +311,8 @@ struct qed_hwfn {
bool first_on_engine;
bool hw_init_done;
+ u8 num_funcs_on_engine;
+
/* BAR access */
void __iomem *regview;
void __iomem *doorbells;
@@ -306,8 +363,12 @@ struct qed_hwfn {
/* True if the driver requests for the link */
bool b_drv_link_init;
+ struct qed_vf_iov *vf_iov_info;
+ struct qed_pf_iov *pf_iov_info;
struct qed_mcp_info *mcp_info;
+ struct qed_dcbx_info *p_dcbx_info;
+
struct qed_hw_cid_data *p_tx_cids;
struct qed_hw_cid_data *p_rx_cids;
@@ -322,6 +383,12 @@ struct qed_hwfn {
struct qed_simd_fp_handler simd_proto_handler[64];
+#ifdef CONFIG_QED_SRIOV
+ struct workqueue_struct *iov_wq;
+ struct delayed_work iov_task;
+ unsigned long iov_task_flags;
+#endif
+
struct z_stream_s *stream;
};
@@ -430,6 +497,13 @@ struct qed_dev {
u8 num_hwfns;
struct qed_hwfn hwfns[MAX_HWFNS_PER_DEVICE];
+ /* SRIOV */
+ struct qed_hw_sriov_info *p_iov_info;
+#define IS_QED_SRIOV(cdev) (!!(cdev)->p_iov_info)
+
+ unsigned long tunn_mode;
+
+ bool b_is_vf;
u32 drv_type;
struct qed_eth_stats *reset_stats;
@@ -459,6 +533,8 @@ struct qed_dev {
const struct firmware *firmware;
};
+#define NUM_OF_VFS(dev) MAX_NUM_VFS_BB
+#define NUM_OF_L2_QUEUES(dev) MAX_NUM_L2_QUEUES_BB
#define NUM_OF_SBS(dev) MAX_SB_PER_PATH_BB
#define NUM_OF_ENG_PFS(dev) MAX_NUM_PFS_BB
@@ -480,6 +556,10 @@ static inline u8 qed_concrete_to_sw_fid(struct qed_dev *cdev,
#define PURE_LB_TC 8
+int qed_configure_vport_wfq(struct qed_dev *cdev, u16 vp_id, u32 rate);
+void qed_configure_vp_wfq_on_link_change(struct qed_dev *cdev, u32 min_pf_rate);
+
+void qed_clean_wfq_db(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt);
#define QED_LEADING_HWFN(dev) (&dev->hwfns[0])
/* Other Linux specific common definitions */
@@ -507,6 +587,4 @@ u32 qed_unzip_data(struct qed_hwfn *p_hwfn,
int qed_slowpath_irq_req(struct qed_hwfn *hwfn);
-#define QED_ETH_INTERFACE_VERSION 300
-
#endif /* _QED_H */
diff --git a/drivers/net/ethernet/qlogic/qed/qed_cxt.c b/drivers/net/ethernet/qlogic/qed/qed_cxt.c
index fc767c07a264..ac284c58d8c2 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_cxt.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_cxt.c
@@ -24,11 +24,13 @@
#include "qed_hw.h"
#include "qed_init_ops.h"
#include "qed_reg_addr.h"
+#include "qed_sriov.h"
/* Max number of connection types in HW (DQ/CDU etc.) */
#define MAX_CONN_TYPES PROTOCOLID_COMMON
#define NUM_TASK_TYPES 2
#define NUM_TASK_PF_SEGMENTS 4
+#define NUM_TASK_VF_SEGMENTS 1
/* QM constants */
#define QM_PQ_ELEMENT_SIZE 4 /* in bytes */
@@ -63,10 +65,12 @@ union conn_context {
struct qed_conn_type_cfg {
u32 cid_count;
u32 cid_start;
+ u32 cids_per_vf;
};
/* ILT Client configuration, Per connection type (protocol) resources. */
#define ILT_CLI_PF_BLOCKS (1 + NUM_TASK_PF_SEGMENTS * 2)
+#define ILT_CLI_VF_BLOCKS (1 + NUM_TASK_VF_SEGMENTS * 2)
#define CDUC_BLK (0)
enum ilt_clients {
@@ -97,6 +101,10 @@ struct qed_ilt_client_cfg {
/* ILT client blocks for PF */
struct qed_ilt_cli_blk pf_blks[ILT_CLI_PF_BLOCKS];
u32 pf_total_lines;
+
+ /* ILT client blocks for VFs */
+ struct qed_ilt_cli_blk vf_blks[ILT_CLI_VF_BLOCKS];
+ u32 vf_total_lines;
};
/* Per Path -
@@ -123,6 +131,11 @@ struct qed_cxt_mngr {
/* computed ILT structure */
struct qed_ilt_client_cfg clients[ILT_CLI_MAX];
+ /* total number of VFs for this hwfn -
+ * ALL VFs are symmetric in terms of HW resources
+ */
+ u32 vf_count;
+
/* Acquired CIDs */
struct qed_cid_acquired_map acquired[MAX_CONN_TYPES];
@@ -131,37 +144,60 @@ struct qed_cxt_mngr {
u32 pf_start_line;
};
-static u32 qed_cxt_cdu_iids(struct qed_cxt_mngr *p_mngr)
-{
- u32 type, pf_cids = 0;
+/* counts the iids for the CDU/CDUC ILT client configuration */
+struct qed_cdu_iids {
+ u32 pf_cids;
+ u32 per_vf_cids;
+};
- for (type = 0; type < MAX_CONN_TYPES; type++)
- pf_cids += p_mngr->conn_cfg[type].cid_count;
+static void qed_cxt_cdu_iids(struct qed_cxt_mngr *p_mngr,
+ struct qed_cdu_iids *iids)
+{
+ u32 type;
- return pf_cids;
+ for (type = 0; type < MAX_CONN_TYPES; type++) {
+ iids->pf_cids += p_mngr->conn_cfg[type].cid_count;
+ iids->per_vf_cids += p_mngr->conn_cfg[type].cids_per_vf;
+ }
}
static void qed_cxt_qm_iids(struct qed_hwfn *p_hwfn,
struct qed_qm_iids *iids)
{
struct qed_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
- int type;
+ u32 vf_cids = 0, type;
- for (type = 0; type < MAX_CONN_TYPES; type++)
+ for (type = 0; type < MAX_CONN_TYPES; type++) {
iids->cids += p_mngr->conn_cfg[type].cid_count;
+ vf_cids += p_mngr->conn_cfg[type].cids_per_vf;
+ }
- DP_VERBOSE(p_hwfn, QED_MSG_ILT, "iids: CIDS %08x\n", iids->cids);
+ iids->vf_cids += vf_cids * p_mngr->vf_count;
+ DP_VERBOSE(p_hwfn, QED_MSG_ILT,
+ "iids: CIDS %08x vf_cids %08x\n",
+ iids->cids, iids->vf_cids);
}
/* set the iids count per protocol */
static void qed_cxt_set_proto_cid_count(struct qed_hwfn *p_hwfn,
enum protocol_type type,
- u32 cid_count)
+ u32 cid_count, u32 vf_cid_cnt)
{
struct qed_cxt_mngr *p_mgr = p_hwfn->p_cxt_mngr;
struct qed_conn_type_cfg *p_conn = &p_mgr->conn_cfg[type];
p_conn->cid_count = roundup(cid_count, DQ_RANGE_ALIGN);
+ p_conn->cids_per_vf = roundup(vf_cid_cnt, DQ_RANGE_ALIGN);
+}
+
+u32 qed_cxt_get_proto_cid_count(struct qed_hwfn *p_hwfn,
+ enum protocol_type type,
+ u32 *vf_cid)
+{
+ if (vf_cid)
+ *vf_cid = p_hwfn->p_cxt_mngr->conn_cfg[type].cids_per_vf;
+
+ return p_hwfn->p_cxt_mngr->conn_cfg[type].cid_count;
}
static void qed_ilt_cli_blk_fill(struct qed_ilt_client_cfg *p_cli,
@@ -210,10 +246,12 @@ int qed_cxt_cfg_ilt_compute(struct qed_hwfn *p_hwfn)
struct qed_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
struct qed_ilt_client_cfg *p_cli;
struct qed_ilt_cli_blk *p_blk;
- u32 curr_line, total, pf_cids;
+ struct qed_cdu_iids cdu_iids;
struct qed_qm_iids qm_iids;
+ u32 curr_line, total, i;
memset(&qm_iids, 0, sizeof(qm_iids));
+ memset(&cdu_iids, 0, sizeof(cdu_iids));
p_mngr->pf_start_line = RESC_START(p_hwfn, QED_ILT);
@@ -224,14 +262,16 @@ int qed_cxt_cfg_ilt_compute(struct qed_hwfn *p_hwfn)
/* CDUC */
p_cli = &p_mngr->clients[ILT_CLI_CDUC];
curr_line = p_mngr->pf_start_line;
+
+ /* CDUC PF */
p_cli->pf_total_lines = 0;
/* get the counters for the CDUC and QM clients */
- pf_cids = qed_cxt_cdu_iids(p_mngr);
+ qed_cxt_cdu_iids(p_mngr, &cdu_iids);
p_blk = &p_cli->pf_blks[CDUC_BLK];
- total = pf_cids * CONN_CXT_SIZE(p_hwfn);
+ total = cdu_iids.pf_cids * CONN_CXT_SIZE(p_hwfn);
qed_ilt_cli_blk_fill(p_cli, p_blk, curr_line,
total, CONN_CXT_SIZE(p_hwfn));
@@ -239,17 +279,36 @@ int qed_cxt_cfg_ilt_compute(struct qed_hwfn *p_hwfn)
qed_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line, ILT_CLI_CDUC);
p_cli->pf_total_lines = curr_line - p_blk->start_line;
+ /* CDUC VF */
+ p_blk = &p_cli->vf_blks[CDUC_BLK];
+ total = cdu_iids.per_vf_cids * CONN_CXT_SIZE(p_hwfn);
+
+ qed_ilt_cli_blk_fill(p_cli, p_blk, curr_line,
+ total, CONN_CXT_SIZE(p_hwfn));
+
+ qed_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line, ILT_CLI_CDUC);
+ p_cli->vf_total_lines = curr_line - p_blk->start_line;
+
+ for (i = 1; i < p_mngr->vf_count; i++)
+ qed_ilt_cli_adv_line(p_hwfn, p_cli, p_blk, &curr_line,
+ ILT_CLI_CDUC);
+
/* QM */
p_cli = &p_mngr->clients[ILT_CLI_QM];
p_blk = &p_cli->pf_blks[0];
qed_cxt_qm_iids(p_hwfn, &qm_iids);
- total = qed_qm_pf_mem_size(p_hwfn->rel_pf_id, qm_iids.cids, 0, 0,
- p_hwfn->qm_info.num_pqs, 0);
-
- DP_VERBOSE(p_hwfn, QED_MSG_ILT,
- "QM ILT Info, (cids=%d, num_pqs=%d, memory_size=%d)\n",
- qm_iids.cids, p_hwfn->qm_info.num_pqs, total);
+ total = qed_qm_pf_mem_size(p_hwfn->rel_pf_id, qm_iids.cids,
+ qm_iids.vf_cids, 0,
+ p_hwfn->qm_info.num_pqs,
+ p_hwfn->qm_info.num_vf_pqs);
+
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_ILT,
+ "QM ILT Info, (cids=%d, vf_cids=%d, num_pqs=%d, num_vf_pqs=%d, memory_size=%d)\n",
+ qm_iids.cids,
+ qm_iids.vf_cids,
+ p_hwfn->qm_info.num_pqs, p_hwfn->qm_info.num_vf_pqs, total);
qed_ilt_cli_blk_fill(p_cli, p_blk,
curr_line, total * 0x1000,
@@ -358,7 +417,7 @@ static int qed_ilt_shadow_alloc(struct qed_hwfn *p_hwfn)
struct qed_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
struct qed_ilt_client_cfg *clients = p_mngr->clients;
struct qed_ilt_cli_blk *p_blk;
- u32 size, i, j;
+ u32 size, i, j, k;
int rc;
size = qed_cxt_ilt_shadow_size(clients);
@@ -383,6 +442,16 @@ static int qed_ilt_shadow_alloc(struct qed_hwfn *p_hwfn)
if (rc != 0)
goto ilt_shadow_fail;
}
+ for (k = 0; k < p_mngr->vf_count; k++) {
+ for (j = 0; j < ILT_CLI_VF_BLOCKS; j++) {
+ u32 lines = clients[i].vf_total_lines * k;
+
+ p_blk = &clients[i].vf_blks[j];
+ rc = qed_ilt_blk_alloc(p_hwfn, p_blk, i, lines);
+ if (rc != 0)
+ goto ilt_shadow_fail;
+ }
+ }
}
return 0;
@@ -467,6 +536,9 @@ int qed_cxt_mngr_alloc(struct qed_hwfn *p_hwfn)
for (i = 0; i < ILT_CLI_MAX; i++)
p_mngr->clients[i].p_size.val = ILT_DEFAULT_HW_P_SIZE;
+ if (p_hwfn->cdev->p_iov_info)
+ p_mngr->vf_count = p_hwfn->cdev->p_iov_info->total_vfs;
+
 	/* Set the cxt manager pointer prior to further allocations */
p_hwfn->p_cxt_mngr = p_mngr;
@@ -579,8 +651,10 @@ void qed_qm_init_pf(struct qed_hwfn *p_hwfn)
params.max_phys_tcs_per_port = qm_info->max_phys_tcs_per_port;
params.is_first_pf = p_hwfn->first_on_engine;
params.num_pf_cids = iids.cids;
+ params.num_vf_cids = iids.vf_cids;
params.start_pq = qm_info->start_pq;
- params.num_pf_pqs = qm_info->num_pqs;
+ params.num_pf_pqs = qm_info->num_pqs - qm_info->num_vf_pqs;
+ params.num_vf_pqs = qm_info->num_vf_pqs;
params.start_vport = qm_info->start_vport;
params.num_vports = qm_info->num_vports;
params.pf_wfq = qm_info->pf_wfq;
@@ -610,26 +684,55 @@ static int qed_cm_init_pf(struct qed_hwfn *p_hwfn)
static void qed_dq_init_pf(struct qed_hwfn *p_hwfn)
{
struct qed_cxt_mngr *p_mngr = p_hwfn->p_cxt_mngr;
- u32 dq_pf_max_cid = 0;
+ u32 dq_pf_max_cid = 0, dq_vf_max_cid = 0;
dq_pf_max_cid += (p_mngr->conn_cfg[0].cid_count >> DQ_RANGE_SHIFT);
STORE_RT_REG(p_hwfn, DORQ_REG_PF_MAX_ICID_0_RT_OFFSET, dq_pf_max_cid);
+ dq_vf_max_cid += (p_mngr->conn_cfg[0].cids_per_vf >> DQ_RANGE_SHIFT);
+ STORE_RT_REG(p_hwfn, DORQ_REG_VF_MAX_ICID_0_RT_OFFSET, dq_vf_max_cid);
+
dq_pf_max_cid += (p_mngr->conn_cfg[1].cid_count >> DQ_RANGE_SHIFT);
STORE_RT_REG(p_hwfn, DORQ_REG_PF_MAX_ICID_1_RT_OFFSET, dq_pf_max_cid);
+ dq_vf_max_cid += (p_mngr->conn_cfg[1].cids_per_vf >> DQ_RANGE_SHIFT);
+ STORE_RT_REG(p_hwfn, DORQ_REG_VF_MAX_ICID_1_RT_OFFSET, dq_vf_max_cid);
+
dq_pf_max_cid += (p_mngr->conn_cfg[2].cid_count >> DQ_RANGE_SHIFT);
STORE_RT_REG(p_hwfn, DORQ_REG_PF_MAX_ICID_2_RT_OFFSET, dq_pf_max_cid);
+ dq_vf_max_cid += (p_mngr->conn_cfg[2].cids_per_vf >> DQ_RANGE_SHIFT);
+ STORE_RT_REG(p_hwfn, DORQ_REG_VF_MAX_ICID_2_RT_OFFSET, dq_vf_max_cid);
+
dq_pf_max_cid += (p_mngr->conn_cfg[3].cid_count >> DQ_RANGE_SHIFT);
STORE_RT_REG(p_hwfn, DORQ_REG_PF_MAX_ICID_3_RT_OFFSET, dq_pf_max_cid);
+ dq_vf_max_cid += (p_mngr->conn_cfg[3].cids_per_vf >> DQ_RANGE_SHIFT);
+ STORE_RT_REG(p_hwfn, DORQ_REG_VF_MAX_ICID_3_RT_OFFSET, dq_vf_max_cid);
+
dq_pf_max_cid += (p_mngr->conn_cfg[4].cid_count >> DQ_RANGE_SHIFT);
STORE_RT_REG(p_hwfn, DORQ_REG_PF_MAX_ICID_4_RT_OFFSET, dq_pf_max_cid);
- /* 5 - PF */
+ dq_vf_max_cid += (p_mngr->conn_cfg[4].cids_per_vf >> DQ_RANGE_SHIFT);
+ STORE_RT_REG(p_hwfn, DORQ_REG_VF_MAX_ICID_4_RT_OFFSET, dq_vf_max_cid);
+
dq_pf_max_cid += (p_mngr->conn_cfg[5].cid_count >> DQ_RANGE_SHIFT);
STORE_RT_REG(p_hwfn, DORQ_REG_PF_MAX_ICID_5_RT_OFFSET, dq_pf_max_cid);
+
+ dq_vf_max_cid += (p_mngr->conn_cfg[5].cids_per_vf >> DQ_RANGE_SHIFT);
+ STORE_RT_REG(p_hwfn, DORQ_REG_VF_MAX_ICID_5_RT_OFFSET, dq_vf_max_cid);
+
+ /* Connection types 6 & 7 are not in use, yet they must be configured
+ * as the highest possible connection. Not configuring them means the
+ * defaults will be used, and with a large number of cids a bug may
+	 * occur if the defaults are smaller than dq_pf_max_cid /
+ * dq_vf_max_cid.
+ */
+ STORE_RT_REG(p_hwfn, DORQ_REG_PF_MAX_ICID_6_RT_OFFSET, dq_pf_max_cid);
+ STORE_RT_REG(p_hwfn, DORQ_REG_VF_MAX_ICID_6_RT_OFFSET, dq_vf_max_cid);
+
+ STORE_RT_REG(p_hwfn, DORQ_REG_PF_MAX_ICID_7_RT_OFFSET, dq_pf_max_cid);
+ STORE_RT_REG(p_hwfn, DORQ_REG_VF_MAX_ICID_7_RT_OFFSET, dq_vf_max_cid);
}
static void qed_ilt_bounds_init(struct qed_hwfn *p_hwfn)
@@ -653,6 +756,38 @@ static void qed_ilt_bounds_init(struct qed_hwfn *p_hwfn)
}
}
+static void qed_ilt_vf_bounds_init(struct qed_hwfn *p_hwfn)
+{
+ struct qed_ilt_client_cfg *p_cli;
+ u32 blk_factor;
+
+	/* For simplicity, we set the 'block' to be an ILT page */
+ if (p_hwfn->cdev->p_iov_info) {
+ struct qed_hw_sriov_info *p_iov = p_hwfn->cdev->p_iov_info;
+
+ STORE_RT_REG(p_hwfn,
+ PSWRQ2_REG_VF_BASE_RT_OFFSET,
+ p_iov->first_vf_in_pf);
+ STORE_RT_REG(p_hwfn,
+ PSWRQ2_REG_VF_LAST_ILT_RT_OFFSET,
+ p_iov->first_vf_in_pf + p_iov->total_vfs);
+ }
+
+ p_cli = &p_hwfn->p_cxt_mngr->clients[ILT_CLI_CDUC];
+ blk_factor = ilog2(ILT_PAGE_IN_BYTES(p_cli->p_size.val) >> 10);
+ if (p_cli->active) {
+ STORE_RT_REG(p_hwfn,
+ PSWRQ2_REG_CDUC_BLOCKS_FACTOR_RT_OFFSET,
+ blk_factor);
+ STORE_RT_REG(p_hwfn,
+ PSWRQ2_REG_CDUC_NUMBER_OF_PF_BLOCKS_RT_OFFSET,
+ p_cli->pf_total_lines);
+ STORE_RT_REG(p_hwfn,
+ PSWRQ2_REG_CDUC_VF_BLOCKS_RT_OFFSET,
+ p_cli->vf_total_lines);
+ }
+}
+
/* ILT (PSWRQ2) PF */
static void qed_ilt_init_pf(struct qed_hwfn *p_hwfn)
{
@@ -662,6 +797,7 @@ static void qed_ilt_init_pf(struct qed_hwfn *p_hwfn)
u32 line, rt_offst, i;
qed_ilt_bounds_init(p_hwfn);
+ qed_ilt_vf_bounds_init(p_hwfn);
p_mngr = p_hwfn->p_cxt_mngr;
p_shdw = p_mngr->ilt_shadow;
@@ -839,10 +975,10 @@ int qed_cxt_set_pf_params(struct qed_hwfn *p_hwfn)
/* Set the number of required CORE connections */
u32 core_cids = 1; /* SPQ */
- qed_cxt_set_proto_cid_count(p_hwfn, PROTOCOLID_CORE, core_cids);
+ qed_cxt_set_proto_cid_count(p_hwfn, PROTOCOLID_CORE, core_cids, 0);
qed_cxt_set_proto_cid_count(p_hwfn, PROTOCOLID_ETH,
- p_params->num_cons);
+ p_params->num_cons, 1);
return 0;
}
diff --git a/drivers/net/ethernet/qlogic/qed/qed_cxt.h b/drivers/net/ethernet/qlogic/qed/qed_cxt.h
index c8e1f5e5c42b..234c0fa8db2a 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_cxt.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_cxt.h
@@ -51,6 +51,9 @@ enum qed_cxt_elem_type {
QED_ELEM_TASK
};
+u32 qed_cxt_get_proto_cid_count(struct qed_hwfn *p_hwfn,
+ enum protocol_type type, u32 *vf_cid);
+
/**
* @brief qed_cxt_set_pf_params - Set the PF params for cxt init
*
@@ -128,6 +131,16 @@ void qed_cxt_hw_init_pf(struct qed_hwfn *p_hwfn);
void qed_qm_init_pf(struct qed_hwfn *p_hwfn);
/**
+ * @brief Reconfigures QM pf on the fly
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ *
+ * @return int
+ */
+int qed_qm_reconf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt);
+
+/**
* @brief qed_cxt_release - Release a cid
*
* @param p_hwfn
diff --git a/drivers/net/ethernet/qlogic/qed/qed_dcbx.c b/drivers/net/ethernet/qlogic/qed/qed_dcbx.c
new file mode 100644
index 000000000000..cbf58e1f9333
--- /dev/null
+++ b/drivers/net/ethernet/qlogic/qed/qed_dcbx.c
@@ -0,0 +1,562 @@
+/* QLogic qed NIC Driver
+ * Copyright (c) 2015 QLogic Corporation
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#include <linux/types.h>
+#include <asm/byteorder.h>
+#include <linux/bitops.h>
+#include <linux/errno.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include "qed.h"
+#include "qed_cxt.h"
+#include "qed_dcbx.h"
+#include "qed_hsi.h"
+#include "qed_sp.h"
+
+#define QED_DCBX_MAX_MIB_READ_TRY (100)
+#define QED_ETH_TYPE_DEFAULT (0)
+#define QED_ETH_TYPE_ROCE (0x8915)
+#define QED_UDP_PORT_TYPE_ROCE_V2 (0x12B7)
+#define QED_ETH_TYPE_FCOE (0x8906)
+#define QED_TCP_PORT_ISCSI (0xCBC)
+
+#define QED_DCBX_INVALID_PRIORITY 0xFF
+
+/* Get Traffic Class from priority traffic class table, 4 bits represent
+ * the traffic class corresponding to the priority.
+ */
+#define QED_DCBX_PRIO2TC(prio_tc_tbl, prio) \
+ ((u32)(prio_tc_tbl >> ((7 - prio) * 4)) & 0x7)
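
Editor's aside — a short worked example of the macro with an illustrative table value: in prio_tc_tbl = 0x76543210, priority 7 occupies bits 3:0 and priority 0 bits 31:28, so looking up priority 5 shifts right by (7 - 5) * 4 = 8 bits and masks.

/* Worked example (illustrative value only):
 *   QED_DCBX_PRIO2TC(0x76543210, 5) == ((0x76543210 >> 8) & 0x7) == 2
 */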
+
+static const struct qed_dcbx_app_metadata qed_dcbx_app_update[] = {
+ {DCBX_PROTOCOL_ISCSI, "ISCSI", QED_PCI_DEFAULT},
+ {DCBX_PROTOCOL_FCOE, "FCOE", QED_PCI_DEFAULT},
+ {DCBX_PROTOCOL_ROCE, "ROCE", QED_PCI_DEFAULT},
+ {DCBX_PROTOCOL_ROCE_V2, "ROCE_V2", QED_PCI_DEFAULT},
+ {DCBX_PROTOCOL_ETH, "ETH", QED_PCI_ETH}
+};
+
+static bool qed_dcbx_app_ethtype(u32 app_info_bitmap)
+{
+ return !!(QED_MFW_GET_FIELD(app_info_bitmap, DCBX_APP_SF) ==
+ DCBX_APP_SF_ETHTYPE);
+}
+
+static bool qed_dcbx_app_port(u32 app_info_bitmap)
+{
+ return !!(QED_MFW_GET_FIELD(app_info_bitmap, DCBX_APP_SF) ==
+ DCBX_APP_SF_PORT);
+}
+
+static bool qed_dcbx_default_tlv(u32 app_info_bitmap, u16 proto_id)
+{
+ return !!(qed_dcbx_app_ethtype(app_info_bitmap) &&
+ proto_id == QED_ETH_TYPE_DEFAULT);
+}
+
+static bool qed_dcbx_iscsi_tlv(u32 app_info_bitmap, u16 proto_id)
+{
+ return !!(qed_dcbx_app_port(app_info_bitmap) &&
+ proto_id == QED_TCP_PORT_ISCSI);
+}
+
+static bool qed_dcbx_fcoe_tlv(u32 app_info_bitmap, u16 proto_id)
+{
+ return !!(qed_dcbx_app_ethtype(app_info_bitmap) &&
+ proto_id == QED_ETH_TYPE_FCOE);
+}
+
+static bool qed_dcbx_roce_tlv(u32 app_info_bitmap, u16 proto_id)
+{
+ return !!(qed_dcbx_app_ethtype(app_info_bitmap) &&
+ proto_id == QED_ETH_TYPE_ROCE);
+}
+
+static bool qed_dcbx_roce_v2_tlv(u32 app_info_bitmap, u16 proto_id)
+{
+ return !!(qed_dcbx_app_port(app_info_bitmap) &&
+ proto_id == QED_UDP_PORT_TYPE_ROCE_V2);
+}
+
+static void
+qed_dcbx_dp_protocol(struct qed_hwfn *p_hwfn, struct qed_dcbx_results *p_data)
+{
+ enum dcbx_protocol_type id;
+ int i;
+
+ DP_VERBOSE(p_hwfn, QED_MSG_DCB, "DCBX negotiated: %d\n",
+ p_data->dcbx_enabled);
+
+ for (i = 0; i < ARRAY_SIZE(qed_dcbx_app_update); i++) {
+ id = qed_dcbx_app_update[i].id;
+
+ DP_VERBOSE(p_hwfn, QED_MSG_DCB,
+ "%s info: update %d, enable %d, prio %d, tc %d, num_tc %d\n",
+ qed_dcbx_app_update[i].name, p_data->arr[id].update,
+ p_data->arr[id].enable, p_data->arr[id].priority,
+ p_data->arr[id].tc, p_hwfn->hw_info.num_tc);
+ }
+}
+
+static void
+qed_dcbx_set_params(struct qed_dcbx_results *p_data,
+ struct qed_hw_info *p_info,
+ bool enable,
+ bool update,
+ u8 prio,
+ u8 tc,
+ enum dcbx_protocol_type type,
+ enum qed_pci_personality personality)
+{
+ /* PF update ramrod data */
+ p_data->arr[type].update = update;
+ p_data->arr[type].enable = enable;
+ p_data->arr[type].priority = prio;
+ p_data->arr[type].tc = tc;
+
+ /* QM reconf data */
+ if (p_info->personality == personality) {
+ if (personality == QED_PCI_ETH)
+ p_info->non_offload_tc = tc;
+ else
+ p_info->offload_tc = tc;
+ }
+}
+
+/* Update app protocol data and hw_info fields with the TLV info */
+static void
+qed_dcbx_update_app_info(struct qed_dcbx_results *p_data,
+ struct qed_hwfn *p_hwfn,
+ bool enable,
+ bool update,
+ u8 prio, u8 tc, enum dcbx_protocol_type type)
+{
+ struct qed_hw_info *p_info = &p_hwfn->hw_info;
+ enum qed_pci_personality personality;
+ enum dcbx_protocol_type id;
+ char *name;
+ int i;
+
+ for (i = 0; i < ARRAY_SIZE(qed_dcbx_app_update); i++) {
+ id = qed_dcbx_app_update[i].id;
+
+ if (type != id)
+ continue;
+
+ personality = qed_dcbx_app_update[i].personality;
+ name = qed_dcbx_app_update[i].name;
+
+ qed_dcbx_set_params(p_data, p_info, enable, update,
+ prio, tc, type, personality);
+ }
+}
+
+static bool
+qed_dcbx_get_app_protocol_type(struct qed_hwfn *p_hwfn,
+ u32 app_prio_bitmap,
+ u16 id, enum dcbx_protocol_type *type)
+{
+ if (qed_dcbx_fcoe_tlv(app_prio_bitmap, id)) {
+ *type = DCBX_PROTOCOL_FCOE;
+ } else if (qed_dcbx_roce_tlv(app_prio_bitmap, id)) {
+ *type = DCBX_PROTOCOL_ROCE;
+ } else if (qed_dcbx_iscsi_tlv(app_prio_bitmap, id)) {
+ *type = DCBX_PROTOCOL_ISCSI;
+ } else if (qed_dcbx_default_tlv(app_prio_bitmap, id)) {
+ *type = DCBX_PROTOCOL_ETH;
+ } else if (qed_dcbx_roce_v2_tlv(app_prio_bitmap, id)) {
+ *type = DCBX_PROTOCOL_ROCE_V2;
+ } else {
+ *type = DCBX_MAX_PROTOCOL_TYPE;
+ DP_ERR(p_hwfn,
+ "No action required, App TLV id = 0x%x app_prio_bitmap = 0x%x\n",
+ id, app_prio_bitmap);
+ return false;
+ }
+
+ return true;
+}
+
+/* Parse app TLV's to update TC information in hw_info structure for
+ * reconfiguring QM. Get protocol specific data for PF update ramrod command.
+ */
+static int
+qed_dcbx_process_tlv(struct qed_hwfn *p_hwfn,
+ struct qed_dcbx_results *p_data,
+ struct dcbx_app_priority_entry *p_tbl,
+ u32 pri_tc_tbl, int count, bool dcbx_enabled)
+{
+ u8 tc, priority, priority_map;
+ enum dcbx_protocol_type type;
+ u16 protocol_id;
+ bool enable;
+ int i;
+
+ DP_VERBOSE(p_hwfn, QED_MSG_DCB, "Num APP entries = %d\n", count);
+
+ /* Parse APP TLV */
+ for (i = 0; i < count; i++) {
+ protocol_id = QED_MFW_GET_FIELD(p_tbl[i].entry,
+ DCBX_APP_PROTOCOL_ID);
+ priority_map = QED_MFW_GET_FIELD(p_tbl[i].entry,
+ DCBX_APP_PRI_MAP);
+ priority = ffs(priority_map) - 1;
+ if (priority < 0) {
+ DP_ERR(p_hwfn, "Invalid priority\n");
+ return -EINVAL;
+ }
+
+ tc = QED_DCBX_PRIO2TC(pri_tc_tbl, priority);
+ if (qed_dcbx_get_app_protocol_type(p_hwfn, p_tbl[i].entry,
+ protocol_id, &type)) {
+			/* ETH always has the enable bit reset, as it gets
+			 * vlan information per packet. For other protocols,
+			 * it should be set according to the dcbx_enabled
+ * indication, but we only got here if there was an
+ * app tlv for the protocol, so dcbx must be enabled.
+ */
+ enable = !!(type == DCBX_PROTOCOL_ETH);
+
+ qed_dcbx_update_app_info(p_data, p_hwfn, enable, true,
+ priority, tc, type);
+ }
+ }
+
+	/* If the RoCE-V2 TLV is not detected, the driver needs to use the
+	 * RoCE app data for RoCE-v2, not the default app data.
+ */
+ if (!p_data->arr[DCBX_PROTOCOL_ROCE_V2].update &&
+ p_data->arr[DCBX_PROTOCOL_ROCE].update) {
+ tc = p_data->arr[DCBX_PROTOCOL_ROCE].tc;
+ priority = p_data->arr[DCBX_PROTOCOL_ROCE].priority;
+ qed_dcbx_update_app_info(p_data, p_hwfn, true, true,
+ priority, tc, DCBX_PROTOCOL_ROCE_V2);
+ }
+
+ /* Update ramrod protocol data and hw_info fields
+ * with default info when corresponding APP TLV's are not detected.
+	 * The enabled field follows different logic for ethernet, since only
+	 * for ethernet should dcb be disabled by default, as the information arrives
+ * from the OS (unless an explicit app tlv was present).
+ */
+ tc = p_data->arr[DCBX_PROTOCOL_ETH].tc;
+ priority = p_data->arr[DCBX_PROTOCOL_ETH].priority;
+ for (type = 0; type < DCBX_MAX_PROTOCOL_TYPE; type++) {
+ if (p_data->arr[type].update)
+ continue;
+
+ enable = (type == DCBX_PROTOCOL_ETH) ? false : dcbx_enabled;
+ qed_dcbx_update_app_info(p_data, p_hwfn, enable, true,
+ priority, tc, type);
+ }
+
+ return 0;
+}
+
+/* Parse app TLV's to update TC information in hw_info structure for
+ * reconfiguring QM. Get protocol specific data for PF update ramrod command.
+ */
+static int qed_dcbx_process_mib_info(struct qed_hwfn *p_hwfn)
+{
+ struct dcbx_app_priority_feature *p_app;
+ struct dcbx_app_priority_entry *p_tbl;
+ struct qed_dcbx_results data = { 0 };
+ struct dcbx_ets_feature *p_ets;
+ struct qed_hw_info *p_info;
+ u32 pri_tc_tbl, flags;
+ bool dcbx_enabled;
+ int num_entries;
+ int rc = 0;
+
+	/* If the DCBx version is non-zero, then negotiation was
+	 * successfully performed
+ */
+ flags = p_hwfn->p_dcbx_info->operational.flags;
+ dcbx_enabled = !!QED_MFW_GET_FIELD(flags, DCBX_CONFIG_VERSION);
+
+ p_app = &p_hwfn->p_dcbx_info->operational.features.app;
+ p_tbl = p_app->app_pri_tbl;
+
+ p_ets = &p_hwfn->p_dcbx_info->operational.features.ets;
+ pri_tc_tbl = p_ets->pri_tc_tbl[0];
+
+ p_info = &p_hwfn->hw_info;
+ num_entries = QED_MFW_GET_FIELD(p_app->flags, DCBX_APP_NUM_ENTRIES);
+
+ rc = qed_dcbx_process_tlv(p_hwfn, &data, p_tbl, pri_tc_tbl,
+ num_entries, dcbx_enabled);
+ if (rc)
+ return rc;
+
+ p_info->num_tc = QED_MFW_GET_FIELD(p_ets->flags, DCBX_ETS_MAX_TCS);
+ data.pf_id = p_hwfn->rel_pf_id;
+ data.dcbx_enabled = dcbx_enabled;
+
+ qed_dcbx_dp_protocol(p_hwfn, &data);
+
+ memcpy(&p_hwfn->p_dcbx_info->results, &data,
+ sizeof(struct qed_dcbx_results));
+
+ return 0;
+}
+
+static int
+qed_dcbx_copy_mib(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_dcbx_mib_meta_data *p_data,
+ enum qed_mib_read_type type)
+{
+ u32 prefix_seq_num, suffix_seq_num;
+ int read_count = 0;
+ int rc = 0;
+
+ /* The data is considered to be valid only if both sequence numbers are
+ * the same.
+ */
+ do {
+ if (type == QED_DCBX_REMOTE_LLDP_MIB) {
+ qed_memcpy_from(p_hwfn, p_ptt, p_data->lldp_remote,
+ p_data->addr, p_data->size);
+ prefix_seq_num = p_data->lldp_remote->prefix_seq_num;
+ suffix_seq_num = p_data->lldp_remote->suffix_seq_num;
+ } else {
+ qed_memcpy_from(p_hwfn, p_ptt, p_data->mib,
+ p_data->addr, p_data->size);
+ prefix_seq_num = p_data->mib->prefix_seq_num;
+ suffix_seq_num = p_data->mib->suffix_seq_num;
+ }
+ read_count++;
+
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_DCB,
+ "mib type = %d, try count = %d prefix seq num = %d suffix seq num = %d\n",
+ type, read_count, prefix_seq_num, suffix_seq_num);
+ } while ((prefix_seq_num != suffix_seq_num) &&
+ (read_count < QED_DCBX_MAX_MIB_READ_TRY));
+
+ if (read_count >= QED_DCBX_MAX_MIB_READ_TRY) {
+ DP_ERR(p_hwfn,
+ "MIB read err, mib type = %d, try count = %d prefix seq num = %d suffix seq num = %d\n",
+ type, read_count, prefix_seq_num, suffix_seq_num);
+ rc = -EIO;
+ }
+
+ return rc;
+}
+
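
Editor's aside — the prefix/suffix sequence numbers above implement a consistency check against the management firmware rewriting the shared block mid-copy; a stripped-down sketch of the same retry pattern (struct example_mib and example_read_shared() are hypothetical stand-ins):

#include <linux/errno.h>
#include <linux/types.h>

#define EX_MAX_READ_TRY 100

struct example_mib {
	u32 prefix_seq_num;
	u32 suffix_seq_num;
	/* ... payload ... */
};

void example_read_shared(struct example_mib *dst);	/* hypothetical copy */

/* Re-read until both sequence counters match, i.e. the producer did not
 * rewrite the block while we were copying it; give up after a bounded
 * number of attempts.
 */
static int example_copy_consistent(struct example_mib *dst)
{
	int tries = 0;

	do {
		example_read_shared(dst);
		tries++;
	} while (dst->prefix_seq_num != dst->suffix_seq_num &&
		 tries < EX_MAX_READ_TRY);

	return tries < EX_MAX_READ_TRY ? 0 : -EIO;
}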
+static int
+qed_dcbx_read_local_lldp_mib(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+{
+ struct qed_dcbx_mib_meta_data data;
+ int rc = 0;
+
+ memset(&data, 0, sizeof(data));
+ data.addr = p_hwfn->mcp_info->port_addr + offsetof(struct public_port,
+ lldp_config_params);
+ data.lldp_local = p_hwfn->p_dcbx_info->lldp_local;
+ data.size = sizeof(struct lldp_config_params_s);
+ qed_memcpy_from(p_hwfn, p_ptt, data.lldp_local, data.addr, data.size);
+
+ return rc;
+}
+
+static int
+qed_dcbx_read_remote_lldp_mib(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ enum qed_mib_read_type type)
+{
+ struct qed_dcbx_mib_meta_data data;
+ int rc = 0;
+
+ memset(&data, 0, sizeof(data));
+ data.addr = p_hwfn->mcp_info->port_addr + offsetof(struct public_port,
+ lldp_status_params);
+ data.lldp_remote = p_hwfn->p_dcbx_info->lldp_remote;
+ data.size = sizeof(struct lldp_status_params_s);
+ rc = qed_dcbx_copy_mib(p_hwfn, p_ptt, &data, type);
+
+ return rc;
+}
+
+static int
+qed_dcbx_read_operational_mib(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ enum qed_mib_read_type type)
+{
+ struct qed_dcbx_mib_meta_data data;
+ int rc = 0;
+
+ memset(&data, 0, sizeof(data));
+ data.addr = p_hwfn->mcp_info->port_addr +
+ offsetof(struct public_port, operational_dcbx_mib);
+ data.mib = &p_hwfn->p_dcbx_info->operational;
+ data.size = sizeof(struct dcbx_mib);
+ rc = qed_dcbx_copy_mib(p_hwfn, p_ptt, &data, type);
+
+ return rc;
+}
+
+static int
+qed_dcbx_read_remote_mib(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt, enum qed_mib_read_type type)
+{
+ struct qed_dcbx_mib_meta_data data;
+ int rc = 0;
+
+ memset(&data, 0, sizeof(data));
+ data.addr = p_hwfn->mcp_info->port_addr +
+ offsetof(struct public_port, remote_dcbx_mib);
+ data.mib = &p_hwfn->p_dcbx_info->remote;
+ data.size = sizeof(struct dcbx_mib);
+ rc = qed_dcbx_copy_mib(p_hwfn, p_ptt, &data, type);
+
+ return rc;
+}
+
+static int
+qed_dcbx_read_local_mib(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+{
+ struct qed_dcbx_mib_meta_data data;
+ int rc = 0;
+
+ memset(&data, 0, sizeof(data));
+ data.addr = p_hwfn->mcp_info->port_addr +
+ offsetof(struct public_port, local_admin_dcbx_mib);
+ data.local_admin = &p_hwfn->p_dcbx_info->local_admin;
+ data.size = sizeof(struct dcbx_local_params);
+ qed_memcpy_from(p_hwfn, p_ptt, data.local_admin, data.addr, data.size);
+
+ return rc;
+}
+
+static int qed_dcbx_read_mib(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt, enum qed_mib_read_type type)
+{
+ int rc = -EINVAL;
+
+ switch (type) {
+ case QED_DCBX_OPERATIONAL_MIB:
+ rc = qed_dcbx_read_operational_mib(p_hwfn, p_ptt, type);
+ break;
+ case QED_DCBX_REMOTE_MIB:
+ rc = qed_dcbx_read_remote_mib(p_hwfn, p_ptt, type);
+ break;
+ case QED_DCBX_LOCAL_MIB:
+ rc = qed_dcbx_read_local_mib(p_hwfn, p_ptt);
+ break;
+ case QED_DCBX_REMOTE_LLDP_MIB:
+ rc = qed_dcbx_read_remote_lldp_mib(p_hwfn, p_ptt, type);
+ break;
+ case QED_DCBX_LOCAL_LLDP_MIB:
+ rc = qed_dcbx_read_local_lldp_mib(p_hwfn, p_ptt);
+ break;
+ default:
+ DP_ERR(p_hwfn, "MIB read err, unknown mib type %d\n", type);
+ }
+
+ return rc;
+}
+
+/* Read updated MIB.
+ * Reconfigure QM and invoke PF update ramrod command if operational MIB
+ * change is detected.
+ */
+int
+qed_dcbx_mib_update_event(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt, enum qed_mib_read_type type)
+{
+ int rc = 0;
+
+ rc = qed_dcbx_read_mib(p_hwfn, p_ptt, type);
+ if (rc)
+ return rc;
+
+ if (type == QED_DCBX_OPERATIONAL_MIB) {
+ rc = qed_dcbx_process_mib_info(p_hwfn);
+ if (!rc) {
+ /* reconfigure tcs of QM queues according
+ * to negotiation results
+ */
+ qed_qm_reconf(p_hwfn, p_ptt);
+
+ /* update storm FW with negotiation results */
+ qed_sp_pf_update(p_hwfn);
+ }
+ }
+
+ return rc;
+}
+
+int qed_dcbx_info_alloc(struct qed_hwfn *p_hwfn)
+{
+ int rc = 0;
+
+ p_hwfn->p_dcbx_info = kzalloc(sizeof(*p_hwfn->p_dcbx_info), GFP_KERNEL);
+ if (!p_hwfn->p_dcbx_info) {
+ DP_NOTICE(p_hwfn,
+ "Failed to allocate 'struct qed_dcbx_info'\n");
+ rc = -ENOMEM;
+ }
+
+ return rc;
+}
+
+void qed_dcbx_info_free(struct qed_hwfn *p_hwfn,
+ struct qed_dcbx_info *p_dcbx_info)
+{
+ kfree(p_hwfn->p_dcbx_info);
+}
+
+static void qed_dcbx_update_protocol_data(struct protocol_dcb_data *p_data,
+ struct qed_dcbx_results *p_src,
+ enum dcbx_protocol_type type)
+{
+ p_data->dcb_enable_flag = p_src->arr[type].enable;
+ p_data->dcb_priority = p_src->arr[type].priority;
+ p_data->dcb_tc = p_src->arr[type].tc;
+}
+
+/* Set pf update ramrod command params */
+void qed_dcbx_set_pf_update_params(struct qed_dcbx_results *p_src,
+ struct pf_update_ramrod_data *p_dest)
+{
+ struct protocol_dcb_data *p_dcb_data;
+ bool update_flag = false;
+
+ p_dest->pf_id = p_src->pf_id;
+
+ update_flag = p_src->arr[DCBX_PROTOCOL_FCOE].update;
+ p_dest->update_fcoe_dcb_data_flag = update_flag;
+
+ update_flag = p_src->arr[DCBX_PROTOCOL_ROCE].update;
+ p_dest->update_roce_dcb_data_flag = update_flag;
+ update_flag = p_src->arr[DCBX_PROTOCOL_ROCE_V2].update;
+ p_dest->update_roce_dcb_data_flag = update_flag;
+
+ update_flag = p_src->arr[DCBX_PROTOCOL_ISCSI].update;
+ p_dest->update_iscsi_dcb_data_flag = update_flag;
+ update_flag = p_src->arr[DCBX_PROTOCOL_ETH].update;
+ p_dest->update_eth_dcb_data_flag = update_flag;
+
+ p_dcb_data = &p_dest->fcoe_dcb_data;
+ qed_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_FCOE);
+ p_dcb_data = &p_dest->roce_dcb_data;
+
+ if (p_src->arr[DCBX_PROTOCOL_ROCE].update)
+ qed_dcbx_update_protocol_data(p_dcb_data, p_src,
+ DCBX_PROTOCOL_ROCE);
+ if (p_src->arr[DCBX_PROTOCOL_ROCE_V2].update)
+ qed_dcbx_update_protocol_data(p_dcb_data, p_src,
+ DCBX_PROTOCOL_ROCE_V2);
+
+ p_dcb_data = &p_dest->iscsi_dcb_data;
+ qed_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_ISCSI);
+ p_dcb_data = &p_dest->eth_dcb_data;
+ qed_dcbx_update_protocol_data(p_dcb_data, p_src, DCBX_PROTOCOL_ETH);
+}
diff --git a/drivers/net/ethernet/qlogic/qed/qed_dcbx.h b/drivers/net/ethernet/qlogic/qed/qed_dcbx.h
new file mode 100644
index 000000000000..e7f834dbda2d
--- /dev/null
+++ b/drivers/net/ethernet/qlogic/qed/qed_dcbx.h
@@ -0,0 +1,80 @@
+/* QLogic qed NIC Driver
+ * Copyright (c) 2015 QLogic Corporation
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#ifndef _QED_DCBX_H
+#define _QED_DCBX_H
+#include <linux/types.h>
+#include <linux/slab.h>
+#include "qed.h"
+#include "qed_hsi.h"
+#include "qed_hw.h"
+#include "qed_mcp.h"
+#include "qed_reg_addr.h"
+
+#define DCBX_CONFIG_MAX_APP_PROTOCOL 4
+
+enum qed_mib_read_type {
+ QED_DCBX_OPERATIONAL_MIB,
+ QED_DCBX_REMOTE_MIB,
+ QED_DCBX_LOCAL_MIB,
+ QED_DCBX_REMOTE_LLDP_MIB,
+ QED_DCBX_LOCAL_LLDP_MIB
+};
+
+struct qed_dcbx_app_data {
+ bool enable; /* DCB enabled */
+ bool update; /* Update indication */
+ u8 priority; /* Priority */
+ u8 tc; /* Traffic Class */
+};
+
+struct qed_dcbx_results {
+ bool dcbx_enabled;
+ u8 pf_id;
+ struct qed_dcbx_app_data arr[DCBX_MAX_PROTOCOL_TYPE];
+};
+
+struct qed_dcbx_app_metadata {
+ enum dcbx_protocol_type id;
+ char *name;
+ enum qed_pci_personality personality;
+};
+
+#define QED_MFW_GET_FIELD(name, field) \
+ (((name) & (field ## _MASK)) >> (field ## _SHIFT))
+
+struct qed_dcbx_info {
+ struct lldp_status_params_s lldp_remote[LLDP_MAX_LLDP_AGENTS];
+ struct lldp_config_params_s lldp_local[LLDP_MAX_LLDP_AGENTS];
+ struct dcbx_local_params local_admin;
+ struct qed_dcbx_results results;
+ struct dcbx_mib operational;
+ struct dcbx_mib remote;
+ u8 dcbx_cap;
+};
+
+struct qed_dcbx_mib_meta_data {
+ struct lldp_config_params_s *lldp_local;
+ struct lldp_status_params_s *lldp_remote;
+ struct dcbx_local_params *local_admin;
+ struct dcbx_mib *mib;
+ size_t size;
+ u32 addr;
+};
+
+/* QED local interface routines */
+int
+qed_dcbx_mib_update_event(struct qed_hwfn *,
+ struct qed_ptt *, enum qed_mib_read_type);
+
+int qed_dcbx_info_alloc(struct qed_hwfn *p_hwfn);
+void qed_dcbx_info_free(struct qed_hwfn *, struct qed_dcbx_info *);
+void qed_dcbx_set_pf_update_params(struct qed_dcbx_results *p_src,
+ struct pf_update_ramrod_data *p_dest);
+
+#endif
diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev.c b/drivers/net/ethernet/qlogic/qed/qed_dev.c
index b7d100f6bd6f..089016f46f26 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_dev.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_dev.c
@@ -22,6 +22,7 @@
#include <linux/qed/qed_if.h>
#include "qed.h"
#include "qed_cxt.h"
+#include "qed_dcbx.h"
#include "qed_dev_api.h"
#include "qed_hsi.h"
#include "qed_hw.h"
@@ -30,6 +31,11 @@
#include "qed_mcp.h"
#include "qed_reg_addr.h"
#include "qed_sp.h"
+#include "qed_sriov.h"
+#include "qed_vf.h"
+
+static spinlock_t qm_lock;
+static bool qm_lock_init = false;
/* API common to all protocols */
enum BAR_ID {
@@ -40,10 +46,14 @@ enum BAR_ID {
static u32 qed_hw_bar_size(struct qed_hwfn *p_hwfn,
enum BAR_ID bar_id)
{
- u32 bar_reg = (bar_id == BAR_ID_0 ?
- PGLUE_B_REG_PF_BAR0_SIZE : PGLUE_B_REG_PF_BAR1_SIZE);
- u32 val = qed_rd(p_hwfn, p_hwfn->p_main_ptt, bar_reg);
+ u32 bar_reg = (bar_id == BAR_ID_0 ?
+ PGLUE_B_REG_PF_BAR0_SIZE : PGLUE_B_REG_PF_BAR1_SIZE);
+ u32 val;
+
+ if (IS_VF(p_hwfn->cdev))
+ return 1 << 17;
+ val = qed_rd(p_hwfn, p_hwfn->p_main_ptt, bar_reg);
if (val)
return 1 << (val + 15);
@@ -105,12 +115,17 @@ static void qed_qm_info_free(struct qed_hwfn *p_hwfn)
qm_info->qm_vport_params = NULL;
kfree(qm_info->qm_port_params);
qm_info->qm_port_params = NULL;
+ kfree(qm_info->wfq_data);
+ qm_info->wfq_data = NULL;
}
void qed_resc_free(struct qed_dev *cdev)
{
int i;
+ if (IS_VF(cdev))
+ return;
+
kfree(cdev->fw_data);
cdev->fw_data = NULL;
@@ -134,20 +149,27 @@ void qed_resc_free(struct qed_dev *cdev)
qed_eq_free(p_hwfn, p_hwfn->p_eq);
qed_consq_free(p_hwfn, p_hwfn->p_consq);
qed_int_free(p_hwfn);
+ qed_iov_free(p_hwfn);
qed_dmae_info_free(p_hwfn);
+ qed_dcbx_info_free(p_hwfn, p_hwfn->p_dcbx_info);
}
}
static int qed_init_qm_info(struct qed_hwfn *p_hwfn)
{
+ u8 num_vports, vf_offset = 0, i, vport_id, num_ports, curr_queue = 0;
struct qed_qm_info *qm_info = &p_hwfn->qm_info;
struct init_qm_port_params *p_qm_port;
- u8 num_vports, i, vport_id, num_ports;
u16 num_pqs, multi_cos_tcs = 1;
+ u16 num_vfs = 0;
+#ifdef CONFIG_QED_SRIOV
+ if (p_hwfn->cdev->p_iov_info)
+ num_vfs = p_hwfn->cdev->p_iov_info->total_vfs;
+#endif
memset(qm_info, 0, sizeof(*qm_info));
- num_pqs = multi_cos_tcs + 1; /* The '1' is for pure-LB */
+ num_pqs = multi_cos_tcs + num_vfs + 1; /* The '1' is for pure-LB */
num_vports = (u8)RESC_NUM(p_hwfn, QED_VPORT);
/* Sanity checking that setup requires legal number of resources */
@@ -175,25 +197,50 @@ static int qed_init_qm_info(struct qed_hwfn *p_hwfn)
if (!qm_info->qm_port_params)
goto alloc_err;
+ qm_info->wfq_data = kcalloc(num_vports, sizeof(*qm_info->wfq_data),
+ GFP_KERNEL);
+ if (!qm_info->wfq_data)
+ goto alloc_err;
+
vport_id = (u8)RESC_START(p_hwfn, QED_VPORT);
/* First init per-TC PQs */
for (i = 0; i < multi_cos_tcs; i++) {
- struct init_qm_pq_params *params = &qm_info->qm_pq_params[i];
-
- params->vport_id = vport_id;
- params->tc_id = p_hwfn->hw_info.non_offload_tc;
- params->wrr_group = 1;
+ struct init_qm_pq_params *params =
+ &qm_info->qm_pq_params[curr_queue++];
+
+ if (p_hwfn->hw_info.personality == QED_PCI_ETH) {
+ params->vport_id = vport_id;
+ params->tc_id = p_hwfn->hw_info.non_offload_tc;
+ params->wrr_group = 1;
+ } else {
+ params->vport_id = vport_id;
+ params->tc_id = p_hwfn->hw_info.offload_tc;
+ params->wrr_group = 1;
+ }
}
/* Then init pure-LB PQ */
- qm_info->pure_lb_pq = i;
- qm_info->qm_pq_params[i].vport_id = (u8)RESC_START(p_hwfn, QED_VPORT);
- qm_info->qm_pq_params[i].tc_id = PURE_LB_TC;
- qm_info->qm_pq_params[i].wrr_group = 1;
- i++;
+ qm_info->pure_lb_pq = curr_queue;
+ qm_info->qm_pq_params[curr_queue].vport_id =
+ (u8) RESC_START(p_hwfn, QED_VPORT);
+ qm_info->qm_pq_params[curr_queue].tc_id = PURE_LB_TC;
+ qm_info->qm_pq_params[curr_queue].wrr_group = 1;
+ curr_queue++;
qm_info->offload_pq = 0;
+ /* Then init per-VF PQs */
+ vf_offset = curr_queue;
+ for (i = 0; i < num_vfs; i++) {
+ /* First vport is used by the PF */
+ qm_info->qm_pq_params[curr_queue].vport_id = vport_id + i + 1;
+ qm_info->qm_pq_params[curr_queue].tc_id =
+ p_hwfn->hw_info.non_offload_tc;
+ qm_info->qm_pq_params[curr_queue].wrr_group = 1;
+ curr_queue++;
+ }
+
+ qm_info->vf_queues_offset = vf_offset;
qm_info->num_pqs = num_pqs;
qm_info->num_vports = num_vports;
@@ -211,29 +258,91 @@ static int qed_init_qm_info(struct qed_hwfn *p_hwfn)
qm_info->start_pq = (u16)RESC_START(p_hwfn, QED_PQ);
- qm_info->start_vport = (u8)RESC_START(p_hwfn, QED_VPORT);
+ qm_info->num_vf_pqs = num_vfs;
+ qm_info->start_vport = (u8) RESC_START(p_hwfn, QED_VPORT);
+
+ for (i = 0; i < qm_info->num_vports; i++)
+ qm_info->qm_vport_params[i].vport_wfq = 1;
qm_info->pf_wfq = 0;
qm_info->pf_rl = 0;
qm_info->vport_rl_en = 1;
+ qm_info->vport_wfq_en = 1;
return 0;
alloc_err:
DP_NOTICE(p_hwfn, "Failed to allocate memory for QM params\n");
- kfree(qm_info->qm_pq_params);
- kfree(qm_info->qm_vport_params);
- kfree(qm_info->qm_port_params);
-
+ qed_qm_info_free(p_hwfn);
return -ENOMEM;
}
+/* This function reconfigures the QM pf on the fly.
+ * For this purpose we:
+ * 1. reconfigure the QM database
+ * 2. set new values to the runtime array
+ * 3. send an sdm_qm_cmd through the rbc interface to stop the QM
+ * 4. activate init tool in QM_PF stage
+ * 5. send an sdm_qm_cmd through rbc interface to release the QM
+ */
+int qed_qm_reconf(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+{
+ struct qed_qm_info *qm_info = &p_hwfn->qm_info;
+ bool b_rc;
+ int rc;
+
+ /* qm_info is allocated in qed_init_qm_info() which is already called
+	 * from qed_resc_alloc() or a previous call of qed_qm_reconf().
+	 * The allocated size may change each init, so we free it before the next
+ * allocation.
+ */
+ qed_qm_info_free(p_hwfn);
+
+ /* initialize qed's qm data structure */
+ rc = qed_init_qm_info(p_hwfn);
+ if (rc)
+ return rc;
+
+ /* stop PF's qm queues */
+ spin_lock_bh(&qm_lock);
+ b_rc = qed_send_qm_stop_cmd(p_hwfn, p_ptt, false, true,
+ qm_info->start_pq, qm_info->num_pqs);
+ spin_unlock_bh(&qm_lock);
+ if (!b_rc)
+ return -EINVAL;
+
+ /* clear the QM_PF runtime phase leftovers from previous init */
+ qed_init_clear_rt_data(p_hwfn);
+
+ /* prepare QM portion of runtime array */
+ qed_qm_init_pf(p_hwfn);
+
+ /* activate init tool on runtime array */
+ rc = qed_init_run(p_hwfn, p_ptt, PHASE_QM_PF, p_hwfn->rel_pf_id,
+ p_hwfn->hw_info.hw_mode);
+ if (rc)
+ return rc;
+
+ /* start PF's qm queues */
+ spin_lock_bh(&qm_lock);
+ b_rc = qed_send_qm_stop_cmd(p_hwfn, p_ptt, true, true,
+ qm_info->start_pq, qm_info->num_pqs);
+ spin_unlock_bh(&qm_lock);
+ if (!b_rc)
+ return -EINVAL;
+
+ return 0;
+}
+
int qed_resc_alloc(struct qed_dev *cdev)
{
struct qed_consq *p_consq;
struct qed_eq *p_eq;
int i, rc = 0;
+ if (IS_VF(cdev))
+ return rc;
+
cdev->fw_data = kzalloc(sizeof(*cdev->fw_data), GFP_KERNEL);
if (!cdev->fw_data)
return -ENOMEM;
@@ -308,6 +417,10 @@ int qed_resc_alloc(struct qed_dev *cdev)
if (rc)
goto alloc_err;
+ rc = qed_iov_alloc(p_hwfn);
+ if (rc)
+ goto alloc_err;
+
/* EQ */
p_eq = qed_eq_alloc(p_hwfn, 256);
if (!p_eq) {
@@ -330,6 +443,14 @@ int qed_resc_alloc(struct qed_dev *cdev)
"Failed to allocate memory for dmae_info structure\n");
goto alloc_err;
}
+
+ /* DCBX initialization */
+ rc = qed_dcbx_info_alloc(p_hwfn);
+ if (rc) {
+ DP_NOTICE(p_hwfn,
+ "Failed to allocate memory for dcbx structure\n");
+ goto alloc_err;
+ }
}
cdev->reset_stats = kzalloc(sizeof(*cdev->reset_stats), GFP_KERNEL);
@@ -350,6 +471,9 @@ void qed_resc_setup(struct qed_dev *cdev)
{
int i;
+ if (IS_VF(cdev))
+ return;
+
for_each_hwfn(cdev, i) {
struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
@@ -365,14 +489,15 @@ void qed_resc_setup(struct qed_dev *cdev)
p_hwfn->mcp_info->mfw_mb_length);
qed_int_setup(p_hwfn, p_hwfn->p_main_ptt);
+
+ qed_iov_setup(p_hwfn, p_hwfn->p_main_ptt);
}
}
#define FINAL_CLEANUP_POLL_CNT (100)
#define FINAL_CLEANUP_POLL_TIME (10)
int qed_final_cleanup(struct qed_hwfn *p_hwfn,
- struct qed_ptt *p_ptt,
- u16 id)
+ struct qed_ptt *p_ptt, u16 id, bool is_vf)
{
u32 command = 0, addr, count = FINAL_CLEANUP_POLL_CNT;
int rc = -EBUSY;
@@ -380,6 +505,9 @@ int qed_final_cleanup(struct qed_hwfn *p_hwfn,
addr = GTT_BAR0_MAP_REG_USDM_RAM +
USTORM_FLR_FINAL_ACK_OFFSET(p_hwfn->rel_pf_id);
+ if (is_vf)
+ id += 0x10;
+
command |= X_FINAL_CLEANUP_AGG_INT <<
SDM_AGG_INT_COMP_PARAMS_AGG_INT_INDEX_SHIFT;
command |= 1 << SDM_AGG_INT_COMP_PARAMS_AGG_VECTOR_ENABLE_SHIFT;
@@ -492,7 +620,9 @@ static int qed_hw_init_common(struct qed_hwfn *p_hwfn,
struct qed_qm_info *qm_info = &p_hwfn->qm_info;
struct qed_qm_common_rt_init_params params;
struct qed_dev *cdev = p_hwfn->cdev;
+ u32 concrete_fid;
int rc = 0;
+ u8 vf_id;
qed_init_cau_rt_data(cdev);
@@ -542,6 +672,14 @@ static int qed_hw_init_common(struct qed_hwfn *p_hwfn,
qed_wr(p_hwfn, p_ptt, 0x20b4,
qed_rd(p_hwfn, p_ptt, 0x20b4) & ~0x10);
+ for (vf_id = 0; vf_id < MAX_NUM_VFS_BB; vf_id++) {
+ concrete_fid = qed_vfid_to_concrete(p_hwfn, vf_id);
+ qed_fid_pretend(p_hwfn, p_ptt, (u16) concrete_fid);
+ qed_wr(p_hwfn, p_ptt, CCFC_REG_STRONG_ENABLE_VF, 0x1);
+ }
+ /* pretend to original PF */
+ qed_fid_pretend(p_hwfn, p_ptt, p_hwfn->rel_pf_id);
+
return rc;
}
@@ -558,6 +696,7 @@ static int qed_hw_init_port(struct qed_hwfn *p_hwfn,
static int qed_hw_init_pf(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt,
+ struct qed_tunn_start_params *p_tunn,
int hw_mode,
bool b_hw_start,
enum qed_int_mode int_mode,
@@ -574,7 +713,7 @@ static int qed_hw_init_pf(struct qed_hwfn *p_hwfn,
p_hwfn->qm_info.pf_wfq = p_info->bandwidth_min;
/* Update rate limit once we'll actually have a link */
- p_hwfn->qm_info.pf_rl = 100;
+ p_hwfn->qm_info.pf_rl = 100000;
}
qed_cxt_hw_init_pf(p_hwfn);
@@ -603,7 +742,7 @@ static int qed_hw_init_pf(struct qed_hwfn *p_hwfn,
STORE_RT_REG(p_hwfn, PRS_REG_SEARCH_ROCE_RT_OFFSET, 0);
/* Cleanup chip from previous driver if such remains exist */
- rc = qed_final_cleanup(p_hwfn, p_ptt, rel_pf_id);
+ rc = qed_final_cleanup(p_hwfn, p_ptt, rel_pf_id, false);
if (rc != 0)
return rc;
@@ -625,7 +764,8 @@ static int qed_hw_init_pf(struct qed_hwfn *p_hwfn,
qed_int_igu_enable(p_hwfn, p_ptt, int_mode);
/* send function start command */
- rc = qed_sp_pf_start(p_hwfn, p_hwfn->cdev->mf_mode);
+ rc = qed_sp_pf_start(p_hwfn, p_tunn, p_hwfn->cdev->mf_mode,
+ allow_npar_tx_switch);
if (rc)
DP_NOTICE(p_hwfn, "Function start ramrod failed\n");
}
@@ -672,6 +812,7 @@ static void qed_reset_mb_shadow(struct qed_hwfn *p_hwfn,
}
int qed_hw_init(struct qed_dev *cdev,
+ struct qed_tunn_start_params *p_tunn,
bool b_hw_start,
enum qed_int_mode int_mode,
bool allow_npar_tx_switch,
@@ -680,13 +821,20 @@ int qed_hw_init(struct qed_dev *cdev,
u32 load_code, param;
int rc, mfw_rc, i;
- rc = qed_init_fw_data(cdev, bin_fw_data);
- if (rc != 0)
- return rc;
+ if (IS_PF(cdev)) {
+ rc = qed_init_fw_data(cdev, bin_fw_data);
+ if (rc != 0)
+ return rc;
+ }
for_each_hwfn(cdev, i) {
struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
+ if (IS_VF(cdev)) {
+ p_hwfn->b_int_enabled = 1;
+ continue;
+ }
+
/* Enable DMAE in PXP */
rc = qed_change_pci_hwfn(p_hwfn, p_hwfn->p_main_ptt, true);
@@ -708,6 +856,11 @@ int qed_hw_init(struct qed_dev *cdev,
p_hwfn->first_on_engine = (load_code ==
FW_MSG_CODE_DRV_LOAD_ENGINE);
+ if (!qm_lock_init) {
+ spin_lock_init(&qm_lock);
+ qm_lock_init = true;
+ }
+
switch (load_code) {
case FW_MSG_CODE_DRV_LOAD_ENGINE:
rc = qed_hw_init_common(p_hwfn, p_hwfn->p_main_ptt,
@@ -724,7 +877,7 @@ int qed_hw_init(struct qed_dev *cdev,
/* Fall into */
case FW_MSG_CODE_DRV_LOAD_FUNCTION:
rc = qed_hw_init_pf(p_hwfn, p_hwfn->p_main_ptt,
- p_hwfn->hw_info.hw_mode,
+ p_tunn, p_hwfn->hw_info.hw_mode,
b_hw_start, int_mode,
allow_npar_tx_switch);
break;
@@ -749,6 +902,20 @@ int qed_hw_init(struct qed_dev *cdev,
return mfw_rc;
}
+ /* send DCBX attention request command */
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_DCB,
+ "sending phony dcbx set command to trigger DCBx attention handling\n");
+ mfw_rc = qed_mcp_cmd(p_hwfn, p_hwfn->p_main_ptt,
+ DRV_MSG_CODE_SET_DCBX,
+ 1 << DRV_MB_PARAM_DCBX_NOTIFY_SHIFT,
+ &load_code, &param);
+ if (mfw_rc) {
+ DP_NOTICE(p_hwfn,
+ "Failed to send DCBX attention request\n");
+ return mfw_rc;
+ }
+
p_hwfn->hw_init_done = true;
}
@@ -811,6 +978,11 @@ int qed_hw_stop(struct qed_dev *cdev)
DP_VERBOSE(p_hwfn, NETIF_MSG_IFDOWN, "Stopping hw/fw\n");
+ if (IS_VF(cdev)) {
+ qed_vf_pf_int_cleanup(p_hwfn);
+ continue;
+ }
+
/* mark the hw as uninitialized... */
p_hwfn->hw_init_done = false;
@@ -842,15 +1014,16 @@ int qed_hw_stop(struct qed_dev *cdev)
usleep_range(1000, 2000);
}
- /* Disable DMAE in PXP - in CMT, this should only be done for
- * first hw-function, and only after all transactions have
- * stopped for all active hw-functions.
- */
- t_rc = qed_change_pci_hwfn(&cdev->hwfns[0],
- cdev->hwfns[0].p_main_ptt,
- false);
- if (t_rc != 0)
- rc = t_rc;
+ if (IS_PF(cdev)) {
+ /* Disable DMAE in PXP - in CMT, this should only be done for
+ * first hw-function, and only after all transactions have
+ * stopped for all active hw-functions.
+ */
+ t_rc = qed_change_pci_hwfn(&cdev->hwfns[0],
+ cdev->hwfns[0].p_main_ptt, false);
+ if (t_rc != 0)
+ rc = t_rc;
+ }
return rc;
}
@@ -861,7 +1034,12 @@ void qed_hw_stop_fastpath(struct qed_dev *cdev)
for_each_hwfn(cdev, j) {
struct qed_hwfn *p_hwfn = &cdev->hwfns[j];
- struct qed_ptt *p_ptt = p_hwfn->p_main_ptt;
+ struct qed_ptt *p_ptt = p_hwfn->p_main_ptt;
+
+ if (IS_VF(cdev)) {
+ qed_vf_pf_int_cleanup(p_hwfn);
+ continue;
+ }
DP_VERBOSE(p_hwfn,
NETIF_MSG_IFDOWN,
@@ -885,6 +1063,9 @@ void qed_hw_stop_fastpath(struct qed_dev *cdev)
void qed_hw_start_fastpath(struct qed_hwfn *p_hwfn)
{
+ if (IS_VF(p_hwfn->cdev))
+ return;
+
/* Re-open incoming traffic */
qed_wr(p_hwfn, p_hwfn->p_main_ptt,
NIG_REG_RX_LLH_BRB_GATE_DNTFWD_PERPF, 0x0);
@@ -914,6 +1095,13 @@ int qed_hw_reset(struct qed_dev *cdev)
for_each_hwfn(cdev, i) {
struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
+ if (IS_VF(cdev)) {
+ rc = qed_vf_pf_reset(p_hwfn);
+ if (rc)
+ return rc;
+ continue;
+ }
+
DP_VERBOSE(p_hwfn, NETIF_MSG_IFDOWN, "Resetting hw/fw\n");
/* Check for incorrect states */
@@ -1009,13 +1197,19 @@ static void qed_hw_set_feat(struct qed_hwfn *p_hwfn)
static void qed_hw_get_resc(struct qed_hwfn *p_hwfn)
{
u32 *resc_start = p_hwfn->hw_info.resc_start;
+ u8 num_funcs = p_hwfn->num_funcs_on_engine;
u32 *resc_num = p_hwfn->hw_info.resc_num;
struct qed_sb_cnt_info sb_cnt_info;
- int num_funcs, i;
-
- num_funcs = MAX_NUM_PFS_BB;
+ int i, max_vf_vlan_filters;
memset(&sb_cnt_info, 0, sizeof(sb_cnt_info));
+
+#ifdef CONFIG_QED_SRIOV
+ max_vf_vlan_filters = QED_ETH_MAX_VF_NUM_VLAN_FILTERS;
+#else
+ max_vf_vlan_filters = 0;
+#endif
+
qed_int_get_num_sbs(p_hwfn, &sb_cnt_info);
resc_num[QED_SB] = min_t(u32,
@@ -1220,6 +1414,51 @@ static int qed_hw_get_nvm_info(struct qed_hwfn *p_hwfn,
return qed_mcp_fill_shmem_func_info(p_hwfn, p_ptt);
}
+static void qed_get_num_funcs(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+{
+ u32 reg_function_hide, tmp, eng_mask;
+ u8 num_funcs;
+
+ num_funcs = MAX_NUM_PFS_BB;
+
+ /* Bit 0 of MISCS_REG_FUNCTION_HIDE indicates whether the bypass values
+ * in the other bits are selected.
+ * Bits 1-15 are for functions 1-15, respectively, and their value is
+ * '0' only for enabled functions (function 0 always exists and
+ * enabled).
+ * In case of CMT, only the "even" functions are enabled, and thus the
+ * number of functions for both hwfns is learnt from the same bits.
+ */
+ reg_function_hide = qed_rd(p_hwfn, p_ptt, MISCS_REG_FUNCTION_HIDE);
+
+ if (reg_function_hide & 0x1) {
+ if (QED_PATH_ID(p_hwfn) && p_hwfn->cdev->num_hwfns == 1) {
+ num_funcs = 0;
+ eng_mask = 0xaaaa;
+ } else {
+ num_funcs = 1;
+ eng_mask = 0x5554;
+ }
+
+ /* Get the number of the enabled functions on the engine */
+ tmp = (reg_function_hide ^ 0xffffffff) & eng_mask;
+ while (tmp) {
+ if (tmp & 0x1)
+ num_funcs++;
+ tmp >>= 0x1;
+ }
+ }
+
+ p_hwfn->num_funcs_on_engine = num_funcs;
+
+ DP_VERBOSE(p_hwfn,
+ NETIF_MSG_PROBE,
+ "PF [rel_id %d, abs_id %d] within the %d enabled functions on the engine\n",
+ p_hwfn->rel_pf_id,
+ p_hwfn->abs_pf_id,
+ p_hwfn->num_funcs_on_engine);
+}
+
static int
qed_get_hw_info(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt,
@@ -1228,6 +1467,13 @@ qed_get_hw_info(struct qed_hwfn *p_hwfn,
u32 port_mode;
int rc;
+ /* Since all information is common, only the first hwfn should do this */
+ if (IS_LEAD_HWFN(p_hwfn)) {
+ rc = qed_iov_hw_info(p_hwfn);
+ if (rc)
+ return rc;
+ }
+
/* Read the port mode */
port_mode = qed_rd(p_hwfn, p_ptt,
CNIG_REG_NW_PORT_MODE_BB_B0);
@@ -1271,6 +1517,8 @@ qed_get_hw_info(struct qed_hwfn *p_hwfn,
p_hwfn->hw_info.personality = protocol;
}
+ qed_get_num_funcs(p_hwfn, p_ptt);
+
qed_hw_get_resc(p_hwfn);
return rc;
@@ -1336,6 +1584,9 @@ static int qed_hw_prepare_single(struct qed_hwfn *p_hwfn,
p_hwfn->regview = p_regview;
p_hwfn->doorbells = p_doorbells;
+ if (IS_VF(p_hwfn->cdev))
+ return qed_vf_hw_prepare(p_hwfn);
+
/* Validate that chip access is feasible */
if (REG_RD(p_hwfn, PXP_PF_ME_OPAQUE_ADDR) == 0xffffffff) {
DP_ERR(p_hwfn,
@@ -1387,6 +1638,8 @@ static int qed_hw_prepare_single(struct qed_hwfn *p_hwfn,
return rc;
err2:
+ if (IS_LEAD_HWFN(p_hwfn))
+ qed_iov_free_hw_info(p_hwfn->cdev);
qed_mcp_free(p_hwfn);
err1:
qed_hw_hwfn_free(p_hwfn);
@@ -1401,7 +1654,8 @@ int qed_hw_prepare(struct qed_dev *cdev,
int rc;
/* Store the precompiled init data ptrs */
- qed_init_iro_array(cdev);
+ if (IS_PF(cdev))
+ qed_init_iro_array(cdev);
/* Initialize the first hwfn - will learn number of hwfns */
rc = qed_hw_prepare_single(p_hwfn,
@@ -1433,9 +1687,11 @@ int qed_hw_prepare(struct qed_dev *cdev,
* initialized hwfn 0.
*/
if (rc) {
- qed_init_free(p_hwfn);
- qed_mcp_free(p_hwfn);
- qed_hw_hwfn_free(p_hwfn);
+ if (IS_PF(cdev)) {
+ qed_init_free(p_hwfn);
+ qed_mcp_free(p_hwfn);
+ qed_hw_hwfn_free(p_hwfn);
+ }
}
}
@@ -1449,10 +1705,17 @@ void qed_hw_remove(struct qed_dev *cdev)
for_each_hwfn(cdev, i) {
struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
+ if (IS_VF(cdev)) {
+ qed_vf_pf_release(p_hwfn);
+ continue;
+ }
+
qed_init_free(p_hwfn);
qed_hw_hwfn_free(p_hwfn);
qed_mcp_free(p_hwfn);
}
+
+ qed_iov_free_hw_info(cdev);
}
int qed_chain_alloc(struct qed_dev *cdev,
@@ -1593,3 +1856,388 @@ int qed_fw_rss_eng(struct qed_hwfn *p_hwfn,
return 0;
}
+
+/* Calculate final WFQ values for all vports and configure them.
+ * After this configuration each vport will have
+ * approx min rate = min_pf_rate * (vport_wfq / QED_WFQ_UNIT)
+ */
+static void qed_configure_wfq_for_all_vports(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ u32 min_pf_rate)
+{
+ struct init_qm_vport_params *vport_params;
+ int i;
+
+ vport_params = p_hwfn->qm_info.qm_vport_params;
+
+ for (i = 0; i < p_hwfn->qm_info.num_vports; i++) {
+ u32 wfq_speed = p_hwfn->qm_info.wfq_data[i].min_speed;
+
+ vport_params[i].vport_wfq = (wfq_speed * QED_WFQ_UNIT) /
+ min_pf_rate;
+ qed_init_vport_wfq(p_hwfn, p_ptt,
+ vport_params[i].first_tx_pq_id,
+ vport_params[i].vport_wfq);
+ }
+}
+
+static void qed_init_wfq_default_param(struct qed_hwfn *p_hwfn,
+ u32 min_pf_rate)
+
+{
+ int i;
+
+ for (i = 0; i < p_hwfn->qm_info.num_vports; i++)
+ p_hwfn->qm_info.qm_vport_params[i].vport_wfq = 1;
+}
+
+static void qed_disable_wfq_for_all_vports(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ u32 min_pf_rate)
+{
+ struct init_qm_vport_params *vport_params;
+ int i;
+
+ vport_params = p_hwfn->qm_info.qm_vport_params;
+
+ for (i = 0; i < p_hwfn->qm_info.num_vports; i++) {
+ qed_init_wfq_default_param(p_hwfn, min_pf_rate);
+ qed_init_vport_wfq(p_hwfn, p_ptt,
+ vport_params[i].first_tx_pq_id,
+ vport_params[i].vport_wfq);
+ }
+}
+
+/* This function performs several validations for WFQ
+ * configuration and required min rate for a given vport
+ * 1. req_rate must be greater than one percent of min_pf_rate.
+ * 2. req_rate should not cause other vports [not configured for WFQ explicitly]
+ * rates to get less than one percent of min_pf_rate.
+ * 3. total_req_min_rate [all vports min rate sum] shouldn't exceed min_pf_rate.
+ */
+static int qed_init_wfq_param(struct qed_hwfn *p_hwfn,
+ u16 vport_id, u32 req_rate,
+ u32 min_pf_rate)
+{
+ u32 total_req_min_rate = 0, total_left_rate = 0, left_rate_per_vp = 0;
+ int non_requested_count = 0, req_count = 0, i, num_vports;
+
+ num_vports = p_hwfn->qm_info.num_vports;
+
+ /* Accounting for the vports which are configured for WFQ explicitly */
+ for (i = 0; i < num_vports; i++) {
+ u32 tmp_speed;
+
+ if ((i != vport_id) &&
+ p_hwfn->qm_info.wfq_data[i].configured) {
+ req_count++;
+ tmp_speed = p_hwfn->qm_info.wfq_data[i].min_speed;
+ total_req_min_rate += tmp_speed;
+ }
+ }
+
+ /* Include current vport data as well */
+ req_count++;
+ total_req_min_rate += req_rate;
+ non_requested_count = num_vports - req_count;
+
+ if (req_rate < min_pf_rate / QED_WFQ_UNIT) {
+ DP_VERBOSE(p_hwfn, NETIF_MSG_LINK,
+ "Vport [%d] - Requested rate[%d Mbps] is less than one percent of configured PF min rate[%d Mbps]\n",
+ vport_id, req_rate, min_pf_rate);
+ return -EINVAL;
+ }
+
+ if (num_vports > QED_WFQ_UNIT) {
+ DP_VERBOSE(p_hwfn, NETIF_MSG_LINK,
+ "Number of vports is greater than %d\n",
+ QED_WFQ_UNIT);
+ return -EINVAL;
+ }
+
+ if (total_req_min_rate > min_pf_rate) {
+ DP_VERBOSE(p_hwfn, NETIF_MSG_LINK,
+ "Total requested min rate for all vports[%d Mbps] is greater than configured PF min rate[%d Mbps]\n",
+ total_req_min_rate, min_pf_rate);
+ return -EINVAL;
+ }
+
+ total_left_rate = min_pf_rate - total_req_min_rate;
+
+ left_rate_per_vp = total_left_rate / non_requested_count;
+ if (left_rate_per_vp < min_pf_rate / QED_WFQ_UNIT) {
+ DP_VERBOSE(p_hwfn, NETIF_MSG_LINK,
+ "Non WFQ configured vports rate [%d Mbps] is less than one percent of configured PF min rate[%d Mbps]\n",
+ left_rate_per_vp, min_pf_rate);
+ return -EINVAL;
+ }
+
+ p_hwfn->qm_info.wfq_data[vport_id].min_speed = req_rate;
+ p_hwfn->qm_info.wfq_data[vport_id].configured = true;
+
+ for (i = 0; i < num_vports; i++) {
+ if (p_hwfn->qm_info.wfq_data[i].configured)
+ continue;
+
+ p_hwfn->qm_info.wfq_data[i].min_speed = left_rate_per_vp;
+ }
+
+ return 0;
+}
+
+static int __qed_configure_vport_wfq(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt, u16 vp_id, u32 rate)
+{
+ struct qed_mcp_link_state *p_link;
+ int rc = 0;
+
+ p_link = &p_hwfn->cdev->hwfns[0].mcp_info->link_output;
+
+ if (!p_link->min_pf_rate) {
+ p_hwfn->qm_info.wfq_data[vp_id].min_speed = rate;
+ p_hwfn->qm_info.wfq_data[vp_id].configured = true;
+ return rc;
+ }
+
+ rc = qed_init_wfq_param(p_hwfn, vp_id, rate, p_link->min_pf_rate);
+
+ if (rc == 0)
+ qed_configure_wfq_for_all_vports(p_hwfn, p_ptt,
+ p_link->min_pf_rate);
+ else
+ DP_NOTICE(p_hwfn,
+ "Validation failed while configuring min rate\n");
+
+ return rc;
+}
+
+static int __qed_configure_vp_wfq_on_link_change(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ u32 min_pf_rate)
+{
+ bool use_wfq = false;
+ int rc = 0;
+ u16 i;
+
+ /* Validate all pre configured vports for wfq */
+ for (i = 0; i < p_hwfn->qm_info.num_vports; i++) {
+ u32 rate;
+
+ if (!p_hwfn->qm_info.wfq_data[i].configured)
+ continue;
+
+ rate = p_hwfn->qm_info.wfq_data[i].min_speed;
+ use_wfq = true;
+
+ rc = qed_init_wfq_param(p_hwfn, i, rate, min_pf_rate);
+ if (rc) {
+ DP_NOTICE(p_hwfn,
+ "WFQ validation failed while configuring min rate\n");
+ break;
+ }
+ }
+
+ if (!rc && use_wfq)
+ qed_configure_wfq_for_all_vports(p_hwfn, p_ptt, min_pf_rate);
+ else
+ qed_disable_wfq_for_all_vports(p_hwfn, p_ptt, min_pf_rate);
+
+ return rc;
+}
+
+/* Main API for qed clients to configure vport min rate.
+ * vp_id - vport id in PF Range[0 - (total_num_vports_per_pf - 1)]
+ * rate - Speed in Mbps needs to be assigned to a given vport.
+ */
+int qed_configure_vport_wfq(struct qed_dev *cdev, u16 vp_id, u32 rate)
+{
+ int i, rc = -EINVAL;
+
+ /* Currently not supported; Might change in future */
+ if (cdev->num_hwfns > 1) {
+ DP_NOTICE(cdev,
+ "WFQ configuration is not supported for this device\n");
+ return rc;
+ }
+
+ for_each_hwfn(cdev, i) {
+ struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
+ struct qed_ptt *p_ptt;
+
+ p_ptt = qed_ptt_acquire(p_hwfn);
+ if (!p_ptt)
+ return -EBUSY;
+
+ rc = __qed_configure_vport_wfq(p_hwfn, p_ptt, vp_id, rate);
+
+ if (!rc) {
+ qed_ptt_release(p_hwfn, p_ptt);
+ return rc;
+ }
+
+ qed_ptt_release(p_hwfn, p_ptt);
+ }
+
+ return rc;
+}
+
+/* API to configure WFQ from mcp link change */
+void qed_configure_vp_wfq_on_link_change(struct qed_dev *cdev, u32 min_pf_rate)
+{
+ int i;
+
+ for_each_hwfn(cdev, i) {
+ struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
+
+ __qed_configure_vp_wfq_on_link_change(p_hwfn,
+ p_hwfn->p_dpc_ptt,
+ min_pf_rate);
+ }
+}
+
+int __qed_configure_pf_max_bandwidth(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_mcp_link_state *p_link,
+ u8 max_bw)
+{
+ int rc = 0;
+
+ p_hwfn->mcp_info->func_info.bandwidth_max = max_bw;
+
+ if (!p_link->line_speed && (max_bw != 100))
+ return rc;
+
+ p_link->speed = (p_link->line_speed * max_bw) / 100;
+ p_hwfn->qm_info.pf_rl = p_link->speed;
+
+ /* Since the limiter also affects Tx-switched traffic, we don't want it
+ * to limit such traffic in case there's no actual limit.
+ * In that case, set the limit to an imaginary high boundary.
+ */
+ if (max_bw == 100)
+ p_hwfn->qm_info.pf_rl = 100000;
+
+ rc = qed_init_pf_rl(p_hwfn, p_ptt, p_hwfn->rel_pf_id,
+ p_hwfn->qm_info.pf_rl);
+
+ DP_VERBOSE(p_hwfn, NETIF_MSG_LINK,
+ "Configured MAX bandwidth to be %08x Mb/sec\n",
+ p_link->speed);
+
+ return rc;
+}
+
+/* Main API to configure PF max bandwidth where bw range is [1 - 100] */
+int qed_configure_pf_max_bandwidth(struct qed_dev *cdev, u8 max_bw)
+{
+ int i, rc = -EINVAL;
+
+ if (max_bw < 1 || max_bw > 100) {
+ DP_NOTICE(cdev, "PF max bw valid range is [1-100]\n");
+ return rc;
+ }
+
+ for_each_hwfn(cdev, i) {
+ struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
+ struct qed_hwfn *p_lead = QED_LEADING_HWFN(cdev);
+ struct qed_mcp_link_state *p_link;
+ struct qed_ptt *p_ptt;
+
+ p_link = &p_lead->mcp_info->link_output;
+
+ p_ptt = qed_ptt_acquire(p_hwfn);
+ if (!p_ptt)
+ return -EBUSY;
+
+ rc = __qed_configure_pf_max_bandwidth(p_hwfn, p_ptt,
+ p_link, max_bw);
+
+ qed_ptt_release(p_hwfn, p_ptt);
+
+ if (rc)
+ break;
+ }
+
+ return rc;
+}
+
+int __qed_configure_pf_min_bandwidth(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_mcp_link_state *p_link,
+ u8 min_bw)
+{
+ int rc = 0;
+
+ p_hwfn->mcp_info->func_info.bandwidth_min = min_bw;
+ p_hwfn->qm_info.pf_wfq = min_bw;
+
+ if (!p_link->line_speed)
+ return rc;
+
+ p_link->min_pf_rate = (p_link->line_speed * min_bw) / 100;
+
+ rc = qed_init_pf_wfq(p_hwfn, p_ptt, p_hwfn->rel_pf_id, min_bw);
+
+ DP_VERBOSE(p_hwfn, NETIF_MSG_LINK,
+ "Configured MIN bandwidth to be %d Mb/sec\n",
+ p_link->min_pf_rate);
+
+ return rc;
+}
+
+/* Main API to configure PF min bandwidth where bw range is [1-100] */
+int qed_configure_pf_min_bandwidth(struct qed_dev *cdev, u8 min_bw)
+{
+ int i, rc = -EINVAL;
+
+ if (min_bw < 1 || min_bw > 100) {
+ DP_NOTICE(cdev, "PF min bw valid range is [1-100]\n");
+ return rc;
+ }
+
+ for_each_hwfn(cdev, i) {
+ struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
+ struct qed_hwfn *p_lead = QED_LEADING_HWFN(cdev);
+ struct qed_mcp_link_state *p_link;
+ struct qed_ptt *p_ptt;
+
+ p_link = &p_lead->mcp_info->link_output;
+
+ p_ptt = qed_ptt_acquire(p_hwfn);
+ if (!p_ptt)
+ return -EBUSY;
+
+ rc = __qed_configure_pf_min_bandwidth(p_hwfn, p_ptt,
+ p_link, min_bw);
+ if (rc) {
+ qed_ptt_release(p_hwfn, p_ptt);
+ return rc;
+ }
+
+ if (p_link->min_pf_rate) {
+ u32 min_rate = p_link->min_pf_rate;
+
+ rc = __qed_configure_vp_wfq_on_link_change(p_hwfn,
+ p_ptt,
+ min_rate);
+ }
+
+ qed_ptt_release(p_hwfn, p_ptt);
+ }
+
+ return rc;
+}
+
+void qed_clean_wfq_db(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+{
+ struct qed_mcp_link_state *p_link;
+
+ p_link = &p_hwfn->mcp_info->link_output;
+
+ if (p_link->min_pf_rate)
+ qed_disable_wfq_for_all_vports(p_hwfn, p_ptt,
+ p_link->min_pf_rate);
+
+ memset(p_hwfn->qm_info.wfq_data, 0,
+ sizeof(*p_hwfn->qm_info.wfq_data) * p_hwfn->qm_info.num_vports);
+}
diff --git a/drivers/net/ethernet/qlogic/qed/qed_dev_api.h b/drivers/net/ethernet/qlogic/qed/qed_dev_api.h
index d6c7ddf4f4d4..dde364d6f502 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_dev_api.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_dev_api.h
@@ -62,6 +62,7 @@ void qed_resc_setup(struct qed_dev *cdev);
* @brief qed_hw_init -
*
* @param cdev
+ * @param p_tunn
* @param b_hw_start
* @param int_mode - interrupt mode [msix, inta, etc.] to use.
* @param allow_npar_tx_switch - npar tx switching to be used
@@ -72,6 +73,7 @@ void qed_resc_setup(struct qed_dev *cdev);
* @return int
*/
int qed_hw_init(struct qed_dev *cdev,
+ struct qed_tunn_start_params *p_tunn,
bool b_hw_start,
enum qed_int_mode int_mode,
bool allow_npar_tx_switch,
@@ -180,11 +182,15 @@ enum qed_dmae_address_type_t {
* used mostly to write a zeroed buffer to destination address
* using DMA
*/
-#define QED_DMAE_FLAG_RW_REPL_SRC 0x00000001
-#define QED_DMAE_FLAG_COMPLETION_DST 0x00000008
+#define QED_DMAE_FLAG_RW_REPL_SRC 0x00000001
+#define QED_DMAE_FLAG_VF_SRC 0x00000002
+#define QED_DMAE_FLAG_VF_DST 0x00000004
+#define QED_DMAE_FLAG_COMPLETION_DST 0x00000008
struct qed_dmae_params {
- u32 flags; /* consists of QED_DMAE_FLAG_* values */
+ u32 flags; /* consists of QED_DMAE_FLAG_* values */
+ u8 src_vfid;
+ u8 dst_vfid;
};
/**
@@ -207,6 +213,23 @@ qed_dmae_host2grc(struct qed_hwfn *p_hwfn,
u32 flags);
/**
+ * @brief qed_dmae_host2host - copy data from a source address
+ * to a destination address (for SRIOV) using the given ptt
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param source_addr
+ * @param dest_addr
+ * @param size_in_dwords
+ * @param params
+ */
+int qed_dmae_host2host(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ dma_addr_t source_addr,
+ dma_addr_t dest_addr,
+ u32 size_in_dwords, struct qed_dmae_params *p_params);
+
+/**
* @brief qed_chain_alloc - Allocate and initialize a chain
*
* @param p_hwfn
@@ -280,11 +303,11 @@ int qed_fw_rss_eng(struct qed_hwfn *p_hwfn,
* @param p_hwfn
* @param p_ptt
* @param id - For PF, engine-relative. For VF, PF-relative.
+ * @param is_vf - true iff the cleanup is performed for a VF.
*
* @return int
*/
int qed_final_cleanup(struct qed_hwfn *p_hwfn,
- struct qed_ptt *p_ptt,
- u16 id);
+ struct qed_ptt *p_ptt, u16 id, bool is_vf);
#endif
diff --git a/drivers/net/ethernet/qlogic/qed/qed_hsi.h b/drivers/net/ethernet/qlogic/qed/qed_hsi.h
index a368f5e71d95..9afc15fdbb02 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_hsi.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_hsi.h
@@ -29,9 +29,9 @@ struct qed_ptt;
enum common_event_opcode {
COMMON_EVENT_PF_START,
COMMON_EVENT_PF_STOP,
- COMMON_EVENT_RESERVED,
- COMMON_EVENT_RESERVED2,
- COMMON_EVENT_RESERVED3,
+ COMMON_EVENT_VF_START,
+ COMMON_EVENT_VF_STOP,
+ COMMON_EVENT_VF_PF_CHANNEL,
COMMON_EVENT_RESERVED4,
COMMON_EVENT_RESERVED5,
COMMON_EVENT_RESERVED6,
@@ -44,9 +44,9 @@ enum common_ramrod_cmd_id {
COMMON_RAMROD_UNUSED,
COMMON_RAMROD_PF_START /* PF Function Start Ramrod */,
COMMON_RAMROD_PF_STOP /* PF Function Stop Ramrod */,
- COMMON_RAMROD_RESERVED,
- COMMON_RAMROD_RESERVED2,
- COMMON_RAMROD_RESERVED3,
+ COMMON_RAMROD_VF_START,
+ COMMON_RAMROD_VF_STOP,
+ COMMON_RAMROD_PF_UPDATE,
COMMON_RAMROD_EMPTY,
MAX_COMMON_RAMROD_CMD_ID
};
@@ -573,6 +573,14 @@ union event_ring_element {
struct event_ring_next_addr next_addr;
};
+struct mstorm_non_trigger_vf_zone {
+ struct eth_mstorm_per_queue_stat eth_queue_stat;
+};
+
+struct mstorm_vf_zone {
+ struct mstorm_non_trigger_vf_zone non_trigger;
+};
+
enum personality_type {
BAD_PERSONALITY_TYP,
PERSONALITY_RESERVED,
@@ -626,6 +634,59 @@ struct pf_start_ramrod_data {
u8 reserved0[4];
};
+/* Data for port update ramrod */
+struct protocol_dcb_data {
+ u8 dcb_enable_flag;
+ u8 dcb_priority;
+ u8 dcb_tc;
+ u8 reserved;
+};
+
+/* tunnel configuration */
+struct pf_update_tunnel_config {
+ u8 update_rx_pf_clss;
+ u8 update_tx_pf_clss;
+ u8 set_vxlan_udp_port_flg;
+ u8 set_geneve_udp_port_flg;
+ u8 tx_enable_vxlan;
+ u8 tx_enable_l2geneve;
+ u8 tx_enable_ipgeneve;
+ u8 tx_enable_l2gre;
+ u8 tx_enable_ipgre;
+ u8 tunnel_clss_vxlan;
+ u8 tunnel_clss_l2geneve;
+ u8 tunnel_clss_ipgeneve;
+ u8 tunnel_clss_l2gre;
+ u8 tunnel_clss_ipgre;
+ __le16 vxlan_udp_port;
+ __le16 geneve_udp_port;
+ __le16 reserved[3];
+};
+
+struct pf_update_ramrod_data {
+ u8 pf_id;
+ u8 update_eth_dcb_data_flag;
+ u8 update_fcoe_dcb_data_flag;
+ u8 update_iscsi_dcb_data_flag;
+ u8 update_roce_dcb_data_flag;
+ u8 update_mf_vlan_flag;
+ __le16 mf_vlan;
+ struct protocol_dcb_data eth_dcb_data;
+ struct protocol_dcb_data fcoe_dcb_data;
+ struct protocol_dcb_data iscsi_dcb_data;
+ struct protocol_dcb_data roce_dcb_data;
+ struct pf_update_tunnel_config tunnel_config;
+};
+
+/* Tunnel classification scheme */
+enum tunnel_clss {
+ TUNNEL_CLSS_MAC_VLAN = 0,
+ TUNNEL_CLSS_MAC_VNI,
+ TUNNEL_CLSS_INNER_MAC_VLAN,
+ TUNNEL_CLSS_INNER_MAC_VNI,
+ MAX_TUNNEL_CLSS
+};
+
enum ports_mode {
ENGX2_PORTX1 /* 2 engines x 1 port */,
ENGX2_PORTX2 /* 2 engines x 2 ports */,
@@ -635,6 +696,16 @@ enum ports_mode {
MAX_PORTS_MODE
};
+struct pstorm_non_trigger_vf_zone {
+ struct eth_pstorm_per_queue_stat eth_queue_stat;
+ struct regpair reserved[2];
+};
+
+struct pstorm_vf_zone {
+ struct pstorm_non_trigger_vf_zone non_trigger;
+ struct regpair reserved[7];
+};
+
/* Ramrod Header of SPQE */
struct ramrod_header {
__le32 cid /* Slowpath Connection CID */;
@@ -664,6 +735,36 @@ struct tstorm_per_port_stat {
struct regpair preroce_irregular_pkt;
};
+struct ustorm_non_trigger_vf_zone {
+ struct eth_ustorm_per_queue_stat eth_queue_stat;
+ struct regpair vf_pf_msg_addr;
+};
+
+struct ustorm_trigger_vf_zone {
+ u8 vf_pf_msg_valid;
+ u8 reserved[7];
+};
+
+struct ustorm_vf_zone {
+ struct ustorm_non_trigger_vf_zone non_trigger;
+ struct ustorm_trigger_vf_zone trigger;
+};
+
+struct vf_start_ramrod_data {
+ u8 vf_id;
+ u8 enable_flr_ack;
+ __le16 opaque_fid;
+ u8 personality;
+ u8 reserved[3];
+};
+
+struct vf_stop_ramrod_data {
+ u8 vf_id;
+ u8 reserved0;
+ __le16 reserved1;
+ __le32 reserved2;
+};
+
struct atten_status_block {
__le32 atten_bits;
__le32 atten_ack;
@@ -990,7 +1091,7 @@ enum init_phases {
PHASE_ENGINE,
PHASE_PORT,
PHASE_PF,
- PHASE_RESERVED,
+ PHASE_VF,
PHASE_QM_PF,
MAX_INIT_PHASES
};
@@ -1603,6 +1704,19 @@ bool qed_send_qm_stop_cmd(struct qed_hwfn *p_hwfn,
u16 start_pq,
u16 num_pqs);
+void qed_set_vxlan_dest_port(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt, u16 dest_port);
+void qed_set_vxlan_enable(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt, bool vxlan_enable);
+void qed_set_gre_enable(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt, bool eth_gre_enable,
+ bool ip_gre_enable);
+void qed_set_geneve_dest_port(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt, u16 dest_port);
+void qed_set_geneve_enable(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt, bool eth_geneve_enable,
+ bool ip_geneve_enable);
+
/* Ystorm flow control mode. Use enum fw_flow_ctrl_mode */
#define YSTORM_FLOW_CONTROL_MODE_OFFSET (IRO[0].base)
#define YSTORM_FLOW_CONTROL_MODE_SIZE (IRO[0].size)
@@ -3788,7 +3902,7 @@ struct public_drv_mb {
#define DRV_MSG_CODE_SET_LLDP 0x24000000
#define DRV_MSG_CODE_SET_DCBX 0x25000000
-
+#define DRV_MSG_CODE_BW_UPDATE_ACK 0x32000000
#define DRV_MSG_CODE_NIG_DRAIN 0x30000000
#define DRV_MSG_CODE_INITIATE_FLR 0x02000000
@@ -3808,6 +3922,7 @@ struct public_drv_mb {
#define DRV_MSG_CODE_PHY_CORE_WRITE 0x000e0000
#define DRV_MSG_CODE_SET_VERSION 0x000f0000
+#define DRV_MSG_CODE_BIST_TEST 0x001e0000
#define DRV_MSG_CODE_SET_LED_MODE 0x00200000
#define DRV_MSG_SEQ_NUMBER_MASK 0x0000ffff
@@ -3865,6 +3980,18 @@ struct public_drv_mb {
#define DRV_MB_PARAM_SET_LED_MODE_ON 0x1
#define DRV_MB_PARAM_SET_LED_MODE_OFF 0x2
+#define DRV_MB_PARAM_BIST_UNKNOWN_TEST 0
+#define DRV_MB_PARAM_BIST_REGISTER_TEST 1
+#define DRV_MB_PARAM_BIST_CLOCK_TEST 2
+
+#define DRV_MB_PARAM_BIST_RC_UNKNOWN 0
+#define DRV_MB_PARAM_BIST_RC_PASSED 1
+#define DRV_MB_PARAM_BIST_RC_FAILED 2
+#define DRV_MB_PARAM_BIST_RC_INVALID_PARAMETER 3
+
+#define DRV_MB_PARAM_BIST_TEST_INDEX_SHIFT 0
+#define DRV_MB_PARAM_BIST_TEST_INDEX_MASK 0x000000FF
+
u32 fw_mb_header;
#define FW_MSG_CODE_MASK 0xffff0000
#define FW_MSG_CODE_DRV_LOAD_ENGINE 0x10100000
@@ -5067,4 +5194,8 @@ struct hw_set_image {
struct hw_set_info hw_sets[1];
};
+int qed_init_pf_wfq(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
+ u8 pf_id, u16 pf_wfq);
+int qed_init_vport_wfq(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
+ u16 first_tx_pq_id[NUM_OF_TCS], u16 vport_wfq);
#endif
diff --git a/drivers/net/ethernet/qlogic/qed/qed_hw.c b/drivers/net/ethernet/qlogic/qed/qed_hw.c
index a95a3e4b3101..0ada7fdb91bc 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_hw.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_hw.c
@@ -23,6 +23,7 @@
#include "qed_hsi.h"
#include "qed_hw.h"
#include "qed_reg_addr.h"
+#include "qed_sriov.h"
#define QED_BAR_ACQUIRE_TIMEOUT 1000
@@ -236,8 +237,12 @@ static void qed_memcpy_hw(struct qed_hwfn *p_hwfn,
quota = min_t(size_t, n - done,
PXP_EXTERNAL_BAR_PF_WINDOW_SINGLE_SIZE);
- qed_ptt_set_win(p_hwfn, p_ptt, hw_addr + done);
- hw_offset = qed_ptt_get_bar_addr(p_ptt);
+ if (IS_PF(p_hwfn->cdev)) {
+ qed_ptt_set_win(p_hwfn, p_ptt, hw_addr + done);
+ hw_offset = qed_ptt_get_bar_addr(p_ptt);
+ } else {
+ hw_offset = hw_addr + done;
+ }
dw_count = quota / 4;
host_addr = (u32 *)((u8 *)addr + done);
@@ -338,14 +343,25 @@ void qed_port_unpretend(struct qed_hwfn *p_hwfn,
*(u32 *)&p_ptt->pxp.pretend);
}
+u32 qed_vfid_to_concrete(struct qed_hwfn *p_hwfn, u8 vfid)
+{
+ u32 concrete_fid = 0;
+
+ SET_FIELD(concrete_fid, PXP_CONCRETE_FID_PFID, p_hwfn->rel_pf_id);
+ SET_FIELD(concrete_fid, PXP_CONCRETE_FID_VFID, vfid);
+ SET_FIELD(concrete_fid, PXP_CONCRETE_FID_VFVALID, 1);
+
+ return concrete_fid;
+}
+
/* DMAE */
static void qed_dmae_opcode(struct qed_hwfn *p_hwfn,
const u8 is_src_type_grc,
const u8 is_dst_type_grc,
struct qed_dmae_params *p_params)
{
+ u16 opcode_b = 0;
u32 opcode = 0;
- u16 opcodeB = 0;
/* Whether the source is the PCIe or the GRC.
* 0- The source is the PCIe
@@ -387,14 +403,24 @@ static void qed_dmae_opcode(struct qed_hwfn *p_hwfn,
opcode |= (DMAE_CMD_DST_ADDR_RESET_MASK <<
DMAE_CMD_DST_ADDR_RESET_SHIFT);
- opcodeB |= (DMAE_CMD_SRC_VF_ID_MASK <<
- DMAE_CMD_SRC_VF_ID_SHIFT);
+ /* SRC/DST VFID: all 1's - pf, otherwise VF id */
+ if (p_params->flags & QED_DMAE_FLAG_VF_SRC) {
+ opcode |= 1 << DMAE_CMD_SRC_VF_ID_VALID_SHIFT;
+ opcode_b |= p_params->src_vfid << DMAE_CMD_SRC_VF_ID_SHIFT;
+ } else {
+ opcode_b |= DMAE_CMD_SRC_VF_ID_MASK <<
+ DMAE_CMD_SRC_VF_ID_SHIFT;
+ }
- opcodeB |= (DMAE_CMD_DST_VF_ID_MASK <<
- DMAE_CMD_DST_VF_ID_SHIFT);
+ if (p_params->flags & QED_DMAE_FLAG_VF_DST) {
+ opcode |= 1 << DMAE_CMD_DST_VF_ID_VALID_SHIFT;
+ opcode_b |= p_params->dst_vfid << DMAE_CMD_DST_VF_ID_SHIFT;
+ } else {
+ opcode_b |= DMAE_CMD_DST_VF_ID_MASK << DMAE_CMD_DST_VF_ID_SHIFT;
+ }
p_hwfn->dmae_info.p_dmae_cmd->opcode = cpu_to_le32(opcode);
- p_hwfn->dmae_info.p_dmae_cmd->opcode_b = cpu_to_le16(opcodeB);
+ p_hwfn->dmae_info.p_dmae_cmd->opcode_b = cpu_to_le16(opcode_b);
}
u32 qed_dmae_idx_to_go_cmd(u8 idx)
@@ -742,6 +768,28 @@ int qed_dmae_host2grc(struct qed_hwfn *p_hwfn,
return rc;
}
+int
+qed_dmae_host2host(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ dma_addr_t source_addr,
+ dma_addr_t dest_addr,
+ u32 size_in_dwords, struct qed_dmae_params *p_params)
+{
+ int rc;
+
+ mutex_lock(&(p_hwfn->dmae_info.mutex));
+
+ rc = qed_dmae_execute_command(p_hwfn, p_ptt, source_addr,
+ dest_addr,
+ QED_DMAE_ADDRESS_HOST_PHYS,
+ QED_DMAE_ADDRESS_HOST_PHYS,
+ size_in_dwords, p_params);
+
+ mutex_unlock(&(p_hwfn->dmae_info.mutex));
+
+ return rc;
+}
+
u16 qed_get_qm_pq(struct qed_hwfn *p_hwfn,
enum protocol_type proto,
union qed_qm_pq_params *p_params)
@@ -765,6 +813,9 @@ u16 qed_get_qm_pq(struct qed_hwfn *p_hwfn,
break;
case PROTOCOLID_ETH:
pq_id = p_params->eth.tc;
+ if (p_params->eth.is_vf)
+ pq_id += p_hwfn->qm_info.vf_queues_offset +
+ p_params->eth.vf_id;
break;
default:
pq_id = 0;
diff --git a/drivers/net/ethernet/qlogic/qed/qed_hw.h b/drivers/net/ethernet/qlogic/qed/qed_hw.h
index e56d433793be..4367363ade40 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_hw.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_hw.h
@@ -221,6 +221,16 @@ void qed_port_unpretend(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt);
/**
+ * @brief qed_vfid_to_concrete - build a concrete FID for a
+ * given VF ID
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param vfid
+ */
+u32 qed_vfid_to_concrete(struct qed_hwfn *p_hwfn, u8 vfid);
+
+/**
* @brief qed_dmae_idx_to_go_cmd - map the idx to dmae cmd
* this is declared here since other files will require it.
* @param idx
diff --git a/drivers/net/ethernet/qlogic/qed/qed_init_fw_funcs.c b/drivers/net/ethernet/qlogic/qed/qed_init_fw_funcs.c
index f55ebdc3c832..e8a3b9da59b5 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_init_fw_funcs.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_init_fw_funcs.c
@@ -712,6 +712,21 @@ int qed_qm_pf_rt_init(struct qed_hwfn *p_hwfn,
return 0;
}
+int qed_init_pf_wfq(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ u8 pf_id, u16 pf_wfq)
+{
+ u32 inc_val = QM_WFQ_INC_VAL(pf_wfq);
+
+ if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
+ DP_NOTICE(p_hwfn, "Invalid PF WFQ weight configuration");
+ return -1;
+ }
+
+ qed_wr(p_hwfn, p_ptt, QM_REG_WFQPFWEIGHT + pf_id * 4, inc_val);
+ return 0;
+}
+
int qed_init_pf_rl(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt,
u8 pf_id,
@@ -732,6 +747,31 @@ int qed_init_pf_rl(struct qed_hwfn *p_hwfn,
return 0;
}
+int qed_init_vport_wfq(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ u16 first_tx_pq_id[NUM_OF_TCS],
+ u16 vport_wfq)
+{
+ u32 inc_val = QM_WFQ_INC_VAL(vport_wfq);
+ u8 tc;
+
+ if (!inc_val || inc_val > QM_WFQ_MAX_INC_VAL) {
+ DP_NOTICE(p_hwfn, "Invalid VPORT WFQ weight configuration");
+ return -1;
+ }
+
+ for (tc = 0; tc < NUM_OF_TCS; tc++) {
+ u16 vport_pq_id = first_tx_pq_id[tc];
+
+ if (vport_pq_id != QM_INVALID_PQ_ID)
+ qed_wr(p_hwfn, p_ptt,
+ QM_REG_WFQVPWEIGHT + vport_pq_id * 4,
+ inc_val);
+ }
+
+ return 0;
+}
+
int qed_init_vport_rl(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt,
u8 vport_id,
@@ -788,3 +828,130 @@ bool qed_send_qm_stop_cmd(struct qed_hwfn *p_hwfn,
return true;
}
+
+static void
+qed_set_tunnel_type_enable_bit(unsigned long *var, int bit, bool enable)
+{
+ if (enable)
+ set_bit(bit, var);
+ else
+ clear_bit(bit, var);
+}
+
+#define PRS_ETH_TUNN_FIC_FORMAT -188897008
+
+void qed_set_vxlan_dest_port(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ u16 dest_port)
+{
+ qed_wr(p_hwfn, p_ptt, PRS_REG_VXLAN_PORT, dest_port);
+ qed_wr(p_hwfn, p_ptt, NIG_REG_VXLAN_PORT, dest_port);
+ qed_wr(p_hwfn, p_ptt, PBF_REG_VXLAN_PORT, dest_port);
+}
+
+void qed_set_vxlan_enable(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ bool vxlan_enable)
+{
+ unsigned long reg_val = 0;
+ u8 shift;
+
+ reg_val = qed_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
+ shift = PRS_REG_ENCAPSULATION_TYPE_EN_VXLAN_ENABLE_SHIFT;
+ qed_set_tunnel_type_enable_bit(&reg_val, shift, vxlan_enable);
+
+ qed_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
+
+ if (reg_val)
+ qed_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
+ PRS_ETH_TUNN_FIC_FORMAT);
+
+ reg_val = qed_rd(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE);
+ shift = NIG_REG_ENC_TYPE_ENABLE_VXLAN_ENABLE_SHIFT;
+ qed_set_tunnel_type_enable_bit(&reg_val, shift, vxlan_enable);
+
+ qed_wr(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE, reg_val);
+
+ qed_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_VXLAN_EN,
+ vxlan_enable ? 1 : 0);
+}
+
+void qed_set_gre_enable(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
+ bool eth_gre_enable, bool ip_gre_enable)
+{
+ unsigned long reg_val = 0;
+ u8 shift;
+
+ reg_val = qed_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
+ shift = PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GRE_ENABLE_SHIFT;
+ qed_set_tunnel_type_enable_bit(&reg_val, shift, eth_gre_enable);
+
+ shift = PRS_REG_ENCAPSULATION_TYPE_EN_IP_OVER_GRE_ENABLE_SHIFT;
+ qed_set_tunnel_type_enable_bit(&reg_val, shift, ip_gre_enable);
+ qed_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
+ if (reg_val)
+ qed_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
+ PRS_ETH_TUNN_FIC_FORMAT);
+
+ reg_val = qed_rd(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE);
+ shift = NIG_REG_ENC_TYPE_ENABLE_ETH_OVER_GRE_ENABLE_SHIFT;
+ qed_set_tunnel_type_enable_bit(&reg_val, shift, eth_gre_enable);
+
+ shift = NIG_REG_ENC_TYPE_ENABLE_IP_OVER_GRE_ENABLE_SHIFT;
+ qed_set_tunnel_type_enable_bit(&reg_val, shift, ip_gre_enable);
+ qed_wr(p_hwfn, p_ptt, NIG_REG_ENC_TYPE_ENABLE, reg_val);
+
+ qed_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_ETH_EN,
+ eth_gre_enable ? 1 : 0);
+ qed_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_GRE_IP_EN,
+ ip_gre_enable ? 1 : 0);
+}
+
+void qed_set_geneve_dest_port(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ u16 dest_port)
+{
+ qed_wr(p_hwfn, p_ptt, PRS_REG_NGE_PORT, dest_port);
+ qed_wr(p_hwfn, p_ptt, NIG_REG_NGE_PORT, dest_port);
+ qed_wr(p_hwfn, p_ptt, PBF_REG_NGE_PORT, dest_port);
+}
+
+void qed_set_geneve_enable(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ bool eth_geneve_enable,
+ bool ip_geneve_enable)
+{
+ unsigned long reg_val = 0;
+ u8 shift;
+
+ reg_val = qed_rd(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN);
+ shift = PRS_REG_ENCAPSULATION_TYPE_EN_ETH_OVER_GENEVE_ENABLE_SHIFT;
+ qed_set_tunnel_type_enable_bit(&reg_val, shift, eth_geneve_enable);
+
+ shift = PRS_REG_ENCAPSULATION_TYPE_EN_IP_OVER_GENEVE_ENABLE_SHIFT;
+ qed_set_tunnel_type_enable_bit(&reg_val, shift, ip_geneve_enable);
+
+ qed_wr(p_hwfn, p_ptt, PRS_REG_ENCAPSULATION_TYPE_EN, reg_val);
+ if (reg_val)
+ qed_wr(p_hwfn, p_ptt, PRS_REG_OUTPUT_FORMAT_4_0,
+ PRS_ETH_TUNN_FIC_FORMAT);
+
+ qed_wr(p_hwfn, p_ptt, NIG_REG_NGE_ETH_ENABLE,
+ eth_geneve_enable ? 1 : 0);
+ qed_wr(p_hwfn, p_ptt, NIG_REG_NGE_IP_ENABLE, ip_geneve_enable ? 1 : 0);
+
+ /* comp ver */
+ reg_val = (ip_geneve_enable || eth_geneve_enable) ? 1 : 0;
+ qed_wr(p_hwfn, p_ptt, NIG_REG_NGE_COMP_VER, reg_val);
+ qed_wr(p_hwfn, p_ptt, PBF_REG_NGE_COMP_VER, reg_val);
+ qed_wr(p_hwfn, p_ptt, PRS_REG_NGE_COMP_VER, reg_val);
+
+ /* EDPM with geneve tunnel not supported in BB_B0 */
+ if (QED_IS_BB_B0(p_hwfn->cdev))
+ return;
+
+ qed_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN,
+ eth_geneve_enable ? 1 : 0);
+ qed_wr(p_hwfn, p_ptt, DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN,
+ ip_geneve_enable ? 1 : 0);
+}
diff --git a/drivers/net/ethernet/qlogic/qed/qed_init_ops.c b/drivers/net/ethernet/qlogic/qed/qed_init_ops.c
index 3269b3610e03..d358c3bb1308 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_init_ops.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_init_ops.c
@@ -18,6 +18,7 @@
#include "qed_hw.h"
#include "qed_init_ops.h"
#include "qed_reg_addr.h"
+#include "qed_sriov.h"
#define QED_INIT_MAX_POLL_COUNT 100
#define QED_INIT_POLL_PERIOD_US 500
@@ -128,6 +129,9 @@ int qed_init_alloc(struct qed_hwfn *p_hwfn)
{
struct qed_rt_data *rt_data = &p_hwfn->rt_data;
+ if (IS_VF(p_hwfn->cdev))
+ return 0;
+
rt_data->b_valid = kzalloc(sizeof(bool) * RUNTIME_ARRAY_SIZE,
GFP_KERNEL);
if (!rt_data->b_valid)
diff --git a/drivers/net/ethernet/qlogic/qed/qed_int.c b/drivers/net/ethernet/qlogic/qed/qed_int.c
index 2017b0121f5f..09a6ad3d22dd 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_int.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_int.c
@@ -26,6 +26,8 @@
#include "qed_mcp.h"
#include "qed_reg_addr.h"
#include "qed_sp.h"
+#include "qed_sriov.h"
+#include "qed_vf.h"
struct qed_pi_info {
qed_int_comp_cb_t comp_cb;
@@ -2513,6 +2515,9 @@ void qed_int_cau_conf_pi(struct qed_hwfn *p_hwfn,
u32 sb_offset;
u32 pi_offset;
+ if (IS_VF(p_hwfn->cdev))
+ return;
+
sb_offset = igu_sb_id * PIS_PER_SB;
memset(&pi_entry, 0, sizeof(struct cau_pi_entry));
@@ -2542,8 +2547,9 @@ void qed_int_sb_setup(struct qed_hwfn *p_hwfn,
sb_info->sb_ack = 0;
memset(sb_info->sb_virt, 0, sizeof(*sb_info->sb_virt));
- qed_int_cau_conf_sb(p_hwfn, p_ptt, sb_info->sb_phys,
- sb_info->igu_sb_id, 0, 0);
+ if (IS_PF(p_hwfn->cdev))
+ qed_int_cau_conf_sb(p_hwfn, p_ptt, sb_info->sb_phys,
+ sb_info->igu_sb_id, 0, 0);
}
/**
@@ -2563,8 +2569,10 @@ static u16 qed_get_igu_sb_id(struct qed_hwfn *p_hwfn,
/* Assuming continuous set of IGU SBs dedicated for given PF */
if (sb_id == QED_SP_SB_ID)
igu_sb_id = p_hwfn->hw_info.p_igu_info->igu_dsb_id;
- else
+ else if (IS_PF(p_hwfn->cdev))
igu_sb_id = sb_id + p_hwfn->hw_info.p_igu_info->igu_base_sb;
+ else
+ igu_sb_id = qed_vf_get_igu_sb_id(p_hwfn, sb_id);
DP_VERBOSE(p_hwfn, NETIF_MSG_INTR, "SB [%s] index is 0x%04x\n",
(sb_id == QED_SP_SB_ID) ? "DSB" : "non-DSB", igu_sb_id);
@@ -2594,9 +2602,16 @@ int qed_int_sb_init(struct qed_hwfn *p_hwfn,
/* The igu address will hold the absolute address that needs to be
* written to for a specific status block
*/
- sb_info->igu_addr = (u8 __iomem *)p_hwfn->regview +
- GTT_BAR0_MAP_REG_IGU_CMD +
- (sb_info->igu_sb_id << 3);
+ if (IS_PF(p_hwfn->cdev)) {
+ sb_info->igu_addr = (u8 __iomem *)p_hwfn->regview +
+ GTT_BAR0_MAP_REG_IGU_CMD +
+ (sb_info->igu_sb_id << 3);
+ } else {
+ sb_info->igu_addr = (u8 __iomem *)p_hwfn->regview +
+ PXP_VF_BAR0_START_IGU +
+ ((IGU_CMD_INT_ACK_BASE +
+ sb_info->igu_sb_id) << 3);
+ }
sb_info->flags |= QED_SB_INFO_INIT;
@@ -2783,24 +2798,20 @@ void qed_int_igu_disable_int(struct qed_hwfn *p_hwfn,
{
p_hwfn->b_int_enabled = 0;
+ if (IS_VF(p_hwfn->cdev))
+ return;
+
qed_wr(p_hwfn, p_ptt, IGU_REG_PF_CONFIGURATION, 0);
}
#define IGU_CLEANUP_SLEEP_LENGTH (1000)
-void qed_int_igu_cleanup_sb(struct qed_hwfn *p_hwfn,
- struct qed_ptt *p_ptt,
- u32 sb_id,
- bool cleanup_set,
- u16 opaque_fid
- )
+static void qed_int_igu_cleanup_sb(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ u32 sb_id, bool cleanup_set, u16 opaque_fid)
{
+ u32 cmd_ctrl = 0, val = 0, sb_bit = 0, sb_bit_addr = 0, data = 0;
u32 pxp_addr = IGU_CMD_INT_ACK_BASE + sb_id;
u32 sleep_cnt = IGU_CLEANUP_SLEEP_LENGTH;
- u32 data = 0;
- u32 cmd_ctrl = 0;
- u32 val = 0;
- u32 sb_bit = 0;
- u32 sb_bit_addr = 0;
/* Set the data field */
SET_FIELD(data, IGU_CLEANUP_CLEANUP_SET, cleanup_set ? 1 : 0);
@@ -2845,11 +2856,9 @@ void qed_int_igu_cleanup_sb(struct qed_hwfn *p_hwfn,
void qed_int_igu_init_pure_rt_single(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt,
- u32 sb_id,
- u16 opaque,
- bool b_set)
+ u32 sb_id, u16 opaque, bool b_set)
{
- int pi;
+ int pi, i;
/* Set */
if (b_set)
@@ -2858,6 +2867,22 @@ void qed_int_igu_init_pure_rt_single(struct qed_hwfn *p_hwfn,
/* Clear */
qed_int_igu_cleanup_sb(p_hwfn, p_ptt, sb_id, 0, opaque);
+ /* Wait for the IGU SB to cleanup */
+ for (i = 0; i < IGU_CLEANUP_SLEEP_LENGTH; i++) {
+ u32 val;
+
+ val = qed_rd(p_hwfn, p_ptt,
+ IGU_REG_WRITE_DONE_PENDING + ((sb_id / 32) * 4));
+ if (val & (1 << (sb_id % 32)))
+ usleep_range(10, 20);
+ else
+ break;
+ }
+ if (i == IGU_CLEANUP_SLEEP_LENGTH)
+ DP_NOTICE(p_hwfn,
+ "Failed SB[0x%08x] still appearing in WRITE_DONE_PENDING\n",
+ sb_id);
+
/* Clear the CAU for the SB */
for (pi = 0; pi < 12; pi++)
qed_wr(p_hwfn, p_ptt,
@@ -2866,13 +2891,11 @@ void qed_int_igu_init_pure_rt_single(struct qed_hwfn *p_hwfn,
void qed_int_igu_init_pure_rt(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt,
- bool b_set,
- bool b_slowpath)
+ bool b_set, bool b_slowpath)
{
u32 igu_base_sb = p_hwfn->hw_info.p_igu_info->igu_base_sb;
u32 igu_sb_cnt = p_hwfn->hw_info.p_igu_info->igu_sb_cnt;
- u32 sb_id = 0;
- u32 val = 0;
+ u32 sb_id = 0, val = 0;
val = qed_rd(p_hwfn, p_ptt, IGU_REG_BLOCK_CONFIGURATION);
val |= IGU_REG_BLOCK_CONFIGURATION_VF_CLEANUP_EN;
@@ -2888,14 +2911,14 @@ void qed_int_igu_init_pure_rt(struct qed_hwfn *p_hwfn,
p_hwfn->hw_info.opaque_fid,
b_set);
- if (b_slowpath) {
- sb_id = p_hwfn->hw_info.p_igu_info->igu_dsb_id;
- DP_VERBOSE(p_hwfn, NETIF_MSG_INTR,
- "IGU cleaning slowpath SB [%d]\n", sb_id);
- qed_int_igu_init_pure_rt_single(p_hwfn, p_ptt, sb_id,
- p_hwfn->hw_info.opaque_fid,
- b_set);
- }
+ if (!b_slowpath)
+ return;
+
+ sb_id = p_hwfn->hw_info.p_igu_info->igu_dsb_id;
+ DP_VERBOSE(p_hwfn, NETIF_MSG_INTR,
+ "IGU cleaning slowpath SB [%d]\n", sb_id);
+ qed_int_igu_init_pure_rt_single(p_hwfn, p_ptt, sb_id,
+ p_hwfn->hw_info.opaque_fid, b_set);
}
static u32 qed_int_igu_read_cam_block(struct qed_hwfn *p_hwfn,
@@ -2935,9 +2958,9 @@ int qed_int_igu_read_cam(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt)
{
struct qed_igu_info *p_igu_info;
+ u32 val, min_vf = 0, max_vf = 0;
+ u16 sb_id, last_iov_sb_id = 0;
struct qed_igu_block *blk;
- u32 val;
- u16 sb_id;
u16 prev_sb_id = 0xFF;
p_hwfn->hw_info.p_igu_info = kzalloc(sizeof(*p_igu_info), GFP_KERNEL);
@@ -2947,12 +2970,19 @@ int qed_int_igu_read_cam(struct qed_hwfn *p_hwfn,
p_igu_info = p_hwfn->hw_info.p_igu_info;
- /* Initialize base sb / sb cnt for PFs */
+ /* Initialize base sb / sb cnt for PFs and VFs */
p_igu_info->igu_base_sb = 0xffff;
p_igu_info->igu_sb_cnt = 0;
p_igu_info->igu_dsb_id = 0xffff;
p_igu_info->igu_base_sb_iov = 0xffff;
+ if (p_hwfn->cdev->p_iov_info) {
+ struct qed_hw_sriov_info *p_iov = p_hwfn->cdev->p_iov_info;
+
+ min_vf = p_iov->first_vf_in_pf;
+ max_vf = p_iov->first_vf_in_pf + p_iov->total_vfs;
+ }
+
for (sb_id = 0; sb_id < QED_MAPPING_MEMORY_SIZE(p_hwfn->cdev);
sb_id++) {
blk = &p_igu_info->igu_map.igu_blocks[sb_id];
@@ -2986,14 +3016,43 @@ int qed_int_igu_read_cam(struct qed_hwfn *p_hwfn,
(p_igu_info->igu_sb_cnt)++;
}
}
+ } else {
+ if ((blk->function_id >= min_vf) &&
+ (blk->function_id < max_vf)) {
+ /* Available for VFs of this PF */
+ if (p_igu_info->igu_base_sb_iov == 0xffff) {
+ p_igu_info->igu_base_sb_iov = sb_id;
+ } else if (last_iov_sb_id != sb_id - 1) {
+ if (!val) {
+ DP_VERBOSE(p_hwfn->cdev,
+ NETIF_MSG_INTR,
+ "First uninitialized IGU CAM entry at index 0x%04x\n",
+ sb_id);
+ } else {
+ DP_NOTICE(p_hwfn->cdev,
+ "Consecutive igu vectors for HWFN %x vfs is broken [jumps from %04x to %04x]\n",
+ p_hwfn->rel_pf_id,
+ last_iov_sb_id,
+ sb_id);
+ }
+ break;
+ }
+ blk->status |= QED_IGU_STATUS_FREE;
+ p_hwfn->hw_info.p_igu_info->free_blks++;
+ last_iov_sb_id = sb_id;
+ }
}
}
-
- DP_VERBOSE(p_hwfn, NETIF_MSG_INTR,
- "IGU igu_base_sb=0x%x igu_sb_cnt=%d igu_dsb_id=0x%x\n",
- p_igu_info->igu_base_sb,
- p_igu_info->igu_sb_cnt,
- p_igu_info->igu_dsb_id);
+ p_igu_info->igu_sb_cnt_iov = p_igu_info->free_blks;
+
+ DP_VERBOSE(
+ p_hwfn,
+ NETIF_MSG_INTR,
+ "IGU igu_base_sb=0x%x [IOV 0x%x] igu_sb_cnt=%d [IOV 0x%x] igu_dsb_id=0x%x\n",
+ p_igu_info->igu_base_sb,
+ p_igu_info->igu_base_sb_iov,
+ p_igu_info->igu_sb_cnt,
+ p_igu_info->igu_sb_cnt_iov,
+ p_igu_info->igu_dsb_id);
if (p_igu_info->igu_base_sb == 0xffff ||
p_igu_info->igu_dsb_id == 0xffff ||
@@ -3116,6 +3175,23 @@ void qed_int_get_num_sbs(struct qed_hwfn *p_hwfn,
p_sb_cnt_info->sb_free_blk = info->free_blks;
}
+u16 qed_int_queue_id_from_sb_id(struct qed_hwfn *p_hwfn, u16 sb_id)
+{
+ struct qed_igu_info *p_info = p_hwfn->hw_info.p_igu_info;
+
+ /* Determine origin of SB id */
+ if ((sb_id >= p_info->igu_base_sb) &&
+ (sb_id < p_info->igu_base_sb + p_info->igu_sb_cnt)) {
+ return sb_id - p_info->igu_base_sb;
+ } else if ((sb_id >= p_info->igu_base_sb_iov) &&
+ (sb_id < p_info->igu_base_sb_iov + p_info->igu_sb_cnt_iov)) {
+ return sb_id - p_info->igu_base_sb_iov + p_info->igu_sb_cnt;
+ } else {
+ DP_NOTICE(p_hwfn, "SB %d not in range for function\n", sb_id);
+ return 0;
+ }
+}
+
void qed_int_disable_post_isr_release(struct qed_dev *cdev)
{
int i;
diff --git a/drivers/net/ethernet/qlogic/qed/qed_int.h b/drivers/net/ethernet/qlogic/qed/qed_int.h
index c57f2e680770..20b468637504 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_int.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_int.h
@@ -20,6 +20,12 @@
#define IGU_PF_CONF_ATTN_BIT_EN (0x1 << 3) /* attention enable */
#define IGU_PF_CONF_SINGLE_ISR_EN (0x1 << 4) /* single ISR mode enable */
#define IGU_PF_CONF_SIMD_MODE (0x1 << 5) /* simd all ones mode */
+/* Fields of IGU VF CONFIGURATION REGISTER */
+#define IGU_VF_CONF_FUNC_EN (0x1 << 0) /* function enable */
+#define IGU_VF_CONF_MSI_MSIX_EN (0x1 << 1) /* MSI/MSIX enable */
+#define IGU_VF_CONF_SINGLE_ISR_EN (0x1 << 4) /* single ISR mode enable */
+#define IGU_VF_CONF_PARENT_MASK (0xF) /* Parent PF */
+#define IGU_VF_CONF_PARENT_SHIFT 5 /* Parent PF */
/* Igu control commands
*/
@@ -292,26 +298,8 @@ u16 qed_int_get_sp_sb_id(struct qed_hwfn *p_hwfn);
* @param p_hwfn
* @param p_ptt
* @param sb_id - igu status block id
- * @param cleanup_set - set(1) / clear(0)
- * @param opaque_fid - the function for which to perform
- * cleanup, for example a PF on behalf of
- * its VFs.
- */
-void qed_int_igu_cleanup_sb(struct qed_hwfn *p_hwfn,
- struct qed_ptt *p_ptt,
- u32 sb_id,
- bool cleanup_set,
- u16 opaque_fid);
-
-/**
- * @brief Status block cleanup. Should be called for each status
- * block that will be used -> both PF / VF
- *
- * @param p_hwfn
- * @param p_ptt
- * @param sb_id - igu status block id
* @param opaque - opaque fid of the sb owner.
- * @param cleanup_set - set(1) / clear(0)
+ * @param b_set - set(1) / clear(0)
*/
void qed_int_igu_init_pure_rt_single(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt,
@@ -365,6 +353,16 @@ void qed_int_setup(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt);
/**
+ * @brief - Returns an Rx queue index appropriate for usage with a given SB.
+ *
+ * @param p_hwfn
+ * @param sb_id - absolute index of SB
+ *
+ * @return index of Rx queue
+ */
+u16 qed_int_queue_id_from_sb_id(struct qed_hwfn *p_hwfn, u16 sb_id);
+
+/**
* @brief - Enable Interrupt & Attention for hw function
*
* @param p_hwfn
diff --git a/drivers/net/ethernet/qlogic/qed/qed_l2.c b/drivers/net/ethernet/qlogic/qed/qed_l2.c
index 3f35c6ca9252..8fba87dd48af 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_l2.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_l2.c
@@ -31,137 +31,25 @@
#include "qed_hsi.h"
#include "qed_hw.h"
#include "qed_int.h"
+#include "qed_l2.h"
#include "qed_mcp.h"
#include "qed_reg_addr.h"
#include "qed_sp.h"
+#include "qed_sriov.h"
-enum qed_rss_caps {
- QED_RSS_IPV4 = 0x1,
- QED_RSS_IPV6 = 0x2,
- QED_RSS_IPV4_TCP = 0x4,
- QED_RSS_IPV6_TCP = 0x8,
- QED_RSS_IPV4_UDP = 0x10,
- QED_RSS_IPV6_UDP = 0x20,
-};
-
-/* Should be the same as ETH_RSS_IND_TABLE_ENTRIES_NUM */
-#define QED_RSS_IND_TABLE_SIZE 128
-#define QED_RSS_KEY_SIZE 10 /* size in 32b chunks */
-
-struct qed_rss_params {
- u8 update_rss_config;
- u8 rss_enable;
- u8 rss_eng_id;
- u8 update_rss_capabilities;
- u8 update_rss_ind_table;
- u8 update_rss_key;
- u8 rss_caps;
- u8 rss_table_size_log;
- u16 rss_ind_table[QED_RSS_IND_TABLE_SIZE];
- u32 rss_key[QED_RSS_KEY_SIZE];
-};
-
-enum qed_filter_opcode {
- QED_FILTER_ADD,
- QED_FILTER_REMOVE,
- QED_FILTER_MOVE,
- QED_FILTER_REPLACE, /* Delete all MACs and add new one instead */
- QED_FILTER_FLUSH, /* Removes all filters */
-};
-
-enum qed_filter_ucast_type {
- QED_FILTER_MAC,
- QED_FILTER_VLAN,
- QED_FILTER_MAC_VLAN,
- QED_FILTER_INNER_MAC,
- QED_FILTER_INNER_VLAN,
- QED_FILTER_INNER_PAIR,
- QED_FILTER_INNER_MAC_VNI_PAIR,
- QED_FILTER_MAC_VNI_PAIR,
- QED_FILTER_VNI,
-};
-
-struct qed_filter_ucast {
- enum qed_filter_opcode opcode;
- enum qed_filter_ucast_type type;
- u8 is_rx_filter;
- u8 is_tx_filter;
- u8 vport_to_add_to;
- u8 vport_to_remove_from;
- unsigned char mac[ETH_ALEN];
- u8 assert_on_error;
- u16 vlan;
- u32 vni;
-};
-
-struct qed_filter_mcast {
- /* MOVE is not supported for multicast */
- enum qed_filter_opcode opcode;
- u8 vport_to_add_to;
- u8 vport_to_remove_from;
- u8 num_mc_addrs;
-#define QED_MAX_MC_ADDRS 64
- unsigned char mac[QED_MAX_MC_ADDRS][ETH_ALEN];
-};
-
-struct qed_filter_accept_flags {
- u8 update_rx_mode_config;
- u8 update_tx_mode_config;
- u8 rx_accept_filter;
- u8 tx_accept_filter;
-#define QED_ACCEPT_NONE 0x01
-#define QED_ACCEPT_UCAST_MATCHED 0x02
-#define QED_ACCEPT_UCAST_UNMATCHED 0x04
-#define QED_ACCEPT_MCAST_MATCHED 0x08
-#define QED_ACCEPT_MCAST_UNMATCHED 0x10
-#define QED_ACCEPT_BCAST 0x20
-};
-
-struct qed_sp_vport_update_params {
- u16 opaque_fid;
- u8 vport_id;
- u8 update_vport_active_rx_flg;
- u8 vport_active_rx_flg;
- u8 update_vport_active_tx_flg;
- u8 vport_active_tx_flg;
- u8 update_approx_mcast_flg;
- u8 update_accept_any_vlan_flg;
- u8 accept_any_vlan;
- unsigned long bins[8];
- struct qed_rss_params *rss_params;
- struct qed_filter_accept_flags accept_flags;
-};
-
-enum qed_tpa_mode {
- QED_TPA_MODE_NONE,
- QED_TPA_MODE_UNUSED,
- QED_TPA_MODE_GRO,
- QED_TPA_MODE_MAX
-};
-
-struct qed_sp_vport_start_params {
- enum qed_tpa_mode tpa_mode;
- bool remove_inner_vlan;
- bool drop_ttl0;
- u8 max_buffers_per_cqe;
- u32 concrete_fid;
- u16 opaque_fid;
- u8 vport_id;
- u16 mtu;
-};
#define QED_MAX_SGES_NUM 16
#define CRC32_POLY 0x1edc6f41
-static int qed_sp_vport_start(struct qed_hwfn *p_hwfn,
- struct qed_sp_vport_start_params *p_params)
+int qed_sp_eth_vport_start(struct qed_hwfn *p_hwfn,
+ struct qed_sp_vport_start_params *p_params)
{
struct vport_start_ramrod_data *p_ramrod = NULL;
struct qed_spq_entry *p_ent = NULL;
struct qed_sp_init_data init_data;
+ u8 abs_vport_id = 0;
int rc = -EINVAL;
u16 rx_mode = 0;
- u8 abs_vport_id = 0;
rc = qed_fw_vport(p_hwfn, p_params->vport_id, &abs_vport_id);
if (rc != 0)
@@ -211,6 +99,8 @@ static int qed_sp_vport_start(struct qed_hwfn *p_hwfn,
break;
}
+ p_ramrod->tx_switching_en = p_params->tx_switching;
+
/* Software Function ID in hwfn (PFs are 0 - 15, VFs are 16 - 135) */
p_ramrod->sw_fid = qed_concrete_to_sw_fid(p_hwfn->cdev,
p_params->concrete_fid);
@@ -218,6 +108,21 @@ static int qed_sp_vport_start(struct qed_hwfn *p_hwfn,
return qed_spq_post(p_hwfn, p_ent, NULL);
}
+int qed_sp_vport_start(struct qed_hwfn *p_hwfn,
+ struct qed_sp_vport_start_params *p_params)
+{
+ if (IS_VF(p_hwfn->cdev)) {
+ return qed_vf_pf_vport_start(p_hwfn, p_params->vport_id,
+ p_params->mtu,
+ p_params->remove_inner_vlan,
+ p_params->tpa_mode,
+ p_params->max_buffers_per_cqe,
+ p_params->only_untagged);
+ }
+
+ return qed_sp_eth_vport_start(p_hwfn, p_params);
+}
+
static int
qed_sp_vport_update_rss(struct qed_hwfn *p_hwfn,
struct vport_update_ramrod_data *p_ramrod,
@@ -363,6 +268,38 @@ qed_sp_update_accept_mode(struct qed_hwfn *p_hwfn,
}
static void
+qed_sp_vport_update_sge_tpa(struct qed_hwfn *p_hwfn,
+ struct vport_update_ramrod_data *p_ramrod,
+ struct qed_sge_tpa_params *p_params)
+{
+ struct eth_vport_tpa_param *p_tpa;
+
+ if (!p_params) {
+ p_ramrod->common.update_tpa_param_flg = 0;
+ p_ramrod->common.update_tpa_en_flg = 0;
+ p_ramrod->common.update_tpa_param_flg = 0;
+ return;
+ }
+
+ p_ramrod->common.update_tpa_en_flg = p_params->update_tpa_en_flg;
+ p_tpa = &p_ramrod->tpa_param;
+ p_tpa->tpa_ipv4_en_flg = p_params->tpa_ipv4_en_flg;
+ p_tpa->tpa_ipv6_en_flg = p_params->tpa_ipv6_en_flg;
+ p_tpa->tpa_ipv4_tunn_en_flg = p_params->tpa_ipv4_tunn_en_flg;
+ p_tpa->tpa_ipv6_tunn_en_flg = p_params->tpa_ipv6_tunn_en_flg;
+
+ p_ramrod->common.update_tpa_param_flg = p_params->update_tpa_param_flg;
+ p_tpa->max_buff_num = p_params->max_buffers_per_cqe;
+ p_tpa->tpa_pkt_split_flg = p_params->tpa_pkt_split_flg;
+ p_tpa->tpa_hdr_data_split_flg = p_params->tpa_hdr_data_split_flg;
+ p_tpa->tpa_gro_consistent_flg = p_params->tpa_gro_consistent_flg;
+ p_tpa->tpa_max_aggs_num = p_params->tpa_max_aggs_num;
+ p_tpa->tpa_max_size = p_params->tpa_max_size;
+ p_tpa->tpa_min_size_to_start = p_params->tpa_min_size_to_start;
+ p_tpa->tpa_min_size_to_cont = p_params->tpa_min_size_to_cont;
+}
+
+static void
qed_sp_update_mcast_bin(struct qed_hwfn *p_hwfn,
struct vport_update_ramrod_data *p_ramrod,
struct qed_sp_vport_update_params *p_params)
@@ -383,20 +320,24 @@ qed_sp_update_mcast_bin(struct qed_hwfn *p_hwfn,
}
}
-static int
-qed_sp_vport_update(struct qed_hwfn *p_hwfn,
- struct qed_sp_vport_update_params *p_params,
- enum spq_mode comp_mode,
- struct qed_spq_comp_cb *p_comp_data)
+int qed_sp_vport_update(struct qed_hwfn *p_hwfn,
+ struct qed_sp_vport_update_params *p_params,
+ enum spq_mode comp_mode,
+ struct qed_spq_comp_cb *p_comp_data)
{
struct qed_rss_params *p_rss_params = p_params->rss_params;
struct vport_update_ramrod_data_cmn *p_cmn;
struct qed_sp_init_data init_data;
struct vport_update_ramrod_data *p_ramrod = NULL;
struct qed_spq_entry *p_ent = NULL;
- u8 abs_vport_id = 0;
+ u8 abs_vport_id = 0, val;
int rc = -EINVAL;
+ if (IS_VF(p_hwfn->cdev)) {
+ rc = qed_vf_pf_vport_update(p_hwfn, p_params);
+ return rc;
+ }
+
rc = qed_fw_vport(p_hwfn, p_params->vport_id, &abs_vport_id);
if (rc != 0)
return rc;
@@ -425,6 +366,27 @@ qed_sp_vport_update(struct qed_hwfn *p_hwfn,
p_cmn->accept_any_vlan = p_params->accept_any_vlan;
p_cmn->update_accept_any_vlan_flg =
p_params->update_accept_any_vlan_flg;
+
+ p_cmn->inner_vlan_removal_en = p_params->inner_vlan_removal_flg;
+ val = p_params->update_inner_vlan_removal_flg;
+ p_cmn->update_inner_vlan_removal_en_flg = val;
+
+ p_cmn->default_vlan_en = p_params->default_vlan_enable_flg;
+ val = p_params->update_default_vlan_enable_flg;
+ p_cmn->update_default_vlan_en_flg = val;
+
+ p_cmn->default_vlan = cpu_to_le16(p_params->default_vlan);
+ p_cmn->update_default_vlan_flg = p_params->update_default_vlan_flg;
+
+ p_cmn->silent_vlan_removal_en = p_params->silent_vlan_removal_flg;
+
+ p_ramrod->common.tx_switching_en = p_params->tx_switching_flg;
+ p_cmn->update_tx_switching_en_flg = p_params->update_tx_switching_flg;
+
+ p_cmn->anti_spoofing_en = p_params->anti_spoofing_en;
+ val = p_params->update_anti_spoofing_en_flg;
+ p_ramrod->common.update_anti_spoofing_en_flg = val;
+
rc = qed_sp_vport_update_rss(p_hwfn, p_ramrod, p_rss_params);
if (rc) {
/* Return spq entry which is taken in qed_sp_init_request()*/
@@ -436,12 +398,11 @@ qed_sp_vport_update(struct qed_hwfn *p_hwfn,
qed_sp_update_mcast_bin(p_hwfn, p_ramrod, p_params);
qed_sp_update_accept_mode(p_hwfn, p_ramrod, p_params->accept_flags);
+ qed_sp_vport_update_sge_tpa(p_hwfn, p_ramrod, p_params->sge_tpa_params);
return qed_spq_post(p_hwfn, p_ent, NULL);
}
-static int qed_sp_vport_stop(struct qed_hwfn *p_hwfn,
- u16 opaque_fid,
- u8 vport_id)
+int qed_sp_vport_stop(struct qed_hwfn *p_hwfn, u16 opaque_fid, u8 vport_id)
{
struct vport_stop_ramrod_data *p_ramrod;
struct qed_sp_init_data init_data;
@@ -449,6 +410,9 @@ static int qed_sp_vport_stop(struct qed_hwfn *p_hwfn,
u8 abs_vport_id = 0;
int rc;
+ if (IS_VF(p_hwfn->cdev))
+ return qed_vf_pf_vport_stop(p_hwfn);
+
rc = qed_fw_vport(p_hwfn, vport_id, &abs_vport_id);
if (rc != 0)
return rc;
@@ -470,13 +434,26 @@ static int qed_sp_vport_stop(struct qed_hwfn *p_hwfn,
return qed_spq_post(p_hwfn, p_ent, NULL);
}
+static int
+qed_vf_pf_accept_flags(struct qed_hwfn *p_hwfn,
+ struct qed_filter_accept_flags *p_accept_flags)
+{
+ struct qed_sp_vport_update_params s_params;
+
+ memset(&s_params, 0, sizeof(s_params));
+ memcpy(&s_params.accept_flags, p_accept_flags,
+ sizeof(struct qed_filter_accept_flags));
+
+ return qed_vf_pf_vport_update(p_hwfn, &s_params);
+}
+
static int qed_filter_accept_cmd(struct qed_dev *cdev,
u8 vport,
struct qed_filter_accept_flags accept_flags,
u8 update_accept_any_vlan,
u8 accept_any_vlan,
- enum spq_mode comp_mode,
- struct qed_spq_comp_cb *p_comp_data)
+ enum spq_mode comp_mode,
+ struct qed_spq_comp_cb *p_comp_data)
{
struct qed_sp_vport_update_params vport_update_params;
int i, rc;
@@ -493,6 +470,13 @@ static int qed_filter_accept_cmd(struct qed_dev *cdev,
vport_update_params.opaque_fid = p_hwfn->hw_info.opaque_fid;
+ if (IS_VF(cdev)) {
+ rc = qed_vf_pf_accept_flags(p_hwfn, &accept_flags);
+ if (rc)
+ return rc;
+ continue;
+ }
+
rc = qed_sp_vport_update(p_hwfn, &vport_update_params,
comp_mode, p_comp_data);
if (rc != 0) {
@@ -527,16 +511,14 @@ static int qed_sp_release_queue_cid(
return 0;
}
-static int
-qed_sp_eth_rxq_start_ramrod(struct qed_hwfn *p_hwfn,
- u16 opaque_fid,
- u32 cid,
- struct qed_queue_start_common_params *params,
- u8 stats_id,
- u16 bd_max_bytes,
- dma_addr_t bd_chain_phys_addr,
- dma_addr_t cqe_pbl_addr,
- u16 cqe_pbl_size)
+int qed_sp_eth_rxq_start_ramrod(struct qed_hwfn *p_hwfn,
+ u16 opaque_fid,
+ u32 cid,
+ struct qed_queue_start_common_params *params,
+ u8 stats_id,
+ u16 bd_max_bytes,
+ dma_addr_t bd_chain_phys_addr,
+ dma_addr_t cqe_pbl_addr, u16 cqe_pbl_size)
{
struct rx_queue_start_ramrod_data *p_ramrod = NULL;
struct qed_spq_entry *p_ent = NULL;
@@ -605,8 +587,7 @@ qed_sp_eth_rx_queue_start(struct qed_hwfn *p_hwfn,
u16 bd_max_bytes,
dma_addr_t bd_chain_phys_addr,
dma_addr_t cqe_pbl_addr,
- u16 cqe_pbl_size,
- void __iomem **pp_prod)
+ u16 cqe_pbl_size, void __iomem **pp_prod)
{
struct qed_hw_cid_data *p_rx_cid;
u64 init_prod_val = 0;
@@ -614,6 +595,16 @@ qed_sp_eth_rx_queue_start(struct qed_hwfn *p_hwfn,
u8 abs_stats_id = 0;
int rc;
+ if (IS_VF(p_hwfn->cdev)) {
+ return qed_vf_pf_rxq_start(p_hwfn,
+ params->queue_id,
+ params->sb,
+ params->sb_idx,
+ bd_max_bytes,
+ bd_chain_phys_addr,
+ cqe_pbl_addr, cqe_pbl_size, pp_prod);
+ }
+
rc = qed_fw_l2_queue(p_hwfn, params->queue_id, &abs_l2_queue);
if (rc != 0)
return rc;
@@ -656,10 +647,59 @@ qed_sp_eth_rx_queue_start(struct qed_hwfn *p_hwfn,
return rc;
}
-static int qed_sp_eth_rx_queue_stop(struct qed_hwfn *p_hwfn,
- u16 rx_queue_id,
- bool eq_completion_only,
- bool cqe_completion)
+int qed_sp_eth_rx_queues_update(struct qed_hwfn *p_hwfn,
+ u16 rx_queue_id,
+ u8 num_rxqs,
+ u8 complete_cqe_flg,
+ u8 complete_event_flg,
+ enum spq_mode comp_mode,
+ struct qed_spq_comp_cb *p_comp_data)
+{
+ struct rx_queue_update_ramrod_data *p_ramrod = NULL;
+ struct qed_spq_entry *p_ent = NULL;
+ struct qed_sp_init_data init_data;
+ struct qed_hw_cid_data *p_rx_cid;
+ u16 qid, abs_rx_q_id = 0;
+ int rc = -EINVAL;
+ u8 i;
+
+ memset(&init_data, 0, sizeof(init_data));
+ init_data.comp_mode = comp_mode;
+ init_data.p_comp_data = p_comp_data;
+
+ for (i = 0; i < num_rxqs; i++) {
+ qid = rx_queue_id + i;
+ p_rx_cid = &p_hwfn->p_rx_cids[qid];
+
+ /* Get SPQ entry */
+ init_data.cid = p_rx_cid->cid;
+ init_data.opaque_fid = p_rx_cid->opaque_fid;
+
+ rc = qed_sp_init_request(p_hwfn, &p_ent,
+ ETH_RAMROD_RX_QUEUE_UPDATE,
+ PROTOCOLID_ETH, &init_data);
+ if (rc)
+ return rc;
+
+ p_ramrod = &p_ent->ramrod.rx_queue_update;
+
+ qed_fw_vport(p_hwfn, p_rx_cid->vport_id, &p_ramrod->vport_id);
+ qed_fw_l2_queue(p_hwfn, qid, &abs_rx_q_id);
+ p_ramrod->rx_queue_id = cpu_to_le16(abs_rx_q_id);
+ p_ramrod->complete_cqe_flg = complete_cqe_flg;
+ p_ramrod->complete_event_flg = complete_event_flg;
+
+ rc = qed_spq_post(p_hwfn, p_ent, NULL);
+ if (rc)
+ return rc;
+ }
+
+ return rc;
+}
+
+int qed_sp_eth_rx_queue_stop(struct qed_hwfn *p_hwfn,
+ u16 rx_queue_id,
+ bool eq_completion_only, bool cqe_completion)
{
struct qed_hw_cid_data *p_rx_cid = &p_hwfn->p_rx_cids[rx_queue_id];
struct rx_queue_stop_ramrod_data *p_ramrod = NULL;
@@ -668,6 +708,9 @@ static int qed_sp_eth_rx_queue_stop(struct qed_hwfn *p_hwfn,
u16 abs_rx_q_id = 0;
int rc = -EINVAL;
+ if (IS_VF(p_hwfn->cdev))
+ return qed_vf_pf_rxq_stop(p_hwfn, rx_queue_id, cqe_completion);
+
/* Get SPQ entry */
memset(&init_data, 0, sizeof(init_data));
init_data.cid = p_rx_cid->cid;
@@ -703,15 +746,14 @@ static int qed_sp_eth_rx_queue_stop(struct qed_hwfn *p_hwfn,
return qed_sp_release_queue_cid(p_hwfn, p_rx_cid);
}
-static int
-qed_sp_eth_txq_start_ramrod(struct qed_hwfn *p_hwfn,
- u16 opaque_fid,
- u32 cid,
- struct qed_queue_start_common_params *p_params,
- u8 stats_id,
- dma_addr_t pbl_addr,
- u16 pbl_size,
- union qed_qm_pq_params *p_pq_params)
+int qed_sp_eth_txq_start_ramrod(struct qed_hwfn *p_hwfn,
+ u16 opaque_fid,
+ u32 cid,
+ struct qed_queue_start_common_params *p_params,
+ u8 stats_id,
+ dma_addr_t pbl_addr,
+ u16 pbl_size,
+ union qed_qm_pq_params *p_pq_params)
{
struct tx_queue_start_ramrod_data *p_ramrod = NULL;
struct qed_spq_entry *p_ent = NULL;
@@ -765,14 +807,21 @@ qed_sp_eth_tx_queue_start(struct qed_hwfn *p_hwfn,
u16 opaque_fid,
struct qed_queue_start_common_params *p_params,
dma_addr_t pbl_addr,
- u16 pbl_size,
- void __iomem **pp_doorbell)
+ u16 pbl_size, void __iomem **pp_doorbell)
{
struct qed_hw_cid_data *p_tx_cid;
union qed_qm_pq_params pq_params;
u8 abs_stats_id = 0;
int rc;
+ if (IS_VF(p_hwfn->cdev)) {
+ return qed_vf_pf_txq_start(p_hwfn,
+ p_params->queue_id,
+ p_params->sb,
+ p_params->sb_idx,
+ pbl_addr, pbl_size, pp_doorbell);
+ }
+
rc = qed_fw_vport(p_hwfn, p_params->vport_id, &abs_stats_id);
if (rc)
return rc;
@@ -813,14 +862,16 @@ qed_sp_eth_tx_queue_start(struct qed_hwfn *p_hwfn,
return rc;
}
-static int qed_sp_eth_tx_queue_stop(struct qed_hwfn *p_hwfn,
- u16 tx_queue_id)
+int qed_sp_eth_tx_queue_stop(struct qed_hwfn *p_hwfn, u16 tx_queue_id)
{
struct qed_hw_cid_data *p_tx_cid = &p_hwfn->p_tx_cids[tx_queue_id];
struct qed_spq_entry *p_ent = NULL;
struct qed_sp_init_data init_data;
int rc = -EINVAL;
+ if (IS_VF(p_hwfn->cdev))
+ return qed_vf_pf_txq_stop(p_hwfn, tx_queue_id);
+
/* Get SPQ entry */
memset(&init_data, 0, sizeof(init_data));
init_data.cid = p_tx_cid->cid;
@@ -1016,11 +1067,11 @@ qed_filter_ucast_common(struct qed_hwfn *p_hwfn,
return 0;
}
-static int qed_sp_eth_filter_ucast(struct qed_hwfn *p_hwfn,
- u16 opaque_fid,
- struct qed_filter_ucast *p_filter_cmd,
- enum spq_mode comp_mode,
- struct qed_spq_comp_cb *p_comp_data)
+int qed_sp_eth_filter_ucast(struct qed_hwfn *p_hwfn,
+ u16 opaque_fid,
+ struct qed_filter_ucast *p_filter_cmd,
+ enum spq_mode comp_mode,
+ struct qed_spq_comp_cb *p_comp_data)
{
struct vport_filter_update_ramrod_data *p_ramrod = NULL;
struct qed_spq_entry *p_ent = NULL;
@@ -1118,7 +1169,7 @@ static inline u32 qed_crc32c_le(u32 seed,
return qed_calc_crc32c((u8 *)packet_buf, 8, seed, 0);
}
-static u8 qed_mcast_bin_from_mac(u8 *mac)
+u8 qed_mcast_bin_from_mac(u8 *mac)
{
u32 crc = qed_crc32c_le(ETH_MULTICAST_BIN_FROM_MAC_SEED,
mac, ETH_ALEN);
@@ -1201,11 +1252,10 @@ qed_sp_eth_filter_mcast(struct qed_hwfn *p_hwfn,
return qed_spq_post(p_hwfn, p_ent, NULL);
}
-static int
-qed_filter_mcast_cmd(struct qed_dev *cdev,
- struct qed_filter_mcast *p_filter_cmd,
- enum spq_mode comp_mode,
- struct qed_spq_comp_cb *p_comp_data)
+static int qed_filter_mcast_cmd(struct qed_dev *cdev,
+ struct qed_filter_mcast *p_filter_cmd,
+ enum spq_mode comp_mode,
+ struct qed_spq_comp_cb *p_comp_data)
{
int rc = 0;
int i;
@@ -1221,8 +1271,10 @@ qed_filter_mcast_cmd(struct qed_dev *cdev,
u16 opaque_fid;
- if (rc != 0)
- break;
+ if (IS_VF(cdev)) {
+ qed_vf_pf_filter_mcast(p_hwfn, p_filter_cmd);
+ continue;
+ }
opaque_fid = p_hwfn->hw_info.opaque_fid;
@@ -1247,8 +1299,10 @@ static int qed_filter_ucast_cmd(struct qed_dev *cdev,
struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
u16 opaque_fid;
- if (rc != 0)
- break;
+ if (IS_VF(cdev)) {
+ rc = qed_vf_pf_filter_ucast(p_hwfn, p_filter_cmd);
+ continue;
+ }
opaque_fid = p_hwfn->hw_info.opaque_fid;
@@ -1257,6 +1311,8 @@ static int qed_filter_ucast_cmd(struct qed_dev *cdev,
p_filter_cmd,
comp_mode,
p_comp_data);
+ if (rc != 0)
+ break;
}
return rc;
@@ -1265,12 +1321,19 @@ static int qed_filter_ucast_cmd(struct qed_dev *cdev,
/* Statistics related code */
static void __qed_get_vport_pstats_addrlen(struct qed_hwfn *p_hwfn,
u32 *p_addr,
- u32 *p_len,
- u16 statistics_bin)
+ u32 *p_len, u16 statistics_bin)
{
- *p_addr = BAR0_MAP_REG_PSDM_RAM +
- PSTORM_QUEUE_STAT_OFFSET(statistics_bin);
- *p_len = sizeof(struct eth_pstorm_per_queue_stat);
+ if (IS_PF(p_hwfn->cdev)) {
+ *p_addr = BAR0_MAP_REG_PSDM_RAM +
+ PSTORM_QUEUE_STAT_OFFSET(statistics_bin);
+ *p_len = sizeof(struct eth_pstorm_per_queue_stat);
+ } else {
+ struct qed_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct pfvf_acquire_resp_tlv *p_resp = &p_iov->acquire_resp;
+
+ *p_addr = p_resp->pfdev_info.stats_info.pstats.address;
+ *p_len = p_resp->pfdev_info.stats_info.pstats.len;
+ }
}
static void __qed_get_vport_pstats(struct qed_hwfn *p_hwfn,
@@ -1285,32 +1348,15 @@ static void __qed_get_vport_pstats(struct qed_hwfn *p_hwfn,
statistics_bin);
memset(&pstats, 0, sizeof(pstats));
- qed_memcpy_from(p_hwfn, p_ptt, &pstats,
- pstats_addr, pstats_len);
-
- p_stats->tx_ucast_bytes +=
- HILO_64_REGPAIR(pstats.sent_ucast_bytes);
- p_stats->tx_mcast_bytes +=
- HILO_64_REGPAIR(pstats.sent_mcast_bytes);
- p_stats->tx_bcast_bytes +=
- HILO_64_REGPAIR(pstats.sent_bcast_bytes);
- p_stats->tx_ucast_pkts +=
- HILO_64_REGPAIR(pstats.sent_ucast_pkts);
- p_stats->tx_mcast_pkts +=
- HILO_64_REGPAIR(pstats.sent_mcast_pkts);
- p_stats->tx_bcast_pkts +=
- HILO_64_REGPAIR(pstats.sent_bcast_pkts);
- p_stats->tx_err_drop_pkts +=
- HILO_64_REGPAIR(pstats.error_drop_pkts);
-}
-
-static void __qed_get_vport_tstats_addrlen(struct qed_hwfn *p_hwfn,
- u32 *p_addr,
- u32 *p_len)
-{
- *p_addr = BAR0_MAP_REG_TSDM_RAM +
- TSTORM_PORT_STAT_OFFSET(MFW_PORT(p_hwfn));
- *p_len = sizeof(struct tstorm_per_port_stat);
+ qed_memcpy_from(p_hwfn, p_ptt, &pstats, pstats_addr, pstats_len);
+
+ p_stats->tx_ucast_bytes += HILO_64_REGPAIR(pstats.sent_ucast_bytes);
+ p_stats->tx_mcast_bytes += HILO_64_REGPAIR(pstats.sent_mcast_bytes);
+ p_stats->tx_bcast_bytes += HILO_64_REGPAIR(pstats.sent_bcast_bytes);
+ p_stats->tx_ucast_pkts += HILO_64_REGPAIR(pstats.sent_ucast_pkts);
+ p_stats->tx_mcast_pkts += HILO_64_REGPAIR(pstats.sent_mcast_pkts);
+ p_stats->tx_bcast_pkts += HILO_64_REGPAIR(pstats.sent_bcast_pkts);
+ p_stats->tx_err_drop_pkts += HILO_64_REGPAIR(pstats.error_drop_pkts);
}
static void __qed_get_vport_tstats(struct qed_hwfn *p_hwfn,
@@ -1318,14 +1364,23 @@ static void __qed_get_vport_tstats(struct qed_hwfn *p_hwfn,
struct qed_eth_stats *p_stats,
u16 statistics_bin)
{
- u32 tstats_addr = 0, tstats_len = 0;
struct tstorm_per_port_stat tstats;
+ u32 tstats_addr, tstats_len;
+
+ if (IS_PF(p_hwfn->cdev)) {
+ tstats_addr = BAR0_MAP_REG_TSDM_RAM +
+ TSTORM_PORT_STAT_OFFSET(MFW_PORT(p_hwfn));
+ tstats_len = sizeof(struct tstorm_per_port_stat);
+ } else {
+ struct qed_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct pfvf_acquire_resp_tlv *p_resp = &p_iov->acquire_resp;
- __qed_get_vport_tstats_addrlen(p_hwfn, &tstats_addr, &tstats_len);
+ tstats_addr = p_resp->pfdev_info.stats_info.tstats.address;
+ tstats_len = p_resp->pfdev_info.stats_info.tstats.len;
+ }
memset(&tstats, 0, sizeof(tstats));
- qed_memcpy_from(p_hwfn, p_ptt, &tstats,
- tstats_addr, tstats_len);
+ qed_memcpy_from(p_hwfn, p_ptt, &tstats, tstats_addr, tstats_len);
p_stats->mftag_filter_discards +=
HILO_64_REGPAIR(tstats.mftag_filter_discard);
@@ -1335,12 +1390,19 @@ static void __qed_get_vport_tstats(struct qed_hwfn *p_hwfn,
static void __qed_get_vport_ustats_addrlen(struct qed_hwfn *p_hwfn,
u32 *p_addr,
- u32 *p_len,
- u16 statistics_bin)
+ u32 *p_len, u16 statistics_bin)
{
- *p_addr = BAR0_MAP_REG_USDM_RAM +
- USTORM_QUEUE_STAT_OFFSET(statistics_bin);
- *p_len = sizeof(struct eth_ustorm_per_queue_stat);
+ if (IS_PF(p_hwfn->cdev)) {
+ *p_addr = BAR0_MAP_REG_USDM_RAM +
+ USTORM_QUEUE_STAT_OFFSET(statistics_bin);
+ *p_len = sizeof(struct eth_ustorm_per_queue_stat);
+ } else {
+ struct qed_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct pfvf_acquire_resp_tlv *p_resp = &p_iov->acquire_resp;
+
+ *p_addr = p_resp->pfdev_info.stats_info.ustats.address;
+ *p_len = p_resp->pfdev_info.stats_info.ustats.len;
+ }
}
static void __qed_get_vport_ustats(struct qed_hwfn *p_hwfn,
@@ -1355,31 +1417,31 @@ static void __qed_get_vport_ustats(struct qed_hwfn *p_hwfn,
statistics_bin);
memset(&ustats, 0, sizeof(ustats));
- qed_memcpy_from(p_hwfn, p_ptt, &ustats,
- ustats_addr, ustats_len);
-
- p_stats->rx_ucast_bytes +=
- HILO_64_REGPAIR(ustats.rcv_ucast_bytes);
- p_stats->rx_mcast_bytes +=
- HILO_64_REGPAIR(ustats.rcv_mcast_bytes);
- p_stats->rx_bcast_bytes +=
- HILO_64_REGPAIR(ustats.rcv_bcast_bytes);
- p_stats->rx_ucast_pkts +=
- HILO_64_REGPAIR(ustats.rcv_ucast_pkts);
- p_stats->rx_mcast_pkts +=
- HILO_64_REGPAIR(ustats.rcv_mcast_pkts);
- p_stats->rx_bcast_pkts +=
- HILO_64_REGPAIR(ustats.rcv_bcast_pkts);
+ qed_memcpy_from(p_hwfn, p_ptt, &ustats, ustats_addr, ustats_len);
+
+ p_stats->rx_ucast_bytes += HILO_64_REGPAIR(ustats.rcv_ucast_bytes);
+ p_stats->rx_mcast_bytes += HILO_64_REGPAIR(ustats.rcv_mcast_bytes);
+ p_stats->rx_bcast_bytes += HILO_64_REGPAIR(ustats.rcv_bcast_bytes);
+ p_stats->rx_ucast_pkts += HILO_64_REGPAIR(ustats.rcv_ucast_pkts);
+ p_stats->rx_mcast_pkts += HILO_64_REGPAIR(ustats.rcv_mcast_pkts);
+ p_stats->rx_bcast_pkts += HILO_64_REGPAIR(ustats.rcv_bcast_pkts);
}
static void __qed_get_vport_mstats_addrlen(struct qed_hwfn *p_hwfn,
u32 *p_addr,
- u32 *p_len,
- u16 statistics_bin)
+ u32 *p_len, u16 statistics_bin)
{
- *p_addr = BAR0_MAP_REG_MSDM_RAM +
- MSTORM_QUEUE_STAT_OFFSET(statistics_bin);
- *p_len = sizeof(struct eth_mstorm_per_queue_stat);
+ if (IS_PF(p_hwfn->cdev)) {
+ *p_addr = BAR0_MAP_REG_MSDM_RAM +
+ MSTORM_QUEUE_STAT_OFFSET(statistics_bin);
+ *p_len = sizeof(struct eth_mstorm_per_queue_stat);
+ } else {
+ struct qed_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct pfvf_acquire_resp_tlv *p_resp = &p_iov->acquire_resp;
+
+ *p_addr = p_resp->pfdev_info.stats_info.mstats.address;
+ *p_len = p_resp->pfdev_info.stats_info.mstats.len;
+ }
}
static void __qed_get_vport_mstats(struct qed_hwfn *p_hwfn,
@@ -1394,21 +1456,17 @@ static void __qed_get_vport_mstats(struct qed_hwfn *p_hwfn,
statistics_bin);
memset(&mstats, 0, sizeof(mstats));
- qed_memcpy_from(p_hwfn, p_ptt, &mstats,
- mstats_addr, mstats_len);
+ qed_memcpy_from(p_hwfn, p_ptt, &mstats, mstats_addr, mstats_len);
- p_stats->no_buff_discards +=
- HILO_64_REGPAIR(mstats.no_buff_discard);
+ p_stats->no_buff_discards += HILO_64_REGPAIR(mstats.no_buff_discard);
p_stats->packet_too_big_discard +=
HILO_64_REGPAIR(mstats.packet_too_big_discard);
- p_stats->ttl0_discard +=
- HILO_64_REGPAIR(mstats.ttl0_discard);
+ p_stats->ttl0_discard += HILO_64_REGPAIR(mstats.ttl0_discard);
p_stats->tpa_coalesced_pkts +=
HILO_64_REGPAIR(mstats.tpa_coalesced_pkts);
p_stats->tpa_coalesced_events +=
HILO_64_REGPAIR(mstats.tpa_coalesced_events);
- p_stats->tpa_aborts_num +=
- HILO_64_REGPAIR(mstats.tpa_aborts_num);
+ p_stats->tpa_aborts_num += HILO_64_REGPAIR(mstats.tpa_aborts_num);
p_stats->tpa_coalesced_bytes +=
HILO_64_REGPAIR(mstats.tpa_coalesced_bytes);
}
@@ -1428,16 +1486,16 @@ static void __qed_get_vport_port_stats(struct qed_hwfn *p_hwfn,
sizeof(port_stats));
p_stats->rx_64_byte_packets += port_stats.pmm.r64;
- p_stats->rx_127_byte_packets += port_stats.pmm.r127;
- p_stats->rx_255_byte_packets += port_stats.pmm.r255;
- p_stats->rx_511_byte_packets += port_stats.pmm.r511;
- p_stats->rx_1023_byte_packets += port_stats.pmm.r1023;
- p_stats->rx_1518_byte_packets += port_stats.pmm.r1518;
- p_stats->rx_1522_byte_packets += port_stats.pmm.r1522;
- p_stats->rx_2047_byte_packets += port_stats.pmm.r2047;
- p_stats->rx_4095_byte_packets += port_stats.pmm.r4095;
- p_stats->rx_9216_byte_packets += port_stats.pmm.r9216;
- p_stats->rx_16383_byte_packets += port_stats.pmm.r16383;
+ p_stats->rx_65_to_127_byte_packets += port_stats.pmm.r127;
+ p_stats->rx_128_to_255_byte_packets += port_stats.pmm.r255;
+ p_stats->rx_256_to_511_byte_packets += port_stats.pmm.r511;
+ p_stats->rx_512_to_1023_byte_packets += port_stats.pmm.r1023;
+ p_stats->rx_1024_to_1518_byte_packets += port_stats.pmm.r1518;
+ p_stats->rx_1519_to_1522_byte_packets += port_stats.pmm.r1522;
+ p_stats->rx_1519_to_2047_byte_packets += port_stats.pmm.r2047;
+ p_stats->rx_2048_to_4095_byte_packets += port_stats.pmm.r4095;
+ p_stats->rx_4096_to_9216_byte_packets += port_stats.pmm.r9216;
+ p_stats->rx_9217_to_16383_byte_packets += port_stats.pmm.r16383;
p_stats->rx_crc_errors += port_stats.pmm.rfcs;
p_stats->rx_mac_crtl_frames += port_stats.pmm.rxcf;
p_stats->rx_pause_frames += port_stats.pmm.rxpf;
@@ -1481,44 +1539,49 @@ static void __qed_get_vport_port_stats(struct qed_hwfn *p_hwfn,
static void __qed_get_vport_stats(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt,
struct qed_eth_stats *stats,
- u16 statistics_bin)
+ u16 statistics_bin, bool b_get_port_stats)
{
__qed_get_vport_mstats(p_hwfn, p_ptt, stats, statistics_bin);
__qed_get_vport_ustats(p_hwfn, p_ptt, stats, statistics_bin);
__qed_get_vport_tstats(p_hwfn, p_ptt, stats, statistics_bin);
__qed_get_vport_pstats(p_hwfn, p_ptt, stats, statistics_bin);
- if (p_hwfn->mcp_info)
+ if (b_get_port_stats && p_hwfn->mcp_info)
__qed_get_vport_port_stats(p_hwfn, p_ptt, stats);
}
static void _qed_get_vport_stats(struct qed_dev *cdev,
struct qed_eth_stats *stats)
{
- u8 fw_vport = 0;
- int i;
+ u8 fw_vport = 0;
+ int i;
memset(stats, 0, sizeof(*stats));
for_each_hwfn(cdev, i) {
struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
- struct qed_ptt *p_ptt;
-
- /* The main vport index is relative first */
- if (qed_fw_vport(p_hwfn, 0, &fw_vport)) {
- DP_ERR(p_hwfn, "No vport available!\n");
- continue;
+ struct qed_ptt *p_ptt = IS_PF(cdev) ? qed_ptt_acquire(p_hwfn)
+ : NULL;
+
+ if (IS_PF(cdev)) {
+ /* The main vport index is relative first */
+ if (qed_fw_vport(p_hwfn, 0, &fw_vport)) {
+ DP_ERR(p_hwfn, "No vport available!\n");
+ goto out;
+ }
}
- p_ptt = qed_ptt_acquire(p_hwfn);
- if (!p_ptt) {
+ if (IS_PF(cdev) && !p_ptt) {
DP_ERR(p_hwfn, "Failed to acquire ptt\n");
continue;
}
- __qed_get_vport_stats(p_hwfn, p_ptt, stats, fw_vport);
+ __qed_get_vport_stats(p_hwfn, p_ptt, stats, fw_vport,
+ IS_PF(cdev) ? true : false);
- qed_ptt_release(p_hwfn, p_ptt);
+out:
+ if (IS_PF(cdev) && p_ptt)
+ qed_ptt_release(p_hwfn, p_ptt);
}
}
@@ -1552,10 +1615,11 @@ void qed_reset_vport_stats(struct qed_dev *cdev)
struct eth_mstorm_per_queue_stat mstats;
struct eth_ustorm_per_queue_stat ustats;
struct eth_pstorm_per_queue_stat pstats;
- struct qed_ptt *p_ptt = qed_ptt_acquire(p_hwfn);
+ struct qed_ptt *p_ptt = IS_PF(cdev) ? qed_ptt_acquire(p_hwfn)
+ : NULL;
u32 addr = 0, len = 0;
- if (!p_ptt) {
+ if (IS_PF(cdev) && !p_ptt) {
DP_ERR(p_hwfn, "Failed to acquire ptt\n");
continue;
}
@@ -1572,7 +1636,8 @@ void qed_reset_vport_stats(struct qed_dev *cdev)
__qed_get_vport_pstats_addrlen(p_hwfn, &addr, &len, 0);
qed_memcpy_to(p_hwfn, p_ptt, addr, &pstats, len);
- qed_ptt_release(p_hwfn, p_ptt);
+ if (IS_PF(cdev))
+ qed_ptt_release(p_hwfn, p_ptt);
}
/* PORT statistics are not necessarily reset, so we need to
@@ -1593,32 +1658,61 @@ static int qed_fill_eth_dev_info(struct qed_dev *cdev,
info->num_tc = 1;
- if (cdev->int_params.out.int_mode == QED_INT_MODE_MSIX) {
- for_each_hwfn(cdev, i)
- info->num_queues += FEAT_NUM(&cdev->hwfns[i],
- QED_PF_L2_QUE);
- if (cdev->int_params.fp_msix_cnt)
- info->num_queues = min_t(u8, info->num_queues,
- cdev->int_params.fp_msix_cnt);
+ if (IS_PF(cdev)) {
+ if (cdev->int_params.out.int_mode == QED_INT_MODE_MSIX) {
+ for_each_hwfn(cdev, i)
+ info->num_queues +=
+ FEAT_NUM(&cdev->hwfns[i], QED_PF_L2_QUE);
+ if (cdev->int_params.fp_msix_cnt)
+ info->num_queues =
+ min_t(u8, info->num_queues,
+ cdev->int_params.fp_msix_cnt);
+ } else {
+ info->num_queues = cdev->num_hwfns;
+ }
+
+ info->num_vlan_filters = RESC_NUM(&cdev->hwfns[0], QED_VLAN);
+ ether_addr_copy(info->port_mac,
+ cdev->hwfns[0].hw_info.hw_mac_addr);
} else {
- info->num_queues = cdev->num_hwfns;
- }
+ qed_vf_get_num_rxqs(QED_LEADING_HWFN(cdev), &info->num_queues);
+ if (cdev->num_hwfns > 1) {
+ u8 queues = 0;
- info->num_vlan_filters = RESC_NUM(&cdev->hwfns[0], QED_VLAN);
- ether_addr_copy(info->port_mac,
- cdev->hwfns[0].hw_info.hw_mac_addr);
+ qed_vf_get_num_rxqs(&cdev->hwfns[1], &queues);
+ info->num_queues += queues;
+ }
+
+ qed_vf_get_num_vlan_filters(&cdev->hwfns[0],
+ &info->num_vlan_filters);
+ qed_vf_get_port_mac(&cdev->hwfns[0], info->port_mac);
+ }
qed_fill_dev_info(cdev, &info->common);
+ if (IS_VF(cdev))
+ memset(info->common.hw_mac, 0, ETH_ALEN);
+
return 0;
}
static void qed_register_eth_ops(struct qed_dev *cdev,
- struct qed_eth_cb_ops *ops,
- void *cookie)
+ struct qed_eth_cb_ops *ops, void *cookie)
+{
+ cdev->protocol_ops.eth = ops;
+ cdev->ops_cookie = cookie;
+
+ /* For VF, we start bulletin reading */
+ if (IS_VF(cdev))
+ qed_vf_start_iov_wq(cdev);
+}
+
+static bool qed_check_mac(struct qed_dev *cdev, u8 *mac)
{
- cdev->protocol_ops.eth = ops;
- cdev->ops_cookie = cookie;
+ if (IS_PF(cdev))
+ return true;
+
+ return qed_vf_check_mac(&cdev->hwfns[0], mac);
}
static int qed_start_vport(struct qed_dev *cdev,
@@ -1633,6 +1727,7 @@ static int qed_start_vport(struct qed_dev *cdev,
start.tpa_mode = params->gro_enable ? QED_TPA_MODE_GRO :
QED_TPA_MODE_NONE;
start.remove_inner_vlan = params->remove_inner_vlan;
+ start.only_untagged = true; /* untagged only */
start.drop_ttl0 = params->drop_ttl0;
start.opaque_fid = p_hwfn->hw_info.opaque_fid;
start.concrete_fid = p_hwfn->hw_info.concrete_fid;
@@ -1699,6 +1794,8 @@ static int qed_update_vport(struct qed_dev *cdev,
params->update_vport_active_flg;
sp_params.vport_active_rx_flg = params->vport_active_flg;
sp_params.vport_active_tx_flg = params->vport_active_flg;
+ sp_params.update_tx_switching_flg = params->update_tx_switching_flg;
+ sp_params.tx_switching_flg = params->tx_switching_flg;
sp_params.accept_any_vlan = params->accept_any_vlan;
sp_params.update_accept_any_vlan_flg =
params->update_accept_any_vlan_flg;
@@ -1744,9 +1841,7 @@ static int qed_update_vport(struct qed_dev *cdev,
sp_rss_params.update_rss_capabilities = 1;
sp_rss_params.update_rss_ind_table = 1;
sp_rss_params.update_rss_key = 1;
- sp_rss_params.rss_caps = QED_RSS_IPV4 |
- QED_RSS_IPV6 |
- QED_RSS_IPV4_TCP | QED_RSS_IPV6_TCP;
+ sp_rss_params.rss_caps = params->rss_params.rss_caps;
sp_rss_params.rss_table_size_log = 7; /* 2^7 = 128 */
memcpy(sp_rss_params.rss_ind_table,
params->rss_params.rss_ind_table,
@@ -1899,6 +1994,39 @@ static int qed_stop_txq(struct qed_dev *cdev,
return 0;
}
+static int qed_tunn_configure(struct qed_dev *cdev,
+ struct qed_tunn_params *tunn_params)
+{
+ struct qed_tunn_update_params tunn_info;
+ int i, rc;
+
+ if (IS_VF(cdev))
+ return 0;
+
+ memset(&tunn_info, 0, sizeof(tunn_info));
+ if (tunn_params->update_vxlan_port == 1) {
+ tunn_info.update_vxlan_udp_port = 1;
+ tunn_info.vxlan_udp_port = tunn_params->vxlan_port;
+ }
+
+ if (tunn_params->update_geneve_port == 1) {
+ tunn_info.update_geneve_udp_port = 1;
+ tunn_info.geneve_udp_port = tunn_params->geneve_port;
+ }
+
+ for_each_hwfn(cdev, i) {
+ struct qed_hwfn *hwfn = &cdev->hwfns[i];
+
+ rc = qed_sp_pf_update_tunn_cfg(hwfn, &tunn_info,
+ QED_SPQ_MODE_EBLOCK, NULL);
+
+ if (rc)
+ return rc;
+ }
+
+ return 0;
+}
+
static int qed_configure_filter_rx_mode(struct qed_dev *cdev,
enum qed_filter_rx_mode_type type)
{
@@ -2026,10 +2154,18 @@ static int qed_fp_cqe_completion(struct qed_dev *dev,
cqe);
}
+#ifdef CONFIG_QED_SRIOV
+extern const struct qed_iov_hv_ops qed_iov_ops_pass;
+#endif
+
static const struct qed_eth_ops qed_eth_ops_pass = {
.common = &qed_common_ops_pass,
+#ifdef CONFIG_QED_SRIOV
+ .iov = &qed_iov_ops_pass,
+#endif
.fill_dev_info = &qed_fill_eth_dev_info,
.register_ops = &qed_register_eth_ops,
+ .check_mac = &qed_check_mac,
.vport_start = &qed_start_vport,
.vport_stop = &qed_stop_vport,
.vport_update = &qed_update_vport,
@@ -2041,16 +2177,11 @@ static const struct qed_eth_ops qed_eth_ops_pass = {
.fastpath_stop = &qed_fastpath_stop,
.eth_cqe_completion = &qed_fp_cqe_completion,
.get_vport_stats = &qed_get_vport_stats,
+ .tunn_config = &qed_tunn_configure,
};
-const struct qed_eth_ops *qed_get_eth_ops(u32 version)
+const struct qed_eth_ops *qed_get_eth_ops(void)
{
- if (version != QED_ETH_INTERFACE_VERSION) {
- pr_notice("Cannot supply ethtool operations [%08x != %08x]\n",
- version, QED_ETH_INTERFACE_VERSION);
- return NULL;
- }
-
return &qed_eth_ops_pass;
}
EXPORT_SYMBOL(qed_get_eth_ops);
diff --git a/drivers/net/ethernet/qlogic/qed/qed_l2.h b/drivers/net/ethernet/qlogic/qed/qed_l2.h
new file mode 100644
index 000000000000..002114543451
--- /dev/null
+++ b/drivers/net/ethernet/qlogic/qed/qed_l2.h
@@ -0,0 +1,239 @@
+/* QLogic qed NIC Driver
+ * Copyright (c) 2015 QLogic Corporation
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+#ifndef _QED_L2_H
+#define _QED_L2_H
+#include <linux/types.h>
+#include <linux/io.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/qed/qed_eth_if.h>
+#include "qed.h"
+#include "qed_hw.h"
+#include "qed_sp.h"
+
+struct qed_sge_tpa_params {
+ u8 max_buffers_per_cqe;
+
+ u8 update_tpa_en_flg;
+ u8 tpa_ipv4_en_flg;
+ u8 tpa_ipv6_en_flg;
+ u8 tpa_ipv4_tunn_en_flg;
+ u8 tpa_ipv6_tunn_en_flg;
+
+ u8 update_tpa_param_flg;
+ u8 tpa_pkt_split_flg;
+ u8 tpa_hdr_data_split_flg;
+ u8 tpa_gro_consistent_flg;
+ u8 tpa_max_aggs_num;
+ u16 tpa_max_size;
+ u16 tpa_min_size_to_start;
+ u16 tpa_min_size_to_cont;
+};
+
+enum qed_filter_opcode {
+ QED_FILTER_ADD,
+ QED_FILTER_REMOVE,
+ QED_FILTER_MOVE,
+ QED_FILTER_REPLACE, /* Delete all MACs and add new one instead */
+ QED_FILTER_FLUSH, /* Removes all filters */
+};
+
+enum qed_filter_ucast_type {
+ QED_FILTER_MAC,
+ QED_FILTER_VLAN,
+ QED_FILTER_MAC_VLAN,
+ QED_FILTER_INNER_MAC,
+ QED_FILTER_INNER_VLAN,
+ QED_FILTER_INNER_PAIR,
+ QED_FILTER_INNER_MAC_VNI_PAIR,
+ QED_FILTER_MAC_VNI_PAIR,
+ QED_FILTER_VNI,
+};
+
+struct qed_filter_ucast {
+ enum qed_filter_opcode opcode;
+ enum qed_filter_ucast_type type;
+ u8 is_rx_filter;
+ u8 is_tx_filter;
+ u8 vport_to_add_to;
+ u8 vport_to_remove_from;
+ unsigned char mac[ETH_ALEN];
+ u8 assert_on_error;
+ u16 vlan;
+ u32 vni;
+};
+
+struct qed_filter_mcast {
+ /* MOVE is not supported for multicast */
+ enum qed_filter_opcode opcode;
+ u8 vport_to_add_to;
+ u8 vport_to_remove_from;
+ u8 num_mc_addrs;
+#define QED_MAX_MC_ADDRS 64
+ unsigned char mac[QED_MAX_MC_ADDRS][ETH_ALEN];
+};
+
+int qed_sp_eth_rx_queue_stop(struct qed_hwfn *p_hwfn,
+ u16 rx_queue_id,
+ bool eq_completion_only, bool cqe_completion);
+
+int qed_sp_eth_tx_queue_stop(struct qed_hwfn *p_hwfn, u16 tx_queue_id);
+
+enum qed_tpa_mode {
+ QED_TPA_MODE_NONE,
+ QED_TPA_MODE_UNUSED,
+ QED_TPA_MODE_GRO,
+ QED_TPA_MODE_MAX
+};
+
+struct qed_sp_vport_start_params {
+ enum qed_tpa_mode tpa_mode;
+ bool remove_inner_vlan;
+ bool tx_switching;
+ bool only_untagged;
+ bool drop_ttl0;
+ u8 max_buffers_per_cqe;
+ u32 concrete_fid;
+ u16 opaque_fid;
+ u8 vport_id;
+ u16 mtu;
+};
+
+int qed_sp_eth_vport_start(struct qed_hwfn *p_hwfn,
+ struct qed_sp_vport_start_params *p_params);
+
+struct qed_rss_params {
+ u8 update_rss_config;
+ u8 rss_enable;
+ u8 rss_eng_id;
+ u8 update_rss_capabilities;
+ u8 update_rss_ind_table;
+ u8 update_rss_key;
+ u8 rss_caps;
+ u8 rss_table_size_log;
+ u16 rss_ind_table[QED_RSS_IND_TABLE_SIZE];
+ u32 rss_key[QED_RSS_KEY_SIZE];
+};
+
+struct qed_filter_accept_flags {
+ u8 update_rx_mode_config;
+ u8 update_tx_mode_config;
+ u8 rx_accept_filter;
+ u8 tx_accept_filter;
+#define QED_ACCEPT_NONE 0x01
+#define QED_ACCEPT_UCAST_MATCHED 0x02
+#define QED_ACCEPT_UCAST_UNMATCHED 0x04
+#define QED_ACCEPT_MCAST_MATCHED 0x08
+#define QED_ACCEPT_MCAST_UNMATCHED 0x10
+#define QED_ACCEPT_BCAST 0x20
+};
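A minimal sketch (not part of the patch) of composing these accept-flag bits: it enables every RX match class, which is how a promiscuous RX mode would be expressed through the accept_flags member of qed_sp_vport_update_params; the helper name example_set_rx_promisc() is illustrative only.

        /* Illustrative sketch -- not in this patch. */
        static void example_set_rx_promisc(struct qed_filter_accept_flags *p_flags)
        {
                p_flags->update_rx_mode_config = 1;
                p_flags->rx_accept_filter = QED_ACCEPT_UCAST_MATCHED |
                                            QED_ACCEPT_UCAST_UNMATCHED |
                                            QED_ACCEPT_MCAST_MATCHED |
                                            QED_ACCEPT_MCAST_UNMATCHED |
                                            QED_ACCEPT_BCAST;
        }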
+
+struct qed_sp_vport_update_params {
+ u16 opaque_fid;
+ u8 vport_id;
+ u8 update_vport_active_rx_flg;
+ u8 vport_active_rx_flg;
+ u8 update_vport_active_tx_flg;
+ u8 vport_active_tx_flg;
+ u8 update_inner_vlan_removal_flg;
+ u8 inner_vlan_removal_flg;
+ u8 silent_vlan_removal_flg;
+ u8 update_default_vlan_enable_flg;
+ u8 default_vlan_enable_flg;
+ u8 update_default_vlan_flg;
+ u16 default_vlan;
+ u8 update_tx_switching_flg;
+ u8 tx_switching_flg;
+ u8 update_approx_mcast_flg;
+ u8 update_anti_spoofing_en_flg;
+ u8 anti_spoofing_en;
+ u8 update_accept_any_vlan_flg;
+ u8 accept_any_vlan;
+ unsigned long bins[8];
+ struct qed_rss_params *rss_params;
+ struct qed_filter_accept_flags accept_flags;
+ struct qed_sge_tpa_params *sge_tpa_params;
+};
+
+int qed_sp_vport_update(struct qed_hwfn *p_hwfn,
+ struct qed_sp_vport_update_params *p_params,
+ enum spq_mode comp_mode,
+ struct qed_spq_comp_cb *p_comp_data);
+
+/**
+ * @brief qed_sp_vport_stop -
+ *
+ * This ramrod closes a VPort after all its RX and TX queues are terminated.
+ * An Assert is generated if any queues are left open.
+ *
+ * @param p_hwfn
+ * @param opaque_fid
+ * @param vport_id VPort ID
+ *
+ * @return int
+ */
+int qed_sp_vport_stop(struct qed_hwfn *p_hwfn, u16 opaque_fid, u8 vport_id);
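A minimal teardown sketch (not part of the patch), assuming the hypothetical caller example_vport_teardown() and a single RX/TX queue pair; it only illustrates the ordering the comment above demands: close every queue before posting the VPort-stop ramrod.

        /* Illustrative sketch -- not in this patch. */
        static int example_vport_teardown(struct qed_hwfn *p_hwfn, u16 opaque_fid,
                                          u8 vport_id, u16 rxq_id, u16 txq_id)
        {
                int rc;

                /* Queues first; the VPort-stop ramrod asserts if any are open. */
                rc = qed_sp_eth_rx_queue_stop(p_hwfn, rxq_id, false, true);
                if (rc)
                        return rc;

                rc = qed_sp_eth_tx_queue_stop(p_hwfn, txq_id);
                if (rc)
                        return rc;

                return qed_sp_vport_stop(p_hwfn, opaque_fid, vport_id);
        }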
+
+int qed_sp_eth_filter_ucast(struct qed_hwfn *p_hwfn,
+ u16 opaque_fid,
+ struct qed_filter_ucast *p_filter_cmd,
+ enum spq_mode comp_mode,
+ struct qed_spq_comp_cb *p_comp_data);
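A minimal sketch (not part of the patch) of filling struct qed_filter_ucast for a combined RX/TX MAC add on vport 0; example_add_mac() is a hypothetical name and blocking (EBLOCK) completion is assumed.

        /* Illustrative sketch -- not in this patch. */
        static int example_add_mac(struct qed_hwfn *p_hwfn, u16 opaque_fid,
                                   const u8 *mac)
        {
                struct qed_filter_ucast ucast;

                memset(&ucast, 0, sizeof(ucast));
                ucast.opcode = QED_FILTER_ADD;
                ucast.type = QED_FILTER_MAC;
                ucast.is_rx_filter = 1;
                ucast.is_tx_filter = 1;
                ucast.vport_to_add_to = 0;
                memcpy(ucast.mac, mac, ETH_ALEN);

                return qed_sp_eth_filter_ucast(p_hwfn, opaque_fid, &ucast,
                                               QED_SPQ_MODE_EBLOCK, NULL);
        }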
+
+/**
+ * @brief qed_sp_eth_rx_queues_update -
+ *
+ * This ramrod updates an RX queue. It is used for setting the active state
+ * of the queue and updating the TPA and SGE parameters.
+ *
+ * @note At the moment - only used by non-Linux VFs.
+ *
+ * @param p_hwfn
+ * @param rx_queue_id RX Queue ID
+ * @param num_rxqs Allows updating multiple RX
+ * queues, from rx_queue_id to
+ * (rx_queue_id + num_rxqs - 1)
+ * @param complete_cqe_flg Post completion to the CQE Ring if set
+ * @param complete_event_flg Post completion to the Event Ring if set
+ *
+ * @return int
+ */
+
+int
+qed_sp_eth_rx_queues_update(struct qed_hwfn *p_hwfn,
+ u16 rx_queue_id,
+ u8 num_rxqs,
+ u8 complete_cqe_flg,
+ u8 complete_event_flg,
+ enum spq_mode comp_mode,
+ struct qed_spq_comp_cb *p_comp_data);
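A minimal caller sketch (not part of the patch): refreshing the first four RX queues of a hwfn with event-ring completion, blocking until the ramrods complete; the queue range and the example_refresh_rxqs() name are assumptions.

        /* Illustrative sketch -- not in this patch. */
        static int example_refresh_rxqs(struct qed_hwfn *p_hwfn)
        {
                return qed_sp_eth_rx_queues_update(p_hwfn,
                                                   0,   /* first rx_queue_id */
                                                   4,   /* num_rxqs */
                                                   0,   /* complete_cqe_flg */
                                                   1,   /* complete_event_flg */
                                                   QED_SPQ_MODE_EBLOCK, NULL);
        }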
+
+int qed_sp_eth_vport_start(struct qed_hwfn *p_hwfn,
+ struct qed_sp_vport_start_params *p_params);
+
+int qed_sp_eth_rxq_start_ramrod(struct qed_hwfn *p_hwfn,
+ u16 opaque_fid,
+ u32 cid,
+ struct qed_queue_start_common_params *params,
+ u8 stats_id,
+ u16 bd_max_bytes,
+ dma_addr_t bd_chain_phys_addr,
+ dma_addr_t cqe_pbl_addr, u16 cqe_pbl_size);
+
+int qed_sp_eth_txq_start_ramrod(struct qed_hwfn *p_hwfn,
+ u16 opaque_fid,
+ u32 cid,
+ struct qed_queue_start_common_params *p_params,
+ u8 stats_id,
+ dma_addr_t pbl_addr,
+ u16 pbl_size,
+ union qed_qm_pq_params *p_pq_params);
+
+u8 qed_mcast_bin_from_mac(u8 *mac);
+
+#endif /* _QED_L2_H */
diff --git a/drivers/net/ethernet/qlogic/qed/qed_main.c b/drivers/net/ethernet/qlogic/qed/qed_main.c
index 26d40db07ddd..8b22f87033ce 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_main.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_main.c
@@ -24,10 +24,12 @@
#include <linux/qed/qed_if.h>
#include "qed.h"
+#include "qed_sriov.h"
#include "qed_sp.h"
#include "qed_dev_api.h"
#include "qed_mcp.h"
#include "qed_hw.h"
+#include "qed_selftest.h"
static char version[] =
"QLogic FastLinQ 4xxxx Core Module qed " DRV_MODULE_VERSION "\n";
@@ -124,7 +126,7 @@ static int qed_init_pci(struct qed_dev *cdev,
goto err1;
}
- if (!(pci_resource_flags(pdev, 2) & IORESOURCE_MEM)) {
+ if (IS_PF(cdev) && !(pci_resource_flags(pdev, 2) & IORESOURCE_MEM)) {
DP_NOTICE(cdev, "No memory region found in bar #2\n");
rc = -EIO;
goto err1;
@@ -156,7 +158,7 @@ static int qed_init_pci(struct qed_dev *cdev,
}
cdev->pci_params.pm_cap = pci_find_capability(pdev, PCI_CAP_ID_PM);
- if (cdev->pci_params.pm_cap == 0)
+ if (IS_PF(cdev) && !cdev->pci_params.pm_cap)
DP_NOTICE(cdev, "Cannot find power management capability\n");
rc = qed_set_coherency_mask(cdev);
@@ -174,12 +176,14 @@ static int qed_init_pci(struct qed_dev *cdev,
goto err2;
}
- cdev->db_phys_addr = pci_resource_start(cdev->pdev, 2);
- cdev->db_size = pci_resource_len(cdev->pdev, 2);
- cdev->doorbells = ioremap_wc(cdev->db_phys_addr, cdev->db_size);
- if (!cdev->doorbells) {
- DP_NOTICE(cdev, "Cannot map doorbell space\n");
- return -ENOMEM;
+ if (IS_PF(cdev)) {
+ cdev->db_phys_addr = pci_resource_start(cdev->pdev, 2);
+ cdev->db_size = pci_resource_len(cdev->pdev, 2);
+ cdev->doorbells = ioremap_wc(cdev->db_phys_addr, cdev->db_size);
+ if (!cdev->doorbells) {
+ DP_NOTICE(cdev, "Cannot map doorbell space\n");
+ return -ENOMEM;
+ }
}
return 0;
@@ -206,20 +210,33 @@ int qed_fill_dev_info(struct qed_dev *cdev,
dev_info->is_mf_default = IS_MF_DEFAULT(&cdev->hwfns[0]);
ether_addr_copy(dev_info->hw_mac, cdev->hwfns[0].hw_info.hw_mac_addr);
- dev_info->fw_major = FW_MAJOR_VERSION;
- dev_info->fw_minor = FW_MINOR_VERSION;
- dev_info->fw_rev = FW_REVISION_VERSION;
- dev_info->fw_eng = FW_ENGINEERING_VERSION;
- dev_info->mf_mode = cdev->mf_mode;
+ if (IS_PF(cdev)) {
+ dev_info->fw_major = FW_MAJOR_VERSION;
+ dev_info->fw_minor = FW_MINOR_VERSION;
+ dev_info->fw_rev = FW_REVISION_VERSION;
+ dev_info->fw_eng = FW_ENGINEERING_VERSION;
+ dev_info->mf_mode = cdev->mf_mode;
+ dev_info->tx_switching = true;
+ } else {
+ qed_vf_get_fw_version(&cdev->hwfns[0], &dev_info->fw_major,
+ &dev_info->fw_minor, &dev_info->fw_rev,
+ &dev_info->fw_eng);
+ }
- qed_mcp_get_mfw_ver(cdev, &dev_info->mfw_rev);
+ if (IS_PF(cdev)) {
+ ptt = qed_ptt_acquire(QED_LEADING_HWFN(cdev));
+ if (ptt) {
+ qed_mcp_get_mfw_ver(QED_LEADING_HWFN(cdev), ptt,
+ &dev_info->mfw_rev, NULL);
- ptt = qed_ptt_acquire(QED_LEADING_HWFN(cdev));
- if (ptt) {
- qed_mcp_get_flash_size(QED_LEADING_HWFN(cdev), ptt,
- &dev_info->flash_size);
+ qed_mcp_get_flash_size(QED_LEADING_HWFN(cdev), ptt,
+ &dev_info->flash_size);
- qed_ptt_release(QED_LEADING_HWFN(cdev), ptt);
+ qed_ptt_release(QED_LEADING_HWFN(cdev), ptt);
+ }
+ } else {
+ qed_mcp_get_mfw_ver(QED_LEADING_HWFN(cdev), NULL,
+ &dev_info->mfw_rev, NULL);
}
return 0;
@@ -256,9 +273,7 @@ static int qed_set_power_state(struct qed_dev *cdev,
/* probing */
static struct qed_dev *qed_probe(struct pci_dev *pdev,
- enum qed_protocol protocol,
- u32 dp_module,
- u8 dp_level)
+ struct qed_probe_params *params)
{
struct qed_dev *cdev;
int rc;
@@ -267,9 +282,12 @@ static struct qed_dev *qed_probe(struct pci_dev *pdev,
if (!cdev)
goto err0;
- cdev->protocol = protocol;
+ cdev->protocol = params->protocol;
+
+ if (params->is_vf)
+ cdev->b_is_vf = true;
- qed_init_dp(cdev, dp_module, dp_level);
+ qed_init_dp(cdev, params->dp_module, params->dp_level);
rc = qed_init_pci(cdev, pdev);
if (rc) {
@@ -663,6 +681,35 @@ static int qed_slowpath_setup_int(struct qed_dev *cdev,
return 0;
}
+static int qed_slowpath_vf_setup_int(struct qed_dev *cdev)
+{
+ int rc;
+
+ memset(&cdev->int_params, 0, sizeof(struct qed_int_params));
+ cdev->int_params.in.int_mode = QED_INT_MODE_MSIX;
+
+ qed_vf_get_num_rxqs(QED_LEADING_HWFN(cdev),
+ &cdev->int_params.in.num_vectors);
+ if (cdev->num_hwfns > 1) {
+ u8 vectors = 0;
+
+ qed_vf_get_num_rxqs(&cdev->hwfns[1], &vectors);
+ cdev->int_params.in.num_vectors += vectors;
+ }
+
+ /* We want a minimum of one fastpath vector per vf hwfn */
+ cdev->int_params.in.min_msix_cnt = cdev->num_hwfns;
+
+ rc = qed_set_int_mode(cdev, true);
+ if (rc)
+ return rc;
+
+ cdev->int_params.fp_msix_base = 0;
+ cdev->int_params.fp_msix_cnt = cdev->int_params.out.num_vectors;
+
+ return 0;
+}
+
u32 qed_unzip_data(struct qed_hwfn *p_hwfn, u32 input_len,
u8 *input_buf, u32 max_size, u8 *unzip_buf)
{
@@ -744,39 +791,62 @@ static void qed_update_pf_params(struct qed_dev *cdev,
static int qed_slowpath_start(struct qed_dev *cdev,
struct qed_slowpath_params *params)
{
+ struct qed_tunn_start_params tunn_info;
struct qed_mcp_drv_version drv_version;
const u8 *data = NULL;
struct qed_hwfn *hwfn;
- int rc;
+ int rc = -EINVAL;
- rc = request_firmware(&cdev->firmware, QED_FW_FILE_NAME,
- &cdev->pdev->dev);
- if (rc) {
- DP_NOTICE(cdev,
- "Failed to find fw file - /lib/firmware/%s\n",
- QED_FW_FILE_NAME);
+ if (qed_iov_wq_start(cdev))
goto err;
+
+ if (IS_PF(cdev)) {
+ rc = request_firmware(&cdev->firmware, QED_FW_FILE_NAME,
+ &cdev->pdev->dev);
+ if (rc) {
+ DP_NOTICE(cdev,
+ "Failed to find fw file - /lib/firmware/%s\n",
+ QED_FW_FILE_NAME);
+ goto err;
+ }
}
rc = qed_nic_setup(cdev);
if (rc)
goto err;
- rc = qed_slowpath_setup_int(cdev, params->int_mode);
+ if (IS_PF(cdev))
+ rc = qed_slowpath_setup_int(cdev, params->int_mode);
+ else
+ rc = qed_slowpath_vf_setup_int(cdev);
if (rc)
goto err1;
- /* Allocate stream for unzipping */
- rc = qed_alloc_stream_mem(cdev);
- if (rc) {
- DP_NOTICE(cdev, "Failed to allocate stream memory\n");
- goto err2;
+ if (IS_PF(cdev)) {
+ /* Allocate stream for unzipping */
+ rc = qed_alloc_stream_mem(cdev);
+ if (rc) {
+ DP_NOTICE(cdev, "Failed to allocate stream memory\n");
+ goto err2;
+ }
+
+ data = cdev->firmware->data;
}
- /* Start the slowpath */
- data = cdev->firmware->data;
+ memset(&tunn_info, 0, sizeof(tunn_info));
+ tunn_info.tunn_mode |= 1 << QED_MODE_VXLAN_TUNN |
+ 1 << QED_MODE_L2GRE_TUNN |
+ 1 << QED_MODE_IPGRE_TUNN |
+ 1 << QED_MODE_L2GENEVE_TUNN |
+ 1 << QED_MODE_IPGENEVE_TUNN;
+
+ tunn_info.tunn_clss_vxlan = QED_TUNN_CLSS_MAC_VLAN;
+ tunn_info.tunn_clss_l2gre = QED_TUNN_CLSS_MAC_VLAN;
+ tunn_info.tunn_clss_ipgre = QED_TUNN_CLSS_MAC_VLAN;
- rc = qed_hw_init(cdev, true, cdev->int_params.out.int_mode,
+ /* Start the slowpath */
+ rc = qed_hw_init(cdev, &tunn_info, true,
+ cdev->int_params.out.int_mode,
true, data);
if (rc)
goto err2;
@@ -784,18 +854,20 @@ static int qed_slowpath_start(struct qed_dev *cdev,
DP_INFO(cdev,
"HW initialization and function start completed successfully\n");
- hwfn = QED_LEADING_HWFN(cdev);
- drv_version.version = (params->drv_major << 24) |
- (params->drv_minor << 16) |
- (params->drv_rev << 8) |
- (params->drv_eng);
- strlcpy(drv_version.name, params->name,
- MCP_DRV_VER_STR_SIZE - 4);
- rc = qed_mcp_send_drv_version(hwfn, hwfn->p_main_ptt,
- &drv_version);
- if (rc) {
- DP_NOTICE(cdev, "Failed sending drv version command\n");
- return rc;
+ if (IS_PF(cdev)) {
+ hwfn = QED_LEADING_HWFN(cdev);
+ drv_version.version = (params->drv_major << 24) |
+ (params->drv_minor << 16) |
+ (params->drv_rev << 8) |
+ (params->drv_eng);
+ strlcpy(drv_version.name, params->name,
+ MCP_DRV_VER_STR_SIZE - 4);
+ rc = qed_mcp_send_drv_version(hwfn, hwfn->p_main_ptt,
+ &drv_version);
+ if (rc) {
+ DP_NOTICE(cdev, "Failed sending drv version command\n");
+ return rc;
+ }
}
qed_reset_vport_stats(cdev);
@@ -804,13 +876,17 @@ static int qed_slowpath_start(struct qed_dev *cdev,
err2:
qed_hw_timers_stop_all(cdev);
- qed_slowpath_irq_free(cdev);
+ if (IS_PF(cdev))
+ qed_slowpath_irq_free(cdev);
qed_free_stream_mem(cdev);
qed_disable_msix(cdev);
err1:
qed_resc_free(cdev);
err:
- release_firmware(cdev->firmware);
+ if (IS_PF(cdev))
+ release_firmware(cdev->firmware);
+
+ qed_iov_wq_stop(cdev, false);
return rc;
}
@@ -820,15 +896,21 @@ static int qed_slowpath_stop(struct qed_dev *cdev)
if (!cdev)
return -ENODEV;
- qed_free_stream_mem(cdev);
+ if (IS_PF(cdev)) {
+ qed_free_stream_mem(cdev);
+ qed_sriov_disable(cdev, true);
- qed_nic_stop(cdev);
- qed_slowpath_irq_free(cdev);
+ qed_nic_stop(cdev);
+ qed_slowpath_irq_free(cdev);
+ }
qed_disable_msix(cdev);
qed_nic_reset(cdev);
- release_firmware(cdev->firmware);
+ qed_iov_wq_stop(cdev, true);
+
+ if (IS_PF(cdev))
+ release_firmware(cdev->firmware);
return 0;
}
@@ -902,6 +984,11 @@ static u32 qed_sb_release(struct qed_dev *cdev,
return rc;
}
+static bool qed_can_link_change(struct qed_dev *cdev)
+{
+ return true;
+}
+
static int qed_set_link(struct qed_dev *cdev,
struct qed_link_params *params)
{
@@ -913,6 +1000,9 @@ static int qed_set_link(struct qed_dev *cdev,
if (!cdev)
return -ENODEV;
+ if (IS_VF(cdev))
+ return 0;
+
/* The link should be set only once per PF */
hwfn = &cdev->hwfns[0];
@@ -944,6 +1034,39 @@ static int qed_set_link(struct qed_dev *cdev,
}
if (params->override_flags & QED_LINK_OVERRIDE_SPEED_FORCED_SPEED)
link_params->speed.forced_speed = params->forced_speed;
+ if (params->override_flags & QED_LINK_OVERRIDE_PAUSE_CONFIG) {
+ if (params->pause_config & QED_LINK_PAUSE_AUTONEG_ENABLE)
+ link_params->pause.autoneg = true;
+ else
+ link_params->pause.autoneg = false;
+ if (params->pause_config & QED_LINK_PAUSE_RX_ENABLE)
+ link_params->pause.forced_rx = true;
+ else
+ link_params->pause.forced_rx = false;
+ if (params->pause_config & QED_LINK_PAUSE_TX_ENABLE)
+ link_params->pause.forced_tx = true;
+ else
+ link_params->pause.forced_tx = false;
+ }
+ if (params->override_flags & QED_LINK_OVERRIDE_LOOPBACK_MODE) {
+ switch (params->loopback_mode) {
+ case QED_LINK_LOOPBACK_INT_PHY:
+ link_params->loopback_mode = PMM_LOOPBACK_INT_PHY;
+ break;
+ case QED_LINK_LOOPBACK_EXT_PHY:
+ link_params->loopback_mode = PMM_LOOPBACK_EXT_PHY;
+ break;
+ case QED_LINK_LOOPBACK_EXT:
+ link_params->loopback_mode = PMM_LOOPBACK_EXT;
+ break;
+ case QED_LINK_LOOPBACK_MAC:
+ link_params->loopback_mode = PMM_LOOPBACK_MAC;
+ break;
+ default:
+ link_params->loopback_mode = PMM_LOOPBACK_NONE;
+ break;
+ }
+ }
rc = qed_mcp_set_link(hwfn, ptt, params->link_up);
@@ -991,10 +1114,16 @@ static void qed_fill_link(struct qed_hwfn *hwfn,
memset(if_link, 0, sizeof(*if_link));
/* Prepare source inputs */
- memcpy(&params, qed_mcp_get_link_params(hwfn), sizeof(params));
- memcpy(&link, qed_mcp_get_link_state(hwfn), sizeof(link));
- memcpy(&link_caps, qed_mcp_get_link_capabilities(hwfn),
- sizeof(link_caps));
+ if (IS_PF(hwfn->cdev)) {
+ memcpy(&params, qed_mcp_get_link_params(hwfn), sizeof(params));
+ memcpy(&link, qed_mcp_get_link_state(hwfn), sizeof(link));
+ memcpy(&link_caps, qed_mcp_get_link_capabilities(hwfn),
+ sizeof(link_caps));
+ } else {
+ qed_vf_get_link_params(hwfn, &params);
+ qed_vf_get_link_state(hwfn, &link);
+ qed_vf_get_link_caps(hwfn, &link_caps);
+ }
/* Set the link parameters to pass to protocol driver */
if (link.link_up)
@@ -1096,7 +1225,12 @@ static void qed_fill_link(struct qed_hwfn *hwfn,
static void qed_get_current_link(struct qed_dev *cdev,
struct qed_link_output *if_link)
{
+ int i;
+
qed_fill_link(&cdev->hwfns[0], if_link);
+
+ for_each_hwfn(cdev, i)
+ qed_inform_vf_link_state(&cdev->hwfns[i]);
}
void qed_link_update(struct qed_hwfn *hwfn)
@@ -1106,6 +1240,7 @@ void qed_link_update(struct qed_hwfn *hwfn)
struct qed_link_output if_link;
qed_fill_link(hwfn, &if_link);
+ qed_inform_vf_link_state(hwfn);
if (IS_LEAD_HWFN(hwfn) && cookie)
op->link_update(cookie, &if_link);
@@ -1117,6 +1252,9 @@ static int qed_drain(struct qed_dev *cdev)
struct qed_ptt *ptt;
int i, rc;
+ if (IS_VF(cdev))
+ return 0;
+
for_each_hwfn(cdev, i) {
hwfn = &cdev->hwfns[i];
ptt = qed_ptt_acquire(hwfn);
@@ -1150,7 +1288,15 @@ static int qed_set_led(struct qed_dev *cdev, enum qed_led_mode mode)
return status;
}
+struct qed_selftest_ops qed_selftest_ops_pass = {
+ .selftest_memory = &qed_selftest_memory,
+ .selftest_interrupt = &qed_selftest_interrupt,
+ .selftest_register = &qed_selftest_register,
+ .selftest_clock = &qed_selftest_clock,
+};
+
const struct qed_common_ops qed_common_ops_pass = {
+ .selftest = &qed_selftest_ops_pass,
.probe = &qed_probe,
.remove = &qed_remove,
.set_power_state = &qed_set_power_state,
@@ -1164,6 +1310,7 @@ const struct qed_common_ops qed_common_ops_pass = {
.sb_release = &qed_sb_release,
.simd_handler_config = &qed_simd_handler_config,
.simd_handler_clean = &qed_simd_handler_clean,
+ .can_link_change = &qed_can_link_change,
.set_link = &qed_set_link,
.get_link = &qed_get_current_link,
.drain = &qed_drain,
@@ -1172,14 +1319,3 @@ const struct qed_common_ops qed_common_ops_pass = {
.chain_free = &qed_chain_free,
.set_led = &qed_set_led,
};
-
-u32 qed_get_protocol_version(enum qed_protocol protocol)
-{
- switch (protocol) {
- case QED_PROTOCOL_ETH:
- return QED_ETH_INTERFACE_VERSION;
- default:
- return 0;
- }
-}
-EXPORT_SYMBOL(qed_get_protocol_version);
diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.c b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
index b89c9a8e1655..1182361798b5 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.c
@@ -15,10 +15,13 @@
#include <linux/spinlock.h>
#include <linux/string.h>
#include "qed.h"
+#include "qed_dcbx.h"
#include "qed_hsi.h"
#include "qed_hw.h"
#include "qed_mcp.h"
#include "qed_reg_addr.h"
+#include "qed_sriov.h"
+
#define CHIP_MCP_RESP_ITER_US 10
#define QED_DRV_MB_MAX_RETRIES (500 * 1000) /* Account for 5 sec */
@@ -440,6 +443,75 @@ int qed_mcp_load_req(struct qed_hwfn *p_hwfn,
return 0;
}
+static void qed_mcp_handle_vf_flr(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt)
+{
+ u32 addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
+ PUBLIC_PATH);
+ u32 mfw_path_offsize = qed_rd(p_hwfn, p_ptt, addr);
+ u32 path_addr = SECTION_ADDR(mfw_path_offsize,
+ QED_PATH_ID(p_hwfn));
+ u32 disabled_vfs[VF_MAX_STATIC / 32];
+ int i;
+
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_SP,
+ "Reading Disabled VF information from [offset %08x], path_addr %08x\n",
+ mfw_path_offsize, path_addr);
+
+ for (i = 0; i < (VF_MAX_STATIC / 32); i++) {
+ disabled_vfs[i] = qed_rd(p_hwfn, p_ptt,
+ path_addr +
+ offsetof(struct public_path,
+ mcp_vf_disabled) +
+ sizeof(u32) * i);
+ DP_VERBOSE(p_hwfn, (QED_MSG_SP | QED_MSG_IOV),
+ "FLR-ed VFs [%08x,...,%08x] - %08x\n",
+ i * 32, (i + 1) * 32 - 1, disabled_vfs[i]);
+ }
+
+ if (qed_iov_mark_vf_flr(p_hwfn, disabled_vfs))
+ qed_schedule_iov(p_hwfn, QED_IOV_WQ_FLR_FLAG);
+}
+
+int qed_mcp_ack_vf_flr(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt, u32 *vfs_to_ack)
+{
+ u32 addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
+ PUBLIC_FUNC);
+ u32 mfw_func_offsize = qed_rd(p_hwfn, p_ptt, addr);
+ u32 func_addr = SECTION_ADDR(mfw_func_offsize,
+ MCP_PF_ID(p_hwfn));
+ struct qed_mcp_mb_params mb_params;
+ union drv_union_data union_data;
+ int rc;
+ int i;
+
+ for (i = 0; i < (VF_MAX_STATIC / 32); i++)
+ DP_VERBOSE(p_hwfn, (QED_MSG_SP | QED_MSG_IOV),
+ "Acking VFs [%08x,...,%08x] - %08x\n",
+ i * 32, (i + 1) * 32 - 1, vfs_to_ack[i]);
+
+ memset(&mb_params, 0, sizeof(mb_params));
+ mb_params.cmd = DRV_MSG_CODE_VF_DISABLED_DONE;
+ memcpy(&union_data.ack_vf_disabled, vfs_to_ack, VF_MAX_STATIC / 8);
+ mb_params.p_data_src = &union_data;
+ rc = qed_mcp_cmd_and_union(p_hwfn, p_ptt, &mb_params);
+ if (rc) {
+ DP_NOTICE(p_hwfn, "Failed to pass ACK for VF flr to MFW\n");
+ return -EBUSY;
+ }
+
+ /* Clear the ACK bits */
+ for (i = 0; i < (VF_MAX_STATIC / 32); i++)
+ qed_wr(p_hwfn, p_ptt,
+ func_addr +
+ offsetof(struct public_func, drv_ack_vf_disabled) +
+ i * sizeof(u32), 0);
+
+ return rc;
+}
+
static void qed_mcp_handle_transceiver_change(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt)
{
@@ -472,6 +544,7 @@ static void qed_mcp_handle_link_change(struct qed_hwfn *p_hwfn,
bool b_reset)
{
struct qed_mcp_link_state *p_link;
+ u8 max_bw, min_bw;
u32 status = 0;
p_link = &p_hwfn->mcp_info->link_output;
@@ -527,17 +600,20 @@ static void qed_mcp_handle_link_change(struct qed_hwfn *p_hwfn,
p_link->speed = 0;
}
- /* Correct speed according to bandwidth allocation */
- if (p_hwfn->mcp_info->func_info.bandwidth_max && p_link->speed) {
- p_link->speed = p_link->speed *
- p_hwfn->mcp_info->func_info.bandwidth_max /
- 100;
- qed_init_pf_rl(p_hwfn, p_ptt, p_hwfn->rel_pf_id,
- p_link->speed);
- DP_VERBOSE(p_hwfn, NETIF_MSG_LINK,
- "Configured MAX bandwidth to be %08x Mb/sec\n",
- p_link->speed);
- }
+ if (p_link->link_up && p_link->speed)
+ p_link->line_speed = p_link->speed;
+ else
+ p_link->line_speed = 0;
+
+ max_bw = p_hwfn->mcp_info->func_info.bandwidth_max;
+ min_bw = p_hwfn->mcp_info->func_info.bandwidth_min;
+
+ /* Max bandwidth configuration */
+ __qed_configure_pf_max_bandwidth(p_hwfn, p_ptt, p_link, max_bw);
+
+ /* Min bandwidth configuration */
+ __qed_configure_pf_min_bandwidth(p_hwfn, p_ptt, p_link, min_bw);
+ qed_configure_vp_wfq_on_link_change(p_hwfn->cdev, p_link->min_pf_rate);
p_link->an = !!(status & LINK_STATUS_AUTO_NEGOTIATE_ENABLED);
p_link->an_complete = !!(status &
@@ -648,6 +724,77 @@ int qed_mcp_set_link(struct qed_hwfn *p_hwfn,
return 0;
}
+static void qed_read_pf_bandwidth(struct qed_hwfn *p_hwfn,
+ struct public_func *p_shmem_info)
+{
+ struct qed_mcp_function_info *p_info;
+
+ p_info = &p_hwfn->mcp_info->func_info;
+
+ p_info->bandwidth_min = (p_shmem_info->config &
+ FUNC_MF_CFG_MIN_BW_MASK) >>
+ FUNC_MF_CFG_MIN_BW_SHIFT;
+ if (p_info->bandwidth_min < 1 || p_info->bandwidth_min > 100) {
+ DP_INFO(p_hwfn,
+ "bandwidth minimum out of bounds [%02x]. Set to 1\n",
+ p_info->bandwidth_min);
+ p_info->bandwidth_min = 1;
+ }
+
+ p_info->bandwidth_max = (p_shmem_info->config &
+ FUNC_MF_CFG_MAX_BW_MASK) >>
+ FUNC_MF_CFG_MAX_BW_SHIFT;
+ if (p_info->bandwidth_max < 1 || p_info->bandwidth_max > 100) {
+ DP_INFO(p_hwfn,
+ "bandwidth maximum out of bounds [%02x]. Set to 100\n",
+ p_info->bandwidth_max);
+ p_info->bandwidth_max = 100;
+ }
+}
+
+static u32 qed_mcp_get_shmem_func(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct public_func *p_data,
+ int pfid)
+{
+ u32 addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
+ PUBLIC_FUNC);
+ u32 mfw_path_offsize = qed_rd(p_hwfn, p_ptt, addr);
+ u32 func_addr = SECTION_ADDR(mfw_path_offsize, pfid);
+ u32 i, size;
+
+ memset(p_data, 0, sizeof(*p_data));
+
+ size = min_t(u32, sizeof(*p_data),
+ QED_SECTION_SIZE(mfw_path_offsize));
+ for (i = 0; i < size / sizeof(u32); i++)
+ ((u32 *)p_data)[i] = qed_rd(p_hwfn, p_ptt,
+ func_addr + (i << 2));
+ return size;
+}
+
+static void qed_mcp_update_bw(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt)
+{
+ struct qed_mcp_function_info *p_info;
+ struct public_func shmem_info;
+ u32 resp = 0, param = 0;
+
+ qed_mcp_get_shmem_func(p_hwfn, p_ptt, &shmem_info,
+ MCP_PF_ID(p_hwfn));
+
+ qed_read_pf_bandwidth(p_hwfn, &shmem_info);
+
+ p_info = &p_hwfn->mcp_info->func_info;
+
+ qed_configure_pf_min_bandwidth(p_hwfn->cdev, p_info->bandwidth_min);
+ qed_configure_pf_max_bandwidth(p_hwfn->cdev, p_info->bandwidth_max);
+
+ /* Acknowledge the MFW */
+ qed_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BW_UPDATE_ACK, 0, &resp,
+ &param);
+}
+
int qed_mcp_handle_events(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt)
{
@@ -676,9 +823,27 @@ int qed_mcp_handle_events(struct qed_hwfn *p_hwfn,
case MFW_DRV_MSG_LINK_CHANGE:
qed_mcp_handle_link_change(p_hwfn, p_ptt, false);
break;
+ case MFW_DRV_MSG_VF_DISABLED:
+ qed_mcp_handle_vf_flr(p_hwfn, p_ptt);
+ break;
+ case MFW_DRV_MSG_LLDP_DATA_UPDATED:
+ qed_dcbx_mib_update_event(p_hwfn, p_ptt,
+ QED_DCBX_REMOTE_LLDP_MIB);
+ break;
+ case MFW_DRV_MSG_DCBX_REMOTE_MIB_UPDATED:
+ qed_dcbx_mib_update_event(p_hwfn, p_ptt,
+ QED_DCBX_REMOTE_MIB);
+ break;
+ case MFW_DRV_MSG_DCBX_OPERATIONAL_MIB_UPDATED:
+ qed_dcbx_mib_update_event(p_hwfn, p_ptt,
+ QED_DCBX_OPERATIONAL_MIB);
+ break;
case MFW_DRV_MSG_TRANSCEIVER_STATE_CHANGE:
qed_mcp_handle_transceiver_change(p_hwfn, p_ptt);
break;
+ case MFW_DRV_MSG_BW_UPDATE:
+ qed_mcp_update_bw(p_hwfn, p_ptt);
+ break;
default:
DP_NOTICE(p_hwfn, "Unimplemented MFW message %d\n", i);
rc = -EINVAL;
@@ -709,26 +874,42 @@ int qed_mcp_handle_events(struct qed_hwfn *p_hwfn,
return rc;
}
-int qed_mcp_get_mfw_ver(struct qed_dev *cdev,
- u32 *p_mfw_ver)
+int qed_mcp_get_mfw_ver(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ u32 *p_mfw_ver, u32 *p_running_bundle_id)
{
- struct qed_hwfn *p_hwfn = &cdev->hwfns[0];
- struct qed_ptt *p_ptt;
u32 global_offsize;
- p_ptt = qed_ptt_acquire(p_hwfn);
- if (!p_ptt)
- return -EBUSY;
+ if (IS_VF(p_hwfn->cdev)) {
+ if (p_hwfn->vf_iov_info) {
+ struct pfvf_acquire_resp_tlv *p_resp;
+
+ p_resp = &p_hwfn->vf_iov_info->acquire_resp;
+ *p_mfw_ver = p_resp->pfdev_info.mfw_ver;
+ return 0;
+ } else {
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_IOV,
+ "VF requested MFW version prior to ACQUIRE\n");
+ return -EINVAL;
+ }
+ }
global_offsize = qed_rd(p_hwfn, p_ptt,
- SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->
- public_base,
+ SECTION_OFFSIZE_ADDR(p_hwfn->
+ mcp_info->public_base,
PUBLIC_GLOBAL));
- *p_mfw_ver = qed_rd(p_hwfn, p_ptt,
- SECTION_ADDR(global_offsize, 0) +
- offsetof(struct public_global, mfw_ver));
-
- qed_ptt_release(p_hwfn, p_ptt);
+ *p_mfw_ver =
+ qed_rd(p_hwfn, p_ptt,
+ SECTION_ADDR(global_offsize,
+ 0) + offsetof(struct public_global, mfw_ver));
+
+ if (p_running_bundle_id != NULL) {
+ *p_running_bundle_id = qed_rd(p_hwfn, p_ptt,
+ SECTION_ADDR(global_offsize, 0) +
+ offsetof(struct public_global,
+ running_bundle_id));
+ }
return 0;
}
@@ -739,6 +920,9 @@ int qed_mcp_get_media_type(struct qed_dev *cdev,
struct qed_hwfn *p_hwfn = &cdev->hwfns[0];
struct qed_ptt *p_ptt;
+ if (IS_VF(cdev))
+ return -EINVAL;
+
if (!qed_mcp_is_init(p_hwfn)) {
DP_NOTICE(p_hwfn, "MFW is not initialized !\n");
return -EBUSY;
@@ -758,28 +942,6 @@ int qed_mcp_get_media_type(struct qed_dev *cdev,
return 0;
}
-static u32 qed_mcp_get_shmem_func(struct qed_hwfn *p_hwfn,
- struct qed_ptt *p_ptt,
- struct public_func *p_data,
- int pfid)
-{
- u32 addr = SECTION_OFFSIZE_ADDR(p_hwfn->mcp_info->public_base,
- PUBLIC_FUNC);
- u32 mfw_path_offsize = qed_rd(p_hwfn, p_ptt, addr);
- u32 func_addr = SECTION_ADDR(mfw_path_offsize, pfid);
- u32 i, size;
-
- memset(p_data, 0, sizeof(*p_data));
-
- size = min_t(u32, sizeof(*p_data),
- QED_SECTION_SIZE(mfw_path_offsize));
- for (i = 0; i < size / sizeof(u32); i++)
- ((u32 *)p_data)[i] = qed_rd(p_hwfn, p_ptt,
- func_addr + (i << 2));
-
- return size;
-}
-
static int
qed_mcp_get_shmem_proto(struct qed_hwfn *p_hwfn,
struct public_func *p_info,
@@ -818,26 +980,7 @@ int qed_mcp_fill_shmem_func_info(struct qed_hwfn *p_hwfn,
return -EINVAL;
}
-
- info->bandwidth_min = (shmem_info.config &
- FUNC_MF_CFG_MIN_BW_MASK) >>
- FUNC_MF_CFG_MIN_BW_SHIFT;
- if (info->bandwidth_min < 1 || info->bandwidth_min > 100) {
- DP_INFO(p_hwfn,
- "bandwidth minimum out of bounds [%02x]. Set to 1\n",
- info->bandwidth_min);
- info->bandwidth_min = 1;
- }
-
- info->bandwidth_max = (shmem_info.config &
- FUNC_MF_CFG_MAX_BW_MASK) >>
- FUNC_MF_CFG_MAX_BW_SHIFT;
- if (info->bandwidth_max < 1 || info->bandwidth_max > 100) {
- DP_INFO(p_hwfn,
- "bandwidth maximum out of bounds [%02x]. Set to 100\n",
- info->bandwidth_max);
- info->bandwidth_max = 100;
- }
+ qed_read_pf_bandwidth(p_hwfn, &shmem_info);
if (shmem_info.mac_upper || shmem_info.mac_lower) {
info->mac[0] = (u8)(shmem_info.mac_upper >> 8);
@@ -914,6 +1057,9 @@ int qed_mcp_get_flash_size(struct qed_hwfn *p_hwfn,
{
u32 flash_size;
+ if (IS_VF(p_hwfn->cdev))
+ return -EINVAL;
+
flash_size = qed_rd(p_hwfn, p_ptt, MCP_REG_NVM_CFG4);
flash_size = (flash_size & MCP_REG_NVM_CFG4_FLASH_SIZE) >>
MCP_REG_NVM_CFG4_FLASH_SIZE_SHIFT;
@@ -924,6 +1070,37 @@ int qed_mcp_get_flash_size(struct qed_hwfn *p_hwfn,
return 0;
}
+int qed_mcp_config_vf_msix(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt, u8 vf_id, u8 num)
+{
+ u32 resp = 0, param = 0, rc_param = 0;
+ int rc;
+
+ /* Only Leader can configure MSIX, and need to take CMT into account */
+ if (!IS_LEAD_HWFN(p_hwfn))
+ return 0;
+ num *= p_hwfn->cdev->num_hwfns;
+
+ param |= (vf_id << DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_SHIFT) &
+ DRV_MB_PARAM_CFG_VF_MSIX_VF_ID_MASK;
+ param |= (num << DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_SHIFT) &
+ DRV_MB_PARAM_CFG_VF_MSIX_SB_NUM_MASK;
+
+ rc = qed_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_CFG_VF_MSIX, param,
+ &resp, &rc_param);
+
+ if (resp != FW_MSG_CODE_DRV_CFG_VF_MSIX_DONE) {
+ DP_NOTICE(p_hwfn, "VF[%d]: MFW failed to set MSI-X\n", vf_id);
+ rc = -EINVAL;
+ } else {
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "Requested 0x%02x MSI-x interrupts from VF 0x%02x\n",
+ num, vf_id);
+ }
+
+ return rc;
+}
+
int
qed_mcp_send_drv_version(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt,
@@ -938,9 +1115,10 @@ qed_mcp_send_drv_version(struct qed_hwfn *p_hwfn,
p_drv_version = &union_data.drv_version;
p_drv_version->version = p_ver->version;
+
for (i = 0; i < MCP_DRV_VER_STR_SIZE - 1; i += 4) {
val = cpu_to_be32(p_ver->name[i]);
- *(u32 *)&p_drv_version->name[i * sizeof(u32)] = val;
+ *(__be32 *)&p_drv_version->name[i * sizeof(u32)] = val;
}
memset(&mb_params, 0, sizeof(mb_params));
@@ -979,3 +1157,45 @@ int qed_mcp_set_led(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt,
return rc;
}
+
+int qed_mcp_bist_register_test(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+{
+ u32 drv_mb_param = 0, rsp, param;
+ int rc = 0;
+
+ drv_mb_param = (DRV_MB_PARAM_BIST_REGISTER_TEST <<
+ DRV_MB_PARAM_BIST_TEST_INDEX_SHIFT);
+
+ rc = qed_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
+ drv_mb_param, &rsp, &param);
+
+ if (rc)
+ return rc;
+
+ if (((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK) ||
+ (param != DRV_MB_PARAM_BIST_RC_PASSED))
+ rc = -EAGAIN;
+
+ return rc;
+}
+
+int qed_mcp_bist_clock_test(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+{
+ u32 drv_mb_param, rsp, param;
+ int rc = 0;
+
+ drv_mb_param = (DRV_MB_PARAM_BIST_CLOCK_TEST <<
+ DRV_MB_PARAM_BIST_TEST_INDEX_SHIFT);
+
+ rc = qed_mcp_cmd(p_hwfn, p_ptt, DRV_MSG_CODE_BIST_TEST,
+ drv_mb_param, &rsp, &param);
+
+ if (rc)
+ return rc;
+
+ if (((rsp & FW_MSG_CODE_MASK) != FW_MSG_CODE_OK) ||
+ (param != DRV_MB_PARAM_BIST_RC_PASSED))
+ rc = -EAGAIN;
+
+ return rc;
+}
diff --git a/drivers/net/ethernet/qlogic/qed/qed_mcp.h b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
index 50917a2131a5..6dd59eb7f4c6 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_mcp.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_mcp.h
@@ -40,7 +40,15 @@ struct qed_mcp_link_capabilities {
struct qed_mcp_link_state {
bool link_up;
- u32 speed; /* In Mb/s */
+ u32 min_pf_rate;
+
+ /* Actual link speed in Mb/s */
+ u32 line_speed;
+
+ /* PF max speed in Mb/s, deduced from line_speed
+ * according to PF max bandwidth configuration.
+ */
+ u32 speed;
bool full_duplex;
bool an;
@@ -141,13 +149,16 @@ int qed_mcp_set_link(struct qed_hwfn *p_hwfn,
/**
* @brief Get the management firmware version value
*
- * @param cdev - qed dev pointer
- * @param mfw_ver - mfw version value
+ * @param p_hwfn
+ * @param p_ptt
+ * @param p_mfw_ver - mfw version value
+ * @param p_running_bundle_id - image id in nvram; Optional.
*
- * @return int - 0 - operation was successul.
+ * @return int - 0 - operation was successful.
*/
-int qed_mcp_get_mfw_ver(struct qed_dev *cdev,
- u32 *mfw_ver);
+int qed_mcp_get_mfw_ver(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ u32 *p_mfw_ver, u32 *p_running_bundle_id);
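A minimal sketch (not part of the patch) mirroring how qed_fill_dev_info() in this series calls the reworked interface: a PF passes a real PTT, a VF passes NULL and receives the value cached from the ACQUIRE response; example_read_mfw_ver() is a hypothetical helper.

        /* Illustrative sketch -- not in this patch. */
        static void example_read_mfw_ver(struct qed_hwfn *p_hwfn, u32 *p_mfw_ver)
        {
                if (IS_PF(p_hwfn->cdev)) {
                        struct qed_ptt *p_ptt = qed_ptt_acquire(p_hwfn);

                        if (!p_ptt)
                                return;
                        qed_mcp_get_mfw_ver(p_hwfn, p_ptt, p_mfw_ver, NULL);
                        qed_ptt_release(p_hwfn, p_ptt);
                } else {
                        /* VF path: version comes from the ACQUIRE response. */
                        qed_mcp_get_mfw_ver(p_hwfn, NULL, p_mfw_ver, NULL);
                }
        }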
/**
* @brief Get media type value of the port.
@@ -237,6 +248,28 @@ int qed_mcp_set_led(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt,
enum qed_led_mode mode);
+/**
+ * @brief Bist register test
+ *
+ * @param p_hwfn - hw function
+ * @param p_ptt - PTT required for register access
+ *
+ * @return int - 0 - operation was successful.
+ */
+int qed_mcp_bist_register_test(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt);
+
+/**
+ * @brief Bist clock test
+ *
+ * @param p_hwfn - hw function
+ * @param p_ptt - PTT required for register access
+ *
+ * @return int - 0 - operation was successful.
+ */
+int qed_mcp_bist_clock_test(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt);
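A minimal sketch (not part of the patch) running both BIST commands under one PTT, roughly what a selftest wrapper would do; example_run_bist() is a hypothetical name.

        /* Illustrative sketch -- not in this patch. */
        static int example_run_bist(struct qed_hwfn *p_hwfn)
        {
                struct qed_ptt *p_ptt = qed_ptt_acquire(p_hwfn);
                int rc;

                if (!p_ptt)
                        return -EBUSY;

                rc = qed_mcp_bist_register_test(p_hwfn, p_ptt);
                if (!rc)
                        rc = qed_mcp_bist_clock_test(p_hwfn, p_ptt);

                qed_ptt_release(p_hwfn, p_ptt);
                return rc;      /* -EAGAIN if the MFW reported a failure */
        }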
+
/* Using hwfn number (and not pf_num) is required since in CMT mode,
* same pf_num may be used by two different hwfn
* TODO - this shouldn't really be in .h file, but until all fields
@@ -360,6 +393,18 @@ void qed_mcp_read_mb(struct qed_hwfn *p_hwfn,
struct qed_ptt *p_ptt);
/**
+ * @brief Ack to mfw that driver finished FLR process for VFs
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param vfs_to_ack - bit mask of all engine VFs for which the PF acks.
+ *
+ * @return int - 0 upon success.
+ */
+int qed_mcp_ack_vf_flr(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt, u32 *vfs_to_ack);
+
+/**
* @brief - calls during init to read shmem of all function-related info.
*
* @param p_hwfn
@@ -389,4 +434,27 @@ int qed_mcp_reset(struct qed_hwfn *p_hwfn,
*/
bool qed_mcp_is_init(struct qed_hwfn *p_hwfn);
+/**
+ * @brief request MFW to configure MSI-X for a VF
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ * @param vf_id - absolute inside engine
+ * @param num_sbs - number of entries to request
+ *
+ * @return int
+ */
+int qed_mcp_config_vf_msix(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt, u8 vf_id, u8 num);
+
+int qed_configure_pf_min_bandwidth(struct qed_dev *cdev, u8 min_bw);
+int qed_configure_pf_max_bandwidth(struct qed_dev *cdev, u8 max_bw);
+int __qed_configure_pf_max_bandwidth(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_mcp_link_state *p_link,
+ u8 max_bw);
+int __qed_configure_pf_min_bandwidth(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_mcp_link_state *p_link,
+ u8 min_bw);
#endif
diff --git a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
index c15b1622e636..3a6c506f0d71 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_reg_addr.h
@@ -39,6 +39,10 @@
0x2aae04UL
#define PGLUE_B_REG_INTERNAL_PFID_ENABLE_MASTER \
0x2aa16cUL
+#define PGLUE_B_REG_WAS_ERROR_VF_31_0_CLR \
+ 0x2aa118UL
+#define PSWHST_REG_ZONE_PERMISSION_TABLE \
+ 0x2a0800UL
#define BAR0_MAP_REG_MSDM_RAM \
0x1d00000UL
#define BAR0_MAP_REG_USDM_RAM \
@@ -77,6 +81,8 @@
0x2f2eb0UL
#define DORQ_REG_PF_DB_ENABLE \
0x100508UL
+#define DORQ_REG_VF_USAGE_CNT \
+ 0x1009c4UL
#define QM_REG_PF_EN \
0x2f2ea4UL
#define TCFC_REG_STRONG_ENABLE_PF \
@@ -111,6 +117,8 @@
0x009778UL
#define MISCS_REG_CHIP_METAL \
0x009774UL
+#define MISCS_REG_FUNCTION_HIDE \
+ 0x0096f0UL
#define BRB_REG_HEADER_SIZE \
0x340804UL
#define BTB_REG_HEADER_SIZE \
@@ -119,6 +127,8 @@
0x1c0708UL
#define CCFC_REG_ACTIVITY_COUNTER \
0x2e8800UL
+#define CCFC_REG_STRONG_ENABLE_VF \
+ 0x2e070cUL
#define CDU_REG_CID_ADDR_PARAMS \
0x580900UL
#define DBG_REG_CLIENT_ENABLE \
@@ -161,6 +171,10 @@
0x040200UL
#define PBF_REG_INIT \
0xd80000UL
+#define PBF_REG_NUM_BLOCKS_ALLOCATED_PROD_VOQ0 \
+ 0xd806c8UL
+#define PBF_REG_NUM_BLOCKS_ALLOCATED_CONS_VOQ0 \
+ 0xd806ccUL
#define PTU_REG_ATC_INIT_ARRAY \
0x560000UL
#define PCM_REG_INIT \
@@ -385,6 +399,8 @@
0x1d0000UL
#define IGU_REG_PF_CONFIGURATION \
0x180800UL
+#define IGU_REG_VF_CONFIGURATION \
+ 0x180804UL
#define MISC_REG_AEU_ENABLE1_IGU_OUT_0 \
0x00849cUL
#define MISC_REG_AEU_AFTER_INVERT_1_IGU \
@@ -411,6 +427,10 @@
0x1 << 0)
#define IGU_REG_MAPPING_MEMORY \
0x184000UL
+#define IGU_REG_STATISTIC_NUM_VF_MSG_SENT \
+ 0x180408UL
+#define IGU_REG_WRITE_DONE_PENDING \
+ 0x180900UL
#define MISCS_REG_GENERIC_POR_0 \
0x0096d4UL
#define MCP_REG_NVM_CFG4 \
@@ -427,4 +447,37 @@
0x2aae60UL
#define PGLUE_B_REG_PF_BAR1_SIZE \
0x2aae64UL
+#define PRS_REG_ENCAPSULATION_TYPE_EN 0x1f0730UL
+#define PRS_REG_GRE_PROTOCOL 0x1f0734UL
+#define PRS_REG_VXLAN_PORT 0x1f0738UL
+#define PRS_REG_OUTPUT_FORMAT_4_0 0x1f099cUL
+#define NIG_REG_ENC_TYPE_ENABLE 0x501058UL
+
+#define NIG_REG_ENC_TYPE_ENABLE_ETH_OVER_GRE_ENABLE (0x1 << 0)
+#define NIG_REG_ENC_TYPE_ENABLE_ETH_OVER_GRE_ENABLE_SHIFT 0
+#define NIG_REG_ENC_TYPE_ENABLE_IP_OVER_GRE_ENABLE (0x1 << 1)
+#define NIG_REG_ENC_TYPE_ENABLE_IP_OVER_GRE_ENABLE_SHIFT 1
+#define NIG_REG_ENC_TYPE_ENABLE_VXLAN_ENABLE (0x1 << 2)
+#define NIG_REG_ENC_TYPE_ENABLE_VXLAN_ENABLE_SHIFT 2
+
+#define NIG_REG_VXLAN_PORT 0x50105cUL
+#define PBF_REG_VXLAN_PORT 0xd80518UL
+#define PBF_REG_NGE_PORT 0xd8051cUL
+#define PRS_REG_NGE_PORT 0x1f086cUL
+#define NIG_REG_NGE_PORT 0x508b38UL
+
+#define DORQ_REG_L2_EDPM_TUNNEL_GRE_ETH_EN 0x10090cUL
+#define DORQ_REG_L2_EDPM_TUNNEL_GRE_IP_EN 0x100910UL
+#define DORQ_REG_L2_EDPM_TUNNEL_VXLAN_EN 0x100914UL
+#define DORQ_REG_L2_EDPM_TUNNEL_NGE_IP_EN 0x10092cUL
+#define DORQ_REG_L2_EDPM_TUNNEL_NGE_ETH_EN 0x100930UL
+
+#define NIG_REG_NGE_IP_ENABLE 0x508b28UL
+#define NIG_REG_NGE_ETH_ENABLE 0x508b2cUL
+#define NIG_REG_NGE_COMP_VER 0x508b30UL
+#define PBF_REG_NGE_COMP_VER 0xd80524UL
+#define PRS_REG_NGE_COMP_VER 0x1f0878UL
+
+#define QM_REG_WFQPFWEIGHT 0x2f4e80UL
+#define QM_REG_WFQVPWEIGHT 0x2fa000UL
#endif
diff --git a/drivers/net/ethernet/qlogic/qed/qed_selftest.c b/drivers/net/ethernet/qlogic/qed/qed_selftest.c
new file mode 100644
index 000000000000..a342bfe4280d
--- /dev/null
+++ b/drivers/net/ethernet/qlogic/qed/qed_selftest.c
@@ -0,0 +1,76 @@
+#include "qed.h"
+#include "qed_dev_api.h"
+#include "qed_mcp.h"
+#include "qed_sp.h"
+
+int qed_selftest_memory(struct qed_dev *cdev)
+{
+ int rc = 0, i;
+
+ for_each_hwfn(cdev, i) {
+ rc = qed_sp_heartbeat_ramrod(&cdev->hwfns[i]);
+ if (rc)
+ return rc;
+ }
+
+ return rc;
+}
+
+int qed_selftest_interrupt(struct qed_dev *cdev)
+{
+ int rc = 0, i;
+
+ for_each_hwfn(cdev, i) {
+ rc = qed_sp_heartbeat_ramrod(&cdev->hwfns[i]);
+ if (rc)
+ return rc;
+ }
+
+ return rc;
+}
+
+int qed_selftest_register(struct qed_dev *cdev)
+{
+ struct qed_hwfn *p_hwfn;
+ struct qed_ptt *p_ptt;
+ int rc = 0, i;
+
+ /* although performed by MCP, this test is per engine */
+ for_each_hwfn(cdev, i) {
+ p_hwfn = &cdev->hwfns[i];
+ p_ptt = qed_ptt_acquire(p_hwfn);
+ if (!p_ptt) {
+ DP_ERR(p_hwfn, "failed to acquire ptt\n");
+ return -EBUSY;
+ }
+ rc = qed_mcp_bist_register_test(p_hwfn, p_ptt);
+ qed_ptt_release(p_hwfn, p_ptt);
+ if (rc)
+ break;
+ }
+
+ return rc;
+}
+
+int qed_selftest_clock(struct qed_dev *cdev)
+{
+ struct qed_hwfn *p_hwfn;
+ struct qed_ptt *p_ptt;
+ int rc = 0, i;
+
+ /* although performed by MCP, this test is per engine */
+ for_each_hwfn(cdev, i) {
+ p_hwfn = &cdev->hwfns[i];
+ p_ptt = qed_ptt_acquire(p_hwfn);
+ if (!p_ptt) {
+ DP_ERR(p_hwfn, "failed to acquire ptt\n");
+ return -EBUSY;
+ }
+ rc = qed_mcp_bist_clock_test(p_hwfn, p_ptt);
+ qed_ptt_release(p_hwfn, p_ptt);
+ if (rc)
+ break;
+ }
+
+ return rc;
+}
diff --git a/drivers/net/ethernet/qlogic/qed/qed_selftest.h b/drivers/net/ethernet/qlogic/qed/qed_selftest.h
new file mode 100644
index 000000000000..50eb0b49950f
--- /dev/null
+++ b/drivers/net/ethernet/qlogic/qed/qed_selftest.h
@@ -0,0 +1,40 @@
+#ifndef _QED_SELFTEST_API_H
+#define _QED_SELFTEST_API_H
+#include <linux/types.h>
+
+/**
+ * @brief qed_selftest_memory - Perform memory test
+ *
+ * @param cdev
+ *
+ * @return int
+ */
+int qed_selftest_memory(struct qed_dev *cdev);
+
+/**
+ * @brief qed_selftest_interrupt - Perform interrupt test
+ *
+ * @param cdev
+ *
+ * @return int
+ */
+int qed_selftest_interrupt(struct qed_dev *cdev);
+
+/**
+ * @brief qed_selftest_register - Perform register test
+ *
+ * @param cdev
+ *
+ * @return int
+ */
+int qed_selftest_register(struct qed_dev *cdev);
+
+/**
+ * @brief qed_selftest_clock - Perform clock test
+ *
+ * @param cdev
+ *
+ * @return int
+ */
+int qed_selftest_clock(struct qed_dev *cdev);
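+
+/* Illustrative usage sketch (not part of this patch): the expected consumer
+ * is the L2 driver's ethtool self-test path, roughly along these lines,
+ * where "etest" is the struct ethtool_test handed in by the stack and
+ * "cdev" is the qed device the caller already holds:
+ *
+ *	if (qed_selftest_register(cdev))
+ *		etest->flags |= ETH_TEST_FL_FAILED;
+ *	if (qed_selftest_clock(cdev))
+ *		etest->flags |= ETH_TEST_FL_FAILED;
+ *
+ * Each helper iterates the hw-functions internally, so callers do not need
+ * to loop over engines themselves.
+ */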
+#endif
diff --git a/drivers/net/ethernet/qlogic/qed/qed_sp.h b/drivers/net/ethernet/qlogic/qed/qed_sp.h
index d39f914b66ee..ea4e9ce53e0a 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_sp.h
+++ b/drivers/net/ethernet/qlogic/qed/qed_sp.h
@@ -52,6 +52,7 @@ int qed_eth_cqe_completion(struct qed_hwfn *p_hwfn,
union ramrod_data {
struct pf_start_ramrod_data pf_start;
+ struct pf_update_ramrod_data pf_update;
struct rx_queue_start_ramrod_data rx_queue_start;
struct rx_queue_update_ramrod_data rx_queue_update;
struct rx_queue_stop_ramrod_data rx_queue_stop;
@@ -61,6 +62,9 @@ union ramrod_data {
struct vport_stop_ramrod_data vport_stop;
struct vport_update_ramrod_data vport_update;
struct vport_filter_update_ramrod_data vport_filter_update;
+
+ struct vf_start_ramrod_data vf_start;
+ struct vf_stop_ramrod_data vf_stop;
};
#define EQ_MAX_CREDIT 0xffffffff
@@ -338,13 +342,29 @@ int qed_sp_init_request(struct qed_hwfn *p_hwfn,
* to the internal RAM of the UStorm by the Function Start Ramrod.
*
* @param p_hwfn
+ * @param p_tunn
* @param mode
+ * @param allow_npar_tx_switch
*
* @return int
*/
int qed_sp_pf_start(struct qed_hwfn *p_hwfn,
- enum qed_mf_mode mode);
+ struct qed_tunn_start_params *p_tunn,
+ enum qed_mf_mode mode, bool allow_npar_tx_switch);
+
+/**
+ * @brief qed_sp_pf_update - PF Function Update Ramrod
+ *
+ * This ramrod updates function-related parameters. Every parameter can be
+ * updated independently, according to configuration flags.
+ *
+ * @param p_hwfn
+ *
+ * @return int
+ */
+
+int qed_sp_pf_update(struct qed_hwfn *p_hwfn);
/**
* @brief qed_sp_pf_stop - PF Function Stop Ramrod
@@ -362,4 +382,18 @@ int qed_sp_pf_start(struct qed_hwfn *p_hwfn,
int qed_sp_pf_stop(struct qed_hwfn *p_hwfn);
+int qed_sp_pf_update_tunn_cfg(struct qed_hwfn *p_hwfn,
+ struct qed_tunn_update_params *p_tunn,
+ enum spq_mode comp_mode,
+ struct qed_spq_comp_cb *p_comp_data);
+/**
+ * @brief qed_sp_heartbeat_ramrod - Send empty Ramrod
+ *
+ * @param p_hwfn
+ *
+ * @return int
+ */
+
+int qed_sp_heartbeat_ramrod(struct qed_hwfn *p_hwfn);
+
#endif
diff --git a/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c b/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
index 1c06c37d4c3d..67f6ce3c84c8 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_sp_commands.c
@@ -15,11 +15,13 @@
#include "qed.h"
#include <linux/qed/qed_chain.h>
#include "qed_cxt.h"
+#include "qed_dcbx.h"
#include "qed_hsi.h"
#include "qed_hw.h"
#include "qed_int.h"
#include "qed_reg_addr.h"
#include "qed_sp.h"
+#include "qed_sriov.h"
int qed_sp_init_request(struct qed_hwfn *p_hwfn,
struct qed_spq_entry **pp_ent,
@@ -87,8 +89,218 @@ int qed_sp_init_request(struct qed_hwfn *p_hwfn,
return 0;
}
+static enum tunnel_clss qed_tunn_get_clss_type(u8 type)
+{
+ switch (type) {
+ case QED_TUNN_CLSS_MAC_VLAN:
+ return TUNNEL_CLSS_MAC_VLAN;
+ case QED_TUNN_CLSS_MAC_VNI:
+ return TUNNEL_CLSS_MAC_VNI;
+ case QED_TUNN_CLSS_INNER_MAC_VLAN:
+ return TUNNEL_CLSS_INNER_MAC_VLAN;
+ case QED_TUNN_CLSS_INNER_MAC_VNI:
+ return TUNNEL_CLSS_INNER_MAC_VNI;
+ default:
+ return TUNNEL_CLSS_MAC_VLAN;
+ }
+}
+
+static void
+qed_tunn_set_pf_fix_tunn_mode(struct qed_hwfn *p_hwfn,
+ struct qed_tunn_update_params *p_src,
+ struct pf_update_tunnel_config *p_tunn_cfg)
+{
+ unsigned long cached_tunn_mode = p_hwfn->cdev->tunn_mode;
+ unsigned long update_mask = p_src->tunn_mode_update_mask;
+ unsigned long tunn_mode = p_src->tunn_mode;
+ unsigned long new_tunn_mode = 0;
+
+ if (test_bit(QED_MODE_L2GRE_TUNN, &update_mask)) {
+ if (test_bit(QED_MODE_L2GRE_TUNN, &tunn_mode))
+ __set_bit(QED_MODE_L2GRE_TUNN, &new_tunn_mode);
+ } else {
+ if (test_bit(QED_MODE_L2GRE_TUNN, &cached_tunn_mode))
+ __set_bit(QED_MODE_L2GRE_TUNN, &new_tunn_mode);
+ }
+
+ if (test_bit(QED_MODE_IPGRE_TUNN, &update_mask)) {
+ if (test_bit(QED_MODE_IPGRE_TUNN, &tunn_mode))
+ __set_bit(QED_MODE_IPGRE_TUNN, &new_tunn_mode);
+ } else {
+ if (test_bit(QED_MODE_IPGRE_TUNN, &cached_tunn_mode))
+ __set_bit(QED_MODE_IPGRE_TUNN, &new_tunn_mode);
+ }
+
+ if (test_bit(QED_MODE_VXLAN_TUNN, &update_mask)) {
+ if (test_bit(QED_MODE_VXLAN_TUNN, &tunn_mode))
+ __set_bit(QED_MODE_VXLAN_TUNN, &new_tunn_mode);
+ } else {
+ if (test_bit(QED_MODE_VXLAN_TUNN, &cached_tunn_mode))
+ __set_bit(QED_MODE_VXLAN_TUNN, &new_tunn_mode);
+ }
+
+ if (p_src->update_geneve_udp_port) {
+ p_tunn_cfg->set_geneve_udp_port_flg = 1;
+ p_tunn_cfg->geneve_udp_port =
+ cpu_to_le16(p_src->geneve_udp_port);
+ }
+
+ if (test_bit(QED_MODE_L2GENEVE_TUNN, &update_mask)) {
+ if (test_bit(QED_MODE_L2GENEVE_TUNN, &tunn_mode))
+ __set_bit(QED_MODE_L2GENEVE_TUNN, &new_tunn_mode);
+ } else {
+ if (test_bit(QED_MODE_L2GENEVE_TUNN, &cached_tunn_mode))
+ __set_bit(QED_MODE_L2GENEVE_TUNN, &new_tunn_mode);
+ }
+
+ if (test_bit(QED_MODE_IPGENEVE_TUNN, &update_mask)) {
+ if (test_bit(QED_MODE_IPGENEVE_TUNN, &tunn_mode))
+ __set_bit(QED_MODE_IPGENEVE_TUNN, &new_tunn_mode);
+ } else {
+ if (test_bit(QED_MODE_IPGENEVE_TUNN, &cached_tunn_mode))
+ __set_bit(QED_MODE_IPGENEVE_TUNN, &new_tunn_mode);
+ }
+
+ p_src->tunn_mode = new_tunn_mode;
+}
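+
+/* Worked example (illustrative only): if the caller sets only the VXLAN bit
+ * in tunn_mode_update_mask while leaving VXLAN cleared in tunn_mode, VXLAN
+ * ends up cleared in the new mode, whereas the GRE and GENEVE bits keep
+ * whatever value is cached in cdev->tunn_mode; bits absent from the update
+ * mask are never changed, only re-read from the cached mode.
+ */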
+
+static void
+qed_tunn_set_pf_update_params(struct qed_hwfn *p_hwfn,
+ struct qed_tunn_update_params *p_src,
+ struct pf_update_tunnel_config *p_tunn_cfg)
+{
+ unsigned long tunn_mode = p_src->tunn_mode;
+ enum tunnel_clss type;
+
+ qed_tunn_set_pf_fix_tunn_mode(p_hwfn, p_src, p_tunn_cfg);
+ p_tunn_cfg->update_rx_pf_clss = p_src->update_rx_pf_clss;
+ p_tunn_cfg->update_tx_pf_clss = p_src->update_tx_pf_clss;
+
+ type = qed_tunn_get_clss_type(p_src->tunn_clss_vxlan);
+ p_tunn_cfg->tunnel_clss_vxlan = type;
+
+ type = qed_tunn_get_clss_type(p_src->tunn_clss_l2gre);
+ p_tunn_cfg->tunnel_clss_l2gre = type;
+
+ type = qed_tunn_get_clss_type(p_src->tunn_clss_ipgre);
+ p_tunn_cfg->tunnel_clss_ipgre = type;
+
+ if (p_src->update_vxlan_udp_port) {
+ p_tunn_cfg->set_vxlan_udp_port_flg = 1;
+ p_tunn_cfg->vxlan_udp_port = cpu_to_le16(p_src->vxlan_udp_port);
+ }
+
+ if (test_bit(QED_MODE_L2GRE_TUNN, &tunn_mode))
+ p_tunn_cfg->tx_enable_l2gre = 1;
+
+ if (test_bit(QED_MODE_IPGRE_TUNN, &tunn_mode))
+ p_tunn_cfg->tx_enable_ipgre = 1;
+
+ if (test_bit(QED_MODE_VXLAN_TUNN, &tunn_mode))
+ p_tunn_cfg->tx_enable_vxlan = 1;
+
+ if (p_src->update_geneve_udp_port) {
+ p_tunn_cfg->set_geneve_udp_port_flg = 1;
+ p_tunn_cfg->geneve_udp_port =
+ cpu_to_le16(p_src->geneve_udp_port);
+ }
+
+ if (test_bit(QED_MODE_L2GENEVE_TUNN, &tunn_mode))
+ p_tunn_cfg->tx_enable_l2geneve = 1;
+
+ if (test_bit(QED_MODE_IPGENEVE_TUNN, &tunn_mode))
+ p_tunn_cfg->tx_enable_ipgeneve = 1;
+
+ type = qed_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
+ p_tunn_cfg->tunnel_clss_l2geneve = type;
+
+ type = qed_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
+ p_tunn_cfg->tunnel_clss_ipgeneve = type;
+}
+
+static void qed_set_hw_tunn_mode(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ unsigned long tunn_mode)
+{
+ u8 l2gre_enable = 0, ipgre_enable = 0, vxlan_enable = 0;
+ u8 l2geneve_enable = 0, ipgeneve_enable = 0;
+
+ if (test_bit(QED_MODE_L2GRE_TUNN, &tunn_mode))
+ l2gre_enable = 1;
+
+ if (test_bit(QED_MODE_IPGRE_TUNN, &tunn_mode))
+ ipgre_enable = 1;
+
+ if (test_bit(QED_MODE_VXLAN_TUNN, &tunn_mode))
+ vxlan_enable = 1;
+
+ qed_set_gre_enable(p_hwfn, p_ptt, l2gre_enable, ipgre_enable);
+ qed_set_vxlan_enable(p_hwfn, p_ptt, vxlan_enable);
+
+ if (test_bit(QED_MODE_L2GENEVE_TUNN, &tunn_mode))
+ l2geneve_enable = 1;
+
+ if (test_bit(QED_MODE_IPGENEVE_TUNN, &tunn_mode))
+ ipgeneve_enable = 1;
+
+ qed_set_geneve_enable(p_hwfn, p_ptt, l2geneve_enable,
+ ipgeneve_enable);
+}
+
+static void
+qed_tunn_set_pf_start_params(struct qed_hwfn *p_hwfn,
+ struct qed_tunn_start_params *p_src,
+ struct pf_start_tunnel_config *p_tunn_cfg)
+{
+ unsigned long tunn_mode;
+ enum tunnel_clss type;
+
+ if (!p_src)
+ return;
+
+ tunn_mode = p_src->tunn_mode;
+ type = qed_tunn_get_clss_type(p_src->tunn_clss_vxlan);
+ p_tunn_cfg->tunnel_clss_vxlan = type;
+ type = qed_tunn_get_clss_type(p_src->tunn_clss_l2gre);
+ p_tunn_cfg->tunnel_clss_l2gre = type;
+ type = qed_tunn_get_clss_type(p_src->tunn_clss_ipgre);
+ p_tunn_cfg->tunnel_clss_ipgre = type;
+
+ if (p_src->update_vxlan_udp_port) {
+ p_tunn_cfg->set_vxlan_udp_port_flg = 1;
+ p_tunn_cfg->vxlan_udp_port = cpu_to_le16(p_src->vxlan_udp_port);
+ }
+
+ if (test_bit(QED_MODE_L2GRE_TUNN, &tunn_mode))
+ p_tunn_cfg->tx_enable_l2gre = 1;
+
+ if (test_bit(QED_MODE_IPGRE_TUNN, &tunn_mode))
+ p_tunn_cfg->tx_enable_ipgre = 1;
+
+ if (test_bit(QED_MODE_VXLAN_TUNN, &tunn_mode))
+ p_tunn_cfg->tx_enable_vxlan = 1;
+
+ if (p_src->update_geneve_udp_port) {
+ p_tunn_cfg->set_geneve_udp_port_flg = 1;
+ p_tunn_cfg->geneve_udp_port =
+ cpu_to_le16(p_src->geneve_udp_port);
+ }
+
+ if (test_bit(QED_MODE_L2GENEVE_TUNN, &tunn_mode))
+ p_tunn_cfg->tx_enable_l2geneve = 1;
+
+ if (test_bit(QED_MODE_IPGENEVE_TUNN, &tunn_mode))
+ p_tunn_cfg->tx_enable_ipgeneve = 1;
+
+ type = qed_tunn_get_clss_type(p_src->tunn_clss_l2geneve);
+ p_tunn_cfg->tunnel_clss_l2geneve = type;
+ type = qed_tunn_get_clss_type(p_src->tunn_clss_ipgeneve);
+ p_tunn_cfg->tunnel_clss_ipgeneve = type;
+}
+
int qed_sp_pf_start(struct qed_hwfn *p_hwfn,
- enum qed_mf_mode mode)
+ struct qed_tunn_start_params *p_tunn,
+ enum qed_mf_mode mode, bool allow_npar_tx_switch)
{
struct pf_start_ramrod_data *p_ramrod = NULL;
u16 sb = qed_int_get_sp_sb_id(p_hwfn);
@@ -143,16 +355,103 @@ int qed_sp_pf_start(struct qed_hwfn *p_hwfn,
DMA_REGPAIR_LE(p_ramrod->consolid_q_pbl_addr,
p_hwfn->p_consq->chain.pbl.p_phys_table);
+ qed_tunn_set_pf_start_params(p_hwfn, p_tunn,
+ &p_ramrod->tunnel_config);
p_hwfn->hw_info.personality = PERSONALITY_ETH;
+ if (IS_MF_SI(p_hwfn))
+ p_ramrod->allow_npar_tx_switching = allow_npar_tx_switch;
+
+ if (p_hwfn->cdev->p_iov_info) {
+ struct qed_hw_sriov_info *p_iov = p_hwfn->cdev->p_iov_info;
+
+ p_ramrod->base_vf_id = (u8) p_iov->first_vf_in_pf;
+ p_ramrod->num_vfs = (u8) p_iov->total_vfs;
+ }
+
DP_VERBOSE(p_hwfn, QED_MSG_SPQ,
"Setting event_ring_sb [id %04x index %02x], outer_tag [%d]\n",
sb, sb_index,
p_ramrod->outer_tag);
+ rc = qed_spq_post(p_hwfn, p_ent, NULL);
+
+ if (p_tunn) {
+ qed_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt,
+ p_tunn->tunn_mode);
+ p_hwfn->cdev->tunn_mode = p_tunn->tunn_mode;
+ }
+
+ return rc;
+}
+
+int qed_sp_pf_update(struct qed_hwfn *p_hwfn)
+{
+ struct qed_spq_entry *p_ent = NULL;
+ struct qed_sp_init_data init_data;
+ int rc = -EINVAL;
+
+ /* Get SPQ entry */
+ memset(&init_data, 0, sizeof(init_data));
+ init_data.cid = qed_spq_get_cid(p_hwfn);
+ init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+ init_data.comp_mode = QED_SPQ_MODE_CB;
+
+ rc = qed_sp_init_request(p_hwfn, &p_ent,
+ COMMON_RAMROD_PF_UPDATE, PROTOCOLID_COMMON,
+ &init_data);
+ if (rc)
+ return rc;
+
+ qed_dcbx_set_pf_update_params(&p_hwfn->p_dcbx_info->results,
+ &p_ent->ramrod.pf_update);
+
return qed_spq_post(p_hwfn, p_ent, NULL);
}
+/* Set pf update ramrod command params */
+int qed_sp_pf_update_tunn_cfg(struct qed_hwfn *p_hwfn,
+ struct qed_tunn_update_params *p_tunn,
+ enum spq_mode comp_mode,
+ struct qed_spq_comp_cb *p_comp_data)
+{
+ struct qed_spq_entry *p_ent = NULL;
+ struct qed_sp_init_data init_data;
+ int rc = -EINVAL;
+
+ /* Get SPQ entry */
+ memset(&init_data, 0, sizeof(init_data));
+ init_data.cid = qed_spq_get_cid(p_hwfn);
+ init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+ init_data.comp_mode = comp_mode;
+ init_data.p_comp_data = p_comp_data;
+
+ rc = qed_sp_init_request(p_hwfn, &p_ent,
+ COMMON_RAMROD_PF_UPDATE, PROTOCOLID_COMMON,
+ &init_data);
+ if (rc)
+ return rc;
+
+ qed_tunn_set_pf_update_params(p_hwfn, p_tunn,
+ &p_ent->ramrod.pf_update.tunnel_config);
+
+ rc = qed_spq_post(p_hwfn, p_ent, NULL);
+ if (rc)
+ return rc;
+
+ if (p_tunn->update_vxlan_udp_port)
+ qed_set_vxlan_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+ p_tunn->vxlan_udp_port);
+ if (p_tunn->update_geneve_udp_port)
+ qed_set_geneve_dest_port(p_hwfn, p_hwfn->p_main_ptt,
+ p_tunn->geneve_udp_port);
+
+ qed_set_hw_tunn_mode(p_hwfn, p_hwfn->p_main_ptt, p_tunn->tunn_mode);
+ p_hwfn->cdev->tunn_mode = p_tunn->tunn_mode;
+
+ return rc;
+}
+
int qed_sp_pf_stop(struct qed_hwfn *p_hwfn)
{
struct qed_spq_entry *p_ent = NULL;
@@ -173,3 +472,24 @@ int qed_sp_pf_stop(struct qed_hwfn *p_hwfn)
return qed_spq_post(p_hwfn, p_ent, NULL);
}
+
+int qed_sp_heartbeat_ramrod(struct qed_hwfn *p_hwfn)
+{
+ struct qed_spq_entry *p_ent = NULL;
+ struct qed_sp_init_data init_data;
+ int rc;
+
+ /* Get SPQ entry */
+ memset(&init_data, 0, sizeof(init_data));
+ init_data.cid = qed_spq_get_cid(p_hwfn);
+ init_data.opaque_fid = p_hwfn->hw_info.opaque_fid;
+ init_data.comp_mode = QED_SPQ_MODE_EBLOCK;
+
+ rc = qed_sp_init_request(p_hwfn, &p_ent,
+ COMMON_RAMROD_EMPTY, PROTOCOLID_COMMON,
+ &init_data);
+ if (rc)
+ return rc;
+
+ return qed_spq_post(p_hwfn, p_ent, NULL);
+}
diff --git a/drivers/net/ethernet/qlogic/qed/qed_spq.c b/drivers/net/ethernet/qlogic/qed/qed_spq.c
index 89469d5aae25..acac6626a1b2 100644
--- a/drivers/net/ethernet/qlogic/qed/qed_spq.c
+++ b/drivers/net/ethernet/qlogic/qed/qed_spq.c
@@ -27,6 +27,7 @@
#include "qed_mcp.h"
#include "qed_reg_addr.h"
#include "qed_sp.h"
+#include "qed_sriov.h"
/***************************************************************************
* Structures & Definitions
@@ -242,10 +243,17 @@ static int
qed_async_event_completion(struct qed_hwfn *p_hwfn,
struct event_ring_entry *p_eqe)
{
- DP_NOTICE(p_hwfn,
- "Unknown Async completion for protocol: %d\n",
- p_eqe->protocol_id);
- return -EINVAL;
+ switch (p_eqe->protocol_id) {
+ case PROTOCOLID_COMMON:
+ return qed_sriov_eqe_event(p_hwfn,
+ p_eqe->opcode,
+ p_eqe->echo, &p_eqe->data);
+ default:
+ DP_NOTICE(p_hwfn,
+ "Unknown Async completion for protocol: %d\n",
+ p_eqe->protocol_id);
+ return -EINVAL;
+ }
}
/***************************************************************************
@@ -379,6 +387,9 @@ static int qed_cqe_completion(
struct eth_slow_path_rx_cqe *cqe,
enum protocol_type protocol)
{
+ if (IS_VF(p_hwfn->cdev))
+ return 0;
+
/* @@@tmp - it's possible we'll eventually want to handle some
* actual commands that can arrive here, but for now this is only
* used to complete the ramrod using the echo value on the cqe
diff --git a/drivers/net/ethernet/qlogic/qed/qed_sriov.c b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
new file mode 100644
index 000000000000..c325ee857ecd
--- /dev/null
+++ b/drivers/net/ethernet/qlogic/qed/qed_sriov.c
@@ -0,0 +1,3613 @@
+/* QLogic qed NIC Driver
+ * Copyright (c) 2015 QLogic Corporation
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#include <linux/etherdevice.h>
+#include <linux/crc32.h>
+#include <linux/qed/qed_iov_if.h>
+#include "qed_cxt.h"
+#include "qed_hsi.h"
+#include "qed_hw.h"
+#include "qed_init_ops.h"
+#include "qed_int.h"
+#include "qed_mcp.h"
+#include "qed_reg_addr.h"
+#include "qed_sp.h"
+#include "qed_sriov.h"
+#include "qed_vf.h"
+
+/* IOV ramrods */
+static int qed_sp_vf_start(struct qed_hwfn *p_hwfn,
+ u32 concrete_vfid, u16 opaque_vfid)
+{
+ struct vf_start_ramrod_data *p_ramrod = NULL;
+ struct qed_spq_entry *p_ent = NULL;
+ struct qed_sp_init_data init_data;
+ int rc = -EINVAL;
+
+ /* Get SPQ entry */
+ memset(&init_data, 0, sizeof(init_data));
+ init_data.cid = qed_spq_get_cid(p_hwfn);
+ init_data.opaque_fid = opaque_vfid;
+ init_data.comp_mode = QED_SPQ_MODE_EBLOCK;
+
+ rc = qed_sp_init_request(p_hwfn, &p_ent,
+ COMMON_RAMROD_VF_START,
+ PROTOCOLID_COMMON, &init_data);
+ if (rc)
+ return rc;
+
+ p_ramrod = &p_ent->ramrod.vf_start;
+
+ p_ramrod->vf_id = GET_FIELD(concrete_vfid, PXP_CONCRETE_FID_VFID);
+ p_ramrod->opaque_fid = cpu_to_le16(opaque_vfid);
+
+ p_ramrod->personality = PERSONALITY_ETH;
+
+ return qed_spq_post(p_hwfn, p_ent, NULL);
+}
+
+static int qed_sp_vf_stop(struct qed_hwfn *p_hwfn,
+ u32 concrete_vfid, u16 opaque_vfid)
+{
+ struct vf_stop_ramrod_data *p_ramrod = NULL;
+ struct qed_spq_entry *p_ent = NULL;
+ struct qed_sp_init_data init_data;
+ int rc = -EINVAL;
+
+ /* Get SPQ entry */
+ memset(&init_data, 0, sizeof(init_data));
+ init_data.cid = qed_spq_get_cid(p_hwfn);
+ init_data.opaque_fid = opaque_vfid;
+ init_data.comp_mode = QED_SPQ_MODE_EBLOCK;
+
+ rc = qed_sp_init_request(p_hwfn, &p_ent,
+ COMMON_RAMROD_VF_STOP,
+ PROTOCOLID_COMMON, &init_data);
+ if (rc)
+ return rc;
+
+ p_ramrod = &p_ent->ramrod.vf_stop;
+
+ p_ramrod->vf_id = GET_FIELD(concrete_vfid, PXP_CONCRETE_FID_VFID);
+
+ return qed_spq_post(p_hwfn, p_ent, NULL);
+}
+
+bool qed_iov_is_valid_vfid(struct qed_hwfn *p_hwfn,
+ int rel_vf_id, bool b_enabled_only)
+{
+ if (!p_hwfn->pf_iov_info) {
+ DP_NOTICE(p_hwfn->cdev, "No iov info\n");
+ return false;
+ }
+
+ if ((rel_vf_id >= p_hwfn->cdev->p_iov_info->total_vfs) ||
+ (rel_vf_id < 0))
+ return false;
+
+ if ((!p_hwfn->pf_iov_info->vfs_array[rel_vf_id].b_init) &&
+ b_enabled_only)
+ return false;
+
+ return true;
+}
+
+static struct qed_vf_info *qed_iov_get_vf_info(struct qed_hwfn *p_hwfn,
+ u16 relative_vf_id,
+ bool b_enabled_only)
+{
+ struct qed_vf_info *vf = NULL;
+
+ if (!p_hwfn->pf_iov_info) {
+ DP_NOTICE(p_hwfn->cdev, "No iov info\n");
+ return NULL;
+ }
+
+ if (qed_iov_is_valid_vfid(p_hwfn, relative_vf_id, b_enabled_only))
+ vf = &p_hwfn->pf_iov_info->vfs_array[relative_vf_id];
+ else
+ DP_ERR(p_hwfn, "qed_iov_get_vf_info: VF[%d] is not enabled\n",
+ relative_vf_id);
+
+ return vf;
+}
+
+int qed_iov_post_vf_bulletin(struct qed_hwfn *p_hwfn,
+ int vfid, struct qed_ptt *p_ptt)
+{
+ struct qed_bulletin_content *p_bulletin;
+ int crc_size = sizeof(p_bulletin->crc);
+ struct qed_dmae_params params;
+ struct qed_vf_info *p_vf;
+
+ p_vf = qed_iov_get_vf_info(p_hwfn, (u16) vfid, true);
+ if (!p_vf)
+ return -EINVAL;
+
+ if (!p_vf->vf_bulletin)
+ return -EINVAL;
+
+ p_bulletin = p_vf->bulletin.p_virt;
+
+ /* Increment bulletin board version and compute crc */
+ p_bulletin->version++;
+ p_bulletin->crc = crc32(0, (u8 *)p_bulletin + crc_size,
+ p_vf->bulletin.size - crc_size);
+
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "Posting Bulletin 0x%08x to VF[%d] (CRC 0x%08x)\n",
+ p_bulletin->version, p_vf->relative_vf_id, p_bulletin->crc);
+
+ /* propagate bulletin board via dmae to vm memory */
+ memset(&params, 0, sizeof(params));
+ params.flags = QED_DMAE_FLAG_VF_DST;
+ params.dst_vfid = p_vf->abs_vf_id;
+ return qed_dmae_host2host(p_hwfn, p_ptt, p_vf->bulletin.phys,
+ p_vf->vf_bulletin, p_vf->bulletin.size / 4,
+ &params);
+}
+
+static int qed_iov_pci_cfg_info(struct qed_dev *cdev)
+{
+ struct qed_hw_sriov_info *iov = cdev->p_iov_info;
+ int pos = iov->pos;
+
+ DP_VERBOSE(cdev, QED_MSG_IOV, "sriov ext pos %d\n", pos);
+ pci_read_config_word(cdev->pdev, pos + PCI_SRIOV_CTRL, &iov->ctrl);
+
+ pci_read_config_word(cdev->pdev,
+ pos + PCI_SRIOV_TOTAL_VF, &iov->total_vfs);
+ pci_read_config_word(cdev->pdev,
+ pos + PCI_SRIOV_INITIAL_VF, &iov->initial_vfs);
+
+ pci_read_config_word(cdev->pdev, pos + PCI_SRIOV_NUM_VF, &iov->num_vfs);
+ if (iov->num_vfs) {
+ DP_VERBOSE(cdev,
+ QED_MSG_IOV,
+ "Number of VFs are already set to non-zero value. Ignoring PCI configuration value\n");
+ iov->num_vfs = 0;
+ }
+
+ pci_read_config_word(cdev->pdev,
+ pos + PCI_SRIOV_VF_OFFSET, &iov->offset);
+
+ pci_read_config_word(cdev->pdev,
+ pos + PCI_SRIOV_VF_STRIDE, &iov->stride);
+
+ pci_read_config_word(cdev->pdev,
+ pos + PCI_SRIOV_VF_DID, &iov->vf_device_id);
+
+ pci_read_config_dword(cdev->pdev,
+ pos + PCI_SRIOV_SUP_PGSIZE, &iov->pgsz);
+
+ pci_read_config_dword(cdev->pdev, pos + PCI_SRIOV_CAP, &iov->cap);
+
+ pci_read_config_byte(cdev->pdev, pos + PCI_SRIOV_FUNC_LINK, &iov->link);
+
+ DP_VERBOSE(cdev,
+ QED_MSG_IOV,
+ "IOV info: nres %d, cap 0x%x, ctrl 0x%x, total %d, initial %d, num vfs %d, offset %d, stride %d, page size 0x%x\n",
+ iov->nres,
+ iov->cap,
+ iov->ctrl,
+ iov->total_vfs,
+ iov->initial_vfs,
+ iov->nr_virtfn, iov->offset, iov->stride, iov->pgsz);
+
+ /* Some sanity checks */
+ if (iov->num_vfs > NUM_OF_VFS(cdev) ||
+ iov->total_vfs > NUM_OF_VFS(cdev)) {
+ /* This can happen only due to a bug. In this case we set
+ * num_vfs to zero to avoid memory corruption in the code that
+ * assumes max number of vfs
+ */
+ DP_NOTICE(cdev,
+ "IOV: Unexpected number of vfs set: %d setting num_vf to zero\n",
+ iov->num_vfs);
+
+ iov->num_vfs = 0;
+ iov->total_vfs = 0;
+ }
+
+ return 0;
+}
+
+static void qed_iov_clear_vf_igu_blocks(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt)
+{
+ struct qed_igu_block *p_sb;
+ u16 sb_id;
+ u32 val;
+
+ if (!p_hwfn->hw_info.p_igu_info) {
+ DP_ERR(p_hwfn,
+ "qed_iov_clear_vf_igu_blocks IGU Info not initialized\n");
+ return;
+ }
+
+ for (sb_id = 0; sb_id < QED_MAPPING_MEMORY_SIZE(p_hwfn->cdev);
+ sb_id++) {
+ p_sb = &p_hwfn->hw_info.p_igu_info->igu_map.igu_blocks[sb_id];
+ if ((p_sb->status & QED_IGU_STATUS_FREE) &&
+ !(p_sb->status & QED_IGU_STATUS_PF)) {
+ val = qed_rd(p_hwfn, p_ptt,
+ IGU_REG_MAPPING_MEMORY + sb_id * 4);
+ SET_FIELD(val, IGU_MAPPING_LINE_VALID, 0);
+ qed_wr(p_hwfn, p_ptt,
+ IGU_REG_MAPPING_MEMORY + 4 * sb_id, val);
+ }
+ }
+}
+
+static void qed_iov_setup_vfdb(struct qed_hwfn *p_hwfn)
+{
+ struct qed_hw_sriov_info *p_iov = p_hwfn->cdev->p_iov_info;
+ struct qed_pf_iov *p_iov_info = p_hwfn->pf_iov_info;
+ struct qed_bulletin_content *p_bulletin_virt;
+ dma_addr_t req_p, rply_p, bulletin_p;
+ union pfvf_tlvs *p_reply_virt_addr;
+ union vfpf_tlvs *p_req_virt_addr;
+ u8 idx = 0;
+
+ memset(p_iov_info->vfs_array, 0, sizeof(p_iov_info->vfs_array));
+
+ p_req_virt_addr = p_iov_info->mbx_msg_virt_addr;
+ req_p = p_iov_info->mbx_msg_phys_addr;
+ p_reply_virt_addr = p_iov_info->mbx_reply_virt_addr;
+ rply_p = p_iov_info->mbx_reply_phys_addr;
+ p_bulletin_virt = p_iov_info->p_bulletins;
+ bulletin_p = p_iov_info->bulletins_phys;
+ if (!p_req_virt_addr || !p_reply_virt_addr || !p_bulletin_virt) {
+ DP_ERR(p_hwfn,
+ "qed_iov_setup_vfdb called without allocating mem first\n");
+ return;
+ }
+
+ for (idx = 0; idx < p_iov->total_vfs; idx++) {
+ struct qed_vf_info *vf = &p_iov_info->vfs_array[idx];
+ u32 concrete;
+
+ vf->vf_mbx.req_virt = p_req_virt_addr + idx;
+ vf->vf_mbx.req_phys = req_p + idx * sizeof(union vfpf_tlvs);
+ vf->vf_mbx.reply_virt = p_reply_virt_addr + idx;
+ vf->vf_mbx.reply_phys = rply_p + idx * sizeof(union pfvf_tlvs);
+
+ vf->state = VF_STOPPED;
+ vf->b_init = false;
+
+ vf->bulletin.phys = idx *
+ sizeof(struct qed_bulletin_content) +
+ bulletin_p;
+ vf->bulletin.p_virt = p_bulletin_virt + idx;
+ vf->bulletin.size = sizeof(struct qed_bulletin_content);
+
+ vf->relative_vf_id = idx;
+ vf->abs_vf_id = idx + p_iov->first_vf_in_pf;
+ concrete = qed_vfid_to_concrete(p_hwfn, vf->abs_vf_id);
+ vf->concrete_fid = concrete;
+ vf->opaque_fid = (p_hwfn->hw_info.opaque_fid & 0xff) |
+ (vf->abs_vf_id << 8);
+ vf->vport_id = idx + 1;
+ }
+}
+
+static int qed_iov_allocate_vfdb(struct qed_hwfn *p_hwfn)
+{
+ struct qed_pf_iov *p_iov_info = p_hwfn->pf_iov_info;
+ void **p_v_addr;
+ u16 num_vfs = 0;
+
+ num_vfs = p_hwfn->cdev->p_iov_info->total_vfs;
+
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "qed_iov_allocate_vfdb for %d VFs\n", num_vfs);
+
+ /* Allocate PF Mailbox buffer (per-VF) */
+ p_iov_info->mbx_msg_size = sizeof(union vfpf_tlvs) * num_vfs;
+ p_v_addr = &p_iov_info->mbx_msg_virt_addr;
+ *p_v_addr = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev,
+ p_iov_info->mbx_msg_size,
+ &p_iov_info->mbx_msg_phys_addr,
+ GFP_KERNEL);
+ if (!*p_v_addr)
+ return -ENOMEM;
+
+ /* Allocate PF Mailbox Reply buffer (per-VF) */
+ p_iov_info->mbx_reply_size = sizeof(union pfvf_tlvs) * num_vfs;
+ p_v_addr = &p_iov_info->mbx_reply_virt_addr;
+ *p_v_addr = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev,
+ p_iov_info->mbx_reply_size,
+ &p_iov_info->mbx_reply_phys_addr,
+ GFP_KERNEL);
+ if (!*p_v_addr)
+ return -ENOMEM;
+
+ p_iov_info->bulletins_size = sizeof(struct qed_bulletin_content) *
+ num_vfs;
+ p_v_addr = &p_iov_info->p_bulletins;
+ *p_v_addr = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev,
+ p_iov_info->bulletins_size,
+ &p_iov_info->bulletins_phys,
+ GFP_KERNEL);
+ if (!*p_v_addr)
+ return -ENOMEM;
+
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_IOV,
+ "PF's Requests mailbox [%p virt 0x%llx phys], Response mailbox [%p virt 0x%llx phys] Bulletins [%p virt 0x%llx phys]\n",
+ p_iov_info->mbx_msg_virt_addr,
+ (u64) p_iov_info->mbx_msg_phys_addr,
+ p_iov_info->mbx_reply_virt_addr,
+ (u64) p_iov_info->mbx_reply_phys_addr,
+ p_iov_info->p_bulletins, (u64) p_iov_info->bulletins_phys);
+
+ return 0;
+}
+
+static void qed_iov_free_vfdb(struct qed_hwfn *p_hwfn)
+{
+ struct qed_pf_iov *p_iov_info = p_hwfn->pf_iov_info;
+
+ if (p_hwfn->pf_iov_info->mbx_msg_virt_addr)
+ dma_free_coherent(&p_hwfn->cdev->pdev->dev,
+ p_iov_info->mbx_msg_size,
+ p_iov_info->mbx_msg_virt_addr,
+ p_iov_info->mbx_msg_phys_addr);
+
+ if (p_hwfn->pf_iov_info->mbx_reply_virt_addr)
+ dma_free_coherent(&p_hwfn->cdev->pdev->dev,
+ p_iov_info->mbx_reply_size,
+ p_iov_info->mbx_reply_virt_addr,
+ p_iov_info->mbx_reply_phys_addr);
+
+ if (p_iov_info->p_bulletins)
+ dma_free_coherent(&p_hwfn->cdev->pdev->dev,
+ p_iov_info->bulletins_size,
+ p_iov_info->p_bulletins,
+ p_iov_info->bulletins_phys);
+}
+
+int qed_iov_alloc(struct qed_hwfn *p_hwfn)
+{
+ struct qed_pf_iov *p_sriov;
+
+ if (!IS_PF_SRIOV(p_hwfn)) {
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "No SR-IOV - no need for IOV db\n");
+ return 0;
+ }
+
+ p_sriov = kzalloc(sizeof(*p_sriov), GFP_KERNEL);
+ if (!p_sriov) {
+ DP_NOTICE(p_hwfn, "Failed to allocate `struct qed_sriov'\n");
+ return -ENOMEM;
+ }
+
+ p_hwfn->pf_iov_info = p_sriov;
+
+ return qed_iov_allocate_vfdb(p_hwfn);
+}
+
+void qed_iov_setup(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+{
+ if (!IS_PF_SRIOV(p_hwfn) || !IS_PF_SRIOV_ALLOC(p_hwfn))
+ return;
+
+ qed_iov_setup_vfdb(p_hwfn);
+ qed_iov_clear_vf_igu_blocks(p_hwfn, p_ptt);
+}
+
+void qed_iov_free(struct qed_hwfn *p_hwfn)
+{
+ if (IS_PF_SRIOV_ALLOC(p_hwfn)) {
+ qed_iov_free_vfdb(p_hwfn);
+ kfree(p_hwfn->pf_iov_info);
+ }
+}
+
+void qed_iov_free_hw_info(struct qed_dev *cdev)
+{
+ kfree(cdev->p_iov_info);
+ cdev->p_iov_info = NULL;
+}
+
+int qed_iov_hw_info(struct qed_hwfn *p_hwfn)
+{
+ struct qed_dev *cdev = p_hwfn->cdev;
+ int pos;
+ int rc;
+
+ if (IS_VF(p_hwfn->cdev))
+ return 0;
+
+ /* Learn the PCI configuration */
+ pos = pci_find_ext_capability(p_hwfn->cdev->pdev,
+ PCI_EXT_CAP_ID_SRIOV);
+ if (!pos) {
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV, "No PCIe IOV support\n");
+ return 0;
+ }
+
+ /* Allocate a new struct for IOV information */
+ cdev->p_iov_info = kzalloc(sizeof(*cdev->p_iov_info), GFP_KERNEL);
+ if (!cdev->p_iov_info) {
+ DP_NOTICE(p_hwfn, "Can't support IOV due to lack of memory\n");
+ return -ENOMEM;
+ }
+ cdev->p_iov_info->pos = pos;
+
+ rc = qed_iov_pci_cfg_info(cdev);
+ if (rc)
+ return rc;
+
+ /* We want PF IOV to be synonymous with the existence of p_iov_info;
+ * In case the capability is published but there are no VFs, simply
+ * de-allocate the struct.
+ */
+ if (!cdev->p_iov_info->total_vfs) {
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "IOV capabilities, but no VFs are published\n");
+ kfree(cdev->p_iov_info);
+ cdev->p_iov_info = NULL;
+ return 0;
+ }
+
+ /* Calculate the first VF index - this is a bit tricky; Basically,
+ * VFs start at offset 16 relative to PF0, and 2nd engine VFs begin
+ * after the first engine's VFs.
+ */
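+ /* Worked example (hypothetical numbers): if the SR-IOV capability reports
+ * an offset of 16 and this is absolute PF 2, then first_vf_in_pf is
+ * 16 + 2 - 16 = 2; on the second engine MAX_NUM_VFS_BB is subtracted so
+ * the index stays relative to that engine's VF range.
+ */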
+ cdev->p_iov_info->first_vf_in_pf = p_hwfn->cdev->p_iov_info->offset +
+ p_hwfn->abs_pf_id - 16;
+ if (QED_PATH_ID(p_hwfn))
+ cdev->p_iov_info->first_vf_in_pf -= MAX_NUM_VFS_BB;
+
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "First VF in hwfn 0x%08x\n",
+ cdev->p_iov_info->first_vf_in_pf);
+
+ return 0;
+}
+
+static bool qed_iov_pf_sanity_check(struct qed_hwfn *p_hwfn, int vfid)
+{
+ /* Check PF supports sriov */
+ if (IS_VF(p_hwfn->cdev) || !IS_QED_SRIOV(p_hwfn->cdev) ||
+ !IS_PF_SRIOV_ALLOC(p_hwfn))
+ return false;
+
+ /* Check VF validity */
+ if (!qed_iov_is_valid_vfid(p_hwfn, vfid, true))
+ return false;
+
+ return true;
+}
+
+static void qed_iov_set_vf_to_disable(struct qed_dev *cdev,
+ u16 rel_vf_id, u8 to_disable)
+{
+ struct qed_vf_info *vf;
+ int i;
+
+ for_each_hwfn(cdev, i) {
+ struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
+
+ vf = qed_iov_get_vf_info(p_hwfn, rel_vf_id, false);
+ if (!vf)
+ continue;
+
+ vf->to_disable = to_disable;
+ }
+}
+
+void qed_iov_set_vfs_to_disable(struct qed_dev *cdev, u8 to_disable)
+{
+ u16 i;
+
+ if (!IS_QED_SRIOV(cdev))
+ return;
+
+ for (i = 0; i < cdev->p_iov_info->total_vfs; i++)
+ qed_iov_set_vf_to_disable(cdev, i, to_disable);
+}
+
+static void qed_iov_vf_pglue_clear_err(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt, u8 abs_vfid)
+{
+ qed_wr(p_hwfn, p_ptt,
+ PGLUE_B_REG_WAS_ERROR_VF_31_0_CLR + (abs_vfid >> 5) * 4,
+ 1 << (abs_vfid & 0x1f));
+}
+
+static void qed_iov_vf_igu_reset(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt, struct qed_vf_info *vf)
+{
+ int i;
+
+ /* Set VF masks and configuration - pretend */
+ qed_fid_pretend(p_hwfn, p_ptt, (u16) vf->concrete_fid);
+
+ qed_wr(p_hwfn, p_ptt, IGU_REG_STATISTIC_NUM_VF_MSG_SENT, 0);
+
+ /* unpretend */
+ qed_fid_pretend(p_hwfn, p_ptt, (u16) p_hwfn->hw_info.concrete_fid);
+
+ /* iterate over all queues, clear sb consumer */
+ for (i = 0; i < vf->num_sbs; i++)
+ qed_int_igu_init_pure_rt_single(p_hwfn, p_ptt,
+ vf->igu_sbs[i],
+ vf->opaque_fid, true);
+}
+
+static void qed_iov_vf_igu_set_int(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *vf, bool enable)
+{
+ u32 igu_vf_conf;
+
+ qed_fid_pretend(p_hwfn, p_ptt, (u16) vf->concrete_fid);
+
+ igu_vf_conf = qed_rd(p_hwfn, p_ptt, IGU_REG_VF_CONFIGURATION);
+
+ if (enable)
+ igu_vf_conf |= IGU_VF_CONF_MSI_MSIX_EN;
+ else
+ igu_vf_conf &= ~IGU_VF_CONF_MSI_MSIX_EN;
+
+ qed_wr(p_hwfn, p_ptt, IGU_REG_VF_CONFIGURATION, igu_vf_conf);
+
+ /* unpretend */
+ qed_fid_pretend(p_hwfn, p_ptt, (u16) p_hwfn->hw_info.concrete_fid);
+}
+
+static int qed_iov_enable_vf_access(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *vf)
+{
+ u32 igu_vf_conf = IGU_VF_CONF_FUNC_EN;
+ int rc;
+
+ if (vf->to_disable)
+ return 0;
+
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_IOV,
+ "Enable internal access for vf %x [abs %x]\n",
+ vf->abs_vf_id, QED_VF_ABS_ID(p_hwfn, vf));
+
+ qed_iov_vf_pglue_clear_err(p_hwfn, p_ptt, QED_VF_ABS_ID(p_hwfn, vf));
+
+ qed_iov_vf_igu_reset(p_hwfn, p_ptt, vf);
+
+ rc = qed_mcp_config_vf_msix(p_hwfn, p_ptt, vf->abs_vf_id, vf->num_sbs);
+ if (rc)
+ return rc;
+
+ qed_fid_pretend(p_hwfn, p_ptt, (u16) vf->concrete_fid);
+
+ SET_FIELD(igu_vf_conf, IGU_VF_CONF_PARENT, p_hwfn->rel_pf_id);
+ STORE_RT_REG(p_hwfn, IGU_REG_VF_CONFIGURATION_RT_OFFSET, igu_vf_conf);
+
+ qed_init_run(p_hwfn, p_ptt, PHASE_VF, vf->abs_vf_id,
+ p_hwfn->hw_info.hw_mode);
+
+ /* unpretend */
+ qed_fid_pretend(p_hwfn, p_ptt, (u16) p_hwfn->hw_info.concrete_fid);
+
+ if (vf->state != VF_STOPPED) {
+ DP_NOTICE(p_hwfn, "VF[%02x] is already started\n",
+ vf->abs_vf_id);
+ return -EINVAL;
+ }
+
+ /* Start VF */
+ rc = qed_sp_vf_start(p_hwfn, vf->concrete_fid, vf->opaque_fid);
+ if (rc)
+ DP_NOTICE(p_hwfn, "Failed to start VF[%02x]\n", vf->abs_vf_id);
+
+ vf->state = VF_FREE;
+
+ return rc;
+}
+
+/**
+ * @brief qed_iov_config_perm_table - configure the permission
+ * zone table.
+ * In E4, queue zone permission table size is 320x9. There
+ * are 320 VF queues for single engine device (256 for dual
+ * engine device), and each entry has the following format:
+ * {Valid, VF[7:0]}
+ * @param p_hwfn
+ * @param p_ptt
+ * @param vf
+ * @param enable
+ */
+static void qed_iov_config_perm_table(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *vf, u8 enable)
+{
+ u32 reg_addr, val;
+ u16 qzone_id = 0;
+ int qid;
+
+ for (qid = 0; qid < vf->num_rxqs; qid++) {
+ qed_fw_l2_queue(p_hwfn, vf->vf_queues[qid].fw_rx_qid,
+ &qzone_id);
+
+ reg_addr = PSWHST_REG_ZONE_PERMISSION_TABLE + qzone_id * 4;
+ val = enable ? (vf->abs_vf_id | (1 << 8)) : 0;
+ qed_wr(p_hwfn, p_ptt, reg_addr, val);
+ }
+}
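+
+/* Worked example (illustrative only): with enable set, a VF whose abs_vf_id
+ * is 3 gets each of its queue-zone entries programmed to 0x103, i.e. the
+ * valid bit (1 << 8) OR'ed with the VF number in bits [7:0]; with enable
+ * clear, the entry is simply zeroed.
+ */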
+
+static void qed_iov_enable_vf_traffic(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *vf)
+{
+ /* Reset vf in IGU - interrupts are still disabled */
+ qed_iov_vf_igu_reset(p_hwfn, p_ptt, vf);
+
+ qed_iov_vf_igu_set_int(p_hwfn, p_ptt, vf, 1);
+
+ /* Permission Table */
+ qed_iov_config_perm_table(p_hwfn, p_ptt, vf, true);
+}
+
+static u8 qed_iov_alloc_vf_igu_sbs(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *vf, u16 num_rx_queues)
+{
+ struct qed_igu_block *igu_blocks;
+ int qid = 0, igu_id = 0;
+ u32 val = 0;
+
+ igu_blocks = p_hwfn->hw_info.p_igu_info->igu_map.igu_blocks;
+
+ if (num_rx_queues > p_hwfn->hw_info.p_igu_info->free_blks)
+ num_rx_queues = p_hwfn->hw_info.p_igu_info->free_blks;
+ p_hwfn->hw_info.p_igu_info->free_blks -= num_rx_queues;
+
+ SET_FIELD(val, IGU_MAPPING_LINE_FUNCTION_NUMBER, vf->abs_vf_id);
+ SET_FIELD(val, IGU_MAPPING_LINE_VALID, 1);
+ SET_FIELD(val, IGU_MAPPING_LINE_PF_VALID, 0);
+
+ while ((qid < num_rx_queues) &&
+ (igu_id < QED_MAPPING_MEMORY_SIZE(p_hwfn->cdev))) {
+ if (igu_blocks[igu_id].status & QED_IGU_STATUS_FREE) {
+ struct cau_sb_entry sb_entry;
+
+ vf->igu_sbs[qid] = (u16)igu_id;
+ igu_blocks[igu_id].status &= ~QED_IGU_STATUS_FREE;
+
+ SET_FIELD(val, IGU_MAPPING_LINE_VECTOR_NUMBER, qid);
+
+ qed_wr(p_hwfn, p_ptt,
+ IGU_REG_MAPPING_MEMORY + sizeof(u32) * igu_id,
+ val);
+
+ /* Configure the igu SB in CAU that was marked valid */
+ qed_init_cau_sb_entry(p_hwfn, &sb_entry,
+ p_hwfn->rel_pf_id,
+ vf->abs_vf_id, 1);
+ qed_dmae_host2grc(p_hwfn, p_ptt,
+ (u64)(uintptr_t)&sb_entry,
+ CAU_REG_SB_VAR_MEMORY +
+ igu_id * sizeof(u64), 2, 0);
+ qid++;
+ }
+ igu_id++;
+ }
+
+ vf->num_sbs = (u8) num_rx_queues;
+
+ return vf->num_sbs;
+}
+
+static void qed_iov_free_vf_igu_sbs(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *vf)
+{
+ struct qed_igu_info *p_info = p_hwfn->hw_info.p_igu_info;
+ int idx, igu_id;
+ u32 addr, val;
+
+ /* Invalidate igu CAM lines and mark them as free */
+ for (idx = 0; idx < vf->num_sbs; idx++) {
+ igu_id = vf->igu_sbs[idx];
+ addr = IGU_REG_MAPPING_MEMORY + sizeof(u32) * igu_id;
+
+ val = qed_rd(p_hwfn, p_ptt, addr);
+ SET_FIELD(val, IGU_MAPPING_LINE_VALID, 0);
+ qed_wr(p_hwfn, p_ptt, addr, val);
+
+ p_info->igu_map.igu_blocks[igu_id].status |=
+ QED_IGU_STATUS_FREE;
+
+ p_hwfn->hw_info.p_igu_info->free_blks++;
+ }
+
+ vf->num_sbs = 0;
+}
+
+static int qed_iov_init_hw_for_vf(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ u16 rel_vf_id, u16 num_rx_queues)
+{
+ u8 num_of_vf_avaiable_chains = 0;
+ struct qed_vf_info *vf = NULL;
+ int rc = 0;
+ u32 cids;
+ u8 i;
+
+ vf = qed_iov_get_vf_info(p_hwfn, rel_vf_id, false);
+ if (!vf) {
+ DP_ERR(p_hwfn, "qed_iov_init_hw_for_vf : vf is NULL\n");
+ return -EINVAL;
+ }
+
+ if (vf->b_init) {
+ DP_NOTICE(p_hwfn, "VF[%d] is already active.\n", rel_vf_id);
+ return -EINVAL;
+ }
+
+ /* Limit number of queues according to number of CIDs */
+ qed_cxt_get_proto_cid_count(p_hwfn, PROTOCOLID_ETH, &cids);
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_IOV,
+ "VF[%d] - requesting to initialize for 0x%04x queues [0x%04x CIDs available]\n",
+ vf->relative_vf_id, num_rx_queues, (u16) cids);
+ num_rx_queues = min_t(u16, num_rx_queues, ((u16) cids));
+
+ num_of_vf_avaiable_chains = qed_iov_alloc_vf_igu_sbs(p_hwfn,
+ p_ptt,
+ vf,
+ num_rx_queues);
+ if (!num_of_vf_avaiable_chains) {
+ DP_ERR(p_hwfn, "no available igu sbs\n");
+ return -ENOMEM;
+ }
+
+ /* Choose queue number and index ranges */
+ vf->num_rxqs = num_of_vf_avaiable_chains;
+ vf->num_txqs = num_of_vf_avaiable_chains;
+
+ for (i = 0; i < vf->num_rxqs; i++) {
+ u16 queue_id = qed_int_queue_id_from_sb_id(p_hwfn,
+ vf->igu_sbs[i]);
+
+ if (queue_id > RESC_NUM(p_hwfn, QED_L2_QUEUE)) {
+ DP_NOTICE(p_hwfn,
+ "VF[%d] will require utilizing of out-of-bounds queues - %04x\n",
+ vf->relative_vf_id, queue_id);
+ return -EINVAL;
+ }
+
+ /* CIDs are per-VF, so no problem having them 0-based. */
+ vf->vf_queues[i].fw_rx_qid = queue_id;
+ vf->vf_queues[i].fw_tx_qid = queue_id;
+ vf->vf_queues[i].fw_cid = i;
+
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "VF[%d] - [%d] SB %04x, Tx/Rx queue %04x CID %04x\n",
+ vf->relative_vf_id, i, vf->igu_sbs[i], queue_id, i);
+ }
+ rc = qed_iov_enable_vf_access(p_hwfn, p_ptt, vf);
+ if (!rc) {
+ vf->b_init = true;
+
+ if (IS_LEAD_HWFN(p_hwfn))
+ p_hwfn->cdev->p_iov_info->num_vfs++;
+ }
+
+ return rc;
+}
+
+static void qed_iov_set_link(struct qed_hwfn *p_hwfn,
+ u16 vfid,
+ struct qed_mcp_link_params *params,
+ struct qed_mcp_link_state *link,
+ struct qed_mcp_link_capabilities *p_caps)
+{
+ struct qed_vf_info *p_vf = qed_iov_get_vf_info(p_hwfn,
+ vfid,
+ false);
+ struct qed_bulletin_content *p_bulletin;
+
+ if (!p_vf)
+ return;
+
+ p_bulletin = p_vf->bulletin.p_virt;
+ p_bulletin->req_autoneg = params->speed.autoneg;
+ p_bulletin->req_adv_speed = params->speed.advertised_speeds;
+ p_bulletin->req_forced_speed = params->speed.forced_speed;
+ p_bulletin->req_autoneg_pause = params->pause.autoneg;
+ p_bulletin->req_forced_rx = params->pause.forced_rx;
+ p_bulletin->req_forced_tx = params->pause.forced_tx;
+ p_bulletin->req_loopback = params->loopback_mode;
+
+ p_bulletin->link_up = link->link_up;
+ p_bulletin->speed = link->speed;
+ p_bulletin->full_duplex = link->full_duplex;
+ p_bulletin->autoneg = link->an;
+ p_bulletin->autoneg_complete = link->an_complete;
+ p_bulletin->parallel_detection = link->parallel_detection;
+ p_bulletin->pfc_enabled = link->pfc_enabled;
+ p_bulletin->partner_adv_speed = link->partner_adv_speed;
+ p_bulletin->partner_tx_flow_ctrl_en = link->partner_tx_flow_ctrl_en;
+ p_bulletin->partner_rx_flow_ctrl_en = link->partner_rx_flow_ctrl_en;
+ p_bulletin->partner_adv_pause = link->partner_adv_pause;
+ p_bulletin->sfp_tx_fault = link->sfp_tx_fault;
+
+ p_bulletin->capability_speed = p_caps->speed_capabilities;
+}
+
+static int qed_iov_release_hw_for_vf(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt, u16 rel_vf_id)
+{
+ struct qed_mcp_link_capabilities caps;
+ struct qed_mcp_link_params params;
+ struct qed_mcp_link_state link;
+ struct qed_vf_info *vf = NULL;
+ int rc = 0;
+
+ vf = qed_iov_get_vf_info(p_hwfn, rel_vf_id, true);
+ if (!vf) {
+ DP_ERR(p_hwfn, "qed_iov_release_hw_for_vf : vf is NULL\n");
+ return -EINVAL;
+ }
+
+ if (vf->bulletin.p_virt)
+ memset(vf->bulletin.p_virt, 0, sizeof(*vf->bulletin.p_virt));
+
+ memset(&vf->p_vf_info, 0, sizeof(vf->p_vf_info));
+
+ /* Get the link configuration back in bulletin so
+ * that when VFs are re-enabled they get the actual
+ * link configuration.
+ */
+ memcpy(&params, qed_mcp_get_link_params(p_hwfn), sizeof(params));
+ memcpy(&link, qed_mcp_get_link_state(p_hwfn), sizeof(link));
+ memcpy(&caps, qed_mcp_get_link_capabilities(p_hwfn), sizeof(caps));
+ qed_iov_set_link(p_hwfn, rel_vf_id, &params, &link, &caps);
+
+ if (vf->state != VF_STOPPED) {
+ /* Stopping the VF */
+ rc = qed_sp_vf_stop(p_hwfn, vf->concrete_fid, vf->opaque_fid);
+
+ if (rc != 0) {
+ DP_ERR(p_hwfn, "qed_sp_vf_stop returned error %d\n",
+ rc);
+ return rc;
+ }
+
+ vf->state = VF_STOPPED;
+ }
+
+ /* Disabling interrupts and resetting the permission table were done during
+ * vf-close; however, we could get here without going through vf_close
+ */
+ /* Disable Interrupts for VF */
+ qed_iov_vf_igu_set_int(p_hwfn, p_ptt, vf, 0);
+
+ /* Reset Permission table */
+ qed_iov_config_perm_table(p_hwfn, p_ptt, vf, 0);
+
+ vf->num_rxqs = 0;
+ vf->num_txqs = 0;
+ qed_iov_free_vf_igu_sbs(p_hwfn, p_ptt, vf);
+
+ if (vf->b_init) {
+ vf->b_init = false;
+
+ if (IS_LEAD_HWFN(p_hwfn))
+ p_hwfn->cdev->p_iov_info->num_vfs--;
+ }
+
+ return 0;
+}
+
+static bool qed_iov_tlv_supported(u16 tlvtype)
+{
+ return CHANNEL_TLV_NONE < tlvtype && tlvtype < CHANNEL_TLV_MAX;
+}
+
+/* place a given tlv on the tlv buffer, continuing current tlv list */
+void *qed_add_tlv(struct qed_hwfn *p_hwfn, u8 **offset, u16 type, u16 length)
+{
+ struct channel_tlv *tl = (struct channel_tlv *)*offset;
+
+ tl->type = type;
+ tl->length = length;
+
+ /* Offset should keep pointing to next TLV (the end of the last) */
+ *offset += length;
+
+ /* Return a pointer to the start of the added tlv */
+ return *offset - length;
+}
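+
+/* Illustrative composition sketch (not part of this patch): a reply chain is
+ * built by placing the response TLV first and terminating with a list-end
+ * marker, e.g.:
+ *
+ *	offset = (u8 *)mbx->reply_virt;
+ *	resp = qed_add_tlv(p_hwfn, &offset, CHANNEL_TLV_ACQUIRE, sizeof(*resp));
+ *	qed_add_tlv(p_hwfn, &offset, CHANNEL_TLV_LIST_END,
+ *		    sizeof(struct channel_list_end_tlv));
+ *
+ * qed_iov_prepare_resp() below wraps this exact pattern.
+ */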
+
+/* list the types and lengths of the tlvs on the buffer */
+void qed_dp_tlv_list(struct qed_hwfn *p_hwfn, void *tlvs_list)
+{
+ u16 i = 1, total_length = 0;
+ struct channel_tlv *tlv;
+
+ do {
+ tlv = (struct channel_tlv *)((u8 *)tlvs_list + total_length);
+
+ /* output tlv */
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "TLV number %d: type %d, length %d\n",
+ i, tlv->type, tlv->length);
+
+ if (tlv->type == CHANNEL_TLV_LIST_END)
+ return;
+
+ /* Validate entry - protect against malicious VFs */
+ if (!tlv->length) {
+ DP_NOTICE(p_hwfn, "TLV of length 0 found\n");
+ return;
+ }
+
+ total_length += tlv->length;
+
+ if (total_length >= sizeof(struct tlv_buffer_size)) {
+ DP_NOTICE(p_hwfn, "TLV ==> Buffer overflow\n");
+ return;
+ }
+
+ i++;
+ } while (1);
+}
+
+static void qed_iov_send_response(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *p_vf,
+ u16 length, u8 status)
+{
+ struct qed_iov_vf_mbx *mbx = &p_vf->vf_mbx;
+ struct qed_dmae_params params;
+ u8 eng_vf_id;
+
+ mbx->reply_virt->default_resp.hdr.status = status;
+
+ qed_dp_tlv_list(p_hwfn, mbx->reply_virt);
+
+ eng_vf_id = p_vf->abs_vf_id;
+
+ memset(&params, 0, sizeof(struct qed_dmae_params));
+ params.flags = QED_DMAE_FLAG_VF_DST;
+ params.dst_vfid = eng_vf_id;
+
+ qed_dmae_host2host(p_hwfn, p_ptt, mbx->reply_phys + sizeof(u64),
+ mbx->req_virt->first_tlv.reply_address +
+ sizeof(u64),
+ (sizeof(union pfvf_tlvs) - sizeof(u64)) / 4,
+ &params);
+
+ qed_dmae_host2host(p_hwfn, p_ptt, mbx->reply_phys,
+ mbx->req_virt->first_tlv.reply_address,
+ sizeof(u64) / 4, &params);
+
+ REG_WR(p_hwfn,
+ GTT_BAR0_MAP_REG_USDM_RAM +
+ USTORM_VF_PF_CHANNEL_READY_OFFSET(eng_vf_id), 1);
+}
+
+static u16 qed_iov_vport_to_tlv(struct qed_hwfn *p_hwfn,
+ enum qed_iov_vport_update_flag flag)
+{
+ switch (flag) {
+ case QED_IOV_VP_UPDATE_ACTIVATE:
+ return CHANNEL_TLV_VPORT_UPDATE_ACTIVATE;
+ case QED_IOV_VP_UPDATE_VLAN_STRIP:
+ return CHANNEL_TLV_VPORT_UPDATE_VLAN_STRIP;
+ case QED_IOV_VP_UPDATE_TX_SWITCH:
+ return CHANNEL_TLV_VPORT_UPDATE_TX_SWITCH;
+ case QED_IOV_VP_UPDATE_MCAST:
+ return CHANNEL_TLV_VPORT_UPDATE_MCAST;
+ case QED_IOV_VP_UPDATE_ACCEPT_PARAM:
+ return CHANNEL_TLV_VPORT_UPDATE_ACCEPT_PARAM;
+ case QED_IOV_VP_UPDATE_RSS:
+ return CHANNEL_TLV_VPORT_UPDATE_RSS;
+ case QED_IOV_VP_UPDATE_ACCEPT_ANY_VLAN:
+ return CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN;
+ case QED_IOV_VP_UPDATE_SGE_TPA:
+ return CHANNEL_TLV_VPORT_UPDATE_SGE_TPA;
+ default:
+ return 0;
+ }
+}
+
+static u16 qed_iov_prep_vp_update_resp_tlvs(struct qed_hwfn *p_hwfn,
+ struct qed_vf_info *p_vf,
+ struct qed_iov_vf_mbx *p_mbx,
+ u8 status,
+ u16 tlvs_mask, u16 tlvs_accepted)
+{
+ struct pfvf_def_resp_tlv *resp;
+ u16 size, total_len, i;
+
+ memset(p_mbx->reply_virt, 0, sizeof(union pfvf_tlvs));
+ p_mbx->offset = (u8 *)p_mbx->reply_virt;
+ size = sizeof(struct pfvf_def_resp_tlv);
+ total_len = size;
+
+ qed_add_tlv(p_hwfn, &p_mbx->offset, CHANNEL_TLV_VPORT_UPDATE, size);
+
+ /* Prepare response for all extended tlvs if they are found by PF */
+ for (i = 0; i < QED_IOV_VP_UPDATE_MAX; i++) {
+ if (!(tlvs_mask & (1 << i)))
+ continue;
+
+ resp = qed_add_tlv(p_hwfn, &p_mbx->offset,
+ qed_iov_vport_to_tlv(p_hwfn, i), size);
+
+ if (tlvs_accepted & (1 << i))
+ resp->hdr.status = status;
+ else
+ resp->hdr.status = PFVF_STATUS_NOT_SUPPORTED;
+
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_IOV,
+ "VF[%d] - vport_update response: TLV %d, status %02x\n",
+ p_vf->relative_vf_id,
+ qed_iov_vport_to_tlv(p_hwfn, i), resp->hdr.status);
+
+ total_len += size;
+ }
+
+ qed_add_tlv(p_hwfn, &p_mbx->offset, CHANNEL_TLV_LIST_END,
+ sizeof(struct channel_list_end_tlv));
+
+ return total_len;
+}
+
+static void qed_iov_prepare_resp(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *vf_info,
+ u16 type, u16 length, u8 status)
+{
+ struct qed_iov_vf_mbx *mbx = &vf_info->vf_mbx;
+
+ mbx->offset = (u8 *)mbx->reply_virt;
+
+ qed_add_tlv(p_hwfn, &mbx->offset, type, length);
+ qed_add_tlv(p_hwfn, &mbx->offset, CHANNEL_TLV_LIST_END,
+ sizeof(struct channel_list_end_tlv));
+
+ qed_iov_send_response(p_hwfn, p_ptt, vf_info, length, status);
+}
+
+struct qed_public_vf_info *qed_iov_get_public_vf_info(struct qed_hwfn *p_hwfn,
+ u16 relative_vf_id,
+ bool b_enabled_only)
+{
+ struct qed_vf_info *vf = NULL;
+
+ vf = qed_iov_get_vf_info(p_hwfn, relative_vf_id, b_enabled_only);
+ if (!vf)
+ return NULL;
+
+ return &vf->p_vf_info;
+}
+
+void qed_iov_clean_vf(struct qed_hwfn *p_hwfn, u8 vfid)
+{
+ struct qed_public_vf_info *vf_info;
+
+ vf_info = qed_iov_get_public_vf_info(p_hwfn, vfid, false);
+
+ if (!vf_info)
+ return;
+
+ /* Clear the VF mac */
+ memset(vf_info->mac, 0, ETH_ALEN);
+}
+
+static void qed_iov_vf_cleanup(struct qed_hwfn *p_hwfn,
+ struct qed_vf_info *p_vf)
+{
+ u32 i;
+
+ p_vf->vf_bulletin = 0;
+ p_vf->vport_instance = 0;
+ p_vf->num_mac_filters = 0;
+ p_vf->num_vlan_filters = 0;
+ p_vf->configured_features = 0;
+
+ /* If VF previously requested less resources, go back to default */
+ p_vf->num_rxqs = p_vf->num_sbs;
+ p_vf->num_txqs = p_vf->num_sbs;
+
+ p_vf->num_active_rxqs = 0;
+
+ for (i = 0; i < QED_MAX_VF_CHAINS_PER_PF; i++)
+ p_vf->vf_queues[i].rxq_active = 0;
+
+ memset(&p_vf->shadow_config, 0, sizeof(p_vf->shadow_config));
+ qed_iov_clean_vf(p_hwfn, p_vf->relative_vf_id);
+}
+
+static void qed_iov_vf_mbx_acquire(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *vf)
+{
+ struct qed_iov_vf_mbx *mbx = &vf->vf_mbx;
+ struct pfvf_acquire_resp_tlv *resp = &mbx->reply_virt->acquire_resp;
+ struct pf_vf_pfdev_info *pfdev_info = &resp->pfdev_info;
+ struct vfpf_acquire_tlv *req = &mbx->req_virt->acquire;
+ u8 i, vfpf_status = PFVF_STATUS_SUCCESS;
+ struct pf_vf_resc *resc = &resp->resc;
+
+ /* Validate FW compatibility */
+ if (req->vfdev_info.fw_major != FW_MAJOR_VERSION ||
+ req->vfdev_info.fw_minor != FW_MINOR_VERSION ||
+ req->vfdev_info.fw_revision != FW_REVISION_VERSION ||
+ req->vfdev_info.fw_engineering != FW_ENGINEERING_VERSION) {
+ DP_INFO(p_hwfn,
+ "VF[%d] is running an incompatible driver [VF needs FW %02x:%02x:%02x:%02x but Hypervisor is using %02x:%02x:%02x:%02x]\n",
+ vf->abs_vf_id,
+ req->vfdev_info.fw_major,
+ req->vfdev_info.fw_minor,
+ req->vfdev_info.fw_revision,
+ req->vfdev_info.fw_engineering,
+ FW_MAJOR_VERSION,
+ FW_MINOR_VERSION,
+ FW_REVISION_VERSION, FW_ENGINEERING_VERSION);
+ vfpf_status = PFVF_STATUS_NOT_SUPPORTED;
+ goto out;
+ }
+
+ /* On 100g PFs, prevent old VFs from loading */
+ if ((p_hwfn->cdev->num_hwfns > 1) &&
+ !(req->vfdev_info.capabilities & VFPF_ACQUIRE_CAP_100G)) {
+ DP_INFO(p_hwfn,
+ "VF[%d] is running an old driver that doesn't support 100g\n",
+ vf->abs_vf_id);
+ vfpf_status = PFVF_STATUS_NOT_SUPPORTED;
+ goto out;
+ }
+
+ memset(resp, 0, sizeof(*resp));
+
+ /* Fill in vf info stuff */
+ vf->opaque_fid = req->vfdev_info.opaque_fid;
+ vf->num_mac_filters = 1;
+ vf->num_vlan_filters = QED_ETH_VF_NUM_VLAN_FILTERS;
+
+ vf->vf_bulletin = req->bulletin_addr;
+ vf->bulletin.size = (vf->bulletin.size < req->bulletin_size) ?
+ vf->bulletin.size : req->bulletin_size;
+
+ /* fill in pfdev info */
+ pfdev_info->chip_num = p_hwfn->cdev->chip_num;
+ pfdev_info->db_size = 0;
+ pfdev_info->indices_per_sb = PIS_PER_SB;
+
+ pfdev_info->capabilities = PFVF_ACQUIRE_CAP_DEFAULT_UNTAGGED |
+ PFVF_ACQUIRE_CAP_POST_FW_OVERRIDE;
+ if (p_hwfn->cdev->num_hwfns > 1)
+ pfdev_info->capabilities |= PFVF_ACQUIRE_CAP_100G;
+
+ pfdev_info->stats_info.mstats.address =
+ PXP_VF_BAR0_START_MSDM_ZONE_B +
+ offsetof(struct mstorm_vf_zone, non_trigger.eth_queue_stat);
+ pfdev_info->stats_info.mstats.len =
+ sizeof(struct eth_mstorm_per_queue_stat);
+
+ pfdev_info->stats_info.ustats.address =
+ PXP_VF_BAR0_START_USDM_ZONE_B +
+ offsetof(struct ustorm_vf_zone, non_trigger.eth_queue_stat);
+ pfdev_info->stats_info.ustats.len =
+ sizeof(struct eth_ustorm_per_queue_stat);
+
+ pfdev_info->stats_info.pstats.address =
+ PXP_VF_BAR0_START_PSDM_ZONE_B +
+ offsetof(struct pstorm_vf_zone, non_trigger.eth_queue_stat);
+ pfdev_info->stats_info.pstats.len =
+ sizeof(struct eth_pstorm_per_queue_stat);
+
+ pfdev_info->stats_info.tstats.address = 0;
+ pfdev_info->stats_info.tstats.len = 0;
+
+ memcpy(pfdev_info->port_mac, p_hwfn->hw_info.hw_mac_addr, ETH_ALEN);
+
+ pfdev_info->fw_major = FW_MAJOR_VERSION;
+ pfdev_info->fw_minor = FW_MINOR_VERSION;
+ pfdev_info->fw_rev = FW_REVISION_VERSION;
+ pfdev_info->fw_eng = FW_ENGINEERING_VERSION;
+ pfdev_info->os_type = VFPF_ACQUIRE_OS_LINUX;
+ qed_mcp_get_mfw_ver(p_hwfn, p_ptt, &pfdev_info->mfw_ver, NULL);
+
+ pfdev_info->dev_type = p_hwfn->cdev->type;
+ pfdev_info->chip_rev = p_hwfn->cdev->chip_rev;
+
+ resc->num_rxqs = vf->num_rxqs;
+ resc->num_txqs = vf->num_txqs;
+ resc->num_sbs = vf->num_sbs;
+ for (i = 0; i < resc->num_sbs; i++) {
+ resc->hw_sbs[i].hw_sb_id = vf->igu_sbs[i];
+ resc->hw_sbs[i].sb_qid = 0;
+ }
+
+ for (i = 0; i < resc->num_rxqs; i++) {
+ qed_fw_l2_queue(p_hwfn, vf->vf_queues[i].fw_rx_qid,
+ (u16 *)&resc->hw_qid[i]);
+ resc->cid[i] = vf->vf_queues[i].fw_cid;
+ }
+
+ resc->num_mac_filters = min_t(u8, vf->num_mac_filters,
+ req->resc_request.num_mac_filters);
+ resc->num_vlan_filters = min_t(u8, vf->num_vlan_filters,
+ req->resc_request.num_vlan_filters);
+
+ /* This isn't really required, as the VF isn't limited, but some VFs might
+ * actually test this value, so we need to provide it.
+ */
+ resc->num_mc_filters = req->resc_request.num_mc_filters;
+
+ /* Fill agreed size of bulletin board in response */
+ resp->bulletin_size = vf->bulletin.size;
+ qed_iov_post_vf_bulletin(p_hwfn, vf->relative_vf_id, p_ptt);
+
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_IOV,
+ "VF[%d] ACQUIRE_RESPONSE: pfdev_info- chip_num=0x%x, db_size=%d, idx_per_sb=%d, pf_cap=0x%llx\n"
+ "resources- n_rxq-%d, n_txq-%d, n_sbs-%d, n_macs-%d, n_vlans-%d\n",
+ vf->abs_vf_id,
+ resp->pfdev_info.chip_num,
+ resp->pfdev_info.db_size,
+ resp->pfdev_info.indices_per_sb,
+ resp->pfdev_info.capabilities,
+ resc->num_rxqs,
+ resc->num_txqs,
+ resc->num_sbs,
+ resc->num_mac_filters,
+ resc->num_vlan_filters);
+ vf->state = VF_ACQUIRED;
+
+ /* Prepare Response */
+out:
+ qed_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_ACQUIRE,
+ sizeof(struct pfvf_acquire_resp_tlv), vfpf_status);
+}
+
+static int __qed_iov_spoofchk_set(struct qed_hwfn *p_hwfn,
+ struct qed_vf_info *p_vf, bool val)
+{
+ struct qed_sp_vport_update_params params;
+ int rc;
+
+ if (val == p_vf->spoof_chk) {
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "Spoofchk value[%d] is already configured\n", val);
+ return 0;
+ }
+
+ memset(&params, 0, sizeof(struct qed_sp_vport_update_params));
+ params.opaque_fid = p_vf->opaque_fid;
+ params.vport_id = p_vf->vport_id;
+ params.update_anti_spoofing_en_flg = 1;
+ params.anti_spoofing_en = val;
+
+ rc = qed_sp_vport_update(p_hwfn, &params, QED_SPQ_MODE_EBLOCK, NULL);
+	if (!rc) {
+ p_vf->spoof_chk = val;
+ p_vf->req_spoofchk_val = p_vf->spoof_chk;
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "Spoofchk val[%d] configured\n", val);
+ } else {
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "Spoofchk configuration[val:%d] failed for VF[%d]\n",
+ val, p_vf->relative_vf_id);
+ }
+
+ return rc;
+}
+
+static int qed_iov_reconfigure_unicast_vlan(struct qed_hwfn *p_hwfn,
+ struct qed_vf_info *p_vf)
+{
+ struct qed_filter_ucast filter;
+ int rc = 0;
+ int i;
+
+ memset(&filter, 0, sizeof(filter));
+ filter.is_rx_filter = 1;
+ filter.is_tx_filter = 1;
+ filter.vport_to_add_to = p_vf->vport_id;
+ filter.opcode = QED_FILTER_ADD;
+
+ /* Reconfigure vlans */
+ for (i = 0; i < QED_ETH_VF_NUM_VLAN_FILTERS + 1; i++) {
+ if (!p_vf->shadow_config.vlans[i].used)
+ continue;
+
+ filter.type = QED_FILTER_VLAN;
+ filter.vlan = p_vf->shadow_config.vlans[i].vid;
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_IOV,
+ "Reconfiguring VLAN [0x%04x] for VF [%04x]\n",
+ filter.vlan, p_vf->relative_vf_id);
+ rc = qed_sp_eth_filter_ucast(p_hwfn,
+ p_vf->opaque_fid,
+ &filter,
+ QED_SPQ_MODE_CB, NULL);
+ if (rc) {
+ DP_NOTICE(p_hwfn,
+ "Failed to configure VLAN [%04x] to VF [%04x]\n",
+ filter.vlan, p_vf->relative_vf_id);
+ break;
+ }
+ }
+
+ return rc;
+}
+
+static int
+qed_iov_reconfigure_unicast_shadow(struct qed_hwfn *p_hwfn,
+ struct qed_vf_info *p_vf, u64 events)
+{
+ int rc = 0;
+
+ if ((events & (1 << VLAN_ADDR_FORCED)) &&
+ !(p_vf->configured_features & (1 << VLAN_ADDR_FORCED)))
+ rc = qed_iov_reconfigure_unicast_vlan(p_hwfn, p_vf);
+
+ return rc;
+}
+
+static int qed_iov_configure_vport_forced(struct qed_hwfn *p_hwfn,
+ struct qed_vf_info *p_vf, u64 events)
+{
+ int rc = 0;
+ struct qed_filter_ucast filter;
+
+ if (!p_vf->vport_instance)
+ return -EINVAL;
+
+ if (events & (1 << MAC_ADDR_FORCED)) {
+ /* Since there's no way [currently] of removing the MAC,
+ * we can always assume this means we need to force it.
+ */
+ memset(&filter, 0, sizeof(filter));
+ filter.type = QED_FILTER_MAC;
+ filter.opcode = QED_FILTER_REPLACE;
+ filter.is_rx_filter = 1;
+ filter.is_tx_filter = 1;
+ filter.vport_to_add_to = p_vf->vport_id;
+ ether_addr_copy(filter.mac, p_vf->bulletin.p_virt->mac);
+
+ rc = qed_sp_eth_filter_ucast(p_hwfn, p_vf->opaque_fid,
+ &filter, QED_SPQ_MODE_CB, NULL);
+ if (rc) {
+ DP_NOTICE(p_hwfn,
+ "PF failed to configure MAC for VF\n");
+ return rc;
+ }
+
+ p_vf->configured_features |= 1 << MAC_ADDR_FORCED;
+ }
+
+ if (events & (1 << VLAN_ADDR_FORCED)) {
+ struct qed_sp_vport_update_params vport_update;
+ u8 removal;
+ int i;
+
+ memset(&filter, 0, sizeof(filter));
+ filter.type = QED_FILTER_VLAN;
+ filter.is_rx_filter = 1;
+ filter.is_tx_filter = 1;
+ filter.vport_to_add_to = p_vf->vport_id;
+ filter.vlan = p_vf->bulletin.p_virt->pvid;
+ filter.opcode = filter.vlan ? QED_FILTER_REPLACE :
+ QED_FILTER_FLUSH;
+
+ /* Send the ramrod */
+ rc = qed_sp_eth_filter_ucast(p_hwfn, p_vf->opaque_fid,
+ &filter, QED_SPQ_MODE_CB, NULL);
+ if (rc) {
+ DP_NOTICE(p_hwfn,
+ "PF failed to configure VLAN for VF\n");
+ return rc;
+ }
+
+ /* Update the default-vlan & silent vlan stripping */
+ memset(&vport_update, 0, sizeof(vport_update));
+ vport_update.opaque_fid = p_vf->opaque_fid;
+ vport_update.vport_id = p_vf->vport_id;
+ vport_update.update_default_vlan_enable_flg = 1;
+ vport_update.default_vlan_enable_flg = filter.vlan ? 1 : 0;
+ vport_update.update_default_vlan_flg = 1;
+ vport_update.default_vlan = filter.vlan;
+
+ vport_update.update_inner_vlan_removal_flg = 1;
+ removal = filter.vlan ? 1
+ : p_vf->shadow_config.inner_vlan_removal;
+ vport_update.inner_vlan_removal_flg = removal;
+ vport_update.silent_vlan_removal_flg = filter.vlan ? 1 : 0;
+ rc = qed_sp_vport_update(p_hwfn,
+ &vport_update,
+ QED_SPQ_MODE_EBLOCK, NULL);
+ if (rc) {
+ DP_NOTICE(p_hwfn,
+ "PF failed to configure VF vport for vlan\n");
+ return rc;
+ }
+
+ /* Update all the Rx queues */
+ for (i = 0; i < QED_MAX_VF_CHAINS_PER_PF; i++) {
+ u16 qid;
+
+ if (!p_vf->vf_queues[i].rxq_active)
+ continue;
+
+ qid = p_vf->vf_queues[i].fw_rx_qid;
+
+ rc = qed_sp_eth_rx_queues_update(p_hwfn, qid,
+ 1, 0, 1,
+ QED_SPQ_MODE_EBLOCK,
+ NULL);
+ if (rc) {
+ DP_NOTICE(p_hwfn,
+					  "Failed to send Rx update for queue[0x%04x]\n",
+ qid);
+ return rc;
+ }
+ }
+
+ if (filter.vlan)
+ p_vf->configured_features |= 1 << VLAN_ADDR_FORCED;
+ else
+ p_vf->configured_features &= ~(1 << VLAN_ADDR_FORCED);
+ }
+
+ /* If forced features are terminated, we need to configure the shadow
+ * configuration back again.
+ */
+ if (events)
+ qed_iov_reconfigure_unicast_shadow(p_hwfn, p_vf, events);
+
+ return rc;
+}
+
+static void qed_iov_vf_mbx_start_vport(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *vf)
+{
+ struct qed_sp_vport_start_params params = { 0 };
+ struct qed_iov_vf_mbx *mbx = &vf->vf_mbx;
+ struct vfpf_vport_start_tlv *start;
+ u8 status = PFVF_STATUS_SUCCESS;
+ struct qed_vf_info *vf_info;
+ u64 *p_bitmap;
+ int sb_id;
+ int rc;
+
+ vf_info = qed_iov_get_vf_info(p_hwfn, (u16) vf->relative_vf_id, true);
+ if (!vf_info) {
+ DP_NOTICE(p_hwfn->cdev,
+ "Failed to get VF info, invalid vfid [%d]\n",
+ vf->relative_vf_id);
+ return;
+ }
+
+ vf->state = VF_ENABLED;
+ start = &mbx->req_virt->start_vport;
+
+ /* Initialize Status block in CAU */
+ for (sb_id = 0; sb_id < vf->num_sbs; sb_id++) {
+ if (!start->sb_addr[sb_id]) {
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "VF[%d] did not fill the address of SB %d\n",
+ vf->relative_vf_id, sb_id);
+ break;
+ }
+
+ qed_int_cau_conf_sb(p_hwfn, p_ptt,
+ start->sb_addr[sb_id],
+ vf->igu_sbs[sb_id],
+ vf->abs_vf_id, 1);
+ }
+ qed_iov_enable_vf_traffic(p_hwfn, p_ptt, vf);
+
+ vf->mtu = start->mtu;
+ vf->shadow_config.inner_vlan_removal = start->inner_vlan_removal;
+
+ /* Take into consideration configuration forced by hypervisor;
+ * If none is configured, use the supplied VF values [for old
+ * vfs that would still be fine, since they passed '0' as padding].
+ */
+ p_bitmap = &vf_info->bulletin.p_virt->valid_bitmap;
+ if (!(*p_bitmap & (1 << VFPF_BULLETIN_UNTAGGED_DEFAULT_FORCED))) {
+ u8 vf_req = start->only_untagged;
+
+ vf_info->bulletin.p_virt->default_only_untagged = vf_req;
+ *p_bitmap |= 1 << VFPF_BULLETIN_UNTAGGED_DEFAULT;
+ }
+
+ params.tpa_mode = start->tpa_mode;
+ params.remove_inner_vlan = start->inner_vlan_removal;
+ params.tx_switching = true;
+
+ params.only_untagged = vf_info->bulletin.p_virt->default_only_untagged;
+ params.drop_ttl0 = false;
+ params.concrete_fid = vf->concrete_fid;
+ params.opaque_fid = vf->opaque_fid;
+ params.vport_id = vf->vport_id;
+ params.max_buffers_per_cqe = start->max_buffers_per_cqe;
+ params.mtu = vf->mtu;
+
+ rc = qed_sp_eth_vport_start(p_hwfn, &params);
+ if (rc != 0) {
+ DP_ERR(p_hwfn,
+ "qed_iov_vf_mbx_start_vport returned error %d\n", rc);
+ status = PFVF_STATUS_FAILURE;
+ } else {
+ vf->vport_instance++;
+
+ /* Force configuration if needed on the newly opened vport */
+ qed_iov_configure_vport_forced(p_hwfn, vf, *p_bitmap);
+
+ __qed_iov_spoofchk_set(p_hwfn, vf, vf->req_spoofchk_val);
+ }
+ qed_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_VPORT_START,
+ sizeof(struct pfvf_def_resp_tlv), status);
+}
+
+static void qed_iov_vf_mbx_stop_vport(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *vf)
+{
+ u8 status = PFVF_STATUS_SUCCESS;
+ int rc;
+
+ vf->vport_instance--;
+ vf->spoof_chk = false;
+
+ rc = qed_sp_vport_stop(p_hwfn, vf->opaque_fid, vf->vport_id);
+ if (rc != 0) {
+ DP_ERR(p_hwfn, "qed_iov_vf_mbx_stop_vport returned error %d\n",
+ rc);
+ status = PFVF_STATUS_FAILURE;
+ }
+
+ /* Forget the configuration on the vport */
+ vf->configured_features = 0;
+ memset(&vf->shadow_config, 0, sizeof(vf->shadow_config));
+
+ qed_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_VPORT_TEARDOWN,
+ sizeof(struct pfvf_def_resp_tlv), status);
+}
+
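+/* The Mstorm queue zones follow the Tstorm queue zones in the VF BAR0 SDM
+ * zone A; this is where the Rx producers the VF should update reside.
+ */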
+#define TSTORM_QZONE_START PXP_VF_BAR0_START_SDM_ZONE_A
+#define MSTORM_QZONE_START(dev) (TSTORM_QZONE_START + \
+ (TSTORM_QZONE_SIZE * NUM_OF_L2_QUEUES(dev)))
+
+static void qed_iov_vf_mbx_start_rxq_resp(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *vf, u8 status)
+{
+ struct qed_iov_vf_mbx *mbx = &vf->vf_mbx;
+ struct pfvf_start_queue_resp_tlv *p_tlv;
+ struct vfpf_start_rxq_tlv *req;
+
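+	/* Build the reply: a START_RXQ response TLV (carrying the Rx producer
+	 * offset on success) followed by the list-end TLV.
+	 */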
+ mbx->offset = (u8 *)mbx->reply_virt;
+
+ p_tlv = qed_add_tlv(p_hwfn, &mbx->offset, CHANNEL_TLV_START_RXQ,
+ sizeof(*p_tlv));
+ qed_add_tlv(p_hwfn, &mbx->offset, CHANNEL_TLV_LIST_END,
+ sizeof(struct channel_list_end_tlv));
+
+ /* Update the TLV with the response */
+ if (status == PFVF_STATUS_SUCCESS) {
+ u16 hw_qid = 0;
+
+ req = &mbx->req_virt->start_rxq;
+ qed_fw_l2_queue(p_hwfn, vf->vf_queues[req->rx_qid].fw_rx_qid,
+ &hw_qid);
+
+ p_tlv->offset = MSTORM_QZONE_START(p_hwfn->cdev) +
+ hw_qid * MSTORM_QZONE_SIZE +
+ offsetof(struct mstorm_eth_queue_zone,
+ rx_producers);
+ }
+
+ qed_iov_send_response(p_hwfn, p_ptt, vf, sizeof(*p_tlv), status);
+}
+
+static void qed_iov_vf_mbx_start_rxq(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *vf)
+{
+ struct qed_queue_start_common_params params;
+ struct qed_iov_vf_mbx *mbx = &vf->vf_mbx;
+ u8 status = PFVF_STATUS_SUCCESS;
+ struct vfpf_start_rxq_tlv *req;
+ int rc;
+
+ memset(&params, 0, sizeof(params));
+ req = &mbx->req_virt->start_rxq;
+ params.queue_id = vf->vf_queues[req->rx_qid].fw_rx_qid;
+ params.vport_id = vf->vport_id;
+ params.sb = req->hw_sb;
+ params.sb_idx = req->sb_index;
+
+ rc = qed_sp_eth_rxq_start_ramrod(p_hwfn, vf->opaque_fid,
+ vf->vf_queues[req->rx_qid].fw_cid,
+ &params,
+ vf->abs_vf_id + 0x10,
+ req->bd_max_bytes,
+ req->rxq_addr,
+ req->cqe_pbl_addr, req->cqe_pbl_size);
+
+ if (rc) {
+ status = PFVF_STATUS_FAILURE;
+ } else {
+ vf->vf_queues[req->rx_qid].rxq_active = true;
+ vf->num_active_rxqs++;
+ }
+
+ qed_iov_vf_mbx_start_rxq_resp(p_hwfn, p_ptt, vf, status);
+}
+
+static void qed_iov_vf_mbx_start_txq(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *vf)
+{
+ u16 length = sizeof(struct pfvf_def_resp_tlv);
+ struct qed_queue_start_common_params params;
+ struct qed_iov_vf_mbx *mbx = &vf->vf_mbx;
+ union qed_qm_pq_params pq_params;
+ u8 status = PFVF_STATUS_SUCCESS;
+ struct vfpf_start_txq_tlv *req;
+ int rc;
+
+ /* Prepare the parameters which would choose the right PQ */
+ memset(&pq_params, 0, sizeof(pq_params));
+ pq_params.eth.is_vf = 1;
+ pq_params.eth.vf_id = vf->relative_vf_id;
+
+ memset(&params, 0, sizeof(params));
+ req = &mbx->req_virt->start_txq;
+ params.queue_id = vf->vf_queues[req->tx_qid].fw_tx_qid;
+ params.vport_id = vf->vport_id;
+ params.sb = req->hw_sb;
+ params.sb_idx = req->sb_index;
+
+ rc = qed_sp_eth_txq_start_ramrod(p_hwfn,
+ vf->opaque_fid,
+ vf->vf_queues[req->tx_qid].fw_cid,
+ &params,
+ vf->abs_vf_id + 0x10,
+ req->pbl_addr,
+ req->pbl_size, &pq_params);
+
+ if (rc)
+ status = PFVF_STATUS_FAILURE;
+ else
+ vf->vf_queues[req->tx_qid].txq_active = true;
+
+ qed_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_START_TXQ,
+ length, status);
+}
+
+static int qed_iov_vf_stop_rxqs(struct qed_hwfn *p_hwfn,
+ struct qed_vf_info *vf,
+ u16 rxq_id, u8 num_rxqs, bool cqe_completion)
+{
+ int rc = 0;
+ int qid;
+
+ if (rxq_id + num_rxqs > ARRAY_SIZE(vf->vf_queues))
+ return -EINVAL;
+
+ for (qid = rxq_id; qid < rxq_id + num_rxqs; qid++) {
+ if (vf->vf_queues[qid].rxq_active) {
+ rc = qed_sp_eth_rx_queue_stop(p_hwfn,
+ vf->vf_queues[qid].
+ fw_rx_qid, false,
+ cqe_completion);
+
+ if (rc)
+ return rc;
+ }
+ vf->vf_queues[qid].rxq_active = false;
+ vf->num_active_rxqs--;
+ }
+
+ return rc;
+}
+
+static int qed_iov_vf_stop_txqs(struct qed_hwfn *p_hwfn,
+ struct qed_vf_info *vf, u16 txq_id, u8 num_txqs)
+{
+ int rc = 0;
+ int qid;
+
+ if (txq_id + num_txqs > ARRAY_SIZE(vf->vf_queues))
+ return -EINVAL;
+
+ for (qid = txq_id; qid < txq_id + num_txqs; qid++) {
+ if (vf->vf_queues[qid].txq_active) {
+ rc = qed_sp_eth_tx_queue_stop(p_hwfn,
+ vf->vf_queues[qid].
+ fw_tx_qid);
+
+ if (rc)
+ return rc;
+ }
+ vf->vf_queues[qid].txq_active = false;
+ }
+ return rc;
+}
+
+static void qed_iov_vf_mbx_stop_rxqs(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *vf)
+{
+ u16 length = sizeof(struct pfvf_def_resp_tlv);
+ struct qed_iov_vf_mbx *mbx = &vf->vf_mbx;
+ u8 status = PFVF_STATUS_SUCCESS;
+ struct vfpf_stop_rxqs_tlv *req;
+ int rc;
+
+	/* We allow stopping from qid != 0, so we need to make sure that
+	 * qid + num_qs doesn't exceed the actual number of queues that
+	 * exist.
+	 */
+ req = &mbx->req_virt->stop_rxqs;
+ rc = qed_iov_vf_stop_rxqs(p_hwfn, vf, req->rx_qid,
+ req->num_rxqs, req->cqe_completion);
+ if (rc)
+ status = PFVF_STATUS_FAILURE;
+		if (qed_iov_is_valid_vfid(p_hwfn, i, true))
+ qed_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_STOP_RXQS,
+ length, status);
+}
+
+static void qed_iov_vf_mbx_stop_txqs(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *vf)
+{
+ u16 length = sizeof(struct pfvf_def_resp_tlv);
+ struct qed_iov_vf_mbx *mbx = &vf->vf_mbx;
+ u8 status = PFVF_STATUS_SUCCESS;
+ struct vfpf_stop_txqs_tlv *req;
+ int rc;
+
+	/* We allow stopping from qid != 0, so we need to make sure that
+	 * qid + num_qs doesn't exceed the actual number of queues that
+	 * exist.
+	 */
+ req = &mbx->req_virt->stop_txqs;
+ rc = qed_iov_vf_stop_txqs(p_hwfn, vf, req->tx_qid, req->num_txqs);
+ if (rc)
+ status = PFVF_STATUS_FAILURE;
+
+ qed_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_STOP_TXQS,
+ length, status);
+}
+
+static void qed_iov_vf_mbx_update_rxqs(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *vf)
+{
+ u16 length = sizeof(struct pfvf_def_resp_tlv);
+ struct qed_iov_vf_mbx *mbx = &vf->vf_mbx;
+ struct vfpf_update_rxq_tlv *req;
+ u8 status = PFVF_STATUS_SUCCESS;
+ u8 complete_event_flg;
+ u8 complete_cqe_flg;
+ u16 qid;
+ int rc;
+ u8 i;
+
+ req = &mbx->req_virt->update_rxq;
+ complete_cqe_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_CQE_FLAG);
+ complete_event_flg = !!(req->flags & VFPF_RXQ_UPD_COMPLETE_EVENT_FLAG);
+
+ for (i = 0; i < req->num_rxqs; i++) {
+ qid = req->rx_qid + i;
+
+ if (!vf->vf_queues[qid].rxq_active) {
+			DP_NOTICE(p_hwfn, "VF rx_qid = %d isn't active!\n",
+ qid);
+ status = PFVF_STATUS_FAILURE;
+ break;
+ }
+
+ rc = qed_sp_eth_rx_queues_update(p_hwfn,
+ vf->vf_queues[qid].fw_rx_qid,
+ 1,
+ complete_cqe_flg,
+ complete_event_flg,
+ QED_SPQ_MODE_EBLOCK, NULL);
+
+ if (rc) {
+ status = PFVF_STATUS_FAILURE;
+ break;
+ }
+ }
+
+ qed_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_UPDATE_RXQ,
+ length, status);
+}
+
+void *qed_iov_search_list_tlvs(struct qed_hwfn *p_hwfn,
+ void *p_tlvs_list, u16 req_type)
+{
+ struct channel_tlv *p_tlv = (struct channel_tlv *)p_tlvs_list;
+ int len = 0;
+
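+	/* Walk the TLV chain until the requested type or list-end is found */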
+ do {
+ if (!p_tlv->length) {
+ DP_NOTICE(p_hwfn, "Zero length TLV found\n");
+ return NULL;
+ }
+
+ if (p_tlv->type == req_type) {
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "Extended tlv type %d, length %d found\n",
+ p_tlv->type, p_tlv->length);
+ return p_tlv;
+ }
+
+ len += p_tlv->length;
+ p_tlv = (struct channel_tlv *)((u8 *)p_tlv + p_tlv->length);
+
+ if ((len + p_tlv->length) > TLV_BUFFER_SIZE) {
+ DP_NOTICE(p_hwfn, "TLVs has overrun the buffer size\n");
+ return NULL;
+ }
+ } while (p_tlv->type != CHANNEL_TLV_LIST_END);
+
+ return NULL;
+}
+
+static void
+qed_iov_vp_update_act_param(struct qed_hwfn *p_hwfn,
+ struct qed_sp_vport_update_params *p_data,
+ struct qed_iov_vf_mbx *p_mbx, u16 *tlvs_mask)
+{
+ struct vfpf_vport_update_activate_tlv *p_act_tlv;
+ u16 tlv = CHANNEL_TLV_VPORT_UPDATE_ACTIVATE;
+
+ p_act_tlv = (struct vfpf_vport_update_activate_tlv *)
+ qed_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+ if (!p_act_tlv)
+ return;
+
+ p_data->update_vport_active_rx_flg = p_act_tlv->update_rx;
+ p_data->vport_active_rx_flg = p_act_tlv->active_rx;
+ p_data->update_vport_active_tx_flg = p_act_tlv->update_tx;
+ p_data->vport_active_tx_flg = p_act_tlv->active_tx;
+ *tlvs_mask |= 1 << QED_IOV_VP_UPDATE_ACTIVATE;
+}
+
+static void
+qed_iov_vp_update_vlan_param(struct qed_hwfn *p_hwfn,
+ struct qed_sp_vport_update_params *p_data,
+ struct qed_vf_info *p_vf,
+ struct qed_iov_vf_mbx *p_mbx, u16 *tlvs_mask)
+{
+ struct vfpf_vport_update_vlan_strip_tlv *p_vlan_tlv;
+ u16 tlv = CHANNEL_TLV_VPORT_UPDATE_VLAN_STRIP;
+
+ p_vlan_tlv = (struct vfpf_vport_update_vlan_strip_tlv *)
+ qed_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+ if (!p_vlan_tlv)
+ return;
+
+ p_vf->shadow_config.inner_vlan_removal = p_vlan_tlv->remove_vlan;
+
+ /* Ignore the VF request if we're forcing a vlan */
+ if (!(p_vf->configured_features & (1 << VLAN_ADDR_FORCED))) {
+ p_data->update_inner_vlan_removal_flg = 1;
+ p_data->inner_vlan_removal_flg = p_vlan_tlv->remove_vlan;
+ }
+
+ *tlvs_mask |= 1 << QED_IOV_VP_UPDATE_VLAN_STRIP;
+}
+
+static void
+qed_iov_vp_update_tx_switch(struct qed_hwfn *p_hwfn,
+ struct qed_sp_vport_update_params *p_data,
+ struct qed_iov_vf_mbx *p_mbx, u16 *tlvs_mask)
+{
+ struct vfpf_vport_update_tx_switch_tlv *p_tx_switch_tlv;
+ u16 tlv = CHANNEL_TLV_VPORT_UPDATE_TX_SWITCH;
+
+ p_tx_switch_tlv = (struct vfpf_vport_update_tx_switch_tlv *)
+ qed_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt,
+ tlv);
+ if (!p_tx_switch_tlv)
+ return;
+
+ p_data->update_tx_switching_flg = 1;
+ p_data->tx_switching_flg = p_tx_switch_tlv->tx_switching;
+ *tlvs_mask |= 1 << QED_IOV_VP_UPDATE_TX_SWITCH;
+}
+
+static void
+qed_iov_vp_update_mcast_bin_param(struct qed_hwfn *p_hwfn,
+ struct qed_sp_vport_update_params *p_data,
+ struct qed_iov_vf_mbx *p_mbx, u16 *tlvs_mask)
+{
+ struct vfpf_vport_update_mcast_bin_tlv *p_mcast_tlv;
+ u16 tlv = CHANNEL_TLV_VPORT_UPDATE_MCAST;
+
+ p_mcast_tlv = (struct vfpf_vport_update_mcast_bin_tlv *)
+ qed_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+ if (!p_mcast_tlv)
+ return;
+
+ p_data->update_approx_mcast_flg = 1;
+ memcpy(p_data->bins, p_mcast_tlv->bins,
+ sizeof(unsigned long) * ETH_MULTICAST_MAC_BINS_IN_REGS);
+ *tlvs_mask |= 1 << QED_IOV_VP_UPDATE_MCAST;
+}
+
+static void
+qed_iov_vp_update_accept_flag(struct qed_hwfn *p_hwfn,
+ struct qed_sp_vport_update_params *p_data,
+ struct qed_iov_vf_mbx *p_mbx, u16 *tlvs_mask)
+{
+ struct qed_filter_accept_flags *p_flags = &p_data->accept_flags;
+ struct vfpf_vport_update_accept_param_tlv *p_accept_tlv;
+ u16 tlv = CHANNEL_TLV_VPORT_UPDATE_ACCEPT_PARAM;
+
+ p_accept_tlv = (struct vfpf_vport_update_accept_param_tlv *)
+ qed_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+ if (!p_accept_tlv)
+ return;
+
+ p_flags->update_rx_mode_config = p_accept_tlv->update_rx_mode;
+ p_flags->rx_accept_filter = p_accept_tlv->rx_accept_filter;
+ p_flags->update_tx_mode_config = p_accept_tlv->update_tx_mode;
+ p_flags->tx_accept_filter = p_accept_tlv->tx_accept_filter;
+ *tlvs_mask |= 1 << QED_IOV_VP_UPDATE_ACCEPT_PARAM;
+}
+
+static void
+qed_iov_vp_update_accept_any_vlan(struct qed_hwfn *p_hwfn,
+ struct qed_sp_vport_update_params *p_data,
+ struct qed_iov_vf_mbx *p_mbx, u16 *tlvs_mask)
+{
+ struct vfpf_vport_update_accept_any_vlan_tlv *p_accept_any_vlan;
+ u16 tlv = CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN;
+
+ p_accept_any_vlan = (struct vfpf_vport_update_accept_any_vlan_tlv *)
+ qed_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt,
+ tlv);
+ if (!p_accept_any_vlan)
+ return;
+
+ p_data->accept_any_vlan = p_accept_any_vlan->accept_any_vlan;
+ p_data->update_accept_any_vlan_flg =
+ p_accept_any_vlan->update_accept_any_vlan_flg;
+ *tlvs_mask |= 1 << QED_IOV_VP_UPDATE_ACCEPT_ANY_VLAN;
+}
+
+static void
+qed_iov_vp_update_rss_param(struct qed_hwfn *p_hwfn,
+ struct qed_vf_info *vf,
+ struct qed_sp_vport_update_params *p_data,
+ struct qed_rss_params *p_rss,
+ struct qed_iov_vf_mbx *p_mbx, u16 *tlvs_mask)
+{
+ struct vfpf_vport_update_rss_tlv *p_rss_tlv;
+ u16 tlv = CHANNEL_TLV_VPORT_UPDATE_RSS;
+ u16 i, q_idx, max_q_idx;
+ u16 table_size;
+
+ p_rss_tlv = (struct vfpf_vport_update_rss_tlv *)
+ qed_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+ if (!p_rss_tlv) {
+ p_data->rss_params = NULL;
+ return;
+ }
+
+ memset(p_rss, 0, sizeof(struct qed_rss_params));
+
+ p_rss->update_rss_config = !!(p_rss_tlv->update_rss_flags &
+ VFPF_UPDATE_RSS_CONFIG_FLAG);
+ p_rss->update_rss_capabilities = !!(p_rss_tlv->update_rss_flags &
+ VFPF_UPDATE_RSS_CAPS_FLAG);
+ p_rss->update_rss_ind_table = !!(p_rss_tlv->update_rss_flags &
+ VFPF_UPDATE_RSS_IND_TABLE_FLAG);
+ p_rss->update_rss_key = !!(p_rss_tlv->update_rss_flags &
+ VFPF_UPDATE_RSS_KEY_FLAG);
+
+ p_rss->rss_enable = p_rss_tlv->rss_enable;
+ p_rss->rss_eng_id = vf->relative_vf_id + 1;
+ p_rss->rss_caps = p_rss_tlv->rss_caps;
+ p_rss->rss_table_size_log = p_rss_tlv->rss_table_size_log;
+ memcpy(p_rss->rss_ind_table, p_rss_tlv->rss_ind_table,
+ sizeof(p_rss->rss_ind_table));
+ memcpy(p_rss->rss_key, p_rss_tlv->rss_key, sizeof(p_rss->rss_key));
+
+ table_size = min_t(u16, ARRAY_SIZE(p_rss->rss_ind_table),
+ (1 << p_rss_tlv->rss_table_size_log));
+
+ max_q_idx = ARRAY_SIZE(vf->vf_queues);
+
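+	/* Translate the VF-relative queue indices in the indirection table
+	 * into FW Rx queue-ids; out-of-range or inactive entries fall back
+	 * to the VF's first queue.
+	 */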
+ for (i = 0; i < table_size; i++) {
+ u16 index = vf->vf_queues[0].fw_rx_qid;
+
+ q_idx = p_rss->rss_ind_table[i];
+ if (q_idx >= max_q_idx)
+ DP_NOTICE(p_hwfn,
+ "rss_ind_table[%d] = %d, rxq is out of range\n",
+ i, q_idx);
+ else if (!vf->vf_queues[q_idx].rxq_active)
+ DP_NOTICE(p_hwfn,
+ "rss_ind_table[%d] = %d, rxq is not active\n",
+ i, q_idx);
+ else
+ index = vf->vf_queues[q_idx].fw_rx_qid;
+ p_rss->rss_ind_table[i] = index;
+ }
+
+ p_data->rss_params = p_rss;
+ *tlvs_mask |= 1 << QED_IOV_VP_UPDATE_RSS;
+}
+
+static void
+qed_iov_vp_update_sge_tpa_param(struct qed_hwfn *p_hwfn,
+ struct qed_vf_info *vf,
+ struct qed_sp_vport_update_params *p_data,
+ struct qed_sge_tpa_params *p_sge_tpa,
+ struct qed_iov_vf_mbx *p_mbx, u16 *tlvs_mask)
+{
+ struct vfpf_vport_update_sge_tpa_tlv *p_sge_tpa_tlv;
+ u16 tlv = CHANNEL_TLV_VPORT_UPDATE_SGE_TPA;
+
+ p_sge_tpa_tlv = (struct vfpf_vport_update_sge_tpa_tlv *)
+ qed_iov_search_list_tlvs(p_hwfn, p_mbx->req_virt, tlv);
+
+ if (!p_sge_tpa_tlv) {
+ p_data->sge_tpa_params = NULL;
+ return;
+ }
+
+ memset(p_sge_tpa, 0, sizeof(struct qed_sge_tpa_params));
+
+ p_sge_tpa->update_tpa_en_flg =
+ !!(p_sge_tpa_tlv->update_sge_tpa_flags & VFPF_UPDATE_TPA_EN_FLAG);
+ p_sge_tpa->update_tpa_param_flg =
+ !!(p_sge_tpa_tlv->update_sge_tpa_flags &
+ VFPF_UPDATE_TPA_PARAM_FLAG);
+
+ p_sge_tpa->tpa_ipv4_en_flg =
+ !!(p_sge_tpa_tlv->sge_tpa_flags & VFPF_TPA_IPV4_EN_FLAG);
+ p_sge_tpa->tpa_ipv6_en_flg =
+ !!(p_sge_tpa_tlv->sge_tpa_flags & VFPF_TPA_IPV6_EN_FLAG);
+ p_sge_tpa->tpa_pkt_split_flg =
+ !!(p_sge_tpa_tlv->sge_tpa_flags & VFPF_TPA_PKT_SPLIT_FLAG);
+ p_sge_tpa->tpa_hdr_data_split_flg =
+ !!(p_sge_tpa_tlv->sge_tpa_flags & VFPF_TPA_HDR_DATA_SPLIT_FLAG);
+ p_sge_tpa->tpa_gro_consistent_flg =
+ !!(p_sge_tpa_tlv->sge_tpa_flags & VFPF_TPA_GRO_CONSIST_FLAG);
+
+ p_sge_tpa->tpa_max_aggs_num = p_sge_tpa_tlv->tpa_max_aggs_num;
+ p_sge_tpa->tpa_max_size = p_sge_tpa_tlv->tpa_max_size;
+ p_sge_tpa->tpa_min_size_to_start = p_sge_tpa_tlv->tpa_min_size_to_start;
+ p_sge_tpa->tpa_min_size_to_cont = p_sge_tpa_tlv->tpa_min_size_to_cont;
+ p_sge_tpa->max_buffers_per_cqe = p_sge_tpa_tlv->max_buffers_per_cqe;
+
+ p_data->sge_tpa_params = p_sge_tpa;
+
+ *tlvs_mask |= 1 << QED_IOV_VP_UPDATE_SGE_TPA;
+}
+
+static void qed_iov_vf_mbx_vport_update(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *vf)
+{
+ struct qed_sp_vport_update_params params;
+ struct qed_iov_vf_mbx *mbx = &vf->vf_mbx;
+ struct qed_sge_tpa_params sge_tpa_params;
+ struct qed_rss_params rss_params;
+ u8 status = PFVF_STATUS_SUCCESS;
+ u16 tlvs_mask = 0;
+ u16 length;
+ int rc;
+
+ memset(&params, 0, sizeof(params));
+ params.opaque_fid = vf->opaque_fid;
+ params.vport_id = vf->vport_id;
+ params.rss_params = NULL;
+
+ /* Search for extended tlvs list and update values
+ * from VF in struct qed_sp_vport_update_params.
+ */
+ qed_iov_vp_update_act_param(p_hwfn, &params, mbx, &tlvs_mask);
+ qed_iov_vp_update_vlan_param(p_hwfn, &params, vf, mbx, &tlvs_mask);
+ qed_iov_vp_update_tx_switch(p_hwfn, &params, mbx, &tlvs_mask);
+ qed_iov_vp_update_mcast_bin_param(p_hwfn, &params, mbx, &tlvs_mask);
+ qed_iov_vp_update_accept_flag(p_hwfn, &params, mbx, &tlvs_mask);
+ qed_iov_vp_update_rss_param(p_hwfn, vf, &params, &rss_params,
+ mbx, &tlvs_mask);
+ qed_iov_vp_update_accept_any_vlan(p_hwfn, &params, mbx, &tlvs_mask);
+ qed_iov_vp_update_sge_tpa_param(p_hwfn, vf, &params,
+ &sge_tpa_params, mbx, &tlvs_mask);
+
+	/* Every vport-update feature is carried as an extended TLV, so a
+	 * request without any extended TLV has nothing to apply; log it and
+	 * report NOT_SUPPORTED in the response.
+	 */
+ if (!tlvs_mask) {
+ DP_NOTICE(p_hwfn,
+ "No feature tlvs found for vport update\n");
+ status = PFVF_STATUS_NOT_SUPPORTED;
+ goto out;
+ }
+
+ rc = qed_sp_vport_update(p_hwfn, &params, QED_SPQ_MODE_EBLOCK, NULL);
+
+ if (rc)
+ status = PFVF_STATUS_FAILURE;
+
+out:
+ length = qed_iov_prep_vp_update_resp_tlvs(p_hwfn, vf, mbx, status,
+ tlvs_mask, tlvs_mask);
+ qed_iov_send_response(p_hwfn, p_ptt, vf, length, status);
+}
+
+static int qed_iov_vf_update_unicast_shadow(struct qed_hwfn *p_hwfn,
+ struct qed_vf_info *p_vf,
+ struct qed_filter_ucast *p_params)
+{
+ int i;
+
+ if (p_params->type == QED_FILTER_MAC)
+ return 0;
+
+ /* First remove entries and then add new ones */
+ if (p_params->opcode == QED_FILTER_REMOVE) {
+ for (i = 0; i < QED_ETH_VF_NUM_VLAN_FILTERS + 1; i++)
+ if (p_vf->shadow_config.vlans[i].used &&
+ p_vf->shadow_config.vlans[i].vid ==
+ p_params->vlan) {
+ p_vf->shadow_config.vlans[i].used = false;
+ break;
+ }
+ if (i == QED_ETH_VF_NUM_VLAN_FILTERS + 1) {
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_IOV,
+ "VF [%d] - Tries to remove a non-existing vlan\n",
+ p_vf->relative_vf_id);
+ return -EINVAL;
+ }
+ } else if (p_params->opcode == QED_FILTER_REPLACE ||
+ p_params->opcode == QED_FILTER_FLUSH) {
+ for (i = 0; i < QED_ETH_VF_NUM_VLAN_FILTERS + 1; i++)
+ p_vf->shadow_config.vlans[i].used = false;
+ }
+
+ /* In forced mode, we're willing to remove entries - but we don't add
+ * new ones.
+ */
+ if (p_vf->bulletin.p_virt->valid_bitmap & (1 << VLAN_ADDR_FORCED))
+ return 0;
+
+ if (p_params->opcode == QED_FILTER_ADD ||
+ p_params->opcode == QED_FILTER_REPLACE) {
+ for (i = 0; i < QED_ETH_VF_NUM_VLAN_FILTERS + 1; i++) {
+ if (p_vf->shadow_config.vlans[i].used)
+ continue;
+
+ p_vf->shadow_config.vlans[i].used = true;
+ p_vf->shadow_config.vlans[i].vid = p_params->vlan;
+ break;
+ }
+
+ if (i == QED_ETH_VF_NUM_VLAN_FILTERS + 1) {
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_IOV,
+ "VF [%d] - Tries to configure more than %d vlan filters\n",
+ p_vf->relative_vf_id,
+ QED_ETH_VF_NUM_VLAN_FILTERS + 1);
+ return -EINVAL;
+ }
+ }
+
+ return 0;
+}
+
+int qed_iov_chk_ucast(struct qed_hwfn *hwfn,
+ int vfid, struct qed_filter_ucast *params)
+{
+ struct qed_public_vf_info *vf;
+
+ vf = qed_iov_get_public_vf_info(hwfn, vfid, true);
+ if (!vf)
+ return -EINVAL;
+
+ /* No real decision to make; Store the configured MAC */
+ if (params->type == QED_FILTER_MAC ||
+ params->type == QED_FILTER_MAC_VLAN)
+ ether_addr_copy(vf->mac, params->mac);
+
+ return 0;
+}
+
+static void qed_iov_vf_mbx_ucast_filter(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *vf)
+{
+ struct qed_bulletin_content *p_bulletin = vf->bulletin.p_virt;
+ struct qed_iov_vf_mbx *mbx = &vf->vf_mbx;
+ struct vfpf_ucast_filter_tlv *req;
+ u8 status = PFVF_STATUS_SUCCESS;
+ struct qed_filter_ucast params;
+ int rc;
+
+ /* Prepare the unicast filter params */
+ memset(&params, 0, sizeof(struct qed_filter_ucast));
+ req = &mbx->req_virt->ucast_filter;
+ params.opcode = (enum qed_filter_opcode)req->opcode;
+ params.type = (enum qed_filter_ucast_type)req->type;
+
+ params.is_rx_filter = 1;
+ params.is_tx_filter = 1;
+ params.vport_to_remove_from = vf->vport_id;
+ params.vport_to_add_to = vf->vport_id;
+ memcpy(params.mac, req->mac, ETH_ALEN);
+ params.vlan = req->vlan;
+
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_IOV,
+ "VF[%d]: opcode 0x%02x type 0x%02x [%s %s] [vport 0x%02x] MAC %02x:%02x:%02x:%02x:%02x:%02x, vlan 0x%04x\n",
+ vf->abs_vf_id, params.opcode, params.type,
+ params.is_rx_filter ? "RX" : "",
+ params.is_tx_filter ? "TX" : "",
+ params.vport_to_add_to,
+ params.mac[0], params.mac[1],
+ params.mac[2], params.mac[3],
+ params.mac[4], params.mac[5], params.vlan);
+
+ if (!vf->vport_instance) {
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_IOV,
+ "No VPORT instance available for VF[%d], failing ucast MAC configuration\n",
+ vf->abs_vf_id);
+ status = PFVF_STATUS_FAILURE;
+ goto out;
+ }
+
+ /* Update shadow copy of the VF configuration */
+ if (qed_iov_vf_update_unicast_shadow(p_hwfn, vf, &params)) {
+ status = PFVF_STATUS_FAILURE;
+ goto out;
+ }
+
+	/* Determine if the unicast filtering is acceptable by PF */
+ if ((p_bulletin->valid_bitmap & (1 << VLAN_ADDR_FORCED)) &&
+ (params.type == QED_FILTER_VLAN ||
+ params.type == QED_FILTER_MAC_VLAN)) {
+ /* Once VLAN is forced or PVID is set, do not allow
+ * to add/replace any further VLANs.
+ */
+ if (params.opcode == QED_FILTER_ADD ||
+ params.opcode == QED_FILTER_REPLACE)
+ status = PFVF_STATUS_FORCED;
+ goto out;
+ }
+
+ if ((p_bulletin->valid_bitmap & (1 << MAC_ADDR_FORCED)) &&
+ (params.type == QED_FILTER_MAC ||
+ params.type == QED_FILTER_MAC_VLAN)) {
+ if (!ether_addr_equal(p_bulletin->mac, params.mac) ||
+ (params.opcode != QED_FILTER_ADD &&
+ params.opcode != QED_FILTER_REPLACE))
+ status = PFVF_STATUS_FORCED;
+ goto out;
+ }
+
+ rc = qed_iov_chk_ucast(p_hwfn, vf->relative_vf_id, &params);
+ if (rc) {
+ status = PFVF_STATUS_FAILURE;
+ goto out;
+ }
+
+ rc = qed_sp_eth_filter_ucast(p_hwfn, vf->opaque_fid, &params,
+ QED_SPQ_MODE_CB, NULL);
+ if (rc)
+ status = PFVF_STATUS_FAILURE;
+
+out:
+ qed_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_UCAST_FILTER,
+ sizeof(struct pfvf_def_resp_tlv), status);
+}
+
+static void qed_iov_vf_mbx_int_cleanup(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *vf)
+{
+ int i;
+
+ /* Reset the SBs */
+ for (i = 0; i < vf->num_sbs; i++)
+ qed_int_igu_init_pure_rt_single(p_hwfn, p_ptt,
+ vf->igu_sbs[i],
+ vf->opaque_fid, false);
+
+ qed_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_INT_CLEANUP,
+ sizeof(struct pfvf_def_resp_tlv),
+ PFVF_STATUS_SUCCESS);
+}
+
+static void qed_iov_vf_mbx_close(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt, struct qed_vf_info *vf)
+{
+ u16 length = sizeof(struct pfvf_def_resp_tlv);
+ u8 status = PFVF_STATUS_SUCCESS;
+
+ /* Disable Interrupts for VF */
+ qed_iov_vf_igu_set_int(p_hwfn, p_ptt, vf, 0);
+
+ /* Reset Permission table */
+ qed_iov_config_perm_table(p_hwfn, p_ptt, vf, 0);
+
+ qed_iov_prepare_resp(p_hwfn, p_ptt, vf, CHANNEL_TLV_CLOSE,
+ length, status);
+}
+
+static void qed_iov_vf_mbx_release(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ struct qed_vf_info *p_vf)
+{
+ u16 length = sizeof(struct pfvf_def_resp_tlv);
+
+ qed_iov_vf_cleanup(p_hwfn, p_vf);
+
+ qed_iov_prepare_resp(p_hwfn, p_ptt, p_vf, CHANNEL_TLV_RELEASE,
+ length, PFVF_STATUS_SUCCESS);
+}
+
+static int
+qed_iov_vf_flr_poll_dorq(struct qed_hwfn *p_hwfn,
+ struct qed_vf_info *p_vf, struct qed_ptt *p_ptt)
+{
+ int cnt;
+ u32 val;
+
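+	/* Pretend to be the VF so the DORQ usage counter reflects the VF's
+	 * doorbells, and poll until it drains.
+	 */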
+ qed_fid_pretend(p_hwfn, p_ptt, (u16) p_vf->concrete_fid);
+
+ for (cnt = 0; cnt < 50; cnt++) {
+ val = qed_rd(p_hwfn, p_ptt, DORQ_REG_VF_USAGE_CNT);
+ if (!val)
+ break;
+ msleep(20);
+ }
+ qed_fid_pretend(p_hwfn, p_ptt, (u16) p_hwfn->hw_info.concrete_fid);
+
+ if (cnt == 50) {
+ DP_ERR(p_hwfn,
+ "VF[%d] - dorq failed to cleanup [usage 0x%08x]\n",
+ p_vf->abs_vf_id, val);
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
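+/* Poll the PBF until it has consumed all blocks the VF still had in flight
+ * on every VOQ.
+ */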
+static int
+qed_iov_vf_flr_poll_pbf(struct qed_hwfn *p_hwfn,
+ struct qed_vf_info *p_vf, struct qed_ptt *p_ptt)
+{
+ u32 cons[MAX_NUM_VOQS], distance[MAX_NUM_VOQS];
+ int i, cnt;
+
+ /* Read initial consumers & producers */
+ for (i = 0; i < MAX_NUM_VOQS; i++) {
+ u32 prod;
+
+ cons[i] = qed_rd(p_hwfn, p_ptt,
+ PBF_REG_NUM_BLOCKS_ALLOCATED_CONS_VOQ0 +
+ i * 0x40);
+ prod = qed_rd(p_hwfn, p_ptt,
+ PBF_REG_NUM_BLOCKS_ALLOCATED_PROD_VOQ0 +
+ i * 0x40);
+ distance[i] = prod - cons[i];
+ }
+
+ /* Wait for consumers to pass the producers */
+ i = 0;
+ for (cnt = 0; cnt < 50; cnt++) {
+ for (; i < MAX_NUM_VOQS; i++) {
+ u32 tmp;
+
+ tmp = qed_rd(p_hwfn, p_ptt,
+ PBF_REG_NUM_BLOCKS_ALLOCATED_CONS_VOQ0 +
+ i * 0x40);
+ if (distance[i] > tmp - cons[i])
+ break;
+ }
+
+ if (i == MAX_NUM_VOQS)
+ break;
+
+ msleep(20);
+ }
+
+ if (cnt == 50) {
+ DP_ERR(p_hwfn, "VF[%d] - pbf polling failed on VOQ %d\n",
+ p_vf->abs_vf_id, i);
+ return -EBUSY;
+ }
+
+ return 0;
+}
+
+static int qed_iov_vf_flr_poll(struct qed_hwfn *p_hwfn,
+ struct qed_vf_info *p_vf, struct qed_ptt *p_ptt)
+{
+ int rc;
+
+ rc = qed_iov_vf_flr_poll_dorq(p_hwfn, p_vf, p_ptt);
+ if (rc)
+ return rc;
+
+ rc = qed_iov_vf_flr_poll_pbf(p_hwfn, p_vf, p_ptt);
+ if (rc)
+ return rc;
+
+ return 0;
+}
+
+static int
+qed_iov_execute_vf_flr_cleanup(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt,
+ u16 rel_vf_id, u32 *ack_vfs)
+{
+ struct qed_vf_info *p_vf;
+ int rc = 0;
+
+ p_vf = qed_iov_get_vf_info(p_hwfn, rel_vf_id, false);
+ if (!p_vf)
+ return 0;
+
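+	/* Only act on VFs that have an FLR indication pending */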
+ if (p_hwfn->pf_iov_info->pending_flr[rel_vf_id / 64] &
+ (1ULL << (rel_vf_id % 64))) {
+ u16 vfid = p_vf->abs_vf_id;
+
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "VF[%d] - Handling FLR\n", vfid);
+
+ qed_iov_vf_cleanup(p_hwfn, p_vf);
+
+ /* If VF isn't active, no need for anything but SW */
+ if (!p_vf->b_init)
+ goto cleanup;
+
+ rc = qed_iov_vf_flr_poll(p_hwfn, p_vf, p_ptt);
+ if (rc)
+ goto cleanup;
+
+ rc = qed_final_cleanup(p_hwfn, p_ptt, vfid, true);
+ if (rc) {
+ DP_ERR(p_hwfn, "Failed handle FLR of VF[%d]\n", vfid);
+ return rc;
+ }
+
+ /* VF_STOPPED has to be set only after final cleanup
+ * but prior to re-enabling the VF.
+ */
+ p_vf->state = VF_STOPPED;
+
+ rc = qed_iov_enable_vf_access(p_hwfn, p_ptt, p_vf);
+ if (rc) {
+			DP_ERR(p_hwfn, "Failed to re-enable VF[%d] access\n",
+ vfid);
+ return rc;
+ }
+cleanup:
+ /* Mark VF for ack and clean pending state */
+ if (p_vf->state == VF_RESET)
+ p_vf->state = VF_STOPPED;
+ ack_vfs[vfid / 32] |= (1 << (vfid % 32));
+ p_hwfn->pf_iov_info->pending_flr[rel_vf_id / 64] &=
+ ~(1ULL << (rel_vf_id % 64));
+ p_hwfn->pf_iov_info->pending_events[rel_vf_id / 64] &=
+ ~(1ULL << (rel_vf_id % 64));
+ }
+
+ return rc;
+}
+
+int qed_iov_vf_flr_cleanup(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+{
+ u32 ack_vfs[VF_MAX_STATIC / 32];
+ int rc = 0;
+ u16 i;
+
+ memset(ack_vfs, 0, sizeof(u32) * (VF_MAX_STATIC / 32));
+
+ /* Since BRB <-> PRS interface can't be tested as part of the flr
+ * polling due to HW limitations, simply sleep a bit. And since
+ * there's no need to wait per-vf, do it before looping.
+ */
+ msleep(100);
+
+ for (i = 0; i < p_hwfn->cdev->p_iov_info->total_vfs; i++)
+ qed_iov_execute_vf_flr_cleanup(p_hwfn, p_ptt, i, ack_vfs);
+
+ rc = qed_mcp_ack_vf_flr(p_hwfn, p_ptt, ack_vfs);
+ return rc;
+}
+
+int qed_iov_mark_vf_flr(struct qed_hwfn *p_hwfn, u32 *p_disabled_vfs)
+{
+ u16 i, found = 0;
+
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV, "Marking FLR-ed VFs\n");
+ for (i = 0; i < (VF_MAX_STATIC / 32); i++)
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "[%08x,...,%08x]: %08x\n",
+ i * 32, (i + 1) * 32 - 1, p_disabled_vfs[i]);
+
+ if (!p_hwfn->cdev->p_iov_info) {
+ DP_NOTICE(p_hwfn, "VF flr but no IOV\n");
+ return 0;
+ }
+
+ /* Mark VFs */
+ for (i = 0; i < p_hwfn->cdev->p_iov_info->total_vfs; i++) {
+ struct qed_vf_info *p_vf;
+ u8 vfid;
+
+ p_vf = qed_iov_get_vf_info(p_hwfn, i, false);
+ if (!p_vf)
+ continue;
+
+ vfid = p_vf->abs_vf_id;
+ if ((1 << (vfid % 32)) & p_disabled_vfs[vfid / 32]) {
+ u64 *p_flr = p_hwfn->pf_iov_info->pending_flr;
+ u16 rel_vf_id = p_vf->relative_vf_id;
+
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "VF[%d] [rel %d] got FLR-ed\n",
+ vfid, rel_vf_id);
+
+ p_vf->state = VF_RESET;
+
+			/* No need to lock here, since pending_flr should
+			 * only change here and before ACKing the MFW. Since
+			 * the MFW will not trigger an additional attention
+			 * for VF flr until we ACK, we're safe.
+			 */
+ p_flr[rel_vf_id / 64] |= 1ULL << (rel_vf_id % 64);
+ found = 1;
+ }
+ }
+
+ return found;
+}
+
+static void qed_iov_get_link(struct qed_hwfn *p_hwfn,
+ u16 vfid,
+ struct qed_mcp_link_params *p_params,
+ struct qed_mcp_link_state *p_link,
+ struct qed_mcp_link_capabilities *p_caps)
+{
+ struct qed_vf_info *p_vf = qed_iov_get_vf_info(p_hwfn,
+ vfid,
+ false);
+ struct qed_bulletin_content *p_bulletin;
+
+ if (!p_vf)
+ return;
+
+ p_bulletin = p_vf->bulletin.p_virt;
+
+ if (p_params)
+ __qed_vf_get_link_params(p_hwfn, p_params, p_bulletin);
+ if (p_link)
+ __qed_vf_get_link_state(p_hwfn, p_link, p_bulletin);
+ if (p_caps)
+ __qed_vf_get_link_caps(p_hwfn, p_caps, p_bulletin);
+}
+
+static void qed_iov_process_mbx_req(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt, int vfid)
+{
+ struct qed_iov_vf_mbx *mbx;
+ struct qed_vf_info *p_vf;
+ int i;
+
+ p_vf = qed_iov_get_vf_info(p_hwfn, (u16) vfid, true);
+ if (!p_vf)
+ return;
+
+ mbx = &p_vf->vf_mbx;
+
+ /* qed_iov_process_mbx_request */
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_IOV,
+ "qed_iov_process_mbx_req vfid %d\n", p_vf->abs_vf_id);
+
+ mbx->first_tlv = mbx->req_virt->first_tlv;
+
+ /* check if tlv type is known */
+ if (qed_iov_tlv_supported(mbx->first_tlv.tl.type)) {
+ switch (mbx->first_tlv.tl.type) {
+ case CHANNEL_TLV_ACQUIRE:
+ qed_iov_vf_mbx_acquire(p_hwfn, p_ptt, p_vf);
+ break;
+ case CHANNEL_TLV_VPORT_START:
+ qed_iov_vf_mbx_start_vport(p_hwfn, p_ptt, p_vf);
+ break;
+ case CHANNEL_TLV_VPORT_TEARDOWN:
+ qed_iov_vf_mbx_stop_vport(p_hwfn, p_ptt, p_vf);
+ break;
+ case CHANNEL_TLV_START_RXQ:
+ qed_iov_vf_mbx_start_rxq(p_hwfn, p_ptt, p_vf);
+ break;
+ case CHANNEL_TLV_START_TXQ:
+ qed_iov_vf_mbx_start_txq(p_hwfn, p_ptt, p_vf);
+ break;
+ case CHANNEL_TLV_STOP_RXQS:
+ qed_iov_vf_mbx_stop_rxqs(p_hwfn, p_ptt, p_vf);
+ break;
+ case CHANNEL_TLV_STOP_TXQS:
+ qed_iov_vf_mbx_stop_txqs(p_hwfn, p_ptt, p_vf);
+ break;
+ case CHANNEL_TLV_UPDATE_RXQ:
+ qed_iov_vf_mbx_update_rxqs(p_hwfn, p_ptt, p_vf);
+ break;
+ case CHANNEL_TLV_VPORT_UPDATE:
+ qed_iov_vf_mbx_vport_update(p_hwfn, p_ptt, p_vf);
+ break;
+ case CHANNEL_TLV_UCAST_FILTER:
+ qed_iov_vf_mbx_ucast_filter(p_hwfn, p_ptt, p_vf);
+ break;
+ case CHANNEL_TLV_CLOSE:
+ qed_iov_vf_mbx_close(p_hwfn, p_ptt, p_vf);
+ break;
+ case CHANNEL_TLV_INT_CLEANUP:
+ qed_iov_vf_mbx_int_cleanup(p_hwfn, p_ptt, p_vf);
+ break;
+ case CHANNEL_TLV_RELEASE:
+ qed_iov_vf_mbx_release(p_hwfn, p_ptt, p_vf);
+ break;
+ }
+ } else {
+ /* unknown TLV - this may belong to a VF driver from the future
+ * - a version written after this PF driver was written, which
+ * supports features unknown as of yet. Too bad since we don't
+ * support them. Or this may be because someone wrote a crappy
+ * VF driver and is sending garbage over the channel.
+ */
+ DP_ERR(p_hwfn,
+ "unknown TLV. type %d length %d. first 20 bytes of mailbox buffer:\n",
+ mbx->first_tlv.tl.type, mbx->first_tlv.tl.length);
+
+ for (i = 0; i < 20; i++) {
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_IOV,
+ "%x ",
+ mbx->req_virt->tlv_buf_size.tlv_buffer[i]);
+ }
+ }
+}
+
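+/* pending_events is a per-PF bitmap of VFs whose mailbox requests are still
+ * waiting to be handled by the IOV workqueue.
+ */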
+void qed_iov_pf_add_pending_events(struct qed_hwfn *p_hwfn, u8 vfid)
+{
+ u64 add_bit = 1ULL << (vfid % 64);
+
+ p_hwfn->pf_iov_info->pending_events[vfid / 64] |= add_bit;
+}
+
+static void qed_iov_pf_get_and_clear_pending_events(struct qed_hwfn *p_hwfn,
+ u64 *events)
+{
+ u64 *p_pending_events = p_hwfn->pf_iov_info->pending_events;
+
+ memcpy(events, p_pending_events, sizeof(u64) * QED_VF_ARRAY_LENGTH);
+ memset(p_pending_events, 0, sizeof(u64) * QED_VF_ARRAY_LENGTH);
+}
+
+static int qed_sriov_vfpf_msg(struct qed_hwfn *p_hwfn,
+ u16 abs_vfid, struct regpair *vf_msg)
+{
+ u8 min = (u8)p_hwfn->cdev->p_iov_info->first_vf_in_pf;
+ struct qed_vf_info *p_vf;
+
+ if (!qed_iov_pf_sanity_check(p_hwfn, (int)abs_vfid - min)) {
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_IOV,
+ "Got a message from VF [abs 0x%08x] that cannot be handled by PF\n",
+ abs_vfid);
+ return 0;
+ }
+ p_vf = &p_hwfn->pf_iov_info->vfs_array[(u8)abs_vfid - min];
+
+ /* List the physical address of the request so that handler
+ * could later on copy the message from it.
+ */
+ p_vf->vf_mbx.pending_req = (((u64)vf_msg->hi) << 32) | vf_msg->lo;
+
+ /* Mark the event and schedule the workqueue */
+ qed_iov_pf_add_pending_events(p_hwfn, p_vf->relative_vf_id);
+ qed_schedule_iov(p_hwfn, QED_IOV_WQ_MSG_FLAG);
+
+ return 0;
+}
+
+int qed_sriov_eqe_event(struct qed_hwfn *p_hwfn,
+ u8 opcode, __le16 echo, union event_ring_data *data)
+{
+ switch (opcode) {
+ case COMMON_EVENT_VF_PF_CHANNEL:
+ return qed_sriov_vfpf_msg(p_hwfn, le16_to_cpu(echo),
+ &data->vf_pf_channel.msg_addr);
+ default:
+ DP_INFO(p_hwfn->cdev, "Unknown sriov eqe event 0x%02x\n",
+ opcode);
+ return -EINVAL;
+ }
+}
+
+u16 qed_iov_get_next_active_vf(struct qed_hwfn *p_hwfn, u16 rel_vf_id)
+{
+ struct qed_hw_sriov_info *p_iov = p_hwfn->cdev->p_iov_info;
+ u16 i;
+
+ if (!p_iov)
+ goto out;
+
+ for (i = rel_vf_id; i < p_iov->total_vfs; i++)
+ if (qed_iov_is_valid_vfid(p_hwfn, rel_vf_id, true))
+ return i;
+
+out:
+ return MAX_NUM_VFS;
+}
+
+static int qed_iov_copy_vf_msg(struct qed_hwfn *p_hwfn, struct qed_ptt *ptt,
+ int vfid)
+{
+ struct qed_dmae_params params;
+ struct qed_vf_info *vf_info;
+
+ vf_info = qed_iov_get_vf_info(p_hwfn, (u16) vfid, true);
+ if (!vf_info)
+ return -EINVAL;
+
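+	/* DMA the request from the VF-provided address into the PF's local
+	 * copy of the mailbox.
+	 */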
+ memset(&params, 0, sizeof(struct qed_dmae_params));
+ params.flags = QED_DMAE_FLAG_VF_SRC | QED_DMAE_FLAG_COMPLETION_DST;
+ params.src_vfid = vf_info->abs_vf_id;
+
+ if (qed_dmae_host2host(p_hwfn, ptt,
+ vf_info->vf_mbx.pending_req,
+ vf_info->vf_mbx.req_phys,
+ sizeof(union vfpf_tlvs) / 4, &params)) {
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "Failed to copy message from VF 0x%02x\n", vfid);
+
+ return -EIO;
+ }
+
+ return 0;
+}
+
+static void qed_iov_bulletin_set_forced_mac(struct qed_hwfn *p_hwfn,
+ u8 *mac, int vfid)
+{
+ struct qed_vf_info *vf_info;
+ u64 feature;
+
+ vf_info = qed_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+ if (!vf_info) {
+ DP_NOTICE(p_hwfn->cdev,
+ "Can not set forced MAC, invalid vfid [%d]\n", vfid);
+ return;
+ }
+
+ feature = 1 << MAC_ADDR_FORCED;
+ memcpy(vf_info->bulletin.p_virt->mac, mac, ETH_ALEN);
+
+ vf_info->bulletin.p_virt->valid_bitmap |= feature;
+ /* Forced MAC will disable MAC_ADDR */
+ vf_info->bulletin.p_virt->valid_bitmap &=
+ ~(1 << VFPF_BULLETIN_MAC_ADDR);
+
+ qed_iov_configure_vport_forced(p_hwfn, vf_info, feature);
+}
+
+void qed_iov_bulletin_set_forced_vlan(struct qed_hwfn *p_hwfn,
+ u16 pvid, int vfid)
+{
+ struct qed_vf_info *vf_info;
+ u64 feature;
+
+ vf_info = qed_iov_get_vf_info(p_hwfn, (u16) vfid, true);
+ if (!vf_info) {
+ DP_NOTICE(p_hwfn->cdev,
+			  "Can not set forced VLAN, invalid vfid [%d]\n", vfid);
+ return;
+ }
+
+ feature = 1 << VLAN_ADDR_FORCED;
+ vf_info->bulletin.p_virt->pvid = pvid;
+ if (pvid)
+ vf_info->bulletin.p_virt->valid_bitmap |= feature;
+ else
+ vf_info->bulletin.p_virt->valid_bitmap &= ~feature;
+
+ qed_iov_configure_vport_forced(p_hwfn, vf_info, feature);
+}
+
+static bool qed_iov_vf_has_vport_instance(struct qed_hwfn *p_hwfn, int vfid)
+{
+ struct qed_vf_info *p_vf_info;
+
+ p_vf_info = qed_iov_get_vf_info(p_hwfn, (u16) vfid, true);
+ if (!p_vf_info)
+ return false;
+
+ return !!p_vf_info->vport_instance;
+}
+
+bool qed_iov_is_vf_stopped(struct qed_hwfn *p_hwfn, int vfid)
+{
+ struct qed_vf_info *p_vf_info;
+
+ p_vf_info = qed_iov_get_vf_info(p_hwfn, (u16) vfid, true);
+ if (!p_vf_info)
+ return true;
+
+ return p_vf_info->state == VF_STOPPED;
+}
+
+static bool qed_iov_spoofchk_get(struct qed_hwfn *p_hwfn, int vfid)
+{
+ struct qed_vf_info *vf_info;
+
+ vf_info = qed_iov_get_vf_info(p_hwfn, (u16) vfid, true);
+ if (!vf_info)
+ return false;
+
+ return vf_info->spoof_chk;
+}
+
+int qed_iov_spoofchk_set(struct qed_hwfn *p_hwfn, int vfid, bool val)
+{
+ struct qed_vf_info *vf;
+ int rc = -EINVAL;
+
+ if (!qed_iov_pf_sanity_check(p_hwfn, vfid)) {
+ DP_NOTICE(p_hwfn,
+ "SR-IOV sanity check failed, can't set spoofchk\n");
+ goto out;
+ }
+
+ vf = qed_iov_get_vf_info(p_hwfn, (u16) vfid, true);
+ if (!vf)
+ goto out;
+
+ if (!qed_iov_vf_has_vport_instance(p_hwfn, vfid)) {
+ /* After VF VPORT start PF will configure spoof check */
+ vf->req_spoofchk_val = val;
+ rc = 0;
+ goto out;
+ }
+
+ rc = __qed_iov_spoofchk_set(p_hwfn, vf, val);
+
+out:
+ return rc;
+}
+
+static u8 *qed_iov_bulletin_get_forced_mac(struct qed_hwfn *p_hwfn,
+ u16 rel_vf_id)
+{
+ struct qed_vf_info *p_vf;
+
+ p_vf = qed_iov_get_vf_info(p_hwfn, rel_vf_id, true);
+ if (!p_vf || !p_vf->bulletin.p_virt)
+ return NULL;
+
+ if (!(p_vf->bulletin.p_virt->valid_bitmap & (1 << MAC_ADDR_FORCED)))
+ return NULL;
+
+ return p_vf->bulletin.p_virt->mac;
+}
+
+u16 qed_iov_bulletin_get_forced_vlan(struct qed_hwfn *p_hwfn, u16 rel_vf_id)
+{
+ struct qed_vf_info *p_vf;
+
+ p_vf = qed_iov_get_vf_info(p_hwfn, rel_vf_id, true);
+ if (!p_vf || !p_vf->bulletin.p_virt)
+ return 0;
+
+ if (!(p_vf->bulletin.p_virt->valid_bitmap & (1 << VLAN_ADDR_FORCED)))
+ return 0;
+
+ return p_vf->bulletin.p_virt->pvid;
+}
+
+static int qed_iov_configure_tx_rate(struct qed_hwfn *p_hwfn,
+ struct qed_ptt *p_ptt, int vfid, int val)
+{
+ struct qed_vf_info *vf;
+ u8 abs_vp_id = 0;
+ int rc;
+
+ vf = qed_iov_get_vf_info(p_hwfn, (u16)vfid, true);
+ if (!vf)
+ return -EINVAL;
+
+ rc = qed_fw_vport(p_hwfn, vf->vport_id, &abs_vp_id);
+ if (rc)
+ return rc;
+
+ return qed_init_vport_rl(p_hwfn, p_ptt, abs_vp_id, (u32)val);
+}
+
+int qed_iov_configure_min_tx_rate(struct qed_dev *cdev, int vfid, u32 rate)
+{
+ struct qed_vf_info *vf;
+ u8 vport_id;
+ int i;
+
+ for_each_hwfn(cdev, i) {
+ struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
+
+ if (!qed_iov_pf_sanity_check(p_hwfn, vfid)) {
+ DP_NOTICE(p_hwfn,
+ "SR-IOV sanity check failed, can't set min rate\n");
+ return -EINVAL;
+ }
+ }
+
+ vf = qed_iov_get_vf_info(QED_LEADING_HWFN(cdev), (u16)vfid, true);
+ vport_id = vf->vport_id;
+
+ return qed_configure_vport_wfq(cdev, vport_id, rate);
+}
+
+static int qed_iov_get_vf_min_rate(struct qed_hwfn *p_hwfn, int vfid)
+{
+ struct qed_wfq_data *vf_vp_wfq;
+ struct qed_vf_info *vf_info;
+
+ vf_info = qed_iov_get_vf_info(p_hwfn, (u16) vfid, true);
+ if (!vf_info)
+ return 0;
+
+ vf_vp_wfq = &p_hwfn->qm_info.wfq_data[vf_info->vport_id];
+
+ if (vf_vp_wfq->configured)
+ return vf_vp_wfq->min_speed;
+ else
+ return 0;
+}
+
+/**
+ * qed_schedule_iov - schedules IOV task for VF and PF
+ * @hwfn: hardware function pointer
+ * @flag: IOV flag for VF/PF
+ */
+void qed_schedule_iov(struct qed_hwfn *hwfn, enum qed_iov_wq_flag flag)
+{
+ smp_mb__before_atomic();
+ set_bit(flag, &hwfn->iov_task_flags);
+ smp_mb__after_atomic();
+ DP_VERBOSE(hwfn, QED_MSG_IOV, "Scheduling iov task [Flag: %d]\n", flag);
+ queue_delayed_work(hwfn->iov_wq, &hwfn->iov_task, 0);
+}
+
+void qed_vf_start_iov_wq(struct qed_dev *cdev)
+{
+ int i;
+
+ for_each_hwfn(cdev, i)
+ queue_delayed_work(cdev->hwfns[i].iov_wq,
+ &cdev->hwfns[i].iov_task, 0);
+}
+
+int qed_sriov_disable(struct qed_dev *cdev, bool pci_enabled)
+{
+ int i, j;
+
+ for_each_hwfn(cdev, i)
+ if (cdev->hwfns[i].iov_wq)
+ flush_workqueue(cdev->hwfns[i].iov_wq);
+
+ /* Mark VFs for disablement */
+ qed_iov_set_vfs_to_disable(cdev, true);
+
+ if (cdev->p_iov_info && cdev->p_iov_info->num_vfs && pci_enabled)
+ pci_disable_sriov(cdev->pdev);
+
+ for_each_hwfn(cdev, i) {
+ struct qed_hwfn *hwfn = &cdev->hwfns[i];
+ struct qed_ptt *ptt = qed_ptt_acquire(hwfn);
+
+ /* Failure to acquire the ptt in 100g creates an odd error
+		 * where the first engine has already released IOV.
+ */
+ if (!ptt) {
+ DP_ERR(hwfn, "Failed to acquire ptt\n");
+ return -EBUSY;
+ }
+
+ /* Clean WFQ db and configure equal weight for all vports */
+ qed_clean_wfq_db(hwfn, ptt);
+
+ qed_for_each_vf(hwfn, j) {
+ int k;
+
+ if (!qed_iov_is_valid_vfid(hwfn, j, true))
+ continue;
+
+ /* Wait until VF is disabled before releasing */
+ for (k = 0; k < 100; k++) {
+ if (!qed_iov_is_vf_stopped(hwfn, j))
+ msleep(20);
+ else
+ break;
+ }
+
+ if (k < 100)
+ qed_iov_release_hw_for_vf(&cdev->hwfns[i],
+ ptt, j);
+ else
+ DP_ERR(hwfn,
+ "Timeout waiting for VF's FLR to end\n");
+ }
+
+ qed_ptt_release(hwfn, ptt);
+ }
+
+ qed_iov_set_vfs_to_disable(cdev, false);
+
+ return 0;
+}
+
+static int qed_sriov_enable(struct qed_dev *cdev, int num)
+{
+ struct qed_sb_cnt_info sb_cnt_info;
+ int i, j, rc;
+
+ if (num >= RESC_NUM(&cdev->hwfns[0], QED_VPORT)) {
+ DP_NOTICE(cdev, "Can start at most %d VFs\n",
+ RESC_NUM(&cdev->hwfns[0], QED_VPORT) - 1);
+ return -EINVAL;
+ }
+
+ /* Initialize HW for VF access */
+ for_each_hwfn(cdev, j) {
+ struct qed_hwfn *hwfn = &cdev->hwfns[j];
+ struct qed_ptt *ptt = qed_ptt_acquire(hwfn);
+ int num_sbs = 0, limit = 16;
+
+ if (!ptt) {
+ DP_ERR(hwfn, "Failed to acquire ptt\n");
+ rc = -EBUSY;
+ goto err;
+ }
+
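+		/* Determine how many free status blocks this PF may hand out
+		 * to its VFs.
+		 */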
+ if (IS_MF_DEFAULT(hwfn))
+ limit = MAX_NUM_VFS_BB / hwfn->num_funcs_on_engine;
+
+ memset(&sb_cnt_info, 0, sizeof(sb_cnt_info));
+ qed_int_get_num_sbs(hwfn, &sb_cnt_info);
+ num_sbs = min_t(int, sb_cnt_info.sb_free_blk, limit);
+
+ for (i = 0; i < num; i++) {
+ if (!qed_iov_is_valid_vfid(hwfn, i, false))
+ continue;
+
+ rc = qed_iov_init_hw_for_vf(hwfn,
+ ptt, i, num_sbs / num);
+ if (rc) {
+ DP_ERR(cdev, "Failed to enable VF[%d]\n", i);
+ qed_ptt_release(hwfn, ptt);
+ goto err;
+ }
+ }
+
+ qed_ptt_release(hwfn, ptt);
+ }
+
+ /* Enable SRIOV PCIe functions */
+ rc = pci_enable_sriov(cdev->pdev, num);
+ if (rc) {
+ DP_ERR(cdev, "Failed to enable sriov [%d]\n", rc);
+ goto err;
+ }
+
+ return num;
+
+err:
+ qed_sriov_disable(cdev, false);
+ return rc;
+}
+
+static int qed_sriov_configure(struct qed_dev *cdev, int num_vfs_param)
+{
+ if (!IS_QED_SRIOV(cdev)) {
+ DP_VERBOSE(cdev, QED_MSG_IOV, "SR-IOV is not supported\n");
+ return -EOPNOTSUPP;
+ }
+
+ if (num_vfs_param)
+ return qed_sriov_enable(cdev, num_vfs_param);
+ else
+ return qed_sriov_disable(cdev, true);
+}
+
+static int qed_sriov_pf_set_mac(struct qed_dev *cdev, u8 *mac, int vfid)
+{
+ int i;
+
+ if (!IS_QED_SRIOV(cdev) || !IS_PF_SRIOV_ALLOC(&cdev->hwfns[0])) {
+ DP_VERBOSE(cdev, QED_MSG_IOV,
+ "Cannot set a VF MAC; Sriov is not enabled\n");
+ return -EINVAL;
+ }
+
+ if (!qed_iov_is_valid_vfid(&cdev->hwfns[0], vfid, true)) {
+ DP_VERBOSE(cdev, QED_MSG_IOV,
+ "Cannot set VF[%d] MAC (VF is not active)\n", vfid);
+ return -EINVAL;
+ }
+
+ for_each_hwfn(cdev, i) {
+ struct qed_hwfn *hwfn = &cdev->hwfns[i];
+ struct qed_public_vf_info *vf_info;
+
+ vf_info = qed_iov_get_public_vf_info(hwfn, vfid, true);
+ if (!vf_info)
+ continue;
+
+ /* Set the forced MAC, and schedule the IOV task */
+ ether_addr_copy(vf_info->forced_mac, mac);
+ qed_schedule_iov(hwfn, QED_IOV_WQ_SET_UNICAST_FILTER_FLAG);
+ }
+
+ return 0;
+}
+
+static int qed_sriov_pf_set_vlan(struct qed_dev *cdev, u16 vid, int vfid)
+{
+ int i;
+
+ if (!IS_QED_SRIOV(cdev) || !IS_PF_SRIOV_ALLOC(&cdev->hwfns[0])) {
+ DP_VERBOSE(cdev, QED_MSG_IOV,
+			   "Cannot set a VF VLAN; Sriov is not enabled\n");
+ return -EINVAL;
+ }
+
+ if (!qed_iov_is_valid_vfid(&cdev->hwfns[0], vfid, true)) {
+ DP_VERBOSE(cdev, QED_MSG_IOV,
+			   "Cannot set VF[%d] VLAN (VF is not active)\n", vfid);
+ return -EINVAL;
+ }
+
+ for_each_hwfn(cdev, i) {
+ struct qed_hwfn *hwfn = &cdev->hwfns[i];
+ struct qed_public_vf_info *vf_info;
+
+ vf_info = qed_iov_get_public_vf_info(hwfn, vfid, true);
+ if (!vf_info)
+ continue;
+
+ /* Set the forced vlan, and schedule the IOV task */
+ vf_info->forced_vlan = vid;
+ qed_schedule_iov(hwfn, QED_IOV_WQ_SET_UNICAST_FILTER_FLAG);
+ }
+
+ return 0;
+}
+
+static int qed_get_vf_config(struct qed_dev *cdev,
+ int vf_id, struct ifla_vf_info *ivi)
+{
+ struct qed_hwfn *hwfn = QED_LEADING_HWFN(cdev);
+ struct qed_public_vf_info *vf_info;
+ struct qed_mcp_link_state link;
+ u32 tx_rate;
+
+ /* Sanitize request */
+ if (IS_VF(cdev))
+ return -EINVAL;
+
+ if (!qed_iov_is_valid_vfid(&cdev->hwfns[0], vf_id, true)) {
+ DP_VERBOSE(cdev, QED_MSG_IOV,
+ "VF index [%d] isn't active\n", vf_id);
+ return -EINVAL;
+ }
+
+ vf_info = qed_iov_get_public_vf_info(hwfn, vf_id, true);
+
+ qed_iov_get_link(hwfn, vf_id, NULL, &link, NULL);
+
+ /* Fill information about VF */
+ ivi->vf = vf_id;
+
+ if (is_valid_ether_addr(vf_info->forced_mac))
+ ether_addr_copy(ivi->mac, vf_info->forced_mac);
+ else
+ ether_addr_copy(ivi->mac, vf_info->mac);
+
+ ivi->vlan = vf_info->forced_vlan;
+ ivi->spoofchk = qed_iov_spoofchk_get(hwfn, vf_id);
+ ivi->linkstate = vf_info->link_state;
+ tx_rate = vf_info->tx_rate;
+ ivi->max_tx_rate = tx_rate ? tx_rate : link.speed;
+ ivi->min_tx_rate = qed_iov_get_vf_min_rate(hwfn, vf_id);
+
+ return 0;
+}
+
+void qed_inform_vf_link_state(struct qed_hwfn *hwfn)
+{
+ struct qed_mcp_link_capabilities caps;
+ struct qed_mcp_link_params params;
+ struct qed_mcp_link_state link;
+ int i;
+
+ if (!hwfn->pf_iov_info)
+ return;
+
+ /* Update bulletin of all future possible VFs with link configuration */
+ for (i = 0; i < hwfn->cdev->p_iov_info->total_vfs; i++) {
+ struct qed_public_vf_info *vf_info;
+
+ vf_info = qed_iov_get_public_vf_info(hwfn, i, false);
+ if (!vf_info)
+ continue;
+
+ memcpy(&params, qed_mcp_get_link_params(hwfn), sizeof(params));
+ memcpy(&link, qed_mcp_get_link_state(hwfn), sizeof(link));
+ memcpy(&caps, qed_mcp_get_link_capabilities(hwfn),
+ sizeof(caps));
+
+ /* Modify link according to the VF's configured link state */
+ switch (vf_info->link_state) {
+ case IFLA_VF_LINK_STATE_DISABLE:
+ link.link_up = false;
+ break;
+ case IFLA_VF_LINK_STATE_ENABLE:
+ link.link_up = true;
+			/* Set speed according to the maximum supported by HW:
+			 * 40G for regular devices and 100G for CMT-mode
+			 * devices.
+ */
+ link.speed = (hwfn->cdev->num_hwfns > 1) ?
+ 100000 : 40000;
+ default:
+ /* In auto mode pass PF link image to VF */
+ break;
+ }
+
+ if (link.link_up && vf_info->tx_rate) {
+ struct qed_ptt *ptt;
+ int rate;
+
+ rate = min_t(int, vf_info->tx_rate, link.speed);
+
+ ptt = qed_ptt_acquire(hwfn);
+ if (!ptt) {
+ DP_NOTICE(hwfn, "Failed to acquire PTT\n");
+ return;
+ }
+
+ if (!qed_iov_configure_tx_rate(hwfn, ptt, i, rate)) {
+ vf_info->tx_rate = rate;
+ link.speed = rate;
+ }
+
+ qed_ptt_release(hwfn, ptt);
+ }
+
+ qed_iov_set_link(hwfn, i, &params, &link, &caps);
+ }
+
+ qed_schedule_iov(hwfn, QED_IOV_WQ_BULLETIN_UPDATE_FLAG);
+}
+
+static int qed_set_vf_link_state(struct qed_dev *cdev,
+ int vf_id, int link_state)
+{
+ int i;
+
+ /* Sanitize request */
+ if (IS_VF(cdev))
+ return -EINVAL;
+
+ if (!qed_iov_is_valid_vfid(&cdev->hwfns[0], vf_id, true)) {
+ DP_VERBOSE(cdev, QED_MSG_IOV,
+ "VF index [%d] isn't active\n", vf_id);
+ return -EINVAL;
+ }
+
+ /* Handle configuration of link state */
+ for_each_hwfn(cdev, i) {
+ struct qed_hwfn *hwfn = &cdev->hwfns[i];
+ struct qed_public_vf_info *vf;
+
+ vf = qed_iov_get_public_vf_info(hwfn, vf_id, true);
+ if (!vf)
+ continue;
+
+ if (vf->link_state == link_state)
+ continue;
+
+ vf->link_state = link_state;
+ qed_inform_vf_link_state(&cdev->hwfns[i]);
+ }
+
+ return 0;
+}
+
+static int qed_spoof_configure(struct qed_dev *cdev, int vfid, bool val)
+{
+ int i, rc = -EINVAL;
+
+ for_each_hwfn(cdev, i) {
+ struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
+
+ rc = qed_iov_spoofchk_set(p_hwfn, vfid, val);
+ if (rc)
+ break;
+ }
+
+ return rc;
+}
+
+static int qed_configure_max_vf_rate(struct qed_dev *cdev, int vfid, int rate)
+{
+ int i;
+
+ for_each_hwfn(cdev, i) {
+ struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
+ struct qed_public_vf_info *vf;
+
+ if (!qed_iov_pf_sanity_check(p_hwfn, vfid)) {
+ DP_NOTICE(p_hwfn,
+ "SR-IOV sanity check failed, can't set tx rate\n");
+ return -EINVAL;
+ }
+
+ vf = qed_iov_get_public_vf_info(p_hwfn, vfid, true);
+
+ vf->tx_rate = rate;
+
+ qed_inform_vf_link_state(p_hwfn);
+ }
+
+ return 0;
+}
+
+static int qed_set_vf_rate(struct qed_dev *cdev,
+ int vfid, u32 min_rate, u32 max_rate)
+{
+ int rc_min = 0, rc_max = 0;
+
+ if (max_rate)
+ rc_max = qed_configure_max_vf_rate(cdev, vfid, max_rate);
+
+ if (min_rate)
+ rc_min = qed_iov_configure_min_tx_rate(cdev, vfid, min_rate);
+
+ if (rc_max | rc_min)
+ return -EINVAL;
+
+ return 0;
+}
+
+static void qed_handle_vf_msg(struct qed_hwfn *hwfn)
+{
+ u64 events[QED_VF_ARRAY_LENGTH];
+ struct qed_ptt *ptt;
+ int i;
+
+ ptt = qed_ptt_acquire(hwfn);
+ if (!ptt) {
+ DP_VERBOSE(hwfn, QED_MSG_IOV,
+ "Can't acquire PTT; re-scheduling\n");
+ qed_schedule_iov(hwfn, QED_IOV_WQ_MSG_FLAG);
+ return;
+ }
+
+ qed_iov_pf_get_and_clear_pending_events(hwfn, events);
+
+ DP_VERBOSE(hwfn, QED_MSG_IOV,
+ "Event mask of VF events: 0x%llx 0x%llx 0x%llx\n",
+ events[0], events[1], events[2]);
+
+ qed_for_each_vf(hwfn, i) {
+ /* Skip VFs with no pending messages */
+ if (!(events[i / 64] & (1ULL << (i % 64))))
+ continue;
+
+ DP_VERBOSE(hwfn, QED_MSG_IOV,
+ "Handling VF message from VF 0x%02x [Abs 0x%02x]\n",
+ i, hwfn->cdev->p_iov_info->first_vf_in_pf + i);
+
+ /* Copy VF's message to PF's request buffer for that VF */
+ if (qed_iov_copy_vf_msg(hwfn, ptt, i))
+ continue;
+
+ qed_iov_process_mbx_req(hwfn, ptt, i);
+ }
+
+ qed_ptt_release(hwfn, ptt);
+}
+
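+/* Reflect PF-requested forced MAC / pvid settings into each VF's bulletin */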
+static void qed_handle_pf_set_vf_unicast(struct qed_hwfn *hwfn)
+{
+ int i;
+
+ qed_for_each_vf(hwfn, i) {
+ struct qed_public_vf_info *info;
+ bool update = false;
+ u8 *mac;
+
+ info = qed_iov_get_public_vf_info(hwfn, i, true);
+ if (!info)
+ continue;
+
+ /* Update data on bulletin board */
+ mac = qed_iov_bulletin_get_forced_mac(hwfn, i);
+ if (is_valid_ether_addr(info->forced_mac) &&
+ (!mac || !ether_addr_equal(mac, info->forced_mac))) {
+ DP_VERBOSE(hwfn,
+ QED_MSG_IOV,
+ "Handling PF setting of VF MAC to VF 0x%02x [Abs 0x%02x]\n",
+ i,
+ hwfn->cdev->p_iov_info->first_vf_in_pf + i);
+
+ /* Update bulletin board with forced MAC */
+ qed_iov_bulletin_set_forced_mac(hwfn,
+ info->forced_mac, i);
+ update = true;
+ }
+
+ if (qed_iov_bulletin_get_forced_vlan(hwfn, i) ^
+ info->forced_vlan) {
+ DP_VERBOSE(hwfn,
+ QED_MSG_IOV,
+ "Handling PF setting of pvid [0x%04x] to VF 0x%02x [Abs 0x%02x]\n",
+ info->forced_vlan,
+ i,
+ hwfn->cdev->p_iov_info->first_vf_in_pf + i);
+ qed_iov_bulletin_set_forced_vlan(hwfn,
+ info->forced_vlan, i);
+ update = true;
+ }
+
+ if (update)
+ qed_schedule_iov(hwfn, QED_IOV_WQ_BULLETIN_UPDATE_FLAG);
+ }
+}
+
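+/* Post the updated bulletin board of every active VF back to the VF */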
+static void qed_handle_bulletin_post(struct qed_hwfn *hwfn)
+{
+ struct qed_ptt *ptt;
+ int i;
+
+ ptt = qed_ptt_acquire(hwfn);
+ if (!ptt) {
+ DP_NOTICE(hwfn, "Failed allocating a ptt entry\n");
+ qed_schedule_iov(hwfn, QED_IOV_WQ_BULLETIN_UPDATE_FLAG);
+ return;
+ }
+
+ qed_for_each_vf(hwfn, i)
+ qed_iov_post_vf_bulletin(hwfn, i, ptt);
+
+ qed_ptt_release(hwfn, ptt);
+}
+
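+/* Deferred-work handler for the PF IOV workqueue; each flag set in
+ * iov_task_flags via qed_schedule_iov() is tested, cleared and handled here.
+ */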
+void qed_iov_pf_task(struct work_struct *work)
+{
+ struct qed_hwfn *hwfn = container_of(work, struct qed_hwfn,
+ iov_task.work);
+ int rc;
+
+ if (test_and_clear_bit(QED_IOV_WQ_STOP_WQ_FLAG, &hwfn->iov_task_flags))
+ return;
+
+ if (test_and_clear_bit(QED_IOV_WQ_FLR_FLAG, &hwfn->iov_task_flags)) {
+ struct qed_ptt *ptt = qed_ptt_acquire(hwfn);
+
+ if (!ptt) {
+ qed_schedule_iov(hwfn, QED_IOV_WQ_FLR_FLAG);
+ return;
+ }
+
+ rc = qed_iov_vf_flr_cleanup(hwfn, ptt);
+ if (rc)
+ qed_schedule_iov(hwfn, QED_IOV_WQ_FLR_FLAG);
+
+ qed_ptt_release(hwfn, ptt);
+ }
+
+ if (test_and_clear_bit(QED_IOV_WQ_MSG_FLAG, &hwfn->iov_task_flags))
+ qed_handle_vf_msg(hwfn);
+
+ if (test_and_clear_bit(QED_IOV_WQ_SET_UNICAST_FILTER_FLAG,
+ &hwfn->iov_task_flags))
+ qed_handle_pf_set_vf_unicast(hwfn);
+
+ if (test_and_clear_bit(QED_IOV_WQ_BULLETIN_UPDATE_FLAG,
+ &hwfn->iov_task_flags))
+ qed_handle_bulletin_post(hwfn);
+}
+
+void qed_iov_wq_stop(struct qed_dev *cdev, bool schedule_first)
+{
+ int i;
+
+ for_each_hwfn(cdev, i) {
+ if (!cdev->hwfns[i].iov_wq)
+ continue;
+
+ if (schedule_first) {
+ qed_schedule_iov(&cdev->hwfns[i],
+ QED_IOV_WQ_STOP_WQ_FLAG);
+ cancel_delayed_work_sync(&cdev->hwfns[i].iov_task);
+ }
+
+ flush_workqueue(cdev->hwfns[i].iov_wq);
+ destroy_workqueue(cdev->hwfns[i].iov_wq);
+ }
+}
+
+int qed_iov_wq_start(struct qed_dev *cdev)
+{
+ char name[NAME_SIZE];
+ int i;
+
+ for_each_hwfn(cdev, i) {
+ struct qed_hwfn *p_hwfn = &cdev->hwfns[i];
+
+ /* PFs need a dedicated workqueue only if they support IOV.
+ * VFs always require one.
+ */
+ if (IS_PF(p_hwfn->cdev) && !IS_PF_SRIOV(p_hwfn))
+ continue;
+
+ snprintf(name, NAME_SIZE, "iov-%02x:%02x.%02x",
+ cdev->pdev->bus->number,
+ PCI_SLOT(cdev->pdev->devfn), p_hwfn->abs_pf_id);
+
+ p_hwfn->iov_wq = create_singlethread_workqueue(name);
+ if (!p_hwfn->iov_wq) {
+ DP_NOTICE(p_hwfn, "Cannot create iov workqueue\n");
+ return -ENOMEM;
+ }
+
+ if (IS_PF(cdev))
+ INIT_DELAYED_WORK(&p_hwfn->iov_task, qed_iov_pf_task);
+ else
+ INIT_DELAYED_WORK(&p_hwfn->iov_task, qed_iov_vf_task);
+ }
+
+ return 0;
+}
+
+const struct qed_iov_hv_ops qed_iov_ops_pass = {
+ .configure = &qed_sriov_configure,
+ .set_mac = &qed_sriov_pf_set_mac,
+ .set_vlan = &qed_sriov_pf_set_vlan,
+ .get_config = &qed_get_vf_config,
+ .set_link_state = &qed_set_vf_link_state,
+ .set_spoof = &qed_spoof_configure,
+ .set_rate = &qed_set_vf_rate,
+};
diff --git a/drivers/net/ethernet/qlogic/qed/qed_sriov.h b/drivers/net/ethernet/qlogic/qed/qed_sriov.h
new file mode 100644
index 000000000000..c8667c65e685
--- /dev/null
+++ b/drivers/net/ethernet/qlogic/qed/qed_sriov.h
@@ -0,0 +1,386 @@
+/* QLogic qed NIC Driver
+ * Copyright (c) 2015 QLogic Corporation
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#ifndef _QED_SRIOV_H
+#define _QED_SRIOV_H
+#include <linux/types.h>
+#include "qed_vf.h"
+#define QED_VF_ARRAY_LENGTH (3)
+
+#define IS_VF(cdev) ((cdev)->b_is_vf)
+#define IS_PF(cdev) (!((cdev)->b_is_vf))
+#ifdef CONFIG_QED_SRIOV
+#define IS_PF_SRIOV(p_hwfn) (!!((p_hwfn)->cdev->p_iov_info))
+#else
+#define IS_PF_SRIOV(p_hwfn) (0)
+#endif
+#define IS_PF_SRIOV_ALLOC(p_hwfn) (!!((p_hwfn)->pf_iov_info))
+
+#define QED_MAX_VF_CHAINS_PER_PF 16
+#define QED_ETH_VF_NUM_VLAN_FILTERS 2
+
+#define QED_ETH_MAX_VF_NUM_VLAN_FILTERS \
+ (MAX_NUM_VFS * QED_ETH_VF_NUM_VLAN_FILTERS)
+
+enum qed_iov_vport_update_flag {
+ QED_IOV_VP_UPDATE_ACTIVATE,
+ QED_IOV_VP_UPDATE_VLAN_STRIP,
+ QED_IOV_VP_UPDATE_TX_SWITCH,
+ QED_IOV_VP_UPDATE_MCAST,
+ QED_IOV_VP_UPDATE_ACCEPT_PARAM,
+ QED_IOV_VP_UPDATE_RSS,
+ QED_IOV_VP_UPDATE_ACCEPT_ANY_VLAN,
+ QED_IOV_VP_UPDATE_SGE_TPA,
+ QED_IOV_VP_UPDATE_MAX,
+};
+
+struct qed_public_vf_info {
+ /* These copies will later be reflected in the bulletin board,
+ * but this copy should be newer.
+ */
+ u8 forced_mac[ETH_ALEN];
+ u16 forced_vlan;
+ u8 mac[ETH_ALEN];
+
+ /* IFLA_VF_LINK_STATE_<X> */
+ int link_state;
+
+ /* Currently configured Tx rate in Mb/s. 0 if unconfigured */
+ int tx_rate;
+};
+
+/* This struct is part of qed_dev and contains data relevant to all hwfns;
+ * Initialized only if the SR-IOV capability is exposed in PCIe config space.
+ */
+struct qed_hw_sriov_info {
+ int pos; /* capability position */
+ int nres; /* number of resources */
+ u32 cap; /* SR-IOV Capabilities */
+ u16 ctrl; /* SR-IOV Control */
+ u16 total_vfs; /* total VFs associated with the PF */
+ u16 num_vfs; /* number of vfs that have been started */
+ u16 initial_vfs; /* initial VFs associated with the PF */
+ u16 nr_virtfn; /* number of VFs available */
+ u16 offset; /* first VF Routing ID offset */
+ u16 stride; /* following VF stride */
+ u16 vf_device_id; /* VF device id */
+ u32 pgsz; /* page size for BAR alignment */
+ u8 link; /* Function Dependency Link */
+
+ u32 first_vf_in_pf;
+};
+
+/* This mailbox is maintained per VF in its PF and contains all information
+ * required for sending / receiving a message.
+ */
+struct qed_iov_vf_mbx {
+ union vfpf_tlvs *req_virt;
+ dma_addr_t req_phys;
+ union pfvf_tlvs *reply_virt;
+ dma_addr_t reply_phys;
+
+ /* Address in VF where a pending message is located */
+ dma_addr_t pending_req;
+
+ u8 *offset;
+
+ /* saved VF request header */
+ struct vfpf_first_tlv first_tlv;
+};
+
+struct qed_vf_q_info {
+ u16 fw_rx_qid;
+ u16 fw_tx_qid;
+ u8 fw_cid;
+ u8 rxq_active;
+ u8 txq_active;
+};
+
+enum vf_state {
+ VF_FREE = 0, /* VF ready to be acquired; holds no resources */
+ VF_ACQUIRED, /* VF acquired, but not initialized */
+ VF_ENABLED, /* VF, Enabled */
+ VF_RESET, /* VF, FLR'd, pending cleanup */
+ VF_STOPPED /* VF, Stopped */
+};
+
+struct qed_vf_vlan_shadow {
+ bool used;
+ u16 vid;
+};
+
+struct qed_vf_shadow_config {
+ /* Shadow copy of all guest vlans */
+ struct qed_vf_vlan_shadow vlans[QED_ETH_VF_NUM_VLAN_FILTERS + 1];
+
+ u8 inner_vlan_removal;
+};
+
+/* PFs maintain an array of this structure, per VF */
+struct qed_vf_info {
+ struct qed_iov_vf_mbx vf_mbx;
+ enum vf_state state;
+ bool b_init;
+ u8 to_disable;
+
+ struct qed_bulletin bulletin;
+ dma_addr_t vf_bulletin;
+
+ u32 concrete_fid;
+ u16 opaque_fid;
+ u16 mtu;
+
+ u8 vport_id;
+ u8 relative_vf_id;
+ u8 abs_vf_id;
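+/* Per-path absolute VF ID; VFs of the second engine are offset by MAX_NUM_VFS_BB */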
+#define QED_VF_ABS_ID(p_hwfn, p_vf) (QED_PATH_ID(p_hwfn) ? \
+ (p_vf)->abs_vf_id + MAX_NUM_VFS_BB : \
+ (p_vf)->abs_vf_id)
+
+ u8 vport_instance;
+ u8 num_rxqs;
+ u8 num_txqs;
+
+ u8 num_sbs;
+
+ u8 num_mac_filters;
+ u8 num_vlan_filters;
+ struct qed_vf_q_info vf_queues[QED_MAX_VF_CHAINS_PER_PF];
+ u16 igu_sbs[QED_MAX_VF_CHAINS_PER_PF];
+ u8 num_active_rxqs;
+ struct qed_public_vf_info p_vf_info;
+ bool spoof_chk;
+ bool req_spoofchk_val;
+
+ /* Stores the configuration requested by VF */
+ struct qed_vf_shadow_config shadow_config;
+
+ /* A bitfield using bulletin's valid-map bits, used to indicate
+ * which of the bulletin board features have been configured.
+ */
+ u64 configured_features;
+#define QED_IOV_CONFIGURED_FEATURES_MASK ((1 << MAC_ADDR_FORCED) | \
+ (1 << VLAN_ADDR_FORCED))
+};
+
+/* This structure is part of qed_hwfn and used only for PFs that have sriov
+ * capability enabled.
+ */
+struct qed_pf_iov {
+ struct qed_vf_info vfs_array[MAX_NUM_VFS];
+ u64 pending_events[QED_VF_ARRAY_LENGTH];
+ u64 pending_flr[QED_VF_ARRAY_LENGTH];
+
+ /* Message buffers are allocated contiguously and split among the VFs */
+ void *mbx_msg_virt_addr;
+ dma_addr_t mbx_msg_phys_addr;
+ u32 mbx_msg_size;
+ void *mbx_reply_virt_addr;
+ dma_addr_t mbx_reply_phys_addr;
+ u32 mbx_reply_size;
+ void *p_bulletins;
+ dma_addr_t bulletins_phys;
+ u32 bulletins_size;
+};
+
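+/* Bits set in hwfn->iov_task_flags and consumed by the IOV work item */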
+enum qed_iov_wq_flag {
+ QED_IOV_WQ_MSG_FLAG,
+ QED_IOV_WQ_SET_UNICAST_FILTER_FLAG,
+ QED_IOV_WQ_BULLETIN_UPDATE_FLAG,
+ QED_IOV_WQ_STOP_WQ_FLAG,
+ QED_IOV_WQ_FLR_FLAG,
+};
+
+#ifdef CONFIG_QED_SRIOV
+/**
+ * @brief Given a VF index, return the index of the next active VF (including that index).
+ *
+ * @param p_hwfn
+ * @param rel_vf_id
+ *
+ * @return MAX_NUM_VFS in case no further active VFs, otherwise index.
+ */
+u16 qed_iov_get_next_active_vf(struct qed_hwfn *p_hwfn, u16 rel_vf_id);
+
+/**
+ * @brief Read SR-IOV related information and allocate resources;
+ * reads from configuration space, shmem, etc.
+ *
+ * @param p_hwfn
+ *
+ * @return int
+ */
+int qed_iov_hw_info(struct qed_hwfn *p_hwfn);
+
+/**
+ * @brief qed_add_tlv - place a given tlv on the tlv buffer at next offset
+ *
+ * @param p_hwfn
+ * @param p_iov
+ * @param type
+ * @param length
+ *
+ * @return pointer to the newly placed tlv
+ */
+void *qed_add_tlv(struct qed_hwfn *p_hwfn, u8 **offset, u16 type, u16 length);
+
+/**
+ * @brief list the types and lengths of the tlvs on the buffer
+ *
+ * @param p_hwfn
+ * @param tlvs_list
+ */
+void qed_dp_tlv_list(struct qed_hwfn *p_hwfn, void *tlvs_list);
+
+/**
+ * @brief qed_iov_alloc - allocate sriov related resources
+ *
+ * @param p_hwfn
+ *
+ * @return int
+ */
+int qed_iov_alloc(struct qed_hwfn *p_hwfn);
+
+/**
+ * @brief qed_iov_setup - setup sriov related resources
+ *
+ * @param p_hwfn
+ * @param p_ptt
+ */
+void qed_iov_setup(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt);
+
+/**
+ * @brief qed_iov_free - free sriov related resources
+ *
+ * @param p_hwfn
+ */
+void qed_iov_free(struct qed_hwfn *p_hwfn);
+
+/**
+ * @brief free sriov related memory that was allocated during hw_prepare
+ *
+ * @param cdev
+ */
+void qed_iov_free_hw_info(struct qed_dev *cdev);
+
+/**
+ * @brief qed_sriov_eqe_event - handle async sriov event arrived on eqe.
+ *
+ * @param p_hwfn
+ * @param opcode
+ * @param echo
+ * @param data
+ */
+int qed_sriov_eqe_event(struct qed_hwfn *p_hwfn,
+ u8 opcode, __le16 echo, union event_ring_data *data);
+
+/**
+ * @brief Mark structs of vfs that have been FLR-ed.
+ *
+ * @param p_hwfn
+ * @param disabled_vfs - bitmask of all VFs on path that were FLRed
+ *
+ * @return 1 iff one of the PF's vfs got FLRed. 0 otherwise.
+ */
+int qed_iov_mark_vf_flr(struct qed_hwfn *p_hwfn, u32 *disabled_vfs);
+
+/**
+ * @brief Search extended TLVs in request/reply buffer.
+ *
+ * @param p_hwfn
+ * @param p_tlvs_list - Pointer to tlvs list
+ * @param req_type - Type of TLV
+ *
+ * @return pointer to tlv type if found, otherwise returns NULL.
+ */
+void *qed_iov_search_list_tlvs(struct qed_hwfn *p_hwfn,
+ void *p_tlvs_list, u16 req_type);
+
+void qed_iov_wq_stop(struct qed_dev *cdev, bool schedule_first);
+int qed_iov_wq_start(struct qed_dev *cdev);
+
+void qed_schedule_iov(struct qed_hwfn *hwfn, enum qed_iov_wq_flag flag);
+void qed_vf_start_iov_wq(struct qed_dev *cdev);
+int qed_sriov_disable(struct qed_dev *cdev, bool pci_enabled);
+void qed_inform_vf_link_state(struct qed_hwfn *hwfn);
+#else
+static inline u16 qed_iov_get_next_active_vf(struct qed_hwfn *p_hwfn,
+ u16 rel_vf_id)
+{
+ return MAX_NUM_VFS;
+}
+
+static inline int qed_iov_hw_info(struct qed_hwfn *p_hwfn)
+{
+ return 0;
+}
+
+static inline int qed_iov_alloc(struct qed_hwfn *p_hwfn)
+{
+ return 0;
+}
+
+static inline void qed_iov_setup(struct qed_hwfn *p_hwfn, struct qed_ptt *p_ptt)
+{
+}
+
+static inline void qed_iov_free(struct qed_hwfn *p_hwfn)
+{
+}
+
+static inline void qed_iov_free_hw_info(struct qed_dev *cdev)
+{
+}
+
+static inline int qed_sriov_eqe_event(struct qed_hwfn *p_hwfn,
+ u8 opcode,
+ __le16 echo, union event_ring_data *data)
+{
+ return -EINVAL;
+}
+
+static inline int qed_iov_mark_vf_flr(struct qed_hwfn *p_hwfn,
+ u32 *disabled_vfs)
+{
+ return 0;
+}
+
+static inline void qed_iov_wq_stop(struct qed_dev *cdev, bool schedule_first)
+{
+}
+
+static inline int qed_iov_wq_start(struct qed_dev *cdev)
+{
+ return 0;
+}
+
+static inline void qed_schedule_iov(struct qed_hwfn *hwfn,
+ enum qed_iov_wq_flag flag)
+{
+}
+
+static inline void qed_vf_start_iov_wq(struct qed_dev *cdev)
+{
+}
+
+static inline int qed_sriov_disable(struct qed_dev *cdev, bool pci_enabled)
+{
+ return 0;
+}
+
+static inline void qed_inform_vf_link_state(struct qed_hwfn *hwfn)
+{
+}
+#endif
+
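+/* Iterate over the currently active VFs of a hwfn */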
+#define qed_for_each_vf(_p_hwfn, _i) \
+ for (_i = qed_iov_get_next_active_vf(_p_hwfn, 0); \
+ _i < MAX_NUM_VFS; \
+ _i = qed_iov_get_next_active_vf(_p_hwfn, _i + 1))
+
+#endif
diff --git a/drivers/net/ethernet/qlogic/qed/qed_vf.c b/drivers/net/ethernet/qlogic/qed/qed_vf.c
new file mode 100644
index 000000000000..72e69c0ec10d
--- /dev/null
+++ b/drivers/net/ethernet/qlogic/qed/qed_vf.c
@@ -0,0 +1,1102 @@
+/* QLogic qed NIC Driver
+ * Copyright (c) 2015 QLogic Corporation
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#include <linux/crc32.h>
+#include <linux/etherdevice.h>
+#include "qed.h"
+#include "qed_sriov.h"
+#include "qed_vf.h"
+
+static void *qed_vf_pf_prep(struct qed_hwfn *p_hwfn, u16 type, u16 length)
+{
+ struct qed_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ void *p_tlv;
+
+ /* This lock is released when we receive PF's response
+ * in qed_send_msg2pf().
+ * So, qed_vf_pf_prep() and qed_send_msg2pf()
+ * must come in sequence.
+ */
+ mutex_lock(&(p_iov->mutex));
+
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_IOV,
+ "preparing to send 0x%04x tlv over vf pf channel\n",
+ type);
+
+ /* Reset Request offset */
+ p_iov->offset = (u8 *)p_iov->vf2pf_request;
+
+ /* Clear mailbox - both request and reply */
+ memset(p_iov->vf2pf_request, 0, sizeof(union vfpf_tlvs));
+ memset(p_iov->pf2vf_reply, 0, sizeof(union pfvf_tlvs));
+
+ /* Init type and length */
+ p_tlv = qed_add_tlv(p_hwfn, &p_iov->offset, type, length);
+
+ /* Init first tlv header */
+ ((struct vfpf_first_tlv *)p_tlv)->reply_address =
+ (u64)p_iov->pf2vf_reply_phys;
+
+ return p_tlv;
+}
+
+static int qed_send_msg2pf(struct qed_hwfn *p_hwfn, u8 *done, u32 resp_size)
+{
+ union vfpf_tlvs *p_req = p_hwfn->vf_iov_info->vf2pf_request;
+ struct ustorm_trigger_vf_zone trigger;
+ struct ustorm_vf_zone *zone_data;
+ int rc = 0, time = 100;
+
+ zone_data = (struct ustorm_vf_zone *)PXP_VF_BAR0_START_USDM_ZONE_B;
+
+ /* output tlvs list */
+ qed_dp_tlv_list(p_hwfn, p_req);
+
+ /* need to add the END TLV to the message size */
+ resp_size += sizeof(struct channel_list_end_tlv);
+
+ /* Send TLVs over HW channel */
+ memset(&trigger, 0, sizeof(struct ustorm_trigger_vf_zone));
+ trigger.vf_pf_msg_valid = 1;
+
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_IOV,
+ "VF -> PF [%02x] message: [%08x, %08x] --> %p, %08x --> %p\n",
+ GET_FIELD(p_hwfn->hw_info.concrete_fid,
+ PXP_CONCRETE_FID_PFID),
+ upper_32_bits(p_hwfn->vf_iov_info->vf2pf_request_phys),
+ lower_32_bits(p_hwfn->vf_iov_info->vf2pf_request_phys),
+ &zone_data->non_trigger.vf_pf_msg_addr,
+ *((u32 *)&trigger), &zone_data->trigger);
+
+ REG_WR(p_hwfn,
+ (uintptr_t)&zone_data->non_trigger.vf_pf_msg_addr.lo,
+ lower_32_bits(p_hwfn->vf_iov_info->vf2pf_request_phys));
+
+ REG_WR(p_hwfn,
+ (uintptr_t)&zone_data->non_trigger.vf_pf_msg_addr.hi,
+ upper_32_bits(p_hwfn->vf_iov_info->vf2pf_request_phys));
+
+ /* The message data must be written first, to prevent trigger before
+ * data is written.
+ */
+ wmb();
+
+ REG_WR(p_hwfn, (uintptr_t)&zone_data->trigger, *((u32 *)&trigger));
+
+ /* When the PF is done with the response it writes back to the `done'
+ * address. Poll until then (up to 100 * 25ms = ~2.5 seconds).
+ */
+ while ((!*done) && time) {
+ msleep(25);
+ time--;
+ }
+
+ if (!*done) {
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "VF <-- PF Timeout [Type %d]\n",
+ p_req->first_tlv.tl.type);
+ rc = -EBUSY;
+ goto exit;
+ } else {
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "PF response: %d [Type %d]\n",
+ *done, p_req->first_tlv.tl.type);
+ }
+
+exit:
+ mutex_unlock(&(p_hwfn->vf_iov_info->mutex));
+
+ return rc;
+}
+
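+/* Number of times the VF retries ACQUIRE with the PF-recommended resource
+ * amounts before giving up.
+ */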
+#define VF_ACQUIRE_THRESH 3
+#define VF_ACQUIRE_MAC_FILTERS 1
+
+static int qed_vf_pf_acquire(struct qed_hwfn *p_hwfn)
+{
+ struct qed_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct pfvf_acquire_resp_tlv *resp = &p_iov->pf2vf_reply->acquire_resp;
+ struct pf_vf_pfdev_info *pfdev_info = &resp->pfdev_info;
+ u8 rx_count = 1, tx_count = 1, num_sbs = 1;
+ u8 num_mac = VF_ACQUIRE_MAC_FILTERS;
+ bool resources_acquired = false;
+ struct vfpf_acquire_tlv *req;
+ int rc = 0, attempts = 0;
+
+ /* clear mailbox and prep first tlv */
+ req = qed_vf_pf_prep(p_hwfn, CHANNEL_TLV_ACQUIRE, sizeof(*req));
+
+ /* start filling the request */
+ req->vfdev_info.opaque_fid = p_hwfn->hw_info.opaque_fid;
+
+ req->resc_request.num_rxqs = rx_count;
+ req->resc_request.num_txqs = tx_count;
+ req->resc_request.num_sbs = num_sbs;
+ req->resc_request.num_mac_filters = num_mac;
+ req->resc_request.num_vlan_filters = QED_ETH_VF_NUM_VLAN_FILTERS;
+
+ req->vfdev_info.os_type = VFPF_ACQUIRE_OS_LINUX;
+ req->vfdev_info.fw_major = FW_MAJOR_VERSION;
+ req->vfdev_info.fw_minor = FW_MINOR_VERSION;
+ req->vfdev_info.fw_revision = FW_REVISION_VERSION;
+ req->vfdev_info.fw_engineering = FW_ENGINEERING_VERSION;
+
+ /* Fill capability field with any non-deprecated config we support */
+ req->vfdev_info.capabilities |= VFPF_ACQUIRE_CAP_100G;
+
+ /* pf 2 vf bulletin board address */
+ req->bulletin_addr = p_iov->bulletin.phys;
+ req->bulletin_size = p_iov->bulletin.size;
+
+ /* add list termination tlv */
+ qed_add_tlv(p_hwfn, &p_iov->offset,
+ CHANNEL_TLV_LIST_END, sizeof(struct channel_list_end_tlv));
+
+ while (!resources_acquired) {
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_IOV, "attempting to acquire resources\n");
+
+ /* send acquire request */
+ rc = qed_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+ if (rc)
+ return rc;
+
+ /* copy acquire response from buffer to p_hwfn */
+ memcpy(&p_iov->acquire_resp, resp, sizeof(p_iov->acquire_resp));
+
+ attempts++;
+
+ if (resp->hdr.status == PFVF_STATUS_SUCCESS) {
+ /* PF agrees to allocate our resources */
+ if (!(resp->pfdev_info.capabilities &
+ PFVF_ACQUIRE_CAP_POST_FW_OVERRIDE)) {
+ DP_INFO(p_hwfn,
+ "PF is using old incompatible driver; Either downgrade driver or request provider to update hypervisor version\n");
+ return -EINVAL;
+ }
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV, "resources acquired\n");
+ resources_acquired = true;
+ } else if (resp->hdr.status == PFVF_STATUS_NO_RESOURCE &&
+ attempts < VF_ACQUIRE_THRESH) {
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_IOV,
+ "PF unwilling to fullfill resource request. Try PF recommended amount\n");
+
+ /* humble our request */
+ req->resc_request.num_txqs = resp->resc.num_txqs;
+ req->resc_request.num_rxqs = resp->resc.num_rxqs;
+ req->resc_request.num_sbs = resp->resc.num_sbs;
+ req->resc_request.num_mac_filters =
+ resp->resc.num_mac_filters;
+ req->resc_request.num_vlan_filters =
+ resp->resc.num_vlan_filters;
+
+ /* Clear response buffer */
+ memset(p_iov->pf2vf_reply, 0, sizeof(union pfvf_tlvs));
+ } else {
+ DP_ERR(p_hwfn,
+ "PF returned error %d to VF acquisition request\n",
+ resp->hdr.status);
+ return -EAGAIN;
+ }
+ }
+
+ /* Update bulletin board size with response from PF */
+ p_iov->bulletin.size = resp->bulletin_size;
+
+ /* get HW info */
+ p_hwfn->cdev->type = resp->pfdev_info.dev_type;
+ p_hwfn->cdev->chip_rev = resp->pfdev_info.chip_rev;
+
+ p_hwfn->cdev->chip_num = pfdev_info->chip_num & 0xffff;
+
+ /* Learn of the possibility of CMT */
+ if (IS_LEAD_HWFN(p_hwfn)) {
+ if (resp->pfdev_info.capabilities & PFVF_ACQUIRE_CAP_100G) {
+ DP_NOTICE(p_hwfn, "100g VF\n");
+ p_hwfn->cdev->num_hwfns = 2;
+ }
+ }
+
+ return 0;
+}
+
+int qed_vf_hw_prepare(struct qed_hwfn *p_hwfn)
+{
+ struct qed_vf_iov *p_iov;
+ u32 reg;
+
+ /* Set number of hwfns - might be overridden once the leading hwfn learns
+ * actual configuration from PF.
+ */
+ if (IS_LEAD_HWFN(p_hwfn))
+ p_hwfn->cdev->num_hwfns = 1;
+
+ /* Set the doorbell bar. Assumption: regview is set */
+ p_hwfn->doorbells = (u8 __iomem *)p_hwfn->regview +
+ PXP_VF_BAR0_START_DQ;
+
+ reg = PXP_VF_BAR0_ME_OPAQUE_ADDRESS;
+ p_hwfn->hw_info.opaque_fid = (u16)REG_RD(p_hwfn, reg);
+
+ reg = PXP_VF_BAR0_ME_CONCRETE_ADDRESS;
+ p_hwfn->hw_info.concrete_fid = REG_RD(p_hwfn, reg);
+
+ /* Allocate vf sriov info */
+ p_iov = kzalloc(sizeof(*p_iov), GFP_KERNEL);
+ if (!p_iov) {
+ DP_NOTICE(p_hwfn, "Failed to allocate `struct qed_sriov'\n");
+ return -ENOMEM;
+ }
+
+ /* Allocate vf2pf msg */
+ p_iov->vf2pf_request = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev,
+ sizeof(union vfpf_tlvs),
+ &p_iov->vf2pf_request_phys,
+ GFP_KERNEL);
+ if (!p_iov->vf2pf_request) {
+ DP_NOTICE(p_hwfn,
+ "Failed to allocate `vf2pf_request' DMA memory\n");
+ goto free_p_iov;
+ }
+
+ p_iov->pf2vf_reply = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev,
+ sizeof(union pfvf_tlvs),
+ &p_iov->pf2vf_reply_phys,
+ GFP_KERNEL);
+ if (!p_iov->pf2vf_reply) {
+ DP_NOTICE(p_hwfn,
+ "Failed to allocate `pf2vf_reply' DMA memory\n");
+ goto free_vf2pf_request;
+ }
+
+ DP_VERBOSE(p_hwfn,
+ QED_MSG_IOV,
+ "VF's Request mailbox [%p virt 0x%llx phys], Response mailbox [%p virt 0x%llx phys]\n",
+ p_iov->vf2pf_request,
+ (u64) p_iov->vf2pf_request_phys,
+ p_iov->pf2vf_reply, (u64)p_iov->pf2vf_reply_phys);
+
+ /* Allocate Bulletin board */
+ p_iov->bulletin.size = sizeof(struct qed_bulletin_content);
+ p_iov->bulletin.p_virt = dma_alloc_coherent(&p_hwfn->cdev->pdev->dev,
+ p_iov->bulletin.size,
+ &p_iov->bulletin.phys,
+ GFP_KERNEL);
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "VF's bulletin Board [%p virt 0x%llx phys 0x%08x bytes]\n",
+ p_iov->bulletin.p_virt,
+ (u64)p_iov->bulletin.phys, p_iov->bulletin.size);
+
+ mutex_init(&p_iov->mutex);
+
+ p_hwfn->vf_iov_info = p_iov;
+
+ p_hwfn->hw_info.personality = QED_PCI_ETH;
+
+ return qed_vf_pf_acquire(p_hwfn);
+
+free_vf2pf_request:
+ dma_free_coherent(&p_hwfn->cdev->pdev->dev,
+ sizeof(union vfpf_tlvs),
+ p_iov->vf2pf_request, p_iov->vf2pf_request_phys);
+free_p_iov:
+ kfree(p_iov);
+
+ return -ENOMEM;
+}
+
+int qed_vf_pf_rxq_start(struct qed_hwfn *p_hwfn,
+ u8 rx_qid,
+ u16 sb,
+ u8 sb_index,
+ u16 bd_max_bytes,
+ dma_addr_t bd_chain_phys_addr,
+ dma_addr_t cqe_pbl_addr,
+ u16 cqe_pbl_size, void __iomem **pp_prod)
+{
+ struct qed_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct pfvf_start_queue_resp_tlv *resp;
+ struct vfpf_start_rxq_tlv *req;
+ int rc;
+
+ /* clear mailbox and prep first tlv */
+ req = qed_vf_pf_prep(p_hwfn, CHANNEL_TLV_START_RXQ, sizeof(*req));
+
+ req->rx_qid = rx_qid;
+ req->cqe_pbl_addr = cqe_pbl_addr;
+ req->cqe_pbl_size = cqe_pbl_size;
+ req->rxq_addr = bd_chain_phys_addr;
+ req->hw_sb = sb;
+ req->sb_index = sb_index;
+ req->bd_max_bytes = bd_max_bytes;
+ req->stat_id = -1;
+
+ /* add list termination tlv */
+ qed_add_tlv(p_hwfn, &p_iov->offset,
+ CHANNEL_TLV_LIST_END, sizeof(struct channel_list_end_tlv));
+
+ resp = &p_iov->pf2vf_reply->queue_start;
+ rc = qed_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+ if (rc)
+ return rc;
+
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+ return -EINVAL;
+
+ /* Learn the address of the producer from the response */
+ if (pp_prod) {
+ u64 init_prod_val = 0;
+
+ *pp_prod = (u8 __iomem *)p_hwfn->regview + resp->offset;
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "Rxq[0x%02x]: producer at %p [offset 0x%08x]\n",
+ rx_qid, *pp_prod, resp->offset);
+
+ /* Init the rcq, rx bd and rx sge (if valid) producers to 0 */
+ __internal_ram_wr(p_hwfn, *pp_prod, sizeof(u64),
+ (u32 *)&init_prod_val);
+ }
+
+ return rc;
+}
+
+int qed_vf_pf_rxq_stop(struct qed_hwfn *p_hwfn, u16 rx_qid, bool cqe_completion)
+{
+ struct qed_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct vfpf_stop_rxqs_tlv *req;
+ struct pfvf_def_resp_tlv *resp;
+ int rc;
+
+ /* clear mailbox and prep first tlv */
+ req = qed_vf_pf_prep(p_hwfn, CHANNEL_TLV_STOP_RXQS, sizeof(*req));
+
+ req->rx_qid = rx_qid;
+ req->num_rxqs = 1;
+ req->cqe_completion = cqe_completion;
+
+ /* add list termination tlv */
+ qed_add_tlv(p_hwfn, &p_iov->offset,
+ CHANNEL_TLV_LIST_END, sizeof(struct channel_list_end_tlv));
+
+ resp = &p_iov->pf2vf_reply->default_resp;
+ rc = qed_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+ if (rc)
+ return rc;
+
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+ return -EINVAL;
+
+ return rc;
+}
+
+int qed_vf_pf_txq_start(struct qed_hwfn *p_hwfn,
+ u16 tx_queue_id,
+ u16 sb,
+ u8 sb_index,
+ dma_addr_t pbl_addr,
+ u16 pbl_size, void __iomem **pp_doorbell)
+{
+ struct qed_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct vfpf_start_txq_tlv *req;
+ struct pfvf_def_resp_tlv *resp;
+ int rc;
+
+ /* clear mailbox and prep first tlv */
+ req = qed_vf_pf_prep(p_hwfn, CHANNEL_TLV_START_TXQ, sizeof(*req));
+
+ req->tx_qid = tx_queue_id;
+
+ /* Tx */
+ req->pbl_addr = pbl_addr;
+ req->pbl_size = pbl_size;
+ req->hw_sb = sb;
+ req->sb_index = sb_index;
+
+ /* add list termination tlv */
+ qed_add_tlv(p_hwfn, &p_iov->offset,
+ CHANNEL_TLV_LIST_END, sizeof(struct channel_list_end_tlv));
+
+ resp = &p_iov->pf2vf_reply->default_resp;
+ rc = qed_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+ if (rc)
+ return rc;
+
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+ return -EINVAL;
+
+ if (pp_doorbell) {
+ u8 cid = p_iov->acquire_resp.resc.cid[tx_queue_id];
+
+ *pp_doorbell = (u8 __iomem *)p_hwfn->doorbells +
+ qed_db_addr(cid, DQ_DEMS_LEGACY);
+ }
+
+ return rc;
+}
+
+int qed_vf_pf_txq_stop(struct qed_hwfn *p_hwfn, u16 tx_qid)
+{
+ struct qed_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct vfpf_stop_txqs_tlv *req;
+ struct pfvf_def_resp_tlv *resp;
+ int rc;
+
+ /* clear mailbox and prep first tlv */
+ req = qed_vf_pf_prep(p_hwfn, CHANNEL_TLV_STOP_TXQS, sizeof(*req));
+
+ req->tx_qid = tx_qid;
+ req->num_txqs = 1;
+
+ /* add list termination tlv */
+ qed_add_tlv(p_hwfn, &p_iov->offset,
+ CHANNEL_TLV_LIST_END, sizeof(struct channel_list_end_tlv));
+
+ resp = &p_iov->pf2vf_reply->default_resp;
+ rc = qed_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+ if (rc)
+ return rc;
+
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+ return -EINVAL;
+
+ return rc;
+}
+
+int qed_vf_pf_vport_start(struct qed_hwfn *p_hwfn,
+ u8 vport_id,
+ u16 mtu,
+ u8 inner_vlan_removal,
+ enum qed_tpa_mode tpa_mode,
+ u8 max_buffers_per_cqe, u8 only_untagged)
+{
+ struct qed_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct vfpf_vport_start_tlv *req;
+ struct pfvf_def_resp_tlv *resp;
+ int rc, i;
+
+ /* clear mailbox and prep first tlv */
+ req = qed_vf_pf_prep(p_hwfn, CHANNEL_TLV_VPORT_START, sizeof(*req));
+
+ req->mtu = mtu;
+ req->vport_id = vport_id;
+ req->inner_vlan_removal = inner_vlan_removal;
+ req->tpa_mode = tpa_mode;
+ req->max_buffers_per_cqe = max_buffers_per_cqe;
+ req->only_untagged = only_untagged;
+
+ /* status blocks */
+ for (i = 0; i < p_hwfn->vf_iov_info->acquire_resp.resc.num_sbs; i++)
+ if (p_hwfn->sbs_info[i])
+ req->sb_addr[i] = p_hwfn->sbs_info[i]->sb_phys;
+
+ /* add list termination tlv */
+ qed_add_tlv(p_hwfn, &p_iov->offset,
+ CHANNEL_TLV_LIST_END, sizeof(struct channel_list_end_tlv));
+
+ resp = &p_iov->pf2vf_reply->default_resp;
+ rc = qed_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+ if (rc)
+ return rc;
+
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+ return -EINVAL;
+
+ return rc;
+}
+
+int qed_vf_pf_vport_stop(struct qed_hwfn *p_hwfn)
+{
+ struct qed_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
+ int rc;
+
+ /* clear mailbox and prep first tlv */
+ qed_vf_pf_prep(p_hwfn, CHANNEL_TLV_VPORT_TEARDOWN,
+ sizeof(struct vfpf_first_tlv));
+
+ /* add list termination tlv */
+ qed_add_tlv(p_hwfn, &p_iov->offset,
+ CHANNEL_TLV_LIST_END, sizeof(struct channel_list_end_tlv));
+
+ rc = qed_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+ if (rc)
+ return rc;
+
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+ return -EINVAL;
+
+ return rc;
+}
+
+static bool
+qed_vf_handle_vp_update_is_needed(struct qed_hwfn *p_hwfn,
+ struct qed_sp_vport_update_params *p_data,
+ u16 tlv)
+{
+ switch (tlv) {
+ case CHANNEL_TLV_VPORT_UPDATE_ACTIVATE:
+ return !!(p_data->update_vport_active_rx_flg ||
+ p_data->update_vport_active_tx_flg);
+ case CHANNEL_TLV_VPORT_UPDATE_TX_SWITCH:
+ return !!p_data->update_tx_switching_flg;
+ case CHANNEL_TLV_VPORT_UPDATE_VLAN_STRIP:
+ return !!p_data->update_inner_vlan_removal_flg;
+ case CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN:
+ return !!p_data->update_accept_any_vlan_flg;
+ case CHANNEL_TLV_VPORT_UPDATE_MCAST:
+ return !!p_data->update_approx_mcast_flg;
+ case CHANNEL_TLV_VPORT_UPDATE_ACCEPT_PARAM:
+ return !!(p_data->accept_flags.update_rx_mode_config ||
+ p_data->accept_flags.update_tx_mode_config);
+ case CHANNEL_TLV_VPORT_UPDATE_RSS:
+ return !!p_data->rss_params;
+ case CHANNEL_TLV_VPORT_UPDATE_SGE_TPA:
+ return !!p_data->sge_tpa_params;
+ default:
+ DP_INFO(p_hwfn, "Unexpected vport-update TLV[%d]\n",
+ tlv);
+ return false;
+ }
+}
+
+static void
+qed_vf_handle_vp_update_tlvs_resp(struct qed_hwfn *p_hwfn,
+ struct qed_sp_vport_update_params *p_data)
+{
+ struct qed_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct pfvf_def_resp_tlv *p_resp;
+ u16 tlv;
+
+ for (tlv = CHANNEL_TLV_VPORT_UPDATE_ACTIVATE;
+ tlv < CHANNEL_TLV_VPORT_UPDATE_MAX; tlv++) {
+ if (!qed_vf_handle_vp_update_is_needed(p_hwfn, p_data, tlv))
+ continue;
+
+ p_resp = (struct pfvf_def_resp_tlv *)
+ qed_iov_search_list_tlvs(p_hwfn, p_iov->pf2vf_reply,
+ tlv);
+ if (p_resp && p_resp->hdr.status)
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "TLV[%d] Configuration %s\n",
+ tlv,
+ (p_resp && p_resp->hdr.status) ? "succeeded"
+ : "failed");
+ }
+}
+
+int qed_vf_pf_vport_update(struct qed_hwfn *p_hwfn,
+ struct qed_sp_vport_update_params *p_params)
+{
+ struct qed_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct vfpf_vport_update_tlv *req;
+ struct pfvf_def_resp_tlv *resp;
+ u8 update_rx, update_tx;
+ u32 resp_size = 0;
+ u16 size, tlv;
+ int rc;
+
+ resp = &p_iov->pf2vf_reply->default_resp;
+ resp_size = sizeof(*resp);
+
+ update_rx = p_params->update_vport_active_rx_flg;
+ update_tx = p_params->update_vport_active_tx_flg;
+
+ /* clear mailbox and prep header tlv */
+ qed_vf_pf_prep(p_hwfn, CHANNEL_TLV_VPORT_UPDATE, sizeof(*req));
+
+ /* Prepare extended tlvs */
+ if (update_rx || update_tx) {
+ struct vfpf_vport_update_activate_tlv *p_act_tlv;
+
+ size = sizeof(struct vfpf_vport_update_activate_tlv);
+ p_act_tlv = qed_add_tlv(p_hwfn, &p_iov->offset,
+ CHANNEL_TLV_VPORT_UPDATE_ACTIVATE,
+ size);
+ resp_size += sizeof(struct pfvf_def_resp_tlv);
+
+ if (update_rx) {
+ p_act_tlv->update_rx = update_rx;
+ p_act_tlv->active_rx = p_params->vport_active_rx_flg;
+ }
+
+ if (update_tx) {
+ p_act_tlv->update_tx = update_tx;
+ p_act_tlv->active_tx = p_params->vport_active_tx_flg;
+ }
+ }
+
+ if (p_params->update_tx_switching_flg) {
+ struct vfpf_vport_update_tx_switch_tlv *p_tx_switch_tlv;
+
+ size = sizeof(struct vfpf_vport_update_tx_switch_tlv);
+ tlv = CHANNEL_TLV_VPORT_UPDATE_TX_SWITCH;
+ p_tx_switch_tlv = qed_add_tlv(p_hwfn, &p_iov->offset,
+ tlv, size);
+ resp_size += sizeof(struct pfvf_def_resp_tlv);
+
+ p_tx_switch_tlv->tx_switching = p_params->tx_switching_flg;
+ }
+
+ if (p_params->update_approx_mcast_flg) {
+ struct vfpf_vport_update_mcast_bin_tlv *p_mcast_tlv;
+
+ size = sizeof(struct vfpf_vport_update_mcast_bin_tlv);
+ p_mcast_tlv = qed_add_tlv(p_hwfn, &p_iov->offset,
+ CHANNEL_TLV_VPORT_UPDATE_MCAST, size);
+ resp_size += sizeof(struct pfvf_def_resp_tlv);
+
+ memcpy(p_mcast_tlv->bins, p_params->bins,
+ sizeof(unsigned long) * ETH_MULTICAST_MAC_BINS_IN_REGS);
+ }
+
+ update_rx = p_params->accept_flags.update_rx_mode_config;
+ update_tx = p_params->accept_flags.update_tx_mode_config;
+
+ if (update_rx || update_tx) {
+ struct vfpf_vport_update_accept_param_tlv *p_accept_tlv;
+
+ tlv = CHANNEL_TLV_VPORT_UPDATE_ACCEPT_PARAM;
+ size = sizeof(struct vfpf_vport_update_accept_param_tlv);
+ p_accept_tlv = qed_add_tlv(p_hwfn, &p_iov->offset, tlv, size);
+ resp_size += sizeof(struct pfvf_def_resp_tlv);
+
+ if (update_rx) {
+ p_accept_tlv->update_rx_mode = update_rx;
+ p_accept_tlv->rx_accept_filter =
+ p_params->accept_flags.rx_accept_filter;
+ }
+
+ if (update_tx) {
+ p_accept_tlv->update_tx_mode = update_tx;
+ p_accept_tlv->tx_accept_filter =
+ p_params->accept_flags.tx_accept_filter;
+ }
+ }
+
+ if (p_params->rss_params) {
+ struct qed_rss_params *rss_params = p_params->rss_params;
+ struct vfpf_vport_update_rss_tlv *p_rss_tlv;
+
+ size = sizeof(struct vfpf_vport_update_rss_tlv);
+ p_rss_tlv = qed_add_tlv(p_hwfn,
+ &p_iov->offset,
+ CHANNEL_TLV_VPORT_UPDATE_RSS, size);
+ resp_size += sizeof(struct pfvf_def_resp_tlv);
+
+ if (rss_params->update_rss_config)
+ p_rss_tlv->update_rss_flags |=
+ VFPF_UPDATE_RSS_CONFIG_FLAG;
+ if (rss_params->update_rss_capabilities)
+ p_rss_tlv->update_rss_flags |=
+ VFPF_UPDATE_RSS_CAPS_FLAG;
+ if (rss_params->update_rss_ind_table)
+ p_rss_tlv->update_rss_flags |=
+ VFPF_UPDATE_RSS_IND_TABLE_FLAG;
+ if (rss_params->update_rss_key)
+ p_rss_tlv->update_rss_flags |= VFPF_UPDATE_RSS_KEY_FLAG;
+
+ p_rss_tlv->rss_enable = rss_params->rss_enable;
+ p_rss_tlv->rss_caps = rss_params->rss_caps;
+ p_rss_tlv->rss_table_size_log = rss_params->rss_table_size_log;
+ memcpy(p_rss_tlv->rss_ind_table, rss_params->rss_ind_table,
+ sizeof(rss_params->rss_ind_table));
+ memcpy(p_rss_tlv->rss_key, rss_params->rss_key,
+ sizeof(rss_params->rss_key));
+ }
+
+ if (p_params->update_accept_any_vlan_flg) {
+ struct vfpf_vport_update_accept_any_vlan_tlv *p_any_vlan_tlv;
+
+ size = sizeof(struct vfpf_vport_update_accept_any_vlan_tlv);
+ tlv = CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN;
+ p_any_vlan_tlv = qed_add_tlv(p_hwfn, &p_iov->offset, tlv, size);
+
+ resp_size += sizeof(struct pfvf_def_resp_tlv);
+ p_any_vlan_tlv->accept_any_vlan = p_params->accept_any_vlan;
+ p_any_vlan_tlv->update_accept_any_vlan_flg =
+ p_params->update_accept_any_vlan_flg;
+ }
+
+ /* add list termination tlv */
+ qed_add_tlv(p_hwfn, &p_iov->offset,
+ CHANNEL_TLV_LIST_END, sizeof(struct channel_list_end_tlv));
+
+ rc = qed_send_msg2pf(p_hwfn, &resp->hdr.status, resp_size);
+ if (rc)
+ return rc;
+
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+ return -EINVAL;
+
+ qed_vf_handle_vp_update_tlvs_resp(p_hwfn, p_params);
+
+ return rc;
+}
+
+int qed_vf_pf_reset(struct qed_hwfn *p_hwfn)
+{
+ struct qed_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct pfvf_def_resp_tlv *resp;
+ struct vfpf_first_tlv *req;
+ int rc;
+
+ /* clear mailbox and prep first tlv */
+ req = qed_vf_pf_prep(p_hwfn, CHANNEL_TLV_CLOSE, sizeof(*req));
+
+ /* add list termination tlv */
+ qed_add_tlv(p_hwfn, &p_iov->offset,
+ CHANNEL_TLV_LIST_END, sizeof(struct channel_list_end_tlv));
+
+ resp = &p_iov->pf2vf_reply->default_resp;
+ rc = qed_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+ if (rc)
+ return rc;
+
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+ return -EAGAIN;
+
+ p_hwfn->b_int_enabled = 0;
+
+ return 0;
+}
+
+int qed_vf_pf_release(struct qed_hwfn *p_hwfn)
+{
+ struct qed_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct pfvf_def_resp_tlv *resp;
+ struct vfpf_first_tlv *req;
+ u32 size;
+ int rc;
+
+ /* clear mailbox and prep first tlv */
+ req = qed_vf_pf_prep(p_hwfn, CHANNEL_TLV_RELEASE, sizeof(*req));
+
+ /* add list termination tlv */
+ qed_add_tlv(p_hwfn, &p_iov->offset,
+ CHANNEL_TLV_LIST_END, sizeof(struct channel_list_end_tlv));
+
+ resp = &p_iov->pf2vf_reply->default_resp;
+ rc = qed_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+
+ if (!rc && resp->hdr.status != PFVF_STATUS_SUCCESS)
+ rc = -EAGAIN;
+
+ p_hwfn->b_int_enabled = 0;
+
+ if (p_iov->vf2pf_request)
+ dma_free_coherent(&p_hwfn->cdev->pdev->dev,
+ sizeof(union vfpf_tlvs),
+ p_iov->vf2pf_request,
+ p_iov->vf2pf_request_phys);
+ if (p_iov->pf2vf_reply)
+ dma_free_coherent(&p_hwfn->cdev->pdev->dev,
+ sizeof(union pfvf_tlvs),
+ p_iov->pf2vf_reply, p_iov->pf2vf_reply_phys);
+
+ if (p_iov->bulletin.p_virt) {
+ size = sizeof(struct qed_bulletin_content);
+ dma_free_coherent(&p_hwfn->cdev->pdev->dev,
+ size,
+ p_iov->bulletin.p_virt, p_iov->bulletin.phys);
+ }
+
+ kfree(p_hwfn->vf_iov_info);
+ p_hwfn->vf_iov_info = NULL;
+
+ return rc;
+}
+
+void qed_vf_pf_filter_mcast(struct qed_hwfn *p_hwfn,
+ struct qed_filter_mcast *p_filter_cmd)
+{
+ struct qed_sp_vport_update_params sp_params;
+ int i;
+
+ memset(&sp_params, 0, sizeof(sp_params));
+ sp_params.update_approx_mcast_flg = 1;
+
+ if (p_filter_cmd->opcode == QED_FILTER_ADD) {
+ for (i = 0; i < p_filter_cmd->num_mc_addrs; i++) {
+ u32 bit;
+
+ bit = qed_mcast_bin_from_mac(p_filter_cmd->mac[i]);
+ __set_bit(bit, sp_params.bins);
+ }
+ }
+
+ qed_vf_pf_vport_update(p_hwfn, &sp_params);
+}
+
+int qed_vf_pf_filter_ucast(struct qed_hwfn *p_hwfn,
+ struct qed_filter_ucast *p_ucast)
+{
+ struct qed_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct vfpf_ucast_filter_tlv *req;
+ struct pfvf_def_resp_tlv *resp;
+ int rc;
+
+ /* clear mailbox and prep first tlv */
+ req = qed_vf_pf_prep(p_hwfn, CHANNEL_TLV_UCAST_FILTER, sizeof(*req));
+ req->opcode = (u8) p_ucast->opcode;
+ req->type = (u8) p_ucast->type;
+ memcpy(req->mac, p_ucast->mac, ETH_ALEN);
+ req->vlan = p_ucast->vlan;
+
+ /* add list termination tlv */
+ qed_add_tlv(p_hwfn, &p_iov->offset,
+ CHANNEL_TLV_LIST_END, sizeof(struct channel_list_end_tlv));
+
+ resp = &p_iov->pf2vf_reply->default_resp;
+ rc = qed_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+ if (rc)
+ return rc;
+
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+ return -EAGAIN;
+
+ return 0;
+}
+
+int qed_vf_pf_int_cleanup(struct qed_hwfn *p_hwfn)
+{
+ struct qed_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct pfvf_def_resp_tlv *resp = &p_iov->pf2vf_reply->default_resp;
+ int rc;
+
+ /* clear mailbox and prep first tlv */
+ qed_vf_pf_prep(p_hwfn, CHANNEL_TLV_INT_CLEANUP,
+ sizeof(struct vfpf_first_tlv));
+
+ /* add list termination tlv */
+ qed_add_tlv(p_hwfn, &p_iov->offset,
+ CHANNEL_TLV_LIST_END, sizeof(struct channel_list_end_tlv));
+
+ rc = qed_send_msg2pf(p_hwfn, &resp->hdr.status, sizeof(*resp));
+ if (rc)
+ return rc;
+
+ if (resp->hdr.status != PFVF_STATUS_SUCCESS)
+ return -EINVAL;
+
+ return 0;
+}
+
+u16 qed_vf_get_igu_sb_id(struct qed_hwfn *p_hwfn, u16 sb_id)
+{
+ struct qed_vf_iov *p_iov = p_hwfn->vf_iov_info;
+
+ if (!p_iov) {
+ DP_NOTICE(p_hwfn, "vf_sriov_info isn't initialized\n");
+ return 0;
+ }
+
+ return p_iov->acquire_resp.resc.hw_sbs[sb_id].hw_sb_id;
+}
+
+int qed_vf_read_bulletin(struct qed_hwfn *p_hwfn, u8 *p_change)
+{
+ struct qed_vf_iov *p_iov = p_hwfn->vf_iov_info;
+ struct qed_bulletin_content shadow;
+ u32 crc, crc_size;
+
+ crc_size = sizeof(p_iov->bulletin.p_virt->crc);
+ *p_change = 0;
+
+ /* Need to guarantee PF is not in the middle of writing it */
+ memcpy(&shadow, p_iov->bulletin.p_virt, p_iov->bulletin.size);
+
+ /* If version did not update, no need to do anything */
+ if (shadow.version == p_iov->bulletin_shadow.version)
+ return 0;
+
+ /* Verify the bulletin we see is valid */
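+ /* The CRC covers the bulletin content past the leading crc field itself */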
+ crc = crc32(0, (u8 *)&shadow + crc_size,
+ p_iov->bulletin.size - crc_size);
+ if (crc != shadow.crc)
+ return -EAGAIN;
+
+ /* Set the shadow bulletin and process it */
+ memcpy(&p_iov->bulletin_shadow, &shadow, p_iov->bulletin.size);
+
+ DP_VERBOSE(p_hwfn, QED_MSG_IOV,
+ "Read a bulletin update %08x\n", shadow.version);
+
+ *p_change = 1;
+
+ return 0;
+}
+
+void __qed_vf_get_link_params(struct qed_hwfn *p_hwfn,
+ struct qed_mcp_link_params *p_params,
+ struct qed_bulletin_content *p_bulletin)
+{
+ memset(p_params, 0, sizeof(*p_params));
+
+ p_params->speed.autoneg = p_bulletin->req_autoneg;
+ p_params->speed.advertised_speeds = p_bulletin->req_adv_speed;
+ p_params->speed.forced_speed = p_bulletin->req_forced_speed;
+ p_params->pause.autoneg = p_bulletin->req_autoneg_pause;
+ p_params->pause.forced_rx = p_bulletin->req_forced_rx;
+ p_params->pause.forced_tx = p_bulletin->req_forced_tx;
+ p_params->loopback_mode = p_bulletin->req_loopback;
+}
+
+void qed_vf_get_link_params(struct qed_hwfn *p_hwfn,
+ struct qed_mcp_link_params *params)
+{
+ __qed_vf_get_link_params(p_hwfn, params,
+ &(p_hwfn->vf_iov_info->bulletin_shadow));
+}
+
+void __qed_vf_get_link_state(struct qed_hwfn *p_hwfn,
+ struct qed_mcp_link_state *p_link,
+ struct qed_bulletin_content *p_bulletin)
+{
+ memset(p_link, 0, sizeof(*p_link));
+
+ p_link->link_up = p_bulletin->link_up;
+ p_link->speed = p_bulletin->speed;
+ p_link->full_duplex = p_bulletin->full_duplex;
+ p_link->an = p_bulletin->autoneg;
+ p_link->an_complete = p_bulletin->autoneg_complete;
+ p_link->parallel_detection = p_bulletin->parallel_detection;
+ p_link->pfc_enabled = p_bulletin->pfc_enabled;
+ p_link->partner_adv_speed = p_bulletin->partner_adv_speed;
+ p_link->partner_tx_flow_ctrl_en = p_bulletin->partner_tx_flow_ctrl_en;
+ p_link->partner_rx_flow_ctrl_en = p_bulletin->partner_rx_flow_ctrl_en;
+ p_link->partner_adv_pause = p_bulletin->partner_adv_pause;
+ p_link->sfp_tx_fault = p_bulletin->sfp_tx_fault;
+}
+
+void qed_vf_get_link_state(struct qed_hwfn *p_hwfn,
+ struct qed_mcp_link_state *link)
+{
+ __qed_vf_get_link_state(p_hwfn, link,
+ &(p_hwfn->vf_iov_info->bulletin_shadow));
+}
+
+void __qed_vf_get_link_caps(struct qed_hwfn *p_hwfn,
+ struct qed_mcp_link_capabilities *p_link_caps,
+ struct qed_bulletin_content *p_bulletin)
+{
+ memset(p_link_caps, 0, sizeof(*p_link_caps));
+ p_link_caps->speed_capabilities = p_bulletin->capability_speed;
+}
+
+void qed_vf_get_link_caps(struct qed_hwfn *p_hwfn,
+ struct qed_mcp_link_capabilities *p_link_caps)
+{
+ __qed_vf_get_link_caps(p_hwfn, p_link_caps,
+ &(p_hwfn->vf_iov_info->bulletin_shadow));
+}
+
+void qed_vf_get_num_rxqs(struct qed_hwfn *p_hwfn, u8 *num_rxqs)
+{
+ *num_rxqs = p_hwfn->vf_iov_info->acquire_resp.resc.num_rxqs;
+}
+
+void qed_vf_get_port_mac(struct qed_hwfn *p_hwfn, u8 *port_mac)
+{
+ memcpy(port_mac,
+ p_hwfn->vf_iov_info->acquire_resp.pfdev_info.port_mac, ETH_ALEN);
+}
+
+void qed_vf_get_num_vlan_filters(struct qed_hwfn *p_hwfn, u8 *num_vlan_filters)
+{
+ struct qed_vf_iov *p_vf;
+
+ p_vf = p_hwfn->vf_iov_info;
+ *num_vlan_filters = p_vf->acquire_resp.resc.num_vlan_filters;
+}
+
+bool qed_vf_check_mac(struct qed_hwfn *p_hwfn, u8 *mac)
+{
+ struct qed_bulletin_content *bulletin;
+
+ bulletin = &p_hwfn->vf_iov_info->bulletin_shadow;
+ if (!(bulletin->valid_bitmap & (1 << MAC_ADDR_FORCED)))
+ return true;
+
+ /* Forbid VF from changing a MAC enforced by PF */
+ if (ether_addr_equal(bulletin->mac, mac))
+ return false;
+
+ return false;
+}
+
+bool qed_vf_bulletin_get_forced_mac(struct qed_hwfn *hwfn,
+ u8 *dst_mac, u8 *p_is_forced)
+{
+ struct qed_bulletin_content *bulletin;
+
+ bulletin = &hwfn->vf_iov_info->bulletin_shadow;
+
+ if (bulletin->valid_bitmap & (1 << MAC_ADDR_FORCED)) {
+ if (p_is_forced)
+ *p_is_forced = 1;
+ } else if (bulletin->valid_bitmap & (1 << VFPF_BULLETIN_MAC_ADDR)) {
+ if (p_is_forced)
+ *p_is_forced = 0;
+ } else {
+ return false;
+ }
+
+ ether_addr_copy(dst_mac, bulletin->mac);
+
+ return true;
+}
+
+void qed_vf_get_fw_version(struct qed_hwfn *p_hwfn,
+ u16 *fw_major, u16 *fw_minor,
+ u16 *fw_rev, u16 *fw_eng)
+{
+ struct pf_vf_pfdev_info *info;
+
+ info = &p_hwfn->vf_iov_info->acquire_resp.pfdev_info;
+
+ *fw_major = info->fw_major;
+ *fw_minor = info->fw_minor;
+ *fw_rev = info->fw_rev;
+ *fw_eng = info->fw_eng;
+}
+
+static void qed_handle_bulletin_change(struct qed_hwfn *hwfn)
+{
+ struct qed_eth_cb_ops *ops = hwfn->cdev->protocol_ops.eth;
+ u8 mac[ETH_ALEN], is_mac_exist, is_mac_forced;
+ void *cookie = hwfn->cdev->ops_cookie;
+
+ is_mac_exist = qed_vf_bulletin_get_forced_mac(hwfn, mac,
+ &is_mac_forced);
+ if (is_mac_exist && is_mac_forced && cookie)
+ ops->force_mac(cookie, mac);
+
+ /* Always update link configuration according to bulletin */
+ qed_link_update(hwfn);
+}
+
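+/* VF-side deferred work: poll the bulletin board and apply any change */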
+void qed_iov_vf_task(struct work_struct *work)
+{
+ struct qed_hwfn *hwfn = container_of(work, struct qed_hwfn,
+ iov_task.work);
+ u8 change = 0;
+
+ if (test_and_clear_bit(QED_IOV_WQ_STOP_WQ_FLAG, &hwfn->iov_task_flags))
+ return;
+
+ /* Handle bulletin board changes */
+ qed_vf_read_bulletin(hwfn, &change);
+ if (change)
+ qed_handle_bulletin_change(hwfn);
+
+ /* The VF polls the bulletin board, so this work must be re-scheduled constantly */
+ queue_delayed_work(hwfn->iov_wq, &hwfn->iov_task, HZ);
+}
diff --git a/drivers/net/ethernet/qlogic/qed/qed_vf.h b/drivers/net/ethernet/qlogic/qed/qed_vf.h
new file mode 100644
index 000000000000..b82fda964bbd
--- /dev/null
+++ b/drivers/net/ethernet/qlogic/qed/qed_vf.h
@@ -0,0 +1,990 @@
+/* QLogic qed NIC Driver
+ * Copyright (c) 2015 QLogic Corporation
+ *
+ * This software is available under the terms of the GNU General Public License
+ * (GPL) Version 2, available from the file COPYING in the main directory of
+ * this source tree.
+ */
+
+#ifndef _QED_VF_H
+#define _QED_VF_H
+
+#include "qed_l2.h"
+#include "qed_mcp.h"
+
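+/* RSS indirection-table entry count and hash-key size in 32-bit words */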
+#define T_ETH_INDIRECTION_TABLE_SIZE 128
+#define T_ETH_RSS_KEY_SIZE 10
+
+struct vf_pf_resc_request {
+ u8 num_rxqs;
+ u8 num_txqs;
+ u8 num_sbs;
+ u8 num_mac_filters;
+ u8 num_vlan_filters;
+ u8 num_mc_filters;
+ u16 padding;
+};
+
+struct hw_sb_info {
+ u16 hw_sb_id;
+ u8 sb_qid;
+ u8 padding[5];
+};
+
+#define TLV_BUFFER_SIZE 1024
+
+enum {
+ PFVF_STATUS_WAITING,
+ PFVF_STATUS_SUCCESS,
+ PFVF_STATUS_FAILURE,
+ PFVF_STATUS_NOT_SUPPORTED,
+ PFVF_STATUS_NO_RESOURCE,
+ PFVF_STATUS_FORCED,
+};
+
+/* vf pf channel tlvs */
+/* general tlv header (used for both vf->pf request and pf->vf response) */
+struct channel_tlv {
+ u16 type;
+ u16 length;
+};
+
+/* header of first vf->pf tlv carries the offset used to calculate response
+ * buffer address
+ */
+struct vfpf_first_tlv {
+ struct channel_tlv tl;
+ u32 padding;
+ u64 reply_address;
+};
+
+/* header of pf->vf tlvs, carries the status of handling the request */
+struct pfvf_tlv {
+ struct channel_tlv tl;
+ u8 status;
+ u8 padding[3];
+};
+
+/* response tlv used for most tlvs */
+struct pfvf_def_resp_tlv {
+ struct pfvf_tlv hdr;
+};
+
+/* used to terminate and pad a tlv list */
+struct channel_list_end_tlv {
+ struct channel_tlv tl;
+ u8 padding[4];
+};
+
+#define VFPF_ACQUIRE_OS_LINUX (0)
+#define VFPF_ACQUIRE_OS_WINDOWS (1)
+#define VFPF_ACQUIRE_OS_ESX (2)
+#define VFPF_ACQUIRE_OS_SOLARIS (3)
+#define VFPF_ACQUIRE_OS_LINUX_USERSPACE (4)
+
+struct vfpf_acquire_tlv {
+ struct vfpf_first_tlv first_tlv;
+
+ struct vf_pf_vfdev_info {
+#define VFPF_ACQUIRE_CAP_OBSOLETE (1 << 0)
+#define VFPF_ACQUIRE_CAP_100G (1 << 1) /* VF can support 100g */
+ u64 capabilities;
+ u8 fw_major;
+ u8 fw_minor;
+ u8 fw_revision;
+ u8 fw_engineering;
+ u32 driver_version;
+ u16 opaque_fid; /* ME register value */
+ u8 os_type; /* VFPF_ACQUIRE_OS_* value */
+ u8 padding[5];
+ } vfdev_info;
+
+ struct vf_pf_resc_request resc_request;
+
+ u64 bulletin_addr;
+ u32 bulletin_size;
+ u32 padding;
+};
+
+/* receive side scaling tlv */
+struct vfpf_vport_update_rss_tlv {
+ struct channel_tlv tl;
+
+ u8 update_rss_flags;
+#define VFPF_UPDATE_RSS_CONFIG_FLAG BIT(0)
+#define VFPF_UPDATE_RSS_CAPS_FLAG BIT(1)
+#define VFPF_UPDATE_RSS_IND_TABLE_FLAG BIT(2)
+#define VFPF_UPDATE_RSS_KEY_FLAG BIT(3)
+
+ u8 rss_enable;
+ u8 rss_caps;
+ u8 rss_table_size_log; /* The table size is 2 ^ rss_table_size_log */
+ u16 rss_ind_table[T_ETH_INDIRECTION_TABLE_SIZE];
+ u32 rss_key[T_ETH_RSS_KEY_SIZE];
+};
+
+struct pfvf_storm_stats {
+ u32 address;
+ u32 len;
+};
+
+struct pfvf_stats_info {
+ struct pfvf_storm_stats mstats;
+ struct pfvf_storm_stats pstats;
+ struct pfvf_storm_stats tstats;
+ struct pfvf_storm_stats ustats;
+};
+
+struct pfvf_acquire_resp_tlv {
+ struct pfvf_tlv hdr;
+
+ struct pf_vf_pfdev_info {
+ u32 chip_num;
+ u32 mfw_ver;
+
+ u16 fw_major;
+ u16 fw_minor;
+ u16 fw_rev;
+ u16 fw_eng;
+
+ u64 capabilities;
+#define PFVF_ACQUIRE_CAP_DEFAULT_UNTAGGED BIT(0)
+#define PFVF_ACQUIRE_CAP_100G BIT(1) /* If set, 100g PF */
+/* There are old PF versions where the PF might mistakenly override the sanity
+ * mechanism [version-based] and allow a VF that can't be supported to pass
+ * the acquisition phase.
+ * To overcome this, PFs now indicate that they're past that point, and new
+ * VFs will fail probe on older PFs that do not.
+ */
+#define PFVF_ACQUIRE_CAP_POST_FW_OVERRIDE BIT(2)
+
+ u16 db_size;
+ u8 indices_per_sb;
+ u8 os_type;
+
+ /* These should match the PF's qed_dev values */
+ u16 chip_rev;
+ u8 dev_type;
+
+ u8 padding;
+
+ struct pfvf_stats_info stats_info;
+
+ u8 port_mac[ETH_ALEN];
+ u8 padding2[2];
+ } pfdev_info;
+
+ struct pf_vf_resc {
+#define PFVF_MAX_QUEUES_PER_VF 16
+#define PFVF_MAX_SBS_PER_VF 16
+ struct hw_sb_info hw_sbs[PFVF_MAX_SBS_PER_VF];
+ u8 hw_qid[PFVF_MAX_QUEUES_PER_VF];
+ u8 cid[PFVF_MAX_QUEUES_PER_VF];
+
+ u8 num_rxqs;
+ u8 num_txqs;
+ u8 num_sbs;
+ u8 num_mac_filters;
+ u8 num_vlan_filters;
+ u8 num_mc_filters;
+ u8 padding[2];
+ } resc;
+
+ u32 bulletin_size;
+ u32 padding;
+};
+
+struct pfvf_start_queue_resp_tlv {
+ struct pfvf_tlv hdr;
+ u32 offset; /* offset to consumer/producer of queue */
+ u8 padding[4];
+};
+
+/* Setup Queue */
+struct vfpf_start_rxq_tlv {
+ struct vfpf_first_tlv first_tlv;
+
+ /* physical addresses */
+ u64 rxq_addr;
+ u64 deprecated_sge_addr;
+ u64 cqe_pbl_addr;
+
+ u16 cqe_pbl_size;
+ u16 hw_sb;
+ u16 rx_qid;
+ u16 hc_rate; /* desired interrupts per sec. */
+
+ u16 bd_max_bytes;
+ u16 stat_id;
+ u8 sb_index;
+ u8 padding[3];
+};
+
+struct vfpf_start_txq_tlv {
+ struct vfpf_first_tlv first_tlv;
+
+ /* physical addresses */
+ u64 pbl_addr;
+ u16 pbl_size;
+ u16 stat_id;
+ u16 tx_qid;
+ u16 hw_sb;
+
+ u32 flags; /* VFPF_QUEUE_FLG_X flags */
+ u16 hc_rate; /* desired interrupts per sec. */
+ u8 sb_index;
+ u8 padding[3];
+};
+
+/* Stop RX Queue */
+struct vfpf_stop_rxqs_tlv {
+ struct vfpf_first_tlv first_tlv;
+
+ u16 rx_qid;
+ u8 num_rxqs;
+ u8 cqe_completion;
+ u8 padding[4];
+};
+
+/* Stop TX Queues */
+struct vfpf_stop_txqs_tlv {
+ struct vfpf_first_tlv first_tlv;
+
+ u16 tx_qid;
+ u8 num_txqs;
+ u8 padding[5];
+};
+
+struct vfpf_update_rxq_tlv {
+ struct vfpf_first_tlv first_tlv;
+
+ u64 deprecated_sge_addr[PFVF_MAX_QUEUES_PER_VF];
+
+ u16 rx_qid;
+ u8 num_rxqs;
+ u8 flags;
+#define VFPF_RXQ_UPD_INIT_SGE_DEPRECATE_FLAG BIT(0)
+#define VFPF_RXQ_UPD_COMPLETE_CQE_FLAG BIT(1)
+#define VFPF_RXQ_UPD_COMPLETE_EVENT_FLAG BIT(2)
+
+ u8 padding[4];
+};
+
+/* Set Queue Filters */
+struct vfpf_q_mac_vlan_filter {
+ u32 flags;
+#define VFPF_Q_FILTER_DEST_MAC_VALID 0x01
+#define VFPF_Q_FILTER_VLAN_TAG_VALID 0x02
+#define VFPF_Q_FILTER_SET_MAC 0x100 /* set/clear */
+
+ u8 mac[ETH_ALEN];
+ u16 vlan_tag;
+
+ u8 padding[4];
+};
+
+/* Start a vport */
+struct vfpf_vport_start_tlv {
+ struct vfpf_first_tlv first_tlv;
+
+ u64 sb_addr[PFVF_MAX_SBS_PER_VF];
+
+ u32 tpa_mode;
+ u16 dep1;
+ u16 mtu;
+
+ u8 vport_id;
+ u8 inner_vlan_removal;
+
+ u8 only_untagged;
+ u8 max_buffers_per_cqe;
+
+ u8 padding[4];
+};
+
+/* Extended tlvs - need to add rss, mcast, accept mode tlvs */
+struct vfpf_vport_update_activate_tlv {
+ struct channel_tlv tl;
+ u8 update_rx;
+ u8 update_tx;
+ u8 active_rx;
+ u8 active_tx;
+};
+
+struct vfpf_vport_update_tx_switch_tlv {
+ struct channel_tlv tl;
+ u8 tx_switching;
+ u8 padding[3];
+};
+
+struct vfpf_vport_update_vlan_strip_tlv {
+ struct channel_tlv tl;
+ u8 remove_vlan;
+ u8 padding[3];
+};
+
+struct vfpf_vport_update_mcast_bin_tlv {
+ struct channel_tlv tl;
+ u8 padding[4];
+
+ u64 bins[8];
+};
+
+struct vfpf_vport_update_accept_param_tlv {
+ struct channel_tlv tl;
+ u8 update_rx_mode;
+ u8 update_tx_mode;
+ u8 rx_accept_filter;
+ u8 tx_accept_filter;
+};
+
+struct vfpf_vport_update_accept_any_vlan_tlv {
+ struct channel_tlv tl;
+ u8 update_accept_any_vlan_flg;
+ u8 accept_any_vlan;
+
+ u8 padding[2];
+};
+
+struct vfpf_vport_update_sge_tpa_tlv {
+ struct channel_tlv tl;
+
+ u16 sge_tpa_flags;
+#define VFPF_TPA_IPV4_EN_FLAG BIT(0)
+#define VFPF_TPA_IPV6_EN_FLAG BIT(1)
+#define VFPF_TPA_PKT_SPLIT_FLAG BIT(2)
+#define VFPF_TPA_HDR_DATA_SPLIT_FLAG BIT(3)
+#define VFPF_TPA_GRO_CONSIST_FLAG BIT(4)
+
+ u8 update_sge_tpa_flags;
+#define VFPF_UPDATE_SGE_DEPRECATED_FLAG BIT(0)
+#define VFPF_UPDATE_TPA_EN_FLAG BIT(1)
+#define VFPF_UPDATE_TPA_PARAM_FLAG BIT(2)
+
+ u8 max_buffers_per_cqe;
+
+ u16 deprecated_sge_buff_size;
+ u16 tpa_max_size;
+ u16 tpa_min_size_to_start;
+ u16 tpa_min_size_to_cont;
+
+ u8 tpa_max_aggs_num;
+ u8 padding[7];
+};
+
+/* Primary tlv as a header for various extended tlvs for
+ * various functionalities in vport update ramrod.
+ */
+struct vfpf_vport_update_tlv {
+ struct vfpf_first_tlv first_tlv;
+};
+
+struct vfpf_ucast_filter_tlv {
+ struct vfpf_first_tlv first_tlv;
+
+ u8 opcode;
+ u8 type;
+
+ u8 mac[ETH_ALEN];
+
+ u16 vlan;
+ u16 padding[3];
+};
+
+struct tlv_buffer_size {
+ u8 tlv_buffer[TLV_BUFFER_SIZE];
+};
+
+union vfpf_tlvs {
+ struct vfpf_first_tlv first_tlv;
+ struct vfpf_acquire_tlv acquire;
+ struct vfpf_start_rxq_tlv start_rxq;
+ struct vfpf_start_txq_tlv start_txq;
+ struct vfpf_stop_rxqs_tlv stop_rxqs;
+ struct vfpf_stop_txqs_tlv stop_txqs;
+ struct vfpf_update_rxq_tlv update_rxq;
+ struct vfpf_vport_start_tlv start_vport;
+ struct vfpf_vport_update_tlv vport_update;
+ struct vfpf_ucast_filter_tlv ucast_filter;
+ struct channel_list_end_tlv list_end;
+ struct tlv_buffer_size tlv_buf_size;
+};
+
+union pfvf_tlvs {
+ struct pfvf_def_resp_tlv default_resp;
+ struct pfvf_acquire_resp_tlv acquire_resp;
+ struct tlv_buffer_size tlv_buf_size;
+ struct pfvf_start_queue_resp_tlv queue_start;
+};
+
+enum qed_bulletin_bit {
+ /* Alert the VF that a forced MAC was set by the PF */
+ MAC_ADDR_FORCED = 0,
+ /* Alert the VF that a forced VLAN was set by the PF */
+ VLAN_ADDR_FORCED = 2,
+
+ /* Indicate that `default_only_untagged' contains actual data */
+ VFPF_BULLETIN_UNTAGGED_DEFAULT = 3,
+ VFPF_BULLETIN_UNTAGGED_DEFAULT_FORCED = 4,
+
+ /* Alert the VF that a suggested MAC was sent by the PF.
+ * MAC_ADDR will be disabled if MAC_ADDR_FORCED is set.
+ */
+ VFPF_BULLETIN_MAC_ADDR = 5
+};
+
+struct qed_bulletin_content {
+ /* crc of structure to ensure is not in mid-update */
+ u32 crc;
+
+ u32 version;
+
+ /* bitmap indicating which fields hold valid values */
+ u64 valid_bitmap;
+
+ /* used for MAC_ADDR or MAC_ADDR_FORCED */
+ u8 mac[ETH_ALEN];
+
+ /* If valid, 1 => only untagged Rx if no vlan is configured */
+ u8 default_only_untagged;
+ u8 padding;
+
+ /* The following is a 'copy' of qed_mcp_link_state,
+ * qed_mcp_link_params and qed_mcp_link_capabilities. Since it's
+ * possible the structs will increase further along the road we cannot
+ * have it here; Instead we need to have all of its fields.
+ */
+ u8 req_autoneg;
+ u8 req_autoneg_pause;
+ u8 req_forced_rx;
+ u8 req_forced_tx;
+ u8 padding2[4];
+
+ u32 req_adv_speed;
+ u32 req_forced_speed;
+ u32 req_loopback;
+ u32 padding3;
+
+ u8 link_up;
+ u8 full_duplex;
+ u8 autoneg;
+ u8 autoneg_complete;
+ u8 parallel_detection;
+ u8 pfc_enabled;
+ u8 partner_tx_flow_ctrl_en;
+ u8 partner_rx_flow_ctrl_en;
+ u8 partner_adv_pause;
+ u8 sfp_tx_fault;
+ u8 padding4[6];
+
+ u32 speed;
+ u32 partner_adv_speed;
+
+ u32 capability_speed;
+
+ /* Forced vlan */
+ u16 pvid;
+ u16 padding5;
+};
+
+struct qed_bulletin {
+ dma_addr_t phys;
+ struct qed_bulletin_content *p_virt;
+ u32 size;
+};
+
+enum {
+ CHANNEL_TLV_NONE, /* ends tlv sequence */
+ CHANNEL_TLV_ACQUIRE,
+ CHANNEL_TLV_VPORT_START,
+ CHANNEL_TLV_VPORT_UPDATE,
+ CHANNEL_TLV_VPORT_TEARDOWN,
+ CHANNEL_TLV_START_RXQ,
+ CHANNEL_TLV_START_TXQ,
+ CHANNEL_TLV_STOP_RXQS,
+ CHANNEL_TLV_STOP_TXQS,
+ CHANNEL_TLV_UPDATE_RXQ,
+ CHANNEL_TLV_INT_CLEANUP,
+ CHANNEL_TLV_CLOSE,
+ CHANNEL_TLV_RELEASE,
+ CHANNEL_TLV_LIST_END,
+ CHANNEL_TLV_UCAST_FILTER,
+ CHANNEL_TLV_VPORT_UPDATE_ACTIVATE,
+ CHANNEL_TLV_VPORT_UPDATE_TX_SWITCH,
+ CHANNEL_TLV_VPORT_UPDATE_VLAN_STRIP,
+ CHANNEL_TLV_VPORT_UPDATE_MCAST,
+ CHANNEL_TLV_VPORT_UPDATE_ACCEPT_PARAM,
+ CHANNEL_TLV_VPORT_UPDATE_RSS,
+ CHANNEL_TLV_VPORT_UPDATE_ACCEPT_ANY_VLAN,
+ CHANNEL_TLV_VPORT_UPDATE_SGE_TPA,
+ CHANNEL_TLV_MAX,
+
+ /* Required for iterating over vport-update TLVs.
+ * Will break if the vport-update TLVs are not sequential.
+ */
+ CHANNEL_TLV_VPORT_UPDATE_MAX = CHANNEL_TLV_VPORT_UPDATE_SGE_TPA + 1,
+};
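
As the comment on CHANNEL_TLV_VPORT_UPDATE_MAX notes, the vport-update extension TLV values must stay contiguous so they can be visited with a plain numeric loop. The sketch below is illustrative only and is not part of the patch; the example_walk_vport_update_tlvs() helper and its find_tlv callback are hypothetical stand-ins for whatever TLV-lookup routine the caller already has.

#include <linux/kernel.h>
#include <linux/types.h>

static void example_walk_vport_update_tlvs(void *tlv_buf,
					   void *(*find_tlv)(void *buf, u16 type))
{
	unsigned int tlv;

	/* Because the extension TLV values are sequential, one loop over the
	 * numeric range replaces a switch per TLV type.
	 */
	for (tlv = CHANNEL_TLV_VPORT_UPDATE_ACTIVATE;
	     tlv < CHANNEL_TLV_VPORT_UPDATE_MAX; tlv++) {
		void *ext = find_tlv(tlv_buf, tlv);	/* NULL when absent */

		if (ext)
			pr_debug("vport-update extension TLV %u present\n", tlv);
	}
}
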
+
+/* This data is held in the qed_hwfn structure for VFs only. */
+struct qed_vf_iov {
+ union vfpf_tlvs *vf2pf_request;
+ dma_addr_t vf2pf_request_phys;
+ union pfvf_tlvs *pf2vf_reply;
+ dma_addr_t pf2vf_reply_phys;
+
+ /* Should be taken whenever the mailbox buffers are accessed */
+ struct mutex mutex;
+ u8 *offset;
+
+ /* Bulletin Board */
+ struct qed_bulletin bulletin;
+ struct qed_bulletin_content bulletin_shadow;
+
+ /* we set aside a copy of the acquire response */
+ struct pfvf_acquire_resp_tlv acquire_resp;
+};
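
The bulletin board is written only by the PF and read only by the VF; the leading crc field lets the VF detect a read that raced with a PF update, and bulletin_shadow holds the last snapshot it accepted. The following is a minimal sketch of that pattern, not the driver's actual implementation: the CRC coverage (everything after the crc field) and the use of crc32() from <linux/crc32.h> are assumptions for illustration.

#include <linux/crc32.h>
#include <linux/errno.h>
#include <linux/string.h>

static int example_sample_bulletin(struct qed_vf_iov *p_iov,
				   struct qed_bulletin_content *p_dst)
{
	struct qed_bulletin_content shadow;
	u32 crc;

	/* Snapshot the DMA-visible bulletin so a concurrent PF update
	 * cannot change it while we are examining it.
	 */
	memcpy(&shadow, p_iov->bulletin.p_virt, sizeof(shadow));

	/* Nothing to do if the PF has not published a newer version */
	if (shadow.version == p_iov->bulletin_shadow.version)
		return 0;

	/* Assumed CRC coverage: everything after the crc field itself */
	crc = crc32(0, (u8 *)&shadow + sizeof(crc),
		    sizeof(shadow) - sizeof(crc));
	if (crc != shadow.crc)
		return -EAGAIN;	/* raced with a PF update; retry later */

	*p_dst = shadow;
	return 0;
}
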
+
+#ifdef CONFIG_QED_SRIOV
+/**
+ * @brief Read the VF bulletin and act on it if needed
+ *
+ * @param p_hwfn
+ * @param p_change - qed fills it with 1 iff the bulletin board has changed, 0 otherwise.
+ *
+ * @return enum _qed_status
+ */
+int qed_vf_read_bulletin(struct qed_hwfn *p_hwfn, u8 *p_change);
+
+/**
+ * @brief Get link parameters for VF from qed
+ *
+ * @param p_hwfn
+ * @param params - the link params structure to be filled for the VF
+ */
+void qed_vf_get_link_params(struct qed_hwfn *p_hwfn,
+ struct qed_mcp_link_params *params);
+
+/**
+ * @brief Get link state for VF from qed
+ *
+ * @param p_hwfn
+ * @param link - the link state structure to be filled for the VF
+ */
+void qed_vf_get_link_state(struct qed_hwfn *p_hwfn,
+ struct qed_mcp_link_state *link);
+
+/**
+ * @brief Get link capabilities for VF from qed
+ *
+ * @param p_hwfn
+ * @param p_link_caps - the link capabilities structure to be filled for the VF
+ */
+void qed_vf_get_link_caps(struct qed_hwfn *p_hwfn,
+ struct qed_mcp_link_capabilities *p_link_caps);
+
+/**
+ * @brief Get number of Rx queues allocated for VF by qed
+ *
+ * @param p_hwfn
+ * @param num_rxqs - allocated RX queues
+ */
+void qed_vf_get_num_rxqs(struct qed_hwfn *p_hwfn, u8 *num_rxqs);
+
+/**
+ * @brief Get the port MAC address for the VF
+ *
+ * @param p_hwfn
+ * @param port_mac - destination location for the port MAC
+ */
+void qed_vf_get_port_mac(struct qed_hwfn *p_hwfn, u8 *port_mac);
+
+/**
+ * @brief Get number of VLAN filters allocated for VF by qed
+ *
+ * @param p_hwfn
+ * @param num_vlan_filters - allocated VLAN filters
+ */
+void qed_vf_get_num_vlan_filters(struct qed_hwfn *p_hwfn,
+ u8 *num_vlan_filters);
+
+/**
+ * @brief Check if VF can set a MAC address
+ *
+ * @param p_hwfn
+ * @param mac
+ *
+ * @return bool
+ */
+bool qed_vf_check_mac(struct qed_hwfn *p_hwfn, u8 *mac);
+
+/**
+ * @brief Set firmware version information in dev_info from the VF's acquire response TLV
+ *
+ * @param p_hwfn
+ * @param fw_major
+ * @param fw_minor
+ * @param fw_rev
+ * @param fw_eng
+ */
+void qed_vf_get_fw_version(struct qed_hwfn *p_hwfn,
+ u16 *fw_major, u16 *fw_minor,
+ u16 *fw_rev, u16 *fw_eng);
+
+/**
+ * @brief hw preparation for VF
+ * sends ACQUIRE message
+ *
+ * @param p_hwfn
+ *
+ * @return int
+ */
+int qed_vf_hw_prepare(struct qed_hwfn *p_hwfn);
+
+/**
+ * @brief VF - start the RX Queue by sending a message to the PF
+ * @param p_hwfn
+ * @param rx_queue_id - zero based within the VF
+ * @param sb - VF status block for this queue
+ * @param sb_index - Index within the status block
+ * @param bd_max_bytes - maximum number of bytes per bd
+ * @param bd_chain_phys_addr - physical address of bd chain
+ * @param cqe_pbl_addr - physical address of pbl
+ * @param cqe_pbl_size - pbl size
+ * @param pp_prod - pointer to the producer to be
+ * used in fastpath
+ *
+ * @return int
+ */
+int qed_vf_pf_rxq_start(struct qed_hwfn *p_hwfn,
+ u8 rx_queue_id,
+ u16 sb,
+ u8 sb_index,
+ u16 bd_max_bytes,
+ dma_addr_t bd_chain_phys_addr,
+ dma_addr_t cqe_pbl_addr,
+ u16 cqe_pbl_size, void __iomem **pp_prod);
+
+/**
+ * @brief VF - start the TX queue by sending a message to the
+ * PF.
+ *
+ * @param p_hwfn
+ * @param tx_queue_id - zero based within the VF
+ * @param sb - status block for this queue
+ * @param sb_index - index within the status block
+ * @param pbl_addr - physical address of the Tx PBL
+ * @param pbl_size - PBL size
+ * @param pp_doorbell - pointer to the address to which the
+ * doorbell should be written
+ *
+ * @return int
+ */
+int qed_vf_pf_txq_start(struct qed_hwfn *p_hwfn,
+ u16 tx_queue_id,
+ u16 sb,
+ u8 sb_index,
+ dma_addr_t pbl_addr,
+ u16 pbl_size, void __iomem **pp_doorbell);
+
+/**
+ * @brief VF - stop the RX queue by sending a message to the PF
+ *
+ * @param p_hwfn
+ * @param rx_qid
+ * @param cqe_completion
+ *
+ * @return int
+ */
+int qed_vf_pf_rxq_stop(struct qed_hwfn *p_hwfn,
+ u16 rx_qid, bool cqe_completion);
+
+/**
+ * @brief VF - stop the TX queue by sending a message to the PF
+ *
+ * @param p_hwfn
+ * @param tx_qid
+ *
+ * @return int
+ */
+int qed_vf_pf_txq_stop(struct qed_hwfn *p_hwfn, u16 tx_qid);
+
+/**
+ * @brief VF - send a vport update command
+ *
+ * @param p_hwfn
+ * @param params
+ *
+ * @return int
+ */
+int qed_vf_pf_vport_update(struct qed_hwfn *p_hwfn,
+ struct qed_sp_vport_update_params *p_params);
+
+/**
+ *
+ * @brief VF - send a close message to PF
+ *
+ * @param p_hwfn
+ *
+ * @return enum _qed_status
+ */
+int qed_vf_pf_reset(struct qed_hwfn *p_hwfn);
+
+/**
+ * @brief VF - free the VF's memories
+ *
+ * @param p_hwfn
+ *
+ * @return enum _qed_status
+ */
+int qed_vf_pf_release(struct qed_hwfn *p_hwfn);
+
+/**
+ * @brief qed_vf_get_igu_sb_id - Get the IGU SB ID for a given
+ * sb_id. For VFs, IGU SBs don't have to be contiguous
+ *
+ * @param p_hwfn
+ * @param sb_id
+ *
+ * @return INLINE u16
+ */
+u16 qed_vf_get_igu_sb_id(struct qed_hwfn *p_hwfn, u16 sb_id);
+
+/**
+ * @brief qed_vf_pf_vport_start - perform vport start for VF.
+ *
+ * @param p_hwfn
+ * @param vport_id
+ * @param mtu
+ * @param inner_vlan_removal
+ * @param tpa_mode
+ * @param max_buffers_per_cqe
+ * @param only_untagged - default behavior regarding vlan acceptance
+ *
+ * @return enum _qed_status
+ */
+int qed_vf_pf_vport_start(struct qed_hwfn *p_hwfn,
+ u8 vport_id,
+ u16 mtu,
+ u8 inner_vlan_removal,
+ enum qed_tpa_mode tpa_mode,
+ u8 max_buffers_per_cqe, u8 only_untagged);
+
+/**
+ * @brief qed_vf_pf_vport_stop - stop the VF's vport
+ *
+ * @param p_hwfn
+ *
+ * @return enum _qed_status
+ */
+int qed_vf_pf_vport_stop(struct qed_hwfn *p_hwfn);
+
+int qed_vf_pf_filter_ucast(struct qed_hwfn *p_hwfn,
+ struct qed_filter_ucast *p_param);
+
+void qed_vf_pf_filter_mcast(struct qed_hwfn *p_hwfn,
+ struct qed_filter_mcast *p_filter_cmd);
+
+/**
+ * @brief qed_vf_pf_int_cleanup - clean the SB of the VF
+ *
+ * @param p_hwfn
+ *
+ * @return enum _qed_status
+ */
+int qed_vf_pf_int_cleanup(struct qed_hwfn *p_hwfn);
+
+/**
+ * @brief - return the link params in a given bulletin board
+ *
+ * @param p_hwfn
+ * @param p_params - pointer to a struct to fill with link params
+ * @param p_bulletin
+ */
+void __qed_vf_get_link_params(struct qed_hwfn *p_hwfn,
+ struct qed_mcp_link_params *p_params,
+ struct qed_bulletin_content *p_bulletin);
+
+/**
+ * @brief - return the link state in a given bulletin board
+ *
+ * @param p_hwfn
+ * @param p_link - pointer to a struct to fill with link state
+ * @param p_bulletin
+ */
+void __qed_vf_get_link_state(struct qed_hwfn *p_hwfn,
+ struct qed_mcp_link_state *p_link,
+ struct qed_bulletin_content *p_bulletin);
+
+/**
+ * @brief - return the link capabilities in a given bulletin board
+ *
+ * @param p_hwfn
+ * @param p_link - pointer to a struct to fill with link capabilities
+ * @param p_bulletin
+ */
+void __qed_vf_get_link_caps(struct qed_hwfn *p_hwfn,
+ struct qed_mcp_link_capabilities *p_link_caps,
+ struct qed_bulletin_content *p_bulletin);
+
+void qed_iov_vf_task(struct work_struct *work);
+#else
+static inline void qed_vf_get_link_params(struct qed_hwfn *p_hwfn,
+ struct qed_mcp_link_params *params)
+{
+}
+
+static inline void qed_vf_get_link_state(struct qed_hwfn *p_hwfn,
+ struct qed_mcp_link_state *link)
+{
+}
+
+static inline void
+qed_vf_get_link_caps(struct qed_hwfn *p_hwfn,
+ struct qed_mcp_link_capabilities *p_link_caps)
+{
+}
+
+static inline void qed_vf_get_num_rxqs(struct qed_hwfn *p_hwfn, u8 *num_rxqs)
+{
+}
+
+static inline void qed_vf_get_port_mac(struct qed_hwfn *p_hwfn, u8 *port_mac)
+{
+}
+
+static inline void qed_vf_get_num_vlan_filters(struct qed_hwfn *p_hwfn,
+ u8 *num_vlan_filters)
+{
+}
+
+static inline bool qed_vf_check_mac(struct qed_hwfn *p_hwfn, u8 *mac)
+{
+ return false;
+}
+
+static inline void qed_vf_get_fw_version(struct qed_hwfn *p_hwfn,
+ u16 *fw_major, u16 *fw_minor,
+ u16 *fw_rev, u16 *fw_eng)
+{
+}
+
+static inline int qed_vf_hw_prepare(struct qed_hwfn *p_hwfn)
+{
+ return -EINVAL;
+}
+
+static inline int qed_vf_pf_rxq_start(struct qed_hwfn *p_hwfn,
+ u8 rx_queue_id,
+ u16 sb,
+ u8 sb_index,
+ u16 bd_max_bytes,
+ dma_addr_t bd_chain_phys_adr,
+ dma_addr_t cqe_pbl_addr,
+ u16 cqe_pbl_size, void __iomem **pp_prod)
+{
+ return -EINVAL;
+}
+
+static inline int qed_vf_pf_txq_start(struct qed_hwfn *p_hwfn,
+ u16 tx_queue_id,
+ u16 sb,
+ u8 sb_index,
+ dma_addr_t pbl_addr,
+ u16 pbl_size, void __iomem **pp_doorbell)
+{
+ return -EINVAL;
+}
+
+static inline int qed_vf_pf_rxq_stop(struct qed_hwfn *p_hwfn,
+ u16 rx_qid, bool cqe_completion)
+{
+ return -EINVAL;
+}
+
+static inline int qed_vf_pf_txq_stop(struct qed_hwfn *p_hwfn, u16 tx_qid)
+{
+ return -EINVAL;
+}
+
+static inline int
+qed_vf_pf_vport_update(struct qed_hwfn *p_hwfn,
+ struct qed_sp_vport_update_params *p_params)
+{
+ return -EINVAL;
+}
+
+static inline int qed_vf_pf_reset(struct qed_hwfn *p_hwfn)
+{
+ return -EINVAL;
+}
+
+static inline int qed_vf_pf_release(struct qed_hwfn *p_hwfn)
+{
+ return -EINVAL;
+}
+
+static inline u16 qed_vf_get_igu_sb_id(struct qed_hwfn *p_hwfn, u16 sb_id)
+{
+ return 0;
+}
+
+static inline int qed_vf_pf_vport_start(struct qed_hwfn *p_hwfn,
+ u8 vport_id,
+ u16 mtu,
+ u8 inner_vlan_removal,
+ enum qed_tpa_mode tpa_mode,
+ u8 max_buffers_per_cqe,
+ u8 only_untagged)
+{
+ return -EINVAL;
+}
+
+static inline int qed_vf_pf_vport_stop(struct qed_hwfn *p_hwfn)
+{
+ return -EINVAL;
+}
+
+static inline int qed_vf_pf_filter_ucast(struct qed_hwfn *p_hwfn,
+ struct qed_filter_ucast *p_param)
+{
+ return -EINVAL;
+}
+
+static inline void qed_vf_pf_filter_mcast(struct qed_hwfn *p_hwfn,
+ struct qed_filter_mcast *p_filter_cmd)
+{
+}
+
+static inline int qed_vf_pf_int_cleanup(struct qed_hwfn *p_hwfn)
+{
+ return -EINVAL;
+}
+
+static inline void __qed_vf_get_link_params(struct qed_hwfn *p_hwfn,
+ struct qed_mcp_link_params
+ *p_params,
+ struct qed_bulletin_content
+ *p_bulletin)
+{
+}
+
+static inline void __qed_vf_get_link_state(struct qed_hwfn *p_hwfn,
+ struct qed_mcp_link_state *p_link,
+ struct qed_bulletin_content
+ *p_bulletin)
+{
+}
+
+static inline void
+__qed_vf_get_link_caps(struct qed_hwfn *p_hwfn,
+ struct qed_mcp_link_capabilities *p_link_caps,
+ struct qed_bulletin_content *p_bulletin)
+{
+}
+
+static inline void qed_iov_vf_task(struct work_struct *work)
+{
+}
+#endif
+
+#endif
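
To tie the pieces of this header together, here is a rough, illustrative-only sketch of a VF-to-PF mailbox exchange for the INT_CLEANUP request, built on the buffers in struct qed_vf_iov. It is not the driver's real code: the send_and_wait() callback stands in for the actual doorbell/poll helper, and the field names tl.type, tl.length, reply_address, hdr.status and PFVF_STATUS_SUCCESS are assumed from TLV definitions earlier in the patch that are not shown in this excerpt.

#include <linux/errno.h>
#include <linux/mutex.h>
#include <linux/string.h>

static int example_vf_pf_int_cleanup(struct qed_hwfn *p_hwfn,
				     struct qed_vf_iov *p_iov,
				     int (*send_and_wait)(struct qed_hwfn *))
{
	struct vfpf_first_tlv *req = &p_iov->vf2pf_request->first_tlv;
	struct channel_list_end_tlv *list_end;
	struct pfvf_def_resp_tlv *resp;
	int rc;

	/* Per the comment in struct qed_vf_iov, hold the mutex while the
	 * mailbox buffers are in use.
	 */
	mutex_lock(&p_iov->mutex);

	/* First TLV names the request and tells the PF where to reply */
	memset(p_iov->vf2pf_request, 0, sizeof(*p_iov->vf2pf_request));
	req->tl.type = CHANNEL_TLV_INT_CLEANUP;
	req->tl.length = sizeof(*req);
	req->reply_address = (u64)p_iov->pf2vf_reply_phys;

	/* A list-end TLV terminates the request chain */
	list_end = (struct channel_list_end_tlv *)(req + 1);
	list_end->tl.type = CHANNEL_TLV_LIST_END;
	list_end->tl.length = sizeof(*list_end);

	rc = send_and_wait(p_hwfn);	/* ring the PF and poll the reply */
	resp = &p_iov->pf2vf_reply->default_resp;
	if (!rc && resp->hdr.status != PFVF_STATUS_SUCCESS)
		rc = -EINVAL;

	mutex_unlock(&p_iov->mutex);
	return rc;
}
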
diff --git a/drivers/net/ethernet/qlogic/qede/qede.h b/drivers/net/ethernet/qlogic/qede/qede.h
index d023251544d9..47d6b22252f6 100644
--- a/drivers/net/ethernet/qlogic/qede/qede.h
+++ b/drivers/net/ethernet/qlogic/qede/qede.h
@@ -25,15 +25,13 @@
#define QEDE_MAJOR_VERSION 8
#define QEDE_MINOR_VERSION 7
-#define QEDE_REVISION_VERSION 0
-#define QEDE_ENGINEERING_VERSION 0
+#define QEDE_REVISION_VERSION 1
+#define QEDE_ENGINEERING_VERSION 20
#define DRV_MODULE_VERSION __stringify(QEDE_MAJOR_VERSION) "." \
__stringify(QEDE_MINOR_VERSION) "." \
__stringify(QEDE_REVISION_VERSION) "." \
__stringify(QEDE_ENGINEERING_VERSION)
-#define QEDE_ETH_INTERFACE_VERSION 300
-
#define DRV_MODULE_SYM qede
struct qede_stats {
@@ -61,16 +59,16 @@ struct qede_stats {
/* port */
u64 rx_64_byte_packets;
- u64 rx_127_byte_packets;
- u64 rx_255_byte_packets;
- u64 rx_511_byte_packets;
- u64 rx_1023_byte_packets;
- u64 rx_1518_byte_packets;
- u64 rx_1522_byte_packets;
- u64 rx_2047_byte_packets;
- u64 rx_4095_byte_packets;
- u64 rx_9216_byte_packets;
- u64 rx_16383_byte_packets;
+ u64 rx_65_to_127_byte_packets;
+ u64 rx_128_to_255_byte_packets;
+ u64 rx_256_to_511_byte_packets;
+ u64 rx_512_to_1023_byte_packets;
+ u64 rx_1024_to_1518_byte_packets;
+ u64 rx_1519_to_1522_byte_packets;
+ u64 rx_1519_to_2047_byte_packets;
+ u64 rx_2048_to_4095_byte_packets;
+ u64 rx_4096_to_9216_byte_packets;
+ u64 rx_9217_to_16383_byte_packets;
u64 rx_crc_errors;
u64 rx_mac_crtl_frames;
u64 rx_pause_frames;
@@ -114,6 +112,10 @@ struct qede_dev {
u32 dp_module;
u8 dp_level;
+ u32 flags;
+#define QEDE_FLAG_IS_VF BIT(0)
+#define IS_VF(edev) (!!((edev)->flags & QEDE_FLAG_IS_VF))
+
const struct qed_eth_ops *ops;
struct qed_dev_eth_info dev_info;
@@ -156,6 +158,10 @@ struct qede_dev {
SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
struct qede_stats stats;
+#define QEDE_RSS_INDIR_INITED BIT(0)
+#define QEDE_RSS_KEY_INITED BIT(1)
+#define QEDE_RSS_CAPS_INITED BIT(2)
+ u32 rss_params_inited; /* bit-field to track initialized rss params */
struct qed_update_vport_rss_params rss_params;
u16 q_num_rx_buffers; /* Must be a power of two */
u16 q_num_tx_buffers; /* Must be a power of two */
@@ -167,6 +173,8 @@ struct qede_dev {
bool accept_any_vlan;
struct delayed_work sp_task;
unsigned long sp_flags;
+ u16 vxlan_dst_port;
+ u16 geneve_dst_port;
};
enum QEDE_STATE {
@@ -286,8 +294,11 @@ struct qede_fastpath {
#define QEDE_CSUM_ERROR BIT(0)
#define QEDE_CSUM_UNNECESSARY BIT(1)
+#define QEDE_TUNN_CSUM_UNNECESSARY BIT(2)
-#define QEDE_SP_RX_MODE 1
+#define QEDE_SP_RX_MODE 1
+#define QEDE_SP_VXLAN_PORT_CONFIG 2
+#define QEDE_SP_GENEVE_PORT_CONFIG 3
union qede_reload_args {
u16 mtu;
@@ -301,6 +312,10 @@ void qede_reload(struct qede_dev *edev,
union qede_reload_args *args);
int qede_change_mtu(struct net_device *dev, int new_mtu);
void qede_fill_by_demand_stats(struct qede_dev *edev);
+bool qede_has_rx_work(struct qede_rx_queue *rxq);
+int qede_txq_has_work(struct qede_tx_queue *txq);
+void qede_recycle_rx_bd_ring(struct qede_rx_queue *rxq, struct qede_dev *edev,
+ u8 count);
#define RX_RING_SIZE_POW 13
#define RX_RING_SIZE ((u16)BIT(RX_RING_SIZE_POW))
diff --git a/drivers/net/ethernet/qlogic/qede/qede_ethtool.c b/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
index c49dc10ce151..1bc75358cbc4 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_ethtool.c
@@ -9,6 +9,7 @@
#include <linux/version.h>
#include <linux/types.h>
#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
#include <linux/ethtool.h>
#include <linux/string.h>
#include <linux/pci.h>
@@ -27,6 +28,9 @@
#define QEDE_RQSTAT_STRING(stat_name) (#stat_name)
#define QEDE_RQSTAT(stat_name) \
{QEDE_RQSTAT_OFFSET(stat_name), QEDE_RQSTAT_STRING(stat_name)}
+
+#define QEDE_SELFTEST_POLL_COUNT 100
+
static const struct {
u64 offset;
char string[ETH_GSTRING_LEN];
@@ -59,16 +63,16 @@ static const struct {
QEDE_STAT(tx_bcast_pkts),
QEDE_PF_STAT(rx_64_byte_packets),
- QEDE_PF_STAT(rx_127_byte_packets),
- QEDE_PF_STAT(rx_255_byte_packets),
- QEDE_PF_STAT(rx_511_byte_packets),
- QEDE_PF_STAT(rx_1023_byte_packets),
- QEDE_PF_STAT(rx_1518_byte_packets),
- QEDE_PF_STAT(rx_1522_byte_packets),
- QEDE_PF_STAT(rx_2047_byte_packets),
- QEDE_PF_STAT(rx_4095_byte_packets),
- QEDE_PF_STAT(rx_9216_byte_packets),
- QEDE_PF_STAT(rx_16383_byte_packets),
+ QEDE_PF_STAT(rx_65_to_127_byte_packets),
+ QEDE_PF_STAT(rx_128_to_255_byte_packets),
+ QEDE_PF_STAT(rx_256_to_511_byte_packets),
+ QEDE_PF_STAT(rx_512_to_1023_byte_packets),
+ QEDE_PF_STAT(rx_1024_to_1518_byte_packets),
+ QEDE_PF_STAT(rx_1519_to_1522_byte_packets),
+ QEDE_PF_STAT(rx_1519_to_2047_byte_packets),
+ QEDE_PF_STAT(rx_2048_to_4095_byte_packets),
+ QEDE_PF_STAT(rx_4096_to_9216_byte_packets),
+ QEDE_PF_STAT(rx_9217_to_16383_byte_packets),
QEDE_PF_STAT(tx_64_byte_packets),
QEDE_PF_STAT(tx_65_to_127_byte_packets),
QEDE_PF_STAT(tx_128_to_255_byte_packets),
@@ -116,11 +120,39 @@ static const struct {
#define QEDE_NUM_STATS ARRAY_SIZE(qede_stats_arr)
+enum {
+ QEDE_PRI_FLAG_CMT,
+ QEDE_PRI_FLAG_LEN,
+};
+
+static const char qede_private_arr[QEDE_PRI_FLAG_LEN][ETH_GSTRING_LEN] = {
+ "Coupled-Function",
+};
+
+enum qede_ethtool_tests {
+ QEDE_ETHTOOL_INT_LOOPBACK,
+ QEDE_ETHTOOL_INTERRUPT_TEST,
+ QEDE_ETHTOOL_MEMORY_TEST,
+ QEDE_ETHTOOL_REGISTER_TEST,
+ QEDE_ETHTOOL_CLOCK_TEST,
+ QEDE_ETHTOOL_TEST_MAX
+};
+
+static const char qede_tests_str_arr[QEDE_ETHTOOL_TEST_MAX][ETH_GSTRING_LEN] = {
+ "Internal loopback (offline)",
+ "Interrupt (online)\t",
+ "Memory (online)\t\t",
+ "Register (online)\t",
+ "Clock (online)\t\t",
+};
+
static void qede_get_strings_stats(struct qede_dev *edev, u8 *buf)
{
int i, j, k;
for (i = 0, j = 0; i < QEDE_NUM_STATS; i++) {
+ if (IS_VF(edev) && qede_stats_arr[i].pf_only)
+ continue;
strcpy(buf + j * ETH_GSTRING_LEN,
qede_stats_arr[i].string);
j++;
@@ -139,6 +171,14 @@ static void qede_get_strings(struct net_device *dev, u32 stringset, u8 *buf)
case ETH_SS_STATS:
qede_get_strings_stats(edev, buf);
break;
+ case ETH_SS_PRIV_FLAGS:
+ memcpy(buf, qede_private_arr,
+ ETH_GSTRING_LEN * QEDE_PRI_FLAG_LEN);
+ break;
+ case ETH_SS_TEST:
+ memcpy(buf, qede_tests_str_arr,
+ ETH_GSTRING_LEN * QEDE_ETHTOOL_TEST_MAX);
+ break;
default:
DP_VERBOSE(edev, QED_MSG_DEBUG,
"Unsupported stringset 0x%08x\n", stringset);
@@ -156,8 +196,11 @@ static void qede_get_ethtool_stats(struct net_device *dev,
mutex_lock(&edev->qede_lock);
- for (sidx = 0; sidx < QEDE_NUM_STATS; sidx++)
+ for (sidx = 0; sidx < QEDE_NUM_STATS; sidx++) {
+ if (IS_VF(edev) && qede_stats_arr[sidx].pf_only)
+ continue;
buf[cnt++] = QEDE_STATS_DATA(edev, sidx);
+ }
for (sidx = 0; sidx < QEDE_NUM_RQSTATS; sidx++) {
buf[cnt] = 0;
@@ -176,8 +219,18 @@ static int qede_get_sset_count(struct net_device *dev, int stringset)
switch (stringset) {
case ETH_SS_STATS:
- return num_stats + QEDE_NUM_RQSTATS;
+ if (IS_VF(edev)) {
+ int i;
+ for (i = 0; i < QEDE_NUM_STATS; i++)
+ if (qede_stats_arr[i].pf_only)
+ num_stats--;
+ }
+ return num_stats + QEDE_NUM_RQSTATS;
+ case ETH_SS_PRIV_FLAGS:
+ return QEDE_PRI_FLAG_LEN;
+ case ETH_SS_TEST:
+ return QEDE_ETHTOOL_TEST_MAX;
default:
DP_VERBOSE(edev, QED_MSG_DEBUG,
"Unsupported stringset 0x%08x\n", stringset);
@@ -185,6 +238,13 @@ static int qede_get_sset_count(struct net_device *dev, int stringset)
}
}
+static u32 qede_get_priv_flags(struct net_device *dev)
+{
+ struct qede_dev *edev = netdev_priv(dev);
+
+ return (!!(edev->dev_info.common.num_hwfns > 1)) << QEDE_PRI_FLAG_CMT;
+}
+
static int qede_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{
struct qede_dev *edev = netdev_priv(dev);
@@ -217,9 +277,9 @@ static int qede_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
struct qed_link_params params;
u32 speed;
- if (!edev->dev_info.common.is_mf_default) {
+ if (!edev->ops || !edev->ops->common->can_link_change(edev->cdev)) {
DP_INFO(edev,
- "Link parameters can not be changed in non-default mode\n");
+ "Link settings are not allowed to be changed\n");
return -EOPNOTSUPP;
}
@@ -328,6 +388,12 @@ static int qede_nway_reset(struct net_device *dev)
struct qed_link_output current_link;
struct qed_link_params link_params;
+ if (!edev->ops || !edev->ops->common->can_link_change(edev->cdev)) {
+ DP_INFO(edev,
+ "Link settings are not allowed to be changed\n");
+ return -EOPNOTSUPP;
+ }
+
if (!netif_running(dev))
return 0;
@@ -428,9 +494,9 @@ static int qede_set_pauseparam(struct net_device *dev,
struct qed_link_params params;
struct qed_link_output current_link;
- if (!edev->dev_info.common.is_mf_default) {
+ if (!edev->ops || !edev->ops->common->can_link_change(edev->cdev)) {
DP_INFO(edev,
- "Pause parameters can not be updated in non-default mode\n");
+ "Pause settings are not allowed to be changed\n");
return -EOPNOTSUPP;
}
@@ -569,6 +635,497 @@ static int qede_set_phys_id(struct net_device *dev,
return 0;
}
+static int qede_get_rss_flags(struct qede_dev *edev, struct ethtool_rxnfc *info)
+{
+ info->data = RXH_IP_SRC | RXH_IP_DST;
+
+ switch (info->flow_type) {
+ case TCP_V4_FLOW:
+ case TCP_V6_FLOW:
+ info->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+ break;
+ case UDP_V4_FLOW:
+ if (edev->rss_params.rss_caps & QED_RSS_IPV4_UDP)
+ info->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+ break;
+ case UDP_V6_FLOW:
+ if (edev->rss_params.rss_caps & QED_RSS_IPV6_UDP)
+ info->data |= RXH_L4_B_0_1 | RXH_L4_B_2_3;
+ break;
+ case IPV4_FLOW:
+ case IPV6_FLOW:
+ break;
+ default:
+ info->data = 0;
+ break;
+ }
+
+ return 0;
+}
+
+static int qede_get_rxnfc(struct net_device *dev, struct ethtool_rxnfc *info,
+ u32 *rules __always_unused)
+{
+ struct qede_dev *edev = netdev_priv(dev);
+
+ switch (info->cmd) {
+ case ETHTOOL_GRXRINGS:
+ info->data = edev->num_rss;
+ return 0;
+ case ETHTOOL_GRXFH:
+ return qede_get_rss_flags(edev, info);
+ default:
+ DP_ERR(edev, "Command parameters not supported\n");
+ return -EOPNOTSUPP;
+ }
+}
+
+static int qede_set_rss_flags(struct qede_dev *edev, struct ethtool_rxnfc *info)
+{
+ struct qed_update_vport_params vport_update_params;
+ u8 set_caps = 0, clr_caps = 0;
+
+ DP_VERBOSE(edev, QED_MSG_DEBUG,
+ "Set rss flags command parameters: flow type = %d, data = %llu\n",
+ info->flow_type, info->data);
+
+ switch (info->flow_type) {
+ case TCP_V4_FLOW:
+ case TCP_V6_FLOW:
+ /* For TCP only 4-tuple hash is supported */
+ if (info->data ^ (RXH_IP_SRC | RXH_IP_DST |
+ RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
+ DP_INFO(edev, "Command parameters not supported\n");
+ return -EINVAL;
+ }
+ return 0;
+ case UDP_V4_FLOW:
+ /* For UDP either 2-tuple hash or 4-tuple hash is supported */
+ if (info->data == (RXH_IP_SRC | RXH_IP_DST |
+ RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
+ set_caps = QED_RSS_IPV4_UDP;
+ DP_VERBOSE(edev, QED_MSG_DEBUG,
+ "UDP 4-tuple enabled\n");
+ } else if (info->data == (RXH_IP_SRC | RXH_IP_DST)) {
+ clr_caps = QED_RSS_IPV4_UDP;
+ DP_VERBOSE(edev, QED_MSG_DEBUG,
+ "UDP 4-tuple disabled\n");
+ } else {
+ return -EINVAL;
+ }
+ break;
+ case UDP_V6_FLOW:
+ /* For UDP either 2-tuple hash or 4-tuple hash is supported */
+ if (info->data == (RXH_IP_SRC | RXH_IP_DST |
+ RXH_L4_B_0_1 | RXH_L4_B_2_3)) {
+ set_caps = QED_RSS_IPV6_UDP;
+ DP_VERBOSE(edev, QED_MSG_DEBUG,
+ "UDP 4-tuple enabled\n");
+ } else if (info->data == (RXH_IP_SRC | RXH_IP_DST)) {
+ clr_caps = QED_RSS_IPV6_UDP;
+ DP_VERBOSE(edev, QED_MSG_DEBUG,
+ "UDP 4-tuple disabled\n");
+ } else {
+ return -EINVAL;
+ }
+ break;
+ case IPV4_FLOW:
+ case IPV6_FLOW:
+ /* For IP only 2-tuple hash is supported */
+ if (info->data ^ (RXH_IP_SRC | RXH_IP_DST)) {
+ DP_INFO(edev, "Command parameters not supported\n");
+ return -EINVAL;
+ }
+ return 0;
+ case SCTP_V4_FLOW:
+ case AH_ESP_V4_FLOW:
+ case AH_V4_FLOW:
+ case ESP_V4_FLOW:
+ case SCTP_V6_FLOW:
+ case AH_ESP_V6_FLOW:
+ case AH_V6_FLOW:
+ case ESP_V6_FLOW:
+ case IP_USER_FLOW:
+ case ETHER_FLOW:
+ /* RSS is not supported for these protocols */
+ if (info->data) {
+ DP_INFO(edev, "Command parameters not supported\n");
+ return -EINVAL;
+ }
+ return 0;
+ default:
+ return -EINVAL;
+ }
+
+ /* No action is needed if there is no change in the rss capability */
+ if (edev->rss_params.rss_caps == ((edev->rss_params.rss_caps &
+ ~clr_caps) | set_caps))
+ return 0;
+
+ /* Update internal configuration */
+ edev->rss_params.rss_caps = (edev->rss_params.rss_caps & ~clr_caps) |
+ set_caps;
+ edev->rss_params_inited |= QEDE_RSS_CAPS_INITED;
+
+ /* Re-configure if possible */
+ if (netif_running(edev->ndev)) {
+ memset(&vport_update_params, 0, sizeof(vport_update_params));
+ vport_update_params.update_rss_flg = 1;
+ vport_update_params.vport_id = 0;
+ memcpy(&vport_update_params.rss_params, &edev->rss_params,
+ sizeof(vport_update_params.rss_params));
+ return edev->ops->vport_update(edev->cdev,
+ &vport_update_params);
+ }
+
+ return 0;
+}
+
+static int qede_set_rxnfc(struct net_device *dev, struct ethtool_rxnfc *info)
+{
+ struct qede_dev *edev = netdev_priv(dev);
+
+ switch (info->cmd) {
+ case ETHTOOL_SRXFH:
+ return qede_set_rss_flags(edev, info);
+ default:
+ DP_INFO(edev, "Command parameters not supported\n");
+ return -EOPNOTSUPP;
+ }
+}
+
+static u32 qede_get_rxfh_indir_size(struct net_device *dev)
+{
+ return QED_RSS_IND_TABLE_SIZE;
+}
+
+static u32 qede_get_rxfh_key_size(struct net_device *dev)
+{
+ struct qede_dev *edev = netdev_priv(dev);
+
+ return sizeof(edev->rss_params.rss_key);
+}
+
+static int qede_get_rxfh(struct net_device *dev, u32 *indir, u8 *key, u8 *hfunc)
+{
+ struct qede_dev *edev = netdev_priv(dev);
+ int i;
+
+ if (hfunc)
+ *hfunc = ETH_RSS_HASH_TOP;
+
+ if (!indir)
+ return 0;
+
+ for (i = 0; i < QED_RSS_IND_TABLE_SIZE; i++)
+ indir[i] = edev->rss_params.rss_ind_table[i];
+
+ if (key)
+ memcpy(key, edev->rss_params.rss_key,
+ qede_get_rxfh_key_size(dev));
+
+ return 0;
+}
+
+static int qede_set_rxfh(struct net_device *dev, const u32 *indir,
+ const u8 *key, const u8 hfunc)
+{
+ struct qed_update_vport_params vport_update_params;
+ struct qede_dev *edev = netdev_priv(dev);
+ int i;
+
+ if (hfunc != ETH_RSS_HASH_NO_CHANGE && hfunc != ETH_RSS_HASH_TOP)
+ return -EOPNOTSUPP;
+
+ if (!indir && !key)
+ return 0;
+
+ if (indir) {
+ for (i = 0; i < QED_RSS_IND_TABLE_SIZE; i++)
+ edev->rss_params.rss_ind_table[i] = indir[i];
+ edev->rss_params_inited |= QEDE_RSS_INDIR_INITED;
+ }
+
+ if (key) {
+ memcpy(&edev->rss_params.rss_key, key,
+ qede_get_rxfh_key_size(dev));
+ edev->rss_params_inited |= QEDE_RSS_KEY_INITED;
+ }
+
+ if (netif_running(edev->ndev)) {
+ memset(&vport_update_params, 0, sizeof(vport_update_params));
+ vport_update_params.update_rss_flg = 1;
+ vport_update_params.vport_id = 0;
+ memcpy(&vport_update_params.rss_params, &edev->rss_params,
+ sizeof(vport_update_params.rss_params));
+ return edev->ops->vport_update(edev->cdev,
+ &vport_update_params);
+ }
+
+ return 0;
+}
+
+/* This function enables the interrupt generation and the NAPI on the device */
+static void qede_netif_start(struct qede_dev *edev)
+{
+ int i;
+
+ if (!netif_running(edev->ndev))
+ return;
+
+ for_each_rss(i) {
+ /* Update and reenable interrupts */
+ qed_sb_ack(edev->fp_array[i].sb_info, IGU_INT_ENABLE, 1);
+ napi_enable(&edev->fp_array[i].napi);
+ }
+}
+
+/* This function disables the NAPI and the interrupt generation on the device */
+static void qede_netif_stop(struct qede_dev *edev)
+{
+ int i;
+
+ for_each_rss(i) {
+ napi_disable(&edev->fp_array[i].napi);
+ /* Disable interrupts */
+ qed_sb_ack(edev->fp_array[i].sb_info, IGU_INT_DISABLE, 0);
+ }
+}
+
+static int qede_selftest_transmit_traffic(struct qede_dev *edev,
+ struct sk_buff *skb)
+{
+ struct qede_tx_queue *txq = &edev->fp_array[0].txqs[0];
+ struct eth_tx_1st_bd *first_bd;
+ dma_addr_t mapping;
+ int i, idx, val;
+
+ /* Fill the entry in the SW ring and the BDs in the FW ring */
+ idx = txq->sw_tx_prod & NUM_TX_BDS_MAX;
+ txq->sw_tx_ring[idx].skb = skb;
+ first_bd = qed_chain_produce(&txq->tx_pbl);
+ memset(first_bd, 0, sizeof(*first_bd));
+ val = 1 << ETH_TX_1ST_BD_FLAGS_START_BD_SHIFT;
+ first_bd->data.bd_flags.bitfields = val;
+
+ /* Map skb linear data for DMA and set in the first BD */
+ mapping = dma_map_single(&edev->pdev->dev, skb->data,
+ skb_headlen(skb), DMA_TO_DEVICE);
+ if (unlikely(dma_mapping_error(&edev->pdev->dev, mapping))) {
+ DP_NOTICE(edev, "SKB mapping failed\n");
+ return -ENOMEM;
+ }
+ BD_SET_UNMAP_ADDR_LEN(first_bd, mapping, skb_headlen(skb));
+
+ /* update the first BD with the actual num BDs */
+ first_bd->data.nbds = 1;
+ txq->sw_tx_prod++;
+ /* 'next page' entries are counted in the producer value */
+ val = cpu_to_le16(qed_chain_get_prod_idx(&txq->tx_pbl));
+ txq->tx_db.data.bd_prod = val;
+
+ /* wmb makes sure that the BD data is updated before updating the
+ * producer, otherwise FW may read old data from the BDs.
+ */
+ wmb();
+ barrier();
+ writel(txq->tx_db.raw, txq->doorbell_addr);
+
+ /* mmiowb is needed to synchronize doorbell writes from more than one
+ * processor. It guarantees that the write arrives to the device before
+ * the queue lock is released and another start_xmit is called (possibly
+ * on another CPU). Without this barrier, the next doorbell can bypass
+ * this doorbell. This is applicable to IA64/Altix systems.
+ */
+ mmiowb();
+
+ for (i = 0; i < QEDE_SELFTEST_POLL_COUNT; i++) {
+ if (qede_txq_has_work(txq))
+ break;
+ usleep_range(100, 200);
+ }
+
+ if (!qede_txq_has_work(txq)) {
+ DP_NOTICE(edev, "Tx completion didn't happen\n");
+ return -1;
+ }
+
+ first_bd = (struct eth_tx_1st_bd *)qed_chain_consume(&txq->tx_pbl);
+ dma_unmap_page(&edev->pdev->dev, BD_UNMAP_ADDR(first_bd),
+ BD_UNMAP_LEN(first_bd), DMA_TO_DEVICE);
+ txq->sw_tx_cons++;
+ txq->sw_tx_ring[idx].skb = NULL;
+
+ return 0;
+}
+
+static int qede_selftest_receive_traffic(struct qede_dev *edev)
+{
+ struct qede_rx_queue *rxq = edev->fp_array[0].rxq;
+ u16 hw_comp_cons, sw_comp_cons, sw_rx_index, len;
+ struct eth_fast_path_rx_reg_cqe *fp_cqe;
+ struct sw_rx_data *sw_rx_data;
+ union eth_rx_cqe *cqe;
+ u8 *data_ptr;
+ int i;
+
+ /* The packet is expected to be received on rx-queue 0 even though RSS
+ * is enabled. This is because queue 0 is configured as the default
+ * queue and the loopback traffic is not IP.
+ */
+ for (i = 0; i < QEDE_SELFTEST_POLL_COUNT; i++) {
+ if (qede_has_rx_work(rxq))
+ break;
+ usleep_range(100, 200);
+ }
+
+ if (!qede_has_rx_work(rxq)) {
+ DP_NOTICE(edev, "Failed to receive the traffic\n");
+ return -1;
+ }
+
+ hw_comp_cons = le16_to_cpu(*rxq->hw_cons_ptr);
+ sw_comp_cons = qed_chain_get_cons_idx(&rxq->rx_comp_ring);
+
+ /* Memory barrier to prevent the CPU from doing speculative reads of the
+ * CQE/BD before reading hw_comp_cons. If the CQE is read before it is
+ * written by the FW (the FW writes the CQE and the SB, and only then the
+ * CPU reads hw_comp_cons), the CPU will use an old CQE.
+ */
+ rmb();
+
+ /* Get the CQE from the completion ring */
+ cqe = (union eth_rx_cqe *)qed_chain_consume(&rxq->rx_comp_ring);
+
+ /* Get the data from the SW ring */
+ sw_rx_index = rxq->sw_rx_cons & NUM_RX_BDS_MAX;
+ sw_rx_data = &rxq->sw_rx_ring[sw_rx_index];
+ fp_cqe = &cqe->fast_path_regular;
+ len = le16_to_cpu(fp_cqe->len_on_first_bd);
+ data_ptr = (u8 *)(page_address(sw_rx_data->data) +
+ fp_cqe->placement_offset + sw_rx_data->page_offset);
+ for (i = ETH_HLEN; i < len; i++)
+ if (data_ptr[i] != (unsigned char)(i & 0xff)) {
+ DP_NOTICE(edev, "Loopback test failed\n");
+ qede_recycle_rx_bd_ring(rxq, edev, 1);
+ return -1;
+ }
+
+ qede_recycle_rx_bd_ring(rxq, edev, 1);
+
+ return 0;
+}
+
+static int qede_selftest_run_loopback(struct qede_dev *edev, u32 loopback_mode)
+{
+ struct qed_link_params link_params;
+ struct sk_buff *skb = NULL;
+ int rc = 0, i;
+ u32 pkt_size;
+ u8 *packet;
+
+ if (!netif_running(edev->ndev)) {
+ DP_NOTICE(edev, "Interface is down\n");
+ return -EINVAL;
+ }
+
+ qede_netif_stop(edev);
+
+ /* Bring up the link in Loopback mode */
+ memset(&link_params, 0, sizeof(link_params));
+ link_params.link_up = true;
+ link_params.override_flags = QED_LINK_OVERRIDE_LOOPBACK_MODE;
+ link_params.loopback_mode = loopback_mode;
+ edev->ops->common->set_link(edev->cdev, &link_params);
+
+ /* Wait for loopback configuration to apply */
+ msleep_interruptible(500);
+
+ /* prepare the loopback packet */
+ pkt_size = edev->ndev->mtu + ETH_HLEN;
+
+ skb = netdev_alloc_skb(edev->ndev, pkt_size);
+ if (!skb) {
+ DP_INFO(edev, "Can't allocate skb\n");
+ rc = -ENOMEM;
+ goto test_loopback_exit;
+ }
+ packet = skb_put(skb, pkt_size);
+ ether_addr_copy(packet, edev->ndev->dev_addr);
+ ether_addr_copy(packet + ETH_ALEN, edev->ndev->dev_addr);
+ memset(packet + (2 * ETH_ALEN), 0x77, (ETH_HLEN - (2 * ETH_ALEN)));
+ for (i = ETH_HLEN; i < pkt_size; i++)
+ packet[i] = (unsigned char)(i & 0xff);
+
+ rc = qede_selftest_transmit_traffic(edev, skb);
+ if (rc)
+ goto test_loopback_exit;
+
+ rc = qede_selftest_receive_traffic(edev);
+ if (rc)
+ goto test_loopback_exit;
+
+ DP_VERBOSE(edev, NETIF_MSG_RX_STATUS, "Loopback test successful\n");
+
+test_loopback_exit:
+ dev_kfree_skb(skb);
+
+ /* Bring up the link in Normal mode */
+ memset(&link_params, 0, sizeof(link_params));
+ link_params.link_up = true;
+ link_params.override_flags = QED_LINK_OVERRIDE_LOOPBACK_MODE;
+ link_params.loopback_mode = QED_LINK_LOOPBACK_NONE;
+ edev->ops->common->set_link(edev->cdev, &link_params);
+
+ /* Wait for loopback configuration to apply */
+ msleep_interruptible(500);
+
+ qede_netif_start(edev);
+
+ return rc;
+}
+
+static void qede_self_test(struct net_device *dev,
+ struct ethtool_test *etest, u64 *buf)
+{
+ struct qede_dev *edev = netdev_priv(dev);
+
+ DP_VERBOSE(edev, QED_MSG_DEBUG,
+ "Self-test command parameters: offline = %d, external_lb = %d\n",
+ (etest->flags & ETH_TEST_FL_OFFLINE),
+ (etest->flags & ETH_TEST_FL_EXTERNAL_LB) >> 2);
+
+ memset(buf, 0, sizeof(u64) * QEDE_ETHTOOL_TEST_MAX);
+
+ if (etest->flags & ETH_TEST_FL_OFFLINE) {
+ if (qede_selftest_run_loopback(edev,
+ QED_LINK_LOOPBACK_INT_PHY)) {
+ buf[QEDE_ETHTOOL_INT_LOOPBACK] = 1;
+ etest->flags |= ETH_TEST_FL_FAILED;
+ }
+ }
+
+ if (edev->ops->common->selftest->selftest_interrupt(edev->cdev)) {
+ buf[QEDE_ETHTOOL_INTERRUPT_TEST] = 1;
+ etest->flags |= ETH_TEST_FL_FAILED;
+ }
+
+ if (edev->ops->common->selftest->selftest_memory(edev->cdev)) {
+ buf[QEDE_ETHTOOL_MEMORY_TEST] = 1;
+ etest->flags |= ETH_TEST_FL_FAILED;
+ }
+
+ if (edev->ops->common->selftest->selftest_register(edev->cdev)) {
+ buf[QEDE_ETHTOOL_REGISTER_TEST] = 1;
+ etest->flags |= ETH_TEST_FL_FAILED;
+ }
+
+ if (edev->ops->common->selftest->selftest_clock(edev->cdev)) {
+ buf[QEDE_ETHTOOL_CLOCK_TEST] = 1;
+ etest->flags |= ETH_TEST_FL_FAILED;
+ }
+}
+
static const struct ethtool_ops qede_ethtool_ops = {
.get_settings = qede_get_settings,
.set_settings = qede_set_settings,
@@ -584,13 +1141,47 @@ static const struct ethtool_ops qede_ethtool_ops = {
.get_strings = qede_get_strings,
.set_phys_id = qede_set_phys_id,
.get_ethtool_stats = qede_get_ethtool_stats,
+ .get_priv_flags = qede_get_priv_flags,
.get_sset_count = qede_get_sset_count,
+ .get_rxnfc = qede_get_rxnfc,
+ .set_rxnfc = qede_set_rxnfc,
+ .get_rxfh_indir_size = qede_get_rxfh_indir_size,
+ .get_rxfh_key_size = qede_get_rxfh_key_size,
+ .get_rxfh = qede_get_rxfh,
+ .set_rxfh = qede_set_rxfh,
+ .get_channels = qede_get_channels,
+ .set_channels = qede_set_channels,
+ .self_test = qede_self_test,
+};
+static const struct ethtool_ops qede_vf_ethtool_ops = {
+ .get_settings = qede_get_settings,
+ .get_drvinfo = qede_get_drvinfo,
+ .get_msglevel = qede_get_msglevel,
+ .set_msglevel = qede_set_msglevel,
+ .get_link = qede_get_link,
+ .get_ringparam = qede_get_ringparam,
+ .set_ringparam = qede_set_ringparam,
+ .get_strings = qede_get_strings,
+ .get_ethtool_stats = qede_get_ethtool_stats,
+ .get_priv_flags = qede_get_priv_flags,
+ .get_sset_count = qede_get_sset_count,
+ .get_rxnfc = qede_get_rxnfc,
+ .set_rxnfc = qede_set_rxnfc,
+ .get_rxfh_indir_size = qede_get_rxfh_indir_size,
+ .get_rxfh_key_size = qede_get_rxfh_key_size,
+ .get_rxfh = qede_get_rxfh,
+ .set_rxfh = qede_set_rxfh,
.get_channels = qede_get_channels,
.set_channels = qede_set_channels,
};
void qede_set_ethtool_ops(struct net_device *dev)
{
- dev->ethtool_ops = &qede_ethtool_ops;
+ struct qede_dev *edev = netdev_priv(dev);
+
+ if (IS_VF(edev))
+ dev->ethtool_ops = &qede_vf_ethtool_ops;
+ else
+ dev->ethtool_ops = &qede_ethtool_ops;
}
diff --git a/drivers/net/ethernet/qlogic/qede/qede_main.c b/drivers/net/ethernet/qlogic/qede/qede_main.c
index 7869465435fa..337e839ca586 100644
--- a/drivers/net/ethernet/qlogic/qede/qede_main.c
+++ b/drivers/net/ethernet/qlogic/qede/qede_main.c
@@ -24,7 +24,12 @@
#include <linux/netdev_features.h>
#include <linux/udp.h>
#include <linux/tcp.h>
+#ifdef CONFIG_QEDE_VXLAN
#include <net/vxlan.h>
+#endif
+#ifdef CONFIG_QEDE_GENEVE
+#include <net/geneve.h>
+#endif
#include <linux/ip.h>
#include <net/ipv6.h>
#include <net/tcp.h>
@@ -58,6 +63,7 @@ static const struct qed_eth_ops *qed_ops;
#define CHIP_NUM_57980S_100 0x1644
#define CHIP_NUM_57980S_50 0x1654
#define CHIP_NUM_57980S_25 0x1656
+#define CHIP_NUM_57980S_IOV 0x1664
#ifndef PCI_DEVICE_ID_NX2_57980E
#define PCI_DEVICE_ID_57980S_40 CHIP_NUM_57980S_40
@@ -66,15 +72,22 @@ static const struct qed_eth_ops *qed_ops;
#define PCI_DEVICE_ID_57980S_100 CHIP_NUM_57980S_100
#define PCI_DEVICE_ID_57980S_50 CHIP_NUM_57980S_50
#define PCI_DEVICE_ID_57980S_25 CHIP_NUM_57980S_25
+#define PCI_DEVICE_ID_57980S_IOV CHIP_NUM_57980S_IOV
#endif
+enum qede_pci_private {
+ QEDE_PRIVATE_PF,
+ QEDE_PRIVATE_VF
+};
+
static const struct pci_device_id qede_pci_tbl[] = {
- { PCI_VDEVICE(QLOGIC, PCI_DEVICE_ID_57980S_40), 0 },
- { PCI_VDEVICE(QLOGIC, PCI_DEVICE_ID_57980S_10), 0 },
- { PCI_VDEVICE(QLOGIC, PCI_DEVICE_ID_57980S_MF), 0 },
- { PCI_VDEVICE(QLOGIC, PCI_DEVICE_ID_57980S_100), 0 },
- { PCI_VDEVICE(QLOGIC, PCI_DEVICE_ID_57980S_50), 0 },
- { PCI_VDEVICE(QLOGIC, PCI_DEVICE_ID_57980S_25), 0 },
+ {PCI_VDEVICE(QLOGIC, PCI_DEVICE_ID_57980S_40), QEDE_PRIVATE_PF},
+ {PCI_VDEVICE(QLOGIC, PCI_DEVICE_ID_57980S_10), QEDE_PRIVATE_PF},
+ {PCI_VDEVICE(QLOGIC, PCI_DEVICE_ID_57980S_MF), QEDE_PRIVATE_PF},
+ {PCI_VDEVICE(QLOGIC, PCI_DEVICE_ID_57980S_100), QEDE_PRIVATE_PF},
+ {PCI_VDEVICE(QLOGIC, PCI_DEVICE_ID_57980S_50), QEDE_PRIVATE_PF},
+ {PCI_VDEVICE(QLOGIC, PCI_DEVICE_ID_57980S_25), QEDE_PRIVATE_PF},
+ {PCI_VDEVICE(QLOGIC, PCI_DEVICE_ID_57980S_IOV), QEDE_PRIVATE_VF},
{ 0 }
};
@@ -89,17 +102,87 @@ static int qede_alloc_rx_buffer(struct qede_dev *edev,
struct qede_rx_queue *rxq);
static void qede_link_update(void *dev, struct qed_link_output *link);
+#ifdef CONFIG_QED_SRIOV
+static int qede_set_vf_vlan(struct net_device *ndev, int vf, u16 vlan, u8 qos)
+{
+ struct qede_dev *edev = netdev_priv(ndev);
+
+ if (vlan > 4095) {
+ DP_NOTICE(edev, "Illegal vlan value %d\n", vlan);
+ return -EINVAL;
+ }
+
+ DP_VERBOSE(edev, QED_MSG_IOV, "Setting Vlan 0x%04x to VF [%d]\n",
+ vlan, vf);
+
+ return edev->ops->iov->set_vlan(edev->cdev, vlan, vf);
+}
+
+static int qede_set_vf_mac(struct net_device *ndev, int vfidx, u8 *mac)
+{
+ struct qede_dev *edev = netdev_priv(ndev);
+
+ DP_VERBOSE(edev, QED_MSG_IOV,
+ "Setting MAC %02x:%02x:%02x:%02x:%02x:%02x to VF [%d]\n",
+ mac[0], mac[1], mac[2], mac[3], mac[4], mac[5], vfidx);
+
+ if (!is_valid_ether_addr(mac)) {
+ DP_VERBOSE(edev, QED_MSG_IOV, "MAC address isn't valid\n");
+ return -EINVAL;
+ }
+
+ return edev->ops->iov->set_mac(edev->cdev, mac, vfidx);
+}
+
+static int qede_sriov_configure(struct pci_dev *pdev, int num_vfs_param)
+{
+ struct qede_dev *edev = netdev_priv(pci_get_drvdata(pdev));
+ struct qed_dev_info *qed_info = &edev->dev_info.common;
+ int rc;
+
+ DP_VERBOSE(edev, QED_MSG_IOV, "Requested %d VFs\n", num_vfs_param);
+
+ rc = edev->ops->iov->configure(edev->cdev, num_vfs_param);
+
+ /* Enable/Disable Tx switching for PF */
+ if ((rc == num_vfs_param) && netif_running(edev->ndev) &&
+ qed_info->mf_mode != QED_MF_NPAR && qed_info->tx_switching) {
+ struct qed_update_vport_params params;
+
+ memset(&params, 0, sizeof(params));
+ params.vport_id = 0;
+ params.update_tx_switching_flg = 1;
+ params.tx_switching_flg = num_vfs_param ? 1 : 0;
+ edev->ops->vport_update(edev->cdev, &params);
+ }
+
+ return rc;
+}
+#endif
+
static struct pci_driver qede_pci_driver = {
.name = "qede",
.id_table = qede_pci_tbl,
.probe = qede_probe,
.remove = qede_remove,
+#ifdef CONFIG_QED_SRIOV
+ .sriov_configure = qede_sriov_configure,
+#endif
};
+static void qede_force_mac(void *dev, u8 *mac)
+{
+ struct qede_dev *edev = dev;
+
+ ether_addr_copy(edev->ndev->dev_addr, mac);
+ ether_addr_copy(edev->primary_mac, mac);
+}
+
static struct qed_eth_cb_ops qede_ll_ops = {
{
.link_update = qede_link_update,
},
+ .force_mac = qede_force_mac,
};
static int qede_netdev_event(struct notifier_block *this, unsigned long event,
@@ -141,19 +224,10 @@ static
int __init qede_init(void)
{
int ret;
- u32 qed_ver;
pr_notice("qede_init: %s\n", version);
- qed_ver = qed_get_protocol_version(QED_PROTOCOL_ETH);
- if (qed_ver != QEDE_ETH_INTERFACE_VERSION) {
- pr_notice("Version mismatch [%08x != %08x]\n",
- qed_ver,
- QEDE_ETH_INTERFACE_VERSION);
- return -EINVAL;
- }
-
- qed_ops = qed_get_eth_ops(QEDE_ETH_INTERFACE_VERSION);
+ qed_ops = qed_get_eth_ops();
if (!qed_ops) {
pr_notice("Failed to get qed ethtool operations\n");
return -EINVAL;
@@ -319,6 +393,9 @@ static u32 qede_xmit_type(struct qede_dev *edev,
(ipv6_hdr(skb)->nexthdr == NEXTHDR_IPV6))
*ipv6_ext = 1;
+ if (skb->encapsulation)
+ rc |= XMIT_ENC;
+
if (skb_is_gso(skb))
rc |= XMIT_LSO;
@@ -380,6 +457,16 @@ static int map_frag_to_bd(struct qede_dev *edev,
return 0;
}
+static u16 qede_get_skb_hlen(struct sk_buff *skb, bool is_encap_pkt)
+{
+ if (is_encap_pkt)
+ return (skb_inner_transport_header(skb) +
+ inner_tcp_hdrlen(skb) - skb->data);
+ else
+ return (skb_transport_header(skb) +
+ tcp_hdrlen(skb) - skb->data);
+}
+
/* +2 for 1st BD for headers and 2nd BD for headlen (if required) */
#if ((MAX_SKB_FRAGS + 2) > ETH_TX_MAX_BDS_PER_NON_LSO_PACKET)
static bool qede_pkt_req_lin(struct qede_dev *edev, struct sk_buff *skb,
@@ -390,8 +477,7 @@ static bool qede_pkt_req_lin(struct qede_dev *edev, struct sk_buff *skb,
if (xmit_type & XMIT_LSO) {
int hlen;
- hlen = skb_transport_header(skb) +
- tcp_hdrlen(skb) - skb->data;
+ hlen = qede_get_skb_hlen(skb, xmit_type & XMIT_ENC);
/* linear payload would require its own BD */
if (skb_headlen(skb) > hlen)
@@ -421,7 +507,7 @@ netdev_tx_t qede_start_xmit(struct sk_buff *skb,
u8 xmit_type;
u16 idx;
u16 hlen;
- bool data_split;
+ bool data_split = false;
/* Get tx-queue context and netdev index */
txq_index = skb_get_queue_mapping(skb);
@@ -499,7 +585,18 @@ netdev_tx_t qede_start_xmit(struct sk_buff *skb,
first_bd->data.bd_flags.bitfields |=
1 << ETH_TX_1ST_BD_FLAGS_L4_CSUM_SHIFT;
- first_bd->data.bitfields |= cpu_to_le16(temp);
+ if (xmit_type & XMIT_ENC) {
+ first_bd->data.bd_flags.bitfields |=
+ 1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
+ } else {
+ /* In cases where the OS doesn't indicate inner offloads
+ * for a tunnelled packet, we need to override the HW
+ * tunnel configuration so that the packet is treated as
+ * a regular non-tunnelled packet and no inner offloads
+ * are done by the hardware.
+ */
+ first_bd->data.bitfields |= cpu_to_le16(temp);
+ }
/* If the packet is IPv6 with extension header, indicate that
* to FW and pass few params, since the device cracker doesn't
@@ -515,10 +612,15 @@ netdev_tx_t qede_start_xmit(struct sk_buff *skb,
third_bd->data.lso_mss =
cpu_to_le16(skb_shinfo(skb)->gso_size);
- first_bd->data.bd_flags.bitfields |=
- 1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
- hlen = skb_transport_header(skb) +
- tcp_hdrlen(skb) - skb->data;
+ if (unlikely(xmit_type & XMIT_ENC)) {
+ first_bd->data.bd_flags.bitfields |=
+ 1 << ETH_TX_1ST_BD_FLAGS_TUNN_IP_CSUM_SHIFT;
+ hlen = qede_get_skb_hlen(skb, true);
+ } else {
+ first_bd->data.bd_flags.bitfields |=
+ 1 << ETH_TX_1ST_BD_FLAGS_IP_CSUM_SHIFT;
+ hlen = qede_get_skb_hlen(skb, false);
+ }
/* @@@TBD - if will not be removed need to check */
third_bd->data.bitfields |=
@@ -644,7 +746,7 @@ netdev_tx_t qede_start_xmit(struct sk_buff *skb,
return NETDEV_TX_OK;
}
-static int qede_txq_has_work(struct qede_tx_queue *txq)
+int qede_txq_has_work(struct qede_tx_queue *txq)
{
u16 hw_bd_cons;
@@ -727,7 +829,7 @@ static int qede_tx_int(struct qede_dev *edev,
return 0;
}
-static bool qede_has_rx_work(struct qede_rx_queue *rxq)
+bool qede_has_rx_work(struct qede_rx_queue *rxq)
{
u16 hw_comp_cons, sw_comp_cons;
@@ -782,8 +884,8 @@ static inline void qede_reuse_page(struct qede_dev *edev,
/* In case of allocation failures reuse buffers
* from consumer index to produce buffers for firmware
*/
-static void qede_recycle_rx_bd_ring(struct qede_rx_queue *rxq,
- struct qede_dev *edev, u8 count)
+void qede_recycle_rx_bd_ring(struct qede_rx_queue *rxq,
+ struct qede_dev *edev, u8 count)
{
struct sw_rx_data *curr_cons;
@@ -818,7 +920,7 @@ static inline int qede_realloc_rx_buffer(struct qede_dev *edev,
* network stack to take the ownership of the page
* which can be recycled multiple times by the driver.
*/
- atomic_inc(&curr_cons->data->_count);
+ page_ref_inc(curr_cons->data);
qede_reuse_page(edev, rxq, curr_cons);
}
@@ -879,6 +981,9 @@ static void qede_set_skb_csum(struct sk_buff *skb, u8 csum_flag)
if (csum_flag & QEDE_CSUM_UNNECESSARY)
skb->ip_summed = CHECKSUM_UNNECESSARY;
+
+ if (csum_flag & QEDE_TUNN_CSUM_UNNECESSARY)
+ skb->csum_level = 1;
}
static inline void qede_skb_receive(struct qede_dev *edev,
@@ -931,7 +1036,7 @@ static int qede_fill_frag_skb(struct qede_dev *edev,
/* Incr page ref count to reuse on allocation failure
* so that it doesn't get freed while freeing SKB.
*/
- atomic_inc(&current_bd->data->_count);
+ page_ref_inc(current_bd->data);
goto out;
}
@@ -971,8 +1076,7 @@ static void qede_tpa_start(struct qede_dev *edev,
* start until its over and we don't want to risk allocation failing
* here, so re-allocate when aggregation will be over.
*/
- dma_unmap_addr_set(sw_rx_data_prod, mapping,
- dma_unmap_addr(replace_buf, mapping));
+ sw_rx_data_prod->mapping = replace_buf->mapping;
sw_rx_data_prod->data = replace_buf->data;
rx_bd_prod->addr.hi = cpu_to_le32(upper_32_bits(mapping));
@@ -1188,13 +1292,47 @@ err:
tpa_info->skb = NULL;
}
-static u8 qede_check_csum(u16 flag)
+static bool qede_tunn_exist(u16 flag)
+{
+ return !!(flag & (PARSING_AND_ERR_FLAGS_TUNNELEXIST_MASK <<
+ PARSING_AND_ERR_FLAGS_TUNNELEXIST_SHIFT));
+}
+
+static u8 qede_check_tunn_csum(u16 flag)
+{
+ u16 csum_flag = 0;
+ u8 tcsum = 0;
+
+ if (flag & (PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMWASCALCULATED_MASK <<
+ PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMWASCALCULATED_SHIFT))
+ csum_flag |= PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMERROR_MASK <<
+ PARSING_AND_ERR_FLAGS_TUNNELL4CHKSMERROR_SHIFT;
+
+ if (flag & (PARSING_AND_ERR_FLAGS_L4CHKSMWASCALCULATED_MASK <<
+ PARSING_AND_ERR_FLAGS_L4CHKSMWASCALCULATED_SHIFT)) {
+ csum_flag |= PARSING_AND_ERR_FLAGS_L4CHKSMERROR_MASK <<
+ PARSING_AND_ERR_FLAGS_L4CHKSMERROR_SHIFT;
+ tcsum = QEDE_TUNN_CSUM_UNNECESSARY;
+ }
+
+ csum_flag |= PARSING_AND_ERR_FLAGS_TUNNELIPHDRERROR_MASK <<
+ PARSING_AND_ERR_FLAGS_TUNNELIPHDRERROR_SHIFT |
+ PARSING_AND_ERR_FLAGS_IPHDRERROR_MASK <<
+ PARSING_AND_ERR_FLAGS_IPHDRERROR_SHIFT;
+
+ if (csum_flag & flag)
+ return QEDE_CSUM_ERROR;
+
+ return QEDE_CSUM_UNNECESSARY | tcsum;
+}
+
+static u8 qede_check_notunn_csum(u16 flag)
{
u16 csum_flag = 0;
u8 csum = 0;
- if ((PARSING_AND_ERR_FLAGS_L4CHKSMWASCALCULATED_MASK <<
- PARSING_AND_ERR_FLAGS_L4CHKSMWASCALCULATED_SHIFT) & flag) {
+ if (flag & (PARSING_AND_ERR_FLAGS_L4CHKSMWASCALCULATED_MASK <<
+ PARSING_AND_ERR_FLAGS_L4CHKSMWASCALCULATED_SHIFT)) {
csum_flag |= PARSING_AND_ERR_FLAGS_L4CHKSMERROR_MASK <<
PARSING_AND_ERR_FLAGS_L4CHKSMERROR_SHIFT;
csum = QEDE_CSUM_UNNECESSARY;
@@ -1209,6 +1347,14 @@ static u8 qede_check_csum(u16 flag)
return csum;
}
+static u8 qede_check_csum(u16 flag)
+{
+ if (!qede_tunn_exist(flag))
+ return qede_check_notunn_csum(flag);
+ else
+ return qede_check_tunn_csum(flag);
+}
+
static int qede_rx_int(struct qede_fastpath *fp, int budget)
{
struct qede_dev *edev = fp->edev;
@@ -1340,7 +1486,7 @@ static int qede_rx_int(struct qede_fastpath *fp, int budget)
* freeing SKB.
*/
- atomic_inc(&sw_rx_data->data->_count);
+ page_ref_inc(sw_rx_data->data);
rxq->rx_alloc_errors++;
qede_recycle_rx_bd_ring(rxq, edev,
fp_cqe->bd_num);
@@ -1569,16 +1715,25 @@ void qede_fill_by_demand_stats(struct qede_dev *edev)
edev->stats.coalesced_bytes = stats.tpa_coalesced_bytes;
edev->stats.rx_64_byte_packets = stats.rx_64_byte_packets;
- edev->stats.rx_127_byte_packets = stats.rx_127_byte_packets;
- edev->stats.rx_255_byte_packets = stats.rx_255_byte_packets;
- edev->stats.rx_511_byte_packets = stats.rx_511_byte_packets;
- edev->stats.rx_1023_byte_packets = stats.rx_1023_byte_packets;
- edev->stats.rx_1518_byte_packets = stats.rx_1518_byte_packets;
- edev->stats.rx_1522_byte_packets = stats.rx_1522_byte_packets;
- edev->stats.rx_2047_byte_packets = stats.rx_2047_byte_packets;
- edev->stats.rx_4095_byte_packets = stats.rx_4095_byte_packets;
- edev->stats.rx_9216_byte_packets = stats.rx_9216_byte_packets;
- edev->stats.rx_16383_byte_packets = stats.rx_16383_byte_packets;
+ edev->stats.rx_65_to_127_byte_packets = stats.rx_65_to_127_byte_packets;
+ edev->stats.rx_128_to_255_byte_packets =
+ stats.rx_128_to_255_byte_packets;
+ edev->stats.rx_256_to_511_byte_packets =
+ stats.rx_256_to_511_byte_packets;
+ edev->stats.rx_512_to_1023_byte_packets =
+ stats.rx_512_to_1023_byte_packets;
+ edev->stats.rx_1024_to_1518_byte_packets =
+ stats.rx_1024_to_1518_byte_packets;
+ edev->stats.rx_1519_to_1522_byte_packets =
+ stats.rx_1519_to_1522_byte_packets;
+ edev->stats.rx_1519_to_2047_byte_packets =
+ stats.rx_1519_to_2047_byte_packets;
+ edev->stats.rx_2048_to_4095_byte_packets =
+ stats.rx_2048_to_4095_byte_packets;
+ edev->stats.rx_4096_to_9216_byte_packets =
+ stats.rx_4096_to_9216_byte_packets;
+ edev->stats.rx_9217_to_16383_byte_packets =
+ stats.rx_9217_to_16383_byte_packets;
edev->stats.rx_crc_errors = stats.rx_crc_errors;
edev->stats.rx_mac_crtl_frames = stats.rx_mac_crtl_frames;
edev->stats.rx_pause_frames = stats.rx_pause_frames;
@@ -1652,6 +1807,49 @@ static struct rtnl_link_stats64 *qede_get_stats64(
return stats;
}
+#ifdef CONFIG_QED_SRIOV
+static int qede_get_vf_config(struct net_device *dev, int vfidx,
+ struct ifla_vf_info *ivi)
+{
+ struct qede_dev *edev = netdev_priv(dev);
+
+ if (!edev->ops)
+ return -EINVAL;
+
+ return edev->ops->iov->get_config(edev->cdev, vfidx, ivi);
+}
+
+static int qede_set_vf_rate(struct net_device *dev, int vfidx,
+ int min_tx_rate, int max_tx_rate)
+{
+ struct qede_dev *edev = netdev_priv(dev);
+
+ return edev->ops->iov->set_rate(edev->cdev, vfidx, max_tx_rate,
+ max_tx_rate);
+}
+
+static int qede_set_vf_spoofchk(struct net_device *dev, int vfidx, bool val)
+{
+ struct qede_dev *edev = netdev_priv(dev);
+
+ if (!edev->ops)
+ return -EINVAL;
+
+ return edev->ops->iov->set_spoof(edev->cdev, vfidx, val);
+}
+
+static int qede_set_vf_link_state(struct net_device *dev, int vfidx,
+ int link_state)
+{
+ struct qede_dev *edev = netdev_priv(dev);
+
+ if (!edev->ops)
+ return -EINVAL;
+
+ return edev->ops->iov->set_link_state(edev->cdev, vfidx, link_state);
+}
+#endif
+
static void qede_config_accept_any_vlan(struct qede_dev *edev, bool action)
{
struct qed_update_vport_params params;
@@ -1893,6 +2091,76 @@ static void qede_vlan_mark_nonconfigured(struct qede_dev *edev)
edev->accept_any_vlan = false;
}
+#ifdef CONFIG_QEDE_VXLAN
+static void qede_add_vxlan_port(struct net_device *dev,
+ sa_family_t sa_family, __be16 port)
+{
+ struct qede_dev *edev = netdev_priv(dev);
+ u16 t_port = ntohs(port);
+
+ if (edev->vxlan_dst_port)
+ return;
+
+ edev->vxlan_dst_port = t_port;
+
+ DP_VERBOSE(edev, QED_MSG_DEBUG, "Added vxlan port=%d", t_port);
+
+ set_bit(QEDE_SP_VXLAN_PORT_CONFIG, &edev->sp_flags);
+ schedule_delayed_work(&edev->sp_task, 0);
+}
+
+static void qede_del_vxlan_port(struct net_device *dev,
+ sa_family_t sa_family, __be16 port)
+{
+ struct qede_dev *edev = netdev_priv(dev);
+ u16 t_port = ntohs(port);
+
+ if (t_port != edev->vxlan_dst_port)
+ return;
+
+ edev->vxlan_dst_port = 0;
+
+ DP_VERBOSE(edev, QED_MSG_DEBUG, "Deleted vxlan port=%d", t_port);
+
+ set_bit(QEDE_SP_VXLAN_PORT_CONFIG, &edev->sp_flags);
+ schedule_delayed_work(&edev->sp_task, 0);
+}
+#endif
+
+#ifdef CONFIG_QEDE_GENEVE
+static void qede_add_geneve_port(struct net_device *dev,
+ sa_family_t sa_family, __be16 port)
+{
+ struct qede_dev *edev = netdev_priv(dev);
+ u16 t_port = ntohs(port);
+
+ if (edev->geneve_dst_port)
+ return;
+
+ edev->geneve_dst_port = t_port;
+
+ DP_VERBOSE(edev, QED_MSG_DEBUG, "Added geneve port=%d", t_port);
+ set_bit(QEDE_SP_GENEVE_PORT_CONFIG, &edev->sp_flags);
+ schedule_delayed_work(&edev->sp_task, 0);
+}
+
+static void qede_del_geneve_port(struct net_device *dev,
+ sa_family_t sa_family, __be16 port)
+{
+ struct qede_dev *edev = netdev_priv(dev);
+ u16 t_port = ntohs(port);
+
+ if (t_port != edev->geneve_dst_port)
+ return;
+
+ edev->geneve_dst_port = 0;
+
+ DP_VERBOSE(edev, QED_MSG_DEBUG, "Deleted geneve port=%d", t_port);
+ set_bit(QEDE_SP_GENEVE_PORT_CONFIG, &edev->sp_flags);
+ schedule_delayed_work(&edev->sp_task, 0);
+}
+#endif
+
static const struct net_device_ops qede_netdev_ops = {
.ndo_open = qede_open,
.ndo_stop = qede_close,
@@ -1901,9 +2169,27 @@ static const struct net_device_ops qede_netdev_ops = {
.ndo_set_mac_address = qede_set_mac_addr,
.ndo_validate_addr = eth_validate_addr,
.ndo_change_mtu = qede_change_mtu,
+#ifdef CONFIG_QED_SRIOV
+ .ndo_set_vf_mac = qede_set_vf_mac,
+ .ndo_set_vf_vlan = qede_set_vf_vlan,
+#endif
.ndo_vlan_rx_add_vid = qede_vlan_rx_add_vid,
.ndo_vlan_rx_kill_vid = qede_vlan_rx_kill_vid,
.ndo_get_stats64 = qede_get_stats64,
+#ifdef CONFIG_QED_SRIOV
+ .ndo_set_vf_link_state = qede_set_vf_link_state,
+ .ndo_set_vf_spoofchk = qede_set_vf_spoofchk,
+ .ndo_get_vf_config = qede_get_vf_config,
+ .ndo_set_vf_rate = qede_set_vf_rate,
+#endif
+#ifdef CONFIG_QEDE_VXLAN
+ .ndo_add_vxlan_port = qede_add_vxlan_port,
+ .ndo_del_vxlan_port = qede_del_vxlan_port,
+#endif
+#ifdef CONFIG_QEDE_GENEVE
+ .ndo_add_geneve_port = qede_add_geneve_port,
+ .ndo_del_geneve_port = qede_del_geneve_port,
+#endif
};
/* -------------------------------------------------------------------------
@@ -1938,8 +2224,6 @@ static struct qede_dev *qede_alloc_etherdev(struct qed_dev *cdev,
edev->q_num_rx_buffers = NUM_RX_BDS_DEF;
edev->q_num_tx_buffers = NUM_TX_BDS_DEF;
- DP_INFO(edev, "Allocated netdev with 64 tx queues and 64 rx queues\n");
-
SET_NETDEV_DEV(ndev, &pdev->dev);
memset(&edev->stats, 0, sizeof(edev->stats));
@@ -1976,6 +2260,14 @@ static void qede_init_ndev(struct qede_dev *edev)
NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
NETIF_F_TSO | NETIF_F_TSO6;
+ /* Encap features*/
+ hw_features |= NETIF_F_GSO_GRE | NETIF_F_GSO_UDP_TUNNEL |
+ NETIF_F_TSO_ECN;
+ ndev->hw_enc_features = NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
+ NETIF_F_SG | NETIF_F_TSO | NETIF_F_TSO_ECN |
+ NETIF_F_TSO6 | NETIF_F_GSO_GRE |
+ NETIF_F_GSO_UDP_TUNNEL | NETIF_F_RXCSUM;
+
ndev->vlan_features = hw_features | NETIF_F_RXHASH | NETIF_F_RXCSUM |
NETIF_F_HIGHDMA;
ndev->features = hw_features | NETIF_F_RXHASH | NETIF_F_RXCSUM |
@@ -2076,6 +2368,8 @@ static void qede_sp_task(struct work_struct *work)
{
struct qede_dev *edev = container_of(work, struct qede_dev,
sp_task.work);
+ struct qed_dev *cdev = edev->cdev;
+
mutex_lock(&edev->qede_lock);
if (edev->state == QEDE_STATE_OPEN) {
@@ -2083,6 +2377,24 @@ static void qede_sp_task(struct work_struct *work)
qede_config_rx_mode(edev->ndev);
}
+ if (test_and_clear_bit(QEDE_SP_VXLAN_PORT_CONFIG, &edev->sp_flags)) {
+ struct qed_tunn_params tunn_params;
+
+ memset(&tunn_params, 0, sizeof(tunn_params));
+ tunn_params.update_vxlan_port = 1;
+ tunn_params.vxlan_port = edev->vxlan_dst_port;
+ qed_ops->tunn_config(cdev, &tunn_params);
+ }
+
+ if (test_and_clear_bit(QEDE_SP_GENEVE_PORT_CONFIG, &edev->sp_flags)) {
+ struct qed_tunn_params tunn_params;
+
+ memset(&tunn_params, 0, sizeof(tunn_params));
+ tunn_params.update_geneve_port = 1;
+ tunn_params.geneve_port = edev->geneve_dst_port;
+ qed_ops->tunn_config(cdev, &tunn_params);
+ }
+
mutex_unlock(&edev->qede_lock);
}
@@ -2090,9 +2402,9 @@ static void qede_update_pf_params(struct qed_dev *cdev)
{
struct qed_pf_params pf_params;
- /* 16 rx + 16 tx */
+ /* 64 rx + 64 tx */
memset(&pf_params, 0, sizeof(struct qed_pf_params));
- pf_params.eth_pf_params.num_cons = 32;
+ pf_params.eth_pf_params.num_cons = 128;
qed_ops->common->update_pf_params(cdev, &pf_params);
}
@@ -2101,8 +2413,9 @@ enum qede_probe_mode {
};
static int __qede_probe(struct pci_dev *pdev, u32 dp_module, u8 dp_level,
- enum qede_probe_mode mode)
+ bool is_vf, enum qede_probe_mode mode)
{
+ struct qed_probe_params probe_params;
struct qed_slowpath_params params;
struct qed_dev_eth_info dev_info;
struct qede_dev *edev;
@@ -2112,8 +2425,12 @@ static int __qede_probe(struct pci_dev *pdev, u32 dp_module, u8 dp_level,
if (unlikely(dp_level & QED_LEVEL_INFO))
pr_notice("Starting qede probe\n");
- cdev = qed_ops->common->probe(pdev, QED_PROTOCOL_ETH,
- dp_module, dp_level);
+ memset(&probe_params, 0, sizeof(probe_params));
+ probe_params.protocol = QED_PROTOCOL_ETH;
+ probe_params.dp_module = dp_module;
+ probe_params.dp_level = dp_level;
+ probe_params.is_vf = is_vf;
+ cdev = qed_ops->common->probe(pdev, &probe_params);
if (!cdev) {
rc = -ENODEV;
goto err0;
@@ -2147,6 +2464,9 @@ static int __qede_probe(struct pci_dev *pdev, u32 dp_module, u8 dp_level,
goto err2;
}
+ if (is_vf)
+ edev->flags |= QEDE_FLAG_IS_VF;
+
qede_init_ndev(edev);
rc = register_netdev(edev->ndev);
@@ -2178,12 +2498,24 @@ err0:
static int qede_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
+ bool is_vf = false;
u32 dp_module = 0;
u8 dp_level = 0;
+ switch ((enum qede_pci_private)id->driver_data) {
+ case QEDE_PRIVATE_VF:
+ if (debug & QED_LOG_VERBOSE_MASK)
+ dev_err(&pdev->dev, "Probing a VF\n");
+ is_vf = true;
+ break;
+ default:
+ if (debug & QED_LOG_VERBOSE_MASK)
+ dev_err(&pdev->dev, "Probing a PF\n");
+ }
+
qede_config_debug(debug, &dp_module, &dp_level);
- return __qede_probe(pdev, dp_module, dp_level,
+ return __qede_probe(pdev, dp_module, dp_level, is_vf,
QEDE_PROBE_NORMAL);
}
@@ -2322,7 +2654,7 @@ static void qede_free_sge_mem(struct qede_dev *edev,
if (replace_buf->data) {
dma_unmap_page(&edev->pdev->dev,
- dma_unmap_addr(replace_buf, mapping),
+ replace_buf->mapping,
PAGE_SIZE, DMA_FROM_DEVICE);
__free_page(replace_buf->data);
}
@@ -2422,7 +2754,7 @@ static int qede_alloc_sge_mem(struct qede_dev *edev,
goto err;
}
- dma_unmap_addr_set(replace_buf, mapping, mapping);
+ replace_buf->mapping = mapping;
tpa_info->replace_buf.page_offset = 0;
tpa_info->replace_buf_mapping = mapping;
@@ -2878,10 +3210,11 @@ static int qede_start_queues(struct qede_dev *edev)
int rc, tc, i;
int vlan_removal_en = 1;
struct qed_dev *cdev = edev->cdev;
- struct qed_update_vport_rss_params *rss_params = &edev->rss_params;
struct qed_update_vport_params vport_update_params;
struct qed_queue_start_common_params q_params;
+ struct qed_dev_info *qed_info = &edev->dev_info.common;
struct qed_start_vport_params start = {0};
+ bool reset_rss_indir = false;
if (!edev->num_rss) {
DP_ERR(edev,
@@ -2973,19 +3306,59 @@ static int qede_start_queues(struct qede_dev *edev)
vport_update_params.update_vport_active_flg = 1;
vport_update_params.vport_active_flg = 1;
+ if ((qed_info->mf_mode == QED_MF_NPAR || pci_num_vf(edev->pdev)) &&
+ qed_info->tx_switching) {
+ vport_update_params.update_tx_switching_flg = 1;
+ vport_update_params.tx_switching_flg = 1;
+ }
+
/* Fill struct with RSS params */
if (QEDE_RSS_CNT(edev) > 1) {
vport_update_params.update_rss_flg = 1;
- for (i = 0; i < 128; i++)
- rss_params->rss_ind_table[i] =
- ethtool_rxfh_indir_default(i, QEDE_RSS_CNT(edev));
- netdev_rss_key_fill(rss_params->rss_key,
- sizeof(rss_params->rss_key));
+
+ /* Need to validate current RSS config uses valid entries */
+ for (i = 0; i < QED_RSS_IND_TABLE_SIZE; i++) {
+ if (edev->rss_params.rss_ind_table[i] >=
+ edev->num_rss) {
+ reset_rss_indir = true;
+ break;
+ }
+ }
+
+ if (!(edev->rss_params_inited & QEDE_RSS_INDIR_INITED) ||
+ reset_rss_indir) {
+ u16 val;
+
+ for (i = 0; i < QED_RSS_IND_TABLE_SIZE; i++) {
+ u16 indir_val;
+
+ val = QEDE_RSS_CNT(edev);
+ indir_val = ethtool_rxfh_indir_default(i, val);
+ edev->rss_params.rss_ind_table[i] = indir_val;
+ }
+ edev->rss_params_inited |= QEDE_RSS_INDIR_INITED;
+ }
+
+ if (!(edev->rss_params_inited & QEDE_RSS_KEY_INITED)) {
+ netdev_rss_key_fill(edev->rss_params.rss_key,
+ sizeof(edev->rss_params.rss_key));
+ edev->rss_params_inited |= QEDE_RSS_KEY_INITED;
+ }
+
+ if (!(edev->rss_params_inited & QEDE_RSS_CAPS_INITED)) {
+ edev->rss_params.rss_caps = QED_RSS_IPV4 |
+ QED_RSS_IPV6 |
+ QED_RSS_IPV4_TCP |
+ QED_RSS_IPV6_TCP;
+ edev->rss_params_inited |= QEDE_RSS_CAPS_INITED;
+ }
+
+ memcpy(&vport_update_params.rss_params, &edev->rss_params,
+ sizeof(vport_update_params.rss_params));
} else {
- memset(rss_params, 0, sizeof(*rss_params));
+ memset(&vport_update_params.rss_params, 0,
+ sizeof(vport_update_params.rss_params));
}
- memcpy(&vport_update_params.rss_params, rss_params,
- sizeof(*rss_params));
rc = edev->ops->vport_update(cdev, &vport_update_params);
if (rc) {
@@ -3167,12 +3540,24 @@ void qede_reload(struct qede_dev *edev,
static int qede_open(struct net_device *ndev)
{
struct qede_dev *edev = netdev_priv(ndev);
+ int rc;
netif_carrier_off(ndev);
edev->ops->common->set_power_state(edev->cdev, PCI_D0);
- return qede_load(edev, QEDE_LOAD_NORMAL);
+ rc = qede_load(edev, QEDE_LOAD_NORMAL);
+
+ if (rc)
+ return rc;
+
+#ifdef CONFIG_QEDE_VXLAN
+ vxlan_get_rx_port(ndev);
+#endif
+#ifdef CONFIG_QEDE_GENEVE
+ geneve_get_rx_port(ndev);
+#endif
+ return 0;
}
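/* vxlan_get_rx_port() and geneve_get_rx_port() ask the tunnel drivers to
 * replay the UDP ports they already have open through the ndo_add_*_port
 * callbacks, so an interface that was down (or just probed) learns about
 * pre-existing tunnels. Doing this only after a successful qede_load()
 * presumably keeps the service task from running against an unloaded device.
 */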
static int qede_close(struct net_device *ndev)
@@ -3223,6 +3608,11 @@ static int qede_set_mac_addr(struct net_device *ndev, void *p)
return -EFAULT;
}
+ if (!edev->ops->check_mac(edev->cdev, addr->sa_data)) {
+ DP_NOTICE(edev, "qed prevents setting MAC\n");
+ return -EINVAL;
+ }
+
ether_addr_copy(ndev->dev_addr, addr->sa_data);
if (!netif_running(ndev)) {
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
index 1205f6f9c941..1c29105b6c36 100644
--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
+++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_main.c
@@ -3952,8 +3952,14 @@ static pci_ers_result_t qlcnic_82xx_io_error_detected(struct pci_dev *pdev,
static pci_ers_result_t qlcnic_82xx_io_slot_reset(struct pci_dev *pdev)
{
- return qlcnic_attach_func(pdev) ? PCI_ERS_RESULT_DISCONNECT :
- PCI_ERS_RESULT_RECOVERED;
+ pci_ers_result_t res;
+
+ rtnl_lock();
+ res = qlcnic_attach_func(pdev) ? PCI_ERS_RESULT_DISCONNECT :
+ PCI_ERS_RESULT_RECOVERED;
+ rtnl_unlock();
+
+ return res;
}
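/* Taking the RTNL around qlcnic_attach_func() during AER slot reset is
 * presumably needed because the attach path re-registers and brings up the
 * netdev, operations the networking core otherwise only performs with
 * rtnl_lock() held.
 */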
static void qlcnic_82xx_io_resume(struct pci_dev *pdev)
diff --git a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_minidump.c b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_minidump.c
index cda9e604a95f..0844b7c75767 100644
--- a/drivers/net/ethernet/qlogic/qlcnic/qlcnic_minidump.c
+++ b/drivers/net/ethernet/qlogic/qlcnic/qlcnic_minidump.c
@@ -1417,6 +1417,7 @@ void qlcnic_83xx_get_minidump_template(struct qlcnic_adapter *adapter)
struct qlcnic_fw_dump *fw_dump = &ahw->fw_dump;
struct pci_dev *pdev = adapter->pdev;
bool extended = false;
+ int ret;
prev_version = adapter->fw_version;
current_version = qlcnic_83xx_get_fw_version(adapter);
@@ -1427,8 +1428,11 @@ void qlcnic_83xx_get_minidump_template(struct qlcnic_adapter *adapter)
if (qlcnic_83xx_md_check_extended_dump_capability(adapter))
extended = !qlcnic_83xx_extend_md_capab(adapter);
- if (!qlcnic_fw_cmd_get_minidump_temp(adapter))
- dev_info(&pdev->dev, "Supports FW dump capability\n");
+ ret = qlcnic_fw_cmd_get_minidump_temp(adapter);
+ if (ret)
+ return;
+
+ dev_info(&pdev->dev, "Supports FW dump capability\n");
/* Once we have minidump template with extended iSCSI dump
* capability, update the minidump capture mask to 0x1f as
diff --git a/drivers/net/ethernet/qlogic/qlge/qlge_main.c b/drivers/net/ethernet/qlogic/qlge/qlge_main.c
index b28e73ea2c25..83d72106471c 100644
--- a/drivers/net/ethernet/qlogic/qlge/qlge_main.c
+++ b/drivers/net/ethernet/qlogic/qlge/qlge_main.c
@@ -4687,7 +4687,7 @@ static int ql_init_device(struct pci_dev *pdev, struct net_device *ndev,
/*
* Set up the operating parameters.
*/
- qdev->workqueue = create_singlethread_workqueue(ndev->name);
+ qdev->workqueue = alloc_ordered_workqueue(ndev->name, WQ_MEM_RECLAIM);
INIT_DELAYED_WORK(&qdev->asic_reset_work, ql_asic_reset_work);
INIT_DELAYED_WORK(&qdev->mpi_reset_work, ql_mpi_reset_work);
INIT_DELAYED_WORK(&qdev->mpi_work, ql_mpi_work);
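/* alloc_ordered_workqueue() preserves the strict one-at-a-time ordering the
 * old create_singlethread_workqueue() provided, while WQ_MEM_RECLAIM adds a
 * rescuer thread so the asic/MPI reset works queued above can still make
 * progress under memory pressure.
 */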
diff --git a/drivers/net/ethernet/qualcomm/qca_spi.c b/drivers/net/ethernet/qualcomm/qca_spi.c
index 1ef03939d25f..6e2add979471 100644
--- a/drivers/net/ethernet/qualcomm/qca_spi.c
+++ b/drivers/net/ethernet/qualcomm/qca_spi.c
@@ -719,7 +719,7 @@ qcaspi_netdev_xmit(struct sk_buff *skb, struct net_device *dev)
qca->stats.ring_full++;
}
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
if (qca->spi_thread &&
qca->spi_thread->state != TASK_RUNNING)
@@ -734,7 +734,7 @@ qcaspi_netdev_tx_timeout(struct net_device *dev)
struct qcaspi *qca = netdev_priv(dev);
netdev_info(qca->net_dev, "Transmit timeout at %ld, latency %ld\n",
- jiffies, jiffies - dev->trans_start);
+ jiffies, jiffies - dev_trans_start(dev));
qca->net_dev->stats.tx_errors++;
/* Trigger tx queue flush and QCA7000 reset */
qca->sync = QCASPI_SYNC_UNKNOWN;
diff --git a/drivers/net/ethernet/realtek/atp.c b/drivers/net/ethernet/realtek/atp.c
index d77d60ea8202..5cb96785fb63 100644
--- a/drivers/net/ethernet/realtek/atp.c
+++ b/drivers/net/ethernet/realtek/atp.c
@@ -544,7 +544,7 @@ static void tx_timeout(struct net_device *dev)
dev->stats.tx_errors++;
/* Try to restart the adapter. */
hardware_init(dev);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue(dev);
dev->stats.tx_errors++;
}
diff --git a/drivers/net/ethernet/realtek/r8169.c b/drivers/net/ethernet/realtek/r8169.c
index 94f08f1e841c..0e62d74b09b3 100644
--- a/drivers/net/ethernet/realtek/r8169.c
+++ b/drivers/net/ethernet/realtek/r8169.c
@@ -345,7 +345,7 @@ static const struct pci_device_id rtl8169_pci_tbl[] = {
MODULE_DEVICE_TABLE(pci, rtl8169_pci_tbl);
static int rx_buf_sz = 16383;
-static int use_dac;
+static int use_dac = -1;
static struct {
u32 msg_enable;
} debug = { -1 };
@@ -8224,20 +8224,6 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
goto err_out_mwi_2;
}
- tp->cp_cmd = 0;
-
- if ((sizeof(dma_addr_t) > 4) &&
- !pci_set_dma_mask(pdev, DMA_BIT_MASK(64)) && use_dac) {
- tp->cp_cmd |= PCIDAC;
- dev->features |= NETIF_F_HIGHDMA;
- } else {
- rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
- if (rc < 0) {
- netif_err(tp, probe, dev, "DMA configuration failed\n");
- goto err_out_free_res_3;
- }
- }
-
/* ioremap MMIO region */
ioaddr = ioremap(pci_resource_start(pdev, region), R8169_REGS_SIZE);
if (!ioaddr) {
@@ -8253,6 +8239,25 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
/* Identify chip attached to board */
rtl8169_get_mac_version(tp, dev, cfg->default_ver);
+ tp->cp_cmd = 0;
+
+ if ((sizeof(dma_addr_t) > 4) &&
+ (use_dac == 1 || (use_dac == -1 && pci_is_pcie(pdev) &&
+ tp->mac_version >= RTL_GIGA_MAC_VER_18)) &&
+ !pci_set_dma_mask(pdev, DMA_BIT_MASK(64))) {
+
+ /* CPlusCmd Dual Access Cycle is only needed for non-PCIe */
+ if (!pci_is_pcie(pdev))
+ tp->cp_cmd |= PCIDAC;
+ dev->features |= NETIF_F_HIGHDMA;
+ } else {
+ rc = pci_set_dma_mask(pdev, DMA_BIT_MASK(32));
+ if (rc < 0) {
+ netif_err(tp, probe, dev, "DMA configuration failed\n");
+ goto err_out_unmap_4;
+ }
+ }
+
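/* Reading of the reworked "use_dac" parameter, as implied by the code above
 * (hedged, not taken from a changelog): the new default of -1 auto-enables
 * 64-bit DMA only on PCIe parts from RTL_GIGA_MAC_VER_18 on, use_dac=1
 * (e.g. "modprobe r8169 use_dac=1") forces the 64-bit attempt as before,
 * and anything else falls back to the 32-bit mask. PCIDAC is now set only
 * for non-PCIe devices, since Dual Address Cycle is a plain-PCI mechanism.
 */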
rtl_init_rxcfg(tp);
rtl_irq_disable(tp);
@@ -8412,12 +8417,12 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
&tp->counters_phys_addr, GFP_KERNEL);
if (!tp->counters) {
rc = -ENOMEM;
- goto err_out_msi_4;
+ goto err_out_msi_5;
}
rc = register_netdev(dev);
if (rc < 0)
- goto err_out_cnt_5;
+ goto err_out_cnt_6;
pci_set_drvdata(pdev, dev);
@@ -8451,12 +8456,13 @@ static int rtl_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
out:
return rc;
-err_out_cnt_5:
+err_out_cnt_6:
dma_free_coherent(&pdev->dev, sizeof(*tp->counters), tp->counters,
tp->counters_phys_addr);
-err_out_msi_4:
+err_out_msi_5:
netif_napi_del(&tp->napi);
rtl_disable_msi(pdev, tp);
+err_out_unmap_4:
iounmap(ioaddr);
err_out_free_res_3:
pci_release_regions(pdev);
diff --git a/drivers/net/ethernet/renesas/ravb.h b/drivers/net/ethernet/renesas/ravb.h
index b2160d1b9c71..4e5d5e953e15 100644
--- a/drivers/net/ethernet/renesas/ravb.h
+++ b/drivers/net/ethernet/renesas/ravb.h
@@ -157,6 +157,7 @@ enum ravb_reg {
TIC = 0x0378,
TIS = 0x037C,
ISS = 0x0380,
+ CIE = 0x0384, /* R-Car Gen3 only */
GCCR = 0x0390,
GMTT = 0x0394,
GPTC = 0x0398,
@@ -170,6 +171,15 @@ enum ravb_reg {
GCT0 = 0x03B8,
GCT1 = 0x03BC,
GCT2 = 0x03C0,
+ GIE = 0x03CC, /* R-Car Gen3 only */
+ GID = 0x03D0, /* R-Car Gen3 only */
+ DIL = 0x0440, /* R-Car Gen3 only */
+ RIE0 = 0x0460, /* R-Car Gen3 only */
+ RID0 = 0x0464, /* R-Car Gen3 only */
+ RIE2 = 0x0470, /* R-Car Gen3 only */
+ RID2 = 0x0474, /* R-Car Gen3 only */
+ TIE = 0x0478, /* R-Car Gen3 only */
+ TID = 0x047c, /* R-Car Gen3 only */
/* E-MAC registers */
ECMR = 0x0500,
@@ -556,6 +566,16 @@ enum ISS_BIT {
ISS_DPS15 = 0x80000000,
};
+/* CIE (R-Car Gen3 only) */
+enum CIE_BIT {
+ CIE_CRIE = 0x00000001,
+ CIE_CTIE = 0x00000100,
+ CIE_RQFM = 0x00010000,
+ CIE_CL0M = 0x00020000,
+ CIE_RFWL = 0x00040000,
+ CIE_RFFL = 0x00080000,
+};
+
/* GCCR */
enum GCCR_BIT {
GCCR_TCR = 0x00000003,
@@ -592,6 +612,188 @@ enum GIS_BIT {
GIS_PTMF = 0x00000004,
};
+/* GIE (R-Car Gen3 only) */
+enum GIE_BIT {
+ GIE_PTCS = 0x00000001,
+ GIE_PTOS = 0x00000002,
+ GIE_PTMS0 = 0x00000004,
+ GIE_PTMS1 = 0x00000008,
+ GIE_PTMS2 = 0x00000010,
+ GIE_PTMS3 = 0x00000020,
+ GIE_PTMS4 = 0x00000040,
+ GIE_PTMS5 = 0x00000080,
+ GIE_PTMS6 = 0x00000100,
+ GIE_PTMS7 = 0x00000200,
+ GIE_ATCS0 = 0x00010000,
+ GIE_ATCS1 = 0x00020000,
+ GIE_ATCS2 = 0x00040000,
+ GIE_ATCS3 = 0x00080000,
+ GIE_ATCS4 = 0x00100000,
+ GIE_ATCS5 = 0x00200000,
+ GIE_ATCS6 = 0x00400000,
+ GIE_ATCS7 = 0x00800000,
+ GIE_ATCS8 = 0x01000000,
+ GIE_ATCS9 = 0x02000000,
+ GIE_ATCS10 = 0x04000000,
+ GIE_ATCS11 = 0x08000000,
+ GIE_ATCS12 = 0x10000000,
+ GIE_ATCS13 = 0x20000000,
+ GIE_ATCS14 = 0x40000000,
+ GIE_ATCS15 = 0x80000000,
+};
+
+/* GID (R-Car Gen3 only) */
+enum GID_BIT {
+ GID_PTCD = 0x00000001,
+ GID_PTOD = 0x00000002,
+ GID_PTMD0 = 0x00000004,
+ GID_PTMD1 = 0x00000008,
+ GID_PTMD2 = 0x00000010,
+ GID_PTMD3 = 0x00000020,
+ GID_PTMD4 = 0x00000040,
+ GID_PTMD5 = 0x00000080,
+ GID_PTMD6 = 0x00000100,
+ GID_PTMD7 = 0x00000200,
+ GID_ATCD0 = 0x00010000,
+ GID_ATCD1 = 0x00020000,
+ GID_ATCD2 = 0x00040000,
+ GID_ATCD3 = 0x00080000,
+ GID_ATCD4 = 0x00100000,
+ GID_ATCD5 = 0x00200000,
+ GID_ATCD6 = 0x00400000,
+ GID_ATCD7 = 0x00800000,
+ GID_ATCD8 = 0x01000000,
+ GID_ATCD9 = 0x02000000,
+ GID_ATCD10 = 0x04000000,
+ GID_ATCD11 = 0x08000000,
+ GID_ATCD12 = 0x10000000,
+ GID_ATCD13 = 0x20000000,
+ GID_ATCD14 = 0x40000000,
+ GID_ATCD15 = 0x80000000,
+};
+
+/* RIE0 (R-Car Gen3 only) */
+enum RIE0_BIT {
+ RIE0_FRS0 = 0x00000001,
+ RIE0_FRS1 = 0x00000002,
+ RIE0_FRS2 = 0x00000004,
+ RIE0_FRS3 = 0x00000008,
+ RIE0_FRS4 = 0x00000010,
+ RIE0_FRS5 = 0x00000020,
+ RIE0_FRS6 = 0x00000040,
+ RIE0_FRS7 = 0x00000080,
+ RIE0_FRS8 = 0x00000100,
+ RIE0_FRS9 = 0x00000200,
+ RIE0_FRS10 = 0x00000400,
+ RIE0_FRS11 = 0x00000800,
+ RIE0_FRS12 = 0x00001000,
+ RIE0_FRS13 = 0x00002000,
+ RIE0_FRS14 = 0x00004000,
+ RIE0_FRS15 = 0x00008000,
+ RIE0_FRS16 = 0x00010000,
+ RIE0_FRS17 = 0x00020000,
+};
+
+/* RID0 (R-Car Gen3 only) */
+enum RID0_BIT {
+ RID0_FRD0 = 0x00000001,
+ RID0_FRD1 = 0x00000002,
+ RID0_FRD2 = 0x00000004,
+ RID0_FRD3 = 0x00000008,
+ RID0_FRD4 = 0x00000010,
+ RID0_FRD5 = 0x00000020,
+ RID0_FRD6 = 0x00000040,
+ RID0_FRD7 = 0x00000080,
+ RID0_FRD8 = 0x00000100,
+ RID0_FRD9 = 0x00000200,
+ RID0_FRD10 = 0x00000400,
+ RID0_FRD11 = 0x00000800,
+ RID0_FRD12 = 0x00001000,
+ RID0_FRD13 = 0x00002000,
+ RID0_FRD14 = 0x00004000,
+ RID0_FRD15 = 0x00008000,
+ RID0_FRD16 = 0x00010000,
+ RID0_FRD17 = 0x00020000,
+};
+
+/* RIE2 (R-Car Gen3 only) */
+enum RIE2_BIT {
+ RIE2_QFS0 = 0x00000001,
+ RIE2_QFS1 = 0x00000002,
+ RIE2_QFS2 = 0x00000004,
+ RIE2_QFS3 = 0x00000008,
+ RIE2_QFS4 = 0x00000010,
+ RIE2_QFS5 = 0x00000020,
+ RIE2_QFS6 = 0x00000040,
+ RIE2_QFS7 = 0x00000080,
+ RIE2_QFS8 = 0x00000100,
+ RIE2_QFS9 = 0x00000200,
+ RIE2_QFS10 = 0x00000400,
+ RIE2_QFS11 = 0x00000800,
+ RIE2_QFS12 = 0x00001000,
+ RIE2_QFS13 = 0x00002000,
+ RIE2_QFS14 = 0x00004000,
+ RIE2_QFS15 = 0x00008000,
+ RIE2_QFS16 = 0x00010000,
+ RIE2_QFS17 = 0x00020000,
+ RIE2_RFFS = 0x80000000,
+};
+
+/* RID2 (R-Car Gen3 only) */
+enum RID2_BIT {
+ RID2_QFD0 = 0x00000001,
+ RID2_QFD1 = 0x00000002,
+ RID2_QFD2 = 0x00000004,
+ RID2_QFD3 = 0x00000008,
+ RID2_QFD4 = 0x00000010,
+ RID2_QFD5 = 0x00000020,
+ RID2_QFD6 = 0x00000040,
+ RID2_QFD7 = 0x00000080,
+ RID2_QFD8 = 0x00000100,
+ RID2_QFD9 = 0x00000200,
+ RID2_QFD10 = 0x00000400,
+ RID2_QFD11 = 0x00000800,
+ RID2_QFD12 = 0x00001000,
+ RID2_QFD13 = 0x00002000,
+ RID2_QFD14 = 0x00004000,
+ RID2_QFD15 = 0x00008000,
+ RID2_QFD16 = 0x00010000,
+ RID2_QFD17 = 0x00020000,
+ RID2_RFFD = 0x80000000,
+};
+
+/* TIE (R-Car Gen3 only) */
+enum TIE_BIT {
+ TIE_FTS0 = 0x00000001,
+ TIE_FTS1 = 0x00000002,
+ TIE_FTS2 = 0x00000004,
+ TIE_FTS3 = 0x00000008,
+ TIE_TFUS = 0x00000100,
+ TIE_TFWS = 0x00000200,
+ TIE_MFUS = 0x00000400,
+ TIE_MFWS = 0x00000800,
+ TIE_TDPS0 = 0x00010000,
+ TIE_TDPS1 = 0x00020000,
+ TIE_TDPS2 = 0x00040000,
+ TIE_TDPS3 = 0x00080000,
+};
+
+/* TID (R-Car Gen3 only) */
+enum TID_BIT {
+ TID_FTD0 = 0x00000001,
+ TID_FTD1 = 0x00000002,
+ TID_FTD2 = 0x00000004,
+ TID_FTD3 = 0x00000008,
+ TID_TFUD = 0x00000100,
+ TID_TFWD = 0x00000200,
+ TID_MFUD = 0x00000400,
+ TID_MFWD = 0x00000800,
+ TID_TDPD0 = 0x00010000,
+ TID_TDPD1 = 0x00020000,
+ TID_TDPD2 = 0x00040000,
+ TID_TDPD3 = 0x00080000,
+};
+
/* ECMR */
enum ECMR_BIT {
ECMR_PRM = 0x00000001,
@@ -817,6 +1019,8 @@ struct ravb_private {
int duplex;
int emac_irq;
enum ravb_chip_id chip_id;
+ int rx_irqs[NUM_RX_QUEUE];
+ int tx_irqs[NUM_TX_QUEUE];
unsigned no_avb_link:1;
unsigned avb_link_active_low:1;
@@ -841,7 +1045,7 @@ void ravb_modify(struct net_device *ndev, enum ravb_reg reg, u32 clear,
u32 set);
int ravb_wait(struct net_device *ndev, enum ravb_reg reg, u32 mask, u32 value);
-irqreturn_t ravb_ptp_interrupt(struct net_device *ndev);
+void ravb_ptp_interrupt(struct net_device *ndev);
void ravb_ptp_init(struct net_device *ndev, struct platform_device *pdev);
void ravb_ptp_stop(struct net_device *ndev);
diff --git a/drivers/net/ethernet/renesas/ravb_main.c b/drivers/net/ethernet/renesas/ravb_main.c
index 9e2a0bd8f5a8..867caf6e7a5a 100644
--- a/drivers/net/ethernet/renesas/ravb_main.c
+++ b/drivers/net/ethernet/renesas/ravb_main.c
@@ -42,6 +42,16 @@
NETIF_MSG_RX_ERR | \
NETIF_MSG_TX_ERR)
+static const char *ravb_rx_irqs[NUM_RX_QUEUE] = {
+ "ch0", /* RAVB_BE */
+ "ch1", /* RAVB_NC */
+};
+
+static const char *ravb_tx_irqs[NUM_TX_QUEUE] = {
+ "ch18", /* RAVB_BE */
+ "ch19", /* RAVB_NC */
+};
+
void ravb_modify(struct net_device *ndev, enum ravb_reg reg, u32 clear,
u32 set)
{
@@ -236,10 +246,9 @@ static void ravb_ring_format(struct net_device *ndev, int q)
for (i = 0; i < priv->num_rx_ring[q]; i++) {
/* RX descriptor */
rx_desc = &priv->rx_ring[q][i];
- /* The size of the buffer should be on 16-byte boundary. */
- rx_desc->ds_cc = cpu_to_le16(ALIGN(PKT_BUF_SZ, 16));
+ rx_desc->ds_cc = cpu_to_le16(PKT_BUF_SZ);
dma_addr = dma_map_single(ndev->dev.parent, priv->rx_skb[q][i]->data,
- ALIGN(PKT_BUF_SZ, 16),
+ PKT_BUF_SZ,
DMA_FROM_DEVICE);
/* We just set the data size to 0 for a failed mapping which
* should prevent DMA from happening...
@@ -365,6 +374,7 @@ static void ravb_emac_init(struct net_device *ndev)
/* Device init function for Ethernet AVB */
static int ravb_dmac_init(struct net_device *ndev)
{
+ struct ravb_private *priv = netdev_priv(ndev);
int error;
/* Set CONFIG mode */
@@ -401,6 +411,12 @@ static int ravb_dmac_init(struct net_device *ndev)
ravb_write(ndev, TCCR_TFEN, TCCR);
/* Interrupt init: */
+ if (priv->chip_id == RCAR_GEN3) {
+ /* Clear DIL.DPLx */
+ ravb_write(ndev, 0, DIL);
+ /* Set queue specific interrupt */
+ ravb_write(ndev, CIE_CRIE | CIE_CTIE | CIE_CL0M, CIE);
+ }
/* Frame receive */
ravb_write(ndev, RIC0_FRE0 | RIC0_FRE1, RIC0);
/* Disable FIFO full warning */
@@ -541,7 +557,7 @@ static bool ravb_rx(struct net_device *ndev, int *quota, int q)
skb = priv->rx_skb[q][entry];
priv->rx_skb[q][entry] = NULL;
dma_unmap_single(ndev->dev.parent, le32_to_cpu(desc->dptr),
- ALIGN(PKT_BUF_SZ, 16),
+ PKT_BUF_SZ,
DMA_FROM_DEVICE);
get_ts &= (q == RAVB_NC) ?
RAVB_RXTSTAMP_TYPE_V2_L2_EVENT :
@@ -571,8 +587,7 @@ static bool ravb_rx(struct net_device *ndev, int *quota, int q)
for (; priv->cur_rx[q] - priv->dirty_rx[q] > 0; priv->dirty_rx[q]++) {
entry = priv->dirty_rx[q] % priv->num_rx_ring[q];
desc = &priv->rx_ring[q][entry];
- /* The size of the buffer should be on 16-byte boundary. */
- desc->ds_cc = cpu_to_le16(ALIGN(PKT_BUF_SZ, 16));
+ desc->ds_cc = cpu_to_le16(PKT_BUF_SZ);
if (!priv->rx_skb[q][entry]) {
skb = netdev_alloc_skb(ndev,
@@ -643,7 +658,7 @@ static int ravb_stop_dma(struct net_device *ndev)
}
/* E-MAC interrupt handler */
-static void ravb_emac_interrupt(struct net_device *ndev)
+static void ravb_emac_interrupt_unlocked(struct net_device *ndev)
{
struct ravb_private *priv = netdev_priv(ndev);
u32 ecsr, psr;
@@ -669,6 +684,18 @@ static void ravb_emac_interrupt(struct net_device *ndev)
}
}
+static irqreturn_t ravb_emac_interrupt(int irq, void *dev_id)
+{
+ struct net_device *ndev = dev_id;
+ struct ravb_private *priv = netdev_priv(ndev);
+
+ spin_lock(&priv->lock);
+ ravb_emac_interrupt_unlocked(ndev);
+ mmiowb();
+ spin_unlock(&priv->lock);
+ return IRQ_HANDLED;
+}
+
/* Error interrupt handler */
static void ravb_error_interrupt(struct net_device *ndev)
{
@@ -695,6 +722,50 @@ static void ravb_error_interrupt(struct net_device *ndev)
}
}
+static bool ravb_queue_interrupt(struct net_device *ndev, int q)
+{
+ struct ravb_private *priv = netdev_priv(ndev);
+ u32 ris0 = ravb_read(ndev, RIS0);
+ u32 ric0 = ravb_read(ndev, RIC0);
+ u32 tis = ravb_read(ndev, TIS);
+ u32 tic = ravb_read(ndev, TIC);
+
+ if (((ris0 & ric0) & BIT(q)) || ((tis & tic) & BIT(q))) {
+ if (napi_schedule_prep(&priv->napi[q])) {
+ /* Mask RX and TX interrupts */
+ if (priv->chip_id == RCAR_GEN2) {
+ ravb_write(ndev, ric0 & ~BIT(q), RIC0);
+ ravb_write(ndev, tic & ~BIT(q), TIC);
+ } else {
+ ravb_write(ndev, BIT(q), RID0);
+ ravb_write(ndev, BIT(q), TID);
+ }
+ __napi_schedule(&priv->napi[q]);
+ } else {
+ netdev_warn(ndev,
+ "ignoring interrupt, rx status 0x%08x, rx mask 0x%08x,\n",
+ ris0, ric0);
+ netdev_warn(ndev,
+ " tx status 0x%08x, tx mask 0x%08x.\n",
+ tis, tic);
+ }
+ return true;
+ }
+ return false;
+}
+
+static bool ravb_timestamp_interrupt(struct net_device *ndev)
+{
+ u32 tis = ravb_read(ndev, TIS);
+
+ if (tis & TIS_TFUF) {
+ ravb_write(ndev, ~TIS_TFUF, TIS);
+ ravb_get_tx_tstamp(ndev);
+ return true;
+ }
+ return false;
+}
+
static irqreturn_t ravb_interrupt(int irq, void *dev_id)
{
struct net_device *ndev = dev_id;
@@ -708,46 +779,22 @@ static irqreturn_t ravb_interrupt(int irq, void *dev_id)
/* Received and transmitted interrupts */
if (iss & (ISS_FRS | ISS_FTS | ISS_TFUS)) {
- u32 ris0 = ravb_read(ndev, RIS0);
- u32 ric0 = ravb_read(ndev, RIC0);
- u32 tis = ravb_read(ndev, TIS);
- u32 tic = ravb_read(ndev, TIC);
int q;
/* Timestamp updated */
- if (tis & TIS_TFUF) {
- ravb_write(ndev, ~TIS_TFUF, TIS);
- ravb_get_tx_tstamp(ndev);
+ if (ravb_timestamp_interrupt(ndev))
result = IRQ_HANDLED;
- }
/* Network control and best effort queue RX/TX */
for (q = RAVB_NC; q >= RAVB_BE; q--) {
- if (((ris0 & ric0) & BIT(q)) ||
- ((tis & tic) & BIT(q))) {
- if (napi_schedule_prep(&priv->napi[q])) {
- /* Mask RX and TX interrupts */
- ric0 &= ~BIT(q);
- tic &= ~BIT(q);
- ravb_write(ndev, ric0, RIC0);
- ravb_write(ndev, tic, TIC);
- __napi_schedule(&priv->napi[q]);
- } else {
- netdev_warn(ndev,
- "ignoring interrupt, rx status 0x%08x, rx mask 0x%08x,\n",
- ris0, ric0);
- netdev_warn(ndev,
- " tx status 0x%08x, tx mask 0x%08x.\n",
- tis, tic);
- }
+ if (ravb_queue_interrupt(ndev, q))
result = IRQ_HANDLED;
- }
}
}
/* E-MAC status summary */
if (iss & ISS_MS) {
- ravb_emac_interrupt(ndev);
+ ravb_emac_interrupt_unlocked(ndev);
result = IRQ_HANDLED;
}
@@ -757,14 +804,77 @@ static irqreturn_t ravb_interrupt(int irq, void *dev_id)
result = IRQ_HANDLED;
}
- if ((iss & ISS_CGIS) && ravb_ptp_interrupt(ndev) == IRQ_HANDLED)
+ /* gPTP interrupt status summary */
+ if (iss & ISS_CGIS) {
+ ravb_ptp_interrupt(ndev);
result = IRQ_HANDLED;
+ }
mmiowb();
spin_unlock(&priv->lock);
return result;
}
+/* Timestamp/Error/gPTP interrupt handler */
+static irqreturn_t ravb_multi_interrupt(int irq, void *dev_id)
+{
+ struct net_device *ndev = dev_id;
+ struct ravb_private *priv = netdev_priv(ndev);
+ irqreturn_t result = IRQ_NONE;
+ u32 iss;
+
+ spin_lock(&priv->lock);
+ /* Get interrupt status */
+ iss = ravb_read(ndev, ISS);
+
+ /* Timestamp updated */
+ if ((iss & ISS_TFUS) && ravb_timestamp_interrupt(ndev))
+ result = IRQ_HANDLED;
+
+ /* Error status summary */
+ if (iss & ISS_ES) {
+ ravb_error_interrupt(ndev);
+ result = IRQ_HANDLED;
+ }
+
+ /* gPTP interrupt status summary */
+ if (iss & ISS_CGIS) {
+ ravb_ptp_interrupt(ndev);
+ result = IRQ_HANDLED;
+ }
+
+ mmiowb();
+ spin_unlock(&priv->lock);
+ return result;
+}
+
+static irqreturn_t ravb_dma_interrupt(int irq, void *dev_id, int q)
+{
+ struct net_device *ndev = dev_id;
+ struct ravb_private *priv = netdev_priv(ndev);
+ irqreturn_t result = IRQ_NONE;
+
+ spin_lock(&priv->lock);
+
+ /* Network control/Best effort queue RX/TX */
+ if (ravb_queue_interrupt(ndev, q))
+ result = IRQ_HANDLED;
+
+ mmiowb();
+ spin_unlock(&priv->lock);
+ return result;
+}
+
+static irqreturn_t ravb_be_interrupt(int irq, void *dev_id)
+{
+ return ravb_dma_interrupt(irq, dev_id, RAVB_BE);
+}
+
+static irqreturn_t ravb_nc_interrupt(int irq, void *dev_id)
+{
+ return ravb_dma_interrupt(irq, dev_id, RAVB_NC);
+}
+
static int ravb_poll(struct napi_struct *napi, int budget)
{
struct net_device *ndev = napi->dev;
@@ -804,8 +914,13 @@ static int ravb_poll(struct napi_struct *napi, int budget)
/* Re-enable RX/TX interrupts */
spin_lock_irqsave(&priv->lock, flags);
- ravb_modify(ndev, RIC0, mask, mask);
- ravb_modify(ndev, TIC, mask, mask);
+ if (priv->chip_id == RCAR_GEN2) {
+ ravb_modify(ndev, RIC0, mask, mask);
+ ravb_modify(ndev, TIC, mask, mask);
+ } else {
+ ravb_write(ndev, mask, RIE0);
+ ravb_write(ndev, mask, TIE);
+ }
mmiowb();
spin_unlock_irqrestore(&priv->lock, flags);
@@ -1208,35 +1323,72 @@ static const struct ethtool_ops ravb_ethtool_ops = {
.get_ts_info = ravb_get_ts_info,
};
+static inline int ravb_hook_irq(unsigned int irq, irq_handler_t handler,
+ struct net_device *ndev, struct device *dev,
+ const char *ch)
+{
+ char *name;
+ int error;
+
+ name = devm_kasprintf(dev, GFP_KERNEL, "%s:%s", ndev->name, ch);
+ if (!name)
+ return -ENOMEM;
+ error = request_irq(irq, handler, 0, name, ndev);
+ if (error)
+ netdev_err(ndev, "cannot request IRQ %s\n", name);
+
+ return error;
+}
+
/* Network device open function for Ethernet AVB */
static int ravb_open(struct net_device *ndev)
{
struct ravb_private *priv = netdev_priv(ndev);
+ struct platform_device *pdev = priv->pdev;
+ struct device *dev = &pdev->dev;
int error;
napi_enable(&priv->napi[RAVB_BE]);
napi_enable(&priv->napi[RAVB_NC]);
- error = request_irq(ndev->irq, ravb_interrupt, IRQF_SHARED, ndev->name,
- ndev);
- if (error) {
- netdev_err(ndev, "cannot request IRQ\n");
- goto out_napi_off;
- }
-
- if (priv->chip_id == RCAR_GEN3) {
- error = request_irq(priv->emac_irq, ravb_interrupt,
- IRQF_SHARED, ndev->name, ndev);
+ if (priv->chip_id == RCAR_GEN2) {
+ error = request_irq(ndev->irq, ravb_interrupt, IRQF_SHARED,
+ ndev->name, ndev);
if (error) {
netdev_err(ndev, "cannot request IRQ\n");
- goto out_free_irq;
+ goto out_napi_off;
}
+ } else {
+ error = ravb_hook_irq(ndev->irq, ravb_multi_interrupt, ndev,
+ dev, "ch22:multi");
+ if (error)
+ goto out_napi_off;
+ error = ravb_hook_irq(priv->emac_irq, ravb_emac_interrupt, ndev,
+ dev, "ch24:emac");
+ if (error)
+ goto out_free_irq;
+ error = ravb_hook_irq(priv->rx_irqs[RAVB_BE], ravb_be_interrupt,
+ ndev, dev, "ch0:rx_be");
+ if (error)
+ goto out_free_irq_emac;
+ error = ravb_hook_irq(priv->tx_irqs[RAVB_BE], ravb_be_interrupt,
+ ndev, dev, "ch18:tx_be");
+ if (error)
+ goto out_free_irq_be_rx;
+ error = ravb_hook_irq(priv->rx_irqs[RAVB_NC], ravb_nc_interrupt,
+ ndev, dev, "ch1:rx_nc");
+ if (error)
+ goto out_free_irq_be_tx;
+ error = ravb_hook_irq(priv->tx_irqs[RAVB_NC], ravb_nc_interrupt,
+ ndev, dev, "ch19:tx_nc");
+ if (error)
+ goto out_free_irq_nc_rx;
}
/* Device init */
error = ravb_dmac_init(ndev);
if (error)
- goto out_free_irq2;
+ goto out_free_irq_nc_tx;
ravb_emac_init(ndev);
/* Initialise PTP Clock driver */
@@ -1256,9 +1408,18 @@ out_ptp_stop:
/* Stop PTP Clock driver */
if (priv->chip_id == RCAR_GEN2)
ravb_ptp_stop(ndev);
-out_free_irq2:
- if (priv->chip_id == RCAR_GEN3)
- free_irq(priv->emac_irq, ndev);
+out_free_irq_nc_tx:
+ if (priv->chip_id == RCAR_GEN2)
+ goto out_free_irq;
+ free_irq(priv->tx_irqs[RAVB_NC], ndev);
+out_free_irq_nc_rx:
+ free_irq(priv->rx_irqs[RAVB_NC], ndev);
+out_free_irq_be_tx:
+ free_irq(priv->tx_irqs[RAVB_BE], ndev);
+out_free_irq_be_rx:
+ free_irq(priv->rx_irqs[RAVB_BE], ndev);
+out_free_irq_emac:
+ free_irq(priv->emac_irq, ndev);
out_free_irq:
free_irq(ndev->irq, ndev);
out_napi_off:
@@ -1506,6 +1667,13 @@ static int ravb_close(struct net_device *ndev)
priv->phydev = NULL;
}
+ if (priv->chip_id != RCAR_GEN2) {
+ free_irq(priv->tx_irqs[RAVB_NC], ndev);
+ free_irq(priv->rx_irqs[RAVB_NC], ndev);
+ free_irq(priv->tx_irqs[RAVB_BE], ndev);
+ free_irq(priv->rx_irqs[RAVB_BE], ndev);
+ free_irq(priv->emac_irq, ndev);
+ }
free_irq(ndev->irq, ndev);
napi_disable(&priv->napi[RAVB_NC]);
@@ -1716,6 +1884,7 @@ static int ravb_probe(struct platform_device *pdev)
struct net_device *ndev;
int error, irq, q;
struct resource *res;
+ int i;
if (!np) {
dev_err(&pdev->dev,
@@ -1785,6 +1954,22 @@ static int ravb_probe(struct platform_device *pdev)
goto out_release;
}
priv->emac_irq = irq;
+ for (i = 0; i < NUM_RX_QUEUE; i++) {
+ irq = platform_get_irq_byname(pdev, ravb_rx_irqs[i]);
+ if (irq < 0) {
+ error = irq;
+ goto out_release;
+ }
+ priv->rx_irqs[i] = irq;
+ }
+ for (i = 0; i < NUM_TX_QUEUE; i++) {
+ irq = platform_get_irq_byname(pdev, ravb_tx_irqs[i]);
+ if (irq < 0) {
+ error = irq;
+ goto out_release;
+ }
+ priv->tx_irqs[i] = irq;
+ }
}
priv->chip_id = chip_id;
diff --git a/drivers/net/ethernet/renesas/ravb_ptp.c b/drivers/net/ethernet/renesas/ravb_ptp.c
index 57992ccc4657..eede70ec37f8 100644
--- a/drivers/net/ethernet/renesas/ravb_ptp.c
+++ b/drivers/net/ethernet/renesas/ravb_ptp.c
@@ -194,7 +194,12 @@ static int ravb_ptp_extts(struct ptp_clock_info *ptp,
priv->ptp.extts[req->index] = on;
spin_lock_irqsave(&priv->lock, flags);
- ravb_modify(ndev, GIC, GIC_PTCE, on ? GIC_PTCE : 0);
+ if (priv->chip_id == RCAR_GEN2)
+ ravb_modify(ndev, GIC, GIC_PTCE, on ? GIC_PTCE : 0);
+ else if (on)
+ ravb_write(ndev, GIE_PTCS, GIE);
+ else
+ ravb_write(ndev, GID_PTCD, GID);
mmiowb();
spin_unlock_irqrestore(&priv->lock, flags);
@@ -241,7 +246,10 @@ static int ravb_ptp_perout(struct ptp_clock_info *ptp,
error = ravb_ptp_update_compare(priv, (u32)start_ns);
if (!error) {
/* Unmask interrupt */
- ravb_modify(ndev, GIC, GIC_PTME, GIC_PTME);
+ if (priv->chip_id == RCAR_GEN2)
+ ravb_modify(ndev, GIC, GIC_PTME, GIC_PTME);
+ else
+ ravb_write(ndev, GIE_PTMS0, GIE);
}
} else {
spin_lock_irqsave(&priv->lock, flags);
@@ -250,7 +258,10 @@ static int ravb_ptp_perout(struct ptp_clock_info *ptp,
perout->period = 0;
/* Mask interrupt */
- ravb_modify(ndev, GIC, GIC_PTME, 0);
+ if (priv->chip_id == RCAR_GEN2)
+ ravb_modify(ndev, GIC, GIC_PTME, 0);
+ else
+ ravb_write(ndev, GID_PTMD0, GID);
}
mmiowb();
spin_unlock_irqrestore(&priv->lock, flags);
@@ -285,7 +296,7 @@ static const struct ptp_clock_info ravb_ptp_info = {
};
/* Caller must hold the lock */
-irqreturn_t ravb_ptp_interrupt(struct net_device *ndev)
+void ravb_ptp_interrupt(struct net_device *ndev)
{
struct ravb_private *priv = netdev_priv(ndev);
u32 gis = ravb_read(ndev, GIS);
@@ -308,12 +319,7 @@ irqreturn_t ravb_ptp_interrupt(struct net_device *ndev)
}
}
- if (gis) {
- ravb_write(ndev, ~gis, GIS);
- return IRQ_HANDLED;
- }
-
- return IRQ_NONE;
+ ravb_write(ndev, ~gis, GIS);
}
void ravb_ptp_init(struct net_device *ndev, struct platform_device *pdev)
diff --git a/drivers/net/ethernet/renesas/sh_eth.c b/drivers/net/ethernet/renesas/sh_eth.c
index ceea74cc2229..04cd39f66cc9 100644
--- a/drivers/net/ethernet/renesas/sh_eth.c
+++ b/drivers/net/ethernet/renesas/sh_eth.c
@@ -482,7 +482,7 @@ static void sh_eth_chip_reset(struct net_device *ndev)
struct sh_eth_private *mdp = netdev_priv(ndev);
/* reset device */
- sh_eth_tsu_write(mdp, ARSTR_ARSTR, ARSTR);
+ sh_eth_tsu_write(mdp, ARSTR_ARST, ARSTR);
mdelay(1);
}
@@ -537,11 +537,7 @@ static struct sh_eth_cpu_data r7s72100_data = {
static void sh_eth_chip_reset_r8a7740(struct net_device *ndev)
{
- struct sh_eth_private *mdp = netdev_priv(ndev);
-
- /* reset device */
- sh_eth_tsu_write(mdp, ARSTR_ARSTR, ARSTR);
- mdelay(1);
+ sh_eth_chip_reset(ndev);
sh_eth_select_mii(ndev);
}
@@ -725,8 +721,8 @@ static struct sh_eth_cpu_data sh7757_data = {
#define GIGA_MAHR(port) (SH_GIGA_ETH_BASE + 0x800 * (port) + 0x05c0)
static void sh_eth_chip_reset_giga(struct net_device *ndev)
{
- int i;
u32 mahr[2], malr[2];
+ int i;
/* save MAHR and MALR */
for (i = 0; i < 2; i++) {
@@ -734,9 +730,7 @@ static void sh_eth_chip_reset_giga(struct net_device *ndev)
mahr[i] = ioread32((void *)GIGA_MAHR(i));
}
- /* reset device */
- iowrite32(ARSTR_ARSTR, (void *)(SH_GIGA_ETH_BASE + 0x1800));
- mdelay(1);
+ sh_eth_chip_reset(ndev);
/* restore MAHR and MALR */
for (i = 0; i < 2; i++) {
@@ -899,7 +893,7 @@ static int sh_eth_check_reset(struct net_device *ndev)
int cnt = 100;
while (cnt > 0) {
- if (!(sh_eth_read(ndev, EDMR) & 0x3))
+ if (!(sh_eth_read(ndev, EDMR) & EDMR_SRST_GETHER))
break;
mdelay(1);
cnt--;
@@ -1229,7 +1223,7 @@ ring_free:
return -ENOMEM;
}
-static int sh_eth_dev_init(struct net_device *ndev, bool start)
+static int sh_eth_dev_init(struct net_device *ndev)
{
struct sh_eth_private *mdp = netdev_priv(ndev);
int ret;
@@ -1279,10 +1273,8 @@ static int sh_eth_dev_init(struct net_device *ndev, bool start)
RFLR);
sh_eth_modify(ndev, EESR, 0, 0);
- if (start) {
- mdp->irq_enabled = true;
- sh_eth_write(ndev, mdp->cd->eesipr_value, EESIPR);
- }
+ mdp->irq_enabled = true;
+ sh_eth_write(ndev, mdp->cd->eesipr_value, EESIPR);
/* PAUSE Prohibition */
sh_eth_write(ndev, ECMR_ZPF | (mdp->duplex ? ECMR_DM : 0) |
@@ -1295,8 +1287,7 @@ static int sh_eth_dev_init(struct net_device *ndev, bool start)
sh_eth_write(ndev, mdp->cd->ecsr_value, ECSR);
/* E-MAC Interrupt Enable register */
- if (start)
- sh_eth_write(ndev, mdp->cd->ecsipr_value, ECSIPR);
+ sh_eth_write(ndev, mdp->cd->ecsipr_value, ECSIPR);
/* Set MAC address */
update_mac_address(ndev);
@@ -1309,10 +1300,8 @@ static int sh_eth_dev_init(struct net_device *ndev, bool start)
if (mdp->cd->tpauser)
sh_eth_write(ndev, TPAUSER_UNLIMITED, TPAUSER);
- if (start) {
- /* Setting the Rx mode will start the Rx process. */
- sh_eth_write(ndev, EDRRR_R, EDRRR);
- }
+ /* Setting the Rx mode will start the Rx process. */
+ sh_eth_write(ndev, EDRRR_R, EDRRR);
return ret;
}
@@ -2194,7 +2183,7 @@ static int sh_eth_set_ringparam(struct net_device *ndev,
__func__);
return ret;
}
- ret = sh_eth_dev_init(ndev, true);
+ ret = sh_eth_dev_init(ndev);
if (ret < 0) {
netdev_err(ndev, "%s: sh_eth_dev_init failed.\n",
__func__);
@@ -2246,7 +2235,7 @@ static int sh_eth_open(struct net_device *ndev)
goto out_free_irq;
/* device init */
- ret = sh_eth_dev_init(ndev, true);
+ ret = sh_eth_dev_init(ndev);
if (ret)
goto out_free_irq;
@@ -2299,7 +2288,7 @@ static void sh_eth_tx_timeout(struct net_device *ndev)
}
/* device init */
- sh_eth_dev_init(ndev, true);
+ sh_eth_dev_init(ndev);
netif_start_queue(ndev);
}
diff --git a/drivers/net/ethernet/renesas/sh_eth.h b/drivers/net/ethernet/renesas/sh_eth.h
index 8fa4ef3a7fdd..c62380e34a1d 100644
--- a/drivers/net/ethernet/renesas/sh_eth.h
+++ b/drivers/net/ethernet/renesas/sh_eth.h
@@ -394,7 +394,7 @@ enum RPADIR_BIT {
#define DEFAULT_FDR_INIT 0x00000707
/* ARSTR */
-enum ARSTR_BIT { ARSTR_ARSTR = 0x00000001, };
+enum ARSTR_BIT { ARSTR_ARST = 0x00000001, };
/* TSU_FWEN0 */
enum TSU_FWEN0_BIT {
diff --git a/drivers/net/ethernet/rocker/rocker_ofdpa.c b/drivers/net/ethernet/rocker/rocker_ofdpa.c
index 0e758bcb26b0..1ca796316173 100644
--- a/drivers/net/ethernet/rocker/rocker_ofdpa.c
+++ b/drivers/net/ethernet/rocker/rocker_ofdpa.c
@@ -2727,7 +2727,7 @@ static int ofdpa_port_obj_fib4_add(struct rocker_port *rocker_port,
return ofdpa_port_fib_ipv4(ofdpa_port, trans,
htonl(fib4->dst), fib4->dst_len,
- &fib4->fi, fib4->tb_id, 0);
+ fib4->fi, fib4->tb_id, 0);
}
static int ofdpa_port_obj_fib4_del(struct rocker_port *rocker_port,
@@ -2737,7 +2737,7 @@ static int ofdpa_port_obj_fib4_del(struct rocker_port *rocker_port,
return ofdpa_port_fib_ipv4(ofdpa_port, NULL,
htonl(fib4->dst), fib4->dst_len,
- &fib4->fi, fib4->tb_id,
+ fib4->fi, fib4->tb_id,
OFDPA_OP_FLAG_REMOVE);
}
diff --git a/drivers/net/ethernet/seeq/sgiseeq.c b/drivers/net/ethernet/seeq/sgiseeq.c
index ca7336605748..c2bd5378ffda 100644
--- a/drivers/net/ethernet/seeq/sgiseeq.c
+++ b/drivers/net/ethernet/seeq/sgiseeq.c
@@ -572,7 +572,7 @@ static inline int sgiseeq_reset(struct net_device *dev)
if (err)
return err;
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue(dev);
return 0;
@@ -648,7 +648,7 @@ static void timeout(struct net_device *dev)
printk(KERN_NOTICE "%s: transmit timed out, resetting\n", dev->name);
sgiseeq_reset(dev);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue(dev);
}
diff --git a/drivers/net/ethernet/sgi/meth.c b/drivers/net/ethernet/sgi/meth.c
index 5eac523b4b0c..aaa80f13859b 100644
--- a/drivers/net/ethernet/sgi/meth.c
+++ b/drivers/net/ethernet/sgi/meth.c
@@ -708,7 +708,7 @@ static int meth_tx(struct sk_buff *skb, struct net_device *dev)
mace->eth.dma_ctrl = priv->dma_ctrl;
meth_add_to_tx_ring(priv, skb);
- dev->trans_start = jiffies; /* save the timestamp */
+ netif_trans_update(dev); /* save the timestamp */
/* If TX ring is full, tell the upper layer to stop sending packets */
if (meth_tx_full(dev)) {
@@ -756,7 +756,7 @@ static void meth_tx_timeout(struct net_device *dev)
/* Enable interrupt */
spin_unlock_irqrestore(&priv->meth_lock, flags);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue(dev);
}
diff --git a/drivers/net/ethernet/sis/sis900.c b/drivers/net/ethernet/sis/sis900.c
index fd812d2e5e1c..95001ee408ab 100644
--- a/drivers/net/ethernet/sis/sis900.c
+++ b/drivers/net/ethernet/sis/sis900.c
@@ -1575,7 +1575,7 @@ static void sis900_tx_timeout(struct net_device *net_dev)
spin_unlock_irqrestore(&sis_priv->lock, flags);
- net_dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(net_dev); /* prevent tx timeout */
/* load Transmit Descriptor Register */
sw32(txdp, sis_priv->tx_ring_dma);
diff --git a/drivers/net/ethernet/smsc/epic100.c b/drivers/net/ethernet/smsc/epic100.c
index 443f1da9fc9e..7186b89269ad 100644
--- a/drivers/net/ethernet/smsc/epic100.c
+++ b/drivers/net/ethernet/smsc/epic100.c
@@ -889,7 +889,7 @@ static void epic_tx_timeout(struct net_device *dev)
ew32(COMMAND, TxQueued);
}
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
dev->stats.tx_errors++;
if (!ep->tx_full)
netif_wake_queue(dev);
diff --git a/drivers/net/ethernet/smsc/smc911x.c b/drivers/net/ethernet/smsc/smc911x.c
index a733868a43aa..cb49c9654f0a 100644
--- a/drivers/net/ethernet/smsc/smc911x.c
+++ b/drivers/net/ethernet/smsc/smc911x.c
@@ -499,7 +499,7 @@ static void smc911x_hardware_send_pkt(struct net_device *dev)
/* DMA complete IRQ will free buffer and set jiffies */
#else
SMC_PUSH_DATA(lp, buf, len);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
dev_kfree_skb_irq(skb);
#endif
if (!lp->tx_throttle) {
@@ -1189,7 +1189,7 @@ smc911x_tx_dma_irq(void *data)
DBG(SMC_DEBUG_TX | SMC_DEBUG_DMA, dev, "TX DMA irq handler\n");
BUG_ON(skb == NULL);
dma_unmap_single(NULL, tx_dmabuf, tx_dmalen, DMA_TO_DEVICE);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
dev_kfree_skb_irq(skb);
lp->current_tx_skb = NULL;
if (lp->pending_tx_skb != NULL)
@@ -1283,7 +1283,7 @@ static void smc911x_timeout(struct net_device *dev)
schedule_work(&lp->phy_configure);
/* We can accept TX packets again */
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue(dev);
}
diff --git a/drivers/net/ethernet/smsc/smc9194.c b/drivers/net/ethernet/smsc/smc9194.c
index 664f596971b5..d496888b85d3 100644
--- a/drivers/net/ethernet/smsc/smc9194.c
+++ b/drivers/net/ethernet/smsc/smc9194.c
@@ -663,7 +663,7 @@ static void smc_hardware_send_packet( struct net_device * dev )
lp->saved_skb = NULL;
dev_kfree_skb_any (skb);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
/* we can send another packet */
netif_wake_queue(dev);
@@ -1104,7 +1104,7 @@ static void smc_timeout(struct net_device *dev)
/* "kick" the adaptor */
smc_reset( dev->base_addr );
smc_enable( dev->base_addr );
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
/* clear anything saved */
((struct smc_local *)netdev_priv(dev))->saved_skb = NULL;
netif_wake_queue(dev);
diff --git a/drivers/net/ethernet/smsc/smc91c92_cs.c b/drivers/net/ethernet/smsc/smc91c92_cs.c
index 3449893aea8d..db3c696d7002 100644
--- a/drivers/net/ethernet/smsc/smc91c92_cs.c
+++ b/drivers/net/ethernet/smsc/smc91c92_cs.c
@@ -1172,7 +1172,7 @@ static void smc_hardware_send_packet(struct net_device * dev)
smc->saved_skb = NULL;
dev_kfree_skb_irq(skb);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
netif_start_queue(dev);
}
@@ -1187,7 +1187,7 @@ static void smc_tx_timeout(struct net_device *dev)
inw(ioaddr)&0xff, inw(ioaddr + 2));
dev->stats.tx_errors++;
smc_reset(dev);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
smc->saved_skb = NULL;
netif_wake_queue(dev);
}
diff --git a/drivers/net/ethernet/smsc/smc91x.c b/drivers/net/ethernet/smsc/smc91x.c
index c5ed27c54724..18ac52ded696 100644
--- a/drivers/net/ethernet/smsc/smc91x.c
+++ b/drivers/net/ethernet/smsc/smc91x.c
@@ -619,7 +619,7 @@ static void smc_hardware_send_pkt(unsigned long data)
SMC_SET_MMU_CMD(lp, MC_ENQUEUE);
smc_special_unlock(&lp->lock, flags);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
dev->stats.tx_packets++;
dev->stats.tx_bytes += len;
@@ -1364,7 +1364,7 @@ static void smc_timeout(struct net_device *dev)
schedule_work(&lp->phy_configure);
/* We can accept TX packets again */
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue(dev);
}
diff --git a/drivers/net/ethernet/stmicro/stmmac/Makefile b/drivers/net/ethernet/stmicro/stmmac/Makefile
index b3901616f4f6..0fb362d5a722 100644
--- a/drivers/net/ethernet/stmicro/stmmac/Makefile
+++ b/drivers/net/ethernet/stmicro/stmmac/Makefile
@@ -2,7 +2,8 @@ obj-$(CONFIG_STMMAC_ETH) += stmmac.o
stmmac-objs:= stmmac_main.o stmmac_ethtool.o stmmac_mdio.o ring_mode.o \
chain_mode.o dwmac_lib.o dwmac1000_core.o dwmac1000_dma.o \
dwmac100_core.o dwmac100_dma.o enh_desc.o norm_desc.o \
- mmc_core.o stmmac_hwtstamp.o stmmac_ptp.o $(stmmac-y)
+ mmc_core.o stmmac_hwtstamp.o stmmac_ptp.o dwmac4_descs.o \
+ dwmac4_dma.o dwmac4_lib.o dwmac4_core.o $(stmmac-y)
# Ordering matters. Generic driver must be last.
obj-$(CONFIG_STMMAC_PLATFORM) += stmmac-platform.o
diff --git a/drivers/net/ethernet/stmicro/stmmac/common.h b/drivers/net/ethernet/stmicro/stmmac/common.h
index f96d257308b0..fc60368df2e7 100644
--- a/drivers/net/ethernet/stmicro/stmmac/common.h
+++ b/drivers/net/ethernet/stmicro/stmmac/common.h
@@ -41,6 +41,8 @@
/* Synopsys Core versions */
#define DWMAC_CORE_3_40 0x34
#define DWMAC_CORE_3_50 0x35
+#define DWMAC_CORE_4_00 0x40
+#define STMMAC_CHAN0 0 /* Always supported and default for all chips */
#define DMA_TX_SIZE 512
#define DMA_RX_SIZE 512
@@ -167,6 +169,9 @@ struct stmmac_extra_stats {
unsigned long mtl_rx_fifo_ctrl_active;
unsigned long mac_rx_frame_ctrl_fifo;
unsigned long mac_gmii_rx_proto_engine;
+ /* TSO */
+ unsigned long tx_tso_frames;
+ unsigned long tx_tso_nfrags;
};
/* CSR Frequency Access Defines*/
@@ -243,6 +248,7 @@ enum rx_frame_status {
csum_none = 0x2,
llc_snap = 0x4,
dma_own = 0x8,
+ rx_not_ls = 0x10,
};
/* Tx status */
@@ -269,6 +275,7 @@ enum dma_irq_status {
#define CORE_PCS_ANE_COMPLETE (1 << 5)
#define CORE_PCS_LINK_STATUS (1 << 6)
#define CORE_RGMII_IRQ (1 << 7)
+#define CORE_IRQ_MTL_RX_OVERFLOW BIT(8)
/* Physical Coding Sublayer */
struct rgmii_adv {
@@ -300,8 +307,10 @@ struct dma_features {
/* 802.3az - Energy-Efficient Ethernet (EEE) */
unsigned int eee;
unsigned int av;
+ unsigned int tsoen;
/* TX and RX csum */
unsigned int tx_coe;
+ unsigned int rx_coe;
unsigned int rx_coe_type1;
unsigned int rx_coe_type2;
unsigned int rxfifo_over_2048;
@@ -348,6 +357,10 @@ struct stmmac_desc_ops {
void (*prepare_tx_desc) (struct dma_desc *p, int is_fs, int len,
bool csum_flag, int mode, bool tx_own,
bool ls);
+ void (*prepare_tso_tx_desc)(struct dma_desc *p, int is_fs, int len1,
+ int len2, bool tx_own, bool ls,
+ unsigned int tcphdrlen,
+ unsigned int tcppayloadlen);
/* Set/get the owner of the descriptor */
void (*set_tx_owner) (struct dma_desc *p);
int (*get_tx_owner) (struct dma_desc *p);
@@ -380,6 +393,10 @@ struct stmmac_desc_ops {
u64(*get_timestamp) (void *desc, u32 ats);
/* get rx timestamp status */
int (*get_rx_timestamp_status) (void *desc, u32 ats);
+ /* Display ring */
+ void (*display_ring)(void *head, unsigned int size, bool rx);
+ /* set MSS via context descriptor */
+ void (*set_mss)(struct dma_desc *p, unsigned int mss);
};
extern const struct stmmac_desc_ops enh_desc_ops;
@@ -412,9 +429,15 @@ struct stmmac_dma_ops {
int (*dma_interrupt) (void __iomem *ioaddr,
struct stmmac_extra_stats *x);
/* If supported then get the optional core features */
- unsigned int (*get_hw_feature) (void __iomem *ioaddr);
+ void (*get_hw_feature)(void __iomem *ioaddr,
+ struct dma_features *dma_cap);
/* Program the HW RX Watchdog */
void (*rx_watchdog) (void __iomem *ioaddr, u32 riwt);
+ void (*set_tx_ring_len)(void __iomem *ioaddr, u32 len);
+ void (*set_rx_ring_len)(void __iomem *ioaddr, u32 len);
+ void (*set_rx_tail_ptr)(void __iomem *ioaddr, u32 tail_ptr, u32 chan);
+ void (*set_tx_tail_ptr)(void __iomem *ioaddr, u32 tail_ptr, u32 chan);
+ void (*enable_tso)(void __iomem *ioaddr, bool en, u32 chan);
};
struct mac_device_info;
@@ -463,6 +486,7 @@ struct stmmac_hwtimestamp {
};
extern const struct stmmac_hwtimestamp stmmac_ptp;
+extern const struct stmmac_mode_ops dwmac4_ring_mode_ops;
struct mac_link {
int port;
@@ -495,7 +519,6 @@ struct mac_device_info {
const struct stmmac_hwtimestamp *ptp;
struct mii_regs mii; /* MII register Addresses */
struct mac_link link;
- unsigned int synopsys_uid;
void __iomem *pcsr; /* vpointer to device CSRs */
int multicast_filter_bins;
int unicast_filter_entries;
@@ -504,18 +527,47 @@ struct mac_device_info {
};
struct mac_device_info *dwmac1000_setup(void __iomem *ioaddr, int mcbins,
- int perfect_uc_entries);
-struct mac_device_info *dwmac100_setup(void __iomem *ioaddr);
+ int perfect_uc_entries,
+ int *synopsys_id);
+struct mac_device_info *dwmac100_setup(void __iomem *ioaddr, int *synopsys_id);
+struct mac_device_info *dwmac4_setup(void __iomem *ioaddr, int mcbins,
+ int perfect_uc_entries, int *synopsys_id);
void stmmac_set_mac_addr(void __iomem *ioaddr, u8 addr[6],
unsigned int high, unsigned int low);
void stmmac_get_mac_addr(void __iomem *ioaddr, unsigned char *addr,
unsigned int high, unsigned int low);
-
void stmmac_set_mac(void __iomem *ioaddr, bool enable);
+void stmmac_dwmac4_set_mac_addr(void __iomem *ioaddr, u8 addr[6],
+ unsigned int high, unsigned int low);
+void stmmac_dwmac4_get_mac_addr(void __iomem *ioaddr, unsigned char *addr,
+ unsigned int high, unsigned int low);
+void stmmac_dwmac4_set_mac(void __iomem *ioaddr, bool enable);
+
void dwmac_dma_flush_tx_fifo(void __iomem *ioaddr);
extern const struct stmmac_mode_ops ring_mode_ops;
extern const struct stmmac_mode_ops chain_mode_ops;
-
+extern const struct stmmac_desc_ops dwmac4_desc_ops;
+
+/**
+ * stmmac_get_synopsys_id - return the Synopsys ID.
+ * @hwid: raw contents of the core version (HW ID) register
+ * Description: decode and return the Synopsys ID contained in the
+ * HW core version register value passed in by the caller.
+ */
+static inline u32 stmmac_get_synopsys_id(u32 hwid)
+{
+ /* Check Synopsys Id (not available on old chips) */
+ if (likely(hwid)) {
+ u32 uid = ((hwid & 0x0000ff00) >> 8);
+ u32 synid = (hwid & 0x000000ff);
+
+ pr_info("stmmac - user ID: 0x%x, Synopsys ID: 0x%x\n",
+ uid, synid);
+
+ return synid;
+ }
+ return 0;
+}
#endif /* __COMMON_H__ */
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
index afb90d129cb6..f13499fa1f58 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac-socfpga.c
@@ -49,6 +49,7 @@ struct socfpga_dwmac {
u32 reg_shift;
struct device *dev;
struct regmap *sys_mgr_base_addr;
+ struct reset_control *stmmac_rst;
void __iomem *splitter_base;
bool f2h_ptp_ref_clk;
};
@@ -135,7 +136,7 @@ static int socfpga_dwmac_parse_data(struct socfpga_dwmac *dwmac, struct device *
return 0;
}
-static int socfpga_dwmac_setup(struct socfpga_dwmac *dwmac)
+static int socfpga_dwmac_set_phy_mode(struct socfpga_dwmac *dwmac)
{
struct regmap *sys_mgr_base_addr = dwmac->sys_mgr_base_addr;
int phymode = dwmac->interface;
@@ -164,6 +165,10 @@ static int socfpga_dwmac_setup(struct socfpga_dwmac *dwmac)
if (dwmac->splitter_base)
val = SYSMGR_EMACGRP_CTRL_PHYSEL_ENUM_GMII_MII;
+ /* Assert reset to the enet controller before changing the phy mode */
+ if (dwmac->stmmac_rst)
+ reset_control_assert(dwmac->stmmac_rst);
+
regmap_read(sys_mgr_base_addr, reg_offset, &ctrl);
ctrl &= ~(SYSMGR_EMACGRP_CTRL_PHYSEL_MASK << reg_shift);
ctrl |= val << reg_shift;
@@ -181,57 +186,13 @@ static int socfpga_dwmac_setup(struct socfpga_dwmac *dwmac)
regmap_write(sys_mgr_base_addr, reg_offset, ctrl);
- return 0;
-}
-
-static int socfpga_dwmac_init(struct platform_device *pdev, void *priv)
-{
- struct socfpga_dwmac *dwmac = priv;
- struct net_device *ndev = platform_get_drvdata(pdev);
- struct stmmac_priv *stpriv = NULL;
- int ret = 0;
-
- if (!ndev)
- return -EINVAL;
-
- stpriv = netdev_priv(ndev);
- if (!stpriv)
- return -EINVAL;
-
- /* Assert reset to the enet controller before changing the phy mode */
- if (stpriv->stmmac_rst)
- reset_control_assert(stpriv->stmmac_rst);
-
- /* Setup the phy mode in the system manager registers according to
- * devicetree configuration
- */
- ret = socfpga_dwmac_setup(dwmac);
-
/* Deassert reset for the phy configuration to be sampled by
* the enet controller, and operation to start in requested mode
*/
- if (stpriv->stmmac_rst)
- reset_control_deassert(stpriv->stmmac_rst);
-
- /* Before the enet controller is suspended, the phy is suspended.
- * This causes the phy clock to be gated. The enet controller is
- * resumed before the phy, so the clock is still gated "off" when
- * the enet controller is resumed. This code makes sure the phy
- * is "resumed" before reinitializing the enet controller since
- * the enet controller depends on an active phy clock to complete
- * a DMA reset. A DMA reset will "time out" if executed
- * with no phy clock input on the Synopsys enet controller.
- * Verified through Synopsys Case #8000711656.
- *
- * Note that the phy clock is also gated when the phy is isolated.
- * Phy "suspend" and "isolate" controls are located in phy basic
- * control register 0, and can be modified by the phy driver
- * framework.
- */
- if (stpriv->phydev)
- phy_resume(stpriv->phydev);
+ if (dwmac->stmmac_rst)
+ reset_control_deassert(dwmac->stmmac_rst);
- return ret;
+ return 0;
}
static int socfpga_dwmac_probe(struct platform_device *pdev)
@@ -260,23 +221,59 @@ static int socfpga_dwmac_probe(struct platform_device *pdev)
return ret;
}
- ret = socfpga_dwmac_setup(dwmac);
- if (ret) {
- dev_err(dev, "couldn't setup SoC glue (%d)\n", ret);
- return ret;
- }
-
plat_dat->bsp_priv = dwmac;
- plat_dat->init = socfpga_dwmac_init;
plat_dat->fix_mac_speed = socfpga_dwmac_fix_mac_speed;
ret = stmmac_dvr_probe(&pdev->dev, plat_dat, &stmmac_res);
- if (!ret)
- ret = socfpga_dwmac_init(pdev, dwmac);
+ if (!ret) {
+ struct net_device *ndev = platform_get_drvdata(pdev);
+ struct stmmac_priv *stpriv = netdev_priv(ndev);
+
+ /* The socfpga driver needs to control the stmmac reset to
+ * set the phy mode. Create a copy of the core reset handle
+ * so it can be used by the driver later.
+ */
+ dwmac->stmmac_rst = stpriv->stmmac_rst;
+
+ ret = socfpga_dwmac_set_phy_mode(dwmac);
+ }
return ret;
}
+#ifdef CONFIG_PM_SLEEP
+static int socfpga_dwmac_resume(struct device *dev)
+{
+ struct net_device *ndev = dev_get_drvdata(dev);
+ struct stmmac_priv *priv = netdev_priv(ndev);
+
+ socfpga_dwmac_set_phy_mode(priv->plat->bsp_priv);
+
+ /* Before the enet controller is suspended, the phy is suspended.
+ * This causes the phy clock to be gated. The enet controller is
+ * resumed before the phy, so the clock is still gated "off" when
+ * the enet controller is resumed. This code makes sure the phy
+ * is "resumed" before reinitializing the enet controller since
+ * the enet controller depends on an active phy clock to complete
+ * a DMA reset. A DMA reset will "time out" if executed
+ * with no phy clock input on the Synopsys enet controller.
+ * Verified through Synopsys Case #8000711656.
+ *
+ * Note that the phy clock is also gated when the phy is isolated.
+ * Phy "suspend" and "isolate" controls are located in phy basic
+ * control register 0, and can be modified by the phy driver
+ * framework.
+ */
+ if (priv->phydev)
+ phy_resume(priv->phydev);
+
+ return stmmac_resume(dev);
+}
+#endif /* CONFIG_PM_SLEEP */
+
+static SIMPLE_DEV_PM_OPS(socfpga_dwmac_pm_ops, stmmac_suspend,
+ socfpga_dwmac_resume);
+
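/* Only resume needs SoC-specific handling here: suspend goes straight to the
 * generic stmmac_suspend(), whereas socfpga_dwmac_resume() first re-programs
 * the EMAC phy mode in the system manager and resumes the PHY (un-gating its
 * clock, as explained in the comment above) before calling stmmac_resume().
 */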
static const struct of_device_id socfpga_dwmac_match[] = {
{ .compatible = "altr,socfpga-stmmac" },
{ }
@@ -288,7 +285,7 @@ static struct platform_driver socfpga_dwmac_driver = {
.remove = stmmac_pltfr_remove,
.driver = {
.name = "socfpga-dwmac",
- .pm = &stmmac_pltfr_pm_ops,
+ .pm = &socfpga_dwmac_pm_ops,
.of_match_table = socfpga_dwmac_match,
},
};
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
index c2941172f6d1..fb1eb578e34e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_core.c
@@ -491,7 +491,8 @@ static const struct stmmac_ops dwmac1000_ops = {
};
struct mac_device_info *dwmac1000_setup(void __iomem *ioaddr, int mcbins,
- int perfect_uc_entries)
+ int perfect_uc_entries,
+ int *synopsys_id)
{
struct mac_device_info *mac;
u32 hwid = readl(ioaddr + GMAC_VERSION);
@@ -516,7 +517,9 @@ struct mac_device_info *dwmac1000_setup(void __iomem *ioaddr, int mcbins,
mac->link.speed = GMAC_CONTROL_FES;
mac->mii.addr = GMAC_MII_ADDR;
mac->mii.data = GMAC_MII_DATA;
- mac->synopsys_uid = hwid;
+
+ /* Get and dump the chip ID */
+ *synopsys_id = stmmac_get_synopsys_id(hwid);
return mac;
}
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c
index da32d6037e3e..990746955216 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac1000_dma.c
@@ -215,9 +215,40 @@ static void dwmac1000_dump_dma_regs(void __iomem *ioaddr)
}
}
-static unsigned int dwmac1000_get_hw_feature(void __iomem *ioaddr)
+static void dwmac1000_get_hw_feature(void __iomem *ioaddr,
+ struct dma_features *dma_cap)
{
- return readl(ioaddr + DMA_HW_FEATURE);
+ u32 hw_cap = readl(ioaddr + DMA_HW_FEATURE);
+
+ dma_cap->mbps_10_100 = (hw_cap & DMA_HW_FEAT_MIISEL);
+ dma_cap->mbps_1000 = (hw_cap & DMA_HW_FEAT_GMIISEL) >> 1;
+ dma_cap->half_duplex = (hw_cap & DMA_HW_FEAT_HDSEL) >> 2;
+ dma_cap->hash_filter = (hw_cap & DMA_HW_FEAT_HASHSEL) >> 4;
+ dma_cap->multi_addr = (hw_cap & DMA_HW_FEAT_ADDMAC) >> 5;
+ dma_cap->pcs = (hw_cap & DMA_HW_FEAT_PCSSEL) >> 6;
+ dma_cap->sma_mdio = (hw_cap & DMA_HW_FEAT_SMASEL) >> 8;
+ dma_cap->pmt_remote_wake_up = (hw_cap & DMA_HW_FEAT_RWKSEL) >> 9;
+ dma_cap->pmt_magic_frame = (hw_cap & DMA_HW_FEAT_MGKSEL) >> 10;
+ /* MMC */
+ dma_cap->rmon = (hw_cap & DMA_HW_FEAT_MMCSEL) >> 11;
+ /* IEEE 1588-2002 */
+ dma_cap->time_stamp =
+ (hw_cap & DMA_HW_FEAT_TSVER1SEL) >> 12;
+ /* IEEE 1588-2008 */
+ dma_cap->atime_stamp = (hw_cap & DMA_HW_FEAT_TSVER2SEL) >> 13;
+ /* 802.3az - Energy-Efficient Ethernet (EEE) */
+ dma_cap->eee = (hw_cap & DMA_HW_FEAT_EEESEL) >> 14;
+ dma_cap->av = (hw_cap & DMA_HW_FEAT_AVSEL) >> 15;
+ /* TX and RX csum */
+ dma_cap->tx_coe = (hw_cap & DMA_HW_FEAT_TXCOESEL) >> 16;
+ dma_cap->rx_coe_type1 = (hw_cap & DMA_HW_FEAT_RXTYP1COE) >> 17;
+ dma_cap->rx_coe_type2 = (hw_cap & DMA_HW_FEAT_RXTYP2COE) >> 18;
+ dma_cap->rxfifo_over_2048 = (hw_cap & DMA_HW_FEAT_RXFIFOSIZE) >> 19;
+ /* TX and RX number of channels */
+ dma_cap->number_rx_channel = (hw_cap & DMA_HW_FEAT_RXCHCNT) >> 20;
+ dma_cap->number_tx_channel = (hw_cap & DMA_HW_FEAT_TXCHCNT) >> 22;
+ /* Alternate (enhanced) DESC mode */
+ dma_cap->enh_desc = (hw_cap & DMA_HW_FEAT_ENHDESSEL) >> 24;
}
static void dwmac1000_rx_watchdog(void __iomem *ioaddr, u32 riwt)
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac100_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac100_core.c
index f8dd773f246c..6418b2e07619 100644
--- a/drivers/net/ethernet/stmicro/stmmac/dwmac100_core.c
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac100_core.c
@@ -173,7 +173,7 @@ static const struct stmmac_ops dwmac100_ops = {
.get_umac_addr = dwmac100_get_umac_addr,
};
-struct mac_device_info *dwmac100_setup(void __iomem *ioaddr)
+struct mac_device_info *dwmac100_setup(void __iomem *ioaddr, int *synopsys_id)
{
struct mac_device_info *mac;
@@ -192,7 +192,8 @@ struct mac_device_info *dwmac100_setup(void __iomem *ioaddr)
mac->link.speed = 0;
mac->mii.addr = MAC_MII_ADDR;
mac->mii.data = MAC_MII_DATA;
- mac->synopsys_uid = 0;
+ /* Synopsys Id is not available on old chips */
+ *synopsys_id = 0;
return mac;
}
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4.h b/drivers/net/ethernet/stmicro/stmmac/dwmac4.h
new file mode 100644
index 000000000000..bc50952a18e7
--- /dev/null
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4.h
@@ -0,0 +1,255 @@
+/*
+ * DWMAC4 Header file.
+ *
+ * Copyright (C) 2015 STMicroelectronics Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * Author: Alexandre Torgue <alexandre.torgue@st.com>
+ */
+
+#ifndef __DWMAC4_H__
+#define __DWMAC4_H__
+
+#include "common.h"
+
+/* MAC registers */
+#define GMAC_CONFIG 0x00000000
+#define GMAC_PACKET_FILTER 0x00000008
+#define GMAC_HASH_TAB_0_31 0x00000010
+#define GMAC_HASH_TAB_32_63 0x00000014
+#define GMAC_RX_FLOW_CTRL 0x00000090
+#define GMAC_QX_TX_FLOW_CTRL(x) (0x70 + x * 4)
+#define GMAC_INT_STATUS 0x000000b0
+#define GMAC_INT_EN 0x000000b4
+#define GMAC_AN_CTRL 0x000000e0
+#define GMAC_AN_STATUS 0x000000e4
+#define GMAC_AN_ADV 0x000000e8
+#define GMAC_AN_LPA 0x000000ec
+#define GMAC_PMT 0x000000c0
+#define GMAC_VERSION 0x00000110
+#define GMAC_DEBUG 0x00000114
+#define GMAC_HW_FEATURE0 0x0000011c
+#define GMAC_HW_FEATURE1 0x00000120
+#define GMAC_HW_FEATURE2 0x00000124
+#define GMAC_MDIO_ADDR 0x00000200
+#define GMAC_MDIO_DATA 0x00000204
+#define GMAC_ADDR_HIGH(reg) (0x300 + reg * 8)
+#define GMAC_ADDR_LOW(reg) (0x304 + reg * 8)
+
+/* MAC Packet Filtering */
+#define GMAC_PACKET_FILTER_PR BIT(0)
+#define GMAC_PACKET_FILTER_HMC BIT(2)
+#define GMAC_PACKET_FILTER_PM BIT(4)
+
+#define GMAC_MAX_PERFECT_ADDRESSES 128
+
+/* MAC Flow Control RX */
+#define GMAC_RX_FLOW_CTRL_RFE BIT(0)
+
+/* MAC Flow Control TX */
+#define GMAC_TX_FLOW_CTRL_TFE BIT(1)
+#define GMAC_TX_FLOW_CTRL_PT_SHIFT 16
+
+/* MAC Interrupt bitmap*/
+#define GMAC_INT_PMT_EN BIT(4)
+#define GMAC_INT_LPI_EN BIT(5)
+
+enum dwmac4_irq_status {
+ time_stamp_irq = 0x00001000,
+ mmc_rx_csum_offload_irq = 0x00000800,
+ mmc_tx_irq = 0x00000400,
+ mmc_rx_irq = 0x00000200,
+ mmc_irq = 0x00000100,
+ pmt_irq = 0x00000010,
+ pcs_ane_irq = 0x00000004,
+ pcs_link_irq = 0x00000002,
+};
+
+/* MAC Auto-Neg bitmap*/
+#define GMAC_AN_CTRL_RAN BIT(9)
+#define GMAC_AN_CTRL_ANE BIT(12)
+#define GMAC_AN_CTRL_ELE BIT(14)
+#define GMAC_AN_FD BIT(5)
+#define GMAC_AN_HD BIT(6)
+#define GMAC_AN_PSE_MASK GENMASK(8, 7)
+#define GMAC_AN_PSE_SHIFT 7
+
+/* MAC PMT bitmap */
+enum power_event {
+ pointer_reset = 0x80000000,
+ global_unicast = 0x00000200,
+ wake_up_rx_frame = 0x00000040,
+ magic_frame = 0x00000020,
+ wake_up_frame_en = 0x00000004,
+ magic_pkt_en = 0x00000002,
+ power_down = 0x00000001,
+};
+
+/* MAC Debug bitmap */
+#define GMAC_DEBUG_TFCSTS_MASK GENMASK(18, 17)
+#define GMAC_DEBUG_TFCSTS_SHIFT 17
+#define GMAC_DEBUG_TFCSTS_IDLE 0
+#define GMAC_DEBUG_TFCSTS_WAIT 1
+#define GMAC_DEBUG_TFCSTS_GEN_PAUSE 2
+#define GMAC_DEBUG_TFCSTS_XFER 3
+#define GMAC_DEBUG_TPESTS BIT(16)
+#define GMAC_DEBUG_RFCFCSTS_MASK GENMASK(2, 1)
+#define GMAC_DEBUG_RFCFCSTS_SHIFT 1
+#define GMAC_DEBUG_RPESTS BIT(0)
+
+/* MAC config */
+#define GMAC_CONFIG_IPC BIT(27)
+#define GMAC_CONFIG_2K BIT(22)
+#define GMAC_CONFIG_ACS BIT(20)
+#define GMAC_CONFIG_BE BIT(18)
+#define GMAC_CONFIG_JD BIT(17)
+#define GMAC_CONFIG_JE BIT(16)
+#define GMAC_CONFIG_PS BIT(15)
+#define GMAC_CONFIG_FES BIT(14)
+#define GMAC_CONFIG_DM BIT(13)
+#define GMAC_CONFIG_DCRS BIT(9)
+#define GMAC_CONFIG_TE BIT(1)
+#define GMAC_CONFIG_RE BIT(0)
+
+/* MAC HW features0 bitmap */
+#define GMAC_HW_FEAT_ADDMAC BIT(18)
+#define GMAC_HW_FEAT_RXCOESEL BIT(16)
+#define GMAC_HW_FEAT_TXCOSEL BIT(14)
+#define GMAC_HW_FEAT_EEESEL BIT(13)
+#define GMAC_HW_FEAT_TSSEL BIT(12)
+#define GMAC_HW_FEAT_MMCSEL BIT(8)
+#define GMAC_HW_FEAT_MGKSEL BIT(7)
+#define GMAC_HW_FEAT_RWKSEL BIT(6)
+#define GMAC_HW_FEAT_SMASEL BIT(5)
+#define GMAC_HW_FEAT_VLHASH BIT(4)
+#define GMAC_HW_FEAT_PCSSEL BIT(3)
+#define GMAC_HW_FEAT_HDSEL BIT(2)
+#define GMAC_HW_FEAT_GMIISEL BIT(1)
+#define GMAC_HW_FEAT_MIISEL BIT(0)
+
+/* MAC HW features1 bitmap */
+#define GMAC_HW_FEAT_AVSEL BIT(20)
+#define GMAC_HW_TSOEN BIT(18)
+
+/* MAC HW features2 bitmap */
+#define GMAC_HW_FEAT_TXCHCNT GENMASK(21, 18)
+#define GMAC_HW_FEAT_RXCHCNT GENMASK(15, 12)
+
+/* MAC HW ADDR regs */
+#define GMAC_HI_DCS GENMASK(18, 16)
+#define GMAC_HI_DCS_SHIFT 16
+#define GMAC_HI_REG_AE BIT(31)
+
+/* MTL registers */
+#define MTL_INT_STATUS 0x00000c20
+#define MTL_INT_Q0 BIT(0)
+
+#define MTL_CHAN_BASE_ADDR 0x00000d00
+#define MTL_CHAN_BASE_OFFSET 0x40
+#define MTL_CHANX_BASE_ADDR(x) (MTL_CHAN_BASE_ADDR + \
+ (x * MTL_CHAN_BASE_OFFSET))
+
+#define MTL_CHAN_TX_OP_MODE(x) MTL_CHANX_BASE_ADDR(x)
+#define MTL_CHAN_TX_DEBUG(x) (MTL_CHANX_BASE_ADDR(x) + 0x8)
+#define MTL_CHAN_INT_CTRL(x) (MTL_CHANX_BASE_ADDR(x) + 0x2c)
+#define MTL_CHAN_RX_OP_MODE(x) (MTL_CHANX_BASE_ADDR(x) + 0x30)
+#define MTL_CHAN_RX_DEBUG(x) (MTL_CHANX_BASE_ADDR(x) + 0x38)
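+/* Illustrative: for channel 0 the macros above resolve to
+ * MTL_CHAN_TX_OP_MODE(0) = 0xd00 and MTL_CHAN_RX_OP_MODE(0) = 0xd30.
+ */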
+
+#define MTL_OP_MODE_RSF BIT(5)
+#define MTL_OP_MODE_TSF BIT(1)
+
+#define MTL_OP_MODE_TTC_MASK 0x70
+#define MTL_OP_MODE_TTC_SHIFT 4
+
+#define MTL_OP_MODE_TTC_32 0
+#define MTL_OP_MODE_TTC_64 (1 << MTL_OP_MODE_TTC_SHIFT)
+#define MTL_OP_MODE_TTC_96 (2 << MTL_OP_MODE_TTC_SHIFT)
+#define MTL_OP_MODE_TTC_128 (3 << MTL_OP_MODE_TTC_SHIFT)
+#define MTL_OP_MODE_TTC_192 (4 << MTL_OP_MODE_TTC_SHIFT)
+#define MTL_OP_MODE_TTC_256 (5 << MTL_OP_MODE_TTC_SHIFT)
+#define MTL_OP_MODE_TTC_384 (6 << MTL_OP_MODE_TTC_SHIFT)
+#define MTL_OP_MODE_TTC_512 (7 << MTL_OP_MODE_TTC_SHIFT)
+
+#define MTL_OP_MODE_RTC_MASK 0x18
+#define MTL_OP_MODE_RTC_SHIFT 3
+
+#define MTL_OP_MODE_RTC_32 (1 << MTL_OP_MODE_RTC_SHIFT)
+#define MTL_OP_MODE_RTC_64 0
+#define MTL_OP_MODE_RTC_96 (2 << MTL_OP_MODE_RTC_SHIFT)
+#define MTL_OP_MODE_RTC_128 (3 << MTL_OP_MODE_RTC_SHIFT)
+
+/* MTL debug */
+#define MTL_DEBUG_TXSTSFSTS BIT(5)
+#define MTL_DEBUG_TXFSTS BIT(4)
+#define MTL_DEBUG_TWCSTS BIT(3)
+
+/* MTL debug: Tx FIFO Read Controller Status */
+#define MTL_DEBUG_TRCSTS_MASK GENMASK(2, 1)
+#define MTL_DEBUG_TRCSTS_SHIFT 1
+#define MTL_DEBUG_TRCSTS_IDLE 0
+#define MTL_DEBUG_TRCSTS_READ 1
+#define MTL_DEBUG_TRCSTS_TXW 2
+#define MTL_DEBUG_TRCSTS_WRITE 3
+#define MTL_DEBUG_TXPAUSED BIT(0)
+
+/* MAC debug: GMII or MII Transmit Protocol Engine Status */
+#define MTL_DEBUG_RXFSTS_MASK GENMASK(5, 4)
+#define MTL_DEBUG_RXFSTS_SHIFT 4
+#define MTL_DEBUG_RXFSTS_EMPTY 0
+#define MTL_DEBUG_RXFSTS_BT 1
+#define MTL_DEBUG_RXFSTS_AT 2
+#define MTL_DEBUG_RXFSTS_FULL 3
+#define MTL_DEBUG_RRCSTS_MASK GENMASK(2, 1)
+#define MTL_DEBUG_RRCSTS_SHIFT 1
+#define MTL_DEBUG_RRCSTS_IDLE 0
+#define MTL_DEBUG_RRCSTS_RDATA 1
+#define MTL_DEBUG_RRCSTS_RSTAT 2
+#define MTL_DEBUG_RRCSTS_FLUSH 3
+#define MTL_DEBUG_RWCSTS BIT(0)
+
+/* MTL interrupt */
+#define MTL_RX_OVERFLOW_INT_EN BIT(24)
+#define MTL_RX_OVERFLOW_INT BIT(16)
+
+/* Default operating mode of the MAC */
+#define GMAC_CORE_INIT (GMAC_CONFIG_JD | GMAC_CONFIG_PS | GMAC_CONFIG_ACS | \
+ GMAC_CONFIG_BE | GMAC_CONFIG_DCRS)
+
+/* To dump the core regs excluding the Address Registers */
+#define GMAC_REG_NUM 132
+
+extern const struct stmmac_dma_ops dwmac4_dma_ops;
+extern const struct stmmac_dma_ops dwmac410_dma_ops;
+#endif /* __DWMAC4_H__ */
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
new file mode 100644
index 000000000000..4f7283d05588
--- /dev/null
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_core.c
@@ -0,0 +1,407 @@
+/*
+ * This is the driver for the GMAC on-chip Ethernet controller for ST SoCs.
+ * DWC Ether MAC version 4.00 has been used for developing this code.
+ *
+ * This only implements the mac core functions for this chip.
+ *
+ * Copyright (C) 2015 STMicroelectronics Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * Author: Alexandre Torgue <alexandre.torgue@st.com>
+ */
+
+#include <linux/crc32.h>
+#include <linux/slab.h>
+#include <linux/ethtool.h>
+#include <linux/io.h>
+#include "dwmac4.h"
+
+static void dwmac4_core_init(struct mac_device_info *hw, int mtu)
+{
+ void __iomem *ioaddr = hw->pcsr;
+ u32 value = readl(ioaddr + GMAC_CONFIG);
+
+ value |= GMAC_CORE_INIT;
+
+ if (mtu > 1500)
+ value |= GMAC_CONFIG_2K;
+ if (mtu > 2000)
+ value |= GMAC_CONFIG_JE;
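+	/* Illustrative: an MTU of 3000 passes both checks above, so
+	 * GMAC_CONFIG_2K and GMAC_CONFIG_JE are both set.
+	 */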
+
+ writel(value, ioaddr + GMAC_CONFIG);
+
+ /* Mask GMAC interrupts */
+ writel(GMAC_INT_PMT_EN, ioaddr + GMAC_INT_EN);
+}
+
+static void dwmac4_dump_regs(struct mac_device_info *hw)
+{
+ void __iomem *ioaddr = hw->pcsr;
+ int i;
+
+ pr_debug("\tDWMAC4 regs (base addr = 0x%p)\n", ioaddr);
+
+ for (i = 0; i < GMAC_REG_NUM; i++) {
+ int offset = i * 4;
+
+ pr_debug("\tReg No. %d (offset 0x%x): 0x%08x\n", i,
+ offset, readl(ioaddr + offset));
+ }
+}
+
+static int dwmac4_rx_ipc_enable(struct mac_device_info *hw)
+{
+ void __iomem *ioaddr = hw->pcsr;
+ u32 value = readl(ioaddr + GMAC_CONFIG);
+
+ if (hw->rx_csum)
+ value |= GMAC_CONFIG_IPC;
+ else
+ value &= ~GMAC_CONFIG_IPC;
+
+ writel(value, ioaddr + GMAC_CONFIG);
+
+ value = readl(ioaddr + GMAC_CONFIG);
+
+ return !!(value & GMAC_CONFIG_IPC);
+}
+
+static void dwmac4_pmt(struct mac_device_info *hw, unsigned long mode)
+{
+ void __iomem *ioaddr = hw->pcsr;
+ unsigned int pmt = 0;
+
+ if (mode & WAKE_MAGIC) {
+ pr_debug("GMAC: WOL Magic frame\n");
+ pmt |= power_down | magic_pkt_en;
+ }
+ if (mode & WAKE_UCAST) {
+ pr_debug("GMAC: WOL on global unicast\n");
+ pmt |= global_unicast;
+ }
+
+ writel(pmt, ioaddr + GMAC_PMT);
+}
+
+static void dwmac4_set_umac_addr(struct mac_device_info *hw,
+ unsigned char *addr, unsigned int reg_n)
+{
+ void __iomem *ioaddr = hw->pcsr;
+
+ stmmac_dwmac4_set_mac_addr(ioaddr, addr, GMAC_ADDR_HIGH(reg_n),
+ GMAC_ADDR_LOW(reg_n));
+}
+
+static void dwmac4_get_umac_addr(struct mac_device_info *hw,
+ unsigned char *addr, unsigned int reg_n)
+{
+ void __iomem *ioaddr = hw->pcsr;
+
+ stmmac_dwmac4_get_mac_addr(ioaddr, addr, GMAC_ADDR_HIGH(reg_n),
+ GMAC_ADDR_LOW(reg_n));
+}
+
+static void dwmac4_set_filter(struct mac_device_info *hw,
+ struct net_device *dev)
+{
+ void __iomem *ioaddr = (void __iomem *)dev->base_addr;
+ unsigned int value = 0;
+
+ if (dev->flags & IFF_PROMISC) {
+ value = GMAC_PACKET_FILTER_PR;
+ } else if ((dev->flags & IFF_ALLMULTI) ||
+ (netdev_mc_count(dev) > HASH_TABLE_SIZE)) {
+ /* Pass all multi */
+ value = GMAC_PACKET_FILTER_PM;
+		/* Set all 64 bits of the hash table. To be updated if a
+		 * larger hash table is used.
+ */
+ writel(0xffffffff, ioaddr + GMAC_HASH_TAB_0_31);
+ writel(0xffffffff, ioaddr + GMAC_HASH_TAB_32_63);
+ } else if (!netdev_mc_empty(dev)) {
+ u32 mc_filter[2];
+ struct netdev_hw_addr *ha;
+
+ /* Hash filter for multicast */
+ value = GMAC_PACKET_FILTER_HMC;
+
+ memset(mc_filter, 0, sizeof(mc_filter));
+ netdev_for_each_mc_addr(ha, dev) {
+ /* The upper 6 bits of the calculated CRC are used to
+ * index the content of the Hash Table Reg 0 and 1.
+ */
+ int bit_nr =
+ (bitrev32(~crc32_le(~0, ha->addr, 6)) >> 26);
+			/* The most significant bit determines the register
+			 * to use, while the other 5 bits determine the bit
+			 * within the selected register.
+ */
+ mc_filter[bit_nr >> 5] |= (1 << (bit_nr & 0x1F));
+ }
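+		/* Illustrative: a bit-reversed CRC of 0x94xxxxxx gives
+		 * bit_nr = 37, i.e. bit 5 of mc_filter[1].
+		 */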
+ writel(mc_filter[0], ioaddr + GMAC_HASH_TAB_0_31);
+ writel(mc_filter[1], ioaddr + GMAC_HASH_TAB_32_63);
+ }
+
+ /* Handle multiple unicast addresses */
+ if (netdev_uc_count(dev) > GMAC_MAX_PERFECT_ADDRESSES) {
+ /* Switch to promiscuous mode if more than 128 addrs
+ * are required
+ */
+ value |= GMAC_PACKET_FILTER_PR;
+ } else if (!netdev_uc_empty(dev)) {
+ int reg = 1;
+ struct netdev_hw_addr *ha;
+
+ netdev_for_each_uc_addr(ha, dev) {
+			dwmac4_set_umac_addr(hw, ha->addr, reg);
+ reg++;
+ }
+ }
+
+ writel(value, ioaddr + GMAC_PACKET_FILTER);
+}
+
+static void dwmac4_flow_ctrl(struct mac_device_info *hw, unsigned int duplex,
+ unsigned int fc, unsigned int pause_time)
+{
+ void __iomem *ioaddr = hw->pcsr;
+ u32 channel = STMMAC_CHAN0; /* FIXME */
+ unsigned int flow = 0;
+
+ pr_debug("GMAC Flow-Control:\n");
+ if (fc & FLOW_RX) {
+ pr_debug("\tReceive Flow-Control ON\n");
+ flow |= GMAC_RX_FLOW_CTRL_RFE;
+ writel(flow, ioaddr + GMAC_RX_FLOW_CTRL);
+ }
+ if (fc & FLOW_TX) {
+ pr_debug("\tTransmit Flow-Control ON\n");
+ flow |= GMAC_TX_FLOW_CTRL_TFE;
+ writel(flow, ioaddr + GMAC_QX_TX_FLOW_CTRL(channel));
+
+ if (duplex) {
+ pr_debug("\tduplex mode: PAUSE %d\n", pause_time);
+ flow |= (pause_time << GMAC_TX_FLOW_CTRL_PT_SHIFT);
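+			/* Illustrative: pause_time = 0xffff places 0xffff in
+			 * the PT field (bits 31:16) alongside TFE.
+			 */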
+ writel(flow, ioaddr + GMAC_QX_TX_FLOW_CTRL(channel));
+ }
+ }
+}
+
+static void dwmac4_ctrl_ane(struct mac_device_info *hw, bool restart)
+{
+ void __iomem *ioaddr = hw->pcsr;
+
+ /* auto negotiation enable and External Loopback enable */
+ u32 value = GMAC_AN_CTRL_ANE | GMAC_AN_CTRL_ELE;
+
+ if (restart)
+ value |= GMAC_AN_CTRL_RAN;
+
+ writel(value, ioaddr + GMAC_AN_CTRL);
+}
+
+static void dwmac4_get_adv(struct mac_device_info *hw, struct rgmii_adv *adv)
+{
+ void __iomem *ioaddr = hw->pcsr;
+ u32 value = readl(ioaddr + GMAC_AN_ADV);
+
+ if (value & GMAC_AN_FD)
+ adv->duplex = DUPLEX_FULL;
+ if (value & GMAC_AN_HD)
+ adv->duplex |= DUPLEX_HALF;
+
+ adv->pause = (value & GMAC_AN_PSE_MASK) >> GMAC_AN_PSE_SHIFT;
+
+ value = readl(ioaddr + GMAC_AN_LPA);
+
+ if (value & GMAC_AN_FD)
+ adv->lp_duplex = DUPLEX_FULL;
+ if (value & GMAC_AN_HD)
+ adv->lp_duplex = DUPLEX_HALF;
+
+ adv->lp_pause = (value & GMAC_AN_PSE_MASK) >> GMAC_AN_PSE_SHIFT;
+}
+
+static int dwmac4_irq_status(struct mac_device_info *hw,
+ struct stmmac_extra_stats *x)
+{
+ void __iomem *ioaddr = hw->pcsr;
+ u32 mtl_int_qx_status;
+ u32 intr_status;
+ int ret = 0;
+
+ intr_status = readl(ioaddr + GMAC_INT_STATUS);
+
+	/* Unused events (e.g. MMC interrupts) are not handled. */
+ if ((intr_status & mmc_tx_irq))
+ x->mmc_tx_irq_n++;
+ if (unlikely(intr_status & mmc_rx_irq))
+ x->mmc_rx_irq_n++;
+ if (unlikely(intr_status & mmc_rx_csum_offload_irq))
+ x->mmc_rx_csum_offload_irq_n++;
+ /* Clear the PMT bits 5 and 6 by reading the PMT status reg */
+ if (unlikely(intr_status & pmt_irq)) {
+ readl(ioaddr + GMAC_PMT);
+ x->irq_receive_pmt_irq_n++;
+ }
+
+ if ((intr_status & pcs_ane_irq) || (intr_status & pcs_link_irq)) {
+ readl(ioaddr + GMAC_AN_STATUS);
+ x->irq_pcs_ane_n++;
+ }
+
+ mtl_int_qx_status = readl(ioaddr + MTL_INT_STATUS);
+ /* Check MTL Interrupt: Currently only one queue is used: Q0. */
+ if (mtl_int_qx_status & MTL_INT_Q0) {
+ /* read Queue 0 Interrupt status */
+ u32 status = readl(ioaddr + MTL_CHAN_INT_CTRL(STMMAC_CHAN0));
+
+ if (status & MTL_RX_OVERFLOW_INT) {
+ /* clear Interrupt */
+ writel(status | MTL_RX_OVERFLOW_INT,
+ ioaddr + MTL_CHAN_INT_CTRL(STMMAC_CHAN0));
+ ret = CORE_IRQ_MTL_RX_OVERFLOW;
+ }
+ }
+
+ return ret;
+}
+
+static void dwmac4_debug(void __iomem *ioaddr, struct stmmac_extra_stats *x)
+{
+ u32 value;
+
+ /* Currently only channel 0 is supported */
+ value = readl(ioaddr + MTL_CHAN_TX_DEBUG(STMMAC_CHAN0));
+
+ if (value & MTL_DEBUG_TXSTSFSTS)
+ x->mtl_tx_status_fifo_full++;
+ if (value & MTL_DEBUG_TXFSTS)
+ x->mtl_tx_fifo_not_empty++;
+ if (value & MTL_DEBUG_TWCSTS)
+ x->mmtl_fifo_ctrl++;
+ if (value & MTL_DEBUG_TRCSTS_MASK) {
+ u32 trcsts = (value & MTL_DEBUG_TRCSTS_MASK)
+ >> MTL_DEBUG_TRCSTS_SHIFT;
+ if (trcsts == MTL_DEBUG_TRCSTS_WRITE)
+ x->mtl_tx_fifo_read_ctrl_write++;
+ else if (trcsts == MTL_DEBUG_TRCSTS_TXW)
+ x->mtl_tx_fifo_read_ctrl_wait++;
+ else if (trcsts == MTL_DEBUG_TRCSTS_READ)
+ x->mtl_tx_fifo_read_ctrl_read++;
+ else
+ x->mtl_tx_fifo_read_ctrl_idle++;
+ }
+ if (value & MTL_DEBUG_TXPAUSED)
+ x->mac_tx_in_pause++;
+
+ value = readl(ioaddr + MTL_CHAN_RX_DEBUG(STMMAC_CHAN0));
+
+ if (value & MTL_DEBUG_RXFSTS_MASK) {
+ u32 rxfsts = (value & MTL_DEBUG_RXFSTS_MASK)
+ >> MTL_DEBUG_RRCSTS_SHIFT;
+
+ if (rxfsts == MTL_DEBUG_RXFSTS_FULL)
+ x->mtl_rx_fifo_fill_level_full++;
+ else if (rxfsts == MTL_DEBUG_RXFSTS_AT)
+ x->mtl_rx_fifo_fill_above_thresh++;
+ else if (rxfsts == MTL_DEBUG_RXFSTS_BT)
+ x->mtl_rx_fifo_fill_below_thresh++;
+ else
+ x->mtl_rx_fifo_fill_level_empty++;
+ }
+ if (value & MTL_DEBUG_RRCSTS_MASK) {
+ u32 rrcsts = (value & MTL_DEBUG_RRCSTS_MASK) >>
+ MTL_DEBUG_RRCSTS_SHIFT;
+
+ if (rrcsts == MTL_DEBUG_RRCSTS_FLUSH)
+ x->mtl_rx_fifo_read_ctrl_flush++;
+ else if (rrcsts == MTL_DEBUG_RRCSTS_RSTAT)
+ x->mtl_rx_fifo_read_ctrl_read_data++;
+ else if (rrcsts == MTL_DEBUG_RRCSTS_RDATA)
+ x->mtl_rx_fifo_read_ctrl_status++;
+ else
+ x->mtl_rx_fifo_read_ctrl_idle++;
+ }
+ if (value & MTL_DEBUG_RWCSTS)
+ x->mtl_rx_fifo_ctrl_active++;
+
+ /* GMAC debug */
+ value = readl(ioaddr + GMAC_DEBUG);
+
+ if (value & GMAC_DEBUG_TFCSTS_MASK) {
+ u32 tfcsts = (value & GMAC_DEBUG_TFCSTS_MASK)
+ >> GMAC_DEBUG_TFCSTS_SHIFT;
+
+ if (tfcsts == GMAC_DEBUG_TFCSTS_XFER)
+ x->mac_tx_frame_ctrl_xfer++;
+ else if (tfcsts == GMAC_DEBUG_TFCSTS_GEN_PAUSE)
+ x->mac_tx_frame_ctrl_pause++;
+ else if (tfcsts == GMAC_DEBUG_TFCSTS_WAIT)
+ x->mac_tx_frame_ctrl_wait++;
+ else
+ x->mac_tx_frame_ctrl_idle++;
+ }
+ if (value & GMAC_DEBUG_TPESTS)
+ x->mac_gmii_tx_proto_engine++;
+ if (value & GMAC_DEBUG_RFCFCSTS_MASK)
+ x->mac_rx_frame_ctrl_fifo = (value & GMAC_DEBUG_RFCFCSTS_MASK)
+ >> GMAC_DEBUG_RFCFCSTS_SHIFT;
+ if (value & GMAC_DEBUG_RPESTS)
+ x->mac_gmii_rx_proto_engine++;
+}
+
+static const struct stmmac_ops dwmac4_ops = {
+ .core_init = dwmac4_core_init,
+ .rx_ipc = dwmac4_rx_ipc_enable,
+ .dump_regs = dwmac4_dump_regs,
+ .host_irq_status = dwmac4_irq_status,
+ .flow_ctrl = dwmac4_flow_ctrl,
+ .pmt = dwmac4_pmt,
+ .set_umac_addr = dwmac4_set_umac_addr,
+ .get_umac_addr = dwmac4_get_umac_addr,
+ .ctrl_ane = dwmac4_ctrl_ane,
+ .get_adv = dwmac4_get_adv,
+ .debug = dwmac4_debug,
+ .set_filter = dwmac4_set_filter,
+};
+
+struct mac_device_info *dwmac4_setup(void __iomem *ioaddr, int mcbins,
+ int perfect_uc_entries, int *synopsys_id)
+{
+ struct mac_device_info *mac;
+ u32 hwid = readl(ioaddr + GMAC_VERSION);
+
+ mac = kzalloc(sizeof(const struct mac_device_info), GFP_KERNEL);
+ if (!mac)
+ return NULL;
+
+ mac->pcsr = ioaddr;
+ mac->multicast_filter_bins = mcbins;
+ mac->unicast_filter_entries = perfect_uc_entries;
+ mac->mcast_bits_log2 = 0;
+
+ if (mac->multicast_filter_bins)
+ mac->mcast_bits_log2 = ilog2(mac->multicast_filter_bins);
+
+ mac->mac = &dwmac4_ops;
+
+ mac->link.port = GMAC_CONFIG_PS;
+ mac->link.duplex = GMAC_CONFIG_DM;
+ mac->link.speed = GMAC_CONFIG_FES;
+ mac->mii.addr = GMAC_MDIO_ADDR;
+ mac->mii.data = GMAC_MDIO_DATA;
+
+ /* Get and dump the chip ID */
+ *synopsys_id = stmmac_get_synopsys_id(hwid);
+
+ if (*synopsys_id > DWMAC_CORE_4_00)
+ mac->dma = &dwmac410_dma_ops;
+ else
+ mac->dma = &dwmac4_dma_ops;
+
+ return mac;
+}
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
new file mode 100644
index 000000000000..4ec7397e7fb3
--- /dev/null
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.c
@@ -0,0 +1,389 @@
+/*
+ * This contains the functions to handle the descriptors for DesignWare databook
+ * 4.xx.
+ *
+ * Copyright (C) 2015 STMicroelectronics Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * Author: Alexandre Torgue <alexandre.torgue@st.com>
+ */
+
+#include <linux/stmmac.h>
+#include "common.h"
+#include "dwmac4_descs.h"
+
+static int dwmac4_wrback_get_tx_status(void *data, struct stmmac_extra_stats *x,
+ struct dma_desc *p,
+ void __iomem *ioaddr)
+{
+ struct net_device_stats *stats = (struct net_device_stats *)data;
+ unsigned int tdes3;
+ int ret = tx_done;
+
+ tdes3 = p->des3;
+
+ /* Get tx owner first */
+ if (unlikely(tdes3 & TDES3_OWN))
+ return tx_dma_own;
+
+ /* Verify tx error by looking at the last segment. */
+ if (likely(!(tdes3 & TDES3_LAST_DESCRIPTOR)))
+ return tx_not_ls;
+
+ if (unlikely(tdes3 & TDES3_ERROR_SUMMARY)) {
+ if (unlikely(tdes3 & TDES3_JABBER_TIMEOUT))
+ x->tx_jabber++;
+ if (unlikely(tdes3 & TDES3_PACKET_FLUSHED))
+ x->tx_frame_flushed++;
+ if (unlikely(tdes3 & TDES3_LOSS_CARRIER)) {
+ x->tx_losscarrier++;
+ stats->tx_carrier_errors++;
+ }
+ if (unlikely(tdes3 & TDES3_NO_CARRIER)) {
+ x->tx_carrier++;
+ stats->tx_carrier_errors++;
+ }
+ if (unlikely((tdes3 & TDES3_LATE_COLLISION) ||
+ (tdes3 & TDES3_EXCESSIVE_COLLISION)))
+ stats->collisions +=
+ (tdes3 & TDES3_COLLISION_COUNT_MASK)
+ >> TDES3_COLLISION_COUNT_SHIFT;
+
+ if (unlikely(tdes3 & TDES3_EXCESSIVE_DEFERRAL))
+ x->tx_deferred++;
+
+ if (unlikely(tdes3 & TDES3_UNDERFLOW_ERROR))
+ x->tx_underflow++;
+
+ if (unlikely(tdes3 & TDES3_IP_HDR_ERROR))
+ x->tx_ip_header_error++;
+
+ if (unlikely(tdes3 & TDES3_PAYLOAD_ERROR))
+ x->tx_payload_error++;
+
+ ret = tx_err;
+ }
+
+ if (unlikely(tdes3 & TDES3_DEFERRED))
+ x->tx_deferred++;
+
+ return ret;
+}
+
+static int dwmac4_wrback_get_rx_status(void *data, struct stmmac_extra_stats *x,
+ struct dma_desc *p)
+{
+ struct net_device_stats *stats = (struct net_device_stats *)data;
+ unsigned int rdes1 = p->des1;
+ unsigned int rdes2 = p->des2;
+ unsigned int rdes3 = p->des3;
+ int message_type;
+ int ret = good_frame;
+
+ if (unlikely(rdes3 & RDES3_OWN))
+ return dma_own;
+
+ /* Verify rx error by looking at the last segment. */
+ if (likely(!(rdes3 & RDES3_LAST_DESCRIPTOR)))
+ return discard_frame;
+
+ if (unlikely(rdes3 & RDES3_ERROR_SUMMARY)) {
+ if (unlikely(rdes3 & RDES3_GIANT_PACKET))
+ stats->rx_length_errors++;
+ if (unlikely(rdes3 & RDES3_OVERFLOW_ERROR))
+ x->rx_gmac_overflow++;
+
+ if (unlikely(rdes3 & RDES3_RECEIVE_WATCHDOG))
+ x->rx_watchdog++;
+
+ if (unlikely(rdes3 & RDES3_RECEIVE_ERROR))
+ x->rx_mii++;
+
+ if (unlikely(rdes3 & RDES3_CRC_ERROR)) {
+ x->rx_crc++;
+ stats->rx_crc_errors++;
+ }
+
+ if (unlikely(rdes3 & RDES3_DRIBBLE_ERROR))
+ x->dribbling_bit++;
+
+ ret = discard_frame;
+ }
+
+ message_type = (rdes1 & ERDES4_MSG_TYPE_MASK) >> 8;
+
+ if (rdes1 & RDES1_IP_HDR_ERROR)
+ x->ip_hdr_err++;
+ if (rdes1 & RDES1_IP_CSUM_BYPASSED)
+ x->ip_csum_bypassed++;
+ if (rdes1 & RDES1_IPV4_HEADER)
+ x->ipv4_pkt_rcvd++;
+ if (rdes1 & RDES1_IPV6_HEADER)
+ x->ipv6_pkt_rcvd++;
+ if (message_type == RDES_EXT_SYNC)
+ x->rx_msg_type_sync++;
+ else if (message_type == RDES_EXT_FOLLOW_UP)
+ x->rx_msg_type_follow_up++;
+ else if (message_type == RDES_EXT_DELAY_REQ)
+ x->rx_msg_type_delay_req++;
+ else if (message_type == RDES_EXT_DELAY_RESP)
+ x->rx_msg_type_delay_resp++;
+ else if (message_type == RDES_EXT_PDELAY_REQ)
+ x->rx_msg_type_pdelay_req++;
+ else if (message_type == RDES_EXT_PDELAY_RESP)
+ x->rx_msg_type_pdelay_resp++;
+ else if (message_type == RDES_EXT_PDELAY_FOLLOW_UP)
+ x->rx_msg_type_pdelay_follow_up++;
+ else
+ x->rx_msg_type_ext_no_ptp++;
+
+ if (rdes1 & RDES1_PTP_PACKET_TYPE)
+ x->ptp_frame_type++;
+ if (rdes1 & RDES1_PTP_VER)
+ x->ptp_ver++;
+ if (rdes1 & RDES1_TIMESTAMP_DROPPED)
+ x->timestamp_dropped++;
+
+ if (unlikely(rdes2 & RDES2_SA_FILTER_FAIL)) {
+ x->sa_rx_filter_fail++;
+ ret = discard_frame;
+ }
+ if (unlikely(rdes2 & RDES2_DA_FILTER_FAIL)) {
+ x->da_rx_filter_fail++;
+ ret = discard_frame;
+ }
+
+ if (rdes2 & RDES2_L3_FILTER_MATCH)
+ x->l3_filter_match++;
+ if (rdes2 & RDES2_L4_FILTER_MATCH)
+ x->l4_filter_match++;
+ if ((rdes2 & RDES2_L3_L4_FILT_NB_MATCH_MASK)
+ >> RDES2_L3_L4_FILT_NB_MATCH_SHIFT)
+ x->l3_l4_filter_no_match++;
+
+ return ret;
+}
+
+static int dwmac4_rd_get_tx_len(struct dma_desc *p)
+{
+ return (p->des2 & TDES2_BUFFER1_SIZE_MASK);
+}
+
+static int dwmac4_get_tx_owner(struct dma_desc *p)
+{
+ return (p->des3 & TDES3_OWN) >> TDES3_OWN_SHIFT;
+}
+
+static void dwmac4_set_tx_owner(struct dma_desc *p)
+{
+ p->des3 |= TDES3_OWN;
+}
+
+static void dwmac4_set_rx_owner(struct dma_desc *p)
+{
+ p->des3 |= RDES3_OWN;
+}
+
+static int dwmac4_get_tx_ls(struct dma_desc *p)
+{
+ return (p->des3 & TDES3_LAST_DESCRIPTOR) >> TDES3_LAST_DESCRIPTOR_SHIFT;
+}
+
+static int dwmac4_wrback_get_rx_frame_len(struct dma_desc *p, int rx_coe)
+{
+ return (p->des3 & RDES3_PACKET_SIZE_MASK);
+}
+
+static void dwmac4_rd_enable_tx_timestamp(struct dma_desc *p)
+{
+ p->des2 |= TDES2_TIMESTAMP_ENABLE;
+}
+
+static int dwmac4_wrback_get_tx_timestamp_status(struct dma_desc *p)
+{
+ return (p->des3 & TDES3_TIMESTAMP_STATUS)
+ >> TDES3_TIMESTAMP_STATUS_SHIFT;
+}
+
+/* NOTE: For RX, the CTX bit has to be checked before reading the timestamp;
+ * a specific function is needed for TX and another one for RX.
+ */
+static u64 dwmac4_wrback_get_timestamp(void *desc, u32 ats)
+{
+ struct dma_desc *p = (struct dma_desc *)desc;
+ u64 ns;
+
+ ns = p->des0;
+	/* Convert the high/seconds timestamp value to nanoseconds */
+ ns += p->des1 * 1000000000ULL;
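+	/* Illustrative: des0 = 500000000 and des1 = 3 yield 3500000000 ns */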
+
+ return ns;
+}
+
+static int dwmac4_context_get_rx_timestamp_status(void *desc, u32 ats)
+{
+ struct dma_desc *p = (struct dma_desc *)desc;
+
+ return (p->des1 & RDES1_TIMESTAMP_AVAILABLE)
+ >> RDES1_TIMESTAMP_AVAILABLE_SHIFT;
+}
+
+static void dwmac4_rd_init_rx_desc(struct dma_desc *p, int disable_rx_ic,
+ int mode, int end)
+{
+ p->des3 = RDES3_OWN | RDES3_BUFFER1_VALID_ADDR;
+
+ if (!disable_rx_ic)
+ p->des3 |= RDES3_INT_ON_COMPLETION_EN;
+}
+
+static void dwmac4_rd_init_tx_desc(struct dma_desc *p, int mode, int end)
+{
+ p->des0 = 0;
+ p->des1 = 0;
+ p->des2 = 0;
+ p->des3 = 0;
+}
+
+static void dwmac4_rd_prepare_tx_desc(struct dma_desc *p, int is_fs, int len,
+ bool csum_flag, int mode, bool tx_own,
+ bool ls)
+{
+ unsigned int tdes3 = p->des3;
+
+ p->des2 |= (len & TDES2_BUFFER1_SIZE_MASK);
+
+ if (is_fs)
+ tdes3 |= TDES3_FIRST_DESCRIPTOR;
+ else
+ tdes3 &= ~TDES3_FIRST_DESCRIPTOR;
+
+ if (likely(csum_flag))
+ tdes3 |= (TX_CIC_FULL << TDES3_CHECKSUM_INSERTION_SHIFT);
+ else
+ tdes3 &= ~(TX_CIC_FULL << TDES3_CHECKSUM_INSERTION_SHIFT);
+
+ if (ls)
+ tdes3 |= TDES3_LAST_DESCRIPTOR;
+ else
+ tdes3 &= ~TDES3_LAST_DESCRIPTOR;
+
+ /* Finally set the OWN bit. Later the DMA will start! */
+ if (tx_own)
+ tdes3 |= TDES3_OWN;
+
+ if (is_fs & tx_own)
+		/* When the own bit has to be set for the first frame, all
+		 * other descriptors for the same frame have to be set first,
+		 * to avoid a race condition.
+ */
+ wmb();
+
+ p->des3 = tdes3;
+}
+
+static void dwmac4_rd_prepare_tso_tx_desc(struct dma_desc *p, int is_fs,
+ int len1, int len2, bool tx_own,
+ bool ls, unsigned int tcphdrlen,
+ unsigned int tcppayloadlen)
+{
+ unsigned int tdes3 = p->des3;
+
+ if (len1)
+ p->des2 |= (len1 & TDES2_BUFFER1_SIZE_MASK);
+
+ if (len2)
+ p->des2 |= (len2 << TDES2_BUFFER2_SIZE_MASK_SHIFT)
+ & TDES2_BUFFER2_SIZE_MASK;
+
+ if (is_fs) {
+ tdes3 |= TDES3_FIRST_DESCRIPTOR |
+ TDES3_TCP_SEGMENTATION_ENABLE |
+ ((tcphdrlen << TDES3_HDR_LEN_SHIFT) &
+ TDES3_SLOT_NUMBER_MASK) |
+ ((tcppayloadlen & TDES3_TCP_PKT_PAYLOAD_MASK));
+ } else {
+ tdes3 &= ~TDES3_FIRST_DESCRIPTOR;
+ }
+
+ if (ls)
+ tdes3 |= TDES3_LAST_DESCRIPTOR;
+ else
+ tdes3 &= ~TDES3_LAST_DESCRIPTOR;
+
+ /* Finally set the OWN bit. Later the DMA will start! */
+ if (tx_own)
+ tdes3 |= TDES3_OWN;
+
+ if (is_fs & tx_own)
+		/* When the own bit has to be set for the first frame, all
+		 * other descriptors for the same frame have to be set first,
+		 * to avoid a race condition.
+ */
+ wmb();
+
+ p->des3 = tdes3;
+}
+
+static void dwmac4_release_tx_desc(struct dma_desc *p, int mode)
+{
+ p->des2 = 0;
+ p->des3 = 0;
+}
+
+static void dwmac4_rd_set_tx_ic(struct dma_desc *p)
+{
+ p->des2 |= TDES2_INTERRUPT_ON_COMPLETION;
+}
+
+static void dwmac4_display_ring(void *head, unsigned int size, bool rx)
+{
+ struct dma_desc *p = (struct dma_desc *)head;
+ int i;
+
+ pr_info("%s descriptor ring:\n", rx ? "RX" : "TX");
+
+ for (i = 0; i < size; i++) {
+ if (p->des0)
+ pr_info("%d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
+ i, (unsigned int)virt_to_phys(p),
+ p->des0, p->des1, p->des2, p->des3);
+ p++;
+ }
+}
+
+static void dwmac4_set_mss_ctxt(struct dma_desc *p, unsigned int mss)
+{
+ p->des0 = 0;
+ p->des1 = 0;
+ p->des2 = mss;
+ p->des3 = TDES3_CONTEXT_TYPE | TDES3_CTXT_TCMSSV;
+}
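+/* Illustrative: an MSS of 1460 writes 0x5b4 into des2, with the context type
+ * and TCMSSV bits set in des3.
+ */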
+
+const struct stmmac_desc_ops dwmac4_desc_ops = {
+ .tx_status = dwmac4_wrback_get_tx_status,
+ .rx_status = dwmac4_wrback_get_rx_status,
+ .get_tx_len = dwmac4_rd_get_tx_len,
+ .get_tx_owner = dwmac4_get_tx_owner,
+ .set_tx_owner = dwmac4_set_tx_owner,
+ .set_rx_owner = dwmac4_set_rx_owner,
+ .get_tx_ls = dwmac4_get_tx_ls,
+ .get_rx_frame_len = dwmac4_wrback_get_rx_frame_len,
+ .enable_tx_timestamp = dwmac4_rd_enable_tx_timestamp,
+ .get_tx_timestamp_status = dwmac4_wrback_get_tx_timestamp_status,
+ .get_timestamp = dwmac4_wrback_get_timestamp,
+ .get_rx_timestamp_status = dwmac4_context_get_rx_timestamp_status,
+ .set_tx_ic = dwmac4_rd_set_tx_ic,
+ .prepare_tx_desc = dwmac4_rd_prepare_tx_desc,
+ .prepare_tso_tx_desc = dwmac4_rd_prepare_tso_tx_desc,
+ .release_tx_desc = dwmac4_release_tx_desc,
+ .init_rx_desc = dwmac4_rd_init_rx_desc,
+ .init_tx_desc = dwmac4_rd_init_tx_desc,
+ .display_ring = dwmac4_display_ring,
+ .set_mss = dwmac4_set_mss_ctxt,
+};
+
+const struct stmmac_mode_ops dwmac4_ring_mode_ops = { };
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.h b/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.h
new file mode 100644
index 000000000000..0902a2edeaa9
--- /dev/null
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_descs.h
@@ -0,0 +1,129 @@
+/*
+ * Header File to describe the DMA descriptors and related definitions specific
+ * for DesignWare databook 4.xx.
+ *
+ * Copyright (C) 2015 STMicroelectronics Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * Author: Alexandre Torgue <alexandre.torgue@st.com>
+ */
+
+#ifndef __DWMAC4_DESCS_H__
+#define __DWMAC4_DESCS_H__
+
+#include <linux/bitops.h>
+
+/* Normal transmit descriptor defines (without split feature) */
+
+/* TDES2 (read format) */
+#define TDES2_BUFFER1_SIZE_MASK GENMASK(13, 0)
+#define TDES2_VLAN_TAG_MASK GENMASK(15, 14)
+#define TDES2_BUFFER2_SIZE_MASK GENMASK(29, 16)
+#define TDES2_BUFFER2_SIZE_MASK_SHIFT 16
+#define TDES2_TIMESTAMP_ENABLE BIT(30)
+#define TDES2_INTERRUPT_ON_COMPLETION BIT(31)
+
+/* TDES3 (read format) */
+#define TDES3_PACKET_SIZE_MASK GENMASK(14, 0)
+#define TDES3_CHECKSUM_INSERTION_MASK GENMASK(17, 16)
+#define TDES3_CHECKSUM_INSERTION_SHIFT 16
+#define TDES3_TCP_PKT_PAYLOAD_MASK GENMASK(17, 0)
+#define TDES3_TCP_SEGMENTATION_ENABLE BIT(18)
+#define TDES3_HDR_LEN_SHIFT 19
+#define TDES3_SLOT_NUMBER_MASK GENMASK(22, 19)
+#define TDES3_SA_INSERT_CTRL_MASK GENMASK(25, 23)
+#define TDES3_CRC_PAD_CTRL_MASK GENMASK(27, 26)
+
+/* TDES3 (write back format) */
+#define TDES3_IP_HDR_ERROR BIT(0)
+#define TDES3_DEFERRED BIT(1)
+#define TDES3_UNDERFLOW_ERROR BIT(2)
+#define TDES3_EXCESSIVE_DEFERRAL BIT(3)
+#define TDES3_COLLISION_COUNT_MASK GENMASK(7, 4)
+#define TDES3_COLLISION_COUNT_SHIFT 4
+#define TDES3_EXCESSIVE_COLLISION BIT(8)
+#define TDES3_LATE_COLLISION BIT(9)
+#define TDES3_NO_CARRIER BIT(10)
+#define TDES3_LOSS_CARRIER BIT(11)
+#define TDES3_PAYLOAD_ERROR BIT(12)
+#define TDES3_PACKET_FLUSHED BIT(13)
+#define TDES3_JABBER_TIMEOUT BIT(14)
+#define TDES3_ERROR_SUMMARY BIT(15)
+#define TDES3_TIMESTAMP_STATUS BIT(17)
+#define TDES3_TIMESTAMP_STATUS_SHIFT 17
+
+/* TDES3 context */
+#define TDES3_CTXT_TCMSSV BIT(26)
+
+/* TDES3 Common */
+#define TDES3_LAST_DESCRIPTOR BIT(28)
+#define TDES3_LAST_DESCRIPTOR_SHIFT 28
+#define TDES3_FIRST_DESCRIPTOR BIT(29)
+#define TDES3_CONTEXT_TYPE BIT(30)
+
+/* TDS3 use for both format (read and write back) */
+#define TDES3_OWN BIT(31)
+#define TDES3_OWN_SHIFT 31
+
+/* Normal receive descriptor defines (without split feature) */
+
+/* RDES0 (write back format) */
+#define RDES0_VLAN_TAG_MASK GENMASK(15, 0)
+
+/* RDES1 (write back format) */
+#define RDES1_IP_PAYLOAD_TYPE_MASK GENMASK(2, 0)
+#define RDES1_IP_HDR_ERROR BIT(3)
+#define RDES1_IPV4_HEADER BIT(4)
+#define RDES1_IPV6_HEADER BIT(5)
+#define RDES1_IP_CSUM_BYPASSED BIT(6)
+#define RDES1_IP_CSUM_ERROR BIT(7)
+#define RDES1_PTP_MSG_TYPE_MASK GENMASK(11, 8)
+#define RDES1_PTP_PACKET_TYPE BIT(12)
+#define RDES1_PTP_VER BIT(13)
+#define RDES1_TIMESTAMP_AVAILABLE BIT(14)
+#define RDES1_TIMESTAMP_AVAILABLE_SHIFT 14
+#define RDES1_TIMESTAMP_DROPPED BIT(15)
+#define RDES1_IP_TYPE1_CSUM_MASK GENMASK(31, 16)
+
+/* RDES2 (write back format) */
+#define RDES2_L3_L4_HEADER_SIZE_MASK GENMASK(9, 0)
+#define RDES2_VLAN_FILTER_STATUS BIT(15)
+#define RDES2_SA_FILTER_FAIL BIT(16)
+#define RDES2_DA_FILTER_FAIL BIT(17)
+#define RDES2_HASH_FILTER_STATUS BIT(18)
+#define RDES2_MAC_ADDR_MATCH_MASK GENMASK(26, 19)
+#define RDES2_HASH_VALUE_MATCH_MASK GENMASK(26, 19)
+#define RDES2_L3_FILTER_MATCH BIT(27)
+#define RDES2_L4_FILTER_MATCH BIT(28)
+#define RDES2_L3_L4_FILT_NB_MATCH_MASK GENMASK(27, 26)
+#define RDES2_L3_L4_FILT_NB_MATCH_SHIFT 26
+
+/* RDES3 (write back format) */
+#define RDES3_PACKET_SIZE_MASK GENMASK(14, 0)
+#define RDES3_ERROR_SUMMARY BIT(15)
+#define RDES3_PACKET_LEN_TYPE_MASK GENMASK(18, 16)
+#define RDES3_DRIBBLE_ERROR BIT(19)
+#define RDES3_RECEIVE_ERROR BIT(20)
+#define RDES3_OVERFLOW_ERROR BIT(21)
+#define RDES3_RECEIVE_WATCHDOG BIT(22)
+#define RDES3_GIANT_PACKET BIT(23)
+#define RDES3_CRC_ERROR BIT(24)
+#define RDES3_RDES0_VALID BIT(25)
+#define RDES3_RDES1_VALID BIT(26)
+#define RDES3_RDES2_VALID BIT(27)
+#define RDES3_LAST_DESCRIPTOR BIT(28)
+#define RDES3_FIRST_DESCRIPTOR BIT(29)
+#define RDES3_CONTEXT_DESCRIPTOR BIT(30)
+
+/* RDES3 (read format) */
+#define RDES3_BUFFER1_VALID_ADDR BIT(24)
+#define RDES3_BUFFER2_VALID_ADDR BIT(25)
+#define RDES3_INT_ON_COMPLETION_EN BIT(30)
+
+/* TDS3 use for both format (read and write back) */
+#define RDES3_OWN BIT(31)
+
+#endif /* __DWMAC4_DESCS_H__ */
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
new file mode 100644
index 000000000000..116151cd6a95
--- /dev/null
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.c
@@ -0,0 +1,354 @@
+/*
+ * This is the driver for the GMAC on-chip Ethernet controller for ST SoCs.
+ * DWC Ether MAC version 4.xx has been used for developing this code.
+ *
+ * This contains the functions to handle the dma.
+ *
+ * Copyright (C) 2015 STMicroelectronics Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * Author: Alexandre Torgue <alexandre.torgue@st.com>
+ */
+
+#include <linux/io.h>
+#include "dwmac4.h"
+#include "dwmac4_dma.h"
+
+static void dwmac4_dma_axi(void __iomem *ioaddr, struct stmmac_axi *axi)
+{
+ u32 value = readl(ioaddr + DMA_SYS_BUS_MODE);
+ int i;
+
+ pr_info("dwmac4: Master AXI performs %s burst length\n",
+ (value & DMA_SYS_BUS_FB) ? "fixed" : "any");
+
+ if (axi->axi_lpi_en)
+ value |= DMA_AXI_EN_LPI;
+ if (axi->axi_xit_frm)
+ value |= DMA_AXI_LPI_XIT_FRM;
+
+ value |= (axi->axi_wr_osr_lmt & DMA_AXI_OSR_MAX) <<
+ DMA_AXI_WR_OSR_LMT_SHIFT;
+
+ value |= (axi->axi_rd_osr_lmt & DMA_AXI_OSR_MAX) <<
+ DMA_AXI_RD_OSR_LMT_SHIFT;
+
+	/* Depending on the UNDEF bit, the AXI master will perform any burst
+	 * length according to the programmed BLEN values (by default all
+	 * BLEN bits are set).
+ */
+ for (i = 0; i < AXI_BLEN; i++) {
+ switch (axi->axi_blen[i]) {
+ case 256:
+ value |= DMA_AXI_BLEN256;
+ break;
+ case 128:
+ value |= DMA_AXI_BLEN128;
+ break;
+ case 64:
+ value |= DMA_AXI_BLEN64;
+ break;
+ case 32:
+ value |= DMA_AXI_BLEN32;
+ break;
+ case 16:
+ value |= DMA_AXI_BLEN16;
+ break;
+ case 8:
+ value |= DMA_AXI_BLEN8;
+ break;
+ case 4:
+ value |= DMA_AXI_BLEN4;
+ break;
+ }
+ }
+
+ writel(value, ioaddr + DMA_SYS_BUS_MODE);
+}
+
+static void dwmac4_dma_init_channel(void __iomem *ioaddr, int pbl,
+ u32 dma_tx_phy, u32 dma_rx_phy,
+ u32 channel)
+{
+ u32 value;
+
+	/* Set the PBL for each channel. Currently the same configuration is
+	 * applied to each channel.
+ */
+ value = readl(ioaddr + DMA_CHAN_CONTROL(channel));
+ value = value | DMA_BUS_MODE_PBL;
+ writel(value, ioaddr + DMA_CHAN_CONTROL(channel));
+
+ value = readl(ioaddr + DMA_CHAN_TX_CONTROL(channel));
+ value = value | (pbl << DMA_BUS_MODE_PBL_SHIFT);
+ writel(value, ioaddr + DMA_CHAN_TX_CONTROL(channel));
+
+ value = readl(ioaddr + DMA_CHAN_RX_CONTROL(channel));
+ value = value | (pbl << DMA_BUS_MODE_RPBL_SHIFT);
+ writel(value, ioaddr + DMA_CHAN_RX_CONTROL(channel));
+
+ /* Mask interrupts by writing to CSR7 */
+ writel(DMA_CHAN_INTR_DEFAULT_MASK, ioaddr + DMA_CHAN_INTR_ENA(channel));
+
+ writel(dma_tx_phy, ioaddr + DMA_CHAN_TX_BASE_ADDR(channel));
+ writel(dma_rx_phy, ioaddr + DMA_CHAN_RX_BASE_ADDR(channel));
+}
+
+static void dwmac4_dma_init(void __iomem *ioaddr, int pbl, int fb, int mb,
+ int aal, u32 dma_tx, u32 dma_rx, int atds)
+{
+ u32 value = readl(ioaddr + DMA_SYS_BUS_MODE);
+ int i;
+
+ /* Set the Fixed burst mode */
+ if (fb)
+ value |= DMA_SYS_BUS_FB;
+
+ /* Mixed Burst has no effect when fb is set */
+ if (mb)
+ value |= DMA_SYS_BUS_MB;
+
+ if (aal)
+ value |= DMA_SYS_BUS_AAL;
+
+ writel(value, ioaddr + DMA_SYS_BUS_MODE);
+
+ for (i = 0; i < DMA_CHANNEL_NB_MAX; i++)
+ dwmac4_dma_init_channel(ioaddr, pbl, dma_tx, dma_rx, i);
+}
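+/* Note: DMA_CHANNEL_NB_MAX is currently 1, so the loop above only sets up
+ * channel 0.
+ */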
+
+static void _dwmac4_dump_dma_regs(void __iomem *ioaddr, u32 channel)
+{
+ pr_debug(" Channel %d\n", channel);
+ pr_debug("\tDMA_CHAN_CONTROL, offset: 0x%x, val: 0x%x\n", 0,
+ readl(ioaddr + DMA_CHAN_CONTROL(channel)));
+ pr_debug("\tDMA_CHAN_TX_CONTROL, offset: 0x%x, val: 0x%x\n", 0x4,
+ readl(ioaddr + DMA_CHAN_TX_CONTROL(channel)));
+ pr_debug("\tDMA_CHAN_RX_CONTROL, offset: 0x%x, val: 0x%x\n", 0x8,
+ readl(ioaddr + DMA_CHAN_RX_CONTROL(channel)));
+ pr_debug("\tDMA_CHAN_TX_BASE_ADDR, offset: 0x%x, val: 0x%x\n", 0x14,
+ readl(ioaddr + DMA_CHAN_TX_BASE_ADDR(channel)));
+ pr_debug("\tDMA_CHAN_RX_BASE_ADDR, offset: 0x%x, val: 0x%x\n", 0x1c,
+ readl(ioaddr + DMA_CHAN_RX_BASE_ADDR(channel)));
+ pr_debug("\tDMA_CHAN_TX_END_ADDR, offset: 0x%x, val: 0x%x\n", 0x20,
+ readl(ioaddr + DMA_CHAN_TX_END_ADDR(channel)));
+ pr_debug("\tDMA_CHAN_RX_END_ADDR, offset: 0x%x, val: 0x%x\n", 0x28,
+ readl(ioaddr + DMA_CHAN_RX_END_ADDR(channel)));
+ pr_debug("\tDMA_CHAN_TX_RING_LEN, offset: 0x%x, val: 0x%x\n", 0x2c,
+ readl(ioaddr + DMA_CHAN_TX_RING_LEN(channel)));
+ pr_debug("\tDMA_CHAN_RX_RING_LEN, offset: 0x%x, val: 0x%x\n", 0x30,
+ readl(ioaddr + DMA_CHAN_RX_RING_LEN(channel)));
+ pr_debug("\tDMA_CHAN_INTR_ENA, offset: 0x%x, val: 0x%x\n", 0x34,
+ readl(ioaddr + DMA_CHAN_INTR_ENA(channel)));
+ pr_debug("\tDMA_CHAN_RX_WATCHDOG, offset: 0x%x, val: 0x%x\n", 0x38,
+ readl(ioaddr + DMA_CHAN_RX_WATCHDOG(channel)));
+ pr_debug("\tDMA_CHAN_SLOT_CTRL_STATUS, offset: 0x%x, val: 0x%x\n", 0x3c,
+ readl(ioaddr + DMA_CHAN_SLOT_CTRL_STATUS(channel)));
+ pr_debug("\tDMA_CHAN_CUR_TX_DESC, offset: 0x%x, val: 0x%x\n", 0x44,
+ readl(ioaddr + DMA_CHAN_CUR_TX_DESC(channel)));
+ pr_debug("\tDMA_CHAN_CUR_RX_DESC, offset: 0x%x, val: 0x%x\n", 0x4c,
+ readl(ioaddr + DMA_CHAN_CUR_RX_DESC(channel)));
+ pr_debug("\tDMA_CHAN_CUR_TX_BUF_ADDR, offset: 0x%x, val: 0x%x\n", 0x54,
+ readl(ioaddr + DMA_CHAN_CUR_TX_BUF_ADDR(channel)));
+ pr_debug("\tDMA_CHAN_CUR_RX_BUF_ADDR, offset: 0x%x, val: 0x%x\n", 0x5c,
+ readl(ioaddr + DMA_CHAN_CUR_RX_BUF_ADDR(channel)));
+ pr_debug("\tDMA_CHAN_STATUS, offset: 0x%x, val: 0x%x\n", 0x60,
+ readl(ioaddr + DMA_CHAN_STATUS(channel)));
+}
+
+static void dwmac4_dump_dma_regs(void __iomem *ioaddr)
+{
+ int i;
+
+ pr_debug(" GMAC4 DMA registers\n");
+
+ for (i = 0; i < DMA_CHANNEL_NB_MAX; i++)
+ _dwmac4_dump_dma_regs(ioaddr, i);
+}
+
+static void dwmac4_rx_watchdog(void __iomem *ioaddr, u32 riwt)
+{
+ int i;
+
+ for (i = 0; i < DMA_CHANNEL_NB_MAX; i++)
+ writel(riwt, ioaddr + DMA_CHAN_RX_WATCHDOG(i));
+}
+
+static void dwmac4_dma_chan_op_mode(void __iomem *ioaddr, int txmode,
+ int rxmode, u32 channel)
+{
+ u32 mtl_tx_op, mtl_rx_op, mtl_rx_int;
+
+	/* The following code is only done for channel 0; other channels are
+	 * not yet supported.
+ */
+ mtl_tx_op = readl(ioaddr + MTL_CHAN_TX_OP_MODE(channel));
+
+ if (txmode == SF_DMA_MODE) {
+ pr_debug("GMAC: enable TX store and forward mode\n");
+ /* Transmit COE type 2 cannot be done in cut-through mode. */
+ mtl_tx_op |= MTL_OP_MODE_TSF;
+ } else {
+ pr_debug("GMAC: disabling TX SF (threshold %d)\n", txmode);
+ mtl_tx_op &= ~MTL_OP_MODE_TSF;
+ mtl_tx_op &= MTL_OP_MODE_TTC_MASK;
+ /* Set the transmit threshold */
+ if (txmode <= 32)
+ mtl_tx_op |= MTL_OP_MODE_TTC_32;
+ else if (txmode <= 64)
+ mtl_tx_op |= MTL_OP_MODE_TTC_64;
+ else if (txmode <= 96)
+ mtl_tx_op |= MTL_OP_MODE_TTC_96;
+ else if (txmode <= 128)
+ mtl_tx_op |= MTL_OP_MODE_TTC_128;
+ else if (txmode <= 192)
+ mtl_tx_op |= MTL_OP_MODE_TTC_192;
+ else if (txmode <= 256)
+ mtl_tx_op |= MTL_OP_MODE_TTC_256;
+ else if (txmode <= 384)
+ mtl_tx_op |= MTL_OP_MODE_TTC_384;
+ else
+ mtl_tx_op |= MTL_OP_MODE_TTC_512;
+ }
+
+ writel(mtl_tx_op, ioaddr + MTL_CHAN_TX_OP_MODE(channel));
+
+ mtl_rx_op = readl(ioaddr + MTL_CHAN_RX_OP_MODE(channel));
+
+ if (rxmode == SF_DMA_MODE) {
+ pr_debug("GMAC: enable RX store and forward mode\n");
+ mtl_rx_op |= MTL_OP_MODE_RSF;
+ } else {
+ pr_debug("GMAC: disable RX SF mode (threshold %d)\n", rxmode);
+ mtl_rx_op &= ~MTL_OP_MODE_RSF;
+ mtl_rx_op &= MTL_OP_MODE_RTC_MASK;
+ if (rxmode <= 32)
+ mtl_rx_op |= MTL_OP_MODE_RTC_32;
+ else if (rxmode <= 64)
+ mtl_rx_op |= MTL_OP_MODE_RTC_64;
+ else if (rxmode <= 96)
+ mtl_rx_op |= MTL_OP_MODE_RTC_96;
+ else
+ mtl_rx_op |= MTL_OP_MODE_RTC_128;
+ }
+
+ writel(mtl_rx_op, ioaddr + MTL_CHAN_RX_OP_MODE(channel));
+
+ /* Enable MTL RX overflow */
+ mtl_rx_int = readl(ioaddr + MTL_CHAN_INT_CTRL(channel));
+ writel(mtl_rx_int | MTL_RX_OVERFLOW_INT_EN,
+ ioaddr + MTL_CHAN_INT_CTRL(channel));
+}
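+/* Illustrative: txmode = 64 selects MTL_OP_MODE_TTC_64 and rxmode = 64
+ * selects MTL_OP_MODE_RTC_64 in the helper above.
+ */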
+
+static void dwmac4_dma_operation_mode(void __iomem *ioaddr, int txmode,
+ int rxmode, int rxfifosz)
+{
+ /* Only Channel 0 is actually configured and used */
+ dwmac4_dma_chan_op_mode(ioaddr, txmode, rxmode, 0);
+}
+
+static void dwmac4_get_hw_feature(void __iomem *ioaddr,
+ struct dma_features *dma_cap)
+{
+ u32 hw_cap = readl(ioaddr + GMAC_HW_FEATURE0);
+
+ /* MAC HW feature0 */
+ dma_cap->mbps_10_100 = (hw_cap & GMAC_HW_FEAT_MIISEL);
+ dma_cap->mbps_1000 = (hw_cap & GMAC_HW_FEAT_GMIISEL) >> 1;
+ dma_cap->half_duplex = (hw_cap & GMAC_HW_FEAT_HDSEL) >> 2;
+ dma_cap->hash_filter = (hw_cap & GMAC_HW_FEAT_VLHASH) >> 4;
+ dma_cap->multi_addr = (hw_cap & GMAC_HW_FEAT_ADDMAC) >> 18;
+ dma_cap->pcs = (hw_cap & GMAC_HW_FEAT_PCSSEL) >> 3;
+ dma_cap->sma_mdio = (hw_cap & GMAC_HW_FEAT_SMASEL) >> 5;
+ dma_cap->pmt_remote_wake_up = (hw_cap & GMAC_HW_FEAT_RWKSEL) >> 6;
+ dma_cap->pmt_magic_frame = (hw_cap & GMAC_HW_FEAT_MGKSEL) >> 7;
+ /* MMC */
+ dma_cap->rmon = (hw_cap & GMAC_HW_FEAT_MMCSEL) >> 8;
+ /* IEEE 1588-2008 */
+ dma_cap->atime_stamp = (hw_cap & GMAC_HW_FEAT_TSSEL) >> 12;
+ /* 802.3az - Energy-Efficient Ethernet (EEE) */
+ dma_cap->eee = (hw_cap & GMAC_HW_FEAT_EEESEL) >> 13;
+ /* TX and RX csum */
+ dma_cap->tx_coe = (hw_cap & GMAC_HW_FEAT_TXCOSEL) >> 14;
+ dma_cap->rx_coe = (hw_cap & GMAC_HW_FEAT_RXCOESEL) >> 16;
+
+ /* MAC HW feature1 */
+ hw_cap = readl(ioaddr + GMAC_HW_FEATURE1);
+ dma_cap->av = (hw_cap & GMAC_HW_FEAT_AVSEL) >> 20;
+ dma_cap->tsoen = (hw_cap & GMAC_HW_TSOEN) >> 18;
+ /* MAC HW feature2 */
+ hw_cap = readl(ioaddr + GMAC_HW_FEATURE2);
+ /* TX and RX number of channels */
+ dma_cap->number_rx_channel =
+ ((hw_cap & GMAC_HW_FEAT_RXCHCNT) >> 12) + 1;
+ dma_cap->number_tx_channel =
+ ((hw_cap & GMAC_HW_FEAT_TXCHCNT) >> 18) + 1;
+
+ /* IEEE 1588-2002 */
+ dma_cap->time_stamp = 0;
+}
+
+/* Enable/disable TSO feature and set MSS */
+static void dwmac4_enable_tso(void __iomem *ioaddr, bool en, u32 chan)
+{
+ u32 value;
+
+ if (en) {
+ /* enable TSO */
+ value = readl(ioaddr + DMA_CHAN_TX_CONTROL(chan));
+ writel(value | DMA_CONTROL_TSE,
+ ioaddr + DMA_CHAN_TX_CONTROL(chan));
+ } else {
+		/* disable TSO */
+ value = readl(ioaddr + DMA_CHAN_TX_CONTROL(chan));
+ writel(value & ~DMA_CONTROL_TSE,
+ ioaddr + DMA_CHAN_TX_CONTROL(chan));
+ }
+}
+
+const struct stmmac_dma_ops dwmac4_dma_ops = {
+ .reset = dwmac4_dma_reset,
+ .init = dwmac4_dma_init,
+ .axi = dwmac4_dma_axi,
+ .dump_regs = dwmac4_dump_dma_regs,
+ .dma_mode = dwmac4_dma_operation_mode,
+ .enable_dma_irq = dwmac4_enable_dma_irq,
+ .disable_dma_irq = dwmac4_disable_dma_irq,
+ .start_tx = dwmac4_dma_start_tx,
+ .stop_tx = dwmac4_dma_stop_tx,
+ .start_rx = dwmac4_dma_start_rx,
+ .stop_rx = dwmac4_dma_stop_rx,
+ .dma_interrupt = dwmac4_dma_interrupt,
+ .get_hw_feature = dwmac4_get_hw_feature,
+ .rx_watchdog = dwmac4_rx_watchdog,
+ .set_rx_ring_len = dwmac4_set_rx_ring_len,
+ .set_tx_ring_len = dwmac4_set_tx_ring_len,
+ .set_rx_tail_ptr = dwmac4_set_rx_tail_ptr,
+ .set_tx_tail_ptr = dwmac4_set_tx_tail_ptr,
+ .enable_tso = dwmac4_enable_tso,
+};
+
+const struct stmmac_dma_ops dwmac410_dma_ops = {
+ .reset = dwmac4_dma_reset,
+ .init = dwmac4_dma_init,
+ .axi = dwmac4_dma_axi,
+ .dump_regs = dwmac4_dump_dma_regs,
+ .dma_mode = dwmac4_dma_operation_mode,
+ .enable_dma_irq = dwmac410_enable_dma_irq,
+ .disable_dma_irq = dwmac4_disable_dma_irq,
+ .start_tx = dwmac4_dma_start_tx,
+ .stop_tx = dwmac4_dma_stop_tx,
+ .start_rx = dwmac4_dma_start_rx,
+ .stop_rx = dwmac4_dma_stop_rx,
+ .dma_interrupt = dwmac4_dma_interrupt,
+ .get_hw_feature = dwmac4_get_hw_feature,
+ .rx_watchdog = dwmac4_rx_watchdog,
+ .set_rx_ring_len = dwmac4_set_rx_ring_len,
+ .set_tx_ring_len = dwmac4_set_tx_ring_len,
+ .set_rx_tail_ptr = dwmac4_set_rx_tail_ptr,
+ .set_tx_tail_ptr = dwmac4_set_tx_tail_ptr,
+ .enable_tso = dwmac4_enable_tso,
+};
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.h b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.h
new file mode 100644
index 000000000000..1b06df749e2b
--- /dev/null
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_dma.h
@@ -0,0 +1,202 @@
+/*
+ * DWMAC4 DMA Header file.
+ *
+ * Copyright (C) 2007-2015 STMicroelectronics Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * Author: Alexandre Torgue <alexandre.torgue@st.com>
+ */
+
+#ifndef __DWMAC4_DMA_H__
+#define __DWMAC4_DMA_H__
+
+/* Define the max channel number used for TX (also RX).
+ * dwmac4 accepts up to 8 channels for TX (and also 8 channels for RX).
+ */
+#define DMA_CHANNEL_NB_MAX 1
+
+#define DMA_BUS_MODE 0x00001000
+#define DMA_SYS_BUS_MODE 0x00001004
+#define DMA_STATUS 0x00001008
+#define DMA_DEBUG_STATUS_0 0x0000100c
+#define DMA_DEBUG_STATUS_1 0x00001010
+#define DMA_DEBUG_STATUS_2 0x00001014
+#define DMA_AXI_BUS_MODE 0x00001028
+
+/* DMA Bus Mode bitmap */
+#define DMA_BUS_MODE_SFT_RESET BIT(0)
+
+/* DMA SYS Bus Mode bitmap */
+#define DMA_BUS_MODE_SPH BIT(24)
+#define DMA_BUS_MODE_PBL BIT(16)
+#define DMA_BUS_MODE_PBL_SHIFT 16
+#define DMA_BUS_MODE_RPBL_SHIFT 16
+#define DMA_BUS_MODE_MB BIT(14)
+#define DMA_BUS_MODE_FB BIT(0)
+
+/* DMA Interrupt top status */
+#define DMA_STATUS_MAC BIT(17)
+#define DMA_STATUS_MTL BIT(16)
+#define DMA_STATUS_CHAN7 BIT(7)
+#define DMA_STATUS_CHAN6 BIT(6)
+#define DMA_STATUS_CHAN5 BIT(5)
+#define DMA_STATUS_CHAN4 BIT(4)
+#define DMA_STATUS_CHAN3 BIT(3)
+#define DMA_STATUS_CHAN2 BIT(2)
+#define DMA_STATUS_CHAN1 BIT(1)
+#define DMA_STATUS_CHAN0 BIT(0)
+
+/* DMA debug status bitmap */
+#define DMA_DEBUG_STATUS_TS_MASK 0xf
+#define DMA_DEBUG_STATUS_RS_MASK 0xf
+
+/* DMA AXI bitmap */
+#define DMA_AXI_EN_LPI BIT(31)
+#define DMA_AXI_LPI_XIT_FRM BIT(30)
+#define DMA_AXI_WR_OSR_LMT GENMASK(27, 24)
+#define DMA_AXI_WR_OSR_LMT_SHIFT 24
+#define DMA_AXI_RD_OSR_LMT GENMASK(19, 16)
+#define DMA_AXI_RD_OSR_LMT_SHIFT 16
+
+#define DMA_AXI_OSR_MAX 0xf
+#define DMA_AXI_MAX_OSR_LIMIT ((DMA_AXI_OSR_MAX << DMA_AXI_WR_OSR_LMT_SHIFT) | \
+ (DMA_AXI_OSR_MAX << DMA_AXI_RD_OSR_LMT_SHIFT))
+
+#define DMA_SYS_BUS_MB BIT(14)
+#define DMA_AXI_1KBBE BIT(13)
+#define DMA_SYS_BUS_AAL BIT(12)
+#define DMA_AXI_BLEN256 BIT(7)
+#define DMA_AXI_BLEN128 BIT(6)
+#define DMA_AXI_BLEN64 BIT(5)
+#define DMA_AXI_BLEN32 BIT(4)
+#define DMA_AXI_BLEN16 BIT(3)
+#define DMA_AXI_BLEN8 BIT(2)
+#define DMA_AXI_BLEN4 BIT(1)
+#define DMA_SYS_BUS_FB BIT(0)
+
+#define DMA_BURST_LEN_DEFAULT (DMA_AXI_BLEN256 | DMA_AXI_BLEN128 | \
+ DMA_AXI_BLEN64 | DMA_AXI_BLEN32 | \
+ DMA_AXI_BLEN16 | DMA_AXI_BLEN8 | \
+ DMA_AXI_BLEN4)
+
+#define DMA_AXI_BURST_LEN_MASK 0x000000FE
+
+/* The following DMA defines are channel oriented */
+#define DMA_CHAN_BASE_ADDR 0x00001100
+#define DMA_CHAN_BASE_OFFSET 0x80
+#define DMA_CHANX_BASE_ADDR(x) (DMA_CHAN_BASE_ADDR + \
+ (x * DMA_CHAN_BASE_OFFSET))
+#define DMA_CHAN_REG_NUMBER 17
+
+#define DMA_CHAN_CONTROL(x) DMA_CHANX_BASE_ADDR(x)
+#define DMA_CHAN_TX_CONTROL(x) (DMA_CHANX_BASE_ADDR(x) + 0x4)
+#define DMA_CHAN_RX_CONTROL(x) (DMA_CHANX_BASE_ADDR(x) + 0x8)
+#define DMA_CHAN_TX_BASE_ADDR(x) (DMA_CHANX_BASE_ADDR(x) + 0x14)
+#define DMA_CHAN_RX_BASE_ADDR(x) (DMA_CHANX_BASE_ADDR(x) + 0x1c)
+#define DMA_CHAN_TX_END_ADDR(x) (DMA_CHANX_BASE_ADDR(x) + 0x20)
+#define DMA_CHAN_RX_END_ADDR(x) (DMA_CHANX_BASE_ADDR(x) + 0x28)
+#define DMA_CHAN_TX_RING_LEN(x) (DMA_CHANX_BASE_ADDR(x) + 0x2c)
+#define DMA_CHAN_RX_RING_LEN(x) (DMA_CHANX_BASE_ADDR(x) + 0x30)
+#define DMA_CHAN_INTR_ENA(x) (DMA_CHANX_BASE_ADDR(x) + 0x34)
+#define DMA_CHAN_RX_WATCHDOG(x) (DMA_CHANX_BASE_ADDR(x) + 0x38)
+#define DMA_CHAN_SLOT_CTRL_STATUS(x) (DMA_CHANX_BASE_ADDR(x) + 0x3c)
+#define DMA_CHAN_CUR_TX_DESC(x) (DMA_CHANX_BASE_ADDR(x) + 0x44)
+#define DMA_CHAN_CUR_RX_DESC(x) (DMA_CHANX_BASE_ADDR(x) + 0x4c)
+#define DMA_CHAN_CUR_TX_BUF_ADDR(x) (DMA_CHANX_BASE_ADDR(x) + 0x54)
+#define DMA_CHAN_CUR_RX_BUF_ADDR(x) (DMA_CHANX_BASE_ADDR(x) + 0x5c)
+#define DMA_CHAN_STATUS(x) (DMA_CHANX_BASE_ADDR(x) + 0x60)
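+/* Illustrative: for channel 0 the macros above resolve to
+ * DMA_CHAN_CONTROL(0) = 0x1100, DMA_CHAN_TX_CONTROL(0) = 0x1104 and
+ * DMA_CHAN_STATUS(0) = 0x1160.
+ */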
+
+/* DMA Control X */
+#define DMA_CONTROL_MSS_MASK GENMASK(13, 0)
+
+/* DMA Tx Channel X Control register defines */
+#define DMA_CONTROL_TSE BIT(12)
+#define DMA_CONTROL_OSP BIT(4)
+#define DMA_CONTROL_ST BIT(0)
+
+/* DMA Rx Channel X Control register defines */
+#define DMA_CONTROL_SR BIT(0)
+
+/* Interrupt status per channel */
+#define DMA_CHAN_STATUS_REB GENMASK(21, 19)
+#define DMA_CHAN_STATUS_REB_SHIFT 19
+#define DMA_CHAN_STATUS_TEB GENMASK(18, 16)
+#define DMA_CHAN_STATUS_TEB_SHIFT 16
+#define DMA_CHAN_STATUS_NIS BIT(15)
+#define DMA_CHAN_STATUS_AIS BIT(14)
+#define DMA_CHAN_STATUS_CDE BIT(13)
+#define DMA_CHAN_STATUS_FBE BIT(12)
+#define DMA_CHAN_STATUS_ERI BIT(11)
+#define DMA_CHAN_STATUS_ETI BIT(10)
+#define DMA_CHAN_STATUS_RWT BIT(9)
+#define DMA_CHAN_STATUS_RPS BIT(8)
+#define DMA_CHAN_STATUS_RBU BIT(7)
+#define DMA_CHAN_STATUS_RI BIT(6)
+#define DMA_CHAN_STATUS_TBU BIT(2)
+#define DMA_CHAN_STATUS_TPS BIT(1)
+#define DMA_CHAN_STATUS_TI BIT(0)
+
+/* Interrupt enable bits per channel */
+#define DMA_CHAN_INTR_ENA_NIE BIT(16)
+#define DMA_CHAN_INTR_ENA_AIE BIT(15)
+#define DMA_CHAN_INTR_ENA_NIE_4_10 BIT(15)
+#define DMA_CHAN_INTR_ENA_AIE_4_10 BIT(14)
+#define DMA_CHAN_INTR_ENA_CDE BIT(13)
+#define DMA_CHAN_INTR_ENA_FBE BIT(12)
+#define DMA_CHAN_INTR_ENA_ERE BIT(11)
+#define DMA_CHAN_INTR_ENA_ETE BIT(10)
+#define DMA_CHAN_INTR_ENA_RWE BIT(9)
+#define DMA_CHAN_INTR_ENA_RSE BIT(8)
+#define DMA_CHAN_INTR_ENA_RBUE BIT(7)
+#define DMA_CHAN_INTR_ENA_RIE BIT(6)
+#define DMA_CHAN_INTR_ENA_TBUE BIT(2)
+#define DMA_CHAN_INTR_ENA_TSE BIT(1)
+#define DMA_CHAN_INTR_ENA_TIE BIT(0)
+
+#define DMA_CHAN_INTR_NORMAL (DMA_CHAN_INTR_ENA_NIE | \
+ DMA_CHAN_INTR_ENA_RIE | \
+ DMA_CHAN_INTR_ENA_TIE)
+
+#define DMA_CHAN_INTR_ABNORMAL (DMA_CHAN_INTR_ENA_AIE | \
+ DMA_CHAN_INTR_ENA_FBE)
+/* DMA default interrupt mask for 4.00 */
+#define DMA_CHAN_INTR_DEFAULT_MASK (DMA_CHAN_INTR_NORMAL | \
+ DMA_CHAN_INTR_ABNORMAL)
+
+#define DMA_CHAN_INTR_NORMAL_4_10 (DMA_CHAN_INTR_ENA_NIE_4_10 | \
+ DMA_CHAN_INTR_ENA_RIE | \
+ DMA_CHAN_INTR_ENA_TIE)
+
+#define DMA_CHAN_INTR_ABNORMAL_4_10 (DMA_CHAN_INTR_ENA_AIE_4_10 | \
+ DMA_CHAN_INTR_ENA_FBE)
+/* DMA default interrupt mask for 4.10a */
+#define DMA_CHAN_INTR_DEFAULT_MASK_4_10 (DMA_CHAN_INTR_NORMAL_4_10 | \
+ DMA_CHAN_INTR_ABNORMAL_4_10)
+
+/* channel 0 specific fields */
+#define DMA_CHAN0_DBG_STAT_TPS GENMASK(15, 12)
+#define DMA_CHAN0_DBG_STAT_TPS_SHIFT 12
+#define DMA_CHAN0_DBG_STAT_RPS GENMASK(11, 8)
+#define DMA_CHAN0_DBG_STAT_RPS_SHIFT 8
+
+int dwmac4_dma_reset(void __iomem *ioaddr);
+void dwmac4_enable_dma_transmission(void __iomem *ioaddr, u32 tail_ptr);
+void dwmac4_enable_dma_irq(void __iomem *ioaddr);
+void dwmac410_enable_dma_irq(void __iomem *ioaddr);
+void dwmac4_disable_dma_irq(void __iomem *ioaddr);
+void dwmac4_dma_start_tx(void __iomem *ioaddr);
+void dwmac4_dma_stop_tx(void __iomem *ioaddr);
+void dwmac4_dma_start_rx(void __iomem *ioaddr);
+void dwmac4_dma_stop_rx(void __iomem *ioaddr);
+int dwmac4_dma_interrupt(void __iomem *ioaddr,
+ struct stmmac_extra_stats *x);
+void dwmac4_set_rx_ring_len(void __iomem *ioaddr, u32 len);
+void dwmac4_set_tx_ring_len(void __iomem *ioaddr, u32 len);
+void dwmac4_set_rx_tail_ptr(void __iomem *ioaddr, u32 tail_ptr, u32 chan);
+void dwmac4_set_tx_tail_ptr(void __iomem *ioaddr, u32 tail_ptr, u32 chan);
+
+#endif /* __DWMAC4_DMA_H__ */
diff --git a/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
new file mode 100644
index 000000000000..c7326d5b2f43
--- /dev/null
+++ b/drivers/net/ethernet/stmicro/stmmac/dwmac4_lib.c
@@ -0,0 +1,225 @@
+/*
+ * Copyright (C) 2007-2015 STMicroelectronics Ltd
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * Author: Alexandre Torgue <alexandre.torgue@st.com>
+ */
+
+#include <linux/io.h>
+#include <linux/delay.h>
+#include "common.h"
+#include "dwmac4_dma.h"
+#include "dwmac4.h"
+
+int dwmac4_dma_reset(void __iomem *ioaddr)
+{
+ u32 value = readl(ioaddr + DMA_BUS_MODE);
+ int limit;
+
+ /* DMA SW reset */
+ value |= DMA_BUS_MODE_SFT_RESET;
+ writel(value, ioaddr + DMA_BUS_MODE);
+ limit = 10;
+ while (limit--) {
+ if (!(readl(ioaddr + DMA_BUS_MODE) & DMA_BUS_MODE_SFT_RESET))
+ break;
+ mdelay(10);
+ }
+
+ if (limit < 0)
+ return -EBUSY;
+
+ return 0;
+}
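+/* Note: the poll above retries up to 10 times with a 10 ms delay, i.e.
+ * roughly 100 ms before giving up with -EBUSY.
+ */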
+
+void dwmac4_set_rx_tail_ptr(void __iomem *ioaddr, u32 tail_ptr, u32 chan)
+{
+ writel(tail_ptr, ioaddr + DMA_CHAN_RX_END_ADDR(0));
+}
+
+void dwmac4_set_tx_tail_ptr(void __iomem *ioaddr, u32 tail_ptr, u32 chan)
+{
+ writel(tail_ptr, ioaddr + DMA_CHAN_TX_END_ADDR(0));
+}
+
+void dwmac4_dma_start_tx(void __iomem *ioaddr)
+{
+ u32 value = readl(ioaddr + DMA_CHAN_TX_CONTROL(STMMAC_CHAN0));
+
+ value |= DMA_CONTROL_ST;
+ writel(value, ioaddr + DMA_CHAN_TX_CONTROL(STMMAC_CHAN0));
+
+ value = readl(ioaddr + GMAC_CONFIG);
+ value |= GMAC_CONFIG_TE;
+ writel(value, ioaddr + GMAC_CONFIG);
+}
+
+void dwmac4_dma_stop_tx(void __iomem *ioaddr)
+{
+ u32 value = readl(ioaddr + DMA_CHAN_TX_CONTROL(STMMAC_CHAN0));
+
+ value &= ~DMA_CONTROL_ST;
+ writel(value, ioaddr + DMA_CHAN_TX_CONTROL(STMMAC_CHAN0));
+
+ value = readl(ioaddr + GMAC_CONFIG);
+ value &= ~GMAC_CONFIG_TE;
+ writel(value, ioaddr + GMAC_CONFIG);
+}
+
+void dwmac4_dma_start_rx(void __iomem *ioaddr)
+{
+ u32 value = readl(ioaddr + DMA_CHAN_RX_CONTROL(STMMAC_CHAN0));
+
+ value |= DMA_CONTROL_SR;
+
+ writel(value, ioaddr + DMA_CHAN_RX_CONTROL(STMMAC_CHAN0));
+
+ value = readl(ioaddr + GMAC_CONFIG);
+ value |= GMAC_CONFIG_RE;
+ writel(value, ioaddr + GMAC_CONFIG);
+}
+
+void dwmac4_dma_stop_rx(void __iomem *ioaddr)
+{
+ u32 value = readl(ioaddr + DMA_CHAN_RX_CONTROL(STMMAC_CHAN0));
+
+ value &= ~DMA_CONTROL_SR;
+ writel(value, ioaddr + DMA_CHAN_RX_CONTROL(STMMAC_CHAN0));
+
+ value = readl(ioaddr + GMAC_CONFIG);
+ value &= ~GMAC_CONFIG_RE;
+ writel(value, ioaddr + GMAC_CONFIG);
+}
+
+void dwmac4_set_tx_ring_len(void __iomem *ioaddr, u32 len)
+{
+ writel(len, ioaddr + DMA_CHAN_TX_RING_LEN(STMMAC_CHAN0));
+}
+
+void dwmac4_set_rx_ring_len(void __iomem *ioaddr, u32 len)
+{
+ writel(len, ioaddr + DMA_CHAN_RX_RING_LEN(STMMAC_CHAN0));
+}
+
+void dwmac4_enable_dma_irq(void __iomem *ioaddr)
+{
+ writel(DMA_CHAN_INTR_DEFAULT_MASK, ioaddr +
+ DMA_CHAN_INTR_ENA(STMMAC_CHAN0));
+}
+
+void dwmac410_enable_dma_irq(void __iomem *ioaddr)
+{
+ writel(DMA_CHAN_INTR_DEFAULT_MASK_4_10,
+ ioaddr + DMA_CHAN_INTR_ENA(STMMAC_CHAN0));
+}
+
+void dwmac4_disable_dma_irq(void __iomem *ioaddr)
+{
+ writel(0, ioaddr + DMA_CHAN_INTR_ENA(STMMAC_CHAN0));
+}
+
+int dwmac4_dma_interrupt(void __iomem *ioaddr,
+ struct stmmac_extra_stats *x)
+{
+ int ret = 0;
+
+ u32 intr_status = readl(ioaddr + DMA_CHAN_STATUS(0));
+
+ /* ABNORMAL interrupts */
+ if (unlikely(intr_status & DMA_CHAN_STATUS_AIS)) {
+ if (unlikely(intr_status & DMA_CHAN_STATUS_RBU))
+ x->rx_buf_unav_irq++;
+ if (unlikely(intr_status & DMA_CHAN_STATUS_RPS))
+ x->rx_process_stopped_irq++;
+ if (unlikely(intr_status & DMA_CHAN_STATUS_RWT))
+ x->rx_watchdog_irq++;
+ if (unlikely(intr_status & DMA_CHAN_STATUS_ETI))
+ x->tx_early_irq++;
+ if (unlikely(intr_status & DMA_CHAN_STATUS_TPS)) {
+ x->tx_process_stopped_irq++;
+ ret = tx_hard_error;
+ }
+ if (unlikely(intr_status & DMA_CHAN_STATUS_FBE)) {
+ x->fatal_bus_error_irq++;
+ ret = tx_hard_error;
+ }
+ }
+ /* TX/RX NORMAL interrupts */
+ if (likely(intr_status & DMA_CHAN_STATUS_NIS)) {
+ x->normal_irq_n++;
+ if (likely(intr_status & DMA_CHAN_STATUS_RI)) {
+ u32 value;
+
+ value = readl(ioaddr + DMA_CHAN_INTR_ENA(STMMAC_CHAN0));
+ /* to schedule NAPI on real RIE event. */
+ if (likely(value & DMA_CHAN_INTR_ENA_RIE)) {
+ x->rx_normal_irq_n++;
+ ret |= handle_rx;
+ }
+ }
+ if (likely(intr_status & DMA_CHAN_STATUS_TI)) {
+ x->tx_normal_irq_n++;
+ ret |= handle_tx;
+ }
+ if (unlikely(intr_status & DMA_CHAN_STATUS_ERI))
+ x->rx_early_irq++;
+ }
+
+ /* Clear the interrupt by writing a logic 1 to the chanX interrupt
+ * status [21-0] except the reserved bits [5-3]
+ */
+ writel((intr_status & 0x3fffc7),
+ ioaddr + DMA_CHAN_STATUS(STMMAC_CHAN0));
+
+ return ret;
+}
+
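As a quick check of the literal used above: status bits [21:0] are 0x3fffff, the reserved bits [5:3] are 0x38, and 0x3fffff & ~0x38 == 0x3fffc7, so the write-back clears every write-1-to-clear status bit while leaving the reserved field alone. The same constant could be spelled self-documentingly, for example (hypothetical name, not part of the patch):

	/* Equivalent spelling of the clear mask used in dwmac4_dma_interrupt(). */
	#define DMA_CHAN_STATUS_CLEAR_MASK	(GENMASK(21, 0) & ~GENMASK(5, 3))	/* 0x3fffc7 */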
+void stmmac_dwmac4_set_mac_addr(void __iomem *ioaddr, u8 addr[6],
+ unsigned int high, unsigned int low)
+{
+ unsigned long data;
+
+ data = (addr[5] << 8) | addr[4];
+ /* For the MAC Addr registers we have to set the Address Enable (AE)
+ * bit; it has no effect on the High Reg 0, where bit 31 (MO)
+ * is RO.
+ */
+ data |= (STMMAC_CHAN0 << GMAC_HI_DCS_SHIFT);
+ writel(data | GMAC_HI_REG_AE, ioaddr + high);
+ data = (addr[3] << 24) | (addr[2] << 16) | (addr[1] << 8) | addr[0];
+ writel(data, ioaddr + low);
+}
+
+/* Enable/disable the MAC RX/TX */
+void stmmac_dwmac4_set_mac(void __iomem *ioaddr, bool enable)
+{
+ u32 value = readl(ioaddr + GMAC_CONFIG);
+
+ if (enable)
+ value |= GMAC_CONFIG_RE | GMAC_CONFIG_TE;
+ else
+ value &= ~(GMAC_CONFIG_TE | GMAC_CONFIG_RE);
+
+ writel(value, ioaddr + GMAC_CONFIG);
+}
+
+void stmmac_dwmac4_get_mac_addr(void __iomem *ioaddr, unsigned char *addr,
+ unsigned int high, unsigned int low)
+{
+ unsigned int hi_addr, lo_addr;
+
+ /* Read the MAC address from the hardware */
+ hi_addr = readl(ioaddr + high);
+ lo_addr = readl(ioaddr + low);
+
+ /* Extract the MAC address from the high and low words */
+ addr[0] = lo_addr & 0xff;
+ addr[1] = (lo_addr >> 8) & 0xff;
+ addr[2] = (lo_addr >> 16) & 0xff;
+ addr[3] = (lo_addr >> 24) & 0xff;
+ addr[4] = hi_addr & 0xff;
+ addr[5] = (hi_addr >> 8) & 0xff;
+}
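To make the byte packing in these helpers concrete, take a made-up address 00:11:22:33:44:55: the low register then holds 0x33221100 (addr[3]..addr[0]) and bits [15:0] of the high register hold 0x5544 (addr[5], addr[4]); on the write side the AE bit and the DCS channel field are OR-ed into the upper bits of the high register, as shown in stmmac_dwmac4_set_mac_addr() above.

	addr[] = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 }	/* hypothetical */
	low register           = 0x33221100
	high register & 0xffff = 0x5544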
diff --git a/drivers/net/ethernet/stmicro/stmmac/enh_desc.c b/drivers/net/ethernet/stmicro/stmmac/enh_desc.c
index cfb018c7c5eb..38f19c99cf59 100644
--- a/drivers/net/ethernet/stmicro/stmmac/enh_desc.c
+++ b/drivers/net/ethernet/stmicro/stmmac/enh_desc.c
@@ -411,6 +411,26 @@ static int enh_desc_get_rx_timestamp_status(void *desc, u32 ats)
}
}
+static void enh_desc_display_ring(void *head, unsigned int size, bool rx)
+{
+ struct dma_extended_desc *ep = (struct dma_extended_desc *)head;
+ int i;
+
+ pr_info("Extended %s descriptor ring:\n", rx ? "RX" : "TX");
+
+ for (i = 0; i < size; i++) {
+ u64 x;
+
+ x = *(u64 *)ep;
+ pr_info("%d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
+ i, (unsigned int)virt_to_phys(ep),
+ (unsigned int)x, (unsigned int)(x >> 32),
+ ep->basic.des2, ep->basic.des3);
+ ep++;
+ }
+ pr_info("\n");
+}
+
const struct stmmac_desc_ops enh_desc_ops = {
.tx_status = enh_desc_get_tx_status,
.rx_status = enh_desc_get_rx_status,
@@ -430,4 +450,5 @@ const struct stmmac_desc_ops enh_desc_ops = {
.get_tx_timestamp_status = enh_desc_get_tx_timestamp_status,
.get_timestamp = enh_desc_get_timestamp,
.get_rx_timestamp_status = enh_desc_get_rx_timestamp_status,
+ .display_ring = enh_desc_display_ring,
};
diff --git a/drivers/net/ethernet/stmicro/stmmac/mmc.h b/drivers/net/ethernet/stmicro/stmmac/mmc.h
index 192c2491330b..38a1a5603293 100644
--- a/drivers/net/ethernet/stmicro/stmmac/mmc.h
+++ b/drivers/net/ethernet/stmicro/stmmac/mmc.h
@@ -35,6 +35,10 @@
* current value.*/
#define MMC_CNTRL_PRESET 0x10
#define MMC_CNTRL_FULL_HALF_PRESET 0x20
+
+#define MMC_GMAC4_OFFSET 0x700
+#define MMC_GMAC3_X_OFFSET 0x100
+
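These two offsets locate the MMC block relative to the MAC register base, which is why the per-counter offsets in mmc_core.c further down shrink from absolute 0x1xx values to values relative to that base. A minimal sketch of how the base is derived (it mirrors the stmmac_mmc_setup() change later in this patch; ioaddr and the gmac4 flag stand in for the driver's own fields):

	/* Illustration only: compute the MMC register base. */
	void __iomem *mmcaddr = ioaddr +
		(gmac4 ? MMC_GMAC4_OFFSET : MMC_GMAC3_X_OFFSET);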
struct stmmac_counters {
unsigned int mmc_tx_octetcount_gb;
unsigned int mmc_tx_framecount_gb;
diff --git a/drivers/net/ethernet/stmicro/stmmac/mmc_core.c b/drivers/net/ethernet/stmicro/stmmac/mmc_core.c
index 3f20bb1fe570..ce9aa792857b 100644
--- a/drivers/net/ethernet/stmicro/stmmac/mmc_core.c
+++ b/drivers/net/ethernet/stmicro/stmmac/mmc_core.c
@@ -28,12 +28,12 @@
/* MAC Management Counters register offset */
-#define MMC_CNTRL 0x00000100 /* MMC Control */
-#define MMC_RX_INTR 0x00000104 /* MMC RX Interrupt */
-#define MMC_TX_INTR 0x00000108 /* MMC TX Interrupt */
-#define MMC_RX_INTR_MASK 0x0000010c /* MMC Interrupt Mask */
-#define MMC_TX_INTR_MASK 0x00000110 /* MMC Interrupt Mask */
-#define MMC_DEFAULT_MASK 0xffffffff
+#define MMC_CNTRL 0x00 /* MMC Control */
+#define MMC_RX_INTR 0x04 /* MMC RX Interrupt */
+#define MMC_TX_INTR 0x08 /* MMC TX Interrupt */
+#define MMC_RX_INTR_MASK 0x0c /* MMC Interrupt Mask */
+#define MMC_TX_INTR_MASK 0x10 /* MMC Interrupt Mask */
+#define MMC_DEFAULT_MASK 0xffffffff
/* MMC TX counter registers */
@@ -41,115 +41,115 @@
* _GB register stands for good and bad frames
* _G is for good only.
*/
-#define MMC_TX_OCTETCOUNT_GB 0x00000114
-#define MMC_TX_FRAMECOUNT_GB 0x00000118
-#define MMC_TX_BROADCASTFRAME_G 0x0000011c
-#define MMC_TX_MULTICASTFRAME_G 0x00000120
-#define MMC_TX_64_OCTETS_GB 0x00000124
-#define MMC_TX_65_TO_127_OCTETS_GB 0x00000128
-#define MMC_TX_128_TO_255_OCTETS_GB 0x0000012c
-#define MMC_TX_256_TO_511_OCTETS_GB 0x00000130
-#define MMC_TX_512_TO_1023_OCTETS_GB 0x00000134
-#define MMC_TX_1024_TO_MAX_OCTETS_GB 0x00000138
-#define MMC_TX_UNICAST_GB 0x0000013c
-#define MMC_TX_MULTICAST_GB 0x00000140
-#define MMC_TX_BROADCAST_GB 0x00000144
-#define MMC_TX_UNDERFLOW_ERROR 0x00000148
-#define MMC_TX_SINGLECOL_G 0x0000014c
-#define MMC_TX_MULTICOL_G 0x00000150
-#define MMC_TX_DEFERRED 0x00000154
-#define MMC_TX_LATECOL 0x00000158
-#define MMC_TX_EXESSCOL 0x0000015c
-#define MMC_TX_CARRIER_ERROR 0x00000160
-#define MMC_TX_OCTETCOUNT_G 0x00000164
-#define MMC_TX_FRAMECOUNT_G 0x00000168
-#define MMC_TX_EXCESSDEF 0x0000016c
-#define MMC_TX_PAUSE_FRAME 0x00000170
-#define MMC_TX_VLAN_FRAME_G 0x00000174
+#define MMC_TX_OCTETCOUNT_GB 0x14
+#define MMC_TX_FRAMECOUNT_GB 0x18
+#define MMC_TX_BROADCASTFRAME_G 0x1c
+#define MMC_TX_MULTICASTFRAME_G 0x20
+#define MMC_TX_64_OCTETS_GB 0x24
+#define MMC_TX_65_TO_127_OCTETS_GB 0x28
+#define MMC_TX_128_TO_255_OCTETS_GB 0x2c
+#define MMC_TX_256_TO_511_OCTETS_GB 0x30
+#define MMC_TX_512_TO_1023_OCTETS_GB 0x34
+#define MMC_TX_1024_TO_MAX_OCTETS_GB 0x38
+#define MMC_TX_UNICAST_GB 0x3c
+#define MMC_TX_MULTICAST_GB 0x40
+#define MMC_TX_BROADCAST_GB 0x44
+#define MMC_TX_UNDERFLOW_ERROR 0x48
+#define MMC_TX_SINGLECOL_G 0x4c
+#define MMC_TX_MULTICOL_G 0x50
+#define MMC_TX_DEFERRED 0x54
+#define MMC_TX_LATECOL 0x58
+#define MMC_TX_EXESSCOL 0x5c
+#define MMC_TX_CARRIER_ERROR 0x60
+#define MMC_TX_OCTETCOUNT_G 0x64
+#define MMC_TX_FRAMECOUNT_G 0x68
+#define MMC_TX_EXCESSDEF 0x6c
+#define MMC_TX_PAUSE_FRAME 0x70
+#define MMC_TX_VLAN_FRAME_G 0x74
/* MMC RX counter registers */
-#define MMC_RX_FRAMECOUNT_GB 0x00000180
-#define MMC_RX_OCTETCOUNT_GB 0x00000184
-#define MMC_RX_OCTETCOUNT_G 0x00000188
-#define MMC_RX_BROADCASTFRAME_G 0x0000018c
-#define MMC_RX_MULTICASTFRAME_G 0x00000190
-#define MMC_RX_CRC_ERROR 0x00000194
-#define MMC_RX_ALIGN_ERROR 0x00000198
-#define MMC_RX_RUN_ERROR 0x0000019C
-#define MMC_RX_JABBER_ERROR 0x000001A0
-#define MMC_RX_UNDERSIZE_G 0x000001A4
-#define MMC_RX_OVERSIZE_G 0x000001A8
-#define MMC_RX_64_OCTETS_GB 0x000001AC
-#define MMC_RX_65_TO_127_OCTETS_GB 0x000001b0
-#define MMC_RX_128_TO_255_OCTETS_GB 0x000001b4
-#define MMC_RX_256_TO_511_OCTETS_GB 0x000001b8
-#define MMC_RX_512_TO_1023_OCTETS_GB 0x000001bc
-#define MMC_RX_1024_TO_MAX_OCTETS_GB 0x000001c0
-#define MMC_RX_UNICAST_G 0x000001c4
-#define MMC_RX_LENGTH_ERROR 0x000001c8
-#define MMC_RX_AUTOFRANGETYPE 0x000001cc
-#define MMC_RX_PAUSE_FRAMES 0x000001d0
-#define MMC_RX_FIFO_OVERFLOW 0x000001d4
-#define MMC_RX_VLAN_FRAMES_GB 0x000001d8
-#define MMC_RX_WATCHDOG_ERROR 0x000001dc
+#define MMC_RX_FRAMECOUNT_GB 0x80
+#define MMC_RX_OCTETCOUNT_GB 0x84
+#define MMC_RX_OCTETCOUNT_G 0x88
+#define MMC_RX_BROADCASTFRAME_G 0x8c
+#define MMC_RX_MULTICASTFRAME_G 0x90
+#define MMC_RX_CRC_ERROR 0x94
+#define MMC_RX_ALIGN_ERROR 0x98
+#define MMC_RX_RUN_ERROR 0x9C
+#define MMC_RX_JABBER_ERROR 0xA0
+#define MMC_RX_UNDERSIZE_G 0xA4
+#define MMC_RX_OVERSIZE_G 0xA8
+#define MMC_RX_64_OCTETS_GB 0xAC
+#define MMC_RX_65_TO_127_OCTETS_GB 0xb0
+#define MMC_RX_128_TO_255_OCTETS_GB 0xb4
+#define MMC_RX_256_TO_511_OCTETS_GB 0xb8
+#define MMC_RX_512_TO_1023_OCTETS_GB 0xbc
+#define MMC_RX_1024_TO_MAX_OCTETS_GB 0xc0
+#define MMC_RX_UNICAST_G 0xc4
+#define MMC_RX_LENGTH_ERROR 0xc8
+#define MMC_RX_AUTOFRANGETYPE 0xcc
+#define MMC_RX_PAUSE_FRAMES 0xd0
+#define MMC_RX_FIFO_OVERFLOW 0xd4
+#define MMC_RX_VLAN_FRAMES_GB 0xd8
+#define MMC_RX_WATCHDOG_ERROR 0xdc
/* IPC*/
-#define MMC_RX_IPC_INTR_MASK 0x00000200
-#define MMC_RX_IPC_INTR 0x00000208
+#define MMC_RX_IPC_INTR_MASK 0x100
+#define MMC_RX_IPC_INTR 0x108
/* IPv4*/
-#define MMC_RX_IPV4_GD 0x00000210
-#define MMC_RX_IPV4_HDERR 0x00000214
-#define MMC_RX_IPV4_NOPAY 0x00000218
-#define MMC_RX_IPV4_FRAG 0x0000021C
-#define MMC_RX_IPV4_UDSBL 0x00000220
+#define MMC_RX_IPV4_GD 0x110
+#define MMC_RX_IPV4_HDERR 0x114
+#define MMC_RX_IPV4_NOPAY 0x118
+#define MMC_RX_IPV4_FRAG 0x11C
+#define MMC_RX_IPV4_UDSBL 0x120
-#define MMC_RX_IPV4_GD_OCTETS 0x00000250
-#define MMC_RX_IPV4_HDERR_OCTETS 0x00000254
-#define MMC_RX_IPV4_NOPAY_OCTETS 0x00000258
-#define MMC_RX_IPV4_FRAG_OCTETS 0x0000025c
-#define MMC_RX_IPV4_UDSBL_OCTETS 0x00000260
+#define MMC_RX_IPV4_GD_OCTETS 0x150
+#define MMC_RX_IPV4_HDERR_OCTETS 0x154
+#define MMC_RX_IPV4_NOPAY_OCTETS 0x158
+#define MMC_RX_IPV4_FRAG_OCTETS 0x15c
+#define MMC_RX_IPV4_UDSBL_OCTETS 0x160
/* IPV6*/
-#define MMC_RX_IPV6_GD_OCTETS 0x00000264
-#define MMC_RX_IPV6_HDERR_OCTETS 0x00000268
-#define MMC_RX_IPV6_NOPAY_OCTETS 0x0000026c
+#define MMC_RX_IPV6_GD_OCTETS 0x164
+#define MMC_RX_IPV6_HDERR_OCTETS 0x168
+#define MMC_RX_IPV6_NOPAY_OCTETS 0x16c
-#define MMC_RX_IPV6_GD 0x00000224
-#define MMC_RX_IPV6_HDERR 0x00000228
-#define MMC_RX_IPV6_NOPAY 0x0000022c
+#define MMC_RX_IPV6_GD 0x124
+#define MMC_RX_IPV6_HDERR 0x128
+#define MMC_RX_IPV6_NOPAY 0x12c
/* Protocols*/
-#define MMC_RX_UDP_GD 0x00000230
-#define MMC_RX_UDP_ERR 0x00000234
-#define MMC_RX_TCP_GD 0x00000238
-#define MMC_RX_TCP_ERR 0x0000023c
-#define MMC_RX_ICMP_GD 0x00000240
-#define MMC_RX_ICMP_ERR 0x00000244
+#define MMC_RX_UDP_GD 0x130
+#define MMC_RX_UDP_ERR 0x134
+#define MMC_RX_TCP_GD 0x138
+#define MMC_RX_TCP_ERR 0x13c
+#define MMC_RX_ICMP_GD 0x140
+#define MMC_RX_ICMP_ERR 0x144
-#define MMC_RX_UDP_GD_OCTETS 0x00000270
-#define MMC_RX_UDP_ERR_OCTETS 0x00000274
-#define MMC_RX_TCP_GD_OCTETS 0x00000278
-#define MMC_RX_TCP_ERR_OCTETS 0x0000027c
-#define MMC_RX_ICMP_GD_OCTETS 0x00000280
-#define MMC_RX_ICMP_ERR_OCTETS 0x00000284
+#define MMC_RX_UDP_GD_OCTETS 0x170
+#define MMC_RX_UDP_ERR_OCTETS 0x174
+#define MMC_RX_TCP_GD_OCTETS 0x178
+#define MMC_RX_TCP_ERR_OCTETS 0x17c
+#define MMC_RX_ICMP_GD_OCTETS 0x180
+#define MMC_RX_ICMP_ERR_OCTETS 0x184
-void dwmac_mmc_ctrl(void __iomem *ioaddr, unsigned int mode)
+void dwmac_mmc_ctrl(void __iomem *mmcaddr, unsigned int mode)
{
- u32 value = readl(ioaddr + MMC_CNTRL);
+ u32 value = readl(mmcaddr + MMC_CNTRL);
value |= (mode & 0x3F);
- writel(value, ioaddr + MMC_CNTRL);
+ writel(value, mmcaddr + MMC_CNTRL);
pr_debug("stmmac: MMC ctrl register (offset 0x%x): 0x%08x\n",
MMC_CNTRL, value);
}
/* To mask all interrupts.*/
-void dwmac_mmc_intr_all_mask(void __iomem *ioaddr)
+void dwmac_mmc_intr_all_mask(void __iomem *mmcaddr)
{
- writel(MMC_DEFAULT_MASK, ioaddr + MMC_RX_INTR_MASK);
- writel(MMC_DEFAULT_MASK, ioaddr + MMC_TX_INTR_MASK);
- writel(MMC_DEFAULT_MASK, ioaddr + MMC_RX_IPC_INTR_MASK);
+ writel(MMC_DEFAULT_MASK, mmcaddr + MMC_RX_INTR_MASK);
+ writel(MMC_DEFAULT_MASK, mmcaddr + MMC_TX_INTR_MASK);
+ writel(MMC_DEFAULT_MASK, mmcaddr + MMC_RX_IPC_INTR_MASK);
}
/* This reads the MAC core counters (if actually supported).
@@ -157,111 +157,116 @@ void dwmac_mmc_intr_all_mask(void __iomem *ioaddr)
* counter after a read. So all the fields of the mmc struct
* have to be incremented.
*/
-void dwmac_mmc_read(void __iomem *ioaddr, struct stmmac_counters *mmc)
+void dwmac_mmc_read(void __iomem *mmcaddr, struct stmmac_counters *mmc)
{
- mmc->mmc_tx_octetcount_gb += readl(ioaddr + MMC_TX_OCTETCOUNT_GB);
- mmc->mmc_tx_framecount_gb += readl(ioaddr + MMC_TX_FRAMECOUNT_GB);
- mmc->mmc_tx_broadcastframe_g += readl(ioaddr + MMC_TX_BROADCASTFRAME_G);
- mmc->mmc_tx_multicastframe_g += readl(ioaddr + MMC_TX_MULTICASTFRAME_G);
- mmc->mmc_tx_64_octets_gb += readl(ioaddr + MMC_TX_64_OCTETS_GB);
+ mmc->mmc_tx_octetcount_gb += readl(mmcaddr + MMC_TX_OCTETCOUNT_GB);
+ mmc->mmc_tx_framecount_gb += readl(mmcaddr + MMC_TX_FRAMECOUNT_GB);
+ mmc->mmc_tx_broadcastframe_g += readl(mmcaddr +
+ MMC_TX_BROADCASTFRAME_G);
+ mmc->mmc_tx_multicastframe_g += readl(mmcaddr +
+ MMC_TX_MULTICASTFRAME_G);
+ mmc->mmc_tx_64_octets_gb += readl(mmcaddr + MMC_TX_64_OCTETS_GB);
mmc->mmc_tx_65_to_127_octets_gb +=
- readl(ioaddr + MMC_TX_65_TO_127_OCTETS_GB);
+ readl(mmcaddr + MMC_TX_65_TO_127_OCTETS_GB);
mmc->mmc_tx_128_to_255_octets_gb +=
- readl(ioaddr + MMC_TX_128_TO_255_OCTETS_GB);
+ readl(mmcaddr + MMC_TX_128_TO_255_OCTETS_GB);
mmc->mmc_tx_256_to_511_octets_gb +=
- readl(ioaddr + MMC_TX_256_TO_511_OCTETS_GB);
+ readl(mmcaddr + MMC_TX_256_TO_511_OCTETS_GB);
mmc->mmc_tx_512_to_1023_octets_gb +=
- readl(ioaddr + MMC_TX_512_TO_1023_OCTETS_GB);
+ readl(mmcaddr + MMC_TX_512_TO_1023_OCTETS_GB);
mmc->mmc_tx_1024_to_max_octets_gb +=
- readl(ioaddr + MMC_TX_1024_TO_MAX_OCTETS_GB);
- mmc->mmc_tx_unicast_gb += readl(ioaddr + MMC_TX_UNICAST_GB);
- mmc->mmc_tx_multicast_gb += readl(ioaddr + MMC_TX_MULTICAST_GB);
- mmc->mmc_tx_broadcast_gb += readl(ioaddr + MMC_TX_BROADCAST_GB);
- mmc->mmc_tx_underflow_error += readl(ioaddr + MMC_TX_UNDERFLOW_ERROR);
- mmc->mmc_tx_singlecol_g += readl(ioaddr + MMC_TX_SINGLECOL_G);
- mmc->mmc_tx_multicol_g += readl(ioaddr + MMC_TX_MULTICOL_G);
- mmc->mmc_tx_deferred += readl(ioaddr + MMC_TX_DEFERRED);
- mmc->mmc_tx_latecol += readl(ioaddr + MMC_TX_LATECOL);
- mmc->mmc_tx_exesscol += readl(ioaddr + MMC_TX_EXESSCOL);
- mmc->mmc_tx_carrier_error += readl(ioaddr + MMC_TX_CARRIER_ERROR);
- mmc->mmc_tx_octetcount_g += readl(ioaddr + MMC_TX_OCTETCOUNT_G);
- mmc->mmc_tx_framecount_g += readl(ioaddr + MMC_TX_FRAMECOUNT_G);
- mmc->mmc_tx_excessdef += readl(ioaddr + MMC_TX_EXCESSDEF);
- mmc->mmc_tx_pause_frame += readl(ioaddr + MMC_TX_PAUSE_FRAME);
- mmc->mmc_tx_vlan_frame_g += readl(ioaddr + MMC_TX_VLAN_FRAME_G);
+ readl(mmcaddr + MMC_TX_1024_TO_MAX_OCTETS_GB);
+ mmc->mmc_tx_unicast_gb += readl(mmcaddr + MMC_TX_UNICAST_GB);
+ mmc->mmc_tx_multicast_gb += readl(mmcaddr + MMC_TX_MULTICAST_GB);
+ mmc->mmc_tx_broadcast_gb += readl(mmcaddr + MMC_TX_BROADCAST_GB);
+ mmc->mmc_tx_underflow_error += readl(mmcaddr + MMC_TX_UNDERFLOW_ERROR);
+ mmc->mmc_tx_singlecol_g += readl(mmcaddr + MMC_TX_SINGLECOL_G);
+ mmc->mmc_tx_multicol_g += readl(mmcaddr + MMC_TX_MULTICOL_G);
+ mmc->mmc_tx_deferred += readl(mmcaddr + MMC_TX_DEFERRED);
+ mmc->mmc_tx_latecol += readl(mmcaddr + MMC_TX_LATECOL);
+ mmc->mmc_tx_exesscol += readl(mmcaddr + MMC_TX_EXESSCOL);
+ mmc->mmc_tx_carrier_error += readl(mmcaddr + MMC_TX_CARRIER_ERROR);
+ mmc->mmc_tx_octetcount_g += readl(mmcaddr + MMC_TX_OCTETCOUNT_G);
+ mmc->mmc_tx_framecount_g += readl(mmcaddr + MMC_TX_FRAMECOUNT_G);
+ mmc->mmc_tx_excessdef += readl(mmcaddr + MMC_TX_EXCESSDEF);
+ mmc->mmc_tx_pause_frame += readl(mmcaddr + MMC_TX_PAUSE_FRAME);
+ mmc->mmc_tx_vlan_frame_g += readl(mmcaddr + MMC_TX_VLAN_FRAME_G);
/* MMC RX counter registers */
- mmc->mmc_rx_framecount_gb += readl(ioaddr + MMC_RX_FRAMECOUNT_GB);
- mmc->mmc_rx_octetcount_gb += readl(ioaddr + MMC_RX_OCTETCOUNT_GB);
- mmc->mmc_rx_octetcount_g += readl(ioaddr + MMC_RX_OCTETCOUNT_G);
- mmc->mmc_rx_broadcastframe_g += readl(ioaddr + MMC_RX_BROADCASTFRAME_G);
- mmc->mmc_rx_multicastframe_g += readl(ioaddr + MMC_RX_MULTICASTFRAME_G);
- mmc->mmc_rx_crc_error += readl(ioaddr + MMC_RX_CRC_ERROR);
- mmc->mmc_rx_align_error += readl(ioaddr + MMC_RX_ALIGN_ERROR);
- mmc->mmc_rx_run_error += readl(ioaddr + MMC_RX_RUN_ERROR);
- mmc->mmc_rx_jabber_error += readl(ioaddr + MMC_RX_JABBER_ERROR);
- mmc->mmc_rx_undersize_g += readl(ioaddr + MMC_RX_UNDERSIZE_G);
- mmc->mmc_rx_oversize_g += readl(ioaddr + MMC_RX_OVERSIZE_G);
- mmc->mmc_rx_64_octets_gb += readl(ioaddr + MMC_RX_64_OCTETS_GB);
+ mmc->mmc_rx_framecount_gb += readl(mmcaddr + MMC_RX_FRAMECOUNT_GB);
+ mmc->mmc_rx_octetcount_gb += readl(mmcaddr + MMC_RX_OCTETCOUNT_GB);
+ mmc->mmc_rx_octetcount_g += readl(mmcaddr + MMC_RX_OCTETCOUNT_G);
+ mmc->mmc_rx_broadcastframe_g += readl(mmcaddr +
+ MMC_RX_BROADCASTFRAME_G);
+ mmc->mmc_rx_multicastframe_g += readl(mmcaddr +
+ MMC_RX_MULTICASTFRAME_G);
+ mmc->mmc_rx_crc_error += readl(mmcaddr + MMC_RX_CRC_ERROR);
+ mmc->mmc_rx_align_error += readl(mmcaddr + MMC_RX_ALIGN_ERROR);
+ mmc->mmc_rx_run_error += readl(mmcaddr + MMC_RX_RUN_ERROR);
+ mmc->mmc_rx_jabber_error += readl(mmcaddr + MMC_RX_JABBER_ERROR);
+ mmc->mmc_rx_undersize_g += readl(mmcaddr + MMC_RX_UNDERSIZE_G);
+ mmc->mmc_rx_oversize_g += readl(mmcaddr + MMC_RX_OVERSIZE_G);
+ mmc->mmc_rx_64_octets_gb += readl(mmcaddr + MMC_RX_64_OCTETS_GB);
mmc->mmc_rx_65_to_127_octets_gb +=
- readl(ioaddr + MMC_RX_65_TO_127_OCTETS_GB);
+ readl(mmcaddr + MMC_RX_65_TO_127_OCTETS_GB);
mmc->mmc_rx_128_to_255_octets_gb +=
- readl(ioaddr + MMC_RX_128_TO_255_OCTETS_GB);
+ readl(mmcaddr + MMC_RX_128_TO_255_OCTETS_GB);
mmc->mmc_rx_256_to_511_octets_gb +=
- readl(ioaddr + MMC_RX_256_TO_511_OCTETS_GB);
+ readl(mmcaddr + MMC_RX_256_TO_511_OCTETS_GB);
mmc->mmc_rx_512_to_1023_octets_gb +=
- readl(ioaddr + MMC_RX_512_TO_1023_OCTETS_GB);
+ readl(mmcaddr + MMC_RX_512_TO_1023_OCTETS_GB);
mmc->mmc_rx_1024_to_max_octets_gb +=
- readl(ioaddr + MMC_RX_1024_TO_MAX_OCTETS_GB);
- mmc->mmc_rx_unicast_g += readl(ioaddr + MMC_RX_UNICAST_G);
- mmc->mmc_rx_length_error += readl(ioaddr + MMC_RX_LENGTH_ERROR);
- mmc->mmc_rx_autofrangetype += readl(ioaddr + MMC_RX_AUTOFRANGETYPE);
- mmc->mmc_rx_pause_frames += readl(ioaddr + MMC_RX_PAUSE_FRAMES);
- mmc->mmc_rx_fifo_overflow += readl(ioaddr + MMC_RX_FIFO_OVERFLOW);
- mmc->mmc_rx_vlan_frames_gb += readl(ioaddr + MMC_RX_VLAN_FRAMES_GB);
- mmc->mmc_rx_watchdog_error += readl(ioaddr + MMC_RX_WATCHDOG_ERROR);
+ readl(mmcaddr + MMC_RX_1024_TO_MAX_OCTETS_GB);
+ mmc->mmc_rx_unicast_g += readl(mmcaddr + MMC_RX_UNICAST_G);
+ mmc->mmc_rx_length_error += readl(mmcaddr + MMC_RX_LENGTH_ERROR);
+ mmc->mmc_rx_autofrangetype += readl(mmcaddr + MMC_RX_AUTOFRANGETYPE);
+ mmc->mmc_rx_pause_frames += readl(mmcaddr + MMC_RX_PAUSE_FRAMES);
+ mmc->mmc_rx_fifo_overflow += readl(mmcaddr + MMC_RX_FIFO_OVERFLOW);
+ mmc->mmc_rx_vlan_frames_gb += readl(mmcaddr + MMC_RX_VLAN_FRAMES_GB);
+ mmc->mmc_rx_watchdog_error += readl(mmcaddr + MMC_RX_WATCHDOG_ERROR);
/* IPC */
- mmc->mmc_rx_ipc_intr_mask += readl(ioaddr + MMC_RX_IPC_INTR_MASK);
- mmc->mmc_rx_ipc_intr += readl(ioaddr + MMC_RX_IPC_INTR);
+ mmc->mmc_rx_ipc_intr_mask += readl(mmcaddr + MMC_RX_IPC_INTR_MASK);
+ mmc->mmc_rx_ipc_intr += readl(mmcaddr + MMC_RX_IPC_INTR);
/* IPv4 */
- mmc->mmc_rx_ipv4_gd += readl(ioaddr + MMC_RX_IPV4_GD);
- mmc->mmc_rx_ipv4_hderr += readl(ioaddr + MMC_RX_IPV4_HDERR);
- mmc->mmc_rx_ipv4_nopay += readl(ioaddr + MMC_RX_IPV4_NOPAY);
- mmc->mmc_rx_ipv4_frag += readl(ioaddr + MMC_RX_IPV4_FRAG);
- mmc->mmc_rx_ipv4_udsbl += readl(ioaddr + MMC_RX_IPV4_UDSBL);
+ mmc->mmc_rx_ipv4_gd += readl(mmcaddr + MMC_RX_IPV4_GD);
+ mmc->mmc_rx_ipv4_hderr += readl(mmcaddr + MMC_RX_IPV4_HDERR);
+ mmc->mmc_rx_ipv4_nopay += readl(mmcaddr + MMC_RX_IPV4_NOPAY);
+ mmc->mmc_rx_ipv4_frag += readl(mmcaddr + MMC_RX_IPV4_FRAG);
+ mmc->mmc_rx_ipv4_udsbl += readl(mmcaddr + MMC_RX_IPV4_UDSBL);
- mmc->mmc_rx_ipv4_gd_octets += readl(ioaddr + MMC_RX_IPV4_GD_OCTETS);
+ mmc->mmc_rx_ipv4_gd_octets += readl(mmcaddr + MMC_RX_IPV4_GD_OCTETS);
mmc->mmc_rx_ipv4_hderr_octets +=
- readl(ioaddr + MMC_RX_IPV4_HDERR_OCTETS);
+ readl(mmcaddr + MMC_RX_IPV4_HDERR_OCTETS);
mmc->mmc_rx_ipv4_nopay_octets +=
- readl(ioaddr + MMC_RX_IPV4_NOPAY_OCTETS);
- mmc->mmc_rx_ipv4_frag_octets += readl(ioaddr + MMC_RX_IPV4_FRAG_OCTETS);
+ readl(mmcaddr + MMC_RX_IPV4_NOPAY_OCTETS);
+ mmc->mmc_rx_ipv4_frag_octets += readl(mmcaddr +
+ MMC_RX_IPV4_FRAG_OCTETS);
mmc->mmc_rx_ipv4_udsbl_octets +=
- readl(ioaddr + MMC_RX_IPV4_UDSBL_OCTETS);
+ readl(mmcaddr + MMC_RX_IPV4_UDSBL_OCTETS);
/* IPV6 */
- mmc->mmc_rx_ipv6_gd_octets += readl(ioaddr + MMC_RX_IPV6_GD_OCTETS);
+ mmc->mmc_rx_ipv6_gd_octets += readl(mmcaddr + MMC_RX_IPV6_GD_OCTETS);
mmc->mmc_rx_ipv6_hderr_octets +=
- readl(ioaddr + MMC_RX_IPV6_HDERR_OCTETS);
+ readl(mmcaddr + MMC_RX_IPV6_HDERR_OCTETS);
mmc->mmc_rx_ipv6_nopay_octets +=
- readl(ioaddr + MMC_RX_IPV6_NOPAY_OCTETS);
+ readl(mmcaddr + MMC_RX_IPV6_NOPAY_OCTETS);
- mmc->mmc_rx_ipv6_gd += readl(ioaddr + MMC_RX_IPV6_GD);
- mmc->mmc_rx_ipv6_hderr += readl(ioaddr + MMC_RX_IPV6_HDERR);
- mmc->mmc_rx_ipv6_nopay += readl(ioaddr + MMC_RX_IPV6_NOPAY);
+ mmc->mmc_rx_ipv6_gd += readl(mmcaddr + MMC_RX_IPV6_GD);
+ mmc->mmc_rx_ipv6_hderr += readl(mmcaddr + MMC_RX_IPV6_HDERR);
+ mmc->mmc_rx_ipv6_nopay += readl(mmcaddr + MMC_RX_IPV6_NOPAY);
/* Protocols */
- mmc->mmc_rx_udp_gd += readl(ioaddr + MMC_RX_UDP_GD);
- mmc->mmc_rx_udp_err += readl(ioaddr + MMC_RX_UDP_ERR);
- mmc->mmc_rx_tcp_gd += readl(ioaddr + MMC_RX_TCP_GD);
- mmc->mmc_rx_tcp_err += readl(ioaddr + MMC_RX_TCP_ERR);
- mmc->mmc_rx_icmp_gd += readl(ioaddr + MMC_RX_ICMP_GD);
- mmc->mmc_rx_icmp_err += readl(ioaddr + MMC_RX_ICMP_ERR);
+ mmc->mmc_rx_udp_gd += readl(mmcaddr + MMC_RX_UDP_GD);
+ mmc->mmc_rx_udp_err += readl(mmcaddr + MMC_RX_UDP_ERR);
+ mmc->mmc_rx_tcp_gd += readl(mmcaddr + MMC_RX_TCP_GD);
+ mmc->mmc_rx_tcp_err += readl(mmcaddr + MMC_RX_TCP_ERR);
+ mmc->mmc_rx_icmp_gd += readl(mmcaddr + MMC_RX_ICMP_GD);
+ mmc->mmc_rx_icmp_err += readl(mmcaddr + MMC_RX_ICMP_ERR);
- mmc->mmc_rx_udp_gd_octets += readl(ioaddr + MMC_RX_UDP_GD_OCTETS);
- mmc->mmc_rx_udp_err_octets += readl(ioaddr + MMC_RX_UDP_ERR_OCTETS);
- mmc->mmc_rx_tcp_gd_octets += readl(ioaddr + MMC_RX_TCP_GD_OCTETS);
- mmc->mmc_rx_tcp_err_octets += readl(ioaddr + MMC_RX_TCP_ERR_OCTETS);
- mmc->mmc_rx_icmp_gd_octets += readl(ioaddr + MMC_RX_ICMP_GD_OCTETS);
- mmc->mmc_rx_icmp_err_octets += readl(ioaddr + MMC_RX_ICMP_ERR_OCTETS);
+ mmc->mmc_rx_udp_gd_octets += readl(mmcaddr + MMC_RX_UDP_GD_OCTETS);
+ mmc->mmc_rx_udp_err_octets += readl(mmcaddr + MMC_RX_UDP_ERR_OCTETS);
+ mmc->mmc_rx_tcp_gd_octets += readl(mmcaddr + MMC_RX_TCP_GD_OCTETS);
+ mmc->mmc_rx_tcp_err_octets += readl(mmcaddr + MMC_RX_TCP_ERR_OCTETS);
+ mmc->mmc_rx_icmp_gd_octets += readl(mmcaddr + MMC_RX_ICMP_GD_OCTETS);
+ mmc->mmc_rx_icmp_err_octets += readl(mmcaddr + MMC_RX_ICMP_ERR_OCTETS);
}
diff --git a/drivers/net/ethernet/stmicro/stmmac/norm_desc.c b/drivers/net/ethernet/stmicro/stmmac/norm_desc.c
index 011386f6f24d..2beacd0d3043 100644
--- a/drivers/net/ethernet/stmicro/stmmac/norm_desc.c
+++ b/drivers/net/ethernet/stmicro/stmmac/norm_desc.c
@@ -279,6 +279,26 @@ static int ndesc_get_rx_timestamp_status(void *desc, u32 ats)
return 1;
}
+static void ndesc_display_ring(void *head, unsigned int size, bool rx)
+{
+ struct dma_desc *p = (struct dma_desc *)head;
+ int i;
+
+ pr_info("%s descriptor ring:\n", rx ? "RX" : "TX");
+
+ for (i = 0; i < size; i++) {
+ u64 x;
+
+ x = *(u64 *)p;
+ pr_info("%d [0x%x]: 0x%x 0x%x 0x%x 0x%x",
+ i, (unsigned int)virt_to_phys(p),
+ (unsigned int)x, (unsigned int)(x >> 32),
+ p->des2, p->des3);
+ p++;
+ }
+ pr_info("\n");
+}
+
const struct stmmac_desc_ops ndesc_ops = {
.tx_status = ndesc_get_tx_status,
.rx_status = ndesc_get_rx_status,
@@ -297,4 +317,5 @@ const struct stmmac_desc_ops ndesc_ops = {
.get_tx_timestamp_status = ndesc_get_tx_timestamp_status,
.get_timestamp = ndesc_get_timestamp,
.get_rx_timestamp_status = ndesc_get_rx_timestamp_status,
+ .display_ring = ndesc_display_ring,
};
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac.h b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
index 8bbab97895fe..59ae6088cd22 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac.h
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac.h
@@ -24,7 +24,7 @@
#define __STMMAC_H__
#define STMMAC_RESOURCE_NAME "stmmaceth"
-#define DRV_MODULE_VERSION "Oct_2015"
+#define DRV_MODULE_VERSION "Jan_2016"
#include <linux/clk.h>
#include <linux/stmmac.h>
@@ -67,6 +67,7 @@ struct stmmac_priv {
spinlock_t tx_lock;
bool tx_path_in_lpi_mode;
struct timer_list txtimer;
+ bool tso;
struct dma_desc *dma_rx ____cacheline_aligned_in_smp;
struct dma_extended_desc *dma_erx;
@@ -128,6 +129,10 @@ struct stmmac_priv {
int use_riwt;
int irq_wake;
spinlock_t ptp_lock;
+ void __iomem *mmcaddr;
+ u32 rx_tail_addr;
+ u32 tx_tail_addr;
+ u32 mss;
#ifdef CONFIG_DEBUG_FS
struct dentry *dbgfs_dir;
@@ -143,9 +148,9 @@ void stmmac_set_ethtool_ops(struct net_device *netdev);
int stmmac_ptp_register(struct stmmac_priv *priv);
void stmmac_ptp_unregister(struct stmmac_priv *priv);
-int stmmac_resume(struct net_device *ndev);
-int stmmac_suspend(struct net_device *ndev);
-int stmmac_dvr_remove(struct net_device *ndev);
+int stmmac_resume(struct device *dev);
+int stmmac_suspend(struct device *dev);
+int stmmac_dvr_remove(struct device *dev);
int stmmac_dvr_probe(struct device *device,
struct plat_stmmacenet_data *plat_dat,
struct stmmac_resources *res);
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
index 3c7928edfebb..e2b98b01647e 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_ethtool.c
@@ -161,6 +161,9 @@ static const struct stmmac_stats stmmac_gstrings_stats[] = {
STMMAC_STAT(mtl_rx_fifo_ctrl_active),
STMMAC_STAT(mac_rx_frame_ctrl_fifo),
STMMAC_STAT(mac_gmii_rx_proto_engine),
+ /* TSO */
+ STMMAC_STAT(tx_tso_frames),
+ STMMAC_STAT(tx_tso_nfrags),
};
#define STMMAC_STATS_LEN ARRAY_SIZE(stmmac_gstrings_stats)
@@ -499,14 +502,14 @@ static void stmmac_get_ethtool_stats(struct net_device *dev,
int i, j = 0;
/* Update the DMA HW counters for dwmac10/100 */
- if (!priv->plat->has_gmac)
+ if (priv->hw->dma->dma_diagnostic_fr)
priv->hw->dma->dma_diagnostic_fr(&dev->stats,
(void *) &priv->xstats,
priv->ioaddr);
else {
/* If supported, for new GMAC chips expose the MMC counters */
if (priv->dma_cap.rmon) {
- dwmac_mmc_read(priv->ioaddr, &priv->mmc);
+ dwmac_mmc_read(priv->mmcaddr, &priv->mmc);
for (i = 0; i < STMMAC_MMC_STATS_LEN; i++) {
char *p;
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index fcbd4be562e2..eac45d0c75e2 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -56,6 +56,7 @@
#include "dwmac1000.h"
#define STMMAC_ALIGN(x) L1_CACHE_ALIGN(x)
+#define TSO_MAX_BUFF_SIZE (SZ_16K - 1)
/* Module parameters */
#define TX_TIMEO 5000
@@ -721,13 +722,15 @@ static void stmmac_adjust_link(struct net_device *dev)
new_state = 1;
switch (phydev->speed) {
case 1000:
- if (likely(priv->plat->has_gmac))
+ if (likely((priv->plat->has_gmac) ||
+ (priv->plat->has_gmac4)))
ctrl &= ~priv->hw->link.port;
stmmac_hw_fix_mac_speed(priv);
break;
case 100:
case 10:
- if (priv->plat->has_gmac) {
+ if (likely((priv->plat->has_gmac) ||
+ (priv->plat->has_gmac4))) {
ctrl |= priv->hw->link.port;
if (phydev->speed == SPEED_100) {
ctrl |= priv->hw->link.speed;
@@ -875,53 +878,22 @@ static int stmmac_init_phy(struct net_device *dev)
return 0;
}
-/**
- * stmmac_display_ring - display ring
- * @head: pointer to the head of the ring passed.
- * @size: size of the ring.
- * @extend_desc: to verify if extended descriptors are used.
- * Description: display the control/status and buffer descriptors.
- */
-static void stmmac_display_ring(void *head, int size, int extend_desc)
-{
- int i;
- struct dma_extended_desc *ep = (struct dma_extended_desc *)head;
- struct dma_desc *p = (struct dma_desc *)head;
-
- for (i = 0; i < size; i++) {
- u64 x;
- if (extend_desc) {
- x = *(u64 *) ep;
- pr_info("%d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
- i, (unsigned int)virt_to_phys(ep),
- (unsigned int)x, (unsigned int)(x >> 32),
- ep->basic.des2, ep->basic.des3);
- ep++;
- } else {
- x = *(u64 *) p;
- pr_info("%d [0x%x]: 0x%x 0x%x 0x%x 0x%x",
- i, (unsigned int)virt_to_phys(p),
- (unsigned int)x, (unsigned int)(x >> 32),
- p->des2, p->des3);
- p++;
- }
- pr_info("\n");
- }
-}
-
static void stmmac_display_rings(struct stmmac_priv *priv)
{
+ void *head_rx, *head_tx;
+
if (priv->extend_desc) {
- pr_info("Extended RX descriptor ring:\n");
- stmmac_display_ring((void *)priv->dma_erx, DMA_RX_SIZE, 1);
- pr_info("Extended TX descriptor ring:\n");
- stmmac_display_ring((void *)priv->dma_etx, DMA_TX_SIZE, 1);
+ head_rx = (void *)priv->dma_erx;
+ head_tx = (void *)priv->dma_etx;
} else {
- pr_info("RX descriptor ring:\n");
- stmmac_display_ring((void *)priv->dma_rx, DMA_RX_SIZE, 0);
- pr_info("TX descriptor ring:\n");
- stmmac_display_ring((void *)priv->dma_tx, DMA_TX_SIZE, 0);
+ head_rx = (void *)priv->dma_rx;
+ head_tx = (void *)priv->dma_tx;
}
+
+ /* Display Rx ring */
+ priv->hw->desc->display_ring(head_rx, DMA_RX_SIZE, true);
+ /* Display Tx ring */
+ priv->hw->desc->display_ring(head_tx, DMA_TX_SIZE, false);
}
static int stmmac_set_bfsize(int mtu, int bufsize)
@@ -1000,7 +972,10 @@ static int stmmac_init_rx_buffers(struct stmmac_priv *priv, struct dma_desc *p,
return -EINVAL;
}
- p->des2 = priv->rx_skbuff_dma[i];
+ if (priv->synopsys_id >= DWMAC_CORE_4_00)
+ p->des0 = priv->rx_skbuff_dma[i];
+ else
+ p->des2 = priv->rx_skbuff_dma[i];
if ((priv->hw->mode->init_desc3) &&
(priv->dma_buf_sz == BUF_SIZE_16KiB))
@@ -1091,7 +1066,16 @@ static int init_dma_desc_rings(struct net_device *dev, gfp_t flags)
p = &((priv->dma_etx + i)->basic);
else
p = priv->dma_tx + i;
- p->des2 = 0;
+
+ if (priv->synopsys_id >= DWMAC_CORE_4_00) {
+ p->des0 = 0;
+ p->des1 = 0;
+ p->des2 = 0;
+ p->des3 = 0;
+ } else {
+ p->des2 = 0;
+ }
+
priv->tx_skbuff_dma[i].buf = 0;
priv->tx_skbuff_dma[i].map_as_page = false;
priv->tx_skbuff_dma[i].len = 0;
@@ -1354,9 +1338,13 @@ static void stmmac_tx_clean(struct stmmac_priv *priv)
priv->tx_skbuff_dma[entry].len,
DMA_TO_DEVICE);
priv->tx_skbuff_dma[entry].buf = 0;
+ priv->tx_skbuff_dma[entry].len = 0;
priv->tx_skbuff_dma[entry].map_as_page = false;
}
- priv->hw->mode->clean_desc3(priv, p);
+
+ if (priv->hw->mode->clean_desc3)
+ priv->hw->mode->clean_desc3(priv, p);
+
priv->tx_skbuff_dma[entry].last_segment = false;
priv->tx_skbuff_dma[entry].is_jumbo = false;
@@ -1479,41 +1467,23 @@ static void stmmac_dma_interrupt(struct stmmac_priv *priv)
static void stmmac_mmc_setup(struct stmmac_priv *priv)
{
unsigned int mode = MMC_CNTRL_RESET_ON_READ | MMC_CNTRL_COUNTER_RESET |
- MMC_CNTRL_PRESET | MMC_CNTRL_FULL_HALF_PRESET;
+ MMC_CNTRL_PRESET | MMC_CNTRL_FULL_HALF_PRESET;
- dwmac_mmc_intr_all_mask(priv->ioaddr);
+ if (priv->synopsys_id >= DWMAC_CORE_4_00)
+ priv->mmcaddr = priv->ioaddr + MMC_GMAC4_OFFSET;
+ else
+ priv->mmcaddr = priv->ioaddr + MMC_GMAC3_X_OFFSET;
+
+ dwmac_mmc_intr_all_mask(priv->mmcaddr);
if (priv->dma_cap.rmon) {
- dwmac_mmc_ctrl(priv->ioaddr, mode);
+ dwmac_mmc_ctrl(priv->mmcaddr, mode);
memset(&priv->mmc, 0, sizeof(struct stmmac_counters));
} else
pr_info(" No MAC Management Counters available\n");
}
/**
- * stmmac_get_synopsys_id - return the SYINID.
- * @priv: driver private structure
- * Description: this simple function is to decode and return the SYINID
- * starting from the HW core register.
- */
-static u32 stmmac_get_synopsys_id(struct stmmac_priv *priv)
-{
- u32 hwid = priv->hw->synopsys_uid;
-
- /* Check Synopsys Id (not available on old chips) */
- if (likely(hwid)) {
- u32 uid = ((hwid & 0x0000ff00) >> 8);
- u32 synid = (hwid & 0x000000ff);
-
- pr_info("stmmac - user ID: 0x%x, Synopsys ID: 0x%x\n",
- uid, synid);
-
- return synid;
- }
- return 0;
-}
-
-/**
* stmmac_selec_desc_mode - to select among: normal/alternate/extend descriptors
* @priv: driver private structure
* Description: select the Enhanced/Alternate or Normal descriptors.
@@ -1550,51 +1520,15 @@ static void stmmac_selec_desc_mode(struct stmmac_priv *priv)
*/
static int stmmac_get_hw_features(struct stmmac_priv *priv)
{
- u32 hw_cap = 0;
+ u32 ret = 0;
if (priv->hw->dma->get_hw_feature) {
- hw_cap = priv->hw->dma->get_hw_feature(priv->ioaddr);
-
- priv->dma_cap.mbps_10_100 = (hw_cap & DMA_HW_FEAT_MIISEL);
- priv->dma_cap.mbps_1000 = (hw_cap & DMA_HW_FEAT_GMIISEL) >> 1;
- priv->dma_cap.half_duplex = (hw_cap & DMA_HW_FEAT_HDSEL) >> 2;
- priv->dma_cap.hash_filter = (hw_cap & DMA_HW_FEAT_HASHSEL) >> 4;
- priv->dma_cap.multi_addr = (hw_cap & DMA_HW_FEAT_ADDMAC) >> 5;
- priv->dma_cap.pcs = (hw_cap & DMA_HW_FEAT_PCSSEL) >> 6;
- priv->dma_cap.sma_mdio = (hw_cap & DMA_HW_FEAT_SMASEL) >> 8;
- priv->dma_cap.pmt_remote_wake_up =
- (hw_cap & DMA_HW_FEAT_RWKSEL) >> 9;
- priv->dma_cap.pmt_magic_frame =
- (hw_cap & DMA_HW_FEAT_MGKSEL) >> 10;
- /* MMC */
- priv->dma_cap.rmon = (hw_cap & DMA_HW_FEAT_MMCSEL) >> 11;
- /* IEEE 1588-2002 */
- priv->dma_cap.time_stamp =
- (hw_cap & DMA_HW_FEAT_TSVER1SEL) >> 12;
- /* IEEE 1588-2008 */
- priv->dma_cap.atime_stamp =
- (hw_cap & DMA_HW_FEAT_TSVER2SEL) >> 13;
- /* 802.3az - Energy-Efficient Ethernet (EEE) */
- priv->dma_cap.eee = (hw_cap & DMA_HW_FEAT_EEESEL) >> 14;
- priv->dma_cap.av = (hw_cap & DMA_HW_FEAT_AVSEL) >> 15;
- /* TX and RX csum */
- priv->dma_cap.tx_coe = (hw_cap & DMA_HW_FEAT_TXCOESEL) >> 16;
- priv->dma_cap.rx_coe_type1 =
- (hw_cap & DMA_HW_FEAT_RXTYP1COE) >> 17;
- priv->dma_cap.rx_coe_type2 =
- (hw_cap & DMA_HW_FEAT_RXTYP2COE) >> 18;
- priv->dma_cap.rxfifo_over_2048 =
- (hw_cap & DMA_HW_FEAT_RXFIFOSIZE) >> 19;
- /* TX and RX number of channels */
- priv->dma_cap.number_rx_channel =
- (hw_cap & DMA_HW_FEAT_RXCHCNT) >> 20;
- priv->dma_cap.number_tx_channel =
- (hw_cap & DMA_HW_FEAT_TXCHCNT) >> 22;
- /* Alternate (enhanced) DESC mode */
- priv->dma_cap.enh_desc = (hw_cap & DMA_HW_FEAT_ENHDESSEL) >> 24;
- }
-
- return hw_cap;
+ priv->hw->dma->get_hw_feature(priv->ioaddr,
+ &priv->dma_cap);
+ ret = 1;
+ }
+
+ return ret;
}
/**
@@ -1650,8 +1584,19 @@ static int stmmac_init_dma_engine(struct stmmac_priv *priv)
priv->hw->dma->init(priv->ioaddr, pbl, fixed_burst, mixed_burst,
aal, priv->dma_tx_phy, priv->dma_rx_phy, atds);
- if ((priv->synopsys_id >= DWMAC_CORE_3_50) &&
- (priv->plat->axi && priv->hw->dma->axi))
+ if (priv->synopsys_id >= DWMAC_CORE_4_00) {
+ priv->rx_tail_addr = priv->dma_rx_phy +
+ (DMA_RX_SIZE * sizeof(struct dma_desc));
+ priv->hw->dma->set_rx_tail_ptr(priv->ioaddr, priv->rx_tail_addr,
+ STMMAC_CHAN0);
+
+ priv->tx_tail_addr = priv->dma_tx_phy +
+ (DMA_TX_SIZE * sizeof(struct dma_desc));
+ priv->hw->dma->set_tx_tail_ptr(priv->ioaddr, priv->tx_tail_addr,
+ STMMAC_CHAN0);
+ }
+
+ if (priv->plat->axi && priv->hw->dma->axi)
priv->hw->dma->axi(priv->ioaddr, priv->plat->axi);
return ret;
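The tail pointers programmed a few lines above point to the bus address just past the last descriptor of each ring. As a purely illustrative calculation: for a ring of 512 basic descriptors of 16 bytes each (four 32-bit words), the RX tail would be dma_rx_phy + 512 * 16 = dma_rx_phy + 8192; the actual ring sizes come from the driver's existing DMA_RX_SIZE/DMA_TX_SIZE defines.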
@@ -1731,7 +1676,10 @@ static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
}
/* Enable the MAC Rx/Tx */
- stmmac_set_mac(priv->ioaddr, true);
+ if (priv->synopsys_id >= DWMAC_CORE_4_00)
+ stmmac_dwmac4_set_mac(priv->ioaddr, true);
+ else
+ stmmac_set_mac(priv->ioaddr, true);
/* Set the HW DMA mode and the COE */
stmmac_dma_operation_mode(priv);
@@ -1769,6 +1717,18 @@ static int stmmac_hw_setup(struct net_device *dev, bool init_ptp)
if (priv->pcs && priv->hw->mac->ctrl_ane)
priv->hw->mac->ctrl_ane(priv->hw, 0);
+ /* set TX ring length */
+ if (priv->hw->dma->set_tx_ring_len)
+ priv->hw->dma->set_tx_ring_len(priv->ioaddr,
+ (DMA_TX_SIZE - 1));
+ /* set RX ring length */
+ if (priv->hw->dma->set_rx_ring_len)
+ priv->hw->dma->set_rx_ring_len(priv->ioaddr,
+ (DMA_RX_SIZE - 1));
+ /* Enable TSO */
+ if (priv->tso)
+ priv->hw->dma->enable_tso(priv->ioaddr, 1, STMMAC_CHAN0);
+
return 0;
}
@@ -1934,6 +1894,239 @@ static int stmmac_release(struct net_device *dev)
}
/**
+ * stmmac_tso_allocator - fill TSO descriptors for a buffer
+ * @priv: driver private structure
+ * @des: buffer start address
+ * @total_len: total length to fill in descriptors
+ * @last_segment: condition for the last descriptor
+ * Description:
+ * This function fills the descriptors and requests new descriptors
+ * according to the remaining buffer length to fill.
+ */
+static void stmmac_tso_allocator(struct stmmac_priv *priv, unsigned int des,
+ int total_len, bool last_segment)
+{
+ struct dma_desc *desc;
+ int tmp_len;
+ u32 buff_size;
+
+ tmp_len = total_len;
+
+ while (tmp_len > 0) {
+ priv->cur_tx = STMMAC_GET_ENTRY(priv->cur_tx, DMA_TX_SIZE);
+ desc = priv->dma_tx + priv->cur_tx;
+
+ desc->des0 = des + (total_len - tmp_len);
+ buff_size = tmp_len >= TSO_MAX_BUFF_SIZE ?
+ TSO_MAX_BUFF_SIZE : tmp_len;
+
+ priv->hw->desc->prepare_tso_tx_desc(desc, 0, buff_size,
+ 0, 1,
+ (last_segment) && (buff_size < TSO_MAX_BUFF_SIZE),
+ 0, 0);
+
+ tmp_len -= TSO_MAX_BUFF_SIZE;
+ }
+}
+
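A worked example of the splitting above, with TSO_MAX_BUFF_SIZE = SZ_16K - 1 = 16383: a remaining payload of 40000 bytes (an arbitrary figure) takes DIV_ROUND_UP(40000, 16383) = 3 descriptors carrying 16383, 16383 and 7234 bytes, and only the last one, whose buff_size falls below TSO_MAX_BUFF_SIZE, can end up with the last_segment flag set.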
+/**
+ * stmmac_tso_xmit - Tx entry point of the driver for oversized frames (TSO)
+ * @skb : the socket buffer
+ * @dev : device pointer
+ * Description: this is the transmit function that is called on TSO frames
+ * (support available on GMAC4 and newer chips).
+ * The diagram below shows the ring programming in case of TSO frames:
+ *
+ * First Descriptor
+ * --------
+ * | DES0 |---> buffer1 = L2/L3/L4 header
+ * | DES1 |---> TCP Payload (can continue on next descr...)
+ * | DES2 |---> buffer 1 and 2 len
+ * | DES3 |---> must set TSE, TCP hdr len-> [22:19]. TCP payload len [17:0]
+ * --------
+ * |
+ * ...
+ * |
+ * --------
+ * | DES0 | --| Split TCP Payload on Buffers 1 and 2
+ * | DES1 | --|
+ * | DES2 | --> buffer 1 and 2 len
+ * | DES3 |
+ * --------
+ *
+ * The MSS is fixed while TSO is enabled, so the TDES3 ctx field does not
+ * need to be reprogrammed for every frame.
+ */
+static netdev_tx_t stmmac_tso_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ u32 pay_len, mss;
+ int tmp_pay_len = 0;
+ struct stmmac_priv *priv = netdev_priv(dev);
+ int nfrags = skb_shinfo(skb)->nr_frags;
+ unsigned int first_entry, des;
+ struct dma_desc *desc, *first, *mss_desc = NULL;
+ u8 proto_hdr_len;
+ int i;
+
+ spin_lock(&priv->tx_lock);
+
+ /* Compute header lengths */
+ proto_hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+
+ /* Desc availability based on the threshold should be safe enough */
+ if (unlikely(stmmac_tx_avail(priv) <
+ (((skb->len - proto_hdr_len) / TSO_MAX_BUFF_SIZE + 1)))) {
+ if (!netif_queue_stopped(dev)) {
+ netif_stop_queue(dev);
+ /* This is a hard error, log it. */
+ pr_err("%s: Tx Ring full when queue awake\n", __func__);
+ }
+ spin_unlock(&priv->tx_lock);
+ return NETDEV_TX_BUSY;
+ }
+
+ pay_len = skb_headlen(skb) - proto_hdr_len; /* no frags */
+
+ mss = skb_shinfo(skb)->gso_size;
+
+ /* set new MSS value if needed */
+ if (mss != priv->mss) {
+ mss_desc = priv->dma_tx + priv->cur_tx;
+ priv->hw->desc->set_mss(mss_desc, mss);
+ priv->mss = mss;
+ priv->cur_tx = STMMAC_GET_ENTRY(priv->cur_tx, DMA_TX_SIZE);
+ }
+
+ if (netif_msg_tx_queued(priv)) {
+ pr_info("%s: tcphdrlen %d, hdr_len %d, pay_len %d, mss %d\n",
+ __func__, tcp_hdrlen(skb), proto_hdr_len, pay_len, mss);
+ pr_info("\tskb->len %d, skb->data_len %d\n", skb->len,
+ skb->data_len);
+ }
+
+ first_entry = priv->cur_tx;
+
+ desc = priv->dma_tx + first_entry;
+ first = desc;
+
+ /* first descriptor: fill Headers on Buf1 */
+ des = dma_map_single(priv->device, skb->data, skb_headlen(skb),
+ DMA_TO_DEVICE);
+ if (dma_mapping_error(priv->device, des))
+ goto dma_map_err;
+
+ priv->tx_skbuff_dma[first_entry].buf = des;
+ priv->tx_skbuff_dma[first_entry].len = skb_headlen(skb);
+ priv->tx_skbuff[first_entry] = skb;
+
+ first->des0 = des;
+
+ /* Fill start of payload in buff2 of first descriptor */
+ if (pay_len)
+ first->des1 = des + proto_hdr_len;
+
+ /* If needed take extra descriptors to fill the remaining payload */
+ tmp_pay_len = pay_len - TSO_MAX_BUFF_SIZE;
+
+ stmmac_tso_allocator(priv, des, tmp_pay_len, (nfrags == 0));
+
+ /* Prepare fragments */
+ for (i = 0; i < nfrags; i++) {
+ const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+
+ des = skb_frag_dma_map(priv->device, frag, 0,
+ skb_frag_size(frag),
+ DMA_TO_DEVICE);
+
+ stmmac_tso_allocator(priv, des, skb_frag_size(frag),
+ (i == nfrags - 1));
+
+ priv->tx_skbuff_dma[priv->cur_tx].buf = des;
+ priv->tx_skbuff_dma[priv->cur_tx].len = skb_frag_size(frag);
+ priv->tx_skbuff[priv->cur_tx] = NULL;
+ priv->tx_skbuff_dma[priv->cur_tx].map_as_page = true;
+ }
+
+ priv->tx_skbuff_dma[priv->cur_tx].last_segment = true;
+
+ priv->cur_tx = STMMAC_GET_ENTRY(priv->cur_tx, DMA_TX_SIZE);
+
+ if (unlikely(stmmac_tx_avail(priv) <= (MAX_SKB_FRAGS + 1))) {
+ if (netif_msg_hw(priv))
+ pr_debug("%s: stop transmitted packets\n", __func__);
+ netif_stop_queue(dev);
+ }
+
+ dev->stats.tx_bytes += skb->len;
+ priv->xstats.tx_tso_frames++;
+ priv->xstats.tx_tso_nfrags += nfrags;
+
+ /* Manage tx mitigation */
+ priv->tx_count_frames += nfrags + 1;
+ if (likely(priv->tx_coal_frames > priv->tx_count_frames)) {
+ mod_timer(&priv->txtimer,
+ STMMAC_COAL_TIMER(priv->tx_coal_timer));
+ } else {
+ priv->tx_count_frames = 0;
+ priv->hw->desc->set_tx_ic(desc);
+ priv->xstats.tx_set_ic_bit++;
+ }
+
+ if (!priv->hwts_tx_en)
+ skb_tx_timestamp(skb);
+
+ if (unlikely((skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) &&
+ priv->hwts_tx_en)) {
+ /* declare that device is doing timestamping */
+ skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
+ priv->hw->desc->enable_tx_timestamp(first);
+ }
+
+ /* Complete the first descriptor before granting the DMA */
+ priv->hw->desc->prepare_tso_tx_desc(first, 1,
+ proto_hdr_len,
+ pay_len,
+ 1, priv->tx_skbuff_dma[first_entry].last_segment,
+ tcp_hdrlen(skb) / 4, (skb->len - proto_hdr_len));
+
+ /* If context desc is used to change MSS */
+ if (mss_desc)
+ priv->hw->desc->set_tx_owner(mss_desc);
+
+ /* The own bit must be the latest setting done when preparing the
+ * descriptor, and then a barrier is needed to make sure that
+ * all is coherent before granting the DMA engine.
+ */
+ smp_wmb();
+
+ if (netif_msg_pktdata(priv)) {
+ pr_info("%s: curr=%d dirty=%d f=%d, e=%d, f_p=%p, nfrags %d\n",
+ __func__, priv->cur_tx, priv->dirty_tx, first_entry,
+ priv->cur_tx, first, nfrags);
+
+ priv->hw->desc->display_ring((void *)priv->dma_tx, DMA_TX_SIZE,
+ 0);
+
+ pr_info(">>> frame to be transmitted: ");
+ print_pkt(skb->data, skb_headlen(skb));
+ }
+
+ netdev_sent_queue(dev, skb->len);
+
+ priv->hw->dma->set_tx_tail_ptr(priv->ioaddr, priv->tx_tail_addr,
+ STMMAC_CHAN0);
+
+ spin_unlock(&priv->tx_lock);
+ return NETDEV_TX_OK;
+
+dma_map_err:
+ spin_unlock(&priv->tx_lock);
+ dev_err(priv->device, "Tx dma map failed\n");
+ dev_kfree_skb(skb);
+ priv->dev->stats.tx_dropped++;
+ return NETDEV_TX_OK;
+}
+
+/**
* stmmac_xmit - Tx entry point of the driver
* @skb : the socket buffer
* @dev : device pointer
@@ -1950,6 +2143,13 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
unsigned int entry, first_entry;
struct dma_desc *desc, *first;
unsigned int enh_desc;
+ unsigned int des;
+
+ /* Manage oversized TCP frames for GMAC4 device */
+ if (skb_is_gso(skb) && priv->tso) {
+ if (ip_hdr(skb)->protocol == IPPROTO_TCP)
+ return stmmac_tso_xmit(skb, dev);
+ }
spin_lock(&priv->tx_lock);
@@ -1985,7 +2185,8 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
if (enh_desc)
is_jumbo = priv->hw->mode->is_jumbo_frm(skb->len, enh_desc);
- if (unlikely(is_jumbo)) {
+ if (unlikely(is_jumbo) && likely(priv->synopsys_id <
+ DWMAC_CORE_4_00)) {
entry = priv->hw->mode->jumbo_frm(priv, skb, csum_insertion);
if (unlikely(entry < 0))
goto dma_map_err;
@@ -2003,13 +2204,21 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
else
desc = priv->dma_tx + entry;
- desc->des2 = skb_frag_dma_map(priv->device, frag, 0, len,
- DMA_TO_DEVICE);
- if (dma_mapping_error(priv->device, desc->des2))
+ des = skb_frag_dma_map(priv->device, frag, 0, len,
+ DMA_TO_DEVICE);
+ if (dma_mapping_error(priv->device, des))
goto dma_map_err; /* should reuse desc w/o issues */
priv->tx_skbuff[entry] = NULL;
- priv->tx_skbuff_dma[entry].buf = desc->des2;
+
+ if (unlikely(priv->synopsys_id >= DWMAC_CORE_4_00)) {
+ desc->des0 = des;
+ priv->tx_skbuff_dma[entry].buf = desc->des0;
+ } else {
+ desc->des2 = des;
+ priv->tx_skbuff_dma[entry].buf = desc->des2;
+ }
+
priv->tx_skbuff_dma[entry].map_as_page = true;
priv->tx_skbuff_dma[entry].len = len;
priv->tx_skbuff_dma[entry].last_segment = last_segment;
@@ -2024,16 +2233,18 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
priv->cur_tx = entry;
if (netif_msg_pktdata(priv)) {
+ void *tx_head;
+
pr_debug("%s: curr=%d dirty=%d f=%d, e=%d, first=%p, nfrags=%d",
__func__, priv->cur_tx, priv->dirty_tx, first_entry,
entry, first, nfrags);
if (priv->extend_desc)
- stmmac_display_ring((void *)priv->dma_etx,
- DMA_TX_SIZE, 1);
+ tx_head = (void *)priv->dma_etx;
else
- stmmac_display_ring((void *)priv->dma_tx,
- DMA_TX_SIZE, 0);
+ tx_head = (void *)priv->dma_tx;
+
+ priv->hw->desc->display_ring(tx_head, DMA_TX_SIZE, false);
pr_debug(">>> frame to be transmitted: ");
print_pkt(skb->data, skb->len);
@@ -2072,12 +2283,19 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
if (likely(!is_jumbo)) {
bool last_segment = (nfrags == 0);
- first->des2 = dma_map_single(priv->device, skb->data,
- nopaged_len, DMA_TO_DEVICE);
- if (dma_mapping_error(priv->device, first->des2))
+ des = dma_map_single(priv->device, skb->data,
+ nopaged_len, DMA_TO_DEVICE);
+ if (dma_mapping_error(priv->device, des))
goto dma_map_err;
- priv->tx_skbuff_dma[first_entry].buf = first->des2;
+ if (unlikely(priv->synopsys_id >= DWMAC_CORE_4_00)) {
+ first->des0 = des;
+ priv->tx_skbuff_dma[first_entry].buf = first->des0;
+ } else {
+ first->des2 = des;
+ priv->tx_skbuff_dma[first_entry].buf = first->des2;
+ }
+
priv->tx_skbuff_dma[first_entry].len = nopaged_len;
priv->tx_skbuff_dma[first_entry].last_segment = last_segment;
@@ -2101,7 +2319,12 @@ static netdev_tx_t stmmac_xmit(struct sk_buff *skb, struct net_device *dev)
}
netdev_sent_queue(dev, skb->len);
- priv->hw->dma->enable_dma_transmission(priv->ioaddr);
+
+ if (priv->synopsys_id < DWMAC_CORE_4_00)
+ priv->hw->dma->enable_dma_transmission(priv->ioaddr);
+ else
+ priv->hw->dma->set_tx_tail_ptr(priv->ioaddr, priv->tx_tail_addr,
+ STMMAC_CHAN0);
spin_unlock(&priv->tx_lock);
return NETDEV_TX_OK;
@@ -2183,9 +2406,15 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv)
dev_kfree_skb(skb);
break;
}
- p->des2 = priv->rx_skbuff_dma[entry];
- priv->hw->mode->refill_desc3(priv, p);
+ if (unlikely(priv->synopsys_id >= DWMAC_CORE_4_00)) {
+ p->des0 = priv->rx_skbuff_dma[entry];
+ p->des1 = 0;
+ } else {
+ p->des2 = priv->rx_skbuff_dma[entry];
+ }
+ if (priv->hw->mode->refill_desc3)
+ priv->hw->mode->refill_desc3(priv, p);
if (priv->rx_zeroc_thresh > 0)
priv->rx_zeroc_thresh--;
@@ -2193,9 +2422,13 @@ static inline void stmmac_rx_refill(struct stmmac_priv *priv)
if (netif_msg_rx_status(priv))
pr_debug("\trefill entry #%d\n", entry);
}
-
wmb();
- priv->hw->desc->set_rx_owner(p);
+
+ if (unlikely(priv->synopsys_id >= DWMAC_CORE_4_00))
+ priv->hw->desc->init_rx_desc(p, priv->use_riwt, 0, 0);
+ else
+ priv->hw->desc->set_rx_owner(p);
+
wmb();
entry = STMMAC_GET_ENTRY(entry, DMA_RX_SIZE);
@@ -2218,13 +2451,15 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit)
int coe = priv->hw->rx_csum;
if (netif_msg_rx_status(priv)) {
+ void *rx_head;
+
pr_debug("%s: descriptor ring:\n", __func__);
if (priv->extend_desc)
- stmmac_display_ring((void *)priv->dma_erx,
- DMA_RX_SIZE, 1);
+ rx_head = (void *)priv->dma_erx;
else
- stmmac_display_ring((void *)priv->dma_rx,
- DMA_RX_SIZE, 0);
+ rx_head = (void *)priv->dma_rx;
+
+ priv->hw->desc->display_ring(rx_head, DMA_RX_SIZE, true);
}
while (count < limit) {
int status;
@@ -2274,11 +2509,23 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit)
} else {
struct sk_buff *skb;
int frame_len;
+ unsigned int des;
+
+ if (unlikely(priv->synopsys_id >= DWMAC_CORE_4_00))
+ des = p->des0;
+ else
+ des = p->des2;
frame_len = priv->hw->desc->get_rx_frame_len(p, coe);
- /* check if frame_len fits the preallocated memory */
+ /* If the frame length is greater than the skb buffer size
+ * (preallocated during init) then the packet is
+ * ignored
+ */
if (frame_len > priv->dma_buf_sz) {
+ pr_err("%s: len %d larger than size (%d)\n",
+ priv->dev->name, frame_len,
+ priv->dma_buf_sz);
priv->dev->stats.rx_length_errors++;
break;
}
@@ -2291,14 +2538,19 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit)
if (netif_msg_rx_status(priv)) {
pr_debug("\tdesc: %p [entry %d] buff=0x%x\n",
- p, entry, p->des2);
+ p, entry, des);
if (frame_len > ETH_FRAME_LEN)
pr_debug("\tframe size %d, COE: %d\n",
frame_len, status);
}
- if (unlikely((frame_len < priv->rx_copybreak) ||
- stmmac_rx_threshold_count(priv))) {
+ /* Zero-copy is always used for all the sizes
+ * in case of GMAC4 because it always needs
+ * to refill the used descriptors.
+ */
+ if (unlikely(!priv->plat->has_gmac4 &&
+ ((frame_len < priv->rx_copybreak) ||
+ stmmac_rx_threshold_count(priv)))) {
skb = netdev_alloc_skb_ip_align(priv->dev,
frame_len);
if (unlikely(!skb)) {
@@ -2450,7 +2702,7 @@ static int stmmac_change_mtu(struct net_device *dev, int new_mtu)
return -EBUSY;
}
- if (priv->plat->enh_desc)
+ if ((priv->plat->enh_desc) || (priv->synopsys_id >= DWMAC_CORE_4_00))
max_mtu = JUMBO_LEN;
else
max_mtu = SKB_MAX_HEAD(NET_SKB_PAD + NET_IP_ALIGN);
@@ -2464,6 +2716,7 @@ static int stmmac_change_mtu(struct net_device *dev, int new_mtu)
}
dev->mtu = new_mtu;
+
netdev_update_features(dev);
return 0;
@@ -2488,6 +2741,14 @@ static netdev_features_t stmmac_fix_features(struct net_device *dev,
if (priv->plat->bugged_jumbo && (dev->mtu > ETH_DATA_LEN))
features &= ~NETIF_F_CSUM_MASK;
+ /* Enable or disable TSO as requested via the ethtool features */
+ if ((priv->plat->tso_en) && (priv->dma_cap.tsoen)) {
+ if (features & NETIF_F_TSO)
+ priv->tso = true;
+ else
+ priv->tso = false;
+ }
+
return features;
}
@@ -2534,7 +2795,7 @@ static irqreturn_t stmmac_interrupt(int irq, void *dev_id)
}
/* To handle GMAC own interrupts */
- if (priv->plat->has_gmac) {
+ if ((priv->plat->has_gmac) || (priv->plat->has_gmac4)) {
int status = priv->hw->mac->host_irq_status(priv->hw,
&priv->xstats);
if (unlikely(status)) {
@@ -2543,6 +2804,10 @@ static irqreturn_t stmmac_interrupt(int irq, void *dev_id)
priv->tx_path_in_lpi_mode = true;
if (status & CORE_IRQ_TX_PATH_EXIT_LPI_MODE)
priv->tx_path_in_lpi_mode = false;
+ if (status & CORE_IRQ_MTL_RX_OVERFLOW)
+ priv->hw->dma->set_rx_tail_ptr(priv->ioaddr,
+ priv->rx_tail_addr,
+ STMMAC_CHAN0);
}
}
@@ -2615,15 +2880,14 @@ static void sysfs_display_ring(void *head, int size, int extend_desc,
x = *(u64 *) ep;
seq_printf(seq, "%d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
i, (unsigned int)virt_to_phys(ep),
- (unsigned int)x, (unsigned int)(x >> 32),
+ ep->basic.des0, ep->basic.des1,
ep->basic.des2, ep->basic.des3);
ep++;
} else {
x = *(u64 *) p;
seq_printf(seq, "%d [0x%x]: 0x%x 0x%x 0x%x 0x%x\n",
i, (unsigned int)virt_to_phys(ep),
- (unsigned int)x, (unsigned int)(x >> 32),
- p->des2, p->des3);
+ p->des0, p->des1, p->des2, p->des3);
p++;
}
seq_printf(seq, "\n");
@@ -2706,10 +2970,15 @@ static int stmmac_sysfs_dma_cap_read(struct seq_file *seq, void *v)
seq_printf(seq, "\tAV features: %s\n", (priv->dma_cap.av) ? "Y" : "N");
seq_printf(seq, "\tChecksum Offload in TX: %s\n",
(priv->dma_cap.tx_coe) ? "Y" : "N");
- seq_printf(seq, "\tIP Checksum Offload (type1) in RX: %s\n",
- (priv->dma_cap.rx_coe_type1) ? "Y" : "N");
- seq_printf(seq, "\tIP Checksum Offload (type2) in RX: %s\n",
- (priv->dma_cap.rx_coe_type2) ? "Y" : "N");
+ if (priv->synopsys_id >= DWMAC_CORE_4_00) {
+ seq_printf(seq, "\tIP Checksum Offload in RX: %s\n",
+ (priv->dma_cap.rx_coe) ? "Y" : "N");
+ } else {
+ seq_printf(seq, "\tIP Checksum Offload (type1) in RX: %s\n",
+ (priv->dma_cap.rx_coe_type1) ? "Y" : "N");
+ seq_printf(seq, "\tIP Checksum Offload (type2) in RX: %s\n",
+ (priv->dma_cap.rx_coe_type2) ? "Y" : "N");
+ }
seq_printf(seq, "\tRXFIFO > 2048bytes: %s\n",
(priv->dma_cap.rxfifo_over_2048) ? "Y" : "N");
seq_printf(seq, "\tNumber of Additional RX channel: %d\n",
@@ -2818,27 +3087,35 @@ static int stmmac_hw_init(struct stmmac_priv *priv)
priv->dev->priv_flags |= IFF_UNICAST_FLT;
mac = dwmac1000_setup(priv->ioaddr,
priv->plat->multicast_filter_bins,
- priv->plat->unicast_filter_entries);
+ priv->plat->unicast_filter_entries,
+ &priv->synopsys_id);
+ } else if (priv->plat->has_gmac4) {
+ priv->dev->priv_flags |= IFF_UNICAST_FLT;
+ mac = dwmac4_setup(priv->ioaddr,
+ priv->plat->multicast_filter_bins,
+ priv->plat->unicast_filter_entries,
+ &priv->synopsys_id);
} else {
- mac = dwmac100_setup(priv->ioaddr);
+ mac = dwmac100_setup(priv->ioaddr, &priv->synopsys_id);
}
if (!mac)
return -ENOMEM;
priv->hw = mac;
- /* Get and dump the chip ID */
- priv->synopsys_id = stmmac_get_synopsys_id(priv);
-
/* To use the chained or ring mode */
- if (chain_mode) {
- priv->hw->mode = &chain_mode_ops;
- pr_info(" Chain mode enabled\n");
- priv->mode = STMMAC_CHAIN_MODE;
+ if (priv->synopsys_id >= DWMAC_CORE_4_00) {
+ priv->hw->mode = &dwmac4_ring_mode_ops;
} else {
- priv->hw->mode = &ring_mode_ops;
- pr_info(" Ring mode enabled\n");
- priv->mode = STMMAC_RING_MODE;
+ if (chain_mode) {
+ priv->hw->mode = &chain_mode_ops;
+ pr_info(" Chain mode enabled\n");
+ priv->mode = STMMAC_CHAIN_MODE;
+ } else {
+ priv->hw->mode = &ring_mode_ops;
+ pr_info(" Ring mode enabled\n");
+ priv->mode = STMMAC_RING_MODE;
+ }
}
/* Get the HW capability (new GMAC newer than 3.50a) */
@@ -2860,6 +3137,9 @@ static int stmmac_hw_init(struct stmmac_priv *priv)
else
priv->plat->tx_coe = priv->dma_cap.tx_coe;
+ /* In case of GMAC4 rx_coe is from HW cap register. */
+ priv->plat->rx_coe = priv->dma_cap.rx_coe;
+
if (priv->dma_cap.rx_coe_type2)
priv->plat->rx_coe = STMMAC_RX_COE_TYPE2;
else if (priv->dma_cap.rx_coe_type1)
@@ -2868,13 +3148,17 @@ static int stmmac_hw_init(struct stmmac_priv *priv)
} else
pr_info(" No HW DMA feature register supported");
- /* To use alternate (extended) or normal descriptor structures */
- stmmac_selec_desc_mode(priv);
+ /* To use alternate (extended), normal or GMAC4 descriptor structures */
+ if (priv->synopsys_id >= DWMAC_CORE_4_00)
+ priv->hw->desc = &dwmac4_desc_ops;
+ else
+ stmmac_selec_desc_mode(priv);
if (priv->plat->rx_coe) {
priv->hw->rx_csum = priv->plat->rx_coe;
- pr_info(" RX Checksum Offload Engine supported (type %d)\n",
- priv->plat->rx_coe);
+ pr_info(" RX Checksum Offload Engine supported\n");
+ if (priv->synopsys_id < DWMAC_CORE_4_00)
+ pr_info("\tCOE Type %d\n", priv->hw->rx_csum);
}
if (priv->plat->tx_coe)
pr_info(" TX Checksum insertion supported\n");
@@ -2884,6 +3168,9 @@ static int stmmac_hw_init(struct stmmac_priv *priv)
device_set_wakeup_capable(priv->device, 1);
}
+ if (priv->dma_cap.tsoen)
+ pr_info(" TSO supported\n");
+
return 0;
}
@@ -2987,6 +3274,12 @@ int stmmac_dvr_probe(struct device *device,
ndev->hw_features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
NETIF_F_RXCSUM;
+
+ if ((priv->plat->tso_en) && (priv->dma_cap.tsoen)) {
+ ndev->hw_features |= NETIF_F_TSO;
+ priv->tso = true;
+ pr_info(" TSO feature enabled\n");
+ }
ndev->features |= ndev->hw_features | NETIF_F_HIGHDMA;
ndev->watchdog_timeo = msecs_to_jiffies(watchdog);
#ifdef STMMAC_VLAN_TAG_USED
@@ -3062,12 +3355,13 @@ EXPORT_SYMBOL_GPL(stmmac_dvr_probe);
/**
* stmmac_dvr_remove
- * @ndev: net device pointer
+ * @dev: device pointer
* Description: this function resets the TX/RX processes, disables the MAC RX/TX,
* changes the link status and releases the DMA descriptor rings.
*/
-int stmmac_dvr_remove(struct net_device *ndev)
+int stmmac_dvr_remove(struct device *dev)
{
+ struct net_device *ndev = dev_get_drvdata(dev);
struct stmmac_priv *priv = netdev_priv(ndev);
pr_info("%s:\n\tremoving driver", __func__);
@@ -3093,13 +3387,14 @@ EXPORT_SYMBOL_GPL(stmmac_dvr_remove);
/**
* stmmac_suspend - suspend callback
- * @ndev: net device pointer
+ * @dev: device pointer
* Description: this is the function to suspend the device and it is called
* by the platform driver to stop the network queue, release the resources,
* program the PMT register (for WoL), clean and release driver resources.
*/
-int stmmac_suspend(struct net_device *ndev)
+int stmmac_suspend(struct device *dev)
{
+ struct net_device *ndev = dev_get_drvdata(dev);
struct stmmac_priv *priv = netdev_priv(ndev);
unsigned long flags;
@@ -3142,12 +3437,13 @@ EXPORT_SYMBOL_GPL(stmmac_suspend);
/**
* stmmac_resume - resume callback
- * @ndev: net device pointer
+ * @dev: device pointer
* Description: on resume this function is invoked to set up the DMA and CORE
* in a usable state.
*/
-int stmmac_resume(struct net_device *ndev)
+int stmmac_resume(struct device *dev)
{
+ struct net_device *ndev = dev_get_drvdata(dev);
struct stmmac_priv *priv = netdev_priv(ndev);
unsigned long flags;
@@ -3181,6 +3477,11 @@ int stmmac_resume(struct net_device *ndev)
priv->dirty_rx = 0;
priv->dirty_tx = 0;
priv->cur_tx = 0;
+ /* Reset the private MSS value to force the MSS context to be
+ * programmed again on the next TSO xmit (only used for GMAC4).
+ */
+ priv->mss = 0;
+
stmmac_clear_descriptors(priv);
stmmac_hw_setup(ndev, false);
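
For reference, a minimal sketch (not part of the patch) of what the new struct device * prototypes enable, assuming, as the hunks above and below do, that the bus glue stores the net_device as driver data at probe time; the example_* names are illustrative only:

	#include <linux/device.h>
	#include <linux/netdevice.h>
	#include <linux/pm.h>
	#include "stmmac.h"	/* assumed to declare stmmac_suspend/stmmac_resume */

	/* A hypothetical glue driver only has to publish the net_device as
	 * driver data; dev_get_drvdata() in the core then recovers it.
	 */
	static void example_glue_store_ndev(struct device *dev,
					    struct net_device *ndev)
	{
		dev_set_drvdata(dev, ndev);
	}

	/* With device-pointer prototypes the core handlers can be wired
	 * directly into a dev_pm_ops table, as the PCI glue below now does.
	 */
	static SIMPLE_DEV_PM_OPS(example_glue_pm_ops, stmmac_suspend, stmmac_resume);
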
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
index 06704ca6f9ca..3f83c369f56c 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_mdio.c
@@ -37,6 +37,18 @@
#define MII_BUSY 0x00000001
#define MII_WRITE 0x00000002
+/* GMAC4 defines */
+#define MII_GMAC4_GOC_SHIFT 2
+#define MII_GMAC4_WRITE (1 << MII_GMAC4_GOC_SHIFT)
+#define MII_GMAC4_READ (3 << MII_GMAC4_GOC_SHIFT)
+
+#define MII_PHY_ADDR_GMAC4_SHIFT 21
+#define MII_PHY_ADDR_GMAC4_MASK GENMASK(25, 21)
+#define MII_PHY_REG_GMAC4_SHIFT 16
+#define MII_PHY_REG_GMAC4_MASK GENMASK(20, 16)
+#define MII_CSR_CLK_GMAC4_SHIFT 8
+#define MII_CSR_CLK_GMAC4_MASK GENMASK(11, 8)
+
static int stmmac_mdio_busy_wait(void __iomem *ioaddr, unsigned int mii_addr)
{
unsigned long curr;
@@ -124,6 +136,80 @@ static int stmmac_mdio_write(struct mii_bus *bus, int phyaddr, int phyreg,
}
/**
+ * stmmac_mdio_read_gmac4
+ * @bus: points to the mii_bus structure
+ * @phyaddr: MII addr reg bits 25-21
+ * @phyreg: MII addr reg bits 20-16
+ * Description: it reads data from the MII register of GMAC4 from within
+ * the phy device.
+ */
+static int stmmac_mdio_read_gmac4(struct mii_bus *bus, int phyaddr, int phyreg)
+{
+ struct net_device *ndev = bus->priv;
+ struct stmmac_priv *priv = netdev_priv(ndev);
+ unsigned int mii_address = priv->hw->mii.addr;
+ unsigned int mii_data = priv->hw->mii.data;
+ int data;
+ u32 value = (((phyaddr << MII_PHY_ADDR_GMAC4_SHIFT) &
+ (MII_PHY_ADDR_GMAC4_MASK)) |
+ ((phyreg << MII_PHY_REG_GMAC4_SHIFT) &
+ (MII_PHY_REG_GMAC4_MASK))) | MII_GMAC4_READ;
+
+ value |= MII_BUSY | ((priv->clk_csr & MII_CSR_CLK_GMAC4_MASK)
+ << MII_CSR_CLK_GMAC4_SHIFT);
+
+ if (stmmac_mdio_busy_wait(priv->ioaddr, mii_address))
+ return -EBUSY;
+
+ writel(value, priv->ioaddr + mii_address);
+
+ if (stmmac_mdio_busy_wait(priv->ioaddr, mii_address))
+ return -EBUSY;
+
+ /* Read the data from the MII data register */
+ data = (int)readl(priv->ioaddr + mii_data);
+
+ return data;
+}
+
+/**
+ * stmmac_mdio_write_gmac4
+ * @bus: points to the mii_bus structure
+ * @phyaddr: MII addr reg bits 25-21
+ * @phyreg: MII addr reg bits 20-16
+ * @phydata: phy data
+ * Description: it writes the data into the MII register of GMAC4 from within
+ * the device.
+ */
+static int stmmac_mdio_write_gmac4(struct mii_bus *bus, int phyaddr, int phyreg,
+ u16 phydata)
+{
+ struct net_device *ndev = bus->priv;
+ struct stmmac_priv *priv = netdev_priv(ndev);
+ unsigned int mii_address = priv->hw->mii.addr;
+ unsigned int mii_data = priv->hw->mii.data;
+
+ u32 value = (((phyaddr << MII_PHY_ADDR_GMAC4_SHIFT) &
+ (MII_PHY_ADDR_GMAC4_MASK)) |
+ ((phyreg << MII_PHY_REG_GMAC4_SHIFT) &
+ (MII_PHY_REG_GMAC4_MASK))) | MII_GMAC4_WRITE;
+
+ value |= MII_BUSY | ((priv->clk_csr & MII_CSR_CLK_GMAC4_MASK)
+ << MII_CSR_CLK_GMAC4_SHIFT);
+
+ /* Wait until any existing MII operation is complete */
+ if (stmmac_mdio_busy_wait(priv->ioaddr, mii_address))
+ return -EBUSY;
+
+ /* Set the MII address register to write */
+ writel(phydata, priv->ioaddr + mii_data);
+ writel(value, priv->ioaddr + mii_address);
+
+ /* Wait until any existing MII operation is complete */
+ return stmmac_mdio_busy_wait(priv->ioaddr, mii_address);
+}
+
+/**
* stmmac_mdio_reset
* @bus: points to the mii_bus structure
* Description: reset the MII bus
@@ -180,9 +266,11 @@ int stmmac_mdio_reset(struct mii_bus *bus)
/* This is a workaround for problems with the STE101P PHY.
* It doesn't complete its reset until at least one clock cycle
- * on MDC, so perform a dummy mdio read.
+ * on MDC, so perform a dummy mdio read. To be updated for GMAC4
+ * if needed.
*/
- writel(0, priv->ioaddr + mii_address);
+ if (!priv->plat->has_gmac4)
+ writel(0, priv->ioaddr + mii_address);
#endif
return 0;
}
@@ -217,8 +305,14 @@ int stmmac_mdio_register(struct net_device *ndev)
#endif
new_bus->name = "stmmac";
- new_bus->read = &stmmac_mdio_read;
- new_bus->write = &stmmac_mdio_write;
+ if (priv->plat->has_gmac4) {
+ new_bus->read = &stmmac_mdio_read_gmac4;
+ new_bus->write = &stmmac_mdio_write_gmac4;
+ } else {
+ new_bus->read = &stmmac_mdio_read;
+ new_bus->write = &stmmac_mdio_write;
+ }
+
new_bus->reset = &stmmac_mdio_reset;
snprintf(new_bus->id, MII_BUS_ID_SIZE, "%s-%x",
new_bus->name, priv->plat->bus_id);
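
As a side note, a small standalone sketch of how the GMAC4 MII address word is laid out according to the defines in this file (PHY address in bits 25-21, register number in bits 20-16, the read command in the GOC bits, busy flag in bit 0); the helper name is illustrative and the CSR clock field is omitted for brevity:

	#include <stdint.h>

	/* Sketch: build the GMAC4 MII address word for a read, mirroring the
	 * MII_PHY_ADDR_GMAC4_*, MII_PHY_REG_GMAC4_* and MII_GMAC4_READ
	 * definitions above.
	 */
	static uint32_t gmac4_mii_addr_read(unsigned int phyaddr, unsigned int phyreg)
	{
		uint32_t value = 0;

		value |= ((uint32_t)phyaddr << 21) & 0x03e00000; /* bits 25-21 */
		value |= ((uint32_t)phyreg  << 16) & 0x001f0000; /* bits 20-16 */
		value |= 3u << 2;	/* MII_GMAC4_READ (GOC command bits) */
		value |= 0x1u;		/* MII_BUSY */

		return value;
	}
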
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
index ae4388735b7f..56c8a2342c14 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_pci.c
@@ -231,30 +231,10 @@ static int stmmac_pci_probe(struct pci_dev *pdev,
*/
static void stmmac_pci_remove(struct pci_dev *pdev)
{
- struct net_device *ndev = pci_get_drvdata(pdev);
-
- stmmac_dvr_remove(ndev);
-}
-
-#ifdef CONFIG_PM_SLEEP
-static int stmmac_pci_suspend(struct device *dev)
-{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct net_device *ndev = pci_get_drvdata(pdev);
-
- return stmmac_suspend(ndev);
-}
-
-static int stmmac_pci_resume(struct device *dev)
-{
- struct pci_dev *pdev = to_pci_dev(dev);
- struct net_device *ndev = pci_get_drvdata(pdev);
-
- return stmmac_resume(ndev);
+ stmmac_dvr_remove(&pdev->dev);
}
-#endif
-static SIMPLE_DEV_PM_OPS(stmmac_pm_ops, stmmac_pci_suspend, stmmac_pci_resume);
+static SIMPLE_DEV_PM_OPS(stmmac_pm_ops, stmmac_suspend, stmmac_resume);
#define STMMAC_VENDOR_ID 0x700
#define STMMAC_QUARK_ID 0x0937
diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
index cf37ea558ecc..409db913b117 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_platform.c
@@ -284,6 +284,13 @@ stmmac_probe_config_dt(struct platform_device *pdev, const char **mac)
plat->pmt = 1;
}
+ if (of_device_is_compatible(np, "snps,dwmac-4.00") ||
+ of_device_is_compatible(np, "snps,dwmac-4.10a")) {
+ plat->has_gmac4 = 1;
+ plat->pmt = 1;
+ plat->tso_en = of_property_read_bool(np, "snps,tso");
+ }
+
if (of_device_is_compatible(np, "snps,dwmac-3.610") ||
of_device_is_compatible(np, "snps,dwmac-3.710")) {
plat->enh_desc = 1;
@@ -379,7 +386,7 @@ int stmmac_pltfr_remove(struct platform_device *pdev)
{
struct net_device *ndev = platform_get_drvdata(pdev);
struct stmmac_priv *priv = netdev_priv(ndev);
- int ret = stmmac_dvr_remove(ndev);
+ int ret = stmmac_dvr_remove(&pdev->dev);
if (priv->plat->exit)
priv->plat->exit(pdev, priv->plat->bsp_priv);
@@ -403,7 +410,7 @@ static int stmmac_pltfr_suspend(struct device *dev)
struct stmmac_priv *priv = netdev_priv(ndev);
struct platform_device *pdev = to_platform_device(dev);
- ret = stmmac_suspend(ndev);
+ ret = stmmac_suspend(dev);
if (priv->plat->exit)
priv->plat->exit(pdev, priv->plat->bsp_priv);
@@ -426,7 +433,7 @@ static int stmmac_pltfr_resume(struct device *dev)
if (priv->plat->init)
priv->plat->init(pdev, priv->plat->bsp_priv);
- return stmmac_resume(ndev);
+ return stmmac_resume(dev);
}
#endif /* CONFIG_PM_SLEEP */
diff --git a/drivers/net/ethernet/sun/niu.c b/drivers/net/ethernet/sun/niu.c
index 9cc45649f477..a2371aa14a49 100644
--- a/drivers/net/ethernet/sun/niu.c
+++ b/drivers/net/ethernet/sun/niu.c
@@ -6431,7 +6431,7 @@ static int niu_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
static void niu_netif_stop(struct niu *np)
{
- np->dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(np->dev); /* prevent tx timeout */
niu_disable_napi(np);
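
The niu hunk above is one instance of a tree-wide conversion in this series from writing dev->trans_start by hand to calling netif_trans_update(). Roughly, and under the assumption that the helper looks as it does in this kernel's include/linux/netdevice.h, it refreshes the TX watchdog timestamp on queue 0 instead of the legacy per-device field, preserving the "prevent tx timeout" semantics:

	#include <linux/jiffies.h>
	#include <linux/netdevice.h>

	/* Rough sketch of the helper (see include/linux/netdevice.h for the
	 * authoritative version): refresh the TX watchdog timestamp.
	 */
	static inline void example_netif_trans_update(struct net_device *dev)
	{
		struct netdev_queue *txq = netdev_get_tx_queue(dev, 0);

		if (txq->trans_start != jiffies)
			txq->trans_start = jiffies;
	}
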
diff --git a/drivers/net/ethernet/sun/sungem.c b/drivers/net/ethernet/sun/sungem.c
index 2437227712dc..d6ad0fbd054e 100644
--- a/drivers/net/ethernet/sun/sungem.c
+++ b/drivers/net/ethernet/sun/sungem.c
@@ -226,7 +226,7 @@ static void gem_put_cell(struct gem *gp)
static inline void gem_netif_stop(struct gem *gp)
{
- gp->dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(gp->dev); /* prevent tx timeout */
napi_disable(&gp->napi);
netif_tx_disable(gp->dev);
}
diff --git a/drivers/net/ethernet/synopsys/dwc_eth_qos.c b/drivers/net/ethernet/synopsys/dwc_eth_qos.c
index af11ed1e0bcc..158213cd6cdd 100644
--- a/drivers/net/ethernet/synopsys/dwc_eth_qos.c
+++ b/drivers/net/ethernet/synopsys/dwc_eth_qos.c
@@ -949,7 +949,7 @@ static void dwceqos_adjust_link(struct net_device *ndev)
if (status_change) {
if (phydev->link) {
- lp->ndev->trans_start = jiffies;
+ netif_trans_update(lp->ndev);
dwceqos_link_up(lp);
} else {
dwceqos_link_down(lp);
@@ -2203,7 +2203,7 @@ static int dwceqos_start_xmit(struct sk_buff *skb, struct net_device *ndev)
netdev_sent_queue(ndev, skb->len);
spin_unlock_bh(&lp->tx_lock);
- ndev->trans_start = jiffies;
+ netif_trans_update(ndev);
return 0;
tx_error:
diff --git a/drivers/net/ethernet/tehuti/tehuti.c b/drivers/net/ethernet/tehuti/tehuti.c
index 14c9d1baa85c..7452b5f9d024 100644
--- a/drivers/net/ethernet/tehuti/tehuti.c
+++ b/drivers/net/ethernet/tehuti/tehuti.c
@@ -1610,7 +1610,6 @@ static inline int bdx_tx_space(struct bdx_priv *priv)
* o NETDEV_TX_BUSY Cannot transmit packet, try later
* Usually a bug, means queue start/stop flow control is broken in
* the driver. Note: the driver must NOT put the skb in its DMA ring.
- * o NETDEV_TX_LOCKED Locking failed, please retry quickly.
*/
static netdev_tx_t bdx_tx_transmit(struct sk_buff *skb,
struct net_device *ndev)
@@ -1630,12 +1629,7 @@ static netdev_tx_t bdx_tx_transmit(struct sk_buff *skb,
ENTER;
local_irq_save(flags);
- if (!spin_trylock(&priv->tx_lock)) {
- local_irq_restore(flags);
- DBG("%s[%s]: TX locked, returning NETDEV_TX_LOCKED\n",
- BDX_DRV_NAME, ndev->name);
- return NETDEV_TX_LOCKED;
- }
+ spin_lock(&priv->tx_lock);
/* build tx descriptor */
BDX_ASSERT(f->m.wptr >= f->m.memsz); /* started with valid wptr */
@@ -1707,7 +1701,7 @@ static netdev_tx_t bdx_tx_transmit(struct sk_buff *skb,
#endif
#ifdef BDX_LLTX
- ndev->trans_start = jiffies; /* NETIF_F_LLTX driver :( */
+ netif_trans_update(ndev); /* NETIF_F_LLTX driver :( */
#endif
ndev->stats.tx_packets++;
ndev->stats.tx_bytes += skb->len;
diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index e2fcdf1eec44..4b08a2f52b3e 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -380,7 +380,6 @@ struct cpsw_priv {
u32 coal_intvl;
u32 bus_freq_mhz;
int rx_packet_max;
- int host_port;
struct clk *clk;
u8 mac_addr[ETH_ALEN];
struct cpsw_slave *slaves;
@@ -530,21 +529,18 @@ static const struct cpsw_stats cpsw_gstrings_stats[] = {
int slave_port = cpsw_get_slave_port(priv, \
slave->slave_num); \
cpsw_ale_add_mcast(priv->ale, addr, \
- 1 << slave_port | 1 << priv->host_port, \
+ 1 << slave_port | ALE_PORT_HOST, \
ALE_VLAN, slave->port_vlan, 0); \
} else { \
cpsw_ale_add_mcast(priv->ale, addr, \
- ALE_ALL_PORTS << priv->host_port, \
+ ALE_ALL_PORTS, \
0, 0, 0); \
} \
} while (0)
static inline int cpsw_get_slave_port(struct cpsw_priv *priv, u32 slave_num)
{
- if (priv->host_port == 0)
- return slave_num + 1;
- else
- return slave_num;
+ return slave_num + 1;
}
static void cpsw_set_promiscious(struct net_device *ndev, bool enable)
@@ -601,8 +597,7 @@ static void cpsw_set_promiscious(struct net_device *ndev, bool enable)
cpsw_ale_control_set(ale, 0, ALE_AGEOUT, 1);
/* Clear all mcast from ALE */
- cpsw_ale_flush_multicast(ale, ALE_ALL_PORTS <<
- priv->host_port, -1);
+ cpsw_ale_flush_multicast(ale, ALE_ALL_PORTS, -1);
/* Flood All Unicast Packets to Host port */
cpsw_ale_control_set(ale, 0, ALE_P0_UNI_FLOOD, 1);
@@ -647,8 +642,7 @@ static void cpsw_ndo_set_rx_mode(struct net_device *ndev)
cpsw_ale_set_allmulti(priv->ale, priv->ndev->flags & IFF_ALLMULTI);
/* Clear all mcast from ALE */
- cpsw_ale_flush_multicast(priv->ale, ALE_ALL_PORTS << priv->host_port,
- vid);
+ cpsw_ale_flush_multicast(priv->ale, ALE_ALL_PORTS, vid);
if (!netdev_mc_empty(ndev)) {
struct netdev_hw_addr *ha;
@@ -1091,7 +1085,7 @@ static inline void cpsw_add_dual_emac_def_ale_entries(
struct cpsw_priv *priv, struct cpsw_slave *slave,
u32 slave_port)
{
- u32 port_mask = 1 << slave_port | 1 << priv->host_port;
+ u32 port_mask = 1 << slave_port | ALE_PORT_HOST;
if (priv->version == CPSW_VERSION_1)
slave_write(slave, slave->port_vlan, CPSW1_PORT_VLAN);
@@ -1102,7 +1096,7 @@ static inline void cpsw_add_dual_emac_def_ale_entries(
cpsw_ale_add_mcast(priv->ale, priv->ndev->broadcast,
port_mask, ALE_VLAN, slave->port_vlan, 0);
cpsw_ale_add_ucast(priv->ale, priv->mac_addr,
- priv->host_port, ALE_VLAN | ALE_SECURE, slave->port_vlan);
+ HOST_PORT_NUM, ALE_VLAN | ALE_SECURE, slave->port_vlan);
}
static void soft_reset_slave(struct cpsw_slave *slave)
@@ -1180,7 +1174,6 @@ static void cpsw_slave_open(struct cpsw_slave *slave, struct cpsw_priv *priv)
static inline void cpsw_add_default_vlan(struct cpsw_priv *priv)
{
const int vlan = priv->data.default_vlan;
- const int port = priv->host_port;
u32 reg;
int i;
int unreg_mcast_mask;
@@ -1198,9 +1191,9 @@ static inline void cpsw_add_default_vlan(struct cpsw_priv *priv)
else
unreg_mcast_mask = ALE_PORT_1 | ALE_PORT_2;
- cpsw_ale_add_vlan(priv->ale, vlan, ALE_ALL_PORTS << port,
- ALE_ALL_PORTS << port, ALE_ALL_PORTS << port,
- unreg_mcast_mask << port);
+ cpsw_ale_add_vlan(priv->ale, vlan, ALE_ALL_PORTS,
+ ALE_ALL_PORTS, ALE_ALL_PORTS,
+ unreg_mcast_mask);
}
static void cpsw_init_host_port(struct cpsw_priv *priv)
@@ -1213,7 +1206,7 @@ static void cpsw_init_host_port(struct cpsw_priv *priv)
cpsw_ale_start(priv->ale);
/* switch to vlan unaware mode */
- cpsw_ale_control_set(priv->ale, priv->host_port, ALE_VLAN_AWARE,
+ cpsw_ale_control_set(priv->ale, HOST_PORT_NUM, ALE_VLAN_AWARE,
CPSW_ALE_VLAN_AWARE);
control_reg = readl(&priv->regs->control);
control_reg |= CPSW_VLAN_AWARE;
@@ -1227,14 +1220,14 @@ static void cpsw_init_host_port(struct cpsw_priv *priv)
&priv->host_port_regs->cpdma_tx_pri_map);
__raw_writel(0, &priv->host_port_regs->cpdma_rx_chan_map);
- cpsw_ale_control_set(priv->ale, priv->host_port,
+ cpsw_ale_control_set(priv->ale, HOST_PORT_NUM,
ALE_PORT_STATE, ALE_PORT_STATE_FORWARD);
if (!priv->data.dual_emac) {
- cpsw_ale_add_ucast(priv->ale, priv->mac_addr, priv->host_port,
+ cpsw_ale_add_ucast(priv->ale, priv->mac_addr, HOST_PORT_NUM,
0, 0);
cpsw_ale_add_mcast(priv->ale, priv->ndev->broadcast,
- 1 << priv->host_port, 0, 0, ALE_MCAST_FWD_2);
+ ALE_PORT_HOST, 0, 0, ALE_MCAST_FWD_2);
}
}
@@ -1281,8 +1274,7 @@ static int cpsw_ndo_open(struct net_device *ndev)
cpsw_add_default_vlan(priv);
else
cpsw_ale_add_vlan(priv->ale, priv->data.default_vlan,
- ALE_ALL_PORTS << priv->host_port,
- ALE_ALL_PORTS << priv->host_port, 0, 0);
+ ALE_ALL_PORTS, ALE_ALL_PORTS, 0, 0);
if (!cpsw_common_res_usage_state(priv)) {
struct cpsw_priv *priv_sl0 = cpsw_get_slave_priv(priv, 0);
@@ -1397,7 +1389,7 @@ static netdev_tx_t cpsw_ndo_start_xmit(struct sk_buff *skb,
struct cpsw_priv *priv = netdev_priv(ndev);
int ret;
- ndev->trans_start = jiffies;
+ netif_trans_update(ndev);
if (skb_padto(skb, CPSW_MIN_PACKET_SIZE)) {
cpsw_err(priv, tx_err, "packet pad failed\n");
@@ -1628,9 +1620,9 @@ static int cpsw_ndo_set_mac_address(struct net_device *ndev, void *p)
flags = ALE_VLAN;
}
- cpsw_ale_del_ucast(priv->ale, priv->mac_addr, priv->host_port,
+ cpsw_ale_del_ucast(priv->ale, priv->mac_addr, HOST_PORT_NUM,
flags, vid);
- cpsw_ale_add_ucast(priv->ale, addr->sa_data, priv->host_port,
+ cpsw_ale_add_ucast(priv->ale, addr->sa_data, HOST_PORT_NUM,
flags, vid);
memcpy(priv->mac_addr, addr->sa_data, ETH_ALEN);
@@ -1674,12 +1666,12 @@ static inline int cpsw_add_vlan_ale_entry(struct cpsw_priv *priv,
}
ret = cpsw_ale_add_vlan(priv->ale, vid, port_mask, 0, port_mask,
- unreg_mcast_mask << priv->host_port);
+ unreg_mcast_mask);
if (ret != 0)
return ret;
ret = cpsw_ale_add_ucast(priv->ale, priv->mac_addr,
- priv->host_port, ALE_VLAN, vid);
+ HOST_PORT_NUM, ALE_VLAN, vid);
if (ret != 0)
goto clean_vid;
@@ -1691,7 +1683,7 @@ static inline int cpsw_add_vlan_ale_entry(struct cpsw_priv *priv,
clean_vlan_ucast:
cpsw_ale_del_ucast(priv->ale, priv->mac_addr,
- priv->host_port, ALE_VLAN, vid);
+ HOST_PORT_NUM, ALE_VLAN, vid);
clean_vid:
cpsw_ale_del_vlan(priv->ale, vid, 0);
return ret;
@@ -1746,7 +1738,7 @@ static int cpsw_ndo_vlan_rx_kill_vid(struct net_device *ndev,
return ret;
ret = cpsw_ale_del_ucast(priv->ale, priv->mac_addr,
- priv->host_port, ALE_VLAN, vid);
+ HOST_PORT_NUM, ALE_VLAN, vid);
if (ret != 0)
return ret;
@@ -2157,7 +2149,6 @@ static int cpsw_probe_dual_emac(struct platform_device *pdev,
priv_sl2->bus_freq_mhz = priv->bus_freq_mhz;
priv_sl2->regs = priv->regs;
- priv_sl2->host_port = priv->host_port;
priv_sl2->host_port_regs = priv->host_port_regs;
priv_sl2->wr_regs = priv->wr_regs;
priv_sl2->hw_stats = priv->hw_stats;
@@ -2326,7 +2317,6 @@ static int cpsw_probe(struct platform_device *pdev)
goto clean_runtime_disable_ret;
}
priv->regs = ss_regs;
- priv->host_port = HOST_PORT_NUM;
/* Need to enable clocks with runtime PM api to access module
* registers
diff --git a/drivers/net/ethernet/ti/netcp_core.c b/drivers/net/ethernet/ti/netcp_core.c
index 1d0942c53120..32516661f180 100644
--- a/drivers/net/ethernet/ti/netcp_core.c
+++ b/drivers/net/ethernet/ti/netcp_core.c
@@ -1272,7 +1272,7 @@ static int netcp_ndo_start_xmit(struct sk_buff *skb, struct net_device *ndev)
if (ret)
goto drop;
- ndev->trans_start = jiffies;
+ netif_trans_update(ndev);
/* Check Tx pool count & stop subqueue if needed */
desc_count = knav_pool_count(netcp->tx_pool);
@@ -1788,7 +1788,7 @@ static void netcp_ndo_tx_timeout(struct net_device *ndev)
dev_err(netcp->ndev_dev, "transmit timed out tx descs(%d)\n", descs);
netcp_process_tx_compl_packets(netcp, netcp->tx_pool_size);
- ndev->trans_start = jiffies;
+ netif_trans_update(ndev);
netif_tx_wake_all_queues(ndev);
}
diff --git a/drivers/net/ethernet/ti/tlan.c b/drivers/net/ethernet/ti/tlan.c
index a274cd49afe9..561703317312 100644
--- a/drivers/net/ethernet/ti/tlan.c
+++ b/drivers/net/ethernet/ti/tlan.c
@@ -1007,7 +1007,7 @@ static void tlan_tx_timeout(struct net_device *dev)
tlan_reset_lists(dev);
tlan_read_and_clear_stats(dev, TLAN_IGNORE);
tlan_reset_adapter(dev);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue(dev);
}
diff --git a/drivers/net/ethernet/tile/tilepro.c b/drivers/net/ethernet/tile/tilepro.c
index 298e059d0498..922a443e3415 100644
--- a/drivers/net/ethernet/tile/tilepro.c
+++ b/drivers/net/ethernet/tile/tilepro.c
@@ -1883,7 +1883,7 @@ static int tile_net_tx(struct sk_buff *skb, struct net_device *dev)
/* Save the timestamp. */
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
#ifdef TILE_NET_PARANOIA
@@ -2026,7 +2026,7 @@ static void tile_net_tx_timeout(struct net_device *dev)
{
PDEBUG("tile_net_tx_timeout()\n");
PDEBUG("Transmit timeout at %ld, latency %ld\n", jiffies,
- jiffies - dev->trans_start);
+ jiffies - dev_trans_start(dev));
/* XXX: ISSUE: This doesn't seem useful for us. */
netif_wake_queue(dev);
diff --git a/drivers/net/ethernet/toshiba/ps3_gelic_wireless.c b/drivers/net/ethernet/toshiba/ps3_gelic_wireless.c
index 743b18266a7c..446ea580ad42 100644
--- a/drivers/net/ethernet/toshiba/ps3_gelic_wireless.c
+++ b/drivers/net/ethernet/toshiba/ps3_gelic_wireless.c
@@ -1616,13 +1616,13 @@ static void gelic_wl_scan_complete_event(struct gelic_wl_info *wl)
target->valid = 1;
target->eurus_index = i;
kfree(target->hwinfo);
- target->hwinfo = kzalloc(be16_to_cpu(scan_info->size),
+ target->hwinfo = kmemdup(scan_info,
+ be16_to_cpu(scan_info->size),
GFP_KERNEL);
if (!target->hwinfo)
continue;
/* copy hw scan info */
- memcpy(target->hwinfo, scan_info, be16_to_cpu(scan_info->size));
target->essid_len = strnlen(scan_info->essid,
sizeof(scan_info->essid));
target->rate_len = 0;
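
The gelic conversion above relies on kmemdup() combining the allocation and the copy that were previously open-coded; a rough standalone equivalent, for illustration only:

	#include <linux/slab.h>
	#include <linux/string.h>

	/* Roughly what kmemdup() does (see mm/util.c): allocate len bytes and
	 * copy src into them, returning NULL on allocation failure.
	 */
	static void *example_kmemdup(const void *src, size_t len, gfp_t gfp)
	{
		void *p = kmalloc(len, gfp);

		if (p)
			memcpy(p, src, len);
		return p;
	}
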
diff --git a/drivers/net/ethernet/toshiba/spider_net.c b/drivers/net/ethernet/toshiba/spider_net.c
index 67610270d171..36a6e8b54d94 100644
--- a/drivers/net/ethernet/toshiba/spider_net.c
+++ b/drivers/net/ethernet/toshiba/spider_net.c
@@ -705,7 +705,7 @@ spider_net_prepare_tx_descr(struct spider_net_card *card,
wmb();
descr->prev->hwdescr->next_descr_addr = descr->bus_addr;
- card->netdev->trans_start = jiffies; /* set netdev watchdog timer */
+ netif_trans_update(card->netdev); /* set netdev watchdog timer */
return 0;
}
diff --git a/drivers/net/ethernet/tundra/tsi108_eth.c b/drivers/net/ethernet/tundra/tsi108_eth.c
index 520cf50a3d5a..01a77145a0fa 100644
--- a/drivers/net/ethernet/tundra/tsi108_eth.c
+++ b/drivers/net/ethernet/tundra/tsi108_eth.c
@@ -1314,7 +1314,8 @@ static int tsi108_open(struct net_device *dev)
data->txring = dma_zalloc_coherent(NULL, txring_size, &data->txdma,
GFP_KERNEL);
if (!data->txring) {
- pci_free_consistent(0, rxring_size, data->rxring, data->rxdma);
+ pci_free_consistent(NULL, rxring_size, data->rxring,
+ data->rxdma);
return -ENOMEM;
}
diff --git a/drivers/net/ethernet/via/via-rhine.c b/drivers/net/ethernet/via/via-rhine.c
index 2b7550c43f78..9d14731cdcb1 100644
--- a/drivers/net/ethernet/via/via-rhine.c
+++ b/drivers/net/ethernet/via/via-rhine.c
@@ -1758,7 +1758,7 @@ static void rhine_reset_task(struct work_struct *work)
spin_unlock_bh(&rp->lock);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
dev->stats.tx_errors++;
netif_wake_queue(dev);
diff --git a/drivers/net/ethernet/wiznet/Kconfig b/drivers/net/ethernet/wiznet/Kconfig
index f98b91d21f33..1981e88c18dc 100644
--- a/drivers/net/ethernet/wiznet/Kconfig
+++ b/drivers/net/ethernet/wiznet/Kconfig
@@ -69,4 +69,18 @@ config WIZNET_BUS_ANY
Performance may decrease compared to explicitly selected bus mode.
endchoice
+config WIZNET_W5100_SPI
+ tristate "WIZnet W5100/W5200/W5500 Ethernet support for SPI mode"
+ depends on WIZNET_BUS_ANY && WIZNET_W5100
+ depends on SPI
+ ---help---
+ In SPI mode the host system accesses registers using the SPI
+ protocol (mode 0) on the SPI bus.
+
+ Performance decreases compared to the other bus interface modes.
+ In W5100 SPI mode, burst READ/WRITE processing is not provided.
+
+ To compile this driver as a module, choose M here: the module
+ will be called w5100-spi.
+
endif # NET_VENDOR_WIZNET
diff --git a/drivers/net/ethernet/wiznet/Makefile b/drivers/net/ethernet/wiznet/Makefile
index c614535227e8..1e05e1a84208 100644
--- a/drivers/net/ethernet/wiznet/Makefile
+++ b/drivers/net/ethernet/wiznet/Makefile
@@ -1,2 +1,3 @@
obj-$(CONFIG_WIZNET_W5100) += w5100.o
+obj-$(CONFIG_WIZNET_W5100_SPI) += w5100-spi.o
obj-$(CONFIG_WIZNET_W5300) += w5300.o
diff --git a/drivers/net/ethernet/wiznet/w5100-spi.c b/drivers/net/ethernet/wiznet/w5100-spi.c
new file mode 100644
index 000000000000..93a2d3c07303
--- /dev/null
+++ b/drivers/net/ethernet/wiznet/w5100-spi.c
@@ -0,0 +1,466 @@
+/*
+ * Ethernet driver for the WIZnet W5100/W5200/W5500 chip.
+ *
+ * Copyright (C) 2016 Akinobu Mita <akinobu.mita@gmail.com>
+ *
+ * Licensed under the GPL-2 or later.
+ *
+ * Datasheet:
+ * http://www.wiznet.co.kr/wp-content/uploads/wiznethome/Chip/W5100/Document/W5100_Datasheet_v1.2.6.pdf
+ * http://wiznethome.cafe24.com/wp-content/uploads/wiznethome/Chip/W5200/Documents/W5200_DS_V140E.pdf
+ * http://wizwiki.net/wiki/lib/exe/fetch.php?media=products:w5500:w5500_ds_v106e_141230.pdf
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/delay.h>
+#include <linux/netdevice.h>
+#include <linux/of_net.h>
+#include <linux/spi/spi.h>
+
+#include "w5100.h"
+
+#define W5100_SPI_WRITE_OPCODE 0xf0
+#define W5100_SPI_READ_OPCODE 0x0f
+
+static int w5100_spi_read(struct net_device *ndev, u32 addr)
+{
+ struct spi_device *spi = to_spi_device(ndev->dev.parent);
+ u8 cmd[3] = { W5100_SPI_READ_OPCODE, addr >> 8, addr & 0xff };
+ u8 data;
+ int ret;
+
+ ret = spi_write_then_read(spi, cmd, sizeof(cmd), &data, 1);
+
+ return ret ? ret : data;
+}
+
+static int w5100_spi_write(struct net_device *ndev, u32 addr, u8 data)
+{
+ struct spi_device *spi = to_spi_device(ndev->dev.parent);
+ u8 cmd[4] = { W5100_SPI_WRITE_OPCODE, addr >> 8, addr & 0xff, data};
+
+ return spi_write_then_read(spi, cmd, sizeof(cmd), NULL, 0);
+}
+
+static int w5100_spi_read16(struct net_device *ndev, u32 addr)
+{
+ u16 data;
+ int ret;
+
+ ret = w5100_spi_read(ndev, addr);
+ if (ret < 0)
+ return ret;
+ data = ret << 8;
+ ret = w5100_spi_read(ndev, addr + 1);
+
+ return ret < 0 ? ret : data | ret;
+}
+
+static int w5100_spi_write16(struct net_device *ndev, u32 addr, u16 data)
+{
+ int ret;
+
+ ret = w5100_spi_write(ndev, addr, data >> 8);
+ if (ret)
+ return ret;
+
+ return w5100_spi_write(ndev, addr + 1, data & 0xff);
+}
+
+static int w5100_spi_readbulk(struct net_device *ndev, u32 addr, u8 *buf,
+ int len)
+{
+ int i;
+
+ for (i = 0; i < len; i++) {
+ int ret = w5100_spi_read(ndev, addr + i);
+
+ if (ret < 0)
+ return ret;
+ buf[i] = ret;
+ }
+
+ return 0;
+}
+
+static int w5100_spi_writebulk(struct net_device *ndev, u32 addr, const u8 *buf,
+ int len)
+{
+ int i;
+
+ for (i = 0; i < len; i++) {
+ int ret = w5100_spi_write(ndev, addr + i, buf[i]);
+
+ if (ret)
+ return ret;
+ }
+
+ return 0;
+}
+
+static const struct w5100_ops w5100_spi_ops = {
+ .may_sleep = true,
+ .chip_id = W5100,
+ .read = w5100_spi_read,
+ .write = w5100_spi_write,
+ .read16 = w5100_spi_read16,
+ .write16 = w5100_spi_write16,
+ .readbulk = w5100_spi_readbulk,
+ .writebulk = w5100_spi_writebulk,
+};
+
+#define W5200_SPI_WRITE_OPCODE 0x80
+
+struct w5200_spi_priv {
+ /* Serialize access to cmd_buf */
+ struct mutex cmd_lock;
+
+ /* DMA (thus cache coherency maintenance) requires the
+ * transfer buffers to live in their own cache lines.
+ */
+ u8 cmd_buf[4] ____cacheline_aligned;
+};
+
+static struct w5200_spi_priv *w5200_spi_priv(struct net_device *ndev)
+{
+ return w5100_ops_priv(ndev);
+}
+
+static int w5200_spi_init(struct net_device *ndev)
+{
+ struct w5200_spi_priv *spi_priv = w5200_spi_priv(ndev);
+
+ mutex_init(&spi_priv->cmd_lock);
+
+ return 0;
+}
+
+static int w5200_spi_read(struct net_device *ndev, u32 addr)
+{
+ struct spi_device *spi = to_spi_device(ndev->dev.parent);
+ u8 cmd[4] = { addr >> 8, addr & 0xff, 0, 1 };
+ u8 data;
+ int ret;
+
+ ret = spi_write_then_read(spi, cmd, sizeof(cmd), &data, 1);
+
+ return ret ? ret : data;
+}
+
+static int w5200_spi_write(struct net_device *ndev, u32 addr, u8 data)
+{
+ struct spi_device *spi = to_spi_device(ndev->dev.parent);
+ u8 cmd[5] = { addr >> 8, addr & 0xff, W5200_SPI_WRITE_OPCODE, 1, data };
+
+ return spi_write_then_read(spi, cmd, sizeof(cmd), NULL, 0);
+}
+
+static int w5200_spi_read16(struct net_device *ndev, u32 addr)
+{
+ struct spi_device *spi = to_spi_device(ndev->dev.parent);
+ u8 cmd[4] = { addr >> 8, addr & 0xff, 0, 2 };
+ __be16 data;
+ int ret;
+
+ ret = spi_write_then_read(spi, cmd, sizeof(cmd), &data, sizeof(data));
+
+ return ret ? ret : be16_to_cpu(data);
+}
+
+static int w5200_spi_write16(struct net_device *ndev, u32 addr, u16 data)
+{
+ struct spi_device *spi = to_spi_device(ndev->dev.parent);
+ u8 cmd[6] = {
+ addr >> 8, addr & 0xff,
+ W5200_SPI_WRITE_OPCODE, 2,
+ data >> 8, data & 0xff
+ };
+
+ return spi_write_then_read(spi, cmd, sizeof(cmd), NULL, 0);
+}
+
+static int w5200_spi_readbulk(struct net_device *ndev, u32 addr, u8 *buf,
+ int len)
+{
+ struct spi_device *spi = to_spi_device(ndev->dev.parent);
+ struct w5200_spi_priv *spi_priv = w5200_spi_priv(ndev);
+ struct spi_transfer xfer[] = {
+ {
+ .tx_buf = spi_priv->cmd_buf,
+ .len = sizeof(spi_priv->cmd_buf),
+ },
+ {
+ .rx_buf = buf,
+ .len = len,
+ },
+ };
+ int ret;
+
+ mutex_lock(&spi_priv->cmd_lock);
+
+ spi_priv->cmd_buf[0] = addr >> 8;
+ spi_priv->cmd_buf[1] = addr;
+ spi_priv->cmd_buf[2] = len >> 8;
+ spi_priv->cmd_buf[3] = len;
+ ret = spi_sync_transfer(spi, xfer, ARRAY_SIZE(xfer));
+
+ mutex_unlock(&spi_priv->cmd_lock);
+
+ return ret;
+}
+
+static int w5200_spi_writebulk(struct net_device *ndev, u32 addr, const u8 *buf,
+ int len)
+{
+ struct spi_device *spi = to_spi_device(ndev->dev.parent);
+ struct w5200_spi_priv *spi_priv = w5200_spi_priv(ndev);
+ struct spi_transfer xfer[] = {
+ {
+ .tx_buf = spi_priv->cmd_buf,
+ .len = sizeof(spi_priv->cmd_buf),
+ },
+ {
+ .tx_buf = buf,
+ .len = len,
+ },
+ };
+ int ret;
+
+ mutex_lock(&spi_priv->cmd_lock);
+
+ spi_priv->cmd_buf[0] = addr >> 8;
+ spi_priv->cmd_buf[1] = addr;
+ spi_priv->cmd_buf[2] = W5200_SPI_WRITE_OPCODE | (len >> 8);
+ spi_priv->cmd_buf[3] = len;
+ ret = spi_sync_transfer(spi, xfer, ARRAY_SIZE(xfer));
+
+ mutex_unlock(&spi_priv->cmd_lock);
+
+ return ret;
+}
+
+static const struct w5100_ops w5200_ops = {
+ .may_sleep = true,
+ .chip_id = W5200,
+ .read = w5200_spi_read,
+ .write = w5200_spi_write,
+ .read16 = w5200_spi_read16,
+ .write16 = w5200_spi_write16,
+ .readbulk = w5200_spi_readbulk,
+ .writebulk = w5200_spi_writebulk,
+ .init = w5200_spi_init,
+};
+
+#define W5500_SPI_BLOCK_SELECT(addr) (((addr) >> 16) & 0x1f)
+#define W5500_SPI_READ_CONTROL(addr) (W5500_SPI_BLOCK_SELECT(addr) << 3)
+#define W5500_SPI_WRITE_CONTROL(addr) \
+ ((W5500_SPI_BLOCK_SELECT(addr) << 3) | BIT(2))
+
+struct w5500_spi_priv {
+ /* Serialize access to cmd_buf */
+ struct mutex cmd_lock;
+
+ /* DMA (thus cache coherency maintenance) requires the
+ * transfer buffers to live in their own cache lines.
+ */
+ u8 cmd_buf[3] ____cacheline_aligned;
+};
+
+static struct w5500_spi_priv *w5500_spi_priv(struct net_device *ndev)
+{
+ return w5100_ops_priv(ndev);
+}
+
+static int w5500_spi_init(struct net_device *ndev)
+{
+ struct w5500_spi_priv *spi_priv = w5500_spi_priv(ndev);
+
+ mutex_init(&spi_priv->cmd_lock);
+
+ return 0;
+}
+
+static int w5500_spi_read(struct net_device *ndev, u32 addr)
+{
+ struct spi_device *spi = to_spi_device(ndev->dev.parent);
+ u8 cmd[3] = {
+ addr >> 8,
+ addr,
+ W5500_SPI_READ_CONTROL(addr)
+ };
+ u8 data;
+ int ret;
+
+ ret = spi_write_then_read(spi, cmd, sizeof(cmd), &data, 1);
+
+ return ret ? ret : data;
+}
+
+static int w5500_spi_write(struct net_device *ndev, u32 addr, u8 data)
+{
+ struct spi_device *spi = to_spi_device(ndev->dev.parent);
+ u8 cmd[4] = {
+ addr >> 8,
+ addr,
+ W5500_SPI_WRITE_CONTROL(addr),
+ data
+ };
+
+ return spi_write_then_read(spi, cmd, sizeof(cmd), NULL, 0);
+}
+
+static int w5500_spi_read16(struct net_device *ndev, u32 addr)
+{
+ struct spi_device *spi = to_spi_device(ndev->dev.parent);
+ u8 cmd[3] = {
+ addr >> 8,
+ addr,
+ W5500_SPI_READ_CONTROL(addr)
+ };
+ __be16 data;
+ int ret;
+
+ ret = spi_write_then_read(spi, cmd, sizeof(cmd), &data, sizeof(data));
+
+ return ret ? ret : be16_to_cpu(data);
+}
+
+static int w5500_spi_write16(struct net_device *ndev, u32 addr, u16 data)
+{
+ struct spi_device *spi = to_spi_device(ndev->dev.parent);
+ u8 cmd[5] = {
+ addr >> 8,
+ addr,
+ W5500_SPI_WRITE_CONTROL(addr),
+ data >> 8,
+ data
+ };
+
+ return spi_write_then_read(spi, cmd, sizeof(cmd), NULL, 0);
+}
+
+static int w5500_spi_readbulk(struct net_device *ndev, u32 addr, u8 *buf,
+ int len)
+{
+ struct spi_device *spi = to_spi_device(ndev->dev.parent);
+ struct w5500_spi_priv *spi_priv = w5500_spi_priv(ndev);
+ struct spi_transfer xfer[] = {
+ {
+ .tx_buf = spi_priv->cmd_buf,
+ .len = sizeof(spi_priv->cmd_buf),
+ },
+ {
+ .rx_buf = buf,
+ .len = len,
+ },
+ };
+ int ret;
+
+ mutex_lock(&spi_priv->cmd_lock);
+
+ spi_priv->cmd_buf[0] = addr >> 8;
+ spi_priv->cmd_buf[1] = addr;
+ spi_priv->cmd_buf[2] = W5500_SPI_READ_CONTROL(addr);
+ ret = spi_sync_transfer(spi, xfer, ARRAY_SIZE(xfer));
+
+ mutex_unlock(&spi_priv->cmd_lock);
+
+ return ret;
+}
+
+static int w5500_spi_writebulk(struct net_device *ndev, u32 addr, const u8 *buf,
+ int len)
+{
+ struct spi_device *spi = to_spi_device(ndev->dev.parent);
+ struct w5500_spi_priv *spi_priv = w5500_spi_priv(ndev);
+ struct spi_transfer xfer[] = {
+ {
+ .tx_buf = spi_priv->cmd_buf,
+ .len = sizeof(spi_priv->cmd_buf),
+ },
+ {
+ .tx_buf = buf,
+ .len = len,
+ },
+ };
+ int ret;
+
+ mutex_lock(&spi_priv->cmd_lock);
+
+ spi_priv->cmd_buf[0] = addr >> 8;
+ spi_priv->cmd_buf[1] = addr;
+ spi_priv->cmd_buf[2] = W5500_SPI_WRITE_CONTROL(addr);
+ ret = spi_sync_transfer(spi, xfer, ARRAY_SIZE(xfer));
+
+ mutex_unlock(&spi_priv->cmd_lock);
+
+ return ret;
+}
+
+static const struct w5100_ops w5500_ops = {
+ .may_sleep = true,
+ .chip_id = W5500,
+ .read = w5500_spi_read,
+ .write = w5500_spi_write,
+ .read16 = w5500_spi_read16,
+ .write16 = w5500_spi_write16,
+ .readbulk = w5500_spi_readbulk,
+ .writebulk = w5500_spi_writebulk,
+ .init = w5500_spi_init,
+};
+
+static int w5100_spi_probe(struct spi_device *spi)
+{
+ const struct spi_device_id *id = spi_get_device_id(spi);
+ const struct w5100_ops *ops;
+ int priv_size;
+ const void *mac = of_get_mac_address(spi->dev.of_node);
+
+ switch (id->driver_data) {
+ case W5100:
+ ops = &w5100_spi_ops;
+ priv_size = 0;
+ break;
+ case W5200:
+ ops = &w5200_ops;
+ priv_size = sizeof(struct w5200_spi_priv);
+ break;
+ case W5500:
+ ops = &w5500_ops;
+ priv_size = sizeof(struct w5500_spi_priv);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return w5100_probe(&spi->dev, ops, priv_size, mac, spi->irq, -EINVAL);
+}
+
+static int w5100_spi_remove(struct spi_device *spi)
+{
+ return w5100_remove(&spi->dev);
+}
+
+static const struct spi_device_id w5100_spi_ids[] = {
+ { "w5100", W5100 },
+ { "w5200", W5200 },
+ { "w5500", W5500 },
+ {}
+};
+MODULE_DEVICE_TABLE(spi, w5100_spi_ids);
+
+static struct spi_driver w5100_spi_driver = {
+ .driver = {
+ .name = "w5100",
+ .pm = &w5100_pm_ops,
+ },
+ .probe = w5100_spi_probe,
+ .remove = w5100_spi_remove,
+ .id_table = w5100_spi_ids,
+};
+module_spi_driver(w5100_spi_driver);
+
+MODULE_DESCRIPTION("WIZnet W5100/W5200/W5500 Ethernet driver for SPI mode");
+MODULE_AUTHOR("Akinobu Mita <akinobu.mita@gmail.com>");
+MODULE_LICENSE("GPL");
diff --git a/drivers/net/ethernet/wiznet/w5100.c b/drivers/net/ethernet/wiznet/w5100.c
index 8b282d0b169c..4f6255cf62ce 100644
--- a/drivers/net/ethernet/wiznet/w5100.c
+++ b/drivers/net/ethernet/wiznet/w5100.c
@@ -27,6 +27,8 @@
#include <linux/irq.h>
#include <linux/gpio.h>
+#include "w5100.h"
+
#define DRV_NAME "w5100"
#define DRV_VERSION "2012-04-04"
@@ -36,7 +38,7 @@ MODULE_ALIAS("platform:"DRV_NAME);
MODULE_LICENSE("GPL");
/*
- * Registers
+ * W5100/W5200/W5500 common registers
*/
#define W5100_COMMON_REGS 0x0000
#define W5100_MR 0x0000 /* Mode Register */
@@ -46,55 +48,115 @@ MODULE_LICENSE("GPL");
#define MR_IND 0x01 /* Indirect mode */
#define W5100_SHAR 0x0009 /* Source MAC address */
#define W5100_IR 0x0015 /* Interrupt Register */
-#define W5100_IMR 0x0016 /* Interrupt Mask Register */
-#define IR_S0 0x01 /* S0 interrupt */
-#define W5100_RTR 0x0017 /* Retry Time-value Register */
-#define RTR_DEFAULT 2000 /* =0x07d0 (2000) */
-#define W5100_RMSR 0x001a /* Receive Memory Size */
-#define W5100_TMSR 0x001b /* Transmit Memory Size */
#define W5100_COMMON_REGS_LEN 0x0040
-#define W5100_S0_REGS 0x0400
-#define W5100_S0_MR 0x0400 /* S0 Mode Register */
-#define S0_MR_MACRAW 0x04 /* MAC RAW mode (promiscuous) */
-#define S0_MR_MACRAW_MF 0x44 /* MAC RAW mode (filtered) */
-#define W5100_S0_CR 0x0401 /* S0 Command Register */
+#define W5100_Sn_MR 0x0000 /* Sn Mode Register */
+#define W5100_Sn_CR 0x0001 /* Sn Command Register */
+#define W5100_Sn_IR 0x0002 /* Sn Interrupt Register */
+#define W5100_Sn_SR 0x0003 /* Sn Status Register */
+#define W5100_Sn_TX_FSR 0x0020 /* Sn Transmit free memory size */
+#define W5100_Sn_TX_RD 0x0022 /* Sn Transmit memory read pointer */
+#define W5100_Sn_TX_WR 0x0024 /* Sn Transmit memory write pointer */
+#define W5100_Sn_RX_RSR 0x0026 /* Sn Receive free memory size */
+#define W5100_Sn_RX_RD 0x0028 /* Sn Receive memory read pointer */
+
+#define S0_REGS(priv) ((priv)->s0_regs)
+
+#define W5100_S0_MR(priv) (S0_REGS(priv) + W5100_Sn_MR)
+#define S0_MR_MACRAW 0x04 /* MAC RAW mode */
+#define S0_MR_MF 0x40 /* MAC Filter for W5100 and W5200 */
+#define W5500_S0_MR_MF 0x80 /* MAC Filter for W5500 */
+#define W5100_S0_CR(priv) (S0_REGS(priv) + W5100_Sn_CR)
#define S0_CR_OPEN 0x01 /* OPEN command */
#define S0_CR_CLOSE 0x10 /* CLOSE command */
#define S0_CR_SEND 0x20 /* SEND command */
#define S0_CR_RECV 0x40 /* RECV command */
-#define W5100_S0_IR 0x0402 /* S0 Interrupt Register */
+#define W5100_S0_IR(priv) (S0_REGS(priv) + W5100_Sn_IR)
#define S0_IR_SENDOK 0x10 /* complete sending */
#define S0_IR_RECV 0x04 /* receiving data */
-#define W5100_S0_SR 0x0403 /* S0 Status Register */
+#define W5100_S0_SR(priv) (S0_REGS(priv) + W5100_Sn_SR)
#define S0_SR_MACRAW 0x42 /* mac raw mode */
-#define W5100_S0_TX_FSR 0x0420 /* S0 Transmit free memory size */
-#define W5100_S0_TX_RD 0x0422 /* S0 Transmit memory read pointer */
-#define W5100_S0_TX_WR 0x0424 /* S0 Transmit memory write pointer */
-#define W5100_S0_RX_RSR 0x0426 /* S0 Receive free memory size */
-#define W5100_S0_RX_RD 0x0428 /* S0 Receive memory read pointer */
+#define W5100_S0_TX_FSR(priv) (S0_REGS(priv) + W5100_Sn_TX_FSR)
+#define W5100_S0_TX_RD(priv) (S0_REGS(priv) + W5100_Sn_TX_RD)
+#define W5100_S0_TX_WR(priv) (S0_REGS(priv) + W5100_Sn_TX_WR)
+#define W5100_S0_RX_RSR(priv) (S0_REGS(priv) + W5100_Sn_RX_RSR)
+#define W5100_S0_RX_RD(priv) (S0_REGS(priv) + W5100_Sn_RX_RD)
+
#define W5100_S0_REGS_LEN 0x0040
+/*
+ * W5100 and W5200 common registers
+ */
+#define W5100_IMR 0x0016 /* Interrupt Mask Register */
+#define IR_S0 0x01 /* S0 interrupt */
+#define W5100_RTR 0x0017 /* Retry Time-value Register */
+#define RTR_DEFAULT 2000 /* =0x07d0 (2000) */
+
+/*
+ * W5100 specific register and memory
+ */
+#define W5100_RMSR 0x001a /* Receive Memory Size */
+#define W5100_TMSR 0x001b /* Transmit Memory Size */
+
+#define W5100_S0_REGS 0x0400
+
#define W5100_TX_MEM_START 0x4000
-#define W5100_TX_MEM_END 0x5fff
-#define W5100_TX_MEM_MASK 0x1fff
+#define W5100_TX_MEM_SIZE 0x2000
#define W5100_RX_MEM_START 0x6000
-#define W5100_RX_MEM_END 0x7fff
-#define W5100_RX_MEM_MASK 0x1fff
+#define W5100_RX_MEM_SIZE 0x2000
+
+/*
+ * W5200 specific register and memory
+ */
+#define W5200_S0_REGS 0x4000
+
+#define W5200_Sn_RXMEM_SIZE(n) (0x401e + (n) * 0x0100) /* Sn RX Memory Size */
+#define W5200_Sn_TXMEM_SIZE(n) (0x401f + (n) * 0x0100) /* Sn TX Memory Size */
+
+#define W5200_TX_MEM_START 0x8000
+#define W5200_TX_MEM_SIZE 0x4000
+#define W5200_RX_MEM_START 0xc000
+#define W5200_RX_MEM_SIZE 0x4000
+
+/*
+ * W5500 specific register and memory
+ *
+ * W5500 registers and memory are organized into multiple blocks. Each one
+ * is selected by a 16-bit offset address and 5 block select bits, so we
+ * encode both into a 32-bit address (the lower 16 bits hold the offset
+ * address and the upper 16 bits hold the block select bits).
+ */
+#define W5500_SIMR 0x0018 /* Socket Interrupt Mask Register */
+#define W5500_RTR 0x0019 /* Retry Time-value Register */
+
+#define W5500_S0_REGS 0x10000
+
+#define W5500_Sn_RXMEM_SIZE(n) \
+ (0x1001e + (n) * 0x40000) /* Sn RX Memory Size */
+#define W5500_Sn_TXMEM_SIZE(n) \
+ (0x1001f + (n) * 0x40000) /* Sn TX Memory Size */
+
+#define W5500_TX_MEM_START 0x20000
+#define W5500_TX_MEM_SIZE 0x04000
+#define W5500_RX_MEM_START 0x30000
+#define W5500_RX_MEM_SIZE 0x04000
/*
* Device driver private data structure
*/
+
struct w5100_priv {
- void __iomem *base;
- spinlock_t reg_lock;
- bool indirect;
- u8 (*read)(struct w5100_priv *priv, u16 addr);
- void (*write)(struct w5100_priv *priv, u16 addr, u8 data);
- u16 (*read16)(struct w5100_priv *priv, u16 addr);
- void (*write16)(struct w5100_priv *priv, u16 addr, u16 data);
- void (*readbuf)(struct w5100_priv *priv, u16 addr, u8 *buf, int len);
- void (*writebuf)(struct w5100_priv *priv, u16 addr, u8 *buf, int len);
+ const struct w5100_ops *ops;
+
+ /* Socket 0 register offset address */
+ u32 s0_regs;
+ /* Socket 0 TX buffer offset address and size */
+ u32 s0_tx_buf;
+ u16 s0_tx_buf_size;
+ /* Socket 0 RX buffer offset address and size */
+ u32 s0_rx_buf;
+ u16 s0_rx_buf_size;
+
int irq;
int link_irq;
int link_gpio;
@@ -103,6 +165,13 @@ struct w5100_priv {
struct net_device *ndev;
bool promisc;
u32 msg_enable;
+
+ struct workqueue_struct *xfer_wq;
+ struct work_struct rx_work;
+ struct sk_buff *tx_skb;
+ struct work_struct tx_work;
+ struct work_struct setrx_work;
+ struct work_struct restart_work;
};
/************************************************************************
@@ -111,63 +180,122 @@ struct w5100_priv {
*
***********************************************************************/
+struct w5100_mmio_priv {
+ void __iomem *base;
+ /* Serialize access in indirect address mode */
+ spinlock_t reg_lock;
+};
+
+static inline struct w5100_mmio_priv *w5100_mmio_priv(struct net_device *dev)
+{
+ return w5100_ops_priv(dev);
+}
+
+static inline void __iomem *w5100_mmio(struct net_device *ndev)
+{
+ struct w5100_mmio_priv *mmio_priv = w5100_mmio_priv(ndev);
+
+ return mmio_priv->base;
+}
+
/*
* In direct address mode host system can directly access W5100 registers
* after mapping to Memory-Mapped I/O space.
*
* 0x8000 bytes are required for memory space.
*/
-static inline u8 w5100_read_direct(struct w5100_priv *priv, u16 addr)
+static inline int w5100_read_direct(struct net_device *ndev, u32 addr)
{
- return ioread8(priv->base + (addr << CONFIG_WIZNET_BUS_SHIFT));
+ return ioread8(w5100_mmio(ndev) + (addr << CONFIG_WIZNET_BUS_SHIFT));
}
-static inline void w5100_write_direct(struct w5100_priv *priv,
- u16 addr, u8 data)
+static inline int __w5100_write_direct(struct net_device *ndev, u32 addr,
+ u8 data)
{
- iowrite8(data, priv->base + (addr << CONFIG_WIZNET_BUS_SHIFT));
+ iowrite8(data, w5100_mmio(ndev) + (addr << CONFIG_WIZNET_BUS_SHIFT));
+
+ return 0;
}
-static u16 w5100_read16_direct(struct w5100_priv *priv, u16 addr)
+static inline int w5100_write_direct(struct net_device *ndev, u32 addr, u8 data)
+{
+ __w5100_write_direct(ndev, addr, data);
+ mmiowb();
+
+ return 0;
+}
+
+static int w5100_read16_direct(struct net_device *ndev, u32 addr)
{
u16 data;
- data = w5100_read_direct(priv, addr) << 8;
- data |= w5100_read_direct(priv, addr + 1);
+ data = w5100_read_direct(ndev, addr) << 8;
+ data |= w5100_read_direct(ndev, addr + 1);
return data;
}
-static void w5100_write16_direct(struct w5100_priv *priv, u16 addr, u16 data)
+static int w5100_write16_direct(struct net_device *ndev, u32 addr, u16 data)
{
- w5100_write_direct(priv, addr, data >> 8);
- w5100_write_direct(priv, addr + 1, data);
+ __w5100_write_direct(ndev, addr, data >> 8);
+ __w5100_write_direct(ndev, addr + 1, data);
+ mmiowb();
+
+ return 0;
}
-static void w5100_readbuf_direct(struct w5100_priv *priv,
- u16 offset, u8 *buf, int len)
+static int w5100_readbulk_direct(struct net_device *ndev, u32 addr, u8 *buf,
+ int len)
{
- u16 addr = W5100_RX_MEM_START + (offset & W5100_RX_MEM_MASK);
int i;
- for (i = 0; i < len; i++, addr++) {
- if (unlikely(addr > W5100_RX_MEM_END))
- addr = W5100_RX_MEM_START;
- *buf++ = w5100_read_direct(priv, addr);
- }
+ for (i = 0; i < len; i++, addr++)
+ *buf++ = w5100_read_direct(ndev, addr);
+
+ return 0;
}
-static void w5100_writebuf_direct(struct w5100_priv *priv,
- u16 offset, u8 *buf, int len)
+static int w5100_writebulk_direct(struct net_device *ndev, u32 addr,
+ const u8 *buf, int len)
{
- u16 addr = W5100_TX_MEM_START + (offset & W5100_TX_MEM_MASK);
int i;
- for (i = 0; i < len; i++, addr++) {
- if (unlikely(addr > W5100_TX_MEM_END))
- addr = W5100_TX_MEM_START;
- w5100_write_direct(priv, addr, *buf++);
- }
+ for (i = 0; i < len; i++, addr++)
+ __w5100_write_direct(ndev, addr, *buf++);
+
+ mmiowb();
+
+ return 0;
}
+static int w5100_mmio_init(struct net_device *ndev)
+{
+ struct platform_device *pdev = to_platform_device(ndev->dev.parent);
+ struct w5100_priv *priv = netdev_priv(ndev);
+ struct w5100_mmio_priv *mmio_priv = w5100_mmio_priv(ndev);
+ struct resource *mem;
+
+ spin_lock_init(&mmio_priv->reg_lock);
+
+ mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+ mmio_priv->base = devm_ioremap_resource(&pdev->dev, mem);
+ if (IS_ERR(mmio_priv->base))
+ return PTR_ERR(mmio_priv->base);
+
+ netdev_info(ndev, "at 0x%llx irq %d\n", (u64)mem->start, priv->irq);
+
+ return 0;
+}
+
+static const struct w5100_ops w5100_mmio_direct_ops = {
+ .chip_id = W5100,
+ .read = w5100_read_direct,
+ .write = w5100_write_direct,
+ .read16 = w5100_read16_direct,
+ .write16 = w5100_write16_direct,
+ .readbulk = w5100_readbulk_direct,
+ .writebulk = w5100_writebulk_direct,
+ .init = w5100_mmio_init,
+};
+
/*
* In indirect address mode host system indirectly accesses registers by
* using Indirect Mode Address Register (IDM_AR) and Indirect Mode Data
@@ -179,139 +307,290 @@ static void w5100_writebuf_direct(struct w5100_priv *priv,
#define W5100_IDM_AR 0x01 /* Indirect Mode Address Register */
#define W5100_IDM_DR 0x03 /* Indirect Mode Data Register */
-static u8 w5100_read_indirect(struct w5100_priv *priv, u16 addr)
+static int w5100_read_indirect(struct net_device *ndev, u32 addr)
{
+ struct w5100_mmio_priv *mmio_priv = w5100_mmio_priv(ndev);
unsigned long flags;
u8 data;
- spin_lock_irqsave(&priv->reg_lock, flags);
- w5100_write16_direct(priv, W5100_IDM_AR, addr);
- mmiowb();
- data = w5100_read_direct(priv, W5100_IDM_DR);
- spin_unlock_irqrestore(&priv->reg_lock, flags);
+ spin_lock_irqsave(&mmio_priv->reg_lock, flags);
+ w5100_write16_direct(ndev, W5100_IDM_AR, addr);
+ data = w5100_read_direct(ndev, W5100_IDM_DR);
+ spin_unlock_irqrestore(&mmio_priv->reg_lock, flags);
return data;
}
-static void w5100_write_indirect(struct w5100_priv *priv, u16 addr, u8 data)
+static int w5100_write_indirect(struct net_device *ndev, u32 addr, u8 data)
{
+ struct w5100_mmio_priv *mmio_priv = w5100_mmio_priv(ndev);
unsigned long flags;
- spin_lock_irqsave(&priv->reg_lock, flags);
- w5100_write16_direct(priv, W5100_IDM_AR, addr);
- mmiowb();
- w5100_write_direct(priv, W5100_IDM_DR, data);
- mmiowb();
- spin_unlock_irqrestore(&priv->reg_lock, flags);
+ spin_lock_irqsave(&mmio_priv->reg_lock, flags);
+ w5100_write16_direct(ndev, W5100_IDM_AR, addr);
+ w5100_write_direct(ndev, W5100_IDM_DR, data);
+ spin_unlock_irqrestore(&mmio_priv->reg_lock, flags);
+
+ return 0;
}
-static u16 w5100_read16_indirect(struct w5100_priv *priv, u16 addr)
+static int w5100_read16_indirect(struct net_device *ndev, u32 addr)
{
+ struct w5100_mmio_priv *mmio_priv = w5100_mmio_priv(ndev);
unsigned long flags;
u16 data;
- spin_lock_irqsave(&priv->reg_lock, flags);
- w5100_write16_direct(priv, W5100_IDM_AR, addr);
- mmiowb();
- data = w5100_read_direct(priv, W5100_IDM_DR) << 8;
- data |= w5100_read_direct(priv, W5100_IDM_DR);
- spin_unlock_irqrestore(&priv->reg_lock, flags);
+ spin_lock_irqsave(&mmio_priv->reg_lock, flags);
+ w5100_write16_direct(ndev, W5100_IDM_AR, addr);
+ data = w5100_read_direct(ndev, W5100_IDM_DR) << 8;
+ data |= w5100_read_direct(ndev, W5100_IDM_DR);
+ spin_unlock_irqrestore(&mmio_priv->reg_lock, flags);
return data;
}
-static void w5100_write16_indirect(struct w5100_priv *priv, u16 addr, u16 data)
+static int w5100_write16_indirect(struct net_device *ndev, u32 addr, u16 data)
{
+ struct w5100_mmio_priv *mmio_priv = w5100_mmio_priv(ndev);
unsigned long flags;
- spin_lock_irqsave(&priv->reg_lock, flags);
- w5100_write16_direct(priv, W5100_IDM_AR, addr);
- mmiowb();
- w5100_write_direct(priv, W5100_IDM_DR, data >> 8);
- w5100_write_direct(priv, W5100_IDM_DR, data);
- mmiowb();
- spin_unlock_irqrestore(&priv->reg_lock, flags);
+ spin_lock_irqsave(&mmio_priv->reg_lock, flags);
+ w5100_write16_direct(ndev, W5100_IDM_AR, addr);
+ __w5100_write_direct(ndev, W5100_IDM_DR, data >> 8);
+ w5100_write_direct(ndev, W5100_IDM_DR, data);
+ spin_unlock_irqrestore(&mmio_priv->reg_lock, flags);
+
+ return 0;
}
-static void w5100_readbuf_indirect(struct w5100_priv *priv,
- u16 offset, u8 *buf, int len)
+static int w5100_readbulk_indirect(struct net_device *ndev, u32 addr, u8 *buf,
+ int len)
{
- u16 addr = W5100_RX_MEM_START + (offset & W5100_RX_MEM_MASK);
+ struct w5100_mmio_priv *mmio_priv = w5100_mmio_priv(ndev);
unsigned long flags;
int i;
- spin_lock_irqsave(&priv->reg_lock, flags);
- w5100_write16_direct(priv, W5100_IDM_AR, addr);
- mmiowb();
+ spin_lock_irqsave(&mmio_priv->reg_lock, flags);
+ w5100_write16_direct(ndev, W5100_IDM_AR, addr);
+
+ for (i = 0; i < len; i++)
+ *buf++ = w5100_read_direct(ndev, W5100_IDM_DR);
- for (i = 0; i < len; i++, addr++) {
- if (unlikely(addr > W5100_RX_MEM_END)) {
- addr = W5100_RX_MEM_START;
- w5100_write16_direct(priv, W5100_IDM_AR, addr);
- mmiowb();
- }
- *buf++ = w5100_read_direct(priv, W5100_IDM_DR);
- }
mmiowb();
- spin_unlock_irqrestore(&priv->reg_lock, flags);
+ spin_unlock_irqrestore(&mmio_priv->reg_lock, flags);
+
+ return 0;
}
-static void w5100_writebuf_indirect(struct w5100_priv *priv,
- u16 offset, u8 *buf, int len)
+static int w5100_writebulk_indirect(struct net_device *ndev, u32 addr,
+ const u8 *buf, int len)
{
- u16 addr = W5100_TX_MEM_START + (offset & W5100_TX_MEM_MASK);
+ struct w5100_mmio_priv *mmio_priv = w5100_mmio_priv(ndev);
unsigned long flags;
int i;
- spin_lock_irqsave(&priv->reg_lock, flags);
- w5100_write16_direct(priv, W5100_IDM_AR, addr);
- mmiowb();
+ spin_lock_irqsave(&mmio_priv->reg_lock, flags);
+ w5100_write16_direct(ndev, W5100_IDM_AR, addr);
+
+ for (i = 0; i < len; i++)
+ __w5100_write_direct(ndev, W5100_IDM_DR, *buf++);
- for (i = 0; i < len; i++, addr++) {
- if (unlikely(addr > W5100_TX_MEM_END)) {
- addr = W5100_TX_MEM_START;
- w5100_write16_direct(priv, W5100_IDM_AR, addr);
- mmiowb();
- }
- w5100_write_direct(priv, W5100_IDM_DR, *buf++);
- }
mmiowb();
- spin_unlock_irqrestore(&priv->reg_lock, flags);
+ spin_unlock_irqrestore(&mmio_priv->reg_lock, flags);
+
+ return 0;
}
+static int w5100_reset_indirect(struct net_device *ndev)
+{
+ w5100_write_direct(ndev, W5100_MR, MR_RST);
+ mdelay(5);
+ w5100_write_direct(ndev, W5100_MR, MR_PB | MR_AI | MR_IND);
+
+ return 0;
+}
+
+static const struct w5100_ops w5100_mmio_indirect_ops = {
+ .chip_id = W5100,
+ .read = w5100_read_indirect,
+ .write = w5100_write_indirect,
+ .read16 = w5100_read16_indirect,
+ .write16 = w5100_write16_indirect,
+ .readbulk = w5100_readbulk_indirect,
+ .writebulk = w5100_writebulk_indirect,
+ .init = w5100_mmio_init,
+ .reset = w5100_reset_indirect,
+};
+
#if defined(CONFIG_WIZNET_BUS_DIRECT)
-#define w5100_read w5100_read_direct
-#define w5100_write w5100_write_direct
-#define w5100_read16 w5100_read16_direct
-#define w5100_write16 w5100_write16_direct
-#define w5100_readbuf w5100_readbuf_direct
-#define w5100_writebuf w5100_writebuf_direct
+
+static int w5100_read(struct w5100_priv *priv, u32 addr)
+{
+ return w5100_read_direct(priv->ndev, addr);
+}
+
+static int w5100_write(struct w5100_priv *priv, u32 addr, u8 data)
+{
+ return w5100_write_direct(priv->ndev, addr, data);
+}
+
+static int w5100_read16(struct w5100_priv *priv, u32 addr)
+{
+ return w5100_read16_direct(priv->ndev, addr);
+}
+
+static int w5100_write16(struct w5100_priv *priv, u32 addr, u16 data)
+{
+ return w5100_write16_direct(priv->ndev, addr, data);
+}
+
+static int w5100_readbulk(struct w5100_priv *priv, u32 addr, u8 *buf, int len)
+{
+ return w5100_readbulk_direct(priv->ndev, addr, buf, len);
+}
+
+static int w5100_writebulk(struct w5100_priv *priv, u32 addr, const u8 *buf,
+ int len)
+{
+ return w5100_writebulk_direct(priv->ndev, addr, buf, len);
+}
#elif defined(CONFIG_WIZNET_BUS_INDIRECT)
-#define w5100_read w5100_read_indirect
-#define w5100_write w5100_write_indirect
-#define w5100_read16 w5100_read16_indirect
-#define w5100_write16 w5100_write16_indirect
-#define w5100_readbuf w5100_readbuf_indirect
-#define w5100_writebuf w5100_writebuf_indirect
+
+static int w5100_read(struct w5100_priv *priv, u32 addr)
+{
+ return w5100_read_indirect(priv->ndev, addr);
+}
+
+static int w5100_write(struct w5100_priv *priv, u32 addr, u8 data)
+{
+ return w5100_write_indirect(priv->ndev, addr, data);
+}
+
+static int w5100_read16(struct w5100_priv *priv, u32 addr)
+{
+ return w5100_read16_indirect(priv->ndev, addr);
+}
+
+static int w5100_write16(struct w5100_priv *priv, u32 addr, u16 data)
+{
+ return w5100_write16_indirect(priv->ndev, addr, data);
+}
+
+static int w5100_readbulk(struct w5100_priv *priv, u32 addr, u8 *buf, int len)
+{
+ return w5100_readbulk_indirect(priv->ndev, addr, buf, len);
+}
+
+static int w5100_writebulk(struct w5100_priv *priv, u32 addr, const u8 *buf,
+ int len)
+{
+ return w5100_writebulk_indirect(priv->ndev, addr, buf, len);
+}
#else /* CONFIG_WIZNET_BUS_ANY */
-#define w5100_read priv->read
-#define w5100_write priv->write
-#define w5100_read16 priv->read16
-#define w5100_write16 priv->write16
-#define w5100_readbuf priv->readbuf
-#define w5100_writebuf priv->writebuf
+
+static int w5100_read(struct w5100_priv *priv, u32 addr)
+{
+ return priv->ops->read(priv->ndev, addr);
+}
+
+static int w5100_write(struct w5100_priv *priv, u32 addr, u8 data)
+{
+ return priv->ops->write(priv->ndev, addr, data);
+}
+
+static int w5100_read16(struct w5100_priv *priv, u32 addr)
+{
+ return priv->ops->read16(priv->ndev, addr);
+}
+
+static int w5100_write16(struct w5100_priv *priv, u32 addr, u16 data)
+{
+ return priv->ops->write16(priv->ndev, addr, data);
+}
+
+static int w5100_readbulk(struct w5100_priv *priv, u32 addr, u8 *buf, int len)
+{
+ return priv->ops->readbulk(priv->ndev, addr, buf, len);
+}
+
+static int w5100_writebulk(struct w5100_priv *priv, u32 addr, const u8 *buf,
+ int len)
+{
+ return priv->ops->writebulk(priv->ndev, addr, buf, len);
+}
+
#endif
+static int w5100_readbuf(struct w5100_priv *priv, u16 offset, u8 *buf, int len)
+{
+ u32 addr;
+ int remain = 0;
+ int ret;
+ const u32 mem_start = priv->s0_rx_buf;
+ const u16 mem_size = priv->s0_rx_buf_size;
+
+ offset %= mem_size;
+ addr = mem_start + offset;
+
+ if (offset + len > mem_size) {
+ remain = (offset + len) % mem_size;
+ len = mem_size - offset;
+ }
+
+ ret = w5100_readbulk(priv, addr, buf, len);
+ if (ret || !remain)
+ return ret;
+
+ return w5100_readbulk(priv, mem_start, buf + len, remain);
+}
+
+static int w5100_writebuf(struct w5100_priv *priv, u16 offset, const u8 *buf,
+ int len)
+{
+ u32 addr;
+ int ret;
+ int remain = 0;
+ const u32 mem_start = priv->s0_tx_buf;
+ const u16 mem_size = priv->s0_tx_buf_size;
+
+ offset %= mem_size;
+ addr = mem_start + offset;
+
+ if (offset + len > mem_size) {
+ remain = (offset + len) % mem_size;
+ len = mem_size - offset;
+ }
+
+ ret = w5100_writebulk(priv, addr, buf, len);
+ if (ret || !remain)
+ return ret;
+
+ return w5100_writebulk(priv, mem_start, buf + len, remain);
+}
+
+static int w5100_reset(struct w5100_priv *priv)
+{
+ if (priv->ops->reset)
+ return priv->ops->reset(priv->ndev);
+
+ w5100_write(priv, W5100_MR, MR_RST);
+ mdelay(5);
+ w5100_write(priv, W5100_MR, MR_PB);
+
+ return 0;
+}
+
static int w5100_command(struct w5100_priv *priv, u16 cmd)
{
- unsigned long timeout = jiffies + msecs_to_jiffies(100);
+ unsigned long timeout;
- w5100_write(priv, W5100_S0_CR, cmd);
- mmiowb();
+ w5100_write(priv, W5100_S0_CR(priv), cmd);
+
+ timeout = jiffies + msecs_to_jiffies(100);
- while (w5100_read(priv, W5100_S0_CR) != 0) {
+ while (w5100_read(priv, W5100_S0_CR(priv)) != 0) {
if (time_after(jiffies, timeout))
return -EIO;
cpu_relax();
@@ -323,47 +602,124 @@ static int w5100_command(struct w5100_priv *priv, u16 cmd)
static void w5100_write_macaddr(struct w5100_priv *priv)
{
struct net_device *ndev = priv->ndev;
- int i;
- for (i = 0; i < ETH_ALEN; i++)
- w5100_write(priv, W5100_SHAR + i, ndev->dev_addr[i]);
- mmiowb();
+ w5100_writebulk(priv, W5100_SHAR, ndev->dev_addr, ETH_ALEN);
}
-static void w5100_hw_reset(struct w5100_priv *priv)
+static void w5100_socket_intr_mask(struct w5100_priv *priv, u8 mask)
{
- w5100_write_direct(priv, W5100_MR, MR_RST);
- mmiowb();
- mdelay(5);
- w5100_write_direct(priv, W5100_MR, priv->indirect ?
- MR_PB | MR_AI | MR_IND :
- MR_PB);
- mmiowb();
- w5100_write(priv, W5100_IMR, 0);
- w5100_write_macaddr(priv);
+ u32 imr;
+
+ if (priv->ops->chip_id == W5500)
+ imr = W5500_SIMR;
+ else
+ imr = W5100_IMR;
+
+ w5100_write(priv, imr, mask);
+}
+static void w5100_enable_intr(struct w5100_priv *priv)
+{
+ w5100_socket_intr_mask(priv, IR_S0);
+}
+
+static void w5100_disable_intr(struct w5100_priv *priv)
+{
+ w5100_socket_intr_mask(priv, 0);
+}
+
+static void w5100_memory_configure(struct w5100_priv *priv)
+{
/* Configure 16K of internal memory
* as 8K RX buffer and 8K TX buffer
*/
w5100_write(priv, W5100_RMSR, 0x03);
w5100_write(priv, W5100_TMSR, 0x03);
- mmiowb();
+}
+
+static void w5200_memory_configure(struct w5100_priv *priv)
+{
+ int i;
+
+ /* Configure internal RX memory as 16K RX buffer and
+ * internal TX memory as 16K TX buffer
+ */
+ w5100_write(priv, W5200_Sn_RXMEM_SIZE(0), 0x10);
+ w5100_write(priv, W5200_Sn_TXMEM_SIZE(0), 0x10);
+
+ for (i = 1; i < 8; i++) {
+ w5100_write(priv, W5200_Sn_RXMEM_SIZE(i), 0);
+ w5100_write(priv, W5200_Sn_TXMEM_SIZE(i), 0);
+ }
+}
+
+static void w5500_memory_configure(struct w5100_priv *priv)
+{
+ int i;
+
+ /* Configure internal RX memory as 16K RX buffer and
+ * internal TX memory as 16K TX buffer
+ */
+ w5100_write(priv, W5500_Sn_RXMEM_SIZE(0), 0x10);
+ w5100_write(priv, W5500_Sn_TXMEM_SIZE(0), 0x10);
+
+ for (i = 1; i < 8; i++) {
+ w5100_write(priv, W5500_Sn_RXMEM_SIZE(i), 0);
+ w5100_write(priv, W5500_Sn_TXMEM_SIZE(i), 0);
+ }
+}
+
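+/* Full hardware (re)initialisation: soft-reset the chip, mask interrupts,
+ * program the MAC address and the per-chip buffer layout, then verify that
+ * the part is really present by checking that the retry-time register (RTR)
+ * still reads back its expected default value.
+ */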
+static int w5100_hw_reset(struct w5100_priv *priv)
+{
+ u32 rtr;
+
+ w5100_reset(priv);
+
+ w5100_disable_intr(priv);
+ w5100_write_macaddr(priv);
+
+ switch (priv->ops->chip_id) {
+ case W5100:
+ w5100_memory_configure(priv);
+ rtr = W5100_RTR;
+ break;
+ case W5200:
+ w5200_memory_configure(priv);
+ rtr = W5100_RTR;
+ break;
+ case W5500:
+ w5500_memory_configure(priv);
+ rtr = W5500_RTR;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (w5100_read16(priv, rtr) != RTR_DEFAULT)
+ return -ENODEV;
+
+ return 0;
}
static void w5100_hw_start(struct w5100_priv *priv)
{
- w5100_write(priv, W5100_S0_MR, priv->promisc ?
- S0_MR_MACRAW : S0_MR_MACRAW_MF);
- mmiowb();
+ u8 mode = S0_MR_MACRAW;
+
+ if (!priv->promisc) {
+ if (priv->ops->chip_id == W5500)
+ mode |= W5500_S0_MR_MF;
+ else
+ mode |= S0_MR_MF;
+ }
+
+ w5100_write(priv, W5100_S0_MR(priv), mode);
w5100_command(priv, S0_CR_OPEN);
- w5100_write(priv, W5100_IMR, IR_S0);
- mmiowb();
+ w5100_enable_intr(priv);
}
static void w5100_hw_close(struct w5100_priv *priv)
{
- w5100_write(priv, W5100_IMR, 0);
- mmiowb();
+ w5100_disable_intr(priv);
w5100_command(priv, S0_CR_CLOSE);
}
@@ -412,20 +768,17 @@ static int w5100_get_regs_len(struct net_device *ndev)
}
static void w5100_get_regs(struct net_device *ndev,
- struct ethtool_regs *regs, void *_buf)
+ struct ethtool_regs *regs, void *buf)
{
struct w5100_priv *priv = netdev_priv(ndev);
- u8 *buf = _buf;
- u16 i;
regs->version = 1;
- for (i = 0; i < W5100_COMMON_REGS_LEN; i++)
- *buf++ = w5100_read(priv, W5100_COMMON_REGS + i);
- for (i = 0; i < W5100_S0_REGS_LEN; i++)
- *buf++ = w5100_read(priv, W5100_S0_REGS + i);
+ w5100_readbulk(priv, W5100_COMMON_REGS, buf, W5100_COMMON_REGS_LEN);
+ buf += W5100_COMMON_REGS_LEN;
+ w5100_readbulk(priv, S0_REGS(priv), buf, W5100_S0_REGS_LEN);
}
-static void w5100_tx_timeout(struct net_device *ndev)
+static void w5100_restart(struct net_device *ndev)
{
struct w5100_priv *priv = netdev_priv(ndev);
@@ -433,74 +786,138 @@ static void w5100_tx_timeout(struct net_device *ndev)
w5100_hw_reset(priv);
w5100_hw_start(priv);
ndev->stats.tx_errors++;
- ndev->trans_start = jiffies;
+ netif_trans_update(ndev);
netif_wake_queue(ndev);
}
-static int w5100_start_tx(struct sk_buff *skb, struct net_device *ndev)
+static void w5100_restart_work(struct work_struct *work)
+{
+ struct w5100_priv *priv = container_of(work, struct w5100_priv,
+ restart_work);
+
+ w5100_restart(priv->ndev);
+}
+
+static void w5100_tx_timeout(struct net_device *ndev)
{
struct w5100_priv *priv = netdev_priv(ndev);
- u16 offset;
- netif_stop_queue(ndev);
+ if (priv->ops->may_sleep)
+ schedule_work(&priv->restart_work);
+ else
+ w5100_restart(ndev);
+}
- offset = w5100_read16(priv, W5100_S0_TX_WR);
+static void w5100_tx_skb(struct net_device *ndev, struct sk_buff *skb)
+{
+ struct w5100_priv *priv = netdev_priv(ndev);
+ u16 offset;
+
+ offset = w5100_read16(priv, W5100_S0_TX_WR(priv));
w5100_writebuf(priv, offset, skb->data, skb->len);
- w5100_write16(priv, W5100_S0_TX_WR, offset + skb->len);
- mmiowb();
+ w5100_write16(priv, W5100_S0_TX_WR(priv), offset + skb->len);
ndev->stats.tx_bytes += skb->len;
ndev->stats.tx_packets++;
dev_kfree_skb(skb);
w5100_command(priv, S0_CR_SEND);
+}
+
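+/* Deferred transmit path for buses that may sleep: w5100_start_tx() parks
+ * the skb in priv->tx_skb and queues this work item instead of touching
+ * chip registers directly from the xmit hook.
+ */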
+static void w5100_tx_work(struct work_struct *work)
+{
+ struct w5100_priv *priv = container_of(work, struct w5100_priv,
+ tx_work);
+ struct sk_buff *skb = priv->tx_skb;
+
+ priv->tx_skb = NULL;
+
+ if (WARN_ON(!skb))
+ return;
+ w5100_tx_skb(priv->ndev, skb);
+}
+
+static int w5100_start_tx(struct sk_buff *skb, struct net_device *ndev)
+{
+ struct w5100_priv *priv = netdev_priv(ndev);
+
+ netif_stop_queue(ndev);
+
+ if (priv->ops->may_sleep) {
+ WARN_ON(priv->tx_skb);
+ priv->tx_skb = skb;
+ queue_work(priv->xfer_wq, &priv->tx_work);
+ } else {
+ w5100_tx_skb(ndev, skb);
+ }
return NETDEV_TX_OK;
}
-static int w5100_napi_poll(struct napi_struct *napi, int budget)
+static struct sk_buff *w5100_rx_skb(struct net_device *ndev)
{
- struct w5100_priv *priv = container_of(napi, struct w5100_priv, napi);
- struct net_device *ndev = priv->ndev;
+ struct w5100_priv *priv = netdev_priv(ndev);
struct sk_buff *skb;
- int rx_count;
u16 rx_len;
u16 offset;
u8 header[2];
+ u16 rx_buf_len = w5100_read16(priv, W5100_S0_RX_RSR(priv));
- for (rx_count = 0; rx_count < budget; rx_count++) {
- u16 rx_buf_len = w5100_read16(priv, W5100_S0_RX_RSR);
- if (rx_buf_len == 0)
- break;
+ if (rx_buf_len == 0)
+ return NULL;
- offset = w5100_read16(priv, W5100_S0_RX_RD);
- w5100_readbuf(priv, offset, header, 2);
- rx_len = get_unaligned_be16(header) - 2;
-
- skb = netdev_alloc_skb_ip_align(ndev, rx_len);
- if (unlikely(!skb)) {
- w5100_write16(priv, W5100_S0_RX_RD,
- offset + rx_buf_len);
- w5100_command(priv, S0_CR_RECV);
- ndev->stats.rx_dropped++;
- return -ENOMEM;
- }
+ offset = w5100_read16(priv, W5100_S0_RX_RD(priv));
+ w5100_readbuf(priv, offset, header, 2);
+ rx_len = get_unaligned_be16(header) - 2;
- skb_put(skb, rx_len);
- w5100_readbuf(priv, offset + 2, skb->data, rx_len);
- w5100_write16(priv, W5100_S0_RX_RD, offset + 2 + rx_len);
- mmiowb();
+ skb = netdev_alloc_skb_ip_align(ndev, rx_len);
+ if (unlikely(!skb)) {
+ w5100_write16(priv, W5100_S0_RX_RD(priv), offset + rx_buf_len);
w5100_command(priv, S0_CR_RECV);
- skb->protocol = eth_type_trans(skb, ndev);
+ ndev->stats.rx_dropped++;
+ return NULL;
+ }
+
+ skb_put(skb, rx_len);
+ w5100_readbuf(priv, offset + 2, skb->data, rx_len);
+ w5100_write16(priv, W5100_S0_RX_RD(priv), offset + 2 + rx_len);
+ w5100_command(priv, S0_CR_RECV);
+ skb->protocol = eth_type_trans(skb, ndev);
+
+ ndev->stats.rx_packets++;
+ ndev->stats.rx_bytes += rx_len;
+
+ return skb;
+}
+
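+/* Receive path for buses that may sleep: NAPI polling runs in softirq
+ * context and may not sleep, so the interrupt handler queues this work item
+ * instead.  It drains frames with w5100_rx_skb(), passes them up via
+ * netif_rx_ni() and then re-enables the socket interrupt.
+ */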
+static void w5100_rx_work(struct work_struct *work)
+{
+ struct w5100_priv *priv = container_of(work, struct w5100_priv,
+ rx_work);
+ struct sk_buff *skb;
+
+ while ((skb = w5100_rx_skb(priv->ndev)))
+ netif_rx_ni(skb);
+
+ w5100_enable_intr(priv);
+}
+
+static int w5100_napi_poll(struct napi_struct *napi, int budget)
+{
+ struct w5100_priv *priv = container_of(napi, struct w5100_priv, napi);
+ int rx_count;
+
+ for (rx_count = 0; rx_count < budget; rx_count++) {
+ struct sk_buff *skb = w5100_rx_skb(priv->ndev);
- netif_receive_skb(skb);
- ndev->stats.rx_packets++;
- ndev->stats.rx_bytes += rx_len;
+ if (skb)
+ netif_receive_skb(skb);
+ else
+ break;
}
if (rx_count < budget) {
napi_complete(napi);
- w5100_write(priv, W5100_IMR, IR_S0);
- mmiowb();
+ w5100_enable_intr(priv);
}
return rx_count;
@@ -511,11 +928,10 @@ static irqreturn_t w5100_interrupt(int irq, void *ndev_instance)
struct net_device *ndev = ndev_instance;
struct w5100_priv *priv = netdev_priv(ndev);
- int ir = w5100_read(priv, W5100_S0_IR);
+ int ir = w5100_read(priv, W5100_S0_IR(priv));
if (!ir)
return IRQ_NONE;
- w5100_write(priv, W5100_S0_IR, ir);
- mmiowb();
+ w5100_write(priv, W5100_S0_IR(priv), ir);
if (ir & S0_IR_SENDOK) {
netif_dbg(priv, tx_done, ndev, "tx done\n");
@@ -523,11 +939,12 @@ static irqreturn_t w5100_interrupt(int irq, void *ndev_instance)
}
if (ir & S0_IR_RECV) {
- if (napi_schedule_prep(&priv->napi)) {
- w5100_write(priv, W5100_IMR, 0);
- mmiowb();
+ w5100_disable_intr(priv);
+
+ if (priv->ops->may_sleep)
+ queue_work(priv->xfer_wq, &priv->rx_work);
+ else if (napi_schedule_prep(&priv->napi))
__napi_schedule(&priv->napi);
- }
}
return IRQ_HANDLED;
@@ -551,6 +968,14 @@ static irqreturn_t w5100_detect_link(int irq, void *ndev_instance)
return IRQ_HANDLED;
}
+static void w5100_setrx_work(struct work_struct *work)
+{
+ struct w5100_priv *priv = container_of(work, struct w5100_priv,
+ setrx_work);
+
+ w5100_hw_start(priv);
+}
+
static void w5100_set_rx_mode(struct net_device *ndev)
{
struct w5100_priv *priv = netdev_priv(ndev);
@@ -558,7 +983,11 @@ static void w5100_set_rx_mode(struct net_device *ndev)
if (priv->promisc != set_promisc) {
priv->promisc = set_promisc;
- w5100_hw_start(priv);
+
+ if (priv->ops->may_sleep)
+ schedule_work(&priv->setrx_work);
+ else
+ w5100_hw_start(priv);
}
}
@@ -620,95 +1049,100 @@ static const struct net_device_ops w5100_netdev_ops = {
.ndo_change_mtu = eth_change_mtu,
};
-static int w5100_hw_probe(struct platform_device *pdev)
+static int w5100_mmio_probe(struct platform_device *pdev)
{
struct wiznet_platform_data *data = dev_get_platdata(&pdev->dev);
- struct net_device *ndev = platform_get_drvdata(pdev);
- struct w5100_priv *priv = netdev_priv(ndev);
- const char *name = netdev_name(ndev);
+ const void *mac_addr = NULL;
struct resource *mem;
- int mem_size;
+ const struct w5100_ops *ops;
int irq;
- int ret;
- if (data && is_valid_ether_addr(data->mac_addr)) {
- memcpy(ndev->dev_addr, data->mac_addr, ETH_ALEN);
- } else {
- eth_hw_addr_random(ndev);
- }
+ if (data && is_valid_ether_addr(data->mac_addr))
+ mac_addr = data->mac_addr;
mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- priv->base = devm_ioremap_resource(&pdev->dev, mem);
- if (IS_ERR(priv->base))
- return PTR_ERR(priv->base);
-
- mem_size = resource_size(mem);
-
- spin_lock_init(&priv->reg_lock);
- priv->indirect = mem_size < W5100_BUS_DIRECT_SIZE;
- if (priv->indirect) {
- priv->read = w5100_read_indirect;
- priv->write = w5100_write_indirect;
- priv->read16 = w5100_read16_indirect;
- priv->write16 = w5100_write16_indirect;
- priv->readbuf = w5100_readbuf_indirect;
- priv->writebuf = w5100_writebuf_indirect;
- } else {
- priv->read = w5100_read_direct;
- priv->write = w5100_write_direct;
- priv->read16 = w5100_read16_direct;
- priv->write16 = w5100_write16_direct;
- priv->readbuf = w5100_readbuf_direct;
- priv->writebuf = w5100_writebuf_direct;
- }
-
- w5100_hw_reset(priv);
- if (w5100_read16(priv, W5100_RTR) != RTR_DEFAULT)
- return -ENODEV;
+ if (resource_size(mem) < W5100_BUS_DIRECT_SIZE)
+ ops = &w5100_mmio_indirect_ops;
+ else
+ ops = &w5100_mmio_direct_ops;
irq = platform_get_irq(pdev, 0);
if (irq < 0)
return irq;
- ret = request_irq(irq, w5100_interrupt,
- IRQ_TYPE_LEVEL_LOW, name, ndev);
- if (ret < 0)
- return ret;
- priv->irq = irq;
- priv->link_gpio = data ? data->link_gpio : -EINVAL;
- if (gpio_is_valid(priv->link_gpio)) {
- char *link_name = devm_kzalloc(&pdev->dev, 16, GFP_KERNEL);
- if (!link_name)
- return -ENOMEM;
- snprintf(link_name, 16, "%s-link", name);
- priv->link_irq = gpio_to_irq(priv->link_gpio);
- if (request_any_context_irq(priv->link_irq, w5100_detect_link,
- IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING,
- link_name, priv->ndev) < 0)
- priv->link_gpio = -EINVAL;
- }
+ return w5100_probe(&pdev->dev, ops, sizeof(struct w5100_mmio_priv),
+ mac_addr, irq, data ? data->link_gpio : -EINVAL);
+}
- netdev_info(ndev, "at 0x%llx irq %d\n", (u64)mem->start, irq);
- return 0;
+static int w5100_mmio_remove(struct platform_device *pdev)
+{
+ return w5100_remove(&pdev->dev);
+}
+
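+/* Bus glue drivers can request a private area of their own (sizeof_ops_priv
+ * in w5100_probe()); it is carved out of the same netdev allocation, right
+ * after the NETDEV_ALIGN-aligned struct w5100_priv, and retrieved through
+ * this helper.
+ */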
+void *w5100_ops_priv(const struct net_device *ndev)
+{
+ return netdev_priv(ndev) +
+ ALIGN(sizeof(struct w5100_priv), NETDEV_ALIGN);
}
+EXPORT_SYMBOL_GPL(w5100_ops_priv);
-static int w5100_probe(struct platform_device *pdev)
+int w5100_probe(struct device *dev, const struct w5100_ops *ops,
+ int sizeof_ops_priv, const void *mac_addr, int irq,
+ int link_gpio)
{
struct w5100_priv *priv;
struct net_device *ndev;
int err;
+ size_t alloc_size;
- ndev = alloc_etherdev(sizeof(*priv));
+ alloc_size = sizeof(*priv);
+ if (sizeof_ops_priv) {
+ alloc_size = ALIGN(alloc_size, NETDEV_ALIGN);
+ alloc_size += sizeof_ops_priv;
+ }
+ alloc_size += NETDEV_ALIGN - 1;
+
+ ndev = alloc_etherdev(alloc_size);
if (!ndev)
return -ENOMEM;
- SET_NETDEV_DEV(ndev, &pdev->dev);
- platform_set_drvdata(pdev, ndev);
+ SET_NETDEV_DEV(ndev, dev);
+ dev_set_drvdata(dev, ndev);
priv = netdev_priv(ndev);
+
+ switch (ops->chip_id) {
+ case W5100:
+ priv->s0_regs = W5100_S0_REGS;
+ priv->s0_tx_buf = W5100_TX_MEM_START;
+ priv->s0_tx_buf_size = W5100_TX_MEM_SIZE;
+ priv->s0_rx_buf = W5100_RX_MEM_START;
+ priv->s0_rx_buf_size = W5100_RX_MEM_SIZE;
+ break;
+ case W5200:
+ priv->s0_regs = W5200_S0_REGS;
+ priv->s0_tx_buf = W5200_TX_MEM_START;
+ priv->s0_tx_buf_size = W5200_TX_MEM_SIZE;
+ priv->s0_rx_buf = W5200_RX_MEM_START;
+ priv->s0_rx_buf_size = W5200_RX_MEM_SIZE;
+ break;
+ case W5500:
+ priv->s0_regs = W5500_S0_REGS;
+ priv->s0_tx_buf = W5500_TX_MEM_START;
+ priv->s0_tx_buf_size = W5500_TX_MEM_SIZE;
+ priv->s0_rx_buf = W5500_RX_MEM_START;
+ priv->s0_rx_buf_size = W5500_RX_MEM_SIZE;
+ break;
+ default:
+ err = -EINVAL;
+ goto err_register;
+ }
+
priv->ndev = ndev;
+ priv->ops = ops;
+ priv->irq = irq;
+ priv->link_gpio = link_gpio;
ndev->netdev_ops = &w5100_netdev_ops;
ndev->ethtool_ops = &w5100_ethtool_ops;
- ndev->watchdog_timeo = HZ;
netif_napi_add(ndev, &priv->napi, w5100_napi_poll, 16);
/* This chip doesn't support VLAN packets with normal MTU,
@@ -720,22 +1154,76 @@ static int w5100_probe(struct platform_device *pdev)
if (err < 0)
goto err_register;
- err = w5100_hw_probe(pdev);
- if (err < 0)
- goto err_hw_probe;
+ priv->xfer_wq = create_workqueue(netdev_name(ndev));
+ if (!priv->xfer_wq) {
+ err = -ENOMEM;
+ goto err_wq;
+ }
+
+ INIT_WORK(&priv->rx_work, w5100_rx_work);
+ INIT_WORK(&priv->tx_work, w5100_tx_work);
+ INIT_WORK(&priv->setrx_work, w5100_setrx_work);
+ INIT_WORK(&priv->restart_work, w5100_restart_work);
+
+ if (mac_addr)
+ memcpy(ndev->dev_addr, mac_addr, ETH_ALEN);
+ else
+ eth_hw_addr_random(ndev);
+
+ if (priv->ops->init) {
+ err = priv->ops->init(priv->ndev);
+ if (err)
+ goto err_hw;
+ }
+
+ err = w5100_hw_reset(priv);
+ if (err)
+ goto err_hw;
+
+ if (ops->may_sleep) {
+ err = request_threaded_irq(priv->irq, NULL, w5100_interrupt,
+ IRQF_TRIGGER_LOW | IRQF_ONESHOT,
+ netdev_name(ndev), ndev);
+ } else {
+ err = request_irq(priv->irq, w5100_interrupt,
+ IRQF_TRIGGER_LOW, netdev_name(ndev), ndev);
+ }
+ if (err)
+ goto err_hw;
+
+ if (gpio_is_valid(priv->link_gpio)) {
+ char *link_name = devm_kzalloc(dev, 16, GFP_KERNEL);
+
+ if (!link_name) {
+ err = -ENOMEM;
+ goto err_gpio;
+ }
+ snprintf(link_name, 16, "%s-link", netdev_name(ndev));
+ priv->link_irq = gpio_to_irq(priv->link_gpio);
+ if (request_any_context_irq(priv->link_irq, w5100_detect_link,
+ IRQF_TRIGGER_RISING |
+ IRQF_TRIGGER_FALLING,
+ link_name, priv->ndev) < 0)
+ priv->link_gpio = -EINVAL;
+ }
return 0;
-err_hw_probe:
+err_gpio:
+ free_irq(priv->irq, ndev);
+err_hw:
+ destroy_workqueue(priv->xfer_wq);
+err_wq:
unregister_netdev(ndev);
err_register:
free_netdev(ndev);
return err;
}
+EXPORT_SYMBOL_GPL(w5100_probe);
-static int w5100_remove(struct platform_device *pdev)
+int w5100_remove(struct device *dev)
{
- struct net_device *ndev = platform_get_drvdata(pdev);
+ struct net_device *ndev = dev_get_drvdata(dev);
struct w5100_priv *priv = netdev_priv(ndev);
w5100_hw_reset(priv);
@@ -743,16 +1231,21 @@ static int w5100_remove(struct platform_device *pdev)
if (gpio_is_valid(priv->link_gpio))
free_irq(priv->link_irq, ndev);
+ flush_work(&priv->setrx_work);
+ flush_work(&priv->restart_work);
+ flush_workqueue(priv->xfer_wq);
+ destroy_workqueue(priv->xfer_wq);
+
unregister_netdev(ndev);
free_netdev(ndev);
return 0;
}
+EXPORT_SYMBOL_GPL(w5100_remove);
#ifdef CONFIG_PM_SLEEP
static int w5100_suspend(struct device *dev)
{
- struct platform_device *pdev = to_platform_device(dev);
- struct net_device *ndev = platform_get_drvdata(pdev);
+ struct net_device *ndev = dev_get_drvdata(dev);
struct w5100_priv *priv = netdev_priv(ndev);
if (netif_running(ndev)) {
@@ -766,8 +1259,7 @@ static int w5100_suspend(struct device *dev)
static int w5100_resume(struct device *dev)
{
- struct platform_device *pdev = to_platform_device(dev);
- struct net_device *ndev = platform_get_drvdata(pdev);
+ struct net_device *ndev = dev_get_drvdata(dev);
struct w5100_priv *priv = netdev_priv(ndev);
if (netif_running(ndev)) {
@@ -783,15 +1275,15 @@ static int w5100_resume(struct device *dev)
}
#endif /* CONFIG_PM_SLEEP */
-static SIMPLE_DEV_PM_OPS(w5100_pm_ops, w5100_suspend, w5100_resume);
+SIMPLE_DEV_PM_OPS(w5100_pm_ops, w5100_suspend, w5100_resume);
+EXPORT_SYMBOL_GPL(w5100_pm_ops);
-static struct platform_driver w5100_driver = {
+static struct platform_driver w5100_mmio_driver = {
.driver = {
.name = DRV_NAME,
.pm = &w5100_pm_ops,
},
- .probe = w5100_probe,
- .remove = w5100_remove,
+ .probe = w5100_mmio_probe,
+ .remove = w5100_mmio_remove,
};
-
-module_platform_driver(w5100_driver);
+module_platform_driver(w5100_mmio_driver);
diff --git a/drivers/net/ethernet/wiznet/w5100.h b/drivers/net/ethernet/wiznet/w5100.h
new file mode 100644
index 000000000000..17983a3b8d6c
--- /dev/null
+++ b/drivers/net/ethernet/wiznet/w5100.h
@@ -0,0 +1,37 @@
+/*
+ * Ethernet driver for the WIZnet W5100 chip.
+ *
+ * Copyright (C) 2006-2008 WIZnet Co.,Ltd.
+ * Copyright (C) 2012 Mike Sinkovsky <msink@permonline.ru>
+ *
+ * Licensed under the GPL-2 or later.
+ */
+
+enum {
+ W5100,
+ W5200,
+ W5500,
+};
+
+struct w5100_ops {
+ bool may_sleep;
+ int chip_id;
+ int (*read)(struct net_device *ndev, u32 addr);
+ int (*write)(struct net_device *ndev, u32 addr, u8 data);
+ int (*read16)(struct net_device *ndev, u32 addr);
+ int (*write16)(struct net_device *ndev, u32 addr, u16 data);
+ int (*readbulk)(struct net_device *ndev, u32 addr, u8 *buf, int len);
+ int (*writebulk)(struct net_device *ndev, u32 addr, const u8 *buf,
+ int len);
+ int (*reset)(struct net_device *ndev);
+ int (*init)(struct net_device *ndev);
+};
+
+void *w5100_ops_priv(const struct net_device *ndev);
+
+int w5100_probe(struct device *dev, const struct w5100_ops *ops,
+ int sizeof_ops_priv, const void *mac_addr, int irq,
+ int link_gpio);
+int w5100_remove(struct device *dev);
+
+extern const struct dev_pm_ops w5100_pm_ops;
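+
+/* Typical usage from a bus glue driver (illustrative sketch only; the
+ * my_bus_* identifiers below are not defined anywhere in this patch):
+ *
+ *	static const struct w5100_ops my_bus_ops = {
+ *		.may_sleep = true,
+ *		.chip_id   = W5500,
+ *		.read      = my_bus_read,
+ *		.write     = my_bus_write,
+ *		.read16    = my_bus_read16,
+ *		.write16   = my_bus_write16,
+ *		.readbulk  = my_bus_readbulk,
+ *		.writebulk = my_bus_writebulk,
+ *	};
+ *
+ *	probe:	w5100_probe(dev, &my_bus_ops, sizeof(struct my_bus_priv),
+ *			    mac_addr, irq, -EINVAL);
+ *	remove:	w5100_remove(dev);
+ */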
diff --git a/drivers/net/ethernet/wiznet/w5300.c b/drivers/net/ethernet/wiznet/w5300.c
index 8da7b930ff59..0b37ce9f28f1 100644
--- a/drivers/net/ethernet/wiznet/w5300.c
+++ b/drivers/net/ethernet/wiznet/w5300.c
@@ -362,7 +362,7 @@ static void w5300_tx_timeout(struct net_device *ndev)
w5300_hw_reset(priv);
w5300_hw_start(priv);
ndev->stats.tx_errors++;
- ndev->trans_start = jiffies;
+ netif_trans_update(ndev);
netif_wake_queue(ndev);
}
diff --git a/drivers/net/ethernet/xilinx/ll_temac_main.c b/drivers/net/ethernet/xilinx/ll_temac_main.c
index 5a1068df7038..739708712022 100644
--- a/drivers/net/ethernet/xilinx/ll_temac_main.c
+++ b/drivers/net/ethernet/xilinx/ll_temac_main.c
@@ -584,7 +584,7 @@ static void temac_device_reset(struct net_device *ndev)
dev_err(&ndev->dev, "Error setting TEMAC options\n");
/* Init Driver variable */
- ndev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(ndev); /* prevent tx timeout */
}
static void temac_adjust_link(struct net_device *ndev)
diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
index 4684644703cc..8c7f5be51e62 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
@@ -508,7 +508,7 @@ static void axienet_device_reset(struct net_device *ndev)
axienet_set_multicast_list(ndev);
axienet_setoptions(ndev, lp->options);
- ndev->trans_start = jiffies;
+ netif_trans_update(ndev);
}
/**
diff --git a/drivers/net/ethernet/xilinx/xilinx_emaclite.c b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
index e324b3092380..3cee84a24815 100644
--- a/drivers/net/ethernet/xilinx/xilinx_emaclite.c
+++ b/drivers/net/ethernet/xilinx/xilinx_emaclite.c
@@ -531,7 +531,7 @@ static void xemaclite_tx_timeout(struct net_device *dev)
}
/* To exclude tx timeout */
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
/* We're all ready to go. Start the queue */
netif_wake_queue(dev);
@@ -563,7 +563,7 @@ static void xemaclite_tx_handler(struct net_device *dev)
dev->stats.tx_bytes += lp->deferred_skb->len;
dev_kfree_skb_irq(lp->deferred_skb);
lp->deferred_skb = NULL;
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue(dev);
}
}
diff --git a/drivers/net/ethernet/xircom/xirc2ps_cs.c b/drivers/net/ethernet/xircom/xirc2ps_cs.c
index d56f8693202b..7b44968e02e6 100644
--- a/drivers/net/ethernet/xircom/xirc2ps_cs.c
+++ b/drivers/net/ethernet/xircom/xirc2ps_cs.c
@@ -1199,7 +1199,7 @@ xirc2ps_tx_timeout_task(struct work_struct *work)
struct net_device *dev = local->dev;
/* reset the card */
do_reset(dev,1);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue(dev);
}
diff --git a/drivers/net/fjes/fjes_hw.c b/drivers/net/fjes/fjes_hw.c
index b103adb8d62e..0dbafedc0a34 100644
--- a/drivers/net/fjes/fjes_hw.c
+++ b/drivers/net/fjes/fjes_hw.c
@@ -179,6 +179,8 @@ void fjes_hw_setup_epbuf(struct epbuf_handler *epbh, u8 *mac_addr, u32 mtu)
for (i = 0; i < EP_BUFFER_SUPPORT_VLAN_MAX; i++)
info->v1i.vlan_id[i] = vlan_id[i];
+
+ info->v1i.rx_status |= FJES_RX_MTU_CHANGING_DONE;
}
void
@@ -214,6 +216,7 @@ static int fjes_hw_setup(struct fjes_hw *hw)
u8 mac[ETH_ALEN] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
struct fjes_device_command_param param;
struct ep_share_mem_info *buf_pair;
+ unsigned long flags;
size_t mem_size;
int result;
int epidx;
@@ -262,10 +265,12 @@ static int fjes_hw_setup(struct fjes_hw *hw)
if (result)
return result;
+ spin_lock_irqsave(&hw->rx_status_lock, flags);
fjes_hw_setup_epbuf(&buf_pair->tx, mac,
fjes_support_mtu[0]);
fjes_hw_setup_epbuf(&buf_pair->rx, mac,
fjes_support_mtu[0]);
+ spin_unlock_irqrestore(&hw->rx_status_lock, flags);
}
}
@@ -327,6 +332,7 @@ int fjes_hw_init(struct fjes_hw *hw)
INIT_WORK(&hw->epstop_task, fjes_hw_epstop_task);
mutex_init(&hw->hw_info.lock);
+ spin_lock_init(&hw->rx_status_lock);
hw->max_epid = fjes_hw_get_max_epid(hw);
hw->my_epid = fjes_hw_get_my_epid(hw);
@@ -734,6 +740,7 @@ fjes_hw_get_partner_ep_status(struct fjes_hw *hw, int epid)
void fjes_hw_raise_epstop(struct fjes_hw *hw)
{
enum ep_partner_status status;
+ unsigned long flags;
int epidx;
for (epidx = 0; epidx < hw->max_epid; epidx++) {
@@ -753,8 +760,10 @@ void fjes_hw_raise_epstop(struct fjes_hw *hw)
set_bit(epidx, &hw->hw_info.buffer_unshare_reserve_bit);
set_bit(epidx, &hw->txrx_stop_req_bit);
+ spin_lock_irqsave(&hw->rx_status_lock, flags);
hw->ep_shm_info[epidx].tx.info->v1i.rx_status |=
FJES_RX_STOP_REQ_REQUEST;
+ spin_unlock_irqrestore(&hw->rx_status_lock, flags);
}
}
@@ -810,7 +819,8 @@ bool fjes_hw_check_mtu(struct epbuf_handler *epbh, u32 mtu)
{
union ep_buffer_info *info = epbh->info;
- return (info->v1i.frame_max == FJES_MTU_TO_FRAME_SIZE(mtu));
+ return ((info->v1i.frame_max == FJES_MTU_TO_FRAME_SIZE(mtu)) &&
+ info->v1i.rx_status & FJES_RX_MTU_CHANGING_DONE);
}
bool fjes_hw_check_vlan_id(struct epbuf_handler *epbh, u16 vlan_id)
@@ -863,6 +873,9 @@ bool fjes_hw_epbuf_rx_is_empty(struct epbuf_handler *epbh)
{
union ep_buffer_info *info = epbh->info;
+ if (!(info->v1i.rx_status & FJES_RX_MTU_CHANGING_DONE))
+ return true;
+
if (info->v1i.count_max == 0)
return true;
@@ -932,6 +945,7 @@ static void fjes_hw_update_zone_task(struct work_struct *work)
struct fjes_adapter *adapter;
struct net_device *netdev;
+ unsigned long flags;
ulong unshare_bit = 0;
ulong share_bit = 0;
@@ -1024,8 +1038,10 @@ static void fjes_hw_update_zone_task(struct work_struct *work)
continue;
if (test_bit(epidx, &share_bit)) {
+ spin_lock_irqsave(&hw->rx_status_lock, flags);
fjes_hw_setup_epbuf(&hw->ep_shm_info[epidx].tx,
netdev->dev_addr, netdev->mtu);
+ spin_unlock_irqrestore(&hw->rx_status_lock, flags);
mutex_lock(&hw->hw_info.lock);
@@ -1069,10 +1085,14 @@ static void fjes_hw_update_zone_task(struct work_struct *work)
mutex_unlock(&hw->hw_info.lock);
- if (ret == 0)
+ if (ret == 0) {
+ spin_lock_irqsave(&hw->rx_status_lock, flags);
fjes_hw_setup_epbuf(
&hw->ep_shm_info[epidx].tx,
netdev->dev_addr, netdev->mtu);
+ spin_unlock_irqrestore(&hw->rx_status_lock,
+ flags);
+ }
}
if (test_bit(epidx, &irq_bit)) {
@@ -1080,9 +1100,11 @@ static void fjes_hw_update_zone_task(struct work_struct *work)
REG_ICTL_MASK_TXRX_STOP_REQ);
set_bit(epidx, &hw->txrx_stop_req_bit);
+ spin_lock_irqsave(&hw->rx_status_lock, flags);
hw->ep_shm_info[epidx].tx.
info->v1i.rx_status |=
FJES_RX_STOP_REQ_REQUEST;
+ spin_unlock_irqrestore(&hw->rx_status_lock, flags);
set_bit(epidx, &hw->hw_info.buffer_unshare_reserve_bit);
}
}
@@ -1098,6 +1120,7 @@ static void fjes_hw_epstop_task(struct work_struct *work)
{
struct fjes_hw *hw = container_of(work, struct fjes_hw, epstop_task);
struct fjes_adapter *adapter = (struct fjes_adapter *)hw->back;
+ unsigned long flags;
ulong remain_bit;
int epid_bit;
@@ -1105,9 +1128,12 @@ static void fjes_hw_epstop_task(struct work_struct *work)
while ((remain_bit = hw->epstop_req_bit)) {
for (epid_bit = 0; remain_bit; remain_bit >>= 1, epid_bit++) {
if (remain_bit & 1) {
+ spin_lock_irqsave(&hw->rx_status_lock, flags);
hw->ep_shm_info[epid_bit].
tx.info->v1i.rx_status |=
FJES_RX_STOP_REQ_DONE;
+ spin_unlock_irqrestore(&hw->rx_status_lock,
+ flags);
clear_bit(epid_bit, &hw->epstop_req_bit);
set_bit(epid_bit,
diff --git a/drivers/net/fjes/fjes_hw.h b/drivers/net/fjes/fjes_hw.h
index 6d57b89a0ee8..1445ac99d6e3 100644
--- a/drivers/net/fjes/fjes_hw.h
+++ b/drivers/net/fjes/fjes_hw.h
@@ -33,9 +33,9 @@ struct fjes_hw;
#define EP_BUFFER_SUPPORT_VLAN_MAX 4
#define EP_BUFFER_INFO_SIZE 4096
-#define FJES_DEVICE_RESET_TIMEOUT ((17 + 1) * 3) /* sec */
-#define FJES_COMMAND_REQ_TIMEOUT (5 + 1) /* sec */
-#define FJES_COMMAND_REQ_BUFF_TIMEOUT (8 * 3) /* sec */
+#define FJES_DEVICE_RESET_TIMEOUT ((17 + 1) * 3 * 8) /* sec */
+#define FJES_COMMAND_REQ_TIMEOUT ((5 + 1) * 3 * 8) /* sec */
+#define FJES_COMMAND_REQ_BUFF_TIMEOUT (60 * 3) /* sec */
#define FJES_COMMAND_EPSTOP_WAIT_TIMEOUT (1) /* sec */
#define FJES_CMD_REQ_ERR_INFO_PARAM (0x0001)
@@ -57,6 +57,7 @@ struct fjes_hw;
#define FJES_RX_STOP_REQ_DONE (0x1)
#define FJES_RX_STOP_REQ_REQUEST (0x2)
#define FJES_RX_POLL_WORK (0x4)
+#define FJES_RX_MTU_CHANGING_DONE (0x8)
#define EP_BUFFER_SIZE \
(((sizeof(union ep_buffer_info) + (128 * (64 * 1024))) \
@@ -299,6 +300,8 @@ struct fjes_hw {
u8 *base;
struct fjes_hw_info hw_info;
+
+ spinlock_t rx_status_lock; /* spinlock for rx_status */
};
int fjes_hw_init(struct fjes_hw *);
diff --git a/drivers/net/fjes/fjes_main.c b/drivers/net/fjes/fjes_main.c
index 0ddb54fe3d91..86c331bb5eb3 100644
--- a/drivers/net/fjes/fjes_main.c
+++ b/drivers/net/fjes/fjes_main.c
@@ -29,7 +29,7 @@
#include "fjes.h"
#define MAJ 1
-#define MIN 0
+#define MIN 1
#define DRV_VERSION __stringify(MAJ) "." __stringify(MIN)
#define DRV_NAME "fjes"
char fjes_driver_name[] = DRV_NAME;
@@ -290,6 +290,7 @@ static int fjes_close(struct net_device *netdev)
{
struct fjes_adapter *adapter = netdev_priv(netdev);
struct fjes_hw *hw = &adapter->hw;
+ unsigned long flags;
int epidx;
netif_tx_stop_all_queues(netdev);
@@ -299,13 +300,18 @@ static int fjes_close(struct net_device *netdev)
napi_disable(&adapter->napi);
+ spin_lock_irqsave(&hw->rx_status_lock, flags);
for (epidx = 0; epidx < hw->max_epid; epidx++) {
if (epidx == hw->my_epid)
continue;
- adapter->hw.ep_shm_info[epidx].tx.info->v1i.rx_status &=
- ~FJES_RX_POLL_WORK;
+ if (fjes_hw_get_partner_ep_status(hw, epidx) ==
+ EP_PARTNER_SHARED)
+ adapter->hw.ep_shm_info[epidx]
+ .tx.info->v1i.rx_status &=
+ ~FJES_RX_POLL_WORK;
}
+ spin_unlock_irqrestore(&hw->rx_status_lock, flags);
fjes_free_irq(adapter);
@@ -330,6 +336,7 @@ static int fjes_setup_resources(struct fjes_adapter *adapter)
struct net_device *netdev = adapter->netdev;
struct ep_share_mem_info *buf_pair;
struct fjes_hw *hw = &adapter->hw;
+ unsigned long flags;
int result;
int epidx;
@@ -371,8 +378,10 @@ static int fjes_setup_resources(struct fjes_adapter *adapter)
buf_pair = &hw->ep_shm_info[epidx];
+ spin_lock_irqsave(&hw->rx_status_lock, flags);
fjes_hw_setup_epbuf(&buf_pair->tx, netdev->dev_addr,
netdev->mtu);
+ spin_unlock_irqrestore(&hw->rx_status_lock, flags);
if (fjes_hw_epid_is_same_zone(hw, epidx)) {
mutex_lock(&hw->hw_info.lock);
@@ -402,6 +411,7 @@ static void fjes_free_resources(struct fjes_adapter *adapter)
struct ep_share_mem_info *buf_pair;
struct fjes_hw *hw = &adapter->hw;
bool reset_flag = false;
+ unsigned long flags;
int result;
int epidx;
@@ -418,8 +428,10 @@ static void fjes_free_resources(struct fjes_adapter *adapter)
buf_pair = &hw->ep_shm_info[epidx];
+ spin_lock_irqsave(&hw->rx_status_lock, flags);
fjes_hw_setup_epbuf(&buf_pair->tx,
netdev->dev_addr, netdev->mtu);
+ spin_unlock_irqrestore(&hw->rx_status_lock, flags);
clear_bit(epidx, &hw->txrx_stop_req_bit);
}
@@ -459,7 +471,7 @@ static void fjes_tx_stall_task(struct work_struct *work)
int i;
if (((long)jiffies -
- (long)(netdev->trans_start)) > FJES_TX_TX_STALL_TIMEOUT) {
+ dev_trans_start(netdev)) > FJES_TX_TX_STALL_TIMEOUT) {
netif_wake_queue(netdev);
return;
}
@@ -481,6 +493,9 @@ static void fjes_tx_stall_task(struct work_struct *work)
info = adapter->hw.ep_shm_info[epid].tx.info;
+ if (!(info->v1i.rx_status & FJES_RX_MTU_CHANGING_DONE))
+ return;
+
if (EP_RING_FULL(info->v1i.head, info->v1i.tail,
info->v1i.count_max)) {
all_queue_available = 0;
@@ -549,7 +564,8 @@ static void fjes_raise_intr_rxdata_task(struct work_struct *work)
if ((hw->ep_shm_info[epid].tx_status_work ==
FJES_TX_DELAY_SEND_PENDING) &&
(pstatus == EP_PARTNER_SHARED) &&
- !(hw->ep_shm_info[epid].rx.info->v1i.rx_status)) {
+ !(hw->ep_shm_info[epid].rx.info->v1i.rx_status &
+ FJES_RX_POLL_WORK)) {
fjes_hw_raise_interrupt(hw, epid,
REG_ICTL_MASK_RX_DATA);
}
@@ -653,7 +669,7 @@ fjes_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
&adapter->hw.ep_shm_info[dest_epid].rx, 0)) {
/* version is NOT 0 */
adapter->stats64.tx_carrier_errors += 1;
- hw->ep_shm_info[my_epid].net_stats
+ hw->ep_shm_info[dest_epid].net_stats
.tx_carrier_errors += 1;
ret = NETDEV_TX_OK;
@@ -661,9 +677,9 @@ fjes_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
&adapter->hw.ep_shm_info[dest_epid].rx,
netdev->mtu)) {
adapter->stats64.tx_dropped += 1;
- hw->ep_shm_info[my_epid].net_stats.tx_dropped += 1;
+ hw->ep_shm_info[dest_epid].net_stats.tx_dropped += 1;
adapter->stats64.tx_errors += 1;
- hw->ep_shm_info[my_epid].net_stats.tx_errors += 1;
+ hw->ep_shm_info[dest_epid].net_stats.tx_errors += 1;
ret = NETDEV_TX_OK;
} else if (vlan &&
@@ -694,15 +710,15 @@ fjes_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
(long)adapter->tx_start_jiffies) >=
FJES_TX_RETRY_TIMEOUT) {
adapter->stats64.tx_fifo_errors += 1;
- hw->ep_shm_info[my_epid].net_stats
+ hw->ep_shm_info[dest_epid].net_stats
.tx_fifo_errors += 1;
adapter->stats64.tx_errors += 1;
- hw->ep_shm_info[my_epid].net_stats
+ hw->ep_shm_info[dest_epid].net_stats
.tx_errors += 1;
ret = NETDEV_TX_OK;
} else {
- netdev->trans_start = jiffies;
+ netif_trans_update(netdev);
netif_tx_stop_queue(cur_queue);
if (!work_pending(&adapter->tx_stall_task))
@@ -714,10 +730,10 @@ fjes_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
} else {
if (!is_multi) {
adapter->stats64.tx_packets += 1;
- hw->ep_shm_info[my_epid].net_stats
+ hw->ep_shm_info[dest_epid].net_stats
.tx_packets += 1;
adapter->stats64.tx_bytes += len;
- hw->ep_shm_info[my_epid].net_stats
+ hw->ep_shm_info[dest_epid].net_stats
.tx_bytes += len;
}
@@ -759,9 +775,12 @@ fjes_get_stats64(struct net_device *netdev, struct rtnl_link_stats64 *stats)
static int fjes_change_mtu(struct net_device *netdev, int new_mtu)
{
+ struct fjes_adapter *adapter = netdev_priv(netdev);
bool running = netif_running(netdev);
- int ret = 0;
- int idx;
+ struct fjes_hw *hw = &adapter->hw;
+ unsigned long flags;
+ int ret = -EINVAL;
+ int idx, epidx;
for (idx = 0; fjes_support_mtu[idx] != 0; idx++) {
if (new_mtu <= fjes_support_mtu[idx]) {
@@ -769,19 +788,58 @@ static int fjes_change_mtu(struct net_device *netdev, int new_mtu)
if (new_mtu == netdev->mtu)
return 0;
- if (running)
- fjes_close(netdev);
+ ret = 0;
+ break;
+ }
+ }
+
+ if (ret)
+ return ret;
+
+ if (running) {
+ spin_lock_irqsave(&hw->rx_status_lock, flags);
+ for (epidx = 0; epidx < hw->max_epid; epidx++) {
+ if (epidx == hw->my_epid)
+ continue;
+ hw->ep_shm_info[epidx].tx.info->v1i.rx_status &=
+ ~FJES_RX_MTU_CHANGING_DONE;
+ }
+ spin_unlock_irqrestore(&hw->rx_status_lock, flags);
+
+ netif_tx_stop_all_queues(netdev);
+ netif_carrier_off(netdev);
+ cancel_work_sync(&adapter->tx_stall_task);
+ napi_disable(&adapter->napi);
+
+ msleep(1000);
+
+ netif_tx_stop_all_queues(netdev);
+ }
+
+ netdev->mtu = new_mtu;
- netdev->mtu = new_mtu;
+ if (running) {
+ for (epidx = 0; epidx < hw->max_epid; epidx++) {
+ if (epidx == hw->my_epid)
+ continue;
- if (running)
- ret = fjes_open(netdev);
+ spin_lock_irqsave(&hw->rx_status_lock, flags);
+ fjes_hw_setup_epbuf(&hw->ep_shm_info[epidx].tx,
+ netdev->dev_addr,
+ netdev->mtu);
- return ret;
+ hw->ep_shm_info[epidx].tx.info->v1i.rx_status |=
+ FJES_RX_MTU_CHANGING_DONE;
+ spin_unlock_irqrestore(&hw->rx_status_lock, flags);
}
+
+ netif_tx_wake_all_queues(netdev);
+ netif_carrier_on(netdev);
+ napi_enable(&adapter->napi);
+ napi_schedule(&adapter->napi);
}
- return -EINVAL;
+ return ret;
}
static int fjes_vlan_rx_add_vid(struct net_device *netdev,
@@ -825,6 +883,7 @@ static void fjes_txrx_stop_req_irq(struct fjes_adapter *adapter,
{
struct fjes_hw *hw = &adapter->hw;
enum ep_partner_status status;
+ unsigned long flags;
status = fjes_hw_get_partner_ep_status(hw, src_epid);
switch (status) {
@@ -834,8 +893,10 @@ static void fjes_txrx_stop_req_irq(struct fjes_adapter *adapter,
break;
case EP_PARTNER_WAITING:
if (src_epid < hw->my_epid) {
+ spin_lock_irqsave(&hw->rx_status_lock, flags);
hw->ep_shm_info[src_epid].tx.info->v1i.rx_status |=
FJES_RX_STOP_REQ_DONE;
+ spin_unlock_irqrestore(&hw->rx_status_lock, flags);
clear_bit(src_epid, &hw->txrx_stop_req_bit);
set_bit(src_epid, &adapter->unshare_watch_bitmask);
@@ -861,14 +922,17 @@ static void fjes_stop_req_irq(struct fjes_adapter *adapter, int src_epid)
{
struct fjes_hw *hw = &adapter->hw;
enum ep_partner_status status;
+ unsigned long flags;
set_bit(src_epid, &hw->hw_info.buffer_unshare_reserve_bit);
status = fjes_hw_get_partner_ep_status(hw, src_epid);
switch (status) {
case EP_PARTNER_WAITING:
+ spin_lock_irqsave(&hw->rx_status_lock, flags);
hw->ep_shm_info[src_epid].tx.info->v1i.rx_status |=
FJES_RX_STOP_REQ_DONE;
+ spin_unlock_irqrestore(&hw->rx_status_lock, flags);
clear_bit(src_epid, &hw->txrx_stop_req_bit);
/* fall through */
case EP_PARTNER_UNSHARE:
@@ -1001,13 +1065,17 @@ static int fjes_poll(struct napi_struct *napi, int budget)
size_t frame_len;
void *frame;
+ spin_lock(&hw->rx_status_lock);
for (epidx = 0; epidx < hw->max_epid; epidx++) {
if (epidx == hw->my_epid)
continue;
- adapter->hw.ep_shm_info[epidx].tx.info->v1i.rx_status |=
- FJES_RX_POLL_WORK;
+ if (fjes_hw_get_partner_ep_status(hw, epidx) ==
+ EP_PARTNER_SHARED)
+ adapter->hw.ep_shm_info[epidx]
+ .tx.info->v1i.rx_status |= FJES_RX_POLL_WORK;
}
+ spin_unlock(&hw->rx_status_lock);
while (work_done < budget) {
prefetch(&adapter->hw);
@@ -1065,13 +1133,17 @@ static int fjes_poll(struct napi_struct *napi, int budget)
if (((long)jiffies - (long)adapter->rx_last_jiffies) < 3) {
napi_reschedule(napi);
} else {
+ spin_lock(&hw->rx_status_lock);
for (epidx = 0; epidx < hw->max_epid; epidx++) {
if (epidx == hw->my_epid)
continue;
- adapter->hw.ep_shm_info[epidx]
- .tx.info->v1i.rx_status &=
+ if (fjes_hw_get_partner_ep_status(hw, epidx) ==
+ EP_PARTNER_SHARED)
+ adapter->hw.ep_shm_info[epidx].tx
+ .info->v1i.rx_status &=
~FJES_RX_POLL_WORK;
}
+ spin_unlock(&hw->rx_status_lock);
fjes_hw_set_irqmask(hw, REG_ICTL_MASK_RX_DATA, false);
}
@@ -1129,7 +1201,7 @@ static int fjes_probe(struct platform_device *plat_dev)
res = platform_get_resource(plat_dev, IORESOURCE_MEM, 0);
hw->hw_res.start = res->start;
- hw->hw_res.size = res->end - res->start + 1;
+ hw->hw_res.size = resource_size(res);
hw->hw_res.irq = platform_get_irq(plat_dev, 0);
err = fjes_hw_init(&adapter->hw);
if (err)
@@ -1203,7 +1275,7 @@ static void fjes_netdev_setup(struct net_device *netdev)
netdev->watchdog_timeo = FJES_TX_RETRY_INTERVAL;
netdev->netdev_ops = &fjes_netdev_ops;
fjes_set_ethtool_ops(netdev);
- netdev->mtu = fjes_support_mtu[0];
+ netdev->mtu = fjes_support_mtu[3];
netdev->flags |= IFF_BROADCAST;
netdev->features |= NETIF_F_HW_CSUM | NETIF_F_HW_VLAN_CTAG_FILTER;
}
@@ -1240,6 +1312,7 @@ static void fjes_watch_unshare_task(struct work_struct *work)
int max_epid, my_epid, epidx;
int stop_req, stop_req_done;
ulong unshare_watch_bitmask;
+ unsigned long flags;
int wait_time = 0;
int is_shared;
int ret;
@@ -1292,8 +1365,10 @@ static void fjes_watch_unshare_task(struct work_struct *work)
}
mutex_unlock(&hw->hw_info.lock);
+ spin_lock_irqsave(&hw->rx_status_lock, flags);
fjes_hw_setup_epbuf(&hw->ep_shm_info[epidx].tx,
netdev->dev_addr, netdev->mtu);
+ spin_unlock_irqrestore(&hw->rx_status_lock, flags);
clear_bit(epidx, &hw->txrx_stop_req_bit);
clear_bit(epidx, &unshare_watch_bitmask);
@@ -1331,9 +1406,12 @@ static void fjes_watch_unshare_task(struct work_struct *work)
}
mutex_unlock(&hw->hw_info.lock);
+ spin_lock_irqsave(&hw->rx_status_lock, flags);
fjes_hw_setup_epbuf(
&hw->ep_shm_info[epidx].tx,
netdev->dev_addr, netdev->mtu);
+ spin_unlock_irqrestore(&hw->rx_status_lock,
+ flags);
clear_bit(epidx, &hw->txrx_stop_req_bit);
clear_bit(epidx, &unshare_watch_bitmask);
@@ -1341,8 +1419,11 @@ static void fjes_watch_unshare_task(struct work_struct *work)
}
if (test_bit(epidx, &unshare_watch_bitmask)) {
+ spin_lock_irqsave(&hw->rx_status_lock, flags);
hw->ep_shm_info[epidx].tx.info->v1i.rx_status &=
~FJES_RX_STOP_REQ_DONE;
+ spin_unlock_irqrestore(&hw->rx_status_lock,
+ flags);
}
}
}
diff --git a/drivers/net/geneve.c b/drivers/net/geneve.c
index bc168894bda3..cadefe4fdaa2 100644
--- a/drivers/net/geneve.c
+++ b/drivers/net/geneve.c
@@ -87,7 +87,6 @@ struct geneve_sock {
struct socket *sock;
struct rcu_head rcu;
int refcnt;
- struct udp_offload udp_offloads;
struct hlist_head vni_list[VNI_HASH_SIZE];
u32 flags;
};
@@ -336,15 +335,15 @@ static int geneve_udp_encap_recv(struct sock *sk, struct sk_buff *skb)
/* Need Geneve and inner Ethernet header to be present */
if (unlikely(!pskb_may_pull(skb, GENEVE_BASE_HLEN)))
- goto error;
+ goto drop;
/* Return packets with reserved bits set */
geneveh = geneve_hdr(skb);
if (unlikely(geneveh->ver != GENEVE_VER))
- goto error;
+ goto drop;
if (unlikely(geneveh->proto_type != htons(ETH_P_TEB)))
- goto error;
+ goto drop;
gs = rcu_dereference_sk_user_data(sk);
if (!gs)
@@ -367,10 +366,6 @@ drop:
/* Consume bad packet */
kfree_skb(skb);
return 0;
-
-error:
- /* Let the UDP layer deal with the skb */
- return 1;
}
static struct socket *geneve_create_sock(struct net *net, bool ipv6,
@@ -409,14 +404,6 @@ static void geneve_notify_add_rx_port(struct geneve_sock *gs)
struct net *net = sock_net(sk);
sa_family_t sa_family = geneve_get_sk_family(gs);
__be16 port = inet_sk(sk)->inet_sport;
- int err;
-
- if (sa_family == AF_INET) {
- err = udp_add_offload(sock_net(sk), &gs->udp_offloads);
- if (err)
- pr_warn("geneve: udp_add_offload failed with status %d\n",
- err);
- }
rcu_read_lock();
for_each_netdev_rcu(net, dev) {
@@ -432,9 +419,9 @@ static int geneve_hlen(struct genevehdr *gh)
return sizeof(*gh) + gh->opt_len * 4;
}
-static struct sk_buff **geneve_gro_receive(struct sk_buff **head,
- struct sk_buff *skb,
- struct udp_offload *uoff)
+static struct sk_buff **geneve_gro_receive(struct sock *sk,
+ struct sk_buff **head,
+ struct sk_buff *skb)
{
struct sk_buff *p, **pp = NULL;
struct genevehdr *gh, *gh2;
@@ -495,8 +482,8 @@ out:
return pp;
}
-static int geneve_gro_complete(struct sk_buff *skb, int nhoff,
- struct udp_offload *uoff)
+static int geneve_gro_complete(struct sock *sk, struct sk_buff *skb,
+ int nhoff)
{
struct genevehdr *gh;
struct packet_offload *ptype;
@@ -504,8 +491,6 @@ static int geneve_gro_complete(struct sk_buff *skb, int nhoff,
int gh_len;
int err = -ENOSYS;
- udp_tunnel_gro_complete(skb, nhoff);
-
gh = (struct genevehdr *)(skb->data + nhoff);
gh_len = geneve_hlen(gh);
type = gh->proto_type;
@@ -516,6 +501,9 @@ static int geneve_gro_complete(struct sk_buff *skb, int nhoff,
err = ptype->callbacks.gro_complete(skb, nhoff + gh_len);
rcu_read_unlock();
+
+ skb_set_inner_mac_header(skb, nhoff + gh_len);
+
return err;
}
@@ -545,14 +533,14 @@ static struct geneve_sock *geneve_socket_create(struct net *net, __be16 port,
INIT_HLIST_HEAD(&gs->vni_list[h]);
/* Initialize the geneve udp offloads structure */
- gs->udp_offloads.port = port;
- gs->udp_offloads.callbacks.gro_receive = geneve_gro_receive;
- gs->udp_offloads.callbacks.gro_complete = geneve_gro_complete;
geneve_notify_add_rx_port(gs);
/* Mark socket as an encapsulation socket */
+ memset(&tunnel_cfg, 0, sizeof(tunnel_cfg));
tunnel_cfg.sk_user_data = gs;
tunnel_cfg.encap_type = 1;
+ tunnel_cfg.gro_receive = geneve_gro_receive;
+ tunnel_cfg.gro_complete = geneve_gro_complete;
tunnel_cfg.encap_rcv = geneve_udp_encap_recv;
tunnel_cfg.encap_destroy = NULL;
setup_udp_tunnel_sock(net, sock, &tunnel_cfg);
@@ -576,9 +564,6 @@ static void geneve_notify_del_rx_port(struct geneve_sock *gs)
}
rcu_read_unlock();
-
- if (sa_family == AF_INET)
- udp_del_offload(&gs->udp_offloads);
}
static void __geneve_sock_release(struct geneve_sock *gs)
@@ -708,16 +693,12 @@ static int geneve_build_skb(struct rtable *rt, struct sk_buff *skb,
min_headroom = LL_RESERVED_SPACE(rt->dst.dev) + rt->dst.header_len
+ GENEVE_BASE_HLEN + opt_len + sizeof(struct iphdr);
err = skb_cow_head(skb, min_headroom);
- if (unlikely(err)) {
- kfree_skb(skb);
+ if (unlikely(err))
goto free_rt;
- }
- skb = udp_tunnel_handle_offloads(skb, udp_sum);
- if (IS_ERR(skb)) {
- err = PTR_ERR(skb);
+ err = udp_tunnel_handle_offloads(skb, udp_sum);
+ if (err)
goto free_rt;
- }
gnvh = (struct genevehdr *)__skb_push(skb, sizeof(*gnvh) + opt_len);
geneve_build_header(gnvh, tun_flags, vni, opt_len, opt);
@@ -745,16 +726,12 @@ static int geneve6_build_skb(struct dst_entry *dst, struct sk_buff *skb,
min_headroom = LL_RESERVED_SPACE(dst->dev) + dst->header_len
+ GENEVE_BASE_HLEN + opt_len + sizeof(struct ipv6hdr);
err = skb_cow_head(skb, min_headroom);
- if (unlikely(err)) {
- kfree_skb(skb);
+ if (unlikely(err))
goto free_dst;
- }
- skb = udp_tunnel_handle_offloads(skb, udp_sum);
- if (IS_ERR(skb)) {
- err = PTR_ERR(skb);
+ err = udp_tunnel_handle_offloads(skb, udp_sum);
+ if (err)
goto free_dst;
- }
gnvh = (struct genevehdr *)__skb_push(skb, sizeof(*gnvh) + opt_len);
geneve_build_header(gnvh, tun_flags, vni, opt_len, opt);
@@ -949,7 +926,7 @@ static netdev_tx_t geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
err = geneve_build_skb(rt, skb, key->tun_flags, vni,
info->options_len, opts, flags, xnet);
if (unlikely(err))
- goto err;
+ goto tx_error;
tos = ip_tunnel_ecn_encap(key->tos, iip, skb);
ttl = key->ttl;
@@ -958,7 +935,7 @@ static netdev_tx_t geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
err = geneve_build_skb(rt, skb, 0, geneve->vni,
0, NULL, flags, xnet);
if (unlikely(err))
- goto err;
+ goto tx_error;
tos = ip_tunnel_ecn_encap(fl4.flowi4_tos, iip, skb);
ttl = geneve->ttl;
@@ -976,7 +953,7 @@ static netdev_tx_t geneve_xmit_skb(struct sk_buff *skb, struct net_device *dev,
tx_error:
dev_kfree_skb(skb);
-err:
+
if (err == -ELOOP)
dev->stats.collisions++;
else if (err == -ENETUNREACH)
@@ -1038,7 +1015,7 @@ static netdev_tx_t geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
info->options_len, opts,
flags, xnet);
if (unlikely(err))
- goto err;
+ goto tx_error;
prio = ip_tunnel_ecn_encap(key->tos, iip, skb);
ttl = key->ttl;
@@ -1047,7 +1024,7 @@ static netdev_tx_t geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
err = geneve6_build_skb(dst, skb, 0, geneve->vni,
0, NULL, flags, xnet);
if (unlikely(err))
- goto err;
+ goto tx_error;
prio = ip_tunnel_ecn_encap(ip6_tclass(fl6.flowlabel),
iip, skb);
@@ -1066,7 +1043,7 @@ static netdev_tx_t geneve6_xmit_skb(struct sk_buff *skb, struct net_device *dev,
tx_error:
dev_kfree_skb(skb);
-err:
+
if (err == -ELOOP)
dev->stats.collisions++;
else if (err == -ENETUNREACH)
@@ -1192,7 +1169,7 @@ static struct device_type geneve_type = {
* supply the listening GENEVE udp ports. Callers are expected
* to implement the ndo_add_geneve_port.
*/
-void geneve_get_rx_port(struct net_device *dev)
+static void geneve_push_rx_ports(struct net_device *dev)
{
struct net *net = dev_net(dev);
struct geneve_net *gn = net_generic(net, geneve_net_id);
@@ -1201,6 +1178,9 @@ void geneve_get_rx_port(struct net_device *dev)
struct sock *sk;
__be16 port;
+ if (!dev->netdev_ops->ndo_add_geneve_port)
+ return;
+
rcu_read_lock();
list_for_each_entry_rcu(gs, &gn->sock_list, list) {
sk = gs->sock->sk;
@@ -1210,7 +1190,6 @@ void geneve_get_rx_port(struct net_device *dev)
}
rcu_read_unlock();
}
-EXPORT_SYMBOL_GPL(geneve_get_rx_port);
/* Initialize the device structure. */
static void geneve_setup(struct net_device *dev)
@@ -1558,6 +1537,21 @@ struct net_device *geneve_dev_create_fb(struct net *net, const char *name,
}
EXPORT_SYMBOL_GPL(geneve_dev_create_fb);
+static int geneve_netdevice_event(struct notifier_block *unused,
+ unsigned long event, void *ptr)
+{
+ struct net_device *dev = netdev_notifier_info_to_dev(ptr);
+
+ if (event == NETDEV_OFFLOAD_PUSH_GENEVE)
+ geneve_push_rx_ports(dev);
+
+ return NOTIFY_DONE;
+}
+
+static struct notifier_block geneve_notifier_block __read_mostly = {
+ .notifier_call = geneve_netdevice_event,
+};
+
static __net_init int geneve_init_net(struct net *net)
{
struct geneve_net *gn = net_generic(net, geneve_net_id);
@@ -1610,11 +1604,18 @@ static int __init geneve_init_module(void)
if (rc)
goto out1;
- rc = rtnl_link_register(&geneve_link_ops);
+ rc = register_netdevice_notifier(&geneve_notifier_block);
if (rc)
goto out2;
+ rc = rtnl_link_register(&geneve_link_ops);
+ if (rc)
+ goto out3;
+
return 0;
+
+out3:
+ unregister_netdevice_notifier(&geneve_notifier_block);
out2:
unregister_pernet_subsys(&geneve_net_ops);
out1:
@@ -1625,6 +1626,7 @@ late_initcall(geneve_init_module);
static void __exit geneve_cleanup_module(void)
{
rtnl_link_unregister(&geneve_link_ops);
+ unregister_netdevice_notifier(&geneve_notifier_block);
unregister_pernet_subsys(&geneve_net_ops);
}
module_exit(geneve_cleanup_module);
diff --git a/drivers/net/gtp.c b/drivers/net/gtp.c
new file mode 100644
index 000000000000..4e976a0d5a76
--- /dev/null
+++ b/drivers/net/gtp.c
@@ -0,0 +1,1375 @@
+/* GTP according to GSM TS 09.60 / 3GPP TS 29.060
+ *
+ * (C) 2012-2014 by sysmocom - s.f.m.c. GmbH
+ * (C) 2016 by Pablo Neira Ayuso <pablo@netfilter.org>
+ *
+ * Author: Harald Welte <hwelte@sysmocom.de>
+ * Pablo Neira Ayuso <pablo@netfilter.org>
+ * Andreas Schultz <aschultz@travelping.com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+
+#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+#include <linux/module.h>
+#include <linux/version.h>
+#include <linux/skbuff.h>
+#include <linux/udp.h>
+#include <linux/rculist.h>
+#include <linux/jhash.h>
+#include <linux/if_tunnel.h>
+#include <linux/net.h>
+#include <linux/file.h>
+#include <linux/gtp.h>
+
+#include <net/net_namespace.h>
+#include <net/protocol.h>
+#include <net/ip.h>
+#include <net/udp.h>
+#include <net/udp_tunnel.h>
+#include <net/icmp.h>
+#include <net/xfrm.h>
+#include <net/genetlink.h>
+#include <net/netns/generic.h>
+#include <net/gtp.h>
+
+/* An active session for the subscriber. */
+struct pdp_ctx {
+ struct hlist_node hlist_tid;
+ struct hlist_node hlist_addr;
+
+ union {
+ u64 tid;
+ struct {
+ u64 tid;
+ u16 flow;
+ } v0;
+ struct {
+ u32 i_tei;
+ u32 o_tei;
+ } v1;
+ } u;
+ u8 gtp_version;
+ u16 af;
+
+ struct in_addr ms_addr_ip4;
+ struct in_addr sgsn_addr_ip4;
+
+ atomic_t tx_seq;
+ struct rcu_head rcu_head;
+};
+
+/* One instance of the GTP device. */
+struct gtp_dev {
+ struct list_head list;
+
+ struct socket *sock0;
+ struct socket *sock1u;
+
+ struct net *net;
+ struct net_device *dev;
+
+ unsigned int hash_size;
+ struct hlist_head *tid_hash;
+ struct hlist_head *addr_hash;
+};
+
+static int gtp_net_id __read_mostly;
+
+struct gtp_net {
+ struct list_head gtp_dev_list;
+};
+
+static u32 gtp_h_initval;
+
+static inline u32 gtp0_hashfn(u64 tid)
+{
+ u32 *tid32 = (u32 *) &tid;
+ return jhash_2words(tid32[0], tid32[1], gtp_h_initval);
+}
+
+static inline u32 gtp1u_hashfn(u32 tid)
+{
+ return jhash_1word(tid, gtp_h_initval);
+}
+
+static inline u32 ipv4_hashfn(__be32 ip)
+{
+ return jhash_1word((__force u32)ip, gtp_h_initval);
+}
+
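+/* PDP contexts are kept on two hash tables: tid_hash, keyed by the tunnel
+ * identifier and used on the decapsulation path, and addr_hash, keyed by the
+ * mobile subscriber's IPv4 address and used when encapsulating traffic that
+ * is routed out through the gtp device.
+ */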
+/* Resolve a PDP context structure based on the 64bit TID. */
+static struct pdp_ctx *gtp0_pdp_find(struct gtp_dev *gtp, u64 tid)
+{
+ struct hlist_head *head;
+ struct pdp_ctx *pdp;
+
+ head = &gtp->tid_hash[gtp0_hashfn(tid) % gtp->hash_size];
+
+ hlist_for_each_entry_rcu(pdp, head, hlist_tid) {
+ if (pdp->gtp_version == GTP_V0 &&
+ pdp->u.v0.tid == tid)
+ return pdp;
+ }
+ return NULL;
+}
+
+/* Resolve a PDP context structure based on the 32bit TEI. */
+static struct pdp_ctx *gtp1_pdp_find(struct gtp_dev *gtp, u32 tid)
+{
+ struct hlist_head *head;
+ struct pdp_ctx *pdp;
+
+ head = &gtp->tid_hash[gtp1u_hashfn(tid) % gtp->hash_size];
+
+ hlist_for_each_entry_rcu(pdp, head, hlist_tid) {
+ if (pdp->gtp_version == GTP_V1 &&
+ pdp->u.v1.i_tei == tid)
+ return pdp;
+ }
+ return NULL;
+}
+
+/* Resolve a PDP context based on IPv4 address of MS. */
+static struct pdp_ctx *ipv4_pdp_find(struct gtp_dev *gtp, __be32 ms_addr)
+{
+ struct hlist_head *head;
+ struct pdp_ctx *pdp;
+
+ head = &gtp->addr_hash[ipv4_hashfn(ms_addr) % gtp->hash_size];
+
+ hlist_for_each_entry_rcu(pdp, head, hlist_addr) {
+ if (pdp->af == AF_INET &&
+ pdp->ms_addr_ip4.s_addr == ms_addr)
+ return pdp;
+ }
+
+ return NULL;
+}
+
+static bool gtp_check_src_ms_ipv4(struct sk_buff *skb, struct pdp_ctx *pctx,
+ unsigned int hdrlen)
+{
+ struct iphdr *iph;
+
+ if (!pskb_may_pull(skb, hdrlen + sizeof(struct iphdr)))
+ return false;
+
+ iph = (struct iphdr *)(skb->data + hdrlen + sizeof(struct iphdr));
+
+ return iph->saddr != pctx->ms_addr_ip4.s_addr;
+}
+
+/* Check if the inner IP source address in this packet is assigned to any
+ * existing mobile subscriber.
+ */
+static bool gtp_check_src_ms(struct sk_buff *skb, struct pdp_ctx *pctx,
+ unsigned int hdrlen)
+{
+ switch (ntohs(skb->protocol)) {
+ case ETH_P_IP:
+ return gtp_check_src_ms_ipv4(skb, pctx, hdrlen);
+ }
+ return false;
+}
+
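+/* The GTP version is carried in the top three bits of the first flags octet,
+ * which is why both receive handlers below test "flags >> 5" against the
+ * expected protocol version before doing anything else.
+ */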
+/* 1 means pass up to the stack, -1 means drop and 0 means decapsulated. */
+static int gtp0_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb,
+ bool xnet)
+{
+ unsigned int hdrlen = sizeof(struct udphdr) +
+ sizeof(struct gtp0_header);
+ struct gtp0_header *gtp0;
+ struct pdp_ctx *pctx;
+ int ret = 0;
+
+ if (!pskb_may_pull(skb, hdrlen))
+ return -1;
+
+ gtp0 = (struct gtp0_header *)(skb->data + sizeof(struct udphdr));
+
+ if ((gtp0->flags >> 5) != GTP_V0)
+ return 1;
+
+ if (gtp0->type != GTP_TPDU)
+ return 1;
+
+ rcu_read_lock();
+ pctx = gtp0_pdp_find(gtp, be64_to_cpu(gtp0->tid));
+ if (!pctx) {
+ netdev_dbg(gtp->dev, "No PDP ctx to decap skb=%p\n", skb);
+ ret = -1;
+ goto out_rcu;
+ }
+
+ if (!gtp_check_src_ms(skb, pctx, hdrlen)) {
+ netdev_dbg(gtp->dev, "No PDP ctx for this MS\n");
+ ret = -1;
+ goto out_rcu;
+ }
+ rcu_read_unlock();
+
+ /* Get rid of the GTP + UDP headers. */
+ return iptunnel_pull_header(skb, hdrlen, skb->protocol, xnet);
+out_rcu:
+ rcu_read_unlock();
+ return ret;
+}
+
+static int gtp1u_udp_encap_recv(struct gtp_dev *gtp, struct sk_buff *skb,
+ bool xnet)
+{
+ unsigned int hdrlen = sizeof(struct udphdr) +
+ sizeof(struct gtp1_header);
+ struct gtp1_header *gtp1;
+ struct pdp_ctx *pctx;
+ int ret = 0;
+
+ if (!pskb_may_pull(skb, hdrlen))
+ return -1;
+
+ gtp1 = (struct gtp1_header *)(skb->data + sizeof(struct udphdr));
+
+ if ((gtp1->flags >> 5) != GTP_V1)
+ return 1;
+
+ if (gtp1->type != GTP_TPDU)
+ return 1;
+
+ /* From 29.060: "This field shall be present if and only if any one or
+ * more of the S, PN and E flags are set.".
+ *
+ * If any of these bits is set, then the remaining ones also have to be
+ * set.
+ */
+ if (gtp1->flags & GTP1_F_MASK)
+ hdrlen += 4;
+
+ /* Make sure the header is large enough, including extensions. */
+ if (!pskb_may_pull(skb, hdrlen))
+ return -1;
+
+ gtp1 = (struct gtp1_header *)(skb->data + sizeof(struct udphdr));
+
+ rcu_read_lock();
+ pctx = gtp1_pdp_find(gtp, ntohl(gtp1->tid));
+ if (!pctx) {
+ netdev_dbg(gtp->dev, "No PDP ctx to decap skb=%p\n", skb);
+ ret = -1;
+ goto out_rcu;
+ }
+
+ if (!gtp_check_src_ms(skb, pctx, hdrlen)) {
+ netdev_dbg(gtp->dev, "No PDP ctx for this MS\n");
+ ret = -1;
+ goto out_rcu;
+ }
+ rcu_read_unlock();
+
+ /* Get rid of the GTP + UDP headers. */
+ return iptunnel_pull_header(skb, hdrlen, skb->protocol, xnet);
+out_rcu:
+ rcu_read_unlock();
+ return ret;
+}
+
+static void gtp_encap_disable(struct gtp_dev *gtp)
+{
+ if (gtp->sock0 && gtp->sock0->sk) {
+ udp_sk(gtp->sock0->sk)->encap_type = 0;
+ rcu_assign_sk_user_data(gtp->sock0->sk, NULL);
+ }
+ if (gtp->sock1u && gtp->sock1u->sk) {
+ udp_sk(gtp->sock1u->sk)->encap_type = 0;
+ rcu_assign_sk_user_data(gtp->sock1u->sk, NULL);
+ }
+
+ gtp->sock0 = NULL;
+ gtp->sock1u = NULL;
+}
+
+static void gtp_encap_destroy(struct sock *sk)
+{
+ struct gtp_dev *gtp;
+
+ gtp = rcu_dereference_sk_user_data(sk);
+ if (gtp)
+ gtp_encap_disable(gtp);
+}
+
+/* UDP encapsulation receive handler. See net/ipv4/udp.c.
+ * Return codes: 0: success, <0: error, >0: pass up to userspace UDP socket.
+ */
+static int gtp_encap_recv(struct sock *sk, struct sk_buff *skb)
+{
+ struct pcpu_sw_netstats *stats;
+ struct gtp_dev *gtp;
+ bool xnet;
+ int ret;
+
+ gtp = rcu_dereference_sk_user_data(sk);
+ if (!gtp)
+ return 1;
+
+ netdev_dbg(gtp->dev, "encap_recv sk=%p\n", sk);
+
+ xnet = !net_eq(gtp->net, dev_net(gtp->dev));
+
+ switch (udp_sk(sk)->encap_type) {
+ case UDP_ENCAP_GTP0:
+ netdev_dbg(gtp->dev, "received GTP0 packet\n");
+ ret = gtp0_udp_encap_recv(gtp, skb, xnet);
+ break;
+ case UDP_ENCAP_GTP1U:
+ netdev_dbg(gtp->dev, "received GTP1U packet\n");
+ ret = gtp1u_udp_encap_recv(gtp, skb, xnet);
+ break;
+ default:
+ ret = -1; /* Shouldn't happen. */
+ }
+
+ switch (ret) {
+ case 1:
+ netdev_dbg(gtp->dev, "pass up to the process\n");
+ return 1;
+ case 0:
+ netdev_dbg(gtp->dev, "forwarding packet from GGSN to uplink\n");
+ break;
+ case -1:
+ netdev_dbg(gtp->dev, "GTP packet has been dropped\n");
+ kfree_skb(skb);
+ return 0;
+ }
+
+ /* Now that the UDP and the GTP header have been removed, set up the
+ * new network header. This is required by the upper layer to
+ * calculate the transport header.
+ */
+ skb_reset_network_header(skb);
+
+ skb->dev = gtp->dev;
+
+ stats = this_cpu_ptr(gtp->dev->tstats);
+ u64_stats_update_begin(&stats->syncp);
+ stats->rx_packets++;
+ stats->rx_bytes += skb->len;
+ u64_stats_update_end(&stats->syncp);
+
+ netif_rx(skb);
+
+ return 0;
+}
+
+static int gtp_dev_init(struct net_device *dev)
+{
+ struct gtp_dev *gtp = netdev_priv(dev);
+
+ gtp->dev = dev;
+
+ dev->tstats = alloc_percpu(struct pcpu_sw_netstats);
+ if (!dev->tstats)
+ return -ENOMEM;
+
+ return 0;
+}
+
+static void gtp_dev_uninit(struct net_device *dev)
+{
+ struct gtp_dev *gtp = netdev_priv(dev);
+
+ gtp_encap_disable(gtp);
+ free_percpu(dev->tstats);
+}
+
+static struct rtable *ip4_route_output_gtp(struct net *net, struct flowi4 *fl4,
+ const struct sock *sk, __be32 daddr)
+{
+ memset(fl4, 0, sizeof(*fl4));
+ fl4->flowi4_oif = sk->sk_bound_dev_if;
+ fl4->daddr = daddr;
+ fl4->saddr = inet_sk(sk)->inet_saddr;
+ fl4->flowi4_tos = RT_CONN_FLAGS(sk);
+ fl4->flowi4_proto = sk->sk_protocol;
+
+ return ip_route_output_key(net, fl4);
+}
+
+static inline void gtp0_push_header(struct sk_buff *skb, struct pdp_ctx *pctx)
+{
+ int payload_len = skb->len;
+ struct gtp0_header *gtp0;
+
+ gtp0 = (struct gtp0_header *) skb_push(skb, sizeof(*gtp0));
+
+ gtp0->flags = 0x1e; /* v0, GTP-non-prime. */
+ gtp0->type = GTP_TPDU;
+ gtp0->length = htons(payload_len);
+ gtp0->seq = htons((atomic_inc_return(&pctx->tx_seq) - 1) % 0xffff);
+ gtp0->flow = htons(pctx->u.v0.flow);
+ gtp0->number = 0xff;
+ gtp0->spare[0] = gtp0->spare[1] = gtp0->spare[2] = 0xff;
+ gtp0->tid = cpu_to_be64(pctx->u.v0.tid);
+}
+
+static inline void gtp1_push_header(struct sk_buff *skb, struct pdp_ctx *pctx)
+{
+ int payload_len = skb->len;
+ struct gtp1_header *gtp1;
+
+ gtp1 = (struct gtp1_header *) skb_push(skb, sizeof(*gtp1));
+
+ /* Bits 8 7 6 5 4 3 2 1
+ * +--+--+--+--+--+--+--+--+
+ * |version |PT| 1| E| S|PN|
+ * +--+--+--+--+--+--+--+--+
+ * 0 0 1 1 1 0 0 0
+ */
+ gtp1->flags = 0x38; /* v1, GTP-non-prime. */
+ gtp1->type = GTP_TPDU;
+ gtp1->length = htons(payload_len);
+ gtp1->tid = htonl(pctx->u.v1.o_tei);
+
+	/* TODO: Support for extension header, sequence number and N-PDU.
+	 * Update the length field if any of them is present.
+ */
+}
+
+struct gtp_pktinfo {
+ struct sock *sk;
+ struct iphdr *iph;
+ struct flowi4 fl4;
+ struct rtable *rt;
+ struct pdp_ctx *pctx;
+ struct net_device *dev;
+ __be16 gtph_port;
+};
+
+static void gtp_push_header(struct sk_buff *skb, struct gtp_pktinfo *pktinfo)
+{
+ switch (pktinfo->pctx->gtp_version) {
+ case GTP_V0:
+ pktinfo->gtph_port = htons(GTP0_PORT);
+ gtp0_push_header(skb, pktinfo->pctx);
+ break;
+ case GTP_V1:
+ pktinfo->gtph_port = htons(GTP1U_PORT);
+ gtp1_push_header(skb, pktinfo->pctx);
+ break;
+ }
+}
+
+static inline void gtp_set_pktinfo_ipv4(struct gtp_pktinfo *pktinfo,
+ struct sock *sk, struct iphdr *iph,
+ struct pdp_ctx *pctx, struct rtable *rt,
+ struct flowi4 *fl4,
+ struct net_device *dev)
+{
+ pktinfo->sk = sk;
+ pktinfo->iph = iph;
+ pktinfo->pctx = pctx;
+ pktinfo->rt = rt;
+ pktinfo->fl4 = *fl4;
+ pktinfo->dev = dev;
+}
+
+static int gtp_build_skb_ip4(struct sk_buff *skb, struct net_device *dev,
+ struct gtp_pktinfo *pktinfo)
+{
+ struct gtp_dev *gtp = netdev_priv(dev);
+ struct pdp_ctx *pctx;
+ struct rtable *rt;
+ struct flowi4 fl4;
+ struct iphdr *iph;
+ struct sock *sk;
+ __be16 df;
+ int mtu;
+
+ /* Read the IP destination address and resolve the PDP context.
+ * Prepend PDP header with TEI/TID from PDP ctx.
+ */
+ iph = ip_hdr(skb);
+ pctx = ipv4_pdp_find(gtp, iph->daddr);
+ if (!pctx) {
+ netdev_dbg(dev, "no PDP ctx found for %pI4, skip\n",
+ &iph->daddr);
+ return -ENOENT;
+ }
+ netdev_dbg(dev, "found PDP context %p\n", pctx);
+
+ switch (pctx->gtp_version) {
+ case GTP_V0:
+ if (gtp->sock0)
+ sk = gtp->sock0->sk;
+ else
+ sk = NULL;
+ break;
+ case GTP_V1:
+ if (gtp->sock1u)
+ sk = gtp->sock1u->sk;
+ else
+ sk = NULL;
+ break;
+ default:
+ return -ENOENT;
+ }
+
+ if (!sk) {
+ netdev_dbg(dev, "no userspace socket is available, skip\n");
+ return -ENOENT;
+ }
+
+ rt = ip4_route_output_gtp(sock_net(sk), &fl4, gtp->sock0->sk,
+ pctx->sgsn_addr_ip4.s_addr);
+ if (IS_ERR(rt)) {
+		netdev_dbg(dev, "no route to SGSN %pI4\n",
+ &pctx->sgsn_addr_ip4.s_addr);
+ dev->stats.tx_carrier_errors++;
+ goto err;
+ }
+
+ if (rt->dst.dev == dev) {
+		netdev_dbg(dev, "circular route to SGSN %pI4\n",
+ &pctx->sgsn_addr_ip4.s_addr);
+ dev->stats.collisions++;
+ goto err_rt;
+ }
+
+ skb_dst_drop(skb);
+
+ /* This is similar to tnl_update_pmtu(). */
+ df = iph->frag_off;
+ if (df) {
+ mtu = dst_mtu(&rt->dst) - dev->hard_header_len -
+ sizeof(struct iphdr) - sizeof(struct udphdr);
+ switch (pctx->gtp_version) {
+ case GTP_V0:
+ mtu -= sizeof(struct gtp0_header);
+ break;
+ case GTP_V1:
+ mtu -= sizeof(struct gtp1_header);
+ break;
+ }
+ } else {
+ mtu = dst_mtu(&rt->dst);
+ }
+
+ rt->dst.ops->update_pmtu(&rt->dst, NULL, skb, mtu);
+
+ if (!skb_is_gso(skb) && (iph->frag_off & htons(IP_DF)) &&
+ mtu < ntohs(iph->tot_len)) {
+ netdev_dbg(dev, "packet too big, fragmentation needed\n");
+ memset(IPCB(skb), 0, sizeof(*IPCB(skb)));
+ icmp_send(skb, ICMP_DEST_UNREACH, ICMP_FRAG_NEEDED,
+ htonl(mtu));
+ goto err_rt;
+ }
+
+ gtp_set_pktinfo_ipv4(pktinfo, sk, iph, pctx, rt, &fl4, dev);
+ gtp_push_header(skb, pktinfo);
+
+ return 0;
+err_rt:
+ ip_rt_put(rt);
+err:
+ return -EBADMSG;
+}
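
gtp_build_skb_ip4() derives the usable payload MTU by subtracting the outer IPv4, UDP and GTP header sizes (plus any link-layer header) from the route MTU before updating the path MTU and, when the packet does not fit and DF is set, sending an ICMP fragmentation-needed. The sketch below is a worked example of that arithmetic only; it assumes a 1500-byte route MTU and the zero hard_header_len set in gtp_link_setup(), and the header sizes are standard values, not taken from the driver at runtime.

/* Illustrative only: tunnel MTU overhead for GTPv0 and GTPv1-U. */
#include <stdio.h>

int main(void)
{
	int dst_mtu = 1500;			/* route MTU assumed here        */
	int iphdr = 20, udphdr = 8;		/* outer IPv4 + UDP              */
	int gtp0_hdr = 20, gtp1_hdr = 8;	/* GTPv0 / GTPv1 mandatory parts */

	printf("GTPv0 payload MTU: %d\n",
	       dst_mtu - iphdr - udphdr - gtp0_hdr);	/* 1452 */
	printf("GTPv1 payload MTU: %d\n",
	       dst_mtu - iphdr - udphdr - gtp1_hdr);	/* 1464 */
	return 0;
}
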
+
+static netdev_tx_t gtp_dev_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ unsigned int proto = ntohs(skb->protocol);
+ struct gtp_pktinfo pktinfo;
+ int err;
+
+ /* Ensure there is sufficient headroom. */
+ if (skb_cow_head(skb, dev->needed_headroom))
+ goto tx_err;
+
+ skb_reset_inner_headers(skb);
+
+ /* PDP context lookups in gtp_build_skb_*() need rcu read-side lock. */
+ rcu_read_lock();
+ switch (proto) {
+ case ETH_P_IP:
+ err = gtp_build_skb_ip4(skb, dev, &pktinfo);
+ break;
+ default:
+ err = -EOPNOTSUPP;
+ break;
+ }
+ rcu_read_unlock();
+
+ if (err < 0)
+ goto tx_err;
+
+ switch (proto) {
+ case ETH_P_IP:
+ netdev_dbg(pktinfo.dev, "gtp -> IP src: %pI4 dst: %pI4\n",
+ &pktinfo.iph->saddr, &pktinfo.iph->daddr);
+ udp_tunnel_xmit_skb(pktinfo.rt, pktinfo.sk, skb,
+ pktinfo.fl4.saddr, pktinfo.fl4.daddr,
+ pktinfo.iph->tos,
+ ip4_dst_hoplimit(&pktinfo.rt->dst),
+ htons(IP_DF),
+ pktinfo.gtph_port, pktinfo.gtph_port,
+ true, false);
+ break;
+ }
+
+ return NETDEV_TX_OK;
+tx_err:
+ dev->stats.tx_errors++;
+ dev_kfree_skb(skb);
+ return NETDEV_TX_OK;
+}
+
+static const struct net_device_ops gtp_netdev_ops = {
+ .ndo_init = gtp_dev_init,
+ .ndo_uninit = gtp_dev_uninit,
+ .ndo_start_xmit = gtp_dev_xmit,
+ .ndo_get_stats64 = ip_tunnel_get_stats64,
+};
+
+static void gtp_link_setup(struct net_device *dev)
+{
+ dev->netdev_ops = &gtp_netdev_ops;
+ dev->destructor = free_netdev;
+
+ dev->hard_header_len = 0;
+ dev->addr_len = 0;
+
+ /* Zero header length. */
+ dev->type = ARPHRD_NONE;
+ dev->flags = IFF_POINTOPOINT | IFF_NOARP | IFF_MULTICAST;
+
+ dev->priv_flags |= IFF_NO_QUEUE;
+ dev->features |= NETIF_F_LLTX;
+ netif_keep_dst(dev);
+
+	/* Assume largest header, i.e. GTPv0. */
+ dev->needed_headroom = LL_MAX_HEADER +
+ sizeof(struct iphdr) +
+ sizeof(struct udphdr) +
+ sizeof(struct gtp0_header);
+}
+
+static int gtp_hashtable_new(struct gtp_dev *gtp, int hsize);
+static void gtp_hashtable_free(struct gtp_dev *gtp);
+static int gtp_encap_enable(struct net_device *dev, struct gtp_dev *gtp,
+ int fd_gtp0, int fd_gtp1, struct net *src_net);
+
+static int gtp_newlink(struct net *src_net, struct net_device *dev,
+ struct nlattr *tb[], struct nlattr *data[])
+{
+ int hashsize, err, fd0, fd1;
+ struct gtp_dev *gtp;
+ struct gtp_net *gn;
+
+ if (!data[IFLA_GTP_FD0] || !data[IFLA_GTP_FD1])
+ return -EINVAL;
+
+ gtp = netdev_priv(dev);
+
+ fd0 = nla_get_u32(data[IFLA_GTP_FD0]);
+ fd1 = nla_get_u32(data[IFLA_GTP_FD1]);
+
+ err = gtp_encap_enable(dev, gtp, fd0, fd1, src_net);
+ if (err < 0)
+ goto out_err;
+
+ if (!data[IFLA_GTP_PDP_HASHSIZE])
+ hashsize = 1024;
+ else
+ hashsize = nla_get_u32(data[IFLA_GTP_PDP_HASHSIZE]);
+
+ err = gtp_hashtable_new(gtp, hashsize);
+ if (err < 0)
+ goto out_encap;
+
+ err = register_netdevice(dev);
+ if (err < 0) {
+ netdev_dbg(dev, "failed to register new netdev %d\n", err);
+ goto out_hashtable;
+ }
+
+ gn = net_generic(dev_net(dev), gtp_net_id);
+ list_add_rcu(&gtp->list, &gn->gtp_dev_list);
+
+ netdev_dbg(dev, "registered new GTP interface\n");
+
+ return 0;
+
+out_hashtable:
+ gtp_hashtable_free(gtp);
+out_encap:
+ gtp_encap_disable(gtp);
+out_err:
+ return err;
+}
+
+static void gtp_dellink(struct net_device *dev, struct list_head *head)
+{
+ struct gtp_dev *gtp = netdev_priv(dev);
+
+ gtp_encap_disable(gtp);
+ gtp_hashtable_free(gtp);
+ list_del_rcu(&gtp->list);
+ unregister_netdevice_queue(dev, head);
+}
+
+static const struct nla_policy gtp_policy[IFLA_GTP_MAX + 1] = {
+ [IFLA_GTP_FD0] = { .type = NLA_U32 },
+ [IFLA_GTP_FD1] = { .type = NLA_U32 },
+ [IFLA_GTP_PDP_HASHSIZE] = { .type = NLA_U32 },
+};
+
+static int gtp_validate(struct nlattr *tb[], struct nlattr *data[])
+{
+ if (!data)
+ return -EINVAL;
+
+ return 0;
+}
+
+static size_t gtp_get_size(const struct net_device *dev)
+{
+ return nla_total_size(sizeof(__u32)); /* IFLA_GTP_PDP_HASHSIZE */
+}
+
+static int gtp_fill_info(struct sk_buff *skb, const struct net_device *dev)
+{
+ struct gtp_dev *gtp = netdev_priv(dev);
+
+ if (nla_put_u32(skb, IFLA_GTP_PDP_HASHSIZE, gtp->hash_size))
+ goto nla_put_failure;
+
+ return 0;
+
+nla_put_failure:
+ return -EMSGSIZE;
+}
+
+static struct rtnl_link_ops gtp_link_ops __read_mostly = {
+ .kind = "gtp",
+ .maxtype = IFLA_GTP_MAX,
+ .policy = gtp_policy,
+ .priv_size = sizeof(struct gtp_dev),
+ .setup = gtp_link_setup,
+ .validate = gtp_validate,
+ .newlink = gtp_newlink,
+ .dellink = gtp_dellink,
+ .get_size = gtp_get_size,
+ .fill_info = gtp_fill_info,
+};
+
+static struct net *gtp_genl_get_net(struct net *src_net, struct nlattr *tb[])
+{
+ struct net *net;
+
+ /* Examine the link attributes and figure out which network namespace
+ * we are talking about.
+ */
+ if (tb[GTPA_NET_NS_FD])
+ net = get_net_ns_by_fd(nla_get_u32(tb[GTPA_NET_NS_FD]));
+ else
+ net = get_net(src_net);
+
+ return net;
+}
+
+static int gtp_hashtable_new(struct gtp_dev *gtp, int hsize)
+{
+ int i;
+
+ gtp->addr_hash = kmalloc(sizeof(struct hlist_head) * hsize, GFP_KERNEL);
+ if (gtp->addr_hash == NULL)
+ return -ENOMEM;
+
+ gtp->tid_hash = kmalloc(sizeof(struct hlist_head) * hsize, GFP_KERNEL);
+ if (gtp->tid_hash == NULL)
+ goto err1;
+
+ gtp->hash_size = hsize;
+
+ for (i = 0; i < hsize; i++) {
+ INIT_HLIST_HEAD(&gtp->addr_hash[i]);
+ INIT_HLIST_HEAD(&gtp->tid_hash[i]);
+ }
+ return 0;
+err1:
+ kfree(gtp->addr_hash);
+ return -ENOMEM;
+}
+
+static void gtp_hashtable_free(struct gtp_dev *gtp)
+{
+ struct pdp_ctx *pctx;
+ int i;
+
+ for (i = 0; i < gtp->hash_size; i++) {
+ hlist_for_each_entry_rcu(pctx, &gtp->tid_hash[i], hlist_tid) {
+ hlist_del_rcu(&pctx->hlist_tid);
+ hlist_del_rcu(&pctx->hlist_addr);
+ kfree_rcu(pctx, rcu_head);
+ }
+ }
+ synchronize_rcu();
+ kfree(gtp->addr_hash);
+ kfree(gtp->tid_hash);
+}
+
+static int gtp_encap_enable(struct net_device *dev, struct gtp_dev *gtp,
+ int fd_gtp0, int fd_gtp1, struct net *src_net)
+{
+ struct udp_tunnel_sock_cfg tuncfg = {NULL};
+ struct socket *sock0, *sock1u;
+ int err;
+
+ netdev_dbg(dev, "enable gtp on %d, %d\n", fd_gtp0, fd_gtp1);
+
+ sock0 = sockfd_lookup(fd_gtp0, &err);
+ if (sock0 == NULL) {
+ netdev_dbg(dev, "socket fd=%d not found (gtp0)\n", fd_gtp0);
+ return -ENOENT;
+ }
+
+ if (sock0->sk->sk_protocol != IPPROTO_UDP) {
+ netdev_dbg(dev, "socket fd=%d not UDP\n", fd_gtp0);
+ err = -EINVAL;
+ goto err1;
+ }
+
+ sock1u = sockfd_lookup(fd_gtp1, &err);
+ if (sock1u == NULL) {
+ netdev_dbg(dev, "socket fd=%d not found (gtp1u)\n", fd_gtp1);
+ err = -ENOENT;
+ goto err1;
+ }
+
+ if (sock1u->sk->sk_protocol != IPPROTO_UDP) {
+ netdev_dbg(dev, "socket fd=%d not UDP\n", fd_gtp1);
+ err = -EINVAL;
+ goto err2;
+ }
+
+ netdev_dbg(dev, "enable gtp on %p, %p\n", sock0, sock1u);
+
+ gtp->sock0 = sock0;
+ gtp->sock1u = sock1u;
+ gtp->net = src_net;
+
+ tuncfg.sk_user_data = gtp;
+ tuncfg.encap_rcv = gtp_encap_recv;
+ tuncfg.encap_destroy = gtp_encap_destroy;
+
+ tuncfg.encap_type = UDP_ENCAP_GTP0;
+ setup_udp_tunnel_sock(sock_net(gtp->sock0->sk), gtp->sock0, &tuncfg);
+
+ tuncfg.encap_type = UDP_ENCAP_GTP1U;
+ setup_udp_tunnel_sock(sock_net(gtp->sock1u->sk), gtp->sock1u, &tuncfg);
+
+ err = 0;
+err2:
+ sockfd_put(sock1u);
+err1:
+ sockfd_put(sock0);
+ return err;
+}
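
gtp_encap_enable() does not open any sockets itself: it expects two already-bound UDP sockets, handed over as file descriptors by the control-plane daemon, and merely switches them into GTP encapsulation mode. A minimal userspace sketch of preparing those descriptors follows; it is illustrative only, and the port numbers (3386 for GTPv0, 2152 for GTP-U) are the conventional GTP ports assumed here rather than values quoted from this patch.

/* Illustrative only: creating the two UDP sockets whose descriptors are
 * later passed to the kernel as IFLA_GTP_FD0/IFLA_GTP_FD1.
 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int open_gtp_socket(uint16_t port)
{
	struct sockaddr_in addr;
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0)
		return -1;

	memset(&addr, 0, sizeof(addr));
	addr.sin_family = AF_INET;
	addr.sin_addr.s_addr = htonl(INADDR_ANY);
	addr.sin_port = htons(port);

	if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		close(fd);
		return -1;
	}
	return fd;
}

int main(void)
{
	int fd0 = open_gtp_socket(3386);	/* GTPv0  */
	int fd1u = open_gtp_socket(2152);	/* GTPv1-U */

	if (fd0 < 0 || fd1u < 0) {
		perror("bind");
		return 1;
	}
	/* These descriptors would now be carried as IFLA_GTP_FD0/FD1 in the
	 * RTM_NEWLINK request that creates the gtp device (e.g. via libnl).
	 */
	printf("fd0=%d fd1u=%d\n", fd0, fd1u);
	return 0;
}
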
+
+static struct net_device *gtp_find_dev(struct net *net, int ifindex)
+{
+ struct gtp_net *gn = net_generic(net, gtp_net_id);
+ struct gtp_dev *gtp;
+
+ list_for_each_entry_rcu(gtp, &gn->gtp_dev_list, list) {
+ if (ifindex == gtp->dev->ifindex)
+ return gtp->dev;
+ }
+ return NULL;
+}
+
+static void ipv4_pdp_fill(struct pdp_ctx *pctx, struct genl_info *info)
+{
+ pctx->gtp_version = nla_get_u32(info->attrs[GTPA_VERSION]);
+ pctx->af = AF_INET;
+ pctx->sgsn_addr_ip4.s_addr =
+ nla_get_be32(info->attrs[GTPA_SGSN_ADDRESS]);
+ pctx->ms_addr_ip4.s_addr =
+ nla_get_be32(info->attrs[GTPA_MS_ADDRESS]);
+
+ switch (pctx->gtp_version) {
+ case GTP_V0:
+ /* According to TS 09.60, sections 7.5.1 and 7.5.2, the flow
+ * label needs to be the same for uplink and downlink packets,
+ * so let's annotate this.
+ */
+ pctx->u.v0.tid = nla_get_u64(info->attrs[GTPA_TID]);
+ pctx->u.v0.flow = nla_get_u16(info->attrs[GTPA_FLOW]);
+ break;
+ case GTP_V1:
+ pctx->u.v1.i_tei = nla_get_u32(info->attrs[GTPA_I_TEI]);
+ pctx->u.v1.o_tei = nla_get_u32(info->attrs[GTPA_O_TEI]);
+ break;
+ default:
+ break;
+ }
+}
+
+static int ipv4_pdp_add(struct net_device *dev, struct genl_info *info)
+{
+ struct gtp_dev *gtp = netdev_priv(dev);
+ u32 hash_ms, hash_tid = 0;
+ struct pdp_ctx *pctx;
+ bool found = false;
+ __be32 ms_addr;
+
+ ms_addr = nla_get_be32(info->attrs[GTPA_MS_ADDRESS]);
+ hash_ms = ipv4_hashfn(ms_addr) % gtp->hash_size;
+
+ hlist_for_each_entry_rcu(pctx, &gtp->addr_hash[hash_ms], hlist_addr) {
+ if (pctx->ms_addr_ip4.s_addr == ms_addr) {
+ found = true;
+ break;
+ }
+ }
+
+ if (found) {
+ if (info->nlhdr->nlmsg_flags & NLM_F_EXCL)
+ return -EEXIST;
+ if (info->nlhdr->nlmsg_flags & NLM_F_REPLACE)
+ return -EOPNOTSUPP;
+
+ ipv4_pdp_fill(pctx, info);
+
+ if (pctx->gtp_version == GTP_V0)
+ netdev_dbg(dev, "GTPv0-U: update tunnel id = %llx (pdp %p)\n",
+ pctx->u.v0.tid, pctx);
+ else if (pctx->gtp_version == GTP_V1)
+ netdev_dbg(dev, "GTPv1-U: update tunnel id = %x/%x (pdp %p)\n",
+ pctx->u.v1.i_tei, pctx->u.v1.o_tei, pctx);
+
+ return 0;
+
+ }
+
+ pctx = kmalloc(sizeof(struct pdp_ctx), GFP_KERNEL);
+ if (pctx == NULL)
+ return -ENOMEM;
+
+ ipv4_pdp_fill(pctx, info);
+ atomic_set(&pctx->tx_seq, 0);
+
+ switch (pctx->gtp_version) {
+ case GTP_V0:
+ /* TS 09.60: "The flow label identifies unambiguously a GTP
+		 * flow.". We use the tid for this instead; I cannot find a
+		 * situation in which this doesn't unambiguously identify the
+ * PDP context.
+ */
+ hash_tid = gtp0_hashfn(pctx->u.v0.tid) % gtp->hash_size;
+ break;
+ case GTP_V1:
+ hash_tid = gtp1u_hashfn(pctx->u.v1.i_tei) % gtp->hash_size;
+ break;
+ }
+
+ hlist_add_head_rcu(&pctx->hlist_addr, &gtp->addr_hash[hash_ms]);
+ hlist_add_head_rcu(&pctx->hlist_tid, &gtp->tid_hash[hash_tid]);
+
+ switch (pctx->gtp_version) {
+ case GTP_V0:
+		netdev_dbg(dev, "GTPv0-U: new PDP ctx id=%llx sgsn=%pI4 ms=%pI4 (pdp=%p)\n",
+ pctx->u.v0.tid, &pctx->sgsn_addr_ip4,
+ &pctx->ms_addr_ip4, pctx);
+ break;
+ case GTP_V1:
+		netdev_dbg(dev, "GTPv1-U: new PDP ctx id=%x/%x sgsn=%pI4 ms=%pI4 (pdp=%p)\n",
+ pctx->u.v1.i_tei, pctx->u.v1.o_tei,
+ &pctx->sgsn_addr_ip4, &pctx->ms_addr_ip4, pctx);
+ break;
+ }
+
+ return 0;
+}
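
ipv4_pdp_add() links every context into two hash tables: addr_hash, keyed by the mobile subscriber's IPv4 address and walked on transmit, and tid_hash, keyed by the GTPv0 TID or the GTPv1 incoming TEI and walked on receive, so the same pdp_ctx is reachable from both directions. The toy sketch below only illustrates that dual indexing; toy_hash() is a hypothetical stand-in for the driver's ipv4_hashfn()/gtp0_hashfn()/gtp1u_hashfn(), which are seeded with the random gtp_h_initval drawn in gtp_init().

/* Illustrative only: a context's two hash buckets, TX by MS address and
 * RX by TEI/TID, computed with a stand-in hash function.
 */
#include <stdint.h>
#include <stdio.h>

static uint32_t toy_hash(uint32_t key)
{
	/* Any mixing function will do for the sketch. */
	key ^= key >> 16;
	key *= 0x45d9f3b;
	key ^= key >> 16;
	return key;
}

int main(void)
{
	uint32_t hash_size = 1024;	/* IFLA_GTP_PDP_HASHSIZE default */
	uint32_t ms_addr = 0x0a2d0001;	/* 10.45.0.1, the MS address     */
	uint32_t i_tei = 0x2a;		/* incoming TEI                  */

	printf("addr_hash bucket (TX lookup): %u\n",
	       toy_hash(ms_addr) % hash_size);
	printf("tid_hash  bucket (RX lookup): %u\n",
	       toy_hash(i_tei) % hash_size);
	return 0;
}
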
+
+static int gtp_genl_new_pdp(struct sk_buff *skb, struct genl_info *info)
+{
+ struct net_device *dev;
+ struct net *net;
+
+ if (!info->attrs[GTPA_VERSION] ||
+ !info->attrs[GTPA_LINK] ||
+ !info->attrs[GTPA_SGSN_ADDRESS] ||
+ !info->attrs[GTPA_MS_ADDRESS])
+ return -EINVAL;
+
+ switch (nla_get_u32(info->attrs[GTPA_VERSION])) {
+ case GTP_V0:
+ if (!info->attrs[GTPA_TID] ||
+ !info->attrs[GTPA_FLOW])
+ return -EINVAL;
+ break;
+ case GTP_V1:
+ if (!info->attrs[GTPA_I_TEI] ||
+ !info->attrs[GTPA_O_TEI])
+ return -EINVAL;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ net = gtp_genl_get_net(sock_net(skb->sk), info->attrs);
+ if (IS_ERR(net))
+ return PTR_ERR(net);
+
+ /* Check if there's an existing gtpX device to configure */
+ dev = gtp_find_dev(net, nla_get_u32(info->attrs[GTPA_LINK]));
+ if (dev == NULL) {
+ put_net(net);
+ return -ENODEV;
+ }
+ put_net(net);
+
+ return ipv4_pdp_add(dev, info);
+}
+
+static int gtp_genl_del_pdp(struct sk_buff *skb, struct genl_info *info)
+{
+ struct net_device *dev;
+ struct pdp_ctx *pctx;
+ struct gtp_dev *gtp;
+ struct net *net;
+
+ if (!info->attrs[GTPA_VERSION] ||
+ !info->attrs[GTPA_LINK])
+ return -EINVAL;
+
+ net = gtp_genl_get_net(sock_net(skb->sk), info->attrs);
+ if (IS_ERR(net))
+ return PTR_ERR(net);
+
+ /* Check if there's an existing gtpX device to configure */
+ dev = gtp_find_dev(net, nla_get_u32(info->attrs[GTPA_LINK]));
+ if (dev == NULL) {
+ put_net(net);
+ return -ENODEV;
+ }
+ put_net(net);
+
+ gtp = netdev_priv(dev);
+
+ switch (nla_get_u32(info->attrs[GTPA_VERSION])) {
+ case GTP_V0:
+ if (!info->attrs[GTPA_TID])
+ return -EINVAL;
+ pctx = gtp0_pdp_find(gtp, nla_get_u64(info->attrs[GTPA_TID]));
+ break;
+ case GTP_V1:
+ if (!info->attrs[GTPA_I_TEI])
+ return -EINVAL;
+ pctx = gtp1_pdp_find(gtp, nla_get_u64(info->attrs[GTPA_I_TEI]));
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ if (pctx == NULL)
+ return -ENOENT;
+
+ if (pctx->gtp_version == GTP_V0)
+ netdev_dbg(dev, "GTPv0-U: deleting tunnel id = %llx (pdp %p)\n",
+ pctx->u.v0.tid, pctx);
+ else if (pctx->gtp_version == GTP_V1)
+ netdev_dbg(dev, "GTPv1-U: deleting tunnel id = %x/%x (pdp %p)\n",
+ pctx->u.v1.i_tei, pctx->u.v1.o_tei, pctx);
+
+ hlist_del_rcu(&pctx->hlist_tid);
+ hlist_del_rcu(&pctx->hlist_addr);
+ kfree_rcu(pctx, rcu_head);
+
+ return 0;
+}
+
+static struct genl_family gtp_genl_family = {
+ .id = GENL_ID_GENERATE,
+ .name = "gtp",
+ .version = 0,
+ .hdrsize = 0,
+ .maxattr = GTPA_MAX,
+ .netnsok = true,
+};
+
+static int gtp_genl_fill_info(struct sk_buff *skb, u32 snd_portid, u32 snd_seq,
+ u32 type, struct pdp_ctx *pctx)
+{
+ void *genlh;
+
+ genlh = genlmsg_put(skb, snd_portid, snd_seq, &gtp_genl_family, 0,
+ type);
+ if (genlh == NULL)
+ goto nlmsg_failure;
+
+ if (nla_put_u32(skb, GTPA_VERSION, pctx->gtp_version) ||
+ nla_put_be32(skb, GTPA_SGSN_ADDRESS, pctx->sgsn_addr_ip4.s_addr) ||
+ nla_put_be32(skb, GTPA_MS_ADDRESS, pctx->ms_addr_ip4.s_addr))
+ goto nla_put_failure;
+
+ switch (pctx->gtp_version) {
+ case GTP_V0:
+ if (nla_put_u64_64bit(skb, GTPA_TID, pctx->u.v0.tid, GTPA_PAD) ||
+ nla_put_u16(skb, GTPA_FLOW, pctx->u.v0.flow))
+ goto nla_put_failure;
+ break;
+ case GTP_V1:
+ if (nla_put_u32(skb, GTPA_I_TEI, pctx->u.v1.i_tei) ||
+ nla_put_u32(skb, GTPA_O_TEI, pctx->u.v1.o_tei))
+ goto nla_put_failure;
+ break;
+ }
+ genlmsg_end(skb, genlh);
+ return 0;
+
+nlmsg_failure:
+nla_put_failure:
+ genlmsg_cancel(skb, genlh);
+ return -EMSGSIZE;
+}
+
+static int gtp_genl_get_pdp(struct sk_buff *skb, struct genl_info *info)
+{
+ struct pdp_ctx *pctx = NULL;
+ struct net_device *dev;
+ struct sk_buff *skb2;
+ struct gtp_dev *gtp;
+ u32 gtp_version;
+ struct net *net;
+ int err;
+
+ if (!info->attrs[GTPA_VERSION] ||
+ !info->attrs[GTPA_LINK])
+ return -EINVAL;
+
+ gtp_version = nla_get_u32(info->attrs[GTPA_VERSION]);
+ switch (gtp_version) {
+ case GTP_V0:
+ case GTP_V1:
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ net = gtp_genl_get_net(sock_net(skb->sk), info->attrs);
+ if (IS_ERR(net))
+ return PTR_ERR(net);
+
+ /* Check if there's an existing gtpX device to configure */
+ dev = gtp_find_dev(net, nla_get_u32(info->attrs[GTPA_LINK]));
+ if (dev == NULL) {
+ put_net(net);
+ return -ENODEV;
+ }
+ put_net(net);
+
+ gtp = netdev_priv(dev);
+
+ rcu_read_lock();
+ if (gtp_version == GTP_V0 &&
+ info->attrs[GTPA_TID]) {
+ u64 tid = nla_get_u64(info->attrs[GTPA_TID]);
+
+ pctx = gtp0_pdp_find(gtp, tid);
+ } else if (gtp_version == GTP_V1 &&
+ info->attrs[GTPA_I_TEI]) {
+ u32 tid = nla_get_u32(info->attrs[GTPA_I_TEI]);
+
+ pctx = gtp1_pdp_find(gtp, tid);
+ } else if (info->attrs[GTPA_MS_ADDRESS]) {
+ __be32 ip = nla_get_be32(info->attrs[GTPA_MS_ADDRESS]);
+
+ pctx = ipv4_pdp_find(gtp, ip);
+ }
+
+ if (pctx == NULL) {
+ err = -ENOENT;
+ goto err_unlock;
+ }
+
+ skb2 = genlmsg_new(NLMSG_GOODSIZE, GFP_ATOMIC);
+ if (skb2 == NULL) {
+ err = -ENOMEM;
+ goto err_unlock;
+ }
+
+ err = gtp_genl_fill_info(skb2, NETLINK_CB(skb).portid,
+ info->snd_seq, info->nlhdr->nlmsg_type, pctx);
+ if (err < 0)
+ goto err_unlock_free;
+
+ rcu_read_unlock();
+ return genlmsg_unicast(genl_info_net(info), skb2, info->snd_portid);
+
+err_unlock_free:
+ kfree_skb(skb2);
+err_unlock:
+ rcu_read_unlock();
+ return err;
+}
+
+static int gtp_genl_dump_pdp(struct sk_buff *skb,
+ struct netlink_callback *cb)
+{
+ struct gtp_dev *last_gtp = (struct gtp_dev *)cb->args[2], *gtp;
+ struct net *net = sock_net(skb->sk);
+ struct gtp_net *gn = net_generic(net, gtp_net_id);
+ unsigned long tid = cb->args[1];
+ int i, k = cb->args[0], ret;
+ struct pdp_ctx *pctx;
+
+ if (cb->args[4])
+ return 0;
+
+ list_for_each_entry_rcu(gtp, &gn->gtp_dev_list, list) {
+ if (last_gtp && last_gtp != gtp)
+ continue;
+ else
+ last_gtp = NULL;
+
+ for (i = k; i < gtp->hash_size; i++) {
+ hlist_for_each_entry_rcu(pctx, &gtp->tid_hash[i], hlist_tid) {
+ if (tid && tid != pctx->u.tid)
+ continue;
+ else
+ tid = 0;
+
+ ret = gtp_genl_fill_info(skb,
+ NETLINK_CB(cb->skb).portid,
+ cb->nlh->nlmsg_seq,
+ cb->nlh->nlmsg_type, pctx);
+ if (ret < 0) {
+ cb->args[0] = i;
+ cb->args[1] = pctx->u.tid;
+ cb->args[2] = (unsigned long)gtp;
+ goto out;
+ }
+ }
+ }
+ }
+ cb->args[4] = 1;
+out:
+ return skb->len;
+}
+
+static struct nla_policy gtp_genl_policy[GTPA_MAX + 1] = {
+ [GTPA_LINK] = { .type = NLA_U32, },
+ [GTPA_VERSION] = { .type = NLA_U32, },
+ [GTPA_TID] = { .type = NLA_U64, },
+ [GTPA_SGSN_ADDRESS] = { .type = NLA_U32, },
+ [GTPA_MS_ADDRESS] = { .type = NLA_U32, },
+ [GTPA_FLOW] = { .type = NLA_U16, },
+ [GTPA_NET_NS_FD] = { .type = NLA_U32, },
+ [GTPA_I_TEI] = { .type = NLA_U32, },
+ [GTPA_O_TEI] = { .type = NLA_U32, },
+};
+
+static const struct genl_ops gtp_genl_ops[] = {
+ {
+ .cmd = GTP_CMD_NEWPDP,
+ .doit = gtp_genl_new_pdp,
+ .policy = gtp_genl_policy,
+ .flags = GENL_ADMIN_PERM,
+ },
+ {
+ .cmd = GTP_CMD_DELPDP,
+ .doit = gtp_genl_del_pdp,
+ .policy = gtp_genl_policy,
+ .flags = GENL_ADMIN_PERM,
+ },
+ {
+ .cmd = GTP_CMD_GETPDP,
+ .doit = gtp_genl_get_pdp,
+ .dumpit = gtp_genl_dump_pdp,
+ .policy = gtp_genl_policy,
+ .flags = GENL_ADMIN_PERM,
+ },
+};
+
+static int __net_init gtp_net_init(struct net *net)
+{
+ struct gtp_net *gn = net_generic(net, gtp_net_id);
+
+ INIT_LIST_HEAD(&gn->gtp_dev_list);
+ return 0;
+}
+
+static void __net_exit gtp_net_exit(struct net *net)
+{
+ struct gtp_net *gn = net_generic(net, gtp_net_id);
+ struct gtp_dev *gtp;
+ LIST_HEAD(list);
+
+ rtnl_lock();
+ list_for_each_entry(gtp, &gn->gtp_dev_list, list)
+ gtp_dellink(gtp->dev, &list);
+
+ unregister_netdevice_many(&list);
+ rtnl_unlock();
+}
+
+static struct pernet_operations gtp_net_ops = {
+ .init = gtp_net_init,
+ .exit = gtp_net_exit,
+ .id = &gtp_net_id,
+ .size = sizeof(struct gtp_net),
+};
+
+static int __init gtp_init(void)
+{
+ int err;
+
+ get_random_bytes(&gtp_h_initval, sizeof(gtp_h_initval));
+
+ err = rtnl_link_register(&gtp_link_ops);
+ if (err < 0)
+ goto error_out;
+
+ err = genl_register_family_with_ops(&gtp_genl_family, gtp_genl_ops);
+ if (err < 0)
+ goto unreg_rtnl_link;
+
+ err = register_pernet_subsys(&gtp_net_ops);
+ if (err < 0)
+ goto unreg_genl_family;
+
+ pr_info("GTP module loaded (pdp ctx size %Zd bytes)\n",
+ sizeof(struct pdp_ctx));
+ return 0;
+
+unreg_genl_family:
+ genl_unregister_family(&gtp_genl_family);
+unreg_rtnl_link:
+ rtnl_link_unregister(&gtp_link_ops);
+error_out:
+	pr_err("error loading GTP module\n");
+ return err;
+}
+late_initcall(gtp_init);
+
+static void __exit gtp_fini(void)
+{
+ unregister_pernet_subsys(&gtp_net_ops);
+ genl_unregister_family(&gtp_genl_family);
+ rtnl_link_unregister(&gtp_link_ops);
+
+ pr_info("GTP module unloaded\n");
+}
+module_exit(gtp_fini);
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Harald Welte <hwelte@sysmocom.de>");
+MODULE_DESCRIPTION("Interface driver for GTP encapsulated traffic");
+MODULE_ALIAS_RTNL_LINK("gtp");
diff --git a/drivers/net/hamradio/baycom_epp.c b/drivers/net/hamradio/baycom_epp.c
index 72c9f1f352b4..78dbc44540f6 100644
--- a/drivers/net/hamradio/baycom_epp.c
+++ b/drivers/net/hamradio/baycom_epp.c
@@ -635,10 +635,10 @@ static int receive(struct net_device *dev, int cnt)
#ifdef __i386__
#include <asm/msr.h>
-#define GETTICK(x) \
-({ \
- if (cpu_has_tsc) \
- x = (unsigned int)rdtsc(); \
+#define GETTICK(x) \
+({ \
+ if (boot_cpu_has(X86_FEATURE_TSC)) \
+ x = (unsigned int)rdtsc(); \
})
#else /* __i386__ */
#define GETTICK(x)
@@ -780,8 +780,10 @@ static int baycom_send_packet(struct sk_buff *skb, struct net_device *dev)
dev_kfree_skb(skb);
return NETDEV_TX_OK;
}
- if (bc->skb)
- return NETDEV_TX_LOCKED;
+ if (bc->skb) {
+ dev_kfree_skb(skb);
+ return NETDEV_TX_OK;
+ }
/* strip KISS byte */
if (skb->len >= HDLCDRV_MAXFLEN+1 || skb->len < 3) {
dev_kfree_skb(skb);
diff --git a/drivers/net/hamradio/hdlcdrv.c b/drivers/net/hamradio/hdlcdrv.c
index 49fe59b180a8..4bad0b894e9c 100644
--- a/drivers/net/hamradio/hdlcdrv.c
+++ b/drivers/net/hamradio/hdlcdrv.c
@@ -412,8 +412,10 @@ static netdev_tx_t hdlcdrv_send_packet(struct sk_buff *skb,
dev_kfree_skb(skb);
return NETDEV_TX_OK;
}
- if (sm->skb)
- return NETDEV_TX_LOCKED;
+ if (sm->skb) {
+ dev_kfree_skb(skb);
+ return NETDEV_TX_OK;
+ }
netif_stop_queue(dev);
sm->skb = skb;
return NETDEV_TX_OK;
diff --git a/drivers/net/hamradio/mkiss.c b/drivers/net/hamradio/mkiss.c
index 85828f153445..1dfe2304daa7 100644
--- a/drivers/net/hamradio/mkiss.c
+++ b/drivers/net/hamradio/mkiss.c
@@ -519,7 +519,7 @@ static void ax_encaps(struct net_device *dev, unsigned char *icp, int len)
dev->stats.tx_packets++;
dev->stats.tx_bytes += actual;
- ax->dev->trans_start = jiffies;
+ netif_trans_update(ax->dev);
ax->xleft = count - actual;
ax->xhead = ax->xbuff + actual;
}
@@ -542,7 +542,7 @@ static netdev_tx_t ax_xmit(struct sk_buff *skb, struct net_device *dev)
* May be we must check transmitter timeout here ?
* 14 Oct 1994 Dmitry Gorodchanin.
*/
- if (time_before(jiffies, dev->trans_start + 20 * HZ)) {
+ if (time_before(jiffies, dev_trans_start(dev) + 20 * HZ)) {
/* 20 sec timeout not reached */
return NETDEV_TX_BUSY;
}
diff --git a/drivers/net/hamradio/scc.c b/drivers/net/hamradio/scc.c
index ce88df33fe17..b8083161ef46 100644
--- a/drivers/net/hamradio/scc.c
+++ b/drivers/net/hamradio/scc.c
@@ -1669,7 +1669,7 @@ static netdev_tx_t scc_net_tx(struct sk_buff *skb, struct net_device *dev)
dev_kfree_skb(skb_del);
}
skb_queue_tail(&scc->tx_queue, skb);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
/*
diff --git a/drivers/net/hamradio/yam.c b/drivers/net/hamradio/yam.c
index 1a4729c36aa4..aaff07c10058 100644
--- a/drivers/net/hamradio/yam.c
+++ b/drivers/net/hamradio/yam.c
@@ -601,7 +601,7 @@ static netdev_tx_t yam_send_packet(struct sk_buff *skb,
return ax25_ip_xmit(skb);
skb_queue_tail(&yp->send_queue, skb);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
return NETDEV_TX_OK;
}
diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
index 8b3bd8ecd1c4..c270c5a54f3a 100644
--- a/drivers/net/hyperv/hyperv_net.h
+++ b/drivers/net/hyperv/hyperv_net.h
@@ -158,7 +158,7 @@ enum rndis_device_state {
};
struct rndis_device {
- struct netvsc_device *net_dev;
+ struct net_device *ndev;
enum rndis_device_state state;
bool link_state;
@@ -202,6 +202,8 @@ int rndis_filter_receive(struct hv_device *dev,
int rndis_filter_set_packet_filter(struct rndis_device *dev, u32 new_filter);
int rndis_filter_set_device_mac(struct hv_device *hdev, char *mac);
+void netvsc_switch_datapath(struct net_device *nv_dev, bool vf);
+
#define NVSP_INVALID_PROTOCOL_VERSION ((u32)0xFFFFFFFF)
#define NVSP_PROTOCOL_VERSION_1 2
@@ -641,10 +643,18 @@ struct netvsc_reconfig {
u32 event;
};
+struct garp_wrk {
+ struct work_struct dwrk;
+ struct net_device *netdev;
+ struct netvsc_device *netvsc_dev;
+};
+
/* The context of the netvsc device */
struct net_device_context {
/* point back to our device context */
struct hv_device *device_ctx;
+ /* netvsc_device */
+ struct netvsc_device *nvdev;
/* reconfigure work */
struct delayed_work dwork;
/* last reconfig time */
@@ -656,6 +666,7 @@ struct net_device_context {
struct work_struct work;
u32 msg_enable; /* debug level */
+ struct garp_wrk gwrk;
struct netvsc_stats __percpu *tx_stats;
struct netvsc_stats __percpu *rx_stats;
@@ -663,17 +674,17 @@ struct net_device_context {
/* Ethtool settings */
u8 duplex;
u32 speed;
+
+ /* the device is going away */
+ bool start_remove;
};
/* Per netvsc device */
struct netvsc_device {
- struct hv_device *dev;
-
u32 nvsp_version;
atomic_t num_outstanding_sends;
wait_queue_head_t wait_drain;
- bool start_remove;
bool destroy;
/* Receive buffer allocated by us but manages by NetVSP */
@@ -699,8 +710,6 @@ struct netvsc_device {
struct nvsp_message revoke_packet;
/* unsigned char HwMacAddr[HW_MACADDR_LEN]; */
- struct net_device *ndev;
-
struct vmbus_channel *chn_table[VRSS_CHANNEL_MAX];
u32 send_table[VRSS_SEND_TAB_SIZE];
u32 max_chn;
@@ -723,13 +732,15 @@ struct netvsc_device {
u32 max_pkt; /* max number of pkt in one send, e.g. 8 */
u32 pkt_align; /* alignment bytes, e.g. 8 */
- /* The net device context */
- struct net_device_context *nd_ctx;
-
/* 1: allocated, serial number is valid. 0: not allocated */
u32 vf_alloc;
/* Serial number of the VF to team with */
u32 vf_serial;
+ atomic_t open_cnt;
+ /* State to manage the associated VF interface. */
+ bool vf_inject;
+ struct net_device *vf_netdev;
+ atomic_t vf_use_cnt;
};
/* NdisInitialize message */
diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
index ec313fc08d82..719cb3578e55 100644
--- a/drivers/net/hyperv/netvsc.c
+++ b/drivers/net/hyperv/netvsc.c
@@ -33,11 +33,36 @@
#include "hyperv_net.h"
+/*
+ * Switch the data path from the synthetic interface to the VF
+ * interface.
+ */
+void netvsc_switch_datapath(struct net_device *ndev, bool vf)
+{
+ struct net_device_context *net_device_ctx = netdev_priv(ndev);
+ struct hv_device *dev = net_device_ctx->device_ctx;
+ struct netvsc_device *nv_dev = net_device_ctx->nvdev;
+ struct nvsp_message *init_pkt = &nv_dev->channel_init_pkt;
+
+ memset(init_pkt, 0, sizeof(struct nvsp_message));
+ init_pkt->hdr.msg_type = NVSP_MSG4_TYPE_SWITCH_DATA_PATH;
+ if (vf)
+ init_pkt->msg.v4_msg.active_dp.active_datapath =
+ NVSP_DATAPATH_VF;
+ else
+ init_pkt->msg.v4_msg.active_dp.active_datapath =
+ NVSP_DATAPATH_SYNTHETIC;
+
+ vmbus_sendpacket(dev->channel, init_pkt,
+ sizeof(struct nvsp_message),
+ (unsigned long)init_pkt,
+ VM_PKT_DATA_INBAND, 0);
+}
+
-static struct netvsc_device *alloc_net_device(struct hv_device *device)
+static struct netvsc_device *alloc_net_device(void)
{
struct netvsc_device *net_device;
- struct net_device *ndev = hv_get_drvdata(device);
net_device = kzalloc(sizeof(struct netvsc_device), GFP_KERNEL);
if (!net_device)
@@ -50,14 +75,15 @@ static struct netvsc_device *alloc_net_device(struct hv_device *device)
}
init_waitqueue_head(&net_device->wait_drain);
- net_device->start_remove = false;
net_device->destroy = false;
- net_device->dev = device;
- net_device->ndev = ndev;
+ atomic_set(&net_device->open_cnt, 0);
+ atomic_set(&net_device->vf_use_cnt, 0);
net_device->max_pkt = RNDIS_MAX_PKT_DEFAULT;
net_device->pkt_align = RNDIS_PKT_ALIGN_DEFAULT;
- hv_set_drvdata(device, net_device);
+ net_device->vf_netdev = NULL;
+ net_device->vf_inject = false;
+
return net_device;
}
@@ -69,9 +95,10 @@ static void free_netvsc_device(struct netvsc_device *nvdev)
static struct netvsc_device *get_outbound_net_device(struct hv_device *device)
{
- struct netvsc_device *net_device;
+ struct net_device *ndev = hv_get_drvdata(device);
+ struct net_device_context *net_device_ctx = netdev_priv(ndev);
+ struct netvsc_device *net_device = net_device_ctx->nvdev;
- net_device = hv_get_drvdata(device);
if (net_device && net_device->destroy)
net_device = NULL;
@@ -80,9 +107,9 @@ static struct netvsc_device *get_outbound_net_device(struct hv_device *device)
static struct netvsc_device *get_inbound_net_device(struct hv_device *device)
{
- struct netvsc_device *net_device;
-
- net_device = hv_get_drvdata(device);
+ struct net_device *ndev = hv_get_drvdata(device);
+ struct net_device_context *net_device_ctx = netdev_priv(ndev);
+ struct netvsc_device *net_device = net_device_ctx->nvdev;
if (!net_device)
goto get_in_err;
@@ -96,11 +123,13 @@ get_in_err:
}
-static int netvsc_destroy_buf(struct netvsc_device *net_device)
+static int netvsc_destroy_buf(struct hv_device *device)
{
struct nvsp_message *revoke_packet;
int ret = 0;
- struct net_device *ndev = net_device->ndev;
+ struct net_device *ndev = hv_get_drvdata(device);
+ struct net_device_context *net_device_ctx = netdev_priv(ndev);
+ struct netvsc_device *net_device = net_device_ctx->nvdev;
/*
* If we got a section count, it means we received a
@@ -118,7 +147,7 @@ static int netvsc_destroy_buf(struct netvsc_device *net_device)
revoke_packet->msg.v1_msg.
revoke_recv_buf.id = NETVSC_RECEIVE_BUFFER_ID;
- ret = vmbus_sendpacket(net_device->dev->channel,
+ ret = vmbus_sendpacket(device->channel,
revoke_packet,
sizeof(struct nvsp_message),
(unsigned long)revoke_packet,
@@ -136,8 +165,8 @@ static int netvsc_destroy_buf(struct netvsc_device *net_device)
/* Teardown the gpadl on the vsp end */
if (net_device->recv_buf_gpadl_handle) {
- ret = vmbus_teardown_gpadl(net_device->dev->channel,
- net_device->recv_buf_gpadl_handle);
+ ret = vmbus_teardown_gpadl(device->channel,
+ net_device->recv_buf_gpadl_handle);
/* If we failed here, we might as well return and have a leak
* rather than continue and a bugchk
@@ -178,7 +207,7 @@ static int netvsc_destroy_buf(struct netvsc_device *net_device)
revoke_packet->msg.v1_msg.revoke_send_buf.id =
NETVSC_SEND_BUFFER_ID;
- ret = vmbus_sendpacket(net_device->dev->channel,
+ ret = vmbus_sendpacket(device->channel,
revoke_packet,
sizeof(struct nvsp_message),
(unsigned long)revoke_packet,
@@ -194,7 +223,7 @@ static int netvsc_destroy_buf(struct netvsc_device *net_device)
}
/* Teardown the gpadl on the vsp end */
if (net_device->send_buf_gpadl_handle) {
- ret = vmbus_teardown_gpadl(net_device->dev->channel,
+ ret = vmbus_teardown_gpadl(device->channel,
net_device->send_buf_gpadl_handle);
/* If we failed here, we might as well return and have a leak
@@ -229,7 +258,7 @@ static int netvsc_init_buf(struct hv_device *device)
net_device = get_outbound_net_device(device);
if (!net_device)
return -ENODEV;
- ndev = net_device->ndev;
+ ndev = hv_get_drvdata(device);
node = cpu_to_node(device->channel->target_cpu);
net_device->recv_buf = vzalloc_node(net_device->recv_buf_size, node);
@@ -406,7 +435,7 @@ static int netvsc_init_buf(struct hv_device *device)
goto exit;
cleanup:
- netvsc_destroy_buf(net_device);
+ netvsc_destroy_buf(device);
exit:
return ret;
@@ -419,6 +448,7 @@ static int negotiate_nvsp_ver(struct hv_device *device,
struct nvsp_message *init_packet,
u32 nvsp_ver)
{
+ struct net_device *ndev = hv_get_drvdata(device);
int ret;
unsigned long t;
@@ -452,8 +482,7 @@ static int negotiate_nvsp_ver(struct hv_device *device,
/* NVSPv2 or later: Send NDIS config */
memset(init_packet, 0, sizeof(struct nvsp_message));
init_packet->hdr.msg_type = NVSP_MSG2_TYPE_SEND_NDIS_CONFIG;
- init_packet->msg.v2_msg.send_ndis_config.mtu = net_device->ndev->mtu +
- ETH_HLEN;
+ init_packet->msg.v2_msg.send_ndis_config.mtu = ndev->mtu + ETH_HLEN;
init_packet->msg.v2_msg.send_ndis_config.capability.ieee8021q = 1;
if (nvsp_ver >= NVSP_PROTOCOL_VERSION_5)
@@ -473,7 +502,6 @@ static int netvsc_connect_vsp(struct hv_device *device)
struct netvsc_device *net_device;
struct nvsp_message *init_packet;
int ndis_version;
- struct net_device *ndev;
u32 ver_list[] = { NVSP_PROTOCOL_VERSION_1, NVSP_PROTOCOL_VERSION_2,
NVSP_PROTOCOL_VERSION_4, NVSP_PROTOCOL_VERSION_5 };
int i, num_ver = 4; /* number of different NVSP versions */
@@ -481,7 +509,6 @@ static int netvsc_connect_vsp(struct hv_device *device)
net_device = get_outbound_net_device(device);
if (!net_device)
return -ENODEV;
- ndev = net_device->ndev;
init_packet = &net_device->channel_init_pkt;
@@ -537,9 +564,9 @@ cleanup:
return ret;
}
-static void netvsc_disconnect_vsp(struct netvsc_device *net_device)
+static void netvsc_disconnect_vsp(struct hv_device *device)
{
- netvsc_destroy_buf(net_device);
+ netvsc_destroy_buf(device);
}
/*
@@ -547,24 +574,13 @@ static void netvsc_disconnect_vsp(struct netvsc_device *net_device)
*/
int netvsc_device_remove(struct hv_device *device)
{
- struct netvsc_device *net_device;
- unsigned long flags;
-
- net_device = hv_get_drvdata(device);
-
- netvsc_disconnect_vsp(net_device);
+ struct net_device *ndev = hv_get_drvdata(device);
+ struct net_device_context *net_device_ctx = netdev_priv(ndev);
+ struct netvsc_device *net_device = net_device_ctx->nvdev;
- /*
- * Since we have already drained, we don't need to busy wait
- * as was done in final_release_stor_device()
- * Note that we cannot set the ext pointer to NULL until
- * we have drained - to drain the outgoing packets, we need to
- * allow incoming packets.
- */
+ netvsc_disconnect_vsp(device);
- spin_lock_irqsave(&device->channel->inbound_lock, flags);
- hv_set_drvdata(device, NULL);
- spin_unlock_irqrestore(&device->channel->inbound_lock, flags);
+ net_device_ctx->nvdev = NULL;
/*
* At this point, no one should be accessing net_device
@@ -612,12 +628,11 @@ static void netvsc_send_completion(struct netvsc_device *net_device,
{
struct nvsp_message *nvsp_packet;
struct hv_netvsc_packet *nvsc_packet;
- struct net_device *ndev;
+ struct net_device *ndev = hv_get_drvdata(device);
+ struct net_device_context *net_device_ctx = netdev_priv(ndev);
u32 send_index;
struct sk_buff *skb;
- ndev = net_device->ndev;
-
nvsp_packet = (struct nvsp_message *)((unsigned long)packet +
(packet->offset8 << 3));
@@ -662,7 +677,7 @@ static void netvsc_send_completion(struct netvsc_device *net_device,
wake_up(&net_device->wait_drain);
if (netif_tx_queue_stopped(netdev_get_tx_queue(ndev, q_idx)) &&
- !net_device->start_remove &&
+ !net_device_ctx->start_remove &&
(hv_ringbuf_avail_percent(&channel->outbound) >
RING_AVAIL_PERCENT_HIWATER || queue_sends < 1))
netif_tx_wake_queue(netdev_get_tx_queue(
@@ -746,6 +761,7 @@ static u32 netvsc_copy_to_send_buf(struct netvsc_device *net_device,
}
static inline int netvsc_send_pkt(
+ struct hv_device *device,
struct hv_netvsc_packet *packet,
struct netvsc_device *net_device,
struct hv_page_buffer **pb,
@@ -754,7 +770,7 @@ static inline int netvsc_send_pkt(
struct nvsp_message nvmsg;
u16 q_idx = packet->q_idx;
struct vmbus_channel *out_channel = net_device->chn_table[q_idx];
- struct net_device *ndev = net_device->ndev;
+ struct net_device *ndev = hv_get_drvdata(device);
u64 req_id;
int ret;
struct hv_page_buffer *pgbuf;
@@ -949,7 +965,8 @@ int netvsc_send(struct hv_device *device,
}
if (msd_send) {
- m_ret = netvsc_send_pkt(msd_send, net_device, NULL, msd_skb);
+ m_ret = netvsc_send_pkt(device, msd_send, net_device,
+ NULL, msd_skb);
if (m_ret != 0) {
netvsc_free_send_slot(net_device,
@@ -960,7 +977,7 @@ int netvsc_send(struct hv_device *device,
send_now:
if (cur_send)
- ret = netvsc_send_pkt(cur_send, net_device, pb, skb);
+ ret = netvsc_send_pkt(device, cur_send, net_device, pb, skb);
if (ret != 0 && section_index != NETVSC_INVALID_INDEX)
netvsc_free_send_slot(net_device, section_index);
@@ -976,9 +993,7 @@ static void netvsc_send_recv_completion(struct hv_device *device,
struct nvsp_message recvcompMessage;
int retries = 0;
int ret;
- struct net_device *ndev;
-
- ndev = net_device->ndev;
+ struct net_device *ndev = hv_get_drvdata(device);
recvcompMessage.hdr.msg_type =
NVSP_MSG1_TYPE_SEND_RNDIS_PKT_COMPLETE;
@@ -1025,11 +1040,9 @@ static void netvsc_receive(struct netvsc_device *net_device,
u32 status = NVSP_STAT_SUCCESS;
int i;
int count = 0;
- struct net_device *ndev;
+ struct net_device *ndev = hv_get_drvdata(device);
void *data;
- ndev = net_device->ndev;
-
/*
* All inbound packets other than send completion should be xfer page
* packet
@@ -1085,14 +1098,13 @@ static void netvsc_send_table(struct hv_device *hdev,
struct nvsp_message *nvmsg)
{
struct netvsc_device *nvscdev;
- struct net_device *ndev;
+ struct net_device *ndev = hv_get_drvdata(hdev);
int i;
u32 count, *tab;
nvscdev = get_outbound_net_device(hdev);
if (!nvscdev)
return;
- ndev = nvscdev->ndev;
count = nvmsg->msg.v5_msg.send_table.count;
if (count != VRSS_SEND_TAB_SIZE) {
@@ -1151,7 +1163,7 @@ void netvsc_channel_cb(void *context)
net_device = get_inbound_net_device(device);
if (!net_device)
return;
- ndev = net_device->ndev;
+ ndev = hv_get_drvdata(device);
buffer = get_per_channel_state(channel);
do {
@@ -1224,30 +1236,19 @@ void netvsc_channel_cb(void *context)
*/
int netvsc_device_add(struct hv_device *device, void *additional_info)
{
- int ret = 0;
+ int i, ret = 0;
int ring_size =
((struct netvsc_device_info *)additional_info)->ring_size;
struct netvsc_device *net_device;
- struct net_device *ndev;
+ struct net_device *ndev = hv_get_drvdata(device);
+ struct net_device_context *net_device_ctx = netdev_priv(ndev);
- net_device = alloc_net_device(device);
+ net_device = alloc_net_device();
if (!net_device)
return -ENOMEM;
net_device->ring_size = ring_size;
- /*
- * Coming into this function, struct net_device * is
- * registered as the driver private data.
- * In alloc_net_device(), we register struct netvsc_device *
- * as the driver private data and stash away struct net_device *
- * in struct netvsc_device *.
- */
- ndev = net_device->ndev;
-
- /* Add netvsc_device context to netvsc_device */
- net_device->nd_ctx = netdev_priv(ndev);
-
/* Initialize the NetVSC channel extension */
init_completion(&net_device->channel_init_wait);
@@ -1266,7 +1267,19 @@ int netvsc_device_add(struct hv_device *device, void *additional_info)
/* Channel is opened */
pr_info("hv_netvsc channel opened successfully\n");
- net_device->chn_table[0] = device->channel;
+ /* If we're reopening the device we may have multiple queues, fill the
+ * chn_table with the default channel to use it before subchannels are
+ * opened.
+ */
+ for (i = 0; i < VRSS_CHANNEL_MAX; i++)
+ net_device->chn_table[i] = device->channel;
+
+ /* Writing nvdev pointer unlocks netvsc_send(), make sure chn_table is
+ * populated.
+ */
+ wmb();
+
+ net_device_ctx->nvdev = net_device;
/* Connect with the NetVsp */
ret = netvsc_connect_vsp(device);
diff --git a/drivers/net/hyperv/netvsc_drv.c b/drivers/net/hyperv/netvsc_drv.c
index b8121eba33ff..6a69b5cc9fe2 100644
--- a/drivers/net/hyperv/netvsc_drv.c
+++ b/drivers/net/hyperv/netvsc_drv.c
@@ -67,18 +67,19 @@ static void do_set_multicast(struct work_struct *w)
{
struct net_device_context *ndevctx =
container_of(w, struct net_device_context, work);
- struct netvsc_device *nvdev;
+ struct hv_device *device_obj = ndevctx->device_ctx;
+ struct net_device *ndev = hv_get_drvdata(device_obj);
+ struct netvsc_device *nvdev = ndevctx->nvdev;
struct rndis_device *rdev;
- nvdev = hv_get_drvdata(ndevctx->device_ctx);
- if (nvdev == NULL || nvdev->ndev == NULL)
+ if (!nvdev)
return;
rdev = nvdev->extension;
if (rdev == NULL)
return;
- if (nvdev->ndev->flags & IFF_PROMISC)
+ if (ndev->flags & IFF_PROMISC)
rndis_filter_set_packet_filter(rdev,
NDIS_PACKET_TYPE_PROMISCUOUS);
else
@@ -99,7 +100,7 @@ static int netvsc_open(struct net_device *net)
{
struct net_device_context *net_device_ctx = netdev_priv(net);
struct hv_device *device_obj = net_device_ctx->device_ctx;
- struct netvsc_device *nvdev;
+ struct netvsc_device *nvdev = net_device_ctx->nvdev;
struct rndis_device *rdev;
int ret = 0;
@@ -114,7 +115,6 @@ static int netvsc_open(struct net_device *net)
netif_tx_wake_all_queues(net);
- nvdev = hv_get_drvdata(device_obj);
rdev = nvdev->extension;
if (!rdev->link_state)
netif_carrier_on(net);
@@ -126,7 +126,7 @@ static int netvsc_close(struct net_device *net)
{
struct net_device_context *net_device_ctx = netdev_priv(net);
struct hv_device *device_obj = net_device_ctx->device_ctx;
- struct netvsc_device *nvdev = hv_get_drvdata(device_obj);
+ struct netvsc_device *nvdev = net_device_ctx->nvdev;
int ret;
u32 aread, awrite, i, msec = 10, retry = 0, retry_max = 20;
struct vmbus_channel *chn;
@@ -205,8 +205,7 @@ static u16 netvsc_select_queue(struct net_device *ndev, struct sk_buff *skb,
void *accel_priv, select_queue_fallback_t fallback)
{
struct net_device_context *net_device_ctx = netdev_priv(ndev);
- struct hv_device *hdev = net_device_ctx->device_ctx;
- struct netvsc_device *nvsc_dev = hv_get_drvdata(hdev);
+ struct netvsc_device *nvsc_dev = net_device_ctx->nvdev;
u32 hash;
u16 q_idx = 0;
@@ -580,7 +579,6 @@ void netvsc_linkstatus_callback(struct hv_device *device_obj,
struct rndis_indicate_status *indicate = &resp->msg.indicate_status;
struct net_device *net;
struct net_device_context *ndev_ctx;
- struct netvsc_device *net_device;
struct netvsc_reconfig *event;
unsigned long flags;
@@ -590,8 +588,7 @@ void netvsc_linkstatus_callback(struct hv_device *device_obj,
indicate->status != RNDIS_STATUS_MEDIA_DISCONNECT)
return;
- net_device = hv_get_drvdata(device_obj);
- net = net_device->ndev;
+ net = hv_get_drvdata(device_obj);
if (!net || net->reg_state != NETREG_REGISTERED)
return;
@@ -610,42 +607,24 @@ void netvsc_linkstatus_callback(struct hv_device *device_obj,
schedule_delayed_work(&ndev_ctx->dwork, 0);
}
-/*
- * netvsc_recv_callback - Callback when we receive a packet from the
- * "wire" on the specified device.
- */
-int netvsc_recv_callback(struct hv_device *device_obj,
+
+static struct sk_buff *netvsc_alloc_recv_skb(struct net_device *net,
struct hv_netvsc_packet *packet,
- void **data,
struct ndis_tcp_ip_checksum_info *csum_info,
- struct vmbus_channel *channel,
- u16 vlan_tci)
+ void *data, u16 vlan_tci)
{
- struct net_device *net;
- struct net_device_context *net_device_ctx;
struct sk_buff *skb;
- struct netvsc_stats *rx_stats;
- net = ((struct netvsc_device *)hv_get_drvdata(device_obj))->ndev;
- if (!net || net->reg_state != NETREG_REGISTERED) {
- return NVSP_STAT_FAIL;
- }
- net_device_ctx = netdev_priv(net);
- rx_stats = this_cpu_ptr(net_device_ctx->rx_stats);
-
- /* Allocate a skb - TODO direct I/O to pages? */
skb = netdev_alloc_skb_ip_align(net, packet->total_data_buflen);
- if (unlikely(!skb)) {
- ++net->stats.rx_dropped;
- return NVSP_STAT_FAIL;
- }
+ if (!skb)
+ return skb;
/*
* Copy to skb. This copy is needed here since the memory pointed by
* hv_netvsc_packet cannot be deallocated
*/
- memcpy(skb_put(skb, packet->total_data_buflen), *data,
- packet->total_data_buflen);
+ memcpy(skb_put(skb, packet->total_data_buflen), data,
+ packet->total_data_buflen);
skb->protocol = eth_type_trans(skb, net);
if (csum_info) {
@@ -663,6 +642,74 @@ int netvsc_recv_callback(struct hv_device *device_obj,
__vlan_hwaccel_put_tag(skb, htons(ETH_P_8021Q),
vlan_tci);
+ return skb;
+}
+
+/*
+ * netvsc_recv_callback - Callback when we receive a packet from the
+ * "wire" on the specified device.
+ */
+int netvsc_recv_callback(struct hv_device *device_obj,
+ struct hv_netvsc_packet *packet,
+ void **data,
+ struct ndis_tcp_ip_checksum_info *csum_info,
+ struct vmbus_channel *channel,
+ u16 vlan_tci)
+{
+ struct net_device *net = hv_get_drvdata(device_obj);
+ struct net_device_context *net_device_ctx = netdev_priv(net);
+ struct sk_buff *skb;
+ struct sk_buff *vf_skb;
+ struct netvsc_stats *rx_stats;
+ struct netvsc_device *netvsc_dev = net_device_ctx->nvdev;
+ u32 bytes_recvd = packet->total_data_buflen;
+ int ret = 0;
+
+ if (!net || net->reg_state != NETREG_REGISTERED)
+ return NVSP_STAT_FAIL;
+
+ if (READ_ONCE(netvsc_dev->vf_inject)) {
+ atomic_inc(&netvsc_dev->vf_use_cnt);
+ if (!READ_ONCE(netvsc_dev->vf_inject)) {
+ /*
+ * We raced; just move on.
+ */
+ atomic_dec(&netvsc_dev->vf_use_cnt);
+ goto vf_injection_done;
+ }
+
+ /*
+		 * Inject this packet into the VF interface.
+		 * On Hyper-V, multicast and broadcast packets
+ * are only delivered on the synthetic interface
+ * (after subjecting these to policy filters on
+ * the host). Deliver these via the VF interface
+ * in the guest.
+ */
+ vf_skb = netvsc_alloc_recv_skb(netvsc_dev->vf_netdev, packet,
+ csum_info, *data, vlan_tci);
+ if (vf_skb != NULL) {
+ ++netvsc_dev->vf_netdev->stats.rx_packets;
+ netvsc_dev->vf_netdev->stats.rx_bytes += bytes_recvd;
+ netif_receive_skb(vf_skb);
+ } else {
+ ++net->stats.rx_dropped;
+ ret = NVSP_STAT_FAIL;
+ }
+ atomic_dec(&netvsc_dev->vf_use_cnt);
+ return ret;
+ }
+
+vf_injection_done:
+ net_device_ctx = netdev_priv(net);
+ rx_stats = this_cpu_ptr(net_device_ctx->rx_stats);
+
+ /* Allocate a skb - TODO direct I/O to pages? */
+ skb = netvsc_alloc_recv_skb(net, packet, csum_info, *data, vlan_tci);
+ if (unlikely(!skb)) {
+ ++net->stats.rx_dropped;
+ return NVSP_STAT_FAIL;
+ }
skb_record_rx_queue(skb, channel->
offermsg.offer.sub_channel_index);
@@ -692,8 +739,7 @@ static void netvsc_get_channels(struct net_device *net,
struct ethtool_channels *channel)
{
struct net_device_context *net_device_ctx = netdev_priv(net);
- struct hv_device *dev = net_device_ctx->device_ctx;
- struct netvsc_device *nvdev = hv_get_drvdata(dev);
+ struct netvsc_device *nvdev = net_device_ctx->nvdev;
if (nvdev) {
channel->max_combined = nvdev->max_chn;
@@ -706,14 +752,14 @@ static int netvsc_set_channels(struct net_device *net,
{
struct net_device_context *net_device_ctx = netdev_priv(net);
struct hv_device *dev = net_device_ctx->device_ctx;
- struct netvsc_device *nvdev = hv_get_drvdata(dev);
+ struct netvsc_device *nvdev = net_device_ctx->nvdev;
struct netvsc_device_info device_info;
u32 num_chn;
u32 max_chn;
int ret = 0;
bool recovering = false;
- if (!nvdev || nvdev->destroy)
+ if (net_device_ctx->start_remove || !nvdev || nvdev->destroy)
return -ENODEV;
num_chn = nvdev->num_chn;
@@ -742,14 +788,11 @@ static int netvsc_set_channels(struct net_device *net,
goto out;
do_set:
- nvdev->start_remove = true;
+ net_device_ctx->start_remove = true;
rndis_filter_device_remove(dev);
nvdev->num_chn = channels->combined_count;
- net_device_ctx->device_ctx = dev;
- hv_set_drvdata(dev, net);
-
memset(&device_info, 0, sizeof(device_info));
device_info.num_chn = nvdev->num_chn; /* passed to RNDIS */
device_info.ring_size = ring_size;
@@ -764,7 +807,7 @@ static int netvsc_set_channels(struct net_device *net,
goto recover;
}
- nvdev = hv_get_drvdata(dev);
+ nvdev = net_device_ctx->nvdev;
ret = netif_set_real_num_tx_queues(net, nvdev->num_chn);
if (ret) {
@@ -786,6 +829,9 @@ static int netvsc_set_channels(struct net_device *net,
out:
netvsc_open(net);
+ net_device_ctx->start_remove = false;
+ /* We may have missed link change notifications */
+ schedule_delayed_work(&net_device_ctx->dwork, 0);
return ret;
@@ -854,14 +900,14 @@ static int netvsc_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
static int netvsc_change_mtu(struct net_device *ndev, int mtu)
{
struct net_device_context *ndevctx = netdev_priv(ndev);
- struct hv_device *hdev = ndevctx->device_ctx;
- struct netvsc_device *nvdev = hv_get_drvdata(hdev);
+ struct netvsc_device *nvdev = ndevctx->nvdev;
+ struct hv_device *hdev = ndevctx->device_ctx;
struct netvsc_device_info device_info;
int limit = ETH_DATA_LEN;
u32 num_chn;
int ret = 0;
- if (nvdev == NULL || nvdev->destroy)
+ if (ndevctx->start_remove || !nvdev || nvdev->destroy)
return -ENODEV;
if (nvdev->nvsp_version >= NVSP_PROTOCOL_VERSION_2)
@@ -876,14 +922,11 @@ static int netvsc_change_mtu(struct net_device *ndev, int mtu)
num_chn = nvdev->num_chn;
- nvdev->start_remove = true;
+ ndevctx->start_remove = true;
rndis_filter_device_remove(hdev);
ndev->mtu = mtu;
- ndevctx->device_ctx = hdev;
- hv_set_drvdata(hdev, ndev);
-
memset(&device_info, 0, sizeof(device_info));
device_info.ring_size = ring_size;
device_info.num_chn = num_chn;
@@ -892,6 +935,10 @@ static int netvsc_change_mtu(struct net_device *ndev, int mtu)
out:
netvsc_open(ndev);
+ ndevctx->start_remove = false;
+
+ /* We may have missed link change notifications */
+ schedule_delayed_work(&ndevctx->dwork, 0);
return ret;
}
@@ -1004,18 +1051,22 @@ static const struct net_device_ops device_ops = {
*/
static void netvsc_link_change(struct work_struct *w)
{
- struct net_device_context *ndev_ctx;
- struct net_device *net;
+ struct net_device_context *ndev_ctx =
+ container_of(w, struct net_device_context, dwork.work);
+ struct hv_device *device_obj = ndev_ctx->device_ctx;
+ struct net_device *net = hv_get_drvdata(device_obj);
struct netvsc_device *net_device;
struct rndis_device *rdev;
struct netvsc_reconfig *event = NULL;
bool notify = false, reschedule = false;
unsigned long flags, next_reconfig, delay;
- ndev_ctx = container_of(w, struct net_device_context, dwork.work);
- net_device = hv_get_drvdata(ndev_ctx->device_ctx);
+ rtnl_lock();
+ if (ndev_ctx->start_remove)
+ goto out_unlock;
+
+ net_device = ndev_ctx->nvdev;
rdev = net_device->extension;
- net = net_device->ndev;
next_reconfig = ndev_ctx->last_reconfig + LINKCHANGE_INT;
if (time_is_after_jiffies(next_reconfig)) {
@@ -1026,7 +1077,7 @@ static void netvsc_link_change(struct work_struct *w)
delay = next_reconfig - jiffies;
delay = delay < LINKCHANGE_INT ? delay : LINKCHANGE_INT;
schedule_delayed_work(&ndev_ctx->dwork, delay);
- return;
+ goto out_unlock;
}
ndev_ctx->last_reconfig = jiffies;
@@ -1040,9 +1091,7 @@ static void netvsc_link_change(struct work_struct *w)
spin_unlock_irqrestore(&ndev_ctx->lock, flags);
if (!event)
- return;
-
- rtnl_lock();
+ goto out_unlock;
switch (event->event) {
/* Only the following events are possible due to the check in
@@ -1074,7 +1123,7 @@ static void netvsc_link_change(struct work_struct *w)
netif_tx_stop_all_queues(net);
event->event = RNDIS_STATUS_MEDIA_CONNECT;
spin_lock_irqsave(&ndev_ctx->lock, flags);
- list_add_tail(&event->list, &ndev_ctx->reconfig_events);
+ list_add(&event->list, &ndev_ctx->reconfig_events);
spin_unlock_irqrestore(&ndev_ctx->lock, flags);
reschedule = true;
}
@@ -1091,6 +1140,11 @@ static void netvsc_link_change(struct work_struct *w)
*/
if (reschedule)
schedule_delayed_work(&ndev_ctx->dwork, LINKCHANGE_INT);
+
+ return;
+
+out_unlock:
+ rtnl_unlock();
}
static void netvsc_free_netdev(struct net_device *netdev)
@@ -1102,6 +1156,192 @@ static void netvsc_free_netdev(struct net_device *netdev)
free_netdev(netdev);
}
+static void netvsc_notify_peers(struct work_struct *wrk)
+{
+ struct garp_wrk *gwrk;
+
+ gwrk = container_of(wrk, struct garp_wrk, dwrk);
+
+ netdev_notify_peers(gwrk->netdev);
+
+ atomic_dec(&gwrk->netvsc_dev->vf_use_cnt);
+}
+
+static struct net_device *get_netvsc_net_device(char *mac)
+{
+ struct net_device *dev, *found = NULL;
+ int rtnl_locked;
+
+ rtnl_locked = rtnl_trylock();
+
+ for_each_netdev(&init_net, dev) {
+ if (memcmp(dev->dev_addr, mac, ETH_ALEN) == 0) {
+ if (dev->netdev_ops != &device_ops)
+ continue;
+ found = dev;
+ break;
+ }
+ }
+ if (rtnl_locked)
+ rtnl_unlock();
+
+ return found;
+}
+
+static int netvsc_register_vf(struct net_device *vf_netdev)
+{
+ struct net_device *ndev;
+ struct net_device_context *net_device_ctx;
+ struct netvsc_device *netvsc_dev;
+ const struct ethtool_ops *eth_ops = vf_netdev->ethtool_ops;
+
+ if (eth_ops == NULL || eth_ops == &ethtool_ops)
+ return NOTIFY_DONE;
+
+ /*
+ * We will use the MAC address to locate the synthetic interface to
+ * associate with the VF interface. If we don't find a matching
+ * synthetic interface, move on.
+ */
+ ndev = get_netvsc_net_device(vf_netdev->dev_addr);
+ if (!ndev)
+ return NOTIFY_DONE;
+
+ net_device_ctx = netdev_priv(ndev);
+ netvsc_dev = net_device_ctx->nvdev;
+ if (netvsc_dev == NULL)
+ return NOTIFY_DONE;
+
+ netdev_info(ndev, "VF registering: %s\n", vf_netdev->name);
+ /*
+ * Take a reference on the module.
+ */
+ try_module_get(THIS_MODULE);
+ netvsc_dev->vf_netdev = vf_netdev;
+ return NOTIFY_OK;
+}
+
+
+static int netvsc_vf_up(struct net_device *vf_netdev)
+{
+ struct net_device *ndev;
+ struct netvsc_device *netvsc_dev;
+ const struct ethtool_ops *eth_ops = vf_netdev->ethtool_ops;
+ struct net_device_context *net_device_ctx;
+
+ if (eth_ops == &ethtool_ops)
+ return NOTIFY_DONE;
+
+ ndev = get_netvsc_net_device(vf_netdev->dev_addr);
+ if (!ndev)
+ return NOTIFY_DONE;
+
+ net_device_ctx = netdev_priv(ndev);
+ netvsc_dev = net_device_ctx->nvdev;
+
+ if ((netvsc_dev == NULL) || (netvsc_dev->vf_netdev == NULL))
+ return NOTIFY_DONE;
+
+ netdev_info(ndev, "VF up: %s\n", vf_netdev->name);
+ netvsc_dev->vf_inject = true;
+
+ /*
+ * Open the device before switching the data path.
+ */
+ rndis_filter_open(net_device_ctx->device_ctx);
+
+ /*
+ * Notify the host to switch the data path.
+ */
+ netvsc_switch_datapath(ndev, true);
+ netdev_info(ndev, "Data path switched to VF: %s\n", vf_netdev->name);
+
+ netif_carrier_off(ndev);
+
+ /*
+ * Now notify peers; this is done from a scheduled
+ * work item, so take a reference to keep the VF
+ * interface from vanishing in the meantime.
+ */
+ atomic_inc(&netvsc_dev->vf_use_cnt);
+ net_device_ctx->gwrk.netdev = vf_netdev;
+ net_device_ctx->gwrk.netvsc_dev = netvsc_dev;
+ schedule_work(&net_device_ctx->gwrk.dwrk);
+
+ return NOTIFY_OK;
+}
+
+
+static int netvsc_vf_down(struct net_device *vf_netdev)
+{
+ struct net_device *ndev;
+ struct netvsc_device *netvsc_dev;
+ struct net_device_context *net_device_ctx;
+ const struct ethtool_ops *eth_ops = vf_netdev->ethtool_ops;
+
+ if (eth_ops == &ethtool_ops)
+ return NOTIFY_DONE;
+
+ ndev = get_netvsc_net_device(vf_netdev->dev_addr);
+ if (!ndev)
+ return NOTIFY_DONE;
+
+ net_device_ctx = netdev_priv(ndev);
+ netvsc_dev = net_device_ctx->nvdev;
+
+ if ((netvsc_dev == NULL) || (netvsc_dev->vf_netdev == NULL))
+ return NOTIFY_DONE;
+
+ netdev_info(ndev, "VF down: %s\n", vf_netdev->name);
+ netvsc_dev->vf_inject = false;
+ /*
+ * Wait for currently active users to
+ * drain out.
+ */
+
+ while (atomic_read(&netvsc_dev->vf_use_cnt) != 0)
+ udelay(50);
+ netvsc_switch_datapath(ndev, false);
+ netdev_info(ndev, "Data path switched from VF: %s\n", vf_netdev->name);
+ rndis_filter_close(net_device_ctx->device_ctx);
+ netif_carrier_on(ndev);
+ /*
+ * Notify peers.
+ */
+ atomic_inc(&netvsc_dev->vf_use_cnt);
+ net_device_ctx->gwrk.netdev = ndev;
+ net_device_ctx->gwrk.netvsc_dev = netvsc_dev;
+ schedule_work(&net_device_ctx->gwrk.dwrk);
+
+ return NOTIFY_OK;
+}
+
+
+static int netvsc_unregister_vf(struct net_device *vf_netdev)
+{
+ struct net_device *ndev;
+ struct netvsc_device *netvsc_dev;
+ const struct ethtool_ops *eth_ops = vf_netdev->ethtool_ops;
+ struct net_device_context *net_device_ctx;
+
+ if (eth_ops == &ethtool_ops)
+ return NOTIFY_DONE;
+
+ ndev = get_netvsc_net_device(vf_netdev->dev_addr);
+ if (!ndev)
+ return NOTIFY_DONE;
+
+ net_device_ctx = netdev_priv(ndev);
+ netvsc_dev = net_device_ctx->nvdev;
+ if (netvsc_dev == NULL)
+ return NOTIFY_DONE;
+ netdev_info(ndev, "VF unregistering: %s\n", vf_netdev->name);
+
+ netvsc_dev->vf_netdev = NULL;
+ module_put(THIS_MODULE);
+ return NOTIFY_OK;
+}
+
static int netvsc_probe(struct hv_device *dev,
const struct hv_vmbus_device_id *dev_id)
{
@@ -1138,8 +1378,12 @@ static int netvsc_probe(struct hv_device *dev,
}
hv_set_drvdata(dev, net);
+
+ net_device_ctx->start_remove = false;
+
INIT_DELAYED_WORK(&net_device_ctx->dwork, netvsc_link_change);
INIT_WORK(&net_device_ctx->work, do_set_multicast);
+ INIT_WORK(&net_device_ctx->gwrk.dwrk, netvsc_notify_peers);
spin_lock_init(&net_device_ctx->lock);
INIT_LIST_HEAD(&net_device_ctx->reconfig_events);
@@ -1168,7 +1412,7 @@ static int netvsc_probe(struct hv_device *dev,
}
memcpy(net->dev_addr, device_info.mac_adr, ETH_ALEN);
- nvdev = hv_get_drvdata(dev);
+ nvdev = net_device_ctx->nvdev;
netif_set_real_num_tx_queues(net, nvdev->num_chn);
netif_set_real_num_rx_queues(net, nvdev->num_chn);
@@ -1190,17 +1434,24 @@ static int netvsc_remove(struct hv_device *dev)
struct net_device_context *ndev_ctx;
struct netvsc_device *net_device;
- net_device = hv_get_drvdata(dev);
- net = net_device->ndev;
+ net = hv_get_drvdata(dev);
if (net == NULL) {
dev_err(&dev->device, "No net device to remove\n");
return 0;
}
- net_device->start_remove = true;
ndev_ctx = netdev_priv(net);
+ net_device = ndev_ctx->nvdev;
+
+ /* Avoid racing with netvsc_change_mtu()/netvsc_set_channels()
+ * while we remove the device.
+ */
+ rtnl_lock();
+ ndev_ctx->start_remove = true;
+ rtnl_unlock();
+
cancel_delayed_work_sync(&ndev_ctx->dwork);
cancel_work_sync(&ndev_ctx->work);
@@ -1215,6 +1466,8 @@ static int netvsc_remove(struct hv_device *dev)
*/
rndis_filter_device_remove(dev);
+ hv_set_drvdata(dev, NULL);
+
netvsc_free_netdev(net);
return 0;
}
@@ -1235,19 +1488,58 @@ static struct hv_driver netvsc_drv = {
.remove = netvsc_remove,
};
+
+/*
+ * On Hyper-V, every VF interface is matched with a corresponding
+ * synthetic interface. The synthetic interface is presented first
+ * to the guest. When the corresponding VF instance is registered,
+ * we will take care of switching the data path.
+ */
+static int netvsc_netdev_event(struct notifier_block *this,
+ unsigned long event, void *ptr)
+{
+ struct net_device *event_dev = netdev_notifier_info_to_dev(ptr);
+
+ switch (event) {
+ case NETDEV_REGISTER:
+ return netvsc_register_vf(event_dev);
+ case NETDEV_UNREGISTER:
+ return netvsc_unregister_vf(event_dev);
+ case NETDEV_UP:
+ return netvsc_vf_up(event_dev);
+ case NETDEV_DOWN:
+ return netvsc_vf_down(event_dev);
+ default:
+ return NOTIFY_DONE;
+ }
+}
+
+static struct notifier_block netvsc_netdev_notifier = {
+ .notifier_call = netvsc_netdev_event,
+};
+
static void __exit netvsc_drv_exit(void)
{
+ unregister_netdevice_notifier(&netvsc_netdev_notifier);
vmbus_driver_unregister(&netvsc_drv);
}
static int __init netvsc_drv_init(void)
{
+ int ret;
+
if (ring_size < RING_SIZE_MIN) {
ring_size = RING_SIZE_MIN;
pr_info("Increased ring_size to %d (min allowed)\n",
ring_size);
}
- return vmbus_driver_register(&netvsc_drv);
+ ret = vmbus_driver_register(&netvsc_drv);
+
+ if (ret)
+ return ret;
+
+ register_netdevice_notifier(&netvsc_netdev_notifier);
+ return 0;
}
MODULE_LICENSE("GPL");
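The hunks above wire the netvsc driver into the core netdevice notifier so it can react when a VF interface appears, comes up, goes down, or disappears. For reference, a minimal sketch of that notifier pattern, independent of this patch and using hypothetical names (example_netdev_event, example_nb), looks like this:

#include <linux/module.h>
#include <linux/netdevice.h>

/* Called for every netdev event in the system; ptr wraps the device. */
static int example_netdev_event(struct notifier_block *nb,
				unsigned long event, void *ptr)
{
	struct net_device *dev = netdev_notifier_info_to_dev(ptr);

	switch (event) {
	case NETDEV_REGISTER:
		pr_info("example: %s registered\n", dev->name);
		break;
	case NETDEV_UP:
		pr_info("example: %s is up\n", dev->name);
		break;
	default:
		break;
	}
	return NOTIFY_DONE;
}

static struct notifier_block example_nb = {
	.notifier_call = example_netdev_event,
};

static int __init example_init(void)
{
	/* From here on the callback runs for every netdev event. */
	return register_netdevice_notifier(&example_nb);
}

static void __exit example_exit(void)
{
	unregister_netdevice_notifier(&example_nb);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");

Note that register_netdevice_notifier() replays NETDEV_REGISTER (and NETDEV_UP for running interfaces) for devices that already exist, so a VF registered before the notifier should still be seen.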
diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
index c4e1e0408433..97c292b7dbea 100644
--- a/drivers/net/hyperv/rndis_filter.c
+++ b/drivers/net/hyperv/rndis_filter.c
@@ -126,11 +126,7 @@ static void put_rndis_request(struct rndis_device *dev,
static void dump_rndis_message(struct hv_device *hv_dev,
struct rndis_message *rndis_msg)
{
- struct net_device *netdev;
- struct netvsc_device *net_device;
-
- net_device = hv_get_drvdata(hv_dev);
- netdev = net_device->ndev;
+ struct net_device *netdev = hv_get_drvdata(hv_dev);
switch (rndis_msg->ndis_msg_type) {
case RNDIS_MSG_PACKET:
@@ -211,6 +207,7 @@ static int rndis_filter_send_request(struct rndis_device *dev,
struct hv_netvsc_packet *packet;
struct hv_page_buffer page_buf[2];
struct hv_page_buffer *pb = page_buf;
+ struct net_device_context *net_device_ctx = netdev_priv(dev->ndev);
/* Setup the packet to send it */
packet = &req->pkt;
@@ -236,7 +233,7 @@ static int rndis_filter_send_request(struct rndis_device *dev,
pb[0].len;
}
- ret = netvsc_send(dev->net_dev->dev, packet, NULL, &pb, NULL);
+ ret = netvsc_send(net_device_ctx->device_ctx, packet, NULL, &pb, NULL);
return ret;
}
@@ -262,9 +259,7 @@ static void rndis_filter_receive_response(struct rndis_device *dev,
struct rndis_request *request = NULL;
bool found = false;
unsigned long flags;
- struct net_device *ndev;
-
- ndev = dev->net_dev->ndev;
+ struct net_device *ndev = dev->ndev;
spin_lock_irqsave(&dev->request_lock, flags);
list_for_each_entry(request, &dev->req_list, list_ent) {
@@ -355,6 +350,7 @@ static int rndis_filter_receive_data(struct rndis_device *dev,
struct ndis_pkt_8021q_info *vlan;
struct ndis_tcp_ip_checksum_info *csum_info;
u16 vlan_tci = 0;
+ struct net_device_context *net_device_ctx = netdev_priv(dev->ndev);
rndis_pkt = &msg->msg.pkt;
@@ -368,7 +364,7 @@ static int rndis_filter_receive_data(struct rndis_device *dev,
* should be the data packet size plus the trailer padding size
*/
if (pkt->total_data_buflen < rndis_pkt->data_len) {
- netdev_err(dev->net_dev->ndev, "rndis message buffer "
+ netdev_err(dev->ndev, "rndis message buffer "
"overflow detected (got %u, min %u)"
"...dropping this message!\n",
pkt->total_data_buflen, rndis_pkt->data_len);
@@ -390,7 +386,7 @@ static int rndis_filter_receive_data(struct rndis_device *dev,
}
csum_info = rndis_get_ppi(rndis_pkt, TCPIP_CHKSUM_PKTINFO);
- return netvsc_recv_callback(dev->net_dev->dev, pkt, data,
+ return netvsc_recv_callback(net_device_ctx->device_ctx, pkt, data,
csum_info, channel, vlan_tci);
}
@@ -399,10 +395,11 @@ int rndis_filter_receive(struct hv_device *dev,
void **data,
struct vmbus_channel *channel)
{
- struct netvsc_device *net_dev = hv_get_drvdata(dev);
+ struct net_device *ndev = hv_get_drvdata(dev);
+ struct net_device_context *net_device_ctx = netdev_priv(ndev);
+ struct netvsc_device *net_dev = net_device_ctx->nvdev;
struct rndis_device *rndis_dev;
struct rndis_message *rndis_msg;
- struct net_device *ndev;
int ret = 0;
if (!net_dev) {
@@ -410,8 +407,6 @@ int rndis_filter_receive(struct hv_device *dev,
goto exit;
}
- ndev = net_dev->ndev;
-
/* Make sure the rndis device state is initialized */
if (!net_dev->extension) {
netdev_err(ndev, "got rndis message but no rndis device - "
@@ -430,7 +425,7 @@ int rndis_filter_receive(struct hv_device *dev,
rndis_msg = *data;
- if (netif_msg_rx_err(net_dev->nd_ctx))
+ if (netif_msg_rx_err(net_device_ctx))
dump_rndis_message(dev, rndis_msg);
switch (rndis_msg->ndis_msg_type) {
@@ -550,9 +545,10 @@ static int rndis_filter_query_device_mac(struct rndis_device *dev)
int rndis_filter_set_device_mac(struct hv_device *hdev, char *mac)
{
- struct netvsc_device *nvdev = hv_get_drvdata(hdev);
+ struct net_device *ndev = hv_get_drvdata(hdev);
+ struct net_device_context *net_device_ctx = netdev_priv(ndev);
+ struct netvsc_device *nvdev = net_device_ctx->nvdev;
struct rndis_device *rdev = nvdev->extension;
- struct net_device *ndev = nvdev->ndev;
struct rndis_request *request;
struct rndis_set_request *set;
struct rndis_config_parameter_info *cpi;
@@ -629,9 +625,10 @@ static int
rndis_filter_set_offload_params(struct hv_device *hdev,
struct ndis_offload_params *req_offloads)
{
- struct netvsc_device *nvdev = hv_get_drvdata(hdev);
+ struct net_device *ndev = hv_get_drvdata(hdev);
+ struct net_device_context *net_device_ctx = netdev_priv(ndev);
+ struct netvsc_device *nvdev = net_device_ctx->nvdev;
struct rndis_device *rdev = nvdev->extension;
- struct net_device *ndev = nvdev->ndev;
struct rndis_request *request;
struct rndis_set_request *set;
struct ndis_offload_params *offload_params;
@@ -703,7 +700,7 @@ u8 netvsc_hash_key[HASH_KEYLEN] = {
static int rndis_filter_set_rss_param(struct rndis_device *rdev, int num_queue)
{
- struct net_device *ndev = rdev->net_dev->ndev;
+ struct net_device *ndev = rdev->ndev;
struct rndis_request *request;
struct rndis_set_request *set;
struct rndis_set_complete *set_complete;
@@ -799,9 +796,7 @@ int rndis_filter_set_packet_filter(struct rndis_device *dev, u32 new_filter)
u32 status;
int ret;
unsigned long t;
- struct net_device *ndev;
-
- ndev = dev->net_dev->ndev;
+ struct net_device *ndev = dev->ndev;
request = get_rndis_request(dev, RNDIS_MSG_SET,
RNDIS_MESSAGE_SIZE(struct rndis_set_request) +
@@ -856,7 +851,8 @@ static int rndis_filter_init_device(struct rndis_device *dev)
u32 status;
int ret;
unsigned long t;
- struct netvsc_device *nvdev = dev->net_dev;
+ struct net_device_context *net_device_ctx = netdev_priv(dev->ndev);
+ struct netvsc_device *nvdev = net_device_ctx->nvdev;
request = get_rndis_request(dev, RNDIS_MSG_INIT,
RNDIS_MESSAGE_SIZE(struct rndis_initialize_request));
@@ -879,7 +875,6 @@ static int rndis_filter_init_device(struct rndis_device *dev)
goto cleanup;
}
-
t = wait_for_completion_timeout(&request->wait_event, 5*HZ);
if (t == 0) {
@@ -910,8 +905,9 @@ static void rndis_filter_halt_device(struct rndis_device *dev)
{
struct rndis_request *request;
struct rndis_halt_request *halt;
- struct netvsc_device *nvdev = dev->net_dev;
- struct hv_device *hdev = nvdev->dev;
+ struct net_device_context *net_device_ctx = netdev_priv(dev->ndev);
+ struct netvsc_device *nvdev = net_device_ctx->nvdev;
+ struct hv_device *hdev = net_device_ctx->device_ctx;
ulong flags;
/* Attempt to do a rndis device halt */
@@ -979,13 +975,14 @@ static int rndis_filter_close_device(struct rndis_device *dev)
static void netvsc_sc_open(struct vmbus_channel *new_sc)
{
- struct netvsc_device *nvscdev;
+ struct net_device *ndev =
+ hv_get_drvdata(new_sc->primary_channel->device_obj);
+ struct net_device_context *net_device_ctx = netdev_priv(ndev);
+ struct netvsc_device *nvscdev = net_device_ctx->nvdev;
u16 chn_index = new_sc->offermsg.offer.sub_channel_index;
int ret;
unsigned long flags;
- nvscdev = hv_get_drvdata(new_sc->primary_channel->device_obj);
-
if (chn_index >= nvscdev->num_chn)
return;
@@ -1010,6 +1007,8 @@ int rndis_filter_device_add(struct hv_device *dev,
void *additional_info)
{
int ret;
+ struct net_device *net = hv_get_drvdata(dev);
+ struct net_device_context *net_device_ctx = netdev_priv(net);
struct netvsc_device *net_device;
struct rndis_device *rndis_device;
struct netvsc_device_info *device_info = additional_info;
@@ -1040,16 +1039,15 @@ int rndis_filter_device_add(struct hv_device *dev,
return ret;
}
-
/* Initialize the rndis device */
- net_device = hv_get_drvdata(dev);
+ net_device = net_device_ctx->nvdev;
net_device->max_chn = 1;
net_device->num_chn = 1;
spin_lock_init(&net_device->sc_lock);
net_device->extension = rndis_device;
- rndis_device->net_dev = net_device;
+ rndis_device->ndev = net;
/* Send the rndis initialization message */
ret = rndis_filter_init_device(rndis_device);
@@ -1063,8 +1061,8 @@ int rndis_filter_device_add(struct hv_device *dev,
ret = rndis_filter_query_device(rndis_device,
RNDIS_OID_GEN_MAXIMUM_FRAME_SIZE,
&mtu, &size);
- if (ret == 0 && size == sizeof(u32) && mtu < net_device->ndev->mtu)
- net_device->ndev->mtu = mtu;
+ if (ret == 0 && size == sizeof(u32) && mtu < net->mtu)
+ net->mtu = mtu;
/* Get the mac address */
ret = rndis_filter_query_device_mac(rndis_device);
@@ -1198,7 +1196,9 @@ err_dev_remv:
void rndis_filter_device_remove(struct hv_device *dev)
{
- struct netvsc_device *net_dev = hv_get_drvdata(dev);
+ struct net_device *ndev = hv_get_drvdata(dev);
+ struct net_device_context *net_device_ctx = netdev_priv(ndev);
+ struct netvsc_device *net_dev = net_device_ctx->nvdev;
struct rndis_device *rndis_dev = net_dev->extension;
unsigned long t;
@@ -1224,20 +1224,30 @@ void rndis_filter_device_remove(struct hv_device *dev)
int rndis_filter_open(struct hv_device *dev)
{
- struct netvsc_device *net_device = hv_get_drvdata(dev);
+ struct net_device *ndev = hv_get_drvdata(dev);
+ struct net_device_context *net_device_ctx = netdev_priv(ndev);
+ struct netvsc_device *net_device = net_device_ctx->nvdev;
if (!net_device)
return -EINVAL;
+ if (atomic_inc_return(&net_device->open_cnt) != 1)
+ return 0;
+
return rndis_filter_open_device(net_device->extension);
}
int rndis_filter_close(struct hv_device *dev)
{
- struct netvsc_device *nvdev = hv_get_drvdata(dev);
+ struct net_device *ndev = hv_get_drvdata(dev);
+ struct net_device_context *net_device_ctx = netdev_priv(ndev);
+ struct netvsc_device *nvdev = net_device_ctx->nvdev;
if (!nvdev)
return -EINVAL;
+ if (atomic_dec_return(&nvdev->open_cnt) != 0)
+ return 0;
+
return rndis_filter_close_device(nvdev->extension);
}
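The open_cnt handling added to rndis_filter_open()/rndis_filter_close() turns them into counted operations: with both the synthetic path and the VF path calling in, only the first open and the last close actually reach the RNDIS device. A stripped-down sketch of that idiom, with hypothetical names (foo_dev, foo_hw_open, foo_hw_close), is:

#include <linux/atomic.h>

struct foo_dev {
	atomic_t open_cnt;
};

static struct foo_dev foo = { .open_cnt = ATOMIC_INIT(0) };

/* Stand-ins for the calls that really touch the hardware. */
static int foo_hw_open(struct foo_dev *fd)  { return 0; }
static int foo_hw_close(struct foo_dev *fd) { return 0; }

static int foo_open(struct foo_dev *fd)
{
	/* Only the transition 0 -> 1 opens the hardware. */
	if (atomic_inc_return(&fd->open_cnt) != 1)
		return 0;
	return foo_hw_open(fd);
}

static int foo_close(struct foo_dev *fd)
{
	/* Only the transition 1 -> 0 closes it again. */
	if (atomic_dec_return(&fd->open_cnt) != 0)
		return 0;
	return foo_hw_close(fd);
}

Unbalanced closes would drive the counter negative, so callers are expected to keep open/close strictly paired.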
diff --git a/drivers/net/ieee802154/adf7242.c b/drivers/net/ieee802154/adf7242.c
index 89154c079788..9fa7ac9f8e68 100644
--- a/drivers/net/ieee802154/adf7242.c
+++ b/drivers/net/ieee802154/adf7242.c
@@ -915,7 +915,6 @@ static void adf7242_debug(u8 irq1)
(stat & 0xf) == RC_STATUS_PHY_RDY ? "RC_STATUS_PHY_RDY" : "",
(stat & 0xf) == RC_STATUS_RX ? "RC_STATUS_RX" : "",
(stat & 0xf) == RC_STATUS_TX ? "RC_STATUS_TX" : "");
- }
#endif
}
@@ -1030,6 +1029,7 @@ static int adf7242_hw_init(struct adf7242_local *lp)
if (ret) {
dev_err(&lp->spi->dev,
"upload firmware failed with %d\n", ret);
+ release_firmware(fw);
return ret;
}
@@ -1037,6 +1037,7 @@ static int adf7242_hw_init(struct adf7242_local *lp)
if (ret) {
dev_err(&lp->spi->dev,
"verify firmware failed with %d\n", ret);
+ release_firmware(fw);
return ret;
}
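Both adf7242_hw_init() hunks above close the same leak: once request_firmware() has succeeded, the firmware blob must be released on every exit path, error paths included. A minimal sketch of that pattern (upload_fw() and verify_fw() are hypothetical stand-ins for the chip-specific steps):

#include <linux/device.h>
#include <linux/firmware.h>
#include <linux/types.h>

static int upload_fw(struct device *dev, const u8 *data, size_t len) { return 0; }
static int verify_fw(struct device *dev, const u8 *data, size_t len) { return 0; }

static int example_load_firmware(struct device *dev)
{
	const struct firmware *fw;
	int ret;

	ret = request_firmware(&fw, "example-fw.bin", dev);
	if (ret)
		return ret;	/* nothing was allocated, nothing to release */

	ret = upload_fw(dev, fw->data, fw->size);
	if (ret)
		goto out;

	ret = verify_fw(dev, fw->data, fw->size);
out:
	release_firmware(fw);	/* runs on both the success and error paths */
	return ret;
}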
diff --git a/drivers/net/ieee802154/at86rf230.c b/drivers/net/ieee802154/at86rf230.c
index cb9e9fe6d77a..9f10da60e02d 100644
--- a/drivers/net/ieee802154/at86rf230.c
+++ b/drivers/net/ieee802154/at86rf230.c
@@ -1340,7 +1340,7 @@ static struct at86rf2xx_chip_data at86rf233_data = {
.t_off_to_aack = 80,
.t_off_to_tx_on = 80,
.t_off_to_sleep = 35,
- .t_sleep_to_off = 210,
+ .t_sleep_to_off = 1000,
.t_frame = 4096,
.t_p_ack = 545,
.rssi_base_val = -91,
@@ -1355,7 +1355,7 @@ static struct at86rf2xx_chip_data at86rf231_data = {
.t_off_to_aack = 110,
.t_off_to_tx_on = 110,
.t_off_to_sleep = 35,
- .t_sleep_to_off = 380,
+ .t_sleep_to_off = 1000,
.t_frame = 4096,
.t_p_ack = 545,
.rssi_base_val = -91,
@@ -1370,7 +1370,7 @@ static struct at86rf2xx_chip_data at86rf212_data = {
.t_off_to_aack = 200,
.t_off_to_tx_on = 200,
.t_off_to_sleep = 35,
- .t_sleep_to_off = 380,
+ .t_sleep_to_off = 1000,
.t_frame = 4096,
.t_p_ack = 545,
.rssi_base_val = -100,
diff --git a/drivers/net/ieee802154/atusb.c b/drivers/net/ieee802154/atusb.c
index b1cd865ade2e..52c9051f3b95 100644
--- a/drivers/net/ieee802154/atusb.c
+++ b/drivers/net/ieee802154/atusb.c
@@ -3,6 +3,8 @@
*
* Written 2013 by Werner Almesberger <werner@almesberger.net>
*
+ * Copyright (c) 2015 - 2016 Stefan Schmidt <stefan@datenfreihafen.org>
+ *
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License as
* published by the Free Software Foundation, version 2
@@ -472,6 +474,76 @@ atusb_set_txpower(struct ieee802154_hw *hw, s32 mbm)
return -EINVAL;
}
+#define ATUSB_MAX_ED_LEVELS 0xF
+static const s32 atusb_ed_levels[ATUSB_MAX_ED_LEVELS + 1] = {
+ -9100, -8900, -8700, -8500, -8300, -8100, -7900, -7700, -7500, -7300,
+ -7100, -6900, -6700, -6500, -6300, -6100,
+};
+
+static int
+atusb_set_cca_mode(struct ieee802154_hw *hw, const struct wpan_phy_cca *cca)
+{
+ struct atusb *atusb = hw->priv;
+ u8 val;
+
+ /* mapping 802.15.4 to driver spec */
+ switch (cca->mode) {
+ case NL802154_CCA_ENERGY:
+ val = 1;
+ break;
+ case NL802154_CCA_CARRIER:
+ val = 2;
+ break;
+ case NL802154_CCA_ENERGY_CARRIER:
+ switch (cca->opt) {
+ case NL802154_CCA_OPT_ENERGY_CARRIER_AND:
+ val = 3;
+ break;
+ case NL802154_CCA_OPT_ENERGY_CARRIER_OR:
+ val = 0;
+ break;
+ default:
+ return -EINVAL;
+ }
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ return atusb_write_subreg(atusb, SR_CCA_MODE, val);
+}
+
+static int
+atusb_set_cca_ed_level(struct ieee802154_hw *hw, s32 mbm)
+{
+ struct atusb *atusb = hw->priv;
+ u32 i;
+
+ for (i = 0; i < hw->phy->supported.cca_ed_levels_size; i++) {
+ if (hw->phy->supported.cca_ed_levels[i] == mbm)
+ return atusb_write_subreg(atusb, SR_CCA_ED_THRES, i);
+ }
+
+ return -EINVAL;
+}
+
+static int
+atusb_set_csma_params(struct ieee802154_hw *hw, u8 min_be, u8 max_be, u8 retries)
+{
+ struct atusb *atusb = hw->priv;
+ int ret;
+
+ ret = atusb_write_subreg(atusb, SR_MIN_BE, min_be);
+ if (ret)
+ return ret;
+
+ ret = atusb_write_subreg(atusb, SR_MAX_BE, max_be);
+ if (ret)
+ return ret;
+
+ return atusb_write_subreg(atusb, SR_MAX_CSMA_RETRIES, retries);
+}
+
static int
atusb_set_promiscuous_mode(struct ieee802154_hw *hw, const bool on)
{
@@ -508,6 +580,9 @@ static struct ieee802154_ops atusb_ops = {
.stop = atusb_stop,
.set_hw_addr_filt = atusb_set_hw_addr_filt,
.set_txpower = atusb_set_txpower,
+ .set_cca_mode = atusb_set_cca_mode,
+ .set_cca_ed_level = atusb_set_cca_ed_level,
+ .set_csma_params = atusb_set_csma_params,
.set_promiscuous_mode = atusb_set_promiscuous_mode,
};
@@ -636,9 +711,20 @@ static int atusb_probe(struct usb_interface *interface,
hw->parent = &usb_dev->dev;
hw->flags = IEEE802154_HW_TX_OMIT_CKSUM | IEEE802154_HW_AFILT |
- IEEE802154_HW_PROMISCUOUS;
+ IEEE802154_HW_PROMISCUOUS | IEEE802154_HW_CSMA_PARAMS;
+
+ hw->phy->flags = WPAN_PHY_FLAG_TXPOWER | WPAN_PHY_FLAG_CCA_ED_LEVEL |
+ WPAN_PHY_FLAG_CCA_MODE;
+
+ hw->phy->supported.cca_modes = BIT(NL802154_CCA_ENERGY) |
+ BIT(NL802154_CCA_CARRIER) | BIT(NL802154_CCA_ENERGY_CARRIER);
+ hw->phy->supported.cca_opts = BIT(NL802154_CCA_OPT_ENERGY_CARRIER_AND) |
+ BIT(NL802154_CCA_OPT_ENERGY_CARRIER_OR);
+
+ hw->phy->supported.cca_ed_levels = atusb_ed_levels;
+ hw->phy->supported.cca_ed_levels_size = ARRAY_SIZE(atusb_ed_levels);
- hw->phy->flags = WPAN_PHY_FLAG_TXPOWER;
+ hw->phy->cca.mode = NL802154_CCA_ENERGY;
hw->phy->current_page = 0;
hw->phy->current_channel = 11; /* reset default */
@@ -647,6 +733,7 @@ static int atusb_probe(struct usb_interface *interface,
hw->phy->supported.tx_powers_size = ARRAY_SIZE(atusb_powers);
hw->phy->transmit_power = hw->phy->supported.tx_powers[0];
ieee802154_random_extended_addr(&hw->phy->perm_extended_addr);
+ hw->phy->cca_ed_level = hw->phy->supported.cca_ed_levels[7];
atusb_command(atusb, ATUSB_RF_RESET, 0);
atusb_get_and_show_chip(atusb);
diff --git a/drivers/net/ieee802154/mrf24j40.c b/drivers/net/ieee802154/mrf24j40.c
index 764a2bddfaee..f446db828561 100644
--- a/drivers/net/ieee802154/mrf24j40.c
+++ b/drivers/net/ieee802154/mrf24j40.c
@@ -61,6 +61,7 @@
#define REG_TXBCON0 0x1A
#define REG_TXNCON 0x1B /* Transmit Normal FIFO Control */
#define BIT_TXNTRIG BIT(0)
+#define BIT_TXNSECEN BIT(1)
#define BIT_TXNACKREQ BIT(2)
#define REG_TXG1CON 0x1C
@@ -85,10 +86,13 @@
#define REG_INTSTAT 0x31 /* Interrupt Status */
#define BIT_TXNIF BIT(0)
#define BIT_RXIF BIT(3)
+#define BIT_SECIF BIT(4)
+#define BIT_SECIGNORE BIT(7)
#define REG_INTCON 0x32 /* Interrupt Control */
#define BIT_TXNIE BIT(0)
#define BIT_RXIE BIT(3)
+#define BIT_SECIE BIT(4)
#define REG_GPIO 0x33 /* GPIO */
#define REG_TRISGPIO 0x34 /* GPIO direction */
@@ -548,6 +552,9 @@ static void write_tx_buf_complete(void *context)
u8 val = BIT_TXNTRIG;
int ret;
+ if (ieee802154_is_secen(fc))
+ val |= BIT_TXNSECEN;
+
if (ieee802154_is_ackreq(fc))
val |= BIT_TXNACKREQ;
@@ -616,7 +623,7 @@ static int mrf24j40_start(struct ieee802154_hw *hw)
/* Clear TXNIE and RXIE. Enable interrupts */
return regmap_update_bits(devrec->regmap_short, REG_INTCON,
- BIT_TXNIE | BIT_RXIE, 0);
+ BIT_TXNIE | BIT_RXIE | BIT_SECIE, 0);
}
static void mrf24j40_stop(struct ieee802154_hw *hw)
@@ -1025,6 +1032,11 @@ static void mrf24j40_intstat_complete(void *context)
enable_irq(devrec->spi->irq);
+ /* Ignore Rx security decryption */
+ if (intstat & BIT_SECIF)
+ regmap_write_async(devrec->regmap_short, REG_SECCON0,
+ BIT_SECIGNORE);
+
/* Check for TX complete */
if (intstat & BIT_TXNIF)
ieee802154_xmit_complete(devrec->hw, devrec->tx_skb, false);
diff --git a/drivers/net/ifb.c b/drivers/net/ifb.c
index cc56fac3c3f8..66c0eeafcb5d 100644
--- a/drivers/net/ifb.c
+++ b/drivers/net/ifb.c
@@ -196,6 +196,7 @@ static const struct net_device_ops ifb_netdev_ops = {
#define IFB_FEATURES (NETIF_F_HW_CSUM | NETIF_F_SG | NETIF_F_FRAGLIST | \
NETIF_F_TSO_ECN | NETIF_F_TSO | NETIF_F_TSO6 | \
+ NETIF_F_GSO_ENCAP_ALL | \
NETIF_F_HIGHDMA | NETIF_F_HW_VLAN_CTAG_TX | \
NETIF_F_HW_VLAN_STAG_TX)
@@ -224,6 +225,8 @@ static void ifb_setup(struct net_device *dev)
dev->tx_queue_len = TX_Q_LIMIT;
dev->features |= IFB_FEATURES;
+ dev->hw_features |= dev->features;
+ dev->hw_enc_features |= dev->features;
dev->vlan_features |= IFB_FEATURES & ~(NETIF_F_HW_VLAN_CTAG_TX |
NETIF_F_HW_VLAN_STAG_TX);
diff --git a/drivers/net/ipvlan/ipvlan_main.c b/drivers/net/ipvlan/ipvlan_main.c
index 57941d3f4227..1c4d395fbd49 100644
--- a/drivers/net/ipvlan/ipvlan_main.c
+++ b/drivers/net/ipvlan/ipvlan_main.c
@@ -113,6 +113,7 @@ static int ipvlan_init(struct net_device *dev)
{
struct ipvl_dev *ipvlan = netdev_priv(dev);
const struct net_device *phy_dev = ipvlan->phy_dev;
+ struct ipvl_port *port = ipvlan->port;
dev->state = (dev->state & ~IPVLAN_STATE_MASK) |
(phy_dev->state & IPVLAN_STATE_MASK);
@@ -128,6 +129,8 @@ static int ipvlan_init(struct net_device *dev)
if (!ipvlan->pcpu_stats)
return -ENOMEM;
+ port->count += 1;
+
return 0;
}
@@ -481,27 +484,21 @@ static int ipvlan_link_new(struct net *src_net, struct net_device *dev,
dev->priv_flags |= IFF_IPVLAN_SLAVE;
- port->count += 1;
err = register_netdevice(dev);
if (err < 0)
- goto ipvlan_destroy_port;
+ return err;
err = netdev_upper_dev_link(phy_dev, dev);
- if (err)
- goto ipvlan_destroy_port;
+ if (err) {
+ unregister_netdevice(dev);
+ return err;
+ }
list_add_tail_rcu(&ipvlan->pnode, &port->ipvlans);
ipvlan_set_port_mode(port, mode);
netif_stacked_transfer_operstate(phy_dev, dev);
return 0;
-
-ipvlan_destroy_port:
- port->count -= 1;
- if (!port->count)
- ipvlan_port_destroy(phy_dev);
-
- return err;
}
static void ipvlan_link_delete(struct net_device *dev, struct list_head *head)
diff --git a/drivers/net/irda/Kconfig b/drivers/net/irda/Kconfig
index a2c227bfb687..e070e1222733 100644
--- a/drivers/net/irda/Kconfig
+++ b/drivers/net/irda/Kconfig
@@ -394,12 +394,5 @@ config MCS_FIR
To compile it as a module, choose M here: the module will be called
mcs7780.
-config SH_IRDA
- tristate "SuperH IrDA driver"
- depends on IRDA
- depends on (ARCH_SHMOBILE || COMPILE_TEST) && HAS_IOMEM
- help
- Say Y here if your want to enable SuperH IrDA devices.
-
endmenu
diff --git a/drivers/net/irda/Makefile b/drivers/net/irda/Makefile
index be8ab5b9a4a2..4c344433dae5 100644
--- a/drivers/net/irda/Makefile
+++ b/drivers/net/irda/Makefile
@@ -19,7 +19,6 @@ obj-$(CONFIG_VIA_FIR) += via-ircc.o
obj-$(CONFIG_PXA_FICP) += pxaficp_ir.o
obj-$(CONFIG_MCS_FIR) += mcs7780.o
obj-$(CONFIG_AU1000_FIR) += au1k_ir.o
-obj-$(CONFIG_SH_IRDA) += sh_irda.o
# SIR drivers
obj-$(CONFIG_IRTTY_SIR) += irtty-sir.o sir-dev.o
obj-$(CONFIG_BFIN_SIR) += bfin_sir.o
diff --git a/drivers/net/irda/ali-ircc.c b/drivers/net/irda/ali-ircc.c
index 64bb44d5d867..c285eafd3f1c 100644
--- a/drivers/net/irda/ali-ircc.c
+++ b/drivers/net/irda/ali-ircc.c
@@ -1427,7 +1427,7 @@ static netdev_tx_t ali_ircc_fir_hard_xmit(struct sk_buff *skb,
/* Check for empty frame */
if (!skb->len) {
ali_ircc_change_speed(self, speed);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
spin_unlock_irqrestore(&self->lock, flags);
dev_kfree_skb(skb);
return NETDEV_TX_OK;
@@ -1533,7 +1533,7 @@ static netdev_tx_t ali_ircc_fir_hard_xmit(struct sk_buff *skb,
/* Restore bank register */
switch_bank(iobase, BANK0);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
spin_unlock_irqrestore(&self->lock, flags);
dev_kfree_skb(skb);
@@ -1946,7 +1946,7 @@ static netdev_tx_t ali_ircc_sir_hard_xmit(struct sk_buff *skb,
/* Check for empty frame */
if (!skb->len) {
ali_ircc_change_speed(self, speed);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
spin_unlock_irqrestore(&self->lock, flags);
dev_kfree_skb(skb);
return NETDEV_TX_OK;
@@ -1966,7 +1966,7 @@ static netdev_tx_t ali_ircc_sir_hard_xmit(struct sk_buff *skb,
/* Turn on transmit finished interrupt. Will fire immediately! */
outb(UART_IER_THRI, iobase+UART_IER);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
spin_unlock_irqrestore(&self->lock, flags);
dev_kfree_skb(skb);
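The conversions from "dev->trans_start = jiffies" to netif_trans_update(dev) repeated throughout these IrDA drivers all do the same thing: record the time of the last transmission so the netdev TX watchdog does not flag a spurious timeout. A hypothetical xmit path using the helper:

#include <linux/netdevice.h>
#include <linux/skbuff.h>

static netdev_tx_t example_hard_xmit(struct sk_buff *skb, struct net_device *dev)
{
	/* ...hand the frame to the hardware here... */

	netif_trans_update(dev);	/* note the last-TX time for the watchdog */
	dev_kfree_skb(skb);
	return NETDEV_TX_OK;
}

The helper appears to update the per-queue trans_start rather than the net_device field, which is why the drivers stop writing dev->trans_start directly.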
diff --git a/drivers/net/irda/bfin_sir.c b/drivers/net/irda/bfin_sir.c
index 303c4bd26e17..be5bb0b7f29c 100644
--- a/drivers/net/irda/bfin_sir.c
+++ b/drivers/net/irda/bfin_sir.c
@@ -531,7 +531,7 @@ static void bfin_sir_send_work(struct work_struct *work)
bfin_sir_dma_tx_chars(dev);
#endif
bfin_sir_enable_tx(port);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
}
static int bfin_sir_hard_xmit(struct sk_buff *skb, struct net_device *dev)
diff --git a/drivers/net/irda/irda-usb.c b/drivers/net/irda/irda-usb.c
index 25f21968fa5c..a198946bc54f 100644
--- a/drivers/net/irda/irda-usb.c
+++ b/drivers/net/irda/irda-usb.c
@@ -429,7 +429,7 @@ static netdev_tx_t irda_usb_hard_xmit(struct sk_buff *skb,
* do an extra memcpy and increment packet counters...
* Jean II */
irda_usb_change_speed_xbofs(self);
- netdev->trans_start = jiffies;
+ netif_trans_update(netdev);
/* Will netif_wake_queue() in callback */
goto drop;
}
@@ -526,7 +526,7 @@ static netdev_tx_t irda_usb_hard_xmit(struct sk_buff *skb,
netdev->stats.tx_packets++;
netdev->stats.tx_bytes += skb->len;
- netdev->trans_start = jiffies;
+ netif_trans_update(netdev);
}
spin_unlock_irqrestore(&self->lock, flags);
diff --git a/drivers/net/irda/nsc-ircc.c b/drivers/net/irda/nsc-ircc.c
index dc0dbd8dd0b5..aaecc3baaf30 100644
--- a/drivers/net/irda/nsc-ircc.c
+++ b/drivers/net/irda/nsc-ircc.c
@@ -1253,7 +1253,7 @@ static void nsc_ircc_change_dongle_speed(int iobase, int speed, int dongle_id)
*/
static __u8 nsc_ircc_change_speed(struct nsc_ircc_cb *self, __u32 speed)
{
- struct net_device *dev = self->netdev;
+ struct net_device *dev;
__u8 mcr = MCR_SIR;
int iobase;
__u8 bank;
@@ -1263,6 +1263,7 @@ static __u8 nsc_ircc_change_speed(struct nsc_ircc_cb *self, __u32 speed)
IRDA_ASSERT(self != NULL, return 0;);
+ dev = self->netdev;
iobase = self->io.fir_base;
/* Update accounting for new speed */
@@ -1399,7 +1400,7 @@ static netdev_tx_t nsc_ircc_hard_xmit_sir(struct sk_buff *skb,
* to make sure packets gets through the
* proper xmit handler - Jean II */
}
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
spin_unlock_irqrestore(&self->lock, flags);
dev_kfree_skb(skb);
return NETDEV_TX_OK;
@@ -1424,7 +1425,7 @@ static netdev_tx_t nsc_ircc_hard_xmit_sir(struct sk_buff *skb,
/* Restore bank register */
outb(bank, iobase+BSR);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
spin_unlock_irqrestore(&self->lock, flags);
dev_kfree_skb(skb);
@@ -1470,7 +1471,7 @@ static netdev_tx_t nsc_ircc_hard_xmit_fir(struct sk_buff *skb,
* the speed change has been done.
* Jean II */
}
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
spin_unlock_irqrestore(&self->lock, flags);
dev_kfree_skb(skb);
return NETDEV_TX_OK;
@@ -1553,7 +1554,7 @@ static netdev_tx_t nsc_ircc_hard_xmit_fir(struct sk_buff *skb,
/* Restore bank register */
outb(bank, iobase+BSR);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
spin_unlock_irqrestore(&self->lock, flags);
dev_kfree_skb(skb);
diff --git a/drivers/net/irda/sh_irda.c b/drivers/net/irda/sh_irda.c
deleted file mode 100644
index c96b46b2c3a8..000000000000
--- a/drivers/net/irda/sh_irda.c
+++ /dev/null
@@ -1,875 +0,0 @@
-/*
- * SuperH IrDA Driver
- *
- * Copyright (C) 2010 Renesas Solutions Corp.
- * Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
- *
- * Based on sh_sir.c
- * Copyright (C) 2009 Renesas Solutions Corp.
- * Copyright 2006-2009 Analog Devices Inc.
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License version 2 as
- * published by the Free Software Foundation.
- */
-
-/*
- * CAUTION
- *
- * This driver is very simple.
- * So, it doesn't have below support now
- * - MIR/FIR support
- * - DMA transfer support
- * - FIFO mode support
- */
-#include <linux/io.h>
-#include <linux/interrupt.h>
-#include <linux/module.h>
-#include <linux/platform_device.h>
-#include <linux/pm_runtime.h>
-#include <linux/clk.h>
-#include <net/irda/wrapper.h>
-#include <net/irda/irda_device.h>
-
-#define DRIVER_NAME "sh_irda"
-
-#define __IRDARAM_LEN 0x1039
-
-#define IRTMR 0x1F00 /* Transfer mode */
-#define IRCFR 0x1F02 /* Configuration */
-#define IRCTR 0x1F04 /* IR control */
-#define IRTFLR 0x1F20 /* Transmit frame length */
-#define IRTCTR 0x1F22 /* Transmit control */
-#define IRRFLR 0x1F40 /* Receive frame length */
-#define IRRCTR 0x1F42 /* Receive control */
-#define SIRISR 0x1F60 /* SIR-UART mode interrupt source */
-#define SIRIMR 0x1F62 /* SIR-UART mode interrupt mask */
-#define SIRICR 0x1F64 /* SIR-UART mode interrupt clear */
-#define SIRBCR 0x1F68 /* SIR-UART mode baud rate count */
-#define MFIRISR 0x1F70 /* MIR/FIR mode interrupt source */
-#define MFIRIMR 0x1F72 /* MIR/FIR mode interrupt mask */
-#define MFIRICR 0x1F74 /* MIR/FIR mode interrupt clear */
-#define CRCCTR 0x1F80 /* CRC engine control */
-#define CRCIR 0x1F86 /* CRC engine input data */
-#define CRCCR 0x1F8A /* CRC engine calculation */
-#define CRCOR 0x1F8E /* CRC engine output data */
-#define FIFOCP 0x1FC0 /* FIFO current pointer */
-#define FIFOFP 0x1FC2 /* FIFO follow pointer */
-#define FIFORSMSK 0x1FC4 /* FIFO receive status mask */
-#define FIFORSOR 0x1FC6 /* FIFO receive status OR */
-#define FIFOSEL 0x1FC8 /* FIFO select */
-#define FIFORS 0x1FCA /* FIFO receive status */
-#define FIFORFL 0x1FCC /* FIFO receive frame length */
-#define FIFORAMCP 0x1FCE /* FIFO RAM current pointer */
-#define FIFORAMFP 0x1FD0 /* FIFO RAM follow pointer */
-#define BIFCTL 0x1FD2 /* BUS interface control */
-#define IRDARAM 0x0000 /* IrDA buffer RAM */
-#define IRDARAM_LEN __IRDARAM_LEN /* - 8/16/32 (read-only for 32) */
-
-/* IRTMR */
-#define TMD_MASK (0x3 << 14) /* Transfer Mode */
-#define TMD_SIR (0x0 << 14)
-#define TMD_MIR (0x3 << 14)
-#define TMD_FIR (0x2 << 14)
-
-#define FIFORIM (1 << 8) /* FIFO receive interrupt mask */
-#define MIM (1 << 4) /* MIR/FIR Interrupt Mask */
-#define SIM (1 << 0) /* SIR Interrupt Mask */
-#define xIM_MASK (FIFORIM | MIM | SIM)
-
-/* IRCFR */
-#define RTO_SHIFT 8 /* shift for Receive Timeout */
-#define RTO (0x3 << RTO_SHIFT)
-
-/* IRTCTR */
-#define ARMOD (1 << 15) /* Auto-Receive Mode */
-#define TE (1 << 0) /* Transmit Enable */
-
-/* IRRFLR */
-#define RFL_MASK (0x1FFF) /* mask for Receive Frame Length */
-
-/* IRRCTR */
-#define RE (1 << 0) /* Receive Enable */
-
-/*
- * SIRISR, SIRIMR, SIRICR,
- * MFIRISR, MFIRIMR, MFIRICR
- */
-#define FRE (1 << 15) /* Frame Receive End */
-#define TROV (1 << 11) /* Transfer Area Overflow */
-#define xIR_9 (1 << 9)
-#define TOT xIR_9 /* for SIR Timeout */
-#define ABTD xIR_9 /* for MIR/FIR Abort Detection */
-#define xIR_8 (1 << 8)
-#define FER xIR_8 /* for SIR Framing Error */
-#define CRCER xIR_8 /* for MIR/FIR CRC error */
-#define FTE (1 << 7) /* Frame Transmit End */
-#define xIR_MASK (FRE | TROV | xIR_9 | xIR_8 | FTE)
-
-/* SIRBCR */
-#define BRC_MASK (0x3F) /* mask for Baud Rate Count */
-
-/* CRCCTR */
-#define CRC_RST (1 << 15) /* CRC Engine Reset */
-#define CRC_CT_MASK 0x0FFF /* mask for CRC Engine Input Data Count */
-
-/* CRCIR */
-#define CRC_IN_MASK 0x0FFF /* mask for CRC Engine Input Data */
-
-/************************************************************************
-
-
- enum / structure
-
-
-************************************************************************/
-enum sh_irda_mode {
- SH_IRDA_NONE = 0,
- SH_IRDA_SIR,
- SH_IRDA_MIR,
- SH_IRDA_FIR,
-};
-
-struct sh_irda_self;
-struct sh_irda_xir_func {
- int (*xir_fre) (struct sh_irda_self *self);
- int (*xir_trov) (struct sh_irda_self *self);
- int (*xir_9) (struct sh_irda_self *self);
- int (*xir_8) (struct sh_irda_self *self);
- int (*xir_fte) (struct sh_irda_self *self);
-};
-
-struct sh_irda_self {
- void __iomem *membase;
- unsigned int irq;
- struct platform_device *pdev;
-
- struct net_device *ndev;
-
- struct irlap_cb *irlap;
- struct qos_info qos;
-
- iobuff_t tx_buff;
- iobuff_t rx_buff;
-
- enum sh_irda_mode mode;
- spinlock_t lock;
-
- struct sh_irda_xir_func *xir_func;
-};
-
-/************************************************************************
-
-
- common function
-
-
-************************************************************************/
-static void sh_irda_write(struct sh_irda_self *self, u32 offset, u16 data)
-{
- unsigned long flags;
-
- spin_lock_irqsave(&self->lock, flags);
- iowrite16(data, self->membase + offset);
- spin_unlock_irqrestore(&self->lock, flags);
-}
-
-static u16 sh_irda_read(struct sh_irda_self *self, u32 offset)
-{
- unsigned long flags;
- u16 ret;
-
- spin_lock_irqsave(&self->lock, flags);
- ret = ioread16(self->membase + offset);
- spin_unlock_irqrestore(&self->lock, flags);
-
- return ret;
-}
-
-static void sh_irda_update_bits(struct sh_irda_self *self, u32 offset,
- u16 mask, u16 data)
-{
- unsigned long flags;
- u16 old, new;
-
- spin_lock_irqsave(&self->lock, flags);
- old = ioread16(self->membase + offset);
- new = (old & ~mask) | data;
- if (old != new)
- iowrite16(data, self->membase + offset);
- spin_unlock_irqrestore(&self->lock, flags);
-}
-
-/************************************************************************
-
-
- mode function
-
-
-************************************************************************/
-/*=====================================
- *
- * common
- *
- *=====================================*/
-static void sh_irda_rcv_ctrl(struct sh_irda_self *self, int enable)
-{
- struct device *dev = &self->ndev->dev;
-
- sh_irda_update_bits(self, IRRCTR, RE, enable ? RE : 0);
- dev_dbg(dev, "recv %s\n", enable ? "enable" : "disable");
-}
-
-static int sh_irda_set_timeout(struct sh_irda_self *self, int interval)
-{
- struct device *dev = &self->ndev->dev;
-
- if (SH_IRDA_SIR != self->mode)
- interval = 0;
-
- if (interval < 0 || interval > 2) {
- dev_err(dev, "unsupported timeout interval\n");
- return -EINVAL;
- }
-
- sh_irda_update_bits(self, IRCFR, RTO, interval << RTO_SHIFT);
- return 0;
-}
-
-static int sh_irda_set_baudrate(struct sh_irda_self *self, int baudrate)
-{
- struct device *dev = &self->ndev->dev;
- u16 val;
-
- if (baudrate < 0)
- return 0;
-
- if (SH_IRDA_SIR != self->mode) {
- dev_err(dev, "it is not SIR mode\n");
- return -EINVAL;
- }
-
- /*
- * Baud rate (bits/s) =
- * (48 MHz / 26) / (baud rate counter value + 1) x 16
- */
- val = (48000000 / 26 / 16 / baudrate) - 1;
- dev_dbg(dev, "baudrate = %d, val = 0x%02x\n", baudrate, val);
-
- sh_irda_update_bits(self, SIRBCR, BRC_MASK, val);
-
- return 0;
-}
-
-static int sh_irda_get_rcv_length(struct sh_irda_self *self)
-{
- return RFL_MASK & sh_irda_read(self, IRRFLR);
-}
-
-/*=====================================
- *
- * NONE MODE
- *
- *=====================================*/
-static int sh_irda_xir_fre(struct sh_irda_self *self)
-{
- struct device *dev = &self->ndev->dev;
- dev_err(dev, "none mode: frame recv\n");
- return 0;
-}
-
-static int sh_irda_xir_trov(struct sh_irda_self *self)
-{
- struct device *dev = &self->ndev->dev;
- dev_err(dev, "none mode: buffer ram over\n");
- return 0;
-}
-
-static int sh_irda_xir_9(struct sh_irda_self *self)
-{
- struct device *dev = &self->ndev->dev;
- dev_err(dev, "none mode: time over\n");
- return 0;
-}
-
-static int sh_irda_xir_8(struct sh_irda_self *self)
-{
- struct device *dev = &self->ndev->dev;
- dev_err(dev, "none mode: framing error\n");
- return 0;
-}
-
-static int sh_irda_xir_fte(struct sh_irda_self *self)
-{
- struct device *dev = &self->ndev->dev;
- dev_err(dev, "none mode: frame transmit end\n");
- return 0;
-}
-
-static struct sh_irda_xir_func sh_irda_xir_func = {
- .xir_fre = sh_irda_xir_fre,
- .xir_trov = sh_irda_xir_trov,
- .xir_9 = sh_irda_xir_9,
- .xir_8 = sh_irda_xir_8,
- .xir_fte = sh_irda_xir_fte,
-};
-
-/*=====================================
- *
- * MIR/FIR MODE
- *
- * MIR/FIR are not supported now
- *=====================================*/
-static struct sh_irda_xir_func sh_irda_mfir_func = {
- .xir_fre = sh_irda_xir_fre,
- .xir_trov = sh_irda_xir_trov,
- .xir_9 = sh_irda_xir_9,
- .xir_8 = sh_irda_xir_8,
- .xir_fte = sh_irda_xir_fte,
-};
-
-/*=====================================
- *
- * SIR MODE
- *
- *=====================================*/
-static int sh_irda_sir_fre(struct sh_irda_self *self)
-{
- struct device *dev = &self->ndev->dev;
- u16 data16;
- u8 *data = (u8 *)&data16;
- int len = sh_irda_get_rcv_length(self);
- int i, j;
-
- if (len > IRDARAM_LEN)
- len = IRDARAM_LEN;
-
- dev_dbg(dev, "frame recv length = %d\n", len);
-
- for (i = 0; i < len; i++) {
- j = i % 2;
- if (!j)
- data16 = sh_irda_read(self, IRDARAM + i);
-
- async_unwrap_char(self->ndev, &self->ndev->stats,
- &self->rx_buff, data[j]);
- }
- self->ndev->last_rx = jiffies;
-
- sh_irda_rcv_ctrl(self, 1);
-
- return 0;
-}
-
-static int sh_irda_sir_trov(struct sh_irda_self *self)
-{
- struct device *dev = &self->ndev->dev;
-
- dev_err(dev, "buffer ram over\n");
- sh_irda_rcv_ctrl(self, 1);
- return 0;
-}
-
-static int sh_irda_sir_tot(struct sh_irda_self *self)
-{
- struct device *dev = &self->ndev->dev;
-
- dev_err(dev, "time over\n");
- sh_irda_set_baudrate(self, 9600);
- sh_irda_rcv_ctrl(self, 1);
- return 0;
-}
-
-static int sh_irda_sir_fer(struct sh_irda_self *self)
-{
- struct device *dev = &self->ndev->dev;
-
- dev_err(dev, "framing error\n");
- sh_irda_rcv_ctrl(self, 1);
- return 0;
-}
-
-static int sh_irda_sir_fte(struct sh_irda_self *self)
-{
- struct device *dev = &self->ndev->dev;
-
- dev_dbg(dev, "frame transmit end\n");
- netif_wake_queue(self->ndev);
-
- return 0;
-}
-
-static struct sh_irda_xir_func sh_irda_sir_func = {
- .xir_fre = sh_irda_sir_fre,
- .xir_trov = sh_irda_sir_trov,
- .xir_9 = sh_irda_sir_tot,
- .xir_8 = sh_irda_sir_fer,
- .xir_fte = sh_irda_sir_fte,
-};
-
-static void sh_irda_set_mode(struct sh_irda_self *self, enum sh_irda_mode mode)
-{
- struct device *dev = &self->ndev->dev;
- struct sh_irda_xir_func *func;
- const char *name;
- u16 data;
-
- switch (mode) {
- case SH_IRDA_SIR:
- name = "SIR";
- data = TMD_SIR;
- func = &sh_irda_sir_func;
- break;
- case SH_IRDA_MIR:
- name = "MIR";
- data = TMD_MIR;
- func = &sh_irda_mfir_func;
- break;
- case SH_IRDA_FIR:
- name = "FIR";
- data = TMD_FIR;
- func = &sh_irda_mfir_func;
- break;
- default:
- name = "NONE";
- data = 0;
- func = &sh_irda_xir_func;
- break;
- }
-
- self->mode = mode;
- self->xir_func = func;
- sh_irda_update_bits(self, IRTMR, TMD_MASK, data);
-
- dev_dbg(dev, "switch to %s mode", name);
-}
-
-/************************************************************************
-
-
- irq function
-
-
-************************************************************************/
-static void sh_irda_set_irq_mask(struct sh_irda_self *self)
-{
- u16 tmr_hole;
- u16 xir_reg;
-
- /* set all mask */
- sh_irda_update_bits(self, IRTMR, xIM_MASK, xIM_MASK);
- sh_irda_update_bits(self, SIRIMR, xIR_MASK, xIR_MASK);
- sh_irda_update_bits(self, MFIRIMR, xIR_MASK, xIR_MASK);
-
- /* clear irq */
- sh_irda_update_bits(self, SIRICR, xIR_MASK, xIR_MASK);
- sh_irda_update_bits(self, MFIRICR, xIR_MASK, xIR_MASK);
-
- switch (self->mode) {
- case SH_IRDA_SIR:
- tmr_hole = SIM;
- xir_reg = SIRIMR;
- break;
- case SH_IRDA_MIR:
- case SH_IRDA_FIR:
- tmr_hole = MIM;
- xir_reg = MFIRIMR;
- break;
- default:
- tmr_hole = 0;
- xir_reg = 0;
- break;
- }
-
- /* open mask */
- if (xir_reg) {
- sh_irda_update_bits(self, IRTMR, tmr_hole, 0);
- sh_irda_update_bits(self, xir_reg, xIR_MASK, 0);
- }
-}
-
-static irqreturn_t sh_irda_irq(int irq, void *dev_id)
-{
- struct sh_irda_self *self = dev_id;
- struct sh_irda_xir_func *func = self->xir_func;
- u16 isr = sh_irda_read(self, SIRISR);
-
- /* clear irq */
- sh_irda_write(self, SIRICR, isr);
-
- if (isr & FRE)
- func->xir_fre(self);
- if (isr & TROV)
- func->xir_trov(self);
- if (isr & xIR_9)
- func->xir_9(self);
- if (isr & xIR_8)
- func->xir_8(self);
- if (isr & FTE)
- func->xir_fte(self);
-
- return IRQ_HANDLED;
-}
-
-/************************************************************************
-
-
- CRC function
-
-
-************************************************************************/
-static void sh_irda_crc_reset(struct sh_irda_self *self)
-{
- sh_irda_write(self, CRCCTR, CRC_RST);
-}
-
-static void sh_irda_crc_add(struct sh_irda_self *self, u16 data)
-{
- sh_irda_write(self, CRCIR, data & CRC_IN_MASK);
-}
-
-static u16 sh_irda_crc_cnt(struct sh_irda_self *self)
-{
- return CRC_CT_MASK & sh_irda_read(self, CRCCTR);
-}
-
-static u16 sh_irda_crc_out(struct sh_irda_self *self)
-{
- return sh_irda_read(self, CRCOR);
-}
-
-static int sh_irda_crc_init(struct sh_irda_self *self)
-{
- struct device *dev = &self->ndev->dev;
- int ret = -EIO;
- u16 val;
-
- sh_irda_crc_reset(self);
-
- sh_irda_crc_add(self, 0xCC);
- sh_irda_crc_add(self, 0xF5);
- sh_irda_crc_add(self, 0xF1);
- sh_irda_crc_add(self, 0xA7);
-
- val = sh_irda_crc_cnt(self);
- if (4 != val) {
- dev_err(dev, "CRC count error %x\n", val);
- goto crc_init_out;
- }
-
- val = sh_irda_crc_out(self);
- if (0x51DF != val) {
- dev_err(dev, "CRC result error%x\n", val);
- goto crc_init_out;
- }
-
- ret = 0;
-
-crc_init_out:
-
- sh_irda_crc_reset(self);
- return ret;
-}
-
-/************************************************************************
-
-
- iobuf function
-
-
-************************************************************************/
-static void sh_irda_remove_iobuf(struct sh_irda_self *self)
-{
- kfree(self->rx_buff.head);
-
- self->tx_buff.head = NULL;
- self->tx_buff.data = NULL;
- self->rx_buff.head = NULL;
- self->rx_buff.data = NULL;
-}
-
-static int sh_irda_init_iobuf(struct sh_irda_self *self, int rxsize, int txsize)
-{
- if (self->rx_buff.head ||
- self->tx_buff.head) {
- dev_err(&self->ndev->dev, "iobuff has already existed.");
- return -EINVAL;
- }
-
- /* rx_buff */
- self->rx_buff.head = kmalloc(rxsize, GFP_KERNEL);
- if (!self->rx_buff.head)
- return -ENOMEM;
-
- self->rx_buff.truesize = rxsize;
- self->rx_buff.in_frame = FALSE;
- self->rx_buff.state = OUTSIDE_FRAME;
- self->rx_buff.data = self->rx_buff.head;
-
- /* tx_buff */
- self->tx_buff.head = self->membase + IRDARAM;
- self->tx_buff.truesize = IRDARAM_LEN;
-
- return 0;
-}
-
-/************************************************************************
-
-
- net_device_ops function
-
-
-************************************************************************/
-static int sh_irda_hard_xmit(struct sk_buff *skb, struct net_device *ndev)
-{
- struct sh_irda_self *self = netdev_priv(ndev);
- struct device *dev = &self->ndev->dev;
- int speed = irda_get_next_speed(skb);
- int ret;
-
- dev_dbg(dev, "hard xmit\n");
-
- netif_stop_queue(ndev);
- sh_irda_rcv_ctrl(self, 0);
-
- ret = sh_irda_set_baudrate(self, speed);
- if (ret < 0)
- goto sh_irda_hard_xmit_end;
-
- self->tx_buff.len = 0;
- if (skb->len) {
- unsigned long flags;
-
- spin_lock_irqsave(&self->lock, flags);
- self->tx_buff.len = async_wrap_skb(skb,
- self->tx_buff.head,
- self->tx_buff.truesize);
- spin_unlock_irqrestore(&self->lock, flags);
-
- if (self->tx_buff.len > self->tx_buff.truesize)
- self->tx_buff.len = self->tx_buff.truesize;
-
- sh_irda_write(self, IRTFLR, self->tx_buff.len);
- sh_irda_write(self, IRTCTR, ARMOD | TE);
- } else
- goto sh_irda_hard_xmit_end;
-
- dev_kfree_skb(skb);
-
- return 0;
-
-sh_irda_hard_xmit_end:
- sh_irda_set_baudrate(self, 9600);
- netif_wake_queue(self->ndev);
- sh_irda_rcv_ctrl(self, 1);
- dev_kfree_skb(skb);
-
- return ret;
-
-}
-
-static int sh_irda_ioctl(struct net_device *ndev, struct ifreq *ifreq, int cmd)
-{
- /*
- * FIXME
- *
- * This function is needed for irda framework.
- * But nothing to do now
- */
- return 0;
-}
-
-static struct net_device_stats *sh_irda_stats(struct net_device *ndev)
-{
- struct sh_irda_self *self = netdev_priv(ndev);
-
- return &self->ndev->stats;
-}
-
-static int sh_irda_open(struct net_device *ndev)
-{
- struct sh_irda_self *self = netdev_priv(ndev);
- int err;
-
- pm_runtime_get_sync(&self->pdev->dev);
- err = sh_irda_crc_init(self);
- if (err)
- goto open_err;
-
- sh_irda_set_mode(self, SH_IRDA_SIR);
- sh_irda_set_timeout(self, 2);
- sh_irda_set_baudrate(self, 9600);
-
- self->irlap = irlap_open(ndev, &self->qos, DRIVER_NAME);
- if (!self->irlap) {
- err = -ENODEV;
- goto open_err;
- }
-
- netif_start_queue(ndev);
- sh_irda_rcv_ctrl(self, 1);
- sh_irda_set_irq_mask(self);
-
- dev_info(&ndev->dev, "opened\n");
-
- return 0;
-
-open_err:
- pm_runtime_put_sync(&self->pdev->dev);
-
- return err;
-}
-
-static int sh_irda_stop(struct net_device *ndev)
-{
- struct sh_irda_self *self = netdev_priv(ndev);
-
- /* Stop IrLAP */
- if (self->irlap) {
- irlap_close(self->irlap);
- self->irlap = NULL;
- }
-
- netif_stop_queue(ndev);
- pm_runtime_put_sync(&self->pdev->dev);
-
- dev_info(&ndev->dev, "stopped\n");
-
- return 0;
-}
-
-static const struct net_device_ops sh_irda_ndo = {
- .ndo_open = sh_irda_open,
- .ndo_stop = sh_irda_stop,
- .ndo_start_xmit = sh_irda_hard_xmit,
- .ndo_do_ioctl = sh_irda_ioctl,
- .ndo_get_stats = sh_irda_stats,
-};
-
-/************************************************************************
-
-
- platform_driver function
-
-
-************************************************************************/
-static int sh_irda_probe(struct platform_device *pdev)
-{
- struct net_device *ndev;
- struct sh_irda_self *self;
- struct resource *res;
- int irq;
- int err = -ENOMEM;
-
- res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
- irq = platform_get_irq(pdev, 0);
- if (!res || irq < 0) {
- dev_err(&pdev->dev, "Not enough platform resources.\n");
- goto exit;
- }
-
- ndev = alloc_irdadev(sizeof(*self));
- if (!ndev)
- goto exit;
-
- self = netdev_priv(ndev);
- self->membase = ioremap_nocache(res->start, resource_size(res));
- if (!self->membase) {
- err = -ENXIO;
- dev_err(&pdev->dev, "Unable to ioremap.\n");
- goto err_mem_1;
- }
-
- err = sh_irda_init_iobuf(self, IRDA_SKB_MAX_MTU, IRDA_SIR_MAX_FRAME);
- if (err)
- goto err_mem_2;
-
- self->pdev = pdev;
- pm_runtime_enable(&pdev->dev);
-
- irda_init_max_qos_capabilies(&self->qos);
-
- ndev->netdev_ops = &sh_irda_ndo;
- ndev->irq = irq;
-
- self->ndev = ndev;
- self->qos.baud_rate.bits &= IR_9600; /* FIXME */
- self->qos.min_turn_time.bits = 1; /* 10 ms or more */
- spin_lock_init(&self->lock);
-
- irda_qos_bits_to_value(&self->qos);
-
- err = register_netdev(ndev);
- if (err)
- goto err_mem_4;
-
- platform_set_drvdata(pdev, ndev);
- err = devm_request_irq(&pdev->dev, irq, sh_irda_irq, 0, "sh_irda", self);
- if (err) {
- dev_warn(&pdev->dev, "Unable to attach sh_irda interrupt\n");
- goto err_mem_4;
- }
-
- dev_info(&pdev->dev, "SuperH IrDA probed\n");
-
- goto exit;
-
-err_mem_4:
- pm_runtime_disable(&pdev->dev);
- sh_irda_remove_iobuf(self);
-err_mem_2:
- iounmap(self->membase);
-err_mem_1:
- free_netdev(ndev);
-exit:
- return err;
-}
-
-static int sh_irda_remove(struct platform_device *pdev)
-{
- struct net_device *ndev = platform_get_drvdata(pdev);
- struct sh_irda_self *self = netdev_priv(ndev);
-
- if (!self)
- return 0;
-
- unregister_netdev(ndev);
- pm_runtime_disable(&pdev->dev);
- sh_irda_remove_iobuf(self);
- iounmap(self->membase);
- free_netdev(ndev);
-
- return 0;
-}
-
-static int sh_irda_runtime_nop(struct device *dev)
-{
- /* Runtime PM callback shared between ->runtime_suspend()
- * and ->runtime_resume(). Simply returns success.
- *
- * This driver re-initializes all registers after
- * pm_runtime_get_sync() anyway so there is no need
- * to save and restore registers here.
- */
- return 0;
-}
-
-static const struct dev_pm_ops sh_irda_pm_ops = {
- .runtime_suspend = sh_irda_runtime_nop,
- .runtime_resume = sh_irda_runtime_nop,
-};
-
-static struct platform_driver sh_irda_driver = {
- .probe = sh_irda_probe,
- .remove = sh_irda_remove,
- .driver = {
- .name = DRIVER_NAME,
- .pm = &sh_irda_pm_ops,
- },
-};
-
-module_platform_driver(sh_irda_driver);
-
-MODULE_AUTHOR("Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>");
-MODULE_DESCRIPTION("SuperH IrDA driver");
-MODULE_LICENSE("GPL");
diff --git a/drivers/net/irda/smsc-ircc2.c b/drivers/net/irda/smsc-ircc2.c
index b455ffe8850c..dcf92ba80872 100644
--- a/drivers/net/irda/smsc-ircc2.c
+++ b/drivers/net/irda/smsc-ircc2.c
@@ -862,7 +862,7 @@ static void smsc_ircc_timeout(struct net_device *dev)
spin_lock_irqsave(&self->lock, flags);
smsc_ircc_sir_start(self);
smsc_ircc_change_speed(self, self->io.speed);
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
netif_wake_queue(dev);
spin_unlock_irqrestore(&self->lock, flags);
}
diff --git a/drivers/net/irda/stir4200.c b/drivers/net/irda/stir4200.c
index 83cc48a01802..42da094b68dd 100644
--- a/drivers/net/irda/stir4200.c
+++ b/drivers/net/irda/stir4200.c
@@ -718,7 +718,7 @@ static void stir_send(struct stir_cb *stir, struct sk_buff *skb)
stir->netdev->stats.tx_packets++;
stir->netdev->stats.tx_bytes += skb->len;
- stir->netdev->trans_start = jiffies;
+ netif_trans_update(stir->netdev);
pr_debug("send %d (%d)\n", skb->len, wraplen);
if (usb_bulk_msg(stir->usbdev, usb_sndbulkpipe(stir->usbdev, 1),
diff --git a/drivers/net/irda/via-ircc.c b/drivers/net/irda/via-ircc.c
index 6960d4cd3cae..ca4442a9d631 100644
--- a/drivers/net/irda/via-ircc.c
+++ b/drivers/net/irda/via-ircc.c
@@ -774,7 +774,7 @@ static netdev_tx_t via_ircc_hard_xmit_sir(struct sk_buff *skb,
/* Check for empty frame */
if (!skb->len) {
via_ircc_change_speed(self, speed);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
dev_kfree_skb(skb);
return NETDEV_TX_OK;
} else
@@ -821,7 +821,7 @@ static netdev_tx_t via_ircc_hard_xmit_sir(struct sk_buff *skb,
RXStart(iobase, OFF);
TXStart(iobase, ON);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
spin_unlock_irqrestore(&self->lock, flags);
dev_kfree_skb(skb);
return NETDEV_TX_OK;
@@ -849,7 +849,7 @@ static netdev_tx_t via_ircc_hard_xmit_fir(struct sk_buff *skb,
if ((speed != self->io.speed) && (speed != -1)) {
if (!skb->len) {
via_ircc_change_speed(self, speed);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
dev_kfree_skb(skb);
return NETDEV_TX_OK;
} else
@@ -869,7 +869,7 @@ static netdev_tx_t via_ircc_hard_xmit_fir(struct sk_buff *skb,
via_ircc_dma_xmit(self, iobase);
//F01 }
//F01 if (self->tx_fifo.free < (MAX_TX_WINDOW -1 )) netif_wake_queue(self->netdev);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
dev_kfree_skb(skb);
spin_unlock_irqrestore(&self->lock, flags);
return NETDEV_TX_OK;
diff --git a/drivers/net/macsec.c b/drivers/net/macsec.c
index c6385617bfb2..47ee2c840b55 100644
--- a/drivers/net/macsec.c
+++ b/drivers/net/macsec.c
@@ -85,7 +85,7 @@ struct gcm_iv {
* @tfm: crypto struct, key storage
*/
struct macsec_key {
- u64 id;
+ u8 id[MACSEC_KEYID_LEN];
struct crypto_aead *tfm;
};
@@ -1408,9 +1408,10 @@ static sci_t nla_get_sci(const struct nlattr *nla)
return (__force sci_t)nla_get_u64(nla);
}
-static int nla_put_sci(struct sk_buff *skb, int attrtype, sci_t value)
+static int nla_put_sci(struct sk_buff *skb, int attrtype, sci_t value,
+ int padattr)
{
- return nla_put_u64(skb, attrtype, (__force u64)value);
+ return nla_put_u64_64bit(skb, attrtype, (__force u64)value, padattr);
}
static struct macsec_tx_sa *get_txsa_from_nl(struct net *net,
@@ -1529,7 +1530,8 @@ static const struct nla_policy macsec_genl_sa_policy[NUM_MACSEC_SA_ATTR] = {
[MACSEC_SA_ATTR_AN] = { .type = NLA_U8 },
[MACSEC_SA_ATTR_ACTIVE] = { .type = NLA_U8 },
[MACSEC_SA_ATTR_PN] = { .type = NLA_U32 },
- [MACSEC_SA_ATTR_KEYID] = { .type = NLA_U64 },
+ [MACSEC_SA_ATTR_KEYID] = { .type = NLA_BINARY,
+ .len = MACSEC_KEYID_LEN, },
[MACSEC_SA_ATTR_KEY] = { .type = NLA_BINARY,
.len = MACSEC_MAX_KEY_LEN, },
};
@@ -1576,6 +1578,9 @@ static bool validate_add_rxsa(struct nlattr **attrs)
return false;
}
+ if (nla_len(attrs[MACSEC_SA_ATTR_KEYID]) != MACSEC_KEYID_LEN)
+ return false;
+
return true;
}
@@ -1641,7 +1646,7 @@ static int macsec_add_rxsa(struct sk_buff *skb, struct genl_info *info)
if (tb_sa[MACSEC_SA_ATTR_ACTIVE])
rx_sa->active = !!nla_get_u8(tb_sa[MACSEC_SA_ATTR_ACTIVE]);
- rx_sa->key.id = nla_get_u64(tb_sa[MACSEC_SA_ATTR_KEYID]);
+ nla_memcpy(rx_sa->key.id, tb_sa[MACSEC_SA_ATTR_KEYID], MACSEC_KEYID_LEN);
rx_sa->sc = rx_sc;
rcu_assign_pointer(rx_sc->sa[assoc_num], rx_sa);
@@ -1722,6 +1727,9 @@ static bool validate_add_txsa(struct nlattr **attrs)
return false;
}
+ if (nla_len(attrs[MACSEC_SA_ATTR_KEYID]) != MACSEC_KEYID_LEN)
+ return false;
+
return true;
}
@@ -1777,7 +1785,7 @@ static int macsec_add_txsa(struct sk_buff *skb, struct genl_info *info)
return -ENOMEM;
}
- tx_sa->key.id = nla_get_u64(tb_sa[MACSEC_SA_ATTR_KEYID]);
+ nla_memcpy(tx_sa->key.id, tb_sa[MACSEC_SA_ATTR_KEYID], MACSEC_KEYID_LEN);
spin_lock_bh(&tx_sa->lock);
tx_sa->next_pn = nla_get_u32(tb_sa[MACSEC_SA_ATTR_PN]);
@@ -2136,16 +2144,36 @@ static int copy_rx_sc_stats(struct sk_buff *skb,
sum.InPktsUnusedSA += tmp.InPktsUnusedSA;
}
- if (nla_put_u64(skb, MACSEC_RXSC_STATS_ATTR_IN_OCTETS_VALIDATED, sum.InOctetsValidated) ||
- nla_put_u64(skb, MACSEC_RXSC_STATS_ATTR_IN_OCTETS_DECRYPTED, sum.InOctetsDecrypted) ||
- nla_put_u64(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_UNCHECKED, sum.InPktsUnchecked) ||
- nla_put_u64(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_DELAYED, sum.InPktsDelayed) ||
- nla_put_u64(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_OK, sum.InPktsOK) ||
- nla_put_u64(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_INVALID, sum.InPktsInvalid) ||
- nla_put_u64(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_LATE, sum.InPktsLate) ||
- nla_put_u64(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_NOT_VALID, sum.InPktsNotValid) ||
- nla_put_u64(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_NOT_USING_SA, sum.InPktsNotUsingSA) ||
- nla_put_u64(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_UNUSED_SA, sum.InPktsUnusedSA))
+ if (nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_OCTETS_VALIDATED,
+ sum.InOctetsValidated,
+ MACSEC_RXSC_STATS_ATTR_PAD) ||
+ nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_OCTETS_DECRYPTED,
+ sum.InOctetsDecrypted,
+ MACSEC_RXSC_STATS_ATTR_PAD) ||
+ nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_UNCHECKED,
+ sum.InPktsUnchecked,
+ MACSEC_RXSC_STATS_ATTR_PAD) ||
+ nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_DELAYED,
+ sum.InPktsDelayed,
+ MACSEC_RXSC_STATS_ATTR_PAD) ||
+ nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_OK,
+ sum.InPktsOK,
+ MACSEC_RXSC_STATS_ATTR_PAD) ||
+ nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_INVALID,
+ sum.InPktsInvalid,
+ MACSEC_RXSC_STATS_ATTR_PAD) ||
+ nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_LATE,
+ sum.InPktsLate,
+ MACSEC_RXSC_STATS_ATTR_PAD) ||
+ nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_NOT_VALID,
+ sum.InPktsNotValid,
+ MACSEC_RXSC_STATS_ATTR_PAD) ||
+ nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_NOT_USING_SA,
+ sum.InPktsNotUsingSA,
+ MACSEC_RXSC_STATS_ATTR_PAD) ||
+ nla_put_u64_64bit(skb, MACSEC_RXSC_STATS_ATTR_IN_PKTS_UNUSED_SA,
+ sum.InPktsUnusedSA,
+ MACSEC_RXSC_STATS_ATTR_PAD))
return -EMSGSIZE;
return 0;
@@ -2174,10 +2202,18 @@ static int copy_tx_sc_stats(struct sk_buff *skb,
sum.OutOctetsEncrypted += tmp.OutOctetsEncrypted;
}
- if (nla_put_u64(skb, MACSEC_TXSC_STATS_ATTR_OUT_PKTS_PROTECTED, sum.OutPktsProtected) ||
- nla_put_u64(skb, MACSEC_TXSC_STATS_ATTR_OUT_PKTS_ENCRYPTED, sum.OutPktsEncrypted) ||
- nla_put_u64(skb, MACSEC_TXSC_STATS_ATTR_OUT_OCTETS_PROTECTED, sum.OutOctetsProtected) ||
- nla_put_u64(skb, MACSEC_TXSC_STATS_ATTR_OUT_OCTETS_ENCRYPTED, sum.OutOctetsEncrypted))
+ if (nla_put_u64_64bit(skb, MACSEC_TXSC_STATS_ATTR_OUT_PKTS_PROTECTED,
+ sum.OutPktsProtected,
+ MACSEC_TXSC_STATS_ATTR_PAD) ||
+ nla_put_u64_64bit(skb, MACSEC_TXSC_STATS_ATTR_OUT_PKTS_ENCRYPTED,
+ sum.OutPktsEncrypted,
+ MACSEC_TXSC_STATS_ATTR_PAD) ||
+ nla_put_u64_64bit(skb, MACSEC_TXSC_STATS_ATTR_OUT_OCTETS_PROTECTED,
+ sum.OutOctetsProtected,
+ MACSEC_TXSC_STATS_ATTR_PAD) ||
+ nla_put_u64_64bit(skb, MACSEC_TXSC_STATS_ATTR_OUT_OCTETS_ENCRYPTED,
+ sum.OutOctetsEncrypted,
+ MACSEC_TXSC_STATS_ATTR_PAD))
return -EMSGSIZE;
return 0;
@@ -2210,14 +2246,30 @@ static int copy_secy_stats(struct sk_buff *skb,
sum.InPktsOverrun += tmp.InPktsOverrun;
}
- if (nla_put_u64(skb, MACSEC_SECY_STATS_ATTR_OUT_PKTS_UNTAGGED, sum.OutPktsUntagged) ||
- nla_put_u64(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_UNTAGGED, sum.InPktsUntagged) ||
- nla_put_u64(skb, MACSEC_SECY_STATS_ATTR_OUT_PKTS_TOO_LONG, sum.OutPktsTooLong) ||
- nla_put_u64(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_NO_TAG, sum.InPktsNoTag) ||
- nla_put_u64(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_BAD_TAG, sum.InPktsBadTag) ||
- nla_put_u64(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_UNKNOWN_SCI, sum.InPktsUnknownSCI) ||
- nla_put_u64(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_NO_SCI, sum.InPktsNoSCI) ||
- nla_put_u64(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_OVERRUN, sum.InPktsOverrun))
+ if (nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_OUT_PKTS_UNTAGGED,
+ sum.OutPktsUntagged,
+ MACSEC_SECY_STATS_ATTR_PAD) ||
+ nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_UNTAGGED,
+ sum.InPktsUntagged,
+ MACSEC_SECY_STATS_ATTR_PAD) ||
+ nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_OUT_PKTS_TOO_LONG,
+ sum.OutPktsTooLong,
+ MACSEC_SECY_STATS_ATTR_PAD) ||
+ nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_NO_TAG,
+ sum.InPktsNoTag,
+ MACSEC_SECY_STATS_ATTR_PAD) ||
+ nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_BAD_TAG,
+ sum.InPktsBadTag,
+ MACSEC_SECY_STATS_ATTR_PAD) ||
+ nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_UNKNOWN_SCI,
+ sum.InPktsUnknownSCI,
+ MACSEC_SECY_STATS_ATTR_PAD) ||
+ nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_NO_SCI,
+ sum.InPktsNoSCI,
+ MACSEC_SECY_STATS_ATTR_PAD) ||
+ nla_put_u64_64bit(skb, MACSEC_SECY_STATS_ATTR_IN_PKTS_OVERRUN,
+ sum.InPktsOverrun,
+ MACSEC_SECY_STATS_ATTR_PAD))
return -EMSGSIZE;
return 0;
@@ -2231,9 +2283,11 @@ static int nla_put_secy(struct macsec_secy *secy, struct sk_buff *skb)
if (!secy_nest)
return 1;
- if (nla_put_sci(skb, MACSEC_SECY_ATTR_SCI, secy->sci) ||
- nla_put_u64(skb, MACSEC_SECY_ATTR_CIPHER_SUITE,
- MACSEC_DEFAULT_CIPHER_ID) ||
+ if (nla_put_sci(skb, MACSEC_SECY_ATTR_SCI, secy->sci,
+ MACSEC_SECY_ATTR_PAD) ||
+ nla_put_u64_64bit(skb, MACSEC_SECY_ATTR_CIPHER_SUITE,
+ MACSEC_DEFAULT_CIPHER_ID,
+ MACSEC_SECY_ATTR_PAD) ||
nla_put_u8(skb, MACSEC_SECY_ATTR_ICV_LEN, secy->icv_len) ||
nla_put_u8(skb, MACSEC_SECY_ATTR_OPER, secy->operational) ||
nla_put_u8(skb, MACSEC_SECY_ATTR_PROTECT, secy->protect_frames) ||
@@ -2318,7 +2372,7 @@ static int dump_secy(struct macsec_secy *secy, struct net_device *dev,
if (nla_put_u8(skb, MACSEC_SA_ATTR_AN, i) ||
nla_put_u32(skb, MACSEC_SA_ATTR_PN, tx_sa->next_pn) ||
- nla_put_u64(skb, MACSEC_SA_ATTR_KEYID, tx_sa->key.id) ||
+ nla_put(skb, MACSEC_SA_ATTR_KEYID, MACSEC_KEYID_LEN, tx_sa->key.id) ||
nla_put_u8(skb, MACSEC_SA_ATTR_ACTIVE, tx_sa->active)) {
nla_nest_cancel(skb, txsa_nest);
nla_nest_cancel(skb, txsa_list);
@@ -2359,7 +2413,8 @@ static int dump_secy(struct macsec_secy *secy, struct net_device *dev,
}
if (nla_put_u8(skb, MACSEC_RXSC_ATTR_ACTIVE, rx_sc->active) ||
- nla_put_sci(skb, MACSEC_RXSC_ATTR_SCI, rx_sc->sci)) {
+ nla_put_sci(skb, MACSEC_RXSC_ATTR_SCI, rx_sc->sci,
+ MACSEC_RXSC_ATTR_PAD)) {
nla_nest_cancel(skb, rxsc_nest);
nla_nest_cancel(skb, rxsc_list);
goto nla_put_failure;
@@ -2419,7 +2474,7 @@ static int dump_secy(struct macsec_secy *secy, struct net_device *dev,
if (nla_put_u8(skb, MACSEC_SA_ATTR_AN, i) ||
nla_put_u32(skb, MACSEC_SA_ATTR_PN, rx_sa->next_pn) ||
- nla_put_u64(skb, MACSEC_SA_ATTR_KEYID, rx_sa->key.id) ||
+ nla_put(skb, MACSEC_SA_ATTR_KEYID, MACSEC_KEYID_LEN, rx_sa->key.id) ||
nla_put_u8(skb, MACSEC_SA_ATTR_ACTIVE, rx_sa->active)) {
nla_nest_cancel(skb, rxsa_nest);
nla_nest_cancel(skb, rxsc_nest);
@@ -2836,7 +2891,7 @@ static void macsec_free_netdev(struct net_device *dev)
static void macsec_setup(struct net_device *dev)
{
ether_setup(dev);
- dev->tx_queue_len = 0;
+ dev->priv_flags |= IFF_NO_QUEUE;
dev->netdev_ops = &macsec_netdev_ops;
dev->destructor = macsec_free_netdev;
@@ -3163,9 +3218,9 @@ static struct net *macsec_get_link_net(const struct net_device *dev)
static size_t macsec_get_size(const struct net_device *dev)
{
return 0 +
- nla_total_size(8) + /* SCI */
+ nla_total_size_64bit(8) + /* SCI */
nla_total_size(1) + /* ICV_LEN */
- nla_total_size(8) + /* CIPHER_SUITE */
+ nla_total_size_64bit(8) + /* CIPHER_SUITE */
nla_total_size(4) + /* WINDOW */
nla_total_size(1) + /* ENCODING_SA */
nla_total_size(1) + /* ENCRYPT */
@@ -3184,10 +3239,11 @@ static int macsec_fill_info(struct sk_buff *skb,
struct macsec_secy *secy = &macsec_priv(dev)->secy;
struct macsec_tx_sc *tx_sc = &secy->tx_sc;
- if (nla_put_sci(skb, IFLA_MACSEC_SCI, secy->sci) ||
+ if (nla_put_sci(skb, IFLA_MACSEC_SCI, secy->sci,
+ IFLA_MACSEC_PAD) ||
nla_put_u8(skb, IFLA_MACSEC_ICV_LEN, secy->icv_len) ||
- nla_put_u64(skb, IFLA_MACSEC_CIPHER_SUITE,
- MACSEC_DEFAULT_CIPHER_ID) ||
+ nla_put_u64_64bit(skb, IFLA_MACSEC_CIPHER_SUITE,
+ MACSEC_DEFAULT_CIPHER_ID, IFLA_MACSEC_PAD) ||
nla_put_u8(skb, IFLA_MACSEC_ENCODING_SA, tx_sc->encoding_sa) ||
nla_put_u8(skb, IFLA_MACSEC_ENCRYPT, tx_sc->encrypt) ||
nla_put_u8(skb, IFLA_MACSEC_PROTECT, secy->protect_frames) ||
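The macsec netlink changes above follow the 64-bit attribute padding API: nla_put_u64_64bit() places the u64 on an 8-byte boundary, emitting a zero-length pad attribute of the per-family *_PAD type when needed, and nla_total_size_64bit() reserves room for that pad in the size calculation. A minimal sketch of the pattern, where MY_ATTR_BYTES and MY_ATTR_PAD are illustrative names rather than anything defined by the MACsec uAPI:

static size_t my_stats_size(void)
{
	/* u64 payload plus a possible zero-length pad attribute */
	return nla_total_size_64bit(sizeof(u64));
}

static int my_fill_stats(struct sk_buff *skb, u64 bytes)
{
	if (nla_put_u64_64bit(skb, MY_ATTR_BYTES, bytes, MY_ATTR_PAD))
		return -EMSGSIZE;

	return 0;
}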
diff --git a/drivers/net/macvlan.c b/drivers/net/macvlan.c
index 2bcf1f321bea..cb01023eab41 100644
--- a/drivers/net/macvlan.c
+++ b/drivers/net/macvlan.c
@@ -795,6 +795,7 @@ static int macvlan_init(struct net_device *dev)
{
struct macvlan_dev *vlan = netdev_priv(dev);
const struct net_device *lowerdev = vlan->lowerdev;
+ struct macvlan_port *port = vlan->port;
dev->state = (dev->state & ~MACVLAN_STATE_MASK) |
(lowerdev->state & MACVLAN_STATE_MASK);
@@ -812,6 +813,8 @@ static int macvlan_init(struct net_device *dev)
if (!vlan->pcpu_stats)
return -ENOMEM;
+ port->count += 1;
+
return 0;
}
@@ -1312,10 +1315,9 @@ int macvlan_common_newlink(struct net *src_net, struct net_device *dev,
return err;
}
- port->count += 1;
err = register_netdevice(dev);
if (err < 0)
- goto destroy_port;
+ return err;
dev->priv_flags |= IFF_MACVLAN;
err = netdev_upper_dev_link(lowerdev, dev);
@@ -1330,10 +1332,6 @@ int macvlan_common_newlink(struct net *src_net, struct net_device *dev,
unregister_netdev:
unregister_netdevice(dev);
-destroy_port:
- port->count -= 1;
- if (!port->count)
- macvlan_port_destroy(lowerdev);
return err;
}
diff --git a/drivers/net/macvtap.c b/drivers/net/macvtap.c
index 95394edd1ed5..bd6720962b1f 100644
--- a/drivers/net/macvtap.c
+++ b/drivers/net/macvtap.c
@@ -129,7 +129,18 @@ static DEFINE_MUTEX(minor_lock);
static DEFINE_IDR(minor_idr);
#define GOODCOPY_LEN 128
-static struct class *macvtap_class;
+static const void *macvtap_net_namespace(struct device *d)
+{
+ struct net_device *dev = to_net_dev(d->parent);
+ return dev_net(dev);
+}
+
+static struct class macvtap_class = {
+ .name = "macvtap",
+ .owner = THIS_MODULE,
+ .ns_type = &net_ns_type_operations,
+ .namespace = macvtap_net_namespace,
+};
static struct cdev macvtap_cdev;
static const struct proto_ops macvtap_socket_ops;
@@ -373,7 +384,7 @@ static rx_handler_result_t macvtap_handle_frame(struct sk_buff **pskb)
goto wake_up;
}
- kfree_skb(skb);
+ consume_skb(skb);
while (segs) {
struct sk_buff *nskb = segs->next;
@@ -1278,10 +1289,12 @@ static int macvtap_device_event(struct notifier_block *unused,
struct device *classdev;
dev_t devt;
int err;
+ char tap_name[IFNAMSIZ];
if (dev->rtnl_link_ops != &macvtap_link_ops)
return NOTIFY_DONE;
+ snprintf(tap_name, IFNAMSIZ, "tap%d", dev->ifindex);
vlan = netdev_priv(dev);
switch (event) {
@@ -1295,16 +1308,24 @@ static int macvtap_device_event(struct notifier_block *unused,
return notifier_from_errno(err);
devt = MKDEV(MAJOR(macvtap_major), vlan->minor);
- classdev = device_create(macvtap_class, &dev->dev, devt,
- dev, "tap%d", dev->ifindex);
+ classdev = device_create(&macvtap_class, &dev->dev, devt,
+ dev, tap_name);
if (IS_ERR(classdev)) {
macvtap_free_minor(vlan);
return notifier_from_errno(PTR_ERR(classdev));
}
+ err = sysfs_create_link(&dev->dev.kobj, &classdev->kobj,
+ tap_name);
+ if (err)
+ return notifier_from_errno(err);
break;
case NETDEV_UNREGISTER:
+ /* vlan->minor == 0 if NETDEV_REGISTER above failed */
+ if (vlan->minor == 0)
+ break;
+ sysfs_remove_link(&dev->dev.kobj, tap_name);
devt = MKDEV(MAJOR(macvtap_major), vlan->minor);
- device_destroy(macvtap_class, devt);
+ device_destroy(&macvtap_class, devt);
macvtap_free_minor(vlan);
break;
}
@@ -1330,11 +1351,9 @@ static int macvtap_init(void)
if (err)
goto out2;
- macvtap_class = class_create(THIS_MODULE, "macvtap");
- if (IS_ERR(macvtap_class)) {
- err = PTR_ERR(macvtap_class);
+ err = class_register(&macvtap_class);
+ if (err)
goto out3;
- }
err = register_netdevice_notifier(&macvtap_notifier_block);
if (err)
@@ -1349,7 +1368,7 @@ static int macvtap_init(void)
out5:
unregister_netdevice_notifier(&macvtap_notifier_block);
out4:
- class_unregister(macvtap_class);
+ class_unregister(&macvtap_class);
out3:
cdev_del(&macvtap_cdev);
out2:
@@ -1363,7 +1382,7 @@ static void macvtap_exit(void)
{
rtnl_link_unregister(&macvtap_link_ops);
unregister_netdevice_notifier(&macvtap_notifier_block);
- class_unregister(macvtap_class);
+ class_unregister(&macvtap_class);
cdev_del(&macvtap_cdev);
unregister_chrdev_region(macvtap_major, MACVTAP_NUM_DEVS);
idr_destroy(&minor_idr);
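The macvtap hunk works because namespace awareness can only be expressed on a statically defined struct class; once ns_type and the namespace callback are set, each tapN node is visible only in the net namespace of the underlying device, and the sysfs link from that device points at it under a stable name. A minimal sketch of the idiom, with example_* standing in for a real driver:

static const void *example_net_namespace(struct device *d)
{
	struct net_device *dev = to_net_dev(d->parent);

	return dev_net(dev);
}

static struct class example_class = {
	.name      = "example_tap",
	.owner     = THIS_MODULE,
	.ns_type   = &net_ns_type_operations,
	.namespace = example_net_namespace,
};

/* registered once with class_register(&example_class); device nodes are
 * then created and destroyed against &example_class rather than a
 * class_create() handle */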
diff --git a/drivers/net/phy/fixed_phy.c b/drivers/net/phy/fixed_phy.c
index fc07a8866020..2d2e4339f0df 100644
--- a/drivers/net/phy/fixed_phy.c
+++ b/drivers/net/phy/fixed_phy.c
@@ -255,7 +255,8 @@ int fixed_phy_add(unsigned int irq, int phy_addr,
memset(fp->regs, 0xFF, sizeof(fp->regs[0]) * MII_REGS_NUM);
- fmb->mii_bus->irq[phy_addr] = irq;
+ if (irq != PHY_POLL)
+ fmb->mii_bus->irq[phy_addr] = irq;
fp->addr = phy_addr;
fp->status = *status;
@@ -314,6 +315,9 @@ struct phy_device *fixed_phy_register(unsigned int irq,
int phy_addr;
int ret;
+ if (!fmb->mii_bus || fmb->mii_bus->state != MDIOBUS_REGISTERED)
+ return ERR_PTR(-EPROBE_DEFER);
+
/* Get the next available PHY address, up to PHY_MAX_ADDR */
spin_lock(&phy_fixed_addr_lock);
if (phy_fixed_addr == PHY_MAX_ADDR) {
@@ -328,7 +332,7 @@ struct phy_device *fixed_phy_register(unsigned int irq,
return ERR_PTR(ret);
phy = get_phy_device(fmb->mii_bus, phy_addr, false);
- if (!phy || IS_ERR(phy)) {
+ if (IS_ERR(phy)) {
fixed_phy_del(phy_addr);
return ERR_PTR(-EINVAL);
}
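The fixed_phy changes defer callers instead of letting them race the fixed MDIO bus: if fixed_phy_register() runs before the bus is registered it now returns -EPROBE_DEFER, which a caller simply propagates so the driver core retries its probe later. A hedged sketch of such a caller, assuming the fixed_phy_register() signature of this era (irq, status, link_gpio, device_node) and with example_* names invented for illustration:

static int example_attach_fixed_phy(struct device_node *np)
{
	struct fixed_phy_status status = {
		.link   = 1,
		.speed  = SPEED_1000,
		.duplex = DUPLEX_FULL,
	};
	struct phy_device *phydev;

	phydev = fixed_phy_register(PHY_POLL, &status, -1, np);
	if (IS_ERR(phydev))
		return PTR_ERR(phydev);	/* -EPROBE_DEFER just bubbles up */

	return 0;
}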
diff --git a/drivers/net/phy/lxt.c b/drivers/net/phy/lxt.c
index f6078376ef50..b9fde1bcf0f0 100644
--- a/drivers/net/phy/lxt.c
+++ b/drivers/net/phy/lxt.c
@@ -80,23 +80,15 @@ static int lxt970_ack_interrupt(struct phy_device *phydev)
static int lxt970_config_intr(struct phy_device *phydev)
{
- int err;
-
if (phydev->interrupts == PHY_INTERRUPT_ENABLED)
- err = phy_write(phydev, MII_LXT970_IER, MII_LXT970_IER_IEN);
+ return phy_write(phydev, MII_LXT970_IER, MII_LXT970_IER_IEN);
else
- err = phy_write(phydev, MII_LXT970_IER, 0);
-
- return err;
+ return phy_write(phydev, MII_LXT970_IER, 0);
}
static int lxt970_config_init(struct phy_device *phydev)
{
- int err;
-
- err = phy_write(phydev, MII_LXT970_CONFIG, 0);
-
- return err;
+ return phy_write(phydev, MII_LXT970_CONFIG, 0);
}
@@ -112,14 +104,10 @@ static int lxt971_ack_interrupt(struct phy_device *phydev)
static int lxt971_config_intr(struct phy_device *phydev)
{
- int err;
-
if (phydev->interrupts == PHY_INTERRUPT_ENABLED)
- err = phy_write(phydev, MII_LXT971_IER, MII_LXT971_IER_IEN);
+ return phy_write(phydev, MII_LXT971_IER, MII_LXT971_IER_IEN);
else
- err = phy_write(phydev, MII_LXT971_IER, 0);
-
- return err;
+ return phy_write(phydev, MII_LXT971_IER, 0);
}
/*
diff --git a/drivers/net/phy/mdio-mux.c b/drivers/net/phy/mdio-mux.c
index 308ade0eb1b6..5c81d6faf304 100644
--- a/drivers/net/phy/mdio-mux.c
+++ b/drivers/net/phy/mdio-mux.c
@@ -45,13 +45,7 @@ static int mdio_mux_read(struct mii_bus *bus, int phy_id, int regnum)
struct mdio_mux_parent_bus *pb = cb->parent;
int r;
- /* In theory multiple mdio_mux could be stacked, thus creating
- * more than a single level of nesting. But in practice,
- * SINGLE_DEPTH_NESTING will cover the vast majority of use
- * cases. We use it, instead of trying to handle the general
- * case.
- */
- mutex_lock_nested(&pb->mii_bus->mdio_lock, SINGLE_DEPTH_NESTING);
+ mutex_lock_nested(&pb->mii_bus->mdio_lock, MDIO_MUTEX_MUX);
r = pb->switch_fn(pb->current_child, cb->bus_number, pb->switch_data);
if (r)
goto out;
@@ -76,7 +70,7 @@ static int mdio_mux_write(struct mii_bus *bus, int phy_id,
int r;
- mutex_lock_nested(&pb->mii_bus->mdio_lock, SINGLE_DEPTH_NESTING);
+ mutex_lock_nested(&pb->mii_bus->mdio_lock, MDIO_MUTEX_MUX);
r = pb->switch_fn(pb->current_child, cb->bus_number, pb->switch_data);
if (r)
goto out;
diff --git a/drivers/net/phy/mdio_bus.c b/drivers/net/phy/mdio_bus.c
index 0cba64f1ecf4..09deef4bed09 100644
--- a/drivers/net/phy/mdio_bus.c
+++ b/drivers/net/phy/mdio_bus.c
@@ -333,7 +333,7 @@ int __mdiobus_register(struct mii_bus *bus, struct module *owner)
struct phy_device *phydev;
phydev = mdiobus_scan(bus, i);
- if (IS_ERR(phydev)) {
+ if (IS_ERR(phydev) && (PTR_ERR(phydev) != -ENODEV)) {
err = PTR_ERR(phydev);
goto error;
}
@@ -419,7 +419,7 @@ struct phy_device *mdiobus_scan(struct mii_bus *bus, int addr)
int err;
phydev = get_phy_device(bus, addr, false);
- if (IS_ERR(phydev) || phydev == NULL)
+ if (IS_ERR(phydev))
return phydev;
/*
@@ -431,7 +431,7 @@ struct phy_device *mdiobus_scan(struct mii_bus *bus, int addr)
err = phy_device_register(phydev);
if (err) {
phy_device_free(phydev);
- return NULL;
+ return ERR_PTR(-ENODEV);
}
return phydev;
@@ -457,7 +457,7 @@ int mdiobus_read_nested(struct mii_bus *bus, int addr, u32 regnum)
BUG_ON(in_interrupt());
- mutex_lock_nested(&bus->mdio_lock, SINGLE_DEPTH_NESTING);
+ mutex_lock_nested(&bus->mdio_lock, MDIO_MUTEX_NESTED);
retval = bus->read(bus, addr, regnum);
mutex_unlock(&bus->mdio_lock);
@@ -509,7 +509,7 @@ int mdiobus_write_nested(struct mii_bus *bus, int addr, u32 regnum, u16 val)
BUG_ON(in_interrupt());
- mutex_lock_nested(&bus->mdio_lock, SINGLE_DEPTH_NESTING);
+ mutex_lock_nested(&bus->mdio_lock, MDIO_MUTEX_NESTED);
err = bus->write(bus, addr, regnum, val);
mutex_unlock(&bus->mdio_lock);
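These mdio hunks replace the single SINGLE_DEPTH_NESTING subclass with dedicated lockdep classes, so an access that goes through a mux and an access that nests on the same bus no longer share one class; the class names come from an enum in the MDIO headers (MDIO_MUTEX_NORMAL, MDIO_MUTEX_MUX, MDIO_MUTEX_NESTED; treat the exact location as approximate). A minimal sketch of a nested accessor using one of them, with the example_ name invented here:

static int example_nested_read(struct mii_bus *bus, int addr, u32 regnum)
{
	int ret;

	/* distinct subclass keeps lockdep from flagging the nesting */
	mutex_lock_nested(&bus->mdio_lock, MDIO_MUTEX_NESTED);
	ret = bus->read(bus, addr, regnum);
	mutex_unlock(&bus->mdio_lock);

	return ret;
}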
diff --git a/drivers/net/phy/micrel.c b/drivers/net/phy/micrel.c
index 4516c8a4fd82..5a8fefc25157 100644
--- a/drivers/net/phy/micrel.c
+++ b/drivers/net/phy/micrel.c
@@ -726,7 +726,7 @@ static int kszphy_probe(struct phy_device *phydev)
static struct phy_driver ksphy_driver[] = {
{
.phy_id = PHY_ID_KS8737,
- .phy_id_mask = 0x00fffff0,
+ .phy_id_mask = MICREL_PHY_ID_MASK,
.name = "Micrel KS8737",
.features = (PHY_BASIC_FEATURES | SUPPORTED_Pause),
.flags = PHY_HAS_MAGICANEG | PHY_HAS_INTERRUPT,
@@ -781,7 +781,7 @@ static struct phy_driver ksphy_driver[] = {
.resume = genphy_resume,
}, {
.phy_id = PHY_ID_KSZ8041,
- .phy_id_mask = 0x00fffff0,
+ .phy_id_mask = MICREL_PHY_ID_MASK,
.name = "Micrel KSZ8041",
.features = (PHY_BASIC_FEATURES | SUPPORTED_Pause
| SUPPORTED_Asym_Pause),
@@ -800,7 +800,7 @@ static struct phy_driver ksphy_driver[] = {
.resume = genphy_resume,
}, {
.phy_id = PHY_ID_KSZ8041RNLI,
- .phy_id_mask = 0x00fffff0,
+ .phy_id_mask = MICREL_PHY_ID_MASK,
.name = "Micrel KSZ8041RNLI",
.features = PHY_BASIC_FEATURES |
SUPPORTED_Pause | SUPPORTED_Asym_Pause,
@@ -819,7 +819,7 @@ static struct phy_driver ksphy_driver[] = {
.resume = genphy_resume,
}, {
.phy_id = PHY_ID_KSZ8051,
- .phy_id_mask = 0x00fffff0,
+ .phy_id_mask = MICREL_PHY_ID_MASK,
.name = "Micrel KSZ8051",
.features = (PHY_BASIC_FEATURES | SUPPORTED_Pause
| SUPPORTED_Asym_Pause),
@@ -857,7 +857,7 @@ static struct phy_driver ksphy_driver[] = {
}, {
.phy_id = PHY_ID_KSZ8081,
.name = "Micrel KSZ8081 or KSZ8091",
- .phy_id_mask = 0x00fffff0,
+ .phy_id_mask = MICREL_PHY_ID_MASK,
.features = (PHY_BASIC_FEATURES | SUPPORTED_Pause),
.flags = PHY_HAS_MAGICANEG | PHY_HAS_INTERRUPT,
.driver_data = &ksz8081_type,
@@ -875,7 +875,7 @@ static struct phy_driver ksphy_driver[] = {
}, {
.phy_id = PHY_ID_KSZ8061,
.name = "Micrel KSZ8061",
- .phy_id_mask = 0x00fffff0,
+ .phy_id_mask = MICREL_PHY_ID_MASK,
.features = (PHY_BASIC_FEATURES | SUPPORTED_Pause),
.flags = PHY_HAS_MAGICANEG | PHY_HAS_INTERRUPT,
.config_init = kszphy_config_init,
@@ -909,7 +909,7 @@ static struct phy_driver ksphy_driver[] = {
.write_mmd_indirect = ksz9021_wr_mmd_phyreg,
}, {
.phy_id = PHY_ID_KSZ9031,
- .phy_id_mask = 0x00fffff0,
+ .phy_id_mask = MICREL_PHY_ID_MASK,
.name = "Micrel KSZ9031 Gigabit PHY",
.features = (PHY_GBIT_FEATURES | SUPPORTED_Pause),
.flags = PHY_HAS_MAGICANEG | PHY_HAS_INTERRUPT,
@@ -926,7 +926,7 @@ static struct phy_driver ksphy_driver[] = {
.resume = genphy_resume,
}, {
.phy_id = PHY_ID_KSZ8873MLL,
- .phy_id_mask = 0x00fffff0,
+ .phy_id_mask = MICREL_PHY_ID_MASK,
.name = "Micrel KSZ8873MLL Switch",
.features = (SUPPORTED_Pause | SUPPORTED_Asym_Pause),
.flags = PHY_HAS_MAGICANEG,
@@ -940,7 +940,7 @@ static struct phy_driver ksphy_driver[] = {
.resume = genphy_resume,
}, {
.phy_id = PHY_ID_KSZ886X,
- .phy_id_mask = 0x00fffff0,
+ .phy_id_mask = MICREL_PHY_ID_MASK,
.name = "Micrel KSZ886X Switch",
.features = (PHY_BASIC_FEATURES | SUPPORTED_Pause),
.flags = PHY_HAS_MAGICANEG | PHY_HAS_INTERRUPT,
@@ -962,17 +962,17 @@ MODULE_LICENSE("GPL");
static struct mdio_device_id __maybe_unused micrel_tbl[] = {
{ PHY_ID_KSZ9021, 0x000ffffe },
- { PHY_ID_KSZ9031, 0x00fffff0 },
+ { PHY_ID_KSZ9031, MICREL_PHY_ID_MASK },
{ PHY_ID_KSZ8001, 0x00ffffff },
- { PHY_ID_KS8737, 0x00fffff0 },
+ { PHY_ID_KS8737, MICREL_PHY_ID_MASK },
{ PHY_ID_KSZ8021, 0x00ffffff },
{ PHY_ID_KSZ8031, 0x00ffffff },
- { PHY_ID_KSZ8041, 0x00fffff0 },
- { PHY_ID_KSZ8051, 0x00fffff0 },
- { PHY_ID_KSZ8061, 0x00fffff0 },
- { PHY_ID_KSZ8081, 0x00fffff0 },
- { PHY_ID_KSZ8873MLL, 0x00fffff0 },
- { PHY_ID_KSZ886X, 0x00fffff0 },
+ { PHY_ID_KSZ8041, MICREL_PHY_ID_MASK },
+ { PHY_ID_KSZ8051, MICREL_PHY_ID_MASK },
+ { PHY_ID_KSZ8061, MICREL_PHY_ID_MASK },
+ { PHY_ID_KSZ8081, MICREL_PHY_ID_MASK },
+ { PHY_ID_KSZ8873MLL, MICREL_PHY_ID_MASK },
+ { PHY_ID_KSZ886X, MICREL_PHY_ID_MASK },
{ }
};
diff --git a/drivers/net/phy/phy.c b/drivers/net/phy/phy.c
index 5590b9c182c9..c5dc2c363f96 100644
--- a/drivers/net/phy/phy.c
+++ b/drivers/net/phy/phy.c
@@ -362,6 +362,60 @@ int phy_ethtool_sset(struct phy_device *phydev, struct ethtool_cmd *cmd)
}
EXPORT_SYMBOL(phy_ethtool_sset);
+int phy_ethtool_ksettings_set(struct phy_device *phydev,
+ const struct ethtool_link_ksettings *cmd)
+{
+ u8 autoneg = cmd->base.autoneg;
+ u8 duplex = cmd->base.duplex;
+ u32 speed = cmd->base.speed;
+ u32 advertising;
+
+ if (cmd->base.phy_address != phydev->mdio.addr)
+ return -EINVAL;
+
+ ethtool_convert_link_mode_to_legacy_u32(&advertising,
+ cmd->link_modes.advertising);
+
+ /* We make sure that we don't pass unsupported values into the PHY */
+ advertising &= phydev->supported;
+
+ /* Verify the settings we care about. */
+ if (autoneg != AUTONEG_ENABLE && autoneg != AUTONEG_DISABLE)
+ return -EINVAL;
+
+ if (autoneg == AUTONEG_ENABLE && advertising == 0)
+ return -EINVAL;
+
+ if (autoneg == AUTONEG_DISABLE &&
+ ((speed != SPEED_1000 &&
+ speed != SPEED_100 &&
+ speed != SPEED_10) ||
+ (duplex != DUPLEX_HALF &&
+ duplex != DUPLEX_FULL)))
+ return -EINVAL;
+
+ phydev->autoneg = autoneg;
+
+ phydev->speed = speed;
+
+ phydev->advertising = advertising;
+
+ if (autoneg == AUTONEG_ENABLE)
+ phydev->advertising |= ADVERTISED_Autoneg;
+ else
+ phydev->advertising &= ~ADVERTISED_Autoneg;
+
+ phydev->duplex = duplex;
+
+ phydev->mdix = cmd->base.eth_tp_mdix_ctrl;
+
+ /* Restart the PHY */
+ phy_start_aneg(phydev);
+
+ return 0;
+}
+EXPORT_SYMBOL(phy_ethtool_ksettings_set);
+
int phy_ethtool_gset(struct phy_device *phydev, struct ethtool_cmd *cmd)
{
cmd->supported = phydev->supported;
@@ -385,6 +439,33 @@ int phy_ethtool_gset(struct phy_device *phydev, struct ethtool_cmd *cmd)
}
EXPORT_SYMBOL(phy_ethtool_gset);
+int phy_ethtool_ksettings_get(struct phy_device *phydev,
+ struct ethtool_link_ksettings *cmd)
+{
+ ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.supported,
+ phydev->supported);
+
+ ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.advertising,
+ phydev->advertising);
+
+ ethtool_convert_legacy_u32_to_link_mode(cmd->link_modes.lp_advertising,
+ phydev->lp_advertising);
+
+ cmd->base.speed = phydev->speed;
+ cmd->base.duplex = phydev->duplex;
+ if (phydev->interface == PHY_INTERFACE_MODE_MOCA)
+ cmd->base.port = PORT_BNC;
+ else
+ cmd->base.port = PORT_MII;
+
+ cmd->base.phy_address = phydev->mdio.addr;
+ cmd->base.autoneg = phydev->autoneg;
+ cmd->base.eth_tp_mdix_ctrl = phydev->mdix;
+
+ return 0;
+}
+EXPORT_SYMBOL(phy_ethtool_ksettings_get);
+
/**
* phy_mii_ioctl - generic PHY MII ioctl interface
* @phydev: the phy_device struct
@@ -790,9 +871,11 @@ void phy_start(struct phy_device *phydev)
break;
case PHY_HALTED:
/* make sure interrupts are re-enabled for the PHY */
- err = phy_enable_interrupts(phydev);
- if (err < 0)
- break;
+ if (phydev->irq != PHY_POLL) {
+ err = phy_enable_interrupts(phydev);
+ if (err < 0)
+ break;
+ }
phydev->state = PHY_RESUMING;
do_resume = true;
@@ -1266,3 +1349,27 @@ void phy_ethtool_get_wol(struct phy_device *phydev, struct ethtool_wolinfo *wol)
phydev->drv->get_wol(phydev, wol);
}
EXPORT_SYMBOL(phy_ethtool_get_wol);
+
+int phy_ethtool_get_link_ksettings(struct net_device *ndev,
+ struct ethtool_link_ksettings *cmd)
+{
+ struct phy_device *phydev = ndev->phydev;
+
+ if (!phydev)
+ return -ENODEV;
+
+ return phy_ethtool_ksettings_get(phydev, cmd);
+}
+EXPORT_SYMBOL(phy_ethtool_get_link_ksettings);
+
+int phy_ethtool_set_link_ksettings(struct net_device *ndev,
+ const struct ethtool_link_ksettings *cmd)
+{
+ struct phy_device *phydev = ndev->phydev;
+
+ if (!phydev)
+ return -ENODEV;
+
+ return phy_ethtool_ksettings_set(phydev, cmd);
+}
+EXPORT_SYMBOL(phy_ethtool_set_link_ksettings);
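The two exported wrappers at the end of the phy.c hunk exist so that MAC drivers whose net_device carries its PHY in ndev->phydev can plug the new ksettings ethtool callbacks in directly. A minimal sketch of that wiring, with example_ethtool_ops being an illustrative name rather than any in-tree driver:

static const struct ethtool_ops example_ethtool_ops = {
	.get_link           = ethtool_op_get_link,
	.get_link_ksettings = phy_ethtool_get_link_ksettings,
	.set_link_ksettings = phy_ethtool_set_link_ksettings,
};

/* in probe: ndev->ethtool_ops = &example_ethtool_ops; the helpers return
 * -ENODEV until phy_connect() has populated ndev->phydev */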
diff --git a/drivers/net/phy/phy_device.c b/drivers/net/phy/phy_device.c
index e551f3a89cfd..e977ba931878 100644
--- a/drivers/net/phy/phy_device.c
+++ b/drivers/net/phy/phy_device.c
@@ -529,7 +529,7 @@ struct phy_device *get_phy_device(struct mii_bus *bus, int addr, bool is_c45)
/* If the phy_id is mostly Fs, there is no device there */
if ((phy_id & 0x1fffffff) == 0x1fffffff)
- return NULL;
+ return ERR_PTR(-ENODEV);
return phy_device_create(bus, addr, phy_id, is_c45, &c45_ids);
}
@@ -1123,8 +1123,9 @@ static int genphy_config_advert(struct phy_device *phydev)
*/
int genphy_setup_forced(struct phy_device *phydev)
{
- int ctl = 0;
+ int ctl = phy_read(phydev, MII_BMCR);
+ ctl &= BMCR_LOOPBACK | BMCR_ISOLATE | BMCR_PDOWN;
phydev->pause = 0;
phydev->asym_pause = 0;
diff --git a/drivers/net/ppp/ppp_generic.c b/drivers/net/ppp/ppp_generic.c
index f572b31a2b20..8dedafa1a95d 100644
--- a/drivers/net/ppp/ppp_generic.c
+++ b/drivers/net/ppp/ppp_generic.c
@@ -46,6 +46,7 @@
#include <linux/device.h>
#include <linux/mutex.h>
#include <linux/slab.h>
+#include <linux/file.h>
#include <asm/unaligned.h>
#include <net/slhc_vj.h>
#include <linux/atomic.h>
@@ -183,6 +184,12 @@ struct channel {
#endif /* CONFIG_PPP_MULTILINK */
};
+struct ppp_config {
+ struct file *file;
+ s32 unit;
+ bool ifname_is_set;
+};
+
/*
* SMP locking issues:
* Both the ppp.rlock and ppp.wlock locks protect the ppp.channels
@@ -269,8 +276,7 @@ static void ppp_ccp_peek(struct ppp *ppp, struct sk_buff *skb, int inbound);
static void ppp_ccp_closed(struct ppp *ppp);
static struct compressor *find_compressor(int type);
static void ppp_get_stats(struct ppp *ppp, struct ppp_stats *st);
-static struct ppp *ppp_create_interface(struct net *net, int unit,
- struct file *file, int *retp);
+static int ppp_create_interface(struct net *net, struct file *file, int *unit);
static void init_ppp_file(struct ppp_file *pf, int kind);
static void ppp_destroy_interface(struct ppp *ppp);
static struct ppp *ppp_find_unit(struct ppp_net *pn, int unit);
@@ -282,6 +288,7 @@ static int unit_get(struct idr *p, void *ptr);
static int unit_set(struct idr *p, void *ptr, int n);
static void unit_put(struct idr *p, int n);
static void *unit_find(struct idr *p, int n);
+static void ppp_setup(struct net_device *dev);
static const struct net_device_ops ppp_netdev_ops;
@@ -853,12 +860,12 @@ static int ppp_unattached_ioctl(struct net *net, struct ppp_file *pf,
/* Create a new ppp unit */
if (get_user(unit, p))
break;
- ppp = ppp_create_interface(net, unit, file, &err);
- if (!ppp)
+ err = ppp_create_interface(net, file, &unit);
+ if (err < 0)
break;
- file->private_data = &ppp->file;
+
err = -EFAULT;
- if (put_user(ppp->file.index, p))
+ if (put_user(unit, p))
break;
err = 0;
break;
@@ -960,6 +967,188 @@ static struct pernet_operations ppp_net_ops = {
.size = sizeof(struct ppp_net),
};
+static int ppp_unit_register(struct ppp *ppp, int unit, bool ifname_is_set)
+{
+ struct ppp_net *pn = ppp_pernet(ppp->ppp_net);
+ int ret;
+
+ mutex_lock(&pn->all_ppp_mutex);
+
+ if (unit < 0) {
+ ret = unit_get(&pn->units_idr, ppp);
+ if (ret < 0)
+ goto err;
+ } else {
+ /* Caller asked for a specific unit number. Fail with -EEXIST
+ * if unavailable. For backward compatibility, return -EEXIST
+ * too if idr allocation fails; this makes pppd retry without
+ * requesting a specific unit number.
+ */
+ if (unit_find(&pn->units_idr, unit)) {
+ ret = -EEXIST;
+ goto err;
+ }
+ ret = unit_set(&pn->units_idr, ppp, unit);
+ if (ret < 0) {
+ /* Rewrite error for backward compatibility */
+ ret = -EEXIST;
+ goto err;
+ }
+ }
+ ppp->file.index = ret;
+
+ if (!ifname_is_set)
+ snprintf(ppp->dev->name, IFNAMSIZ, "ppp%i", ppp->file.index);
+
+ ret = register_netdevice(ppp->dev);
+ if (ret < 0)
+ goto err_unit;
+
+ atomic_inc(&ppp_unit_count);
+
+ mutex_unlock(&pn->all_ppp_mutex);
+
+ return 0;
+
+err_unit:
+ unit_put(&pn->units_idr, ppp->file.index);
+err:
+ mutex_unlock(&pn->all_ppp_mutex);
+
+ return ret;
+}
+
+static int ppp_dev_configure(struct net *src_net, struct net_device *dev,
+ const struct ppp_config *conf)
+{
+ struct ppp *ppp = netdev_priv(dev);
+ int indx;
+ int err;
+
+ ppp->dev = dev;
+ ppp->ppp_net = src_net;
+ ppp->mru = PPP_MRU;
+ ppp->owner = conf->file;
+
+ init_ppp_file(&ppp->file, INTERFACE);
+ ppp->file.hdrlen = PPP_HDRLEN - 2; /* don't count proto bytes */
+
+ for (indx = 0; indx < NUM_NP; ++indx)
+ ppp->npmode[indx] = NPMODE_PASS;
+ INIT_LIST_HEAD(&ppp->channels);
+ spin_lock_init(&ppp->rlock);
+ spin_lock_init(&ppp->wlock);
+#ifdef CONFIG_PPP_MULTILINK
+ ppp->minseq = -1;
+ skb_queue_head_init(&ppp->mrq);
+#endif /* CONFIG_PPP_MULTILINK */
+#ifdef CONFIG_PPP_FILTER
+ ppp->pass_filter = NULL;
+ ppp->active_filter = NULL;
+#endif /* CONFIG_PPP_FILTER */
+
+ err = ppp_unit_register(ppp, conf->unit, conf->ifname_is_set);
+ if (err < 0)
+ return err;
+
+ conf->file->private_data = &ppp->file;
+
+ return 0;
+}
+
+static const struct nla_policy ppp_nl_policy[IFLA_PPP_MAX + 1] = {
+ [IFLA_PPP_DEV_FD] = { .type = NLA_S32 },
+};
+
+static int ppp_nl_validate(struct nlattr *tb[], struct nlattr *data[])
+{
+ if (!data)
+ return -EINVAL;
+
+ if (!data[IFLA_PPP_DEV_FD])
+ return -EINVAL;
+ if (nla_get_s32(data[IFLA_PPP_DEV_FD]) < 0)
+ return -EBADF;
+
+ return 0;
+}
+
+static int ppp_nl_newlink(struct net *src_net, struct net_device *dev,
+ struct nlattr *tb[], struct nlattr *data[])
+{
+ struct ppp_config conf = {
+ .unit = -1,
+ .ifname_is_set = true,
+ };
+ struct file *file;
+ int err;
+
+ file = fget(nla_get_s32(data[IFLA_PPP_DEV_FD]));
+ if (!file)
+ return -EBADF;
+
+ /* rtnl_lock is already held here, but ppp_create_interface() locks
+ * ppp_mutex before holding rtnl_lock. Using mutex_trylock() avoids
+ * possible deadlock due to lock order inversion, at the cost of
+ * pushing the problem back to userspace.
+ */
+ if (!mutex_trylock(&ppp_mutex)) {
+ err = -EBUSY;
+ goto out;
+ }
+
+ if (file->f_op != &ppp_device_fops || file->private_data) {
+ err = -EBADF;
+ goto out_unlock;
+ }
+
+ conf.file = file;
+ err = ppp_dev_configure(src_net, dev, &conf);
+
+out_unlock:
+ mutex_unlock(&ppp_mutex);
+out:
+ fput(file);
+
+ return err;
+}
+
+static void ppp_nl_dellink(struct net_device *dev, struct list_head *head)
+{
+ unregister_netdevice_queue(dev, head);
+}
+
+static size_t ppp_nl_get_size(const struct net_device *dev)
+{
+ return 0;
+}
+
+static int ppp_nl_fill_info(struct sk_buff *skb, const struct net_device *dev)
+{
+ return 0;
+}
+
+static struct net *ppp_nl_get_link_net(const struct net_device *dev)
+{
+ struct ppp *ppp = netdev_priv(dev);
+
+ return ppp->ppp_net;
+}
+
+static struct rtnl_link_ops ppp_link_ops __read_mostly = {
+ .kind = "ppp",
+ .maxtype = IFLA_PPP_MAX,
+ .policy = ppp_nl_policy,
+ .priv_size = sizeof(struct ppp),
+ .setup = ppp_setup,
+ .validate = ppp_nl_validate,
+ .newlink = ppp_nl_newlink,
+ .dellink = ppp_nl_dellink,
+ .get_size = ppp_nl_get_size,
+ .fill_info = ppp_nl_fill_info,
+ .get_link_net = ppp_nl_get_link_net,
+};
+
#define PPP_MAJOR 108
/* Called at boot time if ppp is compiled into the kernel,
@@ -988,11 +1177,19 @@ static int __init ppp_init(void)
goto out_chrdev;
}
+ err = rtnl_link_register(&ppp_link_ops);
+ if (err) {
+ pr_err("failed to register rtnetlink PPP handler\n");
+ goto out_class;
+ }
+
/* not a big deal if we fail here :-) */
device_create(ppp_class, NULL, MKDEV(PPP_MAJOR, 0), NULL, "ppp");
return 0;
+out_class:
+ class_destroy(ppp_class);
out_chrdev:
unregister_chrdev(PPP_MAJOR, "ppp");
out_net:
@@ -2732,102 +2929,42 @@ ppp_get_stats(struct ppp *ppp, struct ppp_stats *st)
* or if there is already a unit with the requested number.
* unit == -1 means allocate a new number.
*/
-static struct ppp *ppp_create_interface(struct net *net, int unit,
- struct file *file, int *retp)
+static int ppp_create_interface(struct net *net, struct file *file, int *unit)
{
+ struct ppp_config conf = {
+ .file = file,
+ .unit = *unit,
+ .ifname_is_set = false,
+ };
+ struct net_device *dev;
struct ppp *ppp;
- struct ppp_net *pn;
- struct net_device *dev = NULL;
- int ret = -ENOMEM;
- int i;
+ int err;
dev = alloc_netdev(sizeof(struct ppp), "", NET_NAME_ENUM, ppp_setup);
- if (!dev)
- goto out1;
-
- pn = ppp_pernet(net);
-
- ppp = netdev_priv(dev);
- ppp->dev = dev;
- ppp->mru = PPP_MRU;
- init_ppp_file(&ppp->file, INTERFACE);
- ppp->file.hdrlen = PPP_HDRLEN - 2; /* don't count proto bytes */
- ppp->owner = file;
- for (i = 0; i < NUM_NP; ++i)
- ppp->npmode[i] = NPMODE_PASS;
- INIT_LIST_HEAD(&ppp->channels);
- spin_lock_init(&ppp->rlock);
- spin_lock_init(&ppp->wlock);
-#ifdef CONFIG_PPP_MULTILINK
- ppp->minseq = -1;
- skb_queue_head_init(&ppp->mrq);
-#endif /* CONFIG_PPP_MULTILINK */
-#ifdef CONFIG_PPP_FILTER
- ppp->pass_filter = NULL;
- ppp->active_filter = NULL;
-#endif /* CONFIG_PPP_FILTER */
-
- /*
- * drum roll: don't forget to set
- * the net device is belong to
- */
+ if (!dev) {
+ err = -ENOMEM;
+ goto err;
+ }
dev_net_set(dev, net);
+ dev->rtnl_link_ops = &ppp_link_ops;
rtnl_lock();
- mutex_lock(&pn->all_ppp_mutex);
- if (unit < 0) {
- unit = unit_get(&pn->units_idr, ppp);
- if (unit < 0) {
- ret = unit;
- goto out2;
- }
- } else {
- ret = -EEXIST;
- if (unit_find(&pn->units_idr, unit))
- goto out2; /* unit already exists */
- /*
- * if caller need a specified unit number
- * lets try to satisfy him, otherwise --
- * he should better ask us for new unit number
- *
- * NOTE: yes I know that returning EEXIST it's not
- * fair but at least pppd will ask us to allocate
- * new unit in this case so user is happy :)
- */
- unit = unit_set(&pn->units_idr, ppp, unit);
- if (unit < 0)
- goto out2;
- }
-
- /* Initialize the new ppp unit */
- ppp->file.index = unit;
- sprintf(dev->name, "ppp%d", unit);
-
- ret = register_netdevice(dev);
- if (ret != 0) {
- unit_put(&pn->units_idr, unit);
- netdev_err(ppp->dev, "PPP: couldn't register device %s (%d)\n",
- dev->name, ret);
- goto out2;
- }
-
- ppp->ppp_net = net;
+ err = ppp_dev_configure(net, dev, &conf);
+ if (err < 0)
+ goto err_dev;
+ ppp = netdev_priv(dev);
+ *unit = ppp->file.index;
- atomic_inc(&ppp_unit_count);
- mutex_unlock(&pn->all_ppp_mutex);
rtnl_unlock();
- *retp = 0;
- return ppp;
+ return 0;
-out2:
- mutex_unlock(&pn->all_ppp_mutex);
+err_dev:
rtnl_unlock();
free_netdev(dev);
-out1:
- *retp = ret;
- return NULL;
+err:
+ return err;
}
/*
@@ -3016,6 +3153,7 @@ static void __exit ppp_cleanup(void)
/* should never happen */
if (atomic_read(&ppp_unit_count) || atomic_read(&channel_count))
pr_err("PPP: removing module but units remain!\n");
+ rtnl_link_unregister(&ppp_link_ops);
unregister_chrdev(PPP_MAJOR, "ppp");
device_destroy(ppp_class, MKDEV(PPP_MAJOR, 0));
class_destroy(ppp_class);
@@ -3074,4 +3212,5 @@ EXPORT_SYMBOL(ppp_register_compressor);
EXPORT_SYMBOL(ppp_unregister_compressor);
MODULE_LICENSE("GPL");
MODULE_ALIAS_CHARDEV(PPP_MAJOR, 0);
+MODULE_ALIAS_RTNL_LINK("ppp");
MODULE_ALIAS("devname:ppp");
diff --git a/drivers/net/rionet.c b/drivers/net/rionet.c
index 9cfe6aeac84e..a31f4610b493 100644
--- a/drivers/net/rionet.c
+++ b/drivers/net/rionet.c
@@ -179,11 +179,7 @@ static int rionet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
unsigned long flags;
int add_num = 1;
- local_irq_save(flags);
- if (!spin_trylock(&rnet->tx_lock)) {
- local_irq_restore(flags);
- return NETDEV_TX_LOCKED;
- }
+ spin_lock_irqsave(&rnet->tx_lock, flags);
if (is_multicast_ether_addr(eth->h_dest))
add_num = nets[rnet->mport->id].nact;
diff --git a/drivers/net/slip/slip.c b/drivers/net/slip/slip.c
index a17d86a57734..9ed6d1c1ee45 100644
--- a/drivers/net/slip/slip.c
+++ b/drivers/net/slip/slip.c
@@ -407,7 +407,7 @@ static void sl_encaps(struct slip *sl, unsigned char *icp, int len)
set_bit(TTY_DO_WRITE_WAKEUP, &sl->tty->flags);
actual = sl->tty->ops->write(sl->tty, sl->xbuff, count);
#ifdef SL_CHECK_TRANSMIT
- sl->dev->trans_start = jiffies;
+ netif_trans_update(sl->dev);
#endif
sl->xleft = count - actual;
sl->xhead = sl->xbuff + actual;
diff --git a/drivers/net/tun.c b/drivers/net/tun.c
index 2c9e45f50edb..e16487cc6a9a 100644
--- a/drivers/net/tun.c
+++ b/drivers/net/tun.c
@@ -131,6 +131,17 @@ struct tap_filter {
#define TUN_FLOW_EXPIRE (3 * HZ)
+struct tun_pcpu_stats {
+ u64 rx_packets;
+ u64 rx_bytes;
+ u64 tx_packets;
+ u64 tx_bytes;
+ struct u64_stats_sync syncp;
+ u32 rx_dropped;
+ u32 tx_dropped;
+ u32 rx_frame_errors;
+};
+
/* A tun_file connects an open character device to a tuntap netdevice. It
* also contains all socket related structures (except sock_fprog and tap_filter)
* to serve as one transmit queue for tuntap device. The sock_fprog and
@@ -205,6 +216,7 @@ struct tun_struct {
struct list_head disabled;
void *security;
u32 flow_count;
+ struct tun_pcpu_stats __percpu *pcpu_stats;
};
#ifdef CONFIG_TUN_VNET_CROSS_LE
@@ -568,11 +580,13 @@ static void tun_detach_all(struct net_device *dev)
for (i = 0; i < n; i++) {
tfile = rtnl_dereference(tun->tfiles[i]);
BUG_ON(!tfile);
+ tfile->socket.sk->sk_shutdown = RCV_SHUTDOWN;
tfile->socket.sk->sk_data_ready(tfile->socket.sk);
RCU_INIT_POINTER(tfile->tun, NULL);
--tun->numqueues;
}
list_for_each_entry(tfile, &tun->disabled, next) {
+ tfile->socket.sk->sk_shutdown = RCV_SHUTDOWN;
tfile->socket.sk->sk_data_ready(tfile->socket.sk);
RCU_INIT_POINTER(tfile->tun, NULL);
}
@@ -622,12 +636,14 @@ static int tun_attach(struct tun_struct *tun, struct file *file, bool skip_filte
/* Re-attach the filter to persist device */
if (!skip_filter && (tun->filter_attached == true)) {
- err = __sk_attach_filter(&tun->fprog, tfile->socket.sk,
- lockdep_rtnl_is_held());
+ lock_sock(tfile->socket.sk);
+ err = sk_attach_filter(&tun->fprog, tfile->socket.sk);
+ release_sock(tfile->socket.sk);
if (!err)
goto out;
}
tfile->queue_index = tun->numqueues;
+ tfile->socket.sk->sk_shutdown &= ~RCV_SHUTDOWN;
rcu_assign_pointer(tfile->tun, tun);
rcu_assign_pointer(tun->tfiles[tun->numqueues], tfile);
tun->numqueues++;
@@ -820,7 +836,8 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
if (txq >= numqueues)
goto drop;
- if (numqueues == 1) {
+#ifdef CONFIG_RPS
+ if (numqueues == 1 && static_key_false(&rps_needed)) {
/* Select queue was not called for the skbuff, so we extract the
* RPS hash and save it into the flow_table here.
*/
@@ -835,6 +852,7 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
tun_flow_save_rps_rxhash(e, rxhash);
}
}
+#endif
tun_debug(KERN_INFO, tun, "tun_net_xmit %d\n", skb->len);
@@ -861,7 +879,8 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
goto drop;
if (skb->sk && sk_fullsock(skb->sk)) {
- sock_tx_timestamp(skb->sk, &skb_shinfo(skb)->tx_flags);
+ sock_tx_timestamp(skb->sk, skb->sk->sk_tsflags,
+ &skb_shinfo(skb)->tx_flags);
sw_tx_timestamp(skb);
}
@@ -884,7 +903,7 @@ static netdev_tx_t tun_net_xmit(struct sk_buff *skb, struct net_device *dev)
return NETDEV_TX_OK;
drop:
- dev->stats.tx_dropped++;
+ this_cpu_inc(tun->pcpu_stats->tx_dropped);
skb_tx_error(skb);
kfree_skb(skb);
rcu_read_unlock();
@@ -947,6 +966,43 @@ static void tun_set_headroom(struct net_device *dev, int new_hr)
tun->align = new_hr;
}
+static struct rtnl_link_stats64 *
+tun_net_get_stats64(struct net_device *dev, struct rtnl_link_stats64 *stats)
+{
+ u32 rx_dropped = 0, tx_dropped = 0, rx_frame_errors = 0;
+ struct tun_struct *tun = netdev_priv(dev);
+ struct tun_pcpu_stats *p;
+ int i;
+
+ for_each_possible_cpu(i) {
+ u64 rxpackets, rxbytes, txpackets, txbytes;
+ unsigned int start;
+
+ p = per_cpu_ptr(tun->pcpu_stats, i);
+ do {
+ start = u64_stats_fetch_begin(&p->syncp);
+ rxpackets = p->rx_packets;
+ rxbytes = p->rx_bytes;
+ txpackets = p->tx_packets;
+ txbytes = p->tx_bytes;
+ } while (u64_stats_fetch_retry(&p->syncp, start));
+
+ stats->rx_packets += rxpackets;
+ stats->rx_bytes += rxbytes;
+ stats->tx_packets += txpackets;
+ stats->tx_bytes += txbytes;
+
+ /* u32 counters */
+ rx_dropped += p->rx_dropped;
+ rx_frame_errors += p->rx_frame_errors;
+ tx_dropped += p->tx_dropped;
+ }
+ stats->rx_dropped = rx_dropped;
+ stats->rx_frame_errors = rx_frame_errors;
+ stats->tx_dropped = tx_dropped;
+ return stats;
+}
+
static const struct net_device_ops tun_netdev_ops = {
.ndo_uninit = tun_net_uninit,
.ndo_open = tun_net_open,
@@ -959,6 +1015,7 @@ static const struct net_device_ops tun_netdev_ops = {
.ndo_poll_controller = tun_poll_controller,
#endif
.ndo_set_rx_headroom = tun_set_headroom,
+ .ndo_get_stats64 = tun_net_get_stats64,
};
static const struct net_device_ops tap_netdev_ops = {
@@ -977,6 +1034,7 @@ static const struct net_device_ops tap_netdev_ops = {
#endif
.ndo_features_check = passthru_features_check,
.ndo_set_rx_headroom = tun_set_headroom,
+ .ndo_get_stats64 = tun_net_get_stats64,
};
static void tun_flow_init(struct tun_struct *tun)
@@ -1101,6 +1159,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
size_t total_len = iov_iter_count(from);
size_t len = total_len, align = tun->align, linear;
struct virtio_net_hdr gso = { 0 };
+ struct tun_pcpu_stats *stats;
int good_linear;
int copylen;
bool zerocopy = false;
@@ -1175,7 +1234,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
skb = tun_alloc_skb(tfile, align, copylen, linear, noblock);
if (IS_ERR(skb)) {
if (PTR_ERR(skb) != -EAGAIN)
- tun->dev->stats.rx_dropped++;
+ this_cpu_inc(tun->pcpu_stats->rx_dropped);
return PTR_ERR(skb);
}
@@ -1190,7 +1249,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
}
if (err) {
- tun->dev->stats.rx_dropped++;
+ this_cpu_inc(tun->pcpu_stats->rx_dropped);
kfree_skb(skb);
return -EFAULT;
}
@@ -1198,7 +1257,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
if (gso.flags & VIRTIO_NET_HDR_F_NEEDS_CSUM) {
if (!skb_partial_csum_set(skb, tun16_to_cpu(tun, gso.csum_start),
tun16_to_cpu(tun, gso.csum_offset))) {
- tun->dev->stats.rx_frame_errors++;
+ this_cpu_inc(tun->pcpu_stats->rx_frame_errors);
kfree_skb(skb);
return -EINVAL;
}
@@ -1215,7 +1274,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
pi.proto = htons(ETH_P_IPV6);
break;
default:
- tun->dev->stats.rx_dropped++;
+ this_cpu_inc(tun->pcpu_stats->rx_dropped);
kfree_skb(skb);
return -EINVAL;
}
@@ -1243,7 +1302,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
skb_shinfo(skb)->gso_type = SKB_GSO_UDP;
break;
default:
- tun->dev->stats.rx_frame_errors++;
+ this_cpu_inc(tun->pcpu_stats->rx_frame_errors);
kfree_skb(skb);
return -EINVAL;
}
@@ -1253,7 +1312,7 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
skb_shinfo(skb)->gso_size = tun16_to_cpu(tun, gso.gso_size);
if (skb_shinfo(skb)->gso_size == 0) {
- tun->dev->stats.rx_frame_errors++;
+ this_cpu_inc(tun->pcpu_stats->rx_frame_errors);
kfree_skb(skb);
return -EINVAL;
}
@@ -1276,8 +1335,12 @@ static ssize_t tun_get_user(struct tun_struct *tun, struct tun_file *tfile,
rxhash = skb_get_hash(skb);
netif_rx_ni(skb);
- tun->dev->stats.rx_packets++;
- tun->dev->stats.rx_bytes += len;
+ stats = get_cpu_ptr(tun->pcpu_stats);
+ u64_stats_update_begin(&stats->syncp);
+ stats->rx_packets++;
+ stats->rx_bytes += len;
+ u64_stats_update_end(&stats->syncp);
+ put_cpu_ptr(stats);
tun_flow_update(tun, rxhash, tfile);
return total_len;
@@ -1306,6 +1369,7 @@ static ssize_t tun_put_user(struct tun_struct *tun,
struct iov_iter *iter)
{
struct tun_pi pi = { 0, skb->protocol };
+ struct tun_pcpu_stats *stats;
ssize_t total;
int vlan_offset = 0;
int vlan_hlen = 0;
@@ -1406,8 +1470,13 @@ static ssize_t tun_put_user(struct tun_struct *tun,
skb_copy_datagram_iter(skb, vlan_offset, iter, skb->len - vlan_offset);
done:
- tun->dev->stats.tx_packets++;
- tun->dev->stats.tx_bytes += skb->len + vlan_hlen;
+ /* caller is in process context */
+ stats = get_cpu_ptr(tun->pcpu_stats);
+ u64_stats_update_begin(&stats->syncp);
+ stats->tx_packets++;
+ stats->tx_bytes += skb->len + vlan_hlen;
+ u64_stats_update_end(&stats->syncp);
+ put_cpu_ptr(tun->pcpu_stats);
return total;
}
@@ -1425,9 +1494,6 @@ static ssize_t tun_do_read(struct tun_struct *tun, struct tun_file *tfile,
if (!iov_iter_count(to))
return 0;
- if (tun->dev->reg_state != NETREG_REGISTERED)
- return -EIO;
-
/* Read frames from queue */
skb = __skb_recv_datagram(tfile->socket.sk, noblock ? MSG_DONTWAIT : 0,
&peeked, &off, &err);
@@ -1465,6 +1531,7 @@ static void tun_free_netdev(struct net_device *dev)
struct tun_struct *tun = netdev_priv(dev);
BUG_ON(!(list_empty(&tun->disabled)));
+ free_percpu(tun->pcpu_stats);
tun_flow_uninit(tun);
security_tun_dev_free_security(tun->security);
free_netdev(dev);
@@ -1713,11 +1780,17 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
tun->filter_attached = false;
tun->sndbuf = tfile->socket.sk->sk_sndbuf;
+ tun->pcpu_stats = netdev_alloc_pcpu_stats(struct tun_pcpu_stats);
+ if (!tun->pcpu_stats) {
+ err = -ENOMEM;
+ goto err_free_dev;
+ }
+
spin_lock_init(&tun->lock);
err = security_tun_dev_alloc_security(&tun->security);
if (err < 0)
- goto err_free_dev;
+ goto err_free_stat;
tun_net_init(dev);
tun_flow_init(tun);
@@ -1725,7 +1798,7 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
dev->hw_features = NETIF_F_SG | NETIF_F_FRAGLIST |
TUN_USER_FEATURES | NETIF_F_HW_VLAN_CTAG_TX |
NETIF_F_HW_VLAN_STAG_TX;
- dev->features = dev->hw_features;
+ dev->features = dev->hw_features | NETIF_F_LLTX;
dev->vlan_features = dev->features &
~(NETIF_F_HW_VLAN_CTAG_TX |
NETIF_F_HW_VLAN_STAG_TX);
@@ -1761,6 +1834,8 @@ err_detach:
err_free_flow:
tun_flow_uninit(tun);
security_tun_dev_free_security(tun->security);
+err_free_stat:
+ free_percpu(tun->pcpu_stats);
err_free_dev:
free_netdev(dev);
return err;
@@ -1823,7 +1898,9 @@ static void tun_detach_filter(struct tun_struct *tun, int n)
for (i = 0; i < n; i++) {
tfile = rtnl_dereference(tun->tfiles[i]);
- __sk_detach_filter(tfile->socket.sk, lockdep_rtnl_is_held());
+ lock_sock(tfile->socket.sk);
+ sk_detach_filter(tfile->socket.sk);
+ release_sock(tfile->socket.sk);
}
tun->filter_attached = false;
@@ -1836,8 +1913,9 @@ static int tun_attach_filter(struct tun_struct *tun)
for (i = 0; i < tun->numqueues; i++) {
tfile = rtnl_dereference(tun->tfiles[i]);
- ret = __sk_attach_filter(&tun->fprog, tfile->socket.sk,
- lockdep_rtnl_is_held());
+ lock_sock(tfile->socket.sk);
+ ret = sk_attach_filter(&tun->fprog, tfile->socket.sk);
+ release_sock(tfile->socket.sk);
if (ret) {
tun_detach_filter(tun, i);
return ret;
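The tun conversion above is the standard per-CPU stats pattern: counters are allocated with netdev_alloc_pcpu_stats(), hot-path updates are lockless under a u64_stats_sync, and ndo_get_stats64 sums all CPUs with the fetch_begin/retry loop shown in tun_net_get_stats64(). A minimal sketch of the update side, with example_* names invented for illustration:

struct example_pcpu_stats {
	u64 rx_packets;
	u64 rx_bytes;
	struct u64_stats_sync syncp;	/* guards the u64 counters on 32-bit */
};

/* the percpu pointer would be set up at probe time with
 * netdev_alloc_pcpu_stats(struct example_pcpu_stats) */
static void example_count_rx(struct example_pcpu_stats __percpu *pcpu,
			     unsigned int len)
{
	struct example_pcpu_stats *stats = get_cpu_ptr(pcpu);

	u64_stats_update_begin(&stats->syncp);
	stats->rx_packets++;
	stats->rx_bytes += len;
	u64_stats_update_end(&stats->syncp);
	put_cpu_ptr(pcpu);
}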
diff --git a/drivers/net/usb/asix_common.c b/drivers/net/usb/asix_common.c
index 0c5c22b84da8..7de5ab589e4e 100644
--- a/drivers/net/usb/asix_common.c
+++ b/drivers/net/usb/asix_common.c
@@ -66,7 +66,7 @@ int asix_rx_fixup_internal(struct usbnet *dev, struct sk_buff *skb,
* buffer.
*/
if (rx->remaining && (rx->remaining + sizeof(u32) <= skb->len)) {
- offset = ((rx->remaining + 1) & 0xfffe) + sizeof(u32);
+ offset = ((rx->remaining + 1) & 0xfffe);
rx->header = get_unaligned_le32(skb->data + offset);
offset = 0;
diff --git a/drivers/net/usb/catc.c b/drivers/net/usb/catc.c
index 4e2b26a88b15..d9ca05d3ac8e 100644
--- a/drivers/net/usb/catc.c
+++ b/drivers/net/usb/catc.c
@@ -376,7 +376,7 @@ static int catc_tx_run(struct catc *catc)
catc->tx_idx = !catc->tx_idx;
catc->tx_ptr = 0;
- catc->netdev->trans_start = jiffies;
+ netif_trans_update(catc->netdev);
return status;
}
@@ -389,7 +389,7 @@ static void catc_tx_done(struct urb *urb)
if (status == -ECONNRESET) {
dev_dbg(&urb->dev->dev, "Tx Reset.\n");
urb->status = 0;
- catc->netdev->trans_start = jiffies;
+ netif_trans_update(catc->netdev);
catc->netdev->stats.tx_errors++;
clear_bit(TX_RUNNING, &catc->flags);
netif_wake_queue(catc->netdev);
diff --git a/drivers/net/usb/cdc_ncm.c b/drivers/net/usb/cdc_ncm.c
index 2fb31edab125..53759c315b97 100644
--- a/drivers/net/usb/cdc_ncm.c
+++ b/drivers/net/usb/cdc_ncm.c
@@ -740,12 +740,14 @@ static void cdc_ncm_free(struct cdc_ncm_ctx *ctx)
int cdc_ncm_change_mtu(struct net_device *net, int new_mtu)
{
struct usbnet *dev = netdev_priv(net);
- struct cdc_ncm_ctx *ctx = (struct cdc_ncm_ctx *)dev->data[0];
- int maxmtu = ctx->max_datagram_size - cdc_ncm_eth_hlen(dev);
+ int maxmtu = cdc_ncm_max_dgram_size(dev) - cdc_ncm_eth_hlen(dev);
if (new_mtu <= 0 || new_mtu > maxmtu)
return -EINVAL;
+
net->mtu = new_mtu;
+ cdc_ncm_set_dgram_size(dev, new_mtu + cdc_ncm_eth_hlen(dev));
+
return 0;
}
EXPORT_SYMBOL_GPL(cdc_ncm_change_mtu);
diff --git a/drivers/net/usb/ch9200.c b/drivers/net/usb/ch9200.c
index 5e151e6a3e09..8a40202c0a17 100644
--- a/drivers/net/usb/ch9200.c
+++ b/drivers/net/usb/ch9200.c
@@ -155,12 +155,11 @@ static int control_write(struct usbnet *dev, unsigned char request,
index, size);
if (data) {
- buf = kmalloc(size, GFP_KERNEL);
+ buf = kmemdup(data, size, GFP_KERNEL);
if (!buf) {
err = -ENOMEM;
goto err_out;
}
- memcpy(buf, data, size);
}
err = usb_control_msg(dev->udev,
diff --git a/drivers/net/usb/hso.c b/drivers/net/usb/hso.c
index 111d907e0c11..4b4458616693 100644
--- a/drivers/net/usb/hso.c
+++ b/drivers/net/usb/hso.c
@@ -2029,7 +2029,7 @@ static int put_rxbuf_data(struct urb *urb, struct hso_serial *serial)
tty = tty_port_tty_get(&serial->port);
- if (tty && test_bit(TTY_THROTTLED, &tty->flags)) {
+ if (tty && tty_throttled(tty)) {
tty_kref_put(tty);
return -1;
}
diff --git a/drivers/net/usb/kaweth.c b/drivers/net/usb/kaweth.c
index f64b25c221e8..770212baaf05 100644
--- a/drivers/net/usb/kaweth.c
+++ b/drivers/net/usb/kaweth.c
@@ -938,7 +938,7 @@ static void kaweth_tx_timeout(struct net_device *net)
dev_warn(&net->dev, "%s: Tx timed out. Resetting.\n", net->name);
kaweth->stats.tx_errors++;
- net->trans_start = jiffies;
+ netif_trans_update(net);
usb_unlink_urb(kaweth->tx_urb);
}
diff --git a/drivers/net/usb/lan78xx.c b/drivers/net/usb/lan78xx.c
index f64778ad9753..6a9d474b08b2 100644
--- a/drivers/net/usb/lan78xx.c
+++ b/drivers/net/usb/lan78xx.c
@@ -3045,7 +3045,7 @@ gso_skb:
ret = usb_submit_urb(urb, GFP_ATOMIC);
switch (ret) {
case 0:
- dev->net->trans_start = jiffies;
+ netif_trans_update(dev->net);
lan78xx_queue_skb(&dev->txq, skb, tx_start);
if (skb_queue_len(&dev->txq) >= dev->tx_qlen)
netif_stop_queue(dev->net);
@@ -3729,7 +3729,7 @@ int lan78xx_resume(struct usb_interface *intf)
usb_free_urb(res);
usb_autopm_put_interface_async(dev->intf);
} else {
- dev->net->trans_start = jiffies;
+ netif_trans_update(dev->net);
lan78xx_queue_skb(&dev->txq, skb, tx_start);
}
}
diff --git a/drivers/net/usb/pegasus.c b/drivers/net/usb/pegasus.c
index 82129eef7774..36cd7f016a8d 100644
--- a/drivers/net/usb/pegasus.c
+++ b/drivers/net/usb/pegasus.c
@@ -615,7 +615,7 @@ static void write_bulk_callback(struct urb *urb)
break;
}
- net->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(net); /* prevent tx timeout */
netif_wake_queue(net);
}
diff --git a/drivers/net/usb/r8152.c b/drivers/net/usb/r8152.c
index d1f78c2c97aa..3f9f6ed3eec4 100644
--- a/drivers/net/usb/r8152.c
+++ b/drivers/net/usb/r8152.c
@@ -3366,7 +3366,7 @@ static void r8153_init(struct r8152 *tp)
ocp_write_word(tp, MCU_TYPE_PLA, PLA_LED_FEATURE, ocp_data);
ocp_data = FIFO_EMPTY_1FB | ROK_EXIT_LPM;
- if (tp->version == RTL_VER_04 && tp->udev->speed != USB_SPEED_SUPER)
+ if (tp->version == RTL_VER_04 && tp->udev->speed < USB_SPEED_SUPER)
ocp_data |= LPM_TIMER_500MS;
else
ocp_data |= LPM_TIMER_500US;
@@ -4211,6 +4211,7 @@ static int rtl8152_probe(struct usb_interface *intf,
switch (udev->speed) {
case USB_SPEED_SUPER:
+ case USB_SPEED_SUPER_PLUS:
tp->coalesce = COALESCE_SUPER;
break;
case USB_SPEED_HIGH:
diff --git a/drivers/net/usb/rtl8150.c b/drivers/net/usb/rtl8150.c
index d37b7dce2d40..7c72bfac89d0 100644
--- a/drivers/net/usb/rtl8150.c
+++ b/drivers/net/usb/rtl8150.c
@@ -451,7 +451,7 @@ static void write_bulk_callback(struct urb *urb)
if (status)
dev_info(&urb->dev->dev, "%s: Tx status %d\n",
dev->netdev->name, status);
- dev->netdev->trans_start = jiffies;
+ netif_trans_update(dev->netdev);
netif_wake_queue(dev->netdev);
}
@@ -694,7 +694,7 @@ static netdev_tx_t rtl8150_start_xmit(struct sk_buff *skb,
} else {
netdev->stats.tx_packets++;
netdev->stats.tx_bytes += skb->len;
- netdev->trans_start = jiffies;
+ netif_trans_update(netdev);
}
return NETDEV_TX_OK;
diff --git a/drivers/net/usb/smsc75xx.c b/drivers/net/usb/smsc75xx.c
index c369db99c005..9af9799935db 100644
--- a/drivers/net/usb/smsc75xx.c
+++ b/drivers/net/usb/smsc75xx.c
@@ -99,9 +99,11 @@ static int __must_check __smsc75xx_read_reg(struct usbnet *dev, u32 index,
ret = fn(dev, USB_VENDOR_REQUEST_READ_REGISTER, USB_DIR_IN
| USB_TYPE_VENDOR | USB_RECIP_DEVICE,
0, index, &buf, 4);
- if (unlikely(ret < 0))
+ if (unlikely(ret < 0)) {
netdev_warn(dev->net, "Failed to read reg index 0x%08x: %d\n",
index, ret);
+ return ret;
+ }
le32_to_cpus(&buf);
*data = buf;
diff --git a/drivers/net/usb/smsc95xx.c b/drivers/net/usb/smsc95xx.c
index 2edc2bc6d1b9..d9d2806a47b1 100644
--- a/drivers/net/usb/smsc95xx.c
+++ b/drivers/net/usb/smsc95xx.c
@@ -92,9 +92,11 @@ static int __must_check __smsc95xx_read_reg(struct usbnet *dev, u32 index,
ret = fn(dev, USB_VENDOR_REQUEST_READ_REGISTER, USB_DIR_IN
| USB_TYPE_VENDOR | USB_RECIP_DEVICE,
0, index, &buf, 4);
- if (unlikely(ret < 0))
+ if (unlikely(ret < 0)) {
netdev_warn(dev->net, "Failed to read reg index 0x%08x: %d\n",
index, ret);
+ return ret;
+ }
le32_to_cpus(&buf);
*data = buf;
diff --git a/drivers/net/usb/usbnet.c b/drivers/net/usb/usbnet.c
index 10798128c03f..61ba46404937 100644
--- a/drivers/net/usb/usbnet.c
+++ b/drivers/net/usb/usbnet.c
@@ -356,6 +356,7 @@ void usbnet_update_max_qlen(struct usbnet *dev)
dev->tx_qlen = MAX_QUEUE_MEMORY / dev->hard_mtu;
break;
case USB_SPEED_SUPER:
+ case USB_SPEED_SUPER_PLUS:
/*
* Not take default 5ms qlen for super speed HC to
* save memory, and iperf tests show 2.5ms qlen can
@@ -1415,7 +1416,7 @@ netdev_tx_t usbnet_start_xmit (struct sk_buff *skb,
"tx: submit urb err %d\n", retval);
break;
case 0:
- net->trans_start = jiffies;
+ netif_trans_update(net);
__usbnet_queue_skb(&dev->txq, skb, tx_start);
if (dev->txq.qlen >= TX_QLEN (dev))
netif_stop_queue (net);
@@ -1844,7 +1845,7 @@ int usbnet_resume (struct usb_interface *intf)
usb_free_urb(res);
usb_autopm_put_interface_async(dev->intf);
} else {
- dev->net->trans_start = jiffies;
+ netif_trans_update(dev->net);
__skb_queue_tail(&dev->txq, skb);
}
}
diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 4f30a6ae50d0..f37a6e61d4ad 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -312,10 +312,9 @@ static const struct net_device_ops veth_netdev_ops = {
.ndo_set_rx_headroom = veth_set_rx_headroom,
};
-#define VETH_FEATURES (NETIF_F_SG | NETIF_F_FRAGLIST | NETIF_F_ALL_TSO | \
- NETIF_F_HW_CSUM | NETIF_F_RXCSUM | NETIF_F_HIGHDMA | \
- NETIF_F_GSO_GRE | NETIF_F_GSO_UDP_TUNNEL | \
- NETIF_F_GSO_IPIP | NETIF_F_GSO_SIT | NETIF_F_UFO | \
+#define VETH_FEATURES (NETIF_F_SG | NETIF_F_FRAGLIST | NETIF_F_HW_CSUM | \
+ NETIF_F_RXCSUM | NETIF_F_HIGHDMA | \
+ NETIF_F_GSO_SOFTWARE | NETIF_F_GSO_ENCAP_ALL | \
NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_CTAG_RX | \
NETIF_F_HW_VLAN_STAG_TX | NETIF_F_HW_VLAN_STAG_RX )
diff --git a/drivers/net/vrf.c b/drivers/net/vrf.c
index 8a8f1e58b415..dff08842f26d 100644
--- a/drivers/net/vrf.c
+++ b/drivers/net/vrf.c
@@ -42,12 +42,9 @@
#define DRV_NAME "vrf"
#define DRV_VERSION "1.0"
-#define vrf_master_get_rcu(dev) \
- ((struct net_device *)rcu_dereference(dev->rx_handler_data))
-
struct net_vrf {
- struct rtable *rth;
- struct rt6_info *rt6;
+ struct rtable __rcu *rth;
+ struct rt6_info __rcu *rt6;
u32 tb_id;
};
@@ -60,90 +57,12 @@ struct pcpu_dstats {
struct u64_stats_sync syncp;
};
-/* neighbor handling is done with actual device; do not want
- * to flip skb->dev for those ndisc packets. This really fails
- * for multiple next protocols (e.g., NEXTHDR_HOP). But it is
- * a start.
- */
-#if IS_ENABLED(CONFIG_IPV6)
-static bool check_ipv6_frame(const struct sk_buff *skb)
-{
- const struct ipv6hdr *ipv6h;
- struct ipv6hdr _ipv6h;
- bool rc = true;
-
- ipv6h = skb_header_pointer(skb, 0, sizeof(_ipv6h), &_ipv6h);
- if (!ipv6h)
- goto out;
-
- if (ipv6h->nexthdr == NEXTHDR_ICMP) {
- const struct icmp6hdr *icmph;
- struct icmp6hdr _icmph;
-
- icmph = skb_header_pointer(skb, sizeof(_ipv6h),
- sizeof(_icmph), &_icmph);
- if (!icmph)
- goto out;
-
- switch (icmph->icmp6_type) {
- case NDISC_ROUTER_SOLICITATION:
- case NDISC_ROUTER_ADVERTISEMENT:
- case NDISC_NEIGHBOUR_SOLICITATION:
- case NDISC_NEIGHBOUR_ADVERTISEMENT:
- case NDISC_REDIRECT:
- rc = false;
- break;
- }
- }
-
-out:
- return rc;
-}
-#else
-static bool check_ipv6_frame(const struct sk_buff *skb)
-{
- return false;
-}
-#endif
-
-static bool is_ip_rx_frame(struct sk_buff *skb)
-{
- switch (skb->protocol) {
- case htons(ETH_P_IP):
- return true;
- case htons(ETH_P_IPV6):
- return check_ipv6_frame(skb);
- }
- return false;
-}
-
static void vrf_tx_error(struct net_device *vrf_dev, struct sk_buff *skb)
{
vrf_dev->stats.tx_errors++;
kfree_skb(skb);
}
-/* note: already called with rcu_read_lock */
-static rx_handler_result_t vrf_handle_frame(struct sk_buff **pskb)
-{
- struct sk_buff *skb = *pskb;
-
- if (is_ip_rx_frame(skb)) {
- struct net_device *dev = vrf_master_get_rcu(skb->dev);
- struct pcpu_dstats *dstats = this_cpu_ptr(dev->dstats);
-
- u64_stats_update_begin(&dstats->syncp);
- dstats->rx_pkts++;
- dstats->rx_bytes += skb->len;
- u64_stats_update_end(&dstats->syncp);
-
- skb->dev = dev;
-
- return RX_HANDLER_ANOTHER;
- }
- return RX_HANDLER_PASS;
-}
-
static struct rtnl_link_stats64 *vrf_get_stats64(struct net_device *dev,
struct rtnl_link_stats64 *stats)
{
@@ -354,28 +273,40 @@ static int vrf_output6(struct net *net, struct sock *sk, struct sk_buff *skb)
!(IP6CB(skb)->flags & IP6SKB_REROUTED));
}
+/* holding rtnl */
static void vrf_rt6_release(struct net_vrf *vrf)
{
- dst_release(&vrf->rt6->dst);
- vrf->rt6 = NULL;
+ struct rt6_info *rt6 = rtnl_dereference(vrf->rt6);
+
+ rcu_assign_pointer(vrf->rt6, NULL);
+
+ if (rt6)
+ dst_release(&rt6->dst);
}
static int vrf_rt6_create(struct net_device *dev)
{
struct net_vrf *vrf = netdev_priv(dev);
struct net *net = dev_net(dev);
+ struct fib6_table *rt6i_table;
struct rt6_info *rt6;
int rc = -ENOMEM;
+ rt6i_table = fib6_new_table(net, vrf->tb_id);
+ if (!rt6i_table)
+ goto out;
+
rt6 = ip6_dst_alloc(net, dev,
DST_HOST | DST_NOPOLICY | DST_NOXFRM | DST_NOCACHE);
if (!rt6)
goto out;
- rt6->dst.output = vrf_output6;
- rt6->rt6i_table = fib6_get_table(net, vrf->tb_id);
dst_hold(&rt6->dst);
- vrf->rt6 = rt6;
+
+ rt6->rt6i_table = rt6i_table;
+ rt6->dst.output = vrf_output6;
+ rcu_assign_pointer(vrf->rt6, rt6);
+
rc = 0;
out:
return rc;
@@ -449,26 +380,35 @@ static int vrf_output(struct net *net, struct sock *sk, struct sk_buff *skb)
!(IPCB(skb)->flags & IPSKB_REROUTED));
}
+/* holding rtnl */
static void vrf_rtable_release(struct net_vrf *vrf)
{
- struct dst_entry *dst = (struct dst_entry *)vrf->rth;
+ struct rtable *rth = rtnl_dereference(vrf->rth);
- dst_release(dst);
- vrf->rth = NULL;
+ rcu_assign_pointer(vrf->rth, NULL);
+
+ if (rth)
+ dst_release(&rth->dst);
}
-static struct rtable *vrf_rtable_create(struct net_device *dev)
+static int vrf_rtable_create(struct net_device *dev)
{
struct net_vrf *vrf = netdev_priv(dev);
struct rtable *rth;
+ if (!fib_new_table(dev_net(dev), vrf->tb_id))
+ return -ENOMEM;
+
rth = rt_dst_alloc(dev, 0, RTN_UNICAST, 1, 1, 0);
- if (rth) {
- rth->dst.output = vrf_output;
- rth->rt_table_id = vrf->tb_id;
- }
+ if (!rth)
+ return -ENOMEM;
- return rth;
+ rth->dst.output = vrf_output;
+ rth->rt_table_id = vrf->tb_id;
+
+ rcu_assign_pointer(vrf->rth, rth);
+
+ return 0;
}
/**************************** device handling ********************/
@@ -497,28 +437,14 @@ static int do_vrf_add_slave(struct net_device *dev, struct net_device *port_dev)
{
int ret;
- /* register the packet handler for slave ports */
- ret = netdev_rx_handler_register(port_dev, vrf_handle_frame, dev);
- if (ret) {
- netdev_err(port_dev,
- "Device %s failed to register rx_handler\n",
- port_dev->name);
- goto out_fail;
- }
-
ret = netdev_master_upper_dev_link(port_dev, dev, NULL, NULL);
if (ret < 0)
- goto out_unregister;
+ return ret;
port_dev->priv_flags |= IFF_L3MDEV_SLAVE;
cycle_netdev(port_dev);
return 0;
-
-out_unregister:
- netdev_rx_handler_unregister(port_dev);
-out_fail:
- return ret;
}
static int vrf_add_slave(struct net_device *dev, struct net_device *port_dev)
@@ -535,8 +461,6 @@ static int do_vrf_del_slave(struct net_device *dev, struct net_device *port_dev)
netdev_upper_dev_unlink(port_dev, dev);
port_dev->priv_flags &= ~IFF_L3MDEV_SLAVE;
- netdev_rx_handler_unregister(port_dev);
-
cycle_netdev(port_dev);
return 0;
@@ -572,8 +496,7 @@ static int vrf_dev_init(struct net_device *dev)
goto out_nomem;
/* create the default dst which points back to us */
- vrf->rth = vrf_rtable_create(dev);
- if (!vrf->rth)
+ if (vrf_rtable_create(dev) != 0)
goto out_stats;
if (vrf_rt6_create(dev) != 0)
@@ -616,8 +539,13 @@ static struct rtable *vrf_get_rtable(const struct net_device *dev,
if (!(fl4->flowi4_flags & FLOWI_FLAG_L3MDEV_SRC)) {
struct net_vrf *vrf = netdev_priv(dev);
- rth = vrf->rth;
- dst_hold(&rth->dst);
+ rcu_read_lock();
+
+ rth = rcu_dereference(vrf->rth);
+ if (likely(rth))
+ dst_hold(&rth->dst);
+
+ rcu_read_unlock();
}
return rth;
@@ -639,6 +567,8 @@ static int vrf_get_saddr(struct net_device *dev, struct flowi4 *fl4)
fl4->flowi4_flags |= FLOWI_FLAG_SKIP_NH_OIF;
fl4->flowi4_iif = LOOPBACK_IFINDEX;
+ /* make sure oif is set to VRF device for lookup */
+ fl4->flowi4_oif = dev->ifindex;
fl4->flowi4_tos = tos & IPTOS_RT_MASK;
fl4->flowi4_scope = ((tos & RTO_ONLINK) ?
RT_SCOPE_LINK : RT_SCOPE_UNIVERSE);
@@ -659,19 +589,116 @@ static int vrf_get_saddr(struct net_device *dev, struct flowi4 *fl4)
}
#if IS_ENABLED(CONFIG_IPV6)
+/* neighbor handling is done with actual device; do not want
+ * to flip skb->dev for those ndisc packets. This really fails
+ * for multiple next protocols (e.g., NEXTHDR_HOP). But it is
+ * a start.
+ */
+static bool ipv6_ndisc_frame(const struct sk_buff *skb)
+{
+ const struct ipv6hdr *iph = ipv6_hdr(skb);
+ bool rc = false;
+
+ if (iph->nexthdr == NEXTHDR_ICMP) {
+ const struct icmp6hdr *icmph;
+ struct icmp6hdr _icmph;
+
+ icmph = skb_header_pointer(skb, sizeof(*iph),
+ sizeof(_icmph), &_icmph);
+ if (!icmph)
+ goto out;
+
+ switch (icmph->icmp6_type) {
+ case NDISC_ROUTER_SOLICITATION:
+ case NDISC_ROUTER_ADVERTISEMENT:
+ case NDISC_NEIGHBOUR_SOLICITATION:
+ case NDISC_NEIGHBOUR_ADVERTISEMENT:
+ case NDISC_REDIRECT:
+ rc = true;
+ break;
+ }
+ }
+
+out:
+ return rc;
+}
+
+static struct sk_buff *vrf_ip6_rcv(struct net_device *vrf_dev,
+ struct sk_buff *skb)
+{
+ /* if packet is NDISC keep the ingress interface */
+ if (!ipv6_ndisc_frame(skb)) {
+ skb->dev = vrf_dev;
+ skb->skb_iif = vrf_dev->ifindex;
+
+ skb_push(skb, skb->mac_len);
+ dev_queue_xmit_nit(skb, vrf_dev);
+ skb_pull(skb, skb->mac_len);
+
+ IP6CB(skb)->flags |= IP6SKB_L3SLAVE;
+ }
+
+ return skb;
+}
+
+#else
+static struct sk_buff *vrf_ip6_rcv(struct net_device *vrf_dev,
+ struct sk_buff *skb)
+{
+ return skb;
+}
+#endif
+
+static struct sk_buff *vrf_ip_rcv(struct net_device *vrf_dev,
+ struct sk_buff *skb)
+{
+ skb->dev = vrf_dev;
+ skb->skb_iif = vrf_dev->ifindex;
+
+ skb_push(skb, skb->mac_len);
+ dev_queue_xmit_nit(skb, vrf_dev);
+ skb_pull(skb, skb->mac_len);
+
+ return skb;
+}
+
+/* called with rcu lock held */
+static struct sk_buff *vrf_l3_rcv(struct net_device *vrf_dev,
+ struct sk_buff *skb,
+ u16 proto)
+{
+ switch (proto) {
+ case AF_INET:
+ return vrf_ip_rcv(vrf_dev, skb);
+ case AF_INET6:
+ return vrf_ip6_rcv(vrf_dev, skb);
+ }
+
+ return skb;
+}
+
+#if IS_ENABLED(CONFIG_IPV6)
static struct dst_entry *vrf_get_rt6_dst(const struct net_device *dev,
const struct flowi6 *fl6)
{
- struct rt6_info *rt = NULL;
+ struct dst_entry *dst = NULL;
if (!(fl6->flowi6_flags & FLOWI_FLAG_L3MDEV_SRC)) {
struct net_vrf *vrf = netdev_priv(dev);
+ struct rt6_info *rt;
+
+ rcu_read_lock();
+
+ rt = rcu_dereference(vrf->rt6);
+ if (likely(rt)) {
+ dst = &rt->dst;
+ dst_hold(dst);
+ }
- rt = vrf->rt6;
- dst_hold(&rt->dst);
+ rcu_read_unlock();
}
- return (struct dst_entry *)rt;
+ return dst;
}
#endif
@@ -679,6 +706,7 @@ static const struct l3mdev_ops vrf_l3mdev_ops = {
.l3mdev_fib_table = vrf_fib_table,
.l3mdev_get_rtable = vrf_get_rtable,
.l3mdev_get_saddr = vrf_get_saddr,
+ .l3mdev_l3_rcv = vrf_l3_rcv,
#if IS_ENABLED(CONFIG_IPV6)
.l3mdev_get_rt6_dst = vrf_get_rt6_dst,
#endif
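
For illustration only (this sketch is not part of the patch): the vrf.c changes above drop the per-port rx_handler and convert vrf->rth / vrf->rt6 into __rcu pointers. The writer publishes or clears them with rcu_assign_pointer() while holding RTNL, and readers in vrf_get_rtable()/vrf_get_rt6_dst() take rcu_read_lock(), rcu_dereference() the pointer and dst_hold() it before use. The standalone program below models only that publish/lookup ordering with C11 release/acquire atomics; it is not kernel RCU (there is no deferred free), and every name in it is invented.

/* Hedged sketch of the publish/lookup ordering the vrf hunks rely on,
 * using C11 atomics in place of rcu_assign_pointer()/rcu_dereference().
 */
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct fake_rtable {                    /* stands in for struct rtable */
    int refcnt;
    int table_id;
};

static _Atomic(struct fake_rtable *) vrf_rth;   /* models vrf->rth (__rcu) */

/* writer side: models vrf_rtable_create() publishing under RTNL */
static void publish_rtable(int table_id)
{
    struct fake_rtable *rth = calloc(1, sizeof(*rth));

    rth->refcnt = 1;
    rth->table_id = table_id;
    /* release store ~ rcu_assign_pointer(vrf->rth, rth) */
    atomic_store_explicit(&vrf_rth, rth, memory_order_release);
}

/* reader side: models vrf_get_rtable() taking a reference before use */
static struct fake_rtable *get_rtable(void)
{
    /* acquire load ~ rcu_dereference(vrf->rth) under rcu_read_lock() */
    struct fake_rtable *rth =
        atomic_load_explicit(&vrf_rth, memory_order_acquire);

    if (rth)
        rth->refcnt++;                  /* ~ dst_hold(&rth->dst) */
    return rth;
}

int main(void)
{
    struct fake_rtable *rth;

    publish_rtable(10);
    rth = get_rtable();
    if (rth)
        printf("got table %d, refcnt %d\n", rth->table_id, rth->refcnt);
    free(rth);
    return 0;
}
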
diff --git a/drivers/net/vxlan.c b/drivers/net/vxlan.c
index 1c0fa364323e..8ff30c3bdfce 100644
--- a/drivers/net/vxlan.c
+++ b/drivers/net/vxlan.c
@@ -98,7 +98,6 @@ struct vxlan_fdb {
/* salt for hash table */
static u32 vxlan_salt __read_mostly;
-static struct workqueue_struct *vxlan_wq;
static inline bool vxlan_collect_metadata(struct vxlan_sock *vs)
{
@@ -551,16 +550,15 @@ static struct vxlanhdr *vxlan_gro_remcsum(struct sk_buff *skb,
return vh;
}
-static struct sk_buff **vxlan_gro_receive(struct sk_buff **head,
- struct sk_buff *skb,
- struct udp_offload *uoff)
+static struct sk_buff **vxlan_gro_receive(struct sock *sk,
+ struct sk_buff **head,
+ struct sk_buff *skb)
{
struct sk_buff *p, **pp = NULL;
struct vxlanhdr *vh, *vh2;
unsigned int hlen, off_vx;
int flush = 1;
- struct vxlan_sock *vs = container_of(uoff, struct vxlan_sock,
- udp_offloads);
+ struct vxlan_sock *vs = rcu_dereference_sk_user_data(sk);
__be32 flags;
struct gro_remcsum grc;
@@ -613,11 +611,11 @@ out:
return pp;
}
-static int vxlan_gro_complete(struct sk_buff *skb, int nhoff,
- struct udp_offload *uoff)
+static int vxlan_gro_complete(struct sock *sk, struct sk_buff *skb, int nhoff)
{
- udp_tunnel_gro_complete(skb, nhoff);
-
+ /* Sets 'skb->inner_mac_header' since we are always called with
+ * 'skb->encapsulation' set.
+ */
return eth_gro_complete(skb, nhoff + sizeof(struct vxlanhdr));
}
@@ -629,13 +627,6 @@ static void vxlan_notify_add_rx_port(struct vxlan_sock *vs)
struct net *net = sock_net(sk);
sa_family_t sa_family = vxlan_get_sk_family(vs);
__be16 port = inet_sk(sk)->inet_sport;
- int err;
-
- if (sa_family == AF_INET) {
- err = udp_add_offload(net, &vs->udp_offloads);
- if (err)
- pr_warn("vxlan: udp_add_offload failed with status %d\n", err);
- }
rcu_read_lock();
for_each_netdev_rcu(net, dev) {
@@ -662,9 +653,6 @@ static void vxlan_notify_del_rx_port(struct vxlan_sock *vs)
port);
}
rcu_read_unlock();
-
- if (sa_family == AF_INET)
- udp_del_offload(&vs->udp_offloads);
}
/* Add new entry to forwarding table -- assumes lock held */
@@ -1050,14 +1038,14 @@ static bool vxlan_group_used(struct vxlan_net *vn, struct vxlan_dev *dev)
return false;
}
-static void __vxlan_sock_release(struct vxlan_sock *vs)
+static bool __vxlan_sock_release_prep(struct vxlan_sock *vs)
{
struct vxlan_net *vn;
if (!vs)
- return;
+ return false;
if (!atomic_dec_and_test(&vs->refcnt))
- return;
+ return false;
vn = net_generic(sock_net(vs->sock->sk), vxlan_net_id);
spin_lock(&vn->sock_lock);
@@ -1065,14 +1053,28 @@ static void __vxlan_sock_release(struct vxlan_sock *vs)
vxlan_notify_del_rx_port(vs);
spin_unlock(&vn->sock_lock);
- queue_work(vxlan_wq, &vs->del_work);
+ return true;
}
static void vxlan_sock_release(struct vxlan_dev *vxlan)
{
- __vxlan_sock_release(vxlan->vn4_sock);
+ bool ipv4 = __vxlan_sock_release_prep(vxlan->vn4_sock);
+#if IS_ENABLED(CONFIG_IPV6)
+ bool ipv6 = __vxlan_sock_release_prep(vxlan->vn6_sock);
+#endif
+
+ synchronize_net();
+
+ if (ipv4) {
+ udp_tunnel_sock_release(vxlan->vn4_sock->sock);
+ kfree(vxlan->vn4_sock);
+ }
+
#if IS_ENABLED(CONFIG_IPV6)
- __vxlan_sock_release(vxlan->vn6_sock);
+ if (ipv6) {
+ udp_tunnel_sock_release(vxlan->vn6_sock->sock);
+ kfree(vxlan->vn6_sock);
+ }
#endif
}
@@ -1192,6 +1194,45 @@ out:
unparsed->vx_flags &= ~VXLAN_GBP_USED_BITS;
}
+static bool vxlan_parse_gpe_hdr(struct vxlanhdr *unparsed,
+ __be16 *protocol,
+ struct sk_buff *skb, u32 vxflags)
+{
+ struct vxlanhdr_gpe *gpe = (struct vxlanhdr_gpe *)unparsed;
+
+ /* Need to have Next Protocol set for interfaces in GPE mode. */
+ if (!gpe->np_applied)
+ return false;
+ /* "The initial version is 0. If a receiver does not support the
+ * version indicated it MUST drop the packet."
+ */
+ if (gpe->version != 0)
+ return false;
+ /* "When the O bit is set to 1, the packet is an OAM packet and OAM
+ * processing MUST occur." However, we don't implement OAM
+ * processing, thus drop the packet.
+ */
+ if (gpe->oam_flag)
+ return false;
+
+ switch (gpe->next_protocol) {
+ case VXLAN_GPE_NP_IPV4:
+ *protocol = htons(ETH_P_IP);
+ break;
+ case VXLAN_GPE_NP_IPV6:
+ *protocol = htons(ETH_P_IPV6);
+ break;
+ case VXLAN_GPE_NP_ETHERNET:
+ *protocol = htons(ETH_P_TEB);
+ break;
+ default:
+ return false;
+ }
+
+ unparsed->vx_flags &= ~VXLAN_GPE_USED_BITS;
+ return true;
+}
+
static bool vxlan_set_mac(struct vxlan_dev *vxlan,
struct vxlan_sock *vs,
struct sk_buff *skb)
@@ -1257,11 +1298,13 @@ static int vxlan_rcv(struct sock *sk, struct sk_buff *skb)
struct vxlanhdr unparsed;
struct vxlan_metadata _md;
struct vxlan_metadata *md = &_md;
+ __be16 protocol = htons(ETH_P_TEB);
+ bool raw_proto = false;
void *oiph;
- /* Need Vxlan and inner Ethernet header to be present */
+ /* Need UDP and VXLAN header to be present */
if (!pskb_may_pull(skb, VXLAN_HLEN))
- return 1;
+ goto drop;
unparsed = *vxlan_hdr(skb);
/* VNI flag always required to be set */
@@ -1270,7 +1313,7 @@ static int vxlan_rcv(struct sock *sk, struct sk_buff *skb)
ntohl(vxlan_hdr(skb)->vx_flags),
ntohl(vxlan_hdr(skb)->vx_vni));
/* Return non vxlan pkt */
- return 1;
+ goto drop;
}
unparsed.vx_flags &= ~VXLAN_HF_VNI;
unparsed.vx_vni &= ~VXLAN_VNI_MASK;
@@ -1283,9 +1326,18 @@ static int vxlan_rcv(struct sock *sk, struct sk_buff *skb)
if (!vxlan)
goto drop;
- if (iptunnel_pull_header(skb, VXLAN_HLEN, htons(ETH_P_TEB),
- !net_eq(vxlan->net, dev_net(vxlan->dev))))
- goto drop;
+ /* For backwards compatibility, only allow reserved fields to be
+ * used by VXLAN extensions if explicitly requested.
+ */
+ if (vs->flags & VXLAN_F_GPE) {
+ if (!vxlan_parse_gpe_hdr(&unparsed, &protocol, skb, vs->flags))
+ goto drop;
+ raw_proto = true;
+ }
+
+ if (__iptunnel_pull_header(skb, VXLAN_HLEN, protocol, raw_proto,
+ !net_eq(vxlan->net, dev_net(vxlan->dev))))
+ goto drop;
if (vxlan_collect_metadata(vs)) {
__be32 vni = vxlan_vni(vxlan_hdr(skb)->vx_vni);
@@ -1304,14 +1356,14 @@ static int vxlan_rcv(struct sock *sk, struct sk_buff *skb)
memset(md, 0, sizeof(*md));
}
- /* For backwards compatibility, only allow reserved fields to be
- * used by VXLAN extensions if explicitly requested.
- */
if (vs->flags & VXLAN_F_REMCSUM_RX)
if (!vxlan_remcsum(&unparsed, skb, vs->flags))
goto drop;
if (vs->flags & VXLAN_F_GBP)
vxlan_parse_gbp_hdr(&unparsed, skb, vs->flags, md);
+ /* Note that GBP and GPE can never be active together. This is
+ * ensured in vxlan_dev_configure.
+ */
if (unparsed.vx_flags || unparsed.vx_vni) {
/* If there are any unprocessed flags remaining treat
@@ -1325,8 +1377,14 @@ static int vxlan_rcv(struct sock *sk, struct sk_buff *skb)
goto drop;
}
- if (!vxlan_set_mac(vxlan, vs, skb))
- goto drop;
+ if (!raw_proto) {
+ if (!vxlan_set_mac(vxlan, vs, skb))
+ goto drop;
+ } else {
+ skb_reset_mac_header(skb);
+ skb->dev = vxlan->dev;
+ skb->pkt_type = PACKET_HOST;
+ }
oiph = skb_network_header(skb);
skb_reset_network_header(skb);
@@ -1685,6 +1743,27 @@ static void vxlan_build_gbp_hdr(struct vxlanhdr *vxh, u32 vxflags,
gbp->policy_id = htons(md->gbp & VXLAN_GBP_ID_MASK);
}
+static int vxlan_build_gpe_hdr(struct vxlanhdr *vxh, u32 vxflags,
+ __be16 protocol)
+{
+ struct vxlanhdr_gpe *gpe = (struct vxlanhdr_gpe *)vxh;
+
+ gpe->np_applied = 1;
+
+ switch (protocol) {
+ case htons(ETH_P_IP):
+ gpe->next_protocol = VXLAN_GPE_NP_IPV4;
+ return 0;
+ case htons(ETH_P_IPV6):
+ gpe->next_protocol = VXLAN_GPE_NP_IPV6;
+ return 0;
+ case htons(ETH_P_TEB):
+ gpe->next_protocol = VXLAN_GPE_NP_ETHERNET;
+ return 0;
+ }
+ return -EPFNOSUPPORT;
+}
+
static int vxlan_build_skb(struct sk_buff *skb, struct dst_entry *dst,
int iphdr_len, __be32 vni,
struct vxlan_metadata *md, u32 vxflags,
@@ -1694,6 +1773,7 @@ static int vxlan_build_skb(struct sk_buff *skb, struct dst_entry *dst,
int min_headroom;
int err;
int type = udp_sum ? SKB_GSO_UDP_TUNNEL_CSUM : SKB_GSO_UDP_TUNNEL;
+ __be16 inner_protocol = htons(ETH_P_TEB);
if ((vxflags & VXLAN_F_REMCSUM_TX) &&
skb->ip_summed == CHECKSUM_PARTIAL) {
@@ -1712,18 +1792,16 @@ static int vxlan_build_skb(struct sk_buff *skb, struct dst_entry *dst,
/* Need space for new headers (invalidates iph ptr) */
err = skb_cow_head(skb, min_headroom);
- if (unlikely(err)) {
- kfree_skb(skb);
- return err;
- }
+ if (unlikely(err))
+ goto out_free;
skb = vlan_hwaccel_push_inside(skb);
if (WARN_ON(!skb))
return -ENOMEM;
- skb = iptunnel_handle_offloads(skb, type);
- if (IS_ERR(skb))
- return PTR_ERR(skb);
+ err = iptunnel_handle_offloads(skb, type);
+ if (err)
+ goto out_free;
vxh = (struct vxlanhdr *) __skb_push(skb, sizeof(*vxh));
vxh->vx_flags = VXLAN_HF_VNI;
@@ -1744,9 +1822,19 @@ static int vxlan_build_skb(struct sk_buff *skb, struct dst_entry *dst,
if (vxflags & VXLAN_F_GBP)
vxlan_build_gbp_hdr(vxh, vxflags, md);
+ if (vxflags & VXLAN_F_GPE) {
+ err = vxlan_build_gpe_hdr(vxh, vxflags, skb->protocol);
+ if (err < 0)
+ goto out_free;
+ inner_protocol = skb->protocol;
+ }
- skb_set_inner_protocol(skb, htons(ETH_P_TEB));
+ skb_set_inner_protocol(skb, inner_protocol);
return 0;
+
+out_free:
+ kfree_skb(skb);
+ return err;
}
static struct rtable *vxlan_get_route(struct vxlan_dev *vxlan,
@@ -2106,9 +2194,17 @@ static netdev_tx_t vxlan_xmit(struct sk_buff *skb, struct net_device *dev)
info = skb_tunnel_info(skb);
skb_reset_mac_header(skb);
- eth = eth_hdr(skb);
- if ((vxlan->flags & VXLAN_F_PROXY)) {
+ if (vxlan->flags & VXLAN_F_COLLECT_METADATA) {
+ if (info && info->mode & IP_TUNNEL_INFO_TX)
+ vxlan_xmit_one(skb, dev, NULL, false);
+ else
+ kfree_skb(skb);
+ return NETDEV_TX_OK;
+ }
+
+ if (vxlan->flags & VXLAN_F_PROXY) {
+ eth = eth_hdr(skb);
if (ntohs(eth->h_proto) == ETH_P_ARP)
return arp_reduce(dev, skb);
#if IS_ENABLED(CONFIG_IPV6)
@@ -2123,18 +2219,10 @@ static netdev_tx_t vxlan_xmit(struct sk_buff *skb, struct net_device *dev)
msg->icmph.icmp6_type == NDISC_NEIGHBOUR_SOLICITATION)
return neigh_reduce(dev, skb);
}
- eth = eth_hdr(skb);
#endif
}
- if (vxlan->flags & VXLAN_F_COLLECT_METADATA) {
- if (info && info->mode & IP_TUNNEL_INFO_TX)
- vxlan_xmit_one(skb, dev, NULL, false);
- else
- kfree_skb(skb);
- return NETDEV_TX_OK;
- }
-
+ eth = eth_hdr(skb);
f = vxlan_find_mac(vxlan, eth->h_dest);
did_rsc = false;
@@ -2404,7 +2492,7 @@ static int vxlan_fill_metadata_dst(struct net_device *dev, struct sk_buff *skb)
return 0;
}
-static const struct net_device_ops vxlan_netdev_ops = {
+static const struct net_device_ops vxlan_netdev_ether_ops = {
.ndo_init = vxlan_init,
.ndo_uninit = vxlan_uninit,
.ndo_open = vxlan_open,
@@ -2421,6 +2509,17 @@ static const struct net_device_ops vxlan_netdev_ops = {
.ndo_fill_metadata_dst = vxlan_fill_metadata_dst,
};
+static const struct net_device_ops vxlan_netdev_raw_ops = {
+ .ndo_init = vxlan_init,
+ .ndo_uninit = vxlan_uninit,
+ .ndo_open = vxlan_open,
+ .ndo_stop = vxlan_stop,
+ .ndo_start_xmit = vxlan_xmit,
+ .ndo_get_stats64 = ip_tunnel_get_stats64,
+ .ndo_change_mtu = vxlan_change_mtu,
+ .ndo_fill_metadata_dst = vxlan_fill_metadata_dst,
+};
+
/* Info for udev, that this is a virtual tunnel endpoint */
static struct device_type vxlan_type = {
.name = "vxlan",
@@ -2430,7 +2529,7 @@ static struct device_type vxlan_type = {
* supply the listening VXLAN udp ports. Callers are expected
* to implement the ndo_add_vxlan_port.
*/
-void vxlan_get_rx_port(struct net_device *dev)
+static void vxlan_push_rx_ports(struct net_device *dev)
{
struct vxlan_sock *vs;
struct net *net = dev_net(dev);
@@ -2439,6 +2538,9 @@ void vxlan_get_rx_port(struct net_device *dev)
__be16 port;
unsigned int i;
+ if (!dev->netdev_ops->ndo_add_vxlan_port)
+ return;
+
spin_lock(&vn->sock_lock);
for (i = 0; i < PORT_HASH_SIZE; ++i) {
hlist_for_each_entry_rcu(vs, &vn->sock_list[i], hlist) {
@@ -2450,7 +2552,6 @@ void vxlan_get_rx_port(struct net_device *dev)
}
spin_unlock(&vn->sock_lock);
}
-EXPORT_SYMBOL_GPL(vxlan_get_rx_port);
/* Initialize the device structure. */
static void vxlan_setup(struct net_device *dev)
@@ -2461,7 +2562,6 @@ static void vxlan_setup(struct net_device *dev)
eth_hw_addr_random(dev);
ether_setup(dev);
- dev->netdev_ops = &vxlan_netdev_ops;
dev->destructor = free_netdev;
SET_NETDEV_DEVTYPE(dev, &vxlan_type);
@@ -2476,8 +2576,7 @@ static void vxlan_setup(struct net_device *dev)
dev->hw_features |= NETIF_F_GSO_SOFTWARE;
dev->hw_features |= NETIF_F_HW_VLAN_CTAG_TX | NETIF_F_HW_VLAN_STAG_TX;
netif_keep_dst(dev);
- dev->priv_flags &= ~IFF_TX_SKB_SHARING;
- dev->priv_flags |= IFF_LIVE_ADDR_CHANGE | IFF_NO_QUEUE;
+ dev->priv_flags |= IFF_NO_QUEUE;
INIT_LIST_HEAD(&vxlan->next);
spin_lock_init(&vxlan->hash_lock);
@@ -2496,6 +2595,23 @@ static void vxlan_setup(struct net_device *dev)
INIT_HLIST_HEAD(&vxlan->fdb_head[h]);
}
+static void vxlan_ether_setup(struct net_device *dev)
+{
+ dev->priv_flags &= ~IFF_TX_SKB_SHARING;
+ dev->priv_flags |= IFF_LIVE_ADDR_CHANGE;
+ dev->netdev_ops = &vxlan_netdev_ether_ops;
+}
+
+static void vxlan_raw_setup(struct net_device *dev)
+{
+ dev->header_ops = NULL;
+ dev->type = ARPHRD_NONE;
+ dev->hard_header_len = 0;
+ dev->addr_len = 0;
+ dev->flags = IFF_POINTOPOINT | IFF_NOARP | IFF_MULTICAST;
+ dev->netdev_ops = &vxlan_netdev_raw_ops;
+}
+
static const struct nla_policy vxlan_policy[IFLA_VXLAN_MAX + 1] = {
[IFLA_VXLAN_ID] = { .type = NLA_U32 },
[IFLA_VXLAN_GROUP] = { .len = FIELD_SIZEOF(struct iphdr, daddr) },
@@ -2522,6 +2638,7 @@ static const struct nla_policy vxlan_policy[IFLA_VXLAN_MAX + 1] = {
[IFLA_VXLAN_REMCSUM_TX] = { .type = NLA_U8 },
[IFLA_VXLAN_REMCSUM_RX] = { .type = NLA_U8 },
[IFLA_VXLAN_GBP] = { .type = NLA_FLAG, },
+ [IFLA_VXLAN_GPE] = { .type = NLA_FLAG, },
[IFLA_VXLAN_REMCSUM_NOPARTIAL] = { .type = NLA_FLAG },
};
@@ -2574,13 +2691,6 @@ static const struct ethtool_ops vxlan_ethtool_ops = {
.get_link = ethtool_op_get_link,
};
-static void vxlan_del_work(struct work_struct *work)
-{
- struct vxlan_sock *vs = container_of(work, struct vxlan_sock, del_work);
- udp_tunnel_sock_release(vs->sock);
- kfree_rcu(vs, rcu);
-}
-
static struct socket *vxlan_create_sock(struct net *net, bool ipv6,
__be16 port, u32 flags)
{
@@ -2626,8 +2736,6 @@ static struct vxlan_sock *vxlan_socket_create(struct net *net, bool ipv6,
for (h = 0; h < VNI_HASH_SIZE; ++h)
INIT_HLIST_HEAD(&vs->vni_list[h]);
- INIT_WORK(&vs->del_work, vxlan_del_work);
-
sock = vxlan_create_sock(net, ipv6, port, flags);
if (IS_ERR(sock)) {
pr_info("Cannot bind port %d, err=%ld\n", ntohs(port),
@@ -2640,21 +2748,19 @@ static struct vxlan_sock *vxlan_socket_create(struct net *net, bool ipv6,
atomic_set(&vs->refcnt, 1);
vs->flags = (flags & VXLAN_F_RCV_FLAGS);
- /* Initialize the vxlan udp offloads structure */
- vs->udp_offloads.port = port;
- vs->udp_offloads.callbacks.gro_receive = vxlan_gro_receive;
- vs->udp_offloads.callbacks.gro_complete = vxlan_gro_complete;
-
spin_lock(&vn->sock_lock);
hlist_add_head_rcu(&vs->hlist, vs_head(net, port));
vxlan_notify_add_rx_port(vs);
spin_unlock(&vn->sock_lock);
/* Mark socket as an encapsulation socket. */
+ memset(&tunnel_cfg, 0, sizeof(tunnel_cfg));
tunnel_cfg.sk_user_data = vs;
tunnel_cfg.encap_type = 1;
tunnel_cfg.encap_rcv = vxlan_rcv;
tunnel_cfg.encap_destroy = NULL;
+ tunnel_cfg.gro_receive = vxlan_gro_receive;
+ tunnel_cfg.gro_complete = vxlan_gro_complete;
setup_udp_tunnel_sock(net, sock, &tunnel_cfg);
@@ -2722,6 +2828,21 @@ static int vxlan_dev_configure(struct net *src_net, struct net_device *dev,
__be16 default_port = vxlan->cfg.dst_port;
struct net_device *lowerdev = NULL;
+ if (conf->flags & VXLAN_F_GPE) {
+ if (conf->flags & ~VXLAN_F_ALLOWED_GPE)
+ return -EINVAL;
+ /* For now, allow GPE only together with COLLECT_METADATA.
+ * This can be relaxed later; in such case, the other side
+ * of the PtP link will have to be provided.
+ */
+ if (!(conf->flags & VXLAN_F_COLLECT_METADATA))
+ return -EINVAL;
+
+ vxlan_raw_setup(dev);
+ } else {
+ vxlan_ether_setup(dev);
+ }
+
vxlan->net = src_net;
dst->remote_vni = conf->vni;
@@ -2783,8 +2904,12 @@ static int vxlan_dev_configure(struct net *src_net, struct net_device *dev,
dev->needed_headroom = needed_headroom;
memcpy(&vxlan->cfg, conf, sizeof(*conf));
- if (!vxlan->cfg.dst_port)
- vxlan->cfg.dst_port = default_port;
+ if (!vxlan->cfg.dst_port) {
+ if (conf->flags & VXLAN_F_GPE)
+ vxlan->cfg.dst_port = 4790; /* IANA assigned VXLAN-GPE port */
+ else
+ vxlan->cfg.dst_port = default_port;
+ }
vxlan->flags |= conf->flags;
if (!vxlan->cfg.age_interval)
@@ -2955,6 +3080,9 @@ static int vxlan_newlink(struct net *src_net, struct net_device *dev,
if (data[IFLA_VXLAN_GBP])
conf.flags |= VXLAN_F_GBP;
+ if (data[IFLA_VXLAN_GPE])
+ conf.flags |= VXLAN_F_GPE;
+
if (data[IFLA_VXLAN_REMCSUM_NOPARTIAL])
conf.flags |= VXLAN_F_REMCSUM_NOPARTIAL;
@@ -2971,6 +3099,10 @@ static int vxlan_newlink(struct net *src_net, struct net_device *dev,
case -EEXIST:
pr_info("duplicate VNI %u\n", be32_to_cpu(conf.vni));
break;
+
+ case -EINVAL:
+ pr_info("unsupported combination of extensions\n");
+ break;
}
return err;
@@ -3098,6 +3230,10 @@ static int vxlan_fill_info(struct sk_buff *skb, const struct net_device *dev)
nla_put_flag(skb, IFLA_VXLAN_GBP))
goto nla_put_failure;
+ if (vxlan->flags & VXLAN_F_GPE &&
+ nla_put_flag(skb, IFLA_VXLAN_GPE))
+ goto nla_put_failure;
+
if (vxlan->flags & VXLAN_F_REMCSUM_NOPARTIAL &&
nla_put_flag(skb, IFLA_VXLAN_REMCSUM_NOPARTIAL))
goto nla_put_failure;
@@ -3151,20 +3287,22 @@ static void vxlan_handle_lowerdev_unregister(struct vxlan_net *vn,
unregister_netdevice_many(&list_kill);
}
-static int vxlan_lowerdev_event(struct notifier_block *unused,
- unsigned long event, void *ptr)
+static int vxlan_netdevice_event(struct notifier_block *unused,
+ unsigned long event, void *ptr)
{
struct net_device *dev = netdev_notifier_info_to_dev(ptr);
struct vxlan_net *vn = net_generic(dev_net(dev), vxlan_net_id);
if (event == NETDEV_UNREGISTER)
vxlan_handle_lowerdev_unregister(vn, dev);
+ else if (event == NETDEV_OFFLOAD_PUSH_VXLAN)
+ vxlan_push_rx_ports(dev);
return NOTIFY_DONE;
}
static struct notifier_block vxlan_notifier_block __read_mostly = {
- .notifier_call = vxlan_lowerdev_event,
+ .notifier_call = vxlan_netdevice_event,
};
static __net_init int vxlan_init_net(struct net *net)
@@ -3218,10 +3356,6 @@ static int __init vxlan_init_module(void)
{
int rc;
- vxlan_wq = alloc_workqueue("vxlan", 0, 0);
- if (!vxlan_wq)
- return -ENOMEM;
-
get_random_bytes(&vxlan_salt, sizeof(vxlan_salt));
rc = register_pernet_subsys(&vxlan_net_ops);
@@ -3242,7 +3376,6 @@ out3:
out2:
unregister_pernet_subsys(&vxlan_net_ops);
out1:
- destroy_workqueue(vxlan_wq);
return rc;
}
late_initcall(vxlan_init_module);
@@ -3251,7 +3384,6 @@ static void __exit vxlan_cleanup_module(void)
{
rtnl_link_unregister(&vxlan_link_ops);
unregister_netdevice_notifier(&vxlan_notifier_block);
- destroy_workqueue(vxlan_wq);
unregister_pernet_subsys(&vxlan_net_ops);
/* rcu_barrier() is called by netns */
}
diff --git a/drivers/net/wan/cosa.c b/drivers/net/wan/cosa.c
index 848ea6a399f2..b87fe0a01c69 100644
--- a/drivers/net/wan/cosa.c
+++ b/drivers/net/wan/cosa.c
@@ -739,7 +739,7 @@ static char *cosa_net_setup_rx(struct channel_data *chan, int size)
chan->netdev->stats.rx_dropped++;
return NULL;
}
- chan->netdev->trans_start = jiffies;
+ netif_trans_update(chan->netdev);
return skb_put(chan->rx_skb, size);
}
diff --git a/drivers/net/wan/farsync.c b/drivers/net/wan/farsync.c
index 69b994f3b8c5..3c9cbf908ec7 100644
--- a/drivers/net/wan/farsync.c
+++ b/drivers/net/wan/farsync.c
@@ -831,7 +831,7 @@ fst_tx_dma_complete(struct fst_card_info *card, struct fst_port_info *port,
DMA_OWN | TX_STP | TX_ENP);
dev->stats.tx_packets++;
dev->stats.tx_bytes += len;
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
}
/*
@@ -1389,7 +1389,7 @@ do_bottom_half_tx(struct fst_card_info *card)
DMA_OWN | TX_STP | TX_ENP);
dev->stats.tx_packets++;
dev->stats.tx_bytes += skb->len;
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
} else {
/* Or do it through dma */
memcpy(card->tx_dma_handle_host,
@@ -2258,7 +2258,7 @@ fst_tx_timeout(struct net_device *dev)
card->card_no, port->index);
fst_issue_cmd(port, ABORTTX);
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
netif_wake_queue(dev);
port->start = 0;
}
diff --git a/drivers/net/wan/lmc/lmc_main.c b/drivers/net/wan/lmc/lmc_main.c
index bb33b242ab48..299140c04556 100644
--- a/drivers/net/wan/lmc/lmc_main.c
+++ b/drivers/net/wan/lmc/lmc_main.c
@@ -2105,7 +2105,7 @@ static void lmc_driver_timeout(struct net_device *dev)
sc->lmc_device->stats.tx_errors++;
sc->extra_stats.tx_ProcTimeout++; /* -baz */
- dev->trans_start = jiffies; /* prevent tx timeout */
+ netif_trans_update(dev); /* prevent tx timeout */
bug_out:
diff --git a/drivers/net/wan/sbni.c b/drivers/net/wan/sbni.c
index 8fef8d83436d..d98c7e57137d 100644
--- a/drivers/net/wan/sbni.c
+++ b/drivers/net/wan/sbni.c
@@ -860,9 +860,9 @@ prepare_to_send( struct sk_buff *skb, struct net_device *dev )
outb( inb( dev->base_addr + CSR0 ) | TR_REQ, dev->base_addr + CSR0 );
#ifdef CONFIG_SBNI_MULTILINE
- nl->master->trans_start = jiffies;
+ netif_trans_update(nl->master);
#else
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
#endif
}
@@ -889,10 +889,10 @@ drop_xmit_queue( struct net_device *dev )
nl->state &= ~(FL_WAIT_ACK | FL_NEED_RESEND);
#ifdef CONFIG_SBNI_MULTILINE
netif_start_queue( nl->master );
- nl->master->trans_start = jiffies;
+ netif_trans_update(nl->master);
#else
netif_start_queue( dev );
- dev->trans_start = jiffies;
+ netif_trans_update(dev);
#endif
}
diff --git a/drivers/net/wimax/i2400m/netdev.c b/drivers/net/wimax/i2400m/netdev.c
index a9970f1af976..bb74f4b9a02f 100644
--- a/drivers/net/wimax/i2400m/netdev.c
+++ b/drivers/net/wimax/i2400m/netdev.c
@@ -334,7 +334,7 @@ int i2400m_net_tx(struct i2400m *i2400m, struct net_device *net_dev,
d_fnstart(3, dev, "(i2400m %p net_dev %p skb %p)\n",
i2400m, net_dev, skb);
/* FIXME: check eth hdr, only IPv4 is routed by the device as of now */
- net_dev->trans_start = jiffies;
+ netif_trans_update(net_dev);
i2400m_tx_prep_header(skb);
d_printf(3, dev, "NETTX: skb %p sending %d bytes to radio\n",
skb, skb->len);
diff --git a/drivers/net/wireless/admtek/adm8211.c b/drivers/net/wireless/admtek/adm8211.c
index 15f057ed41ad..70ecd82d674d 100644
--- a/drivers/net/wireless/admtek/adm8211.c
+++ b/drivers/net/wireless/admtek/adm8211.c
@@ -440,7 +440,7 @@ static void adm8211_interrupt_rci(struct ieee80211_hw *dev)
rx_status.rate_idx = rate;
rx_status.freq = adm8211_channels[priv->channel - 1].center_freq;
- rx_status.band = IEEE80211_BAND_2GHZ;
+ rx_status.band = NL80211_BAND_2GHZ;
memcpy(IEEE80211_SKB_RXCB(skb), &rx_status, sizeof(rx_status));
ieee80211_rx_irqsafe(dev, skb);
@@ -1894,7 +1894,7 @@ static int adm8211_probe(struct pci_dev *pdev,
priv->channel = 1;
- dev->wiphy->bands[IEEE80211_BAND_2GHZ] = &priv->band;
+ dev->wiphy->bands[NL80211_BAND_2GHZ] = &priv->band;
err = ieee80211_register_hw(dev);
if (err) {
diff --git a/drivers/net/wireless/ath/ar5523/ar5523.c b/drivers/net/wireless/ath/ar5523/ar5523.c
index 3b343c63aa52..8aded24bcdf4 100644
--- a/drivers/net/wireless/ath/ar5523/ar5523.c
+++ b/drivers/net/wireless/ath/ar5523/ar5523.c
@@ -1471,12 +1471,12 @@ static int ar5523_init_modes(struct ar5523 *ar)
memcpy(ar->channels, ar5523_channels, sizeof(ar5523_channels));
memcpy(ar->rates, ar5523_rates, sizeof(ar5523_rates));
- ar->band.band = IEEE80211_BAND_2GHZ;
+ ar->band.band = NL80211_BAND_2GHZ;
ar->band.channels = ar->channels;
ar->band.n_channels = ARRAY_SIZE(ar5523_channels);
ar->band.bitrates = ar->rates;
ar->band.n_bitrates = ARRAY_SIZE(ar5523_rates);
- ar->hw->wiphy->bands[IEEE80211_BAND_2GHZ] = &ar->band;
+ ar->hw->wiphy->bands[NL80211_BAND_2GHZ] = &ar->band;
return 0;
}
diff --git a/drivers/net/wireless/ath/ath.h b/drivers/net/wireless/ath/ath.h
index 65ef483ebf50..da7a7c8dafb2 100644
--- a/drivers/net/wireless/ath/ath.h
+++ b/drivers/net/wireless/ath/ath.h
@@ -185,7 +185,7 @@ struct ath_common {
bool bt_ant_diversity;
int last_rssi;
- struct ieee80211_supported_band sbands[IEEE80211_NUM_BANDS];
+ struct ieee80211_supported_band sbands[NUM_NL80211_BANDS];
};
static inline const struct ath_ps_ops *ath_ps_ops(struct ath_common *common)
diff --git a/drivers/net/wireless/ath/ath10k/ce.c b/drivers/net/wireless/ath/ath10k/ce.c
index edf3629288bc..9fb8d7472d18 100644
--- a/drivers/net/wireless/ath/ath10k/ce.c
+++ b/drivers/net/wireless/ath/ath10k/ce.c
@@ -411,7 +411,8 @@ int __ath10k_ce_rx_post_buf(struct ath10k_ce_pipe *pipe, void *ctx, u32 paddr)
lockdep_assert_held(&ar_pci->ce_lock);
- if (CE_RING_DELTA(nentries_mask, write_index, sw_index - 1) == 0)
+ if ((pipe->id != 5) &&
+ CE_RING_DELTA(nentries_mask, write_index, sw_index - 1) == 0)
return -ENOSPC;
desc->addr = __cpu_to_le32(paddr);
@@ -425,6 +426,19 @@ int __ath10k_ce_rx_post_buf(struct ath10k_ce_pipe *pipe, void *ctx, u32 paddr)
return 0;
}
+void ath10k_ce_rx_update_write_idx(struct ath10k_ce_pipe *pipe, u32 nentries)
+{
+ struct ath10k *ar = pipe->ar;
+ struct ath10k_ce_ring *dest_ring = pipe->dest_ring;
+ unsigned int nentries_mask = dest_ring->nentries_mask;
+ unsigned int write_index = dest_ring->write_index;
+ u32 ctrl_addr = pipe->ctrl_addr;
+
+ write_index = CE_RING_IDX_ADD(nentries_mask, write_index, nentries);
+ ath10k_ce_dest_ring_write_index_set(ar, ctrl_addr, write_index);
+ dest_ring->write_index = write_index;
+}
+
int ath10k_ce_rx_post_buf(struct ath10k_ce_pipe *pipe, void *ctx, u32 paddr)
{
struct ath10k *ar = pipe->ar;
@@ -444,14 +458,10 @@ int ath10k_ce_rx_post_buf(struct ath10k_ce_pipe *pipe, void *ctx, u32 paddr)
*/
int ath10k_ce_completed_recv_next_nolock(struct ath10k_ce_pipe *ce_state,
void **per_transfer_contextp,
- u32 *bufferp,
- unsigned int *nbytesp,
- unsigned int *transfer_idp,
- unsigned int *flagsp)
+ unsigned int *nbytesp)
{
struct ath10k_ce_ring *dest_ring = ce_state->dest_ring;
unsigned int nentries_mask = dest_ring->nentries_mask;
- struct ath10k *ar = ce_state->ar;
unsigned int sw_index = dest_ring->sw_index;
struct ce_desc *base = dest_ring->base_addr_owner_space;
@@ -476,21 +486,17 @@ int ath10k_ce_completed_recv_next_nolock(struct ath10k_ce_pipe *ce_state,
desc->nbytes = 0;
/* Return data from completed destination descriptor */
- *bufferp = __le32_to_cpu(sdesc.addr);
*nbytesp = nbytes;
- *transfer_idp = MS(__le16_to_cpu(sdesc.flags), CE_DESC_FLAGS_META_DATA);
-
- if (__le16_to_cpu(sdesc.flags) & CE_DESC_FLAGS_BYTE_SWAP)
- *flagsp = CE_RECV_FLAG_SWAPPED;
- else
- *flagsp = 0;
if (per_transfer_contextp)
*per_transfer_contextp =
dest_ring->per_transfer_context[sw_index];
- /* sanity */
- dest_ring->per_transfer_context[sw_index] = NULL;
+ /* Copy engine 5 (HTT Rx) will reuse the same transfer context.
+ * So update transfer context all CEs except CE5.
+ */
+ if (ce_state->id != 5)
+ dest_ring->per_transfer_context[sw_index] = NULL;
/* Update sw_index */
sw_index = CE_RING_IDX_INCR(nentries_mask, sw_index);
@@ -501,10 +507,7 @@ int ath10k_ce_completed_recv_next_nolock(struct ath10k_ce_pipe *ce_state,
int ath10k_ce_completed_recv_next(struct ath10k_ce_pipe *ce_state,
void **per_transfer_contextp,
- u32 *bufferp,
- unsigned int *nbytesp,
- unsigned int *transfer_idp,
- unsigned int *flagsp)
+ unsigned int *nbytesp)
{
struct ath10k *ar = ce_state->ar;
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
@@ -513,8 +516,7 @@ int ath10k_ce_completed_recv_next(struct ath10k_ce_pipe *ce_state,
spin_lock_bh(&ar_pci->ce_lock);
ret = ath10k_ce_completed_recv_next_nolock(ce_state,
per_transfer_contextp,
- bufferp, nbytesp,
- transfer_idp, flagsp);
+ nbytesp);
spin_unlock_bh(&ar_pci->ce_lock);
return ret;
@@ -1048,11 +1050,11 @@ int ath10k_ce_alloc_pipe(struct ath10k *ar, int ce_id,
*
* For the lack of a better place do the check here.
*/
- BUILD_BUG_ON(2*TARGET_NUM_MSDU_DESC >
+ BUILD_BUG_ON(2 * TARGET_NUM_MSDU_DESC >
(CE_HTT_H2T_MSG_SRC_NENTRIES - 1));
- BUILD_BUG_ON(2*TARGET_10X_NUM_MSDU_DESC >
+ BUILD_BUG_ON(2 * TARGET_10X_NUM_MSDU_DESC >
(CE_HTT_H2T_MSG_SRC_NENTRIES - 1));
- BUILD_BUG_ON(2*TARGET_TLV_NUM_MSDU_DESC >
+ BUILD_BUG_ON(2 * TARGET_TLV_NUM_MSDU_DESC >
(CE_HTT_H2T_MSG_SRC_NENTRIES - 1));
ce_state->ar = ar;
diff --git a/drivers/net/wireless/ath/ath10k/ce.h b/drivers/net/wireless/ath/ath10k/ce.h
index 47b734ce7ecf..dfc098606bee 100644
--- a/drivers/net/wireless/ath/ath10k/ce.h
+++ b/drivers/net/wireless/ath/ath10k/ce.h
@@ -22,7 +22,7 @@
/* Maximum number of Copy Engine's supported */
#define CE_COUNT_MAX 12
-#define CE_HTT_H2T_MSG_SRC_NENTRIES 4096
+#define CE_HTT_H2T_MSG_SRC_NENTRIES 8192
/* Descriptor rings must be aligned to this boundary */
#define CE_DESC_RING_ALIGN 8
@@ -166,6 +166,7 @@ int ath10k_ce_num_free_src_entries(struct ath10k_ce_pipe *pipe);
int __ath10k_ce_rx_num_free_bufs(struct ath10k_ce_pipe *pipe);
int __ath10k_ce_rx_post_buf(struct ath10k_ce_pipe *pipe, void *ctx, u32 paddr);
int ath10k_ce_rx_post_buf(struct ath10k_ce_pipe *pipe, void *ctx, u32 paddr);
+void ath10k_ce_rx_update_write_idx(struct ath10k_ce_pipe *pipe, u32 nentries);
/* recv flags */
/* Data is byte-swapped */
@@ -177,10 +178,7 @@ int ath10k_ce_rx_post_buf(struct ath10k_ce_pipe *pipe, void *ctx, u32 paddr);
*/
int ath10k_ce_completed_recv_next(struct ath10k_ce_pipe *ce_state,
void **per_transfer_contextp,
- u32 *bufferp,
- unsigned int *nbytesp,
- unsigned int *transfer_idp,
- unsigned int *flagsp);
+ unsigned int *nbytesp);
/*
* Supply data for the next completed unprocessed send descriptor.
* Pops 1 completed send buffer from Source ring.
@@ -212,10 +210,7 @@ int ath10k_ce_revoke_recv_next(struct ath10k_ce_pipe *ce_state,
int ath10k_ce_completed_recv_next_nolock(struct ath10k_ce_pipe *ce_state,
void **per_transfer_contextp,
- u32 *bufferp,
- unsigned int *nbytesp,
- unsigned int *transfer_idp,
- unsigned int *flagsp);
+ unsigned int *nbytesp);
/*
* Support clean shutdown by allowing the caller to cancel
@@ -413,9 +408,11 @@ static inline u32 ath10k_ce_base_address(struct ath10k *ar, unsigned int ce_id)
/* Ring arithmetic (modulus number of entries in ring, which is a pwr of 2). */
#define CE_RING_DELTA(nentries_mask, fromidx, toidx) \
- (((int)(toidx)-(int)(fromidx)) & (nentries_mask))
+ (((int)(toidx) - (int)(fromidx)) & (nentries_mask))
#define CE_RING_IDX_INCR(nentries_mask, idx) (((idx) + 1) & (nentries_mask))
+#define CE_RING_IDX_ADD(nentries_mask, idx, num) \
+ (((idx) + (num)) & (nentries_mask))
#define CE_WRAPPER_INTERRUPT_SUMMARY_HOST_MSI_LSB \
ar->regs->ce_wrap_intr_sum_host_msi_lsb
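
For illustration only (this sketch is not part of the patch): the ce.h hunk above adds CE_RING_IDX_ADD() alongside the existing ring macros so the receive path can post a batch of buffers and advance the destination ring's write index once, as ath10k_ce_rx_update_write_idx() does in the ce.c hunk. The arithmetic assumes the entry count is a power of two, so wraparound reduces to a mask with nentries - 1. A small sketch, with the macro bodies taken from the hunk above and a made-up main() to show the wraparound:

/* Hedged sketch of the power-of-two ring index arithmetic behind
 * CE_RING_DELTA / CE_RING_IDX_INCR / CE_RING_IDX_ADD.
 */
#include <stdio.h>

#define CE_RING_DELTA(nentries_mask, fromidx, toidx) \
        (((int)(toidx) - (int)(fromidx)) & (nentries_mask))
#define CE_RING_IDX_INCR(nentries_mask, idx) (((idx) + 1) & (nentries_mask))
#define CE_RING_IDX_ADD(nentries_mask, idx, num) \
        (((idx) + (num)) & (nentries_mask))

int main(void)
{
    const unsigned int nentries = 8;    /* must be a power of two */
    const unsigned int mask = nentries - 1;
    unsigned int write_index = 6;

    /* batching three refills wraps 6 -> 1 on an 8-entry ring */
    write_index = CE_RING_IDX_ADD(mask, write_index, 3);
    printf("write index after +3: %u\n", write_index);

    /* distance from old index 6 to new index 1, modulo ring size */
    printf("delta 6 -> %u: %u\n", write_index,
           CE_RING_DELTA(mask, 6, write_index));
    return 0;
}
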
diff --git a/drivers/net/wireless/ath/ath10k/core.c b/drivers/net/wireless/ath/ath10k/core.c
index c84c2d30ef1f..49af62428c88 100644
--- a/drivers/net/wireless/ath/ath10k/core.c
+++ b/drivers/net/wireless/ath/ath10k/core.c
@@ -60,10 +60,9 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.channel_counters_freq_hz = 88000,
.max_probe_resp_desc_thres = 0,
.hw_4addr_pad = ATH10K_HW_4ADDR_PAD_AFTER,
+ .cal_data_len = 2116,
.fw = {
.dir = QCA988X_HW_2_0_FW_DIR,
- .fw = QCA988X_HW_2_0_FW_FILE,
- .otp = QCA988X_HW_2_0_OTP_FILE,
.board = QCA988X_HW_2_0_BOARD_DATA_FILE,
.board_size = QCA988X_BOARD_DATA_SZ,
.board_ext_size = QCA988X_BOARD_EXT_DATA_SZ,
@@ -78,10 +77,9 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.otp_exe_param = 0,
.channel_counters_freq_hz = 88000,
.max_probe_resp_desc_thres = 0,
+ .cal_data_len = 8124,
.fw = {
.dir = QCA6174_HW_2_1_FW_DIR,
- .fw = QCA6174_HW_2_1_FW_FILE,
- .otp = QCA6174_HW_2_1_OTP_FILE,
.board = QCA6174_HW_2_1_BOARD_DATA_FILE,
.board_size = QCA6174_BOARD_DATA_SZ,
.board_ext_size = QCA6174_BOARD_EXT_DATA_SZ,
@@ -97,10 +95,9 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.channel_counters_freq_hz = 88000,
.max_probe_resp_desc_thres = 0,
.hw_4addr_pad = ATH10K_HW_4ADDR_PAD_AFTER,
+ .cal_data_len = 8124,
.fw = {
.dir = QCA6174_HW_2_1_FW_DIR,
- .fw = QCA6174_HW_2_1_FW_FILE,
- .otp = QCA6174_HW_2_1_OTP_FILE,
.board = QCA6174_HW_2_1_BOARD_DATA_FILE,
.board_size = QCA6174_BOARD_DATA_SZ,
.board_ext_size = QCA6174_BOARD_EXT_DATA_SZ,
@@ -116,10 +113,9 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.channel_counters_freq_hz = 88000,
.max_probe_resp_desc_thres = 0,
.hw_4addr_pad = ATH10K_HW_4ADDR_PAD_AFTER,
+ .cal_data_len = 8124,
.fw = {
.dir = QCA6174_HW_3_0_FW_DIR,
- .fw = QCA6174_HW_3_0_FW_FILE,
- .otp = QCA6174_HW_3_0_OTP_FILE,
.board = QCA6174_HW_3_0_BOARD_DATA_FILE,
.board_size = QCA6174_BOARD_DATA_SZ,
.board_ext_size = QCA6174_BOARD_EXT_DATA_SZ,
@@ -135,11 +131,10 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.channel_counters_freq_hz = 88000,
.max_probe_resp_desc_thres = 0,
.hw_4addr_pad = ATH10K_HW_4ADDR_PAD_AFTER,
+ .cal_data_len = 8124,
.fw = {
/* uses same binaries as hw3.0 */
.dir = QCA6174_HW_3_0_FW_DIR,
- .fw = QCA6174_HW_3_0_FW_FILE,
- .otp = QCA6174_HW_3_0_OTP_FILE,
.board = QCA6174_HW_3_0_BOARD_DATA_FILE,
.board_size = QCA6174_BOARD_DATA_SZ,
.board_ext_size = QCA6174_BOARD_EXT_DATA_SZ,
@@ -156,15 +151,12 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.channel_counters_freq_hz = 150000,
.max_probe_resp_desc_thres = 24,
.hw_4addr_pad = ATH10K_HW_4ADDR_PAD_BEFORE,
- .num_msdu_desc = 1424,
- .qcache_active_peers = 50,
.tx_chain_mask = 0xf,
.rx_chain_mask = 0xf,
.max_spatial_stream = 4,
+ .cal_data_len = 12064,
.fw = {
.dir = QCA99X0_HW_2_0_FW_DIR,
- .fw = QCA99X0_HW_2_0_FW_FILE,
- .otp = QCA99X0_HW_2_0_OTP_FILE,
.board = QCA99X0_HW_2_0_BOARD_DATA_FILE,
.board_size = QCA99X0_BOARD_DATA_SZ,
.board_ext_size = QCA99X0_BOARD_EXT_DATA_SZ,
@@ -179,10 +171,9 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.otp_exe_param = 0,
.channel_counters_freq_hz = 88000,
.max_probe_resp_desc_thres = 0,
+ .cal_data_len = 8124,
.fw = {
.dir = QCA9377_HW_1_0_FW_DIR,
- .fw = QCA9377_HW_1_0_FW_FILE,
- .otp = QCA9377_HW_1_0_OTP_FILE,
.board = QCA9377_HW_1_0_BOARD_DATA_FILE,
.board_size = QCA9377_BOARD_DATA_SZ,
.board_ext_size = QCA9377_BOARD_EXT_DATA_SZ,
@@ -197,10 +188,9 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.otp_exe_param = 0,
.channel_counters_freq_hz = 88000,
.max_probe_resp_desc_thres = 0,
+ .cal_data_len = 8124,
.fw = {
.dir = QCA9377_HW_1_0_FW_DIR,
- .fw = QCA9377_HW_1_0_FW_FILE,
- .otp = QCA9377_HW_1_0_OTP_FILE,
.board = QCA9377_HW_1_0_BOARD_DATA_FILE,
.board_size = QCA9377_BOARD_DATA_SZ,
.board_ext_size = QCA9377_BOARD_EXT_DATA_SZ,
@@ -212,20 +202,18 @@ static const struct ath10k_hw_params ath10k_hw_params_list[] = {
.name = "qca4019 hw1.0",
.patch_load_addr = QCA4019_HW_1_0_PATCH_LOAD_ADDR,
.uart_pin = 7,
+ .has_shifted_cc_wraparound = true,
.otp_exe_param = 0x0010000,
.continuous_frag_desc = true,
.channel_counters_freq_hz = 125000,
.max_probe_resp_desc_thres = 24,
.hw_4addr_pad = ATH10K_HW_4ADDR_PAD_BEFORE,
- .num_msdu_desc = 2500,
- .qcache_active_peers = 35,
.tx_chain_mask = 0x3,
.rx_chain_mask = 0x3,
.max_spatial_stream = 2,
+ .cal_data_len = 12064,
.fw = {
.dir = QCA4019_HW_1_0_FW_DIR,
- .fw = QCA4019_HW_1_0_FW_FILE,
- .otp = QCA4019_HW_1_0_OTP_FILE,
.board = QCA4019_HW_1_0_BOARD_DATA_FILE,
.board_size = QCA4019_BOARD_DATA_SZ,
.board_ext_size = QCA4019_BOARD_EXT_DATA_SZ,
@@ -274,7 +262,7 @@ void ath10k_core_get_fw_features_str(struct ath10k *ar,
int i;
for (i = 0; i < ATH10K_FW_FEATURE_COUNT; i++) {
- if (test_bit(i, ar->fw_features)) {
+ if (test_bit(i, ar->normal_mode_fw.fw_file.fw_features)) {
if (len > 0)
len += scnprintf(buf + len, buf_len - len, ",");
@@ -466,18 +454,18 @@ exit:
return ret;
}
-static int ath10k_download_cal_file(struct ath10k *ar)
+static int ath10k_download_cal_file(struct ath10k *ar,
+ const struct firmware *file)
{
int ret;
- if (!ar->cal_file)
+ if (!file)
return -ENOENT;
- if (IS_ERR(ar->cal_file))
- return PTR_ERR(ar->cal_file);
+ if (IS_ERR(file))
+ return PTR_ERR(file);
- ret = ath10k_download_board_data(ar, ar->cal_file->data,
- ar->cal_file->size);
+ ret = ath10k_download_board_data(ar, file->data, file->size);
if (ret) {
ath10k_err(ar, "failed to download cal_file data: %d\n", ret);
return ret;
@@ -488,7 +476,7 @@ static int ath10k_download_cal_file(struct ath10k *ar)
return 0;
}
-static int ath10k_download_cal_dt(struct ath10k *ar)
+static int ath10k_download_cal_dt(struct ath10k *ar, const char *dt_name)
{
struct device_node *node;
int data_len;
@@ -502,13 +490,12 @@ static int ath10k_download_cal_dt(struct ath10k *ar)
*/
return -ENOENT;
- if (!of_get_property(node, "qcom,ath10k-calibration-data",
- &data_len)) {
+ if (!of_get_property(node, dt_name, &data_len)) {
/* The calibration data node is optional */
return -ENOENT;
}
- if (data_len != QCA988X_CAL_DATA_LEN) {
+ if (data_len != ar->hw_params.cal_data_len) {
ath10k_warn(ar, "invalid calibration data length in DT: %d\n",
data_len);
ret = -EMSGSIZE;
@@ -521,8 +508,7 @@ static int ath10k_download_cal_dt(struct ath10k *ar)
goto out;
}
- ret = of_property_read_u8_array(node, "qcom,ath10k-calibration-data",
- data, data_len);
+ ret = of_property_read_u8_array(node, dt_name, data, data_len);
if (ret) {
ath10k_warn(ar, "failed to read calibration data from DT: %d\n",
ret);
@@ -553,7 +539,8 @@ static int ath10k_core_get_board_id_from_otp(struct ath10k *ar)
address = ar->hw_params.patch_load_addr;
- if (!ar->otp_data || !ar->otp_len) {
+ if (!ar->normal_mode_fw.fw_file.otp_data ||
+ !ar->normal_mode_fw.fw_file.otp_len) {
ath10k_warn(ar,
"failed to retrieve board id because of invalid otp\n");
return -ENODATA;
@@ -561,9 +548,11 @@ static int ath10k_core_get_board_id_from_otp(struct ath10k *ar)
ath10k_dbg(ar, ATH10K_DBG_BOOT,
"boot upload otp to 0x%x len %zd for board id\n",
- address, ar->otp_len);
+ address, ar->normal_mode_fw.fw_file.otp_len);
- ret = ath10k_bmi_fast_download(ar, address, ar->otp_data, ar->otp_len);
+ ret = ath10k_bmi_fast_download(ar, address,
+ ar->normal_mode_fw.fw_file.otp_data,
+ ar->normal_mode_fw.fw_file.otp_len);
if (ret) {
ath10k_err(ar, "could not write otp for board id check: %d\n",
ret);
@@ -601,7 +590,9 @@ static int ath10k_download_and_run_otp(struct ath10k *ar)
u32 bmi_otp_exe_param = ar->hw_params.otp_exe_param;
int ret;
- ret = ath10k_download_board_data(ar, ar->board_data, ar->board_len);
+ ret = ath10k_download_board_data(ar,
+ ar->running_fw->board_data,
+ ar->running_fw->board_len);
if (ret) {
ath10k_err(ar, "failed to download board data: %d\n", ret);
return ret;
@@ -609,16 +600,20 @@ static int ath10k_download_and_run_otp(struct ath10k *ar)
/* OTP is optional */
- if (!ar->otp_data || !ar->otp_len) {
+ if (!ar->running_fw->fw_file.otp_data ||
+ !ar->running_fw->fw_file.otp_len) {
ath10k_warn(ar, "Not running otp, calibration will be incorrect (otp-data %p otp_len %zd)!\n",
- ar->otp_data, ar->otp_len);
+ ar->running_fw->fw_file.otp_data,
+ ar->running_fw->fw_file.otp_len);
return 0;
}
ath10k_dbg(ar, ATH10K_DBG_BOOT, "boot upload otp to 0x%x len %zd\n",
- address, ar->otp_len);
+ address, ar->running_fw->fw_file.otp_len);
- ret = ath10k_bmi_fast_download(ar, address, ar->otp_data, ar->otp_len);
+ ret = ath10k_bmi_fast_download(ar, address,
+ ar->running_fw->fw_file.otp_data,
+ ar->running_fw->fw_file.otp_len);
if (ret) {
ath10k_err(ar, "could not write otp (%d)\n", ret);
return ret;
@@ -633,7 +628,7 @@ static int ath10k_download_and_run_otp(struct ath10k *ar)
ath10k_dbg(ar, ATH10K_DBG_BOOT, "boot otp execute result %d\n", result);
if (!(skip_otp || test_bit(ATH10K_FW_FEATURE_IGNORE_OTP_RESULT,
- ar->fw_features)) &&
+ ar->running_fw->fw_file.fw_features)) &&
result != 0) {
ath10k_err(ar, "otp calibration failed: %d", result);
return -EINVAL;
@@ -642,46 +637,32 @@ static int ath10k_download_and_run_otp(struct ath10k *ar)
return 0;
}
-static int ath10k_download_fw(struct ath10k *ar, enum ath10k_firmware_mode mode)
+static int ath10k_download_fw(struct ath10k *ar)
{
u32 address, data_len;
- const char *mode_name;
const void *data;
int ret;
address = ar->hw_params.patch_load_addr;
- switch (mode) {
- case ATH10K_FIRMWARE_MODE_NORMAL:
- data = ar->firmware_data;
- data_len = ar->firmware_len;
- mode_name = "normal";
- ret = ath10k_swap_code_seg_configure(ar,
- ATH10K_SWAP_CODE_SEG_BIN_TYPE_FW);
- if (ret) {
- ath10k_err(ar, "failed to configure fw code swap: %d\n",
- ret);
- return ret;
- }
- break;
- case ATH10K_FIRMWARE_MODE_UTF:
- data = ar->testmode.utf_firmware_data;
- data_len = ar->testmode.utf_firmware_len;
- mode_name = "utf";
- break;
- default:
- ath10k_err(ar, "unknown firmware mode: %d\n", mode);
- return -EINVAL;
+ data = ar->running_fw->fw_file.firmware_data;
+ data_len = ar->running_fw->fw_file.firmware_len;
+
+ ret = ath10k_swap_code_seg_configure(ar);
+ if (ret) {
+ ath10k_err(ar, "failed to configure fw code swap: %d\n",
+ ret);
+ return ret;
}
ath10k_dbg(ar, ATH10K_DBG_BOOT,
- "boot uploading firmware image %p len %d mode %s\n",
- data, data_len, mode_name);
+ "boot uploading firmware image %p len %d\n",
+ data, data_len);
ret = ath10k_bmi_fast_download(ar, address, data, data_len);
if (ret) {
- ath10k_err(ar, "failed to download %s firmware: %d\n",
- mode_name, ret);
+ ath10k_err(ar, "failed to download firmware: %d\n",
+ ret);
return ret;
}
@@ -690,42 +671,50 @@ static int ath10k_download_fw(struct ath10k *ar, enum ath10k_firmware_mode mode)
static void ath10k_core_free_board_files(struct ath10k *ar)
{
- if (!IS_ERR(ar->board))
- release_firmware(ar->board);
+ if (!IS_ERR(ar->normal_mode_fw.board))
+ release_firmware(ar->normal_mode_fw.board);
- ar->board = NULL;
- ar->board_data = NULL;
- ar->board_len = 0;
+ ar->normal_mode_fw.board = NULL;
+ ar->normal_mode_fw.board_data = NULL;
+ ar->normal_mode_fw.board_len = 0;
}
static void ath10k_core_free_firmware_files(struct ath10k *ar)
{
- if (!IS_ERR(ar->otp))
- release_firmware(ar->otp);
-
- if (!IS_ERR(ar->firmware))
- release_firmware(ar->firmware);
+ if (!IS_ERR(ar->normal_mode_fw.fw_file.firmware))
+ release_firmware(ar->normal_mode_fw.fw_file.firmware);
if (!IS_ERR(ar->cal_file))
release_firmware(ar->cal_file);
+ if (!IS_ERR(ar->pre_cal_file))
+ release_firmware(ar->pre_cal_file);
+
ath10k_swap_code_seg_release(ar);
- ar->otp = NULL;
- ar->otp_data = NULL;
- ar->otp_len = 0;
+ ar->normal_mode_fw.fw_file.otp_data = NULL;
+ ar->normal_mode_fw.fw_file.otp_len = 0;
- ar->firmware = NULL;
- ar->firmware_data = NULL;
- ar->firmware_len = 0;
+ ar->normal_mode_fw.fw_file.firmware = NULL;
+ ar->normal_mode_fw.fw_file.firmware_data = NULL;
+ ar->normal_mode_fw.fw_file.firmware_len = 0;
ar->cal_file = NULL;
+ ar->pre_cal_file = NULL;
}
static int ath10k_fetch_cal_file(struct ath10k *ar)
{
char filename[100];
+ /* pre-cal-<bus>-<id>.bin */
+ scnprintf(filename, sizeof(filename), "pre-cal-%s-%s.bin",
+ ath10k_bus_str(ar->hif.bus), dev_name(ar->dev));
+
+ ar->pre_cal_file = ath10k_fetch_fw_file(ar, ATH10K_FW_DIR, filename);
+ if (!IS_ERR(ar->pre_cal_file))
+ goto success;
+
/* cal-<bus>-<id>.bin */
scnprintf(filename, sizeof(filename), "cal-%s-%s.bin",
ath10k_bus_str(ar->hif.bus), dev_name(ar->dev));
@@ -734,7 +723,7 @@ static int ath10k_fetch_cal_file(struct ath10k *ar)
if (IS_ERR(ar->cal_file))
/* calibration file is optional, don't print any warnings */
return PTR_ERR(ar->cal_file);
-
+success:
ath10k_dbg(ar, ATH10K_DBG_BOOT, "found calibration file %s/%s\n",
ATH10K_FW_DIR, filename);
@@ -748,14 +737,14 @@ static int ath10k_core_fetch_board_data_api_1(struct ath10k *ar)
return -EINVAL;
}
- ar->board = ath10k_fetch_fw_file(ar,
- ar->hw_params.fw.dir,
- ar->hw_params.fw.board);
- if (IS_ERR(ar->board))
- return PTR_ERR(ar->board);
+ ar->normal_mode_fw.board = ath10k_fetch_fw_file(ar,
+ ar->hw_params.fw.dir,
+ ar->hw_params.fw.board);
+ if (IS_ERR(ar->normal_mode_fw.board))
+ return PTR_ERR(ar->normal_mode_fw.board);
- ar->board_data = ar->board->data;
- ar->board_len = ar->board->size;
+ ar->normal_mode_fw.board_data = ar->normal_mode_fw.board->data;
+ ar->normal_mode_fw.board_len = ar->normal_mode_fw.board->size;
return 0;
}
@@ -815,8 +804,8 @@ static int ath10k_core_parse_bd_ie_board(struct ath10k *ar,
"boot found board data for '%s'",
boardname);
- ar->board_data = board_ie_data;
- ar->board_len = board_ie_len;
+ ar->normal_mode_fw.board_data = board_ie_data;
+ ar->normal_mode_fw.board_len = board_ie_len;
ret = 0;
goto out;
@@ -849,12 +838,14 @@ static int ath10k_core_fetch_board_data_api_n(struct ath10k *ar,
const u8 *data;
int ret, ie_id;
- ar->board = ath10k_fetch_fw_file(ar, ar->hw_params.fw.dir, filename);
- if (IS_ERR(ar->board))
- return PTR_ERR(ar->board);
+ ar->normal_mode_fw.board = ath10k_fetch_fw_file(ar,
+ ar->hw_params.fw.dir,
+ filename);
+ if (IS_ERR(ar->normal_mode_fw.board))
+ return PTR_ERR(ar->normal_mode_fw.board);
- data = ar->board->data;
- len = ar->board->size;
+ data = ar->normal_mode_fw.board->data;
+ len = ar->normal_mode_fw.board->size;
/* magic has extra null byte padded */
magic_len = strlen(ATH10K_BOARD_MAGIC) + 1;
@@ -921,7 +912,7 @@ static int ath10k_core_fetch_board_data_api_n(struct ath10k *ar,
}
out:
- if (!ar->board_data || !ar->board_len) {
+ if (!ar->normal_mode_fw.board_data || !ar->normal_mode_fw.board_len) {
ath10k_err(ar,
"failed to fetch board data for %s from %s/%s\n",
boardname, ar->hw_params.fw.dir, filename);
@@ -989,51 +980,8 @@ success:
return 0;
}
-static int ath10k_core_fetch_firmware_api_1(struct ath10k *ar)
-{
- int ret = 0;
-
- if (ar->hw_params.fw.fw == NULL) {
- ath10k_err(ar, "firmware file not defined\n");
- return -EINVAL;
- }
-
- ar->firmware = ath10k_fetch_fw_file(ar,
- ar->hw_params.fw.dir,
- ar->hw_params.fw.fw);
- if (IS_ERR(ar->firmware)) {
- ret = PTR_ERR(ar->firmware);
- ath10k_err(ar, "could not fetch firmware (%d)\n", ret);
- goto err;
- }
-
- ar->firmware_data = ar->firmware->data;
- ar->firmware_len = ar->firmware->size;
-
- /* OTP may be undefined. If so, don't fetch it at all */
- if (ar->hw_params.fw.otp == NULL)
- return 0;
-
- ar->otp = ath10k_fetch_fw_file(ar,
- ar->hw_params.fw.dir,
- ar->hw_params.fw.otp);
- if (IS_ERR(ar->otp)) {
- ret = PTR_ERR(ar->otp);
- ath10k_err(ar, "could not fetch otp (%d)\n", ret);
- goto err;
- }
-
- ar->otp_data = ar->otp->data;
- ar->otp_len = ar->otp->size;
-
- return 0;
-
-err:
- ath10k_core_free_firmware_files(ar);
- return ret;
-}
-
-static int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name)
+int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name,
+ struct ath10k_fw_file *fw_file)
{
size_t magic_len, len, ie_len;
int ie_id, i, index, bit, ret;
@@ -1042,15 +990,17 @@ static int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name)
__le32 *timestamp, *version;
/* first fetch the firmware file (firmware-*.bin) */
- ar->firmware = ath10k_fetch_fw_file(ar, ar->hw_params.fw.dir, name);
- if (IS_ERR(ar->firmware)) {
+ fw_file->firmware = ath10k_fetch_fw_file(ar, ar->hw_params.fw.dir,
+ name);
+ if (IS_ERR(fw_file->firmware)) {
ath10k_err(ar, "could not fetch firmware file '%s/%s': %ld\n",
- ar->hw_params.fw.dir, name, PTR_ERR(ar->firmware));
- return PTR_ERR(ar->firmware);
+ ar->hw_params.fw.dir, name,
+ PTR_ERR(fw_file->firmware));
+ return PTR_ERR(fw_file->firmware);
}
- data = ar->firmware->data;
- len = ar->firmware->size;
+ data = fw_file->firmware->data;
+ len = fw_file->firmware->size;
/* magic also includes the null byte, check that as well */
magic_len = strlen(ATH10K_FIRMWARE_MAGIC) + 1;
@@ -1093,15 +1043,15 @@ static int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name)
switch (ie_id) {
case ATH10K_FW_IE_FW_VERSION:
- if (ie_len > sizeof(ar->hw->wiphy->fw_version) - 1)
+ if (ie_len > sizeof(fw_file->fw_version) - 1)
break;
- memcpy(ar->hw->wiphy->fw_version, data, ie_len);
- ar->hw->wiphy->fw_version[ie_len] = '\0';
+ memcpy(fw_file->fw_version, data, ie_len);
+ fw_file->fw_version[ie_len] = '\0';
ath10k_dbg(ar, ATH10K_DBG_BOOT,
"found fw version %s\n",
- ar->hw->wiphy->fw_version);
+ fw_file->fw_version);
break;
case ATH10K_FW_IE_TIMESTAMP:
if (ie_len != sizeof(u32))
@@ -1128,21 +1078,21 @@ static int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name)
ath10k_dbg(ar, ATH10K_DBG_BOOT,
"Enabling feature bit: %i\n",
i);
- __set_bit(i, ar->fw_features);
+ __set_bit(i, fw_file->fw_features);
}
}
ath10k_dbg_dump(ar, ATH10K_DBG_BOOT, "features", "",
- ar->fw_features,
- sizeof(ar->fw_features));
+ ar->running_fw->fw_file.fw_features,
+ sizeof(fw_file->fw_features));
break;
case ATH10K_FW_IE_FW_IMAGE:
ath10k_dbg(ar, ATH10K_DBG_BOOT,
"found fw image ie (%zd B)\n",
ie_len);
- ar->firmware_data = data;
- ar->firmware_len = ie_len;
+ fw_file->firmware_data = data;
+ fw_file->firmware_len = ie_len;
break;
case ATH10K_FW_IE_OTP_IMAGE:
@@ -1150,8 +1100,8 @@ static int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name)
"found otp image ie (%zd B)\n",
ie_len);
- ar->otp_data = data;
- ar->otp_len = ie_len;
+ fw_file->otp_data = data;
+ fw_file->otp_len = ie_len;
break;
case ATH10K_FW_IE_WMI_OP_VERSION:
@@ -1160,10 +1110,10 @@ static int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name)
version = (__le32 *)data;
- ar->wmi.op_version = le32_to_cpup(version);
+ fw_file->wmi_op_version = le32_to_cpup(version);
ath10k_dbg(ar, ATH10K_DBG_BOOT, "found fw ie wmi op version %d\n",
- ar->wmi.op_version);
+ fw_file->wmi_op_version);
break;
case ATH10K_FW_IE_HTT_OP_VERSION:
if (ie_len != sizeof(u32))
@@ -1171,17 +1121,17 @@ static int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name)
version = (__le32 *)data;
- ar->htt.op_version = le32_to_cpup(version);
+ fw_file->htt_op_version = le32_to_cpup(version);
ath10k_dbg(ar, ATH10K_DBG_BOOT, "found fw ie htt op version %d\n",
- ar->htt.op_version);
+ fw_file->htt_op_version);
break;
case ATH10K_FW_IE_FW_CODE_SWAP_IMAGE:
ath10k_dbg(ar, ATH10K_DBG_BOOT,
"found fw code swap image ie (%zd B)\n",
ie_len);
- ar->swap.firmware_codeswap_data = data;
- ar->swap.firmware_codeswap_len = ie_len;
+ fw_file->codeswap_data = data;
+ fw_file->codeswap_len = ie_len;
break;
default:
ath10k_warn(ar, "Unknown FW IE: %u\n",
@@ -1196,7 +1146,8 @@ static int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name)
data += ie_len;
}
- if (!ar->firmware_data || !ar->firmware_len) {
+ if (!fw_file->firmware_data ||
+ !fw_file->firmware_len) {
ath10k_warn(ar, "No ATH10K_FW_IE_FW_IMAGE found from '%s/%s', skipping\n",
ar->hw_params.fw.dir, name);
ret = -ENOMEDIUM;
@@ -1220,40 +1171,95 @@ static int ath10k_core_fetch_firmware_files(struct ath10k *ar)
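+ /* Try the newest supported firmware API first and fall back to older
+ * API versions one at a time; fail if none of them can be fetched.
+ */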
ar->fw_api = 5;
ath10k_dbg(ar, ATH10K_DBG_BOOT, "trying fw api %d\n", ar->fw_api);
- ret = ath10k_core_fetch_firmware_api_n(ar, ATH10K_FW_API5_FILE);
+ ret = ath10k_core_fetch_firmware_api_n(ar, ATH10K_FW_API5_FILE,
+ &ar->normal_mode_fw.fw_file);
if (ret == 0)
goto success;
ar->fw_api = 4;
ath10k_dbg(ar, ATH10K_DBG_BOOT, "trying fw api %d\n", ar->fw_api);
- ret = ath10k_core_fetch_firmware_api_n(ar, ATH10K_FW_API4_FILE);
+ ret = ath10k_core_fetch_firmware_api_n(ar, ATH10K_FW_API4_FILE,
+ &ar->normal_mode_fw.fw_file);
if (ret == 0)
goto success;
ar->fw_api = 3;
ath10k_dbg(ar, ATH10K_DBG_BOOT, "trying fw api %d\n", ar->fw_api);
- ret = ath10k_core_fetch_firmware_api_n(ar, ATH10K_FW_API3_FILE);
+ ret = ath10k_core_fetch_firmware_api_n(ar, ATH10K_FW_API3_FILE,
+ &ar->normal_mode_fw.fw_file);
if (ret == 0)
goto success;
ar->fw_api = 2;
ath10k_dbg(ar, ATH10K_DBG_BOOT, "trying fw api %d\n", ar->fw_api);
- ret = ath10k_core_fetch_firmware_api_n(ar, ATH10K_FW_API2_FILE);
- if (ret == 0)
+ ret = ath10k_core_fetch_firmware_api_n(ar, ATH10K_FW_API2_FILE,
+ &ar->normal_mode_fw.fw_file);
+ if (ret)
+ return ret;
+
+success:
+ ath10k_dbg(ar, ATH10K_DBG_BOOT, "using fw api %d\n", ar->fw_api);
+
+ return 0;
+}
+
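+/* Try to download pre-calibration data from the pre-cal file first and
+ * fall back to the "qcom,ath10k-pre-calibration-data" DT property.
+ */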
+static int ath10k_core_pre_cal_download(struct ath10k *ar)
+{
+ int ret;
+
+ ret = ath10k_download_cal_file(ar, ar->pre_cal_file);
+ if (ret == 0) {
+ ar->cal_mode = ATH10K_PRE_CAL_MODE_FILE;
goto success;
+ }
- ar->fw_api = 1;
- ath10k_dbg(ar, ATH10K_DBG_BOOT, "trying fw api %d\n", ar->fw_api);
+ ath10k_dbg(ar, ATH10K_DBG_BOOT,
+ "boot did not find a pre calibration file, try DT next: %d\n",
+ ret);
- ret = ath10k_core_fetch_firmware_api_1(ar);
- if (ret)
+ ret = ath10k_download_cal_dt(ar, "qcom,ath10k-pre-calibration-data");
+ if (ret) {
+ ath10k_dbg(ar, ATH10K_DBG_BOOT,
+ "unable to load pre cal data from DT: %d\n", ret);
return ret;
+ }
+ ar->cal_mode = ATH10K_PRE_CAL_MODE_DT;
success:
- ath10k_dbg(ar, ATH10K_DBG_BOOT, "using fw api %d\n", ar->fw_api);
+ ath10k_dbg(ar, ATH10K_DBG_BOOT, "boot using calibration mode %s\n",
+ ath10k_cal_mode_str(ar->cal_mode));
+
+ return 0;
+}
+
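+/* Pre-calibration configuration: download the pre-cal data, read the
+ * board id from OTP and then download and run the OTP image.
+ */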
+static int ath10k_core_pre_cal_config(struct ath10k *ar)
+{
+ int ret;
+
+ ret = ath10k_core_pre_cal_download(ar);
+ if (ret) {
+ ath10k_dbg(ar, ATH10K_DBG_BOOT,
+ "failed to load pre cal data: %d\n", ret);
+ return ret;
+ }
+
+ ret = ath10k_core_get_board_id_from_otp(ar);
+ if (ret) {
+ ath10k_err(ar, "failed to get board id: %d\n", ret);
+ return ret;
+ }
+
+ ret = ath10k_download_and_run_otp(ar);
+ if (ret) {
+ ath10k_err(ar, "failed to run otp: %d\n", ret);
+ return ret;
+ }
+
+ ath10k_dbg(ar, ATH10K_DBG_BOOT,
+ "pre cal configuration done successfully\n");
return 0;
}
@@ -1262,7 +1268,15 @@ static int ath10k_download_cal_data(struct ath10k *ar)
{
int ret;
- ret = ath10k_download_cal_file(ar);
+ ret = ath10k_core_pre_cal_config(ar);
+ if (ret == 0)
+ return 0;
+
+ ath10k_dbg(ar, ATH10K_DBG_BOOT,
+ "pre cal download procedure failed, try cal file: %d\n",
+ ret);
+
+ ret = ath10k_download_cal_file(ar, ar->cal_file);
if (ret == 0) {
ar->cal_mode = ATH10K_CAL_MODE_FILE;
goto done;
@@ -1272,7 +1286,7 @@ static int ath10k_download_cal_data(struct ath10k *ar)
"boot did not find a calibration file, try DT next: %d\n",
ret);
- ret = ath10k_download_cal_dt(ar);
+ ret = ath10k_download_cal_dt(ar, "qcom,ath10k-calibration-data");
if (ret == 0) {
ar->cal_mode = ATH10K_CAL_MODE_DT;
goto done;
@@ -1383,6 +1397,7 @@ static void ath10k_core_restart(struct work_struct *work)
complete_all(&ar->install_key_done);
complete_all(&ar->vdev_setup_done);
complete_all(&ar->thermal.wmi_sync);
+ complete_all(&ar->bss_survey_done);
wake_up(&ar->htt.empty_tx_wq);
wake_up(&ar->wmi.tx_credits_wq);
wake_up(&ar->peer_mapping_wq);
@@ -1420,15 +1435,17 @@ static void ath10k_core_restart(struct work_struct *work)
static int ath10k_core_init_firmware_features(struct ath10k *ar)
{
- if (test_bit(ATH10K_FW_FEATURE_WMI_10_2, ar->fw_features) &&
- !test_bit(ATH10K_FW_FEATURE_WMI_10X, ar->fw_features)) {
+ struct ath10k_fw_file *fw_file = &ar->normal_mode_fw.fw_file;
+
+ if (test_bit(ATH10K_FW_FEATURE_WMI_10_2, fw_file->fw_features) &&
+ !test_bit(ATH10K_FW_FEATURE_WMI_10X, fw_file->fw_features)) {
ath10k_err(ar, "feature bits corrupted: 10.2 feature requires 10.x feature to be set as well");
return -EINVAL;
}
- if (ar->wmi.op_version >= ATH10K_FW_WMI_OP_VERSION_MAX) {
+ if (fw_file->wmi_op_version >= ATH10K_FW_WMI_OP_VERSION_MAX) {
ath10k_err(ar, "unsupported WMI OP version (max %d): %d\n",
- ATH10K_FW_WMI_OP_VERSION_MAX, ar->wmi.op_version);
+ ATH10K_FW_WMI_OP_VERSION_MAX, fw_file->wmi_op_version);
return -EINVAL;
}
@@ -1440,7 +1457,7 @@ static int ath10k_core_init_firmware_features(struct ath10k *ar)
break;
case ATH10K_CRYPT_MODE_SW:
if (!test_bit(ATH10K_FW_FEATURE_RAW_MODE_SUPPORT,
- ar->fw_features)) {
+ fw_file->fw_features)) {
ath10k_err(ar, "cryptmode > 0 requires raw mode support from firmware");
return -EINVAL;
}
@@ -1459,7 +1476,7 @@ static int ath10k_core_init_firmware_features(struct ath10k *ar)
if (rawmode) {
if (!test_bit(ATH10K_FW_FEATURE_RAW_MODE_SUPPORT,
- ar->fw_features)) {
+ fw_file->fw_features)) {
ath10k_err(ar, "rawmode = 1 requires support from firmware");
return -EINVAL;
}
@@ -1484,19 +1501,19 @@ static int ath10k_core_init_firmware_features(struct ath10k *ar)
/* Backwards compatibility for firmwares without
* ATH10K_FW_IE_WMI_OP_VERSION.
*/
- if (ar->wmi.op_version == ATH10K_FW_WMI_OP_VERSION_UNSET) {
- if (test_bit(ATH10K_FW_FEATURE_WMI_10X, ar->fw_features)) {
+ if (fw_file->wmi_op_version == ATH10K_FW_WMI_OP_VERSION_UNSET) {
+ if (test_bit(ATH10K_FW_FEATURE_WMI_10X, fw_file->fw_features)) {
if (test_bit(ATH10K_FW_FEATURE_WMI_10_2,
- ar->fw_features))
- ar->wmi.op_version = ATH10K_FW_WMI_OP_VERSION_10_2;
+ fw_file->fw_features))
+ fw_file->wmi_op_version = ATH10K_FW_WMI_OP_VERSION_10_2;
else
- ar->wmi.op_version = ATH10K_FW_WMI_OP_VERSION_10_1;
+ fw_file->wmi_op_version = ATH10K_FW_WMI_OP_VERSION_10_1;
} else {
- ar->wmi.op_version = ATH10K_FW_WMI_OP_VERSION_MAIN;
+ fw_file->wmi_op_version = ATH10K_FW_WMI_OP_VERSION_MAIN;
}
}
- switch (ar->wmi.op_version) {
+ switch (fw_file->wmi_op_version) {
case ATH10K_FW_WMI_OP_VERSION_MAIN:
ar->max_num_peers = TARGET_NUM_PEERS;
ar->max_num_stations = TARGET_NUM_STATIONS;
@@ -1509,7 +1526,7 @@ static int ath10k_core_init_firmware_features(struct ath10k *ar)
case ATH10K_FW_WMI_OP_VERSION_10_1:
case ATH10K_FW_WMI_OP_VERSION_10_2:
case ATH10K_FW_WMI_OP_VERSION_10_2_4:
- if (test_bit(WMI_SERVICE_PEER_STATS, ar->wmi.svc_map)) {
+ if (ath10k_peer_stats_enabled(ar)) {
ar->max_num_peers = TARGET_10X_TX_STATS_NUM_PEERS;
ar->max_num_stations = TARGET_10X_TX_STATS_NUM_STATIONS;
} else {
@@ -1538,9 +1555,15 @@ static int ath10k_core_init_firmware_features(struct ath10k *ar)
ar->num_active_peers = TARGET_10_4_ACTIVE_PEERS;
ar->max_num_vdevs = TARGET_10_4_NUM_VDEVS;
ar->num_tids = TARGET_10_4_TGT_NUM_TIDS;
- ar->htt.max_num_pending_tx = ar->hw_params.num_msdu_desc;
- ar->fw_stats_req_mask = WMI_STAT_PEER;
+ ar->fw_stats_req_mask = WMI_10_4_STAT_PEER |
+ WMI_10_4_STAT_PEER_EXTD;
ar->max_spatial_stream = ar->hw_params.max_spatial_stream;
+
+ if (test_bit(ATH10K_FW_FEATURE_PEER_FLOW_CONTROL,
+ fw_file->fw_features))
+ ar->htt.max_num_pending_tx = TARGET_10_4_NUM_MSDU_DESC_PFC;
+ else
+ ar->htt.max_num_pending_tx = TARGET_10_4_NUM_MSDU_DESC;
break;
case ATH10K_FW_WMI_OP_VERSION_UNSET:
case ATH10K_FW_WMI_OP_VERSION_MAX:
@@ -1551,18 +1574,18 @@ static int ath10k_core_init_firmware_features(struct ath10k *ar)
/* Backwards compatibility for firmwares without
* ATH10K_FW_IE_HTT_OP_VERSION.
*/
- if (ar->htt.op_version == ATH10K_FW_HTT_OP_VERSION_UNSET) {
- switch (ar->wmi.op_version) {
+ if (fw_file->htt_op_version == ATH10K_FW_HTT_OP_VERSION_UNSET) {
+ switch (fw_file->wmi_op_version) {
case ATH10K_FW_WMI_OP_VERSION_MAIN:
- ar->htt.op_version = ATH10K_FW_HTT_OP_VERSION_MAIN;
+ fw_file->htt_op_version = ATH10K_FW_HTT_OP_VERSION_MAIN;
break;
case ATH10K_FW_WMI_OP_VERSION_10_1:
case ATH10K_FW_WMI_OP_VERSION_10_2:
case ATH10K_FW_WMI_OP_VERSION_10_2_4:
- ar->htt.op_version = ATH10K_FW_HTT_OP_VERSION_10_1;
+ fw_file->htt_op_version = ATH10K_FW_HTT_OP_VERSION_10_1;
break;
case ATH10K_FW_WMI_OP_VERSION_TLV:
- ar->htt.op_version = ATH10K_FW_HTT_OP_VERSION_TLV;
+ fw_file->htt_op_version = ATH10K_FW_HTT_OP_VERSION_TLV;
break;
case ATH10K_FW_WMI_OP_VERSION_10_4:
case ATH10K_FW_WMI_OP_VERSION_UNSET:
@@ -1575,14 +1598,18 @@ static int ath10k_core_init_firmware_features(struct ath10k *ar)
return 0;
}
-int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode)
+int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode,
+ const struct ath10k_fw_components *fw)
{
int status;
+ u32 val;
lockdep_assert_held(&ar->conf_mutex);
clear_bit(ATH10K_FLAG_CRASH_FLUSH, &ar->dev_flags);
+ ar->running_fw = fw;
+
ath10k_bmi_start(ar);
if (ath10k_init_configure_target(ar)) {
@@ -1601,7 +1628,7 @@ int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode)
* to set the clock source once the target is initialized.
*/
if (test_bit(ATH10K_FW_FEATURE_SUPPORTS_SKIP_CLOCK_INIT,
- ar->fw_features)) {
+ ar->running_fw->fw_file.fw_features)) {
status = ath10k_bmi_write32(ar, hi_skip_clock_init, 1);
if (status) {
ath10k_err(ar, "could not write to skip_clock_init: %d\n",
@@ -1610,7 +1637,7 @@ int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode)
}
}
- status = ath10k_download_fw(ar, mode);
+ status = ath10k_download_fw(ar);
if (status)
goto err;
@@ -1698,6 +1725,23 @@ int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode)
ath10k_dbg(ar, ATH10K_DBG_BOOT, "firmware %s booted\n",
ar->hw->wiphy->fw_version);
+ if (test_bit(WMI_SERVICE_EXT_RES_CFG_SUPPORT, ar->wmi.svc_map)) {
+ val = 0;
+ if (ath10k_peer_stats_enabled(ar))
+ val = WMI_10_4_PEER_STATS;
+
+ if (test_bit(WMI_SERVICE_BSS_CHANNEL_INFO_64, ar->wmi.svc_map))
+ val |= WMI_10_4_BSS_CHANNEL_INFO_64;
+
+ status = ath10k_mac_ext_resource_config(ar, val);
+ if (status) {
+ ath10k_err(ar,
+ "failed to send ext resource cfg command : %d\n",
+ status);
+ goto err_hif_stop;
+ }
+ }
+
status = ath10k_wmi_cmd_init(ar);
if (status) {
ath10k_err(ar, "could not send WMI init command (%d)\n",
@@ -1723,6 +1767,10 @@ int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode)
goto err_hif_stop;
}
+ ar->free_vdev_map = (1LL << ar->max_num_vdevs) - 1;
+
+ INIT_LIST_HEAD(&ar->arvifs);
+
/* we don't care about HTT in UTF mode */
if (mode == ATH10K_FIRMWARE_MODE_NORMAL) {
status = ath10k_htt_setup(&ar->htt);
@@ -1736,10 +1784,6 @@ int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode)
if (status)
goto err_hif_stop;
- ar->free_vdev_map = (1LL << ar->max_num_vdevs) - 1;
-
- INIT_LIST_HEAD(&ar->arvifs);
-
return 0;
err_hif_stop:
@@ -1832,13 +1876,27 @@ static int ath10k_core_probe_fw(struct ath10k *ar)
goto err_power_down;
}
+ BUILD_BUG_ON(sizeof(ar->hw->wiphy->fw_version) !=
+ sizeof(ar->normal_mode_fw.fw_file.fw_version));
+ memcpy(ar->hw->wiphy->fw_version, ar->normal_mode_fw.fw_file.fw_version,
+ sizeof(ar->hw->wiphy->fw_version));
+
ath10k_debug_print_hwfw_info(ar);
+ ret = ath10k_core_pre_cal_download(ar);
+ if (ret) {
+ /* pre calibration data download is not necessary
+ * for all the chipsets. Ignore failures and continue.
+ */
+ ath10k_dbg(ar, ATH10K_DBG_BOOT,
+ "could not load pre cal data: %d\n", ret);
+ }
+
ret = ath10k_core_get_board_id_from_otp(ar);
if (ret && ret != -EOPNOTSUPP) {
ath10k_err(ar, "failed to get board id from otp: %d\n",
ret);
- return ret;
+ goto err_free_firmware_files;
}
ret = ath10k_core_fetch_board_file(ar);
@@ -1865,7 +1923,8 @@ static int ath10k_core_probe_fw(struct ath10k *ar)
mutex_lock(&ar->conf_mutex);
- ret = ath10k_core_start(ar, ATH10K_FIRMWARE_MODE_NORMAL);
+ ret = ath10k_core_start(ar, ATH10K_FIRMWARE_MODE_NORMAL,
+ &ar->normal_mode_fw);
if (ret) {
ath10k_err(ar, "could not init core (%d)\n", ret);
goto err_unlock;
@@ -2035,6 +2094,7 @@ struct ath10k *ath10k_core_create(size_t priv_size, struct device *dev,
init_completion(&ar->install_key_done);
init_completion(&ar->vdev_setup_done);
init_completion(&ar->thermal.wmi_sync);
+ init_completion(&ar->bss_survey_done);
INIT_DELAYED_WORK(&ar->scan.timeout, ath10k_scan_timeout_work);
@@ -2048,7 +2108,9 @@ struct ath10k *ath10k_core_create(size_t priv_size, struct device *dev,
mutex_init(&ar->conf_mutex);
spin_lock_init(&ar->data_lock);
+ spin_lock_init(&ar->txqs_lock);
+ INIT_LIST_HEAD(&ar->txqs);
INIT_LIST_HEAD(&ar->peers);
init_waitqueue_head(&ar->peer_mapping_wq);
init_waitqueue_head(&ar->htt.empty_tx_wq);
diff --git a/drivers/net/wireless/ath/ath10k/core.h b/drivers/net/wireless/ath/ath10k/core.h
index a62b62a62266..1852e0ee3fa1 100644
--- a/drivers/net/wireless/ath/ath10k/core.h
+++ b/drivers/net/wireless/ath/ath10k/core.h
@@ -44,8 +44,8 @@
#define ATH10K_SCAN_ID 0
#define WMI_READY_TIMEOUT (5 * HZ)
-#define ATH10K_FLUSH_TIMEOUT_HZ (5*HZ)
-#define ATH10K_CONNECTION_LOSS_HZ (3*HZ)
+#define ATH10K_FLUSH_TIMEOUT_HZ (5 * HZ)
+#define ATH10K_CONNECTION_LOSS_HZ (3 * HZ)
#define ATH10K_NUM_CHANS 39
/* Antenna noise floor */
@@ -98,6 +98,7 @@ struct ath10k_skb_cb {
u8 eid;
u16 msdu_id;
struct ieee80211_vif *vif;
+ struct ieee80211_txq *txq;
} __packed;
struct ath10k_skb_rxcb {
@@ -138,7 +139,6 @@ struct ath10k_mem_chunk {
};
struct ath10k_wmi {
- enum ath10k_fw_wmi_op_version op_version;
enum ath10k_htc_ep_id eid;
struct completion service_ready;
struct completion unified_ready;
@@ -297,6 +297,9 @@ struct ath10k_dfs_stats {
struct ath10k_peer {
struct list_head list;
+ struct ieee80211_vif *vif;
+ struct ieee80211_sta *sta;
+
int vdev_id;
u8 addr[ETH_ALEN];
DECLARE_BITMAP(peer_ids, ATH10K_MAX_NUM_PEER_IDS);
@@ -305,6 +308,12 @@ struct ath10k_peer {
struct ieee80211_key_conf *keys[WMI_MAX_KEY_INDEX + 1];
};
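+/* Per-ieee80211_txq driver state used when scheduling tx queues towards
+ * firmware in push/pull mode.
+ */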
+struct ath10k_txq {
+ struct list_head list;
+ unsigned long num_fw_queued;
+ unsigned long num_push_allowed;
+};
+
struct ath10k_sta {
struct ath10k_vif *arvif;
@@ -313,6 +322,7 @@ struct ath10k_sta {
u32 bw;
u32 nss;
u32 smps;
+ u16 peer_id;
struct work_struct update_wk;
@@ -323,7 +333,7 @@ struct ath10k_sta {
#endif
};
-#define ATH10K_VDEV_SETUP_TIMEOUT_HZ (5*HZ)
+#define ATH10K_VDEV_SETUP_TIMEOUT_HZ (5 * HZ)
enum ath10k_beacon_state {
ATH10K_BEACON_SCHEDULED = 0,
@@ -335,6 +345,7 @@ struct ath10k_vif {
struct list_head list;
u32 vdev_id;
+ u16 peer_id;
enum wmi_vdev_type vdev_type;
enum wmi_vdev_subtype vdev_subtype;
u32 beacon_interval;
@@ -549,12 +560,17 @@ enum ath10k_dev_flags {
/* Bluetooth coexistence enabled */
ATH10K_FLAG_BTCOEX,
+
+ /* Per Station statistics service */
+ ATH10K_FLAG_PEER_STATS,
};
enum ath10k_cal_mode {
ATH10K_CAL_MODE_FILE,
ATH10K_CAL_MODE_OTP,
ATH10K_CAL_MODE_DT,
+ ATH10K_PRE_CAL_MODE_FILE,
+ ATH10K_PRE_CAL_MODE_DT,
};
enum ath10k_crypt_mode {
@@ -573,6 +589,10 @@ static inline const char *ath10k_cal_mode_str(enum ath10k_cal_mode mode)
return "otp";
case ATH10K_CAL_MODE_DT:
return "dt";
+ case ATH10K_PRE_CAL_MODE_FILE:
+ return "pre-cal-file";
+ case ATH10K_PRE_CAL_MODE_DT:
+ return "pre-cal-dt";
}
return "unknown";
@@ -606,6 +626,34 @@ enum ath10k_tx_pause_reason {
ATH10K_TX_PAUSE_MAX,
};
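+/* State of a single firmware file: the image itself plus the metadata
+ * parsed from its IEs (version string, feature bits, WMI/HTT op versions,
+ * OTP and code swap segments).
+ */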
+struct ath10k_fw_file {
+ const struct firmware *firmware;
+
+ char fw_version[ETHTOOL_FWVERS_LEN];
+
+ DECLARE_BITMAP(fw_features, ATH10K_FW_FEATURE_COUNT);
+
+ enum ath10k_fw_wmi_op_version wmi_op_version;
+ enum ath10k_fw_htt_op_version htt_op_version;
+
+ const void *firmware_data;
+ size_t firmware_len;
+
+ const void *otp_data;
+ size_t otp_len;
+
+ const void *codeswap_data;
+ size_t codeswap_len;
+};
+
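+/* A complete set of images needed to boot the device: the board file plus
+ * the firmware file (normal or UTF mode).
+ */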
+struct ath10k_fw_components {
+ const struct firmware *board;
+ const void *board_data;
+ size_t board_len;
+
+ struct ath10k_fw_file fw_file;
+};
+
struct ath10k {
struct ath_common ath_common;
struct ieee80211_hw *hw;
@@ -631,8 +679,6 @@ struct ath10k {
/* protected by conf_mutex */
bool ani_enabled;
- DECLARE_BITMAP(fw_features, ATH10K_FW_FEATURE_COUNT);
-
bool p2p;
struct {
@@ -680,39 +726,31 @@ struct ath10k {
/* The padding bytes' location is different on various chips */
enum ath10k_hw_4addr_pad hw_4addr_pad;
- u32 num_msdu_desc;
- u32 qcache_active_peers;
u32 tx_chain_mask;
u32 rx_chain_mask;
u32 max_spatial_stream;
+ u32 cal_data_len;
struct ath10k_hw_params_fw {
const char *dir;
- const char *fw;
- const char *otp;
const char *board;
size_t board_size;
size_t board_ext_size;
} fw;
} hw_params;
- const struct firmware *board;
- const void *board_data;
- size_t board_len;
-
- const struct firmware *otp;
- const void *otp_data;
- size_t otp_len;
+ /* contains the firmware images used with ATH10K_FIRMWARE_MODE_NORMAL */
+ struct ath10k_fw_components normal_mode_fw;
- const struct firmware *firmware;
- const void *firmware_data;
- size_t firmware_len;
+ /* READ-ONLY images of the running firmware, which can be either
+ * normal or UTF. Do not modify, release etc!
+ */
+ const struct ath10k_fw_components *running_fw;
+ const struct firmware *pre_cal_file;
const struct firmware *cal_file;
struct {
- const void *firmware_codeswap_data;
- size_t firmware_codeswap_len;
struct ath10k_swap_code_seg_info *firmware_swap_code_seg_info;
} swap;
@@ -744,7 +782,7 @@ struct ath10k {
} scan;
struct {
- struct ieee80211_supported_band sbands[IEEE80211_NUM_BANDS];
+ struct ieee80211_supported_band sbands[NUM_NL80211_BANDS];
} mac;
/* should never be NULL; needed for regular htt rx */
@@ -756,6 +794,9 @@ struct ath10k {
/* current operating channel definition */
struct cfg80211_chan_def chandef;
+ /* currently configured operating channel in firmware */
+ struct ieee80211_channel *tgt_oper_chan;
+
unsigned long long free_vdev_map;
struct ath10k_vif *monitor_arvif;
bool monitor;
@@ -786,9 +827,13 @@ struct ath10k {
/* protects shared structure data */
spinlock_t data_lock;
+ /* protects: ar->txqs, artxq->list */
+ spinlock_t txqs_lock;
+ struct list_head txqs;
struct list_head arvifs;
struct list_head peers;
+ struct ath10k_peer *peer_map[ATH10K_MAX_NUM_PEER_IDS];
wait_queue_head_t peer_mapping_wq;
/* protected by conf_mutex */
@@ -831,6 +876,7 @@ struct ath10k {
* avoid reporting garbage data.
*/
bool ch_info_can_report_survey;
+ struct completion bss_survey_done;
struct dfs_pattern_detector *dfs_detector;
@@ -838,8 +884,6 @@ struct ath10k {
#ifdef CONFIG_ATH10K_DEBUGFS
struct ath10k_debug debug;
-#endif
-
struct {
/* relay(fs) channel for spectral scan */
struct rchan *rfs_chan_spec_scan;
@@ -848,16 +892,12 @@ struct ath10k {
enum ath10k_spectral_mode mode;
struct ath10k_spec_scan config;
} spectral;
+#endif
struct {
/* protected by conf_mutex */
- const struct firmware *utf;
- char utf_version[32];
- const void *utf_firmware_data;
- size_t utf_firmware_len;
- DECLARE_BITMAP(orig_fw_features, ATH10K_FW_FEATURE_COUNT);
- enum ath10k_fw_wmi_op_version orig_wmi_op_version;
- enum ath10k_fw_wmi_op_version op_version;
+ struct ath10k_fw_components utf_mode_fw;
+
/* protected by data_lock */
bool utf_monitor;
} testmode;
@@ -876,6 +916,15 @@ struct ath10k {
u8 drv_priv[0] __aligned(sizeof(void *));
};
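+/* Peer stats are enabled only when the ATH10K_FLAG_PEER_STATS flag is set
+ * and the firmware advertises the WMI_SERVICE_PEER_STATS service.
+ */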
+static inline bool ath10k_peer_stats_enabled(struct ath10k *ar)
+{
+ if (test_bit(ATH10K_FLAG_PEER_STATS, &ar->dev_flags) &&
+ test_bit(WMI_SERVICE_PEER_STATS, ar->wmi.svc_map))
+ return true;
+
+ return false;
+}
+
struct ath10k *ath10k_core_create(size_t priv_size, struct device *dev,
enum ath10k_bus bus,
enum ath10k_hw_rev hw_rev,
@@ -884,8 +933,11 @@ void ath10k_core_destroy(struct ath10k *ar);
void ath10k_core_get_fw_features_str(struct ath10k *ar,
char *buf,
size_t max_len);
+int ath10k_core_fetch_firmware_api_n(struct ath10k *ar, const char *name,
+ struct ath10k_fw_file *fw_file);
-int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode);
+int ath10k_core_start(struct ath10k *ar, enum ath10k_firmware_mode mode,
+ const struct ath10k_fw_components *fw_components);
int ath10k_wait_for_suspend(struct ath10k *ar, u32 suspend_opt);
void ath10k_core_stop(struct ath10k *ar);
int ath10k_core_register(struct ath10k *ar, u32 chip_id);
diff --git a/drivers/net/wireless/ath/ath10k/debug.c b/drivers/net/wireless/ath/ath10k/debug.c
index 076d29b53ddf..e2511550fbb8 100644
--- a/drivers/net/wireless/ath/ath10k/debug.c
+++ b/drivers/net/wireless/ath/ath10k/debug.c
@@ -126,7 +126,9 @@ EXPORT_SYMBOL(ath10k_info);
void ath10k_debug_print_hwfw_info(struct ath10k *ar)
{
+ const struct firmware *firmware;
char fw_features[128] = {};
+ u32 crc = 0;
ath10k_core_get_fw_features_str(ar, fw_features, sizeof(fw_features));
@@ -143,11 +145,15 @@ void ath10k_debug_print_hwfw_info(struct ath10k *ar)
config_enabled(CONFIG_ATH10K_DFS_CERTIFIED),
config_enabled(CONFIG_NL80211_TESTMODE));
+ firmware = ar->normal_mode_fw.fw_file.firmware;
+ if (firmware)
+ crc = crc32_le(0, firmware->data, firmware->size);
+
ath10k_info(ar, "firmware ver %s api %d features %s crc32 %08x\n",
ar->hw->wiphy->fw_version,
ar->fw_api,
fw_features,
- crc32_le(0, ar->firmware->data, ar->firmware->size));
+ crc);
}
void ath10k_debug_print_board_info(struct ath10k *ar)
@@ -163,7 +169,8 @@ void ath10k_debug_print_board_info(struct ath10k *ar)
ath10k_info(ar, "board_file api %d bmi_id %s crc32 %08x",
ar->bd_api,
boardinfo,
- crc32_le(0, ar->board->data, ar->board->size));
+ crc32_le(0, ar->normal_mode_fw.board->data,
+ ar->normal_mode_fw.board->size));
}
void ath10k_debug_print_boot_info(struct ath10k *ar)
@@ -171,8 +178,8 @@ void ath10k_debug_print_boot_info(struct ath10k *ar)
ath10k_info(ar, "htt-ver %d.%d wmi-op %d htt-op %d cal %s max-sta %d raw %d hwcrypto %d\n",
ar->htt.target_version_major,
ar->htt.target_version_minor,
- ar->wmi.op_version,
- ar->htt.op_version,
+ ar->normal_mode_fw.fw_file.wmi_op_version,
+ ar->normal_mode_fw.fw_file.htt_op_version,
ath10k_cal_mode_str(ar->cal_mode),
ar->max_num_stations,
test_bit(ATH10K_FLAG_RAW_MODE, &ar->dev_flags),
@@ -319,7 +326,7 @@ static void ath10k_debug_fw_stats_reset(struct ath10k *ar)
void ath10k_debug_fw_stats_process(struct ath10k *ar, struct sk_buff *skb)
{
struct ath10k_fw_stats stats = {};
- bool is_start, is_started, is_end, peer_stats_svc;
+ bool is_start, is_started, is_end;
size_t num_peers;
size_t num_vdevs;
int ret;
@@ -346,13 +353,11 @@ void ath10k_debug_fw_stats_process(struct ath10k *ar, struct sk_buff *skb)
* b) consume stat update events until another one with pdev stats is
* delivered which is treated as end-of-data and is itself discarded
*/
-
- peer_stats_svc = test_bit(WMI_SERVICE_PEER_STATS, ar->wmi.svc_map);
- if (peer_stats_svc)
+ if (ath10k_peer_stats_enabled(ar))
ath10k_sta_update_rx_duration(ar, &stats.peers);
if (ar->debug.fw_stats_done) {
- if (!peer_stats_svc)
+ if (!ath10k_peer_stats_enabled(ar))
ath10k_warn(ar, "received unsolicited stats update event\n");
goto free;
@@ -1447,7 +1452,7 @@ static int ath10k_debug_cal_data_open(struct inode *inode, struct file *file)
goto err;
}
- buf = vmalloc(QCA988X_CAL_DATA_LEN);
+ buf = vmalloc(ar->hw_params.cal_data_len);
if (!buf) {
ret = -ENOMEM;
goto err;
@@ -1462,7 +1467,7 @@ static int ath10k_debug_cal_data_open(struct inode *inode, struct file *file)
}
ret = ath10k_hif_diag_read(ar, le32_to_cpu(addr), buf,
- QCA988X_CAL_DATA_LEN);
+ ar->hw_params.cal_data_len);
if (ret) {
ath10k_warn(ar, "failed to read calibration data: %d\n", ret);
goto err_vfree;
@@ -1487,10 +1492,11 @@ static ssize_t ath10k_debug_cal_data_read(struct file *file,
char __user *user_buf,
size_t count, loff_t *ppos)
{
+ struct ath10k *ar = file->private_data;
void *buf = file->private_data;
return simple_read_from_buffer(user_buf, count, ppos,
- buf, QCA988X_CAL_DATA_LEN);
+ buf, ar->hw_params.cal_data_len);
}
static int ath10k_debug_cal_data_release(struct inode *inode,
@@ -2019,7 +2025,12 @@ static ssize_t ath10k_write_pktlog_filter(struct file *file,
goto out;
}
- if (filter && (filter != ar->debug.pktlog_filter)) {
+ if (filter == ar->debug.pktlog_filter) {
+ ret = count;
+ goto out;
+ }
+
+ if (filter) {
ret = ath10k_wmi_pdev_pktlog_enable(ar, filter);
if (ret) {
ath10k_warn(ar, "failed to enable pktlog filter %x: %d\n",
@@ -2114,7 +2125,7 @@ static ssize_t ath10k_write_btcoex(struct file *file,
struct ath10k *ar = file->private_data;
char buf[32];
size_t buf_size;
- int ret = 0;
+ int ret;
bool val;
buf_size = min(count, (sizeof(buf) - 1));
@@ -2134,8 +2145,10 @@ static ssize_t ath10k_write_btcoex(struct file *file,
goto exit;
}
- if (!(test_bit(ATH10K_FLAG_BTCOEX, &ar->dev_flags) ^ val))
+ if (!(test_bit(ATH10K_FLAG_BTCOEX, &ar->dev_flags) ^ val)) {
+ ret = count;
goto exit;
+ }
if (val)
set_bit(ATH10K_FLAG_BTCOEX, &ar->dev_flags);
@@ -2174,6 +2187,75 @@ static const struct file_operations fops_btcoex = {
.open = simple_open
};
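+/* debugfs "peer_stats" knob; changing it schedules a firmware restart so
+ * the new setting can take effect.
+ */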
+static ssize_t ath10k_write_peer_stats(struct file *file,
+ const char __user *ubuf,
+ size_t count, loff_t *ppos)
+{
+ struct ath10k *ar = file->private_data;
+ char buf[32];
+ size_t buf_size;
+ int ret;
+ bool val;
+
+ buf_size = min(count, (sizeof(buf) - 1));
+ if (copy_from_user(buf, ubuf, buf_size))
+ return -EFAULT;
+
+ buf[buf_size] = '\0';
+
+ if (strtobool(buf, &val) != 0)
+ return -EINVAL;
+
+ mutex_lock(&ar->conf_mutex);
+
+ if (ar->state != ATH10K_STATE_ON &&
+ ar->state != ATH10K_STATE_RESTARTED) {
+ ret = -ENETDOWN;
+ goto exit;
+ }
+
+ if (!(test_bit(ATH10K_FLAG_PEER_STATS, &ar->dev_flags) ^ val)) {
+ ret = count;
+ goto exit;
+ }
+
+ if (val)
+ set_bit(ATH10K_FLAG_PEER_STATS, &ar->dev_flags);
+ else
+ clear_bit(ATH10K_FLAG_PEER_STATS, &ar->dev_flags);
+
+ ath10k_info(ar, "restarting firmware due to Peer stats change");
+
+ queue_work(ar->workqueue, &ar->restart_work);
+ ret = count;
+
+exit:
+ mutex_unlock(&ar->conf_mutex);
+ return ret;
+}
+
+static ssize_t ath10k_read_peer_stats(struct file *file, char __user *ubuf,
+ size_t count, loff_t *ppos)
+{
+ char buf[32];
+ struct ath10k *ar = file->private_data;
+ int len = 0;
+
+ mutex_lock(&ar->conf_mutex);
+ len = scnprintf(buf, sizeof(buf) - len, "%d\n",
+ test_bit(ATH10K_FLAG_PEER_STATS, &ar->dev_flags));
+ mutex_unlock(&ar->conf_mutex);
+
+ return simple_read_from_buffer(ubuf, count, ppos, buf, len);
+}
+
+static const struct file_operations fops_peer_stats = {
+ .read = ath10k_read_peer_stats,
+ .write = ath10k_write_peer_stats,
+ .open = simple_open
+};
+
static ssize_t ath10k_debug_fw_checksums_read(struct file *file,
char __user *user_buf,
size_t count, loff_t *ppos)
@@ -2191,23 +2273,28 @@ static ssize_t ath10k_debug_fw_checksums_read(struct file *file,
len += scnprintf(buf + len, buf_len - len,
"firmware-N.bin\t\t%08x\n",
- crc32_le(0, ar->firmware->data, ar->firmware->size));
+ crc32_le(0, ar->normal_mode_fw.fw_file.firmware->data,
+ ar->normal_mode_fw.fw_file.firmware->size));
len += scnprintf(buf + len, buf_len - len,
"athwlan\t\t\t%08x\n",
- crc32_le(0, ar->firmware_data, ar->firmware_len));
+ crc32_le(0, ar->normal_mode_fw.fw_file.firmware_data,
+ ar->normal_mode_fw.fw_file.firmware_len));
len += scnprintf(buf + len, buf_len - len,
"otp\t\t\t%08x\n",
- crc32_le(0, ar->otp_data, ar->otp_len));
+ crc32_le(0, ar->normal_mode_fw.fw_file.otp_data,
+ ar->normal_mode_fw.fw_file.otp_len));
len += scnprintf(buf + len, buf_len - len,
"codeswap\t\t%08x\n",
- crc32_le(0, ar->swap.firmware_codeswap_data,
- ar->swap.firmware_codeswap_len));
+ crc32_le(0, ar->normal_mode_fw.fw_file.codeswap_data,
+ ar->normal_mode_fw.fw_file.codeswap_len));
len += scnprintf(buf + len, buf_len - len,
"board-N.bin\t\t%08x\n",
- crc32_le(0, ar->board->data, ar->board->size));
+ crc32_le(0, ar->normal_mode_fw.board->data,
+ ar->normal_mode_fw.board->size));
len += scnprintf(buf + len, buf_len - len,
"board\t\t\t%08x\n",
- crc32_le(0, ar->board_data, ar->board_len));
+ crc32_le(0, ar->normal_mode_fw.board_data,
+ ar->normal_mode_fw.board_len));
ret_cnt = simple_read_from_buffer(user_buf, count, ppos, buf, len);
@@ -2337,6 +2424,11 @@ int ath10k_debug_register(struct ath10k *ar)
debugfs_create_file("btcoex", S_IRUGO | S_IWUSR,
ar->debug.debugfs_phy, ar, &fops_btcoex);
+ if (test_bit(WMI_SERVICE_PEER_STATS, ar->wmi.svc_map))
+ debugfs_create_file("peer_stats", S_IRUGO | S_IWUSR,
+ ar->debug.debugfs_phy, ar,
+ &fops_peer_stats);
+
debugfs_create_file("fw_checksums", S_IRUSR,
ar->debug.debugfs_phy, ar, &fops_fw_checksums);
diff --git a/drivers/net/wireless/ath/ath10k/debug.h b/drivers/net/wireless/ath/ath10k/debug.h
index 6206edd7c49f..75c89e3625ef 100644
--- a/drivers/net/wireless/ath/ath10k/debug.h
+++ b/drivers/net/wireless/ath/ath10k/debug.h
@@ -57,7 +57,7 @@ enum ath10k_dbg_aggr_mode {
};
/* FIXME: How to calculate the buffer size sanely? */
-#define ATH10K_FW_STATS_BUF_SIZE (1024*1024)
+#define ATH10K_FW_STATS_BUF_SIZE (1024 * 1024)
extern unsigned int ath10k_debug_mask;
diff --git a/drivers/net/wireless/ath/ath10k/htc.h b/drivers/net/wireless/ath/ath10k/htc.h
index e70aa38e6e05..cc827185d3e9 100644
--- a/drivers/net/wireless/ath/ath10k/htc.h
+++ b/drivers/net/wireless/ath/ath10k/htc.h
@@ -297,10 +297,10 @@ struct ath10k_htc_svc_conn_resp {
#define ATH10K_NUM_CONTROL_TX_BUFFERS 2
#define ATH10K_HTC_MAX_LEN 4096
#define ATH10K_HTC_MAX_CTRL_MSG_LEN 256
-#define ATH10K_HTC_WAIT_TIMEOUT_HZ (1*HZ)
+#define ATH10K_HTC_WAIT_TIMEOUT_HZ (1 * HZ)
#define ATH10K_HTC_CONTROL_BUFFER_SIZE (ATH10K_HTC_MAX_CTRL_MSG_LEN + \
sizeof(struct ath10k_htc_hdr))
-#define ATH10K_HTC_CONN_SVC_TIMEOUT_HZ (1*HZ)
+#define ATH10K_HTC_CONN_SVC_TIMEOUT_HZ (1 * HZ)
struct ath10k_htc_ep {
struct ath10k_htc *htc;
diff --git a/drivers/net/wireless/ath/ath10k/htt.c b/drivers/net/wireless/ath/ath10k/htt.c
index 7561f22f10f9..130cd9502021 100644
--- a/drivers/net/wireless/ath/ath10k/htt.c
+++ b/drivers/net/wireless/ath/ath10k/htt.c
@@ -149,7 +149,7 @@ int ath10k_htt_connect(struct ath10k_htt *htt)
memset(&conn_resp, 0, sizeof(conn_resp));
conn_req.ep_ops.ep_tx_complete = ath10k_htt_htc_tx_complete;
- conn_req.ep_ops.ep_rx_complete = ath10k_htt_t2h_msg_handler;
+ conn_req.ep_ops.ep_rx_complete = ath10k_htt_htc_t2h_msg_handler;
/* connect to control service */
conn_req.service_id = ATH10K_HTC_SVC_ID_HTT_DATA_MSG;
@@ -183,7 +183,7 @@ int ath10k_htt_init(struct ath10k *ar)
8 + /* llc snap */
2; /* ip4 dscp or ip6 priority */
- switch (ar->htt.op_version) {
+ switch (ar->running_fw->fw_file.htt_op_version) {
case ATH10K_FW_HTT_OP_VERSION_10_4:
ar->htt.t2h_msg_types = htt_10_4_t2h_msg_types;
ar->htt.t2h_msg_types_max = HTT_10_4_T2H_NUM_MSGS;
@@ -208,7 +208,7 @@ int ath10k_htt_init(struct ath10k *ar)
return 0;
}
-#define HTT_TARGET_VERSION_TIMEOUT_HZ (3*HZ)
+#define HTT_TARGET_VERSION_TIMEOUT_HZ (3 * HZ)
static int ath10k_htt_verify_version(struct ath10k_htt *htt)
{
diff --git a/drivers/net/wireless/ath/ath10k/htt.h b/drivers/net/wireless/ath/ath10k/htt.h
index 13391ea4422d..911c535d0863 100644
--- a/drivers/net/wireless/ath/ath10k/htt.h
+++ b/drivers/net/wireless/ath/ath10k/htt.h
@@ -22,6 +22,7 @@
#include <linux/interrupt.h>
#include <linux/dmapool.h>
#include <linux/hashtable.h>
+#include <linux/kfifo.h>
#include <net/mac80211.h>
#include "htc.h"
@@ -1461,15 +1462,23 @@ struct htt_tx_mode_switch_ind {
struct htt_tx_mode_switch_record records[0];
} __packed;
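+/* Operating channel information reported by firmware in a channel change
+ * indication.
+ */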
+struct htt_channel_change {
+ u8 pad[3];
+ __le32 freq;
+ __le32 center_freq1;
+ __le32 center_freq2;
+ __le32 phymode;
+} __packed;
+
union htt_rx_pn_t {
/* WEP: 24-bit PN */
u32 pn24;
/* TKIP or CCMP: 48-bit PN */
- u_int64_t pn48;
+ u64 pn48;
/* WAPI: 128-bit PN */
- u_int64_t pn128[2];
+ u64 pn128[2];
};
struct htt_cmd {
@@ -1511,16 +1520,22 @@ struct htt_resp {
struct htt_tx_fetch_ind tx_fetch_ind;
struct htt_tx_fetch_confirm tx_fetch_confirm;
struct htt_tx_mode_switch_ind tx_mode_switch_ind;
+ struct htt_channel_change chan_change;
};
} __packed;
/*** host side structures follow ***/
struct htt_tx_done {
- u32 msdu_id;
- bool discard;
- bool no_ack;
- bool success;
+ u16 msdu_id;
+ u16 status;
+};
+
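+/* Tx completion states stored in htt_tx_done::status and queued through
+ * txdone_fifo.
+ */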
+enum htt_tx_compl_state {
+ HTT_TX_COMPL_STATE_NONE,
+ HTT_TX_COMPL_STATE_ACK,
+ HTT_TX_COMPL_STATE_NOACK,
+ HTT_TX_COMPL_STATE_DISCARD,
};
struct htt_peer_map_event {
@@ -1547,7 +1562,6 @@ struct ath10k_htt {
u8 target_version_major;
u8 target_version_minor;
struct completion target_version_received;
- enum ath10k_fw_htt_op_version op_version;
u8 max_num_amsdu;
u8 max_num_ampdu;
@@ -1641,17 +1655,20 @@ struct ath10k_htt {
struct idr pending_tx;
wait_queue_head_t empty_tx_wq;
+ /* FIFO for storing tx done status {ack, no-ack, discard} and msdu id */
+ DECLARE_KFIFO_PTR(txdone_fifo, struct htt_tx_done);
+
/* set if host-fw communication goes haywire
* used to avoid further failures */
bool rx_confused;
- struct tasklet_struct rx_replenish_task;
+ atomic_t num_mpdus_ready;
/* This is used to group tx/rx completions separately and process them
* in batches to reduce cache stalls */
struct tasklet_struct txrx_compl_task;
- struct sk_buff_head tx_compl_q;
struct sk_buff_head rx_compl_q;
struct sk_buff_head rx_in_ord_compl_q;
+ struct sk_buff_head tx_fetch_ind_q;
/* rx_status template */
struct ieee80211_rx_status rx_status;
@@ -1667,10 +1684,13 @@ struct ath10k_htt {
} txbuf;
struct {
+ bool enabled;
struct htt_q_state *vaddr;
dma_addr_t paddr;
+ u16 num_push_allowed;
u16 num_peers;
u16 num_tids;
+ enum htt_tx_mode_switch_mode mode;
enum htt_q_depth_type type;
} tx_q_state;
};
@@ -1715,7 +1735,7 @@ struct htt_rx_desc {
/* Refill a bunch of RX buffers for each refill round so that FW/HW can handle
* aggregated traffic more nicely. */
-#define ATH10K_HTT_MAX_NUM_REFILL 16
+#define ATH10K_HTT_MAX_NUM_REFILL 100
/*
* DMA_MAP expects the buffer to be an integral number of cache lines.
@@ -1743,7 +1763,8 @@ int ath10k_htt_rx_ring_refill(struct ath10k *ar);
void ath10k_htt_rx_free(struct ath10k_htt *htt);
void ath10k_htt_htc_tx_complete(struct ath10k *ar, struct sk_buff *skb);
-void ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb);
+void ath10k_htt_htc_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb);
+bool ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb);
int ath10k_htt_h2t_ver_req_msg(struct ath10k_htt *htt);
int ath10k_htt_h2t_stats_req(struct ath10k_htt *htt, u8 mask, u64 cookie);
int ath10k_htt_send_frag_desc_bank_cfg(struct ath10k_htt *htt);
@@ -1752,8 +1773,23 @@ int ath10k_htt_h2t_aggr_cfg_msg(struct ath10k_htt *htt,
u8 max_subfrms_ampdu,
u8 max_subfrms_amsdu);
void ath10k_htt_hif_tx_complete(struct ath10k *ar, struct sk_buff *skb);
+int ath10k_htt_tx_fetch_resp(struct ath10k *ar,
+ __le32 token,
+ __le16 fetch_seq_num,
+ struct htt_tx_fetch_record *records,
+ size_t num_records);
+
+void ath10k_htt_tx_txq_update(struct ieee80211_hw *hw,
+ struct ieee80211_txq *txq);
+void ath10k_htt_tx_txq_recalc(struct ieee80211_hw *hw,
+ struct ieee80211_txq *txq);
+void ath10k_htt_tx_txq_sync(struct ath10k *ar);
+void ath10k_htt_tx_dec_pending(struct ath10k_htt *htt);
+int ath10k_htt_tx_inc_pending(struct ath10k_htt *htt);
+void ath10k_htt_tx_mgmt_dec_pending(struct ath10k_htt *htt);
+int ath10k_htt_tx_mgmt_inc_pending(struct ath10k_htt *htt, bool is_mgmt,
+ bool is_presp);
-void __ath10k_htt_tx_dec_pending(struct ath10k_htt *htt, bool limit_mgmt_desc);
int ath10k_htt_tx_alloc_msdu_id(struct ath10k_htt *htt, struct sk_buff *skb);
void ath10k_htt_tx_free_msdu_id(struct ath10k_htt *htt, u16 msdu_id);
int ath10k_htt_mgmt_tx(struct ath10k_htt *htt, struct sk_buff *);
diff --git a/drivers/net/wireless/ath/ath10k/htt_rx.c b/drivers/net/wireless/ath/ath10k/htt_rx.c
index ae9b686a4e91..cc979a4faeb0 100644
--- a/drivers/net/wireless/ath/ath10k/htt_rx.c
+++ b/drivers/net/wireless/ath/ath10k/htt_rx.c
@@ -31,6 +31,8 @@
/* when under memory pressure rx ring refill may fail and needs a retry */
#define HTT_RX_RING_REFILL_RETRY_MS 50
+#define HTT_RX_RING_REFILL_RESCHED_MS 5
+
static int ath10k_htt_rx_get_csum_state(struct sk_buff *skb);
static void ath10k_htt_txrx_compl_task(unsigned long ptr);
@@ -192,7 +194,8 @@ static void ath10k_htt_rx_msdu_buff_replenish(struct ath10k_htt *htt)
mod_timer(&htt->rx_ring.refill_retry_timer, jiffies +
msecs_to_jiffies(HTT_RX_RING_REFILL_RETRY_MS));
} else if (num_deficit > 0) {
- tasklet_schedule(&htt->rx_replenish_task);
+ mod_timer(&htt->rx_ring.refill_retry_timer, jiffies +
+ msecs_to_jiffies(HTT_RX_RING_REFILL_RESCHED_MS));
}
spin_unlock_bh(&htt->rx_ring.lock);
}
@@ -223,12 +226,11 @@ int ath10k_htt_rx_ring_refill(struct ath10k *ar)
void ath10k_htt_rx_free(struct ath10k_htt *htt)
{
del_timer_sync(&htt->rx_ring.refill_retry_timer);
- tasklet_kill(&htt->rx_replenish_task);
tasklet_kill(&htt->txrx_compl_task);
- skb_queue_purge(&htt->tx_compl_q);
skb_queue_purge(&htt->rx_compl_q);
skb_queue_purge(&htt->rx_in_ord_compl_q);
+ skb_queue_purge(&htt->tx_fetch_ind_q);
ath10k_htt_rx_ring_free(htt);
@@ -281,7 +283,6 @@ static inline struct sk_buff *ath10k_htt_rx_netbuf_pop(struct ath10k_htt *htt)
/* return: < 0 fatal error, 0 - non chained msdu, 1 chained msdu */
static int ath10k_htt_rx_amsdu_pop(struct ath10k_htt *htt,
- u8 **fw_desc, int *fw_desc_len,
struct sk_buff_head *amsdu)
{
struct ath10k *ar = htt->ar;
@@ -323,48 +324,6 @@ static int ath10k_htt_rx_amsdu_pop(struct ath10k_htt *htt,
return -EIO;
}
- /*
- * Copy the FW rx descriptor for this MSDU from the rx
- * indication message into the MSDU's netbuf. HL uses the
- * same rx indication message definition as LL, and simply
- * appends new info (fields from the HW rx desc, and the
- * MSDU payload itself). So, the offset into the rx
- * indication message only has to account for the standard
- * offset of the per-MSDU FW rx desc info within the
- * message, and how many bytes of the per-MSDU FW rx desc
- * info have already been consumed. (And the endianness of
- * the host, since for a big-endian host, the rx ind
- * message contents, including the per-MSDU rx desc bytes,
- * were byteswapped during upload.)
- */
- if (*fw_desc_len > 0) {
- rx_desc->fw_desc.info0 = **fw_desc;
- /*
- * The target is expected to only provide the basic
- * per-MSDU rx descriptors. Just to be sure, verify
- * that the target has not attached extension data
- * (e.g. LRO flow ID).
- */
-
- /* or more, if there's extension data */
- (*fw_desc)++;
- (*fw_desc_len)--;
- } else {
- /*
- * When an oversized AMSDU happened, FW will lost
- * some of MSDU status - in this case, the FW
- * descriptors provided will be less than the
- * actual MSDUs inside this MPDU. Mark the FW
- * descriptors so that it will still deliver to
- * upper stack, if no CRC error for this MPDU.
- *
- * FIX THIS - the FW descriptors are actually for
- * MSDUs in the end of this A-MSDU instead of the
- * beginning.
- */
- rx_desc->fw_desc.info0 = 0;
- }
-
msdu_len_invalid = !!(__le32_to_cpu(rx_desc->attention.flags)
& (RX_ATTENTION_FLAGS_MPDU_LENGTH_ERR |
RX_ATTENTION_FLAGS_MSDU_LENGTH_ERR));
@@ -423,13 +382,6 @@ static int ath10k_htt_rx_amsdu_pop(struct ath10k_htt *htt,
return msdu_chaining;
}
-static void ath10k_htt_rx_replenish_task(unsigned long ptr)
-{
- struct ath10k_htt *htt = (struct ath10k_htt *)ptr;
-
- ath10k_htt_rx_msdu_buff_replenish(htt);
-}
-
static struct sk_buff *ath10k_htt_rx_pop_paddr(struct ath10k_htt *htt,
u32 paddr)
{
@@ -563,12 +515,10 @@ int ath10k_htt_rx_alloc(struct ath10k_htt *htt)
htt->rx_ring.sw_rd_idx.msdu_payld = 0;
hash_init(htt->rx_ring.skb_table);
- tasklet_init(&htt->rx_replenish_task, ath10k_htt_rx_replenish_task,
- (unsigned long)htt);
-
- skb_queue_head_init(&htt->tx_compl_q);
skb_queue_head_init(&htt->rx_compl_q);
skb_queue_head_init(&htt->rx_in_ord_compl_q);
+ skb_queue_head_init(&htt->tx_fetch_ind_q);
+ atomic_set(&htt->num_mpdus_ready, 0);
tasklet_init(&htt->txrx_compl_task, ath10k_htt_txrx_compl_task,
(unsigned long)htt);
@@ -860,6 +810,8 @@ static bool ath10k_htt_rx_h_channel(struct ath10k *ar,
ch = ath10k_htt_rx_h_vdev_channel(ar, vdev_id);
if (!ch)
ch = ath10k_htt_rx_h_any_channel(ar);
+ if (!ch)
+ ch = ar->tgt_oper_chan;
spin_unlock_bh(&ar->data_lock);
if (!ch)
@@ -979,7 +931,7 @@ static void ath10k_process_rx(struct ath10k *ar,
*status = *rx_status;
ath10k_dbg(ar, ATH10K_DBG_DATA,
- "rx skb %p len %u peer %pM %s %s sn %u %s%s%s%s%s %srate_idx %u vht_nss %u freq %u band %u flag 0x%x fcs-err %i mic-err %i amsdu-more %i\n",
+ "rx skb %p len %u peer %pM %s %s sn %u %s%s%s%s%s %srate_idx %u vht_nss %u freq %u band %u flag 0x%llx fcs-err %i mic-err %i amsdu-more %i\n",
skb,
skb->len,
ieee80211_get_SA(hdr),
@@ -1014,7 +966,7 @@ static int ath10k_htt_rx_nwifi_hdrlen(struct ath10k *ar,
int len = ieee80211_hdrlen(hdr->frame_control);
if (!test_bit(ATH10K_FW_FEATURE_NO_NWIFI_DECAP_4ADDR_PADDING,
- ar->fw_features))
+ ar->running_fw->fw_file.fw_features))
len = round_up(len, 4);
return len;
@@ -1076,20 +1028,25 @@ static void ath10k_htt_rx_h_undecap_raw(struct ath10k *ar,
hdr = (void *)msdu->data;
/* Tail */
- skb_trim(msdu, msdu->len - ath10k_htt_rx_crypto_tail_len(ar, enctype));
+ if (status->flag & RX_FLAG_IV_STRIPPED)
+ skb_trim(msdu, msdu->len -
+ ath10k_htt_rx_crypto_tail_len(ar, enctype));
/* MMIC */
- if (!ieee80211_has_morefrags(hdr->frame_control) &&
+ if ((status->flag & RX_FLAG_MMIC_STRIPPED) &&
+ !ieee80211_has_morefrags(hdr->frame_control) &&
enctype == HTT_RX_MPDU_ENCRYPT_TKIP_WPA)
skb_trim(msdu, msdu->len - 8);
/* Head */
- hdr_len = ieee80211_hdrlen(hdr->frame_control);
- crypto_len = ath10k_htt_rx_crypto_param_len(ar, enctype);
+ if (status->flag & RX_FLAG_IV_STRIPPED) {
+ hdr_len = ieee80211_hdrlen(hdr->frame_control);
+ crypto_len = ath10k_htt_rx_crypto_param_len(ar, enctype);
- memmove((void *)msdu->data + crypto_len,
- (void *)msdu->data, hdr_len);
- skb_pull(msdu, crypto_len);
+ memmove((void *)msdu->data + crypto_len,
+ (void *)msdu->data, hdr_len);
+ skb_pull(msdu, crypto_len);
+ }
}
static void ath10k_htt_rx_h_undecap_nwifi(struct ath10k *ar,
@@ -1343,6 +1300,7 @@ static void ath10k_htt_rx_h_mpdu(struct ath10k *ar,
bool has_tkip_err;
bool has_peer_idx_invalid;
bool is_decrypted;
+ bool is_mgmt;
u32 attention;
if (skb_queue_empty(amsdu))
@@ -1351,6 +1309,9 @@ static void ath10k_htt_rx_h_mpdu(struct ath10k *ar,
first = skb_peek(amsdu);
rxd = (void *)first->data - sizeof(*rxd);
+ is_mgmt = !!(rxd->attention.flags &
+ __cpu_to_le32(RX_ATTENTION_FLAGS_MGMT_TYPE));
+
enctype = MS(__le32_to_cpu(rxd->mpdu_start.info0),
RX_MPDU_START_INFO0_ENCRYPT_TYPE);
@@ -1392,6 +1353,7 @@ static void ath10k_htt_rx_h_mpdu(struct ath10k *ar,
RX_FLAG_MMIC_ERROR |
RX_FLAG_DECRYPTED |
RX_FLAG_IV_STRIPPED |
+ RX_FLAG_ONLY_MONITOR |
RX_FLAG_MMIC_STRIPPED);
if (has_fcs_err)
@@ -1400,10 +1362,21 @@ static void ath10k_htt_rx_h_mpdu(struct ath10k *ar,
if (has_tkip_err)
status->flag |= RX_FLAG_MMIC_ERROR;
- if (is_decrypted)
- status->flag |= RX_FLAG_DECRYPTED |
- RX_FLAG_IV_STRIPPED |
- RX_FLAG_MMIC_STRIPPED;
+ /* Firmware reports all necessary management frames via WMI already.
+ * They are not reported to monitor interfaces at all so pass the ones
+ * coming via HTT to monitor interfaces instead. This simplifies
+ * matters a lot.
+ */
+ if (is_mgmt)
+ status->flag |= RX_FLAG_ONLY_MONITOR;
+
+ if (is_decrypted) {
+ status->flag |= RX_FLAG_DECRYPTED;
+
+ if (likely(!is_mgmt))
+ status->flag |= RX_FLAG_IV_STRIPPED |
+ RX_FLAG_MMIC_STRIPPED;
+ }
skb_queue_walk(amsdu, msdu) {
ath10k_htt_rx_h_csum_offload(msdu);
@@ -1416,6 +1389,8 @@ static void ath10k_htt_rx_h_mpdu(struct ath10k *ar,
*/
if (!is_decrypted)
continue;
+ if (is_mgmt)
+ continue;
hdr = (void *)msdu->data;
hdr->frame_control &= ~__cpu_to_le16(IEEE80211_FCTL_PROTECTED);
@@ -1516,14 +1491,6 @@ static bool ath10k_htt_rx_amsdu_allowed(struct ath10k *ar,
struct sk_buff_head *amsdu,
struct ieee80211_rx_status *rx_status)
{
- struct sk_buff *msdu;
- struct htt_rx_desc *rxd;
- bool is_mgmt;
- bool has_fcs_err;
-
- msdu = skb_peek(amsdu);
- rxd = (void *)msdu->data - sizeof(*rxd);
-
/* FIXME: It might be a good idea to do some fuzzy-testing to drop
* invalid/dangerous frames.
*/
@@ -1533,23 +1500,6 @@ static bool ath10k_htt_rx_amsdu_allowed(struct ath10k *ar,
return false;
}
- is_mgmt = !!(rxd->attention.flags &
- __cpu_to_le32(RX_ATTENTION_FLAGS_MGMT_TYPE));
- has_fcs_err = !!(rxd->attention.flags &
- __cpu_to_le32(RX_ATTENTION_FLAGS_FCS_ERR));
-
- /* Management frames are handled via WMI events. The pros of such
- * approach is that channel is explicitly provided in WMI events
- * whereas HTT doesn't provide channel information for Rxed frames.
- *
- * However some firmware revisions don't report corrupted frames via
- * WMI so don't drop them.
- */
- if (is_mgmt && !has_fcs_err) {
- ath10k_dbg(ar, ATH10K_DBG_HTT, "htt rx mgmt ctrl\n");
- return false;
- }
-
if (test_bit(ATH10K_CAC_RUNNING, &ar->dev_flags)) {
ath10k_dbg(ar, ATH10K_DBG_HTT, "htt rx cac running\n");
return false;
@@ -1571,25 +1521,49 @@ static void ath10k_htt_rx_h_filter(struct ath10k *ar,
__skb_queue_purge(amsdu);
}
-static void ath10k_htt_rx_handler(struct ath10k_htt *htt,
- struct htt_rx_indication *rx)
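+/* Pop one A-MSDU from the rx ring and run it through the usual rx
+ * processing path. Returns a negative error and marks the ring as confused
+ * if it became corrupted.
+ */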
+static int ath10k_htt_rx_handle_amsdu(struct ath10k_htt *htt)
{
struct ath10k *ar = htt->ar;
- struct ieee80211_rx_status *rx_status = &htt->rx_status;
- struct htt_rx_indication_mpdu_range *mpdu_ranges;
+ static struct ieee80211_rx_status rx_status;
struct sk_buff_head amsdu;
- int num_mpdu_ranges;
- int fw_desc_len;
- u8 *fw_desc;
- int i, ret, mpdu_count = 0;
+ int ret;
- lockdep_assert_held(&htt->rx_ring.lock);
+ __skb_queue_head_init(&amsdu);
- if (htt->rx_confused)
- return;
+ spin_lock_bh(&htt->rx_ring.lock);
+ if (htt->rx_confused) {
+ spin_unlock_bh(&htt->rx_ring.lock);
+ return -EIO;
+ }
+ ret = ath10k_htt_rx_amsdu_pop(htt, &amsdu);
+ spin_unlock_bh(&htt->rx_ring.lock);
- fw_desc_len = __le16_to_cpu(rx->prefix.fw_rx_desc_bytes);
- fw_desc = (u8 *)&rx->fw_desc;
+ if (ret < 0) {
+ ath10k_warn(ar, "rx ring became corrupted: %d\n", ret);
+ __skb_queue_purge(&amsdu);
+ /* FIXME: It's probably a good idea to reboot the
+ * device instead of leaving it inoperable.
+ */
+ htt->rx_confused = true;
+ return ret;
+ }
+
+ ath10k_htt_rx_h_ppdu(ar, &amsdu, &rx_status, 0xffff);
+ ath10k_htt_rx_h_unchain(ar, &amsdu, ret > 0);
+ ath10k_htt_rx_h_filter(ar, &amsdu, &rx_status);
+ ath10k_htt_rx_h_mpdu(ar, &amsdu, &rx_status);
+ ath10k_htt_rx_h_deliver(ar, &amsdu, &rx_status);
+
+ return 0;
+}
+
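+/* Rx indications only account how many MPDUs became ready; the actual
+ * A-MSDUs are popped from the rx ring by the txrx completion tasklet.
+ */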
+static void ath10k_htt_rx_proc_rx_ind(struct ath10k_htt *htt,
+ struct htt_rx_indication *rx)
+{
+ struct ath10k *ar = htt->ar;
+ struct htt_rx_indication_mpdu_range *mpdu_ranges;
+ int num_mpdu_ranges;
+ int i, mpdu_count = 0;
num_mpdu_ranges = MS(__le32_to_cpu(rx->hdr.info1),
HTT_RX_INDICATION_INFO1_NUM_MPDU_RANGES);
@@ -1603,80 +1577,19 @@ static void ath10k_htt_rx_handler(struct ath10k_htt *htt,
for (i = 0; i < num_mpdu_ranges; i++)
mpdu_count += mpdu_ranges[i].mpdu_count;
- while (mpdu_count--) {
- __skb_queue_head_init(&amsdu);
- ret = ath10k_htt_rx_amsdu_pop(htt, &fw_desc,
- &fw_desc_len, &amsdu);
- if (ret < 0) {
- ath10k_warn(ar, "rx ring became corrupted: %d\n", ret);
- __skb_queue_purge(&amsdu);
- /* FIXME: It's probably a good idea to reboot the
- * device instead of leaving it inoperable.
- */
- htt->rx_confused = true;
- break;
- }
+ atomic_add(mpdu_count, &htt->num_mpdus_ready);
- ath10k_htt_rx_h_ppdu(ar, &amsdu, rx_status, 0xffff);
- ath10k_htt_rx_h_unchain(ar, &amsdu, ret > 0);
- ath10k_htt_rx_h_filter(ar, &amsdu, rx_status);
- ath10k_htt_rx_h_mpdu(ar, &amsdu, rx_status);
- ath10k_htt_rx_h_deliver(ar, &amsdu, rx_status);
- }
-
- tasklet_schedule(&htt->rx_replenish_task);
+ tasklet_schedule(&htt->txrx_compl_task);
}
-static void ath10k_htt_rx_frag_handler(struct ath10k_htt *htt,
- struct htt_rx_fragment_indication *frag)
+static void ath10k_htt_rx_frag_handler(struct ath10k_htt *htt)
{
- struct ath10k *ar = htt->ar;
- struct ieee80211_rx_status *rx_status = &htt->rx_status;
- struct sk_buff_head amsdu;
- int ret;
- u8 *fw_desc;
- int fw_desc_len;
-
- fw_desc_len = __le16_to_cpu(frag->fw_rx_desc_bytes);
- fw_desc = (u8 *)frag->fw_msdu_rx_desc;
-
- __skb_queue_head_init(&amsdu);
+ atomic_inc(&htt->num_mpdus_ready);
- spin_lock_bh(&htt->rx_ring.lock);
- ret = ath10k_htt_rx_amsdu_pop(htt, &fw_desc, &fw_desc_len,
- &amsdu);
- spin_unlock_bh(&htt->rx_ring.lock);
-
- tasklet_schedule(&htt->rx_replenish_task);
-
- ath10k_dbg(ar, ATH10K_DBG_HTT_DUMP, "htt rx frag ahead\n");
-
- if (ret) {
- ath10k_warn(ar, "failed to pop amsdu from httr rx ring for fragmented rx %d\n",
- ret);
- __skb_queue_purge(&amsdu);
- return;
- }
-
- if (skb_queue_len(&amsdu) != 1) {
- ath10k_warn(ar, "failed to pop frag amsdu: too many msdus\n");
- __skb_queue_purge(&amsdu);
- return;
- }
-
- ath10k_htt_rx_h_ppdu(ar, &amsdu, rx_status, 0xffff);
- ath10k_htt_rx_h_filter(ar, &amsdu, rx_status);
- ath10k_htt_rx_h_mpdu(ar, &amsdu, rx_status);
- ath10k_htt_rx_h_deliver(ar, &amsdu, rx_status);
-
- if (fw_desc_len > 0) {
- ath10k_dbg(ar, ATH10K_DBG_HTT,
- "expecting more fragmented rx in one indication %d\n",
- fw_desc_len);
- }
+ tasklet_schedule(&htt->txrx_compl_task);
}
-static void ath10k_htt_rx_frm_tx_compl(struct ath10k *ar,
+static void ath10k_htt_rx_tx_compl_ind(struct ath10k *ar,
struct sk_buff *skb)
{
struct ath10k_htt *htt = &ar->htt;
@@ -1688,19 +1601,19 @@ static void ath10k_htt_rx_frm_tx_compl(struct ath10k *ar,
switch (status) {
case HTT_DATA_TX_STATUS_NO_ACK:
- tx_done.no_ack = true;
+ tx_done.status = HTT_TX_COMPL_STATE_NOACK;
break;
case HTT_DATA_TX_STATUS_OK:
- tx_done.success = true;
+ tx_done.status = HTT_TX_COMPL_STATE_ACK;
break;
case HTT_DATA_TX_STATUS_DISCARD:
case HTT_DATA_TX_STATUS_POSTPONE:
case HTT_DATA_TX_STATUS_DOWNLOAD_FAIL:
- tx_done.discard = true;
+ tx_done.status = HTT_TX_COMPL_STATE_DISCARD;
break;
default:
ath10k_warn(ar, "unhandled tx completion status %d\n", status);
- tx_done.discard = true;
+ tx_done.status = HTT_TX_COMPL_STATE_DISCARD;
break;
}
@@ -1710,7 +1623,20 @@ static void ath10k_htt_rx_frm_tx_compl(struct ath10k *ar,
for (i = 0; i < resp->data_tx_completion.num_msdus; i++) {
msdu_id = resp->data_tx_completion.msdus[i];
tx_done.msdu_id = __le16_to_cpu(msdu_id);
- ath10k_txrx_tx_unref(htt, &tx_done);
+
+ /* kfifo_put: In practice firmware shouldn't fire off per-CE
+ * interrupt and main interrupt (MSI/-X range case) for the same
+ * HTC service so it should be safe to use kfifo_put w/o lock.
+ *
+ * From kfifo_put() documentation:
+ * Note that with only one concurrent reader and one concurrent
+ * writer, you don't need extra locking to use these macro.
+ */
+ if (!kfifo_put(&htt->txdone_fifo, tx_done)) {
+ ath10k_warn(ar, "txdone fifo overrun, msdu_id %d status %d\n",
+ tx_done.msdu_id, tx_done.status);
+ ath10k_txrx_tx_unref(htt, &tx_done);
+ }
}
}
@@ -1978,11 +1904,324 @@ static void ath10k_htt_rx_in_ord_ind(struct ath10k *ar, struct sk_buff *skb)
return;
}
}
+ ath10k_htt_rx_msdu_buff_replenish(htt);
+}
+
+static void ath10k_htt_rx_tx_fetch_resp_id_confirm(struct ath10k *ar,
+ const __le32 *resp_ids,
+ int num_resp_ids)
+{
+ int i;
+ u32 resp_id;
+
+ ath10k_dbg(ar, ATH10K_DBG_HTT, "htt rx tx fetch confirm num_resp_ids %d\n",
+ num_resp_ids);
+
+ for (i = 0; i < num_resp_ids; i++) {
+ resp_id = le32_to_cpu(resp_ids[i]);
+
+ ath10k_dbg(ar, ATH10K_DBG_HTT, "htt rx tx fetch confirm resp_id %u\n",
+ resp_id);
+
+ /* TODO: free resp_id */
+ }
+}
+
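+/* Handle an HTT tx fetch indication: for each record push up to the
+ * requested number of MSDUs/bytes from the corresponding txq, then report
+ * what was actually pushed back to firmware in a tx fetch response.
+ */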
+static void ath10k_htt_rx_tx_fetch_ind(struct ath10k *ar, struct sk_buff *skb)
+{
+ struct ieee80211_hw *hw = ar->hw;
+ struct ieee80211_txq *txq;
+ struct htt_resp *resp = (struct htt_resp *)skb->data;
+ struct htt_tx_fetch_record *record;
+ size_t len;
+ size_t max_num_bytes;
+ size_t max_num_msdus;
+ size_t num_bytes;
+ size_t num_msdus;
+ const __le32 *resp_ids;
+ u16 num_records;
+ u16 num_resp_ids;
+ u16 peer_id;
+ u8 tid;
+ int ret;
+ int i;
+
+ ath10k_dbg(ar, ATH10K_DBG_HTT, "htt rx tx fetch ind\n");
+
+ len = sizeof(resp->hdr) + sizeof(resp->tx_fetch_ind);
+ if (unlikely(skb->len < len)) {
+ ath10k_warn(ar, "received corrupted tx_fetch_ind event: buffer too short\n");
+ return;
+ }
+
+ num_records = le16_to_cpu(resp->tx_fetch_ind.num_records);
+ num_resp_ids = le16_to_cpu(resp->tx_fetch_ind.num_resp_ids);
+
+ len += sizeof(resp->tx_fetch_ind.records[0]) * num_records;
+ len += sizeof(resp->tx_fetch_ind.resp_ids[0]) * num_resp_ids;
+
+ if (unlikely(skb->len < len)) {
+ ath10k_warn(ar, "received corrupted tx_fetch_ind event: too many records/resp_ids\n");
+ return;
+ }
+
+ ath10k_dbg(ar, ATH10K_DBG_HTT, "htt rx tx fetch ind num records %hu num resps %hu seq %hu\n",
+ num_records, num_resp_ids,
+ le16_to_cpu(resp->tx_fetch_ind.fetch_seq_num));
+
+ if (!ar->htt.tx_q_state.enabled) {
+ ath10k_warn(ar, "received unexpected tx_fetch_ind event: not enabled\n");
+ return;
+ }
+
+ if (ar->htt.tx_q_state.mode == HTT_TX_MODE_SWITCH_PUSH) {
+ ath10k_warn(ar, "received unexpected tx_fetch_ind event: in push mode\n");
+ return;
+ }
+
+ rcu_read_lock();
+
+ for (i = 0; i < num_records; i++) {
+ record = &resp->tx_fetch_ind.records[i];
+ peer_id = MS(le16_to_cpu(record->info),
+ HTT_TX_FETCH_RECORD_INFO_PEER_ID);
+ tid = MS(le16_to_cpu(record->info),
+ HTT_TX_FETCH_RECORD_INFO_TID);
+ max_num_msdus = le16_to_cpu(record->num_msdus);
+ max_num_bytes = le32_to_cpu(record->num_bytes);
+
+ ath10k_dbg(ar, ATH10K_DBG_HTT, "htt rx tx fetch record %i peer_id %hu tid %hhu msdus %zu bytes %zu\n",
+ i, peer_id, tid, max_num_msdus, max_num_bytes);
+
+ if (unlikely(peer_id >= ar->htt.tx_q_state.num_peers) ||
+ unlikely(tid >= ar->htt.tx_q_state.num_tids)) {
+ ath10k_warn(ar, "received out of range peer_id %hu tid %hhu\n",
+ peer_id, tid);
+ continue;
+ }
+
+ spin_lock_bh(&ar->data_lock);
+ txq = ath10k_mac_txq_lookup(ar, peer_id, tid);
+ spin_unlock_bh(&ar->data_lock);
+
+ /* It is okay to release the lock and use txq because RCU read
+ * lock is held.
+ */
+
+ if (unlikely(!txq)) {
+ ath10k_warn(ar, "failed to lookup txq for peer_id %hu tid %hhu\n",
+ peer_id, tid);
+ continue;
+ }
+
+ num_msdus = 0;
+ num_bytes = 0;
+
+ while (num_msdus < max_num_msdus &&
+ num_bytes < max_num_bytes) {
+ ret = ath10k_mac_tx_push_txq(hw, txq);
+ if (ret < 0)
+ break;
+
+ num_msdus++;
+ num_bytes += ret;
+ }
+
+ record->num_msdus = cpu_to_le16(num_msdus);
+ record->num_bytes = cpu_to_le32(num_bytes);
+
+ ath10k_htt_tx_txq_recalc(hw, txq);
+ }
+
+ rcu_read_unlock();
+
+ resp_ids = ath10k_htt_get_tx_fetch_ind_resp_ids(&resp->tx_fetch_ind);
+ ath10k_htt_rx_tx_fetch_resp_id_confirm(ar, resp_ids, num_resp_ids);
+
+ ret = ath10k_htt_tx_fetch_resp(ar,
+ resp->tx_fetch_ind.token,
+ resp->tx_fetch_ind.fetch_seq_num,
+ resp->tx_fetch_ind.records,
+ num_records);
+ if (unlikely(ret)) {
+ ath10k_warn(ar, "failed to submit tx fetch resp for token 0x%08x: %d\n",
+ le32_to_cpu(resp->tx_fetch_ind.token), ret);
+ /* FIXME: request fw restart */
+ }
- tasklet_schedule(&htt->rx_replenish_task);
+ ath10k_htt_tx_txq_sync(ar);
}
-void ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
+static void ath10k_htt_rx_tx_fetch_confirm(struct ath10k *ar,
+ struct sk_buff *skb)
+{
+ const struct htt_resp *resp = (void *)skb->data;
+ size_t len;
+ int num_resp_ids;
+
+ ath10k_dbg(ar, ATH10K_DBG_HTT, "htt rx tx fetch confirm\n");
+
+ len = sizeof(resp->hdr) + sizeof(resp->tx_fetch_confirm);
+ if (unlikely(skb->len < len)) {
+ ath10k_warn(ar, "received corrupted tx_fetch_confirm event: buffer too short\n");
+ return;
+ }
+
+ num_resp_ids = le16_to_cpu(resp->tx_fetch_confirm.num_resp_ids);
+ len += sizeof(resp->tx_fetch_confirm.resp_ids[0]) * num_resp_ids;
+
+ if (unlikely(skb->len < len)) {
+ ath10k_warn(ar, "received corrupted tx_fetch_confirm event: resp_ids buffer overflow\n");
+ return;
+ }
+
+ ath10k_htt_rx_tx_fetch_resp_id_confirm(ar,
+ resp->tx_fetch_confirm.resp_ids,
+ num_resp_ids);
+}
+
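+/* Handle an HTT tx mode switch indication: update the tx queue state
+ * (push vs push/pull mode, push threshold) and the per-txq push allowance
+ * from the records supplied by firmware.
+ */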
+static void ath10k_htt_rx_tx_mode_switch_ind(struct ath10k *ar,
+ struct sk_buff *skb)
+{
+ const struct htt_resp *resp = (void *)skb->data;
+ const struct htt_tx_mode_switch_record *record;
+ struct ieee80211_txq *txq;
+ struct ath10k_txq *artxq;
+ size_t len;
+ size_t num_records;
+ enum htt_tx_mode_switch_mode mode;
+ bool enable;
+ u16 info0;
+ u16 info1;
+ u16 threshold;
+ u16 peer_id;
+ u8 tid;
+ int i;
+
+ ath10k_dbg(ar, ATH10K_DBG_HTT, "htt rx tx mode switch ind\n");
+
+ len = sizeof(resp->hdr) + sizeof(resp->tx_mode_switch_ind);
+ if (unlikely(skb->len < len)) {
+ ath10k_warn(ar, "received corrupted tx_mode_switch_ind event: buffer too short\n");
+ return;
+ }
+
+ info0 = le16_to_cpu(resp->tx_mode_switch_ind.info0);
+ info1 = le16_to_cpu(resp->tx_mode_switch_ind.info1);
+
+ enable = !!(info0 & HTT_TX_MODE_SWITCH_IND_INFO0_ENABLE);
+ num_records = MS(info0, HTT_TX_MODE_SWITCH_IND_INFO0_NUM_RECORDS);
+ mode = MS(info1, HTT_TX_MODE_SWITCH_IND_INFO1_MODE);
+ threshold = MS(info1, HTT_TX_MODE_SWITCH_IND_INFO1_THRESHOLD);
+
+ ath10k_dbg(ar, ATH10K_DBG_HTT,
+ "htt rx tx mode switch ind info0 0x%04hx info1 0x%04hx enable %d num records %zd mode %d threshold %hu\n",
+ info0, info1, enable, num_records, mode, threshold);
+
+ len += sizeof(resp->tx_mode_switch_ind.records[0]) * num_records;
+
+ if (unlikely(skb->len < len)) {
+ ath10k_warn(ar, "received corrupted tx_mode_switch_mode_ind event: too many records\n");
+ return;
+ }
+
+ switch (mode) {
+ case HTT_TX_MODE_SWITCH_PUSH:
+ case HTT_TX_MODE_SWITCH_PUSH_PULL:
+ break;
+ default:
+ ath10k_warn(ar, "received invalid tx_mode_switch_mode_ind mode %d, ignoring\n",
+ mode);
+ return;
+ }
+
+ if (!enable)
+ return;
+
+ ar->htt.tx_q_state.enabled = enable;
+ ar->htt.tx_q_state.mode = mode;
+ ar->htt.tx_q_state.num_push_allowed = threshold;
+
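+ /* Apply the per-txq push limits carried in the mode switch records */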
+ rcu_read_lock();
+
+ for (i = 0; i < num_records; i++) {
+ record = &resp->tx_mode_switch_ind.records[i];
+ info0 = le16_to_cpu(record->info0);
+ peer_id = MS(info0, HTT_TX_MODE_SWITCH_RECORD_INFO0_PEER_ID);
+ tid = MS(info0, HTT_TX_MODE_SWITCH_RECORD_INFO0_TID);
+
+ if (unlikely(peer_id >= ar->htt.tx_q_state.num_peers) ||
+ unlikely(tid >= ar->htt.tx_q_state.num_tids)) {
+ ath10k_warn(ar, "received out of range peer_id %hu tid %hhu\n",
+ peer_id, tid);
+ continue;
+ }
+
+ spin_lock_bh(&ar->data_lock);
+ txq = ath10k_mac_txq_lookup(ar, peer_id, tid);
+ spin_unlock_bh(&ar->data_lock);
+
+ /* It is okay to release the lock and use txq because RCU read
+ * lock is held.
+ */
+
+ if (unlikely(!txq)) {
+ ath10k_warn(ar, "failed to lookup txq for peer_id %hu tid %hhu\n",
+ peer_id, tid);
+ continue;
+ }
+
+ spin_lock_bh(&ar->htt.tx_lock);
+ artxq = (void *)txq->drv_priv;
+ artxq->num_push_allowed = le16_to_cpu(record->num_max_msdus);
+ spin_unlock_bh(&ar->htt.tx_lock);
+ }
+
+ rcu_read_unlock();
+
+ ath10k_mac_tx_push_pending(ar);
+}
+
+static inline enum nl80211_band phy_mode_to_band(u32 phy_mode)
+{
+ enum nl80211_band band;
+
+ switch (phy_mode) {
+ case MODE_11A:
+ case MODE_11NA_HT20:
+ case MODE_11NA_HT40:
+ case MODE_11AC_VHT20:
+ case MODE_11AC_VHT40:
+ case MODE_11AC_VHT80:
+ band = NL80211_BAND_5GHZ;
+ break;
+ case MODE_11G:
+ case MODE_11B:
+ case MODE_11GONLY:
+ case MODE_11NG_HT20:
+ case MODE_11NG_HT40:
+ case MODE_11AC_VHT20_2G:
+ case MODE_11AC_VHT40_2G:
+ case MODE_11AC_VHT80_2G:
+ default:
+ band = NL80211_BAND_2GHZ;
+ }
+
+ return band;
+}
+
+void ath10k_htt_htc_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
+{
+ bool release;
+
+ release = ath10k_htt_t2h_msg_handler(ar, skb);
+
+ /* Free the indication buffer */
+ if (release)
+ dev_kfree_skb_any(skb);
+}
+
+bool ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
{
struct ath10k_htt *htt = &ar->htt;
struct htt_resp *resp = (struct htt_resp *)skb->data;
@@ -1998,8 +2237,7 @@ void ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
if (resp->hdr.msg_type >= ar->htt.t2h_msg_types_max) {
ath10k_dbg(ar, ATH10K_DBG_HTT, "htt rx, unsupported msg_type: 0x%0X\n max: 0x%0X",
resp->hdr.msg_type, ar->htt.t2h_msg_types_max);
- dev_kfree_skb_any(skb);
- return;
+ return true;
}
type = ar->htt.t2h_msg_types[resp->hdr.msg_type];
@@ -2011,9 +2249,8 @@ void ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
break;
}
case HTT_T2H_MSG_TYPE_RX_IND:
- skb_queue_tail(&htt->rx_compl_q, skb);
- tasklet_schedule(&htt->txrx_compl_task);
- return;
+ ath10k_htt_rx_proc_rx_ind(htt, &resp->rx_ind);
+ break;
case HTT_T2H_MSG_TYPE_PEER_MAP: {
struct htt_peer_map_event ev = {
.vdev_id = resp->peer_map.vdev_id,
@@ -2034,28 +2271,33 @@ void ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
struct htt_tx_done tx_done = {};
int status = __le32_to_cpu(resp->mgmt_tx_completion.status);
- tx_done.msdu_id =
- __le32_to_cpu(resp->mgmt_tx_completion.desc_id);
+ tx_done.msdu_id = __le32_to_cpu(resp->mgmt_tx_completion.desc_id);
switch (status) {
case HTT_MGMT_TX_STATUS_OK:
- tx_done.success = true;
+ tx_done.status = HTT_TX_COMPL_STATE_ACK;
break;
case HTT_MGMT_TX_STATUS_RETRY:
- tx_done.no_ack = true;
+ tx_done.status = HTT_TX_COMPL_STATE_NOACK;
break;
case HTT_MGMT_TX_STATUS_DROP:
- tx_done.discard = true;
+ tx_done.status = HTT_TX_COMPL_STATE_DISCARD;
break;
}
- ath10k_txrx_tx_unref(htt, &tx_done);
+ status = ath10k_txrx_tx_unref(htt, &tx_done);
+ if (!status) {
+ spin_lock_bh(&htt->tx_lock);
+ ath10k_htt_tx_mgmt_dec_pending(htt);
+ spin_unlock_bh(&htt->tx_lock);
+ }
+ ath10k_mac_tx_push_pending(ar);
break;
}
case HTT_T2H_MSG_TYPE_TX_COMPL_IND:
- skb_queue_tail(&htt->tx_compl_q, skb);
+ ath10k_htt_rx_tx_compl_ind(htt->ar, skb);
tasklet_schedule(&htt->txrx_compl_task);
- return;
+ break;
case HTT_T2H_MSG_TYPE_SEC_IND: {
struct ath10k *ar = htt->ar;
struct htt_security_indication *ev = &resp->security_indication;
@@ -2071,7 +2313,7 @@ void ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
case HTT_T2H_MSG_TYPE_RX_FRAG_IND: {
ath10k_dbg_dump(ar, ATH10K_DBG_HTT_DUMP, NULL, "htt event: ",
skb->data, skb->len);
- ath10k_htt_rx_frag_handler(htt, &resp->rx_frag_ind);
+ ath10k_htt_rx_frag_handler(htt);
break;
}
case HTT_T2H_MSG_TYPE_TEST:
@@ -2111,18 +2353,39 @@ void ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
case HTT_T2H_MSG_TYPE_RX_IN_ORD_PADDR_IND: {
skb_queue_tail(&htt->rx_in_ord_compl_q, skb);
tasklet_schedule(&htt->txrx_compl_task);
- return;
+ return false;
}
case HTT_T2H_MSG_TYPE_TX_CREDIT_UPDATE_IND:
break;
- case HTT_T2H_MSG_TYPE_CHAN_CHANGE:
+ case HTT_T2H_MSG_TYPE_CHAN_CHANGE: {
+ u32 phymode = __le32_to_cpu(resp->chan_change.phymode);
+ u32 freq = __le32_to_cpu(resp->chan_change.freq);
+
+ ar->tgt_oper_chan =
+ __ieee80211_get_channel(ar->hw->wiphy, freq);
+ ath10k_dbg(ar, ATH10K_DBG_HTT,
+ "htt chan change freq %u phymode %s\n",
+ freq, ath10k_wmi_phymode_str(phymode));
break;
+ }
case HTT_T2H_MSG_TYPE_AGGR_CONF:
break;
- case HTT_T2H_MSG_TYPE_TX_FETCH_IND:
+ case HTT_T2H_MSG_TYPE_TX_FETCH_IND: {
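+ /* Copy the skb; the caller frees the original once this handler
+ * returns true.
+ */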
+ struct sk_buff *tx_fetch_ind = skb_copy(skb, GFP_ATOMIC);
+
+ if (!tx_fetch_ind) {
+ ath10k_warn(ar, "failed to copy htt tx fetch ind\n");
+ break;
+ }
+ skb_queue_tail(&htt->tx_fetch_ind_q, tx_fetch_ind);
+ tasklet_schedule(&htt->txrx_compl_task);
+ break;
+ }
case HTT_T2H_MSG_TYPE_TX_FETCH_CONFIRM:
+ ath10k_htt_rx_tx_fetch_confirm(ar, skb);
+ break;
case HTT_T2H_MSG_TYPE_TX_MODE_SWITCH_IND:
- /* TODO: Implement pull-push logic */
+ ath10k_htt_rx_tx_mode_switch_ind(ar, skb);
break;
case HTT_T2H_MSG_TYPE_EN_STATS:
default:
@@ -2132,9 +2395,7 @@ void ath10k_htt_t2h_msg_handler(struct ath10k *ar, struct sk_buff *skb)
skb->data, skb->len);
break;
};
-
- /* Free the indication buffer */
- dev_kfree_skb_any(skb);
+ return true;
}
EXPORT_SYMBOL(ath10k_htt_t2h_msg_handler);
@@ -2150,40 +2411,47 @@ static void ath10k_htt_txrx_compl_task(unsigned long ptr)
{
struct ath10k_htt *htt = (struct ath10k_htt *)ptr;
struct ath10k *ar = htt->ar;
- struct sk_buff_head tx_q;
- struct sk_buff_head rx_q;
+ struct htt_tx_done tx_done = {};
struct sk_buff_head rx_ind_q;
- struct htt_resp *resp;
+ struct sk_buff_head tx_ind_q;
struct sk_buff *skb;
unsigned long flags;
+ int num_mpdus;
- __skb_queue_head_init(&tx_q);
- __skb_queue_head_init(&rx_q);
__skb_queue_head_init(&rx_ind_q);
-
- spin_lock_irqsave(&htt->tx_compl_q.lock, flags);
- skb_queue_splice_init(&htt->tx_compl_q, &tx_q);
- spin_unlock_irqrestore(&htt->tx_compl_q.lock, flags);
-
- spin_lock_irqsave(&htt->rx_compl_q.lock, flags);
- skb_queue_splice_init(&htt->rx_compl_q, &rx_q);
- spin_unlock_irqrestore(&htt->rx_compl_q.lock, flags);
+ __skb_queue_head_init(&tx_ind_q);
spin_lock_irqsave(&htt->rx_in_ord_compl_q.lock, flags);
skb_queue_splice_init(&htt->rx_in_ord_compl_q, &rx_ind_q);
spin_unlock_irqrestore(&htt->rx_in_ord_compl_q.lock, flags);
- while ((skb = __skb_dequeue(&tx_q))) {
- ath10k_htt_rx_frm_tx_compl(htt->ar, skb);
+ spin_lock_irqsave(&htt->tx_fetch_ind_q.lock, flags);
+ skb_queue_splice_init(&htt->tx_fetch_ind_q, &tx_ind_q);
+ spin_unlock_irqrestore(&htt->tx_fetch_ind_q.lock, flags);
+
+ /* kfifo_get: called only within txrx_tasklet so it's neatly serialized.
+ * From kfifo_get() documentation:
+ * Note that with only one concurrent reader and one concurrent writer,
+ * you don't need extra locking to use these macro.
+ */
+ while (kfifo_get(&htt->txdone_fifo, &tx_done))
+ ath10k_txrx_tx_unref(htt, &tx_done);
+
+ while ((skb = __skb_dequeue(&tx_ind_q))) {
+ ath10k_htt_rx_tx_fetch_ind(ar, skb);
dev_kfree_skb_any(skb);
}
- while ((skb = __skb_dequeue(&rx_q))) {
- resp = (struct htt_resp *)skb->data;
- spin_lock_bh(&htt->rx_ring.lock);
- ath10k_htt_rx_handler(htt, &resp->rx_ind);
- spin_unlock_bh(&htt->rx_ring.lock);
- dev_kfree_skb_any(skb);
+ ath10k_mac_tx_push_pending(ar);
+
+ num_mpdus = atomic_read(&htt->num_mpdus_ready);
+
+ while (num_mpdus) {
+ if (ath10k_htt_rx_handle_amsdu(htt))
+ break;
+
+ num_mpdus--;
+ atomic_dec(&htt->num_mpdus_ready);
}
while ((skb = __skb_dequeue(&rx_ind_q))) {
@@ -2192,4 +2460,6 @@ static void ath10k_htt_txrx_compl_task(unsigned long ptr)
spin_unlock_bh(&htt->rx_ring.lock);
dev_kfree_skb_any(skb);
}
+
+ ath10k_htt_rx_msdu_buff_replenish(htt);
}
diff --git a/drivers/net/wireless/ath/ath10k/htt_tx.c b/drivers/net/wireless/ath/ath10k/htt_tx.c
index 95acb727c068..6269c610b0a3 100644
--- a/drivers/net/wireless/ath/ath10k/htt_tx.c
+++ b/drivers/net/wireless/ath/ath10k/htt_tx.c
@@ -22,53 +22,183 @@
#include "txrx.h"
#include "debug.h"
-void __ath10k_htt_tx_dec_pending(struct ath10k_htt *htt, bool limit_mgmt_desc)
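+/* Encode a queue depth in the HTT factor/exponent format, i.e. roughly
+ * count ~= factor * 128 * 8^exp, saturating at 0xff.
+ */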
+static u8 ath10k_htt_tx_txq_calc_size(size_t count)
{
- if (limit_mgmt_desc)
- htt->num_pending_mgmt_tx--;
+ int exp;
+ int factor;
+
+ exp = 0;
+ factor = count >> 7;
+
+ while (factor >= 64 && exp < 4) {
+ factor >>= 3;
+ exp++;
+ }
+
+ if (exp == 4)
+ return 0xff;
+
+ if (count > 0)
+ factor = max(1, factor);
+
+ return SM(exp, HTT_TX_Q_STATE_ENTRY_EXP) |
+ SM(factor, HTT_TX_Q_STATE_ENTRY_FACTOR);
+}
+
+static void __ath10k_htt_tx_txq_recalc(struct ieee80211_hw *hw,
+ struct ieee80211_txq *txq)
+{
+ struct ath10k *ar = hw->priv;
+ struct ath10k_sta *arsta;
+ struct ath10k_vif *arvif = (void *)txq->vif->drv_priv;
+ unsigned long frame_cnt;
+ unsigned long byte_cnt;
+ int idx;
+ u32 bit;
+ u16 peer_id;
+ u8 tid;
+ u8 count;
+
+ lockdep_assert_held(&ar->htt.tx_lock);
+
+ if (!ar->htt.tx_q_state.enabled)
+ return;
+
+ if (ar->htt.tx_q_state.mode != HTT_TX_MODE_SWITCH_PUSH_PULL)
+ return;
+
+ if (txq->sta) {
+ arsta = (void *)txq->sta->drv_priv;
+ peer_id = arsta->peer_id;
+ } else {
+ peer_id = arvif->peer_id;
+ }
+
+ tid = txq->tid;
+ bit = BIT(peer_id % 32);
+ idx = peer_id / 32;
+
+ ieee80211_txq_get_depth(txq, &frame_cnt, &byte_cnt);
+ count = ath10k_htt_tx_txq_calc_size(byte_cnt);
+
+ if (unlikely(peer_id >= ar->htt.tx_q_state.num_peers) ||
+ unlikely(tid >= ar->htt.tx_q_state.num_tids)) {
+ ath10k_warn(ar, "refusing to update txq for peer_id %hu tid %hhu due to out of bounds\n",
+ peer_id, tid);
+ return;
+ }
+
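+ /* Flag the peer/tid in the bitmap only when the encoded count is non-zero */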
+ ar->htt.tx_q_state.vaddr->count[tid][peer_id] = count;
+ ar->htt.tx_q_state.vaddr->map[tid][idx] &= ~bit;
+ ar->htt.tx_q_state.vaddr->map[tid][idx] |= count ? bit : 0;
+
+ ath10k_dbg(ar, ATH10K_DBG_HTT, "htt tx txq state update peer_id %hu tid %hhu count %hhu\n",
+ peer_id, tid, count);
+}
+
+static void __ath10k_htt_tx_txq_sync(struct ath10k *ar)
+{
+ u32 seq;
+ size_t size;
+
+ lockdep_assert_held(&ar->htt.tx_lock);
+
+ if (!ar->htt.tx_q_state.enabled)
+ return;
+
+ if (ar->htt.tx_q_state.mode != HTT_TX_MODE_SWITCH_PUSH_PULL)
+ return;
+
+ seq = le32_to_cpu(ar->htt.tx_q_state.vaddr->seq);
+ seq++;
+ ar->htt.tx_q_state.vaddr->seq = cpu_to_le32(seq);
+
+ ath10k_dbg(ar, ATH10K_DBG_HTT, "htt tx txq state update commit seq %u\n",
+ seq);
+
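+ /* Make the CPU-updated queue state visible to the device */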
+ size = sizeof(*ar->htt.tx_q_state.vaddr);
+ dma_sync_single_for_device(ar->dev,
+ ar->htt.tx_q_state.paddr,
+ size,
+ DMA_TO_DEVICE);
+}
+
+void ath10k_htt_tx_txq_recalc(struct ieee80211_hw *hw,
+ struct ieee80211_txq *txq)
+{
+ struct ath10k *ar = hw->priv;
+
+ spin_lock_bh(&ar->htt.tx_lock);
+ __ath10k_htt_tx_txq_recalc(hw, txq);
+ spin_unlock_bh(&ar->htt.tx_lock);
+}
+
+void ath10k_htt_tx_txq_sync(struct ath10k *ar)
+{
+ spin_lock_bh(&ar->htt.tx_lock);
+ __ath10k_htt_tx_txq_sync(ar);
+ spin_unlock_bh(&ar->htt.tx_lock);
+}
+
+void ath10k_htt_tx_txq_update(struct ieee80211_hw *hw,
+ struct ieee80211_txq *txq)
+{
+ struct ath10k *ar = hw->priv;
+
+ spin_lock_bh(&ar->htt.tx_lock);
+ __ath10k_htt_tx_txq_recalc(hw, txq);
+ __ath10k_htt_tx_txq_sync(ar);
+ spin_unlock_bh(&ar->htt.tx_lock);
+}
+
+void ath10k_htt_tx_dec_pending(struct ath10k_htt *htt)
+{
+ lockdep_assert_held(&htt->tx_lock);
htt->num_pending_tx--;
if (htt->num_pending_tx == htt->max_num_pending_tx - 1)
ath10k_mac_tx_unlock(htt->ar, ATH10K_TX_PAUSE_Q_FULL);
}
-static void ath10k_htt_tx_dec_pending(struct ath10k_htt *htt,
- bool limit_mgmt_desc)
+int ath10k_htt_tx_inc_pending(struct ath10k_htt *htt)
{
- spin_lock_bh(&htt->tx_lock);
- __ath10k_htt_tx_dec_pending(htt, limit_mgmt_desc);
- spin_unlock_bh(&htt->tx_lock);
+ lockdep_assert_held(&htt->tx_lock);
+
+ if (htt->num_pending_tx >= htt->max_num_pending_tx)
+ return -EBUSY;
+
+ htt->num_pending_tx++;
+ if (htt->num_pending_tx == htt->max_num_pending_tx)
+ ath10k_mac_tx_lock(htt->ar, ATH10K_TX_PAUSE_Q_FULL);
+
+ return 0;
}
-static int ath10k_htt_tx_inc_pending(struct ath10k_htt *htt,
- bool limit_mgmt_desc, bool is_probe_resp)
+int ath10k_htt_tx_mgmt_inc_pending(struct ath10k_htt *htt, bool is_mgmt,
+ bool is_presp)
{
struct ath10k *ar = htt->ar;
- int ret = 0;
- spin_lock_bh(&htt->tx_lock);
+ lockdep_assert_held(&htt->tx_lock);
- if (htt->num_pending_tx >= htt->max_num_pending_tx) {
- ret = -EBUSY;
- goto exit;
- }
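+ /* Only probe responses are limited; other mgmt frames are just counted */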
+ if (!is_mgmt || !ar->hw_params.max_probe_resp_desc_thres)
+ return 0;
- if (limit_mgmt_desc) {
- if (is_probe_resp && (htt->num_pending_mgmt_tx >
- ar->hw_params.max_probe_resp_desc_thres)) {
- ret = -EBUSY;
- goto exit;
- }
- htt->num_pending_mgmt_tx++;
- }
+ if (is_presp &&
+ ar->hw_params.max_probe_resp_desc_thres < htt->num_pending_mgmt_tx)
+ return -EBUSY;
- htt->num_pending_tx++;
- if (htt->num_pending_tx == htt->max_num_pending_tx)
- ath10k_mac_tx_lock(htt->ar, ATH10K_TX_PAUSE_Q_FULL);
+ htt->num_pending_mgmt_tx++;
-exit:
- spin_unlock_bh(&htt->tx_lock);
- return ret;
+ return 0;
+}
+
+void ath10k_htt_tx_mgmt_dec_pending(struct ath10k_htt *htt)
+{
+ lockdep_assert_held(&htt->tx_lock);
+
+ if (!htt->ar->hw_params.max_probe_resp_desc_thres)
+ return;
+
+ htt->num_pending_mgmt_tx--;
}
int ath10k_htt_tx_alloc_msdu_id(struct ath10k_htt *htt, struct sk_buff *skb)
@@ -137,7 +267,8 @@ static void ath10k_htt_tx_free_txq(struct ath10k_htt *htt)
struct ath10k *ar = htt->ar;
size_t size;
- if (!test_bit(ATH10K_FW_FEATURE_PEER_FLOW_CONTROL, ar->fw_features))
+ if (!test_bit(ATH10K_FW_FEATURE_PEER_FLOW_CONTROL,
+ ar->running_fw->fw_file.fw_features))
return;
size = sizeof(*htt->tx_q_state.vaddr);
@@ -152,7 +283,8 @@ static int ath10k_htt_tx_alloc_txq(struct ath10k_htt *htt)
size_t size;
int ret;
- if (!test_bit(ATH10K_FW_FEATURE_PEER_FLOW_CONTROL, ar->fw_features))
+ if (!test_bit(ATH10K_FW_FEATURE_PEER_FLOW_CONTROL,
+ ar->running_fw->fw_file.fw_features))
return 0;
htt->tx_q_state.num_peers = HTT_TX_Q_STATE_NUM_PEERS;
@@ -209,8 +341,18 @@ int ath10k_htt_tx_alloc(struct ath10k_htt *htt)
goto free_frag_desc;
}
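+ /* Size the completion fifo to hold an entry for every pending tx */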
+ size = roundup_pow_of_two(htt->max_num_pending_tx);
+ ret = kfifo_alloc(&htt->txdone_fifo, size, GFP_KERNEL);
+ if (ret) {
+ ath10k_err(ar, "failed to alloc txdone fifo: %d\n", ret);
+ goto free_txq;
+ }
+
return 0;
+free_txq:
+ ath10k_htt_tx_free_txq(htt);
+
free_frag_desc:
ath10k_htt_tx_free_cont_frag_desc(htt);
@@ -234,8 +376,8 @@ static int ath10k_htt_tx_clean_up_pending(int msdu_id, void *skb, void *ctx)
ath10k_dbg(ar, ATH10K_DBG_HTT, "force cleanup msdu_id %hu\n", msdu_id);
- tx_done.discard = 1;
tx_done.msdu_id = msdu_id;
+ tx_done.status = HTT_TX_COMPL_STATE_DISCARD;
ath10k_txrx_tx_unref(htt, &tx_done);
@@ -258,6 +400,8 @@ void ath10k_htt_tx_free(struct ath10k_htt *htt)
ath10k_htt_tx_free_txq(htt);
ath10k_htt_tx_free_cont_frag_desc(htt);
+ WARN_ON(!kfifo_is_empty(&htt->txdone_fifo));
+ kfifo_free(&htt->txdone_fifo);
}
void ath10k_htt_htc_tx_complete(struct ath10k *ar, struct sk_buff *skb)
@@ -371,7 +515,8 @@ int ath10k_htt_send_frag_desc_bank_cfg(struct ath10k_htt *htt)
info |= SM(htt->tx_q_state.type,
HTT_FRAG_DESC_BANK_CFG_INFO_Q_STATE_DEPTH_TYPE);
- if (test_bit(ATH10K_FW_FEATURE_PEER_FLOW_CONTROL, ar->fw_features))
+ if (test_bit(ATH10K_FW_FEATURE_PEER_FLOW_CONTROL,
+ ar->running_fw->fw_file.fw_features))
info |= HTT_FRAG_DESC_BANK_CFG_INFO_Q_STATE_VALID;
cfg = &cmd->frag_desc_bank_cfg;
@@ -535,6 +680,55 @@ int ath10k_htt_h2t_aggr_cfg_msg(struct ath10k_htt *htt,
return 0;
}
+int ath10k_htt_tx_fetch_resp(struct ath10k *ar,
+ __le32 token,
+ __le16 fetch_seq_num,
+ struct htt_tx_fetch_record *records,
+ size_t num_records)
+{
+ struct sk_buff *skb;
+ struct htt_cmd *cmd;
+ const u16 resp_id = 0;
+ int len = 0;
+ int ret;
+
+ /* Response IDs are echoed back only for host driver convenience
+ * purposes. They aren't used for anything in the driver yet so use 0.
+ */
+
+ len += sizeof(cmd->hdr);
+ len += sizeof(cmd->tx_fetch_resp);
+ len += sizeof(cmd->tx_fetch_resp.records[0]) * num_records;
+
+ skb = ath10k_htc_alloc_skb(ar, len);
+ if (!skb)
+ return -ENOMEM;
+
+ skb_put(skb, len);
+ cmd = (struct htt_cmd *)skb->data;
+ cmd->hdr.msg_type = HTT_H2T_MSG_TYPE_TX_FETCH_RESP;
+ cmd->tx_fetch_resp.resp_id = cpu_to_le16(resp_id);
+ cmd->tx_fetch_resp.fetch_seq_num = fetch_seq_num;
+ cmd->tx_fetch_resp.num_records = cpu_to_le16(num_records);
+ cmd->tx_fetch_resp.token = token;
+
+ memcpy(cmd->tx_fetch_resp.records, records,
+ sizeof(records[0]) * num_records);
+
+ ret = ath10k_htc_send(&ar->htc, ar->htt.eid, skb);
+ if (ret) {
+ ath10k_warn(ar, "failed to submit htc command: %d\n", ret);
+ goto err_free_skb;
+ }
+
+ return 0;
+
+err_free_skb:
+ dev_kfree_skb_any(skb);
+
+ return ret;
+}
+
static u8 ath10k_htt_tx_get_vdev_id(struct ath10k *ar, struct sk_buff *skb)
{
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
@@ -576,20 +770,6 @@ int ath10k_htt_mgmt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
int msdu_id = -1;
int res;
struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)msdu->data;
- bool limit_mgmt_desc = false;
- bool is_probe_resp = false;
-
- if (ar->hw_params.max_probe_resp_desc_thres) {
- limit_mgmt_desc = true;
-
- if (ieee80211_is_probe_resp(hdr->frame_control))
- is_probe_resp = true;
- }
-
- res = ath10k_htt_tx_inc_pending(htt, limit_mgmt_desc, is_probe_resp);
-
- if (res)
- goto err;
len += sizeof(cmd->hdr);
len += sizeof(cmd->mgmt_tx);
@@ -598,7 +778,7 @@ int ath10k_htt_mgmt_tx(struct ath10k_htt *htt, struct sk_buff *msdu)
res = ath10k_htt_tx_alloc_msdu_id(htt, msdu);
spin_unlock_bh(&htt->tx_lock);
if (res < 0)
- goto err_tx_dec;
+ goto err;
msdu_id = res;
@@ -649,8 +829,6 @@ err_free_msdu_id:
spin_lock_bh(&htt->tx_lock);
ath10k_htt_tx_free_msdu_id(htt, msdu_id);
spin_unlock_bh(&htt->tx_lock);
-err_tx_dec:
- ath10k_htt_tx_dec_pending(htt, limit_mgmt_desc);
err:
return res;
}
@@ -677,26 +855,12 @@ int ath10k_htt_tx(struct ath10k_htt *htt, enum ath10k_hw_txrx_mode txmode,
u32 frags_paddr = 0;
u32 txbuf_paddr;
struct htt_msdu_ext_desc *ext_desc = NULL;
- bool limit_mgmt_desc = false;
- bool is_probe_resp = false;
-
- if (unlikely(ieee80211_is_mgmt(hdr->frame_control)) &&
- ar->hw_params.max_probe_resp_desc_thres) {
- limit_mgmt_desc = true;
-
- if (ieee80211_is_probe_resp(hdr->frame_control))
- is_probe_resp = true;
- }
-
- res = ath10k_htt_tx_inc_pending(htt, limit_mgmt_desc, is_probe_resp);
- if (res)
- goto err;
spin_lock_bh(&htt->tx_lock);
res = ath10k_htt_tx_alloc_msdu_id(htt, msdu);
spin_unlock_bh(&htt->tx_lock);
if (res < 0)
- goto err_tx_dec;
+ goto err;
msdu_id = res;
@@ -862,11 +1026,7 @@ int ath10k_htt_tx(struct ath10k_htt *htt, enum ath10k_hw_txrx_mode txmode,
err_unmap_msdu:
dma_unmap_single(dev, skb_cb->paddr, msdu->len, DMA_TO_DEVICE);
err_free_msdu_id:
- spin_lock_bh(&htt->tx_lock);
ath10k_htt_tx_free_msdu_id(htt, msdu_id);
- spin_unlock_bh(&htt->tx_lock);
-err_tx_dec:
- ath10k_htt_tx_dec_pending(htt, limit_mgmt_desc);
err:
return res;
}
diff --git a/drivers/net/wireless/ath/ath10k/hw.h b/drivers/net/wireless/ath/ath10k/hw.h
index f0cfbc745c97..aedd8987040b 100644
--- a/drivers/net/wireless/ath/ath10k/hw.h
+++ b/drivers/net/wireless/ath/ath10k/hw.h
@@ -35,8 +35,6 @@
#define QCA988X_HW_2_0_VERSION 0x4100016c
#define QCA988X_HW_2_0_CHIP_ID_REV 0x2
#define QCA988X_HW_2_0_FW_DIR ATH10K_FW_DIR "/QCA988X/hw2.0"
-#define QCA988X_HW_2_0_FW_FILE "firmware.bin"
-#define QCA988X_HW_2_0_OTP_FILE "otp.bin"
#define QCA988X_HW_2_0_BOARD_DATA_FILE "board.bin"
#define QCA988X_HW_2_0_PATCH_LOAD_ADDR 0x1234
@@ -76,14 +74,10 @@ enum qca9377_chip_id_rev {
};
#define QCA6174_HW_2_1_FW_DIR "ath10k/QCA6174/hw2.1"
-#define QCA6174_HW_2_1_FW_FILE "firmware.bin"
-#define QCA6174_HW_2_1_OTP_FILE "otp.bin"
#define QCA6174_HW_2_1_BOARD_DATA_FILE "board.bin"
#define QCA6174_HW_2_1_PATCH_LOAD_ADDR 0x1234
#define QCA6174_HW_3_0_FW_DIR "ath10k/QCA6174/hw3.0"
-#define QCA6174_HW_3_0_FW_FILE "firmware.bin"
-#define QCA6174_HW_3_0_OTP_FILE "otp.bin"
#define QCA6174_HW_3_0_BOARD_DATA_FILE "board.bin"
#define QCA6174_HW_3_0_PATCH_LOAD_ADDR 0x1234
@@ -94,23 +88,17 @@ enum qca9377_chip_id_rev {
#define QCA99X0_HW_2_0_DEV_VERSION 0x01000000
#define QCA99X0_HW_2_0_CHIP_ID_REV 0x1
#define QCA99X0_HW_2_0_FW_DIR ATH10K_FW_DIR "/QCA99X0/hw2.0"
-#define QCA99X0_HW_2_0_FW_FILE "firmware.bin"
-#define QCA99X0_HW_2_0_OTP_FILE "otp.bin"
#define QCA99X0_HW_2_0_BOARD_DATA_FILE "board.bin"
#define QCA99X0_HW_2_0_PATCH_LOAD_ADDR 0x1234
/* QCA9377 1.0 definitions */
#define QCA9377_HW_1_0_FW_DIR ATH10K_FW_DIR "/QCA9377/hw1.0"
-#define QCA9377_HW_1_0_FW_FILE "firmware.bin"
-#define QCA9377_HW_1_0_OTP_FILE "otp.bin"
#define QCA9377_HW_1_0_BOARD_DATA_FILE "board.bin"
#define QCA9377_HW_1_0_PATCH_LOAD_ADDR 0x1234
/* QCA4019 1.0 definitions */
#define QCA4019_HW_1_0_DEV_VERSION 0x01000000
#define QCA4019_HW_1_0_FW_DIR ATH10K_FW_DIR "/QCA4019/hw1.0"
-#define QCA4019_HW_1_0_FW_FILE "firmware.bin"
-#define QCA4019_HW_1_0_OTP_FILE "otp.bin"
#define QCA4019_HW_1_0_BOARD_DATA_FILE "board.bin"
#define QCA4019_HW_1_0_PATCH_LOAD_ADDR 0x1234
@@ -134,8 +122,6 @@ enum qca9377_chip_id_rev {
#define REG_DUMP_COUNT_QCA988X 60
-#define QCA988X_CAL_DATA_LEN 2116
-
struct ath10k_fw_ie {
__le32 id;
__le32 len;
@@ -431,10 +417,14 @@ enum ath10k_hw_4addr_pad {
#define TARGET_10_4_ACTIVE_PEERS 0
#define TARGET_10_4_NUM_QCACHE_PEERS_MAX 512
+#define TARGET_10_4_QCACHE_ACTIVE_PEERS 50
+#define TARGET_10_4_QCACHE_ACTIVE_PEERS_PFC 35
#define TARGET_10_4_NUM_OFFLOAD_PEERS 0
#define TARGET_10_4_NUM_OFFLOAD_REORDER_BUFFS 0
#define TARGET_10_4_NUM_PEER_KEYS 2
#define TARGET_10_4_TGT_NUM_TIDS ((TARGET_10_4_NUM_PEERS) * 2)
+#define TARGET_10_4_NUM_MSDU_DESC (1024 + 400)
+#define TARGET_10_4_NUM_MSDU_DESC_PFC 2500
#define TARGET_10_4_AST_SKID_LIMIT 32
/* 100 ms for video, best-effort, and background */
diff --git a/drivers/net/wireless/ath/ath10k/mac.c b/drivers/net/wireless/ath/ath10k/mac.c
index 78999c9de23b..6dd1d26b357f 100644
--- a/drivers/net/wireless/ath/ath10k/mac.c
+++ b/drivers/net/wireless/ath/ath10k/mac.c
@@ -157,6 +157,26 @@ ath10k_mac_max_vht_nss(const u16 vht_mcs_mask[NL80211_VHT_NSS_MAX])
return 1;
}
+int ath10k_mac_ext_resource_config(struct ath10k *ar, u32 val)
+{
+ enum wmi_host_platform_type platform_type;
+ int ret;
+
+ if (test_bit(WMI_SERVICE_TX_MODE_DYNAMIC, ar->wmi.svc_map))
+ platform_type = WMI_HOST_PLATFORM_LOW_PERF;
+ else
+ platform_type = WMI_HOST_PLATFORM_HIGH_PERF;
+
+ ret = ath10k_wmi_ext_resource_config(ar, platform_type, val);
+
+ if (ret && ret != -EOPNOTSUPP) {
+ ath10k_warn(ar, "failed to configure ext resource: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
+}
+
/**********/
/* Crypto */
/**********/
@@ -449,10 +469,10 @@ static int ath10k_mac_vif_update_wep_key(struct ath10k_vif *arvif,
lockdep_assert_held(&ar->conf_mutex);
list_for_each_entry(peer, &ar->peers, list) {
- if (!memcmp(peer->addr, arvif->vif->addr, ETH_ALEN))
+ if (ether_addr_equal(peer->addr, arvif->vif->addr))
continue;
- if (!memcmp(peer->addr, arvif->bssid, ETH_ALEN))
+ if (ether_addr_equal(peer->addr, arvif->bssid))
continue;
if (peer->keys[key->keyidx] == key)
@@ -482,7 +502,7 @@ chan_to_phymode(const struct cfg80211_chan_def *chandef)
enum wmi_phy_mode phymode = MODE_UNKNOWN;
switch (chandef->chan->band) {
- case IEEE80211_BAND_2GHZ:
+ case NL80211_BAND_2GHZ:
switch (chandef->width) {
case NL80211_CHAN_WIDTH_20_NOHT:
if (chandef->chan->flags & IEEE80211_CHAN_NO_OFDM)
@@ -505,7 +525,7 @@ chan_to_phymode(const struct cfg80211_chan_def *chandef)
break;
}
break;
- case IEEE80211_BAND_5GHZ:
+ case NL80211_BAND_5GHZ:
switch (chandef->width) {
case NL80211_CHAN_WIDTH_20_NOHT:
phymode = MODE_11A;
@@ -618,10 +638,15 @@ ath10k_mac_get_any_chandef_iter(struct ieee80211_hw *hw,
*def = &conf->def;
}
-static int ath10k_peer_create(struct ath10k *ar, u32 vdev_id, const u8 *addr,
+static int ath10k_peer_create(struct ath10k *ar,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ u32 vdev_id,
+ const u8 *addr,
enum wmi_peer_type peer_type)
{
struct ath10k_vif *arvif;
+ struct ath10k_peer *peer;
int num_peers = 0;
int ret;
@@ -650,6 +675,22 @@ static int ath10k_peer_create(struct ath10k *ar, u32 vdev_id, const u8 *addr,
return ret;
}
+ spin_lock_bh(&ar->data_lock);
+
+ peer = ath10k_peer_find(ar, vdev_id, addr);
+ if (!peer) {
+ ath10k_warn(ar, "failed to find peer %pM on vdev %i after creation\n",
+ addr, vdev_id);
+ ath10k_wmi_peer_delete(ar, vdev_id, addr);
+ spin_unlock_bh(&ar->data_lock);
+ return -ENOENT;
+ }
+
+ peer->vif = vif;
+ peer->sta = sta;
+
+ spin_unlock_bh(&ar->data_lock);
+
ar->num_peers++;
return 0;
@@ -731,6 +772,7 @@ static int ath10k_peer_delete(struct ath10k *ar, u32 vdev_id, const u8 *addr)
static void ath10k_peer_cleanup(struct ath10k *ar, u32 vdev_id)
{
struct ath10k_peer *peer, *tmp;
+ int peer_id;
lockdep_assert_held(&ar->conf_mutex);
@@ -742,6 +784,11 @@ static void ath10k_peer_cleanup(struct ath10k *ar, u32 vdev_id)
ath10k_warn(ar, "removing stale peer %pM from vdev_id %d\n",
peer->addr, vdev_id);
+ for_each_set_bit(peer_id, peer->peer_ids,
+ ATH10K_MAX_NUM_PEER_IDS) {
+ ar->peer_map[peer_id] = NULL;
+ }
+
list_del(&peer->list);
kfree(peer);
ar->num_peers--;
@@ -1725,7 +1772,7 @@ static int ath10k_mac_vif_setup_ps(struct ath10k_vif *arvif)
if (enable_ps && ath10k_mac_num_vifs_started(ar) > 1 &&
!test_bit(ATH10K_FW_FEATURE_MULTI_VIF_PS_SUPPORT,
- ar->fw_features)) {
+ ar->running_fw->fw_file.fw_features)) {
ath10k_warn(ar, "refusing to enable ps on vdev %i: not supported by fw\n",
arvif->vdev_id);
enable_ps = false;
@@ -2013,7 +2060,8 @@ static void ath10k_peer_assoc_h_crypto(struct ath10k *ar,
}
if (sta->mfp &&
- test_bit(ATH10K_FW_FEATURE_MFP_SUPPORT, ar->fw_features)) {
+ test_bit(ATH10K_FW_FEATURE_MFP_SUPPORT,
+ ar->running_fw->fw_file.fw_features)) {
arg->peer_flags |= ar->wmi.peer_flags->pmf;
}
}
@@ -2028,7 +2076,7 @@ static void ath10k_peer_assoc_h_rates(struct ath10k *ar,
struct cfg80211_chan_def def;
const struct ieee80211_supported_band *sband;
const struct ieee80211_rate *rates;
- enum ieee80211_band band;
+ enum nl80211_band band;
u32 ratemask;
u8 rate;
int i;
@@ -2088,7 +2136,7 @@ static void ath10k_peer_assoc_h_ht(struct ath10k *ar,
const struct ieee80211_sta_ht_cap *ht_cap = &sta->ht_cap;
struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif);
struct cfg80211_chan_def def;
- enum ieee80211_band band;
+ enum nl80211_band band;
const u8 *ht_mcs_mask;
const u16 *vht_mcs_mask;
int i, n;
@@ -2312,7 +2360,7 @@ static void ath10k_peer_assoc_h_vht(struct ath10k *ar,
const struct ieee80211_sta_vht_cap *vht_cap = &sta->vht_cap;
struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif);
struct cfg80211_chan_def def;
- enum ieee80211_band band;
+ enum nl80211_band band;
const u16 *vht_mcs_mask;
u8 ampdu_factor;
@@ -2330,7 +2378,7 @@ static void ath10k_peer_assoc_h_vht(struct ath10k *ar,
arg->peer_flags |= ar->wmi.peer_flags->vht;
- if (def.chan->band == IEEE80211_BAND_2GHZ)
+ if (def.chan->band == NL80211_BAND_2GHZ)
arg->peer_flags |= ar->wmi.peer_flags->vht_2g;
arg->peer_vht_caps = vht_cap->cap;
@@ -2399,7 +2447,7 @@ static void ath10k_peer_assoc_h_qos(struct ath10k *ar,
static bool ath10k_mac_sta_has_ofdm_only(struct ieee80211_sta *sta)
{
- return sta->supp_rates[IEEE80211_BAND_2GHZ] >>
+ return sta->supp_rates[NL80211_BAND_2GHZ] >>
ATH10K_MAC_FIRST_OFDM_RATE_IDX;
}
@@ -2410,7 +2458,7 @@ static void ath10k_peer_assoc_h_phymode(struct ath10k *ar,
{
struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif);
struct cfg80211_chan_def def;
- enum ieee80211_band band;
+ enum nl80211_band band;
const u8 *ht_mcs_mask;
const u16 *vht_mcs_mask;
enum wmi_phy_mode phymode = MODE_UNKNOWN;
@@ -2423,7 +2471,7 @@ static void ath10k_peer_assoc_h_phymode(struct ath10k *ar,
vht_mcs_mask = arvif->bitrate_mask.control[band].vht_mcs;
switch (band) {
- case IEEE80211_BAND_2GHZ:
+ case NL80211_BAND_2GHZ:
if (sta->vht_cap.vht_supported &&
!ath10k_peer_assoc_h_vht_masked(vht_mcs_mask)) {
if (sta->bandwidth == IEEE80211_STA_RX_BW_40)
@@ -2443,7 +2491,7 @@ static void ath10k_peer_assoc_h_phymode(struct ath10k *ar,
}
break;
- case IEEE80211_BAND_5GHZ:
+ case NL80211_BAND_5GHZ:
/*
* Check VHT first.
*/
@@ -2821,7 +2869,7 @@ static int ath10k_update_channel_list(struct ath10k *ar)
{
struct ieee80211_hw *hw = ar->hw;
struct ieee80211_supported_band **bands;
- enum ieee80211_band band;
+ enum nl80211_band band;
struct ieee80211_channel *channel;
struct wmi_scan_chan_list_arg arg = {0};
struct wmi_channel_arg *ch;
@@ -2833,7 +2881,7 @@ static int ath10k_update_channel_list(struct ath10k *ar)
lockdep_assert_held(&ar->conf_mutex);
bands = hw->wiphy->bands;
- for (band = 0; band < IEEE80211_NUM_BANDS; band++) {
+ for (band = 0; band < NUM_NL80211_BANDS; band++) {
if (!bands[band])
continue;
@@ -2852,7 +2900,7 @@ static int ath10k_update_channel_list(struct ath10k *ar)
return -ENOMEM;
ch = arg.channels;
- for (band = 0; band < IEEE80211_NUM_BANDS; band++) {
+ for (band = 0; band < NUM_NL80211_BANDS; band++) {
if (!bands[band])
continue;
@@ -2890,7 +2938,7 @@ static int ath10k_update_channel_list(struct ath10k *ar)
/* FIXME: why use only legacy modes, why not any
* HT/VHT modes? Would that even make any
* difference? */
- if (channel->band == IEEE80211_BAND_2GHZ)
+ if (channel->band == NL80211_BAND_2GHZ)
ch->mode = MODE_11G;
else
ch->mode = MODE_11A;
@@ -2994,6 +3042,13 @@ static void ath10k_reg_notifier(struct wiphy *wiphy,
/* TX handlers */
/***************/
+enum ath10k_mac_tx_path {
+ ATH10K_MAC_TX_HTT,
+ ATH10K_MAC_TX_HTT_MGMT,
+ ATH10K_MAC_TX_WMI_MGMT,
+ ATH10K_MAC_TX_UNKNOWN,
+};
+
void ath10k_mac_tx_lock(struct ath10k *ar, int reason)
{
lockdep_assert_held(&ar->htt.tx_lock);
@@ -3153,7 +3208,8 @@ ath10k_mac_tx_h_get_txmode(struct ath10k *ar,
*/
if (ar->htt.target_version_major < 3 &&
(ieee80211_is_nullfunc(fc) || ieee80211_is_qos_nullfunc(fc)) &&
- !test_bit(ATH10K_FW_FEATURE_HAS_WMI_MGMT_TX, ar->fw_features))
+ !test_bit(ATH10K_FW_FEATURE_HAS_WMI_MGMT_TX,
+ ar->running_fw->fw_file.fw_features))
return ATH10K_HW_TXRX_MGMT;
/* Workaround:
@@ -3271,6 +3327,28 @@ static void ath10k_tx_h_add_p2p_noa_ie(struct ath10k *ar,
}
}
+static void ath10k_mac_tx_h_fill_cb(struct ath10k *ar,
+ struct ieee80211_vif *vif,
+ struct ieee80211_txq *txq,
+ struct sk_buff *skb)
+{
+ struct ieee80211_hdr *hdr = (void *)skb->data;
+ struct ath10k_skb_cb *cb = ATH10K_SKB_CB(skb);
+
+ cb->flags = 0;
+ if (!ath10k_tx_h_use_hwcrypto(vif, skb))
+ cb->flags |= ATH10K_SKB_F_NO_HWCRYPT;
+
+ if (ieee80211_is_mgmt(hdr->frame_control))
+ cb->flags |= ATH10K_SKB_F_MGMT;
+
+ if (ieee80211_is_data_qos(hdr->frame_control))
+ cb->flags |= ATH10K_SKB_F_QOS;
+
+ cb->vif = vif;
+ cb->txq = txq;
+}
+
bool ath10k_mac_tx_frm_has_freq(struct ath10k *ar)
{
/* FIXME: Not really sure since when the behaviour changed. At some
@@ -3281,7 +3359,7 @@ bool ath10k_mac_tx_frm_has_freq(struct ath10k *ar)
*/
return (ar->htt.target_version_major >= 3 &&
ar->htt.target_version_minor >= 4 &&
- ar->htt.op_version == ATH10K_FW_HTT_OP_VERSION_TLV);
+ ar->running_fw->fw_file.htt_op_version == ATH10K_FW_HTT_OP_VERSION_TLV);
}
static int ath10k_mac_tx_wmi_mgmt(struct ath10k *ar, struct sk_buff *skb)
@@ -3306,26 +3384,50 @@ unlock:
return ret;
}
-static void ath10k_mac_tx(struct ath10k *ar, enum ath10k_hw_txrx_mode txmode,
- struct sk_buff *skb)
+static enum ath10k_mac_tx_path
+ath10k_mac_tx_h_get_txpath(struct ath10k *ar,
+ struct sk_buff *skb,
+ enum ath10k_hw_txrx_mode txmode)
{
- struct ath10k_htt *htt = &ar->htt;
- int ret = 0;
-
switch (txmode) {
case ATH10K_HW_TXRX_RAW:
case ATH10K_HW_TXRX_NATIVE_WIFI:
case ATH10K_HW_TXRX_ETHERNET:
- ret = ath10k_htt_tx(htt, txmode, skb);
- break;
+ return ATH10K_MAC_TX_HTT;
case ATH10K_HW_TXRX_MGMT:
if (test_bit(ATH10K_FW_FEATURE_HAS_WMI_MGMT_TX,
- ar->fw_features))
- ret = ath10k_mac_tx_wmi_mgmt(ar, skb);
+ ar->running_fw->fw_file.fw_features))
+ return ATH10K_MAC_TX_WMI_MGMT;
else if (ar->htt.target_version_major >= 3)
- ret = ath10k_htt_tx(htt, txmode, skb);
+ return ATH10K_MAC_TX_HTT;
else
- ret = ath10k_htt_mgmt_tx(htt, skb);
+ return ATH10K_MAC_TX_HTT_MGMT;
+ }
+
+ return ATH10K_MAC_TX_UNKNOWN;
+}
+
+static int ath10k_mac_tx_submit(struct ath10k *ar,
+ enum ath10k_hw_txrx_mode txmode,
+ enum ath10k_mac_tx_path txpath,
+ struct sk_buff *skb)
+{
+ struct ath10k_htt *htt = &ar->htt;
+ int ret = -EINVAL;
+
+ switch (txpath) {
+ case ATH10K_MAC_TX_HTT:
+ ret = ath10k_htt_tx(htt, txmode, skb);
+ break;
+ case ATH10K_MAC_TX_HTT_MGMT:
+ ret = ath10k_htt_mgmt_tx(htt, skb);
+ break;
+ case ATH10K_MAC_TX_WMI_MGMT:
+ ret = ath10k_mac_tx_wmi_mgmt(ar, skb);
+ break;
+ case ATH10K_MAC_TX_UNKNOWN:
+ WARN_ON_ONCE(1);
+ ret = -EINVAL;
break;
}
@@ -3334,6 +3436,64 @@ static void ath10k_mac_tx(struct ath10k *ar, enum ath10k_hw_txrx_mode txmode,
ret);
ieee80211_free_txskb(ar->hw, skb);
}
+
+ return ret;
+}
+
+/* This function consumes the sk_buff regardless of return value as far as
+ * the caller is concerned, so no freeing is necessary afterwards.
+ */
+static int ath10k_mac_tx(struct ath10k *ar,
+ struct ieee80211_vif *vif,
+ struct ieee80211_sta *sta,
+ enum ath10k_hw_txrx_mode txmode,
+ enum ath10k_mac_tx_path txpath,
+ struct sk_buff *skb)
+{
+ struct ieee80211_hw *hw = ar->hw;
+ struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
+ int ret;
+
+ /* We should disable CCK RATE due to P2P */
+ if (info->flags & IEEE80211_TX_CTL_NO_CCK_RATE)
+ ath10k_dbg(ar, ATH10K_DBG_MAC, "IEEE80211_TX_CTL_NO_CCK_RATE\n");
+
+ switch (txmode) {
+ case ATH10K_HW_TXRX_MGMT:
+ case ATH10K_HW_TXRX_NATIVE_WIFI:
+ ath10k_tx_h_nwifi(hw, skb);
+ ath10k_tx_h_add_p2p_noa_ie(ar, vif, skb);
+ ath10k_tx_h_seq_no(vif, skb);
+ break;
+ case ATH10K_HW_TXRX_ETHERNET:
+ ath10k_tx_h_8023(skb);
+ break;
+ case ATH10K_HW_TXRX_RAW:
+ if (!test_bit(ATH10K_FLAG_RAW_MODE, &ar->dev_flags)) {
+ WARN_ON_ONCE(1);
+ ieee80211_free_txskb(hw, skb);
+ return -ENOTSUPP;
+ }
+ }
+
+ if (info->flags & IEEE80211_TX_CTL_TX_OFFCHAN) {
+ if (!ath10k_mac_tx_frm_has_freq(ar)) {
+ ath10k_dbg(ar, ATH10K_DBG_MAC, "queued offchannel skb %p\n",
+ skb);
+
+ skb_queue_tail(&ar->offchan_tx_queue, skb);
+ ieee80211_queue_work(hw, &ar->offchan_tx_work);
+ return 0;
+ }
+ }
+
+ ret = ath10k_mac_tx_submit(ar, txmode, txpath, skb);
+ if (ret) {
+ ath10k_warn(ar, "failed to submit frame: %d\n", ret);
+ return ret;
+ }
+
+ return 0;
}
void ath10k_offchan_tx_purge(struct ath10k *ar)
@@ -3354,12 +3514,13 @@ void ath10k_offchan_tx_work(struct work_struct *work)
struct ath10k *ar = container_of(work, struct ath10k, offchan_tx_work);
struct ath10k_peer *peer;
struct ath10k_vif *arvif;
+ enum ath10k_hw_txrx_mode txmode;
+ enum ath10k_mac_tx_path txpath;
struct ieee80211_hdr *hdr;
struct ieee80211_vif *vif;
struct ieee80211_sta *sta;
struct sk_buff *skb;
const u8 *peer_addr;
- enum ath10k_hw_txrx_mode txmode;
int vdev_id;
int ret;
unsigned long time_left;
@@ -3396,7 +3557,8 @@ void ath10k_offchan_tx_work(struct work_struct *work)
peer_addr, vdev_id);
if (!peer) {
- ret = ath10k_peer_create(ar, vdev_id, peer_addr,
+ ret = ath10k_peer_create(ar, NULL, NULL, vdev_id,
+ peer_addr,
WMI_PEER_TYPE_DEFAULT);
if (ret)
ath10k_warn(ar, "failed to create peer %pM on vdev %d: %d\n",
@@ -3423,8 +3585,14 @@ void ath10k_offchan_tx_work(struct work_struct *work)
}
txmode = ath10k_mac_tx_h_get_txmode(ar, vif, sta, skb);
+ txpath = ath10k_mac_tx_h_get_txpath(ar, skb, txmode);
- ath10k_mac_tx(ar, txmode, skb);
+ ret = ath10k_mac_tx(ar, vif, sta, txmode, txpath, skb);
+ if (ret) {
+ ath10k_warn(ar, "failed to transmit offchannel frame: %d\n",
+ ret);
+ /* not serious */
+ }
time_left =
wait_for_completion_timeout(&ar->offchan_tx_completed, 3 * HZ);
@@ -3476,6 +3644,175 @@ void ath10k_mgmt_over_wmi_tx_work(struct work_struct *work)
}
}
+static void ath10k_mac_txq_init(struct ieee80211_txq *txq)
+{
+ struct ath10k_txq *artxq;
+
+ if (!txq)
+ return;
+
+ artxq = (void *)txq->drv_priv;
+ INIT_LIST_HEAD(&artxq->list);
+}
+
+static void ath10k_mac_txq_unref(struct ath10k *ar, struct ieee80211_txq *txq)
+{
+ struct ath10k_txq *artxq;
+ struct ath10k_skb_cb *cb;
+ struct sk_buff *msdu;
+ int msdu_id;
+
+ if (!txq)
+ return;
+
+ artxq = (void *)txq->drv_priv;
+
+ spin_lock_bh(&ar->txqs_lock);
+ if (!list_empty(&artxq->list))
+ list_del_init(&artxq->list);
+ spin_unlock_bh(&ar->txqs_lock);
+
+ spin_lock_bh(&ar->htt.tx_lock);
+ idr_for_each_entry(&ar->htt.pending_tx, msdu, msdu_id) {
+ cb = ATH10K_SKB_CB(msdu);
+ if (cb->txq == txq)
+ cb->txq = NULL;
+ }
+ spin_unlock_bh(&ar->htt.tx_lock);
+}
+
+struct ieee80211_txq *ath10k_mac_txq_lookup(struct ath10k *ar,
+ u16 peer_id,
+ u8 tid)
+{
+ struct ath10k_peer *peer;
+
+ lockdep_assert_held(&ar->data_lock);
+
+ peer = ar->peer_map[peer_id];
+ if (!peer)
+ return NULL;
+
+ if (peer->sta)
+ return peer->sta->txq[tid];
+ else if (peer->vif)
+ return peer->vif->txq;
+ else
+ return NULL;
+}
+
+static bool ath10k_mac_tx_can_push(struct ieee80211_hw *hw,
+ struct ieee80211_txq *txq)
+{
+ struct ath10k *ar = hw->priv;
+ struct ath10k_txq *artxq = (void *)txq->drv_priv;
+
+ /* No need to get locks */
+
+ if (ar->htt.tx_q_state.mode == HTT_TX_MODE_SWITCH_PUSH)
+ return true;
+
+ if (ar->htt.num_pending_tx < ar->htt.tx_q_state.num_push_allowed)
+ return true;
+
+ if (artxq->num_fw_queued < artxq->num_push_allowed)
+ return true;
+
+ return false;
+}
+
+int ath10k_mac_tx_push_txq(struct ieee80211_hw *hw,
+ struct ieee80211_txq *txq)
+{
+ struct ath10k *ar = hw->priv;
+ struct ath10k_htt *htt = &ar->htt;
+ struct ath10k_txq *artxq = (void *)txq->drv_priv;
+ struct ieee80211_vif *vif = txq->vif;
+ struct ieee80211_sta *sta = txq->sta;
+ enum ath10k_hw_txrx_mode txmode;
+ enum ath10k_mac_tx_path txpath;
+ struct sk_buff *skb;
+ size_t skb_len;
+ int ret;
+
+ spin_lock_bh(&ar->htt.tx_lock);
+ ret = ath10k_htt_tx_inc_pending(htt);
+ spin_unlock_bh(&ar->htt.tx_lock);
+
+ if (ret)
+ return ret;
+
+ skb = ieee80211_tx_dequeue(hw, txq);
+ if (!skb) {
+ spin_lock_bh(&ar->htt.tx_lock);
+ ath10k_htt_tx_dec_pending(htt);
+ spin_unlock_bh(&ar->htt.tx_lock);
+
+ return -ENOENT;
+ }
+
+ ath10k_mac_tx_h_fill_cb(ar, vif, txq, skb);
+
+ skb_len = skb->len;
+ txmode = ath10k_mac_tx_h_get_txmode(ar, vif, sta, skb);
+ txpath = ath10k_mac_tx_h_get_txpath(ar, skb, txmode);
+
+ ret = ath10k_mac_tx(ar, vif, sta, txmode, txpath, skb);
+ if (unlikely(ret)) {
+ ath10k_warn(ar, "failed to push frame: %d\n", ret);
+
+ spin_lock_bh(&ar->htt.tx_lock);
+ ath10k_htt_tx_dec_pending(htt);
+ spin_unlock_bh(&ar->htt.tx_lock);
+
+ return ret;
+ }
+
+ spin_lock_bh(&ar->htt.tx_lock);
+ artxq->num_fw_queued++;
+ spin_unlock_bh(&ar->htt.tx_lock);
+
+ return skb_len;
+}
+
+void ath10k_mac_tx_push_pending(struct ath10k *ar)
+{
+ struct ieee80211_hw *hw = ar->hw;
+ struct ieee80211_txq *txq;
+ struct ath10k_txq *artxq;
+ struct ath10k_txq *last;
+ int ret;
+ int max;
+
+ spin_lock_bh(&ar->txqs_lock);
+ rcu_read_lock();
+
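+ /* Make one round-robin pass over the pending txqs */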
+ last = list_last_entry(&ar->txqs, struct ath10k_txq, list);
+ while (!list_empty(&ar->txqs)) {
+ artxq = list_first_entry(&ar->txqs, struct ath10k_txq, list);
+ txq = container_of((void *)artxq, struct ieee80211_txq,
+ drv_priv);
+
+ /* Prevent aggressive sta/tid taking over tx queue */
+ max = 16;
+ ret = 0;
+ while (ath10k_mac_tx_can_push(hw, txq) && max--) {
+ ret = ath10k_mac_tx_push_txq(hw, txq);
+ if (ret < 0)
+ break;
+ }
+
+ list_del_init(&artxq->list);
+ if (ret != -ENOENT)
+ list_add_tail(&artxq->list, &ar->txqs);
+
+ ath10k_htt_tx_txq_update(hw, txq);
+
+ if (artxq == last || (ret < 0 && ret != -ENOENT))
+ break;
+ }
+
+ rcu_read_unlock();
+ spin_unlock_bh(&ar->txqs_lock);
+}
+
/************/
/* Scanning */
/************/
@@ -3531,7 +3868,7 @@ static int ath10k_scan_stop(struct ath10k *ar)
goto out;
}
- ret = wait_for_completion_timeout(&ar->scan.completed, 3*HZ);
+ ret = wait_for_completion_timeout(&ar->scan.completed, 3 * HZ);
if (ret == 0) {
ath10k_warn(ar, "failed to receive scan abortion completion: timed out\n");
ret = -ETIMEDOUT;
@@ -3611,7 +3948,7 @@ static int ath10k_start_scan(struct ath10k *ar,
if (ret)
return ret;
- ret = wait_for_completion_timeout(&ar->scan.started, 1*HZ);
+ ret = wait_for_completion_timeout(&ar->scan.started, 1 * HZ);
if (ret == 0) {
ret = ath10k_scan_stop(ar);
if (ret)
@@ -3638,66 +3975,86 @@ static int ath10k_start_scan(struct ath10k *ar,
/* mac80211 callbacks */
/**********************/
-static void ath10k_tx(struct ieee80211_hw *hw,
- struct ieee80211_tx_control *control,
- struct sk_buff *skb)
+static void ath10k_mac_op_tx(struct ieee80211_hw *hw,
+ struct ieee80211_tx_control *control,
+ struct sk_buff *skb)
{
struct ath10k *ar = hw->priv;
- struct ath10k_skb_cb *skb_cb = ATH10K_SKB_CB(skb);
+ struct ath10k_htt *htt = &ar->htt;
struct ieee80211_tx_info *info = IEEE80211_SKB_CB(skb);
struct ieee80211_vif *vif = info->control.vif;
struct ieee80211_sta *sta = control->sta;
- struct ieee80211_hdr *hdr = (struct ieee80211_hdr *)skb->data;
+ struct ieee80211_txq *txq = NULL;
+ struct ieee80211_hdr *hdr = (void *)skb->data;
enum ath10k_hw_txrx_mode txmode;
+ enum ath10k_mac_tx_path txpath;
+ bool is_htt;
+ bool is_mgmt;
+ bool is_presp;
+ int ret;
- /* We should disable CCK RATE due to P2P */
- if (info->flags & IEEE80211_TX_CTL_NO_CCK_RATE)
- ath10k_dbg(ar, ATH10K_DBG_MAC, "IEEE80211_TX_CTL_NO_CCK_RATE\n");
+ ath10k_mac_tx_h_fill_cb(ar, vif, txq, skb);
txmode = ath10k_mac_tx_h_get_txmode(ar, vif, sta, skb);
+ txpath = ath10k_mac_tx_h_get_txpath(ar, skb, txmode);
+ is_htt = (txpath == ATH10K_MAC_TX_HTT ||
+ txpath == ATH10K_MAC_TX_HTT_MGMT);
+ is_mgmt = (txpath == ATH10K_MAC_TX_HTT_MGMT);
- skb_cb->flags = 0;
- if (!ath10k_tx_h_use_hwcrypto(vif, skb))
- skb_cb->flags |= ATH10K_SKB_F_NO_HWCRYPT;
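+ /* HTT tx paths reserve pending tx (and mgmt) slots before submitting */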
+ if (is_htt) {
+ spin_lock_bh(&ar->htt.tx_lock);
+ is_presp = ieee80211_is_probe_resp(hdr->frame_control);
- if (ieee80211_is_mgmt(hdr->frame_control))
- skb_cb->flags |= ATH10K_SKB_F_MGMT;
-
- if (ieee80211_is_data_qos(hdr->frame_control))
- skb_cb->flags |= ATH10K_SKB_F_QOS;
-
- skb_cb->vif = vif;
+ ret = ath10k_htt_tx_inc_pending(htt);
+ if (ret) {
+ ath10k_warn(ar, "failed to increase tx pending count: %d, dropping\n",
+ ret);
+ spin_unlock_bh(&ar->htt.tx_lock);
+ ieee80211_free_txskb(ar->hw, skb);
+ return;
+ }
- switch (txmode) {
- case ATH10K_HW_TXRX_MGMT:
- case ATH10K_HW_TXRX_NATIVE_WIFI:
- ath10k_tx_h_nwifi(hw, skb);
- ath10k_tx_h_add_p2p_noa_ie(ar, vif, skb);
- ath10k_tx_h_seq_no(vif, skb);
- break;
- case ATH10K_HW_TXRX_ETHERNET:
- ath10k_tx_h_8023(skb);
- break;
- case ATH10K_HW_TXRX_RAW:
- if (!test_bit(ATH10K_FLAG_RAW_MODE, &ar->dev_flags)) {
- WARN_ON_ONCE(1);
- ieee80211_free_txskb(hw, skb);
+ ret = ath10k_htt_tx_mgmt_inc_pending(htt, is_mgmt, is_presp);
+ if (ret) {
+ ath10k_dbg(ar, ATH10K_DBG_MAC, "failed to increase tx mgmt pending count: %d, dropping\n",
+ ret);
+ ath10k_htt_tx_dec_pending(htt);
+ spin_unlock_bh(&ar->htt.tx_lock);
+ ieee80211_free_txskb(ar->hw, skb);
return;
}
+ spin_unlock_bh(&ar->htt.tx_lock);
}
- if (info->flags & IEEE80211_TX_CTL_TX_OFFCHAN) {
- if (!ath10k_mac_tx_frm_has_freq(ar)) {
- ath10k_dbg(ar, ATH10K_DBG_MAC, "queued offchannel skb %p\n",
- skb);
-
- skb_queue_tail(&ar->offchan_tx_queue, skb);
- ieee80211_queue_work(hw, &ar->offchan_tx_work);
- return;
+ ret = ath10k_mac_tx(ar, vif, sta, txmode, txpath, skb);
+ if (ret) {
+ ath10k_warn(ar, "failed to transmit frame: %d\n", ret);
+ if (is_htt) {
+ spin_lock_bh(&ar->htt.tx_lock);
+ ath10k_htt_tx_dec_pending(htt);
+ if (is_mgmt)
+ ath10k_htt_tx_mgmt_dec_pending(htt);
+ spin_unlock_bh(&ar->htt.tx_lock);
}
+ return;
}
+}
+
+static void ath10k_mac_op_wake_tx_queue(struct ieee80211_hw *hw,
+ struct ieee80211_txq *txq)
+{
+ struct ath10k *ar = hw->priv;
+ struct ath10k_txq *artxq = (void *)txq->drv_priv;
+
+ spin_lock_bh(&ar->txqs_lock);
+ if (list_empty(&artxq->list))
+ list_add_tail(&artxq->list, &ar->txqs);
+ spin_unlock_bh(&ar->txqs_lock);
- ath10k_mac_tx(ar, txmode, skb);
+ if (ath10k_mac_tx_can_push(hw, txq))
+ tasklet_schedule(&ar->htt.txrx_compl_task);
+
+ ath10k_htt_tx_txq_update(hw, txq);
}
/* Must not be called with conf_mutex held as workers can use that also. */
@@ -3919,14 +4276,11 @@ static void ath10k_mac_setup_ht_vht_cap(struct ath10k *ar)
vht_cap = ath10k_create_vht_cap(ar);
if (ar->phy_capability & WHAL_WLAN_11G_CAPABILITY) {
- band = &ar->mac.sbands[IEEE80211_BAND_2GHZ];
+ band = &ar->mac.sbands[NL80211_BAND_2GHZ];
band->ht_cap = ht_cap;
-
- /* Enable the VHT support at 2.4 GHz */
- band->vht_cap = vht_cap;
}
if (ar->phy_capability & WHAL_WLAN_11A_CAPABILITY) {
- band = &ar->mac.sbands[IEEE80211_BAND_5GHZ];
+ band = &ar->mac.sbands[NL80211_BAND_5GHZ];
band->ht_cap = ht_cap;
band->vht_cap = vht_cap;
}
@@ -3989,7 +4343,7 @@ static int ath10k_start(struct ieee80211_hw *hw)
/*
* This makes sense only when restarting hw. It is harmless to call
- * uncoditionally. This is necessary to make sure no HTT/WMI tx
+ * unconditionally. This is necessary to make sure no HTT/WMI tx
* commands will be submitted while restarting.
*/
ath10k_drain_tx(ar);
@@ -4021,7 +4375,8 @@ static int ath10k_start(struct ieee80211_hw *hw)
goto err_off;
}
- ret = ath10k_core_start(ar, ATH10K_FIRMWARE_MODE_NORMAL);
+ ret = ath10k_core_start(ar, ATH10K_FIRMWARE_MODE_NORMAL,
+ &ar->normal_mode_fw);
if (ret) {
ath10k_err(ar, "Could not init core: %d\n", ret);
goto err_power_down;
@@ -4079,7 +4434,7 @@ static int ath10k_start(struct ieee80211_hw *hw)
}
if (test_bit(ATH10K_FW_FEATURE_SUPPORTS_ADAPTIVE_CCA,
- ar->fw_features)) {
+ ar->running_fw->fw_file.fw_features)) {
ret = ath10k_wmi_pdev_enable_adaptive_cca(ar, 1,
WMI_CCA_DETECT_LEVEL_AUTO,
WMI_CCA_DETECT_MARGIN_AUTO);
@@ -4100,7 +4455,7 @@ static int ath10k_start(struct ieee80211_hw *hw)
ar->ani_enabled = true;
- if (test_bit(WMI_SERVICE_PEER_STATS, ar->wmi.svc_map)) {
+ if (ath10k_peer_stats_enabled(ar)) {
param = ar->wmi.pdev_param->peer_stats_update_period;
ret = ath10k_wmi_pdev_set_param(ar, param,
PEER_DEFAULT_STATS_UPDATE_PERIOD);
@@ -4313,6 +4668,7 @@ static int ath10k_add_interface(struct ieee80211_hw *hw,
{
struct ath10k *ar = hw->priv;
struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif);
+ struct ath10k_peer *peer;
enum wmi_sta_powersave_param param;
int ret = 0;
u32 value;
@@ -4325,6 +4681,7 @@ static int ath10k_add_interface(struct ieee80211_hw *hw,
mutex_lock(&ar->conf_mutex);
memset(arvif, 0, sizeof(*arvif));
+ ath10k_mac_txq_init(vif->txq);
arvif->ar = ar;
arvif->vif = vif;
@@ -4489,7 +4846,10 @@ static int ath10k_add_interface(struct ieee80211_hw *hw,
goto err_vdev_delete;
}
- if (ar->cfg_tx_chainmask) {
+ /* Configuring the number of spatial streams for a monitor interface
+ * causes a target assert in qca9888 and qca6174.
+ */
+ if (ar->cfg_tx_chainmask && (vif->type != NL80211_IFTYPE_MONITOR)) {
u16 nss = get_nss_from_chainmask(ar->cfg_tx_chainmask);
vdev_param = ar->wmi.vdev_param->nss;
@@ -4505,13 +4865,31 @@ static int ath10k_add_interface(struct ieee80211_hw *hw,
if (arvif->vdev_type == WMI_VDEV_TYPE_AP ||
arvif->vdev_type == WMI_VDEV_TYPE_IBSS) {
- ret = ath10k_peer_create(ar, arvif->vdev_id, vif->addr,
- WMI_PEER_TYPE_DEFAULT);
+ ret = ath10k_peer_create(ar, vif, NULL, arvif->vdev_id,
+ vif->addr, WMI_PEER_TYPE_DEFAULT);
if (ret) {
ath10k_warn(ar, "failed to create vdev %i peer for AP/IBSS: %d\n",
arvif->vdev_id, ret);
goto err_vdev_delete;
}
+
+ spin_lock_bh(&ar->data_lock);
+
+ peer = ath10k_peer_find(ar, arvif->vdev_id, vif->addr);
+ if (!peer) {
+ ath10k_warn(ar, "failed to lookup peer %pM on vdev %i\n",
+ vif->addr, arvif->vdev_id);
+ spin_unlock_bh(&ar->data_lock);
+ ret = -ENOENT;
+ goto err_peer_delete;
+ }
+
+ arvif->peer_id = find_first_bit(peer->peer_ids,
+ ATH10K_MAX_NUM_PEER_IDS);
+
+ spin_unlock_bh(&ar->data_lock);
+ } else {
+ arvif->peer_id = HTT_INVALID_PEERID;
}
if (arvif->vdev_type == WMI_VDEV_TYPE_AP) {
@@ -4622,7 +5000,9 @@ static void ath10k_remove_interface(struct ieee80211_hw *hw,
{
struct ath10k *ar = hw->priv;
struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif);
+ struct ath10k_peer *peer;
int ret;
+ int i;
cancel_work_sync(&arvif->ap_csa_work);
cancel_delayed_work_sync(&arvif->connection_loss_work);
@@ -4676,7 +5056,22 @@ static void ath10k_remove_interface(struct ieee80211_hw *hw,
spin_unlock_bh(&ar->data_lock);
}
+ spin_lock_bh(&ar->data_lock);
+ for (i = 0; i < ARRAY_SIZE(ar->peer_map); i++) {
+ peer = ar->peer_map[i];
+ if (!peer)
+ continue;
+
+ if (peer->vif == vif) {
+ ath10k_warn(ar, "found vif peer %pM entry on vdev %i after it was supposedly removed\n",
+ vif->addr, arvif->vdev_id);
+ peer->vif = NULL;
+ }
+ }
+ spin_unlock_bh(&ar->data_lock);
+
ath10k_peer_cleanup(ar, arvif->vdev_id);
+ ath10k_mac_txq_unref(ar, vif->txq);
if (vif->type == NL80211_IFTYPE_MONITOR) {
ar->monitor_arvif = NULL;
@@ -4689,6 +5084,8 @@ static void ath10k_remove_interface(struct ieee80211_hw *hw,
ath10k_mac_vif_tx_unlock_all(arvif);
spin_unlock_bh(&ar->htt.tx_lock);
+ ath10k_mac_txq_unref(ar, vif->txq);
+
mutex_unlock(&ar->conf_mutex);
}
@@ -5218,7 +5615,7 @@ static void ath10k_sta_rc_update_wk(struct work_struct *wk)
struct ath10k_sta *arsta;
struct ieee80211_sta *sta;
struct cfg80211_chan_def def;
- enum ieee80211_band band;
+ enum nl80211_band band;
const u8 *ht_mcs_mask;
const u16 *vht_mcs_mask;
u32 changed, bw, nss, smps;
@@ -5393,13 +5790,18 @@ static int ath10k_sta_state(struct ieee80211_hw *hw,
struct ath10k *ar = hw->priv;
struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif);
struct ath10k_sta *arsta = (struct ath10k_sta *)sta->drv_priv;
+ struct ath10k_peer *peer;
int ret = 0;
+ int i;
if (old_state == IEEE80211_STA_NOTEXIST &&
new_state == IEEE80211_STA_NONE) {
memset(arsta, 0, sizeof(*arsta));
arsta->arvif = arvif;
INIT_WORK(&arsta->update_wk, ath10k_sta_rc_update_wk);
+
+ for (i = 0; i < ARRAY_SIZE(sta->txq); i++)
+ ath10k_mac_txq_init(sta->txq[i]);
}
/* cancel must be done outside the mutex to avoid deadlock */
@@ -5434,8 +5836,8 @@ static int ath10k_sta_state(struct ieee80211_hw *hw,
if (sta->tdls)
peer_type = WMI_PEER_TYPE_TDLS;
- ret = ath10k_peer_create(ar, arvif->vdev_id, sta->addr,
- peer_type);
+ ret = ath10k_peer_create(ar, vif, sta, arvif->vdev_id,
+ sta->addr, peer_type);
if (ret) {
ath10k_warn(ar, "failed to add peer %pM for vdev %d when adding a new sta: %i\n",
sta->addr, arvif->vdev_id, ret);
@@ -5443,6 +5845,24 @@ static int ath10k_sta_state(struct ieee80211_hw *hw,
goto exit;
}
+ spin_lock_bh(&ar->data_lock);
+
+ peer = ath10k_peer_find(ar, arvif->vdev_id, sta->addr);
+ if (!peer) {
+ ath10k_warn(ar, "failed to lookup peer %pM on vdev %i\n",
+ vif->addr, arvif->vdev_id);
+ spin_unlock_bh(&ar->data_lock);
+ ath10k_peer_delete(ar, arvif->vdev_id, sta->addr);
+ ath10k_mac_dec_num_stations(arvif, sta);
+ ret = -ENOENT;
+ goto exit;
+ }
+
+ arsta->peer_id = find_first_bit(peer->peer_ids,
+ ATH10K_MAX_NUM_PEER_IDS);
+
+ spin_unlock_bh(&ar->data_lock);
+
if (!sta->tdls)
goto exit;
@@ -5505,6 +5925,23 @@ static int ath10k_sta_state(struct ieee80211_hw *hw,
ath10k_mac_dec_num_stations(arvif, sta);
+ spin_lock_bh(&ar->data_lock);
+ for (i = 0; i < ARRAY_SIZE(ar->peer_map); i++) {
+ peer = ar->peer_map[i];
+ if (!peer)
+ continue;
+
+ if (peer->sta == sta) {
+ ath10k_warn(ar, "found sta peer %pM entry on vdev %i after it was supposedly removed\n",
+ sta->addr, arvif->vdev_id);
+ peer->sta = NULL;
+ }
+ }
+ spin_unlock_bh(&ar->data_lock);
+
+ for (i = 0; i < ARRAY_SIZE(sta->txq); i++)
+ ath10k_mac_txq_unref(ar, sta->txq[i]);
+
if (!sta->tdls)
goto exit;
@@ -5751,7 +6188,7 @@ exit:
return ret;
}
-#define ATH10K_ROC_TIMEOUT_HZ (2*HZ)
+#define ATH10K_ROC_TIMEOUT_HZ (2 * HZ)
static int ath10k_remain_on_channel(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
@@ -5815,7 +6252,7 @@ static int ath10k_remain_on_channel(struct ieee80211_hw *hw,
goto exit;
}
- ret = wait_for_completion_timeout(&ar->scan.on_channel, 3*HZ);
+ ret = wait_for_completion_timeout(&ar->scan.on_channel, 3 * HZ);
if (ret == 0) {
ath10k_warn(ar, "failed to switch to channel for roc scan\n");
@@ -5967,6 +6404,39 @@ static void ath10k_reconfig_complete(struct ieee80211_hw *hw,
mutex_unlock(&ar->conf_mutex);
}
+static void
+ath10k_mac_update_bss_chan_survey(struct ath10k *ar,
+ struct ieee80211_channel *channel)
+{
+ int ret;
+ enum wmi_bss_survey_req_type type = WMI_BSS_SURVEY_REQ_TYPE_READ_CLEAR;
+
+ lockdep_assert_held(&ar->conf_mutex);
+
+ if (!test_bit(WMI_SERVICE_BSS_CHANNEL_INFO_64, ar->wmi.svc_map) ||
+ (ar->rx_channel != channel))
+ return;
+
+ if (ar->scan.state != ATH10K_SCAN_IDLE) {
+ ath10k_dbg(ar, ATH10K_DBG_MAC, "ignoring bss chan info request while scanning..\n");
+ return;
+ }
+
+ reinit_completion(&ar->bss_survey_done);
+
+ ret = ath10k_wmi_pdev_bss_chan_info_request(ar, type);
+ if (ret) {
+ ath10k_warn(ar, "failed to send pdev bss chan info request\n");
+ return;
+ }
+
+ ret = wait_for_completion_timeout(&ar->bss_survey_done, 3 * HZ);
+ if (!ret) {
+ ath10k_warn(ar, "bss channel survey timed out\n");
+ return;
+ }
+}
+
static int ath10k_get_survey(struct ieee80211_hw *hw, int idx,
struct survey_info *survey)
{
@@ -5977,20 +6447,22 @@ static int ath10k_get_survey(struct ieee80211_hw *hw, int idx,
mutex_lock(&ar->conf_mutex);
- sband = hw->wiphy->bands[IEEE80211_BAND_2GHZ];
+ sband = hw->wiphy->bands[NL80211_BAND_2GHZ];
if (sband && idx >= sband->n_channels) {
idx -= sband->n_channels;
sband = NULL;
}
if (!sband)
- sband = hw->wiphy->bands[IEEE80211_BAND_5GHZ];
+ sband = hw->wiphy->bands[NL80211_BAND_5GHZ];
if (!sband || idx >= sband->n_channels) {
ret = -ENOENT;
goto exit;
}
+ ath10k_mac_update_bss_chan_survey(ar, survey->channel);
+
spin_lock_bh(&ar->data_lock);
memcpy(survey, ar_survey, sizeof(*survey));
spin_unlock_bh(&ar->data_lock);
@@ -6007,7 +6479,7 @@ exit:
static bool
ath10k_mac_bitrate_mask_has_single_rate(struct ath10k *ar,
- enum ieee80211_band band,
+ enum nl80211_band band,
const struct cfg80211_bitrate_mask *mask)
{
int num_rates = 0;
@@ -6026,7 +6498,7 @@ ath10k_mac_bitrate_mask_has_single_rate(struct ath10k *ar,
static bool
ath10k_mac_bitrate_mask_get_single_nss(struct ath10k *ar,
- enum ieee80211_band band,
+ enum nl80211_band band,
const struct cfg80211_bitrate_mask *mask,
int *nss)
{
@@ -6075,7 +6547,7 @@ ath10k_mac_bitrate_mask_get_single_nss(struct ath10k *ar,
static int
ath10k_mac_bitrate_mask_get_single_rate(struct ath10k *ar,
- enum ieee80211_band band,
+ enum nl80211_band band,
const struct cfg80211_bitrate_mask *mask,
u8 *rate, u8 *nss)
{
@@ -6176,7 +6648,7 @@ static int ath10k_mac_set_fixed_rate_params(struct ath10k_vif *arvif,
static bool
ath10k_mac_can_set_bitrate_mask(struct ath10k *ar,
- enum ieee80211_band band,
+ enum nl80211_band band,
const struct cfg80211_bitrate_mask *mask)
{
int i;
@@ -6228,7 +6700,7 @@ static int ath10k_mac_op_set_bitrate_mask(struct ieee80211_hw *hw,
struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif);
struct cfg80211_chan_def def;
struct ath10k *ar = arvif->ar;
- enum ieee80211_band band;
+ enum nl80211_band band;
const u8 *ht_mcs_mask;
const u16 *vht_mcs_mask;
u8 rate;
@@ -6379,6 +6851,32 @@ static u64 ath10k_get_tsf(struct ieee80211_hw *hw, struct ieee80211_vif *vif)
return 0;
}
+static void ath10k_set_tsf(struct ieee80211_hw *hw, struct ieee80211_vif *vif,
+ u64 tsf)
+{
+ struct ath10k *ar = hw->priv;
+ struct ath10k_vif *arvif = ath10k_vif_to_arvif(vif);
+ u32 tsf_offset, vdev_param = ar->wmi.vdev_param->set_tsf;
+ int ret;
+
+ /* Workaround:
+ *
+ * The given tsf argument is the entire TSF value, but the firmware
+ * accepts only a TSF offset relative to the current TSF.
+ *
+ * get_tsf is used to derive that offset, but since ath10k_get_tsf is
+ * not implemented properly it always returns 0. Luckily all current
+ * callers of set_tsf also build their value from get_tsf (e.g.
+ * get_tsf() + tsf_delta), so the offset sent to the firmware is still
+ * arithmetically correct.
+ */
+ tsf_offset = tsf - ath10k_get_tsf(hw, vif);
+ ret = ath10k_wmi_vdev_set_param(ar, arvif->vdev_id,
+ vdev_param, tsf_offset);
+ if (ret && ret != -EOPNOTSUPP)
+ ath10k_warn(ar, "failed to set tsf offset: %d\n", ret);
+}
+
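/* Editor's sketch (not part of the patch): why the workaround above is
 * arithmetically sound. The subtraction is performed in unsigned integer
 * arithmetic, and because the callers build the requested TSF as
 * get_tsf() + delta, subtracting get_tsf() again leaves exactly the delta
 * the firmware expects, even while get_tsf() returns 0. Kernel integer
 * types are assumed; the helper name is hypothetical.
 */
static u32 example_tsf_offset(u64 requested_tsf, u64 current_tsf)
{
	/* mirrors "tsf_offset = tsf - ath10k_get_tsf(hw, vif)" above */
	return (u32)(requested_tsf - current_tsf);
}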
static int ath10k_ampdu_action(struct ieee80211_hw *hw,
struct ieee80211_vif *vif,
struct ieee80211_ampdu_params *params)
@@ -6450,7 +6948,13 @@ ath10k_mac_update_rx_channel(struct ath10k *ar,
def = &vifs[0].new_ctx->def;
ar->rx_channel = def->chan;
- } else if (ctx && ath10k_mac_num_chanctxs(ar) == 0) {
+ } else if ((ctx && ath10k_mac_num_chanctxs(ar) == 0) ||
+ (ctx && (ar->state == ATH10K_STATE_RESTARTED))) {
+ /* During a driver restart due to a firmware assert, mac80211
+ * still holds a valid channel context for this radio, so the
+ * channel context iteration returns num_chanctx > 0. Fix up
+ * rx_channel while the restart is in progress.
+ */
ar->rx_channel = ctx->def.chan;
} else {
ar->rx_channel = NULL;
@@ -6807,7 +7311,8 @@ ath10k_mac_op_switch_vif_chanctx(struct ieee80211_hw *hw,
}
static const struct ieee80211_ops ath10k_ops = {
- .tx = ath10k_tx,
+ .tx = ath10k_mac_op_tx,
+ .wake_tx_queue = ath10k_mac_op_wake_tx_queue,
.start = ath10k_start,
.stop = ath10k_stop,
.config = ath10k_config,
@@ -6834,6 +7339,7 @@ static const struct ieee80211_ops ath10k_ops = {
.set_bitrate_mask = ath10k_mac_op_set_bitrate_mask,
.sta_rc_update = ath10k_sta_rc_update,
.get_tsf = ath10k_get_tsf,
+ .set_tsf = ath10k_set_tsf,
.ampdu_action = ath10k_ampdu_action,
.get_et_sset_count = ath10k_debug_get_et_sset_count,
.get_et_stats = ath10k_debug_get_et_stats,
@@ -6857,7 +7363,7 @@ static const struct ieee80211_ops ath10k_ops = {
};
#define CHAN2G(_channel, _freq, _flags) { \
- .band = IEEE80211_BAND_2GHZ, \
+ .band = NL80211_BAND_2GHZ, \
.hw_value = (_channel), \
.center_freq = (_freq), \
.flags = (_flags), \
@@ -6866,7 +7372,7 @@ static const struct ieee80211_ops ath10k_ops = {
}
#define CHAN5G(_channel, _freq, _flags) { \
- .band = IEEE80211_BAND_5GHZ, \
+ .band = NL80211_BAND_5GHZ, \
.hw_value = (_channel), \
.center_freq = (_freq), \
.flags = (_flags), \
@@ -7186,13 +7692,13 @@ int ath10k_mac_register(struct ath10k *ar)
goto err_free;
}
- band = &ar->mac.sbands[IEEE80211_BAND_2GHZ];
+ band = &ar->mac.sbands[NL80211_BAND_2GHZ];
band->n_channels = ARRAY_SIZE(ath10k_2ghz_channels);
band->channels = channels;
band->n_bitrates = ath10k_g_rates_size;
band->bitrates = ath10k_g_rates;
- ar->hw->wiphy->bands[IEEE80211_BAND_2GHZ] = band;
+ ar->hw->wiphy->bands[NL80211_BAND_2GHZ] = band;
}
if (ar->phy_capability & WHAL_WLAN_11A_CAPABILITY) {
@@ -7204,12 +7710,12 @@ int ath10k_mac_register(struct ath10k *ar)
goto err_free;
}
- band = &ar->mac.sbands[IEEE80211_BAND_5GHZ];
+ band = &ar->mac.sbands[NL80211_BAND_5GHZ];
band->n_channels = ARRAY_SIZE(ath10k_5ghz_channels);
band->channels = channels;
band->n_bitrates = ath10k_a_rates_size;
band->bitrates = ath10k_a_rates;
- ar->hw->wiphy->bands[IEEE80211_BAND_5GHZ] = band;
+ ar->hw->wiphy->bands[NL80211_BAND_5GHZ] = band;
}
ath10k_mac_setup_ht_vht_cap(ar);
@@ -7222,7 +7728,7 @@ int ath10k_mac_register(struct ath10k *ar)
ar->hw->wiphy->available_antennas_rx = ar->cfg_rx_chainmask;
ar->hw->wiphy->available_antennas_tx = ar->cfg_tx_chainmask;
- if (!test_bit(ATH10K_FW_FEATURE_NO_P2P, ar->fw_features))
+ if (!test_bit(ATH10K_FW_FEATURE_NO_P2P, ar->normal_mode_fw.fw_file.fw_features))
ar->hw->wiphy->interface_modes |=
BIT(NL80211_IFTYPE_P2P_DEVICE) |
BIT(NL80211_IFTYPE_P2P_CLIENT) |
@@ -7262,6 +7768,7 @@ int ath10k_mac_register(struct ath10k *ar)
ar->hw->vif_data_size = sizeof(struct ath10k_vif);
ar->hw->sta_data_size = sizeof(struct ath10k_sta);
+ ar->hw->txq_data_size = sizeof(struct ath10k_txq);
ar->hw->max_listen_interval = ATH10K_MAX_HW_LISTEN_INTERVAL;
@@ -7286,7 +7793,8 @@ int ath10k_mac_register(struct ath10k *ar)
ar->hw->wiphy->max_remain_on_channel_duration = 5000;
ar->hw->wiphy->flags |= WIPHY_FLAG_AP_UAPSD;
- ar->hw->wiphy->features |= NL80211_FEATURE_AP_MODE_CHAN_WIDTH_CHANGE;
+ ar->hw->wiphy->features |= NL80211_FEATURE_AP_MODE_CHAN_WIDTH_CHANGE |
+ NL80211_FEATURE_AP_SCAN;
ar->hw->wiphy->max_ap_assoc_sta = ar->max_num_stations;
@@ -7310,7 +7818,7 @@ int ath10k_mac_register(struct ath10k *ar)
*/
ar->hw->offchannel_tx_hw_queue = IEEE80211_MAX_QUEUES - 1;
- switch (ar->wmi.op_version) {
+ switch (ar->running_fw->fw_file.wmi_op_version) {
case ATH10K_FW_WMI_OP_VERSION_MAIN:
ar->hw->wiphy->iface_combinations = ath10k_if_comb;
ar->hw->wiphy->n_iface_combinations =
@@ -7395,8 +7903,8 @@ err_dfs_detector_exit:
ar->dfs_detector->exit(ar->dfs_detector);
err_free:
- kfree(ar->mac.sbands[IEEE80211_BAND_2GHZ].channels);
- kfree(ar->mac.sbands[IEEE80211_BAND_5GHZ].channels);
+ kfree(ar->mac.sbands[NL80211_BAND_2GHZ].channels);
+ kfree(ar->mac.sbands[NL80211_BAND_5GHZ].channels);
SET_IEEE80211_DEV(ar->hw, NULL);
return ret;
@@ -7409,8 +7917,8 @@ void ath10k_mac_unregister(struct ath10k *ar)
if (config_enabled(CONFIG_ATH10K_DFS_CERTIFIED) && ar->dfs_detector)
ar->dfs_detector->exit(ar->dfs_detector);
- kfree(ar->mac.sbands[IEEE80211_BAND_2GHZ].channels);
- kfree(ar->mac.sbands[IEEE80211_BAND_5GHZ].channels);
+ kfree(ar->mac.sbands[NL80211_BAND_2GHZ].channels);
+ kfree(ar->mac.sbands[NL80211_BAND_5GHZ].channels);
SET_IEEE80211_DEV(ar->hw, NULL);
}
diff --git a/drivers/net/wireless/ath/ath10k/mac.h b/drivers/net/wireless/ath/ath10k/mac.h
index 53091588090d..1bd29ecfcdcc 100644
--- a/drivers/net/wireless/ath/ath10k/mac.h
+++ b/drivers/net/wireless/ath/ath10k/mac.h
@@ -75,6 +75,13 @@ void ath10k_mac_tx_unlock(struct ath10k *ar, int reason);
void ath10k_mac_vif_tx_lock(struct ath10k_vif *arvif, int reason);
void ath10k_mac_vif_tx_unlock(struct ath10k_vif *arvif, int reason);
bool ath10k_mac_tx_frm_has_freq(struct ath10k *ar);
+void ath10k_mac_tx_push_pending(struct ath10k *ar);
+int ath10k_mac_tx_push_txq(struct ieee80211_hw *hw,
+ struct ieee80211_txq *txq);
+struct ieee80211_txq *ath10k_mac_txq_lookup(struct ath10k *ar,
+ u16 peer_id,
+ u8 tid);
+int ath10k_mac_ext_resource_config(struct ath10k *ar, u32 val);
static inline struct ath10k_vif *ath10k_vif_to_arvif(struct ieee80211_vif *vif)
{
diff --git a/drivers/net/wireless/ath/ath10k/pci.c b/drivers/net/wireless/ath/ath10k/pci.c
index b3cff1d3364a..8133d7b5b956 100644
--- a/drivers/net/wireless/ath/ath10k/pci.c
+++ b/drivers/net/wireless/ath/ath10k/pci.c
@@ -33,12 +33,6 @@
#include "ce.h"
#include "pci.h"
-enum ath10k_pci_irq_mode {
- ATH10K_PCI_IRQ_AUTO = 0,
- ATH10K_PCI_IRQ_LEGACY = 1,
- ATH10K_PCI_IRQ_MSI = 2,
-};
-
enum ath10k_pci_reset_mode {
ATH10K_PCI_RESET_AUTO = 0,
ATH10K_PCI_RESET_WARM_ONLY = 1,
@@ -745,10 +739,7 @@ static inline const char *ath10k_pci_get_irq_method(struct ath10k *ar)
{
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
- if (ar_pci->num_msi_intrs > 1)
- return "msi-x";
-
- if (ar_pci->num_msi_intrs == 1)
+ if (ar_pci->oper_irq_mode == ATH10K_PCI_IRQ_MSI)
return "msi";
return "legacy";
@@ -809,7 +800,8 @@ static void ath10k_pci_rx_post_pipe(struct ath10k_pci_pipe *pipe)
spin_lock_bh(&ar_pci->ce_lock);
num = __ath10k_ce_rx_num_free_bufs(ce_pipe);
spin_unlock_bh(&ar_pci->ce_lock);
- while (num--) {
+
+ while (num >= 0) {
ret = __ath10k_pci_rx_post_buf(pipe);
if (ret) {
if (ret == -ENOSPC)
@@ -819,6 +811,7 @@ static void ath10k_pci_rx_post_pipe(struct ath10k_pci_pipe *pipe)
ATH10K_PCI_RX_POST_RETRY_MS);
break;
}
+ num--;
}
}
@@ -870,10 +863,8 @@ static int ath10k_pci_diag_read_mem(struct ath10k *ar, u32 address, void *data,
{
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
int ret = 0;
- u32 buf;
+ u32 *buf;
unsigned int completed_nbytes, orig_nbytes, remaining_bytes;
- unsigned int id;
- unsigned int flags;
struct ath10k_ce_pipe *ce_diag;
/* Host buffer address in CE space */
u32 ce_data;
@@ -909,7 +900,7 @@ static int ath10k_pci_diag_read_mem(struct ath10k *ar, u32 address, void *data,
nbytes = min_t(unsigned int, remaining_bytes,
DIAG_TRANSFER_LIMIT);
- ret = __ath10k_ce_rx_post_buf(ce_diag, NULL, ce_data);
+ ret = __ath10k_ce_rx_post_buf(ce_diag, &ce_data, ce_data);
if (ret != 0)
goto done;
@@ -940,9 +931,10 @@ static int ath10k_pci_diag_read_mem(struct ath10k *ar, u32 address, void *data,
}
i = 0;
- while (ath10k_ce_completed_recv_next_nolock(ce_diag, NULL, &buf,
- &completed_nbytes,
- &id, &flags) != 0) {
+ while (ath10k_ce_completed_recv_next_nolock(ce_diag,
+ (void **)&buf,
+ &completed_nbytes)
+ != 0) {
mdelay(1);
if (i++ > DIAG_ACCESS_CE_TIMEOUT_MS) {
@@ -956,7 +948,7 @@ static int ath10k_pci_diag_read_mem(struct ath10k *ar, u32 address, void *data,
goto done;
}
- if (buf != ce_data) {
+ if (*buf != ce_data) {
ret = -EIO;
goto done;
}
@@ -1026,10 +1018,8 @@ int ath10k_pci_diag_write_mem(struct ath10k *ar, u32 address,
{
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
int ret = 0;
- u32 buf;
+ u32 *buf;
unsigned int completed_nbytes, orig_nbytes, remaining_bytes;
- unsigned int id;
- unsigned int flags;
struct ath10k_ce_pipe *ce_diag;
void *data_buf = NULL;
u32 ce_data; /* Host buffer address in CE space */
@@ -1078,7 +1068,7 @@ int ath10k_pci_diag_write_mem(struct ath10k *ar, u32 address,
nbytes = min_t(int, remaining_bytes, DIAG_TRANSFER_LIMIT);
/* Set up to receive directly into Target(!) address */
- ret = __ath10k_ce_rx_post_buf(ce_diag, NULL, address);
+ ret = __ath10k_ce_rx_post_buf(ce_diag, &address, address);
if (ret != 0)
goto done;
@@ -1103,9 +1093,10 @@ int ath10k_pci_diag_write_mem(struct ath10k *ar, u32 address,
}
i = 0;
- while (ath10k_ce_completed_recv_next_nolock(ce_diag, NULL, &buf,
- &completed_nbytes,
- &id, &flags) != 0) {
+ while (ath10k_ce_completed_recv_next_nolock(ce_diag,
+ (void **)&buf,
+ &completed_nbytes)
+ != 0) {
mdelay(1);
if (i++ > DIAG_ACCESS_CE_TIMEOUT_MS) {
@@ -1119,7 +1110,7 @@ int ath10k_pci_diag_write_mem(struct ath10k *ar, u32 address,
goto done;
}
- if (buf != address) {
+ if (*buf != address) {
ret = -EIO;
goto done;
}
@@ -1181,15 +1172,11 @@ static void ath10k_pci_process_rx_cb(struct ath10k_ce_pipe *ce_state,
struct sk_buff *skb;
struct sk_buff_head list;
void *transfer_context;
- u32 ce_data;
unsigned int nbytes, max_nbytes;
- unsigned int transfer_id;
- unsigned int flags;
__skb_queue_head_init(&list);
while (ath10k_ce_completed_recv_next(ce_state, &transfer_context,
- &ce_data, &nbytes, &transfer_id,
- &flags) == 0) {
+ &nbytes) == 0) {
skb = transfer_context;
max_nbytes = skb->len + skb_tailroom(skb);
dma_unmap_single(ar->dev, ATH10K_SKB_RXCB(skb)->paddr,
@@ -1218,6 +1205,63 @@ static void ath10k_pci_process_rx_cb(struct ath10k_ce_pipe *ce_state,
ath10k_pci_rx_post_pipe(pipe_info);
}
+static void ath10k_pci_process_htt_rx_cb(struct ath10k_ce_pipe *ce_state,
+ void (*callback)(struct ath10k *ar,
+ struct sk_buff *skb))
+{
+ struct ath10k *ar = ce_state->ar;
+ struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
+ struct ath10k_pci_pipe *pipe_info = &ar_pci->pipe_info[ce_state->id];
+ struct ath10k_ce_pipe *ce_pipe = pipe_info->ce_hdl;
+ struct sk_buff *skb;
+ struct sk_buff_head list;
+ void *transfer_context;
+ unsigned int nbytes, max_nbytes, nentries;
+ int orig_len;
+
+ /* No need to acquire ce_lock for CE5, since this is the only place CE5
+ * is processed other than init and deinit. Before releasing CE5
+ * buffers, interrupts are disabled. Thus CE5 access is serialized.
+ */
+ __skb_queue_head_init(&list);
+ while (ath10k_ce_completed_recv_next_nolock(ce_state, &transfer_context,
+ &nbytes) == 0) {
+ skb = transfer_context;
+ max_nbytes = skb->len + skb_tailroom(skb);
+
+ if (unlikely(max_nbytes < nbytes)) {
+ ath10k_warn(ar, "rxed more than expected (nbytes %d, max %d)",
+ nbytes, max_nbytes);
+ continue;
+ }
+
+ dma_sync_single_for_cpu(ar->dev, ATH10K_SKB_RXCB(skb)->paddr,
+ max_nbytes, DMA_FROM_DEVICE);
+ skb_put(skb, nbytes);
+ __skb_queue_tail(&list, skb);
+ }
+
+ nentries = skb_queue_len(&list);
+ while ((skb = __skb_dequeue(&list))) {
+ ath10k_dbg(ar, ATH10K_DBG_PCI, "pci rx ce pipe %d len %d\n",
+ ce_state->id, skb->len);
+ ath10k_dbg_dump(ar, ATH10K_DBG_PCI_DUMP, NULL, "pci rx: ",
+ skb->data, skb->len);
+
+ orig_len = skb->len;
+ callback(ar, skb);
+ skb_push(skb, orig_len - skb->len);
+ skb_reset_tail_pointer(skb);
+ skb_trim(skb, 0);
+
+ /* let the device gain the buffer again */
+ dma_sync_single_for_device(ar->dev, ATH10K_SKB_RXCB(skb)->paddr,
+ skb->len + skb_tailroom(skb),
+ DMA_FROM_DEVICE);
+ }
+ ath10k_ce_rx_update_write_idx(ce_pipe, nentries);
+}
+
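/* Editor's sketch (not part of the patch): the recycle step above puts
 * each skb back into its pre-delivery state so the same DMA buffer can be
 * reposted to the copy engine - undo whatever the callback pulled off the
 * head, reset the tail, trim the length to zero, then give ownership of
 * the buffer back to the device. The helper name is hypothetical.
 */
static void example_recycle_rx_skb(struct ath10k *ar, struct sk_buff *skb,
				   int orig_len)
{
	skb_push(skb, orig_len - skb->len);	/* undo any skb_pull() by the callback */
	skb_reset_tail_pointer(skb);
	skb_trim(skb, 0);			/* data length back to zero */

	dma_sync_single_for_device(ar->dev, ATH10K_SKB_RXCB(skb)->paddr,
				   skb->len + skb_tailroom(skb),
				   DMA_FROM_DEVICE);
}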
/* Called by lower (CE) layer when data is received from the Target. */
static void ath10k_pci_htc_rx_cb(struct ath10k_ce_pipe *ce_state)
{
@@ -1274,7 +1318,7 @@ static void ath10k_pci_htt_rx_cb(struct ath10k_ce_pipe *ce_state)
*/
ath10k_ce_per_engine_service(ce_state->ar, 4);
- ath10k_pci_process_rx_cb(ce_state, ath10k_pci_htt_rx_deliver);
+ ath10k_pci_process_htt_rx_cb(ce_state, ath10k_pci_htt_rx_deliver);
}
int ath10k_pci_hif_tx_sg(struct ath10k *ar, u8 pipe_id,
@@ -1449,13 +1493,8 @@ void ath10k_pci_hif_send_complete_check(struct ath10k *ar, u8 pipe,
void ath10k_pci_kill_tasklet(struct ath10k *ar)
{
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
- int i;
tasklet_kill(&ar_pci->intr_tq);
- tasklet_kill(&ar_pci->msi_fw_err);
-
- for (i = 0; i < CE_COUNT; i++)
- tasklet_kill(&ar_pci->pipe_info[i].intr);
del_timer_sync(&ar_pci->rx_post_retry);
}
@@ -1571,10 +1610,8 @@ static void ath10k_pci_irq_disable(struct ath10k *ar)
static void ath10k_pci_irq_sync(struct ath10k *ar)
{
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
- int i;
- for (i = 0; i < max(1, ar_pci->num_msi_intrs); i++)
- synchronize_irq(ar_pci->pdev->irq + i);
+ synchronize_irq(ar_pci->pdev->irq);
}
static void ath10k_pci_irq_enable(struct ath10k *ar)
@@ -1835,13 +1872,10 @@ static void ath10k_pci_bmi_recv_data(struct ath10k_ce_pipe *ce_state)
{
struct ath10k *ar = ce_state->ar;
struct bmi_xfer *xfer;
- u32 ce_data;
unsigned int nbytes;
- unsigned int transfer_id;
- unsigned int flags;
- if (ath10k_ce_completed_recv_next(ce_state, (void **)&xfer, &ce_data,
- &nbytes, &transfer_id, &flags))
+ if (ath10k_ce_completed_recv_next(ce_state, (void **)&xfer,
+ &nbytes))
return;
if (WARN_ON_ONCE(!xfer))
@@ -2546,65 +2580,6 @@ static const struct ath10k_hif_ops ath10k_pci_hif_ops = {
#endif
};
-static void ath10k_pci_ce_tasklet(unsigned long ptr)
-{
- struct ath10k_pci_pipe *pipe = (struct ath10k_pci_pipe *)ptr;
- struct ath10k_pci *ar_pci = pipe->ar_pci;
-
- ath10k_ce_per_engine_service(ar_pci->ar, pipe->pipe_num);
-}
-
-static void ath10k_msi_err_tasklet(unsigned long data)
-{
- struct ath10k *ar = (struct ath10k *)data;
-
- if (!ath10k_pci_has_fw_crashed(ar)) {
- ath10k_warn(ar, "received unsolicited fw crash interrupt\n");
- return;
- }
-
- ath10k_pci_irq_disable(ar);
- ath10k_pci_fw_crashed_clear(ar);
- ath10k_pci_fw_crashed_dump(ar);
-}
-
-/*
- * Handler for a per-engine interrupt on a PARTICULAR CE.
- * This is used in cases where each CE has a private MSI interrupt.
- */
-static irqreturn_t ath10k_pci_per_engine_handler(int irq, void *arg)
-{
- struct ath10k *ar = arg;
- struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
- int ce_id = irq - ar_pci->pdev->irq - MSI_ASSIGN_CE_INITIAL;
-
- if (ce_id < 0 || ce_id >= ARRAY_SIZE(ar_pci->pipe_info)) {
- ath10k_warn(ar, "unexpected/invalid irq %d ce_id %d\n", irq,
- ce_id);
- return IRQ_HANDLED;
- }
-
- /*
- * NOTE: We are able to derive ce_id from irq because we
- * use a one-to-one mapping for CE's 0..5.
- * CE's 6 & 7 do not use interrupts at all.
- *
- * This mapping must be kept in sync with the mapping
- * used by firmware.
- */
- tasklet_schedule(&ar_pci->pipe_info[ce_id].intr);
- return IRQ_HANDLED;
-}
-
-static irqreturn_t ath10k_pci_msi_fw_handler(int irq, void *arg)
-{
- struct ath10k *ar = arg;
- struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
-
- tasklet_schedule(&ar_pci->msi_fw_err);
- return IRQ_HANDLED;
-}
-
/*
* Top-level interrupt handler for all PCI interrupts from a Target.
* When a block of MSI interrupts is allocated, this top-level handler
@@ -2622,7 +2597,7 @@ static irqreturn_t ath10k_pci_interrupt_handler(int irq, void *arg)
return IRQ_NONE;
}
- if (ar_pci->num_msi_intrs == 0) {
+ if (ar_pci->oper_irq_mode == ATH10K_PCI_IRQ_LEGACY) {
if (!ath10k_pci_irq_pending(ar))
return IRQ_NONE;
@@ -2649,43 +2624,10 @@ static void ath10k_pci_tasklet(unsigned long data)
ath10k_ce_per_engine_service_any(ar);
/* Re-enable legacy irq that was disabled in the irq handler */
- if (ar_pci->num_msi_intrs == 0)
+ if (ar_pci->oper_irq_mode == ATH10K_PCI_IRQ_LEGACY)
ath10k_pci_enable_legacy_irq(ar);
}
-static int ath10k_pci_request_irq_msix(struct ath10k *ar)
-{
- struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
- int ret, i;
-
- ret = request_irq(ar_pci->pdev->irq + MSI_ASSIGN_FW,
- ath10k_pci_msi_fw_handler,
- IRQF_SHARED, "ath10k_pci", ar);
- if (ret) {
- ath10k_warn(ar, "failed to request MSI-X fw irq %d: %d\n",
- ar_pci->pdev->irq + MSI_ASSIGN_FW, ret);
- return ret;
- }
-
- for (i = MSI_ASSIGN_CE_INITIAL; i <= MSI_ASSIGN_CE_MAX; i++) {
- ret = request_irq(ar_pci->pdev->irq + i,
- ath10k_pci_per_engine_handler,
- IRQF_SHARED, "ath10k_pci", ar);
- if (ret) {
- ath10k_warn(ar, "failed to request MSI-X ce irq %d: %d\n",
- ar_pci->pdev->irq + i, ret);
-
- for (i--; i >= MSI_ASSIGN_CE_INITIAL; i--)
- free_irq(ar_pci->pdev->irq + i, ar);
-
- free_irq(ar_pci->pdev->irq + MSI_ASSIGN_FW, ar);
- return ret;
- }
- }
-
- return 0;
-}
-
static int ath10k_pci_request_irq_msi(struct ath10k *ar)
{
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
@@ -2724,41 +2666,28 @@ static int ath10k_pci_request_irq(struct ath10k *ar)
{
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
- switch (ar_pci->num_msi_intrs) {
- case 0:
+ switch (ar_pci->oper_irq_mode) {
+ case ATH10K_PCI_IRQ_LEGACY:
return ath10k_pci_request_irq_legacy(ar);
- case 1:
+ case ATH10K_PCI_IRQ_MSI:
return ath10k_pci_request_irq_msi(ar);
default:
- return ath10k_pci_request_irq_msix(ar);
+ return -EINVAL;
}
}
static void ath10k_pci_free_irq(struct ath10k *ar)
{
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
- int i;
- /* There's at least one interrupt irregardless whether its legacy INTR
- * or MSI or MSI-X */
- for (i = 0; i < max(1, ar_pci->num_msi_intrs); i++)
- free_irq(ar_pci->pdev->irq + i, ar);
+ free_irq(ar_pci->pdev->irq, ar);
}
void ath10k_pci_init_irq_tasklets(struct ath10k *ar)
{
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
- int i;
tasklet_init(&ar_pci->intr_tq, ath10k_pci_tasklet, (unsigned long)ar);
- tasklet_init(&ar_pci->msi_fw_err, ath10k_msi_err_tasklet,
- (unsigned long)ar);
-
- for (i = 0; i < CE_COUNT; i++) {
- ar_pci->pipe_info[i].ar_pci = ar_pci;
- tasklet_init(&ar_pci->pipe_info[i].intr, ath10k_pci_ce_tasklet,
- (unsigned long)&ar_pci->pipe_info[i]);
- }
}
static int ath10k_pci_init_irq(struct ath10k *ar)
@@ -2772,20 +2701,9 @@ static int ath10k_pci_init_irq(struct ath10k *ar)
ath10k_info(ar, "limiting irq mode to: %d\n",
ath10k_pci_irq_mode);
- /* Try MSI-X */
- if (ath10k_pci_irq_mode == ATH10K_PCI_IRQ_AUTO) {
- ar_pci->num_msi_intrs = MSI_ASSIGN_CE_MAX + 1;
- ret = pci_enable_msi_range(ar_pci->pdev, ar_pci->num_msi_intrs,
- ar_pci->num_msi_intrs);
- if (ret > 0)
- return 0;
-
- /* fall-through */
- }
-
/* Try MSI */
if (ath10k_pci_irq_mode != ATH10K_PCI_IRQ_LEGACY) {
- ar_pci->num_msi_intrs = 1;
+ ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_MSI;
ret = pci_enable_msi(ar_pci->pdev);
if (ret == 0)
return 0;
@@ -2801,7 +2719,7 @@ static int ath10k_pci_init_irq(struct ath10k *ar)
* This write might get lost if target has NOT written BAR.
* For now, fix the race by repeating the write in below
* synchronization checking. */
- ar_pci->num_msi_intrs = 0;
+ ar_pci->oper_irq_mode = ATH10K_PCI_IRQ_LEGACY;
ath10k_pci_write32(ar, SOC_CORE_BASE_ADDRESS + PCIE_INTR_ENABLE_ADDRESS,
PCIE_INTR_FIRMWARE_MASK | PCIE_INTR_CE_MASK_ALL);
@@ -2819,8 +2737,8 @@ static int ath10k_pci_deinit_irq(struct ath10k *ar)
{
struct ath10k_pci *ar_pci = ath10k_pci_priv(ar);
- switch (ar_pci->num_msi_intrs) {
- case 0:
+ switch (ar_pci->oper_irq_mode) {
+ case ATH10K_PCI_IRQ_LEGACY:
ath10k_pci_deinit_irq_legacy(ar);
break;
default:
@@ -2858,7 +2776,7 @@ int ath10k_pci_wait_for_target_init(struct ath10k *ar)
if (val & FW_IND_INITIALIZED)
break;
- if (ar_pci->num_msi_intrs == 0)
+ if (ar_pci->oper_irq_mode == ATH10K_PCI_IRQ_LEGACY)
/* Fix potential race by repeating CORE_BASE writes */
ath10k_pci_enable_legacy_irq(ar);
@@ -3136,8 +3054,8 @@ static int ath10k_pci_probe(struct pci_dev *pdev,
goto err_sleep;
}
- ath10k_info(ar, "pci irq %s interrupts %d irq_mode %d reset_mode %d\n",
- ath10k_pci_get_irq_method(ar), ar_pci->num_msi_intrs,
+ ath10k_info(ar, "pci irq %s oper_irq_mode %d irq_mode %d reset_mode %d\n",
+ ath10k_pci_get_irq_method(ar), ar_pci->oper_irq_mode,
ath10k_pci_irq_mode, ath10k_pci_reset_mode);
ret = ath10k_pci_request_irq(ar);
@@ -3255,7 +3173,6 @@ MODULE_DESCRIPTION("Driver support for Atheros QCA988X PCIe devices");
MODULE_LICENSE("Dual BSD/GPL");
/* QCA988x 2.0 firmware files */
-MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" QCA988X_HW_2_0_FW_FILE);
MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" ATH10K_FW_API2_FILE);
MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" ATH10K_FW_API3_FILE);
MODULE_FIRMWARE(QCA988X_HW_2_0_FW_DIR "/" ATH10K_FW_API4_FILE);
diff --git a/drivers/net/wireless/ath/ath10k/pci.h b/drivers/net/wireless/ath/ath10k/pci.h
index 249c73a69800..959dc321b75e 100644
--- a/drivers/net/wireless/ath/ath10k/pci.h
+++ b/drivers/net/wireless/ath/ath10k/pci.h
@@ -148,9 +148,6 @@ struct ath10k_pci_pipe {
/* protects compl_free and num_send_allowed */
spinlock_t pipe_lock;
-
- struct ath10k_pci *ar_pci;
- struct tasklet_struct intr;
};
struct ath10k_pci_supp_chip {
@@ -164,6 +161,12 @@ struct ath10k_bus_ops {
int (*get_num_banks)(struct ath10k *ar);
};
+enum ath10k_pci_irq_mode {
+ ATH10K_PCI_IRQ_AUTO = 0,
+ ATH10K_PCI_IRQ_LEGACY = 1,
+ ATH10K_PCI_IRQ_MSI = 2,
+};
+
struct ath10k_pci {
struct pci_dev *pdev;
struct device *dev;
@@ -171,14 +174,10 @@ struct ath10k_pci {
void __iomem *mem;
size_t mem_len;
- /*
- * Number of MSI interrupts granted, 0 --> using legacy PCI line
- * interrupts.
- */
- int num_msi_intrs;
+ /* Operating interrupt mode */
+ enum ath10k_pci_irq_mode oper_irq_mode;
struct tasklet_struct intr_tq;
- struct tasklet_struct msi_fw_err;
struct ath10k_pci_pipe pipe_info[CE_COUNT_MAX];
diff --git a/drivers/net/wireless/ath/ath10k/swap.c b/drivers/net/wireless/ath/ath10k/swap.c
index 3ca3fae408a7..0c5f5863dac8 100644
--- a/drivers/net/wireless/ath/ath10k/swap.c
+++ b/drivers/net/wireless/ath/ath10k/swap.c
@@ -134,27 +134,17 @@ ath10k_swap_code_seg_alloc(struct ath10k *ar, size_t swap_bin_len)
return seg_info;
}
-int ath10k_swap_code_seg_configure(struct ath10k *ar,
- enum ath10k_swap_code_seg_bin_type type)
+int ath10k_swap_code_seg_configure(struct ath10k *ar)
{
int ret;
struct ath10k_swap_code_seg_info *seg_info = NULL;
- switch (type) {
- case ATH10K_SWAP_CODE_SEG_BIN_TYPE_FW:
- if (!ar->swap.firmware_swap_code_seg_info)
- return 0;
-
- ath10k_dbg(ar, ATH10K_DBG_BOOT, "boot found firmware code swap binary\n");
- seg_info = ar->swap.firmware_swap_code_seg_info;
- break;
- default:
- case ATH10K_SWAP_CODE_SEG_BIN_TYPE_OTP:
- case ATH10K_SWAP_CODE_SEG_BIN_TYPE_UTF:
- ath10k_warn(ar, "ignoring unknown code swap binary type %d\n",
- type);
+ if (!ar->swap.firmware_swap_code_seg_info)
return 0;
- }
+
+ ath10k_dbg(ar, ATH10K_DBG_BOOT, "boot found firmware code swap binary\n");
+
+ seg_info = ar->swap.firmware_swap_code_seg_info;
ret = ath10k_bmi_write_memory(ar, seg_info->target_addr,
&seg_info->seg_hw_info,
@@ -171,8 +161,13 @@ int ath10k_swap_code_seg_configure(struct ath10k *ar,
void ath10k_swap_code_seg_release(struct ath10k *ar)
{
ath10k_swap_code_seg_free(ar, ar->swap.firmware_swap_code_seg_info);
- ar->swap.firmware_codeswap_data = NULL;
- ar->swap.firmware_codeswap_len = 0;
+
+ /* FIXME: these two assignments look to be in the wrong place! Shouldn't
+ * they be in ath10k_core_free_firmware_files() like the rest?
+ */
+ ar->normal_mode_fw.fw_file.codeswap_data = NULL;
+ ar->normal_mode_fw.fw_file.codeswap_len = 0;
+
ar->swap.firmware_swap_code_seg_info = NULL;
}
@@ -180,20 +175,23 @@ int ath10k_swap_code_seg_init(struct ath10k *ar)
{
int ret;
struct ath10k_swap_code_seg_info *seg_info;
+ const void *codeswap_data;
+ size_t codeswap_len;
+
+ codeswap_data = ar->normal_mode_fw.fw_file.codeswap_data;
+ codeswap_len = ar->normal_mode_fw.fw_file.codeswap_len;
- if (!ar->swap.firmware_codeswap_len || !ar->swap.firmware_codeswap_data)
+ if (!codeswap_len || !codeswap_data)
return 0;
- seg_info = ath10k_swap_code_seg_alloc(ar,
- ar->swap.firmware_codeswap_len);
+ seg_info = ath10k_swap_code_seg_alloc(ar, codeswap_len);
if (!seg_info) {
ath10k_err(ar, "failed to allocate fw code swap segment\n");
return -ENOMEM;
}
ret = ath10k_swap_code_seg_fill(ar, seg_info,
- ar->swap.firmware_codeswap_data,
- ar->swap.firmware_codeswap_len);
+ codeswap_data, codeswap_len);
if (ret) {
ath10k_warn(ar, "failed to initialize fw code swap segment: %d\n",
diff --git a/drivers/net/wireless/ath/ath10k/swap.h b/drivers/net/wireless/ath/ath10k/swap.h
index 5c89952dd20f..36991c7b07a0 100644
--- a/drivers/net/wireless/ath/ath10k/swap.h
+++ b/drivers/net/wireless/ath/ath10k/swap.h
@@ -39,12 +39,6 @@ union ath10k_swap_code_seg_item {
struct ath10k_swap_code_seg_tail tail;
} __packed;
-enum ath10k_swap_code_seg_bin_type {
- ATH10K_SWAP_CODE_SEG_BIN_TYPE_OTP,
- ATH10K_SWAP_CODE_SEG_BIN_TYPE_FW,
- ATH10K_SWAP_CODE_SEG_BIN_TYPE_UTF,
-};
-
struct ath10k_swap_code_seg_hw_info {
/* Swap binary image size */
__le32 swap_size;
@@ -64,8 +58,7 @@ struct ath10k_swap_code_seg_info {
dma_addr_t paddr[ATH10K_SWAP_CODE_SEG_NUM_SUPPORTED];
};
-int ath10k_swap_code_seg_configure(struct ath10k *ar,
- enum ath10k_swap_code_seg_bin_type type);
+int ath10k_swap_code_seg_configure(struct ath10k *ar);
void ath10k_swap_code_seg_release(struct ath10k *ar);
int ath10k_swap_code_seg_init(struct ath10k *ar);
diff --git a/drivers/net/wireless/ath/ath10k/targaddrs.h b/drivers/net/wireless/ath/ath10k/targaddrs.h
index 361f143b019c..8e24099fa936 100644
--- a/drivers/net/wireless/ath/ath10k/targaddrs.h
+++ b/drivers/net/wireless/ath/ath10k/targaddrs.h
@@ -438,7 +438,7 @@ Fw Mode/SubMode Mask
((HOST_INTEREST->hi_pwr_save_flags & HI_PWR_SAVE_LPL_ENABLED))
#define HI_DEV_LPL_TYPE_GET(_devix) \
(HOST_INTEREST->hi_pwr_save_flags & ((HI_PWR_SAVE_LPL_DEV_MASK) << \
- (HI_PWR_SAVE_LPL_DEV0_LSB + (_devix)*2)))
+ (HI_PWR_SAVE_LPL_DEV0_LSB + (_devix) * 2)))
#define HOST_INTEREST_SMPS_IS_ALLOWED() \
((HOST_INTEREST->hi_smps_options & HI_SMPS_ALLOW_MASK))
diff --git a/drivers/net/wireless/ath/ath10k/testmode.c b/drivers/net/wireless/ath/ath10k/testmode.c
index 1d5a2fdcbf56..120f4234d3b0 100644
--- a/drivers/net/wireless/ath/ath10k/testmode.c
+++ b/drivers/net/wireless/ath/ath10k/testmode.c
@@ -139,127 +139,8 @@ static int ath10k_tm_cmd_get_version(struct ath10k *ar, struct nlattr *tb[])
return cfg80211_testmode_reply(skb);
}
-static int ath10k_tm_fetch_utf_firmware_api_2(struct ath10k *ar)
-{
- size_t len, magic_len, ie_len;
- struct ath10k_fw_ie *hdr;
- char filename[100];
- __le32 *version;
- const u8 *data;
- int ie_id, ret;
-
- snprintf(filename, sizeof(filename), "%s/%s",
- ar->hw_params.fw.dir, ATH10K_FW_UTF_API2_FILE);
-
- /* load utf firmware image */
- ret = request_firmware(&ar->testmode.utf, filename, ar->dev);
- if (ret) {
- ath10k_warn(ar, "failed to retrieve utf firmware '%s': %d\n",
- filename, ret);
- return ret;
- }
-
- data = ar->testmode.utf->data;
- len = ar->testmode.utf->size;
-
- /* FIXME: call release_firmware() in error cases */
-
- /* magic also includes the null byte, check that as well */
- magic_len = strlen(ATH10K_FIRMWARE_MAGIC) + 1;
-
- if (len < magic_len) {
- ath10k_err(ar, "utf firmware file is too small to contain magic\n");
- ret = -EINVAL;
- goto err;
- }
-
- if (memcmp(data, ATH10K_FIRMWARE_MAGIC, magic_len) != 0) {
- ath10k_err(ar, "invalid firmware magic\n");
- ret = -EINVAL;
- goto err;
- }
-
- /* jump over the padding */
- magic_len = ALIGN(magic_len, 4);
-
- len -= magic_len;
- data += magic_len;
-
- /* loop elements */
- while (len > sizeof(struct ath10k_fw_ie)) {
- hdr = (struct ath10k_fw_ie *)data;
-
- ie_id = le32_to_cpu(hdr->id);
- ie_len = le32_to_cpu(hdr->len);
-
- len -= sizeof(*hdr);
- data += sizeof(*hdr);
-
- if (len < ie_len) {
- ath10k_err(ar, "invalid length for FW IE %d (%zu < %zu)\n",
- ie_id, len, ie_len);
- ret = -EINVAL;
- goto err;
- }
-
- switch (ie_id) {
- case ATH10K_FW_IE_FW_VERSION:
- if (ie_len > sizeof(ar->testmode.utf_version) - 1)
- break;
-
- memcpy(ar->testmode.utf_version, data, ie_len);
- ar->testmode.utf_version[ie_len] = '\0';
-
- ath10k_dbg(ar, ATH10K_DBG_TESTMODE,
- "testmode found fw utf version %s\n",
- ar->testmode.utf_version);
- break;
- case ATH10K_FW_IE_TIMESTAMP:
- /* ignore timestamp, but don't warn about it either */
- break;
- case ATH10K_FW_IE_FW_IMAGE:
- ath10k_dbg(ar, ATH10K_DBG_TESTMODE,
- "testmode found fw image ie (%zd B)\n",
- ie_len);
-
- ar->testmode.utf_firmware_data = data;
- ar->testmode.utf_firmware_len = ie_len;
- break;
- case ATH10K_FW_IE_WMI_OP_VERSION:
- if (ie_len != sizeof(u32))
- break;
- version = (__le32 *)data;
- ar->testmode.op_version = le32_to_cpup(version);
- ath10k_dbg(ar, ATH10K_DBG_TESTMODE, "testmode found fw ie wmi op version %d\n",
- ar->testmode.op_version);
- break;
- default:
- ath10k_warn(ar, "Unknown testmode FW IE: %u\n",
- le32_to_cpu(hdr->id));
- break;
- }
- /* jump over the padding */
- ie_len = ALIGN(ie_len, 4);
-
- len -= ie_len;
- data += ie_len;
- }
-
- if (!ar->testmode.utf_firmware_data || !ar->testmode.utf_firmware_len) {
- ath10k_err(ar, "No ATH10K_FW_IE_FW_IMAGE found\n");
- ret = -EINVAL;
- goto err;
- }
-
- return 0;
-
-err:
- release_firmware(ar->testmode.utf);
-
- return ret;
-}
-
-static int ath10k_tm_fetch_utf_firmware_api_1(struct ath10k *ar)
+static int ath10k_tm_fetch_utf_firmware_api_1(struct ath10k *ar,
+ struct ath10k_fw_file *fw_file)
{
char filename[100];
int ret;
@@ -268,7 +149,7 @@ static int ath10k_tm_fetch_utf_firmware_api_1(struct ath10k *ar)
ar->hw_params.fw.dir, ATH10K_FW_UTF_FILE);
/* load utf firmware image */
- ret = request_firmware(&ar->testmode.utf, filename, ar->dev);
+ ret = request_firmware(&fw_file->firmware, filename, ar->dev);
if (ret) {
ath10k_warn(ar, "failed to retrieve utf firmware '%s': %d\n",
filename, ret);
@@ -281,24 +162,27 @@ static int ath10k_tm_fetch_utf_firmware_api_1(struct ath10k *ar)
* correct WMI interface.
*/
- ar->testmode.op_version = ATH10K_FW_WMI_OP_VERSION_10_1;
- ar->testmode.utf_firmware_data = ar->testmode.utf->data;
- ar->testmode.utf_firmware_len = ar->testmode.utf->size;
+ fw_file->wmi_op_version = ATH10K_FW_WMI_OP_VERSION_10_1;
+ fw_file->htt_op_version = ATH10K_FW_HTT_OP_VERSION_10_1;
+ fw_file->firmware_data = fw_file->firmware->data;
+ fw_file->firmware_len = fw_file->firmware->size;
return 0;
}
static int ath10k_tm_fetch_firmware(struct ath10k *ar)
{
+ struct ath10k_fw_components *utf_mode_fw;
int ret;
- ret = ath10k_tm_fetch_utf_firmware_api_2(ar);
+ ret = ath10k_core_fetch_firmware_api_n(ar, ATH10K_FW_UTF_API2_FILE,
+ &ar->testmode.utf_mode_fw.fw_file);
if (ret == 0) {
ath10k_dbg(ar, ATH10K_DBG_TESTMODE, "testmode using fw utf api 2");
- return 0;
+ goto out;
}
- ret = ath10k_tm_fetch_utf_firmware_api_1(ar);
+ ret = ath10k_tm_fetch_utf_firmware_api_1(ar, &ar->testmode.utf_mode_fw.fw_file);
if (ret) {
ath10k_err(ar, "failed to fetch utf firmware binary: %d", ret);
return ret;
@@ -306,6 +190,21 @@ static int ath10k_tm_fetch_firmware(struct ath10k *ar)
ath10k_dbg(ar, ATH10K_DBG_TESTMODE, "testmode using utf api 1");
+out:
+ utf_mode_fw = &ar->testmode.utf_mode_fw;
+
+ /* Use the same board data file as the normal firmware uses (but
+ * it's still "owned" by normal_mode_fw, so we shouldn't free it).
+ */
+ utf_mode_fw->board_data = ar->normal_mode_fw.board_data;
+ utf_mode_fw->board_len = ar->normal_mode_fw.board_len;
+
+ if (!utf_mode_fw->fw_file.otp_data) {
+ ath10k_info(ar, "utf.bin didn't contain otp binary, taking it from the normal mode firmware");
+ utf_mode_fw->fw_file.otp_data = ar->normal_mode_fw.fw_file.otp_data;
+ utf_mode_fw->fw_file.otp_len = ar->normal_mode_fw.fw_file.otp_len;
+ }
+
return 0;
}
@@ -329,7 +228,7 @@ static int ath10k_tm_cmd_utf_start(struct ath10k *ar, struct nlattr *tb[])
goto err;
}
- if (WARN_ON(ar->testmode.utf != NULL)) {
+ if (WARN_ON(ar->testmode.utf_mode_fw.fw_file.firmware != NULL)) {
/* utf image is already downloaded, it shouldn't be */
ret = -EEXIST;
goto err;
@@ -344,27 +243,19 @@ static int ath10k_tm_cmd_utf_start(struct ath10k *ar, struct nlattr *tb[])
spin_lock_bh(&ar->data_lock);
ar->testmode.utf_monitor = true;
spin_unlock_bh(&ar->data_lock);
- BUILD_BUG_ON(sizeof(ar->fw_features) !=
- sizeof(ar->testmode.orig_fw_features));
-
- memcpy(ar->testmode.orig_fw_features, ar->fw_features,
- sizeof(ar->fw_features));
- ar->testmode.orig_wmi_op_version = ar->wmi.op_version;
- memset(ar->fw_features, 0, sizeof(ar->fw_features));
-
- ar->wmi.op_version = ar->testmode.op_version;
ath10k_dbg(ar, ATH10K_DBG_TESTMODE, "testmode wmi version %d\n",
- ar->wmi.op_version);
+ ar->testmode.utf_mode_fw.fw_file.wmi_op_version);
ret = ath10k_hif_power_up(ar);
if (ret) {
ath10k_err(ar, "failed to power up hif (testmode): %d\n", ret);
ar->state = ATH10K_STATE_OFF;
- goto err_fw_features;
+ goto err_release_utf_mode_fw;
}
- ret = ath10k_core_start(ar, ATH10K_FIRMWARE_MODE_UTF);
+ ret = ath10k_core_start(ar, ATH10K_FIRMWARE_MODE_UTF,
+ &ar->testmode.utf_mode_fw);
if (ret) {
ath10k_err(ar, "failed to start core (testmode): %d\n", ret);
ar->state = ATH10K_STATE_OFF;
@@ -373,8 +264,8 @@ static int ath10k_tm_cmd_utf_start(struct ath10k *ar, struct nlattr *tb[])
ar->state = ATH10K_STATE_UTF;
- if (strlen(ar->testmode.utf_version) > 0)
- ver = ar->testmode.utf_version;
+ if (strlen(ar->testmode.utf_mode_fw.fw_file.fw_version) > 0)
+ ver = ar->testmode.utf_mode_fw.fw_file.fw_version;
else
ver = "API 1";
@@ -387,14 +278,9 @@ static int ath10k_tm_cmd_utf_start(struct ath10k *ar, struct nlattr *tb[])
err_power_down:
ath10k_hif_power_down(ar);
-err_fw_features:
- /* return the original firmware features */
- memcpy(ar->fw_features, ar->testmode.orig_fw_features,
- sizeof(ar->fw_features));
- ar->wmi.op_version = ar->testmode.orig_wmi_op_version;
-
- release_firmware(ar->testmode.utf);
- ar->testmode.utf = NULL;
+err_release_utf_mode_fw:
+ release_firmware(ar->testmode.utf_mode_fw.fw_file.firmware);
+ ar->testmode.utf_mode_fw.fw_file.firmware = NULL;
err:
mutex_unlock(&ar->conf_mutex);
@@ -415,13 +301,8 @@ static void __ath10k_tm_cmd_utf_stop(struct ath10k *ar)
spin_unlock_bh(&ar->data_lock);
- /* return the original firmware features */
- memcpy(ar->fw_features, ar->testmode.orig_fw_features,
- sizeof(ar->fw_features));
- ar->wmi.op_version = ar->testmode.orig_wmi_op_version;
-
- release_firmware(ar->testmode.utf);
- ar->testmode.utf = NULL;
+ release_firmware(ar->testmode.utf_mode_fw.fw_file.firmware);
+ ar->testmode.utf_mode_fw.fw_file.firmware = NULL;
ar->state = ATH10K_STATE_OFF;
}
diff --git a/drivers/net/wireless/ath/ath10k/thermal.h b/drivers/net/wireless/ath/ath10k/thermal.h
index c9223e9e962f..3abb97f63b1e 100644
--- a/drivers/net/wireless/ath/ath10k/thermal.h
+++ b/drivers/net/wireless/ath/ath10k/thermal.h
@@ -20,7 +20,7 @@
#define ATH10K_QUIET_PERIOD_MIN 25
#define ATH10K_QUIET_START_OFFSET 10
#define ATH10K_HWMON_NAME_LEN 15
-#define ATH10K_THERMAL_SYNC_TIMEOUT_HZ (5*HZ)
+#define ATH10K_THERMAL_SYNC_TIMEOUT_HZ (5 * HZ)
#define ATH10K_THERMAL_THROTTLE_MAX 100
struct ath10k_thermal {
diff --git a/drivers/net/wireless/ath/ath10k/txrx.c b/drivers/net/wireless/ath/ath10k/txrx.c
index fbfb608e48ab..576e7c42ed65 100644
--- a/drivers/net/wireless/ath/ath10k/txrx.c
+++ b/drivers/net/wireless/ath/ath10k/txrx.c
@@ -49,25 +49,25 @@ out:
spin_unlock_bh(&ar->data_lock);
}
-void ath10k_txrx_tx_unref(struct ath10k_htt *htt,
- const struct htt_tx_done *tx_done)
+int ath10k_txrx_tx_unref(struct ath10k_htt *htt,
+ const struct htt_tx_done *tx_done)
{
struct ath10k *ar = htt->ar;
struct device *dev = ar->dev;
struct ieee80211_tx_info *info;
+ struct ieee80211_txq *txq;
struct ath10k_skb_cb *skb_cb;
+ struct ath10k_txq *artxq;
struct sk_buff *msdu;
- bool limit_mgmt_desc = false;
ath10k_dbg(ar, ATH10K_DBG_HTT,
- "htt tx completion msdu_id %u discard %d no_ack %d success %d\n",
- tx_done->msdu_id, !!tx_done->discard,
- !!tx_done->no_ack, !!tx_done->success);
+ "htt tx completion msdu_id %u status %d\n",
+ tx_done->msdu_id, tx_done->status);
if (tx_done->msdu_id >= htt->max_num_pending_tx) {
ath10k_warn(ar, "warning: msdu_id %d too big, ignoring\n",
tx_done->msdu_id);
- return;
+ return -EINVAL;
}
spin_lock_bh(&htt->tx_lock);
@@ -76,17 +76,18 @@ void ath10k_txrx_tx_unref(struct ath10k_htt *htt,
ath10k_warn(ar, "received tx completion for invalid msdu_id: %d\n",
tx_done->msdu_id);
spin_unlock_bh(&htt->tx_lock);
- return;
+ return -ENOENT;
}
skb_cb = ATH10K_SKB_CB(msdu);
+ txq = skb_cb->txq;
- if (unlikely(skb_cb->flags & ATH10K_SKB_F_MGMT) &&
- ar->hw_params.max_probe_resp_desc_thres)
- limit_mgmt_desc = true;
+ if (txq) {
+ artxq = (void *)txq->drv_priv;
+ artxq->num_fw_queued--;
+ }
ath10k_htt_tx_free_msdu_id(htt, tx_done->msdu_id);
- __ath10k_htt_tx_dec_pending(htt, limit_mgmt_desc);
+ ath10k_htt_tx_dec_pending(htt);
if (htt->num_pending_tx == 0)
wake_up(&htt->empty_tx_wq);
spin_unlock_bh(&htt->tx_lock);
@@ -99,22 +100,24 @@ void ath10k_txrx_tx_unref(struct ath10k_htt *htt,
memset(&info->status, 0, sizeof(info->status));
trace_ath10k_txrx_tx_unref(ar, tx_done->msdu_id);
- if (tx_done->discard) {
+ if (tx_done->status == HTT_TX_COMPL_STATE_DISCARD) {
ieee80211_free_txskb(htt->ar->hw, msdu);
- return;
+ return 0;
}
if (!(info->flags & IEEE80211_TX_CTL_NO_ACK))
info->flags |= IEEE80211_TX_STAT_ACK;
- if (tx_done->no_ack)
+ if (tx_done->status == HTT_TX_COMPL_STATE_NOACK)
info->flags &= ~IEEE80211_TX_STAT_ACK;
- if (tx_done->success && (info->flags & IEEE80211_TX_CTL_NO_ACK))
+ if ((tx_done->status == HTT_TX_COMPL_STATE_ACK) &&
+ (info->flags & IEEE80211_TX_CTL_NO_ACK))
info->flags |= IEEE80211_TX_STAT_NOACK_TRANSMITTED;
ieee80211_tx_status(htt->ar->hw, msdu);
/* we do not own the msdu anymore */
+ return 0;
}
struct ath10k_peer *ath10k_peer_find(struct ath10k *ar, int vdev_id,
@@ -127,7 +130,7 @@ struct ath10k_peer *ath10k_peer_find(struct ath10k *ar, int vdev_id,
list_for_each_entry(peer, &ar->peers, list) {
if (peer->vdev_id != vdev_id)
continue;
- if (memcmp(peer->addr, addr, ETH_ALEN))
+ if (!ether_addr_equal(peer->addr, addr))
continue;
return peer;
@@ -163,7 +166,7 @@ static int ath10k_wait_for_peer_common(struct ath10k *ar, int vdev_id,
(mapped == expect_mapped ||
test_bit(ATH10K_FLAG_CRASH_FLUSH, &ar->dev_flags));
- }), 3*HZ);
+ }), 3 * HZ);
if (time_left == 0)
return -ETIMEDOUT;
@@ -187,6 +190,13 @@ void ath10k_peer_map_event(struct ath10k_htt *htt,
struct ath10k *ar = htt->ar;
struct ath10k_peer *peer;
+ if (ev->peer_id >= ATH10K_MAX_NUM_PEER_IDS) {
+ ath10k_warn(ar,
+ "received htt peer map event with idx out of bounds: %hu\n",
+ ev->peer_id);
+ return;
+ }
+
spin_lock_bh(&ar->data_lock);
peer = ath10k_peer_find(ar, ev->vdev_id, ev->addr);
if (!peer) {
@@ -203,6 +213,7 @@ void ath10k_peer_map_event(struct ath10k_htt *htt,
ath10k_dbg(ar, ATH10K_DBG_HTT, "htt peer map vdev %d peer %pM id %d\n",
ev->vdev_id, ev->addr, ev->peer_id);
+ ar->peer_map[ev->peer_id] = peer;
set_bit(ev->peer_id, peer->peer_ids);
exit:
spin_unlock_bh(&ar->data_lock);
@@ -214,6 +225,13 @@ void ath10k_peer_unmap_event(struct ath10k_htt *htt,
struct ath10k *ar = htt->ar;
struct ath10k_peer *peer;
+ if (ev->peer_id >= ATH10K_MAX_NUM_PEER_IDS) {
+ ath10k_warn(ar,
+ "received htt peer unmap event with idx out of bounds: %hu\n",
+ ev->peer_id);
+ return;
+ }
+
spin_lock_bh(&ar->data_lock);
peer = ath10k_peer_find_by_id(ar, ev->peer_id);
if (!peer) {
@@ -225,6 +243,7 @@ void ath10k_peer_unmap_event(struct ath10k_htt *htt,
ath10k_dbg(ar, ATH10K_DBG_HTT, "htt peer unmap vdev %d peer %pM id %d\n",
peer->vdev_id, peer->addr, ev->peer_id);
+ ar->peer_map[ev->peer_id] = NULL;
clear_bit(ev->peer_id, peer->peer_ids);
if (bitmap_empty(peer->peer_ids, ATH10K_MAX_NUM_PEER_IDS)) {
diff --git a/drivers/net/wireless/ath/ath10k/txrx.h b/drivers/net/wireless/ath/ath10k/txrx.h
index a90e09f5c7f2..e7ea1ae1c438 100644
--- a/drivers/net/wireless/ath/ath10k/txrx.h
+++ b/drivers/net/wireless/ath/ath10k/txrx.h
@@ -19,8 +19,8 @@
#include "htt.h"
-void ath10k_txrx_tx_unref(struct ath10k_htt *htt,
- const struct htt_tx_done *tx_done);
+int ath10k_txrx_tx_unref(struct ath10k_htt *htt,
+ const struct htt_tx_done *tx_done);
struct ath10k_peer *ath10k_peer_find(struct ath10k *ar, int vdev_id,
const u8 *addr);
diff --git a/drivers/net/wireless/ath/ath10k/wmi-ops.h b/drivers/net/wireless/ath/ath10k/wmi-ops.h
index 32ab34edceb5..64ebd304f907 100644
--- a/drivers/net/wireless/ath/ath10k/wmi-ops.h
+++ b/drivers/net/wireless/ath/ath10k/wmi-ops.h
@@ -186,8 +186,14 @@ struct wmi_ops {
u8 enable,
u32 detect_level,
u32 detect_margin);
+ struct sk_buff *(*ext_resource_config)(struct ath10k *ar,
+ enum wmi_host_platform_type type,
+ u32 fw_feature_bitmap);
int (*get_vdev_subtype)(struct ath10k *ar,
enum wmi_vdev_subtype subtype);
+ struct sk_buff *(*gen_pdev_bss_chan_info_req)
+ (struct ath10k *ar,
+ enum wmi_bss_survey_req_type type);
};
int ath10k_wmi_cmd_send(struct ath10k *ar, struct sk_buff *skb, u32 cmd_id);
@@ -1330,6 +1336,26 @@ ath10k_wmi_pdev_enable_adaptive_cca(struct ath10k *ar, u8 enable,
}
static inline int
+ath10k_wmi_ext_resource_config(struct ath10k *ar,
+ enum wmi_host_platform_type type,
+ u32 fw_feature_bitmap)
+{
+ struct sk_buff *skb;
+
+ if (!ar->wmi.ops->ext_resource_config)
+ return -EOPNOTSUPP;
+
+ skb = ar->wmi.ops->ext_resource_config(ar, type,
+ fw_feature_bitmap);
+
+ if (IS_ERR(skb))
+ return PTR_ERR(skb);
+
+ return ath10k_wmi_cmd_send(ar, skb,
+ ar->wmi.cmd->ext_resource_cfg_cmdid);
+}
+
+static inline int
ath10k_wmi_get_vdev_subtype(struct ath10k *ar, enum wmi_vdev_subtype subtype)
{
if (!ar->wmi.ops->get_vdev_subtype)
@@ -1338,4 +1364,22 @@ ath10k_wmi_get_vdev_subtype(struct ath10k *ar, enum wmi_vdev_subtype subtype)
return ar->wmi.ops->get_vdev_subtype(ar, subtype);
}
+static inline int
+ath10k_wmi_pdev_bss_chan_info_request(struct ath10k *ar,
+ enum wmi_bss_survey_req_type type)
+{
+ struct ath10k_wmi *wmi = &ar->wmi;
+ struct sk_buff *skb;
+
+ if (!wmi->ops->gen_pdev_bss_chan_info_req)
+ return -EOPNOTSUPP;
+
+ skb = wmi->ops->gen_pdev_bss_chan_info_req(ar, type);
+ if (IS_ERR(skb))
+ return PTR_ERR(skb);
+
+ return ath10k_wmi_cmd_send(ar, skb,
+ wmi->cmd->pdev_bss_chan_info_request_cmdid);
+}
+
#endif
diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.c b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
index 108593202052..e09337ee7c96 100644
--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.c
+++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.c
@@ -3409,6 +3409,7 @@ static struct wmi_vdev_param_map wmi_tlv_vdev_param_map = {
.meru_vc = WMI_VDEV_PARAM_UNSUPPORTED,
.rx_decap_type = WMI_VDEV_PARAM_UNSUPPORTED,
.bw_nss_ratemask = WMI_VDEV_PARAM_UNSUPPORTED,
+ .set_tsf = WMI_VDEV_PARAM_UNSUPPORTED,
};
static const struct wmi_ops wmi_tlv_ops = {
diff --git a/drivers/net/wireless/ath/ath10k/wmi-tlv.h b/drivers/net/wireless/ath/ath10k/wmi-tlv.h
index dd678590531a..b8aa6000573c 100644
--- a/drivers/net/wireless/ath/ath10k/wmi-tlv.h
+++ b/drivers/net/wireless/ath/ath10k/wmi-tlv.h
@@ -968,8 +968,8 @@ enum wmi_tlv_service {
#define WMI_SERVICE_IS_ENABLED(wmi_svc_bmap, svc_id, len) \
((svc_id) < (len) && \
- __le32_to_cpu((wmi_svc_bmap)[(svc_id)/(sizeof(u32))]) & \
- BIT((svc_id)%(sizeof(u32))))
+ __le32_to_cpu((wmi_svc_bmap)[(svc_id) / (sizeof(u32))]) & \
+ BIT((svc_id) % (sizeof(u32))))
#define SVCMAP(x, y, len) \
do { \
diff --git a/drivers/net/wireless/ath/ath10k/wmi.c b/drivers/net/wireless/ath/ath10k/wmi.c
index 70261387d1a5..2c300329ebc3 100644
--- a/drivers/net/wireless/ath/ath10k/wmi.c
+++ b/drivers/net/wireless/ath/ath10k/wmi.c
@@ -521,7 +521,8 @@ static struct wmi_cmd_map wmi_10_2_4_cmd_map = {
.vdev_filter_neighbor_rx_packets_cmdid = WMI_CMD_UNSUPPORTED,
.mu_cal_start_cmdid = WMI_CMD_UNSUPPORTED,
.set_cca_params_cmdid = WMI_CMD_UNSUPPORTED,
- .pdev_bss_chan_info_request_cmdid = WMI_CMD_UNSUPPORTED,
+ .pdev_bss_chan_info_request_cmdid =
+ WMI_10_2_PDEV_BSS_CHAN_INFO_REQUEST_CMDID,
};
/* 10.4 WMI cmd track */
@@ -705,6 +706,7 @@ static struct wmi_cmd_map wmi_10_4_cmd_map = {
.set_cca_params_cmdid = WMI_10_4_SET_CCA_PARAMS_CMDID,
.pdev_bss_chan_info_request_cmdid =
WMI_10_4_PDEV_BSS_CHAN_INFO_REQUEST_CMDID,
+ .ext_resource_cfg_cmdid = WMI_10_4_EXT_RESOURCE_CFG_CMDID,
};
/* MAIN WMI VDEV param map */
@@ -780,6 +782,7 @@ static struct wmi_vdev_param_map wmi_vdev_param_map = {
.meru_vc = WMI_VDEV_PARAM_UNSUPPORTED,
.rx_decap_type = WMI_VDEV_PARAM_UNSUPPORTED,
.bw_nss_ratemask = WMI_VDEV_PARAM_UNSUPPORTED,
+ .set_tsf = WMI_VDEV_PARAM_UNSUPPORTED,
};
/* 10.X WMI VDEV param map */
@@ -855,6 +858,7 @@ static struct wmi_vdev_param_map wmi_10x_vdev_param_map = {
.meru_vc = WMI_VDEV_PARAM_UNSUPPORTED,
.rx_decap_type = WMI_VDEV_PARAM_UNSUPPORTED,
.bw_nss_ratemask = WMI_VDEV_PARAM_UNSUPPORTED,
+ .set_tsf = WMI_VDEV_PARAM_UNSUPPORTED,
};
static struct wmi_vdev_param_map wmi_10_2_4_vdev_param_map = {
@@ -929,6 +933,7 @@ static struct wmi_vdev_param_map wmi_10_2_4_vdev_param_map = {
.meru_vc = WMI_VDEV_PARAM_UNSUPPORTED,
.rx_decap_type = WMI_VDEV_PARAM_UNSUPPORTED,
.bw_nss_ratemask = WMI_VDEV_PARAM_UNSUPPORTED,
+ .set_tsf = WMI_10X_VDEV_PARAM_TSF_INCREMENT,
};
static struct wmi_vdev_param_map wmi_10_4_vdev_param_map = {
@@ -1004,6 +1009,7 @@ static struct wmi_vdev_param_map wmi_10_4_vdev_param_map = {
.meru_vc = WMI_10_4_VDEV_PARAM_MERU_VC,
.rx_decap_type = WMI_10_4_VDEV_PARAM_RX_DECAP_TYPE,
.bw_nss_ratemask = WMI_10_4_VDEV_PARAM_BW_NSS_RATEMASK,
+ .set_tsf = WMI_10_4_VDEV_PARAM_TSF_INCREMENT,
};
static struct wmi_pdev_param_map wmi_pdev_param_map = {
@@ -1628,6 +1634,7 @@ void ath10k_wmi_put_wmi_channel(struct wmi_channel *ch,
ch->max_power = arg->max_power;
ch->reg_power = arg->max_reg_power;
ch->antenna_max = arg->max_antenna_gain;
+ ch->max_tx_power = arg->max_power;
/* mode & flags share storage */
ch->mode = arg->mode;
@@ -1803,7 +1810,7 @@ int ath10k_wmi_cmd_send(struct ath10k *ar, struct sk_buff *skb, u32 cmd_id)
ret = -ESHUTDOWN;
(ret != -EAGAIN);
- }), 3*HZ);
+ }), 3 * HZ);
if (ret)
dev_kfree_skb_any(skb);
@@ -2099,34 +2106,6 @@ int ath10k_wmi_event_scan(struct ath10k *ar, struct sk_buff *skb)
return 0;
}
-static inline enum ieee80211_band phy_mode_to_band(u32 phy_mode)
-{
- enum ieee80211_band band;
-
- switch (phy_mode) {
- case MODE_11A:
- case MODE_11NA_HT20:
- case MODE_11NA_HT40:
- case MODE_11AC_VHT20:
- case MODE_11AC_VHT40:
- case MODE_11AC_VHT80:
- band = IEEE80211_BAND_5GHZ;
- break;
- case MODE_11G:
- case MODE_11B:
- case MODE_11GONLY:
- case MODE_11NG_HT20:
- case MODE_11NG_HT40:
- case MODE_11AC_VHT20_2G:
- case MODE_11AC_VHT40_2G:
- case MODE_11AC_VHT80_2G:
- default:
- band = IEEE80211_BAND_2GHZ;
- }
-
- return band;
-}
-
/* If keys are configured, HW decrypts all frames
* with protected bit set. Mark such frames as decrypted.
*/
@@ -2167,10 +2146,13 @@ static int ath10k_wmi_op_pull_mgmt_rx_ev(struct ath10k *ar, struct sk_buff *skb,
struct wmi_mgmt_rx_event_v1 *ev_v1;
struct wmi_mgmt_rx_event_v2 *ev_v2;
struct wmi_mgmt_rx_hdr_v1 *ev_hdr;
+ struct wmi_mgmt_rx_ext_info *ext_info;
size_t pull_len;
u32 msdu_len;
+ u32 len;
- if (test_bit(ATH10K_FW_FEATURE_EXT_WMI_MGMT_RX, ar->fw_features)) {
+ if (test_bit(ATH10K_FW_FEATURE_EXT_WMI_MGMT_RX,
+ ar->running_fw->fw_file.fw_features)) {
ev_v2 = (struct wmi_mgmt_rx_event_v2 *)skb->data;
ev_hdr = &ev_v2->hdr.v1;
pull_len = sizeof(*ev_v2);
@@ -2195,6 +2177,12 @@ static int ath10k_wmi_op_pull_mgmt_rx_ev(struct ath10k *ar, struct sk_buff *skb,
if (skb->len < msdu_len)
return -EPROTO;
+ if (le32_to_cpu(arg->status) & WMI_RX_STATUS_EXT_INFO) {
+ len = ALIGN(le32_to_cpu(arg->buf_len), 4);
+ ext_info = (struct wmi_mgmt_rx_ext_info *)(skb->data + len);
+ memcpy(&arg->ext_info, ext_info,
+ sizeof(struct wmi_mgmt_rx_ext_info));
+ }
/* the WMI buffer might've ended up being padded to 4 bytes due to HTC
* trailer with credit update. Trim the excess garbage.
*/
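/* Editor's sketch (not part of the patch): the extended-info header parsed
 * above sits right after the management-frame payload, rounded up to a
 * 4-byte boundary. The helper below is a standalone equivalent of the
 * kernel's ALIGN(len, 4) used for that offset.
 */
static u32 example_align4(u32 len)
{
	return (len + 3) & ~3U;	/* same result as ALIGN(len, 4) */
}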
@@ -2211,6 +2199,8 @@ static int ath10k_wmi_10_4_op_pull_mgmt_rx_ev(struct ath10k *ar,
struct wmi_10_4_mgmt_rx_hdr *ev_hdr;
size_t pull_len;
u32 msdu_len;
+ struct wmi_mgmt_rx_ext_info *ext_info;
+ u32 len;
ev = (struct wmi_10_4_mgmt_rx_event *)skb->data;
ev_hdr = &ev->hdr;
@@ -2231,6 +2221,13 @@ static int ath10k_wmi_10_4_op_pull_mgmt_rx_ev(struct ath10k *ar,
if (skb->len < msdu_len)
return -EPROTO;
+ if (le32_to_cpu(arg->status) & WMI_RX_STATUS_EXT_INFO) {
+ len = ALIGN(le32_to_cpu(arg->buf_len), 4);
+ ext_info = (struct wmi_mgmt_rx_ext_info *)(skb->data + len);
+ memcpy(&arg->ext_info, ext_info,
+ sizeof(struct wmi_mgmt_rx_ext_info));
+ }
+
/* Make sure bytes added for padding are removed. */
skb_trim(skb, msdu_len);
@@ -2281,14 +2278,19 @@ int ath10k_wmi_event_mgmt_rx(struct ath10k *ar, struct sk_buff *skb)
if (rx_status & WMI_RX_STATUS_ERR_MIC)
status->flag |= RX_FLAG_MMIC_ERROR;
+ if (rx_status & WMI_RX_STATUS_EXT_INFO) {
+ status->mactime =
+ __le64_to_cpu(arg.ext_info.rx_mac_timestamp);
+ status->flag |= RX_FLAG_MACTIME_END;
+ }
/* Hardware can Rx CCK rates on 5GHz. In that case phy_mode is set to
* MODE_11B. This means phy_mode is not a reliable source for the band
* of mgmt rx.
*/
if (channel >= 1 && channel <= 14) {
- status->band = IEEE80211_BAND_2GHZ;
+ status->band = NL80211_BAND_2GHZ;
} else if (channel >= 36 && channel <= 165) {
- status->band = IEEE80211_BAND_5GHZ;
+ status->band = NL80211_BAND_5GHZ;
} else {
/* Shouldn't happen unless list of advertised channels to
* mac80211 has been changed.
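/* Editor's sketch (not part of the patch): since phy_mode cannot be
 * trusted, the band is derived from the channel number alone, as in the
 * checks above. A hypothetical standalone helper doing the same mapping
 * (the out-of-range error path is omitted):
 */
static enum nl80211_band example_channel_to_band(int channel)
{
	if (channel >= 1 && channel <= 14)
		return NL80211_BAND_2GHZ;
	return NL80211_BAND_5GHZ;	/* channels 36..165 */
}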
@@ -2298,7 +2300,7 @@ int ath10k_wmi_event_mgmt_rx(struct ath10k *ar, struct sk_buff *skb)
return 0;
}
- if (phy_mode == MODE_11B && status->band == IEEE80211_BAND_5GHZ)
+ if (phy_mode == MODE_11B && status->band == NL80211_BAND_5GHZ)
ath10k_dbg(ar, ATH10K_DBG_MGMT, "wmi mgmt rx 11b (CCK) on 5GHz\n");
sband = &ar->mac.sbands[status->band];
@@ -2310,6 +2312,12 @@ int ath10k_wmi_event_mgmt_rx(struct ath10k *ar, struct sk_buff *skb)
hdr = (struct ieee80211_hdr *)skb->data;
fc = le16_to_cpu(hdr->frame_control);
+ /* Firmware is guaranteed to report all essential management frames via
+ * WMI while it can deliver some extra via HTT. Since there can be
+ * duplicates, split the reporting with respect to monitor/sniffing.
+ */
+ status->flag |= RX_FLAG_SKIP_MONITOR;
+
ath10k_wmi_handle_wep_reauth(ar, skb, status);
/* FW delivers WEP Shared Auth frame with Protected Bit set and
@@ -2351,7 +2359,7 @@ static int freq_to_idx(struct ath10k *ar, int freq)
struct ieee80211_supported_band *sband;
int band, ch, idx = 0;
- for (band = IEEE80211_BAND_2GHZ; band < IEEE80211_NUM_BANDS; band++) {
+ for (band = NL80211_BAND_2GHZ; band < NUM_NL80211_BANDS; band++) {
sband = ar->hw->wiphy->bands[band];
if (!sband)
continue;
@@ -2612,6 +2620,16 @@ void ath10k_wmi_pull_peer_stats(const struct wmi_peer_stats *src,
dst->peer_tx_rate = __le32_to_cpu(src->peer_tx_rate);
}
+static void
+ath10k_wmi_10_4_pull_peer_stats(const struct wmi_10_4_peer_stats *src,
+ struct ath10k_fw_stats_peer *dst)
+{
+ ether_addr_copy(dst->peer_macaddr, src->peer_macaddr.addr);
+ dst->peer_rssi = __le32_to_cpu(src->peer_rssi);
+ dst->peer_tx_rate = __le32_to_cpu(src->peer_tx_rate);
+ dst->peer_rx_rate = __le32_to_cpu(src->peer_rx_rate);
+}
+
static int ath10k_wmi_main_op_pull_fw_stats(struct ath10k *ar,
struct sk_buff *skb,
struct ath10k_fw_stats *stats)
@@ -2865,11 +2883,8 @@ static int ath10k_wmi_10_2_4_op_pull_fw_stats(struct ath10k *ar,
const struct wmi_10_2_4_ext_peer_stats *src;
struct ath10k_fw_stats_peer *dst;
int stats_len;
- bool ext_peer_stats_support;
- ext_peer_stats_support = test_bit(WMI_SERVICE_PEER_STATS,
- ar->wmi.svc_map);
- if (ext_peer_stats_support)
+ if (test_bit(WMI_SERVICE_PEER_STATS, ar->wmi.svc_map))
stats_len = sizeof(struct wmi_10_2_4_ext_peer_stats);
else
stats_len = sizeof(struct wmi_10_2_4_peer_stats);
@@ -2886,7 +2901,7 @@ static int ath10k_wmi_10_2_4_op_pull_fw_stats(struct ath10k *ar,
dst->peer_rx_rate = __le32_to_cpu(src->common.peer_rx_rate);
- if (ext_peer_stats_support)
+ if (ath10k_peer_stats_enabled(ar))
dst->rx_duration = __le32_to_cpu(src->rx_duration);
/* FIXME: expose 10.2 specific values */
@@ -2905,6 +2920,7 @@ static int ath10k_wmi_10_4_op_pull_fw_stats(struct ath10k *ar,
u32 num_pdev_ext_stats;
u32 num_vdev_stats;
u32 num_peer_stats;
+ u32 stats_id;
int i;
if (!skb_pull(skb, sizeof(*ev)))
@@ -2914,6 +2930,7 @@ static int ath10k_wmi_10_4_op_pull_fw_stats(struct ath10k *ar,
num_pdev_ext_stats = __le32_to_cpu(ev->num_pdev_ext_stats);
num_vdev_stats = __le32_to_cpu(ev->num_vdev_stats);
num_peer_stats = __le32_to_cpu(ev->num_peer_stats);
+ stats_id = __le32_to_cpu(ev->stats_id);
for (i = 0; i < num_pdev_stats; i++) {
const struct wmi_10_4_pdev_stats *src;
@@ -2953,22 +2970,28 @@ static int ath10k_wmi_10_4_op_pull_fw_stats(struct ath10k *ar,
/* fw doesn't implement vdev stats */
for (i = 0; i < num_peer_stats; i++) {
- const struct wmi_10_4_peer_stats *src;
+ const struct wmi_10_4_peer_extd_stats *src;
struct ath10k_fw_stats_peer *dst;
+ int stats_len;
+ bool extd_peer_stats = !!(stats_id & WMI_10_4_STAT_PEER_EXTD);
+
+ if (extd_peer_stats)
+ stats_len = sizeof(struct wmi_10_4_peer_extd_stats);
+ else
+ stats_len = sizeof(struct wmi_10_4_peer_stats);
src = (void *)skb->data;
- if (!skb_pull(skb, sizeof(*src)))
+ if (!skb_pull(skb, stats_len))
return -EPROTO;
dst = kzalloc(sizeof(*dst), GFP_ATOMIC);
if (!dst)
continue;
- ether_addr_copy(dst->peer_macaddr, src->peer_macaddr.addr);
- dst->peer_rssi = __le32_to_cpu(src->peer_rssi);
- dst->peer_tx_rate = __le32_to_cpu(src->peer_tx_rate);
- dst->peer_rx_rate = __le32_to_cpu(src->peer_rx_rate);
+ ath10k_wmi_10_4_pull_peer_stats(&src->common, dst);
/* FIXME: expose 10.4 specific values */
+ if (extd_peer_stats)
+ dst->rx_duration = __le32_to_cpu(src->rx_duration);
list_add_tail(&dst->list, &stats->peers);
}
@@ -4584,10 +4607,6 @@ static void ath10k_wmi_event_service_ready_work(struct work_struct *work)
ath10k_dbg_dump(ar, ATH10K_DBG_WMI, NULL, "wmi svc: ",
arg.service_map, arg.service_map_len);
- /* only manually set fw features when not using FW IE format */
- if (ar->fw_api == 1 && ar->fw_version_build > 636)
- set_bit(ATH10K_FW_FEATURE_EXT_WMI_MGMT_RX, ar->fw_features);
-
if (ar->num_rf_chains > ar->max_spatial_stream) {
ath10k_warn(ar, "hardware advertises support for more spatial streams than it should (%d > %d)\n",
ar->num_rf_chains, ar->max_spatial_stream);
@@ -4617,10 +4636,16 @@ static void ath10k_wmi_event_service_ready_work(struct work_struct *work)
}
if (test_bit(WMI_SERVICE_PEER_CACHING, ar->wmi.svc_map)) {
+ if (test_bit(ATH10K_FW_FEATURE_PEER_FLOW_CONTROL,
+ ar->running_fw->fw_file.fw_features))
+ ar->num_active_peers = TARGET_10_4_QCACHE_ACTIVE_PEERS_PFC +
+ ar->max_num_vdevs;
+ else
+ ar->num_active_peers = TARGET_10_4_QCACHE_ACTIVE_PEERS +
+ ar->max_num_vdevs;
+
ar->max_num_peers = TARGET_10_4_NUM_QCACHE_PEERS_MAX +
ar->max_num_vdevs;
- ar->num_active_peers = ar->hw_params.qcache_active_peers +
- ar->max_num_vdevs;
ar->num_tids = ar->num_active_peers * 2;
ar->max_num_stations = TARGET_10_4_NUM_QCACHE_PEERS_MAX;
}
@@ -4769,6 +4794,58 @@ static int ath10k_wmi_event_temperature(struct ath10k *ar, struct sk_buff *skb)
return 0;
}
+static int ath10k_wmi_event_pdev_bss_chan_info(struct ath10k *ar,
+ struct sk_buff *skb)
+{
+ struct wmi_pdev_bss_chan_info_event *ev;
+ struct survey_info *survey;
+ u64 busy, total, tx, rx, rx_bss;
+ u32 freq, noise_floor;
+ u32 cc_freq_hz = ar->hw_params.channel_counters_freq_hz;
+ int idx;
+
+ ev = (struct wmi_pdev_bss_chan_info_event *)skb->data;
+ if (WARN_ON(skb->len < sizeof(*ev)))
+ return -EPROTO;
+
+ freq = __le32_to_cpu(ev->freq);
+ noise_floor = __le32_to_cpu(ev->noise_floor);
+ busy = __le64_to_cpu(ev->cycle_busy);
+ total = __le64_to_cpu(ev->cycle_total);
+ tx = __le64_to_cpu(ev->cycle_tx);
+ rx = __le64_to_cpu(ev->cycle_rx);
+ rx_bss = __le64_to_cpu(ev->cycle_rx_bss);
+
+ ath10k_dbg(ar, ATH10K_DBG_WMI,
+ "wmi event pdev bss chan info:\n freq: %d noise: %d cycle: busy %llu total %llu tx %llu rx %llu rx_bss %llu\n",
+ freq, noise_floor, busy, total, tx, rx, rx_bss);
+
+ spin_lock_bh(&ar->data_lock);
+ idx = freq_to_idx(ar, freq);
+ if (idx >= ARRAY_SIZE(ar->survey)) {
+ ath10k_warn(ar, "bss chan info: invalid frequency %d (idx %d out of bounds)\n",
+ freq, idx);
+ goto exit;
+ }
+
+ survey = &ar->survey[idx];
+
+ survey->noise = noise_floor;
+ survey->time = div_u64(total, cc_freq_hz);
+ survey->time_busy = div_u64(busy, cc_freq_hz);
+ survey->time_rx = div_u64(rx_bss, cc_freq_hz);
+ survey->time_tx = div_u64(tx, cc_freq_hz);
+ survey->filled |= (SURVEY_INFO_NOISE_DBM |
+ SURVEY_INFO_TIME |
+ SURVEY_INFO_TIME_BUSY |
+ SURVEY_INFO_TIME_RX |
+ SURVEY_INFO_TIME_TX);
+exit:
+ spin_unlock_bh(&ar->data_lock);
+ complete(&ar->bss_survey_done);
+ return 0;
+}
+
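/* Editor's sketch (not part of the patch): the survey time fields filled
 * in above are simply the raw firmware cycle counters scaled by the
 * channel-counter frequency from ar->hw_params.channel_counters_freq_hz.
 * The helper name is hypothetical.
 */
static u64 example_cycles_to_survey_time(u64 cycles, u32 cc_freq_hz)
{
	if (!cc_freq_hz)	/* guard against a zero divisor */
		return 0;

	/* same scaling as div_u64(cycles, cc_freq_hz) in the handler above */
	return div_u64(cycles, cc_freq_hz);
}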
static void ath10k_wmi_op_rx(struct ath10k *ar, struct sk_buff *skb)
{
struct wmi_cmd_hdr *cmd_hdr;
@@ -5112,6 +5189,9 @@ static void ath10k_wmi_10_2_op_rx(struct ath10k *ar, struct sk_buff *skb)
case WMI_10_2_PDEV_TEMPERATURE_EVENTID:
ath10k_wmi_event_temperature(ar, skb);
break;
+ case WMI_10_2_PDEV_BSS_CHAN_INFO_EVENTID:
+ ath10k_wmi_event_pdev_bss_chan_info(ar, skb);
+ break;
case WMI_10_2_RTT_KEEPALIVE_EVENTID:
case WMI_10_2_GPIO_INPUT_EVENTID:
case WMI_10_2_PEER_RATECODE_LIST_EVENTID:
@@ -5189,6 +5269,7 @@ static void ath10k_wmi_10_4_op_rx(struct ath10k *ar, struct sk_buff *skb)
ath10k_wmi_event_vdev_stopped(ar, skb);
break;
case WMI_10_4_WOW_WAKEUP_HOST_EVENTID:
+ case WMI_10_4_PEER_RATECODE_LIST_EVENTID:
ath10k_dbg(ar, ATH10K_DBG_WMI,
"received event id %d not implemented\n", id);
break;
@@ -5198,6 +5279,9 @@ static void ath10k_wmi_10_4_op_rx(struct ath10k *ar, struct sk_buff *skb)
case WMI_10_4_PDEV_TEMPERATURE_EVENTID:
ath10k_wmi_event_temperature(ar, skb);
break;
+ case WMI_10_4_PDEV_BSS_CHAN_INFO_EVENTID:
+ ath10k_wmi_event_pdev_bss_chan_info(ar, skb);
+ break;
default:
ath10k_warn(ar, "Unknown eventid: %d\n", id);
break;
@@ -5517,7 +5601,8 @@ static struct sk_buff *ath10k_wmi_10_2_op_gen_init(struct ath10k *ar)
config.num_vdevs = __cpu_to_le32(TARGET_10X_NUM_VDEVS);
config.num_peer_keys = __cpu_to_le32(TARGET_10X_NUM_PEER_KEYS);
- if (test_bit(WMI_SERVICE_PEER_STATS, ar->wmi.svc_map)) {
+
+ if (ath10k_peer_stats_enabled(ar)) {
config.num_peers = __cpu_to_le32(TARGET_10X_TX_STATS_NUM_PEERS);
config.num_tids = __cpu_to_le32(TARGET_10X_TX_STATS_NUM_TIDS);
} else {
@@ -5579,9 +5664,12 @@ static struct sk_buff *ath10k_wmi_10_2_op_gen_init(struct ath10k *ar)
test_bit(WMI_SERVICE_COEX_GPIO, ar->wmi.svc_map))
features |= WMI_10_2_COEX_GPIO;
- if (test_bit(WMI_SERVICE_PEER_STATS, ar->wmi.svc_map))
+ if (ath10k_peer_stats_enabled(ar))
features |= WMI_10_2_PEER_STATS;
+ if (test_bit(WMI_SERVICE_BSS_CHANNEL_INFO_64, ar->wmi.svc_map))
+ features |= WMI_10_2_BSS_CHAN_INFO;
+
cmd->resource_config.feature_mask = __cpu_to_le32(features);
memcpy(&cmd->resource_config.common, &config, sizeof(config));
@@ -5800,9 +5888,8 @@ ath10k_wmi_put_start_scan_tlvs(struct wmi_start_scan_tlvs *tlvs,
bssids->num_bssid = __cpu_to_le32(arg->n_bssids);
for (i = 0; i < arg->n_bssids; i++)
- memcpy(&bssids->bssid_list[i],
- arg->bssids[i].bssid,
- ETH_ALEN);
+ ether_addr_copy(bssids->bssid_list[i].addr,
+ arg->bssids[i].bssid);
ptr += sizeof(*bssids);
ptr += sizeof(struct wmi_mac_addr) * arg->n_bssids;
@@ -6613,6 +6700,26 @@ ath10k_wmi_10_2_op_gen_pdev_get_temperature(struct ath10k *ar)
return skb;
}
+static struct sk_buff *
+ath10k_wmi_10_2_op_gen_pdev_bss_chan_info(struct ath10k *ar,
+ enum wmi_bss_survey_req_type type)
+{
+ struct wmi_pdev_chan_info_req_cmd *cmd;
+ struct sk_buff *skb;
+
+ skb = ath10k_wmi_alloc_skb(ar, sizeof(*cmd));
+ if (!skb)
+ return ERR_PTR(-ENOMEM);
+
+ cmd = (struct wmi_pdev_chan_info_req_cmd *)skb->data;
+ cmd->type = __cpu_to_le32(type);
+
+ ath10k_dbg(ar, ATH10K_DBG_WMI,
+ "wmi pdev bss info request type %d\n", type);
+
+ return skb;
+}
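A hedged sketch of how a caller might use the request generator above together with the event handler earlier in this patch; the ath10k_wmi_pdev_bss_chan_info_request() wrapper name is an assumption (expected to follow the usual wmi-ops pattern and send the skb with pdev_bss_chan_info_request_cmdid).

/* Illustrative caller, not from the patch: request cumulative counters
 * and wait for the bss chan info event to fill ar->survey[].
 */
static void example_update_bss_chan_survey(struct ath10k *ar)
{
	int ret;

	reinit_completion(&ar->bss_survey_done);

	ret = ath10k_wmi_pdev_bss_chan_info_request(ar,
						    WMI_BSS_SURVEY_REQ_TYPE_READ);
	if (ret) {
		ath10k_warn(ar, "failed to request bss chan info: %d\n", ret);
		return;
	}

	wait_for_completion_timeout(&ar->bss_survey_done, 3 * HZ);
}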
+
/* This function assumes the beacon is already DMA mapped */
static struct sk_buff *
ath10k_wmi_op_gen_beacon_dma(struct ath10k *ar, u32 vdev_id, const void *bcn,
@@ -7484,6 +7591,28 @@ static int ath10k_wmi_10_4_op_get_vdev_subtype(struct ath10k *ar,
return -ENOTSUPP;
}
+static struct sk_buff *
+ath10k_wmi_10_4_ext_resource_config(struct ath10k *ar,
+ enum wmi_host_platform_type type,
+ u32 fw_feature_bitmap)
+{
+ struct wmi_ext_resource_config_10_4_cmd *cmd;
+ struct sk_buff *skb;
+
+ skb = ath10k_wmi_alloc_skb(ar, sizeof(*cmd));
+ if (!skb)
+ return ERR_PTR(-ENOMEM);
+
+ cmd = (struct wmi_ext_resource_config_10_4_cmd *)skb->data;
+ cmd->host_platform_config = __cpu_to_le32(type);
+ cmd->fw_feature_bitmap = __cpu_to_le32(fw_feature_bitmap);
+
+ ath10k_dbg(ar, ATH10K_DBG_WMI,
+ "wmi ext resource config host type %d firmware feature bitmap %08x\n",
+ type, fw_feature_bitmap);
+ return skb;
+}
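A minimal sketch of how the new generator might be driven; only the WMI_10_4_* feature flags, ath10k_peer_stats_enabled() and WMI_HOST_PLATFORM_HIGH_PERF come from this patch, while the ath10k_wmi_ext_resource_config() wrapper name is assumed to be the corresponding wmi-ops helper.

/* Illustrative caller, not from the patch: build a 10.4 feature bitmap
 * from the enum wmi_10_4_feature_mask flags and send it via the new
 * .ext_resource_config op.
 */
static int example_send_ext_resource_config(struct ath10k *ar)
{
	u32 fw_feature_bitmap = 0;

	if (ath10k_peer_stats_enabled(ar))
		fw_feature_bitmap |= WMI_10_4_PEER_STATS;

	if (test_bit(WMI_SERVICE_BSS_CHANNEL_INFO_64, ar->wmi.svc_map))
		fw_feature_bitmap |= WMI_10_4_BSS_CHANNEL_INFO_64;

	return ath10k_wmi_ext_resource_config(ar, WMI_HOST_PLATFORM_HIGH_PERF,
					      fw_feature_bitmap);
}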
+
static const struct wmi_ops wmi_ops = {
.rx = ath10k_wmi_op_rx,
.map_svc = wmi_main_svc_map,
@@ -7690,6 +7819,7 @@ static const struct wmi_ops wmi_10_2_4_ops = {
.gen_init = ath10k_wmi_10_2_op_gen_init,
.gen_peer_assoc = ath10k_wmi_10_2_op_gen_peer_assoc,
.gen_pdev_get_temperature = ath10k_wmi_10_2_op_gen_pdev_get_temperature,
+ .gen_pdev_bss_chan_info_req = ath10k_wmi_10_2_op_gen_pdev_bss_chan_info,
/* shared with 10.1 */
.map_svc = wmi_10x_svc_map,
@@ -7810,16 +7940,18 @@ static const struct wmi_ops wmi_10_4_ops = {
.gen_addba_set_resp = ath10k_wmi_op_gen_addba_set_resp,
.gen_delba_send = ath10k_wmi_op_gen_delba_send,
.fw_stats_fill = ath10k_wmi_10_4_op_fw_stats_fill,
+ .ext_resource_config = ath10k_wmi_10_4_ext_resource_config,
/* shared with 10.2 */
.gen_request_stats = ath10k_wmi_op_gen_request_stats,
.gen_pdev_get_temperature = ath10k_wmi_10_2_op_gen_pdev_get_temperature,
.get_vdev_subtype = ath10k_wmi_10_4_op_get_vdev_subtype,
+ .gen_pdev_bss_chan_info_req = ath10k_wmi_10_2_op_gen_pdev_bss_chan_info,
};
int ath10k_wmi_attach(struct ath10k *ar)
{
- switch (ar->wmi.op_version) {
+ switch (ar->running_fw->fw_file.wmi_op_version) {
case ATH10K_FW_WMI_OP_VERSION_10_4:
ar->wmi.ops = &wmi_10_4_ops;
ar->wmi.cmd = &wmi_10_4_cmd_map;
@@ -7861,7 +7993,7 @@ int ath10k_wmi_attach(struct ath10k *ar)
case ATH10K_FW_WMI_OP_VERSION_UNSET:
case ATH10K_FW_WMI_OP_VERSION_MAX:
ath10k_err(ar, "unsupported WMI op version: %d\n",
- ar->wmi.op_version);
+ ar->running_fw->fw_file.wmi_op_version);
return -EINVAL;
}
diff --git a/drivers/net/wireless/ath/ath10k/wmi.h b/drivers/net/wireless/ath/ath10k/wmi.h
index 4d3cbc44fcd2..9fdf47ea27d0 100644
--- a/drivers/net/wireless/ath/ath10k/wmi.h
+++ b/drivers/net/wireless/ath/ath10k/wmi.h
@@ -180,6 +180,9 @@ enum wmi_service {
WMI_SERVICE_MESH_NON_11S,
WMI_SERVICE_PEER_STATS,
WMI_SERVICE_RESTRT_CHNL_SUPPORT,
+ WMI_SERVICE_TX_MODE_PUSH_ONLY,
+ WMI_SERVICE_TX_MODE_PUSH_PULL,
+ WMI_SERVICE_TX_MODE_DYNAMIC,
/* keep last */
WMI_SERVICE_MAX,
@@ -302,6 +305,9 @@ enum wmi_10_4_service {
WMI_10_4_SERVICE_RESTRT_CHNL_SUPPORT,
WMI_10_4_SERVICE_PEER_STATS,
WMI_10_4_SERVICE_MESH_11S,
+ WMI_10_4_SERVICE_TX_MODE_PUSH_ONLY,
+ WMI_10_4_SERVICE_TX_MODE_PUSH_PULL,
+ WMI_10_4_SERVICE_TX_MODE_DYNAMIC,
};
static inline char *wmi_service_name(int service_id)
@@ -396,6 +402,9 @@ static inline char *wmi_service_name(int service_id)
SVCSTR(WMI_SERVICE_MESH_NON_11S);
SVCSTR(WMI_SERVICE_PEER_STATS);
SVCSTR(WMI_SERVICE_RESTRT_CHNL_SUPPORT);
+ SVCSTR(WMI_SERVICE_TX_MODE_PUSH_ONLY);
+ SVCSTR(WMI_SERVICE_TX_MODE_PUSH_PULL);
+ SVCSTR(WMI_SERVICE_TX_MODE_DYNAMIC);
default:
return NULL;
}
@@ -405,8 +414,8 @@ static inline char *wmi_service_name(int service_id)
#define WMI_SERVICE_IS_ENABLED(wmi_svc_bmap, svc_id, len) \
((svc_id) < (len) && \
- __le32_to_cpu((wmi_svc_bmap)[(svc_id)/(sizeof(u32))]) & \
- BIT((svc_id)%(sizeof(u32))))
+ __le32_to_cpu((wmi_svc_bmap)[(svc_id) / (sizeof(u32))]) & \
+ BIT((svc_id) % (sizeof(u32))))
#define SVCMAP(x, y, len) \
do { \
@@ -643,6 +652,12 @@ static inline void wmi_10_4_svc_map(const __le32 *in, unsigned long *out,
WMI_SERVICE_PEER_STATS, len);
SVCMAP(WMI_10_4_SERVICE_MESH_11S,
WMI_SERVICE_MESH_11S, len);
+ SVCMAP(WMI_10_4_SERVICE_TX_MODE_PUSH_ONLY,
+ WMI_SERVICE_TX_MODE_PUSH_ONLY, len);
+ SVCMAP(WMI_10_4_SERVICE_TX_MODE_PUSH_PULL,
+ WMI_SERVICE_TX_MODE_PUSH_PULL, len);
+ SVCMAP(WMI_10_4_SERVICE_TX_MODE_DYNAMIC,
+ WMI_SERVICE_TX_MODE_DYNAMIC, len);
}
#undef SVCMAP
@@ -816,6 +831,7 @@ struct wmi_cmd_map {
u32 set_cca_params_cmdid;
u32 pdev_bss_chan_info_request_cmdid;
u32 pdev_enable_adaptive_cca_cmdid;
+ u32 ext_resource_cfg_cmdid;
};
/*
@@ -1308,7 +1324,7 @@ enum wmi_10x_event_id {
WMI_10X_PDEV_TPC_CONFIG_EVENTID,
WMI_10X_GPIO_INPUT_EVENTID,
- WMI_10X_PDEV_UTF_EVENTID = WMI_10X_END_EVENTID-1,
+ WMI_10X_PDEV_UTF_EVENTID = WMI_10X_END_EVENTID - 1,
};
enum wmi_10_2_cmd_id {
@@ -1428,6 +1444,7 @@ enum wmi_10_2_cmd_id {
WMI_10_2_MU_CAL_START_CMDID,
WMI_10_2_SET_LTEU_CONFIG_CMDID,
WMI_10_2_SET_CCA_PARAMS,
+ WMI_10_2_PDEV_BSS_CHAN_INFO_REQUEST_CMDID,
WMI_10_2_PDEV_UTF_CMDID = WMI_10_2_END_CMDID - 1,
};
@@ -1471,6 +1488,8 @@ enum wmi_10_2_event_id {
WMI_10_2_WDS_PEER_EVENTID,
WMI_10_2_PEER_STA_PS_STATECHG_EVENTID,
WMI_10_2_PDEV_TEMPERATURE_EVENTID,
+ WMI_10_2_MU_REPORT_EVENTID,
+ WMI_10_2_PDEV_BSS_CHAN_INFO_EVENTID,
WMI_10_2_PDEV_UTF_EVENTID = WMI_10_2_END_EVENTID - 1,
};
@@ -1779,6 +1798,7 @@ struct wmi_channel {
__le32 reginfo1;
struct {
u8 antenna_max;
+ u8 max_tx_power;
} __packed;
} __packed;
} __packed;
@@ -2041,8 +2061,8 @@ struct wmi_10x_service_ready_event {
struct wlan_host_mem_req mem_reqs[0];
} __packed;
-#define WMI_SERVICE_READY_TIMEOUT_HZ (5*HZ)
-#define WMI_UNIFIED_READY_TIMEOUT_HZ (5*HZ)
+#define WMI_SERVICE_READY_TIMEOUT_HZ (5 * HZ)
+#define WMI_UNIFIED_READY_TIMEOUT_HZ (5 * HZ)
struct wmi_ready_event {
__le32 sw_version;
@@ -2434,6 +2454,7 @@ enum wmi_10_2_feature_mask {
WMI_10_2_RX_BATCH_MODE = BIT(0),
WMI_10_2_ATF_CONFIG = BIT(1),
WMI_10_2_COEX_GPIO = BIT(3),
+ WMI_10_2_BSS_CHAN_INFO = BIT(6),
WMI_10_2_PEER_STATS = BIT(7),
};
@@ -2660,13 +2681,43 @@ struct wmi_resource_config_10_4 {
*/
__le32 iphdr_pad_config;
- /* qwrap configuration
+ /* qwrap configuration (bits 15-0)
* 1 - This is qwrap configuration
* 0 - This is not qwrap
+ *
+ * Bits 31-16 are alloc_frag_desc_for_data_pkt (1 enables, 0 disables)
+ * In order to get ack-RSSI reporting and to specify the tx-rate for
+ * individual frames, this option must be enabled. This uses an extra
+ * 4 bytes per tx-msdu descriptor, so don't enable it unless you need it.
*/
__le32 qwrap_config;
} __packed;
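The comment above splits qwrap_config into two 16-bit halves; here is a minimal packing sketch with a hypothetical helper name (the field layout is taken from the comment, nothing else is implied).

/* Illustrative only: qwrap enable in bits 15-0,
 * alloc_frag_desc_for_data_pkt in bits 31-16.
 */
static __le32 example_pack_qwrap_config(bool is_qwrap,
					bool alloc_frag_desc_for_data_pkt)
{
	u32 val = 0;

	if (is_qwrap)
		val |= 0x1;		/* bits 15-0 */
	if (alloc_frag_desc_for_data_pkt)
		val |= 0x1 << 16;	/* bits 31-16 */

	return __cpu_to_le32(val);
}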
+/**
+ * enum wmi_10_4_feature_mask - WMI 10.4 feature enable/disable flags
+ * @WMI_10_4_LTEU_SUPPORT: LTEU config
+ * @WMI_10_4_COEX_GPIO_SUPPORT: COEX GPIO config
+ * @WMI_10_4_AUX_RADIO_SPECTRAL_INTF: AUX Radio Enhancement for spectral scan
+ * @WMI_10_4_AUX_RADIO_CHAN_LOAD_INTF: AUX Radio Enhancement for chan load scan
+ * @WMI_10_4_BSS_CHANNEL_INFO_64: BSS channel info stats
+ * @WMI_10_4_PEER_STATS: Per station stats
+ */
+enum wmi_10_4_feature_mask {
+ WMI_10_4_LTEU_SUPPORT = BIT(0),
+ WMI_10_4_COEX_GPIO_SUPPORT = BIT(1),
+ WMI_10_4_AUX_RADIO_SPECTRAL_INTF = BIT(2),
+ WMI_10_4_AUX_RADIO_CHAN_LOAD_INTF = BIT(3),
+ WMI_10_4_BSS_CHANNEL_INFO_64 = BIT(4),
+ WMI_10_4_PEER_STATS = BIT(5),
+};
+
+struct wmi_ext_resource_config_10_4_cmd {
+ /* contains enum wmi_host_platform_type */
+ __le32 host_platform_config;
+ /* see enum wmi_10_4_feature_mask */
+ __le32 fw_feature_bitmap;
+};
+
/* structure describing host memory chunk. */
struct host_memory_chunk {
/* id of the request that is passed up in service ready */
@@ -3037,11 +3088,17 @@ struct wmi_10_4_mgmt_rx_event {
u8 buf[0];
} __packed;
+struct wmi_mgmt_rx_ext_info {
+ __le64 rx_mac_timestamp;
+} __packed __aligned(4);
+
#define WMI_RX_STATUS_OK 0x00
#define WMI_RX_STATUS_ERR_CRC 0x01
#define WMI_RX_STATUS_ERR_DECRYPT 0x08
#define WMI_RX_STATUS_ERR_MIC 0x10
#define WMI_RX_STATUS_ERR_KEY_CACHE_MISS 0x20
+/* Extension data at the end of mgmt frame */
+#define WMI_RX_STATUS_EXT_INFO 0x40
#define PHY_ERROR_GEN_SPECTRAL_SCAN 0x26
#define PHY_ERROR_GEN_FALSE_RADAR_EXT 0x24
@@ -4072,6 +4129,13 @@ enum wmi_stats_id {
WMI_STAT_VDEV_RATE = BIT(5),
};
+enum wmi_10_4_stats_id {
+ WMI_10_4_STAT_PEER = BIT(0),
+ WMI_10_4_STAT_AP = BIT(1),
+ WMI_10_4_STAT_INST = BIT(2),
+ WMI_10_4_STAT_PEER_EXTD = BIT(3),
+};
+
struct wlan_inst_rssi_args {
__le16 cfg_retry_count;
__le16 retry_count;
@@ -4271,6 +4335,15 @@ struct wmi_10_4_peer_stats {
__le32 peer_rssi_changed;
} __packed;
+struct wmi_10_4_peer_extd_stats {
+ struct wmi_10_4_peer_stats common;
+ struct wmi_mac_addr peer_macaddr;
+ __le32 inactive_time;
+ __le32 peer_chain_rssi;
+ __le32 rx_duration;
+ __le32 reserved[10];
+} __packed;
+
struct wmi_10_2_pdev_ext_stats {
__le32 rx_rssi_comb;
__le32 rx_rssi[4];
@@ -4336,14 +4409,14 @@ enum wmi_vdev_subtype_10_4 {
/*
* Indicates that AP VDEV uses hidden ssid. only valid for
* AP/GO */
-#define WMI_VDEV_START_HIDDEN_SSID (1<<0)
+#define WMI_VDEV_START_HIDDEN_SSID (1 << 0)
/*
* Indicates if robust management frame/management frame
* protection is enabled. For GO/AP vdevs, it indicates that
* it may support station/client associations with RMF enabled.
* For STA/client vdevs, it indicates that sta will
* associate with AP with RMF enabled. */
-#define WMI_VDEV_START_PMF_ENABLED (1<<1)
+#define WMI_VDEV_START_PMF_ENABLED (1 << 1)
struct wmi_p2p_noa_descriptor {
__le32 type_count; /* 255: continuous schedule, 0: reserved */
@@ -4582,6 +4655,7 @@ struct wmi_vdev_param_map {
u32 meru_vc;
u32 rx_decap_type;
u32 bw_nss_ratemask;
+ u32 set_tsf;
};
#define WMI_VDEV_PARAM_UNSUPPORTED 0
@@ -4838,6 +4912,7 @@ enum wmi_10x_vdev_param {
WMI_10X_VDEV_PARAM_RTS_FIXED_RATE,
WMI_10X_VDEV_PARAM_VHT_SGIMASK,
WMI_10X_VDEV_PARAM_VHT80_RATEMASK,
+ WMI_10X_VDEV_PARAM_TSF_INCREMENT,
};
enum wmi_10_4_vdev_param {
@@ -4907,6 +4982,12 @@ enum wmi_10_4_vdev_param {
WMI_10_4_VDEV_PARAM_MERU_VC,
WMI_10_4_VDEV_PARAM_RX_DECAP_TYPE,
WMI_10_4_VDEV_PARAM_BW_NSS_RATEMASK,
+ WMI_10_4_VDEV_PARAM_SENSOR_AP,
+ WMI_10_4_VDEV_PARAM_BEACON_RATE,
+ WMI_10_4_VDEV_PARAM_DTIM_ENABLE_CTS,
+ WMI_10_4_VDEV_PARAM_STA_KICKOUT,
+ WMI_10_4_VDEV_PARAM_CAPABILITIES,
+ WMI_10_4_VDEV_PARAM_TSF_INCREMENT,
};
#define WMI_VDEV_PARAM_TXBF_SU_TX_BFEE BIT(0)
@@ -5281,7 +5362,7 @@ enum wmi_sta_ps_param_pspoll_count {
#define WMI_UAPSD_AC_TYPE_TRIG 1
#define WMI_UAPSD_AC_BIT_MASK(ac, type) \
- ((type == WMI_UAPSD_AC_TYPE_DELI) ? (1<<(ac<<1)) : (1<<((ac<<1)+1)))
+ ((type == WMI_UAPSD_AC_TYPE_DELI) ? (1 << (ac << 1)) : (1 << ((ac << 1) + 1)))
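To make the reformatted macro above easier to read: each access category owns an adjacent bit pair, delivery-enable on the even bit and trigger-enable on the odd one. A hedged compile-time check, assuming WMI_UAPSD_AC_TYPE_DELI is 0 (only the TRIG value is visible in this hunk) and BUILD_BUG_ON()/BIT() from the usual kernel headers:

/* Illustrative self-check, not from the patch. */
static void __maybe_unused example_check_uapsd_bit_mask(void)
{
	/* AC 0 (BE): delivery -> bit 0, trigger -> bit 1 */
	BUILD_BUG_ON(WMI_UAPSD_AC_BIT_MASK(0, WMI_UAPSD_AC_TYPE_DELI) != BIT(0));
	BUILD_BUG_ON(WMI_UAPSD_AC_BIT_MASK(0, WMI_UAPSD_AC_TYPE_TRIG) != BIT(1));
	/* AC 2 (VI): delivery -> bit 4, trigger -> bit 5 */
	BUILD_BUG_ON(WMI_UAPSD_AC_BIT_MASK(2, WMI_UAPSD_AC_TYPE_DELI) != BIT(4));
	BUILD_BUG_ON(WMI_UAPSD_AC_BIT_MASK(2, WMI_UAPSD_AC_TYPE_TRIG) != BIT(5));
}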
enum wmi_sta_ps_param_uapsd {
WMI_STA_PS_UAPSD_AC0_DELIVERY_EN = (1 << 0),
@@ -5696,7 +5777,7 @@ struct wmi_rate_set {
* the rates are filled from least significant byte to most
* significant byte.
*/
- __le32 rates[(MAX_SUPPORTED_RATES/4)+1];
+ __le32 rates[(MAX_SUPPORTED_RATES / 4) + 1];
} __packed;
struct wmi_rate_set_arg {
@@ -6116,6 +6197,7 @@ struct wmi_mgmt_rx_ev_arg {
__le32 phy_mode;
__le32 buf_len;
__le32 status; /* %WMI_RX_STATUS_ */
+ struct wmi_mgmt_rx_ext_info ext_info;
};
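The new ext_info member and the WMI_RX_STATUS_EXT_INFO flag only make sense together, so here is a hedged sketch of how a management-rx pull helper could consume the trailer; the buffer handling is illustrative and not taken from this patch.

/* Illustrative only: peel the optional wmi_mgmt_rx_ext_info trailer off
 * the end of a management rx buffer when the status flag announces it.
 */
static void example_pull_mgmt_rx_ext_info(struct sk_buff *skb,
					  struct wmi_mgmt_rx_ev_arg *arg)
{
	if (!(__le32_to_cpu(arg->status) & WMI_RX_STATUS_EXT_INFO))
		return;

	if (skb->len < sizeof(struct wmi_mgmt_rx_ext_info))
		return;

	memcpy(&arg->ext_info,
	       skb->data + skb->len - sizeof(struct wmi_mgmt_rx_ext_info),
	       sizeof(arg->ext_info));
}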
struct wmi_ch_info_ev_arg {
@@ -6203,6 +6285,17 @@ struct wmi_pdev_temperature_event {
__le32 temperature;
} __packed;
+struct wmi_pdev_bss_chan_info_event {
+ __le32 freq;
+ __le32 noise_floor;
+ __le64 cycle_busy;
+ __le64 cycle_total;
+ __le64 cycle_tx;
+ __le64 cycle_rx;
+ __le64 cycle_rx_bss;
+ __le32 reserved;
+} __packed;
+
/* WOW structures */
enum wmi_wow_wakeup_event {
WOW_BMISS_EVENT = 0,
@@ -6401,6 +6494,21 @@ struct wmi_pdev_set_adaptive_cca_params {
__le32 cca_detect_margin;
} __packed;
+enum wmi_host_platform_type {
+ WMI_HOST_PLATFORM_HIGH_PERF,
+ WMI_HOST_PLATFORM_LOW_PERF,
+};
+
+enum wmi_bss_survey_req_type {
+ WMI_BSS_SURVEY_REQ_TYPE_READ = 1,
+ WMI_BSS_SURVEY_REQ_TYPE_READ_CLEAR,
+};
+
+struct wmi_pdev_chan_info_req_cmd {
+ __le32 type;
+ __le32 reserved;
+} __packed;
+
struct ath10k;
struct ath10k_vif;
struct ath10k_fw_stats_pdev;
diff --git a/drivers/net/wireless/ath/ath10k/wow.c b/drivers/net/wireless/ath/ath10k/wow.c
index 8e02b381990f..77100d42f401 100644
--- a/drivers/net/wireless/ath/ath10k/wow.c
+++ b/drivers/net/wireless/ath/ath10k/wow.c
@@ -233,7 +233,7 @@ int ath10k_wow_op_suspend(struct ieee80211_hw *hw,
mutex_lock(&ar->conf_mutex);
if (WARN_ON(!test_bit(ATH10K_FW_FEATURE_WOWLAN_SUPPORT,
- ar->fw_features))) {
+ ar->running_fw->fw_file.fw_features))) {
ret = 1;
goto exit;
}
@@ -285,7 +285,7 @@ int ath10k_wow_op_resume(struct ieee80211_hw *hw)
mutex_lock(&ar->conf_mutex);
if (WARN_ON(!test_bit(ATH10K_FW_FEATURE_WOWLAN_SUPPORT,
- ar->fw_features))) {
+ ar->running_fw->fw_file.fw_features))) {
ret = 1;
goto exit;
}
@@ -325,7 +325,8 @@ exit:
int ath10k_wow_init(struct ath10k *ar)
{
- if (!test_bit(ATH10K_FW_FEATURE_WOWLAN_SUPPORT, ar->fw_features))
+ if (!test_bit(ATH10K_FW_FEATURE_WOWLAN_SUPPORT,
+ ar->running_fw->fw_file.fw_features))
return 0;
if (WARN_ON(!test_bit(WMI_SERVICE_WOW, ar->wmi.svc_map)))
diff --git a/drivers/net/wireless/ath/ath5k/ani.c b/drivers/net/wireless/ath/ath5k/ani.c
index 38be2702c0e2..0624333f5430 100644
--- a/drivers/net/wireless/ath/ath5k/ani.c
+++ b/drivers/net/wireless/ath/ath5k/ani.c
@@ -279,7 +279,7 @@ ath5k_ani_raise_immunity(struct ath5k_hw *ah, struct ath5k_ani_state *as,
if (as->firstep_level < ATH5K_ANI_MAX_FIRSTEP_LVL)
ath5k_ani_set_firstep_level(ah, as->firstep_level + 1);
return;
- } else if (ah->ah_current_channel->band == IEEE80211_BAND_2GHZ) {
+ } else if (ah->ah_current_channel->band == NL80211_BAND_2GHZ) {
/* beacon RSSI is low. In B/G mode turn off OFDM weak signal
* detect and zero firstep level to maximize CCK sensitivity */
ATH5K_DBG_UNLIMIT(ah, ATH5K_DEBUG_ANI,
diff --git a/drivers/net/wireless/ath/ath5k/ath5k.h b/drivers/net/wireless/ath/ath5k/ath5k.h
index ba12f7f4061d..67fedb61fcc0 100644
--- a/drivers/net/wireless/ath/ath5k/ath5k.h
+++ b/drivers/net/wireless/ath/ath5k/ath5k.h
@@ -1265,10 +1265,10 @@ struct ath5k_hw {
void __iomem *iobase; /* address of the device */
struct mutex lock; /* dev-level lock */
struct ieee80211_hw *hw; /* IEEE 802.11 common */
- struct ieee80211_supported_band sbands[IEEE80211_NUM_BANDS];
+ struct ieee80211_supported_band sbands[NUM_NL80211_BANDS];
struct ieee80211_channel channels[ATH_CHAN_MAX];
- struct ieee80211_rate rates[IEEE80211_NUM_BANDS][AR5K_MAX_RATES];
- s8 rate_idx[IEEE80211_NUM_BANDS][AR5K_MAX_RATES];
+ struct ieee80211_rate rates[NUM_NL80211_BANDS][AR5K_MAX_RATES];
+ s8 rate_idx[NUM_NL80211_BANDS][AR5K_MAX_RATES];
enum nl80211_iftype opmode;
#ifdef CONFIG_ATH5K_DEBUG
@@ -1532,7 +1532,7 @@ int ath5k_eeprom_mode_from_channel(struct ath5k_hw *ah,
/* Protocol Control Unit Functions */
/* Helpers */
-int ath5k_hw_get_frame_duration(struct ath5k_hw *ah, enum ieee80211_band band,
+int ath5k_hw_get_frame_duration(struct ath5k_hw *ah, enum nl80211_band band,
int len, struct ieee80211_rate *rate, bool shortpre);
unsigned int ath5k_hw_get_default_slottime(struct ath5k_hw *ah);
unsigned int ath5k_hw_get_default_sifs(struct ath5k_hw *ah);
@@ -1611,7 +1611,7 @@ int ath5k_hw_write_initvals(struct ath5k_hw *ah, u8 mode, bool change_channel);
/* PHY functions */
/* Misc PHY functions */
-u16 ath5k_hw_radio_revision(struct ath5k_hw *ah, enum ieee80211_band band);
+u16 ath5k_hw_radio_revision(struct ath5k_hw *ah, enum nl80211_band band);
int ath5k_hw_phy_disable(struct ath5k_hw *ah);
/* Gain_F optimization */
enum ath5k_rfgain ath5k_hw_gainf_calibrate(struct ath5k_hw *ah);
diff --git a/drivers/net/wireless/ath/ath5k/attach.c b/drivers/net/wireless/ath/ath5k/attach.c
index 66b6366158b9..233054bd6b52 100644
--- a/drivers/net/wireless/ath/ath5k/attach.c
+++ b/drivers/net/wireless/ath/ath5k/attach.c
@@ -152,7 +152,7 @@ int ath5k_hw_init(struct ath5k_hw *ah)
ah->ah_phy_revision = ath5k_hw_reg_read(ah, AR5K_PHY_CHIP_ID) &
0xffffffff;
ah->ah_radio_5ghz_revision = ath5k_hw_radio_revision(ah,
- IEEE80211_BAND_5GHZ);
+ NL80211_BAND_5GHZ);
/* Try to identify radio chip based on its srev */
switch (ah->ah_radio_5ghz_revision & 0xf0) {
@@ -160,14 +160,14 @@ int ath5k_hw_init(struct ath5k_hw *ah)
ah->ah_radio = AR5K_RF5111;
ah->ah_single_chip = false;
ah->ah_radio_2ghz_revision = ath5k_hw_radio_revision(ah,
- IEEE80211_BAND_2GHZ);
+ NL80211_BAND_2GHZ);
break;
case AR5K_SREV_RAD_5112:
case AR5K_SREV_RAD_2112:
ah->ah_radio = AR5K_RF5112;
ah->ah_single_chip = false;
ah->ah_radio_2ghz_revision = ath5k_hw_radio_revision(ah,
- IEEE80211_BAND_2GHZ);
+ NL80211_BAND_2GHZ);
break;
case AR5K_SREV_RAD_2413:
ah->ah_radio = AR5K_RF2413;
@@ -204,7 +204,7 @@ int ath5k_hw_init(struct ath5k_hw *ah)
ah->ah_radio = AR5K_RF5111;
ah->ah_single_chip = false;
ah->ah_radio_2ghz_revision = ath5k_hw_radio_revision(ah,
- IEEE80211_BAND_2GHZ);
+ NL80211_BAND_2GHZ);
} else if (ah->ah_mac_version == (AR5K_SREV_AR2425 >> 4) ||
ah->ah_mac_version == (AR5K_SREV_AR2417 >> 4) ||
ah->ah_phy_revision == AR5K_SREV_PHY_2425) {
diff --git a/drivers/net/wireless/ath/ath5k/base.c b/drivers/net/wireless/ath/ath5k/base.c
index 3d946d8b2db2..d98fd421c7ec 100644
--- a/drivers/net/wireless/ath/ath5k/base.c
+++ b/drivers/net/wireless/ath/ath5k/base.c
@@ -268,15 +268,15 @@ static void ath5k_reg_notifier(struct wiphy *wiphy,
* Returns true for the channel numbers used.
*/
#ifdef CONFIG_ATH5K_TEST_CHANNELS
-static bool ath5k_is_standard_channel(short chan, enum ieee80211_band band)
+static bool ath5k_is_standard_channel(short chan, enum nl80211_band band)
{
return true;
}
#else
-static bool ath5k_is_standard_channel(short chan, enum ieee80211_band band)
+static bool ath5k_is_standard_channel(short chan, enum nl80211_band band)
{
- if (band == IEEE80211_BAND_2GHZ && chan <= 14)
+ if (band == NL80211_BAND_2GHZ && chan <= 14)
return true;
return /* UNII 1,2 */
@@ -297,18 +297,18 @@ ath5k_setup_channels(struct ath5k_hw *ah, struct ieee80211_channel *channels,
unsigned int mode, unsigned int max)
{
unsigned int count, size, freq, ch;
- enum ieee80211_band band;
+ enum nl80211_band band;
switch (mode) {
case AR5K_MODE_11A:
/* 1..220, but 2GHz frequencies are filtered by check_channel */
size = 220;
- band = IEEE80211_BAND_5GHZ;
+ band = NL80211_BAND_5GHZ;
break;
case AR5K_MODE_11B:
case AR5K_MODE_11G:
size = 26;
- band = IEEE80211_BAND_2GHZ;
+ band = NL80211_BAND_2GHZ;
break;
default:
ATH5K_WARN(ah, "bad mode, not copying channels\n");
@@ -363,13 +363,13 @@ ath5k_setup_bands(struct ieee80211_hw *hw)
int max_c, count_c = 0;
int i;
- BUILD_BUG_ON(ARRAY_SIZE(ah->sbands) < IEEE80211_NUM_BANDS);
+ BUILD_BUG_ON(ARRAY_SIZE(ah->sbands) < NUM_NL80211_BANDS);
max_c = ARRAY_SIZE(ah->channels);
/* 2GHz band */
- sband = &ah->sbands[IEEE80211_BAND_2GHZ];
- sband->band = IEEE80211_BAND_2GHZ;
- sband->bitrates = &ah->rates[IEEE80211_BAND_2GHZ][0];
+ sband = &ah->sbands[NL80211_BAND_2GHZ];
+ sband->band = NL80211_BAND_2GHZ;
+ sband->bitrates = &ah->rates[NL80211_BAND_2GHZ][0];
if (test_bit(AR5K_MODE_11G, ah->ah_capabilities.cap_mode)) {
/* G mode */
@@ -381,7 +381,7 @@ ath5k_setup_bands(struct ieee80211_hw *hw)
sband->n_channels = ath5k_setup_channels(ah, sband->channels,
AR5K_MODE_11G, max_c);
- hw->wiphy->bands[IEEE80211_BAND_2GHZ] = sband;
+ hw->wiphy->bands[NL80211_BAND_2GHZ] = sband;
count_c = sband->n_channels;
max_c -= count_c;
} else if (test_bit(AR5K_MODE_11B, ah->ah_capabilities.cap_mode)) {
@@ -407,7 +407,7 @@ ath5k_setup_bands(struct ieee80211_hw *hw)
sband->n_channels = ath5k_setup_channels(ah, sband->channels,
AR5K_MODE_11B, max_c);
- hw->wiphy->bands[IEEE80211_BAND_2GHZ] = sband;
+ hw->wiphy->bands[NL80211_BAND_2GHZ] = sband;
count_c = sband->n_channels;
max_c -= count_c;
}
@@ -415,9 +415,9 @@ ath5k_setup_bands(struct ieee80211_hw *hw)
/* 5GHz band, A mode */
if (test_bit(AR5K_MODE_11A, ah->ah_capabilities.cap_mode)) {
- sband = &ah->sbands[IEEE80211_BAND_5GHZ];
- sband->band = IEEE80211_BAND_5GHZ;
- sband->bitrates = &ah->rates[IEEE80211_BAND_5GHZ][0];
+ sband = &ah->sbands[NL80211_BAND_5GHZ];
+ sband->band = NL80211_BAND_5GHZ;
+ sband->bitrates = &ah->rates[NL80211_BAND_5GHZ][0];
memcpy(sband->bitrates, &ath5k_rates[4],
sizeof(struct ieee80211_rate) * 8);
@@ -427,7 +427,7 @@ ath5k_setup_bands(struct ieee80211_hw *hw)
sband->n_channels = ath5k_setup_channels(ah, sband->channels,
AR5K_MODE_11A, max_c);
- hw->wiphy->bands[IEEE80211_BAND_5GHZ] = sband;
+ hw->wiphy->bands[NL80211_BAND_5GHZ] = sband;
}
ath5k_setup_rate_idx(ah, sband);